Lines matching full:shadow
3 * This file contains KASAN runtime code that manages shadow memory for
80 * Perform shadow offset calculation based on untagged address, as in kasan_poison()
109 u8 *shadow = (u8 *)kasan_mem_to_shadow(addr + size); in kasan_poison_last_granule() local
110 *shadow = size & KASAN_GRANULE_MASK; in kasan_poison_last_granule()
120 * Perform shadow offset calculation based on untagged address, as in kasan_unpoison()
202 * If shadow is mapped already then it must have been mapped in kasan_mem_notifier()
227 * In the latter case we can use vfree() to free shadow. in kasan_mem_notifier()
231 * Currently it's not possible to free shadow mapped in kasan_mem_notifier()
301 * User Mode Linux maps enough shadow memory for all of virtual memory in kasan_populate_vmalloc()
337 * STORE shadow(a), unpoison_val in kasan_populate_vmalloc()
339 * STORE shadow(a+99), unpoison_val x = LOAD p in kasan_populate_vmalloc()
341 * STORE p, a LOAD shadow(x+99) in kasan_populate_vmalloc()
343 * If there is no barrier between the end of unpoisoning the shadow in kasan_populate_vmalloc()
346 * poison in the shadow. in kasan_populate_vmalloc()
352 * get_vm_area() and friends, the caller gets shadow allocated but in kasan_populate_vmalloc()
391 * That might not map onto the shadow in a way that is page-aligned:
401 * |??AAAAAA|AAAAAAAA|AA??????| < shadow
405 * shadow of the region aligns with shadow page boundaries. In the
406 * example, this gives us the shadow page (2). This is the shadow entirely
410 * partially covered shadow pages - (1) and (3) in the example. For this,
423 * |FFAAAAAA|AAAAAAAA|AAF?????| < shadow
427 * the free region down so that the shadow is page aligned. So we can free
450 * means that so long as we are careful with alignment and only free shadow
522 * Poison the shadow for a vmalloc region. Called as part of the