Lines Matching refs:shadow

141 The state of each 8 aligned bytes of memory is encoded in one shadow byte.
143 We use the following encoding for each shadow byte: 0 means that all 8 bytes
150 In the report above the arrows point to the shadow byte 03, which means that
164 of kmemcheck: use shadow memory to record whether each byte of memory is safe
165 to access, and use compile-time instrumentation to insert checks of shadow
168 Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB
170 translate a memory address to its corresponding shadow address.
172 Here is the function which translates an address to its corresponding shadow
186 access is valid or not by checking the corresponding shadow memory.
189 function calls GCC directly inserts the code to check the shadow memory.
201 uses shadow memory to store memory tags associated with each 16-byte memory
202 cell (therefore it dedicates 1/16th of the kernel memory for shadow memory).
212 emits callbacks to check memory accesses; and inline, that performs the shadow
220 manual shadow memory manipulation.
227 that all addresses accessed by instrumented code have a valid shadow
231 real memory to support a real shadow region for every address that
237 By default, architectures only map real memory over the shadow region
240 page is mapped over the shadow area. This read-only shadow page
245 allocator, KASAN can temporarily map real shadow memory to cover
251 the kernel will fault when trying to set up the shadow data for stack
261 allocating real shadow memory to back the mappings.
264 page of shadow space. Allocating a full shadow page per mapping would
266 use different shadow pages, mappings would have to be aligned to
271 of the shadow region. This page can be shared by other vmalloc
274 We hook into the vmap infrastructure to lazily clean up unused shadow
278 that the part of the shadow region that covers the vmalloc space will
279 not be covered by the early shadow page, but will be left