Lines Matching full:tables
30 * partitions to page tables when the partitions are removed.
65 * sure all memory mappings are the same across all page tables when invoking
186 * Macros for reserving space for page tables
188 * We need to reserve a block of memory equal in size to the page tables
217 * covered by all the page tables needed for the address space
222 /* Number of page tables needed to cover address space. Depends on the specific
238 /* 32-bit page tables just have one top-level page directory */
243 /* Same semantics as above, but for the page directory pointer tables needed
251 /* All pages needed for page tables, using computed values plus one more for
256 /* Number of pages we need to reserve in the stack for per-thread page tables */
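The reservation macros referenced above reduce to simple arithmetic: the size of the address space divided by the amount each page table covers, plus one extra page for the top-level directory. A minimal sketch of that calculation, assuming 32-bit non-PAE paging; the macro names below are invented for the example and are not the real Zephyr macros:

/* Illustrative only: how many 4 KiB pages to reserve for 32-bit non-PAE
 * page tables covering a given amount of virtual address space.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE       4096ULL
#define ENTRIES_PER_PT  1024ULL                       /* 4-byte PTEs per table */
#define PT_COVERAGE     (PAGE_SIZE * ENTRIES_PER_PT)  /* 4 MiB per page table */

/* Second-level page tables needed to cover the address space */
#define NUM_PAGE_TABLES(size) (((size) + PT_COVERAGE - 1) / PT_COVERAGE)

/* 32-bit paging has a single top-level page directory, hence the extra page */
#define NUM_TABLE_PAGES(size) (NUM_PAGE_TABLES(size) + 1)

int main(void)
{
    uint64_t addr_space = 256ULL * 1024 * 1024; /* e.g. 256 MiB to cover */

    printf("page tables: %llu, reserved pages: %llu\n",
           (unsigned long long)NUM_PAGE_TABLES(addr_space),
           (unsigned long long)NUM_TABLE_PAGES(addr_space));
    return 0;
}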
272 /* "dummy" pagetables for the first-phase build. The real page tables
415 /* We're always on the kernel's set of page tables in this context in z_x86_tlb_ipi()
428 * propagating which page tables were modified (in case they are in z_x86_tlb_ipi()
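Because the IPI does not carry any information about which page tables were modified, the safe response is a full flush of non-global TLB entries on the receiving CPU. A rough sketch of that idea, not the actual z_x86_tlb_ipi(), assuming GCC-style inline assembly on x86:

#include <stdint.h>

/* Reloading CR3 invalidates all non-global TLB entries on this CPU. */
static inline void tlb_flush_all(void)
{
    uintptr_t cr3;

    __asm__ volatile("mov %%cr3, %0" : "=r"(cr3));
    __asm__ volatile("mov %0, %%cr3" : : "r"(cr3) : "memory");
}

/* Hypothetical IPI handler: no information about what changed, so flush
 * everything for the page tables this CPU is currently using.
 */
void tlb_shootdown_ipi(const void *unused)
{
    (void)unused;
    tlb_flush_all();
}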
679 /* Dump all linked child tables */ in dump_ptables()
816 * page table isolation. If these are User mode page tables, the user bit
891 /* Indicates that the target page tables will be used by user mode threads.
893 * page tables need nearly all pages that don't have the US bit to also
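A rough illustration of the KPTI rule described above, not the actual Zephyr code: when building the user-mode view of a table, any present entry without the US bit is also made non-present, except for the handful of shared pages (the trampoline and friends) that must stay mapped; is_shared below is a hypothetical hook for those:

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define MMU_P  (1ULL << 0)   /* present */
#define MMU_US (1ULL << 2)   /* user/supervisor */

typedef uint64_t pentry_t;

/* is_shared() stands in for the check that a page must remain mapped so the
 * kernel can be entered from user mode.
 */
void kpti_sanitize_user_table(pentry_t *user_table, size_t nentries,
                              bool (*is_shared)(size_t idx))
{
    for (size_t i = 0; i < nentries; i++) {
        pentry_t entry = user_table[i];
        bool supervisor_only = (entry & MMU_P) != 0 && (entry & MMU_US) == 0;

        if (supervisor_only && !(is_shared != NULL && is_shared(i))) {
            /* Not user-accessible: make it non-present as well */
            user_table[i] = entry & ~MMU_P;
        }
    }
}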
975 * For the provided set of page tables, update the PTE associated with the
991 * @param ptables Page tables to modify
1065 * Map a physical region in a specific set of page tables.
1072 * scheduled (and therefore, if multiple sets of page tables exist, which one
1077 * @param ptables Page tables to modify
1092 * @retval -EFAULT if errors encountered when updating page tables
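These doc fragments describe operations on one specific set of page tables. A minimal sketch of the call pattern they imply, with invented names (pte_update(), region_map()) standing in for the real functions:

#include <stdint.h>
#include <stddef.h>
#include <errno.h>

typedef uint64_t pentry_t;

#define PAGE_SIZE 4096UL

/* Stand-in for the real PTE updater: set the entry for one virtual page in
 * the provided set of page tables. Stubbed out so the sketch compiles.
 */
static int pte_update(pentry_t *ptables, void *virt, uintptr_t phys,
                      uint64_t flags)
{
    (void)ptables; (void)virt; (void)phys; (void)flags;
    return 0;
}

/* Map a physical region into one specific set of page tables, page by page,
 * reporting -EFAULT if any entry could not be updated (as documented above).
 */
int region_map(pentry_t *ptables, void *virt, uintptr_t phys, size_t size,
               uint64_t flags)
{
    for (size_t off = 0; off < size; off += PAGE_SIZE) {
        if (pte_update(ptables, (uint8_t *)virt + off, phys + off,
                       flags) != 0) {
            return -EFAULT;
        }
    }
    return 0;
}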
1141 * Establish or update a memory mapping for all page tables
1153 * programmed into the page tables.
1160 * will trigger a TLB shootdown after all tables are updated.
1164 * @retval -EFAULT if errors encountered when updating page tables
1192 /* All virtual-to-physical mappings are the same in all page tables. in range_map()
1194 * domain associated with the page tables, and the threads that are in range_map()
1197 * Any new mappings need to be applied to all page tables. in range_map()
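The range_map() fragments describe the complementary, global operation: virtual-to-physical mappings are identical in every set of page tables and only per-domain permissions differ, so a new mapping must be installed in the kernel's page tables and in every memory domain's page tables, then advertised to other CPUs. A sketch of that loop, reusing the hypothetical region_map() from the previous example; the list head and shootdown helper are likewise invented:

#include <stdint.h>
#include <stddef.h>

typedef uint64_t pentry_t;

struct domain_ptables {
    pentry_t *ptables;
    struct domain_ptables *next;
};

/* From the previous sketch; still illustrative only. */
extern int region_map(pentry_t *ptables, void *virt, uintptr_t phys,
                      size_t size, uint64_t flags);

/* Hypothetical list of per-domain page tables and IPI helper. */
extern pentry_t *kernel_ptables;
extern struct domain_ptables *domain_list;
extern void tlb_shootdown_broadcast(void);

int range_map_all(void *virt, uintptr_t phys, size_t size, uint64_t flags)
{
    int ret = region_map(kernel_ptables, virt, phys, size, flags);

    for (struct domain_ptables *d = domain_list;
         ret == 0 && d != NULL; d = d->next) {
        ret = region_map(d->ptables, virt, phys, size, flags);
    }

    /* Other CPUs may hold stale translations for this range. */
    tlb_shootdown_broadcast();

    return ret;
}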
1349 /* Invoked to remove the identity mappings in the page tables,
1377 /* Applied to all page tables as this affects supervisor mode. in z_x86_set_stack_guard()
1465 /* Very low memory configuration. A single set of page tables is used for
1469 * set of page tables.
1470 * - No SMP. If that were supported, we would need per-core page tables.
1475 * Because there is no SMP, only one set of page tables, and user threads can't
1478 * updating page tables if the last user thread scheduled was in the same
1482 * up any arch-specific memory domain data (per-domain page tables).
1505 /* Cache of the current memory domain applied to the common page tables and
1544 /* Step 2: The page tables always have some memory domain applied to in z_x86_swap_update_common_page_table()
1546 * update the page tables in z_x86_swap_update_common_page_table()
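The fragments from z_x86_swap_update_common_page_table() describe the optimization this configuration allows: with one shared set of page tables and no SMP, partition permissions only need to be reprogrammed when the incoming user thread belongs to a different memory domain than the one currently applied. A sketch of that check, with invented type and function names:

struct mem_domain;     /* opaque for the sketch */

struct thread {
    struct mem_domain *domain;
};

/* Cache of the memory domain currently programmed into the common tables. */
static struct mem_domain *current_domain;

/* Stand-in for reprogramming the partition permissions; stubbed here. */
static void apply_domain_to_common_tables(struct mem_domain *domain)
{
    (void)domain;
}

void swap_update_common_tables(struct thread *incoming)
{
    if (incoming->domain == current_domain) {
        return; /* same domain as last time, nothing to update */
    }

    apply_domain_to_common_tables(incoming->domain);
    current_domain = incoming->domain;
}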
1583 * page tables.
1628 /* Memory domains each have a set of page tables assigned to them */
1631 * Pool of free memory pages for copying page tables, as needed.
1672 * Duplicate an entire set of page tables
1681 * @param src some paging structure from within the source page tables to copy
1725 * for page tables is identity-mapped, but double- in copy_page_table()
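copy_page_table() is described as a recursive duplication of an entire paging hierarchy, drawing fresh pages from the dedicated pool and relying on page-table memory being identity-mapped. A simplified sketch of that shape; the names are invented, a 4-level x86-64 entry layout is assumed, and large pages are not handled:

#include <stdint.h>
#include <string.h>
#include <stddef.h>

typedef uint64_t pentry_t;

#define ENTRIES_PER_TABLE 512
#define MMU_P             (1ULL << 0)
#define ADDR_MASK         0x000FFFFFFFFFF000ULL

/* Hypothetical pool of free pages reserved for duplicating page tables. */
extern void *page_pool_get(void);

/* Page-table memory is assumed identity-mapped, so the physical address
 * stored in an entry can be used directly as a pointer.
 */
static pentry_t *entry_to_table(pentry_t entry)
{
    return (pentry_t *)(uintptr_t)(entry & ADDR_MASK);
}

/* Recursively duplicate a paging structure. 'level' counts down to 0 at the
 * leaf page tables, whose entries are copied as-is.
 */
pentry_t *copy_table(const pentry_t *src, int level)
{
    pentry_t *dst = page_pool_get();

    if (dst == NULL) {
        return NULL; /* out of pool pages */
    }
    memcpy(dst, src, ENTRIES_PER_TABLE * sizeof(pentry_t));

    if (level == 0) {
        return dst; /* leaf level: entries point to data pages, keep them */
    }

    for (int i = 0; i < ENTRIES_PER_TABLE; i++) {
        if ((src[i] & MMU_P) == 0) {
            continue;
        }

        pentry_t *child = copy_table(entry_to_table(src[i]), level - 1);

        if (child == NULL) {
            return NULL;
        }
        /* Keep the original flags, swap in the new child's address. */
        dst[i] = (src[i] & ~ADDR_MASK) |
                 ((pentry_t)(uintptr_t)child & ADDR_MASK);
    }
    return dst;
}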
1822 /* If we're not using KPTI then we can use the build-time page tables in arch_mem_domain_init()
1823 * (which are mutable) as the set of page tables for the default in arch_mem_domain_init()
1853 /* Make a copy of the boot page tables created by gen_mmu.py */ in arch_mem_domain_init()
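Taken together, these arch_mem_domain_init() fragments describe a choice: without KPTI the mutable build-time page tables can serve as the default domain's set directly, otherwise the boot page tables produced by gen_mmu.py are copied. A sketch of that decision, reusing the copy_table() sketch above; the exact condition and structure names are assumptions made for the example:

#include <stdint.h>
#include <stddef.h>
#include <errno.h>

typedef uint64_t pentry_t;

struct mem_domain_arch {
    pentry_t *ptables;
};

/* Hypothetical stand-ins: the tables generated at build time and the
 * duplication helper from the previous sketch.
 */
extern pentry_t boot_page_tables[];
extern pentry_t *copy_table(const pentry_t *src, int level);

int domain_init_default(struct mem_domain_arch *arch)
{
#ifdef CONFIG_X86_KPTI
    /* KPTI: the default domain needs its own copy of the boot page tables */
    arch->ptables = copy_table(boot_page_tables, 3);
    if (arch->ptables == NULL) {
        return -ENOMEM;
    }
#else
    /* No KPTI: the build-time page tables are mutable, reuse them directly */
    arch->ptables = boot_page_tables;
#endif
    return 0;
}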
1905 /* Update the page tables with the partition info */ in arch_mem_domain_partition_add()
1938 LOG_DBG("set thread %p page tables to 0x%" PRIxPTR, thread, in arch_mem_domain_thread_add()
1956 /* Need to switch to using these new page tables, in case we drop in arch_mem_domain_thread_add()
1997 /* Memory domain access is already programmed into the page tables. in z_x86_current_stack_perms()
1999 * its domain-specific page tables. in z_x86_current_stack_perms()
2156 /* Don't bother looking at other page tables if non-present as we in arch_page_info_get()
2173 /* Logical OR of relevant PTE in all page tables. in arch_page_info_get()
2210 /* TODO: since we only have to query the current set of page tables, in arch_page_location_get()
2240 * fetch the PTE from the page tables until we are inside in z_x86_kpti_is_access_ok()
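arch_page_info_get(), arch_page_location_get() and z_x86_kpti_is_access_ok() all reduce to fetching the PTE for a virtual address from some set of page tables, stopping early at a non-present entry. A sketch of such a 4-level x86-64 walk; names are illustrative, table memory is assumed identity-mapped, and large pages are ignored:

#include <stdint.h>
#include <stddef.h>

typedef uint64_t pentry_t;

#define MMU_P       (1ULL << 0)
#define ADDR_MASK   0x000FFFFFFFFFF000ULL
#define NUM_LEVELS  4  /* PML4 -> PDPT -> PD -> PT */

/* As in the earlier sketches, page-table memory is assumed identity-mapped. */
static pentry_t *entry_to_table(pentry_t entry)
{
    return (pentry_t *)(uintptr_t)(entry & ADDR_MASK);
}

/* Index into the table at a given level for a virtual address:
 * bits 39-47 at level 0 (PML4) down to bits 12-20 at level 3 (PT).
 */
static size_t table_index(uint64_t virt, int level)
{
    return (virt >> (39 - level * 9)) & 0x1FF;
}

/* Fetch the leaf PTE for 'virt', or 0 if any level is non-present. */
pentry_t fetch_pte(pentry_t *top, uint64_t virt)
{
    pentry_t *table = top;

    for (int level = 0; level < NUM_LEVELS; level++) {
        pentry_t entry = table[table_index(virt, level)];

        if ((entry & MMU_P) == 0) {
            return 0; /* non-present: no need to look any further */
        }
        if (level == NUM_LEVELS - 1) {
            return entry;
        }
        table = entry_to_table(entry);
    }
    return 0; /* not reached */
}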