# Copyright (c) 2021 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0

menu "Virtual Memory Support"

config KERNEL_VM_SUPPORT
	bool
	help
	  Hidden option to enable virtual memory Kconfigs.

if KERNEL_VM_SUPPORT

DT_CHOSEN_Z_SRAM := zephyr,sram

config KERNEL_VM_BASE
	hex "Virtual address space base address"
	default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_SRAM))
	help
	  Define the base of the kernel's address space.

	  By default, this is the same as the DT_CHOSEN_Z_SRAM physical base
	  SRAM address from DTS, in which case RAM will be identity-mapped.
	  Some architectures may require RAM to be mapped in this way; they
	  may have just one RAM region and doing this makes linking much
	  simpler, as at least when the kernel boots all virtual RAM
	  addresses are the same as their physical address (demand paging at
	  runtime may later modify this for non-pinned page frames).

	  Otherwise, if RAM isn't identity-mapped:
	  1. It is the architecture's responsibility to transition the
	     instruction pointer to virtual addresses at early boot before
	     entering the kernel at z_cstart().
	  2. The underlying architecture may impose constraints on the bounds
	     of the kernel's address space, such as not overlapping physical
	     RAM regions if RAM is not identity-mapped, or the virtual and
	     physical base addresses being aligned to some common value
	     (which allows double-linking of paging structures to make the
	     instruction pointer transition simpler).

	  Zephyr does not implement a split address space; if multiple
	  page tables are in use, they all have the same virtual-to-physical
	  mappings (with potentially different permissions).

config KERNEL_VM_OFFSET
	hex "Kernel offset within address space"
	default 0
	help
	  Offset that the kernel image begins within its address space,
	  if this is not the same offset from the beginning of RAM.

	  Some care may need to be taken in selecting this value.
	  In certain build-time cases, or when a physical address cannot be
	  looked up in page tables, the equation:

	      virt = phys + ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) -
			     (SRAM_BASE_ADDRESS + SRAM_OFFSET))

	  will be used to convert between physical and virtual addresses for
	  memory that is mapped at boot.

	  This is uncommon and is only necessary if the beginning of VM and
	  physical memory have dissimilar alignment.

config KERNEL_VM_SIZE
	hex "Size of kernel address space in bytes"
	default 0x800000
	help
	  Size of the kernel's address space. Constraining this helps control
	  how much total memory can be used for page tables.

	  The difference between KERNEL_VM_BASE and KERNEL_VM_SIZE indicates
	  the size of the virtual region for runtime memory mappings. This is
	  needed for mapping driver MMIO regions, as well as special RAM
	  mapping use-cases such as VDSO pages, memory-mapped thread stacks,
	  and anonymous memory mappings. The kernel itself will be mapped in
	  here as well at boot.

	  Systems with very large amounts of memory (such as 512M or more)
	  will want to use a 64-bit build of Zephyr; there are no plans to
	  implement a notion of "high" memory in Zephyr to work around
	  physical RAM sizes larger than the defined bounds of the virtual
	  address space.

config KERNEL_DIRECT_MAP
	bool "Memory region direct-map support"
	depends on MMU
	help
	  This enables direct-map support, i.e. the ability to establish a
	  1:1 mapping between a virtual address and a physical address for a
	  given region.

	  If the requested memory region lies within the virtual memory
	  space and does not overlap any existing mappings, the region is
	  reserved from the virtual memory space and the mapping is
	  performed; otherwise the operation fails. Any attempt to map
	  across the boundary of the virtual memory space will also fail.

	  Note that this is for compatibility and portable apps shouldn't
	  be using it.

endif # KERNEL_VM_SUPPORT

menu "MMU Features"

config MMU
	bool
	depends on CPU_HAS_MMU
	select KERNEL_VM_SUPPORT
	help
	  This option is enabled when the CPU's memory management unit is
	  active and the arch_mem_map() API is available.

if MMU
config MMU_PAGE_SIZE
	hex "Size of smallest granularity MMU page"
	default 0x1000
	help
	  Size of memory pages. Varies per MMU but 4K is common. For MMUs
	  that support multiple page sizes, put the smallest one here.

menuconfig DEMAND_PAGING
	bool "Demand paging [EXPERIMENTAL]"
	depends on ARCH_HAS_DEMAND_PAGING
	help
	  Enable demand paging. Requires architecture support in how the
	  kernel is linked and the implementation of an eviction algorithm
	  and a backing store for evicted pages.

if DEMAND_PAGING
config DEMAND_MAPPING
	bool "Allow on-demand memory mappings"
	depends on ARCH_HAS_DEMAND_MAPPING
	default y
	help
	  When this is enabled, RAM-based memory mappings don't actually
	  allocate memory at mem_map time. They are made to be populated
	  at access time using the demand paging mechanism instead.

config DEMAND_PAGING_ALLOW_IRQ
	bool "Allow interrupts during page-ins/outs"
	help
	  Allow interrupts to be serviced while pages are being evicted or
	  retrieved from the backing store. This is much better for system
	  latency, but any code running in interrupt context that page
	  faults will cause a kernel panic. Such code must work with
	  exclusively pinned code and data pages.

	  The scheduler is still disabled during this operation.

	  If this option is disabled, the page fault servicing logic
	  runs with interrupts disabled for the entire operation. However,
	  ISRs may also page fault.

config DEMAND_PAGING_PAGE_FRAMES_RESERVE
	int "Number of page frames reserved for paging"
	default 32 if !LINKER_GENERIC_SECTIONS_PRESENT_AT_BOOT
	default 0
	help
	  This sets the number of page frames that will be reserved for
	  paging and that do not count towards free memory. This ensures
	  that some page frames remain available for paging code and data.
	  Otherwise, it would be possible to exhaust all page frames via
	  anonymous memory mappings.

config DEMAND_PAGING_STATS
	bool "Gather Demand Paging Statistics"
	help
	  This enables gathering various statistics related to demand paging,
	  e.g. the number of page faults. This is useful for tuning eviction
	  algorithms and optimizing the backing store.

	  Say N in production systems as this is not without cost.

config DEMAND_PAGING_STATS_USING_TIMING_FUNCTIONS
	bool "Use Timing Functions to Gather Demand Paging Statistics"
	select TIMING_FUNCTIONS_NEED_AT_BOOT
	help
	  Use timing functions to gather various demand paging statistics.

config DEMAND_PAGING_THREAD_STATS
	bool "Gather per Thread Demand Paging Statistics"
	depends on DEMAND_PAGING_STATS
	help
	  This enables gathering per-thread statistics related to demand
	  paging.

	  Say N in production systems as this is not without cost.

config DEMAND_PAGING_TIMING_HISTOGRAM
	bool "Gather Demand Paging Execution Timing Histogram"
	depends on DEMAND_PAGING_STATS
	help
	  This gathers a histogram of execution times for page eviction
	  selection, and for backing store page-in and page-out operations.

	  Say N in production systems as this is not without cost.

config DEMAND_PAGING_TIMING_HISTOGRAM_NUM_BINS
	int "Number of bins (buckets) in Demand Paging Timing Histogram"
	depends on DEMAND_PAGING_TIMING_HISTOGRAM
	default 10
	help
	  Defines the number of bins (buckets) in the histogram used for
	  gathering execution timing information for demand paging.

	  This requires k_mem_paging_eviction_histogram_bounds[] and
	  k_mem_paging_backing_store_histogram_bounds[] to define
	  the upper bounds for each bin. See kernel/statistics.c for
	  information.

endif # DEMAND_PAGING
endif # MMU
endmenu

config KERNEL_VM_USE_CUSTOM_MEM_RANGE_CHECK
	bool
	help
	  Use custom memory range check functions instead of the generic
	  checks in k_mem_phys_addr() and k_mem_virt_addr().

	  sys_mm_is_phys_addr_in_range() and
	  sys_mm_is_virt_addr_in_range() must be implemented.

endmenu # Virtual Memory Support