.. _memory_domain:

Memory Protection Design
########################

Zephyr's memory protection design is geared towards microcontrollers with MPU
(Memory Protection Unit) hardware. We do support some architectures, such as x86,
which have a paged MMU (Memory Management Unit), but in that case the MMU is
used like an MPU with an identity page table.

All of the discussion below will be using MPU terminology; systems with MMUs
can be considered to have an MPU with an unlimited number of programmable
regions.

There are a few different levels at which memory access is configured when
Zephyr memory protection features are enabled, which we describe here:

Boot Time Memory Configuration
******************************

This is the configuration of the MPU after the kernel has started up. It should
contain the following:

- Any configuration of memory regions which need to have special caching or
  write-back policies for basic hardware and driver function. Note that most
  MPUs have the concept of a default memory access policy map, which can be
  enabled as a "background" mapping for any area of memory that doesn't
  have an MPU region configuring it. It is strongly recommended to use this
  to maximize the number of available MPU regions for the end user. On
  ARMv7-M/ARMv8-M this is called the System Address Map; other CPUs may
  have similar capabilities. See :ref:`mem_mgmt_api` for information on
  how to annotate the system map in the device tree.

- A read-only, executable region or regions for program text and ro-data, that
  is accessible to user mode. This could be further sub-divided into a
  read-only region for ro-data and a read-only, executable region for text, but
  this will require an additional MPU region. This is required so that
  threads running in user mode can read ro-data and fetch instructions.

- Depending on configuration, user-accessible read-write regions to support
  extra features like GCOV, HEP, etc.

Assuming there is a background map which allows supervisor mode to access any
memory it needs, and regions are defined which grant user mode access to
text/ro-data, this is sufficient for the boot time configuration.

Hardware Stack Overflow
***********************

:kconfig:option:`CONFIG_HW_STACK_PROTECTION` is an optional feature which detects stack
buffer overflows when the system is running in supervisor mode. This
catches issues when the entire stack buffer has overflowed, not
individual stack frames; use compiler-assisted :kconfig:option:`CONFIG_STACK_CANARIES`
for that.

Like any crash in supervisor mode, no guarantees can be made about the overall
health of the system after a supervisor mode stack overflow, and any instance
of this should be treated as a serious error. However, it's still very useful to
know when these overflows happen, as without robust detection logic the system
will either crash in mysterious ways or behave in an undefined manner when the
stack buffer overflows.

Some systems implement this feature by creating at runtime a 'guard' MPU region
which is set to be read-only and is placed either at the beginning of, or
immediately preceding, the supervisor mode stack buffer. If the stack overflows,
an exception will be generated.

This feature is optional and is not required to catch stack overflows in user
mode; disabling it may free 1-2 MPU regions depending on the MPU design.

Other systems may have dedicated CPU support for catching stack overflows, and
no extra MPU regions will be required.
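
Both detection mechanisms described above are enabled through Kconfig. As an
illustrative sketch only, a ``prj.conf`` fragment enabling them might look like
the following; which of these options is actually available depends on the
architecture and MPU hardware:

.. code-block:: cfg

   # Detect overflow of an entire supervisor-mode stack buffer
   CONFIG_HW_STACK_PROTECTION=y

   # Compiler-assisted canaries to catch corruption of individual stack frames
   CONFIG_STACK_CANARIES=y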
Thread Stack
************

Any thread running in user mode will need access to its own stack buffer.
On context switch into a user mode thread, a dedicated MPU region or MMU
page table entries will be programmed with the bounds of the stack buffer.
A thread exceeding its stack buffer will start pushing data onto memory
it doesn't have access to, and a memory access violation exception will be
generated.

Note that user threads have access to the stacks of other user threads in
the same memory domain. This is the minimum required for architectures to
support memory domains. An architecture can further restrict access to stacks
so that each user thread only has access to its own stack, if the architecture
advertises this capability via
:kconfig:option:`CONFIG_ARCH_MEM_DOMAIN_SUPPORTS_ISOLATED_STACKS`.
This behavior is enabled by default if supported and can be selectively
disabled via :kconfig:option:`CONFIG_MEM_DOMAIN_ISOLATED_STACKS` if the
architecture supports both operating modes. However, some architectures
may decide to enable this all the time, in which case this option cannot be
disabled. Regardless of these kconfigs, user threads cannot access
the stacks of other user threads outside of their memory domains.

Thread Resource Pools
*********************

A small subset of kernel APIs, invoked as system calls, require heap memory
allocations. This memory is used only by the kernel and is not accessible
directly by user mode. In order to use these system calls, invoking threads
must assign themselves to a resource pool, which is a :c:struct:`k_heap`
object. Memory is drawn from a thread's resource pool using
:c:func:`z_thread_malloc` and freed with :c:func:`k_free`.

The APIs which use resource pools are as follows, with any alternatives
noted for users who do not want heap allocations within their application:

- :c:func:`k_stack_alloc_init` sets up a k_stack with its storage
  buffer allocated out of a resource pool instead of a buffer provided by the
  user. An alternative is to declare k_stacks that are automatically
  initialized at boot with :c:macro:`K_STACK_DEFINE()`, or to initialize the
  k_stack in supervisor mode with :c:func:`k_stack_init`.

- :c:func:`k_pipe_alloc_init` sets up a k_pipe object with its
  storage buffer allocated out of a resource pool instead of a buffer provided
  by the user. An alternative is to declare k_pipes that are automatically
  initialized at boot with :c:macro:`K_PIPE_DEFINE()`, or to initialize the
  k_pipe in supervisor mode with :c:func:`k_pipe_init`.

- :c:func:`k_msgq_alloc_init` sets up a k_msgq object with its
  storage buffer allocated out of a resource pool instead of a buffer provided
  by the user. An alternative is to declare a k_msgq that is automatically
  initialized at boot with :c:macro:`K_MSGQ_DEFINE()`, or to initialize the
  k_msgq in supervisor mode with :c:func:`k_msgq_init`.

- :c:func:`k_poll`, when invoked from user mode, needs to make a kernel-side
  copy of the provided events array while waiting for an event. This copy is
  freed when :c:func:`k_poll` returns for any reason.

- :c:func:`k_queue_alloc_prepend` and :c:func:`k_queue_alloc_append`
  allocate a container structure to place the data in, since the internal
  bookkeeping information that defines the queue cannot be placed in the
  memory provided by the user.

- :c:func:`k_object_alloc` allows entire kernel objects to be
  dynamically allocated at runtime, with a usable pointer to them returned to
  the caller.

The relevant API is :c:func:`k_thread_heap_assign`, which assigns
a k_heap for the target thread to draw these allocations from.

If the system heap is enabled, then the system heap may be used with
:c:func:`k_thread_system_pool_assign`, but it is preferable for different
logical applications running on the system to have their own pools.
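
For example, a supervisor-mode setup routine might define a dedicated heap and
assign it to an application thread before that thread makes any of the system
calls listed above. This is a minimal sketch; the heap name, heap size, and
thread variable are illustrative and not part of the kernel API:

.. code-block:: c

   #include <zephyr/kernel.h>

   /* Dedicated 1 KiB resource pool for this application's threads
    * (name and size are arbitrary for this example).
    */
   K_HEAP_DEFINE(app0_resource_pool, 1024);

   /* Thread object defined elsewhere in the application. */
   extern struct k_thread app0_thread;

   void app0_assign_pool(void)
   {
           /* Kernel-side allocations made on behalf of app0_thread will
            * now be drawn from app0_resource_pool.
            */
           k_thread_heap_assign(&app0_thread, &app0_resource_pool);
   }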
Memory Domains
**************

The kernel ensures that any user thread will have access to its own stack
buffer, plus program text and read-only data. The memory domain APIs are the
way to grant a user thread access to additional blocks of memory.

Conceptually, a memory domain is a collection of some number of memory
partitions. The maximum number of memory partitions in a domain
is limited by the number of available MPU regions. This is why it is important
to minimize the number of boot-time MPU regions.

Memory domains are *not* intended to control access to memory from supervisor
mode. In some cases this may be unavoidable; for example some architectures do
not allow for the definition of regions which are read-only to user mode but
read-write to supervisor mode. A great deal of care must be taken when working
with such regions to not unintentionally cause the kernel to crash when
accessing such a region. Any attempt to use memory domain APIs to control
supervisor mode access is at best undefined behavior; supervisor mode access
policy is only intended to be controlled by boot-time memory regions.

Memory domain APIs are only available to supervisor mode. The only control
user mode has over memory domains is that any user thread's child threads
will automatically become members of the parent's domain.

All threads are members of a memory domain, including supervisor threads
(even though this has no implications on their memory access). There is a
default domain ``k_mem_domain_default`` which will be assigned to threads that
have not been specifically assigned to a domain, or have not inherited a memory
domain membership from their parent thread. The main thread starts as a
member of the default domain.

Memory Partitions
=================

Each memory partition consists of a memory address, a size,
and access attributes. It is intended that memory partitions are used to
control access to system memory. Defining memory partitions is subject
to the following constraints:

- The partition must represent a memory region that can be programmed by
  the underlying memory management hardware, and needs to conform to any
  underlying hardware constraints. For example, many MPU-based systems require
  that partitions be sized to some power of two, and aligned to their own
  size. For MMU-based systems, the partition must be aligned to a page and
  the size some multiple of the page size.

- Partitions within the same memory domain may not overlap each other. There is
  no notion of precedence among partitions within a memory domain. Partitions
  within a memory domain are assumed to have a higher precedence than any
  boot-time memory regions; however, whether a memory domain partition can
  overlap a boot-time memory region is architecture specific.
- The same partition may be specified in multiple memory domains. For example,
  there may be a shared memory area that multiple domains grant access to.

- Care must be taken in determining what memory to expose in a partition.
  It is not appropriate to provide direct user mode access to any memory
  containing private kernel data.

- Memory domain partitions are intended to control access to system RAM.
  Configuration of memory partitions which do not correspond to RAM
  may not be supported by the architecture; this is true for MMU-based systems.

There are two ways to define memory partitions: either manually or
automatically.

Manual Memory Partitions
------------------------

The following code declares a global array ``buf``, and then declares
a read-write partition for it which may be added to a domain:

.. code-block:: c

   uint8_t __aligned(32) buf[32];

   K_MEM_PARTITION_DEFINE(my_partition, buf, sizeof(buf),
                          K_MEM_PARTITION_P_RW_U_RW);

This does not scale particularly well when we are trying to contain multiple
objects spread out across several C files into a single partition.

Automatic Memory Partitions
---------------------------

Automatic memory partitions are created by the build system. All globals
which need to be placed inside a partition are tagged with their destination
partition. The build system will then coalesce all of these into a single
contiguous block of memory, zero any BSS variables at boot, and define
a memory partition of appropriate base address and size which contains all
the tagged data.

.. figure:: auto_mem_domain.png
   :alt: Automatic Memory Domain build flow
   :align: center

   Automatic Memory Domain build flow

Automatic memory partitions are only configured as read-write
regions. They are defined with :c:macro:`K_APPMEM_PARTITION_DEFINE()`.
Global variables are then routed to this partition using
:c:macro:`K_APP_DMEM()` for initialized data and :c:macro:`K_APP_BMEM()` for
BSS.

.. code-block:: c

   #include <zephyr/app_memory/app_memdomain.h>

   /* Declare a k_mem_partition "my_partition" that is read-write to
    * user mode. Note that we do not specify a base address or size.
    */
   K_APPMEM_PARTITION_DEFINE(my_partition);

   /* The global variable var1 will be inside the bounds of my_partition
    * and be initialized with 37 at boot.
    */
   K_APP_DMEM(my_partition) int var1 = 37;

   /* The global variable var2 will be inside the bounds of my_partition
    * and be zeroed at boot since K_APP_BMEM() was used, indicating a BSS
    * variable.
    */
   K_APP_BMEM(my_partition) int var2;

The build system will ensure that the base address of ``my_partition`` is
properly aligned and that the total size of the region conforms to the memory
management hardware requirements, adding padding if necessary.

If multiple partitions are being created, a variadic preprocessor macro can be
used as provided in ``app_macro_support.h``:

.. code-block:: c

   FOR_EACH(K_APPMEM_PARTITION_DEFINE, part0, part1, part2);
Automatic Partitions for Static Library Globals
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The build-time logic for setting up automatic memory partitions is in
``scripts/build/gen_app_partitions.py``. If a static library is linked into Zephyr,
it is possible to route all the globals in that library to a specific
memory partition with the ``--library`` argument.

For example, if the Newlib C library is enabled, the Newlib globals all need
to be placed in ``z_libc_partition``. The invocation of the script in the
top-level ``CMakeLists.txt`` adds the following:

.. code-block:: none

   gen_app_partitions.py ... --library libc.a z_libc_partition ..

For pre-compiled libraries there is no support for expressing this in the
project-level configuration or build files; the top-level ``CMakeLists.txt`` must
be edited.

For Zephyr libraries created using ``zephyr_library`` or ``zephyr_library_named``,
the ``zephyr_library_app_memory`` function can be used to specify the memory
partition where all globals in the library should be placed.

.. _memory_domain_predefined_partitions:

Pre-defined Memory Partitions
-----------------------------

There are a few memory partitions which are pre-defined by the system:

- ``z_malloc_partition`` - This partition contains the system-wide pool of
  memory used by libc malloc(). Due to possible starvation issues, it is
  not recommended to draw heap memory from a global pool; instead,
  it is better to define various sys_heap objects and assign them
  to specific memory domains.

- ``z_libc_partition`` - Contains globals required by the C library and runtime.
  Required when using either the Minimal C library or the Newlib C library.
  Also required when :kconfig:option:`CONFIG_STACK_CANARIES` is enabled.

Library-specific partitions are listed in ``include/app_memory/partitions.h``.
For example, to use the MBEDTLS library from user mode, the
``k_mbedtls_partition`` must be added to the domain.
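
As a minimal sketch, assuming :kconfig:option:`CONFIG_MBEDTLS` and userspace are
enabled and that an application memory domain ``app0_domain`` has been
initialized as shown in the next section, this is a single supervisor-mode
call:

.. code-block:: c

   #include <zephyr/app_memory/partitions.h>

   /* Grant user threads in app0_domain access to the mbedTLS globals. */
   k_mem_domain_add_partition(&app0_domain, &k_mbedtls_partition);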
Memory Domain Usage
===================

Create a Memory Domain
----------------------

A memory domain is defined using a variable of type
:c:struct:`k_mem_domain`. It must then be initialized by calling
:c:func:`k_mem_domain_init`.

The following code defines and initializes an empty memory domain.

.. code-block:: c

   struct k_mem_domain app0_domain;

   k_mem_domain_init(&app0_domain, 0, NULL);

Add Memory Partitions into a Memory Domain
------------------------------------------

There are two ways to add memory partitions into a memory domain.

This first code sample shows how to add memory partitions while creating
a memory domain.

.. code-block:: c

   /* the start address of the MPU region needs to align with its size */
   uint8_t __aligned(32) app0_buf[32];
   uint8_t __aligned(32) app1_buf[32];

   K_MEM_PARTITION_DEFINE(app0_part0, app0_buf, sizeof(app0_buf),
                          K_MEM_PARTITION_P_RW_U_RW);

   K_MEM_PARTITION_DEFINE(app0_part1, app1_buf, sizeof(app1_buf),
                          K_MEM_PARTITION_P_RW_U_RO);

   struct k_mem_partition *app0_parts[] = {
           &app0_part0,
           &app0_part1
   };

   k_mem_domain_init(&app0_domain, ARRAY_SIZE(app0_parts), app0_parts);

This second code sample shows how to add memory partitions into an initialized
memory domain one by one.

.. code-block:: c

   /* the start address of the MPU region needs to align with its size */
   uint8_t __aligned(32) app0_buf[32];
   uint8_t __aligned(32) app1_buf[32];

   K_MEM_PARTITION_DEFINE(app0_part0, app0_buf, sizeof(app0_buf),
                          K_MEM_PARTITION_P_RW_U_RW);

   K_MEM_PARTITION_DEFINE(app0_part1, app1_buf, sizeof(app1_buf),
                          K_MEM_PARTITION_P_RW_U_RO);

   k_mem_domain_add_partition(&app0_domain, &app0_part0);
   k_mem_domain_add_partition(&app0_domain, &app0_part1);

.. note::
   The maximum number of memory partitions is limited by the maximum
   number of MPU regions or the maximum number of MMU tables.

Memory Domain Assignment
------------------------

Any thread may join a memory domain, and any memory domain may have multiple
threads assigned to it. Threads are assigned to memory domains with an API
call:

.. code-block:: c

   k_mem_domain_add_thread(&app0_domain, app_thread_id);

If the thread was already a member of some other domain (including the
default domain), it will be removed from it in favor of the new one.

In addition, if a thread is a member of a memory domain and it creates a
child thread, that child thread will belong to the domain as well.

Remove a Memory Partition from a Memory Domain
----------------------------------------------

The following code shows how to remove a memory partition from a memory
domain.

.. code-block:: c

   k_mem_domain_remove_partition(&app0_domain, &app0_part1);

The :c:func:`k_mem_domain_remove_partition` API finds the memory partition
that matches the given parameter and removes that partition from the
memory domain.

Available Partition Attributes
------------------------------

When defining a partition, we need to set its access permission attributes.
Since the access control of memory partitions relies on either an MPU or an
MMU, the available partition attributes are architecture dependent.

The complete list of available partition attributes for a specific architecture
is found in the architecture-specific include file
``include/zephyr/arch/<arch name>/arch.h`` (for example, ``include/zephyr/arch/arm/arch.h``).
Some examples of partition attributes are:

.. code-block:: c

   /* Denotes a partition that is privileged read/write, unprivileged read/write */
   K_MEM_PARTITION_P_RW_U_RW
   /* Denotes a partition that is privileged read/write, unprivileged read-only */
   K_MEM_PARTITION_P_RW_U_RO

In almost all cases ``K_MEM_PARTITION_P_RW_U_RW`` is the right choice.
Configuration Options
*********************

Related configuration options:

* :kconfig:option:`CONFIG_MAX_DOMAIN_PARTITIONS`

API Reference
*************

The following memory domain APIs are provided by :zephyr_file:`include/zephyr/kernel.h`:

.. doxygengroup:: mem_domain_apis