Lines Matching full:physical
12 * physical and virtual memory. This is global to all cores
19 * cached one via sys_cache_cached_ptr_get(). However, physical addresses
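The excerpt above names sys_cache_cached_ptr_get(). As a minimal sketch of the idea, assuming Zephyr's sys_cache API and an illustrative helper name:

    #include <zephyr/cache.h>

    /* Illustrative helper: given an uncached pointer into the memory
     * window, return the cached alias of the same physical location. */
    static void *cached_alias(void *uncached_ptr)
    {
            return sys_cache_cached_ptr_get(uncached_ptr);
    }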
58 /* declare L2 physical memory block */
80 * @param pa physical address.
181 * Cached addresses for both physical and virtual. in sys_mm_drv_map_page()
184 * the cached physical address is needed to perform in sys_mm_drv_map_page()
204 * When the provided physical address is NULL in sys_mm_drv_map_page()
206 * select the first available free physical address in sys_mm_drv_map_page()
220 /* Check bounds of physical address space */ in sys_mm_drv_map_page()
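The sys_mm_drv_map_page() excerpts above describe two behaviors: a NULL physical address selects the first free physical page, and a supplied one is bounds-checked. A sketch of that logic, assuming a sys_mem_blocks allocator and made-up L2_SRAM_BASE/L2_SRAM_SIZE bounds:

    #include <errno.h>
    #include <stdint.h>
    #include <zephyr/sys/mem_blocks.h>

    #define L2_SRAM_BASE 0xA0000000U /* assumed base of physical memory */
    #define L2_SRAM_SIZE 0x00100000U /* assumed size of physical memory */

    /* Pick the first free physical page when the caller passes none,
     * otherwise verify the caller-supplied address is in range. */
    static int pick_phys_page(sys_mem_blocks_t *phys_pages, uintptr_t *pa)
    {
            if (*pa == 0U) {
                    void *block;

                    if (sys_mem_blocks_alloc(phys_pages, 1, &block) != 0) {
                            return -ENOMEM; /* no free physical page left */
                    }
                    *pa = (uintptr_t)block;
            } else if ((*pa < L2_SRAM_BASE) ||
                       (*pa >= L2_SRAM_BASE + L2_SRAM_SIZE)) {
                    return -EINVAL; /* outside physical address space */
            }

            return 0;
    }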
249 * TLB_PADDR_SIZE bits of the physical page number, in sys_mm_drv_map_page()
251 * architecture design where the same physical page in sys_mm_drv_map_page()
257 * TLB only cares about the lower part of the physical in sys_mm_drv_map_page()
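The surrounding comment explains that the TLB stores only the lower TLB_PADDR_SIZE bits of the physical page number, so physical aliases differing in the upper bits land on the same entry. A sketch of that encoding, with all constants assumed rather than taken from the driver:

    #include <stdint.h>

    #define MM_PAGE_SHIFT  12U        /* assumed 4 KiB pages */
    #define TLB_PADDR_SIZE 11U        /* assumed entry width */
    #define TLB_PADDR_MASK ((1U << TLB_PADDR_SIZE) - 1U)
    #define TLB_ENABLE_BIT (1U << 15) /* assumed valid bit */

    /* Keep only the low TLB_PADDR_SIZE bits of the physical page
     * number; upper bits are dropped by design. */
    static inline uint16_t tlb_entry(uintptr_t pa)
    {
            return (uint16_t)(((pa >> MM_PAGE_SHIFT) & TLB_PADDR_MASK) |
                              TLB_ENABLE_BIT);
    }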
373 * Flush the cache to make sure the backing physical page in sys_mm_drv_unmap_page_wflush()
389 /* Check bounds of physical address space. in sys_mm_drv_unmap_page_wflush()
390 * Initial TLB mappings could point to non-existent physical pages. in sys_mm_drv_unmap_page_wflush()
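A sketch of the flush-before-unmap step described at line 373, assuming Zephyr's sys_cache_data_flush_range() and an assumed page-size constant:

    #include <zephyr/cache.h>

    #define MM_PAGE_SIZE 4096U /* assumed driver page size */

    /* Write dirty cache lines back before the mapping disappears so
     * the backing physical page holds up-to-date data. */
    static void flush_before_unmap(void *va)
    {
            sys_cache_data_flush_range(va, MM_PAGE_SIZE);
    }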
630 * phys_new == NULL and get the physical addresses from in sys_mm_drv_move_region()
696 * flush the cache to make sure the backing physical in sys_mm_drv_move_region()
718 * flush the cache to make sure the backing physical in sys_mm_drv_move_array()
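For the phys_new == NULL case in sys_mm_drv_move_region(), the physical page backing an existing mapping can be recovered through the system_mm API before flushing and remapping; a minimal sketch:

    #include <stdint.h>
    #include <zephyr/drivers/mm/system_mm.h>

    /* Look up the physical address currently backing a virtual one. */
    static int backing_phys(void *va, uintptr_t *pa)
    {
            return sys_mm_drv_page_phys_get(va, pa);
    }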
733 * Change size of available physical memory according to FW register information in sys_mm_drv_mm_init()
746 * Initialize memblocks that will store physical in sys_mm_drv_mm_init()
747 * page usage. Initially all physical pages are in sys_mm_drv_mm_init()
787 * Unmap all unused physical pages from the entire in sys_mm_drv_mm_init()
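An illustrative sketch of the init bookkeeping described above, assuming a sys_mem_blocks allocator laid over the L2 SRAM itself (all constants and names here are assumptions):

    #include <stddef.h>
    #include <stdint.h>
    #include <zephyr/sys/mem_blocks.h>

    #define L2_PAGE_SIZE 4096U       /* assumed page size */
    #define L2_PAGES_NUM 64U         /* assumed page count */
    #define L2_SRAM_BASE 0xA0000000U /* assumed SRAM base */

    /* The allocator tracks usage of existing physical pages, so its
     * buffer is the SRAM itself rather than newly reserved storage. */
    SYS_MEM_BLOCKS_DEFINE_WITH_EXT_BUF(l2_phys_pages, L2_PAGE_SIZE,
                                       L2_PAGES_NUM,
                                       (uint8_t *)L2_SRAM_BASE);

    /* Mark pages that are already in use so only the remainder can be
     * unmapped; sys_mem_blocks_get() claims specific blocks by address. */
    static int reserve_used_pages(void *first_used, size_t count)
    {
            return sys_mem_blocks_get(&l2_phys_pages, first_used, count);
    }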
837 /* map the physical addr 1:1 to virtual address */ in adsp_mm_save_context()
848 /* map the page 1:1 virtual to physical */ in adsp_mm_save_context()
864 /* save physical address */ in adsp_mm_save_context()
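The adsp_mm_save_context() excerpts map pages 1:1 so the saved image can be restored without address translation; a minimal sketch of an identity mapping through the system_mm API (the flag choice is an assumption):

    #include <stdint.h>
    #include <zephyr/drivers/mm/system_mm.h>

    /* Map a page at a virtual address equal to its physical address. */
    static int map_identity(uintptr_t pa)
    {
            return sys_mm_drv_map_page((void *)pa, pa, SYS_MM_MEM_PERM_RW);
    }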