Searched full:page (Results 1 – 25 of 814) sorted by relevance

/Zephyr-latest/doc/kernel/memory_management/
demand_paging.rst
8 conceptually divided into page-sized page frames as regions to hold data.
10 * When the processor tries to access data and the data page exists in
11 one of the page frames, the execution continues without any interruptions.
13 * When the processor tries to access a data page that does not exist
14 in any page frame, a page fault occurs. The paging code then brings in
15 the corresponding data page from the backing store into physical memory if
16 there is a free page frame. If there are no more free page frames,
17 the eviction algorithm is invoked to select a data page to be paged out,
18 thus freeing up a page frame for new data to be paged in. If this data
19 page has been modified since it was first paged in, the data will be
[all …]
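
A minimal sketch of exercising this flow from application code, assuming CONFIG_DEMAND_PAGING (plus an eviction algorithm and a backing store) is enabled in the build and the mapping is not pinned:

#include <zephyr/kernel.h>
#include <zephyr/kernel/mm.h>
#include <zephyr/kernel/mm/demand_paging.h>

void demand_paging_demo(void)
{
        /* Map one page of anonymous read/write memory. */
        uint8_t *buf = k_mem_map(CONFIG_MMU_PAGE_SIZE, K_MEM_PERM_RW);

        if (buf == NULL) {
                return;
        }

        /* Touching the page may take a page fault; the paging code then
         * brings the data page into a free page frame as described above.
         */
        buf[0] = 0xAA;

        /* Explicitly evict the page to the backing store... */
        (void)k_mem_page_out(buf, CONFIG_MMU_PAGE_SIZE);

        /* ...and pre-load it again before the next access. */
        k_mem_page_in(buf, CONFIG_MMU_PAGE_SIZE);
}
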
/Zephyr-latest/include/zephyr/kernel/mm/
demand_paging.h
40 /** Number of page faults */
43 /** Number of page faults with IRQ locked */
46 /** Number of page faults with IRQ unlocked */
50 /** Number of page faults while in ISR */
85 * Evict a page-aligned virtual memory region to the backing store
89 * backing store if they weren't already, with their associated page frames
90 * marked as available for mappings or page-ins.
92 * None of the associated page frames mapped to the provided region should
96 * they could take page faults immediately.
101 * @param addr Base page-aligned virtual address
[all …]
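
A hedged sketch of evicting a page-aligned region with k_mem_page_out() and then reading back the fault counters this header describes. The stats field names (pagefaults.cnt and friends) are assumed from the comments above, and the counters are only collected when CONFIG_DEMAND_PAGING_STATS is enabled:

#include <zephyr/kernel.h>
#include <zephyr/kernel/mm/demand_paging.h>
#include <zephyr/sys/printk.h>

/* 'region'/'size' are assumed to be a page-aligned, unpinned area obtained
 * earlier from k_mem_map().
 */
void evict_and_report(void *region, size_t size)
{
        struct k_mem_paging_stats_t stats;

        /* Flush the region to the backing store, freeing its page frames. */
        if (k_mem_page_out(region, size) != 0) {
                printk("page-out failed (region pinned or backing store full?)\n");
                return;
        }

        k_mem_paging_stats_get(&stats);
        printk("page faults: %lu (IRQ locked %lu, unlocked %lu, in ISR %lu)\n",
               stats.pagefaults.cnt, stats.pagefaults.irq_locked,
               stats.pagefaults.irq_unlocked, stats.pagefaults.in_isr);
}
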
/Zephyr-latest/subsys/demand_paging/eviction/
Kconfig
7 prompt "Page frame eviction algorithms"
20 bool "Not Recently Used (NRU) page eviction algorithm"
22 This implements a Not Recently Used page eviction algorithm.
24 When a page frame needs to be evicted, the algorithm will prefer to
25 evict page frames using an ascending order of priority:
33 bool "Least Recently Used (LRU) page eviction algorithm"
36 This implements a Least Recently Used page eviction algorithm.
39 the page eviction queue. This is more efficient than the NRU
41 one page at a time and only when there is a page eviction request.
51 pages that are capable of being paged out. At eviction time, if a page
[all …]
lru.c
9 * "accessed" page flag so this can be called at the same time.
13 * - Page frames made evictable are appended to the end of the LRU queue with
15 * their corresponding MMU page table initially, but not a deal breaker
18 * - When accessed, an inaccessible page causes a fault. The architecture
19 * fault handler makes the page accessible, marks it as accessed and calls
20 * k_mem_paging_eviction_accessed() which moves the corresponding page frame
23 * - On page reclamation, the page at the head of the queue is removed for
24 * that purpose. The new head page is marked inaccessible.
26 * - If the new head page is actively used, it will cause a fault and be moved
27 * to the end of the queue, preventing it from being the next page
[all …]
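
The queue discipline described in these comments can be pictured with a self-contained sketch (plain C, not the kernel's actual data structures): evictable frames are appended at the tail, an access moves a frame back to the tail, and the victim is always taken from the head.

#include <stddef.h>

#define NFRAMES 8                 /* illustrative capacity */

static int lru_queue[NFRAMES];    /* head (next victim) at index 0 */
static size_t lru_len;

/* Called when a frame is made evictable or is accessed again. */
static void lru_touch(int frame)
{
        size_t i;

        /* Remove the frame if it is already queued... */
        for (i = 0; i < lru_len; i++) {
                if (lru_queue[i] == frame) {
                        for (; i + 1 < lru_len; i++) {
                                lru_queue[i] = lru_queue[i + 1];
                        }
                        lru_len--;
                        break;
                }
        }

        /* ...then append it at the tail as most recently used. */
        if (lru_len < NFRAMES) {
                lru_queue[lru_len++] = frame;
        }
}

/* The least recently used frame sits at the head of the queue. */
static int lru_evict(void)
{
        if (lru_len == 0) {
                return -1;
        }

        int victim = lru_queue[0];

        for (size_t i = 1; i < lru_len; i++) {
                lru_queue[i - 1] = lru_queue[i];
        }
        lru_len--;
        return victim;
}
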
nru.c
15 /* The accessed and dirty states of each page frame are used to create
16 * a hierarchy with a numerical value. When evicting a page, try to evict
17 * the page with the highest value (we prefer clean, not accessed pages).
38 /* Clear accessed bit in page tables */ in nru_periodic_update()
74 /* Implies a mismatch with page frame ontology and page in k_mem_paging_eviction_select()
78 "non-present page, %s", in k_mem_paging_eviction_select()
84 /* If we find a not accessed, clean page we're done */ in k_mem_paging_eviction_select()
97 /* Shouldn't ever happen unless every page is pinned */ in k_mem_paging_eviction_select()
98 __ASSERT(last_pf != NULL, "no page to evict"); in k_mem_paging_eviction_select()
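
A self-contained illustration of the ranking those comments describe, combining the accessed and dirty bits into a single preference value. The relative order of the two middle classes is an assumption; the comments only promise that clean, not-accessed pages are preferred:

#include <stdbool.h>

/* Higher return value = better eviction candidate. */
static int nru_eviction_preference(bool accessed, bool dirty)
{
        if (!accessed && !dirty) {
                return 3;   /* clean and not recently used: evict first */
        }
        if (!accessed && dirty) {
                return 2;   /* cold, but needs write-back to backing store */
        }
        if (accessed && !dirty) {
                return 1;   /* recently used, but cheap to evict */
        }
        return 0;           /* recently used and dirty: evict last */
}
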
/Zephyr-latest/subsys/bluetooth/mesh/
large_comp_data_srv.c
49 uint8_t page; in handle_large_comp_data_get() local
57 page = bt_mesh_comp_parse_page(buf); in handle_large_comp_data_get()
60 LOG_DBG("page %u offset %u", page, offset); in handle_large_comp_data_get()
63 net_buf_simple_add_u8(&rsp, page); in handle_large_comp_data_get()
66 if (atomic_test_bit(bt_mesh.flags, BT_MESH_COMP_DIRTY) && page < 128) { in handle_large_comp_data_get()
70 err = bt_mesh_comp_read(&temp_buf, page); in handle_large_comp_data_get()
72 LOG_ERR("Could not read comp data p%d, err: %d", page, err); in handle_large_comp_data_get()
86 total_size = bt_mesh_comp_page_size(page); in handle_large_comp_data_get()
90 err = bt_mesh_comp_data_get_page(&rsp, page, offset); in handle_large_comp_data_get()
92 LOG_ERR("Could not read comp data p%d, err: %d", page, err); in handle_large_comp_data_get()
[all …]
/Zephyr-latest/kernel/include/
mmu.h
103 * @defgroup kernel_mm_page_frame_apis Kernel Memory Page Frame Management APIs
107 * Macros and data structures for physical page frame accounting,
113 * @brief Number of page frames.
115 * At present, page frame management is only done for main system RAM,
134 /** This physical page is free and part of the free list */
137 /** This physical page is reserved by hardware; we will never use it */
140 /** This page contains critical kernel data and will never be swapped */
144 * This physical page is mapped to some virtual memory address
146 * Currently, we just support one mapping per page frame. If a page frame
152 * This page frame is currently involved in a page-in/out operation
[all …]
kernel_arch_interface.h
289 * will be established. If the page tables already had mappings installed
292 * If the target architecture supports multiple page sizes, currently
293 * only the smallest page size will be used.
301 * Architectures are expected to pre-allocate page tables for the entire
311 * @param virt Page-aligned Destination virtual address to map
312 * @param phys Page-aligned Source physical address to map
313 * @param size Page-aligned size of the mapped memory region in bytes
322 * When this completes, the relevant page table entries will be updated as
325 * page tables.
334 * and it is not necessary to free any paging structures. Empty page tables
[all …]
/Zephyr-latest/arch/xtensa/core/
README_MMU.txt
23 and data spaces, but the hardware page table refill mechanism (see
44 ## Virtually-mapped Page Tables
47 extremely confusing) "page table" format. The simplest way to begin
53 10 bits with the bottom two bits set to zero" (i.e. the page frame
59 memory fetch vs. e.g. the 2-5 fetches required to walk a page table on
64 physical address. Which means that the page tables occupy a 4M region
70 contains the 1024 PTE entries for the 4M page table itself, pointed to
73 Obviously, the page table memory being virtual means that the fetch
74 can fail: there are 1024 possible pages in a complete page table
77 page translation we want (NOT for the original requested address, we
[all …]
/Zephyr-latest/drivers/mm/
mm_drv_common.h
24 * is assumed to be page aligned.
26 * @param virt Page-aligned virtual address
36 * @brief Test if address is page-aligned
40 * @retval true if page-aligned
41 * @retval false if not page-aligned
49 * @brief Test if address is page-aligned
53 * @retval true if page-aligned
54 * @retval false if not page-aligned
62 * @brief Test if size is page-aligned
66 * @retval true if page-aligned
[all …]
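
The page-alignment tests documented here reduce to checking the low bits of an address or size against the page size (which must be a power of two). A minimal, self-contained version; the names below are illustrative, not the header's actual identifiers:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define DEMO_PAGE_SIZE 4096U    /* assumed page size for illustration */

/* True if the value is a multiple of the (power-of-two) page size. */
static bool demo_is_page_aligned(uintptr_t value)
{
        return (value & (DEMO_PAGE_SIZE - 1U)) == 0U;
}

/* A region qualifies only if both its address and its size are aligned. */
static bool demo_region_is_page_aligned(void *virt, size_t size)
{
        return demo_is_page_aligned((uintptr_t)virt) &&
               demo_is_page_aligned((uintptr_t)size);
}
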
/Zephyr-latest/arch/x86/include/
x86_mmu.h
40 #define MMU_PWT BITL(3) /** Page Write Through */
41 #define MMU_PCD BITL(4) /** Page Cache Disable */
44 #define MMU_PS BITL(7) /** Page Size (non PTE)*/
45 #define MMU_PAT BITL(7) /** Page Attribute (PTE) */
60 /* Page fault error code flags. See Chapter 4.7 of the Intel SDM vol. 3A. */
61 #define PF_P BIT(0) /* 0 Non-present page 1 Protection violation */
73 * Dump out page table entries for a particular virtual memory address
78 * @param ptables Page tables to walk
84 * Fetch the page table entry for a virtual memory address
88 * @param val Value stored in page table entry, with address and flags
[all …]
/Zephyr-latest/include/zephyr/xen/
memory.h
11 * Adds a mapping for the specified page frame to the Xen domain physmap.
17 * @param gpfn page frame where the source mapping page should appear.
24 * Adds a mapping for the specified set of page frames to the Xen domain physmap.
31 * @param size number of page frames being mapped.
33 * @param gpfns array of page frames where the mapping should appear.
42 * Removes a page frame from the Xen domain physmap.
44 * @param domid domain id whose page is going to be removed. For unprivileged
46 * @param gpfn page frame number that needs to be removed
52 * Populates the specified Xen domain page frames with memory.
58 * @param nr_extents number of page frames being populated.
[all …]
gnttab.h
12 * Assigns a gref and permits access to a 4K page for a specific domain.
15 * @param gfn - guest frame number of the page where the grant will be located
33 * Allocates a 4K page for the grant and shares it via the returned
44 * Provides an interface to acquire a free page that can be used for
48 * @return - pointer to the page start address, which can be used as host_addr
54 * Releases the provided page that was used for mapping a foreign grant frame,
57 * @param page_addr - pointer to the start address of the used page.
67 * the per-page status will also be set in map_ops[i].status (GNTST_*)
69 * To map a foreign frame you need a 4K-aligned 4K memory page, which will be
77 * each page that was successfully unmapped.
[all …]
/Zephyr-latest/samples/drivers/soc_flash_nand/src/
main.c
26 struct flash_pages_info page; in main() local
48 ret = flash_get_page_info_by_offs(nand_dev, 0x00, &page); in main()
51 printk("Nand flash driver page info error\n"); in main()
54 printk("The Page size of %lx\n", page.size); in main()
56 w_Page_buffer = (uint8_t *)k_malloc(page.size * NAND_NUM_PAGES); in main()
58 r_Page_buffer = (uint8_t *)k_malloc(page.size * NAND_NUM_PAGES); in main()
62 for (int index = 0; index < page.size * NAND_NUM_PAGES; index++) { in main()
72 memset(r_Page_buffer, 0x55, page.size * NAND_NUM_PAGES); in main()
89 ret = flash_write(nand_dev, OFFSET_PAGE, w_Page_buffer, page.size * NAND_NUM_PAGES); in main()
98 ret = flash_read(nand_dev, OFFSET_PAGE, r_Page_buffer, page.size * NAND_NUM_PAGES); in main()
[all …]
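
A condensed, hedged variant of what this sample does with the flash API. The devicetree node label, buffer size, and offsets are placeholders, and an erase is added before the write since flash pages normally cannot be rewritten in place:

#include <zephyr/device.h>
#include <zephyr/drivers/flash.h>
#include <zephyr/sys/printk.h>
#include <string.h>

/* Placeholder: obtain the flash controller from your board's devicetree. */
#define NAND_NODE DT_NODELABEL(nand0)

void flash_page_roundtrip(void)
{
        const struct device *dev = DEVICE_DT_GET(NAND_NODE);
        struct flash_pages_info info;
        static uint8_t wbuf[2048], rbuf[2048]; /* sized for an assumed 2 KiB page */

        if (!device_is_ready(dev) ||
            flash_get_page_info_by_offs(dev, 0, &info) != 0 ||
            info.size > sizeof(wbuf)) {
                printk("flash not ready or page too large\n");
                return;
        }

        memset(wbuf, 0xA5, info.size);

        /* Erase, write and read back exactly one page at its own offset. */
        if (flash_erase(dev, info.start_offset, info.size) == 0 &&
            flash_write(dev, info.start_offset, wbuf, info.size) == 0 &&
            flash_read(dev, info.start_offset, rbuf, info.size) == 0) {
                printk("page verify %s\n",
                       memcmp(wbuf, rbuf, info.size) == 0 ? "OK" : "FAILED");
        }
}
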
/Zephyr-latest/doc/hardware/arch/
x86.rst
9 This page contains information on certain aspects of developing for
15 During very early boot, page tables are loaded so technically the kernel
36 possible as the page table generation script
38 at the page directory level, in addition to mapping virtual addresses
40 the entries for identity mapping at the page directory level are
45 is done at the page directory level, there is no need to allocate
46 additional space for the page table. However, additional space may
47 still be required for additional page directory tables.
52 required as the entries in the page directory table will be cleared.
58 (Page Directory Pointer) covers 1GB of memory. For example:
[all …]
/Zephyr-latest/include/zephyr/drivers/mm/
system_mm.h
91 * @brief Map one physical page into the virtual address space
93 * This maps one physical page into the virtual address space.
95 * is assumed to be page aligned.
102 * @param virt Page-aligned destination virtual address to map
103 * @param phys Page-aligned source physical address to map
117 * are assumed to be page aligned.
124 * @param virt Page-aligned destination virtual address to map
125 * @param phys Page-aligned source physical address to map
126 * @param size Page-aligned size of the mapped memory region in bytes
141 * are assumed to be page aligned.
[all …]
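
A hedged usage sketch of this driver-level API. It assumes the functions behind these docs are sys_mm_drv_map_page()/sys_mm_drv_unmap_page() with (virt, phys, flags) parameters and SYS_MM_MEM_* flag macros; check the header for the exact names, and note the API is only available on SoCs that provide a system MM driver. Both addresses below are placeholders:

#include <stdint.h>
#include <zephyr/drivers/mm/system_mm.h>

/* Placeholders; both must be page-aligned, as the documentation requires. */
#define DEMO_VIRT_ADDR ((void *)0x90000000)
#define DEMO_PHYS_ADDR ((uintptr_t)0x40000000)

int map_one_device_page(void)
{
        /* Map a single physical page read/write and uncached, then undo it. */
        int ret = sys_mm_drv_map_page(DEMO_VIRT_ADDR, DEMO_PHYS_ADDR,
                                      SYS_MM_MEM_PERM_RW | SYS_MM_MEM_CACHE_NONE);

        if (ret != 0) {
                return ret;
        }

        /* ... access the mapping ... */

        return sys_mm_drv_unmap_page(DEMO_VIRT_ADDR);
}
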
/Zephyr-latest/drivers/flash/
flash_stm32g0x.c
110 int page; in erase_page() local
131 page = offset / STM32G0_FLASH_PAGE_SIZE; in erase_page()
136 /* big page-nr w/o swap or small page-nr w/ swap indicates bank 2 */ in erase_page()
137 if ((page >= STM32G0_PAGES_PER_BANK) != swap_enabled) { in erase_page()
138 page = (page % STM32G0_PAGES_PER_BANK) + STM32G0_BANK2_START_PAGE_NR; in erase_page()
140 LOG_DBG("Erase page %d on bank 2", page); in erase_page()
142 page = page % STM32G0_PAGES_PER_BANK; in erase_page()
144 LOG_DBG("Erase page %d on bank 1", page); in erase_page()
148 /* Set the PER bit and select the page you wish to erase */ in erase_page()
151 tmp |= ((page << FLASH_CR_PNB_Pos) & FLASH_CR_PNB_Msk); in erase_page()
[all …]
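
The bank selection quoted above is plain integer arithmetic on the byte offset. A standalone sketch of that calculation; the constants are illustrative stand-ins for the driver's STM32G0_* values:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative values only; the driver takes them from the SoC definition. */
#define DEMO_FLASH_PAGE_SIZE    2048U
#define DEMO_PAGES_PER_BANK     32U
#define DEMO_BANK2_START_PAGE   256U

/* Translate a byte offset into the page number the erase register expects,
 * accounting for the dual-bank swap option the driver checks.
 */
static uint32_t demo_offset_to_erase_page(uint32_t offset, bool swap_enabled,
                                          bool *on_bank2)
{
        uint32_t page = offset / DEMO_FLASH_PAGE_SIZE;

        /* A high page number without swap, or a low one with swap, lands
         * in bank 2 and must be rebased onto bank 2's page numbering.
         */
        *on_bank2 = (page >= DEMO_PAGES_PER_BANK) != swap_enabled;
        page %= DEMO_PAGES_PER_BANK;

        return *on_bank2 ? page + DEMO_BANK2_START_PAGE : page;
}
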
/Zephyr-latest/dts/bindings/mtd/
gd,gd32-nv-flash-v2.yaml
21 description: Maximum erase time (milliseconds) of a flash page
23 bank0-page-size:
26 description: Flash memory page size for bank0
28 bank1-page-size:
31 description: Flash memory page size for bank1
/Zephyr-latest/subsys/bluetooth/mesh/shell/
large_comp_data.c
25 "%s [0x%04x]: page: %u offset: %u total size: %u", msg, addr, rsp->page, in status_print()
36 uint8_t page; in cmd_large_comp_data_get() local
42 page = shell_strtoul(argv[1], 0, &err); in cmd_large_comp_data_get()
51 bt_mesh_shell_target_ctx.dst, page, offset, &rsp); in cmd_large_comp_data_get()
63 uint8_t page; in cmd_models_metadata_get() local
69 page = shell_strtoul(argv[1], 0, &err); in cmd_models_metadata_get()
78 bt_mesh_shell_target_ctx.dst, page, offset, &rsp); in cmd_models_metadata_get()
86 SHELL_CMD_ARG(large-comp-data-get, NULL, "<page> <offset>", cmd_large_comp_data_get, 3, 0),
87 SHELL_CMD_ARG(models-metadata-get, NULL, "<page> <offset>", cmd_models_metadata_get, 3, 0),
/Zephyr-latest/include/zephyr/bluetooth/mesh/
large_comp_data_cli.h
26 /** Page number. */
27 uint8_t page; member
28 /** Offset within the page. */
30 /** Total size of the page. */
94 * This API is used to read a portion of a Composition Data Page.
106 * @param page Composition Data Page to read.
107 * @param offset Offset within the Composition Data Page.
113 int bt_mesh_large_comp_data_get(uint16_t net_idx, uint16_t addr, uint8_t page,
118 * This API is used to read a portion of a Models Metadata Page.
130 * @param page Models Metadata Page to read.
[all …]
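
A hedged client-side sketch assembled from the signature above and the shell usage earlier in these results. The response-buffer wiring (a net_buf_simple supplied through rsp.data), the 64-byte size, and reading page 0 at offset 0 are illustrative assumptions; the Large Composition Data Client model must be enabled and instantiated on the local node:

#include <zephyr/bluetooth/mesh.h>
#include <zephyr/bluetooth/mesh/large_comp_data_cli.h>
#include <zephyr/net_buf.h>
#include <zephyr/sys/printk.h>

int read_comp_data_page0(uint16_t net_idx, uint16_t addr)
{
        NET_BUF_SIMPLE_DEFINE(comp, 64);        /* arbitrary sample size */
        struct bt_mesh_large_comp_data_rsp rsp = {
                .data = &comp,                  /* assumed field name */
        };
        int err;

        /* Read the start of Composition Data Page 0 from the target node. */
        err = bt_mesh_large_comp_data_get(net_idx, addr, 0, 0, &rsp);
        if (err) {
                return err;
        }

        printk("page %u offset %u total size %u, got %u bytes\n",
               rsp.page, rsp.offset, rsp.total_size, comp.len);

        return 0;
}
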
/Zephyr-latest/samples/subsys/usb/webusb/
README.rst
5 Receive and echo data from a web page using the WebUSB API.
18 based web application (web page) running in the browser on the host.
44 This sample application requires the latest Google Chrome, a web page
46 http server running on localhost to serve the web page.
49 only to secure origins. This means the web page/site that is used to
56 #. Implement a web app (web page) using WebUSB API and run
62 This sample web page demonstrates how to create and use a WebUSB
66 There are two ways to access this sample page:
70 * Host the demo page locally: Start a web server
83 to open the demo page.
[all …]
/Zephyr-latest/doc/_extensions/zephyr/
gh_utils.py
13 This Sphinx extension can be used to obtain various Git and GitHub related metadata for a page.
15 of pages, direct links to open a GitHub issue regarding a page, or date of the most recent commit
16 to a page.
20 * ``gh_link_get_blob_url``: Returns a URL to the source of a page on GitHub.
21 * ``gh_link_get_edit_url``: Returns a URL to edit the given page on GitHub.
22 * ``gh_link_get_open_issue_url``: Returns a URL to open a new issue regarding the given page.
23 * ``git_info``: Returns the date and SHA1 of the last commit made to a page (if this page is
62 """Return the prefix that needs to be added to the page path to get its location in the
65 If pagename refers to a page that is automatically generated by Sphinx or if it matches one of
70 pagename: Page name (path).
[all …]
/Zephyr-latest/tests/kernel/mem_protect/stackprot/src/
mapped_stack.c
24 * @param p1 0 if testing rear guard page, 1 if testing front guard page.
43 /* Middle of front guard page. */ in mapped_thread()
46 /* Middle of rear guard page. */ in mapped_thread()
55 TC_PRINT("Should have faulted on guard page but did not!\n"); in mapped_thread()
62 * @param is_front True if testing front guard page, false if testing rear guard page.
95 * @brief Test faulting on front guard page
109 * @brief Test faulting on rear guard page
123 * @brief Test faulting on front guard page in user mode
137 * @brief Test faulting on rear guard page in user mode
/Zephyr-latest/doc/
404.rst
5 Sorry, Page Not Found
14 Sorry, the page you requested was not found on this site.
20 document.write("<p>Sorry, the page you requested: " +
24 document.write("<p>Sorry, the page you requested was not found on this site.</p>")
31 It's also possible we've removed or renamed the page you're looking for.
33 Please try using the navigation links on the left of this page to navigate
/Zephyr-latest/kernel/
Kconfig.vm
28 this for non-pinned page frames).
42 page tables are in use, they all have the same virtual-to-physical
54 in page tables, the equation:
70 how much total memory can be used for page tables.
111 hex "Size of smallest granularity MMU page"
115 support multiple page sizes, put the smallest one here.
136 bool "Allow interrupts during page-ins/outs"
140 latency, but any code running in interrupt context that page faults
146 If this option is disabled, the page fault servicing logic
148 ISRs may also page fault.
[all …]
