/Linux-v6.1/Documentation/admin-guide/mm/ |
D | memory-hotplug.rst |
      4  Memory Hot(Un)Plug
      7  This document describes generic Linux support for memory hot(un)plug with
     15  Memory hot(un)plug allows for increasing and decreasing the size of physical
     16  memory available to a machine at runtime. In the simplest case, it consists of
     20  Memory hot(un)plug is used for various purposes:
     22  - The physical memory available to a machine can be adjusted at runtime, up- or
     23  downgrading the memory capacity. This dynamic memory resizing, sometimes
     28  example is replacing failing memory modules.
     30  - Reducing energy consumption either by physically unplugging memory modules or
     31  by logically unplugging (parts of) memory modules from Linux.
     [all …]
|
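The offline/online flow that memory-hotplug.rst describes is driven entirely through sysfs writes. Below is a minimal sketch run against a mock tree under /tmp so it is safe to execute anywhere; on real hardware you would point SYSFS at /sys, run as root, and the kernel (not the shell) would accept or reject the transition. The block name memory32 is hypothetical.

```shell
# Mock sysfs layout (assumption: real path is /sys, written as root).
SYSFS=${TMPDIR:-/tmp}/demo-hotplug-sysfs
BLOCK=$SYSFS/devices/system/memory/memory32   # hypothetical block name
mkdir -p "$BLOCK"
echo online > "$BLOCK/state"

# Request offlining; on real hardware the kernel first migrates all
# pages away and fails the write if any page is unmovable.
echo offline > "$BLOCK/state"
cat "$BLOCK/state"     # offline

# Bring the block back online.
echo online > "$BLOCK/state"
cat "$BLOCK/state"     # online
```

The same two writes are what hotplug tooling issues per memory block; everything else (page migration, zone accounting) happens inside the kernel.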
D | concepts.rst |
      7  The memory management in Linux is a complex system that evolved over the
      9  systems from MMU-less microcontrollers to supercomputers. The memory
     18  Virtual Memory Primer
     21  The physical memory in a computer system is a limited resource and
     22  even for systems that support memory hotplug there is a hard limit on
     23  the amount of memory that can be installed. The physical memory is not
     29  All this makes dealing directly with physical memory quite complex and
     30  to avoid this complexity a concept of virtual memory was developed.
     32  The virtual memory abstracts the details of physical memory from the
     34  physical memory (demand paging) and provides a mechanism for the
     [all …]
|
D | numaperf.rst |
      7  Some platforms may have multiple types of memory attached to a compute
      8  node. These disparate memory ranges may share some characteristics, such
     12  A system supports such heterogeneous memory by grouping each memory type
     14  characteristics. Some memory may share the same node as a CPU, and others
     15  are provided as memory only nodes. While memory only nodes do not provide
     18  nodes with local memory and a memory only node for each of compute node::
     29  A "memory initiator" is a node containing one or more devices such as
     30  CPUs or separate memory I/O devices that can initiate memory requests.
     31  A "memory target" is a node containing one or more physical address
     32  ranges accessible from one or more memory initiators.
     [all …]
|
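The initiator/target split above is exposed in sysfs: each target node carries accessN/initiators/ attributes with the best-case performance figures the platform reports. A hedged sketch against a mock tree — the node number, access class 0, and the values are invented; on a real system these files live under /sys/devices/system/node/.

```shell
# Mock of /sys/devices/system/node/node1/access0/initiators (assumption:
# node1 is a memory-only target, access class 0, numbers invented).
ATTRS=${TMPDIR:-/tmp}/demo-node/node1/access0/initiators
mkdir -p "$ATTRS"
echo 107374 > "$ATTRS/read_bandwidth"   # MB/s
echo 120    > "$ATTRS/read_latency"     # nanoseconds

# Print each attribute the way an admin would inspect a real system.
for f in "$ATTRS"/read_*; do
    echo "${f##*/}: $(cat "$f")"
done
```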
/Linux-v6.1/tools/testing/selftests/memory-hotplug/ |
D | mem-on-off-test.sh |
     25  if ! ls $SYSFS/devices/system/memory/memory* > /dev/null 2>&1; then
     26  echo $msg memory hotplug is not supported >&2
     30  if ! grep -q 1 $SYSFS/devices/system/memory/memory*/removable; then
     31  echo $msg no hot-pluggable memory >&2
     37  # list all hot-pluggable memory
     43  for memory in $SYSFS/devices/system/memory/memory*; do
     44  if grep -q 1 $memory/removable &&
     45  grep -q $state $memory/state; then
     46  echo ${memory##/*/memory}
     63  grep -q online $SYSFS/devices/system/memory/memory$1/state
     [all …]
|
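The loop in mem-on-off-test.sh above lists blocks that are both removable and in the wanted state. The same logic, exercised against a mock sysfs tree so the parameter expansion `${memory##/*/memory}` (strip everything up to and including the last `/memory`, leaving the block number) can be seen working; the three block numbers and their flags are invented.

```shell
# Build a mock /sys tree with three memory blocks (invented numbers).
SYSFS=${TMPDIR:-/tmp}/demo-selftest-sysfs
for i in 0 1 2; do
    mkdir -p "$SYSFS/devices/system/memory/memory$i"
    echo online > "$SYSFS/devices/system/memory/memory$i/state"
done
echo 1 > "$SYSFS/devices/system/memory/memory0/removable"
echo 0 > "$SYSFS/devices/system/memory/memory1/removable"   # not hot-pluggable
echo 1 > "$SYSFS/devices/system/memory/memory2/removable"

# Same filter as the selftest: removable AND in the wanted state.
state=online
for memory in "$SYSFS"/devices/system/memory/memory*; do
    if grep -q 1 "$memory/removable" && grep -q $state "$memory/state"; then
        echo "${memory##/*/memory}"     # prints the bare block number
    fi
done
```

With the flags above, only blocks 0 and 2 pass the filter.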
/Linux-v6.1/Documentation/admin-guide/cgroup-v1/ |
D | memory.rst |
      2  Memory Resource Controller
     12  The Memory Resource Controller has generically been referred to as the
     13  memory controller in this document. Do not confuse memory controller
     14  used here with the memory controller that is used in hardware.
     17  When we mention a cgroup (cgroupfs's directory) with memory controller,
     18  we call it "memory cgroup". When you see git-log and source code, you'll
     22  Benefits and Purpose of the memory controller
     25  The memory controller isolates the memory behaviour of a group of tasks
     27  uses of the memory controller. The memory controller can be used to
     30  Memory-hungry applications can be isolated and limited to a smaller
     [all …]
|
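In the v1 interface described above, a memory cgroup is just a directory under the memory controller's hierarchy: writing to its control files sets limits, and moving a pid into `tasks` places a process under them. A hedged mock so the file shapes can be shown without root — on a real system the hierarchy is mounted at /sys/fs/cgroup/memory and the kernel, not the filesystem, enforces the limit.

```shell
# Mock of a memory cgroup directory (real path: /sys/fs/cgroup/memory/<name>).
CG=${TMPDIR:-/tmp}/demo-memcg/mygroup
mkdir -p "$CG"

# Cap the group at 100 MiB and place the current shell in it.
echo $((100 * 1024 * 1024)) > "$CG/memory.limit_in_bytes"
echo $$ > "$CG/tasks"

cat "$CG/memory.limit_in_bytes"    # 104857600
```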
/Linux-v6.1/Documentation/mm/ |
D | memory-model.rst |
      6  Physical Memory Model
      9  Physical memory in a system may be addressed in different ways. The
     10  simplest case is when the physical memory starts at address 0 and
     15  different memory banks are attached to different CPUs.
     17  Linux abstracts this diversity using one of the two memory models:
     19  memory models it supports, what the default memory model is and
     22  All the memory models track the status of physical page frames using
     25  Regardless of the selected memory model, there exists one-to-one
     29  Each memory model defines :c:func:`pfn_to_page` and :c:func:`page_to_pfn`
     36  The simplest memory model is FLATMEM. This model is suitable for
     [all …]
|
D | hmm.rst |
      4  Heterogeneous Memory Management (HMM)
      7  Provide infrastructure and helpers to integrate non-conventional memory (device
      8  memory like GPU on board memory) into regular kernel path, with the cornerstone
      9  of this being specialized struct page for such memory (see sections 5 to 7 of
     12  HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
     20  related to using device specific memory allocators. In the second section, I
     24  fifth section deals with how device memory is represented inside the kernel.
     30  Problems of using a device specific memory allocator
     33  Devices with a large amount of on board memory (several gigabytes) like GPUs
     34  have historically managed their memory through dedicated driver specific APIs.
     [all …]
|
D | numa.rst |
     14  or more CPUs, local memory, and/or IO buses. For brevity and to
     28  Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
     32  Memory access time and effective memory bandwidth varies depending on how far
     33  away the cell containing the CPU or IO bus making the memory access is from the
     34  cell containing the target memory. For example, access to memory by CPUs
     36  bandwidths than accesses to memory on other, remote cells. NUMA platforms
     41  memory bandwidth. However, to achieve scalable memory bandwidth, system and
     42  application software must arrange for a large majority of the memory references
     43  [cache misses] to be to "local" memory--memory on the same cell, if any--or
     44  to the closest cell with memory.
     [all …]
|
/Linux-v6.1/Documentation/ABI/testing/ |
D | sysfs-devices-memory |
      1  What: /sys/devices/system/memory
      5  The /sys/devices/system/memory contains a snapshot of the
      6  internal state of the kernel memory blocks. Files could be
      9  Users: hotplug memory add/remove tools
     12  What: /sys/devices/system/memory/memoryX/removable
     16  The file /sys/devices/system/memory/memoryX/removable is a
     17  legacy interface used to indicate whether a memory block is
     19  "1" if and only if the kernel supports memory offlining.
     20  Users: hotplug memory remove tools
     24  What: /sys/devices/system/memory/memoryX/phys_device
     [all …]
|
/Linux-v6.1/Documentation/devicetree/bindings/memory-controllers/fsl/ |
D | fsl,ddr.yaml |
      4  $id: http://devicetree.org/schemas/memory-controllers/fsl/fsl,ddr.yaml#
      7  title: Freescale DDR memory controller
     15  pattern: "^memory-controller@[0-9a-f]+$"
     21  - fsl,qoriq-memory-controller-v4.4
     22  - fsl,qoriq-memory-controller-v4.5
     23  - fsl,qoriq-memory-controller-v4.7
     24  - fsl,qoriq-memory-controller-v5.0
     25  - const: fsl,qoriq-memory-controller
     27  - fsl,bsc9132-memory-controller
     28  - fsl,mpc8536-memory-controller
     [all …]
|
/Linux-v6.1/Documentation/core-api/ |
D | memory-hotplug.rst |
      4  Memory hotplug
      7  Memory hotplug event notifier
     12  There are six types of notification defined in ``include/linux/memory.h``:
     15  Generated before new memory becomes available in order to be able to
     16  prepare subsystems to handle memory. The page allocator is still unable
     17  to allocate from the new memory.
     23  Generated when memory has been successfully brought online. The callback may
     24  allocate pages from the new memory.
     27  Generated to begin the process of offlining memory. Allocations are no
     28  longer possible from the memory but some of the memory to be offlined
     [all …]
|
D | memory-allocation.rst |
      4  Memory Allocation Guide
      7  Linux provides a variety of APIs for memory allocation. You can
     14  Most of the memory allocation APIs use GFP flags to express how that
     15  memory should be allocated. The GFP acronym stands for "get free
     16  pages", the underlying memory allocation function.
     19  makes the question "How should I allocate memory?" not that easy to
     32  The GFP flags control the allocator's behavior. They tell what memory
     34  memory, whether the memory can be accessed by the userspace etc. The
     39  * Most of the time ``GFP_KERNEL`` is what you need. Memory for the
     40  kernel data structures, DMAable memory, inode cache, all these and
     [all …]
|
/Linux-v6.1/tools/testing/selftests/cgroup/ |
D | test_memcontrol.c |
     29  * the memory controller.
     37  /* Create two nested cgroups with the memory controller enabled */  in test_memcg_subtree_control()
     46  if (cg_write(parent, "cgroup.subtree_control", "+memory"))  in test_memcg_subtree_control()
     52  if (cg_read_strstr(child, "cgroup.controllers", "memory"))  in test_memcg_subtree_control()
     55  /* Create two nested cgroups without enabling memory controller */  in test_memcg_subtree_control()
     70  if (!cg_read_strstr(child2, "cgroup.controllers", "memory"))  in test_memcg_subtree_control()
    104  current = cg_read_long(cgroup, "memory.current");  in alloc_anon_50M_check()
    111  anon = cg_read_key_long(cgroup, "memory.stat", "anon ");  in alloc_anon_50M_check()
    138  current = cg_read_long(cgroup, "memory.current");  in alloc_pagecache_50M_check()
    142  file = cg_read_key_long(cgroup, "memory.stat", "file ");  in alloc_pagecache_50M_check()
     [all …]
|
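test_memcontrol.c reads usage back through cg_read_long() (the whole of "memory.current") and cg_read_key_long() (one keyed line of "memory.stat"). The same two reads expressed in shell, against mocked cgroup-v2 files — the 50 MiB figures mirror the test's expectations but are written by hand here rather than produced by real allocations.

```shell
# Mock cgroup-v2 accounting files (real ones live in the cgroup directory).
CG=${TMPDIR:-/tmp}/demo-cgv2
mkdir -p "$CG"
echo 52428800 > "$CG/memory.current"
printf 'anon 52428800\nfile 0\n' > "$CG/memory.stat"

# Equivalent of cg_read_long() and cg_read_key_long(..., "anon ").
current=$(cat "$CG/memory.current")
anon=$(awk '$1 == "anon" { print $2 }' "$CG/memory.stat")
echo "current=$current anon=$anon"    # current=52428800 anon=52428800
```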
/Linux-v6.1/drivers/base/ |
D | memory.c |
      3  * Memory subsystem support
      9  * a SPARSEMEM-memory-model system's physical memory in /sysfs.
     19  #include <linux/memory.h>
     29  #define MEMORY_CLASS_NAME "memory"
     79  * Memory blocks are cached in a local radix tree to avoid
     86  * Memory groups, indexed by memory group id (mgid).
    119  * Show the first physical section index (number) of this memory block.
    191  * they describe (they remain until the memory is unplugged), doing  in memory_block_online()
    192  * their initialization and accounting at memory onlining/offlining  in memory_block_online()
    194  * belong to the same zone as the memory they backed.  in memory_block_online()
     [all …]
|
/Linux-v6.1/include/linux/ |
D | memory.h |
      3  * include/linux/memory.h - generic memory definition
      9  * Basic handling of the devices is done in drivers/base/memory.c
     12  * Memory blocks are exported via sysfs in the class/memory/devices/
     27  * struct memory_group - a logical group of memory blocks
     28  * @nid: The node id for all memory blocks inside the memory group.
     29  * @blocks: List of all memory blocks belonging to this memory group.
     30  * @present_kernel_pages: Present (online) memory outside ZONE_MOVABLE of this
     31  * memory group.
     32  * @present_movable_pages: Present (online) memory in ZONE_MOVABLE of this
     33  * memory group.
     [all …]
|
/Linux-v6.1/drivers/gpu/drm/nouveau/nvkm/core/ |
D | memory.c |
     24  #include <core/memory.h>
     30  nvkm_memory_tags_put(struct nvkm_memory *memory, struct nvkm_device *device,  in nvkm_memory_tags_put() argument
     39  kfree(memory->tags);  in nvkm_memory_tags_put()
     40  memory->tags = NULL;  in nvkm_memory_tags_put()
     48  nvkm_memory_tags_get(struct nvkm_memory *memory, struct nvkm_device *device,  in nvkm_memory_tags_get() argument
     56  if ((tags = memory->tags)) {  in nvkm_memory_tags_get()
     57  /* If comptags exist for the memory, but a different amount  in nvkm_memory_tags_get()
     84  * As memory can be mapped in multiple places, we still  in nvkm_memory_tags_get()
     94  *ptags = memory->tags = tags;  in nvkm_memory_tags_get()
    101  struct nvkm_memory *memory)  in nvkm_memory_ctor() argument
     [all …]
|
/Linux-v6.1/tools/testing/selftests/arm64/mte/ |
D | check_mmap_options.c |
     76  /* Only mte enabled memory will allow tag insertion */  in check_anonymous_memory_mapping()
     79  ksft_print_msg("FAIL: Insert tags on anonymous mmap memory\n");  in check_anonymous_memory_mapping()
    113  /* Only mte enabled memory will allow tag insertion */  in check_file_memory_mapping()
    116  ksft_print_msg("FAIL: Insert tags on file based memory\n");  in check_file_memory_mapping()
    214  "Check anonymous memory with private mapping, sync error mode, mmap memory and tag check off\n");  in main()
    216  …"Check file memory with private mapping, sync error mode, mmap/mprotect memory and tag check off\n…  in main()
    220  "Check anonymous memory with private mapping, no error mode, mmap memory and tag check off\n");  in main()
    222  "Check file memory with private mapping, no error mode, mmap/mprotect memory and tag check off\n");  in main()
    225  "Check anonymous memory with private mapping, sync error mode, mmap memory and tag check on\n");  in main()
    227  …"Check anonymous memory with private mapping, sync error mode, mmap/mprotect memory and tag check …  in main()
     [all …]
|
/Linux-v6.1/Documentation/powerpc/ |
D | firmware-assisted-dump.rst |
     14  - Fadump uses the same firmware interfaces and memory reservation model
     16  - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
     21  - Unlike phyp dump, FADump allows the user to release all the memory reserved
     35  - Once the dump is copied out, the memory that held the dump
     44  - The first kernel registers the sections of memory with the
     46  These registered sections of memory are reserved by the first
     50  low memory regions (boot memory) from source to destination area.
     54  The term 'boot memory' means the size of the low memory chunk
     56  booted with restricted memory. By default, the boot memory
     58  Alternatively, the user can also specify boot memory size
     [all …]
|
/Linux-v6.1/drivers/cxl/ |
D | Kconfig |
     11  memory targets, the CXL.io protocol is equivalent to PCI Express.
     21  The CXL specification defines a "CXL memory device" sub-class in the
     22  PCI "memory controller" base class of devices. Devices identified by
     24  memory to be mapped into the system address map (Host-managed Device
     25  Memory (HDM)).
     27  Say 'y/m' to enable a driver that will attach to CXL memory expander
     28  devices enumerated by the memory device class code for configuration
     35  bool "RAW Command Interface for Memory Devices"
     48  potential impact to memory currently in use by the kernel.
     58  Enable support for host managed device memory (HDM) resources
     [all …]
|
/Linux-v6.1/Documentation/admin-guide/mm/damon/ |
D | reclaim.rst |
      8  be used for proactive and lightweight reclamation under light memory pressure.
     10  to be selectively used for different levels of memory pressure and requirements.
     15  On general memory over-committed systems, proactively reclaiming cold pages
     16  helps save memory and reduce latency spikes incurred by the direct
     20  Free Pages Reporting [3]_ based memory over-commit virtualization systems are
     22  memory to the host, and the host reallocates the reported memory to other guests.
     23  As a result, the memory of the systems is fully utilized. However, the
     24  guests might not be so memory-frugal, mainly because some kernel subsystems and
     25  user-space applications are designed to use as much memory as available. Then,
     26  guests could report only a small amount of memory as free to the host, resulting in
     [all …]
|
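DAMON_RECLAIM is tuned through module parameters under /sys/module/damon_reclaim/parameters/. A mock sketch of the knobs described in reclaim.rst — the parameter names (enabled, min_age, quota_ms) follow the documented interface, min_age is taken to be in microseconds, and the values and the /tmp path are invented for illustration.

```shell
# Mock of /sys/module/damon_reclaim/parameters (assumption: invented values;
# on a real system these writes require root and take effect immediately).
P=${TMPDIR:-/tmp}/demo-damon/parameters
mkdir -p "$P"

echo 120000000 > "$P/min_age"    # regions idle >= 120 s become candidates (usecs)
echo 10        > "$P/quota_ms"   # cap time spent reclaiming per charge window
echo Y         > "$P/enabled"    # on real hardware this starts the kdamond thread

cat "$P/enabled"    # Y
```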
/Linux-v6.1/drivers/memory/tegra/ |
D | Kconfig |
      3  bool "NVIDIA Tegra Memory Controller support"
      8  This driver supports the Memory Controller (MC) hardware found on
     14  tristate "NVIDIA Tegra20 External Memory Controller driver"
     21  This driver is for the External Memory Controller (EMC) found on
     23  This driver is required to change memory timings / clock rate for
     24  external memory.
     27  tristate "NVIDIA Tegra30 External Memory Controller driver"
     33  This driver is for the External Memory Controller (EMC) found on
     35  This driver is required to change memory timings / clock rate for
     36  external memory.
     [all …]
|
/Linux-v6.1/Documentation/dev-tools/ |
D | kasan.rst |
      7  Kernel Address Sanitizer (KASAN) is a dynamic memory safety error detector
     18  architectures, but it has significant performance and memory overheads.
     22  This mode is only supported for arm64, but its moderate memory overhead allows
     23  using it for testing on memory-restricted devices with real workloads.
     26  is the mode intended to be used as an in-field memory bug detector or as a
     28  (Memory Tagging Extension), but it has low memory and performance overheads and
     31  For details about the memory and performance impact of each KASAN mode, see the
     51  before every memory access and thus require a compiler version that provides
     53  these checks but still requires a compiler version that supports the memory
     64  Memory types
     [all …]
|
D | kmemleak.rst |
      1  Kernel Memory Leak Detector
      4  Kmemleak provides a way of detecting possible kernel memory leaks in a
      9  Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in
     16  thread scans the memory every 10 minutes (by default) and prints the
     22  To display the details of all the possible scanned memory leaks::
     26  To trigger an intermediate memory scan::
     30  To clear the list of all current possible memory leaks::
     41  Memory scanning parameters can be modified at run-time by writing to the
     51  start the automatic memory scanning thread (default)
     53  stop the automatic memory scanning thread
     [all …]
|
/Linux-v6.1/Documentation/userspace-api/media/v4l/ |
D | dev-mem2mem.rst |
      6  Video Memory-To-Memory Interface
      9  A V4L2 memory-to-memory device can compress, decompress, transform, or
     10  otherwise convert video data from one format into another format, in memory.
     11  Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or
     12  ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory
     16  A memory-to-memory video node acts just like a normal video node, but it
     17  supports both output (sending frames from memory to the hardware)
     19  memory) stream I/O. An application will have to setup the stream I/O for
     23  Memory-to-memory devices function as a shared resource: you can
     32  One of the most common memory-to-memory devices is the codec. Codecs
     [all …]
|
/Linux-v6.1/mm/ |
D | Kconfig |
      3  menu "Memory Management options"
     16  bool "Support for paging of anonymous memory (swap)"
     22  used to provide more virtual memory than the actual RAM present
     34  compress them into a dynamically allocated RAM-based memory pool.
    180  zsmalloc is a slab-based memory allocator designed to store
    218  of queues of objects. SLUB can use memory efficiently
    238  For reduced kernel memory fragmentation, slab caches can be
    298  utilization of a direct-mapped memory-side-cache. See section
    299  5.2.27 Heterogeneous Memory Attribute Table (HMAT) in the ACPI
    301  the presence of a memory-side-cache. There are also incidental
     [all …]
|