Searched +full:in +full:- +full:memory (Results 1 – 25 of 1159) sorted by relevance
/Linux-v5.15/Documentation/admin-guide/mm/ |
D | memory-hotplug.rst | Memory Hot(Un)Plug … This document describes generic Linux support for memory hot(un)plug with … Memory hot(un)plug allows for increasing and decreasing the size of physical memory available to a machine at runtime. In the simplest case, it consists of … Memory hot(un)plug is used for various purposes: … - The physical memory available to a machine can be adjusted at runtime, up- or downgrading the memory capacity. This dynamic memory resizing, sometimes … - Replacing hardware, such as DIMMs or whole NUMA nodes, without downtime. One example is replacing failing memory modules. … - Reducing energy consumption either by physically unplugging memory modules or … [all …]
|
D | concepts.rst | The memory management in Linux is a complex system that evolved over the … systems from MMU-less microcontrollers to supercomputers. The memory … Virtual Memory Primer … The physical memory in a computer system is a limited resource and even for systems that support memory hotplug there is a hard limit on the amount of memory that can be installed. The physical memory is not … All this makes dealing directly with physical memory quite complex and to avoid this complexity a concept of virtual memory was developed. … The virtual memory abstracts the details of physical memory from the application software, allows to keep only needed information in the … [all …]
|
D | numaperf.rst | Some platforms may have multiple types of memory attached to a compute node. These disparate memory ranges may share some characteristics, such … A system supports such heterogeneous memory by grouping each memory type … characteristics. Some memory may share the same node as a CPU, and others are provided as memory only nodes. While memory only nodes do not provide … nodes with local memory and a memory only node for each of compute node: … (ASCII diagram of Compute Node 0 and Compute Node 1 with their attached memory nodes) … [all …]
|
D | numa_memory_policy.rst | NUMA Memory Policy … What is NUMA Memory Policy? … In the Linux kernel, "memory policy" determines from which node the kernel will allocate memory in a NUMA system or in an emulated NUMA system. Linux has supported platforms with Non-Uniform Memory Access architectures since 2.4.?. The current memory policy support was added to Linux 2.6 around May 2004. This document attempts to describe the concepts and APIs of the 2.6 memory policy … Memory policies should not be confused with cpusets (``Documentation/admin-guide/cgroup-v1/cpusets.rst``) … memory may be allocated by a set of processes. Memory policies are a … [all …]
|
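For context on the policy interface described in the entry above: a task-wide policy is set with the set_mempolicy(2) syscall, declared in libnuma's <numaif.h>. The following is a minimal userspace sketch, not text from that file; it assumes libnuma's headers are installed, the program is linked with -lnuma, and NUMA node 0 exists::

    /* Sketch only: bind this task's future allocations to NUMA node 0. */
    #include <numaif.h>   /* set_mempolicy(), MPOL_BIND */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        unsigned long nodemask = 1UL;   /* bit 0 => node 0 */

        /* MPOL_BIND: allocate only from the nodes set in nodemask. */
        if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8) != 0) {
            perror("set_mempolicy");
            return 1;
        }

        /* Pages backing this allocation are now restricted to node 0
         * (allocated when first touched). */
        char *buf = malloc(1 << 20);
        if (buf)
            memset(buf, 0, 1 << 20);
        free(buf);
        return 0;
    }

The policy applies only to allocations made after the call; per-range policies would use mbind(2) instead.
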
/Linux-v5.15/tools/testing/selftests/memory-hotplug/ |
D | mem-on-off-test.sh | # SPDX-License-Identifier: GPL-2.0 … # Kselftest framework requirement - SKIP code is 4. … SYSFS=`mount -t sysfs | head -1 | awk '{ print $3 }'` … if [ ! -d "$SYSFS" ]; then … if ! ls $SYSFS/devices/system/memory/memory* > /dev/null 2>&1; then echo $msg memory hotplug is not supported >&2 … if ! grep -q 1 $SYSFS/devices/system/memory/memory*/removable; then echo $msg no hot-pluggable memory >&2 … # list all hot-pluggable memory … local state=${1:-.\*} … [all …]
|
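The selftest above drives the sysfs memory-block interface with echo and grep. As a rough C equivalent of one step of that script (illustrative only: the block name "memory32" is a placeholder, and offlining can legitimately fail with EBUSY if the block holds unmovable pages)::

    /* Sketch: offline one memory block through sysfs. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/devices/system/memory/memory32/state";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return 1;
        }
        /* Writing "offline" asks the kernel to migrate the block's pages
         * away and remove them from the page allocator; "online" reverses it. */
        if (fprintf(f, "offline\n") < 0)
            perror("write");
        fclose(f);
        return 0;
    }
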
/Linux-v5.15/Documentation/admin-guide/cgroup-v1/ |
D | memory.rst | Memory Resource Controller … The Memory Resource Controller has generically been referred to as the memory controller in this document. Do not confuse memory controller used here with the memory controller that is used in hardware. … (For editors) In this document: When we mention a cgroup (cgroupfs's directory) with memory controller, we call it "memory cgroup". When you see git-log and source code, you'll … In this document, we avoid using it. … Benefits and Purpose of the memory controller … The memory controller isolates the memory behaviour of a group of tasks … [all …]
|
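As a concrete illustration of the controller described above (not taken from the file itself): cgroup v1 exposes per-group control files such as memory.limit_in_bytes and memory.usage_in_bytes. A sketch assuming cgroup v1 is mounted at /sys/fs/cgroup/memory and a group named "demo" has already been created there::

    #include <stdio.h>

    int main(void)
    {
        FILE *f;
        unsigned long long usage = 0;

        /* Cap the group at 256 MiB of memory. */
        f = fopen("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "w");
        if (!f) { perror("limit_in_bytes"); return 1; }
        fprintf(f, "%llu\n", 256ULL << 20);
        fclose(f);

        /* Read back the group's current usage. */
        f = fopen("/sys/fs/cgroup/memory/demo/memory.usage_in_bytes", "r");
        if (!f) { perror("usage_in_bytes"); return 1; }
        if (fscanf(f, "%llu", &usage) == 1)
            printf("usage: %llu bytes\n", usage);
        fclose(f);
        return 0;
    }
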
D | cpusets.rst | - Portions Copyright (c) 2004-2006 Silicon Graphics, Inc. - Modified by Paul Jackson <pj@sgi.com> - Modified by Christoph Lameter <cl@linux.com> - Modified by Paul Menage <menage@google.com> - Modified by Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> … 1.6 What is memory spread ? … Cpusets provide a mechanism for assigning a set of CPUs and Memory Nodes to a set of tasks. In this document "Memory Node" refers to an on-line node that contains memory. … [all …]
|
/Linux-v5.15/Documentation/vm/ |
D | hmm.rst | Heterogeneous Memory Management (HMM) … Provide infrastructure and helpers to integrate non-conventional memory (device memory like GPU on board memory) into regular kernel path, with the cornerstone of this being specialized struct page for such memory (see sections 5 to 7 of … HMM also provides optional helpers for SVM (Share Virtual Memory), i.e., … This document is divided as follows: in the first section I expose the problems related to using device specific memory allocators. In the second section, I … CPU page-table mirroring works and the purpose of HMM in this context. The fifth section deals with how device memory is represented inside the kernel. … Problems of using a device specific memory allocator … [all …]
|
D | memory-model.rst | .. SPDX-License-Identifier: GPL-2.0 … Physical Memory Model … Physical memory in a system may be addressed in different ways. The simplest case is when the physical memory starts at address 0 and … different memory banks are attached to different CPUs. … Linux abstracts this diversity using one of the two memory models: … memory models it supports, what the default memory model is and … All the memory models track the status of physical page frames using struct page arranged in one or more arrays. … Regardless of the selected memory model, there exists one-to-one … [all …]
|
D | numa.rst | or more CPUs, local memory, and/or IO buses. For brevity and to … 'cells' in this document. … Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset of the system--although some components necessary for a stand-alone SMP system … connected together with some sort of system interconnect--e.g., a crossbar or point-to-point link are common types of NUMA system interconnects. Both of … Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible … is handled in hardware by the processor caches and/or the system interconnect. … Memory access time and effective memory bandwidth varies depending on how far away the cell containing the CPU or IO bus making the memory access is from the … [all …]
|
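To make the locality point above concrete, here is a small userspace sketch using libnuma (an assumption: libnuma is installed, the program is linked with -lnuma, and node 0 exists); it allocates a buffer whose pages are taken from a chosen node::

    /* Sketch: allocate memory backed by a specific NUMA node via libnuma.
     * numa_alloc_onnode() is a preference, not a hard guarantee, if the
     * node is short on memory. */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t size = 4UL << 20;    /* 4 MiB */
        void *buf;

        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this kernel\n");
            return 1;
        }

        buf = numa_alloc_onnode(size, 0);   /* prefer node 0 */
        if (!buf)
            return 1;

        memset(buf, 0, size);   /* touch so pages are actually allocated */
        numa_free(buf, size);
        return 0;
    }
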
D | frontswap.rst | Frontswap provides a "transcendent memory" interface for swap pages. In some environments, dramatic performance savings may be obtained because swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk. … (Note, frontswap -- and :ref:`cleancache` (merged at 3.0) -- are the "frontends" and the only necessary changes to the core kernel for transcendent memory; all other supporting code -- the "backends" -- is implemented as drivers. See the LWN.net article `Transcendent memory in a nutshell`_ … .. _Transcendent memory in a nutshell: https://lwn.net/Articles/454795/ … a synchronous concurrency-safe page-oriented "pseudo-RAM device" conforming to the requirements of transcendent memory (such as Xen's "tmem", or … [all …]
|
/Linux-v5.15/Documentation/dev-tools/ |
D | kasan.rst | KernelAddressSANitizer (KASAN) is a dynamic memory safety error detector designed to find out-of-bound and use-after-free bugs. KASAN has three modes: … 2. software tag-based KASAN (similar to userspace HWASan), 3. hardware tag-based KASAN (based on hardware memory tagging). … Generic KASAN is mainly used for debugging due to a large memory overhead. Software tag-based KASAN can be used for dogfood testing as it has a lower memory overhead that allows using it with real workloads. Hardware tag-based KASAN comes with low memory and performance overheads and, therefore, can be used in production. Either as an in-field memory bug detector or as a security … [all …]
|
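Not from the file above, but as an illustration of the bug class generic KASAN reports: a deliberately buggy out-of-tree module sketch (the kernel's own test suite for this lives elsewhere; this is only an example written for this listing)::

    /* Hypothetical module with a deliberate use-after-free.  With
     * CONFIG_KASAN enabled, loading it should produce a KASAN report. */
    #include <linux/module.h>
    #include <linux/slab.h>

    static int __init uaf_demo_init(void)
    {
        char *p = kmalloc(64, GFP_KERNEL);

        if (!p)
            return -ENOMEM;
        kfree(p);
        p[0] = 'x';    /* use-after-free: write to a freed slab object */
        return 0;
    }

    static void __exit uaf_demo_exit(void)
    {
    }

    module_init(uaf_demo_init);
    module_exit(uaf_demo_exit);
    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Deliberate use-after-free to demonstrate a KASAN report");
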
D | kmemleak.rst | Kernel Memory Leak Detector … Kmemleak provides a way of detecting possible kernel memory leaks in a … Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in user-space applications. … CONFIG_DEBUG_KMEMLEAK in "Kernel hacking" has to be enabled. A kernel thread scans the memory every 10 minutes (by default) and prints the … # mount -t debugfs nodev /sys/kernel/debug/ … To display the details of all the possible scanned memory leaks:: … To trigger an intermediate memory scan:: … [all …]
|
/Linux-v5.15/include/uapi/linux/ |
D | nitro_enclaves.h | /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ … NE_CREATE_VM - The command is used to create a slot that is associated with … setting any resources, such as memory and vCPUs, for an enclave. Memory and vCPUs are set for the slot mapped to an enclave. … Its format is the detailed in the cpu-lists section: https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html … in the CPU pool. … * Enclave file descriptor - Enclave file descriptor used with ioctl calls to set vCPUs and memory … * -1 - There was a failure in the ioctl logic. … [all …]
|
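To show how the ioctl described above is typically driven from userspace, here is a hedged sketch written for this listing. It assumes the driver exposes the device node /dev/nitro_enclaves and that NE_CREATE_VM returns the enclave file descriptor (or -1 on failure) while writing the slot identifier into a __u64 argument, as the header comments excerpted above indicate::

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/nitro_enclaves.h>
    #include <linux/types.h>

    int main(void)
    {
        __u64 slot_uid = 0;
        int ne_fd, enclave_fd;

        ne_fd = open("/dev/nitro_enclaves", O_RDWR | O_CLOEXEC);
        if (ne_fd < 0) {
            perror("open /dev/nitro_enclaves");
            return 1;
        }

        enclave_fd = ioctl(ne_fd, NE_CREATE_VM, &slot_uid);
        if (enclave_fd < 0) {
            perror("NE_CREATE_VM");
            close(ne_fd);
            return 1;
        }

        /* enclave_fd is used with further ioctls to add vCPUs and memory
         * to the slot identified by slot_uid, then start the enclave. */
        printf("slot 0x%llx, enclave fd %d\n",
               (unsigned long long)slot_uid, enclave_fd);
        close(enclave_fd);
        close(ne_fd);
        return 0;
    }
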
/Linux-v5.15/mm/ |
D | Kconfig | # SPDX-License-Identifier: GPL-2.0-only … menu "Memory Management options" … prompt "Memory model" … Linux manages its memory internally. Most users will … bool "Flat Memory" … This option is best suited for non-NUMA systems with … system in terms of performance and resource consumption … For systems that have holes in their physical address spaces and for features like NUMA and memory hotplug, choose "Sparse Memory". … [all …]
|
/Linux-v5.15/Documentation/core-api/ |
D | memory-hotplug.rst | Memory hotplug … Memory hotplug event notifier … There are six types of notification defined in ``include/linux/memory.h``: … Generated before new memory becomes available in order to be able to prepare subsystems to handle memory. The page allocator is still unable to allocate from the new memory. … Generated when memory has successfully brought online. The callback may allocate pages from the new memory. … Generated to begin the process of offlining memory. Allocations are no longer possible from the memory but some of the memory to be offlined … [all …]
|
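To show the notifier interface the excerpt above describes, here is a minimal in-kernel sketch (an out-of-tree module written for illustration, not code from that document). It registers a callback that receives the MEM_GOING_ONLINE / MEM_ONLINE / MEM_GOING_OFFLINE / MEM_OFFLINE style actions declared in include/linux/memory.h::

    #include <linux/module.h>
    #include <linux/memory.h>
    #include <linux/notifier.h>

    static int demo_mem_callback(struct notifier_block *nb,
                                 unsigned long action, void *data)
    {
        struct memory_notify *arg = data;

        switch (action) {
        case MEM_GOING_ONLINE:
            pr_info("memory going online: start pfn %lu, %lu pages\n",
                    arg->start_pfn, arg->nr_pages);
            break;
        case MEM_OFFLINE:
            pr_info("memory offlined\n");
            break;
        default:
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block demo_mem_nb = {
        .notifier_call = demo_mem_callback,
    };

    static int __init demo_init(void)
    {
        return register_memory_notifier(&demo_mem_nb);
    }

    static void __exit demo_exit(void)
    {
        unregister_memory_notifier(&demo_mem_nb);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");
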
D | bus-virt-phys-mapping.rst | How to access I/O mapped memory from within device drivers … (see Documentation/core-api/dma-api-howto.rst). They continue … must not use them. --davidm 00/12/12 … [ This is a mail message in response to a query on IO mapping, thus the … The AHA-1542 is a bus-master device, and your patch makes the driver give the … (because all bus master devices see the physical memory mappings directly). … at memory addresses, and in this case we actually want the third, the so-called "bus address". … Essentially, the three ways of addressing memory are (this is "real memory", that is, normal RAM--see later about other details): … [all …]
|
/Linux-v5.15/Documentation/x86/ |
D | amd-memory-encryption.rst | .. SPDX-License-Identifier: GPL-2.0 … AMD Memory Encryption … Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV) are … SME provides the ability to mark individual pages of memory as encrypted using … SEV enables running encrypted virtual machines (VMs) in which the code and data … memory. Private memory is encrypted with the guest-specific key, while shared memory may be encrypted with hypervisor key. When SME is enabled, the hypervisor key is the same key which is used in SME. … specified in the cr3 register, allowing the PGD table to be encrypted. Each … bit in the page table entry that points to the next table. This allows the full … [all …]
|
/Linux-v5.15/drivers/xen/ |
D | Kconfig | # SPDX-License-Identifier: GPL-2.0-only … bool "Xen memory balloon driver" … The balloon driver allows the Xen domain to request more memory from the system to expand the domain's memory allocation, or alternatively return unneeded memory to the system. … bool "Memory hotplug support for Xen balloon driver" … Memory hotplug support for Xen balloon driver allows expanding memory … memory ranges to use in order to map foreign memory or grants. … Memory could be hotplugged in following steps: … 1) target domain: ensure that memory auto online policy is in … [all …]
|
/Linux-v5.15/drivers/staging/media/atomisp/pci/ |
D | ia_css_dvs.h | /* SPDX-License-Identifier: GPL-2.0 */ … This program is distributed in the hope it will be useful, but WITHOUT … /* Structure that holds DVS statistics in the ISP internal … ia_css_ptr data_ptr; /* base pointer containing all memory */ u32 size; /* size of allocated memory in data_ptr */ … /* Structure that holds SKC DVS statistics in the ISP internal … /* Map with host-side pointers to ISP-format statistics. These pointers can either be copies of ISP data or memory mapped … allocated pointer is stored in the data_ptr field. The other fields … u32 size; /* total size in bytes */ … [all …]
|
/Linux-v5.15/include/linux/ |
D | memory.h | /* SPDX-License-Identifier: GPL-2.0 */ … include/linux/memory.h - generic memory definition … basic "struct memory_block" here, which can be embedded in per-arch … Basic handling of the devices is done in drivers/base/memory.c and system devices are handled in drivers/base/sys.c. … Memory block are exported via sysfs in the class/memory/devices/ … struct memory_group - a logical group of memory blocks @nid: The node id for all memory blocks inside the memory group. @blocks: List of all memory blocks belonging to this memory group. @present_kernel_pages: Present (online) memory outside ZONE_MOVABLE of this … [all …]
|
D | dma-buf-map.h | /* SPDX-License-Identifier: GPL-2.0-only */ … Pointer to dma-buf-mapped memory, plus helpers. … Calling dma-buf's vmap operation returns a pointer to the buffer's memory. … I/O operations or memory load/store operations. For example, copying to system memory could be done with memcpy(), copying to I/O memory would be … .. code-block:: c … void *vaddr = ...; // pointer to system memory … void *vaddr_iomem = ...; // pointer to I/O memory … When using dma-buf's vmap operation, the returned pointer is encoded as … :c:type:`struct dma_buf_map <dma_buf_map>` stores the buffer's address in … [all …]
|
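The excerpt above is making the point that a vmap'ed buffer may live in either system memory or I/O memory, and that the two need different accessors (memcpy() vs. memcpy_toio()). Below is a hedged kernel-style sketch of that idea only; the struct name, helper name, and is_iomem flag are invented for illustration and are not the dma-buf API itself::

    #include <linux/io.h>       /* memcpy_toio() */
    #include <linux/string.h>   /* memcpy() */
    #include <linux/types.h>

    /* Made-up mapping descriptor mirroring the system-vs-I/O split. */
    struct demo_mapping {
        union {
            void __iomem *vaddr_iomem;  /* points into I/O memory */
            void *vaddr;                /* points into system memory */
        };
        bool is_iomem;
    };

    static void copy_to_mapping(struct demo_mapping *map,
                                const void *src, size_t len)
    {
        if (map->is_iomem)
            memcpy_toio(map->vaddr_iomem, src, len);   /* I/O memory */
        else
            memcpy(map->vaddr, src, len);              /* system memory */
    }
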
/Linux-v5.15/Documentation/admin-guide/sysctl/ |
D | vm.rst | For general info and legal blurb, please look in index.rst. … This file contains the documentation for the sysctl files in … The files in this directory can be used to tune the operation of the virtual memory (VM) subsystem of the Linux kernel and … files can be found in mm/swap.c. … Currently, these files are in /proc/sys/vm: … - admin_reserve_kbytes - compact_memory - compaction_proactiveness … [all …]
|
/Linux-v5.15/Documentation/powerpc/ |
D | firmware-assisted-dump.rst | Firmware-Assisted Dump … The goal of firmware-assisted dump is to enable the dump of a crashed system, and to do so from a fully-reset system, and … in production use. … - Firmware-Assisted Dump (FADump) infrastructure is intended to replace … - Fadump uses the same firmware interfaces and memory reservation model … - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore in the ELF format in the same way as kdump. This helps us reuse the … - Unlike phyp dump, userspace tool does not need to refer any sysfs … - Unlike phyp dump, FADump allows user to release all the memory reserved … [all …]
|
/Linux-v5.15/Documentation/driver-api/pci/ |
D | p2pdma.rst | .. SPDX-License-Identifier: GPL-2.0 … PCI Peer-to-Peer DMA Support … called Peer-to-Peer (or P2P). However, there are a number of issues that make P2P transactions tricky to do in a perfectly safe way. … transactions between hierarchy domains, and in PCIe, each Root Port … same PCI bridge, as such devices are all in the same PCI hierarchy … The second issue is that to make use of existing interfaces in Linux, memory that is used for P2P transactions needs to be backed by struct … In a given P2P implementation there may be three or more different types of kernel drivers in play: … [all …]
|