
Searched full:memory (Results 1 – 25 of 6441) sorted by relevance


/Linux-v5.4/Documentation/admin-guide/mm/
memory-hotplug.rst
4 Memory Hotplug
10 This document is about memory hotplug: how to use it and its current status.
11 Because Memory Hotplug is still under development, contents of this text will
18 (1) x86_64 has a special implementation for memory hotplug.
26 Purpose of memory hotplug
29 Memory Hotplug allows users to increase/decrease the amount of memory.
32 (A) For changing the amount of memory.
38 hardware which supports memory power management.
40 Linux memory hotplug is designed for both purposes.
42 Phases of memory hotplug
[all …]
concepts.rst
7 The memory management in Linux is a complex system that evolved over the
9 systems from MMU-less microcontrollers to supercomputers. The memory
18 Virtual Memory Primer
21 The physical memory in a computer system is a limited resource and
22 even for systems that support memory hotplug there is a hard limit on
23 the amount of memory that can be installed. The physical memory is not
29 All this makes dealing directly with physical memory quite complex and
30 to avoid this complexity a concept of virtual memory was developed.
32 The virtual memory abstracts the details of physical memory from the
34 physical memory (demand paging) and provides a mechanism for the
[all …]
numaperf.rst
7 Some platforms may have multiple types of memory attached to a compute
8 node. These disparate memory ranges may share some characteristics, such
12 A system supports such heterogeneous memory by grouping each memory type
14 characteristics. Some memory may share the same node as a CPU, and others
15 are provided as memory only nodes. While memory only nodes do not provide
18 nodes with local memory and a memory only node for each compute node::
29 A "memory initiator" is a node containing one or more devices such as
30 CPUs or separate memory I/O devices that can initiate memory requests.
31 A "memory target" is a node containing one or more physical address
32 ranges accessible from one or more memory initiators.
[all …]
/Linux-v5.4/tools/testing/selftests/memory-hotplug/
mem-on-off-test.sh
25 if ! ls $SYSFS/devices/system/memory/memory* > /dev/null 2>&1; then
26 echo $msg memory hotplug is not supported >&2
30 if ! grep -q 1 $SYSFS/devices/system/memory/memory*/removable; then
31 echo $msg no hot-pluggable memory >&2
37 # list all hot-pluggable memory
43 for memory in $SYSFS/devices/system/memory/memory*; do
44 if grep -q 1 $memory/removable &&
45 grep -q $state $memory/state; then
46 echo ${memory##/*/memory}
63 grep -q online $SYSFS/devices/system/memory/memory$1/state
[all …]
/Linux-v5.4/Documentation/admin-guide/cgroup-v1/
memory.rst
2 Memory Resource Controller
12 The Memory Resource Controller has generically been referred to as the
13 memory controller in this document. Do not confuse memory controller
14 used here with the memory controller that is used in hardware.
17 When we mention a cgroup (cgroupfs's directory) with memory controller,
18 we call it "memory cgroup". When you see git-log and source code, you'll
22 Benefits and Purpose of the memory controller
25 The memory controller isolates the memory behaviour of a group of tasks
27 uses of the memory controller. The memory controller can be used to
30 Memory-hungry applications can be isolated and limited to a smaller
[all …]
/Linux-v5.4/Documentation/ABI/testing/
sysfs-devices-memory
1 What: /sys/devices/system/memory
5 The /sys/devices/system/memory contains a snapshot of the
6 internal state of the kernel memory blocks. Files could be
9 Users: hotplug memory add/remove tools
12 What: /sys/devices/system/memory/memoryX/removable
16 The file /sys/devices/system/memory/memoryX/removable
17 indicates whether this memory block is removable or not.
19 identify removable sections of the memory before attempting
20 potentially expensive hot-remove memory operation
21 Users: hotplug memory remove tools
[all …]
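The `removable` attribute described in that ABI entry is a plain text file, so the hot-remove tooling it mentions can probe it directly. A minimal user-space sketch, assuming only the documented /sys/devices/system/memory layout (error handling trimmed)::

    /* List memory blocks and report whether each is marked removable. */
    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *base = "/sys/devices/system/memory";
        struct dirent *de;
        DIR *dir = opendir(base);
        char path[512];

        if (!dir) {
            perror(base);
            return 1;
        }
        while ((de = readdir(dir)) != NULL) {
            int removable;
            FILE *f;

            /* only memoryN directories, skip block_size_bytes and friends */
            if (strncmp(de->d_name, "memory", 6) != 0 ||
                !isdigit((unsigned char)de->d_name[6]))
                continue;
            snprintf(path, sizeof(path), "%s/%s/removable", base, de->d_name);
            f = fopen(path, "r");
            if (!f)
                continue;
            if (fscanf(f, "%d", &removable) == 1)
                printf("%s: removable=%d\n", de->d_name, removable);
            fclose(f);
        }
        closedir(dir);
        return 0;
    }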
/Linux-v5.4/Documentation/vm/
memory-model.rst
6 Physical Memory Model
9 Physical memory in a system may be addressed in different ways. The
10 simplest case is when the physical memory starts at address 0 and
15 different memory banks are attached to different CPUs.
17 Linux abstracts this diversity using one of the three memory models:
19 memory models it supports, what the default memory model is and
26 All the memory models track the status of physical page frames using
29 Regardless of the selected memory model, there exists one-to-one
33 Each memory model defines :c:func:`pfn_to_page` and :c:func:`page_to_pfn`
40 The simplest memory model is FLATMEM. This model is suitable for
[all …]
numa.rst
14 or more CPUs, local memory, and/or IO buses. For brevity and to
28 Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
32 Memory access time and effective memory bandwidth varies depending on how far
33 away the cell containing the CPU or IO bus making the memory access is from the
34 cell containing the target memory. For example, access to memory by CPUs
36 bandwidths than accesses to memory on other, remote cells. NUMA platforms
41 memory bandwidth. However, to achieve scalable memory bandwidth, system and
42 application software must arrange for a large majority of the memory references
43 [cache misses] to be to "local" memory--memory on the same cell, if any--or
44 to the closest cell with memory.
[all …]
hmm.rst
4 Heterogeneous Memory Management (HMM)
7 Provide infrastructure and helpers to integrate non-conventional memory (device
8 memory like GPU on board memory) into regular kernel path, with the cornerstone
9 of this being specialized struct page for such memory (see sections 5 to 7 of
12 HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
20 related to using device specific memory allocators. In the second section, I
24 fifth section deals with how device memory is represented inside the kernel.
30 Problems of using a device specific memory allocator
33 Devices with a large amount of on board memory (several gigabytes) like GPUs
34 have historically managed their memory through dedicated driver specific APIs.
[all …]
/Linux-v5.4/tools/testing/selftests/cgroup/
test_memcontrol.c
25 * the memory controller.
33 /* Create two nested cgroups with the memory controller enabled */ in test_memcg_subtree_control()
42 if (cg_write(parent, "cgroup.subtree_control", "+memory")) in test_memcg_subtree_control()
48 if (cg_read_strstr(child, "cgroup.controllers", "memory")) in test_memcg_subtree_control()
51 /* Create two nested cgroups without enabling memory controller */ in test_memcg_subtree_control()
66 if (!cg_read_strstr(child2, "cgroup.controllers", "memory")) in test_memcg_subtree_control()
100 current = cg_read_long(cgroup, "memory.current"); in alloc_anon_50M_check()
107 anon = cg_read_key_long(cgroup, "memory.stat", "anon "); in alloc_anon_50M_check()
134 current = cg_read_long(cgroup, "memory.current"); in alloc_pagecache_50M_check()
138 file = cg_read_key_long(cgroup, "memory.stat", "file "); in alloc_pagecache_50M_check()
[all …]
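The selftest above reads `memory.current` and the `anon` key of `memory.stat` through its cg_read_long()/cg_read_key_long() helpers. A standalone sketch of the same reads with plain stdio; the cgroup path is hypothetical and assumes cgroup v2 is mounted at /sys/fs/cgroup::

    /* Read memory.current and one memory.stat key for a cgroup v2 group. */
    #include <stdio.h>
    #include <string.h>

    static long read_long_file(const char *cgroup, const char *file)
    {
        char path[512];
        long val = -1;
        FILE *f;

        snprintf(path, sizeof(path), "%s/%s", cgroup, file);
        f = fopen(path, "r");
        if (!f)
            return -1;
        if (fscanf(f, "%ld", &val) != 1)
            val = -1;
        fclose(f);
        return val;
    }

    static long read_stat_key(const char *cgroup, const char *key)
    {
        char path[512], name[64];
        long val;
        FILE *f;

        snprintf(path, sizeof(path), "%s/memory.stat", cgroup);
        f = fopen(path, "r");
        if (!f)
            return -1;
        while (fscanf(f, "%63s %ld", name, &val) == 2) {
            if (!strcmp(name, key)) {
                fclose(f);
                return val;
            }
        }
        fclose(f);
        return -1;
    }

    int main(void)
    {
        const char *cg = "/sys/fs/cgroup/test";  /* hypothetical group */

        printf("memory.current = %ld\n", read_long_file(cg, "memory.current"));
        printf("anon           = %ld\n", read_stat_key(cg, "anon"));
        return 0;
    }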
/Linux-v5.4/mm/
Kconfig
3 menu "Memory Management options"
10 prompt "Memory model"
17 Linux manages its memory internally. Most users will
22 bool "Flat Memory"
31 spaces and for features like NUMA and memory hotplug,
32 choose "Sparse Memory"
34 If unsure, choose this option (Flat Memory) over any other.
37 bool "Discontiguous Memory"
41 memory systems, over FLATMEM. These systems have holes
45 Although "Discontiguous Memory" is still used by several
[all …]
/Linux-v5.4/Documentation/core-api/
memory-hotplug.rst
4 Memory hotplug
7 Memory hotplug event notifier
12 There are six types of notification defined in ``include/linux/memory.h``:
15 Generated before new memory becomes available in order to be able to
16 prepare subsystems to handle memory. The page allocator is still unable
17 to allocate from the new memory.
23 Generated when memory has been successfully brought online. The callback may
24 allocate pages from the new memory.
27 Generated to begin the process of offlining memory. Allocations are no
28 longer possible from the memory but some of the memory to be offlined
[all …]
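The six notification types listed in that document travel over the memory hotplug notifier chain. A hedged in-kernel sketch of a subscriber, assuming only register_memory_notifier() and the MEM_* constants from include/linux/memory.h (the callback bodies are illustrative)::

    #include <linux/module.h>
    #include <linux/memory.h>
    #include <linux/notifier.h>

    static int example_mem_callback(struct notifier_block *nb,
                                    unsigned long action, void *arg)
    {
        struct memory_notify *mn = arg;

        switch (action) {
        case MEM_GOING_ONLINE:
            /* prepare to handle the memory; the allocator cannot use it yet */
            pr_info("preparing for %lu pages at pfn %lu\n",
                    mn->nr_pages, mn->start_pfn);
            break;
        case MEM_ONLINE:
            /* memory brought online; pages may now be allocated from it */
            break;
        case MEM_GOING_OFFLINE:
            /* offlining has begun; stop relying on that memory */
            break;
        case MEM_OFFLINE:
        case MEM_CANCEL_ONLINE:
        case MEM_CANCEL_OFFLINE:
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block example_mem_nb = {
        .notifier_call = example_mem_callback,
    };

    static int __init example_init(void)
    {
        return register_memory_notifier(&example_mem_nb);
    }

    static void __exit example_exit(void)
    {
        unregister_memory_notifier(&example_mem_nb);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");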
memory-allocation.rst
4 Memory Allocation Guide
7 Linux provides a variety of APIs for memory allocation. You can
14 Most of the memory allocation APIs use GFP flags to express how that
15 memory should be allocated. The GFP acronym stands for "get free
16 pages", the underlying memory allocation function.
19 makes the question "How should I allocate memory?" not that easy to
32 The GFP flags control the allocator's behavior. They tell what memory
34 memory, whether the memory can be accessed by the userspace etc. The
39 * Most of the time ``GFP_KERNEL`` is what you need. Memory for the
40 kernel data structures, DMAable memory, inode cache, all these and
[all …]
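For the common case the guide points at, a minimal kernel-side sketch: allocate a structure with GFP_KERNEL (which may sleep and reclaim) and free it again. The structure and function names are illustrative::

    #include <linux/slab.h>

    struct example_item {
        int id;
        char name[32];
    };

    static struct example_item *example_alloc(int id)
    {
        struct example_item *item;

        item = kzalloc(sizeof(*item), GFP_KERNEL);
        if (!item)
            return NULL;    /* allocation can fail; always check */
        item->id = id;
        return item;
    }

    static void example_free(struct example_item *item)
    {
        kfree(item);
    }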
/Linux-v5.4/include/linux/
tee_drv.h
23 #define TEE_SHM_MAPPED BIT(0) /* Memory mapped by the kernel */
24 #define TEE_SHM_DMA_BUF BIT(1) /* Memory with dma-buf handle */
25 #define TEE_SHM_EXT_DMA_BUF BIT(2) /* Memory with dma-buf handle */
26 #define TEE_SHM_REGISTER BIT(3) /* Memory registered in secure world */
27 #define TEE_SHM_USER_MAPPED BIT(4) /* Memory mapped in user space */
28 #define TEE_SHM_POOL BIT(5) /* Memory allocated from pool */
38 * @list_shm: List of shared memory object owned by this context
43 * shared memory release.
90 * @shm_register: register shared memory buffer in TEE
91 * @shm_unregister: unregister shared memory buffer in TEE
[all …]
/Linux-v5.4/Documentation/powerpc/
firmware-assisted-dump.rst
14 - Fadump uses the same firmware interfaces and memory reservation model
16 - Unlike phyp dump, FADump exports the memory dump through /proc/vmcore
21 - Unlike phyp dump, FADump allows user to release all the memory reserved
35 - Once the dump is copied out, the memory that held the dump
44 - The first kernel registers the sections of memory with the
46 These registered sections of memory are reserved by the first
50 low memory regions (boot memory) from source to destination area.
54 The term 'boot memory' means size of the low memory chunk
56 booted with restricted memory. By default, the boot memory
58 Alternatively, user can also specify boot memory size
[all …]
/Linux-v5.4/Documentation/devicetree/bindings/reserved-memory/
reserved-memory.txt
1 *** Reserved memory regions ***
3 Reserved memory is specified as a node under the /reserved-memory node.
4 The operating system shall exclude reserved memory from normal usage
6 normal use) memory regions. Such memory regions are usually designed for
9 Parameters for each memory region can be encoded into the device tree
12 /reserved-memory node
19 /reserved-memory/ child nodes
21 Each child of the reserved-memory node specifies one or more regions of
22 reserved memory. Each child node may either use a 'reg' property to
23 specify a specific range of reserved memory, or a 'size' property with
[all …]
/Linux-v5.4/drivers/gpu/drm/nouveau/nvkm/core/
memory.c
24 #include <core/memory.h>
30 nvkm_memory_tags_put(struct nvkm_memory *memory, struct nvkm_device *device, in nvkm_memory_tags_put() argument
39 kfree(memory->tags); in nvkm_memory_tags_put()
40 memory->tags = NULL; in nvkm_memory_tags_put()
48 nvkm_memory_tags_get(struct nvkm_memory *memory, struct nvkm_device *device, in nvkm_memory_tags_get() argument
56 if ((tags = memory->tags)) { in nvkm_memory_tags_get()
57 /* If comptags exist for the memory, but a different amount in nvkm_memory_tags_get()
84 * As memory can be mapped in multiple places, we still in nvkm_memory_tags_get()
101 struct nvkm_memory *memory) in nvkm_memory_ctor() argument
103 memory->func = func; in nvkm_memory_ctor()
[all …]
/Linux-v5.4/Documentation/dev-tools/
kmemleak.rst
1 Kernel Memory Leak Detector
4 Kmemleak provides a way of detecting possible kernel memory leaks in a
9 Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in
17 thread scans the memory every 10 minutes (by default) and prints the
23 To display the details of all the possible scanned memory leaks::
27 To trigger an intermediate memory scan::
31 To clear the list of all current possible memory leaks::
42 Memory scanning parameters can be modified at run-time by writing to the
52 start the automatic memory scanning thread (default)
54 stop the automatic memory scanning thread
[all …]
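The kmemleak interface quoted above is a single debugfs file: writing "scan" triggers an intermediate scan and reading the file lists the possible leaks. A small user-space sketch of that sequence, assuming CONFIG_DEBUG_KMEMLEAK is enabled and debugfs is mounted::

    #include <stdio.h>

    int main(void)
    {
        const char *ctl = "/sys/kernel/debug/kmemleak";
        char line[256];
        FILE *f;

        f = fopen(ctl, "w");
        if (f) {
            fputs("scan\n", f);     /* trigger an intermediate memory scan */
            fclose(f);
        }

        f = fopen(ctl, "r");
        if (!f) {
            perror(ctl);
            return 1;
        }
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);    /* details of possible memory leaks */
        fclose(f);
        return 0;
    }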
kasan.rst
7 KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to
13 memory access, and therefore requires a compiler version that supports that.
41 Both KASAN modes work with both SLUB and SLAB memory allocators.
125 Memory state around the buggy address:
136 access, a stack trace of where the accessed memory was allocated (in case bad
139 the accessed slab object and information about the accessed memory page.
141 In the last section the report shows memory state around the accessed address.
144 The state of each aligned 8-byte region of memory is encoded in one shadow byte.
147 of the corresponding memory region are accessible; number N (1 <= N <= 7) means
151 inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
[all …]
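To make the shadow encoding above concrete: generic KASAN keeps one shadow byte for every 8 bytes of memory, so the shadow byte for an address is found by shifting the address right by 3 and adding an arch-specific offset. A short illustrative sketch; the offset is left as a parameter because it is not a fixed value::

    /* One shadow byte covers an 8-byte granule. */
    static inline unsigned char *kasan_shadow_of(unsigned long addr,
                                                 unsigned long shadow_offset)
    {
        return (unsigned char *)((addr >> 3) + shadow_offset);
    }

    /*
     * Decoding a shadow byte, per the report section above:
     *   0         all 8 bytes of the granule are accessible
     *   1..7      only the first N bytes are accessible
     *   negative  the whole granule is inaccessible (redzone, freed memory, ...)
     */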
/Linux-v5.4/Documentation/driver-api/
edac.rst
16 * Memory devices
18 The individual DRAM chips on a memory stick. These devices commonly
20 provides the number of bits that the memory controller expects:
23 * Memory Stick
25 A printed circuit board that aggregates multiple memory devices in
28 called DIMM (Dual Inline Memory Module).
30 * Memory Socket
32 A physical connector on the motherboard that accepts a single memory
37 A memory controller channel, responsible for communicating with a group of
43 It is typically the highest hierarchy on a Fully-Buffered DIMM memory
[all …]
ntb.rst
6 the separate memory systems of two or more computers to the same PCI-Express
8 registers and memory translation windows, as well as non common features like
15 Memory windows allow translated read and write access to the peer memory.
38 The primary purpose of NTB is to share some piece of memory between at least two
40 mainly used to perform the proper memory window initialization. Typically
41 there are two types of memory window interfaces supported by the NTB API:
48 Memory: Local NTB Port: Peer NTB Port: Peer MMIO:
51 | memory | _v____________ | ______________
52 | (addr) |<======| MW xlat addr |<====| MW base addr |<== memory-mapped IO
55 So a typical scenario for the first type of memory window initialization looks like:
[all …]
/Linux-v5.4/Documentation/media/uapi/v4l/
dev-mem2mem.rst
13 Video Memory-To-Memory Interface
16 A V4L2 memory-to-memory device can compress, decompress, transform, or
17 otherwise convert video data from one format into another format, in memory.
18 Such memory-to-memory devices set the ``V4L2_CAP_VIDEO_M2M`` or
19 ``V4L2_CAP_VIDEO_M2M_MPLANE`` capability. Examples of memory-to-memory
23 A memory-to-memory video node acts just like a normal video node, but it
24 supports both output (sending frames from memory to the hardware)
26 memory) stream I/O. An application will have to setup the stream I/O for
30 Memory-to-memory devices function as a shared resource: you can
39 One of the most common memory-to-memory devices is the codec. Codecs
[all …]
/Linux-v5.4/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/
mem.c
22 #define nvkm_mem(p) container_of((p), struct nvkm_mem, memory)
25 #include <core/memory.h>
31 struct nvkm_memory memory; member
43 nvkm_mem_target(struct nvkm_memory *memory) in nvkm_mem_target() argument
45 return nvkm_mem(memory)->target; in nvkm_mem_target()
49 nvkm_mem_page(struct nvkm_memory *memory) in nvkm_mem_page() argument
55 nvkm_mem_addr(struct nvkm_memory *memory) in nvkm_mem_addr() argument
57 struct nvkm_mem *mem = nvkm_mem(memory); in nvkm_mem_addr()
64 nvkm_mem_size(struct nvkm_memory *memory) in nvkm_mem_size() argument
66 return nvkm_mem(memory)->pages << PAGE_SHIFT; in nvkm_mem_size()
[all …]
/Linux-v5.4/Documentation/
bus-virt-phys-mapping.txt
2 How to access I/O mapped memory from within device drivers
22 (because all bus master devices see the physical memory mappings directly).
25 at memory addresses, and in this case we actually want the third, the
28 Essentially, the three ways of addressing memory are (this is "real memory",
32 0 is what the CPU sees when it drives zeroes on the memory bus.
38 - bus address. This is the address of memory as seen by OTHER devices,
40 addresses, with each device seeing memory in some device-specific way, but
43 external hardware sees the memory the same way.
47 because the memory and the devices share the same address space, and that is
51 CPU sees a memory map something like this (this is from memory)::
[all …]
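The document whose opening lines appear above goes on to describe ioremap() as the way drivers reach I/O mapped memory. A hedged kernel sketch of that pattern; the physical base address, size, and register offsets are hypothetical::

    #include <linux/errno.h>
    #include <linux/io.h>
    #include <linux/ioport.h>
    #include <linux/types.h>

    #define EXAMPLE_PHYS_BASE  0xfc000000UL  /* hypothetical device registers */
    #define EXAMPLE_SIZE       0x1000

    static void __iomem *example_regs;

    static int example_map(void)
    {
        u32 status;

        if (!request_mem_region(EXAMPLE_PHYS_BASE, EXAMPLE_SIZE, "example"))
            return -EBUSY;

        example_regs = ioremap(EXAMPLE_PHYS_BASE, EXAMPLE_SIZE);
        if (!example_regs) {
            release_mem_region(EXAMPLE_PHYS_BASE, EXAMPLE_SIZE);
            return -ENOMEM;
        }

        writel(0x1, example_regs + 0x04);    /* hypothetical control register */
        status = readl(example_regs + 0x00); /* hypothetical status register */
        (void)status;
        return 0;
    }

    static void example_unmap(void)
    {
        iounmap(example_regs);
        release_mem_region(EXAMPLE_PHYS_BASE, EXAMPLE_SIZE);
    }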
/Linux-v5.4/drivers/xen/
Kconfig
6 bool "Xen memory balloon driver"
9 The balloon driver allows the Xen domain to request more memory from
10 the system to expand the domain's memory allocation, or alternatively
11 return unneeded memory to the system.
14 bool "Memory hotplug support for Xen balloon driver"
17 Memory hotplug support for Xen balloon driver allows expanding memory
22 Memory could be hotplugged in following steps:
24 1) target domain: ensure that memory auto online policy is in
25 effect by checking /sys/devices/system/memory/auto_online_blocks
29 where <maxmem> is >= requested memory size,
[all …]
