Searched full:nodes (Results 1 – 25 of 2091) sorted by relevance

/Linux-v5.4/Documentation/admin-guide/mm/
numa_memory_policy.rst
19 which is an administrative mechanism for restricting the nodes from which
42 allocations across all nodes with "sufficient" memory, so as
166 an optional set of nodes. The mode determines the behavior of the
168 and the optional set of nodes can be viewed as the arguments to the
190 does not use the optional set of nodes.
192 It is an error for the set of nodes specified for this policy to
197 nodes specified by the policy. Memory will be allocated from
204 allocation fails, the kernel will search other nodes, in order
224 page granularity, across the nodes specified in the policy.
229 Interleave mode indexes the set of nodes specified by the
[all …]
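The numa_memory_policy.rst excerpt above presents a memory policy as a mode plus an optional set of nodes. A minimal, hypothetical userspace sketch of how that pair is handed to the kernel, assuming libnuma's <numaif.h> (for set_mempolicy() and MPOL_INTERLEAVE) and a machine that actually has NUMA nodes 0 and 1; the buffer size is arbitrary:

    /* Hypothetical demo, not taken from the kernel tree.
     * Build: gcc interleave_demo.c -lnuma */
    #include <numaif.h>     /* set_mempolicy(), MPOL_INTERLEAVE */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Bit i set => node i belongs to the policy's node set. */
        unsigned long nodemask = (1UL << 0) | (1UL << 1);

        /* The "mode" and the "optional set of nodes" described above. */
        if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
                          sizeof(nodemask) * 8) != 0) {
            perror("set_mempolicy");
            return EXIT_FAILURE;
        }

        /* Pages faulted in from here on are spread across nodes 0 and 1. */
        char *buf = malloc(64UL << 20);
        if (buf)
            memset(buf, 0, 64UL << 20);
        free(buf);
        return 0;
    }

Running the program under numactl --hardware shows the node layout; numactl --interleave=0,1 achieves the same effect without writing any code.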
/Linux-v5.4/drivers/gpu/drm/selftests/
test-drm_mm.c
267 struct drm_mm_node nodes[2]; in igt_debug() local
270 /* Create a small drm_mm with a couple of nodes and a few holes, and in igt_debug()
276 memset(nodes, 0, sizeof(nodes)); in igt_debug()
277 nodes[0].start = 512; in igt_debug()
278 nodes[0].size = 1024; in igt_debug()
279 ret = drm_mm_reserve_node(&mm, &nodes[0]); in igt_debug()
282 nodes[0].start, nodes[0].size); in igt_debug()
286 nodes[1].size = 1024; in igt_debug()
287 nodes[1].start = 4096 - 512 - nodes[1].size; in igt_debug()
288 ret = drm_mm_reserve_node(&mm, &nodes[1]); in igt_debug()
[all …]
/Linux-v5.4/Documentation/devicetree/bindings/usb/
usb-device.txt
7 Four types of device-tree nodes are defined: "host-controller nodes"
8 representing USB host controllers, "device nodes" representing USB devices,
9 "interface nodes" representing USB interfaces and "combined nodes"
20 Required properties for device nodes:
30 Required properties for device nodes with interface nodes:
35 Required properties for interface nodes:
49 Required properties for combined nodes:
59 Required properties for hub nodes with device nodes:
64 Required properties for host-controller nodes with device nodes:
/Linux-v5.4/Documentation/devicetree/bindings/cpu/
cpu-topology.txt
20 For instance in a system where CPUs support SMT, "cpu" nodes represent all
22 In systems where SMT is not supported "cpu" nodes represent all cores present
25 CPU topology bindings allow one to associate cpu nodes with hierarchical groups
27 tree nodes.
32 The cpu nodes, as per bindings defined in [4], represent the devices that
35 A topology description containing phandles to cpu nodes that are not compliant
44 nodes are listed.
60 The cpu-map node's child nodes can be:
62 - one or more cluster nodes or
63 - one or more socket nodes in a multi-socket system
[all …]
/Linux-v5.4/mm/
mempolicy.c
15 * interleave Allocate memory interleaved over a set of nodes,
22 * bind Only allocate memory on a specific set of nodes,
26 * the allocation to memory nodes instead
150 int (*create)(struct mempolicy *pol, const nodemask_t *nodes);
151 void (*rebind)(struct mempolicy *pol, const nodemask_t *nodes);
167 static int mpol_new_interleave(struct mempolicy *pol, const nodemask_t *nodes) in mpol_new_interleave() argument
169 if (nodes_empty(*nodes)) in mpol_new_interleave()
171 pol->v.nodes = *nodes; in mpol_new_interleave()
175 static int mpol_new_preferred(struct mempolicy *pol, const nodemask_t *nodes) in mpol_new_preferred() argument
177 if (!nodes) in mpol_new_preferred()
[all …]
/Linux-v5.4/Documentation/vm/
numa.rst
49 abstractions called "nodes". Linux maps the nodes onto the physical cells
51 architectures. As with physical cells, software nodes may contain 0 or more
53 "closer" nodes--nodes that map to closer cells--will generally experience
64 the emulation of additional nodes. For NUMA emulation, linux will carve up
65 the existing nodes--or the system memory for non-NUMA platforms--into multiple
66 nodes. Each emulated node will manage a fraction of the underlying cells'
76 an ordered "zonelist". A zonelist specifies the zones/nodes to visit when a
81 Because some nodes contain multiple zones containing different types of
87 from the same node before using remote nodes which are ordered by NUMA distance.
94 nodes' zones in the selected zonelist looking for the first zone in the list
[all …]
/Linux-v5.4/include/linux/
interconnect-provider.h
20 * @num_nodes: number of nodes in this device
21 * @nodes: array of pointers to the nodes in this device
25 struct icc_node *nodes[]; member
36 * @nodes: internal list of the interconnect provider nodes
41 * @xlate: provider-specific callback for mapping nodes from phandle arguments
48 struct list_head nodes; member
65 * @num_links: number of links to other interconnect nodes
67 * @node_list: the list entry in the parent provider's "nodes" list
68 * @search_list: list used when walking the nodes graph
69 * @reverse: pointer to previous node when walking the nodes graph
[all …]
/Linux-v5.4/fs/ubifs/
gc.c
14 * nodes) or not. For non-index LEBs, garbage collection finds a LEB which
15 * contains a lot of dirty space (obsolete nodes), and copies the non-obsolete
16 * nodes to the journal, at which point the garbage-collected LEB is free to be
17 * reused. For index LEBs, garbage collection marks the non-obsolete index nodes
19 * to be reused. Garbage collection will cause the number of dirty index nodes
33 * the UBIFS nodes GC deals with. Large nodes make GC waste more space. Indeed,
34 * if GC move data from LEB A to LEB B and nodes in LEB A are large, GC would
35 * have to waste large pieces of free space at the end of LEB B, because nodes
36 * from LEB A would not fit. And the worst situation is when all nodes are of
101 * data_nodes_cmp - compare 2 data nodes.
[all …]
/Linux-v5.4/Documentation/driver-api/md/
md-cluster.rst
54 node may write to those sectors. This is used when a new nodes
60 Each node has to communicate with other nodes when starting or ending
70 Normally all nodes hold a concurrent-read lock on this device.
75 Messages can be broadcast to all nodes, and the sender waits for all
76 other nodes to acknowledge the message before proceeding. Only one
87 informs other nodes that the metadata has
94 informs other nodes that a resync is initiated or
104 informs other nodes that a device is being added to
128 The DLM LVB is used to communicate within nodes of the cluster. There
145 acknowledged by all nodes in the cluster. The BAST of the resource
[all …]
/Linux-v5.4/Documentation/filesystems/
ubifs-authentication.rst
76 - *Index*: an on-flash B+ tree where the leaf nodes contain filesystem data
94 Basic on-flash UBIFS entities are called *nodes*. UBIFS knows different types
95 of nodes. Eg. data nodes (`struct ubifs_data_node`) which store chunks of file
96 contents or inode nodes (`struct ubifs_ino_node`) which represent VFS inodes.
97 Almost all types of nodes share a common header (`ubifs_ch`) containing basic
100 and some less important node types like padding nodes which are used to pad
104 as *wandering tree*, where only the changed nodes are re-written and previous
117 a dirty-flag which marks nodes that have to be persisted the next time the
122 on-flash filesystem structures like the index. On every commit, the TNC nodes
131 any changes (in form of inode nodes, data nodes etc.) between commits
[all …]
/Linux-v5.4/fs/btrfs/
inode-item.c
93 return btrfs_find_name_in_ext_backref(path->nodes[0], path->slots[0], in btrfs_lookup_inode_extref()
135 extref = btrfs_find_name_in_ext_backref(path->nodes[0], path->slots[0], in btrfs_del_inode_extref()
143 leaf = path->nodes[0]; in btrfs_del_inode_extref()
207 ref = btrfs_find_name_in_backref(path->nodes[0], path->slots[0], name, in btrfs_del_inode_ref()
214 leaf = path->nodes[0]; in btrfs_del_inode_ref()
277 if (btrfs_find_name_in_ext_backref(path->nodes[0], in btrfs_insert_inode_extref()
289 leaf = path->nodes[0]; in btrfs_insert_inode_extref()
295 btrfs_set_inode_extref_name_len(path->nodes[0], extref, name_len); in btrfs_insert_inode_extref()
296 btrfs_set_inode_extref_index(path->nodes[0], extref, index); in btrfs_insert_inode_extref()
297 btrfs_set_inode_extref_parent(path->nodes[0], extref, ref_objectid); in btrfs_insert_inode_extref()
[all …]
/Linux-v5.4/drivers/gpu/drm/amd/amdgpu/
amdgpu_vram_mgr.c
219 struct drm_mm_node *nodes = mem->mm_node; in amdgpu_vram_mgr_bo_visible_size() local
229 for (usage = 0; nodes && pages; pages -= nodes->size, nodes++) in amdgpu_vram_mgr_bo_visible_size()
230 usage += amdgpu_vram_mgr_vis_size(adev, nodes); in amdgpu_vram_mgr_bo_visible_size()
275 struct drm_mm_node *nodes; in amdgpu_vram_mgr_new() local
308 nodes = kvmalloc_array((uint32_t)num_nodes, sizeof(*nodes), in amdgpu_vram_mgr_new()
310 if (!nodes) { in amdgpu_vram_mgr_new()
326 r = drm_mm_insert_node_in_range(mm, &nodes[i], pages, in amdgpu_vram_mgr_new()
333 vis_usage += amdgpu_vram_mgr_vis_size(adev, &nodes[i]); in amdgpu_vram_mgr_new()
334 amdgpu_vram_mgr_virt_start(mem, &nodes[i]); in amdgpu_vram_mgr_new()
345 r = drm_mm_insert_node_in_range(mm, &nodes[i], in amdgpu_vram_mgr_new()
[all …]
/Linux-v5.4/lib/
interval_tree_test.c
14 __param(int, nnodes, 100, "Number of nodes in the interval tree");
19 __param(bool, search_all, false, "Searches will iterate all nodes in the tree");
24 static struct interval_tree_node *nodes = NULL; variable
49 nodes[i].start = a; in init()
50 nodes[i].last = b; in init()
68 nodes = kmalloc_array(nnodes, sizeof(struct interval_tree_node), in interval_tree_test_init()
70 if (!nodes) in interval_tree_test_init()
75 kfree(nodes); in interval_tree_test_init()
88 interval_tree_insert(nodes + j, &root); in interval_tree_test_init()
90 interval_tree_remove(nodes + j, &root); in interval_tree_test_init()
[all …]
rbtree_test.c
14 __param(int, nnodes, 100, "Number of nodes in the rb-tree");
28 static struct test_node *nodes = NULL; variable
153 nodes[i].key = prandom_u32_state(&rnd); in init()
154 nodes[i].val = prandom_u32_state(&rnd); in init()
248 nodes = kmalloc_array(nnodes, sizeof(*nodes), GFP_KERNEL); in rbtree_test_init()
249 if (!nodes) in rbtree_test_init()
261 insert(nodes + j, &root); in rbtree_test_init()
263 erase(nodes + j, &root); in rbtree_test_init()
277 insert_cached(nodes + j, &root); in rbtree_test_init()
279 erase_cached(nodes + j, &root); in rbtree_test_init()
[all …]
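The two lib/ self-tests above (interval_tree_test.c and rbtree_test.c) stress the kernel's tree code through small insert/erase helpers. As a rough sketch only, using the real <linux/rbtree.h> calls but with an illustrative struct test_node and key field, the classic insert pattern for a plain (non-cached) rb_root looks roughly like this:

    #include <linux/rbtree.h>
    #include <linux/types.h>

    /* Illustrative payload type; rbtree_test.c's real struct differs slightly. */
    struct test_node {
        u32 key;
        struct rb_node rb;
    };

    static void insert(struct test_node *node, struct rb_root *root)
    {
        struct rb_node **link = &root->rb_node, *parent = NULL;
        u32 key = node->key;

        /* Walk down to the insertion point, remembering the parent. */
        while (*link) {
            parent = *link;
            if (key < rb_entry(parent, struct test_node, rb)->key)
                link = &parent->rb_left;
            else
                link = &parent->rb_right;
        }

        /* Splice the new node in, then let the core rebalance/recolor it. */
        rb_link_node(&node->rb, parent, link);
        rb_insert_color(&node->rb, root);
    }

    static void erase(struct test_node *node, struct rb_root *root)
    {
        rb_erase(&node->rb, root);
    }

rbtree_test.c also exercises the cached variants (rb_insert_color_cached()/rb_erase_cached() on an rb_root_cached), which additionally track the leftmost node.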
/Linux-v5.4/Documentation/devicetree/bindings/pinctrl/
meson,pinctrl.txt
20 === GPIO sub-nodes ===
25 Required properties for sub-nodes are:
34 === Other sub-nodes ===
36 Child nodes without the "gpio-controller" represent some desired
37 configuration for a pin or a group. Those nodes can be pinmux nodes or
38 configuration nodes.
40 Required properties for pinmux nodes are:
47 Required properties for configuration nodes:
50 Configuration nodes support the following generic properties, as
/Linux-v5.4/drivers/md/persistent-data/
dm-btree-spine.c
131 s->nodes[0] = NULL; in init_ro_spine()
132 s->nodes[1] = NULL; in init_ro_spine()
140 unlock_block(s->info, s->nodes[i]); in exit_ro_spine()
151 unlock_block(s->info, s->nodes[0]); in ro_step()
152 s->nodes[0] = s->nodes[1]; in ro_step()
156 r = bn_read_lock(s->info, new_child, s->nodes + s->count); in ro_step()
167 unlock_block(s->info, s->nodes[s->count]); in ro_pop()
175 block = s->nodes[s->count - 1]; in ro_node()
193 unlock_block(s->info, s->nodes[i]); in exit_shadow_spine()
205 unlock_block(s->info, s->nodes[0]); in shadow_step()
[all …]
/Linux-v5.4/arch/x86/mm/
numa_emulation.c
77 * Sets up nr_nodes fake nodes interleaved over physical nodes ranging from addr
108 * Calculate the number of big nodes that can be allocated as a result in split_nodes_interleave()
122 * Continue to fill physical nodes with fake nodes until there is no in split_nodes_interleave()
210 * Sets up fake nodes of `size' interleaved over physical nodes ranging from
231 * physical block and try to create nodes of at least size in split_nodes_size_interleave_uniform()
234 * In the uniform case, split the nodes strictly by physical in split_nodes_size_interleave_uniform()
251 * The limit on emulated nodes is MAX_NUMNODES, so the in split_nodes_size_interleave_uniform()
255 * (but not necessarily over physical nodes). in split_nodes_size_interleave_uniform()
269 * Fill physical nodes with fake nodes of size until there is no memory in split_nodes_size_interleave_uniform()
344 * numa_emulation - Emulate NUMA nodes
[all …]
/Linux-v5.4/tools/perf/tests/
mem2node.c
49 struct memory_node nodes[3]; in test__mem2node() local
51 .memory_nodes = (struct memory_node *) &nodes[0], in test__mem2node()
52 .nr_memory_nodes = ARRAY_SIZE(nodes), in test__mem2node()
57 for (i = 0; i < ARRAY_SIZE(nodes); i++) { in test__mem2node()
58 nodes[i].node = test_nodes[i].node; in test__mem2node()
59 nodes[i].size = 10; in test__mem2node()
62 (nodes[i].set = get_bitmap(test_nodes[i].map, 10))); in test__mem2node()
74 for (i = 0; i < ARRAY_SIZE(nodes); i++) in test__mem2node()
75 zfree(&nodes[i].set); in test__mem2node()
/Linux-v5.4/Documentation/sphinx/
automarkup.py
7 from docutils import nodes
46 repl.append(nodes.Text(t[done:m.start()]))
51 target_text = nodes.Text(target + '()')
54 lit_text = nodes.literal(classes=['xref', 'c', 'c-func'])
79 repl.append(nodes.Text(t[done:]))
86 # kinds of nodes to prune. But this works well for now.
88 # The nodes.literal test catches ``literal text``, its purpose is to
92 for para in doctree.traverse(nodes.paragraph):
93 for node in para.traverse(nodes.Text):
94 if not isinstance(node.parent, nodes.literal):
/Linux-v5.4/drivers/net/ethernet/intel/ice/
ice_sched.c
69 /* Check if TEID matches to any of the children nodes */ in ice_sched_find_node_by_teid()
224 * ice_sched_remove_elems - remove nodes from HW
227 * @num_nodes: number of nodes
230 * This function remove nodes from HW
311 * The parent array is updated below and that shifts the nodes in ice_free_sched_node()
317 /* Leaf, TC and root nodes can't be deleted by SW */ in ice_free_sched_node()
356 /* leaf nodes have no children */ in ice_free_sched_node()
476 * ice_sched_suspend_resume_elems - suspend or resume HW nodes
478 * @num_nodes: number of nodes
482 * This function suspends or resumes HW nodes
[all …]
/Linux-v5.4/arch/arm/mach-sunxi/
mc_smp.c
690 * This holds any device nodes that we requested resources for,
703 int (*get_smp_nodes)(struct sunxi_mc_smp_nodes *nodes);
707 static void __init sunxi_mc_smp_put_nodes(struct sunxi_mc_smp_nodes *nodes) in sunxi_mc_smp_put_nodes() argument
709 of_node_put(nodes->prcm_node); in sunxi_mc_smp_put_nodes()
710 of_node_put(nodes->cpucfg_node); in sunxi_mc_smp_put_nodes()
711 of_node_put(nodes->sram_node); in sunxi_mc_smp_put_nodes()
712 of_node_put(nodes->r_cpucfg_node); in sunxi_mc_smp_put_nodes()
713 memset(nodes, 0, sizeof(*nodes)); in sunxi_mc_smp_put_nodes()
716 static int __init sun9i_a80_get_smp_nodes(struct sunxi_mc_smp_nodes *nodes) in sun9i_a80_get_smp_nodes() argument
718 nodes->prcm_node = of_find_compatible_node(NULL, NULL, in sun9i_a80_get_smp_nodes()
[all …]
/Linux-v5.4/Documentation/driver-api/acpi/
scan_handlers.rst
19 acpi_device objects are referred to as "device nodes" in what follows, but they
23 During ACPI-based device hot-remove device nodes representing pieces of hardware
27 initialization of device nodes, such as retrieving common configuration
48 where ids is the list of IDs of device nodes the given handler is supposed to
51 executed, respectively, after registration of new device nodes and before
52 unregistration of device nodes the handler attached to previously.
55 device nodes in the given namespace scope with the driver core. Then, it tries
72 callbacks from the scan handlers of all device nodes in the given namespace
74 nodes in that scope.
79 is the order in which they are matched against device nodes during namespace
/Linux-v5.4/Documentation/ABI/stable/
sysfs-devices-node
5 Nodes that could be possibly become online at some point.
11 Nodes that are online.
17 Nodes that have regular memory.
23 Nodes that have one or more CPUs.
29 Nodes that have regular or high memory.
70 Distance between the node and all the other nodes
99 The node's relationship to other nodes for access class "Y".
106 nodes that have class "Y" access to this target node's
107 memory. CPUs and other memory initiators in nodes not in
123 nodes found in this access class's linked initiators.
[all …]
/Linux-v5.4/arch/sparc/kernel/
cpumap.c
45 int num_nodes; /* Number of nodes in a level in a cpuinfo tree */
51 /* Offsets into nodes[] for each level of the tree */
53 struct cpuinfo_node nodes[0]; member
86 * nodes.
121 * end index, and number of nodes for each level in the cpuinfo tree. The
122 * total number of cpuinfo nodes required to build the tree is returned.
197 new_tree = kzalloc(struct_size(new_tree, nodes, n), GFP_ATOMIC); in build_cpuinfo_tree()
211 node = &new_tree->nodes[n]; in build_cpuinfo_tree()
252 node = &new_tree->nodes[level_rover[level]]; in build_cpuinfo_tree()
277 node = &new_tree->nodes[n]; in build_cpuinfo_tree()
[all …]
/Linux-v5.4/Documentation/devicetree/bindings/i2c/
i2c-fsi.txt
9 nodes.
10 - #size-cells = <0>; : Number of size cells in child nodes.
11 - child nodes : Nodes to describe busses off the I2C
18 - child nodes : Nodes to describe devices on the I2C
