
Searched refs:loads (Results 1 – 25 of 133) sorted by relevance


/Linux-v5.10/kernel/sched/
loadavg.c:72 void get_avenrun(unsigned long *loads, unsigned long offset, int shift) in get_avenrun() argument
74 loads[0] = (avenrun[0] + offset) << shift; in get_avenrun()
75 loads[1] = (avenrun[1] + offset) << shift; in get_avenrun()
76 loads[2] = (avenrun[2] + offset) << shift; in get_avenrun()
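The kernel keeps avenrun[] as fixed-point values with FSHIFT = 11 fractional bits (FIXED_1 = 1 << 11), and get_avenrun() hands them out after applying a rounding offset and shift. A minimal Python sketch of the decode that consumers such as /proc/loadavg perform with the LOAD_INT()/LOAD_FRAC() macros; the sample avenrun value is made up for illustration:

```python
FSHIFT = 11              # fractional bits, per include/linux/sched/loadavg.h
FIXED_1 = 1 << FSHIFT    # 1.0 in fixed point (2048)

def load_int(x):
    """Integer part, like the kernel's LOAD_INT() macro."""
    return x >> FSHIFT

def load_frac(x):
    """Two-digit fractional part, like LOAD_FRAC()."""
    return load_int((x & (FIXED_1 - 1)) * 100)

# Hypothetical avenrun[0] value encoding a 1-minute load of 1.50:
avenrun0 = int(1.50 * FIXED_1)
print(f"{load_int(avenrun0)}.{load_frac(avenrun0):02d}")  # -> 1.50
```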
/Linux-v5.10/tools/testing/selftests/net/
devlink_port_split.py:58 ports = json.loads(stdout)['port']
80 values = list(json.loads(stdout)['port'].values())[0]
98 values = list(json.loads(stdout)['port'].values())[0]
241 devs = json.loads(stdout)['dev']
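The devlink_port_split.py selftest feeds the JSON that `devlink -j` prints into json.loads and indexes the resulting dictionaries. A sketch of the same parsing, assuming a made-up payload shaped like `devlink -j port show` output (the port name and fields here are illustrative, not real tool output):

```python
import json

# Hypothetical `devlink -j port show` output; the top-level "port" key
# matches what the selftest indexes into.
stdout = '{"port": {"pci/0000:03:00.0/0": {"type": "eth", "netdev": "eth0"}}}'

ports = json.loads(stdout)["port"]          # dict keyed by port name
values = list(ports.values())[0]            # attributes of the first port
print(values["netdev"])
```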
/Linux-v5.10/arch/powerpc/perf/
power10-pmu.c:114 GENERIC_EVENT_ATTR(mem-loads, MEM_LOADS);
118 CACHE_EVENT_ATTR(L1-dcache-loads, PM_LD_REF_L1);
122 CACHE_EVENT_ATTR(L1-icache-loads, PM_INST_FROM_L1);
125 CACHE_EVENT_ATTR(LLC-loads, PM_DATA_FROM_L3);
127 CACHE_EVENT_ATTR(branch-loads, PM_BR_CMPL);
power9-pmu.c:162 GENERIC_EVENT_ATTR(mem-loads, MEM_LOADS);
166 CACHE_EVENT_ATTR(L1-dcache-loads, PM_LD_REF_L1);
170 CACHE_EVENT_ATTR(L1-icache-loads, PM_INST_FROM_L1);
173 CACHE_EVENT_ATTR(LLC-loads, PM_DATA_FROM_L3);
176 CACHE_EVENT_ATTR(branch-loads, PM_BR_CMPL);
power8-pmu.c:134 CACHE_EVENT_ATTR(L1-dcache-loads, PM_LD_REF_L1);
139 CACHE_EVENT_ATTR(L1-icache-loads, PM_INST_FROM_L1);
143 CACHE_EVENT_ATTR(LLC-loads, PM_DATA_FROM_L3);
149 CACHE_EVENT_ATTR(branch-loads, PM_BRU_FIN);
/Linux-v5.10/tools/perf/Documentation/
perf-mem.txt:19 right set of options to display a memory access profile. By default, loads
20 and stores are sampled. Use the -t option to limit to loads or stores.
85 Specify desired latency for loads event. (x86 only)
perf-c2c.txt:52 Configure mem-loads latency. (x86 only)
138 cpu/mem-loads,ldlat=30/P
143 cpu/mem-loads/
186 Total loads
/Linux-v5.10/arch/alpha/lib/
ev6-copy_user.S:64 EXI( ldbu $1,0($17) ) # .. .. .. L : Keep loads separate from stores
116 EXI ( ldbu $2,0($17) ) # .. .. .. L : No loads in the same quad
203 EXI ( ldbu $2,0($17) ) # .. .. .. L : No loads in the same quad
/Linux-v5.10/include/uapi/linux/
sysinfo.h:10 __kernel_ulong_t loads[3]; /* 1, 5, and 15 minute load averages */ member
/Linux-v5.10/include/linux/sched/
loadavg.h:16 extern void get_avenrun(unsigned long *loads, unsigned long offset, int shift);
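sysinfo.loads[] and get_avenrun() are the kernel side of the interface; userspace normally reads the same three averages through sysinfo(2), /proc/loadavg, or, from Python, os.getloadavg(). A short sketch (Linux-specific; os.getloadavg() raises OSError on platforms without load averages):

```python
import os

# getloadavg() returns the 1-, 5- and 15-minute averages the kernel
# derives from avenrun[], already converted to floats.
one, five, fifteen = os.getloadavg()
print(f"load average: {one:.2f}, {five:.2f}, {fifteen:.2f}")
```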
/Linux-v5.10/Documentation/x86/
tsx_async_abort.rst:13 case certain loads may speculatively pass invalid data to dependent operations
15 Synchronization Extensions (TSX) transaction. This includes loads with no
16 fault or assist condition. Such loads may speculatively expose stale data from
/Linux-v5.10/Documentation/core-api/
refcount-vs-atomic.rst:41 A strong (full) memory ordering guarantees that all prior loads and
49 A RELEASE memory ordering guarantees that all prior loads and
57 An ACQUIRE memory ordering guarantees that all post loads and
/Linux-v5.10/Documentation/
memory-barriers.txt:178 perceived by the loads made by another CPU in the same order as the stores were
247 (*) Overlapping loads and stores within a particular CPU will appear to be
275 (*) It _must_not_ be assumed that independent loads and stores will be issued
369 deferral and combination of memory operations; speculative loads; speculative
388 to have any effect on loads.
401 where two loads are performed such that the second depends on the result
407 A data dependency barrier is a partial ordering on interdependent loads
408 only; it is not required to have any effect on stores, independent loads
409 or overlapping loads.
417 touched by the load will be perceptible to any loads issued after the data
[all …]
/Linux-v5.10/arch/mips/include/asm/
mips-r2-to-r6-emul.h:22 u64 loads; member
fpu_emulator.h:26 unsigned long loads; member
/Linux-v5.10/arch/mips/kernel/
mips-r2-to-r6-emul.c:1274 MIPS_R2_STATS(loads); in mipsr2_decoder()
1348 MIPS_R2_STATS(loads); in mipsr2_decoder()
1608 MIPS_R2_STATS(loads); in mipsr2_decoder()
1727 MIPS_R2_STATS(loads); in mipsr2_decoder()
2267 (unsigned long)__this_cpu_read(mipsr2emustats.loads), in mipsr2_emul_show()
2268 (unsigned long)__this_cpu_read(mipsr2bdemustats.loads)); in mipsr2_emul_show()
2324 __this_cpu_write((mipsr2emustats).loads, 0); in mipsr2_clear_show()
2325 __this_cpu_write((mipsr2bdemustats).loads, 0); in mipsr2_clear_show()
/Linux-v5.10/kernel/debug/kdb/
kdb_main.c:2503 val->loads[0] = avenrun[0]; in kdb_sysinfo()
2504 val->loads[1] = avenrun[1]; in kdb_sysinfo()
2505 val->loads[2] = avenrun[2]; in kdb_sysinfo()
2549 LOAD_INT(val.loads[0]), LOAD_FRAC(val.loads[0]), in kdb_summary()
2550 LOAD_INT(val.loads[1]), LOAD_FRAC(val.loads[1]), in kdb_summary()
2551 LOAD_INT(val.loads[2]), LOAD_FRAC(val.loads[2])); in kdb_summary()
/Linux-v5.10/tools/memory-model/Documentation/
explanation.txt:78 for the loads, the model will predict whether it is possible for the
79 code to run in such a way that the loads will indeed obtain the
141 shared memory locations and another CPU loads from those locations in
153 A memory model will predict what values P1 might obtain for its loads
196 Since r1 = 1, P0 must store 1 to flag before P1 loads 1 from
197 it, as loads can obtain values only from earlier stores.
199 P1 loads from flag before loading from buf, since CPUs execute
222 each CPU stores to its own shared location and then loads from the
271 X: P1 loads 1 from flag executes before
272 Y: P1 loads 0 from buf executes before
[all …]
recipes.txt:46 tearing, load/store fusing, and invented loads and stores.
204 and another CPU execute a pair of loads from this same pair of variables,
311 smp_rmb() macro orders prior loads against later loads. Therefore, if
354 second, while another CPU loads from the second variable and then stores
475 that one CPU first stores to one variable and then loads from a second,
476 while another CPU stores to the second variable and then loads from the
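The message-passing pattern these recipes.txt hits describe (one CPU stores a payload and then a flag; another loads the flag and then the payload) needs release/acquire ordering so a reader that sees the flag also sees the data. A Python analogue of the pattern using threading.Event, whose set()/wait() pair provides the same release/acquire pairing; this is an illustration of the recipe, not the kernel's smp_store_release()/smp_load_acquire():

```python
import threading

buf = 0
flag = threading.Event()
result = []

def writer():
    global buf
    buf = 42        # store the payload first...
    flag.set()      # ...then publish it (release side)

def reader():
    flag.wait()           # acquire side: pairs with flag.set()
    result.append(buf)    # guaranteed to observe the earlier store

t_w = threading.Thread(target=writer)
t_r = threading.Thread(target=reader)
t_r.start(); t_w.start()
t_w.join(); t_r.join()
print(result[0])  # -> 42
```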
/Linux-v5.10/tools/testing/selftests/livepatch/
README:7 The test suite loads and unloads several test kernel modules to verify
/Linux-v5.10/arch/powerpc/lib/
memcpy_64.S:115 ld r9,0(r4) # 3+2n loads, 2+2n stores
127 0: ld r0,0(r4) # 4+2n loads, 3+2n stores
/Linux-v5.10/tools/testing/selftests/powerpc/copyloops/
memcpy_64.S:115 ld r9,0(r4) # 3+2n loads, 2+2n stores
127 0: ld r0,0(r4) # 4+2n loads, 3+2n stores
/Linux-v5.10/tools/testing/selftests/bpf/
test_bpftool.py:43 return json.loads(res)
/Linux-v5.10/Documentation/ABI/stable/
vdso:7 On some architectures, when the kernel loads any userspace program it
/Linux-v5.10/Documentation/admin-guide/LSM/
LoadPin.rst:29 still use LoadPin to protect the integrity of other files kernel loads. The
