| /Linux-v6.1/tools/perf/Documentation/ |
| D | perf-c2c.txt | 33 for cachelines with highest contention - highest number of HITM accesses. 186 - cacheline percentage of all Remote/Local HITM accesses 189 - cacheline percentage of all peer accesses 198 - sum of all cacheline accesses 201 - sum of all load accesses 204 - sum of all store accesses 207 L1Hit - store accesses that hit L1 208 L1Miss - store accesses that missed L1 209 N/A - store accesses where the memory level is not available 215 - count of LLC load accesses, including LLC hits and LLC HITMs [all …]
|
| /Linux-v6.1/tools/memory-model/Documentation/ |
| D | ordering.txt | 15 2. Ordered memory accesses. These operations order themselves 16 against some or all of the CPU's prior accesses or some or all 17 of the CPU's subsequent accesses, depending on the subcategory 20 3. Unordered accesses, as the name indicates, have no ordering 48 a device driver, which must correctly order accesses to a physical 68 accesses against all subsequent accesses from the viewpoint of all CPUs. 89 CPU's accesses into three groups: 242 Ordered Memory Accesses 245 The Linux kernel provides a wide variety of ordered memory accesses: 264 of the CPU's prior memory accesses. Release operations often provide [all …]
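
The ordering.txt excerpt distinguishes fully ordered operations such as smp_mb() from acquire/release operations that order only one direction of a CPU's accesses. A minimal sketch of the release/acquire case, using the kernel's smp_store_release()/smp_load_acquire() helpers on hypothetical variables (data, ready):

    #include <asm/barrier.h>

    struct payload { int a; int b; };
    static struct payload data;          /* hypothetical shared structure */
    static int ready;                    /* hypothetical publication flag */

    /* Producer: the release store orders the prior plain stores to 'data'
     * before the store to 'ready', as observed by an acquiring reader. */
    static void producer(void)
    {
            data.a = 1;
            data.b = 2;
            smp_store_release(&ready, 1);
    }

    /* Consumer: the acquire load orders the load of 'ready' before all of
     * this CPU's subsequent accesses, so once it sees 1 the plain reads of
     * data.a and data.b are guaranteed to see the producer's values. */
    static int consumer(void)
    {
            if (smp_load_acquire(&ready))
                    return data.a + data.b;
            return -1;
    }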
|
| D | access-marking.txt | 1 MARKING SHARED-MEMORY ACCESSES 5 normal accesses to shared memory, that is "normal" as in accesses that do 7 document these accesses, both with comments and with special assertions 17 1. Plain C-language accesses (unmarked), for example, "a = b;" 33 Neither plain C-language accesses nor data_race() (#1 and #2 above) place 40 C-language accesses. It is permissible to combine #2 and #3, for example, 45 C-language accesses, but marking all accesses involved in a given data 54 data_race() and even plain C-language accesses is preferable to 82 reads can enable better checking of the remaining accesses implementing 129 the other accesses to the relevant shared variables. But please note [all …]
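
access-marking.txt contrasts plain (unmarked) C-language accesses, data_race(), and the marked READ_ONCE()/WRITE_ONCE() forms. As a small sketch on a hypothetical shared counter, the usual split is: mark the accesses that implement the algorithm, and wrap purely diagnostic racy accesses in data_race():

    #include <linux/compiler.h>

    static int shared_counter;           /* hypothetical, updated concurrently */

    /* Marked accesses: READ_ONCE()/WRITE_ONCE() prevent the compiler from
     * tearing or fusing the accesses and tell KCSAN they are intentional. */
    static void bump(void)
    {
            WRITE_ONCE(shared_counter, READ_ONCE(shared_counter) + 1);
    }

    /* data_race(): the access stays a plain load, but KCSAN is told that
     * any data race on it is intentional, e.g. a best-effort statistic. */
    static int snapshot(void)
    {
            return data_race(shared_counter);
    }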
|
| D | explanation.txt | 32 24. PLAIN ACCESSES AND DATA RACES 86 factors such as DMA and mixed-size accesses.) But on multiprocessor 87 systems, with multiple CPUs making concurrent accesses to shared 140 This pattern of memory accesses, where one CPU stores values to two 151 accesses by the CPUs. 276 In short, if a memory model requires certain accesses to be ordered, 278 if those accesses would form a cycle, then the memory model predicts 305 Atomic read-modify-write accesses, such as atomic_inc() or xchg(), 312 logical computations, control-flow instructions, or accesses to 342 po-loc is a sub-relation of po. It links two memory accesses when the [all …]
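
The explanation.txt lines reference the classic pattern in which one CPU stores values to two locations while another CPU loads them back. A minimal sketch of that message-passing shape, on hypothetical variables buf and flag, shows why ordering matters: without barriers the reader can see flag set while still reading the old buf.

    static int buf, flag;                /* hypothetical shared variables */

    static void writer_cpu(void)
    {
            WRITE_ONCE(buf, 1);
            /* No release ordering here: the two stores may be observed in
             * either order by another CPU on weakly ordered hardware. */
            WRITE_ONCE(flag, 1);
    }

    static void reader_cpu(void)
    {
            int f = READ_ONCE(flag);
            int b = READ_ONCE(buf);      /* can legally be 0 even if f == 1 */

            (void)f;
            (void)b;
    }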
|
| D | glossary.txt | 83 each pair of memory accesses, the outcome where r0, r1, and r2 118 its CPU's prior accesses with all of that CPU's subsequent 119 accesses, or a marked access such as atomic_add_return() 120 that orders all of its CPU's prior accesses, itself, and 121 all of its CPU's subsequent accesses. 123 Happens-Before (hb): A relation between two accesses in which LKMM 134 data between two CPUs requires that both CPUs order their accesses.
|
| /Linux-v6.1/Documentation/dev-tools/ |
| D | kcsan.rst | 78 the racing thread, but could also occur due to e.g. DMA accesses. Such reports 85 It may be desirable to disable data race detection for specific accesses, 90 any data races due to accesses in ``expr`` should be ignored and resulting 92 `"Marking Shared-Memory Accesses" in the LKMM`_ for more information. 114 .. _"Marking Shared-Memory Accesses" in the LKMM: https://git.kernel.org/pub/scm/linux/kernel/git/t… 128 accesses are aligned writes up to word size. 190 In an execution, two memory accesses form a *data race* if they *conflict*, 194 Accesses and Data Races" in the LKMM`_. 196 .. _"Plain Accesses and Data Races" in the LKMM: https://git.kernel.org/pub/scm/linux/kernel/git/to… 236 KCSAN relies on observing that two accesses happen concurrently. Crucially, we [all …]
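
The kcsan.rst excerpt describes disabling data race detection for specific accesses with data_race(expr). A short, hedged sketch of that plus the related __no_kcsan function attribute, on a hypothetical statistics counter where missed updates are acceptable:

    static unsigned long stats_hits;     /* hypothetical best-effort counter */

    /* Accesses inside data_race() remain plain accesses, but races on them
     * are treated as intentional and produce no KCSAN report. */
    static void count_hit(void)
    {
            data_race(stats_hits++);
    }

    /* The __no_kcsan attribute disables instrumentation for every access in
     * the function, rather than for a single expression. */
    static __no_kcsan unsigned long read_hits(void)
    {
            return stats_hits;
    }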
|
| /Linux-v6.1/include/linux/ |
| D | kcsan-checks.h | 4 * uninstrumented accesses, or change KCSAN checking behaviour of accesses. 87 * Accesses within the atomic region may appear to race with other accesses but 100 * Accesses within the atomic region may appear to race with other accesses but 111 * kcsan_atomic_next - consider following accesses as atomic 113 * Force treating the next n memory accesses for the current context as atomic 116 * @n: number of following memory accesses to treat as atomic. 123 * Set the access mask for all accesses for the current context if non-zero. 163 * Scoped accesses are implemented by appending @sa to an internal list for the 223 * Only use these to disable KCSAN for accesses in the current compilation unit; 323 * Check for atomic accesses: if atomic accesses are not ignored, this simply [all …]
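
kcsan-checks.h provides runtime hooks for adjusting how KCSAN treats accesses from the current context. A hedged sketch of two of them, kcsan_atomic_next() and the kcsan_disable_current()/kcsan_enable_current() pair, around hypothetical intentionally racy reads:

    #include <linux/kcsan-checks.h>

    static unsigned long shared_flags;   /* hypothetical, read racily below */

    /* Treat the next memory access in this context as atomic, so the plain
     * read that follows is not reported as a data race. */
    static unsigned long peek_flags(void)
    {
            kcsan_atomic_next(1);
            return shared_flags;
    }

    /* Suppress reporting for a whole region: accesses between disable and
     * enable are ignored for the current context only. */
    static unsigned long debug_peek_flags(void)
    {
            unsigned long val;

            kcsan_disable_current();
            val = shared_flags;
            kcsan_enable_current();
            return val;
    }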
|
| D | virtio_pci_modern.h | 28 /* So we can sanity-check accesses. */ 44 * Type-safe wrappers for io accesses. 48 * method, i.e. 32-bit accesses for 32-bit fields, 16-bit accesses 49 * for 16-bit fields and 8-bit accesses for 8-bit fields.
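
The virtio_pci_modern.h comment is about forcing each config field to be accessed with an operation of exactly its own width. The following is a generic illustration of that rule using the standard ioread/iowrite helpers, not the file's actual wrappers; the offsets are hypothetical:

    #include <linux/io.h>

    static void read_device_config(void __iomem *cfg)
    {
            /* 8-bit field -> 8-bit access, 16-bit field -> 16-bit access,
             * 32-bit field -> 32-bit access; never split or merge them. */
            u8  status   = ioread8(cfg + 0x00);    /* hypothetical offsets */
            u16 queue_sz = ioread16(cfg + 0x04);
            u32 features = ioread32(cfg + 0x08);

            iowrite16(queue_sz, cfg + 0x04);
            (void)status;
            (void)features;
    }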
|
| /Linux-v6.1/tools/testing/selftests/bpf/progs/ |
| D | user_ringbuf_fail.c | 32 /* A callback that accesses a dynptr in a bpf_user_ringbuf_drain callback should 54 /* A callback that accesses a dynptr in a bpf_user_ringbuf_drain callback should 73 /* A callback that accesses a dynptr in a bpf_user_ringbuf_drain callback should 92 /* A callback that accesses a dynptr in a bpf_user_ringbuf_drain callback should 113 /* A callback that accesses a dynptr in a bpf_user_ringbuf_drain callback should 132 /* A callback that accesses a dynptr in a bpf_user_ringbuf_drain callback should 151 /* A callback that accesses a dynptr in a bpf_user_ringbuf_drain callback should 168 /* A callback that accesses a dynptr in a bpf_user_ringbuf_drain callback should
|
| /Linux-v6.1/Documentation/core-api/ |
| D | unaligned-memory-access.rst | 2 Unaligned Memory Accesses 15 unaligned accesses, why you need to write code that doesn't cause them, 22 Unaligned memory accesses occur when you try to read N bytes of data starting 59 - Some architectures are able to perform unaligned memory accesses 61 - Some architectures raise processor exceptions when unaligned accesses 64 - Some architectures raise processor exceptions when unaligned accesses 72 memory accesses to happen, your code will not work correctly on certain 103 to pad structures so that accesses to fields are suitably aligned (assuming 136 lead to unaligned accesses when accessing fields that do not satisfy 183 Here is another example of some code that could cause unaligned accesses:: [all …]
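
unaligned-memory-access.rst recommends padding structures or using the unaligned access helpers whenever a pointer may not meet the type's natural alignment. A brief sketch, assuming a hypothetical packet parser where multi-byte fields sit at arbitrary offsets in a byte buffer:

    #include <asm/unaligned.h>

    /* 'p' may point anywhere inside a received buffer; dereferencing it as
     * (u32 *) directly could trap or be slow on some architectures. */
    static u32 parse_be32_field(const void *p)
    {
            return get_unaligned_be32(p);
    }

    static void patch_le16_field(void *p, u16 val)
    {
            put_unaligned_le16(val, p);
    }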
|
| /Linux-v6.1/kernel/kcsan/ |
| D | permissive.h | 3 * Special rules for ignoring entire classes of data-racy memory accesses. None 44 * Rules here are only for plain read accesses, so that we still report in kcsan_ignore_data_race() 45 * data races between plain read-write accesses. in kcsan_ignore_data_race() 60 * While it is still recommended that such accesses be marked in kcsan_ignore_data_race() 66 * optimizations (including those that tear accesses), because no more in kcsan_ignore_data_race() 67 * than 1 bit changed, the plain accesses are safe despite the presence in kcsan_ignore_data_race()
|
| /Linux-v6.1/tools/perf/pmu-events/arch/x86/snowridgex/ |
| D | frontend.json | 74 "EventName": "ICACHE.ACCESSES", 77 … accesses, so that multiple back to back fetches to the exact same cache line or byte chunk count … 89 … accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count… 101 … accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count…
|
| /Linux-v6.1/tools/perf/pmu-events/arch/x86/elkhartlake/ |
| D | frontend.json | 74 "EventName": "ICACHE.ACCESSES", 77 … accesses, so that multiple back to back fetches to the exact same cache line or byte chunk count … 89 … accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count… 101 … accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count…
|
| /Linux-v6.1/Documentation/i2c/ |
| D | i2c-topology.rst | 83 This means that accesses to D2 are locked out for the full duration 84 of the entire operation. But accesses to D3 are possibly interleaved 165 This means that accesses to both D2 and D3 are locked out for the full 231 When device D1 is accessed, accesses to D2 are locked out for the 233 are locked). But accesses to D3 and D4 are possibly interleaved at 236 Accesses to D3 lock out D1 and D2, but accesses to D4 are still possibly 254 When device D1 is accessed, accesses to D2 and D3 are locked out 256 root adapter). But accesses to D4 are possibly interleaved at any 267 mux. In that case, any interleaved accesses to D4 might close M2 288 When D1 is accessed, accesses to D2 are locked out for the full [all …]
|
| /Linux-v6.1/tools/perf/pmu-events/arch/nds32/n13/ |
| D | atcpmu.json | 75 "PublicDescription": "uITLB accesses", 78 "BriefDescription": "V3 uITLB accesses" 81 "PublicDescription": "uDTLB accesses", 84 "BriefDescription": "V3 uDTLB accesses" 87 "PublicDescription": "MTLB accesses", 90 "BriefDescription": "V3 MTLB accesses" 108 "BriefDescription": "V3 ILM accesses"
|
| /Linux-v6.1/arch/arm/include/uapi/asm/ |
| D | byteorder.h | 6 * that byte accesses appear as: 8 * and word accesses (data or instruction) appear as: 11 * When in big endian mode, byte accesses appear as: 13 * and word accesses (data or instruction) appear as:
|
| D | swab.h | 6 * that byte accesses appear as: 8 * and word accesses (data or instruction) appear as: 11 * When in big endian mode, byte accesses appear as: 13 * and word accesses (data or instruction) appear as:
|
| /Linux-v6.1/tools/perf/pmu-events/arch/x86/amdzen1/ |
| D | recommended.json | 12 "BriefDescription": "All L1 Data Cache Accesses", 17 "BriefDescription": "All L2 Cache Accesses", 24 "BriefDescription": "L2 Cache Accesses from L1 Instruction Cache Misses (including prefetch)", 30 "BriefDescription": "L2 Cache Accesses from L1 Data Cache Misses (including prefetch)", 35 "BriefDescription": "L2 Cache Accesses from L2 HWPF", 90 "BriefDescription": "L3 Accesses",
|
| /Linux-v6.1/tools/perf/pmu-events/arch/x86/amdzen2/ |
| D | recommended.json | 12 "BriefDescription": "All L1 Data Cache Accesses", 17 "BriefDescription": "All L2 Cache Accesses", 24 "BriefDescription": "L2 Cache Accesses from L1 Instruction Cache Misses (including prefetch)", 30 "BriefDescription": "L2 Cache Accesses from L1 Data Cache Misses (including prefetch)", 35 "BriefDescription": "L2 Cache Accesses from L2 HWPF", 90 "BriefDescription": "L3 Accesses",
|
| /Linux-v6.1/arch/arm/include/asm/ |
| D | swab.h | 6 * that byte accesses appear as: 8 * and word accesses (data or instruction) appear as: 11 * When in big endian mode, byte accesses appear as: 13 * and word accesses (data or instruction) appear as:
|
| /Linux-v6.1/Documentation/driver-api/ |
| D | device-io.rst | 10 Bus-Independent Device Accesses 30 part of the CPU's address space is interpreted not as accesses to 31 memory, but as accesses to a device. Some architectures define devices 54 historical accident, these are named byte, word, long and quad accesses. 55 Both read and write accesses are supported; there is no prefetch support 119 Port Space Accesses 127 addresses is generally not as fast as accesses to the memory mapped 136 Accesses to this space are provided through a set of functions which 137 allow 8-bit, 16-bit and 32-bit accesses; also known as byte, word and 143 that accesses to their ports are slowed down. This functionality is [all …]
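
device-io.rst covers both memory-mapped accesses (the byte/word/long/quad accessors used on an ioremap()ed region) and the slower port-space accesses. A compact sketch with a hypothetical device address and hypothetical register offsets:

    #include <linux/io.h>

    static void mmio_example(void)
    {
            /* hypothetical device at a hypothetical physical address */
            void __iomem *regs = ioremap(0xfeb00000, 0x100);
            u32 status;

            if (!regs)
                    return;

            status = readl(regs + 0x10);   /* 32-bit ("long") access */
            writeb(0x01, regs + 0x04);     /* 8-bit ("byte") access  */
            (void)status;
            iounmap(regs);
    }

    static void port_example(void)
    {
            u8 v = inb(0x3f8);             /* port-space ("byte") access */
            outb(v, 0x3f8);
    }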
|
| /Linux-v6.1/tools/perf/pmu-events/arch/x86/amdzen3/ |
| D | recommended.json | 12 "BriefDescription": "All L1 Data Cache Accesses", 17 "BriefDescription": "All L2 Cache Accesses", 24 "BriefDescription": "L2 Cache Accesses from L1 Instruction Cache Misses (including prefetch)", 30 "BriefDescription": "L2 Cache Accesses from L1 Data Cache Misses (including prefetch)", 35 "BriefDescription": "L2 Cache Accesses from L2 HWPF", 90 "BriefDescription": "L3 Cache Accesses",
|
| /Linux-v6.1/Documentation/admin-guide/hw-vuln/ |
| D | special-register-buffer-data-sampling.rst | 8 infer values returned from special register accesses. Special register 9 accesses are accesses to off-core registers. According to Intel's evaluation, 70 accesses from other logical processors will be delayed until the special 82 #. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other 84 legacy locked cache-line-split accesses. 91 processors' memory accesses. The opt-out mechanism does not affect Intel SGX
|
| /Linux-v6.1/security/landlock/ |
| D | fs.c | 247 * the last one. When there are multiple requested accesses, for each in unmask_layers() 248 * policy layer, the full set of requested accesses may not be granted in unmask_layers() 313 /* Saves all handled accesses per layer. */ in init_layer_masks() 354 /* Ignores accesses that only make sense for directories. */ in no_more_access() 390 * Removes @layer_masks accesses that are not requested. 433 * check_access_path_dual - Check accesses for requests with a common path 437 * @access_request_parent1: Accesses to check, once @layer_masks_parent1 is 445 * means that @domain allows all possible Landlock accesses (i.e. not only 447 * initially refer to domain layer masks and, when the accesses for the 464 * checks that the collected accesses and the remaining ones are enough to [all …]
|
| /Linux-v6.1/arch/mips/kvm/ |
| D | Kconfig | 35 bool "Maintain counters for COP0 accesses" 38 Maintain statistics for Guest COP0 accesses. 39 A histogram of COP0 accesses is printed when the VM is
|