/Linux-v5.15/mm/ |
D | zpool.c |
    109  * the requested module, if needed, but there is no guarantee the module will
    150  * Implementations must guarantee this to be thread-safe.
    209  * Implementations must guarantee this to be thread-safe,
    234  * Implementations must guarantee this to be thread-safe.
    250  * Implementations must guarantee this to be thread-safe.
    271  * Implementations must guarantee this to be thread-safe.
    286  * This frees previously allocated memory. This does not guarantee
    290  * Implementations must guarantee this to be thread-safe,
    313  * Implementations must guarantee this to be thread-safe.
|
/Linux-v5.15/kernel/sched/ |
D | membarrier.c |
    21   * order to enforce the guarantee that any writes occurring on CPU0 before
    43   * and r2 == 0. This violates the guarantee that membarrier() is
    57   * order to enforce the guarantee that any writes occurring on CPU1 before
    78   * the guarantee that membarrier() is supposed to provide.
    177  * A sync_core() would provide this guarantee, but  in ipi_sync_core()
    210  * guarantee that no memory access following registration is reordered  in ipi_sync_rq_state()
    220  * guarantee that no memory access prior to exec is reordered after  in membarrier_exec_mmap()
    437  * mm and in the current runqueue to guarantee that no memory  in sync_runqueues_membarrier_state()
|
/Linux-v5.15/tools/testing/selftests/rcutorture/formal/srcu-cbmc/include/linux/ |
D | types.h |
    129  * The alignment is required to guarantee that bits 0 and 1 of @next will be
    133  * This guarantee is important for few reasons:
    136  * which encode PageTail() in bit 0. The guarantee is needed to avoid
|
/Linux-v5.15/include/linux/ |
D | rbtree_latch.h |
    9    * lockless lookups; we cannot guarantee they return a correct result.
    21   * However, while we have the guarantee that there is at all times one stable
    22   * copy, this does not guarantee an iteration will not observe modifications.
    61   * guarantee on which of the elements matching the key is found. See
|
D | types.h |
    210  * The alignment is required to guarantee that bit 0 of @next will be
    214  * This guarantee is important for few reasons:
    217  * which encode PageTail() in bit 0. The guarantee is needed to avoid
|
D | u64_stats_sync.h |
    25   * 4) If reader fetches several counters, there is no guarantee the whole values
    51   * snapshot for each variable (but no guarantee on several ones)
|
/Linux-v5.15/arch/x86/include/asm/vdso/ |
D | gettimeofday.h |
    204  * Note: The kernel and hypervisor must guarantee that cpu ID  in vread_pvclock()
    208  * preemption, it cannot guarantee that per-CPU pvclock time  in vread_pvclock()
    214  * guarantee than we get with a normal seqlock.  in vread_pvclock()
    216  * On Xen, we don't appear to have that guarantee, but Xen still  in vread_pvclock()
|
/Linux-v5.15/fs/verity/ |
D | Kconfig |
    54   used to provide an authenticity guarantee for verity files, as
    57   authenticity guarantee.
|
/Linux-v5.15/kernel/printk/ |
D | printk_ringbuffer.c |
    455  * Guarantee the state is loaded before copying the descriptor  in desc_read()
    485  * 1. Guarantee the descriptor content is loaded before re-checking  in desc_read()
    501  * 2. Guarantee the record data is loaded before re-checking the  in desc_read()
    674  * 1. Guarantee the block ID loaded in  in data_push_tail()
    701  * 2. Guarantee the descriptor state loaded in  in data_push_tail()
    741  * Guarantee any descriptor states that have transitioned to  in data_push_tail()
    826  * Guarantee any descriptor states that have transitioned to  in desc_push_tail()
    836  * Guarantee the last state load from desc_read() is before  in desc_push_tail()
    888  * Guarantee the head ID is read before reading the tail ID.  in desc_reserve()
    922  * 1. Guarantee the tail ID is read before validating the  in desc_reserve()
    [all …]
|
/Linux-v5.15/Documentation/locking/ |
D | spinlocks.rst |
    19   spinlock itself will guarantee the global lock, so it will guarantee that
    117  guarantee the same kind of exclusive access, and it will be much faster.
|
/Linux-v5.15/Documentation/networking/ |
D | page_pool.rst |
    63   This lockless guarantee naturally comes from running under a NAPI softirq.
    64   The protection doesn't strictly have to be NAPI, any guarantee that allocating
    87   must guarantee safe context (e.g NAPI), since it will recycle the page
|
/Linux-v5.15/Documentation/core-api/ |
D | refcount-vs-atomic.rst |
    84   Memory ordering guarantee changes:
    97   Memory ordering guarantee changes:
    108  Memory ordering guarantee changes:
|
/Linux-v5.15/Documentation/driver-api/ |
D | reset.rst |
    87   Exclusive resets on the other hand guarantee direct control.
    99   is no guarantee that calling reset_control_assert() on a shared reset control
    152  The reset control API does not guarantee the order in which the individual
|
/Linux-v5.15/tools/memory-model/Documentation/ |
D | ordering.txt |
    101  with void return types) do not guarantee any ordering whatsoever. Nor do
    106  operations such as atomic_read() do not guarantee full ordering, and
    130  such as atomic_inc() and atomic_dec() guarantee no ordering whatsoever.
    150  atomic_inc() implementations do not guarantee full ordering, thus
    278  from "x" instead of writing to it. Then an smp_wmb() could not guarantee
    501  and further do not guarantee "atomic" access. For example, the compiler
|
/Linux-v5.15/Documentation/driver-api/usb/ |
D | anchors.rst |
    55   Therefore no guarantee is made that the URBs have been unlinked when
    82   destinations in one anchor you have no guarantee the chronologically
|
/Linux-v5.15/Documentation/ |
D | memory-barriers.txt |
    332  of the standard containing this guarantee is Section 3.14, which
    382  A write memory barrier gives a guarantee that all the STORE operations
    436  A read barrier is a data dependency barrier plus a guarantee that all the
    453  A general memory barrier gives a guarantee that all the LOAD and STORE
    524  There are certain things that the Linux kernel memory barriers do not guarantee:
    526  (*) There is no guarantee that any of the memory accesses specified before a
    531  (*) There is no guarantee that issuing a memory barrier on one CPU will have
    536  (*) There is no guarantee that a CPU will see the correct order of effects
    541  (*) There is no guarantee that some intervening piece of off-the-CPU
    878  However, they do -not- guarantee any other sort of ordering:
    [all …]
|
/Linux-v5.15/arch/arc/include/asm/ |
D | futex.h |
    82   preempt_disable(); /* to guarantee atomic r-m-w of futex op */  in arch_futex_atomic_op_inuser()
    131  preempt_disable(); /* to guarantee atomic r-m-w of futex op */  in futex_atomic_cmpxchg_inatomic()
|
/Linux-v5.15/Documentation/sh/ |
D | booting.rst |
    8    guarantee any particular initial register state, kernels built to
|
/Linux-v5.15/Documentation/RCU/Design/Requirements/ |
D | Requirements.rst |
    58   #. `Grace-Period Guarantee`_
    59   #. `Publish/Subscribe Guarantee`_
    64   Grace-Period Guarantee
    67   RCU's grace-period guarantee is unusual in being premeditated: Jack
    68   Slingwine and I had this guarantee firmly in mind when we started work
    71   understanding of this guarantee.
    73   RCU's grace-period guarantee allows updaters to wait for the completion
    83   This guarantee allows ordering to be enforced with extremely low
    174  the synchronize_rcu() in start_recovery() to guarantee that
    196  Although RCU's grace-period guarantee is useful in and of itself, with
    [all …]
|
/Linux-v5.15/drivers/net/wireless/ti/wl1251/ |
D | io.c |
    145  /* Guarantee that the memory partition doesn't overlap the  in wl1251_set_partition()
    156  /* Guarantee that the register partition doesn't overlap the  in wl1251_set_partition()
|
/Linux-v5.15/Documentation/block/ |
D | stat.rst |
    15   By having a single file, the kernel can guarantee that the statistics
    18   each, it would be impossible to guarantee that a set of readings
|
/Linux-v5.15/Documentation/RCU/ |
D | UP.rst |
    47   its arguments would cause it to fail to make the fundamental guarantee
    76   It is far better to guarantee that callbacks are invoked
|
/Linux-v5.15/arch/s390/kernel/ |
D | kprobes_insn_page.S |
    8    * The page must be within the kernel image to guarantee that the
|
/Linux-v5.15/net/smc/ |
D | smc_cdc.c |
    47   /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */  in smc_cdc_tx_handler()
    255  /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */  in smcd_cdc_msg_send()
    331  /* guarantee 0 <= peer_rmbe_space <= peer_rmbe_size */  in smc_cdc_msg_recv_action()
    343  /* guarantee 0 <= bytes_to_rcv <= rmb_desc->len */  in smc_cdc_msg_recv_action()
|
/Linux-v5.15/arch/arm64/kernel/ |
D | mte.c |
    52   * memory access, but on the current thread we do not guarantee that  in mte_sync_page_tags()
    206  * in __switch_to() to guarantee that the indirect writes to TFSR_EL1  in mte_thread_switch()
    219  * The barriers are required to guarantee that the indirect writes  in mte_suspend_enter()
|