1 /* SPDX-License-Identifier: GPL-2.0+ */
3 * Read-Copy Update mechanism for mutual exclusion
15 * For detailed explanation of Read-Copy Update mechanism see -
34 #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b))
35 #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b))
37 #define USHORT_CMP_GE(a, b) (USHRT_MAX / 2 >= (unsigned short)((a) - (b)))
38 #define USHORT_CMP_LT(a, b) (USHRT_MAX / 2 < (unsigned short)((a) - (b)))
51 // not-yet-completed RCU grace periods.
55 * same_state_synchronize_rcu - Are two old-state values identical?
56 * @oldstate1: First old-state value.
57 * @oldstate2: Second old-state value.
59 * The two old-state values must have been obtained from either
63 * are tracked by old-state values to push these values to a list header,
79 * nesting depth, but makes sense only if CONFIG_PREEMPT_RCU -- in other
82 #define rcu_preempt_depth() READ_ONCE(current->rcu_read_lock_nesting)
145 static inline int rcu_nocb_cpu_offload(int cpu) { return -EINVAL; }
151 * RCU_NONIDLE - Indicate idle-loop code that needs RCU readers
154 * RCU read-side critical sections are forbidden in the inner idle loop,
155 * that is, between the ct_idle_enter() and the ct_idle_exit() -- RCU
156 * will happily ignore any such read-side critical sections. However,
164 * on the order of a million or so, even on 32-bit systems). It is
176 * Note a quasi-voluntary context switch for RCU-tasks's benefit.
184 if (!(preempt) && READ_ONCE((t)->rcu_tasks_holdout)) \
185 WRITE_ONCE((t)->rcu_tasks_holdout, false); \
196 // Bits for ->trc_reader_special.b.need_qs field.
205 int ___rttq_nesting = READ_ONCE((t)->trc_reader_nesting); \
207 if (likely(!READ_ONCE((t)->trc_reader_special.b.need_qs)) && \
211 !READ_ONCE((t)->trc_reader_special.b.blocked)) { \
244 * cond_resched_tasks_rcu_qs - Report potential quiescent states to RCU
247 * report potential quiescent states to RCU-tasks even if the cond_resched()
348 * RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
365 "Illegal context switch in RCU read-side critical section");
376 "Illegal context switch in RCU-bh read-side critical section"); \
378 "Illegal context switch in RCU-sched read-side critical section"); \
410 * unrcu_pointer - mark a pointer as not being RCU protected
447 * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
453 * rcu_assign_pointer() - assign to RCU-protected pointer
457 * Assigns the specified value to the specified RCU-protected
465 * will be dereferenced by RCU read-side code.
472 * impossible-to-diagnose memory corruption. So please be careful.
479 * macros, this execute-arguments-only-once property is important, so
495 * rcu_replace_pointer() - replace an RCU pointer, returning its old value
500 * Perform a replacement, where @rcu_ptr is an RCU-annotated
513 * rcu_access_pointer() - fetch RCU pointer with no dereferencing
516 * Return the value of the specified RCU-protected pointer, but omit the
517 * lockdep checks for being in an RCU read-side critical section. This is
519 * not dereferenced, for example, when testing an RCU-protected pointer
521 * where update-side locks prevent the value of the pointer from changing,
523 * Within an RCU read-side critical section, there is little reason to
532 * It is also permissible to use rcu_access_pointer() when read-side
536 * down multi-linked structures after a grace period has elapsed. However,
542 * rcu_dereference_check() - rcu_dereference with debug checking
550 * An implicit check for being in an RCU read-side critical section
555 * bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock));
557 * could be used to indicate to lockdep that foo->bar may only be dereferenced
559 * the bar struct at foo->bar is held.
565 * bar = rcu_dereference_check(foo->bar, lockdep_is_held(&foo->lock) ||
566 * atomic_read(&foo->usage) == 0);
579 * rcu_dereference_bh_check() - rcu_dereference_bh with debug checking
583 * This is the RCU-bh counterpart to rcu_dereference_check(). However,
595 * rcu_dereference_sched_check() - rcu_dereference_sched with debug checking
599 * This is the RCU-sched counterpart to rcu_dereference_check().
615 * The no-tracing version of rcu_dereference_raw() must not call
622 * rcu_dereference_protected() - fetch RCU pointer when updates prevented
626 * Return the value of the specified RCU-protected pointer, but omit
627 * the READ_ONCE(). This is useful in cases where update-side locks
633 * This function is only for update-side use. Using this function
642 * rcu_dereference() - fetch RCU-protected pointer for dereferencing
650 * rcu_dereference_bh() - fetch an RCU-bh-protected pointer for dereferencing
658 * rcu_dereference_sched() - fetch RCU-sched-protected pointer for dereferencing
666 * rcu_pointer_handoff() - Hand off a pointer from RCU to other mechanism
678 * if (!atomic_inc_not_zero(p->refcnt))
688 * rcu_read_lock() - mark the beginning of an RCU read-side critical section
691 * are within RCU read-side critical sections, then the
694 * on one CPU while other CPUs are within RCU read-side critical
700 * code with interrupts or softirqs disabled. In pre-v5.0 kernels, which
705 * with new RCU read-side critical sections. One way that this can happen
707 * read-side critical section, (2) CPU 1 invokes call_rcu() to register
708 * an RCU callback, (3) CPU 0 exits the RCU read-side critical section,
709 * (4) CPU 2 enters an RCU read-side critical section, (5) the RCU
710 * callback is invoked. This is legal, because the RCU read-side critical
716 * RCU read-side critical sections may be nested. Any deferred actions
717 * will be deferred until the outermost RCU read-side critical section
722 * read-side critical section that would block in a !PREEMPTION kernel.
725 * In non-preemptible RCU implementations (pure TREE_RCU and TINY_RCU),
726 * it is illegal to block while in an RCU read-side critical section.
728 * kernel builds, RCU read-side critical sections may be preempted,
730 * implementations in real-time (with -rt patchset) kernel builds, RCU
731 * read-side critical sections may be preempted and they may also block, but
746 * a bug -- this property is what provides RCU's performance benefits.
754 * rcu_read_unlock() - marks the end of an RCU read-side critical section.
759 * also extends to the scheduler's runqueue and priority-inheritance
760 * spinlocks, courtesy of the quiescent-state deferral that is carried
775 * rcu_read_lock_bh() - mark the beginning of an RCU-bh critical section
779 * read-side critical section. However, please note that this equivalence
798 * rcu_read_unlock_bh() - marks the end of a softirq-only RCU critical section
812 * rcu_read_lock_sched() - mark the beginning of an RCU-sched critical section
815 * Read-side critical sections can also be introduced by anything else that
843 * rcu_read_unlock_sched() - marks the end of an RCU-classic critical section
864 * RCU_INIT_POINTER() - initialize an RCU protected pointer
868 * Initialize an RCU-protected pointer in special cases where readers
878 * a. You have not made *any* reader-visible changes to
880 * b. It is OK for readers accessing this structure from its
886 * result in impossible-to-diagnose memory corruption. As in the structures
888 * see pre-initialized values of the referenced data structure. So
891 * If you are creating an RCU-protected linked structure that is accessed
892 * by a single external-to-structure RCU-protected pointer, then you may
893 * use RCU_INIT_POINTER() to initialize the internal RCU-protected
895 * external-to-structure pointer *after* you have completely initialized
896 * the reader-accessible portions of the linked structure.
908 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer
912 * GCC-style initialization for an RCU-protected pointer in a structure field.
924 * kfree_rcu() - kfree an object after a grace period.
925 * @ptr: pointer to kfree for both single- and double-argument invocations.
927 * but only for double-argument invocations.
932 * high-latency rcu_barrier() function at module-unload time.
937 * Because the functions are not allowed in the low-order 4096 bytes of
939 * If the offset is larger than 4095 bytes, a compile-time error will
953 * kvfree_rcu() - kvfree an object after a grace period.
956 * based on whether an object is head-less or not. If it
965 * When it comes to head-less variant, only one argument
973 * Please note, head-less way of freeing is permitted to
988 kvfree_call_rcu(&((___p)->rhf), (rcu_callback_t)(unsigned long) \
1002 * Place this after a lock-acquisition primitive to guarantee that
1017 * rcu_head_init - Initialize rcu_head for rcu_head_after_call_rcu()
1028 rhp->func = (rcu_callback_t)~0L;
1032 * rcu_head_after_call_rcu() - Has this rcu_head been passed to call_rcu()?
1041 * in an RCU read-side critical section that includes a read-side fetch
1047 rcu_callback_t func = READ_ONCE(rhp->func);