Lines Matching full:that

9 that make use of RCU.  Violating any of the rules listed below will
10 result in the same sorts of problems that leaving out a locking primitive
17 performance measurements show that RCU is nonetheless the right
36 of lockless algorithms that garbage collectors do.
54 relating to itself that other tasks can read, there by definition
55 can be no bottleneck). Note that the definition of "large" has
73 Please note that you *cannot* rely on code known to be built
86 any locks or atomic operations. This means that readers will
93 RCU-protected data structures that have been added to
99 locks (that are acquired by both readers and writers)
100 that guard per-element state. Fields that the readers
118 d. Carefully order the updates and the reads so that readers
129 changing data into a separate structure, so that the
139 accesses. The rcu_dereference() primitive ensures that
141 that the pointer points to. This really is necessary
147 Please note that compilers can also reorder code, and
149 just that. The rcu_dereference() primitive therefore also
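
For illustration, a minimal reader sketch using rcu_dereference(); the
struct foo type, the gp pointer, and read_a() are invented names, while
rcu_read_lock(), rcu_read_unlock(), and rcu_dereference() are the real
primitives being described here:

	#include <linux/rcupdate.h>

	struct foo {
		int a;
	};

	/* Hypothetical RCU-protected pointer published elsewhere. */
	static struct foo __rcu *gp;

	static int read_a(void)
	{
		struct foo *p;
		int a = -1;

		rcu_read_lock();
		/*
		 * rcu_dereference() fetches the pointer and orders that
		 * fetch before later loads from *p, so the reader cannot
		 * see pre-initialization contents of the structure.
		 */
		p = rcu_dereference(gp);
		if (p)
			a = p->a;
		rcu_read_unlock();
		return a;
	}
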
157 as the list_for_each_entry_rcu(). Note that it is
160 primitives. This is particularly useful in code that
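
A sketch of RCU-protected list traversal; struct item, item_list, and
lookup_value() are made up for the example, while list_for_each_entry_rcu()
is the primitive referred to above and performs the rcu_dereference()
internally:

	#include <linux/rculist.h>
	#include <linux/rcupdate.h>

	struct item {
		struct list_head node;
		int key;
		int value;
	};

	/* Hypothetical RCU-protected list head. */
	static LIST_HEAD(item_list);

	static int lookup_value(int key)
	{
		struct item *it;
		int ret = -1;

		rcu_read_lock();
		list_for_each_entry_rcu(it, &item_list, node) {
			if (it->key == key) {
				ret = it->value;
				break;
			}
		}
		rcu_read_unlock();
		return ret;
	}
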
190 e. Updates must ensure that initialization of a given
191 structure happens before pointers to that structure are
193 when publicizing a pointer to a structure that can
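
A sketch of the initialize-then-publish rule, assuming a hypothetical
struct foo, gp pointer, and gp_lock; rcu_assign_pointer() is what orders
the initialization before the moment readers can see the pointer:

	#include <linux/errno.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct foo {
		int a;
		int b;
	};

	static struct foo __rcu *gp;		/* hypothetical published pointer */
	static DEFINE_SPINLOCK(gp_lock);	/* hypothetical update-side lock */

	static int publish_foo(int a, int b)
	{
		struct foo *p = kzalloc(sizeof(*p), GFP_KERNEL);

		if (!p)
			return -ENOMEM;
		/* Fully initialize the structure first... */
		p->a = a;
		p->b = b;
		/*
		 * ...then publish it.  rcu_assign_pointer() orders the
		 * initialization before the pointer becomes visible to
		 * readers.  Any old value of gp would still need a grace
		 * period before being freed (not shown here).
		 */
		spin_lock(&gp_lock);
		rcu_assign_pointer(gp, p);
		spin_unlock(&gp_lock);
		return 0;
	}
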
201 to block, run that code in a workqueue handler scheduled from
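
When the update-side cleanup must block but the update itself runs in a
context that cannot (for example under a spinlock or in an interrupt
handler), the blocking part can be pushed to a workqueue, roughly as
sketched below; struct foo and the function names are made up, and for a
plain kfree() the simpler call_rcu()/kfree_rcu() would normally be
preferred:

	#include <linux/kernel.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>
	#include <linux/workqueue.h>

	struct foo {
		int a;
		struct work_struct free_work;
	};

	static void foo_free_fn(struct work_struct *work)
	{
		struct foo *p = container_of(work, struct foo, free_work);

		/* Process context: blocking is fine here. */
		synchronize_rcu();
		kfree(p);
	}

	/* Called after unpublishing p, possibly from atomic context. */
	static void foo_retire(struct foo *p)
	{
		INIT_WORK(&p->free_work, foo_free_fn);
		schedule_work(&p->free_work);	/* safe from atomic context */
	}
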
214 configuration-change operations that would not normally be
215 undertaken while a real-time workload is running. Note that
222 Restructure your code so that it batches the updates, allowing
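
One way to batch updates so that a single grace period covers all of them,
sketched with a hypothetical struct item, item_list, and item_lock; note
the second list_head, so that nodes removed with list_del_rcu() are never
re-linked where a concurrent reader might still follow them:

	#include <linux/list.h>
	#include <linux/rculist.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct item {
		struct list_head node;		/* linkage in the RCU-protected list */
		struct list_head reap_node;	/* private linkage for deferred freeing */
		int key;
	};

	static LIST_HEAD(item_list);		/* hypothetical RCU-protected list */
	static DEFINE_SPINLOCK(item_lock);	/* hypothetical update-side lock */

	static void remove_all(int key)
	{
		struct item *it, *tmp;
		LIST_HEAD(doomed);

		spin_lock(&item_lock);
		list_for_each_entry_safe(it, tmp, &item_list, node) {
			if (it->key == key) {
				list_del_rcu(&it->node);
				list_add(&it->reap_node, &doomed);
			}
		}
		spin_unlock(&item_lock);

		synchronize_rcu();	/* one grace period covers every removal above */

		list_for_each_entry_safe(it, tmp, &doomed, reap_node)
			kfree(it);
	}
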
234 rcu_read_unlock(), (2) any pair of primitives that disables
236 rcu_read_unlock_bh(), or (3) any pair of primitives that disables
246 context switches, that is, from blocking. If the updater uses
251 must use anything that disables preemption, for example,
263 that this usage is safe is that readers can use anything that
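
A sketch of the pairing: readers mark their critical sections with one of
the read-side pairs, and the updater must wait with the corresponding
update-side primitive before freeing anything those readers might still be
using.  Only the plain rcu_read_lock()/synchronize_rcu() pairing is shown,
and struct foo, gp, gp_lock, and the function names are invented:

	#include <linux/lockdep.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct foo {
		int a;
	};

	static struct foo __rcu *gp;
	static DEFINE_SPINLOCK(gp_lock);

	static int reader(void)
	{
		struct foo *p;
		int a = 0;

		rcu_read_lock();	/* read-side marker... */
		p = rcu_dereference(gp);
		if (p)
			a = p->a;
		rcu_read_unlock();	/* ...must pair with the updater's wait below */
		return a;
	}

	static void updater(struct foo *newp)
	{
		struct foo *old;

		spin_lock(&gp_lock);
		old = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
		rcu_assign_pointer(gp, newp);
		spin_unlock(&gp_lock);

		synchronize_rcu();	/* waits for all pre-existing rcu_read_lock() readers */
		kfree(old);
	}
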
278 primitive is that it automatically self-limits: if grace periods
301 the memory allocator, so that this wrapper function
315 here is that superuser already has lots of ways to crash
327 Note that although these primitives do take action to avoid
344 The reason that it is permissible to use RCU list-traversal
345 primitives when the update-side lock is held is that doing so
352 time that readers might be accessing that structure. In such
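
A sketch of update-side traversal while holding the lock; struct item,
item_list, item_lock, and find_locked() are hypothetical, and the optional
lockdep expression passed to list_for_each_entry_rcu() is available in more
recent kernels to keep CONFIG_PROVE_RCU quiet when the traversal is
protected by the lock rather than by rcu_read_lock():

	#include <linux/lockdep.h>
	#include <linux/rculist.h>
	#include <linux/rcupdate.h>
	#include <linux/spinlock.h>

	struct item {
		struct list_head node;
		int key;
	};

	static LIST_HEAD(item_list);
	static DEFINE_SPINLOCK(item_lock);	/* update-side lock */

	/* Caller holds item_lock, so no updater can run concurrently. */
	static struct item *find_locked(int key)
	{
		struct item *it;

		lockdep_assert_held(&item_lock);
		list_for_each_entry_rcu(it, &item_list, node,
					lockdep_is_held(&item_lock)) {
			if (it->key == key)
				return it;
		}
		return NULL;
	}
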
365 disable softirq on a given acquisition of that lock will result
367 your RCU callback while interrupting that acquisition's critical
371 the callback code simply wraps kfree(), so that this
372 is not an issue (or, more accurately, to the extent that it is
376 to safely access and/or modify that data structure.
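
A sketch of the common case where the callback only frees memory; struct
foo and the function names are invented.  The callback runs from softirq
(or an rcuo kthread) context, so it must not sleep, and any lock it did
acquire would have to be taken with spin_lock_bh() (or stronger) everywhere
else to avoid self-deadlock:

	#include <linux/kernel.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		int a;
		struct rcu_head rcu;	/* needed by call_rcu()/kfree_rcu() */
	};

	static void foo_rcu_free(struct rcu_head *head)
	{
		struct foo *p = container_of(head, struct foo, rcu);

		/* Softirq context: no sleeping, no blocking locks. */
		kfree(p);
	}

	/* Call this only after removing all reader-visible pointers to p. */
	static void foo_retire(struct foo *p)
	{
		call_rcu(&p->rcu, foo_rcu_free);
		/* Equivalent shorthand when the callback only frees:
		 *	kfree_rcu(p, rcu);
		 */
	}
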
378 Do not assume that RCU callbacks will be executed on the same
379 CPU that executed the corresponding call_rcu() or call_srcu().
381 callback pending, then that RCU callback will execute on some
389 In addition, do not assume that callbacks queued in a given order
390 will be invoked in that order, even if they all are queued on the
391 same CPU. Furthermore, do not assume that same-CPU callbacks will
395 might be concurrently invoked by that CPU's softirq handler and
396 that CPU's rcuo kthread. At such times, that CPU's callbacks
402 Please note that if you don't need to sleep in read-side critical
410 "struct srcu_struct" that defines the scope of a given
416 calls that have been passed the same srcu_struct. This property
445 Note that rcu_assign_pointer() relates to SRCU just as it does to
453 that readers can follow that could be affected by the
458 is the caller's responsibility to guarantee that any subsequent
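
A sketch of SRCU usage with a hypothetical struct foo, gp pointer, and
foo_mutex; the srcu_struct (here foo_srcu) defines the scope, so readers
and updaters must use the same one, and synchronize_srcu() waits only for
readers of that srcu_struct:

	#include <linux/mutex.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>
	#include <linux/srcu.h>

	struct foo {
		int a;
	};

	DEFINE_STATIC_SRCU(foo_srcu);		/* defines the SRCU domain */
	static struct foo __rcu *gp;
	static DEFINE_MUTEX(foo_mutex);		/* update-side lock */

	static int foo_read(void)
	{
		struct foo *p;
		int idx, a = 0;

		idx = srcu_read_lock(&foo_srcu);
		p = srcu_dereference(gp, &foo_srcu);
		if (p)
			a = p->a;	/* SRCU readers may even sleep here */
		srcu_read_unlock(&foo_srcu, idx);
		return a;
	}

	static void foo_update(struct foo *newp)
	{
		struct foo *old;

		mutex_lock(&foo_mutex);
		old = rcu_dereference_protected(gp, lockdep_is_held(&foo_mutex));
		rcu_assign_pointer(gp, newp);
		mutex_unlock(&foo_mutex);

		synchronize_srcu(&foo_srcu);	/* waits only for foo_srcu readers */
		kfree(old);
	}
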
475 check that accesses to RCU-protected data structures
481 check that you don't pass the same object to call_rcu()
483 since the last time that you passed that same object to
488 with __rcu, and sparse will warn you if you access that
492 These debugging aids can help you find problems that are
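
A sketch showing how the __rcu annotation and a lockdep expression let
sparse and CONFIG_PROVE_RCU check the accesses; struct foo, gp, gp_lock,
and get_a() are made-up names:

	#include <linux/lockdep.h>
	#include <linux/rcupdate.h>
	#include <linux/spinlock.h>

	struct foo {
		int a;
	};

	/* __rcu tells sparse this pointer must only be touched via the RCU API. */
	static struct foo __rcu *gp;
	static DEFINE_SPINLOCK(gp_lock);

	/* Callers must hold either rcu_read_lock() or gp_lock. */
	static int get_a(void)
	{
		struct foo *p;

		/*
		 * With CONFIG_PROVE_RCU, this splats unless the caller is in
		 * an RCU read-side critical section or actually holds gp_lock.
		 */
		p = rcu_dereference_check(gp, lockdep_is_held(&gp_lock));
		return p ? p->a : 0;
	}
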
498 pending callbacks to be invoked before unloading that module.
499 Note that it is absolutely *not* sufficient to wait for a grace
502 call_rcu(). Or even on the current CPU if that CPU recently
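
A sketch of the module-unload rule with invented names; rcu_barrier()
waits for all previously queued RCU callbacks to finish, which a
grace-period wait such as synchronize_rcu() does not guarantee:

	#include <linux/module.h>
	#include <linux/rcupdate.h>

	static int __init example_init(void)
	{
		return 0;	/* module code posting call_rcu() callbacks lives here */
	}
	module_init(example_init);

	static void __exit example_exit(void)
	{
		/* First stop posting new callbacks (unregister, delete entries, ...). */

		/*
		 * Then wait for the callbacks already posted to be invoked;
		 * otherwise they could run after the module text is freed.
		 */
		rcu_barrier();
	}
	module_exit(example_exit);

	MODULE_LICENSE("GPL");
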