
RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate no code whatsoever. This means that updaters must leave
old versions of an updated data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might still hold references to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.
The synchronize_rcu() primitive blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock:

	list_del_rcu(p);
	synchronize_rcu();
	kfree(p);
But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
another pointer to a function that may be invoked later to free that
structure. Code to delete element p from the linked list from IRQ
context might then be as follows:

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context.
Unloading Modules That Use call_rcu()
-------------------------------------
If the module is unloaded while callbacks that it posted are still
pending, the result is the sort of crash depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.
We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient: although synchronize_rcu() waits for a
grace period to elapse, it does not wait for the callbacks to complete.
One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed.
rcu_barrier()
-------------
We instead need the rcu_barrier() primitive, which waits for all
outstanding RCU callbacks to complete. Please note that rcu_barrier()
does -not- imply synchronize_rcu(); in particular, if there are no
RCU callbacks queued
anywhere, rcu_barrier() is within its rights to return immediately,
without waiting for a grace period to elapse.
Pseudo-code using rcu_barrier() is as follows:

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.
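A module-exit function following this pattern might look like the
sketch below. This is a non-compilable kernel-style illustration:
my_stop_posting_callbacks() is a hypothetical helper standing in for
whatever mechanism a given module uses to stop posting callbacks.

```c
static void __exit my_module_exit(void)
{
	my_stop_posting_callbacks();	/* step 1: hypothetical helper */
	rcu_barrier();			/* step 2: wait for all pending callbacks */
	/* step 3: return; the module can now be unloaded safely */
}
module_exit(my_module_exit);
```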
The rcutorture module makes use of rcu_barrier() in its exit function
as follows:
55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
57   if (cur_ops->cleanup != NULL)
58     cur_ops->cleanup();
Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu().
Lines 7-50 stop all the kernel tasks associated with the rcutorture
module, so that once these lines have executed, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call that follows waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.
Implementing rcu_barrier()
--------------------------
Dipankar Sarma's rcu_barrier() implementation makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
callback queues. His implementation queues an RCU callback on each of the
per-CPU callback queues, and then waits until they have all started
executing, at which point all earlier RCU callbacks are guaranteed to
have completed.
The rcu_barrier_func() function, which runs on each CPU, is as follows:

 1 static void rcu_barrier_func(void *notused)
 2 {
 3   int cpu = smp_processor_id();
 4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5   struct rcu_head *head;
 6
 7   head = &rdp->barrier;
 8   atomic_inc(&rcu_barrier_cpu_count);
 9   call_rcu(head, rcu_barrier_callback);
10 }
Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head that is needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, line 8
increments a global counter that will later be decremented by the
callback, and line 9 registers rcu_barrier_callback() on the current
CPU's callback queue.
The current rcu_barrier() implementation is more complex, due to the need
to avoid disturbing idle CPUs (especially on battery-powered systems)
and the need to minimally disturb non-idle CPUs in real-time systems.
rcu_barrier() Summary
---------------------
Answers to Quick Quizzes
------------------------
Answer: rcu_barrier() was not originally implemented for module
unloading. Nikita Danilov was using RCU in a filesystem, which
resulted in a similar situation at
filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
in response, so that Nikita could invoke it during the
filesystem-unmount process.

Much later, yours truly hit the RCU module-unload problem when
implementing rcutorture, and found that rcu_barrier() solves this
problem as well.
Answer: This cannot happen. The reason is that on_each_cpu() has its last
argument, the wait flag, set to "1". This flag is passed through to
smp_call_function() and further to smp_call_function_on_cpu(),
causing this latter to spin until the cross-CPU invocation of
rcu_barrier_func() has completed. This by itself would prevent
a grace period from completing on non-CONFIG_PREEMPT kernels,
since each CPU must undergo a context switch (or other quiescent
state) before the grace period can complete. However, this is of no
use in CONFIG_PREEMPT kernels.

Therefore, on_each_cpu() disables preemption across its call
to smp_call_function() and also across the local call to
rcu_barrier_func(), which prevents the local CPU from context
switching, again preventing grace periods from completing.
Answer: Currently, -rt implementations of RCU keep but a single global
queue for RCU callbacks, and thus do not suffer from this
problem. However, when the -rt RCU eventually does have per-CPU
callback queues, things will have to change.