RCU and Unloadable Modules

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate no code whatsoever.
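
For example, a reader might traverse an RCU-protected list as follows.
This is only a sketch: the pstruct type, its ->list field, and the
my_head list header are hypothetical, not taken from the examples below.

	rcu_read_lock();
	list_for_each_entry_rcu(p, &my_head, list)
		do_something_with(p); /* Must not block while in the loop. */
	rcu_read_unlock();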

This means that RCU writers are unaware of the presence of concurrent
readers, so that RCU updates to shared data must be undertaken quite
carefully, leaving an old version of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might hold a reference to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.
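
For example, an updater replacing an element might use the following
copy-then-publish pattern. This is only a sketch: the RCU-protected
pointer gp and the update_lock guarding it are hypothetical, and error
handling is omitted.

	struct pstruct *newp = kmalloc(sizeof(*newp), GFP_KERNEL);
	struct pstruct *oldp;

	spin_lock(&update_lock);
	oldp = rcu_dereference_protected(gp, lockdep_is_held(&update_lock));
	*newp = *oldp;                /* Copy the old version... */
	newp->field = new_value;      /* ...apply the update... */
	rcu_assign_pointer(gp, newp); /* ...and publish the new version. */
	spin_unlock(&update_lock);
	/* oldp may not be freed until all pre-existing readers finish. */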

How can an RCU writer possibly determine when all readers are finished,
given that readers might well leave absolutely no trace of their
presence? There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock, of course:

	list_del_rcu(p);
	synchronize_rcu();
	kfree(p);

But because synchronize_rcu() blocks, the above code cannot be used in
IRQ context -- the call_rcu() primitive must be used instead. This
primitive takes a pointer to an rcu_head struct placed within the
RCU-protected data structure and another pointer to a function that may
be invoked later to free that structure. Code to delete an element p
from the linked list from IRQ context might then be as follows:

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows:

	static void p_callback(struct rcu_head *rp)
	{
		struct pstruct *p = container_of(rp, struct pstruct, rcu);

		kfree(p);
	}
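
For the container_of() and call_rcu() calls above to work, the rcu_head
must be embedded within the enclosing structure, along the lines of this
hypothetical definition:

	struct pstruct {
		struct list_head list; /* Links into the RCU-protected list. */
		struct rcu_head rcu;   /* Passed to call_rcu() to defer freeing. */
		/* ... other fields ... */
	};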


Unloading Modules That Use call_rcu()

But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. Such deferral is required
in realtime kernels in order to avoid excessive scheduling latencies.


rcu_barrier()

We instead need the rcu_barrier() primitive.  Rather than waiting for
a grace period to elapse, rcu_barrier() waits for all outstanding RCU
callbacks to complete.  Please note that rcu_barrier() does -not- imply
synchronize_rcu(); in particular, if there are no RCU callbacks queued
anywhere, rcu_barrier() is within its rights to return immediately,
without waiting for a grace period to elapse.

Pseudo-code using rcu_barrier() is as follows:

   1. Prevent any new RCU callbacks from being posted.
   2. Execute rcu_barrier().
   3. Allow the module to be unloaded.
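
In module form, these steps might look something like the following
sketch, in which the module_exiting flag and the my_module_exit()
function are hypothetical:

	static bool module_exiting;

	static void __exit my_module_exit(void)
	{
		/* 1. Prevent new callbacks: every code path that invokes
		 *    call_rcu() must check this flag and refrain. */
		module_exiting = true;

		/* 2. Wait for all pending callbacks to complete. */
		rcu_barrier();

		/* 3. Return, allowing the module to be unloaded. */
	}
	module_exit(my_module_exit);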

There is also an srcu_barrier() function for SRCU, and you of course
must match the flavor of rcu_barrier() with that of call_rcu().  If your
module uses multiple flavors of call_rcu(), then it must also use multiple
flavors of rcu_barrier() when unloading that module.  For example, if
it uses call_rcu(), call_srcu() on srcu_struct_1, and call_srcu() on
srcu_struct_2, then the following three lines of code will be required
when unloading:

 1 rcu_barrier();
 2 srcu_barrier(&srcu_struct_1);
 3 srcu_barrier(&srcu_struct_2);

The rcutorture module makes use of rcu_barrier() in its exit function
as follows:

 1 static void
 2 rcu_torture_cleanup(void)
 3 {
 4   int i;
 5
 6   fullstop = 1;
 7   if (shuffler_task != NULL) {
 8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
 9     kthread_stop(shuffler_task);
10   }
11   shuffler_task = NULL;
12
13   if (writer_task != NULL) {
14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
15     kthread_stop(writer_task);
16   }
17   writer_task = NULL;
18
19   if (reader_tasks != NULL) {
20     for (i = 0; i < nrealreaders; i++) {
21       if (reader_tasks[i] != NULL) {
22         VERBOSE_PRINTK_STRING(
23           "Stopping rcu_torture_reader task");
24         kthread_stop(reader_tasks[i]);
25       }
26       reader_tasks[i] = NULL;
27     }
28     kfree(reader_tasks);
29     reader_tasks = NULL;
30   }
31   rcu_torture_current = NULL;
32
33   if (fakewriter_tasks != NULL) {
34     for (i = 0; i < nfakewriters; i++) {
35       if (fakewriter_tasks[i] != NULL) {
36         VERBOSE_PRINTK_STRING(
37           "Stopping rcu_torture_fakewriter task");
38         kthread_stop(fakewriter_tasks[i]);
39       }
40       fakewriter_tasks[i] = NULL;
41     }
42     kfree(fakewriter_tasks);
43     fakewriter_tasks = NULL;
44   }
45
46   if (stats_task != NULL) {
47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
48     kthread_stop(stats_task);
49   }
50   stats_task = NULL;
51
52   /* Wait for all RCU callbacks to fire. */
53   rcu_barrier();
54
55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
56
57   if (cur_ops->cleanup != NULL)
58     cur_ops->cleanup();
59   if (atomic_read(&n_rcu_torture_error))
60     rcu_torture_print_module_parms("End of test: FAILURE");
61   else
62     rcu_torture_print_module_parms("End of test: SUCCESS");
63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Lines 55-62 then print status, do operation-specific cleanup, and
return, permitting the module-unload operation to complete.

Quick Quiz #1: Is there any other situation where rcu_barrier() might
	be required?

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.
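
For example, given a hypothetical timer my_timer whose handler invokes
call_rcu(), the corresponding portion of the module-exit code might read:

	del_timer_sync(&my_timer); /* No new callbacks from this timer... */
	rcu_barrier();             /* ...so wait out the pending ones. */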

Of course, if your module uses call_rcu(), you will need to invoke
rcu_barrier() before unloading.  Similarly, if your module uses
call_srcu(), you will need to invoke srcu_barrier() before unloading,
and on the same srcu_struct structure.  If your module uses call_rcu()
-and- call_srcu(), then you will need to invoke rcu_barrier() -and-
srcu_barrier().
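
For instance, the tail of the exit function for a hypothetical module
using both flavors on a hypothetical srcu_struct named my_srcu might
look as follows:

	rcu_barrier();                 /* Wait for call_rcu() callbacks. */
	srcu_barrier(&my_srcu);        /* Wait for call_srcu() callbacks. */
	cleanup_srcu_struct(&my_srcu); /* Now safe to clean up my_srcu. */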


Implementing rcu_barrier()

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point, all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was as follows:

 1 void rcu_barrier(void)
 2 {
 3   BUG_ON(in_interrupt());
 4   /* Take cpucontrol mutex to protect against CPU hotplug */
 5   mutex_lock(&rcu_barrier_mutex);
 6   init_completion(&rcu_barrier_completion);
 7   atomic_set(&rcu_barrier_cpu_count, 0);
 8   on_each_cpu(rcu_barrier_func, NULL, 0, 1);
 9   wait_for_completion(&rcu_barrier_completion);
10   mutex_unlock(&rcu_barrier_mutex);
11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 then waits for the completion.

This code was rewritten in 2008 and several times thereafter, but this
still gives the general idea.

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows:

 1 static void rcu_barrier_func(void *notused)
 2 {
 3   int cpu = smp_processor_id();
 4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5   struct rcu_head *head;
 6
 7   head = &rdp->barrier;
 8   atomic_inc(&rcu_barrier_cpu_count);
 9   call_rcu(head, rcu_barrier_callback);
10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure, which
contains the struct rcu_head needed for the later call to call_rcu().
Line 7 picks up a pointer to this struct rcu_head, and line 8 increments
a global counter. This counter will later be decremented by the callback.
Line 9 then registers the rcu_barrier_callback() on the current CPU's
queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows:

 1 static void rcu_barrier_callback(struct rcu_head *notused)
 2 {
 3   if (atomic_dec_and_test(&rcu_barrier_cpu_count))
 4     complete(&rcu_barrier_completion);
 5 }

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

The current rcu_barrier() implementation is more complex, due to the need
to avoid disturbing idle CPUs (especially on battery-powered systems)
and the need to minimally disturb non-idle CPUs in real-time systems.
However, the code above illustrates the concepts.


rcu_barrier() Summary

The rcu_barrier() primitive has seen relatively little use, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes

Quick Quiz #1: Is there any other situation where rcu_barrier() might
	be required?

Answer: Interestingly enough, rcu_barrier() was not originally
	implemented for module unloading. Nikita Danilov was using
	RCU in a filesystem, which resulted in a similar situation at
	filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
	in response, so that Nikita could invoke it during the
	filesystem-unmount process.

	Much later, yours truly hit the RCU module-unload problem when
	implementing rcutorture, and found that rcu_barrier() solves
	this problem as well.

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

Answer: This cannot happen. The reason is that on_each_cpu() has its last
	argument, the wait flag, set to "1". This flag is passed through
	to smp_call_function() and further to smp_call_function_on_cpu(),
	causing the latter to spin until the cross-CPU invocation of
	rcu_barrier_func() has completed. This by itself would prevent
	a grace period from completing on non-CONFIG_PREEMPT kernels,
	since each CPU must undergo a context switch (or other quiescent
	state) before the grace period can complete. However, this is
	of no use in CONFIG_PREEMPT kernels.

	Therefore, on_each_cpu() disables preemption across its call
	to smp_call_function() and also across the local call to
	rcu_barrier_func(). This prevents the local CPU from context
	switching, again preventing grace periods from completing. This
	means that all CPUs have executed rcu_barrier_func() before
	the first rcu_barrier_callback() can possibly execute, in turn
	preventing rcu_barrier_cpu_count from prematurely reaching zero.

	Currently, -rt implementations of RCU keep but a single global
	queue for RCU callbacks, and thus do not suffer from this
	problem. However, when the -rt RCU eventually does have per-CPU
	callback queues, things will have to change. One simple change
	is to add an rcu_read_lock() before line 8 of rcu_barrier()
	and an rcu_read_unlock() after line 8 of this same function. If
	you can think of a better change, please let me know!
326