1What is RCU?  --  "Read, Copy, Update"
2
3Please note that the "What is RCU?" LWN series is an excellent place
4to start learning about RCU:
5
61.	What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
72.	What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
83.	RCU part 3: the RCU API      http://lwn.net/Articles/264090/
94.	The RCU API, 2010 Edition    http://lwn.net/Articles/418853/
10	2010 Big API Table           http://lwn.net/Articles/419086/
115.	The RCU API, 2014 Edition    http://lwn.net/Articles/609904/
12	2014 Big API Table           http://lwn.net/Articles/609973/
13
14
15What is RCU?
16
17RCU is a synchronization mechanism that was added to the Linux kernel
18during the 2.5 development effort that is optimized for read-mostly
19situations.  Although RCU is actually quite simple once you understand it,
20getting there can sometimes be a challenge.  Part of the problem is that
21most of the past descriptions of RCU have been written with the mistaken
22assumption that there is "one true way" to describe RCU.  Instead,
23the experience has been that different people must take different paths
24to arrive at an understanding of RCU.  This document provides several
25different paths, as follows:
26
271.	RCU OVERVIEW
282.	WHAT IS RCU'S CORE API?
293.	WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
304.	WHAT IF MY UPDATING THREAD CANNOT BLOCK?
315.	WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
326.	ANALOGY WITH READER-WRITER LOCKING
337.	FULL LIST OF RCU APIs
348.	ANSWERS TO QUICK QUIZZES
35
36People who prefer starting with a conceptual overview should focus on
37Section 1, though most readers will profit by reading this section at
38some point.  People who prefer to start with an API that they can then
39experiment with should focus on Section 2.  People who prefer to start
40with example uses should focus on Sections 3 and 4.  People who need to
41understand the RCU implementation should focus on Section 5, then dive
42into the kernel source code.  People who reason best by analogy should
43focus on Section 6.  Section 7 serves as an index to the docbook API
44documentation, and Section 8 is the traditional answer key.
45
46So, start with the section that makes the most sense to you and your
47preferred method of learning.  If you need to know everything about
48everything, feel free to read the whole thing -- but if you are really
49that type of person, you have perused the source code and will therefore
50never need this document anyway.  ;-)
51
52
531.  RCU OVERVIEW
54
55The basic idea behind RCU is to split updates into "removal" and
56"reclamation" phases.  The removal phase removes references to data items
57within a data structure (possibly by replacing them with references to
58new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that modern CPUs guarantee that readers will see either the
old or the new version of the data structure rather than a
62partially updated reference.  The reclamation phase does the work of reclaiming
63(e.g., freeing) the data items removed from the data structure during the
64removal phase.  Because reclaiming data items can disrupt any readers
65concurrently referencing those data items, the reclamation phase must
66not start until readers no longer hold references to those data items.
67
68Splitting the update into removal and reclamation phases permits the
69updater to perform the removal phase immediately, and to defer the
70reclamation phase until all readers active during the removal phase have
71completed, either by blocking until they finish or by registering a
72callback that is invoked after they finish.  Only readers that are active
73during the removal phase need be considered, because any reader starting
74after the removal phase will be unable to gain a reference to the removed
75data items, and therefore cannot be disrupted by the reclamation phase.
76
77So the typical RCU update sequence goes something like the following:
78
79a.	Remove pointers to a data structure, so that subsequent
80	readers cannot gain a reference to it.
81
82b.	Wait for all previous readers to complete their RCU read-side
83	critical sections.
84
85c.	At this point, there cannot be any readers who hold references
86	to the data structure, so it now may safely be reclaimed
87	(e.g., kfree()d).
88
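For concreteness, here is a minimal sketch of steps (a)-(c), assuming a
hypothetical RCU-protected global pointer "gp" whose updates are
serialized by a hypothetical "gp_lock".  (This only illustrates the
pattern; a complete example appears in Section 3.)

	spin_lock(&gp_lock);
	old = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
	rcu_assign_pointer(gp, NULL);	/* (a) unpublish the old item */
	spin_unlock(&gp_lock);
	synchronize_rcu();		/* (b) wait for pre-existing readers */
	kfree(old);			/* (c) reclaim the old item */
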
89Step (b) above is the key idea underlying RCU's deferred destruction.
90The ability to wait until all readers are done allows RCU readers to
91use much lighter-weight synchronization, in some cases, absolutely no
92synchronization at all.  In contrast, in more conventional lock-based
93schemes, readers must use heavy-weight synchronization in order to
94prevent an updater from deleting the data structure out from under them.
95This is because lock-based updaters typically update data items in place,
96and must therefore exclude readers.  In contrast, RCU-based updaters
97typically take advantage of the fact that writes to single aligned
98pointers are atomic on modern CPUs, allowing atomic insertion, removal,
99and replacement of data items in a linked structure without disrupting
100readers.  Concurrent RCU readers can then continue accessing the old
101versions, and can dispense with the atomic operations, memory barriers,
102and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.
104
105In the three-step procedure shown above, the updater is performing both
106the removal and the reclamation step, but it is often helpful for an
107entirely different thread to do the reclamation, as is in fact the case
108in the Linux kernel's directory-entry cache (dcache).  Even if the same
109thread performs both the update step (step (a) above) and the reclamation
110step (step (c) above), it is often helpful to think of them separately.
111For example, RCU readers and updaters need not communicate at all,
112but RCU provides implicit low-overhead communication between readers
113and reclaimers, namely, in step (b) above.
114
115So how the heck can a reclaimer tell when a reader is done, given
116that readers are not doing any sort of synchronization operations???
117Read on to learn about how RCU's API makes this easy.
118
119
1202.  WHAT IS RCU'S CORE API?
121
122The core RCU API is quite small:
123
124a.	rcu_read_lock()
125b.	rcu_read_unlock()
126c.	synchronize_rcu() / call_rcu()
127d.	rcu_assign_pointer()
128e.	rcu_dereference()
129
130There are many other members of the RCU API, but the rest can be
131expressed in terms of these five, though most implementations instead
132express synchronize_rcu() in terms of the call_rcu() callback API.
133
The five core RCU APIs are described below; the others are enumerated in
Section 7.  See the kernel docbook documentation for more info, or look
directly at the function header comments.
137
138rcu_read_lock()
139
140	void rcu_read_lock(void);
141
142	Used by a reader to inform the reclaimer that the reader is
143	entering an RCU read-side critical section.  It is illegal
144	to block while in an RCU read-side critical section, though
145	kernels built with CONFIG_PREEMPT_RCU can preempt RCU
146	read-side critical sections.  Any RCU-protected data structure
147	accessed during an RCU read-side critical section is guaranteed to
148	remain unreclaimed for the full duration of that critical section.
149	Reference counts may be used in conjunction with RCU to maintain
150	longer-term references to data structures.
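
	For example, a reader might acquire such a reference under
	rcu_read_lock() so that the structure may be used after the
	critical section ends.  The following sketch assumes a
	hypothetical RCU-protected pointer "gp" and a hypothetical
	reference count "refcnt" in the pointed-to structure:

		rcu_read_lock();
		p = rcu_dereference(gp);
		if (p && !atomic_inc_not_zero(&p->refcnt))
			p = NULL;	/* structure is already being torn down */
		rcu_read_unlock();
		/* If non-NULL, "p" may be used until the reference is dropped. */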
151
152rcu_read_unlock()
153
154	void rcu_read_unlock(void);
155
156	Used by a reader to inform the reclaimer that the reader is
157	exiting an RCU read-side critical section.  Note that RCU
158	read-side critical sections may be nested and/or overlapping.
159
160synchronize_rcu()
161
162	void synchronize_rcu(void);
163
164	Marks the end of updater code and the beginning of reclaimer
165	code.  It does this by blocking until all pre-existing RCU
166	read-side critical sections on all CPUs have completed.
167	Note that synchronize_rcu() will -not- necessarily wait for
168	any subsequent RCU read-side critical sections to complete.
169	For example, consider the following sequence of events:
170
171	         CPU 0                  CPU 1                 CPU 2
172	     ----------------- ------------------------- ---------------
173	 1.  rcu_read_lock()
174	 2.                    enters synchronize_rcu()
175	 3.                                               rcu_read_lock()
176	 4.  rcu_read_unlock()
177	 5.                     exits synchronize_rcu()
178	 6.                                              rcu_read_unlock()
179
180	To reiterate, synchronize_rcu() waits only for ongoing RCU
181	read-side critical sections to complete, not necessarily for
182	any that begin after synchronize_rcu() is invoked.
183
184	Of course, synchronize_rcu() does not necessarily return
185	-immediately- after the last pre-existing RCU read-side critical
186	section completes.  For one thing, there might well be scheduling
187	delays.  For another thing, many RCU implementations process
188	requests in batches in order to improve efficiencies, which can
189	further delay synchronize_rcu().
190
191	Since synchronize_rcu() is the API that must figure out when
192	readers are done, its implementation is key to RCU.  For RCU
193	to be useful in all but the most read-intensive situations,
194	synchronize_rcu()'s overhead must also be quite small.
195
196	The call_rcu() API is a callback form of synchronize_rcu(),
197	and is described in more detail in a later section.  Instead of
198	blocking, it registers a function and argument which are invoked
199	after all ongoing RCU read-side critical sections have completed.
200	This callback variant is particularly useful in situations where
201	it is illegal to block or where update-side performance is
202	critically important.
203
204	However, the call_rcu() API should not be used lightly, as use
205	of the synchronize_rcu() API generally results in simpler code.
206	In addition, the synchronize_rcu() API has the nice property
207	of automatically limiting update rate should grace periods
208	be delayed.  This property results in system resilience in face
209	of denial-of-service attacks.  Code using call_rcu() should limit
210	update rate in order to gain this same sort of resilience.  See
211	checklist.txt for some approaches to limiting the update rate.
212
213rcu_assign_pointer()
214
215	typeof(p) rcu_assign_pointer(p, typeof(p) v);
216
217	Yes, rcu_assign_pointer() -is- implemented as a macro, though it
218	would be cool to be able to declare a function in this manner.
219	(Compiler experts will no doubt disagree.)
220
221	The updater uses this function to assign a new value to an
222	RCU-protected pointer, in order to safely communicate the change
223	in value from the updater to the reader.  This function returns
224	the new value, and also executes any memory-barrier instructions
225	required for a given CPU architecture.
226
227	Perhaps just as important, it serves to document (1) which
228	pointers are protected by RCU and (2) the point at which a
229	given structure becomes accessible to other CPUs.  That said,
230	rcu_assign_pointer() is most frequently used indirectly, via
231	the _rcu list-manipulation primitives such as list_add_rcu().
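
	For example, an updater might initialize a new structure and
	then publish it directly (assuming a hypothetical RCU-protected
	global pointer "gp"):

		p = kmalloc(sizeof(*p), GFP_KERNEL);
		if (p) {
			p->a = 1;
			p->b = 2;
			rcu_assign_pointer(gp, p);
		}

	Concurrent readers will then see either the old pointer or the
	fully initialized new structure, never a partially initialized
	one.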
232
233rcu_dereference()
234
235	typeof(p) rcu_dereference(p);
236
237	Like rcu_assign_pointer(), rcu_dereference() must be implemented
238	as a macro.
239
240	The reader uses rcu_dereference() to fetch an RCU-protected
241	pointer, which returns a value that may then be safely
242	dereferenced.  Note that rcu_dereference() does not actually
243	dereference the pointer, instead, it protects the pointer for
244	later dereferencing.  It also executes any needed memory-barrier
245	instructions for a given CPU architecture.  Currently, only Alpha
246	needs memory barriers within rcu_dereference() -- on other CPUs,
	it compiles to a plain load, with no memory-barrier instructions.
248
249	Common coding practice uses rcu_dereference() to copy an
250	RCU-protected pointer to a local variable, then dereferences
251	this local variable, for example as follows:
252
253		p = rcu_dereference(head.next);
254		return p->data;
255
256	However, in this case, one could just as easily combine these
257	into one statement:
258
259		return rcu_dereference(head.next)->data;
260
261	If you are going to be fetching multiple fields from the
262	RCU-protected structure, using the local variable is of
263	course preferred.  Repeated rcu_dereference() calls look
264	ugly, do not guarantee that the same pointer will be returned
265	if an update happened while in the critical section, and incur
266	unnecessary overhead on Alpha CPUs.
267
268	Note that the value returned by rcu_dereference() is valid
269	only within the enclosing RCU read-side critical section.
270	For example, the following is -not- legal:
271
272		rcu_read_lock();
273		p = rcu_dereference(head.next);
274		rcu_read_unlock();
275		x = p->address;	/* BUG!!! */
276		rcu_read_lock();
277		y = p->data;	/* BUG!!! */
278		rcu_read_unlock();
279
280	Holding a reference from one RCU read-side critical section
281	to another is just as illegal as holding a reference from
282	one lock-based critical section to another!  Similarly,
283	using a reference outside of the critical section in which
284	it was acquired is just as illegal as doing so with normal
285	locking.
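
	A legal version of the above example keeps all accesses within
	a single RCU read-side critical section (alternatively, it could
	invoke rcu_dereference() again within each new critical section):

		rcu_read_lock();
		p = rcu_dereference(head.next);
		x = p->address;
		y = p->data;
		rcu_read_unlock();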
286
287	As with rcu_assign_pointer(), an important function of
288	rcu_dereference() is to document which pointers are protected by
289	RCU, in particular, flagging a pointer that is subject to changing
290	at any time, including immediately after the rcu_dereference().
291	And, again like rcu_assign_pointer(), rcu_dereference() is
292	typically used indirectly, via the _rcu list-manipulation
293	primitives, such as list_for_each_entry_rcu().
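
	For example, a reader might scan an RCU-protected list as
	follows, assuming a hypothetical list headed by "mylist" whose
	elements are linked through a "list" field and a hypothetical
	do_something_with() function:

		rcu_read_lock();
		list_for_each_entry_rcu(p, &mylist, list)
			do_something_with(p);
		rcu_read_unlock();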
294
295The following diagram shows how each API communicates among the
296reader, updater, and reclaimer.
297
298
299	    rcu_assign_pointer()
300	    			    +--------+
301	    +---------------------->| reader |---------+
302	    |                       +--------+         |
303	    |                           |              |
304	    |                           |              | Protect:
305	    |                           |              | rcu_read_lock()
306	    |                           |              | rcu_read_unlock()
307	    |        rcu_dereference()  |              |
308       +---------+                      |              |
309       | updater |<---------------------+              |
310       +---------+                                     V
311	    |                                    +-----------+
312	    +----------------------------------->| reclaimer |
313	    				         +-----------+
314	      Defer:
315	      synchronize_rcu() & call_rcu()
316
317
318The RCU infrastructure observes the time sequence of rcu_read_lock(),
319rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
320order to determine when (1) synchronize_rcu() invocations may return
321to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
322implementations of the RCU infrastructure make heavy use of batching in
323order to amortize their overhead over many uses of the corresponding APIs.
324
325There are no fewer than three RCU mechanisms in the Linux kernel; the
326diagram above shows the first one, which is by far the most commonly used.
327The rcu_dereference() and rcu_assign_pointer() primitives are used for
328all three mechanisms, but different defer and protect primitives are
329used as follows:
330
331	Defer			Protect
332
333a.	synchronize_rcu()	rcu_read_lock() / rcu_read_unlock()
334	call_rcu()		rcu_dereference()
335
336b.	synchronize_rcu_bh()	rcu_read_lock_bh() / rcu_read_unlock_bh()
337	call_rcu_bh()		rcu_dereference_bh()
338
339c.	synchronize_sched()	rcu_read_lock_sched() / rcu_read_unlock_sched()
340	call_rcu_sched()	preempt_disable() / preempt_enable()
341				local_irq_save() / local_irq_restore()
342				hardirq enter / hardirq exit
343				NMI enter / NMI exit
344				rcu_dereference_sched()
345
346These three mechanisms are used as follows:
347
348a.	RCU applied to normal data structures.
349
350b.	RCU applied to networking data structures that may be subjected
351	to remote denial-of-service attacks.
352
353c.	RCU applied to scheduler and interrupt/NMI-handler tasks.
354
355Again, most uses will be of (a).  The (b) and (c) cases are important
356for specialized uses, but are relatively uncommon.
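
A reader in case (b) might therefore look as follows, assuming a
hypothetical RCU-bh-protected pointer "gp" and a hypothetical
do_something_with() function, with the corresponding updater deferring
reclamation via call_rcu_bh() or synchronize_rcu_bh():

	rcu_read_lock_bh();
	p = rcu_dereference_bh(gp);
	if (p)
		do_something_with(p);
	rcu_read_unlock_bh();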
357
358
3593.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
360
361This section shows a simple use of the core RCU API to protect a
362global pointer to a dynamically allocated structure.  More-typical
363uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.
364
365	struct foo {
366		int a;
367		char b;
368		long c;
369	};
370	DEFINE_SPINLOCK(foo_mutex);
371
372	struct foo __rcu *gbl_foo;
373
374	/*
375	 * Create a new struct foo that is the same as the one currently
376	 * pointed to by gbl_foo, except that field "a" is replaced
377	 * with "new_a".  Points gbl_foo to the new structure, and
378	 * frees up the old structure after a grace period.
379	 *
380	 * Uses rcu_assign_pointer() to ensure that concurrent readers
381	 * see the initialized version of the new structure.
382	 *
383	 * Uses synchronize_rcu() to ensure that any readers that might
384	 * have references to the old structure complete before freeing
385	 * the old structure.
386	 */
387	void foo_update_a(int new_a)
388	{
389		struct foo *new_fp;
390		struct foo *old_fp;
391
392		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
393		spin_lock(&foo_mutex);
394		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
395		*new_fp = *old_fp;
396		new_fp->a = new_a;
397		rcu_assign_pointer(gbl_foo, new_fp);
398		spin_unlock(&foo_mutex);
399		synchronize_rcu();
400		kfree(old_fp);
401	}
402
403	/*
404	 * Return the value of field "a" of the current gbl_foo
405	 * structure.  Use rcu_read_lock() and rcu_read_unlock()
406	 * to ensure that the structure does not get deleted out
407	 * from under us, and use rcu_dereference() to ensure that
408	 * we see the initialized version of the structure (important
409	 * for DEC Alpha and for people reading the code).
410	 */
411	int foo_get_a(void)
412	{
413		int retval;
414
415		rcu_read_lock();
416		retval = rcu_dereference(gbl_foo)->a;
417		rcu_read_unlock();
418		return retval;
419	}
420
421So, to sum up:
422
423o	Use rcu_read_lock() and rcu_read_unlock() to guard RCU
424	read-side critical sections.
425
426o	Within an RCU read-side critical section, use rcu_dereference()
427	to dereference RCU-protected pointers.
428
429o	Use some solid scheme (such as locks or semaphores) to
430	keep concurrent updates from interfering with each other.
431
432o	Use rcu_assign_pointer() to update an RCU-protected pointer.
433	This primitive protects concurrent readers from the updater,
434	-not- concurrent updates from each other!  You therefore still
435	need to use locking (or something similar) to keep concurrent
436	rcu_assign_pointer() primitives from interfering with each other.
437
438o	Use synchronize_rcu() -after- removing a data element from an
439	RCU-protected data structure, but -before- reclaiming/freeing
440	the data element, in order to wait for the completion of all
441	RCU read-side critical sections that might be referencing that
442	data item.
443
444See checklist.txt for additional rules to follow when using RCU.
445And again, more-typical uses of RCU may be found in listRCU.txt,
446arrayRCU.txt, and NMI-RCU.txt.
447
448
4494.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?
450
451In the example above, foo_update_a() blocks until a grace period elapses.
452This is quite simple, but in some cases one cannot afford to wait so
453long -- there might be other high-priority work to be done.
454
455In such cases, one uses call_rcu() rather than synchronize_rcu().
456The call_rcu() API is as follows:
457
	void call_rcu(struct rcu_head *head,
459		      void (*func)(struct rcu_head *head));
460
461This function invokes func(head) after a grace period has elapsed.
462This invocation might happen from either softirq or process context,
463so the function is not permitted to block.  The foo struct needs to
464have an rcu_head structure added, perhaps as follows:
465
466	struct foo {
467		int a;
468		char b;
469		long c;
470		struct rcu_head rcu;
471	};
472
473The foo_update_a() function might then be written as follows:
474
475	/*
476	 * Create a new struct foo that is the same as the one currently
477	 * pointed to by gbl_foo, except that field "a" is replaced
478	 * with "new_a".  Points gbl_foo to the new structure, and
479	 * frees up the old structure after a grace period.
480	 *
481	 * Uses rcu_assign_pointer() to ensure that concurrent readers
482	 * see the initialized version of the new structure.
483	 *
484	 * Uses call_rcu() to ensure that any readers that might have
485	 * references to the old structure complete before freeing the
486	 * old structure.
487	 */
488	void foo_update_a(int new_a)
489	{
490		struct foo *new_fp;
491		struct foo *old_fp;
492
493		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
494		spin_lock(&foo_mutex);
495		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
496		*new_fp = *old_fp;
497		new_fp->a = new_a;
498		rcu_assign_pointer(gbl_foo, new_fp);
499		spin_unlock(&foo_mutex);
500		call_rcu(&old_fp->rcu, foo_reclaim);
501	}
502
503The foo_reclaim() function might appear as follows:
504
505	void foo_reclaim(struct rcu_head *rp)
506	{
507		struct foo *fp = container_of(rp, struct foo, rcu);
508
509		foo_cleanup(fp->a);
510
511		kfree(fp);
512	}
513
514The container_of() primitive is a macro that, given a pointer into a
515struct, the type of the struct, and the pointed-to field within the
516struct, returns a pointer to the beginning of the struct.
517
518The use of call_rcu() permits the caller of foo_update_a() to
519immediately regain control, without needing to worry further about the
520old version of the newly updated element.  It also clearly shows the
521RCU distinction between updater, namely foo_update_a(), and reclaimer,
522namely foo_reclaim().
523
524The summary of advice is the same as for the previous section, except
525that we are now using call_rcu() rather than synchronize_rcu():
526
527o	Use call_rcu() -after- removing a data element from an
528	RCU-protected data structure in order to register a callback
529	function that will be invoked after the completion of all RCU
530	read-side critical sections that might be referencing that
531	data item.
532
533If the callback for call_rcu() is not doing anything more than calling
534kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
535to avoid having to write your own callback:
536
537	kfree_rcu(old_fp, rcu);
538
539Again, see checklist.txt for additional rules governing the use of RCU.
540
541
5425.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
543
544One of the nice things about RCU is that it has extremely simple "toy"
545implementations that are a good first step towards understanding the
546production-quality implementations in the Linux kernel.  This section
547presents two such "toy" implementations of RCU, one that is implemented
548in terms of familiar locking primitives, and another that more closely
549resembles "classic" RCU.  Both are way too simple for real-world use,
550lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See the files under kernel/rcu/
for a production-quality implementation, and see:
553
554	http://www.rdrop.com/users/paulmck/RCU
555
556for papers describing the Linux kernel RCU implementation.  The OLS'01
557and OLS'02 papers are a good introduction, and the dissertation provides
558more details on the current implementation as of early 2004.
559
560
5615A.  "TOY" IMPLEMENTATION #1: LOCKING
562
563This section presents a "toy" RCU implementation that is based on
564familiar locking primitives.  Its overhead makes it a non-starter for
565real-life use, as does its lack of scalability.  It is also unsuitable
566for realtime use, since it allows scheduling latency to "bleed" from
567one read-side critical section to another.  It also assumes recursive
568reader-writer locks:  If you try this with non-recursive locks, and
569you allow nested rcu_read_lock() calls, you can deadlock.
570
571However, it is probably the easiest implementation to relate to, so is
572a good starting point.
573
574It is extremely simple:
575
576	static DEFINE_RWLOCK(rcu_gp_mutex);
577
578	void rcu_read_lock(void)
579	{
580		read_lock(&rcu_gp_mutex);
581	}
582
583	void rcu_read_unlock(void)
584	{
585		read_unlock(&rcu_gp_mutex);
586	}
587
588	void synchronize_rcu(void)
589	{
590		write_lock(&rcu_gp_mutex);
591		smp_mb__after_spinlock();
592		write_unlock(&rcu_gp_mutex);
593	}
594
595[You can ignore rcu_assign_pointer() and rcu_dereference() without missing
596much.  But here are simplified versions anyway.  And whatever you do,
597don't forget about them when submitting patches making use of RCU!]
598
599	#define rcu_assign_pointer(p, v) \
600	({ \
601		smp_store_release(&(p), (v)); \
602	})
603
604	#define rcu_dereference(p) \
605	({ \
606		typeof(p) _________p1 = READ_ONCE(p); \
607		(_________p1); \
608	})
609
610
The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
612and release a global reader-writer lock.  The synchronize_rcu()
613primitive write-acquires this same lock, then releases it.  This means
614that once synchronize_rcu() exits, all RCU read-side critical sections
615that were in progress before synchronize_rcu() was called are guaranteed
616to have completed -- there is no way that synchronize_rcu() would have
617been able to write-acquire the lock otherwise.  The smp_mb__after_spinlock()
618promotes synchronize_rcu() to a full memory barrier in compliance with
619the "Memory-Barrier Guarantees" listed in:
620
621	Documentation/RCU/Design/Requirements/Requirements.html.
622
623It is possible to nest rcu_read_lock(), since reader-writer locks may
624be recursively acquired.  Note also that rcu_read_lock() is immune
625from deadlock (an important property of RCU).  The reason for this is
626that the only thing that can block rcu_read_lock() is a synchronize_rcu().
627But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
628so there can be no deadlock cycle.
629
630Quick Quiz #1:	Why is this argument naive?  How could a deadlock
631		occur when using this algorithm in a real-world Linux
632		kernel?  How could this deadlock be avoided?
633
634
6355B.  "TOY" EXAMPLE #2: CLASSIC RCU
636
637This section presents a "toy" RCU implementation that is based on
638"classic RCU".  It is also short on performance (but only for updates) and
639on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
640kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
641are the same as those shown in the preceding section, so they are omitted.
642
643	void rcu_read_lock(void) { }
644
645	void rcu_read_unlock(void) { }
646
647	void synchronize_rcu(void)
648	{
649		int cpu;
650
651		for_each_possible_cpu(cpu)
652			run_on(cpu);
653	}
654
655Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
656This is the great strength of classic RCU in a non-preemptive kernel:
657read-side overhead is precisely zero, at least on non-Alpha CPUs.
658And there is absolutely no way that rcu_read_lock() can possibly
659participate in a deadlock cycle!
660
661The implementation of synchronize_rcu() simply schedules itself on each
662CPU in turn.  The run_on() primitive can be implemented straightforwardly
663in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
664"toy" implementation would restore the affinity upon completion rather
665than just leaving all tasks running on the last CPU, but when I said
666"toy", I meant -toy-!
667
668So how the heck is this supposed to work???
669
670Remember that it is illegal to block while in an RCU read-side critical
671section.  Therefore, if a given CPU executes a context switch, we know
672that it must have completed all preceding RCU read-side critical sections.
673Once -all- CPUs have executed a context switch, then -all- preceding
674RCU read-side critical sections will have completed.
675
676So, suppose that we remove a data item from its structure and then invoke
677synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
678that there are no RCU read-side critical sections holding a reference
679to that data item, so we can safely reclaim it.
680
681Quick Quiz #2:	Give an example where Classic RCU's read-side
682		overhead is -negative-.
683
684Quick Quiz #3:  If it is illegal to block in an RCU read-side
685		critical section, what the heck do you do in
686		PREEMPT_RT, where normal spinlocks can block???
687
688
6896.  ANALOGY WITH READER-WRITER LOCKING
690
691Although RCU can be used in many different ways, a very common use of
692RCU is analogous to reader-writer locking.  The following unified
693diff shows how closely related RCU and reader-writer locking can be.
694
695	@@ -5,5 +5,5 @@ struct el {
696	 	int data;
697	 	/* Other data fields */
698	 };
699	-rwlock_t listmutex;
700	+spinlock_t listmutex;
701	 struct el head;
702
703	@@ -13,15 +14,15 @@
704		struct list_head *lp;
705		struct el *p;
706
707	-	read_lock(&listmutex);
708	-	list_for_each_entry(p, head, lp) {
709	+	rcu_read_lock();
710	+	list_for_each_entry_rcu(p, head, lp) {
711			if (p->key == key) {
712				*result = p->data;
713	-			read_unlock(&listmutex);
714	+			rcu_read_unlock();
715				return 1;
716			}
717		}
718	-	read_unlock(&listmutex);
719	+	rcu_read_unlock();
720		return 0;
721	 }
722
723	@@ -29,15 +30,16 @@
724	 {
725		struct el *p;
726
727	-	write_lock(&listmutex);
728	+	spin_lock(&listmutex);
729		list_for_each_entry(p, head, lp) {
730			if (p->key == key) {
731	-			list_del(&p->list);
732	-			write_unlock(&listmutex);
733	+			list_del_rcu(&p->list);
734	+			spin_unlock(&listmutex);
735	+			synchronize_rcu();
736				kfree(p);
737				return 1;
738			}
739		}
740	-	write_unlock(&listmutex);
741	+	spin_unlock(&listmutex);
742		return 0;
743	 }
744
745Or, for those who prefer a side-by-side listing:
746
747 1 struct el {                          1 struct el {
748 2   struct list_head list;             2   struct list_head list;
749 3   long key;                          3   long key;
750 4   spinlock_t mutex;                  4   spinlock_t mutex;
751 5   int data;                          5   int data;
752 6   /* Other data fields */            6   /* Other data fields */
753 7 };                                   7 };
754 8 rwlock_t listmutex;                  8 spinlock_t listmutex;
755 9 struct el head;                      9 struct el head;
756
757 1 int search(long key, int *result)    1 int search(long key, int *result)
758 2 {                                    2 {
759 3   struct list_head *lp;              3   struct list_head *lp;
760 4   struct el *p;                      4   struct el *p;
761 5                                      5
762 6   read_lock(&listmutex);             6   rcu_read_lock();
763 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
764 8     if (p->key == key) {             8     if (p->key == key) {
765 9       *result = p->data;             9       *result = p->data;
76610       read_unlock(&listmutex);      10       rcu_read_unlock();
76711       return 1;                     11       return 1;
76812     }                               12     }
76913   }                                 13   }
77014   read_unlock(&listmutex);          14   rcu_read_unlock();
77115   return 0;                         15   return 0;
77216 }                                   16 }
773
774 1 int delete(long key)                 1 int delete(long key)
775 2 {                                    2 {
776 3   struct el *p;                      3   struct el *p;
777 4                                      4
778 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
779 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
780 7     if (p->key == key) {             7     if (p->key == key) {
781 8       list_del(&p->list);            8       list_del_rcu(&p->list);
782 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
783                                       10       synchronize_rcu();
78410       kfree(p);                     11       kfree(p);
78511       return 1;                     12       return 1;
78612     }                               13     }
78713   }                                 14   }
78814   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
78915   return 0;                         16   return 0;
79016 }                                   17 }
791
792Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
794a reader-writer lock to a simple spinlock, and a synchronize_rcu()
795precedes the kfree().
796
797However, there is one potential catch: the read-side and update-side
798critical sections can now run concurrently.  In many cases, this will
799not be a problem, but it is necessary to check carefully regardless.
800For example, if multiple independent list updates must be seen as
801a single atomic update, converting to RCU will require special care.
802
803Also, the presence of synchronize_rcu() means that the RCU version of
804delete() can now block.  If this is a problem, there is a callback-based
805mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
806be used in place of synchronize_rcu().
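
For example, assuming that struct el gains a struct rcu_head field named
"rcu", the tail of the RCU-based delete() could instead read:

	list_del_rcu(&p->list);
	spin_unlock(&listmutex);
	kfree_rcu(p, rcu);	/* frees "p" after a grace period */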
807
808
8097.  FULL LIST OF RCU APIs
810
811The RCU APIs are documented in docbook-format header comments in the
812Linux-kernel source code, but it helps to have a full list of the
813APIs, since there does not appear to be a way to categorize them
814in docbook.  Here is the list, by category.
815
816RCU list traversal:
817
818	list_entry_rcu
819	list_first_entry_rcu
820	list_next_rcu
821	list_for_each_entry_rcu
822	list_for_each_entry_continue_rcu
823	list_for_each_entry_from_rcu
824	hlist_first_rcu
825	hlist_next_rcu
826	hlist_pprev_rcu
827	hlist_for_each_entry_rcu
828	hlist_for_each_entry_rcu_bh
829	hlist_for_each_entry_from_rcu
830	hlist_for_each_entry_continue_rcu
831	hlist_for_each_entry_continue_rcu_bh
832	hlist_nulls_first_rcu
833	hlist_nulls_for_each_entry_rcu
834	hlist_bl_first_rcu
835	hlist_bl_for_each_entry_rcu
836
837RCU pointer/list update:
838
839	rcu_assign_pointer
840	list_add_rcu
841	list_add_tail_rcu
842	list_del_rcu
843	list_replace_rcu
844	hlist_add_behind_rcu
845	hlist_add_before_rcu
846	hlist_add_head_rcu
847	hlist_del_rcu
848	hlist_del_init_rcu
849	hlist_replace_rcu
	list_splice_init_rcu
851	hlist_nulls_del_init_rcu
852	hlist_nulls_del_rcu
853	hlist_nulls_add_head_rcu
854	hlist_bl_add_head_rcu
855	hlist_bl_del_init_rcu
856	hlist_bl_del_rcu
857	hlist_bl_set_first_rcu
858
859RCU:	Critical sections	Grace period		Barrier
860
861	rcu_read_lock		synchronize_net		rcu_barrier
862	rcu_read_unlock		synchronize_rcu
863	rcu_dereference		synchronize_rcu_expedited
864	rcu_read_lock_held	call_rcu
865	rcu_dereference_check	kfree_rcu
866	rcu_dereference_protected
867
868bh:	Critical sections	Grace period		Barrier
869
870	rcu_read_lock_bh	call_rcu_bh		rcu_barrier_bh
871	rcu_read_unlock_bh	synchronize_rcu_bh
872	rcu_dereference_bh	synchronize_rcu_bh_expedited
873	rcu_dereference_bh_check
874	rcu_dereference_bh_protected
875	rcu_read_lock_bh_held
876
877sched:	Critical sections	Grace period		Barrier
878
879	rcu_read_lock_sched	synchronize_sched	rcu_barrier_sched
880	rcu_read_unlock_sched	call_rcu_sched
881	[preempt_disable]	synchronize_sched_expedited
882	[and friends]
883	rcu_read_lock_sched_notrace
884	rcu_read_unlock_sched_notrace
885	rcu_dereference_sched
886	rcu_dereference_sched_check
887	rcu_dereference_sched_protected
888	rcu_read_lock_sched_held
889
890
891SRCU:	Critical sections	Grace period		Barrier
892
893	srcu_read_lock		synchronize_srcu	srcu_barrier
894	srcu_read_unlock	call_srcu
895	srcu_dereference	synchronize_srcu_expedited
896	srcu_dereference_check
897	srcu_read_lock_held
898
899SRCU:	Initialization/cleanup
900	DEFINE_SRCU
901	DEFINE_STATIC_SRCU
902	init_srcu_struct
903	cleanup_srcu_struct
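
For those who have not used SRCU before, a minimal usage sketch looks
like the following, with "my_srcu" and "gp" being hypothetical names:

	DEFINE_SRCU(my_srcu);

	/* Reader: may block within the critical section. */
	idx = srcu_read_lock(&my_srcu);
	p = srcu_dereference(gp, &my_srcu);
	/* ... possibly-blocking use of "p" ... */
	srcu_read_unlock(&my_srcu, idx);

	/* Updater/reclaimer: wait for pre-existing SRCU readers. */
	synchronize_srcu(&my_srcu);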
904
905All:  lockdep-checked RCU-protected pointer access
906
907	rcu_access_pointer
908	rcu_dereference_raw
909	RCU_LOCKDEP_WARN
910	rcu_sleep_check
911	RCU_NONIDLE
912
913See the comment headers in the source code (or the docbook generated
914from them) for more information.
915
916However, given that there are no fewer than four families of RCU APIs
917in the Linux kernel, how do you choose which one to use?  The following
918list can be helpful:
919
920a.	Will readers need to block?  If so, you need SRCU.
921
922b.	What about the -rt patchset?  If readers would need to block
	in a non-rt kernel, you need SRCU.  If readers would block
924	in a -rt kernel, but not in a non-rt kernel, SRCU is not
925	necessary.  (The -rt patchset turns spinlocks into sleeplocks,
926	hence this distinction.)
927
928c.	Do you need to treat NMI handlers, hardirq handlers,
929	and code segments with preemption disabled (whether
930	via preempt_disable(), local_irq_save(), local_bh_disable(),
931	or some other mechanism) as if they were explicit RCU readers?
932	If so, RCU-sched is the only choice that will work for you.
933
934d.	Do you need RCU grace periods to complete even in the face
935	of softirq monopolization of one or more of the CPUs?  For
936	example, is your code subject to network-based denial-of-service
937	attacks?  If so, you need RCU-bh.
938
939e.	Is your workload too update-intensive for normal use of
940	RCU, but inappropriate for other synchronization mechanisms?
941	If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
942	named SLAB_DESTROY_BY_RCU).  But please be careful!
943
944f.	Do you need read-side critical sections that are respected
945	even though they are in the middle of the idle loop, during
946	user-mode execution, or on an offlined CPU?  If so, SRCU is the
947	only choice that will work for you.
948
949g.	Otherwise, use RCU.
950
951Of course, this all assumes that you have determined that RCU is in fact
952the right tool for your job.
953
954
9558.  ANSWERS TO QUICK QUIZZES
956
957Quick Quiz #1:	Why is this argument naive?  How could a deadlock
958		occur when using this algorithm in a real-world Linux
959		kernel?  [Referring to the lock-based "toy" RCU
960		algorithm.]
961
962Answer:		Consider the following sequence of events:
963
964		1.	CPU 0 acquires some unrelated lock, call it
965			"problematic_lock", disabling irq via
966			spin_lock_irqsave().
967
968		2.	CPU 1 enters synchronize_rcu(), write-acquiring
969			rcu_gp_mutex.
970
971		3.	CPU 0 enters rcu_read_lock(), but must wait
972			because CPU 1 holds rcu_gp_mutex.
973
974		4.	CPU 1 is interrupted, and the irq handler
975			attempts to acquire problematic_lock.
976
977		The system is now deadlocked.
978
979		One way to avoid this deadlock is to use an approach like
980		that of CONFIG_PREEMPT_RT, where all normal spinlocks
981		become blocking locks, and all irq handlers execute in
982		the context of special tasks.  In this case, in step 4
983		above, the irq handler would block, allowing CPU 1 to
984		release rcu_gp_mutex, avoiding the deadlock.
985
986		Even in the absence of deadlock, this RCU implementation
987		allows latency to "bleed" from readers to other
988		readers through synchronize_rcu().  To see this,
989		consider task A in an RCU read-side critical section
990		(thus read-holding rcu_gp_mutex), task B blocked
991		attempting to write-acquire rcu_gp_mutex, and
992		task C blocked in rcu_read_lock() attempting to
		read-acquire rcu_gp_mutex.  Task A's RCU read-side
994		latency is holding up task C, albeit indirectly via
995		task B.
996
997		Realtime RCU implementations therefore use a counter-based
998		approach where tasks in RCU read-side critical sections
999		cannot be blocked by tasks executing synchronize_rcu().
1000
1001Quick Quiz #2:	Give an example where Classic RCU's read-side
1002		overhead is -negative-.
1003
1004Answer:		Imagine a single-CPU system with a non-CONFIG_PREEMPT
1005		kernel where a routing table is used by process-context
1006		code, but can be updated by irq-context code (for example,
1007		by an "ICMP REDIRECT" packet).	The usual way of handling
1008		this would be to have the process-context code disable
1009		interrupts while searching the routing table.  Use of
1010		RCU allows such interrupt-disabling to be dispensed with.
1011		Thus, without RCU, you pay the cost of disabling interrupts,
1012		and with RCU you don't.
1013
1014		One can argue that the overhead of RCU in this
1015		case is negative with respect to the single-CPU
1016		interrupt-disabling approach.  Others might argue that
1017		the overhead of RCU is merely zero, and that replacing
1018		the positive overhead of the interrupt-disabling scheme
1019		with the zero-overhead RCU scheme does not constitute
1020		negative overhead.
1021
1022		In real life, of course, things are more complex.  But
1023		even the theoretical possibility of negative overhead for
1024		a synchronization primitive is a bit unexpected.  ;-)
1025
1026Quick Quiz #3:  If it is illegal to block in an RCU read-side
1027		critical section, what the heck do you do in
1028		PREEMPT_RT, where normal spinlocks can block???
1029
1030Answer:		Just as PREEMPT_RT permits preemption of spinlock
1031		critical sections, it permits preemption of RCU
1032		read-side critical sections.  It also permits
1033		spinlocks blocking while in RCU read-side critical
1034		sections.
1035
		Why the apparent inconsistency?  Because it is
1037		possible to use priority boosting to keep the RCU
1038		grace periods short if need be (for example, if running
1039		short of memory).  In contrast, if blocking waiting
1040		for (say) network reception, there is no way to know
1041		what should be boosted.  Especially given that the
1042		process we need to boost might well be a human being
1043		who just went out for a pizza or something.  And although
1044		a computer-operated cattle prod might arouse serious
1045		interest, it might also provoke serious objections.
1046		Besides, how does the computer know what pizza parlor
1047		the human being went to???
1048
1049
1050ACKNOWLEDGEMENTS
1051
1052My thanks to the people who helped make this human-readable, including
1053Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.
1054
1055
1056For more information, see http://www.rdrop.com/users/paulmck/RCU.
1057