1<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
2        "http://www.w3.org/TR/html4/loose.dtd">
3        <html>
4        <head><title>A Tour Through TREE_RCU's Data Structures [LWN.net]</title>
<meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
</head><body>
6
7           <p>December 18, 2016</p>
8           <p>This article was contributed by Paul E.&nbsp;McKenney</p>
9
10<h3>Introduction</h3>
11
12This document describes RCU's major data structures and their relationship
13to each other.
14
15<ol>
16<li>	<a href="#Data-Structure Relationships">
17	Data-Structure Relationships</a>
18<li>	<a href="#The rcu_state Structure">
19	The <tt>rcu_state</tt> Structure</a>
20<li>	<a href="#The rcu_node Structure">
21	The <tt>rcu_node</tt> Structure</a>
22<li>	<a href="#The rcu_segcblist Structure">
23	The <tt>rcu_segcblist</tt> Structure</a>
24<li>	<a href="#The rcu_data Structure">
25	The <tt>rcu_data</tt> Structure</a>
26<li>	<a href="#The rcu_head Structure">
27	The <tt>rcu_head</tt> Structure</a>
28<li>	<a href="#RCU-Specific Fields in the task_struct Structure">
29	RCU-Specific Fields in the <tt>task_struct</tt> Structure</a>
30<li>	<a href="#Accessor Functions">
31	Accessor Functions</a>
32</ol>
33
34<h3><a name="Data-Structure Relationships">Data-Structure Relationships</a></h3>
35
36<p>RCU is for all intents and purposes a large state machine, and its
37data structures maintain the state in such a way as to allow RCU readers
38to execute extremely quickly, while also processing the RCU grace periods
39requested by updaters in an efficient and extremely scalable fashion.
40The efficiency and scalability of RCU updaters is provided primarily
41by a combining tree, as shown below:
42
43</p><p><img src="BigTreeClassicRCU.svg" alt="BigTreeClassicRCU.svg" width="30%">
44
45</p><p>This diagram shows an enclosing <tt>rcu_state</tt> structure
46containing a tree of <tt>rcu_node</tt> structures.
Each leaf node of the <tt>rcu_node</tt> tree has up to 16
<tt>rcu_data</tt> structures associated with it, so that there
are <tt>NR_CPUS</tt> <tt>rcu_data</tt> structures in all,
one for each possible CPU.
This structure is adjusted at boot time, if needed, to handle the
common case where <tt>nr_cpu_ids</tt> is much less than
<tt>NR_CPUS</tt>.
For example, a number of Linux distributions set <tt>NR_CPUS=4096</tt>,
55which results in a three-level <tt>rcu_node</tt> tree.
56If the actual hardware has only 16 CPUs, RCU will adjust itself
57at boot time, resulting in an <tt>rcu_node</tt> tree with only a single node.
58
59</p><p>The purpose of this combining tree is to allow per-CPU events
60such as quiescent states, dyntick-idle transitions,
61and CPU hotplug operations to be processed efficiently
62and scalably.
63Quiescent states are recorded by the per-CPU <tt>rcu_data</tt> structures,
64and other events are recorded by the leaf-level <tt>rcu_node</tt>
65structures.
66All of these events are combined at each level of the tree until finally
67grace periods are completed at the tree's root <tt>rcu_node</tt>
68structure.
69A grace period can be completed at the root once every CPU
70(or, in the case of <tt>CONFIG_PREEMPT_RCU</tt>, task)
71has passed through a quiescent state.
72Once a grace period has completed, record of that fact is propagated
73back down the tree.
74
75</p><p>As can be seen from the diagram, on a 64-bit system
76a two-level tree with 64 leaves can accommodate 1,024 CPUs, with a fanout
77of 64 at the root and a fanout of 16 at the leaves.
78
79<table>
80<tr><th>&nbsp;</th></tr>
81<tr><th align="left">Quick Quiz:</th></tr>
82<tr><td>
83	Why isn't the fanout at the leaves also 64?
84</td></tr>
85<tr><th align="left">Answer:</th></tr>
86<tr><td bgcolor="#ffffff"><font color="ffffff">
87	Because there are more types of events that affect the leaf-level
88	<tt>rcu_node</tt> structures than further up the tree.
	Therefore, if the leaf <tt>rcu_node</tt> structures have a fanout of
	64, the contention on these structures' <tt>-&gt;lock</tt> fields
	becomes excessive.
92	Experimentation on a wide variety of systems has shown that a fanout
93	of 16 works well for the leaves of the <tt>rcu_node</tt> tree.
94	</font>
95
96	<p><font color="ffffff">Of course, further experience with
97	systems having hundreds or thousands of CPUs may demonstrate
98	that the fanout for the non-leaf <tt>rcu_node</tt> structures
99	must also be reduced.
100	Such reduction can be easily carried out when and if it proves
101	necessary.
102	In the meantime, if you are using such a system and running into
103	contention problems on the non-leaf <tt>rcu_node</tt> structures,
104	you may use the <tt>CONFIG_RCU_FANOUT</tt> kernel configuration
105	parameter to reduce the non-leaf fanout as needed.
106	</font>
107
108	<p><font color="ffffff">Kernels built for systems with
109	strong NUMA characteristics might also need to adjust
110	<tt>CONFIG_RCU_FANOUT</tt> so that the domains of the
111	<tt>rcu_node</tt> structures align with hardware boundaries.
112	However, there has thus far been no need for this.
113</font></td></tr>
114<tr><td>&nbsp;</td></tr>
115</table>
116
117<p>If your system has more than 1,024 CPUs (or more than 512 CPUs on
118a 32-bit system), then RCU will automatically add more levels to the
119tree.
120For example, if you are crazy enough to build a 64-bit system with 65,536
121CPUs, RCU would configure the <tt>rcu_node</tt> tree as follows:
122
123</p><p><img src="HugeTreeClassicRCU.svg" alt="HugeTreeClassicRCU.svg" width="50%">
124
125</p><p>RCU currently permits up to a four-level tree, which on a 64-bit system
126accommodates up to 4,194,304 CPUs, though only a mere 524,288 CPUs for
12732-bit systems.
128On the other hand, you can set both <tt>CONFIG_RCU_FANOUT</tt> and
129<tt>CONFIG_RCU_FANOUT_LEAF</tt> to be as small as 2, which would result
130in a 16-CPU test using a 4-level tree.
131This can be useful for testing large-system capabilities on small test
132machines.
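</p><p>The number of levels needed follows directly from this
arithmetic: one leaf level of fanout <tt>CONFIG_RCU_FANOUT_LEAF</tt>,
plus additional levels of fanout <tt>CONFIG_RCU_FANOUT</tt> until the
tree covers <tt>NR_CPUS</tt>.
The following userspace sketch (illustrative arithmetic only, not
kernel code, and the function name is made up) computes this:

</p><pre>
#include &lt;stdio.h&gt;

/* How many rcu_node levels does a given CPU count require? */
static int levels_needed(long ncpus, long fanout, long fanout_leaf)
{
  long capacity = fanout_leaf;  /* CPUs covered by a one-level tree */
  int levels = 1;

  while (capacity &lt; ncpus) {
    capacity *= fanout;         /* each added level multiplies reach */
    levels++;
  }
  return levels;
}

int main(void)
{
  printf("%d\n", levels_needed(1024, 64, 16));    /* 2 levels */
  printf("%d\n", levels_needed(65536, 64, 16));   /* 3 levels */
  printf("%d\n", levels_needed(4194304, 64, 16)); /* 4 levels */
  printf("%d\n", levels_needed(16, 2, 2));        /* 4 levels */
  return 0;
}
</pre>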
133
134</p><p>This multi-level combining tree allows us to get most of the
135performance and scalability
136benefits of partitioning, even though RCU grace-period detection is
137inherently a global operation.
138The trick here is that only the last CPU to report a quiescent state
139into a given <tt>rcu_node</tt> structure need advance to the <tt>rcu_node</tt>
140structure at the next level up the tree.
141This means that at the leaf-level <tt>rcu_node</tt> structure, only
142one access out of sixteen will progress up the tree.
143For the internal <tt>rcu_node</tt> structures, the situation is even
144more extreme:  Only one access out of sixty-four will progress up
145the tree.
146Because the vast majority of the CPUs do not progress up the tree,
147the lock contention remains roughly constant up the tree.
148No matter how many CPUs there are in the system, at most 64 quiescent-state
149reports per grace period will progress all the way to the root
150<tt>rcu_node</tt> structure, thus ensuring that the lock contention
151on that root <tt>rcu_node</tt> structure remains acceptably low.
152
153</p><p>In effect, the combining tree acts like a big shock absorber,
154keeping lock contention under control at all tree levels regardless
155of the level of loading on the system.
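</p><p>To make this fan-in concrete, here is a minimal userspace
sketch (illustrative only; the field and function names are modeled
on, but are not, the kernel's) of bitmask-based propagation in which
only the last reporter at each level advances toward the root:

</p><pre>
#include &lt;stdio.h&gt;

struct node {
  struct node *parent;
  unsigned long qsmask;   /* children yet to report a quiescent state */
  unsigned long grpmask;  /* this node's bit in parent-&gt;qsmask */
};

static void report_qs(struct node *np, unsigned long mask)
{
  while (np) {
    np-&gt;qsmask &amp;= ~mask;  /* done under np's lock in the kernel */
    if (np-&gt;qsmask)
      return;             /* other children still pending: stop here */
    mask = np-&gt;grpmask;   /* last reporter: proceed one level up */
    np = np-&gt;parent;
  }
  printf("grace period may end\n");  /* root's mask is now empty */
}

int main(void)
{
  struct node root = { .parent = NULL, .qsmask = 0x1 };
  struct node leaf = { .parent = &amp;root, .qsmask = 0x3, .grpmask = 0x1 };

  report_qs(&amp;leaf, 0x1);  /* CPU 0 reports; leaf still waits on CPU 1 */
  report_qs(&amp;leaf, 0x2);  /* CPU 1 reports; propagates to the root */
  return 0;
}
</pre>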
156
157</p><p>RCU updaters wait for normal grace periods by registering
158RCU callbacks, either directly via <tt>call_rcu()</tt>
159or indirectly via <tt>synchronize_rcu()</tt> and friends.
160RCU callbacks are represented by <tt>rcu_head</tt> structures,
161which are queued on <tt>rcu_data</tt> structures while they are
162waiting for a grace period to elapse, as shown in the following figure:
163
164</p><p><img src="BigTreePreemptRCUBHdyntickCB.svg" alt="BigTreePreemptRCUBHdyntickCB.svg" width="40%">
165
166</p><p>This figure shows how <tt>TREE_RCU</tt>'s and
167<tt>PREEMPT_RCU</tt>'s major data structures are related.
168Lesser data structures will be introduced with the algorithms that
169make use of them.
170
171</p><p>Note that each of the data structures in the above figure has
172its own synchronization:
173
174<p><ol>
<li>	Each <tt>rcu_state</tt> structure has a lock and a mutex,
176	and some fields are protected by the corresponding root
177	<tt>rcu_node</tt> structure's lock.
178<li>	Each <tt>rcu_node</tt> structure has a spinlock.
179<li>	The fields in <tt>rcu_data</tt> are private to the corresponding
180	CPU, although a few can be read and written by other CPUs.
181</ol>
182
183<p>It is important to note that different data structures can have
184very different ideas about the state of RCU at any given time.
185For but one example, awareness of the start or end of a given RCU
186grace period propagates slowly through the data structures.
187This slow propagation is absolutely necessary for RCU to have good
188read-side performance.
189If this balkanized implementation seems foreign to you, one useful
190trick is to consider each instance of these data structures to be
191a different person, each having the usual slightly different
192view of reality.
193
194</p><p>The general role of each of these data structures is as
195follows:
196
197</p><ol>
198<li>	<tt>rcu_state</tt>:
199	This structure forms the interconnection between the
200	<tt>rcu_node</tt> and <tt>rcu_data</tt> structures,
201	tracks grace periods, serves as short-term repository
202	for callbacks orphaned by CPU-hotplug events,
203	maintains <tt>rcu_barrier()</tt> state,
204	tracks expedited grace-period state,
205	and maintains state used to force quiescent states when
	grace periods extend too long.
207<li>	<tt>rcu_node</tt>: This structure forms the combining
208	tree that propagates quiescent-state
209	information from the leaves to the root, and also propagates
210	grace-period information from the root to the leaves.
211	It provides local copies of the grace-period state in order
212	to allow this information to be accessed in a synchronized
213	manner without suffering the scalability limitations that
214	would otherwise be imposed by global locking.
215	In <tt>CONFIG_PREEMPT_RCU</tt> kernels, it manages the lists
216	of tasks that have blocked while in their current
217	RCU read-side critical section.
218	In <tt>CONFIG_PREEMPT_RCU</tt> with
219	<tt>CONFIG_RCU_BOOST</tt>, it manages the
220	per-<tt>rcu_node</tt> priority-boosting
221	kernel threads (kthreads) and state.
222	Finally, it records CPU-hotplug state in order to determine
223	which CPUs should be ignored during a given grace period.
224<li>	<tt>rcu_data</tt>: This per-CPU structure is the
225	focus of quiescent-state detection and RCU callback queuing.
226	It also tracks its relationship to the corresponding leaf
227	<tt>rcu_node</tt> structure to allow more-efficient
228	propagation of quiescent states up the <tt>rcu_node</tt>
229	combining tree.
	Like the <tt>rcu_node</tt> structure, it provides a local
	copy of the grace-period information so that this
	information can be accessed from the corresponding CPU
	without incurring synchronization overhead.
234	Finally, this structure records past dyntick-idle state
235	for the corresponding CPU and also tracks statistics.
236<li>	<tt>rcu_head</tt>:
237	This structure represents RCU callbacks, and is the
238	only structure allocated and managed by RCU users.
239	The <tt>rcu_head</tt> structure is normally embedded
240	within the RCU-protected data structure.
241</ol>
242
243<p>If all you wanted from this article was a general notion of how
244RCU's data structures are related, you are done.
245Otherwise, each of the following sections give more details on
246the <tt>rcu_state</tt>, <tt>rcu_node</tt> and <tt>rcu_data</tt> data
247structures.
248
249<h3><a name="The rcu_state Structure">
250The <tt>rcu_state</tt> Structure</a></h3>
251
252<p>The <tt>rcu_state</tt> structure is the base structure that
253represents the state of RCU in the system.
254This structure forms the interconnection between the
255<tt>rcu_node</tt> and <tt>rcu_data</tt> structures,
256tracks grace periods, contains the lock used to
257synchronize with CPU-hotplug events,
258and maintains state used to force quiescent states when
grace periods extend too long.
260
261</p><p>A few of the <tt>rcu_state</tt> structure's fields are discussed,
262singly and in groups, in the following sections.
263The more specialized fields are covered in the discussion of their
264use.
265
266<h5>Relationship to rcu_node and rcu_data Structures</h5>
267
268This portion of the <tt>rcu_state</tt> structure is declared
269as follows:
270
271<pre>
272  1   struct rcu_node node[NUM_RCU_NODES];
273  2   struct rcu_node *level[NUM_RCU_LVLS + 1];
274  3   struct rcu_data __percpu *rda;
275</pre>
276
277<table>
278<tr><th>&nbsp;</th></tr>
279<tr><th align="left">Quick Quiz:</th></tr>
280<tr><td>
281	Wait a minute!
282	You said that the <tt>rcu_node</tt> structures formed a tree,
283	but they are declared as a flat array!
284	What gives?
285</td></tr>
286<tr><th align="left">Answer:</th></tr>
287<tr><td bgcolor="#ffffff"><font color="ffffff">
288	The tree is laid out in the array.
	The first node in the array is the head, the next set of nodes in the
290	array are children of the head node, and so on until the last set of
291	nodes in the array are the leaves.
292	</font>
293
294	<p><font color="ffffff">See the following diagrams to see how
295	this works.
296</font></td></tr>
297<tr><td>&nbsp;</td></tr>
298</table>
299
300<p>The <tt>rcu_node</tt> tree is embedded into the
301<tt>-&gt;node[]</tt> array as shown in the following figure:
302
303</p><p><img src="TreeMapping.svg" alt="TreeMapping.svg" width="40%">
304
305</p><p>One interesting consequence of this mapping is that a
306breadth-first traversal of the tree is implemented as a simple
307linear scan of the array, which is in fact what the
308<tt>rcu_for_each_node_breadth_first()</tt> macro does.
This macro is used at the beginnings and ends of grace periods.
310
311</p><p>Each entry of the <tt>-&gt;level</tt> array references
312the first <tt>rcu_node</tt> structure on the corresponding level
313of the tree, for example, as shown below:
314
315</p><p><img src="TreeMappingLevel.svg" alt="TreeMappingLevel.svg" width="40%">
316
317</p><p>The zero<sup>th</sup> element of the array references the root
318<tt>rcu_node</tt> structure, the first element references the
319first child of the root <tt>rcu_node</tt>, and finally the second
320element references the first leaf <tt>rcu_node</tt> structure.
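</p><p>The following userspace sketch (illustrative only) shows how a
two-level tree can be laid out in a flat array with a
<tt>level[]</tt> array referencing the first node on each level, and
how a linear scan of the array then visits the nodes in breadth-first
order:

</p><pre>
#include &lt;stdio.h&gt;

#define NUM_LEAVES 4
#define NUM_NODES  (1 + NUM_LEAVES)  /* one root plus the leaves */

struct node { int level; };

int main(void)
{
  struct node nodes[NUM_NODES];
  struct node *level[2];
  struct node *np;
  int i;

  level[0] = &amp;nodes[0];  /* root */
  level[1] = &amp;nodes[1];  /* first leaf */
  nodes[0].level = 0;
  for (i = 1; i &lt; NUM_NODES; i++)
    nodes[i].level = 1;

  /* Breadth-first traversal is just a linear scan of the array. */
  for (np = &amp;nodes[0]; np &lt; &amp;nodes[NUM_NODES]; np++)
    printf("node %d at level %d\n", (int)(np - nodes), np-&gt;level);

  /* A leaf-only traversal instead starts at level[1]. */
  for (np = level[1]; np &lt; &amp;nodes[NUM_NODES]; np++)
    ;  /* visit each leaf */
  return 0;
}
</pre>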
321
322</p><p>For whatever it is worth, if you draw the tree to be tree-shaped
323rather than array-shaped, it is easy to draw a planar representation:
324
325</p><p><img src="TreeLevel.svg" alt="TreeLevel.svg" width="60%">
326
327</p><p>Finally, the <tt>-&gt;rda</tt> field references a per-CPU
328pointer to the corresponding CPU's <tt>rcu_data</tt> structure.
329
330</p><p>All of these fields are constant once initialization is complete,
331and therefore need no protection.
332
333<h5>Grace-Period Tracking</h5>
334
335<p>This portion of the <tt>rcu_state</tt> structure is declared
336as follows:
337
338<pre>
339  1   unsigned long gp_seq;
340</pre>
341
342<p>RCU grace periods are numbered, and
343the <tt>-&gt;gp_seq</tt> field contains the current grace-period
344sequence number.
345The bottom two bits are the state of the current grace period,
346which can be zero for not yet started or one for in progress.
347In other words, if the bottom two bits of <tt>-&gt;gp_seq</tt> are
348zero, then RCU is idle.
349Any other value in the bottom two bits indicates that something is broken.
350This field is protected by the root <tt>rcu_node</tt> structure's
351<tt>-&gt;lock</tt> field.
352
353</p><p>There are <tt>-&gt;gp_seq</tt> fields
354in the <tt>rcu_node</tt> and <tt>rcu_data</tt> structures
355as well.
356The fields in the <tt>rcu_state</tt> structure represent the
357most current value, and those of the other structures are compared
358in order to detect the beginnings and ends of grace periods in a distributed
359fashion.
360The values flow from <tt>rcu_state</tt> to <tt>rcu_node</tt>
361(down the tree from the root to the leaves) to <tt>rcu_data</tt>.
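</p><p>A minimal userspace sketch of this encoding (the helper names
here are made up for illustration; the kernel has its own set of
sequence-number helpers) might look like this:

</p><pre>
#include &lt;stdio.h&gt;

#define GP_SEQ_STATE_BITS 2
#define GP_SEQ_STATE_MASK ((1UL &lt;&lt; GP_SEQ_STATE_BITS) - 1)

/* Mark a grace period as started: bottom bits go from zero to one. */
static unsigned long gp_seq_start(unsigned long s)
{
  return s + 1;
}

/* Mark a grace period as ended: round up to the next idle value. */
static unsigned long gp_seq_end(unsigned long s)
{
  return (s | GP_SEQ_STATE_MASK) + 1;
}

int main(void)
{
  unsigned long s = 0;  /* idle */

  s = gp_seq_start(s);
  printf("in progress: %lu\n", s &amp; GP_SEQ_STATE_MASK);  /* 1 */
  s = gp_seq_end(s);
  printf("in progress: %lu, completed: %lu\n",
         s &amp; GP_SEQ_STATE_MASK, s &gt;&gt; GP_SEQ_STATE_BITS);  /* 0, 1 */
  return 0;
}
</pre>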
362
363<h5>Miscellaneous</h5>
364
365<p>This portion of the <tt>rcu_state</tt> structure is declared
366as follows:
367
368<pre>
369  1   unsigned long gp_max;
370  2   char abbr;
371  3   char *name;
372</pre>
373
374<p>The <tt>-&gt;gp_max</tt> field tracks the duration of the longest
375grace period in jiffies.
376It is protected by the root <tt>rcu_node</tt>'s <tt>-&gt;lock</tt>.
377
378<p>The <tt>-&gt;name</tt> and <tt>-&gt;abbr</tt> fields distinguish
379between preemptible RCU (&ldquo;rcu_preempt&rdquo; and &ldquo;p&rdquo;)
380and non-preemptible RCU (&ldquo;rcu_sched&rdquo; and &ldquo;s&rdquo;).
381These fields are used for diagnostic and tracing purposes.
382
383<h3><a name="The rcu_node Structure">
384The <tt>rcu_node</tt> Structure</a></h3>
385
386<p>The <tt>rcu_node</tt> structures form the combining
387tree that propagates quiescent-state
388information from the leaves to the root and also that propagates
389grace-period information from the root down to the leaves.
They provide local copies of the grace-period state in order
391to allow this information to be accessed in a synchronized
392manner without suffering the scalability limitations that
393would otherwise be imposed by global locking.
394In <tt>CONFIG_PREEMPT_RCU</tt> kernels, they manage the lists
395of tasks that have blocked while in their current
396RCU read-side critical section.
397In <tt>CONFIG_PREEMPT_RCU</tt> with
398<tt>CONFIG_RCU_BOOST</tt>, they manage the
399per-<tt>rcu_node</tt> priority-boosting
400kernel threads (kthreads) and state.
401Finally, they record CPU-hotplug state in order to determine
402which CPUs should be ignored during a given grace period.
403
404</p><p>The <tt>rcu_node</tt> structure's fields are discussed,
405singly and in groups, in the following sections.
406
407<h5>Connection to Combining Tree</h5>
408
409<p>This portion of the <tt>rcu_node</tt> structure is declared
410as follows:
411
412<pre>
413  1   struct rcu_node *parent;
414  2   u8 level;
415  3   u8 grpnum;
416  4   unsigned long grpmask;
417  5   int grplo;
418  6   int grphi;
419</pre>
420
421<p>The <tt>-&gt;parent</tt> pointer references the <tt>rcu_node</tt>
422one level up in the tree, and is <tt>NULL</tt> for the root
423<tt>rcu_node</tt>.
424The RCU implementation makes heavy use of this field to push quiescent
425states up the tree.
426The <tt>-&gt;level</tt> field gives the level in the tree, with
427the root being at level zero, its children at level one, and so on.
428The <tt>-&gt;grpnum</tt> field gives this node's position within
429the children of its parent, so this number can range between 0 and 31
430on 32-bit systems and between 0 and 63 on 64-bit systems.
431The <tt>-&gt;level</tt> and <tt>-&gt;grpnum</tt> fields are
432used only during initialization and for tracing.
433The <tt>-&gt;grpmask</tt> field is the bitmask counterpart of
434<tt>-&gt;grpnum</tt>, and therefore always has exactly one bit set.
435This mask is used to clear the bit corresponding to this <tt>rcu_node</tt>
436structure in its parent's bitmasks, which are described later.
437Finally, the <tt>-&gt;grplo</tt> and <tt>-&gt;grphi</tt> fields
438contain the lowest and highest numbered CPU served by this
439<tt>rcu_node</tt> structure, respectively.
440
441</p><p>All of these fields are constant, and thus do not require any
442synchronization.
443
444<h5>Synchronization</h5>
445
446<p>This field of the <tt>rcu_node</tt> structure is declared
447as follows:
448
449<pre>
450  1   raw_spinlock_t lock;
451</pre>
452
453<p>This field is used to protect the remaining fields in this structure,
454unless otherwise stated.
455That said, all of the fields in this structure can be accessed without
456locking for tracing purposes.
457Yes, this can result in confusing traces, but better some tracing confusion
458than to be heisenbugged out of existence.
459
460<h5>Grace-Period Tracking</h5>
461
462<p>This portion of the <tt>rcu_node</tt> structure is declared
463as follows:
464
465<pre>
466  1   unsigned long gp_seq;
467  2   unsigned long gp_seq_needed;
468</pre>
469
470<p>The <tt>rcu_node</tt> structures' <tt>-&gt;gp_seq</tt> fields are
471the counterparts of the field of the same name in the <tt>rcu_state</tt>
472structure.
473They each may lag up to one step behind their <tt>rcu_state</tt>
474counterpart.
475If the bottom two bits of a given <tt>rcu_node</tt> structure's
476<tt>-&gt;gp_seq</tt> field is zero, then this <tt>rcu_node</tt>
477structure believes that RCU is idle.
</p><p>The <tt>-&gt;gp_seq</tt> field of each <tt>rcu_node</tt>
479structure is updated at the beginning and the end
480of each grace period.
481
482<p>The <tt>-&gt;gp_seq_needed</tt> fields record the
483furthest-in-the-future grace period request seen by the corresponding
484<tt>rcu_node</tt> structure.  The request is considered fulfilled when
485the value of the <tt>-&gt;gp_seq</tt> field equals or exceeds that of
486the <tt>-&gt;gp_seq_needed</tt> field.
487
488<table>
489<tr><th>&nbsp;</th></tr>
490<tr><th align="left">Quick Quiz:</th></tr>
491<tr><td>
492	Suppose that this <tt>rcu_node</tt> structure doesn't see
493	a request for a very long time.
494	Won't wrapping of the <tt>-&gt;gp_seq</tt> field cause
495	problems?
496</td></tr>
497<tr><th align="left">Answer:</th></tr>
498<tr><td bgcolor="#ffffff"><font color="ffffff">
499	No, because if the <tt>-&gt;gp_seq_needed</tt> field lags behind the
500	<tt>-&gt;gp_seq</tt> field, the <tt>-&gt;gp_seq_needed</tt> field
501	will be updated at the end of the grace period.
502	Modulo-arithmetic comparisons therefore will always get the
503	correct answer, even with wrapping.
504</font></td></tr>
505<tr><td>&nbsp;</td></tr>
506</table>
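<p>A userspace sketch of such a wrap-safe modulo-arithmetic comparison
(the kernel uses helpers of this general form; the name here is made
up) is as follows:

</p><pre>
#include &lt;limits.h&gt;
#include &lt;stdio.h&gt;

/* Wrap-safe "a is at or after b" for unsigned sequence numbers. */
static int seq_ge(unsigned long a, unsigned long b)
{
  return ULONG_MAX / 2 &gt;= a - b;
}

int main(void)
{
  /* Correct even when the counter has wrapped past zero: */
  printf("%d\n", seq_ge(1UL, ULONG_MAX));  /* 1: follows the wrap */
  printf("%d\n", seq_ge(5UL, 10UL));       /* 0: 5 precedes 10 */
  return 0;
}
</pre>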
507
508<h5>Quiescent-State Tracking</h5>
509
510<p>These fields manage the propagation of quiescent states up the
511combining tree.
512
513</p><p>This portion of the <tt>rcu_node</tt> structure has fields
514as follows:
515
516<pre>
517  1   unsigned long qsmask;
518  2   unsigned long expmask;
519  3   unsigned long qsmaskinit;
520  4   unsigned long expmaskinit;
521</pre>
522
523<p>The <tt>-&gt;qsmask</tt> field tracks which of this
524<tt>rcu_node</tt> structure's children still need to report
525quiescent states for the current normal grace period.
526Such children will have a value of 1 in their corresponding bit.
527Note that the leaf <tt>rcu_node</tt> structures should be
528thought of as having <tt>rcu_data</tt> structures as their
529children.
530Similarly, the <tt>-&gt;expmask</tt> field tracks which
531of this <tt>rcu_node</tt> structure's children still need to report
532quiescent states for the current expedited grace period.
533An expedited grace period has
534the same conceptual properties as a normal grace period, but the
535expedited implementation accepts extreme CPU overhead to obtain
536much lower grace-period latency, for example, consuming a few
537tens of microseconds worth of CPU time to reduce grace-period
538duration from milliseconds to tens of microseconds.
The <tt>-&gt;qsmaskinit</tt> field tracks which of this
<tt>rcu_node</tt> structure's children cover at least
one online CPU.
This mask is used to initialize <tt>-&gt;qsmask</tt>,
and <tt>-&gt;expmaskinit</tt> is used to initialize
<tt>-&gt;expmask</tt>, at the beginning of the
normal and expedited grace periods, respectively.
546
547<table>
548<tr><th>&nbsp;</th></tr>
549<tr><th align="left">Quick Quiz:</th></tr>
550<tr><td>
551	Why are these bitmasks protected by locking?
552	Come on, haven't you heard of atomic instructions???
553</td></tr>
554<tr><th align="left">Answer:</th></tr>
555<tr><td bgcolor="#ffffff"><font color="ffffff">
556	Lockless grace-period computation!  Such a tantalizing possibility!
557	</font>
558
559	<p><font color="ffffff">But consider the following sequence of events:
560	</font>
561
562	<ol>
563	<li>	<font color="ffffff">CPU&nbsp;0 has been in dyntick-idle
564		mode for quite some time.
565		When it wakes up, it notices that the current RCU
566		grace period needs it to report in, so it sets a
567		flag where the scheduling clock interrupt will find it.
568		</font><p>
569	<li>	<font color="ffffff">Meanwhile, CPU&nbsp;1 is running
570		<tt>force_quiescent_state()</tt>,
571		and notices that CPU&nbsp;0 has been in dyntick idle mode,
572		which qualifies as an extended quiescent state.
573		</font><p>
574	<li>	<font color="ffffff">CPU&nbsp;0's scheduling clock
575		interrupt fires in the
576		middle of an RCU read-side critical section, and notices
577		that the RCU core needs something, so commences RCU softirq
578		processing.
579		</font>
580		<p>
581	<li>	<font color="ffffff">CPU&nbsp;0's softirq handler
582		executes and is just about ready
583		to report its quiescent state up the <tt>rcu_node</tt>
584		tree.
585		</font><p>
586	<li>	<font color="ffffff">But CPU&nbsp;1 beats it to the punch,
587		completing the current
588		grace period and starting a new one.
589		</font><p>
590	<li>	<font color="ffffff">CPU&nbsp;0 now reports its quiescent
591		state for the wrong
592		grace period.
593		That grace period might now end before the RCU read-side
594		critical section.
595		If that happens, disaster will ensue.
596		</font>
597	</ol>
598
599	<p><font color="ffffff">So the locking is absolutely required in
600	order to coordinate clearing of the bits with updating of the
601	grace-period sequence number in <tt>-&gt;gp_seq</tt>.
602</font></td></tr>
603<tr><td>&nbsp;</td></tr>
604</table>
605
606<h5>Blocked-Task Management</h5>
607
608<p><tt>PREEMPT_RCU</tt> allows tasks to be preempted in the
609midst of their RCU read-side critical sections, and these tasks
610must be tracked explicitly.
611The details of exactly why and how they are tracked will be covered
612in a separate article on RCU read-side processing.
613For now, it is enough to know that the <tt>rcu_node</tt>
614structure tracks them.
615
616<pre>
617  1   struct list_head blkd_tasks;
618  2   struct list_head *gp_tasks;
619  3   struct list_head *exp_tasks;
620  4   bool wait_blkd_tasks;
621</pre>
622
623<p>The <tt>-&gt;blkd_tasks</tt> field is a list header for
624the list of blocked and preempted tasks.
625As tasks undergo context switches within RCU read-side critical
626sections, their <tt>task_struct</tt> structures are enqueued
627(via the <tt>task_struct</tt>'s <tt>-&gt;rcu_node_entry</tt>
628field) onto the head of the <tt>-&gt;blkd_tasks</tt> list for the
629leaf <tt>rcu_node</tt> structure corresponding to the CPU
630on which the outgoing context switch executed.
631As these tasks later exit their RCU read-side critical sections,
632they remove themselves from the list.
633This list is therefore in reverse time order, so that if one of the tasks
634is blocking the current grace period, all subsequent tasks must
635also be blocking that same grace period.
636Therefore, a single pointer into this list suffices to track
637all tasks blocking a given grace period.
638That pointer is stored in <tt>-&gt;gp_tasks</tt> for normal
639grace periods and in <tt>-&gt;exp_tasks</tt> for expedited
640grace periods.
641These last two fields are <tt>NULL</tt> if either there is
642no grace period in flight or if there are no blocked tasks
643preventing that grace period from completing.
644If either of these two pointers is referencing a task that
645removes itself from the <tt>-&gt;blkd_tasks</tt> list,
646then that task must advance the pointer to the next task on
647the list, or set the pointer to <tt>NULL</tt> if there
648are no subsequent tasks on the list.
649
650</p><p>For example, suppose that tasks&nbsp;T1, T2, and&nbsp;T3 are
651all hard-affinitied to the largest-numbered CPU in the system.
652Then if task&nbsp;T1 blocked in an RCU read-side
653critical section, then an expedited grace period started,
654then task&nbsp;T2 blocked in an RCU read-side critical section,
then a normal grace period started, and finally task&nbsp;T3 blocked
656in an RCU read-side critical section, then the state of the
657last leaf <tt>rcu_node</tt> structure's blocked-task list
658would be as shown below:
659
660</p><p><img src="blkd_task.svg" alt="blkd_task.svg" width="60%">
661
662</p><p>Task&nbsp;T1 is blocking both grace periods, task&nbsp;T2 is
663blocking only the normal grace period, and task&nbsp;T3 is blocking
664neither grace period.
665Note that these tasks will not remove themselves from this list
666immediately upon resuming execution.
667They will instead remain on the list until they execute the outermost
668<tt>rcu_read_unlock()</tt> that ends their RCU read-side critical
669section.
670
671<p>
672The <tt>-&gt;wait_blkd_tasks</tt> field indicates whether or not
673the current grace period is waiting on a blocked task.
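<p>The pointer-advance rule described above can be sketched in
userspace as follows (illustrative only; the kernel uses its own list
primitives, and the names here are made up):

</p><pre>
#include &lt;stdio.h&gt;

struct task {
  struct task *prev, *next;  /* circular doubly linked list links */
};

struct blkd_list {
  struct task head;          /* list header for -&gt;blkd_tasks */
  struct task *gp_tasks;     /* first task blocking the current GP */
};

/* Called when a task exits its outermost RCU read-side critical
 * section: advance gp_tasks if it references the departing task. */
static void remove_blocked_task(struct blkd_list *l, struct task *t)
{
  if (l-&gt;gp_tasks == t)
    l-&gt;gp_tasks = (t-&gt;next == &amp;l-&gt;head) ? NULL : t-&gt;next;
  t-&gt;prev-&gt;next = t-&gt;next;
  t-&gt;next-&gt;prev = t-&gt;prev;
}

int main(void)
{
  struct blkd_list l;
  struct task t1, t2;

  /* Circular list: head, then t2 (newer), then t1 (older). */
  l.head.next = &amp;t2; t2.prev = &amp;l.head;
  t2.next = &amp;t1;     t1.prev = &amp;t2;
  t1.next = &amp;l.head; l.head.prev = &amp;t1;
  l.gp_tasks = &amp;t2;  /* t2 and t1 block the current grace period */

  remove_blocked_task(&amp;l, &amp;t2);  /* gp_tasks advances to t1 */
  remove_blocked_task(&amp;l, &amp;t1);  /* no subsequent task: NULL */
  printf("gp_tasks is %s\n", l.gp_tasks ? "non-NULL" : "NULL");
  return 0;
}
</pre>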
674
675<h5>Sizing the <tt>rcu_node</tt> Array</h5>
676
677<p>The <tt>rcu_node</tt> array is sized via a series of
678C-preprocessor expressions as follows:
679
680<pre>
681 1 #ifdef CONFIG_RCU_FANOUT
682 2 #define RCU_FANOUT CONFIG_RCU_FANOUT
683 3 #else
684 4 # ifdef CONFIG_64BIT
685 5 # define RCU_FANOUT 64
686 6 # else
687 7 # define RCU_FANOUT 32
688 8 # endif
689 9 #endif
69010
69111 #ifdef CONFIG_RCU_FANOUT_LEAF
69212 #define RCU_FANOUT_LEAF CONFIG_RCU_FANOUT_LEAF
69313 #else
69414 # ifdef CONFIG_64BIT
69515 # define RCU_FANOUT_LEAF 64
69616 # else
69717 # define RCU_FANOUT_LEAF 32
69818 # endif
69919 #endif
70020
70121 #define RCU_FANOUT_1        (RCU_FANOUT_LEAF)
70222 #define RCU_FANOUT_2        (RCU_FANOUT_1 * RCU_FANOUT)
70323 #define RCU_FANOUT_3        (RCU_FANOUT_2 * RCU_FANOUT)
70424 #define RCU_FANOUT_4        (RCU_FANOUT_3 * RCU_FANOUT)
70525
70626 #if NR_CPUS &lt;= RCU_FANOUT_1
70727 #  define RCU_NUM_LVLS        1
70828 #  define NUM_RCU_LVL_0        1
70929 #  define NUM_RCU_NODES        NUM_RCU_LVL_0
71030 #  define NUM_RCU_LVL_INIT    { NUM_RCU_LVL_0 }
71131 #  define RCU_NODE_NAME_INIT  { "rcu_node_0" }
71232 #  define RCU_FQS_NAME_INIT   { "rcu_node_fqs_0" }
71333 #  define RCU_EXP_NAME_INIT   { "rcu_node_exp_0" }
71434 #elif NR_CPUS &lt;= RCU_FANOUT_2
71535 #  define RCU_NUM_LVLS        2
71636 #  define NUM_RCU_LVL_0        1
71737 #  define NUM_RCU_LVL_1        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
71838 #  define NUM_RCU_NODES        (NUM_RCU_LVL_0 + NUM_RCU_LVL_1)
71939 #  define NUM_RCU_LVL_INIT    { NUM_RCU_LVL_0, NUM_RCU_LVL_1 }
72040 #  define RCU_NODE_NAME_INIT  { "rcu_node_0", "rcu_node_1" }
72141 #  define RCU_FQS_NAME_INIT   { "rcu_node_fqs_0", "rcu_node_fqs_1" }
72242 #  define RCU_EXP_NAME_INIT   { "rcu_node_exp_0", "rcu_node_exp_1" }
72343 #elif NR_CPUS &lt;= RCU_FANOUT_3
72444 #  define RCU_NUM_LVLS        3
72545 #  define NUM_RCU_LVL_0        1
72646 #  define NUM_RCU_LVL_1        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
72747 #  define NUM_RCU_LVL_2        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
72848 #  define NUM_RCU_NODES        (NUM_RCU_LVL_0 + NUM_RCU_LVL_1 + NUM_RCU_LVL_2)
72949 #  define NUM_RCU_LVL_INIT    { NUM_RCU_LVL_0, NUM_RCU_LVL_1, NUM_RCU_LVL_2 }
73050 #  define RCU_NODE_NAME_INIT  { "rcu_node_0", "rcu_node_1", "rcu_node_2" }
73151 #  define RCU_FQS_NAME_INIT   { "rcu_node_fqs_0", "rcu_node_fqs_1", "rcu_node_fqs_2" }
73252 #  define RCU_EXP_NAME_INIT   { "rcu_node_exp_0", "rcu_node_exp_1", "rcu_node_exp_2" }
73353 #elif NR_CPUS &lt;= RCU_FANOUT_4
73454 #  define RCU_NUM_LVLS        4
73555 #  define NUM_RCU_LVL_0        1
73656 #  define NUM_RCU_LVL_1        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_3)
73757 #  define NUM_RCU_LVL_2        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
73858 #  define NUM_RCU_LVL_3        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
73959 #  define NUM_RCU_NODES        (NUM_RCU_LVL_0 + NUM_RCU_LVL_1 + NUM_RCU_LVL_2 + NUM_RCU_LVL_3)
74060 #  define NUM_RCU_LVL_INIT    { NUM_RCU_LVL_0, NUM_RCU_LVL_1, NUM_RCU_LVL_2, NUM_RCU_LVL_3 }
74161 #  define RCU_NODE_NAME_INIT  { "rcu_node_0", "rcu_node_1", "rcu_node_2", "rcu_node_3" }
74262 #  define RCU_FQS_NAME_INIT   { "rcu_node_fqs_0", "rcu_node_fqs_1", "rcu_node_fqs_2", "rcu_node_fqs_3" }
74363 #  define RCU_EXP_NAME_INIT   { "rcu_node_exp_0", "rcu_node_exp_1", "rcu_node_exp_2", "rcu_node_exp_3" }
74464 #else
74565 # error "CONFIG_RCU_FANOUT insufficient for NR_CPUS"
74666 #endif
747</pre>
748
749<p>The maximum number of levels in the <tt>rcu_node</tt> structure
750is currently limited to four, as specified by lines&nbsp;21-24
751and the structure of the subsequent &ldquo;if&rdquo; statement.
752For 32-bit systems, this allows 16*32*32*32=524,288 CPUs, which
753should be sufficient for the next few years at least.
754For 64-bit systems, 16*64*64*64=4,194,304 CPUs is allowed, which
755should see us through the next decade or so.
756This four-level tree also allows kernels built with
757<tt>CONFIG_RCU_FANOUT=8</tt> to support up to 4096 CPUs,
758which might be useful in very large systems having eight CPUs per
759socket (but please note that no one has yet shown any measurable
760performance degradation due to misaligned socket and <tt>rcu_node</tt>
761boundaries).
762In addition, building kernels with a full four levels of <tt>rcu_node</tt>
763tree permits better testing of RCU's combining-tree code.
764
765</p><p>The <tt>RCU_FANOUT</tt> symbol controls how many children
766are permitted at each non-leaf level of the <tt>rcu_node</tt> tree.
767If the <tt>CONFIG_RCU_FANOUT</tt> Kconfig option is not specified,
768it is set based on the word size of the system, which is also
769the Kconfig default.
770
771</p><p>The <tt>RCU_FANOUT_LEAF</tt> symbol controls how many CPUs are
772handled by each leaf <tt>rcu_node</tt> structure.
773Experience has shown that allowing a given leaf <tt>rcu_node</tt>
774structure to handle 64 CPUs, as permitted by the number of bits in
775the <tt>-&gt;qsmask</tt> field on a 64-bit system, results in
776excessive contention for the leaf <tt>rcu_node</tt> structures'
777<tt>-&gt;lock</tt> fields.
778The number of CPUs per leaf <tt>rcu_node</tt> structure is therefore
779limited to 16 given the default value of <tt>CONFIG_RCU_FANOUT_LEAF</tt>.
780If <tt>CONFIG_RCU_FANOUT_LEAF</tt> is unspecified, the value
781selected is based on the word size of the system, just as for
782<tt>CONFIG_RCU_FANOUT</tt>.
783Lines&nbsp;11-19 perform this computation.
784
785</p><p>Lines&nbsp;21-24 compute the maximum number of CPUs supported by
786a single-level (which contains a single <tt>rcu_node</tt> structure),
787two-level, three-level, and four-level <tt>rcu_node</tt> tree,
788respectively, given the fanout specified by <tt>RCU_FANOUT</tt>
789and <tt>RCU_FANOUT_LEAF</tt>.
790These numbers of CPUs are retained in the
791<tt>RCU_FANOUT_1</tt>,
792<tt>RCU_FANOUT_2</tt>,
793<tt>RCU_FANOUT_3</tt>, and
794<tt>RCU_FANOUT_4</tt>
795C-preprocessor variables, respectively.
796
797</p><p>These variables are used to control the C-preprocessor <tt>#if</tt>
798statement spanning lines&nbsp;26-66 that computes the number of
799<tt>rcu_node</tt> structures required for each level of the tree,
800as well as the number of levels required.
801The number of levels is placed in the <tt>NUM_RCU_LVLS</tt>
802C-preprocessor variable by lines&nbsp;27, 35, 44, and&nbsp;54.
803The number of <tt>rcu_node</tt> structures for the topmost level
804of the tree is always exactly one, and this value is unconditionally
805placed into <tt>NUM_RCU_LVL_0</tt> by lines&nbsp;28, 36, 45, and&nbsp;55.
806The rest of the levels (if any) of the <tt>rcu_node</tt> tree
807are computed by dividing the maximum number of CPUs by the
808fanout supported by the number of levels from the current level down,
809rounding up.  This computation is performed by lines&nbsp;37,
81046-47, and&nbsp;56-58.
811Lines&nbsp;31-33, 40-42, 50-52, and&nbsp;62-63 create initializers
812for lockdep lock-class names.
813Finally, lines&nbsp;64-66 produce an error if the maximum number of
814CPUs is too large for the specified fanout.
815
816<h3><a name="The rcu_segcblist Structure">
817The <tt>rcu_segcblist</tt> Structure</a></h3>
818
819The <tt>rcu_segcblist</tt> structure maintains a segmented list of
820callbacks as follows:
821
822<pre>
823 1 #define RCU_DONE_TAIL        0
824 2 #define RCU_WAIT_TAIL        1
825 3 #define RCU_NEXT_READY_TAIL  2
826 4 #define RCU_NEXT_TAIL        3
827 5 #define RCU_CBLIST_NSEGS     4
828 6
829 7 struct rcu_segcblist {
830 8   struct rcu_head *head;
831 9   struct rcu_head **tails[RCU_CBLIST_NSEGS];
83210   unsigned long gp_seq[RCU_CBLIST_NSEGS];
83311   long len;
83412   long len_lazy;
83513 };
836</pre>
837
838<p>
839The segments are as follows:
840
841<ol>
842<li>	<tt>RCU_DONE_TAIL</tt>: Callbacks whose grace periods have elapsed.
843	These callbacks are ready to be invoked.
844<li>	<tt>RCU_WAIT_TAIL</tt>: Callbacks that are waiting for the
845	current grace period.
846	Note that different CPUs can have different ideas about which
847	grace period is current, hence the <tt>-&gt;gp_seq</tt> field.
848<li>	<tt>RCU_NEXT_READY_TAIL</tt>: Callbacks waiting for the next
849	grace period to start.
850<li>	<tt>RCU_NEXT_TAIL</tt>: Callbacks that have not yet been
851	associated with a grace period.
852</ol>
853
854<p>
855The <tt>-&gt;head</tt> pointer references the first callback or
856is <tt>NULL</tt> if the list contains no callbacks (which is
857<i>not</i> the same as being empty).
858Each element of the <tt>-&gt;tails[]</tt> array references the
859<tt>-&gt;next</tt> pointer of the last callback in the corresponding
860segment of the list, or the list's <tt>-&gt;head</tt> pointer if
861that segment and all previous segments are empty.
862If the corresponding segment is empty but some previous segment is
863not empty, then the array element is identical to its predecessor.
864Older callbacks are closer to the head of the list, and new callbacks
865are added at the tail.
866This relationship between the <tt>-&gt;head</tt> pointer, the
867<tt>-&gt;tails[]</tt> array, and the callbacks is shown in this
868diagram:
869
870</p><p><img src="nxtlist.svg" alt="nxtlist.svg" width="40%">
871
872</p><p>In this figure, the <tt>-&gt;head</tt> pointer references the
873first
874RCU callback in the list.
875The <tt>-&gt;tails[RCU_DONE_TAIL]</tt> array element references
876the <tt>-&gt;head</tt> pointer itself, indicating that none
877of the callbacks is ready to invoke.
878The <tt>-&gt;tails[RCU_WAIT_TAIL]</tt> array element references callback
879CB&nbsp;2's <tt>-&gt;next</tt> pointer, which indicates that
880CB&nbsp;1 and CB&nbsp;2 are both waiting on the current grace period,
881give or take possible disagreements about exactly which grace period
882is the current one.
883The <tt>-&gt;tails[RCU_NEXT_READY_TAIL]</tt> array element
884references the same RCU callback that <tt>-&gt;tails[RCU_WAIT_TAIL]</tt>
885does, which indicates that there are no callbacks waiting on the next
886RCU grace period.
887The <tt>-&gt;tails[RCU_NEXT_TAIL]</tt> array element references
888CB&nbsp;4's <tt>-&gt;next</tt> pointer, indicating that all the
889remaining RCU callbacks have not yet been assigned to an RCU grace
890period.
891Note that the <tt>-&gt;tails[RCU_NEXT_TAIL]</tt> array element
892always references the last RCU callback's <tt>-&gt;next</tt> pointer
893unless the callback list is empty, in which case it references
894the <tt>-&gt;head</tt> pointer.
895
896<p>
897There is one additional important special case for the
898<tt>-&gt;tails[RCU_NEXT_TAIL]</tt> array element: It can be <tt>NULL</tt>
899when this list is <i>disabled</i>.
900Lists are disabled when the corresponding CPU is offline or when
901the corresponding CPU's callbacks are offloaded to a kthread,
902both of which are described elsewhere.
903
904</p><p>CPUs advance their callbacks from the
905<tt>RCU_NEXT_TAIL</tt> to the <tt>RCU_NEXT_READY_TAIL</tt> to the
906<tt>RCU_WAIT_TAIL</tt> to the <tt>RCU_DONE_TAIL</tt> list segments
907as grace periods advance.
908
909</p><p>The <tt>-&gt;gp_seq[]</tt> array records grace-period
910numbers corresponding to the list segments.
911This is what allows different CPUs to have different ideas as to
912which is the current grace period while still avoiding premature
913invocation of their callbacks.
914In particular, this allows CPUs that go idle for extended periods
915to determine which of their callbacks are ready to be invoked after
916reawakening.
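</p><p>The tail-pointer manipulation is easiest to see in code.
Here is a minimal userspace sketch (illustrative only; it omits the
<tt>-&gt;gp_seq[]</tt> bookkeeping and all locking):

</p><pre>
#include &lt;stdio.h&gt;

enum { SEG_DONE, SEG_WAIT, SEG_NEXT_READY, SEG_NEXT, NSEGS };

struct cb { struct cb *next; };

struct segcblist {
  struct cb *head;
  struct cb **tails[NSEGS];
  long len;
};

static void cblist_init(struct segcblist *l)
{
  int i;

  l-&gt;head = NULL;
  for (i = 0; i &lt; NSEGS; i++)
    l-&gt;tails[i] = &amp;l-&gt;head;  /* all segments empty */
  l-&gt;len = 0;
}

/* Newly registered callbacks always enter the NEXT segment. */
static void cblist_enqueue(struct segcblist *l, struct cb *c)
{
  c-&gt;next = NULL;
  *l-&gt;tails[SEG_NEXT] = c;        /* link after the last callback */
  l-&gt;tails[SEG_NEXT] = &amp;c-&gt;next;  /* tail now references c's next */
  l-&gt;len++;
}

/* One grace-period step: each segment absorbs its successor's
 * callbacks, moving them one segment closer to DONE. */
static void cblist_advance(struct segcblist *l)
{
  l-&gt;tails[SEG_DONE] = l-&gt;tails[SEG_WAIT];
  l-&gt;tails[SEG_WAIT] = l-&gt;tails[SEG_NEXT_READY];
  l-&gt;tails[SEG_NEXT_READY] = l-&gt;tails[SEG_NEXT];
}

int main(void)
{
  struct segcblist l;
  struct cb a, b;

  cblist_init(&amp;l);
  cblist_enqueue(&amp;l, &amp;a);
  cblist_enqueue(&amp;l, &amp;b);
  cblist_advance(&amp;l);  /* a and b move from NEXT to NEXT_READY */
  printf("len=%ld\n", l.len);
  return 0;
}
</pre>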
917
918</p><p>The <tt>-&gt;len</tt> counter contains the number of
919callbacks in <tt>-&gt;head</tt>, and the
920<tt>-&gt;len_lazy</tt> contains the number of those callbacks that
921are known to only free memory, and whose invocation can therefore
922be safely deferred.
923
924<p><b>Important note</b>: It is the <tt>-&gt;len</tt> field that
925determines whether or not there are callbacks associated with
926this <tt>rcu_segcblist</tt> structure, <i>not</i> the <tt>-&gt;head</tt>
927pointer.
The reason for this is that all the ready-to-invoke callbacks
(that is, those in the <tt>RCU_DONE_TAIL</tt> segment) are extracted
all at once at callback-invocation time (<tt>rcu_do_batch()</tt>),
so that <tt>-&gt;head</tt> may be set to <tt>NULL</tt> if there are
no not-done callbacks remaining in the <tt>rcu_segcblist</tt>.
933If callback invocation must be postponed, for example, because a
934high-priority process just woke up on this CPU, then the remaining
935callbacks are placed back on the <tt>RCU_DONE_TAIL</tt> segment and
936<tt>-&gt;head</tt> once again points to the start of the segment.
In short, the <tt>-&gt;head</tt> field can briefly be <tt>NULL</tt> even though the
938CPU has callbacks present the entire time.
939Therefore, it is not appropriate to test the <tt>-&gt;head</tt> pointer
940for <tt>NULL</tt>.
941
942<p>In contrast, the <tt>-&gt;len</tt> and <tt>-&gt;len_lazy</tt> counts
943are adjusted only after the corresponding callbacks have been invoked.
944This means that the <tt>-&gt;len</tt> count is zero only if
945the <tt>rcu_segcblist</tt> structure really is devoid of callbacks.
946Of course, off-CPU sampling of the <tt>-&gt;len</tt> count requires
947careful use of appropriate synchronization, for example, memory barriers.
948This synchronization can be a bit subtle, particularly in the case
949of <tt>rcu_barrier()</tt>.
950
951<h3><a name="The rcu_data Structure">
952The <tt>rcu_data</tt> Structure</a></h3>
953
<p>The <tt>rcu_data</tt> structure maintains the per-CPU state for the RCU subsystem.
955The fields in this structure may be accessed only from the corresponding
956CPU (and from tracing) unless otherwise stated.
957This structure is the
958focus of quiescent-state detection and RCU callback queuing.
959It also tracks its relationship to the corresponding leaf
960<tt>rcu_node</tt> structure to allow more-efficient
961propagation of quiescent states up the <tt>rcu_node</tt>
962combining tree.
Like the <tt>rcu_node</tt> structure, it provides a local
copy of the grace-period information so that this
information can be accessed from the corresponding CPU
without incurring synchronization overhead.
967Finally, this structure records past dyntick-idle state
968for the corresponding CPU and also tracks statistics.
969
970</p><p>The <tt>rcu_data</tt> structure's fields are discussed,
971singly and in groups, in the following sections.
972
973<h5>Connection to Other Data Structures</h5>
974
975<p>This portion of the <tt>rcu_data</tt> structure is declared
976as follows:
977
978<pre>
979  1   int cpu;
980  2   struct rcu_node *mynode;
981  3   unsigned long grpmask;
982  4   bool beenonline;
983</pre>
984
985<p>The <tt>-&gt;cpu</tt> field contains the number of the
986corresponding CPU and the <tt>-&gt;mynode</tt> field references the
987corresponding <tt>rcu_node</tt> structure.
The <tt>-&gt;mynode</tt> field is used to propagate quiescent states
989up the combining tree.
990These two fields are constant and therefore do not require synchronization.
991
992<p>The <tt>-&gt;grpmask</tt> field indicates the bit in
993the <tt>-&gt;mynode-&gt;qsmask</tt> corresponding to this
994<tt>rcu_data</tt> structure, and is also used when propagating
995quiescent states.
996The <tt>-&gt;beenonline</tt> flag is set whenever the corresponding
997CPU comes online, which means that the debugfs tracing need not dump
998out any <tt>rcu_data</tt> structure for which this flag is not set.
999
1000<h5>Quiescent-State and Grace-Period Tracking</h5>
1001
1002<p>This portion of the <tt>rcu_data</tt> structure is declared
1003as follows:
1004
1005<pre>
1006  1   unsigned long gp_seq;
1007  2   unsigned long gp_seq_needed;
1008  3   bool cpu_no_qs;
1009  4   bool core_needs_qs;
1010  5   bool gpwrap;
1011</pre>
1012
1013<p>The <tt>-&gt;gp_seq</tt> field is the counterpart of the field of the same
1014name in the <tt>rcu_state</tt> and <tt>rcu_node</tt> structures.  The
1015<tt>-&gt;gp_seq_needed</tt> field is the counterpart of the field of the same
name in the <tt>rcu_node</tt> structure.
1017They may each lag up to one behind their <tt>rcu_node</tt>
1018counterparts, but in <tt>CONFIG_NO_HZ_IDLE</tt> and
1019<tt>CONFIG_NO_HZ_FULL</tt> kernels can lag
1020arbitrarily far behind for CPUs in dyntick-idle mode (but these counters
1021will catch up upon exit from dyntick-idle mode).
1022If the lower two bits of a given <tt>rcu_data</tt> structure's
1023<tt>-&gt;gp_seq</tt> are zero, then this <tt>rcu_data</tt>
1024structure believes that RCU is idle.
1025
1026<table>
1027<tr><th>&nbsp;</th></tr>
1028<tr><th align="left">Quick Quiz:</th></tr>
1029<tr><td>
1030	All this replication of the grace period numbers can only cause
1031	massive confusion.
1032	Why not just keep a global sequence number and be done with it???
1033</td></tr>
1034<tr><th align="left">Answer:</th></tr>
1035<tr><td bgcolor="#ffffff"><font color="ffffff">
	Because if there were only a single global sequence
	number, there would need to be a single global lock to allow
	safe access to and updating of it.
1039	And if we are not going to have a single global lock, we need
1040	to carefully manage the numbers on a per-node basis.
1041	Recall from the answer to a previous Quick Quiz that the consequences
1042	of applying a previously sampled quiescent state to the wrong
1043	grace period are quite severe.
1044</font></td></tr>
1045<tr><td>&nbsp;</td></tr>
1046</table>
1047
1048<p>The <tt>-&gt;cpu_no_qs</tt> flag indicates that the
1049CPU has not yet passed through a quiescent state,
1050while the <tt>-&gt;core_needs_qs</tt> flag indicates that the
1051RCU core needs a quiescent state from the corresponding CPU.
1052The <tt>-&gt;gpwrap</tt> field indicates that the corresponding
1053CPU has remained idle for so long that the
1054<tt>gp_seq</tt> counter is in danger of overflow, which
1055will cause the CPU to disregard the values of its counters on
1056its next exit from idle.
1057
1058<h5>RCU Callback Handling</h5>
1059
1060<p>In the absence of CPU-hotplug events, RCU callbacks are invoked by
1061the same CPU that registered them.
1062This is strictly a cache-locality optimization: callbacks can and
1063do get invoked on CPUs other than the one that registered them.
1064After all, if the CPU that registered a given callback has gone
1065offline before the callback can be invoked, there really is no other
1066choice.
1067
1068</p><p>This portion of the <tt>rcu_data</tt> structure is declared
1069as follows:
1070
1071<pre>
1072 1 struct rcu_segcblist cblist;
1073 2 long qlen_last_fqs_check;
1074 3 unsigned long n_cbs_invoked;
1075 4 unsigned long n_nocbs_invoked;
1076 5 unsigned long n_cbs_orphaned;
1077 6 unsigned long n_cbs_adopted;
1078 7 unsigned long n_force_qs_snap;
1079 8 long blimit;
1080</pre>
1081
1082<p>The <tt>-&gt;cblist</tt> structure is the segmented callback list
1083described earlier.
1084The CPU advances the callbacks in its <tt>rcu_data</tt> structure
1085whenever it notices that another RCU grace period has completed.
1086The CPU detects the completion of an RCU grace period by noticing
1087that the value of its <tt>rcu_data</tt> structure's
1088<tt>-&gt;gp_seq</tt> field differs from that of its leaf
1089<tt>rcu_node</tt> structure.
1090Recall that each <tt>rcu_node</tt> structure's
1091<tt>-&gt;gp_seq</tt> field is updated at the beginnings and ends of each
1092grace period.
1093
1094<p>
1095The <tt>-&gt;qlen_last_fqs_check</tt> and
1096<tt>-&gt;n_force_qs_snap</tt> coordinate the forcing of quiescent
1097states from <tt>call_rcu()</tt> and friends when callback
1098lists grow excessively long.
1099
1100</p><p>The <tt>-&gt;n_cbs_invoked</tt>,
1101<tt>-&gt;n_cbs_orphaned</tt>, and <tt>-&gt;n_cbs_adopted</tt>
1102fields count the number of callbacks invoked,
1103sent to other CPUs when this CPU goes offline,
1104and received from other CPUs when those other CPUs go offline.
The <tt>-&gt;n_nocbs_invoked</tt> field is used when the CPU's callbacks
1106are offloaded to a kthread.
1107
1108<p>
1109Finally, the <tt>-&gt;blimit</tt> counter is the maximum number of
1110RCU callbacks that may be invoked at a given time.
1111
1112<h5>Dyntick-Idle Handling</h5>
1113
1114<p>This portion of the <tt>rcu_data</tt> structure is declared
1115as follows:
1116
1117<pre>
1118  1   int dynticks_snap;
1119  2   unsigned long dynticks_fqs;
1120</pre>
1121
1122The <tt>-&gt;dynticks_snap</tt> field is used to take a snapshot
1123of the corresponding CPU's dyntick-idle state when forcing
1124quiescent states, and is therefore accessed from other CPUs.
1125Finally, the <tt>-&gt;dynticks_fqs</tt> field is used to
1126count the number of times this CPU is determined to be in
1127dyntick-idle state, and is used for tracing and debugging purposes.
1128
1129<p>
This portion of the <tt>rcu_data</tt> structure is declared as follows:
1131
1132<pre>
1133  1   long dynticks_nesting;
1134  2   long dynticks_nmi_nesting;
1135  3   atomic_t dynticks;
1136  4   bool rcu_need_heavy_qs;
1137  5   bool rcu_urgent_qs;
1138</pre>
1139
<p>These fields in the <tt>rcu_data</tt> structure maintain the per-CPU dyntick-idle
1141state for the corresponding CPU.
1142The fields may be accessed only from the corresponding CPU (and from tracing)
1143unless otherwise stated.
1144
1145<p>The <tt>-&gt;dynticks_nesting</tt> field counts the
1146nesting depth of process execution, so that in normal circumstances
1147this counter has value zero or one.
1148NMIs, irqs, and tracers are counted by the <tt>-&gt;dynticks_nmi_nesting</tt>
1149field.
1150Because NMIs cannot be masked, changes to this variable have to be
1151undertaken carefully using an algorithm provided by Andy Lutomirski.
1152The initial transition from idle adds one, and nested transitions
1153add two, so that a nesting level of five is represented by a
1154<tt>-&gt;dynticks_nmi_nesting</tt> value of nine.
1155This counter can therefore be thought of as counting the number
1156of reasons why this CPU cannot be permitted to enter dyntick-idle
1157mode, aside from process-level transitions.
1158
1159<p>However, it turns out that when running in non-idle kernel context,
1160the Linux kernel is fully capable of entering interrupt handlers that
1161never exit and perhaps also vice versa.
1162Therefore, whenever the <tt>-&gt;dynticks_nesting</tt> field is
1163incremented up from zero, the <tt>-&gt;dynticks_nmi_nesting</tt> field
1164is set to a large positive number, and whenever the
1165<tt>-&gt;dynticks_nesting</tt> field is decremented down to zero,
the <tt>-&gt;dynticks_nmi_nesting</tt> field is set to zero.
1167Assuming that the number of misnested interrupts is not sufficient
1168to overflow the counter, this approach corrects the
1169<tt>-&gt;dynticks_nmi_nesting</tt> field every time the corresponding
1170CPU enters the idle loop from process context.
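</p><p>The increment pattern is easy to sketch in userspace
(illustrative arithmetic only, not the kernel's actual code):

</p><pre>
#include &lt;stdio.h&gt;

static long nmi_nesting;  /* models -&gt;dynticks_nmi_nesting */

static void handler_enter(void)
{
  /* Initial transition from idle adds one, nested transitions two. */
  nmi_nesting += nmi_nesting ? 2 : 1;
}

static void handler_exit(void)
{
  nmi_nesting -= nmi_nesting == 1 ? 1 : 2;
}

int main(void)
{
  int i;

  for (i = 0; i &lt; 5; i++)  /* five nested entries... */
    handler_enter();
  printf("%ld\n", nmi_nesting);  /* ...yield a value of nine */
  for (i = 0; i &lt; 5; i++)
    handler_exit();
  printf("%ld\n", nmi_nesting);  /* back to zero */
  return 0;
}
</pre>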
1171
1172</p><p>The <tt>-&gt;dynticks</tt> field counts the corresponding
1173CPU's transitions to and from either dyntick-idle or user mode, so
1174that this counter has an even value when the CPU is in dyntick-idle
1175mode or user mode and an odd value otherwise. The transitions to/from
1176user mode need to be counted for user mode adaptive-ticks support
1177(see timers/NO_HZ.txt).
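</p><p>This even/odd convention is what lets another CPU sample the
counter (into <tt>-&gt;dynticks_snap</tt>, described above) and later
decide whether this CPU passed through a quiescent state.
A userspace sketch of the idea (illustrative only):

</p><pre>
#include &lt;stdio.h&gt;

static unsigned int dynticks = 1;  /* odd: running in the kernel */

static void transition(void)  /* enter or leave idle/user mode */
{
  dynticks++;
}

/* An even snapshot, or any later change, implies a quiescent state. */
static int qs_since(unsigned int snap)
{
  return !(snap &amp; 1) || dynticks != snap;
}

int main(void)
{
  unsigned int snap = dynticks;  /* remote CPU samples: odd */

  printf("%d\n", qs_since(snap));  /* 0: no evidence yet */
  transition();                    /* CPU enters idle */
  transition();                    /* CPU exits idle */
  printf("%d\n", qs_since(snap));  /* 1: counter changed */
  return 0;
}
</pre>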
1178
1179</p><p>The <tt>-&gt;rcu_need_heavy_qs</tt> field is used
1180to record the fact that the RCU core code would really like to
1181see a quiescent state from the corresponding CPU, so much so that
1182it is willing to call for heavy-weight dyntick-counter operations.
1183This flag is checked by RCU's context-switch and <tt>cond_resched()</tt>
1184code, which provide a momentary idle sojourn in response.
1185
1186</p><p>Finally, the <tt>-&gt;rcu_urgent_qs</tt> field is used to record
1187the fact that the RCU core code would really like to see a quiescent state from
1188the corresponding CPU, with the various other fields indicating just how badly
1189RCU wants this quiescent state.
1190This flag is checked by RCU's context-switch path
(<tt>rcu_note_context_switch()</tt>) and the <tt>cond_resched()</tt> code.
1192
1193<table>
1194<tr><th>&nbsp;</th></tr>
1195<tr><th align="left">Quick Quiz:</th></tr>
1196<tr><td>
1197	Why not simply combine the <tt>-&gt;dynticks_nesting</tt>
1198	and <tt>-&gt;dynticks_nmi_nesting</tt> counters into a
1199	single counter that just counts the number of reasons that
1200	the corresponding CPU is non-idle?
1201</td></tr>
1202<tr><th align="left">Answer:</th></tr>
1203<tr><td bgcolor="#ffffff"><font color="ffffff">
1204	Because this would fail in the presence of interrupts whose
1205	handlers never return and of handlers that manage to return
1206	from a made-up interrupt.
1207</font></td></tr>
1208<tr><td>&nbsp;</td></tr>
1209</table>
1210
1211<p>Additional fields are present for some special-purpose
1212builds, and are discussed separately.
1213
1214<h3><a name="The rcu_head Structure">
1215The <tt>rcu_head</tt> Structure</a></h3>
1216
1217<p>Each <tt>rcu_head</tt> structure represents an RCU callback.
1218These structures are normally embedded within RCU-protected data
1219structures whose algorithms use asynchronous grace periods.
1220In contrast, when using algorithms that block waiting for RCU grace periods,
1221RCU users need not provide <tt>rcu_head</tt> structures.
1222
1223</p><p>The <tt>rcu_head</tt> structure has fields as follows:
1224
1225<pre>
1226  1   struct rcu_head *next;
1227  2   void (*func)(struct rcu_head *head);
1228</pre>
1229
1230<p>The <tt>-&gt;next</tt> field is used
1231to link the <tt>rcu_head</tt> structures together in the
1232lists within the <tt>rcu_data</tt> structures.
1233The <tt>-&gt;func</tt> field is a pointer to the function
1234to be called when the callback is ready to be invoked, and
1235this function is passed a pointer to the <tt>rcu_head</tt>
1236structure.
1237However, <tt>kfree_rcu()</tt> uses the <tt>-&gt;func</tt>
1238field to record the offset of the <tt>rcu_head</tt>
1239structure within the enclosing RCU-protected data structure.
1240
1241</p><p>Both of these fields are used internally by RCU.
1242From the viewpoint of RCU users, this structure is an
1243opaque &ldquo;cookie&rdquo;.
1244
1245<table>
1246<tr><th>&nbsp;</th></tr>
1247<tr><th align="left">Quick Quiz:</th></tr>
1248<tr><td>
1249	Given that the callback function <tt>-&gt;func</tt>
1250	is passed a pointer to the <tt>rcu_head</tt> structure,
1251	how is that function supposed to find the beginning of the
1252	enclosing RCU-protected data structure?
1253</td></tr>
1254<tr><th align="left">Answer:</th></tr>
1255<tr><td bgcolor="#ffffff"><font color="ffffff">
1256	In actual practice, there is a separate callback function per
1257	type of RCU-protected data structure.
1258	The callback function can therefore use the <tt>container_of()</tt>
1259	macro in the Linux kernel (or other pointer-manipulation facilities
1260	in other software environments) to find the beginning of the
1261	enclosing structure.
1262</font></td></tr>
1263<tr><td>&nbsp;</td></tr>
1264</table>
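<p>To make this concrete, here is a userspace sketch of the usual
embedding-and-callback pattern (illustrative only; <tt>struct foo</tt>
and <tt>foo_reclaim()</tt> are made-up names, and in the kernel the
callback would instead be registered with <tt>call_rcu()</tt>):

</p><pre>
#include &lt;stddef.h&gt;
#include &lt;stdlib.h&gt;

struct rcu_head {
  struct rcu_head *next;
  void (*func)(struct rcu_head *head);
};

struct foo {
  int a;
  struct rcu_head rh;  /* embedded in the RCU-protected structure */
};

#define container_of(ptr, type, member) \
  ((type *)((char *)(ptr) - offsetof(type, member)))

/* Callback: recover the enclosing struct foo and free it. */
static void foo_reclaim(struct rcu_head *head)
{
  struct foo *fp = container_of(head, struct foo, rh);

  free(fp);
}

int main(void)
{
  struct foo *fp = malloc(sizeof(*fp));

  if (!fp)
    return 1;
  /* In the kernel: call_rcu(&amp;fp-&gt;rh, foo_reclaim);
   * here the callback is invoked directly for illustration. */
  foo_reclaim(&amp;fp-&gt;rh);
  return 0;
}
</pre>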
1265
1266<h3><a name="RCU-Specific Fields in the task_struct Structure">
1267RCU-Specific Fields in the <tt>task_struct</tt> Structure</a></h3>
1268
1269<p>The <tt>CONFIG_PREEMPT_RCU</tt> implementation uses some
1270additional fields in the <tt>task_struct</tt> structure:
1271
1272<pre>
1273 1 #ifdef CONFIG_PREEMPT_RCU
1274 2   int rcu_read_lock_nesting;
1275 3   union rcu_special rcu_read_unlock_special;
1276 4   struct list_head rcu_node_entry;
1277 5   struct rcu_node *rcu_blocked_node;
1278 6 #endif /* #ifdef CONFIG_PREEMPT_RCU */
1279 7 #ifdef CONFIG_TASKS_RCU
1280 8   unsigned long rcu_tasks_nvcsw;
1281 9   bool rcu_tasks_holdout;
128210   struct list_head rcu_tasks_holdout_list;
128311   int rcu_tasks_idle_cpu;
128412 #endif /* #ifdef CONFIG_TASKS_RCU */
1285</pre>
1286
1287<p>The <tt>-&gt;rcu_read_lock_nesting</tt> field records the
1288nesting level for RCU read-side critical sections, and
1289the <tt>-&gt;rcu_read_unlock_special</tt> field is a bitmask
1290that records special conditions that require <tt>rcu_read_unlock()</tt>
1291to do additional work.
1292The <tt>-&gt;rcu_node_entry</tt> field is used to form lists of
1293tasks that have blocked within preemptible-RCU read-side critical
1294sections and the <tt>-&gt;rcu_blocked_node</tt> field references
1295the <tt>rcu_node</tt> structure whose list this task is a member of,
1296or <tt>NULL</tt> if it is not blocked within a preemptible-RCU
1297read-side critical section.
1298
1299<p>The <tt>-&gt;rcu_tasks_nvcsw</tt> field tracks the number of
1300voluntary context switches that this task had undergone at the
1301beginning of the current tasks-RCU grace period,
1302<tt>-&gt;rcu_tasks_holdout</tt> is set if the current tasks-RCU
1303grace period is waiting on this task, <tt>-&gt;rcu_tasks_holdout_list</tt>
1304is a list element enqueuing this task on the holdout list,
1305and <tt>-&gt;rcu_tasks_idle_cpu</tt> tracks which CPU this
idle task is running on, but only if the task is currently running,
1307that is, if the CPU is currently idle.
1308
1309<h3><a name="Accessor Functions">
1310Accessor Functions</a></h3>
1311
1312<p>The following listing shows the
<tt>rcu_get_root()</tt> function and the
<tt>rcu_for_each_node_breadth_first()</tt> and
<tt>rcu_for_each_leaf_node()</tt> macros:
1315
1316<pre>
1317  1 static struct rcu_node *rcu_get_root(struct rcu_state *rsp)
1318  2 {
1319  3   return &amp;rsp-&gt;node[0];
1320  4 }
1321  5
1322  6 #define rcu_for_each_node_breadth_first(rsp, rnp) \
1323  7   for ((rnp) = &amp;(rsp)-&gt;node[0]; \
1324  8        (rnp) &lt; &amp;(rsp)-&gt;node[NUM_RCU_NODES]; (rnp)++)
1325  9
1326 10 #define rcu_for_each_leaf_node(rsp, rnp) \
1327 11   for ((rnp) = (rsp)-&gt;level[NUM_RCU_LVLS - 1]; \
1328 12        (rnp) &lt; &amp;(rsp)-&gt;node[NUM_RCU_NODES]; (rnp)++)
1329</pre>
1330
<p>The <tt>rcu_get_root()</tt> function simply returns a pointer to the
1332first element of the specified <tt>rcu_state</tt> structure's
1333<tt>-&gt;node[]</tt> array, which is the root <tt>rcu_node</tt>
1334structure.
1335
1336</p><p>As noted earlier, the <tt>rcu_for_each_node_breadth_first()</tt>
1337macro takes advantage of the layout of the <tt>rcu_node</tt>
1338structures in the <tt>rcu_state</tt> structure's
1339<tt>-&gt;node[]</tt> array, performing a breadth-first traversal by
1340simply traversing the array in order.
1341Similarly, the <tt>rcu_for_each_leaf_node()</tt> macro traverses only
1342the last part of the array, thus traversing only the leaf
1343<tt>rcu_node</tt> structures.
1344
1345<table>
1346<tr><th>&nbsp;</th></tr>
1347<tr><th align="left">Quick Quiz:</th></tr>
1348<tr><td>
1349	What does
1350	<tt>rcu_for_each_leaf_node()</tt> do if the <tt>rcu_node</tt> tree
1351	contains only a single node?
1352</td></tr>
1353<tr><th align="left">Answer:</th></tr>
1354<tr><td bgcolor="#ffffff"><font color="ffffff">
1355	In the single-node case,
1356	<tt>rcu_for_each_leaf_node()</tt> traverses the single node.
1357</font></td></tr>
1358<tr><td>&nbsp;</td></tr>
1359</table>
1360
1361<h3><a name="Summary">
1362Summary</a></h3>
1363
1364So the state of RCU is represented by an <tt>rcu_state</tt> structure,
1365which contains a combining tree of <tt>rcu_node</tt> and
1366<tt>rcu_data</tt> structures.
1367Finally, in <tt>CONFIG_NO_HZ_IDLE</tt> kernels, each CPU's dyntick-idle
1368state is tracked by dynticks-related fields in the <tt>rcu_data</tt> structure.
1369
1370If you made it this far, you are well prepared to read the code
1371walkthroughs in the other articles in this series.
1372
1373<h3><a name="Acknowledgments">
1374Acknowledgments</a></h3>
1375
1376I owe thanks to Cyrill Gorcunov, Mathieu Desnoyers, Dhaval Giani, Paul
1377Turner, Abhishek Srivastava, Matt Kowalczyk, and Serge Hallyn
1378for helping me get this document into a more human-readable state.
1379
1380<h3><a name="Legal Statement">
1381Legal Statement</a></h3>
1382
1383<p>This work represents the view of the author and does not necessarily
1384represent the view of IBM.
1385
1386</p><p>Linux is a registered trademark of Linus Torvalds.
1387
1388</p><p>Other company, product, and service names may be trademarks or
1389service marks of others.
1390
1391</body></html>
1392