Lines Matching refs:rcu_node
50 critical section for the ``rcu_node`` structure's
62 Therefore, for any given ``rcu_node`` structure, any access
71 on different ``rcu_node`` structures.
118 | But the chain of rcu_node-structure lock acquisitions guarantees |
166 | by the CPU's leaf ``rcu_node`` structure's ``->lock`` as described |
194 the ``rcu_node`` structure's ``->lock`` field, so much so that it is
207 6 struct rcu_node *rnp;
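
The matched lines above come from the passage on how heavily Linux-kernel RCU leans on the ``rcu_node`` structure's ``->lock``, and on the wrapper functions that make each acquisition provide full ordering. As a rough userspace sketch of that wrapper pattern (hypothetical ``rnp_lock()``/``rnp_unlock()`` names, with a seq_cst fence standing in for the kernel's ``smp_mb__after_unlock_lock()``; this is not the kernel's code)::

    #include <pthread.h>
    #include <stdatomic.h>

    /* Toy stand-in for the kernel's rcu_node structure; only the
     * ->lock discussed in the matched lines is modeled here. */
    struct rcu_node {
            pthread_mutex_t lock;
    };

    /* Acquire ->lock, then issue a full barrier, so that an unlock of
     * any rcu_node ->lock on one CPU followed by this acquisition on
     * another CPU fully orders the two critical sections. */
    static void rnp_lock(struct rcu_node *rnp)
    {
            pthread_mutex_lock(&rnp->lock);
            atomic_thread_fence(memory_order_seq_cst);
    }

    static void rnp_unlock(struct rcu_node *rnp)
    {
            pthread_mutex_unlock(&rnp->lock);
    }
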
250 .. kernel-figure:: rcu_node-lock.svg
252 The box represents the ``rcu_node`` structure's ``->lock`` critical
310 ``rcu_node`` structure's ``->lock``. In all cases, there is full
311 ordering against any prior critical section for that same ``rcu_node``
313 current task's or CPU's prior critical sections for any ``rcu_node``
336 thread, which makes several passes over the ``rcu_node`` tree within the
340 ``rcu_node`` changes over time, just like Heraclitus's river. However,
341 to keep the ``rcu_node`` river tractable, the grace-period kernel
353 root ``rcu_node`` structure is touched.
355 The first pass through the ``rcu_node`` tree updates bitmasks based on
358 this ``rcu_node`` structure has not transitioned to or from zero, this
359 pass will scan only the leaf ``rcu_node`` structures. However, if the
360 number of online CPUs for a given leaf ``rcu_node`` structure has
363 leaf ``rcu_node`` structure has transitioned to zero,
366 ``rcu_node`` structure onlines its first CPU and if the next
367 ``rcu_node`` structure has no online CPUs (or, alternatively, if the
368 leftmost ``rcu_node`` structure offlines its last CPU and if the next
369 ``rcu_node`` structure has no online CPUs).
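
The lines matched above describe the first ``rcu_gp_init()`` pass: bitmasks are updated from the set of online CPUs, only the leaf ``rcu_node`` structures are scanned in the common case, and the tree is climbed only when a leaf's count of online CPUs transitions to or from zero. A minimal sketch of that shape, assuming a toy two-level tree with an invented ``online_mask`` field and the per-structure locking omitted (the real code holds each structure's ``->lock`` while updating it)::

    #include <stdbool.h>

    #define NUM_LEAVES 4

    struct rcu_node {
            unsigned long qsmaskinit;  /* groups/CPUs the next GP waits on */
            unsigned long online_mask; /* toy stand-in kept by hotplug code */
    };

    static struct rcu_node root;
    static struct rcu_node leaves[NUM_LEAVES];

    /* Scan only the leaves, folding each leaf's online mask into
     * ->qsmaskinit.  The root is touched only when a leaf's set of
     * online CPUs transitions to or from empty. */
    static void gp_init_first_pass(void)
    {
            for (int i = 0; i < NUM_LEAVES; i++) {
                    bool was_empty = !leaves[i].qsmaskinit;

                    leaves[i].qsmaskinit = leaves[i].online_mask;
                    if (was_empty && leaves[i].qsmaskinit)
                            root.qsmaskinit |= 1UL << i;    /* first CPU onlined */
                    else if (!was_empty && !leaves[i].qsmaskinit)
                            root.qsmaskinit &= ~(1UL << i); /* last CPU offlined */
            }
    }
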
373 The final ``rcu_gp_init()`` pass through the ``rcu_node`` tree traverses
374 breadth-first, setting each ``rcu_node`` structure's ``->gp_seq`` field
385 ``rcu_node`` structure's ``->gp_seq`` field, each CPU's observation of
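
These matches cover the final ``rcu_gp_init()`` pass, which walks the tree breadth-first, publishing the new grace-period number into each structure's ``->gp_seq`` under that structure's ``->lock``; CPUs then notice the new grace period by sampling their own leaf's ``->gp_seq``. In a toy two-level tree, breadth-first order reduces to the root followed by each leaf, so a simplified sketch (again not the kernel's code) might read::

    #include <pthread.h>

    #define NUM_LEAVES 4

    struct rcu_node {
            pthread_mutex_t lock;
            unsigned long gp_seq;   /* toy grace-period number */
    };

    static struct rcu_node root;
    static struct rcu_node leaves[NUM_LEAVES];

    /* Breadth-first pass: root first, then each leaf, setting
     * ->gp_seq under the corresponding ->lock so that each CPU's
     * later read of its leaf's ->gp_seq is properly ordered. */
    static void gp_init_final_pass(unsigned long new_gp_seq)
    {
            pthread_mutex_lock(&root.lock);
            root.gp_seq = new_gp_seq;
            pthread_mutex_unlock(&root.lock);

            for (int i = 0; i < NUM_LEAVES; i++) {
                    pthread_mutex_lock(&leaves[i].lock);
                    leaves[i].gp_seq = new_gp_seq;
                    pthread_mutex_unlock(&leaves[i].lock);
            }
    }
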
405 | its last ``rcu_gp_init()`` pass through its leaf ``rcu_node`` |
422 ``rcu_node`` tree only until they encountered an ``rcu_node`` structure
425 that ``rcu_node`` structure's ``->lock``.
430 its leaf ``rcu_node`` lock. Therefore, all execution shown in this
467 traverses up the ``rcu_node`` tree as shown at the bottom of the
468 diagram, clearing bits from each ``rcu_node`` structure's ``->qsmask``
471 Note that traversal passes upwards out of a given ``rcu_node`` structure
473 subtree headed by that ``rcu_node`` structure. A key point is that if a
474 CPU's traversal stops at a given ``rcu_node`` structure, then there will
476 proceeds upwards from that point, and the ``rcu_node`` ``->lock``
480 CPU traverses through the root ``rcu_node`` structure, the “last CPU”
481 being the one that clears the last bit in the root ``rcu_node``
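
The fragments above describe how quiescent-state reports climb the ``rcu_node`` tree: each level clears a bit in ``->qsmask``, the climb stops as soon as any other bit remains set (the later report that clears that structure's last bit resumes the climb from there), and only the CPU that clears the last bit in the root structure ends the grace period. A compact sketch of that loop, with pthread mutexes standing in for the full-ordering lock wrapper::

    #include <pthread.h>

    struct rcu_node {
            pthread_mutex_t lock;
            unsigned long qsmask;    /* CPUs/children still to report */
            unsigned long grpmask;   /* this node's bit in parent's ->qsmask */
            struct rcu_node *parent; /* NULL at the root */
    };

    /* Report a quiescent state for `mask` at `rnp`, then climb.
     * Stopping while other bits remain set is safe: whichever CPU
     * clears this structure's last bit continues upward from here,
     * and the chain of ->lock acquisitions orders the traversals. */
    static void report_qs(struct rcu_node *rnp, unsigned long mask)
    {
            for (;;) {
                    pthread_mutex_lock(&rnp->lock);
                    rnp->qsmask &= ~mask;
                    if (rnp->qsmask) {      /* subtree not yet quiescent */
                            pthread_mutex_unlock(&rnp->lock);
                            return;
                    }
                    mask = rnp->grpmask;    /* bit to clear one level up */
                    pthread_mutex_unlock(&rnp->lock);
                    if (!rnp->parent)
                            break;          /* last bit at the root */
                    rnp = rnp->parent;
            }
            /* Here the grace-period kthread would be awakened for cleanup. */
    }
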
496 while holding the corresponding CPU's leaf ``rcu_node`` structure's
520 ``rcu_node`` structure's ``->lock`` and update this structure's
540 ``rcu_node`` structures, and if there are no new quiescent states due to
546 reaches an ``rcu_node`` structure that has quiescent states outstanding
553 | ``rcu_node`` structure, which means that there are still CPUs |
570 Grace-period cleanup first scans the ``rcu_node`` tree breadth-first
600 Once a given CPU's leaf ``rcu_node`` structure's ``->gp_seq`` field has
618 its leaf ``rcu_node`` structure's ``->lock`` before invoking callbacks,
629 running on a CPU corresponding to the leftmost leaf ``rcu_node``
631 the rightmost leaf ``rcu_node`` structure, and the grace-period kernel
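
The closing matches are from the grace-period cleanup and callback-invocation discussion: cleanup again sweeps the ``rcu_node`` tree breadth-first updating ``->gp_seq``, and a CPU may invoke callbacks only after acquiring its leaf's ``->lock`` and observing there that the awaited grace period has ended. A sketch of that readiness check, assuming a toy monotonic ``gp_seq`` (the kernel's real sequence numbers require wraparound-safe comparisons)::

    #include <pthread.h>
    #include <stdbool.h>

    struct rcu_node {
            pthread_mutex_t lock;
            unsigned long gp_seq;   /* toy monotonic grace-period counter */
    };

    /* Callbacks queued to wait for grace period `wait_gp_seq` become
     * invocable once this CPU's leaf shows that grace period as
     * complete.  Acquiring the leaf ->lock first lets the
     * full-ordering lock chain order callback invocation after the
     * grace period's memory effects. */
    static bool callbacks_ready(struct rcu_node *leaf, unsigned long wait_gp_seq)
    {
            bool ready;

            pthread_mutex_lock(&leaf->lock);
            ready = leaf->gp_seq >= wait_gp_seq;
            pthread_mutex_unlock(&leaf->lock);
            return ready;
    }
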