Explanation of the Linux-Kernel Memory Consistency Model

  7. THE PROGRAM ORDER RELATION: po AND po-loc
 10. THE READS-FROM RELATION: rf, rfi, and rfe
 12. THE FROM-READS RELATION: fr, fri, and fre
 14. PROPAGATION ORDER RELATION: cumul-fence
 20. THE HAPPENS-BEFORE RELATION: hb
 21. THE PROPAGATES-BEFORE RELATION: pb
 22. RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb

INTRODUCTION
------------

The Linux-kernel memory consistency model (LKMM) is rather complex and
obscure.  This is particularly evident if you read through the
linux-kernel.bell and linux-kernel.cat files that make up the formal
version of the model; they are extremely terse and their meanings are
far from clear.

BACKGROUND
----------

For code running on a uniprocessor system, a memory model's
predictions are easy: Each load simply obtains the value most recently
stored to the memory location it accesses.  (This glosses over
complicating factors such as DMA and mixed-size accesses.)  But on
multiprocessor systems, with multiple CPUs making concurrent accesses
to shared memory locations, things aren't so simple.

A SIMPLE EXAMPLE
----------------

Here is a simple example to illustrate the basic concepts.  Consider
some code running as part of a device driver for an input device.  The
driver might contain an interrupt handler which collects data from the
device, stores it in a buffer, and then sets a flag to indicate the
buffer is full.  Running concurrently on a different CPU might be a
part of the driver code being executed to read from the device.  This
code tests the flag to see whether the buffer is ready, and if it is,
copies the data back to userspace.  The buffer and the flag are memory
locations shared between the two CPUs.

We can abstract out the important pieces of the driver code as follows
(the reason for using WRITE_ONCE() and READ_ONCE() instead of simple
assignment statements is discussed later):

	int buf = 0, flag = 0;

	P0()
	{
		WRITE_ONCE(buf, 1);
		WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1;
		int r2 = 0;

		r1 = READ_ONCE(flag);
		if (r1)
			r2 = READ_ONCE(buf);
	}

Here the P0() function represents the interrupt handler running on one
CPU and P1() represents the read() routine running on another.  The
value 1 stored in buf represents input data collected from the device,
and the value 1 stored in flag indicates that the buffer is full.

This pattern of memory accesses, where one CPU stores values to two
shared memory locations and another CPU loads from those locations in
the opposite order, is widely known as the "Message Passing" or MP
pattern.

A memory model will predict what values P1 might obtain for its loads
from flag and buf, or equivalently, what values r1 and r2 might end up
with after the code has finished running.  Some predictions are
trivial: for instance, no sane memory model would
predict that r1 = 42 or r2 = -7, because neither of those values ever
gets stored in flag or buf.
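
The interesting question is whether P1 can end up with r1 = 1 and
r2 = 0.  As a preview (this fenced variant is an illustration added
here, not part of the original example), the usual way to rule that
outcome out is to add barriers to both processes:

	P0()
	{
		WRITE_ONCE(buf, 1);
		smp_wmb();	/* order the two stores */
		WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1;
		int r2 = 0;

		r1 = READ_ONCE(flag);
		smp_rmb();	/* order the two loads */
		if (r1)
			r2 = READ_ONCE(buf);
	}

With both fences present the LKMM forbids r1 = 1 && r2 = 0; the
reasons why are worked out over the course of this document.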

A SELECTION OF MEMORY MODELS
----------------------------

The first widely cited memory model, and the simplest to understand,
is Sequential Consistency.  According to this model, systems behave as
if each CPU executed its instructions in order but with unspecified
timing.  In other words, the instructions from the various CPUs get
interleaved in a nondeterministic way, always according to some single
global order that agrees with the order of the instructions in the
program source for each CPU.  The model says that the value obtained
by each load is simply the value written by the most recently executed
store to the same memory location, from any CPU.

Real computer hardware almost never follows the Sequential Consistency
memory model.  The most prominent counterexample is the Store
Buffering (SB) pattern, in which
each CPU stores to its own shared location and then loads from the
other CPU's location:

	int x = 0, y = 0;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(y);
	}

	P1()
	{
		int r1;

		WRITE_ONCE(y, 1);
		r1 = READ_ONCE(x);
	}

Under Sequential Consistency at least one of the two loads must obtain
1, yet on real hardware (and in the LKMM, absent any fences) it is
quite possible for both r0 and r1 to end up equal to 0.

ORDERING AND CYCLES
-------------------

Memory models are all about ordering, and a fundamental proof
technique used throughout this document is the cycle argument: an
ordering cannot contain a cycle, so if analysis of a proposed
execution produces one, that execution is impossible.

EVENTS
------

The LKMM does not work directly with the C statements that make up
kernel source code.  Instead it considers the effects of those
statements in a more abstract form, namely, events.  The model
includes three types of events:

	Read events correspond to loads from shared memory, such as
	calls to READ_ONCE(), smp_load_acquire(), or
	rcu_dereference();

	Write events correspond to stores to shared memory, such as
	calls to WRITE_ONCE(), smp_store_release(), or atomic_set();

	Fence events correspond to memory barriers (also known as
	fences), such as calls to smp_rmb() or rcu_read_lock().

Atomic read-modify-write accesses, such as atomic_inc() or xchg(),
correspond to pairs of read and write events linked together.

Other parts of the code, such as arithmetic and
logical computations, control-flow instructions, or accesses to
private memory or CPU registers are not of central interest to the
memory model.  They only affect the model's predictions indirectly.
For example, an arithmetic computation might determine the value that
gets stored to a shared location, but the model
is concerned only with the store itself -- its value and its address
-- not the computation leading up to it.

THE PROGRAM ORDER RELATION: po AND po-loc
-----------------------------------------

The most important relation between events is program order (po).  You
can think of program order as the order in which the machine
instructions are presented to a CPU's execution unit.  Thus, we say
that X is po-before Y (written as "X ->po Y" in formulas) if X occurs
before Y in the instruction stream.

This is inherently a single-CPU relation; two instructions executing
on different CPUs are never linked by po.  Also, it is by definition
an ordering (it cannot contain cycles).

po-loc is a sub-relation of po.  It links two memory accesses when the
first comes before the second in program order and they access the
same memory location (the "-loc" suffix).

There is one more aspect of
program order we need to explain.  The LKMM was inspired by low-level
architectural memory models that describe the behavior of machine
code, and it retains their outlook to a considerable extent.

In the examples above, the r1, r2, etc. variables are private to each
CPU, so they play no direct role in the model.  Private variables
need not even be stored in normal memory at all -- in principle a
private variable could be stored in a CPU register (hence the convention
that these variables have names beginning with the letter "r").

A WARNING
---------

The protections provided by READ_ONCE(), WRITE_ONCE(), and other
marked accesses are not perfect; under some circumstances it is
possible for the compiler to undermine the memory model.  The
dependency relations discussed in the next section are a particularly
fertile source of such problems.

DEPENDENCY RELATIONS: data, addr, and ctrl
------------------------------------------

Dependencies link a read event to a later event whose behavior is
affected by the value the read obtains.  There are three kinds: a data
dependency (data) links a read to a write when the value obtained by
the read affects the value stored by the write; an address dependency
(addr) links a read to any memory access whose target address is
affected by the value obtained by the read; and a control dependency
(ctrl) links a read to any event whose execution is conditional on the
value obtained by the read (because of an intervening conditional
branch).
Dependencies can only go from a load to events that do not
come earlier in program order.  Symbolically, if we have R ->data X,
R ->addr X, or R ->ctrl X (where R is a read event), then we must also
have R ->po X.  It wouldn't make sense for a computation to depend
on a value that has not yet been loaded!

Beware, though: dependencies can be weaker than they appear.  For
example, a compiler is not allowed to eliminate
the load generated for a READ_ONCE() -- that's one of the nice
properties of READ_ONCE() -- but it is allowed to ignore the load's
value whenever it can prove the value makes no difference, thereby
destroying what looked like a dependency in the source code.
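
Here is a small illustration (added here for concreteness; it is not
one of the model's own examples).  Both arms of the conditional store
the same value, so the compiler may delete the branch entirely,
destroying the control dependency from the load to the store:

	r1 = READ_ONCE(x);
	if (r1)
		WRITE_ONCE(y, 1);
	else
		WRITE_ONCE(y, 1);

Once the branch is removed, nothing prevents the CPU from executing
the store to y before the load of x.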

THE READS-FROM RELATION: rf, rfi, and rfe
-----------------------------------------

The reads-from relation (rf) links a write event to a read event when
the value loaded by the read is the value stored by the write.  In
symbols, we
write W ->rf R to indicate that the load R reads from the store W.  We
further distinguish the cases where the load and the store occur on
the same CPU (internal reads-from, or rfi) and where they occur on
different CPUs (external reads-from, or rfe).

For our purposes, a memory location's initial value is treated as
though it had been written there by an imaginary initial store that
executes on a separate CPU before the main program runs.

Usage of the rf relation implicitly assumes that loads and stores are
atomic; in reality this is not always true.  One potential hazard is
of load-tearing, where a load obtains some of its bits from one store
and some of them from another.  Fortunately, use of READ_ONCE()
and WRITE_ONCE() will prevent load-tearing; it's not possible to have:

	int x = 0;

	P0()
	{
		WRITE_ONCE(x, 0x1234);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);
	}

and end up with some mixture such as r1 = 0x1200 (partly from x's
initial value and partly from the value stored by P0).

On the other hand, load-tearing is unavoidable when mixed-size
accesses are involved.  Consider this example:

	union {
		u32	w;
		u16	h[2];
	} x;

	P0()
	{
		WRITE_ONCE(x.h[0], 0x1234);
		WRITE_ONCE(x.h[1], 0x5678);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x.w);
	}

If r1 = 0x56781234 (little-endian!) at the end, then P1 must have read
from both of P0's stores.  It is possible to handle mixed-size and
unaligned accesses in a memory model, but the LKMM currently does not
attempt to do so; it requires all accesses to be properly aligned and
of the location's full size.

CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
------------------------------------------------------------------

Cache coherence is a general principle requiring that in a
multi-processor system, the CPUs must share a consistent view of the
memory contents.  Specifically, it requires that for each location in
shared memory, the stores to that location must form a single global
ordering which all the CPUs agree on (the coherence order), and this
ordering must be consistent with the program order for accesses to
that location.

To put it another way, for any variable x, the coherence order (co) of
the stores to x is simply the order in which the stores overwrite one
another (or, to take a more
hardware-centric view, the order in which the stores get written to
x's cache line).  We write W ->co W' if W comes before W' in the
coherence order, that is, if the value stored by W gets overwritten,
directly or indirectly, by the value stored by W'.

The requirement that coherence order be consistent with program order
takes the form of four coherence rules:

	Write-write coherence: If W ->po-loc W' (i.e., W comes before
	W' in program order and they access the same location), where W
	and W' are two stores, then W ->co W'.

	Write-read coherence: If W ->po-loc R, where W is a store and R
	is a load, then R must read from W or from some other store
	which comes after W in the coherence order.

	Read-write coherence: If R ->po-loc W, where R is a load and W
	is a store, then the store which R reads from must come before
	W in the coherence order.

	Read-read coherence: If R ->po-loc R', where R and R' are two
	loads, then either they read from the same store or else the
	store read by R comes before the store read by R' in the
	coherence order.

This is sometimes referred to as sequential consistency per variable,
because it means that the accesses to any single memory location obey
the rules of the Sequential Consistency memory model.  (According to
Wikipedia, sequential consistency per variable and cache coherence
mean the same thing except that cache coherence includes an extra
requirement that every store eventually becomes visible to every CPU.)

Violations of the coherence rules are deeply counterintuitive.  For
instance, consider:

	int x;

	P0()
	{
		WRITE_ONCE(x, 17);
		WRITE_ONCE(x, 23);
	}

If x ended up equal to 17, you would think your computer was broken.
It would be a violation of the
write-write coherence rule: Since the store of 23 comes later in
program order, it must also come later in the coherence order and thus
must overwrite the store of 17.

Similarly, consider:

	int x = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(x);
		WRITE_ONCE(x, 666);
	}

If r1 = 666 at the end, this would violate the read-write coherence
rule: The load comes before the store of 666 in program order, so it
must not read from that store or any store coming after it in the
coherence order.

And consider:

	int x = 0;

	P0()
	{
		WRITE_ONCE(x, 5);
	}

	P1()
	{
		int r1, r2;

		r1 = READ_ONCE(x);
		r2 = READ_ONCE(x);
	}

If r1 = 5 (reading from P0's store) and r2 = 0 (reading from the
imaginary store that establishes x's initial value) at the end, this
would violate the read-read coherence rule: The r1 load comes before
the r2 load in program order, so it must not read from a store that
comes later in the coherence order than the store read by r2.

(As a minor curiosity, if P1 had used ordinary loads instead of
READ_ONCE(), on Itanium this outcome really could occur!  It results
from the parallel execution of the operations
encoded in Itanium's Very-Long-Instruction-Word format, and it is yet
another reason for using READ_ONCE() when accessing shared memory
locations.)

Just like the po relation, co is inherently an ordering -- it is not
possible for a store to directly or indirectly overwrite itself!  And
just as with the rf relation, we distinguish between stores that
occur on the same CPU (internal coherence order, or coi) and stores
that occur on different CPUs (external coherence order, or coe).

On the other hand, stores to different memory locations are never
related by co, just as instructions on different CPUs are never
related by po.  Coherence order is strictly per-location, or if you
prefer, each location has its own independent coherence order.

THE FROM-READS RELATION: fr, fri, and fre
-----------------------------------------

The from-reads relation (fr) can be a little difficult for people to
grok.  It describes the situation where a load reads a value that gets
overwritten by a store.  In other words, we have R ->fr W when the
value that R reads is overwritten (directly or indirectly) by W.

As with rf and co, we distinguish between fri (when
the load and the store are on the same CPU) and fre (when they are on
different CPUs).

Note that the fr relation is determined entirely by the rf and co
relations; it is not independent.  Given a read event R and a write
event W for the same location, we will have R ->fr W if and only if
the write which R reads from is co-before W.  In symbols,

	(R ->fr W) := (there exists W' with W' ->rf R and W' ->co W).
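
For a minimal illustration (added here; not one of the document's own
examples), consider one load racing with one store:

	int x = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(x);
	}

	P1()
	{
		WRITE_ONCE(x, 2);
	}

In any execution where r1 = 0, the load reads from the imaginary
initial store, which is co-before P1's store; hence the load is
related to P1's store by fre.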

AN OPERATIONAL MODEL
--------------------

The LKMM is based on an operational memory model, an abstract picture
of how a computer system runs code.  The system is divided into the
CPUs, which execute instructions (not necessarily in program order),
and a memory subsystem with which the CPUs communicate.
For the most part, executing an instruction requires a CPU to perform
only internal operations.  However, loads, stores, and fences involve
more.

When CPU C executes a store instruction, it tells the memory subsystem
to store a certain value at a certain location.  The memory subsystem
propagates the store to all the other CPUs as well as to RAM.  (As a
special case, we say that the store propagates to its own CPU at the
time it is executed.)  The memory subsystem also determines where the
store falls in the location's coherence order.  In particular, it must
arrange for the store to be co-later than (i.e., to overwrite) any
other store to the same location which has already propagated to CPU C.

When a CPU executes a load instruction R, it first checks to see
whether there are any as-yet unexecuted store instructions, for the
same location, that come before R in program order.  If there are, it
uses the value of the po-latest such store as the value obtained by R,
and we say that the store's value is forwarded to R.  Otherwise, the
CPU asks the memory subsystem for the value to load and we say that R
is satisfied from memory.  The memory subsystem hands back the value
of the co-latest store to the location in question which has already
propagated to that CPU.

(In fact, the picture needs to be a little more complicated than this.
CPUs have local caches, and propagating a store to a CPU really means
propagating it to the CPU's local cache.  A local cache can take some
time to process the stores that it receives, and a store can't be used
to satisfy one of the CPU's loads until it has been processed.  On
most architectures, the local caches process stores in
First-In-First-Out order, and consequently the processing delay
doesn't matter for the memory model.  But on Alpha, the local caches
have a partitioned design that results in non-FIFO behavior.  We will
discuss this in more detail later.)

Executing a fence (or memory barrier) instruction doesn't require a
CPU to do anything special other than informing the memory subsystem
about the fence.  However, fences do constrain the way CPUs and the
memory subsystem handle other instructions, in two respects.

First, a fence forces the CPU to execute various instructions in
program order.  Exactly which instructions are ordered depends on the
type of fence:

	Strong fences, including smp_mb() and synchronize_rcu(), force
	the CPU to execute all po-earlier instructions before any
	po-later instructions;

	smp_rmb() forces the CPU to execute all po-earlier loads
	before any po-later loads;

	smp_wmb() forces the CPU to execute all po-earlier stores
	before any po-later stores;

	Acquire fences, such as smp_load_acquire(), force the CPU to
	execute the load associated with the fence (e.g., the load
	part of an smp_load_acquire()) before any po-later
	instructions;

	Release fences, such as smp_store_release(), force the CPU to
	execute all po-earlier instructions before the store
	associated with the fence (e.g., the store part of an
	smp_store_release()).

Second, some types of fence affect the way the memory subsystem
propagates stores.  When a fence instruction is executed on CPU C:

	For each other CPU C', smp_wmb() forces all po-earlier stores
	on C to propagate to C' before any po-later stores do.

	For each other CPU C', any store which propagates to C before
	a release fence is executed (including all po-earlier
	stores executed on C) is forced to propagate to C'
	before the store associated with the release fence does.

	Any store which propagates to C before a strong fence is
	executed (including all po-earlier stores on C) is forced to
	propagate to all other CPUs before any instructions po-after
	the strong fence are executed on C.

The propagation ordering enforced by release fences and strong fences
affects stores from other CPUs that propagate to CPU C before the
fence is executed, as well as stores that are executed on C before the
fence.  We describe this property by saying that release fences and
strong fences are A-cumulative.  By contrast, smp_wmb() fences are not
A-cumulative; they only affect the propagation of stores that are
executed on C before the fence (i.e., those which precede the fence in
program order).
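
To see A-cumulativity in action, here is a sketch (an illustration
added at this point, in the style of the kernel's litmus tests) of a
write-to-read causality pattern with a release fence:

	int x = 0, y = 0;

	P0()
	{
		WRITE_ONCE(x, 1);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);
		smp_store_release(&y, 1);
	}

	P2()
	{
		int r2, r3;

		r2 = READ_ONCE(y);
		smp_rmb();
		r3 = READ_ONCE(x);
	}

If r1 = 1 then P0's store propagated to P1 before the release fence
executed, so A-cumulativity forces it to propagate to P2 before the
store to y does.  Consequently r1 = 1 && r2 = 1 && r3 = 0 is
forbidden.  An smp_wmb() in P1 would not give this guarantee, because
P0's store is not po-earlier than the fence.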

PROPAGATION ORDER RELATION: cumul-fence
---------------------------------------

The fences which affect propagation order (i.e., strong, release, and
smp_wmb() fences) are collectively referred to as cumul-fences, even
though smp_wmb() isn't A-cumulative.  The cumul-fence relation is
defined to link memory access events E and F whenever:

	E and F are both stores on the same CPU and an smp_wmb() fence
	event occurs between them in program order; or

	F is a release fence and some X comes before F in program order,
	where either X = E or else E ->rf X; or

	A strong fence event occurs between some X and F in program
	order, where either X = E or else E ->rf X.

The operational model requires that whenever W and W' are both stores
and W ->cumul-fence W', then W must propagate to any given CPU
before W' does.  However, for different CPUs C and C', it does not
require W to propagate to C before W' propagates to C'.

DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
-------------------------------------------------

The LKMM is derived from the restrictions imposed by the design
outlined above.  These restrictions involve the necessity of
maintaining cache coherence and the fact that a CPU can't operate on a
value before it knows what that value is, among other things.  The
formal version of the LKMM is defined by six requirements, or axioms:

	Sequential consistency per variable: This requires that the
	system obey the four coherency rules.

	Atomicity: This requires that atomic read-modify-write
	operations really are atomic, that is, no other stores can
	sneak into the middle of such an update.

	Happens-before: This requires that certain instructions are
	executed in a specific order.

	Propagation: This requires that certain stores propagate to
	CPUs and to RAM in a specific order.

	Rcu: This requires that RCU read-side critical sections and
	grace periods obey the rules of RCU, in particular, the
	Grace-Period Guarantee.

	Plain-coherence: This requires that plain memory accesses
	(those not using READ_ONCE(), WRITE_ONCE(), etc.) must obey
	the operational model's rules regarding cache coherence.

The first and second are quite common; they can be found in many
memory models (such as those for C11/C++11).  The "happens-before" and
"propagation" axioms have analogs in other memory models as well.  The
"rcu" and "plain-coherence" axioms are specific to the LKMM.

SEQUENTIAL CONSISTENCY PER VARIABLE
-----------------------------------

According to the principle of cache coherence, the stores to any fixed
shared location in memory form a global ordering.  We can imagine
inserting the loads from that location into this ordering, by placing
each load between the store that it reads from and the following
store.  This leaves the relative positions of loads that read from the
same store unspecified; let's say they are inserted in program order,
first for CPU 0, then CPU 1, etc.

You can check that the four coherency rules imply that the rf, co, fr,
and po-loc relations agree with this global ordering; in other words,
whenever we have X ->rf Y or X ->co Y or X ->fr Y or X ->po-loc Y, the
X event comes before the Y event in the global ordering.  The LKMM's
"coherence" axiom expresses this by requiring the union of these
relations not to have any cycles.  This means it must not be possible
to find events

	X0 -> X1 -> X2 -> ... -> Xn -> X0,

where each of the links is either rf, co, fr, or po-loc.  This has to
hold if the accesses to the fixed memory location can be ordered as
described above.
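
As a tiny illustration (added here), the coherence axiom is what
forbids a load from reading a po-later store to the same variable:

	int x = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(x);
		WRITE_ONCE(x, 1);
	}

An execution with r1 = 1 would require W ->rf R together with
R ->po-loc W, a two-link cycle in the union of the four relations, so
the model rules it out.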

ATOMIC UPDATES: rmw
-------------------

What does it mean to say that a read-modify-write (rmw) update, such
as atomic_inc(&x), is atomic?  It means that the memory location (x in
this case) does not get altered between the read and the write events
making up the atomic operation.  In particular, if two CPUs perform
atomic_inc(&x) concurrently, it must never be the case that both
operations read the same old value.  If x is initially 13, the
following sequence of events must not occur:

	CPU 0 loads x obtaining 13;
	CPU 1 loads x obtaining 13;
	CPU 0 stores 14 to x;
	CPU 1 stores 14 to x;

where the final value of x is wrong (14 rather than 15).

In this example, CPU 0's increment effectively gets lost because it
occurs in between CPU 1's load and store.  To put it another way, the
problem is that the position of CPU 0's store in x's coherence order
is between the store that CPU 1 reads from and the store that CPU 1
performs.

The same analysis applies to all atomic updates.  If R and W are the
read and write events making up an
atomic read-modify-write and W' is the write event which R reads from,
the LKMM requires that W must come immediately after W' in the
coherence order; no other store to the same location may come between
them.  Equivalently,

	(R ->rmw W) implies (there is no X with R ->fr X and X ->co W),

where the rmw relation links the read and write events making up each
atomic update.  This is what the LKMM's "atomic" axiom says.
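
In litmus-test form (a sketch added here), the axiom is what
guarantees that concurrent increments never get lost:

	atomic_t x = ATOMIC_INIT(13);

	P0()
	{
		atomic_inc(&x);
	}

	P1()
	{
		atomic_inc(&x);
	}

The final value of x must be 15.  An execution ending with x = 14
would need some store X with R ->fr X and X ->co W for one of the two
updates, which the axiom forbids.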

THE PRESERVED PROGRAM ORDER RELATION: ppo
-----------------------------------------

There are many situations where a CPU is obliged to execute two
instructions in program order.  We amalgamate them into the ppo (for
"preserved program order") relation, which links the po-earlier
instruction to the po-later instruction and is thus a sub-relation of
po.

Many of these situations have already come up in the operational
model.  Suppose X and Y are
memory accesses with X ->po Y; then the CPU must execute X before Y if
any of the following hold:

	A strong (smp_mb() or synchronize_rcu()) fence occurs between
	X and Y;

	X and Y are both stores and an smp_wmb() fence occurs between
	them;

	X and Y are both loads and an smp_rmb() fence occurs between
	them;

	X is also an acquire fence, such as smp_load_acquire();

	Y is also a release fence, such as smp_store_release();

	X and Y are both loads, X ->addr Y (i.e., there is an address
	dependency from X to Y), and X is a READ_ONCE() or an atomic
	access.

Dependencies can also cause instructions to be executed in program
order.  This is uncontroversial when the second instruction is a
store; either a data, address, or control dependency from a load R to
a store W will force the CPU to execute R before W.  This is very
simply because the CPU cannot tell the memory subsystem about W's
store before it knows what value should be stored (in the case of a
data dependency), what location it should be stored into (in the case
of an address dependency), or whether the store should actually take
place (in the case of a control dependency).

Dependencies to load instructions are more problematic.  To begin with,
there is no such thing as a data dependency to a load.  Next, a CPU
has no reason to respect a control dependency to a load, because it
can always satisfy the second load speculatively before the first and
then ignore the result if it turns out that the second load shouldn't
have been executed after all.  And lastly, the real difficulties begin
when we consider address dependencies to loads.

To be fair about it, all Linux-supported architectures do execute
loads in program order if there is an address dependency between them.
After all, a CPU cannot ask the memory subsystem to load a value from
a particular location before it knows what that location is.  However,
the split-cache design used by Alpha can cause it to behave in a way
that looks as if the loads were executed out of order (see the next
section for more details).

There is one more variety of dependency-based ordering to discuss.
Suppose a dependency links a load R to a
store W, and a second, po-later load R' reads from that store:

	R ->dep W ->rfi R',

where "dep" stands for any of the three dependency relations.  In
this situation we know it is possible for the CPU to execute R' before
W, because it can forward the value that W will store to R'.  But it
cannot execute R' before R, because it cannot forward the value before
it knows what that value is, or that W and R' do access the same
location.  However, if there is merely a control dependency between R
and W then the CPU can speculatively forward W to R' before executing
R; if the speculation turns out to be wrong then the CPU merely has to
restart or abandon R'.

(In theory, a CPU might forward a store to a load when it runs across
an address dependency like this:

	r1 = READ_ONCE(ptr);
	WRITE_ONCE(*r1, 17);
	r2 = READ_ONCE(*r1);

because it could tell that the store and the second load access the
same location even before it knows what the location is.  However,
none of the architectures supported by the Linux kernel do this.)

Two memory accesses of the same location must always be executed in
program order if the second access is a store.  Thus, if we have

	R ->po-loc W

(the po-loc link says that R comes before W in program order and they
access the same location), the CPU is obliged to execute W after R.
If it executed W first then the memory subsystem would respond to R's
read request with the value stored by W (or an even later store), in
violation of the read-write coherence rule.  Similarly, if we had

	W ->po-loc W'

and the CPU executed W' before W, then the memory subsystem would put
W' ahead of W in the coherence order.  It would effectively cause W to
overwrite W', in violation of the write-write coherence rule.
(Interestingly, an early ARMv8 memory model, now obsolete, proposed
allowing out-of-order writes like this to occur.  The model avoided
violating the write-write coherence rule by requiring the CPU not to
send the W write to the memory subsystem at all!)
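
As a quick illustration of dependency ordering (an example added here,
in the style of the classic "load buffering" tests), control
dependencies to stores are enough to forbid a causal loop:

	int x = 0, y = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(x);
		if (r1)
			WRITE_ONCE(y, 1);
	}

	P1()
	{
		int r2;

		r2 = READ_ONCE(y);
		if (r2)
			WRITE_ONCE(x, 1);
	}

Each store is ordered after its CPU's load by a control dependency,
and each load can obtain 1 only after the other CPU's store executes
and propagates over.  An execution with r1 = 1 && r2 = 1 would
therefore require each load to execute after the other CPU's store, a
cycle of execution order, which is impossible.  (The happens-before
relation introduced below formalizes this argument.)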

AND THEN THERE WAS ALPHA
------------------------

As mentioned above, the Alpha architecture is unique in that it does
not appear to respect address dependencies to loads.  Code such as the
following:

	int x = 0;
	int y = -1;
	int *ptr = &y;

	P0()
	{
		WRITE_ONCE(x, 1);
		smp_wmb();
		WRITE_ONCE(ptr, &x);
	}

	P1()
	{
		int *r1;
		int r2;

		r1 = ptr;
		r2 = READ_ONCE(*r1);
	}

can malfunction on Alpha systems (notice that P1 uses an ordinary load
to read ptr instead of READ_ONCE()).  It is quite possible that
r1 = &x and r2 = 0 at the end, in spite of the address dependency.

At first glance this doesn't seem to make sense.  The smp_wmb() forces
P0's store to x to propagate to P1 before the store to ptr does, so if
r1 = &x then the load of *r1 ought to obtain 1.  The explanation lies
in Alpha's partitioned local caches: even though the two stores reach
P1's cache in the right order, the x = 1
value may not become available for P1's CPU to read until after the
ptr = &x value does, because the two locations are handled by
different, independently-operating partitions of the cache.

To guarantee correct behavior, Alpha needs a special fence after the
load of ptr; in the Linux kernel it is supplied by READ_ONCE() itself,
which is one reason P1's plain load above is buggy.  The
effect of the fence is to cause the CPU not to execute any po-later
instructions until after the local cache has finished processing all
of the stores it has already received.

The LKMM requires that smp_rmb(), acquire fences, and strong fences
share this property: They do not allow the CPU to execute any po-later
instructions (or po-later loads in the case of smp_rmb()) until all
outstanding stores have been processed by the local cache.  In the
case of a strong fence, the CPU first has to wait for all of its
po-earlier stores to propagate to every other CPU in the system; then
it has to wait for the local cache to process all the stores received
as of that time -- not just the stores received when the strong fence
began.

THE HAPPENS-BEFORE RELATION: hb
-------------------------------

The happens-before relation (hb) links memory accesses that have to
execute in a certain order.  hb includes the ppo relation and two
others, one of which is rfe.

W ->rfe R implies that W and R are on different CPUs.  It also means
that W's store must have propagated to R's CPU before R executed;
otherwise R could not have read the value stored by W.  Therefore W
must have executed before R, and so we have W ->hb R.

The equivalent fact need not hold if W ->rfi R (i.e., W and R are on
the same CPU).  As we have already seen, the operational model allows
W's value to be forwarded to R in such cases, meaning that R may well
execute before W does.

It's important to realize that neither coe nor fre is included in hb,
despite their similarity to rfe.  If we have
W ->coe W', this means that W and W' are stores to the same location,
they execute on different CPUs, and W' overwrites W; but it does not
mean that W has to execute before W'.  The decision as to which store
comes first in the coherence order is made later, by the memory
subsystem.  Similarly,
R ->fre W means that W overwrites the value which R reads, but it
doesn't mean that W has to execute after R.  All that's necessary is
for the memory subsystem not to propagate W to R's CPU until after R
has executed, which is possible if W executes on a different CPU.

The third relation included in hb is like ppo, in that it only links
events that are on the same CPU.  However it is more difficult to
explain, because it arises indirectly from the requirement that stores
propagate in a certain order.  The relation is called prop, and it
links two events
on CPU C in situations where a store from some other CPU comes after
the first event in the coherence order and propagates to C before the
second event executes.
As an example of prop in action, consider:

	int x = 0, y = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(y);
		WRITE_ONCE(x, r1);
	}

	P1()
	{
		WRITE_ONCE(x, 2);
		smp_wmb();
		WRITE_ONCE(y, 1);
	}

Can r1 = 1 with x = 2 at the end?  If so, P0's store to x would be
coe-before P1's store, the smp_wmb() would link P1's store to x by a
cumul-fence to P1's store to y, and that store would be linked by rfe
to P0's load.  This chain makes a prop link from P0's store to P0's
load, and since the two events are on the same CPU, an hb link.  But
the data dependency from the load to the store gives an hb link in
the other direction, forming a cycle.  Since hb cannot contain a
cycle, the
outcome is impossible -- as it should be.

The formal definition of the prop relation involves a coe or fre link,
followed by an arbitrary number of cumul-fence links, ending with an
rfe link.  Here is a fancier example, with a coe link
followed by two cumul-fences and an rfe link, utilizing the fact that
release fences are A-cumulative:

	int x, y, z;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(z);
	}

	P1()
	{
		WRITE_ONCE(x, 2);
		smp_wmb();
		WRITE_ONCE(y, 1);
	}

	P2()
	{
		int r2;

		r2 = READ_ONCE(y);
		smp_store_release(&z, 1);
	}

If x = 2, r0 = 1, and r2 = 1 after this code runs then there is a prop
link from P0's store to its load.  This is because P0's store gets
overwritten by P1's store (a coe link), the smp_wmb() ensures that
P1's store to x propagates to P2 before the
store to y does (the first cumul-fence), the store to y propagates to P2
before P2's load and store execute, P2's smp_store_release()
guarantees that the stores to x and y both propagate to P0 before the
store to z does (the second cumul-fence), and P0's load executes after the
store to z has propagated to P0 (an rfe link).

In summary, the fact that the hb relation links memory access events
in the order they execute means that it cannot have cycles.  This
requirement is the content of the LKMM's "happens-before" axiom.

THE PROPAGATES-BEFORE RELATION: pb
----------------------------------

The propagates-before (pb) relation capitalizes on the special
features of strong fences.  It links two events E and F whenever some
store is coherence-later than E and propagates to every CPU and to RAM
before F executes.  The formal definition requires that E be linked to
F via a coe or fre link, an arbitrary number of cumul-fences, an
optional rfe link, a strong fence, and an arbitrary number of hb
links.  Let's see how this definition works out.

Consider first the case where E is a store (implying that the sequence
of links begins with coe).  Then there are events W, X, Y, and Z such
that:

	E ->coe W ->cumul-fence* X ->rfe? Y ->strong-fence Z ->hb* F,

where the * suffix indicates an arbitrary number of links of the
specified type, and the ? suffix indicates the link is optional (Y may
be equal to X).  Because of the cumul-fence links, we know that W will
propagate to Y's CPU before X does, hence before Y executes and hence
before the strong fence executes.  Because the fence is strong, we
know that W will propagate to every CPU and to RAM before Z executes.
And because of the hb links, we know that Z will execute before F.
Thus W, which comes later than E in the coherence order, will
propagate to every CPU and to RAM before F executes.

The case where E is a load is exactly the same, except that the first
link in the sequence is fre instead of coe.

The existence of a pb link from E to F implies that E must execute
before F.  To see why, suppose that F executed first.  Then W would
have propagated to E's CPU before E executed.  If E was a store, the
memory subsystem would then be forced to make E come after W in the
coherence order, contradicting the fact that E ->coe W.  If E was a
load, the memory subsystem would then be forced to satisfy E's read
request with the value stored by W or an even later store,
contradicting the fact that E ->fre W.

A good example illustrating how pb works is the SB pattern with strong
fences:

	int x = 0, y = 0;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		smp_mb();
		r0 = READ_ONCE(y);
	}

	P1()
	{
		int r1;

		WRITE_ONCE(y, 1);
		smp_mb();
		r1 = READ_ONCE(x);
	}

If r0 = 0 at the end then P0's load is fre-before P1's store to y;
that store is po-before P1's strong fence, which is followed by P1's
load.  This gives a pb link from P0's load to P1's load.  By symmetry,
if r1 = 0 as well there would also be a pb link from P1's load back to
P0's load, a cycle.  Since pb cannot have cycles -- this is the
content of the LKMM's "propagation" axiom -- the outcome
r0 = 0 && r1 = 0 is forbidden.

In this example, the sequences of cumul-fence and hb links are empty.
Note that this pb link is not included in hb as an instance of prop,
because it does not start and end on the same CPU.

RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
------------------------------------------------------------------------

RCU (Read-Copy-Update) is a powerful synchronization mechanism.  It
rests on two concepts: grace periods and read-side critical sections.

A grace period is the span of time occupied by a call to
synchronize_rcu().  A read-side critical section (or just critical
section, for short) is the region of code delimited by rcu_read_lock()
at the start and a matching rcu_read_unlock() at the end.  Critical
sections can be nested, although we won't make much use of this fact.

As far as memory models are concerned, RCU's main feature is its
Grace-Period Guarantee, which states that a critical section can never
span a full grace period.  In more detail, the Guarantee says that for
any critical section C and any grace period G, at least one of the
following statements must hold:

(1)	C ends before G does, and in addition, every store that
	propagates to C's CPU before the end of C must propagate to
	every CPU before G ends.

(2)	G starts before C does, and in addition, every store that
	propagates to G's CPU before the start of G must propagate
	to every CPU before C starts.

Here is a simple example of RCU in action:

	int x, y;

	P0()
	{
		rcu_read_lock();
		WRITE_ONCE(x, 1);
		WRITE_ONCE(y, 1);
		rcu_read_unlock();
	}

	P1()
	{
		int r1, r2;

		r1 = READ_ONCE(x);
		synchronize_rcu();
		r2 = READ_ONCE(y);
	}

The Grace Period Guarantee tells us that when this code runs, it is
never the case that r1 = 1 and r2 = 0.  If r1 = 1 then P0's store to x
propagated to P1 before the grace period started, so the grace period
cannot have started before the critical section did (otherwise, by
part (2) of the Guarantee, the store would have propagated to every
CPU before the critical section started -- impossible, since the store
executes inside the critical section).  So part (1) applies: the
critical section ends before the grace period does, and P0's store to
y, having propagated to P0 before the end of the critical section,
must propagate to every CPU before the grace period ends, and in
particular before P1's load of y executes.  Hence r2 = 1.

The requirements that instructions be executed in order and that
stores be forced
to propagate to every CPU are fulfilled by placing strong fences at
suitable places in the RCU-related code.  Thus, if a critical section
starts before a grace period does then the critical section's CPU will
execute an smp_mb() fence sometime during the grace period, and
likewise for the other ordering requirements.

How does the LKMM model all this?  It is natural to begin with the
rcu-link relation.  rcu-link encompasses a very general notion of
"before": Among other things,
E ->rcu-link F includes cases where E is po-before some memory-access
event X, F is po-after some memory-access event Y, and we have any of
X ->rfe Y, X ->co Y, or X ->fr Y.

The formal definition of the rcu-link relation is more than a little
obscure, and we won't give it here.  It is closely related to the pb
relation, and the details don't matter unless you want to comb through
a somewhat lengthy formal proof.  Pretty much all you need to know
about rcu-link is the information in the preceding paragraph.

The LKMM also defines the rcu-gp and rcu-rscsi relations.  They bring
grace periods and read-side critical sections into the picture, in the
following way:

	E ->rcu-gp F means that E and F are in fact the same event,
	and that event is a synchronize_rcu() fence (i.e., a grace
	period);

	E ->rcu-rscsi F means that E and F are the rcu_read_unlock()
	and rcu_read_lock() fence events delimiting some read-side
	critical section.  (The "i" at the end of the name emphasizes
	that this relation is "inverted": it links the end of the
	critical section to the start.)

If we think of the rcu-link relation as standing for an extended
"before", then X ->rcu-gp Y ->rcu-link Z roughly says that X is a
grace period which ends before Z begins.  (In fact it covers more than
this, because it also includes cases where some store propagates to
Z's CPU before Z begins but doesn't propagate to some other CPU until
after X ends.)  Similarly, X ->rcu-rscsi Y ->rcu-link Z says that X is
the end of a critical section which starts before Z begins.

The LKMM goes on to define the rcu-order relation as a sequence of
rcu-gp and rcu-rscsi links separated by rcu-link links, in which the
number of rcu-gp links is >= the number of rcu-rscsi links.  For
example:

	X ->rcu-gp Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

would imply that X ->rcu-order V, because this sequence contains two
rcu-gp links and one rcu-rscsi link.  (It also implies that
X ->rcu-order T and Z ->rcu-order V.)  On the other hand:

	X ->rcu-rscsi Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

does not imply X ->rcu-order V, because the sequence contains only
one rcu-gp link but two rcu-rscsi links.

The rcu-order relation is important because the Grace Period Guarantee
means that rcu-order links act kind of like strong fences.  In
particular, E ->rcu-order F implies not only that E begins before F
ends, but also that any write po-before E will propagate to every CPU
before any instruction po-after F can execute.  (However, it does not
imply that E must execute before F; indeed, each synchronize_rcu()
fence event is linked to itself by rcu-order as a degenerate case.)

To prove this in full generality requires some intellectual effort.
We'll consider just a very simple case:

	G ->rcu-gp W ->rcu-link Z ->rcu-rscsi F.

This formula means that G and W are the same event (a grace period),
and there are events X, Y and a read-side critical section C such that:

	1. G = W is po-before or equal to X;

	2. X comes "before" Y in one of the ways covered by the
	   rcu-link relation (e.g., X ->rfe Y, X ->co Y, or X ->fr Y);

	3. Y is po-before Z;

	4. Z is the rcu_read_unlock() event marking the end of C;

	5. F is the rcu_read_lock() event marking the start of C.

From 1 - 4 we deduce that the grace period G ends before the critical
section C does.  Then part (1) of the Grace Period Guarantee cannot
hold, so part (2) must: G starts before C does, and any write which
executes on G's CPU before G starts -- thereby propagating to
G's CPU before G starts -- must propagate to every CPU before C starts.
In particular, the write propagates to every CPU before F finishes
executing and hence before any instruction po-after F can execute.
This sort of reasoning can be extended to handle all the situations
covered by rcu-order.

The rcu-fence relation is a simple extension of rcu-order.  While
rcu-order only links certain fence events (calls to synchronize_rcu(),
rcu_read_lock(), or rcu_read_unlock()), rcu-fence links any events
that are separated by an rcu-order link.  This is analogous to the way
the strong-fence relation links events that are separated by an
smp_mb() fence event (as mentioned above, rcu-order links act kind of
like strong fences).  Written symbolically, X ->rcu-fence Y means
there are fence events E and F such that:

	X ->po E ->rcu-order F ->po Y.

From the discussion above, we see that this implies not only that X
executes before Y, but also that any write po-before X will propagate
to
every CPU before Y executes.  Thus rcu-fence is sort of a
"super-strong" fence: Unlike the original strong fences (smp_mb() and
synchronize_rcu()), rcu-fence is able to link events on different
CPUs.  (Perhaps this fact should lead us to say that rcu-fence isn't
really a fence at all!)

Finally, the LKMM defines the RCU-before (rb) relation in terms of
rcu-fence.  This is done in essentially the same way as the pb
relation was defined in terms of strong-fence.  We will omit the
details; the end result is that E ->rb F implies E must execute
before F, just as E ->pb F does (and for much the same reasons).

Putting this all together, the LKMM expresses the Grace-Period
Guarantee by requiring that the rb relation does not contain a cycle.
Equivalently, this "rcu" axiom requires that there are no events E
and F with E ->rcu-link F ->rcu-order E.  Or to put it a third way,
the axiom requires that there are no cycles consisting of rcu-gp and
rcu-rscsi alternating with rcu-link, where the number of rcu-gp links
is >= the number of rcu-rscsi links.

Justifying the axiom takes some intellectual effort, but here is a
taste of what is involved.  Suppose the Grace Period Guarantee were
violated in the second way: a critical section starts before a grace
period does, and some
store propagates to the critical section's CPU before the end of the
critical section but doesn't propagate to some other CPU until after
the end of the grace period.

Putting symbols to these ideas, let L and U be the rcu_read_lock() and
rcu_read_unlock() fence events delimiting the critical section in
question, and let S be the synchronize_rcu() fence event for the grace
period.  Saying that the critical section starts before S means there
are events Q and R where Q is po-after L (which marks the start of the
critical section), Q is "before" R in the sense used by the rcu-link
relation, and R is po-before the grace period S.  Thus we have:

	L ->rcu-link S.

Let W be the store mentioned above, let Y come before the end of the
critical section and witness that W propagates to the critical
section's CPU by reading from W, and let Z on some arbitrary CPU be a
witness that W has not propagated to that CPU, where Z happens after
some event X which is po-after S.  Symbolically, this amounts to:

	S ->po X ->hb* Z ->fr W ->rf Y ->po U.

The fr link from Z to W indicates that W has not propagated to Z's CPU
at the time that Z executes.  From this, it can be shown (see the
discussion of the rcu-link relation earlier) that S and U are related
by rcu-link:

	S ->rcu-link U.

Since S is a grace period we have S ->rcu-gp S, and since L and U are
the start and end of the critical section C we have U ->rcu-rscsi L.
From this we obtain:

	S ->rcu-gp S ->rcu-link U ->rcu-rscsi L ->rcu-link S,

a forbidden cycle.  Thus the "rcu" axiom rules out this violation of
the Grace Period Guarantee.

For something a little more down-to-earth, let's see how the axiom
works out in practice.  Consider the RCU example from earlier in this
section, with statement labels added:

	int x, y;

	P0()
	{
		L: rcu_read_lock();
		X: WRITE_ONCE(x, 1);
		Y: WRITE_ONCE(y, 1);
		U: rcu_read_unlock();
	}

	P1()
	{
		int r1, r2;

		Z: r1 = READ_ONCE(x);
		S: synchronize_rcu();
		W: r2 = READ_ONCE(y);
	}

If r2 = 0 at the end then P0's store at Y overwrites the value that
P1's load at W reads from, so we have W ->fre Y.  Since S ->po W and
also Y ->po U, we get S ->rcu-link U.  In addition, S ->rcu-gp S
because S is a grace period.

If r1 = 1 at the end then P1's load at Z reads from P0's store at X,
so we have X ->rfe Z.  Together with L ->po X and Z ->po S, this
yields L ->rcu-link S.  And since L and U are the start and end of a
critical section, we have U ->rcu-rscsi L.

Then U ->rcu-rscsi L ->rcu-link S ->rcu-gp S ->rcu-link U is a
forbidden cycle, so the "rcu" axiom rules out the outcome
r1 = 1 && r2 = 0, just as the Grace Period Guarantee demands.
For contrast, here is an outcome the axiom allows, involving two
critical sections and only one grace period:

	int x, y, z;

	P0()
	{
		int r0;

		L0: rcu_read_lock();
		    r0 = READ_ONCE(x);
		    WRITE_ONCE(y, 1);
		U0: rcu_read_unlock();
	}

	P1()
	{
		int r1;

		    r1 = READ_ONCE(y);
		S1: synchronize_rcu();
		    WRITE_ONCE(z, 1);
	}

	P2()
	{
		int r2;

		L2: rcu_read_lock();
		    r2 = READ_ONCE(z);
		    WRITE_ONCE(x, 1);
		U2: rcu_read_unlock();
	}

If r0 = r1 = r2 = 1 at the end, then reasoning like before shows
that U0 ->rcu-rscsi L0 ->rcu-link S1 ->rcu-gp S1 ->rcu-link U2 ->rcu-rscsi
L2 ->rcu-link U0.  However this cycle is not forbidden, because the
sequence of relations contains fewer instances of rcu-gp (one) than of
rcu-rscsi (two).  Consequently the outcome is allowed by the LKMM.
The following instruction timing diagram shows how it might actually
occur:

P0			P1			P2
--------------------	--------------------	--------------------
rcu_read_lock()
WRITE_ONCE(y, 1)
			r1 = READ_ONCE(y)
			synchronize_rcu() starts
			.			rcu_read_lock()
			.			WRITE_ONCE(x, 1)
r0 = READ_ONCE(x)	.			.
rcu_read_unlock()	.			.
			synchronize_rcu() ends
			WRITE_ONCE(z, 1)
						r2 = READ_ONCE(z)
						rcu_read_unlock()

This requires P0 and P2 to execute their loads and stores out of
program order, but of course they are allowed to do so.  And as you
can see, the Grace Period Guarantee is not violated: the critical
section in P0 both starts and ends before the grace period does, and
the critical section in P2 both starts and ends after it.

Addendum: The LKMM now supports SRCU (Sleepable Read-Copy-Update) in
addition to normal RCU.  The ideas involved are much the same as
above, with new relations srcu-gp and srcu-rscsi added to represent
SRCU grace periods and read-side critical sections.  There is a
restriction on the srcu-gp and srcu-rscsi links that can appear in an
rcu-order sequence (the srcu-rscsi links must be paired with srcu-gp
links belonging to the same srcu_struct structure); otherwise the SRCU
relations participate in the "rcu" axiom just as their RCU
counterparts do.
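
By way of illustration (a sketch added here, using the SRCU API as it
appears in the kernel; note that srcu_read_lock() returns an index
which must be passed to srcu_read_unlock()):

	int x, y;
	struct srcu_struct ss;

	P0()
	{
		int r1, idx;

		idx = srcu_read_lock(&ss);
		r1 = READ_ONCE(x);
		WRITE_ONCE(y, 1);
		srcu_read_unlock(&ss, idx);
	}

	P1()
	{
		int r2;

		r2 = READ_ONCE(y);
		synchronize_srcu(&ss);
		WRITE_ONCE(x, 1);
	}

Because the critical section and the grace period use the same
srcu_struct, the analysis is just like the RCU case: r1 = 1 && r2 = 1
would yield a cycle with one srcu-rscsi link and one srcu-gp link,
which the "rcu" axiom forbids.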

LOCKING
-------

The LKMM includes locking.  In fact, there is special code for locking
in the formal model, added in order to make tools run faster.
However, this special code is intended to be more or less equivalent
to concepts we have already covered.  A spinlock_t variable is treated
the same as an int, and spin_lock(&s) is treated almost the same as:

	while (cmpxchg_acquire(&s, 0, 1) != 0)
		cpu_relax();

This waits until s is equal to 0 and then atomically sets it to 1,
and the read part of the cmpxchg operation acts as an acquire fence.
Similarly, spin_unlock(&s) is treated almost the same as:

	smp_store_release(&s, 0);

The "almost" qualifiers need some explanation.  In the LKMM, the
store-release in a spin_unlock() and the load-acquire which forms the
first half of the atomic rmw update in a spin_lock() or a successful
spin_trylock() -- we can call these things lock-releases and
lock-acquires -- have two properties beyond those of ordinary releases
and acquires.

First, when a lock-acquire reads from or is po-after a lock-release,
the LKMM requires that every instruction po-before the lock-release
must execute before any instruction po-after the lock-acquire.  This
would naturally hold if the acquire read from the release on another
CPU, but the LKMM says
it also holds when they are on the same CPU, even if they access
different lock variables.  For example:

	int x, y;
	spinlock_t s;

	P0()
	{
		int r1, r2;

		spin_lock(&s);
		r1 = READ_ONCE(x);
		spin_unlock(&s);
		spin_lock(&s);
		r2 = READ_ONCE(y);
		spin_unlock(&s);
	}

	P1()
	{
		WRITE_ONCE(y, 1);
		smp_wmb();
		WRITE_ONCE(x, 1);
	}

Here the second spin_lock() is po-after the first spin_unlock(), and
therefore the load of x must execute before the load of y.  Thus we
cannot have r1 = 1 and r2 = 0 at the end (this is an instance of the
MP pattern).

This requirement does not apply to ordinary release and acquire
fences, only to lock-related operations.  For instance, suppose P0()
in the example had been written as:

	P0()
	{
		int r1, r2, r3;

		r1 = READ_ONCE(x);
		smp_store_release(&s, 1);
		r3 = smp_load_acquire(&s);
		r2 = READ_ONCE(y);
	}

Then the CPU would be allowed to forward the s = 1 value from the
smp_store_release() to the smp_load_acquire(), executing the
instructions in the following order:

	r3 = smp_load_acquire(&s);	// Obtains r3 = 1
	r2 = READ_ONCE(y);
	r1 = READ_ONCE(x);
	smp_store_release(&s, 1);	// Value is forwarded

and thus it could load y before x, obtaining r2 = 0 and r1 = 1 at the
end.

Second, when a lock-acquire reads from or is po-after a lock-release,
and some other stores W and W' occur po-before the lock-release and
po-after the lock-acquire respectively, the LKMM requires that W must
propagate to each CPU before W' does.  For example, consider:

	int x, y;
	spinlock_t s;

	P0()
	{
		spin_lock(&s);
		WRITE_ONCE(x, 1);
		spin_unlock(&s);
	}

	P1()
	{
		int r1;

		spin_lock(&s);
		r1 = READ_ONCE(x);
		WRITE_ONCE(y, 1);
		spin_unlock(&s);
	}

	P2()
	{
		int r2, r3;

		r2 = READ_ONCE(y);
		smp_rmb();
		r3 = READ_ONCE(x);
	}

If r1 = 1 at the end then P1's spin_lock() read from P0's
spin_unlock(), so the store to x must propagate to P2 before the store
to y does, and we cannot have r2 = 1 && r3 = 0.  (On the other hand,
if the code in P0 and
P1 had all executed on a single CPU, as in the example before this
one, then the writes would have propagated in order even if the two
critical sections used different lock variables.)

These two special requirements for lock-release and lock-acquire do
not arise from the operational model.  Nevertheless, kernel developers
have come to expect and rely on them because they do hold on all
architectures supported by the Linux kernel, albeit for various
differing reasons.

PLAIN ACCESSES AND DATA RACES
-----------------------------

In the LKMM, memory accesses such as READ_ONCE(x), atomic_inc(&y),
smp_load_acquire(&z), and so on are collectively referred to as
"marked" accesses, because they are all annotated with special
operations of one kind or another.  Ordinary C-language memory
accesses such as x or y = 0 are simply called "plain" accesses.

The difference matters because the compiler has far more freedom to
transform plain accesses than marked accesses: it can fuse them, split
them, duplicate them, or eliminate them outright.  When a plain access
races with an access on another CPU, these transformations can produce
startling results.  For instance, if a pointer is fetched by a plain
load, tested against NULL, and then dereferenced, the compiler may
reload the pointer for the dereference; should the value have changed
in the meantime, the code can crash.  Had the load been marked, the
compiler could not have reloaded the value behind the program's back,
and there
would be no possibility of a NULL-pointer dereference.

A "data race" occurs when there are two memory accesses such that:

1.	they access the same location,

2.	at least one of them is a store,

3.	at least one of them is plain,

4.	they occur on different CPUs (or in different threads of
	execution, such as a task and an interrupt handler running on the
	same CPU), and

5.	they execute concurrently.

In the literature, two accesses are said to "conflict" if they satisfy
1 and 2 above.  We'll go a little farther and say that two accesses
are "race candidates" if they satisfy 1 - 4.  Thus, whether or not two
race candidates actually do race in a given execution depends on
whether they are concurrent.

(For the discussion below we combine hb, pb, and rb into a single
executes-before relation: X ->xb Y means X is linked to Y by some
sequence of hb, pb, and rb links and therefore executes before Y.)

Deciding when two accesses are concurrent takes a little thought.  If
X is a load that reads from a store Y on another CPU, then the memory
subsystem must already have
propagated Y from its own CPU to X's CPU, which won't happen until
some time after Y executes; hence X and Y are not concurrent.  On the
other hand, suppose X is a store and Y is a load that does not read
from X.  Then even if Y executes after X, it may be that X
will propagate to Y's CPU just as Y is executing.  In such a case X
could very well seem to be concurrent with Y.

Therefore when X is a store, for X and Y to be non-concurrent the LKMM
requires more than that X execute before Y; it requires that X
propagate to Y's CPU before Y executes.  (Or vice versa, of course, if
Y executes before X -- then Y must propagate to X's CPU before X
executes if Y is a store.)  This is expressed by the visibility
relation (vis), where X ->vis Y is defined to hold if there is an
intermediate event Z such that:

	X is connected to Z by a possibly empty sequence of
	cumul-fence links followed by an optional rfe link (if none of
	these links are present, X and Z are the same event),

and either:

	Z is connected to Y by a strong-fence link followed by a
	possibly empty sequence of xb links,

or:

	Z is on the same CPU as Y and is connected to Y by a possibly
	empty sequence of xb links (again, if the sequence is empty it
	means Z and Y are the same event).

The motivations behind this definition are straightforward:

	cumul-fence memory barriers force stores that are po-before
	the barrier to propagate to other CPUs before stores that are
	po-after the barrier.

	An rfe link from an event W to an event R says that R reads
	from W, which certainly means that W must have propagated to
	R's CPU before R executed.

	strong-fence memory barriers force stores that are po-before
	the barrier, or that propagate to the barrier's CPU before the
	barrier executes, to propagate to all CPUs before any events
	po-after the barrier can execute.

To see how this works out in practice, consider the MP pattern again,
this time with fences and statement labels, but without the
conditional test:

	int buf = 0, flag = 0;

	P0()
	{
		X: WRITE_ONCE(buf, 1);
		smp_wmb();
		W: WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1;
		int r2 = 0;

		Z: r1 = READ_ONCE(flag);
		smp_rmb();
		Y: r2 = READ_ONCE(buf);
	}

The smp_wmb() memory barrier gives a cumul-fence link from X to W, and
assuming r1 = 1 at the end, there is an rfe link from W to Z.  This
means that the store to buf must propagate from P0 to P1 before Z
executes.  Next, Z and Y are on the same CPU and the smp_rmb() fence
provides an xb link from Z to Y (i.e., it forces Z to execute before
Y).  Therefore we have X ->vis Y: X must propagate to Y's CPU before Y
executes.

An important detail is that all the relations we have defined (ppo,
hb, prop,
cumul-fence, pb, and so on -- including vis) apply only to marked
accesses; plain accesses cannot participate in them directly.  These
relations describe
how instructions are executed by the CPU.  In Linux kernel source
code, however, there is no fixed correspondence between a plain access
and the machine instructions the compiler generates for it.  What
saves us is that the fence primitives contain compiler barriers: if a
fence separates two groups of accesses in the source code, then the
machine instructions
corresponding to the first group of accesses will all end po-before
the fence and those corresponding to the second group will begin
po-after it
-- even if some of the accesses are plain.  (Of course, the CPU may
then execute some of those instructions out of program order, but we
already know how to analyze that.)

Consider once more the MP pattern, this time with the buf accesses
plain:

	int buf = 0, flag = 0;

	P0()
	{
		U: buf = 1;
		smp_wmb();
		X: WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1;
		int r2 = 0;

		Y: r1 = READ_ONCE(flag);
		smp_rmb();
		if (r1) {
			V: r2 = buf;
		}
	}

This program does not contain a data race, even though U and V are
race candidates.  The reasoning goes as follows:

	The smp_wmb() fence in P0 is both a compiler barrier and a
	cumul-fence.  It guarantees that no matter what hash of
	machine instructions the compiler generates for the plain
	access U, all those instructions will be po-before the fence.
	Consequently U's store to buf, however it is carried out at
	the machine level, must propagate to P1 before X's store to
	flag does.

	X and Y are both marked accesses.  Hence an rfe link from X to
	Y is a valid indicator that X propagated to P1 before Y
	executed, i.e., X ->vis Y.  (And if there is no rfe link then
	r1 will be 0, so V will not be executed and ipso facto won't
	race with U.)

	The smp_rmb() fence in P1 is a compiler barrier as well as a
	fence.  It guarantees that all the machine-level instructions
	corresponding to the access V will be po-after the fence, and
	therefore any loads among those instructions will execute
	after the fence does and hence after Y executes.

Thus U's store to buf is forced to propagate to P1 before V's load
executes, so the two accesses cannot be concurrent and there is no
race.

Abstracting from this example: Suppose W and E are race candidates,
with W "before" E.  If W is plain then, instead of requiring
W ->xb* E directly, we would look for a marked access X po-after W and
separated from W by a fence, such that
X ->xb* E.  If E was also a plain access, we would also look for a
marked access Y such that X ->xb* Y, and Y and E are ordered by a
fence.  We would then say that W is
"post-bounded" by X and E is "pre-bounded" by Y.

We would like to use this scheme even when the fence involved orders
only loads or only stores.  If the fence between W and X orders only
loads (smp_rmb()), we say that W is
"r-post-bounded" by X.  Similarly, E would be "r-pre-bounded" or
"w-pre-bounded" by Y, depending on whether E was a store or a load.
To cover accesses that are already marked, we also
say that a marked access pre-bounds and post-bounds itself (e.g., if R
is a marked load then R is both r-pre-bounded and r-post-bounded by
itself).

The need to distinguish between r- and w-bounding raises yet another
issue.  When the source code contains a plain store, the compiler is
allowed to put plain loads of the same location into the object code.
For example, given the source code:

	x = 1;

the compiler is allowed to generate object code equivalent to:

	if (x != 1)
		x = 1;

thereby adding a load (and possibly replacing the store entirely).
For this reason, whenever the LKMM requires a plain store to be
w-pre-bounded or w-post-bounded by a marked access, it also requires
the store to be r-pre-bounded or r-post-bounded, so as to handle cases
where the compiler adds a load.

Incidentally, the other transformation -- augmenting a plain load by
adding in a store to the same location -- is not allowed.  This is
because the compiler cannot know whether any other CPUs might perform
a concurrent load from that location.  Two concurrent loads don't
constitute a race (they can't interfere with each other), but a store
does race with a concurrent load.  Thus adding a store might create a
data race where one did not already exist in the source code.

The LKMM includes a second way to pre-bound plain accesses, in
addition to fences: an address dependency from a marked load.  That
is, in the sequence:

	p = READ_ONCE(ptr);
	r = *p;

the LKMM says that the marked load of ptr pre-bounds the plain load of
*p; the marked load must execute before any of the machine
instructions corresponding to the plain load.  This is a reasonable
stipulation, since after all, the CPU can't perform the load of *p
before it knows what value p will hold.  The assumption is valid only
as long as the compiler does not undermine the address dependency,
however, as we will see next.

Consider the following example, a typical pattern of RCU usage:

	int a = 1, b;
	int *ptr = &a;

	P0()
	{
		b = 2;
		rcu_assign_pointer(ptr, &b);
	}

	P1()
	{
		int *p;
		int r;

		rcu_read_lock();
		p = rcu_dereference(ptr);
		r = *p;
		rcu_read_unlock();
	}

rcu_assign_pointer() performs a store-release, so the plain store to b
is definitely w-post-bounded before the store to ptr, and the two
stores will propagate to P1 in that order.  However, rcu_dereference()
is only equivalent to READ_ONCE(); while it is a marked access, it is
not a fence or compiler barrier.  Hence the only guarantee we have
that the load of ptr in P1 is r-pre-bounded before the load of *p
(thus making the loads non-concurrent with the stores) is the
assumption about address dependencies.

This is a situation where the compiler can undermine the memory model,
and a certain amount of care is required when programming constructs
like this one.  In particular, comparisons between the pointer and
other known addresses can cause trouble.  If you have something like:

	p = rcu_dereference(ptr);
	if (p == &x)
		r = *p;

then the compiler just might generate object code resembling:

	p = rcu_dereference(ptr);
	if (p == &x)
		r = x;

or even:

	rtemp = x;
	p = rcu_dereference(ptr);
	if (p == &x)
		r = rtemp;

which would invalidate the memory model's assumption, since the CPU
could now perform the load of x before the load of ptr (there might be
a control dependency but no address dependency at the machine level).
2392 not need to be w-post-bounded: when it is separated from the other
2393 race-candidate access by a fence. At first glance this may seem
2396 Well, normal fences don't -- but rcu-fence can! Here's an example:
2415 Do the plain stores to y race? Clearly not if P1 reads a non-zero
2417 means that the read-side critical section in P1 must finish executing
2418 before the grace period in P0 does, because RCU's Grace-Period
2422 from the READ_ONCE() to the WRITE_ONCE() gives rise to an rcu-link
2425 This means there is an rcu-fence link from P1's "y = 2" store to P0's
2429 isn't w-post-bounded by any marked accesses.

Putting all this material together yields the LKMM's rules for race
candidates.  For
race-candidate stores W and W', where W ->co W', the LKMM says the
stores don't race if W can be linked to W' by a

	w-post-bounded ; vis ; w-pre-bounded

sequence.  If W is plain then they also have to be linked by an

	r-post-bounded ; xb* ; w-pre-bounded

sequence, and if W' is plain then they also have to be linked by a

	w-post-bounded ; vis ; r-pre-bounded

sequence.  For race-candidate load R and store W, the LKMM says the
two accesses don't race if R can be linked to W by an

	r-post-bounded ; xb* ; w-pre-bounded

sequence or if W can be linked to R by a

	w-post-bounded ; vis ; r-pre-bounded

sequence.  For the cases involving a vis link, the LKMM also accepts
sequences in which W is linked to W' or R by a

	strong-fence ; xb* ; {w and/or r}-pre-bounded

sequence with no post-bounding, and in every case the LKMM also allows
the two accesses to be linked directly by a suitable fence with no
bounding at all (that is how the rcu-fence example above avoids a
race).  If no sequence of the appropriate sort exists, the LKMM says
the accesses race.

There is one more issue to address.  The LKMM's
happens-before, propagates-before, and rcu axioms (which state that
various relation compositions cannot contain cycles) apply only to
marked accesses.  For plain accesses the model instead imposes three
extra requirements, collectively
called the "plain-coherence" axiom because of their resemblance to the
rules used by the operational model to ensure cache coherence:

	If R and W are race candidates and it is possible to link R to
	W by one of the xb* sequences listed above, then W ->rfe R is
	not allowed (i.e., a load cannot read from a store that it
	executes before, even if one or both is plain);

	If W and R are race candidates and it is possible to link W to
	R by one of the vis sequences listed above, then R ->fre W is
	not allowed (i.e., if a store is visible to a load then the
	load must read from that store or one coherence-after it);

	If W and W' are race candidates and it is possible to link W
	to W' by one of the vis sequences listed above, then W' ->co W
	is not allowed (i.e., if one store is visible to a second then
	the second must come after the first in the coherence order).
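
To round this out with a minimal example (added here), the simplest
program the LKMM flags as racy pits a plain store against a marked
load with no bounding or visibility between them:

	int x;

	P0()
	{
		x = 1;			/* plain store */
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);	/* marked load */
	}

The two accesses are race candidates, and none of the bounded
sequences listed above links them, so the LKMM reports a data race.
Replacing the plain store with WRITE_ONCE(x, 1) eliminates the race,
since then neither access is plain.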

ODDS AND ENDS
-------------

This section covers material that didn't quite fit anywhere in the
earlier sections.

The descriptions in this document don't always match the formal
version of the LKMM exactly.  For example, the formal model requires
the two events linked by certain relations to
be on the same CPU.  These differences are very unimportant; indeed,
they make no difference to the model's predictions for any of the
examples in this document, and the informal descriptions convey the
ideas the formal definitions are meant to capture on each CPU.

A subtler point concerns the store events
that are part of a non-value-returning atomic update.  For instance,
atomic_inc(&x) updates x without telling the caller what value x had
beforehand.  On some architectures it is possible for
non-value-returning atomic operations effectively to be executed off
the CPU.  Basically, the CPU tells the memory subsystem to increment
x, and the increment is carried out by the memory hardware with
no further involvement from the CPU.  Since the CPU doesn't ever read
the value of x, there is no way to know exactly when the increment
takes place relative to the CPU's other activities; this is one reason
the special fences discussed below exist.

Along similar lines, the LKMM treats rcu_assign_pointer() as an
smp_store_release() -- which is basically how the Linux kernel treats
it, too.
2561 all po-earlier events against all po-later events, as smp_mb() does,
2564 smp_mb__before_atomic() orders all po-earlier events against
2565 po-later atomic updates and the events following them;
2567 smp_mb__after_atomic() orders po-earlier atomic updates and
2568 the events preceding them against all po-later events;
2570 smp_mb__after_spinlock() orders po-earlier lock acquisition
2571 events and the events preceding them against all po-later
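
For instance (a sketch added here; cnt and flag are just illustrative
variables), smp_mb__after_atomic() can upgrade an otherwise relaxed
atomic_inc():

	WRITE_ONCE(x, 1);
	atomic_inc(&cnt);
	smp_mb__after_atomic();
	WRITE_ONCE(flag, 1);

Here the fence orders the po-earlier atomic update, together with the
events preceding it (such as the store to x), against all po-later
events, including the store to flag.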

One last caution: The LKMM sidesteps questions of deadlock.  When a
litmus test contains locking, the model generates predictions only for
the
non-deadlocking executions.  For example:

	int x, y;
	spinlock_t lock;

	P0()
	{
		int r0;

		spin_lock(&lock);
		r0 = READ_ONCE(x);
		if (r0 == 0)
			spin_unlock(&lock);
		else
			WRITE_ONCE(y, 36);
		spin_lock(&lock);	/* Deadlocks if the lock is still held */
		spin_unlock(&lock);
	}

	P1()
	{
		WRITE_ONCE(x, 1);
	}

If P0 reads x = 1, it stores 36 in y, skips the unlock, and then hangs
forever on the second spin_lock().  The LKMM reports only the
completed executions, so when interpreting its output you must bear in
mind that this code
will self-deadlock in the executions where it stores 36 in y.