
Explanation of the Linux-Kernel Memory Consistency Model

  7. THE PROGRAM ORDER RELATION: po AND po-loc
 10. THE READS-FROM RELATION: rf, rfi, and rfe
 12. THE FROM-READS RELATION: fr, fri, and fre
 14. PROPAGATION ORDER RELATION: cumul-fence
 20. THE HAPPENS-BEFORE RELATION: hb
 21. THE PROPAGATES-BEFORE RELATION: pb
 22. RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-fence, and rb
INTRODUCTION
------------

The Linux-kernel memory consistency model (LKMM) is rather complex and
obscure.  This is particularly evident if you read through the
linux-kernel.bell and linux-kernel.cat files that make up the formal
version of the model; they are extremely terse and their meanings are
far from clear.
BACKGROUND
----------

On a single-CPU system the answers to questions about memory access
order are straightforward.  (The LKMM also ignores complicating
factors such as DMA and mixed-size accesses.)  But on multiprocessor
systems, with several CPUs making concurrent accesses to shared memory
locations, things are not so simple.
A SIMPLE EXAMPLE
----------------

Consider a device driver in which one routine fills a buffer and then
sets a flag to indicate that the buffer
is full.  Running concurrently on a different CPU might be a part of
the driver that waits for the flag to be set and then reads the
buffer.  P0() represents the write routine running on one
CPU and P1() represents the read() routine running on another (see the
sketch following this paragraph).
This pattern of memory accesses, where one CPU stores values to two
shared memory locations and another CPU loads from those locations in
the opposite order, is widely known as the "Message Passing" or MP
pattern.  Some outcomes are clearly out of the question: no one would
predict that r1 = 42 or r2 = -7, because neither of those values ever
gets stored in the shared variables.
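Here is a litmus-test sketch of that situation (the variable and
register names are illustrative assumptions):

        int buf = 0, flag = 0;

        P0()
        {
                WRITE_ONCE(buf, 1);
                WRITE_ONCE(flag, 1);
        }

        P1()
        {
                int r1;
                int r2 = 0;

                r1 = READ_ONCE(flag);           /* has P0 set the flag? */
                if (r1)
                        r2 = READ_ONCE(buf);    /* if so, read the data */
        }

The interesting question is whether r1 = 1 && r2 = 0 is possible: can
P1 see the flag and still miss the data?  On weakly ordered hardware,
and in the LKMM as written (with no fences and only a control
dependency protecting the second load), it is.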
A SELECTION OF MEMORY MODELS
----------------------------

The strongest memory model is Sequential Consistency (SC).  A system
is sequentially consistent if its outcomes could always be obtained as
if each CPU executed its instructions in order but with unspecified
timing; here the order of the instructions is the order in which they
appear in the
program source for each CPU.  The model says that the value obtained
by each load is simply the value written by the most recently executed
store to the same memory location, from any CPU.  Weaker models permit
outcomes that SC forbids; the classic test is the "store buffering"
pattern, in which
each CPU stores to its own shared location and then loads from the
other CPU's location:
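Here is a sketch of that store-buffering test (the names x, y, r0, and
r1 are illustrative):

        int x = 0, y = 0;

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                r0 = READ_ONCE(y);
        }

        P1()
        {
                int r1;

                WRITE_ONCE(y, 1);
                r1 = READ_ONCE(x);
        }

Under SC the outcome r0 = 0 && r1 = 0 is impossible: it would require
each CPU's load to execute before the other CPU's store, even though
each load is preceded in program order by its own CPU's store.  On TSO
machines (such as x86), and under the LKMM, the outcome is allowed:
each CPU's store can sit in a store buffer while its load executes.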
ORDERING AND CYCLES
-------------------
EVENTS
------

The LKMM does not work directly with C statements.  Instead it
represents memory accesses abstractly as events: read events, write
events, and fence events.
Atomic read-modify-write accesses, such as atomic_inc() or xchg(),
give rise to both a read event and a write event.  Other operations,
such as arithmetic and
logical computations, control-flow instructions, or accesses to
private memory or CPU registers are not of central interest to the
memory model; the model largely ignores them.  For a store, the model
is concerned only with the store itself -- its value and its address
-- not the computation leading up to it.
THE PROGRAM ORDER RELATION: po AND po-loc
-----------------------------------------

The program order (po) relation reflects the order in which the
instructions are presented to a CPU's execution unit.  Thus, we say
that X is po-before Y (written as "X ->po Y" in formulas) if X occurs
earlier than Y in the instruction stream.
This is inherently a single-CPU relation; two instructions executing
on different CPUs are never linked by po.

po-loc is a sub-relation of po.  It links two memory accesses when the
first comes before the second in program order and they access the
same memory location (the "-loc" suffix).

There is one last aspect of
program order we need to explain.  The LKMM was inspired by low-level
architectural memory models that describe the behavior of machine
code, and it retains their outlook to a considerable extent.  In
particular, a process's private variables are not shared memory: they
need not even be stored in normal memory at all -- in principle a
private variable could be stored in a CPU register (hence the convention
that these variables' names begin with the letter "r").
A WARNING
---------
DEPENDENCY RELATIONS: data, addr, and ctrl
------------------------------------------

Dependencies never point backward: the dependent access cannot
come earlier in program order.  Symbolically, if we have R ->data X,
R ->addr X, or R ->ctrl X (where R is a read event), then we must also
have R ->po X.  It wouldn't make sense for a computation to depend
on a value that doesn't get loaded from shared memory until later in
the code!
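To make the three kinds of dependency concrete, here is a small sketch
(the array a and the variables i, x, and y are illustrative
assumptions):

        int a[2], i, x, y;
        int r1, r2;

        r1 = READ_ONCE(i);              /* the load R */
        r2 = READ_ONCE(a[r1]);          /* addr: r1 computes the address */
        WRITE_ONCE(x, r1);              /* data: r1 computes the stored value */
        if (r1)                         /* ctrl: r1 decides whether */
                WRITE_ONCE(y, 1);       /*       this store executes */

In each case the dependent access comes po-after the load R, as
required.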
THE READS-FROM RELATION: rf, rfi, and rfe
-----------------------------------------

The reads-from relation (rf) links a write event to a read event when
the value loaded by the read is the value stored by the write.  We
write W ->rf R to indicate that the load R reads from the store W.  We
further distinguish the cases where the load and the store occur on
the same CPU (internal reads-from, or rfi) and where they occur on
different CPUs (external reads-from, or rfe).

For our purposes, a memory location's initial value is treated as
though it were written by a special imaginary store that
executes on a separate CPU before the program runs.
One complication the LKMM has to consider is the possibility
of load-tearing, where a load obtains some of its bits from one store
and the rest of its bits from another store.  Using READ_ONCE()
and WRITE_ONCE() will prevent load-tearing; it's not possible to have:
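a situation like the following sketch (variable names are
illustrative):

        int x = 0;

        P0()
        {
                WRITE_ONCE(x, 0x1234);
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x);
        }

in which r1 = 0x1200 at the end (partly from x's initial value and
partly from P0's store).  With READ_ONCE() and WRITE_ONCE(), r1 must
be either 0 or 0x1234.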
On the other hand, load-tearing is unavoidable when mixed-size
accesses are involved, as the sketch following this paragraph
illustrates.  If r1 = 0x56781234 (little-endian!) at the end of that
sketch, then P1 must have read
from both of P0's stores.  It is possible to handle mixed-size and
unaligned accesses in a memory model, but the LKMM currently does not
attempt to do so.
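A minimal mixed-size sketch (the u16 half-word accesses and pointer
casts are illustrative assumptions):

        int x = 0;

        P0()
        {
                WRITE_ONCE(*(u16 *)&x, 0x1234);         /* low half  */
                WRITE_ONCE(*((u16 *)&x + 1), 0x5678);   /* high half */
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x);      /* one full-word load */
        }

Getting both halves nonzero requires the single load to take bits from
two distinct stores -- a torn load.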
CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
------------------------------------------------------------------

Cache coherence means that even in a
multi-processor system, the CPUs must share a consistent view of the
contents of each memory location.  For every location x there is a
coherence order of the stores to x (from a
hardware-centric view, the order in which the stores get written to
x's cache line).  We write W ->co W' if W comes before W' in the
coherence order, that is, if the value stored by W gets overwritten,
directly or indirectly, by the value stored by W'.
Coherence requires that po-loc and co be compatible, as expressed by
four coherency rules:

        Write-write coherence: If W ->po-loc W' (i.e., W comes before
        W' in program order and they access the same memory location),
        where W and W' are two stores, then W ->co W'.

        Write-read coherence: If W ->po-loc R, where W is a store and R
        is a load, then R must read from W or from some other store
        which comes after W in the coherence order.

        Read-write coherence: If R ->po-loc W, where R is a load and W
        is a store, then the store which R reads from must come before
        W in the coherence order.

        Read-read coherence: If R ->po-loc R', where R and R' are two
        loads, then either they read from the same store or else the
        store read by R comes before the store read by R' in the
        coherence order.

(Along with these rules, cache coherence includes the
requirement that every store eventually becomes visible to every CPU.)
Each rule forbids a corresponding anomaly; the sketches following this
paragraph illustrate three of them.  A final value of 17 in the first
sketch would violate the
write-write coherence rule: Since the store of 23 comes later in
program order, it must also come later in x's coherence order and so
must provide the final value.
If r1 = 666 at the end of the second sketch, this would violate the
read-write coherence
rule: The load must not read from a store that comes later in program
order.  And r1 = 5 together with r2 = 0 in the third sketch
would violate the read-read coherence rule: The r1 load comes before
the r2 load in program order, so r2 must not read from an earlier
store in the coherence order than r1 does.
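Here are minimal sketches of those three anomalies (the variable names
and values are chosen to match the discussion above; x is initially 0
in each case):

        /* Write-write: the final value of x must be 23, not 17. */
        P0()
        {
                WRITE_ONCE(x, 17);
                WRITE_ONCE(x, 23);
        }

        /* Read-write: r1 = 666 is forbidden. */
        P0()
        {
                int r1;

                r1 = READ_ONCE(x);
                WRITE_ONCE(x, 666);
        }

        /* Read-read: r1 = 5 && r2 = 0 is forbidden. */
        P0()
        {
                WRITE_ONCE(x, 5);
        }

        P1()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                r2 = READ_ONCE(x);
        }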
(One exception: Itanium CPUs can reorder two loads from the same
location when they are
encoded in Itanium's Very-Long-Instruction-Word format, and it is yet
another reason to use READ_ONCE() when accessing shared memory.)
Just like the po relation, co is inherently an ordering -- it is not
possible for a store to overwrite itself!  We distinguish stores that
occur on the same CPU (internal coherence order, or coi) and stores
that occur on different CPUs (external coherence order, or coe).
Unlike po, two stores on different CPUs can be related by co without
being
related by po.  Coherence order is strictly per-location, or if you
prefer, each location has its own independent coherence order.
THE FROM-READS RELATION: fr, fri, and fre
-----------------------------------------

The from-reads relation (fr) can be a little difficult for people to
grok.  It describes the situation where a load reads a value that gets
overwritten by a store.  In other words, we have R ->fr W when the
value that R reads is overwritten (directly or indirectly) by W, or
equivalently, when R reads from a store which comes earlier than W in
the coherence order.  As with rf and co, we subdivide fr into fri (when
the load and the store are on the same CPU) and fre (when they are on
different CPUs).

Note that fr is determined entirely by rf and co; given a load R and a
write event W for the same location, we will have R ->fr W if and only if
the write which R reads from is co-before W.  In symbols,

        (R ->fr W) := (there exists W' with W' ->rf R and W' ->co W).
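For instance (a minimal sketch; x is initially 0):

        P0()
        {
                int r1;

                r1 = READ_ONCE(x);      /* R */
        }

        P1()
        {
                WRITE_ONCE(x, 1);       /* W */
        }

If r1 = 0 then R read from x's initial value -- the imaginary initial
store W' -- and W' ->co W because W overwrites it.  Hence R ->fr W,
and in fact R ->fre W, since R and W are on different CPUs.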
AN OPERATIONAL MODEL
--------------------

The LKMM is based on an operational memory model: a plausible account
of how the hardware might behave.
For the most part, executing an instruction requires a CPU to perform
only internal operations; loads, stores, and fences involve more.

When CPU C executes a store instruction, it tells the memory subsystem
to store a certain value at a certain location.  The memory subsystem
propagates the store to all the other CPUs as well as to RAM.  (As a
special case, we say that the store propagates to its own CPU at the
time it is executed.)  The memory subsystem also determines where the
store falls in the location's coherence order.  In particular, it must
arrange for the store to be co-later than (i.e., to overwrite) any
other store to the same location which has already propagated to CPU C.
When a CPU executes a load instruction R, it first checks to see
whether there are any as-yet unexecuted store instructions, for the
same location, that come before R in program order.  If there are, it
uses the value of the po-latest such store as the value obtained by R,
and we say that the store's value is forwarded to R.  Otherwise, the
CPU asks the memory subsystem for the value to load and we say that R
is satisfied from memory.  The memory subsystem hands back the value
of the co-latest store to the location in question which has already
propagated to that CPU.
(In reality the picture is a little more complicated.
CPUs have local caches, and propagating a store to a CPU really means
propagating it to the CPU's local cache.  A local cache can take some
time to process the stores that it receives, and a store can't be used
to satisfy one of the CPU's loads until it has been processed.  On
most architectures the local caches process stores in
First-In-First-Out order, and consequently the processing delay
doesn't matter for the memory model.  But on Alpha, the local caches
have a partitioned design that results in non-FIFO behavior.  We will
discuss this in more detail later.)
Executing a fence (or memory barrier) instruction doesn't require a
CPU to do anything special other than informing the memory subsystem
about the fence.  However, fences do constrain the way CPUs and the
memory subsystem handle other instructions, in two respects.

First, a fence forces the CPU to execute various instructions in
program order.  Exactly which instructions are ordered depends on the
type of fence:

        Strong fences, including smp_mb() and synchronize_rcu(), force
        the CPU to execute all po-earlier instructions before any
        po-later instructions;

        smp_rmb() forces the CPU to execute all po-earlier loads
        before any po-later loads;

        smp_wmb() forces the CPU to execute all po-earlier stores
        before any po-later stores;

        Acquire fences, such as smp_load_acquire(), force the CPU to
        execute the load associated with the fence (e.g., the load
        part of an smp_load_acquire()) before any po-later
        instructions;

        Release fences, such as smp_store_release(), force the CPU to
        execute all po-earlier instructions before the store
        associated with the fence (e.g., the store part of an
        smp_store_release()).
Second, some types of fence affect the way the memory subsystem
propagates stores.  When a fence instruction is executed on CPU C:

        For each other CPU C', smp_wmb() forces all po-earlier stores
        on C to propagate to C' before any po-later stores do.

        For each other CPU C', any store which propagates to C before
        a release fence is executed (including all po-earlier
        stores executed on C) is forced to propagate to C' before the
        store associated with the release fence does.

        Any store which propagates to C before a strong fence is
        executed (including all po-earlier stores on C) is forced to
        propagate to all other CPUs before any instructions po-after
        the strong fence are executed on C.

The propagation ordering enforced by release fences and strong fences
affects stores from other CPUs that propagate to CPU C before the
fence is executed, as well as stores that are executed on C before the
fence.  We describe this property by saying that release fences and
strong fences are A-cumulative.  By contrast, smp_wmb() fences are not
A-cumulative; they only affect the propagation of stores that are
executed on the CPU running the fence, and only their propagation to
other CPUs.
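As an illustration of A-cumulativity, consider this sketch (x and y
are both initially 0):

        P0()
        {
                WRITE_ONCE(x, 1);
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x);
                smp_store_release(&y, 1);
        }

        P2()
        {
                int r2, r3;

                r2 = smp_load_acquire(&y);
                r3 = READ_ONCE(x);
        }

If r1 = 1 then P0's store to x propagated to P1 before P1's release
fence executed.  Because the release fence is A-cumulative, the store
to x must propagate to every CPU before the store to y does; hence
r1 = 1 && r2 = 1 && r3 = 0 is forbidden.  An smp_wmb() on P1 would not
provide this guarantee: being non-A-cumulative, it says nothing about
P0's store, which was merely received by P1, not executed there.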
PROPAGATION ORDER RELATION: cumul-fence
---------------------------------------

The fences which affect propagation order (i.e., strong, release, and
smp_wmb() fences) are collectively referred to as cumul-fences, even
though smp_wmb() isn't A-cumulative.  The cumul-fence relation is
defined to link memory access events E and F whenever:

        E and F are both stores on the same CPU and an smp_wmb() fence
        event occurs between them in program order; or

        F is a release fence and some X comes before F in program
        order, where either X = E or else E ->rf X; or

        A strong fence event occurs between some X and F in program
        order, where either X = E or else E ->rf X.

The operational model requires that whenever W and W' are both stores
and W ->cumul-fence W', then W must propagate to any given CPU
before W' does.
DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
-------------------------------------------------

In the end, the LKMM's guarantees come down to the requirements of
maintaining cache coherence and the fact that a CPU can't operate on a
value before it knows what that value is.  The axiomatic model imposes
a number of requirements on executions, including:

        Atomicity: This requires that atomic read-modify-write
        operations really are atomic, that is, that no other stores
        can sneak into the middle of such an update;

        Happens-before: This requires that certain instructions are
        executed in a specific order;

        Rcu: This requires that RCU read-side critical sections and
        grace periods obey the rules of RCU, in particular, the
        Grace-Period Guarantee.

Requirements like the first of these can be found in many other
memory models (such as those for C11/C++11).  The "happens-before" and
"rcu" requirements, by contrast, reflect concerns specific to the
LKMM.
SEQUENTIAL CONSISTENCY PER VARIABLE
-----------------------------------

This requirement says that the accesses to any one memory location can
be arranged in a single global order agreeing with the execution's
outcome (one way to build such an order is to begin with the imaginary
initial stores, first for CPU 0, then CPU 1, etc.).
The rf, co, fr,
and po-loc relations all agree with this global ordering; in other words,
whenever we have X ->rf Y or X ->co Y or X ->fr Y or X ->po-loc Y, the
X event comes before the Y event in the global ordering.  The LKMM
expresses this by requiring that there are no cycles of the form

        X0 -> X1 -> X2 -> ... -> Xn -> X0,

where each of the links is either rf, co, fr, or po-loc.  This has to
hold if the accesses to each variable are to appear sequentially
consistent.
ATOMIC UPDATES: rmw
-------------------

What does it mean to say that a read-modify-write (rmw) update, such
as atomic_inc(&x), is atomic?  It means that the memory location (x in
this case) does not get altered between the read and the write events
making up the atomic operation.  In particular, if two CPUs perform
atomic_inc(&x) concurrently, we should never have the following
sequence of events:

        CPU 0 loads x obtaining 13;
        CPU 1 loads x obtaining 13;
        CPU 0 stores 14 to x;
        CPU 1 stores 14 to x;

where the final value of x is wrong (14 rather than 15).

In this example, CPU 0's increment effectively gets lost because it
occurs in between CPU 1's load and store.  To put it another way, the
problem is that the position of CPU 0's store in x's coherence order
is between the store that CPU 1 reads from and the store that CPU 1
performs.

Formally: if R and W are the read and write events making up an
atomic read-modify-write and W' is the write event which R reads from,
then no store may come between W' and W in the coherence order.
Equivalently,

        (R ->rmw W) implies (there is no X with R ->fr X and X ->co W),

where the rmw relation links the read and write events making up each
atomic update.
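A quick sketch of the guarantee (using atomic_t for x):

        atomic_t x = ATOMIC_INIT(0);

        P0()
        {
                atomic_inc(&x);
        }

        P1()
        {
                atomic_inc(&x);
        }

At the end x must be 2.  The lost-update interleaving shown above,
which would leave x = 1, is ruled out because it would put CPU 0's
store between the store that CPU 1 reads from and the store that CPU 1
performs.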
THE PRESERVED PROGRAM ORDER RELATION: ppo
-----------------------------------------

There are many situations where a CPU is obligated to execute two
instructions in program order.  We amalgamate them into the ppo (for
"preserved program order") relation, which links the po-earlier
instruction to the po-later instruction and is thus a sub-relation of
po.

Let X and Y be two
memory accesses with X ->po Y; then the CPU must execute X before Y if
any of several conditions hold, among them:

        X and Y are both loads, X ->addr Y (i.e., there is an address
        dependency from X to Y), and X is a READ_ONCE() or an atomic
        access.

For example, a dependency from a load R to
a store W will force the CPU to execute R before W.  This is very
simply because the CPU cannot tell the memory subsystem about W's
store before it knows what value should be stored (in the case of a
data dependency), what location it should be stored into (in the case
of an address dependency), or whether the store should actually take
place (in the case of a control dependency).

Dependencies to loads are trickier.  To begin with,
there is no such thing as a data dependency to a load.  Next, a CPU
has no reason to respect a control dependency to a load, because it
can always satisfy the second load speculatively before the first and
then ignore the result if the second load turns out not to be needed.

To be fair about it, all Linux-supported architectures do execute
loads in program order if there is an address dependency between them.
After all, a CPU cannot ask the memory subsystem to load a value from
a particular location before it knows what that location is.  However,
the split-cache design used by Alpha can cause it to behave in a way
that looks as if the loads were executed out of order (see the next
section for more details).
The LKMM includes a further requirement for the case where a
dependency extends from a load R to a
store and a second, po-later load reads from that store:

        R ->dep W ->rfi R',

where the dep link can be either an address or a data dependency.  In
this situation we know it is possible for the CPU to execute R' before
W, because it can forward the value that W will store to R'.  But it
cannot execute R' before R, because it cannot forward the value before
it knows what that value is, or that W and R' do access the same
location.  However, if there is merely a control dependency between R
and W then the CPU can speculatively forward W to R' before executing
R; if the speculation turns out to be wrong then the CPU merely has to
restart or abandon R'.
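A concrete sketch of that pattern (with a data dependency from R to
W; the variable names are illustrative):

        int r1, r2;

        r1 = READ_ONCE(x);      /* R  */
        WRITE_ONCE(y, r1);      /* W: data-dependent on R */
        r2 = READ_ONCE(y);      /* R': may be satisfied by forwarding W */

The CPU may satisfy R' by forwarding the value it intends to store at
W, and thus execute R' before W; but it cannot execute R' before R,
since the forwarded value is not known until R completes.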
(In theory, a CPU might forward a store to a load when it runs across
other kinds of dependency as well, but the LKMM does not take such
exotic possibilities into account.)

Finally, if we have

        R ->po-loc W

(the po-loc link says that R comes before W in program order and they
access the same location), the CPU is obliged to execute W after R.
If it executed W first then the memory subsystem would respond to R's
read request with the value stored by W (or an even later store), in
violation of the read-write coherence rule.  Similarly, if we had

        W ->po-loc W'

and the CPU executed W' before W, then the memory subsystem would put
W' before W in the coherence order, effectively causing W to
overwrite W', in violation of the write-write coherence rule.
(Interestingly, an early version of the model did consider
allowing out-of-order writes like this to occur.  The model avoided
violating the write-write coherence rule by requiring the CPU not to
send the W write to the memory subsystem at all!)
AND THEN THERE WAS ALPHA
------------------------
        int y = -1;
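        int x = 0;              /* the rest of this example is a plausible
                                   reconstruction of the classic Alpha case */
        int *ptr = &y;

        P0()
        {
                WRITE_ONCE(x, 1);
                smp_wmb();
                WRITE_ONCE(ptr, &x);
        }

        P1()
        {
                int *r1;
                int r2;

                r1 = ptr;               /* may load &x once P0's store arrives */
                r2 = READ_ONCE(*r1);    /* address-dependent load */
        }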
Because of Alpha's partitioned caches, x's new
value may not become available for P1's CPU to read until after the
new value of ptr does, so P1 could end up with r1 = &x but r2 = 0.  To
prevent such outcomes, the kernel arranges for a special fence (which
amounts to
nothing at all on non-Alpha builds) after every READ_ONCE() and atomic
load.  The effect of the fence is to cause the CPU not to execute any
po-later instructions until after the local cache has finished
processing all the stores it has already received.
The smp_rmb() and smp_mb() fences work analogously: they do not allow
the CPU to execute any po-later instructions (or po-later loads in the
case of smp_rmb()) until after the local cache has finished processing
all the stores it has already received.  In the case of a strong
fence, the CPU first has to
wait for all of its po-earlier stores to propagate to every other CPU
in the system; then it has to wait for the local cache to process all
the stores received as of that time -- not just the stores received
when the fence began.
THE HAPPENS-BEFORE RELATION: hb
-------------------------------

The happens-before relation (hb) links memory accesses that have to
execute in a certain order.  hb includes the ppo relation and two
others, one of which is rfe.

W ->rfe R implies that W and R are on different CPUs.  It also means
that W's store must have propagated to R's CPU before R executed;
otherwise R could not have read the value stored by W.  Therefore W
must have executed before R, and so we have W ->hb R.
The equivalent fact need not hold if W ->rfi R (i.e., W and R are on
the same CPU).  As we have already seen, the operational model allows
W's value to be forwarded to R in such cases, meaning that R may well
execute before W does.

It's less clear whether coe and fre imply execution order.  Consider
W ->coe W'.  This means that W and W' are stores to the same location,
they execute on different CPUs, and W' overwrites W; even so, W' may
execute before W, because the memory subsystem chooses the coherence
order on its own.  Likewise,
R ->fre W means that W overwrites the value which R reads, but it
doesn't mean that W has to execute after R.  All that's necessary is
for the memory subsystem not to propagate W to R's CPU until after R
has executed.

The third relation included in hb is, like ppo, a relation between
events that are on the same CPU.  However it is more difficult to
explain; the relation is called prop, and it links two events
on CPU C in situations where a store from some other CPU comes after
the first event in the coherence order and propagates to C before the
second event executes.
Whenever the LKMM's relations combine to link an instruction back to
itself, the corresponding
outcome is impossible -- as it should be.

In general a prop link consists of a coe or fre link,
followed by an arbitrary number of cumul-fence links, ending with an
optional rfe link.  For instance, in a suitable three-CPU test a prop
chain can consist of an fre link
followed by two cumul-fences and an rfe link, utilizing the fact that
release fences are A-cumulative: the store to x propagates to P1 before the
store to y does (the first cumul-fence), the store to y propagates to P2
before the
store to z does (the second cumul-fence), and P0's load executes after the
store to z has propagated to P0's CPU (the rfe link).  Since no
instruction can execute before itself, the hb relation must never
contain a cycle; this
requirement is the content of the LKMM's "happens-before" axiom.
THE PROPAGATES-BEFORE RELATION: pb
----------------------------------

The propagates-before (pb) relation capitalizes on the special
features of strong fences.  It links two events E and F whenever some
store is coherence-later than E and propagates to every CPU and to RAM
before F executes.  The definition requires that E be linked to
F via a coe or fre link, an arbitrary number of cumul-fences, an
optional rfe link, a strong fence, and an arbitrary number of hb
links.  Symbolically,

        E ->coe W ->cumul-fence* X ->rfe? Y ->strong-fence Z ->hb* F,

where the E ->coe W link could instead be E ->fre W (and Y may
be equal to X).  Because of the cumul-fence links, we know that W will
propagate to Y's CPU before X does, hence before Y executes and hence
before the strong fence executes.  Because the fence is strong, we
know that W will propagate to every CPU and to RAM before Z executes.
And because of the hb links, we know that Z will execute before F.
Thus W, which comes later than E in the coherence order, will
propagate to every CPU and to RAM before F executes.

The existence of a pb link from E to F implies that E must execute
before F.  To see why, suppose that F executed first.  Then W would
have propagated to E's CPU before E executed.  If E was a store, the
memory subsystem would then have had to place E after W in the
coherence order, contradicting the fact that E ->coe W.  If E was a
load, it would have been obliged to read from W or a co-later store,
contradicting the fact that E ->fre W.
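For instance, consider this sketch (x and y are both initially 0):

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                smp_mb();
                r0 = READ_ONCE(y);
        }

        P1()
        {
                int r1;

                WRITE_ONCE(y, 1);
                smp_mb();
                r1 = READ_ONCE(x);
        }

If r0 = 0 then P0's load reads from y's initial value, which is
overwritten by P1's store, giving an fre link; P1's store to y is
po-before its strong fence, which is po-before its load of x, so we
get P0's load ->pb P1's load.  Symmetrically, r1 = 0 would give P1's
load ->pb P0's load.  Since pb implies execution order, both links
cannot hold at once: r0 = 0 && r1 = 0 is forbidden.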
In this example, the sequences of cumul-fence and hb links are empty.
Note that this pb link is not included in hb as an instance of prop,
because it does not start and end on the same CPU.
RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-fence, and rb
-------------------------------------------------------------

RCU (Read-Copy-Update) is a powerful synchronization mechanism.  It
rests on two concepts: grace periods and read-side critical sections.

A grace period is the span of time occupied by a call to
synchronize_rcu().  A read-side critical section (or just critical
section, for short) is a region of code delimited by calls to
rcu_read_lock() and rcu_read_unlock().

As far as memory models are concerned, RCU's main feature is its
Grace-Period Guarantee, which states that a critical section can never
span a full grace period.  In more detail, the Guarantee says that for
any critical section C and any grace period G, at least one of the
following must hold:

        (1) C ends before G does, and in addition, every store that
        propagates to C's CPU before the end of C must propagate to
        every CPU before G ends.

        (2) G starts before C does, and in addition, every store that
        propagates to G's CPU before the start of G must propagate
        to every CPU before C starts.
The parts of the Guarantee that require stores
to propagate to every CPU are fulfilled by placing strong fences at
suitable places in the RCU-related code.  Thus, if a critical section
starts before a grace period does then the critical section's CPU will
execute a strong fence somewhere in between, and similarly for the
other situations the Guarantee covers.
To express the Guarantee, the LKMM first defines the
rcu-link relation.  rcu-link encompasses a very general notion of
"before": among other things,
E ->rcu-link F includes cases where E is po-before some memory-access
event X, F is po-after some memory-access event Y, and we have any of
X ->rfe Y, X ->co Y, or X ->fr Y.

The formal definition of the rcu-link relation is more than a little
obscure, and we won't give it here.  The most important thing to know
about rcu-link is the information in the preceding paragraph.
The LKMM also defines the rcu-gp and rcu-rscsi relations.  They bring
grace periods and read-side critical sections into the picture, in the
following way:

        E ->rcu-gp F means that E and F are in fact the same event,
        and that event is a synchronize_rcu() fence (i.e., a grace
        period).

        E ->rcu-rscsi F means that E and F are the rcu_read_unlock()
        and rcu_read_lock() fence events delimiting some read-side
        critical section.  (The "i" at the end of the name emphasizes
        that this relation is "inverted": it links the end of the
        critical section to the start.)

If we think of the rcu-link relation as standing for an extended
"before", then X ->rcu-gp Y ->rcu-link Z roughly says that X is a
grace period which ends before Z begins.  (In fact it covers more than
this, because it also includes cases where some store propagates to
Z's CPU before Z begins but doesn't propagate to some other CPU until
after X ends.)  Similarly, X ->rcu-rscsi Y ->rcu-link Z says that X is
the end of a critical section which starts before Z begins.
The LKMM goes on to define the rcu-fence relation as a sequence of
rcu-gp and rcu-rscsi links separated by rcu-link links, in which the
number of rcu-gp links is >= the number of rcu-rscsi links.  For
example:

        X ->rcu-gp Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

would imply that X ->rcu-fence V, because this sequence contains two
rcu-gp links and one rcu-rscsi link.  (It also implies that
X ->rcu-fence T and Z ->rcu-fence V.)  On the other hand:

        X ->rcu-rscsi Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

does not imply X ->rcu-fence V, because the sequence contains only
one rcu-gp link but two rcu-rscsi links.

The rcu-fence relation is important because the Grace Period Guarantee
means that rcu-fence acts kind of like a strong fence.  In particular,
E ->rcu-fence F implies not only that E begins before F ends, but also
that any write po-before E will propagate to every CPU before any
instruction po-after F can execute.  (However, it does not imply that
E executes before F; in fact, each synchronize_rcu() fence event
is linked to itself by rcu-fence as a degenerate case.)
To prove this in full generality requires some intellectual effort.
We'll consider just a very simple case:

        G ->rcu-gp W ->rcu-link Z ->rcu-rscsi F.

This formula means that G and W are the same grace period fence event,
and there are events X, Y and a read-side critical section C such that:

        1. G = W is po-before or equal to X, and X comes "before" Y
           in the sense used by the rcu-link relation;

        2. Y is po-before Z;

        3. Z is the rcu_read_unlock() fence event marking the end of
           the critical section C;

        4. F is the rcu_read_lock() fence event marking the start of C.

From 1 - 4 we deduce that the grace period G ends before the critical
section C does.  Consequently part (1) of the Grace-Period Guarantee
cannot hold, so part (2) must: any store which propagates to
G's CPU before G starts must propagate to every CPU before C starts.
In particular, any write po-before G propagates to every CPU before F
finishes
executing and hence before any instruction po-after F can execute.
This is exactly the strong-fence-like behavior that is supposed to be
covered by rcu-fence.
Finally, the LKMM defines the RCU-before (rb) relation in terms of
rcu-fence.  This is done in essentially the same way as the pb
relation was defined in terms of strong-fence.  We will omit the
details; the end result is that E ->rb F implies E must execute
before F, just as E ->pb F does (and for much the same reasons).
Putting this all together, the LKMM expresses the Grace-Period
Guarantee by requiring that the rb relation does not contain a cycle.
Equivalently, this "rcu" axiom requires that there are no events E
and F with E ->rcu-link F ->rcu-fence E.  Or to put it a third way,
the axiom requires that there are no cycles consisting of rcu-gp and
rcu-rscsi alternating with rcu-link, where the number of rcu-gp links
is >= the number of rcu-rscsi links.
Justifying the axiom takes some intellectual effort, but it is in fact
a valid formalization of the Grace-Period Guarantee.  We won't attempt
the full argument, but the following case gives its flavor.  Suppose
both parts of the Guarantee are violated: A critical section starts
before a grace period, and some
store propagates to the critical section's CPU before the end of the
critical section but doesn't propagate to some other CPU until after
the end of the grace period.

Putting symbols to these ideas, let L and U be the rcu_read_lock() and
rcu_read_unlock() fence events delimiting the critical section in
question, and let S be the grace period fence event.  Saying that the
critical section starts before S means there
are events Q and R where Q is po-after L (which marks the start of the
critical section), Q is "before" R in the sense used by the rcu-link
relation, and R is po-before the grace period S.  Thus we have:

        L ->rcu-link S.
Let W be the store mentioned above, let Y come before the end of the
critical section and witness that W propagates to the critical
section's CPU by reading from W, and let Z on some arbitrary CPU be a
witness that W has not propagated to that CPU, where Z happens after
some event X which is po-after S.  Symbolically, this amounts to:

        S ->po X ->hb* Z ->fr W ->rf Y ->po U.

The fr link from Z to W indicates that W has not propagated to Z's CPU
at the time that Z executes.  From this, it can be shown (see the
discussion of the rcu-link relation earlier) that S and U are related
by rcu-link:

        S ->rcu-link U.
Since S is a grace period we have S ->rcu-gp S, and since L and U are
the start and end of the critical section C we have U ->rcu-rscsi L.
From this we obtain:

        S ->rcu-gp S ->rcu-link U ->rcu-rscsi L ->rcu-link S,

a forbidden cycle.  Thus the "rcu" axiom rules out this violation of
the Grace-Period Guarantee.
For something a little more down-to-earth, let's see how the axiom
works out in practice, using the test sketched below.
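This is a reconstruction along the usual lines (x and y are both
initially 0):

        int x, y;

        P0()
        {
                rcu_read_lock();
                WRITE_ONCE(x, 1);
                WRITE_ONCE(y, 1);
                rcu_read_unlock();
        }

        P1()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                synchronize_rcu();
                r2 = READ_ONCE(y);
        }

The outcome r1 = 1 && r2 = 0 is forbidden.  To see why, let L and U be
P0's lock and unlock fence events, S the grace period, X the store to
x, Y the store to y, W the load of y, and Z the load of x.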
If r2 = 0 then P0's store at Y overwrites the value that
P1's load at W reads from, so we have W ->fre Y.  Since S ->po W and
also Y ->po U, we get S ->rcu-link U.  In addition, S ->rcu-gp S
because S is a grace period.

If r1 = 1 then P1's load at Z reads from P0's store at X,
so we have X ->rfe Z.  Together with L ->po X and Z ->po S, this
yields L ->rcu-link S.  And since L and U are the start and end of a
critical section, we have U ->rcu-rscsi L.
Then U ->rcu-rscsi L ->rcu-link S ->rcu-gp S ->rcu-link U is a
forbidden cycle, so the "rcu" axiom rules out the outcome
r1 = 1 && r2 = 0.
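What happens when there are two critical sections and only one grace
period?  Consider this sketch (again a plausible reconstruction; all
variables initially 0):

        int x, y, z;

        P0()
        {
                int r1;

                rcu_read_lock();                /* L0 */
                WRITE_ONCE(x, 1);
                r1 = READ_ONCE(y);
                rcu_read_unlock();              /* U0 */
        }

        P1()
        {
                int r2;

                WRITE_ONCE(y, 1);
                synchronize_rcu();              /* S1 */
                r2 = READ_ONCE(z);
        }

        P2()
        {
                int r3;

                rcu_read_lock();                /* L2 */
                WRITE_ONCE(z, 1);
                r3 = READ_ONCE(x);
                rcu_read_unlock();              /* U2 */
        }

Suppose r1 = 0, r2 = 0, and r3 = 0 at the end.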
Reasoning as before shows
that U0 ->rcu-rscsi L0 ->rcu-link S1 ->rcu-gp S1 ->rcu-link U2 ->rcu-rscsi
L2 ->rcu-link U0.  However this cycle is not forbidden, because the
sequence of relations contains fewer instances of rcu-gp (one) than of
rcu-rscsi (two).  Consequently the outcome is allowed by the LKMM.
Addendum: The LKMM now supports SRCU (Sleepable Read-Copy-Update) in
addition to normal RCU.  The ideas involved are much the same as
above, with new relations srcu-gp and srcu-rscsi added to represent
SRCU grace periods and read-side critical sections.  There is a
restriction on the srcu-gp and srcu-rscsi links that can appear in an
rcu-fence sequence (the srcu-rscsi links must be paired with srcu-gp
links belonging to the same srcu_struct), but otherwise the treatment
is analogous.
LOCKING
-------

Locking is heavily used in the kernel, and the LKMM treats it
specially.  The
store-release in a spin_unlock() and the load-acquire which forms the
first half of the atomic rmw update in a spin_lock() (or a successful
spin_trylock()) -- we can call these things lock-releases and
lock-acquires -- have two properties beyond those of ordinary releases
and acquires.

First, when a lock-acquire reads from a lock-release, the LKMM
requires that every instruction po-before the lock-release must
execute before any instruction po-after the lock-acquire.  This would
naturally hold if the release and acquire operations were on different
CPUs, but the LKMM says it holds even when they are on the same CPU.
This requirement does not apply to ordinary release and acquire
fences, only to lock-related operations.  For instance, suppose P0()
does something along the lines of the sketch below.
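(A plausible sketch of the kind of code in question, using an ordinary
release/acquire pair on the variable s:)

        int r1;

        WRITE_ONCE(x, 1);
        smp_store_release(&s, 1);
        r1 = smp_load_acquire(&s);
        WRITE_ONCE(y, 1);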
Then the CPU would be allowed to forward the s = 1 value from the
smp_store_release() to the smp_load_acquire(), executing the acquire
(and hence the store to y) before the release's store has propagated
anywhere; as a result, the store to y could reach other CPUs before
the store to x does.  Had these been a lock-release and a
lock-acquire, that reordering would have been forbidden.
Second, when a lock-acquire reads from a lock-release, and some other
stores W and W' occur po-before the lock-release and po-after the
lock-acquire respectively, the LKMM requires that W must propagate to
each CPU before W' does.  For example, consider the sketch below.
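(A reconstruction of the sort of test meant here:)

        int x, y;
        spinlock_t s;

        P0()
        {
                spin_lock(&s);
                WRITE_ONCE(x, 1);       /* W  */
                spin_unlock(&s);
        }

        P1()
        {
                int r1;

                spin_lock(&s);
                r1 = READ_ONCE(x);
                WRITE_ONCE(y, 1);       /* W' */
                spin_unlock(&s);
        }

        P2()
        {
                int r2, r3;

                r2 = READ_ONCE(y);
                smp_rmb();
                r3 = READ_ONCE(x);
        }

If r1 = 1 then P1's critical section comes after P0's, so the store to
x must propagate to each CPU before the store to y does; hence
r2 = 1 && r3 = 0 is forbidden.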
These two special requirements for lock-release and lock-acquire do
not arise from the operational model.  Nevertheless, kernel developers
have come to expect and rely on them because they do hold on all
architectures supported by the Linux kernel, albeit for various
differing reasons.
ODDS AND ENDS
-------------

Roughly speaking, an acquire operation behaves like a load followed by
an acquire fence, and a release operation like a release fence
followed by a store, except that the operation and its fence must
be on the same CPU.  These differences are very unimportant; indeed,
for most purposes the two forms can be treated as interchangeable.
One odd point concerns loads and stores
that are part of a non-value-returning atomic update.  For instance,
how is the store half of atomic_inc(&x) ordered?  The difficulty is
that some architectures allow
non-value-returning atomic operations effectively to be executed off
the CPU.  Basically, the CPU tells the memory subsystem to increment
x, and the increment is carried out by the memory hardware with
no further involvement from the CPU.  Since the CPU doesn't ever read
the value of x, there is no way for it to impose ordering on the
update itself.  The acquire and release variants of value-returning
atomics, on the other hand, behave essentially like
smp_load_acquire() and
smp_store_release() -- which is basically how the Linux kernel treats
them.
The kernel also provides fence-like macros for use with atomic updates
and locks.  These don't order
all po-earlier events against all po-later events, as smp_mb() does,
but rather (a usage sketch follows the list):

        smp_mb__before_atomic() orders all po-earlier events against
        po-later atomic updates and the events following them;

        smp_mb__after_atomic() orders po-earlier atomic updates and
        the events preceding them against all po-later events;

        smp_mb__after_spinlock() orders po-earlier lock acquisition
        events and the events preceding them against all po-later
        events.
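For example (a sketch; the variables are illustrative):

        atomic_t v = ATOMIC_INIT(0);
        int x, y, r1;

        WRITE_ONCE(x, 1);
        smp_mb__before_atomic();
        atomic_inc(&v);         /* the store to x is ordered before this
                                   update and everything po-after it */
        r1 = READ_ONCE(y);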
When a litmus test can deadlock, the LKMM restricts its predictions to
the non-deadlocking executions.  For example, consider the sketch
below.
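(A hypothetical reconstruction; the conditional second spin_lock() is
what creates the deadlock, and the value 36 matches the discussion
that follows:)

        int x, y;
        spinlock_t s;

        P0()
        {
                spin_lock(&s);
                if (READ_ONCE(x) > 0) {
                        WRITE_ONCE(y, 36);
                        spin_lock(&s);  /* never succeeds: s is held */
                }
                spin_unlock(&s);
        }

        P1()
        {
                WRITE_ONCE(x, 1);
        }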
This code will self-deadlock in the executions where it stores 36 in
y.  Since the LKMM considers only the non-deadlocking executions, it
predicts that the test never terminates with y = 36.