Lines matching "store" (each entry gives the line number in the source document, followed by the matching line of text):
85 store instruction accessing the same location (we ignore complicating
168 store to buf but before the store to flag. In this case, r1 and r2
190 store to the same memory location, from any CPU.
196 Since r1 = 1, P0 must store 1 to flag before P1 loads 1 from
204 store to the same address.
209 Since an instruction (in this case, P1's store to flag) cannot
218 x86 and SPARC follow yet a different memory model: TSO (Total Store
221 Consistency. One example is the Store Buffer (SB) pattern, in which
318 is concerned only with the store itself -- its value and its address
385 both branches of an "if" statement store the same value to the same
411 from x could be executed after the store to y. Thus, the memory
479 a control dependency from the load to the store.
494 write. In colloquial terms, the load "reads from" the store. We
495 write W ->rf R to indicate that the load R reads from the store W. We
496 further distinguish the cases where the load and the store occur on
501 though it had been written there by an imaginary initial store that
505 read from a single store. It doesn't apply properly in the presence
506 of load-tearing, where a load obtains some of its bits from one store
507 and some of them from another store. Fortunately, use of READ_ONCE()
568 another. The imaginary store which establishes x's initial value
569 comes first in the coherence order; the store which directly
570 overwrites the initial value comes second; the store which overwrites
587 Write-read coherence: If W ->po-loc R, where W is a store and R
588 is a load, then R must read from W or from some other store
592 is a store, then the store which R reads from must come before
596 loads, then either they read from the same store or else the
597 store read by R comes before the store read by R' in the
605 requirement that every store eventually becomes visible to every CPU.)
622 write-write coherence rule: Since the store of 23 comes later in
624 thus must overwrite the store of 17.
637 rule: The READ_ONCE() load comes before the WRITE_ONCE() store in
638 program order, so it must not read from that store but rather from one
657 If r1 = 5 (reading from P0's store) and r2 = 0 (reading from the
658 imaginary store which establishes x's initial value) at the end, this
660 the r2 load in program order, so it must not read from a store that
671 possible for a store to directly or indirectly overwrite itself! And
687 overwritten by a store. In other words, we have R ->fr W when the
689 equivalently, when R reads from a store which comes earlier than W in
711 the load and the store are on the same CPU) and fre (when they are on
736 When CPU C executes a store instruction, it tells the memory subsystem
737 to store a certain value at a certain location. The memory subsystem
738 propagates the store to all the other CPUs as well as to RAM. (As a
739 special case, we say that the store propagates to its own CPU at the
741 store falls in the location's coherence order. In particular, it must
742 arrange for the store to be co-later than (i.e., to overwrite) any
743 other store to the same location which has already propagated to CPU C.
746 whether there are any as-yet unexecuted store instructions, for the
748 uses the value of the po-latest such store as the value obtained by R,
749 and we say that the store's value is forwarded to R. Otherwise, the
752 of the co-latest store to the location in question which has already
756 CPUs have local caches, and propagating a store to a CPU really means
758 time to process the stores that it receives, and a store can't be used
796 execute all po-earlier instructions before the store
797 associated with the fence (e.g., the store part of an
806 For each other CPU C', any store which propagates to C before
809 store associated with the release fence does.
811 Any store which propagates to C before a strong fence is
894 each load between the store that it reads from and the following
895 store. This leaves the relative positions of loads that read from the
896 same store unspecified; let's say they are inserted in program order,
937 occurs in between CPU 1's load and store. To put it another way, the
938 problem is that the position of CPU 0's store in x's coherence order
939 is between the store that CPU 1 reads from and the store that CPU 1
991 store; either a data, address, or control dependency from a load R to
992 a store W will force the CPU to execute R before W. This is very
994 store before it knows what value should be stored (in the case of a
996 of an address dependency), or whether the store should actually take
1019 store and a second, po-later load reads from that store:
1025 W, because it can forward the value that W will store to R'. But it
1033 (In theory, a CPU might forward a store to a load when it runs across
1040 because it could tell that the store and the second load access the
1046 program order if the second access is a store. Thus, if we have
1053 read request with the value stored by W (or an even later store), in
1099 smp_wmb() forces P0's store to x to propagate to P1 before the store
1107 that the first store is processed by a busy part of the cache while
1108 the second store is processed by an idle part. As a result, the x = 1
1138 its second load, the x = 1 store would already be fully processed by
1166 that W's store must have propagated to R's CPU before R executed;
1180 execute before W, because the decision as to which store overwrites
1192 on CPU C in situations where a store from some other CPU comes after
1216 had executed before its store then the value of the store would have
1219 event, because P1's store came after P0's store in x's coherence
1220 order, and P1's store propagated to P0 before P0's load executed.
1242 then the x = 9 store must have been propagated to P0 before the first
1245 because P1's store overwrote the value read by P0's first load, and
1246 P1's store propagated to P0 before P0's second load executed.
1274 overwritten by P0's store to buf, the fence guarantees that the store
1275 to buf will propagate to P1 before the store to flag does, and the
1276 store to flag propagates to P1 before P1 reads flag.
1280 from flag were executed first, then the buf = 1 store would already
1329 link from P0's store to its load. This is because P0's store gets
1330 overwritten by P1's store since x = 2 at the end (a coe link), the
1331 smp_wmb() ensures that P1's store to x propagates to P2 before the
1332 store to y does (the first cumul-fence), the store to y propagates to P2
1333 before P2's load and store execute, P2's smp_store_release()
1335 store to z does (the second cumul-fence), and P0's load executes after the
1336 store to z has propagated to P0 (an rfe link).
1353 store is coherence-later than E and propagates to every CPU and to RAM
1359 Consider first the case where E is a store (implying that the sequence
1380 have propagated to E's CPU before E executed. If E was a store, the
1384 request with the value stored by W or an even later store,
1411 load: an fre link from P0's load to P1's store (which overwrites the
1412 value read by P0), and a strong fence between P1's store and its load.
1447 (1) C ends before G does, and in addition, every store that
1451 (2) G starts before C does, and in addition, every store that
1481 means that P0's store to x propagated to P1 before P1 called
1484 other hand, r2 = 0 means that P0's store to y, which occurs before the
1534 this, because it also includes cases where some store propagates to
1609 store propagates to the critical section's CPU before the end of the
1623 Let W be the store mentioned above, let Y come before the end of the
1671 If r2 = 0 at the end then P0's store at Y overwrites the value that
1676 If r1 = 1 at the end then P1's load at Z reads from P0's store at X,
1789 store-release in a spin_unlock() and the load-acquire which forms the
1889 the spin_unlock() in P0. Hence the store to x must propagate to P2
1890 before the store to y does, so we cannot have r2 = 1 and r3 = 0.
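
The matches above that mention the write-write, write-read, and read-read coherence rules describe per-location guarantees. A minimal litmus-style sketch of the read-read case follows; the names (x, P0, P1, r1, r2) are illustrative rather than quoted from the listing:

	int x = 0;

	P0()
	{
		WRITE_ONCE(x, 5);
	}

	P1()
	{
		int r1, r2;

		r1 = READ_ONCE(x);
		r2 = READ_ONCE(x);
	}

	/*
	 * Read-read coherence: the r2 load comes after the r1 load in
	 * program order, so it may not read from a store that is
	 * coherence-earlier than the store r1 read from.  Hence
	 * r1 = 5 && r2 = 0 is forbidden.
	 */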
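The Store Buffer (SB) pattern named in the matches can be sketched the same way. This is a generic illustration with made-up variable names, not an excerpt from the searched document:

	int x = 0, y = 0;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		smp_mb();
		r0 = READ_ONCE(y);
	}

	P1()
	{
		int r1;

		WRITE_ONCE(y, 1);
		smp_mb();
		r1 = READ_ONCE(x);
	}

	/*
	 * With both smp_mb() fences, the outcome r0 = 0 && r1 = 0 is
	 * forbidden.  Without them, TSO machines (x86, SPARC) allow it,
	 * because each store can sit in its CPU's store buffer while
	 * the other CPU's load executes.
	 */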
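The buf/flag matches refer to the message-passing (MP) pattern with write and read fences. A sketch along those lines, again as an illustration rather than a quotation:

	int buf = 0, flag = 0;

	P0()
	{
		WRITE_ONCE(buf, 1);
		smp_wmb();
		WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1, r2;

		r1 = READ_ONCE(flag);
		smp_rmb();
		r2 = READ_ONCE(buf);
	}

	/*
	 * smp_wmb() forces the store to buf to propagate to P1 before
	 * the store to flag does, and smp_rmb() forces P1's load of buf
	 * to execute after its load of flag.  Hence r1 = 1 && r2 = 0 is
	 * forbidden.
	 */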
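Finally, the matches about the store-release in spin_unlock() and the conclusion that "the store to x must propagate to P2 before the store to y does" concern a critical section on one CPU being observed by a third CPU. A hedged sketch of that situation, assuming P1's critical section runs after P0's (r1 = 1):

	int x = 0, y = 0;
	spinlock_t s;

	P0()
	{
		spin_lock(&s);
		WRITE_ONCE(x, 1);
		spin_unlock(&s);
	}

	P1()
	{
		int r1;

		spin_lock(&s);
		r1 = READ_ONCE(x);
		WRITE_ONCE(y, 1);
		spin_unlock(&s);
	}

	P2()
	{
		int r2, r3;

		r2 = READ_ONCE(y);
		smp_rmb();
		r3 = READ_ONCE(x);
	}

	/*
	 * If r1 = 1 then P1's spin_lock() read from P0's spin_unlock(),
	 * so P1's critical section came after P0's.  The unlock-lock
	 * sequence then guarantees that the store to x propagates to P2
	 * before the store to y does, so r1 = 1 && r2 = 1 && r3 = 0 is
	 * forbidden.
	 */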