Lines Matching full:store

85 store instruction accessing the same location (we ignore complicating
168 store to buf but before the store to flag. In this case, r1 and r2
190 store to the same memory location, from any CPU.
196 Since r1 = 1, P0 must store 1 to flag before P1 loads 1 from
204 store to the same address.
209 Since an instruction (in this case, P0's store to flag) cannot
218 x86 and SPARC follow yet a different memory model: TSO (Total Store
221 Consistency. One example is the Store Buffer (SB) pattern, in which
318 is concerned only with the store itself -- its value and its address
385 both branches of an "if" statement store the same value to the same
411 from x could be executed after the store to y. Thus, the memory
479 a control dependency from the load to the store.
496 There appears to be a data dependency from the load of x to the store
545 write. In colloquial terms, the load "reads from" the store. We
546 write W ->rf R to indicate that the load R reads from the store W. We
547 further distinguish the cases where the load and the store occur on
552 though it had been written there by an imaginary initial store that
556 read from a single store. It doesn't apply properly in the presence
557 of load-tearing, where a load obtains some of its bits from one store
558 and some of them from another store. Fortunately, use of READ_ONCE()
619 another. The imaginary store which establishes x's initial value
620 comes first in the coherence order; the store which directly
621 overwrites the initial value comes second; the store which overwrites
638 Write-read coherence: If W ->po-loc R, where W is a store and R
639 is a load, then R must read from W or from some other store
643 is a store, then the store which R reads from must come before
647 loads, then either they read from the same store or else the
648 store read by R comes before the store read by R' in the
656 requirement that every store eventually becomes visible to every CPU.)
673 write-write coherence rule: Since the store of 23 comes later in
675 thus must overwrite the store of 17.
688 rule: The READ_ONCE() load comes before the WRITE_ONCE() store in
689 program order, so it must not read from that store but rather from one
708 If r1 = 5 (reading from P0's store) and r2 = 0 (reading from the
709 imaginary store which establishes x's initial value) at the end, this
711 the r2 load in program order, so it must not read from a store that
722 possible for a store to directly or indirectly overwrite itself! And
738 overwritten by a store. In other words, we have R ->fr W when the
740 equivalently, when R reads from a store which comes earlier than W in
762 the load and the store are on the same CPU) and fre (when they are on
787 When CPU C executes a store instruction, it tells the memory subsystem
788 to store a certain value at a certain location. The memory subsystem
789 propagates the store to all the other CPUs as well as to RAM. (As a
790 special case, we say that the store propagates to its own CPU at the
792 store falls in the location's coherence order. In particular, it must
793 arrange for the store to be co-later than (i.e., to overwrite) any
794 other store to the same location which has already propagated to CPU C.
797 whether there are any as-yet unexecuted store instructions, for the
799 uses the value of the po-latest such store as the value obtained by R,
800 and we say that the store's value is forwarded to R. Otherwise, the
803 of the co-latest store to the location in question which has already
807 CPUs have local caches, and propagating a store to a CPU really means
809 time to process the stores that it receives, and a store can't be used
847 execute all po-earlier instructions before the store
848 associated with the fence (e.g., the store part of an
857 For each other CPU C', any store which propagates to C before
860 store associated with the release fence does.
862 Any store which propagates to C before a strong fence is
949 each load between the store that it reads from and the following
950 store. This leaves the relative positions of loads that read from the
951 same store unspecified; let's say they are inserted in program order,
992 occurs in between CPU 1's load and store. To put it another way, the
993 problem is that the position of CPU 0's store in x's coherence order
994 is between the store that CPU 1 reads from and the store that CPU 1
1046 store; either a data, address, or control dependency from a load R to
1047 a store W will force the CPU to execute R before W. This is very
1049 store before it knows what value should be stored (in the case of a
1051 of an address dependency), or whether the store should actually take
1074 store and a second, po-later load reads from that store:
1080 W, because it can forward the value that W will store to R'. But it
1088 (In theory, a CPU might forward a store to a load when it runs across
1095 because it could tell that the store and the second load access the
1101 program order if the second access is a store. Thus, if we have
1108 read request with the value stored by W (or an even later store), in
1154 smp_wmb() forces P0's store to x to propagate to P1 before the store
1162 that the first store is processed by a busy part of the cache while
1163 the second store is processed by an idle part. As a result, the x = 1
1191 its second load, the x = 1 store would already be fully processed by
1219 that W's store must have propagated to R's CPU before R executed;
1233 execute before W, because the decision as to which store overwrites
1245 on CPU C in situations where a store from some other CPU comes after
1269 had executed before its store then the value of the store would have
1272 event, because P1's store came after P0's store in x's coherence
1273 order, and P1's store propagated to P0 before P0's load executed.
1295 then the x = 9 store must have been propagated to P0 before the first
1298 because P1's store overwrote the value read by P0's first load, and
1299 P1's store propagated to P0 before P0's second load executed.
1327 overwritten by P0's store to buf, the fence guarantees that the store
1328 to buf will propagate to P1 before the store to flag does, and the
1329 store to flag propagates to P1 before P1 reads flag.
1333 from flag were executed first, then the buf = 1 store would already
1382 link from P0's store to its load. This is because P0's store gets
1383 overwritten by P1's store since x = 2 at the end (a coe link), the
1384 smp_wmb() ensures that P1's store to x propagates to P2 before the
1385 store to y does (the first cumul-fence), the store to y propagates to P2
1386 before P2's load and store execute, P2's smp_store_release()
1388 store to z does (the second cumul-fence), and P0's load executes after the
1389 store to z has propagated to P0 (an rfe link).
1406 store is coherence-later than E and propagates to every CPU and to RAM
1412 Consider first the case where E is a store (implying that the sequence
1433 have propagated to E's CPU before E executed. If E was a store, the
1437 request with the value stored by W or an even later store,
1464 load: an fre link from P0's load to P1's store (which overwrites the
1465 value read by P0), and a strong fence between P1's store and its load.
1500 (1) C ends before G does, and in addition, every store that
1504 (2) G starts before C does, and in addition, every store that
1534 means that P0's store to x propagated to P1 before P1 called
1537 other hand, r2 = 0 means that P0's store to y, which occurs before the
1587 this, because it also includes cases where some store propagates to
1655 executes before Y, but also (if X is a store) that X propagates to
1681 store propagates to the critical section's CPU before the end of the
1695 Let W be the store mentioned above, let Y come before the end of the
1743 If r2 = 0 at the end then P0's store at Y overwrites the value that
1748 If r1 = 1 at the end then P1's load at Z reads from P0's store at X,
1861 store-release in a spin_unlock() and the load-acquire which forms the
1962 the spin_unlock() in P0. Hence the store to x must propagate to P2
1963 before the store to y does, so we cannot have r2 = 1 and r3 = 0. But
2015 P1's store to x propagates to P0 before P0's load from x executes.
2031 NULL pointer, because P1's store to x might propagate to P0 after the
2049 2. at least one of them is a store,
2082 If X is a load and X executes before a store Y, then indeed there is
2087 store, then even if X executes before Y it is still possible that X
2092 Therefore when X is a store, for X and Y to be non-concurrent the LKMM
2096 executes if Y is a store.) This is expressed by the visibility
2155 means that the store to buf must propagate from P0 to P1 before Z
2244 Consequently U's store to buf, no matter how it is carried out
2245 at the machine level, must propagate to P1 before X's store to
2260 Thus U's store to buf is forced to propagate to P1 before V's load
2275 "w-pre-bounded" by Y, depending on whether E was a store or a load.
2283 issue. When the source code contains a plain store, the compiler is
2295 thereby adding a load (and possibly replacing the store entirely).
2296 For this reason, whenever the LKMM requires a plain store to be
2298 the store to be r-pre-bounded or r-post-bounded, so as to handle cases
2302 compiler has augmented a store with a load in this fashion, and the
2307 adding in a store to the same location -- is not allowed. This is
2310 constitute a race (they can't interfere with each other), but a store
2311 does race with a concurrent load. Thus adding a store might create a
2313 something the compiler is forbidden to do. Augmenting a store with a
2357 rcu_assign_pointer() performs a store-release, so the plain store to b
2358 is definitely w-post-bounded before the store to ptr, and the two
2419 Guarantee says that otherwise P0's store to x would have propagated to
2425 This means there is an rcu-fence link from P1's "y = 2" store to P0's
2426 "y = 3" store, and consequently the first must propagate from P1 to P0
2428 concurrent and there is no race, even though P1's plain store to y
2445 sequence. For race-candidate load R and store W, the LKMM says the
2474 is, the rules governing the memory subsystem's choice of a store to
2475 satisfy a load request and its determination of where a store will
2480 not allowed (i.e., a load cannot read from a store that it
2485 not allowed (i.e., if a store is visible to a load then the
2486 load must read from that store or one coherence-after it).
2490 is not allowed (i.e., if one store is visible to a second then
2548 an address dependency from a marked load R to a plain store W,
2549 followed by smp_wmb() and then a marked store W', the LKMM creates a