Lines Matching refs:stores
100 device, stores it in a buffer, and sets a flag to indicate the buffer
132 Thus, P0 stores the data in buf and then sets flag. Meanwhile, P1
138 This pattern of memory accesses, where one CPU stores values to two
195 it, as loads can obtain values only from earlier stores.
200 P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
204 P0 stores 1 to buf before storing 1 to flag, since it executes
220 each CPU stores to its own shared location and then loads from the
268 W: P0 stores 1 to flag executes before
271 Z: P0 stores 1 to buf executes before
272 W: P0 stores 1 to flag.
294 Write events correspond to stores to shared memory, such as
402 executed before either of the stores to y. However, a compiler could
403 lift the stores out of the conditional, transforming the code into
553 from both of P0's stores. It is possible to handle mixed-size and
565 shared memory, the stores to that location must form a single global
571 the stores to x is simply the order in which the stores overwrite one
578 stores reach x's location in memory (or if you prefer a more
579 hardware-centric view, the order in which the stores get written to
589 and W' are two stores, then W ->co W'.
676 just like with the rf relation, we distinguish between stores that
677 occur on the same CPU (internal coherence order, or coi) and stores
680 On the other hand, stores to different memory locations are never
711 stores to x, there would also be fr links from the READ_ONCE() to
737 only internal operations. However, loads, stores, and fences involve
762 time to process the stores that it receives, and a store can't be used
764 most architectures, the local caches process stores in
791 smp_wmb() forces the CPU to execute all po-earlier stores
792 before any po-later stores;
805 propagates stores. When a fence instruction is executed on CPU C:
807 For each other CPU C', smp_wmb() forces all po-earlier stores
808 on C to propagate to C' before any po-later stores do.
812 stores executed on C) is forced to propagate to C' before the
816 executed (including all po-earlier stores on C) is forced to
821 affects stores from other CPUs that propagate to CPU C before the
822 fence is executed, as well as stores that are executed on C before the
825 A-cumulative; they only affect the propagation of stores that are
841 E and F are both stores on the same CPU and an smp_wmb() fence
850 The operational model requires that whenever W and W' are both stores
871 operations really are atomic, that is, no other stores can
877 Propagation: This requires that certain stores propagate to
895 According to the principle of cache coherence, the stores to any fixed
935 CPU 0 stores 14 to x;
936 CPU 1 stores 14 to x;
950 there must not be any stores coming between W' and W in the coherence
976 X and Y are both stores and an smp_wmb() fence occurs between
1132 stores do reach P1's local cache in the proper order, it can happen
1141 incoming stores in FIFO order. By contrast, other architectures
1151 processing all the stores it has already received. Thus, if the code
1173 case of smp_rmb()) until all outstanding stores have been processed by
1175 wait for all of its po-earlier stores to propagate to every other CPU
1177 the stores received as of that time -- not just the stores received
1203 W ->coe W'. This means that W and W' are stores to the same location,
1207 the other is made later by the memory subsystem. When the stores are
1249 read from different stores:
1297 stores. If r1 = 1 and r2 = 0 at the end then there is a prop link
1360 guarantees that the stores to x and y both propagate to P0 before the
1512 In the kernel's implementations of RCU, the requirements for stores
1761 This requires P0 and P2 to execute their loads and stores out of
1896 will self-deadlock in the executions where it stores 36 in y.
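Many of the matches above (the buf/flag lines, the smp_wmb()/smp_rmb() propagation lines, and the r1/r2 outcome lines) refer to the message-passing pattern. As a sketch only, here is that pattern written in the herd7 litmus-test format used by the memory-model tooling; the test name is made up for illustration, but WRITE_ONCE(), READ_ONCE(), smp_wmb(), and smp_rmb() are the kernel primitives the matched lines mention:

```c
C MP+wmb+rmb-sketch

{}

P0(int *buf, int *flag)
{
	/* Store the data, then set the flag; smp_wmb() orders the stores. */
	WRITE_ONCE(*buf, 1);
	smp_wmb();
	WRITE_ONCE(*flag, 1);
}

P1(int *buf, int *flag)
{
	int r1;
	int r2;

	/* Load the flag, then the data; smp_rmb() orders the loads. */
	r1 = READ_ONCE(*flag);
	smp_rmb();
	r2 = READ_ONCE(*buf);
}

exists (1:r1=1 /\ 1:r2=0)
```

With both fences present, the "exists" outcome (P1 sees the flag set but loads stale data from buf) is forbidden: smp_wmb() forces P0's store to buf to propagate before its store to flag, and smp_rmb() forces P1 to process all received stores before its second load.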