Lines matching "store" (Documentation/memory-barriers.txt, Linux kernel)

159 STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
160 STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
161 STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
162 STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
163 STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
164 STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
165 STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
166 STORE B=4, ...
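The run above (159-166, truncated at 166) enumerates interleavings of the source document's two-CPU example. Reconstructed from the loaded values shown (y=LOAD A->3, x=LOAD B->2 or ->4), the generating fragment is:

        CPU 1           CPU 2
        =============== ===============
        { A == 1; B == 2 }
        A = 3;          x = B;
        B = 4;          y = A;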
217 STORE *A = 5, x = LOAD *D
218 x = LOAD *D, STORE *A = 5
254 a = LOAD *X, STORE *X = b
262 STORE *X = c, d = LOAD *X
282 X = LOAD *A, Y = LOAD *B, STORE *D = Z
283 X = LOAD *A, STORE *D = Z, Y = LOAD *B
284 Y = LOAD *B, X = LOAD *A, STORE *D = Z
285 Y = LOAD *B, STORE *D = Z, X = LOAD *A
286 STORE *D = Z, X = LOAD *A, Y = LOAD *B
287 STORE *D = Z, Y = LOAD *B, X = LOAD *A
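The six orderings at 282-287 are the possible outcomes of a three-access sequence issued with no barriers. The generating fragment, reconstructed from the surrounding text of the source document, is:

        X = *A; Y = *B; *D = Z;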
306 STORE *A = X; STORE *(A + 4) = Y;
307 STORE *(A + 4) = Y; STORE *A = X;
308 STORE {*A, *(A + 4) } = {X, Y};
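The matches at 306-308 show two adjacent stores being committed in either order or merged into one combined access. A minimal C rendering of the pattern (the struct and function names are illustrative, not from the source):

        struct pair { int x; int y; };  /* x at *A, y at *(A + 4) */
        struct pair *A;

        void writer(int X, int Y)
        {
                A->x = X;       /* STORE *A = X       */
                A->y = Y;       /* STORE *(A + 4) = Y */
                /* Absent barriers, the CPU may commit these two stores
                 * in either order, or merge them into a single
                 * { X, Y } store, as the three matches above show. */
        }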
380 (1) Write (or store) memory barriers.
382 A write memory barrier gives a guarantee that all the STORE operations
383 specified before the barrier will appear to happen before all the STORE
390 A CPU can be viewed as committing a sequence of store operations to the
453 A general memory barrier gives a guarantee that all the LOAD and STORE
455 the LOAD and STORE operations specified after the barrier with respect to
507 store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
508 only to the store portion of the operation.
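The fragments at 453-508 describe general barriers and the split ACQUIRE/RELEASE semantics of atomic read-modify-write operations. A minimal sketch of an ACQUIRE/RELEASE pairing using the kernel's smp_store_release()/smp_load_acquire(); the 'payload' and 'ready' names are illustrative:

        int payload;
        int ready;

        void producer(void)
        {
                payload = 42;                   /* plain store...           */
                smp_store_release(&ready, 1);   /* ...ordered before the    */
                                                /* RELEASE store to 'ready' */
        }

        void consumer(void)
        {
                if (smp_load_acquire(&ready))   /* ACQUIRE load of 'ready'  */
                        BUG_ON(payload != 42);  /* later loads see payload  */
        }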
636 Q with the store into *Q. In other words, this outcome is prohibited,
692 for load-store control dependencies, as in the following example:
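The example referred to at 692 contains no line matching "store" and so is missing above; reconstructed from the source document, the load-store control dependency is:

        q = READ_ONCE(a);
        if (q)
                WRITE_ONCE(b, 1);

Without WRITE_ONCE(), as the following matches note, the compiler may combine or hoist the store to 'b' and thereby break the ordering.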
703 the compiler might combine the store to 'b' with other stores to 'b'.
744 Now there is no conditional between the load from 'a' and the store to
797 between the load from variable 'a' and the store to variable 'b'. It is
804 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
814 identical, as noted earlier, the compiler could pull this store outside
863 from 'a' and the store to 'c'. The control dependencies would extend
864 only to the pair of cmov instructions and the store depending on them.
894 between the prior load and the subsequent store, and this
914 need all the CPUs to see a given store at the same time, use smp_mb().
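The fragments at 744-914 walk through ways a control dependency can be defeated (identical stores hoisted out of the 'if', conversion to cmov instructions) and what still holds (a prior load is ordered before a subsequent dependent store, per 894). Where both legs must store the same value, the source document's remedy is an explicit barrier such as smp_store_release(); a reconstruction:

        q = READ_ONCE(a);
        if (q) {
                smp_store_release(&b, 1);
                do_something();
        } else {
                smp_store_release(&b, 1);
                do_something_else();
        }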
986 Firstly, write barriers act as partial orderings on store operations.
991 STORE A = 1
992 STORE B = 2
993 STORE C = 3
995 STORE D = 4
996 STORE E = 5
999 that the rest of the system might perceive as the unordered set of { STORE A,
1000 STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
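In the source document a write barrier sits between STORE C and STORE D; the barrier line contains no "store" and so does not appear above. The full sequence (reconstructed), which yields the { A, B, C } before { D, E } partial ordering just described, is:

        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5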
1030 STORE A = 1
1031 STORE B = 2
1033 STORE C = &B          LOAD X
1034 STORE D = 4           LOAD C (gets &B)
1076 STORE A = 1
1077 STORE B = 2
1079 STORE C = &B          LOAD X
1080 STORE D = 4           LOAD C (gets &B)
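The pairs at 1030-1034 and 1076-1080 come from the address-dependency example: CPU 1 publishes &B through C, and CPU 2 loads C and then dereferences it. A sketch of the corrected version in kernel C (names as in the matches; the barrier placement is the point, not an exact quote):

        int A, B;
        int *C;

        void cpu1(void)
        {
                A = 1;
                B = 2;
                smp_wmb();              /* order the stores above... */
                WRITE_ONCE(C, &B);      /* ...before publishing &B   */
        }

        void cpu2(void)
        {
                int *p = READ_ONCE(C);  /* address dependency carries */
                if (p == &B)            /* to the dereference below   */
                        BUG_ON(*p != 2);
        }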
1105 prior to the store of C    (annotation from an ASCII sequence diagram; drawing elided)
1117 STORE A=1
1119 STORE B=2
1153 STORE A=1
1155 STORE B=2
1189 STORE A=1
1191 STORE B=2
1355 not always provided by real computer systems, namely that a given store
1360 instead guarantees only that a given store becomes visible at the same
1369 STORE X=1          r1=LOAD X (reads 1)     LOAD Y (reads 1)
1371                    STORE Y=r1              LOAD X
1374 and CPU 3's load from Y returns 1. This indicates that CPU 1's store
1375 to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
1377 CPU 2 executes its load before its store, and CPU 3 loads from Y before
1384 CPU A did not originally store the value which it read), then on
1402 STORE X=1          r1=LOAD X (reads 1)     LOAD Y (reads 1)
1404                    STORE Y=r1              LOAD X (reads 0)
1411 and store, it does not guarantee to order CPU 1's store. Thus, if this
1413 store buffer or a level of cache, CPU 2 might have early access to CPU 1's
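The two litmus tests at 1369-1371 and 1402-1404 differ only in CPU 2's middle step. Rendered as kernel C with illustrative function names: with smp_mb() in cpu2(), the source document's argument is that CPU 3 reading Y == 1 forces its later read of X to return 1; weaken the smp_mb() to a mere dependency and X may still read 0, as 1402-1413 describe.

        int X, Y;

        void cpu1(void)
        {
                WRITE_ONCE(X, 1);
        }

        void cpu2(void)
        {
                int r1 = READ_ONCE(X);  /* suppose this reads 1 */
                smp_mb();               /* the general barrier  */
                WRITE_ONCE(Y, r1);
        }

        void cpu3(void)
        {
                int r2 = READ_ONCE(Y);  /* suppose this reads 1    */
                smp_rmb();              /* the read barrier        */
                int r3 = READ_ONCE(X);  /* then this must return 1 */
        }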
1483 store to u as happening -after- cpu1()'s load from v, even though
1641 (*) Similarly, the compiler is within its rights to omit a store entirely
1649 ... Code that does not store to variable a ...
1653 it might well omit the second store. This would come as a fatal
1661 ... Code that does not store to variable a ...
1761 and "store tearing," in which a single large access is replaced by
1763 16-bit store instructions with 7-bit immediate fields, the compiler
1764 might be tempted to use two 16-bit store-immediate instructions to
1765 implement the following 32-bit store:
1771 than two instructions to build the constant and then store it.
1774 this optimization in a volatile store. In the absence of such bugs,
1775 use of WRITE_ONCE() prevents store tearing in the following example:
1779 Use of packed structures can also result in load and store tearing,
1798 load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE()
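The fragments at 1641-1798 come from the compile-time optimization discussion; 1761-1775 describe store tearing, and 1774-1775 note that WRITE_ONCE() prevents it. The source document's own case is a 32-bit constant store that a compiler with 16-bit store-immediate instructions might split in two; reconstructed, with 'p' the 32-bit variable from that passage:

        /* May be torn into two 16-bit store-immediate instructions: */
        p = 0x00010002;

        /* Forbids such tearing (and omission, fusing, etc.): */
        WRITE_ONCE(p, 0x00010002);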
2028 ACQUIRE M, STORE *B, STORE *A, RELEASE M
2048 ACQUIRE N, STORE *B, STORE *A, RELEASE M
2146 STORE current->state
2182 is accessed, in particular, it sits between the STORE to indicate the event
2183 and the STORE to set TASK_RUNNING:
2187 set_current_state();           STORE event_indicated
2189 STORE current->state ...
2192 STORE task->state
2379 STORE waiter->task;
2402 STORE waiter->task;
2421 STORE waiter->task;
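The matches at 2146-2421 are from the sleep/wake-up ordering discussion: the barrier implied by set_current_state() sits after the STORE of the task state, and the barrier in wake_up() sits between the STORE that indicates the event and the STORE that sets TASK_RUNNING (2182-2183). The canonical pattern, reconstructed with an illustrative event_indicated flag and sleeping_task pointer:

        /* sleeper */
        for (;;) {
                set_current_state(TASK_INTERRUPTIBLE); /* STORE current->state */
                if (event_indicated)                   /* load ordered after   */
                        break;
                schedule();
        }
        __set_current_state(TASK_RUNNING);

        /* waker */
        event_indicated = 1;            /* STORE event_indicated            */
        wake_up_process(sleeping_task); /* barrier, then STORE task->state  */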
2498 The store to the data register might happen after the second store to the
2501 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
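The matches at 2498-2501 are from the interrupt-versus-driver I/O discussion: a driver writes an address register and then a data register, an interrupt handler then does the same on the same device, and with sufficiently relaxed I/O ordering the device may see the interleaving shown at 2501. A sketch, assuming illustrative __iomem pointers ADDR and DATA for the two registers:

        void __iomem *ADDR, *DATA;

        void driver_path(u32 y)
        {
                local_irq_disable();
                writel(3, ADDR);        /* select register 3 */
                writel(y, DATA);        /* write its value   */
                local_irq_enable();
        }

        irqreturn_t handler(int irq, void *dev)
        {
                u32 q;

                writel(4, ADDR);        /* select register 4 */
                q = readl(DATA);        /* read its value    */
                return IRQ_HANDLED;
        }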
2699 Although any particular load or store may not actually appear outside of the
2707 generate load and store operations which then go into the queue of memory
2779 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2804 mechanisms may alleviate this - once the store has actually hit the cache
2811 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2838 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
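The final match, 2838, is the full access sequence a CPU might generate for the self-consistency example in that part of the source document; reconstructed, the generating code is:

        U = *A;
        *A = V;
        *A = W;
        X = *A;
        *A = Y;
        Z = *A;

Without intervention the CPU may combine or discard elements of that sequence (for instance, the stores of V and W may collapse into the store of W alone), while still guaranteeing the single-CPU view: U holds the original value of *A, X == W, Z == Y, and *A == Y at the end.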