Lines Matching full:store
161 STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
162 STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
163 STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
164 STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
165 STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
166 STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
167 STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
168 STORE B=4, ...
219 STORE *A = 5, x = LOAD *D
220 x = LOAD *D, STORE *A = 5
256 a = LOAD *X, STORE *X = b
264 STORE *X = c, d = LOAD *X
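These two matched lines (file lines 256 and 264) state the same-CPU guarantee for overlapping accesses to one location; a minimal C rendering, using the excerpt's own variable names plus the kernel's READ_ONCE()/WRITE_ONCE():

        a = READ_ONCE(*X);
        WRITE_ONCE(*X, b);      /* CPU must emit: a = LOAD *X, STORE *X = b */

        WRITE_ONCE(*X, c);
        d = READ_ONCE(*X);      /* CPU must emit: STORE *X = c, d = LOAD *X */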
284 X = LOAD *A, Y = LOAD *B, STORE *D = Z
285 X = LOAD *A, STORE *D = Z, Y = LOAD *B
286 Y = LOAD *B, X = LOAD *A, STORE *D = Z
287 Y = LOAD *B, STORE *D = Z, X = LOAD *A
288 STORE *D = Z, X = LOAD *A, Y = LOAD *B
289 STORE *D = Z, Y = LOAD *B, X = LOAD *A
308 STORE *A = X; STORE *(A + 4) = Y;
309 STORE *(A + 4) = Y; STORE *A = X;
310 STORE {*A, *(A + 4)} = {X, Y};
382 (1) Write (or store) memory barriers.
384 A write memory barrier gives a guarantee that all the STORE operations
385 specified before the barrier will appear to happen before all the STORE
392 A CPU can be viewed as committing a sequence of store operations to the
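For context, the write-barrier guarantee these matched lines describe can be sketched with smp_wmb() and two variables ('data' and 'ready') invented for illustration:

        WRITE_ONCE(data, 42);   /* STORE specified before the barrier */
        smp_wmb();              /* write memory barrier */
        WRITE_ONCE(ready, 1);   /* STORE specified after the barrier */

An observer that uses a matching read barrier and sees ready == 1 is then guaranteed to see data == 42; a write barrier by itself orders only stores against stores and says nothing about loads.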
455 A general memory barrier gives a guarantee that all the LOAD and STORE
457 the LOAD and STORE operations specified after the barrier with respect to
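A hedged sketch of what the general barrier buys, using the classic store-buffering shape with hypothetical variables x and y (both initially zero):

        /* CPU 1 */                     /* CPU 2 */
        WRITE_ONCE(x, 1);               WRITE_ONCE(y, 1);
        smp_mb();                       smp_mb();
        r1 = READ_ONCE(y);              r2 = READ_ONCE(x);

With both smp_mb() calls in place, the outcome r1 == 0 && r2 == 0 is forbidden, because a general barrier orders both the LOADs and the STOREs on each side of it; a write barrier or read barrier alone would not exclude it.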
510 store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
511 only to the store portion of the operation.
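A small illustration of that split on a compound atomic, assuming the kernel's atomic_xchg_acquire()/atomic_xchg_release() variants:

        atomic_t v = ATOMIC_INIT(0);
        int old;

        old = atomic_xchg_acquire(&v, 1);  /* ACQUIRE attaches to the LOAD half */
        old = atomic_xchg_release(&v, 1);  /* RELEASE attaches to the STORE half */

So later accesses are ordered only against the load of the _acquire form (not its store), and earlier accesses only against the store of the _release form (not its load).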
639 Q with the store into *Q. In other words, this outcome is prohibited,
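The fragment at 639 refers to a load whose value supplies the address of a later store; a minimal sketch in the documentation's own variable names:

        Q = READ_ONCE(P);
        WRITE_ONCE(*Q, 5);      /* the STORE's target address comes from the
                                 * LOAD, so the CPU orders the pair without
                                 * any explicit barrier */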
695 for load-store control dependencies, as in the following example:
706 the compiler might combine the store to 'b' with other stores to 'b'.
747 Now there is no conditional between the load from 'a' and the store to
800 between the load from variable 'a' and the store to variable 'b'. It is
807 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
817 identical, as noted earlier, the compiler could pull this store outside
866 from 'a' and the store to 'c'. The control dependencies would extend
867 only to the pair of cmov instructions and the store depending on them.
897 between the prior load and the subsequent store, and this
917 need all the CPUs to see a given store at the same time, use smp_mb().
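Pulling the surrounding fragments together: the load-store control dependency these lines discuss looks like this in the documentation's own variables:

        q = READ_ONCE(a);
        if (q)
                WRITE_ONCE(b, 1);       /* the conditional orders the LOAD
                                         * from 'a' before the STORE to 'b' */

READ_ONCE() and WRITE_ONCE() matter here: without them the compiler may merge the store to 'b' with other stores to 'b' or hoist it out of the conditional, destroying the dependency. And as line 917 notes, a control dependency orders only a prior load against a later store; it does not make that store visible to all CPUs at the same time.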
989 Firstly, write barriers act as partial orderings on store operations.
994 STORE A = 1
995 STORE B = 2
996 STORE C = 3
998 STORE D = 4
999 STORE E = 5
1002 that the rest of the system might perceive as the unordered set of { STORE A,
1003 STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
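The barrier itself sits on an unmatched line between 996 and 998; restoring the shape of the example in C:

        WRITE_ONCE(A, 1);
        WRITE_ONCE(B, 2);
        WRITE_ONCE(C, 3);
        smp_wmb();              /* the partial ordering: {A,B,C} before {D,E} */
        WRITE_ONCE(D, 4);
        WRITE_ONCE(E, 5);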
1033 STORE A = 1
1034 STORE B = 2
1036 STORE C = &B LOAD X
1037 STORE D = 4 LOAD C (gets &B)
1079 STORE A = 1
1080 STORE B = 2
1082 STORE C = &B LOAD X
1083 STORE D = 4 LOAD C (gets &B)
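These matched pairs (1033-1037 and 1079-1083) are the CPU 1 / CPU 2 columns of the pointer-publication diagrams; the write barrier between the stores did not match the search. A compact sketch, assuming current kernels where READ_ONCE() supplies the consumer-side dependency ordering:

        /* CPU 1 */
        WRITE_ONCE(B, 2);
        smp_wmb();              /* order the store to B before the store to C */
        WRITE_ONCE(C, &B);

        /* CPU 2 */
        p = READ_ONCE(C);       /* may observe &B */
        r = *p;                 /* address dependency: if p == &B, then
                                 * r is guaranteed to be 2 */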
1108 prior to the store of C
1120 STORE A=1
1122 STORE B=2
1156 STORE A=1
1158 STORE B=2
1192 STORE A=1
1194 STORE B=2
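These three pairs are the CPU 1 column repeated across the read-barrier diagrams; the pairing they illustrate, in a hedged sketch:

        /* CPU 1 */                     /* CPU 2 */
        WRITE_ONCE(A, 1);               r1 = READ_ONCE(B);
        smp_wmb();                      smp_rmb();  /* pairs with smp_wmb() */
        WRITE_ONCE(B, 2);               r2 = READ_ONCE(A);

With the barrier pair in place, r1 == 2 implies r2 == 1; drop the smp_rmb() and CPU 2 may observe B == 2 while still seeing the stale A == 0.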
1358 not always provided by real computer systems, namely that a given store
1363 instead guarantees only that a given store becomes visible at the same
1372 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1374 STORE Y=r1 LOAD X
1377 and CPU 3's load from Y returns 1. This indicates that CPU 1's store
1378 to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
1380 CPU 2 executes its load before its store, and CPU 3 loads from Y before
1387 CPU A did not originally store the value which it read), then on
1405 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1407 STORE Y=r1 LOAD X (reads 0)
1414 and store, it does not guarantee to order CPU 1's store. Thus, if this
1416 store buffer or a level of cache, CPU 2 might have early access to CPU 1's
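The three-CPU fragments at 1372-1416 are the multicopy-atomicity litmus test; a sketch of the repaired version, in which CPU 2 uses a full barrier rather than relying on the bare dependency between its load and store:

        /* CPU 1 */
        WRITE_ONCE(X, 1);

        /* CPU 2 */
        r1 = READ_ONCE(X);      /* reads 1 */
        smp_mb();               /* a dependency alone does not order
                                 * CPU 1's store for other CPUs */
        WRITE_ONCE(Y, r1);

        /* CPU 3 */
        r2 = READ_ONCE(Y);      /* reads 1 */
        smp_rmb();
        r3 = READ_ONCE(X);

With CPU 2's smp_mb(), the outcome r1 == 1 && r2 == 1 && r3 == 0 is forbidden; without it, as the fragment at 1407 shows, CPU 3 can still read X == 0 out of CPU 1's store buffer or a shared level of cache.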
1486 store to u as happening -after- cpu1()'s load from v, even though
1646 (*) Similarly, the compiler is within its rights to omit a store entirely
1654 ... Code that does not store to variable a ...
1658 it might well omit the second store. This would come as a fatal
1666 ... Code that does not store to variable a ...
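The surrounding text (1646-1666) is the dead-store-elimination warning; the documentation's example shape, with 'a' possibly shared with an interrupt handler:

        a = 0;
        /* ... code that does not store to variable a ... */
        a = 0;                  /* compiler may delete this "redundant" store */

        WRITE_ONCE(a, 0);
        /* ... code that does not store to variable a ... */
        WRITE_ONCE(a, 0);       /* both stores are now emitted */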
1766 and "store tearing," in which a single large access is replaced by
1768 16-bit store instructions with 7-bit immediate fields, the compiler
1769 might be tempted to use two 16-bit store-immediate instructions to
1770 implement the following 32-bit store:
1776 than two instructions to build the constant and then store it.
1779 this optimization in a volatile store. In the absence of such bugs,
1780 use of WRITE_ONCE() prevents store tearing in the following example:
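The example the fragment at 1780 introduces is a 32-bit constant store that a compiler with 16-bit store-immediate instructions may split in two:

        p = 0x00010002;                 /* may be torn into two 16-bit stores */

        WRITE_ONCE(p, 0x00010002);      /* prevents store tearing */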
1784 Use of packed structures can also result in load and store tearing,
1803 load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE()
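The packed-structure case at 1784-1803, sketched; 'b' is misaligned, so the compiler may implement the middle copy with a pair of overlapping 32-bit accesses:

        struct __attribute__((__packed__)) foo {
                short a;
                int b;
                short c;
        };
        struct foo foo1, foo2;

        foo2.a = foo1.a;
        foo2.b = foo1.b;        /* load tearing on foo1.b and store
                                 * tearing on foo2.b */
        foo2.c = foo1.c;

Rewriting the middle assignment as WRITE_ONCE(foo2.b, READ_ONCE(foo1.b)) prevents the tearing.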
2015 ACQUIRE M, STORE *B, STORE *A, RELEASE M
2035 ACQUIRE N, STORE *B, STORE *A, RELEASE M
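Lines 2015 and 2035 show critical-section leakage; the code that produces the first of them, sketched with a spinlock 'm' standing in for M:

        WRITE_ONCE(*A, a);
        spin_lock(&m);          /* ACQUIRE: a one-way barrier; the earlier
                                 * store to *A may slip inside */
        spin_unlock(&m);        /* RELEASE: also one-way; the later store
                                 * to *B may slip inside */
        WRITE_ONCE(*B, b);

Other CPUs may therefore observe the sequence as: ACQUIRE M, STORE *B, STORE *A, RELEASE M.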
2133 STORE current->state
2169 is accessed, in particular, it sits between the STORE to indicate the event
2170 and the STORE to set TASK_RUNNING:
2174 set_current_state(); STORE event_indicated
2176 STORE current->state ...
2179 STORE task->state
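The fragments at 2133-2179 describe the sleep/wake handshake; the documentation's pattern, with 'event_indicated' from the excerpt and 'event_daemon' as the hypothetical task being woken:

        /* waiter */
        for (;;) {
                set_current_state(TASK_INTERRUPTIBLE); /* store to ->state,
                                                        * followed by a
                                                        * general barrier */
                if (event_indicated)
                        break;
                schedule();
        }

        /* waker */
        event_indicated = 1;
        wake_up_process(event_daemon);

set_current_state() embeds a barrier after the store to ->state so the waiter cannot load event_indicated before publishing its sleep; on the other side, per lines 2169-2176, wake_up() and friends imply a barrier between the event store and the store that sets TASK_RUNNING, but only if they actually wake something up.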
2366 STORE waiter->task;
2389 STORE waiter->task;
2408 STORE waiter->task;
2485 The store to the data register might happen after the second store to the
2488 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
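The device-register fragment at 2485-2488: with relaxed MMIO accessors, nothing orders the control-register stores against the data-register access. A hedged sketch using writel_relaxed()/readl_relaxed(), with the plain ordered accessors as the fix:

        writel_relaxed(3, ADDR);        /* select internal register 3 */
        writel_relaxed(4, ADDR);        /* select internal register 4 */
        writel_relaxed(y, DATA);        /* may reach the device before the
                                         * second ADDR store */
        q = readl_relaxed(DATA);

        writel(4, ADDR);                /* plain writel()/readl() are ordered
                                         * with respect to each other for
                                         * accesses to the same device */
        writel(y, DATA);
        q = readl(DATA);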
2686 Although any particular load or store may not actually appear outside of the
2694 generate load and store operations which then go into the queue of memory
2904 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2929 mechanisms may alleviate this - once the store has actually hit the cache
2936 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
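The streams at 2904 and 2936 come from this program-order sequence, sketched in C with plain accesses (the point being what the CPU may do even when the compiler emits every operation):

        a = *A;
        *B = b;
        c = *C;
        d = *D;
        *E = e;

The CPU may issue the loads of *C and *D as one combined operation and defer the store to *B until after the store to *E, yielding the stream on line 2936; only explicit barriers pin down a required order.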
2963 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
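Line 2963 is the same-location sequence generated by code like the following; a sketch with READ_ONCE()/WRITE_ONCE(), which keep the compiler from collapsing the accesses before the CPU ever sees them:

        U = READ_ONCE(*A);      /* U == original value of *A */
        WRITE_ONCE(*A, V);
        WRITE_ONCE(*A, W);
        X = READ_ONCE(*A);      /* X == W: same-CPU coherency */
        WRITE_ONCE(*A, Y);
        Z = READ_ONCE(*A);      /* Z == Y */

Without the annotations the compiler is free to discard the intermediate stores and feed X and Z from its own knowledge of the values, shrinking the visible sequence to little more than the initial load and the final store.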