Lines Matching full:store

159 STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
160 STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
161 STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
162 STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
163 STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
164 STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
165 STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
166 STORE B=4, ...
217 STORE *A = 5, x = LOAD *D
218 x = LOAD *D, STORE *A = 5
254 a = LOAD *X, STORE *X = b
262 STORE *X = c, d = LOAD *X
282 X = LOAD *A, Y = LOAD *B, STORE *D = Z
283 X = LOAD *A, STORE *D = Z, Y = LOAD *B
284 Y = LOAD *B, X = LOAD *A, STORE *D = Z
285 Y = LOAD *B, STORE *D = Z, X = LOAD *A
286 STORE *D = Z, X = LOAD *A, Y = LOAD *B
287 STORE *D = Z, Y = LOAD *B, X = LOAD *A
306 STORE *A = X; STORE *(A + 4) = Y;
307 STORE *(A + 4) = Y; STORE *A = X;
308 STORE {*A, *(A + 4) } = {X, Y};
380 (1) Write (or store) memory barriers.
382 A write memory barrier gives a guarantee that all the STORE operations
383 specified before the barrier will appear to happen before all the STORE
390 A CPU can be viewed as committing a sequence of store operations to the
457 A general memory barrier gives a guarantee that all the LOAD and STORE
459 the LOAD and STORE operations specified after the barrier with respect to
511 store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
512 only to the store portion of the operation.
568 load-to-store relations, address-dependency barriers are not necessary
569 for load-to-store situations.
647 Q with the store into *Q. In other words, this outcome is prohibited,
704 for load-store control dependencies, as in the following example:
715 the compiler might combine the store to 'b' with other stores to 'b'.
756 Now there is no conditional between the load from 'a' and the store to
809 between the load from variable 'a' and the store to variable 'b'. It is
816 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
826 identical, as noted earlier, the compiler could pull this store outside
875 from 'a' and the store to 'c'. The control dependencies would extend
876 only to the pair of cmov instructions and the store depending on them.
906 between the prior load and the subsequent store, and this
926 need all the CPUs to see a given store at the same time, use smp_mb().
998 Firstly, write barriers act as partial orderings on store operations.
1003 STORE A = 1
1004 STORE B = 2
1005 STORE C = 3
1007 STORE D = 4
1008 STORE E = 5
1011 that the rest of the system might perceive as the unordered set of { STORE A,
1012 STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
1042 STORE A = 1
1043 STORE B = 2
1045 STORE C = &B LOAD X
1046 STORE D = 4 LOAD C (gets &B)
1088 STORE A = 1
1089 STORE B = 2
1091 STORE C = &B LOAD X
1092 STORE D = 4 LOAD C (gets &B)
1117 prior to the store of C \ +-------+ | |
1129 STORE A=1
1131 STORE B=2
1165 STORE A=1
1167 STORE B=2
1201 STORE A=1
1203 STORE B=2
1367 not always provided by real computer systems, namely that a given store
1372 instead guarantees only that a given store becomes visible at the same
1381 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1383 STORE Y=r1 LOAD X
1386 and CPU 3's load from Y returns 1. This indicates that CPU 1's store
1387 to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
1389 CPU 2 executes its load before its store, and CPU 3 loads from Y before
1396 CPU A did not originally store the value which it read), then on
1414 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1416 STORE Y=r1 LOAD X (reads 0)
1423 and store, it does not guarantee to order CPU 1's store. Thus, if this
1425 store buffer or a level of cache, CPU 2 might have early access to CPU 1's
1495 store to u as happening -after- cpu1()'s load from v, even though
1653 (*) Similarly, the compiler is within its rights to omit a store entirely
1661 ... Code that does not store to variable a ...
1665 it might well omit the second store. This would come as a fatal
1673 ... Code that does not store to variable a ...
1773 and "store tearing," in which a single large access is replaced by
1775 16-bit store instructions with 7-bit immediate fields, the compiler
1776 might be tempted to use two 16-bit store-immediate instructions to
1777 implement the following 32-bit store:
1783 than two instructions to build the constant and then store it.
1786 this optimization in a volatile store. In the absence of such bugs,
1787 use of WRITE_ONCE() prevents store tearing in the following example:
1791 Use of packed structures can also result in load and store tearing,
1810 load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE()
2049 ACQUIRE M, STORE *B, STORE *A, RELEASE M
2069 ACQUIRE N, STORE *B, STORE *A, RELEASE M
2167 STORE current->state
2203 is accessed, in particular, it sits between the STORE to indicate the event
2204 and the STORE to set TASK_RUNNING:
2208 set_current_state(); STORE event_indicated
2210 STORE current->state ...
2213 STORE task->state
2400 STORE waiter->task;
2423 STORE waiter->task;
2442 STORE waiter->task;
2519 The store to the data register might happen after the second store to the
2522 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2720 Although any particular load or store may not actually appear outside of the
2728 generate load and store operations which then go into the queue of memory
2801 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2826 mechanisms may alleviate this - once the store has actually hit the cache
2833 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2860 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A