1			 ============================
2			 LINUX KERNEL MEMORY BARRIERS
3			 ============================
4
5By: David Howells <dhowells@redhat.com>
6    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
7    Will Deacon <will.deacon@arm.com>
8    Peter Zijlstra <peterz@infradead.org>
9
10==========
11DISCLAIMER
12==========
13
14This document is not a specification; it is intentionally (for the sake of
15brevity) and unintentionally (due to being human) incomplete. This document is
16meant as a guide to using the various memory barriers provided by Linux, but
17in case of any doubt (and there are many) please ask.  Some doubts may be
18resolved by referring to the formal memory consistency model and related
19documentation at tools/memory-model/.  Nevertheless, even this memory
20model should be viewed as the collective opinion of its maintainers rather
21than as an infallible oracle.
22
23To repeat, this document is not a specification of what Linux expects from
24hardware.
25
26The purpose of this document is twofold:
27
28 (1) to specify the minimum functionality that one can rely on for any
29     particular barrier, and
30
31 (2) to provide a guide as to how to use the barriers that are available.
32
33Note that an architecture can provide more than the minimum requirement
34for any particular barrier, but if the architecture provides less than
35that, that architecture is incorrect.
36
37Note also that it is possible that a barrier may be a no-op for an
38architecture because the way that arch works renders an explicit barrier
39unnecessary in that case.
40
41
42========
43CONTENTS
44========
45
46 (*) Abstract memory access model.
47
48     - Device operations.
49     - Guarantees.
50
51 (*) What are memory barriers?
52
53     - Varieties of memory barrier.
54     - What may not be assumed about memory barriers?
55     - Data dependency barriers (historical).
56     - Control dependencies.
57     - SMP barrier pairing.
58     - Examples of memory barrier sequences.
59     - Read memory barriers vs load speculation.
60     - Multicopy atomicity.
61
62 (*) Explicit kernel barriers.
63
64     - Compiler barrier.
65     - CPU memory barriers.
66     - MMIO write barrier.
67
68 (*) Implicit kernel memory barriers.
69
70     - Lock acquisition functions.
71     - Interrupt disabling functions.
72     - Sleep and wake-up functions.
73     - Miscellaneous functions.
74
75 (*) Inter-CPU acquiring barrier effects.
76
77     - Acquires vs memory accesses.
78     - Acquires vs I/O accesses.
79
80 (*) Where are memory barriers needed?
81
82     - Interprocessor interaction.
83     - Atomic operations.
84     - Accessing devices.
85     - Interrupts.
86
87 (*) Kernel I/O barrier effects.
88
89 (*) Assumed minimum execution ordering model.
90
91 (*) The effects of the cpu cache.
92
93     - Cache coherency.
94     - Cache coherency vs DMA.
95     - Cache coherency vs MMIO.
96
97 (*) The things CPUs get up to.
98
99     - And then there's the Alpha.
100     - Virtual Machine Guests.
101
102 (*) Example uses.
103
104     - Circular buffers.
105
106 (*) References.
107
108
109============================
110ABSTRACT MEMORY ACCESS MODEL
111============================
112
113Consider the following abstract model of the system:
114
115		            :                :
116		            :                :
117		            :                :
118		+-------+   :   +--------+   :   +-------+
119		|       |   :   |        |   :   |       |
120		|       |   :   |        |   :   |       |
121		| CPU 1 |<----->| Memory |<----->| CPU 2 |
122		|       |   :   |        |   :   |       |
123		|       |   :   |        |   :   |       |
124		+-------+   :   +--------+   :   +-------+
125		    ^       :       ^        :       ^
126		    |       :       |        :       |
127		    |       :       |        :       |
128		    |       :       v        :       |
129		    |       :   +--------+   :       |
130		    |       :   |        |   :       |
131		    |       :   |        |   :       |
132		    +---------->| Device |<----------+
133		            :   |        |   :
134		            :   |        |   :
135		            :   +--------+   :
136		            :                :
137
138Each CPU executes a program that generates memory access operations.  In the
139abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
140perform the memory operations in any order it likes, provided program causality
141appears to be maintained.  Similarly, the compiler may also arrange the
142instructions it emits in any order it likes, provided it doesn't affect the
143apparent operation of the program.
144
145So in the above diagram, the effects of the memory operations performed by a
146CPU are perceived by the rest of the system as the operations cross the
147interface between the CPU and rest of the system (the dotted lines).
148
149
150For example, consider the following sequence of events:
151
152	CPU 1		CPU 2
153	===============	===============
154	{ A == 1; B == 2 }
155	A = 3;		x = B;
156	B = 4;		y = A;
157
158The set of accesses as seen by the memory system in the middle can be arranged
159in 24 different combinations:
160
161	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
162	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
163	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
164	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
165	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
166	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
167	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
168	STORE B=4, ...
169	...
170
171and can thus result in four different combinations of values:
172
173	x == 2, y == 1
174	x == 2, y == 3
175	x == 4, y == 1
176	x == 4, y == 3
177
178
179Furthermore, the stores committed by a CPU to the memory system may not be
180perceived by the loads made by another CPU in the same order as the stores were
181committed.
182
183
184As a further example, consider this sequence of events:
185
186	CPU 1		CPU 2
187	===============	===============
188	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
189	B = 4;		Q = P;
	P = &B;		D = *Q;
191
192There is an obvious data dependency here, as the value loaded into D depends on
193the address retrieved from P by CPU 2.  At the end of the sequence, any of the
194following results are possible:
195
196	(Q == &A) and (D == 1)
197	(Q == &B) and (D == 2)
198	(Q == &B) and (D == 4)
199
200Note that CPU 2 will never try and load C into D because the CPU will load P
201into Q before issuing the load of *Q.
202
203
204DEVICE OPERATIONS
205-----------------
206
207Some devices present their control interfaces as collections of memory
208locations, but the order in which the control registers are accessed is very
209important.  For instance, imagine an ethernet card with a set of internal
210registers that are accessed through an address port register (A) and a data
211port register (D).  To read internal register 5, the following code might then
212be used:
213
214	*A = 5;
215	x = *D;
216
217but this might show up as either of the following two sequences:
218
219	STORE *A = 5, x = LOAD *D
220	x = LOAD *D, STORE *A = 5
221
the second of which will almost certainly result in a malfunction, since it sets
the address _after_ attempting to read the register.
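
In real kernel code, accesses like these would normally be made through
the MMIO accessor functions, which preserve the required ordering.  As a
brief sketch (assuming a hypothetical device with the address port at
offset 0 and the data port at offset 4 from an ioremap()ed base):

	writel(5, base + 0);	/* select internal register 5 */
	x = readl(base + 4);	/* then read its contents */

See the "KERNEL I/O BARRIER EFFECTS" section for the ordering guarantees
that the readX() and writeX() accessors provide.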
224
225
226GUARANTEES
227----------
228
229There are some minimal guarantees that may be expected of a CPU:
230
231 (*) On any given CPU, dependent memory accesses will be issued in order, with
232     respect to itself.  This means that for:
233
234	Q = READ_ONCE(P); D = READ_ONCE(*Q);
235
236     the CPU will issue the following memory operations:
237
238	Q = LOAD P, D = LOAD *Q
239
240     and always in that order.  However, on DEC Alpha, READ_ONCE() also
241     emits a memory-barrier instruction, so that a DEC Alpha CPU will
242     instead issue the following memory operations:
243
244	Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER
245
246     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
247     mischief.
248
249 (*) Overlapping loads and stores within a particular CPU will appear to be
250     ordered within that CPU.  This means that for:
251
252	a = READ_ONCE(*X); WRITE_ONCE(*X, b);
253
254     the CPU will only issue the following sequence of memory operations:
255
256	a = LOAD *X, STORE *X = b
257
258     And for:
259
260	WRITE_ONCE(*X, c); d = READ_ONCE(*X);
261
262     the CPU will only issue:
263
264	STORE *X = c, d = LOAD *X
265
266     (Loads and stores overlap if they are targeted at overlapping pieces of
267     memory).
268
269And there are a number of things that _must_ or _must_not_ be assumed:
270
 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.  (A brief sketch follows this list.)
276
277 (*) It _must_not_ be assumed that independent loads and stores will be issued
278     in the order given.  This means that for:
279
280	X = *A; Y = *B; *D = Z;
281
282     we may get any of the following sequences:
283
284	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
285	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
286	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
287	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
288	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
289	STORE *D = Z, Y = LOAD *B,  X = LOAD *A
290
291 (*) It _must_ be assumed that overlapping memory accesses may be merged or
292     discarded.  This means that for:
293
294	X = *A; Y = *(A + 4);
295
296     we may get any one of the following sequences:
297
298	X = LOAD *A; Y = LOAD *(A + 4);
299	Y = LOAD *(A + 4); X = LOAD *A;
300	{X, Y} = LOAD {*A, *(A + 4) };
301
302     And for:
303
304	*A = X; *(A + 4) = Y;
305
306     we may get any of:
307
308	STORE *A = X; STORE *(A + 4) = Y;
309	STORE *(A + 4) = Y; STORE *A = X;
310	STORE {*A, *(A + 4) } = {X, Y};
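
As a sketch of the first point above (not taken from real kernel code;
'flag' is a hypothetical shared int), the compiler is within its rights
to load 'flag' once and reuse the value, turning a plain busy-wait loop
into an infinite loop; READ_ONCE() forces a fresh load on each pass:

	while (flag == 0)		/* BUG: load may be hoisted out */
		cpu_relax();

	while (READ_ONCE(flag) == 0)	/* fresh load on every iteration */
		cpu_relax();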
311
312And there are anti-guarantees:
313
314 (*) These guarantees do not apply to bitfields, because compilers often
315     generate code to modify these using non-atomic read-modify-write
316     sequences.  Do not attempt to use bitfields to synchronize parallel
317     algorithms.
318
 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field.  (A sketch of this
     hazard follows this list.)
324
325 (*) These guarantees apply only to properly aligned and sized scalar
326     variables.  "Properly sized" currently means variables that are
327     the same size as "char", "short", "int" and "long".  "Properly
328     aligned" means the natural alignment, thus no constraints for
329     "char", two-byte alignment for "short", four-byte alignment for
330     "int", and either four-byte or eight-byte alignment for "long",
331     on 32-bit and 64-bit systems, respectively.  Note that these
332     guarantees were introduced into the C11 standard, so beware when
333     using older pre-C11 compilers (for example, gcc 4.6).  The portion
334     of the standard containing this guarantee is Section 3.14, which
335     defines "memory location" as follows:
336
	memory location
338		either an object of scalar type, or a maximal sequence
339		of adjacent bit-fields all having nonzero width
340
341		NOTE 1: Two threads of execution can update and access
342		separate memory locations without interfering with
343		each other.
344
345		NOTE 2: A bit-field and an adjacent non-bit-field member
346		are in separate memory locations. The same applies
347		to two bit-fields, if one is declared inside a nested
348		structure declaration and the other is not, or if the two
349		are separated by a zero-length bit-field declaration,
350		or if they are separated by a non-bit-field member
351		declaration. It is not safe to concurrently update two
352		bit-fields in the same structure if all members declared
353		between them are also bit-fields, no matter what the
354		sizes of those intervening bit-fields happen to be.
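
To illustrate the bitfield anti-guarantees, consider this sketch (the
structure and its locks are hypothetical):

	struct foo {
		int a : 4;	/* protected by lock_a */
		int b : 4;	/* protected by lock_b */
	};

Because 'a' and 'b' occupy the same memory location, an update to 'b'
made under lock_b may be compiled as a non-atomic read-modify-write of
the whole location, silently corrupting a concurrent update to 'a' made
under lock_a.  Protecting both fields with the same lock, or declaring
them as full ints, avoids the problem.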
355
356
357=========================
358WHAT ARE MEMORY BARRIERS?
359=========================
360
361As can be seen above, independent memory operations are effectively performed
362in random order, but this can be a problem for CPU-CPU interaction and for I/O.
363What is required is some way of intervening to instruct the compiler and the
364CPU to restrict the order.
365
366Memory barriers are such interventions.  They impose a perceived partial
367ordering over the memory operations on either side of the barrier.
368
369Such enforcement is important because the CPUs and other devices in a system
370can use a variety of tricks to improve performance, including reordering,
371deferral and combination of memory operations; speculative loads; speculative
372branch prediction and various types of caching.  Memory barriers are used to
373override or suppress these tricks, allowing the code to sanely control the
374interaction of multiple CPUs and/or devices.
375
376
377VARIETIES OF MEMORY BARRIER
378---------------------------
379
380Memory barriers come in four basic varieties:
381
382 (1) Write (or store) memory barriers.
383
384     A write memory barrier gives a guarantee that all the STORE operations
385     specified before the barrier will appear to happen before all the STORE
386     operations specified after the barrier with respect to the other
387     components of the system.
388
389     A write barrier is a partial ordering on stores only; it is not required
390     to have any effect on loads.
391
392     A CPU can be viewed as committing a sequence of store operations to the
393     memory system as time progresses.  All stores _before_ a write barrier
394     will occur _before_ all the stores after the write barrier.
395
396     [!] Note that write barriers should normally be paired with read or data
397     dependency barriers; see the "SMP barrier pairing" subsection.
398
399
400 (2) Data dependency barriers.
401
402     A data dependency barrier is a weaker form of read barrier.  In the case
403     where two loads are performed such that the second depends on the result
404     of the first (eg: the first load retrieves the address to which the second
405     load will be directed), a data dependency barrier would be required to
406     make sure that the target of the second load is updated after the address
407     obtained by the first load is accessed.
408
409     A data dependency barrier is a partial ordering on interdependent loads
410     only; it is not required to have any effect on stores, independent loads
411     or overlapping loads.
412
413     As mentioned in (1), the other CPUs in the system can be viewed as
414     committing sequences of stores to the memory system that the CPU being
415     considered can then perceive.  A data dependency barrier issued by the CPU
416     under consideration guarantees that for any load preceding it, if that
417     load touches one of a sequence of stores from another CPU, then by the
418     time the barrier completes, the effects of all the stores prior to that
419     touched by the load will be perceptible to any loads issued after the data
420     dependency barrier.
421
422     See the "Examples of memory barrier sequences" subsection for diagrams
423     showing the ordering constraints.
424
425     [!] Note that the first load really has to have a _data_ dependency and
426     not a control dependency.  If the address for the second load is dependent
427     on the first load, but the dependency is through a conditional rather than
428     actually loading the address itself, then it's a _control_ dependency and
429     a full read barrier or better is required.  See the "Control dependencies"
430     subsection for more information.
431
432     [!] Note that data dependency barriers should normally be paired with
433     write barriers; see the "SMP barrier pairing" subsection.
434
435
436 (3) Read (or load) memory barriers.
437
438     A read barrier is a data dependency barrier plus a guarantee that all the
439     LOAD operations specified before the barrier will appear to happen before
440     all the LOAD operations specified after the barrier with respect to the
441     other components of the system.
442
443     A read barrier is a partial ordering on loads only; it is not required to
444     have any effect on stores.
445
446     Read memory barriers imply data dependency barriers, and so can substitute
447     for them.
448
449     [!] Note that read barriers should normally be paired with write barriers;
450     see the "SMP barrier pairing" subsection.
451
452
453 (4) General memory barriers.
454
455     A general memory barrier gives a guarantee that all the LOAD and STORE
456     operations specified before the barrier will appear to happen before all
457     the LOAD and STORE operations specified after the barrier with respect to
458     the other components of the system.
459
460     A general memory barrier is a partial ordering over both loads and stores.
461
462     General memory barriers imply both read and write memory barriers, and so
463     can substitute for either.
464
465
466And a couple of implicit varieties:
467
468 (5) ACQUIRE operations.
469
470     This acts as a one-way permeable barrier.  It guarantees that all memory
471     operations after the ACQUIRE operation will appear to happen after the
472     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.  The latter builds the necessary
     ACQUIRE semantics from a control dependency combined with smp_rmb().
476
477     Memory operations that occur before an ACQUIRE operation may appear to
478     happen after it completes.
479
480     An ACQUIRE operation should almost always be paired with a RELEASE
481     operation.
482
483
484 (6) RELEASE operations.
485
486     This also acts as a one-way permeable barrier.  It guarantees that all
487     memory operations before the RELEASE operation will appear to happen
488     before the RELEASE operation with respect to the other components of the
489     system. RELEASE operations include UNLOCK operations and
490     smp_store_release() operations.
491
492     Memory operations that occur after a RELEASE operation may appear to
493     happen before it completes.
494
495     The use of ACQUIRE and RELEASE operations generally precludes the need
496     for other sorts of memory barrier (but note the exceptions mentioned in
497     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
498     pair is -not- guaranteed to act as a full memory barrier.  However, after
499     an ACQUIRE on a given variable, all memory accesses preceding any prior
500     RELEASE on that same variable are guaranteed to be visible.  In other
501     words, within a given variable's critical section, all accesses of all
502     previous critical sections for that variable are guaranteed to have
503     completed.
504
505     This means that ACQUIRE acts as a minimal "acquire" operation and
506     RELEASE acts as a minimal "release" operation.
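
As a brief illustration of ACQUIRE/RELEASE pairing, consider this sketch
('data' and 'flag' are hypothetical shared variables, both initially
zero):

	CPU 1				CPU 2
	===============			===============
	WRITE_ONCE(data, 42);
	smp_store_release(&flag, 1);
					while (!smp_load_acquire(&flag))
						cpu_relax();
					r1 = READ_ONCE(data); /* must be 42 */

The RELEASE guarantees that the store to 'data' is visible before the
store to 'flag', and the ACQUIRE guarantees that the load of 'data'
cannot be reordered before the load of 'flag', so r1 is guaranteed to
be 42.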
507
508A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
509RELEASE variants in addition to fully-ordered and relaxed (no barrier
510semantics) definitions.  For compound atomics performing both a load and a
511store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
512only to the store portion of the operation.
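
For instance, here is a sketch of one such family (see atomic_t.txt for
the authoritative list):

	atomic_fetch_add(i, v);		/* fully ordered */
	atomic_fetch_add_acquire(i, v);	/* ACQUIRE applies to the load */
	atomic_fetch_add_release(i, v);	/* RELEASE applies to the store */
	atomic_fetch_add_relaxed(i, v);	/* no barrier semantics */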
513
514Memory barriers are only required where there's a possibility of interaction
515between two CPUs or between a CPU and a device.  If it can be guaranteed that
516there won't be any such interaction in any particular piece of code, then
517memory barriers are unnecessary in that piece of code.
518
519
520Note that these are the _minimum_ guarantees.  Different architectures may give
521more substantial guarantees, but they may _not_ be relied upon outside of arch
522specific code.
523
524
525WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
526----------------------------------------------
527
528There are certain things that the Linux kernel memory barriers do not guarantee:
529
530 (*) There is no guarantee that any of the memory accesses specified before a
531     memory barrier will be _complete_ by the completion of a memory barrier
532     instruction; the barrier can be considered to draw a line in that CPU's
533     access queue that accesses of the appropriate type may not cross.
534
535 (*) There is no guarantee that issuing a memory barrier on one CPU will have
536     any direct effect on another CPU or any other hardware in the system.  The
537     indirect effect will be the order in which the second CPU sees the effects
538     of the first CPU's accesses occur, but see the next point:
539
540 (*) There is no guarantee that a CPU will see the correct order of effects
541     from a second CPU's accesses, even _if_ the second CPU uses a memory
542     barrier, unless the first CPU _also_ uses a matching memory barrier (see
543     the subsection on "SMP Barrier Pairing").
544
545 (*) There is no guarantee that some intervening piece of off-the-CPU
546     hardware[*] will not reorder the memory accesses.  CPU cache coherency
547     mechanisms should propagate the indirect effects of a memory barrier
548     between CPUs, but might not do so in order.
549
550	[*] For information on bus mastering DMA and coherency please read:
551
552	    Documentation/PCI/pci.txt
553	    Documentation/DMA-API-HOWTO.txt
554	    Documentation/DMA-API.txt
555
556
557DATA DEPENDENCY BARRIERS (HISTORICAL)
558-------------------------------------
559
560As of v4.15 of the Linux kernel, an smp_read_barrier_depends() was
561added to READ_ONCE(), which means that about the only people who
562need to pay attention to this section are those working on DEC Alpha
563architecture-specific code and those working on READ_ONCE() itself.
564For those who need it, and for those who are interested in the history,
565here is the story of data-dependency barriers.
566
567The usage requirements of data dependency barriers are a little subtle, and
568it's not always obvious that they're needed.  To illustrate, consider the
569following sequence of events:
570
571	CPU 1		      CPU 2
572	===============	      ===============
573	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
574	B = 4;
575	<write barrier>
	WRITE_ONCE(P, &B);
577			      Q = READ_ONCE(P);
578			      D = *Q;
579
580There's a clear data dependency here, and it would seem that by the end of the
581sequence, Q must be either &A or &B, and that:
582
583	(Q == &A) implies (D == 1)
584	(Q == &B) implies (D == 4)
585
586But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
587leading to the following situation:
588
589	(Q == &B) and (D == 2) ????
590
591Whilst this may seem like a failure of coherency or causality maintenance, it
592isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
593Alpha).
594
595To deal with this, a data dependency barrier or better must be inserted
596between the address load and the data load:
597
598	CPU 1		      CPU 2
599	===============	      ===============
600	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
601	B = 4;
602	<write barrier>
603	WRITE_ONCE(P, &B);
604			      Q = READ_ONCE(P);
605			      <data dependency barrier>
606			      D = *Q;
607
608This enforces the occurrence of one of the two implications, and prevents the
609third possibility from arising.
610
611
612[!] Note that this extremely counterintuitive situation arises most easily on
613machines with split caches, so that, for example, one cache bank processes
614even-numbered cache lines and the other bank processes odd-numbered cache
615lines.  The pointer P might be stored in an odd-numbered cache line, and the
616variable B might be stored in an even-numbered cache line.  Then, if the
617even-numbered bank of the reading CPU's cache is extremely busy while the
618odd-numbered bank is idle, one can see the new value of the pointer P (&B),
619but the old value of the variable B (2).
620
621
622A data-dependency barrier is not required to order dependent writes
623because the CPUs that the Linux kernel supports don't do writes
624until they are certain (1) that the write will actually happen, (2)
625of the location of the write, and (3) of the value to be written.
626But please carefully read the "CONTROL DEPENDENCIES" section and the
627Documentation/RCU/rcu_dereference.txt file:  The compiler can and does
628break dependencies in a great many highly creative ways.
629
630	CPU 1		      CPU 2
631	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
633	B = 4;
634	<write barrier>
635	WRITE_ONCE(P, &B);
636			      Q = READ_ONCE(P);
637			      WRITE_ONCE(*Q, 5);
638
639Therefore, no data-dependency barrier is required to order the read into
640Q with the store into *Q.  In other words, this outcome is prohibited,
641even without a data-dependency barrier:
642
643	(Q == &B) && (B == 4)
644
645Please note that this pattern should be rare.  After all, the whole point
646of dependency ordering is to -prevent- writes to the data structure, along
647with the expensive cache misses associated with those writes.  This pattern
648can be used to record rare error conditions and the like, and the CPUs'
649naturally occurring ordering prevents such records from being lost.
650
651
652Note well that the ordering provided by a data dependency is local to
653the CPU containing it.  See the section on "Multicopy atomicity" for
654more information.
655
656
657The data dependency barrier is very important to the RCU system,
658for example.  See rcu_assign_pointer() and rcu_dereference() in
659include/linux/rcupdate.h.  This permits the current target of an RCU'd
660pointer to be replaced with a new modified target, without the replacement
661target appearing to be incompletely initialised.
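
A minimal sketch of that pattern (the 'gp' pointer and 'struct foo' are
hypothetical, and error handling is omitted):

	/* Updater */
	struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

	p->a = 1;
	rcu_assign_pointer(gp, p);	/* publish with release semantics */

	/* Reader, between rcu_read_lock() and rcu_read_unlock() */
	struct foo *q = rcu_dereference(gp);

	if (q)
		do_something(q->a);	/* dependency-ordered access */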
662
663See also the subsection on "Cache Coherency" for a more thorough example.
664
665
666CONTROL DEPENDENCIES
667--------------------
668
669Control dependencies can be a bit tricky because current compilers do
670not understand them.  The purpose of this section is to help you prevent
671the compiler's ignorance from breaking your code.
672
673A load-load control dependency requires a full read memory barrier, not
674simply a data dependency barrier to make it work correctly.  Consider the
675following bit of code:
676
677	q = READ_ONCE(a);
678	if (q) {
679		<data dependency barrier>  /* BUG: No data dependency!!! */
680		p = READ_ONCE(b);
681	}
682
683This will not have the desired effect because there is no actual data
684dependency, but rather a control dependency that the CPU may short-circuit
685by attempting to predict the outcome in advance, so that other CPUs see
686the load from b as having happened before the load from a.  In such a
687case what's actually required is:
688
689	q = READ_ONCE(a);
690	if (q) {
691		<read barrier>
692		p = READ_ONCE(b);
693	}
694
695However, stores are not speculated.  This means that ordering -is- provided
696for load-store control dependencies, as in the following example:
697
698	q = READ_ONCE(a);
699	if (q) {
700		WRITE_ONCE(b, 1);
701	}
702
703Control dependencies pair normally with other types of barriers.
704That said, please note that neither READ_ONCE() nor WRITE_ONCE()
705are optional! Without the READ_ONCE(), the compiler might combine the
706load from 'a' with other loads from 'a'.  Without the WRITE_ONCE(),
707the compiler might combine the store to 'b' with other stores to 'b'.
708Either can result in highly counterintuitive effects on ordering.
709
710Worse yet, if the compiler is able to prove (say) that the value of
711variable 'a' is always non-zero, it would be well within its rights
712to optimize the original example by eliminating the "if" statement
713as follows:
714
715	q = a;
716	b = 1;  /* BUG: Compiler and CPU can both reorder!!! */
717
718So don't leave out the READ_ONCE().
719
720It is tempting to try to enforce ordering on identical stores on both
721branches of the "if" statement as follows:
722
723	q = READ_ONCE(a);
724	if (q) {
725		barrier();
726		WRITE_ONCE(b, 1);
727		do_something();
728	} else {
729		barrier();
730		WRITE_ONCE(b, 1);
731		do_something_else();
732	}
733
734Unfortunately, current compilers will transform this as follows at high
735optimization levels:
736
737	q = READ_ONCE(a);
738	barrier();
739	WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
740	if (q) {
741		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
742		do_something();
743	} else {
744		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
745		do_something_else();
746	}
747
748Now there is no conditional between the load from 'a' and the store to
749'b', which means that the CPU is within its rights to reorder them:
750The conditional is absolutely required, and must be present in the
751assembly code even after all compiler optimizations have been applied.
752Therefore, if you need ordering in this example, you need explicit
753memory barriers, for example, smp_store_release():
754
755	q = READ_ONCE(a);
756	if (q) {
757		smp_store_release(&b, 1);
758		do_something();
759	} else {
760		smp_store_release(&b, 1);
761		do_something_else();
762	}
763
764In contrast, without explicit memory barriers, two-legged-if control
765ordering is guaranteed only when the stores differ, for example:
766
767	q = READ_ONCE(a);
768	if (q) {
769		WRITE_ONCE(b, 1);
770		do_something();
771	} else {
772		WRITE_ONCE(b, 2);
773		do_something_else();
774	}
775
776The initial READ_ONCE() is still required to prevent the compiler from
777proving the value of 'a'.
778
779In addition, you need to be careful what you do with the local variable 'q',
780otherwise the compiler might be able to guess the value and again remove
781the needed conditional.  For example:
782
783	q = READ_ONCE(a);
784	if (q % MAX) {
785		WRITE_ONCE(b, 1);
786		do_something();
787	} else {
788		WRITE_ONCE(b, 2);
789		do_something_else();
790	}
791
792If MAX is defined to be 1, then the compiler knows that (q % MAX) is
793equal to zero, in which case the compiler is within its rights to
794transform the above code into the following:
795
796	q = READ_ONCE(a);
797	WRITE_ONCE(b, 2);
798	do_something_else();
799
800Given this transformation, the CPU is not required to respect the ordering
801between the load from variable 'a' and the store to variable 'b'.  It is
802tempting to add a barrier(), but this does not help.  The conditional
803is gone, and the barrier won't bring it back.  Therefore, if you are
804relying on this ordering, you should make sure that MAX is greater than
805one, perhaps as follows:
806
807	q = READ_ONCE(a);
808	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
809	if (q % MAX) {
810		WRITE_ONCE(b, 1);
811		do_something();
812	} else {
813		WRITE_ONCE(b, 2);
814		do_something_else();
815	}
816
817Please note once again that the stores to 'b' differ.  If they were
818identical, as noted earlier, the compiler could pull this store outside
819of the 'if' statement.
820
821You must also be careful not to rely too much on boolean short-circuit
822evaluation.  Consider this example:
823
824	q = READ_ONCE(a);
825	if (q || 1 > 0)
826		WRITE_ONCE(b, 1);
827
Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:
831
832	q = READ_ONCE(a);
833	WRITE_ONCE(b, 1);
834
835This example underscores the need to ensure that the compiler cannot
836out-guess your code.  More generally, although READ_ONCE() does force
837the compiler to actually emit code for a given load, it does not force
838the compiler to use the results.
839
In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, they do
not necessarily apply to code following the if-statement:
843
844	q = READ_ONCE(a);
845	if (q) {
846		WRITE_ONCE(b, 1);
847	} else {
848		WRITE_ONCE(b, 2);
849	}
850	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */
851
852It is tempting to argue that there in fact is ordering because the
853compiler cannot reorder volatile accesses and also cannot reorder
854the writes to 'b' with the condition.  Unfortunately for this line
855of reasoning, the compiler might compile the two writes to 'b' as
856conditional-move instructions, as in this fanciful pseudo-assembly
857language:
858
859	ld r1,a
860	cmp r1,$0
861	cmov,ne r4,$1
862	cmov,eq r4,$2
863	st r4,b
864	st $1,c
865
866A weakly ordered CPU would have no dependency of any sort between the load
867from 'a' and the store to 'c'.  The control dependencies would extend
868only to the pair of cmov instructions and the store depending on them.
869In short, control dependencies apply only to the stores in the then-clause
870and else-clause of the if-statement in question (including functions
871invoked by those two clauses), not to code following that if-statement.
872
873
874Note well that the ordering provided by a control dependency is local
875to the CPU containing it.  See the section on "Multicopy atomicity"
876for more information.
877
878
879In summary:
880
881  (*) Control dependencies can order prior loads against later stores.
882      However, they do -not- guarantee any other sort of ordering:
883      Not prior loads against later loads, nor prior stores against
884      later anything.  If you need these other forms of ordering,
885      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
886      later loads, smp_mb().
887
888  (*) If both legs of the "if" statement begin with identical stores to
889      the same variable, then those stores must be ordered, either by
890      preceding both of them with smp_mb() or by using smp_store_release()
891      to carry out the stores.  Please note that it is -not- sufficient
      to use barrier() at the beginning of each leg of the "if" statement
893      because, as shown by the example above, optimizing compilers can
894      destroy the control dependency while respecting the letter of the
895      barrier() law.
896
897  (*) Control dependencies require at least one run-time conditional
898      between the prior load and the subsequent store, and this
899      conditional must involve the prior load.  If the compiler is able
900      to optimize the conditional away, it will have also optimized
901      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
902      can help to preserve the needed conditional.
903
904  (*) Control dependencies require that the compiler avoid reordering the
905      dependency into nonexistence.  Careful use of READ_ONCE() or
906      atomic{,64}_read() can help to preserve your control dependency.
907      Please see the COMPILER BARRIER section for more information.
908
909  (*) Control dependencies apply only to the then-clause and else-clause
910      of the if-statement containing the control dependency, including
911      any functions that these two clauses call.  Control dependencies
912      do -not- apply to code following the if-statement containing the
913      control dependency.
914
915  (*) Control dependencies pair normally with other types of barriers.
916
917  (*) Control dependencies do -not- provide multicopy atomicity.  If you
918      need all the CPUs to see a given store at the same time, use smp_mb().
919
920  (*) Compilers do not understand control dependencies.  It is therefore
921      your job to ensure that they do not break your code.
922
923
924SMP BARRIER PAIRING
925-------------------
926
927When dealing with CPU-CPU interactions, certain types of memory barrier should
928always be paired.  A lack of appropriate pairing is almost certainly an error.
929
930General barriers pair with each other, though they also pair with most
931other types of barriers, albeit without multicopy atomicity.  An acquire
932barrier pairs with a release barrier, but both may also pair with other
933barriers, including of course general barriers.  A write barrier pairs
934with a data dependency barrier, a control dependency, an acquire barrier,
935a release barrier, a read barrier, or a general barrier.  Similarly a
936read barrier, control dependency, or a data dependency barrier pairs
937with a write barrier, an acquire barrier, a release barrier, or a
938general barrier:
939
940	CPU 1		      CPU 2
941	===============	      ===============
942	WRITE_ONCE(a, 1);
943	<write barrier>
944	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
945			      <read barrier>
946			      y = READ_ONCE(a);
947
948Or:
949
950	CPU 1		      CPU 2
951	===============	      ===============================
952	a = 1;
953	<write barrier>
954	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
955			      <data dependency barrier>
956			      y = *x;
957
958Or even:
959
960	CPU 1		      CPU 2
961	===============	      ===============================
962	r1 = READ_ONCE(y);
963	<general barrier>
964	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
965			         <implicit control dependency>
966			         WRITE_ONCE(y, 1);
967			      }
968
969	assert(r1 == 0 || r2 == 0);
970
971Basically, the read barrier always has to be there, even though it can be of
972the "weaker" type.
973
974[!] Note that the stores before the write barrier would normally be expected to
975match the loads after the read barrier or the data dependency barrier, and vice
976versa:
977
978	CPU 1                               CPU 2
979	===================                 ===================
980	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
981	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
982	<write barrier>            \        <read barrier>
983	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
984	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);
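
Expressed with the kernel's explicit barrier primitives, the first
pairing above might look like this sketch ('a' and 'b' are hypothetical
variables, both initially zero):

	CPU 1			CPU 2
	===============		===============
	WRITE_ONCE(a, 1);
	smp_wmb();
	WRITE_ONCE(b, 2);	x = READ_ONCE(b);
				smp_rmb();
				y = READ_ONCE(a);

If x turns out to be 2, then y is guaranteed to be 1.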
985
986
987EXAMPLES OF MEMORY BARRIER SEQUENCES
988------------------------------------
989
990Firstly, write barriers act as partial orderings on store operations.
991Consider the following sequence of events:
992
993	CPU 1
994	=======================
995	STORE A = 1
996	STORE B = 2
997	STORE C = 3
998	<write barrier>
999	STORE D = 4
1000	STORE E = 5
1001
1002This sequence of events is committed to the memory coherence system in an order
1003that the rest of the system might perceive as the unordered set of { STORE A,
1004STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
1005}:
1006
1007	+-------+       :      :
1008	|       |       +------+
1009	|       |------>| C=3  |     }     /\
1010	|       |  :    +------+     }-----  \  -----> Events perceptible to
1011	|       |  :    | A=1  |     }        \/       the rest of the system
1012	|       |  :    +------+     }
1013	| CPU 1 |  :    | B=2  |     }
1014	|       |       +------+     }
1015	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
1016	|       |       +------+     }        requires all stores prior to the
1017	|       |  :    | E=5  |     }        barrier to be committed before
1018	|       |  :    +------+     }        further stores may take place
1019	|       |------>| D=4  |     }
1020	|       |       +------+
1021	+-------+       :      :
1022	                   |
1023	                   | Sequence in which stores are committed to the
1024	                   | memory system by CPU 1
1025	                   V
1026
1027
1028Secondly, data dependency barriers act as partial orderings on data-dependent
1029loads.  Consider the following sequence of events:
1030
1031	CPU 1			CPU 2
1032	=======================	=======================
1033		{ B = 7; X = 9; Y = 8; C = &Y }
1034	STORE A = 1
1035	STORE B = 2
1036	<write barrier>
1037	STORE C = &B		LOAD X
1038	STORE D = 4		LOAD C (gets &B)
1039				LOAD *C (reads B)
1040
1041Without intervention, CPU 2 may perceive the events on CPU 1 in some
1042effectively random order, despite the write barrier issued by CPU 1:
1043
1044	+-------+       :      :                :       :
1045	|       |       +------+                +-------+  | Sequence of update
1046	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
1047	|       |  :    +------+     \          +-------+  | CPU 2
1048	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
1049	|       |       +------+       |        +-------+
1050	|       |   wwwwwwwwwwwwwwww   |        :       :
1051	|       |       +------+       |        :       :
1052	|       |  :    | C=&B |---    |        :       :       +-------+
1053	|       |  :    +------+   \   |        +-------+       |       |
1054	|       |------>| D=4  |    ----------->| C->&B |------>|       |
1055	|       |       +------+       |        +-------+       |       |
1056	+-------+       :      :       |        :       :       |       |
1057	                               |        :       :       |       |
1058	                               |        :       :       | CPU 2 |
1059	                               |        +-------+       |       |
1060	    Apparently incorrect --->  |        | B->7  |------>|       |
1061	    perception of B (!)        |        +-------+       |       |
1062	                               |        :       :       |       |
1063	                               |        +-------+       |       |
1064	    The load of X holds --->    \       | X->9  |------>|       |
1065	    up the maintenance           \      +-------+       |       |
1066	    of coherence of B             ----->| B->2  |       +-------+
1067	                                        +-------+
1068	                                        :       :
1069
1070
1071In the above example, CPU 2 perceives that B is 7, despite the load of *C
1072(which would be B) coming after the LOAD of C.
1073
1074If, however, a data dependency barrier were to be placed between the load of C
1075and the load of *C (ie: B) on CPU 2:
1076
1077	CPU 1			CPU 2
1078	=======================	=======================
1079		{ B = 7; X = 9; Y = 8; C = &Y }
1080	STORE A = 1
1081	STORE B = 2
1082	<write barrier>
1083	STORE C = &B		LOAD X
1084	STORE D = 4		LOAD C (gets &B)
1085				<data dependency barrier>
1086				LOAD *C (reads B)
1087
1088then the following will occur:
1089
1090	+-------+       :      :                :       :
1091	|       |       +------+                +-------+
1092	|       |------>| B=2  |-----       --->| Y->8  |
1093	|       |  :    +------+     \          +-------+
1094	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
1095	|       |       +------+       |        +-------+
1096	|       |   wwwwwwwwwwwwwwww   |        :       :
1097	|       |       +------+       |        :       :
1098	|       |  :    | C=&B |---    |        :       :       +-------+
1099	|       |  :    +------+   \   |        +-------+       |       |
1100	|       |------>| D=4  |    ----------->| C->&B |------>|       |
1101	|       |       +------+       |        +-------+       |       |
1102	+-------+       :      :       |        :       :       |       |
1103	                               |        :       :       |       |
1104	                               |        :       :       | CPU 2 |
1105	                               |        +-------+       |       |
1106	                               |        | X->9  |------>|       |
1107	                               |        +-------+       |       |
1108	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
1109	  prior to the store of C        \      +-------+       |       |
1110	  are perceptible to              ----->| B->2  |------>|       |
1111	  subsequent loads                      +-------+       |       |
1112	                                        :       :       +-------+
1113
1114
1115And thirdly, a read barrier acts as a partial order on loads.  Consider the
1116following sequence of events:
1117
1118	CPU 1			CPU 2
1119	=======================	=======================
1120		{ A = 0, B = 9 }
1121	STORE A=1
1122	<write barrier>
1123	STORE B=2
1124				LOAD B
1125				LOAD A
1126
1127Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1128some effectively random order, despite the write barrier issued by CPU 1:
1129
1130	+-------+       :      :                :       :
1131	|       |       +------+                +-------+
1132	|       |------>| A=1  |------      --->| A->0  |
1133	|       |       +------+      \         +-------+
1134	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1135	|       |       +------+        |       +-------+
1136	|       |------>| B=2  |---     |       :       :
1137	|       |       +------+   \    |       :       :       +-------+
1138	+-------+       :      :    \   |       +-------+       |       |
1139	                             ---------->| B->2  |------>|       |
1140	                                |       +-------+       | CPU 2 |
1141	                                |       | A->0  |------>|       |
1142	                                |       +-------+       |       |
1143	                                |       :       :       +-------+
1144	                                 \      :       :
1145	                                  \     +-------+
1146	                                   ---->| A->1  |
1147	                                        +-------+
1148	                                        :       :
1149
1150
1151If, however, a read barrier were to be placed between the load of B and the
1152load of A on CPU 2:
1153
1154	CPU 1			CPU 2
1155	=======================	=======================
1156		{ A = 0, B = 9 }
1157	STORE A=1
1158	<write barrier>
1159	STORE B=2
1160				LOAD B
1161				<read barrier>
1162				LOAD A
1163
1164then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
11652:
1166
1167	+-------+       :      :                :       :
1168	|       |       +------+                +-------+
1169	|       |------>| A=1  |------      --->| A->0  |
1170	|       |       +------+      \         +-------+
1171	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1172	|       |       +------+        |       +-------+
1173	|       |------>| B=2  |---     |       :       :
1174	|       |       +------+   \    |       :       :       +-------+
1175	+-------+       :      :    \   |       +-------+       |       |
1176	                             ---------->| B->2  |------>|       |
1177	                                |       +-------+       | CPU 2 |
1178	                                |       :       :       |       |
1179	                                |       :       :       |       |
1180	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1181	  barrier causes all effects      \     +-------+       |       |
1182	  prior to the storage of B        ---->| A->1  |------>|       |
1183	  to be perceptible to CPU 2            +-------+       |       |
1184	                                        :       :       +-------+
1185
1186
1187To illustrate this more completely, consider what could happen if the code
1188contained a load of A either side of the read barrier:
1189
1190	CPU 1			CPU 2
1191	=======================	=======================
1192		{ A = 0, B = 9 }
1193	STORE A=1
1194	<write barrier>
1195	STORE B=2
1196				LOAD B
1197				LOAD A [first load of A]
1198				<read barrier>
1199				LOAD A [second load of A]
1200
1201Even though the two loads of A both occur after the load of B, they may both
1202come up with different values:
1203
1204	+-------+       :      :                :       :
1205	|       |       +------+                +-------+
1206	|       |------>| A=1  |------      --->| A->0  |
1207	|       |       +------+      \         +-------+
1208	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1209	|       |       +------+        |       +-------+
1210	|       |------>| B=2  |---     |       :       :
1211	|       |       +------+   \    |       :       :       +-------+
1212	+-------+       :      :    \   |       +-------+       |       |
1213	                             ---------->| B->2  |------>|       |
1214	                                |       +-------+       | CPU 2 |
1215	                                |       :       :       |       |
1216	                                |       :       :       |       |
1217	                                |       +-------+       |       |
1218	                                |       | A->0  |------>| 1st   |
1219	                                |       +-------+       |       |
1220	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1221	  barrier causes all effects      \     +-------+       |       |
1222	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
1223	  to be perceptible to CPU 2            +-------+       |       |
1224	                                        :       :       +-------+
1225
1226
1227But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1228before the read barrier completes anyway:
1229
1230	+-------+       :      :                :       :
1231	|       |       +------+                +-------+
1232	|       |------>| A=1  |------      --->| A->0  |
1233	|       |       +------+      \         +-------+
1234	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1235	|       |       +------+        |       +-------+
1236	|       |------>| B=2  |---     |       :       :
1237	|       |       +------+   \    |       :       :       +-------+
1238	+-------+       :      :    \   |       +-------+       |       |
1239	                             ---------->| B->2  |------>|       |
1240	                                |       +-------+       | CPU 2 |
1241	                                |       :       :       |       |
1242	                                 \      :       :       |       |
1243	                                  \     +-------+       |       |
1244	                                   ---->| A->1  |------>| 1st   |
1245	                                        +-------+       |       |
1246	                                    rrrrrrrrrrrrrrrrr   |       |
1247	                                        +-------+       |       |
1248	                                        | A->1  |------>| 2nd   |
1249	                                        +-------+       |       |
1250	                                        :       :       +-------+
1251
1252
1253The guarantee is that the second load will always come up with A == 1 if the
1254load of B came up with B == 2.  No such guarantee exists for the first load of
1255A; that may come up with either A == 0 or A == 1.
1256
1257
1258READ MEMORY BARRIERS VS LOAD SPECULATION
1259----------------------------------------
1260
Many CPUs speculate with loads: that is, they see that they will need to load an
1262item from memory, and they find a time where they're not using the bus for any
1263other loads, and so do the load in advance - even though they haven't actually
1264got to that point in the instruction execution flow yet.  This permits the
1265actual load instruction to potentially complete immediately because the CPU
1266already has the value to hand.
1267
1268It may turn out that the CPU didn't actually need the value - perhaps because a
1269branch circumvented the load - in which case it can discard the value or just
1270cache it for later use.
1271
1272Consider:
1273
1274	CPU 1			CPU 2
1275	=======================	=======================
1276				LOAD B
1277				DIVIDE		} Divide instructions generally
1278				DIVIDE		} take a long time to perform
1279				LOAD A
1280
1281Which might appear as this:
1282
1283	                                        :       :       +-------+
1284	                                        +-------+       |       |
1285	                                    --->| B->2  |------>|       |
1286	                                        +-------+       | CPU 2 |
1287	                                        :       :DIVIDE |       |
1288	                                        +-------+       |       |
1289	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1290	division speculates on the              +-------+   ~   |       |
1291	LOAD of A                               :       :   ~   |       |
1292	                                        :       :DIVIDE |       |
1293	                                        :       :   ~   |       |
1294	Once the divisions are complete -->     :       :   ~-->|       |
1295	the CPU can then perform the            :       :       |       |
1296	LOAD with immediate effect              :       :       +-------+
1297
1298
1299Placing a read barrier or a data dependency barrier just before the second
1300load:
1301
1302	CPU 1			CPU 2
1303	=======================	=======================
1304				LOAD B
1305				DIVIDE
1306				DIVIDE
1307				<read barrier>
1308				LOAD A
1309
1310will force any value speculatively obtained to be reconsidered to an extent
1311dependent on the type of barrier used.  If there was no change made to the
1312speculated memory location, then the speculated value will just be used:
1313
1314	                                        :       :       +-------+
1315	                                        +-------+       |       |
1316	                                    --->| B->2  |------>|       |
1317	                                        +-------+       | CPU 2 |
1318	                                        :       :DIVIDE |       |
1319	                                        +-------+       |       |
1320	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1321	division speculates on the              +-------+   ~   |       |
1322	LOAD of A                               :       :   ~   |       |
1323	                                        :       :DIVIDE |       |
1324	                                        :       :   ~   |       |
1325	                                        :       :   ~   |       |
1326	                                    rrrrrrrrrrrrrrrr~   |       |
1327	                                        :       :   ~   |       |
1328	                                        :       :   ~-->|       |
1329	                                        :       :       |       |
1330	                                        :       :       +-------+
1331
1332
1333but if there was an update or an invalidation from another CPU pending, then
1334the speculation will be cancelled and the value reloaded:
1335
1336	                                        :       :       +-------+
1337	                                        +-------+       |       |
1338	                                    --->| B->2  |------>|       |
1339	                                        +-------+       | CPU 2 |
1340	                                        :       :DIVIDE |       |
1341	                                        +-------+       |       |
1342	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1343	division speculates on the              +-------+   ~   |       |
1344	LOAD of A                               :       :   ~   |       |
1345	                                        :       :DIVIDE |       |
1346	                                        :       :   ~   |       |
1347	                                        :       :   ~   |       |
1348	                                    rrrrrrrrrrrrrrrrr   |       |
1349	                                        +-------+       |       |
1350	The speculation is discarded --->   --->| A->1  |------>|       |
1351	and an updated value is                 +-------+       |       |
1352	retrieved                               :       :       +-------+
1353
1354
1355MULTICOPY ATOMICITY
-------------------
1357
1358Multicopy atomicity is a deeply intuitive notion about ordering that is
1359not always provided by real computer systems, namely that a given store
1360becomes visible at the same time to all CPUs, or, alternatively, that all
1361CPUs agree on the order in which all stores become visible.  However,
1362support of full multicopy atomicity would rule out valuable hardware
1363optimizations, so a weaker form called ``other multicopy atomicity''
1364instead guarantees only that a given store becomes visible at the same
1365time to all -other- CPUs.  The remainder of this document discusses this
1366weaker form, but for brevity will call it simply ``multicopy atomicity''.
1367
1368The following example demonstrates multicopy atomicity:
1369
1370	CPU 1			CPU 2			CPU 3
1371	=======================	=======================	=======================
1372		{ X = 0, Y = 0 }
1373	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
1374				<general barrier>	<read barrier>
1375				STORE Y=r1		LOAD X
1376
1377Suppose that CPU 2's load from X returns 1, which it then stores to Y,
1378and CPU 3's load from Y returns 1.  This indicates that CPU 1's store
1379to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
1380CPU 3's load from Y.  In addition, the memory barriers guarantee that
1381CPU 2 executes its load before its store, and CPU 3 loads from Y before
1382it loads from X.  The question is then "Can CPU 3's load from X return 0?"
1383
1384Because CPU 3's load from X in some sense comes after CPU 2's load, it
1385is natural to expect that CPU 3's load from X must therefore return 1.
1386This expectation follows from multicopy atomicity: if a load executing
1387on CPU B follows a load from the same variable executing on CPU A (and
1388CPU A did not originally store the value which it read), then on
1389multicopy-atomic systems, CPU B's load must return either the same value
1390that CPU A's load did or some later value.  However, the Linux kernel
1391does not require systems to be multicopy atomic.
1392
1393The use of a general memory barrier in the example above compensates
1394for any lack of multicopy atomicity.  In the example, if CPU 2's load
1395from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
1396from X must indeed also return 1.
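
Rendered in C with the kernel's marked accesses (a sketch only; the
function and variable names are illustrative, not taken from any kernel
source), the example might look like this:

	int x, y;

	void cpu1(void)
	{
		WRITE_ONCE(x, 1);
	}

	void cpu2(void)
	{
		int r1 = READ_ONCE(x);

		smp_mb();		/* the general barrier */
		WRITE_ONCE(y, r1);
	}

	void cpu3(void)
	{
		int r2 = READ_ONCE(y);
		int r3;

		smp_rmb();		/* the read barrier */
		r3 = READ_ONCE(x);	/* cannot read 0 if r1 == 1 && r2 == 1 */
	}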
1397
1398However, dependencies, read barriers, and write barriers are not always
1399able to compensate for non-multicopy atomicity.  For example, suppose
1400that CPU 2's general barrier is removed from the above example, leaving
1401only the data dependency shown below:
1402
1403	CPU 1			CPU 2			CPU 3
1404	=======================	=======================	=======================
1405		{ X = 0, Y = 0 }
1406	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
1407				<data dependency>	<read barrier>
1408				STORE Y=r1		LOAD X (reads 0)
1409
1410This substitution allows non-multicopy atomicity to run rampant: in
1411this example, it is perfectly legal for CPU 2's load from X to return 1,
CPU 3's load from Y to return 1, and CPU 3's load from X to return 0.
1413
The key point is that although CPU 2's data dependency orders its load
and store, it does not guarantee ordering of CPU 1's store.  Thus, if this
1416example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
1417store buffer or a level of cache, CPU 2 might have early access to CPU 1's
1418writes.  General barriers are therefore required to ensure that all CPUs
1419agree on the combined order of multiple accesses.
1420
1421General barriers can compensate not only for non-multicopy atomicity,
1422but can also generate additional ordering that can ensure that -all-
1423CPUs will perceive the same order of -all- operations.  In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
1425which means that only those CPUs on the chain are guaranteed to agree
1426on the combined order of the accesses.  For example, switching to C code
1427in deference to the ghost of Herman Hollerith:
1428
1429	int u, v, x, y, z;
1430
1431	void cpu0(void)
1432	{
1433		r0 = smp_load_acquire(&x);
1434		WRITE_ONCE(u, 1);
1435		smp_store_release(&y, 1);
1436	}
1437
1438	void cpu1(void)
1439	{
1440		r1 = smp_load_acquire(&y);
1441		r4 = READ_ONCE(v);
1442		r5 = READ_ONCE(u);
1443		smp_store_release(&z, 1);
1444	}
1445
1446	void cpu2(void)
1447	{
1448		r2 = smp_load_acquire(&z);
1449		smp_store_release(&x, 1);
1450	}
1451
1452	void cpu3(void)
1453	{
1454		WRITE_ONCE(v, 1);
1455		smp_mb();
1456		r3 = READ_ONCE(u);
1457	}
1458
1459Because cpu0(), cpu1(), and cpu2() participate in a chain of
1460smp_store_release()/smp_load_acquire() pairs, the following outcome
1461is prohibited:
1462
1463	r0 == 1 && r1 == 1 && r2 == 1
1464
1465Furthermore, because of the release-acquire relationship between cpu0()
1466and cpu1(), cpu1() must see cpu0()'s writes, so that the following
1467outcome is prohibited:
1468
1469	r1 == 1 && r5 == 0
1470
1471However, the ordering provided by a release-acquire chain is local
1472to the CPUs participating in that chain and does not apply to cpu3(),
1473at least aside from stores.  Therefore, the following outcome is possible:
1474
1475	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
1476
1477As an aside, the following outcome is also possible:
1478
1479	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
1480
1481Although cpu0(), cpu1(), and cpu2() will see their respective reads and
1482writes in order, CPUs not involved in the release-acquire chain might
1483well disagree on the order.  This disagreement stems from the fact that
1484the weak memory-barrier instructions used to implement smp_load_acquire()
1485and smp_store_release() are not required to order prior stores against
1486subsequent loads in all cases.  This means that cpu3() can see cpu0()'s
1487store to u as happening -after- cpu1()'s load from v, even though
1488both cpu0() and cpu1() agree that these two operations occurred in the
1489intended order.
1490
1491However, please keep in mind that smp_load_acquire() is not magic.
1492In particular, it simply reads from its argument with ordering.  It does
1493-not- ensure that any particular value will be read.  Therefore, the
1494following outcome is possible:
1495
1496	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
1497
1498Note that this outcome can happen even on a mythical sequentially
1499consistent system where nothing is ever reordered.
1500
1501To reiterate, if your code requires full ordering of all operations,
1502use general barriers throughout.
1503
1504
1505========================
1506EXPLICIT KERNEL BARRIERS
1507========================
1508
1509The Linux kernel has a variety of different barriers that act at different
1510levels:
1511
1512  (*) Compiler barrier.
1513
1514  (*) CPU memory barriers.
1515
1516  (*) MMIO write barrier.
1517
1518
1519COMPILER BARRIER
1520----------------
1521
1522The Linux kernel has an explicit compiler barrier function that prevents the
1523compiler from moving the memory accesses either side of it to the other side:
1524
1525	barrier();
1526
1527This is a general barrier -- there are no read-read or write-write
1528variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
1529thought of as weak forms of barrier() that affect only the specific
1530accesses flagged by the READ_ONCE() or WRITE_ONCE().
1531
1532The barrier() function has the following effects:
1533
1534 (*) Prevents the compiler from reordering accesses following the
1535     barrier() to precede any accesses preceding the barrier().
1536     One example use for this property is to ease communication between
1537     interrupt-handler code and the code that was interrupted.
1538
1539 (*) Within a loop, forces the compiler to load the variables used
1540     in that loop's conditional on each pass through that loop.
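
     For instance, a minimal sketch of this second property, assuming a
     shared variable 'flag' that is set elsewhere (by another CPU or by
     an interrupt handler):

	while (!flag)
		barrier();	/* forces 'flag' to be reloaded each pass */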
1541
1542The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
1543optimizations that, while perfectly safe in single-threaded code, can
1544be fatal in concurrent code.  Here are some examples of these sorts
1545of optimizations:
1546
1547 (*) The compiler is within its rights to reorder loads and stores
1548     to the same variable, and in some cases, the CPU is within its
1549     rights to reorder loads to the same variable.  This means that
1550     the following code:
1551
1552	a[0] = x;
1553	a[1] = x;
1554
1555     Might result in an older value of x stored in a[1] than in a[0].
1556     Prevent both the compiler and the CPU from doing this as follows:
1557
1558	a[0] = READ_ONCE(x);
1559	a[1] = READ_ONCE(x);
1560
1561     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
1562     accesses from multiple CPUs to a single variable.
1563
1564 (*) The compiler is within its rights to merge successive loads from
1565     the same variable.  Such merging can cause the compiler to "optimize"
1566     the following code:
1567
1568	while (tmp = a)
1569		do_something_with(tmp);
1570
1571     into the following code, which, although in some sense legitimate
1572     for single-threaded code, is almost certainly not what the developer
1573     intended:
1574
1575	if (tmp = a)
1576		for (;;)
1577			do_something_with(tmp);
1578
1579     Use READ_ONCE() to prevent the compiler from doing this to you:
1580
1581	while (tmp = READ_ONCE(a))
1582		do_something_with(tmp);
1583
1584 (*) The compiler is within its rights to reload a variable, for example,
1585     in cases where high register pressure prevents the compiler from
1586     keeping all data of interest in registers.  The compiler might
1587     therefore optimize the variable 'tmp' out of our previous example:
1588
1589	while (tmp = a)
1590		do_something_with(tmp);
1591
1592     This could result in the following code, which is perfectly safe in
1593     single-threaded code, but can be fatal in concurrent code:
1594
1595	while (a)
1596		do_something_with(a);
1597
1598     For example, the optimized version of this code could result in
1599     passing a zero to do_something_with() in the case where the variable
1600     a was modified by some other CPU between the "while" statement and
1601     the call to do_something_with().
1602
1603     Again, use READ_ONCE() to prevent the compiler from doing this:
1604
1605	while (tmp = READ_ONCE(a))
1606		do_something_with(tmp);
1607
1608     Note that if the compiler runs short of registers, it might save
1609     tmp onto the stack.  The overhead of this saving and later restoring
1610     is why compilers reload variables.  Doing so is perfectly safe for
1611     single-threaded code, so you need to tell the compiler about cases
1612     where it is not safe.
1613
1614 (*) The compiler is within its rights to omit a load entirely if it knows
1615     what the value will be.  For example, if the compiler can prove that
1616     the value of variable 'a' is always zero, it can optimize this code:
1617
1618	while (tmp = a)
1619		do_something_with(tmp);
1620
1621     Into this:
1622
1623	do { } while (0);
1624
1625     This transformation is a win for single-threaded code because it
1626     gets rid of a load and a branch.  The problem is that the compiler
1627     will carry out its proof assuming that the current CPU is the only
1628     one updating variable 'a'.  If variable 'a' is shared, then the
1629     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
1630     compiler that it doesn't know as much as it thinks it does:
1631
1632	while (tmp = READ_ONCE(a))
1633		do_something_with(tmp);
1634
1635     But please note that the compiler is also closely watching what you
1636     do with the value after the READ_ONCE().  For example, suppose you
1637     do the following and MAX is a preprocessor macro with the value 1:
1638
1639	while ((tmp = READ_ONCE(a)) % MAX)
1640		do_something_with(tmp);
1641
1642     Then the compiler knows that the result of the "%" operator applied
1643     to MAX will always be zero, again allowing the compiler to optimize
1644     the code into near-nonexistence.  (It will still load from the
1645     variable 'a'.)
1646
1647 (*) Similarly, the compiler is within its rights to omit a store entirely
1648     if it knows that the variable already has the value being stored.
1649     Again, the compiler assumes that the current CPU is the only one
1650     storing into the variable, which can cause the compiler to do the
1651     wrong thing for shared variables.  For example, suppose you have
1652     the following:
1653
1654	a = 0;
1655	... Code that does not store to variable a ...
1656	a = 0;
1657
1658     The compiler sees that the value of variable 'a' is already zero, so
1659     it might well omit the second store.  This would come as a fatal
1660     surprise if some other CPU might have stored to variable 'a' in the
1661     meantime.
1662
1663     Use WRITE_ONCE() to prevent the compiler from making this sort of
1664     wrong guess:
1665
1666	WRITE_ONCE(a, 0);
1667	... Code that does not store to variable a ...
1668	WRITE_ONCE(a, 0);
1669
1670 (*) The compiler is within its rights to reorder memory accesses unless
1671     you tell it not to.  For example, consider the following interaction
1672     between process-level code and an interrupt handler:
1673
1674	void process_level(void)
1675	{
1676		msg = get_message();
1677		flag = true;
1678	}
1679
1680	void interrupt_handler(void)
1681	{
1682		if (flag)
1683			process_message(msg);
1684	}
1685
1686     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
1688     win for single-threaded code:
1689
1690	void process_level(void)
1691	{
1692		flag = true;
1693		msg = get_message();
1694	}
1695
     If the interrupt occurs between these two statements, then
1697     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
1698     to prevent this as follows:
1699
1700	void process_level(void)
1701	{
1702		WRITE_ONCE(msg, get_message());
1703		WRITE_ONCE(flag, true);
1704	}
1705
1706	void interrupt_handler(void)
1707	{
1708		if (READ_ONCE(flag))
1709			process_message(READ_ONCE(msg));
1710	}
1711
1712     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
1713     interrupt_handler() are needed if this interrupt handler can itself
1714     be interrupted by something that also accesses 'flag' and 'msg',
1715     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
1716     and WRITE_ONCE() are not needed in interrupt_handler() other than
1717     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels; in fact, if an
1719     interrupt handler returns with interrupts enabled, you will get a
1720     WARN_ONCE() splat.)
1721
1722     You should assume that the compiler can move READ_ONCE() and
1723     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
1724     barrier(), or similar primitives.
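
     For example (a hedged sketch; 'shared' and 'local' are illustrative
     names), the compiler may legally transform:

	local = some_plain_variable;
	WRITE_ONCE(shared, 1);

     into:

	WRITE_ONCE(shared, 1);
	local = some_plain_variable;

     because the assignment to 'local' contains no marked access.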
1725
1726     This effect could also be achieved using barrier(), but READ_ONCE()
1727     and WRITE_ONCE() are more selective:  With READ_ONCE() and
1728     WRITE_ONCE(), the compiler need only forget the contents of the
1729     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
1731     cached in any machine registers.  Of course, the compiler must also
1732     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
1733     though the CPU of course need not do so.
1734
1735 (*) The compiler is within its rights to invent stores to a variable,
1736     as in the following example:
1737
1738	if (a)
1739		b = a;
1740	else
1741		b = 42;
1742
1743     The compiler might save a branch by optimizing this as follows:
1744
1745	b = 42;
1746	if (a)
1747		b = a;
1748
1749     In single-threaded code, this is not only safe, but also saves
1750     a branch.  Unfortunately, in concurrent code, this optimization
1751     could cause some other CPU to see a spurious value of 42 -- even
1752     if variable 'a' was never zero -- when loading variable 'b'.
1753     Use WRITE_ONCE() to prevent this as follows:
1754
1755	if (a)
1756		WRITE_ONCE(b, a);
1757	else
1758		WRITE_ONCE(b, 42);
1759
1760     The compiler can also invent loads.  These are usually less
1761     damaging, but they can result in cache-line bouncing and thus in
1762     poor performance and scalability.  Use READ_ONCE() to prevent
1763     invented loads.
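
     As a hedged sketch of an invented load, given a plain variable 'a',
     the compiler might transform:

	tmp = a;
	if (tmp > 0 && tmp < 10)
		do_something_with(tmp);

     into code that loads from 'a' separately for each comparison, which
     can observe two different values if 'a' is concurrently updated.
     Writing tmp = READ_ONCE(a); limits the compiler to a single load.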
1764
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which a
     single large access is replaced by multiple smaller accesses.  For
     example, given an architecture having
1769     16-bit store instructions with 7-bit immediate fields, the compiler
1770     might be tempted to use two 16-bit store-immediate instructions to
1771     implement the following 32-bit store:
1772
1773	p = 0x00010002;
1774
1775     Please note that GCC really does use this sort of optimization,
1776     which is not surprising given that it would likely take more
1777     than two instructions to build the constant and then store it.
1778     This optimization can therefore be a win in single-threaded code.
1779     In fact, a recent bug (since fixed) caused GCC to incorrectly use
1780     this optimization in a volatile store.  In the absence of such bugs,
1781     use of WRITE_ONCE() prevents store tearing in the following example:
1782
1783	WRITE_ONCE(p, 0x00010002);
1784
1785     Use of packed structures can also result in load and store tearing,
1786     as in this example:
1787
1788	struct __attribute__((__packed__)) foo {
1789		short a;
1790		int b;
1791		short c;
1792	};
1793	struct foo foo1, foo2;
1794	...
1795
1796	foo2.a = foo1.a;
1797	foo2.b = foo1.b;
1798	foo2.c = foo1.c;
1799
1800     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
1801     volatile markings, the compiler would be well within its rights to
1802     implement these three assignment statements as a pair of 32-bit
1803     loads followed by a pair of 32-bit stores.  This would result in
1804     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
1805     and WRITE_ONCE() again prevent tearing in this example:
1806
1807	foo2.a = foo1.a;
1808	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
1809	foo2.c = foo1.c;
1810
1811All that aside, it is never necessary to use READ_ONCE() and
1812WRITE_ONCE() on a variable that has been marked volatile.  For example,
1813because 'jiffies' is marked volatile, it is never necessary to
1814say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect when
their argument is already marked volatile.
1817
1818Please note that these compiler barriers have no direct effect on the CPU,
1819which may then reorder things however it wishes.
1820
1821
1822CPU MEMORY BARRIERS
1823-------------------
1824
The Linux kernel has seven basic CPU memory barriers:
1826
1827	TYPE		MANDATORY		SMP CONDITIONAL
1828	===============	=======================	===========================
1829	GENERAL		mb()			smp_mb()
1830	WRITE		wmb()			smp_wmb()
1831	READ		rmb()			smp_rmb()
1832	DATA DEPENDENCY				READ_ONCE()
1833
1834
1835All memory barriers except the data dependency barriers imply a compiler
1836barrier.  Data dependencies do not impose any additional compiler ordering.
1837
Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (e.g. `a[b]` would have to load
the value of b before loading a[b]); however, the C specification does
not guarantee that the compiler will not speculate the value of b (e.g.
guessing that it is equal to 1) and load a before b (e.g. tmp = a[1];
if (b != 1) tmp = a[b]; ).  There is also the problem of the compiler
reloading b after having loaded a[b], thus ending up with a newer copy
of b than of a[b].  A consensus has not yet been reached about these
problems; however, the READ_ONCE() macro is a good place to start
looking.
1847
1848SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1849systems because it is assumed that a CPU will appear to be self-consistent,
1850and will order overlapping accesses correctly with respect to itself.
1851However, see the subsection on "Virtual Machine Guests" below.
1852
1853[!] Note that SMP memory barriers _must_ be used to control the ordering of
1854references to shared memory on SMP systems, though the use of locking instead
1855is sufficient.
1856
1857Mandatory barriers should not be used to control SMP effects, since mandatory
1858barriers impose unnecessary overhead on both SMP and UP systems. They may,
1859however, be used to control MMIO effects on accesses through relaxed memory I/O
1860windows.  These barriers are required even on non-SMP systems as they affect
1861the order in which memory operations appear to a device by prohibiting both the
1862compiler and the CPU from reordering them.
1863
1864
1865There are some more advanced barrier functions:
1866
1867 (*) smp_store_mb(var, value)
1868
1869     This assigns the value to the variable and then inserts a full memory
1870     barrier after it.  It isn't guaranteed to insert anything more than a
1871     compiler barrier in a UP compilation.
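
     In other words, it is roughly equivalent to the following (a sketch
     of the guarantee, not of any particular architecture's
     implementation):

	WRITE_ONCE(var, value);
	smp_mb();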
1872
1873
1874 (*) smp_mb__before_atomic();
1875 (*) smp_mb__after_atomic();
1876
1877     These are for use with atomic (such as add, subtract, increment and
1878     decrement) functions that don't return a value, especially when used for
1879     reference counting.  These functions do not imply memory barriers.
1880
1881     These are also used for atomic bitop functions that do not return a
1882     value (such as set_bit and clear_bit).
1883
1884     As an example, consider a piece of code that marks an object as being dead
1885     and then decrements the object's reference count:
1886
1887	obj->dead = 1;
1888	smp_mb__before_atomic();
1889	atomic_dec(&obj->ref_count);
1890
1891     This makes sure that the death mark on the object is perceived to be set
1892     *before* the reference counter is decremented.
1893
1894     See Documentation/atomic_{t,bitops}.txt for more information.
1895
1896
1897 (*) dma_wmb();
1898 (*) dma_rmb();
1899
1900     These are for use with consistent memory to guarantee the ordering
1901     of writes or reads of shared memory accessible to both the CPU and a
1902     DMA capable device.
1903
1904     For example, consider a device driver that shares memory with a device
1905     and uses a descriptor status value to indicate if the descriptor belongs
1906     to the device or the CPU, and a doorbell to notify it when new
1907     descriptors are available:
1908
1909	if (desc->status != DEVICE_OWN) {
1910		/* do not read data until we own descriptor */
1911		dma_rmb();
1912
1913		/* read/modify data */
1914		read_data = desc->data;
1915		desc->data = write_data;
1916
1917		/* flush modifications before status update */
1918		dma_wmb();
1919
1920		/* assign ownership */
1921		desc->status = DEVICE_OWN;
1922
1923		/* notify device of new descriptors */
1924		writel(DESC_NOTIFY, doorbell);
1925	}
1926
     The dma_rmb() allows us to guarantee the device has released ownership
1928     before we read the data from the descriptor, and the dma_wmb() allows
1929     us to guarantee the data is written to the descriptor before the device
1930     can see it now has ownership.  Note that, when using writel(), a prior
1931     wmb() is not needed to guarantee that the cache coherent memory writes
1932     have completed before writing to the MMIO region.  The cheaper
1933     writel_relaxed() does not provide this guarantee and must not be used
1934     here.
1935
1936     See the subsection "Kernel I/O barrier effects" for more information on
1937     relaxed I/O accessors and the Documentation/DMA-API.txt file for more
1938     information on consistent memory.
1939
1940
1941MMIO WRITE BARRIER
1942------------------
1943
1944The Linux kernel also has a special barrier for use with memory-mapped I/O
1945writes:
1946
1947	mmiowb();
1948
1949This is a variation on the mandatory write barrier that causes writes to weakly
1950ordered I/O regions to be partially ordered.  Its effects may go beyond the
1951CPU->Hardware interface and actually affect the hardware at some level.
1952
1953See the subsection "Acquires vs I/O accesses" for more information.
1954
1955
1956===============================
1957IMPLICIT KERNEL MEMORY BARRIERS
1958===============================
1959
Some of the other functions in the Linux kernel imply memory barriers, amongst
1961which are locking and scheduling functions.
1962
1963This specification is a _minimum_ guarantee; any particular architecture may
1964provide more substantial guarantees, but these may not be relied upon outside
1965of arch specific code.
1966
1967
1968LOCK ACQUISITION FUNCTIONS
1969--------------------------
1970
1971The Linux kernel has a number of locking constructs:
1972
1973 (*) spin locks
1974 (*) R/W spin locks
1975 (*) mutexes
1976 (*) semaphores
1977 (*) R/W semaphores
1978
1979In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1980for each construct.  These operations all imply certain barriers:
1981
1982 (1) ACQUIRE operation implication:
1983
1984     Memory operations issued after the ACQUIRE will be completed after the
1985     ACQUIRE operation has completed.
1986
1987     Memory operations issued before the ACQUIRE may be completed after
1988     the ACQUIRE operation has completed.
1989
1990 (2) RELEASE operation implication:
1991
1992     Memory operations issued before the RELEASE will be completed before the
1993     RELEASE operation has completed.
1994
1995     Memory operations issued after the RELEASE may be completed before the
1996     RELEASE operation has completed.
1997
1998 (3) ACQUIRE vs ACQUIRE implication:
1999
2000     All ACQUIRE operations issued before another ACQUIRE operation will be
2001     completed before that ACQUIRE operation.
2002
2003 (4) ACQUIRE vs RELEASE implication:
2004
2005     All ACQUIRE operations issued before a RELEASE operation will be
2006     completed before the RELEASE operation.
2007
2008 (5) Failed conditional ACQUIRE implication:
2009
2010     Certain locking variants of the ACQUIRE operation may fail, either due to
2011     being unable to get the lock immediately, or due to receiving an unblocked
2012     signal whilst asleep waiting for the lock to become available.  Failed
2013     locks do not imply any sort of barrier.
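
     For example, a failed spin_trylock() provides no ordering at all
     (a minimal sketch; 'my_lock' and the helper functions are
     illustrative):

	if (spin_trylock(&my_lock)) {
		/* Success implies an ACQUIRE barrier here... */
		do_something_locked();
		spin_unlock(&my_lock);	/* ...and a RELEASE barrier here. */
	} else {
		/* Failure implies no barrier: accesses here may be
		 * freely reordered with earlier ones. */
		do_something_else();
	}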
2014
2015[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
2016one-way barriers is that the effects of instructions outside of a critical
2017section may seep into the inside of the critical section.
2018
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
2020because it is possible for an access preceding the ACQUIRE to happen after the
2021ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
2022the two accesses can themselves then cross:
2023
2024	*A = a;
2025	ACQUIRE M
2026	RELEASE M
2027	*B = b;
2028
2029may occur as:
2030
2031	ACQUIRE M, STORE *B, STORE *A, RELEASE M
2032
2033When the ACQUIRE and RELEASE are a lock acquisition and release,
2034respectively, this same reordering can occur if the lock's ACQUIRE and
2035RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.
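
Where a full barrier is required immediately after taking a lock, the
kernel provides smp_mb__after_spinlock() for this purpose (a sketch;
'my_lock' is illustrative):

	spin_lock(&my_lock);
	smp_mb__after_spinlock();	/* combined with the spin_lock(),
					 * provides a full memory barrier */
	...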
2038
2039Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
2040not imply a full memory barrier.  Therefore, the CPU's execution of the
2041critical sections corresponding to the RELEASE and the ACQUIRE can cross,
2042so that:
2043
2044	*A = a;
2045	RELEASE M
2046	ACQUIRE N
2047	*B = b;
2048
2049could occur as:
2050
2051	ACQUIRE N, STORE *B, STORE *A, RELEASE M
2052
2053It might appear that this reordering could introduce a deadlock.
2054However, this cannot happen because if such a deadlock threatened,
2055the RELEASE would simply complete, thereby avoiding the deadlock.
2056
2057	Why does this work?
2058
2059	One key point is that we are only talking about the CPU doing
2060	the reordering, not the compiler.  If the compiler (or, for
2061	that matter, the developer) switched the operations, deadlock
2062	-could- occur.
2063
2064	But suppose the CPU reordered the operations.  In this case,
2065	the unlock precedes the lock in the assembly code.  The CPU
2066	simply elected to try executing the later lock operation first.
2067	If there is a deadlock, this lock operation will simply spin (or
2068	try to sleep, but more on that later).	The CPU will eventually
2069	execute the unlock operation (which preceded the lock operation
2070	in the assembly code), which will unravel the potential deadlock,
2071	allowing the lock operation to succeed.
2072
2073	But what if the lock is a sleeplock?  In that case, the code will
2074	try to enter the scheduler, where it will eventually encounter
2075	a memory barrier, which will force the earlier unlock operation
2076	to complete, again unraveling the deadlock.  There might be
2077	a sleep-unlock race, but the locking primitive needs to resolve
2078	such races properly in any case.
2079
2080Locks and semaphores may not provide any guarantee of ordering on UP compiled
2081systems, and so cannot be counted on in such a situation to actually achieve
2082anything at all - especially with respect to I/O accesses - unless combined
2083with interrupt disabling operations.
2084
2085See also the section on "Inter-CPU acquiring barrier effects".
2086
2087
2088As an example, consider the following:
2089
2090	*A = a;
2091	*B = b;
2092	ACQUIRE
2093	*C = c;
2094	*D = d;
2095	RELEASE
2096	*E = e;
2097	*F = f;
2098
2099The following sequence of events is acceptable:
2100
2101	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
2102
2103	[+] Note that {*F,*A} indicates a combined access.
2104
2105But none of the following are:
2106
2107	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
2108	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
2109	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
2110	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E
2111
2112
2113
2114INTERRUPT DISABLING FUNCTIONS
2115-----------------------------
2116
2117Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
2118(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
2120other means.
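
For example (a hedged sketch; 'dev', its fields, and GO are illustrative
names), ordering a normal memory write against a subsequent relaxed MMIO
write still needs an explicit barrier inside the interrupt-disabled
region:

	local_irq_save(flags);			/* compiler barrier only */
	dev->desc->cmd = 1;			/* normal memory write */
	wmb();					/* required: neither the IRQ
						 * functions nor
						 * writel_relaxed() order
						 * this store */
	writel_relaxed(GO, dev->regs + CTRL);
	local_irq_restore(flags);		/* compiler barrier only */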
2121
2122
2123SLEEP AND WAKE-UP FUNCTIONS
2124---------------------------
2125
2126Sleeping and waking on an event flagged in global data can be viewed as an
2127interaction between two pieces of data: the task state of the task waiting for
2128the event and the global data used to indicate the event.  To make sure that
2129these appear to happen in the right order, the primitives to begin the process
2130of going to sleep, and the primitives to initiate a wake up imply certain
2131barriers.
2132
2133Firstly, the sleeper normally follows something like this sequence of events:
2134
2135	for (;;) {
2136		set_current_state(TASK_UNINTERRUPTIBLE);
2137		if (event_indicated)
2138			break;
2139		schedule();
2140	}
2141
2142A general memory barrier is interpolated automatically by set_current_state()
2143after it has altered the task state:
2144
2145	CPU 1
2146	===============================
2147	set_current_state();
2148	  smp_store_mb();
2149	    STORE current->state
2150	    <general barrier>
2151	LOAD event_indicated
2152
2153set_current_state() may be wrapped by:
2154
2155	prepare_to_wait();
2156	prepare_to_wait_exclusive();
2157
2158which therefore also imply a general memory barrier after setting the state.
2159The whole sequence above is available in various canned forms, all of which
2160interpolate the memory barrier in the right place:
2161
2162	wait_event();
2163	wait_event_interruptible();
2164	wait_event_interruptible_exclusive();
2165	wait_event_interruptible_timeout();
2166	wait_event_killable();
2167	wait_event_timeout();
2168	wait_on_bit();
2169	wait_on_bit_lock();
2170
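For example, using wait_event(), the whole open-coded sequence above
collapses to the following (assuming a wait queue 'wq' that the waking
side also uses):

	wait_event(wq, event_indicated);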
2171
2172Secondly, code that performs a wake up normally follows something like this:
2173
2174	event_indicated = 1;
2175	wake_up(&event_wait_queue);
2176
2177or:
2178
2179	event_indicated = 1;
2180	wake_up_process(event_daemon);
2181
2182A general memory barrier is executed by wake_up() if it wakes something up.
2183If it doesn't wake anything up then a memory barrier may or may not be
2184executed; you must not rely on it.  The barrier occurs before the task state
2185is accessed, in particular, it sits between the STORE to indicate the event
2186and the STORE to set TASK_RUNNING:
2187
2188	CPU 1 (Sleeper)			CPU 2 (Waker)
2189	===============================	===============================
2190	set_current_state();		STORE event_indicated
2191	  smp_store_mb();		wake_up();
2192	    STORE current->state	  ...
2193	    <general barrier>		  <general barrier>
2194	LOAD event_indicated		  if ((LOAD task->state) & TASK_NORMAL)
2195					    STORE task->state
2196
2197where "task" is the thread being woken up and it equals CPU 1's "current".
2198
2199To repeat, a general memory barrier is guaranteed to be executed by wake_up()
2200if something is actually awakened, but otherwise there is no such guarantee.
2201To see this, consider the following sequence of events, where X and Y are both
2202initially zero:
2203
2204	CPU 1				CPU 2
2205	===============================	===============================
2206	X = 1;				Y = 1;
2207	smp_mb();			wake_up();
2208	LOAD Y				LOAD X
2209
2210If a wakeup does occur, one (at least) of the two loads must see 1.  If, on
2211the other hand, a wakeup does not occur, both loads might see 0.
2212
2213wake_up_process() always executes a general memory barrier.  The barrier again
2214occurs before the task state is accessed.  In particular, if the wake_up() in
2215the previous snippet were replaced by a call to wake_up_process() then one of
2216the two loads would be guaranteed to see 1.
2217
2218The available waker functions include:
2219
2220	complete();
2221	wake_up();
2222	wake_up_all();
2223	wake_up_bit();
2224	wake_up_interruptible();
2225	wake_up_interruptible_all();
2226	wake_up_interruptible_nr();
2227	wake_up_interruptible_poll();
2228	wake_up_interruptible_sync();
2229	wake_up_interruptible_sync_poll();
2230	wake_up_locked();
2231	wake_up_locked_poll();
2232	wake_up_nr();
2233	wake_up_poll();
2234	wake_up_process();
2235
In terms of memory ordering, these functions all provide the same guarantees
as a wake_up() (or stronger).
2238
2239[!] Note that the memory barriers implied by the sleeper and the waker do _not_
2240order multiple stores before the wake-up with respect to loads of those stored
2241values after the sleeper has called set_current_state().  For instance, if the
2242sleeper does:
2243
2244	set_current_state(TASK_INTERRUPTIBLE);
2245	if (event_indicated)
2246		break;
2247	__set_current_state(TASK_RUNNING);
2248	do_something(my_data);
2249
2250and the waker does:
2251
2252	my_data = value;
2253	event_indicated = 1;
2254	wake_up(&event_wait_queue);
2255
2256there's no guarantee that the change to event_indicated will be perceived by
2257the sleeper as coming after the change to my_data.  In such a circumstance, the
2258code on both sides must interpolate its own memory barriers between the
2259separate data accesses.  Thus the above sleeper ought to do:
2260
2261	set_current_state(TASK_INTERRUPTIBLE);
2262	if (event_indicated) {
2263		smp_rmb();
2264		do_something(my_data);
2265	}
2266
2267and the waker should do:
2268
2269	my_data = value;
2270	smp_wmb();
2271	event_indicated = 1;
2272	wake_up(&event_wait_queue);
2273
2274
2275MISCELLANEOUS FUNCTIONS
2276-----------------------
2277
2278Other functions that imply barriers:
2279
2280 (*) schedule() and similar imply full memory barriers.
2281
2282
2283===================================
2284INTER-CPU ACQUIRING BARRIER EFFECTS
2285===================================
2286
2287On SMP systems locking primitives give a more substantial form of barrier: one
2288that does affect memory access ordering on other CPUs, within the context of
2289conflict on any particular lock.
2290
2291
2292ACQUIRES VS MEMORY ACCESSES
2293---------------------------
2294
2295Consider the following: the system has a pair of spinlocks (M) and (Q), and
2296three CPUs; then should the following sequence of events occur:
2297
2298	CPU 1				CPU 2
2299	===============================	===============================
2300	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
2301	ACQUIRE M			ACQUIRE Q
2302	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
2303	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
2304	RELEASE M			RELEASE Q
2305	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);
2306
2307Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2308through *H occur in, other than the constraints imposed by the separate locks
2309on the separate CPUs.  It might, for example, see:
2310
2311	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2312
2313But it won't see any of:
2314
2315	*B, *C or *D preceding ACQUIRE M
2316	*A, *B or *C following RELEASE M
2317	*F, *G or *H preceding ACQUIRE Q
2318	*E, *F or *G following RELEASE Q
2319
2320
2321
2322ACQUIRES VS I/O ACCESSES
2323------------------------
2324
2325Under certain circumstances (especially involving NUMA), I/O accesses within
2326two spinlocked sections on two different CPUs may be seen as interleaved by the
2327PCI bridge, because the PCI bridge does not necessarily participate in the
2328cache-coherence protocol, and is therefore incapable of issuing the required
2329read memory barriers.
2330
2331For example:
2332
2333	CPU 1				CPU 2
2334	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
2339					spin_lock(Q);
2340					writel(4, ADDR);
2341					writel(5, DATA);
2342					spin_unlock(Q);
2343
2344may be seen by the PCI bridge as follows:
2345
2346	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2347
2348which would probably cause the hardware to malfunction.
2349
2350
2351What is necessary here is to intervene with an mmiowb() before dropping the
2352spinlock, for example:
2353
2354	CPU 1				CPU 2
2355	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
2359	mmiowb();
2360	spin_unlock(Q);
2361					spin_lock(Q);
2362					writel(4, ADDR);
2363					writel(5, DATA);
2364					mmiowb();
2365					spin_unlock(Q);
2366
2367this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2368before either of the stores issued on CPU 2.
2369
2370
2371Furthermore, following a store by a load from the same device obviates the need
2372for the mmiowb(), because the load forces the store to complete before the load
2373is performed:
2374
2375	CPU 1				CPU 2
2376	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2379	a = readl(DATA);
2380	spin_unlock(Q);
2381					spin_lock(Q);
2382					writel(4, ADDR);
2383					b = readl(DATA);
2384					spin_unlock(Q);
2385
2386
2387See Documentation/driver-api/device-io.rst for more information.
2388
2389
2390=================================
2391WHERE ARE MEMORY BARRIERS NEEDED?
2392=================================
2393
2394Under normal operation, memory operation reordering is generally not going to
2395be a problem as a single-threaded linear piece of code will still appear to
2396work correctly, even if it's in an SMP kernel.  There are, however, four
2397circumstances in which reordering definitely _could_ be a problem:
2398
2399 (*) Interprocessor interaction.
2400
2401 (*) Atomic operations.
2402
2403 (*) Accessing devices.
2404
2405 (*) Interrupts.
2406
2407
2408INTERPROCESSOR INTERACTION
2409--------------------------
2410
2411When there's a system with more than one processor, more than one CPU in the
2412system may be working on the same data set at the same time.  This can cause
2413synchronisation problems, and the usual way of dealing with them is to use
2414locks.  Locks, however, are quite expensive, and so it may be preferable to
2415operate without the use of a lock if at all possible.  In such a case
2416operations that affect both CPUs may have to be carefully ordered to prevent
2417a malfunction.
2418
2419Consider, for example, the R/W semaphore slow path.  Here a waiting process is
2420queued on the semaphore, by virtue of it having a piece of its stack linked to
2421the semaphore's list of waiting processes:
2422
2423	struct rw_semaphore {
2424		...
2425		spinlock_t lock;
2426		struct list_head waiters;
2427	};
2428
2429	struct rwsem_waiter {
2430		struct list_head list;
2431		struct task_struct *task;
2432	};
2433
2434To wake up a particular waiter, the up_read() or up_write() functions have to:
2435
 (1) read the next pointer from this waiter's record to know where the
2437     next waiter record is;
2438
2439 (2) read the pointer to the waiter's task structure;
2440
2441 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2442
2443 (4) call wake_up_process() on the task; and
2444
2445 (5) release the reference held on the waiter's task struct.
2446
2447In other words, it has to perform this sequence of events:
2448
2449	LOAD waiter->list.next;
2450	LOAD waiter->task;
2451	STORE waiter->task;
2452	CALL wakeup
2453	RELEASE task
2454
2455and if any of these steps occur out of order, then the whole thing may
2456malfunction.
2457
2458Once it has queued itself and dropped the semaphore lock, the waiter does not
2459get the lock again; it instead just waits for its task pointer to be cleared
2460before proceeding.  Since the record is on the waiter's stack, this means that
2461if the task pointer is cleared _before_ the next pointer in the list is read,
2462another CPU might start processing the waiter and might clobber the waiter's
2463stack before the up*() function has a chance to read the next pointer.
2464
2465Consider then what might happen to the above sequence of events:
2466
2467	CPU 1				CPU 2
2468	===============================	===============================
2469					down_xxx()
2470					Queue waiter
2471					Sleep
2472	up_yyy()
2473	LOAD waiter->task;
2474	STORE waiter->task;
2475					Woken up by other event
2476	<preempt>
2477					Resume processing
2478					down_xxx() returns
2479					call foo()
2480					foo() clobbers *waiter
2481	</preempt>
2482	LOAD waiter->list.next;
2483	--- OOPS ---
2484
2485This could be dealt with using the semaphore lock, but then the down_xxx()
2486function has to needlessly get the spinlock again after being woken up.
2487
2488The way to deal with this is to insert a general SMP memory barrier:
2489
2490	LOAD waiter->list.next;
2491	LOAD waiter->task;
2492	smp_mb();
2493	STORE waiter->task;
2494	CALL wakeup
2495	RELEASE task
2496
2497In this case, the barrier makes a guarantee that all memory accesses before the
2498barrier will appear to happen before all the memory accesses after the barrier
2499with respect to the other CPUs on the system.  It does _not_ guarantee that all
2500the memory accesses before the barrier will be complete by the time the barrier
2501instruction itself is complete.
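
Rendered as C, the fixed sequence might look like this (a simplified
sketch; the real rwsem code is considerably more involved):

	struct list_head *next = waiter->list.next;	/* step (1) */
	struct task_struct *tsk = waiter->task;	/* step (2) */

	smp_mb();			/* complete the loads first */
	waiter->task = NULL;		/* step (3): waiter may now vanish */
	wake_up_process(tsk);		/* step (4) */
	put_task_struct(tsk);		/* step (5): drop the reference */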
2502
2503On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2504compiler barrier, thus making sure the compiler emits the instructions in the
2505right order without actually intervening in the CPU.  Since there's only one
2506CPU, that CPU's dependency ordering logic will take care of everything else.
2507
2508
2509ATOMIC OPERATIONS
2510-----------------
2511
2512Whilst they are technically interprocessor interaction considerations, atomic
2513operations are noted specially as some of them imply full memory barriers and
2514some don't, but they're very heavily relied on as a group throughout the
2515kernel.
2516
2517See Documentation/atomic_t.txt for more information.
2518
2519
2520ACCESSING DEVICES
2521-----------------
2522
2523Many devices can be memory mapped, and so appear to the CPU as if they're just
2524a set of memory locations.  To control such a device, the driver usually has to
2525make the right memory accesses in exactly the right order.
2526
2527However, having a clever CPU or a clever compiler creates a potential problem
2528in that the carefully sequenced accesses in the driver code won't reach the
2529device in the requisite order if the CPU or the compiler thinks it is more
2530efficient to reorder, combine or merge accesses - something that would cause
2531the device to malfunction.
2532
2533Inside of the Linux kernel, I/O should be done through the appropriate accessor
2534routines - such as inb() or writel() - which know how to make such accesses
2535appropriately sequential.  Whilst this, for the most part, renders the explicit
2536use of memory barriers unnecessary, there are a couple of situations where they
2537might be needed:
2538
2539 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers, locks should be used and mmiowb() must be
2541     issued prior to unlocking the critical section.
2542
2543 (2) If the accessor functions are used to refer to an I/O memory window with
2544     relaxed memory access properties, then _mandatory_ memory barriers are
2545     required to enforce ordering.
2546
2547See Documentation/driver-api/device-io.rst for more information.
2548
2549
2550INTERRUPTS
2551----------
2552
2553A driver may be interrupted by its own interrupt service routine, and thus the
2554two parts of the driver may interfere with each other's attempts to control or
2555access the device.
2556
2557This may be alleviated - at least in part - by disabling local interrupts (a
2558form of locking), such that the critical operations are all contained within
2559the interrupt-disabled section in the driver.  Whilst the driver's interrupt
2560routine is executing, the driver's core may not run on the same CPU, and its
2561interrupt is not permitted to happen again until the current interrupt has been
2562handled, thus the interrupt handler does not need to lock against that.
2563
2564However, consider a driver that was talking to an ethernet card that sports an
2565address register and a data register.  If that driver's core talks to the card
2566under interrupt-disablement and then the driver's interrupt handler is invoked:
2567
2568	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(4, ADDR);
	q = readw(DATA);
2575	</interrupt>
2576
2577The store to the data register might happen after the second store to the
2578address register if ordering rules are sufficiently relaxed:
2579
2580	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2581
2582
2583If ordering rules are relaxed, it must be assumed that accesses done inside an
2584interrupt disabled section may leak outside of it and may interleave with
2585accesses performed in an interrupt - and vice versa - unless implicit or
2586explicit barriers are used.
2587
2588Normally this won't be a problem because the I/O accesses done inside such
2589sections will include synchronous load operations on strictly ordered I/O
2590registers that form implicit I/O barriers.  If this isn't sufficient then an
2591mmiowb() may need to be used explicitly.
2592
2593
2594A similar situation may occur between an interrupt routine and two routines
2595running on separate CPUs that communicate with each other.  If such a case is
2596likely, then interrupt-disabling locks should be used to guarantee ordering.
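
A sketch of such usage, reusing the register names from the example
above (all names illustrative):

	spin_lock_irqsave(&dev->lock, flags);
	/* Device accesses here are ordered against this CPU's interrupt
	 * handler (interrupts are off) and against the other routines
	 * (the lock provides ACQUIRE/RELEASE ordering). */
	writew(3, ADDR);
	writew(y, DATA);
	spin_unlock_irqrestore(&dev->lock, flags);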
2597
2598
2599==========================
2600KERNEL I/O BARRIER EFFECTS
2601==========================
2602
2603When accessing I/O memory, drivers should use the appropriate accessor
2604functions:
2605
2606 (*) inX(), outX():
2607
2608     These are intended to talk to I/O space rather than memory space, but
2609     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
2610     do indeed have special I/O space access cycles and instructions, but many
2611     CPUs don't have such a concept.
2612
2613     The PCI bus, amongst others, defines an I/O space concept which - on such
2614     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2615     space.  However, it may also be mapped as a virtual I/O space in the CPU's
2616     memory map, particularly on those CPUs that don't support alternate I/O
2617     spaces.
2618
2619     Accesses to this space may be fully synchronous (as on i386), but
2620     intermediary bridges (such as the PCI host bridge) may not fully honour
2621     that.
2622
2623     They are guaranteed to be fully ordered with respect to each other.
2624
2625     They are not guaranteed to be fully ordered with respect to other types of
2626     memory and I/O operation.
2627
2628 (*) readX(), writeX():
2629
2630     Whether these are guaranteed to be fully ordered and uncombined with
2631     respect to each other on the issuing CPU depends on the characteristics
2632     defined for the memory window through which they're accessing.  On later
2633     i386 architecture machines, for example, this is controlled by way of the
2634     MTRR registers.
2635
2636     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2637     provided they're not accessing a prefetchable device.
2638
2639     However, intermediary hardware (such as a PCI bridge) may indulge in
2640     deferral if it so wishes; to flush a store, a load from the same location
2641     is preferred[*], but a load from the same device or from configuration
2642     space should suffice for PCI.
2643
2644     [*] NOTE! attempting to load from the same location as was written to may
2645	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
2646	 example.
2647
2648     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2649     force stores to be ordered.
2650
2651     Please refer to the PCI specification for more information on interactions
2652     between PCI transactions.
2653
2654 (*) readX_relaxed(), writeX_relaxed()
2655
2656     These are similar to readX() and writeX(), but provide weaker memory
2657     ordering guarantees.  Specifically, they do not guarantee ordering with
2658     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
2659     ordering with respect to LOCK or UNLOCK operations.  If the latter is
2660     required, an mmiowb() barrier can be used.  Note that relaxed accesses to
2661     the same peripheral are guaranteed to be ordered with respect to each
2662     other.
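
     For example (a hedged sketch; the names are illustrative), ordering
     a relaxed doorbell write after an update to a DMA-coherent buffer
     requires an explicit barrier, where a plain writeX() would have
     included it:

	desc->addr = buf_dma_addr;	/* normal (DMA-coherent) memory */
	wmb();				/* writel_relaxed() does not order
					 * this store for us */
	writel_relaxed(DESC_NOTIFY, doorbell);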
2663
2664 (*) ioreadX(), iowriteX()
2665
2666     These will perform appropriately for the type of access they're actually
2667     doing, be it inX()/outX() or readX()/writeX().
2668
2669
2670========================================
2671ASSUMED MINIMUM EXECUTION ORDERING MODEL
2672========================================
2673
2674It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2675maintain the appearance of program causality with respect to itself.  Some CPUs
2676(such as i386 or x86_64) are more constrained than others (such as powerpc or
2677frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2678of arch-specific code.
2679
2680This means that it must be considered that the CPU will execute its instruction
2681stream in any order it feels like - or even in parallel - provided that if an
2682instruction in the stream depends on an earlier instruction, then that
2683earlier instruction must be sufficiently complete[*] before the later
2684instruction may proceed; in other words: provided that the appearance of
2685causality is maintained.
2686
2687 [*] Some instructions have more than one effect - such as changing the
2688     condition codes, changing registers or changing memory - and different
2689     instructions may depend on different effects.
2690
2691A CPU may also discard any instruction sequence that winds up having no
2692ultimate effect.  For example, if two adjacent instructions both load an
2693immediate value into the same register, the first may be discarded.
2694
2695
2696Similarly, it has to be assumed that compiler might reorder the instruction
2697stream in any way it sees fit, again provided the appearance of causality is
2698maintained.
2699
2700
2701============================
2702THE EFFECTS OF THE CPU CACHE
2703============================
2704
2705The way cached memory operations are perceived across the system is affected to
2706a certain extent by the caches that lie between CPUs and memory, and by the
2707memory coherence system that maintains the consistency of state in the system.
2708
2709As far as the way a CPU interacts with another part of the system through the
2710caches goes, the memory system has to include the CPU's caches, and memory
2711barriers for the most part act at the interface between the CPU and its cache
2712(memory barriers logically act on the dotted line in the following diagram):
2713
2714	    <--- CPU --->         :       <----------- Memory ----------->
2715	                          :
2716	+--------+    +--------+  :   +--------+    +-----------+
2717	|        |    |        |  :   |        |    |           |    +--------+
2718	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
2719	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2720	|        |    | Queue  |  :   |        |    |           |--->| Memory |
2721	|        |    |        |  :   |        |    |           |    |        |
2722	+--------+    +--------+  :   +--------+    |           |    |        |
2723	                          :                 | Cache     |    +--------+
2724	                          :                 | Coherency |
2725	                          :                 | Mechanism |    +--------+
2726	+--------+    +--------+  :   +--------+    |           |    |	      |
2727	|        |    |        |  :   |        |    |           |    |        |
2728	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
2729	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2730	|        |    | Queue  |  :   |        |    |           |    |        |
2731	|        |    |        |  :   |        |    |           |    +--------+
2732	+--------+    +--------+  :   +--------+    +-----------+
2733	                          :
2734	                          :
2735
2736Although any particular load or store may not actually appear outside of the
2737CPU that issued it since it may have been satisfied within the CPU's own cache,
2738it will still appear as if the full memory access had taken place as far as the
2739other CPUs are concerned since the cache coherency mechanisms will migrate the
2740cacheline over to the accessing CPU and propagate the effects upon conflict.
2741
2742The CPU core may execute instructions in any order it deems fit, provided the
2743expected program causality appears to be maintained.  Some of the instructions
2744generate load and store operations which then go into the queue of memory
2745accesses to be performed.  The core may place these in the queue in any order
2746it wishes, and continue execution until it is forced to wait for an instruction
2747to complete.
2748
2749What memory barriers are concerned with is controlling the order in which
2750accesses cross from the CPU side of things to the memory side of things, and
2751the order in which the effects are perceived to happen by the other observers
2752in the system.
2753
2754[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2755their own loads and stores as if they had happened in program order.
2756
2757[!] MMIO or other device accesses may bypass the cache system.  This depends on
2758the properties of the memory window through which devices are accessed and/or
2759the use of any special device communication instructions the CPU may have.
2760
2761
2762CACHE COHERENCY
2763---------------
2764
2765Life isn't quite as simple as it may appear above, however: for while the
2766caches are expected to be coherent, there's no guarantee that that coherency
2767will be ordered.  This means that whilst changes made on one CPU will
2768eventually become visible on all CPUs, there's no guarantee that they will
2769become apparent in the same order on those other CPUs.
2770
2771
2772Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2773has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
2774
2775	            :
2776	            :                          +--------+
2777	            :      +---------+         |        |
2778	+--------+  : +--->| Cache A |<------->|        |
2779	|        |  : |    +---------+         |        |
2780	|  CPU 1 |<---+                        |        |
2781	|        |  : |    +---------+         |        |
2782	+--------+  : +--->| Cache B |<------->|        |
2783	            :      +---------+         |        |
2784	            :                          | Memory |
2785	            :      +---------+         | System |
2786	+--------+  : +--->| Cache C |<------->|        |
2787	|        |  : |    +---------+         |        |
2788	|  CPU 2 |<---+                        |        |
2789	|        |  : |    +---------+         |        |
2790	+--------+  : +--->| Cache D |<------->|        |
2791	            :      +---------+         |        |
2792	            :                          +--------+
2793	            :
2794
2795Imagine the system has the following properties:
2796
2797 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
2798     resident in memory;
2799
2800 (*) an even-numbered cache line may be in cache B, cache D or it may still be
2801     resident in memory;
2802
2803 (*) whilst the CPU core is interrogating one cache, the other cache may be
2804     making use of the bus to access the rest of the system - perhaps to
2805     displace a dirty cacheline or to do a speculative load;
2806
2807 (*) each cache has a queue of operations that need to be applied to that cache
2808     to maintain coherency with the rest of the system;
2809
2810 (*) the coherency queue is not flushed by normal loads to lines already
2811     present in the cache, even though the contents of the queue may
2812     potentially affect those loads.
2813
2814Imagine, then, that two writes are made on the first CPU, with a write barrier
2815between them to guarantee that they will appear to reach that CPU's caches in
2816the requisite order:
2817
2818	CPU 1		CPU 2		COMMENT
2819	===============	===============	=======================================
2820					u == 0, v == 1 and p == &u, q == &u
2821	v = 2;
2822	smp_wmb();			Make sure change to v is visible before
2823					 change to p
2824	<A:modify v=2>			v is now in cache A exclusively
2825	p = &v;
2826	<B:modify p=&v>			p is now in cache B exclusively
2827
The write memory barrier forces the other CPUs in the system to perceive the
local CPU's caches as having been updated in the correct order.  But now
imagine that the second CPU wants to read those values:
2831
2832	CPU 1		CPU 2		COMMENT
2833	===============	===============	=======================================
2834	...
2835			q = p;
2836			x = *q;
2837
2838The above pair of reads may then fail to happen in the expected order, as the
2839cacheline holding p may get updated in one of the second CPU's caches whilst
2840the update to the cacheline holding v is delayed in the other of the second
2841CPU's caches by some other cache event:
2842
2843	CPU 1		CPU 2		COMMENT
2844	===============	===============	=======================================
2845					u == 0, v == 1 and p == &u, q == &u
2846	v = 2;
2847	smp_wmb();
2848	<A:modify v=2>	<C:busy>
2849			<C:queue v=2>
2850	p = &v;		q = p;
2851			<D:request p>
2852	<B:modify p=&v>	<D:commit p=&v>
2853			<D:read p>
2854			x = *q;
2855			<C:read *q>	Reads from v before v updated in cache
2856			<C:unbusy>
2857			<C:commit v=2>
2858
2859Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2860no guarantee that, without intervention, the order of update will be the same
2861as that committed on CPU 1.
2862
2863
To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads (such a barrier is, as of v4.15, supplied
unconditionally by the READ_ONCE() macro).  This will force the cache to
commit its coherency queue before processing any further requests:
2868
2869	CPU 1		CPU 2		COMMENT
2870	===============	===============	=======================================
2871					u == 0, v == 1 and p == &u, q == &u
2872	v = 2;
2873	smp_wmb();
2874	<A:modify v=2>	<C:busy>
2875			<C:queue v=2>
2876	p = &v;		q = p;
2877			<D:request p>
2878	<B:modify p=&v>	<D:commit p=&v>
2879			<D:read p>
2880			smp_read_barrier_depends()
2881			<C:unbusy>
2882			<C:commit v=2>
2883			x = *q;
2884			<C:read *q>	Reads from v after v updated in cache
2885
2886
2887This sort of problem can be encountered on DEC Alpha processors as they have a
2888split cache that improves performance by making better use of the data bus.
2889Whilst most CPUs do imply a data dependency barrier on the read when a memory
2890access depends on a read, not all do, so it may not be relied on.
2891
2892Other CPUs may also have split caches, but must coordinate between the various
2893cachelets for normal memory accesses.  The semantics of the Alpha removes the
2894need for hardware coordination in the absence of memory barriers, which
2895permitted Alpha to sport higher CPU clock rates back in the day.  However,
2896please note that (again, as of v4.15) smp_read_barrier_depends() should not
2897be used except in Alpha arch-specific code and within the READ_ONCE() macro.
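
Putting the pattern above into kernel C - a minimal sketch only, using the
standard WRITE_ONCE(), READ_ONCE() and smp_wmb() primitives, with names
mirroring the tables:

	int u = 0;
	int v = 1;
	int *p = &u;

	void writer(void)		/* runs on CPU 1 */
	{
		v = 2;
		smp_wmb();		/* commit v before p */
		WRITE_ONCE(p, &v);
	}

	void reader(void)		/* runs on CPU 2 */
	{
		int *q, x;

		q = READ_ONCE(p);	/* dependency barrier included */
		x = *q;			/* sees v == 2 whenever q == &v */
	}

Since READ_ONCE() supplies the dependency barrier unconditionally, no
explicit smp_read_barrier_depends() need appear in portable code.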
2898
2899
2900CACHE COHERENCY VS DMA
2901----------------------
2902
2903Not all systems maintain cache coherency with respect to devices doing DMA.  In
2904such cases, a device attempting DMA may obtain stale data from RAM because
2905dirty cache lines may be resident in the caches of various CPUs, and may not
2906have been written back to RAM yet.  To deal with this, the appropriate part of
2907the kernel must flush the overlapping bits of cache on each CPU (and maybe
2908invalidate them as well).
2909
In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.
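
In the Linux kernel, these flush and invalidate steps are normally performed
on the driver's behalf by the streaming DMA API from <linux/dma-mapping.h>.
As an illustrative sketch - "dev", "buf" and "len" are hypothetical here -
receiving data from a device might look like:

	dma_addr_t handle;

	/*
	 * Hand the buffer's cachelines over to the device; on a
	 * non-coherent system this flushes/invalidates them as needed.
	 */
	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... program the device to DMA into "handle" and wait ... */

	/*
	 * Give ownership back to the CPU; only now may the CPU safely
	 * read the freshly DMA'd data through "buf".
	 */
	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);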
2917
See Documentation/core-api/cachetlb.rst for more information on cache
management.
2919
2920
2921CACHE COHERENCY VS MMIO
2922-----------------------
2923
Memory mapped I/O usually takes place through memory locations that are part
of a window in the CPU's memory space that has different properties assigned
to it than the usual RAM-directed window.
2927
2928Amongst these properties is usually the fact that such accesses bypass the
2929caching entirely and go directly to the device buses.  This means MMIO accesses
2930may, in effect, overtake accesses to cached memory that were emitted earlier.
2931A memory barrier isn't sufficient in such a case, but rather the cache must be
2932flushed between the cached memory write and the MMIO access if the two are in
2933any way dependent.
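
For the common case where the shared data lives in coherent DMA memory
(obtained from dma_alloc_coherent()), no flush is needed, and a mandatory
barrier before the MMIO doorbell write is one conservative pattern.  A
sketch only - the descriptor layout and register names are hypothetical:

	desc->addr = cpu_to_le64(dma_handle);	/* fill the descriptor... */
	desc->len  = cpu_to_le32(len);
	wmb();			/* mandatory: order vs the MMIO write below */
	writel(DOORBELL_RING, regs + DOORBELL);	/* ...then kick the device */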
2934
2935
2936=========================
2937THE THINGS CPUS GET UP TO
2938=========================
2939
2940A programmer might take it for granted that the CPU will perform memory
2941operations in exactly the order specified, so that if the CPU is, for example,
2942given the following piece of code to execute:
2943
2944	a = READ_ONCE(*A);
2945	WRITE_ONCE(*B, b);
2946	c = READ_ONCE(*C);
2947	d = READ_ONCE(*D);
2948	WRITE_ONCE(*E, e);
2949
2950they would then expect that the CPU will complete the memory operation for each
2951instruction before moving on to the next one, leading to a definite sequence of
2952operations as seen by external observers in the system:
2953
2954	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2955
2956
2957Reality is, of course, much messier.  With many CPUs and compilers, the above
2958assumption doesn't hold because:
2959
2960 (*) loads are more likely to need to be completed immediately to permit
2961     execution progress, whereas stores can often be deferred without a
2962     problem;
2963
2964 (*) loads may be done speculatively, and the result discarded should it prove
2965     to have been unnecessary;
2966
2967 (*) loads may be done speculatively, leading to the result having been fetched
2968     at the wrong time in the expected sequence of events;
2969
2970 (*) the order of the memory accesses may be rearranged to promote better use
2971     of the CPU buses and caches;
2972
2973 (*) loads and stores may be combined to improve performance when talking to
2974     memory or I/O hardware that can do batched accesses of adjacent locations,
2975     thus cutting down on transaction setup costs (memory and PCI devices may
2976     both be able to do this); and
2977
2978 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
2979     mechanisms may alleviate this - once the store has actually hit the cache
2980     - there's no guarantee that the coherency management will be propagated in
2981     order to other CPUs.
2982
2983So what another CPU, say, might actually observe from the above piece of code
2984is:
2985
2986	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2987
2988	(Where "LOAD {*C,*D}" is a combined load)
2989
2990
2991However, it is guaranteed that a CPU will be self-consistent: it will see its
2992_own_ accesses appear to be correctly ordered, without the need for a memory
2993barrier.  For instance with the following code:
2994
2995	U = READ_ONCE(*A);
2996	WRITE_ONCE(*A, V);
2997	WRITE_ONCE(*A, W);
2998	X = READ_ONCE(*A);
2999	WRITE_ONCE(*A, Y);
3000	Z = READ_ONCE(*A);
3001
and assuming no intervention by an external influence, the final result will
appear to be:
3004
3005	U == the original value of *A
3006	X == W
3007	Z == Y
3008	*A == Y
3009
3010The code above may cause the CPU to generate the full sequence of memory
3011accesses:
3012
3013	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
3014
3015in that order, but, without intervention, the sequence may have almost any
3016combination of elements combined or discarded, provided the program's view
3017of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
3018are -not- optional in the above example, as there are architectures
3019where a given CPU might reorder successive loads to the same location.
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this; on Itanium, for example, the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.
3024
3025The compiler may also combine, discard or defer elements of the sequence before
3026the CPU even sees them.
3027
3028For instance:
3029
3030	*A = V;
3031	*A = W;
3032
3033may be reduced to:
3034
3035	*A = W;
3036
since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:
3039
3040	*A = Y;
3041	Z = *A;
3042
may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:
3045
3046	*A = Y;
3047	Z = Y;
3048
and the LOAD operation need never appear outside of the CPU.
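
By contrast - a sketch of the same sequences, not a new rule - marking the
accesses forces the compiler to emit every one of them:

	WRITE_ONCE(*A, V);	/* both stores must now be emitted... */
	WRITE_ONCE(*A, W);

and:

	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);	/* ...and the load must actually be performed */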
3050
3051
3052AND THEN THERE'S THE ALPHA
3053--------------------------
3054
The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to
have two semantically-related cache lines updated at separate times.  This is
where the data dependency barrier really becomes necessary: it synchronises
both caches with the memory coherence system, thus making it appear that a
pointer update and the new data it points to arrive in the correct order.
3061
The Alpha defines the Linux kernel's memory model, although as of v4.15 the
addition of smp_read_barrier_depends() to READ_ONCE() greatly reduced
Alpha's impact on that model.
3065
3066See the subsection on "Cache Coherency" above.
3067
3068
3069VIRTUAL MACHINE GUESTS
3070----------------------
3071
3072Guests running within virtual machines might be affected by SMP effects even if
3073the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
3075barriers for this use-case would be possible but is often suboptimal.
3076
3077To handle this case optimally, low-level virt_mb() etc macros are available.
3078These have the same effect as smp_mb() etc when SMP is enabled, but generate
3079identical code for SMP and non-SMP systems.  For example, virtual machine guests
3080should use virt_mb() rather than smp_mb() when synchronizing against a
3081(possibly SMP) host.
3082
3083These are equivalent to smp_mb() etc counterparts in all other respects,
3084in particular, they do not control MMIO effects: to control
3085MMIO effects, use mandatory barriers.
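
For example, a guest publishing a buffer descriptor to a (possibly SMP)
host might do something like the following - the ring layout and the names
used here are purely illustrative:

	ring[idx & (RING_SIZE - 1)] = desc;	/* fill the descriptor... */
	virt_wmb();			/* ...make it visible to the host... */
	WRITE_ONCE(*avail_idx, idx + 1);	/* ...then publish the index */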
3086
3087
3088============
3089EXAMPLE USES
3090============
3091
3092CIRCULAR BUFFERS
3093----------------
3094
3095Memory barriers can be used to implement circular buffering without the need
3096of a lock to serialise the producer with the consumer.  See:
3097
3098	Documentation/core-api/circular-buffers.rst
3099
3100for details.
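
As a taste of the technique - names and layout here are illustrative only;
see the document above for the real treatment - a single-producer,
single-consumer ring can pair smp_store_release() with smp_load_acquire(),
with CIRC_SPACE() and CIRC_CNT() taken from <linux/circ_buf.h> and a
power-of-2 ring size assumed:

	/* producer */
	unsigned long head = ring->head;
	unsigned long tail = READ_ONCE(ring->tail);

	if (CIRC_SPACE(head, tail, RING_SIZE) >= 1) {
		ring->buf[head & (RING_SIZE - 1)] = item;
		/* publish the slot only after it is fully written */
		smp_store_release(&ring->head, head + 1);
	}

	/* consumer */
	unsigned long head = smp_load_acquire(&ring->head);
	unsigned long tail = ring->tail;

	if (CIRC_CNT(head, tail, RING_SIZE) >= 1) {
		item = ring->buf[tail & (RING_SIZE - 1)];
		/* free the slot only after the item has been read */
		smp_store_release(&ring->tail, tail + 1);
	}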
3101
3102
3103==========
3104REFERENCES
3105==========
3106
3107Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
3108Digital Press)
3109	Chapter 5.2: Physical Address Space Characteristics
3110	Chapter 5.4: Caches and Write Buffers
3111	Chapter 5.5: Data Sharing
3112	Chapter 5.6: Read/Write Ordering
3113
3114AMD64 Architecture Programmer's Manual Volume 2: System Programming
3115	Chapter 7.1: Memory-Access Ordering
3116	Chapter 7.4: Buffering and Combining Memory Writes
3117
3118ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
3119	Chapter B2: The AArch64 Application Level Memory Model
3120
3121IA-32 Intel Architecture Software Developer's Manual, Volume 3:
3122System Programming Guide
3123	Chapter 7.1: Locked Atomic Operations
3124	Chapter 7.2: Memory Ordering
3125	Chapter 7.4: Serializing Instructions
3126
3127The SPARC Architecture Manual, Version 9
3128	Chapter 8: Memory Models
3129	Appendix D: Formal Specification of the Memory Models
3130	Appendix J: Programming with the Memory Models
3131
3132Storage in the PowerPC (Stone and Fitzgerald)
3133
3134UltraSPARC Programmer Reference Manual
3135	Chapter 5: Memory Accesses and Cacheability
3136	Chapter 15: Sparc-V9 Memory Models
3137
3138UltraSPARC III Cu User's Manual
3139	Chapter 9: Memory Models
3140
3141UltraSPARC IIIi Processor User's Manual
3142	Chapter 8: Memory Models
3143
3144UltraSPARC Architecture 2005
3145	Chapter 9: Memory
3146	Appendix D: Formal Specifications of the Memory Models
3147
3148UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
3149	Chapter 8: Memory Models
3150	Appendix F: Caches and Cache Coherency
3151
3152Solaris Internals, Core Kernel Architecture, p63-68:
3153	Chapter 3.3: Hardware Considerations for Locks and
3154			Synchronization
3155
3156Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
3157for Kernel Programmers:
3158	Chapter 13: Other Memory Models
3159
3160Intel Itanium Architecture Software Developer's Manual: Volume 1:
3161	Section 2.6: Speculation
3162	Section 4.4: Memory Access
3163