Lines matching full:before in kernel/sched/membarrier.c
13 * barrier before sending the IPI
20 * order to enforce the guarantee that any writes occurring on CPU0 before
39 * so it's possible to have "r1 = x" reordered before "y = 1" at any
46 * before the IPI-induced memory barrier on CPU1.
48 * B) Userspace thread execution before IPI vs membarrier's memory
56 * order to enforce the guarantee that any writes occurring on CPU1 before
76 * before (b) (although not before (a)), so we get "r1 = 0". This violates
173 * ensure that memory on remote CPUs that occur before the IPI in ipi_sync_core()
190 * to the current task before the current task resumes. We could in ipi_rseq()
211 * before registration. in ipi_sync_rq_state()
219 * Issue a memory barrier before clearing membarrier_state to in membarrier_exec_mmap()
393 * task in the same mm just before, during, or after in membarrier_private_expedited()
438 * access following registration is reordered before in sync_runqueues_membarrier_state()