Lines Matching full:flush
39 * More scalable flush, from Andi Kleen
41 * Implement flush IPI via CALL_FUNCTION_VECTOR, Alex Shi
55 * its own ASID and flush/restart when we run out of ASID space.
174 * forces a TLB flush when the context is loaded.
190 /* Do not need to flush the current asid */ in clear_asid_other()
195 * this asid, we do a flush: in clear_asid_other()
243 * Given an ASID, flush the corresponding user ASID. We can delay this
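
The clear_asid_other() and user-ASID lines above describe deferred invalidation: instead of flushing a stale ASID the moment it becomes stale, the kernel records that it needs a flush and performs it only when that ASID is next loaded. The standalone C model below is a loose sketch of that idea; cpu_tlbstate_model, invalidate_user_asid_model and load_asid_model are hypothetical names, and the real code tracks this with per-CPU state and context ids rather than a single bitmask.

/*
 * Simplified, standalone model of deferred per-ASID invalidation: rather
 * than flushing a stale user ASID immediately, record it in a bitmask and
 * perform the flush only when that ASID is loaded again.  All names and
 * types here are illustrative, not the kernel's.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_ASIDS 8

struct cpu_tlbstate_model {
	uint16_t loaded_asid;           /* ASID currently in use (model) */
	uint32_t stale_user_asid_mask;  /* bit n set => ASID n needs a flush */
};

static struct cpu_tlbstate_model tlbstate;

/* Mark a user ASID stale; the flush is deferred until it is next used. */
static void invalidate_user_asid_model(uint16_t asid)
{
	/* In this model the currently loaded ASID is handled separately. */
	if (asid == tlbstate.loaded_asid)
		return;
	tlbstate.stale_user_asid_mask |= 1u << asid;
}

/* Called when switching to @asid: flush only if it was marked stale. */
static void load_asid_model(uint16_t asid)
{
	if (tlbstate.stale_user_asid_mask & (1u << asid)) {
		tlbstate.stale_user_asid_mask &= ~(1u << asid);
		printf("deferred flush for ASID %u performed now\n",
		       (unsigned)asid);
	}
	tlbstate.loaded_asid = asid;
}

int main(void)
{
	load_asid_model(1);
	invalidate_user_asid_model(2); /* deferred: ASID 2 is not loaded   */
	load_asid_model(2);            /* the flush happens at this switch */
	return 0;
}
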
293 * If so, our callers still expect us to flush the TLB, but there in leave_mm()
391 * Only flush when switching to a user space task with a in cond_ibpb()
461 * a global flush to minimize the chance of corruption. in switch_mm_irqs_off()
502 * process. No TLB flush required. in switch_mm_irqs_off()
508 * Read the tlb_gen to check whether a flush is needed. in switch_mm_irqs_off()
615 * flush.
635 /* Force ASID 0 and force a TLB flush. */ in initialize_tlbstate_and_flush()
651 * TLB fills that happen after we flush the TLB are ordered after we
653 * because all x86 flush operations are serializing and the
665 * - f->new_tlb_gen: the generation that the requester of the flush in flush_tlb_func_common()
684 * We're in lazy mode. We need to at least flush our in flush_tlb_func_common()
687 * slower than a minimal flush, just switch to init_mm. in flush_tlb_func_common()
699 * happen if two concurrent flushes happen -- the first flush to in flush_tlb_func_common()
701 * the second flush. in flush_tlb_func_common()
712 * This does not strictly imply that we need to flush (it's in flush_tlb_func_common()
714 * going to need to flush in the very near future, so we might in flush_tlb_func_common()
717 * The only question is whether to do a full or partial flush. in flush_tlb_func_common()
719 * We do a partial flush if requested and two extra conditions in flush_tlb_func_common()
725 * f->new_tlb_gen == 3, then we know that the flush needed to bring in flush_tlb_func_common()
726 * us up to date for tlb_gen 3 is the partial flush we're in flush_tlb_func_common()
730 * are two concurrent flushes. The first is a full flush that in flush_tlb_func_common()
732 * flush that changes context.tlb_gen from 2 to 3. If they get in flush_tlb_func_common()
737 * 1 without the full flush that's needed for tlb_gen 2. in flush_tlb_func_common()
742 * to do a partial flush if that won't bring our TLB fully up to in flush_tlb_func_common()
743 * date. By doing a full flush instead, we can increase in flush_tlb_func_common()
745 * avoid another flush in the very near future. in flush_tlb_func_common()
750 /* Partial flush */ in flush_tlb_func_common()
762 /* Full flush. */ in flush_tlb_func_common()
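
The flush_tlb_func_common() lines above walk through the generation-count scheme: each mm carries a monotonically increasing tlb_gen, each CPU remembers the generation it has flushed up to, and each flush request records the generation it was issued for. A partial (ranged) flush is performed only when completing that one request brings the CPU fully up to date; otherwise a full flush is done so the CPU does not have to flush again almost immediately. The sketch below models just that decision; the struct layout and names are illustrative, not the kernel's.

/*
 * Standalone sketch of the generation-based partial vs. full flush
 * decision described above.  Types and names are illustrative only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FLUSH_ALL UINT64_MAX

struct flush_request {
	uint64_t start, end;   /* end == FLUSH_ALL means "no range given" */
	uint64_t new_tlb_gen;  /* generation the requester flushed for    */
};

/*
 * local_tlb_gen: generation this CPU has already flushed up to
 * mm_tlb_gen:    current generation of the mm (latest requested flush)
 *
 * A partial flush is sufficient only if the request carries a range and
 * completing just this request brings the CPU fully up to date: no
 * intermediate generation was skipped and no newer flush has been
 * requested since.
 */
static bool partial_flush_ok(const struct flush_request *f,
			     uint64_t local_tlb_gen, uint64_t mm_tlb_gen)
{
	if (f->end == FLUSH_ALL)
		return false;
	return f->new_tlb_gen == local_tlb_gen + 1 &&
	       f->new_tlb_gen == mm_tlb_gen;
}

int main(void)
{
	struct flush_request f = {
		.start = 0x1000, .end = 0x3000, .new_tlb_gen = 3
	};

	/* CPU at gen 2, mm at gen 3: this partial flush catches us up. */
	printf("%d\n", partial_flush_ok(&f, 2, 3)); /* 1 */
	/* CPU at gen 1: the flush needed for gen 2 was never observed. */
	printf("%d\n", partial_flush_ok(&f, 1, 3)); /* 0 */
	/* mm already at gen 4: flush everything to avoid another flush soon. */
	printf("%d\n", partial_flush_ok(&f, 2, 4)); /* 0 */
	return 0;
}

The pair of equalities is the whole point: new_tlb_gen == local_tlb_gen + 1 rules out a skipped intermediate flush (which might have been a full one), and new_tlb_gen == mm_tlb_gen rules out a newer pending request that would force yet another flush right away.
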
810 * CPUs in lazy TLB mode. They will flush their TLBs themselves in native_flush_tlb_others()
837 * flush is about 100 ns, so this caps the maximum overhead at
895 /* Should we flush just the requested range? */ in flush_tlb_mm_range()
940 /* flush the range one page at a time with 'invlpg' */ in do_kernel_range_flush()
947 /* Balanced like a user space task's flush; a bit conservative */ in flush_tlb_kernel_range()
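
The "about 100 ns" line and the flush_tlb_mm_range(), do_kernel_range_flush() and flush_tlb_kernel_range() lines above all concern the same cost heuristic: a single-page flush is cheap but not free, so beyond a small page-count ceiling it is cheaper to flush everything, and the kernel-range path is balanced the same way as a user range. The standalone sketch below shows the shape of that decision; the ceiling value and all names are assumptions for illustration, the real tunable and helpers live in the kernel source.

/*
 * Standalone sketch of the "flush per page or flush everything" heuristic
 * suggested by the comments above.  The ceiling value and names are
 * illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define FLUSH_ALL  UINT64_MAX

/* Assumed ceiling: above this many pages, a full flush is cheaper. */
static unsigned long single_page_flush_ceiling = 33;

static void flush_one_page(uint64_t addr)
{
	printf("invlpg %#llx\n", (unsigned long long)addr);
}

static void flush_everything(void)
{
	printf("full TLB flush\n");
}

static void flush_range(uint64_t start, uint64_t end)
{
	if (end == FLUSH_ALL ||
	    (end - start) >> PAGE_SHIFT > single_page_flush_ceiling) {
		flush_everything();
		return;
	}

	/* flush the range one page at a time */
	for (uint64_t addr = start; addr < end; addr += PAGE_SIZE)
		flush_one_page(addr);
}

int main(void)
{
	flush_range(0x1000, 0x4000);    /* three pages: per-page invlpg */
	flush_range(0x1000, 0x100000);  /* large range: full flush      */
	return 0;
}
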
985 * Flush one page in the kernel mapping
998 * __flush_tlb_one_user() will flush the given address for the current in flush_tlb_one_kernel()
1000 * not flush it for other address spaces. in flush_tlb_one_kernel()
1008 * See above. We need to propagate the flush to all other address in flush_tlb_one_kernel()
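
The flush_tlb_one_kernel() lines above explain why a single-page kernel flush is not always a single operation: with page-table isolation the kernel is mapped with non-global PTEs in more than one address space, so after flushing the current one the others are marked for invalidation rather than flushed immediately. The model below sketches that control flow under assumed names; pti_enabled and invalidate_other are plain fields here, whereas the real code consults CPU features and per-CPU state.

/*
 * Standalone model of the single-page kernel flush logic the comments
 * above describe.  Names are stand-ins, not kernel APIs.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cpu_state_model {
	bool pti_enabled;       /* kernel mapped with non-global PTEs?  */
	bool invalidate_other;  /* other address spaces need a flush    */
};

static struct cpu_state_model cpu;

/* Flush one address in the currently loaded address space (model). */
static void flush_one_current(uint64_t addr)
{
	printf("flush %#llx in current address space\n",
	       (unsigned long long)addr);
}

static void flush_one_kernel_model(uint64_t addr)
{
	/*
	 * Without PTI, flushing the current address space is enough: either
	 * there is only one address space, or the kernel mapping is global
	 * and the flush covers it everywhere.
	 */
	flush_one_current(addr);
	if (!cpu.pti_enabled)
		return;

	/*
	 * With PTI, other address spaces also map this kernel page with
	 * non-global PTEs; mark them so they are flushed before next use.
	 */
	cpu.invalidate_other = true;
}

int main(void)
{
	cpu.pti_enabled = true;
	flush_one_kernel_model(0xffffffff81000000ULL);
	printf("invalidate_other = %d\n", cpu.invalidate_other);
	return 0;
}
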
1017 * Flush one page in the user mapping
1044 * Flush everything
1071 /* write old PGE again and flush TLBs */ in native_flush_tlb_global()
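
The native_flush_tlb_global() line above ("write old PGE again and flush TLBs") relies on an architectural property: writing CR4 with a changed PGE bit invalidates all TLB entries, including global ones, which an ordinary CR3 reload would leave in place. The sketch below models that clear-and-restore sequence with print stubs in place of real control-register accesses.

/*
 * Standalone model of the "toggle CR4.PGE" full flush: clearing and then
 * restoring the global-pages enable bit flushes every translation,
 * including global ones.  The CR4 accessors here are print stubs.
 */
#include <stdio.h>

#define CR4_PGE (1UL << 7)  /* global-pages enable bit in CR4 */

static unsigned long fake_cr4 = CR4_PGE;

static unsigned long read_cr4_model(void)
{
	return fake_cr4;
}

static void write_cr4_model(unsigned long val)
{
	fake_cr4 = val;
	printf("write CR4 = %#lx%s\n", val,
	       (val & CR4_PGE) ? "" : " (PGE cleared: full flush)");
}

/* Flush everything, including global mappings, by toggling CR4.PGE. */
static void flush_tlb_global_model(void)
{
	unsigned long cr4 = read_cr4_model();

	/* clear PGE: this invalidates all TLB entries, global ones too */
	write_cr4_model(cr4 & ~CR4_PGE);
	/* write old PGE again; changing PGE flushes on this write as well */
	write_cr4_model(cr4);
}

int main(void)
{
	flush_tlb_global_model();
	return 0;
}
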
1078 * Flush the entire current user mapping
1101 * Flush everything
1115 * !PGE -> !PCID (setup_pcid()), thus every flush is total. in __flush_tlb_all()
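
The lines above distinguish the two "flush everything" cases: with global pages enabled only the CR4.PGE toggle clears global entries, and without PGE the kernel also runs without PCID, so a plain CR3 reload already flushes every translation. A minimal sketch of that dispatch, with stand-in feature flags and helpers:

/* Standalone sketch of the "flush everything" dispatch described above. */
#include <stdbool.h>
#include <stdio.h>

static bool cpu_has_pge = true; /* global pages enabled? (assumed flag) */

static void flush_tlb_global_model(void)
{
	printf("toggle CR4.PGE: flush everything, including global entries\n");
}

static void flush_tlb_local_model(void)
{
	printf("reload CR3: flush non-global entries of this address space\n");
}

static void flush_tlb_all_model(void)
{
	if (cpu_has_pge) {
		/* Global entries exist: only the CR4.PGE toggle clears them. */
		flush_tlb_global_model();
		return;
	}
	/*
	 * No PGE implies no PCID in this scheme, so a CR3 reload flushes
	 * the whole TLB: there are no global entries and no other PCIDs
	 * that could keep stale translations alive.
	 */
	flush_tlb_local_model();
}

int main(void)
{
	flush_tlb_all_model();
	cpu_has_pge = false;
	flush_tlb_all_model();
	return 0;
}
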
1123 * arch_tlbbatch_flush() performs a full TLB flush regardless of the active mm.
1125 * flush is actually fixed. We therefore set a single fixed struct and use it in
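
The arch_tlbbatch_flush() lines above make a small design point: since the batched flush is always a full flush regardless of which mm is active, the flush description never varies and can be one statically initialized, read-only structure reused for every call instead of being rebuilt each time. A hedged sketch of that pattern (layout and names are illustrative):

/*
 * Sketch of the "single fixed flush descriptor" pattern: the batched
 * flush is always a full flush, so one constant descriptor is reused
 * for every call.
 */
#include <stdint.h>
#include <stdio.h>

#define FLUSH_ALL UINT64_MAX

struct flush_info_model {
	uint64_t start;
	uint64_t end;  /* FLUSH_ALL: no range, flush everything */
};

/* One fixed, read-only descriptor shared by every batched flush. */
static const struct flush_info_model full_flush_info = {
	.start = 0,
	.end   = FLUSH_ALL,
};

static void send_flush_model(const struct flush_info_model *info)
{
	if (info->end == FLUSH_ALL)
		printf("full TLB flush requested\n");
	else
		printf("range flush %#llx-%#llx\n",
		       (unsigned long long)info->start,
		       (unsigned long long)info->end);
}

static void tlbbatch_flush_model(void)
{
	/* Always a full flush, so the same constant descriptor is reused. */
	send_flush_model(&full_flush_info);
}

int main(void)
{
	tlbbatch_flush_model();
	return 0;
}
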