Lines Matching full:fault

40  * Returns 0 if mmiotrace is disabled, or if the fault is not
131 * If it was an exec (instruction fetch) fault on an NX page, then in is_prefetch()
132 * do not ignore the fault: in is_prefetch()
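
The two hits above are the heart of the NX check in is_prefetch(): an instruction-fetch fault on a no-execute page must never be dismissed as a stray PREFETCH. A minimal user-space sketch (our own code, not part of fault.c) that provokes exactly this kind of fault and observes the resulting SIGSEGV:

    /* Sketch only: fetch an instruction from a non-executable page. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void handler(int sig, siginfo_t *si, void *uc)
    {
        (void)sig; (void)uc;
        /* si_addr is the faulting (instruction) address */
        fprintf(stderr, "SIGSEGV at %p (instruction fetch, NX page)\n",
                si->si_addr);
        _exit(0);
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;
        page[0] = 0xc3;                 /* x86 RET opcode */
        /* jumping into an NX page faults as an instruction fetch */
        ((void (*)(void))page)();
        return 1;
    }
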
216 * Handle a fault on the vmalloc or module mapping area
227 * unhandled page-fault when they are accessed.
410 * The OS sees this as a page fault with the upper 32 bits of RIP cleared.
447 * We catch this in the page fault handler because these addresses
535 pr_alert("BUG: unable to handle page fault for address: %px\n", in show_fault_oops()
558 * contributory exception from user code and gets a page fault in show_fault_oops()
559 * during delivery, the page fault can be delivered as though in show_fault_oops()
647 * Stack overflow? During boot, we can fault near the initial in page_fault_oops()
658 * double-fault even before we get this far, in which case in page_fault_oops()
659 * we're fine: the double-fault handler will deal with it. in page_fault_oops()
662 * and then double-fault, though, because we're likely to in page_fault_oops()
669 : "D" ("kernel stack overflow (page fault)"), in page_fault_oops()
677 * Buggy firmware could access regions which might page fault. If in page_fault_oops()
718 /* Are we prepared to handle this kernel fault? */ in kernelmode_fixup_or_oops()
721 * Any interrupt that takes a fault gets the fixup. This makes in kernelmode_fixup_or_oops()
722 * the below recursive fault logic only applies to faults from in kernelmode_fixup_or_oops()
754 * AMD erratum #91 manifests as a spurious page fault on a PREFETCH in kernelmode_fixup_or_oops()
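
"Are we prepared to handle this kernel fault?" is answered by an exception-table lookup keyed on the faulting instruction address. A hedged standalone model of that lookup, assuming a sorted table in the spirit of the kernel's __ex_table (the struct, names, and addresses below are invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    struct exentry { unsigned long insn, fixup; };

    /* must be sorted by insn, as the kernel sorts __ex_table at boot */
    static const struct exentry table[] = {
        { 0x1000, 0x9000 },
        { 0x1008, 0x9010 },
        { 0x2040, 0x9020 },
    };

    static unsigned long search_fixup(unsigned long ip)
    {
        size_t lo = 0, hi = sizeof(table) / sizeof(table[0]);

        while (lo < hi) {               /* plain binary search */
            size_t mid = lo + (hi - lo) / 2;
            if (table[mid].insn < ip)
                lo = mid + 1;
            else if (table[mid].insn > ip)
                hi = mid;
            else
                return table[mid].fixup;
        }
        return 0;                       /* no fixup: the fault would oops */
    }

    int main(void)
    {
        printf("fixup for 0x1008: %#lx\n", search_fixup(0x1008));
        printf("fixup for 0x3000: %#lx\n", search_fixup(0x3000));
        return 0;
    }
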
824 * Valid to do another page fault here because this one came in __bad_area_nosemaphore()
906 * A protection key fault means that the PKRU value did not allow in bad_area_access_error()
913 * fault and that there was a VMA once we got in the fault in bad_area_access_error()
921 * 5. T1 : enters fault handler, takes mmap_lock, etc... in bad_area_access_error()
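
The protection-key scenario above ends in a SIGSEGV whose si_code distinguishes the bad-area flavours. A hedged user-space sketch (our own code, assuming a glibc that exposes SEGV_PKUERR) that reports which flavour was delivered:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void handler(int sig, siginfo_t *si, void *uc)
    {
        (void)sig; (void)uc;
        const char *kind =
            si->si_code == SEGV_MAPERR ? "MAPERR (no mapping)" :
            si->si_code == SEGV_ACCERR ? "ACCERR (protection)" :
            si->si_code == SEGV_PKUERR ? "PKUERR (protection key)" :
                                         "other";
        fprintf(stderr, "SIGSEGV at %p: %s\n", si->si_addr, kind);
        _exit(0);
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        /* write to a read-only page => ACCERR via the access-error path */
        volatile char *p = mmap(NULL, 4096, PROT_READ,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        *p = 1;
        return 1;
    }
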
935 vm_fault_t fault) in do_sigbus() argument
944 /* User-space => ok to do another page fault: */ in do_sigbus()
956 if (fault & (VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) { in do_sigbus()
961 "MCE: Killing %s:%d due to hardware memory corruption fault at %lx\n", in do_sigbus()
963 if (fault & VM_FAULT_HWPOISON_LARGE) in do_sigbus()
964 lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault)); in do_sigbus()
965 if (fault & VM_FAULT_HWPOISON) in do_sigbus()
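
The hwpoison branch above kills the task with a SIGBUS carrying a BUS_MCEERR_* code and an address granularity (lsb). What that looks like on the receiving side, as a hedged sketch; actually triggering it needs something like madvise(MADV_HWPOISON) as root, or a real machine-check event:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void handler(int sig, siginfo_t *si, void *uc)
    {
        (void)sig; (void)uc;
        if (si->si_code == BUS_MCEERR_AR || si->si_code == BUS_MCEERR_AO)
            fprintf(stderr,
                    "hwpoison SIGBUS at %p, lsb %d (2^lsb poisoned bytes)\n",
                    si->si_addr, (int)si->si_addr_lsb);
        _exit(1);
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGBUS, &sa, NULL);
        /* ... touch a poisoned page here to receive the signal ... */
        return 0;
    }
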
986 * Handle a spurious fault caused by a stale TLB entry.
1001 * Returns non-zero if a spurious fault was handled, zero otherwise.
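
The spurious-fault test asks whether the current page-table entry already grants every right the faulting access demanded; if it does, the fault was raised against a stale TLB entry and can be dropped. A hedged standalone model of that check (the flag values are stand-ins, not the real X86_PF_* or _PAGE_* bits):

    #include <stdbool.h>
    #include <stdio.h>

    #define PF_WRITE    0x2   /* fault was a write       (stand-in) */
    #define PF_INSTR    0x10  /* fault was an insn fetch (stand-in) */

    #define PTE_PRESENT 0x1
    #define PTE_RW      0x2
    #define PTE_NX      0x4

    static bool fault_is_spurious(unsigned pf, unsigned pte)
    {
        if (!(pte & PTE_PRESENT))
            return false;           /* genuinely not mapped */
        if ((pf & PF_WRITE) && !(pte & PTE_RW))
            return false;           /* write right still missing */
        if ((pf & PF_INSTR) && (pte & PTE_NX))
            return false;           /* exec right still missing */
        return true;                /* rights all present => stale TLB */
    }

    int main(void)
    {
        printf("%d\n", fault_is_spurious(PF_WRITE, PTE_PRESENT | PTE_RW)); /* 1 */
        printf("%d\n", fault_is_spurious(PF_WRITE, PTE_PRESENT));          /* 0 */
        return 0;
    }
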
1084 * a follow-up action to resolve the fault, like a COW. in access_error()
1093 * fix the cause of the fault. Handle the fault as an access in access_error()
1159 * We can fault-in kernel-space virtual memory on-demand. The in do_kern_addr_fault()
1168 * fault is not any of the following: in do_kern_addr_fault()
1169 * 1. A fault on a PTE with a reserved bit set. in do_kern_addr_fault()
1170 * 2. A fault caused by a user-mode access. (Do not demand- in do_kern_addr_fault()
1171 * fault kernel memory due to user-mode accesses). in do_kern_addr_fault()
1172 * 3. A fault caused by a page-level protection violation. in do_kern_addr_fault()
1173 * (A demand fault would be on a non-present page which in do_kern_addr_fault()
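
A hedged standalone model of the three-way test enumerated above (the bit values are stand-ins for the real X86_PF_* error-code flags): only a kernel-mode, non-reserved, not-present fault is a candidate for vmalloc demand faulting.

    #include <stdbool.h>
    #include <stdio.h>

    #define PF_PROT 0x1   /* page-level protection violation (page present) */
    #define PF_USER 0x4   /* fault originated in user mode */
    #define PF_RSVD 0x8   /* reserved bit set in a paging-structure entry */

    static bool may_demand_fault_kernel_addr(unsigned long error_code)
    {
        /* 1. reserved-bit faults indicate corruption, never demand-fault */
        /* 2. user-mode accesses must not fault in kernel memory          */
        /* 3. protection faults mean the page is present, nothing to map  */
        return !(error_code & (PF_RSVD | PF_USER | PF_PROT));
    }

    int main(void)
    {
        printf("%d\n", (int)may_demand_fault_kernel_addr(0));       /* 1 */
        printf("%d\n", (int)may_demand_fault_kernel_addr(PF_USER)); /* 0 */
        return 0;
    }
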
1191 /* Was the fault spurious, caused by lazy TLB invalidation? */ in do_kern_addr_fault()
1202 * and handling kernel code that can fault, like get_user(). in do_kern_addr_fault()
1205 * fault we could otherwise deadlock: in do_kern_addr_fault()
1227 vm_fault_t fault; in do_user_addr_fault() local
1279 * in a region with pagefaults disabled then we must not take the fault in do_user_addr_fault()
1288 * vmalloc fault has been handled. in do_user_addr_fault()
1291 * potential system fault or CPU buglet: in do_user_addr_fault()
1329 * tables. But, an erroneous kernel fault occurring outside one of in do_user_addr_fault()
1331 * to validate the fault against the address space. in do_user_addr_fault()
1341 * Fault from code in kernel from in do_user_addr_fault()
1385 * If for any reason at all we couldn't handle the fault, in do_user_addr_fault()
1387 * the fault. Since we never set FAULT_FLAG_RETRY_NOWAIT, if in do_user_addr_fault()
1392 * repeat the page fault later with a VM_FAULT_NOPAGE retval in do_user_addr_fault()
1397 fault = handle_mm_fault(vma, address, flags, regs); in do_user_addr_fault()
1399 if (fault_signal_pending(fault, regs)) { in do_user_addr_fault()
1416 if (unlikely((fault & VM_FAULT_RETRY) && in do_user_addr_fault()
1423 if (likely(!(fault & VM_FAULT_ERROR))) in do_user_addr_fault()
1432 if (fault & VM_FAULT_OOM) { in do_user_addr_fault()
1443 * userspace (which will retry the fault, or kill us if we got in do_user_addr_fault()
1448 if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON| in do_user_addr_fault()
1450 do_sigbus(regs, error_code, address, fault); in do_user_addr_fault()
1451 else if (fault & VM_FAULT_SIGSEGV) in do_user_addr_fault()
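
The hits above trace the dispatch order after handle_mm_fault(): pending signal, retry, success, then OOM, bus-error class, segfault. A hedged standalone model of that ordering (the flag values are invented, not the kernel's VM_FAULT_* bits):

    #include <stdio.h>

    #define FAULT_RETRY   0x01
    #define FAULT_OOM     0x02
    #define FAULT_SIGBUS  0x04
    #define FAULT_SIGSEGV 0x08
    #define FAULT_ERROR   (FAULT_OOM | FAULT_SIGBUS | FAULT_SIGSEGV)

    static const char *dispatch(unsigned fault)
    {
        if (fault & FAULT_RETRY)
            return "reacquire mmap_lock and retry";
        if (!(fault & FAULT_ERROR))
            return "handled";
        if (fault & FAULT_OOM)
            return "out-of-memory path";
        if (fault & FAULT_SIGBUS)
            return "do_sigbus";
        return "SIGSEGV";
    }

    int main(void)
    {
        printf("%s\n", dispatch(0));
        printf("%s\n", dispatch(FAULT_SIGBUS));
        return 0;
    }
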
1481 /* Was the fault on kernel-controlled part of the address space? */ in handle_page_fault()
1487 * User address page fault handling might have reenabled in handle_page_fault()
1506 * (asynchronous page fault mechanism). The event happens when a in DEFINE_IDTENTRY_RAW_ERRORCODE()
1531 * be invoked because a kernel fault on a user space address might in DEFINE_IDTENTRY_RAW_ERRORCODE()
1534 * In case the fault hit a RCU idle region the conditional entry in DEFINE_IDTENTRY_RAW_ERRORCODE()