Lines Matching full:nmi
18 #include <linux/nmi.h>
31 #include <asm/nmi.h>
39 #include <trace/events/nmi.h>
80 * Prevent the NMI reason port (0x61) from being accessed simultaneously; can
81 * only be used in an NMI handler.
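The comment at file lines 80-81 sits above the lock that serializes reads of the NMI reason port. A minimal sketch of how such a lock is used on the non-CPU-specific path: it must be a raw spinlock because it is only ever taken from NMI context. The lock name, x86_platform.get_nmi_reason() and the NMI_REASON_* bits are assumptions here; pci_serr_error()/io_check_error() appear in the matches further down.

    static DEFINE_RAW_SPINLOCK(nmi_reason_lock);

    	raw_spin_lock(&nmi_reason_lock);
    	reason = x86_platform.get_nmi_reason();	/* reads port 0x61 on PC-style hardware */

    	if (reason & NMI_REASON_MASK) {
    		if (reason & NMI_REASON_SERR)
    			pci_serr_error(reason, regs);
    		else if (reason & NMI_REASON_IOCHK)
    			io_check_error(reason, regs);
    		raw_spin_unlock(&nmi_reason_lock);
    		return;
    	}
    	raw_spin_unlock(&nmi_reason_lock);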
117 "INFO: NMI handler (%ps) took too long to run: %lld.%03d msecs\n", in nmi_check_duration()
150 /* return total number of NMI events handled */ in nmi_handle()
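Lines 117 and 150 come from the handler dispatch path. A sketch of what nmi_handle() plausibly does with the registered actions: walk the RCU-protected list for the given NMI type, call each handler, time it, and sum up the events the handlers claim. nmi_check_duration() appears in the matches above; the list head desc->head and the other details are assumptions.

    	int handled = 0;
    	struct nmiaction *a;

    	rcu_read_lock();
    	list_for_each_entry_rcu(a, &desc->head, list) {
    		u64 delta = sched_clock();
    		int thishandled = a->handler(type, regs);

    		handled += thishandled;
    		delta = sched_clock() - delta;
    		nmi_check_duration(a, delta);	/* emits the "took too long" message at line 117 */
    	}
    	rcu_read_unlock();

    	/* return total number of NMI events handled */
    	return handled;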
167 * internal NMI handler call chains (SERR and IO_CHECK). in __register_nmi_handler()
196 * the name passed in to describe the NMI handler in unregister_nmi_handler()
201 "Trying to free NMI (%s) from NMI context!\n", n->name); in unregister_nmi_handler()
223 pr_emerg("NMI: PCI system error (SERR) for reason %02x on CPU %d.\n", in pci_serr_error()
227 nmi_panic(regs, "NMI: Not continuing"); in pci_serr_error()
247 "NMI: IOCK error (debug interrupt?) for reason %02x on CPU %d.\n", in io_check_error()
252 nmi_panic(regs, "NMI IOCK error: Not continuing"); in io_check_error()
255 * If we end up here, it means we have received an NMI while in io_check_error()
286 * if it caused the NMI) in unknown_nmi_error()
296 pr_emerg("Uhhuh. NMI received for unknown reason %02x on CPU %d.\n", in unknown_nmi_error()
300 nmi_panic(regs, "NMI: Not continuing"); in unknown_nmi_error()
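Lines 286-300 are from unknown_nmi_error(). Sketching the shape these messages suggest (the knob names unknown_nmi_panic and panic_on_unrecovered_nmi are assumptions about which configuration is checked): give the NMI_UNKNOWN handlers a chance first, then either panic or log and continue.

    	handled = nmi_handle(NMI_UNKNOWN, regs);
    	if (handled)
    		return;			/* someone claimed it after all */

    	pr_emerg("Uhhuh. NMI received for unknown reason %02x on CPU %d.\n",
    		 reason, smp_processor_id());
    	if (unknown_nmi_panic || panic_on_unrecovered_nmi)
    		nmi_panic(regs, "NMI: Not continuing");

    	pr_emerg("Dazed and confused, but trying to continue\n");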
316 * CPU-specific NMI must be processed before non-CPU-specific in default_do_nmi()
317 * NMI, otherwise we may lose it, because the CPU-specific in default_do_nmi()
318 * NMI cannot be detected/processed on other CPUs. in default_do_nmi()
323 * be two NMIs or more than two (anything over two is dropped in default_do_nmi()
324 * due to NMI being edge-triggered). If this is the second half in default_do_nmi()
325 * of the back-to-back NMI, assume we dropped things and process in default_do_nmi()
326 * more handlers. Otherwise reset the 'swallow' NMI behaviour in default_do_nmi()
341 * There are cases when an NMI handler handles multiple in default_do_nmi()
342 * events in the current NMI. One of these events may in default_do_nmi()
343 * be queued up as the next NMI. Because the event is in default_do_nmi()
344 * already handled, the next NMI will result in an unknown in default_do_nmi()
345 * NMI. Instead, let's flag this for a potential NMI to in default_do_nmi()
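The comments around lines 316-345 describe the ordering (CPU-specific handlers first) and the "swallow" bookkeeping. A sketch of those two pieces, assuming per-CPU variables along the lines of last_nmi_rip and swallow_nmi (names inferred from the comments; treat them as assumptions):

    	/* A back-to-back NMI lands on the same %rip as the previous one. */
    	if (regs->ip == __this_cpu_read(last_nmi_rip))
    		b2b = true;
    	else
    		__this_cpu_write(swallow_nmi, false);	/* reset the 'swallow' behaviour */

    	__this_cpu_write(last_nmi_rip, regs->ip);

    	/* CPU-specific handlers run first, before the reason-port sources. */
    	handled = nmi_handle(NMI_LOCAL, regs);
    	if (handled) {
    		/*
    		 * More than one event in a single NMI: one of them may have
    		 * queued up another NMI that is now already handled, so flag
    		 * the next otherwise-unknown NMI as a candidate to swallow.
    		 */
    		if (handled > 1)
    			__this_cpu_write(swallow_nmi, true);
    		return;
    	}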
354 * Non-CPU-specific NMI: NMI sources can be processed on any CPU. in default_do_nmi()
375 * Reassert NMI in case it became active in default_do_nmi()
387 * Only one NMI can be latched at a time. To handle in default_do_nmi()
388 * this, we may process multiple NMI handlers at once to in default_do_nmi()
389 * cover the case where an NMI is dropped. The downside in default_do_nmi()
390 * to this approach is that we may process an NMI prematurely, in default_do_nmi()
391 * while its real NMI is sitting latched. This will cause in default_do_nmi()
392 * an unknown NMI on the next run of the NMI processing. in default_do_nmi()
397 * of a back-to-back NMI, we flag that condition too. in default_do_nmi()
400 * NMI previously and we swallow it. Otherwise we reset in default_do_nmi()
404 * a 'real' unknown NMI. For example, while processing in default_do_nmi()
405 * a perf NMI, another perf NMI comes in along with a in default_do_nmi()
406 * 'real' unknown NMI. These two NMIs get combined into in default_do_nmi()
407 * one (as described above). When the next NMI gets in default_do_nmi()
409 * no one will know that there was a 'real' unknown NMI sent in default_do_nmi()
411 * perf NMI returns two events handled, then the second in default_do_nmi()
412 * NMI will get eaten by the logic below, again losing a in default_do_nmi()
413 * 'real' unknown NMI. But this is the best we can do in default_do_nmi()
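The long comment ending here explains when an otherwise-unknown NMI is deliberately dropped. The decision it describes boils down to something like the following, using the same assumed swallow_nmi/b2b variables as above (the stats counter is also an assumption):

    	/*
    	 * Swallow only if this is the second half of a back-to-back pair
    	 * and the previous NMI reported more than one event handled;
    	 * otherwise report it as an unknown NMI.
    	 */
    	if (b2b && __this_cpu_read(swallow_nmi))
    		__this_cpu_add(nmi_stats.swallow, 1);
    	else
    		unknown_nmi_error(reason, regs);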
427 * its NMI context with the CPU when the breakpoint or page fault does an IRET.
430 * NMI processing. On x86_64, the asm glue protects us from nested NMIs
431 * if the outer NMI came from kernel mode, but we can still nest if the
432 * outer NMI came from user mode.
440 * When no NMI is in progress, it is in the "not running" state.
441 * When an NMI comes in, it goes into the "executing" state.
442 * Normally, if another NMI is triggered, it does not interrupt
443 * the running NMI and the HW will simply latch it so that when
444 * the first NMI finishes, it will restart the second NMI.
446 * when one is running, are ignored. Only one NMI is restarted.)
448 * If an NMI executes an iret, another NMI can preempt it. We do not
449 * want to allow this new NMI to run, but we want to execute it when the
451 * the first NMI will perform a dec_return; if the result is zero
452 * (NOT_RUNNING), then it will simply exit the NMI handler. If not, the
455 * rerun the NMI handler again, and restart the 'latched' NMI.
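The comment block around lines 440-455 describes a three-state, per-CPU state machine. A compact sketch of that machine as the text describes it (identifier names are assumptions, extrapolated from the NOT_RUNNING constant mentioned at line 452):

    enum nmi_states {
    	NMI_NOT_RUNNING = 0,
    	NMI_EXECUTING,
    	NMI_LATCHED,
    };
    static DEFINE_PER_CPU(enum nmi_states, nmi_state);

    	/* NMI entry: a nested NMI only marks itself as latched and returns. */
    	if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
    		this_cpu_write(nmi_state, NMI_LATCHED);
    		return;
    	}
    	this_cpu_write(nmi_state, NMI_EXECUTING);

    nmi_restart:
    	/* ... process the NMI ... */

    	/*
    	 * NMI exit: the dec_return mentioned at line 451.  LATCHED (2)
    	 * decrements to EXECUTING (1), which is non-zero, so we loop and
    	 * rerun the handler for the 'latched' NMI; EXECUTING decrements
    	 * to NOT_RUNNING (0) and we leave.
    	 */
    	if (this_cpu_dec_return(nmi_state))
    		goto nmi_restart;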
462 * In case the NMI takes a page fault, we need to save off the CR2
463 * because the NMI could have preempted another page fault and corrupted
466 * CR2 must be done before converting the NMI state back to NOT_RUNNING.
467 * Otherwise, there would be a race of another nested NMI coming in
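Lines 462-467 pin down an ordering requirement: CR2 is saved on entry and must be restored before the state is dropped back to NOT_RUNNING, so a nested NMI arriving in that window cannot clobber it. Roughly, with the per-CPU variable name nmi_cr2 as an assumption:

    static DEFINE_PER_CPU(unsigned long, nmi_cr2);

    	/* entry: remember the CR2 of whatever fault we may have interrupted */
    	this_cpu_write(nmi_cr2, read_cr2());

    	/* ... handlers run; an NMI-time page fault may overwrite CR2 ... */

    	/* exit: restore CR2 first, only then convert the state back */
    	if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))
    		write_cr2(this_cpu_read(nmi_cr2));
    	if (this_cpu_dec_return(nmi_state))
    		goto nmi_restart;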
550 /* reset the back-to-back NMI logic */