Lines Matching full:we
76 * We need to change the IDT table before calling TRACE_IRQS_ON/OFF to
149 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
177 TRACE_IRQS_IRETQ /* we're about to change IF */
180 * Try to use SYSRET instead of IRET if we're returning to
181 * a completely clean 64-bit userspace context. If we're not,
222 * restore RF properly. If the slowpath sets it for whatever reason, we
247 * We win! This label is here just for ease of understanding
266 * We are on the trampoline stack. All regs except RDI are live.
267 * We can do future final exit work right here.
329 * rax: prev task we switched from
364 * We pack 1 stub into every 8-byte block.
403 * Enters the IRQ stack if we're not already using it. NMI-safe. Clobbers
431 * Right now, if we just incremented irq_count to zero, we've
432 * claimed the IRQ stack but we haven't switched to it yet.
440 * stack linking back to the previous RSP for the entire time we're
441 * on the IRQ stack. For this to work reliably, we need to write
442 * it before we actually move ourselves to the IRQ stack.
451 * changes, the only way we'll notice is if we try to unwind right
452 * here. Assert that we set up the stack right to catch this type
483 /* We need to be off the IRQ stack before decrementing irq_count. */
491 * As in ENTER_IRQ_STACK, irq_count == 0, we are still claiming
492 * the irq stack but we're not on it.
537 * We have RDI, return address, and orig_ax on the stack on
565 * We need to tell lockdep that IRQs are off. We can't do this until
566 * we fix gsbase, and we should do it before enter_from_user_mode
569 * we enter from user mode. There's no reason to optimize this since
578 /* We entered an interrupt context - irqs are off: */
652 * We are on the trampoline stack. All regs except RDI are live.
653 * We can do future final exit work right here.
669 /* Check if we need preemption */
701 * Are we returning to a stack segment from the LDT? Note: in
722 * We are running with user GSBASE. All GPRs contain their user
723 * values. We have a percpu ESPFIX stack that is eight slots
727 * We clobber RAX and RDI in this code. We stash RDI on the
730 * The ESPFIX stack layout we set up looks like this:
737 * RIP <-- RSP points here when we're done
780 * At this point, we cannot write to the stack any more, but we can
788 * values. We can now IRET back to userspace.
956 * On an exit to kernel mode, if @paranoid == 0, we check for preemption,
957 * whereas we omit the preemption check if @paranoid != 0. This is purely
969 * #DF: if the thread stack is somehow unusable, we'll still get a useful OOPS.
1093 * We want to avoid stacking callback handlers due to events occurring
1094 * during handling of the last event. To do this, we keep events disabled
1095 * until we've done all processing. HOWEVER, we must enable events before
1098 * Although unlikely, bugs of that kind are hard to track down, so we'd
1100 * So, on entry to the handler we detect whether we interrupted an
1101 * existing activation in its critical region -- if so, we pop the current
1107 * Since we don't modify %rdi, evtchn_do_upcall(struct pt_regs *) will
1111 movq %rdi, %rsp /* we don't return, adjust the stack frame */
1126 * We get here for two reasons:
1129 * Category 1 we do not need to fix up as Xen has already reloaded all segment
1131 * Category 2 we fix up by killing the current process. We cannot use the
1132 * normal Linux return path in this case because if we use the IRET hypercall
1133 * to pop the stack frame we end up in an infinite loop of failsafe callbacks.
1134 * We distinguish between categories by comparing each saved segment register
1135 * with its current contents: any discrepancy means we are in category 1.
1214 * Use slow, but surefire "are we in kernel?" check.
1258 * We may be returning to very strange contexts (e.g. very early
1260 * be complicated. Fortunately, there's no good reason
1296 * We entered from user mode or we're pretending to have entered
1301 /* We have user CR3. Change to kernel CR3. */
1337 * gsbase and proceed. We'll fix up the exception and land in
1352 * We came from an IRET to user mode, so we have user
1361 * as if we faulted immediately after IRET.
1380 * so we can use real assembly here.
1390 * We allow breakpoints in NMIs. If a breakpoint occurs, then
1392 * This means that we can have nested NMIs where the next
1393 * NMI is using the top of the stack of the previous NMI. We
1398 * To handle this case we do the following:
1414 * Now on exit of the first NMI, we first clear the stack variable
1416 * nested. Then we pop the stack normally with iret, and if there was
1423 * can fault. We therefore handle NMIs from user space like
1436 * NMI from user mode. We need to run on the thread stack, but we
1438 * we don't want to enable interrupts, because then we'll end
1442 * We also must not push anything to the stack before switching
1443 * stacks lest we corrupt the "NMI executing" variable.
1464 * At this point we no longer need to worry about stack damage
1465 * due to nesting -- we're on the normal thread stack and we're
1474 * Return back to user mode. We must *not* do the normal exit
1475 * work, because we don't want to enable interrupts.
1509 * NMIs, we need to be done with it, and we need to leave enough
1512 * We return by executing IRET while RSP points to the "iret" frame.
1522 * Determine whether we're a nested NMI.
1524 * If we interrupted kernel code between repeat_nmi and
1525 * end_repeat_nmi, then we are a nested NMI. We must not
1528 * about to call do_nmi anyway, so we can just
1541 * Now check "NMI executing". If it's set, then we're nested.
1542 * This will not detect if we interrupted an outer NMI just
1550 * the case where we interrupt an outer NMI after it clears
1551 * "NMI executing" but before IRET. We need to be careful, though:
1554 * RSP at the very beginning of the SYSCALL targets. We can
1555 * pull a fast one on naughty userspace, though: we program
1557 * if it controls the kernel's RSP. We set DF before we clear
1561 /* Compare the NMI stack (rdx) with the stack we came from (4*8(%rsp)) */
1597 /* We are returning to kernel mode, so this cannot result in a fault. */
1637 * here. But NMIs are still enabled and we can take another
1640 * it will just return, as we are about to repeat an NMI anyway.
1645 * we're repeating an NMI, gsbase has the same value that it had on
1647 * gsbase if needed before we call do_nmi. "NMI executing"
1654 * here must not modify the "iret" frame while we're writing to
1673 * as we should not be calling schedule in NMI context.
1703 * Clear "NMI executing". Set DF first so that we can easily
1707 * We arguably should just inspect RIP instead, but I (Andy) wrote
1716 * single instruction. We are returning to kernel mode, so this
1717 * cannot result in a fault. Similarly, we don't need to worry