Lines Matching full:stack

15  *			at the top of the kernel process stack.
65 * are not needed). SYSCALL does not save anything on the stack
100 /* Construct struct pt_regs on stack */
204 * Save old stack pointer and switch to trampoline stack.
214 * We are on the trampoline stack. All regs except RDI are live.
249 /* switch stack */
259 * When switching from a shallower to a deeper call stack
340 * @has_error_code: Hardware pushed error code on stack
345 * Call error_entry() and switch to the task stack if from userspace.
347 * When in XENPV, it is already in the task stack, and it can't fault
378 * @has_error_code: Hardware pushed error code on stack
381 * and simple IDT entries. No IST stack, no paranoid entry checks.
417 + The interrupt stubs push (vector) onto the stack, which is the error_code
479 /* Switch to the regular task stack and use the noist entry point */
495 * runs on an IST stack and needs to be able to cause nested #VC exceptions.
498 * an IST stack by switching to the task stack if coming from user-space (which
499 * includes early SYSCALL entry path) or back to the stack in the IRET frame if
502 * If entered from kernel-mode the return stack is validated first, and if it is
503 * not safe to use (e.g. because it points to the entry stack) the #VC handler
504 * will switch to a fall-back stack (VC2) and call a special handler function.
532 * Switch off the IST stack to make it free for nested exceptions. The
534 * stack if it is safe to do so. If not it switches to the VC fall-back
535 * stack.
539 movq %rax, %rsp /* Switch to new stack */
553 * No need to switch back to the IST stack. The current stack is either
554 * identical to the stack in the IRET frame or the VC fall-back stack,
559 /* Switch to the regular task stack */
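The matched lines above describe the #VC stack-selection policy: switch to the task stack when coming from user space (including the early SYSCALL entry path), otherwise validate the kernel return stack and fall back to the VC2 stack if it is unsafe. A toy Python model of that decision, with illustrative names only (none of these identifiers are the kernel's):

```python
# Toy model of the #VC stack-selection policy sketched in the
# matched comments. Constants and names are assumptions for
# illustration, not kernel identifiers.

TASK_STACK, IRET_STACK, VC2_FALLBACK = "task", "iret", "vc2"

def vc_pick_stack(from_user: bool, iret_stack_safe: bool) -> str:
    """Pick the stack the #VC handler should run on."""
    if from_user:                 # includes the early SYSCALL entry path
        return TASK_STACK
    # Kernel mode: the return stack is validated first; if it points
    # into the entry stack it is unsafe, so use the VC2 fall-back.
    return IRET_STACK if iret_stack_safe else VC2_FALLBACK
```

The point of the split is that the IST slot is freed for nested #VC exceptions either way: the handler never keeps running on the IST stack itself.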
631 * The stack is now user RDI, orig_ax, RIP, CS, EFLAGS, RSP, SS.
632 * Save old stack pointer and switch to trampoline stack.
638 /* Copy the IRET frame to the trampoline stack. */
645 /* Push user RDI on the trampoline stack. */
649 * We are on the trampoline stack. All regs except RDI are live.
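The exit-path lines above describe copying the five-word IRET frame (RIP, CS, EFLAGS, RSP, SS) to the trampoline stack and then pushing the saved user RDI on top. A minimal Python sketch of that copy, purely illustrative (the function name and list representation are assumptions):

```python
# Toy model of the return-to-user trampoline copy: the IRET frame is
# copied off the task stack, then user RDI is pushed on top, and only
# then would %rsp be pointed at the trampoline. Illustrative only.

def copy_to_trampoline(iret_frame, user_rdi):
    """Return the trampoline stack contents, top of stack first."""
    assert len(iret_frame) == 5          # RIP, CS, EFLAGS, RSP, SS
    trampoline = list(iret_frame)        # copy the IRET frame over
    trampoline.insert(0, user_rdi)       # push user RDI on top
    return trampoline
```

After this copy the old task stack is no longer referenced, which is what makes the subsequent CR3 switch safe.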
686 * Are we returning to a stack segment from the LDT? Note: in
687 * 64-bit mode SS:RSP on the exception stack is always valid.
708 * values. We have a percpu ESPFIX stack that is eight slots
710 * of the ESPFIX stack.
713 * normal stack and RAX on the ESPFIX stack.
715 * The ESPFIX stack layout we set up looks like this:
717 * --- top of ESPFIX stack ---
724 * --- bottom of ESPFIX stack ---
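The ESPFIX lines above mention an eight-slot percpu ESPFIX stack that receives the IRET frame plus RAX, with the new stack pointer left inside the copy. A rough Python model of that layout; the slot ordering here is an assumption for illustration and is not taken from the kernel source:

```python
# Toy model of the eight-slot ESPFIX copy. Index 0 is the top of the
# ESPFIX stack; slot positions are illustrative assumptions.

ESPFIX_SLOTS = 8

def build_espfix(rax, iret_frame):
    """Lay out the ESPFIX stack; return (stack, index of new RSP)."""
    assert len(iret_frame) == 5          # RIP, CS, EFLAGS, RSP, SS
    stack = [None] * ESPFIX_SLOTS
    stack[1:6] = iret_frame              # IRET frame near the top
    stack[6] = rax                       # RAX saved just below it
    rsp = 1                              # "RSP" points at the copied RIP
    return stack, rsp
```

The trick being modeled is that the real RSP ends up pointing at a read-only alias of this copy, so the truncated 16-bit SS:SP seen after IRET still lands on valid frame contents.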
753 * still points to an RO alias of the ESPFIX stack.
765 * At this point, we cannot write to the stack any more, but we can
821 * popping the stack frame (can't be done atomically) and so it would still
822 * be possible to get enough handler activations to overflow the stack.
838 movq %rdi, %rsp /* we don't return, adjust the stack frame */
855 * to pop the stack frame we end up in an infinite loop of failsafe callbacks.
981 * "Paranoid" exit path from exception stack. This is invoked
1064 /* Put us onto the real thread stack. */
1136 * Runs on exception stack. Xen PV does not go through this path at all,
1151 * NMI is using the top of the stack of the previous NMI. We
1153 * stack of the previous NMI. NMI handlers are not re-entrant
1157 * Check a special location on the stack that contains
1159 * The interrupted task's stack is also checked to see if it
1160 * is an NMI stack.
1161 * If the variable is not set and the stack is not the NMI
1162 * stack then:
1163 * o Set the special variable on the stack
1165 * stack
1166 * o Copy the interrupt frame into an "iret" location on the stack
1168 * If the variable is set or the previous stack is the NMI stack:
1172 * Now on exit of the first NMI, we first clear the stack variable
1173 * The NMI stack will tell any nested NMIs at that point that it is
1174 * nested. Then we pop the stack normally with iret, and if there was
1175 * a nested NMI that updated the copied interrupt stack frame, a
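The matched comment lines above sketch the nested-NMI algorithm: an on-stack variable marks an NMI in progress, the outermost NMI copies the interrupt frame to an "iret" location, and a nested NMI only redirects that copy to repeat_nmi before returning. A toy Python state machine for that bookkeeping (names are illustrative; this is not the kernel's code):

```python
# Toy state machine for the nested-NMI bookkeeping described in the
# matched comments. The special variable, the "iret" copy, and
# repeat_nmi are modeled with plain Python names; all illustrative.

RIP_REPEAT_NMI = "repeat_nmi"

class NMIState:
    def __init__(self):
        self.executing = False      # the special on-stack variable
        self.iret_rip = None        # RIP in the copied "iret" frame

    def enter(self, interrupted_rip):
        if self.executing:
            # Nested NMI: only redirect the copied frame, then return.
            self.iret_rip = RIP_REPEAT_NMI
            return "nested"
        self.executing = True                 # set the special variable
        self.iret_rip = interrupted_rip       # copy the interrupt frame
        return "outermost"

    def exit(self):
        # Clear the variable first, then "iret" through the copy; if a
        # nested NMI hit meanwhile, this lands in repeat_nmi and the
        # NMI handler runs again.
        self.executing = False
        return self.iret_rip
```

This captures why the scheme is safe without re-entrancy: the nested NMI never touches handler state, only the copied return frame.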
1195 * NMI from user mode. We need to run on the thread stack, but we
1201 * We also must not push anything to the stack before switching
1225 * At this point we no longer need to worry about stack damage
1226 * due to nesting -- we're on the normal thread stack and we're
1227 * done with the NMI stack.
1242 * Here's what our stack frame will look like:
1310 * Now test if the previous stack was an NMI stack. This covers
1313 * there is one case in which RSP could point to the NMI stack
1322 /* Compare the NMI stack (rdx) with the stack we came from (4*8(%rsp)) */
1324 /* If the stack pointer is above the NMI stack, this is a normal NMI */
1329 /* If it is below the NMI stack, it is a normal NMI */
1332 /* Ah, it is within the NMI stack. */
1352 /* Put stack back */
1403 * This makes it safe to copy to the stack frame that a nested
1497 * iretq reads the "iret" frame and exits the NMI stack in a