Lines Matching full:stack

15  *			at the top of the kernel process stack.
65 * are not needed). SYSCALL does not save anything on the stack
100 /* Construct struct pt_regs on stack */
204 * Save old stack pointer and switch to trampoline stack.
214 * We are on the trampoline stack. All regs except RDI are live.
249 /* switch stack */
259 * When switching from a shallower to a deeper call stack
289 * This is the start of the kernel stack; even though there's a
293 * This ensures stack unwinds of kernel threads terminate in a known
307 * Set the stack state to what is expected for the target function
340 * @has_error_code: Hardware pushed error code on stack
345 * Call error_entry() and switch to the task stack if from userspace.
347 * When in XENPV, it is already in the task stack, and it can't fault
378 * @has_error_code: Hardware pushed error code on stack
381 * and simple IDT entries. No IST stack, no paranoid entry checks.
424 + The interrupt stubs push (vector) onto the stack, which is the error_code
486 /* Switch to the regular task stack and use the noist entry point */
502 * runs on an IST stack and needs to be able to cause nested #VC exceptions.
505 * an IST stack by switching to the task stack if coming from user-space (which
506 * includes early SYSCALL entry path) or back to the stack in the IRET frame if
509 * If entered from kernel-mode the return stack is validated first, and if it is
510 * not safe to use (e.g. because it points to the entry stack) the #VC handler
511 * will switch to a fall-back stack (VC2) and call a special handler function.
539 * Switch off the IST stack to make it free for nested exceptions. The
541 * stack if it is safe to do so. If not it switches to the VC fall-back
542 * stack.
546 movq %rax, %rsp /* Switch to new stack */
560 * No need to switch back to the IST stack. The current stack is either
561 * identical to the stack in the IRET frame or the VC fall-back stack,
566 /* Switch to the regular task stack */
638 * The stack is now user RDI, orig_ax, RIP, CS, EFLAGS, RSP, SS.
639 * Save old stack pointer and switch to trampoline stack.
645 /* Copy the IRET frame to the trampoline stack. */
652 /* Push user RDI on the trampoline stack. */
656 * We are on the trampoline stack. All regs except RDI are live.
693 * Are we returning to a stack segment from the LDT? Note: in
694 * 64-bit mode SS:RSP on the exception stack is always valid.
715 * values. We have a percpu ESPFIX stack that is eight slots
717 * of the ESPFIX stack.
720 * normal stack and RAX on the ESPFIX stack.
722 * The ESPFIX stack layout we set up looks like this:
724 * --- top of ESPFIX stack ---
731 * --- bottom of ESPFIX stack ---
760 * still points to an RO alias of the ESPFIX stack.
772 * At this point, we cannot write to the stack any more, but we can
828 * popping the stack frame (can't be done atomically) and so it would still
829 * be possible to get enough handler activations to overflow the stack.
846 movq %rdi, %rsp /* we don't return, adjust the stack frame */
863 * to pop the stack frame we end up in an infinite loop of failsafe callbacks.
991 * "Paranoid" exit path from exception stack. This is invoked
1073 /* Put us onto the real thread stack. */
1145 * Runs on exception stack. Xen PV does not go through this path at all,
1160 * NMI is using the top of the stack of the previous NMI. We
1162 * stack of the previous NMI. NMI handlers are not re-entrant
1166 * Check a special location on the stack that contains
1168 * The interrupted task's stack is also checked to see if it
1169 * is an NMI stack.
1170 * If the variable is not set and the stack is not the NMI
1171 * stack then:
1172 * o Set the special variable on the stack
1174 * stack
1175 * o Copy the interrupt frame into an "iret" location on the stack
1177 * If the variable is set or the previous stack is the NMI stack:
1181 * Now on exit of the first NMI, we first clear the stack variable
1182 * The NMI stack will tell any nested NMIs at that point that it is
1183 * nested. Then we pop the stack normally with iret, and if there was
1184 * a nested NMI that updated the copied interrupt stack frame, a
1204 * NMI from user mode. We need to run on the thread stack, but we
1210 * We also must not push anything to the stack before switching
1234 * At this point we no longer need to worry about stack damage
1235 * due to nesting -- we're on the normal thread stack and we're
1236 * done with the NMI stack.
1251 * Here's what our stack frame will look like:
1319 * Now test if the previous stack was an NMI stack. This covers
1322 * there is one case in which RSP could point to the NMI stack
1331 /* Compare the NMI stack (rdx) with the stack we came from (4*8(%rsp)) */
1333 /* If the stack pointer is above the NMI stack, this is a normal NMI */
1338 /* If it is below the NMI stack, it is a normal NMI */
1341 /* Ah, it is within the NMI stack. */
1361 /* Put stack back */
1412 * This makes it safe to copy to the stack frame that a nested
1506 * iretq reads the "iret" frame and exits the NMI stack in a