An ad-hoc collection of notes on IA64 MCA and INIT processing

---

* MCA occurs on one cpu, the monarch.  SAL sends an MCA rendezvous
  interrupt (a normal interrupt) to all the other cpus, the slaves.

* Slave cpus that receive the MCA rendezvous interrupt call down into
  SAL; they end up spinning disabled while the MCA is being serviced
  (a slave-side sketch follows this list).

* If any slave cpu was already spinning disabled when the MCA occurred
  then it cannot service the rendezvous interrupt.  After a timeout SAL
  sends an unmaskable INIT event to the slave cpus that have not yet
  rendezvoused.

* If an MCA/INIT event occurs while the kernel was running (not user
  space) and the kernel has called PAL then the MCA/INIT handler cannot
  assume that the kernel stack is in a fit state to be used, mainly
  because PAL may or may not maintain the stack pointer internally.
  Because the MCA/INIT handlers cannot trust the kernel stack, they
  have to use their own, per-cpu stacks.  The MCA/INIT stacks are
  preformatted with just enough task state to let the relevant handlers
  do their job.

* Unlike most other architectures, the ia64 struct task is embedded in
  the kernel stack[1].  So switching to a new kernel stack means that
  we switch to a new task as well.  Because various bits of the kernel
  assume that current points into the struct task, switching to a new
  stack also means a new value for current (see the layout sketch after
  this list).

* The tasks that received an MCA or INIT event are no longer running,
  they have been converted to blocked tasks.  But the cpus that
  received the MCA rendezvous interrupt are still running on their
  normal kernel stacks!

* To distinguish between these two cases, the monarch must know which
  tasks are on a cpu and which are not.  Hence each slave cpu that
  switches to an MCA/INIT stack registers its new stack using
  set_curr_task(), so the monarch can tell that the _original_ task is
  no longer running on that cpu (see the registration sketch after this
  list).

[1] The original design called for ia64 to separate its struct task
    and the kernel stacks.  Then the MCA/INIT data would be chained
    stacks like i386 interrupt stacks.  But that required radical
    surgery on the rest of ia64, plus extra hard wired TLB entries
    with the associated performance degradation, so David Mosberger
    vetoed that approach.  So separate kernel stacks mean separate
    "tasks" for the MCA/INIT handlers.
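
A rough sketch of the slave rendezvous path, assuming a notional
checkin array that the monarch can poll.  Only ia64_sal_mc_rendez() is
a real SAL wrapper here; the handler shape and names are illustrative,
not the exact code in arch/ia64/kernel/mca.c.

/*
 * Illustration only: a slave cpu's MCA rendezvous interrupt handler.
 * example_mca_checkin[] is an assumption for the sketch.
 */
#include <linux/interrupt.h>
#include <linux/smp.h>
#include <asm/sal.h>

static int example_mca_checkin[NR_CPUS];	/* polled by the monarch */

static irqreturn_t example_mca_rendez_handler(int irq, void *arg)
{
	unsigned long flags;
	int cpu = smp_processor_id();

	local_irq_save(flags);			/* spin disabled from here */
	example_mca_checkin[cpu] = 1;		/* this cpu has rendezvoused */

	/*
	 * Call down into SAL; this does not return until the MCA has
	 * been serviced and the slaves are released.
	 */
	ia64_sal_mc_rendez();

	example_mca_checkin[cpu] = 0;
	local_irq_restore(flags);
	return IRQ_HANDLED;
}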
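
A conceptual sketch of one per-cpu MCA/INIT stack area, assuming the
task and thread_info sit at the base of a KERNEL_STACK_SIZE
allocation.  Field names and placement are simplified; the real layout
comes from the asm offsets.

/*
 * Illustration only: because the task lives at the base of the stack
 * area, switching to this stack is also a switch to a new task and a
 * new value for current.
 */
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/string.h>

union example_mca_stack {
	struct {
		struct task_struct task;	/* at the base of the area */
		struct thread_info info;	/* conceptually next to it */
	} s;
	unsigned char area[KERNEL_STACK_SIZE];	/* the handler's stack */
};

static void example_format_mca_stack(union example_mca_stack *p, int cpu,
				     const char *type)
{
	memset(p, 0, sizeof(*p));
	p->s.info.task = &p->s.task;		/* thread_info -> task */
	p->s.info.cpu = cpu;
	p->s.task.state = TASK_UNINTERRUPTIBLE;	/* never scheduled */
	snprintf(p->s.task.comm, sizeof(p->s.task.comm), "%s %d", type, cpu);
	/*
	 * thread.ksp and the register backing store would also be
	 * preformatted here so the unwinder can treat this "task" like
	 * any other blocked task.
	 */
}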
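
And a sketch of the registration step, using the ia64-only scheduler
hooks curr_task()/set_curr_task(); the sos bookkeeping is simplified
for illustration.

/*
 * Illustration only: register the handler's "task" as current on this
 * cpu and remember the interrupted task.  struct example_sos stands in
 * for the real SAL-to-OS state structure.
 */
#include <linux/sched.h>
#include <linux/smp.h>

struct example_sos {
	struct task_struct *prev_task;		/* the interrupted task */
};

static void example_register_mca_task(struct task_struct *mca_task,
				      struct example_sos *sos)
{
	int cpu = smp_processor_id();

	sos->prev_task = curr_task(cpu);	/* remember who we interrupted */
	set_curr_task(cpu, mca_task);		/* monarch now sees the handler */
}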

---

How is ia64 MCA/INIT different from x86 NMI?

* x86 has a separate struct task which points to one of multiple kernel
  stacks.  ia64 has the struct task embedded in the single kernel
  stack, so switching stacks means switching tasks.

* x86 does not call the BIOS, so the NMI handler does not have to worry
  about registers having changed underneath it.  MCA/INIT can occur
  while the cpu is in PAL in physical mode, with undefined registers
  and an undefined kernel stack.

---

To get a backtrace on the tasks that were running when MCA/INIT was
delivered, ia64_mca_modify_original_stack() identifies and verifies the
original kernel stack, copies the dirty registers from the MCA/INIT
stack's register backing store to the original stack and fills in
enough saved state that the original stack looks like any other blocked
task.  It can then be unwound like any other sleeping task (see the
sketch below).
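
A minimal sketch of that unwind, using the ia64 unwind API on the
converted task.  Error handling is omitted and the output format is
illustrative.

/*
 * Illustration only: once the original stack looks like a blocked
 * task, a backtrace is an ordinary blocked-task unwind.
 */
#include <linux/kernel.h>
#include <linux/sched.h>
#include <asm/unwind.h>

static void example_backtrace_prev_task(struct task_struct *prev)
{
	struct unw_frame_info info;
	unsigned long ip;

	unw_init_from_blocked_task(&info, prev);
	do {
		unw_get_ip(&info, &ip);
		if (!ip)
			break;
		printk(KERN_DEBUG "  [<%016lx>]\n", ip);
	} while (unw_unwind(&info) >= 0);
}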

---

If the previous task has been verified and converted to a blocked
state, then sos->prev_task on the MCA/INIT stack is updated to point to
it.  That is how you identify the tasks that were running when MCA/INIT
was delivered, and the field can be inspected in dumps or with a
debugger.

The sos data is always in the MCA/INIT handler stack, at offset
MCA_SOS_OFFSET.  You can get that value from mca_asm.h, or calculate it
as KERNEL_STACK_SIZE - sizeof(struct pt_regs) - sizeof(struct
ia64_sal_os_state), with 16 byte alignment for all structures.
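
A sketch of that calculation.  The EXAMPLE_ALIGN16() macro just mirrors
the 16 byte alignment rule; MCA_SOS_OFFSET in mca_asm.h remains the
authoritative value.

/*
 * Illustration only: locate the sos data in an MCA/INIT handler stack
 * using the formula above.
 */
#include <asm/mca.h>		/* struct ia64_sal_os_state */
#include <asm/ptrace.h>		/* KERNEL_STACK_SIZE, struct pt_regs */

#define EXAMPLE_ALIGN16(x)	((x) & ~15UL)

static struct ia64_sal_os_state *example_sos(void *handler_stack_base)
{
	unsigned long offset = KERNEL_STACK_SIZE;

	/* pt_regs sits at the top of the area, sos just below it. */
	offset = EXAMPLE_ALIGN16(offset - sizeof(struct pt_regs));
	offset = EXAMPLE_ALIGN16(offset - sizeof(struct ia64_sal_os_state));

	return (struct ia64_sal_os_state *)((char *)handler_stack_base + offset);
}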