Searched refs:IPI (Results 1 – 19 of 19) sorted by relevance
47 order to perform some KVM maintenance. To do so, an IPI is sent, forcing
53 1) Send an IPI. This forces a guest mode exit.
68 as well as to avoid sending unnecessary IPIs (see "IPI Reduction"), and
69 even to ensure IPI acknowledgements are waited upon (see "Waiting for
158 then the caller will wait for each VCPU to acknowledge its IPI before
160 If, for example, the VCPU is sleeping, so no IPI is necessary, then
190 kick will send an IPI to force an exit from guest mode when necessary.
195 enter guest mode. This means that an optimized implementation (see "IPI
196 Reduction") must be certain when it's safe to not send the IPI. One
206 !kvm_request_pending() on its last check and then not receiving an IPI for
[all …]
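The KVM-related hits above outline a recurring pattern: record a request, kick the target VCPU with an IPI only when it is actually in guest mode, and optionally wait for the kick to be acknowledged. The user-space sketch below models that pattern without assuming anything about KVM internals: a pthread stands in for the VCPU, a SIGUSR1 sent with pthread_kill stands in for the IPI, and every name in it (fake_vcpu, kick_and_wait, ...) is invented for illustration. The memory-ordering care that the "IPI Reduction" text insists on is deliberately glossed over here.

/*
 * Hedged sketch, not KVM code: request + conditional kick + wait-for-ack.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <signal.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

enum vcpu_mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE };

struct fake_vcpu {
	atomic_int  mode;     /* OUTSIDE_GUEST_MODE or IN_GUEST_MODE   */
	atomic_bool request;  /* pending "maintenance" request         */
	atomic_bool acked;    /* set once the vCPU has seen the request */
	pthread_t   thread;
};

/* The stand-in IPI handler: its only job is to interrupt "guest mode". */
static void ipi_handler(int sig) { (void)sig; }

static void *vcpu_loop(void *arg)
{
	struct fake_vcpu *v = arg;

	for (int i = 0; i < 5; i++) {
		/* Check requests before every guest entry, as the doc requires. */
		if (atomic_exchange(&v->request, false))
			atomic_store(&v->acked, true);

		atomic_store(&v->mode, IN_GUEST_MODE);
		sleep(1);                 /* "running the guest"; the signal cuts this short */
		atomic_store(&v->mode, OUTSIDE_GUEST_MODE);
	}
	return NULL;
}

/* Kick: only send the "IPI" when the target is in guest mode, then wait. */
static void kick_and_wait(struct fake_vcpu *v)
{
	atomic_store(&v->request, true);

	/*
	 * This unsynchronized mode check is exactly the window the "IPI
	 * Reduction" text warns about; here a missed kick merely delays the
	 * acknowledgement until the vCPU's next loop iteration.
	 */
	if (atomic_load(&v->mode) == IN_GUEST_MODE)
		pthread_kill(v->thread, SIGUSR1);

	while (!atomic_load(&v->acked))   /* wait for the acknowledgement */
		usleep(1000);
}

int main(void)
{
	struct fake_vcpu v = { 0 };

	signal(SIGUSR1, ipi_handler);
	pthread_create(&v.thread, NULL, vcpu_loop, &v);

	sleep(1);
	kick_and_wait(&v);
	printf("request acknowledged\n");

	pthread_join(v.thread, NULL);
	return 0;
}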
11 # when returning from IPI handler, and when returning to user-space.
15 # x86-32 uses IRET as return from interrupt, which takes care of the IPI.
19 # x86-64 uses IRET as return from interrupt, which takes care of the IPI.
29 * Pending IPI (inter-processor interrupt) priority, 8 bits
30 Zero is the highest priority, 255 means no IPI is pending.
33 Zero means no interrupt pending, 2 means an IPI is pending
71 non-IPI interrupts to a single CPU at a time (EG: Freescale MPIC).
127 2 = MPIC inter-processor interrupt (IPI)
130 the MPIC IPI number. The type-specific
193 * MPIC IPI interrupts. Note the interrupt
341 HAC, IPI, SPDIF, HUDI, I2C,
367 INTC_VECT(HAC, 0x580), INTC_VECT(IPI, 0x5c0),
427 DMAC, I2C, HUDI, SPDIF, IPI, HAC, TMU, GPIO } },
432 { 0xffe00004, 0, 32, 8, /* INT2PRI1 */ { IPI, SPDIF, HUDI, I2C } },
53 VECTOR handle_interrupt ; (19) Inter core Interrupt (IPI)
55 VECTOR handle_interrupt ; (21) Software Triggered Intr (Self IPI)
80 # Generic IRQ IPI support
49 1: Soft-irq. Uses IPI to complete IOs across CPU nodes. Simulates the overhead
96 performs an IPI to inform all processors about the new mapping. This results
288 unless absolutely necessary. Please consider using an IPI to wake up
161 CPU awakens, the scheduler will send an IPI that can result in
153 /* IPI called on each CPU. */
156 global clock event devices. The support of such hardware would involve IPI
296 This indicates that CPU 7 has failed to respond to a reschedule IPI.
311 achieved by using an IPI to the local processor.
712 to each of the threads, where the IPI handler will also write
134 packets have been queued to their backlog queue. The IPI wakes backlog
1144 which sends an IPI to the CPUs that are running the same ASID
625 # are unmapped instead of sending one IPI per page to flush. The architecture
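The last hit touches on batching: rather than sending one IPI per unmapped page, addresses are accumulated and flushed with a single cross-CPU notification. The toy C program below only illustrates that accounting; the names (flush_batch, send_flush_ipi) are invented, a printf stands in for the IPI, and none of this is the kernel's implementation.

/* Hedged sketch of batched TLB shootdown accounting. */
#include <stddef.h>
#include <stdio.h>

#define BATCH_MAX 32

struct flush_batch {
	unsigned long pages[BATCH_MAX];
	size_t nr;
};

/* Stand-in for the expensive cross-CPU notification: one "IPI" per call. */
static void send_flush_ipi(const struct flush_batch *b)
{
	printf("one IPI flushing %zu pages\n", b->nr);
}

static void batch_add(struct flush_batch *b, unsigned long page)
{
	b->pages[b->nr++] = page;
	if (b->nr == BATCH_MAX) {      /* batch full: flush it now */
		send_flush_ipi(b);
		b->nr = 0;
	}
}

int main(void)
{
	struct flush_batch b = { .nr = 0 };

	/* Unmapping 100 pages costs a handful of IPIs instead of 100. */
	for (unsigned long page = 0; page < 100; page++)
		batch_add(&b, page);
	if (b.nr)
		send_flush_ipi(&b);    /* flush the final partial batch */
	return 0;
}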