Searched refs:IPI (Results 1 – 25 of 25) sorted by relevance
1   Xilinx IPI Mailbox Controller
4   The Xilinx IPI(Inter Processor Interrupt) mailbox controller is to manage
5   messaging between two Xilinx Zynq UltraScale+ MPSoC IPI agents. Each IPI
9   | Xilinx ZynqMP IPI Controller |
21  Hardware | | IPI Agent | | IPI Buffers | |
26  | Xilinx IPI Agent Block |
34  IPI agent node:
39  - xlnx,ipi-id: local Xilinx IPI agent ID
40  - #address-cells: number of address cells of internal IPI mailbox nodes
41  - #size-cells: number of size cells of internal IPI mailbox nodes
[all …]
47   order to perform some KVM maintenance. To do so, an IPI is sent, forcing
53   1) Send an IPI. This forces a guest mode exit.
68   as well as to avoid sending unnecessary IPIs (see "IPI Reduction"), and
69   even to ensure IPI acknowledgements are waited upon (see "Waiting for
158  then the caller will wait for each VCPU to acknowledge its IPI before
160  If, for example, the VCPU is sleeping, so no IPI is necessary, then
190  kick will send an IPI to force an exit from guest mode when necessary.
195  enter guest mode. This means that an optimized implementation (see "IPI
196  Reduction") must be certain when it's safe to not send the IPI. One
206  !kvm_request_pending() on its last check and then not receiving an IPI for
[all …]
149  Purpose: Hypercall used to yield if the IPI target vCPU is preempted
153  Usage example: When sending a call-function IPI-many to vCPUs, yield if
154  any of the IPI target vCPUs was preempted.
5309 This capability indicates that KVM supports paravirtualized Hyper-V IPI send
11  # when returning from IPI handler, and when returning to user-space.
15  # x86-32 uses IRET as return from interrupt, which takes care of the IPI.
19  # x86-64 uses IRET as return from interrupt, which takes care of the IPI.
29  * Pending IPI (inter-processor interrupt) priority, 8 bits
30    Zero is the highest priority, 255 means no IPI is pending.
33  Zero means no interrupt pending, 2 means an IPI is pending
58 interrupt of the device being passed-through or the initial IPI ESB
71   non-IPI interrupts to a single CPU at a time (EG: Freescale MPIC).
127  2 = MPIC inter-processor interrupt (IPI)
130  the MPIC IPI number. The type-specific
193  * MPIC IPI interrupts. Note the interrupt
221  bool "Xilinx ZynqMP IPI Mailbox"
224  Say yes here to add support for Xilinx IPI mailbox driver.
226  between processors with Xilinx ZynqMP IPI. It will place the
227  message to the IPI buffer and will access the IPI control
10  a remote vCPU to avoid sending an IPI (and the associated
11  cost of handling the IPI) when performing a wakeup.
338  HAC, IPI, SPDIF, HUDI, I2C, enumerator
364  INTC_VECT(HAC, 0x580), INTC_VECT(IPI, 0x5c0),
424  DMAC, I2C, HUDI, SPDIF, IPI, HAC, TMU, GPIO } },
429  { 0xffe00004, 0, 32, 8, /* INT2PRI1 */ { IPI, SPDIF, HUDI, I2C } },
50  VECTOR handle_interrupt  ; (19) Inter core Interrupt (IPI)
52  VECTOR handle_interrupt  ; (21) Software Triggered Intr (Self IPI)
81 # Generic IRQ IPI support
96 performs an IPI to inform all processors about the new mapping. This results
288 unless absolutely necessary. Please consider using an IPI to wake up
57 1 Soft-irq. Uses IPI to complete IOs across CPU nodes. Simulates the overhead
153 /* IPI called on each CPU. */
160 global clock event devices. The support of such hardware would involve IPI
161 CPU awakens, the scheduler will send an IPI that can result in
299 This indicates that CPU 7 has failed to respond to a reschedule IPI.
312 achieved by using an IPI to the local processor.
731 to each of the threads, where the IPI handler will also write
146 packets have been queued to their backlog queue. The IPI wakes backlog
1035 which sends an IPI to the CPUs that are running the same ASID
780 # are unmapped instead of sending one IPI per page to flush. The architecture