Lines Matching +full:iommu +full:- +full:v1

2 VFIO - "Virtual Function I/O" [1]_
7 allotted. This includes x86 hardware with AMD-Vi and Intel VT-d,
9 systems such as Freescale PAMU. The VFIO driver is an IOMMU/device
11 a secure, IOMMU-protected environment. In other words, this allows
12 safe [2]_, non-privileged, userspace drivers.
19 bare-metal device drivers [3]_.
22 field, also benefit from low-overhead, direct device access from
23 userspace. Examples include network adapters (often non-TCP/IP based)
27 which has no notion of IOMMU protection, limited interrupt support,
36 ---------------------------
42 as allowing a device read-write access to system memory imposes the
53 though. Even when an IOMMU is capable of this, properties of devices,
54 interconnects, and IOMMU topologies can each reduce this isolation.
55 For instance, an individual device may be part of a larger multi-
56 function enclosure. While the IOMMU may be able to distinguish
58 transactions between devices to reach the IOMMU. Examples of this
59 could be anything from a multi-function PCI device with backdoors
60 between functions to a non-PCI-ACS (Access Control Services) capable
61 bridge allowing redirection without reaching the IOMMU. Topology
62 can also be a factor in hiding devices. A PCIe-to-PCI
64 from the bridge itself. Obviously IOMMU design plays a major factor
67 Therefore, while for the most part an IOMMU may have device level
69 IOMMU API therefore supports a notion of IOMMU groups. A group is
91 $GROUP is the IOMMU group number of which the device is a member.
92 If the IOMMU group contains multiple devices, each will need to
96 group available, but not that particular device). TBD - interface
102 previously opened container file. If desired and if the IOMMU driver
103 supports sharing the IOMMU context between groups, multiple groups may
109 ioctls become available, enabling access to the VFIO IOMMU interfaces.
119 ------------------
126 This device is therefore in IOMMU group 26. It is on the
127 PCI bus, so the user will make use of vfio-pci to manage the
130 # modprobe vfio-pci
132 Binding this device to the vfio-pci driver creates the VFIO group
135 $ lspci -n -s 0000:06:0d.0
138 # echo 1102 0002 > /sys/bus/pci/drivers/vfio-pci/new_id
143 $ ls -l /sys/bus/pci/devices/0000:06:0d.0/iommu_group/devices
145 lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:00:1e.0 ->
147 lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.0 ->
149 lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.1 ->
152 This device is behind a PCIe-to-PCI bridge [4]_, therefore we also
156 bind this device to the vfio-pci driver (vfio-pci does not currently
166 The user now has full access to all the devices and the IOMMU for this
183 /* Doesn't support the IOMMU driver we want. */
197 /* Enable the IOMMU model we want */
200 /* Get additional IOMMU info */
243 -------------------------------------------------------------------------------
248 -------------------------------------------------------------------------------
250 VFIO bus drivers, such as vfio-pci, make use of only a few interfaces
263 vfio_init_group_dev() to pre-configure it before registration
264 and call vfio_uninit_group_dev() after completing the unregistration.
297 -------------------------------
301 1) On older systems (POWER7 with P5IOC2/IODA1) only one IOMMU group per
302 container is supported as an IOMMU table is allocated at boot time,
303 one table per IOMMU group, which is a Partitionable Endpoint (PE)
307 to remove this limitation and have multiple IOMMU groups per VFIO
310 2) The hardware supports so-called DMA windows - the PCI address range
323 error recovery. A PE may be a single or multi-function IOA (IO Adapter), a
324 function of a multi-function IOA, or multiple IOAs (possibly including
353 /* Enable the IOMMU model we want */
356 /* Get addition sPAPR IOMMU info */
390 * PE, and put child devices belonging to the same IOMMU group to the
404 /* Inject EEH error, which is expected to be caused by 32-bit
458 5) There is a v2 of the SPAPR TCE IOMMU. It deprecates VFIO_IOMMU_ENABLE/
461 (which are unsupported in the v1 IOMMU).
466 The v2 IOMMU splits accounting and pinning into separate operations:
468 - VFIO_IOMMU_SPAPR_REGISTER_MEMORY/VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY ioctls
475 - VFIO_IOMMU_MAP_DMA/VFIO_IOMMU_UNMAP_DMA ioctls only update the actual
476 IOMMU table and do not do pinning; instead these check that the userspace
477 address is from a pre-registered range.
487 be as big as the entire RAM, use a different page size, and is optional - guests
488 create those at runtime if the guest driver supports 64-bit DMA.
500 -------------------------------------------------------------------------------
507 possible for multi-function devices to have backdoors between
511 IOMMU driver to group multi-function PCI devices together
512 (iommu=group_mf). The latter we can't prevent, but the IOMMU should
513 still provide isolation. For PCI, SR-IOV Virtual Functions are the
517 .. [3] As always there are trade-offs to virtual machine device
519 future IOMMU technologies will reduce some, but maybe not all, of
520 these trade-offs.
523 from either function of the device are indistinguishable to the IOMMU::
525 -[0000:00]-+-1e.0-[06]--+-0d.0
526 \-0d.1