.. SPDX-License-Identifier: GPL-2.0
multi-processor systems.

- RSS: Receive Side Scaling
- RPS: Receive Packet Steering
- RFS: Receive Flow Steering
- Accelerated Receive Flow Steering
- XPS: Transmit Packet Steering
(multi-queue). On reception, a NIC can send different packets to different
generally known as “Receive-side Scaling” (RSS). The goal of RSS and
Multi-queue distribution can also be used for traffic prioritization, but
and/or transport layer headers -- for example, a 4-tuple hash over
IP addresses and TCP ports of a packet. The most common hardware
implementation of RSS uses a 128-entry indirection table where each entry
can be directed to their own receive queue. Such “n-tuple” filters can
be configured from ethtool (--config-ntuple).
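As a sketch of such a filter (the device name ``eth0``, the port, and the
queue number are illustrative assumptions, and the NIC driver must support
n-tuple filtering):

```shell
# Steer TCP/IPv4 traffic destined to port 80 on eth0 into receive
# queue 2 (device, port, and queue numbers are assumed for illustration).
ethtool --config-ntuple eth0 flow-type tcp4 dst-port 80 action 2

# Show the n-tuple filters currently installed on the device.
ethtool --show-ntuple eth0
```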
RSS Configuration
-----------------

The driver for a multi-queue capable NIC typically provides a kernel
module parameter for specifying the number of hardware queues to
commands (--show-rxfh-indir and --set-rxfh-indir). Modifying the
signaling path for PCIe devices uses message signaled interrupts (MSI-X),
an IRQ may be handled on any CPU. Because a non-negligible part of packet
affinity of each interrupt see Documentation/core-api/irq/irq-affinity.rst. Some systems
NIC maximum, if lower). The most efficient high-rate configuration
Per-cpu load can be observed using the mpstat utility, but note that on
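One possible inspection workflow, assuming a device named ``eth0`` (the
``-X`` write requires root):

```shell
# Show the current RSS indirection table (read-only).
ethtool -x eth0

# Spread flows evenly over the first 8 receive queues (needs root).
ethtool -X eth0 equal 8

# Watch per-CPU utilization once per second to verify the spread.
mpstat -P ALL 1
```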
Whereas RSS selects the queue and hence CPU that will run the hardware
3) it does not increase hardware device interrupt rate (although it does
   introduce inter-processor interrupts (IPIs))
flow hash over the packet’s addresses or ports (2-tuple or 4-tuple hash
associated flow of the packet. The hash is either provided by hardware
or will be computed in the stack. Capable hardware can pass the hash in
skb->hash and can be used elsewhere in the stack as a hash of the
Each receive hardware queue has an associated list of CPUs to which
RPS Configuration
-----------------

  /sys/class/net/<dev>/queues/rx-<n>/rps_cpus
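For example, to let CPUs 0-3 process packets from receive queue 0 (a
sketch; the device name ``eth0`` is an assumption, and the sysfs write
requires root):

```shell
# Build the hex CPU bitmap for CPUs 0-3: bits 0..3 set -> 0xf.
mask=$(printf '%x' $(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) )))
echo "$mask"    # prints: f

# Enable RPS on receive queue 0 of eth0 (uncomment, run as root):
# echo "$mask" > /sys/class/net/eth0/queues/rx-0/rps_cpus
```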
CPU. Documentation/core-api/irq/irq-affinity.rst explains how CPUs are assigned to
For a multi-queue system, if RSS is configured so that a hardware
and unnecessary. If there are fewer hardware queues than CPUs, then
Flow Limit
--------------

reordering. The trade-off to sending all packets from the same flow
net.core.netdev_max_backlog), the kernel starts a per-flow packet
Per-flow rate is calculated by hashing each packet into a hashtable
bucket and incrementing a per-bucket counter. The hash function is
be much larger than the number of CPUs, flow limit has finer-grained
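A minimal flow-limit setup might look like the following (the values are
illustrative; both knobs require root):

```shell
# Enable flow limit on CPUs 0-3 (hex bitmap, one bit per CPU).
echo f > /proc/sys/net/core/flow_limit_cpu_bitmap

# Enlarge the per-CPU flow hashtable for finer-grained accounting.
sysctl -w net.core.flow_limit_table_len=8192
```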
for each flow: rps_dev_flow_table is a table specific to each hardware

- The current CPU's queue head counter >= the recorded tail counter
- The current CPU is unset (>= nr_cpu_ids)
- The current CPU is offline
RFS Configuration
-----------------

The number of entries in the per-queue flow table is set through::

  /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt
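For instance, with a global table of 32768 entries split evenly across 16
receive queues (a sketch; the sizes and the device name ``eth0`` are
assumptions, and the writes require root):

```shell
# Global RFS socket flow table size (uncomment, run as root):
# echo 32768 > /proc/sys/net/core/rps_sock_flow_entries

# Per-queue share: 32768 entries / 16 queues.
per_queue=$(( 32768 / 16 ))
echo "$per_queue"    # prints: 2048

# for q in /sys/class/net/eth0/queues/rx-*; do
#     echo "$per_queue" > "$q/rps_flow_cnt"
# done
```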
For a multi-queue device, the rps_flow_cnt for each queue might be
Accelerated RFS
===============

Accelerated RFS is to RFS what RSS is to RPS: a hardware-accelerated load
Accelerated RFS should perform better than RFS since packets are sent
To enable accelerated RFS, the networking stack calls the
ndo_rx_flow_steer driver function to communicate the desired hardware
The hardware queue for a flow is derived from the CPU recorded in
rps_dev_flow_table. The stack consults a CPU to hardware queue map which
is maintained by the NIC driver. This is an auto-generated reverse map of
Accelerated RFS Configuration
-----------------------------

Accelerated RFS is only available if the kernel is compiled with
NIC supports hardware acceleration.
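A quick way to check both preconditions on a running system (``eth0`` is
an assumption, the kernel config file may live elsewhere on some
distributions, and enabling the feature requires root):

```shell
# Verify the kernel was built with accelerated RFS support.
grep RFS_ACCEL "/boot/config-$(uname -r)"

# Turn on n-tuple filtering so the driver can program flow steering.
ethtool -K eth0 ntuple on
```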
which transmit queue to use when transmitting a packet on a multi-queue
a mapping of CPU to hardware queue(s) or a mapping of receive queue(s)
to hardware transmit queue(s).
busy polling multi-threaded workloads where there are challenges in
the same queue-association that a given application is polling on. This
CPUs/receive-queues that may use that queue to transmit. The reverse
mapping, from CPUs to transmit queues or from receive-queues to transmit
for the socket connection for a match in the receive queue-to-transmit queue
running CPU as a key into the CPU-to-queue lookup table. If the
skb->ooo_okay is set for a packet in the flow. This flag indicates that
XPS Configuration
-----------------

how, XPS is configured at device init. The mapping of CPUs/receive-queues

  /sys/class/net/<dev>/queues/tx-<n>/xps_cpus

For selection based on receive-queues map::

  /sys/class/net/<dev>/queues/tx-<n>/xps_rxqs
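For example (a sketch; the device name ``eth0``, the queue numbers, and
the bitmaps are assumptions, and the writes require root):

```shell
# Let CPUs 0 and 1 transmit on queue 0 (hex bitmap 0x3).
echo 3 > /sys/class/net/eth0/queues/tx-0/xps_cpus

# Alternatively, map receive queue 0 to transmit queue 0 (bitmap 0x1).
echo 1 > /sys/class/net/eth0/queues/tx-0/xps_rxqs
```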
has no effect, since there is no choice in this case. In a multi-queue
explicitly configured mapping receive-queue(s) to transmit queue(s). If the
user configuration for receive-queue map does not apply, then the transmit
These are rate-limitation mechanisms implemented by HW, where currently
a max-rate attribute is supported by setting a Mbps value to::

  /sys/class/net/<dev>/queues/tx-<n>/tx_maxrate
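For example (the device name ``eth0`` and the rate are assumptions;
requires root and a driver that implements the rate-limit hook):

```shell
# Cap transmit queue 0 at 1000 Mbps.
echo 1000 > /sys/class/net/eth0/queues/tx-0/tx_maxrate

# A value of zero removes the limit.
echo 0 > /sys/class/net/eth0/queues/tx-0/tx_maxrate
```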
Accelerated RFS was introduced in 2.6.35. Original patches were

- Tom Herbert (therbert@google.com)
- Willem de Bruijn (willemb@google.com)