
.. SPDX-License-Identifier: GPL-2.0

=====================================
Scaling in the Linux Networking Stack
=====================================

This document describes a set of complementary techniques in the Linux
networking stack to increase parallelism and improve performance for
multi-processor systems.

The following technologies are described:

- RSS: Receive Side Scaling
- RPS: Receive Packet Steering
- RFS: Receive Flow Steering
- Accelerated Receive Flow Steering
- XPS: Transmit Packet Steering

RSS: Receive Side Scaling
=========================

Contemporary NICs support multiple receive and transmit descriptor queues
(multi-queue). On reception, a NIC can send different packets to different
queues to distribute processing among CPUs. The NIC distributes packets by
applying a filter to each packet that assigns it to one of a small number
of logical flows. Packets for each flow are steered to a separate receive
queue, which in turn can be processed by separate CPUs. This mechanism is
generally known as "Receive-side Scaling" (RSS). The goal of RSS and
the other scaling techniques is to increase performance uniformly.
Multi-queue distribution can also be used for traffic prioritization, but
that is beyond the scope of these techniques.

The filter used in RSS is typically a hash function over the network
and/or transport layer headers -- for example, a 4-tuple hash over the
IP addresses and TCP ports of a packet. The most common hardware
implementation of RSS uses a 128-entry indirection table where each entry
stores a queue number. The receive queue for a packet is determined
by masking out the low order seven bits of the computed hash for the
packet (usually a Toeplitz hash), taking this number as a key into the
indirection table and reading the corresponding value.
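
The lookup itself is simple. As a minimal sketch in C (assuming the
common 128-entry table described above; the names are illustrative, not
taken from any particular driver)::

  #include <stdint.h>

  #define RSS_INDIR_SIZE 128  /* common table size; hardware varies */

  /* Map a computed flow hash (e.g. Toeplitz) to a receive queue by
   * masking out the low order seven bits and indexing the table. */
  static inline uint8_t rss_queue_for_hash(uint32_t hash,
                  const uint8_t indir[RSS_INDIR_SIZE])
  {
          return indir[hash & (RSS_INDIR_SIZE - 1)];
  }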

Some advanced NICs allow steering packets to queues based on
programmable filters. For example, webserver-bound TCP port 80 packets
can be directed to their own receive queue. Such "n-tuple" filters can
be configured from ethtool (--config-ntuple).

RSS Configuration
-----------------

The driver for a multi-queue capable NIC typically provides a kernel
module parameter for specifying the number of hardware queues to
configure. The indirection table of an RSS device, which resolves a
queue by masked hash, is usually programmed by the driver at
initialization; it can be retrieved and modified at runtime using ethtool
commands (--show-rxfh-indir and --set-rxfh-indir). Modifying the
indirection table can give different queues different relative weights.

Each receive queue has a separate IRQ associated with it. The
signaling path for PCIe devices uses message signaled interrupts (MSI-X),
which can route each interrupt to a particular CPU. By default,
an IRQ may be handled on any CPU. Because a non-negligible part of packet
processing takes place in receive interrupt handling, it is advantageous
to spread receive interrupts between CPUs. To manually adjust the IRQ
affinity of each interrupt, see
Documentation/core-api/irq/irq-affinity.rst. Some systems
will be running irqbalance, a daemon that dynamically optimizes IRQ
assignments and as a result may override any manual settings.

For low latency networking, the optimal setting is to allocate as many
queues as there are CPUs in the system (or the
NIC maximum, if lower). The most efficient high-rate configuration
is likely the one with the smallest number of receive queues where no
receive queue overflows due to a saturated CPU.
Per-cpu load can be observed using the mpstat utility, but note that on
processors with hyperthreading (HT), each hyperthread is represented as
a separate CPU.

RPS: Receive Packet Steering
============================

Receive Packet Steering (RPS) is logically a software implementation of
RSS. Being in software, it is necessarily called later in the datapath:
whereas RSS selects the queue, and hence the CPU that will run the
hardware interrupt handler, RPS selects the CPU to perform protocol
processing above the interrupt handler. This is accomplished by placing
the packet on the desired CPU's backlog queue and waking up that CPU for
processing. RPS has some advantages over RSS: it can be used with any
NIC, software filters can easily be added to hash over new protocols,
and it does not increase the hardware interrupt rate (although it does
introduce inter-processor interrupts (IPIs)).

RPS is called during the bottom half of the receive interrupt handler, when
a driver sends a packet up the network stack with netif_rx() or
netif_receive_skb(). These call the get_rps_cpu() function, which
selects the queue that should process a packet.

The first step in determining the target CPU for RPS is to calculate a
flow hash over the packet's addresses or ports (2-tuple or 4-tuple hash,
depending on the protocol). This serves as a consistent hash of the
associated flow of the packet. The hash is either provided by hardware
or computed in the stack: capable hardware can pass the hash in
the receive descriptor for the packet; this would usually be the same
hash used for RSS (e.g. the computed Toeplitz hash). The hash is saved in
skb->hash and can be used elsewhere in the stack as a hash of the
packet's flow.

Each receive hardware queue has an associated list of CPUs to which
RPS may enqueue packets for processing. For each received packet,
an index into the list is computed from the flow hash modulo the size
of the list. The indexed CPU is the target for processing the packet,
and the packet is queued to the tail of that CPU's backlog queue. At
the end of the bottom half routine, IPIs are sent to any CPUs for which
packets have been queued to their backlog queue, waking backlog
processing on those CPUs.
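
Schematically, the selection reduces to a modulo over the configured
CPU list. The following is a rough sketch, not the kernel's actual
get_rps_cpu() code::

  #include <stdint.h>
  #include <stddef.h>

  /* Pick the RPS target CPU for a packet: flow hash modulo the size
   * of the CPU list configured for the receive queue. */
  static inline int rps_target_cpu(uint32_t flow_hash,
                  const int *cpus, size_t ncpus)
  {
          if (ncpus == 0)
                  return -1;  /* RPS disabled for this queue */
          return cpus[flow_hash % ncpus];
  }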

RPS Configuration
-----------------

The list of CPUs to which RPS may forward traffic can be configured for
each receive queue using a sysfs file entry::

  /sys/class/net/<dev>/queues/rx-<n>/rps_cpus

This file implements a bitmap of CPUs. RPS is disabled when it is zero
(the default), in which case packets are processed on the interrupting
CPU. Documentation/core-api/irq/irq-affinity.rst explains how CPUs are
assigned to the bitmap.
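
As a hypothetical example (the device name and CPU set are invented), a
small userspace program could enable CPUs 0-3 for receive queue 0 by
writing the bitmap value f::

  #include <stdio.h>

  int main(void)
  {
          /* Hypothetical path: rx queue 0 of eth0; 0xf = CPUs 0-3. */
          FILE *f = fopen("/sys/class/net/eth0/queues/rx-0/rps_cpus", "w");

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          fprintf(f, "f\n");
          return fclose(f) ? 1 : 0;
  }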

For a multi-queue system, if RSS is configured so that a hardware
receive queue is mapped to each CPU, then RPS is probably redundant
and unnecessary. If there are fewer hardware queues than CPUs, then
RPS might be beneficial if the rps_cpus for each queue are the CPUs that
share the same memory domain as the interrupting CPU for that queue.

RPS Flow Limit
--------------

RPS scales kernel receive processing across CPUs without introducing
reordering. The trade-off to sending all packets from the same flow
to the same CPU is CPU load imbalance if flows vary in packet rate.

Flow limit is an optional RPS feature that prioritizes small flows
during CPU contention by dropping packets from large flows slightly
ahead of those from small flows. It is active only when an RPS or RFS
destination CPU approaches saturation. Once a CPU's input packet
queue exceeds half the maximum queue length (as set by sysctl
net.core.netdev_max_backlog), the kernel starts a per-flow packet
count over the last 256 packets. If a flow exceeds a set ratio (by
default, half) of these packets when a new packet arrives, then the
new packet is dropped. Packets from other flows are still only
dropped once the input packet queue reaches netdev_max_backlog.
No packets are dropped when the input packet queue length is below
the threshold, so flow limit does not sever connections outright:
even large flows maintain connectivity.
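
Paraphrased in C under the defaults described above (half the maximum
backlog as the trigger, a 256-packet history, a one-half ratio), the
drop decision looks roughly like the following sketch, which is not the
kernel's implementation::

  #include <stdbool.h>

  #define FLOW_HISTORY 256  /* recent packets counted per CPU */

  /* qlen is the CPU's input queue length; flow_count is how many of
   * the last FLOW_HISTORY packets belonged to this packet's flow. */
  static bool flow_limit_drop(unsigned int qlen, unsigned int max_backlog,
                  unsigned int flow_count)
  {
          if (qlen < max_backlog / 2)
                  return false;  /* CPU not saturated: never drop */
          return flow_count > FLOW_HISTORY / 2;  /* flow exceeds ratio */
  }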

Per-flow rate is calculated by hashing each packet into a hashtable
bucket and incrementing a per-bucket counter. The hash function is
the same one that selects a CPU in RPS, but as the number of buckets can
be much larger than the number of CPUs, flow limit has finer-grained
identification of large flows and fewer false positives. The default
table has 4096 buckets; this can be modified through sysctl
net.core.flow_limit_table_len, which is only consulted when a new table
is allocated.
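
The bookkeeping behind that estimate can be pictured as follows (a
sketch assuming the default 4096-bucket table; the kernel's actual
accounting differs in detail)::

  #define FLOW_BUCKETS 4096  /* default net.core.flow_limit_table_len */

  /* Credit one packet to its flow's bucket; the index reuses the same
   * flow hash that RPS uses, so a flow's packets share a bucket. */
  static inline void flow_limit_count(unsigned int counters[FLOW_BUCKETS],
                  unsigned int flow_hash)
  {
          counters[flow_hash & (FLOW_BUCKETS - 1)]++;
  }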

The feature depends on the input packet queue length exceeding
the flow limit threshold (50%) plus the flow history length (256).
Setting net.core.netdev_max_backlog to either 1000 or 10000
performed well in experiments.

RFS: Receive Flow Steering
==========================

While RPS steers packets solely based on hash, and thus generally
provides good load distribution, it does not take into account
application locality. Receive Flow Steering (RFS) addresses this: its
goal is to increase datacache hit rate by steering kernel processing of
packets to the CPU where the application thread
consuming the packet is running. RFS relies on the same RPS mechanisms
to enqueue packets onto the backlog of another CPU and to wake up that
CPU.

RFS tracks flows with two tables. The global rps_sock_flow_table maps
each flow to the *desired* CPU: the one where the consuming application
was last observed running. The per-queue rps_dev_flow_table records, for
each flow, the *current* CPU and a counter: the length of that
CPU's backlog when a packet in this flow was last enqueued. Each backlog
queue has a head counter that is incremented on dequeue, and a tail
counter computed as head counter plus queue length.

To avoid out of order packets, when selecting the
CPU for packet processing (from get_rps_cpu()) the rps_sock_flow table
and the rps_dev_flow table of the queue that the packet was received on
are compared. If the desired CPU for the flow (found in the rps_sock_flow
table) matches the current CPU (found in the rps_dev_flow
table), the packet is enqueued onto that CPU's backlog. If they differ,
the current CPU is updated to match the desired CPU if one of the
following is true:

- The current CPU's queue head counter >= the recorded tail counter
  value in rps_dev_flow[i]
- The current CPU is unset (>= nr_cpu_ids)
- The current CPU is offline

After this check, the packet is sent to the (possibly updated) current
CPU. These rules ensure that a flow only moves to a new CPU when there
are no packets outstanding on the old CPU, since outstanding packets
could otherwise arrive later than those about to be processed on the
new CPU.
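
The check can be sketched as follows (field and helper names are
illustrative stand-ins, not the kernel's; the real logic lives in
get_rps_cpu())::

  /* Assumed helpers, stubbed for illustration: */
  extern int cpu_is_online(unsigned int cpu);
  extern unsigned int backlog_head_counter(unsigned int cpu);

  struct dev_flow {
          unsigned int cpu;         /* current CPU for this flow */
          unsigned int last_qtail;  /* backlog tail at last enqueue */
  };

  static unsigned int rfs_select_cpu(struct dev_flow *flow,
                  unsigned int desired, unsigned int nr_cpu_ids)
  {
          unsigned int cur = flow->cpu;

          if (cur != desired &&
              (cur >= nr_cpu_ids ||                            /* unset */
               !cpu_is_online(cur) ||                          /* offline */
               backlog_head_counter(cur) >= flow->last_qtail)) /* drained */
                  flow->cpu = cur = desired;

          return cur;  /* enqueue onto this CPU's backlog */
  }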

RFS Configuration
-----------------

The number of entries in the per-queue flow table is set through::

  /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt

For a multi-queue device, the rps_flow_cnt for each queue might be
configured as rps_sock_flow_entries / N, where N is the number of
configured receive queues. For instance, if rps_sock_flow_entries is set
to 32768 and there are 16 receive queues, rps_flow_cnt for each queue
might be configured as 2048.

Accelerated RFS
===============

Accelerated RFS is to RFS what RSS is to RPS: a hardware-accelerated load
balancing mechanism that uses soft state to steer flows based on where
the application thread consuming the packets of each flow is running.
The hardware queue for a flow is derived from the CPU recorded in
rps_dev_flow_table; the stack consults a CPU-to-hardware-queue map which
is maintained by the NIC driver. This is an auto-generated reverse map of
the IRQ affinity table shown by /proc/interrupts. Drivers can use
functions in the cpu_rmap ("CPU affinity reverse map") kernel library
to populate the map.
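
For driver authors, populating the map uses the cpu_rmap library. The
following sketch shows the shape of that code: alloc_irq_cpu_rmap(),
irq_cpu_rmap_add() and free_irq_cpu_rmap() are the real library entry
points, while the function and parameters around them are illustrative::

  #include <linux/cpu_rmap.h>
  #include <linux/netdevice.h>

  /* Build the reverse map for accelerated RFS by registering each
   * receive queue's IRQ (illustrative driver initialization code). */
  static int example_setup_rx_cpu_rmap(struct net_device *dev,
                  int nqueues, const int *irqs)
  {
          int i, err;

          dev->rx_cpu_rmap = alloc_irq_cpu_rmap(nqueues);
          if (!dev->rx_cpu_rmap)
                  return -ENOMEM;

          for (i = 0; i < nqueues; i++) {
                  err = irq_cpu_rmap_add(dev->rx_cpu_rmap, irqs[i]);
                  if (err) {
                          free_irq_cpu_rmap(dev->rx_cpu_rmap);
                          dev->rx_cpu_rmap = NULL;
                          return err;
                  }
          }
          return 0;
  }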

Accelerated RFS Configuration
-----------------------------

Accelerated RFS is only available if the kernel is compiled with
CONFIG_RFS_ACCEL and support is provided by the NIC device and driver.
It also requires that ntuple filtering is enabled via ethtool.

XPS: Transmit Packet Steering
=============================

Transmit Packet Steering is a mechanism for intelligently selecting
which transmit queue to use when transmitting a packet on a multi-queue
device. This is accomplished by recording one of two kinds of maps: a
mapping of CPUs to hardware transmit queue(s), or a mapping of receive
queue(s) to hardware transmit queue(s).

The receive queues map is used to pick the transmit queue based on the
receive queue(s) map configuration set by the administrator, so that
packets are sent on the same queue associations for transmit and
receive. This is useful for
busy polling multi-threaded workloads where there are challenges in
associating a given CPU to a given application thread. Transmit
completion work is then locked into
the same queue-association that a given application is polling on. This
avoids the overhead of triggering an interrupt on another CPU.

XPS is configured per transmit queue by setting a bitmap of
CPUs/receive-queues that may use that queue to transmit. The reverse
mapping, from CPUs to transmit queues or from receive-queues to transmit
queues, is computed and maintained for each network device. When
transmitting the first packet in a flow, the function get_xps_queue() is
called. It uses the ID of the receive queue
for the socket connection for a match in the receive queue-to-transmit queue
lookup table. Alternatively, it can use the ID of the
running CPU as a key into the CPU-to-queue lookup table. If the
ID matches a single queue, that queue is used for transmission; if
multiple queues match, one is selected by using the flow hash to compute
an index into the set. When selecting the transmit queue based on receive
queue(s), the transmit queue is not validated against the running CPU, as
that would require an expensive lookup in the datapath.

The queue chosen for transmitting a particular flow is saved in the
corresponding socket structure and reused for subsequent packets in the
flow to prevent out of order (ooo) packets, unless
skb->ooo_okay is set for a packet in the flow. This flag indicates that
there are no outstanding packets in the flow, so the transmit queue can
change without the risk of generating out of order packets.
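
Schematically, the final pick from a matching set can be pictured as
follows (a sketch, not the kernel's get_xps_queue() itself)::

  #include <stdint.h>
  #include <stddef.h>

  /* Pick a transmit queue from the set that matched the key (running
   * CPU or receive queue). A single match is used directly; among
   * several, the flow hash selects one. */
  static inline int xps_pick_queue(const int *queues, size_t nqueues,
                  uint32_t flow_hash)
  {
          if (nqueues == 0)
                  return -1;  /* no match: fall back to other policy */
          if (nqueues == 1)
                  return queues[0];
          return queues[flow_hash % nqueues];
  }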

XPS Configuration
-----------------

XPS is only available if the kconfig symbol CONFIG_XPS is enabled. If
compiled in, it is driver dependent whether, and
how, XPS is configured at device init. The mapping of CPUs/receive-queues
to transmit queue can be inspected and configured using sysfs.

For selection based on the CPUs map::

  /sys/class/net/<dev>/queues/tx-<n>/xps_cpus

For selection based on the receive-queues map::

  /sys/class/net/<dev>/queues/tx-<n>/xps_rxqs

For a network device with a single transmit queue, XPS configuration
has no effect, since there is no choice in this case. In a multi-queue
system, XPS is preferably configured so that each CPU maps onto one
queue.

For transmit queue selection based on receive queue(s), XPS has to be
explicitly configured, mapping receive-queue(s) to transmit queue(s). If the
user configuration for the receive-queue map does not apply, the transmit
queue is selected based on the CPUs map.

Per TX Queue rate limitation
============================

This is a rate-limitation mechanism implemented by hardware. Currently
only a max-rate attribute is supported, set by writing a Mbps value to::

  /sys/class/net/<dev>/queues/tx-<n>/tx_maxrate

A value of zero means disabled, and this is the default.
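
As a hypothetical usage example (device, queue and rate are invented),
capping transmit queue 0 of eth0 at 1000 Mbps amounts to a single
write::

  #include <stdio.h>

  int main(void)
  {
          /* Hypothetical path and value: 1000 Mbps cap on tx queue 0. */
          FILE *f = fopen("/sys/class/net/eth0/queues/tx-0/tx_maxrate", "w");

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          fprintf(f, "1000\n");
          return fclose(f) ? 1 : 0;
  }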

Authors
=======

- Tom Herbert (therbert@google.com)
- Willem de Bruijn (willemb@google.com)