Lines Matching full:flow

19 - RFS: Receive Flow Steering
20 - Accelerated Receive Flow Steering
31 of logical flows. Packets for each flow are steered to a separate receive
131 flow hash over the packet’s addresses or ports (2-tuple or 4-tuple hash
133 associated flow of the packet. The hash is either provided by hardware
138 packet’s flow.
142 an index into the list is computed from the flow hash modulo the size
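The hash-modulo selection described in the matched lines above can be sketched as follows. Names here are hypothetical; in the kernel the real logic lives in get_rps_cpu() in net/core/dev.c, and the CPU list comes from the queue's rps_cpus bitmap.

```c
#include <stdint.h>

/* Hypothetical sketch of RPS CPU selection: the flow hash indexes
 * into the queue's list of target CPUs (configured via
 * /sys/class/net/<dev>/queues/rx-<n>/rps_cpus) modulo its length. */
static int rps_select_cpu(uint32_t flow_hash,
                          const int *rps_map, int map_len)
{
    if (map_len == 0)
        return -1;              /* RPS disabled for this queue */
    return rps_map[flow_hash % map_len];
}
```

Because the same flow hash always lands on the same map slot, all packets of one flow are steered to one CPU, which is what preserves in-order delivery per flow.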
183 RPS Flow Limit
187 reordering. The trade-off to sending all packets from the same flow
189 In the extreme case a single flow dominates traffic. Especially on
194 Flow Limit is an optional RPS feature that prioritizes small flows
199 net.core.netdev_max_backlog), the kernel starts a per-flow packet
200 count over the last 256 packets. If a flow exceeds a set ratio (by
205 the threshold, so flow limit does not sever connections outright:
212 Flow limit is compiled in by default (CONFIG_NET_FLOW_LIMIT), but not
220 Per-flow rate is calculated by hashing each packet into a hashtable
223 be much larger than the number of CPUs, flow limit has finer-grained
236 Flow limit is useful on systems with many concurrent connections,
242 the flow limit threshold (50%) + the flow history length (256).
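The flow-limit mechanism in the lines above (a per-flow count over the last 256 packets, dropping a flow's new packets once it exceeds the 50% threshold) can be sketched like this. The structure and function names are hypothetical; the kernel's version is the per-CPU sd_flow_limit logic guarded by CONFIG_NET_FLOW_LIMIT.

```c
#include <stdint.h>
#include <stdbool.h>

#define FLOW_HISTORY    256     /* history length from the doc */
#define FLOW_TABLE_SIZE 4096    /* hypothetical hashtable size */

/* Hypothetical sketch of flow-limit accounting: a ring of the last
 * 256 packets' flow buckets, plus a per-bucket count. */
struct flow_limit {
    uint16_t history[FLOW_HISTORY];      /* bucket of each recent packet */
    unsigned int count[FLOW_TABLE_SIZE]; /* packets per bucket in window */
    unsigned int pos;                    /* next slot in the ring */
};

static bool flow_limit_should_drop(struct flow_limit *fl, uint32_t hash)
{
    uint16_t bucket = (uint16_t)(hash % FLOW_TABLE_SIZE);
    uint16_t evicted = fl->history[fl->pos];

    /* Slide the 256-packet window: retire the oldest packet... */
    if (fl->count[evicted] > 0)
        fl->count[evicted]--;
    /* ...and account the new one. */
    fl->history[fl->pos] = bucket;
    fl->pos = (fl->pos + 1) % FLOW_HISTORY;
    fl->count[bucket]++;

    /* Drop when one flow holds more than half of the recent window. */
    return fl->count[bucket] > FLOW_HISTORY / 2;
}
```

Note the drop decision is a ratio over a sliding window, not a hard cutoff, which is why a large flow is throttled rather than severed: once its share of the window falls back under the threshold, its packets are accepted again.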
247 RFS: Receive Flow Steering
252 application locality. This is accomplished by Receive Flow Steering
260 but the hash is used as an index into a flow lookup table. This table maps
261 flows to the CPUs where those flows are being processed. The flow hash
263 The CPU recorded in each entry is the one which last processed the flow.
267 a single application thread handles flows with many different flow hashes.
269 rps_sock_flow_table is a global flow table that contains the *desired* CPU
270 for flows: the CPU that is currently processing the flow in userspace.
277 avoid this, RFS uses a second flow table to track outstanding packets
278 for each flow: rps_dev_flow_table is a table specific to each hardware
281 for this flow are enqueued for further kernel processing. Ideally, kernel
288 CPU's backlog when a packet in this flow was last enqueued. Each backlog
291 in rps_dev_flow[i] records the last element in flow i that has
292 been enqueued onto the currently designated CPU for flow i (of course,
299 are compared. If the desired CPU for the flow (found in the
311 CPU. These rules aim to ensure that a flow only moves to a new CPU when
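The switching rule described in the lines above, comparing the rps_sock_flow_table entry (the desired CPU) against the rps_dev_flow_table entry (the current CPU and its last_qtail counter), can be sketched as below. Names are hypothetical simplifications of the kernel's get_rps_cpu() logic.

```c
#include <stdint.h>
#include <stdbool.h>

#define CPU_NONE (-1)

/* Hypothetical mirror of one rps_dev_flow_table slot. */
struct dev_flow_entry {
    int cpu;                 /* CPU last used for this flow's kernel work */
    unsigned int last_qtail; /* backlog counter when flow last enqueued */
};

/* input_queue_head[cpu]: packets dequeued so far from that CPU's
 * backlog.  desired_cpu comes from the global rps_sock_flow_table. */
static int rfs_select_cpu(struct dev_flow_entry *e, int desired_cpu,
                          const unsigned int *input_queue_head,
                          const bool *cpu_online)
{
    int cur = e->cpu;

    /* Move to the desired CPU only when doing so cannot reorder the
     * flow: the current CPU is unset or offline, or every packet this
     * flow enqueued there has already been drained. */
    if (cur == CPU_NONE || !cpu_online[cur] ||
        (int)(input_queue_head[cur] - e->last_qtail) >= 0)
        cur = desired_cpu;

    e->cpu = cur;
    return cur;
}
```

The signed cast on the counter difference is the usual wrap-safe comparison: the move is allowed only once the backlog's head has advanced past the flow's last enqueue point.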
322 configured. The number of entries in the global flow table is set through::
326 The number of entries in the per-queue flow table is set through::
336 suggested flow count depends on the expected number of active connections
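The two knobs referenced above can be set as in this sketch. The paths are the standard RPS/RFS sysctl and sysfs entries; the interface name, queue index, and the 32768 sizing (for a moderately loaded single-queue server) are illustrative choices, not recommendations from this listing.

```shell
# Illustrative RFS sizing for a single-queue device ("eth0"/rx-0 are
# placeholders): global table and per-queue table sized identically.
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 32768 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
```

On a multi-queue device the per-queue rps_flow_cnt is typically the global entry count divided by the number of receive queues.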
355 the application thread consuming the packets of each flow is running.
363 queue for packets matching a particular flow. The network stack
364 automatically calls this function every time a flow entry in
368 The hardware queue for a flow is derived from the CPU recorded in
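The queue derivation mentioned in the line above can be sketched as a CPU-to-queue translation. This is a hypothetical simplification: in the kernel the map is a cpu_rmap built from IRQ affinities, and ndo_rx_flow_steer is the real per-driver hook that programs the NIC.

```c
/* Hypothetical sketch of the accelerated-RFS lookup: translate the
 * CPU recorded in rps_dev_flow_table into the hardware RX queue
 * whose interrupt is affine to that CPU. */
static int arfs_queue_for_cpu(int flow_cpu,
                              const int *cpu_to_rxqueue, int ncpus)
{
    if (flow_cpu < 0 || flow_cpu >= ncpus)
        return -1;          /* unknown CPU: leave the flow unsteered */
    return cpu_to_rxqueue[flow_cpu];
}
```

The effect is that the NIC itself delivers the flow's packets to a queue serviced near the consuming thread, removing the software steering hop.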
440 transmitting the first packet in a flow, the function get_xps_queue() is
446 queues match, one is selected by using the flow hash to compute an index
451 The queue chosen for transmitting a particular flow is saved in the
452 corresponding socket structure for the flow (e.g. a TCP connection).
453 This transmit queue is used for subsequent packets sent on the flow to
455 of calling get_xps_queue() over all packets in the flow. To avoid
456 out of order (ooo) packets, the queue for a flow can subsequently only be changed if
457 skb->ooo_okay is set for a packet in the flow. This flag indicates that
458 there are no outstanding packets in the flow, so the transmit queue can
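The transmit-side caching described in the lines above, computing a queue for the first packet, storing it in the socket, and rehashing only when skb->ooo_okay indicates no outstanding packets, can be sketched as follows (names are hypothetical stand-ins for the kernel's socket field and get_xps_queue()):

```c
#include <stdint.h>
#include <stdbool.h>

#define QUEUE_UNSET (-1)

/* Hypothetical stand-in for the socket's cached transmit queue. */
struct sock_sketch {
    int tx_queue;
};

/* Several queues may match the sending CPU; pick one by flow hash. */
static int xps_hash_queue(uint32_t flow_hash,
                          const int *queue_set, int nqueues)
{
    return queue_set[flow_hash % nqueues];
}

/* Recompute the queue only for the first packet of a flow, or when
 * ooo_okay guarantees no packets are outstanding; otherwise reuse
 * the cached queue so the flow cannot be reordered across queues. */
static int xps_select_queue(struct sock_sketch *sk, bool ooo_okay,
                            uint32_t flow_hash,
                            const int *queue_set, int nqueues)
{
    if (sk->tx_queue == QUEUE_UNSET || ooo_okay)
        sk->tx_queue = xps_hash_queue(flow_hash, queue_set, nqueues);
    return sk->tx_queue;
}
```

Pinning the queue until ooo_okay is set is what prevents out-of-order transmission when the sender migrates between CPUs mid-flow.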