Lines Matching full:reordering
418 nr_segs = max_t(u32, nr_segs, tp->reordering + 1); in tcp_sndbuf_expand()
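
The tcp_sndbuf_expand() hit above uses the reordering estimate to lower-bound how many segments the send buffer must be able to hold. Below is a minimal userspace sketch of just that clamp, assuming only the max_t(u32, nr_segs, tp->reordering + 1) expression shown in the listing; the function and variable names here are invented for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Keep room for at least one congestion window's worth of segments, and
     * never fewer than reordering + 1 segments, so data displaced by
     * reordering can still sit in the buffer while later packets are ACKed.
     */
    static uint32_t sndbuf_segments(uint32_t cwnd_segs, uint32_t reordering)
    {
        uint32_t nr_segs = cwnd_segs;

        if (nr_segs < reordering + 1)
            nr_segs = reordering + 1;
        return nr_segs;
    }

    int main(void)
    {
        /* cwnd of 2 segments but a reordering degree of 10 -> 11 segments */
        printf("%u\n", (unsigned)sndbuf_segments(2, 10));
        return 0;
    }
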
1005 * DSACKs that may have been due to reordering causing RACK to trigger in tcp_dsack_seen()
1007 * without having seen reordering, or that match TLP probes (TLP in tcp_dsack_seen()
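
The two tcp_dsack_seen() fragments describe when a DSACK is allowed to influence RACK: only if reordering has already been observed on the connection and the DSACK does not correspond to a timer-driven TLP probe. A hedged sketch of that gate follows; the struct and parameter names are invented stand-ins for the kernel's tp->reord_seen, tp->rack.dsack_seen and TLP-flag state.

    #include <stdbool.h>
    #include <stdio.h>

    struct conn_state {
        bool reord_seen;       /* reordering already observed on this flow */
        bool rack_dsack_seen;  /* lets RACK widen its reordering window    */
    };

    /* Count a DSACK toward RACK's reordering window only when reordering has
     * been seen before and the DSACK does not match a TLP probe; otherwise it
     * is ignored by RACK as likely plain duplication.
     */
    static void account_dsack(struct conn_state *c, bool dsack_matches_tlp)
    {
        if (c->reord_seen && !dsack_matches_tlp)
            c->rack_dsack_seen = true;
    }

    int main(void)
    {
        struct conn_state c = { .reord_seen = true };

        account_dsack(&c, false);
        printf("rack_dsack_seen=%d\n", c.rack_dsack_seen);
        return 0;
    }
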
1020 /* It's reordering when a higher sequence was delivered (i.e. SACKed) before
1021 * some lower, never-retransmitted sequence ("low_seq"). The maximum reordering
1022 * distance is approximated in full-MSS packets ("reordering").
1036 if ((metric > tp->reordering * mss) && mss) { in tcp_check_sack_reordering()
1040 tp->reordering, in tcp_check_sack_reordering()
1045 tp->reordering = min_t(u32, (metric + mss - 1) / mss, in tcp_check_sack_reordering()
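
The tcp_check_sack_reordering() hits compute a reordering distance in bytes (the highest SACKed sequence minus the low, never-retransmitted sequence) and, when it exceeds the current estimate, convert it to packets with a round-up division and cap it. A userspace sketch of just that arithmetic; the cap parameter stands in for the tcp_max_reordering sysctl and all names are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Update the reordering estimate (in packets) from a byte-distance metric.
     * fack    - highest SACKed sequence
     * low_seq - lower, never-retransmitted sequence delivered after it
     */
    static uint32_t update_reordering(uint32_t cur, uint32_t fack, uint32_t low_seq,
                                      uint32_t mss, uint32_t cap)
    {
        uint32_t metric = fack - low_seq;              /* displacement in bytes */

        if (mss && metric > cur * mss) {
            uint32_t pkts = (metric + mss - 1) / mss;  /* round up to packets   */

            cur = pkts < cap ? pkts : cap;
        }
        return cur;
    }

    int main(void)
    {
        /* 3 -> 5: a 7000-byte displacement at MSS 1460 rounds up to 5 packets */
        printf("%u\n", (unsigned)update_reordering(3, 17000, 10000, 1460, 300));
        return 0;
    }
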
1143 * Reordering detection.
1145 * The reordering metric is the maximal distance by which a packet can be displaced
1149 * ever retransmitted -> reordering. Alas, we cannot use it
1152 * for a retransmitted and already SACKed segment -> reordering.
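
The comment block above lists the two SACK-based heuristics for detecting reordering and notes that neither can be used in the Loss state, where retransmissions are not accounted for reliably. A hedged sketch of that classification; all names are invented and only the logic described in the comment is encoded.

    #include <stdbool.h>

    enum reord_evidence { NO_EVIDENCE, REORDERING };

    static enum reord_evidence classify_sack(bool fills_old_hole,
                                             bool was_retransmitted,
                                             bool is_dsack,
                                             bool already_sacked,
                                             bool in_loss_state)
    {
        /* In Loss state retransmits cannot be accounted for accurately, so
         * neither heuristic is used.
         */
        if (in_loss_state)
            return NO_EVIDENCE;

        /* 1. A SACK fills an old hole and the segment was never
         *    retransmitted: the original simply arrived late.
         */
        if (fills_old_hole && !was_retransmitted)
            return REORDERING;

        /* 2. A D-SACK arrives for a segment that was retransmitted and had
         *    already been SACKed: the first copy was late, not lost.
         */
        if (is_dsack && was_retransmitted && already_sacked)
            return REORDERING;

        return NO_EVIDENCE;
    }

    int main(void)
    {
        return classify_sack(true, false, false, false, false) == REORDERING ? 0 : 1;
    }
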
1199 * fragmentation and packet reordering past skb's retransmission. To consider
1375 * which was in a hole. It is reordering. in tcp_sacktag_one()
1853 /* Don't count old SACKs caused by ACK reordering */ in tcp_sacktag_write_queue()
2012 * on the assumption of no reordering, interpret this as reordering.
2022 tp->reordering = min_t(u32, tp->packets_out + addend, in tcp_check_reno_reordering()
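
tcp_check_reno_reordering() handles the SACK-less (NewReno) case: when more duplicate ACKs arrive than the in-flight count can explain under a no-reordering assumption, the estimate is raised to packets_out plus the newly acknowledged amount, capped by a maximum. A small sketch of that update; the cap again stands in for the tcp_max_reordering sysctl and the emulated_sacked counter is an illustrative abstraction.

    #include <stdint.h>
    #include <stdio.h>

    /* If the dupack-emulated SACK count exceeds what is actually in flight
     * (impossible without reordering, short of a receiver bug), raise the
     * reordering estimate to packets_out + addend, bounded by cap.
     */
    static uint32_t reno_check_reordering(uint32_t reordering, uint32_t emulated_sacked,
                                          uint32_t packets_out, uint32_t addend,
                                          uint32_t cap)
    {
        if (emulated_sacked <= packets_out)
            return reordering;                 /* still explainable without reordering */

        reordering = packets_out + addend;
        return reordering < cap ? reordering : cap;
    }

    int main(void)
    {
        printf("%u\n", (unsigned)reno_check_reordering(3, 13, 12, 1, 300)); /* -> 13 */
        return 0;
    }
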
2150 * suggests that the degree of reordering is over-estimated. in tcp_enter_loss()
2154 tp->reordering = min_t(unsigned int, tp->reordering, in tcp_enter_loss()
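
The tcp_enter_loss() hits show the opposite adjustment: an RTO that fires while many segments are already SACKed out suggests the reordering degree was over-estimated, so the estimate is shrunk back to the configured baseline. A sketch of that clamp; the baseline parameter stands in for the tcp_reordering sysctl, and the kernel's additional check on the congestion-avoidance state is omitted here.

    #include <stdint.h>

    /* On RTO, if substantial SACKed data had accumulated without recovery
     * being entered, pull the reordering estimate back to the baseline.
     */
    static uint32_t shrink_reordering_on_rto(uint32_t reordering, uint32_t sacked_out,
                                             uint32_t baseline)
    {
        if (sacked_out >= baseline && reordering > baseline)
            return baseline;
        return reordering;
    }

    int main(void)
    {
        return shrink_reordering_on_rto(40, 5, 3) == 3 ? 0 : 1;
    }
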
2197 * With reordering, holes may still be in flight, so RFC3517 recovery
2264 * (reordering). This is implemented in tcp_mark_head_lost and
2293 * fast retransmit (reordering) and underestimated RTO, analyzing
2315 if (!tcp_is_rack(sk) && tcp_dupack_heuristics(tp) > tp->reordering) in tcp_time_to_recover()
2323 * has at least tp->reordering SACKed segments above it; "packets" refers to
2377 int sacked_upto = tp->sacked_out - tp->reordering; in tcp_update_scoreboard()
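
The hits from tcp_time_to_recover() and tcp_update_scoreboard() above are the classic (non-RACK) RFC 3517-style machinery: recovery is entered once the duplicate-ACK/SACK heuristic exceeds the reordering degree, and a segment is presumed lost only when at least that many SACKed segments sit above it. A sketch of both threshold tests; the names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Classic dupack-counting trigger: enter recovery once more segments have
     * been dup-acked/SACKed than the path is believed to reorder by.
     */
    static bool time_to_recover(uint32_t dupack_heuristic, uint32_t reordering)
    {
        return dupack_heuristic > reordering;
    }

    /* Scoreboard marking: only the head of the queue, up to
     * sacked_out - reordering segments, is marked lost, i.e. a segment needs
     * at least `reordering` SACKed segments above it.
     */
    static uint32_t segments_to_mark_lost(uint32_t sacked_out, uint32_t reordering)
    {
        return sacked_out > reordering ? sacked_out - reordering : 0;
    }

    int main(void)
    {
        printf("%d %u\n", time_to_recover(4, 3),
               (unsigned)segments_to_mark_lost(7, 3));
        return 0;
    }
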
2856 tp->snd_una + tp->reordering * tp->mss_cache); in tcp_force_fast_retransmit()
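
tcp_force_fast_retransmit() expresses the same threshold in sequence space: retransmission is forced once the highest SACKed sequence lies more than reordering * MSS bytes beyond snd_una. A sketch including the usual wrap-safe sequence comparison; the seq_after() helper is re-implemented here for the example.

    #include <stdbool.h>
    #include <stdint.h>

    /* Wrap-safe "a is after b" for 32-bit TCP sequence numbers. */
    static bool seq_after(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) > 0;
    }

    /* Force a fast retransmit once SACKed data reaches further than the
     * reordering degree (converted to bytes) beyond the unacknowledged edge.
     */
    static bool force_fast_retransmit(uint32_t highest_sack_seq, uint32_t snd_una,
                                      uint32_t reordering, uint32_t mss)
    {
        return seq_after(highest_sack_seq, snd_una + reordering * mss);
    }

    int main(void)
    {
        /* 3 * 1460 = 4380 bytes of headroom; 5000 bytes of SACKed data is past it. */
        return force_fast_retransmit(15000, 10000, 3, 1460) ? 0 : 1;
    }
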
2867 * packet, rather than with a retransmit. Check reordering. in tcp_try_undo_partial()
2871 /* We are getting evidence that the reordering degree is higher in tcp_try_undo_partial()
2992 * starts a new recovery (e.g. reordering then loss); in tcp_fastretrans_alert()
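
The tcp_try_undo_partial() fragments cover partial ACKs during recovery: when timestamps or D-SACKs show the hole was filled by a delayed original packet rather than by a retransmission, the event is treated as further reordering evidence, and recovery can be undone once no retransmissions remain in flight. A hedged sketch of that decision; the two booleans abstract the kernel's delayed-packet and retransmits-outstanding state, and the enum values are invented.

    #include <stdbool.h>

    enum partial_ack_action {
        KEEP_RECOVERING,     /* hole was filled by our retransmit           */
        RAISE_REORDERING,    /* delayed original: re-estimate, stay in recovery */
        UNDO_RECOVERY,       /* delayed original and nothing retransmitted  */
    };

    static enum partial_ack_action on_partial_ack(bool hole_filled_by_delayed_pkt,
                                                  bool retransmits_in_flight)
    {
        if (!hole_filled_by_delayed_pkt)
            return KEEP_RECOVERING;
        /* Reordering, not loss: widen the estimate; only undo the cwnd
         * reduction once no retransmitted data is still outstanding.
         */
        return retransmits_in_flight ? RAISE_REORDERING : UNDO_RECOVERY;
    }

    int main(void)
    {
        return on_partial_ack(true, false) == UNDO_RECOVERY ? 0 : 1;
    }
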
3365 /* Non-retransmitted hole got filled? That's reordering */ in tcp_clean_rtx_queue()
3449 /* If reordering is high then always grow cwnd whenever data is in tcp_may_raise_cwnd()
3455 if (tcp_sk(sk)->reordering > sock_net(sk)->ipv4.sysctl_tcp_reordering) in tcp_may_raise_cwnd()
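
tcp_may_raise_cwnd() uses the estimate on the congestion-control side: when measured reordering exceeds the configured baseline, cwnd is allowed to grow on any forward progress (including SACK-only delivery); otherwise growth stays conservative and requires in-order, cumulatively ACKed data (RFC 5681). A sketch of that choice with illustrative flag names.

    #include <stdbool.h>

    static bool may_raise_cwnd(unsigned int reordering, unsigned int baseline,
                               bool forward_progress, bool data_cumulatively_acked)
    {
        /* High reordering: any delivery counts, regardless of ordering. */
        if (reordering > baseline)
            return forward_progress;

        /* Otherwise be conservative: grow only on in-order delivery. */
        return data_cumulatively_acked;
    }

    int main(void)
    {
        /* SACK-only progress grows cwnd only because reordering (10) > baseline (3). */
        return may_raise_cwnd(10, 3, true, false) ? 0 : 1;
    }
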
4224 * the biggest problem on large power networks even with minor reordering.