Lines Matching full:throughput
28 * to distribute the device throughput among processes as desired,
29 * without any distortion due to throughput fluctuations, or to device
34 * guarantees that each queue receives a fraction of the throughput
37 * processes issuing sequential requests (to boost the throughput),
76 * preserving both a low latency and a high throughput on NCQ-capable,
81 * the maximum-possible throughput at all times, then do switch off
191 * writes to steal I/O throughput from reads.
241 * because it is characterized by limited throughput and apparently
321 * a) unjustly steal throughput from applications that may actually need
324 * in loss of device throughput with most flash-based storage, and may
350 * throughput-friendly I/O operations. This is even more true if BFQ
686 * must receive the same share of the throughput (symmetric scenario),
688 * throughput lower than or equal to the share that every other active
691 * throughput even if I/O dispatching is not plugged when bfqq remains
1204 * throughput: the quicker the requests of the activated queues are
1210 * weight-raising these new queues just lowers throughput in most
1234 * idling depending on which choice boosts the throughput more. The
1488 * I/O, which may in turn cause loss of throughput. Finally, there may
1605 * budget. Do not care about throughput consequences, in bfq_update_bfqq_wr_on_rq_arrival()
1820 * guarantees or throughput. As for guarantees, we care in bfq_bfqq_handle_idle_busy_switch()
1850 * As for throughput, we ask bfq_better_to_idle() whether we in bfq_bfqq_handle_idle_busy_switch()
1853 * boost throughput or to preserve service guarantees. Then in bfq_bfqq_handle_idle_busy_switch()
1855 * would certainly lower throughput. We may end up in this in bfq_bfqq_handle_idle_busy_switch()
1907 * throughput, as explained in detail in the comments in in bfq_reset_inject_limit()
1975 * A remarkable throughput boost can be reached by unconditionally
1978 * plugged for bfqq. In addition to boosting throughput, this
2003 * The sooner a waker queue is detected, the sooner throughput can be
2601 * the best possible order for throughput. in bfq_find_close_cooperator()
2662 * are likely to increase the throughput. in bfq_setup_merge()
2779 * throughput, it must have many requests enqueued at the same in bfq_setup_cooperator()
2785 * the throughput reached by the device is likely to be the in bfq_setup_cooperator()
2789 * terms of throughput. Merging tends to make many workloads in bfq_setup_cooperator()
2798 * for BFQ to let the device reach a high throughput. in bfq_setup_cooperator()
3103 * budget. This prevents seeky processes from lowering the throughput.
3207 * its reserved share of the throughput (in particular, it is in bfq_arm_slice_timer()
3230 * this maximises throughput with sequential workloads.
3239 * Update parameters related to throughput and responsiveness, as a
3497 * throughput concerns, but to preserve the throughput share of
3509 * determine also the actual throughput distribution among
3511 * concern about per-process throughput distribution, and
3514 * scheduler is likely to coincide with the desired throughput
3517 * (i-a) each of these processes must get the same throughput as
3521 * throughput than any of the other processes;
3530 * same throughput. This is exactly the desired throughput
3537 * that bfqq receives its assigned fraction of the device throughput
3540 * The problem is that idling may significantly reduce throughput with
3544 * throughput, it is important to check conditions (i-a), (i-b) and
3560 * share of the throughput even after being dispatched. In this
3565 * guaranteed its fair share of the throughput (basically because
3593 * risk of getting less throughput than its fair share.
3597 * throughput. This mechanism and its benefits are explained
3634 * part) without minimally sacrificing throughput. And, if
3636 * this device is probably a high throughput.
3812 * for throughput. in __bfq_bfqq_recalc_budget()
3836 * the throughput, as discussed in the in __bfq_bfqq_recalc_budget()
3851 * the chance to boost the throughput if this in __bfq_bfqq_recalc_budget()
3865 * candidate to boost the disk throughput. in __bfq_bfqq_recalc_budget()
3947 * their chances to lower the throughput. More details in the comments
4059 * throughput with the I/O of the application (e.g., because the I/O
4150 * tends to lower the throughput). In addition, this time-charging
4296 * only to be kicked off for preserving a high throughput.
4329 * boosts the throughput. in idling_boosts_thr_without_issues()
4332 * idling is virtually always beneficial for the throughput if: in idling_boosts_thr_without_issues()
4342 * throughput even with sequential I/O; rather it would lower in idling_boosts_thr_without_issues()
4343 * the throughput in proportion to how fast the device in idling_boosts_thr_without_issues()
4366 * of the device throughput proportional to their high in idling_boosts_thr_without_issues()
4394 * device idling plays a critical role for both throughput boosting
4399 * beneficial for throughput or, even if detrimental for throughput,
4401 * latency, desired throughput distribution, ...). In particular, on
4404 * device boost the throughput without causing any service-guarantee
4445 * either boosts the throughput (without issues), or is in bfq_better_to_idle()
4459 * why performing device idling is the best choice to boost the throughput
4544 * drive reach a very high throughput, even if in bfq_choose_bfqq_for_injection()
4631 * provide a reasonable throughput. in bfq_select_queue()
4647 * throughput and is possible. in bfq_select_queue()
4687 * throughput. The best action to take is therefore to in bfq_select_queue()
4707 * bfqq delivers more throughput when served without in bfq_select_queue()
4710 * count more than overall throughput, and may be in bfq_select_queue()
4731 * reasons. First, throughput may be low because the in bfq_select_queue()
4986 * throughput. in __bfq_dispatch_request()
5453 * Many throughput-sensitive workloads are made of several parallel
5462 * throughput, and not detrimental for service guarantees. The
5469 * throughput of the flows and task-wide I/O latency. In particular,
5490 * with ten random readers on /dev/nullb shows a throughput boost of
5492 * the total per-request processing time, the above throughput boost
5519 * throughput-beneficial if not merged. Currently this is in bfq_do_or_sched_stable_merge()
5521 * such a drive, not merging bfqq is better for throughput if in bfq_do_or_sched_stable_merge()
5542 * throughput benefits compared with in bfq_do_or_sched_stable_merge()
5759 * and in a severe loss of total throughput. in bfq_update_has_short_ttime()
5785 * performed at all times, and throughput gets boosted. in bfq_update_has_short_ttime()
5804 * to boost throughput more effectively, by injecting the I/O in bfq_update_has_short_ttime()
5841 * - we are idling to boost throughput, and in bfq_rq_enqueued()
6017 * only possible result is a throughput loss in bfq_insert_request()
6181 * control troubles than throughput benefits. Then reset in bfq_completed_request()
6267 * and the throughput is not affected. In contrast, if BFQ is not
6278 * To counter this loss of throughput, BFQ implements a "request
6282 * both boost throughput and not break bfqq's bandwidth and latency
6325 * set to 1, to start boosting throughput, and to prepare the