Lines Matching full:throughput

28  * to distribute the device throughput among processes as desired,
29 * without any distortion due to throughput fluctuations, or to device
34 * guarantees that each queue receives a fraction of the throughput
37 * processes issuing sequential requests (to boost the throughput),
76 * preserving both a low latency and a high throughput on NCQ-capable,
81 * the maximum-possible throughput at all times, then do switch off
191 * writes to steal I/O throughput from reads.
241 * because it is characterized by limited throughput and apparently
321 * a) unjustly steal throughput from applications that may actually need
324 * in loss of device throughput with most flash-based storage, and may
350 * throughput-friendly I/O operations. This is even more true if BFQ
797 * must receive the same share of the throughput (symmetric scenario),
799 * throughput lower than or equal to the share that every other active
802 * throughput even if I/O dispatching is not plugged when bfqq remains
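The symmetric-scenario condition referenced by the matches around line 797 can be illustrated with a toy check: if every active queue has the same weight, each is entitled to an equal share of the throughput, and dispatch plugging while a queue idles is unnecessary. All names and the struct layout below are illustrative assumptions, not BFQ's actual code.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a queue competing for the device. */
struct toy_queue {
	int weight;   /* scheduling weight */
	bool active;  /* has pending I/O */
};

/* Scenario is symmetric if all active queues share one weight,
 * i.e. each must receive the same share of the throughput. */
static bool toy_symmetric(const struct toy_queue *qs, size_t n)
{
	int w = -1;
	for (size_t i = 0; i < n; i++) {
		if (!qs[i].active)
			continue;
		if (w < 0)
			w = qs[i].weight;
		else if (qs[i].weight != w)
			return false;
	}
	return true;
}

/* In a symmetric scenario, not plugging dispatch while a queue
 * idles cannot distort the throughput distribution, so plugging
 * is only needed in the asymmetric case. */
static bool toy_needs_plugging(const struct toy_queue *qs, size_t n)
{
	return !toy_symmetric(qs, n);
}
```

A queue set with equal weights skips plugging; mixing weights triggers it.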
1316 * throughput: the quicker the requests of the activated queues are
1322 * weight-raising these new queues just lowers throughput in most
1346 * idling depending on which choice boosts the throughput more. The
1600 * I/O, which may in turn cause loss of throughput. Finally, there may
1717 * budget. Do not care about throughput consequences, in bfq_update_bfqq_wr_on_rq_arrival()
1932 * guarantees or throughput. As for guarantees, we care in bfq_bfqq_handle_idle_busy_switch()
1962 * As for throughput, we ask bfq_better_to_idle() whether we in bfq_bfqq_handle_idle_busy_switch()
1965 * boost throughput or to preserve service guarantees. Then in bfq_bfqq_handle_idle_busy_switch()
1967 * would certainly lower throughput. We may end up in this in bfq_bfqq_handle_idle_busy_switch()
2019 * throughput, as explained in detail in the comments in in bfq_reset_inject_limit()
2087 * A remarkable throughput boost can be reached by unconditionally
2090 * plugged for bfqq. In addition to boosting throughput, this
2114 * The sooner a waker queue is detected, the sooner throughput can be
2144 * doesn't hurt throughput that much. The condition below makes sure in bfq_check_waker()
2740 * the best possible order for throughput. in bfq_find_close_cooperator()
2809 * are likely to increase the throughput. in bfq_setup_merge()
2942 * throughput, it must have many requests enqueued at the same in bfq_setup_cooperator()
2948 * the throughput reached by the device is likely to be the in bfq_setup_cooperator()
2952 * terms of throughput. Merging tends to make many workloads in bfq_setup_cooperator()
2961 * for BFQ to let the device reach a high throughput. in bfq_setup_cooperator()
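The cooperator-merging matches above (bfq_find_close_cooperator, bfq_setup_cooperator) rest on one idea: two queues whose next requests land within a small sector distance are likely interleaving one sequential stream, and merging them presents the device with the long sequential run it needs for high throughput. The sketch below is a hypothetical distillation of that proximity test; the threshold and names are assumptions.

```c
#include <stdbool.h>

/* Illustrative "close enough to cooperate" sector distance. */
#define TOY_CLOSE_SECTORS 8192ULL

static unsigned long long toy_sector_dist(unsigned long long a,
					  unsigned long long b)
{
	return a > b ? a - b : b - a;
}

/* Two queues are close cooperators if their next requests fall
 * within TOY_CLOSE_SECTORS of each other: merging them then tends
 * to recreate a single sequential stream. */
static bool toy_close_cooperators(unsigned long long next_sector_q1,
				  unsigned long long next_sector_q2)
{
	return toy_sector_dist(next_sector_q1, next_sector_q2) <=
		TOY_CLOSE_SECTORS;
}
```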
3263 * budget. This prevents seeky processes from lowering the throughput.
3367 * its reserved share of the throughput (in particular, it is in bfq_arm_slice_timer()
3390 * this maximises throughput with sequential workloads.
3399 * Update parameters related to throughput and responsiveness, as a
3657 * throughput concerns, but to preserve the throughput share of
3669 * determine also the actual throughput distribution among
3671 * concern about per-process throughput distribution, and
3674 * scheduler is likely to coincide with the desired throughput
3677 * (i-a) each of these processes must get the same throughput as
3681 * throughput than any of the other processes;
3690 * same throughput. This is exactly the desired throughput
3697 * that bfqq receives its assigned fraction of the device throughput
3700 * The problem is that idling may significantly reduce throughput with
3704 * throughput, it is important to check conditions (i-a), (i-b) and
3720 * share of the throughput even after being dispatched. In this
3725 * guaranteed its fair share of the throughput (basically because
3753 * risk of getting less throughput than its fair share.
3757 * throughput. This mechanism and its benefits are explained
3794 * part) without minimally sacrificing throughput. And, if
3796 * this device probably has a high throughput.
3972 * for throughput. in __bfq_bfqq_recalc_budget()
3996 * the throughput, as discussed in the in __bfq_bfqq_recalc_budget()
4011 * the chance to boost the throughput if this in __bfq_bfqq_recalc_budget()
4025 * candidate to boost the disk throughput. in __bfq_bfqq_recalc_budget()
4107 * their chances to lower the throughput. More details in the comments
4219 * throughput with the I/O of the application (e.g., because the I/O
4310 * tends to lower the throughput). In addition, this time-charging
4456 * only to be kicked off for preserving a high throughput.
4489 * boosts the throughput. in idling_boosts_thr_without_issues()
4492 * idling is virtually always beneficial for the throughput if: in idling_boosts_thr_without_issues()
4502 * throughput even with sequential I/O; rather it would lower in idling_boosts_thr_without_issues()
4503 * the throughput in proportion to how fast the device in idling_boosts_thr_without_issues()
4526 * of the device throughput proportional to their high in idling_boosts_thr_without_issues()
4554 * device idling plays a critical role for both throughput boosting
4559 * beneficial for throughput or, even if detrimental for throughput,
4561 * latency, desired throughput distribution, ...). In particular, on
4564 * device boost the throughput without causing any service-guarantee
4605 * either boosts the throughput (without issues), or is in bfq_better_to_idle()
4619 * why performing device idling is the best choice to boost the throughput
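The bfq_better_to_idle matches above describe a two-part decision: idle the device for a queue iff idling either boosts throughput without issues, or is needed to preserve service guarantees (low latency, desired throughput distribution). A minimal sketch of that combination, with hypothetical boolean inputs standing in for the many heuristics BFQ actually evaluates:

```c
#include <stdbool.h>

/* Toy version of the idling decision: the two inputs are assumed
 * to be computed elsewhere (in BFQ, by idling_boosts_thr_without_issues()
 * and the service-guarantee checks). */
static bool toy_better_to_idle(bool idling_boosts_thr_without_issues,
			       bool idling_needed_for_guarantees)
{
	/* Idle if it helps throughput, or if guarantees require it
	 * even at some throughput cost. */
	return idling_boosts_thr_without_issues ||
	       idling_needed_for_guarantees;
}
```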
4704 * drive reach a very high throughput, even if in bfq_choose_bfqq_for_injection()
4791 * provide a reasonable throughput. in bfq_select_queue()
4807 * throughput and is possible. in bfq_select_queue()
4847 * throughput. The best action to take is therefore to in bfq_select_queue()
4867 * bfqq delivers more throughput when served without in bfq_select_queue()
4870 * count more than overall throughput, and may be in bfq_select_queue()
4891 * reasons. First, throughput may be low because the in bfq_select_queue()
5146 * throughput. in __bfq_dispatch_request()
5612 * Many throughput-sensitive workloads are made of several parallel
5621 * throughput, and not detrimental for service guarantees. The
5628 * throughput of the flows and task-wide I/O latency. In particular,
5649 * with ten random readers on /dev/nullb shows a throughput boost of
5651 * the total per-request processing time, the above throughput boost
5678 * throughput-beneficial if not merged. Currently this is in bfq_do_or_sched_stable_merge()
5680 * such a drive, not merging bfqq is better for throughput if in bfq_do_or_sched_stable_merge()
5701 * throughput benefits compared with in bfq_do_or_sched_stable_merge()
5909 * and in a severe loss of total throughput. in bfq_update_has_short_ttime()
5935 * performed at all times, and throughput gets boosted. in bfq_update_has_short_ttime()
5954 * to boost throughput more effectively, by injecting the I/O in bfq_update_has_short_ttime()
5991 * - we are idling to boost throughput, and in bfq_rq_enqueued()
6304 * control troubles than throughput benefits. Then reset in bfq_completed_request()
6383 * and the throughput is not affected. In contrast, if BFQ is not
6394 * To counter this loss of throughput, BFQ implements a "request
6398 * both boost throughput and not break bfqq's bandwidth and latency
6441 * set to 1, to start boosting throughput, and to prepare the
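The request-injection matches above outline a limit that starts at 1 to begin boosting throughput cautiously, grows while the owning queue's bandwidth and latency guarantees hold, and is reset when injection causes more control trouble than throughput benefit. A toy sketch of those dynamics, with an illustrative doubling policy and cap that are assumptions, not BFQ's actual update rule:

```c
/* Illustrative cap on in-flight injected requests. */
#define TOY_MAX_INJECT 32U

/* Return the next injection limit given whether the owning queue's
 * service guarantees were preserved at the current limit. */
static unsigned int toy_update_inject_limit(unsigned int limit,
					    int guarantees_ok)
{
	if (!guarantees_ok)
		return 0;	/* reset: injection hurt the queue */
	if (limit == 0)
		return 1;	/* (re)start boosting throughput */
	/* Guarantees held: probe a higher limit, up to the cap. */
	return limit < TOY_MAX_INJECT ? limit * 2 : limit;
}
```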