Lines Matching full:flush
13 * indicates a simple flush request. If there is data, REQ_PREFLUSH indicates
28 * The actual execution of flush is double buffered. Whenever a request
31 * REQ_OP_FLUSH is issued and the pending_idx is toggled. When the flush
37 * flush.
39 * C1. At any given time, only one flush shall be in progress. This makes
42 * C2. Flush is deferred if any request is executing DATA of its sequence.
90 * If flush has been pending longer than the following timeout,
118 return 1 << ffz(rq->flush.seq); in blk_flush_cur_seq()
124 * After flush data completion, @rq->bio is %NULL but we need to in blk_flush_restore_request()
132 rq->end_io = rq->flush.saved_end_io; in blk_flush_restore_request()
152 * blk_flush_complete_seq - complete flush sequence
154 * @fq: flush queue
158 * @rq just completed @seq part of its flush sequence, record the
172 BUG_ON(rq->flush.seq & seq); in blk_flush_complete_seq()
173 rq->flush.seq |= seq; in blk_flush_complete_seq()
184 /* queue for flush */ in blk_flush_complete_seq()
187 list_move_tail(&rq->flush.list, pending); in blk_flush_complete_seq()
191 list_move_tail(&rq->flush.list, &fq->flush_data_in_flight); in blk_flush_complete_seq()
198 * flush sequencing and may already have gone through the in blk_flush_complete_seq()
199 * flush data request completion path. Restore @rq for in blk_flush_complete_seq()
203 list_del_init(&rq->flush.list); in blk_flush_complete_seq()
235 * Flush request has to be marked as IDLE when it has really ended in flush_end_io()
253 /* account completion of the flush request */ in flush_end_io()
257 list_for_each_entry_safe(rq, n, running, flush.list) { in flush_end_io()
268 * blk_kick_flush - consider issuing flush request
270 * @fq: flush queue
273 * Flush-related states of @q have changed; consider issuing a flush request. in blk_kick_flush()
285 list_first_entry(pending, struct request, flush.list); in blk_kick_flush()
299 * Issue flush and toggle pending_idx. This makes pending_idx in blk_kick_flush()
300 * different from running_idx, which means flush is in flight. in blk_kick_flush()
309 * the tag's ownership for flush req. in blk_kick_flush()
311 * In case of an IO scheduler, the flush rq needs to borrow a scheduler tag in blk_kick_flush()
322 * this flush request as INFLIGHT to avoid double in blk_kick_flush()
394 * An empty flush handed down from a stacking driver may in blk_insert_flush()
407 * If there's data but flush is not necessary, the request can be in blk_insert_flush()
408 * processed directly without going through flush machinery. Queue in blk_insert_flush()
418 * @rq should go through flush machinery. Mark it part of flush in blk_insert_flush()
421 memset(&rq->flush, 0, sizeof(rq->flush)); in blk_insert_flush()
422 INIT_LIST_HEAD(&rq->flush.list); in blk_insert_flush()
424 rq->flush.saved_end_io = rq->end_io; /* Usually NULL */ in blk_insert_flush()
434 * blkdev_issue_flush - queue a flush
435 * @bdev: blockdev to issue flush for
439 * Issue a flush for the block device in question.
490 /* bio based request queue doesn't have a flush queue */ in blk_free_flush_queue()