Lines Matching full:flush

13  * indicates a simple flush request.  If there is data, REQ_PREFLUSH indicates
28 * The actual execution of flush is double buffered. Whenever a request
31 * REQ_OP_FLUSH is issued and the pending_idx is toggled. When the flush
37 * flush.
39 * C1. At any given time, only one flush shall be in progress. This makes
42 * C2. Flush is deferred if any request is executing DATA of its sequence.
90 * If flush has been pending longer than the following timeout,
124 return 1 << ffz(rq->flush.seq); in blk_flush_cur_seq()
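The `blk_flush_cur_seq()` line above computes the next pending step as the lowest *zero* bit of the completed-sequence mask. A minimal user-space sketch of that arithmetic, assuming the flag values defined in `block/blk.h` (`REQ_FSEQ_PREFLUSH`=1, `DATA`=2, `POSTFLUSH`=4, `DONE`=8) and using GCC's `__builtin_ctz` to stand in for the kernel's `ffz()`:

```c
/* Assumed re-creation of the flush sequence flags from block/blk.h. */
#define REQ_FSEQ_PREFLUSH  (1u << 0)
#define REQ_FSEQ_DATA      (1u << 1)
#define REQ_FSEQ_POSTFLUSH (1u << 2)
#define REQ_FSEQ_DONE      (1u << 3)

/* ffz(): find first zero bit, as in the kernel's bitops
 * (valid here because seq never has all bits set). */
static unsigned int ffz(unsigned int x)
{
    return (unsigned int)__builtin_ctz(~x);
}

/* The next step is the lowest bit not yet set in the
 * completed-sequence mask, exactly as on line 124 above. */
static unsigned int flush_cur_seq(unsigned int seq)
{
    return 1u << ffz(seq);
}
```

With `seq` starting at 0 and each completed step OR-ed in, this walks PREFLUSH, then DATA, then POSTFLUSH, then DONE in order, which is why a single bitmask is enough to sequence the whole flush.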
130 * After flush data completion, @rq->bio is %NULL but we need to in blk_flush_restore_request()
138 rq->end_io = rq->flush.saved_end_io; in blk_flush_restore_request()
158 * blk_flush_complete_seq - complete flush sequence
160 * @fq: flush queue
164 * @rq just completed @seq part of its flush sequence, record the
178 BUG_ON(rq->flush.seq & seq); in blk_flush_complete_seq()
179 rq->flush.seq |= seq; in blk_flush_complete_seq()
190 /* queue for flush */ in blk_flush_complete_seq()
193 list_move_tail(&rq->flush.list, pending); in blk_flush_complete_seq()
197 list_move_tail(&rq->flush.list, &fq->flush_data_in_flight); in blk_flush_complete_seq()
204 * flush sequencing and may already have gone through the in blk_flush_complete_seq()
205 * flush data request completion path. Restore @rq for in blk_flush_complete_seq()
208 list_del_init(&rq->flush.list); in blk_flush_complete_seq()
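The `blk_flush_complete_seq()` fragments above show the pattern: guard against completing the same step twice (`BUG_ON`), record the completed bit, then route the request to the pending-flush list, the data-in-flight list, or off the flush list entirely. A sketch of that dispatch as a small state machine; the flag values are assumed from `block/blk.h`, and `enum next_action` is invented for illustration in place of the real list moves:

```c
#include <assert.h>

#define REQ_FSEQ_PREFLUSH  (1u << 0)
#define REQ_FSEQ_DATA      (1u << 1)
#define REQ_FSEQ_POSTFLUSH (1u << 2)
#define REQ_FSEQ_DONE      (1u << 3)

enum next_action { QUEUE_FLUSH, QUEUE_DATA, REQUEST_DONE };

/* Record the just-completed step in *seq, then pick where the
 * request goes next.  Steps a request does not need are pre-marked
 * complete when it enters the machinery, so ffz() naturally skips
 * them. */
static enum next_action complete_seq(unsigned int *seq, unsigned int done)
{
    assert(!(*seq & done));  /* models BUG_ON(rq->flush.seq & seq) */
    *seq |= done;

    /* next step = lowest unfinished bit, as in blk_flush_cur_seq() */
    unsigned int next = 1u << (unsigned int)__builtin_ctz(~*seq);

    switch (next) {
    case REQ_FSEQ_PREFLUSH:
    case REQ_FSEQ_POSTFLUSH:
        return QUEUE_FLUSH;   /* list_move_tail(..., pending) */
    case REQ_FSEQ_DATA:
        return QUEUE_DATA;    /* ... &fq->flush_data_in_flight */
    default:
        return REQUEST_DONE;  /* list_del_init(&rq->flush.list) */
    }
}
```

Completing PREFLUSH on a fresh mask sends the request to the data-in-flight list; completing DATA queues it for the post-flush; completing POSTFLUSH finishes it, matching lines 190-208 above.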
240 * Flush request has to be marked as IDLE when it is really ended in flush_end_io()
260 /* account completion of the flush request */ in flush_end_io()
264 list_for_each_entry_safe(rq, n, running, flush.list) { in flush_end_io()
281 * blk_kick_flush - consider issuing flush request
283 * @fq: flush queue
286 * Flush-related states of @q have changed, consider issuing flush request.
298 list_first_entry(pending, struct request, flush.list); in blk_kick_flush()
312 * Issue flush and toggle pending_idx. This makes pending_idx in blk_kick_flush()
313 * different from running_idx, which means flush is in flight. in blk_kick_flush()
322 * the tag's ownership for flush req. in blk_kick_flush()
324 * In case of an IO scheduler, the flush rq needs to borrow a scheduler tag in blk_kick_flush()
335 * this flush request as INFLIGHT for avoiding double in blk_kick_flush()
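Lines 28-31 and 312-313 above describe the double buffering that `blk_kick_flush()` drives: two pending queues, and a flush is in flight exactly while `flush_pending_idx` differs from `flush_running_idx`. A toy model of that invariant, with the struct and helper names invented for illustration (the real state lives in `struct blk_flush_queue`):

```c
#include <stdbool.h>

/* Requests waiting for a flush accumulate on the queue selected by
 * pending_idx.  Issuing REQ_OP_FLUSH toggles pending_idx; completion
 * toggles running_idx to catch up. */
struct toy_flush_queue {
    unsigned int pending_idx:1;
    unsigned int running_idx:1;
};

static bool flush_in_flight(const struct toy_flush_queue *fq)
{
    return fq->pending_idx != fq->running_idx;
}

static void kick_flush(struct toy_flush_queue *fq)
{
    /* Constraint C1 above: only one flush in progress at a time. */
    if (flush_in_flight(fq))
        return;
    fq->pending_idx ^= 1;  /* issue flush, new waiters use the other queue */
}

static void toy_flush_end_io(struct toy_flush_queue *fq)
{
    fq->running_idx ^= 1;  /* completion: indices match again */
}
```

While the indices differ, further kicks are no-ops and new requests simply pile up on the other pending queue, which is what makes the single-flush constraint cheap to enforce.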
416 * An empty flush handed down from a stacking driver may in blk_insert_flush()
429 * If there's data but flush is not necessary, the request can be in blk_insert_flush()
430 * processed directly without going through flush machinery. Queue in blk_insert_flush()
440 * @rq should go through flush machinery. Mark it part of flush in blk_insert_flush()
443 memset(&rq->flush, 0, sizeof(rq->flush)); in blk_insert_flush()
444 INIT_LIST_HEAD(&rq->flush.list); in blk_insert_flush()
446 rq->flush.saved_end_io = rq->end_io; /* Usually NULL */ in blk_insert_flush()
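The `blk_insert_flush()` lines above (416, 429-430, 440-446) hinge on which sequence steps a request actually needs: none for an empty flush on a cache-less queue, DATA alone when no flushing is required, or a full PREFLUSH/DATA/POSTFLUSH sequence. A sketch of that policy decision; the boolean parameters are invented for illustration, where the real code reads `rq->cmd_flags` (`REQ_PREFLUSH`, `REQ_FUA`) and the queue's write-cache/FUA capability flags:

```c
#include <stdbool.h>

#define REQ_FSEQ_PREFLUSH  (1u << 0)
#define REQ_FSEQ_DATA      (1u << 1)
#define REQ_FSEQ_POSTFLUSH (1u << 2)

static unsigned int flush_policy(bool has_data, bool wants_preflush,
                                 bool wants_fua, bool queue_has_wc,
                                 bool queue_has_fua)
{
    unsigned int policy = 0;

    if (has_data)
        policy |= REQ_FSEQ_DATA;
    if (queue_has_wc) {
        if (wants_preflush)
            policy |= REQ_FSEQ_PREFLUSH;
        /* no hardware FUA: emulate it with a post-flush */
        if (wants_fua && !queue_has_fua)
            policy |= REQ_FSEQ_POSTFLUSH;
    }
    return policy;
}
```

A policy of exactly `REQ_FSEQ_DATA` is the "data but flush not necessary" case on lines 429-430: the request is queued directly, bypassing the flush machinery; anything else gets the `memset`/`INIT_LIST_HEAD` initialization shown on lines 443-446.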
456 * blkdev_issue_flush - queue a flush
457 * @bdev: blockdev to issue flush for
460 * Issue a flush for the block device in question.
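From user space, the usual way to reach `blkdev_issue_flush()` is indirectly: `fsync()`/`fdatasync()` on a file (or on an open block-device fd) ends up issuing the cache flush the kernel queues here. A small sketch of that pattern; `write_durably` and its path are invented for the example:

```c
#include <fcntl.h>
#include <unistd.h>

/* Write a buffer and force it to stable media: the fsync() is what
 * ultimately triggers the flush request described above. */
static int write_durably(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```

In-kernel callers pass the `struct block_device` directly, though the function's exact signature has varied across kernel versions, so check the tree you are building against.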
502 /* bio-based request queues have no flush queue */ in blk_free_flush_queue()