Lines Matching full:we
26 * Called with the ail lock held, but we don't want to assert fail with it
27 * held otherwise we'll lock everything up and won't be able to debug the
28 * cause. Hence we sample and check the state under the AIL lock and return if
29 * everything is fine, otherwise we drop the lock and run the ASSERT checks.
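
The comment above (source lines 26-29) describes a debugging-safety pattern: sample the state while the lock is held, and only trip the assertion after the lock has been dropped. A minimal sketch of that pattern follows; the names (struct my_list, check_list_state) are invented and a pthread mutex stands in for the AIL spinlock.

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

struct my_list {
    pthread_mutex_t lock;
    int nr_items;
    bool sorted;                    /* the invariant being checked */
};

/* Called with l->lock already held by the caller. */
static void check_list_state(struct my_list *l)
{
    /* Sample and check the state while the lock is still held. */
    bool ok = l->sorted && l->nr_items >= 0;

    if (ok)
        return;                     /* common case: lock stays held */

    /*
     * Only with the lock dropped do we trip the assertion, so a failure
     * cannot wedge every other user of the lock and make the problem
     * impossible to debug.
     */
    pthread_mutex_unlock(&l->lock);
    assert(ok);
}
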
110 * We need the AIL lock in order to get a coherent read of the lsn of the last
191 * When the traversal is complete, we need to remove the cursor from the list
206 * freed object. We set the low bit of the cursor item pointer so we can
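
The fragment at source line 206 refers to tagging the low bit of a cursor's item pointer so the cursor can tell that the item it referenced has been freed. A hedged sketch of that trick, with invented names (struct cursor, cursor_invalidate, ...); it assumes the pointed-to objects are at least 2-byte aligned, so bit 0 of a valid pointer is always zero and can be borrowed as a "stale" flag.

#include <stdbool.h>
#include <stdint.h>

struct log_item;                    /* opaque for this sketch */

struct cursor {
    struct log_item *item;          /* bit 0 set => the item was freed */
};

static void cursor_invalidate(struct cursor *cur)
{
    cur->item = (struct log_item *)((uintptr_t)cur->item | 1);
}

static bool cursor_is_stale(const struct cursor *cur)
{
    return ((uintptr_t)cur->item & 1) != 0;
}

static struct log_item *cursor_item(const struct cursor *cur)
{
    /* Strip the tag bit before using the pointer. */
    return (struct log_item *)((uintptr_t)cur->item & ~(uintptr_t)1);
}
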
289 * Splice the log item list into the AIL at the given LSN. We splice to the
307 * provided. If not, or if the one we got is not valid, in xfs_ail_splice()
315 * If a cursor is provided, we know we're processing the AIL in xfs_ail_splice()
318 * cursor to point to that last item, now while we have a in xfs_ail_splice()
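
Source lines 289-318 describe splicing a pre-sorted batch of log items into the LSN-ordered AIL, with a cursor remembering the last insertion point so repeated splices do not rescan the list from the head. A simplified, illustrative sketch under those assumptions; the singly-linked list and the names (struct ail, struct item, ail_splice) are invented and are not the kernel's list_head implementation.

#include <stdint.h>

typedef uint64_t lsn_t;

struct item {
    lsn_t lsn;
    struct item *next;
};

struct ail {
    struct item *head;              /* kept in ascending LSN order */
};

struct cursor {
    struct item *last;              /* last item spliced via this cursor */
};

/*
 * Insert a batch (already chained first..last, all at the target LSN)
 * after the last existing item whose LSN is <= lsn. If a cursor from a
 * previous splice is supplied, resume the walk there instead of at the
 * head, and remember the new batch tail for the next call.
 */
static void ail_splice(struct ail *ail, struct cursor *cur,
                       struct item *first, struct item *last, lsn_t lsn)
{
    struct item **linkp = &ail->head;

    if (cur && cur->last)
        linkp = &cur->last->next;   /* resume where the last splice ended */

    while (*linkp && (*linkp)->lsn <= lsn)
        linkp = &(*linkp)->next;

    last->next = *linkp;
    *linkp = first;

    if (cur)
        cur->last = last;           /* the next splice continues after us */
}

The cursor shortcut is only valid because the caller splices batches in non-decreasing LSN order, which is what the comment at source line 315 relies on.
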
352 * We clear the log item failed state here as well, but we have to be careful
354 * may be the failed log items. Hence if we clear the log item failed state
355 * before queuing the buffer for IO we can release all active references to
358 * order we process them in - the buffer is locked, and we own the buffer list
359 * so nothing on them is going to change while we are performing this action.
361 * Hence we can safely queue the buffer for IO before we clear the failed log
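
Source lines 352-361 are about ordering: the buffer is queued for IO before the failed state of its log items is cleared, because those failed items may hold the only active references to the buffer. A small reference-counting sketch of that hazard follows; all names are invented, and it assumes (for illustration only) that the IO path takes its own reference once the buffer is queued.

#include <stdbool.h>
#include <stdlib.h>

/* A refcounted buffer; freed when the last reference is dropped. */
struct sbuf {
    int refcount;
    bool queued;
};

static void sbuf_get(struct sbuf *bp) { bp->refcount++; }

static void sbuf_put(struct sbuf *bp)
{
    if (--bp->refcount == 0)
        free(bp);
}

/* The IO path holds its own reference from the moment the buffer is queued. */
static void sbuf_queue_io(struct sbuf *bp)
{
    sbuf_get(bp);
    bp->queued = true;
}

struct flog_item {
    struct sbuf *bp;                /* each failed item pins the buffer */
    bool failed;
};

/* Clearing the failed state releases that item's reference on the buffer. */
static void item_clear_failed(struct flog_item *it)
{
    it->failed = false;
    sbuf_put(it->bp);
}

/*
 * Queue first, then clear: if the failed items held the only references,
 * clearing them first could free the buffer before it was ever queued.
 */
static void resubmit_failed(struct sbuf *bp, struct flog_item *items, int nr)
{
    int i;

    sbuf_queue_io(bp);
    for (i = 0; i < nr; i++)
        item_clear_failed(&items[i]);
}
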
432 * If we encountered pinned items or did not finish writing out all in xfsaild_push()
433 * buffers the last time we ran, force a background CIL push to get the in xfsaild_push()
434 * items unpinned in the near future. We do not wait on the CIL push as in xfsaild_push()
456 /* we're done if the AIL is empty or our push has reached the end */ in xfsaild_push()
468 * Note that iop_push may unlock and reacquire the AIL lock. We in xfsaild_push()
485 * inode buffer is locked because we already pushed the in xfsaild_push()
488 * We do not want to stop flushing just because lots in xfsaild_push()
489 * of items are already being flushed, but we need to in xfsaild_push()
521 * Are there too many items we can't do anything with? in xfsaild_push()
523 * If we are skipping too many items because we can't flush in xfsaild_push()
524 * them or they are already being flushed, we back off and in xfsaild_push()
526 * done. i.e. remove pressure from the AIL while we can't make in xfsaild_push()
551 * We reached the target or the AIL is empty, so wait a bit in xfsaild_push()
553 * AIL before we start the next scan from the start of the AIL. in xfsaild_push()
559 * Either there is a lot of contention on the AIL or we are in xfsaild_push()
561 * is defined as >90% of the items we tried to push were stuck. in xfsaild_push()
572 * Assume we have more work to do in a short while. in xfsaild_push()
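
Source lines 432-572 sketch the back-off policy of the push loop: keep going while items are merely being flushed, but back off when the target is reached or when too many items are stuck. A hedged sketch of that decision follows; only the ">90% stuck" threshold comes from the comments, while the names and the delay constants are invented.

#include <stdbool.h>

#define PUSH_AGAIN_MS   10      /* assume more work to do shortly        */
#define BACKOFF_MS      20      /* congested or stuck: let IO complete   */
#define IDLE_MS         50      /* nothing in the AIL at all             */

struct push_stats {
    unsigned long scanned;      /* items examined on this pass            */
    unsigned long stuck;        /* pinned/locked; nothing we could do     */
    unsigned long flushing;     /* already under IO; not counted as stuck */
    bool          target_reached;
    bool          ail_empty;
};

static unsigned int push_delay_ms(const struct push_stats *s)
{
    if (s->ail_empty)
        return IDLE_MS;

    if (s->target_reached)
        return BACKOFF_MS;      /* wait for IO to retire items first */

    /* A "stuck" pass: more than 90% of the items we tried were stuck. */
    if (s->scanned && s->stuck * 100 / s->scanned > 90)
        return BACKOFF_MS;

    return PUSH_AGAIN_MS;
}
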
598 * Check kthread_should_stop() after we set the task state to in xfsaild()
599 * guarantee that we either see the stop bit and exit or the in xfsaild()
619 * happen if we're shutting down, so this is the last in xfsaild()
631 * Idle if the AIL is empty and we are not racing with a target in xfsaild()
632 * update. We check the AIL after we set the task to a sleep in xfsaild()
633 * state to guarantee that we either catch an ail_target update in xfsaild()
635 * Otherwise, we run the risk of sleeping indefinitely. in xfsaild()
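
Source lines 598-635 describe the idle logic: the task state is set before the AIL and the push target are re-checked, so a wakeup racing with the decision to sleep cannot be lost. The sketch below shows the same "never sleep with pending work" goal in userspace, but with a pthread condition variable rather than the kernel's set_current_state()/schedule() idiom; all names are invented.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;
static bool ail_nonempty;           /* stand-in for "the AIL has items"   */
static bool target_updated;         /* stand-in for an ail_target update  */
static bool stopping;               /* stand-in for kthread_should_stop() */

static void aild_idle(void)
{
    pthread_mutex_lock(&lock);
    /* Re-check every wake condition under the lock before sleeping. */
    while (!ail_nonempty && !target_updated && !stopping)
        pthread_cond_wait(&wake, &lock);
    target_updated = false;
    pthread_mutex_unlock(&lock);
}

/* Waker: publish the new state first, then signal the idle thread. */
static void aild_kick(void)
{
    pthread_mutex_lock(&lock);
    target_updated = true;
    pthread_cond_signal(&wake);
    pthread_mutex_unlock(&lock);
}
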
671 * We don't want to interrupt any push that is in progress, hence we only queue
672 * work if we set the pushing bit appropriately.
674 * We do this unlocked - we only need to know whether there is anything in the
675 * AIL at the time we are called. We don't need to access the contents of
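
Source lines 671-675 describe queueing push work only when the caller wins the race to set the pushing bit, based on an unlocked check of whether the AIL has anything in it. A sketch of that pattern with a C11 atomic flag standing in for the kernel's bit operations; the names are invented.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag push_queued = ATOMIC_FLAG_INIT;

/* Called by the worker once a push pass has finished. */
static void push_done(void)
{
    atomic_flag_clear(&push_queued);
}

/*
 * Queue background push work unless a push is already queued or running.
 * Returns true if this caller actually queued the work.
 */
static bool maybe_queue_push(bool ail_has_items)
{
    if (!ail_has_items)
        return false;               /* unlocked, racy check is fine here */

    /* Only the caller that flips the flag from clear to set queues work. */
    if (atomic_flag_test_and_set(&push_queued))
        return false;

    /* queue_work(...) would go here in the kernel. */
    return true;
}
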
766 * it to the AIL. If we move the first item in the AIL, update the log tail to
771 * lock held. As a result, once we have the AIL lock, we need to check each log
774 * To optimise the insert operation, we delete all the items from the AIL in
801 /* check if we really need to move the item */ in xfs_trans_ail_update_bulk()
838 * that we can use it to check if the LSN of the tail of the log has moved
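
Source lines 766-838 describe the bulk update done by xfs_trans_ail_update_bulk(): skip items that would not actually move, unlink the rest, splice them back in as one batch at the new LSN, and let the caller check whether the minimum LSN (the log tail) has moved. A simplified sketch under those assumptions; the names and the singly-linked list are invented and stand in for the kernel's list_head-based AIL.

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t lsn_t;

struct item {
    lsn_t lsn;
    bool in_ail;
    struct item *next;
};

struct ail {
    struct item *head;              /* kept in ascending LSN order */
};

static void ail_unlink(struct ail *ail, struct item *ip)
{
    struct item **pp;

    for (pp = &ail->head; *pp; pp = &(*pp)->next) {
        if (*pp == ip) {
            *pp = ip->next;
            return;
        }
    }
}

/* Returns true if the minimum LSN in the AIL (the log tail) changed. */
static bool ail_update_bulk(struct ail *ail, struct item **items, int nr,
                            lsn_t lsn)
{
    lsn_t old_tail = ail->head ? ail->head->lsn : 0;
    lsn_t new_tail;
    struct item *batch = NULL, **batch_tail = &batch;
    struct item **pp;
    int i;

    /* Pass 1: pull everything that really has to move into a local batch. */
    for (i = 0; i < nr; i++) {
        struct item *ip = items[i];

        /* check if we really need to move the item */
        if (ip->in_ail && ip->lsn >= lsn)
            continue;

        if (ip->in_ail)
            ail_unlink(ail, ip);
        ip->in_ail = true;
        ip->lsn = lsn;
        *batch_tail = ip;
        batch_tail = &ip->next;
    }

    /* Pass 2: one ordered splice of the whole batch at the target LSN. */
    if (batch) {
        for (pp = &ail->head; *pp && (*pp)->lsn <= lsn; pp = &(*pp)->next)
            ;
        *batch_tail = *pp;          /* batch tail points at the remainder */
        *pp = batch;
    }

    new_tail = ail->head ? ail->head->lsn : 0;
    return new_tail != old_tail;
}
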