Lines matching full:we
24 * recover, so we don't allow failure here. Also, we allocate in a context that
25 * we don't want to be issuing transactions from, so we need to tell the
28 * We don't reserve any space for the ticket - we are going to steal whatever
29 * space we require from transactions as they commit. To ensure we reserve all
30 * the space required, we need to set the current reservation of the ticket to
31 * zero so that we know to steal the initial transaction overhead from the
43 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc()
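The fragments above describe allocating the CIL ticket with a zero current reservation so that the first transaction to commit into the checkpoint also donates the fixed checkpoint overhead. A minimal userspace sketch of that accounting, using hypothetical struct and field names rather than the kernel's:

#include <stdio.h>

/* Hypothetical stand-ins for the CIL ticket and a committing transaction. */
struct cil_ticket {
	int curr_res;		/* space currently reserved for the checkpoint */
	int unit_res;		/* fixed per-checkpoint overhead, once stolen */
};

struct committing_tx {
	int unused_res;		/* reservation the transaction did not consume */
};

/* The ticket starts empty: no space is reserved up front. */
static void cil_ticket_init(struct cil_ticket *tic)
{
	tic->curr_res = 0;
	tic->unit_res = 0;
}

/*
 * Steal the space the checkpoint needs from a committing transaction.  The
 * first committer also pays the fixed checkpoint overhead, which is what a
 * zero unit reservation signals.
 */
static void cil_steal_space(struct cil_ticket *tic, struct committing_tx *tx,
			    int bytes_needed, int overhead)
{
	int steal = bytes_needed;

	if (tic->unit_res == 0) {
		steal += overhead;
		tic->unit_res = overhead;
	}
	tx->unused_res -= steal;
	tic->curr_res += steal;
}

int main(void)
{
	struct cil_ticket tic;
	struct committing_tx tx = { .unused_res = 4096 };

	cil_ticket_init(&tic);
	cil_steal_space(&tic, &tx, 512, 256);
	printf("ticket holds %d bytes, transaction has %d left\n",
	       tic.curr_res, tx.unused_res);
	return 0;
}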
79 * After the first stage of log recovery is done, we know where the head and
80 * tail of the log are. We need this log initialisation done before we can
83 * Here we allocate a log ticket to track space usage during a CIL push. This
84 * ticket is passed to xlog_write() directly so that we don't slowly leak log
114 * If we do this allocation within xlog_cil_insert_format_items(), it is done
116 * the memory allocation. This means that we have a potential deadlock situation
117 * under low memory conditions when we have lots of dirty metadata pinned in
118 * the CIL and we need a CIL commit to occur to free memory.
120 * To avoid this, we need to move the memory allocation outside the
127 * process, we cannot share the buffer between the transaction commit (which
130 * unreliable, but we most definitely do not want to be allocating and freeing
137 * the incoming modification. Then during the formatting of the item we can swap
138 * the active buffer with the new one if we can't reuse the existing buffer. We
140 * its size is right, otherwise we'll free and reallocate it at that point.
173 * Ordered items need to be tracked but we do not wish to write in xlog_cil_alloc_shadow_bufs()
174 * them. We need a logvec to track the object, but we do not in xlog_cil_alloc_shadow_bufs()
184 * We 64-bit align the length of each iovec so that the start in xlog_cil_alloc_shadow_bufs()
185 * of the next one is naturally aligned. We'll need to in xlog_cil_alloc_shadow_bufs()
195 * that space to ensure we can align it appropriately and not in xlog_cil_alloc_shadow_bufs()
201 * if we have no shadow buffer, or it is too small, we need to in xlog_cil_alloc_shadow_bufs()
208 * We free and allocate here as a realloc would copy in xlog_cil_alloc_shadow_bufs()
209 * unnecessary data. We don't use kmem_zalloc() for the in xlog_cil_alloc_shadow_bufs()
210 * same reason - we don't need to zero the data area in in xlog_cil_alloc_shadow_bufs()
217 * We are in transaction context, which means this in xlog_cil_alloc_shadow_bufs()
220 * holds. This means we can use GFP_KERNEL here so the in xlog_cil_alloc_shadow_bufs()
222 * contiguous page allocation failure as we require. in xlog_cil_alloc_shadow_bufs()
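The fragments above (source lines 114-222) lay out the shadow-buffer scheme: buffers are sized and allocated before any CIL locks are taken, an existing shadow is reused only when it is already large enough, and free-then-allocate is preferred to realloc because the old contents are not worth copying. A small sketch of that logic under those assumptions; the types below are illustrative, not struct xfs_log_vec:

#include <stdlib.h>

/* Illustrative log-vector shape; the real struct xfs_log_vec differs. */
struct lv {
	size_t	size;		/* usable buffer size */
	char	*buf;
};

struct log_item {
	struct lv *active;	/* buffer the CIL currently points at */
	struct lv *shadow;	/* spare buffer allocated outside the locks */
};

/*
 * Allocate the shadow buffer before taking any CIL locks.  Free-and-allocate
 * rather than realloc(): the old contents are stale, so copying them would be
 * wasted work, and the buffer does not need to be zeroed either.
 */
static int alloc_shadow_buf(struct log_item *lip, size_t needed)
{
	if (lip->shadow && lip->shadow->size >= needed)
		return 0;		/* existing shadow is big enough */

	if (lip->shadow) {
		free(lip->shadow->buf);
		free(lip->shadow);
	}
	lip->shadow = malloc(sizeof(*lip->shadow));
	if (!lip->shadow)
		return -1;
	lip->shadow->buf = malloc(needed);
	if (!lip->shadow->buf) {
		free(lip->shadow);
		lip->shadow = NULL;
		return -1;
	}
	lip->shadow->size = needed;
	return 0;
}

/*
 * Later, while formatting under the CIL context lock: reuse the active buffer
 * if it is still large enough, otherwise swap the shadow in.  The replaced
 * buffer is kept around for freeing after the switch.
 */
static struct lv *pick_format_buf(struct log_item *lip, size_t needed)
{
	if (lip->active && lip->active->size >= needed)
		return lip->active;
	return lip->shadow;
}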
274 * If there is no old LV, this is the first time we've seen the item in in xfs_cil_prepare_item()
275 * this CIL context and so we need to pin it. If we are replacing the in xfs_cil_prepare_item()
277 * buffer for later freeing. In both cases we are now switching to the in xfs_cil_prepare_item()
297 * CIL, store the sequence number on the log item so we can in xfs_cil_prepare_item()
308 * For delayed logging, we need to hold a formatted buffer containing all the
316 * guaranteed to be large enough for the current modification, but we will only
317 * use that if we can't reuse the existing lv. If we can't reuse the existing
318 * lv, then simply swap it out for the shadow lv. We don't free it - that is
321 * We don't set up region headers during this process; we simply copy the
322 * regions into the flat buffer. We can do this because we still have to do a
324 * ophdrs during the iclog write means that we can support splitting large
328 * Hence what we need to do now is rewrite the vector array to point
329 * to the copied region inside the buffer we just allocated. This allows us to
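The fragments above describe how each modification's regions are copied into the flat checkpoint buffer, with every region rounded up to a 64-bit boundary, and how the vector array is then rewritten to point at the copies so the object can be relogged immediately. A short sketch of that copy-and-repoint step; the iovec type below is an illustrative stand-in for struct xfs_log_iovec:

#include <string.h>

struct iovec_like {	/* illustrative stand-in for struct xfs_log_iovec */
	void	*addr;
	int	len;
};

/* Round each region up to an 8 byte boundary so the next one starts
 * naturally aligned. */
static inline int lv_round(int len)
{
	return (len + 7) & ~7;
}

/*
 * Copy every region into the flat buffer and rewrite the vector to point at
 * the copy; returns the next free byte so the caller can track space used.
 */
static char *format_regions(struct iovec_like *vecs, int nvecs, char *buf)
{
	int i;

	for (i = 0; i < nvecs; i++) {
		memcpy(buf, vecs[i].addr, vecs[i].len);
		vecs[i].addr = buf;
		buf += lv_round(vecs[i].len);
	}
	return buf;
}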
343 /* Bail out if we didn't find a log item. */ in xlog_cil_insert_format_items()
418 * as well. Remove the amount of space we added to the checkpoint ticket from
437 * We can do this safely because the context can't checkpoint until we in xlog_cil_insert_items()
438 * are done so it doesn't matter exactly how we update the CIL. in xlog_cil_insert_items()
456 * reservation has to grow as well as the current reservation as we in xlog_cil_insert_items()
457 * steal from tickets so we can correctly determine the space used in xlog_cil_insert_items()
466 /* do we need space for more log record headers? */ in xlog_cil_insert_items()
482 * If we've overrun the reservation, dump the tx details before we move in xlog_cil_insert_items()
498 * We do this here so we only need to take the CIL lock once during in xlog_cil_insert_items()
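Part of the accounting referred to above is charging space for additional log record headers as the checkpoint grows. A hypothetical back-of-the-envelope version of that check, not the kernel's actual formula:

/*
 * Hypothetical calculation: charge one extra log record header each time the
 * accumulated CIL space crosses another iclog-sized boundary.
 */
static int extra_header_space(int used_bytes, int added_bytes,
			      int iclog_size, int header_size)
{
	int headers_before = used_bytes / iclog_size;
	int headers_after = (used_bytes + added_bytes) / iclog_size;

	return (headers_after - headers_before) * header_size;
}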
549 * pagb_lock. Note that we need an unbounded workqueue, otherwise we might
606 * Mark all items committed and clear busy extents. We free the log vector
607 * chains in a separate pass so that we unpin the log items as quickly as
618 * If the I/O failed, we're aborting the commit and already shutdown. in xlog_cil_committed()
619 * Wake any commit waiters before aborting the log items so we don't in xlog_cil_committed()
664 * Record the LSN of the iclog we were just granted space to start writing into.
681 * The LSN we need to pass to the log items on transaction in xlog_cil_set_ctx_write_state()
683 * the commit lsn. If we use the commit record lsn then we can in xlog_cil_set_ctx_write_state()
693 * Take a reference to the iclog for the context so that we still hold in xlog_cil_set_ctx_write_state()
701 * iclog for an entire commit record, so we can attach the context in xlog_cil_set_ctx_write_state()
702 * callbacks now. This needs to be done before we make the commit_lsn in xlog_cil_set_ctx_write_state()
712 * Now we can record the commit LSN and wake anyone waiting for this in xlog_cil_set_ctx_write_state()
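The fragments above explain that the LSN passed to log items at commit must be the LSN of the first iclog written (the start record), not the commit record's LSN, otherwise the tail could move beyond the grant write head. A toy sketch of recording that state, with hypothetical field names:

#include <stdint.h>

/* Hypothetical checkpoint context write state. */
struct ctx_write_state {
	uint64_t start_lsn;	/* 0 until the first iclog write is granted */
	uint64_t commit_lsn;	/* 0 until the commit record is written */
};

/*
 * Record the LSN of the iclog we were just granted space in.  Only the first
 * write sets start_lsn, and that is the value later handed to the log items;
 * the commit record additionally sets commit_lsn so waiters can be woken.
 */
static void set_ctx_write_state(struct ctx_write_state *ctx,
				uint64_t iclog_lsn, int is_commit_record)
{
	if (!ctx->start_lsn)
		ctx->start_lsn = iclog_lsn;
	if (is_commit_record)
		ctx->commit_lsn = iclog_lsn;
}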
746 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_order_write()
840 * If the current sequence is the same as xc_push_seq we need to do a flush. If
842 * flushed and we don't need to do anything - the caller will wait for it to
846 * Hence we can allow log forces to run racily and not issue pushes for the
847 * same sequence twice. If we get a race between multiple pushes for the same
885 * As we are about to switch to a new, empty CIL context, we no longer in xlog_cil_push_work()
896 * Check if we've anything to push. If there is nothing, then we don't in xlog_cil_push_work()
897 * move on to a new sequence number and so we have to be able to push in xlog_cil_push_work()
914 * We are now going to push this context, so add it to the committing in xlog_cil_push_work()
915 * list before we do anything else. This ensures that anyone waiting on in xlog_cil_push_work()
924 * waiting on. If the CIL is not empty, we get put on the committing in xlog_cil_push_work()
926 * an empty CIL and an unchanged sequence number means we jumped out in xlog_cil_push_work()
942 * because we hold the flush lock exclusively. Hence we can now issue in xlog_cil_push_work()
943 * a cache flush to ensure all the completed metadata in the journal we in xlog_cil_push_work()
946 * Because we are issuing this cache flush before we've written the in xlog_cil_push_work()
947 * tail lsn to the iclog, we can have metadata IO completions move the in xlog_cil_push_work()
949 * being written. In this case, we need to re-issue the cache flush in xlog_cil_push_work()
951 * the tail LSN *before* we issue the flush. in xlog_cil_push_work()
959 * items from the CIL. We don't need the CIL lock here because it's only in xlog_cil_push_work()
981 * Switch the contexts so we can drop the context lock and move out in xlog_cil_push_work()
982 * of a shared context. We can't just go straight to the commit record, in xlog_cil_push_work()
983 * though - we need to synchronise with previous and future commits so in xlog_cil_push_work()
985 * that we process items during log IO completion in the correct order. in xlog_cil_push_work()
987 * For example, if we get an EFI in one checkpoint and the EFD in the in xlog_cil_push_work()
988 * next (e.g. due to log forces), we do not want the checkpoint with in xlog_cil_push_work()
990 * we must strictly order the commit records of the checkpoints so in xlog_cil_push_work()
995 * Hence we need to add this context to the committing context list so in xlog_cil_push_work()
1001 * committing list. This also ensures that we can do unlocked checks in xlog_cil_push_work()
1012 * begin the transaction. We need to account for the space used by the in xlog_cil_push_work()
1015 * The LSN we need to pass to the log items on transaction commit is in xlog_cil_push_work()
1016 * the LSN reported by the first log vector write. If we use the commit in xlog_cil_push_work()
1017 * record lsn then we can move the tail beyond the grant write head. in xlog_cil_push_work()
1034 * Before we format and submit the first iclog, we have to ensure that in xlog_cil_push_work()
1051 * to complete before we submit the commit_iclog. We can't use state in xlog_cil_push_work()
1055 * In the latter case, if it's a future iclog and we wait on it, then we in xlog_cil_push_work()
1057 * wakeup until this commit_iclog is written to disk. Hence we use the in xlog_cil_push_work()
1058 * iclog header lsn and compare it to the commit lsn to determine if we in xlog_cil_push_work()
1069 * iclogs older than ic_prev. Hence we only need to wait in xlog_cil_push_work()
1077 * We need to issue a pre-flush so that the ordering for this in xlog_cil_push_work()
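A recurring theme in the push-work fragments above is that commit records must reach the log in checkpoint order, for example an EFI's checkpoint strictly before its EFD's, which is why each context joins a committing list before its commit record is written. A simplified sketch of the ordering check such a list makes possible; the structure is illustrative only:

#include <stdint.h>

/* Illustrative committing-context list entry. */
struct committing_ctx {
	uint64_t		seq;
	uint64_t		commit_lsn;	/* 0 until the commit record is written */
	struct committing_ctx	*next;
};

/*
 * Returns 1 when every earlier checkpoint sequence has already written its
 * commit record, so ours may be written now; a caller would sleep on the
 * list and re-check after being woken otherwise.
 */
static int can_write_commit_record(const struct committing_ctx *list,
				   uint64_t my_seq)
{
	for (; list; list = list->next) {
		if (list->seq < my_seq && list->commit_lsn == 0)
			return 0;
	}
	return 1;
}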
1124 * We need to push the CIL every so often so we don't cache more than we can fit in
1137 * The CIL won't be empty because we are called while holding the in xlog_cil_push_background()
1138 * context lock so whatever we added to the CIL will still be there in xlog_cil_push_background()
1143 * Don't do a background push if we haven't used up all the in xlog_cil_push_background()
1158 * Drop the context lock now, we can't hold that if we need to sleep in xlog_cil_push_background()
1159 * because we are over the blocking threshold. The push_lock is still in xlog_cil_push_background()
1166 * If we are well over the space limit, throttle the work that is being in xlog_cil_push_background()
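The fragments above describe the background push: once the CIL holds more than a background space limit a push is queued, and committers are throttled once usage crosses a higher blocking limit. A sketch of that two-threshold decision; the limits shown are placeholders, not the kernel's actual ratios:

#include <stdbool.h>

/* Hypothetical thresholds; the real limits are derived from the log size. */
#define CIL_SPACE_LIMIT(logsize)	((logsize) / 8)
#define CIL_BLOCKING_LIMIT(logsize)	((logsize) / 4)

/*
 * Decide whether to queue a background push and whether the committing task
 * should be throttled because the CIL has grown past the blocking limit.
 */
static void cil_push_background(long space_used, long logsize,
				bool *queue_push, bool *throttle)
{
	*queue_push = space_used >= CIL_SPACE_LIMIT(logsize);
	*throttle = space_used >= CIL_BLOCKING_LIMIT(logsize);
}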
1192 * If the caller is performing a synchronous force, we will flush the workqueue
1197 * If the caller is performing an async push, we need to ensure that the
1198 * checkpoint is fully flushed out of the iclogs when we finish the push. If we
1202 * mechanism. Hence in this case we need to pass a flag to the push work to
1223 * If the CIL is empty or we've already pushed the sequence then in xlog_cil_push_now()
1224 * there's no work we need to do. in xlog_cil_push_now()
1255 * To do this, we need to format the item, pin it in memory if required and
1256 * account for the space used by the transaction. Once we have done that we
1258 * transaction to the checkpoint context so we carry the busy extents through
1276 * Do all necessary memory allocation before we lock the CIL. in xlog_cil_commit()
1298 * This needs to be done before we drop the CIL context lock because we in xlog_cil_commit()
1300 * to disk. If we don't, then the CIL checkpoint can race with us and in xlog_cil_commit()
1301 * we can run checkpoint completion before we've updated and unlocked in xlog_cil_commit()
1336 * We only need to push if we haven't already pushed the sequence number given.
1337 * Hence the only time we will trigger a push here is if the push sequence is
1340 * We return the current commit lsn to allow the callers to determine if a
1359 * check to see if we need to force out the current context. in xlog_cil_force_seq()
1367 * See if we can find a previous sequence still committing. in xlog_cil_force_seq()
1368 * We need to wait for all previous sequence commits to complete in xlog_cil_force_seq()
1375 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_force_seq()
1400 * Hence by the time we have got here our sequence may not have been in xlog_cil_force_seq()
1406 * Hence if we don't find the context in the committing list and the in xlog_cil_force_seq()
1410 * it means we haven't yet started the push, because if it had started in xlog_cil_force_seq()
1411 * we would have found the context on the committing list. in xlog_cil_force_seq()
1423 * We detected a shutdown in progress. We need to trigger the log force in xlog_cil_force_seq()
1425 * we are already in a shutdown state. Hence we can't return in xlog_cil_force_seq()
1427 * LSN is already stable), so we return a zero LSN instead. in xlog_cil_force_seq()
1436 * We can't rely on just the log item being in the CIL; we have to check
1455 * current sequence, we're in a new checkpoint. in xfs_log_item_in_current_chkpt()
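The closing fragments note that being linked into the CIL is not enough on its own: the item's recorded sequence must also match the sequence currently being built. A tiny sketch of that comparison, with hypothetical types:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical shapes: each logged item remembers the CIL sequence it was
 * added under; the CIL tracks the sequence currently being built. */
struct log_item_state	{ uint64_t seq; bool in_cil; };
struct cil_state	{ uint64_t current_seq; };

/*
 * An item left over from an earlier, already-pushed sequence is still linked
 * into CIL structures but is not part of the *current* checkpoint.
 */
static bool item_in_current_chkpt(const struct cil_state *cil,
				  const struct log_item_state *lip)
{
	return lip->in_cil && lip->seq == cil->current_seq;
}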