Lines Matching full:we

24  * recover, so we don't allow failure here. Also, we allocate in a context that
25 * we don't want to be issuing transactions from, so we need to tell the
28 * We don't reserve any space for the ticket - we are going to steal whatever
29 * space we require from transactions as they commit. To ensure we reserve all
30 * the space required, we need to set the current reservation of the ticket to
31 * zero so that we know to steal the initial transaction overhead from the
43 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc()
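The ticket comments above describe a reservation that starts at zero and is filled by stealing from committing transactions. Below is a minimal user-space sketch of that idea; `struct cil_ticket`, its fields, and both function names are hypothetical stand-ins, not the real XFS types.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for a CIL push log ticket. */
struct cil_ticket {
	long curr_res;		/* space currently reserved */
	long unit_res;		/* size of one checkpoint unit */
};

/*
 * Allocate the CIL push ticket without reserving any log space.
 * The current reservation starts at zero so the first transaction
 * to commit knows it must also donate the checkpoint's fixed
 * overhead, not just its own share.
 */
static struct cil_ticket *cil_ticket_alloc(long unit_res)
{
	struct cil_ticket *tic = calloc(1, sizeof(*tic));

	if (!tic)
		return NULL;
	tic->unit_res = unit_res;
	tic->curr_res = 0;	/* steal everything from committers */
	return tic;
}

/* Steal up to @want bytes from a committing transaction's reservation. */
static long cil_ticket_steal(struct cil_ticket *tic, long *txn_res, long want)
{
	long take = want < *txn_res ? want : *txn_res;

	*txn_res -= take;
	tic->curr_res += take;
	return take;
}
```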
63 * We can't rely on just the log item being in the CIL, we have to check
81 * current sequence, we're in a new checkpoint. in xlog_item_in_current_chkpt()
141 * We're in the middle of switching cil contexts. Reset the in xlog_cil_push_pcp_aggregate()
142 * counter we use to detect when the current context is nearing in xlog_cil_push_pcp_aggregate()
152 * limit threshold so we can switch to atomic counter aggregation for accurate
195 * After the first stage of log recovery is done, we know where the head and
196 * tail of the log are. We need this log initialisation done before we can
199 * Here we allocate a log ticket to track space usage during a CIL push. This
200 * ticket is passed to xlog_write() directly so that we don't slowly leak log
231 * If we do this allocation within xlog_cil_insert_format_items(), it is done
233 * the memory allocation. This means that we have a potential deadlock situation
234 * under low memory conditions when we have lots of dirty metadata pinned in
235 * the CIL and we need a CIL commit to occur to free memory.
237 * To avoid this, we need to move the memory allocation outside the
244 * process, we cannot share the buffer between the transaction commit (which
247 * unreliable, but we most definitely do not want to be allocating and freeing
254 * the incoming modification. Then during the formatting of the item we can swap
255 * the active buffer with the new one if we can't reuse the existing buffer. We
257 * its size is right, otherwise we'll free and reallocate it at that point.
290 * Ordered items need to be tracked but we do not wish to write in xlog_cil_alloc_shadow_bufs()
291 * them. We need a logvec to track the object, but we do not in xlog_cil_alloc_shadow_bufs()
301 * We 64-bit align the length of each iovec so that the start of in xlog_cil_alloc_shadow_bufs()
302 * the next one is naturally aligned. We'll need to account for in xlog_cil_alloc_shadow_bufs()
305 * We also add the xlog_op_header to each region when in xlog_cil_alloc_shadow_bufs()
307 * at this point. Hence we'll need an additional number of bytes in xlog_cil_alloc_shadow_bufs()
319 * that space to ensure we can align it appropriately and not in xlog_cil_alloc_shadow_bufs()
325 * if we have no shadow buffer, or it is too small, we need to in xlog_cil_alloc_shadow_bufs()
331 * We free and allocate here as a realloc would copy in xlog_cil_alloc_shadow_bufs()
332 * unnecessary data. We don't use kvzalloc() for the in xlog_cil_alloc_shadow_bufs()
333 * same reason - we don't need to zero the data area in in xlog_cil_alloc_shadow_bufs()
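The xlog_cil_alloc_shadow_bufs() fragments above describe two pieces of arithmetic: pad each iovec to a 64-bit boundary plus an op header, and grow an undersized shadow buffer by free-and-allocate rather than realloc(). A minimal sketch, assuming a 12-byte op header and hypothetical helper names (the real code sizes and allocates differently):

```c
#include <stddef.h>
#include <stdlib.h>

#define OP_HDR_SIZE 12	/* assumed size of the per-region op header */

/*
 * Round an iovec payload up to an 8-byte boundary so the start of
 * the next iovec is naturally aligned.
 */
static size_t iovec_space(size_t nbytes)
{
	return (nbytes + 7) & ~(size_t)7;
}

/*
 * Size a shadow buffer for nvecs regions: each region carries an
 * op header plus its 64-bit-aligned payload.
 */
static size_t shadow_buf_size(const size_t *lens, int nvecs)
{
	size_t bytes = 0;

	for (int i = 0; i < nvecs; i++)
		bytes += OP_HDR_SIZE + iovec_space(lens[i]);
	return bytes;
}

/*
 * If there is no shadow buffer, or it is too small, free and
 * allocate rather than realloc(): the old contents are stale, so
 * copying them (or zeroing the new buffer) would be wasted work.
 */
static void *shadow_buf_grow(void *old, size_t old_size, size_t need)
{
	if (old && old_size >= need)
		return old;	/* existing buffer is big enough */
	free(old);
	return malloc(need);
}
```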
385 * If there is no old LV, this is the first time we've seen the item in in xfs_cil_prepare_item()
386 * this CIL context and so we need to pin it. If we are replacing the in xfs_cil_prepare_item()
388 * buffer for later freeing. In both cases we are now switching to the in xfs_cil_prepare_item()
407 * CIL, store the sequence number on the log item so we can in xfs_cil_prepare_item()
418 * For delayed logging, we need to hold a formatted buffer containing all the
426 * guaranteed to be large enough for the current modification, but we will only
427 * use that if we can't reuse the existing lv. If we can't reuse the existing
428 * lv, then simply swap it out for the shadow lv. We don't free it - that is
431 * We don't set up region headers during this process; we simply copy the
432 * regions into the flat buffer. We can do this because we still have to do a
434 * ophdrs during the iclog write means that we can support splitting large
438 * Hence what we need to do now is rewrite the vector array to point
439 * to the copied region inside the buffer we just allocated. This allows us to
451 /* Bail out if we didn't find a log item. */ in xlog_cil_insert_format_items()
541 * as well. Remove the amount of space we added to the checkpoint ticket from
562 * We can do this safely because the context can't checkpoint until we in xlog_cil_insert_items()
563 * are done so it doesn't matter exactly how we update the CIL. in xlog_cil_insert_items()
568 * Subtract the space released by intent cancelation from the space we in xlog_cil_insert_items()
569 * consumed so that we remove it from the CIL space and add it back to in xlog_cil_insert_items()
575 * Grab the per-cpu pointer for the CIL before we start any accounting. in xlog_cil_insert_items()
576 * That ensures that we are running with pre-emption disabled and so we in xlog_cil_insert_items()
583 * We need to take the CIL checkpoint unit reservation on the first in xlog_cil_insert_items()
584 * commit into the CIL. Test the XLOG_CIL_EMPTY bit first so we don't in xlog_cil_insert_items()
585 * unnecessarily do an atomic op in the fast path here. We can clear the in xlog_cil_insert_items()
586 * XLOG_CIL_EMPTY bit as we are under the xc_ctx_lock here and that in xlog_cil_insert_items()
594 * Check if we need to steal iclog headers. atomic_read() is not a in xlog_cil_insert_items()
595 * locked atomic operation, so we can check the value before we do any in xlog_cil_insert_items()
596 * real atomic ops in the fast path. If we've already taken the CIL unit in xlog_cil_insert_items()
597 * reservation from this commit, we've already got one iclog header in xlog_cil_insert_items()
598 * space reserved so we have to account for that otherwise we risk in xlog_cil_insert_items()
601 * If the CIL is already at the hard limit, we might need more header in xlog_cil_insert_items()
603 * commit that occurs once we are over the hard limit to ensure the CIL in xlog_cil_insert_items()
606 * This can steal more than we need, but that's OK. in xlog_cil_insert_items()
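The xlog_cil_insert_items() fragments above describe the fast-path pattern: read the flag with a plain atomic load first, and only fall through to a locked read-modify-write when the bit looks set. A minimal C11 sketch of that pattern, using hypothetical names in place of the kernel's bitops:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define CIL_EMPTY_BIT 0u	/* assumed stand-in for XLOG_CIL_EMPTY */

/*
 * Test the flag with a plain atomic load first so the common
 * (already-clear) path avoids a locked read-modify-write. When the
 * bit looks set, race to clear it: only the winner - the caller
 * that actually observed the bit set - takes the CIL unit
 * reservation.
 */
static bool take_unit_reservation(_Atomic unsigned int *flags)
{
	unsigned int mask = 1u << CIL_EMPTY_BIT;

	/* fast path: bit already clear, nothing to do */
	if (!(atomic_load(flags) & mask))
		return false;

	/* slow path: clear the bit; the winner sees it previously set */
	return (atomic_fetch_and(flags, ~mask) & mask) != 0;
}
```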
637 * If we just transitioned over the soft limit, we need to in xlog_cil_insert_items()
652 * We do this here so we only need to take the CIL lock once during in xlog_cil_insert_items()
669 * If we've overrun the reservation, dump the tx details before we move in xlog_cil_insert_items()
713 * pagb_lock. Note that we need an unbounded workqueue, otherwise we might
770 * Mark all items committed and clear busy extents. We free the log vector
771 * chains in a separate pass so that we unpin the log items as quickly as
782 * If the I/O failed, we're aborting the commit and already shutdown. in xlog_cil_committed()
783 * Wake any commit waiters before aborting the log items so we don't in xlog_cil_committed()
828 * Record the LSN of the iclog we were just granted space to start writing into.
845 * The LSN we need to pass to the log items on transaction in xlog_cil_set_ctx_write_state()
847 * the commit lsn. If we use the commit record lsn then we can in xlog_cil_set_ctx_write_state()
856 * Make sure the metadata we are about to overwrite in the log in xlog_cil_set_ctx_write_state()
867 * Take a reference to the iclog for the context so that we still hold in xlog_cil_set_ctx_write_state()
875 * iclog for an entire commit record, so we can attach the context in xlog_cil_set_ctx_write_state()
876 * callbacks now. This needs to be done before we make the commit_lsn in xlog_cil_set_ctx_write_state()
886 * Now we can record the commit LSN and wake anyone waiting for this in xlog_cil_set_ctx_write_state()
920 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_order_write()
1027 * Build a checkpoint transaction header to begin the journal transaction. We
1031 * This is the only place we write a transaction header, so we also build the
1033 * transaction header. We keep the start record in its own log vector rather
1085 * CIL item reordering compare function. We want to order in ascending ID order,
1086 * but we want to leave items with the same ID in the order they were added to
1087 * the list. This is important for operations like reflink where we log 4 order-
1088 * dependent intents in a single transaction when we overwrite an existing
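The reordering comment above relies on two properties: compare by ID only, and use a stable sort so equal-ID items keep their insertion order (the kernel uses list_sort() for this). A minimal array-based sketch with hypothetical types; insertion sort stands in for list_sort() because it is also stable:

```c
#include <stdint.h>

struct cil_item {		/* hypothetical stand-in */
	uint32_t order_id;	/* assigned when added to the CIL */
	int      seq;		/* insertion sequence, for demonstration */
};

/*
 * Compare by order_id only. Returning 0 for equal IDs lets a
 * stable sort keep those items in insertion order, which
 * order-dependent intents (e.g. reflink) rely on.
 */
static int cil_item_cmp(const struct cil_item *a, const struct cil_item *b)
{
	return (a->order_id > b->order_id) - (a->order_id < b->order_id);
}

/* Stable insertion sort, standing in for the kernel's list_sort(). */
static void cil_sort(struct cil_item *v, int n)
{
	for (int i = 1; i < n; i++) {
		struct cil_item key = v[i];
		int j = i - 1;

		while (j >= 0 && cil_item_cmp(&v[j], &key) > 0) {
			v[j + 1] = v[j];
			j--;
		}
		v[j + 1] = key;
	}
}
```

Note that qsort() would not work here: the C standard leaves its behaviour for equal elements unspecified, so stability has to come from the sort algorithm itself.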
1106 * the CIL. We don't need the CIL lock here because it's only needed on the
1109 * If a log item is marked with a whiteout, we do not need to write it to the
1110 * journal and so we just move them to the whiteout list for the caller to
1136 /* we don't write ordered log vectors */ in xlog_cil_build_lv_chain()
1164 * If the current sequence is the same as xc_push_seq we need to do a flush. If
1166 * flushed and we don't need to do anything - the caller will wait for it to
1170 * Hence we can allow log forces to run racily and not issue pushes for the
1171 * same sequence twice. If we get a race between multiple pushes for the same
1206 * As we are about to switch to a new, empty CIL context, we no longer in xlog_cil_push_work()
1219 * Check if we've anything to push. If there is nothing, then we don't in xlog_cil_push_work()
1220 * move on to a new sequence number and so we have to be able to push in xlog_cil_push_work()
1237 * We are now going to push this context, so add it to the committing in xlog_cil_push_work()
1238 * list before we do anything else. This ensures that anyone waiting on in xlog_cil_push_work()
1247 * waiting on. If the CIL is not empty, we get put on the committing in xlog_cil_push_work()
1249 * an empty CIL and an unchanged sequence number means we jumped out in xlog_cil_push_work()
1266 * Switch the contexts so we can drop the context lock and move out in xlog_cil_push_work()
1267 * of a shared context. We can't just go straight to the commit record, in xlog_cil_push_work()
1268 * though - we need to synchronise with previous and future commits so in xlog_cil_push_work()
1270 * that we process items during log IO completion in the correct order. in xlog_cil_push_work()
1272 * For example, if we get an EFI in one checkpoint and the EFD in the in xlog_cil_push_work()
1273 * next (e.g. due to log forces), we do not want the checkpoint with in xlog_cil_push_work()
1275 * we must strictly order the commit records of the checkpoints so in xlog_cil_push_work()
1280 * Hence we need to add this context to the committing context list so in xlog_cil_push_work()
1286 * committing list. This also ensures that we can do unlocked checks in xlog_cil_push_work()
1296 * Sort the log vector chain before we add the transaction headers. in xlog_cil_push_work()
1297 * This ensures we always have the transaction headers at the start in xlog_cil_push_work()
1304 * begin the transaction. We need to account for the space used by the in xlog_cil_push_work()
1306 * Add the lvhdr to the head of the lv chain we pass to xlog_write() so in xlog_cil_push_work()
1328 * Grab the ticket from the ctx so we can ungrant it after releasing the in xlog_cil_push_work()
1329 * commit_iclog. The ctx may be freed by the time we return from in xlog_cil_push_work()
1331 * callback run) so we can't reference the ctx after the call to in xlog_cil_push_work()
1338 * to complete before we submit the commit_iclog. We can't use state in xlog_cil_push_work()
1342 * In the latter case, if it's a future iclog and we wait on it, then we in xlog_cil_push_work()
1344 * wakeup until this commit_iclog is written to disk. Hence we use the in xlog_cil_push_work()
1345 * iclog header lsn and compare it to the commit lsn to determine if we in xlog_cil_push_work()
1356 * iclogs older than ic_prev. Hence we only need to wait in xlog_cil_push_work()
1364 * We need to issue a pre-flush so that the ordering for this in xlog_cil_push_work()
1417 * We need to push CIL every so often so we don't cache more than we can fit in
1431 * The cil won't be empty because we are called while holding the in xlog_cil_push_background()
1432 * context lock so whatever we added to the CIL will still be there. in xlog_cil_push_background()
1437 * We are done if: in xlog_cil_push_background()
1438 * - we haven't used up all the space available yet; or in xlog_cil_push_background()
1439 * - we've already queued up a push; and in xlog_cil_push_background()
1440 * - we're not over the hard limit; and in xlog_cil_push_background()
1443 * If so, we don't need to take the push lock as there's nothing to do. in xlog_cil_push_background()
1460 * Drop the context lock now, we can't hold that if we need to sleep in xlog_cil_push_background()
1461 * because we are over the blocking threshold. The push_lock is still in xlog_cil_push_background()
1468 * If we are well over the space limit, throttle the work that is being in xlog_cil_push_background()
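The xlog_cil_push_background() fragments above spell out the early-exit conditions: nothing to do while under the soft space limit, or while a push is already queued and we are still under the hard (blocking) limit. A minimal decision-function sketch; the struct and both names are hypothetical, and the real limits derive from the log size:

```c
#include <stdbool.h>

/* Hypothetical thresholds; the real values derive from the log size. */
struct cil_limits {
	unsigned long space_limit;	/* soft limit: start background push */
	unsigned long blocking_limit;	/* hard limit: throttle committers */
};

/*
 * Decide whether a committing transaction must queue a background
 * push. We are done - no push lock needed - if we haven't used up
 * the available space yet, or if a push is already queued and we
 * are not over the hard limit.
 */
static bool cil_needs_push(unsigned long used, bool push_queued,
			   const struct cil_limits *lim)
{
	if (used < lim->space_limit)
		return false;		/* plenty of room left */
	if (push_queued && used < lim->blocking_limit)
		return false;		/* push already in flight */
	return true;
}
```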
1493 * If the caller is performing a synchronous force, we will flush the workqueue
1498 * If the caller is performing an async push, we need to ensure that the
1499 * checkpoint is fully flushed out of the iclogs when we finish the push. If we
1503 * mechanism. Hence in this case we need to pass a flag to the push work to
1526 * If this is an async flush request, we always need to set the in xlog_cil_push_now()
1535 * If the CIL is empty or we've already pushed the sequence then in xlog_cil_push_now()
1536 * there's no more work that we need to do. in xlog_cil_push_now()
1565 * committed in the current (same) CIL checkpoint, we don't need to write either
1567 * journalled atomically within this checkpoint. As we cannot remove items from
1603 * To do this, we need to format the item, pin it in memory if required and
1604 * account for the space used by the transaction. Once we have done that we
1606 * transaction to the checkpoint context so we carry the busy extents through
1625 * Do all necessary memory allocation before we lock the CIL. in xlog_cil_commit()
1650 * This needs to be done before we drop the CIL context lock because we in xlog_cil_commit()
1652 * to disk. If we don't, then the CIL checkpoint can race with us and in xlog_cil_commit()
1653 * we can run checkpoint completion before we've updated and unlocked in xlog_cil_commit()
1695 * We only need to push if we haven't already pushed the sequence number given.
1696 * Hence the only time we will trigger a push here is if the push sequence is
1699 * We return the current commit lsn to allow the callers to determine if a
1718 * check to see if we need to force out the current context. in xlog_cil_force_seq()
1726 * See if we can find a previous sequence still committing. in xlog_cil_force_seq()
1727 * We need to wait for all previous sequence commits to complete in xlog_cil_force_seq()
1734 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_force_seq()
1759 * Hence by the time we have got here our sequence may not have been in xlog_cil_force_seq()
1765 * Hence if we don't find the context in the committing list and the in xlog_cil_force_seq()
1769 * it means we haven't yet started the push, because if it had started in xlog_cil_force_seq()
1770 * we would have found the context on the committing list. in xlog_cil_force_seq()
1782 * We detected a shutdown in progress. We need to trigger the log force in xlog_cil_force_seq()
1784 * we are already in a shutdown state. Hence we can't return in xlog_cil_force_seq()
1786 * LSN is already stable), so we return a zero LSN instead. in xlog_cil_force_seq()
1796 * We have to lock the CIL context here to ensure that nothing is modifying
1798 * the CIL context lock, so grabbing that exclusively here will ensure we can