Lines Matching full:we

89  * We need to make sure the buffer pointer returned is naturally aligned for the
90 * biggest basic data type we put into it. We have already accounted for this
93 * However, this padding does not get written into the log, and hence we have to
98 * We also add space for the xlog_op_header that describes this region in the
99 * log. This prepends the data region we return to the caller to copy their data
101 * is not 8 byte aligned, we have to be careful to ensure that we align the
102 * start of the buffer such that the region we return to the caller is 8 byte
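
A minimal sketch of the alignment rule these fragments (lines 89-102) describe; OP_HDR_SIZE and the helper names here are illustrative stand-ins, not the kernel's own:

    #include <stdint.h>
    #include <stdio.h>

    #define OP_HDR_SIZE 12u    /* assumed size of the unaligned op header */

    /* Round v up to the next 8-byte boundary. */
    static uint32_t round_up_8(uint32_t v)
    {
        return (v + 7u) & ~7u;
    }

    /*
     * Place the op header so that the data region handed back to the
     * caller starts 8-byte aligned, packed against the header's tail.
     */
    static uint32_t data_region_offset(uint32_t buf_offset)
    {
        return round_up_8(buf_offset + OP_HDR_SIZE);
    }

    int main(void)
    {
        printf("%u\n", data_region_offset(100));  /* 112 */
        printf("%u\n", data_region_offset(101));  /* 120 */
        return 0;
    }
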
256 * Hence when we are woken here, it may be that the head of the in xlog_grant_head_wake()
259 * reservation we require. However, if the AIL has already in xlog_grant_head_wake()
260 * pushed to the target defined by the old log head location, we in xlog_grant_head_wake()
265 * the grant head, we need to push the AIL again to ensure the in xlog_grant_head_wake()
267 * position before we wait for the tail to move again. in xlog_grant_head_wake()
330 * path. Hence any lock will be globally hot if we take it unconditionally on
333 * As tickets are only ever moved on and off head->waiters under head->lock, we
334 * only need to take that lock if we are going to add the ticket to the queue
335 * and sleep. We can avoid taking the lock if the ticket was never added to
336 * head->waiters because the t_queue list head will be empty and we hold the
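
These fragments (lines 330-336) describe skipping head->lock entirely when a ticket was never queued on head->waiters. A minimal pthread sketch of the pattern follows; all names are illustrative, and the recheck under the lock is one conservative way to close the race, not necessarily what xfs_log.c does:

    #include <pthread.h>
    #include <stdbool.h>

    struct list_head { struct list_head *next, *prev; };

    static void list_init(struct list_head *h) { h->next = h->prev = h; }
    static bool list_empty(const struct list_head *h) { return h->next == h; }

    static void list_del_init(struct list_head *e)
    {
        e->prev->next = e->next;
        e->next->prev = e->prev;
        list_init(e);
    }

    struct grant_head {
        pthread_mutex_t lock;          /* stand-in for head->lock */
        struct list_head waiters;
    };

    struct log_ticket {
        struct list_head t_queue;      /* empty unless on head->waiters */
    };

    /*
     * Tickets are only ever queued/dequeued under head->lock, so an
     * empty t_queue means the ticket was never added to waiters and
     * the globally hot lock can be skipped.
     */
    static void ticket_done(struct grant_head *head, struct log_ticket *tic)
    {
        if (list_empty(&tic->t_queue))
            return;                    /* never queued: lock avoided */

        pthread_mutex_lock(&head->lock);
        if (!list_empty(&tic->t_queue))   /* recheck under the lock */
            list_del_init(&tic->t_queue);
        pthread_mutex_unlock(&head->lock);
    }
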
353 * logspace before us. Wake up the first waiters, if we do not wake in xlog_grant_head_check()
415 * This is a new transaction on the ticket, so we need to change the in xfs_log_regrant()
417 * the log. Just add one to the existing tid so that we can see chains in xfs_log_regrant()
442 * If we are failing, make sure the ticket doesn't have any current in xfs_log_regrant()
443 * reservations. We don't want to add this back when the ticket/ in xfs_log_regrant()
455 * When writes happen to the on-disk log, we don't subtract the length of the
457 * reservation, we prevent over-allocation problems.
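
Line 456 is not among the matches, so the middle of this comment is elided; one plausible reading is that a fixed overhead (the log record header) is charged when the reservation is taken and never refunded at write time. A toy sketch of that "waste space to avoid over-allocation" accounting, with illustrative names and sizes:

    #include <stdint.h>

    #define REC_HDR_SIZE 512u          /* assumed log record header size */

    struct reservation {
        uint32_t bytes;                /* space held against the log */
    };

    /* Reserving charges for the payload plus the record header. */
    static void reserve(struct reservation *r, uint32_t payload)
    {
        r->bytes = payload + REC_HDR_SIZE;
    }

    /*
     * Writing subtracts only the payload; the header bytes stay
     * charged ("wasted"), so the accounting never under-estimates
     * what actually lands on disk.
     */
    static void write_payload(struct reservation *r, uint32_t payload)
    {
        r->bytes -= payload;           /* REC_HDR_SIZE intentionally kept */
    }
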
499 * If we are failing, make sure the ticket doesn't have any current in xfs_log_reserve()
500 * reservations. We don't want to add this back when the ticket/ in xfs_log_reserve()
510 * space waiters so they can process the newly set shutdown state. We really
511 * don't care what order we process callbacks here because the log is shut down
512 * and so state cannot change on disk anymore. However, we cannot wake waiters
513 * until the callbacks have been processed because we may be in unmount and
514 * we must ensure that all AIL operations the callbacks perform have completed
515 * before we tear down the AIL.
517 * We avoid processing actively referenced iclogs so that we don't run callbacks
552 * If XLOG_ICL_NEED_FUA is already set on the iclog, we need to ensure that the
555 * within the iclog. We need to ensure that the log tail does not move beyond
564 * the iclog will get zeroed on activation of the iclog after sync, so we
582 * of the tail LSN into the iclog so we guarantee that the log tail does in xlog_state_release_iclog()
583 * not move between the first time we know that the iclog needs to be in xlog_state_release_iclog()
584 * made stable and when we eventually submit it. in xlog_state_release_iclog()
668 * Note: we can't just reject the mount if the validation fails. This in xfs_log_mount()
673 * We can, however, reject mounts for CRC format filesystems, as the in xfs_log_mount()
719 * Initialize the AIL now that we have a log. in xfs_log_mount()
734 * log recovery ignores readonly state and so we need to clear in xfs_log_mount()
759 * Now the log has been fully initialised and we know where our in xfs_log_mount()
760 * space grant counters are, we can initialise the permanent ticket in xfs_log_mount()
781 * If we finish recovery successfully, start the background log work. If we are
782 * not doing recovery, then we have a RO filesystem and we don't need to start
799 * log recovery ignores readonly state and so we need to clear in xfs_log_mount_finish()
805 * During the second phase of log recovery, we need iget and in xfs_log_mount_finish()
808 * of inodes before we're done replaying log items on those in xfs_log_mount_finish()
810 * so that we don't leak the quota inodes if subsequent mount in xfs_log_mount_finish()
813 * We let all inodes involved in redo item processing end up on in xfs_log_mount_finish()
814 * the LRU instead of being evicted immediately so that if we do in xfs_log_mount_finish()
817 * in log recovery failure. We have to evict the unreferenced in xfs_log_mount_finish()
818 * lru inodes after clearing SB_ACTIVE because we don't in xfs_log_mount_finish()
834 * but we do it unconditionally to make sure we're always in a clean in xfs_log_mount_finish()
856 /* Make sure the log is dead if we're returning failure. */ in xfs_log_mount_finish()
892 * have been ordered and callbacks run before we are woken here, hence
918 * Write out an unmount record using the ticket provided. We have to account for
981 * At this point, we're unmounting anyway, so there's no point in in xlog_unmount_write()
1014 * We just write the magic number now since that particular field isn't
1033 * If we think the summary counters are bad, avoid writing the unmount in xfs_log_unmount_write()
1052 * To do this, we first need to shut down the background log work so it is not
1053 * trying to cover the log as we clean up. We then need to unpin all objects in
1054 * the log so we can then flush them out. Once they have completed their IO and
1055 * run the callbacks removing themselves from the AIL, we can cover the log.
1062 * Clear log incompat features since we're quiescing the log. Report in xfs_log_quiesce()
1082 * XBF_ASYNC flag set, so we need to use a lock/unlock pair to wait for in xfs_log_quiesce()
1104 * During unmount, we need to ensure we flush all the dirty metadata objects
1105 * from the AIL so that the log is empty before we write the unmount record to
1106 * the log. Once this is done, we can tear down the AIL and the log.
1143 * Wake up processes waiting for log space after we have moved the log tail.
1175 * Determine if we have a transaction that has gone to disk that needs to be
1178 * we start attempting to cover the log.
1180 * Only if we are then in a state where covering is needed, the caller is
1184 * If there are any items in the AIL or CIL, then we do not want to attempt to
1185 * cover the log as we may be in a situation where there isn't log space
1188 * there's no point in running a dummy transaction at this point because we
1250 * state machine if the log requires covering. Therefore, we must call in xfs_log_cover()
1251 * this function once and use the result until we've issued an sb sync. in xfs_log_cover()
1270 * we found it. in xfs_log_cover()
1283 * We may be holding the log iclog lock upon entering this routine.
1296 * To make sure we always have a valid LSN for the log tail we keep in xlog_assign_tail_lsn_locked()
1330 * wrap the tail, we should blow up. Rather than catch this case here,
1331 * we depend on other ASSERTions in other parts of the code. XXXmiken
1333 * If reservation head is behind the tail, we have a problem. Warn about it,
1337 * shortcut invalidity asserts in this case so that we don't trigger them
1368 * The reservation head is behind the tail. In this case we just want to in xlog_space_left()
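
The wrap-aware free-space calculation hinted at in lines 1330-1368 can be sketched as follows; the (cycle, block) split mirrors how XFS LSNs are encoded, but the function shape and the return-full-size fallback are illustrative:

    #include <stdio.h>

    /* Free blocks between the reservation head and the log tail. */
    static int space_left(int log_blocks, int head_cycle, int head_block,
                          int tail_cycle, int tail_block)
    {
        if (head_cycle == tail_cycle)           /* same pass over the log */
            return log_blocks - (head_block - tail_block);
        if (head_cycle == tail_cycle + 1)       /* head has wrapped once */
            return tail_block - head_block;
        /*
         * Reservation head behind the tail: accounting is corrupt.
         * Warn and return the full log size so forward progress toward
         * shutdown is still possible instead of deadlocking.
         */
        fprintf(stderr, "space_left: head behind tail\n");
        return log_blocks;
    }

    int main(void)
    {
        printf("%d\n", space_left(1000, 5, 200, 5, 50));   /* 850 */
        printf("%d\n", space_left(1000, 6, 40, 5, 700));   /* 660 */
        return 0;
    }
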
1398 * Race to shutdown the filesystem if we see an error. in xlog_ioend_work()
1409 * Drop the lock to signal that we are done. Nothing references the in xlog_ioend_work()
1412 * unlock as we could race with it being freed. in xlog_ioend_work()
1422 * If the filesystem blocksize is too large, we may need to choose a
1455 * Clear the log incompat flags if we have the opportunity.
1457 * This only happens if we're about to log the second dummy transaction as part
1458 * of covering the log and we can get the log incompat feature usage lock.
1481 * Every sync period we need to unpin all items in the AIL and push them to
1482 * disk. If there is nothing dirty, then we might need to cover the log to
1500 * We cannot use an inode here for this - that will push dirty in xfs_log_worker()
1502 * will prevent log covering from making progress. Hence we in xfs_log_worker()
1605 * done this way so that we can use different sizes for machines in xlog_alloc_log()
1682 * Compute the LSN that we'd need to push the log tail towards in order to have
1739 * Push the tail of the log if we need to do so to maintain the free log space
1740 * thresholds set out by xlog_grant_push_threshold. We may need to adopt a
1741 * policy which pushes on an lsn which is further along in the log once we
1742 * reach the high water mark. In this manner, we would be creating a low water
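
Lines 1682-1742 describe aiming the AIL push at an LSN far enough ahead of the tail to free the required space. A toy version of that threshold computation, with illustrative LSN packing:

    #include <stdint.h>

    typedef uint64_t lsn_t;

    #define CYCLE_SHIFT 32

    static lsn_t make_lsn(uint32_t cycle, uint32_t block)
    {
        return ((lsn_t)cycle << CYCLE_SHIFT) | block;
    }

    /* LSN the tail must reach so that need_blocks become free. */
    static lsn_t push_threshold(uint32_t log_blocks, uint32_t tail_cycle,
                                uint32_t tail_block, uint32_t need_blocks)
    {
        uint32_t cycle = tail_cycle;
        uint32_t block = tail_block + need_blocks;

        if (block >= log_blocks) {     /* threshold wraps the physical log */
            block -= log_blocks;
            cycle++;
        }
        return make_lsn(cycle, block);
    }
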
1887 * We lock the iclogbufs here so that we can serialise against I/O in xlog_write_iclog()
1888 * completion during unmount. We might be processing a shutdown in xlog_write_iclog()
1890 * unmount thread, and hence we need to ensure that completes before in xlog_write_iclog()
1891 * tearing down the iclogbufs. Hence we need to hold the buffer lock in xlog_write_iclog()
1897 * It would seem logical to return EIO here, but we rely on in xlog_write_iclog()
1899 * doing it here. We kick off the state machine and unlock in xlog_write_iclog()
1909 * We use REQ_SYNC | REQ_IDLE here to tell the block layer there are more in xlog_write_iclog()
1924 * For external log devices, we also need to flush the data in xlog_write_iclog()
1927 * but it *must* complete before we issue the external log IO. in xlog_write_iclog()
1929 * If the flush fails, we cannot conclude that past metadata in xlog_write_iclog()
1931 * not possible, hence we must shut down with log IO error to in xlog_write_iclog()
1953 * If this log buffer would straddle the end of the log we will have in xlog_write_iclog()
1954 * to split it up into two bios, so that we can continue at the start. in xlog_write_iclog()
1972 * We need to bump cycle number for the part of the iclog that is
2016 * fashion. Previously, we should have moved the current iclog
2020 * to save away the 1st word of each BBSIZE block into the header. We replace
2024 * we can't have part of a 512 byte block written and part not written. By
2025 * tagging each block, we will know which blocks are valid when recovering
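
Lines 2020-2025 describe the classic block-tagging trick: stash the first word of each 512-byte block in the record header and overwrite it with the cycle number, so recovery can tell which sectors of a record actually reached disk. A self-contained sketch, with illustrative structure names:

    #include <stdint.h>
    #include <string.h>

    #define BBSIZE 512
    #define MAX_BLOCKS 64

    struct rec_header {
        uint32_t h_cycle;
        uint32_t h_cycle_data[MAX_BLOCKS];  /* saved first words */
    };

    /* Write side: tag every block with the current cycle number. */
    static void pack_record(struct rec_header *hdr, uint8_t *data,
                            int nblocks, uint32_t cycle)
    {
        for (int i = 0; i < nblocks; i++) {
            uint32_t word;

            memcpy(&word, data + i * BBSIZE, sizeof(word));
            hdr->h_cycle_data[i] = word;    /* stash the real contents */
            memcpy(data + i * BBSIZE, &cycle, sizeof(cycle));
        }
        hdr->h_cycle = cycle;
    }

    /* Recovery side: restore the saved words of fully written blocks. */
    static void unpack_record(const struct rec_header *hdr, uint8_t *data,
                              int nblocks)
    {
        for (int i = 0; i < nblocks; i++)
            memcpy(data + i * BBSIZE, &hdr->h_cycle_data[i],
                   sizeof(uint32_t));
    }
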
2054 * If we have a ticket, account for the roundoff via the ticket in xlog_sync()
2056 * Otherwise, we have to move grant heads directly. in xlog_sync()
2079 /* Do we need to split this write into 2 parts? */ in xlog_sync()
2118 * is done before we tear down these buffers. in xlog_dealloc_log()
2316 * length. We write until we cannot fit a full record into the remaining space
2317 * and then stop. We return the log vector that is to be written that cannot
2336 /* walk the logvec, copying until we run out of space in the iclog */ in xlog_write_partial()
2344 * start recovering from the next opheader it finds. Because we in xlog_write_partial()
2350 * opheader, then we need to start afresh with a new iclog. in xlog_write_partial()
2372 /* If we wrote the whole region, move to the next. */ in xlog_write_partial()
2377 * We now have a partially written iovec, but it can span in xlog_write_partial()
2378 * multiple iclogs so we loop here. First we release the iclog in xlog_write_partial()
2379 * we currently have, then we get a new iclog and add a new in xlog_write_partial()
2380 * opheader. Then we continue copying from where we were until in xlog_write_partial()
2381 * we either complete the iovec or fill the iclog. If we in xlog_write_partial()
2382 * complete the iovec, then we increment the index and go right in xlog_write_partial()
2383 * back to the top of the outer loop. If we fill the iclog, we in xlog_write_partial()
2388 * and get a new one before returning to the outer loop. We must in xlog_write_partial()
2389 * always guarantee that we exit this inner loop with at least in xlog_write_partial()
2391 * iclog, hence we cannot just terminate the loop at the end in xlog_write_partial()
2392 * of the continuation. So we loop while there is no in xlog_write_partial()
2398 * Ensure we include the continuation opheader in the in xlog_write_partial()
2399 * space we need in the new iclog by adding that size in xlog_write_partial()
2400 * to the length we require. This continuation opheader in xlog_write_partial()
2402 * consumes hasn't been accounted to the lv we are in xlog_write_partial()
2424 * continuation. Otherwise we're going around again. in xlog_write_partial()
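
The inner loop lines 2336-2424 describe — copying one region across however many iclogs it takes, prepending a continuation opheader whenever a fresh iclog is started — can be sketched like this; buffer sizes, names, and the stubbed get_next_iclog() are all illustrative:

    #include <stdint.h>
    #include <string.h>

    #define ICLOG_SPACE 256u           /* toy iclog data space */
    #define OP_HDR_SIZE 12u            /* toy opheader size */

    struct iclog { uint8_t buf[ICLOG_SPACE]; uint32_t offset; };

    /* Stub: release the full iclog and hand back a fresh one. */
    static struct iclog pool[8];
    static int pool_idx;
    static struct iclog *get_next_iclog(struct iclog *ic)
    {
        (void)ic;                      /* a real log would submit it here */
        return &pool[++pool_idx % 8];
    }

    /* Copy one region, spilling into new iclogs as each one fills. */
    static struct iclog *write_region(struct iclog *ic,
                                      const uint8_t *data, uint32_t len)
    {
        uint32_t copied = 0;

        while (copied < len) {
            uint32_t space = ICLOG_SPACE - ic->offset;

            /*
             * Always leave room for a continuation opheader plus at
             * least one byte of data; an opheader that describes
             * nothing would confuse recovery.
             */
            if (space <= OP_HDR_SIZE) {
                ic = get_next_iclog(ic);
                continue;
            }
            if (copied > 0) {          /* continuations get a new header */
                ic->offset += OP_HDR_SIZE;
                space -= OP_HDR_SIZE;
            }

            uint32_t n = len - copied < space ? len - copied : space;
            memcpy(ic->buf + ic->offset, data + copied, n);
            ic->offset += n;
            copied += n;
        }
        return ic;
    }
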
2460 * 2. Check whether we violate the ticket's reservation.
2467 * 3. Find out if we can fit entire region into this iclog
2487 * we don't really know exactly how much space will be used. As a result,
2488 * we don't update ic_offset until the end when we know exactly how many
2522 * If we have a context pointer, pass it the first iclog we are in xlog_write()
2541 * We have no iclog to release, so just return in xlog_write()
2554 * We've already been guaranteed that the last writes will fit inside in xlog_write()
2556 * those writes accounted to it. Hence we do not need to update the in xlog_write()
2577 * dummy transaction, we can change state into IDLE (the second time in xlog_state_activate_iclog()
2578 * around). Otherwise we should change the state into NEED a dummy. in xlog_state_activate_iclog()
2579 * We don't need to cover the dummy. in xlog_state_activate_iclog()
2586 * We have two dirty iclogs so start over. This could also be in xlog_state_activate_iclog()
2630 * We go to NEED for any non-covering writes. We go to NEED2 if we just in xlog_covered_state()
2631 * wrote the first covering record (DONE). We go to IDLE if we just in xlog_covered_state()
2700 * transactions can be large enough to span many iclogs. We cannot change the
2703 * will prevent recovery from finding the start of the transaction. Hence we
2707 * We have to do this before we drop the icloglock to ensure we are the only one
2710 * If we are moving the last_sync_lsn forwards, we also need to ensure we kick
2712 * target is bound by the current last_sync_lsn value. Hence if we have a large
2715 * freeing space in the log. Hence once we've updated the last_sync_lsn we
2740 * Return true if we need to stop processing, false to continue to the next
2761 * Now that we have an iclog that is in the DONE_SYNC state, do in xlog_state_iodone_process_iclog()
2762 * one more check here to see if we have chased our tail around. in xlog_state_iodone_process_iclog()
2763 * If this is not the lowest lsn iclog, then we will leave it in xlog_state_iodone_process_iclog()
2775 * in the DONE_SYNC state, we skip the rest and just try to in xlog_state_iodone_process_iclog()
2784 * we ran any callbacks, indicating that we dropped the icloglock. We don't need
2874 * If we got an error, either on the first buffer, or in the case of in xlog_state_done_syncing()
2875 * split log writes, on the second, we shut down the file system and in xlog_state_done_syncing()
2885 * iclog buffer, we wake them all, one will get to do the in xlog_state_done_syncing()
2894 * If the head of the in-core log ring is not (ACTIVE or DIRTY), then we must
2895 * sleep. We wait on the flush queue on the head iclog as that should be
2897 * we will wait here and all new writes will sleep until a sync completes.
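
Lines 2894-2897 describe sleeping on the head iclog until a sync completes. A pthread condition-variable sketch of that wait/wake pairing; the states and names are illustrative:

    #include <pthread.h>

    enum iclog_state { XLOG_ACTIVE, XLOG_DIRTY, XLOG_WANT_SYNC, XLOG_SYNCING };

    struct iclog {
        enum iclog_state state;
        pthread_mutex_t lock;          /* stand-in for l_icloglock */
        pthread_cond_t flush_wait;     /* stand-in for the flush queue */
    };

    /* Block new writers until a sync makes the head reusable. */
    static void wait_for_iclog(struct iclog *head)
    {
        pthread_mutex_lock(&head->lock);
        while (head->state != XLOG_ACTIVE && head->state != XLOG_DIRTY)
            pthread_cond_wait(&head->flush_wait, &head->lock);
        pthread_mutex_unlock(&head->lock);
    }

    /* I/O completion side: mark the head reusable, wake all sleepers. */
    static void iclog_sync_done(struct iclog *head)
    {
        pthread_mutex_lock(&head->lock);
        head->state = XLOG_DIRTY;
        pthread_cond_broadcast(&head->flush_wait);
        pthread_mutex_unlock(&head->lock);
    }
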
2974 * If we are the only one writing to this iclog, sync it to in xlog_state_get_iclog_space()
2975 * disk. We need to do an atomic compare and decrement here to in xlog_state_get_iclog_space()
2988 /* Do we have enough room to write the full amount in the remainder in xlog_state_get_iclog_space()
2989 * of this iclog? Or must we continue a write on the next iclog and in xlog_state_get_iclog_space()
2990 * mark this iclog as completely taken? In the case where we switch in xlog_state_get_iclog_space()
3008 * The first cnt-1 times a ticket goes through here we don't need to move the
3032 /* just return if we still have some of the pre-reserved space */ in xfs_log_ticket_regrant()
3047 * All the information we need to make a correct determination of space left
3049 * count should have been decremented to zero. We only need to deal with the
3053 * reservation can be done before we need to ask for more space. The first
3054 * one goes to fill up the first current reservation. Once we run out of
3073 * If this is a permanent reservation ticket, we may be able to free in xfs_log_ticket_ungrant()
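
Lines 3008-3073 describe permanent tickets carrying multiple reservation units. A toy model of the regrant/ungrant arithmetic; the field names follow the comments, the logic is a hedged simplification:

    struct log_ticket {
        int t_curr_res;     /* bytes left in the current reservation */
        int t_unit_res;     /* size of one reservation unit */
        int t_cnt;          /* whole units still held by the ticket */
    };

    /* Regrant: the first cnt-1 passes just consume a pre-reserved unit. */
    static int regrant_needs_space(struct log_ticket *tic)
    {
        if (tic->t_cnt > 0) {
            tic->t_cnt--;              /* still holding reserved space */
            tic->t_curr_res = tic->t_unit_res;
            return 0;                  /* grant heads need not move */
        }
        return tic->t_unit_res;        /* must reserve a fresh unit */
    }

    /* Ungrant: release unused current bytes plus any whole units left. */
    static int ungrant_bytes(const struct log_ticket *tic)
    {
        int bytes = tic->t_curr_res;

        if (tic->t_cnt > 0)
            bytes += tic->t_unit_res * tic->t_cnt;
        return bytes;
    }
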
3143 * pmem) or fast async storage because we drop the icloglock to issue the IO.
3173 * we don't guarantee this data will be written out. A change from past
3176 * Basically, we try and perform an intelligent scan of the in-core logs.
3177 * If we determine there is no flushable data, we just return. There is no
3185 * We may sleep if:
3193 * b) when we return from flushing out this iclog, it is still
3220 * If the head is dirty or (active and empty), then we need to in xfs_log_force()
3223 * If the previous iclog is active or dirty we are done. There in xfs_log_force()
3224 * is nothing to sync out. Otherwise, we attach ourselves to the in xfs_log_force()
3230 /* We have exclusive access to this iclog. */ in xfs_log_force()
3240 * Someone else is still writing to this iclog, so we in xfs_log_force()
3242 * gets synced immediately as we may be waiting on it. in xfs_log_force()
3249 * The iclog we are about to wait on may contain the checkpoint pushed in xfs_log_force()
3251 * to disk yet. Like the ACTIVE case above, we need to make sure caches in xfs_log_force()
3307 * We sleep here if we haven't already slept (e.g. this is the in xlog_force_lsn()
3308 * first time we've looked at the correct iclog buf) and the in xlog_force_lsn()
3310 * is that if we are doing sync transactions here, by waiting in xlog_force_lsn()
3311 * for the previous I/O to complete, we can allow a few more in xlog_force_lsn()
3312 * transactions into this iclog before we close it down. in xlog_force_lsn()
3314 * Otherwise, we mark the buffer WANT_SYNC, and bump up the in xlog_force_lsn()
3315 * refcnt so we can release the log (which drops the ref count). in xlog_force_lsn()
3340 * ACTIVE case above, we need to make sure caches are flushed in xlog_force_lsn()
3349 * completes, so we don't need to manipulate caches here at all. in xlog_force_lsn()
3350 * We just need to wait for completion if necessary. in xlog_force_lsn()
3371 * a synchronous log force, we will wait on the iclog with the LSN returned by
3448 * We need to account for all the leadup data and trailer data in xlog_calc_unit_res()
3450 * And then we need to account for the worst case in terms of using in xlog_calc_unit_res()
3475 * the space used for the headers. If we use the iclog size, then we in xlog_calc_unit_res()
3487 * Fundamentally, this means we must pass the entire log vector to in xlog_calc_unit_res()
3496 /* add extra header reservations if we overrun */ in xlog_calc_unit_res()
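
Lines 3448-3496 describe sizing a ticket's unit reservation: the payload plus per-region opheaders, plus worst-case record headers for every iclog the write may span, plus slack for overrun. A toy calculation with illustrative constants:

    #define OP_HDR_SIZE   12
    #define REC_HDR_SIZE  512
    #define ICLOG_SIZE    (32 * 1024)

    static int calc_unit_res(int unit_bytes, int num_regions)
    {
        int num_headers;

        /* One opheader prepends each region handed to the caller. */
        unit_bytes += num_regions * OP_HDR_SIZE;

        /* Worst case: a record header per iclog the write spans. */
        num_headers = (unit_bytes + ICLOG_SIZE - 1) / ICLOG_SIZE;
        unit_bytes += num_headers * REC_HDR_SIZE;

        /* Extra header reservation in case the estimate is overrun. */
        unit_bytes += REC_HDR_SIZE;

        return unit_bytes;
    }
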
3557 * the cycles are the same, we can't be overlapping. Otherwise, make sure that
3620 * 2. Make sure we have a good magic number
3621 * 3. Make sure we don't have magic numbers in the data
3736 * Return true if the shutdown cause was a log IO error and we actually shut the
3751 * being shut down. We need to do this first as shutting down the log in xlog_force_shutdown()
3755 * When we are in recovery, there are no transactions to flush, and in xlog_force_shutdown()
3756 * we don't want to touch the log because we don't want to perturb the in xlog_force_shutdown()
3757 * current head/tail for future recovery attempts. Hence we need to in xlog_force_shutdown()
3760 * If we are shutting down due to a log IO error, then we must avoid in xlog_force_shutdown()
3769 * set, then someone else is performing the shutdown and so we are done in xlog_force_shutdown()
3770 * here. This should never happen because we should only ever get called in xlog_force_shutdown()
3774 * cannot change once they hold the log->l_icloglock. Hence we need to in xlog_force_shutdown()
3775 * hold that lock here, even though we use the atomic test_and_set_bit() in xlog_force_shutdown()
3800 * We don't want anybody waiting for log reservations after this. That in xlog_force_shutdown()
3801 * means we have to wake up everybody queued up on reserveq as well as in xlog_force_shutdown()
3802 * writeq. In addition, we make sure in xlog_{re}grant_log_space that in xlog_force_shutdown()
3803 * we don't enqueue anything once the SHUTDOWN flag is set, and this in xlog_force_shutdown()
3860 * resets the in-core LSN. We can't validate in this mode, but in xfs_log_check_lsn()
3890 * Notify the log that we're about to start using a feature that is protected
3901 /* Notify the log that we've finished using log incompat features. */