Lines Matching full:we

31 	/* Make sure we have enough space to handle the data first */  in btrfs_alloc_data_chunk_ondemand()
39 * If we don't have enough free bytes in this space then we need in btrfs_alloc_data_chunk_ondemand()
50 * It is ugly that we don't call nolock join in btrfs_alloc_data_chunk_ondemand()
52 * But it is safe because we only do the data space in btrfs_alloc_data_chunk_ondemand()
79 * If we don't have enough pinned space to deal with this in btrfs_alloc_data_chunk_ondemand()
113 * more space is released. We don't need to in btrfs_alloc_data_chunk_ondemand()
163 * Called if we need to clear a data reservation for this inode
167 * in which we can't sleep and are sure it won't affect qgroup reserved space.
188 * Called if we need to clear a data reservation for this inode
210 * @inode - the inode we need to release from.
212 * Unlike normal operation, qgroup meta reservation needs to know if we are
227 * Since we statically set the block_rsv->size we just want to say we in btrfs_inode_rsv_release()
228 * are releasing 0 bytes, and then we'll just get the reservation over in btrfs_inode_rsv_release()
313 * If we are a free space inode we need to not flush since we will be in in btrfs_delalloc_reserve_metadata()
314 * the middle of a transaction commit. We also don't need the delalloc in btrfs_delalloc_reserve_metadata()
315 * mutex since we won't race with anybody. We need this mostly to make in btrfs_delalloc_reserve_metadata()
318 * If we have a transaction open (can happen if we call truncate_block in btrfs_delalloc_reserve_metadata()
319 * from truncate), then we need FLUSH_LIMIT so we don't deadlock. in btrfs_delalloc_reserve_metadata()
338 * We always want to do it this way, every other way is wrong and ends in btrfs_delalloc_reserve_metadata()
339 * in tears. Pre-reserving the amount we are going to add will always in btrfs_delalloc_reserve_metadata()
340 * be the right way, because otherwise if we have enough parallelism we in btrfs_delalloc_reserve_metadata()
344 * everything out and try again, which is bad. This way we just in btrfs_delalloc_reserve_metadata()
345 * over-reserve slightly, and clean up the mess when we are done. in btrfs_delalloc_reserve_metadata()
357 * Now we need to update our outstanding extents and csum bytes _first_ in btrfs_delalloc_reserve_metadata()
360 * needs to free the reservation we just made. in btrfs_delalloc_reserve_metadata()
369 /* Now we can safely add our space to our block rsv */ in btrfs_delalloc_reserve_metadata()
392 * @num_bytes: the number of bytes we are releasing.
396 * once we complete IO for a given set of bytes to release their metadata
419 * @num_bytes: the number of bytes we originally reserved with
421 * When we reserve space we increase outstanding_extents for the extents we may
422 * add. Once we've set the range as delalloc or created our ordered extents we
423 * have outstanding_extents to track the real usage, so we use this to free our
447 * @inode: inode we're writing to
448 * @start: the start of the range we are writing to
449 * @len: the length of the range we are writing
485 * @inode: inode we're releasing space for
488 * @release_bytes: the length of the space we consumed or didn't use