Lines matching full:buffers (all hits below are comment lines in fs/buffer.c)
82 * Returns whether the folio has dirty or writeback buffers. If all the buffers
84 * any of the buffers are locked, it is assumed they are locked for IO.
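
    A minimal sketch of using this check, assuming the folio-based
    buffer_check_dirty_writeback() signature (the caller is hypothetical):

        #include <linux/buffer_head.h>

        /* Hypothetical: should writeback still care about this folio? */
        static bool demo_folio_busy(struct folio *folio)
        {
                bool dirty = false, writeback = false;

                buffer_check_dirty_writeback(folio, &dirty, &writeback);
                return dirty || writeback;
        }
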
180 * But it's the page lock which protects the buffers. To get around this,
222 /* we might be here because some of the buffers on this page are in __find_get_block_slow()
225 * elsewhere, don't buffer_error if we had some unmapped buffers in __find_get_block_slow()
285 * If all of the buffers are uptodate then we can set the page in end_buffer_async_read()
385 * If a page's buffers are under async read-in (end_buffer_async_read
387 * control could lock one of the buffers after it has completed
388 * but while some of the other buffers have not completed. This
393 * The page comes unlocked when it has no locked buffer_async buffers
397 * the buffers.
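
    A simplified paraphrase of that completion accounting, assuming the
    per-buffer b_uptodate_lock of recent kernels (not the exact fs/buffer.c
    code; demo_end_async_read is hypothetical):

        #include <linux/buffer_head.h>

        static void demo_end_async_read(struct buffer_head *bh,
                                        struct folio *folio)
        {
                struct buffer_head *first = folio_buffers(folio);
                struct buffer_head *tmp;
                unsigned long flags;
                bool uptodate = true;

                spin_lock_irqsave(&first->b_uptodate_lock, flags);
                if (!buffer_uptodate(bh))
                        uptodate = false;
                clear_buffer_async_read(bh);
                unlock_buffer(bh);
                for (tmp = bh->b_this_page; tmp != bh; tmp = tmp->b_this_page) {
                        if (!buffer_uptodate(tmp))
                                uptodate = false;
                        if (buffer_async_read(tmp)) {
                                /* another buffer still under I/O: folio stays locked */
                                spin_unlock_irqrestore(&first->b_uptodate_lock, flags);
                                return;
                        }
                }
                spin_unlock_irqrestore(&first->b_uptodate_lock, flags);

                if (uptodate)
                        folio_mark_uptodate(folio);
                folio_unlock(folio);    /* no locked buffer_async buffers remain */
        }
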
434 * management of a list of dependent buffers at ->i_mapping->private_list.
436 * Locking is a little subtle: try_to_free_buffers() will remove buffers
439 * at the time, not against the S_ISREG file which depends on those buffers.
441 * which backs the buffers. Which is different from the address_space
442 * against which the buffers are listed. So for a particular address_space,
447 * Which introduces a requirement: all buffers on an address_space's
450 * address_spaces which do not place buffers at ->private_list via these
461 * mark_buffer_dirty_fsync() to clearly define why those buffers are being
468 * that buffers are taken *off* the old inode's list when they are freed
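
    In practice a filesystem opts in to this machinery simply by using
    mark_buffer_dirty_inode() instead of mark_buffer_dirty() for metadata
    that a particular file depends on (the helper below is hypothetical):

        #include <linux/buffer_head.h>

        /* Hypothetical: dirty an indirect block and put it on the
         * S_ISREG inode's ->i_mapping->private_list, so a later fsync
         * of the file will find and write it. */
        static void demo_dirty_indirect(struct buffer_head *bh,
                                        struct inode *inode)
        {
                mark_buffer_dirty_inode(bh, inode);
        }
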
495 * as you dirty the buffers, and then use osync_inode_buffers to wait for
496 * completion. Any other dirty buffers which are not yet queued for
531 * sync_mapping_buffers - write out & wait upon a mapping's "associated" buffers
532 * @mapping: the mapping which wants those buffers written
534 * Starts I/O against the buffers at mapping->private_list, and waits upon
538 * @mapping is a file or directory which needs those buffers to be written for
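
    A sketch of the fsync path this is meant for (demo_fsync is
    illustrative; sync_mapping_buffers() and file_write_and_wait_range()
    are the real APIs):

        #include <linux/buffer_head.h>
        #include <linux/fs.h>

        static int demo_fsync(struct file *file, loff_t start, loff_t end,
                              int datasync)
        {
                struct inode *inode = file_inode(file);
                int err = file_write_and_wait_range(file, start, end);
                int err2 = sync_mapping_buffers(inode->i_mapping);

                return err ? err : err2;
        }
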
597 * If the page has buffers, the uptodate buffers are set dirty, to preserve
598 * dirty-state coherency between the page and the buffers. If the page does
599 * not have buffers then when they are later attached they will all be set
602 * The buffers are dirtied before the page is dirtied. There's a small race
605 * before the buffers, a concurrent writepage caller could clear the page dirty
606 * bit, see a bunch of clean buffers and we'd end up with dirty buffers/clean
610 * page's buffer list. Also use this to protect against clean buffers being
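
    Filesystems normally get this behaviour by wiring the helper straight
    into their address_space_operations; a minimal sketch (the aops table
    itself is hypothetical):

        #include <linux/buffer_head.h>
        #include <linux/fs.h>

        static const struct address_space_operations demo_aops = {
                .dirty_folio = block_dirty_folio,
        };
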
652 * Write out and wait upon a list of buffers.
655 * initially dirty buffers get waited on, but that any subsequently
656 * dirtied buffers don't. After all, we don't want fsync to last
659 * Do this in two main stages: first we copy dirty buffers to a
667 * the osync code to catch these locked, dirty buffers without requeuing
668 * any newly dirty buffers for write.
750 * Invalidate any and all dirty buffers on a given inode. We are
752 * done a sync(). Just drop the buffers from the inode list.
755 * assumes that all the buffers are against the blockdev. Not true
774 * Remove any clean buffers from the inode's buffer list. This is called
775 * when we're trying to free the inode itself. Those buffers can pin it.
777 * Returns true if all buffers were removed.
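
    A sketch of an eviction path using these helpers (demo_evict_inode is
    hypothetical; the three callees are real):

        #include <linux/fs.h>
        #include <linux/mm.h>
        #include <linux/buffer_head.h>

        static void demo_evict_inode(struct inode *inode)
        {
                truncate_inode_pages_final(&inode->i_data);
                /* drop associated buffers so they cannot pin the inode */
                invalidate_inode_buffers(inode);
                clear_inode(inode);
        }
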
803 * Create the appropriate buffers when given a page for the data area and
805 * follow the buffers created. Return NULL if unable to create more
806 * buffers.
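
    A sketch, assuming the alloc_page_buffers(page, size, retry) signature
    of this kernel's era (demo_page_buffers is hypothetical):

        #include <linux/buffer_head.h>

        /* Allocate one buffer_head per block for the page, looping in
         * the allocator until it succeeds (retry == true). */
        static struct buffer_head *demo_page_buffers(struct page *page,
                                                     unsigned long blocksize)
        {
                return alloc_page_buffers(page, blocksize, true);
        }
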
888 * Initialise the state of a blockdev page's buffers.
963 * Allocate some buffers for this page in grow_dev_page()
968 * Link the page to the buffers and initialise them. Take the in grow_dev_page()
986 * Create buffers for the specified block device block's page. If
987 * that page was dirty, the buffers are set dirty also.
1010 /* Create a page with the proper size buffers. */ in grow_buffers()
1045 * The relationship between dirty buffers and dirty pages:
1047 * Whenever a page has any dirty buffers, the page's dirty bit is set, and
1050 * At all times, the dirtiness of the buffers represents the dirtiness of
1051 * subsections of the page. If the page has buffers, the page dirty bit is
1054 * When a page is set dirty in its entirety, all its buffers are marked dirty
1055 * (if the page has buffers).
1058 * buffers are not.
1060 * Also, when blockdev buffers are explicitly read with bread(), they
1062 * uptodate - even if all of its buffers are uptodate. A subsequent
1064 * buffers, will set the folio uptodate and will perform no I/O.
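
    The bread() behaviour described above is what an ordinary metadata
    read sees (sb_bread() and brelse() are the real APIs; the caller is
    illustrative):

        #include <linux/buffer_head.h>

        static int demo_read_block(struct super_block *sb, sector_t blocknr)
        {
                struct buffer_head *bh = sb_bread(sb, blocknr);

                if (!bh)
                        return -EIO;
                /* bh is uptodate here, but its backing folio need not be */
                /* ... use bh->b_data ... */
                brelse(bh);
                return 0;
        }
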
1133 * Decrement a buffer_head's reference count. If all buffers against a page
1135 * and unlocked then try_to_free_buffers() may strip the buffers from the page
1136 * in preparation for freeing it (sometimes, rarely, buffers are removed from
1137 * a page but it ends up not being freed, and buffers may later be reattached).
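
    The reference-count discipline in practice (a hypothetical helper;
    get_bh()/put_bh() and the buffer lock are the real APIs):

        #include <linux/buffer_head.h>

        static void demo_touch_buffer(struct buffer_head *bh)
        {
                get_bh(bh);     /* pinned: try_to_free_buffers() skips it */
                lock_buffer(bh);
                /* ... inspect or modify bh->b_data ... */
                unlock_buffer(bh);
                put_bh(bh);     /* the page may become freeable again */
        }
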
1188 * The bhs[] array is sorted - newest buffer is at bhs[0]. Buffers have their
1477 * block_invalidate_folio() does not have to release all buffers, but it must
1521 * We release buffers only if the entire folio is being invalidated. in block_invalidate_folio()
1534 * We attach and possibly dirty the buffers atomically wrt
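
    A sketch of the full-folio case, which is the one that may release the
    buffers (demo_invalidate_all is hypothetical):

        #include <linux/buffer_head.h>

        static void demo_invalidate_all(struct folio *folio)
        {
                /* offset 0, full length: the buffers may be stripped too */
                block_invalidate_folio(folio, 0, folio_size(folio));
        }
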
1569 * clean_bdev_aliases: clean a range of buffers in block device
1570 * @bdev: Block device to clean buffers in
1584 * writeout I/O going on against recently-freed buffers. We don't wait on that
1610 * to pin buffers here since we can afford to sleep and in clean_bdev_aliases()
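
    Filesystems call this right after allocating blocks that may still
    have stale blockdev buffer_heads attached (a hypothetical wrapper;
    clean_bdev_aliases() is the real export):

        #include <linux/buffer_head.h>

        static void demo_clean_new_extent(struct block_device *bdev,
                                          sector_t first_block, sector_t nr)
        {
                clean_bdev_aliases(bdev, first_block, nr);
        }
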
1679 * While block_write_full_page is writing back the dirty buffers under
1680 * the page lock, whoever dirtied the buffers may decide to clean them
1711 * here, and the (potentially unmapped) buffers may become dirty at in __block_write_full_page()
1715 * Buffers outside i_size may be dirtied by block_dirty_folio; in __block_write_full_page()
1727 * Get all the dirty buffers mapped to disk addresses and in __block_write_full_page()
1733 * mapped buffers outside i_size will occur, because in __block_write_full_page()
1783 * The page and its buffers are protected by PageWriteback(), so we can in __block_write_full_page()
1803 * The page was marked dirty, but the buffers were in __block_write_full_page()
1824 /* Recovery: lock and submit the mapped buffers */ in __block_write_full_page()
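
    Filesystems reach __block_write_full_page() through
    block_write_full_page(); a minimal ->writepage of this era
    (demo_get_block is an assumed get_block_t callback):

        #include <linux/buffer_head.h>
        #include <linux/writeback.h>

        static int demo_get_block(struct inode *inode, sector_t iblock,
                                  struct buffer_head *bh_result, int create);

        static int demo_writepage(struct page *page,
                                  struct writeback_control *wbc)
        {
                return block_write_full_page(page, demo_get_block, wbc);
        }
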
1857 * If a page has any new buffers, zero them out here, and mark them uptodate
2075 * If this is a partial write which happened to make all buffers in __block_commit_write()
2125 * The buffers that were written will now be uptodate, so in block_write_end()
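
    The commit step is what ->write_end sees; a sketch built on
    block_write_end() (names other than the callees are assumed, and
    unlike generic_write_end() this omits the i_size update):

        #include <linux/buffer_head.h>
        #include <linux/pagemap.h>

        static int demo_write_end(struct file *file,
                                  struct address_space *mapping,
                                  loff_t pos, unsigned len, unsigned copied,
                                  struct page *page, void *fsdata)
        {
                int ret = block_write_end(file, mapping, pos, len, copied,
                                          page, fsdata);

                unlock_page(page);
                put_page(page);
                return ret;
        }
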
2190 * block_is_partially_uptodate checks whether buffers within a folio are
2193 * Returns true if all buffers which correspond to the specified part
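
    Usually exposed to the VFS verbatim through the aops table, so reads
    of an already-uptodate sub-range can skip I/O (hypothetical table):

        #include <linux/buffer_head.h>
        #include <linux/fs.h>

        static const struct address_space_operations demo_partial_aops = {
                .is_partially_uptodate = block_is_partially_uptodate,
        };
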
2299 * All buffers are uptodate - we can set the folio uptodate in block_read_full_folio()
2308 /* Stage two: lock the buffers */ in block_read_full_folio()
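
    A minimal ->read_folio built on this helper (demo_get_block is an
    assumed get_block_t callback, as in the writepage sketch above):

        #include <linux/buffer_head.h>

        static int demo_get_block(struct inode *inode, sector_t iblock,
                                  struct buffer_head *bh_result, int create);

        static int demo_read_folio(struct file *file, struct folio *folio)
        {
                return block_read_full_folio(folio, demo_get_block);
        }
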
2763 * try_to_free_buffers() checks if all the buffers on this particular folio
2769 * If the folio is dirty but all the buffers are clean then we need to
2771 * may be against a block device, and a later reattachment of buffers
2772 * to a dirty folio will set *all* buffers dirty. Which would corrupt
2775 * The same applies to regular filesystem folios: if all the buffers are
2834 * If the filesystem writes its buffers by hand (eg ext3) in try_to_free_buffers()
2835 * then we can have clean buffers against a dirty folio. We in try_to_free_buffers()
2840 * the folio's buffers clean. We discover that here and clean in try_to_free_buffers()
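
    A filesystem with no journalling constraints can route folio release
    straight to the helper (a hypothetical ->release_folio; the gfp
    argument is unused in this sketch):

        #include <linux/buffer_head.h>

        static bool demo_release_folio(struct folio *folio, gfp_t gfp)
        {
                /* false if any buffer is dirty, locked or still referenced */
                return try_to_free_buffers(folio);
        }
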
2983 * __bh_read_batch - Submit read for a batch of unlocked buffers
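
    A sketch of batched readahead over unlocked buffers, assuming the
    bh_readahead_batch() wrapper of the same era (it passes
    force_lock=false, so already-locked buffers are simply skipped):

        #include <linux/buffer_head.h>

        static void demo_readahead(struct buffer_head *bhs[], int nr)
        {
                bh_readahead_batch(nr, bhs, REQ_RAHEAD);
        }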