Lines Matching full:descriptor
28 * The internal state information of a descriptor is the key element to allow
34 * Descriptor Ring
36 * The descriptor ring is an array of descriptors. A descriptor contains
39 * "Data Rings" below). Each descriptor is assigned an ID that maps
40 * directly to index values of the descriptor array and has a state. The ID
41 * and the state are bitwise combined into a single descriptor field named
52 * descriptor (transitioning it back to reserved), but in the committed
57 * writer cannot reopen the descriptor.
64 * descriptor to query. This can yield a possible fifth (pseudo) state:
67 * The descriptor being queried has an unexpected ID.
69 * The descriptor ring has a @tail_id that contains the ID of the oldest
70 * descriptor and @head_id that contains the ID of the newest descriptor.
72 * When a new descriptor should be created (and the ring is full), the tail
73 * descriptor is invalidated by first transitioning to the reusable state and
75 * associated with the tail descriptor (for the text ring). Then
77 * @state_var of the new descriptor is initialized to the new ID and reserved
84 * Descriptor Finalization
115 * the identifier of a descriptor that is associated with the data block. A
119 * 1) The descriptor associated with the data block is in the committed
122 * 2) The blk_lpos struct within the descriptor associated with the data
142 * descriptor. If a data block is not valid, the @tail_lpos cannot be
148 * stored in an array with the same number of elements as the descriptor ring.
149 * Each info corresponds to the descriptor of the same index in the
150 * descriptor ring. Info validity is confirmed by evaluating the corresponding
151 * descriptor before and after loading the info.
272 * push descriptor tail (id), then push descriptor head (id)
275 * push data tail (lpos), then set new descriptor reserved (state)
278 * push descriptor tail (id), then set new descriptor reserved (state)
281 * push descriptor tail (id), then set new descriptor reserved (state)
284 * set new descriptor id and reserved (state), then allow writer changes
287 * set old descriptor reusable (state), then modify new data block area
293 * store writer changes, then set new descriptor committed (state)
296 * set descriptor reserved (state), then read descriptor data
299 * set new descriptor committed (state), then check descriptor head (id)
302 * set descriptor reusable (state), then push data tail (lpos)
305 * set descriptor reusable (state), then push descriptor tail (id)
340 * @id: the ID of the associated descriptor
344 * descriptor.
352 * Return the descriptor associated with @n. @n can be either a
353 * descriptor ID or a sequence number.
362 * descriptor ID or a sequence number.
413 /* Query the state of a descriptor. */
424 * Get a copy of a specified descriptor and return its queried state. If the
425 * descriptor is in an inconsistent state (miss or reserved), the caller can
426 * only expect the descriptor's @state_var field to be valid.
429 * non-state_var data, they are only valid if the descriptor is in a
442 /* Check the descriptor state. */ in desc_read()
447 * The descriptor is in an inconsistent state. Set at least in desc_read()
455 * Guarantee the state is loaded before copying the descriptor in desc_read()
456 * content. This avoids copying obsolete descriptor content that might in desc_read()
457 * not apply to the descriptor state. This pairs with _prb_commit:B. in desc_read()
473 * Copy the descriptor data. The data is not valid until the in desc_read()
485 * 1. Guarantee the descriptor content is loaded before re-checking in desc_read()
486 * the state. This avoids reading an obsolete descriptor state in desc_read()
502 * state. This avoids reading an obsolete descriptor state that may in desc_read()
525 * The data has been copied. Return the current descriptor state, in desc_read()
536 * Take a specified descriptor out of the finalized state by attempting
553 * Given the text data ring, put the associated descriptor of each
556 * If there is any problem making the associated descriptor reusable, either
557 * the descriptor has not yet been finalized or another writer context has
582 * area. If the loaded value matches a valid descriptor ID, in data_make_reusable()
583 * the blk_lpos of that descriptor will be checked to make in data_make_reusable()
599 * This data block is invalid if the descriptor in data_make_reusable()
608 * This data block is invalid if the descriptor in data_make_reusable()
641 * Any descriptor states that have transitioned to reusable due to the in data_push_tail()
664 * sees the new tail lpos, any descriptor states that transitioned to in data_push_tail()
701 * 2. Guarantee the descriptor state loaded in in data_push_tail()
705 * recycled descriptor causing the tail lpos to in data_push_tail()
741 * Guarantee any descriptor states that have transitioned to in data_push_tail()
744 * the descriptor states reusable. This pairs with in data_push_tail()
758 * descriptor, thus invalidating the oldest descriptor. Before advancing
759 * the tail, the tail descriptor is made reusable and all data blocks up to
760 * and including the descriptor's data block are invalidated (i.e. the data
761 * ring tail is pushed past the data block of the descriptor being made
787 * tail and recycled the descriptor already. Success is in desc_push_tail()
804 * descriptor can be made available for recycling. Invalidating in desc_push_tail()
806 * data blocks once their associated descriptor is gone. in desc_push_tail()
813 * Check the next descriptor after @tail_id before pushing the tail in desc_push_tail()
817 * A successful read implies that the next descriptor is less than or in desc_push_tail()
826 * Guarantee any descriptor states that have transitioned to in desc_push_tail()
828 * verifying the recycled descriptor state. A full memory in desc_push_tail()
830 * descriptor states reusable. This pairs with desc_reserve:D. in desc_push_tail()
838 * case that the descriptor has been recycled. This pairs in desc_push_tail()
860 * Re-check the tail ID. The descriptor following @tail_id is in desc_push_tail()
871 /* Reserve a new descriptor, invalidating the oldest if necessary. */
914 * Make space for the new descriptor by in desc_reserve()
923 * recycled descriptor state. A read memory barrier is in desc_reserve()
946 * recycling the descriptor. Data ring tail changes can in desc_reserve()
953 * descriptor. A full memory barrier is needed since in desc_reserve()
959 * finalize the previous descriptor. This pairs with in desc_reserve()
968 * If the descriptor has been recycled, verify the old state val. in desc_reserve()
979 * Assign the descriptor a new ID and set its state to reserved. in desc_reserve()
982 * Guarantee the new descriptor ID and state are stored before making in desc_reserve()
1019 * a specified descriptor.
1051 * 1. Guarantee any descriptor states that have transitioned in data_alloc()
1054 * since other CPUs may have made the descriptor states in data_alloc()
1091 * Try to resize an existing data block associated with the descriptor
1252 * Attempt to transition the newest descriptor from committed back to reserved
1254 * if the descriptor is not yet finalized and the provided @caller_id matches.
1269 * To reduce unnecessary reopening, first check if the descriptor in desc_reopen_last()
1358 /* Transition the newest descriptor back to the reserved state. */ in prb_reserve_in_last()
1378 * exclusive access at that point. The descriptor may have in prb_reserve_in_last()
1442 * Attempt to finalize a specified descriptor. If this fails, the descriptor
1500 /* Descriptor reservation failures are tracked. */ in prb_reserve()
1543 * previous descriptor now so that it can be made available to in prb_reserve()
1544 * readers. (For seq==0 there is no previous descriptor.) in prb_reserve()
1579 * Set the descriptor as committed. See "ABA Issues" about why in _prb_commit()
1582 * 1. Guarantee all record data is stored before the descriptor state in _prb_commit()
1586 * 2. Guarantee the descriptor state is stored as committed before in _prb_commit()
1588 * descriptor. This pairs with desc_reserve:D. in _prb_commit()
1634 * If this descriptor is no longer the head (i.e. a new record has in prb_commit()
1735 * descriptor. However, it also verifies that the record is finalized and has
1756 * does not exist. A descriptor in the reserved or committed state in desc_read_finalized_seq()
1767 * A descriptor in the reusable state may no longer have its data in desc_read_finalized_seq()
1796 /* Extract the ID, used to specify the descriptor to read. */ in prb_read()
1799 /* Get a local copy of the correct descriptor (if available). */ in prb_read()
1823 /* Get the sequence number of the tail descriptor. */
1847 * that the descriptor has been recycled. This pairs with in prb_first_seq()
2010 /* Search forward from the oldest descriptor. */ in prb_next_seq()
2023 * @descs: The descriptor buffer for ringbuffer records.