Lines Matching full:descriptor

28  * The internal state information of a descriptor is the key element to allow
34 * Descriptor Ring
36 * The descriptor ring is an array of descriptors. A descriptor contains
39 * "Data Rings" below). Each descriptor is assigned an ID that maps
40 * directly to index values of the descriptor array and has a state. The ID
41 * and the state are bitwise combined into a single descriptor field named
52 * descriptor (transitioning it back to reserved), but in the committed
57 * writer cannot reopen the descriptor.
64 * descriptor to query. This can yield a possible fifth (pseudo) state:
67 * The descriptor being queried has an unexpected ID.
69 * The descriptor ring has a @tail_id that contains the ID of the oldest
70 * descriptor and @head_id that contains the ID of the newest descriptor.
72 * When a new descriptor should be created (and the ring is full), the tail
73 * descriptor is invalidated by first transitioning to the reusable state and
75 * associated with the tail descriptor (for the text ring). Then
77 * @state_var of the new descriptor is initialized to the new ID and reserved
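The ID/state packing into a single @state_var word described above can be sketched as follows. This is a minimal illustration, not the kernel's exact definitions: the 2-bit state field, the uint64_t width, and the helper names (desc_sv(), desc_id(), desc_state_of()) are assumptions.

```c
#include <assert.h>
#include <stdint.h>

enum desc_state {
	desc_miss      = -1,	/* ID mismatch (pseudo state) */
	desc_reserved  = 0x0,	/* reserved by a writer */
	desc_committed = 0x1,	/* committed by a writer, may be reopened */
	desc_finalized = 0x2,	/* committed, no further modification allowed */
	desc_reusable  = 0x3,	/* free, may be recycled */
};

#define DESC_SV_BITS		(sizeof(uint64_t) * 8)
#define DESC_FLAGS_SHIFT	(DESC_SV_BITS - 2)
#define DESC_FLAGS_MASK		(3ULL << DESC_FLAGS_SHIFT)
#define DESC_ID_MASK		(~DESC_FLAGS_MASK)

/* Combine a descriptor ID and a state into a single state_var value. */
static inline uint64_t desc_sv(uint64_t id, enum desc_state state)
{
	return ((uint64_t)state << DESC_FLAGS_SHIFT) | (id & DESC_ID_MASK);
}

/* Extract the ID from a state_var value. */
static inline uint64_t desc_id(uint64_t sv)
{
	return sv & DESC_ID_MASK;
}

/* Extract the state from a state_var value. */
static inline enum desc_state desc_state_of(uint64_t sv)
{
	return (enum desc_state)(sv >> DESC_FLAGS_SHIFT);
}
```

Because the ID and the state live in one word, a single atomic compare-and-exchange can verify the ID and change the state at the same time.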
84 * Descriptor Finalization
115 * the identifier of a descriptor that is associated with the data block. A
119 * 1) The descriptor associated with the data block is in the committed
122 * 2) The blk_lpos struct within the descriptor associated with the data
142 * descriptor. If a data block is not valid, the @tail_lpos cannot be
148 * stored in an array with the same number of elements as the descriptor ring.
149 * Each info corresponds to the descriptor of the same index in the
150 * descriptor ring. Info validity is confirmed by evaluating the corresponding
151 * descriptor before and after loading the info.
272 * push descriptor tail (id), then push descriptor head (id)
275 * push data tail (lpos), then set new descriptor reserved (state)
278 * push descriptor tail (id), then set new descriptor reserved (state)
281 * push descriptor tail (id), then set new descriptor reserved (state)
284 * set new descriptor id and reserved (state), then allow writer changes
287 * set old descriptor reusable (state), then modify new data block area
293 * store writer changes, then set new descriptor committed (state)
296 * set descriptor reserved (state), then read descriptor data
299 * set new descriptor committed (state), then check descriptor head (id)
302 * set descriptor reusable (state), then push data tail (lpos)
305 * set descriptor reusable (state), then push descriptor tail (id)
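One of the pairings above ("store writer changes, then set new descriptor committed" against a reader that checks the state before reading data) can be sketched with C11 release/acquire atomics standing in for the kernel's smp_mb()/smp_rmb() pairs. The variable and function names here are illustrative assumptions.

```c
#include <assert.h>
#include <stdatomic.h>

static int record_data;		/* written without atomics */
static _Atomic int committed;	/* stand-in for the state bits */

/* Writer: store the record data, then publish the committed state.
 * The release store orders the data store before the state store. */
static void writer_commit(int data)
{
	record_data = data;
	atomic_store_explicit(&committed, 1, memory_order_release);
}

/* Reader: observe the committed state with acquire ordering; the data
 * read below then cannot see a value older than the committing store. */
static int reader_read(int *out)
{
	if (!atomic_load_explicit(&committed, memory_order_acquire))
		return 0;
	*out = record_data;
	return 1;
}
```

Each entry in the list above names such a pair: the writer-side ordered stores on the left, and the reader-side ordered loads that rely on them on the right.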
340 * @id: the ID of the associated descriptor
344 * descriptor.
352 * Return the descriptor associated with @n. @n can be either a
353 * descriptor ID or a sequence number.
362 * descriptor ID or a sequence number.
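Both lookups can resolve @n the same way because descriptor IDs and sequence numbers each increase by one per descriptor, and the ring size is a power of two, so a simple mask yields the array index. A minimal sketch (the 16-entry ring size and the desc_index() name are assumptions):

```c
#include <assert.h>
#include <stdint.h>

/* Assume a power-of-two descriptor count so that a descriptor ID or a
 * sequence number maps to an array index with a simple mask. */
#define DESCS_COUNT_BITS	4
#define DESCS_COUNT		(1u << DESCS_COUNT_BITS)

/* Map a descriptor ID or sequence number @n to its ring index. */
static inline unsigned int desc_index(uint64_t n)
{
	return (unsigned int)(n & (DESCS_COUNT - 1));
}
```

The caller is still responsible for verifying, via @state_var, that the descriptor at that index actually holds the ID or sequence number it asked for.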
413 /* Query the state of a descriptor. */
424 * Get a copy of a specified descriptor and return its queried state. If the
425 * descriptor is in an inconsistent state (miss or reserved), the caller can
426 * only expect the descriptor's @state_var field to be valid.
429 * non-state_var data, they are only valid if the descriptor is in a
442 /* Check the descriptor state. */ in desc_read()
447 * The descriptor is in an inconsistent state. Set at least in desc_read()
455 * Guarantee the state is loaded before copying the descriptor in desc_read()
456 * content. This avoids copying obsolete descriptor content that might in desc_read()
457 * not apply to the descriptor state. This pairs with _prb_commit:B. in desc_read()
473 * Copy the descriptor data. The data is not valid until the in desc_read()
487 * 1. Guarantee the descriptor content is loaded before re-checking in desc_read()
488 * the state. This avoids reading an obsolete descriptor state in desc_read()
504 * state. This avoids reading an obsolete descriptor state that may in desc_read()
527 * The data has been copied. Return the current descriptor state, in desc_read()
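The check/copy/re-check pattern that desc_read() uses can be sketched as below. This is a pared-down illustration under stated assumptions: seq_cst atomics stand in for the smp_rmb() pairings, one uint64_t field stands in for the blk_lpos/info content, and the names are invented for the sketch.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

enum desc_state { desc_miss = -1, desc_reserved, desc_committed,
		  desc_finalized, desc_reusable };

/* A pared-down descriptor: the packed state_var plus one data field
 * standing in for the blk_lpos/info content. */
struct prb_desc {
	_Atomic uint64_t state_var;
	uint64_t text_lpos;
};

#define STATE_SHIFT 62

static enum desc_state get_state(uint64_t sv)
{
	return (enum desc_state)(sv >> STATE_SHIFT);
}

static uint64_t get_id(uint64_t sv)
{
	return sv & ((1ULL << STATE_SHIFT) - 1);
}

/*
 * Check the state before and after copying so that a concurrently
 * recycled descriptor is detected: if state_var changed across the
 * copy, the copied content may be obsolete and is discarded.
 */
static enum desc_state desc_read_sketch(struct prb_desc *desc, uint64_t id,
					struct prb_desc *out)
{
	uint64_t sv = atomic_load(&desc->state_var);

	/* Wrong ID: the descriptor has been recycled (pseudo state). */
	if (get_id(sv) != id)
		return desc_miss;
	/* Reserved: only state_var is valid; do not copy content. */
	if (get_state(sv) == desc_reserved)
		return desc_reserved;

	/* Copy the descriptor content. */
	out->text_lpos = desc->text_lpos;

	/* Re-check: any change means the copy may not match the state. */
	if (atomic_load(&desc->state_var) != sv)
		return desc_miss;

	out->state_var = sv;
	return get_state(sv);
}

/* Self-test helper: build a committed descriptor and read it back. */
static enum desc_state demo_read(uint64_t stored_id, uint64_t query_id,
				 uint64_t *lpos_out)
{
	struct prb_desc d = { 0 }, copy = { 0 };

	atomic_store(&d.state_var,
		     ((uint64_t)desc_committed << STATE_SHIFT) | stored_id);
	d.text_lpos = 1234;

	enum desc_state s = desc_read_sketch(&d, query_id, &copy);
	*lpos_out = copy.text_lpos;
	return s;
}
```

The same before/after evaluation is what confirms info validity in the info array: the descriptor is read, the info is copied, and the descriptor is read again.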
539 * Take a specified descriptor out of the finalized state by attempting
556 * Given the text data ring, put the associated descriptor of each
559 * If there is any problem making the associated descriptor reusable, either
560 * the descriptor has not yet been finalized or another writer context has
585 * area. If the loaded value matches a valid descriptor ID, in data_make_reusable()
586 * the blk_lpos of that descriptor will be checked to make in data_make_reusable()
602 * This data block is invalid if the descriptor in data_make_reusable()
611 * This data block is invalid if the descriptor in data_make_reusable()
644 * Any descriptor states that have transitioned to reusable due to the in data_push_tail()
667 * sees the new tail lpos, any descriptor states that transitioned to in data_push_tail()
704 * 2. Guarantee the descriptor state loaded in in data_push_tail()
708 * recycled descriptor causing the tail lpos to in data_push_tail()
744 * Guarantee any descriptor states that have transitioned to in data_push_tail()
747 * the descriptor states reusable. This pairs with in data_push_tail()
761 * descriptor, thus invalidating the oldest descriptor. Before advancing
762 * the tail, the tail descriptor is made reusable and all data blocks up to
763 * and including the descriptor's data block are invalidated (i.e. the data
764 * ring tail is pushed past the data block of the descriptor being made
790 * tail and recycled the descriptor already. Success is in desc_push_tail()
807 * descriptor can be made available for recycling. Invalidating in desc_push_tail()
809 * data blocks once their associated descriptor is gone. in desc_push_tail()
816 * Check the next descriptor after @tail_id before pushing the tail in desc_push_tail()
820 * A successful read implies that the next descriptor is less than or in desc_push_tail()
829 * Guarantee any descriptor states that have transitioned to in desc_push_tail()
831 * verifying the recycled descriptor state. A full memory in desc_push_tail()
833 * descriptor states reusable. This pairs with desc_reserve:D. in desc_push_tail()
841 * case that the descriptor has been recycled. This pairs in desc_push_tail()
863 * Re-check the tail ID. The descriptor following @tail_id is in desc_push_tail()
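The two-step push that desc_push_tail() performs (make the tail descriptor reusable, then advance @tail_id) can be pictured as follows. This is a sketch under loud assumptions: a tiny 4-entry ring, a 2-bit state encoding, no data-ring invalidation step, and invented names; a failed cmpxchg is treated as "another context already did it", as the text describes.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define SHIFT		62
#define SV(id, st)	(((uint64_t)(st) << SHIFT) | (uint64_t)(id))
/* state encoding: 0 reserved, 1 committed, 2 finalized, 3 reusable */

static _Atomic uint64_t tail_id;
static _Atomic uint64_t state_vars[4];	/* tiny 4-entry descriptor ring */

/*
 * Transition the tail descriptor finalized -> reusable, then advance
 * @tail_id. A failed cmpxchg is fine if another context already made
 * the same transition; the only hard failure is a descriptor that is
 * not yet finalized (still reserved/committed), which blocks the push.
 */
static bool push_tail_sketch(uint64_t id)
{
	uint64_t sv = SV(id, 2);	/* expect finalized */
	uint64_t t = id;

	atomic_compare_exchange_strong(&state_vars[id & 3], &sv, SV(id, 3));
	if ((atomic_load(&state_vars[id & 3]) >> SHIFT) != 3)
		return false;		/* not finalized: cannot invalidate */

	/* Another context may beat us here; the tail still advances once. */
	atomic_compare_exchange_strong(&tail_id, &t, id + 1);
	return true;
}
```

In the real code the data blocks of the tail descriptor are invalidated (data_push_tail()) before the reusable transition is relied upon, so no data block can outlive its descriptor.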
874 /* Reserve a new descriptor, invalidating the oldest if necessary. */
917 * Make space for the new descriptor by in desc_reserve()
926 * recycled descriptor state. A read memory barrier is in desc_reserve()
949 * recycling the descriptor. Data ring tail changes can in desc_reserve()
956 * descriptor. A full memory barrier is needed since in desc_reserve()
962 * finalize the previous descriptor. This pairs with in desc_reserve()
971 * If the descriptor has been recycled, verify the old state val. in desc_reserve()
982 * Assign the descriptor a new ID and set its state to reserved. in desc_reserve()
985 * Guarantee the new descriptor ID and state are stored before making
1022 * a specified descriptor.
1054 * 1. Guarantee any descriptor states that have transitioned in data_alloc()
1057 * since other CPUs may have made the descriptor states in data_alloc()
1094 * Try to resize an existing data block associated with the descriptor
1255 * Attempt to transition the newest descriptor from committed back to reserved
1257 * if the descriptor is not yet finalized and the provided @caller_id matches.
1272 * To reduce unnecessary reopening, first check if the descriptor in desc_reopen_last()
1361 /* Transition the newest descriptor back to the reserved state. */ in prb_reserve_in_last()
1381 * exclusive access at that point. The descriptor may have in prb_reserve_in_last()
1445 * Attempt to finalize a specified descriptor. If this fails, the descriptor
1506 /* Descriptor reservation failures are tracked. */ in prb_reserve()
1549 * previous descriptor now so that it can be made available to in prb_reserve()
1550 * readers. (For seq==0 there is no previous descriptor.) in prb_reserve()
1585 * Set the descriptor as committed. See "ABA Issues" about why in _prb_commit()
1588 * 1. Guarantee all record data is stored before the descriptor state in _prb_commit()
1592 * 2. Guarantee the descriptor state is stored as committed before in _prb_commit()
1594 * descriptor. This pairs with desc_reserve:D. in _prb_commit()
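The committed transition itself can be pictured as a cmpxchg on @state_var: because the expected value embeds this writer's ID as well as the reserved state, a recycled descriptor (the ABA case) makes the exchange fail. A sketch, with the 2-bit state encoding and the names assumed for illustration:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define SHIFT		62
#define SV(id, st)	(((uint64_t)(st) << SHIFT) | (uint64_t)(id))
/* state encoding: 0 reserved, 1 committed, 2 finalized, 3 reusable */

static _Atomic uint64_t demo_sv = SV(7, 0);	/* reserved, ID 7 */

/*
 * Transition reserved -> committed only if the descriptor still holds
 * this writer's ID. A release cmpxchg publishes the record data stored
 * before the state change, as in the pairing described above.
 */
static bool commit_desc(_Atomic uint64_t *state_var, uint64_t id)
{
	uint64_t expect = SV(id, 0);

	return atomic_compare_exchange_strong_explicit(
			state_var, &expect, SV(id, 1),
			memory_order_release, memory_order_relaxed);
}
```

Packing the ID into the compared value is what lets one atomic operation both verify ownership and perform the state change.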
1640 * If this descriptor is no longer the head (i.e. a new record has in prb_commit()
1746 * descriptor. However, it also verifies that the record is finalized and has
1767 * does not exist. A descriptor in the reserved or committed state in desc_read_finalized_seq()
1778 * A descriptor in the reusable state may no longer have its data in desc_read_finalized_seq()
1807 /* Extract the ID, used to specify the descriptor to read. */ in prb_read()
1810 /* Get a local copy of the correct descriptor (if available). */ in prb_read()
1834 /* Get the sequence number of the tail descriptor. */
1858 * that the descriptor has been recycled. This pairs with in prb_first_seq()
2064 * @descs: The descriptor buffer for ringbuffer records.