Motivation
----------

Two motivators, which are not satisfied by the existing perf buffer, prompted
the creation of a new ring buffer implementation:

- more efficient memory utilization by sharing the ring buffer across CPUs;
- preserving the ordering of events that happen sequentially in time, even
  across multiple CPUs (e.g., fork/exec/exit events for a task).

Both problems are a result of the choice to have a per-CPU perf ring buffer,
and both can be solved by an MPSC (multi-producer, single-consumer) ring
buffer implementation. The ordering problem could technically be solved for
the perf buffer with some in-kernel counting, but since the first problem
requires an MPSC buffer anyway, the same solution covers both.

Semantics and APIs
------------------

The approach chosen has the advantage of re-using the existing BPF map
infrastructure (in-kernel introspection APIs, libbpf support, familiar
concepts, existing tooling such as bpftool).

Being a map, the ring buffer can also be combined with ``ARRAY_OF_MAPS`` and
``HASH_OF_MAPS`` map-in-maps to implement a wide variety of topologies, from
one ring buffer per CPU (e.g., as a replacement for perf buffer use cases) to
application-defined hashing/sharding across a pool of ring buffers.

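As an illustration, a minimal BTF-defined map declaration in libbpf style (the
map name ``rb`` and the size are arbitrary examples; key and value sizes are
zero, and ``max_entries`` is the data area size in bytes, a power of 2 and
a multiple of the page size):

.. code-block:: c

    /* BPF side: declare a single ring buffer shared by all CPUs */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_RINGBUF);
            __uint(max_entries, 256 * 1024); /* power of 2, page-aligned */
    } rb SEC(".maps");
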
Like the perf buffer, BPF ringbuf offers:

- variable-length records;
- if there is no more space left in the ring buffer, reservation fails and
  never blocks;
- a memory-mappable data area for user-space applications, for ease of
  consumption and high performance;
- epoll notifications of new incoming data;
- but still the ability to busy-poll for new data to achieve the lowest
  latency, if necessary (both modes are sketched below).

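A sketch of the user-space side, assuming libbpf's ``ring_buffer`` API
(``handle_event()`` and the error handling are illustrative only):

.. code-block:: c

    #include <bpf/libbpf.h>

    /* called for every consumed record; return 0 to keep consuming */
    static int handle_event(void *ctx, void *data, size_t size)
    {
            /* process one record's payload */
            return 0;
    }

    static int consume_loop(int map_fd)
    {
            struct ring_buffer *rb;
            int err;

            rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
            if (!rb)
                    return -1;

            /* epoll-based waiting with a 100 ms timeout ... */
            err = ring_buffer__poll(rb, 100);
            /* ... or busy polling for the lowest latency */
            err = ring_buffer__consume(rb);

            ring_buffer__free(rb);
            return err < 0 ? err : 0;
    }
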
BPF ringbuf provides two sets of APIs to BPF programs:

- ``bpf_ringbuf_output()`` allows a program to *copy* data from another
  location into the ring buffer, similarly to ``bpf_perf_event_output()``;
- ``bpf_ringbuf_reserve()``/``bpf_ringbuf_submit()``/``bpf_ringbuf_discard()``
  split the process into two steps. First, a fixed amount of space is
  reserved. If successful, a pointer into the ring buffer data area is
  returned, which the program can use much like data inside array/hash maps.
  Once ready, the record is either submitted or discarded; discard is similar
  to submit, but tells the consumer to skip the record. Both APIs are
  sketched below.

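A minimal sketch of both APIs, reusing the ``rb`` map declared earlier (the
``event`` layout and the tracepoint are arbitrary examples):

.. code-block:: c

    struct event {
            int pid;
            char comm[16];
    };

    SEC("tp/sched/sched_process_exec")
    int handle_exec(void *ctx)
    {
            struct event tmp = {}, *e;

            /* single-step API: prepare the record elsewhere, then copy */
            tmp.pid = bpf_get_current_pid_tgid() >> 32;
            bpf_get_current_comm(tmp.comm, sizeof(tmp.comm));
            bpf_ringbuf_output(&rb, &tmp, sizeof(tmp), 0);

            /* two-step API: reserve space and fill the record in place */
            e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
            if (!e)
                    return 0; /* buffer full: reservation fails, no blocking */

            e->pid = tmp.pid;
            __builtin_memcpy(e->comm, tmp.comm, sizeof(e->comm));
            bpf_ringbuf_submit(e, 0);
            return 0;
    }
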
``bpf_ringbuf_output()`` has the disadvantage of incurring an extra memory
copy, because the record has to be prepared in some other place first. In
exchange, it can submit records whose length isn't known to the verifier
beforehand, and it closely matches ``bpf_perf_event_output()``, which
simplifies migrating existing perf buffer usage to the ring buffer.

``bpf_ringbuf_reserve()`` avoids the extra copy by providing a pointer
directly into ring buffer memory. In a lot of cases records are larger than
BPF stack space allows, so many programs have had to use an extra per-CPU
array as a temporary heap for preparing a sample; ``bpf_ringbuf_reserve()``
avoids that need completely. In exchange, it only allows a known, constant
size of memory to be reserved, so that the verifier can prove the BPF program
won't access memory outside its reserved record space.
``bpf_ringbuf_output()``, while slightly slower due to the extra memory copy,
covers the use cases that are not suitable for ``bpf_ringbuf_reserve()``.

A record marked as discarded is supposed to be skipped by consumer code.
Discard is useful for some advanced use-cases, such as ensuring
all-or-nothing multi-record submission (sketched below), or emulating
a temporary ``malloc()``/``free()`` within a single BPF program invocation.

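A sketch of all-or-nothing submission of two related records, again using the
``rb`` map and ``event`` type from the earlier examples (this fragment lives
inside a BPF program body):

.. code-block:: c

    struct event *first, *second;

    first = bpf_ringbuf_reserve(&rb, sizeof(*first), 0);
    if (!first)
            return 0;

    second = bpf_ringbuf_reserve(&rb, sizeof(*second), 0);
    if (!second) {
            /* not enough room for both: consumer skips discarded records */
            bpf_ringbuf_discard(first, 0);
            return 0;
    }

    /* ... fill both records ... */
    bpf_ringbuf_submit(first, 0);
    bpf_ringbuf_submit(second, 0);
    return 0;
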
Each reserved record is tracked by the verifier through its existing
reference-tracking logic, similar to socket ref-tracking. It is thus
impossible to reserve a record and forget to submit or discard it.

The ``bpf_ringbuf_query()`` helper allows querying various properties of the
ring buffer; currently 4 are supported:

- ``BPF_RB_AVAIL_DATA`` returns the amount of unconsumed data in the ring
  buffer;
- ``BPF_RB_RING_SIZE`` returns the size of the ring buffer;
- ``BPF_RB_CONS_POS``/``BPF_RB_PROD_POS`` return the current logical position
  of the consumer/producer, respectively.

Returned values are momentary snapshots of ring buffer state and could be off
by the time the helper returns, so they should only be used for
debugging/reporting, or for heuristics that take into account the highly
changeable nature of some of those characteristics.

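For example, a sketch of using such a snapshot for rough reporting (the
threshold is an arbitrary example; this lives inside a BPF program body):

.. code-block:: c

    __u64 avail = bpf_ringbuf_query(&rb, BPF_RB_AVAIL_DATA);
    __u64 size = bpf_ringbuf_query(&rb, BPF_RB_RING_SIZE);

    /* momentary snapshot: good enough for debugging/reporting */
    if (avail > size / 2)
            bpf_printk("ringbuf over half full: %llu of %llu", avail, size);
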
One such heuristic involves more fine-grained control over poll/epoll
notifications about new data availability. Together with the
``BPF_RB_NO_WAKEUP``/``BPF_RB_FORCE_WAKEUP`` flags accepted by the
output/submit/discard helpers, it gives a BPF program a high degree of
control, e.g., for more efficient batched notifications (see the sketch
below). The default self-balancing strategy, though, should be adequate for
most applications and already works reliably and efficiently.

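A sketch of such a batched-notification heuristic, again inside a BPF program
body (the 64 KB threshold is an arbitrary example):

.. code-block:: c

    struct event *e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);

    if (!e)
            return 0;
    /* ... fill the record ... */

    /* suppress the epoll wakeup until enough data has accumulated */
    if (bpf_ringbuf_query(&rb, BPF_RB_AVAIL_DATA) >= 64 * 1024)
            bpf_ringbuf_submit(e, BPF_RB_FORCE_WAKEUP);
    else
            bpf_ringbuf_submit(e, BPF_RB_NO_WAKEUP);
    return 0;
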
Design and Implementation
-------------------------

The reserve/submit schema naturally supports multiple producers, whether on
different CPUs or on the same CPU. If a BPF program is interrupted by another
BPF program sharing the same ring buffer, both will get a record reserved
(provided there is enough space left) and can work with their records and
submit them independently.

Internally, the ring buffer is implemented as a power-of-2 sized circular
buffer, with two logical, ever-increasing counters (which might wrap around
on 32-bit architectures; that's not a problem):

- the consumer counter shows the logical position up to which the consumer
  has consumed the data;
- the producer counter denotes the amount of data reserved by all producers.

Each time a record is reserved, the producer that "owns" the record advances
the producer counter. At that point, the data is not yet ready to be
consumed. Each record has an 8-byte header, which contains the length of the
reserved record, as well as two extra bits: a busy bit denoting that the
record is still being worked on, and a discard bit, which may be set at
submit time if the record is discarded.

The header also encodes the record's relative offset from the beginning of
the ring buffer data area, so ``bpf_ringbuf_submit()``/``bpf_ringbuf_discard()``
need only the pointer to the record itself; the ring buffer memory location
is restored from the record metadata header. This significantly simplifies
the verifier and improves API usability.

Records become available to the consumer strictly in the order of their
reservations, but only after all previous records have been committed. It is
thus possible for a slow producer to temporarily hold off submitted records
that were reserved later.

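To make the record layout concrete, here is a simplified sketch of
consumer-side logic in the spirit of libbpf's implementation, relying on the
UAPI constants ``BPF_RINGBUF_BUSY_BIT``, ``BPF_RINGBUF_DISCARD_BIT`` and
``BPF_RINGBUF_HDR_SZ`` (memory barriers and the mmap setup of the position
pages are elided; ``handle_sample()`` is hypothetical):

.. code-block:: c

    #include <linux/bpf.h> /* BPF_RINGBUF_*_BIT, BPF_RINGBUF_HDR_SZ */

    static void handle_sample(const void *data, __u32 len); /* hypothetical */

    /* data points to the (double-mapped) data area; mask = size - 1 */
    static void consume_all(unsigned long *cons_pos, unsigned long *prod_pos,
                            void *data, unsigned long mask)
    {
            while (*cons_pos < *prod_pos) {
                    __u32 *hdr = data + (*cons_pos & mask);
                    __u32 len = *hdr;

                    if (len & BPF_RINGBUF_BUSY_BIT)
                            break; /* record reserved but not yet committed */

                    if (!(len & BPF_RINGBUF_DISCARD_BIT))
                            /* payload follows the 8-byte header */
                            handle_sample(hdr + 2, len);

                    /* advance past header + payload, kept 8-byte aligned */
                    len &= ~BPF_RINGBUF_DISCARD_BIT;
                    *cons_pos += (len + BPF_RINGBUF_HDR_SZ + 7) / 8 * 8;
            }
    }
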
One implementation detail that significantly simplifies (and thus speeds up)
both producers and consumers is that the data area is mapped twice,
contiguously back-to-back, in virtual memory. This means no special measures
are needed for samples that wrap around at the end of the circular buffer
data area: the next page after the last data page is the first data page
again, so the sample still appears completely contiguous in virtual memory.
See the comment and a simple ASCII diagram showing this visually in
``bpf_ringbuf_area_alloc()``.

Another feature that distinguishes BPF ringbuf from the perf ring buffer is
self-pacing notification of new data availability.