
Searched refs:buffers (Results 1 – 25 of 402) sorted by relevance


/Linux-v5.4/drivers/media/platform/vivid/
vivid-vid-common.c
41 .buffers = 1,
50 .buffers = 1,
58 .buffers = 1,
66 .buffers = 1,
74 .buffers = 1,
82 .buffers = 1,
90 .buffers = 1,
98 .buffers = 1,
106 .buffers = 1,
114 .buffers = 1,
[all …]
/Linux-v5.4/lib/xz/
xz_dec_test.c
52 static struct xz_buf buffers = { variable
75 buffers.in_pos = 0; in xz_dec_test_open()
76 buffers.in_size = 0; in xz_dec_test_open()
77 buffers.out_pos = 0; in xz_dec_test_open()
120 while ((remaining > 0 || buffers.out_pos == buffers.out_size) in xz_dec_test_write()
122 if (buffers.in_pos == buffers.in_size) { in xz_dec_test_write()
123 buffers.in_pos = 0; in xz_dec_test_write()
124 buffers.in_size = min(remaining, sizeof(buffer_in)); in xz_dec_test_write()
125 if (copy_from_user(buffer_in, buf, buffers.in_size)) in xz_dec_test_write()
128 buf += buffers.in_size; in xz_dec_test_write()
[all …]
/Linux-v5.4/Documentation/media/uapi/v4l/
mmap.rst
24 Streaming is an I/O method where only pointers to buffers are exchanged
26 mapping is primarily intended to map buffers in device memory into the
30 drivers support streaming as well, allocating buffers in DMA-able main
33 A driver can support many sets of buffers. Each set is identified by a
38 To allocate device buffers applications call the
40 of buffers and buffer type, for example ``V4L2_BUF_TYPE_VIDEO_CAPTURE``.
41 This ioctl can also be used to change the number of buffers or to free
42 the allocated memory, provided none of the buffers are still mapped.
44 Before applications can access the buffers they must map them into their
46 location of the buffers in device memory can be determined with the
[all …]
capture.c.rst
58 struct buffer *buffers;
98 if (-1 == read(fd, buffers[0].start, buffers[0].length)) {
113 process_image(buffers[0].start, buffers[0].length);
139 process_image(buffers[buf.index].start, buf.bytesused);
167 if (buf.m.userptr == (unsigned long)buffers[i].start
168 && buf.length == buffers[i].length)
275 buf.m.userptr = (unsigned long)buffers[i].start;
276 buf.length = buffers[i].length;
294 free(buffers[0].start);
299 if (-1 == munmap(buffers[i].start, buffers[i].length))
[all …]
userp.rst
32 No buffers (planes) are allocated beforehand, consequently they are not
33 indexed and cannot be queried like mapped buffers with the
57 :ref:`VIDIOC_QBUF <VIDIOC_QBUF>` ioctl. Although buffers are commonly
66 Filled or displayed buffers are dequeued with the
72 Applications must take care not to free buffers without dequeuing.
73 Firstly, the buffers remain locked for longer, wasting physical memory.
79 buffers, to start capturing and enter the read loop. Here the
82 and enqueue buffers, when enough buffers are stacked up output is
84 buffers it must wait until an empty buffer can be dequeued and reused.
86 more buffers can be dequeued. By default :ref:`VIDIOC_DQBUF
[all …]
dev-decoder.rst
12 from the client to process these buffers.
50 the destination buffer queue; for decoders, the queue of buffers containing
51 decoded frames; for encoders, the queue of buffers containing an encoded
54 into ``CAPTURE`` buffers.
78 ``OUTPUT`` buffers must be queued by the client in decode order; for
79 encoders ``CAPTURE`` buffers must be returned by the encoder in decode order.
86 buffers must be queued by the client in display order; for decoders,
87 ``CAPTURE`` buffers must be returned by the decoder in display order.
110 the source buffer queue; for decoders, the queue of buffers containing
111 an encoded bytestream; for encoders, the queue of buffers containing raw
[all …]
vidioc-reqbufs.rst
43 Memory mapped buffers are located in device memory and must be allocated
45 space. User buffers are allocated by applications themselves, and this
47 to setup some internal structures. Similarly, DMABUF buffers are
52 To allocate device buffers applications initialize all fields of the
55 the desired number of buffers, ``memory`` must be set to the requested
58 allocate the requested number of buffers and it stores the actual number
61 number is also possible when the driver requires more buffers to
63 buffers, one displayed and one filled by the application.
69 buffers. Note that if any buffers are still mapped or exported via DMABUF,
73 If ``V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS`` is set, then these buffers are
[all …]
dmabuf.rst
16 The DMABUF framework provides a generic method for sharing buffers
25 exporting V4L2 buffers as DMABUF file descriptors.
31 importing DMA buffers through DMABUF file descriptors is supported is
35 This I/O method is dedicated to sharing DMA buffers between different
38 application. Next, these buffers are exported to the application as file
70 buffers, every plane can be associated with a different DMABUF
71 descriptor. Although buffers are commonly cycled, applications can pass
128 Captured or displayed buffers are dequeued with the
136 buffers, to start capturing and enter the read loop. Here the
139 and enqueue buffers, when enough buffers are stacked up output is
[all …]
vidioc-create-bufs.rst
19 VIDIOC_CREATE_BUFS - Create buffers for Memory Mapped or User Pointer or DMA Buffer I/O
42 This ioctl is used to create buffers for :ref:`memory mapped <mmap>`
46 over buffers is required. This ioctl can be called multiple times to
47 create buffers of different sizes.
49 To allocate the device buffers applications must initialize the relevant
51 ``count`` field must be set to the number of requested buffers, the
55 The ``format`` field specifies the image format that the buffers must be
62 sizes (for multi-planar formats) will be used for the allocated buffers.
66 The buffers created by this ioctl will have as minimum size the size
76 will attempt to allocate up to the requested number of buffers and store
[all …]
v4l2grab.c.rst
74 struct buffer *buffers;
103 buffers = calloc(req.count, sizeof(*buffers));
113 buffers[n_buffers].length = buf.length;
114 buffers[n_buffers].start = v4l2_mmap(NULL, buf.length,
118 if (MAP_FAILED == buffers[n_buffers].start) {
163 fwrite(buffers[buf.index].start, buf.bytesused, 1, fout);
172 v4l2_munmap(buffers[i].start, buffers[i].length);
vidioc-streamon.rst
49 Capture hardware is disabled and no input buffers are filled (if there
50 are any empty buffers in the incoming queue) until ``VIDIOC_STREAMON``
58 If ``VIDIOC_STREAMON`` fails then any already queued buffers will remain
62 in progress, unlocks any user pointer buffers locked in physical memory,
63 and it removes all buffers from the incoming and outgoing queues. That
70 If buffers have been queued with :ref:`VIDIOC_QBUF` and
72 ``VIDIOC_STREAMON``, then those queued buffers will also be removed from
84 but ``VIDIOC_STREAMOFF`` will return queued buffers to their starting
103 The buffer ``type`` is not supported, or no buffers have been
/Linux-v5.4/drivers/android/
binder_alloc_selftest.c
116 struct binder_buffer *buffers[], in binder_selftest_alloc_buf() argument
122 buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0); in binder_selftest_alloc_buf()
123 if (IS_ERR(buffers[i]) || in binder_selftest_alloc_buf()
124 !check_buffer_pages_allocated(alloc, buffers[i], in binder_selftest_alloc_buf()
133 struct binder_buffer *buffers[], in binder_selftest_free_buf() argument
139 binder_alloc_free_buf(alloc, buffers[seq[i]]); in binder_selftest_free_buf()
179 struct binder_buffer *buffers[BUFFER_NUM]; in binder_selftest_alloc_free() local
181 binder_selftest_alloc_buf(alloc, buffers, sizes, seq); in binder_selftest_alloc_free()
182 binder_selftest_free_buf(alloc, buffers, sizes, seq, end); in binder_selftest_alloc_free()
185 binder_selftest_alloc_buf(alloc, buffers, sizes, seq); in binder_selftest_alloc_free()
[all …]
/Linux-v5.4/drivers/iio/buffer/
industrialio-hw-consumer.c
23 struct list_head buffers; member
58 list_for_each_entry(buf, &hwc->buffers, head) { in iio_hw_consumer_get_buffer()
72 list_add_tail(&buf->head, &hwc->buffers); in iio_hw_consumer_get_buffer()
94 INIT_LIST_HEAD(&hwc->buffers); in iio_hw_consumer_alloc()
116 list_for_each_entry(buf, &hwc->buffers, head) in iio_hw_consumer_alloc()
134 list_for_each_entry_safe(buf, n, &hwc->buffers, head) in iio_hw_consumer_free()
217 list_for_each_entry(buf, &hwc->buffers, head) { in iio_hw_consumer_enable()
226 list_for_each_entry_continue_reverse(buf, &hwc->buffers, head) in iio_hw_consumer_enable()
240 list_for_each_entry(buf, &hwc->buffers, head) in iio_hw_consumer_disable()
/Linux-v5.4/Documentation/media/uapi/dvb/
dmx-reqbufs.rst
45 Memory mapped buffers are located in device memory and must be allocated
47 space. User buffers are allocated by applications themselves, and this
49 to setup some internal structures. Similarly, DMABUF buffers are
54 To allocate device buffers applications initialize all fields of the
56 to the desired number of buffers, and ``size`` to the size of each
60 attempt to allocate the requested number of buffers and it stores the actual
62 number is also possible when the driver requires more buffers to
70 buffers, however this cannot succeed when any buffers are still mapped.
71 A ``count`` value of zero frees all buffers, after aborting or finishing
/Linux-v5.4/drivers/media/pci/ivtv/
ivtv-queue.c
35 q->buffers = 0; in ivtv_queue_init()
53 q->buffers++; in ivtv_enqueue()
68 q->buffers--; in ivtv_dequeue()
82 from->buffers--; in ivtv_queue_move_buf()
88 to->buffers++; in ivtv_queue_move_buf()
143 steal->buffers--; in ivtv_queue_move()
147 from->buffers++; in ivtv_queue_move()
184 int SGsize = sizeof(struct ivtv_sg_host_element) * s->buffers; in ivtv_stream_alloc()
187 if (s->buffers == 0) in ivtv_stream_alloc()
192 s->name, s->buffers, s->buf_size, s->buffers * s->buf_size / 1024); in ivtv_stream_alloc()
[all …]
/Linux-v5.4/drivers/scsi/isci/
unsolicited_frame_control.c
110 uf = &uf_control->buffers.array[i]; in sci_unsolicited_frame_control_construct()
136 *frame_header = &uf_control->buffers.array[frame_index].header->data; in sci_unsolicited_frame_control_get_header()
149 *frame_buffer = uf_control->buffers.array[frame_index].buffer; in sci_unsolicited_frame_control_get_buffer()
184 uf_control->buffers.array[frame_index].state = UNSOLICITED_FRAME_RELEASED; in sci_unsolicited_frame_control_release_frame()
198 while (uf_control->buffers.array[frame_get].state == UNSOLICITED_FRAME_RELEASED) { in sci_unsolicited_frame_control_release_frame()
199 uf_control->buffers.array[frame_get].state = UNSOLICITED_FRAME_EMPTY; in sci_unsolicited_frame_control_release_frame()
/Linux-v5.4/Documentation/media/v4l-drivers/
cafe_ccic.rst
37 buffers until the time comes to transfer data. If this option is set,
38 then worst-case-sized buffers will be allocated at module load time.
42 - dma_buf_size: The size of DMA buffers to allocate. Note that this
43 option is only consulted for load-time allocation; when buffers are
48 buffers. Normally, the driver tries to use three buffers; on faster
51 - min_buffers: The minimum number of streaming I/O buffers that the driver
56 - max_buffers: The maximum number of streaming I/O buffers; default is
/Linux-v5.4/Documentation/media/kapi/
v4l2-videobuf.rst
21 and user space. It handles the allocation and management of buffers for
34 Not all video devices use the same kind of buffers. In fact, there are (at
38 address spaces. (Almost) all user-space buffers are like this, but it
39 makes great sense to allocate kernel-space buffers this way as well when
45 contiguous; buffers allocated with vmalloc(), in other words. These
46 buffers are just as hard to use for DMA operations, but they can be
48 buffers are convenient.
54 Videobuf can work with all three types of buffers, but the driver author
57 [It's worth noting that there's a fourth kind of buffer: "overlay" buffers
61 benefits merit the use of this technique. Overlay buffers can be handled
[all …]
/Linux-v5.4/fs/
splice.c
199 while (pipe->nrbufs < pipe->buffers) { in splice_to_pipe()
200 int newbuf = (pipe->curbuf + pipe->nrbufs) & (pipe->buffers - 1); in splice_to_pipe()
236 } else if (pipe->nrbufs == pipe->buffers) { in add_to_pipe()
239 int newbuf = (pipe->curbuf + pipe->nrbufs) & (pipe->buffers - 1); in add_to_pipe()
255 unsigned int buffers = READ_ONCE(pipe->buffers); in splice_grow_spd() local
257 spd->nr_pages_max = buffers; in splice_grow_spd()
258 if (buffers <= PIPE_DEF_BUFFERS) in splice_grow_spd()
261 spd->pages = kmalloc_array(buffers, sizeof(struct page *), GFP_KERNEL); in splice_grow_spd()
262 spd->partial = kmalloc_array(buffers, sizeof(struct partial_page), in splice_grow_spd()
377 if (pipe->nrbufs == pipe->buffers) in default_file_splice_read()
[all …]
pipe.c
324 curbuf = (curbuf + 1) & (pipe->buffers - 1); in pipe_read()
404 (pipe->buffers - 1); in pipe_write()
435 if (bufs < pipe->buffers) { in pipe_write()
436 int newbuf = (pipe->curbuf + bufs) & (pipe->buffers-1); in pipe_write()
479 if (bufs < pipe->buffers) in pipe_write()
528 buf = (buf+1) & (pipe->buffers - 1); in pipe_ioctl()
558 mask |= (nrbufs < pipe->buffers) ? EPOLLOUT | EPOLLWRNORM : 0; in pipe_poll()
682 pipe->buffers = pipe_bufs; in alloc_pipe_info()
700 (void) account_pipe_buffers(pipe->user, pipe->buffers, 0); in free_pipe_info()
702 for (i = 0; i < pipe->buffers; i++) { in free_pipe_info()
[all …]
/Linux-v5.4/lib/reed_solomon/
decode_rs.c
32 uint16_t *lambda = rsc->buffers + RS_DECODE_LAMBDA * (nroots + 1);
33 uint16_t *syn = rsc->buffers + RS_DECODE_SYN * (nroots + 1);
34 uint16_t *b = rsc->buffers + RS_DECODE_B * (nroots + 1);
35 uint16_t *t = rsc->buffers + RS_DECODE_T * (nroots + 1);
36 uint16_t *omega = rsc->buffers + RS_DECODE_OMEGA * (nroots + 1);
37 uint16_t *root = rsc->buffers + RS_DECODE_ROOT * (nroots + 1);
38 uint16_t *reg = rsc->buffers + RS_DECODE_REG * (nroots + 1);
39 uint16_t *loc = rsc->buffers + RS_DECODE_LOC * (nroots + 1);
/Linux-v5.4/Documentation/filesystems/
relay.txt
9 as a set of per-cpu kernel buffers ('channel buffers'), each
11 clients write into the channel buffers using efficient write
16 are associated with the channel buffers using the API described below.
18 The format of the data logged into the channel buffers is completely
33 sub-buffers. Messages are written to the first sub-buffer until it is
35 the next (if available). Messages are never split across sub-buffers.
57 read sub-buffers; thus in cases where read(2) is being used to drain
58 the channel buffers, special-purpose communication between kernel and
93 allowing both to convey the state of buffers (full, empty, amount of
95 consumes the read sub-buffers; thus in cases where read(2) is being
[all …]
/Linux-v5.4/kernel/trace/
ring_buffer.c
495 struct ring_buffer_per_cpu **buffers; member
522 return buffer->buffers[cpu]->nr_pages; in ring_buffer_nr_pages()
537 read = local_read(&buffer->buffers[cpu]->pages_read); in ring_buffer_nr_dirty_pages()
538 cnt = local_read(&buffer->buffers[cpu]->pages_touched); in ring_buffer_nr_dirty_pages()
594 cpu_buffer = buffer->buffers[cpu]; in ring_buffer_wait()
698 cpu_buffer = buffer->buffers[cpu]; in ring_buffer_poll_wait()
1409 buffer->buffers = kzalloc(ALIGN(bsize, cache_line_size()), in __ring_buffer_alloc()
1411 if (!buffer->buffers) in __ring_buffer_alloc()
1416 buffer->buffers[cpu] = rb_allocate_cpu_buffer(buffer, nr_pages, cpu); in __ring_buffer_alloc()
1417 if (!buffer->buffers[cpu]) in __ring_buffer_alloc()
[all …]
/Linux-v5.4/drivers/media/usb/pvrusb2/
pvrusb2-io.c
49 struct pvr2_buffer **buffers; member
305 memcpy(nb, sp->buffers, in pvr2_stream_buffer_count()
307 kfree(sp->buffers); in pvr2_stream_buffer_count()
309 sp->buffers = nb; in pvr2_stream_buffer_count()
321 sp->buffers[sp->buffer_total_count] = bp; in pvr2_stream_buffer_count()
328 bp = sp->buffers[sp->buffer_total_count - 1]; in pvr2_stream_buffer_count()
330 sp->buffers[sp->buffer_total_count - 1] = NULL; in pvr2_stream_buffer_count()
338 nb = kmemdup(sp->buffers, scnt * sizeof(*nb), in pvr2_stream_buffer_count()
342 kfree(sp->buffers); in pvr2_stream_buffer_count()
343 sp->buffers = nb; in pvr2_stream_buffer_count()
[all …]
/Linux-v5.4/Documentation/admin-guide/hw-vuln/
tsx_async_abort.rst
7 data which is available in various CPU internal buffers by using asynchronous
39 data into temporary microarchitectural structures (buffers). The data in
40 those buffers can be forwarded to load operations as an optimization.
54 executed loads may read data from those internal buffers and pass it to dependent
58 Because the buffers are potentially shared between Hyper-Threads cross
63 which in turn potentially leaks data stored in the buffers.
100 * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
101 - The system tries to clear the buffers but the microcode might not support the operation.
102 * - 'Mitigation: Clear CPU buffers'
103 - The microcode has been updated to clear the buffers. TSX is still enabled.
[all …]
