Searched refs:requests (Results 1 – 25 of 467) sorted by relevance

/Linux-v5.4/Documentation/block/
stat.rst
29 read I/Os         requests      number of read I/Os processed
30 read merges       requests      number of read I/Os merged with in-queue I/O
32 read ticks        milliseconds  total wait time for read requests
33 write I/Os        requests      number of write I/Os processed
34 write merges      requests      number of write I/Os merged with in-queue I/O
36 write ticks       milliseconds  total wait time for write requests
37 in_flight         requests      number of I/Os currently in flight
39 time_in_queue     milliseconds  total wait time for all requests
40 discard I/Os      requests      number of discard I/Os processed
41 discard merges    requests      number of discard I/Os merged with in-queue I/O
[all …]
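The stat.rst hits above list the per-device I/O counters exported through /sys/block/<dev>/stat (name, units, description). As a rough sketch, assuming the 11-field layout described in that file and using "sda" only as an example device, the counters can be read like this:

#include <stdio.h>

/* Read the first 11 counters of /sys/block/<dev>/stat as described in
 * Documentation/block/stat.rst.  "sda" is only an example device name. */
int main(void)
{
	unsigned long long rd_ios, rd_merges, rd_sectors, rd_ticks;
	unsigned long long wr_ios, wr_merges, wr_sectors, wr_ticks;
	unsigned long long in_flight, io_ticks, time_in_queue;
	FILE *f = fopen("/sys/block/sda/stat", "r");

	if (!f)
		return 1;
	if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
		   &rd_ios, &rd_merges, &rd_sectors, &rd_ticks,
		   &wr_ios, &wr_merges, &wr_sectors, &wr_ticks,
		   &in_flight, &io_ticks, &time_in_queue) == 11)
		printf("read I/Os %llu, write I/Os %llu, in_flight %llu, time_in_queue %llu ms\n",
		       rd_ios, wr_ios, in_flight, time_in_queue);
	fclose(f);
	return 0;
}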
writeback_cache_control.rst
17 a forced cache flush, and the Force Unit Access (FUA) flag for requests.
26 guarantees that previously completed write requests are on non-volatile
58 on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
68 support required, the block layer completes empty REQ_PREFLUSH requests before
70 requests that have a payload. For devices with volatile write caches the
76 and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn. Note that
77 REQ_PREFLUSH requests with a payload are automatically turned into a sequence
84 and the driver must handle write requests that have the REQ_FUA bit set
deadline-iosched.rst
32 fifo_batch (number of requests)
38 maximum number of requests per batch.
49 When we have to move requests from the io scheduler queue to the block
66 front merge requests. Setting front_merges to 0 disables this functionality.
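The deadline-iosched.rst hits describe tunables such as fifo_batch (the maximum number of requests per batch) and front_merges. A minimal sketch, assuming the scheduler is exposed under /sys/block/<dev>/queue/iosched/ and using "sda" and the value 16 purely as examples:

#include <stdio.h>

/* Set the deadline scheduler's fifo_batch tunable.  Path, device name and
 * value are assumptions for illustration; requires root and the deadline
 * (or mq-deadline) scheduler selected for the device. */
int main(void)
{
	FILE *f = fopen("/sys/block/sda/queue/iosched/fifo_batch", "w");

	if (!f)
		return 1;
	fprintf(f, "16\n");
	fclose(f);
	return 0;
}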
/Linux-v5.4/Documentation/devicetree/bindings/dma/
lpc1850-dmamux.txt
11 - dma-requests: Number of DMA requests for the mux
15 - dma-requests: Number of DMA requests the controller can handle
28 dma-requests = <16>;
40 dma-requests = <64>;
stm32-dmamux.txt
17 - dma-channels : Number of DMA requests supported.
18 - dma-requests : Number of DMAMUX requests supported.
41 dma-requests = <8>;
61 dma-requests = <8>;
69 dma-requests = <128>;
ti-dma-crossbar.txt
9 - dma-requests: Number of DMA requests the crossbar can receive
13 - dma-requests: Number of DMA requests the controller can handle
43 dma-requests = <127>;
51 dma-requests = <205>;
mtk-uart-apdma.txt
11 One interrupt per dma-requests, or 8 if no dma-requests property is present
13 - dma-requests: The number of DMA channels
49 dma-requests = <12>;
fsl-imx-dma.txt
17 - #dma-requests : Number of DMA requests supported.
32 Clients have to specify the DMA requests with phandles in a list.
38 - dma-names: List of string identifiers for the DMA requests. For the correct
stm32-dma.txt
4 supporting 8 independent DMA channels. Each channel can have up to 8 requests.
16 - dma-requests : Number of DMA requests supported.
38 dma-requests = <8>;
arm-pl330.txt
17 - dma-requests: contains the total number of DMA requests supported by the DMAC
31 #dma-requests = <32>;
/Linux-v5.4/Documentation/filesystems/
virtiofs.rst
42 Since the virtio-fs device uses the FUSE protocol for file system requests, the
48 FUSE requests are placed into a virtqueue and processed by the host. The
55 prioritize certain requests over others. Virtqueues have queue semantics and
56 it is not possible to change the order of requests that have been enqueued.
58 impossible to add high priority requests. In order to address this difference,
59 the virtio-fs device uses a "hiprio" virtqueue specifically for requests that
60 have priority over normal requests.
gfs2-glocks.txt
16 The gl_holders list contains all the queued lock requests (not
69 grant for which we ignore remote demote requests. This is in order to
151 1. DLM lock time (non-blocking requests)
152 2. DLM lock time (blocking requests)
157 currently means any requests when (a) the current state of
161 lock requests.
164 how many lock requests have been made, and thus how much data
168 of dlm lock requests issued.
186 the average time between lock requests for a glock means we
209 srtt - Smoothed round trip time for non-blocking dlm requests
[all …]
/Linux-v5.4/Documentation/virt/kvm/
vcpu-requests.rst
12 /* Check if any requests are pending for VCPU @vcpu. */
38 as possible after making the request. This means most requests
67 ensure VCPU requests are seen by VCPUs (see "Ensuring Requests Are Seen"),
88 certain VCPU requests, namely KVM_REQ_TLB_FLUSH, to wait until the VCPU
94 VCPU requests are simply bit indices of the ``vcpu->requests`` bitmap.
98 clear_bit(KVM_REQ_UNHALT & KVM_REQUEST_MASK, &vcpu->requests);
102 independent requests, all additional bits are available for architecture
103 dependent requests.
140 VCPU requests should be masked by KVM_REQUEST_MASK before using them with
150 This flag is applied to requests that only need immediate attention
[all …]
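The vcpu-requests.rst hits describe VCPU requests as bit indices of the vcpu->requests bitmap, masked with KVM_REQUEST_MASK before use (see the clear_bit() line above). A simplified sketch of that pattern; the mask value and request number are illustrative, and the kernel uses atomic bitops rather than the plain operations shown here:

/* Illustrative request-bitmap pattern: each request is a bit index,
 * stripped of flag bits with KVM_REQUEST_MASK before being used. */
#define KVM_REQUEST_MASK	0xff	/* assumed value, for illustration */
#define REQ_EXAMPLE		3	/* hypothetical request number */

struct vcpu {
	unsigned long requests;		/* stand-in for vcpu->requests */
};

static void make_request(struct vcpu *vcpu, int req)
{
	vcpu->requests |= 1UL << (req & KVM_REQUEST_MASK);
}

static int check_and_clear_request(struct vcpu *vcpu, int req)
{
	unsigned long bit = 1UL << (req & KVM_REQUEST_MASK);

	if (!(vcpu->requests & bit))
		return 0;
	vcpu->requests &= ~bit;		/* like the clear_bit() call quoted above */
	return 1;
}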
/Linux-v5.4/arch/powerpc/kvm/
trace.h
106 __field( __u32, requests )
111 __entry->requests = vcpu->requests;
115 __entry->cpu_nr, __entry->requests)
/Linux-v5.4/drivers/gpu/drm/i915/
i915_scheduler.h
17 for (idx = 0; idx < ARRAY_SIZE((plist)->requests); idx++) \
18 list_for_each_entry(it, &(plist)->requests[idx], sched.link)
25 &(plist)->requests[idx], \
/Linux-v5.4/Documentation/ABI/stable/
sysfs-bus-xen-backend
39 Number of flush requests from the frontend.
46 Number of requests delayed because the backend was too
47 busy processing previous requests.
54 Number of read requests from the frontend.
68 Number of write requests from the frontend.
/Linux-v5.4/Documentation/scsi/
hptiop.txt
84 All queued requests are handled via inbound/outbound queue port.
99 - Post the packet to IOP by writing it to inbound queue. For requests
101 requests allocated in host memory, write (0x80000000|(bus_addr>>5))
108 For requests allocated in IOP memory, the request offset is posted to
111 For requests allocated in host memory, (0x80000000|(bus_addr>>5))
118 For requests allocated in IOP memory, the host driver free the request
121 Non-queued requests (reset/flush etc) can be sent via inbound message
129 All queued requests are handled via inbound/outbound list.
143 round to 0 if the index reaches the supported count of requests.
160 Non-queued requests (reset communication/reset/flush etc) can be sent via PCIe
/Linux-v5.4/Documentation/ABI/testing/
sysfs-class-scsi_tape
33 The number of I/O requests issued to the tape drive other
34 than SCSI read/write requests.
54 Shows the total number of read requests issued to the tape
65 read I/O requests to complete.
85 Shows the total number of write requests issued to the tape
96 write I/O requests to complete.
sysfs-driver-ppi
61 for the requests defined by TCG, i.e. requests from 1 to 22.
72 for the vendor specific requests, i.e. requests from 128 to
/Linux-v5.4/drivers/base/
devtmpfs.c
50 } *requests; variable
123 req.next = requests; in devtmpfs_create_node()
124 requests = &req; in devtmpfs_create_node()
153 req.next = requests; in devtmpfs_delete_node()
154 requests = &req; in devtmpfs_delete_node()
405 while (requests) { in devtmpfsd()
406 struct req *req = requests; in devtmpfsd()
407 requests = NULL; in devtmpfsd()
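The devtmpfs.c hits show pending requests being pushed onto a shared singly linked list (req.next = requests; requests = &req;) while the devtmpfsd worker detaches the whole list in one step before walking it. A bare sketch of that pattern, with locking omitted and the payload field chosen for illustration:

struct req {
	struct req *next;
	const char *name;		/* payload, illustrative */
};

static struct req *requests;		/* head of the pending list */

static void submit(struct req *req)
{
	req->next = requests;		/* push onto the head */
	requests = req;
}

static void worker(void)
{
	while (requests) {
		struct req *req = requests;

		requests = NULL;	/* detach the whole pending list */
		while (req) {
			struct req *next = req->next;
			/* handle req->name here */
			req = next;
		}
	}
}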
/Linux-v5.4/Documentation/admin-guide/device-mapper/
log-writes.rst
10 that is in the WRITE requests is copied into the log to make the replay happen
17 cache. This means that normal WRITE requests are not actually logged until the
22 This works by attaching all WRITE requests to a list once the write completes.
39 Any REQ_FUA requests bypass this flushing mechanism and are logged as soon as
40 they complete as those requests will obviously bypass the device cache.
42 Any REQ_OP_DISCARD requests are treated like WRITE requests. Otherwise we would
43 have all the DISCARD requests, and then the WRITE requests and then the FLUSH
/Linux-v5.4/Documentation/vm/
balance.rst
16 allocation requests that have order-0 fallback options. In such cases,
19 __GFP_IO allocation requests are made to prevent file system deadlocks.
21 In the absence of non sleepable allocation requests, it seems detrimental
26 That being said, the kernel should try to fulfill requests for direct
28 the dma pool, so as to keep the dma pool filled for dma requests (atomic
31 regular memory requests by allocating one from the dma pool, instead
76 probably because all allocation requests are coming from intr context
90 watermark[WMARK_HIGH]. When low_on_memory is set, page allocation requests will
99 1. Dynamic experience should influence balancing: number of failed requests
/Linux-v5.4/Documentation/driver-api/firmware/
request_firmware.rst
12 Synchronous firmware requests
15 Synchronous firmware requests will wait until the firmware is found or until
38 Asynchronous firmware requests
41 Asynchronous firmware requests allow driver code to not have to wait
/Linux-v5.4/Documentation/hid/
hid-transport.rst
105 - Control Channel (ctrl): The ctrl channel is used for synchronous requests and
108 events or answers to host requests on this channel.
112 SET_REPORT requests.
120 requiring explicit requests. Devices can choose to send data continuously or
123 to device and may include LED requests, rumble requests or more. Output
131 Feature reports are never sent without requests. A host must explicitly set
142 channel provides synchronous GET/SET_REPORT requests. Plain reports are only
150 simultaneous GET_REPORT requests.
159 GET_REPORT requests can be sent for any of the 3 report types and shall
173 multiple synchronous SET_REPORT requests.
[all …]
/Linux-v5.4/drivers/iio/adc/
twl4030-madc.c
166 struct twl4030_madc_request requests[TWL4030_MADC_NUM_METHODS]; member
498 madc->requests[i].result_pending = 1; in twl4030_madc_threaded_irq_handler()
501 r = &madc->requests[i]; in twl4030_madc_threaded_irq_handler()
523 r = &madc->requests[i]; in twl4030_madc_threaded_irq_handler()
624 if (twl4030_madc->requests[req->method].active) { in twl4030_madc_conversion()
655 twl4030_madc->requests[req->method].active = 1; in twl4030_madc_conversion()
659 twl4030_madc->requests[req->method].active = 0; in twl4030_madc_conversion()
664 twl4030_madc->requests[req->method].active = 0; in twl4030_madc_conversion()
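The twl4030-madc.c hits show a fixed table of per-method request slots, each guarded by an active flag so that only one conversion per method is in flight at a time. A stripped-down sketch of that structure; the slot count and names are placeholders:

#define NUM_METHODS 3			/* stand-in for TWL4030_MADC_NUM_METHODS */

struct madc_request {
	int active;			/* slot currently owns a conversion */
	int result_pending;		/* set from the IRQ handler */
};

static struct madc_request requests[NUM_METHODS];

static int start_conversion(int method)
{
	if (requests[method].active)	/* one request per method at a time */
		return -1;
	requests[method].active = 1;
	/* ... start the conversion; on completion or error: */
	requests[method].active = 0;
	return 0;
}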
