/Linux-v5.4/tools/perf/pmu-events/arch/x86/icelake/
D | floating-point.json |
    16 …instructions retired; some instructions will count twice as noted below. Each count represents 1 …
    23 …instructions retired; some instructions will count twice as noted below. Each count represents 1 …
    27 …instructions retired; some instructions will count twice as noted below. Each count represents 1 …
    34 …instructions retired; some instructions will count twice as noted below. Each count represents 1 …
    38 …nstructions will count twice as noted below. Each count represents 2 computation operations, one …
    45 …nstructions will count twice as noted below. Each count represents 2 computation operations, one …
    49 …nstructions will count twice as noted below. Each count represents 4 computation operations, one …
    56 …nstructions will count twice as noted below. Each count represents 4 computation operations, one …
    60 …nstructions will count twice as noted below. Each count represents 4 computation operations, one …
    67 …nstructions will count twice as noted below. Each count represents 4 computation operations, one …
    [all …]
|
/Linux-v5.4/tools/perf/pmu-events/arch/x86/cascadelakex/
D | floating-point.json |
    5 …instructions retired; some instructions will count twice as noted below. Each count represents 1 …
    14 …instructions retired; some instructions will count twice as noted below. Each count represents 1 …
    23 …nstructions will count twice as noted below. Each count represents 2 computation operations, one …
    32 …nstructions will count twice as noted below. Each count represents 4 computation operations, one …
    41 …nstructions will count twice as noted below. Each count represents 4 computation operations, one …
    50 …nstructions will count twice as noted below. Each count represents 8 computation operations, one …
    59 …nstructions will count twice as noted below. Each count represents 8 computation operations, one …
    68 …nstructions will count twice as noted below. Each count represents 16 computation operations, one…
|
/Linux-v5.4/Documentation/hwmon/
D | ibmpowernv.rst |
    18 'hwmon' populates the 'sysfs' tree having attribute files, each for a given
    21 All the nodes in the DT appear under "/ibm,opal/sensors" and each valid node in
    45 each OCC. Using this attribute each OCC can be asked to
    58 each OCC. Using this attribute each OCC can be asked to
    69 each OCC. Using this attribute each OCC can be asked to
    80 each OCC. Using this attribute each OCC can be asked to
|
/Linux-v5.4/Documentation/filesystems/nfs/
D | pnfs.txt |
    5 reference multiple devices, each of which can reference multiple data servers.
    6 Each data server can be referenced by multiple devices. Each device
    15 Each nfs_inode may hold a pointer to a cache of these layout
    18 We reference the header for the inode pointing to it, across each
    20 LAYOUTCOMMIT), and for each lseg held within.
    22 Each header is also (when non-empty) put on a list associated with
    31 nfs4_deviceid_cache). The cache itself is referenced across each
    33 the lifetime of each lseg referencing them.
    61 layout types: "files", "objects", "blocks", and "flexfiles". For each
|
/Linux-v5.4/lib/842/
D | 842.h |
    6 /* The 842 compressed format is made up of multiple blocks, each of
    12 * template operation. For normal operations, each arg is either a specific
    18 * table, the static "decomp_ops" table used in decompress. For each template
    19 * (table row), there are between 1 and 4 actions; each action corresponds to
    20 * an arg following the template code bits. Each action is either a "data"
    21 * type action, or a "index" type action, and each action results in 2, 4, or 8
    22 * bytes being written to the output buffer. Each template (i.e. all actions
    36 * The number of bits for each index's arg are: 8 bits for I2, 9 bits for I4,
    37 * and 8 bits for I8. Since each index points to a 2, 4, or 8 byte section,
    41 * each of I2, I4, and I8 that are updated for each byte written to the output
    [all …]
|
/Linux-v5.4/Documentation/networking/
D | scaling.rst |
    30 applying a filter to each packet that assigns it to one of a small number
    31 of logical flows. Packets for each flow are steered to a separate receive
    41 implementation of RSS uses a 128-entry indirection table where each entry
    60 for each CPU if the device supports enough queues, or otherwise at least
    61 one for each memory domain, where a memory domain is a set of CPUs that
    76 Each receive queue has a separate IRQ associated with it. The NIC triggers
    79 that can route each interrupt to a particular CPU. The active mapping
    84 affinity of each interrupt see Documentation/IRQ-affinity.txt. Some systems
    100 interrupts (and thus work) grows with each additional queue.
    103 processors with hyperthreading (HT), each hyperthread is represented as
    [all …]
|
/Linux-v5.4/Documentation/devicetree/bindings/display/tegra/
D | nvidia,tegra20-host1x.txt |
    7 For Tegra186, one entry for each entry in reg-names:
    18 - resets: Must contain an entry for each entry in reset-names.
    23 The host1x top-level node defines a number of children, each representing one
    34 - resets: Must contain an entry for each entry in reset-names.
    47 - resets: Must contain an entry for each entry in reset-names.
    60 - resets: Must contain an entry for each entry in reset-names.
    73 - resets: Must contain an entry for each entry in reset-names.
    86 - resets: Must contain an entry for each entry in reset-names.
    96 - clocks: Must contain an entry for each entry in clock-names.
    103 - resets: Must contain an entry for each entry in reset-names.
    [all …]
|
/Linux-v5.4/Documentation/devicetree/bindings/gpio/
D | nvidia,tegra186-gpio.txt |
    26 address space, each of which access the same underlying state. See the hardware
    31 implemented by the SoC. Each GPIO is assigned to a port, and a port may control
    32 a number of GPIOs. Thus, each GPIO is named according to an alphabetical port
    36 The number of ports implemented by each GPIO controller varies. The number of
    37 implemented GPIOs within each port varies. GPIO registers within a controller
    48 Each GPIO controller can generate a number of interrupt signals. Each signal
    54 Each GPIO controller in fact generates multiple interrupts signals for each set
    55 of ports. Each GPIO may be configured to feed into a specific one of the
    56 interrupt signals generated by a set-of-ports. The intent is for each generated
    57 signal to be routed to a different CPU, thus allowing different CPUs to each
    [all …]
|
/Linux-v5.4/Documentation/devicetree/bindings/pinctrl/
D | pinctrl-bindings.txt |
    5 controllers. Each pin controller must be represented as a node in device tree,
    9 designated client devices. Again, each client device must be represented as a
    16 device is inactive. Hence, each client device can define a set of named
    35 For each client device individually, every pin state is assigned an integer
    36 ID. These numbers start at 0, and are contiguous. For each state ID, a unique
    37 property exists to define the pin configuration. Each state may also be
    41 Each client device's own binding determines the set of states that must be
    47 pinctrl-0: List of phandles, each pointing at a pin configuration
    52 from multiple nodes for a single pin controller, each
    65 pinctrl-1: List of phandles, each pointing at a pin configuration
    [all …]
|
D | pinctrl-vt8500.txt |
    3 These SoCs contain a combined Pinmux/GPIO module. Each pin may operate as
    23 Each pin configuration node lists the pin(s) to which it applies, and one or
    25 configuration. Each subnode only affects those parameters that are explicitly
    31 - wm,pins: An array of cells. Each cell contains the ID of a pin.
    44 Each of wm,function and wm,pull may contain either a single value which
    45 will be applied to all pins in wm,pins, or one value for each entry in
|
/Linux-v5.4/Documentation/filesystems/
D | qnx6.txt |
    36 Each qnx6fs got two superblocks, each one having a 64bit serial number.
    38 In write mode with reach new snapshot (after each synchronous write), the
    47 Each superblock holds a set of root inodes for the different filesystem
    49 Each of these root nodes holds information like total size of the stored
    51 If the level value is 0, up to 16 direct blocks can be addressed by each
    53 Level 1 adds an additional indirect addressing level where each indirect
    70 0x1000 is the size reserved for each superblock - regardless of the
    76 Each object in the filesystem is represented by an inode. (index node)
    97 It is a specially formatted file containing records which associate each
    129 Each data block (tree leaves) holds one long filename. That filename is
    [all …]
|
/Linux-v5.4/Documentation/devicetree/bindings/dma/
D | stm32-mdma.txt |
    38 described in the dma.txt file, using a five-cell specifier for each channel:
    50 0x10: Source address pointer is incremented after each data transfer
    51 0x11: Source address pointer is decremented after each data transfer
    54 0x10: Destination address pointer is incremented after each data
    56 0x11: Destination address pointer is decremented after each data
    71 0x00: Each MDMA request triggers a buffer transfer (max 128 bytes)
    72 0x01: Each MDMA request triggers a block transfer (max 64K bytes)
    73 0x10: Each MDMA request triggers a repeated block transfer
    74 0x11: Each MDMA request triggers a linked list transfer
|
/Linux-v5.4/Documentation/scheduler/
D | sched-domains.rst |
    5 Each CPU has a "base" scheduling domain (struct sched_domain). The domain
    10 Each scheduling domain spans a number of CPUs (stored in the ->span field).
    13 i. The top domain for each CPU will generally span all CPUs in the system
    19 Each scheduling domain must have one or more CPU groups (struct sched_group)
    27 Balancing within a sched domain occurs between groups. That is, each group
    29 load of each of its member CPUs, and only when the load of a group becomes
    32 In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
    57 of SMT, you'll span all siblings of the physical CPU, with each group being
    61 node. Each group being a single physical CPU. Then with NUMA, the parent
    62 of the SMP domain will span the entire machine, with each group having the
|
/Linux-v5.4/drivers/staging/unisys/Documentation/
D | overview.txt |
    25 The back-end for each device is owned and managed by a small,
    27 with each guest partition sharing that device through an area of shared memory
    32 Each virtual device requires exactly 1 dedicated channel, which the guest
    44 * Because the s-Par back-end provides a standard EFI framebuffer to each
    61 visorbus_register_visor_driver() that is called by each of the function
    74 form in the hotplug uevent environment when each virtual device is
    79 visorbus notifies each function driver when a device of its registered class
    83 The actual struct device objects that correspond to each virtual bus and
    84 each virtual device are created and owned by visorbus. These device objects
    86 special control channel called the "controlvm channel" (each guest partition
    [all …]
|
/Linux-v5.4/include/linux/
D | prime_numbers.h |
    11 * for_each_prime_number - iterate over each prime upto a value
    15 * Starting from the first prime number 2 iterate over each prime number up to
    16 * the @max value. On each iteration, @prime is set to the current prime number.
    25 * for_each_prime_number_from - iterate over each prime upto a value
    30 * Starting from @from iterate over each successive prime number up to the
    31 * @max value. On each iteration, @prime is set to the current prime number.
|
/Linux-v5.4/Documentation/gpu/
D | msm-crash-dump.rst |
    11 Each entry is in the form key: value. Sections headers will not have a value
    13 Each section might have multiple array entries the start of which is designated
    43 Section containing the contents of each ringbuffer. Each ringbuffer is
    47 Ringbuffer ID (0 based index). Each ringbuffer in the section
    73 Each buffer object will have a uinque iova.
    86 Set of registers values. Each entry is on its own line enclosed
|
/Linux-v5.4/Documentation/devicetree/bindings/phy/
D | apm-xgene-phy.txt |
    3 PHY nodes are defined to describe on-chip 15Gbps Multi-purpose PHY. Each
    19 Two set of 3-tuple setting for each (up to 3)
    25 Two set of 3-tuple setting for each (up to 3)
    28 gain control. Two set of 3-tuple setting for each
    32 each (up to 3) supported link speed on the host.
    36 3-tuple setting for each (up to 3) supported link
    40 3-tuple setting for each (up to 3) supported link
    46 - apm,tx-speed : Tx operating speed. One set of 3-tuple for each
|
/Linux-v5.4/include/media/
D | v4l2-device.h |
    36 * Each instance of a V4L2 device should create the v4l2_device struct,
    192 * @arg: arguments for the notification. Those are specific to each
    225 * the @sd variable pointing to each sub-device in turn.
    239 * Each element there groups a set of operations functions.
    242 * each element at &struct v4l2_subdev_ops.
    264 * Each element there groups a set of operations functions.
    267 * each element at &struct v4l2_subdev_ops.
    292 * Each element there groups a set of operations functions.
    295 * each element at &struct v4l2_subdev_ops.
    327 * Each element there groups a set of operations functions.
    [all …]
|
/Linux-v5.4/Documentation/admin-guide/cgroup-v1/
D | cgroups.rst |
    62 hierarchy, and a set of subsystems; each subsystem has system-specific
    63 state attached to each cgroup in the hierarchy. Each hierarchy has
    67 cgroups. Each hierarchy is a partition of all tasks in the system.
    81 tasks in each cgroup.
    101 different subsystems - having parallel hierarchies allows each
    107 At one extreme, each resource controller or subsystem could be in a
    175 - Each task in the system has a reference-counted pointer to a
    179 cgroup_subsys_state objects, one for each cgroup subsystem
    181 the cgroup of which it's a member in each hierarchy, but this
    188 field of each task_struct using the css_set, anchored at
    [all …]
|
/Linux-v5.4/drivers/net/ethernet/qlogic/qlcnic/
D | qlcnic_dcb.c |
    571 struct qlcnic_dcb_param *each; in qlcnic_83xx_dcb_query_cee_param() local
    598 each = &mbx_out.type[j]; in qlcnic_83xx_dcb_query_cee_param()
    600 each->hdr_prio_pfc_map[0] = cmd.rsp.arg[k++]; in qlcnic_83xx_dcb_query_cee_param()
    601 each->hdr_prio_pfc_map[1] = cmd.rsp.arg[k++]; in qlcnic_83xx_dcb_query_cee_param()
    602 each->prio_pg_map[0] = cmd.rsp.arg[k++]; in qlcnic_83xx_dcb_query_cee_param()
    603 each->prio_pg_map[1] = cmd.rsp.arg[k++]; in qlcnic_83xx_dcb_query_cee_param()
    604 each->pg_bw_map[0] = cmd.rsp.arg[k++]; in qlcnic_83xx_dcb_query_cee_param()
    605 each->pg_bw_map[1] = cmd.rsp.arg[k++]; in qlcnic_83xx_dcb_query_cee_param()
    606 each->pg_tsa_map[0] = cmd.rsp.arg[k++]; in qlcnic_83xx_dcb_query_cee_param()
    607 each->pg_tsa_map[1] = cmd.rsp.arg[k++]; in qlcnic_83xx_dcb_query_cee_param()
    [all …]
|
/Linux-v5.4/Documentation/admin-guide/device-mapper/
D | statistics.rst |
    10 Each user-defined region specifies a starting sector, length and step.
    11 Individual statistics will be collected for each step-sized area within
    14 The I/O statistics counters for each step-sized area of a region are
    26 Each region has a corresponding unique identifier, which we call a
    31 on each other's data.
    55 the range is subdivided into areas each containing
    78 nanoseconds. For each range, the kernel will report the
    133 Print counters for each step-sized area of a region.
    146 Output format for each step-sized area of a region:
    210 Set the auxiliary data string to "foo bar baz" (the escape for each
|
/Linux-v5.4/drivers/net/ethernet/cavium/liquidio/
D | cn66xx_regs.h |
    103 /* 1 register (32-bit) - instr. size of each input queue. */
    121 /* 1 register (64-bit) - Back Pressure for each input queue - SLI_PKT0_IN_BP */
    124 /* Each Input Queue register is at a 16-byte Offset in BAR0 */
    133 * - 2 bits for each input ring. SLI_PKT_INSTR_RD_SIZE.
    138 * - 2 bits for each input ring. SLI_PKT_IN_PCIE_PORT.
    199 /* Each Output Queue register is at a 16-byte Offset in BAR0 */
    202 /* 1 register (32-bit) - 1 bit for each output queue
    208 /* 1 register (32-bit) - 1 bit for each output queue
    214 /* 1 register (64-bit) - 2 bits for each output queue
    220 /* 1 register (32-bit) - 1 bit for each output queue
    [all …]
|
/Linux-v5.4/Documentation/devicetree/bindings/c6x/
D | dscr.txt |
    19 For device state control (enable/disable), each device control is assigned an
    46 a lock register. Each tuple consists of the register offset, lock register
    56 MAC addresses are contained in two registers. Each element of a MAC address
    57 is contained in a single byte. This property has two tuples. Each tuple has
    65 Each tuple describes a range of identical bitfields used to control one or
    66 more devices (one bitfield per device). The layout of each tuple is:
    81 for device states controlled by the DSCR. Each tuple describes a range of
    83 bitfield per device). The layout of each tuple is:
|
/Linux-v5.4/Documentation/media/v4l-drivers/
D | vivid.rst |
    11 Up to 64 vivid instances can be created, each with up to 16 inputs and 16 outputs.
    13 Each input can be a webcam, TV capture device, S-Video capture device or an HDMI
    14 capture device. Each output can be an S-Video output device or an HDMI output
    60 which devices should each driver instance create. An array of
    61 hexadecimal values, one for each instance. The default is 0x1d3d.
    62 Each value is a bitmask with the following meaning:
    83 the number of inputs, one for each instance. By default 4 inputs
    84 are created for each video capture device. At most 16 inputs can be created,
    89 the input types for each instance, the default is 0xe4. This defines
    90 what the type of each input is when the inputs are created for each driver
    [all …]
|
/Linux-v5.4/Documentation/devicetree/bindings/remoteproc/
D | ti,keystone-rproc.txt |
    15 Each DSP Core sub-system is represented as a single DT node, and should also
    16 have an alias with the stem 'rproc' defined. Each node has a number of required
    31 - reg: Should contain an entry for each value in 'reg-names'.
    32 Each entry should have the memory region's start address
    36 - reg-names: Should contain strings with the following names, each
    54 - interrupts: Should contain an entry for each value in 'interrupt-names'.
    55 Each entry should have the interrupt source number used by
    58 'interrupt-parent' node. The purpose of each is as per the
    61 - interrupt-names: Should contain strings with the following names, each
|