Searched refs:workloads (Results 1 – 25 of 63) sorted by relevance
/Linux-v4.19/drivers/crypto/qat/
  Kconfig
    21: for accelerating crypto and compression workloads.
    32: for accelerating crypto and compression workloads.
    43: for accelerating crypto and compression workloads.
    56: Virtual Function for accelerating crypto and compression workloads.
    68: Virtual Function for accelerating crypto and compression workloads.
    80: Virtual Function for accelerating crypto and compression workloads.

/Linux-v4.19/Documentation/timers/
  NO_HZ.txt
    24: workloads, you will normally -not- want this option.
    36: right approach, for example, in heavy workloads with lots of tasks
    39: hundreds of microseconds). For these types of workloads, scheduling
    53: are running light workloads, you should therefore read the following
    113: computationally intensive short-iteration workloads: If any CPU is
    229: aggressive real-time workloads, which have the option of disabling
    231: some workloads will no doubt want to use adaptive ticks to
    233: options for these workloads:
    253: workloads, which have few such transitions. Careful benchmarking
    254: will be required to determine whether or not other workloads

/Linux-v4.19/drivers/crypto/cavium/nitrox/
  Kconfig
    17: for accelerating crypto workloads.

/Linux-v4.19/drivers/gpu/drm/i915/gvt/
  scheduler.c
    1065: kmem_cache_destroy(s->workloads);  [in intel_vgpu_clean_submission()]
    1113: s->workloads = kmem_cache_create_usercopy("gvt-g_vgpu_workload",  [in intel_vgpu_setup_submission()]
    1120: if (!s->workloads) {  [in intel_vgpu_setup_submission()]
    1205: kmem_cache_free(s->workloads, workload);  [in intel_vgpu_destroy_workload()]
    1214: workload = kmem_cache_zalloc(s->workloads, GFP_KERNEL);  [in alloc_workload()]
    1377: kmem_cache_free(s->workloads, workload);  [in intel_vgpu_create_workload()]
  gvt.h
    159: struct kmem_cache *workloads;  [member]

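The scheduler.c hits above span the full lifecycle of the vGPU workload slab cache: it is created in intel_vgpu_setup_submission(), objects are zero-allocated in alloc_workload(), returned to the cache in intel_vgpu_destroy_workload() and intel_vgpu_create_workload(), and the cache itself is destroyed in intel_vgpu_clean_submission(). The sketch below is only a minimal, hypothetical illustration of that kmem_cache pattern; the names demo_workload, demo_cache, and the demo_* functions are not taken from the gvt code, which additionally uses the _usercopy variant of kmem_cache_create().

#include <linux/errno.h>
#include <linux/slab.h>

/* Hypothetical stand-in for the per-vGPU workload object. */
struct demo_workload {
        void *shadow_ctx;
};

static struct kmem_cache *demo_cache;

/* Create a dedicated slab cache (cf. scheduler.c:1113, which uses the
 * _usercopy variant to whitelist a user-copyable region). */
static int demo_setup(void)
{
        demo_cache = kmem_cache_create("demo_workload",
                                       sizeof(struct demo_workload), 0,
                                       SLAB_HWCACHE_ALIGN, NULL);
        return demo_cache ? 0 : -ENOMEM;
}

/* Zeroed allocation from the cache (cf. alloc_workload(), line 1214). */
static struct demo_workload *demo_alloc(void)
{
        return kmem_cache_zalloc(demo_cache, GFP_KERNEL);
}

/* Return an object to the cache (cf. lines 1205 and 1377). */
static void demo_free(struct demo_workload *w)
{
        kmem_cache_free(demo_cache, w);
}

/* Destroy the cache once every object has been freed (cf. line 1065). */
static void demo_teardown(void)
{
        kmem_cache_destroy(demo_cache);
}

In the hits, the s->workloads pointer that these calls operate on is the struct kmem_cache * member declared at gvt.h:159.
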
/Linux-v4.19/Documentation/vm/
  cleancache.rst
    12: many workloads in many environments at a negligible cost.
    119: Cleancache provides a significant performance benefit to many workloads
    142: well-publicized special-case workloads). Cleancache -- and frontswap --
    218: Briefly, performance gains can be significant on most workloads,
    222: overhead is negligible even in worst case workloads. Basically
  frontswap.rst
    83: Frontswap significantly increases performance in many such workloads by
    104: on some workloads under high memory pressure.
    120: well-publicized special-case workloads).

/Linux-v4.19/drivers/cpufreq/
  Kconfig.x86
    148: the CPUs' workloads are. CPU-bound workloads will be more sensitive
    150: workloads will be less sensitive -- they will not necessarily perform

/Linux-v4.19/lib/
  Kconfig.kasan
    53: memory accesses. This is faster than outline (in some workloads

/Linux-v4.19/Documentation/scheduler/
  sched-design-CFS.txt
    96: "server" (i.e., good batching) workloads. It defaults to a setting suitable
    97: for desktop workloads. SCHED_BATCH is handled by the CFS scheduler module too.
    105: than the previous vanilla scheduler: both types of workloads are isolated much

/Linux-v4.19/Documentation/md/
  raid5-cache.txt
    56: completely avoid the overhead, so it's very helpful for some workloads. A
    72: mode depending on the workloads. It's recommended to use a cache disk with at

/Linux-v4.19/fs/squashfs/
  Kconfig
    77: poor performance on parallel I/O workloads when using multiple CPU
    91: poor performance on parallel I/O workloads when using multiple CPU

/Linux-v4.19/tools/perf/Documentation/
  perf-stat.txt
    255: determine bottle necks in the CPU pipeline for CPU bound workloads,
    267: mode like -I 1000, as the bottleneck of workloads can change often.
    341: For workload sessions we also display time the workloads spent in
  perf-bench.txt
    187: Suite for evaluating NUMA workloads.

/Linux-v4.19/Documentation/block/
  writeback_cache_control.txt
    11: behavior obviously speeds up various workloads, but it means the operating
  bfq-iosched.txt
    71: background workloads are being executed:
    102: sequential workloads considered in our tests. With random workloads,
    103: and with all the workloads on flash-based devices, BFQ achieves,
    121: possibly heavy workloads are being served, BFQ guarantees:
  cfq-iosched.txt
    18: (for sequential workloads) and service trees (for random workloads) before

/Linux-v4.19/Documentation/x86/
  orc-unwinder.txt
    31: Gorman [1] have shown a slowdown of 5-10% for some workloads.
    43: footprint. That can transform to even higher speedups for workloads

/Linux-v4.19/tools/power/cpupower/bench/
  README-BENCH
    21: - Real world (workloads)

/Linux-v4.19/Documentation/ABI/testing/
  sysfs-block
    131: workloads where a high number of I/O operations is
    143: preferred request size for workloads where sustained

/Linux-v4.19/Documentation/device-mapper/
  cache-policies.txt
    51: workloads. smq also does not have any cumbersome tuning knobs.

/Linux-v4.19/Documentation/filesystems/pohmelfs/
  design_notes.txt
    55: workloads and can fully utilize the bandwidth to the servers when doing bulk

/Linux-v4.19/Documentation/RCU/
  checklist.txt
    199: to real-time workloads. Use of the expedited primitives should
    202: However, real-time workloads can use rcupdate.rcu_normal kernel
    212: of the system, especially to real-time workloads running on
    411: real-time workloads than is synchronize_rcu_expedited(),

/Linux-v4.19/Documentation/filesystems/ext4/
  ext4.rst
    53: important to try multiple workloads; very often a subtle change in a
    62: data=writeback' can be faster for some workloads. (Note however that
    67: metadata-intensive workloads.
    277: multi-threaded, synchronous workloads on very

/Linux-v4.19/Documentation/locking/
  mutex-design.txt
    70: number of workloads. Note that this technique is also used for rw-semaphores.