/Linux-v5.10/drivers/acpi/ |
D | processor_perflib.c |
     83  if (ppc >= pr->performance->state_count ||  in acpi_processor_get_platform_limit()
     88  pr->performance->states[ppc].core_frequency * 1000);  in acpi_processor_get_platform_limit()
    116  if (ignore_ppc || !pr->performance) {  in acpi_processor_ppc_has_changed()
    146  if (!pr || !pr->performance || !pr->performance->state_count)  in acpi_processor_get_bios_limit()
    148  *limit = pr->performance->states[pr->performance_platform_limit].  in acpi_processor_get_bios_limit()
    228  memcpy(&pr->performance->control_register, obj.buffer.pointer,  in acpi_processor_get_performance_control()
    245  memcpy(&pr->performance->status_register, obj.buffer.pointer,  in acpi_processor_get_performance_control()
    317  pr->performance->state_count = pss->package.count;  in acpi_processor_get_performance_states()
    318  pr->performance->states =  in acpi_processor_get_performance_states()
    322  if (!pr->performance->states) {  in acpi_processor_get_performance_states()
    [all …]
|
/Linux-v5.10/Documentation/admin-guide/acpi/ |
D | cppc_sysfs.rst |
     11  performance of a logical processor on a contiguous and abstract performance
     12  scale. CPPC exposes a set of registers to describe abstract performance scale,
     13  to request performance levels and to measure per-cpu delivered performance.
     38  * highest_perf : Highest performance of this processor (abstract scale).
     39  * nominal_perf : Highest sustained performance of this processor
     41  * lowest_nonlinear_perf : Lowest performance of this processor with nonlinear
     43  * lowest_perf : Lowest performance of this processor (abstract scale).
     47  The above frequencies should only be used to report processor performance in
     51  * feedback_ctrs : Includes both Reference and delivered performance counter.
     52  Reference counter ticks up proportional to processor's reference performance.
     [all …]
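The cppc_sysfs.rst hits above describe per-CPU abstract performance attributes exported through sysfs. A minimal userspace sketch for reading them is shown below; it assumes the /sys/devices/system/cpu/cpuN/acpi_cppc/ layout documented in that file, so treat the path and attribute names as assumptions if your kernel differs.

/*
 * Hedged sketch: dump the CPPC abstract performance levels for cpu0,
 * assuming the sysfs layout documented in cppc_sysfs.rst
 * (/sys/devices/system/cpu/cpuN/acpi_cppc/<attr>).
 */
#include <stdio.h>

static int read_cppc_attr(int cpu, const char *attr, unsigned long long *val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/acpi_cppc/%s", cpu, attr);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%llu", val) != 1) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return 0;
}

int main(void)
{
	const char *attrs[] = {
		"highest_perf", "nominal_perf",
		"lowest_nonlinear_perf", "lowest_perf",
	};
	unsigned long long val;
	unsigned int i;

	for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++) {
		if (read_cppc_attr(0, attrs[i], &val) == 0)
			printf("cpu0 %-22s %llu\n", attrs[i], val);
		else
			printf("cpu0 %-22s unavailable\n", attrs[i]);
	}
	return 0;
}

On kernels or machines without ACPI CPPC support the attributes are simply reported as unavailable.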
|
D | fan_performance_states.rst |
     10  These attributes list properties of fan performance states.
     37  where each of the "state*" files represents one performance state of the fan
     47  to this performance state (0-9).
|
/Linux-v5.10/Documentation/power/ |
D | energy-model.rst |
     11  the power consumed by devices at various performance levels, and the kernel
     53  'performance domain' in the system. A performance domain is a group of CPUs
     54  whose performance is scaled together. Performance domains generally have a
     55  1-to-1 mapping with CPUFreq policies. All CPUs in a performance domain are
     56  required to have the same micro-architecture. CPUs in different performance
     69  2.2 Registration of performance domains
     72  Drivers are expected to register performance domains into the EM framework by
     79  for each performance state. The callback function provided by the driver is free
     82  performance domains using cpumask. For other devices than CPUs the last
     89  2.3 Accessing performance domains
     [all …]
|
/Linux-v5.10/arch/x86/events/ |
D | Kconfig |
      5  tristate "Intel uncore performance events"
      9  Include support for Intel uncore performance events. These are
     13  tristate "Intel/AMD rapl performance events"
     17  Include support for Intel and AMD rapl performance events for power
     21  tristate "Intel cstate performance events"
     25  Include support for Intel cstate performance events for power
|
/Linux-v5.10/Documentation/admin-guide/pm/ |
D | intel-speed-select.rst |
      8  collection of features that give more granular control over CPU performance.
      9  With Intel(R) SST, one server can be configured for power and performance for a
     15  …tel.com/docs/networkbuilders/intel-speed-select-technology-base-frequency-enhancing-performance.pdf
     25  how these commands change the power and performance profile of the system under
     83  performance requirements. This helps users during deployment as they do not have
     86  that allows multiple optimized performance profiles per system. Each profile
     89  performance profile and meet CPU online/offline requirement, the user can expect
     93  Number of performance levels
     96  There can be multiple performance profiles on a system. To get the number of
    111  On this system under test, there are 4 performance profiles in addition to the
    [all …]
|
D | intel_epb.rst |
     26  a value of 0 corresponds to a hint preference for highest performance
     31  with one of the strings: "performance", "balance-performance", "normal",
|
D | intel_pstate.rst |
     17  :doc:`CPU performance scaling subsystem <cpufreq>` in the Linux kernel
     25  than just an operating frequency or an operating performance point (see the
     30  uses frequencies for identifying operating performance points of CPUs and
     58  active mode, it uses its own internal performance scaling governor algorithm or
     61  a certain performance scaling algorithm. Which of them will be in effect
     88  active mode: ``powersave`` and ``performance``. The way they both operate
     94  Namely, if that option is set, the ``performance`` algorithm will be used by
    117  HWP + ``performance``
    123  internal P-state selection logic is expected to focus entirely on performance.
    127  the EPP/EPB to a value different from 0 ("performance") via ``sysfs`` in this
    [all …]
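The intel_pstate.rst hits above name the two active-mode P-state selection algorithms, ``powersave`` and ``performance``, which are selected through the generic cpufreq ``scaling_governor`` attribute. A hedged sketch that requests ``performance`` for cpu0 follows (root required; the standard per-policy cpufreq sysfs layout is assumed).

/*
 * Hedged sketch: request the "performance" algorithm for cpu0 by writing
 * the standard cpufreq scaling_governor attribute. With intel_pstate in
 * active mode this selects its internal "performance" P-state policy.
 * Needs root privileges.
 */
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor";
	char buf[64] = "";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fputs("performance\n", f) == EOF) {
		perror("write");
		fclose(f);
		return 1;
	}
	fclose(f);

	/* Read the attribute back to confirm the governor actually changed. */
	f = fopen(path, "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("cpu0 scaling_governor: %s", buf);
		fclose(f);
	}
	return 0;
}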
|
/Linux-v5.10/Documentation/admin-guide/ |
D | perf-security.rst |
     14  depends on the nature of data that perf_events performance monitoring
     15  units (PMU) [2]_ and Perf collect and expose for performance analysis.
     16  Collected system and performance data may be split into several
     21  its topology, used kernel and Perf versions, performance monitoring
     30  faults, CPU migrations), architectural hardware performance counters
     46  So, perf_events performance monitoring and observability operations are
     56  all kernel security permission checks so perf_events performance
     70  as privileged processes with respect to perf_events performance
     73  privilege [13]_ (POSIX 1003.1e: 2.2.2.39) for performance monitoring and
     85  denial logging related to usage of performance monitoring and observability.
     [all …]
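perf-security.rst, quoted above, governs who may use the perf_events interface at all. The hedged sketch below shows the kind of operation being gated — counting CPU cycles for a short busy loop via perf_event_open(2); whether it succeeds for an unprivileged user depends on perf_event_paranoid, CAP_PERFMON and the other controls that document discusses.

/*
 * Hedged sketch: count CPU cycles for the calling thread with
 * perf_event_open(2). Whether this is permitted for an unprivileged
 * user is exactly what perf-security.rst is about.
 */
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.disabled = 1;
	attr.exclude_kernel = 1;	/* count user space only */
	attr.exclude_hv = 1;

	/* pid = 0, cpu = -1: measure this thread on any CPU. */
	fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	for (volatile unsigned long i = 0; i < 10000000UL; i++)
		;	/* some user-space work to be measured */

	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	if (read(fd, &count, sizeof(count)) == (ssize_t)sizeof(count))
		printf("cycles: %llu\n", (unsigned long long)count);
	close(fd);
	return 0;
}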
|
/Linux-v5.10/tools/power/cpupower/bench/ |
D | README-BENCH |
      7  - Identify worst case performance loss when doing dynamic frequency
     12  - Identify cpufreq related performance regressions between kernels
     18  - Power saving related regressions (In fact as better the performance
     28  For that purpose, it compares the performance governor to a configured
     56  takes on this machine and needs to be run in a loop using the performance
     58  Then the above test runs are processed using the performance governor
     61  on full performance and you get the overall performance loss.
     80  trigger of the cpufreq-bench, you will see no performance loss (compare with
     84  will always see 50% loads and you get worst performance impact never
|
/Linux-v5.10/drivers/xen/ |
D | xen-acpi-processor.c |
    144  dst_states = kcalloc(_pr->performance->state_count,  in xen_copy_pss_data()
    149  dst_perf->state_count = _pr->performance->state_count;  in xen_copy_pss_data()
    150  for (i = 0; i < _pr->performance->state_count; i++) {  in xen_copy_pss_data()
    152  memcpy(&(dst_states[i]), &(_pr->performance->states[i]),  in xen_copy_pss_data()
    168  dst->shared_type = _pr->performance->shared_type;  in xen_copy_psd_data()
    170  pdomain = &(_pr->performance->domain_info);  in xen_copy_psd_data()
    219  xen_copy_pct_data(&(_pr->performance->control_register),  in push_pxx_to_hypervisor()
    221  xen_copy_pct_data(&(_pr->performance->status_register),  in push_pxx_to_hypervisor()
    246  perf = _pr->performance;  in push_pxx_to_hypervisor()
    279  if (_pr->performance && _pr->performance->states)  in upload_pm_data()
    [all …]
|
/Linux-v5.10/drivers/perf/hisilicon/ |
D | Kconfig |
      6  Support for HiSilicon SoC L3 Cache performance monitor, Hydra Home
      7  Agent performance monitor and DDR Controller performance monitor.
|
/Linux-v5.10/tools/perf/Documentation/ |
D | perf-bench.txt |
     53  System call performance (throughput).
     56  Memory access performance.
     76  Suite for evaluating performance of scheduler and IPC mechanisms.
    146  Suite for evaluating performance of core system call throughput (both usecs/op and ops/sec metrics).
    154  Suite for evaluating performance of simple memory copy in various ways.
    178  Suite for evaluating performance of simple memory set in various ways.
    234  Suite for evaluating perf's event synthesis performance.
|
D | perf-kvm.txt |
     23  a performance counter profile of guest os in realtime
     26  'perf kvm record <command>' to record the performance counter profile
     39  'perf kvm report' to display the performance counter profile information
     42  'perf kvm diff' to display the performance difference amongst two perf.data
     51  'perf kvm stat <command>' to run a command and gather performance counter
     77  Collect host side performance profile.
     79  Collect guest side performance profile.
|
/Linux-v5.10/kernel/rcu/ |
D | Kconfig.debug |
     27  tristate "performance tests for RCU"
     36  This option provides a kernel module that runs performance
     40  Say Y here if you want RCU performance tests to be built into
     42  Say M if you want the RCU performance tests to build as a module.
     74  This option provides a kernel module that runs performance tests
     79  Say Y here if you want these performance tests built into the kernel.
    126  lifetime and kills performance. Don't try this on large
|
/Linux-v5.10/drivers/perf/ |
D | Kconfig |
     56  Say y if you want to use CPU performance monitors on ARM-based
     76  Provides support for performance monitor unit in ARM DynamIQ Shared
     85  Provides support for the DDR performance monitor in i.MX8, which
     94  Provides support for the L2 cache performance monitor unit (PMU)
    104  Provides support for the L3 cache performance monitor unit (PMU)
    123  Say y if you want to use APM X-Gene SoC performance monitors.
|
/Linux-v5.10/Documentation/scheduler/ |
D | sched-energy.rst |
     38  performance [inst/s]
     48  while still getting 'good' performance. It is essentially an alternative
     49  optimization objective to the current performance-only objective for the
     51  performance.
     78  task/CPU is, and to take this into consideration when evaluating performance vs
     84  per 'performance domain' in the system (see Documentation/power/energy-model.rst
     85  for further details about performance domains).
     89  scheduler maintains a singly linked list of all performance domains intersecting
     95  necessarily match those of performance domains, the lists of different root
     99  Let us consider a platform with 12 CPUs, split in 3 performance domains
     [all …]
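sched-energy.rst expresses the trade-off as performance [inst/s] over power [W], i.e. instructions per joule. The small arithmetic sketch below uses two invented operating points (not data from any real platform) to show why a slower, lower-power point can complete the same work with less energy.

/*
 * Hedged illustration of the energy-efficiency ratio discussed in
 * sched-energy.rst: efficiency = performance [inst/s] / power [W].
 * The operating points are made-up numbers, purely for illustration.
 */
#include <stdio.h>

struct opp {
	const char *name;
	double perf;	/* instructions per second */
	double power;	/* watts */
};

int main(void)
{
	const struct opp opps[] = {
		{ "high", 2.0e9, 4.0 },	/* fast but power hungry */
		{ "low",  1.0e9, 1.5 },	/* slower, lower power   */
	};
	const double work = 1.0e10;	/* instructions to execute */

	for (unsigned int i = 0; i < 2; i++) {
		double time = work / opps[i].perf;		/* seconds */
		double energy = opps[i].power * time;		/* joules  */
		double eff = opps[i].perf / opps[i].power;	/* inst/J  */

		printf("%-4s  time %5.1f s  energy %5.1f J  efficiency %.2e inst/J\n",
		       opps[i].name, time, energy, eff);
	}
	return 0;
}

With these numbers the "high" point finishes in 5 s using 20 J, while the "low" point takes 10 s but only 15 J — the kind of trade-off the energy-aware scheduler weighs against performance requirements.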
|
/Linux-v5.10/include/acpi/ |
D | processor.h |
    166  u16 performance;  member
    206  u8 performance:1;  member
    230  struct acpi_processor_performance *performance;  member
    251  __percpu *performance);
    254  *performance, unsigned int cpu);
|
/Linux-v5.10/kernel/ |
D | Kconfig.hz |
     24  with lots of processors that may show reduced performance if
     30  250 Hz is a good compromise choice allowing server performance
     38  300 Hz is a good compromise choice allowing server performance
|
/Linux-v5.10/Documentation/networking/device_drivers/ethernet/neterion/ |
D | s2io.rst |
     63  significant performance improvement on certain platforms (SGI Altix,
     67  (IA64, Xeon) resulting in noticeable performance improvement (up to 7%
    123  good performance::
    133  Transmit performance:
    164  Receive performance:
    173  b. Use 2-buffer mode. This results in large performance boost on
|
/Linux-v5.10/Documentation/admin-guide/mm/ |
D | numaperf.rst |
      9  as CPU cache coherence, but may have different performance. For example,
     13  under different domains, or "nodes", based on locality and performance
     35  performance when accessing a given memory target. Each initiator-target
     55  nodes' access characteristics share the same performance relative to other
     69  be allocated from based on the node's performance characteristics. If
     79  The performance characteristics the kernel provides for the local initiators
    104  performance characteristics in order to provide large address space of
    130  attributes in order to maximize the performance out of such a setup.
|
/Linux-v5.10/Documentation/scsi/ |
D | link_power_management_policy.rst |
     15  sacrifice some performance due to increased latency
     19  the controller to have performance be a priority
|
/Linux-v5.10/Documentation/ABI/testing/ |
D | sysfs-bus-event_source-devices-events |
     15  Description: Generic performance monitoring events
     17  A collection of performance monitoring events that may be
     33  Description: Per-pmu performance monitoring events specific to the running system
     37  performance monitoring event supported by the <pmu>. The name
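The ABI entries above describe per-PMU event files under /sys/bus/event_source/devices/<pmu>/events/. A hedged sketch that lists them for one PMU follows; "cpu" is assumed as the default PMU name, which not every system exposes, so another name can be passed as the first argument.

/*
 * Hedged sketch: list the event aliases a PMU exports under
 * /sys/bus/event_source/devices/<pmu>/events/, per the sysfs ABI entry
 * above. "cpu" is an assumption; pass another PMU name if needed.
 */
#include <dirent.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *pmu = argc > 1 ? argv[1] : "cpu";
	char path[256];
	struct dirent *de;
	DIR *dir;

	snprintf(path, sizeof(path),
		 "/sys/bus/event_source/devices/%s/events", pmu);
	dir = opendir(path);
	if (!dir) {
		perror(path);
		return 1;
	}
	while ((de = readdir(dir)) != NULL) {
		if (de->d_name[0] == '.')
			continue;
		printf("%s/%s\n", pmu, de->d_name);
	}
	closedir(dir);
	return 0;
}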
|
D | sysfs-platform-hidma-mgmt |
     64  Choosing a higher number gives better performance but
     65  can also cause performance reduction to other peripherals
     85  Choosing a higher number gives better performance but
     86  can also cause performance reduction to other peripherals
|
/Linux-v5.10/Documentation/devicetree/bindings/nds32/ |
D | atl2c.txt |
      4  for high performance systems, such as those designs with AndesCore processors.
      5  Level-2 cache controller in general enhances overall system performance
|