/Linux-v5.4/drivers/acpi/ |
D | processor_perflib.c |
     83  if (ppc >= pr->performance->state_count ||  in acpi_processor_get_platform_limit()
     88  pr->performance->states[ppc].core_frequency * 1000);  in acpi_processor_get_platform_limit()
    116  if (ignore_ppc || !pr->performance) {  in acpi_processor_ppc_has_changed()
    146  if (!pr || !pr->performance || !pr->performance->state_count)  in acpi_processor_get_bios_limit()
    148  *limit = pr->performance->states[pr->performance_platform_limit].  in acpi_processor_get_bios_limit()
    228  memcpy(&pr->performance->control_register, obj.buffer.pointer,  in acpi_processor_get_performance_control()
    245  memcpy(&pr->performance->status_register, obj.buffer.pointer,  in acpi_processor_get_performance_control()
    317  pr->performance->state_count = pss->package.count;  in acpi_processor_get_performance_states()
    318  pr->performance->states =  in acpi_processor_get_performance_states()
    322  if (!pr->performance->states) {  in acpi_processor_get_performance_states()
    [all …]
|
/Linux-v5.4/Documentation/admin-guide/acpi/ |
D | cppc_sysfs.rst |
     11  performance of a logical processor on a contigious and abstract performance
     12  scale. CPPC exposes a set of registers to describe abstract performance scale,
     13  to request performance levels and to measure per-cpu delivered performance.
     38  * highest_perf : Highest performance of this processor (abstract scale).
     39  * nominal_perf : Highest sustained performance of this processor
     41  * lowest_nonlinear_perf : Lowest performance of this processor with nonlinear
     43  * lowest_perf : Lowest performance of this processor (abstract scale).
     47  The above frequencies should only be used to report processor performance in
     51  * feedback_ctrs : Includes both Reference and delivered performance counter.
     52  Reference counter ticks up proportional to processor's reference performance.
    [all …]
|
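The cppc_sysfs.rst hits above describe the per-CPU ``acpi_cppc`` attributes and the feedback counters used to measure delivered performance. Below is a minimal userspace sketch of that measurement; the attribute names follow the documented list, while the cpu0 base path and the ``ref:<N> del:<N>`` output format of ``feedback_ctrs`` are assumptions taken from the documentation's examples, not verified against v5.4.

  /* Hedged sketch: estimate the delivered performance of CPU0 from two
   * snapshots of the CPPC feedback counters, per the relation described in
   * cppc_sysfs.rst:
   *   delivered_perf ~= reference_perf * delta(delivered) / delta(reference)
   * The sysfs path and the "ref:<N> del:<N>" format are assumptions.
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  static const char *base = "/sys/devices/system/cpu/cpu0/acpi_cppc";

  static unsigned long long read_ull(const char *name)
  {
          char path[256];
          unsigned long long v = 0;
          FILE *f;

          snprintf(path, sizeof(path), "%s/%s", base, name);
          f = fopen(path, "r");
          if (!f || fscanf(f, "%llu", &v) != 1) {
                  perror(name);
                  exit(1);
          }
          fclose(f);
          return v;
  }

  static void read_ctrs(unsigned long long *ref, unsigned long long *del)
  {
          char path[256];
          FILE *f;

          snprintf(path, sizeof(path), "%s/feedback_ctrs", base);
          f = fopen(path, "r");
          /* Assumed output format: "ref:<N> del:<N>" */
          if (!f || fscanf(f, "ref:%llu del:%llu", ref, del) != 2) {
                  perror("feedback_ctrs");
                  exit(1);
          }
          fclose(f);
  }

  int main(void)
  {
          unsigned long long ref0, del0, ref1, del1;
          unsigned long long reference_perf = read_ull("reference_perf");

          read_ctrs(&ref0, &del0);
          sleep(1);                       /* measurement interval */
          read_ctrs(&ref1, &del1);

          if (ref1 == ref0) {
                  fprintf(stderr, "reference counter did not advance\n");
                  return 1;
          }
          printf("delivered_perf ~= %llu (abstract scale)\n",
                 reference_perf * (del1 - del0) / (ref1 - ref0));
          return 0;
  }

The result is on the same abstract scale as highest_perf/nominal_perf/lowest_perf, so it can be compared directly against those limits.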
/Linux-v5.4/Documentation/power/ |
D | energy-model.rst |
      9  the power consumed by CPUs at various performance levels, and the kernel
     50  The EM framework manages power cost tables per 'performance domain' in the
     51  system. A performance domain is a group of CPUs whose performance is scaled
     53  policies. All CPUs in a performance domain are required to have the same
     54  micro-architecture. CPUs in different performance domains can have different
     67  2.2 Registration of performance domains
     70  Drivers are expected to register performance domains into the EM framework by
     76  Drivers must specify the CPUs of the performance domains using the cpumask
     85  2.3 Accessing performance domains
     90  the performance domains, and kept in memory untouched.
    [all …]
|
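The energy-model.rst hits above cover registration of performance domains. A minimal sketch of that flow, assuming the v5.4 EM API (em_register_perf_domain() plus an active_power() callback registered through EM_DATA_CB); the ex_* names and the three frequency/power pairs are made up for illustration, and a real driver would query its platform or firmware instead:

  #include <linux/cpufreq.h>
  #include <linux/energy_model.h>
  #include <linux/errno.h>
  #include <linux/kernel.h>

  /* Hypothetical OPP table: frequency in kHz, active power in mW. */
  static const unsigned long ex_freq_khz[] = {  500000, 1000000, 1500000 };
  static const unsigned long ex_power_mw[] = {      80,      200,     450 };

  static int ex_active_power(unsigned long *power, unsigned long *freq, int cpu)
  {
          int i;

          /* Ceil the requested frequency to the next made-up OPP and report it. */
          for (i = 0; i < ARRAY_SIZE(ex_freq_khz); i++) {
                  if (ex_freq_khz[i] >= *freq) {
                          *freq = ex_freq_khz[i];
                          *power = ex_power_mw[i];
                          return 0;
                  }
          }
          return -EINVAL;
  }

  static int ex_cpufreq_init(struct cpufreq_policy *policy)
  {
          struct em_data_callback em_cb = EM_DATA_CB(ex_active_power);

          /* One performance domain per cpufreq policy; the EM core copies the mask. */
          return em_register_perf_domain(policy->cpus, ARRAY_SIZE(ex_freq_khz),
                                         &em_cb);
  }

Per the documentation, the registered tables are then read-only to the rest of the kernel, which matches hit 90 ("kept in memory untouched").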
/Linux-v5.4/arch/x86/events/ |
D | Kconfig |
      5  tristate "Intel uncore performance events"
      9  Include support for Intel uncore performance events. These are
     13  tristate "Intel rapl performance events"
     17  Include support for Intel rapl performance events for power
     21  tristate "Intel cstate performance events"
     25  Include support for Intel cstate performance events for power
|
/Linux-v5.4/Documentation/admin-guide/ |
D | perf-security.rst |
     14  depends on the nature of data that perf_events performance monitoring
     15  units (PMU) [2]_ and Perf collect and expose for performance analysis.
     16  Collected system and performance data may be split into several
     21  its topology, used kernel and Perf versions, performance monitoring
     30  faults, CPU migrations), architectural hardware performance counters
     46  properly. So, perf_events/Perf performance monitoring is the subject for
     56  all kernel security permission checks so perf_events performance
     70  as privileged processes with respect to perf_events performance
     81  performance analysis of monitored processes or a system. For example,
     91  performance monitoring without scope limits. The following steps can be
    [all …]
|
/Linux-v5.4/tools/power/cpupower/bench/ |
D | README-BENCH |
      7  - Identify worst case performance loss when doing dynamic frequency
     12  - Identify cpufreq related performance regressions between kernels
     18  - Power saving related regressions (In fact as better the performance
     28  For that purpose, it compares the performance governor to a configured
     56  takes on this machine and needs to be run in a loop using the performance
     58  Then the above test runs are processed using the performance governor
     61  on full performance and you get the overall performance loss.
     80  trigger of the cpufreq-bench, you will see no performance loss (compare with
     84  will always see 50% loads and you get worst performance impact never
|
/Linux-v5.4/drivers/xen/ |
D | xen-acpi-processor.c |
    144  dst_states = kcalloc(_pr->performance->state_count,  in xen_copy_pss_data()
    149  dst_perf->state_count = _pr->performance->state_count;  in xen_copy_pss_data()
    150  for (i = 0; i < _pr->performance->state_count; i++) {  in xen_copy_pss_data()
    152  memcpy(&(dst_states[i]), &(_pr->performance->states[i]),  in xen_copy_pss_data()
    168  dst->shared_type = _pr->performance->shared_type;  in xen_copy_psd_data()
    170  pdomain = &(_pr->performance->domain_info);  in xen_copy_psd_data()
    219  xen_copy_pct_data(&(_pr->performance->control_register),  in push_pxx_to_hypervisor()
    221  xen_copy_pct_data(&(_pr->performance->status_register),  in push_pxx_to_hypervisor()
    246  perf = _pr->performance;  in push_pxx_to_hypervisor()
    279  if (_pr->performance && _pr->performance->states)  in upload_pm_data()
    [all …]
|
/Linux-v5.4/drivers/perf/ |
D | Kconfig |
     49  Say y if you want to use CPU performance monitors on ARM-based
     69  Provides support for performance monitor unit in ARM DynamIQ Shared
     78  Provides support for the DDR performance monitor in i.MX8, which
     86  Support for HiSilicon SoC uncore performance monitoring
     93  Provides support for the L2 cache performance monitor unit (PMU)
    103  Provides support for the L3 cache performance monitor unit (PMU)
    122  Say y if you want to use APM X-Gene SoC performance monitors.
|
/Linux-v5.4/Documentation/scheduler/ |
D | sched-energy.rst |
     38  performance [inst/s]
     48  while still getting 'good' performance. It is essentially an alternative
     49  optimization objective to the current performance-only objective for the
     51  performance.
     78  task/CPU is, and to take this into consideration when evaluating performance vs
     84  per 'performance domain' in the system (see Documentation/power/energy-model.rst
     85  for futher details about performance domains).
     89  scheduler maintains a singly linked list of all performance domains intersecting
     95  necessarily match those of performance domains, the lists of different root
     99  Let us consider a platform with 12 CPUs, split in 3 performance domains
    [all …]
|
/Linux-v5.4/include/acpi/ |
D | processor.h |
    166  u16 performance;  member
    206  u8 performance:1;  member
    230  struct acpi_processor_performance *performance;  member
    251  __percpu *performance);
    254  *performance, unsigned int cpu);
|
/Linux-v5.4/tools/perf/Documentation/ |
D | perf-kvm.txt |
     23  a performance counter profile of guest os in realtime
     26  'perf kvm record <command>' to record the performance counter profile
     39  'perf kvm report' to display the performance counter profile information
     42  'perf kvm diff' to displays the performance difference amongst two perf.data
     51  'perf kvm stat <command>' to run a command and gather performance counter
     76  Collect host side performance profile.
     78  Collect guest side performance profile.
|
D | perf-bench.txt |
     53  Memory access performance.
     70  Suite for evaluating performance of scheduler and IPC mechanisms.
    140  Suite for evaluating performance of simple memory copy in various ways.
    164  Suite for evaluating performance of simple memory set in various ways.
|
/Linux-v5.4/Documentation/networking/device_drivers/neterion/ |
D | s2io.txt |
     48  significant performance improvement on certain platforms(SGI Altix,
     52  (IA64, Xeon) resulting in noticeable performance improvement(up to 7%
     92  good performance.
     99  Transmit performance:
    120  Receive performance:
    125  b. Use 2-buffer mode. This results in large performance boost on
|
/Linux-v5.4/Documentation/admin-guide/pm/ |
D | intel_epb.rst |
     26  a value of 0 corresponds to a hint preference for highest performance
     31  with one of the strings: "performance", "balance-performance", "normal",
|
D | intel_pstate.rst |
     17  :doc:`CPU performance scaling subsystem <cpufreq>` in the Linux kernel
     25  than just an operating frequency or an operating performance point (see the
     30  uses frequencies for identifying operating performance points of CPUs and
     84  active mode: ``powersave`` and ``performance``. The way they both operate
     90  Namely, if that option is set, the ``performance`` algorithm will be used by
    113  HWP + ``performance``
    119  internal P-state selection logic is expected to focus entirely on performance.
    136  internal P-state selection logic to be less performance-focused.
    150  ``powersave`` or ``performance``, depending on the ``scaling_governor`` policy
    155  ``performance``
    [all …]
|
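The intel_pstate.rst hits above note that, in active mode, the ``powersave`` and ``performance`` P-state selection algorithms are chosen through the cpufreq ``scaling_governor`` policy attribute. A small userspace sketch that switches every policy to ``performance``, assuming the usual sysfs layout (must run as root; error handling kept minimal):

  /* Hedged sketch: write "performance" to every cpufreq policy's
   * scaling_governor, which in intel_pstate active mode selects the
   * performance P-state selection algorithm.
   */
  #include <glob.h>
  #include <stdio.h>

  int main(void)
  {
          glob_t g;
          size_t i;

          if (glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor",
                   0, NULL, &g) != 0) {
                  fprintf(stderr, "no cpufreq policies found\n");
                  return 1;
          }

          for (i = 0; i < g.gl_pathc; i++) {
                  FILE *f = fopen(g.gl_pathv[i], "w");

                  if (!f || fputs("performance\n", f) == EOF)
                          fprintf(stderr, "failed to set %s\n", g.gl_pathv[i]);
                  if (f)
                          fclose(f);
          }
          globfree(&g);
          return 0;
  }

Writing ``powersave`` instead selects the other documented algorithm; with HWP enabled the governor choice only biases the processor's internal P-state selection, as hits 113-136 describe.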
/Linux-v5.4/Documentation/admin-guide/mm/ |
D | numaperf.rst |
      9  as CPU cache coherence, but may have different performance. For example,
     13  under different domains, or "nodes", based on locality and performance
     35  performance when accessing a given memory target. Each initiator-target
     55  nodes' access characteristics share the same performance relative to other
     64  be allocated from based on the node's performance characteristics. If
     74  The performance characteristics the kernel provides for the local initiators
     96  performance characteristics in order to provide large address space of
    122  attributes in order to maximize the performance out of such a setup.
|
/Linux-v5.4/kernel/ |
D | Kconfig.hz |
     24  with lots of processors that may show reduced performance if
     30  250 Hz is a good compromise choice allowing server performance
     38  300 Hz is a good compromise choice allowing server performance
|
/Linux-v5.4/Documentation/scsi/ |
D | link_power_management_policy.txt |
      8  sacrifice some performance due to increased latency
     12  the controller to have performance be a priority
|
/Linux-v5.4/kernel/rcu/ |
D | Kconfig.debug |
     27  tristate "performance tests for RCU"
     34  This option provides a kernel module that runs performance
     38  Say Y here if you want RCU performance tests to be built into
     40  Say M if you want the RCU performance tests to build as a module.
|
/Linux-v5.4/Documentation/ABI/testing/ |
D | sysfs-platform-hidma-mgmt |
     64  Choosing a higher number gives better performance but
     65  can also cause performance reduction to other peripherals
     85  Choosing a higher number gives better performance but
     86  can also cause performance reduction to other peripherals
|
D | sysfs-bus-event_source-devices-events |
     15  Description: Generic performance monitoring events
     17  A collection of performance monitoring events that may be
     33  Description: Per-pmu performance monitoring events specific to the running system
     37  performance monitoring event supported by the <pmu>. The name
|
D | sysfs-devices-mmc |
      7  area can help to improve the card performance. If the feature
     18  area can help to improve the card performance. If the feature
|
/Linux-v5.4/Documentation/devicetree/bindings/nds32/ |
D | atl2c.txt |
      4  for high performance systems, such as thoese designs with AndesCore processors.
      5  Level-2 cache controller in general enhances overall system performance
|
/Linux-v5.4/fs/squashfs/ |
D | Kconfig |
     51  Doing so can significantly improve performance because
     63  decompression performance and CPU and memory usage.
     78  poor performance on parallel I/O workloads when using multiple CPU
     82  using this option may improve overall I/O performance.
     92  poor performance on parallel I/O workloads when using multiple CPU
    192  This, however, gives poor performance on MTD NAND devices where
    197  performance for some file access patterns (e.g. sequential
|
/Linux-v5.4/Documentation/devicetree/bindings/devfreq/event/ |
D | exynos-nocp.txt |
      5  NoC provides the primitive values to get the performance data. The packets
     11  that you can use while analyzing system performance.
|