Search: full:optimized (results 1–25 of 654, sorted by relevance)
/Linux-v6.1/Documentation/trace/kprobes.rst
  193: instruction (the "optimized region") lies entirely within one function.
  198: jump into the optimized region. Specifically:
  203: optimized region -- Kprobes checks the exception tables to verify this);
  204: - there is no near jump to the optimized region (other than to the first
  207: - For each instruction in the optimized region, Kprobes verifies that
  219: - the instructions from the optimized region
  229: - Other instructions in the optimized region are probed.
  236: If the kprobe can be optimized, Kprobes enqueues the kprobe to an
  238: it. If the to-be-optimized probepoint is hit before being optimized,
  249: optimized region [3]_. As you know, synchronize_rcu() can ensure
  [all …]

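The kprobes.rst hits above revolve around one invariant: a jump-optimized probe replaces several instructions (the "optimized region"), so no other probe or jump target may land inside that region except at its first byte. A minimal, self-contained sketch of that bounds check, using a hypothetical `opt_region` struct rather than the kernel's real `optimized_kprobe` type:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the kernel's optimized-kprobe bookkeeping:
 * 'addr' is the probed instruction, 'size' the byte length of the
 * optimized region that the detour jump covers. */
struct opt_region {
    unsigned long addr; /* start of the optimized region */
    size_t size;        /* bytes replaced by the jump */
};

/* True when 'hit' falls inside the optimized region -- the kind of check
 * needed before placing another probe at 'hit', since only the region's
 * first byte is a valid probe or jump target. */
static bool within_opt_region(const struct opt_region *op, unsigned long hit)
{
    return hit >= op->addr && hit < op->addr + op->size;
}
```

Illustrative only; the real checks (exception tables, near-jump scanning) in kprobes are considerably more involved.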
/Linux-v6.1/Documentation/devicetree/bindings/opp/ti-omap5-opp-supply.txt
  26: "ti,omap5-opp-supply" - OMAP5+ optimized voltages in efuse(class0)VDD
  28: "ti,omap5-core-opp-supply" - OMAP5+ optimized voltages in efuse(class0) VDD
  33: optimized efuse configuration. Each item consists of the following:
  35: efuse_offseet: efuse offset from reg where the optimized voltage is stored.

/Linux-v6.1/drivers/opp/ti-opp-supply.c
  25: * struct ti_opp_supply_optimum_voltage_table - optimized voltage table
  27: * @optimized_uv: Optimized voltage from efuse
  36: * @vdd_table: Optimized voltage mapping table
  68: * _store_optimized_voltages() - store optimized voltages
  72: * Picks up efuse based optimized voltages for VDD unique per device and
  157: * Some older samples might not have optimized efuse  [in _store_optimized_voltages()]
  192: * Return: if a match is found, return optimized voltage, else return
  215: dev_err_ratelimited(dev, "%s: Failed optimized voltage match for %d\n",  [in _get_optimal_vdd_voltage()]
  395: /* If we need optimized voltage */  [in ti_opp_supply_probe()]

/Linux-v6.1/arch/ia64/lib/io.c
  9: * This needs to be optimized.
  24: * This needs to be optimized.
  39: * This needs to be optimized.

/Linux-v6.1/arch/ia64/lib/clear_page.S
  56: // Optimized for Itanium
  62: // Optimized for McKinley

/Linux-v6.1/fs/crypto/Kconfig
  26: # algorithms, not any per-architecture optimized implementations. It is
  27: # strongly recommended to enable optimized implementations too. It is safe to
  28: # disable these generic implementations if corresponding optimized

/Linux-v6.1/arch/arm/kernel/io.c
  43: * This needs to be optimized.
  59: * This needs to be optimized.
  75: * This needs to be optimized.

/Linux-v6.1/drivers/video/fbdev/aty/atyfb.h
  228: /* Hack for bloc 1, should be cleanly optimized by compiler */  [in aty_ld_le32()]
  241: /* Hack for bloc 1, should be cleanly optimized by compiler */  [in aty_st_le32()]
  255: /* Hack for bloc 1, should be cleanly optimized by compiler */  [in aty_st_le16()]
  267: /* Hack for bloc 1, should be cleanly optimized by compiler */  [in aty_ld_8()]
  279: /* Hack for bloc 1, should be cleanly optimized by compiler */  [in aty_st_8()]

/Linux-v6.1/Documentation/locking/percpu-rw-semaphore.rst
  6: optimized for locking for reading.
  26: The idea of using RCU for optimized rw-lock was introduced by

/Linux-v6.1/Documentation/devicetree/bindings/memory-controllers/atmel,ebi.txt
  67: - atmel,smc-tdf-mode: "normal" or "optimized". When set to
  68: "optimized" the data float time is optimized

/Linux-v6.1/arch/x86/kernel/kprobes/opt.c
  45: /* This function only handles jump-optimized kprobe */  [in __recover_optprobed_insn()]
  57: * If the kprobe can be optimized, original bytes which can be  [in __recover_optprobed_insn()]
  174: /* Optimized kprobe call back function: called from optinsn */
  360: /* Check optimized_kprobe can actually be optimized. */
  375: /* Check the addr is within the optimized instructions. */
  383: /* Free optimized instruction slot */
  565: /* This kprobe is really able to run optimized path. */  [in setup_detour_execution()]

/Linux-v6.1/arch/x86/include/asm/qspinlock_paravirt.h
  10: * and restored. So an optimized version of __pv_queued_spin_unlock() is
  21: * Optimized assembly version of __raw_callee_save___pv_queued_spin_unlock

/Linux-v6.1/drivers/staging/media/atomisp/pci/isp/kernels/tdf/tdf_1.0/ia_css_tdf_types.h
  34: s32 thres_flat_table[64]; /** Final optimized strength table of NR for flat region. */
  35: s32 thres_detail_table[64]; /** Final optimized strength table of NR for detail region. */

/Linux-v6.1/include/linux/omap-gpmc.h
  34: * gpmc_omap_onenand_set_timings - set optimized sync timings.
  40: * Sets optimized timings for the @cs region based on @freq and @latency.

/Linux-v6.1/arch/sparc/lib/strlen.S
  2: /* strlen.S: Sparc optimized strlen code
  3: * Hand optimized from GNU libc's strlen

/Linux-v6.1/arch/sparc/lib/M7memset.S
  2: * M7memset.S: SPARC M7 optimized memset.
  8: * M7memset.S: M7 optimized memset.
  100: * (can create a more optimized version later.)
  114: * (can create a more optimized version later.)

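Hand-optimized strlen routines like the sparc one above typically scan a word at a time, using the classic bit trick that `(v - 0x01010101) & ~v & 0x80808080` is nonzero exactly when some byte of the 32-bit word `v` is zero. A portable C sketch of the idea (not the sparc assembly itself):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Nonzero iff some byte of v is 0x00: subtracting 1 from each byte can
 * only borrow into the high bit of a byte that was zero (or had its high
 * bit clear and underflowed, which '& ~v' masks out). */
static int word_has_zero(uint32_t v)
{
    return ((v - 0x01010101u) & ~v & 0x80808080u) != 0;
}

/* Word-at-a-time strlen sketch; assumes it is safe to read the string in
 * 4-byte chunks up to and including the chunk holding the terminator. */
static size_t strlen_wordwise(const char *s)
{
    const char *p = s;
    uint32_t v;
    for (;;) {
        memcpy(&v, p, sizeof(v));   /* sidesteps alignment/aliasing traps */
        if (word_has_zero(v))
            break;
        p += sizeof(v);
    }
    while (*p)                      /* locate the zero byte within the word */
        p++;
    return (size_t)(p - s);
}
```

Real implementations (GNU libc, the kernel's arch variants) add alignment handling for the first word; this sketch leaves that out for brevity.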
/Linux-v6.1/kernel/kprobes.c
  420: * This must be called from arch-dep optimized caller.
  436: /* Free optimized instructions and optimized_kprobe */
  488: * Return an optimized kprobe whose optimizing code replaces
  677: /* Optimize kprobe if p is ready to be optimized */
  687: /* kprobes with 'post_handler' can not be optimized */  [in optimize_kprobe()]
  693: /* Check there is no other kprobes at the optimized instructions */  [in optimize_kprobe()]
  697: /* Check if it is already optimized. */  [in optimize_kprobe()]
  709: * 'op' must have OPTIMIZED flag  [in optimize_kprobe()]
  726: /* Unoptimize a kprobe if p is optimized */
  732: return; /* This is not an optprobe nor optimized */  [in unoptimize_kprobe()]
  [all …]

/Linux-v6.1/arch/x86/crypto/twofish_glue.c
  2: * Glue Code for assembler optimized version of TWOFISH
  98: MODULE_DESCRIPTION ("Twofish Cipher Algorithm, asm optimized");

/Linux-v6.1/arch/x86/crypto/serpent_avx2_glue.c
  3: * Glue Code for x86_64/AVX2 assembler optimized version of Serpent
  128: MODULE_DESCRIPTION("Serpent Cipher Algorithm, AVX2 optimized");

/Linux-v6.1/arch/x86/crypto/camellia_aesni_avx_glue.c
  3: * Glue Code for x86_64/AVX/AES-NI assembler optimized version of Camellia
  135: MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX optimized");

/Linux-v6.1/arch/x86/crypto/camellia_aesni_avx2_glue.c
  3: * Glue Code for x86_64/AVX2/AES-NI assembler optimized version of Camellia
  136: MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX2 optimized");

/Linux-v6.1/arch/m68k/include/asm/delay.h
  72: * the const factor (4295 = 2**32 / 1000000) can be optimized out when
  88: * first constant multiplications gets optimized away if the delay is

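The m68k delay.h hit relies on 4295 being approximately 2^32 / 1000000, so multiplying a microsecond count by 4295 expresses it as a 32.32 fixed-point fraction of a second; one more multiply and a 32-bit shift then yield a loop count with no runtime division. A rough illustration of the arithmetic (the kernel's actual helpers differ in detail and keep the constants compile-time foldable):

```c
#include <assert.h>
#include <stdint.h>

/* Convert microseconds to busy-wait loop iterations without dividing:
 * usecs * 4295 ~= usecs * 2^32 / 1e6, i.e. seconds in 32.32 fixed point;
 * multiplying by loops-per-second and keeping the top 32 bits gives the
 * iteration count. Slightly overestimates (4295 > 2^32/1e6), which is
 * the safe direction for a delay. */
static uint32_t usecs_to_loops(uint32_t usecs, uint32_t loops_per_sec)
{
    uint64_t frac = (uint64_t)usecs * 4295;           /* 32.32 seconds */
    return (uint32_t)((frac * loops_per_sec) >> 32);  /* loops */
}
```

When `usecs` and the rate are compile-time constants, the multiplications fold away entirely, which is the point the delay.h comments are making.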
/Linux-v6.1/arch/s390/include/asm/checksum.h
  8: * Martin Schwidefsky (heavily optimized CKSM version)
  57: * This is a version of ip_compute_csum() optimized for IP headers,

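The operation the s390 checksum.h hits accelerate with the CKSM instruction is the standard 16-bit one's-complement checksum over an IP header (RFC 1071). A portable reference version, for comparison with the optimized one:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* 16-bit one's-complement checksum over 'len' bytes, treating the data as
 * big-endian 16-bit words. Verifying a received header (checksum field
 * included) yields 0; computing with the field zeroed yields the value
 * to store. Portable illustration, not the s390 CKSM fast path. */
static uint16_t ip_checksum(const uint8_t *hdr, size_t len)
{
    uint32_t sum = 0;
    size_t i;

    for (i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)hdr[i] << 8 | hdr[i + 1];
    if (len & 1)
        sum += (uint32_t)hdr[len - 1] << 8;  /* pad odd trailing byte */

    while (sum >> 16)                        /* fold carries back in */
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;                   /* one's complement */
}
```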
/Linux-v6.1/arch/arc/lib/memset-archs.S
  10: * The memset implementation below is optimized to use prefetchw and prealloc
  12: * If you want to implement optimized memset for other possible L1 data cache

/Linux-v6.1/arch/sparc/crypto/crc32c_glue.c
  2: /* Glue code for CRC32C optimized for sparc64 crypto opcodes.
  161: pr_info("Using sparc64 crc32c opcode optimized CRC32C implementation\n");  [in crc32c_sparc64_mod_init()]

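The sparc64 glue code above dispatches to a hardware CRC32C opcode; the function it accelerates is CRC-32C (Castagnoli, reflected polynomial 0x82F63B78). A minimal bitwise reference implementation, useful for checking any optimized variant against the standard test vector:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78, initial
 * value and final XOR of 0xFFFFFFFF. One bit per iteration -- slow, but
 * an unambiguous reference for table-driven or hardware versions. */
static uint32_t crc32c(const uint8_t *p, size_t n)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (n--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}
```

The standard check value for CRC-32C over the ASCII string "123456789" is 0xE3069283, which makes a convenient self-test.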