Full-text search for "optimized" (results 1 – 25 of 611), sorted by relevance
/Linux-v5.15/Documentation/trace/kprobes.rst
  193: instruction (the "optimized region") lies entirely within one function.
  198: jump into the optimized region. Specifically:
  203: optimized region -- Kprobes checks the exception tables to verify this);
  204: - there is no near jump to the optimized region (other than to the first
  207: - For each instruction in the optimized region, Kprobes verifies that
  219: - the instructions from the optimized region
  229: - Other instructions in the optimized region are probed.
  236: If the kprobe can be optimized, Kprobes enqueues the kprobe to an
  238: it. If the to-be-optimized probepoint is hit before being optimized,
  249: optimized region [3]_. As you know, synchronize_rcu() can ensure
  [all …]
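Note: the kprobes.rst matches above describe when Kprobes may jump-optimize a probe (the whole "optimized region" must lie inside one function, nothing may jump into the middle of it, and so on). As a hedged sketch of the user-facing side only, assuming the v5.15 kprobes API and using kernel_clone purely as an example symbol, a module whose probe is a candidate for that optimization could look like this:

    /* Hedged sketch: a minimal kprobes module; symbol and names are examples. */
    #include <linux/kprobes.h>
    #include <linux/module.h>

    static int handler_pre(struct kprobe *p, struct pt_regs *regs)
    {
            pr_info("probe hit at %p\n", p->addr);
            return 0;
    }

    static struct kprobe kp = {
            .symbol_name = "kernel_clone",   /* example symbol only */
            .pre_handler = handler_pre,
            /* no post_handler: probes with one cannot be jump-optimized */
    };

    static int __init optprobe_demo_init(void)
    {
            return register_kprobe(&kp);
    }

    static void __exit optprobe_demo_exit(void)
    {
            unregister_kprobe(&kp);
    }

    module_init(optprobe_demo_init);
    module_exit(optprobe_demo_exit);
    MODULE_LICENSE("GPL");

Whether the probe actually gets optimized is decided by the checks quoted above; per kprobes.rst, probes that were optimized are tagged [OPTIMIZED] in /sys/kernel/debug/kprobes/list.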
/Linux-v5.15/arch/arm/crypto/Kconfig
  18: using optimized ARM assembler.
  28: using optimized ARM NEON assembly, when NEON instructions are
  55: using optimized ARM assembler and NEON, when available.
  63: using optimized ARM assembler and NEON, when available.
  69: BLAKE2s digest algorithm optimized with ARM scalar instructions. This
  79: BLAKE2b digest algorithm optimized with ARM NEON instructions.
  89: Use optimized AES assembler routines for ARM platforms.
/Linux-v5.15/Documentation/devicetree/bindings/opp/ti-omap5-opp-supply.txt
  26: "ti,omap5-opp-supply" - OMAP5+ optimized voltages in efuse(class0)VDD
  28: "ti,omap5-core-opp-supply" - OMAP5+ optimized voltages in efuse(class0) VDD
  33: optimized efuse configuration. Each item consists of the following:
  35: efuse_offseet: efuse offset from reg where the optimized voltage is stored.
/Linux-v5.15/drivers/opp/ti-opp-supply.c
  25: * struct ti_opp_supply_optimum_voltage_table - optimized voltage table
  27: * @optimized_uv: Optimized voltage from efuse
  36: * @vdd_table: Optimized voltage mapping table
  64: * _store_optimized_voltages() - store optimized voltages
  68: * Picks up efuse based optimized voltages for VDD unique per device and
  153: * Some older samples might not have optimized efuse   [in _store_optimized_voltages()]
  188: * Return: if a match is found, return optimized voltage, else return
  211: dev_err_ratelimited(dev, "%s: Failed optimized voltage match for %d\n",   [in _get_optimal_vdd_voltage()]
  401: /* If we need optimized voltage */   [in ti_opp_supply_probe()]
/Linux-v5.15/arch/ia64/lib/io.c
  9: * This needs to be optimized.
  24: * This needs to be optimized.
  39: * This needs to be optimized.
/Linux-v5.15/arch/ia64/lib/clear_page.S
  56: // Optimized for Itanium
  62: // Optimized for McKinley
/Linux-v5.15/fs/crypto/Kconfig
  26: # algorithms, not any per-architecture optimized implementations. It is
  27: # strongly recommended to enable optimized implementations too. It is safe to
  28: # disable these generic implementations if corresponding optimized
/Linux-v5.15/arch/arm/kernel/io.c
  43: * This needs to be optimized.
  59: * This needs to be optimized.
  75: * This needs to be optimized.
/Linux-v5.15/drivers/video/fbdev/aty/atyfb.h
  228: /* Hack for bloc 1, should be cleanly optimized by compiler */   [in aty_ld_le32()]
  241: /* Hack for bloc 1, should be cleanly optimized by compiler */   [in aty_st_le32()]
  255: /* Hack for bloc 1, should be cleanly optimized by compiler */   [in aty_st_le16()]
  267: /* Hack for bloc 1, should be cleanly optimized by compiler */   [in aty_ld_8()]
  279: /* Hack for bloc 1, should be cleanly optimized by compiler */   [in aty_st_8()]
/Linux-v5.15/Documentation/locking/percpu-rw-semaphore.rst
  6: optimized for locking for reading.
  26: The idea of using RCU for optimized rw-lock was introduced by
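Note: percpu-rw-semaphore.rst, matched above, documents a lock optimized for readers. A minimal usage sketch, assuming <linux/percpu-rwsem.h> from this kernel; cfg_value, read_cfg and write_cfg are made-up names:

    #include <linux/percpu-rwsem.h>

    /* DEFINE_STATIC_PERCPU_RWSEM already emits the static definition. */
    DEFINE_STATIC_PERCPU_RWSEM(cfg_rwsem);

    static int cfg_value;

    int read_cfg(void)
    {
            int v;

            /* Fast path: readers only bump a per-CPU counter. */
            percpu_down_read(&cfg_rwsem);
            v = cfg_value;
            percpu_up_read(&cfg_rwsem);
            return v;
    }

    void write_cfg(int v)
    {
            /* Slow path: the writer waits (RCU is used internally) for readers. */
            percpu_down_write(&cfg_rwsem);
            cfg_value = v;
            percpu_up_write(&cfg_rwsem);
    }

Reads stay cheap because each reader touches only per-CPU state; the write side pays for that by waiting for readers to drain, which fits the read-mostly use case the document describes.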
/Linux-v5.15/arch/x86/kernel/kprobes/opt.c
  45: /* This function only handles jump-optimized kprobe */   [in __recover_optprobed_insn()]
  57: * If the kprobe can be optimized, original bytes which can be   [in __recover_optprobed_insn()]
  169: /* Optimized kprobe call back function: called from optinsn */
  353: /* Check optimized_kprobe can actually be optimized. */
  368: /* Check the addr is within the optimized instructions. */
  376: /* Free optimized instruction slot */
  558: /* This kprobe is really able to run optimized path. */   [in setup_detour_execution()]
/Linux-v5.15/arch/x86/include/asm/qspinlock_paravirt.h
  8: * and restored. So an optimized version of __pv_queued_spin_unlock() is
  19: * Optimized assembly version of __raw_callee_save___pv_queued_spin_unlock
/Linux-v5.15/Documentation/devicetree/bindings/memory-controllers/atmel,ebi.txt
  67: - atmel,smc-tdf-mode: "normal" or "optimized". When set to
  68: "optimized" the data float time is optimized
/Linux-v5.15/drivers/staging/media/atomisp/pci/isp/kernels/tdf/tdf_1.0/ia_css_tdf_types.h
  34: s32 thres_flat_table[64]; /** Final optimized strength table of NR for flat region. */
  35: s32 thres_detail_table[64]; /** Final optimized strength table of NR for detail region. */
/Linux-v5.15/include/linux/omap-gpmc.h
  34: * gpmc_omap_onenand_set_timings - set optimized sync timings.
  40: * Sets optimized timings for the @cs region based on @freq and @latency.
/Linux-v5.15/arch/sparc/lib/strlen.S
  2: /* strlen.S: Sparc optimized strlen code
  3: * Hand optimized from GNU libc's strlen
/Linux-v5.15/arch/sparc/lib/M7memset.S
  2: * M7memset.S: SPARC M7 optimized memset.
  8: * M7memset.S: M7 optimized memset.
  100: * (can create a more optimized version later.)
  114: * (can create a more optimized version later.)
/Linux-v5.15/kernel/kprobes.c
  411: * This must be called from arch-dep optimized caller.
  427: /* Free optimized instructions and optimized_kprobe */
  479: * Return an optimized kprobe whose optimizing code replaces
  668: /* Optimize kprobe if p is ready to be optimized */
  678: /* kprobes with post_handler can not be optimized */   [in optimize_kprobe()]
  684: /* Check there is no other kprobes at the optimized instructions */   [in optimize_kprobe()]
  688: /* Check if it is already optimized. */   [in optimize_kprobe()]
  698: /* On unoptimizing/optimizing_list, op must have OPTIMIZED flag */   [in optimize_kprobe()]
  714: /* Unoptimize a kprobe if p is optimized */
  720: return; /* This is not an optprobe nor optimized */   [in unoptimize_kprobe()]
  [all …]
/Linux-v5.15/arch/x86/crypto/twofish_glue.c
  2: * Glue Code for assembler optimized version of TWOFISH
  98: MODULE_DESCRIPTION ("Twofish Cipher Algorithm, asm optimized");
/Linux-v5.15/arch/x86/crypto/camellia_aesni_avx_glue.c
  3: * Glue Code for x86_64/AVX/AES-NI assembler optimized version of Camellia
  135: MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX optimized");
/Linux-v5.15/arch/x86/crypto/serpent_avx2_glue.c
  3: * Glue Code for x86_64/AVX2 assembler optimized version of Serpent
  128: MODULE_DESCRIPTION("Serpent Cipher Algorithm, AVX2 optimized");
/Linux-v5.15/arch/x86/crypto/camellia_aesni_avx2_glue.c
  3: * Glue Code for x86_64/AVX2/AES-NI assembler optimized version of Camellia
  136: MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX2 optimized");
/Linux-v5.15/crypto/Kconfig
  483: SSE2 optimized implementation of the hash function used by the
  491: AVX2 optimized implementation of the hash function used by the
  674: optimized for 64bit platforms and can produce digests of any size
  692: optimized for 8-32bit platforms and can produce digests of any size
  779: tristate "Poly1305 authenticator algorithm (MIPS optimized)"
  1371: optimized using SPARC64 crypto opcodes.
  1383: algorithm that is optimized for x86-64 processors. Two versions of
  1402: an algorithm optimized for 64-bit processors with good performance
  1437: SSSE3, AVX2, and AVX-512VL optimized implementations of the ChaCha20,
  1441: tristate "ChaCha stream cipher algorithms (MIPS 32r2 optimized)"
  [all …]
/Linux-v5.15/arch/s390/include/asm/checksum.h
  8: * Martin Schwidefsky (heavily optimized CKSM version)
  57: * This is a version of ip_compute_csum() optimized for IP headers,
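Note: the checksum.h hit above mentions a version of ip_compute_csum() optimized for IP headers. For orientation only, this is the generic RFC 1071 one's-complement sum that such per-architecture versions speed up; a standalone userspace sketch, not the kernel's csum helpers:

    #include <stddef.h>
    #include <stdint.h>

    /* One's-complement 16-bit checksum over buf, viewed in network byte order. */
    static uint16_t csum16(const void *buf, size_t len)
    {
            const uint8_t *p = buf;
            uint32_t sum = 0;

            while (len > 1) {
                    sum += ((uint32_t)p[0] << 8) | p[1];
                    p += 2;
                    len -= 2;
            }
            if (len)                    /* odd trailing byte, zero-padded */
                    sum += (uint32_t)p[0] << 8;
            while (sum >> 16)           /* fold carries back into 16 bits */
                    sum = (sum & 0xffff) + (sum >> 16);
            return (uint16_t)~sum;
    }

A valid IP header checksummed this way, including its stored checksum field, folds to zero, which is how receivers verify it.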
/Linux-v5.15/arch/m68k/include/asm/delay.h
  72: * the const factor (4295 = 2**32 / 1000000) can be optimized out when
  88: * first constant multiplications gets optimized away if the delay is
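Note: the delay.h comment matched above works because 4295 is 2**32 / 1000000 rounded, so one multiply plus taking the high 32 bits of the 64-bit product stands in for a divide by 10^6, and with a constant delay the compiler folds it all away. A tiny standalone illustration of that reciprocal-multiply idea (it is only approximate near multiples of 10^6):

    #include <stdint.h>
    #include <stdio.h>

    #define USEC_RECIP 4295u   /* ~ 2**32 / 1000000 = 4294.967296, rounded */

    /* Approximate n / 1000000 as the high 32 bits of n * USEC_RECIP. */
    static uint32_t approx_div_1e6(uint32_t n)
    {
            return (uint32_t)(((uint64_t)n * USEC_RECIP) >> 32);
    }

    int main(void)
    {
            printf("%u\n", approx_div_1e6(3000000));   /* prints 3 */
            return 0;
    }

The delay code uses the constant in the same spirit: scale once by 4295, then stick to multiplies and shifts, which constant-fold when the requested delay is a compile-time constant.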