
Searched full:harts (Results 1 – 15 of 15) sorted by relevance

/Linux-v6.1/arch/riscv/kernel/
sbi.c
    123  * sbi_shutdown() - Remove all the harts from executing supervisor code.
    413  * @cpu_mask: A cpu mask containing all the target harts.
    424  * sbi_remote_fence_i() - Execute FENCE.I instruction on given remote harts.
    425  * @cpu_mask: A cpu mask containing all the target harts.
    438  * harts for the specified virtual address range.
    439  * @cpu_mask: A cpu mask containing all the target harts.
    456  * remote harts for a virtual address range belonging to a specific ASID.
    458  * @cpu_mask: A cpu mask containing all the target harts.
    477  * harts for the specified guest physical address range.
    478  * @cpu_mask: A cpu mask containing all the target harts.
    [all …]
traps.c
    226  * harts concurrently. This isn't a real spinlock because the lock side must
    240  * overflow stack. Tell any other concurrent overflowing harts that  (in handle_bad_stack())
machine_kexec.c
    128  * harts and possibly devices etc) for a kexec reboot.
    202  * executed. We assume at this point that all other harts are
head.S
    190  /* We lack SMP support or have too many harts, so park this hart */
entry.S
    409  * harts are concurrently overflowing their kernel stacks. We could
/Linux-v6.1/arch/riscv/mm/
cacheflush.c
    32   * informs the remote harts they need to flush their local instruction caches.
    35   * IPIs for harts that are not currently executing a MM context and instead
    55   * Flush the I$ of other harts concurrently executing, and mark them as  (in flush_icache_mm())
    119  pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",  (in riscv_init_cbom_blocksize())
context.c
    276  * shoot downs, so instead we send an IPI that informs the remote harts they
    279  * machine, ie 'make -j') we avoid the IPIs for harts that are not currently
/Linux-v6.1/arch/csky/abiv2/
cacheflush.c
    77  * Flush the I$ of other harts concurrently executing, and mark them as  (in flush_icache_mm_range())
/Linux-v6.1/Documentation/devicetree/bindings/timer/
sifive,clint.yaml
    17  lines of various HARTs (or CPUs) so RISC-V per-HART (or per-CPU) local
/Linux-v6.1/Documentation/devicetree/bindings/interrupt-controller/
riscv,cpu-intc.txt
    23  a PLIC interrupt property will typically list the HLICs for all present HARTs
sifive,plic-1.0.0.yaml
    18  in an 4 core system with 2-way SMT, you have 8 harts and probably at least two
/Linux-v6.1/drivers/clocksource/
timer-riscv.c
    60  * It is guaranteed that all the timers across all the harts are synchronized
/Linux-v6.1/Documentation/devicetree/bindings/riscv/
cpus.yaml
    24  having four harts.
/Linux-v6.1/drivers/perf/
riscv_pmu_sbi.c
    46  * RISC-V doesn't have hetergenous harts yet. This need to be part of
    47  * per_cpu in case of harts with different pmu counters
/Linux-v6.1/Documentation/devicetree/bindings/cpu/
idle-states.yaml
    55  On RISC-V systems, the HARTs (or CPUs) [6] can be put in platform specific