Lines Matching refs:kprobe
36 (also called return probes). A kprobe can be inserted on virtually
63 When a kprobe is registered, Kprobes makes a copy of the probed
70 associated with the kprobe, passing the handler the addresses of the
71 kprobe struct and the saved registers.
80 "post_handler," if any, that is associated with the kprobe.
109 When you call register_kretprobe(), Kprobes establishes a kprobe at
114 At boot time, Kprobes registers a kprobe at the trampoline.
147 field of the kretprobe struct. Whenever the kprobe placed by kretprobe at the
182 Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
235 If the kprobe can be optimized, Kprobes enqueues the kprobe to an
236 optimizing list, and kicks the kprobe-optimizer workqueue to optimize
251 of kprobe optimization supports only kernels with CONFIG_PREEMPT=n [4]_.
260 When an optimized kprobe is unregistered, disabled, or blocked by
261 another kprobe, it will be unoptimized. If this happens before
262 the optimization is complete, the kprobe is just dequeued from the
278 The jump optimization changes the kprobe's pre_handler behavior.
285 - Specify an empty function for the kprobe's post_handler.
339 kprobe address resolution code.
362 int register_kprobe(struct kprobe *kp);
375 1. With the introduction of the "symbol_name" field to struct kprobe,
384 2. Use the "offset" field of struct kprobe if the offset into the symbol
388 3. Specify either the kprobe "symbol_name" OR the "addr". If both are
389 specified, kprobe registration will fail with -EINVAL.
392 does not validate if the kprobe.addr is at an instruction boundary.
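The rules above (rules 1-3: give "symbol_name" with an optional "offset", or give "addr", but never both) can be sketched as a struct initializer. This is a hedged illustration, not text from the matched document: "vfs_read" and the 0x10 offset are arbitrary examples, and as the matched line warns, Kprobes does not check that the resulting address falls on an instruction boundary.

```c
#include <linux/kprobes.h>

/* Per rule 1: name-based lookup via the "symbol_name" field,
 * with rule 2's optional "offset" into that symbol.
 * (Supplying .addr as well would violate rule 3 and make
 * register_kprobe() return -EINVAL.) */
static struct kprobe kp_by_name = {
	.symbol_name = "vfs_read",	/* illustrative target symbol */
	.offset      = 0x10,		/* illustrative offset; caller must
					 * ensure it is an insn boundary */
};
```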
401 int pre_handler(struct kprobe *p, struct pt_regs *regs);
403 Called with p pointing to the kprobe associated with the breakpoint,
411 void post_handler(struct kprobe *p, struct pt_regs *regs,
421 int fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr);
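The handler signatures in the matched lines above (pre_handler, post_handler) fit together in a minimal module. The following is a sketch under stated assumptions, not the document's own example: "do_sys_open" is just an illustrative probe target, and the pr_info messages are placeholders.

```c
/* Minimal kprobe-module sketch built from the handler signatures
 * listed above. Assumes a probe-able symbol "do_sys_open". */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

static int my_pre(struct kprobe *p, struct pt_regs *regs)
{
	/* p points to the kprobe for this breakpoint; regs holds the
	 * saved registers, as described in the matched lines above. */
	pr_info("pre: hit %s at %p\n", p->symbol_name, p->addr);
	return 0;	/* 0: let Kprobes execute the copied instruction */
}

static void my_post(struct kprobe *p, struct pt_regs *regs,
		    unsigned long flags)
{
	pr_info("post: finished single-step for %p\n", p->addr);
}

static struct kprobe kp = {
	.symbol_name  = "do_sys_open",	/* illustrative target */
	.pre_handler  = my_pre,
	.post_handler = my_post,
};

static int __init kp_init(void)
{
	int ret = register_kprobe(&kp);

	if (ret < 0)
		pr_err("register_kprobe failed: %d\n", ret);
	return ret;
}

static void __exit kp_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kp_init);
module_exit(kp_exit);
MODULE_LICENSE("GPL");
```

Note that a post_handler prevents the jump optimization described in the optimization-related matches above, so omit it when probe overhead matters.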
451 regs is as described for kprobe.pre_handler. ri points to the
473 void unregister_kprobe(struct kprobe *kp);
490 int register_kprobes(struct kprobe **kps, int num);
512 void unregister_kprobes(struct kprobe **kps, int num);
530 int disable_kprobe(struct kprobe *kp);
542 int enable_kprobe(struct kprobe *kp);
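The batch and lifecycle calls matched above (register_kprobes, unregister_kprobes, disable_kprobe, enable_kprobe) compose as follows. This is an assumed usage sketch, not text from the document; the two target symbols are illustrative.

```c
/* Sketch of batch registration plus temporary disable/enable,
 * using the signatures listed in the matches above. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp1 = { .symbol_name = "vfs_read" };	/* illustrative */
static struct kprobe kp2 = { .symbol_name = "vfs_write" };	/* illustrative */
static struct kprobe *kps[] = { &kp1, &kp2 };

static int __init probes_init(void)
{
	int ret = register_kprobes(kps, ARRAY_SIZE(kps));

	if (ret)
		return ret;
	disable_kprobe(&kp1);	/* kp1 stays registered but won't fire */
	enable_kprobe(&kp1);	/* re-arm it */
	return 0;
}

static void __exit probes_exit(void)
{
	unregister_kprobes(kps, ARRAY_SIZE(kps));
}

module_init(probes_init);
module_exit(probes_exit);
MODULE_LICENSE("GPL");
```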
553 So if you install a kprobe with a post_handler, at an optimized
581 handlers won't be run in that instance, and the kprobe.nmissed member
592 kretprobe handlers and optimized kprobe handlers run without interrupt
642 of the kprobe, because the bytes in DCR are replaced by
656 On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
660 hit typically takes 50-75% longer than a kprobe hit.
661 When you have a return probe set on a function, adding a kprobe at
666 k = kprobe; r = return probe; kr = kprobe + return probe
681 Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
684 k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
739 - Use ftrace dynamic events (kprobe event) with perf-probe.
765 The second column identifies the type of probe (k - kprobe and r - kretprobe)