27 is really a framework of several assorted tracing utilities.
29 disabled and enabled, as well as for preemption and from a time
30 a task is woken to the task is actually scheduled in.
62 For quicker access to that directory you may want to make a soft link to
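A minimal sketch of such a soft link, assuming tracefs is mounted at /sys/kernel/tracing and the command is run as root (the link name is arbitrary):

```shell
# Create a short alias so the long tracefs path need not be typed each time.
ln -s /sys/kernel/tracing /tracing
ls /tracing
```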
90 of ftrace. Here is a list of some of the key files:
127 This file holds the output of the trace in a human
130 Note, this file is not a consumer. If tracing is off
141 retrieved. Unlike the "trace" file, this file is a
145 will not be read again with a sequential read. The
154 files. Options also exist to modify how a tracer
159 This is a directory that has a file for every available
161 or cleared by writing a "1" or "0" respectively into the
169 stored, and displayed by "trace". A new max trace will only be
173 By echoing in a time into this file, no latency will be recorded
178 Some latency tracers will record a trace whenever the
180 Only active when the file contains a number greater than 0.
191 A few extra pages may be allocated to accommodate buffer management
195 ( Note, the size may not be a multiple of the page size
208 If a process is performing tracing, and the ring buffer should be
210 killed by a signal, this file can be used for that purpose. On close
212 Having a process that is tracing also open this file, when the process
220 This is a mask that lets the user only trace on specified CPUs.
221 The format is a hex string representing the CPUs.
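Since the format is a plain hex bitmask (bit N set means CPU N is traced), the value for a given CPU set can be computed with ordinary shell arithmetic; a sketch (writing the result requires root, and the tracefs path is assumed):

```shell
# Build a hex mask selecting CPUs 0 and 2.
mask=$(printf '%x' $(( (1 << 0) | (1 << 2) )))
echo "$mask"    # 5
# Writing it into tracing_cpumask would then restrict tracing to those CPUs:
#   echo $mask > /sys/kernel/tracing/tracing_cpumask
```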
230 has a side effect of enabling or disabling specific functions
242 As a speed-up, since processing strings can be quite expensive
243 and requires a check of all functions registered to tracing, instead
244 an index can be written into this file. A number (starting with "1")
252 be traced. If a function exists in both set_ftrace_filter
260 If the "function-fork" option is set, then when a task whose
271 If the "function-fork" option is set, then when a task whose
277 If a PID is in both this file and "set_ftrace_pid", then this
282 Have the events only trace a task with a PID listed in this file.
293 Have the events not trace a task with a PID listed in this file.
295 in this file, even if a thread's PID is in the file if the
296 sched_switch or sched_wakeup events also trace a thread that should
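A sketch of how these PID filter files are typically used, assuming tracefs at /sys/kernel/tracing and root privileges (the PID 1234 is only an example):

```shell
# Trace events only for the current shell.
echo $$ > /sys/kernel/tracing/set_event_pid
# Or the inverse: trace events for everything except PID 1234.
echo 1234 > /sys/kernel/tracing/set_event_notrace_pid
```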
317 by a specific function.
341 in seeing if any function has a callback attached to it.
344 displays all functions that have a callback attached to them
346 Note, a callback may also call multiple functions which will
349 If the callback registered to be traced by a function with
350 the "save regs" attribute (thus even more overhead), a 'R'
354 If the callback registered to be traced by a function with
359 If a non-ftrace trampoline is attached (e.g. by BPF), a 'D' will be displayed.
361 "direct" trampoline can be attached to a given function at a time.
367 If a function had either the "ip modify" or a "direct" call attached to
368 it in the past, a 'M' will be shown. This flag is never cleared. It is
369 used to know if a function was ever modified by the ftrace infrastructure,
376 If the callback of a function jumps to a trampoline that is
383 This file contains all the functions that ever had a function callback
388 To see any function that has ever been modified by "ip modify" or a
397 keep a histogram of the number of functions that were called
406 A directory that holds different tracing stats.
419 it will trace into a function. Setting this to a value of
426 the ring buffer references a string, only a pointer to the string
434 Only the pid of the task is recorded in a trace event unless
436 makes a cache of pid mappings to comms to try to display
437 comms for events. If a pid for a comm is not listed, then
452 the Task Group ID of a task is saved in a table mapping the PID of
459 take a snapshot of the current running trace.
481 Whenever an event is recorded into the ring buffer, a
482 "timestamp" is added. This stamp comes from a specified
501 be a bit slower than the local clock.
504 This is not a clock at all, but literally an atomic
547 sees a partial update. These effects are rare and post
563 To set a clock, simply echo the clock name into this file::
567 Setting a clock clears the ring buffer content as well as the
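A sketch of inspecting and switching the trace clock, assuming tracefs at /sys/kernel/tracing and root privileges:

```shell
# List the available clocks; the active one is shown in [brackets].
cat /sys/kernel/tracing/trace_clock
# Switch to the global clock (note: this clears the ring buffer content).
echo global > /sys/kernel/tracing/trace_clock
```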
572 This is a very useful file for synchronizing user space
603 example in Documentation/trace/histogram.rst (Section 3.)
608 to be written to it, where a tool can be used to parse the data
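The trace_marker file described above can be fed straight from a script; a sketch (path assumed, root or suitable write permission required, and the message text is arbitrary):

```shell
# Annotate the trace from user space; the string shows up inline in the
# trace output, which helps correlate application events with kernel events.
echo "app: starting transaction" > /sys/kernel/tracing/trace_marker
```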
622 This is a way to make multiple trace buffers where different
633 when a "1" is written to them.
645 A list of events that can be enabled in tracing.
653 different modes can coexist within a buffer but the mode in
666 delta: Default timestamp mode - timestamp is a delta against
667 a per-buffer timestamp.
669 absolute: The timestamp is a full timestamp, not a delta
680 This is a directory that contains the trace per_cpu information.
684 The ftrace buffer is defined per_cpu. That is, there's a separate
699 This is similar to the "trace_pipe" file, and is a consuming
709 a file or to the network where a server is collecting the
712 Like trace_pipe, this is a consuming reader, where multiple
719 the content of the snapshot for a given CPU, and if
740 This gets set if so many events happened within a nested
774 to draw a graph of function calls similar to C code
792 See tracing_max_latency. When a new max is recorded,
824 a SCHED_DEADLINE task to be woken (as the "wakeup" and
829 A special tracer that is used to trace a binary module.
830 It will trace all the calls that a module makes to the
837 calls within the kernel. It will trace when a likely and
857 information is available. The tracing/error_log file is a circular
858 error log displaying a small number (currently, 8) of ftrace errors
914 A header is printed with the tracer name that is represented by
933 why a latency happened. Here is a typical trace::
998 .. caution:: If the architecture does not support a way to
1009 - 'Z' - NMI occurred inside a hardirq
1011 - 'H' - hard irq occurred inside a softirq.
1022 output includes a timestamp relative to the start of the
1027 This is just to help catch your eye a bit better. And
1042 Note, the latency tracers will usually end with a back trace
1137 Similar to raw, but the numbers will be in a hexadecimal format.
1146 Print the fields as described by their types. This is a better
1147 option than using hex, bin or raw, as it gives a better parsing
1155 and one CPU buffer had a lot of events recently, thus
1156 a shorter time frame, where another CPU may have only had
1157 a few events, which lets it have older events. When
1161 display when a new CPU buffer started::
1172 This option changes the trace. It records a
1178 object the address belongs to, and print a
1180 ASLR is on, otherwise you don't get a chance to
1187 a.out-1623 [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0
1188 x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
1218 When any event or tracer is enabled, a hook is enabled
1226 When any event or tracer is enabled, a hook is enabled
1291 When set, a stack trace is recorded after any trace event
1310 When set, a stack trace is recorded after every
1319 Since the function_graph tracer has a slightly different output
1327 Each task has a fixed array of functions to
1339 A certain amount, then a delay marker is
1346 when a task is traced in and out during a context
1365 only a closing curly bracket "}" is displayed for
1366 the return of a function.
1383 the time a task schedules out in its function.
1397 Shows a more minimalistic output.
1406 the kernel know of a new mouse event. The result is a latency
1410 disabled. When a new maximum latency is hit, the tracer saves
1411 the trace leading up to that latency point so that every time a
1465 Here we see that we had a latency of 16 microseconds (which is
1473 function-trace, we get a much larger output::
1481 # latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1497 bash-2042 3d... 0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1498 bash-2042 3d... 0us : add_preempt_count <-_raw_spin_lock_irqsave
1499 bash-2042 3d..1 1us : ata_scsi_find_dev <-ata_scsi_queuecmd
1500 bash-2042 3d..1 1us : __ata_scsi_find_dev <-ata_scsi_find_dev
1501 bash-2042 3d..1 2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
1502 bash-2042 3d..1 2us : ata_qc_new_init <-__ata_scsi_queuecmd
1503 bash-2042 3d..1 3us : ata_sg_init <-__ata_scsi_queuecmd
1504 bash-2042 3d..1 4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
1505 bash-2042 3d..1 4us : ata_build_rw_tf <-ata_scsi_rw_xlat
1507 bash-2042 3d..1 67us : delay_tsc <-__delay
1508 bash-2042 3d..1 67us : add_preempt_count <-delay_tsc
1509 bash-2042 3d..2 67us : sub_preempt_count <-delay_tsc
1510 bash-2042 3d..1 67us : add_preempt_count <-delay_tsc
1511 bash-2042 3d..2 68us : sub_preempt_count <-delay_tsc
1512 bash-2042 3d..1 68us+: ata_bmdma_start <-ata_bmdma_qc_issue
1513 bash-2042 3d..1 71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1514 bash-2042 3d..1 71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1515 bash-2042 3d..1 72us+: trace_hardirqs_on <-ata_scsi_queuecmd
1516 bash-2042 3d..1 120us : <stack trace>
1544 Here we traced a 71 microsecond latency. But we also see all the
1579 3 us | 0) bash-1507 | d..2 | | __unwind_start() {
1580 3 us | 0) bash-1507 | d..2 | | get_stack_info() {
1581 3 us | 0) bash-1507 | d..2 | 0.351 us | in_task_stack();
1607 interrupts but the task cannot be preempted and a higher
1609 before it can preempt a lower priority task.
1781 # latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1797 ls-2230 3d... 0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1798 ls-2230 3...1 100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1799 ls-2230 3...1 101us+: trace_preempt_on <-ata_scsi_queuecmd
1800 ls-2230 3...1 111us : <stack trace>
1828 Here is a trace with function-trace set::
1834 # latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1850 kworker/-59 3...1 0us : __schedule <-schedule
1851 kworker/-59 3d..1 0us : rcu_preempt_qs <-rcu_note_context_switch
1852 kworker/-59 3d..1 1us : add_preempt_count <-_raw_spin_lock_irq
1853 kworker/-59 3d..2 1us : deactivate_task <-__schedule
1854 kworker/-59 3d..2 1us : dequeue_task <-deactivate_task
1855 kworker/-59 3d..2 2us : update_rq_clock <-dequeue_task
1856 kworker/-59 3d..2 2us : dequeue_task_fair <-dequeue_task
1857 kworker/-59 3d..2 2us : update_curr <-dequeue_task_fair
1858 kworker/-59 3d..2 2us : update_min_vruntime <-update_curr
1859 kworker/-59 3d..2 3us : cpuacct_charge <-update_curr
1860 kworker/-59 3d..2 3us : __rcu_read_lock <-cpuacct_charge
1861 kworker/-59 3d..2 3us : __rcu_read_unlock <-cpuacct_charge
1862 kworker/-59 3d..2 3us : update_cfs_rq_blocked_load <-dequeue_task_fair
1863 kworker/-59 3d..2 4us : clear_buddies <-dequeue_task_fair
1864 kworker/-59 3d..2 4us : account_entity_dequeue <-dequeue_task_fair
1865 kworker/-59 3d..2 4us : update_min_vruntime <-dequeue_task_fair
1866 kworker/-59 3d..2 4us : update_cfs_shares <-dequeue_task_fair
1867 kworker/-59 3d..2 5us : hrtick_update <-dequeue_task_fair
1868 kworker/-59 3d..2 5us : wq_worker_sleeping <-__schedule
1869 kworker/-59 3d..2 5us : kthread_data <-wq_worker_sleeping
1870 kworker/-59 3d..2 5us : put_prev_task_fair <-__schedule
1871 kworker/-59 3d..2 6us : pick_next_task_fair <-pick_next_task
1872 kworker/-59 3d..2 6us : clear_buddies <-pick_next_task_fair
1873 kworker/-59 3d..2 6us : set_next_entity <-pick_next_task_fair
1874 kworker/-59 3d..2 6us : update_stats_wait_end <-set_next_entity
1875 ls-2269 3d..2 7us : finish_task_switch <-__schedule
1876 ls-2269 3d..2 7us : _raw_spin_unlock_irq <-finish_task_switch
1877 ls-2269 3d..2 8us : do_IRQ <-ret_from_intr
1878 ls-2269 3d..2 8us : irq_enter <-do_IRQ
1879 ls-2269 3d..2 8us : rcu_irq_enter <-irq_enter
1880 ls-2269 3d..2 9us : add_preempt_count <-irq_enter
1881 ls-2269 3d.h2 9us : exit_idle <-do_IRQ
1883 ls-2269 3d.h3 20us : sub_preempt_count <-_raw_spin_unlock
1884 ls-2269 3d.h2 20us : irq_exit <-do_IRQ
1885 ls-2269 3d.h2 21us : sub_preempt_count <-irq_exit
1886 ls-2269 3d..3 21us : do_softirq <-irq_exit
1887 ls-2269 3d..3 21us : __do_softirq <-call_softirq
1888 ls-2269 3d..3 21us+: __local_bh_disable <-__do_softirq
1889 ls-2269 3d.s4 29us : sub_preempt_count <-_local_bh_enable_ip
1890 ls-2269 3d.s5 29us : sub_preempt_count <-_local_bh_enable_ip
1891 ls-2269 3d.s5 31us : do_IRQ <-ret_from_intr
1892 ls-2269 3d.s5 31us : irq_enter <-do_IRQ
1893 ls-2269 3d.s5 31us : rcu_irq_enter <-irq_enter
1895 ls-2269 3d.s5 31us : rcu_irq_enter <-irq_enter
1896 ls-2269 3d.s5 32us : add_preempt_count <-irq_enter
1897 ls-2269 3d.H5 32us : exit_idle <-do_IRQ
1898 ls-2269 3d.H5 32us : handle_irq <-do_IRQ
1899 ls-2269 3d.H5 32us : irq_to_desc <-handle_irq
1900 ls-2269 3d.H5 33us : handle_fasteoi_irq <-handle_irq
1902 ls-2269 3d.s5 158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
1903 ls-2269 3d.s3 158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
1904 ls-2269 3d.s3 159us : __local_bh_enable <-__do_softirq
1905 ls-2269 3d.s3 159us : sub_preempt_count <-__local_bh_enable
1906 ls-2269 3d..3 159us : idle_cpu <-irq_exit
1907 ls-2269 3d..3 159us : rcu_irq_exit <-irq_exit
1908 ls-2269 3d..3 160us : sub_preempt_count <-irq_exit
1909 ls-2269 3d... 161us : __mutex_unlock_slowpath <-mutex_unlock
1910 ls-2269 3d... 162us+: trace_hardirqs_on <-mutex_unlock
1911 ls-2269 3d... 186us : <stack trace>
1926 When an interrupt is running inside a softirq, the annotation is 'H'.
1933 time it takes for a task that is woken to actually wake up.
1950 # latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1952 # | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
1963 <idle>-0 3dNs7 0us : 0:120:R + [003] 312:100:R kworker/3:1H
1964 <idle>-0 3dNs7 1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
1965 <idle>-0 3d..3 15us : __schedule <-schedule
1966 <idle>-0 3d..3 15us : 0:120:R ==> [003] 312:100:R kworker/3:1H
1970 the kworker with a nice priority of -20 (not very nice), took
1974 Non-real-time tasks are not that interesting. A more interesting
1980 In a Real-Time environment it is very important to know the
1991 and not the average. We can have a very fast scheduler that may
1992 only have a large latency once in a while, but that would not
1998 tracer for a while to see that effect).
2019 # latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2032 <idle>-0 3d.h4 0us : 0:120:R + [003] 2389: 94:R sleep
2033 <idle>-0 3d.h4 1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
2034 <idle>-0 3d..3 5us : __schedule <-schedule
2035 <idle>-0 3d..3 5us : 0:120:R ==> [003] 2389: 94:R sleep
2041 is about to schedule in. This may change if we add a new marker at the
2052 <idle>-0 3d..3 5us : 0:120:R ==> [003] 2389: 94:R sleep
2054 The 0:120:R means idle was running with a nice priority of 0 (120 - 120)
2068 # latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
2081 <idle>-0 3d.h4 1us+: 0:120:R + [003] 2448: 94:R sleep
2082 <idle>-0 3d.h4 2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
2083 <idle>-0 3d.h3 3us : check_preempt_curr <-ttwu_do_wakeup
2084 <idle>-0 3d.h3 3us : resched_curr <-check_preempt_curr
2085 <idle>-0 3dNh3 4us : task_woken_rt <-ttwu_do_wakeup
2086 <idle>-0 3dNh3 4us : _raw_spin_unlock <-try_to_wake_up
2087 <idle>-0 3dNh3 4us : sub_preempt_count <-_raw_spin_unlock
2088 <idle>-0 3dNh2 5us : ttwu_stat <-try_to_wake_up
2089 <idle>-0 3dNh2 5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
2090 <idle>-0 3dNh2 6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2091 <idle>-0 3dNh1 6us : _raw_spin_lock <-__run_hrtimer
2092 <idle>-0 3dNh1 6us : add_preempt_count <-_raw_spin_lock
2093 <idle>-0 3dNh2 7us : _raw_spin_unlock <-hrtimer_interrupt
2094 <idle>-0 3dNh2 7us : sub_preempt_count <-_raw_spin_unlock
2095 <idle>-0 3dNh1 7us : tick_program_event <-hrtimer_interrupt
2096 <idle>-0 3dNh1 7us : clockevents_program_event <-tick_program_event
2097 <idle>-0 3dNh1 8us : ktime_get <-clockevents_program_event
2098 <idle>-0 3dNh1 8us : lapic_next_event <-clockevents_program_event
2099 <idle>-0 3dNh1 8us : irq_exit <-smp_apic_timer_interrupt
2100 <idle>-0 3dNh1 9us : sub_preempt_count <-irq_exit
2101 <idle>-0 3dN.2 9us : idle_cpu <-irq_exit
2102 <idle>-0 3dN.2 9us : rcu_irq_exit <-irq_exit
2103 <idle>-0 3dN.2 10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit
2104 <idle>-0 3dN.2 10us : sub_preempt_count <-irq_exit
2105 <idle>-0 3.N.1 11us : rcu_idle_exit <-cpu_idle
2106 <idle>-0 3dN.1 11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit
2107 <idle>-0 3.N.1 11us : tick_nohz_idle_exit <-cpu_idle
2108 <idle>-0 3dN.1 12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
2109 <idle>-0 3dN.1 12us : ktime_get <-tick_nohz_idle_exit
2110 <idle>-0 3dN.1 12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
2111 <idle>-0 3dN.1 13us : cpu_load_update_nohz <-tick_nohz_idle_exit
2112 <idle>-0 3dN.1 13us : _raw_spin_lock <-cpu_load_update_nohz
2113 <idle>-0 3dN.1 13us : add_preempt_count <-_raw_spin_lock
2114 <idle>-0 3dN.2 13us : __cpu_load_update <-cpu_load_update_nohz
2115 <idle>-0 3dN.2 14us : sched_avg_update <-__cpu_load_update
2116 <idle>-0 3dN.2 14us : _raw_spin_unlock <-cpu_load_update_nohz
2117 <idle>-0 3dN.2 14us : sub_preempt_count <-_raw_spin_unlock
2118 <idle>-0 3dN.1 15us : calc_load_nohz_stop <-tick_nohz_idle_exit
2119 <idle>-0 3dN.1 15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
2120 <idle>-0 3dN.1 15us : hrtimer_cancel <-tick_nohz_idle_exit
2121 <idle>-0 3dN.1 15us : hrtimer_try_to_cancel <-hrtimer_cancel
2122 <idle>-0 3dN.1 16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel
2123 <idle>-0 3dN.1 16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2124 <idle>-0 3dN.1 16us : add_preempt_count <-_raw_spin_lock_irqsave
2125 <idle>-0 3dN.2 17us : __remove_hrtimer <-remove_hrtimer.part.16
2126 <idle>-0 3dN.2 17us : hrtimer_force_reprogram <-__remove_hrtimer
2127 <idle>-0 3dN.2 17us : tick_program_event <-hrtimer_force_reprogram
2128 <idle>-0 3dN.2 18us : clockevents_program_event <-tick_program_event
2129 <idle>-0 3dN.2 18us : ktime_get <-clockevents_program_event
2130 <idle>-0 3dN.2 18us : lapic_next_event <-clockevents_program_event
2131 <idle>-0 3dN.2 19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
2132 <idle>-0 3dN.2 19us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2133 <idle>-0 3dN.1 19us : hrtimer_forward <-tick_nohz_idle_exit
2134 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward
2135 <idle>-0 3dN.1 20us : ktime_add_safe <-hrtimer_forward
2136 <idle>-0 3dN.1 20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
2137 <idle>-0 3dN.1 20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns
2138 <idle>-0 3dN.1 21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns
2139 <idle>-0 3dN.1 21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
2140 <idle>-0 3dN.1 21us : add_preempt_count <-_raw_spin_lock_irqsave
2141 <idle>-0 3dN.2 22us : ktime_add_safe <-__hrtimer_start_range_ns
2142 <idle>-0 3dN.2 22us : enqueue_hrtimer <-__hrtimer_start_range_ns
2143 <idle>-0 3dN.2 22us : tick_program_event <-__hrtimer_start_range_ns
2144 <idle>-0 3dN.2 23us : clockevents_program_event <-tick_program_event
2145 <idle>-0 3dN.2 23us : ktime_get <-clockevents_program_event
2146 <idle>-0 3dN.2 23us : lapic_next_event <-clockevents_program_event
2147 <idle>-0 3dN.2 24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns
2148 <idle>-0 3dN.2 24us : sub_preempt_count <-_raw_spin_unlock_irqrestore
2149 <idle>-0 3dN.1 24us : account_idle_ticks <-tick_nohz_idle_exit
2150 <idle>-0 3dN.1 24us : account_idle_time <-account_idle_ticks
2151 <idle>-0 3.N.1 25us : sub_preempt_count <-cpu_idle
2152 <idle>-0 3.N.. 25us : schedule <-cpu_idle
2153 <idle>-0 3.N.. 25us : __schedule <-preempt_schedule
2154 <idle>-0 3.N.. 26us : add_preempt_count <-__schedule
2155 <idle>-0 3.N.1 26us : rcu_note_context_switch <-__schedule
2156 <idle>-0 3.N.1 26us : rcu_sched_qs <-rcu_note_context_switch
2157 <idle>-0 3dN.1 27us : rcu_preempt_qs <-rcu_note_context_switch
2158 <idle>-0 3.N.1 27us : _raw_spin_lock_irq <-__schedule
2159 <idle>-0 3dN.1 27us : add_preempt_count <-_raw_spin_lock_irq
2160 <idle>-0 3dN.2 28us : put_prev_task_idle <-__schedule
2161 <idle>-0 3dN.2 28us : pick_next_task_stop <-pick_next_task
2162 <idle>-0 3dN.2 28us : pick_next_task_rt <-pick_next_task
2163 <idle>-0 3dN.2 29us : dequeue_pushable_task <-pick_next_task_rt
2164 <idle>-0 3d..3 29us : __schedule <-preempt_schedule
2165 <idle>-0 3d..3 30us : 0:120:R ==> [003] 2448: 94:R sleep
2167 This isn't that big of a trace, even with function tracing enabled,
2176 As function tracing can induce a much larger latency, but without
2178 caused it. There is a middle ground, and that is with enabling
2212 <idle>-0 2.N.2 3us : cpu_idle: state=4294967295 cpu_id=2
2213 <idle>-0 2dN.3 4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0
2214 …<idle>-0 2dN.3 4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer ex…
2217 <idle>-0 2d..3 6us : __schedule <-schedule
2218 <idle>-0 2d..3 6us : 0:120:R ==> [002] 5882: 94:R sleep
2227 periodically make a CPU constantly busy with interrupts disabled.
2246 …<...>-1729 [005] d... 714.756290: #3 inner/outer(us): 16/16 ts:1581527519.678961629 co…
2268 runs in a loop checking a timestamp twice. The latency detected within
2279 The number of times a latency was detected during the window.
2310 When the test is started. A kernel thread is created that
2322 ftrace_enabled is set; otherwise this tracer is a nop.
2359 tracing directly from a program. This allows you to stop the
2361 interested in. To disable the tracing directly from a C program,
2380 By writing into set_ftrace_pid you can trace a
2406 ##### CPU 3 buffer started ####
2413 If you want to trace a function when executing, you could use
2484 write(ffd, "nop", 3);
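The same shutdown can be done from a shell script instead of a C program; a sketch assuming tracefs at /sys/kernel/tracing and root privileges:

```shell
# Stop tracing the moment the interesting event has happened, so the
# ring buffer keeps the lead-up instead of being overwritten.
echo 0 > /sys/kernel/tracing/tracing_on
# Reset the current tracer, as the C snippet does with write(ffd, "nop", 3).
echo nop > /sys/kernel/tracing/current_tracer
```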
2518 probes a function on its entry and its exit. This is done by
2519 using a dynamically allocated stack of return addresses in each
2521 address of each function traced to set a custom probe. Thus the
2525 Probing on both ends of a function leads to special features
2528 - measure of a function's time execution
2529 - having a reliable call stack to draw function calls graph
2533 - you want to find the reason of a strange kernel behavior and
2540 - you want to find quickly which path is taken by a specific
2543 - you just want to peek inside a working kernel and want to see
2584 the closing bracket line of a function or on the same line
2585 as the current function in the case of a leaf one. It is default
2600 3) # 1837.709 us | } /* __switch_to */
2601 3) | finish_task_switch() {
2602 3) 0.313 us | _raw_spin_unlock_irq();
2603 3) 3.177 us | }
2604 3) # 1889.063 us | } /* __schedule */
2605 3) ! 140.417 us | } /* __schedule */
2606 3) # 2034.948 us | } /* schedule */
2607 3) * 33998.59 us | } /* schedule_preempt_disabled */
2672 system clock since it started. A snapshot of this time is
2699 for a function if the start of that function is not in the
2753 be displayed in a smart way. Specifically, if it is an error code,
2777 - Even if the function return type is void, a return value will still
2783 a 64-bit return value, with the lower 32 bits saved in eax and the
2787 - In certain procedure call standards, such as arm64's AAPCS64, when a
2788 type is smaller than a GPR, it is the responsibility of the consumer
2791 when using a u8 in a 64-bit GPR, bits [63:8] may contain arbitrary values,
2840 trace_printk(). For example, if you want to put a comment inside
2844 trace_printk("I'm a comment!\n")
2849 1) | /* I'm a comment! */
2864 starts off pointing to a simple return. (Enabling FTRACE will
2876 a notrace, or blocked another way and all inline functions are not
2880 A section called "__mcount_loc" is created that holds
2884 references into a single table.
2890 are loaded and before they are executed. When a module is
2901 (which is just a function stub). They now call into the ftrace
2905 a breakpoint at the location to be modified, sync all CPUs, modify
2928 A list of available functions that you can add to these files is
3045 To clear out a filter so that all functions will be recorded
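Clearing a filter is a write with no content; a sketch, assuming tracefs at /sys/kernel/tracing and root privileges:

```shell
# An empty write clears the filter, so all functions are traced again.
echo > /sys/kernel/tracing/set_ftrace_filter
# The same works for the notrace list.
echo > /sys/kernel/tracing/set_ftrace_notrace
```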
3120 case of setting thousands of specific functions at a time. By passing
3121 in a list of numbers, no string processing will occur. Instead, the function
3211 Note, the proc sysctl ftrace_enabled is a big on/off switch for the
3216 cannot be disabled if there is a callback with FTRACE_OPS_FL_PERMANENT set
3235 A few commands are supported by the set_ftrace_filter interface.
3251 in a different module is accomplished by appending (>>) to the
3258 functions except a specific module::
3278 no limit. For example, to disable tracing when a schedule bug
3288 to set_ftrace_filter. To remove a command, prepend it by '!'
3294 that have a counter. To remove commands without counters::
3299 Will cause a snapshot to be triggered when the function is hit.
3315 These commands can enable or disable a trace event. Note, because
3318 a "soft" mode. That is, the tracepoint will be called, but
3320 as long as there's a command that triggers it.
3341 something, and want to dump the trace when a certain function
3342 is hit. Perhaps it's a function that is called before a triple
3343 fault happens and does not allow you to get a regular dump.
3352 When the function is hit, a stack trace is recorded.
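A sketch of attaching and removing one of these function commands, assuming tracefs at /sys/kernel/tracing and root privileges (schedule is just an example target):

```shell
# Stop tracing as soon as schedule() is hit, catching the lead-up in the buffer.
echo 'schedule:traceoff' > /sys/kernel/tracing/set_ftrace_filter
# Remove the command again by prepending '!'.
echo '!schedule:traceoff' > /sys/kernel/tracing/set_ftrace_filter
```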
3421 To modify the buffer, simply echo in a number (in 1024 byte segments).
3461 CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
3467 Snapshot preserves a current trace buffer at a particular point
3469 buffer with a spare buffer, and tracing continues in the new
3477 This is used to take a snapshot and to read the output
3478 of the snapshot. Echo 1 into this file to allocate a
3479 spare buffer and to take a snapshot (swap), then read
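The snapshot round trip can be sketched as follows, assuming tracefs at /sys/kernel/tracing and root privileges:

```shell
# Allocate the spare buffer and take a snapshot (swap) of the current trace.
echo 1 > /sys/kernel/tracing/snapshot
# Read the frozen copy while the main buffer keeps tracing.
cat /sys/kernel/tracing/snapshot
# Free the spare buffer when done.
echo 0 > /sys/kernel/tracing/snapshot
```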
3548 In the tracefs tracing directory, there is a directory called "instances".
3568 is a separate and new buffer. The files affect that buffer but do not
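A sketch of creating and destroying an instance, assuming tracefs at /sys/kernel/tracing and root privileges (the instance name "foo" is arbitrary):

```shell
# Create an independent buffer; the new directory gets its own
# trace, trace_pipe, events, tracing_on, and so on.
mkdir /sys/kernel/tracing/instances/foo
# Removing the directory frees the buffer.
rmdir /sys/kernel/tracing/instances/foo
```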
3610 …<idle>-0 [003] d..3 136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 p…
3611 …empt-9 [003] d..3 136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 p…
3614 …bash-1998 [000] d..3 136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_…
3616 …kworker/0:1-59 [000] d..3 136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_pr…
3620 migration/1-14 [001] d.h3 138.732674: softirq_raise: vec=3 [action=NET_RX]
3621 <idle>-0 [001] dNh3 138.732725: softirq_raise: vec=3 [action=NET_RX]
3647 bash-1998 [000] d... 140.733504: sys_dup2(oldfd: a, newfd: 1)
3649 bash-1998 [000] d... 140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
3651 bash-1998 [000] d... 140.733510: sys_close(fd: a)
3669 Note, if a process has a trace file open in one of the instance
3675 Since the kernel has a fixed sized stack, it is important not to
3676 waste it in functions. A kernel developer must be conscious of
3678 can be in danger of a stack overflow, and corruption will occur,
3679 usually leading to a system panic.
3682 periodically checking usage. But if you can perform a check
3684 a function tracer, it makes it convenient to check the stack size
3688 To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.
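A sketch of running the stack tracer, assuming root privileges and tracefs at /sys/kernel/tracing:

```shell
# Turn the stack tracer on, let the workload run, then inspect the worst case.
echo 1 > /proc/sys/kernel/stack_tracer_enabled
cat /sys/kernel/tracing/stack_max_size   # deepest stack usage seen, in bytes
cat /sys/kernel/tracing/stack_trace      # the backtrace that produced it
```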
3697 After running it for a few minutes, the output looks like:
3709 3) 2288 80 idle_balance+0xbb/0x130