Zephyr provides a robust and scalable timing framework to enable
reporting and tracking of timed events from hardware timing sources
of arbitrary precision.
The kernel presents a "cycle" count via the :c:func:`k_cycle_get_32`
API. The intent is that this counter represents the fastest cycle
counter that the operating system is able
to present to the user (for example, a CPU cycle counter) and that the
read operation is very fast. The expectation is that very sensitive
application code might use this in a polling manner to achieve maximal
precision. The frequency of this counter is available via
:c:func:`sys_clock_hw_cycles_per_sec` (which on almost all
platforms is a runtime constant that evaluates to
:kconfig:option:`CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC`).
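
A common use of the cycle counter is delta-measuring a short operation
and converting the result to real time. The sketch below substitutes a
manually advanced stub for :c:func:`k_cycle_get_32` so it can run
anywhere, and assumes a 48 MHz cycle rate; in Zephyr code the real API
and the :c:func:`k_cyc_to_us_floor64` conversion would be used
instead.

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>

   /* Stub standing in for k_cycle_get_32(); a real platform reads a
    * hardware counter here. Advanced manually to simulate elapsed time. */
   static uint32_t fake_cycles;
   static uint32_t cycle_get_32_stub(void) { return fake_cycles; }

   int main(void)
   {
       uint32_t start = cycle_get_32_stub();

       fake_cycles += 48000;   /* pretend the operation took 48000 cycles */

       /* Unsigned subtraction keeps the delta correct across wraparound */
       uint32_t elapsed = cycle_get_32_stub() - start;

       /* Floor conversion to microseconds at the assumed 48 MHz rate,
        * in the style of k_cyc_to_us_floor64() */
       uint64_t us = (uint64_t)elapsed * 1000000u / 48000000u;
       assert(us == 1000);
       return 0;
   }
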
For asynchronous timekeeping, the kernel defines a "ticks" concept. A
"tick" is the internal unit in which the kernel does all of its uptime
and timeout bookkeeping. The tick rate is configurable via
:kconfig:option:`CONFIG_SYS_CLOCK_TICKS_PER_SEC`, with defaults on
most hardware platforms in the neighborhood of 10 kHz, and software
emulation platforms and legacy drivers using a more traditional 100 Hz
value.

Conversion
----------

The kernel provides an enumerated family of conversion routines
between milliseconds, microseconds, ticks and cycles, with explicit
control of rounding and output precision.
For example: :c:func:`k_ms_to_ticks_ceil32` will convert a
millisecond input value to the next higher number of ticks, returning
a result truncated to 32 bits of precision; and
:c:func:`k_cyc_to_us_floor64` will convert a measured cycle count
to an elapsed number of microseconds in a full 64 bits of precision.
On most platforms, where the various counter rates are integral
multiples of each other and where the output fits within a single
word, these conversions expand to a 2-4 operation sequence.
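
The rounding control can be illustrated with plain integer
arithmetic. This is an illustrative sketch, not the kernel's actual
macro expansion; the 32768 Hz tick rate is an assumption chosen so
that milliseconds and ticks do not divide evenly:

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>

   #define TICKS_PER_SEC 32768   /* assumed RTC-style tick rate */

   /* Round down to the nearest whole tick */
   static uint32_t ms_to_ticks_floor32(uint32_t ms)
   {
       return (uint32_t)((uint64_t)ms * TICKS_PER_SEC / 1000u);
   }

   /* Round up, in the style of k_ms_to_ticks_ceil32(), so that a
    * nonzero wait can never collapse to a zero-tick (immediate) timeout */
   static uint32_t ms_to_ticks_ceil32(uint32_t ms)
   {
       return (uint32_t)(((uint64_t)ms * TICKS_PER_SEC + 999u) / 1000u);
   }

   int main(void)
   {
       assert(ms_to_ticks_floor32(100) == 3276);   /* 3276.8 ticks, down */
       assert(ms_to_ticks_ceil32(100) == 3277);    /* 3276.8 ticks, up   */
       assert(ms_to_ticks_ceil32(1) == 33);        /* 32.768 ticks, up   */
       return 0;
   }
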
The kernel tracks a system uptime count on behalf of the application.
This is exposed to application code primarily via
:c:func:`k_uptime_get`, which returns a millisecond count since boot.
The internal tracking, however, is as a 64 bit integer count of ticks,
which applications needing tick-level precision can retrieve directly
via :c:func:`k_uptime_ticks`.
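
The 32 bit convenience presentation of uptime,
:c:func:`k_uptime_get_32`, wraps after roughly 49.7 days of
millisecond counting, so elapsed-time math should use unsigned
subtraction, which stays correct across a single wrap. A minimal,
runnable sketch with simulated readings:

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>

   /* Simulated 32-bit millisecond uptime readings that straddle a
    * rollover: "start" is taken just before UINT32_MAX, "now" after. */
   int main(void)
   {
       uint32_t start = UINT32_MAX - 99;  /* 100 ms before rollover */
       uint32_t now = 150;                /* 150 ms after rollover  */

       /* Unsigned subtraction yields the true elapsed time despite
        * the wrap */
       uint32_t elapsed_ms = now - start;
       assert(elapsed_ms == 250);
       return 0;
   }
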
The Zephyr kernel provides many APIs with a "timeout" parameter,
conceptually indicating the time at which an event will occur. For
example:

* Kernel blocking operations like :c:func:`k_sem_take` or
  :c:func:`k_queue_get` may provide a timeout after which the
  routine will return with an error code if no data has arrived.

* The kernel :c:struct:`k_work_delayable` API provides a timeout parameter
  indicating when a work queue item will be added to the system queue.
All these values are specified using a :c:type:`k_timeout_t` value. This is
an opaque struct type that must be initialized using one of a family
of provided kernel macros. The most common is :c:macro:`K_MSEC`, which defines
a time in milliseconds after the current time.
Exactly what "current time" means for a relative timeout depends on
the context in which it is scheduled:

* When scheduling a relative timeout from within a timeout callback (e.g. from
  within a :c:struct:`k_timer` expiry function), "current time" is the exact
  time at which the currently firing timeout was originally scheduled, even
  if real time has already advanced. This way a timer rescheduled from
  within another timer's callback will always be calculated with a precise offset
  from its last expiration, with no drift.

* When scheduling a timeout from application context, "current time" means the
  tick count at the moment the kernel receives the timeout value.
By default the kernel is configured with :c:type:`k_timeout_t`
being 32 bits. Large uptime counts in non-tick units will experience
complicated rollover semantics, so it is expected that
timing-sensitive applications with long uptimes will be configured to
use a 64 bit timeout type.
Finally, timeouts can be specified as absolute times measured from
system boot. A timeout initialized with :c:macro:`K_TIMEOUT_ABS_MS`
indicates a timeout that will expire after the system uptime reaches
the specified millisecond value.

Timeout Queue
-------------

All kernel timeout events specified using the APIs above are
managed in a single, global queue of events. Each event is stored in
a double-linked list, with an attendant delta count in ticks from the
previous event. The action to take on an event is specified as a
callback function pointer provided by the subsystem requesting the
event, along with a :c:struct:`_timeout` tracking struct that is
expected to be embedded within subsystem-defined data structures (for
example: a :c:struct:`wait_q` struct, or a :c:type:`k_tid_t` thread struct).
Note that all variant units passed via a :c:type:`k_timeout_t` are converted
to ticks once, on insertion into the list. There are no
multiple-conversion steps internal to the kernel, so precision is
preserved at the tick level no matter how many events exist or how
long a timeout might be.
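
The delta representation means that advancing time only ever inspects
the head of the list. The toy implementation below (a singly linked
sketch with invented names, not the kernel's doubly linked
``struct _timeout`` queue) shows the insert and announce bookkeeping:

.. code-block:: c

   #include <assert.h>
   #include <stddef.h>

   /* Toy delta list: each node stores the ticks remaining *after* the
    * previous node expires, so announcing elapsed time only walks the
    * expired prefix of the list. */
   struct event {
       struct event *next;
       int delta;              /* ticks after the previous event */
   };

   /* Insert an event that should fire `ticks` from now. */
   static void insert(struct event **head, struct event *ev, int ticks)
   {
       while (*head != NULL && (*head)->delta <= ticks) {
           ticks -= (*head)->delta;
           head = &(*head)->next;
       }
       ev->delta = ticks;
       ev->next = *head;
       if (ev->next != NULL) {
           ev->next->delta -= ticks;   /* keep successor's delta correct */
       }
       *head = ev;
   }

   /* Announce elapsed ticks; returns the number of events that expired. */
   static int announce(struct event **head, int ticks)
   {
       int fired = 0;

       while (*head != NULL && (*head)->delta <= ticks) {
           ticks -= (*head)->delta;
           *head = (*head)->next;
           fired++;
       }
       if (*head != NULL) {
           (*head)->delta -= ticks;
       }
       return fired;
   }

   int main(void)
   {
       struct event a, b, c, *q = NULL;

       insert(&q, &a, 10);
       insert(&q, &b, 5);
       insert(&q, &c, 30);

       assert(announce(&q, 7) == 1);   /* only b (t=5) has expired */
       assert(announce(&q, 3) == 1);   /* a (t=10) expires at tick 10 */
       assert(announce(&q, 100) == 1); /* finally c (t=30) */
       return 0;
   }
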
The underlying representation could be changed to
permit a more scalable backend data structure, but no such
need has been identified for the small scale of typical systems.

Timer Drivers
-------------

Kernel timing at the tick level is driven by a timer driver with a
comparatively simple API:

* The driver is expected to be able to "announce" new ticks to the
  kernel via the :c:func:`sys_clock_announce` call. These calls may
  occur at any time, but the driver should attempt to ensure that they
  occur near tick boundaries
  (i.e. not "halfway through" a tick), and most importantly that they
  remain correct over time, with minimal skew against other counters
  and real world time.
* The driver is expected to provide a :c:func:`sys_clock_set_timeout` call
  indicating the number of ticks after which the kernel expects the
  next announcement. Timeouts set this way should not be
  missed. Note that the timeout value passed here is a delta from
  current time, but that does not absolve the driver of the
  requirement to provide ticks at a steady rate over time. Naive
  implementations are subject to bugs where fractional ticks are
  dropped, accumulating clock skew.
* The driver is expected to provide a :c:func:`sys_clock_elapsed` call which
  provides a current indication of how many ticks have elapsed (as
  compared to a real world clock) since the last call to
  :c:func:`sys_clock_announce`, which the kernel needs in order to
  evaluate newly arriving timeouts.
Note that a natural implementation of this API results in a "tickless"
kernel, which receives and processes timer interrupts only for
registered timeouts, relying on programmable hardware counters to
provide irregular interrupts. But a traditional, "ticked" or "dumb"
counter driver can be implemented too:

* The driver can receive interrupts at a regular rate corresponding to
  the OS tick rate, calling :c:func:`sys_clock_announce` with an
  argument of one each time.
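
The contract of such a "dumb" driver can be sketched with a stub
standing in for :c:func:`sys_clock_announce` (the stub names are
invented so this example can run outside Zephyr):

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>

   /* Stub accumulating announced ticks, in place of the kernel's
    * sys_clock_announce(), so the simulation below can be checked. */
   static int64_t announced;

   static void sys_clock_announce_stub(int32_t ticks)
   {
       announced += ticks;
   }

   /* A "dumb" ticked driver's timer ISR: one interrupt per OS tick,
    * always announcing exactly one tick. */
   static void timer_isr(void)
   {
       sys_clock_announce_stub(1);
   }

   int main(void)
   {
       /* Simulate 100 periodic timer interrupts */
       for (int i = 0; i < 100; i++) {
           timer_isr();
       }
       assert(announced == 100);
       return 0;
   }
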

SMP Details
-----------

In general, the timer API described above does not change when run in
a multiprocessor context. The kernel will internally synchronize all
access to shared timeout state. A few notes are worth detailing:

* Zephyr is agnostic about which CPU services timer interrupts. It is
  legal, for example, to
  have every timer interrupt handled on a single processor. Existing
  SMP architectures implement symmetric timer drivers.

* The announce call is global: it does no
  per-CPU tracking, and expects that if two timer interrupts fire near
  simultaneously, only one of them will provide the current tick count to
  the timing subsystem. The other may legally provide a tick count of
  zero if no ticks have elapsed since the last announcement.
* Some SMP hardware uses a single, global timer device, others use a
  per-CPU counter. The complexity here (for example: ensuring counter
  synchronization across CPUs) is expected to be handled by the
  driver, not the kernel.
* By default, the next timeout is programmed identically on every
  CPU, so every CPU will receive a timer interrupt for
  every event, even though by definition only one of them should see a
  non-zero ticks argument to :c:func:`sys_clock_announce`. This is probably
  a correct default for timing sensitive applications (because it
  minimizes the chance that a delayed interrupt on one CPU will delay
  a timeout), but may be a performance problem in some cases.

Time Slicing
------------

An auxiliary job of the timing subsystem is to provide the scheduler
with the tick counters needed to implement thread time slicing.
A thread time-slice cannot be a timeout value, as it does not reflect
a global expiration but instead a per-CPU value that needs to be
tracked independently on each core in an SMP context. Because no
other timing hardware may be available, time slicing is multiplexed
onto the same timer driver. This means that the
value passed to :c:func:`sys_clock_set_timeout` may be clamped to a
smaller value than the current next timeout when a time sliced thread
is currently scheduled.
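
That clamping amounts to taking the nearer of two deadlines. A trivial
sketch (the helper name is invented for illustration, not a kernel
symbol):

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>

   /* Hypothetical helper: program the hardware for the nearer of the
    * next global timeout event and this CPU's remaining time slice. */
   static int32_t next_driver_timeout(int32_t next_event_ticks,
                                      int32_t slice_remaining_ticks)
   {
       return (slice_remaining_ticks < next_event_ticks)
           ? slice_remaining_ticks : next_event_ticks;
   }

   int main(void)
   {
       /* A thread with 10 ticks of slice left clamps a 50-tick wait */
       assert(next_driver_timeout(50, 10) == 10);
       /* ...but a nearer event wins over a longer slice */
       assert(next_driver_timeout(5, 10) == 5);
       return 0;
   }
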

Subsystems that keep millisecond APIs
-------------------------------------

One complexity is :c:macro:`K_FOREVER`. Subsystems that store timeouts
as millisecond counts cannot pass that opaque value through, and
will need to use a different, integer-valued token to represent
"forever". :c:macro:`K_NO_WAIT` presents the same type safety problem,
of course, but as it is (and has always been) simply a numerical zero,
it has a natural porting path.
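
One possible shape for such a conversion boundary, sketched in plain C
with mock stand-ins for the kernel macros (the ``-1`` "forever"
sentinel and all ``MOCK_``/``MY_SUBSYS_`` names are assumptions of
this example, not kernel conventions):

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>

   /* Mock stand-ins for the kernel's opaque timeout type and constructors */
   typedef struct { int64_t ticks; } mock_timeout_t;
   #define MOCK_TICKS_PER_SEC 10000
   #define MOCK_MSEC(ms)  ((mock_timeout_t){ (int64_t)(ms) * MOCK_TICKS_PER_SEC / 1000 })
   #define MOCK_FOREVER   ((mock_timeout_t){ -1 })

   /* Hypothetical sentinel this subsystem chooses for "wait forever" */
   #define MY_SUBSYS_WAIT_FOREVER (-1)

   /* Convert the subsystem's integer millisecond API to a kernel
    * timeout only at the boundary where it is handed to the kernel. */
   static mock_timeout_t my_subsys_timeout(int32_t ms)
   {
       return (ms == MY_SUBSYS_WAIT_FOREVER) ? MOCK_FOREVER : MOCK_MSEC(ms);
   }

   int main(void)
   {
       assert(my_subsys_timeout(100).ticks == 1000);
       assert(my_subsys_timeout(MY_SUBSYS_WAIT_FOREVER).ticks == -1);
       assert(my_subsys_timeout(0).ticks == 0);  /* "no wait" maps naturally */
       return 0;
   }
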

Subsystems using ``k_timeout_t``
--------------------------------

Ideally, code that takes a "timeout" parameter specifying a time to
wait should use :c:type:`k_timeout_t` natively and pass it through to
the kernel unmodified; simple cases port trivially.
The more complicated case is when the subsystem needs to take a
timeout and wait in a loop, rechecking some condition each time it
wakes. For example, legacy millisecond-based code might have looked
like:
.. code-block:: c

    void my_wait_for_event(struct my_subsys *obj, int32_t timeout_in_ms)
    {
        while (timeout_in_ms > 0) {
            uint32_t start = k_uptime_get_32();

            if (is_event_complete(obj)) {
                return;
            }

            /* Wait for notification of state change */
            k_sem_take(obj->sem, timeout_in_ms);

            /* Subtract the elapsed time from the remaining budget */
            timeout_in_ms -= (k_uptime_get_32() - start);
        }
    }
This pattern requires that the timeout value be inspected and modified
arithmetically, which is not possible with an opaque
:c:type:`k_timeout_t`. For such cases the kernel provides
:c:func:`sys_timepoint_calc` and :c:func:`sys_timepoint_timeout`, a pair
that converts an arbitrary timeout to and from a timepoint value based on
an uptime tick at which it will expire. So such a loop might look like:
.. code-block:: c

    void my_wait_for_event(struct my_subsys *obj, k_timeout_t timeout)
    {
        /* Compute the absolute end point once, up front */
        k_timepoint_t end = sys_timepoint_calc(timeout);

        do {
            if (is_event_complete(obj)) {
                return;
            }

            /* Recompute the remaining timeout from the fixed end point */
            timeout = sys_timepoint_timeout(end);

            /* Wait for notification of state change */
            k_sem_take(obj->sem, timeout);
        } while (!K_TIMEOUT_EQ(timeout, K_NO_WAIT));
    }
Finally, note that relative ("delta") timeouts are interpreted
relative to a "current time", which is fixed at the moment the timeout
is created. :c:func:`sys_timepoint_calc` should therefore be invoked
once, at the point of timeout
creation in user code. It should not be used on a "stored" timeout
value, and should never be called iteratively in a loop.