Lines Matching refs:that
10 It doesn't describe the reasons why rtmutex.c exists. For that please see
12 that happen without this code, but only as context for understanding
16 inheritance (PI) algorithm that is used, as well as reasons for the
17 decisions that were made to implement PI in the manner that was done.
26 to use a resource that a lower priority process has (a mutex for example),
36 that C owns and must wait and lets C run to release the lock. But in the
70 inherited priority, and A then can continue with the resource that C had.
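The boost/deboost step the fragments above describe can be sketched as a toy model (this is illustrative Python, not kernel code; `Task`, `boost`, and `deboost` are invented names, and a smaller `prio` value means higher priority, matching the kernel's internal convention):

```python
class Task:
    """Toy task: prio is the effective priority, normal_prio the base.
    Smaller value = higher priority (kernel internal convention)."""
    def __init__(self, name, prio):
        self.name = name
        self.normal_prio = prio
        self.prio = prio

def boost(owner, waiter):
    """Lend the waiter's priority to the lock owner if it is higher."""
    if waiter.prio < owner.prio:
        owner.prio = waiter.prio

def deboost(owner):
    """Owner released the lock: fall back to its normal priority."""
    owner.prio = owner.normal_prio

# A (high) blocks on a lock held by C (low); while boosted, C cannot be
# preempted by a medium-priority B, so it can finish and release the lock.
A, C = Task("A", 10), Task("C", 30)
boost(C, A)      # C temporarily runs at A's priority
assert C.prio == 10
deboost(C)       # C released the lock; A can now take the resource
assert C.prio == 30
```

The point of the model: the boost is temporary and tied to lock ownership, which is why C's priority snaps back the moment it releases the lock.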
75 Here I explain some terminology that is used in this document to help describe
76 the design that is used to implement PI.
78 PI chain - The PI chain is an ordered series of locks and processes that cause
79 processes to inherit priorities from a previous process that is
83 mutex - In this document, to differentiate from locks that implement
84 PI and spin locks that are used in the PI code, from now on
88 referring to spin locks that are used to protect parts of the PI
95 waiter - A waiter is a struct that is stored on the stack of a blocked
99 structure holds a pointer to the task, as well as the mutex that
104 waiter is sometimes used in reference to the task that is waiting
107 waiters - A list of processes that are blocked on a mutex.
112 that a specific process owns.
115 differentiate between two processes that are being described together.
121 The PI chain is a list of processes and mutexes that may cause priority
167 have multiple chains merge at mutexes. If we add another process G that is
184 to that of G.
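The chain walk that lets a new blocker G raise priorities down a merged chain can be sketched like this (a toy model under the same smaller-value-is-higher-priority assumption; `walk_chain` is an invented name, not the kernel's walk function):

```python
def walk_chain(waiter_prio, chain):
    """Toy PI-chain walk: chain is the ordered list of lock owners,
    each possibly blocked on the next lock in the chain.  Propagate the
    blocker's priority until an owner already runs at least that high.
    Returns the names of the owners that were boosted."""
    boosted = []
    for owner in chain:
        if waiter_prio < owner["prio"]:
            owner["prio"] = waiter_prio
            boosted.append(owner["name"])
        else:
            break   # this owner needs no boost; the walk stops here
    return boosted

# G (prio 20) blocks at the head of the chain: D is boosted, but F
# already runs at a higher priority (15), so the walk stops early.
chain = [{"name": "D", "prio": 25},
         {"name": "F", "prio": 15},
         {"name": "E", "prio": 40}]
assert walk_chain(20, chain) == ["D"]
assert chain[0]["prio"] == 20
```

The early stop is the important property: the walk only goes as deep as the boosting actually changes something, which bounds the work done per blocking operation.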
189 Every mutex keeps track of all the waiters that are blocked on itself. The
191 by a spin lock that is located in the struct of the mutex. This lock is called
199 a tree of all top waiters of the mutexes that are owned by the process.
200 Note that this tree only holds the top waiters and not all waiters that are
203 The top of the task's PI tree is always the highest priority task that
204 is waiting on a mutex that is owned by the task. So if the task has
205 inherited a priority, it will always be the priority of the task that is
222 be directly nested that way.
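The two-level bookkeeping described above (each mutex tracks its own waiters; each task's pi_waiters tree holds only the top waiter of each mutex it owns) can be sketched as follows. This is a toy model with invented names; it represents each tree as a plain list of priority values, smaller meaning higher priority:

```python
def top_waiter(mutex_waiters):
    """Top (highest-priority) waiter of one mutex; smaller value wins."""
    return min(mutex_waiters)

def effective_prio(normal_prio, owned_mutexes):
    """The task's pi_waiters holds only the top waiter of each owned
    mutex, so the inherited priority is simply the best of those tops
    compared against the task's normal priority."""
    pi_waiters = [top_waiter(w) for w in owned_mutexes if w]
    return min([normal_prio] + pi_waiters)

# A task at normal prio 20 owns two mutexes with waiters {30, 12}
# and {18}: the tops are 12 and 18, so it runs boosted at 12.
assert top_waiter([30, 12]) == 12
assert effective_prio(20, [[30, 12], [18]]) == 12
# If the task is already higher priority than every waiter, no boost.
assert effective_prio(5, [[30, 12], [18]]) == 5
```

Keeping only top waiters in the per-task tree is what makes the comparison cheap: the task never has to scan every waiter of every mutex it owns.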
264 Now we add 4 processes that run each of these functions separately.
266 respectively, and such that D runs first and A last. With D being preempted
267 in func4 in the "do something again" area, we have a locking order that follows:
281 it is still very difficult to produce chains of that depth in practice.
284 type of application that nests large amounts of mutexes to create a large
332 the system for architectures that support it. This will also be explained
339 The implementation of the PI code in rtmutex.c has several places that a
347 priority process that is waiting on any of the mutexes owned by the task. Since
349 of all the mutexes that the task owns, we simply need to compare the top
352 new priority. Note that rt_mutex_setprio is defined in kernel/sched/core.c
359 It is interesting to note that rt_mutex_adjust_prio can either increase
360 or decrease the priority of the task. In the case that a higher priority
365 always contains the highest priority task that is waiting on a mutex owned
366 by the task, so we only need to compare the priority of that top pi waiter
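The adjust step described here, which can move the priority in either direction, can be sketched as a toy model (invented names, not the real `rt_mutex_adjust_prio`; smaller value = higher priority):

```python
def adjust_prio(task):
    """Toy priority adjustment: recompute the effective priority from
    the normal priority and the top pi waiter (None if no waiters).
    This single comparison can both boost (lower the value) and
    deboost (raise it back to normal)."""
    top = task.get("top_pi_waiter")
    if top is None:
        task["prio"] = task["normal_prio"]
    else:
        task["prio"] = min(task["normal_prio"], top)

t = {"normal_prio": 20, "top_pi_waiter": 10, "prio": 20}
adjust_prio(t)
assert t["prio"] == 10            # boosted by the top waiter

t["top_pi_waiter"] = None         # last high-priority waiter is gone
adjust_prio(t)
assert t["prio"] == 20            # deboosted back to normal
```

Because the per-task tree always exposes the single top waiter, one `min` comparison is all the recomputation ever needs.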
383 (de)boosting (the owner of a mutex that a process is blocking on), a flag to
384 check for deadlocking, the mutex that the task owns, a pointer to a waiter
385 that is the process's waiter struct that is blocked on the mutex (although this
393 that the state of the owner and lock can change when entered into this function.
396 performed on it. This means that the task is set to the priority that it
399 in the pi_waiters and waiters trees that the task is blocked on. This function
400 solves all that.
412 The first thing that is tried is the fast taking of the mutex. This is
429 We then call try_to_take_rt_mutex. This is where the architecture that
434 slow path. The first thing that is done here is an atomic setting of
469 and add the current process to that tree. Since the pi_waiter of the owner
494 The second case is only applicable for tasks that are grabbing a mutex
495 that can wake up before getting the lock, either due to a signal or
518 A check is made to see if the mutex has waiters or not. On architectures that
519 do not have CMPXCHG, this is the location where the owner of the mutex will
520 determine if a waiter needs to be awoken or not. On architectures that
521 do have CMPXCHG, that check is done in the fast path, but it is still needed
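The CMPXCHG fast path the fragments describe can be sketched as a compare-and-exchange on the lock's owner field: if the lock is free with no waiters, the swap succeeds and the mutex is taken without touching any wait queues; otherwise the slow path runs. This toy Python model only illustrates the semantics (Python offers no real atomics, and `cmpxchg`, `Lock`, and `fast_lock` are invented names):

```python
def cmpxchg(obj, field, old, new):
    """Toy compare-and-exchange: swap only if the field still holds
    the expected old value.  Models the atomic primitive's semantics."""
    if getattr(obj, field) == old:
        setattr(obj, field, new)
        return True
    return False

class Lock:
    def __init__(self):
        self.owner = None   # None = unlocked with no waiters

def fast_lock(lock, task):
    """Fast path: succeed only when the lock is free; any failure
    means contention and a fall-through to the slow path."""
    return cmpxchg(lock, "owner", None, task)

lk = Lock()
assert fast_lock(lk, "A")        # uncontended: fast path wins
assert not fast_lock(lk, "B")    # contended: must take the slow path
assert lk.owner == "A"
```

This is why the waiter check lives in different places on different architectures: with CMPXCHG the failed swap itself signals contention, while without it the owner must explicitly check for waiters at unlock time.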