It doesn't describe the reasons why rtmutex.c exists. For that please see
This document does explain problems that happen without this code, but
only as background for understanding the code.
The goal of this document is to help the reader understand the priority
inheritance (PI) algorithm that is used, as well as reasons for the
decisions that were made to implement PI in the manner that was done.
The trouble starts when a high priority process wants
to use a resource that a lower priority process has (a mutex for example)
and must wait for it to be released.
Without PI, the result is something called unbounded priority inversion.
That is when the high priority process can be blocked for an unbounded
amount of time by lower priority processes.
For example, a high priority process A tries to grab a lock that a low
priority process C owns; A must wait, letting C run to release the lock.
But in the meantime a medium priority process B can preempt C, delaying
the release of the lock and blocking A for an unbounded amount of time.
With PI, C inherits A's priority while it holds the lock, so B cannot
preempt it. Once C releases the lock, it loses the
inherited priority, and A then can continue with the resource that C had.
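The sequence can be sketched as a timeline (illustrative only)::

  C (low)     takes the mutex
  A (high)    preempts C and blocks on the mutex -> C inherits A's priority
  B (medium)  can no longer preempt the boosted C
  C           releases the mutex and loses the inherited priority
  A           takes the mutex and continues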
Here I explain some terminology that is used in this document to help describe
the design that is used to implement PI.
- The PI chain is an ordered series of locks and processes that cause
  processes to inherit priorities from a previous process that is
  blocked on one of its locks.
- In this document, to differentiate from locks that implement
  PI and spin locks that are used in the PI code, from now on
  the PI locks will be called mutexes.
referring to spin locks that are used to protect parts of the PI
algorithm.
- A waiter is a struct that is stored on the stack of a blocked
  process.
This structure holds a pointer to the task, as well as the mutex that
the task is blocked on.
The term waiter is sometimes used in reference to the task that is waiting
on a mutex.
- A list of processes that are blocked on a mutex.
- The highest priority process waiting on one of the mutexes
  that a specific process owns.
Note: task and process are used interchangeably in this document, mostly to
differentiate between two processes that are being described together.
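As a rough illustration of the waiter terminology, a simplified version of
the waiter structure might look as follows (a sketch modeled on the
kernel's struct rt_mutex_waiter; the exact fields and layout vary between
kernel versions)::

  struct rt_mutex_waiter {
	struct rb_node		tree_entry;	/* node in the mutex's waiters tree */
	struct rb_node		pi_tree_entry;	/* node in the owner's pi_waiters tree */
	struct task_struct	*task;		/* the blocked task */
	struct rt_mutex		*lock;		/* the mutex the task is blocked on */
	int			prio;		/* priority the waiter is queued at */
  };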
The PI chain is a list of processes and mutexes that may cause priority
inheritance to take place.
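For example, in the arrow notation used for chains below (a process points
to the mutex it is blocked on, and a mutex points to the process that owns
it; the names are illustrative)::

  A->L1->B

process A is blocked on mutex L1, and process B owns L1, so B may inherit
A's priority.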
Since a mutex may have more than one process blocked on it, we can
have multiple chains merge at mutexes. If we add another process G that is
blocked on a mutex already in a chain, G's chain merges into it at that
mutex.
If G has the highest priority in the merged chain, then all the tasks up
the chain must have their priorities increased to that of G.
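One way to picture such a merge (names illustrative)::

  G->L2->B->L1->A
  E->L2->B->L1->A

Both G and E are blocked on L2, so the two chains share the tail
L2->B->L1->A and merge at mutex L2.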
Every mutex keeps track of all the waiters that are blocked on itself. The
mutex stores these waiters in a rbtree, ordered by priority. This tree is
protected by a spin lock that is located in the struct of the mutex. This
lock is called wait_lock.
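Putting the pieces named so far together, a simplified mutex structure
might look like this (a sketch modeled on the kernel's struct rt_mutex;
details differ between kernel versions)::

  struct rt_mutex {
	raw_spinlock_t		wait_lock;	/* protects the waiters tree */
	struct rb_root_cached	waiters;	/* all waiters, ordered by priority */
	struct task_struct	*owner;		/* owner task, plus flag bits */
  };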
Each task also has its own pi_waiters rbtree. This is
a tree of all top waiters of the mutexes that are owned by the process.
Note that this tree only holds the top waiters and not all waiters that are
blocked on a given mutex.
The top of the task's PI tree is always the highest priority task that
is waiting on a mutex that is owned by the task. So if the task has
inherited a priority, it will always be the priority of the task that is
at the top of this tree.
be directly nested that way::
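  /*
   * Reconstructed for illustration: four functions whose critical
   * sections overlap pairwise without being fully nested.  The names
   * func1..func4 and mutexes L1..L4 follow the surrounding text.
   */
  void func1(void)
  {
	mutex_lock(L1);

	/* do anything */

	mutex_unlock(L1);
  }

  void func2(void)
  {
	mutex_lock(L1);
	mutex_lock(L2);

	/* do something */

	mutex_unlock(L2);
	mutex_unlock(L1);
  }

  void func3(void)
  {
	mutex_lock(L2);
	mutex_lock(L3);

	/* do something else */

	mutex_unlock(L3);
	mutex_unlock(L2);
  }

  void func4(void)
  {
	mutex_lock(L3);
	mutex_lock(L4);

	/* do something again */

	mutex_unlock(L4);
	mutex_unlock(L3);
  }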
Now we add 4 processes that run each of these functions separately.
Processes A, B, C, and D run func1, func2, func3, and func4
respectively, and such that D runs first and A last. With D being preempted
in func4 in the "do something again" area, we have a locking that follows
(reconstructed below for illustration)::
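  D owns L3 and L4
	C blocked on L3
	C owns L2
	B blocked on L2
	B owns L1
	A blocked on L1

And thus we have the PI chain A->L1->B->L2->C->L3->D.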
it still is very difficult to find the possibilities of that depth.
type of application that nests large amounts of mutexes to create a large
chain.
the system for architectures that support it. This will also be explained
later in this document.
The implementation of the PI code in rtmutex.c has several places where a
process must adjust its priority.
priority process that is waiting on any of the mutexes owned by the task.
Since the pi_waiters tree holds, ordered by priority, the top waiters of
all the mutexes that the task owns, we simply need to compare the top pi
waiter's priority with the task's own normal priority and take the higher
of the two as the new priority. Note that rt_mutex_setprio is defined in
kernel/sched/core.c to implement the actual change in priority.
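In code, the comparison just described might look like this (a simplified
sketch in the style of the kernel's rt_mutex_getprio; in the kernel a lower
prio number means a higher priority, hence the min())::

  /* Sketch: the priority a task should run at, accounting for PI. */
  int rt_mutex_getprio(struct task_struct *task)
  {
	if (likely(!task_has_pi_waiters(task)))
		return task->normal_prio;

	/* take the higher priority (the lower number) of the two */
	return min(task_top_pi_waiter(task)->prio, task->normal_prio);
  }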
It is interesting to note that rt_mutex_adjust_prio can either increase
or decrease the priority of the task. In the case that a higher priority
process has just blocked on a mutex owned by the task, rt_mutex_adjust_prio
would increase/boost the priority of the task. But if a higher priority
waiter were for some reason to leave the mutex (due to a timeout or a
signal), the same function would decrease/unboost the priority of the task.
That is because the pi_waiters tree always contains the highest priority
task that is waiting on a mutex owned by the task, so we only need to
compare the priority of that top pi waiter to the normal priority of the
given task.
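A sketch of such a function, under the same assumptions as the previous
example (locking of the task's pi_lock is omitted for brevity)::

  /* Sketch: recompute and apply the task's effective priority. */
  static void rt_mutex_adjust_prio(struct task_struct *task)
  {
	int prio = rt_mutex_getprio(task);

	if (task->prio != prio)
		rt_mutex_setprio(task, prio);
  }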
rt_mutex_adjust_prio_chain is called with a task to be checked for PI
(de)boosting (the owner of a mutex that a process is blocking on), a flag to
check for deadlocking, the mutex that the task owns, a pointer to a waiter
that is the process's waiter struct that is blocked on the mutex (although this
parameter may be NULL for deboosting).
When this function is called, there are no locks held. That also means
that the state of the owner and lock can change when entered into this function.
Before this function is called, the task has already had rt_mutex_adjust_prio
performed on it. This means that the task is set to the priority that it
should be at, but the rbtree nodes of the task's waiter have not been updated
with the new priorities, and this task may not be in the proper locations
in the pi_waiters and waiters trees that the task is blocked on. This function
solves all that.
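In rough outline, each step of the walk requeues the waiter to reflect the
task's new priority and then moves up the chain. The sketch below is
illustrative only: the helper names are modeled on the kernel's enqueue and
dequeue helpers, and all locking, retry, and deadlock-detection logic is
omitted::

  struct rt_mutex_waiter *waiter, *prev_top;
  struct rt_mutex *lock;
  struct task_struct *owner;
  int max_walk = 100;	/* arbitrary cap for the sketch */

  for (;;) {
	/* the waiter struct for the mutex this task is blocked on */
	waiter = task->pi_blocked_on;
	if (!waiter || --max_walk < 0)
		break;

	lock = waiter->lock;
	prev_top = rt_mutex_top_waiter(lock);

	/* requeue the waiter in the mutex's waiters tree at the
	   task's (possibly boosted or unboosted) priority */
	rt_mutex_dequeue(lock, waiter);
	waiter->prio = task->prio;
	rt_mutex_enqueue(lock, waiter);

	/* if the mutex's top waiter changed, fix up the owner's
	   pi_waiters tree and let the owner adjust its priority */
	owner = rt_mutex_owner(lock);
	if (rt_mutex_top_waiter(lock) != prev_top) {
		rt_mutex_dequeue_pi(owner, prev_top);
		rt_mutex_enqueue_pi(owner, rt_mutex_top_waiter(lock));
		rt_mutex_adjust_prio(owner);
	}

	/* continue the walk with the owner of this mutex */
	task = owner;
  }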
The first thing that is tried is the fast taking of the mutex. This is
done only on architectures with CMPXCHG support: if the owner field of the
mutex is NULL, it can be atomically set to the current task and nothing
else needs to be done.
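A sketch of that fast path (illustrative only; the kernel's actual helper
is a cmpxchg-based macro and the name used here is not the exact one)::

  /* Try to take the mutex by atomically flipping the owner field
     from NULL to the current task.  Returns true on success. */
  static inline bool rt_mutex_fast_trylock(struct rt_mutex *lock)
  {
	return cmpxchg_acquire(&lock->owner, NULL, current) == NULL;
  }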
We then call try_to_take_rt_mutex. This is where the architecture that
does not implement CMPXCHG would always grab the lock (if there is no
contention).
slow path. The first thing that is done here is an atomic setting of
the "Has Waiters" flag of the mutex's owner field.
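The owner field packs the owner task pointer together with that flag in its
low bit, which is free because task_struct pointers are word aligned. A
simplified sketch of the accessor (modeled on the kernel's rt_mutex_owner)::

  #define RT_MUTEX_HAS_WAITERS	1UL

  static inline struct task_struct *rt_mutex_owner(struct rt_mutex *lock)
  {
	unsigned long owner = (unsigned long)READ_ONCE(lock->owner);

	/* mask off the "Has Waiters" bit to recover the task pointer */
	return (struct task_struct *)(owner & ~RT_MUTEX_HAS_WAITERS);
  }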
If the current process becomes the new top waiter, the previous top waiter
must be removed from the owner's pi_waiters tree and the current process
added to that tree. Since the top pi waiter of the owner has changed,
rt_mutex_adjust_prio is called on the owner to see if its priority needs
to change.
The second case is only applicable for tasks that are grabbing a mutex
that can wake up before getting the lock, either due to a signal or
a timeout.
A check is made to see if the mutex has waiters or not. On architectures that
do not have CMPXCHG, this is the location where the owner of the mutex will
determine if a waiter needs to be awoken or not. On architectures that
do have CMPXCHG, that check is done in the fast path, but it is still needed
in the slow path too.
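For completeness, the unlock side's fast path can be sketched the same way
as the taking side (illustrative only; the names are not the kernel's
exact ones)::

  /* If the owner field still holds just the current task (the "Has
     Waiters" bit is clear), release the lock with a single cmpxchg back
     to NULL; otherwise take the slow path to wake the top waiter. */
  static inline void rt_mutex_fast_unlock(struct rt_mutex *lock)
  {
	if (cmpxchg_release(&lock->owner, current, NULL) == current)
		return;
	rt_mutex_slowunlock(lock);
  }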