#
# Copyright (c) 2006 Steven Rostedt
# Licensed under the GNU Free Documentation License, Version 1.2
#

RT-mutex implementation design
------------------------------

This document tries to describe the design of the rtmutex.c implementation.
It doesn't describe the reasons why rtmutex.c exists. For that please see
Documentation/locking/rt-mutex.txt. Although this document does explain
problems that can happen without this code, that is only meant to help in
understanding what the code actually does.

The goal of this document is to help others understand the priority
inheritance (PI) algorithm that is used, as well as the reasons for the
decisions that were made to implement PI in the manner that was done.


Unbounded Priority Inversion
----------------------------

Priority inversion is when a lower priority process executes while a higher
priority process wants to run. This happens for several reasons, and
most of the time it can't be helped. Anytime a high priority process wants
to use a resource that a lower priority process has (a mutex for example),
the high priority process must wait until the lower priority process is done
with the resource. This is a priority inversion. What we want to prevent
is something called unbounded priority inversion. That is when the high
priority process is prevented from running by a lower priority process for
an undetermined amount of time.

The classic example of unbounded priority inversion is where you have three
processes, let's call them processes A, B, and C, where A is the highest
priority process, C is the lowest, and B is in between. A tries to grab a lock
that C owns and must wait, letting C run to release the lock. But in the
meantime, B executes, and since B is of a higher priority than C, it preempts C,
but by doing so, it is in fact preempting A, which is a higher priority process.
Now there's no way of knowing how long A will be sleeping waiting for C
to release the lock, because for all we know, B is a CPU hog and will
never give C a chance to release the lock. This is called unbounded priority
inversion.

Here's a little ASCII art to show the problem.

   grab lock L1 (owned by C)
     |
A ---+
        C preempted by B
          |
C    +----+

B         +-------->
                B now keeps A from running.


Priority Inheritance (PI)
-------------------------

There are several ways to solve this issue, but other ways are out of scope
for this document. Here we only discuss PI.

PI is where a process inherits the priority of another process if the other
process blocks on a lock owned by the current process. To make this easier
to understand, let's use the previous example, with processes A, B, and C again.

This time, when A blocks on the lock owned by C, C would inherit the priority
of A. So now if B becomes runnable, it would not preempt C, since C now has
the high priority of A. As soon as C releases the lock, it loses its
inherited priority, and A then can continue with the resource that C had.
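
Here is the same scenario as the ASCII art above, this time with PI in
effect:

   grab lock L1 (owned by C)
     |
A ---+
        C inherits A's priority; B cannot preempt C
          |
C    +--------+
              release lock L1
              |
A             +-------->
                A continues as soon as C releases the lock;
                C drops back to its own priority.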


Terminology
-----------

Here I explain some terminology that is used in this document to help describe
the design that is used to implement PI.

PI chain  - The PI chain is an ordered series of locks and processes that
            cause processes to inherit priorities from a previous process
            that is blocked on one of its locks. This is described in more
            detail later in this document.

mutex     - In this document, to differentiate them from the spin locks that
            are used in the PI code, the locks that implement PI will from
            now on be called mutexes.

lock      - In this document from now on, I will use the term lock when
            referring to spin locks that are used to protect parts of the PI
            algorithm. These locks disable preemption on UP (when
            CONFIG_PREEMPT is enabled) and on SMP prevent multiple CPUs from
            entering critical sections simultaneously.

spin lock - Same as lock above.

waiter    - A waiter is a struct that is stored on the stack of a blocked
            process. Since the scope of the waiter is within the code for
            a process being blocked on the mutex, it is fine to allocate
            the waiter on the process's stack (local variable). This
            structure holds a pointer to the task, as well as the mutex that
            the task is blocked on. It also has rbtree node structures to
            place the task in the waiters rbtree of a mutex as well as the
            pi_waiters rbtree of a mutex owner task (described below).

            waiter is sometimes used in reference to the task that is
            waiting on a mutex. This is the same as waiter->task.

waiters   - A list of processes that are blocked on a mutex.

top waiter - The highest priority process waiting on a specific mutex.

top pi waiter - The highest priority process waiting on one of the mutexes
                that a specific process owns.

Note: task and process are used interchangeably in this document, mostly to
      differentiate between two processes that are being described together.


PI chain
--------

The PI chain is a list of processes and mutexes that may cause priority
inheritance to take place. Multiple chains may converge, but a chain
would never diverge, since a process can't be blocked on more than one
mutex at a time.

Example:

   Process:  A, B, C, D, E
   Mutexes:  L1, L2, L3, L4

   A owns: L1
           B blocked on L1
           B owns L2
                  C blocked on L2
                  C owns L3
                         D blocked on L3
                         D owns L4
                                E blocked on L4

The chain would be:

   E->L4->D->L3->C->L2->B->L1->A

To show where two chains merge, we could add another process F and
another mutex L5 where B owns L5 and F is blocked on mutex L5.

The chain for F would be:

   F->L5->B->L1->A

Since a process may own more than one mutex, but never be blocked on more than
one, the chains merge.

Here we show both chains:

   E->L4->D->L3->C->L2-+
                       |
                       +->B->L1->A
                       |
                 F->L5-+

For PI to work, the processes at the right end of these chains (or, as we
may also call it, the Top of the chain) must be equal to or higher in
priority than the processes to the left or below in the chain.

Also since a mutex may have more than one process blocked on it, we can
have multiple chains merge at mutexes. If we add another process G that is
blocked on mutex L2:

   G->L2->B->L1->A

And once again, to show how this can grow I will show the merging chains
again.

   E->L4->D->L3->C-+
                   +->L2-+
                   |     |
                 G-+     +->B->L1->A
                         |
                   F->L5-+

If process G has the highest priority in the chain, then all the tasks up
the chain (A and B in this example) must have their priorities increased
to that of G.


Mutex Waiters Tree
------------------

Every mutex keeps track of all the waiters that are blocked on itself. The
mutex has a rbtree to store these waiters by priority. This tree is protected
by a spin lock that is located in the struct of the mutex. This lock is called
wait_lock.
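
To make the terminology and trees described above concrete, here is a
simplified sketch of the two structures involved. These are not the exact
kernel definitions (the real structures live in kernel/locking/rtmutex_common.h
and carry additional fields), but they show where the trees, nodes, and locks
described in this document live:

/* Simplified sketch only; the real definitions have more fields. */
struct rt_mutex {
	raw_spinlock_t		wait_lock;	/* protects the waiters tree */
	struct rb_root		waiters;	/* tasks blocked on this mutex */
	struct task_struct	*owner;		/* owner task (NULL if unowned) */
};

struct rt_mutex_waiter {
	struct rb_node		tree_entry;	/* node in the mutex's waiters tree */
	struct rb_node		pi_tree_entry;	/* node in the owner's pi_waiters tree */
	struct task_struct	*task;		/* the blocked task (waiter->task) */
	struct rt_mutex		*lock;		/* the mutex the task is blocked on */
};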


Task PI Tree
------------

To keep track of the PI chains, each process has its own PI rbtree. This is
a tree of all top waiters of the mutexes that are owned by the process.
Note that this tree only holds the top waiters and not all waiters that are
blocked on mutexes owned by the process.

The top of the task's PI tree is always the highest priority task that
is waiting on a mutex that is owned by the task. So if the task has
inherited a priority, it will always be the priority of the task that is
at the top of this tree.

This tree is stored in the task structure of a process as a rbtree called
pi_waiters. It is protected by a spin lock also in the task structure,
called pi_lock. This lock may also be taken in interrupt context, so when
locking the pi_lock, interrupts must be disabled.


Depth of the PI Chain
---------------------

The maximum depth of the PI chain is not dynamic, and could actually be
defined. But it is very complex to figure out, since it depends on all
the nesting of mutexes. Let's look at an example where we have 3 mutexes,
L1, L2, and L3, and four separate functions func1, func2, func3 and func4.
The following shows a locking order of L1->L2->L3, but may not actually
be directly nested that way.

void func1(void)
{
	mutex_lock(L1);

	/* do anything */

	mutex_unlock(L1);
}

void func2(void)
{
	mutex_lock(L1);
	mutex_lock(L2);

	/* do something */

	mutex_unlock(L2);
	mutex_unlock(L1);
}

void func3(void)
{
	mutex_lock(L2);
	mutex_lock(L3);

	/* do something else */

	mutex_unlock(L3);
	mutex_unlock(L2);
}

void func4(void)
{
	mutex_lock(L3);

	/* do something again */

	mutex_unlock(L3);
}

Now we add 4 processes that each run one of these functions. Processes A,
B, C, and D run functions func1, func2, func3 and func4 respectively, with
D running first and A last. With D being preempted in func4 in the
"do something again" area, we have a locking that follows:

D owns L3
       C blocked on L3
       C owns L2
              B blocked on L2
              B owns L1
                     A blocked on L1

And thus we have the chain A->L1->B->L2->C->L3->D.

This gives us a PI depth of 4 (four processes), but looking at any of the
functions individually, it seems as though they only have at most a locking
depth of two. So, although the locking depth is defined at compile time,
it is still very difficult to find all the possibilities of that depth.

Now since mutexes can be defined by user-land applications, we don't want a
DoS type of application that nests a large number of mutexes to create a large
PI chain, and have the code holding spin locks while looking at a large
amount of data. So to prevent this, the implementation not only implements
a maximum lock depth, but also only holds at most two different locks at a
time, as it walks the PI chain. More about this below.


Mutex owner and flags
---------------------

The mutex structure contains a pointer to the owner of the mutex. If the
mutex is not owned, this owner is set to NULL. Since all architectures
have the task structure on at least a two byte alignment (and if this is
not true, the rtmutex.c code will be broken!), this allows for the least
significant bit to be used as a flag. Bit 0 is used as the "Has Waiters"
flag. It's set whenever there are waiters on a mutex.

See Documentation/locking/rt-mutex.txt for further details.
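
As a sketch of how one field can hold both the owner pointer and the flag,
consider the following. rt_mutex_owner() closely follows the kernel's helper
in kernel/locking/rtmutex_common.h; the second helper is named here purely
for illustration:

#define RT_MUTEX_HAS_WAITERS	1UL

/* Recover the task pointer by masking off the "Has Waiters" bit. */
static inline struct task_struct *rt_mutex_owner(struct rt_mutex *lock)
{
	unsigned long owner = (unsigned long)READ_ONCE(lock->owner);

	return (struct task_struct *)(owner & ~RT_MUTEX_HAS_WAITERS);
}

/* Illustrative helper: test bit 0 of the owner field. */
static inline bool rt_mutex_owner_has_waiters(struct rt_mutex *lock)
{
	return (unsigned long)READ_ONCE(lock->owner) & RT_MUTEX_HAS_WAITERS;
}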


cmpxchg Tricks
--------------

Some architectures implement an atomic cmpxchg (Compare and Exchange). This
is used (when applicable) to keep the fast path of grabbing and releasing
mutexes short.

cmpxchg is basically the following function performed atomically:

unsigned long _cmpxchg(unsigned long *A, unsigned long *B, unsigned long *C)
{
	unsigned long T = *A;

	if (*A == *B)
		*A = *C;

	return T;
}
#define cmpxchg(a,b,c) _cmpxchg(&a,&b,&c)

This is really nice to have, since it allows you to only update a variable
if the variable is what you expect it to be. You know it succeeded if
the return value (the old value of A) is equal to B.

The macro rt_mutex_cmpxchg is used to try to lock and unlock mutexes. If
the architecture does not support CMPXCHG, then this macro is simply set
to fail every time. But if CMPXCHG is supported, it helps enormously to
keep the fast path short.

The use of rt_mutex_cmpxchg with the flags in the owner field helps optimize
the system for architectures that support it. This will also be explained
later in this document.


Priority adjustments
--------------------

The implementation of the PI code in rtmutex.c has several places where a
process must adjust its priority. With the help of the pi_waiters tree of
a process, it is rather easy to know what needs to be adjusted.

The functions implementing the task adjustments are rt_mutex_adjust_prio
and rt_mutex_setprio. rt_mutex_setprio is only used in rt_mutex_adjust_prio.

rt_mutex_adjust_prio examines the priority of the task, and the highest
priority process that is waiting on any of the mutexes owned by the task.
Since the pi_waiters tree of a task holds, ordered by priority, all the top
waiters of the mutexes that the task owns, we simply need to compare the top
pi waiter to the task's own normal/deadline priority and take the higher one.
Then rt_mutex_setprio is called to adjust the priority of the task to the
new priority. Note that rt_mutex_setprio is defined in kernel/sched/core.c
to implement the actual change in priority.

(Note: For the "prio" field in task_struct, the lower the number, the
 higher the priority. A "prio" of 5 is of higher priority than a
 "prio" of 10.)

It is interesting to note that rt_mutex_adjust_prio can either increase
or decrease the priority of the task. In the case that a higher priority
process has just blocked on a mutex owned by the task, rt_mutex_adjust_prio
would increase/boost the task's priority. But if a higher priority task
were for some reason to leave the mutex (timeout or signal), this same function
would decrease/unboost the priority of the task. That is because the pi_waiters
tree always contains the highest priority task that is waiting on a mutex owned
by the task, so we only need to compare the priority of that top pi waiter
to the normal priority of the given task.
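
A conceptual sketch of the comparison described above may help. This is not
the kernel's actual code (the real rt_mutex_adjust_prio also handles deadline
tasks and runs under the task's pi_lock), but it shows the logic; the
task_has_pi_waiters()/task_top_pi_waiter() helper names follow rtmutex.c:

static void sketch_rt_mutex_adjust_prio(struct task_struct *task)
{
	int prio = task->normal_prio;	/* the task's non-boosted priority */

	/* A lower "prio" number means a higher priority. */
	if (task_has_pi_waiters(task) &&
	    task_top_pi_waiter(task)->task->prio < prio)
		prio = task_top_pi_waiter(task)->task->prio;

	if (task->prio != prio)
		rt_mutex_setprio(task, prio);	/* defined in kernel/sched/core.c */
}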


High level overview of the PI chain walk
----------------------------------------

The PI chain walk is implemented by the function rt_mutex_adjust_prio_chain.

The implementation has gone through several iterations, and has ended up
with what we believe is the best. It walks the PI chain by only grabbing
at most two locks at a time, and is very efficient.

rt_mutex_adjust_prio_chain can be used either to boost or to lower process
priorities.

rt_mutex_adjust_prio_chain is called with:

  - a task to be checked for PI (de)boosting (the owner of a mutex that a
    process is blocking on),
  - a flag to check for deadlocking,
  - the mutex that the task owns,
  - a pointer to the process's waiter struct that is blocked on the mutex
    (this parameter may be NULL for deboosting),
  - a pointer to the mutex on which the task is blocked, and
  - top_task, the top waiter of the mutex.

For this explanation, I will not mention deadlock detection. This explanation
will try to stay at a high level.

When this function is called, there are no locks held. That also means
that the state of the owner and lock can change while this function is
executing.

Before this function is called, the task has already had rt_mutex_adjust_prio
performed on it. This means that the task is set to the priority that it
should be at, but the rbtree nodes of the task's waiter have not been updated
with the new priorities, and this task may not be in the proper location
in the pi_waiters and waiters trees that the task is blocked on. This function
solves all that.

The main operation of this function is summarized by Thomas Gleixner in
rtmutex.c. See the 'Chain walk basics and protection scope' comment for further
details.


Taking of a mutex (The walk through)
------------------------------------

OK, now let's take a look at the detailed walk through of what happens when
taking a mutex.

The first thing that is tried is the fast taking of the mutex. This is
done when we have CMPXCHG enabled (otherwise the fast taking automatically
fails). Only when the owner field of the mutex is NULL can the lock be
taken with the CMPXCHG, and nothing else needs to be done.

If there is contention on the lock, we go about the slow path
(rt_mutex_slowlock).
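
As a sketch, here is what the fast path take amounts to, written with the
kernel's cmpxchg(ptr, old, new) form rather than the toy macro above (in the
real code this is wrapped by the rt_mutex_cmpxchg helpers):

/*
 * Sketch of the fast path take. It can only succeed while the owner
 * field is completely NULL: no owner and no "Has Waiters" bit set.
 */
static inline bool fast_take_rt_mutex(struct rt_mutex *lock)
{
	return cmpxchg(&lock->owner, NULL, current) == NULL;
}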

The slow path function is where the task's waiter structure is created on
the stack. This is because the waiter structure is only needed for the
scope of this function. The waiter structure holds the nodes to store
the task on the waiters tree of the mutex, and if need be, the pi_waiters
tree of the owner.

The wait_lock of the mutex is taken since the slow path of unlocking the
mutex also takes this lock.

We then call try_to_take_rt_mutex. This is where an architecture that
does not implement CMPXCHG would always grab the lock (if there's no
contention).

try_to_take_rt_mutex is used every time the task tries to grab a mutex in the
slow path. The first thing that is done here is an atomic setting of
the "Has Waiters" flag of the mutex's owner field. By setting this flag
now, the current owner of the mutex being contended for can't release the mutex
without going into the slow unlock path, and it would then need to grab the
wait_lock, which this code currently holds. So setting the "Has Waiters" flag
forces the current owner to synchronize with this code.

The lock is taken if the following are true:

  1) The lock has no owner
  2) The current task is the highest priority against all other
     waiters of the lock

If the task succeeds in acquiring the lock, then the task is set as the
owner of the lock, and if the lock still has waiters, the top_waiter
(highest priority task waiting on the lock) is added to this task's
pi_waiters tree.

If the lock is not taken by try_to_take_rt_mutex(), then the
task_blocks_on_rt_mutex() function is called. This will add the task to
the lock's waiters tree and propagate the pi chain of the lock as well
as the lock's owner's pi_waiters tree. This is described in the next
section.


Task blocks on mutex
--------------------

The accounting of a mutex and process is done with the waiter structure of
the process. The "task" field is set to the process, and the "lock" field
to the mutex. The rbtree nodes of the waiter are initialized to the process's
current priority.

Since the wait_lock was taken at the entry of the slow lock, we can safely
add the waiter to the lock's waiters tree. If the current process is the
highest priority process currently waiting on this mutex, then we remove the
previous top waiter process (if it exists) from the pi_waiters of the owner,
and add the current process to that tree. Since the pi_waiters of the owner
has changed, we call rt_mutex_adjust_prio on the owner to see if the owner
should adjust its priority accordingly.

If the owner is also blocked on a lock, and had its pi_waiters changed
(or deadlock checking is on), we unlock the wait_lock of the mutex and go ahead
and run rt_mutex_adjust_prio_chain on the owner, as described earlier.

Now all locks are released, and if the current process is still blocked on a
mutex (waiter "task" field is not NULL), then we go to sleep (call schedule).


Waking up in the loop
---------------------

The task can then wake up for a couple of reasons:

  1) The previous lock owner released the lock, and the task now is top_waiter
  2) We received a signal or a timeout

In both cases, the task will try again to acquire the lock. If it
does, then it will take itself off the waiters tree and set itself back
to the TASK_RUNNING state.

In the first case, if the lock was acquired by another task before this task
could get the lock, then it will go back to sleep and wait to be woken again.

The second case is only applicable for tasks that are grabbing a mutex
that can wake up before getting the lock, either due to a signal or
a timeout (i.e. rt_mutex_timed_futex_lock()). When woken, it will try to
take the lock again; if it succeeds, then the task will return with the
lock held, otherwise it will return with -EINTR if the task was woken
by a signal, or -ETIMEDOUT if it timed out.
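
Putting the last two sections together, the wait loop looks roughly like the
sketch below. It is loosely modeled on the slow lock loop, but the wait_lock
handling, the task state setting, and the timeout setup are omitted here for
brevity:

static int sketch_wait_loop(struct rt_mutex *lock,
			    struct rt_mutex_waiter *waiter,
			    struct hrtimer_sleeper *timeout)
{
	for (;;) {
		/* Try to acquire the lock each time we run. */
		if (try_to_take_rt_mutex(lock, current, waiter))
			return 0;		/* lock acquired */

		/* A signal or an expired timeout ends the wait. */
		if (signal_pending(current))
			return -EINTR;
		if (timeout && !timeout->task)
			return -ETIMEDOUT;

		schedule();			/* sleep until woken */
	}
}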


Unlocking the Mutex
-------------------

The unlocking of a mutex also has a fast path for those architectures with
CMPXCHG. Since the taking of a mutex on contention always sets the
"Has Waiters" flag of the mutex's owner, we use this to know if we need to
take the slow path when unlocking the mutex. If the mutex doesn't have any
waiters, the owner field of the mutex would equal the current process and
the mutex can be unlocked by just replacing the owner field with NULL.

If the owner field has the "Has Waiters" bit set (or CMPXCHG is not available),
the slow unlock path is taken.

The first thing done in the slow unlock path is to take the wait_lock of the
mutex. This synchronizes the locking and unlocking of the mutex.

A check is made to see if the mutex has waiters or not. On architectures that
do not have CMPXCHG, this is the location where the owner of the mutex will
determine if a waiter needs to be awoken or not. On architectures that
do have CMPXCHG, that check is done in the fast path, but it is still needed
in the slow path too. If a waiter of a mutex woke up because of a signal
or timeout between the time the owner failed the fast path CMPXCHG check and
the grabbing of the wait_lock, the mutex may not have any waiters, so the
owner still needs to make this check. If there are no waiters then the mutex
owner field is set to NULL, the wait_lock is released and nothing more is
needed.

If there are waiters, then we need to wake one up.

In the wake up code, the pi_lock of the current owner is taken. The top
waiter of the lock is found and removed from the waiters tree of the mutex
as well as the pi_waiters tree of the current owner. The "Has Waiters" bit is
marked to prevent lower priority tasks from stealing the lock.

Finally we release the pi_lock of the current owner and wake up the top
waiter.


Contact
-------

For updates on this document, please email Steven Rostedt <rostedt@goodmis.org>


Credits
-------

Author:  Steven Rostedt <rostedt@goodmis.org>
Updated: Alex Shi <alex.shi@linaro.org> - 7/6/2017

Original Reviewers:  Ingo Molnar, Thomas Gleixner, Thomas Duetsch, and
                     Randy Dunlap
Update (7/6/2017) Reviewers: Steven Rostedt and Sebastian Siewior

Updates
-------

This document was originally written for 2.6.17-rc3-mm1 and was updated
for 4.12.