Lines Matching +full:one +full:- +full:timer +full:- +full:only

74 Race Conditions and Critical Regions
75 ------------------------------------
80 Linux started running on SMP machines, they became one of the major
83 Preemption can have the same effect, even if there is only one CPU: by
84 preempting one task during the critical region, we have exactly the same
89 use locks to make sure that only one instance can enter the critical
97 If I could give you one piece of advice on locking: **keep it simple**.
101 Two Main Types of Kernel Locks: Spinlocks and Mutexes
102 -----------------------------------------------------
106 single-holder lock: if you can't get the spinlock, you keep trying
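
To make the difference concrete, here is a minimal sketch of how each lock
type is declared and used (the names are illustrative, not from the example
code later in this document)::

    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(fast_lock);   /* busy-waits, never sleeps */
    static DEFINE_MUTEX(slow_mutex);     /* may sleep while waiting */

    static unsigned long fast_count;     /* protected by fast_lock */
    static unsigned long slow_count;     /* protected by slow_mutex */

    void bump_fast_count(void)
    {
            spin_lock(&fast_lock);
            fast_count++;
            spin_unlock(&fast_lock);
    }

    void bump_slow_count(void)
    {
            mutex_lock(&slow_mutex);     /* may block: user context only */
            slow_count++;
            mutex_unlock(&slow_mutex);
    }
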
121 Locks and Uniprocessor Kernels
122 ------------------------------
126 design decision: when no-one else can run at the same time, there is no
141 Locking Only In User Context
142 ----------------------------
144 If you have a data structure which is only ever accessed from user
154 nf_register_sockopt(). Registration and de-registration
155 are only done on module load and unload (and boot time, where there is
156 no concurrency), and the list of registrations is only consulted for an
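
The shape of that pattern, as a minimal sketch (``my_registration``,
``reg_list`` and ``reg_mutex`` are illustrative names, not the actual
netfilter code)::

    #include <linux/list.h>
    #include <linux/mutex.h>

    struct my_registration {
            struct list_head list;
            int id;
    };

    static LIST_HEAD(reg_list);           /* only touched from user context */
    static DEFINE_MUTEX(reg_mutex);

    void my_register(struct my_registration *reg)
    {
            mutex_lock(&reg_mutex);       /* may sleep: fine in user context */
            list_add(&reg->list, &reg_list);
            mutex_unlock(&reg_mutex);
    }

    void my_unregister(struct my_registration *reg)
    {
            mutex_lock(&reg_mutex);
            list_del(&reg->list);
            mutex_unlock(&reg_mutex);
    }
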
161 Locking Between User Context and Softirqs
162 -----------------------------------------
183 Locking Between User Context and Tasklets
184 -----------------------------------------
189 Locking Between User Context and Timers
190 ---------------------------------------
196 Locking Between Tasklets/Timers
197 -------------------------------
199 Sometimes a tasklet or timer might want to share data with another
200 tasklet or timer.
202 The Same Tasklet/Timer
212 If another tasklet/timer wants to share data with your tasklet or timer
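
As a sketch (hypothetical names): a timer handler updating data that another
timer also touches only needs plain spin_lock()/spin_unlock(), because the
only way the data can be reached concurrently is from another CPU::

    #include <linux/jiffies.h>
    #include <linux/spinlock.h>
    #include <linux/timer.h>

    static DEFINE_SPINLOCK(shared_lock);
    static unsigned long shared_events;    /* written by more than one timer */

    static struct timer_list my_timer;

    static void my_timer_fn(struct timer_list *t)
    {
            /* Another timer's handler may be running on another CPU
             * right now, so take the lock around the shared data. */
            spin_lock(&shared_lock);
            shared_events++;
            spin_unlock(&shared_lock);

            mod_timer(&my_timer, jiffies + HZ);     /* re-arm ourselves */
    }

    void my_timer_start(void)
    {
            timer_setup(&my_timer, my_timer_fn, 0);
            mod_timer(&my_timer, jiffies + HZ);
    }
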
218 Locking Between Softirqs
219 ------------------------
221 Often a softirq might want to share data with itself or a tasklet/timer.
226 The same softirq can run on the other CPUs: you can use a per-CPU array
227 (see `Per-CPU Data`_) for better performance. If you're
238 spin_unlock() for shared data, whether it be a timer,
249 Locking Between Hard IRQ and Softirqs/Tasklets
250 ----------------------------------------------
262 spin_lock(), which is slightly faster. The only exception
283 Locking Between Two Hard IRQ Handlers
284 -------------------------------------
288 architecture-specific whether all interrupts are disabled inside irq
296 - If you are in a process context (any syscall) and want to lock other
300 - Otherwise (== data can be touched in an interrupt), use
304 - Avoid holding spinlock for more than 5 lines of code and across any
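
For the "data can be touched in an interrupt" case above, a minimal sketch
(illustrative names) looks like this::

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(queue_lock);
    static unsigned int queue_len;       /* touched from IRQ and user context */

    /* User context: an interrupt could fire here, so disable interrupts
     * while holding the lock and restore their previous state afterwards. */
    void queue_reset(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&queue_lock, flags);
            queue_len = 0;
            spin_unlock_irqrestore(&queue_lock, flags);
    }

    /* Hard IRQ handler: nothing else that takes this lock can interrupt
     * us here, so the plain variant is enough. */
    static irqreturn_t my_irq_handler(int irq, void *dev_id)
    {
            spin_lock(&queue_lock);
            queue_len++;
            spin_unlock(&queue_lock);
            return IRQ_HANDLED;
    }
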
307 Table of Minimum Requirements
308 -----------------------------
311 various contexts. In some cases, the same context can only be running on
312 one CPU at a time, so no locking is required for that context (eg. a
313 particular thread can only run on one CPU at a time, but if it needs
321 .        IRQ Handler A  IRQ Handler B  Softirq A  Softirq B  Tasklet A  Tasklet B  Timer A  Timer B  …
329 Timer A  SLI            SLI            SL         SL         SL         SL         None
330 Timer B  SLI            SLI            SL         SL         SL         SL         SL       None
337 +--------+----------------------------+
338 | SLIS   | spin_lock_irqsave          |
339 +--------+----------------------------+
340 | SLI    | spin_lock_irq              |
341 +--------+----------------------------+
342 | SL     | spin_lock                  |
343 +--------+----------------------------+
344 | SLBH   | spin_lock_bh               |
345 +--------+----------------------------+
346 | MLI    | mutex_lock_interruptible   |
347 +--------+----------------------------+
354 There are functions that try to acquire a lock only once and immediately
360 spin_trylock() does not spin but returns non-zero if it
367 non-zero if it could lock the mutex on the first try or 0 if not. This
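
As a sketch (illustrative names), the calling convention for both looks like
this; the caller always needs a fallback for the "lock not taken" case::

    #include <linux/errno.h>
    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);
    static DEFINE_MUTEX(config_mutex);
    static int config_value;

    /* Opportunistic update: if someone else holds the lock, skip it. */
    void stats_try_bump(unsigned long *counter)
    {
            if (spin_trylock(&stats_lock)) {
                    (*counter)++;
                    spin_unlock(&stats_lock);
            }
            /* else: drop this update rather than spin */
    }

    /* Fail with -EBUSY instead of sleeping when the mutex is contended. */
    int config_try_set(int value)
    {
            if (!mutex_trylock(&config_mutex))
                    return -EBUSY;
            config_value = value;
            mutex_unlock(&config_mutex);
            return 0;
    }
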
376 when it gets full, throws out the least used one.
378 All In User Context
379 -------------------
411 if (i->id == id) {
412 i->popularity++;
422 list_del(&obj->list);
424 cache_num--;
430 list_add(&obj->list, &cache);
434 if (!outcast || i->popularity < outcast->popularity)
446 return -ENOMEM;
448 strscpy(obj->name, name, sizeof(obj->name));
449 obj->id = id;
450 obj->popularity = 0;
468 int ret = -ENOENT;
474 strcpy(name, obj->name);
488 grabbing the lock. This is safe, as no-one else can access it until we
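
Condensed to its core, the user-context-only version of the example is just
one mutex around every access to the cache (a sketch: fields, sizes and the
eviction logic are abbreviated)::

    #include <linux/errno.h>
    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    struct object {
            struct list_head list;
            int id;
            char name[32];
    };

    static LIST_HEAD(cache);
    static DEFINE_MUTEX(cache_lock);      /* protects the whole cache */

    int cache_find(int id, char *name)
    {
            struct object *i;
            int ret = -ENOENT;

            mutex_lock(&cache_lock);
            list_for_each_entry(i, &cache, list) {
                    if (i->id == id) {
                            strcpy(name, i->name);
                            ret = 0;
                            break;
                    }
            }
            mutex_unlock(&cache_lock);
            return ret;
    }

    int cache_add(int id, const char *name)
    {
            struct object *obj = kmalloc(sizeof(*obj), GFP_KERNEL);

            if (!obj)
                    return -ENOMEM;
            obj->id = id;
            strscpy(obj->name, name, sizeof(obj->name));

            mutex_lock(&cache_lock);      /* object fully set up before publishing */
            list_add(&obj->list, &cache);
            mutex_unlock(&cache_lock);
            return 0;
    }
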
491 Accessing From Interrupt Context
492 --------------------------------
496 example would be a timer which deletes an object from the cache.
498 The change is shown below, in standard patch format: the ``-`` are lines
503 --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
504 +++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100
505 @@ -12,7 +12,7 @@
509 -static DEFINE_MUTEX(cache_lock);
514 @@ -55,6 +55,7 @@
521 return -ENOMEM;
522 @@ -63,30 +64,33 @@
523 obj->id = id;
524 obj->popularity = 0;
526 - mutex_lock(&cache_lock);
529 - mutex_unlock(&cache_lock);
536 - mutex_lock(&cache_lock);
541 - mutex_unlock(&cache_lock);
548 int ret = -ENOENT;
551 - mutex_lock(&cache_lock);
556 strcpy(name, obj->name);
558 - mutex_unlock(&cache_lock);
569 with the ``GFP_KERNEL`` flag, which is only legal in user context. I
570 have assumed that cache_add() is still only called in
574 Exposing Objects Outside This File
575 ----------------------------------
583 we'd need to make this non-static so the rest of the code can use it.
584 This makes locking trickier, as it is no longer all in one place.
588 valid. Unfortunately, this is only guaranteed while you hold the lock,
590 worse, add another object, re-using the same address.
592 As there is only one lock, you can't hold it forever: no-one else would
602 --- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100
603 +++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100
604 @@ -7,6 +7,7 @@
612 @@ -17,6 +18,35 @@
618 + if (--obj->refcnt == 0)
624 + obj->refcnt++;
648 @@ -35,6 +65,7 @@
651 list_del(&obj->list);
653 cache_num--;
656 @@ -63,6 +94,7 @@
657 strscpy(obj->name, name, sizeof(obj->name));
658 obj->id = id;
659 obj->popularity = 0;
660 + obj->refcnt = 1; /* The cache holds a reference */
664 @@ -79,18 +111,15 @@
668 -int cache_find(int id, char *name)
672 - int ret = -ENOENT;
677 - if (obj) {
678 - ret = 0;
679 - strcpy(name, obj->name);
680 - }
684 - return ret;
706 although for anything non-trivial using spinlocks is clearer. The
713 --- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100
714 +++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100
715 @@ -7,7 +7,7 @@
719 - unsigned int refcnt;
724 @@ -18,33 +18,15 @@
728 -static void __object_put(struct object *obj)
729 -{
730 - if (--obj->refcnt == 0)
731 - kfree(obj);
732 -}
733 -
734 -static void __object_get(struct object *obj)
735 -{
736 - obj->refcnt++;
737 -}
738 -
741 - unsigned long flags;
742 -
743 - spin_lock_irqsave(&cache_lock, flags);
744 - __object_put(obj);
745 - spin_unlock_irqrestore(&cache_lock, flags);
746 + if (atomic_dec_and_test(&obj->refcnt))
752 - unsigned long flags;
753 -
754 - spin_lock_irqsave(&cache_lock, flags);
755 - __object_get(obj);
756 - spin_unlock_irqrestore(&cache_lock, flags);
757 + atomic_inc(&obj->refcnt);
761 @@ -65,7 +47,7 @@
764 list_del(&obj->list);
765 - __object_put(obj);
767 cache_num--;
770 @@ -94,7 +76,7 @@
771 strscpy(obj->name, name, sizeof(obj->name));
772 obj->id = id;
773 obj->popularity = 0;
774 - obj->refcnt = 1; /* The cache holds a reference */
775 + atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
779 @@ -119,7 +101,7 @@
783 - __object_get(obj);
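
The resulting get/put pair, reduced to a sketch, is all that is left of the
reference-count locking::

    #include <linux/atomic.h>
    #include <linux/slab.h>

    struct object {
            atomic_t refcnt;
            /* ... the rest of the object ... */
    };

    static void object_get(struct object *obj)
    {
            atomic_inc(&obj->refcnt);               /* no lock needed */
    }

    static void object_put(struct object *obj)
    {
            if (atomic_dec_and_test(&obj->refcnt))  /* true only on the final put */
                    kfree(obj);
    }

Modern code would normally use refcount_t (refcount_inc() and
refcount_dec_and_test()) rather than a bare atomic_t, which adds overflow
checking, but the locking logic is the same.
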
789 Protecting The Objects Themselves
790 ---------------------------------
796 - You can make ``cache_lock`` non-static, and tell people to grab that
799 - You can provide a cache_obj_rename() which grabs this
803 - You can make the ``cache_lock`` protect only the cache itself, and
806 Theoretically, you can make the locks as fine-grained as one lock for
810 - One lock which protects the infrastructure (the ``cache`` list in
813 - One lock which protects the infrastructure (including the list
814 pointers inside the objects), and one lock inside the object which
817 - Multiple locks to protect the infrastructure (eg. one lock per hash
818 chain), possibly with a separate per-object lock.
820 Here is the "lock-per-object" implementation:
824 --- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100
825 +++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
826 @@ -6,11 +6,17 @@
841 - int popularity;
845 @@ -77,6 +84,7 @@
846 obj->id = id;
847 obj->popularity = 0;
848 atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
849 + spin_lock_init(&obj->lock);
855 ``cache_lock`` rather than the per-object lock: this is because it (like
863 id: the object lock is only used by a caller who wants to read or write
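
Reduced to a sketch, the object's layout now documents which lock covers
which field::

    #include <linux/atomic.h>
    #include <linux/list.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(cache_lock);     /* protects the cache infrastructure */

    struct object {
            struct list_head list;          /* protected by cache_lock */
            int popularity;                 /* protected by cache_lock */
            atomic_t refcnt;                /* atomic: no lock needed */

            spinlock_t lock;                /* protects name below */
            char name[32];

            int id;                         /* never changes once created */
    };
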
874 Deadlock: Simple and Advanced
875 -----------------------------
881 stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.
895 timer or compiling with ``DEBUG_SPINLOCK`` set
899 A more complex problem is the so-called 'deadly embrace', involving two
902 sometimes want to alter an object from one place in the hash to another:
904 hash chain, and delete the object from the old one, and insert it in the
905 new one.
913 +-----------------------+-----------------------+
914 | CPU 1                 | CPU 2                 |
915 +-----------------------+-----------------------+
916 | Grab lock A -> OK     | Grab lock B -> OK     |
917 +-----------------------+-----------------------+
918 | Grab lock B -> spin   | Grab lock A -> spin   |
919 +-----------------------+-----------------------+
926 Preventing Deadlock
927 -------------------
936 are never held around calls to non-trivial functions outside the same
939 one. People using your code don't even need to know you are using a
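
As a sketch of the textbook rule (the hash-chain names are hypothetical):
whenever both chain locks are needed, take them in one agreed order, no
matter which direction the object is moving::

    #include <linux/list.h>
    #include <linux/lockdep.h>
    #include <linux/minmax.h>
    #include <linux/spinlock.h>

    #define NR_CHAINS 16                     /* hypothetical */

    /* Both arrays initialized elsewhere with INIT_LIST_HEAD()/spin_lock_init(). */
    static struct list_head chain[NR_CHAINS];
    static spinlock_t chain_lock[NR_CHAINS]; /* one lock per hash chain */

    struct object {
            struct list_head list;
    };

    /* Always grab the lower-numbered chain's lock first, whichever way
     * the object is moving: agreeing on this order everywhere is what
     * prevents the deadly embrace. */
    void move_object(struct object *obj, unsigned int from, unsigned int to)
    {
            unsigned int first = min(from, to);
            unsigned int second = max(from, to);

            spin_lock(&chain_lock[first]);
            if (second != first)
                    /* Tell lockdep this same-class nesting is intentional. */
                    spin_lock_nested(&chain_lock[second], SINGLE_DEPTH_NESTING);

            list_del(&obj->list);
            list_add(&obj->list, &chain[to]);

            if (second != first)
                    spin_unlock(&chain_lock[second]);
            spin_unlock(&chain_lock[first]);
    }
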
954 Racing Timers: A Kernel Pastime
955 -------------------------------
958 collection of objects (list, hash, etc) where each object has a timer
969 struct foo *next = list->next;
970 del_timer(&list->timer);
978 Sooner or later, this will crash on SMP, because a timer can have just
979 gone off before the spin_lock_bh(), and it will only get
984 del_timer(): if it returns 1, the timer has been deleted.
992 struct foo *next = list->next;
993 if (!del_timer(&list->timer)) {
994 /* Give timer a chance to delete this */
1006 calling add_timer() at the end of their timer function).
1008 use del_timer_sync() (``include/linux/timer.h``) to
1009 handle this case. It returns the number of times the timer had to be
1025 grab the lock only when we are ready to insert it in the list.
1029 the last one to grab the lock (ie. is the lock cache-hot for this CPU):
1032 increment takes about 58ns, a lock which is cache-hot on this CPU takes
1038 by splitting locks into parts (such as in our final per-object-lock
1046 Read/Write Lock Variants
1047 ------------------------
1051 users into two classes: the readers and the writers. If you are only
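
A minimal sketch (illustrative names) of the spinning flavour, rwlock_t;
``struct rw_semaphore`` with down_read()/up_read() and down_write()/up_write()
is the sleeping equivalent::

    #include <linux/spinlock.h>

    static DEFINE_RWLOCK(config_rwlock);
    static int config_value;

    /* Any number of readers may hold the lock at the same time. */
    int config_read(void)
    {
            int val;

            read_lock(&config_rwlock);
            val = config_value;
            read_unlock(&config_rwlock);
            return val;
    }

    /* A writer gets exclusive access. */
    void config_write(int val)
    {
            write_lock(&config_rwlock);
            config_value = val;
            write_unlock(&config_rwlock);
    }
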
1061 Avoiding Locks: Read Copy Update
1062 --------------------------------
1075 new->next = list->next;
1077 list->next = new;
1099 list->next = old->next;
1108 don't realize that the pre-fetched contents are wrong when the ``next``
1122 destroy the object once all pre-existing readers are finished.
1124 until all pre-existing readers are finished.
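
Condensed into a sketch (hypothetical names): readers only mark their
read-side section, while writers still serialize against each other with an
ordinary lock and defer the free until every pre-existing reader is done::

    #include <linux/list.h>
    #include <linux/rculist.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct item {
            struct list_head list;
            int id;
            struct rcu_head rcu;
    };

    static LIST_HEAD(items);
    static DEFINE_SPINLOCK(items_lock);     /* serializes writers only */

    /* Reader: no lock taken, just a marked RCU read-side section. */
    int item_exists(int id)
    {
            struct item *i;
            int found = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(i, &items, list) {
                    if (i->id == id) {
                            found = 1;
                            break;
                    }
            }
            rcu_read_unlock();
            return found;
    }

    /* Writer: unlink the item, then free it only after all readers that
     * might still see it have left their read-side sections. */
    void item_del(struct item *i)
    {
            spin_lock(&items_lock);
            list_del_rcu(&i->list);
            spin_unlock(&items_lock);
            kfree_rcu(i, rcu);
    }
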
1140 --- cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
1141 +++ cache.c.rcupdate 2003-12-11 17:55:14.000000000 +1100
1142 @@ -1,15 +1,18 @@
1152 - /* These two protected by cache_lock. */
1162 @@ -40,7 +43,7 @@
1166 - list_for_each_entry(i, &cache, list) {
1168 if (i->id == id) {
1169 i->popularity++;
1171 @@ -49,19 +52,25 @@
1185 - list_del(&obj->list);
1186 - object_put(obj);
1187 + list_del_rcu(&obj->list);
1188 cache_num--;
1189 + call_rcu(&obj->rcu, cache_delete_rcu);
1195 - list_add(&obj->list, &cache);
1196 + list_add_rcu(&obj->list, &cache);
1200 @@ -104,12 +114,11 @@
1204 - unsigned long flags;
1206 - spin_lock_irqsave(&cache_lock, flags);
1211 - spin_unlock_irqrestore(&cache_lock, flags);
1217 __cache_find(), and now it doesn't hold a lock. One
1229 hold the lock, no one can delete the object, so you don't need to get
1236 __cache_find() by making it non-static, and such
1243 Per-CPU Data
1244 ------------
1257 Of particular use for simple per-cpu counters is the ``local_t`` type,
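
A counter built that way, as a sketch (illustrative name); get_cpu_var()
disables preemption, so the update cannot migrate to another CPU halfway
through::

    #include <asm/local.h>
    #include <linux/cpumask.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(local_t, nr_hits);

    /* Fast path: bump this CPU's private copy; no lock, no cache-line
     * bouncing between CPUs. */
    void count_hit(void)
    {
            local_inc(&get_cpu_var(nr_hits));
            put_cpu_var(nr_hits);
    }

    /* Slow path: a total means visiting every CPU's copy. */
    unsigned long count_total(void)
    {
            unsigned long sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += local_read(per_cpu_ptr(&nr_hits, cpu));
            return sum;
    }
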
1266 Data Which Mostly Used By An IRQ Handler
1267 ----------------------------------------
1287 call, so it only makes sense if this type of access happens extremely
1298 Some Functions Which Sleep
1299 --------------------------
1307 - Accesses to userspace:
1309 - copy_from_user()
1311 - copy_to_user()
1313 - get_user()
1315 - put_user()
1317 - kmalloc(GFP_KERNEL)
1319 - mutex_lock_interruptible() and
1328 Some Functions Which Don't Sleep
1329 --------------------------------
1334 - printk()
1336 - kfree()
1338 - add_timer() and del_timer()
1343 .. kernel-doc:: include/linux/mutex.h
1346 .. kernel-doc:: kernel/locking/mutex.c
1352 .. kernel-doc:: kernel/futex/core.c
1355 .. kernel-doc:: kernel/futex/futex.h
1358 .. kernel-doc:: kernel/futex/pi.c
1361 .. kernel-doc:: kernel/futex/requeue.c
1364 .. kernel-doc:: kernel/futex/waitwake.c
1370 - ``Documentation/locking/spinlocks.rst``: Linus Torvalds' spinlocking
1373 - Unix Systems for Modern Architectures: Symmetric Multiprocessing and
1408 deprecated, and will eventually be replaced by tasklets. Only one bottom
1420 Symmetric Multi-Processor: kernels compiled for multiple-CPU machines.
1428 Strictly speaking a softirq is one of up to 32 enumerated software
1433 A dynamically-registrable software interrupt, which is guaranteed to
1434 only run on one CPU at a time.
1436 timer
1437 A dynamically-registrable software interrupt, which is run at (or close
1442 Uni-Processor: Non-SMP. (``CONFIG_SMP=n``).