
Searched full:readers (Results 1 – 25 of 306) sorted by relevance


/Linux-v5.15/kernel/locking/
rwbase_rt.c
8 * 2) Remove the reader BIAS to force readers into the slow path
9 * 3) Wait until all readers have left the critical section
14 * 2) Set the reader BIAS, so readers can use the fast path again
15 * 3) Unlock rtmutex, to release blocked readers
34 * active readers. A blocked writer would force all newly incoming readers
45 * The lock/unlock of readers can run in fast paths: lock and unlock are only
58 * Increment reader count, if sem->readers < 0, i.e. READER_BIAS is in rwbase_read_trylock()
61 for (r = atomic_read(&rwb->readers); r < 0;) { in rwbase_read_trylock()
63 if (likely(atomic_try_cmpxchg(&rwb->readers, &r, r + 1))) in rwbase_read_trylock()
77 * Allow readers, as long as the writer has not completely in __rwbase_read_lock()
[all …]
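The rwbase_rt hits above describe the reader fast path: while the bias is in place the counter is negative, and a reader acquires the lock by atomically incrementing it. A minimal user-space sketch of that logic, assuming READER_BIAS can be modeled as INT_MIN and using C11 atomics in place of the kernel's atomic_t helpers (this is an illustration, not the kernel code):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <limits.h>

/* READER_BIAS modeled as INT_MIN: while the bias is present the counter
 * is negative and readers may take the fast path; a writer removes the
 * bias, making the counter non-negative and forcing readers into the
 * slow path. */
#define READER_BIAS INT_MIN

static atomic_int readers = ATOMIC_VAR_INIT(READER_BIAS);

static bool read_trylock_fastpath(void)
{
    int r = atomic_load(&readers);

    /* Increment the reader count only while the bias is in place. */
    while (r < 0) {
        if (atomic_compare_exchange_weak(&readers, &r, r + 1))
            return true;   /* fast-path lock acquired */
        /* r was reloaded by the failed CAS; retry. */
    }
    return false;          /* bias removed: a writer is active */
}

static void read_unlock_fastpath(void)
{
    atomic_fetch_sub(&readers, 1);
}
```

This also matches the rwbase_rt.h hits further down: the lock is unlocked exactly when the counter equals READER_BIAS.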
percpu-rwsem.c
58 * Conversely, any readers that increment their sem->read_count after in __percpu_down_read_trylock()
111 * We use EXCLUSIVE for both readers and writers to preserve FIFO order,
112 * and play games with the return value to allow waking multiple readers.
114 * Specifically, we wake readers until we've woken a single writer, or until a
136 return !reader; /* wake (readers until) 1 writer */ in percpu_rwsem_wake_function()
194 * newly arriving readers increment a given counter, they will immediately
219 /* Notify readers to take the slow path. */ in percpu_down_write()
224 * Having sem->block set makes new readers block. in percpu_down_write()
237 /* Wait for all active readers to complete. */ in percpu_down_write()
250 * that new readers might fail to see the results of this writer's in percpu_up_write()
rwsem.c
37 * - Bit 0: RWSEM_READER_OWNED - The rwsem is owned by readers
54 * is involved. Ideally we would like to track all the readers that own
109 * 1) rwsem_mark_wake() for readers.
291 * The lock is owned by readers when
296 * Having some reader bits set is not enough to guarantee a readers owned
297 * lock as the readers may be in the process of backing out from the count
344 RWSEM_WAKE_READERS, /* Wake readers only */
362 * Magic number to batch-wakeup waiting readers, even when writers are
403 * Readers, on the other hand, will block as they in rwsem_mark_wake()
421 * We prefer to do the first reader grant before counting readers in rwsem_mark_wake()
[all …]
qrwlock.c
23 * Readers come here when they cannot get the lock without waiting in queued_read_lock_slowpath()
27 * Readers in interrupt context will get the lock immediately in queued_read_lock_slowpath()
73 /* Set the waiting flag to notify readers that a writer is pending */ in queued_write_lock_slowpath()
76 /* When no more readers or writers, set the locked flag */ in queued_write_lock_slowpath()
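The two qrwlock writer-slowpath hits above describe a two-step protocol: set a waiting flag so readers back off, then take the lock once the counter shows no readers or writers. A sketch under assumed constants modeled on the qrwlock layout (writer byte in the low bits, reader count above it); the queueing on the external spinlock is omitted, so this is an illustration rather than the real implementation:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define QW_LOCKED  0x0ffu              /* a writer holds the lock */
#define QW_WAITING 0x100u              /* a writer is queued; readers wait */
#define QR_SHIFT   9                   /* reader count lives above both */

static atomic_uint cnts;

static bool write_lock_step(void)
{
    unsigned expected = QW_WAITING;

    /* Set the waiting flag to notify readers that a writer is pending. */
    atomic_fetch_or(&cnts, QW_WAITING);

    /* When no more readers or writers, swap the waiting flag for the
     * locked byte.  Fails (and would be retried) while readers remain. */
    return atomic_compare_exchange_strong(&cnts, &expected, QW_LOCKED);
}
```

In the kernel the failed compare-exchange is retried in a loop under the queue spinlock; here a single attempt is enough to show the state transition.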
/Linux-v5.15/Documentation/RCU/
checklist.rst
30 One final exception is where RCU readers are used to prevent
40 RCU does allow *readers* to run (almost) naked, but *writers* must
80 The whole point of RCU is to permit readers to run without
81 any locks or atomic operations. This means that readers will
94 locks (that are acquired by both readers and writers)
96 the readers refrain from accessing can be guarded by
101 c. Make updates appear atomic to readers. For example,
105 appear to be atomic to RCU readers, nor will sequences
111 readers see valid data at all phases of the update.
128 a. Readers must maintain proper ordering of their memory
[all …]
whatisRCU.rst
47 Section 1, though most readers will profit by reading this section at
70 new versions of these data items), and can run concurrently with readers.
72 readers is the semantics of modern CPUs guarantee that readers will see
76 removal phase. Because reclaiming data items can disrupt any readers
78 not start until readers no longer hold references to those data items.
82 reclamation phase until all readers active during the removal phase have
84 callback that is invoked after they finish. Only readers that are active
92 readers cannot gain a reference to it.
94 b. Wait for all previous readers to complete their RCU read-side
97 c. At this point, there cannot be any readers who hold references
[all …]
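The a/b/c steps quoted from whatisRCU.rst are the classic RCU deletion pattern: unlink so new readers cannot find the item, wait for pre-existing readers, then reclaim. A user-space sketch of the same shape; `struct item` and `remove_item()` are hypothetical, and `synchronize_rcu_stub()` stands in for the real `synchronize_rcu()`, which blocks until every reader active at step (a) has finished:

```c
#include <stdlib.h>

struct item {
    struct item *next;
    int key;
};

static void synchronize_rcu_stub(void)
{
    /* Placeholder: the real synchronize_rcu() waits for a grace period,
     * i.e. for all pre-existing read-side critical sections to end. */
}

/* Remove every item with the given key from a singly linked list. */
static void remove_item(struct item **headp, int key)
{
    struct item **pp = headp;

    while (*pp) {
        if ((*pp)->key == key) {
            struct item *victim = *pp;

            *pp = victim->next;      /* a. unlink: new readers miss it  */
            synchronize_rcu_stub();  /* b. wait for pre-existing readers */
            free(victim);            /* c. no reader can still refer to it */
        } else {
            pp = &(*pp)->next;
        }
    }
}
```

The point of the ordering is exactly what the quoted lines say: reclamation (c) must not start until the readers from the removal phase (a) have all completed (b).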
rcu.rst
10 must be long enough that any readers accessing the item being deleted have
22 The advantage of RCU's two-part approach is that RCU readers need
27 in read-mostly situations. The fact that RCU readers need not
31 if the RCU readers give no indication when they are done?
33 Just as with spinlocks, RCU readers are not permitted to
43 same effect, but require that the readers manipulate CPU-local
lockdep.rst
41 invoked by both RCU readers and updaters.
45 is invoked by both RCU-bh readers and updaters.
49 is invoked by both RCU-sched readers and updaters.
53 is invoked by both SRCU readers and updaters.
rcubarrier.rst
10 very low-overhead readers that are immune to deadlock, priority inversion,
16 readers, so that RCU updates to shared data must be undertaken quite
18 pre-existing readers have finished. These old versions are needed because
19 such readers might hold a reference to them. RCU updates can therefore be
22 How can an RCU writer possibly determine when all readers are finished,
23 given that readers might well leave absolutely no trace of their
25 pre-existing readers have completed. An updater wishing to delete an
/Linux-v5.15/include/linux/
rwbase_rt.h
12 atomic_t readers; member
18 .readers = ATOMIC_INIT(READER_BIAS), \
25 atomic_set(&(rwbase)->readers, READER_BIAS); \
31 return atomic_read(&rwb->readers) != READER_BIAS; in rw_base_is_locked()
36 return atomic_read(&rwb->readers) > 0; in rw_base_is_contended()
rcu_sync.h
16 /* Structure to mediate between updaters and fastpath-using readers. */
26 * rcu_sync_is_idle() - Are readers permitted to use their fastpaths?
29 * Returns true if readers are permitted to use their fastpaths. Must be
u64_stats_sync.h
17 * be lost, thus blocking readers forever.
29 * 5) Readers are allowed to sleep or be preempted/interrupted: they perform
32 * 6) Readers must use both u64_stats_fetch_{begin,retry}_irq() if the stats
34 * seqcounts are not used for UP kernels). 32-bit UP stat readers could read
199 * In case irq handlers can update u64 counters, readers can use following helpers
dma-resv.h
93 * that the lock is only against other writers, readers will run concurrently
94 * with a writer under RCU. The seqlock is used to notify readers if they
114 * modification. Note, that the lock is only against other writers, readers
116 * notify readers if they overlap with a writer.
165 * Note, that the lock is only against other writers, readers will run
166 * concurrently with a writer under RCU. The seqlock is used to notify readers
/Linux-v5.15/kernel/rcu/
sync.c
28 * rcu_sync_enter_start - Force readers onto slow path for multiple updates
58 * If it is called by rcu_sync_enter() it signals that all the readers were
67 * readers back onto their fastpaths (after a grace period). If both
70 * rcu_sync_exit(). Otherwise, set all state back to idle so that readers
107 * rcu_sync_enter() - Force readers onto slowpath
110 * This function is used by updaters who need readers to make use of
113 * tells readers to stay off their fastpaths. A later call to
159 * rcu_sync_exit() - Allow readers back onto fast path after grace period
163 * now allow readers to make use of their fastpaths after a grace period
165 * calls to rcu_sync_is_idle() will return true, which tells readers that
/Linux-v5.15/Documentation/locking/
lockdep-design.rst
405 spin_lock() or write_lock()), non-recursive readers (i.e. shared lockers, like
406 down_read()) and recursive readers (recursive shared lockers, like rcu_read_lock()).
410 r: stands for non-recursive readers.
411 R: stands for recursive readers.
412 S: stands for all readers (non-recursive + recursive), as both are shared lockers.
413 N: stands for writers and non-recursive readers, as both are not recursive.
417 Recursive readers, as their name indicates, are the lockers allowed to acquire
421 While non-recursive readers will cause a self deadlock if trying to acquire inside
424 The difference between recursive readers and non-recursive readers is because:
425 recursive readers get blocked only by a write lock *holder*, while non-recursive
[all …]
seqlock.rst
9 lockless readers (read-only retry loops), and no writer starvation. They
23 is odd and indicates to the readers that an update is in progress. At
25 even again which lets readers make progress.
153 from interruption by readers. This is typically the case when the read
195 1. Normal Sequence readers which never block a writer but they must
206 2. Locking readers which will wait if a writer or another locking reader
218 according to a passed marker. This is used to avoid lockless readers
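The seqlock.rst hits describe the core protocol: the writer makes the sequence odd while an update is in progress and even again when it finishes, and lockless readers retry if they saw an odd sequence or if it changed under them. A minimal user-space model of that pattern, assuming seq_cst C11 atomics in place of the kernel's explicit barriers; the function names are invented stand-ins, not the kernel API:

```c
#include <stdatomic.h>

static atomic_uint seq;
static int protected_a, protected_b;

static unsigned read_seqbegin_model(void)
{
    unsigned s;

    while ((s = atomic_load(&seq)) & 1)
        ;                                /* odd: writer in progress, spin */
    return s;
}

static int read_seqretry_model(unsigned start)
{
    return atomic_load(&seq) != start;   /* retry if a writer intervened */
}

static void write_update_model(int a, int b)
{
    atomic_fetch_add(&seq, 1);           /* now odd: update in progress */
    protected_a = a;
    protected_b = b;
    atomic_fetch_add(&seq, 1);           /* even again: readers may proceed */
}
```

A reader then runs `do { s = read_seqbegin_model(); /* copy data */ } while (read_seqretry_model(s));`, which is the read-only retry loop the quoted lines refer to.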
locktypes.rst
95 readers.
135 rw_semaphore is a multiple readers and single writer lock mechanism.
141 exist special-purpose interfaces that allow non-owner release for readers.
151 readers, a preempted low-priority reader will continue holding its lock,
152 thus starving even high-priority writers. In contrast, because readers
155 writer from starving readers.
302 rwlock_t is a multiple readers and single writer lock mechanism.
317 readers, a preempted low-priority reader will continue holding its lock,
318 thus starving even high-priority writers. In contrast, because readers
321 preventing that writer from starving readers.
/Linux-v5.15/fs/btrfs/
locking.c
26 * - try-lock semantics for readers and writers
210 * if there are pending readers no new writers would be allowed to come in and
222 atomic_set(&lock->readers, 0); in btrfs_drew_lock_init()
237 if (atomic_read(&lock->readers)) in btrfs_drew_try_write_lock()
242 /* Ensure writers count is updated before we check for pending readers */ in btrfs_drew_try_write_lock()
244 if (atomic_read(&lock->readers)) { in btrfs_drew_try_write_lock()
257 wait_event(lock->pending_writers, !atomic_read(&lock->readers)); in btrfs_drew_write_lock()
269 atomic_inc(&lock->readers); in btrfs_drew_read_lock()
289 if (atomic_dec_and_test(&lock->readers)) in btrfs_drew_read_unlock()
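The btrfs locking.c hits sketch the "drew" try-write protocol: bail if readers are pending, bump the writer count, then re-check readers with a full barrier in between so a racing reader either sees the writer or is seen by it. A user-space model under the assumption that seq_cst C11 atomics can stand in for the kernel's `smp_mb()`; the waitqueue handling is omitted and the names are illustrative:

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int readers_cnt, writers_cnt;

static bool drew_try_write_lock(void)
{
    if (atomic_load(&readers_cnt))
        return false;                    /* pending readers: bail early */

    atomic_fetch_add(&writers_cnt, 1);
    /* seq_cst ordering publishes the writer count before the reader
     * re-check (the kernel uses an explicit smp_mb() here). */
    if (atomic_load(&readers_cnt)) {
        atomic_fetch_sub(&writers_cnt, 1);
        return false;                    /* a reader slipped in: back out */
    }
    return true;
}

static void drew_write_unlock(void)
{
    atomic_fetch_sub(&writers_cnt, 1);
}
```

The double check is the point: with the writer count published first, once the lock is granted no new reader can have entered unnoticed.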
/Linux-v5.15/drivers/misc/ibmasm/
event.c
30 list_for_each_entry(reader, &sp->event_buffer->readers, node) in wake_up_event_readers()
39 * event readers.
40 * There is no reader marker in the buffer, therefore readers are
73 * Called by event readers (initiated from user space through the file
123 list_add(&reader->node, &sp->event_buffer->readers); in ibmasm_event_reader_register()
153 INIT_LIST_HEAD(&buffer->readers); in ibmasm_event_buffer_init()
/Linux-v5.15/drivers/misc/cardreader/
Kconfig
9 Alcor Micro card readers support access to many types of memory cards,
20 Realtek card readers support access to many types of memory cards,
29 Select this option to get support for Realtek USB 2.0 card readers
/Linux-v5.15/arch/x86/include/asm/
spinlock.h
30 * Read-write spinlocks, allowing multiple readers
33 * NOTE! it is quite common to have readers in interrupts
36 * irq-safe write-lock, but readers can get non-irqsafe
/Linux-v5.15/drivers/hid/
hid-roccat.c
18 * It is inspired by hidraw, but uses only one circular buffer for all readers.
47 struct list_head readers; member
48 /* protects modifications of readers list */
52 * circular_buffer has one writer and multiple readers with their own
191 list_add_tail(&reader->node, &device->readers); in roccat_open()
239 * roccat_report_event() - output data to readers
268 list_for_each_entry(reader, &device->readers, node) { in roccat_report_event()
335 INIT_LIST_HEAD(&device->readers); in roccat_connect()
/Linux-v5.15/arch/sh/include/asm/
spinlock-cas.h
44 * Read-write spinlocks, allowing multiple readers but only one writer.
46 * NOTE! it is quite common to have readers in interrupts but no interrupt
48 * needs to get a irq-safe write-lock, but readers can get non-irqsafe
/Linux-v5.15/drivers/ptp/
ptp_private.h
42 int defunct; /* tells readers to go away when clock is being removed */
71 * The function queue_cnt() is safe for readers to call without
72 * holding q->lock. Readers use this function to verify that the queue
/Linux-v5.15/arch/s390/include/asm/
spinlock.h
96 * Read-write spinlocks, allowing multiple readers
99 * NOTE! it is quite common to have readers in interrupts
102 * irq-safe write-lock, but readers can get non-irqsafe
