
For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different operating system (the master, the A9,
usually runs Linux, while the slave processors, the M3 and the DSP, run
some flavor of RTOS).
A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors, which otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.
This is necessary, for example, for inter-processor communications:
on OMAP4, CPU-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).
To achieve fast message-based communications, minimal kernel support is
needed to deliver messages arriving from a remote processor to the
appropriate user process.
This communication is based on simple data structures that are shared between
the remote processors, and access to them is synchronized using the hwspinlock
module (the remote processor directly places new messages in this shared data
structure).
A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.
  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
in case an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).
  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
call this function in order to reserve specific hwspinlock ids for
predefined purposes.

Should be called from a process context (might sleep).
  int of_hwspin_lock_get_id(struct device_node *np, int propno);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).
  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).
  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled, so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.
  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.
  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
				  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled, the
local interrupts are disabled, and their previous state is saved in the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.
  int hwspin_lock_timeout_raw(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

Caution: the caller must protect the routine that takes the hardware lock
with a mutex or spinlock to avoid deadlock; in return, this variant lets the
caller perform time-consuming or sleepable operations under the hardware lock.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.
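The caution above can be illustrated with a userspace sketch: a local
(software) mutex serializes tasks on this core, while the hardware lock
excludes only the remote core. take_hwlock_raw() and release_hwlock_raw()
are hypothetical stand-ins for hwspin_lock_timeout_raw() and
hwspin_unlock_raw(); the hw_lock_bit flag models the hardware lock.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t local_lock = PTHREAD_MUTEX_INITIALIZER;
static int hw_lock_bit;			/* models the hardware lock */

static int take_hwlock_raw(void)	/* 0 on success, -1 on failure */
{
	if (hw_lock_bit)
		return -1;		/* models -ETIMEDOUT */
	hw_lock_bit = 1;
	return 0;
}

static void release_hwlock_raw(void)
{
	hw_lock_bit = 0;
}

/* The caller-side pattern: take the local mutex first, then the hw lock. */
static int do_shared_work(void)
{
	int ret;

	pthread_mutex_lock(&local_lock);   /* serializes local tasks; may sleep */
	ret = take_hwlock_raw();           /* excludes only the remote core */
	if (ret == 0) {
		/* sleepable / time-consuming work on shared data is OK here */
		release_hwlock_raw();
	}
	pthread_mutex_unlock(&local_lock);
	return ret;
}
```

Because local contention is resolved by the mutex before the hardware lock is
ever attempted, two local tasks can never deadlock against each other while
spinning on the hardware lock.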
  int hwspin_lock_timeout_in_atomic(struct hwspinlock *hwlock, unsigned int to);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

This function shall be called only from an atomic context, and the timeout
value shall not exceed a few msecs.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.
  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken. Upon a successful return from this function,
preemption is disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible, in order to minimize remote
cores polling on the hardware interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.
  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken. Upon a successful return from this function,
preemption and the local interrupts are disabled, so the caller must not
sleep, and is advised to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.
  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken. Upon a successful return from this function,
preemption is disabled, the local interrupts are disabled, and their
previous state is saved in the given flags placeholder. The caller must
not sleep, and is advised to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.
  int hwspin_trylock_raw(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Caution: the caller must protect the routine that takes the hardware lock
with a mutex or spinlock to avoid deadlock; in return, this variant lets the
caller perform time-consuming or sleepable operations under the hardware lock.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.
  int hwspin_trylock_in_atomic(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

This function shall be called only from an atomic context.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.
  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. Always succeeds, and can be called
from any context (the function never sleeps).

Note: code should **never** unlock an hwspinlock which is already unlocked.
  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.
The caller should **never** unlock an hwspinlock which is already unlocked.

Doing so is considered a bug (there is no protection against this).
This function will never sleep.
  void hwspin_unlock_irqrestore(struct hwspinlock *hwlock,
				unsigned long *flags);

Unlock a previously-locked hwspinlock and restore local interrupts to
their previous state, as given in flags.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.
  void hwspin_unlock_raw(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.
  void hwspin_unlock_in_atomic(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.
  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve the id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.
A typical usage example:

  #include <linux/hwspinlock.h>

  int hwspinlock_example1(void)
  {
	struct hwspinlock *hwlock;
	int id, ret;

	/* dynamically assign a hwspinlock */
	hwlock = hwspin_lock_request();
	if (!hwlock)
		...

	id = hwspin_lock_get_id(hwlock);
	/* probably need to communicate id to a remote processor now */

	/* take the lock, spin for 1 sec if it's already taken */
	ret = hwspin_lock_timeout(hwlock, 1000);
	if (ret)
		...

	/* we took the lock, do our thing now, but do NOT sleep */

	/* release the lock */
	hwspin_unlock(hwlock);

	/* free the lock */
	ret = hwspin_lock_free(hwlock);
	if (ret)
		...

	return ret;
  }
  int hwspinlock_example2(void)
  {
	struct hwspinlock *hwlock;

	/*
	 * assign a specific hwspinlock id - this should be called early
	 * by board init code.
	 */
	hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
	if (!hwlock)
		return -EBUSY;

	...
  }
  int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
			   const struct hwspinlock_ops *ops, int base_id,
			   int num_locks);

To be called from the underlying platform-specific implementation, in
order to register a new hwspinlock device (which is usually a bank of
numerous locks). Should be called from a process context (this function
might sleep).

Returns 0 on success, or an appropriate error code on failure.

  int hwspin_lock_unregister(struct hwspinlock_device *bank);

To be called from the underlying vendor-specific implementation, in order
to unregister an hwspinlock device (which is usually a bank of numerous
locks). Should be called from a process context (this function might sleep).

Returns 0 on success, or an appropriate error code on failure (e.g. -EBUSY
if an hwspinlock from the bank is still in use).
struct hwspinlock_device is a device which usually contains a bank
of hardware locks. It is registered by the underlying hwspinlock
implementation using the hwspin_lock_register() API.
  /**
   * struct hwspinlock_device - a device which usually spans numerous hwspinlocks
   * @dev: underlying device, will be used to invoke runtime PM api
   * @ops: platform-specific hwspinlock handlers
   * @base_id: id index of the first lock in this device
   * @num_locks: number of locks in this device
   * @lock: dynamically allocated array of 'struct hwspinlock'
   */
  struct hwspinlock_device {
	struct device *dev;
	const struct hwspinlock_ops *ops;
	int base_id;
	int num_locks;
	struct hwspinlock lock[];
  };
struct hwspinlock_device contains an array of hwspinlock structs, each
of which represents a single hardware lock:
  /**
   * struct hwspinlock - this struct represents a single hwspinlock instance
   * @bank: the hwspinlock_device structure which owns this lock
   * @lock: initialized and used by hwspinlock core
   * @priv: private data, owned by the underlying platform-specific hwspinlock drv
   */
  struct hwspinlock {
	struct hwspinlock_device *bank;
	spinlock_t lock;
	void *priv;
  };
When registering a bank of locks, the hwspinlock driver only needs to
set the priv members of the locks. The rest of the members are set and
initialized by the hwspinlock core itself.
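Given this layout, the core can derive each lock's global id purely from the
bank: the id is the bank's base_id plus the lock's index in the lock[] array.
The following is a userspace sketch of that arithmetic; the fake_* structs
are simplified stand-ins for the kernel's, not the real definitions.

```c
#include <assert.h>
#include <stdlib.h>

struct fake_hwspinlock {
	void *priv;			/* driver-owned, as in the kernel */
};

struct fake_hwspinlock_device {
	int base_id;			/* id of the first lock in the bank */
	int num_locks;
	struct fake_hwspinlock lock[];	/* flexible array, one per lock */
};

/* Allocate a bank with room for num_locks locks at the end. */
static struct fake_hwspinlock_device *make_bank(int base_id, int num_locks)
{
	struct fake_hwspinlock_device *bank =
		malloc(sizeof(*bank) +
		       (size_t)num_locks * sizeof(struct fake_hwspinlock));

	bank->base_id = base_id;
	bank->num_locks = num_locks;
	return bank;
}

/* Global id = base_id + index of the lock within its bank. */
static int lock_to_id(struct fake_hwspinlock_device *bank,
		      struct fake_hwspinlock *hwlock)
{
	return bank->base_id + (int)(hwlock - bank->lock);
}
```

For a hypothetical second bank of 8 locks with base_id 32, the sixth lock
(&bank->lock[5]) maps to global id 37.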
  struct hwspinlock_ops {
	int (*trylock)(struct hwspinlock *lock);
	void (*unlock)(struct hwspinlock *lock);
	void (*relax)(struct hwspinlock *lock);
  };
The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and nonzero on success. It may **not** sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.

The ->relax() callback is optional. It is called by hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may **not** sleep.
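A vendor implementation of these three callbacks can be sketched in userspace
by modeling the lock's register with a plain int. The read-to-take /
write-to-release semantics below are modeled loosely on OMAP-style hardware
semaphores and are an assumption for illustration, not a description of any
specific driver; a real driver would use readl()/writel() on an ioremapped
register stored in the lock's priv pointer.

```c
#include <assert.h>

struct sim_hwspinlock {
	int *reg;			/* models the lock's MMIO register */
};

/* Single attempt; returns nonzero on success, 0 on failure. Never sleeps. */
static int sim_trylock(struct sim_hwspinlock *lock)
{
	/* a real driver would do an atomic readl() of the semaphore here */
	int was_taken = *lock->reg;

	*lock->reg = 1;
	return !was_taken;
}

/* Release the lock; always succeeds. Never sleeps. */
static void sim_unlock(struct sim_hwspinlock *lock)
{
	/* a real driver would writel() the release value here */
	*lock->reg = 0;
}

/* Optional back-off between trylock attempts while the core spins. */
static void sim_relax(struct sim_hwspinlock *lock)
{
	(void)lock;			/* e.g. cpu_relax() in the kernel */
}
```

The trylock/unlock pair shows the contract the core relies on: a failed
trylock leaves the lock state untouched from the caller's point of view, and
unlock makes the very next trylock succeed.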