Lines Matching full:request

32  * retirement order) request relevant for the desired mode of access.
34 * track the most recent fence request, typically this is done as part of
39 * itself as idle (i915_active_request.request == NULL). The owner
45 struct i915_request *request);
50 * @rq - initial request to track, can be NULL
55 * an activity tracker, that is, one for tracking the last known active request
56 * associated with it. When the last request becomes idle, i.e. when it is retired
65 RCU_INIT_POINTER(active->request, rq); in i915_active_request_init()
77 * __i915_active_request_set - updates the tracker to watch the current request
79 * @request - the request to watch
81 * __i915_active_request_set() watches the given @request for completion. Whilst
82 * that @request is busy, the @active reports busy. When that @request is
87 struct i915_request *request) in __i915_active_request_set() argument
92 list_move(&active->link, &request->active_list); in __i915_active_request_set()
93 rcu_assign_pointer(active->request, request); in __i915_active_request_set()
101 * i915_active_request_raw - return the active request
104 * i915_active_request_raw() returns the current request being tracked, or NULL.
105 * It does not obtain a reference on the request for the caller, so the caller
112 return rcu_dereference_protected(active->request, in i915_active_request_raw()
117 * i915_active_request_peek - report the active request being monitored
120 * i915_active_request_peek() returns the current request being tracked if
121 * still active, or NULL. It does not obtain a reference on the request
128 struct i915_request *request; in i915_active_request_peek() local
130 request = i915_active_request_raw(active, mutex); in i915_active_request_peek()
131 if (!request || i915_request_completed(request)) in i915_active_request_peek()
134 return request; in i915_active_request_peek()
138 * i915_active_request_get - return a reference to the active request
141 * i915_active_request_get() returns a reference to the active request, or NULL
152 * __i915_active_request_get_rcu - return a reference to the active request
155 * __i915_active_request_get() returns a reference to the active request,
163 * Performing a lockless retrieval of the active request is super in __i915_active_request_get_rcu()
165 * slab of request objects will not be freed whilst we hold the in __i915_active_request_get_rcu()
166 * RCU read lock. It does not guarantee that the request itself in __i915_active_request_get_rcu()
171 * rq = active.request in __i915_active_request_get_rcu()
174 * active.request = NULL in __i915_active_request_get_rcu()
179 * To prevent the request from being reused whilst the caller in __i915_active_request_get_rcu()
182 * (refcnt == 0). That prevents the request being reallocated in __i915_active_request_get_rcu()
183 * whilst the caller holds on to it. To check that the request in __i915_active_request_get_rcu()
185 * check that our request remains the active request across in __i915_active_request_get_rcu()
190 * In the middle of all that, we inspect whether the request is in __i915_active_request_get_rcu()
191 * complete. Retiring is lazy so the request may be completed long in __i915_active_request_get_rcu()
193 * request is complete is far cheaper (as it involves no locked in __i915_active_request_get_rcu()
197 * seqno nor HWS is the right one! However, if the request was in __i915_active_request_get_rcu()
198 * reallocated, that means the active tracker's request was complete. in __i915_active_request_get_rcu()
199 * If the new request is also complete, then both are and we can in __i915_active_request_get_rcu()
200 * just report the active tracker is idle. If the new request is in __i915_active_request_get_rcu()
202 * it remained the active request. in __i915_active_request_get_rcu()
204 * It is then imperative that we do not zero the request on in __i915_active_request_get_rcu()
209 struct i915_request *request; in __i915_active_request_get_rcu() local
211 request = rcu_dereference(active->request); in __i915_active_request_get_rcu()
212 if (!request || i915_request_completed(request)) in __i915_active_request_get_rcu()
218 * re-emit the load for request->fence.seqno. A race would catch in __i915_active_request_get_rcu()
229 request = i915_request_get_rcu(request); in __i915_active_request_get_rcu()
235 * the request, we may not notice a change in the active in __i915_active_request_get_rcu()
250 * returns the request (and so with the reference counted in __i915_active_request_get_rcu()
253 * that this request is the one currently being tracked. in __i915_active_request_get_rcu()
258 if (!request || request == rcu_access_pointer(active->request)) in __i915_active_request_get_rcu()
259 return rcu_pointer_handoff(request); in __i915_active_request_get_rcu()
261 i915_request_put(request); in __i915_active_request_get_rcu()
266 * i915_active_request_get_unlocked - return a reference to the active request
269 * i915_active_request_get_unlocked() returns a reference to the active request,
278 struct i915_request *request; in i915_active_request_get_unlocked() local
281 request = __i915_active_request_get_rcu(active); in i915_active_request_get_unlocked()
284 return request; in i915_active_request_get_unlocked()
292 * assigned to a request. Due to the lazy retiring, that request may be idle
298 return rcu_access_pointer(active->request); in i915_active_request_isset()
302 * i915_active_request_retire - waits until the request is retired
303 * @active - the active tracker on which to wait
305 * i915_active_request_retire() waits until the request is completed,
314 struct i915_request *request; in i915_active_request_retire() local
317 request = i915_active_request_raw(active, mutex); in i915_active_request_retire()
318 if (!request) in i915_active_request_retire()
321 ret = i915_request_wait(request, in i915_active_request_retire()
328 RCU_INIT_POINTER(active->request, NULL); in i915_active_request_retire()
330 active->retire(active, request); in i915_active_request_retire()
338 * Each set of commands submitted to the GPU comprises a single request that
346 * track every single request associated with the task, but knowing that
347 * each request belongs to an ordered timeline (later requests within a
349 * latest request in each timeline to determine the overall status of the
357 * provide a serialisation point either for request submission or for CPU