Lines matching the full-text search "we" (in drivers/gpu/drm/i915/i915_request.c)

69 	 * We could extend the life of a context to beyond that of all in i915_fence_get_timeline_name()
71 * or we just give them a false name. Since in i915_fence_get_timeline_name()
120 * freed when the slab cache itself is freed, and so we would get in i915_fence_release()
234 * is-banned?, or we know the request is already inflight. in i915_request_active_engine()
236 * Note that rq->engine is unstable, and so we double in i915_request_active_engine()
237 * check that we have acquired the lock on the final engine. in i915_request_active_engine()
320 * We know the GPU must have read the request to have in i915_request_retire()
325 * Note this requires that we are always called in request in i915_request_retire()
331 /* Poison before we release our space in the ring */ in i915_request_retire()
345 * We only loosely track inflight requests across preemption, in i915_request_retire()
346 * and so we may find ourselves attempting to retire a _completed_ in i915_request_retire()
347 * request that we have removed from the HW and put back on a run in i915_request_retire()
350 * As we set I915_FENCE_FLAG_ACTIVE on the request, this should be in i915_request_retire()
351 * after removing the breadcrumb and signaling it, so that we do not in i915_request_retire()
398 * Even if we have unwound the request, it may still be on in __request_in_flight()
405 * As we know that there are always preemption points between in __request_in_flight()
406 * requests, we know that only the currently executing request in __request_in_flight()
407 * may be still active even though we have cleared the flag. in __request_in_flight()
408 * However, we can't rely on our tracking of ELSP[0] to know in __request_in_flight()
417 * latter, it may send the ACK and we process the event copying the in __request_in_flight()
419 * this implies the HW is arbitrating and not stuck in *active, we do in __request_in_flight()
420 * not worry about complete accuracy, but we do require no read/write in __request_in_flight()
422 * as the array is being overwritten, for which we require the writes in __request_in_flight()
428 * that we received an ACK from the HW, and so the context is not in __request_in_flight()
429 * stuck -- if we do not see ourselves in *active, the inflight status in __request_in_flight()
430 * is valid. If instead we see ourselves being copied into *active, in __request_in_flight()
431 * we are inflight and may signal the callback. in __request_in_flight()
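
The __request_in_flight() fragments above concern scanning the engine's array of currently executing requests while the hardware ACK path may be overwriting that array. As a rough sketch of such a scan (the helper name and parameters are illustrative, assuming the writer publishes each slot with ordered WRITE_ONCE() stores so a racing reader sees either the old or the new pointer, never a torn one):

static bool request_in_flight(struct i915_request * const *active,
			      unsigned int num_slots,
			      const struct i915_request *rq)
{
	unsigned int n;

	for (n = 0; n < num_slots; n++) {
		const struct i915_request *slot = READ_ONCE(active[n]);

		if (!slot)
			break;	/* stop at the first empty slot */

		if (slot == rq)
			return true;	/* we are (still) being executed */
	}

	return false;
}
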
471 * active. This ensures that if we race with the in __await_execution()
472 * __notify_execute_cb from i915_request_submit() and we are not in __await_execution()
473 * included in that list, we get a second bite of the cherry and in __await_execution()
477 * In i915_request_retire() we set the ACTIVE bit on a completed in __await_execution()
479 * callback first, then checking the ACTIVE bit, we serialise with in __await_execution()
515 * breadcrumb at the end (so we get the fence notifications). in __i915_request_skip()
566 * With the advent of preempt-to-busy, we frequently encounter in __i915_request_submit()
567 * requests that we have unsubmitted from HW, but left running in __i915_request_submit()
569 * resubmission of that completed request, we can skip in __i915_request_submit()
573 * We must remove the request from the caller's priority queue, in __i915_request_submit()
576 * request has *not* yet been retired and we can safely move in __i915_request_submit()
593 * Are we using semaphores when the gpu is already saturated? in __i915_request_submit()
601 * If we installed a semaphore on this request and we only submit in __i915_request_submit()
604 * increases the amount of work we are doing. If so, we disable in __i915_request_submit()
605 * further use of semaphores until we are idle again, whence we in __i915_request_submit()
632 * In the future, perhaps when we have an active time-slicing scheduler, in __i915_request_submit()
635 * quite hairy, we have to carefully roll back the fence and do a in __i915_request_submit()
641 /* We may be recursing from the signal callback of another i915 fence */ in __i915_request_submit()
675 * Before we remove this breadcrumb from the signal list, we have in __i915_request_unsubmit()
677 * attach itself. We first mark the request as no longer active and in __i915_request_unsubmit()
686 /* We've already spun, don't charge on resubmitting. */ in __i915_request_unsubmit()
691 * We don't need to wake_up any waiters on request->execute, they in __i915_request_unsubmit()
738 * We need to serialize use of the submit_request() callback in submit_notify()
740 * i915_gem_set_wedged(). We use the RCU mechanism to mark the in submit_notify()
791 /* If we cannot wait, dip into our reserves */ in request_alloc_slow()
816 /* Retire our old requests in the hope that we free some */ in request_alloc_slow()
853 * We use RCU to look up requests in flight. The lookups may in __i915_request_create()
855 * That is, the request we are writing to here may be in the process in __i915_request_create()
857 * we have to be very careful when overwriting the contents. During in __i915_request_create()
858 * the RCU lookup, we chase the request->engine pointer, in __i915_request_create()
864 * with dma_fence_init(). This increment is safe for release as we in __i915_request_create()
865 * check that the request we have a reference to matches the active in __i915_request_create()
868 * Before we increment the refcount, we chase the request->engine in __i915_request_create()
869 * pointer. We must not call kmem_cache_zalloc() or else we set in __i915_request_create()
871 * we see the request is completed (based on the value of the in __i915_request_create()
873 * If we decide the request is not completed (new engine or seqno), in __i915_request_create()
874 * then we grab a reference and double check that it is still the in __i915_request_create()
896 * (e.g. i915_fence_get_driver_name). We could likely change these in __i915_request_create()
921 /* We bump the ref for the fence chain */ in __i915_request_create()
940 * Note that due to how we add reserved_space to intel_ring_begin() in __i915_request_create()
941 * we need to double our request to ensure that if we need to wrap in __i915_request_create()
950 * should we detect the updated seqno part-way through the in __i915_request_create()
951 * GPU processing the request, we never over-estimate the in __i915_request_create()
970 /* Make sure we didn't add ourselves to external state before freeing */ in __i915_request_create()
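
The __i915_request_create() fragments above describe the RCU lookup that request reuse must not break: a reader chases a request pointer that the slab may recycle at any moment, so nothing it reads is trusted until it has taken a reference and re-checked that it still holds the request it set out to find. A sketch of that lookup pattern, assuming a hypothetical slot holding an RCU-protected request pointer (this is not the driver's actual interface):

static struct i915_request *
lookup_active_request(struct i915_request __rcu **slot)
{
	struct i915_request *rq;

	rcu_read_lock();
	do {
		rq = rcu_dereference(*slot);
		if (!rq || i915_request_completed(rq)) {
			rq = NULL;
			break;
		}

		/*
		 * The slab may recycle rq underneath us, so only a
		 * successful refcount increment pins the object.
		 */
		if (!dma_fence_get_rcu(&rq->fence))
			continue;

		/* Re-check that it is still the request installed in *slot. */
		if (rq == rcu_access_pointer(*slot) &&
		    !i915_request_completed(rq))
			break;

		i915_request_put(rq);
	} while (1);
	rcu_read_unlock();

	return rq;
}
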
1003 /* Check that we do not interrupt ourselves with a new request */ in i915_request_create()
1026 * The caller holds a reference on @signal, but we do not serialise in i915_request_await_start()
1029 * We do not hold a reference to the request before @signal, and in i915_request_await_start()
1031 * we follow the link backwards. in i915_request_await_start()
1084 * both the GPU and CPU. We want to limit the impact on others, in already_busywaiting()
1086 * latency. Therefore we restrict ourselves to not using more in already_busywaiting()
1088 * if we have detected the engine is saturated (i.e. would not be in already_busywaiting()
1092 * See the are-we-too-late? check in __i915_request_submit(). in already_busywaiting()
1110 /* We need to pin the signaler's HWSP until we are finished reading. */ in __emit_semaphore_wait()
1124 * Using greater-than-or-equal here means we have to worry in __emit_semaphore_wait()
1125 * about seqno wraparound. To side step that issue, we swap in __emit_semaphore_wait()
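
The __emit_semaphore_wait() fragments above note that a greater-than-or-equal poll has to cope with seqno wraparound. The usual wraparound-tolerant form of that comparison takes the signed difference of the two unsigned counters; a small illustrative helper (not the driver's own) looks like:

#include <linux/types.h>

/*
 * True once @seq has advanced to or past @target. Casting the unsigned
 * difference to a signed type keeps the answer correct across a u32
 * wraparound, provided the two values stay within 2^31 of each other.
 */
static inline bool seqno_has_passed(u32 seq, u32 target)
{
	return (s32)(seq - target) >= 0;
}
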
1164 * that may fail catastrophically, then we want to avoid using in emit_semaphore_wait()
1165 * semaphores as they bypass the fence signaling metadata, and we in emit_semaphore_wait()
1171 /* Just emit the first semaphore we see as request space is limited. */ in emit_semaphore_wait()
1229 * The execution cb fires when we submit the request to HW. But in in __i915_request_await_execution()
1231 * run (consider that we submit 2 requests for the same context, where in __i915_request_await_execution()
1232 * the request of interest is behind an indefinite spinner). So we hook in __i915_request_await_execution()
1234 * in the worst case, though we hope that the await_start is elided. in __i915_request_await_execution()
1243 * Now that we are queued to the HW at roughly the same time (thanks in __i915_request_await_execution()
1247 * signaler depends on a semaphore, so indirectly do we, and we do not in __i915_request_await_execution()
1249 * So we wait. in __i915_request_await_execution()
1251 * However, there is also a second condition for which we need to wait in __i915_request_await_execution()
1256 * immediate execution, and so we must wait until it reaches the in __i915_request_await_execution()
1282 * The downside of using semaphores is that we lose metadata passing in mark_external()
1283 * along the signaling chain. This is particularly nasty when we in mark_external()
1285 * fatal errors we want to scrub the request before it is executed, in mark_external()
1286 * which means that we cannot preload the request onto HW and have in mark_external()
1355 * We don't squash repeated fence dependencies here as we in i915_request_await_execution()
1375 * If we are waiting on a virtual engine, then it may be in await_request_submit()
1378 * engine and then passed to the physical engine. We cannot allow in await_request_submit()
1431 * we should *not* decompose it into its individual fences. However, in i915_request_await_dma_fence()
1432 * we don't currently store which mode the fence-array is operating in i915_request_await_dma_fence()
1434 * amdgpu and we should not see any incoming fence-array from in i915_request_await_dma_fence()
1482 * @to: request we wish to use
1487 * Conceptually we serialise writes between engines inside the GPU.
1488 * We only allow one engine to write into a buffer at any time, but
1489 * multiple readers. To ensure each has a coherent view of memory, we must:
1495 * - If we are a write request (pending_write_domain is set), the new
1550 * is special cased so that we can eliminate redundant ordering in __i915_request_add_to_timeline()
1551 * operations while building the request (we know that the timeline in __i915_request_add_to_timeline()
1552 * itself is ordered, and here we guarantee it). in __i915_request_add_to_timeline()
1554 * As we know we will need to emit tracking along the timeline, in __i915_request_add_to_timeline()
1555 * we embed the hooks into our request struct -- at the cost of in __i915_request_add_to_timeline()
1560 * that we can apply a slight variant of the rules specialised in __i915_request_add_to_timeline()
1562 * If we consider the case of virtual engine, we must emit a dma-fence in __i915_request_add_to_timeline()
1575 * we need to be wary in case the timeline->last_request in __i915_request_add_to_timeline()
1634 * should we detect the updated seqno part-way through the in __i915_request_commit()
1635 * GPU processing the request, we never over-estimate the in __i915_request_commit()
1657 * request - i.e. we may want to preempt the current request in order in __i915_request_queue()
1658 * to run a high priority dependency chain *before* we can execute this in __i915_request_queue()
1661 * This is called before the request is ready to run so that we can in __i915_request_queue()
1708 * the comparisons are no longer valid if we switch CPUs. Instead of in local_clock_ns()
1709 * blocking preemption for the entire busywait, we can detect the CPU in local_clock_ns()
1736 * Only wait for the request if we know it is likely to complete. in __i915_spin_request()
1738 * We don't track the timestamps around requests, nor the average in __i915_spin_request()
1739 * request length, so we do not have a good indicator that this in __i915_spin_request()
1740 * request will complete within the timeout. What we do know is the in __i915_spin_request()
1741 * order in which requests are executed by the context and so we can in __i915_spin_request()
1753 * rate. By busywaiting on the request completion for a short while we in __i915_spin_request()
1755 * if it is a slow request, we want to sleep as quickly as possible. in __i915_spin_request()
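
The local_clock_ns() and __i915_spin_request() fragments above describe busy-waiting briefly before sleeping, comparing samples of a cpu-local clock and abandoning the spin if the task migrates instead of keeping preemption disabled for the whole wait. A rough sketch of that shape, using a generic dma_fence completion check as a stand-in for the driver's own request-completed test (the function name and timeout handling are illustrative):

static bool spin_for_completion(struct dma_fence *fence, u64 timeout_ns)
{
	unsigned int cpu;
	bool migrated;
	u64 t0, now;

	/* Sample the cpu-local clock and remember which cpu it belongs to. */
	preempt_disable();
	cpu = smp_processor_id();
	t0 = local_clock();
	preempt_enable();

	for (;;) {
		if (dma_fence_is_signaled(fence))
			return true;

		preempt_disable();
		now = local_clock();
		migrated = cpu != smp_processor_id();
		preempt_enable();

		/*
		 * A migration makes the two clock samples incomparable and
		 * suggests we have already spun long enough to lose the cpu;
		 * either way, give up and let the caller sleep instead.
		 */
		if (migrated || now - t0 > timeout_ns)
			return dma_fence_is_signaled(fence);

		cpu_relax();
	}
}
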
1825 * We must never wait on the GPU while holding a lock as we in i915_request_wait()
1826 * may need to perform a GPU reset. So while we don't need to in i915_request_wait()
1827 * serialise wait/reset with an explicit lock, we do want in i915_request_wait()
1835 * We may use a rather large value here to offset the penalty of in i915_request_wait()
1842 * short wait, we first spin to see if the request would have completed in i915_request_wait()
1845 * We need up to 5us to enable the irq, and up to 20us to hide the in i915_request_wait()
1853 * duration, which we currently lack. in i915_request_wait()
1865 * We can circumvent that by promoting the GPU frequency to maximum in i915_request_wait()
1866 * before we sleep. This makes the GPU throttle up much more quickly in i915_request_wait()
1881 * We sometimes experience some latency between the HW interrupts and in i915_request_wait()
1888 * If the HW is being lazy, this is the last chance before we go to in i915_request_wait()
1889 * sleep to catch any pending events. We will check periodically in in i915_request_wait()
1984 * The prefix is used to show the queue status, for which we use in i915_request_show()