Lines Matching refs:fences

156    :doc: DMA fences overview
218 * Future fences, used in HWC1 to signal when a buffer isn't used by the display
222 * Proxy fences, proposed to handle &drm_syncobj for which the fence has not yet
225 * Userspace fences or gpu futexes, fine-grained locking within a command buffer
231 batch DMA fences for memory management instead of context preemption DMA
232 fences which get reattached when the compute job is rescheduled.
235 fences and controls when they fire. Mixing indefinite fences with normal
236 in-kernel DMA fences does not work, even when a fallback timeout is included to
242 * Only userspace knows about all dependencies in indefinite fences and when
246 for memory management needs, which means we must support indefinite fences being
247 dependent upon DMA fences. If the kernel also support indefinite fences in the
258 userspace [label="userspace controlled fences"]
273 fences in the kernel. This means:
275 * No future fences, proxy fences or userspace fences imported as DMA fences,
278 * No DMA fences that signal end of batchbuffer for command submission where
287 implications for DMA fences.
291 But memory allocations are not allowed to gate completion of DMA fences, which
292 means any workload using recoverable page faults cannot use DMA fences for
293 synchronization. Synchronization fences controlled by userspace must be used
297 Linux rely on DMA fences, which means without an entirely new userspace stack
298 built on top of userspace fences, they cannot benefit from recoverable page
333 requiring DMA fences or jobs requiring page fault handling: This means all DMA
334 fences must complete before a compute job with page fault handling can be
342 fences. This results in a very wide impact on the kernel, since resolving the page fault
348 GPUs do not have any impact. This allows us to keep using DMA fences internally
353 Fences` discussions: Indefinite fences from compute workloads are allowed to
354 depend on DMA fences, but not the other way around. And not even the page fault