Lines Matching refs:fences

147    :doc: DMA fences overview
209 * Future fences, used in HWC1 to signal when a buffer isn't used by the display
213 * Proxy fences, proposed to handle &drm_syncobj for which the fence has not yet
216 * Userspace fences or gpu futexes, fine-grained locking within a command buffer
222 batch DMA fences for memory management instead of context preemption DMA
223 fences which get reattached when the compute job is rescheduled.
226 fences and controls when they fire. Mixing indefinite fences with normal
227 in-kernel DMA fences does not work, even when a fallback timeout is included to
233 * Only userspace knows about all dependencies in indefinite fences and when
237 for memory management needs, which means we must support indefinite fences being
238 dependent upon DMA fences. If the kernel also supports indefinite fences in the
249 userspace [label="userspace controlled fences"]
264 fences in the kernel. This means:
266 * No future fences, proxy fences or userspace fences imported as DMA fences,
269 * No DMA fences that signal end of batchbuffer for command submission where
278 implications for DMA fences.
282 But memory allocations are not allowed to gate completion of DMA fences, which
283 means any workload using recoverable page faults cannot use DMA fences for
284 synchronization. Synchronization fences controlled by userspace must be used
288 Linux rely on DMA fences, which means without an entirely new userspace stack
289 built on top of userspace fences, they cannot benefit from recoverable page
324 requiring DMA fences or jobs requiring page fault handling: This means all DMA
325 fences must complete before a compute job with page fault handling can be
333 fences. This results in a very wide impact on the kernel, since resolving the page
339 GPUs do not have any impact. This allows us to keep using DMA fences internally
344 Fences` discussions: Indefinite fences from compute workloads are allowed to
345 depend on DMA fences, but not the other way around. And not even the page fault