Lines Matching full:vblank
75 struct drm_vblank_crtc *vblank = &dev->vblank[drm_crtc_index(&crtc->base)]; in intel_crtc_get_vblank_counter() local
80 if (!vblank->max_vblank_count) in intel_crtc_get_vblank_counter()
128 * requires vblank support on some platforms/outputs. in intel_crtc_vblank_on()
140 * requires vblank support on some platforms/outputs. in intel_crtc_vblank_off()
285 /* no hw vblank counter */
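The `!vblank->max_vblank_count` check above covers platforms whose pipes have no usable hardware frame counter; there the counter has to be cooked in software instead of read from a register. A minimal sketch of that fallback, with hypothetical names (`fake_vblank_crtc`, `sw_count`, etc. are illustrative, not the i915 code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: max_vblank_count == 0 means "no hw vblank
 * counter", so reads fall back to a counter bumped from the vblank
 * interrupt. With a hw counter, the raw value wraps at max_vblank_count. */
struct fake_vblank_crtc {
	uint32_t max_vblank_count;	/* 0 => no hw counter */
	uint32_t sw_count;		/* incremented per vblank irq */
	uint32_t hw_count;		/* what the hw register would read */
};

static uint32_t get_vblank_counter(struct fake_vblank_crtc *vblank)
{
	if (!vblank->max_vblank_count)
		return vblank->sw_count;	/* no hw vblank counter */

	return vblank->hw_count % (vblank->max_vblank_count + 1);
}
```

The software path is only as accurate as the interrupt that feeds it, which is one reason vblank interrupts must stay enabled on such platforms.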
422 * Interrupt latency is critical for getting the vblank in intel_crtc_vblank_work_init()
423 * work executed as early as possible during the vblank. in intel_crtc_vblank_work_init()
470 * atomically regarding vblank. If the next vblank happens within
471 * the next 100 us, this function waits until the vblank passes.
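The evasion decision described above can be sketched as a scanline comparison: if the current scanline already sits inside the window just before vblank start, the update must wait for the vblank to pass. The window size would be derived from the ~100 us budget at the current line rate; the names and arithmetic here are illustrative assumptions, not the i915 implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the vblank-evasion check: returns true when the
 * scanline is within window_lines of vblank start (or past it), i.e. an
 * update started now could straddle the register latch point. */
static bool need_vblank_evade(int scanline, int vblank_start, int window_lines)
{
	return scanline >= vblank_start - window_lines;
}
```

When this returns true, the caller would sleep until the scanline wraps past the end of vblank, guaranteeing the whole register update lands within a single frame.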
562 * increment approx. 1/3 of a scanline before start of vblank. in intel_pipe_update_start()
563 * The registers still get latched at start of vblank however. in intel_pipe_update_start()
565 * line of vblank (since not the whole line is actually in in intel_pipe_update_start()
566 * vblank). And unfortunately we can't use the interrupt to in intel_pipe_update_start()
625 * before a vblank.
651 /* We're still in the vblank-evade critical section, this can't race. in intel_pipe_update_end()
652 * Would be slightly nice to just grab the vblank count and arm the in intel_pipe_update_end()
672 * Send VRR Push to terminate Vblank. If we are already in vblank in intel_pipe_update_end()
674 * otherwise the push would immediately terminate the vblank and in intel_pipe_update_end()
678 * There is a tiny race here (iff vblank evasion failed us) where in intel_pipe_update_end()
679 * we might sample the frame counter just before vmax vblank start in intel_pipe_update_end()
683 * vblank start instead of vmax vblank start. in intel_pipe_update_end()