
.. SPDX-License-Identifier: GPL-2.0

================================================================
Intel Image Processing Unit 3 (IPU3) Imaging Unit (ImgU) driver
================================================================
The CIO2 device receives the raw Bayer data from the sensors and outputs the
frames in a format that is specific to the IPU3 (for consumption by the IPU3
ImgU). The CIO2 driver is available as drivers/media/pci/intel/ipu3/ipu3-cio2*.
Both of the drivers implement V4L2, Media Controller and V4L2 sub-device
interfaces. The CIO2 driver supports camera sensors connected to the CIO2 MIPI
CSI-2 interfaces through V4L2 sub-device sensor drivers.
CIO2
====

The CIO2 driver provides a V4L2 subdev interface to the user space. There is a
video node for each CSI-2 receiver, with a single media controller interface
for the entire device.
The CIO2 contains four independent capture channels, each with its own MIPI
CSI-2 receiver and DMA engine. Each channel is modelled as a V4L2 sub-device
exposed to userspace as a V4L2 sub-device node and has two pads:
.. flat-table::

    * - pad
      - direction
      - purpose

    * - 0
      - sink
      - MIPI CSI-2 input, connected to the sensor subdev

    * - 1
      - source
      - Raw video capture, connected to the V4L2 video interface
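The resulting topology, including these pads and the links towards the sensor
and the DMA video nodes, can be inspected with media-ctl's topology dump. In
the sketch below, /dev/media0 is assumed to be the CIO2 media device.

.. code-block:: none

    # Print the media controller topology: the "ipu3-csi2 N" sub-devices,
    # their sink/source pads and the connected sensor and video nodes
    media-ctl -d /dev/media0 -p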
Capturing frames in raw Bayer format
------------------------------------
Image processing using the IPU3 ImgU requires tools such as raw2pnm and yavta,
due to the following requirements and features specific to the IPU3:

-- The IPU3 CSI2 receiver outputs the captured frames from the sensor in packed
raw Bayer format that is specific to IPU3.

-- Multiple video nodes have to be operated simultaneously.
Let us take the example of an ov5670 sensor connected to CSI2 port 0, for a
2592x1944 image capture.

Using the media controller APIs, the ov5670 sensor is configured to send frames
in packed raw Bayer format to the IPU3 CSI2 receiver.
.. code-block:: none

    # This example assumes /dev/media0 as the CIO2 media device
    export MDEV=/dev/media0

    # and that ov5670 sensor is connected to i2c bus 10 with address 0x36
    export SDEV=$(media-ctl -d $MDEV -e "ov5670 10-0036")

    # Establish the link for the media devices using media-ctl [#f3]_
    media-ctl -d $MDEV -l "ov5670:0 -> ipu3-csi2 0:0[1]"

    # Set the format for the media devices
    media-ctl -d $MDEV -V "ov5670:0 [fmt:SGRBG10/2592x1944]"
    media-ctl -d $MDEV -V "ipu3-csi2 0:0 [fmt:SGRBG10/2592x1944]"
    media-ctl -d $MDEV -V "ipu3-csi2 0:1 [fmt:SGRBG10/2592x1944]"
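Optionally, the active formats on the CSI-2 receiver pads can be read back to
verify the configuration. This assumes the installed media-ctl version provides
the --get-v4l2 option.

.. code-block:: none

    media-ctl -d $MDEV --get-v4l2 "ipu3-csi2 0:0"
    media-ctl -d $MDEV --get-v4l2 "ipu3-csi2 0:1"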
Once the media pipeline is configured, the desired sensor-specific settings
(such as exposure and gain) can be set using the yavta tool, e.g.
.. code-block:: none

    yavta -w "0x009e0903 444" $SDEV
    yavta -w "0x009e0913 1024" $SDEV
    yavta -w "0x009e0911 2046" $SDEV
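The control IDs used above are sensor dependent. One way to discover them, not
used elsewhere in this document, is to list the controls exposed by the sensor
sub-device with v4l2-ctl, together with their ranges and current values.

.. code-block:: none

    # List the V4L2 controls exposed by the ov5670 sensor sub-device
    v4l2-ctl -d $SDEV --list-ctrls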
Once the desired sensor settings are set, frame captures can be done as below.
.. code-block:: none

    yavta --data-prefix -u -c10 -n5 -I -s2592x1944 --file=/tmp/frame-#.bin \
          -f IPU3_SGRBG10 $(media-ctl -d $MDEV -e "ipu3-cio2 0")
The captured frames are available as /tmp/frame-#.bin files.
ImgU
====

The ImgU contains two independent pipes, each modelled as a V4L2 sub-device
exposed to userspace as a V4L2 sub-device node.

Each pipe has two sink pads and three source pads, for the following purposes:
.. flat-table::

    * - pad
      - direction
      - purpose

    * - 0
      - sink
      - Input raw video stream

    * - 1
      - sink
      - Processing parameters

    * - 2
      - source
      - Output processed video stream

    * - 3
      - source
      - Output viewfinder video stream

    * - 4
      - source
      - 3A statistics
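The video device nodes connected to these pads can be resolved by entity name
with media-ctl, in the same way as for the CIO2 above. The entity names below
match those used later in this document.

.. code-block:: none

    media-ctl -d $MDEV -e "ipu3-imgu 0 input"
    media-ctl -d $MDEV -e "ipu3-imgu 0 output"
    media-ctl -d $MDEV -e "ipu3-imgu 0 viewfinder"
    media-ctl -d $MDEV -e "ipu3-imgu 0 3a stat"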
Device operation
----------------
With ImgU, once the input video node ("ipu3-imgu 0/1":0, in
<entity>:<pad-number> format) is queued with a buffer (in packed raw Bayer
format), the ImgU starts processing the buffer and produces the video output in
YUV format and the statistics output on the respective output nodes.
input, output and viewfinder video nodes
----------------------------------------
Details on the raw Bayer format specific to the IPU3 can be found in
:ref:`v4l2-pix-fmt-ipu3-sbggr10`.
Only the multi-planar API is supported. More details can be found at
:ref:`planar-apis`.
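The pixel formats actually advertised by the ImgU video nodes can be listed
with v4l2-ctl. The /dev/video4 (input) and /dev/video5 (main output) paths
below follow the example device numbering used later in this document and may
differ on a given system.

.. code-block:: none

    # ImgU input node: multi-planar output queue, IPU3 packed raw Bayer formats
    v4l2-ctl -d /dev/video4 --list-formats-out

    # ImgU main output node: multi-planar capture queue, e.g. NV12
    v4l2-ctl -d /dev/video5 --list-formats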
Parameters video node
---------------------

The parameters video node receives the ImgU algorithm parameters that are used
to configure how the ImgU algorithms process the image. Details on the
processing parameters specific to the IPU3 can be found in
:ref:`v4l2-meta-fmt-params`.
3A statistics video node
------------------------

The 3A statistics video node is used by the ImgU driver to output the 3A (auto
focus, auto exposure and auto white balance) statistics for the frames being
processed by the ImgU, so that user space applications can use this data to
compute the desired algorithm parameters.
Running mode and firmware binary selection
------------------------------------------
ImgU works based on firmware; the current ImgU firmware supports running two
pipes in time-sharing on a single input frame. Each pipe can run in a certain
mode, "VIDEO" or "STILL": "VIDEO" mode is commonly used for video frame
capture, and "STILL" is used for still frame capture.
A V4L2 control (currently defined in
drivers/staging/media/ipu3/include/uapi/intel-ipu3.h) is used to query and set
the running mode. For buffer queueing there is no difference between the two
modes: the mandatory input and main output nodes should be enabled and buffers
need to be queued, while the statistics and the viewfinder queues are optional.
Processing the image in raw Bayer format
----------------------------------------
The ImgU V4L2 subdevs have to be configured with the media controller APIs to
have all the video nodes set up correctly. Let us take the "ipu3-imgu 0" subdev
as an example.
.. code-block:: none

    media-ctl -d $MDEV -r
    media-ctl -d $MDEV -l '"ipu3-imgu 0 input":0 -> "ipu3-imgu 0":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":2 -> "ipu3-imgu 0 output":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":3 -> "ipu3-imgu 0 viewfinder":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":4 -> "ipu3-imgu 0 3a stat":0[1]'
Also, the pipe mode of the corresponding V4L2 subdev should be set as desired
(e.g. 0 for video mode or 1 for still mode) through the control id 0x009819A1,
as below.

.. code-block:: none

    yavta -w "0x009819A1 1" /dev/v4l-subdev7
Certain hardware blocks in the ImgU pipeline can change the frame resolution by
cropping or scaling. There is also a block which can change the frame
resolution, the YUV Scaler; it is only applicable to the secondary output.
.. kernel-figure:: ipu3_rcb.svg
The Input Feeder gets the Bayer frame data from the sensor; it can enable
cropping of lines and columns from the frame and then store the pixels into the
device's internal pixel buffer, ready to be read out by the following blocks.
The intermediate resolutions can be generated by a specific tool:

https://github.com/intel/intel-ipu3-pipecfg
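To experiment with the tool, the repository can be cloned as usual; this is
plain git usage and not specific to the IPU3 drivers.

.. code-block:: none

    git clone https://github.com/intel/intel-ipu3-pipecfg.git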
Example configuration files can be found at:

https://chromium.googlesource.com/chromiumos/overlays/board-overlays/+/master

under the baseboard-poppy/media-libs/cros-camera-hal-configs-poppy/files/gcss
directory.
The captured raw Bayer frame can then be fed to the ImgU input video node and
processed, for example with the v4l2n tool. In the command below /dev/video4,
/dev/video5, /dev/video6 and /dev/video7 are the input, main output, viewfinder
and 3A statistics video nodes respectively.

.. code-block:: none

    v4l2n --pipe=4 --load=/tmp/frame-#.bin --open=/dev/video4 \
    --fmt=type:VIDEO_OUTPUT_MPLANE,width=2592,height=1944,pixelformat=0X47337069 \
    --reqbufs=type:VIDEO_OUTPUT_MPLANE,count:1 --pipe=1 \
    --output=/tmp/frames.out --open=/dev/video5 \
    --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
    --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=2 \
    --output=/tmp/frames.vf --open=/dev/video6 \
    --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
    --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=3 --open=/dev/video7 \
    --output=/tmp/frames.3A --fmt=type:META_CAPTURE,? \
    --reqbufs=count:1,type:META_CAPTURE --pipe=1,2,3,4 --stream=5
The same processing can also be done with the yavta tool, as below.

.. code-block:: none

    yavta --data-prefix -Bcapture-mplane -c10 -n5 -I -s2592x1944 \
          --file=frame-#.out -f NV12 /dev/video5 & \
    yavta --data-prefix -Bcapture-mplane -c10 -n5 -I -s2592x1944 \
          --file=frame-#.vf -f NV12 /dev/video6 & \
    yavta --data-prefix -Bmeta-capture -c10 -n5 -I \
          --file=frame-#.3a /dev/video7 & \
    yavta --data-prefix -Boutput-mplane -c10 -n5 -I -s2592x1944 \
          --file=/tmp/frame-in.cio2 -f IPU3_SGRBG10 /dev/video4
Converting the raw Bayer image into YUV domain
----------------------------------------------
The main output frames in NV12 format can be converted to the PNM format using
the raw2pnm tool, e.g.

.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.out /tmp/frames.out.ppm

where 2560x1920 is the output resolution, NV12 the video format, followed by
the input frame file and the output PNM file.
Similarly, the viewfinder output frames can be converted as below.

.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.vf /tmp/frames.vf.ppm
User space code that configures and uses the IPU3 is available at:

https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/master/
.. IPU3 ImgU pipeline data flow diagram (kernel-render DOT graph of the
   processing blocks described below).
The ImgU pipeline contains, among others, the following processing blocks.

.. flat-table::

    * - Name
      - Explanation

    * - Optical Black Correction
      - Subtracts a pre-defined black level from the raw sensor data.

    * - Linearization
      - Uses a lookup table to address non-linearity sensor effects.

    * - Lens shading correction
      - Corrects the spatial non-uniformity of the pixel response due to
        optical lens shading, by applying per-pixel gains.

    * - DM
      - Demosaicing converts raw sensor data in Bayer format into an RGB
        (Red, Green, Blue) presentation.

    * - Color Correction
      - Transforms the sensor specific color space to the standard sRGB
        color space.

    * - Gamma correction
      - Provides a basic non-linear tone mapping correction that is applied
        per pixel.

    * - CSC
      - Color space conversion from RGB to a YUV (Y: luma, UV: chroma)
        presentation, done by applying a conversion matrix.

    * - CDS
      - Chroma down sampling; after color space conversion, the UV planes are
        down sampled by a factor of 2 in each direction for YUV 4:2:0.

    * - Y-tone mapping
      - Tone mapping applied to the luma (Y) channel.
.. [#f5] drivers/staging/media/ipu3/include/uapi/intel-ipu3.h

.. [#f3] http://git.ideasonboard.org/?p=media-ctl.git;a=summary