.. SPDX-License-Identifier: GPL-2.0

.. include:: <isonum.txt>

===============================================================
Intel Image Processing Unit 3 (IPU3) Imaging Unit (ImgU) driver
===============================================================

Copyright |copy| 2018 Intel Corporation

Introduction
============

This file documents the Intel IPU3 (3rd generation Image Processing Unit)
Imaging Unit drivers located under drivers/media/pci/intel/ipu3 (CIO2) as well
as under drivers/staging/media/ipu3 (ImgU).

The Intel IPU3 found in certain Kaby Lake (as well as certain Sky Lake)
platforms (U/Y processor lines) is made up of two parts namely the Imaging Unit
(ImgU) and the CIO2 device (MIPI CSI2 receiver).

The CIO2 device receives the raw Bayer data from the sensors and outputs the
frames in a format that is specific to the IPU3 (for consumption by the IPU3
ImgU). The CIO2 driver is available as drivers/media/pci/intel/ipu3/ipu3-cio2*
and is enabled through the CONFIG_VIDEO_IPU3_CIO2 config option.

The Imaging Unit (ImgU) is responsible for processing images captured
by the IPU3 CIO2 device. The ImgU driver sources can be found under the
drivers/staging/media/ipu3 directory. The driver is enabled through the
CONFIG_VIDEO_IPU3_IMGU config option.

The two driver modules are named ipu3_csi2 and ipu3_imgu, respectively.

The drivers have been tested on Kaby Lake platforms (U/Y processor lines).

Both of the drivers implement V4L2, Media Controller and V4L2 sub-device
interfaces. The IPU3 CIO2 driver supports camera sensors connected to the CIO2
MIPI CSI-2 interfaces through V4L2 sub-device sensor drivers.

CIO2
====

The CIO2 is exposed to user space through the V4L2 sub-device interface, with
a video node for each CSI-2 receiver and a single media controller interface
for the entire device.

The CIO2 contains four independent capture channels, each with its own MIPI
CSI-2 receiver and DMA engine. Each channel is modelled as a V4L2 sub-device
exposed to userspace as a V4L2 sub-device node and has two pads:

.. tabularcolumns:: |p{0.8cm}|p{4.0cm}|p{4.0cm}|

.. flat-table::

    * - pad
      - direction
      - purpose

    * - 0
      - sink
      - MIPI CSI-2 input, connected to the sensor subdev

    * - 1
      - source
      - Raw video capture, connected to the V4L2 video interface

The V4L2 video interfaces model the DMA engines. They are exposed to userspace
as V4L2 video device nodes.
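The examples in this document refer to these entities by name. As an
illustration only (the command line examples below use media-ctl -e for the
same lookup), the following minimal C sketch enumerates the entities through
the media controller API; it assumes /dev/media0 as the media device and
omits error handling:

.. code-block:: c

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/media.h>

    int main(void)
    {
        struct media_entity_desc desc;
        int fd = open("/dev/media0", O_RDWR);    /* assumed media device */

        memset(&desc, 0, sizeof(desc));
        desc.id = MEDIA_ENT_ID_FLAG_NEXT;        /* start at the first entity */

        while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &desc) == 0) {
            /* e.g. "ipu3-csi2 0" (sub-device) or "ipu3-cio2 0" (video node) */
            printf("entity %u: %s (%u pads)\n", desc.id, desc.name, desc.pads);
            desc.id |= MEDIA_ENT_ID_FLAG_NEXT;   /* ask for the next entity */
        }
        return 0;
    }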
Capturing frames in raw Bayer format
------------------------------------

The CIO2 MIPI CSI2 receiver is used to capture frames (in packed raw Bayer
format) from the raw sensors connected to the CSI2 ports. The captured frames
are used as input to the ImgU driver.

Image processing using the IPU3 ImgU requires tools such as raw2pnm [#f1]_ and
yavta [#f2]_ due to the following unique requirements and/or features specific
to the IPU3.

- The IPU3 CSI2 receiver outputs the captured frames from the sensor in packed
  raw Bayer format that is specific to the IPU3.

- Multiple video nodes have to be operated simultaneously.

Let us take the example of an ov5670 sensor connected to CSI2 port 0, for a
2592x1944 image capture.

Using the media controller APIs through the media-ctl tool [#f3]_, the ov5670
sensor is configured to send frames in packed raw Bayer format to the IPU3
CSI2 receiver:

.. code-block:: none

    # This example assumes /dev/media0 as the CIO2 media device
    export MDEV=/dev/media0

    # and that the ov5670 sensor is connected to i2c bus 10 with address 0x36
    export SDEV=$(media-ctl -d $MDEV -e "ov5670 10-0036")

    # Establish the link between the sensor and the CSI-2 receiver
    media-ctl -d $MDEV -l '"ov5670 10-0036":0 -> "ipu3-csi2 0":0[1]'

    # Set the format on the sensor and the CSI-2 receiver pads
    media-ctl -d $MDEV -V '"ov5670 10-0036":0 [fmt:SGRBG10/2592x1944]'
    media-ctl -d $MDEV -V '"ipu3-csi2 0":0 [fmt:SGRBG10/2592x1944]'
    media-ctl -d $MDEV -V '"ipu3-csi2 0":1 [fmt:SGRBG10/2592x1944]'

Once the media pipeline is configured, the desired sensor-specific settings
(such as exposure and gain) can be set using the yavta tool, e.g.:

.. code-block:: none

    yavta -w "0x009e0903 444" $SDEV
    yavta -w "0x009e0913 1024" $SDEV
    yavta -w "0x009e0911 2046" $SDEV

Once the desired sensor settings are set, frame captures can be done as below,
e.g.:

.. code-block:: none

    yavta --data-prefix -u -c10 -n5 -I -s2592x1944 --file=/tmp/frame-#.bin \
          -f IPU3_SGRBG10 $(media-ctl -d $MDEV -e "ipu3-cio2 0")

With the above command, 10 frames are captured at 2592x1944 resolution with
the SGRBG10 sensor format, and output in the IPU3_SGRBG10 packed format.

The captured frames are available as /tmp/frame-#.bin files.
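For applications that talk to the V4L2 API directly, the following hedged
sketch shows the format negotiation that the -f IPU3_SGRBG10 option above
performs. The /dev/video0 path is an assumption (the actual node is the one
reported by media-ctl -d $MDEV -e "ipu3-cio2 0"), and the buffer handling
that yavta performs is omitted:

.. code-block:: c

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        struct v4l2_format fmt;
        int fd = open("/dev/video0", O_RDWR);   /* assumed CIO2 capture node */

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;   /* multi-planar API */
        fmt.fmt.pix_mp.width = 2592;
        fmt.fmt.pix_mp.height = 1944;
        /* FourCC 'ip3G': the IPU3-packed 10-bit GRBG Bayer format */
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_IPU3_SGRBG10;
        fmt.fmt.pix_mp.num_planes = 1;

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
            return 1;

        /*
         * The driver fills fmt.fmt.pix_mp.plane_fmt[0].sizeimage; buffers
         * requested with VIDIOC_REQBUFS must be at least this large.
         */
        return 0;
    }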
ImgU
====

The ImgU is represented as two V4L2 subdevs, each of which provides a V4L2
subdev interface to the user space.

Each V4L2 subdev represents a pipe, which can support a maximum of 2 streams.
This helps to support advanced camera features like Continuous View Finder
(CVF) and Snapshot During Video (SDV).

The ImgU contains two independent pipes, each modelled as a V4L2 sub-device
exposed to userspace as a V4L2 sub-device node.

Each pipe has two sink pads and three source pads for the following purposes:

.. tabularcolumns:: |p{0.8cm}|p{4.0cm}|p{4.0cm}|

.. flat-table::

    * - pad
      - direction
      - purpose

    * - 0
      - sink
      - Input raw video stream

    * - 1
      - sink
      - Processing parameters

    * - 2
      - source
      - Output processed video stream

    * - 3
      - source
      - Output viewfinder video stream

    * - 4
      - source
      - 3A statistics

Each pad is connected to a corresponding V4L2 video interface, exposed to
userspace as a V4L2 video device node.

Device operation
----------------

With ImgU, once the input video node ("ipu3-imgu 0/1":0, in
<entity>:<pad-number> format) is queued with a buffer (in packed raw Bayer
format), the ImgU starts processing the buffer and produces the video output
in YUV format and statistics output on the respective output nodes. The driver
is expected to have buffers ready for all of the parameter, output and
statistics nodes when the input video node is queued with a buffer.

At a minimum, all of the input, main output, 3A statistics and viewfinder
video nodes should be enabled for the IPU3 to start image processing.

Each ImgU V4L2 subdev has the following set of video nodes.

input, output and viewfinder video nodes
----------------------------------------

The frames (in packed raw Bayer format specific to the IPU3) received by the
input video node are processed by the IPU3 Imaging Unit and output to 2 video
nodes, each targeting a different purpose (main output and viewfinder output).

Details on the raw Bayer format specific to the IPU3 can be found in
:ref:`v4l2-pix-fmt-ipu3-sbggr10`.

The driver supports the V4L2 Video Capture Interface as defined at
:ref:`devices`.

Only the multi-planar API is supported. More details can be found at
:ref:`planar-apis`.

Parameters video node
---------------------

The parameters video node receives the ImgU algorithm parameters that are used
to configure how the ImgU algorithms process the image.

Details on the processing parameters specific to the IPU3 can be found in
:ref:`v4l2-meta-fmt-params`.

3A statistics video node
------------------------

The 3A statistics video node is used by the ImgU driver to output the 3A (auto
focus, auto exposure and auto white balance) statistics for the frames that
are being processed by the ImgU to user space applications. User space
applications can use this statistics data to compute the desired algorithm
parameters for the ImgU.

Configuring the Intel IPU3
==========================

The IPU3 ImgU pipelines can be configured using the Media Controller, defined
at :ref:`media_controller`.

Firmware binary selection
-------------------------

The firmware binary is selected using the V4L2_CID_INTEL_IPU3_MODE control,
currently defined in drivers/staging/media/ipu3/include/intel-ipu3.h [#f5]_.
"VIDEO" and "STILL" modes are available.

Processing the image in raw Bayer format
----------------------------------------

Configuring ImgU V4L2 subdev for image processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ImgU V4L2 subdevs have to be configured with media controller APIs to have
all the video nodes set up correctly.

Let us take the "ipu3-imgu 0" subdev as an example:

.. code-block:: none

    # Reset all links
    media-ctl -d $MDEV -r

    # Enable the links between the subdev and its video nodes
    media-ctl -d $MDEV -l '"ipu3-imgu 0 input":0 -> "ipu3-imgu 0":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":2 -> "ipu3-imgu 0 output":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":3 -> "ipu3-imgu 0 viewfinder":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":4 -> "ipu3-imgu 0 3a stat":0[1]'

Also the pipe mode of the corresponding V4L2 subdev should be set as desired
(e.g. 0 for video mode or 1 for still mode) through the control id 0x009819a1
as below:

.. code-block:: none

    yavta -w "0x009819A1 1" /dev/v4l-subdev7

RAW Bayer frames go through the following ImgU pipeline HW blocks to have the
processed image output to the DDR memory:

RAW Bayer frame -> Input Feeder -> Bayer Down Scaling (BDS) -> Geometric
Distortion Correction (GDC) -> DDR

The ImgU V4L2 subdev has to be configured with the supported resolutions in
all the above HW blocks, for a given input resolution.

For a given supported resolution for an input frame, the Input Feeder, Bayer
Down Scaling and GDC blocks should be configured with the supported
resolutions. This information can be obtained by looking at the following IPU3
ImgU configuration table:

https://chromium.googlesource.com/chromiumos/overlays/board-overlays/+/master

Under the baseboard-poppy/media-libs/cros-camera-hal-configs-poppy/files/gcss
directory, graph_settings_ov5670.xml can be used as an example.

The following steps prepare the ImgU pipeline for the image processing (a C
sketch of the three calls follows the list).

1. The ImgU V4L2 subdev data format should be set by using the
   VIDIOC_SUBDEV_S_FMT on pad 0, using the GDC width and height obtained
   above.

2. The ImgU V4L2 subdev cropping should be set by using the
   VIDIOC_SUBDEV_S_SELECTION on pad 0, with V4L2_SEL_TGT_CROP as the target,
   using the input feeder height and width.

3. The ImgU V4L2 subdev composing should be set by using the
   VIDIOC_SUBDEV_S_SELECTION on pad 0, with V4L2_SEL_TGT_COMPOSE as the
   target, using the BDS height and width.
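A minimal C sketch of these three calls, using the V4L2 sub-device UAPI and
the ov5670 resolutions from the example that follows, is given below. The
sub-device file descriptor and the media bus code are assumptions to be
checked against the actual media graph and driver:

.. code-block:: c

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>
    #include <linux/v4l2-subdev.h>

    /*
     * fd refers to the "ipu3-imgu 0" sub-device node, /dev/v4l-subdev7 in
     * the yavta example above.
     */
    static int configure_imgu(int fd)
    {
        struct v4l2_subdev_format fmt = {
            .which = V4L2_SUBDEV_FORMAT_ACTIVE,
            .pad = 0,
        };
        struct v4l2_subdev_selection sel = {
            .which = V4L2_SUBDEV_FORMAT_ACTIVE,
            .pad = 0,
        };

        /* 1. data format on pad 0, using the GDC width and height */
        fmt.format.width = 2560;
        fmt.format.height = 1920;
        fmt.format.code = MEDIA_BUS_FMT_FIXED;  /* assumption; check driver */
        if (ioctl(fd, VIDIOC_SUBDEV_S_FMT, &fmt) < 0)
            return -1;

        /* 2. crop rectangle, using the Input Feeder width and height */
        sel.target = V4L2_SEL_TGT_CROP;
        sel.r = (struct v4l2_rect){ .width = 2592, .height = 1944 };
        if (ioctl(fd, VIDIOC_SUBDEV_S_SELECTION, &sel) < 0)
            return -1;

        /* 3. compose rectangle, using the BDS width and height */
        sel.target = V4L2_SEL_TGT_COMPOSE;
        sel.r = (struct v4l2_rect){ .width = 2592, .height = 1944 };
        return ioctl(fd, VIDIOC_SUBDEV_S_SELECTION, &sel);
    }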
For the ov5670 example, for an input frame with a resolution of 2592x1944
(which is input to the ImgU subdev pad 0), the corresponding resolutions
for the Input Feeder, BDS and GDC are 2592x1944, 2592x1944 and 2560x1920
respectively.

Once this is done, the received raw Bayer frames can be input to the ImgU
V4L2 subdev as below, using the open source application v4l2n [#f1]_.

For an image captured at 2592x1944 [#f4]_ resolution, with the desired output
resolution as 2560x1920 and the viewfinder resolution as 2560x1920, the
following v4l2n command can be used. This helps process the raw Bayer frames
and produces the desired results for the main output image and the viewfinder
output, in NV12 format.

.. code-block:: none

    v4l2n --pipe=4 --load=/tmp/frame-#.bin --open=/dev/video4 \
        --fmt=type:VIDEO_OUTPUT_MPLANE,width=2592,height=1944,pixelformat=0X47337069 \
        --reqbufs=type:VIDEO_OUTPUT_MPLANE,count:1 --pipe=1 --output=/tmp/frames.out \
        --open=/dev/video5 \
        --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
        --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=2 --output=/tmp/frames.vf \
        --open=/dev/video6 \
        --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
        --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=3 --open=/dev/video7 \
        --output=/tmp/frames.3A --fmt=type:META_CAPTURE,? \
        --reqbufs=count:1,type:META_CAPTURE --pipe=1,2,3,4 --stream=5

where /dev/video4, /dev/video5, /dev/video6 and /dev/video7 devices point to
the input, output, viewfinder and 3A statistics video nodes respectively, and
0X47337069 is the FourCC value of the packed V4L2_PIX_FMT_IPU3_SGRBG10 format
('ip3G').
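Each buffer dequeued from the 3A statistics node carries a struct
ipu3_uapi_stats_3a, defined in the staging UAPI header. The following hedged
sketch shows the dequeue step only; it assumes /dev/video7 as in the command
above, and omits the REQBUFS/QBUF/STREAMON setup and error handling:

.. code-block:: c

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/videodev2.h>
    #include "intel-ipu3.h"  /* drivers/staging/media/ipu3/include/intel-ipu3.h */

    int main(void)
    {
        struct v4l2_buffer buf;
        struct ipu3_uapi_stats_3a *stats;
        int fd = open("/dev/video7", O_RDWR);   /* assumed 3A stats node */

        /* ... VIDIOC_REQBUFS, VIDIOC_QBUF and VIDIOC_STREAMON omitted ... */

        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_META_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
            return 1;

        stats = mmap(NULL, buf.length, PROT_READ, MAP_SHARED, fd, buf.m.offset);
        /*
         * The AWB, AF and AWB_FR raw buffers inside *stats feed the user
         * space 3A algorithms; requeue the buffer with VIDIOC_QBUF when done.
         */
        munmap(stats, buf.length);
        return 0;
    }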
Converting the raw Bayer image into YUV domain
----------------------------------------------

The processed images after the above step can be converted to the YUV domain
as below.

Main output frames
~~~~~~~~~~~~~~~~~~

.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.out /tmp/frames.out.ppm

where 2560x1920 is the output resolution, NV12 is the video format, followed
by the input frame and the output PNM file.

Viewfinder output frames
~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.vf /tmp/frames.vf.ppm

where 2560x1920 is the output resolution, NV12 is the video format, followed
by the input frame and the output PNM file.

Example user space code for IPU3
================================

User space code that configures and uses the IPU3 is available here:

https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/master/

The source can be located under the hal/intel directory.

Overview of IPU3 pipeline
=========================

The IPU3 pipeline has a number of image processing stages, each of which takes
a set of parameters as input. The major stages of the pipeline are shown here:

.. kernel-render:: DOT
   :alt: IPU3 ImgU Pipeline
   :caption: IPU3 ImgU Pipeline Diagram

   digraph "IPU3 ImgU" {
       node [shape=box]
       splines="ortho"
       rankdir="LR"

       a [label="Raw pixels"]
       b [label="Bayer Downscaling"]
       c [label="Optical Black Correction"]
       d [label="Linearization"]
       e [label="Lens Shading Correction"]
       f [label="White Balance / Exposure / Focus Apply"]
       g [label="Bayer Noise Reduction"]
       h [label="ANR"]
       i [label="Demosaicing"]
       j [label="Color Correction Matrix"]
       k [label="Gamma correction"]
       l [label="Color Space Conversion"]
       m [label="Chroma Down Scaling"]
       n [label="Chromatic Noise Reduction"]
       o [label="Total Color Correction"]
       p [label="XNR3"]
       q [label="TNR"]
       r [label="DDR"]

       { rank=same; a -> b -> c -> d -> e -> f }
       { rank=same; g -> h -> i -> j -> k -> l }
       { rank=same; m -> n -> o -> p -> q -> r }

       a -> g -> m [style=invis, weight=10]

       f -> g
       l -> m
   }

The table below presents a description of the above algorithms.

======================== =======================================================
Name                     Description
======================== =======================================================
Optical Black Correction Optical Black Correction block subtracts a pre-defined
                         value from the respective pixel values to obtain better
                         image quality.
                         Defined in :c:type:`ipu3_uapi_obgrid_param`.
Linearization            This algo block uses linearization parameters to
                         address non-linearity sensor effects. The lookup table
                         is defined in
                         :c:type:`ipu3_uapi_isp_lin_vmem_params`.
SHD                      Lens shading correction is used to correct spatial
                         non-uniformity of the pixel response due to optical
                         lens shading. This is done by applying a different gain
                         for each pixel. The gain, black level etc are
                         configured in :c:type:`ipu3_uapi_shd_config_static`.
BNR                      Bayer noise reduction block removes image noise by
                         applying a bilateral filter.
                         See :c:type:`ipu3_uapi_bnr_static_config` for details.
ANR                      Advanced Noise Reduction is a block based algorithm
                         that performs noise reduction in the Bayer domain. The
                         convolution matrix etc can be found in
                         :c:type:`ipu3_uapi_anr_config`.
DM                       Demosaicing converts raw sensor data in Bayer format
                         into RGB (Red, Green, Blue) presentation. Then add
                         outputs of estimation of Y channel for following stream
                         processing by Firmware. The struct is defined as
                         :c:type:`ipu3_uapi_dm_config`.
Color Correction         Color Correction algo transforms sensor specific color
                         space to the standard "sRGB" color space. This is done
                         by applying 3x3 matrix defined in
                         :c:type:`ipu3_uapi_ccm_mat_config`.
Gamma correction         Gamma correction :c:type:`ipu3_uapi_gamma_config` is a
                         basic non-linear tone mapping correction that is
                         applied per pixel for each pixel component.
CSC                      Color space conversion transforms each pixel from the
                         RGB primary presentation to YUV (Y: brightness,
                         UV: chrominance) presentation. This is done by applying
                         a 3x3 matrix defined in
                         :c:type:`ipu3_uapi_csc_mat_config`
CDS                      Chroma down sampling
                         After the CSC is performed, the Chroma Down Sampling
                         is applied for a UV plane down sampling by a factor
                         of 2 in each direction for YUV 4:2:0 using a 4x2
                         configurable filter :c:type:`ipu3_uapi_cds_params`.
CHNR                     Chroma noise reduction
                         This block processes only the chrominance pixels and
                         performs noise reduction by cleaning the high
                         frequency noise.
                         See struct :c:type:`ipu3_uapi_yuvp1_chnr_config`.
TCC                      Total color correction as defined in struct
                         :c:type:`ipu3_uapi_yuvp2_tcc_static_config`.
XNR3                     eXtreme Noise Reduction V3 is the third revision of
                         noise reduction algorithm used to improve image
                         quality. This removes the low frequency noise in the
                         captured image. Two related structs are being defined,
                         :c:type:`ipu3_uapi_isp_xnr3_params` for ISP data memory
                         and :c:type:`ipu3_uapi_isp_xnr3_vmem_params` for vector
                         memory.
TNR                      Temporal Noise Reduction block compares successive
                         frames in time to remove anomalies / noise in pixel
                         values. :c:type:`ipu3_uapi_isp_tnr3_vmem_params` and
                         :c:type:`ipu3_uapi_isp_tnr3_params` are defined for ISP
                         vector and data memory respectively.
======================== =======================================================

Other often encountered acronyms not listed in the above table:

   ACC
      Accelerator cluster
   AWB_FR
      Auto white balance filter response statistics
   BDS
      Bayer downscaler parameters
   CCM
      Color correction matrix coefficients
   IEFd
      Image enhancement filter directed
   Obgrid
      Optical black level compensation
   OSYS
      Output system configuration
   ROI
      Region of interest
   YDS
      Y down sampling
   YTM
      Y-tone mapping

A few stages of the pipeline will be executed by firmware running on the ISP
processor, while many others will use a set of fixed hardware blocks also
called accelerator cluster (ACC) to crunch pixel data and produce statistics.

ACC parameters of individual algorithms, as defined by
:c:type:`ipu3_uapi_acc_param`, can be chosen to be applied by the user
space through struct :c:type:`ipu3_uapi_flags` embedded in the
:c:type:`ipu3_uapi_params` structure. For parameters that are configured as
not enabled by the user space, the corresponding structs are ignored by the
driver, in which case the existing configuration of the algorithm will be
preserved.
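The following minimal sketch illustrates that contract: a parameter buffer
with all 'use' flags cleared is valid and leaves every algorithm untouched.
The /dev/video8 path is an assumption for the parameters video node, and the
buffer queueing is only indicated in comments:

.. code-block:: c

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>
    #include "intel-ipu3.h"  /* drivers/staging/media/ipu3/include/intel-ipu3.h */

    int main(void)
    {
        struct v4l2_format fmt;
        struct ipu3_uapi_params params;
        int fd = open("/dev/video8", O_RDWR);   /* assumed parameters node */

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_META_OUTPUT;
        fmt.fmt.meta.dataformat = V4L2_META_FMT_IPU3_PARAMS;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
            return 1;

        /* all 'use' flags zeroed: every algorithm keeps its current setup */
        memset(&params, 0, sizeof(params));
        /*
         * To update one block, fill its member (e.g. params.obgrid_param)
         * and set the matching bit in params.use (struct ipu3_uapi_flags),
         * then queue the buffer with VIDIOC_QBUF.
         */
        return 0;
    }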
References
==========

.. [#f5] drivers/staging/media/ipu3/include/intel-ipu3.h

.. [#f1] https://github.com/intel/nvt

.. [#f2] http://git.ideasonboard.org/yavta.git

.. [#f3] http://git.ideasonboard.org/?p=media-ctl.git;a=summary

.. [#f4] ImgU limitation requires an additional 16x16 for all input resolutions