.. SPDX-License-Identifier: GPL-2.0

i.MX Video Capture Driver
=========================

Introduction
------------

The Freescale i.MX5/6 contains an Image Processing Unit (IPU), which
handles the flow of image frames to and from capture devices and
display devices.

For image capture, the IPU contains the following internal subunits:

- Image DMA Controller (IDMAC)
- Camera Serial Interface (CSI)
- Image Converter (IC)
- Sensor Multi-FIFO Controller (SMFC)
- Image Rotator (IRT)
- Video De-Interlacing or Combining Block (VDIC)

The IDMAC is the DMA controller for transfer of image frames to and from
memory. Various dedicated DMA channels exist for both video capture and
display paths. During transfer, the IDMAC is also capable of vertical
image flip, 8x8 block transfer (see IRT description), pixel component
re-ordering (for example UYVY to YUYV) within the same colorspace, and
packed <--> planar conversion. The IDMAC can also perform simple
de-interlacing by interweaving even and odd lines during transfer
(without motion compensation, which requires the VDIC).

The CSI is the backend capture unit that interfaces directly with
camera sensors over Parallel, BT.656/1120, and MIPI CSI-2 buses.

The IC handles color-space conversion, resizing (downscaling and
upscaling), horizontal flip, and 90/270 degree rotation operations.

There are three independent "tasks" within the IC that can carry out
conversions concurrently: pre-process encoding, pre-process viewfinder,
and post-processing. Within each task, conversions are split into three
sections: a downsizing section, a main section (upsizing, flip,
colorspace conversion, and graphics plane combining), and a rotation
section.

The IPU time-shares the IC task operations. The time-slice granularity
is one burst of eight pixels in the downsizing section, one image line
in the main processing section, and one image frame in the rotation
section.

The SMFC is composed of four independent FIFOs, each of which can
transfer captured frames from sensors directly to memory concurrently,
via four IDMAC channels.

The IRT carries out 90 and 270 degree image rotation operations. The
rotation operation is carried out on 8x8 pixel blocks at a time. This
operation is supported by the IDMAC, which handles the 8x8 block
transfer along with block reordering, in coordination with vertical
flip.

The VDIC handles the conversion of interlaced video to progressive, with
support for different motion compensation modes (low, medium, and high
motion). The deinterlaced output frames from the VDIC can be sent to the
IC pre-process viewfinder task for further conversions. The VDIC also
contains a Combiner that combines two image planes, with alpha blending
and color keying.

In addition to the IPU internal subunits, there are two units outside
the IPU that are also involved in video capture on i.MX:

- MIPI CSI-2 Receiver for camera sensors with the MIPI CSI-2 bus
  interface. This is a Synopsys DesignWare core.
- Two video multiplexers for selecting among multiple sensor inputs
  to send to a CSI.

For more info, refer to the latest versions of the i.MX5/6 reference
manuals [#f1]_ and [#f2]_.


Features
--------

Some of the features of this driver include:

- Many different pipelines can be configured via the media controller
  API, corresponding to the hardware video capture pipelines supported
  in the i.MX.

- Supports parallel, BT.656, and MIPI CSI-2 interfaces.

- Concurrent independent streams, by configuring pipelines to multiple
  video capture interfaces using independent entities.

- Scaling, color-space conversion, horizontal and vertical flip, and
  image rotation via IC task subdevs.

- Many pixel formats supported (RGB, packed and planar YUV, partial
  planar YUV).

- The VDIC subdev supports motion compensated de-interlacing, with three
  motion compensation modes: low, medium, and high motion. Pipelines are
  defined that allow sending frames to the VDIC subdev directly from the
  CSI. Support for sending frames to the VDIC from memory buffers, via
  an output or mem2mem device, is planned for the future.

- Includes a Frame Interval Monitor (FIM) that can correct vertical sync
  problems with the ADV718x video decoders.
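
The entities that make up these pipelines are described in the next
section. Before configuring a pipeline, the media graph exposed by the
driver can be inspected with media-ctl; a typical invocation, assuming
the i.MX media device is registered as /dev/media0, is:

.. code-block:: none

   # Print the entities, pads, and links of the i.MX media graph
   media-ctl -d /dev/media0 -p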


Entities
--------

imx6-mipi-csi2
--------------

This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
channel outputs. Multiple source pads can be enabled to independently
stream from multiple virtual channels.

This entity actually consists of two sub-blocks. One is the MIPI CSI-2
core, a Synopsys DesignWare core. The other sub-block is a "CSI-2 to
IPU gasket". The gasket acts as a demultiplexer of the four virtual
channel streams, providing four separate parallel buses, one per
virtual channel, that are routed to CSIs or video multiplexers as
described below.

On i.MX6 solo/dual-lite, all four virtual channel buses are routed to
two video multiplexers. Both CSI0 and CSI1 can receive any virtual
channel, as selected by the video multiplexers.

On i.MX6 Quad, virtual channel 0 is routed to IPU1-CSI0 (after being
selected by a video mux), virtual channels 1 and 2 are hard-wired to
IPU1-CSI1 and IPU2-CSI0, respectively, and virtual channel 3 is routed
to IPU2-CSI1 (again selected by a video mux).

ipuX_csiY_mux
-------------

These are the video multiplexers. They have two or more sink pads to
select from either camera sensors with a parallel interface, or from
MIPI CSI-2 virtual channels from the imx6-mipi-csi2 entity. They have a
single source pad that routes to a CSI (ipuX_csiY entities).

On i.MX6 solo/dual-lite, there are two video mux entities. One sits
in front of IPU1-CSI0 to select between a parallel sensor and any of
the four MIPI CSI-2 virtual channels (a total of five sink pads). The
other mux sits in front of IPU1-CSI1, and again has five sink pads to
select between a parallel sensor and any of the four MIPI CSI-2 virtual
channels.

On i.MX6 Quad, there are two video mux entities. One sits in front of
IPU1-CSI0 to select between a parallel sensor and MIPI CSI-2 virtual
channel 0 (two sink pads). The other mux sits in front of IPU2-CSI1 to
select between a parallel sensor and MIPI CSI-2 virtual channel 3 (two
sink pads).

ipuX_csiY
---------

These are the CSI entities. They have a single sink pad receiving from
either a video mux or from a MIPI CSI-2 virtual channel as described
above.

This entity has two source pads. The first source pad can link directly
to the ipuX_vdic entity or the ipuX_ic_prp entity, using hardware links
that require no IDMAC memory buffer transfer.

When the direct source pad is routed to the ipuX_ic_prp entity, frames
from the CSI can be processed by one or both of the IC pre-processing
tasks.

When the direct source pad is routed to the ipuX_vdic entity, the VDIC
will carry out motion-compensated de-interlacing using "high motion"
mode (see the description of the ipuX_vdic entity).

The second source pad sends video frames directly to memory buffers
via the SMFC and an IDMAC channel, bypassing IC pre-processing. This
source pad is routed to a capture device node, with a node name of the
format "ipuX_csiY capture".

Note that since the IDMAC source pad makes use of an IDMAC channel,
pixel reordering within the same colorspace can be carried out by the
IDMAC channel. For example, if the CSI sink pad is receiving in UYVY
order, the capture device linked to the IDMAC source pad can capture
in YUYV order. Also, if the CSI sink pad is receiving a packed YUV
format, the capture device can capture a planar YUV format such as
YUV420.
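
To make the last point concrete, a planar YUV420 capture format can be
requested with v4l2-ctl while the CSI receives packed UYVY. Here the
"ipuX_csiY capture" node is assumed to be /dev/video0, and the 640x480
frame size is only illustrative:

.. code-block:: none

   # List the pixel formats offered by the capture device node
   v4l2-ctl -d /dev/video0 --list-formats
   # Request planar YUV420 ('YU12') at the capture device node
   v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=YU12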

The IDMAC channel at the IDMAC source pad also supports simple
interweaving without motion compensation, which is activated if the
source pad's field type is sequential top-bottom or bottom-top, and the
requested capture interface field type is set to interlaced (t-b, b-t,
or unqualified interlaced). The capture interface will enforce the same
field order as the source pad field order (interlaced-bt if the source
pad is seq-bt, interlaced-tb if the source pad is seq-tb).

This subdev can generate the following event when enabling the second
IDMAC source pad:

- V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR

The user application can subscribe to this event from the ipuX_csiY
subdev node. This event is generated by the Frame Interval Monitor
(see below for more on the FIM).

Cropping in ipuX_csiY
---------------------

The CSI supports cropping the incoming raw sensor frames. This is
implemented in the ipuX_csiY entities at the sink pad, using the
crop selection subdev API.

The CSI also supports fixed divide-by-two downscaling, independently in
width and height. This is implemented in the ipuX_csiY entities at
the sink pad, using the compose selection subdev API.

The output rectangle at the ipuX_csiY source pad is the same as
the compose rectangle at the sink pad. The source pad rectangle
therefore cannot be negotiated directly; it must be set using the
compose selection API at the sink pad if /2 downscaling is desired,
otherwise the source pad rectangle is equal to the incoming (cropped)
rectangle.

As an example of crop and /2 downscale, the following crops a
1280x960 input frame to 640x480, and then /2 downscales in both
dimensions to 320x240 (this assumes ipu1_csi0 is linked to
ipu1_csi0_mux):

.. code-block:: none

   media-ctl -V "'ipu1_csi0_mux':2[fmt:UYVY2X8/1280x960]"
   media-ctl -V "'ipu1_csi0':0[crop:(0,0)/640x480]"
   media-ctl -V "'ipu1_csi0':0[compose:(0,0)/320x240]"

Frame Skipping in ipuX_csiY
---------------------------

The CSI supports frame rate decimation via frame skipping. Frame
rate decimation is specified by setting the frame intervals at the
sink and source pads. The ipuX_csiY entity then applies the best
frame skip setting to the CSI to achieve the desired frame rate
at the source pad.

The following example reduces an assumed incoming 60 Hz frame
rate by half at the IDMAC output source pad:

.. code-block:: none

   media-ctl -V "'ipu1_csi0':0[fmt:UYVY2X8/640x480@1/60]"
   media-ctl -V "'ipu1_csi0':2[fmt:UYVY2X8/640x480@1/30]"

Frame Interval Monitor in ipuX_csiY
-----------------------------------

The adv718x decoders can occasionally send corrupt fields during
NTSC/PAL signal re-sync (too few or too many video lines). When this
happens, the IPU triggers a mechanism to re-establish vertical sync by
adding 1 dummy line every frame, which causes a rolling effect from
image to image, and can last a long time before a stable image is
recovered. Sometimes the mechanism does not work at all, causing a
permanent split image (one frame contains lines from two consecutive
captured images).

From experiment it was found that during image rolling, the frame
intervals (elapsed time between two EOF's) drop below the nominal
value for the current standard by about one video line time (60 usec),
and remain at that value until rolling stops.

While the reason for this observation isn't known (the IPU dummy
line mechanism should show an increase in the intervals by 1 line
time every frame, not a fixed value), we can use it to detect the
corrupt fields using a frame interval monitor. If the FIM detects a
bad frame interval, the ipuX_csiY subdev will send the event
V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR. Userland can register for the
FIM event notification on the ipuX_csiY subdev device node, and issue
a streaming restart when this event is received to correct the
rolling/split image.

The ipuX_csiY subdev includes custom controls to tune the FIM. If one
of these controls is changed during streaming, the FIM will be reset
and will continue with the new settings.

- V4L2_CID_IMX_FIM_ENABLE

Enable/disable the FIM.

- V4L2_CID_IMX_FIM_NUM

How many frame interval measurements to average before comparing against
the nominal frame interval reported by the sensor. This can reduce noise
caused by interrupt latency.

- V4L2_CID_IMX_FIM_TOLERANCE_MIN

If the averaged intervals fall outside the nominal interval by this
amount, in microseconds, the V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR event
is sent.

- V4L2_CID_IMX_FIM_TOLERANCE_MAX

If any interval errors are higher than this value, in microseconds,
those samples are discarded and do not enter into the average. This can
be used to discard spuriously large interval errors that might be due
to interrupt latency from high system load.

- V4L2_CID_IMX_FIM_NUM_SKIP

How many frames to skip after a FIM reset or stream restart before the
FIM begins to average intervals.

- V4L2_CID_IMX_FIM_ICAP_CHANNEL
- V4L2_CID_IMX_FIM_ICAP_EDGE

These controls configure an input capture channel as the method for
measuring frame intervals. This is superior to the default method of
measuring frame intervals via the EOF interrupt, since it is not subject
to uncertainty errors introduced by interrupt latency.

Input capture requires hardware support. A VSYNC signal must be routed
to one of the i.MX6 input capture channel pads.

V4L2_CID_IMX_FIM_ICAP_CHANNEL configures which i.MX6 input capture
channel to use. This must be 0 or 1.

V4L2_CID_IMX_FIM_ICAP_EDGE configures which signal edge will trigger
input capture events. By default the input capture method is disabled
with a value of IRQ_TYPE_NONE. Set this control to IRQ_TYPE_EDGE_RISING,
IRQ_TYPE_EDGE_FALLING, or IRQ_TYPE_EDGE_BOTH to enable input capture,
triggered on the given signal edge(s).

When input capture is disabled, frame intervals will be measured via
the EOF interrupt.
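
The following is a minimal sketch of a userland FIM event listener. It
assumes the ipu1_csi0 subdev is registered as /dev/v4l-subdev1 and that
the V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR definition is available from
the imx-media UAPI header (see the file list at the end of this
document). A real application would stop and restart streaming when the
event arrives:

.. code-block:: c

   #include <fcntl.h>
   #include <poll.h>
   #include <stdio.h>
   #include <sys/ioctl.h>
   #include <linux/videodev2.h>
   #include <linux/imx-media.h>

   int main(void)
   {
       /* assumed subdev node for ipu1_csi0; adjust for the actual system */
       int fd = open("/dev/v4l-subdev1", O_RDWR);
       struct v4l2_event_subscription sub = {
           .type = V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR,
       };
       struct pollfd pfd = { .fd = fd, .events = POLLPRI };
       struct v4l2_event ev;

       if (fd < 0 || ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0)
           return 1;

       /* V4L2 events are signaled as exceptions (POLLPRI) */
       while (poll(&pfd, 1, -1) > 0) {
           if (ioctl(fd, VIDIOC_DQEVENT, &ev) == 0 &&
               ev.type == V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR)
               fprintf(stderr, "FIM error: restart streaming\n");
       }
       return 0;
   }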


ipuX_vdic
---------

The VDIC carries out motion compensated de-interlacing, with three
motion compensation modes: low, medium, and high motion. The mode is
specified with the menu control V4L2_CID_DEINTERLACING_MODE. The VDIC
has two sink pads and a single source pad.

The direct sink pad receives from an ipuX_csiY direct pad. With this
link the VDIC can only operate in high motion mode.

When the IDMAC sink pad is activated, it receives from an output
or mem2mem device node. With this pipeline, the VDIC can also operate
in low and medium modes, because these modes require receiving
frames from memory buffers. Note that an output or mem2mem device
is not implemented yet, so this sink pad currently has no links.

The source pad routes to the IC pre-processing entity ipuX_ic_prp.

ipuX_ic_prp
-----------

This is the IC pre-processing entity. It acts as a router, routing
data from its sink pad to one or both of its source pads.

This entity has a single sink pad. The sink pad can receive from the
ipuX_csiY direct pad, or from ipuX_vdic.

This entity has two source pads. One source pad routes to the
pre-process encode task entity (ipuX_ic_prpenc), the other to the
pre-process viewfinder task entity (ipuX_ic_prpvf). Both source pads
can be activated at the same time if the sink pad is receiving from
ipuX_csiY. Only the source pad to the pre-process viewfinder task entity
can be activated if the sink pad is receiving from ipuX_vdic (frames
from the VDIC can only be processed by the pre-process viewfinder task).

ipuX_ic_prpenc
--------------

This is the IC pre-processing encode entity. It has a single sink
pad from ipuX_ic_prp, and a single source pad. The source pad is
routed to a capture device node, with a node name of the format
"ipuX_ic_prpenc capture".

This entity performs the IC pre-process encode task operations:
color-space conversion, resizing (downscaling and upscaling),
horizontal and vertical flip, and 90/270 degree rotation. Flip
and rotation are provided via standard V4L2 controls.

Like the ipuX_csiY IDMAC source, this entity also supports simple
de-interlacing without motion compensation, and pixel reordering.

ipuX_ic_prpvf
-------------

This is the IC pre-processing viewfinder entity. It has a single sink
pad from ipuX_ic_prp, and a single source pad. The source pad is routed
to a capture device node, with a node name of the format
"ipuX_ic_prpvf capture".

This entity is identical in operation to ipuX_ic_prpenc, with the same
resizing and CSC operations and flip/rotation controls. It will receive
and process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is
receiving from ipuX_vdic.

Like the ipuX_csiY IDMAC source, this entity supports simple
interweaving without motion compensation. However, note that if the
ipuX_vdic is included in the pipeline (ipuX_ic_prp is receiving from
ipuX_vdic), it is not possible to use interweaving in ipuX_ic_prpvf,
since the ipuX_vdic has already carried out de-interlacing (with
motion compensation) and therefore the field type output from
ipuX_vdic can only be none (progressive).
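
The VDIC motion compensation mode and the IC task flip/rotation
controls described above are ordinary V4L2 controls, so they can be set
with v4l2-ctl on the corresponding subdev nodes (or, as noted in the
Usage Notes below, on the capture device node that inherits them). The
subdev node numbers, the control names as reported by v4l2-ctl, and the
menu index for high motion mode are system-dependent; the values below
are only an illustration:

.. code-block:: none

   # List controls and menu entries on the VDIC subdev (node number assumed)
   v4l2-ctl -d /dev/v4l-subdev4 --list-ctrls-menus
   # Select the "high motion" deinterlacing mode (menu index assumed to be 2)
   v4l2-ctl -d /dev/v4l-subdev4 --set-ctrl=deinterlacing_mode=2
   # Enable horizontal flip and 90 degree rotation on ipu1_ic_prpvf (node assumed)
   v4l2-ctl -d /dev/v4l-subdev6 --set-ctrl=horizontal_flip=1,rotate=90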

Capture Pipelines
-----------------

The following sections describe the various use-cases supported by the
pipelines.

The links shown do not include the backend sensor, video mux, or MIPI
CSI-2 receiver links, since those depend on the type of sensor interface
(parallel or MIPI CSI-2). These pipelines therefore begin with:

sensor -> ipuX_csiY_mux -> ...

for parallel sensors, or:

sensor -> imx6-mipi-csi2 -> (ipuX_csiY_mux) -> ...

for MIPI CSI-2 sensors. The imx6-mipi-csi2 receiver may need to route
to the video mux (ipuX_csiY_mux) before sending to the CSI, depending
on the MIPI CSI-2 virtual channel, hence ipuX_csiY_mux is shown in
parentheses.

Unprocessed Video Capture:
--------------------------

Send frames directly from the sensor to the camera device interface
node, with no conversions, via the ipuX_csiY IDMAC source pad:

-> ipuX_csiY:2 -> ipuX_csiY capture

IC Direct Conversions:
----------------------

This pipeline uses the pre-process encode entity to route frames
directly from the CSI to the IC, to carry out scaling up to 1024x1024
resolution, CSC, flipping, and image rotation:

-> ipuX_csiY:1 -> 0:ipuX_ic_prp:1 -> 0:ipuX_ic_prpenc:1 -> ipuX_ic_prpenc capture

Motion Compensated De-interlace:
--------------------------------

This pipeline routes frames from the CSI direct pad to the VDIC entity to
support motion-compensated de-interlacing (high motion mode only),
scaling up to 1024x1024, CSC, flip, and rotation:

-> ipuX_csiY:1 -> 0:ipuX_vdic:2 -> 0:ipuX_ic_prp:2 -> 0:ipuX_ic_prpvf:1 -> ipuX_ic_prpvf capture


Usage Notes
-----------

To aid in configuration and for backward compatibility with V4L2
applications that access controls only from video device nodes, the
capture device interfaces inherit controls from the active entities
in the current pipeline, so controls can be accessed either directly
from the subdev or from the active capture device interface. For
example, the FIM controls are available either from the ipuX_csiY
subdevs or from the active capture device.
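
This inheritance can be verified by comparing the control lists
reported on the subdev node and on the capture video node; a quick
check with v4l2-ctl, assuming the CSI subdev is /dev/v4l-subdev1 and
its capture device node is /dev/video4, might look like:

.. code-block:: none

   # FIM controls as reported by the CSI subdev
   v4l2-ctl -d /dev/v4l-subdev1 --list-ctrls
   # The same controls, inherited by the linked capture video node
   v4l2-ctl -d /dev/video4 --list-ctrls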

The following are specific usage notes for the Sabre* reference
boards:


SabreLite with OV5642 and OV5640
--------------------------------

This platform requires the OmniVision OV5642 module with a parallel
camera interface, and the OV5640 module with a MIPI CSI-2
interface. Both modules are available from Boundary Devices:

- https://boundarydevices.com/product/nit6x_5mp
- https://boundarydevices.com/product/nit6x_5mp_mipi

Note that if only one camera module is available, the other sensor
node can be disabled in the device tree.

The OV5642 module is connected to the parallel bus input on the i.MX
internal video mux to IPU1 CSI0. Its i2c bus connects to i2c bus 2.

The MIPI CSI-2 OV5640 module is connected to the i.MX internal MIPI
CSI-2 receiver, and the four virtual channel outputs from the receiver
are routed as follows: vc0 to the IPU1 CSI0 mux, vc1 directly to IPU1
CSI1, vc2 directly to IPU2 CSI0, and vc3 to the IPU2 CSI1 mux. The
OV5640 is also connected to i2c bus 2 on the SabreLite, therefore the
OV5642 and OV5640 must not share the same i2c slave address.

The following basic example configures unprocessed video capture
pipelines for both sensors. The OV5642 is routed to ipu1_csi0, and
the OV5640, transmitting on MIPI CSI-2 virtual channel 1 (which is
imx6-mipi-csi2 pad 2), is routed to ipu1_csi1. Both sensors are
configured to output 640x480; the OV5642 outputs YUYV2X8, the
OV5640 UYVY2X8:

.. code-block:: none

   # Setup links for OV5642
   media-ctl -l "'ov5642 1-0042':0 -> 'ipu1_csi0_mux':1[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
   # Setup links for OV5640
   media-ctl -l "'ov5640 1-0040':0 -> 'imx6-mipi-csi2':0[1]"
   media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]"
   media-ctl -l "'ipu1_csi1':2 -> 'ipu1_csi1 capture':0[1]"
   # Configure pads for OV5642 pipeline
   media-ctl -V "'ov5642 1-0042':0 [fmt:YUYV2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:YUYV2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/640x480 field:none]"
   # Configure pads for OV5640 pipeline
   media-ctl -V "'ov5640 1-0040':0 [fmt:UYVY2X8/640x480 field:none]"
   media-ctl -V "'imx6-mipi-csi2':2 [fmt:UYVY2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi1':2 [fmt:AYUV32/640x480 field:none]"

Streaming can then begin independently on the capture device nodes
"ipu1_csi0 capture" and "ipu1_csi1 capture". The v4l2-ctl tool can
be used to select any supported YUV pixelformat on the capture device
nodes, including planar.
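
For a quick capture test of either pipeline, v4l2-ctl can set the
capture format and stream a number of frames; here the "ipu1_csi0
capture" device is assumed to be registered as /dev/video0:

.. code-block:: none

   v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=YUYV
   v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=100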

SabreAuto with ADV7180 decoder
------------------------------

On the SabreAuto, an on-board ADV7180 SD decoder is connected to the
parallel bus input on the internal video mux to IPU1 CSI0.

The following example configures a pipeline to capture from the ADV7180
video decoder, assuming NTSC 720x480 input signals, using simple
interweaving (unconverted and without motion compensation). The adv7180
must output sequential or alternating fields (field type 'seq-bt' for
NTSC, or 'alternate'):

.. code-block:: none

   # Setup links
   media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
   # Configure pads
   media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480]"
   media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]"
   # Configure "ipu1_csi0 capture" interface (assumed at /dev/video4)
   v4l2-ctl -d4 --set-fmt-video=field=interlaced_bt

Streaming can then begin on /dev/video4. The v4l2-ctl tool can also be
used to select any supported YUV pixelformat on /dev/video4.

The next example configures a pipeline to capture from the ADV7180
video decoder, assuming PAL 720x576 input signals, with motion
compensated de-interlacing. The adv7180 must output sequential or
alternating fields (field type 'seq-tb' for PAL, or 'alternate').
$outputfmt can be any format supported by the ipu1_ic_prpvf entity
at its output pad:

.. code-block:: none

   # Setup links
   media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]"
   media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]"
   media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]"
   media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]"
   # Configure pads
   media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x576]"
   media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]"
   media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]"
   media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]"
   media-ctl -V "'ipu1_ic_prpvf':1 [fmt:$outputfmt field:none]"

Streaming can then begin on the capture device node at
"ipu1_ic_prpvf capture". The v4l2-ctl tool can be used to select any
supported YUV or RGB pixelformat on the capture device node.

This platform accepts Composite Video analog inputs to the ADV7180 on
Ain1 (connector J42).

SabreSD with MIPI CSI-2 OV5640
------------------------------

Similarly to the SabreLite, the SabreSD supports a parallel interface
OV5642 module on IPU1 CSI0, and a MIPI CSI-2 OV5640 module. The OV5642
connects to i2c bus 1 and the OV5640 to i2c bus 2.

The device tree for SabreSD includes OF graphs for both the parallel
OV5642 and the MIPI CSI-2 OV5640, but as of this writing only the MIPI
CSI-2 OV5640 has been tested, so the OV5642 node is currently disabled.
The OV5640 module connects to MIPI connector J5 (the compatible module
part number and URL are not currently available).

The following example configures a direct conversion pipeline to capture
from the OV5640, transmitting on MIPI CSI-2 virtual channel 1. $sensorfmt
can be any format supported by the OV5640. $sensordim is the frame
dimension part of $sensorfmt (minus the mbus pixel code). $outputfmt can
be any format supported by the ipu1_ic_prpenc entity at its output pad:

.. code-block:: none

   # Setup links
   media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]"
   media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]"
   media-ctl -l "'ipu1_csi1':1 -> 'ipu1_ic_prp':0[1]"
   media-ctl -l "'ipu1_ic_prp':1 -> 'ipu1_ic_prpenc':0[1]"
   media-ctl -l "'ipu1_ic_prpenc':1 -> 'ipu1_ic_prpenc capture':0[1]"
   # Configure pads
   media-ctl -V "'ov5640 1-003c':0 [fmt:$sensorfmt field:none]"
   media-ctl -V "'imx6-mipi-csi2':2 [fmt:$sensorfmt field:none]"
   media-ctl -V "'ipu1_csi1':1 [fmt:AYUV32/$sensordim field:none]"
   media-ctl -V "'ipu1_ic_prp':1 [fmt:AYUV32/$sensordim field:none]"
   media-ctl -V "'ipu1_ic_prpenc':1 [fmt:$outputfmt field:none]"

Streaming can then begin on the "ipu1_ic_prpenc capture" node. The
v4l2-ctl tool can be used to select any supported YUV or RGB
pixelformat on the capture device node.


Known Issues
------------

1. When using the 90 or 270 degree rotation control at capture
   resolutions near the IC resizer limit of 1024x1024, combined with
   planar pixel formats (YUV420, YUV422p), frame capture will often
   fail with no end-of-frame interrupts from the IDMAC channel. To work
   around this, use a lower resolution and/or packed formats (YUYV,
   RGB3, etc.) when 90 or 270 degree rotations are needed.


File list
---------

drivers/staging/media/imx/
include/media/imx.h
include/linux/imx-media.h

References
----------

.. [#f1] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6DQRM.pdf
.. [#f2] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6SDLRM.pdf


Authors
-------

- Steve Longerbeam <steve_longerbeam@mentor.com>
- Philipp Zabel <kernel@pengutronix.de>
- Russell King <linux@armlinux.org.uk>

Copyright (C) 2012-2017 Mentor Graphics Inc.