Video and Vision Processing Suite IP User Guide

ID 683329
Date 3/30/2025
Public



25.3. Deinterlacer IP Functional Description

The IP offers bob, weave, or motion-adaptive deinterlacing.
Figure 74. Bob Deinterlacing
The figure shows bob deinterlacing, where the IP drops or deinterlaces interlaced fields. The IP passes all progressive frames through.
Figure 75. Weave Deinterlacing

The figure shows weave deinterlacing, where the IP drops or deinterlaces interlaced fields. The IP passes all progressive frames through. For incoming F1 and F0 fields, the weave deinterlacer deinterlaces or drops the fields based on the nibble values. For more information on the nibble values, refer to the Intel FPGA Streaming Video Protocol Specification.
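As a rough sketch of the two simpler schemes above (grayscale NumPy arrays standing in for fields; an illustrative model, not the IP's hardware implementation), bob line-doubles a single field to recover full vertical resolution, while weave interleaves the F0/F1 field pair into one frame:

```python
import numpy as np

def bob(field: np.ndarray) -> np.ndarray:
    """Bob deinterlacing: rebuild a full frame from one field by line
    doubling. `field` has half the vertical resolution of the output.
    A real deinterlacer may interpolate rather than duplicate."""
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=field.dtype)
    frame[0::2] = field  # lines carried by this field
    frame[1::2] = field  # duplicated to fill the missing lines
    return frame

def weave(f0: np.ndarray, f1: np.ndarray) -> np.ndarray:
    """Weave deinterlacing: interleave the lines of the two fields of
    one frame (top field f0, bottom field f1)."""
    h, w = f0.shape
    frame = np.empty((2 * h, w), dtype=f0.dtype)
    frame[0::2] = f0  # top field lines
    frame[1::2] = f1  # bottom field lines
    return frame
```

Weave preserves full detail for static content but shows combing artifacts under motion, which is why the motion-adaptive mode described next blends between the two approaches.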

For the motion-adaptive deinterlacer, the IP calculates a motion coefficient for each pixel and computes the output pixel depending on that coefficient. If the motion calculated from the current and previous pixels is higher than the stored motion value, the IP discards the stored motion value. The IP uses the calculated motion in the blending algorithm, and the calculated motion then becomes the next stored motion value. However, if the computed motion value is lower than the stored motion value, the following actions occur:

  • The IP determines the next stored motion value by calculating the sum of three-fourths of the computed motion and one-fourth of the previously stored motion.
  • The blending algorithm uses the next stored motion value.
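The update rule above can be modeled as follows (a floating-point sketch of the behavior described in the text; the IP itself uses fixed-point hardware arithmetic, and the exact fractions are taken from the description above):

```python
def next_stored_motion(computed: float, stored: float) -> float:
    """Per-pixel motion update: motion rises immediately when new
    motion exceeds the stored value, but decays gradually otherwise.
    Illustrative model only, not the IP's exact logic."""
    if computed > stored:
        return computed  # adopt the higher motion value at once
    # blend toward the lower value: 3/4 computed + 1/4 stored
    return 0.75 * computed + 0.25 * stored
```

For example, a sudden jump from 0.2 to 0.8 is adopted in a single step, while a drop from 1.0 to 0.0 only falls to 0.25 in the first step and approaches zero over subsequent frames.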

This scheme means that the motion value the blending algorithm uses rises immediately when motion appears, but takes a few frames to decay. The motion-adaptive algorithm fills in the rows missing from the current field by calculating a function of other pixels in the current field and the three preceding fields, as in the following sequence:

  1. The IP collects pixels from the current field and the three fields preceding it (the X denotes the location of the desired output pixel).
    Figure 76. Pixel Collection for the Motion-Adaptive Algorithm


  2. The IP assembles these pixels into two 3×3 groups of pixels.
    Figure 77. Pixel Assembly for the Motion-Adaptive Algorithm
    The figure shows the minimum absolute difference of the two groups.
  3. The IP normalizes the minimum absolute difference value into the same range as the input pixel data. The IP compares this motion value with the recorded motion value for the same location in the previous frame. If the new value is greater, the IP keeps it. If the new value is less than the stored value, the IP uses the mean of the two values as the motion value. This action reduces unpleasant flickering artifacts.
  4. The IP calculates the output pixel as a weighted mean of the interpolated pixel and the equivalent pixel in the previous field, using the following equation: