Video and Vision Processing Suite IP User Guide

ID 683329
Date 3/30/2025
Public

39.3.4. Partial Frame Scaling

For large video wall applications, you may want to scale video frames using multiple scaler IPs. The output frames are split into tiles (horizontally, vertically, or both), and each scaler IP processes the data for a single tile.

Typically, the goal is for the combined output image to look identical to an image processed by a single scaler without tiling. This result is not achieved automatically when using multiple scalers; the IP requires additional information about the horizontal and vertical offset of its tile within the combined image. If the scaling algorithms are not offset correctly for the chosen tiling, visible seams may appear in the combined output image. You can configure the scaler to support partial frame scaling, with separate parameters to enable tiling in the horizontal and vertical directions (Horizontal partial image scaling and Vertical partial image scaling, respectively).

If you turn on partial image scaling in either the horizontal or vertical direction, you must provide additional offset information at run time via the register map. You must enable the Avalon memory-mapped control agent interface if you turn on partial scaling. The additional control values specify the fractional pixel offset for the first input pixel, an initial phase offset, and a fractional phase offset. You must also supply the resolutions of the untiled input and output images so that the IP can calculate the overall scaling ratio. For further details of the register map and how control values should be set for partial scaling, see Scaler Registers.
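As a rough illustration of the kind of values involved, the sketch below splits the input-space position of a tile's first output pixel into a whole-pixel offset and a fractional phase, assuming a simple linear mapping between output and input coordinates. The function name and the mapping are assumptions for illustration only; the actual register fields and the formulas for programming them are defined in Scaler Registers.

```python
def tile_phase_offsets(o_s, w_in, w_out):
    """Illustrative sketch only, not the IP's register formulas.
    Splits the input-space position of the tile's first output pixel (o_s)
    into a whole-pixel offset and a fractional phase, assuming the simple
    mapping x_in = x_out * w_in / w_out."""
    x_in = o_s * w_in / w_out      # input-space position of the tile's first output pixel
    pixel_offset = int(x_in)       # whole-pixel offset of the first required input pixel
    phase = x_in - pixel_offset    # fractional part: initial filter phase for this tile
    return pixel_offset, phase

# Example: right tile of a two-way horizontal split, 1280-wide input to 1920-wide output
print(tile_phase_offsets(960, 1280, 1920))   # -> (640, 0.0)
```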

When using partial frames, you must calculate which portion of the overall input image is required to create the given output tile, and send only that portion of the input image to the scaler IP. The following example demonstrates the calculation of the horizontal start and end indices of the required area within each line; the same calculations determine the vertical window, with width replaced by height. In these calculations, i_s and i_e are the start and end indices of the required tile in each input line, and o_s and o_e are the indices of the first and last pixels of the output tile, respectively, in the overall, combined scaled output frame.
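A minimal sketch of this calculation, assuming the simple linear mapping x_in = x_out × w_in / w_out between output and input coordinates, where w_in and w_out are the untiled input and output widths (the function and names are illustrative only, not part of the IP):

```python
import math

def input_tile_bounds(o_s, o_e, w_in, w_out):
    """Return (i_s, i_e), the first and last input pixel indices needed to
    produce output pixels o_s..o_e of the combined frame, assuming the
    linear mapping x_in = x_out * w_in / w_out (illustrative only)."""
    i_s = math.floor(o_s * w_in / w_out)           # first contributing input pixel
    i_e = math.ceil((o_e + 1) * w_in / w_out) - 1  # last contributing input pixel
    # Clamp to the untiled input frame
    return max(i_s, 0), min(i_e, w_in - 1)

# Example: right tile of a two-way horizontal split, upscaling 1280 -> 1920
print(input_tile_bounds(960, 1919, 1280, 1920))   # -> (640, 1279)
```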

The indices i_s and i_e mark the edges of the minimum input tile required to generate the given output tile. However, to achieve a seamless join between tiles, extra overscan data is required at the edges of the tile to populate all the taps of the scaling filter. A filter with N taps has (N-1)/2 taps to the left (above, for vertical scaling) and N/2 taps to the right (below, for vertical scaling) of the center pixel in the filter. At the true left or right edge of the overall frame, you cannot fill these taps with any real data, so the IP replicates the edge pixel to populate them. However, for an output tile whose left or right edge does not sit at the edge of the overall frame, the IP must populate these taps with pixels from the input frame. The first and last indices of the input tile including overscan, i_so and i_eo respectively, are then defined as follows:

i_so = i_s - (N - 1)/2
i_eo = i_e + N/2
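A minimal sketch of these definitions, adding the clamping at the true frame edges where the IP replicates the edge pixel instead of reading further input data (the function and its names are illustrative only):

```python
def overscan_tile_bounds(i_s, i_e, n_taps, w_in):
    """Extend the minimum input tile [i_s, i_e] with the overscan pixels
    needed to fill all N taps of the scaling filter (illustrative sketch)."""
    i_so = i_s - (n_taps - 1) // 2   # (N-1)/2 extra pixels to the left
    i_eo = i_e + n_taps // 2         # N/2 extra pixels to the right
    # At the true frame edges no further input data exists; the IP
    # replicates the edge pixel, so the tile is clamped to the frame.
    return max(i_so, 0), min(i_eo, w_in - 1)

# Example: 8-tap filter, tile covering input pixels 640..1279 of a 1280-wide frame
print(overscan_tile_bounds(640, 1279, 8, 1280))   # -> (637, 1279)
```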

In the horizontal dimension, the scaler requires that pixel index i_s is transmitted in pixel 0 of a group of pixels transmitted in parallel (as set by the Number of pixels in parallel parameter). Ensure you round the number of overscan pixels to the left of i_s (nominally (N-1)/2) up to the nearest multiple of the number of pixels in parallel (pip). The following equation defines the final horizontal input tile left index (h_i_so) when overscan is on:

h_i_so = i_s - pip × ⌈((N - 1)/2) / pip⌉
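A minimal sketch of this index calculation, assuming the left overscan is rounded up to a whole number of pixel-in-parallel groups so that i_s lands in pixel 0 of a transmitted group (function and names are illustrative, not part of the IP):

```python
import math

def horizontal_tile_start(i_s, n_taps, pip):
    """Illustrative sketch: left index of the horizontal input tile with
    overscan on, with the nominal (N-1)/2 left overscan rounded up to a
    whole number of pixel-in-parallel groups."""
    left_overscan = (n_taps - 1) // 2
    groups = math.ceil(left_overscan / pip)   # whole pip groups of overscan
    return i_s - groups * pip                 # h_i_so

# Example: 8-tap filter, 4 pixels in parallel, i_s = 640
print(horizontal_tile_start(640, 8, 4))   # -> 636
```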

You can choose not to supply the additional overscan data if your system does not allow it or the cost is too high, but expect to see noticeable seams at the tile edges. Registers in the register map allow you to turn the left-edge and top-edge overscan on and off at run time for horizontal and vertical scaling, respectively. These registers merely inform the scaler whether to expect the overscan data. The scaler responds automatically if it receives right-edge or bottom-edge overscan data, with no register map setting required. Controlling the overscan behavior of the scaler at run time allows you to dynamically change the position of the tile that each scaler IP processes.