Video and Vision Processing Suite IP User Guide

ID 683329
Date 3/30/2025
Public



53.5.2. Warp IP Software Code Examples

8K UHD workflow example

This example shows the workflow and basic use of the warp software C++ source code to generate and apply a 15-degree rotation warp. The example targets 7680x4320 video, which requires the processing to be split between four warp engines. The frame buffer and warp coefficient base addresses in the example are arbitrary; actual values depend on the particular system design.

uint32_t FRAMEBUF_BASE_ADDR = 0x80000000;
uint32_t COEF_BASE_ADDR = 0xa0000000;
uint32_t SKIP_RAM_PAGE = 0;
uint32_t VIDEO_WIDTH = 7680;
uint32_t VIDEO_HEIGHT = 4320;

intel_vvp_warp_base_t base = INTEL_VVP_WARP_BASE;

// Warp driver instance
intel_vvp_warp_instance_t wrp0;

// Initialize driver instance
intel_vvp_warp_init_instance(&wrp0, base);

// Create warp channel
intel_vvp_warp_channel_t* ch0 = intel_vvp_warp_create_quad_channel(&wrp0, 0, 0);

// Fill in warp channel configuration structure
intel_vvp_warp_channel_config_t cfg;
cfg.ram_addr = FRAMEBUF_BASE_ADDR;  // Frame buffers base address
cfg.cs = ERGB_FULL;                 // Video colourspace
cfg.width_input = VIDEO_WIDTH;      // Video dimensions
cfg.height_input = VIDEO_HEIGHT;
cfg.width_output = VIDEO_WIDTH;
cfg.height_output = VIDEO_HEIGHT;
cfg.lfr = 0;                        // No low frame rate fallback

// Configure warp channel using the parameters above
intel_vvp_warp_configure_channel(ch0, &cfg);

// Obtain required hardware information
WarpHwContextPtr hw = WarpDataHelper::GetHwContext(ch0);

// Instantiate and initialize mesh generator
WarpConfigurator configurator{ hw }; 
configurator.SetInputResolution(VIDEO_WIDTH, VIDEO_HEIGHT);
configurator.SetOutputResolution(VIDEO_WIDTH, VIDEO_HEIGHT);
configurator.Reset();
configurator.SetRotate(15.0f);

// Generate mesh
WarpMeshPtr mesh = configurator.GenerateMeshFromFixed();

// Instantiate data generator
WarpDataGenerator data_generator;

WarpDataContext ctx{ hw,
    VIDEO_WIDTH, VIDEO_HEIGHT,
    VIDEO_WIDTH, VIDEO_HEIGHT
};

// Generate warp data using provided hardware configuration and mesh
WarpDataPtr user_data = data_generator.GenerateData(ctx, mesh);

// Allocate and fill in intel_vvp_warp_data_t object required by the warp driver
WarpDataHelper::WarpDriverDataPtr driver_data = WarpDataHelper::GenerateDriverData(ctx, user_data, COEF_BASE_ADDR, SKIP_RAM_PAGE);

// Transfer generated warp data to the calculated destination for each engine
for(uint32_t i = 0; i < driver_data->num_engines; ++i)
{
    const WarpEngineData* ue = user_data->GetEngineData(i);
    intel_vvp_warp_engine_data_t* de = &(driver_data->engine_data[i]);
    memcpy((void*)(de->mesh_addr), ue->GetMeshData(), de->mesh_size);
    memcpy((void*)(de->filter_addr), ue->GetFilterData(), de->filter_size);
    memcpy((void*)(de->fetch_addr), ue->GetFetchData(), de->fetch_size);
}

// Apply warp by passing new warp data set to the driver
intel_vvp_warp_apply_transform(ch0, driver_data.get());
// Release allocated resources 
intel_vvp_warp_free_channel(ch0);

Warp mesh usage

Define the required warp using the WarpMesh object. The example shows the simplest case of a 1:1 (unity) warp for 8K UHD video.

WarpMeshPtr mesh = WarpMesh::Create({7680, 4320}, {7680, 4320}, hw);
uint32_t mesh_step = mesh->GetStep();
uint32_t fract_bits = mesh->GetFractBits();

for(uint32_t v = 0; v < mesh->GetVNodes(); ++v)
{
    mesh_node_t* node = mesh->GetRow(v);

    for(uint32_t h = 0; h < mesh->GetHNodes(); ++h)
    {
        node[h]._x = (h * mesh_step) << fract_bits;
        node[h]._y = (v * mesh_step) << fract_bits;
    }
}

Mesh coordinates use the least significant bits as a fractional part for subpixel precision. In the example above, the fractional part is always 0. Store subpixel positions in the following way:

mesh_node_t* node = mesh->GetRow(v);
float k = static_cast<float>(1 << mesh->GetFractBits());
float pos_x = 10.6f;
node->_x = static_cast<int32_t>(roundf(pos_x * k));
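
To convert a stored fixed-point coordinate back to a floating-point pixel position, divide by 2 raised to the number of fractional bits. The following is a minimal sketch of the reverse conversion, using the same mesh accessors as above:

mesh_node_t* node = mesh->GetRow(v);
float k = static_cast<float>(1 << mesh->GetFractBits());
// Recover the pixel position, including the subpixel fraction, from the fixed-point value
float pos_x = static_cast<float>(node->_x) / k;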

Easy warp example

When you turn on Easy warp, you can rotate the input video by 0, 90, 180, or 270 degrees, or mirror the video, without the need for a transform mesh and its associated warp data.

uint32_t FRAMEBUF_BASE_ADDR = 0x80000000;
uint32_t VIDEO_WIDTH = 3840;
uint32_t VIDEO_HEIGHT = 2160;
intel_vvp_warp_base_t base = INTEL_VVP_WARP_BASE;

intel_vvp_warp_instance_t wrp0;

// Initialize driver instance
intel_vvp_warp_init_instance(&wrp0, base);

// Allocate Easy warp channel
intel_vvp_warp_channel_t* ch0 = intel_vvp_warp_create_easy_warp_channel(&wrp0, 0, 0);

assert(intel_vvp_warp_check_easy_warp_capable(ch0) == 0);

// Configure channel
intel_vvp_warp_channel_config_t cfg;
// Configure in 4K, RGB Full colourspace
cfg.ram_addr = FRAMEBUF_BASE_ADDR;
cfg.cs = ERGB_FULL;
cfg.width_input = VIDEO_WIDTH;
cfg.height_input = VIDEO_HEIGHT;
cfg.width_output = VIDEO_WIDTH;
cfg.height_output = VIDEO_HEIGHT;
cfg.lfr = 0;

intel_vvp_warp_configure_channel(ch0, &cfg);

// Mirror input video
// Enable video bypass, keep original resolution
intel_vvp_warp_bypass(ch0, 1, 0, VIDEO_WIDTH, VIDEO_HEIGHT);

// Configure Easy warp mirror
intel_vvp_warp_set_easy_warp(ch0, 0x4);

// Rotate input video 90 degrees CCW
// Enable video bypass, flip input width and height
intel_vvp_warp_bypass(ch0, 1, 0, VIDEO_HEIGHT, VIDEO_WIDTH);

// Configure Easy warp rotation
intel_vvp_warp_set_easy_warp(ch0, 0x01);

// Release warp channel
intel_vvp_warp_free_channel(ch0);
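
The Easy warp modes in this example are selected with literal values: 0x4 for mirror and 0x01 for the 90-degree counterclockwise rotation. For self-documenting code you can wrap these values in named constants. The names below are illustrative only and are not part of the driver API:

// Illustrative constant names (not defined by the driver); the values match the calls above
const uint32_t EASY_WARP_ROTATE_90_CCW = 0x01;  // rotate input video 90 degrees counterclockwise
const uint32_t EASY_WARP_MIRROR        = 0x04;  // mirror input video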

Warp latency parameters generation example

This example shows how to generate the recommended minimum latency parameters for a given video transformation. These parameters are necessary to configure the video pipeline for low-latency operation.

// Example video and system clock
uint32_t EXAMPLE_CLOCK  = 300000000;
// UHD video dimensions
uint32_t VIDEO_WIDTH = 3840; 
uint32_t VIDEO_HEIGHT = 2160;
uint32_t VIDEO_HEIGHT_FULL = 2250;
// Output frame rate x100
uint32_t OUTPUT_FRAME_RATE = 6000;
// In this example system and video clock are the same
uint32_t system_clock = EXAMPLE_CLOCK;
uint32_t video_clock = EXAMPLE_CLOCK;
// Warp channel
intel_vvp_warp_channel_t* ch0{nullptr};

// Allocate and initialize a warp channel here
//  ...
//////////////////////////////////////////////

// Obtain required hardware information
WarpHwContextPtr hw = WarpDataHelper::GetHwContext(ch0);

// Generate a 4K mesh for 5 degree CCW rotation
WarpConfigurator configurator{hw};

configurator.SetInputResolution(VIDEO_WIDTH, VIDEO_HEIGHT);
configurator.SetOutputResolution(VIDEO_WIDTH, VIDEO_HEIGHT);
configurator.Reset();
configurator.SetRotate(5.0f);

WarpMeshPtr mesh = configurator.GenerateMeshFromFixed();

// Parameters required for warp data generation
WarpDataContext ctx{
    hw,
    VIDEO_WIDTH, VIDEO_HEIGHT,
    VIDEO_WIDTH, VIDEO_HEIGHT
};

WarpDataGenerator data_generator;

WarpDataPtr user_data = data_generator.GenerateData(ctx, mesh);

// Obtain latency params for the configured warp
WarpLatencyParams latency_params = data_generator.GenerateLatencyParams(ctx, user_data, system_clock, video_clock, VIDEO_HEIGHT_FULL, OUTPUT_FRAME_RATE);

// Upload and apply generated warp data here
// …
// intel_vvp_warp_apply_transform(ch0, …);
// …

// Pass on "output_latency" to the driver
intel_vvp_warp_set_output_latency(ch0, latency_params._output_latency);

// The “total_latency” member represents the recommended minimum offset
// between the input and output frames
// The value is in axi4s_vid_out_0_clock clock cycles
// Use this parameter to configure the rest of the video pipeline
// as appropriate
//
//  latency_params._total_latency;
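
For illustration only, you can express the recommended offset as an absolute time by dividing by the output video clock frequency. The sketch below assumes that video_clock, defined earlier in this example, is the axi4s_vid_out_0_clock frequency in Hz:

// Illustration: convert the recommended frame offset from clock cycles to microseconds
double total_latency_us = static_cast<double>(latency_params._total_latency) * 1.0e6 /
                          static_cast<double>(video_clock);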