Article ID: 000092935 Content Type: Product Information & Documentation Last Reviewed: 02/28/2023

Is It Possible to Implement OpenVINO™ Runtime Inference Pipeline with Intermediate Representation (IR)?

Summary

Steps to implement the OpenVINO™ Runtime inference pipeline with IR.

Description
  1. Converted a TensorFlow* model into IR.
  2. Unable to determine the steps to implement the OpenVINO™ Runtime inference pipeline with the IR.
Resolution
  1. Create OpenVINO™ Runtime Core
    import openvino.runtime as ov
    core = ov.Core()
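    # (Optional, not part of the original steps) list the inference devices
    # available on this machine, e.g. ['CPU', 'GPU']; this can help when
    # choosing the device name used in step 2
    print(core.available_devices)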

     
  2. Compile the Model
    compiled_model = core.compile_model("model.xml", "AUTO")
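    # "AUTO" lets OpenVINO™ Runtime pick a device; a specific device name such
    # as "CPU" can be passed instead. The matching model.bin weights file is
    # assumed to sit next to model.xml. As an alternative (sketch), the model
    # can also be read first and then compiled:
    # model = core.read_model("model.xml")
    # compiled_model = core.compile_model(model, "AUTO")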
     
  3. Create an Infer Request
    infer_request = compiled_model.create_infer_request()
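    # (Optional) additional infer requests can be created from the same
    # compiled model if several inferences need to run in parallel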
     
  4. Set Inputs
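    # "memory" below is assumed to be a NumPy array that already holds the
    # input data and matches the model input's shape and element type, e.g.:
    # import numpy as np
    # memory = np.zeros(tuple(compiled_model.input().shape), dtype=np.float32)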
    # Create tensor from external memory
    input_tensor = ov.Tensor(array=memory, shared_memory=True)
    # Set input tensor for model with one input
    infer_request.set_input_tensor(input_tensor)

     
  5. Start Inference
    infer_request.start_async()
    infer_request.wait()
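    # start_async() returns immediately and wait() blocks until the results
    # are ready; alternatively, a single blocking call can be used instead:
    # results = infer_request.infer()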

     
  6. Process the Inference Results
    # Get output tensor for model with one output
    output = infer_request.get_output_tensor()
    output_buffer = output.data
    # output_buffer[...] can now be used to access the output tensor data as a NumPy array
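
For reference, steps 1 through 6 can be combined into one minimal script. This is a sketch rather than an official sample: it assumes the IR files (model.xml and model.bin) are in the working directory, that the model has a single f32 input and a single output, and it feeds zero-filled placeholder data that should be replaced with real input.

    import numpy as np
    import openvino.runtime as ov

    # 1. Create OpenVINO™ Runtime Core
    core = ov.Core()

    # 2. Compile the IR model for an automatically selected device
    compiled_model = core.compile_model("model.xml", "AUTO")

    # 3. Create an infer request
    infer_request = compiled_model.create_infer_request()

    # 4. Set the input tensor (placeholder data shaped like the single model input)
    input_data = np.zeros(tuple(compiled_model.input().shape), dtype=np.float32)
    input_tensor = ov.Tensor(array=input_data, shared_memory=True)
    infer_request.set_input_tensor(input_tensor)

    # 5. Run inference asynchronously and wait for completion
    infer_request.start_async()
    infer_request.wait()

    # 6. Read the results from the output tensor as a NumPy array
    output_buffer = infer_request.get_output_tensor().data
    print(output_buffer.shape)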
