Developer Guide

  • 2022.3
  • 10/25/2022

ROS 2 OpenVINO™ Toolkit Sample Application

This tutorial tells you how to run the segmentation demo application on both a static image and on a video stream received from an Intel® RealSense™ camera.

Run the Sample Application

  1. Check if your installation has the amr-ros2-openvino Docker* image:
    docker images | grep amr-ros2-openvino
    # If the image is installed, the output contains: amr-ros2-openvino
    If the image is not installed, continuing with these steps triggers a build that takes longer than an hour (sometimes much longer, depending on system resources and internet connection).
  2. If the image is not installed, Intel recommends installing the Robot Base Kit or Robot Complete Kit with the Get Started Guide for Robots.
  3. Go to the AMR_containers folder:
    cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
  4. Prepare the environment setup:
    source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
    export CONTAINER_BASE_PATH=`pwd`
    export ROS_DOMAIN_ID=16
  5. Launch the automated execution of the ROS 2 OpenVINO™ toolkit sample applications:
    CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml up
    Expected output:
    1. Execution of the object segmentation sample code with input from a static image: This takes one minute, and you can see the semantic segmentation being applied to the image.
      (Figures: the original image and the image with semantic object segmentation)
    2. Execution of the object segmentation sample code with input from the Intel® RealSense™ camera topic: This requires an Intel® RealSense™ camera connected to the testing target. It takes one minute, and you can see the semantic segmentation being applied to the video stream received from the camera.
  6. To stop the tutorial, do one of the following (see also the script sketch after this list):
    • Press Ctrl-c in the terminal where you ran the up command.
    • Run this command in another terminal:
    CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml down
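
The run and stop commands above can also be collected into a single shell script. The following is a minimal sketch, assuming a Bash shell and the default paths used in this guide; replace <edge_insights_for_amr_path> with your own installation path, as in step 3.
    #!/bin/bash
    # Sketch: run the tutorial end to end (steps 3 through 6 above).
    cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
    # Step 4: prepare the environment.
    source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
    export CONTAINER_BASE_PATH=`pwd`
    export ROS_DOMAIN_ID=16
    # Step 5: launch the sample applications; press Ctrl-c to stop them,
    # or run the down command from step 6 in another terminal.
    CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml up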

How it Works

All of the commands required to run this tutorial are documented in:
01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml
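
To review what the tutorial runs before launching it, you can open the compose file or have docker-compose print the resolved configuration. This is a quick check, assuming you are in the AMR_containers folder and have sourced the environment as in step 4 of the previous section:
    # Open the compose file; the container commands for the demo are defined here.
    less 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml
    # Print the configuration as docker-compose resolves it, with environment variables expanded.
    CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml config
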
To use your own image to run semantic segmentation:
  1. Copy your image into the AMR_containers folder at:
    cp <path_to_image>/my_image.jpg 01_docker_sdk_env/docker_compose/05_tutorials/param/
  2. Edit 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml and, at line 34, add the following command:
    cp ${CONTAINER_BASE_PATH}/01_docker_sdk_env/docker_compose/05_tutorials/param/my_image.jpg ../ros2_ws/src/ros2_openvino_toolkit/data/images/
  3. Edit 01_docker_sdk_env/docker_compose/05_tutorials/param/pipeline_segmentation_image.yaml and change input_path on line 4:
    input_path: /home/eiforamr/ros2_ws/src/ros2_openvino_toolkit/data/images/my_image.jpg
  4. Run the automated yml:
    CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml up
Expected result: Semantic segmentation runs on the image you selected. A consolidated script for these customization steps is sketched below.
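
These customization steps can also be scripted. The following is a minimal sketch, assuming you run it from the AMR_containers folder after sourcing the environment (step 4 of the previous section) and that input_path sits on line 4 of pipeline_segmentation_image.yaml as described above. The edit at line 34 of the compose file (step 2) still has to be made by hand, and <path_to_image> is a placeholder for the location of your image.
    # Step 1: copy your image into the tutorial's param folder.
    cp <path_to_image>/my_image.jpg 01_docker_sdk_env/docker_compose/05_tutorials/param/
    # Step 3: point the segmentation pipeline at the new image (assumes input_path is on line 4).
    sed -i '4s|input_path:.*|input_path: /home/eiforamr/ros2_ws/src/ros2_openvino_toolkit/data/images/my_image.jpg|' 01_docker_sdk_env/docker_compose/05_tutorials/param/pipeline_segmentation_image.yaml
    # Step 4: relaunch the tutorial.
    CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml up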

Troubleshooting

For general robot issues, go to: Troubleshooting for Robot Tutorials.

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.