Developer Guide

  • 2022.3
  • 10/25/2022

Collaborative Visual SLAM

Collaborative visual SLAM is compiled natively for both Intel® Core™ and Intel® Atom® processor-based systems. By default, in this tutorial, the Intel® Core™ processor-based system version is used. If you are running an Intel® Atom® processor-based system, you must make the changes detailed in Collaborative Visual SLAM on Intel® Atom® Processor-Based Systems for collaborative visual SLAM to work.

Collaborative Visual SLAM with Two Robots

Prerequisites:
  • The main input is a camera: monocular, stereo, or RGB-D.
  • IMU and odometry data are supported as auxiliary inputs (the sketch below shows how to check which of these inputs a recorded bag provides).
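    These tutorials replay prerecorded ROS 2 bags rather than a live camera. To see which camera, IMU, or odometry topics a given bag actually contains, you can inspect it with the standard ros2 bag tooling (a minimal sketch; the bag folder name is a placeholder, and the command assumes a shell where ROS 2 is available, for example inside one of the EI for AMR containers):
    # List the topics, message types, and message counts recorded in a bag
    ros2 bag info 01_docker_sdk_env/docker_compose/06_bags/<bag_folder>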
  1. Check if your installation has the amr-collab-slam Docker* image.
    docker images | grep amr-collab-slam   # if the image is installed, the output includes: amr-collab-slam
    If the image is not installed, continuing with these steps triggers a build that takes longer than an hour (sometimes much longer, depending on system resources and internet connection).
  2. If the image is not installed, Intel recommends re-installing the EI for AMR Robot Kit with the Get Started Guide for Robots.
  3. Go to the AMR_containers folder:
    cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/AMR_containers
  4. Prepare the environment setup:
    source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
    export CONTAINER_BASE_PATH=`pwd`
    export ROS_DOMAIN_ID=17
  5. If the bags were not extracted before, do it now:
    unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/
    sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags
  6. Run the collaborative visual SLAM algorithm using two bags simulating two robots going through the same area:
    CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/cslam.tutorial.yml up
    Expected result: On the server rviz2, both trackers are seen.
    • Red indicates the path robot 1 is taking right now.
    • Blue indicates the path robot 2 took.
    • Green indicates the points known to the server.
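    To stop this tutorial, close the rviz2 window and press Ctrl-c in the terminal (the same procedure as in the GPU tutorial below). To clean up the containers afterwards, the matching docker-compose down command should work (a sketch, assuming the same cslam.tutorial.yml used in step 6):
    docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/cslam.tutorial.yml down --remove-orphans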

Collaborative Visual SLAM with FastMapping Enabled

  1. Go to the AMR_containers folder:
    cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/AMR_containers
  2. Prepare the environment setup:
    source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
    export CONTAINER_BASE_PATH=`pwd`
    export ROS_DOMAIN_ID=17
  3. If the bags were not extracted before, do it now:
    unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/
    sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags
  4. Run the collaborative visual SLAM algorithm with FastMapping enabled:
    docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/collab-slam-fastmapping.tutorial.yml up
    Expected result: On the opened rviz2, you see the visual SLAM keypoints, the 3D map, and the 2D map.
  5. You can disable the /univloc_tracker_0/local_map topic, the univloc_tracker_0/fused_map topic, or both. A sketch at the end of this section shows how to confirm these topics are being published.
    Visible tests:
    • Showing keypoints, the 3D map, and the 2D map
    • Showing the 3D map
    • Showing the 2D map
    • Showing keypoints and the 2D map
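    To confirm that these map topics are being published before toggling them in rviz2, you can query them with the ros2 CLI from another terminal (a minimal sketch; it assumes ROS_DOMAIN_ID is set to the same value as above and that the ros2 tools are available, for example inside one of the running containers):
    # List the map topics published by the tracker
    ros2 topic list | grep univloc_tracker_0
    # Check that the local (3D) map topic is being updated
    ros2 topic hz /univloc_tracker_0/local_map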

Collaborative Visual SLAM with GPU Offloading

  1. Check if your installation has the amr-collab-slam-gpu Docker* image.
    docker images | grep amr-collab-slam-gpu   # if the image is installed, the output includes: amr-collab-slam-gpu
    If the image is not installed, continuing with these steps triggers a build that takes longer than an hour (sometimes much longer, depending on system resources and internet connection).
  2. If the image is not installed, Intel recommends installing the Robot Complete Kit with the Get Started Guide for Robots.
  3. Go to the AMR_containers folder:
    cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/AMR_containers
  4. Prepare the environment setup:
    source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
    export CONTAINER_BASE_PATH=`pwd`
    export ROS_DOMAIN_ID=17
  5. If the bags were not extracted before, do it now:
    unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/
    sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags
  6. Run the collaborative visual SLAM algorithm with GPU offloading:
    docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/collab-slam-gpu.tutorial.yml up
    Expected result: On the opened rviz2, you see the visual SLAM keypoints, the 3D map, and the 2D map.
  7. In a different terminal, check how much of the GPU is being used with intel_gpu_top:
    sudo apt-get install intel-gpu-tools
    sudo intel_gpu_top
  8. To close this execution, close the rviz2 window, and press Ctrl-c in the terminal.
  9. Clean up the Docker* images:
    docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/collab-slam-gpu.tutorial.yml down --remove-orphans

Collaborative Visual SLAM on Intel® Atom® Processor-Based Systems

  1. Open the collaborative visual SLAM yml file for editing (depending on which tutorial you want to run, replace <collab-slam-tutorial> with cslam, collab-slam-fastmapping, or collab-slam-gpu):
    cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
    gedit 01_docker_sdk_env/docker_compose/05_tutorials/<collab-slam-tutorial>.tutorial.yml
  2. Replace this line:
    source /home/eiforamr/workspace/ros_entrypoint.sh
    With these lines:
    unset CMAKE_PREFIX_PATH
    unset AMENT_PREFIX_PATH
    unset LD_LIBRARY_PATH
    source /home/eiforamr/workspace/CollabSLAM/prebuilt_collab_slam_atom/setup.bash
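    Before starting the tutorial, you can verify that the edit landed in the right file with a quick grep (a minimal check; use the same <collab-slam-tutorial> value as above):
    grep -n "prebuilt_collab_slam_atom" 01_docker_sdk_env/docker_compose/05_tutorials/<collab-slam-tutorial>.tutorial.yml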

Troubleshooting

  • If the tracker (univloc_tracker_ros) fails to start, giving this error:
    amr-collab-slam | [ERROR] [univloc_tracker_ros-2]: process has died [pid 140, exit code -4, cmd '/home/eiforamr/workspace/CollabSLAM/prebuilt_collab_slam_core/univloc_tracker/lib/univloc_tracker/univloc_tracker_ros --ros-args -r __node:=univloc_tracker_0 -r __ns:=/ --params-file /tmp/launch_params_zfr70odz -r /tf:=tf -r /tf_static:=tf_static -r /univloc_tracker_0/map:=map'].
    Exit code -4 means the process was stopped by an illegal-instruction signal (SIGILL). This typically happens when the default Intel® Core™ processor build is run on an Intel® Atom® processor-based system; in that case, apply the changes described in Collaborative Visual SLAM on Intel® Atom® Processor-Based Systems.
  • The odometry feature (use_odom:=true) does not work with these bags.
    The ROS 2 bags used in this example do not have the necessary topics recorded for the odometry feature of collaborative visual SLAM. If the use_odom:=true parameter is set, collab-slam reports errors.
  • The bags fail to play.
    The collab_slam Docker* container is started with the local user and needs access to the ROS 2 bags folder. Make sure that your local user has read and write access to this path:
    <path to edge_insights_for_amr>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/01_docker_sdk_env/docker_compose/06_bags
    The best way to do this is to make your user the owner of the folder. If the EI for AMR bundle was installed with sudo, chown the folder to your local user.
  • If the following error is encountered:
    amr-collab-slam-gpu | [univloc_tracker_ros-2] /workspace/src/gpu/l0_rt_helpers.h:56: L0 error 78000001
    The render group might have a different ID than 109, which is the value used in the yml files in these examples. To find the ID of the render group on your system:
    getent group render
    cat /etc/group | grep render
    If the result is not render:x:109, change the yml file (a sketch after this list shows how to print only the numeric ID):
    gedit 01_docker_sdk_env/docker_compose/05_tutorials/collab-slam-gpu.tutorial.yml   # change the group ID at line 26 from 109 to the number you got above
  • For general robot issues, go to: Troubleshooting for Robot Tutorials.
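    For the render group issue above, this prints only the numeric group ID so it can be compared directly with the value in the yml file (a minimal sketch using standard Linux tools):
    # Print just the numeric ID of the render group
    getent group render | cut -d: -f3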

Product and Performance Information

1. Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.