Run Remote Edge Server for AI Inference using OpenVINO™
This tutorial demonstrates how to configure a robot to use an Edge
Server for AI inference. The robot's AI application sends camera frames
to the Edge Server over Wi-Fi; the Edge Server runs the AI inference
using OpenVINO™ and sends the inference results back to the robot's AI
application.
- Machine A is the OpenVINO™ model server.
- Machine B is an AMR target that sends data to the server and waits for results.
Machine A needs to be in the same network as Machine B.
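Because the two machines communicate over the network, it can help to confirm that Machine B can reach Machine A on the model server port (9000, as used in the steps below) before starting. A minimal sketch in Python; `is_reachable` is a hypothetical helper, not part of the SDK, and `192.168.1.10` is a placeholder for Machine A's actual IP address:

```python
import socket

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with the IP address of Machine A (found with ifconfig in the steps below).
print(is_reachable("192.168.1.10", 9000))
```

If this prints False, check that both machines are on the same network and that no firewall blocks the port.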
Run the Sample Application
- Machine A: On a machine (for example, a NUC), pull the OpenVINO™ model server Docker* image.
  $ docker pull openvino/model_server:latest
- Machine A: Get the NN model.
  $ curl --create-dirs https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.xml https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/1/face-detection-retail-0004.xml -o model/1/face-detection-retail-0004.bin
- Machine A: Start the OpenVINO™ model server Docker* image.
  $ docker run -d -u $(id -u):$(id -g) -v $(pwd)/model:/models/face-detection -p 9000:9000 openvino/model_server:latest --model_path /models/face-detection --model_name face-detection --port 9000 --plugin_config '{"CPU_THROUGHPUT_STREAMS": "1"}' --shape auto
- Machine B: On an AMR target, go to the containers folder, and start the Docker* container.
  $ cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_2021.3/AMR_containers
  $ ./run_interactive_docker.sh amr-ubuntu2004-full-flavour-sdk:2021.3 eiforamr
- Machine A: Get the IP address of Machine A.
  $ ifconfig
- Machine B: Set the no_grpc_proxy environment variable.
  $ export no_grpc_proxy=<ip_from_machine_A>
- Machine B: Start the client script.
  $ cd /home/eiforamr/workspace/ovms
  $ python3 face_detection.py --batch_size 1 --width 600 --height 400 --input_images_dir images --output_dir results --grpc_address <ip_from_machine_A> --grpc_port 9000
- Machine B: Verify that the image containing the inference results exists in the results directory.
  $ cd /home/eiforamr/workspace/ovms/results
  $ ls
  face-detection_1_0.jpg
  Expected output: The image containing the inference results exists. The result is the modified input image with red bounding boxes indicating detected faces.
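The client script converts between images and the model's tensors. For background, detection models like face-detection-retail-0004 typically return a [1, 1, N, 7] array where each row holds (image_id, label, confidence, x_min, y_min, x_max, y_max), with coordinates normalized to [0, 1]. A minimal NumPy sketch of this post-processing, assuming that output layout (a simplified stand-in for what face_detection.py does, not the script itself):

```python
import numpy as np

def extract_boxes(detections, width, height, conf_threshold=0.5):
    """Convert a [1, 1, N, 7] detection tensor into pixel-space boxes.

    Each row is (image_id, label, confidence, x_min, y_min, x_max, y_max),
    with coordinates normalized to [0, 1].
    """
    boxes = []
    for _, _, conf, x_min, y_min, x_max, y_max in detections.reshape(-1, 7):
        if conf >= conf_threshold:
            boxes.append((int(x_min * width), int(y_min * height),
                          int(x_max * width), int(y_max * height)))
    return boxes

# Example: one confident detection and one below the threshold.
dets = np.array([[[[0, 1, 0.9, 0.25, 0.25, 0.5, 0.5],
                   [0, 1, 0.2, 0.0, 0.0, 0.1, 0.1]]]], dtype=np.float32)
print(extract_boxes(dets, width=600, height=400))
# With the values above: [(150, 100, 300, 200)]
```

The 600x400 dimensions match the --width and --height arguments passed to the client script, so the boxes land in the coordinate space of the saved result image.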
Troubleshooting
If the following error is encountered:
$ ./run_interactive_docker.sh amr-ubuntu2004-full-flavour-sdk:<TAG> eiforamr
bash: ./run_interactive_docker.sh: Permission denied
Give executable permission to the script:
$ chmod 755 run_interactive_docker.sh
Summary and Next Steps
In this tutorial, you learned how to perform remote AI inference using
OpenVINO™.
As a next step, see the Build New and Custom Docker* Images from the Edge Insights for Autonomous Mobile Robots SDK tutorial.