Developer Guide

  • 2022.3
  • 10/25/2022

Basic Fleet Management

The basic fleet management solution uses a server-client architecture.
The following diagram presents the architecture, its components, and the communication between them.
Basic Fleet Management use cases:
  • Basic fleet management: commanding a robot to return to a docking station when its battery level drops below a threshold, or commanding it to visit multiple specified locations
  • Remote inference: sending a remote inference request to an OpenVINO™ model server, triggered by the battery level
The basic fleet management server is one of the orchestration microservices; it is provided by ThingsBoard* (https://thingsboard.io/docs/user-guide/install/docker/).
The ThingsBoard* server connects to deployed clients through the Intel® In-Band Manageability framework (https://github.com/intel/intel-inb-manageability) to provide fleet management and telemetry. The ThingsBoard* server GUI gives a clear view of the telemetry data through a dashboard tailored to Intel® In-Band Manageability. In addition, rules can be set on configured, validated events to implement the fleet management use cases.
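The following sketch illustrates, under stated assumptions, how a device could report telemetry to the ThingsBoard* server over its MQTT device API; the host name, access token, and telemetry keys are placeholders, and in the actual solution Intel® In-Band Manageability reports this telemetry rather than a hand-written script.

    # Minimal sketch (not the product code): publish device telemetry to a
    # ThingsBoard* server over MQTT using paho-mqtt 1.x. The host name, access
    # token, and telemetry keys below are placeholders.
    import json
    import time

    import paho.mqtt.client as mqtt

    THINGSBOARD_HOST = "thingsboard.example.com"   # placeholder server address
    ACCESS_TOKEN = "DEVICE_ACCESS_TOKEN"           # placeholder device token from the GUI

    client = mqtt.Client()
    client.username_pw_set(ACCESS_TOKEN)           # ThingsBoard uses the token as the MQTT user name
    client.connect(THINGSBOARD_HOST, 1883, keepalive=60)
    client.loop_start()

    # Example telemetry keys; dashboards and rule chains are configured against
    # whatever keys the client actually reports (for example, a battery level).
    telemetry = {"batteryLevel": 42, "containersRunning": 7}
    client.publish("v1/devices/me/telemetry", json.dumps(telemetry), qos=1)

    time.sleep(1)                                  # allow the message to be delivered
    client.loop_stop()
    client.disconnect()

Rule chains on the server side can then filter on a reported key such as batteryLevel to trigger the use cases described above.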
The basic fleet management client, which is deployed on the robots, consists of Intel® In-Band Manageability, a VDA5050 client that complies with VDA5050 v2.0 (https://www.vda.de/dam/jcr:f0c9c019-1506-4dee-998a-e92723fbf025/EN-VDA5050-V2_0_0.pdf), and ROS 2 nodes (for example, for navigation or object detection). When Intel® In-Band Manageability publishes to a subscribed topic, the VDA5050 client parses the VDA5050-compliant JSON message (https://github.com/VDA5050/VDA5050/tree/main/json_schemas) and translates it into ROS 2 topics, which it then publishes.
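The following is a simplified, hypothetical sketch of that translation step, not the actual VDA5050 client: it assumes the order arrives as a JSON string on a ROS 2 topic named /vda5050/order and republishes each node position as a goal pose on /goal_pose; the real client and its topic names may differ.

    # Simplified, hypothetical sketch of translating a VDA5050-style order into
    # ROS 2 messages. The topic names /vda5050/order and /goal_pose are assumptions
    # made for this illustration only.
    import json

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String
    from geometry_msgs.msg import PoseStamped


    class Vda5050ToRos2(Node):
        def __init__(self):
            super().__init__("vda5050_to_ros2")
            self.goal_pub = self.create_publisher(PoseStamped, "/goal_pose", 10)
            self.create_subscription(String, "/vda5050/order", self.on_order, 10)

        def on_order(self, msg):
            order = json.loads(msg.data)
            # A VDA5050 order carries a list of nodes; each node may hold a
            # nodePosition with x, y, and theta on a given map.
            for node in order.get("nodes", []):
                position = node.get("nodePosition", {})
                goal = PoseStamped()
                goal.header.frame_id = "map"
                goal.header.stamp = self.get_clock().now().to_msg()
                goal.pose.position.x = float(position.get("x", 0.0))
                goal.pose.position.y = float(position.get("y", 0.0))
                self.goal_pub.publish(goal)


    def main():
        rclpy.init()
        rclpy.spin(Vda5050ToRos2())


    if __name__ == "__main__":
        main()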
The VDA5050-compliant JSON message can be handled in ThingsBoard* Rule Engine (https://thingsboard.io/docs/user-guide/rule-engine-2-0/re-getting-started/) nodes: configured telemetry message validation triggers an RPC call node that sends the message to the client, or the message can be sent manually from the GUI.
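For illustration, the sketch below assembles a simplified VDA5050 v2.0-style order payload of the kind an RPC call node (or a manual GUI command) could deliver; the identifiers and coordinates are placeholders, and the official JSON schemas define the complete set of required fields.

    # Simplified, hypothetical VDA5050-style order payload; all values are
    # placeholders, and the official JSON schemas define the full field set.
    import json
    from datetime import datetime, timezone

    order = {
        "headerId": 1,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version": "2.0.0",
        "manufacturer": "example-manufacturer",
        "serialNumber": "robot-001",
        "orderId": "order-0001",
        "orderUpdateId": 0,
        "nodes": [
            {
                "nodeId": "docking_station",
                "sequenceId": 0,
                "released": True,
                "nodePosition": {"x": 1.5, "y": -2.0, "theta": 0.0, "mapId": "warehouse"},
                "actions": [],
            }
        ],
        "edges": [],
    }

    # The serialized string is what would travel from the server to the client.
    print(json.dumps(order, indent=2))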
For the remote inference use case, requests from the ROS 2 node go to the OpenVINO™ model server (https://github.com/openvinotoolkit/model_server/tree/main/extras/nginx-mtls-auth) over an SSL channel.
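A minimal sketch of such a request is shown below; it assumes the model server is exposed through the mTLS-enabled NGINX* proxy referenced above and serves a TensorFlow Serving-compatible REST endpoint, and the host, port, model name, input shape, and certificate paths are placeholders.

    # Minimal sketch of a remote inference request to an OpenVINO(TM) model server
    # over a mutually authenticated TLS channel. The URL, model name, input shape,
    # and certificate paths are placeholders for this illustration.
    import numpy as np
    import requests

    OVMS_URL = "https://ovms.example.com:9000/v1/models/resnet:predict"  # placeholder

    # Dummy input shaped for the assumed model: a batch of one 224x224 RGB image.
    image = np.random.rand(1, 224, 224, 3).astype(np.float32)

    response = requests.post(
        OVMS_URL,
        json={"instances": image.tolist()},
        cert=("client.pem", "client.key"),  # client certificate and key for mTLS
        verify="ca.pem",                    # CA certificate used to verify the server
        timeout=10,
    )
    response.raise_for_status()
    predictions = response.json()["predictions"]
    print(np.argmax(predictions[0]))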
