Get Started Guide

  • 2022.3
  • 10/23/2022

Update an Application-Over-The-Air

Application-Over-The-Air (AOTA) updates enable cloud-to-edge manageability of application services running on Edge Insights for Fleet (EIF) enabled systems through the Device Manageability component. Device Manageability is a software component that includes SOTA, FOTA, AOTA, and a few system operations.
For the EIF use case, only the AOTA features of Device Manageability are validated and supported, through the ThingsBoard* cloud-based management front-end service.
The following sections walk you through setting up ThingsBoard*, establishing connectivity with the target systems, and updating applications on those systems.
Device Manageability was previously named Turtle Creek. Remnants of the previous name still exist in some components. The name replacement is ongoing and will be completed in a future release.

Install Device Manageability Functionality

You need two hosts to run AOTA. One host is the server on which an EIF Reference Implementation will be installed, and one host is the worker on which the Reference Implementation will be deployed through AOTA for execution.
In this guide, the server host will be referred to as Server and the worker host will be referred to as Worker.
Similarly, $BUILD_PATH refers to the build ingredients directory of EIF, where most of the commands will be executed. Depending on which EIF Reference Implementation is installed, $BUILD_PATH has the following form: /path/to/<RI_name>/<RI_name><release_version>/IEdgeInsights/build. For the exact values of <RI_name> and <release_version> for your setup, refer to the respective Reference Implementation documentation.
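For convenience, you can export BUILD_PATH once in your shell before running the commands in this guide. This is only an illustrative sketch; substitute the actual <RI_name> and <release_version> from your Reference Implementation documentation:
# Replace <RI_name> and <release_version> with the values documented for your Reference Implementation
export BUILD_PATH=/path/to/<RI_name>/<RI_name><release_version>/IEdgeInsights/build
echo $BUILD_PATH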
Server Prerequisites
  1. Refer to Requirements for the hardware requirements.
  2. Ubuntu* 20.04 LTS
  3. One of the EIF Reference Implementations is installed.
Worker Prerequisites
  1. Refer to Reference Implementations for the specific hardware requirements for the worker host.
  2. Ubuntu* 20.04 LTS
  3. Install the latest Docker CLI/Docker daemon by following the Install using the repository and Install Docker Engine sections at: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository. Run Docker without sudo by following the Manage Docker as a non-root user instructions at: https://docs.docker.com/engine/install/linux-postinstall/.
  4. If your Worker is running behind an HTTP/S proxy server, perform these steps (an example client proxy configuration is shown after this list). If not, you can skip this step.
    1. Configure proxy settings for the Docker* client to connect to the internet and for containers to access the internet by following the details at: https://docs.docker.com/network/proxy/.
    2. Configure proxy settings for the Docker* daemon by following the steps at: https://docs.docker.com/config/daemon/systemd/#httphttps-proxy.
  5. Install the docker-compose tool by following the steps at: https://docs.docker.com/compose/install/#install-compose.
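As a reference for step 4, a minimal Docker* client proxy configuration in ~/.docker/config.json could look like the following; the proxy addresses are placeholders for your environment, and the Docker* documentation linked above remains the authoritative source:
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:8080",
      "httpsProxy": "http://proxy.example.com:8080",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}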
All Device Manageability devices can be controlled using a correctly set up cloud service application. Follow the steps below to install Device Manageability on Worker:
  1. Go to the manageability folder on the Server, create an archive of its contents, copy the archive to the Worker, and unzip it there (see the example transfer commands after this list). Then run the following commands:
    sudo chmod 775 install-tc.sh
    sudo ./install-tc.sh
    The manageability folder should be $BUILD_PATH/../../manageability/ for an EIF Reference Implementation.
  2. After reading all the licenses, press q once to finish.
  3. Accept the license by typing Y.
    After installation is complete, you should see terminal output similar to the following:
  4. Once Device Manageability has been installed successfully, edit the /etc/intel_manageability.conf file and update the required values:
    sudo vi /etc/intel_manageability.conf
    1. Under the <all> </all> section, change dbs from ON to OFF.
      DBS stands for Docker* Bench Security. This feature of Device Manageability is not used for EIF.
    2. Add the IP endpoint for the developer files to the trustedRepositories:
      <trustedRepositories> http://[Server_IP]:5003 </trustedRepositories>
    3. Save and exit. You must restart the Worker before these changes can take effect.
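The following is a minimal sketch of the archive-and-copy flow from step 1, assuming the Worker is reachable over SSH; <user> and <Worker_IP> are placeholders for your environment:
# On the Server: archive the manageability folder and copy it to the Worker
cd $BUILD_PATH/../../
tar -czvf manageability.tar.gz manageability/
scp manageability.tar.gz <user>@<Worker_IP>:~/
# On the Worker: extract the archive and run the installer
tar -xzvf manageability.tar.gz
cd manageability/
sudo chmod 775 install-tc.sh
sudo ./install-tc.sh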

Multi-Node Deployment

EIF deployment on multiple nodes requires the use of the Docker* registry. The following sections outline some of the commands to be run on the Server and on any newly added Workers.
Execute the following steps on the Server.
Configure Docker* Registry
  1. Launch the local Docker* registry:
    docker run -d -p 5002:5000 --name registry --restart unless-stopped registry:2
  2. Configure the Docker* daemon to allow pushing images to the local Docker* registry:
    1. Edit the /etc/docker/daemon.json file:
      sudo vi /etc/docker/daemon.json
    2. Add the following lines:
      { "insecure-registries": ["<Server_IP>:5002"] }
    3. Restart the Docker service to reload the change:
      sudo service docker restart
  3. Update the Docker* registry URL in the DOCKER_REGISTRY variable with the [Server_IP]:5002/ value:
    sudo vi $BUILD_PATH/.env
  4. Identify the images that need to be tagged:
    cat docker-compose-push.yml | grep container_name
  5. Identify the version of the Docker* images:
    docker images | grep <docker_image_from_above_command>
  6. Tag each image with the following command:
    docker tag <docker_image>:<version> [Server_IP]:5002/<docker_image>:<version>
  7. Push the Reference Implementation images to the registry:
    docker-compose -f docker-compose-push.yml push
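As an illustration, assuming step 4 listed a container named ia_example_service (a hypothetical name) and step 5 showed its version as 1.0.0, the tagging command from step 6 would be:
docker tag ia_example_service:1.0.0 [Server_IP]:5002/ia_example_service:1.0.0
After pushing, you can optionally confirm that the images landed in the local registry by querying its catalog endpoint:
curl http://[Server_IP]:5002/v2/_catalog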
Configure ETCD
  1. Update the ETCD_HOST variable with the IP of the Worker in the .env file (an example of the resulting file is shown after this step):
    sudo vi $BUILD_PATH/.env
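After the registry and ETCD updates, the relevant lines of $BUILD_PATH/.env should look similar to the following; the values are placeholders for your environment:
DOCKER_REGISTRY=[Server_IP]:5002/
ETCD_HOST=[Worker_IP]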
Generate the AOTA bundle for Deployment
  1. Identify the list of services that need to be configured, then change to the deploy directory:
    cat config_all.yml | grep ia_
    cd $BUILD_PATH/deploy/
  2. Edit the mgr.json file and replace the include_service list with ia_configmgr_agent:
    sudo cp config.json mgr.json
    sudo vi mgr.json
  3. Edit the eif.json file and replace the include_service list with the list obtained above, but skip ia_configmgr_agent from the list:
    sudo cp config.json eif.json
    sudo vi eif.json
  4. Generate the mgr archive that will be used to launch the Config Manager Agent:
    sudo mkdir mgr
    sudo cp ../eii_config.json mgr/
    sudo python3 generate_eii_bundle.py -t mgr -c mgr.json
  5. Generate the eif archive that will be used to launch AOTA:
    sudo mkdir eif
    sudo cp ../eii_config.json eif/
    sudo python3 generate_eii_bundle.py -t eif -c eif.json
These commands will generate the mgr.tar.gz and eif.tar.gz archives. Save these archives. They will be served through a Python* HTTP server and ThingsBoard* to the Worker to launch AOTA.
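As a rough sketch of steps 2 and 3, the include_service list in mgr.json keeps only the Config Manager Agent, while the list in eif.json keeps every other ia_ service reported in step 1. The service names below, other than ia_configmgr_agent, are hypothetical, and the exact layout of the copied config.json may differ by release; leave its other fields untouched:
In mgr.json:  "include_service": ["ia_configmgr_agent"]
In eif.json:  "include_service": ["ia_example_ingestion", "ia_example_analytics", "ia_example_visualizer"]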

Cloud Service: ThingsBoard* Setup

Follow these instructions to set up ThingsBoard* on the Server. It is possible to set up ThingsBoard* on a different host, even one that is not part of the cluster; however, this tutorial describes how to deploy it on AWS.
  1. ThingsBoard* can also be set up on a local host if AWS is not a viable option.
  2. On the ThingsBoard* page, go to Devices. In the device list, click the shield icon to find the credentials of a device:
  3. Save the credentials for Worker provisioning. In this dialog, you have the option to add a custom Access Token instead of using the provided one. Do not forget to click the Save button if you modify the Access Token.

Worker Provisioning

With the Server and the Cloud Service set up, in this section you will set up the Worker. All the steps and commands in this section must be executed on the Worker.
Docker* Provisioning
  1. Configure the Docker* daemon to allow pulling images from the Server:
    1. Edit the /etc/docker/daemon.json file:
      sudo vi /etc/docker/daemon.json
    2. Add the following lines:
      { "insecure-registries": ["<Server_IP>:5002"] }
    3. Restart the Docker service to reload the change:
      sudo service docker restart
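Optionally, you can sanity-check that the Worker can reach the Server registry by pulling one of the images that was pushed earlier; the image name and version below are placeholders:
docker pull <Server_IP>:5002/<docker_image>:<version>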
ThingsBoard* Provisioning
  1. Add the DISPLAY variable to the /etc/environment file. Update the value according to your environment (see the example after this list).
  2. Launch the provisioning binary:
    sudo PROVISION_TPM=auto provision-tc
  3. If the Worker was previously provisioned, the following message will appear. To override the previous cloud configuration, enter Y.
  4. Select ThingsBoard* as the cloud service by entering 3 and pressing Enter.
  5. Provide the IP of the Server:
  6. When asked for the server port, press Enter to use the default value 1883.
  7. When asked about the provision type, choose Token Authentication.
  8. Enter the device token extracted in the Cloud Service: ThingsBoard* Setup section.
  9. When asked about Configure TLS, enter N.
  10. When asked about signature checks for OTA packages, enter N.
    The script will start the Intel® Manageability Services. When the script finishes, you will be able to interact with the device via the ThingsBoard* dashboard.
If at any time the cloud service configuration needs to be changed or updated, you must run the provisioning script again.
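A minimal sketch of the DISPLAY entry from step 1 in /etc/environment, assuming the default local X display; adjust the value for your setup:
DISPLAY=:0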
Configure Reference Implementation
  1. Create the necessary folders and files for the Reference Implementation configuration:
    sudo mkdir -p /opt/intel/eii/local_storage
    sudo touch /opt/intel/eii/local_storage/credentials.env
    sudo touch /opt/intel/eii/local_storage/cloud_dashboard.env
    sudo mkdir /opt/intel/eii/local_storage/saved_images
    sudo mkdir /opt/intel/eii/local_storage/saved_videos
    sudo mkdir /opt/intel/eii/local_storage/alert_logs
  2. Add the AWS* cloud credentials and ThingsBoard* credentials into credentials.env and cloud_dashboard.env, created above. These are the credentials that you would usually add on the Configuration page of the Reference Implementation web interface (a sketch with placeholder values is shown after this list):
    1. In the /opt/intel/eii/local_storage/credentials.env file, add the following variables along with the values you would have added in the web interface for AWS* credentials:
      AWS_ACCESS_KEY=
      AWS_SECRET_ACCESS_KEY=
      AWS_BUCKET_NAME=
    2. (Optional) In the /opt/intel/eii/local_storage/cloud_dashboard.env file, add the following variables along with the values you would have added in the web interface for ThingsBoard* credentials:
      HOST=
      PORT=
      ACCESS_TOKEN=
      This step is optional. Execute it if you wish to continue receiving updates in the cloud dashboard and have the Reference Implementation save the events on AWS* storage.
  3. Add the ThingsBoard* public certificate, used when setting up ThingsBoard*, to /opt/intel/eii/local_storage with the command:
    cp server.pem /opt/intel/eii/local_storage/server_pub_tb.pem
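For reference, both files from step 2 contain simple KEY=value lines filled with your own values. The placeholders below illustrate credentials.env (first three lines) and cloud_dashboard.env (last three lines); never publish real credentials:
AWS_ACCESS_KEY=<your_aws_access_key>
AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
AWS_BUCKET_NAME=<your_bucket_name>
HOST=<thingsboard_host>
PORT=<thingsboard_port>
ACCESS_TOKEN=<thingsboard_device_access_token>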

Perform AOTA

In the Multi-Node Deployment section you created the AOTA mgr.tar.gz and eif.tar.gz bundles.
On the Server, go to the path where those files are located, and, for development purposes only, launch a Python* HTTP server:
python3 -m http.server 5003
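Before triggering AOTA, you can optionally confirm from the Worker that the bundles are reachable through this HTTP server, assuming the archives sit in the directory being served:
curl -I http://[Server_IP]:5003/mgr.tar.gz
curl -I http://[Server_IP]:5003/eif.tar.gz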
  1. Go to the ThingsBoard* page, open Dashboard, and select Intel Fleet Manager v2.0:
  2. Select the device that you chose in the Cloud Service: ThingsBoard* Setup section, then click on the Trigger AOTA button:
  3. Once the Trigger AOTA dialog opens, complete each field per the information below:
    App: docker-compose
    Command: up
    Container Tag: mgr
    Fetch: Enter the address of the HTTP server that was set up at the start of this section.
    Leave the other fields empty and click on the Send button.
    In the step above, the Worker will access the Server through the local HTTP server to fetch the mgr bundle.
    In the device telemetry log, you can see that mgr was fetched from the local server and was deployed successfully:
  4. Click on the Trigger AOTA button again and complete each field per the information below:
    App: docker-compose
    Command: up
    Container Tag: eif
    Fetch: Enter the address of the HTTP server that was set up at the start of this section.
    Leave the other fields empty and click on the Send button.
    In the step above, the Worker will access the Server through the local HTTP server to fetch the eif bundle.
    In the device telemetry log, you can see that eif bundle was fetched from the local server and was deployed successfully:
To stop the application on the Worker, trigger the AOTA events again and set Command to down instead of up.
To verify that the EIF Reference Implementation was successfully deployed on the new node, check the list of running containers with the command:
docker ps
The output will be similar to the following snapshot:
Because the Reference Implementation is launched with test videos, which are available only on the Server and are not deployed on the Worker, it will crash on first launch. To fix this, you need to find out which test video must be copied from the Server to the Worker, and where.
To find out which video to copy, first identify all the Video Ingestor/Analytics Docker* containers that were launched for the RI with the command above, and for each of them run the following command:
docker logs <container_id>
You will see output similar to:
To identify the location where these videos need to be copied, run the following command for each container:
docker inspect -f '{{ .Mounts }}' <container_id>
You will see output similar to:
The name will be different based on the installed Reference Implementation.
The videos were deployed along with the Reference Implementation on the Server. Search for them in the installation folder and copy them to the folder shown above. Once the videos are in the right place, trigger AOTA with a 'down' event and then another one with an 'up' event.
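A minimal sketch of the copy step, using placeholder names; the actual video file name comes from the docker logs output and the destination folder comes from the docker inspect output above:
# On the Server: locate the test video inside the Reference Implementation installation folder
find /path/to/<RI_name> -name "<video_file_name>"
# Copy it to the mount path reported by docker inspect on the Worker
scp <path_to_video_on_Server> <user>@<Worker_IP>:<mount_path_from_docker_inspect>/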
If the Visualizer does not appear, run the following command:
xhost +
Verify the Triggered AOTA Event
Once the AOTA event is triggered, you can verify the log of the triggered call. This can be one of the verification tasks done during the development phase.
  1. Go to the Worker and run the following commands to see the logs:
    journalctl -fu dispatcher & journalctl -fu cloudadapter
  2. Note that the event logs on the ThingsBoard* server show which commands have been run.
If the event log does not appear, follow these steps:
  • Change the logging level from ERROR to DEBUG everywhere in these files (a sketch using sed is shown after this list):
    /etc/intel-manageability/public/dispatcher-agent/logging.ini
    /etc/intel-manageability/public/cloudadapter-agent/logging.ini
  • Run the commands:
    sudo systemctl restart dispatcher
    sudo systemctl restart cloudadapter
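A minimal sketch of the logging change, assuming the logging.ini files use the literal level name ERROR; the -i.bak option keeps a backup copy of each file:
sudo sed -i.bak 's/ERROR/DEBUG/g' /etc/intel-manageability/public/dispatcher-agent/logging.ini
sudo sed -i.bak 's/ERROR/DEBUG/g' /etc/intel-manageability/public/cloudadapter-agent/logging.ini
sudo systemctl restart dispatcher
sudo systemctl restart cloudadapter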
