Documentation

  • 2022.1
  • 03/23/2022
  • Public Content

Update an Application-Over-The-Air

Application-Over-The-Air (AOTA) updates enable cloud-to-edge manageability of application services running on Edge Insights for Fleet (EIF) enabled systems through the Device Manageability component. Device Manageability is a software component that provides SOTA, FOTA, and AOTA updates, plus a few system operations. For the EIF use case, only the AOTA features of Device Manageability are validated; they are supported through the ThingsBoard* cloud-based management front-end service.
The following sections walk you through setting up ThingsBoard*, establishing connectivity with the target systems, and updating applications on those systems.
Device Manageability was previously named Turtle Creek. Remnants of the previous name still exist in some components; the name replacement is ongoing and will be completed in future releases.

Install Device Manageability Functionality

You need two hosts to run AOTA. One host is the server on which an EIF Reference Implementation will be installed, and one host is the worker on which the Reference Implementation will be deployed through AOTA for execution. In this guide, the server host will be referred to as Server and the worker host as Worker.
Similarly, $BUILD_PATH refers to the build ingredients directory of EIF where most of the commands will be executed. Depending on which EIF Reference Implementation is installed, $BUILD_PATH has the following form: /path/to/<RI_name>/<RI_name><release_version>/IEdgeInsights/build. For the exact values of <RI_name> and <release_version> for your setup, refer to the respective Reference Implementation documentation.
Server Prerequisites
  1. Refer to Requirements for the hardware requirements.
  2. Ubuntu* 18.04.3 LTS
  3. One of the EIF Reference Implementations is installed.
Worker Prerequisites
  1. Refer to Reference Implementations for the specific hardware requirements for the worker host.
  2. Ubuntu* 18.04.3 LTS
  3. Install the latest Docker CLI/Docker daemon by following the
    Install using the repository
    and
    Install Docker Engine
    sections at: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
    Then configure Docker to run without sudo by following the
    Manage Docker as a non-root user
    instructions at: https://docs.docker.com/engine/install/linux-postinstall/.
  4. If your Worker is running behind an HTTP/S proxy server, perform the following steps; otherwise, skip this step.
    1. Configure proxy settings for the Docker* client to connect to internet and for containers to access the internet by following the details at: https://docs.docker.com/network/proxy/.
    2. Configure proxy settings for the Docker* daemon by following the steps at: https://docs.docker.com/config/daemon/systemd/#httphttps-proxy.
  5. Install the docker-compose tool by following the steps at: https://docs.docker.com/compose/install/#install-compose.
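The Worker prerequisites above can be sanity-checked with a small shell helper. This is only a sketch: it verifies that the binaries are on PATH, not that the non-root Docker setup or proxy configuration is correct.

```shell
# Minimal prerequisite check for the Worker host (sketch only).
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

check docker
check docker-compose

# Confirm Docker runs without sudo once the non-root setup is done:
# docker run --rm hello-world
```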
All Device Manageability devices can be controlled using a correctly set up cloud service application. Follow the steps below to install Device Manageability on Server:
  1. Go to the manageability folder and run the commands:
    sudo chmod 775 install_tc.sh
    sudo ./install_tc.sh
    The manageability folder should be
    $BUILD_PATH/../../manageability/
    for an EIF Reference Implementation.
  2. After reading all the licenses, press
    q
    once to finish.
  3. Accept the license by typing Y.
    After installation is complete, you should see terminal output similar to the following:
  4. Once Device Manageability has been installed successfully, edit the
    /etc/intel_manageability.conf
    file and update the required values.
    sudo vi /etc/intel_manageability.conf
    1. Under the
      <all> </all>
      section, change dbs from ON to OFF.
      DBS stands for Docker* Bench Security. This feature of Device Manageability is not used for EIF.
    2. Add the IP endpoint for the developer files to the trustedRepositories:
      <trustedRepositories>http://[Server_IP]:5003</trustedRepositories>
    3. Save and exit. You must restart the Server before these changes can take effect.
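The two configuration edits above can also be scripted. The sketch below uses GNU sed on a local sample file that mimics only the relevant fragment of /etc/intel_manageability.conf; the IP address is a placeholder for your Server's address.

```shell
# Sample fragment standing in for /etc/intel_manageability.conf (sketch).
cat > intel_manageability.conf <<'EOF'
<all>
  <dbs>ON</dbs>
</all>
EOF

SERVER_IP="192.168.1.10"   # placeholder: your Server's address

# 1. Turn Docker* Bench Security off:
sed -i 's|<dbs>ON</dbs>|<dbs>OFF</dbs>|' intel_manageability.conf

# 2. Add the trusted repository endpoint inside <all> (GNU sed newline):
sed -i "s|</all>|  <trustedRepositories>http://${SERVER_IP}:5003</trustedRepositories>\n</all>|" intel_manageability.conf

cat intel_manageability.conf
```

On a real system, run the same sed commands with sudo against /etc/intel_manageability.conf and restart the Server afterwards.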
Repeat these steps on the Worker machine: create an archive of the manageability folder from step 1, copy it to the Worker machine, and run the same installation steps there.

Multi-Node Deployment

EIF deployment on multiple nodes requires the use of the Docker* registry. The following sections outline some of the commands to be run on the Server and on any newly added Workers.
Execute the following steps on the Server.
Configure Docker* Registry
  1. Launch the local Docker* registry:
    docker run -d -p 5002:5000 --name registry --restart unless-stopped registry:2
  2. Update the Docker* registry URL in the
    DOCKER_REGISTRY
    variable, with the
    localhost:5002/
    value:
    sudo vi $BUILD_PATH/.env
  3. Identify the images that need to be tagged:
    cat docker-compose-push.yml | grep container_name
  4. Identify the version of the Docker* images:
    docker images | grep <docker_image_from_above_command>
  5. Tag each image with the following command:
    docker tag <docker_image>:<version> localhost:5002/<docker_image>:<version>
  6. Push the Reference Implementation images to the registry:
    docker-compose -f docker-compose-push.yml push
  7. Identify the names and versions, and tag the etcd and etcd_provision images too, as shown above:
    docker images | grep etcd
  8. Push the etcd and etcd_provision images to the Docker* registry:
    docker push localhost:5002/<image_name>:<version>
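The tag-and-push steps above can be sketched as a single loop. The registry address matches the one launched in step 1; the image:version pairs are illustrative placeholders, so build the real list from docker-compose-push.yml and `docker images`. The loop below is a dry run that only prints the commands.

```shell
# Sketch: tag and push each image to the local registry in one pass.
REGISTRY="localhost:5002"

retag() {
  # Compute the registry-qualified tag for an image/version pair.
  echo "${REGISTRY}/$1:$2"
}

# Placeholder image:version pairs -- replace with your own.
for entry in ia_video_ingestion:2.6 ia_video_analytics:2.6; do
  image="${entry%%:*}"
  version="${entry##*:}"
  target="$(retag "$image" "$version")"
  # Dry run: print the commands; drop the echoes on a Docker host.
  echo "docker tag ${entry} ${target}"
  echo "docker push ${target}"
done
```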
Configure ETCD* as leader
  1. Update the
    ETCD_HOST
    variable with the IP of Server, and
    ETCD_NAME
    variable with value
    leader
    in the
    .env
    file:
    sudo vi $BUILD_PATH/.env
  2. Run the provisioning script:
    cd $BUILD_PATH/provision
    sudo ./provision.sh ../docker-compose.yml
Generate the provisioning bundle
  1. Update the Docker* registry URL in
    DOCKER_REGISTRY
    variable, with the IP of Server and port 5002, <Server_IP>:5002/ , and
    ETCD_NAME
    variable with the
    worker
    value:
    sudo vi $BUILD_PATH/.env
  2. Generate the provisioning archive:
    cd $BUILD_PATH/deploy/
    sudo python3 generate_eii_bundle.py -p
This command will generate a
worker_provisioning.tar.gz
archive. Save this archive. It will be deployed on the Worker in the following steps.
Generate the AOTA bundle
  1. Identify the list of services that need to be configured:
    cat ../config_all.yml | grep ia_
  2. Edit the
    config.json
    file and replace the
    include_service
    list with the list of services from above command:
    sudo vi config.json
  3. Generate the eii_bundle archive that will be used to launch AOTA.
    sudo python3 generate_eii_bundle.py
This command will generate an
eii_bundle.tar.gz
archive. Save this archive. It will be served through a Python* HTTP server and ThingsBoard* to the Worker to launch AOTA.
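The include_service edit in step 2 can be sketched as a small transformation: turn the `grep ia_` output into the JSON-style array that config.json expects. The sample input below mimics the grep output and the service names are placeholders.

```shell
# Sample stand-in for the `cat ../config_all.yml | grep ia_` output.
cat > services.txt <<'EOF'
  ia_video_ingestion:
  ia_video_analytics:
EOF

# Strip indentation and trailing colons, quote each name, join with commas.
LIST=$(awk '{gsub(/[ \t:]/, ""); items = items (items ? "," : "") "\"" $0 "\""} END {print items}' services.txt)
echo "\"include_service\": [${LIST}]"
# → "include_service": ["ia_video_ingestion","ia_video_analytics"]
```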

Cloud Service: ThingsBoard* Setup

Follow these instructions to set up ThingsBoard on the Server. It is possible to set up ThingsBoard on a different host, even one that is not part of the cluster; however, this tutorial describes how to deploy it on the Server.
  1. On the ThingsBoard* page, go to
    Devices
    , add a new device with name
    Worker
    and a new profile with profile name
    INB
    .
  2. On the device list, click on the shield icon to find out the credentials of this device:
  3. Save the credentials for Worker provisioning. In this dialog you have the option to add a custom Access Token instead of using the provided one. Do not forget to click the
    Save
    button if you modify the Access Token.
  4. Set up the
    AOTA
    dashboard:
    1. Go to the Widgets Library in the ThingsBoard* menu.
    2. Click on the
      +
      button and select
      Import widgets bundle
      .
    3. Browse to
      /usr/share/cloudadapter-agent/thingsboard/
      and select the
      intel_manageability_widgets_version_3.3.json
      file.
    4. Click the
      Import
      button.
    5. Go to
      Dashboards
      in the ThingsBoard* menu.
    6. Click on the
      +
      button and select
      Import dashboard
      .
    7. Browse to
      /usr/share/cloudadapter-agent/thingsboard/
      and select the
      intel_manageability_devices_version_3.3.json
      file.
    8. Click the
      Import
      button.

Worker Provisioning

With the Server and the Cloud Service set up, in this section you will set up the Worker. All the steps and commands in this section must be executed on the Worker.
Docker* Provisioning
  1. Copy the
    worker_provisioning.tar.gz
    archive created in the Multi-Node Deployment section to the Worker host.
  2. Untar the archive:
    tar -xf worker_provisioning.tar.gz
  3. Run the provisioning script:
    cd worker_provisioning/provision
    sudo ./provision.sh
  4. Configure the Docker* daemon to allow pulling images from Server:
    1. Edit the
      /etc/docker/daemon.json
      file:
      sudo vi /etc/docker/daemon.json
    2. Add the following lines:
      { "insecure-registries": ["<Server_IP>:5002"] }
    3. Restart the Docker service to reload the change:
      sudo service docker restart
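The daemon.json edit above can be generated and verified before copying it into place. This is a sketch: the IP address is a placeholder for your Server's address, and if /etc/docker/daemon.json already has content you must merge rather than overwrite.

```shell
# Sketch: generate the daemon.json fragment locally and verify it.
SERVER_IP="192.168.1.10"   # placeholder: your Server's address

printf '{\n  "insecure-registries": ["%s:5002"]\n}\n' "$SERVER_IP" > daemon.json

# On the Worker, merge/copy this into /etc/docker/daemon.json, then:
# sudo service docker restart
# docker info | grep -A1 "Insecure Registries"   # should list the registry

grep "insecure-registries" daemon.json && echo "daemon.json OK"
```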
ThingsBoard* Provisioning
  1. Add the
    DISPLAY
    variable to the
    /etc/environment
    file.
    Update the value according to your environment.
  2. Launch the provisioning binary:
    sudo PROVISION_TPM=auto provision-tc
  3. If the Worker was previously provisioned, the following message will appear. To override the previous cloud configuration, enter
    Y
    .
  4. Select ThingsBoard* as the cloud service by entering
    3
    and
    Enter
    .
  5. Provide the IP of the Server:
  6. When asked for the server port, press
    Enter
    to use the default value
    1883
    .
  7. When asked about provision type, choose
    Token Authentication
    .
  8. Enter the device token extracted in the Cloud Service: ThingsBoard* Setup section.
  9. When asked about
    Configure TLS
    , enter
    N
    .
  10. When asked about signature checks for OTA packages, enter
    N
    .
    The script will start the Intel® Manageability Services. When the script finishes, you will be able to interact with the device via the ThingsBoard* dashboard.
If at any time the cloud service configuration needs to be changed or updated, you must run the provisioning script again.
Configure Reference Implementation
  1. Create the necessary folders and files for the Reference Implementation configuration:
    sudo mkdir -p /opt/intel/eii/local_storage
    sudo touch /opt/intel/eii/local_storage/credentials.env
    sudo touch /opt/intel/eii/local_storage/cloud_dashboard.env
    sudo mkdir /opt/intel/eii/local_storage/saved_images
    sudo mkdir /opt/intel/eii/local_storage/saved_videos
    sudo chown eiiuser:eiiuser -R /opt/intel/eii
  2. Add the AWS* cloud credentials and ThingsBoard* credentials into
    credentials.env
    and
    cloud_dashboard.env
    , created above. These are the credentials that you would usually add into the Configuration page of the webpage of the Reference Implementation:
    1. In the
      /opt/intel/eii/local_storage/credentials.env
      file, add the following variables along with the values you would've added in the webpage for AWS* credentials:
      AWS_ACCESS_KEY=
      AWS_SECRET_ACCESS_KEY=
      AWS_BUCKET_NAME=
    2. (Optional) In the
      /opt/intel/eii/local_storage/cloud_dashboard.env
      file, add the following variables along with the values you would've added in the webpage for ThingsBoard* credentials:
      HOST=
      PORT=
      ACCESS_TOKEN=
      This step is optional. Execute it only if you wish to continue receiving updates in the cloud dashboard and have the Reference Implementation save events on AWS* storage.
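The two credential files can be populated as sketched below. Every value is a placeholder; substitute your real AWS* and ThingsBoard* credentials, and note the files are written locally here before being copied into /opt/intel/eii/local_storage/.

```shell
# Sketch: fill in the credential files created in step 1 (placeholders only).
cat > credentials.env <<'EOF'
AWS_ACCESS_KEY=<your_access_key>
AWS_SECRET_ACCESS_KEY=<your_secret_key>
AWS_BUCKET_NAME=<your_bucket>
EOF

cat > cloud_dashboard.env <<'EOF'
HOST=<thingsboard_host>
PORT=<thingsboard_port>
ACCESS_TOKEN=<device_access_token>
EOF

# On the Worker, place them under /opt/intel/eii/local_storage/:
# sudo cp credentials.env cloud_dashboard.env /opt/intel/eii/local_storage/
```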

Perform AOTA

In the Multi-Node Deployment section, you created the
AOTA
eii_bundle.tar.gz
bundle.
On the Server, go to the path where that file is located, and, for development purposes only, launch a Python* HTTP server:
python3 -m http.server 5003
  1. Go to the ThingsBoard* page,
    Dashboard
    , and select
    INB-Intel Manageability Devices
    :
  2. Click on the
    Trigger AOTA
    button:
  3. Once the Trigger AOTA dialog opens, complete each field with the information below:
    App:
    docker-compose
    Command:
    up
    Container Tag:
    eii_bundle
    Fetch:
    enter the URL of the HTTP server set up at the start of this section (for example, http://<Server_IP>:5003/eii_bundle.tar.gz).
    Leave the other sections empty and click on the
    Send
    button.
    In the step above, the Worker will access the Server through the local HTTP server to fetch the eii_bundle.
    In the ThingsBoard* log, you can see that eii_bundle was fetched from the local server and was deployed successfully:
To stop the application on Worker, trigger another AOTA event and set
Command
to
down
instead of
up
.
To verify that the EIF Reference Implementation was successfully deployed on the new node, check the list of running containers with the command:
docker ps
The output will be similar to the following snapshot:
As the Reference Implementation is launched with test videos, which are available only on the Server and are not deployed on Worker, it will crash on first launch. To fix this, you need to find out what test video needs to be copied from the Server to the Worker and where.
To find out which video to copy, first identify all the Video Ingestors/Analytics Docker* images that were launched for the RI with the command above, and for each of them run the following command:
docker logs <container_id>
You will see output similar to:
To identify the location where these videos need to be copied, run the command for each container as above:
docker inspect -f '{{ .Mounts }}' <container_id>
You will see output similar to:
The name will be different based on the installed Reference Implementation.
The videos were deployed along with the Reference Implementation on the Server. Search for them in the installation folder and copy them to the folder shown above. Once the videos are in the right place, trigger an AOTA event with Command set to down, followed by another with up.
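The video-copy step can be sketched as follows. Every value is a placeholder: the video name, the Worker address, and especially the destination, which must be the mount source reported by `docker inspect` on your own setup, not a fixed EIF path.

```shell
# Sketch: copy a test video from the Server to the Worker (placeholders only).
VIDEO="<test_video_file>"
DEST="<mount_source_from_docker_inspect>"

# On the Server, locate the video in the RI installation folder:
# find /path/to/<RI_name> -name "*.avi" -o -name "*.mp4"

# Copy it to the Worker:
# scp <video_path> user@<Worker_IP>:"$DEST/"

echo "copy ${VIDEO} to ${DEST}"
```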
If the Visualizer does not appear, run the following command:
xhost +
Verify the Triggered AOTA Event
Once the AOTA event is triggered, you can verify the log of the triggered call. This can be one of the verification tasks done during the development phase.
  1. Go to Worker and run the following command to see the logs:
    journalctl -fu dispatcher
    journalctl -fu cloudadapter
  2. Note the event logs on the ThingsBoard* server show which commands have been run.
If the event log does not appear, follow these steps:
  • Change the logging level from ERROR to DEBUG everywhere in these files:
    /etc/intel-manageability/public/dispatcher-agent/logging.ini
    /etc/intel-manageability/public/cloudadapter-agent/logging.ini
  • Run the commands:
    sudo systemctl restart dispatcher
    sudo systemctl restart cloudadapter

Product and Performance Information

1

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.