Update an Application-Over-The-Air
Application-Over-The-Air (AOTA) updates enable cloud-to-edge manageability of application services running on Edge Insights for Fleet (EIF) enabled systems through the Device Manageability component. Device Manageability is a software suite that includes SOTA, FOTA, AOTA, and a few system operations.
For the EIF use case, only the AOTA features of Device Manageability are validated and supported, through the ThingsBoard* cloud-based management front-end service.
The following sections walk you through setting up ThingsBoard*, establishing connectivity with the target systems, and updating applications on those systems.
Device Manageability was previously named Turtle Creek. Remnants of the previous name still exist in some components. The name replacement is ongoing and will be completed in future releases.
Install Device Manageability Functionality
You need two hosts to run AOTA. One host is the server, on which an EIF Reference Implementation will be installed; the other is the worker, on which the Reference Implementation will be deployed through AOTA for execution.
In this guide, the server host will be referred to as Server and the worker host will be referred to as Worker.
Similarly, $BUILD_PATH refers to the build ingredients directory of EIF where most of the commands will be executed. Depending on which EIF Reference Implementation is installed, $BUILD_PATH has the following form: /path/to/<RI_name>/<RI_name><release_version>/IEdgeInsights/build. For the exact values of <RI_name> and <release_version> for your setup, refer to the respective Reference Implementation documentation.
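Optionally, you can export this path as a shell variable so that the commands below can be pasted as written. The path components are the same placeholders described above and must be substituted with real values before running:
# Substitute <RI_name> and <release_version> with the values for your Reference Implementation
export BUILD_PATH=/path/to/<RI_name>/<RI_name><release_version>/IEdgeInsights/build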
Server Prerequisites
- Refer to Requirements for the hardware requirements.
- Ubuntu* 18.04.3 LTS
- One of the EIF Reference Implementations is installed.
Worker Prerequisites
- Refer to Reference Implementations for the specific hardware requirements for the worker host.
- Ubuntu* 18.04.3 LTS
- Install the latest Docker CLI/Docker daemon by following the sections "Install using the repository" and "Install Docker Engine" at: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository. Run Docker without sudo by following the "Manage Docker as a non-root user" instructions at: https://docs.docker.com/engine/install/linux-postinstall/.
- If your Worker is running behind an HTTP/S proxy server, perform the following steps; otherwise, skip them.
- Configure proxy settings for the Docker* client to connect to the internet and for containers to access the internet by following the details at: https://docs.docker.com/network/proxy/.
- Configure proxy settings for the Docker* daemon by following the steps at: https://docs.docker.com/config/daemon/systemd/#httphttps-proxy. A proxy configuration sketch follows this list.
- Install the docker-compose tool by following the steps at: https://docs.docker.com/compose/install/#install-compose.
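If a proxy is required, the client and daemon configuration typically ends up looking like the minimal sketch below. The proxy address http://proxy.example.com:3128 is a placeholder; adjust the host, port, and no-proxy list to your environment.
# ~/.docker/config.json (Docker client and containers), minimal sketch:
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1,<Server_IP>"
    }
  }
}
# /etc/systemd/system/docker.service.d/http-proxy.conf (Docker daemon), minimal sketch:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,<Server_IP>"
# Reload and restart the daemon after editing the drop-in:
sudo systemctl daemon-reload
sudo systemctl restart docker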
All Device Manageability devices can be controlled using a correctly set up cloud service application. Follow the steps below to install Device Manageability on Server:
- Go to the manageability folder and run the commands:
sudo chmod 775 install_tc.sh
sudo ./install_tc.sh
The manageability folder should be $BUILD_PATH/../../manageability/ for an EIF Reference Implementation.
- After reading all the licenses, press q once to finish.
- Accept the license by typing Y. After installation is complete, you should see terminal output similar to the following:
- Once Device Manageability has been installed successfully, edit the /etc/intel_manageability.conf file and update the required values:
sudo vi /etc/intel_manageability.conf
- Under the <all> </all> section, change dbs from ON to OFF. DBS stands for Docker* Bench Security; this feature of Device Manageability is not used for EIF.
- Add the IP endpoint for the developer files to the trustedRepositories (a configuration sketch follows this list):
<trustedRepositories>
    http://[Server_IP]:5003
</trustedRepositories>
- Save and exit. You must restart the Server before these changes can take effect.
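For orientation only, the fragment of /etc/intel_manageability.conf touched by the two edits above might look roughly like this after editing. Element names and nesting can differ between Device Manageability releases, so adjust to the file actually installed on your system rather than copying this verbatim:
<all>
    <!-- other settings unchanged -->
    <!-- Docker Bench Security checks are not used for EIF -->
    <dbs>OFF</dbs>
    <!-- Allow AOTA fetches from the developer HTTP server on the Server host -->
    <trustedRepositories>
        http://[Server_IP]:5003
    </trustedRepositories>
</all>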
Repeat these steps on the Worker machine: create an archive of the manageability folder from step 1, copy it to the Worker machine, and repeat the installation steps.
Multi-Node Deployment
EIF deployment on multiple nodes requires the use of the Docker* registry. The following sections outline some of the commands to be run on the Server and on any newly added Workers.
Execute the following steps on the Server.
Configure Docker* Registry
- Launch the local Docker* registry:
docker run -d -p 5002:5000 --name registry --restart unless-stopped registry:2
- Update the Docker* registry URL in the DOCKER_REGISTRY variable with the localhost:5002/ value:
sudo vi $BUILD_PATH/.env
- Identify the images that need to be tagged:
cat docker-compose-push.yml | grep container_name
- Identify the version of the Docker* images:
docker images | grep <docker_image_from_above_command>
- Tag each image with the following command (see the tagging sketch after this list):
docker tag <docker_image>:<version> localhost:5002/<docker_image>:<version>
- Push the Reference Implementation images to the registry:
docker-compose -f docker-compose-push.yml push
- Identify the names and versions of the etcd and etcd_provision images and tag them too, as shown above:
docker images | grep etcd
- Push the etcd and etcd_provision images to the Docker* registry:
docker push localhost:5002/<image_name>:<version>
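As a concrete illustration of the tag-and-push step for one hypothetical image (the name ia_video_ingestion and version 2.4 are placeholders, not values from your build):
# Tag the locally built image so it points at the local registry
docker tag ia_video_ingestion:2.4 localhost:5002/ia_video_ingestion:2.4
# Push it to the registry started in step 1
docker push localhost:5002/ia_video_ingestion:2.4
# Optionally confirm what the registry now holds (the v2 registry exposes a catalog endpoint)
curl http://localhost:5002/v2/_catalog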
Configure ETCD* as leader
- Update the ETCD_HOST variable with the IP of the Server, and the ETCD_NAME variable with the value leader, in the .env file (see the sketch after this list):
sudo vi $BUILD_PATH/.env
- Run the provisioning script:
cd $BUILD_PATH/provision
sudo ./provision ../docker-compose.yml
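After these edits, the relevant lines of $BUILD_PATH/.env look roughly like the following. The IP address is a placeholder, and all other variables in the file stay untouched:
# Local Docker registry started on the Server
DOCKER_REGISTRY=localhost:5002/
# This node runs the ETCD leader
ETCD_HOST=<Server_IP>
ETCD_NAME=leader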
Generate the provisioning bundle
- Update the Docker* registry URL in the DOCKER_REGISTRY variable with the IP of the Server and port 5002 (<Server_IP>:5002/), and the ETCD_NAME variable with the worker value:
sudo vi $BUILD_PATH/.env
- Generate the provisioning archive:
cd $BUILD_PATH/deploy/
sudo python3 generate_eii_bundle.py -p
This command will generate a worker_provisioning.tar.gz archive. Save this archive. It will be deployed on the Worker in the following steps.
Generate the AOTA bundle
- Identify the list of services that need to be configured:
cat ../config_all.yml | grep ia_
- Edit the config.json file and replace the include_service list with the list of services from the above command (a sketch follows this subsection):
sudo vi config.json
- Generate the eii_bundle archive that will be used to launch AOTA:
sudo python3 generate_eii_bundle.py
This command will generate an eii_bundle.tar.gz archive. Save this archive. It will be served through a Python* HTTP server and ThingsBoard* to the Worker to launch AOTA.
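As an illustration only, if the grep above reported two services, the service list in config.json might end up looking like the fragment below. The service names are placeholders, and the key is shown here as include_services; keep whichever key spelling and other keys already appear in the config.json shipped with your EII release:
{
    "include_services": ["ia_video_ingestion", "ia_visualizer"]
}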
Cloud Service: ThingsBoard* Setup
Follow these instructions to set up ThingsBoard* on the Server. It is possible to set up ThingsBoard* on a different host, even one that is not part of the cluster; however, this tutorial describes how to deploy it on the Server.
- Follow the Installation section of Set Up ThingsBoard* Local Cloud Data.
- On the ThingsBoard* page, go to Devices, add a new device with the name Worker and a new profile with the profile name INB.
- On the device list, click on the shield icon to find out the credentials of this device:
- Save the credentials for Worker provisioning. In this dialog, you have the option to add a custom Access Token instead of using the provided one. Do not forget to click the Save button if you modify the Access Token.
- Set up the AOTA dashboard:
- Go to the Widgets Library in the ThingsBoard* menu.
- Click on the + button and select Import widgets bundle.
- Browse to /usr/share/cloudadapter-agent/thingsboard/ and select the intel_manageability_widgets_version_3.3.json file.
- Click the Import button.
- Go to Dashboards in the ThingsBoard* menu.
- Click on the + button and select Import dashboard.
- Browse to /usr/share/cloudadapter-agent/thingsboard/ and select the intel_manageability_devices_version_3.3.json file.
- Click the Import button.
Worker Provisioning
With the Server and the Cloud Service set up, in this section you will set up the Worker. All the steps and commands in this section must be executed on the Worker.
Docker* Provisioning
- Copy the worker_provisioning.tar.gz archive created in the Multi-Node Deployment section to the Worker host.
- Untar the archive:
tar -xf worker_provisioning.tar.gz
- Run the provisioning script:
cd worker_provisioning/provision
sudo ./provision.sh
- Configure the Docker* daemon to allow pulling images from Server:
- Edit the /etc/docker/daemon.json file:
sudo vi /etc/docker/daemon.json
- Add the following lines (a pull check to verify this configuration follows this list):
{
    "insecure-registries": ["<Server_IP>:5002"]
}
- Restart the Docker service to reload the change:
sudo service docker restart
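To confirm the Worker can now pull from the Server's registry, you can run a quick check; substitute one of the image names and versions you pushed earlier:
docker pull <Server_IP>:5002/<image_name>:<version>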
ThingsBoard* Provisioning
- Add the DISPLAY variable to the /etc/environment file. Update the value according to your environment.
- Launch the provisioning binary:
sudo PROVISION_TPM=auto provision-tc
- If the Worker was previously provisioned, the following message will appear. To override the previous cloud configuration, enter Y.
- Select ThingsBoard* as the cloud service by entering 3 and pressing Enter.
- Provide the IP of the Server:
- When asked for the server port, press Enter to use the default value 1883.
- When asked about the provision type, choose Token Authentication.
- Enter the device token extracted in the Cloud Service: ThingsBoard* Setup section.
- When asked about Configure TLS, enter N.
- When asked about signature checks for OTA packages, enter N. The script will start the Intel® Manageability Services. When the script finishes, you will be able to interact with the device via the ThingsBoard* dashboard.
If at any time the cloud service configuration needs to be changed or updated, you must run the provisioning script again.
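After provisioning, you can quickly confirm that the manageability agents are running before moving on. The dispatcher and cloudadapter service names are the same ones used in the verification section at the end of this guide:
sudo systemctl status dispatcher cloudadapter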
Configure Reference Implementation
- Create the necessary folders and files for the Reference Implementation configuration:
sudo mkdir -p /opt/intel/eii/local_storage
sudo touch /opt/intel/eii/local_storage/credentials.env
sudo touch /opt/intel/eii/local_storage/cloud_dashboard.env
sudo mkdir /opt/intel/eii/local_storage/saved_images
sudo mkdir /opt/intel/eii/local_storage/saved_videos
sudo chown eiiuser:eiiuser -R /opt/intel/eii
- Add the AWS* cloud credentials and ThingsBoard* credentials into credentials.env and cloud_dashboard.env, created above. These are the credentials that you would normally enter on the Configuration page of the Reference Implementation web interface:
- In the /opt/intel/eii/local_storage/credentials.env file, add the following variables along with the values you would have entered in the web interface for the AWS* credentials:
AWS_ACCESS_KEY=
AWS_SECRET_ACCESS_KEY=
AWS_BUCKET_NAME=
- (Optional) In the /opt/intel/eii/local_storage/cloud_dashboard.env file, add the following variables along with the values you would have entered in the web interface for the ThingsBoard* credentials (a sketch of both files follows this list):
HOST=
PORT=
ACCESS_TOKEN=
This step is optional. It must be executed if you wish to continue receiving updates in the cloud dashboard and have the Reference Implementation save the events on AWS* storage.
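As an illustration only, the two files might end up looking like the sketch below. All values are placeholders, not working credentials; use your own AWS* keys and the device access token saved during the ThingsBoard* setup:
# /opt/intel/eii/local_storage/credentials.env
AWS_ACCESS_KEY=<your_aws_access_key>
AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
AWS_BUCKET_NAME=<your_bucket_name>

# /opt/intel/eii/local_storage/cloud_dashboard.env
HOST=<ThingsBoard_server_IP>
PORT=<ThingsBoard_port>
ACCESS_TOKEN=<device_access_token_from_ThingsBoard>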
Perform AOTA
On the Server, go to the path where the eii_bundle.tar.gz archive is located and, for development purposes only, launch a Python* HTTP server:
python3 -m http.server 5003
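Before triggering AOTA, you can optionally check from the Worker that the bundle is reachable. This assumes curl is installed and that eii_bundle.tar.gz sits in the directory being served:
curl -I http://<Server_IP>:5003/eii_bundle.tar.gz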
- Go to the ThingsBoard* page, Dashboard, and select INB-Intel Manageability Devices:
- Click on the Trigger AOTA button:
- Once the Trigger AOTA dialog opens, complete each field per the information below:
App: docker-compose
Command: up
Container Tag: eii_bundle
Fetch: the address of the HTTP server that was set up at the start of this section.
Leave the other fields empty and click on the Send button. In this step, the Worker will access the Server through the local HTTP server to fetch the eii_bundle. In the ThingsBoard* log, you can see that eii_bundle was fetched from the local server and was deployed successfully:
To stop the application on the Worker, trigger another AOTA event and set Command to down instead of up.
To verify that the EIF Reference Implementation was successfully deployed on the new node, check the list of running containers with the command:
docker ps
The output will be similar to the following snapshot:

Because the Reference Implementation is launched with test videos, which are available only on the Server and are not deployed on the Worker, it will crash on first launch. To fix this, you need to find out which test videos must be copied from the Server to the Worker, and where.
To find out which video to copy, first identify all the Video Ingestors/Analytics Docker* images that were launched for the RI with the command above, and for each of them run the following command:
docker logs <container_id>
You will see output similar to:

To identify the location where these videos need to be copied, run the command for each container as above:
docker inspect -f '{{ .Mounts }}' <container_id>
You will see output similar to:

The name will be different based on the installed Reference Implementation.
The videos were deployed along with the Reference Implementation on the Server. Search for them in the installation folder and copy them to the folder shown above. Once the videos are in the right place, trigger an AOTA event with Command set to down and then another with Command set to up.
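A minimal sketch of that copy step, assuming the container log reported an .avi test video and that SSH access from the Server to the Worker is available. The paths, file name, and user name are placeholders to be replaced with the values reported by the commands above:
# On the Server: locate the test video inside the Reference Implementation installation folder
find /path/to/<RI_name> -name "<video_name_from_container_log>.avi"
# Copy it into the host directory that the container mounts on the Worker
# (the Source path shown in the docker inspect output)
scp <path_found_above>/<video_name_from_container_log>.avi <user>@<Worker_IP>:<mount_source_dir>/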
If the Visualizer does not appear, run the following command:
xhost +
Verify the Triggered AOTA Event
Once the AOTA event is triggered, you can verify the log of the triggered call. This can be one of the verification tasks done during the development phase.
- Go to the Worker and run the following commands to see the logs:
journalctl -fu dispatcher
journalctl -fu cloudadapter
- Note the event logs on the ThingsBoard* server show which commands have been run.
If the event log does not appear, follow these steps:
- Change the log level from ERROR to DEBUG everywhere in these files (a sed sketch follows this list):
/etc/intel-manageability/public/dispatcher-agent/logging.ini
/etc/intel-manageability/public/cloudadapter-agent/logging.ini
- Run the commands:
sudo systemctl restart dispatcher
sudo systemctl restart cloudadapter
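One way to make the log-level change non-interactively is sketched below. It assumes the level appears in those files only as the literal string ERROR, so review the files first if unsure:
# Switch both agents to DEBUG logging, then restart them
sudo sed -i 's/ERROR/DEBUG/g' /etc/intel-manageability/public/dispatcher-agent/logging.ini
sudo sed -i 's/ERROR/DEBUG/g' /etc/intel-manageability/public/cloudadapter-agent/logging.ini
sudo systemctl restart dispatcher cloudadapter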