Intelligent Traffic Management Reference Implementation

Version: 2022.1   Published: 08/12/2020  

Last Updated: 08/30/2022

Overview

Intelligent Traffic Management is designed to detect and track vehicles as well as pedestrians and to estimate a safety metric for an intersection. Object tracking recognizes the same object across successive frames, giving the ability to estimate trajectories and speeds of the objects. The reference implementation also detects collisions and near misses. A real-time dashboard visualizes the intelligence extracted from the traffic intersection along with annotated video stream(s).   

This collected intelligence can be used to adjust traffic lights to optimize the traffic flow of the intersection, or to evaluate and enhance the safety of the intersection by allowing emergency services notifications, such as 911 calls, to be triggered by collision detection, reducing emergency response times. 

To run the reference implementation, you will first need to configure the control plane host and the worker node host as described in the Prerequisites section.

Select Configure & Download to download the reference implementation and the software listed below.  

Configure & Download


  • Time to Complete: 30 - 45 minutes
  • Programming Language: Python* 
  • Available Software: 
    • Intel® Distribution of OpenVINO™ toolkit 2021 Release
    • Kubernetes*

Target System Requirements

Control Plane

  • One of the following processors:
    • 6th to 12th Generation Intel® Core™ processors with Iris® Pro Graphics or Intel® HD Graphics
  • At least 32 GB RAM.
  • At least 256 GB hard drive.
  • An Internet connection.
  • Ubuntu* 20.04 LTS Server.

Worker Nodes

  • One of the following processors:
    • 6th to 12th Generation Intel® Core™ processors with Iris® Pro Graphics or Intel® HD Graphics
  • At least 32 GB RAM.
  • At least 256 GB hard drive.
  • An Internet connection.
  • Ubuntu* 20.04 LTS Server.
  • IP camera or pre-recorded video(s).

How It Works

The application uses the inference engine and the Intel® Deep Learning Streamer (Intel® DL Streamer) included in the Intel® Distribution of OpenVINO™ toolkit. The solution is designed to detect and track vehicles and pedestrians and to upload captured data to Amazon Web Services* (AWS*) S3 cloud storage.

How it works is represented by a block diagram. The leftmost block is labeled Monitoring Area with a video camera icon. The raw video input stream in the leftmost block flows into the middle section labeled Edge Applications. The Edge Applications are Intelligent Traffic Management application, InfluxDB database, and Grafana. After the raw video input stream is processed by the Edge Applications, it flows to the rightmost block labeled User Interface.
Figure 1: How It Works

 

The Intelligent Traffic Management application requires the service pods, a database, and a visualizer. Once installation is successful, the application is ready to be deployed using Helm. After deployment, the application pod takes in the virtual or real RTSP stream addresses, performs inference, and sends the metadata for each stream to the InfluxDB* database. In parallel, the visualizer shows the analysis of that metadata, such as detected pedestrians, observed collisions, and the processed video feed.

The application can perform inference on up to 20 channels. In addition, the visualizer can show each feed separately, or all feeds at the same time, using Grafana*. The user can view the output remotely in a browser, provided they are on the same network.

New in this release are Rule Engine and Cloud Connector pods.

  • The Rule Engine analyzes each video frame and its inference results. If a frame matches the configured rules (collision, near miss, overcrowding), the Rule Engine sends it to the Cloud Connector to be uploaded to cloud storage.
  • The Cloud Connector uses Amazon Web Services* cloud storage to save the video captures.
The architecture is represented by a complex block diagram. The upper half of the diagram shows the data flow for the reference implementation (RI). A camera passes data to the RI itself, which is comprised of blocks labelled Video Inference, Analytics, Cloud Connector, Rule Engine, and Dashboard. Next, the data flows to Amazon Web Services in the cloud, where it is displayed using the Amazon Web Services dashboard. The lower half of the diagram shows software and hardware components used by the RI divided into three categories: third-party developed, open source, and Intel developed hardware. Third party components are Kubernetes, Influx DB, Harbor, Grafana, HiveMQ MQTT, and PostgreSQL. The open source component is Linux. The Intel developed hardware is an Edge Compute Node using Intel Atom® processors, Intel® Core™ processors, or Intel® Xeon® processors.
Figure 2: Architecture Diagram

Get Started

Prerequisites

In order to run the latest version of Intelligent Traffic Management, you will need two Linux hosts: one for the Kubernetes control plane and one for the Kubernetes worker node. The following steps describe how to prepare both targets before installing the reference implementation.

  1. Install Docker Engine and the Docker Compose plugin. Run the following commands on both targets:

    • Get the GPG key for the Docker binaries and add the apt repository to your apt sources list:
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
      
      echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null 

       

    • Install Docker packages: 

      sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

       

    • Configure the Docker service. In the Docker daemon configuration file /etc/docker/daemon.json, add the following:

      {
         "exec-opts": [
            "native.cgroupdriver=systemd"
         ],
         "default-ulimits": {
            "nofile": {
               "Name": "nofile",
               "Hard": 65535,
               "Soft": 65535
            },
            "nproc": {
               "Name": "nproc",
               "Hard": 4096,
               "Soft": 4096
            }
         }
      }
      
    • Configure the Docker service proxy (needed only if your network uses a proxy). Update the NO_PROXY setting in the Docker service proxy drop-in files /etc/systemd/system/docker.service.d/http-proxy.conf and /etc/systemd/system/docker.service.d/https-proxy.conf:

      NO_PROXY="PREVIOUS_NO_PROXY, CONTROL_PLANE_IP, WORKER_IP"
    • Both machines should have no_proxy set to include the control plane IP and the worker IP. A minimal example drop-in is shown at the end of this step.

    • Run sudo systemctl daemon-reload and sudo systemctl restart docker.service to apply changes.
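    For reference, a Docker proxy drop-in file such as /etc/systemd/system/docker.service.d/http-proxy.conf generally follows the systemd drop-in format sketched below; the proxy address, port, and IP values are placeholders that must be replaced with values from your environment:

      [Service]
      Environment="HTTP_PROXY=http://<proxy-address>:<proxy-port>"
      Environment="HTTPS_PROXY=http://<proxy-address>:<proxy-port>"
      Environment="NO_PROXY=localhost,127.0.0.1,<CONTROL_PLANE_IP>,<WORKER_IP>"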
  2. Install Helm. Run the following commands on both targets.

    curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
    sudo apt-get install apt-transport-https --yes
    echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
    sudo apt-get update
    sudo apt-get install helm 

     

  3. Install and configure the Kubernetes cluster. Run the following commands on both targets:

    • Get Google key:

      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

       

    • Add kube apt repo:
      echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
      sudo mv ~/kubernetes.list /etc/apt/sources.list.d
    • Disable swap on your machine. (The Kubernetes cluster does not work while swap memory is in use.)
      sudo swapoff -a
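      NOTE: sudo swapoff -a disables swap only until the next reboot. To make the change persistent, a common approach (not part of the original steps) is to comment out the swap entry in /etc/fstab, for example:
      # Comment out any swap entries in /etc/fstab (assumes a standard Ubuntu fstab layout)
      sudo sed -i '/ swap / s/^/#/' /etc/fstab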
    • Install Kubernetes binaries:
      sudo apt-get update && sudo apt-get install -yq kubelet=1.23.4-00 kubeadm=1.23.4-00 kubectl=1.23.4-00 kubernetes-cni

       
  4. Initialize the Kubernetes cluster on the Control Plane machine: 
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    NOTE: Save the kubeadm join command printed at the end of the cluster creation.
     
  5. Configure access to Kubernetes cluster. 
     
    • Current user configuration: 
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    • Root user configuration: 
      sudo su -
      mkdir .kube
      cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      exit
    • Enable kubelet service:
      sudo chmod -R 755 /etc/cni/
      sudo systemctl restart kubelet.service
    • Check kubelet service status using sudo systemctl status kubelet.service (status should be active).
  6. Add network plugin to your Kubernetes cluster:
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
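    To verify that the network plugin started, you can optionally check for the flannel pods (this check is not part of the original steps; depending on the manifest version, the pods run in the kube-system or kube-flannel namespace):
    # List the flannel pods across all namespaces
    kubectl get pods -A | grep flannel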
  7. Check that the current node is ready by using the following command:
    kubectl get nodes -A 

    Output should look like: 
    NAME                STATUS   ROLES                  AGE     VERSION
    Machine1          Ready    control-plane,master   1m     v1.23.4
  8. Join Kubernetes worker node: 
    • If you didn't save the join command in step 4, run the following command on the control plane to generate another token. (If you have the join command, skip this step.)
      kubeadm token create --print-join-command 
    • Run the kubeadm join command on the worker node, for example:
      kubeadm join <Controller_IP>:6443 --token token12456.token12456 --discovery-token-ca-cert-hash sha256:<sha_of_the_kubernetes_certificate> 
    • If the join failed, give kubelet access to the CNI network configuration with the following command and retry: 
      sudo chmod -R 755 /etc/cni/
  9. Configure Kubernetes on the worker side:

    • Create .kube config folder on worker side:

      mkdir $HOME/.kube

       

    • Copy the configuration file from controller to worker node:

      scp /home/controller_user/.kube/config worker_user@worker_ip:/home/worker_user/.kube/

       

    • Restart kubelet service:

      sudo systemctl restart kubelet.service 

       

  10. From both machines, check that the Kubernetes nodes are ready with the command: 
    kubectl get nodes -A

    Output should look similar to: 
    NAME                                    STATUS   ROLES                              AGE     VERSION
    Control-plane-host-name                 Ready    control-plane,master               5m      v1.23.4
    Worker-host-name                        Ready    <none>                             1m      v1.23.4
  11. Assign a role to the worker node from the control plane host:
    kubectl label node <node_name> node-role.kubernetes.io/worker=worker
    Check again to see that the label was placed using the following command:
    kubectl get nodes -A

    Output should look similar to: 
    NAME                                    STATUS   ROLES                              AGE     VERSION
    Control-plane-host-name                 Ready    control-plane,master               5m      v1.23.4
    Worker-host-name                        Ready    worker                             1m      v1.23.4

NOTE: For a local build using the Harbor local registry, add the following line to the /etc/docker/daemon.json configuration file:  "insecure-registries": ["https://WORKER_IP:30003"]
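With that line added, the complete /etc/docker/daemon.json assembled from the Prerequisites section would look similar to the sketch below (WORKER_IP is a placeholder for the worker node IP address):

{
   "exec-opts": [
      "native.cgroupdriver=systemd"
   ],
   "default-ulimits": {
      "nofile": {
         "Name": "nofile",
         "Hard": 65535,
         "Soft": 65535
      },
      "nproc": {
         "Name": "nproc",
         "Hard": 4096,
         "Soft": 4096
      }
   },
   "insecure-registries": ["https://WORKER_IP:30003"]
}

After editing the file, re-run sudo systemctl daemon-reload and sudo systemctl restart docker.service to apply the change.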

 

Step 1: Install the Reference Implementation

NOTE: The following sections may use <Controller_IP> in a URL or command. Make note of your Edge Controller’s IP address and substitute it in these instructions.

 

Select Configure & Download to download the reference implementation and then follow the steps below to install it. 

Configure & Download

  1. Make sure that the Target System Requirements are met before proceeding further.

  2. If you are behind a proxy network, please ensure that proxy addresses are configured in the system:

    export http_proxy=proxy-address:proxy-port
    export https_proxy=proxy-address:proxy-port 

     

  3. Open a new terminal, go to the downloaded folder and unzip the downloaded RI package:

    unzip intelligent_traffic_management.zip

     

  4. Go to the intelligent_traffic_management/ directory:

    cd intelligent_traffic_management/

     

  5. Change permissions of the executable edgesoftware file to enable execution:

    chmod 755 edgesoftware

     

  6. Run the command below to install the Reference Implementation:

    ./edgesoftware install

     

  7. During the installation, you will be prompted for the AWS Key ID, AWS Secret, AWS Bucket and Product Key.
    The Product Key is contained in the email you received from Intel confirming your download. 
    AWS credentials are optional. AWS Key ID, AWS Secret and AWS Bucket are obtained after following the steps in the Set Up Amazon Web Services Cloud* Storage section. If you do not need the cloud upload feature, simply provide empty values by pressing Enter when prompted for the AWS credentials.

    NOTE: Installation logs are available at the path:  /var/log/esb-cli/Intelligent_Traffic_Management_<version>/<Component_Name>/install.log 
     

    A console window showing a system prompt to enter the Product Key.
    Figure 3: Product Key



     

  8. When the installation is complete, you see the message “Installation of package complete” and the installation status for each module. 
     
    A console window showing system output during the install process. At the end of the process, the system displays the message “Installation of package complete” and the installation status for each module.
    Figure 4: Installation Success

 

 


Step 2: Check the Application

Check the Intelligent_Traffic_Management pods with the command:

kubectl get pod -A

 

You will see output similar to: 
 

A console window showing system output after running the “kubectl get pods” command. The system displays a list of all the pods and the pod status. The expected status is “Running”.
Figure 5: Intelligent Traffic Management Pods Status

 

 

NOTE: If the pods have a status of ContainerCreating, please wait for some time, since Kubernetes will pull the images from the local registry and then deploy them. This happens only the first time the containers are deployed, and the wait time will depend upon the network bandwidth available. 

 


Step 3: Data Visualization on Grafana

  1. Navigate to https://<Controller_IP>:30300/dashboard in your browser to check the Intelligent Traffic Management dashboard.
    A browser window using the ITM dashboard URL and showing a warning message “Your connection is not private”. The option to Proceed to the website is outlined in red, indicating you should click the link.
    Figure 6: Login to ITM Dashboard

     

  2. Navigate to https://<Controller_IP>:30303/camera/0 in your browser to check the camera stream.
     
    A browser window using the Camera URL and showing a warning message “Your connection is not private”. The option to Proceed to the website is outlined in red, indicating you should click the link.
    Figure 7: Intelligent Traffic Management Camera 0


     
    A web app dashboard showing a large map of a city. There are 8 blue drop pins on the map.
    Figure 8: Intelligent Traffic Management Dashboard
  3. Navigate to https://<Controller_IP>:32000 in your browser to log in to the Grafana dashboard.
     
  4. Get the Grafana Password by entering the command: 
    kubectl get secrets/grafana -n default -o json | jq -r '.data."admin-password"' | base64 -d

     
  5. Log in with admin as the user name and the Grafana password obtained in the previous step.

     
  6. Click Home and select ITM to open the main dashboard.
    A browser window using the Grafana dashboard URL showing a message “Welcome to Grafana”. You are directed to click the ITM dashboard link.
    Figure 9: Grafana Home Screen

     

     
    A browser window using the Grafana dashboard URL showing the ITM dashboard outlined in red, indicating you should click the link.
    Figure 10: Grafana Dashboard List

     



    An example of the Intelligent Traffic Management dashboard:
    A web app dashboard with navigation, a large map of a city, and an analysis sidebar. There are 8 blue drop pins on the map. The sidebar shows four metrics: number of collisions detected, number of vehicles detected, number of pedestrians detected, and number of bikes detected.
    Figure 11: Grafana Main Dashboard - Intelligent Traffic Management

     

    The above dashboard shows the number of vehicles, pedestrians and collisions detected on the left side. These may be used for adjusting traffic lights and calling emergency services if collisions are detected.

The blue drop pins on the map are the geographic coordinates of the cameras. Clicking a pin opens a small window with that camera's feed and detection results, as shown in the figure below.

A web app dashboard with navigation, a large map of a city, and an analysis sidebar. The sidebar shows four metrics: number of collisions detected, number of vehicles detected, number of pedestrians detected, and number of bikes detected. The 8 blue drop pins on the map are the geographic coordinates of cameras. The image shows a small window of one camera feed and detected pedestrians are brightly outlined with blue.
Figure 12: Detection Results on MapUI

 

 

To open the Grafana dashboard for a particular camera, with its detection results and other data metrics, click the camera feed in the small window, as shown in the figure below. 

NOTE: To close the small window with camera feed, click the close button (X) on the top left corner of the window. 

A web app dashboard with showing details of one camera feed including statistics in graphs and analysis in bar charts. The detected pedestrians are brightly outlined with blue.
Figure 13: Grafana Dashboard of an Individual Camera Feed

 


To view the detection results of all the configured camera feeds, click View All Streams in the top right corner of the MapUI on the main Grafana dashboard (ITM). Refer to Figure 11, Grafana Main Dashboard – Intelligent Traffic Management. 

A web app dashboard showing a grid of 9 surveillance camera video feeds. In each feed, detected cars are brightly outlined with red and detected pedestrians are brightly outlined with blue. The sidebar shows four metrics: number of collisions detected, number of vehicles detected, number of pedestrians detected, and number of bikes detected.
Figure 14: Detection Results of all the Configured Camera Feeds

 

 

NOTE: To open the combined streams in a full browser tab, go to: http://<Controller_IP>:30300/get_all_streams 

 

If you provided the AWS credentials during installation, the Cloud Upload feature is enabled. 

Navigate to the configured AWS storage to find the uploaded video captures.

A web app dashboard showing the AWS management console with a list of AWS S3 Bucket Objects that were uploaded as video captures.
Figure 15: List of AWS S3 Bucket Objects

 

The AWS management console showing Properties and other details for an AWS S3 Bucket Object.
Figure 16: AWS S3 Bucket Object Properties

 

The AWS management console showing an AWS S3 Bucket Object photo. Detected vehicles are brightly outlined with orange.
Figure 17: AWS S3 Bucket Object Photo

 


Step 4: Uninstall the Application

  1. Check installed modules with the command below: 
    cd <install_location>/intelligent_traffic_management
    ./edgesoftware list


    All installed modules will show as seen in the screen below: 
    A console window showing the output of the "edgesoftware list" command. The installed modules are listed.
    Figure 18. Installed Modules List

     


     
  2. Run the command below to uninstall all the modules: 
    ./edgesoftware uninstall -a
  3. Run the command below to uninstall only the Reference Implementation: 
    ./edgesoftware uninstall <itm-id from step 1>
     
    A console window showing the output of the "edgesoftware list" and "edgesoftware uninstall" commands. First, the system lists the installed modules. Next, the system displays output during the uninstall process. At the end of the process, the system displays the message “Uninstall finished” and the uninstallation status for each module.
    Figure 19. Uninstalled Modules

     

 


Public Helm Registry for Helm Charts

Installation of the Intelligent Traffic Management Reference Implementation on a local Kubernetes cluster is accomplished using Helm charts. In earlier releases, the Helm charts were part of the Reference Implementation installation package. Now a global Helm repo is provided so that the Reference Implementation Helm charts are accessible from private and public networks. This speeds up and simplifies releasing updates and integrating them with Reference Implementations.


Local Build Instructions

After you have installed the Kubernetes cluster from the Prerequisites, you can build your own Intelligent Traffic Management Docker image using the following instructions.

You can proceed with the steps below using either the edgesoftware sources or the GitHub sources: Intelligent Traffic Management

Setup

For GitHub:

git clone
cd intelligent_traffic_management/

 

Use your preferred text editor to make the following file updates.

In the next steps, the placeholder <REPOSITORY_PATH> indicates the path to the repository.

In the Change examples, replace the line indicated by - with the line indicated by +.

 

  1. <REPOSITORY_PATH>/src/build_images.sh - update the tag and version for the image.
    Change example:
    -    TAG="5.0"
    +    TAG="5.1" 
    
  2. <REPOSITORY_PATH>/helm/services/values.yaml - update the image registry to point to your local Harbor registry.
    Change example:
    - images:
    -   registry: ""
    
    + images:
    +   registry: <local harbor host>:<local_harbor_port>/library/  
  3. <REPOSITORY_PATH>/deploy/services/values.yaml - update the image tag version.
    Change example:
    - images:
    -   tag: "5.0"
    
    + images:
    +   tag: "5.1" 

     

Build and Install

Build the Docker image with the following commands: 

# Login on your Harbor Docker registry
docker login WORKER_IP:30003
user: admin
password: Harbor12345

cd <REPOSITORY_PATH>/src/
./build_images.sh -c worker_ip  # The local Docker image will be built on the Ubuntu machine. 

 

Deploy the application with Helm using the following steps:

  1. Get Grafana password:
    kubectl get secrets/grafana -n default -o json | jq -r '.data."admin-password"' | base64 -d

     

  2. Get the Grafana service IP using the following command:

    kubectl describe service -n default grafana |grep -i Endpoint

     

  3. Get the host IP using the following command:

    hostname -I | awk '{print $1}'
    

     

  4. Change directory to deployment directory from repository path:

    cd <REPOSITORY_PATH>/deploy/
    

     

  5. Deploy the MQTT broker and wait for it to initialize:

    helm install broker broker/ --set namespace=default
    kubectl wait --namespace=default --for=condition=Ready pods --timeout=600s --selector=app=hivemq-cluster
    

     

  6. Using the Grafana password, Grafana service IP, and host IP from steps 1 through 3, run the following Helm installation command:

    helm install itm services/ --wait --timeout 10m \
              --set grafana.password=<Grafana_Password> \
              --set grafana.ip=<Grafana_PodIP> \
              --set host_ip=<Controller_IP> \
              --set namespace=default \
              --set proxy.http=<HTTP_PROXY> \
              --set proxy.https=<HTTPS_PROXY> \
              --set cloud_connector.aws_key=<AWS_KEY_ID> \
              --set cloud_connector.aws_secret=<AWS_SECRET> \
              --set cloud_connector.aws_bucket=<AWS_BUCKET>

     

    NOTES:
    1. If your host is not behind a proxy, skip setting the http and https proxy parameters.
    2. Cloud connector requires your AWS credentials to connect to it to upload video captures in case of collision, near miss and overcrowd events. If you don't want this feature enabled, then skip setting these parameters. For instructions on how to configure AWS, refer to the Set Up Amazon Web Services Cloud* Storage section.

After step 6 completes, use your preferred browser to access ITM at https://<Controller_IP>:30300 and Grafana at https://<Controller_IP>:32000.


Optional Steps

Configure the Input  

The Helm templates contain all the necessary configurations for the cameras.

If you wish to change the input, edit the ./helm/services/values.yaml file and add the video inputs to the test_videos array:  

itm_video_inference:
  name: "itm-video-inference"
  topic:
    publisher: "camera"
  test_videos:
    - uri: "file:///app/test_videos/video_car_crash.avi"
    - uri: "file:///app/test_videos/video_pedestrians.avi"

 

To use a camera stream instead of a video file, replace the video file name with /dev/video0.

To use an RTSP stream instead of a video file, replace the video file name with the RTSP link.

Each ITM Video Inference service will pick a video input in the order listed above. An illustrative snippet follows.
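For illustration, a test_videos array that mixes input types might look like the sketch below; the RTSP host, port, and path are hypothetical placeholders, and the exact URI form accepted for a local camera device may depend on your DL Streamer pipeline:

itm_video_inference:
  name: "itm-video-inference"
  topic:
    publisher: "camera"
  test_videos:
    # Pre-recorded video shipped with the reference implementation
    - uri: "file:///app/test_videos/video_car_crash.avi"
    # Hypothetical RTSP camera stream; replace host, port, and path with your camera's values
    - uri: "rtsp://<camera_ip>:<rtsp_port>/<stream_path>"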

If you wish to change the coordinates, address and the analytics type of the cameras, edit the ./helm/services/templates/itm-analytics-configmap.yaml file:

  • address: Name of the camera’s geographic location. Must be a non-empty alphanumeric string. 
  • latitude: Latitude of the camera’s geographic location. 
  • longitude: Longitude of the camera’s geographic location. 
  • analytics: Attribute to be detected by the model. 

NOTE: The default model support is pedestrian, vehicle and bike detection. You can select desired attributes from these, e.g., "analytics": "pedestrian vehicle detection".   
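As an illustration only (check the configmap template itself for the exact structure; the location values here are hypothetical), one camera entry could be configured as:

  "address": "Main Street and 5th Avenue",
  "latitude": 45.5231,
  "longitude": -122.6765,
  "analytics": "pedestrian vehicle bike detection"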

Stop the Application

To remove the deployment of this reference implementation, run the following commands.

NOTE: The following commands will remove all the running pods and the data and configuration stored in the device, except the MQTT Broker.

helm delete itm

If you wish to remove the MQTT Broker also, enter the command:

helm delete broker

 


Set Up Amazon Web Services Cloud* Storage

To enable cloud storage for the installed Reference Implementation, you will need an Amazon Web Services* (AWS*) account (paid or free tier) with access to the following services:

  • Identity and Access Management (IAM)
  • Amazon S3 Bucket

After finishing the setup for IAM and S3, you will have your AWS_KEY_ID, AWS_SECRET_KEY and AWS_BUCKET_NAME to use in your Intelligent Traffic Management Cloud Connector configuration.

Setup Steps

  1. From your AWS management console, search for IAM and open the IAM Dashboard. 

    A web app dashboard showing the AWS management console with the IAM dashboard in the main view.
    Figure 20: IAM Dashboard


  2. On the left menu of the dashboard, go to Access management and click on Users to open the IAM Users tab.

    The AWS management console showing the IAM dashboard with the IAM Users tab in the main view.
    Figure 21: IAM Users Tab


  3. From the IAM users tab, click on Add User to access the AWS add user setup.

  4. On the first tab, provide the username and select the AWS credentials type to be Access key.

    The AWS management console showing the IAM dashboard with the Set user details tab in the main view. The User name field is highlighted, indicating you should enter a user name.
    Figure 22: Set User Details Tab


  5. On the second tab, create a group to attach policies for the new IAM user.

    a. Search for S3 and select AmazonS3FullAccess policy.

    b. Click on Create group.

    The AWS management console showing the IAM dashboard with the Create Group tab in the main view. The Group name field is highlighted, indicating you should enter a group name.
    Figure 23: Create Group Tab


     

  6. Select the group you have created and click on Next: Tags.

  7. Tags are optional. If you don't want to add tags, you can continue to the Review tab by clicking on Next: Review.

  8. After review, you can click on the Create User button.

  9. On this page, you have access to AWS Key and AWS Secret Access key. (Click on Show to view them.)

    • Save both of them to be used later in the Cloud Connector configuration of the Intelligent Traffic Management Reference Implementation you have installed. 
      NOTE: The AWS Secret Key is visible only on this page; you cannot retrieve it again later.
    • If you forget to save the AWS Secret Key, you can delete the existing key and create another one.
      The AWS management console showing the IAM dashboard with the Add User Success dialog in the main view. The AWS Key and AWS Secret Access key are covered with a blue bar for security.
      Figure 24: AWS Key and Secret Access Key
  10. After you have saved the keys, close the tab. You are returned to the IAM Dashboard page.

  11. Click on the user created and save the User ARN to be used on S3 bucket setup.
    NOTE: If you forget to save the AWS Secret key from the User tab, you can select Security Credentials, delete the Access Key and create another one.

 

S3 Bucket

The S3 Bucket service offers cloud storage for cloud-based applications.

Perform the steps below to set up S3 Bucket Service.

  1. Open the Amazon Management Console and search for Amazon S3.

  2. Click on S3 to open the AWS S3 Bucket dashboard.

    The AWS management console showing the AWS S3 Bucket dashboard with the Account snapshot and Buckets list in the main view. The Account Snapshot details, Bucket Name, and AWS Region are covered with a blue bar for security.
    Figure 25: AWS S3 Bucket Dashboard


  3. On the left side menu, click on Buckets.

  4. Click on the Create Bucket button to open the Create Bucket dashboard.

  5. Enter a name for your bucket and select your preferred region.

    The AWS management console showing the Create Bucket dialog with General Configuration and Block Public Access settings in the main view. The AWS Region is covered with a blue bar for security.
    Figure 26: Create Bucket General Configuration


  6. Scroll down and click on Create Bucket.

  7. From the S3 Bucket Dashboard, click on the newly created bucket and go to the Permissions tab.

  8. Scroll to Bucket Policy and click on Edit to add a new statement alongside the existing statement that denies all uploads.

    The AWS management console showing the Create Bucket dialog with Edit Bucket Policy settings in the main view. The Bucket ARN is covered with a blue bar for security.
    Figure 27: Edit Bucket Policy


  9. Add a comma after the existing statement, then add the following statement. (See the policy sketch after this step for how the final result is shaped.)

    {
      "Sid": "<Statement name>",
      "Effect": "Allow",
      "Principal": {
          "AWS": "<User_ARN_Saved>"
      },
      "Action": "s3:*",
      "Resource": [
          "arn:aws:s3:::<bucket_name>",
          "arn:aws:s3:::<bucket_name>/*"
      ]
    }
    

    a. Update the statement above with your statement name, the user ARN you saved in step 11 of the IAM setup, and your bucket name.

    b. Click on Save changes. If the change is successful, you will see a success message; otherwise, review the JSON policy to find and fix the error.
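    For orientation, the edited bucket policy ends up shaped roughly like the sketch below; the first entry stands in for whatever statement already exists in your policy, and the placeholders must be replaced with your own values:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ExistingStatement"
        },
        {
          "Sid": "<Statement name>",
          "Effect": "Allow",
          "Principal": {
              "AWS": "<User_ARN_Saved>"
          },
          "Action": "s3:*",
          "Resource": [
              "arn:aws:s3:::<bucket_name>",
              "arn:aws:s3:::<bucket_name>/*"
          ]
        }
      ]
    }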


Summary and Next Steps

This application successfully leverages Intel® Distribution of OpenVINO™ toolkit plugins for detecting and tracking vehicles and pedestrians and estimating a safety metric for an intersection. It can be extended further to provide support for a feed from a network stream (RTSP or camera device).

As a next step, you can experiment with accuracy/throughput trade-offs by substituting object detector models and tracking and collision detection algorithms with alternative ones.


Create a Microsoft Azure* IoT Central Dashboard 

As a next step, you can create an Azure* IoT Central dashboard for this reference implementation, run standalone Python code to fetch telemetry data from Influx DB, and send data to the Azure IoT Central dashboard for visualizing telemetry data. See Connect Edge Devices to Azure IoT* for instructions.




Troubleshooting

Pods status check

Verify that the pods are Ready and in the Running state using the command below: 

kubectl get pods -A

If any pods are not in Running state, use the following command to get more information about the pod state: 

kubectl describe -n default pod <pod_name>

ITM Dashboard Not Showing on Browser After Restart Server

Run the following commands:

# Get the Grafana pod IP
kubectl get pod -n default -owide |grep grafana* 
grafana-8465558bc8-5p65x            3/3     Running   24 (5h23m ago)   12d   10.245.179.203

# Update the ITM GRAFANA_HOST environment variable with the Grafana pod IP
kubectl set env itm -n default GRAFANA_HOST=10.245.179.203

Pod status shows “ContainerCreating” for a long time 

If the pod status shows ContainerCreating, Error, or CrashLoopBackOff for 5 minutes or more, run the following commands: 

reboot 
su  
swapoff -a  
systemctl restart kubelet  # Wait till all pods are in “Running” state. 
./edgesoftware install 

Subprocess32 issue 

If you see any error related to subprocess, run the command below: 

pip install --ignore-installed subprocess32==3.5.4 

Support Forum 

If you're unable to resolve your issues, contact the Support Forum.  

To attach the installation logs with your issue, execute the command below to consolidate the log files into a tar.gz archive, e.g., ITM.tar.gz.  

tar -czvf ITM.tar.gz /var/log/esb-cli/Intelligent_Traffic_Management_<version>/<Component_Name>/install.log

 

Product and Performance Information


Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.