Build from a common open application reference middleware to accelerate software modernization strategies for process control and process optimization in the upstream oil and gas industry. The Universal Wellpad Controller (UWC) reference middleware provides a set of essential services that experts in the field can use to create microservice-based solutions. Features include data collection over Modbus, prioritized data exchange pathways for process control data, and connectors for device management, SCADA, and backend data systems. You can add new protocols, data analysis, and other capabilities to meet specific application needs.
The UWC reference middleware gives upstream oil and gas operators the flexibility to innovate and optimize their oil and gas production. It is ideal for an Original Equipment Manufacturer (OEM) or Independent Software Vendor (ISV) to incorporate into solutions that meet operator needs.
Select Configure & Download to download the reference implementation and the software listed below.
C and C++ are used in the codebase.
Naonworks*, at the convergence of cybersecurity and industrial control systems, is pioneering the use of open architectures for automation control in Smart Oil & Gas.
The Naonworks ALITA* (Artificial Lift Innovated Technology Application) is an open-architecture plunger lift control and automation system installed at the wellsite for reliable, optimized production of gas and oil from the field. ALITA monitors process parameters in real time (pressures, flows, valve control), controls the lift process, and keeps production in a healthy regime to help eliminate costly well-closing events.
The ALITA is a microservices-based application that builds upon industry standards such as MQTT Sparkplug and Modbus and is compatible with the Universal Wellpad Controller (UWC) reference middleware from Intel. Naonworks’ user interface for ALITA gives the system operator advanced insight over the status, production effectiveness, and condition of plunger lift at each well and across a field area or region.
An open architecture allows an operator to select the right hardware for the site, seamlessly upgrade and replace hardware in the field, upgrade the application and the underlying software infrastructure, and integrate analytics and AI at the edge. The Naonworks ALITA with the UWC reference middleware from Intel is enterprise-ready and virtually eliminates the challenges of stranded assets and hardware and software obsolescence.
The ALITA application can be requested at https://download.naonworks.com.
The UWC reference middleware is scalable and flexible enough to run on a variety of Intel based platforms. For commercial-scale deployment to upstream oil and gas well pads, devices with ATEX/C1D2 certifications are required. Hardware vendors have made available a range of platforms, from the Intel Atom® E3930 processor for power-efficient operation and the Intel Atom® E3950 processor, up to Intel® Core™ i7 processors for scalable solutions and larger, more complex sites.
The base hardware requirements for UWC reference middleware are:
- Two or more physical or virtual cores
- Intel Atom® or Intel® Core™ processor family
- 4GB memory
- 25GB hard disk space
- Ubuntu* 20.04 server operating system with Preempt RT Patch
- Two or more Ethernet connections
- Optional: RS-232 or RS-485 serial ports, if needed
When enabling the Data Persistence recipe (with InfluxDB* and Telegraf*), if data access queries are intensive or the desired database size is large, higher performance and capacity are recommended for CPU, memory, and storage:
- Four or more physical or virtual cores
- Intel® Core™ processor family
- 8GB memory
- 64GB hard disk space
The UWC reference middleware can also be evaluated on a virtualized platform based on Intel® Core™ i7 processor, such as a desktop or laptop computer running Windows* 10 with Hyper-V*.
This open application framework uses Docker* containers for well control applications. It comes with a preemptive real-time Linux* operating system (RT-OS) and a policy-based real-time data bus, so your applications can communicate reliably through the included Modbus applications with commercially hardened Modbus stacks.
The MQTT/Sparkplug* SCADA interface is also included to provide interoperability with leading SCADA systems and cloud infrastructure. The reference middleware includes comprehensive device manageability, including firmware, OS, and application lifecycle management.
The application can subscribe to MQTT topics to receive polled data. Similarly, the application can publish data to be written over MQTT. The platform publishes or subscribes to the respective topics accordingly. The MQTT topics are created dynamically by the framework based on the site configuration IDs.
Internal Data Sharing and Data Persistence
The Edge Insights for Industrial (EII) Data Bus, an abstraction over ZeroMQ*, is shown in Figure 1 and is the primary and most efficient data sharing bus of the UWC reference middleware. Interfacing to the EII Data Bus is recommended if you are an application developer concerned with latency, performance, and timing jitter for your application. The bus utilizes the XPUB-XSUB model of ZMQ when the IPC mode is used by your container(s). For scalability and interoperability reasons, a second bus, using MQTT*, is also available and is explained in the section MQTT-Bridge Container.
Starting with UWC 1.6, an optional Data Persistence recipe is available that installs and configures an InfluxDB database and a Telegraf data service. The Telegraf service ships with a plug-in that listens on the EII ZMQ Data Bus and ingests messages from it into the InfluxDB database, according to the configuration settings of the plug-in and the database.
The configuration shipped by Intel has the bus connector listen on all topics of the EII Data Bus and ingest only those JSON payloads in which the flag dataPersist=true is present.
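A minimal sketch of that filtering rule (illustrative Python, not the actual Telegraf plug-in code; only the dataPersist flag comes from the configuration described above, and the "value" field is hypothetical):

```python
import json

def should_persist(raw_payload: str) -> bool:
    """Return True only for JSON payloads flagged with dataPersist=true."""
    try:
        payload = json.loads(raw_payload)
    except (json.JSONDecodeError, TypeError):
        return False
    # Accept the JSON boolean true (and a "true" string, defensively).
    return payload.get("dataPersist") in (True, "true")

# Example payloads; the "value" field is hypothetical.
print(should_persist('{"value": "42", "dataPersist": true}'))  # True
print(should_persist('{"value": "42"}'))                       # False
```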
UWC Site Configurations
UWC needs the following site configuration to function properly:
The site configurations describe the number of wellpads managed by the UWC and the configuration of each wellpad. The site configuration file format is YAML. Each YAML configuration file contains an "ID" field, which must be unique within the UWC deployment. For a complete list of fields, refer to the sample YAML files included with the source code (<source_code>\Others\Config\UWC\Device_Config).
You must prepare the site configuration before deploying UWC. The site configuration is fully user-defined. UWC can manage one or more wellpads, each wellpad may manage multiple Modbus TCP or Modbus RTU devices, and each Modbus device can have multiple data points to monitor or control. The combination of "ID" fields defines the MQTT topic names for data polling, writes, and reads.
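As a rough sketch only (the field names below are hypothetical; the shipped sample YAML files under the Device_Config directory define the actual schema), a site configuration conceptually pairs unique IDs with devices and datapoints:

```yaml
# Hypothetical illustration of the site-configuration hierarchy --
# consult the shipped sample YAML files for the real field names.
id: "PL0"                  # must be unique within the UWC deployment
devices:
  - id: "flowmeter"        # a Modbus TCP or RTU device on this wellpad
    datapoints:
      - id: "TodaysVolume" # a point to monitor or control
        dataPersist: true  # optionally store this point in InfluxDB
```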
The MQTT topic name is formed by combining the configured IDs with the operation name.
Example topic name: flowmeter/PL0/TodaysVolume/update
The supported operations are:
1. <update> - Indicates data published from polling at the frequency configured in the site configuration YAML files.
2. <read> - The vendor apps can initiate on-demand read of attributes using a read operation.
3. <readresponse> - The readresponse is the response topic for the corresponding read operation from the vendor app.
4. <write> - The vendor apps can initiate on-demand write of attributes using a write operation.
5. <writeresponse> - The writeresponse is the response topic for the corresponding write operation from the vendor app. It indicates the status of write operations.
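Putting the pieces together, an application could compose and split topic names like this (a sketch; which configured ID maps to each segment is an assumption drawn from the example topic above):

```python
# Illustrative helpers for the ID-based topic scheme described above.
VALID_OPS = {"update", "read", "readresponse", "write", "writeresponse"}

def build_topic(device_id: str, site_id: str, point_id: str, op: str) -> str:
    """Compose an MQTT topic from configured IDs plus an operation suffix."""
    if op not in VALID_OPS:
        raise ValueError(f"unknown operation: {op}")
    return f"{device_id}/{site_id}/{point_id}/{op}"

def parse_topic(topic: str) -> tuple:
    """Split a topic back into its ID segments and operation."""
    device_id, site_id, point_id, op = topic.split("/")
    return device_id, site_id, point_id, op

print(build_topic("flowmeter", "PL0", "TodaysVolume", "update"))
# flowmeter/PL0/TodaysVolume/update
```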
UWC supports Modbus TCP clients (formerly called Modbus TCP masters) and Modbus RTU clients (formerly called Modbus RTU masters), which issue poll and write requests to edge devices in the field running as Modbus servers (formerly called Modbus slave devices). These are developed as two separate containers, i.e., a Modbus TCP container and a Modbus RTU container.
Modbus RTU Client Container
Modbus RTU devices can be connected over RS-485 or RS-232 serial connections. With RS-232, normally only one device is connected at a time, so communicating with two Modbus RTU devices over RS-232 requires two serial ports. A combination of RS-232 and RS-485 serial ports on the same device at the same time is also possible.
The Modbus RTU protocol over RS-485 uses a twisted pair of wires as daisy-chained shared media for all devices on a chain. All devices on a chain must use the same communication parameters; devices with different configurations (e.g., different parity or baud rate) must be placed on separate Modbus RTU chains. Communicating with two different Modbus RTU networks requires two serial ports. It is important to verify the analog signal integrity of the RS-485 chains, including the use of termination resistors, as per well-known RS-485 best practices.
In UWC, one Modbus RTU client can be configured to communicate over multiple serial ports, so a single Modbus RTU client container handles communication with multiple Modbus RTU networks. The settings for each Modbus RTU network (e.g., port, baud rate) are defined in an RTU network configuration file.
Modbus containers communicate over the internal Edge Insights for Industrial Data Bus (ZMQ). The MQTT-Bridge module enables communication with Modbus containers using MQTT. The MQTT-Bridge module reads data on ZMQ received from Modbus containers and publishes that data on MQTT. Similarly, the MQTT-Bridge module reads data from MQTT and publishes it on EII Data Bus ZMQ.
Sparkplug-Bridge is a container used for external data interfaces to SCADA (Supervisory Control and Data Acquisition) and similar operational control systems. Sparkplug-Bridge implements the Eclipse Foundation* Sparkplug* B specification to bridge Sparkplug B messages to the internal data buses of UWC. The Sparkplug-Bridge container publishes the machine model of the sensors and actuators configured in the site configuration files and reports tag data by exception (on change). Vendor apps have a well-defined, topic-based interface for publishing the machine model and tag data. Sparkplug-Bridge supports a TLS-based secure connection with the MQTT broker.
The Sparkplug-Bridge container emits the following Sparkplug message types for the sensors and actuators configured in site configuration files and for vendor applications:
- Edge node birth (NBIRTH)
- Edge node death (NDEATH)
- Device birth (DBIRTH)
- Device death (DDEATH)
- Device data Messages (DDATA)
- Device command (DCMD)
The Sparkplug-Bridge container reports the real-time online/offline status of sensors and actuators to the SCADA head-end.
Key Performance Indicator (KPI) Container
UWC provides infrastructure for creating prioritized data pathways from field devices through to the application microservice layer. Whether for the included application-layer microservices (such as MQTT-Bridge or Sparkplug-Bridge) or for an application a developer builds to control or monitor a gas well, it is important to be able to measure and evaluate the performance of the data pathways through the middleware. The KPI Application is a microservice that gives the developer exactly that ability. In addition, the KPI Application can serve as a starting point (i.e., a sample application) for a developer who wants to connect these data pathway segments into functions such as single-input, single-output PID control loops and/or event-response functions.
In the KPI Application configuration, a data “loop” is defined as the pairing of one read operation and one write operation: the “polling read” mechanism of UWC is used for the read segment and the “on-demand write” operation for the write segment. In the sample application, a simple time delay is inserted. For a real control loop, mathematical and/or Boolean logic would replace the simple delay in the data flow. A wellpad sensor, such as a pressure sensor, would deliver values to the read function, and an actuator, such as a valve positioner, would receive the control signal from the write operation. In the current implementation, both read and write use Modbus protocols.
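For illustration, the kind of logic that could replace the delay might look like the sketch below: a single-input, single-output PID step driving a toy plant model. This is a hypothetical stand-in, not the KPI sample's actual code; the gains and plant response are invented for the demo.

```python
# Hypothetical stand-in for the simple time delay in the KPI sample:
# a single-input, single-output PID controller with a toy plant model.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def step(self, measurement, dt):
        """One loop iteration: sensor reading in, actuator command out."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order "pressure" process driven toward a 50.0 setpoint.
pid = PID(kp=0.6, ki=0.02, kd=0.05, setpoint=50.0)
pressure = 20.0
for _ in range(200):
    valve_cmd = pid.step(pressure, dt=1.0)   # the write segment would carry this
    pressure += 0.1 * valve_cmd              # crude plant response
print(round(pressure, 1))                    # settles near the 50.0 setpoint
```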
This KPI application can either be executed based on MQTT communication or based on Edge Insights for Industrial (EII) internal Data Bus (ZMQ) communication.
The KPI Application also logs all data received as a part of control loop application in a log file. A set of timing parameters of the loop performance are included in the log file including the total loop delay which is defined as the time between the trigger for the polling interval through until the KPI application has the confirmation message from the edge device that the control signal write was received. In addition to the timings, error codes from all layers of the system are collected and logged. This data can be used for measuring performance of the system.
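The total loop delay described above can be sketched as follows (the function names and simulated segments are illustrative, not the KPI App's actual code or log format):

```python
import time

def measure_loop_delay(read_fn, control_fn, write_fn):
    """Time one loop: from the polling trigger until the edge device
    confirms the control-signal write."""
    t_trigger = time.monotonic()      # polling interval fires
    value = read_fn()                 # polled read from the field device
    command = control_fn(value)       # control logic (delay, PID, ...)
    write_fn(command)                 # returns once the write is confirmed
    return time.monotonic() - t_trigger

def fake_control(value):
    time.sleep(0.01)                  # stand-in for 10 ms of processing
    return value * 0.5

# Simulated read/write stand-ins for real Modbus round-trips.
delay = measure_loop_delay(lambda: 42.0, fake_control, lambda cmd: None)
print(f"total loop delay: {delay * 1000:.1f} ms")
```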
Although the default recipes ship with a single instance of KPI-App, you can extend the UWC reference middleware to run multiple applications by updating the recipes. This is possible because the EII Data Bus uses an XPUB-XSUB based ZMQ broker underneath, which supports multiple concurrent publishers and subscribers. Likewise, zero, one, or more KPI Application instances can connect concurrently to the MQTT bus.
A pre-processor flag is used to enable or disable KPI-App support for high-performance versus low-power processors:
- By default, the pre-processor flag UWC_HIGH_PERFORMANCE_PROCESSOR is disabled in KPI-App for both Debug and Release modes.
- To enable KPI-App for a high-performance processor in Release/Debug mode, go to the <Sourcecode> -> kpi-Tactic -> KPIApp -> Release/Debug -> src directory and open the subdir.mk file. Add the option “-DUWC_HIGH_PERFORMANCE_PROCESSOR” to the line where the GCC compiler is invoked.
- To disable the pre-processor flag in KPI-App, remove the “-DUWC_HIGH_PERFORMANCE_PROCESSOR” option added in the step above for both Release and Debug modes.
High-performance processors are Intel® Core™ processors; low-power systems use Intel Atom® processors.
- Internet connection (with proper proxy settings, if any) is required for installation.
- Install Ubuntu* 20.04 Server OS using the ISO image ubuntu-20.04.2-live-server-amd64.iso.
- Apply the RT kernel patch (optional). Refer to the steps below to choose and apply the RT patch for your kernel and OS version.
- Make sure you have all the required MQTT client certificates/keys for the Sparkplug-Bridge container in place, i.e.:
- CA certificate
- Client certificate file
- Client private key
The above MQTT client certificates/keys are required for the Sparkplug-Bridge Docker container to establish a secure TLS connection with the external MQTT broker. The Sparkplug-Bridge container acts as a client of the external MQTT broker; the SCADA head-end (external system) also connects to the same broker. The required client credentials are the client key, the client certificate, and the CA certificate. The client and the MQTT broker server must use the same CA (certificate authority). For your own deployment of the MQTT broker, the server and client certificates can be generated using OpenSSL commands; in other cases, client certificates are provided by the external service provider that manages the MQTT broker deployment.
NOTE: If the host system already has Docker images and containers, you might encounter errors while building the packages. If you encounter errors, refer to the Troubleshooting section at the end of this document before starting the installation.
The latest version of UWC has been tested with Ubuntu 20.04.2 LTS. Check the kernel version corresponding to the Ubuntu OS version being used and map it with the correct RT kernel patch.
Use the links below to map the kernel version with the RT kernel patch:
Step 1: Apply RT Kernel Patch (Optional)
Install all the prerequisites using the following command:
sudo apt-get install -y libncurses-dev libssl-dev bison flex build-essential wget libelf-dev
NOTE: You will see a prompt to update the package runtime. Select Yes.
Default Ubuntu 20.04.2 LTS kernel version is 5.4.0-80.
1. Make a working directory on the system:
mkdir ~/kernel && cd ~/kernel
2. Download the kernel into the ~/kernel directory created in step 1.
- Download the kernel manually from https://www.kernel.org/pub/linux/kernel/ or fetch it from the command line inside the current directory.
- Download the PREEMPT_RT patch from https://www.kernel.org/pub/linux/kernel/projects/rt/ or fetch it from the command line inside the current directory.
- PREEMPT_RT version 5.4.129-rt61 is recommended.
3. Unzip the kernel using the following command:
tar -xzvf linux-5.4.129.tar.gz
4. Patch the kernel:
cd linux-5.4.129
gzip -cd ../patch-5.4.129-rt61.patch.gz | patch -p1 --verbose
5. Launch the graphical UI for setting configurations. The following command launches a graphical menu in the terminal to generate the .config file.
6. Select the preemption model Fully Preemptible Kernel (Real-Time) using the Tab key on your keyboard.
- Select and Enter on “General setup” option.
- Select and Enter "Preemption Model (Fully Preemptible Kernel (Real-Time))"
- Select and Enter "Fully Preemptible Kernel (Real-Time)"
- After the selection succeeds, select Save, then return to the main page using Esc on the keyboard.
7. To save the current setting, click on <Save> and then exit the UI using <Exit>.
8. Compile the kernel (Execute the following commands):
In a production environment, the system key management infrastructure will be provided for the end user to ensure the patched Kernel works with the Secure Boot flow. When Secure Boot is not used, comment out the CONFIG_SYSTEM_TRUSTED_KEYS and CONFIG_MODULE_SIG_KEY lines from the /boot/config<version> file. Failure to do one of these two things will cause the following make commands to fail.
make -j20
sudo make INSTALL_MOD_STRIP=1 modules_install -j20
sudo make install -j20
9. Verify that initrd.img-5.4.129-rt61, vmlinuz-5.4.129-rt61, and config-5.4.129-rt61 are generated in the /boot directory, and update GRUB.
cd /boot
ls
sudo update-grub
10. Verify that there is a menu entry containing the text "menuentry 'Ubuntu, with Linux 5.4.129-rt61'" in the /boot/grub/grub.cfg file.
11. To change default kernel in grub, edit the GRUB_DEFAULT value in /etc/default/grub to your desired kernel.
NOTE: 0 is the 1st menu entry.
12. Reboot the system.
13. Once the system reboots, open the terminal and use uname -a to check the kernel version.
Step 2: Install the Use Case
Select Configure & Download to download the use case and then follow the steps below to install it.
1. Open a new terminal, go to the folder location and unzip the downloaded package.
2. Go to the universal_wellpad_controller/ directory.
3. Change permission of the executable edgesoftware file.
chmod 755 edgesoftware
4. Run the command below to download the modules.
5. Go to the build directory to install the UWC pre_requisites, where <version> indicates the downloaded version of Universal Wellpad Controller.
cd universal_wellpad_controller/Universal_Wellpad_Controller_<version>/IEdgeInsights/build/
chmod 755 pre_requisites.sh
sudo ./pre_requisites.sh
NOTE: If the machine is in a proxy environment, run the command below:
sudo ./pre_requisites.sh --proxy=<proxy-address>:<port-number>
6. After successful installation, log out and log back in to apply the prerequisite installation changes.
7. Run the command below to install:
NOTE: Refer to the Troubleshooting section at the end of this document if you encounter any Docker pull-related issues during the installation process.
NOTE: Uninstall the old version of manageability before installation, using the script below under the path universal_wellpad_controller/Universal_Wellpad_Controller_<version>/manageability/ where <version> indicates the downloaded version of Universal Wellpad Controller.
chmod 755 uninstall_tc.sh
sudo ./uninstall_tc.sh
8. During the installation, you will be prompted for the Product Key. The Product Key is contained in the email you received from Intel confirming your download.
9. There are eight supported installation modes (eight use-cases).
- Data Persistence and its use cases: Data persistence is a UWC feature wherein the data coming from the end Modbus devices can optionally be stored in the InfluxDB database. The datapoints to be stored in the database are configured by setting the “dataPersist” field to true or false in the datapoints YML configuration file.
- As can be seen in the below image, the use cases 5-8 are for data persistence feature of UWC.
- Use case 5 is the minimal set of UWC containers along with data persistence.
- Use case 6 is the minimal set of UWC containers along with data persistence and the KPI-Tactic app.
- Use case 7 is the minimal set of UWC containers along with data persistence and the Sparkplug-Bridge service.
- Use case 8 runs a sample database publisher service that publishes sample data into InfluxDB.
10. UWC includes Device Manageability software that enables software updates and deployment from cloud to device, including SOTA, FOTA, AOTA, and a few system operations.
To install manageability, you will be prompted with “yes” or “no” options. Choose the appropriate option to continue with the installation.
11. When the installation is complete, you will see the message Installation of package complete and the installation status for each module.
Summary and Next Steps
The UWC reference middleware is now installed. You can write your process control and optimization applications using the MQTT-based publish-subscribe interface. Refer to the UWC user guide at universal_wellpad_controller/Universal_Wellpad_Controller_1.6/IEdgeInsights/uwc/Document for more information on the MQTT publish-subscribe topics, how to configure wellpads, how topics are created dynamically based on configurations, and troubleshooting.
To continue learning, see the following guides and software resources:
Troubleshooting
You might encounter issues during installation if the target system has existing Docker images and containers. Stop the containers and remove the Docker images.
To remove all stopped containers, dangling images, and unused networks:
sudo docker system prune --volumes
To stop Docker containers:
sudo docker stop $(sudo docker ps -aq)
To remove Docker containers:
sudo docker rm $(sudo docker ps -aq)
To remove all Docker images:
sudo docker rmi -f $(sudo docker images -aq)
Docker Image Pull Issue
If you are experiencing Docker image pull issues, please check the following:
- This issue can be caused by the pull rate limits introduced by Docker Hub. Check the Docker site to determine the exact pull limit applicable to the system where you are trying to pull publicly available Docker Hub images, such as Ubuntu, Python*, etc.
NOTE: This limit is only applicable for the 6-hour window.
If you see this issue as an anonymous user (pull limit of 100), i.e., without a Docker login, you can create an account at https://hub.docker.com and retry the build after logging in using the command below.
docker login -u <username> -p <password>
The other alternative is a paid subscription; see https://www.docker.com/pricing.
Docker Installation Fails
If the Docker CE installation fails due to a dpkg error while executing the pre_requisites.sh file, re-run “sudo ./pre_requisites.sh” to resolve the issue. (This issue can occur when the apt docker-ce package is uninstalled and re-installed.)
Installation Script Fails
If the ./edgesoftware install script fails, you can try manual installation. Refer to the release notes and user guide at https://open-edge-insights.github.io/uwc-docs/ and to the UWC source code, available at universal_wellpad_controller/Universal_Wellpad_Controller_1.6/IEdgeInsights/uwc.
If you're unable to resolve your issues, contact the Support Forum.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.