
Accelerating fault detection in 3D seismic data using OpenVINO - Reducing time to first oil


Authors:

Flaviano Christian Reyes, Deep Learning R&D Intern, Intel

Ravi Panchumarthy, Ph.D., Machine Learning Engineer, Intel

Manas Pathak, Ph.D., Global AI Lead for Oil & Gas, Intel

Introduction

In seismic interpretation, fault detection is a crucial step that often requires considerable manual labor and time. Detecting faults is important because they indicate the locations of petroleum reservoirs. Multiple recent studies demonstrate how convolutional neural networks (CNNs) can be used to identify sub-surface faults [1], [2], [3]. CNNs can now outperform humans at image recognition, and CNN models likewise obtain state-of-the-art performance on seismic use cases. One interesting recent development is that CNN models trained entirely on synthetic seismic datasets achieve acceptable accuracy in identifying faults in real datasets [1], [2]. This has huge potential to further accelerate oil and gas exploration, since scientists do not need to retrain models on newly acquired seismic datasets in order to interpret them automatically. 2nd Generation Intel® Xeon® Scalable processors [4] and the Intel® Distribution of OpenVINO™ toolkit enable fast inference on 3D seismic datasets with such a pre-trained model.

Method

In the current work, we used the pre-trained model from Wu et al., 2019 [1] to accelerate the detection of faults in data from the F3 Dutch block [5] in the North Sea. The FaultSeg model is based on the U-Net architecture, with both input and output shapes of 128x128x128. Following the workflow established in previous work [6] on accelerating salt detection in seismic data with OpenVINO, we created Docker containers to perform these benchmarks. The method for creating an OpenVINO benchmark Docker container followed earlier work by Intel [7]. Benchmarks were performed on the validation datasets provided with the FaultSeg model [8] by running experiments on different hardware configurations. A sketch of how a larger seismic volume can be tiled into 128x128x128 patches for the network is shown below.
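Because real seismic volumes are much larger than 128x128x128, a volume must be tiled into patches before inference. The following is a minimal sketch of that tiling, assuming non-overlapping patches and per-patch standardization as in [1]; it is not the authors' exact code, and `predict_fn` stands in for whichever model representation is being run.

```python
import numpy as np

PATCH = 128  # FaultSeg input/output size along each axis

def predict_volume(volume, predict_fn, stride=PATCH):
    """Tile a 3D seismic volume into 128^3 patches, run predict_fn on each,
    and stitch the per-voxel fault probabilities back together.
    Edge handling and patch overlap are omitted for brevity."""
    out = np.zeros(volume.shape, dtype=np.float32)
    nz, ny, nx = volume.shape
    for z in range(0, nz - PATCH + 1, stride):
        for y in range(0, ny - PATCH + 1, stride):
            for x in range(0, nx - PATCH + 1, stride):
                patch = volume[z:z+PATCH, y:y+PATCH, x:x+PATCH]
                # standardize each patch (zero mean, unit variance), as in [1]
                patch = (patch - patch.mean()) / (patch.std() + 1e-6)
                out[z:z+PATCH, y:y+PATCH, x:x+PATCH] = predict_fn(patch)
    return out
```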

Two Docker containers were built, one for each of the two hardware configurations – CPU and GPU. The benchmark script outputs the average inference time per image and the average balanced cross-entropy loss. The average inference time per image was measured in seconds and converted to milliseconds, as displayed in Figure 1. The average balanced cross-entropy loss was calculated to assess inference differences between model representations. Each model ran inference on the validation set for at least one minute; the benchmark loop is sketched below.
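This is a minimal sketch of such a benchmark loop, not the authors' script. It assumes one common formulation of balanced cross-entropy (the rare fault class up-weighted by its scarcity, following [1]); the actual script may differ in details.

```python
import time
import numpy as np

def balanced_cross_entropy(y_true, y_pred, eps=1e-7):
    """Balanced binary cross-entropy: the rare fault class is up-weighted."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    beta = 1.0 - y_true.mean()  # fraction of non-fault voxels
    return float(-np.mean(beta * y_true * np.log(y_pred)
                          + (1.0 - beta) * (1.0 - y_true) * np.log(1.0 - y_pred)))

def benchmark(predict_fn, volumes, labels, min_seconds=60.0):
    """Run inference over the validation set for at least `min_seconds`;
    return mean latency in milliseconds and mean balanced cross-entropy."""
    latencies, losses = [], []
    start = time.time()
    while time.time() - start < min_seconds:
        for x, y in zip(volumes, labels):
            t0 = time.time()
            pred = predict_fn(x)  # one 128^3 volume per call
            latencies.append(time.time() - t0)
            losses.append(balanced_cross_entropy(y, pred))
    return 1000.0 * float(np.mean(latencies)), float(np.mean(losses))
```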

OpenVINO/Intel-Tensorflow* Benchmarking Method:

Built on an Ubuntu 18.04 base image, the CPU Docker container uses OpenVINO v2020.3 and Intel-Tensorflow v1.15.2 to run inference on their respective FaultSeg representations. Prior to benchmarking, the original Keras FaultSeg model was converted to a Tensorflow frozen graph and then to OpenVINO IR using the OpenVINO Model Optimizer, as sketched below.
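The snippet below is a minimal sketch of that conversion path under TF 1.15; the model file name and the Model Optimizer arguments are illustrative assumptions, not the exact commands used.

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

tf.compat.v1.keras.backend.set_learning_phase(0)  # inference mode
# compile=False skips the custom training loss when loading the weights
model = tf.keras.models.load_model("faultseg.hdf5", compile=False)

sess = tf.compat.v1.keras.backend.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess,
    sess.graph.as_graph_def(),
    [t.op.name for t in model.outputs],  # freeze everything up to the outputs
)
tf.io.write_graph(frozen, ".", "faultseg_frozen.pb", as_text=False)

# The frozen graph is then converted to OpenVINO IR with the Model Optimizer,
# e.g. (2020.3 syntax; the 5D input shape is batch x D x H x W x channels):
#   python mo_tf.py --input_model faultseg_frozen.pb \
#       --input_shape "[1,128,128,128,1]" --data_type FP32
```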

Tensorflow-GPU/Tensorflow-TensorRT Benchmarking Method:

Built on an Nvidia CUDA 10.0/cuDNN 7.3.6 base image, the GPU Docker container uses Tensorflow-GPU v1.15.3 and Tensorflow with TensorRT v5.1.5 GA to run inference on their respective FaultSeg representations. Docker container access to GPUs on internal infrastructure was handled via Nvidia-docker installed on the internal server. The benchmarking steps for these models were otherwise consistent with those of the OpenVINO/Intel-Tensorflow models. A sketch of the TF-TRT conversion is shown below.
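This is a minimal sketch of the TF-TRT path under TF 1.15; the graph node names are hypothetical and depend on how the FaultSeg graph was exported.

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load the frozen FaultSeg graph produced earlier
with tf.io.gfile.GFile("faultseg_frozen.pb", "rb") as f:
    frozen = tf.compat.v1.GraphDef()
    frozen.ParseFromString(f.read())

converter = trt.TrtGraphConverter(
    input_graph_def=frozen,
    nodes_blacklist=["sigmoid_out/Sigmoid"],  # hypothetical output node name
    precision_mode="FP32",
    max_batch_size=1,
)
trt_graph = converter.convert()  # TensorRT engines replace supported subgraphs

with tf.compat.v1.Session() as sess:
    tf.import_graph_def(trt_graph, name="")
    inp = sess.graph.get_tensor_by_name("input_1:0")           # hypothetical
    out = sess.graph.get_tensor_by_name("sigmoid_out/Sigmoid:0")
    # preds = sess.run(out, feed_dict={inp: patch[None, ..., None]})
```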

Results

The benchmarks indicated that the Intel® Xeon® Gold processor with OpenVINO produced the fastest inference in FP32 precision, including in comparison to TensorRT-optimized Tensorflow on an NVIDIA V100, as shown in Figure 1. The Intel® Xeon® processor was then used to produce inference results on the F3 dataset.


Figure 1: Performance graph showing the latency comparison of TensorRT (on an NVIDIA V100) vs. OpenVINO (on an Intel Xeon Gold 6252). See the backup section for configuration details. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.

Inference Results

The FaultSeg model optimized by OpenVINO was then used to perform fault detection on the F3 dataset, as shown in Figure 2.

Figure 2: Fault detection results produced by the OpenVINO-optimized FaultSeg model on the F3 seismic dataset.
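For reference, the following is a minimal sketch (OpenVINO 2020.x Python Inference Engine API; file names are illustrative) of loading the converted IR and running inference on one sub-volume; full-volume prediction uses the tiling loop sketched earlier.

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="faultseg_frozen.xml",
                      weights="faultseg_frozen.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.inputs))
output_name = next(iter(net.outputs))

# One standardized 128^3 patch; the Model Optimizer converts the 5D input to
# NCDHW layout, hence the shape below. Random data stands in for real traces.
patch = np.random.rand(1, 1, 128, 128, 128).astype(np.float32)
result = exec_net.infer({input_name: patch})
fault_prob = result[output_name]  # per-voxel fault probability, same shape
```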

Conclusion

The OpenVINO representation of FaultSeg on 2nd Generation Intel Xeon Scalable processors provided the best performance for predicting faults in a 3D seismic volume. This reduces the time needed to detect faults, thereby speeding up oil and gas exploration. A workflow for accelerated fault detection in 3D seismic data was established and demonstrated.

References

[1] Xinming Wu, Luming Liang, Yunzhi Shi, and Sergey Fomel, (2019), "FaultSeg3D: Using synthetic data sets to train an end-to-end convolutional neural network for 3D seismic fault segmentation," GEOPHYSICS 84: IM35-IM45.

[2] York Zheng, Qie Zhang, Anar Yusifov, and Yunzhi Shi, (2019), “Applications of supervised deep learning for seismic interpretation”, The Leading Edge 38: 526–533.

[3] Shengrong Li, Changchun Yang, Hui Sun, and Hao Zhang, (2019), "Seismic fault detection using an encoder–decoder convolutional neural network with a small training set," Journal of Geophysics and Engineering 16: 175–189, https://doi.org/10.1093/jge/gxy015.

[4] https://www.intel.com/content/www/us/en/products/processors/xeon/scalable.html

[5] https://terranubis.com/datainfo/Netherlands-Offshore-F3-Block-Complete

[6] https://www.intel.com/content/www/us/en/artificial-intelligence/posts/seismic-data-analysis-with-intel-ai-technology.html

[7] Scaling Edge Inference Deployments on Enterprise IoT Implementations https://www.wwt.com/api/attachments/5e99b8b600cd970084c45ed2/file

[8] FaultSeg Validation dataset https://github.com/xinwucwp/faultSeg/tree/master/data/validation/fault

Notices and Disclaimers

  • Intel® technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of dates reflected in the configurations and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.
  • Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. 
  • Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information, see Performance Benchmark Test Disclosure
  • Your costs and results may vary. 
  • © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Optimization Notice

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #2010804

 

Configuration details (backup for Figure 1):

| | Config 1 | Config 2 |
| --- | --- | --- |
| Test by | Intel | Intel |
| Test date | 07/06/2020 | 07/06/2020 |
| Platform | Intel® Xeon® Gold 6252N CPU @ 2.30GHz | Intel® Xeon® Gold 5220 CPU @ 2.20GHz |
| GPU | n/a | NVIDIA V100 |
| # Nodes | 1 | 1 |
| # Sockets | 2 | 2 |
| CPU(s) | 96 | 72 |
| Cores/socket, Threads/socket | 24/48 | 18/36 |
| ucode | 0x5002f01 | 0x5002f01 |
| HT | On | On |
| Turbo | On | On |
| BIOS version (microcode via cat /proc/cpuinfo \| grep microcode -m1) | 4.1.13, 0x5002f01 | 3.1, 0x5002f01 |
| Total Memory/Node (DDR+DCPMM) | 188 GB | 314 GB |
| Total GPU Memory | n/a | 32 GB |
| Storage - application drives | 439.56 GB | 7 TB |
| OS | Ubuntu 18.04.4 LTS | Ubuntu 16.04.6 LTS |
| Kernel | 4.15.0-108-generic | 4.15.0-106-generic |
| Mitigation variants (1,2,3,3a,4, L1TF); see https://github.com/speed47/spectre-meltdown-checker | Mitigated | Mitigated |


 
