Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU) with Neural Compute Engine
Take your imaging, computer vision and machine intelligence applications into network edge devices with the Movidius family of vision processing units (VPUs) by Intel.
Excellent Performance at Ultra-Low Power
The Intel® Movidius™ Myriad™ X VPU delivers outstanding performance in computer vision and deep neural network inferencing applications. As a member of the Movidius VPU family known for ultra-low power consumption, the Intel® Movidius™ Myriad™ X VPU is capable of delivering a total performance of over 4 trillion operations per second (TOPS).2 With new performance enhancements, the Intel® Movidius™ Myriad™ X VPU is a power-efficient solution that brings advanced vision and artificial intelligence applications to devices such as drones, smart cameras, smart home and security devices, VR/AR headsets, and 360 cameras.
New Generation of Deep Neural Network Performance
Intel has introduced an entirely new deep neural network processing unit into the Intel® Movidius™ Myriad™ X VPU architecture: the Neural Compute Engine. Specifically designed to run deep neural networks at high speed and low power, the Neural Compute Engine enables the Intel® Movidius™ Myriad™ X VPU to reach over 1 TOPS of compute performance on deep neural network inferences.1 The Neural Compute Engine is integrated as part of the power-efficient Movidius VPU architecture, which minimizes power by reducing data movement on-chip. Based on the Intel® Movidius™ Myriad™ X VPU architecture, the maximum number of neural network inference operations per second achievable by the Neural Compute Engine in combination with the 16 SHAVE cores (916 billion operations per second) is more than 10x that achievable by the Movidius Myriad 2 VPU's SHAVE processors (80 billion operations per second).1
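The "more than 10x" claim above is simple arithmetic on the two quoted peak figures; a quick check in Python (the numbers come straight from the text):

```python
# Peak neural network inference throughput figures quoted in the text.
myriad_x_ops_per_s = 916e9  # Neural Compute Engine + 16 SHAVE cores
myriad_2_ops_per_s = 80e9   # Myriad 2 VPU SHAVE processors

# Generation-over-generation speedup on peak inference throughput.
speedup = myriad_x_ops_per_s / myriad_2_ops_per_s
print(f"{speedup:.2f}x")  # 11.45x, consistent with the "more than 10x" claim
```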
Customizable Imaging & Vision Pipelines
The Movidius family of VPUs has always provided a unique, flexible architecture for image processing, computer vision, and deep neural networks. The architecture provides a modular approach to configuring imaging and vision workloads because it combines a set of imaging and vision hardware accelerators, such as stereo depth or the Neural Compute Engine, with an array of C-programmable VLIW vector processors, all accessing a common on-chip memory. This approach enables outstanding image signal processing (ISP) without round trips to external memory, for best power efficiency, as well as interleaved computer vision and deep neural network inference pipelines, all with a data flow methodology that reduces power by minimizing data movement. Movidius VPUs deliver an optimal balance between programmability and performance at low power.
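The interleaved, dataflow style of pipeline described above can be sketched conceptually as chained streaming stages, where each frame flows through every stage before the next frame is produced, so no stage materializes the full stream. This is only a software analogy to illustrate the idea; the stage names and data below are invented, and on the VPU the stages are hardware accelerators and vector processors sharing on-chip memory:

```python
def camera_frames(n):
    """Yield n dummy frames (stand-ins for raw sensor data)."""
    for i in range(n):
        yield {"id": i, "pixels": [0] * 16}

def isp_stage(frames):
    """Illustrative image-signal-processing stage: adjust pixels in place."""
    for f in frames:
        f["pixels"] = [p + 1 for p in f["pixels"]]
        yield f

def inference_stage(frames):
    """Illustrative deep-network stage: attach a dummy classification."""
    for f in frames:
        f["label"] = "object" if sum(f["pixels"]) > 0 else "none"
        yield f

# Stages are chained lazily: each frame passes through the whole
# pipeline in turn, so intermediate results never need to be buffered
# for the entire stream -- the data-movement-minimizing idea in miniature.
results = list(inference_stage(isp_stage(camera_frames(3))))
```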
Support for 8 HD Sensors and 4K Encoding
The Intel® Movidius™ Myriad™ X VPU features 16 MIPI lanes, which support connecting up to 8 HD-resolution RGB sensors directly. The high-throughput inline ISP ensures streams are processed at high speeds, while new hardware encoders provide support for 4K resolutions at both 30 Hz (H.264/H.265) and 60 Hz (M/JPEG) frame rates. Other featured interfaces include USB 3.1 and PCIe* Gen 3.
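As a quick sanity check on the interface budget, dividing the 16 MIPI lanes evenly across 8 sensors leaves 2 lanes per HD sensor. This is simple division on the figures quoted above, not a statement about the actual lane mapping, which depends on the sensor configuration:

```python
total_mipi_lanes = 16  # quoted in the product brief
max_hd_sensors = 8     # quoted in the product brief

# Even split: lanes available per directly connected HD sensor.
lanes_per_sensor = total_mipi_lanes // max_hd_sensors
print(lanes_per_sensor)  # 2
```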
Software Development Kit (SDK) and Tools
The Intel® Movidius™ Myriad™ X VPU ships with a rich SDK that contains all of the software development frameworks, tools, drivers, and libraries needed to implement custom imaging, vision, and deep learning applications on the Intel® Movidius™ Myriad™ X VPU. The SDK also includes a specialized FLIC framework with a plug-in approach to developing application pipelines that combine image processing, computer vision, and deep learning. This framework helps developers focus on the processing, leaving data flow optimization to the tools. For deep neural network development, the SDK includes a neural network compiler that enables developers to rapidly port neural networks from common frameworks such as Caffe* and TensorFlow* with an automated conversion and optimization tool that maximizes performance while retaining network model accuracy.