The power of AI continues to shift from potential to reality, driving a sea change in nearly every major industry.

If that weren’t enough, compute architectures also continue to shift, moving from yesterday’s CPU- and GPU-only platforms to today’s heterogeneous setups.

But you knew that already.

What you may not know is that the Intel® Distribution of OpenVINO™ toolkit was designed specifically to help developers deploy AI-powered solutions across that heterogeneous landscape (combinations of CPU, GPU, VPU, and FPGA) with write-once, deploy-anywhere flexibility.

In this webinar, technical consulting engineer Munara Tolubaeva showcases the Intel Distribution of OpenVINO toolkit and its core role in AI application and solution development. Topics include:

  • How the toolkit can be used to develop and deploy AI deep-learning applications across Intel® architecture—CPUs, CPUs with Intel® Processor Graphics, Intel® Movidius™ VPUs, and FPGAs
  • Cross-architecture deployment of your applications and solutions with little to no code rewriting (see the sketch after this list)
  • Innovations across the hardware and software stacks
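
To make that portability concrete, here is a minimal sketch using the OpenVINO Python API (openvino.runtime). The model file name, the random input, and the device names are illustrative assumptions rather than material from the webinar; the point is that retargeting the same application comes down to changing the device string passed to compile_model.

```python
# Minimal sketch: load an OpenVINO IR model and run inference on one device.
# "model.xml" and the random input are placeholders; device names such as
# "CPU", "GPU", or "MYRIAD" (VPU) are examples of supported targets.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # IR produced ahead of time by the Model Optimizer

# Retargeting the application means changing only this device string,
# e.g. "CPU", "GPU", "MYRIAD", or a HETERO/AUTO configuration.
compiled = core.compile_model(model, device_name="CPU")

# Build a dummy input matching the model's first input shape and run inference.
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print("output shape:", result.shape)
```

Switching the target to, say, a VPU leaves the rest of the application unchanged, which is the "little to no code rewriting" point from the list above.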

Get the Software


Munara Tolubaeva
Software technical consulting engineer, Intel Corporation

Munara Tolubaeva is responsible for enabling customers to be successful on Intel® platforms by using Intel® software. She specializes in high-performance computing, AI and deep learning, performance analysis and optimization, compilers, and heterogeneous computing. She holds a PhD in computer science from the University of Houston.

Intel® Distribution of OpenVINO™ Toolkit

Deploy deep learning inference with unified programming models and broad support for trained neural networks from popular deep learning frameworks.

Get It Now

See All Tools