The Intel® Distribution of OpenVINO™ toolkit enables high-performance deep-learning deployments.

In this session, software specialists Zoe Cayetano, Anna Belova, and Dmitry Temnov discuss optimization best practices for improving key deep-learning metrics, including throughput, accuracy, and latency.

You will:

  • Understand the optimization metrics that matter most in deep learning
  • Learn about the available tools and techniques to optimize your deep-learning applications within the Intel Distribution of OpenVINO toolkit
  • See a demo of how to use the Deep Learning Workbench to visually fine-tune your models
  • Explore benchmarking resources to test with your own applications (a minimal measurement sketch follows this list)
  • And more
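The session covers these topics in depth, but as a rough sense of what measuring throughput and latency looks like in code, here is a minimal sketch (not from the session) using the OpenVINO Python runtime API. The model path, device name, and number of timed runs are placeholders; it also assumes a model with a single static-shaped input.

```python
# Minimal latency/throughput measurement sketch with the OpenVINO runtime API.
# "model.xml" is a placeholder for an IR model produced by Model Optimizer.
import time
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")           # placeholder IR model path
compiled = core.compile_model(model, "CPU")    # target device: "CPU", "GPU", ...

# Build a dummy input matching the model's (assumed static) input shape.
input_port = compiled.input(0)
shape = [int(d) for d in input_port.shape]
dummy = np.random.rand(*shape).astype(np.float32)

request = compiled.create_infer_request()
request.infer({0: dummy})                      # warm-up run

runs = 100                                     # placeholder number of timed runs
start = time.perf_counter()
for _ in range(runs):
    request.infer({0: dummy})
elapsed = time.perf_counter() - start

print(f"average latency: {1000 * elapsed / runs:.2f} ms")
print(f"throughput:      {runs / elapsed:.1f} inferences/s")
```

For more rigorous measurements, the toolkit ships a dedicated benchmark_app utility that reports latency and throughput for a given model and device, and the Deep Learning Workbench exposes similar profiling through a visual interface.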

Get Started


Zoe Cayetano
Product manager for AI and IoT, Intel Corporation

Passionate about democratizing access to technology and pursuing projects with outsized impact on the world, Zoe works on a range of interdisciplinary business and engineering problems. Before Intel, she was a data science researcher for a particle accelerator at Arizona State University, where she analyzed the electron beam dynamics of novel X-ray lasers used for crystallography, quantum materials, and bioimaging. She holds BA degrees in applied physics and business. Born in the Philippines, she is based in San Francisco, California.


Intel® Distribution of OpenVINO™ Toolkit

Deploy deep learning inference with unified programming models and broad support for trained neural networks from popular deep learning frameworks.

Get It Now
