The Intel® Distribution of OpenVINO™ toolkit enables high-performance deep-learning deployments.
In this session, software specialists Zoe Cayetano, Anna Belova, and Dmitry Temnov discuss optimization best practices to maximize your deep-learning metrics, including throughput, accuracy, and latency.
- Understand the different optimization metrics that matter in deep learning
- Learn about the available tools and techniques to optimize your deep-learning applications within the Intel Distribution of OpenVINO toolkit
- See a demo of how to use the Deep Learning Workbench to visually fine-tune your models
- Explore benchmarking resources to test on your own applications
- And more
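Two of the metrics named above, latency and throughput, can be computed directly from per-inference timings you collect in your own application. The following is a minimal sketch, assuming you have already recorded a hypothetical list of latencies in milliseconds (the function name and field names are illustrative, not part of the OpenVINO API):

```python
import statistics

def summarize_latencies(latencies_ms):
    """Summarize per-inference latencies (in milliseconds) into two of the
    metrics discussed in the session: mean latency and throughput.

    `latencies_ms` is a hypothetical list of timings collected by the caller.
    """
    mean_latency = statistics.mean(latencies_ms)
    # Throughput in inferences per second, assuming sequential execution
    # (batching or parallel inference streams would change this calculation).
    throughput = 1000.0 / mean_latency
    return {"mean_latency_ms": mean_latency, "throughput_fps": throughput}

# Example with made-up timings: mean latency 10.0 ms → 100 inferences/s.
print(summarize_latencies([10.0, 12.0, 8.0]))
```

Tools such as the Deep Learning Workbench and the toolkit's benchmarking resources report these same quantities, so a quick manual calculation like this is useful for sanity-checking what you see there.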
- Download the latest version of the Intel Distribution of OpenVINO toolkit
- Find out more about Deep Learning Workbench, a web-based graphical environment for visualizing, fine-tuning, and comparing performance of deep-learning models
Zoe Cayetano, product manager for AI and IoT, Intel Corporation
Passionate about democratizing access to technology and working on projects with outsized impact on the world, Zoe works on various interdisciplinary business and engineering problems. Before Intel, she was a data science researcher for a particle accelerator at Arizona State University, where she analyzed the electron beam dynamics of novel X-ray lasers used for crystallography, quantum materials, and bioimaging. She holds BA degrees in applied physics and business. Born in the Philippines, she is based in San Francisco, California.