Intel’s deep investments in developer ecosystems, tools, technology and an open platform are clearing the path forward to scale artificial intelligence everywhere. Intel’s role is to responsibly scale this technology. Intel has made AI more accessible and scalable for developers through extensive optimizations of popular libraries and frameworks on Intel® Xeon® Scalable processors. Intel’s investment in multiple AI architectures to meet diverse customer requirements, using an open standards-based programming model, makes it easier for developers to run more AI workloads in more use cases. Many of the world’s leading organizations leverage Intel AI to solve complex problems.
MORE: Press Kits: Intel Innovation 2021 | 12th Gen Intel Core
Webcast: Intel Innovation Keynote
News Releases: Intel Innovation Spotlights New Products, Technology and Tools for Developers | Intel Unveils 12th Gen Intel Core, Launches World’s Best Gaming Processor, i9-12900K
Intel Innovation Topic News: Developer/oneAPI | Ubiquitous Computing | Cloud-to-Edge Infrastructure | Pervasive Connectivity
Intel Targets a 30x Total AI Performance Gain by 2022
Intel announced today that it is targeting up to a 30x total AI performance gain for Intel Xeon Scalable processors by 2022. Intel demonstrated the gains at Intel Innovation using pre-production next-generation Intel Xeon Scalable processors (code-named “Sapphire Rapids”). The gains were achieved by harnessing the built-in Advanced Matrix Extensions (AMX) engine in the next-gen Xeon processor, the Intel® Neural Compressor (INC) and oneDNN optimizations based on the oneAPI open industry standard. The Intel Neural Compressor automatically optimizes trained neural networks with negligible accuracy loss, converting them from FP32 to int8 numerical precision and taking full advantage of the built-in AI acceleration (Intel® Deep Learning Boost) available today in 3rd Gen Intel Xeon Scalable processors. Intel also demonstrated the significant AI performance gains expected for next-gen Intel Xeon Scalable processors compared with Nvidia graphics processing units (GPUs), achieving over 24,000 images per second on ResNet-50 versus 16,000 images per second on the latest Nvidia A30 GPU. This demonstrates that a general-purpose CPU with built-in AI acceleration can address even more customer use cases that once required GPU acceleration.
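The FP32-to-int8 conversion that the Intel Neural Compressor automates can be pictured with a minimal post-training quantization sketch. This is plain Python illustrating a generic symmetric scale-based quantizer, not the INC API; the function names and the five-element example tensor are made up for illustration:

```python
# Sketch of symmetric post-training int8 quantization, the kind of
# FP32 -> int8 conversion a tool like Intel Neural Compressor
# automates. Illustrative only; not the INC API.

def calibrate_scale(fp32_values):
    """Derive a scale so the largest |value| maps onto the int8 range."""
    max_abs = max(abs(v) for v in fp32_values)
    return max_abs / 127.0 if max_abs else 1.0

def quantize(fp32_values, scale):
    """FP32 -> int8: divide by the scale, round, clamp to [-127, 127]."""
    return [max(-127, min(127, round(v / scale))) for v in fp32_values]

def dequantize(int8_values, scale):
    """int8 -> FP32 approximation: multiply back by the scale."""
    return [q * scale for q in int8_values]

# Hypothetical slice of a trained weight tensor, used as calibration data.
weights = [0.02, -1.27, 0.5, 0.81, -0.33]
scale = calibrate_scale(weights)
q = quantize(weights, scale)
recovered = dequantize(q, scale)

# Per-element error is bounded by half the scale, which is why the
# accuracy loss from int8 inference can be negligible.
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
```

Production quantizers add per-channel scales, zero-points for asymmetric ranges and accuracy-aware tuning loops, but the scale-round-clamp core is the same.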
Alibaba Recommendation Engine Toolkits Optimized on Intel
Alibaba and Intel partnered to build DeepRec, an end-to-end toolkit that facilitates deep learning training and deployment for recommendation systems, a workload that consumes a significant portion of all data center and cloud AI cycles and has diverse compute, memory, bandwidth and network needs. DeepRec runs on Intel Xeon Scalable processors across various Alibaba internal businesses and external Alibaba Cloud customers such as Weibo, which provides a Twitter-like service in China. DeepRec is powered by oneAPI with AVX-512, VNNI and BF16 acceleration. With it, developers can easily load and update models, process embedding layers, leverage existing model zoos and deploy extremely large-scale recommendation services with trillions of samples.
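The embedding layers mentioned above are the distinctive piece of recommendation workloads: sparse categorical IDs (users, items, clicks) are looked up in a large table of dense vectors and pooled into fixed-size features before the dense part of the model. A minimal sketch of that lookup-and-pool step, in plain Python with hypothetical names and sizes (not the DeepRec API):

```python
import random

# Sketch of the embedding lookup + pooling step at the heart of
# recommendation models like those DeepRec serves. Table size and
# dimension are toy values chosen for illustration.

EMBED_DIM = 4
random.seed(0)

# Embedding table: one trainable dense vector per categorical ID
# (e.g. an item ID). Real tables hold millions to billions of rows.
embedding_table = {
    item_id: [random.uniform(-0.1, 0.1) for _ in range(EMBED_DIM)]
    for item_id in range(10)
}

def embed_and_pool(item_ids):
    """Look up each ID's vector and mean-pool into one fixed-size feature."""
    vectors = [embedding_table[i] for i in item_ids]
    return [sum(dims) / len(vectors) for dims in zip(*vectors)]

# A user's variable-length click history becomes one dense vector
# that the downstream dense layers can consume.
user_feature = embed_and_pool([3, 7, 7, 1])
```

At trillion-sample scale the challenge is less this arithmetic than sharding, updating and serving the table efficiently, which is where the AVX-512, VNNI and BF16 acceleration comes in.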
Extensive Tool Optimizations Make It Easier to Run AI Everywhere
Intel continues to make AI more scalable and productive from data ingestion to deployment and from cloud to edge. Data infrastructure is already Intel-optimized, and Intel has now streamlined the most popular data science and AI tools and created new ones that help clear the path forward. BigDL is an open-source development platform that simplifies building end-to-end distributed big data and AI pipelines in a production Spark environment. Modin is an open-source library that accelerates the popular Pandas data library by up to 20 times. Intel optimizations for popular machine learning and deep learning libraries and frameworks, including scikit-learn, XGBoost, TensorFlow and PyTorch, deliver 10- to 100-times performance gains. For AI deployment, the Intel Neural Compressor and the OpenVINO™ toolkit deliver all-new capabilities to automate deep learning inference optimization. And the MLOps platform from cnvrg.io makes it easier to orchestrate end-to-end data, modeling and deployment workflows in production.
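Modin's speedup comes from partitioning a dataframe and processing the partitions in parallel across cores while keeping the familiar Pandas API. The pattern can be sketched in plain stdlib Python; this is a conceptual illustration of partition parallelism, not Modin's implementation (Modin uses a real dataframe engine on Ray or Dask under the hood):

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch of Modin-style parallelism: split a column into
# partitions, apply a function to each partition concurrently, then
# stitch the results back together in order.

def partition(data, n_parts):
    """Split a list into up to n_parts contiguous chunks."""
    size = (len(data) + n_parts - 1) // n_parts
    return [data[i:i + size] for i in range(0, len(data), size)]

def parallel_map(func, data, n_parts=4):
    """Apply func element-wise, one partition per worker."""
    parts = partition(data, n_parts)
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        results = pool.map(lambda chunk: [func(x) for x in chunk], parts)
    return [x for chunk in results for x in chunk]

doubled = parallel_map(lambda x: x * 2, list(range(10)))
```

From the user's side none of this machinery is visible: Modin is documented as a drop-in replacement, so existing Pandas scripts keep their API while the partitioned execution happens underneath.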
Intel AI Global Impact Festival Makes AI More Accessible
Intel is announcing an annual Intel® AI Global Impact Festival for governments, academia and student innovators to learn about AI and celebrate the excellence of next-generation innovators who solve real-world problems using AI. The first year’s theme, “Enriching Lives with AI Innovation,” aims to foster human-centric, responsible AI. The festival will bring together geeks, problem-solvers, future developers, innovators and policymakers. Student innovators and developers with an existing AI innovation, or an AI idea worth building as a prototype, will go through a rigorous process to win prizes worth a total of $200,000, along with opportunities for future mentorship.

Helping future developers requires expanding digital readiness as everything becomes increasingly digitalized. Making technology inclusive and expanding digital readiness is a key component of Intel’s RISE strategy and critical to the company’s corporate purpose. Intel has committed to expanding digital readiness to reach 30 million people in 30,000 institutions in 30 countries by 2030.

The country and global winners will be announced Oct. 27 and will receive an opportunity to showcase their innovations in multiple Intel and industry forums. After a six-month mentorship with Intel experts, the global winners will also have an opportunity to present the progress of their AI innovations to Intel and global industry leaders. Intel believes every student, irrespective of country, gender, background or ethnicity, has the potential to learn and use AI as a superpower to solve the world’s greatest challenges.