Developers have always been at the heart of Intel’s actively evolving ecosystem of hardware and software computing resources. They are the top contributors to and beneficiaries of an open source and open standards-centric community-driven approach to multiarchitecture software development and accelerated computing.
The cornerstones of this are:
- Intel® Software Development Tools powered by oneAPI and SYCL*
- Contributions to the oneAPI specification and performance library ecosystem as part of the Unified Acceleration (UXL) Foundation
- Optimizations for AI frameworks and models targeting performance, size, and ease of use
- The embrace of open AI frameworks, with numerous contributions to PyTorch* and TensorFlow*
All of this stays in focus while enabling the latest hardware, including CPUs, GPUs, AI PC NPUs, and other accelerators.
The result is expedited AI development and high-performance computing across diverse architectures, free from vendor lock-in.
Translating these innovations into developer productivity, we support many events throughout the year, including webinars, workshops, oneAPI Developer Summits (DevSummits), and collegiate hackathons. These events and their on-demand recordings allow developer communities, our strategic partners, and customers to learn about our developer resources and apply them in real-world applications.
Recap of major developer events in 2024:
• 30+ webinars,
• 20+ workshops,
• 13 collegiate hackathons, and
• 2 oneAPI DevSummits
In this article, we highlight our active engagement with the developer community through the tech events we held in 2024, as well as our open source initiatives advocating multiarchitecture, cross-vendor, accelerated parallel computing on modern platforms within a rich, developer-focused software ecosystem.
Empowering LLMs for Advanced GenAI
Large Language Models (LLMs) are central to many GenAI applications, such as prompt-driven virtual assistants and chatbots, recommendation systems, and content generation and translation. Intel’s high-performance AI tools and framework optimizations enable building and fine-tuning LLMs for complex applications, for example, those based on Retrieval Augmented Generation (RAG). Tools like Intel® Neural Compressor aid in model optimization techniques such as quantization and distillation of LLMs and other machine learning models. You can seamlessly integrate popular LLMs, including Meta* Llama, Microsoft* Phi-3, and various Hugging Face* models, using our AI tools and libraries.
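As a brief illustration, here is a minimal sketch of post-training static quantization with Intel Neural Compressor. The toy model and random calibration data are placeholders, and the example assumes the 2.x Python API (neural_compressor.quantization.fit); in practice the model would be an LLM or other production network:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig, quantization

# A stand-in FP32 model; any torch.nn.Module in eval mode works the same way.
fp32_model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

# A small random (input, label) calibration set used to collect activation statistics.
calib_loader = DataLoader(
    TensorDataset(torch.randn(64, 64), torch.zeros(64, dtype=torch.long)),
    batch_size=8,
)

# Post-training static INT8 quantization of the FP32 model.
q_model = quantization.fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(approach="static"),
    calib_dataloader=calib_loader,
)
q_model.save("./int8_model")  # hypothetical output directory
```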
GenAI was the central theme of several developer events we conducted in 2024. More than 15 webinars and workshops held during the year focused on building and optimizing GenAI solutions using our software on the latest hardware. The events covered a variety of topics, such as building LLMs on Intel® Tiber™ AI Cloud, Intel AI PCs, CPUs, and GPUs; optimizing LLMs using Intel® Extension for Transformers and PyTorch optimizations; enhancing LLM inference capabilities for applications like multimodal chatbots; and more.
→ The webinars and workshop recordings are available on Tech.Decoded.
→ Explore our tools and experts’ tips to become a top AI developer.
Enhanced Machine Learning with Python* and PyTorch* Optimizations
The PyTorch framework serves as an important building block for numerous AI applications involving deep neural networks. As a premier member of the PyTorch Foundation, Intel actively contributes to the open source PyTorch community to streamline the stock framework. Our AI software stack provides PyTorch optimizations that enable faster and more efficient model training and inference on Intel hardware. These include Intel® Extension for PyTorch, which boosts the performance of stock PyTorch on Intel platforms; it is available as an open source repository, and its optimizations have also been upstreamed to the PyTorch community. Our efforts to nurture the PyTorch ecosystem include adding optimized code paths and resolving PyTorch issues on GitHub, enhancing the PyTorch documentation, and publishing technical collateral highlighting our latest PyTorch-based applications.
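Enabling these optimizations typically takes only a couple of lines. Below is a minimal inference sketch; the toy model is a placeholder, and bfloat16 acceleration depends on the underlying CPU supporting it:

```python
import torch
from torch import nn
import intel_extension_for_pytorch as ipex

# A stand-in model; any eval-mode torch.nn.Module is handled the same way.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
example = torch.randn(1, 128)

# ipex.optimize applies operator fusion, memory-layout, and dtype
# optimizations tuned for Intel hardware.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under autocast so eligible ops execute in bfloat16.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(example)
print(output.shape)
```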
Our PyTorch optimizations are supported by the Intel® Distribution for Python and Data Parallel Extensions for Python.
We conduct webinars and workshops sharing the results of our joint efforts with the PyTorch Foundation, such as ‘Learn LLM Optimization Using Transformers and PyTorch on CPUs and GPUs’. Our Python-focused webinars in 2024 included ‘Open Source Heterogeneous Programming with Python Developers’.
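To give a flavor of this heterogeneous Python programming model, here is a minimal sketch using dpnp, the NumPy-like module of Data Parallel Extensions for Python; it assumes the dpnp package is installed and a SYCL-capable device (CPU or GPU) is available:

```python
import dpnp as np  # drop-in, NumPy-like API that executes via SYCL

# Arrays are allocated on the default SYCL device (e.g., an Intel GPU if present).
x = np.arange(1_000_000, dtype=np.float32)
y = np.sin(x) ** 2 + np.cos(x) ** 2

print(x.device)                  # reports the SYCL device holding the data
print(bool(np.allclose(y, 1.0)))  # the identity holds up to float32 rounding
```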
Optimizing and Deploying Deep Learning Models using OpenVINO™ Toolkit
OpenVINO™ is an open source AI toolkit that helps accelerate the inference and deployment of AI workloads in domains such as computer vision and GenAI. Models built and optimized with popular frameworks like PyTorch and TensorFlow* can be deployed with the OpenVINO toolkit on a range of Intel hardware: on premises, on device, in a cloud environment, or in the browser. Our evangelists have developed Edge AI Reference Kits powered by OpenVINO that apply the toolkit to practical use cases such as defect detection and explainable AI. In 2024, we hosted a DEVCON workshop series on programming with the OpenVINO toolkit, including sessions on topics like ‘GenAI Fundamentals with OpenVINO’ and ‘Bringing GenAI to AI PC’.
We also educated developers on unlocking the potential of the OpenVINO toolkit through 5+ additional workshops in 2024, for example, ‘Create GenAI with the OpenVINO Toolkit and AI PCs from Intel’.
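A typical OpenVINO deployment reduces to reading (or converting) a model, compiling it for a target device, and running inference. The sketch below assumes the 2023+ openvino Python API and uses a hypothetical IR file path:

```python
import numpy as np
import openvino as ov

core = ov.Core()
# Hypothetical IR file; ov.convert_model also accepts PyTorch and ONNX models.
model = core.read_model("model.xml")

# "AUTO" lets OpenVINO pick the best available device (CPU, GPU, or NPU).
compiled = core.compile_model(model, "AUTO")

# Run inference on dummy input shaped for a typical image model.
result = compiled(np.random.rand(1, 3, 224, 224).astype(np.float32))
print(next(iter(result.values())).shape)
```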
Developing Scalable AI Workloads on Intel® Tiber™ AI Cloud
Intel® Tiber™ AI Cloud serves as a unified platform where developers can access our software tools, libraries, and frameworks alongside the latest Intel architectures, including CPUs, GPUs, Intel® Gaudi® AI accelerators, and more. Our cloud platform provides the computing environment for many practical AI applications built using our AI software development frameworks discussed above, among others. Some of the crucial oneAPI libraries available on the cloud platform (and also as stand-alone versions) are:
- oneCCL for scalable deep learning in distributed environments (see the sketch after this list)
- oneDNN for accelerated deep learning on CPUs and GPUs
- oneDAL for faster data science and analytics
- oneDPL for parallel programming in C/C++ with SYCL
- oneMKL for accelerated math routines, and more
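As a quick illustration of how one of these libraries surfaces in a Python workflow, below is a minimal sketch of oneCCL backing a torch.distributed allreduce via the oneccl_bindings_for_pytorch package; it assumes the script is launched with mpirun or torchrun so that RANK, WORLD_SIZE, and the rendezvous variables are set:

```python
import os
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  (registers the "ccl" backend)

dist.init_process_group(
    backend="ccl",
    rank=int(os.environ.get("RANK", "0")),
    world_size=int(os.environ.get("WORLD_SIZE", "1")),
)

# Each rank contributes its rank id; allreduce sums the values via oneCCL.
t = torch.ones(4) * dist.get_rank()
dist.all_reduce(t, op=dist.ReduceOp.SUM)
print(f"rank {dist.get_rank()}: {t.tolist()}")
dist.destroy_process_group()
```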
Intel provides the next generation of developers with access to Intel Tiber AI Cloud at hackathons. We sponsored 13 collegiate hackathons in 2024 (such as Stanford’s TreeHacks and UT Austin’s HackTX), providing attendees free access to our cloud services. Attendees get hands-on experience not only with the cloud platform but also with the Intel AI PCs that our team brings to the events. Each hackathon features a dedicated project track, ‘Best Use of Intel AI’, in which participants compete with hackathon projects built on our cloud platform. We also conduct hands-on workshops and interactive meetups at the hackathons, educating attendees, including university students and professors, on topics like the Intel® Liftoff Program for AI startups, our educational initiatives (the Intel® Student Ambassador Program and Educator Program), and GenAI essentials.
We also held 15+ webinars and workshops in 2024 on accelerated software development with Intel Tiber AI Cloud.
Shaping a Better Future Together with Key Industry Leaders and Academic Institutions
We collaborate with the strategic partners and customers of our AI and HPC developer ecosystem, enabling them to leverage our resources for building and enhancing enterprise projects and products. The key developer ecosystem leaders we work with include large-scale companies (e.g., Microsoft*, Red Hat*, and IBM*), startups (like Vectara and Prediction Guard*), renowned academic institutions (such as UC Berkeley and the Indian Institutes of Technology (IITs)), and open source communities (like PyTorch and Hugging Face).
In 2024, we organized webinars partnering with PyTorch, Bilic*, Vectara, Seekr*, and others, showcasing how they leverage our technologies to enhance their products or tackle real-world concerns. We have also joined forces with reputable academic institutions by setting up AI and oneAPI Centers of Excellence (CoEs) that help accelerate the integration of our AI and oneAPI initiatives into the educational sector.
Adopting UXL Foundation’s Open-Source Accelerated Parallel Computing Approach Backed by oneAPI and SYCL*
The Unified Acceleration (UXL) Foundation steers an open software ecosystem for accelerated, multiarchitecture, vendor-independent parallel computing. It is a collaborative effort of 30+ key member organizations, including Intel and several other expert contributors. It stands on the foundations of the oneAPI initiative and parallel computing frameworks such as SYCL and OpenCL™.
The UXL Foundation, in collaboration with Intel, organizes oneAPI DevSummits every year. These events are deeply technical conferences where industry experts discuss the latest innovations in developing high-performance AI, HPC, and edge-computing workloads on multi-vendor accelerators using a single code base. They discuss the oneAPI specification and open-source projects based on it through panel discussions, technical talks, demos, tutorials, and more.
The two DevSummits in 2024 drew strong responses, with 1K+ developer attendees each. They revolved around a variety of topics advocating the oneAPI programming paradigm and its aim of accelerated heterogeneous computing. For example, the DevSummit in October 2024 covered topics such as open source development models, evaluating and benchmarking AI systems, and more.
Stay Up-to-date on the Latest Computing Technologies!
In addition to webinars, workshops, hackathons, DevSummits, and marketing collateral, Intel strives to keep developers abreast of the latest technological innovations through technical conferences. For example,
- by organizing Intel ON events (Intel Vision for industry leaders and Intel Innovation for developers and tech enthusiasts), and
- by actively participating in events like Supercomputing and the International Workshop on OpenCL™ and SYCL (IWOCL).
Check out Innovation Selects of 2024, a comprehensive list of demos and tech talks tailored to developers and technophiles. The playlist covers a wide range of conceptual and application-based topics for AI and HPC, such as ‘Open Standard, Multi-vendor AI Training and Inference with LLMs’, ‘A PyTorch and OPEA based AI Audio Avatar Chatbot’, ‘Enterprise RAG Systems’, ‘Optimizing Software on Lunar Lake Processor Hybrid Architecture’, and more.
Interviews with industry experts from Intel and developer resources on various trending technologies, such as Intel® Gaudi® 3 AI accelerators, Intel® Core™ Ultra, and Intel® Xeon® 6 processors, are available on the Intel Innovation landing page.
What’s Next?
We are committed to maximizing value for our stakeholders. Our collaborations with them are built on a foundation of our work on compute offload and acceleration based on oneAPI and SYCL. We encourage you to explore our oneAPI toolkits, which include AI tools and frameworks, compilers, analysis and debugging tools like Intel® VTune™ Profiler, the Intel® DPC++ Compatibility Tool for automated CUDA-to-SYCL code migration, and much more!
The various developer-focused events we host and support give us an opportunity to share our latest technologies with key players in our developer ecosystem.
→ In 2024, we celebrated 30 years of oneMKL, 5 years of the oneAPI initiative, and the 1st anniversary of the UXL Foundation.
We look forward to your enthusiastic engagement and to seeing you at webinars, workshops, and industry events in 2025!
Key Events Available On-Demand