Intel® oneAPI IoT Toolkit
Accelerate Development of Smart, Connected Devices
Speed Up Development for Solutions That Run at the Network's Edge
The Intel® oneAPI IoT Toolkit is tailored for developers who are bringing the power of big data technology to global IoT edge innovations—healthcare, smart cities, industrial, retail, transportation, security, and more. Its build and analysis tools and libraries are enhanced to help with system design, development, and deployment across CPU, GPU, FPGA, and other accelerator architectures.
These modern IoT edge workloads are incredibly diverse, and so are the architectures on which they run. No single architecture is best for every workload. The toolkit’s benefits include:
- Enhanced build and analysis tools and libraries to help with system design, development, and deployment across Intel® CPU, GPU, and FPGA architectures
- Faster integration across the software stack, optimized performance and power efficiency, and improved time to market
- Integration with Intel’s performance libraries and parallel programming models such as OpenMP* and Intel® oneAPI Threading Building Blocks (oneTBB); see the sketch after this list
- Seamless compatibility with popular compilers, development environments, and operating systems
- System behavioral analysis, including power-related metrics and hardware-specific optimizations
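To make the parallel programming models mentioned above concrete, here is a minimal oneTBB sketch that scales a buffer across the available cores. The icpx build line and the -ltbb link flag are assumptions for a typical oneAPI environment; adjust them to match your installation.

    // Minimal oneTBB sketch: scale a buffer in parallel across available cores.
    // Build (assumed typical oneAPI setup): icpx scale.cpp -ltbb
    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<float> data(1000000, 1.0f);

        // oneTBB splits the index range into chunks and runs them on worker threads.
        tbb::parallel_for(tbb::blocked_range<size_t>(0, data.size()),
                          [&](const tbb::blocked_range<size_t>& r) {
                              for (size_t i = r.begin(); i != r.end(); ++i)
                                  data[i] *= 2.0f;
                          });

        std::printf("data[0] = %f\n", data[0]);
        return 0;
    }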
This toolkit is an add-on to the Intel® oneAPI Base Toolkit (Base Kit). As such, it requires the Base Kit for full functionality, including access to the Intel® oneAPI DPC++/C++ Compiler, powerful performance libraries, and advanced analysis tools.
Develop in the Cloud
Build and optimize oneAPI multiarchitecture applications using the latest optimized Intel® oneAPI and AI tools, and test your workloads across Intel® CPUs and GPUs. No hardware installations, software downloads, or configuration necessary. Free for 120 days with extensions possible.
Download the Toolkit
Speed up development for applications and solutions that run at the network's edge with build and analysis tools and libraries enhanced for system design, development, and deployment across CPU, GPU, FPGA, and other accelerator architectures.
Features
Build
Implement efficient, elegant code and optimize system and IoT applications on CPUs and accelerators with Intel’s industry-leading compiler technology and libraries.
Analyze
Pinpoint code-tuning opportunities and see how system resource use impacts your IoT application. These tools deliver a deep, comprehensive analysis of performance characteristics so you can ensure fast cross-architecture performance.
Resolve critical memory and threading issues quickly to ensure application stability and optimized performance.
What's Included
Intel® oneAPI DPC++/C++ Compiler
Use this standards-based C++ compiler with SYCL and OpenMP support to create performance-optimized IoT applications that take advantage of more cores and built-in technologies in platforms based on Intel® Xeon®, Intel® Core™, and Intel Atom® processors with Intel® Processor Graphics.
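As a quick illustration of what the compiler targets, here is a minimal SYCL vector-add sketch. It is a sketch only, assuming a standard oneAPI environment where icpx -fsycl is available; the default queue simply falls back to the CPU when no GPU is present.

    // Minimal SYCL sketch: add two vectors on the device the default queue selects.
    // Build (assumed typical oneAPI setup): icpx -fsycl vadd.cpp
    #include <sycl/sycl.hpp>
    #include <vector>
    #include <iostream>

    int main() {
        constexpr size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        sycl::queue q;   // default device selection (GPU if available, else CPU)
        {
            sycl::buffer<float> bufA(a.data(), sycl::range<1>(n));
            sycl::buffer<float> bufB(b.data(), sycl::range<1>(n));
            sycl::buffer<float> bufC(c.data(), sycl::range<1>(n));

            q.submit([&](sycl::handler& h) {
                sycl::accessor A(bufA, h, sycl::read_only);
                sycl::accessor B(bufB, h, sycl::read_only);
                sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
                h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                    C[i] = A[i] + B[i];
                });
            });
        }   // buffers go out of scope here, copying results back to c

        std::cout << "c[0] = " << c[0] << '\n';   // expect 3
        return 0;
    }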
Intel® C++ Compiler Classic
Create performance-optimized IoT application code that takes advantage of more cores and built-in technologies in platforms based on Intel® processors.
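As one way to use those extra cores from standard C++, here is a minimal OpenMP reduction sketch. The icpc -qopenmp build line is an assumption for a Linux install of the Classic compiler; the Windows equivalent is /Qopenmp.

    // Minimal OpenMP sketch: sum an array across cores with a reduction.
    // Build (assumed Linux setup): icpc -qopenmp sum.cpp
    #include <omp.h>
    #include <vector>
    #include <cstdio>

    int main() {
        std::vector<double> samples(1000000, 0.5);
        double total = 0.0;

        // Each thread accumulates a private partial sum; OpenMP combines them.
        #pragma omp parallel for reduction(+ : total)
        for (long i = 0; i < static_cast<long>(samples.size()); ++i)
            total += samples[i];

        std::printf("total = %f (threads available: %d)\n",
                    total, omp_get_max_threads());
        return 0;
    }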
Intel® Inspector
Locate and debug threading, memory, and persistent memory errors early in the design cycle to avoid costly errors later.
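To make that class of problems concrete, the sketch below contains a deliberate data race of the kind Intel Inspector is built to detect. The inspxe-cl command in the comments is one typical threading-analysis invocation and should be treated as an assumption; check the Inspector documentation for the options in your release.

    // Sketch of a defect Inspector can flag: two threads update a shared
    // counter without synchronization (a data race).
    // Build (assumption): icpx -std=c++17 racy_counter.cpp -pthread
    // Collect (assumption): inspxe-cl -collect ti2 -- ./a.out
    #include <thread>
    #include <cstdio>

    static long counter = 0;   // shared, unprotected

    void bump() {
        for (int i = 0; i < 100000; ++i)
            ++counter;         // racy read-modify-write
    }

    int main() {
        std::thread t1(bump), t2(bump);
        t1.join();
        t2.join();
        // The result is usually less than 200000 because increments are lost.
        std::printf("counter = %ld\n", counter);
        return 0;
    }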
OpenEmbedded meta-intel Layer for Yocto Project*
The meta-intel layer integrates essential oneAPI tools to quickly create and customize Linux* kernels based on the Yocto Project* for edge devices and systems. Access the meta-intel layer through OpenEmbedded or the Yocto Project.
Using Intel® architecture and oneAPI software, Samsung Medison* accelerates image processing in ultrasound systems, resulting in more efficient and accurate diagnoses.
Documentation & Code Samples
Documentation
Get Started Guides:
Linux | Windows* | Containers
Code Samples
Learn how to access oneAPI code samples in a tool command line or IDE.
Training
Threading & Vectorization
- Introduction to Threading Your Application: Windows | Linux | macOS*
- Using Auto-Vectorization: Windows | Linux & macOS
SYCL* & C++ Development
Specifications
Processors:
- Intel Xeon Scalable processors
- Intel Xeon processor family
- Intel Core processors
- Intel Atom processors
GPUs:
- Intel® UHD Graphics for 11th generation Intel processors or newer
- Intel® Iris® Xe graphics or newer
- Intel® Arc™ A-series graphics
- Intel® Server GPU
- Intel® Data Center GPU Flex Series
- Intel® Data Center GPU Max Series
Power and Thermal Analysis Tool
Languages:
Note: Must have the Base Kit installed.
- C and C++
- SYCL*
- Python*
Host operating systems:
- Windows
- Linux
Target operating systems:
- Windows
- Linux
- Embedded Linux*
- Yocto Project
- Android*
Compilers:
- Intel® compilers
- Microsoft* compilers
- GNU Compiler Collection (GCC)*
- Other compilers that follow the same standards
Development environments:
- Linux: Eclipse*
- Windows: Microsoft Visual Studio*, Microsoft Visual Studio Code
- Command line interface
For more information, see the system requirements.
Get Help
Your success is our success. Access these support resources when you need assistance.
- Intel oneAPI IoT Toolkit
- Intel oneAPI DPC++/C++ Compiler
- Intel C++ Compiler Classic
- Intel Inspector
- Linux Kernel Build Tools
For additional help, see our general oneAPI Support.