Developer Guide for Intel® oneAPI Math Kernel Library Windows*
ID: 766692
Date: 6/30/2025
Using the Single Dynamic Library
You can simplify your link line by using the Intel® oneAPI Math Kernel Library (oneMKL) Single Dynamic Library (SDL).
To use SDL, place mkl_rt.lib on your link line. For example:
icx.exe application.c mkl_rt.lib
mkl_rt.lib is the import library for mkl_rt.dll.
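For reference, the following is a minimal sketch of a program that can be linked this way; the file name and the cblas_ddot call are only illustrative:

/* sdl_example.c -- minimal sketch of a program linked against mkl_rt.lib */
#include <stdio.h>
#include <mkl.h>          /* declares the CBLAS interface, for example cblas_ddot */

int main(void)
{
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {4.0, 5.0, 6.0};

    /* Dot product computed by oneMKL; the call is resolved through mkl_rt.dll at run time */
    double d = cblas_ddot(3, x, 1, y, 1);

    printf("dot product = %f\n", d);   /* expected: 32.000000 */
    return 0;
}

Compiling and linking it with icx.exe sdl_example.c mkl_rt.lib produces an executable that resolves the oneMKL entry points through mkl_rt.dll at run time.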
SDL enables you to select the interface and threading library for Intel® oneAPI Math Kernel Library (oneMKL) at run time. By default, linking with SDL provides:
- Intel LP64 interface on systems based on the Intel® 64 architecture
- Intel threading
To use other interfaces or change threading preferences, including use of the sequential version of Intel® oneAPI Math Kernel Library (oneMKL), specify your choices using functions or environment variables, as explained in the section Dynamically Selecting the Interface and Threading Layer.
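As an illustration, the sketch below shows the function-based approach, assuming you want the LP64 interface and the sequential library; see Dynamically Selecting the Interface and Threading Layer for the complete set of supported values and for the equivalent MKL_INTERFACE_LAYER and MKL_THREADING_LAYER environment variables.

#include <mkl.h>   /* declares mkl_set_interface_layer and mkl_set_threading_layer */

int main(void)
{
    /* These calls must precede any other oneMKL function call. */
    mkl_set_interface_layer(MKL_INTERFACE_LP64);
    mkl_set_threading_layer(MKL_THREADING_SEQUENTIAL);

    /* ... subsequent oneMKL calls go through the LP64 interface
       and the sequential (non-threaded) library ... */
    return 0;
}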
NOTE:
Intel® oneAPI Math Kernel Library (oneMKL) SDL (mkl_rt) does not support DPC++ APIs. If your application requires support of Intel® oneAPI Math Kernel Library (oneMKL) DPC++ APIs, refer to Intel® oneAPI Math Kernel Library Link-line Advisor to configure your link command.
Parent topic: Linking Quick Start