Developer Guide for Intel® oneAPI Math Kernel Library Windows*
ID: 766692
Date: 12/16/2022
Getting Help and Support
What's New
Notational Conventions
Related Information
Getting Started
Structure of the Intel® oneAPI Math Kernel Library
Linking Your Application with the Intel® oneAPI Math Kernel Library
Managing Performance and Memory
Language-specific Usage Options
Obtaining Numerically Reproducible Results
Coding Tips
Managing Output
Working with the Intel® oneAPI Math Kernel Library Cluster Software
Managing Behavior of the Intel® oneAPI Math Kernel Library with Environment Variables
Programming with Intel® Math Kernel Library in Integrated Development Environments (IDE)
Intel® oneAPI Math Kernel Library Benchmarks
Appendix A: Intel® oneAPI Math Kernel Library Language Interfaces Support
Appendix B: Support for Third-Party Interfaces
Appendix C: Directory Structure in Detail
Notices and Disclaimers
OpenMP* Threaded Functions and Problems
Functions Threaded with Intel® Threading Building Blocks
Avoiding Conflicts in the Execution Environment
Techniques to Set the Number of Threads
Setting the Number of Threads Using an OpenMP* Environment Variable
Changing the Number of OpenMP* Threads at Run Time
Using Additional Threading Control
Calling oneMKL Functions from Multi-threaded Applications
Using Intel® Hyper-Threading Technology
Managing Multi-core Performance
Managing Performance with Heterogeneous Cores
Overview of the Intel® Distribution for LINPACK* Benchmark
Contents of the Intel® Distribution for LINPACK* Benchmark
Building the Intel® Distribution for LINPACK* Benchmark for a Customized MPI Implementation
Building the Netlib HPL from Source Code
Configuring Parameters
Ease-of-use Command-line Parameters
Running the Intel® Distribution for LINPACK* Benchmark
Heterogeneous Support in the Intel® Distribution for LINPACK* Benchmark
Environment Variables
Improving Performance of Your Cluster
Static Libraries in the lib\ia32 Directory
Dynamic Libraries in the lib\ia32 Directory
Contents of the redist\ia32 Directory
Static Libraries in the lib\intel64 Directory
Dynamic Libraries in the lib\intel64 Directory
Contents of the redist\intel64 Directory
Using MKL_DIRECT_CALL in C Applications
The following examples of code and link lines show how to activate direct calls to Intel® oneAPI Math Kernel Library kernels in C applications:
Include the mkl.h header file:
#include "mkl.h"

int main(void) {
    // Call Intel MKL DGEMM
    return 0;
}
For the multi-threaded Intel® oneAPI Math Kernel Library, compile with the MKL_DIRECT_CALL preprocessor macro:
icl /DMKL_DIRECT_CALL /Qstd=c99 your_application.c mkl_intel_lp64.lib mkl_core.lib mkl_intel_thread.lib /Qopenmp -I%MKLROOT%/include
To use the Intel® oneAPI Math Kernel Library in sequential mode, compile with the MKL_DIRECT_CALL_SEQ preprocessor macro:
icl /DMKL_DIRECT_CALL_SEQ /Qstd=c99 your_application.c mkl_intel_lp64.lib mkl_core.lib mkl_sequential.lib -I%MKLROOT%/include
Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Notice revision #20201201