Structure of the Intel® oneAPI Math Kernel Library
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
Notice revision #20201201