The Parallel Universe Magazine
Intel's semiannual magazine for software innovation helps you take your software development into the future with the latest tools, tips, and training to expand your expertise.
Issue 58, April 2025
Feature: Accelerate Quantitative Finance with SYCL* and Intel® oneAPI Math Kernel Library (oneMKL)
This issue explores innovative approaches to AI, data science, and high-performance computing, with insights into financial engineering, open standards, and time-series analysis.
Download Issue 58 to find:
- Our feature article on accelerating quantitative finance with SYCL* and Intel® oneAPI Math Kernel Library (oneMKL), showing how financial models use simulations for risk analysis.
- How to extend portable data-parallel computing in Python*, enabling GPU acceleration across vendors through an open standard.
- New approaches to time series analysis, including clustering and transformer-based prediction.
- A closer look into JAX and OpenXLA.
Contents
Use SYCL and oneMKL to accelerate quantitative finance simulations. Learn how optimized math libraries improve risk modeling and option pricing calculations.
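To give a flavor of the workload in question, here is a minimal NumPy sketch of a Monte Carlo European call-option pricer, the kind of simulation the article offloads with SYCL and oneMKL; the model parameters and sample count are purely illustrative and not taken from the article.

    import numpy as np

    # Illustrative Black-Scholes-style parameters (assumed values)
    s0, strike, rate, sigma, maturity = 100.0, 105.0, 0.03, 0.2, 1.0
    n_paths = 1_000_000

    rng = np.random.default_rng(42)
    z = rng.standard_normal(n_paths)

    # Terminal asset prices under geometric Brownian motion
    s_t = s0 * np.exp((rate - 0.5 * sigma**2) * maturity + sigma * np.sqrt(maturity) * z)

    # Discounted expected payoff of a European call
    payoff = np.maximum(s_t - strike, 0.0)
    price = np.exp(-rate * maturity) * payoff.mean()
    print(f"Estimated call price: {price:.4f}")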
Improve time-series analysis with principal component analysis (PCA) and density-based spatial clustering of applications with noise (DBSCAN). Learn how Intel® Extension for Scikit-learn* accelerates these algorithms and speeds up clustering for pattern discovery in unlabeled data.
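As a minimal sketch of the pattern, the snippet below applies PCA and DBSCAN after enabling Intel Extension for Scikit-learn's patching mechanism; the synthetic data and parameter values are assumptions chosen only for demonstration.

    from sklearnex import patch_sklearn
    patch_sklearn()  # route supported scikit-learn estimators to the optimized implementation

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import DBSCAN

    # Synthetic stand-in for windowed time-series features (illustrative only)
    rng = np.random.default_rng(0)
    windows = rng.standard_normal((5000, 50))

    reduced = PCA(n_components=5).fit_transform(windows)           # dimensionality reduction
    labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(reduced)  # density-based clustering
    print(np.unique(labels))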
Explore how JAX and OpenXLA run Python* programs on Intel GPUs. Learn how code is transformed from JAX expressions into stable high-level operations (StableHLO) and then into optimized HLO representations.
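For a quick look at those stages, the sketch below uses JAX's public lowering APIs to print the traced expression, the StableHLO module, and the compiled HLO for a small function; the function itself is just an example.

    import jax
    import jax.numpy as jnp

    def f(x):
        return jnp.tanh(x) @ x.T

    x = jnp.ones((4, 4), dtype=jnp.float32)

    print(jax.make_jaxpr(f)(x))       # traced JAX expression (jaxpr)

    lowered = jax.jit(f).lower(x)
    print(lowered.as_text())          # StableHLO handed to the XLA compiler

    compiled = lowered.compile()
    print(compiled.as_text())         # optimized HLO produced by XLA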
Extend Python* with portable data-parallel computing built on the Unified Acceleration (UXL) Foundation's open standards. Learn how dpctl.tensor enables GPU acceleration across vendors within the same Python session.
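As a small taste of dpctl.tensor's array API, the sketch below lists the SYCL devices visible to the session and runs an element-wise computation on the default one; the workload is illustrative, and targeting a particular GPU may require passing a device filter string such as "level_zero:gpu:0".

    import dpctl
    import dpctl.tensor as dpt

    # Show the SYCL devices visible in this Python session
    for dev in dpctl.get_devices():
        print(dev)

    # Allocate on the default device; pass device="gpu" (or a filter string)
    # to dpt.linspace to target a specific accelerator
    x = dpt.linspace(0.0, 1.0, num=1_000_000)
    y = dpt.sin(x) * dpt.cos(x)   # runs on the device that owns x

    print(y.device)
    print(float(dpt.sum(y)))      # bring the scalar result back to the host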
Discover easier time-series analysis on Intel® Tiber™ AI Cloud. Explore time-series prediction with Chronos* and learn how to train and deploy models for accurate forecasting using open tools.
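A minimal forecasting sketch follows, assuming the chronos-forecasting package's ChronosPipeline interface and an illustrative pretrained checkpoint name; the context values, horizon, and quantiles are placeholders.

    import numpy as np
    import torch
    from chronos import ChronosPipeline  # from the chronos-forecasting package (assumed)

    # Load a pretrained Chronos checkpoint (model name chosen for illustration)
    pipeline = ChronosPipeline.from_pretrained("amazon/chronos-t5-small")

    # Historical observations for one series (made-up values)
    context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])

    # Sample probabilistic forecasts for the next 12 steps
    forecast = pipeline.predict(context, prediction_length=12)   # [series, samples, steps]

    low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
    print(median)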
Look deeper into running JAX and OpenXLA. Learn how HLO is transformed into optimized LLVM intermediate representation (IR) and SPIR-V* files to enable efficient model execution on Intel GPUs.
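One way to peek at those later stages is XLA's dump flag; the sketch below sets it before importing JAX so the compiler writes its intermediate files (HLO passes and, depending on the backend and plugin, lower-level IR) to a directory of your choosing. The dump path and example function are assumptions.

    import os

    # Must be set before JAX/XLA is imported so the flag takes effect
    os.environ["XLA_FLAGS"] = "--xla_dump_to=/tmp/xla_dump"

    import jax
    import jax.numpy as jnp

    @jax.jit
    def f(x):
        return jnp.exp(x).sum()

    f(jnp.arange(8.0))  # triggers compilation; inspect the files under /tmp/xla_dump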
Stay In the Know on All Things CODE
Sign up to receive the latest tech articles, tutorials, dev tools, training opportunities, product updates, and more, hand-curated to help you optimize your code, no matter where you are in your developer journey. Take a chance and subscribe. You can change your mind at any time.