Building VASP* with Intel® oneAPI Base and HPC toolkits
Step 1 - Overview
This guide helps users build VASP (Vienna Ab initio Simulation Package) using the Intel® oneAPI Base and HPC toolkits on Linux* platforms.
VASP is a package for performing ab initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane-wave basis set. The approach implemented in VAMP/VASP is based on a finite-temperature local-density approximation (with the free energy as the variational quantity) and an exact evaluation of the instantaneous electronic ground state at each MD step using efficient matrix-diagonalization schemes and efficient Pulay mixing. These techniques avoid all problems occurring in the original Car-Parrinello method, which is based on the simultaneous integration of electronic and ionic equations of motion. The interaction between ions and electrons is described using ultrasoft Vanderbilt pseudopotentials (US-PP) or the projector augmented wave (PAW) method. Both techniques allow a considerable reduction in the number of plane waves per atom required for transition metals and first-row elements. Forces and stress can be easily calculated with VAMP/VASP and used to relax atoms into their instantaneous ground state.
This application note was verified with VASP 6.2.0 and the Intel® oneAPI Base and HPC toolkits. More information on VASP can be found on the VASP homepage.
Note: This is an update for VASP 6.x version with oneAPI.
The Intel® oneAPI Base and HPC toolkits can be downloaded from the Intel® oneAPI Toolkits page.
Step 2 - Configuration
Use the following commands to extract the VASP files:
$ tar -xvzf vasp.6.2.0.tgz
This will create the vasp.6.2.0 directory.
Set the Intel software tools environment variables, assuming the default installation path and building for the Intel® 64 platform.
Note: This application note is written specifically for use with the Intel compilers and MPI.
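Assuming the default installation path used elsewhere in this note (adjust it to match your system), sourcing the setvars.sh script sets up the compilers, oneMKL, and Intel MPI environments in one step:

```shell
# Initialize the oneAPI environment (compilers, oneMKL, Intel MPI).
# The path below is an assumption; adjust it if your oneAPI toolkits
# are installed elsewhere.
source /opt/intel/oneAPI/2021.2/setvars.sh
```

After sourcing, variables such as MKLROOT and PATH point at the oneAPI tools, which the later build steps rely on.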
Step 3 - Building VASP
a. Build libfftw3xf_intel.a
This is a highly optimized, FFTW-compatible performance library that can speed up the FFT part of VASP.
Change directory to Intel® oneAPI Math Kernel Library (oneMKL) fftw3xf library.
Build fftw3xf in the oneMKL directory
After a successful compilation, libfftw3xf_intel.a will be built in the same directory.
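The steps above can be sketched as follows, assuming MKLROOT was set by the oneAPI environment script; the exact make target may vary between oneMKL releases:

```shell
# Change to the oneMKL FFTW3 Fortran interface directory
cd $MKLROOT/interfaces/fftw3xf
# Build the wrapper library for Intel 64 with the Intel compiler
make libintel64 compiler=intel
```

On success, libfftw3xf_intel.a appears in this directory, ready to be referenced from makefile.include in the next step.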
b. Build VASP
Change directory to vasp.6.2.0.
Copy arch/makefile.include.linux_intel file to current directory.
cp arch/makefile.include.linux_intel ./makefile.include
Edit makefile.include to link with the oneMKL fftw library, as shown below.
OBJECTS = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d.o /opt/intel/oneAPI/2021.2/mkl/latest/interfaces/fftw3xf/libfftw3xf_intel.a
Check that the FORTRAN and C++ compiler commands are correctly assigned to mpiifort, icc, and icpc:
FC = mpiifort
FCL = mpiifort -mkl=sequential
……
CC_LIB = icc
……
CXX_PARS = icpc
Check that the FORTRAN flags section is as shown below.
FFLAGS = -assume byterecl -w -xHOST
Make use of -xHOST to enable the highest available SIMD instruction set if you are building and running VASP on the same platform.
Check that the MKL section includes the Intel oneMKL libraries as shown below; this links VASP with the oneMKL BLAS, LAPACK, FFT, BLACS, and ScaLAPACK functions it uses.
MKL_PATH = $(MKLROOT)/lib/intel64
BLAS =
LAPACK =
BLACS = -lmkl_blacs_intelmpi_lp64
SCALAPACK = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
Run the following command to build VASP.
$ make std gam ncl
This will create the vasp_std, vasp_gam, and vasp_ncl executables in the bin directory.
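As a quick sanity check that all three variants were built, you can list the executables (assuming you are still in the vasp.6.2.0 directory):

```shell
# Confirm that the three VASP executables were produced
ls -l bin/vasp_std bin/vasp_gam bin/vasp_ncl
```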
Step 4 - Running VASP
Run VASP by executing the mpiexec command with your required parameters. For example, to run 48 processes with your workload, where the hostnames are listed in the machinefile:
$ mpiexec.hydra -np 48 -f machinefile ./vasp_std
Appendix - How to check whether VASP is linked with oneMKL
To confirm the successful linking of oneMKL with VASP, run ldd on bin/vasp_std as shown below.
[vasp.6.2.0]$ ldd bin/vasp_std
    linux-vdso.so.1 => (0x00007ffff3bd2000)
    libmkl_intel_lp64.so.1 => /opt/intel/oneAPI/2021.2/mkl/2021.2.0/lib/intel64/libmkl_intel_lp64.so.1 (0x00002ac6f2a46000)
    libmkl_cdft_core.so.1 => /opt/intel/oneAPI/2021.2/mkl/2021.2.0/lib/intel64/libmkl_cdft_core.so.1 (0x00002ac6f37ab000)
    libmkl_scalapack_lp64.so.1 => /opt/intel/oneAPI/2021.2/mkl/2021.2.0/lib/intel64/libmkl_scalapack_lp64.so.1 (0x00002ac6f39d3000)
    libmkl_blacs_intelmpi_lp64.so.1 => /opt/intel/oneAPI/2021.2/mkl/2021.2.0/lib/intel64/libmkl_blacs_intelmpi_lp64.so.1 (0x00002ac6f42fe000)
    libmkl_sequential.so.1 => /opt/intel/oneAPI/2021.2/mkl/2021.2.0/lib/intel64/libmkl_sequential.so.1 (0x00002ac6f4544000)
    libmkl_core.so.1 => /opt/intel/oneAPI/2021.2/mkl/2021.2.0/lib/intel64/libmkl_core.so.1 (0x00002ac6f6142000)
    libiomp5.so => /opt/intel/oneAPI/2021.2/compiler/2021.2.0/linux/compiler/lib/intel64_lin/libiomp5.so (0x00002ac6ff6dd000)
    ……
    libmpifort.so.12 => /opt/intel/oneAPI/2021.2/mpi/2021.2.0/lib/libmpifort.so.12 (0x00002ac6ffdfb000)
    libmpi.so.12 => /opt/intel/oneAPI/2021.2/mpi/2021.2.0/lib/release/libmpi.so.12 (0x00002ac7001b9000)
    ……
    ……
    libfabric.so.1 => /opt/intel/oneAPI/2021.2/mpi/2021.2.0/libfabric/lib/libfabric.so.1 (0x00002ac70238d000)
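If you only want to confirm the oneMKL linkage without reading the full dependency list, filtering the ldd output is enough:

```shell
# List only the oneMKL shared libraries that vasp_std links against
ldd bin/vasp_std | grep -i libmkl
```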