Developer Reference for Intel® oneAPI Math Kernel Library for Fortran

ID 766686
Date 12/16/2022
Public


?getri_oop_batch

Computes the inverses for one or more groups of LU-factored n-by-n matrices.

Syntax

call sgetri_oop_batch(n_array, A_array, lda_array, ipiv_array, Ainv_array, ldainv_array, group_count, group_size, info_array)

call dgetri_oop_batch(n_array, A_array, lda_array, ipiv_array, Ainv_array, ldainv_array, group_count, group_size, info_array)

call cgetri_oop_batch(n_array, A_array, lda_array, ipiv_array, Ainv_array, ldainv_array, group_count, group_size, info_array)

call zgetri_oop_batch(n_array, A_array, lda_array, ipiv_array, Ainv_array, ldainv_array, group_count, group_size, info_array)

Include Files

mkl.fi

Description

The ?getri_oop_batch routines are similar to their ?getri counterparts, but they compute the inverses of groups of LU-factored n-by-n matrices, processing one or more groups in a single call. All matrices within a group share the same parameters.

The operation is defined as

i = 1
for g = 1 … group_count
    ng := n_array(g), ldag := lda_array(g)
    for j = 1 … group_size(g)
        Ai, Ainvi, ipivi := matrices and pivot array at A_array(i), Ainv_array(i), ipiv_array(i)
        Ainvi := inv(Pi * Li * Ui)
        i = i + 1
    end for
end for

where Pi is a permutation matrix, Li is lower triangular with unit diagonal elements, and Ui is upper triangular. These routines use partial pivoting with row interchanges.

Ai and Ainvi represent the matrices stored at the addresses pointed to by A_array(i) and Ainv_array(i). Each of these matrices is ng-by-ng, where ng is the g-th element of n_array. Similarly, ipivi represents the pivot array stored at the address pointed to by ipiv_array(i); the size of each pivot array is ng.

The number of entries in A_array, Ainv_array and ipiv_array is total_batch_count, which is equal to the sum of all the entries in the array group_size.
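For example, total_batch_count can be obtained from group_size with the following one-line sketch (group_count and group_size as defined under Input Parameters):

    total_batch_count = sum(group_size(1:group_count))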

Refer to ?getri for a detailed description of the inversion of LU factorized matrices.
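The following is a minimal sketch of a dgetri_oop_batch call for a batch of two groups: two 3-by-3 matrices in group 1 and one 2-by-2 matrix in group 2. The matrix data and all variable names other than the routine arguments are illustrative assumptions. The matrices are LU-factored with dgetrf before the batched inversion, and the address arrays are filled with the loc() extension of the Intel® Fortran compiler; on IA-32 the address arrays would be declared INTEGER*4 instead of INTEGER*8.

    program getri_oop_batch_sketch
      implicit none
      include 'mkl.fi'
      ! Two groups: group 1 holds two 3-by-3 matrices, group 2 holds one
      ! 2-by-2 matrix, so total_batch_count = 2 + 1 = 3.
      integer, parameter :: group_count = 2, total_batch_count = 3
      integer :: n_array(group_count), lda_array(group_count)
      integer :: ldainv_array(group_count), group_size(group_count)
      integer*8 :: A_array(total_batch_count), Ainv_array(total_batch_count)
      integer*8 :: ipiv_array(total_batch_count)
      integer :: info_array(total_batch_count), info
      double precision :: A1(3,3), A2(3,3), A3(2,2)
      double precision :: Ainv1(3,3), Ainv2(3,3), Ainv3(2,2)
      integer :: ipiv1(3), ipiv2(3), ipiv3(2)

      n_array      = (/ 3, 2 /)
      lda_array    = (/ 3, 2 /)
      ldainv_array = (/ 3, 2 /)
      group_size   = (/ 2, 1 /)

      ! Illustrative data: diagonally dominant, hence nonsingular, matrices.
      A1 = reshape((/ 4.d0, 1.d0, 0.d0, 1.d0, 4.d0, 1.d0, 0.d0, 1.d0, 4.d0 /), (/ 3, 3 /))
      A2 = A1
      A3 = reshape((/ 2.d0, 1.d0, 1.d0, 2.d0 /), (/ 2, 2 /))

      ! LU-factor each matrix first; ?getri_oop_batch expects factored input.
      call dgetrf(3, 3, A1, 3, ipiv1, info)
      call dgetrf(3, 3, A2, 3, ipiv2, info)
      call dgetrf(2, 2, A3, 2, ipiv3, info)

      ! Fill the address arrays (loc() is an Intel Fortran extension).
      A_array(1)    = loc(A1);    A_array(2)    = loc(A2);    A_array(3)    = loc(A3)
      Ainv_array(1) = loc(Ainv1); Ainv_array(2) = loc(Ainv2); Ainv_array(3) = loc(Ainv3)
      ipiv_array(1) = loc(ipiv1); ipiv_array(2) = loc(ipiv2); ipiv_array(3) = loc(ipiv3)

      ! Compute the out-of-place inverses for the whole batch.
      call dgetri_oop_batch(n_array, A_array, lda_array, ipiv_array, &
                            Ainv_array, ldainv_array, group_count,   &
                            group_size, info_array)

      print *, 'info_array =', info_array
    end program getri_oop_batch_sketch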

Input Parameters
n_array

INTEGER. Array of size group_count. For group g, ng = n_array(g) specifies the order of the matrices Ai in group g.

The value of each element of n_array must be at least zero.

A_array

INTEGER*8 for Intel® 64 architecture

INTEGER*4 for IA-32 architecture

Array, size total_batch_count, of pointers to the Ai matrices.

lda_array

INTEGER. Array of size group_count. For group g, ldag = lda_array(g) specifies the leading dimension of the matrices Ai in group g, as declared in the calling (sub)program.

The value of ldag must be at least max(1, ng).

ipiv_array

INTEGER*8 for Intel® 64 architecture

INTEGER*4 for IA-32 architecture

Array, size total_batch_count, of pointers to the pivot arrays associated with the LU-factored Ai matrices, as returned by ?getrf_batch.

group_count

INTEGER.

Specifies the number of groups. Must be at least 0.

group_size

INTEGER.

Array of size group_count. The element group_size(g) specifies the number of matrices in group g. Each element in group_size must be at least 0.

Output Parameters
Ainv_array

INTEGER*8 for Intel® 64 architecture

INTEGER*4 for IA-32 architecture

Array, size total_batch_count, of pointers to the Ainvi matrices.

Each matrix is overwritten by the ng-by-ng matrix inv(Ai).

ldainv_array

INTEGER.

Array of size group_count. For group g, ldainvg = ldainv_array(g) specifies the leading dimension of the matrices Ainvi in group g.

The value of ldainvg must be at least max(1, ng).

info_array

INTEGER.

Array of size total_batch_count, which reports the inversion status for each matrix.

If info_array(i) = 0, the execution is successful for Ai.

If info_array(i) = -j, the j-th parameter had an illegal value for Ai.

If info_array(i) = j, the j-th diagonal element of the factor Ui is 0, Ui is singular, and the inversion could not be completed.
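For example, the per-matrix statuses might be checked after the call with a loop like the following sketch (variable names as in the example above, plus a loop index i):

    integer :: i
    ! Report any matrix whose inversion did not complete.
    do i = 1, total_batch_count
      if (info_array(i) > 0) then
        print *, 'matrix', i, ': diagonal element', info_array(i), &
                 'of U is zero; the inverse was not computed'
      else if (info_array(i) < 0) then
        print *, 'matrix', i, ': parameter', -info_array(i), 'had an illegal value'
      end if
    end do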

Related Information

Refer to ?getri_oop_batch_strided, which computes inverses for a group of n-by-n matrices that are allocated at a constant stride from each other in the same contiguous block of memory.