k-Nearest Neighbors (kNN) Classifier
The k-Nearest Neighbors Classifier is also available with oneAPI interfaces.
k-Nearest Neighbors (kNN) classification is a non-parametric
classification algorithm. The model of the kNN classifier is based on
feature vectors and class labels from the training data set. This
classifier induces the class of the query vector from the labels of
the feature vectors in the training data set to which the query
vector is similar. The similarity between feature vectors is determined
by the type of distance (for example, Euclidean) used in the
multidimensional feature space.
Details
Given n feature vectors x_1 = (x_11, ..., x_1p), ..., x_n = (x_n1, ..., x_np)
of dimension p and a vector of class labels y = (y_1, ..., y_n), where
y_i is in {0, 1, ..., C - 1} and C is the number of classes, y_i describes the
class to which the feature vector x_i belongs, the problem is
to build a kNN classifier.
Given a positive integer parameter k and a test observation
x_0, the kNN classifier does the following:
- Identifies the set N_0 of the k feature vectors in the training data that are closest to x_0 according to the distance metric
- Estimates the conditional probability for the class j as the fraction of vectors in N_0 whose labels y_i are equal to j
- Assigns the class with the largest probability to the test observation x_0
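The three steps above can be sketched in plain NumPy on a hypothetical toy data set; this is an illustration of the procedure, not the oneDAL implementation:

```python
import numpy as np

# Hypothetical 2-D training data with labels from {0, 1}.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])

def knn_predict(x0, k=3):
    # Step 1: identify the k training vectors closest to x0 (Euclidean metric).
    dist = np.linalg.norm(X_train - x0, axis=1)
    nearest = np.argsort(dist)[:k]
    # Step 2: estimate class probabilities as label fractions among neighbors.
    counts = np.bincount(y_train[nearest], minlength=2)  # 2 classes here
    probs = counts / k
    # Step 3: assign the class with the largest probability.
    return int(np.argmax(probs))

print(knn_predict(np.array([0.95, 0.9])))  # prints 1
```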
On CPU, kNN classification might use KD tree, a space-partitioning data structure,
or Brute Force search to find nearest neighbors,
while on GPU only Brute Force search is available.
KD tree
On CPU, the library provides kNN classification based on multidimensional binary search tree
(KD tree, where D means the dimension and K means the number of dimensions in the feature space).
For more details, see [James2013], [Patwary2016].
The oneDAL version of the kNN algorithm with KD trees uses the PANDA algorithm
[Patwary2016].
Each non-leaf node of a tree contains the identifier of a feature along
which to split the feature space and an appropriate feature value (a
cut-point) that defines the splitting hyperplane to partition the
feature space into two parts. Each leaf node of the tree has an
associated subset (a bucket) of elements of the training data set.
Feature vectors from any bucket belong to the region of the space
defined by tree nodes on the path from the root node to the
respective leaf.
Brute Force
The Brute Force kNN algorithm calculates the squared distances from each query feature vector
to each reference feature vector in the training data set. Then,
for each query feature vector, it selects the k objects from the training set that are closest to that query feature vector.
For details, see [Li2015], [Verma2014].
Training Stage
Training using KD Tree
For each non-leaf node, the process of building a KD tree
involves the choice of the feature (that is, dimension in the
feature space) and the value for this feature (a cut-point) to
split the feature space. This procedure starts with the entire
feature space for the root node of the tree, and for every next
level of the tree deals with ever smaller part of the feature
space.
The PANDA algorithm constructs the KD tree by choosing the
dimension with the maximum variance for splitting
[Patwary2016].
Therefore, for each new non-leaf node of the tree, the algorithm
computes the variance of values that belong to the respective
region of the space for each of the features and chooses the
feature with the largest variance. Due to high computational cost
of this operation, PANDA uses a subset of feature values to
compute the variance.
PANDA uses a sampling heuristic to estimate the data distribution
for the chosen feature and chooses the median estimate as the
cut-point.
PANDA generates new KD tree levels until the number of feature
vectors in a leaf node is less than or equal to a predefined
threshold. Once the threshold is reached, PANDA stops growing the
tree and associates the feature vectors with the bucket of the
respective leaf node.
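The construction strategy described above can be sketched as follows. This is a simplified illustration, assuming a hypothetical node layout and bucket-size threshold, and computing the exact variance rather than PANDA's sampled estimates; it is not the oneDAL implementation:

```python
import numpy as np

LEAF_THRESHOLD = 2  # hypothetical bucket-size threshold

def build_kdtree(points):
    """Recursively split on the max-variance feature at its median value."""
    if len(points) <= LEAF_THRESHOLD:
        return {"bucket": points}             # leaf node: bucket of vectors
    dim = int(np.argmax(points.var(axis=0)))  # feature with largest variance
    cut = float(np.median(points[:, dim]))    # median estimate as cut-point
    left = points[points[:, dim] <= cut]
    right = points[points[:, dim] > cut]
    if len(left) == 0 or len(right) == 0:     # degenerate split: stop growing
        return {"bucket": points}
    return {"dim": dim, "cut": cut,
            "left": build_kdtree(left), "right": build_kdtree(right)}

pts = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [3.0, 0.3]])
tree = build_kdtree(pts)
print(tree["dim"], tree["cut"])  # prints: 0 1.5
```

Here feature 0 has the largest variance, so the root splits the space at its median, 1.5, and the two resulting halves are small enough to become leaf buckets.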
Training using Brute Force
During training with the Brute Force approach, the algorithm stores all feature vectors from the training data set
to calculate their distances to the query feature vectors.
Prediction Stage
Given the kNN classifier and query vectors x'_1, ..., x'_m,
the problem is to calculate the labels for those vectors.
Prediction using KD Tree
To solve the problem for each given query vector
x'_i, the algorithm traverses the KD tree to find feature
vectors associated with a leaf node that are closest to
x'_i. During the search, the algorithm limits exploration
of the nodes for which the distance between the query vector and the
respective part of the feature space is not less than the distance
from the k-th nearest neighbor found so far. This distance is progressively
updated during the tree traversal.
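A sketch of this pruned traversal over a tiny hand-built tree; the node layout is an illustrative assumption, not oneDAL internals, and for simplicity the search finds a single nearest neighbor (k = 1):

```python
import numpy as np

# A tiny hand-built KD tree (split on feature 0 at cut-point 1.5) standing in
# for the structure described above; the dict layout is illustrative only.
tree = {"dim": 0, "cut": 1.5,
        "left":  {"bucket": np.array([[0.0, 0.0], [1.0, 0.1]])},
        "right": {"bucket": np.array([[2.0, 0.2], [3.0, 0.3]])}}

def nearest(node, q, best=(np.inf, None)):
    """Return (distance, point) of the nearest neighbor to query q."""
    if "bucket" in node:                      # leaf: scan its bucket
        for p in node["bucket"]:
            d = float(np.linalg.norm(p - q))
            if d < best[0]:
                best = (d, p)
        return best
    near, far = (("left", "right") if q[node["dim"]] <= node["cut"]
                 else ("right", "left"))
    best = nearest(node[near], q, best)       # descend the closer side first
    # Prune: explore the far side only if the splitting hyperplane is closer
    # than the best distance found so far (progressively updated).
    if abs(q[node["dim"]] - node["cut"]) < best[0]:
        best = nearest(node[far], q, best)
    return best

d, p = nearest(tree, np.array([2.1, 0.2]))
print(p.tolist())  # prints: [2.0, 0.2]
```

For this query the left subtree is never visited: the distance to the splitting hyperplane (0.6) already exceeds the best distance found in the right bucket (0.1).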
Prediction using Brute Force
To solve the problem, the algorithm computes distances between vectors from the training and testing sets:
d(x'_i, x_j), i = 1, ..., m, j = 1, ..., n.
For example, if the Euclidean distance is used, d(x'_i, x_j) would be the following:
d(x'_i, x_j) = sqrt((x'_i1 - x_j1)^2 + ... + (x'_ip - x_jp)^2)
The k training vectors with minimal distance to the testing vector are the nearest neighbors the algorithm searches for.
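The pairwise squared-distance computation can be sketched with NumPy. The expansion ||x' - x||^2 = ||x'||^2 - 2 x'.x + ||x||^2 is a common way to vectorize it; this is an illustration, not the oneDAL kernel:

```python
import numpy as np

# Squared Euclidean distances from every query vector to every training
# vector, then the indices of the k smallest per query.
X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_query = np.array([[0.2, 0.1], [0.9, 0.8]])
k = 2

# ||x' - x||^2 = ||x'||^2 - 2 x'.x + ||x||^2, computed for all pairs at once.
sq = (np.square(X_query).sum(1)[:, None]
      - 2.0 * X_query @ X_train.T
      + np.square(X_train).sum(1)[None, :])
neighbors = np.argsort(sq, axis=1)[:, :k]   # indices of k nearest per query
print(neighbors.tolist())  # prints: [[0, 1], [3, 1]]
```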
Batch Processing
kNN classification follows the general workflow described in
Classification Usage Model.
Training
For a description of the input and output, refer to Usage Model: Training and Prediction.
At the training stage, both the Brute Force and KD tree based kNN classifiers have the
following parameters:
Parameter  Default Value  Description

algorithmFPType  float  The floating-point type that the algorithm uses for intermediate computations. Can be float or double.
method  defaultDense  The computation method used by kNN classification. The only training method supported so far is the default dense method.
nClasses  -  The number of classes.
dataUseInModel  doNotUse  A parameter to enable/disable use of the input data set in the kNN model. Possible values: doNotUse (the input data set and labels are not included in the model) and doUse (they are included). KD tree based kNN reorders feature vectors and corresponding labels in the input data set or its copy to improve performance at the prediction stage. If the value is doUse, do not deallocate the memory for the input data and labels.
engine  SharedPtr<engines::mt19937::Batch>()  Pointer to the random number generator engine that is used internally to perform sampling needed to choose dimensions and cut-points for the KD tree.
Prediction
For a description of the input and output, refer to Usage Model: Training and Prediction.
At the prediction stage, both the Brute Force and KD tree based kNN classifiers have the
following parameters:
Parameter  Default Value  Description

algorithmFPType  float  The floating-point type that the algorithm uses for intermediate computations. Can be float or double.
method  defaultDense  The computation method used by kNN classification. The only prediction method supported so far is the default dense method.
nClasses  -  The number of classes.
k  -  The number of neighbors.
resultsToCompute  -  The 64-bit integer flag that specifies which extra characteristics of the kNN algorithm to compute. Provide one of the following values to request a single characteristic or use bitwise OR to request a combination of the characteristics: computeIndicesOfNeighbors, computeDistances.
voteWeights  voteUniform  The voting method for prediction: voteUniform (each of the k nearest neighbors casts an equal vote) or voteDistance (each neighbor's vote is weighted by the inverse of its distance to the query vector).
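The difference between uniform voting and a distance-weighted scheme can be illustrated as follows; the 1/distance weighting shown here is an assumption for illustration, not a statement about the library's exact formula:

```python
import numpy as np

# Hypothetical distances and labels of the 3 nearest neighbors of some query.
dists = np.array([0.1, 0.5, 0.6])   # distances to the neighbors
labels = np.array([1, 0, 0])        # their class labels

# Uniform voting: each neighbor counts equally, so class 0 wins 2-to-1.
uniform = np.bincount(labels, minlength=2).astype(float)

# Distance-weighted voting: each vote is scaled by 1/distance, so the single
# very close class-1 neighbor outweighs the two farther class-0 neighbors.
weighted = np.bincount(labels, weights=1.0 / dists, minlength=2)

print(int(np.argmax(uniform)), int(np.argmax(weighted)))  # prints: 0 1
```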
Output
In addition to classifier output, kNN calculates the results described below.
Pass the Result ID as a parameter to the methods that access the result of your algorithm.

Result ID  Result

indices  A numeric table containing indices of rows from the training data set that are nearest neighbors, computed when the computeIndicesOfNeighbors option is on. By default, this result is an object of the HomogenNumericTable class, but you can define the result as an object of any class derived from NumericTable.
distances  A numeric table containing distances to the nearest neighbors, computed when the computeDistances option is on. By default, this result is an object of the HomogenNumericTable class, but you can define the result as an object of any class derived from NumericTable.
Examples
oneAPI DPC++
Batch Processing:
oneAPI C++
Batch Processing:
C++ (CPU)
Batch Processing:
Java*
There is no support for Java on GPU.
Batch Processing:
Python* with DPC++ support
Batch Processing:
Python*
Batch Processing: