C++ API Reference for Intel® Data Analytics Acceleration Library 2020 Update 1

Stochastic Gradient Descent Algorithm

Contains classes for computing the Stochastic gradient descent algorithm.

References

 Batch
 

Namespaces

 daal::algorithms::optimization_solver::sgd
 Contains classes for computing the Stochastic gradient descent algorithm.
 
 daal::algorithms::optimization_solver::sgd::interface1
 Contains version 1.0 of the Intel(R) Data Analytics Acceleration Library (Intel(R) DAAL) interface.
 
 daal::algorithms::optimization_solver::sgd::interface2
 Contains version 2.0 of the Intel(R) Data Analytics Acceleration Library (Intel(R) DAAL) interface.
 

Classes

struct  BaseParameter
 Base class of parameters for the Stochastic gradient descent algorithm.
 
struct  Parameter< method >
 
struct  Parameter< defaultDense >
 Parameter for the Stochastic gradient descent algorithm computed with the defaultDense method.
 
struct  Parameter< miniBatch >
 Parameter for the Stochastic gradient descent algorithm computed with the miniBatch method.
 

Enumerations

enum  Method { defaultDense = 0, miniBatch = 1, momentum = 2 }
 
enum  OptionalDataId { pastUpdateVector = iterative_solver::lastOptionalData + 1, pastWorkValue = pastUpdateVector + 1 }
 

Enumeration Type Documentation

enum Method

Available methods for computing the Stochastic gradient descent algorithm

Enumerator
defaultDense 

Default: the required gradient is computed using only one term of the objective function

miniBatch 

The required gradient is computed using batchSize terms of the objective function

momentum 

The required gradient is computed using batchSize terms of the objective function, and the momentum update rule is applied
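
For illustration, the following sketch shows how a computation method is selected when the solver is constructed in the batch processing mode, here with the miniBatch method and an MSE objective function. It follows the usual Intel DAAL batch pattern; the concrete member names used here (parameter.batchSize, parameter.nIterations, iterative_solver::inputArgument) are assumed from that pattern and may differ between interface versions, and the zero-filled tables stand in for real training data.

    #include "daal.h"

    using namespace daal;
    using namespace daal::algorithms;
    using namespace daal::data_management;

    int main()
    {
        const size_t nFeatures = 3, nObservations = 16;

        /* MSE objective function; the data and dependentVariables tables are
           zero-filled placeholders standing in for real training data */
        services::SharedPtr<optimization_solver::mse::Batch<> > mseObjectiveFunction(
            new optimization_solver::mse::Batch<>(nObservations));
        mseObjectiveFunction->input.set(optimization_solver::mse::data,
            HomogenNumericTable<>::create(nFeatures, nObservations, NumericTable::doAllocate, 0.0f));
        mseObjectiveFunction->input.set(optimization_solver::mse::dependentVariables,
            HomogenNumericTable<>::create(1, nObservations, NumericTable::doAllocate, 0.0f));

        /* SGD solver parameterized by the miniBatch computation method */
        optimization_solver::sgd::Batch<float, optimization_solver::sgd::miniBatch>
            sgdAlgorithm(mseObjectiveFunction);

        /* Starting point: (nFeatures + 1) x 1 argument of the objective function */
        sgdAlgorithm.input.set(optimization_solver::iterative_solver::inputArgument,
            HomogenNumericTable<>::create(1, nFeatures + 1, NumericTable::doAllocate, 0.0f));

        /* batchSize terms of the objective function are used on each iteration */
        sgdAlgorithm.parameter.batchSize         = 4;
        sgdAlgorithm.parameter.nIterations       = 1000;
        sgdAlgorithm.parameter.accuracyThreshold = 1.0e-4;

        sgdAlgorithm.compute();

        NumericTablePtr minimum =
            sgdAlgorithm.getResult()->get(optimization_solver::iterative_solver::minimum);
        return 0;
    }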

enum OptionalDataId

Available identifiers of optional input for the iterative solver

Enumerator
pastUpdateVector 

NumericTable of size p x 1 with the update vector from the previous iteration; used by the momentum method

pastWorkValue 

NumericTable of size p x 1 with the work vector value from the previous main iteration; used by the miniBatch method
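
These optional tables are carried in the solver's optional input and result rather than set one by one: when the optimization is run in stages, the optional result of one compute() call can be fed back as the optional input of the next, so the momentum and miniBatch methods resume with the pastUpdateVector and pastWorkValue from the previous run. The following sketch continues the example above; the identifiers iterative_solver::optionalArgument, iterative_solver::optionalResult, and the optionalResultRequired flag are assumed from the iterative solver interface and may differ between interface versions.

    /* Ask the solver to populate the optional result (pastWorkValue for
       miniBatch, pastUpdateVector for momentum) */
    sgdAlgorithm.parameter.optionalResultRequired = true;
    sgdAlgorithm.parameter.nIterations = 500;
    sgdAlgorithm.compute();

    optimization_solver::iterative_solver::ResultPtr firstStage = sgdAlgorithm.getResult();

    /* Second stage: restart from the previous minimum and carry over the
       optional solver state from the first stage */
    sgdAlgorithm.input.set(optimization_solver::iterative_solver::inputArgument,
        firstStage->get(optimization_solver::iterative_solver::minimum));
    sgdAlgorithm.input.set(optimization_solver::iterative_solver::optionalArgument,
        firstStage->get(optimization_solver::iterative_solver::optionalResult));
    sgdAlgorithm.compute();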
