C++ API Reference for Intel® Data Analytics Acceleration Library 2020 Update 1
Parameter class for the LBFGS algorithm
Parameter(sum_of_functions::BatchPtr function = sum_of_functions::BatchPtr(),
          size_t nIterations = 100,
          double accuracyThreshold = 1.0e-5,
          size_t batchSize = 10,
          size_t correctionPairBatchSize_ = 100,
          size_t m = 10,
          size_t L = 10,
          size_t seed = 777)
Constructs the parameters of the LBFGS algorithm.

[in] function                  Objective function that can be represented as a sum of functions
[in] nIterations               Maximal number of iterations of the algorithm
[in] accuracyThreshold         Accuracy of the LBFGS algorithm
[in] batchSize                 Number of observations used to compute the stochastic gradient
[in] correctionPairBatchSize_  Number of observations used to compute the sub-sampled Hessian for correction-pair computation
[in] m                         Memory parameter of LBFGS
[in] L                         Number of iterations between calculations of the curvature estimates
[in] seed                      Seed for randomly choosing terms from the objective function
virtual check()
Checks the correctness of the parameter
data_management::NumericTablePtr correctionPairBatchIndices
Numeric table of size (nIterations / L) x correctionPairBatchSize that contains the indices used, instead of random values, for the sub-sampled Hessian matrix computations. If not set, random indices are chosen.
size_t correctionPairBatchSize
Number of observations used to compute the sub-sampled Hessian for correction-pair computation
engines::EnginePtr engine
Engine for randomly choosing terms from the objective function.
size_t L
Number of iterations between calculations of the curvature estimates
size_t m
Memory parameter of LBFGS: the maximum number of correction pairs that define the approximation of the inverse Hessian matrix.
size_t seed
Seed for randomly choosing terms from the objective function.
data_management::NumericTablePtr stepLengthSequence
Numeric table that contains the step-length sequence