C++ API Reference for Intel® Data Analytics Acceleration Library 2020 Update 1

Batch< algorithmFPType, method > Class Template Reference

Computes the results of the backward hyperbolic tangent layer in the batch processing mode.

Class Declaration

template<typename algorithmFPType = DAAL_ALGORITHM_FP_TYPE, Method method = defaultDense>
class daal::algorithms::neural_networks::layers::tanh::backward::interface1::Batch< algorithmFPType, method >

Template Parameters
algorithmFPType - Data type to use in intermediate computations for the backward hyperbolic tangent layer, double or float
method - The backward hyperbolic tangent layer computation method, Method
Enumerations
  • Method Computation methods for the backward hyperbolic tangent layer
  • backward::InputId Identifiers of input objects for the backward hyperbolic tangent layer
  • LayerDataId Identifiers of collections in input objects for the hyperbolic tangent layer
  • backward::InputLayerDataId Identifiers of extra results computed by the forward hyperbolic tangent layer
  • backward::ResultId Identifiers of result objects for the backward hyperbolic tangent layer
References
Deprecated:
This item will be removed in a future release.

Constructor & Destructor Documentation

DAAL_DEPRECATED Batch ( )
inline

Default constructor

Deprecated:
This item will be removed in a future release.
DAAL_DEPRECATED Batch ( Parameter &  parameter)
inline

Constructs a backward tanh layer in the batch processing mode and initializes its parameter with the provided parameter

Parameters
[in] parameter - Parameter used to initialize the parameter structure of the layer
Deprecated:
This item will be removed in a future release.
Batch ( const Batch< algorithmFPType, method > &  other)
inline

Constructs the backward hyperbolic tangent layer by copying input objects of another backward hyperbolic tangent layer in the batch processing mode

Parameters
[in] other - An algorithm to be used as the source to initialize the input objects of the backward hyperbolic tangent layer
Deprecated:
This item will be removed in a future release.

Member Function Documentation

virtual services::Status allocateResult ( )
inline virtual

Allocates memory to store the result of the backward hyperbolic tangent layer

Returns
Status of computations
services::SharedPtr<Batch<algorithmFPType, method> > clone ( ) const
inline

Returns a pointer to a newly allocated backward hyperbolic tangent layer with a copy of input objects of this backward hyperbolic tangent layer

Returns
Pointer to the newly allocated algorithm
virtual InputType* getLayerInput ( )
inline virtual

Returns the structure that contains input objects of the backward hyperbolic tangent layer

Returns
Structure that contains input objects of the backward hyperbolic tangent layer
virtual ParameterType* getLayerParameter ( )
inline virtual

Returns the structure that contains parameters of the backward hyperbolic tangent layer

Returns
Structure that contains parameters of the backward hyperbolic tangent layer
layers::backward::ResultPtr getLayerResult ( )
inline

Returns the structure that contains results of the backward hyperbolic tangent layer

Returns
Structure that contains results of the backward hyperbolic tangent layer
virtual int getMethod ( ) const
inline virtual

Returns the computation method of the backward hyperbolic tangent layer

Returns
Computation method of the backward hyperbolic tangent layer
ResultPtr getResult ( )
inline

Returns the structure that contains the result of the backward hyperbolic tangent layer

Returns
Structure that contains the result of the backward hyperbolic tangent layer
services::Status setResult ( const ResultPtr &  result)
inline

Registers user-allocated memory to store results of the backward hyperbolic tangent layer

Parameters
[in] result - Structure to store results of the backward hyperbolic tangent layer
Returns
Status of computations

Member Data Documentation

InputType input

Input objects of the layer

ParameterType& parameter

tanh layer parameters structure



For more complete information about compiler optimizations, see our Optimization Notice.