Optimizer - Maple Help


DeepLearning

 Optimizer
 create an Optimizer object

Calling Sequence

 Optimizer(X)

Parameters

 X - function; optimizer name with parameters.

Description

 • The Optimizer(X) command creates an Optimizer using the specified function X with parameters.
 • An Optimizer computes gradients of a loss function and applies them to variables. The implemented optimizers include classic optimization algorithms such as gradient descent and Adagrad.
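To illustrate the update an Optimizer applies, here is a minimal sketch of the gradient-descent rule, w ← w − learning_rate · ∇f(w), written in plain Python (not Maple or TensorFlow code) for a simple quadratic loss:

```python
# Gradient-descent sketch: minimize f(w) = (w - 3)^2, whose gradient is
# 2*(w - 3). Each step moves w against the gradient, scaled by learning_rate.

def gradient_descent_step(w, grad, learning_rate):
    return w - learning_rate * grad

w = 0.0
learning_rate = 0.1
for _ in range(100):
    grad = 2.0 * (w - 3.0)   # gradient of (w - 3)^2 at the current w
    w = gradient_descent_step(w, grad, learning_rate)

# w has converged close to the minimizer w = 3
```

The other optimizers in the table below refine this same loop with per-parameter step sizes, momentum, or regularization terms.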

Supported Optimizers

 • All arguments in the Usage column of the table below are real-valued input parameters. The learning_rate argument is mandatory for every optimizer; the remaining parameters may be omitted, in which case they revert to their default values.

| Optimizer | Usage | Description |
|---|---|---|
| Gradient Descent | GradientDescent(learning_rate) | Gradient descent algorithm |
| Adadelta | Adadelta(learning_rate, rho, epsilon) | Adadelta algorithm |
| Adagrad | Adagrad(learning_rate, initial_accumulator_value) | Adagrad algorithm |
| Adam | Adam(learning_rate, beta1, beta2) | Adam algorithm |
| Ftrl | Ftrl(learning_rate, learning_rate_power, initial_accumulator_value, l1_regularization_strength, l2_regularization_strength) | FTRL algorithm |
| ProximalGradientDescent | ProximalGradientDescent(learning_rate, l1_regularization_strength, l2_regularization_strength) | Proximal gradient descent algorithm |
| RMSProp | RMSProp(learning_rate, decay, momentum, epsilon) | RMSProp algorithm |
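As a language-neutral sketch (not Maple code) of how the table's parameters enter an update rule, the Adagrad step below shows the roles of learning_rate and initial_accumulator_value: squared gradients accumulate per parameter, and the effective step size shrinks as the accumulator grows.

```python
# Adagrad sketch on f(w) = (w - 3)^2. The accumulator starts at
# initial_accumulator_value and grows by grad^2 each step; dividing by its
# square root gives each parameter its own decaying step size.

def adagrad_step(w, grad, accum, learning_rate, epsilon=1e-8):
    accum = accum + grad * grad
    w = w - learning_rate * grad / (accum ** 0.5 + epsilon)
    return w, accum

w = 0.0
accum = 0.1   # initial_accumulator_value
for _ in range(500):
    grad = 2.0 * (w - 3.0)
    w, accum = adagrad_step(w, grad, accum, learning_rate=0.5)
```

A larger initial_accumulator_value damps the very first steps; a larger learning_rate scales every step uniformly.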

 • This function is part of the DeepLearning package, so it can be used in the short form Optimizer(..) only after executing the command with(DeepLearning). However, it can always be accessed through the long form of the command by using DeepLearning[Optimizer](..).

Details

 • For more information on any of the above optimizers and the meaning of the input parameters, consult the TensorFlow Python API documentation for tf.train.
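For orientation before consulting those docs, here is a rough Python sketch of the Adam update (the tf.train documentation gives the authoritative definition): beta1 and beta2 are the exponential-decay rates for the first- and second-moment estimates of the gradient.

```python
# Adam sketch on f(w) = (w - 3)^2: exponential moving averages of the
# gradient (m) and squared gradient (v), with bias correction, drive the step.

def adam_step(w, grad, m, v, t, learning_rate, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    w = w - learning_rate * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v

w, m, v = 0.0, 0.0, 0.0
for t in range(1, 1001):
    grad = 2.0 * (w - 3.0)
    w, m, v = adam_step(w, grad, m, v, t, learning_rate=0.05)
```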

Examples

 > $\mathrm{with}\left(\mathrm{DeepLearning}\right):$
 > $f≔\mathrm{Optimizer}\left(\mathrm{Adagrad}\left(0.01\right)\right)$
 ${f}{≔}{\mathrm{DeepLearning Optimizer}}$ (1)
 > $g≔\mathrm{Optimizer}\left(\mathrm{Adam}\left(0.05\right)\right)$
 ${g}{≔}{\mathrm{DeepLearning Optimizer}}$ (2)

Compatibility

 • The DeepLearning[Optimizer] command was introduced in Maple 2018.
 • For more information on Maple 2018 changes, see Updates in Maple 2018.
