Optimization parameters for Adagrad with TPU embeddings.
```python
tf.compat.v1.tpu.experimental.AdagradParameters(
    learning_rate: float,
    initial_accumulator: float = 0.1,
    use_gradient_accumulation: bool = True,
    clip_weight_min: Optional[float] = None,
    clip_weight_max: Optional[float] = None,
    weight_decay_factor: Optional[float] = None,
    multiply_weight_decay_factor_by_learning_rate: Optional[bool] = None,
    clip_gradient_min: Optional[float] = None,
    clip_gradient_max: Optional[float] = None
)
```
Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the `optimization_parameters` argument to set the optimizer and its parameters. See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` for more details.
```python
estimator = tf.estimator.tpu.TPUEstimator(
    ...
    embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
        ...
        optimization_parameters=tf.tpu.experimental.AdagradParameters(0.1),
        ...))
```
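Only `learning_rate` is required; the remaining arguments are optional. As a reference, a construction that sets the optional arguments explicitly might look like the sketch below. The numeric values are arbitrary placeholders chosen for illustration, not recommended settings.

```python
# Sketch: AdagradParameters with the optional arguments spelled out.
# The values are illustrative placeholders, not recommendations.
params = tf.compat.v1.tpu.experimental.AdagradParameters(
    learning_rate=0.1,
    initial_accumulator=0.1,
    use_gradient_accumulation=True,
    clip_weight_min=-1.0,
    clip_weight_max=1.0,
    weight_decay_factor=1e-4,
    multiply_weight_decay_factor_by_learning_rate=True,
    clip_gradient_min=-10.0,
    clip_gradient_max=10.0)
```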
| Args | |
|---|---|
| `learning_rate` | Learning rate used for updating the embedding table. |
| `initial_accumulator` | Initial accumulator value for Adagrad. |
| `use_gradient_accumulation` | Setting this to `False` makes embedding gradient calculation less accurate but faster. See `optimization_parameters.proto` for details. |
| `clip_weight_min` | The minimum value to clip weights by; `None` means -infinity. |
| `clip_weight_max` | The maximum value to clip weights by; `None` means +infinity. |
| `weight_decay_factor` | Amount of weight decay to apply; `None` means the weights are not decayed. |
| `multiply_weight_decay_factor_by_learning_rate` | If `True`, `weight_decay_factor` is multiplied by the current learning rate. |
| `clip_gradient_min` | The minimum value to clip gradients by; `None` means -infinity. |
| `clip_gradient_max` | The maximum value to clip gradients by; `None` means +infinity. |
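To make the roles of `learning_rate`, `initial_accumulator`, and the clipping and weight-decay arguments concrete, the following is a minimal NumPy sketch of the standard Adagrad update they configure. It is only an illustration of the update rule; the TPU embedding engine's exact implementation (in particular how it applies weight decay and clipping) may differ.

```python
import numpy as np

learning_rate = 0.1
accumulator = np.full(4, 0.1)              # starts at initial_accumulator
weights = np.array([0.3, -0.4, 0.05, 0.9])
grad = np.array([0.5, -0.2, 0.0, 3.0])

# clip_gradient_min / clip_gradient_max: bound the incoming gradient.
grad = np.clip(grad, -1.0, 1.0)

# Adagrad: accumulate squared gradients and scale the step by their root.
accumulator += grad ** 2
update = learning_rate * grad / np.sqrt(accumulator)

# weight_decay_factor, multiplied by the learning rate when
# multiply_weight_decay_factor_by_learning_rate is True (illustrative only).
update += learning_rate * 1e-4 * weights

weights -= update

# clip_weight_min / clip_weight_max: bound the updated weights.
weights = np.clip(weights, -1.0, 1.0)
```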