An optimizer that averages gradients across TPU shards.
Inherits From: Optimizer
tf.compat.v1.tpu.CrossShardOptimizer(
    opt, reduction=losses.Reduction.MEAN, name='CrossShardOptimizer',
    group_assignment=None
)
Args:
  opt: An existing Optimizer to encapsulate.
  reduction: The reduction to apply to the shard losses.
  name: Optional name prefix for the operations created when applying
    gradients. Defaults to "CrossShardOptimizer".
  group_assignment: Optional 2d int32 lists with shape
    [num_groups, num_replicas_per_group] which describes how to apply the
    optimizer to subgroups.

Raises:
  ValueError: If reduction is not a valid cross-shard reduction.
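For context, here is a minimal sketch of wrapping an inner optimizer inside
a TPUEstimator-style model_fn; the toy model, learning rate, and
TPUEstimatorSpec wiring are illustrative assumptions, not part of this
class's contract:

import tensorflow.compat.v1 as tf

def model_fn(features, labels, mode, params):
  # A toy model; the layer and loss stand in for a real network.
  logits = tf.layers.dense(features, 10)
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  inner = tf.train.GradientDescentOptimizer(learning_rate=0.01)
  # Wrap the inner optimizer so gradients are aggregated across all TPU
  # shards before being applied; group_assignment could be passed here to
  # restrict aggregation to subgroups of replicas.
  optimizer = tf.tpu.CrossShardOptimizer(inner)
  train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
  return tf.estimator.tpu.TPUEstimatorSpec(
      mode=mode, loss=loss, train_op=train_op)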
Methods
apply_gradients
apply_gradients(
    grads_and_vars, global_step=None, name=None
)
Apply gradients to variables.
Calls tpu_ops.cross_replica_sum() to sum gradient contributions across
replicas, and then applies the real optimizer.
Args:
  grads_and_vars: List of (gradient, variable) pairs as returned by
    compute_gradients().
  global_step: Optional Variable to increment by one after the variables
    have been updated.
  name: Optional name for the returned operation. Defaults to the name
    passed to the Optimizer constructor.

Returns:
  An Operation that applies the gradients. If global_step was not None,
  that operation also increments global_step.

Raises:
  ValueError: If grads_and_vars is malformed.
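For illustration, a sketch of this method in the explicit two-step flow;
loss, global_step, and the clipping threshold below are assumptions
supplied by the surrounding TPU program (tensorflow.compat.v1 imported as
tf, as above):

optimizer = tf.tpu.CrossShardOptimizer(
    tf.train.AdamOptimizer(learning_rate=1e-3))
# Per-replica gradients; the cross-replica sum happens inside
# apply_gradients().
grads_and_vars = optimizer.compute_gradients(loss)
# Optional per-replica post-processing, e.g. clipping each gradient.
processed = [(tf.clip_by_norm(g, 1.0), v)
             for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(processed, global_step=global_step)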
compute_gradients
compute_gradients(
    loss, var_list=None, **kwargs
)
Compute gradients of "loss" for the variables in "var_list".
This simply wraps compute_gradients() from the real optimizer. The
gradients are not aggregated until apply_gradients(), so users can still
modify them first if needed, for example by clipping with a per-replica
global norm. Clipping by global norm after aggregation can behave poorly,
since one replica's outsized gradients can distort the gradients
contributed by the other replicas.
When the CrossShardOptimizer is constructed with
reduction == losses.Reduction.MEAN (default), this function scales the
loss by 1.0 / num_shards before computing the gradients. Assuming the
optimizer uses the default implementation of compute_gradients(), the
gradients of the scaled loss are scaled by 1.0 / num_shards compared to
the gradients of the original loss. This scaling factor is important
because apply_gradients() sums gradients across shards, rather than
averaging them. However, the scaling factor must be taken into account
when clipping the norm of the gradients or performing other
postprocessing.
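As a concrete sketch of accounting for that factor when clipping, assuming
loss, global_step, and num_shards are supplied by the caller and the
threshold of 1.0 is an arbitrary choice:

optimizer = tf.tpu.CrossShardOptimizer(
    tf.train.GradientDescentOptimizer(0.1))  # reduction defaults to MEAN
grads_and_vars = optimizer.compute_gradients(loss)
grads, tvars = zip(*grads_and_vars)
# Under MEAN reduction each per-replica gradient already carries a
# 1/num_shards factor, so clipping at clip_norm / num_shards here matches
# clipping the unscaled per-replica gradients at clip_norm.
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=1.0 / num_shards)
train_op = optimizer.apply_gradients(
    list(zip(clipped, tvars)), global_step=global_step)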
Args:
  loss: A Tensor containing the value to minimize.
  var_list: Optional list or tuple of tf.Variable to update to minimize
    loss. Defaults to the list of variables collected in the graph under
    the key GraphKeys.TRAINABLE_VARIABLES.
  **kwargs: Keyword arguments for compute_gradients().

Returns:
  A list of (gradient, variable) pairs.

Raises:
  ValueError: If not within a tpu_shard_context or group_assignment is
    invalid.
get_name
get_name()
get_slot
get_slot(
    *args, **kwargs
)
Return a slot named "name" created for "var" by the Optimizer.
This simply wraps get_slot() from the actual optimizer.
Args:
  *args: Arguments for get_slot().
  **kwargs: Keyword arguments for get_slot().

Returns:
  The Variable for the slot if it was created, None otherwise.
get_slot_names
get_slot_names(
    *args, **kwargs
)
Return a list of the names of slots created by the Optimizer.
This simply wraps get_slot_names() from the actual optimizer.
Args:
  *args: Arguments for get_slot_names().
  **kwargs: Keyword arguments for get_slot_names().

Returns:
  A list of strings.
minimize
minimize(
    loss, global_step=None, var_list=None, gate_gradients=GATE_OP,
    aggregation_method=None, colocate_gradients_with_ops=False, name=None,
    grad_loss=None
)
Add operations to minimize loss by updating var_list.
This method simply combines calls to compute_gradients() and
apply_gradients(). If you want to process the gradients before applying
them, call compute_gradients() and apply_gradients() explicitly instead
of using this function.
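As a minimal sketch of that equivalence (optimizer, loss, and global_step
assumed to be defined as in the earlier examples):

# The one-step form:
train_op = optimizer.minimize(loss, global_step=global_step)

# ...is equivalent to the explicit two-step form, which leaves room to
# modify the per-replica gradients in between:
grads_and_vars = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)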
Args:
  loss: A Tensor containing the value to minimize.
  global_step: Optional Variable to increment by one after the variables
    have been updated.
  var_list: Optional list or tuple of Variable objects to update to
    minimize loss. Defaults to the list of variables collected in the
    graph under the key GraphKeys.TRAINABLE_VARIABLES.
  gate_gradients: How to gate the computation of gradients. Can be
    GATE_NONE, GATE_OP, or GATE_GRAPH.
  aggregation_method: Specifies the method used to combine gradient
    terms. Valid values are defined in the class AggregationMethod.
  colocate_gradients_with_ops: If True, try colocating gradients with the
    corresponding op.
  name: Optional name for the returned operation.
  grad_loss: Optional. A Tensor holding the gradient computed for loss.

Returns:
  An Operation that updates the variables in var_list. If global_step was
  not None, that operation also increments global_step.

Raises:
  ValueError: If some of the variables are not Variable objects.
Eager Compatibility
When eager execution is enabled, loss should be a Python function that
takes no arguments and computes the value to be minimized. Minimization
(and gradient computation) is done with respect to the elements of
var_list if not None, else with respect to any trainable variables
created during the execution of the loss function. gate_gradients,
aggregation_method, colocate_gradients_with_ops and grad_loss are ignored
when eager execution is enabled.
variables
variables()
Forwards the variables from the underlying optimizer.
Class Variables:
  GATE_GRAPH = 2
  GATE_NONE = 0
  GATE_OP = 1