Create a regularizer that applies both L1 and L2 penalties.
tf.keras.regularizers.l1_l2(
l1=0.01, l2=0.01
)
The L1 regularization penalty is computed as:
loss = l1 * reduce_sum(abs(x))
The L2 regularization penalty is computed as:
loss = l2 * reduce_sum(square(x))
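A minimal sketch of the combined penalty, assuming a small example weight tensor: calling the returned regularizer on a tensor yields the sum of both terms.

```python
import tensorflow as tf

reg = tf.keras.regularizers.l1_l2(l1=0.01, l2=0.01)
x = tf.constant([[1.0, -2.0], [3.0, -4.0]])

# Penalty computed by the regularizer.
penalty = reg(x)

# Equivalent manual computation from the formulas above.
manual = 0.01 * tf.reduce_sum(tf.abs(x)) + 0.01 * tf.reduce_sum(tf.square(x))

print(float(penalty), float(manual))  # both ~0.40
```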
| Arguments | |
|---|---|
| `l1` | Float; L1 regularization factor. |
| `l2` | Float; L2 regularization factor. |
| Returns |
|---|
| An `L1L2` `Regularizer` with the given regularization factors. |
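A typical use is to pass the returned regularizer to a layer's weight-regularizer argument; the sketch below (layer sizes are illustrative) attaches it to a `Dense` layer's kernel so the penalty is added to the layer's losses during training.

```python
import tensorflow as tf

# Illustrative layer; units=32 is an arbitrary choice for this sketch.
dense = tf.keras.layers.Dense(
    units=32,
    kernel_regularizer=tf.keras.regularizers.l1_l2(l1=0.01, l2=0.01),
)
```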