Layer that applies an update to the cost function based on input activity.
layer_activity_regularization(object, l1 = 0, l2 = 0, input_shape = NULL,
  batch_input_shape = NULL, batch_size = NULL, dtype = NULL,
  name = NULL, trainable = NULL, weights = NULL)

Arguments
| object | Model or layer object |
| l1 | L1 regularization factor (positive float). |
| l2 | L2 regularization factor (positive float). |
| input_shape | Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model. |
| batch_input_shape | Shape, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors. |
| batch_size | Fixed batch size for the layer. |
| dtype | The data type expected by the input, as a string (e.g. "float32", "float64", "int32"). |
| name | An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided. |
| trainable | Whether the layer weights will be updated during training. |
| weights | Initial weights for the layer. |
Input shape
Arbitrary. Use the keyword argument input_shape (a list of integers, not
including the samples axis) when using this layer as the first layer in
a model.
Output shape
Same shape as input.
See also
Other core layers: layer_activation,
layer_dense, layer_dropout,
layer_flatten, layer_input,
layer_lambda, layer_masking,
layer_permute,
layer_repeat_vector,
layer_reshape
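Example
A minimal usage sketch (assumes the keras R package is loaded; the layer sizes and penalty value are illustrative, not taken from this page):

library(keras)

# Apply an L1 penalty to the activations produced by the dense layer.
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(32)) %>%
  layer_activity_regularization(l1 = 0.001) %>%
  layer_dense(units = 10, activation = "softmax")

# The regularization layer leaves the tensor shape unchanged.
summary(model)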