Applies Dropout to the input.
Dropout consists of randomly setting a fraction `rate` of the input units to 0 at
each update during training time, which helps prevent overfitting.
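The mechanics can be sketched outside of Keras. Below is a minimal NumPy illustration of inverted dropout, the standard variant in which surviving units are rescaled by `1 / (1 - rate)` so the expected activation is unchanged; the `dropout` function name here is purely illustrative, not part of any library API:

```python
import numpy as np

def dropout(x, rate, training=True, rng=None):
    """Inverted dropout sketch: zero a fraction `rate` of units and
    rescale the survivors by 1 / (1 - rate) so the expected value of
    each unit is unchanged. At inference (training=False), pass
    the input through untouched."""
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng()
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob   # True = keep the unit
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((4, 5))
y = dropout(x, rate=0.5, rng=np.random.default_rng(0))
# Each entry of y is either 0.0 (dropped) or 2.0 (kept and rescaled).
```

Note that dropout is only active during training; at inference the layer is the identity, which is why the sketch short-circuits when `training=False`.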
    layer_dropout(object, rate, noise_shape = NULL, seed = NULL,
      batch_size = NULL, name = NULL, trainable = NULL, weights = NULL)

Arguments
| object | Model or layer object |
| rate | float between 0 and 1. Fraction of the input units to drop. |
| noise_shape | 1D integer tensor representing the shape of the binary
dropout mask that will be multiplied with the input. For instance, if your
inputs have shape `(batch_size, timesteps, features)` and you want the dropout
mask to be the same for all timesteps, you can use
`noise_shape = c(batch_size, 1, features)`. |
| seed | Integer to use as a random seed. |
| batch_size | Fixed batch size for the layer. |
| name | An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided. |
| trainable | Whether the layer weights will be updated during training. |
| weights | Initial weights for layer. |
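The effect of `noise_shape` is easiest to see in a sketch: the binary mask is drawn with the given shape and then broadcast against the input, so any dimension of size 1 in the mask shares one dropout decision across that axis. The NumPy function below is an illustrative stand-in, not the Keras implementation:

```python
import numpy as np

def dropout_with_noise_shape(x, rate, noise_shape, rng=None):
    """Sketch of noise_shape semantics: draw the binary mask with
    `noise_shape`, then broadcast it against `x`, so axes of size 1
    in the mask reuse the same dropout decision along that axis."""
    rng = rng or np.random.default_rng()
    keep_prob = 1.0 - rate
    mask = rng.random(noise_shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

# Inputs of shape (batch_size, timesteps, features): a mask of shape
# (batch_size, 1, features) drops the same features at every timestep.
x = np.ones((2, 4, 3))
y = dropout_with_noise_shape(x, rate=0.5, noise_shape=(2, 1, 3),
                             rng=np.random.default_rng(1))
# For each (batch, feature) pair, the values are identical across all
# four timesteps: either all 0.0 or all 2.0.
```

This is what makes `noise_shape` useful for sequence inputs: dropping whole feature channels consistently in time, rather than independent elements at each step.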
See also
Other core layers: layer_activation, layer_activity_regularization,
layer_dense, layer_flatten, layer_input, layer_lambda, layer_masking,
layer_permute, layer_repeat_vector, layer_reshape
Other dropout layers: layer_spatial_dropout_1d, layer_spatial_dropout_2d,
layer_spatial_dropout_3d