Activation functions
Activation functions can either be used through layer_activation(), or through the activation argument supported by all forward layers (see the sketch after the usage listing below).
Usage
activation_relu(x, alpha = 0, max_value = NULL)
activation_elu(x, alpha = 1)
activation_selu(x)
activation_hard_sigmoid(x)
activation_linear(x)
activation_sigmoid(x)
activation_softmax(x, axis = -1)
activation_softplus(x)
activation_softsign(x)
activation_tanh(x)
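For example, a minimal sketch (assuming the keras R package with a TensorFlow backend; the layer sizes and input_shape are illustrative) attaching the same relu non-linearity through both routes:

library(keras)

model <- keras_model_sequential() %>%
  # 1) through the activation argument of a forward layer
  layer_dense(units = 32, input_shape = c(784), activation = "relu") %>%
  # 2) through a standalone activation layer applied to the preceding output
  layer_dense(units = 32) %>%
  layer_activation("relu")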
Arguments
x: Input tensor.
alpha: Alpha value (the slope for negative inputs in activation_relu(); the scale of the negative section in activation_elu()).
max_value: Max value (saturation threshold for activation_relu()).
axis: Integer, axis along which the softmax normalization is applied.
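For illustration, a small sketch of how these arguments change the result (it assumes an eager TensorFlow backend so the tensors print directly, and uses k_constant() from the same package to build the inputs):

library(keras)

# a 1-d example tensor built with the backend helper k_constant()
x <- k_constant(c(-2, -1, 0, 1, 2))

# relu with a zero slope for negative inputs, saturating at max_value = 1
activation_relu(x, alpha = 0, max_value = 1)   # 0 0 0 1 1

# softmax normalized along the last axis of a 2 x 3 tensor (each row sums to 1)
m <- k_constant(matrix(c(1, 2, 3, 1, 2, 3), nrow = 2, byrow = TRUE))
activation_softmax(m, axis = -1)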
Value
Tensor with the same shape and dtype as x.
Details
activation_selu() should be used together with the initialization "lecun_normal" and with the dropout variant "AlphaDropout" (layer_alpha_dropout()).
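A minimal sketch of such a self-normalizing block (assuming the keras R package; the layer sizes, dropout rate, and input_shape are illustrative):

library(keras)

# selu activation, lecun_normal initialization, and alpha dropout
# in place of ordinary dropout
model <- keras_model_sequential() %>%
  layer_dense(units = 64, input_shape = c(20),
              activation = "selu",
              kernel_initializer = "lecun_normal") %>%
  layer_alpha_dropout(rate = 0.1) %>%
  layer_dense(units = 1)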
References
activation_selu(): Klambauer, G., Unterthiner, T., Mayr, A., & Hochreiter, S. (2017). Self-Normalizing Neural Networks. https://arxiv.org/abs/1706.02515