Fast LSTM implementation backed by [CuDNN](https://developer.nvidia.com/cudnn).
Can only be run on GPU, with the TensorFlow backend.
Usage

layer_cudnn_lstm(object, units, kernel_initializer = "glorot_uniform",
  recurrent_initializer = "orthogonal", bias_initializer = "zeros",
  unit_forget_bias = TRUE, kernel_regularizer = NULL,
  recurrent_regularizer = NULL, bias_regularizer = NULL,
  activity_regularizer = NULL, kernel_constraint = NULL,
  recurrent_constraint = NULL, bias_constraint = NULL,
  return_sequences = FALSE, return_state = FALSE, stateful = FALSE,
  input_shape = NULL, batch_input_shape = NULL, batch_size = NULL,
  dtype = NULL, name = NULL, trainable = NULL, weights = NULL)

Arguments
| object | Model or layer object | 
| units | Positive integer, dimensionality of the output space. | 
| kernel_initializer | Initializer for the kernel weights matrix, used for the linear transformation of the inputs. | 
| recurrent_initializer | Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. | 
| bias_initializer | Initializer for the bias vector. | 
| unit_forget_bias | Boolean. If TRUE, add 1 to the bias of the forget gate at initialization. Setting it to TRUE will also force bias_initializer = "zeros". This is recommended in Jozefowicz et al. | 
| kernel_regularizer | Regularizer function applied to the kernel weights matrix. | 
| recurrent_regularizer | Regularizer function applied to the recurrent_kernel weights matrix. | 
| bias_regularizer | Regularizer function applied to the bias vector. | 
| activity_regularizer | Regularizer function applied to the output of the layer (its "activation"). | 
| kernel_constraint | Constraint function applied to the kernel weights matrix. | 
| recurrent_constraint | Constraint function applied to the recurrent_kernel weights matrix. | 
| bias_constraint | Constraint function applied to the bias vector. | 
| return_sequences | Boolean. Whether to return the last output in the output sequence, or the full sequence (see the sketch after this table). | 
| return_state | Boolean (default FALSE). Whether to return the last state in addition to the output. | 
| stateful | Boolean (default FALSE). If TRUE, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. | 
| input_shape | Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model. | 
| batch_input_shape | Shapes, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors; batch_input_shape = list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors. | 
| batch_size | Fixed batch size for the layer. | 
| dtype | The data type expected by the input, as a string (e.g. "float32"). | 
| name | An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided. | 
| trainable | Whether the layer weights will be updated during training. | 
| weights | Initial weights for layer. | 
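The return flags only change the shape of what the layer emits. A minimal sketch of stacking two CuDNN LSTM layers (assumes the keras R package and a GPU-enabled TensorFlow backend; illustrative only), where the first layer must return the full sequence so the second can consume it:

library(keras)

model <- keras_model_sequential() %>%
  # return_sequences = TRUE: emits the full sequence, shape (batch, 100, 64)
  layer_cudnn_lstm(units = 64, return_sequences = TRUE,
                   input_shape = c(100, 16)) %>%
  # default return_sequences = FALSE: emits only the last output, shape (batch, 32)
  layer_cudnn_lstm(units = 32)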
References
- Long short-term memory (original 1997 paper) 
- A Theoretically Grounded Application of Dropout in Recurrent Neural Networks 
See also
Other recurrent layers: layer_cudnn_gru, layer_gru, layer_lstm, layer_simple_rnn
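Examples

A minimal, illustrative sketch: it assumes the keras R package with a GPU-enabled TensorFlow backend and builds a toy binary sequence classifier over 100 timesteps of 16 features.

library(keras)

# Single CuDNN LSTM layer followed by a sigmoid output unit.
model <- keras_model_sequential() %>%
  layer_cudnn_lstm(units = 32, input_shape = c(100, 16)) %>%
  layer_dense(units = 1, activation = "sigmoid")

# Compile for binary classification.
model %>% compile(
  optimizer = "adam",
  loss = "binary_crossentropy",
  metrics = c("accuracy")
)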
