Deep Neural Networks
Create a deep neural network (DNN) estimator.
dnn_regressor(hidden_units, feature_columns, model_dir = NULL,
label_dimension = 1L, weight_column = NULL, optimizer = "Adagrad",
activation_fn = "relu", dropout = NULL, input_layer_partitioner = NULL,
config = NULL)
dnn_classifier(hidden_units, feature_columns, model_dir = NULL,
n_classes = 2L, weight_column = NULL, label_vocabulary = NULL,
optimizer = "Adagrad", activation_fn = "relu", dropout = NULL,
input_layer_partitioner = NULL, config = NULL)
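As a usage illustration (a minimal sketch, assuming the tfestimators helpers feature_columns(), column_numeric(), input_fn(), and train() are available, and using mtcars purely as example data), a regressor might be constructed and trained as follows:

library(tfestimators)

# feature columns for two numeric predictors (example data: mtcars)
cols <- feature_columns(
  column_numeric("disp"),
  column_numeric("cyl")
)

# a two-layer DNN regressor
model <- dnn_regressor(
  hidden_units = c(10, 10),
  feature_columns = cols
)

# an input function mapping data frame columns to features and response
mtcars_input_fn <- function(data) {
  input_fn(data, features = c("disp", "cyl"), response = "mpg")
}

# train the estimator
model %>% train(mtcars_input_fn(mtcars))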
Arguments
hidden_units | An integer vector, indicating the number of hidden units in each layer. All layers are fully connected. For example, c(64, 32) means the first layer has 64 nodes and the second one has 32.
feature_columns | An R list containing all of the feature columns used by the model (typically, generated by feature_columns()).
model_dir | Directory to save the model parameters, graph, and so on. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
label_dimension | Number of regression targets per example. This is the size of the last dimension of the labels and logits tensors (typically, these have shape [batch_size, label_dimension]).
weight_column | A string, or a numeric column created by column_numeric(), defining the feature column representing weights. It is used to down weight or boost examples during training, and is multiplied by the loss of the example.
optimizer | Either the name of the optimizer to be used when training the model, or a TensorFlow optimizer instance. Defaults to the Adagrad optimizer. |
activation_fn | The activation function to apply to each layer. This can either be an actual activation function (e.g. tf$nn$relu) or the name of an activation function as a string (e.g. "relu"). Defaults to "relu".
dropout | When not NULL, the probability that a given coordinate will be dropped out during training.
input_layer_partitioner | An optional partitioner for the input layer. Defaults to min_max_variable_partitioner with min_slice_size of 64 << 20.
config | A run configuration created by run_config(), used to configure the runtime settings.
n_classes | The number of label classes. |
label_vocabulary | A list of strings representing possible label values. If given, labels must be of string type and take values from label_vocabulary. If not given, labels are assumed to be already encoded as integers or floats within [0, 1] for n_classes == 2, or as integer values in {0, 1, ..., n_classes - 1} for n_classes > 2. An error is raised if no vocabulary is provided and the labels are strings.
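To illustrate the classifier-specific arguments (a sketch only; the helper functions are assumed as above, and train_data is a hypothetical data frame with numeric columns x1 and x2 and a string label column species), n_classes and label_vocabulary might be used together as follows:

# hypothetical feature columns for two numeric predictors
cols <- feature_columns(
  column_numeric("x1"),
  column_numeric("x2")
)

# three-class classifier over string labels drawn from the vocabulary
model <- dnn_classifier(
  hidden_units = c(16, 8),
  feature_columns = cols,
  n_classes = 3L,
  label_vocabulary = c("setosa", "versicolor", "virginica")
)

# train_data is a hypothetical data frame with columns x1, x2, species
model %>% train(
  input_fn(train_data, features = c("x1", "x2"), response = "species")
)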
See also
Other canned estimators: dnn_linear_combined_estimators, linear_estimators