Serve a SavedModel
Serve a TensorFlow SavedModel as a local web API, by default at http://localhost:8089.
serve_savedmodel(model_dir, host = "127.0.0.1", port = 8089,
  daemonized = FALSE, browse = !daemonized)
Arguments
model_dir    The path to the exported model, as a string.
host         The address used to serve the model, as a string.
port         The port used to serve the model, as a number.
daemonized   Runs the 'httpuv' server daemonized so that the interactive
             R session is not blocked while requests are handled. To
             terminate a daemonized server, pass the handle returned by
             this call to 'httpuv::stopDaemonizedServer()' (see the
             sketch below).
browse       Launch a browser with the serving landing page?
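As a minimal sketch of daemonized serving: the daemonized argument description above indicates that the call returns the 'httpuv' server handle, which is what 'httpuv::stopDaemonizedServer()' expects.

# Start serving in the background; the R console stays responsive
# while requests are handled.
handle <- tfdeploy::serve_savedmodel(
  system.file("models/tensorflow-mnist", package = "tfdeploy"),
  daemonized = TRUE,
  browse = FALSE
)

# ... interact with the model over HTTP, run other R code, etc. ...

# Stop the background server using the returned handle.
httpuv::stopDaemonizedServer(handle)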
Examples
# NOT RUN {
# serve an existing model over a web interface
tfdeploy::serve_savedmodel(
  system.file("models/tensorflow-mnist", package = "tfdeploy")
)
# }
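The host, port, and browse arguments can be combined to control where the API is exposed; a short sketch follows (the address and port here are arbitrary choices, not defaults).

# serve on all network interfaces and a custom port, without
# opening a browser
tfdeploy::serve_savedmodel(
  system.file("models/tensorflow-mnist", package = "tfdeploy"),
  host = "0.0.0.0",
  port = 8080,
  browse = FALSE
)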
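Once the server is running, predictions can be requested over HTTP from any client. The sketch below uses the 'httr' package; the "serving_default/predict" path and the "images" input field are hypothetical placeholders, since the actual endpoint and payload depend on the signatures exported with the model.

library(httr)

# POST a JSON prediction request to the local server; the endpoint
# path and instance layout below are assumptions, not documented here.
response <- POST(
  "http://127.0.0.1:8089/serving_default/predict",
  body = list(instances = list(list(images = rep(0, 784)))),
  encode = "json"
)
content(response)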