How to import a saved TensorFlow model trained using tf.estimator and predict on input data

I saved the model using the tf.estimator method export_savedmodel as follows:

export_dir="exportModel/" feature_spec = tf.feature_column.make_parse_example_spec(feature_columns) input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec) classifier.export_savedmodel(export_dir, input_receiver_fn, as_text=False, checkpoint_path="Model/model.ckpt-400") 

How do I import this saved model and use it to make predictions?

4 answers

I tried to find a good basic example, but the documentation and samples seem a bit scattered on this topic. So let's start with a basic example: the tf.estimator quickstart.

That particular example doesn't actually export a model, so let's do that (no need for use case 1):

    def serving_input_receiver_fn():
        """Build the serving inputs."""
        # The outer dimension (None) allows us to batch up inputs for
        # efficiency. However, it also means that if we want a prediction
        # for a single instance, we'll need to wrap it in an outer list.
        inputs = {"x": tf.placeholder(shape=[None, 4], dtype=tf.float32)}
        return tf.estimator.export.ServingInputReceiver(inputs, inputs)

    export_dir = classifier.export_savedmodel(
        export_dir_base="/path/to/model",
        serving_input_receiver_fn=serving_input_receiver_fn)

Big asterisk on this code: there appears to be a bug in TensorFlow 1.3 that does not allow you to do the above export on a "canned" estimator (such as DNNClassifier). For a workaround, see the "Appendix: Workaround" section below.

Hold on to the export_dir (the return value of the export step): it is not "/path/to/model" itself, but rather a subdirectory of that directory whose name is a timestamp.
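As an aside, here is a minimal sketch (assuming the same export_dir_base as above) of how you might locate the most recent timestamped export if you only kept the base directory:

    import os

    export_dir_base = "/path/to/model"  # same base directory as above
    # Each export creates a subdirectory named by a Unix timestamp, so the
    # latest export is the one with the numerically largest name.
    timestamped = [d for d in os.listdir(export_dir_base) if d.isdigit()]
    latest_export_dir = os.path.join(export_dir_base, max(timestamped, key=int))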

Use case 1: Perform prediction in the same process as training

This is a sci-kit learn style of workflow, and it is already exemplified by the sample. For completeness' sake, you simply call predict on the trained model:

    classifier.train(input_fn=train_input_fn, steps=2000)
    # [...snip...]
    predictions = list(classifier.predict(input_fn=predict_input_fn))
    predicted_classes = [p["classes"] for p in predictions]

Use case 2: Load a SavedModel in Python/Java/C++ and perform predictions

Python client

Probably the easiest way to make predictions in Python, if you want to go that route, is SavedModelPredictor. In the Python program that will use the SavedModel, we need code like this:

    from tensorflow.contrib import predictor

    predict_fn = predictor.from_saved_model(export_dir)
    predictions = predict_fn(
        {"x": [[6.4, 3.2, 4.5, 1.5],
               [5.8, 3.1, 5.0, 1.7]]})
    print(predictions['scores'])

Java client

    package dummy;

    import java.nio.FloatBuffer;
    import java.util.Arrays;
    import java.util.List;

    import org.tensorflow.SavedModelBundle;
    import org.tensorflow.Session;
    import org.tensorflow.Tensor;

    public class Client {
      public static void main(String[] args) {
        Session session = SavedModelBundle.load(args[0], "serve").session();

        Tensor x =
            Tensor.create(
                new long[] {2, 4},
                FloatBuffer.wrap(
                    new float[] {
                      6.4f, 3.2f, 4.5f, 1.5f,
                      5.8f, 3.1f, 5.0f, 1.7f
                    }));

        // Doesn't look like Java has a good way to convert the
        // input/output name ("x", "scores") to their underlying tensor,
        // so we hard code them ("Placeholder:0", ...).
        // You can inspect them on the command-line with saved_model_cli:
        //
        // $ saved_model_cli show --dir $EXPORT_DIR --tag_set serve --signature_def serving_default
        final String xName = "Placeholder:0";
        final String scoresName = "dnn/head/predictions/probabilities:0";

        List<Tensor> outputs = session.runner()
            .feed(xName, x)
            .fetch(scoresName)
            .run();

        // Outer dimension is batch size; inner dimension is number of classes.
        float[][] scores = new float[2][3];
        outputs.get(0).copyTo(scores);
        System.out.println(Arrays.deepToString(scores));
      }
    }

C++ client

You will most likely want to use tensorflow::LoadSavedModel together with Session.

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    #include "tensorflow/cc/saved_model/loader.h"
    #include "tensorflow/core/framework/tensor.h"
    #include "tensorflow/core/public/session.h"

    namespace tf = tensorflow;

    int main(int argc, char** argv) {
      const std::string export_dir = argv[1];

      tf::SavedModelBundle bundle;
      tf::Status load_status = tf::LoadSavedModel(
          tf::SessionOptions(), tf::RunOptions(), export_dir, {"serve"}, &bundle);
      if (!load_status.ok()) {
        std::cout << "Error loading model: " << load_status << std::endl;
        return -1;
      }

      // We should get the signature out of MetaGraphDef, but that's a bit
      // involved. We'll take a shortcut like we did in the Java example.
      const std::string x_name = "Placeholder:0";
      const std::string scores_name = "dnn/head/predictions/probabilities:0";

      auto x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({2, 4}));
      auto matrix = x.matrix<float>();
      matrix(0, 0) = 6.4;
      matrix(0, 1) = 3.2;
      matrix(0, 2) = 4.5;
      matrix(0, 3) = 1.5;
      matrix(1, 0) = 5.8;
      matrix(1, 1) = 3.1;
      matrix(1, 2) = 5.0;
      matrix(1, 3) = 1.7;

      std::vector<std::pair<std::string, tf::Tensor>> inputs = {{x_name, x}};
      std::vector<tf::Tensor> outputs;

      tf::Status run_status =
          bundle.session->Run(inputs, {scores_name}, {}, &outputs);
      if (!run_status.ok()) {
        std::cout << "Error running session: " << run_status << std::endl;
        return -1;
      }

      for (const auto& tensor : outputs) {
        std::cout << tensor.matrix<float>() << std::endl;
      }
    }

Use case 3: Serve the model using TensorFlow Serving

Exporting the model in a manner amenable to serving a Classification model requires that the input be a tf.Example. Here is how we might export the model for TensorFlow Serving:

    def serving_input_receiver_fn():
        """Build the serving inputs."""
        # The outer dimension (None) allows us to batch up inputs for
        # efficiency. However, it also means that if we want a prediction
        # for a single instance, we'll need to wrap it in an outer list.
        example_bytestring = tf.placeholder(
            shape=[None],
            dtype=tf.string,
        )
        features = tf.parse_example(
            example_bytestring,
            tf.feature_column.make_parse_example_spec(feature_columns)
        )
        return tf.estimator.export.ServingInputReceiver(
            features, {'examples': example_bytestring})

    export_dir = classifier.export_savedmodel(
        export_dir_base="/path/to/model",
        serving_input_receiver_fn=serving_input_receiver_fn)

Refer to TensorFlow Serving's documentation for more instructions on how to set up TensorFlow Serving; here I provide only the client code:

    # Omitting a bunch of connection/initialization code...
    # But at some point we end up with a stub whose lifecycle
    # is generally longer than that of a single request.
    stub = create_stub(...)

    # The actual values for prediction. We have two examples in this
    # case, each consisting of a single, multi-dimensional feature `x`.
    # This data here is the equivalent of the map passed to the
    # `predict_fn` in use case #2.
    examples = [
        tf.train.Example(
            features=tf.train.Features(
                feature={"x": tf.train.Feature(
                    float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))})),
        tf.train.Example(
            features=tf.train.Features(
                feature={"x": tf.train.Feature(
                    float_list=tf.train.FloatList(value=[5.8, 3.1, 5.0, 1.7]))})),
    ]

    # Build the RPC request. The input is a batch of serialized tf.Example
    # protos, i.e. a string tensor.
    predict_request = predict_pb2.PredictRequest()
    predict_request.model_spec.name = "default"
    predict_request.inputs["examples"].CopyFrom(
        tensor_util.make_tensor_proto(
            [e.SerializeToString() for e in examples], dtype=tf.string))

    # Perform the actual prediction.
    stub.Predict(predict_request, PREDICT_DEADLINE_SECS)

Note that the examples key referenced in predict_request.inputs must match the key used in the serving_input_receiver_fn at export time (cf. the constructor of ServingInputReceiver in that code).
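As an illustration of that correspondence, here is a hypothetical sketch (the key name is an arbitrary choice, not required by the API; feature_columns is assumed from the quickstart):

    INPUT_KEY = "examples"  # hypothetical: must be identical on both sides

    def serving_input_receiver_fn():
        serialized = tf.placeholder(shape=[None], dtype=tf.string)
        features = tf.parse_example(
            serialized, tf.feature_column.make_parse_example_spec(feature_columns))
        return tf.estimator.export.ServingInputReceiver(
            features, {INPUT_KEY: serialized})

    # ...and on the client side:
    # predict_request.inputs[INPUT_KEY].CopyFrom(...)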

Appendix: Workaround (Exporting from Canned Models in TF 1.3)

In TensorFlow 1.3 there appears to be a bug in which canned estimators do not export properly under use case 2 (the problem does not exist for "custom" estimators). Here is a workaround that wraps a DNNClassifier to make things work, specifically for the Iris example:

    # Build 3 layer DNN with 10, 20, 10 units respectively.
    class Wrapper(tf.estimator.Estimator):
        def __init__(self, **kwargs):
            dnn = tf.estimator.DNNClassifier(**kwargs)

            def model_fn(mode, features, labels):
                spec = dnn._call_model_fn(features, labels, mode)
                export_outputs = None
                if spec.export_outputs:
                    export_outputs = {
                        "serving_default": tf.estimator.export.PredictOutput(
                            {"scores": spec.export_outputs["serving_default"].scores,
                             "classes": spec.export_outputs["serving_default"].classes})}

                # Replace the export_outputs entry of the spec.
                copy = list(spec)
                copy[4] = export_outputs
                return tf.estimator.EstimatorSpec(mode, *copy)

            super(Wrapper, self).__init__(model_fn, kwargs["model_dir"], dnn.config)

    classifier = Wrapper(feature_columns=feature_columns,
                         hidden_units=[10, 20, 10],
                         n_classes=3,
                         model_dir="/tmp/iris_model")
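With the wrapper in place, training and export should then work as before; a usage sketch, reusing the serving_input_receiver_fn and train_input_fn assumed from the quickstart and use case 2 above:

    classifier.train(input_fn=train_input_fn, steps=2000)
    export_dir = classifier.export_savedmodel(
        export_dir_base="/path/to/model",
        serving_input_receiver_fn=serving_input_receiver_fn)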

I do not think there is a bug with canned estimators (or rather, if there ever was one, it has been fixed). I was able to successfully export a canned estimator model using Python and import it into Java.

Here is my code to export the model:

 a = tf.feature_column.numeric_column("a"); b = tf.feature_column.numeric_column("b"); feature_columns = [a, b]; model = tf.estimator.DNNClassifier(feature_columns=feature_columns ...); # To export feature_spec = tf.feature_column.make_parse_example_spec(feature_columns); export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec); servable_model_path = model.export_savedmodel(servable_model_dir, export_input_fn, as_text=True); 

To import the model into Java, I used the Java client code provided by rhaertel80 above, and it works. Hopefully this also answers Ben Fowler's question above.


It appears that the TensorFlow team does not agree that there is a bug in version 1.3 when using canned estimators to export a model under use case #2. I submitted a bug report here: https://github.com/tensorflow/tensorflow/issues/13477

The response I received from TensorFlow is that the input must be a single string tensor. It appears there may be a way to consolidate multiple features into a single string tensor using serialized tf.Examples, but I have not found a clear method for doing this. If anyone has code showing how to do this, I would be appreciative.
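For what it's worth, here is a sketch of what I believe that consolidation looks like (assuming two numeric features a and b; this is my reading of the suggestion, not a confirmed recipe):

    import tensorflow as tf

    # One tf.Example can carry several features; serializing it produces a
    # single bytestring, so a batch of such strings is one string tensor.
    example = tf.train.Example(
        features=tf.train.Features(feature={
            "a": tf.train.Feature(float_list=tf.train.FloatList(value=[1.0])),
            "b": tf.train.Feature(float_list=tf.train.FloatList(value=[2.0])),
        }))
    serialized = example.SerializeToString()

    # A model exported with build_parsing_serving_input_receiver_fn accepts a
    # batch of these strings and recovers the features with tf.parse_example.
    batch = [serialized]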


You need to export the saved model using tf.contrib.export_savedmodel, and you need to define a serving input receiver function to pass inputs to. Later you can load the saved model (generally a saved_model.pb file) from disk and serve it.

TensorFlow: how to predict using SavedModel?
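A minimal loading sketch using the low-level loader API (the tensor names below are assumptions carried over from the Iris example above; inspect your own model with saved_model_cli):

    import tensorflow as tf

    export_dir = "/path/to/model/<timestamp>"  # directory containing saved_model.pb

    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(sess, ["serve"], export_dir)
        scores = sess.run(
            "dnn/head/predictions/probabilities:0",  # assumed output tensor name
            feed_dict={"Placeholder:0": [[6.4, 3.2, 4.5, 1.5]]})  # assumed input name
        print(scores)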


Source: https://habr.com/ru/post/1271608/

