TensorFlow: how to predict from SavedModel?

I exported a SavedModel, and now I need to load it back and make a prediction. It was trained with the following features and labels:

    F1 : FLOAT32
    F2 : FLOAT32
    F3 : FLOAT32
    L1 : FLOAT32

So, say I want to pass in the values 20.9, 1.8, 0.9 and get a single FLOAT32 prediction. How do I do that? I managed to load the model successfully, but I'm not sure how to access it to make a prediction request.

    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            "/job/export/Servo/1503723455"
        )
        # How can I predict from here?
        # I want to do something like:
        # prediction = model.predict([20.9, 1.8, 0.9])

This question is not a duplicate of the question posted here. This question focuses on a minimal example of running inference on a SavedModel of any model class (not limited to tf.estimator) and on the syntax for specifying the names of the input and output nodes.

3 answers

Once the graph is loaded, it is available in the current context, and you can feed input data through it to get predictions. Each use case is rather different, but the addition to your code will look something like this:

    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            "/job/export/Servo/1503723455"
        )
        prediction = sess.run(
            'prefix/predictions/Identity:0',
            feed_dict={
                'Placeholder:0': [20.9],
                'Placeholder_1:0': [1.8],
                'Placeholder_2:0': [0.9]
            }
        )
        print(prediction)

Here you need to know the names that your prediction inputs will have. If you did not give them names in your serving_fn, they default to Placeholder_n, where n is the nth feature.

The first string argument of sess.run is the name of the prediction target tensor. This will vary depending on your use case.
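If you are not sure of these names, you can read them out of the SavedModel's signature instead of guessing. The sketch below is an addition to this answer, assuming the model was exported under the default serving signature key ('serving_default'):

    import tensorflow as tf

    with tf.Session(graph=tf.Graph()) as sess:
        # loader.load returns the MetaGraphDef, which carries the signatures
        meta_graph_def = tf.saved_model.loader.load(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            "/job/export/Servo/1503723455"
        )
        signature = meta_graph_def.signature_def[
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
        # Maps the logical input/output names to the actual tensor names
        for name, tensor_info in signature.inputs.items():
            print('input: ', name, '->', tensor_info.name)
        for name, tensor_info in signature.outputs.items():
            print('output:', name, '->', tensor_info.name)

The same information is also printed by the saved_model_cli show --dir /job/export/Servo/1503723455 --all command that ships with TensorFlow.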


Assuming you want predictions in Python, SavedModelPredictor is probably the easiest way to load a SavedModel and get predictions. Suppose you saved your model like this:

    # Build the graph
    f1 = tf.placeholder(shape=[], dtype=tf.float32)
    f2 = tf.placeholder(shape=[], dtype=tf.float32)
    f3 = tf.placeholder(shape=[], dtype=tf.float32)
    l1 = tf.placeholder(shape=[], dtype=tf.float32)
    output = build_graph(f1, f2, f3, l1)

    # Save the model
    inputs = {'F1': f1, 'F2': f2, 'F3': f3, 'L1': l1}
    outputs = {'output': output}
    tf.saved_model.simple_save(sess, export_dir, inputs, outputs)

(The inputs can be of any shape, and they don't even need to be placeholders or root nodes of the graph.)
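One detail worth noting (not stated in the original answer, but part of how simple_save behaves): it exports the graph with the 'serve' tag and registers the given inputs and outputs under the 'serving_default' signature, which is exactly what predictor.from_saved_model looks up by default.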

Then, in the Python program that will use the SavedModel, we can get predictions like this:

    from tensorflow.contrib import predictor

    predict_fn = predictor.from_saved_model(export_dir)
    predictions = predict_fn(
        {"F1": 1.0, "F2": 2.0, "F3": 3.0, "L1": 4.0})
    print(predictions)
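A usage note (an addition, based on how tf.contrib.predictor generally behaves): predict_fn returns a dict keyed by the output names given at save time, so the value here can be read as predictions['output'].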

This answer shows how to get predictions in Java, C++, and Python (the question there focuses on Estimators, but the answer actually applies regardless of how the SavedModel was created).


For anyone who needs a working example of saving a trained canned Estimator model and serving it without TensorFlow Serving, I have documented it here: https://github.com/tettusud/tensorflow-examples/tree/master/estimators

  • You can create a predictor with tf.contrib.predictor.from_saved_model(exported_model_path).
  • Prepare the input:

    tf.train.Example(
        features=tf.train.Features(
            feature={
                'x': tf.train.Feature(
                    float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5])
                )
            }
        )
    )

Here x is the name of the input given in the input_receiver_function at export time. For example:

    feature_spec = {'x': tf.FixedLenFeature([4], tf.float32)}

    def serving_input_receiver_fn():
        serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None],
                                               name='input_tensors')
        receiver_tensors = {'inputs': serialized_tf_example}
        features = tf.parse_example(serialized_tf_example, feature_spec)
        return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
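Putting the pieces together, a minimal sketch of the full prediction call might look like the following. This is an addition for illustration, assuming the model was exported with the serving_input_receiver_fn above, so the serialized example is fed under the 'inputs' key from receiver_tensors:

    import tensorflow as tf
    from tensorflow.contrib import predictor

    predict_fn = predictor.from_saved_model(exported_model_path)

    # Build and serialize the input example
    example = tf.train.Example(
        features=tf.train.Features(
            feature={
                'x': tf.train.Feature(
                    float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5])
                )
            }
        )
    )

    # 'inputs' matches the key used in receiver_tensors above
    predictions = predict_fn({'inputs': [example.SerializeToString()]})
    print(predictions)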
