How to serve a TensorFlow model with an input pipeline?

I am struggling with my TensorFlow model. I trained it with a tf.PaddingFIFOQueue input pipeline, and then mostly followed this tutorial: https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc#.dykqbzqek to freeze the graph with its variables and load it back for inference.
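For reference, freezing in TF 1.x is usually done with convert_variables_to_constants. A minimal sketch, written against the tf.compat.v1 API so it also runs under TF 2.x — the tiny x/w/y graph is purely illustrative, not the asker's model:

```python
# Minimal freezing sketch (TF 1.x API via tf.compat.v1).
# The tiny x/w/y graph below is illustrative only.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

def freeze(sess, output_node_names):
    """Bake the current variable values into constants and return a
    standalone GraphDef that no longer needs a checkpoint."""
    return tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_node_names)

# Illustrative graph: y = w * x
x = tf.placeholder(tf.float32, shape=[None], name="x")
w = tf.Variable(2.0, name="w")
y = tf.multiply(x, w, name="y")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    frozen = freeze(sess, ["y"])

# The frozen GraphDef keeps the output node and turns w into a Const
node_names = {n.name for n in frozen.node}
```

The resulting GraphDef can be written to disk with tf.gfile and later re-imported without any checkpoint files.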

My problem is that I don't know how to run the loaded model to make predictions. When the model takes its input from a plain placeholder, you just fetch the input and output tensors from the graph and run it:

# We load the graph
graph_path = ...
graph = load_graph(graph_path)

# Fetch the input and output tensors by name
# (the names depend on how the graph was built and on the import prefix)
x = graph.get_tensor_by_name('prefix/Placeholder:0')
y = graph.get_tensor_by_name('prefix/Accuracy/predictions:0')

# We launch a Session
with tf.Session(graph=graph) as sess:
  # Note: we didn't initialize/restore anything, everything is stored in the graph_def
  y_out = sess.run(y, feed_dict={
    x: [[3, 5, 7, 4, 5, 1, 1, 1, 1, 1]] # < 45
  })
  print(y_out) # [[ False ]] Yay, it works!
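For completeness, a load_graph helper like the one the tutorial uses can be sketched as follows (written against the TF 1.x API via tf.compat.v1; the "prefix" import scope is the tutorial's convention, adjust to taste):

```python
import tensorflow.compat.v1 as tf

def load_graph(frozen_graph_path):
    """Load a frozen GraphDef from disk into a fresh tf.Graph."""
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(frozen_graph_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    # Import under a fixed name scope so tensors become "prefix/<name>:0"
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="prefix")
    return graph
```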

In this example it looks simple, but I could not figure out how to make it work when the input comes from a pipeline, and I did not find anything related either. If someone could give me a hint how this is done, or how people usually run TensorFlow models in production, it would be really helpful.

1 answer

I'm waiting for your "complete answer"


Source: https://habr.com/ru/post/1672224/
