Attempting to reset the TensorFlow graph using Keras crashes

I am serving a Python 3 API with gunicorn; it uses Keras to compute vectors for an image and is pretty simple.

How can I reset the state that accumulates in memory on each request? Over time, requests take longer and longer to respond. I ran the profiler, and it points to exactly this line in TensorFlow (memory usage also grows slowly over time in each process):

 # tensorflow/python/framework/ops.py:2317: _as_graph_def
 graph.node.extend([op.node_def])

This call takes longer as more nodes accumulate in the graph. Here is the code I am executing:

 import numpy as np
 import tensorflow as tf
 from keras.applications.vgg16 import VGG16, preprocess_input
 from keras.backend.tensorflow_backend import set_session
 from keras.preprocessing import image as kimage

 # We have 11439MiB of GPU memory, let's only use 2GB of it:
 config = tf.ConfigProto()
 config.gpu_options.per_process_gpu_memory_fraction = 0.22
 sess = tf.Session(config=config)
 set_session(sess)
 sess.graph.as_default()

 # Get the vector for the image
 img_size = (224, 224)
 vgg = VGG16(include_top=False, weights='imagenet')
 img = kimage.load_img(tmpfile.name, target_size=img_size)
 x = kimage.img_to_array(img)
 x = np.expand_dims(x, axis=0)
 x = preprocess_input(x)
 pred = vgg.predict(x)
 vectors = pred.ravel().tolist()

I thought as_default() would help, but it does not. I also tried closing the session after getting the list of vectors, but that fails too.
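The slowdown pattern can be illustrated without TensorFlow at all. This is a minimal pure-Python sketch (all names here are made up for illustration): if every request appends new ops to one shared, ever-growing graph, then any per-request work proportional to graph size grows linearly over time.

```python
# Toy model of the problem: a shared "graph" that every request extends.
graph_nodes = []

def handle_request():
    graph_nodes.extend(["op"] * 100)   # model ops re-added on each request
    return len(graph_nodes)            # stands in for walking the whole graph

sizes = [handle_request() for _ in range(5)]
print(sizes)  # [100, 200, 300, 400, 500] -- each request touches more nodes
```

Each request costs O(n) in the current graph size, so the total cost over n requests is O(n²), which matches the gradual slowdown described above.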

1 answer
Call K.clear_session() after handling each request. It destroys the current TF graph and creates a new one, so stale nodes no longer accumulate between requests:

 from keras import backend as K
 K.clear_session()
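A sketch of where the call fits in the request cycle. The names here (handle_request, the graph list, the local clear_session) are stand-ins for illustration: in the real service, the graph is TensorFlow's default graph and the reset would be keras.backend.clear_session() after vgg.predict().

```python
# Toy model of the fix: clear the shared "graph" at the end of each request.
graph = []

def clear_session():
    graph.clear()  # stand-in for keras.backend.clear_session()

def handle_request():
    graph.extend(["op"] * 100)   # building the model adds ops
    vectors = [0.0, 1.0]         # placeholder for pred.ravel().tolist()
    clear_session()              # reset so the next request starts fresh
    return vectors

for _ in range(3):
    handle_request()
print(len(graph))  # 0 -- the graph no longer grows between requests
```

The trade-off is that clearing the session discards the loaded model, so each request must rebuild it; that keeps memory and latency flat, at the cost of model-construction time per request.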

Source: https://habr.com/ru/post/1269762/

