TensorFlow Serving: using a fraction of GPU memory for each model

I have one GPU at my disposal for deployment, but I need to deploy several models. I do not want to allocate the full GPU memory to the first deployed model, because then I cannot deploy my subsequent models. During training, this can be controlled with the per_process_gpu_memory_fraction parameter. I use the following command to deploy my model:

tensorflow_model_server --port=9000 --model_name=<name of model> --model_base_path=<path where exported models are stored> &> <log file path>

Is there a flag I can set to control the GPU memory allocation of the model server?
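
For reference, this is roughly the training-side setting I am referring to (TF 1.x API; the 0.4 value is only an illustration):

import tensorflow as tf

# Cap this process at ~40% of the GPU's memory during training (TF 1.x)
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.4)
config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=config)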

thanks

2 answers

I just added a flag for the GPU memory configuration here: https://github.com/zhouyoulie/serving


Newer versions of TF Serving allow setting the per_process_gpu_memory_fraction flag; it was added in this pull request.
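
With that flag, the command from the question would look something like this (assuming the flag name from the pull request; the 0.4 fraction is just an example value):

tensorflow_model_server --port=9000 --model_name=<name of model> --model_base_path=<path where exported models are stored> --per_process_gpu_memory_fraction=0.4 &> <log file path>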

