Convert a GPU-based Theano model to CPU?

I have pickled files of a deep learning model that was trained on a GPU, and I want to use them in production. But when I try to load them on the server, I get the following error:

Traceback (most recent call last):
  File "score.py", line 30, in <module>
    model = cPickle.load(file)
  File "/usr/local/python2.7/lib/python2.7/site-packages/Theano-0.6.0-py2.7.egg/theano/sandbox/cuda/type.py", line 485, in CudaNdarray_unpickler
    return cuda.CudaNdarray(npa)
AttributeError: ("'NoneType' object has no attribute 'CudaNdarray'",
  (array([[ 0.011515  ,  0.01171047,  0.10408644, ..., -0.0343636 ,
            0.04944979, -0.06583775],
          [-0.03771918,  0.080524  , -0.10609912, ...,  0.11019105,
           -0.0570752 ,  0.02100536],
          [-0.03628891, -0.07109226, -0.00932018, ...,  0.04316209,
            0.02817888,  0.05785328],
          ...,
          [ 0.0703947 , -0.00172865, -0.05942701, ..., -0.00999349,
            0.01624184,  0.09832744],
          [-0.09029484, -0.11509365, -0.07193922, ...,  0.10658887,
            0.17730837,  0.01104965],
          [ 0.06659461, -0.02492988,  0.02271739, ..., -0.0646857 ,
            0.03879852,  0.08779807]], dtype=float32),))

I checked for the CudaNdarray package on my local computer and it is not installed, yet I can still unpickle the files there. On the server, however, I cannot. How can I make them work on a server that does not have a GPU?

4 answers

Pylearn2 has a script that can do what you need:

pylearn2/scripts/gpu_pkl_to_cpu_pkl.py
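Assuming the script's usual two-argument interface (input pickle, output pickle), the invocation would look something like this; the model filenames are placeholders:

```shell
# Hypothetical usage: must run on a machine where Theano's CUDA backend is
# importable, since the GPU pickle has to be loaded before it can be converted.
python pylearn2/scripts/gpu_pkl_to_cpu_pkl.py model_gpu.pkl model_cpu.pkl
```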


The relevant Theano code is here.

From there, it appears there is a config.experimental.unpickle_gpu_on_cpu option you can set, which makes CudaNdarray_unpickler return the raw underlying NumPy array instead.


This works for me. Note that it does not work unless the following environment variable is set: export THEANO_FLAGS='device=cpu'

    import os
    import sys
    from pylearn2.utils import serial
    import pylearn2.config.yaml_parse as yaml_parse

    if __name__ == "__main__":
        _, in_path, out_path = sys.argv
        # Force Theano onto the CPU before the model is loaded.
        os.environ['THEANO_FLAGS'] = "device=cpu"
        model = serial.load(in_path)
        # Rebuild the model from its YAML description, then copy the parameters over.
        model2 = yaml_parse.load(model.yaml_src)
        model2.set_param_values(model.get_param_values())
        serial.save(out_path, model2)

I solved this problem by saving only the parameters W and b, not the whole model. You can save the parameters as described here: http://deeplearning.net/software/theano/tutorial/loading_and_saving.html?highlight=saving%20load#robust-serialization — this converts the CudaNdarray values to NumPy arrays on save. Then read the parameters back with numpy.load(), and finally convert each NumPy array back into a shared variable with theano.shared().
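A minimal sketch of that workflow, with stand-in parameter arrays: the names W and b and the in-memory buffer are illustrative (on disk you would pass a filename to np.savez / np.load), and the theano.shared step is shown only as a comment since it needs a Theano install:

```python
import io
import numpy as np

# Stand-ins for trained parameters; in practice these would come from
# W.get_value() and b.get_value() on the GPU machine.
W = np.random.randn(4, 3).astype("float32")
b = np.zeros(3, dtype="float32")

# np.savez stores plain NumPy arrays, so the archive contains no CudaNdarray
# objects and loads on a machine without a GPU or CUDA.
buf = io.BytesIO()          # a filename such as "params.npz" works the same way
np.savez(buf, W=W, b=b)
buf.seek(0)

params = np.load(buf)
W_loaded, b_loaded = params["W"], params["b"]

# On the CPU-only server, wrap them back into shared variables:
#   import theano
#   W_shared = theano.shared(W_loaded, name="W")
#   b_shared = theano.shared(b_loaded, name="b")
```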


Source: https://habr.com/ru/post/1200003/
