Testing Theano code: how does the GPU work?

import numpy as np
import time
import theano

# Time a large matrix product in plain NumPy...
A = np.random.rand(1000, 10000).astype(theano.config.floatX)
B = np.random.rand(10000, 1000).astype(theano.config.floatX)
np_start = time.time()
AB = A.dot(B)
np_end = time.time()

# ...and the same product as a compiled Theano function.
X, Y = theano.tensor.matrices('XY')
mf = theano.function([X, Y], X.dot(Y))
t_start = time.time()
tAB = mf(A, B)
t_end = time.time()

print("NP time: %f[s], theano time: %f[s] (times should be close when run on CPU!)"
      % (np_end - np_start, t_end - t_start))
print("Result difference: %f" % (np.abs(AB - tAB).max(),))

I am running this code with Python 3.5.

NP time: 0.161123[s], theano time: 0.167119[s] (times should be close when run on CPU!)
Result difference: 0.000000

It says that if the times are close, then the code is running on the CPU. How do I make it use the GPU?

Note:

  • I have a workstation with an Nvidia Quadro K4200.
  • I have installed the CUDA toolkit.
  • I have successfully built and run the CUDA vectorAdd sample project in VS2012.
+4
3 answers

You configure Theano to use the GPU by specifying device=gpu in Theano's configuration. There are two main ways to set the configuration: (1) in the THEANO_FLAGS environment variable, or (2) via a .theanorc file. Both methods, and all of Theano's configuration flags, are documented.
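
For example, the flags can also be set from inside a Python script, as long as this happens before the first import of theano. A minimal sketch (the flag values are the ones discussed in this thread; the script name in the comment is hypothetical):

import os

# Must run before `import theano`; equivalent to the shell invocation
#   THEANO_FLAGS='device=gpu,floatX=float32' python my_script.py
os.environ["THEANO_FLAGS"] = "device=gpu,floatX=float32"

import theano  # should now report the GPU it is using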

You will know that Theano is using the GPU if, when you import theano, a message like this is printed:

Using gpu device 0: GeForce GT 640 (CNMeM is disabled)

The device name will differ depending on your hardware, but a message of this form confirms that Theano has initialized the GPU.

Note that even when Theano is configured for the GPU, parts of your computation may still run on the CPU. To see which operations run where, print the compiled and optimized graph of your function, e.g.

f = theano.function(...)
theano.printing.debugprint(f)

, "Gpu", . , , .

+13

On Linux, create a .theanorc file in your home directory with the following contents to configure Theano to use the GPU:

[global]
device = gpu
floatx = float32
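
A quick way to check that the file was picked up is to print the resulting configuration from Python (a sketch; the printed values should match the settings above):

import theano

print(theano.config.device)   # expected: gpu
print(theano.config.floatX)   # expected: float32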
+6

Alternatively, you can select the GPU programmatically from your code:

import theano.sandbox.cuda
theano.sandbox.cuda.use("gpu0")

This prints:

Using gpu device 0: Tesla K80

This is useful when the environment you are working in is not easy to configure.

+5
