Can TensorFlow work with multiple processors (no GPUs)?

I am trying to learn distributed TensorFlow. I tried a piece of code as described here:

with tf.device("/cpu:0"):
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))

with tf.device("/cpu:1"):
    y = tf.nn.softmax(tf.matmul(x, W) + b)
    loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

Getting the following error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'MatMul': Operation was explicitly assigned to /device:CPU:1 but available devices are [/job:localhost/replica:0/task:0/cpu:0]. Make sure the device specification refers to a valid device.
     [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/device:CPU:1"](Placeholder, Variable/read)]]

This suggests that TensorFlow does not recognize the device /cpu:1.

I work on a Red Hat server with 40 processors (cat /proc/cpuinfo | grep processor | wc -l).
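
In case it helps, here is how the devices visible to TensorFlow can be listed (a quick sanity check, assuming the TensorFlow 1.x API):

    from tensorflow.python.client import device_lib

    # Prints the devices TensorFlow exposes to the graph; by default this is a
    # single /cpu:0 device, no matter how many physical cores the machine has.
    print(device_lib.list_local_devices())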

Any ideas?

2 answers

Following the link in the comment:

It turns out that the session should be configured with a device count > 1:

config = tf.ConfigProto(device_count={"CPU": 8})
with tf.Session(config=config) as sess:
   ...
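
For completeness, here is a minimal end-to-end sketch of the fix applied to the snippet from the question (my own illustration, assuming TensorFlow 1.x and the MNIST-style 784/10 shapes used above):

    import numpy as np
    import tensorflow as tf

    # Expose 8 logical CPU devices (/cpu:0 ... /cpu:7) to the graph.
    config = tf.ConfigProto(device_count={"CPU": 8})

    x = tf.placeholder(tf.float32, [None, 784])

    with tf.device("/cpu:0"):
        W = tf.Variable(tf.zeros([784, 10]))
        b = tf.Variable(tf.zeros([10]))

    with tf.device("/cpu:1"):
        y = tf.nn.softmax(tf.matmul(x, W) + b)

    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        # Feed a dummy batch just to confirm the device placement is accepted.
        print(sess.run(y, feed_dict={x: np.zeros((1, 784), dtype=np.float32)}))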

It is somewhat shocking that I missed something so basic, and that no one could pinpoint an error that seems so obvious.

I'm not sure whether this is a problem with me or with the TensorFlow code samples and documentation. Since this is Google, I have to assume it's me.

