I have a tensor A of shape [a, n], and I need to perform an op my_op
with another tensor B of shape [b, n] so that the resulting tensor C has shape [a, b].
In other words, for each sub-tensor in A (A[0], A[1], ..., A[a]) I need to perform an element-wise op with each sub-tensor in B.
Thus, the resulting tensor will contain the following:
[ [ A[0] op B[0], A[0] op B[1], ..., A[0] op B[b] ],
  [ A[1] op B[0], A[1] op B[1], ..., A[1] op B[b] ],
  [ ... ],
  [ A[a] op B[0], A[a] op B[1], ..., A[a] op B[b] ] ]
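For concreteness, here is a small NumPy sketch of the desired result, with op taken to be an element-wise multiply followed by a sum (the same op used in the code further down); the sizes here are illustrative, not the real ones:

```python
import numpy as np

rng = np.random.default_rng(0)
a_size, b_size, n = 4, 3, 5
A = rng.random((a_size, n))
B = rng.random((b_size, n))

# Desired result: C[i, j] = op(A[i], B[j]), where op multiplies the two
# rows element-wise and then sums (i.e. a dot product of the rows).
C = np.empty((a_size, b_size))
for i in range(a_size):
    for j in range(b_size):
        C[i, j] = np.sum(A[i] * B[j])

print(C.shape)  # (4, 3)
```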
The only way I could find to do this is with nested tf.map_fn calls,
like this:
import tensorflow as tf
import time
import numpy as np

a_size = 64
b_size = 256 * 256
n = 256

A = tf.placeholder(tf.float32, [a_size, n])
B = tf.placeholder(tf.float32, [b_size, n])

def elementwise_op(a, b):
    return tf.reduce_sum(tf.multiply(a, b))

def intermediate_op(sub_a, my_b):
    # Pair one fixed row of A with every row of B
    sample_values = tf.map_fn(lambda x: elementwise_op(sub_a, x), my_b)
    return sample_values

# Outer map over the rows of A; each iteration produces one row of the result
my_op = tf.map_fn(lambda x: intermediate_op(x, B), A)

with tf.Session() as sess:
    a = np.random.rand(a_size, n)
    b = np.random.rand(b_size, n)
    start_time = time.time()
    result = sess.run(my_op, feed_dict={A: a, B: b})
    print("exec time:", time.time() - start_time)
    print(result.shape)
This works, but GPU utilization is very low (~15%, according to nvidia-smi). Strangely, it runs faster on CPU! (a 12-core machine), although there too utilization stays low (~15%) instead of the 100% I see with other TensorFlow code.
CPU, average of 5 runs: 11.33s
GPU, average of 5 runs: 111.88s
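The averages above come from repeated timed runs; a generic harness for that kind of measurement might look like this (all names here are illustrative, not from the original code):

```python
import time

def average_runtime(fn, runs=5):
    """Average wall-clock seconds of fn() over `runs` calls."""
    total = 0.0
    for _ in range(runs):
        start = time.time()
        fn()  # the workload being timed, e.g. a sess.run(...) call
        total += time.time() - start
    return total / runs

# Cheap stand-in workload, just to show usage:
avg = average_runtime(lambda: sum(i * i for i in range(100_000)))
print(f"average: {avg:.4f}s")
```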
I am using the TensorFlow Docker images tensorflow/tensorflow:latest-py3
(for the CPU runs) and tensorflow/tensorflow:latest-gpu-py3
(for the GPU runs).
My guess is that the map_fn lambda runs in Python on the CPU, with data being transferred between CPU and GPU for each op. I found a similar SO question, but it has no definitive answer.

So, my questions are:
- Is tf.map_fn parallelized across its elements, or does it iterate over them in Python?
- Is there another (perhaps more vectorized) way of expressing this computation that would get better performance?
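For what it's worth, the [a, b] result described at the top can also be computed with broadcasting instead of nested map_fn; a minimal NumPy sketch (sizes are illustrative), with the same pattern available in TensorFlow via tf.expand_dims:

```python
import numpy as np

rng = np.random.default_rng(1)
a_size, b_size, n = 8, 6, 5
A = rng.random((a_size, n))
B = rng.random((b_size, n))

# Broadcast A to [a, 1, n] and B to [1, b, n]; the element-wise op then
# runs on the full [a, b, n] array in one shot, and reducing the last
# axis yields the [a, b] result. TensorFlow supports the same pattern
# (tf.expand_dims plus broadcasting).
C = np.sum(A[:, None, :] * B[None, :, :], axis=-1)

print(C.shape)  # (8, 6)
```

For this particular op (multiply, then sum) the whole thing further collapses to a plain matrix product, `A @ B.T` in NumPy or `tf.matmul(A, B, transpose_b=True)` in TensorFlow.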
Edit:
I profiled a run (with reduced problem sizes, since the full sizes run out of RAM while profiling), and got, among other things, the following:
node name | output bytes | total execution time | accelerator execution time | cpu execution time
Mul | 1.02KB (22.23%, 0.29%) | 195.07ms (85.00%, 13.06%) | 5.29ms (100.00%, 25.79%) | 189.78ms (84.79%, 12.89%)
Sum | 256B (21.41%, 0.07%) | 241.48ms (69.08%, 16.17%) | 6.01ms (74.21%, 29.29%) | 235.47ms (69.01%, 15.99%)
TensorArrayScatterV3 | 512B (0.64%, 0.15%) | 658.31ms (46.87%, 44.09%) | 9.19ms (44.80%, 44.80%) | 649.12ms (46.90%, 44.08%)
Thanks in advance!