Tensor manipulation examples
I'm not sure if this is exactly what you need, but have a look at the reshape, gather, dynamic_partition and split operations and adapt them to your situation. The examples below illustrate each of them; I copied them from my git repository. If you run them in IPython you can experiment with them interactively and get a better idea of what you really want.
Reshape, gather, dynamic_partition and split
The gather operation (tf.gather())
Generate an array and try the gather operation on it. This approach is handy for rapid prototyping:
- We create an array in NumPy and run the TensorFlow operations on it.
Usage: gathers slices from params according to indices.
indices must be an integer tensor of any dimension (usually 0-D or 1-D). This is best illustrated by an example:
import numpy as np
import tensorflow as tf

array = np.array([[1,2,3],[4,9,6],[2,3,4],[7,8,0]])
array.shape   # (4, 3)

In [27]: gather_output0  = tf.gather(array, 1)
         gather_output01 = tf.gather(array, 2)
         gather_output02 = tf.gather(array, 3)
         gather_output11 = tf.gather(array, [1,2])
         gather_output12 = tf.gather(array, [1,3])
         gather_output13 = tf.gather(array, [3,2])
         gather_output   = tf.gather(array, [1,0,2])
         gather_output1  = tf.gather(array, [1,1,2])
         gather_output2  = tf.gather(array, [1,2,1])

In [28]: with tf.Session() as sess:
             print(gather_output0.eval());  print("\n")
             print(gather_output01.eval()); print("\n")
             print(gather_output02.eval()); print("\n")
             print(gather_output11.eval()); print("\n")
             print(gather_output12.eval()); print("\n")
             print(gather_output13.eval()); print("\n")
             print(gather_output.eval());   print("\n")
             print(gather_output1.eval());  print("\n")
             print(gather_output2.eval());  print("\n")
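If you want to gather along a different axis, for example picking columns instead of rows, newer TensorFlow versions also accept an axis argument. A minimal sketch, assuming your version supports it (1.3 or later):

import numpy as np
import tensorflow as tf

array = np.array([[1,2,3],[4,9,6],[2,3,4],[7,8,0]])
# assumes a TensorFlow version where tf.gather has an axis argument
gather_cols = tf.gather(array, [0, 2], axis=1)   # pick columns 0 and 2 from every row

with tf.Session() as sess:
    print(gather_cols.eval())
    # [[1 3]
    #  [4 6]
    #  [2 4]
    #  [7 0]]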
And a simpler example:
- Initialize a simple 1-D array
- Test the gather operation on it
In [11]: array_simple = np.array([1,2,3])

In [15]: print("shape of simple array is: ", array_simple.shape)
shape of simple array is:  (3,)

In [57]: gather1  = tf.gather(array_simple, [0])
         gather01 = tf.gather(array_simple, [1])
         gather02 = tf.gather(array_simple, [2])
         gather2  = tf.gather(array_simple, [1,2])
         gather3  = tf.gather(array_simple, [0,1])

         with tf.Session() as sess:
             print(gather1.eval());  print("\n")
             print(gather01.eval()); print("\n")
             print(gather02.eval()); print("\n")
             print(gather2.eval());  print("\n")
             print(gather3.eval());  print("\n")
[1]

[2]

[3]

[2 3]

[1 2]

tf.reshape()

Note:
* Use the same array that was initialised above
* Do the reshape using tf.reshape()

In [64]: array.shape   # confirm the array shape
Out[64]: (4, 3)

In [74]: print("This is the array\n", array)   # compare the output with the initial array
This is the array
[[1 2 3]
 [4 9 6]
 [2 3 4]
 [7 8 0]]

In [84]: reshape_ops       = tf.reshape(array, [-1,4])   # note the parameters in reshape
         reshape_ops1      = tf.reshape(array, [-1,3])
         reshape_ops2      = tf.reshape(array, [-1,6])
         reshape_ops_back1 = tf.reshape(array, [6,-1])   # parameters reversed
         reshape_ops_back2 = tf.reshape(array, [3,-1])
         reshape_ops_back3 = tf.reshape(array, [4,-1])

In [86]: with tf.Session() as sess:
             print(reshape_ops.eval());  print("\n")
             print(reshape_ops1.eval()); print("\n")
             print(reshape_ops2.eval()); print("\n")
             print("Output when we reverse the parameters:"); print("\n")
             print(reshape_ops_back1.eval()); print("\n")
             print(reshape_ops_back2.eval()); print("\n")
             print(reshape_ops_back3.eval()); print("\n")
[[1 2 3 4]
 [9 6 2 3]
 [4 7 8 0]]

[[1 2 3]
 [4 9 6]
 [2 3 4]
 [7 8 0]]

[[1 2 3 4 9 6]
 [2 3 4 7 8 0]]

Output when we reverse the parameters:

[[1 2]
 [3 4]
 [9 6]
 [2 3]
 [4 7]
 [8 0]]

[[1 2 3 4]
 [9 6 2 3]
 [4 7 8 0]]

[[1 2 3]
 [4 9 6]
 [2 3 4]
 [7 8 0]]
Note: the input and output must contain the same total number of elements, otherwise tf.reshape raises an error. An easy way to check a target shape is to multiply out its dimensions and make sure the product matches the number of elements in the input (here 4 × 3 = 12).
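A quick helper for that check before calling tf.reshape; this is a minimal sketch, and the function name can_reshape is my own, not part of any API:

import numpy as np

def can_reshape(src_shape, target_shape):
    # element counts must match; a single -1 acts as a wildcard that
    # TensorFlow infers, so it is valid whenever the known dimensions
    # divide the total evenly
    total = np.prod(src_shape)
    known = np.prod([d for d in target_shape if d != -1])
    if -1 in target_shape:
        return total % known == 0
    return total == known

print(can_reshape((4, 3), (-1, 4)))   # True:  12 % 4 == 0
print(can_reshape((4, 3), (5, -1)))   # False: 12 % 5 != 0
print(can_reshape((4, 3), (2, 6)))    # True:  12 == 12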
tf.dynamic_partition()
This is declared as:

tf.dynamic_partition(data, partitions, num_partitions, name=None)

Note:
* we declare num_partitions, the number of partitions
* we use the array initialised earlier
* we declare the partition as [0, 0, 1, 1]: entries labelled 0 go to the first partition and entries labelled 1 to the second, given that num_partitions=2
* the output is a list

In [96]: print("This is the array\n", array)
This is the array
[[1 2 3]
 [4 9 6]
 [2 3 4]
 [7 8 0]]

We show how to make two and three partitions below.

In [123]: num_partitions = 2
          num_partitions1 = 3
          partitions = [0, 0, 1, 1]
          partitions1 = [0, 1, 1, 2]

In [119]: dynamic_ops  = tf.dynamic_partition(array, partitions, num_partitions, name=None)    # 2 partitions
          dynamic_ops1 = tf.dynamic_partition(array, partitions1, num_partitions1, name=None) # 3 partitions

In [125]: with tf.Session() as sess:
              run = sess.run(dynamic_ops)
              run1 = sess.run(dynamic_ops1)
              print("Output for 2 partitions:")
              print(run[0]); print("\n")
              print(run[1]); print("\n")   # compare the result with the initial array; the output is a list
              print("Output for three partitions:")
              print(run1[0]); print("\n")
              print(run1[1]); print("\n")
              print(run1[2]); print("\n")
Output for 2 partitions:
[[1 2 3]
 [4 9 6]]

[[2 3 4]
 [7 8 0]]

Output for three partitions:
[[1 2 3]]

[[4 9 6]
 [2 3 4]]

[[7 8 0]]
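In practice the partition labels are usually computed from the data rather than written by hand. A minimal sketch of my own (same TF 1.x session style) that splits a vector into even and odd entries:

import numpy as np
import tensorflow as tf

values = np.array([3, 8, 5, 12, 7, 6])
labels = tf.cast(values % 2, tf.int32)   # 0 for even entries, 1 for odd entries
even_odd = tf.dynamic_partition(values, labels, num_partitions=2)

with tf.Session() as sess:
    evens, odds = sess.run(even_odd)
    print(evens)   # [ 8 12  6]
    print(odds)    # [3 5 7]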
tf.split()
Make sure you are using a recent version of TensorFlow; the argument order of tf.split changed around TensorFlow 1.0, so in older versions the call below will raise an error.
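A quick way to confirm which version you have installed:

import tensorflow as tf
print(tf.__version__)   # the signature used below assumes 1.0 or later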
The documentation gives its signature as:

tf.split(value, num_or_size_splits, axis=0, num=None, name='split')

It splits a tensor into sub-tensors. This is best illustrated by an example:
* We define a (5, 30) array in NumPy.
* We split the array along axis 1.
* We specify the sizes of the splits as a 1-D tensor, so we get 3 splits.

Specify an array

Create a (5, 30) NumPy array. The syntax using NumPy is shown below.

In [2]: ArrayBeforeSplitting = np.arange(150).reshape(5,30)
        print("Array shape without split operation is :", ArrayBeforeSplitting.shape)
Array shape without split operation is : (5, 30)

Specify the split sizes

In [3]: split_1D = tf.Variable([8,13,9])
        print("specify the sizes of the splits using a 1-D variable:", tf.shape(split_1D))
specify the sizes of the splits using a 1-D variable: <tf.Tensor 'Shape:0' shape=(1,) dtype=int32>

Use tf.split

Make 3 splits along axis 1 so that we get (5,8), (5,13) and (5,9) pieces. Axis 1 has 30 elements, so the split sizes along that axis must add up to 30, otherwise it raises an error.

In [6]: # 3 splits along axis 1, with the sizes given by split_1D:
        # axis 1 (30 elements) is split into pieces of 8, 13 and 9 elements,
        # while axis 0 stays unchanged
        split1, split2, split3 = tf.split(ArrayBeforeSplitting, split_1D, 1)

In [7]: # initialise global variables, because split_1D is a variable and must be
        # initialised before it is used in the computational graph
        init_op = tf.global_variables_initializer()

In [16]: with tf.Session() as sess:
             sess.run(init_op)   # run the variable initialisation
             result = split1.eval(); print("\n")
             print(result)
             print("the shape of the first split operation is :", result.shape)
             result2 = split2.eval(); print("\n")
             print(result2)
             print("the shape of the second split operation is :", result2.shape)
             result3 = split3.eval(); print("\n")
             print(result3)
             print("the shape of the third split operation is :", result3.shape)

[[  0   1   2   3   4   5   6   7]
 [ 30  31  32  33  34  35  36  37]
 [ 60  61  62  63  64  65  66  67]
 [ 90  91  92  93  94  95  96  97]
 [120 121 122 123 124 125 126 127]]
the shape of the first split operation is : (5, 8)

[[  8   9  10  11  12  13  14  15  16  17  18  19  20]
 [ 38  39  40  41  42  43  44  45  46  47  48  49  50]
 [ 68  69  70  71  72  73  74  75  76  77  78  79  80]
 [ 98  99 100 101 102 103 104 105 106 107 108 109 110]
 [128 129 130 131 132 133 134 135 136 137 138 139 140]]
the shape of the second split operation is : (5, 13)

[[ 21  22  23  24  25  26  27  28  29]
 [ 51  52  53  54  55  56  57  58  59]
 [ 81  82  83  84  85  86  87  88  89]
 [111 112 113 114 115 116 117 118 119]
 [141 142 143 144 145 146 147 148 149]]
the shape of the third split operation is : (5, 9)
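If you just want equal pieces, num_or_size_splits can also be a plain integer giving the number of splits, as long as it divides the axis length evenly. A short sketch reusing the same array:

import numpy as np
import tensorflow as tf

ArrayBeforeSplitting = np.arange(150).reshape(5, 30)
equal_splits = tf.split(ArrayBeforeSplitting, 3, axis=1)   # three equal (5, 10) pieces

with tf.Session() as sess:
    for piece in sess.run(equal_splits):
        print(piece.shape)   # prints (5, 10) three times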
Hope this helps!