TensorFlow einsum vs. matmul vs. tensordot

The TensorFlow functions tf.einsum, tf.matmul, and tf.tensordot can be used for the same tasks. (I understand that tf.einsum and tf.tensordot have more general definitions; I also understand that tf.matmul has batch functionality.) In a situation where any of the three can be used, does one function tend to be the fastest? Are there other guidelines or recommendations?

For example, suppose A is a rank-2 tensor and b is a rank-1 tensor, and you want to compute the product c_i = A_ij b_j. Of the three options:

c = tf.einsum('ij,j->i', A, b)

c = tf.matmul(A, tf.expand_dims(b,1))

c = tf.tensordot(A, b, 1)

which one is generally preferable to the others?
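
For concreteness, here is a minimal sketch (assuming TensorFlow 2.x with eager execution; the shapes are arbitrary) that defines small example tensors and checks that the three options agree:

# Minimal sketch (assumes TensorFlow 2.x, eager execution; shapes chosen arbitrarily)
# verifying that the three options compute the same c_i = A_ij b_j.
import tensorflow as tf

A = tf.random.normal([4, 3])   # rank-2 tensor
b = tf.random.normal([3])      # rank-1 tensor

c_einsum = tf.einsum('ij,j->i', A, b)                          # shape (4,)
c_matmul = tf.squeeze(tf.matmul(A, tf.expand_dims(b, 1)), 1)   # (4, 1) squeezed to (4,)
c_tensordot = tf.tensordot(A, b, 1)                            # shape (4,)

print(tf.reduce_max(tf.abs(c_einsum - c_matmul)).numpy())      # ~0.0
print(tf.reduce_max(tf.abs(c_einsum - c_tensordot)).numpy())   # ~0.0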

+13
2 answers

Both tf.tensordot() and tf.einsum() are syntactic sugar that wrap one or more calls to tf.matmul() (although in some special cases tf.einsum() can be reduced to a simple elementwise tf.multiply()).

In the limit, I would expect all three functions to have equivalent performance for the same computation. However, for smaller matrices it may be more efficient to use tf.matmul() directly, because it yields a simpler TensorFlow graph with fewer operations, and therefore the per-operation invocation cost will be lower.
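
To make the comparison concrete, here is a rough timing sketch (not from the answer above; it assumes TensorFlow 2.x, where wrapping each call in tf.function loosely mirrors the graph-mode setting the answer describes, with extra ops adding per-call overhead):

# Rough timing sketch (assumption: TensorFlow 2.x; not the answer's own benchmark).
import timeit
import tensorflow as tf

A = tf.random.normal([64, 64])
b = tf.random.normal([64])

@tf.function
def via_matmul(A, b):
    return tf.matmul(A, tf.expand_dims(b, 1))

@tf.function
def via_einsum(A, b):
    return tf.einsum('ij,j->i', A, b)

@tf.function
def via_tensordot(A, b):
    return tf.tensordot(A, b, 1)

for name, fn in [('matmul', via_matmul), ('einsum', via_einsum), ('tensordot', via_tensordot)]:
    fn(A, b)  # trace once so graph construction is not included in the timing
    t = timeit.timeit(lambda: fn(A, b), number=1000)
    print(name, t)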

+11

The speed of tf.einsum depends on the opt_einsum optimization package, which was included in numpy 1.14 (released in January 2018).

See: http://github.com/tensorflow/tensorflow/issues/1062
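
For illustration only (a NumPy sketch, not TensorFlow internals): np.einsum_path exposes the contraction order that opt_einsum-style optimization would choose, which is the mechanism the comment above refers to.

# Illustrative NumPy sketch showing the kind of optimization opt_einsum performs
# for a chain of products: reordering pairwise contractions to reduce FLOPs.
import numpy as np

a = np.random.rand(10, 30)
b = np.random.rand(30, 5)
c = np.random.rand(5, 60)

path, info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')
print(path)    # e.g. ['einsum_path', (0, 1), (0, 1)] -- chosen pairwise contraction order
print(info)    # FLOP estimate for naive vs. optimized evaluation

result = np.einsum('ij,jk,kl->il', a, b, c, optimize=path)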


0

Source: https://habr.com/ru/post/1673531/

