The main advantage of TMP for matrix operations is not the ability to precompute the result of a matrix operation, but the ability to optimize the generated code that performs the actual calculation at runtime. You are right that you would rarely want to precompute a whole matrix in the program, but you do want to optimize the math during compilation, before the program starts running. For example, consider this code:
Matrix a, b, c;
Matrix d = a + b + c;
The last line uses overloaded operators to evaluate the matrix expression. With traditional C++ programming techniques, this works as follows:
- Compute b + c, returning a temporary Matrix object holding a copy of the result.
- Compute a + (b + c), again returning a temporary copy.
- Copy the result into d.
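The naive evaluation described above can be sketched like this. The Matrix type here is a hypothetical stand-in (a flat vector of doubles) just to make the temporary-copy behaviour concrete; every call to operator+ allocates and fills a brand-new Matrix, so a + b + c materialises one temporary per +:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical minimal Matrix stand-in, flattened to one dimension
// to keep the sketch short; names are illustrative only.
struct Matrix {
    std::vector<double> data;
    explicit Matrix(std::size_t n, double v = 0.0) : data(n, v) {}
};

// The traditional operator+: each call builds a whole new Matrix,
// so `a + b + c` loops over the data twice and creates a temporary
// for each `+` before the result reaches `d`.
Matrix operator+(const Matrix& l, const Matrix& r) {
    Matrix result(l.data.size());
    for (std::size_t i = 0; i < l.data.size(); ++i)
        result.data[i] = l.data[i] + r.data[i];
    return result;
}
```

(In modern C++ the final copy into d is typically elided or turned into a move, but the intermediate temporaries and the repeated loops over the data remain.)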
This is slow - there is no reason to make copies of any of the values here. Instead, we should just loop over all the indices of the matrices and sum the values we find there. However, using a TMP technique called expression templates, you can implement these operators so that the calculation is in fact done in this smart, optimized way rather than the slow, naive way. This is the family of techniques that I think Meyers mentioned in the book.
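A minimal sketch of the expression-template idea, again using a hypothetical one-dimensional Matrix stand-in (all names are illustrative, not a real library's API): operator+ no longer computes anything, it just builds a lightweight node describing the sum, and the whole expression is evaluated in a single loop only when it is assigned into a Matrix:

```cpp
#include <cstddef>
#include <vector>

// Expression node: records references to its operands instead of
// computing a result. Element i is computed on demand.
template <typename L, typename R>
struct Sum {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Matrix {
    std::vector<double> data;
    explicit Matrix(std::size_t n, double v = 0.0) : data(n, v) {}
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Constructing from an expression evaluates it in ONE loop,
    // with no intermediate Matrix temporaries.
    template <typename L, typename R>
    Matrix(const Sum<L, R>& e) : data(e.size()) {
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
    }
};

// operator+ builds an expression node; `a + b + c` nests two Sum
// nodes, and no arithmetic happens until the assignment to `d`.
template <typename L, typename R>
Sum<L, R> operator+(const L& l, const R& r) { return {l, r}; }
```

The expression nodes hold references to temporaries, which is safe here because they are consumed within the same full expression; a production-quality implementation (such as those in real linear-algebra libraries) handles operand lifetimes and mixed operations far more carefully.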
The best-known examples of TMP are toy programs that precompute values at compile time, but in practice it is much more elaborate techniques like this one that actually get used.