I have tried the following:
a = randn(100, 100);   # 100×100 matrix
b = randn(100, 100);   # 100×100 matrix
c = randn(100, 1);     # 100×1 column
@time a * b * c        # left-to-right by default: (a*b)*c
@time a * (b * c)      # forced right-to-left: two matrix-vector products
Results:
julia> @time a * b * c;
  0.000591 seconds (7 allocations: 79.234 KiB)

julia> @time a * (b * c);
  0.000101 seconds (6 allocations: 1.906 KiB)
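For what it is worth, @time includes compilation overhead on a first call, so a steadier comparison (a minimal sketch, assuming the BenchmarkTools package is installed) would be:

using BenchmarkTools

@btime $a * $b * $c;      # (a*b)*c: matrix-matrix product first
@btime $a * ($b * $c);    # a*(b*c): two matrix-vector products

The $ interpolation keeps the global variables from being benchmarked as untyped globals.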
The results are consistent across runs. It makes intuitive sense why the second is faster: it performs two matrix-vector multiplications instead of a large matrix-matrix multiplication followed by a matrix-vector multiplication.
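A rough flop count backs this up (my arithmetic, not something from the timings above): with n = 100, (a*b)*c costs about 2n^3 + 2n^2 flops while a*(b*c) costs about 4n^2 flops, roughly a 50x difference in work:

n = 100
flops_left  = 2n^3 + 2n^2   # a*b is matrix-matrix, then (a*b)*c is matrix-vector
flops_right = 2n^2 + 2n^2   # b*c is matrix-vector, then a*(b*c) is matrix-vector
flops_left / flops_right    # ≈ 50.5

The measured speedup is smaller than 50x, presumably because the matrix-matrix product runs through highly optimized BLAS, but the direction matches.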
I was wondering whether Julia could optimize this itself: it knows the sizes of the matrices, so it could change the order of the operations to minimize the work. Or am I just being lazy wishing for this, or are there technical problems that I do not see? (A sketch of what I have in mind follows.)
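This is a small instance of the classic matrix chain ordering problem, and since the sizes are only known at runtime, the decision would have to happen at runtime too. For three operands it is just a comparison of two costs. A hedged sketch (chain_mul is a name I made up; nothing like it exists in Base as far as I know):

# Hypothetical helper: pick the cheaper association of A*B*C from the
# dimensions, using the standard flop-count model for dense products.
function chain_mul(A::AbstractMatrix, B::AbstractMatrix, C::AbstractVecOrMat)
    p, q = size(A)          # A is p×q
    r = size(B, 2)          # B is q×r
    s = size(C, 2)          # C is r×s (s = 1 for a vector)
    cost_left  = p*q*r + p*r*s   # (A*B)*C
    cost_right = q*r*s + p*q*s   # A*(B*C)
    cost_left <= cost_right ? (A*B)*C : A*(B*C)
end

chain_mul(a, b, c)   # picks a*(b*c) for these sizes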
Digging a bit further, this is what I get when I use dump() on :($a * $b * $c):
Expr
  head: Symbol call
  args: Array{Any}((4,))
    1: Symbol *
    2: Array{Float64}((100, 100)) [0.290788 -0.0601455 … -0.408164 1.16261; -0.539274 -1.56979 … 2.56233 0.806247; … ; 1.30981 -1.31929 … 1.38655 -1.89169; -1.58483 0.318804 … -0.0500151 2.13105]
    3: Array{Float64}((100, 100)) [-0.464882 1.60371 … -0.390234 0.605401; -1.06837 0.296049 … 0.759708 0.0124688; … ; -0.149613 -1.38653 … 0.284494 1.47524; 0.34351 0.420449 … 0.544973 1.85736]
    4: Array{Float64}((100, 1)) [1.64066; 0.593296; … ; 0.908361; 0.486164]
  typ: Any
And dump() on :($a * ($b * $c)):
Expr
  head: Symbol call
  args: Array{Any}((3,))
    1: Symbol *
    2: Array{Float64}((100, 100)) [0.290788 -0.0601455 … -0.408164 1.16261; -0.539274 -1.56979 … 2.56233 0.806247; … ; 1.30981 -1.31929 … 1.38655 -1.89169; -1.58483 0.318804 … -0.0500151 2.13105]
    3: Expr
      head: Symbol call
      args: Array{Any}((3,))
        1: Symbol *
        2: Array{Float64}((100, 100)) [-0.464882 1.60371 … -0.390234 0.605401; -1.06837 0.296049 … 0.759708 0.0124688; … ; -0.149613 -1.38653 … 0.284494 1.47524; 0.34351 0.420449 … 0.544973 1.85736]
        3: Array{Float64}((100, 1)) [1.64066; 0.593296; … ; 0.908361; 0.486164]
      typ: Any
  typ: Any
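So a * b * c parses as a single call *(a, b, c) with all three operands visible, while a * (b * c) is two nested binary calls. That suggests a three-argument method of * could in principle see every size and pick the association itself. A minimal sketch of that idea (note this redefines * on Base array types, which is type piracy, so it is an illustration, not a recommendation):

import Base: *

# Demonstration: because a*b*c lowers to *(a, b, c), a three-argument
# method sees all operands and can reassociate based on their sizes.
function *(A::Matrix{Float64}, B::Matrix{Float64}, C::Matrix{Float64})
    p, q, r, s = size(A, 1), size(A, 2), size(B, 2), size(C, 2)
    # flop model: (A*B)*C costs p*q*r + p*r*s, A*(B*C) costs q*r*s + p*q*s
    p*q*r + p*r*s <= q*r*s + p*q*s ? (A*B)*C : A*(B*C)
end

a * b * c   # now evaluated as a * (b * c) without changing the source

(I believe more recent Julia versions added size-aware methods along these lines for chained matrix products in LinearAlgebra, but I have not checked on this version.)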