I am trying to optimize code that calculates double sums of products of elements from two square matrices. Suppose we have two square n × n matrices, W and V. The object to be calculated is a vector B with elements

B_i = Σ_{j ≠ i} Σ_{k ≠ i, k ≠ j} w_{ik} v_{jk},   i = 1, ..., n.
In plain words: take the element-wise product of a row of the first matrix with a different row of the second matrix and sum it, then take an additional sum over all rows of the second matrix, skipping the coinciding indices.
The problem is that the computational complexity of this task is apparently O(n^3): the object we create, B, has length n, and each of its elements requires a double summation. This is what I came up with:
- Given i and j (i ≠ j), start with the inner sum over k: take the sum over all k, subtract the terms for k = i and k = j, and multiply the result by the indicator of j ≠ i.
- Since the restriction j ≠ i is already taken into account in the inner sum, the outer sum can simply run over all j = 1, ..., n.
If we denote the full inner sum by S_{ij} = Σ_k w_{ik} v_{jk}, then the two steps look like this:

B_{ij} = ( S_{ij} − w_{ii} v_{ji} − w_{ij} v_{jj} ) · 1(i ≠ j)

and

B_i = Σ_{j=1}^{n} B_{ij}.
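For concreteness, the full inner sum S_{ij} is just the (i, j) entry of the matrix product w %*% t(v). A minimal sanity check of that identity, using small toy matrices (the names m, A, C, S here are purely illustrative and unrelated to the actual data below):

m <- 4
A <- matrix(runif(m * m), m)   # stands in for w
C <- matrix(runif(m * m), m)   # stands in for v
S <- A %*% t(C)                # S[i, j] = sum_k A[i, k] * C[j, k]
all.equal(S[2, 3], sum(A[2, ] * C[3, ]))  # TRUE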
However, writing this as explicit loops turned out to be very inefficient. n = 100 is fast (0.05 seconds), but for n = 500 (the order of magnitude of real applications) the average computation time is about 3 seconds, and for n = 1000 it climbs to 22 seconds.
The inner loop over k is easy to replace by a vectorised sum, but the outer one is not. The solution proposed to this question is sapply, but that still means the summation is performed over all elements j = 1, ..., n (see the sapply sketch after the code below).
set.seed(1)
N <- 500
x1 <- rnorm(N)
x2 <- rchisq(N, df=3)
bw1 <- bw.nrd(x1)
bw2 <- bw.nrd(x2)
# row-normalised Gaussian kernel weight matrices
w <- outer(x1, x1, function(x, y) dnorm((x - y)/bw1))
w <- w/rowSums(w)
v <- outer(x2, x2, function(x, y) dnorm((x - y)/bw2))
v <- v/rowSums(v)
Bij <- matrix(NA, ncol = N, nrow = N)
for (i in 1:N) {
  for (j in 1:N) {
    # full inner sum over k, minus the k = i and k = j terms, zeroed when i = j
    Bij[i, j] <- (sum(w[i, ] * v[j, ]) - w[i, i] * v[j, i] - w[i, j] * v[j, j]) * (i != j)
  }
}
Bi <- rowSums(Bij)
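For reference, this is roughly what the sapply variant mentioned above would look like for the same quantities (a sketch only, reusing N, w, v and Bi from the code above; the names with the _sapply suffix are mine). It removes the explicit for loops but still performs the same O(n^2) element-wise sums, so I don't expect it to be much faster:

Bij_sapply <- sapply(1:N, function(j)
  sapply(1:N, function(i)
    (sum(w[i, ] * v[j, ]) - w[i, i] * v[j, i] - w[i, j] * v[j, j]) * (i != j)))
Bi_sapply <- rowSums(Bij_sapply)
all.equal(Bi, Bi_sapply)  # TRUE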
How can this computation be sped up in R?