I think the comparison is unfair. Of course you will get outliers; the computation time depends on several factors (garbage collection, result caching, etc.), so this is not a surprise. You use the same vector `a` in all tests, so caching will certainly play a role.
I adjusted the process a bit by re-randomizing the variable `a` before each computation, and I got fairly comparable results:
library("microbenchmark") do.not<-function() { a <- sample(0:1, size=3e6, replace=TRUE) a!=0; } do<-function() { a <- sample(0:1, size=3e6, replace=TRUE) a==0; } randomize <- function() { a <- sample(0:1, size=3e6, replace=TRUE) } speed <- microbenchmark(randomize(), do.not(), do(), times=100) boxplot(speed, notch=TRUE, unit="ms", log=F)

I also added the `sample` call alone as a reference, so you can see how much of the total time the sampling itself accounts for.
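If you prefer numbers to a plot, one way to read the result is to compare the medians and subtract the `randomize()` baseline. A minimal sketch (the `summary()` output is standard microbenchmark; the column selection is just one way to look at it):

```r
s <- summary(speed)               # one row per expression: min, lq, mean, median, uq, max
print(s[, c("expr", "median")])   # the median is the most robust single number here
# The difference between the do.not() and randomize() medians approximates the
# cost of a != 0 alone, and likewise for do(); the two differences should be close.
```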
Personally, I am not surprised by the outliers. Even if you run the same tests with `size=10`, you still get outliers. They are not a consequence of the computation itself but of the general state of the machine (other processes, memory load, etc.).
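A minimal way to check this yourself (a sketch, not from the original post; `times = 1000` is an arbitrary choice):

```r
library("microbenchmark")
a_small <- sample(0:1, size = 10, replace = TRUE)   # tiny fixed vector
speed_small <- microbenchmark(a_small != 0, a_small == 0, times = 1000)
print(speed_small)   # the max column typically sits far above the median:
                     # outliers show up even at size 10
```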
Thanks,
Nikos