Time to retrieve individual elements from data.table and data.frame objects

In my work I deal with several tables (client information, transaction records, etc.). Some of them are very large (millions of rows), so I recently switched to the data.table package (thanks, Matthew). However, some of them are quite small (a few hundred rows and 4-5 columns) and are queried many times. So I started thinking about the overhead of [.data.table when retrieving data, rather than of set()ting values, which is already clearly described in ?set: regardless of the size of the table, one element is set in about 2 microseconds (depending on the CPU).
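To make the ?set point concrete, here is a minimal sketch (the table size and loop are illustrative, not my real data) of the pattern described there: set() assigns one cell per call inside a loop, without paying the [.data.table overhead on each iteration.

 library(data.table)

 # small illustrative table, similar in size to the small lookup tables mentioned above
 DTs = as.data.table(matrix(0L, nrow = 500, ncol = 5))   # columns V1..V5

 for (i in seq_len(nrow(DTs))) {
   # set() is the low-overhead, loopable form of :=, as documented in ?set
   set(DTs, i = i, j = 1L, value = i * 2L)
 }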

However, there does not seem to be an equivalent of set for getting a value from a data.table when the exact row and column are known, a sort of loopable [.data.table.

 library(data.table)
 library(microbenchmark)

 m  = matrix(1, nrow = 100000, ncol = 100)
 DF = as.data.frame(m)
 DT = as.data.table(m)   # same data used in ?set

 > microbenchmark(DF[3450, 1], DT[3450, V1], times = 1000)   # much more overhead in DT
 Unit: microseconds
          expr     min      lq   median      uq      max neval
   DF[3450, 1]  32.745  36.166  40.5645  43.497  193.533  1000
  DT[3450, V1] 788.791 803.453 813.2270 832.287 5826.982  1000

 > microbenchmark(DF$V1[3450], DT[3450, 1, with = F], times = 1000)   # using atomic vector and
                                                                      # removing part of DT overhead
 Unit: microseconds
                   expr     min      lq  median      uq      max neval
            DF$V1[3450]   2.933   3.910   5.865   6.354   36.166  1000
  DT[3450, 1, with = F] 297.629 303.494 305.938 309.359 1878.632  1000

 > microbenchmark(DF$V1[3450], DT$V1[3450], times = 1000)   # using only atomic vectors
 Unit: microseconds
         expr   min    lq median    uq    max neval
  DF$V1[3450] 2.933 2.933  3.421 3.422 40.565  1000   # DF seems still a bit faster (23%)
  DT$V1[3450] 3.910 3.911  4.399 4.399 16.128  1000

The latter method is really the best way to quickly get one item many times. However, set is even faster:

 > microbenchmark(set(DT, 1L, 1L, 5L), times = 1000)
 Unit: microseconds
                 expr   min    lq median    uq    max neval
  set(DT, 1L, 1L, 5L) 1.955 1.956  2.444 2.444 24.926  1000

Question: if we can set a value in 2.444 microseconds, shouldn't it be possible to get a value in a smaller (or at least similar) amount of time? Thanks.

EDIT: adding two more options:

 > microbenchmark(`[.data.frame`(DT, 3450, 1), DT[["V1"]][3450], times = 1000)
 Unit: microseconds
                         expr    min     lq median     uq      max neval
  `[.data.frame`(DT, 3450, 1) 46.428 47.895 48.383 48.872 2165.509  1000
             DT[["V1"]][3450] 20.038 21.504 23.459 24.437  116.316  1000

which, unfortunately, are not faster than previous attempts.

performance r dataframe data.table
Jun 02 '13 at 10:51
1 answer

Thanks to @hadley, we have a solution!

 > microbenchmark(DT$V1[3450], set(DT, 1L, 1L, 5L), .subset2(DT, "V1")[3450], times = 1000, unit = "us")
 Unit: microseconds
                      expr   min    lq median    uq    max neval
               DT$V1[3450] 2.566 3.208  3.208 3.528 27.582  1000
       set(DT, 1L, 1L, 5L) 1.604 1.925  1.925 2.246 15.074  1000
  .subset2(DT, "V1")[3450] 0.000 0.321  0.322 0.642  8.339  1000
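.subset2 is the internal, non-dispatching version of [[, so it bypasses [.data.table entirely and returns the column as a plain atomic vector, which is then indexed directly. As a usage sketch (the helper below is purely illustrative, not part of data.table), repeated single-cell reads can be wrapped like this:

 library(data.table)
 DT = as.data.table(matrix(1, nrow = 100000, ncol = 100))

 # illustrative helper: fetch one cell without any [.data.table overhead;
 # 'col' can be a column name or position, and no validity checks are performed
 get_cell = function(dt, row, col) .subset2(dt, col)[row]

 get_cell(DT, 3450L, "V1")   # by column name
 get_cell(DT, 3450L, 1L)     # by column position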
Jun 09 '13 at 14:36


