How to speed up this row splitting operation in data.table

I have a data.table in which several identifiers have been pasted together into a single character column, separated by underscores. I am trying to split the identifiers back out into separate columns, but my best method is very slow on my large dataset (~250M rows). Interestingly, the operation does not seem to take the O(N) time I would expect: it is reasonably fast up to about 50M rows, and then becomes very slow.
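
For concreteness, a toy example of the split I am after (made-up identifier values), using data.table::tstrsplit:

x <- c("L0123_S0123_T0123", "L0124_S0456_T0789")  # toy identifiers
data.table::tstrsplit(x, split="_", fixed=TRUE)
# returns a list of three character vectors, one per target column:
# "L0123" "L0124"  /  "S0123" "S0456"  /  "T0123" "T0789"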

Make some data

require(data.table)
set.seed(2016)
sim_rows <- 40000000
dt <- data.table(
  LineId = rep("L0123", times=sim_rows),
  StationId = rep("S0123", times=sim_rows),
  TimeId = rep("T0123", times=sim_rows)
)
dt[, InfoId := paste(LineId, StationId, TimeId, sep="_")]
dt[, c("LineId", "StationId", "TimeId") := NULL]
gc(reset=TRUE)  # free up ~1.5 GB of memory

dt
                     InfoId
       1: L0123_S0123_T0123
       2: L0123_S0123_T0123
       3: L0123_S0123_T0123
       4: L0123_S0123_T0123
       5: L0123_S0123_T0123
      ---                  
39999996: L0123_S0123_T0123
39999997: L0123_S0123_T0123
39999998: L0123_S0123_T0123
39999999: L0123_S0123_T0123
40000000: L0123_S0123_T0123

Check timings

system.time( dt[1:10000000, c("LineId", "StationId", "TimeId") :=
    tstrsplit(InfoId, split="_", fixed=TRUE)] )
 user  system elapsed 
5.179   0.634   3.867

system.time( dt[1:20000000, c("LineId", "StationId", "TimeId") :=
    tstrsplit(InfoId, split="_", fixed=TRUE)] )
 user  system elapsed 
7.805   0.958   7.703

system.time( dt[1:30000000, c("LineId", "StationId", "TimeId") :=
    tstrsplit(InfoId, split="_", fixed=TRUE)] )
  user  system elapsed 
12.556   1.782  12.349

system.time( dt[1:40000000, c("LineId", "StationId", "TimeId") :=
    tstrsplit(InfoId, split="_", fixed=TRUE)] )
  user  system elapsed 
29.260   2.822  29.895

Check result

dt
                     InfoId LineId StationId TimeId
       1: L0123_S0123_T0123  L0123     S0123  T0123
       2: L0123_S0123_T0123  L0123     S0123  T0123
       3: L0123_S0123_T0123  L0123     S0123  T0123
       4: L0123_S0123_T0123  L0123     S0123  T0123
       5: L0123_S0123_T0123  L0123     S0123  T0123
      ---                                          
39999996: L0123_S0123_T0123  L0123     S0123  T0123
39999997: L0123_S0123_T0123  L0123     S0123  T0123
39999998: L0123_S0123_T0123  L0123     S0123  T0123
39999999: L0123_S0123_T0123  L0123     S0123  T0123
40000000: L0123_S0123_T0123  L0123     S0123  T0123

How can I speed this baby up?

2 answers

Have you tried stringr or, better yet, stringi?

stringi is the engine underneath stringr and offers several pattern engines (fixed/coll/regex/words/boundaries/charclass), so you can pick the cheapest one that does the job.

stri_split_fixed(..., '_') should be noticeably faster here, since a fixed-string match avoids regex overhead.
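
For illustration, the same split expressed with a few of those engines (a quick sketch; the fixed matcher does the least work):

library(stringi)
stri_split_fixed("L0123_S0123_T0123", "_")        # literal match
stri_split_regex("L0123_S0123_T0123", "_")        # regular expression engine
stri_split_charclass("L0123_S0123_T0123", "[_]")  # character-class match
# each returns list(c("L0123", "S0123", "T0123"))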

require(stringi)
# wrap in data.table::transpose() so the per-row split list becomes one vector per target column
system.time( dt[1:1e6, c("LineId", "StationId", "TimeId") :=
    transpose(stri_split_fixed(InfoId, "_"))] )
   user  system elapsed 
  2.635   0.497   3.379  # on my old machine; please tell us your numbers?

Another option: stri_split from stringi. Comparing it with tstrsplit on the full 40M rows:

library(stringi)
dt1 <- copy(dt)
system.time( dt[1:40000000, c("LineId", "StationId", "TimeId") := 
          tstrsplit(InfoId, split="_", fixed=TRUE)] )
#   user  system elapsed 
#  41.20    1.03   42.39 

system.time( dt1[1:40000000, c("LineId", "StationId", "TimeId") := 
     transpose(stri_split(InfoId, fixed = "_"))] )
#   user  system elapsed 
#  28.78    0.98   29.74 
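
As a quick sanity check (assuming both snippets above ran against the same starting data), the two approaches should agree:

all.equal(dt, dt1)  # expect TRUE: same columns, same values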
