Fastest way to impute column means in a big data.table

I have a large numeric data set (~700 rows, 350,000 columns, read as a data.table in R) containing some NAs, which I would like to replace with the column means as quickly as possible. I found a previous post that replaces NA with 0, but when I modify that solution to insert the column mean instead, every NA gets replaced with j, the column number. It seems I must be missing something obvious... Any suggestions on how to compute the column means with this method?

Fastest way to replace NAs in a large data.table

    library(data.table)

    # original code
    f_dowle3 = function(DT) {
      for (j in seq_len(ncol(DT)))
        set(DT, which(is.na(DT[[j]])), j, 0)
    }

    # modified code
    impute = function(DT) {
      for (j in 2:ncol(DT))
        set(DT, which(is.na(DT[[j]])), j, mean(DT[, j], na.rm = TRUE))
    }

    test_impute = fread("test_impute.csv")
    test_impute
    #     ID snp1 snp2 snp3 snp4
    #  1:  1    2    1    1    0
    #  2:  2    2    2    0    0
    #  3:  3    2   NA    0   NA
    #  4:  4    2    1    2    0
    #  5:  5    2   NA    2    0
    #  6:  6    2    1    1    0
    #  7:  7    1    1   NA    0
    #  8:  8   NA    1    0    0
    #  9:  9    2    2    2   NA
    # 10: 10    1    1    0    0

    impute(test_impute)
    test_impute
    #     ID snp1 snp2 snp3 snp4
    #  1:  1    2    1    1    0
    #  2:  2    2    2    0    0
    #  3:  3    2    3    0    5
    #  4:  4    2    1    2    0
    #  5:  5    2    3    2    0
    #  6:  6    2    1    1    0
    #  7:  7    1    1    4    0
    #  8:  8    2    1    0    0
    #  9:  9    2    2    2    5
    # 10: 10    1    1    0    0
2 answers

You cannot use dt1[, j] to grab a column from a data.table:

    dt1[, 1]
    # [1] 1
    dt1[, 2342]
    # [1] 2342

Change DT[, j] to DT[[j]] to fix.
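As a minimal illustration (toy data, not from the question) of why [[ works: it is ordinary list indexing, so it always returns the column itself as a vector:

    library(data.table)
    dt = data.table(a = 1:3, b = 4:6)
    j = 2
    dt[[j]]    # list-style indexing: returns column 2 as a vector: 4 5 6
    dt[["b"]]  # the same column, selected by name

In the question's impute(), mean(DT[, j], na.rm = TRUE) was therefore computing mean(j) = j, which is exactly the column number that showed up in place of each NA.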

First, some data:

    library(data.table)

    set.seed(47)
    n = 10
    ncol = 10
    dt1 = data.table(replicate(ncol, expr = {
      ifelse(runif(n) < 0.2, NA_real_, rpois(n, 10))
    }))

    impute1 = function(DT) {
      for (j in 2:ncol(DT))
        set(DT, which(is.na(DT[[j]])), j, mean(DT[[j]], na.rm = TRUE))
    }

    dt1
    #     V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
    #  1:  6 11 10  7 13 10 12  8 13  12
    #  2: 10  8 NA  7 16 10 10  8  5   5
    #  3: 14  7  9  9 NA 13  9 NA 10  NA
    #  4:  4  4 13 10  7 10 14  8 13  15
    #  5:  7 NA  8 NA 12 NA 15 10 11   8
    #  6:  6  9  7 15 NA  5 12 15 10   5
    #  7:  4  9  5 NA 10 12  9  8 12  14
    #  8: 12  8 NA  9  7 NA 11  4  8  11
    #  9:  8 10 12 14 10 NA 11  9 10  10
    # 10:  7  6 NA 13  8 14 11  6 10  NA

    impute1(dt1)
    dt1
    #     V1 V2        V3   V4     V5       V6 V7        V8 V9 V10
    #  1:  6 11 10.000000  7.0 13.000 10.00000 12  8.000000 13  12
    #  2: 10  8  9.142857  7.0 16.000 10.00000 10  8.000000  5   5
    #  3: 14  7  9.000000  9.0 10.375 13.00000  9  8.444444 10  10
    #  4:  4  4 13.000000 10.0  7.000 10.00000 14  8.000000 13  15
    #  5:  7  8  8.000000 10.5 12.000 10.57143 15 10.000000 11   8
    #  6:  6  9  7.000000 15.0 10.375  5.00000 12 15.000000 10   5
    #  7:  4  9  5.000000 10.5 10.000 12.00000  9  8.000000 12  14
    #  8: 12  8  9.142857  9.0  7.000 10.57143 11  4.000000  8  11
    #  9:  8 10 12.000000 14.0 10.000 10.57143 11  9.000000 10  10
    # 10:  7  6  9.142857 13.0  8.000 14.00000 11  6.000000 10  10

Another option is to pre-compute the column means. colMeans is pretty fast, so it could be the fastest approach, especially with as many columns as you have.

    impute2 = function(DT) {
      means = colMeans(DT, na.rm = TRUE)
      for (j in 2:ncol(DT))
        set(DT, which(is.na(DT[[j]])), j, means[j])
    }
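One usage note (mine, not from the answer): both impute1() and impute2() modify the table in place via set(), so take a deep copy first if you still need the un-imputed data. A sketch, assuming DT is a freshly loaded data.table that still contains NAs:

    library(data.table)
    DT_backup = copy(DT)  # copy() makes a deep copy; plain assignment
                          # (DT_backup = DT) would share the same memory
    impute2(DT)           # DT is imputed in place; DT_backup keeps the NAs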

If you do not want to create your own function, you can also use other imputation packages.

For example, imputeTS:

    library(imputeTS)
    solution <- na.mean(yourDataframe)
    # note: na.mean() was renamed to na_mean() in imputeTS 3.0

Other packages, such as mice, also have similar options.
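For instance, a minimal sketch of unconditional mean imputation with mice; the settings m = 1, maxit = 1, and printFlag = FALSE are my assumptions to request a single quiet pass instead of mice's default multiple imputation:

    library(mice)
    # method = "mean" fills each numeric column with its observed mean
    imp <- mice(yourDataframe, method = "mean", m = 1, maxit = 1,
                printFlag = FALSE)
    solution <- complete(imp)  # extract the completed data set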

I guess you will need to benchmark to check which one is the fastest. Perhaps Gregor's last solution (the colMeans one) is the fastest.
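Here is a sketch of such a benchmark; the microbenchmark package, the 700 x 2,000 test table, and the ~10% missingness rate are my assumptions (impute1() and impute2() are the functions from the answer above; they modify by reference, hence the copy() calls):

    library(data.table)
    library(microbenchmark)
    library(imputeTS)

    set.seed(1)
    # 700 rows x 2,000 SNP-like columns with roughly 10% missing values
    wide = data.table(ID = 1:700,
                      replicate(2000, ifelse(runif(700) < 0.1,
                                             NA_real_, rpois(700, 10))))

    microbenchmark(
      loop_mean = impute1(copy(wide)),  # mean() recomputed inside the loop
      colmeans  = impute2(copy(wide)),  # means pre-computed once
      imputeTS  = na.mean(wide),        # returns a new object, so no copy
      times = 5
    )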


Source: https://habr.com/ru/post/1259852/

