R data.table: by group, keep the first row when a column is entirely missing

I have a data.table and I'm trying to do something similar to data[ !is.na(variable) ]. However, for groups where the variable is completely missing, I would like to keep just the first row of the group, so I'm trying to use a subset. I did some research online and came up with a solution, but I believe it is inefficient.

Below is an example showing what I hope to achieve. I wonder if this can be done without creating the two additional columns.

    library(data.table)

    d_sample = data.table(
      ID    = c(1, 1, 2, 2, 3, 3),
      Time  = c(10, 15, 100, 110, 200, 220),
      Event = c(NA, NA, NA, 1, 1, NA)
    )

    d_sample[ !is.na(Event), isValidOutcomeRow := TRUE, by = ID]
    d_sample[ , isValidOutcomePatient := any(isValidOutcomeRow), by = ID]
    d_sample[ is.na(isValidOutcomePatient), isValidOutcomeRow := c(TRUE, rep(NA, .N - 1)), by = ID]
    d_sample[ isValidOutcomeRow == TRUE ]
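For reference, a plain NA filter on this sample drops ID 1 entirely, which is why the extra bookkeeping columns are needed in the first place (output shown as I would expect it):

    d_sample[ !is.na(Event) ]
    #    ID Time Event
    # 1:  2  110     1
    # 2:  3  200     1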

EDIT: Below are some speed comparisons of thelatemail's and Frank's solutions on a larger dataset with 60K rows.

    d_sample = data.table(
      ID    = sort(rep(seq(1, 30000), 2)),
      Time  = rep(c(10, 15, 100, 110, 200, 220), 10000),
      Event = rep(c(NA, NA, NA, 1, 1, NA), 10000)
    )

thelatemail's solution takes 20.65 seconds on my machine:

 system.time(d_sample[, if(all(is.na(Event))) .SD[1] else .SD[!is.na(Event)][1], by=ID]) 

Frank's first solution has a runtime of effectively 0:

 system.time( unique( d_sample[order(is.na(Event))], by="ID" ) ) 

Frank's second solution has a runtime of 0.05 seconds:

 system.time( d_sample[order(is.na(Event)), .SD[1L], by=ID] ) 
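Single system.time() calls like these are fairly noisy; for more stable numbers, a sketch of the same comparison using the microbenchmark package (not part of the original post) could look like this:

    library(microbenchmark)

    microbenchmark(
      unique_order = unique(d_sample[order(is.na(Event))], by = "ID"),
      sd_first     = d_sample[order(is.na(Event)), .SD[1L], by = ID],
      sd_if        = d_sample[, if (all(is.na(Event))) .SD[1] else .SD[!is.na(Event)][1], by = ID],
      times = 10
    )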
2 answers

This works:

    unique( d_sample[order(is.na(Event))], by = "ID" )
       ID Time Event
    1:  2  110     1
    2:  3  200     1
    3:  1   10    NA

Alternatively, d_sample[order(is.na(Event)), .SD[1L], by=ID].
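Both versions rely on the same idea: order(is.na(Event)) puts every row with a non-missing Event first (FALSE sorts before TRUE), so the first row kept for each ID is a non-NA row whenever the group has one. A quick look at the intermediate ordering on the sample data (output as I would expect it):

    d_sample[order(is.na(Event))]
    #    ID Time Event
    # 1:  2  110     1
    # 2:  3  200     1
    # 3:  1   10    NA
    # 4:  1   15    NA
    # 5:  2  100    NA
    # 6:  3  220    NA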


Extending the OP's example, I find similar timings for the two approaches:

    n = 12e4 # must be a multiple of 6
    set.seed(1)
    d_sample = data.table(
      ID    = sort(rep(seq(1, n/2), 2)),
      Time  = rep(c(10, 15, 100, 110, 200, 220), n/6),
      Event = rep(c(NA, NA, NA, 1, 1, NA), n/6)
    )

    system.time(rf  <- unique( d_sample[order(is.na(Event))], by = "ID" ))  # 1.17
    system.time(rf2 <- d_sample[order(is.na(Event)), .SD[1L], by = ID])     # 1.24
    system.time(rt  <- d_sample[, if (all(is.na(Event))) .SD[1] else .SD[!is.na(Event)], by = ID])  # 10.42
    system.time(rt2 <- d_sample[ d_sample[, { w = which(is.na(Event)); .I[ if (length(w) == .N) 1L else -w ] }, by = ID]$V1 ])  # 0.13

    # verify
    identical(rf, rf2)  # TRUE
    identical(rf, rt)   # FALSE
    fsetequal(rf, rt)   # TRUE
    identical(rt, rt2)  # TRUE

@thelatemail's modified version, rt2, is the fastest by a wide margin.


Here is an attempt that could probably be improved; it relies on a quick if() check to decide which result to return for each group:

    d_sample[, if (all(is.na(Event))) .SD[1] else .SD[!is.na(Event)], by = ID]
    #   ID Time Event
    #1:  1   10    NA
    #2:  2  110     1
    #3:  3  200     1

Following @eddi's workaround for subsetting by groups, this becomes:

    d_sample[
      d_sample[, { w = which(is.na(Event))
                   .I[ if (length(w) == .N) 1L else -w ] }, by = ID]$V1
    ]
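The inner call computes, per ID, the global row numbers (.I) to keep: every non-NA row, or just the first row when the whole group is missing. On the sample data it should return something like the following, which is then used to subset d_sample directly, avoiding .SD entirely:

    d_sample[, { w = which(is.na(Event))
                 .I[ if (length(w) == .N) 1L else -w ] }, by = ID]
    #    ID V1
    # 1:  1  1
    # 2:  2  4
    # 3:  3  5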
