I believe I have found the source of the problem in the biglm code.
The number of observations (n) is stored as an integer, which in R has a maximum value of 2^31 - 1. The numeric type does not have this restriction and, as far as I can tell, can be used instead of an integer to store n.
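To illustrate the limitation (this is base R behavior, nothing biglm-specific): integer arithmetic overflows past 2^31 - 1 and produces NA, while a numeric (double) keeps counting:

```r
n <- .Machine$integer.max  # 2147483647, the largest R integer (2^31 - 1)
n + 1L                     # NA, with a warning about integer overflow
as.numeric(n) + 1          # 2147483648; a double handles counts this large
```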
Here is a commit on GitHub that shows how to fix this problem with one extra line of code that converts the integer n to numeric. As the model is updated, the number of rows in each new batch is added to the old n, so n stays numeric.
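For context, here is a minimal sketch of the idea behind the patch. The function and field names are my own illustration, not the actual biglm internals:

```r
## Hypothetical sketch -- illustrative names, not the real biglm source.
make_model <- function(mm) {
  model <- list()
  # The one-line fix: store the row count as numeric rather than integer,
  # so later additions can exceed 2^31 - 1 without overflowing.
  model$n <- as.numeric(nrow(mm))
  model
}

update_model <- function(model, mm) {
  # numeric + integer promotes to numeric in R, so n stays numeric
  # across every update and never hits the integer ceiling.
  model$n <- model$n + nrow(mm)
  model
}
```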
I was able to reproduce the error described in this question and verify that my fix works, using the following code:
(WARNING: this consumes a lot of memory; if memory is tight, consider doing more iterations with a smaller data frame.)
```r
library(biglm)

# 10 million rows; after the initial fit plus 300 updates the model has
# seen 301 * 1e7 = 3.01e9 observations, which exceeds 2^31 - 1.
df <- as.data.frame(replicate(3, rnorm(10000000)))
a <- biglm(V1 ~ V2 + V3, df)
for (i in 1:300) {
  a <- update(a, df)
}
print(summary(a))
```
With the stock biglm package, this code outputs:
```
Large data regression model: biglm(ff, df)
Sample size =  NA
              Coef (95%  CI) SE  p
(Intercept) -1e-04   NA  NA  NA NA
V2          -1e-04   NA  NA  NA NA
V3          -2e-04   NA  NA  NA NA
```
My patched version outputs:
```
Large data regression model: biglm(V1 ~ V2 + V3, df)
Sample size =  3.01e+09
              Coef   (95%     CI) SE p
(Intercept) -3e-04 -3e-04 -3e-04   0 0
V2          -2e-04 -2e-04 -1e-04   0 0
V3           3e-04  3e-04  3e-04   0 0
```
The SE and p values are actually nonzero; they are just rounded to zero in the output above.
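As a quick sanity check of that claim (standalone R, nothing biglm-specific):

```r
se <- 1.4e-07   # a hypothetical small standard error
round(se, 4)    # prints as 0 at this rounding
se == 0         # FALSE -- the underlying value is nonzero
```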
I am new to the R ecosystem, so I would appreciate it if someone could tell me how to submit this patch so that the original author can review it and, ideally, include it in the upstream package.