Ridge-regularized glmnet computes the coefficients for the first lambda value differently when the lambda sequence is chosen by the glmnet algorithm than when it is passed explicitly in the call. For example, the following two models (which I would expect to be identical)
> m <- glmnet(rbind(c(1, 0), c(0, 1)), c(1, 0), alpha=0)
> m2 <- glmnet(rbind(c(1, 0), c(0, 1)), c(1, 0), alpha=0, lambda=m$lambda)
give completely different coefficients:
> coef(m, s=m$lambda[1])
3 x 1 sparse Matrix of class "dgCMatrix"
1
(Intercept) 5.000000e-01
V1 1.010101e-36
V2 -1.010101e-36
> coef(m2, s=m2$lambda[1])
3 x 1 sparse Matrix of class "dgCMatrix"
1
(Intercept) 0.500000000
V1 0.000998004
V2 -0.000998004
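To make the gap concrete, it can be measured directly (a minimal check, reusing the m and m2 fits from above):
> # largest absolute difference between the two coefficient vectors at the first lambda
> max(abs(coef(m, s = m$lambda[1]) - coef(m2, s = m2$lambda[1])))
The result is roughly 0.000998, i.e. exactly the size of the V1/V2 coefficients reported for m2.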
The same thing happens with other data sets: whenever lambda is not supplied to glmnet, all coefficients at lambda.max (i.e. coef(m, s = m$lambda[1])), except the intercept, are essentially zero, so the predictions are identical for any X (presumably up to rounding?).
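That behaviour is easy to check with predict() on arbitrary new data (the newx values below are just an example):
> # with the auto-selected lambda sequence, predictions at lambda[1] collapse to the intercept
> predict(m, newx = rbind(c(2, -1), c(-3, 5)), s = m$lambda[1])
Both rows come back as essentially 0.5, the intercept, regardless of the input values.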
My questions:
- Why is this so? Is the difference intentional?
- Which of the two results should I trust for coef(m, s = m$lambda[1])?