Results of cv.glmnet vs. glmnet; measuring explanatory power

When evaluating a lasso model via the glmnet package, I am wondering whether it is better to: (a) pull the coefficients / predictions / deviance directly from the cv.fit object returned by cv.glmnet, or (b) use the minimum lambda from cv.glmnet to re-run glmnet and pull these objects from the glmnet fit. (Please be patient - I have a feeling this is documented somewhere, but I keep finding examples/tutorials online of both approaches and no solid logic for going one way or the other.)

That is, for the coefficients, I can run (a):

cvfit = cv.glmnet(x=xtrain, y=ytrain, alpha=1, type.measure = "mse", nfolds = 20)
coef.cv <- coef(cvfit, s = "lambda.min")

Or I can run (b):

fit = glmnet(x=xtrain, y=ytrain, alpha=1, lambda=cvfit$lambda.min)
coef <- coef(fit)  # fit was built with a single lambda, so no s argument is needed; the string "lambda.min" is only understood by cv.glmnet objects
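As a side note on (b): the glmnet help page advises against calling glmnet with a single lambda value, because the coefficients are computed along a decreasing lambda path with warm starts, which is likely where the small numerical differences come from. A minimal sketch of the documented alternative - fitting the full path and extracting exactly at lambda.min (exact = TRUE re-solves at s itself instead of interpolating, and in recent glmnet versions requires passing the training data again):

fit.path <- glmnet(x=xtrain, y=ytrain, alpha=1)
# exact=TRUE solves at s directly rather than interpolating between path values
coef.exact <- coef(fit.path, s=cvfit$lambda.min, exact=TRUE, x=xtrain, y=ytrain)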

Although these two processes select the same model variables, they do not give the same coefficients. Similarly, I could predict via either of the following two processes:

prdct <- predict(fit,newx=xtest)
prdct.cv <- predict(cvfit, newx=xtest, s = "lambda.min")

Again, these give similar but not identical predictions.
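To see how large the gap actually is, one could compare the two vectors directly; a quick sketch (using the prdct and prdct.cv objects from above):

# largest absolute disagreement between the two prediction vectors
max(abs(prdct - prdct.cv))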

Finally, I would have thought I could pull the % deviance explained via either of these two routes:

percdev <- fit$dev.ratio
# mse.min.cereal is presumably min(cvfit$cvm) for this dataset (defined elsewhere in my script)
percdev.cv <- cvfit$glmnet.fit$dev.ratio[cvfit$cvm==mse.min.cereal]

The percdev.cv route feels clumsy: because cv.glmnet fits its own sequence of (up to 100) lambda values, I have to use the logical index cvfit$cvm==mse.min.cereal to dig the matching dev.ratio entry out of cvfit$glmnet.fit.
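If the indexing is the worry, one less fragile sketch is to match on the selected lambda itself rather than on the CV error (a tie in cvfit$cvm would make the logical index pick out multiple entries):

# cvfit$glmnet.fit$dev.ratio is aligned with cvfit$lambda, so index by position
idx <- which(cvfit$lambda == cvfit$lambda.min)
percdev.cv2 <- cvfit$glmnet.fit$dev.ratio[idx]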

In any case, the two routes do not return the same dev.ratio. Any guidance on which process is preferable, and why the results differ, would be appreciated. Thanks in advance!


Source: https://habr.com/ru/post/1690092/
