How do I get y_hat using predict() when the response and explanatory variables are log-transformed?

I have a log-log linear model:

lom1 = lm(log(y)~log(x1)+log(x2),data=mod_dt) 

I want to get y_hat for the same dataset, so I did

 yhat = exp(predict(lom1)) 

The result seems way off compared to the y_hat I calculated manually in R.

Any idea why?
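(For reference, a minimal reproducible sketch with simulated data standing in for mod_dt; the column names y, x1, x2 and the true coefficients here are assumptions. exp(predict(lom1)) should match a manual back-transform built from coef(lom1) exactly, so a large gap usually means the manual formula was computed differently.)

 # Simulated stand-in for mod_dt; column names y, x1, x2 are assumed
 set.seed(1)
 mod_dt <- data.frame(x1 = runif(100, 1, 10), x2 = runif(100, 1, 10))
 mod_dt$y <- exp(0.5 + 0.8 * log(mod_dt$x1) + 0.3 * log(mod_dt$x2) + rnorm(100, sd = 0.2))

 lom1 <- lm(log(y) ~ log(x1) + log(x2), data = mod_dt)

 # Manual back-transform from the fitted coefficients: (Intercept), log(x1), log(x2)
 b <- coef(lom1)
 yhat_manual <- exp(b[1] + b[2] * log(mod_dt$x1) + b[3] * log(mod_dt$x2))
 yhat_pred   <- exp(predict(lom1))

 all.equal(unname(yhat_pred), unname(yhat_manual))   # TRUE: the two agree exactly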

A second, related question: I first added three columns to the original mod_dt dataset holding the log transforms of y, x1, and x2, say logy, logx1, and logx2, and then ran lm:

 lom2 = lm(logy ~ logx1 + logx2, data=mod_dt) 

This gives a different set of coefficients.

Can I get the right y_hat by doing

 exp(predict(lom2)) 

Thank you very much in advance.
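(For reference, continuing the simulated mod_dt sketch above, which is an assumption about the real data: pre-computing the log columns and fitting on them should reproduce the lom1 coefficients exactly, so a different set of coefficients suggests the log columns were built from different variables or with a different base, e.g. log10 instead of log.)

 # Pre-computed log columns, using the names from the question
 mod_dt$logy  <- log(mod_dt$y)
 mod_dt$logx1 <- log(mod_dt$x1)
 mod_dt$logx2 <- log(mod_dt$x2)

 lom2 <- lm(logy ~ logx1 + logx2, data = mod_dt)

 # Same coefficients and same back-transformed predictions as lom1
 all.equal(unname(coef(lom1)), unname(coef(lom2)))                  # TRUE
 all.equal(unname(exp(predict(lom1))), unname(exp(predict(lom2))))  # TRUE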

2 answers

When a model like your formula is fitted, it corresponds to a multiplicative model on the untransformed scale: y_hat = exp(b0) * x1^b1 * x2^b2, i.e. something like Y ~ X1 * X2 rather than an additive relationship. You will need to post your data if you want a more specific review of your results.
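(A quick check of that multiplicative form, reusing the simulated mod_dt and lom1 from the sketch in the question; both are assumptions.)

 # y_hat = exp(b0) * x1^b1 * x2^b2 reproduces exp(predict(lom1))
 b <- coef(lom1)
 yhat_mult <- exp(b[1]) * mod_dt$x1^b[2] * mod_dt$x2^b[3]
 all.equal(unname(exp(predict(lom1))), unname(yhat_mult))   # TRUE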


This is not a definitive answer; I just want to share some thoughts. The linear regression model assumes E(y) = x * beta. If y is log-transformed, it becomes E(log(y)) = x * beta. However, when we try to predict y, in general exp(E(log(y))) is not equal to E(y) (by Jensen's inequality, since exp is convex, exp(E(log(y))) <= E(y)).
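(A small illustration of that point with simulated lognormal data; the numbers are specific to this simulation. exp of the mean log recovers the median of y, not the mean, so exp(predict()) tends to under-predict E(y); under roughly normal errors a common correction is to multiply by exp(sigma^2 / 2).)

 # exp(E[log(y)]) is the median of y, not its mean
 set.seed(2)
 log_y <- rnorm(1e5, mean = 1, sd = 0.8)
 y_sim <- exp(log_y)

 exp(mean(log_y))                     # ~ exp(1) = 2.72, the median of y_sim
 mean(y_sim)                          # ~ exp(1 + 0.8^2/2) = 3.74, the mean of y_sim
 exp(mean(log_y) + var(log_y) / 2)    # lognormal correction, close to mean(y_sim)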


Source: https://habr.com/ru/post/1397934/

