Linear regression with TensorFlow: obtaining values for adjusted R squared, coefficients, p-values

There are several key statistics associated with linear regression, for example adjusted R squared, coefficients, p-values, R squared, multiple R, and so on. When implementing linear regression with the TensorFlow API, how are these values exposed? Is there any way to obtain them during or after model execution?

3 answers

From my experience, if you want these values while your model is running, you need to compute them with TensorFlow ops. If you only need them after the model has run, you can use SciPy or another implementation. Below are examples of how you can code R^2, MAPE, and RMSE:


# total sum of squares and residual (unexplained) sum of squares
total_error = tf.reduce_sum(tf.square(tf.subtract(y, tf.reduce_mean(y))))
unexplained_error = tf.reduce_sum(tf.square(tf.subtract(y, prediction)))
# note: this computes total/unexplained - 1, not the standard R^2 = 1 - unexplained/total
R_squared = tf.subtract(tf.divide(total_error, unexplained_error), 1.0)
R = tf.multiply(tf.sign(R_squared), tf.sqrt(tf.abs(R_squared)))

# mean absolute percentage error
MAPE = tf.reduce_mean(tf.abs(tf.divide(tf.subtract(y, prediction), y)))

# root mean squared error
RMSE = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(y, prediction))))
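If you only need the metrics after training, the same formulas can be verified in plain Python. A minimal sketch, with toy `y` and `prediction` arrays made up for illustration:

```python
import math

y = [100.0, 200.0, 400.0]           # true targets (toy data)
prediction = [110.0, 190.0, 400.0]  # model outputs (toy data)

# mean absolute percentage error: mean(|(y - pred) / y|)
mape = sum(abs((t - p) / t) for t, p in zip(y, prediction)) / len(y)

# root mean squared error: sqrt(mean((y - pred)^2))
rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y, prediction)) / len(y))

print(mape)  # 0.05
print(rmse)  # about 8.165 (sqrt(200/3))
```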

I believe the formula for R^2 should be the following. Note that it becomes negative when the network performs so badly that it does worse than simply predicting the mean:

total_error = tf.reduce_sum(tf.square(tf.subtract(y, tf.reduce_mean(y))))

unexplained_error = tf.reduce_sum(tf.square(tf.subtract(y, pred)))

R_squared = tf.subtract(1.0, tf.divide(unexplained_error, total_error)) 
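As a sanity check, this corrected formula can be evaluated by hand on a tiny example (the numbers below are made up):

```python
y = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]

mean_y = sum(y) / len(y)                                        # 2.5
total_error = sum((t - mean_y) ** 2 for t in y)                 # 5.0
unexplained_error = sum((t - p) ** 2 for t, p in zip(y, pred))  # 0.10
r_squared = 1.0 - unexplained_error / total_error

print(r_squared)  # 0.98
```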

Adjusted_R_squared = 1 - [(1 - R_squared) * (n - 1) / (n - k - 1)]

where n is the number of observations and k is the number of independent variables (predictors).
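A small helper function (hypothetical, not from the original answer) makes the adjusted R^2 formula concrete, with n as the number of observations and k as the number of predictors:

```python
def adjusted_r_squared(r_squared, n, k):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)."""
    return 1.0 - (1.0 - r_squared) * (n - 1) / (n - k - 1)

# example: R^2 = 0.98 on 20 observations with 3 predictors
print(adjusted_r_squared(0.98, n=20, k=3))  # about 0.97625
```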


Source: https://habr.com/ru/post/1651740/

