Inference on the slope in R

By default, lm tests whether each coefficient is equal to zero. My question is very simple: I want to know how to test whether a slope is equal to a non-zero value. One approach may be to use confint , but this does not provide a p-value. I also wonder how to do a one-sided test using lm .

    ctl <- c(4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14)
    trt <- c(4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69)
    group <- gl(2, 10, 20, labels = c("Ctl", "Trt"))
    weight <- c(ctl, trt)
    lm.D9 <- lm(weight ~ group)
    summary(lm.D9)

    Call:
    lm(formula = weight ~ group)

    Residuals:
        Min      1Q  Median      3Q     Max
    -1.0710 -0.4938  0.0685  0.2462  1.3690

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)
    (Intercept)   5.0320     0.2202  22.850 9.55e-15 ***
    groupTrt     -0.3710     0.3114  -1.191    0.249
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

    Residual standard error: 0.6964 on 18 degrees of freedom
    Multiple R-squared:  0.07308,   Adjusted R-squared:  0.02158
    F-statistic: 1.419 on 1 and 18 DF,  p-value: 0.249

    confint(lm.D9)
                    2.5 %    97.5 %
    (Intercept)  4.56934 5.4946602
    groupTrt    -1.02530 0.2833003

Thank you for your time and effort.

6 answers

Use the linearHypothesis function from the car package. For example, you can test whether the groupTrt coefficient equals -1:

    linearHypothesis(lm.D9, "groupTrt = -1")
    Linear hypothesis test

    Hypothesis:
    groupTrt = - 1

    Model 1: restricted model
    Model 2: weight ~ group

      Res.Df     RSS Df Sum of Sq      F  Pr(>F)
    1     19 10.7075
    2     18  8.7292  1    1.9782 4.0791 0.05856 .
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

As @power says, you can do it yourself. Here is an example:

    > est <- summary.lm(lm.D9)$coef[2, 1]  # slope estimate
    > se  <- summary.lm(lm.D9)$coef[2, 2]  # its standard error
    > df  <- summary.lm(lm.D9)$df[2]       # residual degrees of freedom
    >
    > m <- 0
    > 2 * pt(-abs((est - m) / se), df)     # two-sided p-value for H0: slope = m
    [1] 0.2490232
    >
    > m <- 0.2
    > 2 * pt(-abs((est - m) / se), df)
    [1] 0.08332659

(The formula 2 * pt(-abs(t), df) works for either sign of the t statistic; doubling pt(t, df) directly only gives the right answer when t is negative.)

and you can perform a one-sided test by calling pt() directly with the appropriate lower.tail argument instead of doubling the tail probability.

UPDATE

Here is an example of the two-sided and one-sided probabilities:

    > m <- 0.2
    >
    > # two-sided probability
    > 2 * pt(-abs((est - m) / se), df)
    [1] 0.08332659
    >
    > # one-sided, upper (i.e., alternative: slope greater than 0.2)
    > pt((est - m) / se, df, lower.tail = FALSE)
    [1] 0.9583367
    >
    > # one-sided, lower (i.e., alternative: slope less than 0.2)
    > pt((est - m) / se, df, lower.tail = TRUE)
    [1] 0.0416633

Note that the upper and lower one-sided probabilities sum to 1.
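The hand computation above is easy to wrap in a small helper. This is a sketch of my own, not from the answers here; the function name coef_ttest and its interface are invented for illustration:

```r
# Hypothetical helper (coef_ttest is not a standard R function):
# t-test of H0: beta = m for one coefficient of a fitted lm model.
coef_ttest <- function(fit, coef_name, m = 0,
                       alternative = c("two.sided", "greater", "less")) {
  alternative <- match.arg(alternative)
  est <- coef(summary(fit))[coef_name, "Estimate"]
  se  <- coef(summary(fit))[coef_name, "Std. Error"]
  df  <- df.residual(fit)
  tstat <- (est - m) / se
  p <- switch(alternative,
              two.sided = 2 * pt(-abs(tstat), df),
              greater   = pt(tstat, df, lower.tail = FALSE),
              less      = pt(tstat, df, lower.tail = TRUE))
  c(estimate = est, t = tstat, df = df, p.value = p)
}

# Data from the question:
ctl <- c(4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14)
trt <- c(4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69)
group <- gl(2, 10, 20, labels = c("Ctl", "Trt"))
weight <- c(ctl, trt)
lm.D9 <- lm(weight ~ group)

coef_ttest(lm.D9, "groupTrt", m = -1)  # p.value approx. 0.0586
```

For m = -1 this reproduces the p-value from the linearHypothesis example (the F statistic there is the square of this t statistic).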


The smatr package has a slope.test() function that can test a slope against a non-zero value, and it supports OLS.


In addition to all the other good answers, you can use an offset. This is a bit trickier with categorical predictors because you need to know the coding.

    lm(weight ~ group + offset(1 * (group == "Trt")))

The 1* is not strictly necessary here, but it emphasizes that you are testing the hypothesis that the difference is 1 (to test a hypothesized difference d , use d*(group == "Trt") ).
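As a concrete sketch (my own example, not from the answer above): with an offset of -1 * (group == "Trt"), the default zero test that summary() reports for groupTrt becomes a test of whether the difference equals -1, matching the linearHypothesis example earlier in the thread.

```r
# Data from the question:
ctl <- c(4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14)
trt <- c(4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69)
group <- gl(2, 10, 20, labels = c("Ctl", "Trt"))
weight <- c(ctl, trt)

# The offset shifts the model so that summary()'s default test of
# groupTrt = 0 is equivalent to testing "true difference = -1".
fit <- lm(weight ~ group + offset(-1 * (group == "Trt")))
summary(fit)$coefficients["groupTrt", ]  # Pr(>|t|) approx. 0.0586
```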


You can use t.test to do this for your data. The mu parameter sets the hypothesized difference in group means. The alternative parameter lets you choose between one-sided and two-sided tests. Note that t.test computes the difference as Ctl minus Trt, the opposite sign of the groupTrt coefficient from lm .

    t.test(weight ~ group, var.equal = TRUE)

            Two Sample t-test

    data:  weight by group
    t = 1.1913, df = 18, p-value = 0.249
    alternative hypothesis: true difference in means is not equal to 0
    95 percent confidence interval:
     -0.2833003  1.0253003
    sample estimates:
    mean in group Ctl mean in group Trt
                5.032             4.661

    t.test(weight ~ group, var.equal = TRUE, mu = -1)

            Two Sample t-test

    data:  weight by group
    t = 4.4022, df = 18, p-value = 0.0003438
    alternative hypothesis: true difference in means is not equal to -1
    95 percent confidence interval:
     -0.2833003  1.0253003
    sample estimates:
    mean in group Ctl mean in group Trt
                5.032             4.661

Create your own test. You know the estimated coefficient and you know its standard error, so you can construct your own test statistic.


Source: https://habr.com/ru/post/1380698/
