I recently switched from SPSS to R for my data analysis. As part of the transition, I have been redoing in R some analyses that I had previously run in SPSS, partly so that I end up with one clean, well-ordered script.
The data in this case are self-ratings of feelings of hostility from 9 participants in an isolated and confined environment. I measured them at five time points (Summer, Autumn, Winter, Spring, Summer again). The data are not normally distributed.
Years ago I ran a Friedman test in SPSS, which gave me p = .012, χ2(4) = 12.79. Today I reran the test in R, and it gave me p = .951 (χ2(4) = 0.69). This really bothers me, because it gives me reason to doubt all of my analyses.
Once I discovered this, I re-exported the data from SPSS to .csv, read that file into an R script, and ran the Friedman test again, to verify that I had not accidentally used different data files. That is definitely not the problem.
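The re-check looked roughly like the sketch below; the file name and layout (one column per season, one row per participant) are placeholders rather than my exact setup:

# re-exported SPSS data: one column per season, one row per participant (assumed layout)
hostility_df <- read.csv("hostility_export.csv")
friedman.test(as.matrix(hostility_df))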
I used the Friedman test as described by Andy Field:
# hostility scores of the 9 participants at each of the five time points
Summer1 <- c(2,0,0,0,0,0,0,0,0)
Autumn <- c(3,0,1,0,0,4,2,0,1)
Winter <- c(1,0,0,0,0,2,5,1,1)
Spring <- c(1,0,2,2,2,8,4,0,1)
Summer2 <- c(3,0,2,1,0,4,7,1,1)
Hostility <- matrix(c(Summer1, Autumn, Winter, Spring, Summer2), nrow=9, byrow=TRUE)  # 45 values arranged into a 9 x 5 matrix, filled row by row
friedman.test(Hostility)  # friedman.test() on a matrix treats rows as blocks and columns as groups
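For reference, friedman.test() can also be called in long format with explicit groups and blocks (see ?friedman.test). A sketch of that call on the same data, with purely illustrative variable names, would be:

# long format: one score per (participant, season) pair
scores  <- c(Summer1, Autumn, Winter, Spring, Summer2)
season  <- gl(5, 9, labels = c("Summer1", "Autumn", "Winter", "Spring", "Summer2"))  # 5 seasons x 9 participants
subject <- factor(rep(1:9, times = 5))  # participant IDs, repeated for each season
friedman.test(scores, groups = season, blocks = subject)

Both calls are meant to describe the same design: 9 participants (blocks) each measured across 5 seasons (groups).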
Does anyone have an explanation for this, or an idea about which result is correct?