
How to calculate Effect Size in SPSS?

7/18/2006

Chunju Chen:

Could I get some help with SPSS? How can I get the effect size for three groups with different means? I am running some chi-square analyses, so it's not a General Linear Model, and I saw that only GLM has an option for reporting effect size.


Willard Hom:

Summary of response: Hmm. If your effect size is for the three different means, I don't quite see the use of chi-square here... Did I miss something in your question? (That's entirely possible, you know.)
Anyway, if my memory has not abandoned me, the most recent issue of Psychology in the Schools has a decent article on effect size calculation (including examples in SPSS). It shows the formulae for finding effect sizes in a variety of basic situations (tabular data, ANOVA, and correlations).
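For the tabular (chi-square) case that started this thread, one widely used effect size is Cramer's V, which is computed directly from the chi-square statistic. Below is a minimal sketch in Python/SciPy rather than SPSS, with a hypothetical 3x2 table of counts:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 3x2 table: three groups by pass/fail, for illustration only.
    table = np.array([[30, 20],
                      [25, 25],
                      [15, 35]])

    chi2, p, dof, expected = chi2_contingency(table)

    n = table.sum()                       # total number of observations
    k = min(table.shape) - 1              # smaller table dimension minus 1
    cramers_v = np.sqrt(chi2 / (n * k))   # Cramer's V effect size

    print(f"chi-square = {chi2:.3f}, p = {p:.4f}, Cramer's V = {cramers_v:.3f}")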


Yuxiang Liu:

Summary of response: The following SPSS output is from a handout for the Research Design and Statistics course that I teach. It shows the calculation of effect size in ANOVA. Please be aware that there is more than one version of effect size, and the calculations differ.
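(The handout output itself is not included in this archive. As a rough illustration of one common version: eta-squared, closely related to the "partial eta squared" that SPSS's GLM procedure can display, equals the between-groups sum of squares divided by the total sum of squares. The Python sketch below uses hypothetical scores for three groups:)

    import numpy as np
    from scipy.stats import f_oneway

    # Hypothetical scores for three independent groups.
    groups = [np.array([78, 84, 90, 72, 88]),
              np.array([81, 85, 79, 92, 86]),
              np.array([68, 74, 70, 77, 73])]

    F, p = f_oneway(*groups)

    # Eta-squared = between-groups sum of squares / total sum of squares.
    allscores = np.concatenate(groups)
    grand_mean = allscores.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((allscores - grand_mean) ** 2).sum()
    eta_squared = ss_between / ss_total

    print(f"F = {F:.3f}, p = {p:.4f}, eta-squared = {eta_squared:.3f}")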

If you want to compare the mean scores of two independent groups, and want to know the effect size, you may use the following formula:

Delta (effect size) = (mean score of the experimental group - mean score of comparison group) / standard deviation of comparison group

Source: page 226 of the following book: Wallen, N. E., & Fraenkel, J. R. (2001). Educational Research: A Guide to the Process. New York: McGraw-Hill.
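In Python, that formula (often called Glass's delta) translates directly; the group scores below are hypothetical, and ddof=1 gives the sample standard deviation:

    import numpy as np

    experimental = np.array([85, 90, 78, 92, 88])   # hypothetical scores
    comparison = np.array([80, 75, 82, 79, 84])     # hypothetical scores

    # Delta = mean difference scaled by the comparison group's standard deviation.
    delta = (experimental.mean() - comparison.mean()) / comparison.std(ddof=1)
    print(f"delta = {delta:.2f}")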


Webmaster's notes:

Effect size is a way to measure the magnitude of a treatment effect. The larger the effect size, the easier it is to observe the difference between the experimental group and the comparison group.


Yuxiang Liu:

The effect size is often defined as the magnitude of an observed effect. For instance, if we run a correlation analysis with a software package, we usually get two major statistics, among other things: the significance value (often called the probability value or p-value) and the correlation coefficient. We often check the p-value first to see whether the relationship is statistically significant. If the p-value is < .05 (in education we usually set the significance level at .05), the relationship is statistically significant ("significant" here means "unlikely to have occurred by chance"). The second thing we do is look at how large the effect size is; in this case, the effect size is the correlation coefficient itself. As we know, the correlation coefficient ranges from –1 to +1, and the closer it is to zero, the weaker the relationship. If the correlation coefficient is .04, for instance, we conclude that the relationship is statistically significant, but the effect size is too small to be meaningful.

How large does an effect size need to be to be meaningful? There is no definite answer to this question. Cohen (1988, 1992) "hesitantly" made the following suggestions, and they have been widely accepted:

r = .10 (small effect)
r = .30 (medium effect)
r = .50 (large effect)
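To see these benchmarks in action, here is a minimal Python sketch with simulated data (the 0.3 coefficient below builds in a roughly medium-sized relationship; the variables are hypothetical):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)               # hypothetical variable
    y = 0.3 * x + rng.normal(size=200)     # built-in moderate relationship

    r, p = pearsonr(x, y)
    size = "small" if abs(r) < .30 else "medium" if abs(r) < .50 else "large"
    print(f"r = {r:.2f}, p = {p:.4f} -> roughly a {size} effect by these benchmarks")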


Chunju Chen:

Hi Yuxiang,

Your explanation makes sense as the paragraph goes on, but then I got lost at: "If the correlation coefficient is .04, for instance, we conclude that the relationship is statistically significant, but the effect size is too small to be meaningful." The part about statistical significance still makes sense, but the claim that the effect size is too small to be meaningful is what I don't quite understand.


Jiali Luo:

Chunju,

In this case, if we square the correlation coefficient, it becomes easier to understand how strong the relationship between the two variables is. The square of the coefficient, or r squared, equals the percentage of the variation in one variable that is related to the variation in the other variable. When we square the coefficient under discussion (i.e., r = 0.04), we get 0.0016; in other words, only 0.16% of the variance in A is shared with B, which is almost negligible. Hence we can see why the effect is too small to be meaningful. With a very large sample size, we can easily get statistically significant results, but when we check the magnitude of the shared variance, it is sometimes just too small to have any practical implications.
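A quick simulation makes the point concrete (the data are hypothetical, with the true correlation set near .04; note how tiny the p-value is despite the negligible shared variance):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n = 100_000
    x = rng.normal(size=n)
    y = 0.04 * x + rng.normal(size=n)   # true correlation is about .04

    r, p = pearsonr(x, y)
    print(f"n = {n}: r = {r:.3f}, r^2 = {r * r:.4f} "
          f"(about {r * r:.2%} shared variance), p = {p:.2e}")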

Hope this helps.