
You want to estimate the difference in grade point averages between two groups of college students, accurate to within 0.2 grade point, with probability approximately equal to 0.95.

If the standard deviation of the grade point measurements is approximately equal to 0.6,
how many students must be included in each group? (Assume that the groups will be of equal size.)

2007-12-16 12:55:21 · 2 answers · asked by Anonymous in Science & Mathematics Mathematics

2 answers

Let X be one sample of size n
Let Y be another sample of size m

We know that the margin of error for the difference of two means is:

z * sqrt(σx² / n + σy² / m)

The z value for a 95% CI is 1.96.

We assume that n = m in this case, and set the margin of error equal to 0.2:

1.96 * sqrt( 0.6² / n + 0.6² / n ) = 0.2
1.96 * sqrt( 2 * 0.6² / n ) = 0.2
sqrt( 0.72 / n ) = 0.10204
0.72 / n = 0.010412
n = 69.15

n can only be integer valued, so round up.

You need to take 70 samples from each group.
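The calculation above can be sketched in Python. The function name and its argument names are just for illustration; the formula is the standard equal-group sample size n = 2(zσ/B)², rounded up:

```python
import math

def sample_size_two_means(sigma: float, bound: float, z: float = 1.96) -> int:
    """Smallest equal per-group n such that z * sqrt(2 * sigma**2 / n) <= bound."""
    n = 2 * (z * sigma / bound) ** 2
    return math.ceil(n)  # n must be an integer, so round up

# Grade-point example: sigma = 0.6, bound = 0.2, 95% confidence (z = 1.96)
print(sample_size_two_means(0.6, 0.2))  # → 70
```

Rounding up rather than to the nearest integer guarantees the margin of error stays within the requested bound.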

2007-12-20 10:37:14 · answer #1 · answered by Merlyn 7 · 0 0

Wikipedia is your friend. Check out:
http://en.wikipedia.org/wiki/Standard_error_(statistics)

Note that if you want the error in the difference to be less than 0.2, then the error in each mean should be under 0.1 (though in theory, you can be a bit looser).

2007-12-18 02:05:27 · answer #2 · answered by simplicitus 7 · 0 0
