Say you have a drawer with 5 red socks and 5 green socks, so the true mean is 0.5 red.
You pick one sock at random; it's either red (1.0) or green (0.0). Either way, your pick is off from the true mean by 0.5.
Now suppose you put that sock back and pick a second sock at random. The two socks together have four possibilities:
RR - 0.25 chance, average 1.0, off by 0.5
RG - 0.25 chance, average 0.5, off by 0.0 (right on)
GR - 0.25 chance, average 0.5, off by 0.0 (right on again)
GG - 0.25 chance, average 0.0, off by 0.5
The average deviation of the two-sock average from the true mean works out to 0.25 · 0.5 + 0.50 · 0.0 + 0.25 · 0.5 = 0.25, half of what it was with one pick.
The more socks you pick, the closer the average pick tends to be to the true mean. (And if you instead drew all ten socks without putting any back, you'd get exactly 5R, 5G every time, with no variation at all.)
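
A quick simulation makes this concrete (a minimal Python sketch, not part of the original answer; the drawer setup and trial counts are just illustrative):

import random

# Drawer of 5 red (1.0) and 5 green (0.0) socks; the true mean is 0.5.
drawer = [1.0] * 5 + [0.0] * 5

def avg_deviation(n_picks, trials=100000):
    # Average absolute distance of the sample mean from 0.5,
    # drawing n_picks socks with replacement.
    total = 0.0
    for _ in range(trials):
        mean = sum(random.choice(drawer) for _ in range(n_picks)) / n_picks
        total += abs(mean - 0.5)
    return total / trials

for n in (1, 2, 10, 100):
    print(n, round(avg_deviation(n), 3))
# Typically prints roughly: 0.5 for n=1, 0.25 for n=2,
# 0.123 for n=10, and 0.04 for n=100.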
2007-07-02 13:23:21 · answer #1 · answered by Steve A
Simply examine the formula for the variance of the sample mean, σ²/n.
The sample size n is in the denominator.
Larger denominator = smaller result.
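
A minimal numerical check of that formula (a Python sketch assuming the standard result Var(x̄) = σ²/n for the mean of n independent draws; the trial count is arbitrary):

import random
import statistics

sigma_sq = 0.25  # variance of one red/green pick coded as 1/0 with p = 0.5

for n in (1, 2, 10, 100):
    theory = sigma_sq / n  # larger n in the denominator -> smaller variance
    # Empirical variance of the sample mean over many repeated samples
    means = [sum(random.random() < 0.5 for _ in range(n)) / n
             for _ in range(50000)]
    print(n, theory, round(statistics.pvariance(means), 4))

Both the theoretical and the empirical columns shrink together as n grows, matching the formula.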
2007-07-02 13:53:07 · answer #2 · answered by Emilou
Look at the formula.
The differences of your observations from the mean stay pretty much the same size, but the number of observations grows every time you introduce a new one. Dividing a stable spread by an ever-larger sample size is why the result gets smaller.
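
That's easiest to see with the standard error of the mean, s/√n (a hypothetical Python sketch; the normal distribution and sample sizes are just for illustration):

import math
import random
import statistics

random.seed(1)
data = []
for n in (10, 100, 1000, 10000):
    # Grow the same sample by introducing new observations
    data.extend(random.gauss(0, 1) for _ in range(n - len(data)))
    s = statistics.stdev(data)      # spread of the observations: stays near 1
    se = s / math.sqrt(len(data))   # standard error of the mean: keeps shrinking
    print(len(data), round(s, 3), round(se, 4))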
2007-07-02 13:47:45 · answer #3 · answered by Alberd