
I am conducting an investigation in which I am measuring the period of a pendulum. I know that my timing equipment has an accuracy of plus or minus 33 milliseconds. The pendulums I am testing usually have a period on the order of one or two seconds. How do I figure out how many trials I have to do with each pendulum to get a mean value within x% accuracy?

2006-11-02 19:36:11 · 4 answers · asked by Noachr 2 in Science & Mathematics Mathematics

4 answers

You said that your timing equipment has an accuracy of +- 33 milliseconds. In statistics, we can treat that +- 33 milliseconds as a 95% confidence interval for a single timing measurement. Since the critical value of the normal distribution for a 95% confidence interval is 1.96, your standard deviation is about 0.0168367 s. Working shown below:

1.96 * stddev = 0.033
stddev = 0.033/1.96 = 0.0168367

So, using the Central Limit Theorem, the standard deviation of the estimated mean is stddev/sqrt(n), where n is the number of measurements and sqrt is the square root.

So the accuracy (the half-width of the 95% confidence interval) of the mean period would be
1.96 * stddev/sqrt(n)
= 1.96 * 0.0168367 / sqrt(n)
= 0.033 / sqrt(n)

So, as you can see, the uncertainty shrinks by a factor of 1/sqrt(n) as you add measurements.

So, if you do 2 measurements, you get an accuracy of plus or minus 0.033/sqrt(2) = 0.023 s, i.e. plus or minus 23 milliseconds.

If you do 3 measurements, you get an accuracy of plus or minus 0.033/sqrt(3) = 0.019 s, i.e. plus or minus 19 milliseconds.

To get within x% of a period T (in seconds), set 0.033/sqrt(n) <= (x/100) * T and solve for n, which gives n >= (3.3/(x*T))^2. See the sketch below.
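
Here's a quick Python sketch of that last step (assuming, as above, that the +/- 33 ms figure is a 95% interval; the function name trials_needed and the example numbers are just illustrative):

import math

def trials_needed(period_s, percent, half_width_s=0.033):
    # Smallest whole n with half_width_s / sqrt(n) <= (percent / 100) * period_s
    target = (percent / 100.0) * period_s
    return math.ceil((half_width_s / target) ** 2)

# Example: a 1.5 s pendulum measured to within 1% (i.e. +/- 15 ms)
print(trials_needed(1.5, 1.0))  # prints 5, since 0.033/sqrt(5) is about 0.0148 s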

2006-11-02 20:22:44 · answer #1 · answered by ali 6 · 0 0

4

2006-11-02 19:45:20 · answer #2 · answered by VIDYADHARA B 2 · 0 1

Calculate the moments.

2006-11-02 19:53:38 · answer #3 · answered by ag_iitkgp 7 · 0 0

minimum 3 trials,
maximum up to you

2006-11-02 19:42:30 · answer #4 · answered by Anonymous · 0 0
