Accuracy and consistency are important in error analysis of an experiment because "accuracy" represents your systematic errors in the experiment, while "consistency" represents your random errors.
Example: You need to know how long it takes an object to fall precisely 1 meter to the nearest 0.01 seconds. If you time it with a stopwatch 10 times, you will get a variety of different answers, all perhaps a few tenths of a second different from each other. These may be random errors (due to lack of consistency in measuring time) but they may also be systematic errors (due to lack of accuracy in your stopwatch or a mistake in measuring the 1 meter height).
You can reduce your experiment's random errors by simply making more measurements and taking the average of those measurements. The "consistency" or amount of random error allows a scientist to determine how many times you must repeat the experiment or measurement to get a good estimate of the average value.
On the other hand, no matter how many measurements you make, your systematic error will be the same. These systematic errors affect your "accuracy" because all measurements will be affected the same way. You have to correct systematic errors with techniques such as calibrating your instruments carefully (such as the stopwatch) and eliminating any errors in the experiment's design (such as measuring the 1 meter drop height incorrectly).
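If it helps, here is a quick Python sketch of this idea. The numbers (true fall time, bias, jitter) are made up for illustration: averaging more readings shrinks the random scatter, but the average converges to the biased value, not the true one.

```python
import random
import statistics

random.seed(42)

TRUE_TIME = 0.452       # hypothetical true fall time, in seconds
SYSTEMATIC_BIAS = 0.05  # e.g. a stopwatch that always reads 0.05 s high
RANDOM_SPREAD = 0.10    # reaction-time jitter (standard deviation)

def measure():
    # each reading = true value + fixed bias + random jitter
    return TRUE_TIME + SYSTEMATIC_BIAS + random.gauss(0, RANDOM_SPREAD)

for n in (10, 100, 10000):
    avg = statistics.mean(measure() for _ in range(n))
    print(f"n={n:>6}: average = {avg:.3f} s (true value {TRUE_TIME} s)")
```

With a large n the average settles near 0.502 s (true value plus bias), showing that no amount of averaging removes the systematic error.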
2006-07-03 11:55:35 · answer #1 · answered by volume_watcher
Accuracy is how close to perfection the action is.
Consistency (or precision) is how repeatable or close together the action is.
There are many examples of this (mostly involving a target), but in my experience they still leave things muddled.
A great example is taking science tests...
Let's say the scores for your science tests for the year are as follows: 94, 85, 80, 99, and 92. I would say that these tests (and your knowledge) are accurate: all of the scores are high (close to perfection), though they are spread out and not especially consistent.
On the other hand, let's say your scores are the following: 45, 46, 45, 44, and 45. I would say that these tests (and your knowledge) are very consistent (or precise, or repeatable): all within 1 of 45. However, they are not very accurate (not close to perfection).
Accuracy is closeness to perfection.
Consistency (or precision) is being able to repeat the same results, or closeness to each other.
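You can put numbers on both ideas with the two score sets above: the mean measures accuracy (closeness to a perfect 100) and the standard deviation measures consistency (closeness of the scores to each other). A minimal Python sketch:

```python
import statistics

high_scores = [94, 85, 80, 99, 92]  # high (accurate) but spread out
flat_scores = [45, 46, 45, 44, 45]  # tightly clustered (consistent) but low

for name, scores in (("accurate set", high_scores), ("consistent set", flat_scores)):
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    print(f"{name}: mean = {mean:.1f} (accuracy), stdev = {spread:.2f} (consistency)")
```

The first set has a high mean (90) but a large spread; the second has a low mean (45) but a spread under 1.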
2006-07-03 13:57:41 · answer #2 · answered by kmclean48
I think volume_watcher above has answered the question from the experimental perspective very well.
These two terms are also important in theoretical physics. A theory is internally consistent if it contains no paradoxes or contradictions. It is externally consistent if it fits with other accepted theory. And it is accurate if it makes testable predictions and experiments can confirm these predictions.
Consistency and accuracy are minimum requirements for a physical theory.
Hope this helps!
The Chicken
2006-07-03 19:51:42 · answer #3 · answered by Magic Chicken
accuracy = doing the right thing
consistency = doing the same thing
Accuracy is best, but consistency is second best: if a person or machine keeps making the same error, it can be corrected further down the line.
E.g. an accurate clock is best, but a clock that's consistently 5 minutes ahead is useful too, while a clock that can be 5 minutes ahead or behind is totally useless.
2006-07-03 13:49:54 · answer #4 · answered by Anonymous
I learned it with the dartboard analogy:
If all the darts are in a cluster, no matter where that cluster is, that is consistency. If the darts are close to the bullseye, that's accuracy.
Ideally, you'd want both. In science publications, you'll see error bars on almost every chart or graph. These show the limits within which the authors can claim their data is accurate.
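As a rough sketch (with made-up readings), one common way to compute such an error bar is the standard error of the mean: the spread of repeated measurements divided by the square root of how many you took.

```python
import statistics

# five repeated measurements of the same quantity (hypothetical data)
readings = [9.78, 9.82, 9.80, 9.85, 9.79]

mean = statistics.mean(readings)
sem = statistics.stdev(readings) / len(readings) ** 0.5  # standard error of the mean

print(f"{mean:.3f} +/- {sem:.3f}")  # the value with its error bar
```

Note how the error bar shrinks as you take more readings, which is the dartboard cluster tightening around its center.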
2006-07-03 14:03:34 · answer #5 · answered by Anonymous
Consistency is getting the same value of a measurement across a number of trials.
Accuracy is the closeness with which a measurement tends toward the actual value being measured.
A consistent instrument need not be accurate: it can show the same error in measurement consistently.
2006-07-03 13:57:20 · answer #6 · answered by prash