Without knowing more about what actually went on in the lab, I can only talk in general terms.
There are two types of errors: systematic and random. Suppose your pulse-rate counter is biased: its zero is not really zero but reads, say, +10 or −5. Then every pulse rate it measures is off by that fixed amount, a zero error. This is a systematic error.
Suppose your pulse-rate counter is also not very precise, and measures the same pulse rate of, say, 100 as 97, 102, 98, 101, and so on. You are now introducing a random error into your results.
Random errors are caused by poor repeatability. Systematic errors are caused by zero errors and other consistent, fixed offsets, such as calibration errors.
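The distinction can be seen in a quick simulation. This is just a sketch using Python's `random` module; the true rate of 100, the +10 bias, and the noise spread are all made-up values for illustration:

```python
import random

random.seed(1)  # reproducible sketch

TRUE_RATE = 100.0   # assumed true pulse rate
BIAS = 10.0         # systematic (zero) error: counter reads +10 high
NOISE_SD = 2.0      # spread of the random error

def reading():
    """One counter reading: true value + fixed bias + random noise."""
    return TRUE_RATE + BIAS + random.gauss(0, NOISE_SD)

samples = [round(reading(), 1) for _ in range(4)]
print(samples)
# The scatter among the readings is the random error;
# the overall shift from 100 up toward 110 is the systematic error.
```

Repeated runs scatter differently (random error) but always sit above the true value by roughly the same amount (systematic error).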
Random errors get averaged out by multiple measurements. That is why an experiment is usually repeated several times, say 3 or 5, and the mean of the measurements is taken. (An odd number of trials only matters if you take the median; the mean works for any number of repetitions, odd or even, and improves as the number of trials grows.)
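You can check the averaging-out effect numerically. A minimal sketch, again assuming a true rate of 100 and zero-mean noise (no bias this time):

```python
import random
import statistics

random.seed(42)  # reproducible sketch

TRUE_RATE = 100.0  # assumed true pulse rate

def noisy_reading():
    """One reading with random error only (zero-mean Gaussian noise)."""
    return TRUE_RATE + random.gauss(0, 2.0)

single = noisy_reading()
mean_of_5 = statistics.mean(noisy_reading() for _ in range(5))
mean_of_1000 = statistics.mean(noisy_reading() for _ in range(1000))

# The error of the mean shrinks as the number of trials grows
# (roughly as 1/sqrt(n) for independent noise).
print(abs(single - TRUE_RATE))
print(abs(mean_of_5 - TRUE_RATE))
print(abs(mean_of_1000 - TRUE_RATE))
```

With 1000 trials the mean typically lands within a few hundredths of the true value, even though individual readings are off by a couple of units.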
Systematic errors are not removed by averaging, and in fact they propagate through the experiment if it has several measurement stages.
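To see that averaging does not touch a systematic error, and that the error carries through to derived quantities, here is a sketch with an assumed +10 zero error:

```python
import random
import statistics

random.seed(0)  # reproducible sketch

TRUE_RATE = 100.0  # assumed true pulse rate
BIAS = 10.0        # zero error: counter reads +10 high (assumed value)

def biased_reading():
    return TRUE_RATE + BIAS + random.gauss(0, 2.0)

mean_of_1000 = statistics.mean(biased_reading() for _ in range(1000))
# The random noise averages away, but the mean still sits near 110,
# not 100 — the +10 bias survives any amount of averaging.
print(mean_of_1000)

# Propagation: a quantity computed from the biased reading inherits
# a scaled-up version of the bias. E.g. total beats over 5 minutes:
beats_in_5_min = 5 * biased_reading()  # bias contributes about 5 * 10 = 50
print(beats_in_5_min)
```

The only cure is to find and subtract the bias (recalibrate), which is why calibration checks matter more than extra repetitions when a systematic error is suspected.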
2007-03-10 15:05:42 · answer #1 · answered by Swamy 7
Questions such as this are always *so* much easier to answer if there's a bit of data on what your experiment was, how it worked (or was supposed to have worked), what you were measuring, how you measured it, what kind(s) of equipment you used, etc. etc.
OTOH, even the most ill-conceived, poorly planned, and fumblingly executed experiment can, so long as it's well documented, be of use. If nothing else, it can always serve admirably as a bad example.
Doug
2007-03-10 08:07:22 · answer #2 · answered by doug_donaghue 7