Say a meter is connected to a sensor. If the sensor has +/-0.5% accuracy and the meter has +/-0.25%, then what's the readout's total accuracy?

2006-12-27 08:15:35 · 4 answers · asked by ghmag 1 in Science & Mathematics Engineering

4 answers

Total accuracy is the cumulative effect of all the inaccuracies -- basically you multiply the worst possible case of each element together to get the worst possible case for the system:

Minimum reading = actual value multiplied by minimum sensor response to that value, multiplied by minimum meter response to that sensor output:
OutMin = K * (1-0.005)*(1-0.0025), where K = the actual value.
OutMin = K * 0.995*0.9975
OutMin = K * 0.9925125

Maximum reading = actual value multiplied by maximum sensor response to that value, multiplied by maximum meter response to that sensor output:
OutMax = K * (1+0.005)*(1+0.0025)
OutMax = K * (1.005)*(1.0025)
OutMax = K * 1.0075125

So total accuracy is +0.75125% / -0.74875%
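
A quick Python sketch of the same worst-case arithmetic (the function and variable names are mine, just for illustration):

```python
# Minimal sketch of the worst-case stack-up above.
def worst_case_bounds(actual, sensor_tol, meter_tol):
    """Bounds on the readout for fractional tolerances (0.005 = 0.5%)."""
    out_min = actual * (1 - sensor_tol) * (1 - meter_tol)
    out_max = actual * (1 + sensor_tol) * (1 + meter_tol)
    return out_min, out_max

lo, hi = worst_case_bounds(actual=1.0, sensor_tol=0.005, meter_tol=0.0025)
print(lo, hi)  # ~0.9925125 and ~1.0075125, i.e. -0.74875% / +0.75125%
```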

That's for a theoretical system, from an engineering point of view -- what defenderoftheunderground (above me) said is correct for a real-world system from a practical point of view, because many sensors and meters actually have different degrees of accuracy (or rather inaccuracy...) depending on the measurement range and other factors, so the values may differ for each operating condition. This is particularly true of auto-ranging meters and of sensors that produce logarithmic outputs or operate over several decades of range.

It becomes much easier to look at the specified performance of the entire system than to try to calculate all of the intermediate sources of error and how they affect the readings. In many systems a meter may be chosen to have an error equal but opposite to the sensor's, so as to provide a certain level of self-correction within the system (though in today's digital world this approach is becoming far less common than simply having a computer in the system inject lies... errr, I mean "corrections"... of its own to offset the sensor errors).
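
A toy numeric example of that equal-but-opposite pairing (the gain numbers here are invented purely for illustration):

```python
# A sensor that reads 0.5% high paired with a meter deliberately
# trimmed to read 0.5% low: the systematic parts nearly cancel.
sensor_gain = 1.005   # sensor's systematic error: +0.5%
meter_gain = 0.995    # meter trimmed to -0.5%
print(sensor_gain * meter_gain)  # 0.999975 -> only ~-0.0025% residual
```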

2006-12-27 09:00:00 · answer #1 · answered by Mustela Frenata 5 · 1 0

About +/-0.75%

For example, let's say the true value of the quantity being measured is 1.00000. A 0.5% sensor will then respond in the range of 0.99500 to 1.00500. Applied to a 0.25% meter, the reported value will fall somewhere in the range of

0.99500 * 0.9975 = 0.9925125

and

1.005 * 1.0025 = 1.0075125

which is approximately +/-0.75%. In other words, error tolerances in series (as in this case) approximately sum.

I guess the exact equations for this example would be:

+error% = 100 * ((1 + sensor%/100) * (1 + meter%/100) - 1)
-error% = 100 * (1 - (1 - sensor%/100) * (1 - meter%/100))

Sound good?
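
Here's a quick Python check of those equations against the simple sum (the function name is mine, just for illustration):

```python
# Exact worst-case errors, in percent, for two cascaded tolerances.
def stacked_error_pct(sensor_pct, meter_pct):
    plus = 100 * ((1 + sensor_pct / 100) * (1 + meter_pct / 100) - 1)
    minus = 100 * (1 - (1 - sensor_pct / 100) * (1 - meter_pct / 100))
    return plus, minus

print(stacked_error_pct(0.5, 0.25))  # ~(0.75125, 0.74875) vs. the 0.75 sum
```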

2006-12-27 17:14:14 · answer #2 · answered by Gary H 6 · 0 0

Contact the manufacturers of the meter and the sensor; they should be able to give you this info up front. The service or operations guide for the meter may also tell you how to calculate it.

2006-12-27 16:23:59 · answer #3 · answered by Anonymous · 1 0

5%

2006-12-27 16:18:30 · answer #4 · answered by Silly 3 · 0 1
