I have a measurement device that I want to calibrate. From observation, it appears that a quadratic calibration equation will be needed, i.e., y = ax^2 + bx + c, where y is the value to be displayed, x is the raw value produced by the device, and a, b, & c are the calibration coefficients. The plan is to simultaneously take 'n' measurements from the device (a list of 'x' values) and from a calibration standard (a list of 'y' values), approximately uniformly distributed over the device's measurement range, and then to use least-squares regression to compute a, b, & c. The minimum value of 'n' is clearly 3. Intuitively, increasing 'n' will reduce the error of the calibration process; however, I'm hoping to get some quantitative guidance in choosing 'n'.
asked 2007-01-09 12:19:52 by Calli Braytor · 2 answers
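For illustration only, here is a minimal sketch (not from the original post) of the fitting procedure described above, using Python/NumPy. The "true" coefficients, the noise level sigma, and the measurement range are all assumptions made up to generate synthetic data; the point is simply to show a quadratic least-squares fit over n evenly spaced points and how the estimated coefficient uncertainties behave as n grows.

```python
# Sketch of the described calibration fit: y = a*x**2 + b*x + c by least squares.
# All numeric values below (true coefficients, sigma, range) are illustrative
# assumptions, not values from the original question.
import numpy as np

rng = np.random.default_rng(0)

a_true, b_true, c_true = 0.002, 1.05, -0.3   # hypothetical "true" calibration
sigma = 0.5                                  # assumed noise on the standard's readings
x_min, x_max = 0.0, 100.0                    # assumed device measurement range

def calibrate(n):
    """Take n paired (x, y) readings spread over the range and fit a, b, c."""
    x = np.linspace(x_min, x_max, n)                    # raw device values
    y = a_true * x**2 + b_true * x + c_true \
        + rng.normal(0.0, sigma, size=n)                # calibration-standard values
    # polyfit returns coefficients highest power first; cov=True also returns
    # the coefficient covariance matrix (this scaling needs n > deg + 2).
    coeffs, cov = np.polyfit(x, y, deg=2, cov=True)
    return coeffs, cov

for n in (5, 10, 30, 100):
    coeffs, cov = calibrate(n)
    # Square roots of the covariance diagonal give rough standard errors
    # for a, b, c; watching them shrink with n is one way to pick n.
    se = np.sqrt(np.diag(cov))
    print(f"n={n:4d}  a,b,c={coeffs.round(4)}  std.err={se.round(4)}")
```

Running a sweep like this against the actual device noise level is one practical way to see how quickly the coefficient standard errors fall off as n increases.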