General Metrology Forum

ACCURACY DIFFERENCE BETWEEN MASTER CALIBRATION EQUIPMENT AND UNDER CALIBRATION EQUIPMENT

Re: ACCURACY DIFFERENCE BETWEEN MASTER CALIBRATION EQUIPMENT AND UNDER CALIBRATION EQUIPMENT
by James Jenkins - Monday, February 15, 2010, 9:15 AM
 
That depends on several things. Basically, it all comes down to two questions:

1. How close are the measured values to the theoretical 'true' or industry consensus values? (In calibration, this is the trueness, i.e. the assessed bias, of the device being calibrated.) This is best estimated using measurement uncertainty analysis methods (see the sketch after this list).

2. How close do they need to be? This is best dealt with using measurement-based decision risk analysis methods.
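To give a feel for the first question, here is a minimal sketch of a GUM-style uncertainty budget for a single calibration point. The contribution names and values are placeholders I made up for illustration; a real budget would use your own process data.

```python
# Minimal sketch of a Type B uncertainty budget for one calibration point.
# All numbers are illustrative placeholders, not data from this thread.
import math

# Standard uncertainty contributions (same unit as the measurand, e.g. mV)
contributions = {
    "reference standard (from its cal certificate)": 0.010,
    "resolution of unit under test": 0.005,
    "repeatability (std dev of the mean)": 0.008,
    "environmental effects": 0.004,
}

# Combine by root-sum-of-squares (assumes uncorrelated inputs)
u_combined = math.sqrt(sum(u**2 for u in contributions.values()))

# Expanded uncertainty at approximately 95 % confidence (coverage factor k = 2)
k = 2
U_expanded = k * u_combined

print(f"Combined standard uncertainty: {u_combined:.4f}")
print(f"Expanded uncertainty (k=2):    {U_expanded:.4f}")
```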

As a good general rule in calibration, though, our references are recommended to be at least 4 times more accurate, by specification, than the devices being calibrated. This ratio is commonly known as the TAR (Test Accuracy Ratio). It is ONLY a general rule, with different people/sources recommending different minimum TARs; I have heard recommendations from as low as 3:1 to as high as 10:1. Obviously, the more accurate our reference device in any calibration, the better.
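Checking that guideline is just a ratio of the two accuracy specifications. A quick sketch, with made-up example specs:

```python
# Minimal sketch of a Test Accuracy Ratio (TAR) check against the common
# 4:1 guideline. The specs below are invented example values.
uut_tolerance = 0.10       # accuracy spec of the device being calibrated (e.g. +/- 0.10 V)
reference_accuracy = 0.02  # accuracy spec of the reference standard (e.g. +/- 0.02 V)

tar = uut_tolerance / reference_accuracy
print(f"TAR = {tar:.1f}:1")

if tar >= 4:
    print("Meets the common 4:1 guideline")
else:
    print("Below 4:1 -- consider a better reference or a decision risk analysis")
```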

But keep in mind that any comparison is better than no comparison. On occasion, with customer approval, labs may even perform a calibration at a 1:1 TAR. At that low a Test Accuracy Ratio, though, any statement of compliance with stated accuracy specifications comes with an extreme risk of being incorrect. We call this risk either "False Accept Risk" or "False Reject Risk", depending on the outcome of the comparison process. A guard band can be applied to minimize either the accept risk or the reject risk; it is a trade of one for the other.
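As a rough illustration of the guard-banding idea, here is a minimal sketch of one simple approach: pull the acceptance limits in from the tolerance limits by the expanded uncertainty of the measurement process. This is only one of several guard-banding methods, and every number below is an assumption for the example.

```python
# Minimal sketch of a simple guard band: acceptance limits are the tolerance
# limits pulled in by the expanded uncertainty U of the calibration process.
# This reduces false accept risk at the cost of more false rejects.
nominal = 10.000   # nominal value of the calibration point
tolerance = 0.050  # UUT tolerance, i.e. +/- 0.050 about nominal
U = 0.020          # expanded uncertainty (k=2) of the measurement process

lower_accept = (nominal - tolerance) + U   # guard-banded acceptance limits
upper_accept = (nominal + tolerance) - U

measured = 10.042  # example reading taken during calibration

if lower_accept <= measured <= upper_accept:
    print("Pass: inside the guard-banded acceptance limits")
elif (nominal - tolerance) <= measured <= (nominal + tolerance):
    print("Reading falls in the guard band: accepting it would carry a high false accept risk")
else:
    print("Fail: outside the tolerance limits")
```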

As a general rule, the lower the TAR, or preferably the TUR (Test Uncertainty Ratio), the greater the risk of a false determination that the device is either within tolerance or out of tolerance.

NOTE: A TUR is preferred over a TAR as the TAR does not include process uncertainties.
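To make that difference concrete, here is a small sketch comparing TAR and TUR for the same point. The numbers are assumed for illustration; the point is that the full process uncertainty is larger than the reference's specification alone, so the TUR comes out lower than the TAR.

```python
# Minimal sketch contrasting TAR and TUR for the same calibration point.
# TAR uses only the reference's accuracy specification; TUR uses the
# expanded uncertainty of the whole measurement process, which also folds
# in resolution, repeatability, environment, etc. Numbers are illustrative.
uut_tolerance = 0.10       # +/- tolerance of the unit under test
reference_accuracy = 0.02  # +/- accuracy spec of the reference alone
U_process = 0.035          # expanded uncertainty (k=2) of the full process

tar = uut_tolerance / reference_accuracy
tur = uut_tolerance / U_process

print(f"TAR = {tar:.1f}:1")   # looks comfortable on paper
print(f"TUR = {tur:.1f}:1")   # the ratio that actually reflects the risk
```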