Measurement Uncertainty Forum

Resolution

 
Re: Resolution
by James Jenkins - Saturday, February 11, 2012, 7:13 PM
 
First, for clarity, it is important to understand that ALL measurements suffer from uncertainty, whether or not they are repeatable. Non-repeatability is just one of the major contributors to consider when evaluating or estimating the measurement uncertainty associated with a measurement result.

With the above understood, I will continue the discussion of non-repeatability of a measurement result and how it affects the combined uncertainty of measurement.

If the calibration actually provided repeated measurements along with the average, we are talking about two different kinds of results: individual results and averaged results. The uncertainty due to non-repeatability is different for each.

Consider this: when the normal process for a measurement involves just a single reading, the uncertainty of that measurement is affected by the process repeatability, which is unknown. To deal with this unknown, we perform an 'experiment', in other words a special, planned, non-routine event in which we repeat the measurement several times to learn the approximate variation in the results for a specific measurement scenario. The standard deviation of this series of measurements is used as the estimate of the standard uncertainty due to non-repeatability. It represents the uncertainty of single measurement values when using the stated process in the stated scenario.
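As a minimal sketch of that experiment in Python (the readings below are invented for illustration):

    # Repeatability experiment: the sample standard deviation of several repeated
    # readings is taken as the standard uncertainty, due to non-repeatability,
    # of a SINGLE reading made with this process in this scenario.
    import statistics

    readings = [23.1, 23.0, 23.1, 23.2, 23.1, 23.0, 23.1, 23.1, 23.2, 23.1]  # hypothetical
    u_single = statistics.stdev(readings)  # same calculation as Excel's STDEV()
    print(f"standard uncertainty of a single reading: {u_single:.4f} C")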

However, when we have a series of measurement results from a calibration, we can average those results to reduce the effect of variation, meaning we can lower the uncertainty of the final result (the reported average value).

The math for each scenario is as follows (again, this only addresses the 'non-repeatability' contributor of measurement uncertainty and is to be combined with other process and systematic uncertainties):
  1. Single values suffer the entire spread of possible values as their possible variation about the mean. The standard deviation of the data set is the metric we use as the standard uncertainty due to non-repeatability. In Microsoft Excel, enter "=STDEV(B11:B20)" (minus quotes), where B11 through B20 contain your ten repeated measurements.
  2. Averaged values suffer only the spread of means for the sample size used to derive the mean. The larger the sample size from which the average is calculated, the smaller the spread. Again, in Microsoft Excel, once you have the standard deviation of the data set, enter "=B21/SQRT(10)", where B21 contains the standard deviation and you have averaged 10 measurements to get the reported mean value.

The result from 1 applies to individual values, while the result from 2 applies to means of 10 samples.
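To see how point 2 plays out, here is a small sketch (the standard deviation is an assumed placeholder value):

    # The non-repeatability contribution of a reported mean shrinks as 1/sqrt(n).
    import math

    s = 0.067  # hypothetical standard deviation from the repeatability experiment
    for n in (1, 4, 10, 25):
        print(f"mean of {n:2d} readings: u = {s / math.sqrt(n):.4f} C")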

So in summary: if the report provides a mean, the reported uncertainty should logically be computed and reported as part of the expanded combined measurement uncertainty for the resulting mean value, meaning the standard deviation is divided by the square root of the number of samples and RSS'd with the rest of the major uncertainty contributors (a small sketch of this combination follows the list), such as:

  1. Traceability Uncertainty of the reference(s)
  2. Drift and Allowed Deviation between calibrations of the reference(s) providing traceability.
  3. Resolution(s) of measurement data
  4. Process Non-Repeatability (THE ONE BEING DISCUSSED)
  5. Operator Bias (if applicable)
  6. Environmental Effects, as applicable.
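As a sketch of that combination (the contributor values are invented placeholders, all expressed as standard uncertainties in C):

    # Combine standard uncertainties by root-sum-square (RSS), then expand with k = 2.
    import math

    contributors = {
        "traceability":      0.050,  # hypothetical values for illustration only
        "drift":             0.030,
        "resolution":        0.029,
        "non_repeatability": 0.020,  # s / sqrt(n) when a mean is reported
    }
    u_combined = math.sqrt(sum(u ** 2 for u in contributors.values()))
    U_expanded = 2 * u_combined  # coverage factor k = 2 (approximately 95 %)
    print(f"u_c = {u_combined:.4f} C, U = {U_expanded:.4f} C")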

P.S. I should also discuss the obvious question that arises when I look at your sample data. First, some facts. The rounding error uncertainty limit = 0.5 * resolution. Rounding error and non-repeatability, when evaluated from potentially rounded values, overlap somewhat. This is typically not a concern when the non-repeatability is two or three counts of spread; in that case, even when overlapped, including both is a non-issue, as the non-repeatability will completely overshadow the effect of the rounding error given a 4:1 ratio between the two values. (When two numbers are combined in RSS and one is four times the other or greater, the smaller value nearly disappears in the result. For example, 1 RSS'd with 0.25 = SQRT(1^2 + 0.25^2) = 1.03.)
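A quick check of that 4:1 rule of thumb:

    # The smaller of two RSS'd values nearly vanishes once the ratio reaches about 4:1.
    import math

    for small in (0.5, 0.25, 0.1):
        print(f"RSS(1, {small}) = {math.hypot(1.0, small):.3f}")
    # Prints 1.118, 1.031, 1.005: at a 4:1 ratio the smaller value adds only ~3 %.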

However, in your case, the question that is so obvious to me is this: is the spread of one count, as seen in your data, actually a one-count spread of non-repeatability, OR does your device estimate the measurand at a value between two counts, so that what we are seeing is a very small non-repeatability being rounded back and forth? If the device is typically stable in its output estimate of temperature, one might be tempted to leave either resolution or non-repeatability off the list of combined contributors as an obvious "double-count".
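A minimal simulation of that "rounded back and forth" scenario (the true value and noise level are pure assumptions):

    # A stable measurand sitting near a rounding boundary can masquerade as a
    # one-count spread of non-repeatability on a 0.1 C resolution display.
    import random

    random.seed(1)

    def displayed(x, resolution=0.1):
        return round(round(x / resolution) * resolution, 1)

    true_value = 23.25  # hypothetical: halfway between the 23.2 and 23.3 counts
    noise_sd = 0.005    # hypothetical: very small real non-repeatability
    readings = [displayed(true_value + random.gauss(0.0, noise_sd)) for _ in range(10)]
    print(readings)     # flickers between 23.2 and 23.3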

Next ISSUE: You can NOT have a calibration uncertainty of 0.16% of value for an indicated value of 23.0 C when the resolution of the result is 0.3 C. With rounding uncertainty alone we have 0.3/SQRT(12), or 0.087 C. That value times 2 (expanded) gives 0.17 C, or about 0.75% of value.

When doing ISO 17025 assessments, one quick trick for determining whether a reported expanded uncertainty is incorrectly undersized is to compare the value to 0.6 times the resolution of the indicating device(s) used in the calibration. Where the indicating device is the device under calibration, the quality of the calibration is obviously limited by the potential rounding error of the device being calibrated.

Consider: if I have a temperature bath set at 23.0 C which is known to be 23 C +/- 0.0000001 C, and I am calibrating a digital thermometer that resolves measurements to 0.1 C, and I note that at 23 C the device under calibration indicates a stable 23.0 C, the most I can assume is that the device believes the 23 C bath to be somewhere between 22.95 and 23.05. That still leaves an uncertainty limit of 0.05 C, or 0.05/SQRT(3) (the same as a full count of 0.1/SQRT(12) in the example above), or 0.029 C standard uncertainty. This times 2 for expanded uncertainty = roughly 0.06 C, or 0.6 times the resolution of 0.1 C.
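The arithmetic behind that 0.6-times-resolution check, as a sketch:

    # Expanded (k = 2) rounding-only uncertainty floor implied by a display's resolution.
    import math

    def resolution_floor(resolution, k=2):
        u = resolution / math.sqrt(12)  # standard uncertainty of rounding to one count
        return k * u                    # expanded; about 0.58 * resolution for k = 2

    print(f"{resolution_floor(0.3):.3f} C")  # ~0.173 C, ~0.75 % of a 23.0 C value
    print(f"{resolution_floor(0.1):.3f} C")  # ~0.058 C, the ~0.6 x resolution rule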

Not to assume that you were thinking repeatability is uncertainty, but the idea that measurement repeatability alone equals uncertainty is a fallacy that just won't go away easily. Measurement uncertainty includes everything that contributes significantly to the delta between truth and reported values. The true value is unknown and indeterminate by its very nature, repeatable or not. We like to say things like "at 23 C your device indicates 23.3 C", but truth be known, we can't state either value and be telling the whole truth. Uncertainty statements return honesty to metrology; without them, "traceability" is lost and measured values cannot be relied upon at all for measurement-based decisions.

Uncertainty estimation can be complex or simple. When using a tape measure around the house, I have always assumed about a 16th or a 32nd of an inch, depending on the situation. Meaning, if I am measuring an item to fit into an area and I get the same value for both, or values less than a 32nd of an inch apart, I can't confidently say whether it will fit, due to uncertainty; however, if I get a half-inch difference, I can be fairly confident that my measurement findings support my decision regarding "will it fit?". Uncertainty is needed to determine the appropriate level of confidence we can have in whatever decision we are using the data to support.

I hope this helps in your quest for understanding.