Types of Measurement Error

We always want to avoid error, but it is a fact of life. At the foundation of analysis and modelling, we rely on measurements. Because errors in measurements are inescapable, the important question is how much the error affects the result. I start the conversation by explaining what measurement error is, including its component parts, and what we can do to minimize its effect.

It is practically impossible to consistently measure anything with perfect accuracy. There are always factors complicating the measurement, clouding the actual value. Similarly, because of those complicating factors, it is difficult to measure anything exactly the same way every time (repeatability; see the post contrasting accuracy and precision). These complicating factors change, producing different answers even when we think we are using the same method to measure the same thing. This is why statistics considers the ‘true’ value to be unobservable. However, statistics embraces this inherent variability and uses our knowledge of it to better interpret observations.

The description of ‘complicating factors’ in the previous paragraph sounds vague, but that is the point. If we fully understood them, we could account for them and reduce the measurement error. So in this sense, measurement errors are the differences produced by any of the things that make it impossible to directly observe the true value of what we want to measure. However, we can often minimize this error to the point that it has no consequence for what we want to know. For example, if I want to know what city I am in, reducing the measurement error of my location to within a few centimeters will almost always be sufficient to correctly answer the question.

I should also mention that the error being described here may not be noticeable if the measuring instrument is not sufficiently sensitive. In other words, this measurement error always exists, but the differences can be smaller than the device can detect. In that case, the sensitivity of the measuring instrument becomes the dominant factor in deciding whether the quality of the measurement is sufficient.
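
As a rough sketch of that idea (all numbers invented), the snippet below simulates repeated readings of a fixed quantity and quantizes them to the instrument’s resolution. Because the underlying variability is smaller than the resolution, every reading comes out identical:

```python
import random

TRUE_VALUE = 12.347   # the unobservable 'true' value (invented for illustration)
NOISE_SD = 0.0003     # variability from the complicating factors
RESOLUTION = 0.01     # smallest increment the instrument can report

def measure():
    """One reading: the true value plus random variability,
    quantized to the instrument's resolution."""
    raw = random.gauss(TRUE_VALUE, NOISE_SD)
    return round(round(raw / RESOLUTION) * RESOLUTION, 2)

readings = [measure() for _ in range(10)]
print(set(readings))  # {12.35}: the variability is below what the device can detect
```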

[Figure: measurement-error]
An illustration of using the observed systematic error to make a correction to all of the observed values. The two outside points dominate the estimation of the systematic error. For the middle point, making a correction for the observed systematic error appears to move the value too far. The difference is the error we cannot account for, and we assume the remaining error is random. For the observed values without a corresponding “known” value, we can only make the assumption that applying the systematic correction will improve accuracy (this is usually the case).

Because measurement error can have predictable and not-so-predictable components, we like to divide measurement error into systematic and random error. Systematic error (sometimes called bias) is the part of the measurement error that follows an observed pattern. Of course, observing a pattern in the error requires some kind of knowledge about what the true value should be for comparison. Random error is essentially the part of the error that we cannot explain, which often means we must rely on statistical assumptions to interpret how this remaining component might affect our results. You may have noticed that the boundaries for these categories of error depend largely on what we know. With advancements in technology, we are regularly decreasing measurement error and detecting patterns that shift error from the random category to the systematic one, even if we don’t fully understand the processes behind that systematic error.
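
To make this split concrete, here is a toy simulation (all numbers invented): each observation is the true value plus a constant offset (the systematic part) plus noise (the random part). With knowledge of the true value, the mean error estimates the systematic part, but removing it leaves the random scatter untouched:

```python
import random
from statistics import mean, pstdev

random.seed(42)                # reproducible toy example

TRUE_VALUE = 50.0              # known here only because this is a simulation
BIAS = 1.8                     # systematic error: a consistent offset
NOISE_SD = 0.5                 # random error: the part we cannot explain

observations = [TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD) for _ in range(100)]

# Comparing to the known value reveals the pattern (here, a constant offset)...
estimated_bias = mean(observations) - TRUE_VALUE
corrected = [obs - estimated_bias for obs in observations]

print(f"estimated systematic error: {estimated_bias:.2f}")     # near 1.8
# ...but the correction cannot remove the scatter that remains
print(f"remaining random error (SD): {pstdev(corrected):.2f}")  # near 0.5
```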

To decrease the proportion of measurement error relegated to random error, we need some basis of comparison for identifying the systematic error. If we have a way to compare our measurement of something with its measurement by a more reliable source, we can look for a pattern in those differences and adjust the rest of our measurements accordingly. For example, a network of reference stations exists for recording measurement errors in the global positioning system (GPS). These stations have had their locations determined by very accurate methods. At these stations, the differences between the locations measured instantaneously by GPS and the established locations are recorded. These differences can then be used to improve the accuracy of GPS measurements taken elsewhere by shifting the measurements by the error observed at the same time at the nearest reference station. GPS devices with DGPS capability can now make these differential corrections in real time by receiving a radio signal from such reference stations.
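
The logic of that differential correction can be sketched in a few lines (the coordinates and offsets below are invented): the error observed at the reference station at a given moment is assumed to apply to a measurement taken nearby at the same time.

```python
# Established (surveyed) position of the reference station and what
# GPS reported there at the same moment, in metres northing/easting.
reference_established = (4735210.0, 512340.0)
reference_gps_reading = (4735212.1, 512338.6)

# Systematic error observed at the reference station at that moment
offset_n = reference_gps_reading[0] - reference_established[0]
offset_e = reference_gps_reading[1] - reference_established[1]

# Our field measurement taken at the same time, shifted by the observed error
field_gps_reading = (4735890.4, 512501.2)
corrected = (field_gps_reading[0] - offset_n, field_gps_reading[1] - offset_e)

print(f"offset at reference: ({offset_n:.1f}, {offset_e:.1f}) m")
print(f"corrected position:  ({corrected[0]:.1f}, {corrected[1]:.1f})")
```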

Although measurement error is inevitable, there are some things we can do to prevent it from being an obstacle to what we want to know. We can use measurement instruments that are sensitive enough to detect the level of detail we need. However, there are sometimes measurement errors that our available technology cannot directly overcome. In those cases, we can sometimes further reduce the measurement error if we have some reference points that we can use to at least detect patterns in the error. That little bit of knowledge can improve the accuracy of measurements, even if the reasons behind the systematic error are not well understood. Whatever is left over, we must call random error.

For more, see:
Measurement Error (Research Methods Knowledge Base)

Particle Size Analysis Toolpack v2

A zip file containing a suite of tools for analyzing continuous particle size curves from laser diffractometry.

Includes:

  • export templates for Malvern software,
  • analysis template for recommended quality control procedure,
  • reporting templates for organized presentation of results with additional metrics, and
  • a data filter for removing the larger particle size peak from bimodal curves.



Accuracy vs Precision

Scientists often measure and predict things. Therefore, we need ways to describe how much we know, how close a number is to reality, and how likely we are to get the same number again. The terms accuracy and precision are generally used to describe these things, but there can be some ambiguity. This post explains the difference between the two and explores precision’s multiple meanings.

[Figure: accuracy v precision]
These targets illustrate different combinations of accuracy and precision. Accuracy describes the proximity to the desired answer (the bull’s eye). Precision describes the points’ proximity to each other; less scatter means higher precision.

Accuracy refers to the correctness of a measurement or prediction. The results can vary a lot, but what matters is the difference between the measurements or predictions and what is considered to be the real or accepted value. Precision is often contrasted with accuracy by emphasizing the repeatability meaning of the word. This is most applicable to measurements, but it can be applied to modelling too (e.g., stochastic models). When measuring something, we want some confidence that if we were to measure the same thing again, we would get a similar answer. In this situation, precision describes the degree of that similarity.
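
As a rough illustration (with invented numbers), accuracy can be summarized by the mean difference from the accepted value, while precision in the repeatability sense is the spread among repeated measurements:

```python
from statistics import mean, stdev

accepted_value = 10.0
repeats = [10.41, 10.38, 10.43, 10.40, 10.39]   # invented repeated measurements

accuracy_error = mean(repeats) - accepted_value  # distance from the bull's eye
precision = stdev(repeats)                       # scatter among the repeats

print(f"mean error (accuracy): {accuracy_error:.2f}")  # ~0.40: not very accurate
print(f"spread (precision):    {precision:.2f}")       # ~0.02: quite precise
```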

Accuracy and precision are both highly desirable characteristics for our measurements and predictions, but they are usually independent. For example, most models should produce the same results when given the same algorithms and inputs, but this has nothing to do with the accuracy of the model’s predictions. Measurements can also be precise without being accurate. When this situation occurs, the difference between the measurements and the true value is called the bias (i.e., bias of an estimator). If we can quantify the bias, then it is possible to adjust for it and improve the accuracy.
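
Continuing the invented numbers from the previous sketch, the precise-but-inaccurate repeats can be adjusted once the bias is quantified:

```python
from statistics import mean

accepted_value = 10.0
repeats = [10.41, 10.38, 10.43, 10.40, 10.39]    # precise but biased (invented)

bias = mean(repeats) - accepted_value            # quantify the bias
adjusted = [round(r - bias, 2) for r in repeats] # remove it to improve accuracy

print(f"bias: {bias:.2f}")  # ~0.40
print(adjusted)             # values now centred on the accepted value
```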

Another meaning for precision creates some ambiguity when using the term. This second meaning of precision describes how much we know about a measurement. For example, rules of significant digits help us use numbers that don’t express more than what we really know about a measurement. This definition of precision is mostly used to describe the exactness of measurements, but it is sometimes used to describe the level of detail in spatial applications. For example, the popular term ‘precision agriculture’ emphasizes the extra level of detail with which the work can be done.
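
For the exactness sense, a small helper (hypothetical, for illustration) makes the idea concrete: reporting a value to three significant digits claims less knowledge than reporting it to six.

```python
def to_sig_figs(value, figs):
    """Round to a given number of significant digits (hypothetical helper)."""
    return float(f"{value:.{figs}g}")

measurement = 12.34567
print(to_sig_figs(measurement, 3))  # 12.3: claims we know about three digits
print(to_sig_figs(measurement, 6))  # 12.3457: claims much more exactness
```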

Confusion between precision and accuracy is fueled by the common mixing of the terms. Merriam-Webster defines precision as “the accuracy (as in binary or decimal places) with which a number can be represented.” Although it’s really a misuse of the term ‘accuracy’ to define a meaning of precision, they are trying to get at the concept of exactness. However, ambiguity exists in the scientific community too. The ISO has advocated defining accuracy to encompass both trueness and precision.

To avoid ambiguity, using alternative terms for precision may be a wise choice. When describing the ability to get similar results from multiple measurements, the term ‘repeatability’ would be clearer in meaning. Precision, in terms of describing the quantitative or spatial detail of something, isn’t as easily replaced by a term such as ‘exactness’ because even that can conjure ideas of accuracy. As usage evolves, maybe the meaning of precision will narrow. In the meantime, when possible, we can use ‘detail’ and ‘resolution’ for spatial applications and ‘significant digits’ for quantitative applications.

For more, check out:

Precision of soil particle size results using laser diffractometry

Miller, B.A. and R.J. Schaetzl. 2012. Precision of soil sample particle size results using laser diffractometry. Soil Science Society of America Journal 76(5):1719-1727. doi:10.2136/sssaj2011.0303.