Wednesday, 13 October 2010

Metrology

This post is actually about the poor quality and processing of historical climatic temperature records rather than metrology.


My main points are that in climatology many important factors that are accounted for in other areas of science and engineering are completely ignored by many scientists:

  1. Human errors in the accuracy and resolution of historical data are ignored
  2. Mechanical thermometer resolution is ignored
  3. Electronic gauge calibration is ignored
  4. Mechanical and electronic temperature gauge accuracy is ignored
  5. Hysteresis in modern data acquisition is ignored
  6. Conversion from degrees F to degrees C introduces false resolution into the data

Metrology is the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology. Believe it or not, the metrology of temperature measurement is complex.


It is actually quite difficult to measure things accurately, yet most people just assume that the information they are given is "spot on". A significant number of scientists and mathematicians also do not seem to realise that the data they are working with is often not very accurate. Over the years, as part of my job, I have read dozens of papers based on pressure and temperature records where no reference is made to the instruments used to acquire the data or to their calibration history. The result is that many scientists frequently reach incorrect conclusions about their experiments and data because they do not take into account the accuracy and resolution of that data. (It seems this is especially true in the area of climatology.)


Do you have a thermometer stuck to your kitchen window so you can see how warm it is outside?


Let's say you glance at this thermometer and it indicates about 31 degrees centigrade. If it is a mercury or alcohol thermometer you may have to squint to read the scale. If the scale is marked in 1c steps (which is very common), then you probably cannot interpolate between the scale markers.
This means that this particular thermometer's resolution is 1c, which is normally stated as plus or minus 0.5c (+/- 0.5c).
This example of resolution assumes you are observing the temperature under perfect conditions and have been properly trained to read a thermometer. In reality you might only glance at the thermometer, or have to use a flashlight to look at it, or it may be covered in a dusting of snow, rain, etc. Mercury forms a pronounced meniscus in a thermometer that can exceed 1c, and many observers incorrectly read the temperature at the base of the meniscus rather than at its peak (the picture shows an alcohol meniscus; a mercury meniscus bulges upward rather than down).
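
To put a rough number on the reading-resolution part alone, here is a small Python sketch; it assumes the best case of an observer who always reads to the nearest 1c marker, and the "true" temperatures are invented for illustration:

    # Best-case reading of a scale marked in 1c steps: the observer records the
    # nearest whole degree, so the written value can be up to +/-0.5c away from
    # the true temperature - before meniscus and parallax errors are added.
    true_temps = [30.2, 30.7, 31.2, 31.49]   # invented "true" outdoor temperatures

    for t in true_temps:
        recorded = round(t)                  # read to the nearest scale marker
        print(f"true {t:.2f}c -> recorded {recorded}c (error {recorded - t:+.2f}c)")
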
Another major and common error in reading a thermometer is parallax error.
Image courtesy of Surface Meteorological Instruments and Measurement Practices by G.P. Srivastava (with a mercury meniscus!). Parallax is where refraction of light through the glass of the thermometer exaggerates any error caused by the eye not being level with the surface of the fluid in the thermometer.
If you are using data from hundreds of thermometers scattered over a wide area, with the readings recorded by hand by dozens of different people, the assumed observational resolution has to be degraded further. In the oil industry, for example, it is common to accept an error margin of 2-4% when using manually acquired data.


As far as I am aware, no attempt has ever been made to account for observer error in historical raw temperature data from weather stations.


We should also consider the accuracy of the typical mercury and alcohol thermometers that have been in use for the last 120 years. Glass thermometers are calibrated by immersing them in an ice/water bath at 0c and a steam bath at 100c. The scale is then divided equally into 100 divisions between zero and 100. However, a glass thermometer at 100c is longer than the same thermometer at 0c. This means that the scale gives a false high reading at low temperatures (between 0 and 25c) and a false low reading at high temperatures (between 70 and 100c). The same process is followed for weather thermometers with a range of -20 to +50c.


25 years ago, very accurate mercury thermometers used in labs (0.01c resolution) came with a calibration chart/graph to convert the observed temperature on the thermometer scale to the actual temperature. This takes into account all the inconsistencies inherent in manufacturing the thermometer. Here is an example of a 0-100c thermometer that requires a correction of -0.2c at zero, -0.35c at 50c and +0.4c at 100c. This curve accounts for the change in length of the thermometer as its temperature changes and for the volume of mercury inside the capillary tube as opposed to the volume in the bulb, but most importantly it accounts for variations in the diameter of the capillary tube (it is almost impossible to make a perfectly consistent glass capillary tube).
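
As an illustration of how such a chart gets used, here is a short Python sketch; the three correction points are the ones quoted above, but the straight-line interpolation between them (and the helper function itself) is my own assumption for illustration only:

    # Hypothetical correction chart for the 0-100c lab thermometer described above.
    # The three (scale reading, correction) points come from the text; linear
    # interpolation between them is assumed purely for illustration.
    chart = [(0.0, -0.20), (50.0, -0.35), (100.0, +0.40)]

    def corrected(reading):
        """Return the scale reading plus the interpolated correction."""
        for (x0, c0), (x1, c1) in zip(chart, chart[1:]):
            if x0 <= reading <= x1:
                frac = (reading - x0) / (x1 - x0)
                return reading + c0 + frac * (c1 - c0)
        raise ValueError("reading outside the calibrated range")

    print(corrected(25.0))   # a scale reading of 25c is really about 24.7c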



New edit inserted here, 12 February 2011


Nowadays, precision "standard" thermometers used in weather stations have an accuracy of +/- 0.5c




What this means is that even with the best will in the world, using a modern thermometer manufactured in the last 25 years, the best accuracy achievable is +/- 0.5c and the best resolution would be +/- 0.25c. Combining these two potential errors gives us a minimum error range of +/- 0.75c.
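
The arithmetic behind that combined figure, sketched in Python (straight addition of the two error bands is the worst case; a formal uncertainty budget would more likely use a root-sum-of-squares combination):

    # Combining the stated accuracy and the reading resolution quoted above.
    accuracy   = 0.5    # +/-c, manufacturer's stated accuracy
    resolution = 0.25   # +/-c, best-case reading resolution

    worst_case = accuracy + resolution                   # +/-0.75c, as in the text
    rss        = (accuracy**2 + resolution**2) ** 0.5    # +/-0.56c, statistical combination

    print(f"worst case +/-{worst_case:.2f}c, root-sum-of-squares +/-{rss:.2f}c")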


Most weather station thermometers are a lot older than 25 years though. Thermometers made in the 19th century might have an error range of 3-5c...




Temperature cycles harden the glass bulb of a thermometer and cause it to shrink over time; a 10-year-old -20 to +50c thermometer will give a false high reading of around 0.7c.


Over time, repeated high-temperature cycles cause alcohol thermometers to evaporate vapour into the vacuum at the top of the tube, creating false low temperature readings of up to 5c (5.0c, not 0.5 - it's not a typo...).


Electronic temperature sensors have been used more and more over the last 20 years for measuring environmental temperature. These have their own resolution and accuracy problems. Electronic sensors suffer from drift and hysteresis and must be calibrated annually to remain accurate, yet most weather station temperature sensors are NEVER calibrated after they have been installed.

Drift is where the recording error gradually gets larger and larger over time: the recorded temperature creeps steadily upward or downward even when the real temperature is static. It is a quantum-mechanical effect in the metal parts of the temperature sensor that cannot be compensated for, and it is a fundamental characteristic of all electronic devices. Typical drift for a -100c to +100c electronic thermometer is about 1c per year, and the sensor must be recalibrated annually to correct this error.
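
A quick Python illustration of what that does to an uncalibrated sensor over time; the 1c-per-year figure is the one quoted above, and a perfectly linear drift is assumed only to keep the example simple:

    # Accumulated error for an electronic sensor drifting at roughly 1c per year,
    # assuming (for illustration only) a constant linear drift and no recalibration.
    drift_per_year = 1.0   # c per year, as quoted above

    for years in range(2, 11, 2):
        error = years * drift_per_year
        print(f"after {years:2d} years with no calibration the reading is off by ~{error:.0f}c")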


Hysteresis is a common problem as well. This is where increasing temperature has a different mechanical effect on the thermometer than decreasing temperature, so, for example, if the ambient temperature increases by 1.05c the thermometer reads an increase of 1c, but when the ambient temperature drops by 1.05c the same thermometer records a drop of 1.1c. (This is a VERY common problem in metrology.)
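
Using the numbers in that example, a short Python sketch shows how the asymmetry accumulates into a spurious trend over repeated warm/cool cycles (the step sizes are the ones above; everything else is a toy model):

    # Toy model of the hysteresis example: the true temperature swings up and down
    # by 1.05c, but the gauge registers +1.0c on the way up and -1.1c on the way
    # down, so every full cycle leaves a -0.1c residual in the recorded value.
    true_temp = 20.0
    indicated = 20.0

    for cycle in range(1, 6):
        true_temp += 1.05; indicated += 1.0    # warming half-cycle
        true_temp -= 1.05; indicated -= 1.1    # cooling half-cycle
        print(f"after cycle {cycle}: true {true_temp:.2f}c, indicated {indicated:.2f}c")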


Here is the behaviour of a typical food temperature sensor compared with a calibrated thermometer (thermometer calibration chart), without even considering sensor drift: depending on the measured temperature, the offset in this high-accuracy gauge ranges from -0.8c to +1c.


But on top of these issues, the people who make these thermometers and weather stations state clearly the accuracy of their instruments, yet scientists ignore it! The packaging of a -20c to +50c mercury thermometer will state that the accuracy of the instrument is +/-0.75c, for example, yet this information is frequently not incorporated into the statistical calculations used in climatology.
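
One way to see why the stated accuracy matters even after heavy averaging: random reading noise shrinks as more measurements are averaged together, but a systematic instrument offset does not. A rough Python sketch with invented numbers (a gauge reading a constant 0.75c high, within the stated +/-0.75c band):

    import random

    random.seed(1)
    true_temp = 15.0
    instrument_bias = 0.75   # hypothetical systematic error, within the stated +/-0.75c

    # Average 10,000 readings: the random noise averages away, the bias does not.
    readings = [true_temp + instrument_bias + random.gauss(0, 0.5) for _ in range(10000)]
    mean = sum(readings) / len(readings)
    print(f"true {true_temp}c, mean of readings {mean:.3f}c")   # still about 0.75c high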


Finally we get to the infamous conversion of degrees Fahrenheit to degrees Centigrade. Until the 1960s almost all global temperatures were measured in Fahrenheit; nowadays all the proper scientists use Centigrade, so all the old data is routinely converted: take the original temperature, subtract 32, multiply by 5 and divide by 9.
C= ((F-32) x 5)/9
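
In code form (a trivial sketch), the conversion and the precision it appears to produce look like this:

    def f_to_c(f):
        """Convert degrees Fahrenheit to degrees Centigrade."""
        return (f - 32) * 5 / 9

    print(f_to_c(60))   # 15.5555... - far more digits than the original reading justifies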


Example: the original reading in a 1950 data file is 60F. This figure was eyeballed by the local weatherman and written into his tally book. Fifty years later a scientist takes this figure and converts it to Centigrade:
60 - 32 = 28
28 x 5 = 140
140 / 9 = 15.555...
This is then (incorrectly) rounded to two decimal places: 15.56c, with no explanation as to why that level of resolution has been selected.


The correct mathematical way to handle this issue of resolution is to look at the original resolution of the recorded data. Typically, old Fahrenheit data was recorded in increments of 2 degrees F, e.g. 60, 62, 64, 66, 68, 70. Very rarely on old data sheets do you see 61, 63, etc. (although 65 is slightly more common).


If the original resolution was 2 degrees F, the resolution used for the same data converted to Centigrade should be 1.1c.


Therefore, mathematically:
60F = 16C
61F = 16C
62F = 17C
etc.
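
A Python sketch of that conversion with the resolution carried through; rounding the converted values to whole degrees (since the roughly 1.1c converted resolution supports nothing finer) is my reading of the argument above:

    def f_to_c(f):
        """Convert degrees Fahrenheit to degrees Centigrade."""
        return (f - 32) * 5 / 9

    resolution_f = 2.0                    # typical increment on old Fahrenheit data sheets
    resolution_c = resolution_f * 5 / 9   # about 1.1c - the finest meaningful converted step

    for f in (60, 61, 62):
        exact = f_to_c(f)
        print(f"{f}F -> {exact:.2f}c exact -> report {round(exact)}C "
              f"(resolution +/-{resolution_c / 2:.2f}c)")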


In conclusion, when interpreting historical environmental temperature records one must account for the errors of accuracy and resolution built into the instrument, as well as the errors made in observing and recording the temperature.


In a high-quality glass environmental thermometer manufactured in 1960, the accuracy would be +/-1.4F (2% of range).


The resolution of an astute and dedicated observer would be around +/-1F.
Therefore the total error margin on all observed weather station temperatures would be a minimum of +/-2.4F, or about +/-1.3c...


Any comments much appreciated