Thursday, 23 December 2010

Explaining the exponential growth in technological change


In the 1950s, the Nobel prize-winning economist Robert Solow discovered that about 90% of all economic growth in the USA between 1909 and 1949 was directly attributable to technological invention and innovation. http://nobelprize.org/nobel_prizes/economics/laureates/1987/press.html

There is a huge area of economic study related to this discovery and to how best to finance R&D. For example, should it all be government funded or privately funded?

Anyway, my own view is that the exponential growth of knowledge, technology and innovation is directly related to the number of human brains on the planet. As population increases, less communal brain power is dedicated to getting food, and more thinking time is dedicated to problems such as how to improve a jet engine's fuel efficiency or how to make a microprocessor smaller. So as population increases, the rate of technological innovation also increases. For this reason I am optimistic for the human race. The computing power of 9 billion brains by 2050 is sure to solve most of our pressing problems concerning the environment, sustainable development and the improvement of the human condition.

Sunday, 28 November 2010

Gauge resolution and accuracy

If you have a thermometer with 1 degree centigrade divisions, it is reasonable to say that the thermometer has a resolution of 1c, or +/- 0.5c. If the accuracy stated by the manufacturer is 0.5c, then the combined error margin is +/- 1.0c. In this case all temperatures recorded with this thermometer should be rounded to the nearest 1c. It is meaningless to record 1.5c or 1.75c etc., even if it appears you can read the scale that finely, and even then you should still remember that your observed reading carries a margin of +/- 1c.
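As a quick illustration, here is a minimal Python sketch of this reasoning; the worst-case addition of the two error sources and the function names are my own assumptions, not a standard:

```python
# Minimal sketch: how gauge resolution and stated accuracy combine.
# Assumes the two error sources simply add (worst case), as argued above.

def combined_error(resolution_c: float, accuracy_c: float) -> float:
    """Worst-case margin: half the smallest division plus manufacturer accuracy."""
    return resolution_c / 2 + accuracy_c

def round_to_division(reading_c: float, resolution_c: float) -> float:
    """Round a reading to the nearest scale division."""
    return round(reading_c / resolution_c) * resolution_c

margin = combined_error(resolution_c=1.0, accuracy_c=0.5)
print(f"{round_to_division(31.4, 1.0):.0f}c +/- {margin}c")  # 31c +/- 1.0c
```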
(need to expand on this at some point)

Wednesday, 13 October 2010

Metrology

This post is actually about the poor quality and processing of historical climatic temperature records, rather than about metrology itself.


My main point is that in climatology, many important factors that are routinely accounted for in other areas of science and engineering are completely ignored by many scientists:

  1. Human errors in the accuracy and resolution of historical data are ignored
  2. Mechanical thermometer resolution is ignored
  3. Electronic gauge calibration is ignored
  4. Mechanical and electronic temperature gauge accuracy is ignored
  5. Hysteresis in modern data acquisition is ignored
  6. Conversion from degrees F to degrees C introduces false resolution into the data

Metrology is the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology. Believe it or not, the metrology of temperature measurement is complex.


It is actually quite difficult to measure things accurately, yet most people just assume that the information they are given is "spot on". A significant number of scientists and mathematicians also do not seem to realise that the data they are working with is often not very accurate. Over the years, as part of my job, I have read dozens of papers based on pressure and temperature records where no reference is made to the instruments used to acquire the data, or to their calibration history. The result is that many scientists frequently reach incorrect conclusions about their experiments and data because they do not take into account the accuracy and resolution of their data. (It seems this is especially true in the area of climatology.)


Do you have a thermometer stuck to your kitchen window so you can see how warm it is outside?


Let's say you glance at this thermometer and it indicates about 31 degrees centigrade. If it is a mercury or alcohol thermometer you may have to squint to read the scale. If the scale is marked in 1c steps (which is very common), then you probably cannot interpolate between the scale markers.
This means that this particular thermometer's resolution is 1c, which is normally stated as plus or minus 0.5c (+/- 0.5c).
Even this resolution assumes you are observing the temperature under perfect conditions and have been properly trained to read a thermometer. In reality you might just glance at the thermometer, or have to use a flash-light to look at it, or it may be covered in a dusting of snow, rain, etc. Mercury forms a pronounced meniscus in a thermometer that can exceed 1c, and many observers incorrectly read the temperature at the base of the meniscus rather than at its peak. (The picture here shows an alcohol meniscus; a mercury meniscus bulges upward rather than down.)
Another major and common error in reading a thermometer is parallax error. This is where refraction of light through the glass of the thermometer exaggerates any error caused by the eye not being level with the surface of the fluid. (Image courtesy of Surface Meteorological Instruments and Measurement Practices by G.P. Srivastava, showing a mercury meniscus!)
If you are using data from hundreds of thermometers scattered over a wide area, with readings recorded by hand by dozens of different people, the effective observational resolution should be reduced further still. In the oil industry, for example, it is common to accept an error margin of 2-4% in manually acquired data.


As far as I am aware, no historical raw temperature dataset from multiple weather stations has ever attempted to account for observer error.


We should also consider the accuracy of the typical mercury and alcohol thermometers that have been in use for the last 120 years. Glass thermometers are calibrated by immersing them in an ice/water bath at 0c and a steam bath at 100c, and the scale is then divided equally into 100 divisions between zero and 100. However, a glass thermometer at 100c is physically longer than the same thermometer at 0c. This means that the scale gives a falsely high reading at low temperatures (between 0 and 25c) and a falsely low reading at high temperatures (between 70 and 100c). The same process is followed for weather thermometers with a range of -20 to +50c.


Twenty-five years ago, very accurate mercury thermometers used in labs (0.01c resolution) came with a calibration chart/graph to convert the observed temperature on the thermometer scale into the actual temperature. This took into account all the inconsistencies inherent in manufacturing the thermometer. Here is an example for a 0-100c thermometer that indicates a correction of -0.2c at zero, -0.35c at 50c and +0.4c at 100c. This curve accounts for the change in length of the thermometer due to temperature changes and for the increase in the volume of mercury inside the capillary tube as opposed to the volume in the bulb, but most importantly it accounts for variations in the diameter of the capillary tube (it is almost impossible to make a perfectly consistent glass capillary tube).
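As an illustration, here is a minimal Python sketch of how such a correction chart might be applied, using the example correction points quoted above; the straight-line interpolation between chart points is my own simplifying assumption (a real chart gives a smooth curve):

```python
import numpy as np

# Sketch: applying a lab thermometer's calibration chart.
# Correction points are the example figures quoted above:
# -0.2c at 0c, -0.35c at 50c, +0.4c at 100c.
chart_readings_c = np.array([0.0, 50.0, 100.0])   # observed scale readings
corrections_c    = np.array([-0.2, -0.35, 0.4])   # correction to add

def true_temperature(observed_c: float) -> float:
    """Observed reading plus the interpolated chart correction."""
    return observed_c + np.interp(observed_c, chart_readings_c, corrections_c)

print(true_temperature(50.0))   # -> 49.65
print(true_temperature(100.0))  # -> 100.4
```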



New Edit inserted here 12.feb.2011


Nowadays, precision "standard" thermometers used in weather stations have an accuracy of +/- 0.5c.




What this means is that even with the best will in the world, using a modern thermometer manufactured in the last 25 years, the best accuracy achievable is +/- 0.5c and the best resolution is +/- 0.25c. Combining these two potential errors gives us a minimum error range of +/- 0.75c.


Most weather station thermometers are a lot older than 25 years though. Thermometers made in the 19th century might have an error range of 3-5c...




Temperature cycles harden the glass bulb of a thermometer and shrink it over time; a 10-year-old -20 to +50c thermometer will give a false high reading of around 0.7c.


Over time, repeated high temperature cycles cause the alcohol in a thermometer to evaporate into the vacuum at the top of the tube, creating false low temperature readings of up to 5c. (That is 5.0c, not 0.5c; it's not a typo...)


Electronic temperature sensors have been used more and more in the last 20 years for measuring environmental temperature. These also have their own resolution and accuracy problems. Electronic sensors suffer from drift and hysteresis and must be calibrated annually to remain accurate, yet most weather station temperature sensors are NEVER calibrated after they have been installed.

Drift is where the recorded temperature steadily creeps up or down even when the real temperature is static, so the recording error gradually gets larger and larger over time. It is a quantum-mechanical effect in the metal parts of the temperature sensor that cannot be compensated for. Typical drift for a -100c to +100c electronic thermometer is about 1c per year, and the sensor must be recalibrated annually to correct this error!
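Here is a minimal Python sketch of what roughly 1c-per-year drift does to a sensor that is never recalibrated; the linear drift model and the numbers are illustrative assumptions only:

```python
# Sketch: a sensor drifting ~1c per year (the figure quoted above),
# with and without annual recalibration. Illustrative numbers only.

DRIFT_C_PER_DAY = 1.0 / 365.0

def recorded_temp(true_c: float, days_since_calibration: int) -> float:
    """Reading from a drifting sensor: true value plus accumulated drift."""
    return true_c + DRIFT_C_PER_DAY * days_since_calibration

# Never recalibrated after installation: ~5c of error after 5 years.
print(recorded_temp(15.0, days_since_calibration=5 * 365))  # -> 20.0

# Recalibrated annually: the error never exceeds ~1c.
print(recorded_temp(15.0, days_since_calibration=364))      # -> ~16.0
```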


Hysteresis is a common problem as well. This is where increasing temperature has a different mechanical effect on the thermometer than decreasing temperature. For example, if the ambient temperature increases by 1.05c the thermometer reads an increase of 1c, but when the ambient temperature drops by 1.05c the same thermometer records a drop of 1.1c. (This is a VERY common problem in metrology.)
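A toy Python model of that example; the asymmetric gains are back-calculated from the 1.05c figures above and are purely illustrative:

```python
# Sketch of hysteresis: the sensor responds differently to rising and
# falling temperatures. Gains are derived from the example above
# (a 1.05c rise reads as 1.0c; a 1.05c fall reads as 1.1c).

RISING_GAIN  = 1.00 / 1.05   # under-reads temperature increases
FALLING_GAIN = 1.10 / 1.05   # over-reads temperature decreases

def recorded_change(true_change_c: float) -> float:
    """Recorded temperature change for a given true change."""
    gain = RISING_GAIN if true_change_c >= 0 else FALLING_GAIN
    return true_change_c * gain

print(round(recorded_change(+1.05), 2))  # -> 1.0
print(round(recorded_change(-1.05), 2))  # -> -1.1
```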


Here is the behaviour of a typical food temperature sensor compared to a calibrated thermometer, without even considering sensor drift (chart: "Thermometer Calibration"). Depending on the measured temperature, the offset in this high accuracy gauge ranges from -0.8c to +1c.


But on top of these issues, the people who make these thermometers and weather stations state clearly the accuracy of their instruments, yet scientists ignore it! The packaging of a -20c to +50c mercury thermometer will state, for example, that the accuracy of the instrument is +/- 0.75c, yet frequently this information is not incorporated into the statistical calculations used in climatology.


Finally we get to the infamous conversion of degrees Fahrenheit to degrees Centigrade. Until the 1960s almost all global temperatures were measured in Fahrenheit; nowadays all proper scientists use Centigrade, so all the old data is routinely converted. Take the original temperature, subtract 32, multiply by 5 and divide by 9:
C = ((F - 32) x 5) / 9


Example: the original reading from a 1950 data file is 60F. This reading was eyeballed by the local weatherman and written into his tallybook. Fifty years later a scientist takes this figure and converts it to centigrade:
60 - 32 = 28
28 x 5 = 140
140 / 9 = 15.56
The result is usually (and incorrectly) quoted to two decimal places, 15.56c, without any explanation as to why this level of resolution has been selected.


The correct mathematical way of handling this issue is to look at the original resolution of the recorded data. Typically, old Fahrenheit data was recorded in increments of 2 degrees F, e.g. 60, 62, 64, 66, 68, 70. Very rarely on old data sheets do you see 61, 63, etc. (although 65 is slightly more common).


If the original resolution was 2 degrees F, the resolution used for the same data converted to Centigrade should be 1.1c.


Therefore, mathematically:
60F = 16C
61F = 16C
62F = 17C
etc.
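A short Python sketch of this conversion, rounding the result to the nearest whole degree C so as not to invent resolution the original 2 F increments never had; rounding to whole degrees rather than exact 1.1c steps is my own simplification:

```python
# Sketch: convert Fahrenheit records to Centigrade without creating
# false resolution. The original data was recorded in 2 F increments,
# so the converted values are rounded to the nearest whole degree C.

def f_to_c(deg_f: float) -> float:
    """Exact Fahrenheit to Centigrade conversion."""
    return (deg_f - 32.0) * 5.0 / 9.0

def f_to_c_original_resolution(deg_f: float) -> int:
    """Conversion rounded to match the resolution of the source data."""
    return round(f_to_c(deg_f))

for f in (60, 61, 62):
    print(f"{f}F = {f_to_c(f):.2f}C -> {f_to_c_original_resolution(f)}C")
# 60F = 15.56C -> 16C
# 61F = 16.11C -> 16C
# 62F = 16.67C -> 17C
```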


In conclusion, when interpreting historical environmental temperature records, one must account for the errors of accuracy and resolution built into the instrument, as well as the errors made in observing and recording the temperature.


In a high quality glass environmental thermometer manufactured in 1960, the accuracy would be +/- 1.4F (2% of range).


The resolution achievable by an astute and dedicated observer would be around +/- 1F.
Therefore the total error margin of all observed weather station temperatures would be a minimum of +/- 2.4F, or about +/- 1.3c...
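For what it's worth, a quick Python check of that arithmetic, using the figures above (note that a temperature difference converts with the 5/9 factor alone, without the 32-degree offset):

```python
# Quick check of the combined error margin quoted above.
accuracy_f = 1.4   # 1960 glass thermometer accuracy (2% of range)
observer_f = 1.0   # resolution of an astute, dedicated observer
total_f = accuracy_f + observer_f
# An error *interval* converts to Centigrade with the 5/9 factor only.
print(f"+/- {total_f} F = +/- {total_f * 5 / 9:.2f} C")  # +/- 2.4 F = +/- 1.33 C
```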


Any comments much appreciated


Tuesday, 7 September 2010

A Major Deception on Global Warming

I almost forgot about this key news item from 1996.

The crux of the matter was that the IPCC deliberately changed its climate report before publication, AFTER the document had been signed off by the scientific contributors, in order to hide these statements:
  • "None of the studies cited above has shown clear evidence that we can attribute the observed [climate] changes to the specific cause of increases in greenhouse gases."
  • "No study to date has positively attributed all or part [of the climate change observed to date] to anthropogenic [man-made] causes."
  • "Any claims of positive detection of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced."



A Major Deception on Global Warming

Op-Ed by Frederick Seitz
Wall Street Journal, June 12, 1996

Last week the Intergovernmental Panel on Climate Change, a United Nations organization regarded by many as the best source of scientific information about the human impact on the earth's climate, released "The Science of Climate Change 1995," its first new report in five years. The report will surely be hailed as the latest and most authoritative statement on global warming. Policy makers and the press around the world will likely view the report as the basis for critical decisions on energy policy that would have an enormous impact on U.S. oil and gas prices and on the international economy.

This IPCC report, like all others, is held in such high regard largely because it has been peer-reviewed. That is, it has been read, discussed, modified and approved by an international body of experts. These scientists have laid their reputations on the line. But this report is not what it appears to be--it is not the version that was approved by the contributing scientists listed on the title page. In my more than 60 years as a member of the American scientific community, including service as president of both the National Academy of Sciences and the American Physical Society, I have never witnessed a more disturbing corruption of the peer-review process than the events that led to this IPCC report.

A comparison between the report approved by the contributing scientists and the published version reveals that key changes were made after the scientists had met and accepted what they thought was the final peer-reviewed version. The scientists were assuming that the IPCC would obey the IPCC Rules--a body of regulations that is supposed to govern the panel's actions. Nothing in the IPCC Rules permits anyone to change a scientific report after it has been accepted by the panel of scientific contributors and the full IPCC.

The participating scientists accepted "The Science of Climate Change" in Madrid last November; the full IPCC accepted it the following month in Rome. But more than 15 sections in Chapter 8 of the report--the key chapter setting out the scientific evidence for and against a human influence over climate--were changed or deleted after the scientists charged with examining this question had accepted the supposedly final text.

Few of these changes were merely cosmetic; nearly all worked to remove hints of the skepticism with which many scientists regard claims that human activities are having a major impact on climate in general and on global warming in particular.

The following passages are examples of those included in the approved report but deleted from the supposedly peer-reviewed published version:

  • "None of the studies cited above has shown clear evidence that we can attribute the observed [climate] changes to the specific cause of increases in greenhouse gases."
  • "No study to date has positively attributed all or part [of the climate change observed to date] to anthropogenic [man-made] causes."
  • "Any claims of positive detection of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced."

The reviewing scientists used this original language to keep themselves and the IPCC honest. I am in no position to know who made the major changes in Chapter 8; but the report's lead author, Benjamin D. Santer, must presumably take the major responsibility.

IPCC reports are often called the "consensus" view. If they lead to carbon taxes and restraints on economic growth, they will have a major and almost certainly destructive impact on the economies of the world. Whatever the intent was of those who made these significant changes, their effect is to deceive policy makers and the public into believing that the scientific evidence shows human activities are causing global warming.

If the IPCC is incapable of following its most basic procedures, it would be best to abandon the entire IPCC process, or at least that part that is concerned with the scientific evidence on climate change, and look for more reliable sources of advice to governments on this important question.

Mr. Seitz is president emeritus of Rockefeller University and chairman of the George C. Marshall Institute.

Coverup in the Greenhouse?


THE WALL STREET JOURNAL
July 11, 1996

Dr. Seitz, former president of the U. S. National Academy of Sciences, has revealed that a UN-sponsored scientific report promoting global warming has been tampered with for political purposes. Predictably, there have been protests from officials of the IPCC, claiming that the revisions in their report, prior to its publication, did nothing to change its emphasis. They also claim that such unannounced changes of an approved draft do not violate their rules of transparency and open review.

It is good therefore to have on hand an editorial from the international science journal Nature (June 13). Even though the writer openly takes the side of the IPCC in this controversy, impugning the motives of the industry group that first uncovered the alterations in the text, the editorial confirms that:

  1. A crucial chapter of the IPCC's report was altered between the time of its formal acceptance and its printing.
  2. Whether in accord with IPCC rules or not—still a hotly debated matter—"there is some evidence that the revision process did result in a subtle shift . . . that . . . tended to favour arguments that aligned with the report's broad conclusions." (Critics of the IPCC would have used much stronger words.) The editorial further admits that "phrases that might have been (mis)interpreted as undermining these conclusions have disappeared."
  3. "IPCC officials," quoted (but not named) by Nature, claim that the reason for the revisions to the chapter was "to ensure that it conformed to a 'policymakers' summary' of the full report...." Their claim begs the obvious question: Should not a summary conform to the underlying scientific report rather than vice versa?

The IPCC summary itself, a political document, is economical with the truth: It has problems with selective presentation of facts, not the least of which is that it totally ignores global temperature data gathered by weather satellites, which contradict the results of models used to predict a substantial future warming. It seems to me that IPCC officials, having failed to validate the current climate models, are now desperately grasping at straws to buttress their (rather feeble) conclusion that "the balance of evidence suggests a discernible human influence on climate." In this crusade to provide a scientific cover for political action, they are misusing the work of respected scientists who never made extravagant claims about future warming.

It is clear that politicians and activists striving for international controls on energy use (to be discussed in Geneva in July when the parties to the Global Climate Treaty convene) are anxious to stipulate that the science is settled and trying to marginalize the growing number of scientific critics. It is disappointing, however, to find a respected science journal urging in an editorial that "charges . . . that [the IPCC report on global climate change has been 'scientifically cleansed' should not be allowed to undermine efforts to win political support for abatement strategies."

Sunday, 29 August 2010

Are Humans really unique?

The Neanderthals had an intelligence that almost became a civilisation. From 300,000 BC to 30,000 BC they had a society in Europe, used tools and fire. They buried their dead and painted caves with pictures. They became extinct around 20,000 years ago.
Humans have had a flourishing civilisation for about 5,000 years, and here we are: on the moon, in space, running nuclear power, etc.
Chimpanzees have used tools for hundreds of years: straws to fish insects out of holes, stones to smash nuts open, etc.
Three branches of the same species tree: humans and Neanderthals are like twins genetically, gorillas and chimps are brothers, and other primates are close cousins.
Here we seem to have several minor branches from the same main species branch that use intelligence as a primary tool of survival and have flourished because of it over the space of half a million years.
It is clear that intelligence is a primary survival tool for all species. Bees, for example, use a form of intelligence to find pollen and to communicate.
Sharks, tuna and dolphins all arrived at the same physical design for optimal survival in the ocean, and there have been several iterations of the sabre-toothed tiger in the fossil record.
Therefore, to believe that humans have created the first intelligent civilisation in 4 billion years is unreasonable from a statistical perspective.
It is quite likely that in the past 4 billion years there have been other species that reached a similar level of intelligence to humans before going extinct.
We base our assumption that humanity is the first and only civilised species on knowledge gleaned from the fossil record. However, a civilisation that existed 200 million years ago would not have left a significant trace.
Imagine if we had a nuclear war today and went extinct. In 50 million years our cities would have become thin layers of calcium carbonate or iron ore thousands of feet below the surface; no other trace would exist on the earth.
So if a very ancient civilisation did exist in the past, it would have left at best a marker in orbit, elsewhere in the solar system, or on the moon; nothing easily detectable would be left on earth.
If we were to target a search for prehistoric civilisations, what should we look for?
1. Concentrations of iron (e.g. "red beds"!!!)
2. Concentrations of uranium or other heavy metals.
3. The fossil record: global coverage of a species across climates and environments.
4. Manipulator physiology in the fossil record. Think outside the box: tentacles, an elephant's trunk, etc., not just fingers.
NB: this will be edited a lot over time!

Sunday, 22 August 2010

Trees raise air temperature in forests

Notes on trees for future essay

Trees move water from below ground level to their leaves, sometimes 100 ft or more above ground level.

The water in the ground is warmed by geothermal activity, stored solar heat, etc.

Ground water is ALWAYS at a significantly higher temperature than the local air.

Trees concentrate the heat from the ground. Tree roots transport heat from a large area underground (typically a thousand times the cross-sectional area of the trunk at ground level) and concentrate it into small-diameter tubes of wood that stick out of the ground.

The result is that the air temperature in a forest is significantly higher than the air temperature in the same region where there are no trees.

ALSO: tree ring diameter is linked to the density of trees in the surrounding area.




Monday, 18 January 2010