Wednesday, February 25, 2015

Peter Miesler Helps Expose WUWT Homogenization Insanity, USHCN #2

Although I've been temporarily diverted from my Steele climate science horror story project, I haven't forgotten about it.  For instance, responding to Steele's outlandish January 7th WattsUpWithThat post claiming to "expose USHCN Homogenization insanity," I had promised more information, but there are only so many hours in a day.

Fortunately, today there were a couple of posts over at Skeptical Science that give an excellent review of this topic.  In one, Dr. Kevin Cowtan, a bona fide expert in the field, explains weather station calibration adjustments: why and how they're done, along with a comparison of adjusted to unadjusted data.  

The second bulletin inventories rebuttals to two recent articles by Christopher Booker published in the UK's Daily Telegraph claiming that climate scientists have nefariously manipulated temperature data in order to propagate the "myth of manmade climate change".

With thanks to them, I'll check this off my list and let Dr. Cowtan and John Hartz take it from here:

Updated 3/27/2015 with this addition:

Two new reviews of the homogenization methods used to remove non-climatic changes

By coincidence, this week two initiatives have been launched to review the methods used to remove non-climatic changes from temperature data. One initiative was launched by the Global Warming Policy Foundation (GWPF), a UK free-market think tank. The other by the Task Team on Homogenization (TT-HOM) of the Commission for Climatology (CCl) of the World Meteorological Organization (WMO). Disclosure: I chair the TT-HOM. 
The WMO is one of the oldest international organizations and has meteorological and hydrological services in almost all countries of the world as its members. The international exchange of weather data has always been important for understanding the weather and to make weather predictions. Its main role is to provide guidance and define standards to make collaboration easier. The CCl coordinates climate research, especially when it comes to data measured by the national weather services.  
The review on homogenization, which the TT-HOM will write, is thus mainly aimed at helping national weather services produce better quality datasets to study climate change. This will allow the weather services to provide better climate services to help their nations adapt to climate change.

Homogenization is necessary because much has happened in the world: the French and industrial revolutions, two world wars, the rise and fall of communism, and the start of the internet age. Inevitably, many changes have occurred in climate monitoring practices. Many global datasets start in 1880, the year toilet paper was invented in the USA and three decades before the Model T Ford. 
As a consequence, the instruments used to measure temperature have changed, the screens that protect the sensors from the weather have changed, and the surroundings of the stations have often changed, with stations moved in response. These non-climatic changes in temperature have to be removed as well as possible to make more accurate assessments of how much the world has warmed.  
Removing such non-climatic changes is called homogenization. For the land surface temperature measured at meteorological stations, homogenization is normally performed using relative statistical homogenizing methods. Here a station is compared to its neighbours. If the neighbour is sufficiently nearby, both stations should show about the same climatic changes. Strong jumps or gradual increases happening at only one of the stations indicate a non-climatic change. 
If there is a bias in the trend, statistical homogenization can reduce it. How well trend biases can be removed depends on the density of the network. In industrialised countries a large part of the bias can be removed for the last century. In developing countries and in earlier times, removing biases is more difficult and a large part may remain. Because many governments unfortunately limit the exchange of climate data, the global temperature collections can likewise remove only part of the trend biases.
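To illustrate the relative homogenization idea described above, here is a minimal sketch on synthetic data. The break-detection statistic is a simple two-sample mean comparison on the candidate-minus-neighbour difference series, standing in for the more sophisticated tests operational methods use; the station series, jump size, and function names are all made up for the example:

```python
import numpy as np

def detect_break(candidate, neighbour):
    """Locate the most likely non-climatic break in a candidate series
    by comparing it with a nearby reference station.

    Both inputs are 1-D arrays of annual mean temperatures for the same
    years.  Returns the index of the most likely break and the size of
    the jump (candidate relative to neighbour)."""
    diff = np.asarray(candidate, float) - np.asarray(neighbour, float)
    n = len(diff)
    best_k, best_stat = None, -1.0
    # Scan every possible break position; a real inhomogeneity shows up
    # as a shift in the mean of the difference series, while the shared
    # climate signal cancels out.
    for k in range(2, n - 2):
        left, right = diff[:k], diff[k:]
        pooled = np.sqrt((left.var(ddof=1) * (k - 1) +
                          right.var(ddof=1) * (n - k - 1)) / (n - 2))
        stat = abs(left.mean() - right.mean()) / (pooled + 1e-12)
        if stat > best_stat:
            best_k, best_stat = k, stat
    jump = diff[best_k:].mean() - diff[:best_k].mean()
    return best_k, jump

# Synthetic example: two stations share the same climate signal, but the
# candidate picks up a +0.8 degree jump (e.g. a station move) in year 30.
rng = np.random.default_rng(0)
climate = np.linspace(0.0, 0.5, 60) + rng.normal(0, 0.2, 60)
neighbour = climate + rng.normal(0, 0.1, 60)
candidate = climate + rng.normal(0, 0.1, 60)
candidate[30:] += 0.8

k, jump = detect_break(candidate, neighbour)
```

Note that the shared warming trend in `climate` drops out of the difference series, which is exactly why a gradual climatic change is not mistaken for a break.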


Telegraph {and Jim Steele} wrong again on temperature adjustments
Posted on 24 February 2015 by Kevin Cowtan

There has been a vigorous discussion of weather station calibration adjustments in the media over the past few weeks. While these adjustments don't have a big effect on the global temperature record, they are needed to obtain consistent local records from equipment which has changed over time. 

Despite this, the Telegraph has produced two highly misleading stories {that are bouncing around the echo-chamber} about the station adjustments, the second including the demonstrably false claim that they are responsible for the recent rapid warming of the Arctic.

In the following video I show why this claim is wrong. But more importantly, I demonstrate three tools to allow you to test claims like this for yourself.

The central error in the Telegraph story is the attribution of Arctic warming (and somehow sea ice loss) to weather station adjustments. 

This conclusion is based on a survey of two dozen weather stations. But you can of course demonstrate anything you want by cherry picking your data, in this case in the selection of stations. The solution to cherry picking is to look at all of the relevant data - in this case all of the station records in the Arctic and surrounding region. I downloaded both the raw and adjusted temperature records from NOAA, and took the difference to determine the adjustments which had been applied. 

Then I calculated the trend in the adjustment averaged over the stations in each grid cell on the globe, to determine whether the adjustments were increasing or decreasing the temperature trend. The results are shown for the last 50 and 100 years in the following two figures:

Trend in weather station adjustments over the period 1965-2014, averaged by grid cell. Warm colours show upward adjustments over time, cold colours downward. For cells with fewer than 50 years of data, the trend is over the available period.

Trend in weather station adjustments over the period 1915-2014, averaged by grid cell. Warm colours show upward adjustments over time, cold colours downward. For cells with fewer than 100 years of data, the trend is over the available period.
The majority of cells show no significant adjustment. The largest adjustments are in the high Arctic, but they are downwards, i.e. they reduce the warming trend.
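The calculation described above (difference the adjusted and raw records, then trend the adjustment within each grid cell) can be sketched roughly as follows. The coordinates, series, and cell size below are synthetic placeholders for illustration, not real GHCN data or the exact gridding Cowtan used:

```python
import numpy as np
from collections import defaultdict

def adjustment_trend(years, raw, adjusted):
    """Linear trend (degrees per decade) of the adjustment applied to
    one station, i.e. of the adjusted-minus-raw difference series."""
    adj = np.asarray(adjusted, float) - np.asarray(raw, float)
    slope = np.polyfit(years, adj, 1)[0]  # degrees per year
    return slope * 10.0

def grid_cell(lat, lon, size=5.0):
    """Index of the size-degree grid cell containing (lat, lon)."""
    return (int(np.floor(lat / size)), int(np.floor(lon / size)))

def cell_averages(stations, size=5.0):
    """Average the adjustment trend over the stations in each cell.

    `stations` is a list of (lat, lon, years, raw, adjusted) tuples."""
    cells = defaultdict(list)
    for lat, lon, years, raw, adjusted in stations:
        cells[grid_cell(lat, lon, size)].append(
            adjustment_trend(years, raw, adjusted))
    return {cell: float(np.mean(trends)) for cell, trends in cells.items()}

# Toy example: one Arctic station whose adjustment cools the trend,
# like the high-Arctic cells in the maps.
years = np.arange(1965, 2015)
raw = 0.02 * (years - 1965)               # raw record warms 0.2 C/decade
adjusted = raw - 0.005 * (years - 1965)   # adjustment grows downward over time
result = cell_averages([(70.0, -20.0, years, raw, adjusted)])
```

A positive cell value means the adjustments steepened the warming trend; a negative value, as in this toy Arctic station, means they reduced it.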

This is the opposite of what is claimed in the Telegraph story. You can check these stations using the GHCN station browser.

The upward adjustments to the Iceland stations, referred to in the Telegraph, predate the late 20th century warming. They occur mostly in the 1960s, so they only appear in the centennial map. Berkeley Earth shows a rather different pattern of adjustments for these stations.

Iceland is a particularly difficult case, with a small network of stations on an island isolated from the larger continental networks. The location of Iceland with respect to the North Atlantic Drift, which carries warm water from the tropics towards the poles, may also contribute to the temperature series being mismatched with records from Greenland or Scotland. 

However given that the Iceland contribution is weighted according to land area in the global records, the impact of this uncertainty is minimal. Global warming is evaluated on the basis of the land-ocean temperature record; the impact of adjustments on recent warming is minimal, and on the whole record it is small compared to the total amount of warming. As Zeke Hausfather has noted, the land temperature adjustments in the early record are smaller than and in the opposite direction to the sea surface temperature adjustments.

Impact of the weather station adjustments on the global land-ocean temperature record, calculated using the Skeptical Science temperature record calculator in 'CRU' mode.
Manual recalibration of the Iceland records may make an interesting citizen science project. 

Most of the stations show good agreement since 1970; however, they diverge in the earlier record. The challenge is to work out the minimum number of adjustments required to bring them into agreement over the whole period. But the answer may not be unique, and noise and geographical differences may also cause problems. 

To facilitate this challenge, I've made annualized data available for the eight stations as a spreadsheet file.

In the video I demonstrate three tools which are useful in understanding and evaluating temperature adjustments:
  • A GHCN (global historical climatology network) station report browser. GHCN provide graphical reports on the adjustments made to each station record, but you need to know the station ID to find them. I have created an interactive map to make this easier.
  • The Berkeley Earth station browser. The Berkeley Earth station reports provide additional information to help you understand why particular adjustments have been made.
  • The Skeptical Science temperature record calculator. This allows you to construct your own version of the temperature record, using either adjusted or unadjusted data for both the land and sea surface temperatures.
Data for the temperature calculator may be obtained from the following sources:
  • The GHCN station data. For daily average temperature data, look for the 'tavg' files. The 'qca' files are adjusted temperatures, and the 'qcu' files are unadjusted temperatures. The files are stored in '.tar.gz' archives, and may be extracted using most freely available unzip software.
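Since the GHCN files ship as '.tar.gz' archives, no special unzip software is strictly needed: Python's standard library handles them. A small self-contained sketch follows; it builds a tiny stand-in archive so it can run anywhere, and the demo filenames are placeholders, not the actual NOAA archive names:

```python
import io
import os
import tarfile
import tempfile

def extract_archive(archive_path, dest):
    """Extract a gzipped tar archive (like GHCN's .tar.gz downloads)
    into `dest` and return the list of member names it contained."""
    with tarfile.open(archive_path, mode="r:gz") as tar:
        names = tar.getnames()
        tar.extractall(path=dest)
    return names

# Self-contained demo: build a tiny .tar.gz standing in for a GHCN
# download, then extract it.  For a real download you would call
# something like extract_archive("your_downloaded_file.tar.gz", ".").
workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "demo.tar.gz")
with tarfile.open(archive, mode="w:gz") as tar:
    data = b"dummy station data"
    info = tarfile.TarInfo(name="demo.tavg.qca.dat")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

names = extract_archive(archive, workdir)
```

After extraction, the 'qca' (adjusted) and 'qcu' (unadjusted) data files can be opened as plain text.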
Finally, here are some interesting papers discussing why adjustments are required.
  • Menne et al. (2009), The U.S. Historical Climatology Network monthly temperature data, version 2.
  • Böhm et al. (2010), The early instrumental warm-bias: a solution for long central European temperature series 1760–2007.
  • Brunet et al. (2010), The minimization of the screen bias from ancient Western Mediterranean air temperature records: an exploratory statistical analysis.
  • Ellis (1890), On the difference produced in the mean temperature derived from daily maximum and minimum readings, as depending on the time at which the thermometers are read.

Posted on 24 February 2015 by John Hartz

This bulletin inventories rebuttals to two recent articles by Christopher Booker published in the UK's Daily Telegraph claiming that climate scientists have nefariously manipulated temperature data in order to propagate the "myth of manmade climate change".

This bulletin also functions as a supplementary reading list to two recently posted SkS articles rebutting Booker's false claims and innuendos: 

  • by Dave Levitan, Watch, Feb 14, 2015
  • by Peter Sinclair, Climate Crock of the Week, Feb 10, 2015
  • Research by Denise Robbins, Media Matters, Feb 10, 2015
  • The Planet Oz by Graham Readfearn, The Guardian, Feb 12, 2015
  • by Victor Venema, Variable Variability, Feb 10, 2015
  • by Dana Nuccitelli, Climate Consensus - the 97%, The Guardian, Feb 11, 2015
  • by Brad Friedman, Brad's Blog, Feb 19, 2015
  • by John Greenberg, Feb 13, 2015
  • Media Matters, Feb 10, 2015
  • by Rasmus Benestad, Real Climate, Feb 11, 2015
  • The Carbon Brief, Feb 10, 2015
  • by ClimateDenierRoundup, Daily Kos, Feb 10, 2015
  • by John Timmer, Ars Technica, Feb 9, 2015
  • by Neville Nicholls, The Conversation US Pilot, Jan 27, 2015

Note: If you would like to learn more about how scientists measure, compute, and display the average annual mean global temperature of the Earth's lower troposphere, check out the following four-part series of SkS articles:  

  • by Glenn Tamblyn, Skeptical Science, May 29, 2011
  • by Glenn Tamblyn, Skeptical Science, May 30, 2011
  • by Glenn Tamblyn, Skeptical Science, June 4, 2011
  • by Glenn Tamblyn, Skeptical Science, June 5, 2011

1 comment:

citizenschallenge said...

Here's an interesting article summarizing the decision process in determining El Niño conditions. It's another insight into how real scientists approach these issues.

"March 2015 ENSO discussion: El Niño is here"
by Emily Becker