Guest post in Guardian on microsite influences
Posted on 28 January 2010 by John Cook
After the recent post on microsite influences, I was asked to write on the subject for the Guardian Environment Blog. So here it is: Climate sceptics distract us from the scientific realities of global warming (I didn't write that headline, btw; I suggested "On measuring temperature: how data analysis trumps photographs", but apparently headlines with the phrase "data analysis" just aren't sexy enough). It's basically a less technical version of the original blog post, along with an introduction to the concept of microsite influences (while studiously avoiding the term 'microsite influences'). The one thing the article does do, I believe, is explain more succinctly how poorly sited weather stations produce a cooler trend:
The cause of this cooling bias appears to have been a change in instruments. In the late 1980s, many sites converted from Cotton Region Shelters (CRS, otherwise known as Stevenson Screens) to electronic Maximum/Minimum Temperature Systems (MMTS). This had two effects. Firstly, MMTS sensors record lower daily maximums compared to their CRS counterparts. So the switch from CRS to MMTS sensors caused a cooling bias in certain stations.
Secondly, the MMTS sensors were attached by cable to an indoor readout device. Limited by cable length, the MMTS weather stations were often located closer to buildings and other artificial sources of heat. This meant most of the stations with the newer MMTS sensors also happened to fall into the poorly sited categories. The net result is that poor stations show an overall cooler trend compared with good stations.
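To make the logic concrete, here is a minimal sketch in Python. It uses synthetic, made-up anomaly series rather than real station data, and the 1988 step size and station groups are assumptions for illustration only; it simply shows how a late-1980s step change in the poorly sited group pulls their least-squares trend below that of the well sited group:

```python
# Illustrative only: synthetic annual anomalies, not real station data.
import numpy as np

def trend_per_decade(years, anomalies):
    """Least-squares slope of the anomaly series, in degrees C per decade."""
    return np.polyfit(years, anomalies, deg=1)[0] * 10.0

years = np.arange(1980, 2010)
rng = np.random.default_rng(0)
warming = 0.02 * (years - years[0])                # shared ~0.2 C/decade signal

# Well sited stations: warming signal plus noise.
good = warming + rng.normal(0, 0.1, years.size)

# Poorly sited stations: same climate, but the late-1980s CRS-to-MMTS switch
# is modelled as a step that lowers recorded values from 1988 onwards.
mmts_step = np.where(years >= 1988, -0.15, 0.0)
poor = warming + mmts_step + rng.normal(0, 0.1, years.size)

print(f"good stations: {trend_per_decade(years, good):+.2f} C/decade")
print(f"poor stations: {trend_per_decade(years, poor):+.2f} C/decade")
```

The poor-station trend comes out lower even though both groups see the same underlying warming, which is the cooling bias described above.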
Anyway, it's weird excerpting my own writing so go to the Guardian blog to read the full article. I will say one thing - I'm glad I'm not moderating comments on that website. If you consider the behaviour in most online climate discussions, Skeptical Science users are well above the bell curve as far as constructive scientific dialogue goes. Pat yourselves on the back, people!
While this result was initially met with dismay, Watts rallied and criticised it, saying it was based on only a small percentage of stations having been rated. I believe that some time after this, Anthony Watts made the station ratings data unavailable to prevent further analyses comparing good and poor weather stations, but I'm not sure of the timing.
The next analysis came from NOAA, which compared only the good stations to the total record (NOAA 2009):
Again, the trends are near identical (you expect some discrepancy as the two records cover slightly different regions). Watts attributed this agreement to homogenisation (data adjustment) of both the good-station data and the full dataset. That's why Menne 2010 is interesting: it uses unadjusted data, and this is where the cooling bias is revealed.
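As a rough illustration of why the comparisons play out this way (a sketch on synthetic data, not NOAA's or Menne et al.'s actual homogenisation algorithm; the breakpoint year, the good/poor mix and the step size are all assumptions), the snippet below computes trends for a good-station subset and a full network before and after a crude step adjustment:

```python
# Illustrative only: synthetic series; the 1988 breakpoint and the 30/70
# good/poor mix are assumptions, and the adjustment below is a crude
# stand-in for real pairwise homogenisation.
import numpy as np

def trend_per_decade(years, series):
    """Least-squares trend in degrees C per decade."""
    return np.polyfit(years, series, deg=1)[0] * 10.0

years = np.arange(1980, 2010)
rng = np.random.default_rng(1)
climate = 0.02 * (years - years[0])                       # shared warming signal

good = climate + rng.normal(0, 0.05, years.size)          # well sited subset
step = np.where(years >= 1988, -0.15, 0.0)                # MMTS-style cool step
poor = climate + step + rng.normal(0, 0.05, years.size)   # poorly sited stations
full_raw = 0.3 * good + 0.7 * poor                        # most stations are poor

# Crude homogenisation: estimate the step in (poor - good) at the breakpoint
# and remove it from the poorly sited series.
diff = poor - good
offset = diff[years >= 1988].mean() - diff[years < 1988].mean()
poor_adj = poor - np.where(years >= 1988, offset, 0.0)
full_adj = 0.3 * good + 0.7 * poor_adj

print(f"unadjusted: good {trend_per_decade(years, good):+.2f}, "
      f"full network {trend_per_decade(years, full_raw):+.2f} C/decade")
print(f"adjusted:   good {trend_per_decade(years, good):+.2f}, "
      f"full network {trend_per_decade(years, full_adj):+.2f} C/decade")
```

On the unadjusted data the full network trends cooler than the good subset (the point Menne 2010 draws out); once the step is removed, the two agree, which is the comparison of adjusted records in NOAA 2009.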