Drost, Karoly, and Braganza Find Human Fingerprints in Global Warming
Posted on 11 March 2013 by dana1981
We recently re-examined the physical reality that global warming continues unabated. We have also previously examined a number of studies demonstrating that the observed global warming has primarily been caused by humans, for example by looking for human 'fingerprints' in global warming patterns (Figure 1), or by using physics, statistics, and/or climate models to determine the causes of the warming.
Figure 1: Various human-caused global warming 'fingerprints'
In this post we examine a paper published in Climate Dynamics, Drost, Karoly, and Braganza 2012 (DKB12), which uses the former approach, looking for specific 'fingerprints' of human-caused global warming.
Methodology and Data
DKB12 notes that there are several measurements of global-scale temperature variations besides average global surface air temperature (GM) which can be used to distinguish between natural and human-caused global warming. Some of these indices include:
"...the land–ocean temperature contrast (LO), the Northern Hemisphere meridional temperature gradient (MTG), the magnitude of the annual cycle of average temperatures over land (AC) and the hemispheric temperature contrast (NS)"
These measurements have been previously used to show that humans are the primary cause of the current global warming (e.g. Braganza et al. 2003 and 2004). DKB12 expands on those previous studies to include data from the past 10 years and determine if evidence for the human-caused 'fingerprints' has grown, and also to test the accuracy of climate models from the World Climate Research Programme’s Coupled Model Intercomparison Project phase 3 (CMIP3) in predicting these changes.
DKB12 uses temperature data from GISS, NCDC, and HadCRUT3v, and they examined a subset of the CMIP3 models:
"...that had submitted at least one simulation for the Pre-Industrial Control scenario (PICNTRL) and multiple simulations for the twentieth century (20C3M) and emission scenario A1B (SRESA1B), a midrange future emission scenario. The reason for restricting the analysis to data only from models that fit these criteria is that multiple output from the same model for the 20C3M and A1B scenario will provide an indication of the model’s internal variability."
DKB12 notes that Braganza et al. (2004) were able to detect statistically significant trends in their analysis of the indices over the period 1950–1999 in both observations and model data, so they compare those results to the most recent 50-year period (1961–2010) to determine if the trends have now become even more statistically significant.
First they ran control simulations to provide the 5–95% confidence interval for natural variability of 50-year trends for each index for a single climate realization. DKB12 notes that
"If observational trends in the indices are outside this interval then it is most likely that they can not be attributable to natural variability."
In other words, if the trend in these indices is larger than the spread of trends in the control runs which don't include a human-caused global warming component, then those trends are probably not due to natural variability alone.
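To make that test concrete, here is a minimal Python sketch of the approach, using purely synthetic noise in place of actual CMIP3 control-run output (the 50-year window and 5–95% percentiles follow the paper; everything else is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for an unforced control run: 500 years of annual index values
# with mild persistence (synthetic noise, not actual CMIP3 output).
n_years = 500
noise = rng.normal(0.0, 0.1, n_years)
control = np.convolve(noise, np.ones(5) / 5, mode="same")

# Collect all overlapping 50-year trends from the control run.
window = 50
t = np.arange(window)
trends = []
for start in range(n_years - window + 1):
    slope = np.polyfit(t, control[start:start + window], 1)[0]  # deg C/year
    trends.append(slope * 10)                                   # deg C/decade

# The 5-95% range of 50-year trends expected from natural variability alone.
lo, hi = np.percentile(trends, [5, 95])
observed = 0.14  # deg C/decade, roughly the observed GM trend
print(f"natural-variability range: [{lo:+.3f}, {hi:+.3f}] C/decade")
print("observed trend outside natural range:", not (lo <= observed <= hi))
```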
Looking for the Human Fingerprint
The models and data for each index are compared in Figure 2, and the trends are compared in Figure 3.
Figure 2: The temporal evolution of the mean (dash-dot line), one standard deviation (dark grey shaded area), and the minimum and maximum range (light grey shaded area) of the indices determined for all historical simulations for the 8 models. The 3 thin light-grey lines in each graph are the values for the indices derived from the 3 observational datasets used in this study. The twentieth century simulation data were extended with data from the SRESA1B simulations. Figure 3 from Drost, Karoly, and Braganza (2012).
Figure 3: Trends in the indices in the observations (A, B, C) and in all the historical simulations of the model data (1–8) for the period 1961–2010 at annual time scales. Listed along the x axis are: Observations (black squares) A = NCDC, B = HadCRUT3v, C = GISS. Models (grey circles): The numbers refer to the models as listed in Table 1 of DKB12. The shaded area marks the 5–95% confidence interval for no trend in each index. Figure 4 from Drost, Karoly, and Braganza (2012).
Comparing the black squares (trends in the observational data) to the shaded area (the 5–95% confidence interval from the control runs) in Figure 3 is key to determining whether the observed trends in these indices can be attributed to human influences. DKB12 concludes as follows:
- All three observational datasets clearly indicate that the trend in global mean surface temperature (GM) and LO exceed the 5–95% confidence interval determined from the control simulations (shaded region).
- The mean value of the three observational trends in the indices MTG and AC exceeds the 95% significance level.
- The NCDC and HadCRUT3v datasets show a trend in NS that does not exceed the 5–95% confidence interval. However, the 95% significance level lies within the margin of error of the mean value of the three observational trends in NS, and the observational ensemble mean trend in NS is therefore either significant at, or very near the 95% significance level.
It's worth noting that for NS, MTG, and AC, the HadCRUT3v trend (labeled 'B' in Figure 3) does not fall outside the 'no trend' 95% significance level. However, HadCRUT3v has since been superseded by HadCRUT4 and is the least reliable of the three observational datasets, having a known cool bias in recent decades. DKB12 concludes:
"...these results indicate that the observational trends in the indices have gained significance over the last decade. Furthermore, as this analysis uses three observational datasets, our results have higher confidence as our findings are in general robust across the three datasets."
Testing the Models
Regarding the accuracy of models in simulating the changes in these indices (grey circles vs. the shaded region in Figure 3), DKB12 notes,
"Although the range of trends as simulated by the models cover the range of possible trends as indicated by the observational data quite well, there is a tendency for some models to overestimate the trend in GM and underestimate the trend in LO."
Some models (particularly cccma_cgcm3_1 [1 in Figure 3] and ncar_ccsm3_0 [6 in Figure 3]) predict more overall global surface warming than observed, although most models simulate the observed average global surface warming accurately. Due to those overpredictions, the models on average simulate a 0.167°C per decade global surface warming trend over 1961–2010, whereas the observed trend is approximately 0.138 ± 0.028°C per decade, roughly 20% lower.
Some models (particularly mpi_echam5 [4 in Figure 3] and mri_cgcm2_3_2a [5 in Figure 3]) do not adequately simulate the larger surface warming over land as compared to the warming over the oceans.
The 95% significance level for MTG and AC lies within the margin of error of the multi-model ensemble mean trends (1.04 ± 0.13°C per century for MTG and -0.44 ± 0.07°C per century for AC), so the multi-model ensemble mean trends in these indices are either significant at, or very near, the 95% significance level. However, there is a wide spread in individual model simulations for both the Northern Hemisphere meridional temperature gradient trend and the trend in the annual cycle magnitude.
The mean value of all the trends in NS in the simulations is 0.46 ± 0.05°C per century. This means that the multi-model ensemble mean trend in the hemispheric temperature contrast is significant at the 95% level, although similar to MTG and AC, there is a wide spread between individual model NS trend simulations.
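For readers who want to reproduce numbers like the trends quoted above, here is a minimal sketch of an ordinary least-squares trend estimate. The anomaly series is invented; a real analysis would use the GISS/NCDC/HadCRUT data and would also correct the uncertainty for autocorrelation, which widens it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual global-mean anomalies for 1961-2010 (illustrative only).
years = np.arange(1961, 2011)
anoms = 0.014 * (years - 1961) + rng.normal(0.0, 0.1, years.size)

# OLS slope and its standard error, expressed per decade.
slope, intercept = np.polyfit(years, anoms, 1)
resid = anoms - (slope * years + intercept)
s_xx = np.sum((years - years.mean()) ** 2)
se = np.sqrt(np.sum(resid ** 2) / (years.size - 2) / s_xx)

print(f"trend: {10 * slope:.3f} +/- {10 * 2 * se:.3f} C/decade (~2 sigma)")
```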
Ratio of Surface Land and Ocean Temperature Changes
The ratio of land to ocean surface temperature changes (RLO) is another key human 'fingerprint'. With no radiative forcing we expect a ratio of about 1, whereas under global warming scenarios we expect a ratio greater than 1. Joshi and Gregory (2008) showed that RLO varies significantly depending on whether changes in radiative forcings were due to CO2 changes or to natural changes. DKB12 examines the recent RLO trends in both observations and models:
"The mean values for RLO over the period 1990–2010 in the observational datasets are 1.69 (GISS), 1.40 (NCDC) and 1.39 (HadCRUT3v)....The [multi-model] mean value for RLO at 2010 is 1.54 ± 0.04 which sits well within the range of values determined from the observational data."
Increased Evidence for Human-Caused Global Warming
Overall, DKB12 finds increased evidence for human-caused global warming compared to the Braganza studies of the last decade.
"This increased evidence can be described in two ways. Qualitatively we see increased evidence as the multiobservational mean trend in the indices GM, LO, MTG, and AC are all outside the 5–95% confidence interval for natural variability of 50 year trends. The same statement can nearly be said of NS as well, except that the uncertainty estimate of the multi-observational mean trend in NS overlaps with our estimate of the range of intrinsic variability in the index. The fact that the trends in these observational indices have higher significance than in Braganza et al. (2004) reflects increased evidence for anthropogenic climate change."
"...there is also increased evidence for anthropogenic climate change from a [quantitative] point of view as we have greatly increased the amount of data on which we have applied the analysis and we find consistently similar results among all observational and model data. Evaluating all results together has increased our confidence that changes in the climate indices are statistically significant and, following from the attribution studies of Braganza et al. (2003; 2004), that such changes are very likely caused by anthropogenic gas emissions.
This finding is further supported by the analysis of the sixth index, the ratio of warming over land to that over the oceans"
While these results are consistent with previous studies, the most interesting aspect is that this increased evidence for human-caused global warming comes at a time when the average warming of surface air temperatures has temporarily slowed. DKB12 also shows that while there is a wide spread in model simulations of some of these indices, on average the model runs accurately simulate these human global warming fingerprints.
Dana:
Kudos on yet another excellent post.
Question: In scientific circles, is "fingerprints" short-hand for "lines of evidence"?
Also, does your above article update any of the existing SkS Rebuttal articles?
Typo alert, first line, "the physical reality that global warmig continues unabated" - warmig.
John @1 - 'fingerprint' is basically shorthand for 'evidence specific to or consistent with human-caused warming'. Something that we expect to see if humans are causing global warming. I didn't update any rebuttals with this one. I could probably update 'it's not us' with some of this info. I'll let you know if I do.
jsam @2 - thanks, typo fixed.
@dana #3:
You said: Something that we expect to see if humans are causing global warming.
Aren't we also seeing fingerprints that we didn't expect to see? (We certainly are seeing some fingerprints sooner than we had expected to see them.)
John H @4 - climate models should anticipate these 'fingerprints' pretty well. I'm not aware of an example where an observation was determined after-the-fact to be an anthropogenic 'fingerprint', though there might be some examples I'm not aware of.
Of course, the climate "skeptics" takeaway from this will likely be "Although the range of trends as simulated by the models cover the range of possible trends as indicated by the observational data quite well, there is a tendency for some models to overestimate the trend in GM..."
I can see the headline now: "Models overestimate warming, says new paper!"
Sigh.
Maybe this is a little pedantic, but I just want to point out that none of the "fingerprints" in Figure 1 is proof of human-caused global warming by itself, and that each of them could in theory have a natural cause. It's the combination of all these factors together that represents a huge, clear-cut AGW fingerprint.
1) The shrinking and cooling upper atmosphere could be caused by reduced solar activity, but that option is excluded if we accept that the ongoing warming is real.
2) The rising tropopause could be the result of any warming, whether this was caused by increased energy from the sun or decreased energy-loss to space, but 1) rules out the sun as the cause for warming.
3) The pattern of ocean warming (most in the upper layers) rules out geothermal heat, even if it had been physically possible for GH to change significantly during a few decades.
4) Winters and nights warming faster than summers and days, less heat escaping to space and more heat returning to Earth is a clear fingerprint of the greenhouse effect, but it could be the result of a natural increase of CO2 and other GHGs, which has happened several times in the distant past.
5) Less oxygen in the air clearly shows that the extra CO2 comes from burning of organic matter, which rules out volcanoes and oceanic out-gassing. It could in theory be a result of widespread forest fires and other natural decay of organic matter, but would there be any forests left on Earth today if that was the source of about 500 gigatons of extra carbon in the atmosphere and oceans since the industrial revolution?
6) More fossil fuel carbon in the atmosphere, trees and so on (if that refers to the increased C12/C13 ratio) could also be the result of widespread burning and decay of organic matter, but again, we still have a lot of forests on this planet. And I guess the lack of C14 proves that the source for this extra carbon has to be very old organic matter, not recently dead trees.
7) And finally, we know with absolute certainty – as far as anything can be certain in science – that CO2 is a heat-trapping gas, which is nicely demonstrated by John Nielsen-Gammon here.
HK@7,
You missed the simple accounting of the atmospheric carbon mass balance: emissions vs. ΔC_atm, after Cawley (2011) and from the Keeling curve. A preschool child can calculate it (well, if told how to convert Δppm into a mass of carbon), concluding that at least since 1958 (when Keeling started measuring CO2 concentrations) and likely for much longer, nature has always been a net sink of CO2. Therefore nature has always been helping to remove human CO2 from the atmosphere, not adding to it.
That, combined with your point 7), is the bottom line of evidence establishing the causation of AGW. The rest of your points (as well as this study) are just icing on the cake. That's it: you don't need any knowledge beyond primary-school arithmetic to understand it.
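That mass-balance arithmetic can be spelled out in a few lines; a minimal sketch with round, illustrative numbers (the standard conversion of roughly 2.13 GtC per ppm of CO2 is assumed):

```python
# Simple carbon mass-balance check (round, illustrative numbers).
PPM_TO_GTC = 2.13           # ~2.13 gigatons of carbon per ppm of CO2

emissions_gtc = 9.0         # approximate human emissions, GtC per year
atmos_rise_ppm = 2.0        # approximate observed CO2 rise, ppm per year

rise_gtc = atmos_rise_ppm * PPM_TO_GTC   # ~4.3 GtC/yr stays in the air
natural_flux = rise_gtc - emissions_gtc  # negative => nature is a net sink

print(f"atmospheric increase: {rise_gtc:.1f} GtC/yr")
print(f"net natural flux: {natural_flux:.1f} GtC/yr (negative = net sink)")
```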
Wow, chriskoz - that's a bright kid! ;-)
HK, rather than needing a combination of all the listed factors, I'd say that none of the items listed proves AGW by itself; rather, various combinations of them do. As chriskoz said, mass balance plus item 7 is one proof. As are 5, 6, and 7 together. Or 4, 5, and 6. Or 1, 2, 3, and 7. Et cetera.
Each of the fingerprints rules out various other possibilities and together different sets of them rule out every possibility except AGW.
8 Chriskoz & 10 CBDunkerson:
It seems that we pretty much agree!
I was also considering the mass balance of carbon, but left it out because it wasn't included in Figure 1. If nature was a net emitter of CO2 it would of course be very hard to explain how 10 gigatons of manmade carbon can disappear without a trace every year, unless we assume that it is converted into energy according to Einstein's famous formula E = mc². (a new climate myth?)
The only problem is that if we put 10 gigatons of mass into that formula, we end up with about 9 × 10²⁹ joules of energy, enough to boil away the oceans nearly 250 times!
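For anyone who wants to check that figure, a quick back-of-the-envelope sketch using round textbook values for the ocean's mass and the heat needed to warm and vaporize it:

```python
# Back-of-the-envelope check: energy from converting 10 Gt of mass,
# versus the energy needed to boil away the oceans once.
C = 3.0e8                          # speed of light, m/s
energy = 1.0e13 * C ** 2           # 10 Gt = 1e13 kg; E = mc^2 ~ 9e29 J

ocean_mass = 1.4e21                # kg, rough mass of the oceans
# Heat water from ~4 C to 100 C, then vaporize it (rough textbook values):
heat_per_kg = 4186 * 96 + 2.26e6   # J/kg, ~2.7e6
boil_energy = ocean_mass * heat_per_kg   # ~3.7e27 J

print(f"E = {energy:.1e} J, enough to boil the oceans "
      f"~{energy / boil_energy:.0f} times")
```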
HK... Let me ask a question. How did you suddenly convert all that mass to energy?
Rob Honeycutt
E (energy in joules) = m (mass in kilograms) × c² (the speed of light in meters/sec, squared)
When mass is multiplied by (3 × 10⁸)², you get a lot of energy!
That’s why 4.3 million tons of mass converted to energy every second is able to power the Sun.
Maybe this has got a little off-topic, but it should make it obvious that many gigatons of manmade carbon can’t disappear without a trace every year and that nature has been a net carbon sink for many decades.
HK... You don't understand my question. I'm not asking about the math. I know the equation.
I'm asking, by what method are you converting the mass to energy. It doesn't just magically change by itself. There has to be a mechanism by which the mass is changed to energy.
More to the point, I'm saying you can't just randomly change mass to energy. That is what you do in a nuclear explosion or a nuclear reactor. You can't say "if" that mass were converted to energy because you can't change the mass of 10 GT of atmospheric carbon into energy.
I'm telling you that you're making a completely meaningless point.
Rob, I think HK's point is that the mass of carbon emitted by humans each year can't just disappear. He brought up conversion to energy as a theoretical example of where someone might argue the mass had gone... but then noted that the amount of energy involved would be too great to have missed (thus disproving that possibility). He was never suggesting that the carbon really did or would transform into energy.
Yes, CBDunkerson, that’s exactly my point!
So, I guess we can agree that the mass balance argument is a very elegant way to prove that the extra CO2 in the atmosphere has to be manmade.
Dana, Thanks for your post on our paper from last year. An obvious question is why did the authors use the CMIP3 models, not the CMIP5 model runs, and the answer is that at the time the paper was completed, in early 2011, there weren't enough CMIP5 model runs to use. When data from the CMIP5 model runs became available, we redid the analysis and made use of the 20th century simulations with different forcings, including all forcings, greenhouse gas increases only, and natural forcing changes only, to verify that our fingerprints are not consistent with natural forcing. The results confirm and strengthen the conclusions you summarise and are available at http://onlinelibrary.wiley.com/doi/10.1029/2012GL052667/abstract
Drost, F., and D. J. Karoly (2012) Evaluating global climate responses to different forcings using simple indices, Geophys. Res. Lett., 39, L16701, 5pp, doi:10.1029/2012GL052667.
CBD and HK... That blew by me. X-|... It makes sense now.
Thanks for the update, Dr. Karoly.
" ... there is a tendency for some models to overestimate the trend in GM and underestimate the trend in LO." I do not belive that is possible to estimate the influence of human on nature in brief period of time. Although I support the opinion for human influence.
The max of the observed trend is 0.138 + 0.028 = 0.166.
What does it mean when the max of the observed trend is less than the model's prediction?
Since this covers 49 - 50 years, it is a substantial amount of time. I would say that the model is out of whack!
Observed trends, Jan, 1961- Dec, 2010:
GISS: 0.151 +/- 0.027 C/decade. (Upper confidence interval: 0.178 C/decade)
NOAA: 0.142 +/- 0.025 C/decade. (Upper confidence interval: 0.167 C/decade)
HadCRUT3: 0.140 +/- 0.029 C/decade. (Upper confidence interval: 0.169 C/decade)
HadCRUT4: 0.139 +/- 0.027 C/decade. (Upper confidence interval: 0.166 C/decade)
So, one out of four temperature indices just fails to scrape in the confidence interval. That index is known to not have global coverage, and in particular to have poor coverage of the Arctic, Asia, and North Africa (all areas showing very high temperatures in 2010). Indeed, the only index of the four to have truly global coverage is also the one that most closely matches the predicted trend.
Kevin does point toward a genuine problem, however, though it is not what he thinks it is. It is about time climate scientists started using a HadCRUT3 (or 4) mask on their predictions when comparing predicted temperatures and trends to the Hadley products. It is known that they do not have global coverage, and it is known that that affects the temperature trends. The continued reliance on Hadley/CRU products without producing a Hadley-masked prediction is the equivalent of comparing North American continent temperature predictions to USHCN CONUS temperature products. It is not a prediction of the thing being measured.
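For illustration, masking model output to an observational product's coverage is conceptually simple; a minimal sketch (the grid shape and coverage pattern are invented, not the actual HadCRUT mask):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model anomaly field and observational coverage mask on a
# coarse lat-lon grid (True where the obs product has data). A real
# calculation would also area-weight each cell by cos(latitude).
model_field = rng.normal(0.5, 0.3, size=(36, 72))   # deg C anomalies
obs_has_data = rng.random((36, 72)) > 0.2           # ~80% coverage

full_mean = model_field.mean()                  # mean over all grid cells
masked_mean = model_field[obs_has_data].mean()  # mean over observed cells only

print(f"full-coverage mean: {full_mean:.3f} C")
print(f"obs-masked mean:    {masked_mean:.3f} C")
# With real data the two differ because poorly observed regions
# (e.g. the Arctic) are warming faster than the global average.
```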
Tom Curtis,
I did not specify which observed trend, the Author did that. Regardless, from your data, while only one doesn't encompass the models, another has the upper limit right on the model's prediction, and another just above it (0.002) at 0.169. This still shows a problem with the models. It is not just due to a small sampling time.
Kevin:
1) I am aware the authors chose the index to compare with. What I am saying is that it is a wrong choice for straightforward reasons.
2) Only one Global Mean Surface Temperature record is in fact global. The NCDC record does not include the poles, for example. Therefore, when comparing with NCDC, an NCDC mask of the model results should be used.
3) The meaning of statistical significance is that if the observations lie within the 95% confidence intervals of the prediction, the theory is not falsified by the data. If they exceed it, it may be falsified given certain other conditions. Saying that an index being very close to the limit shows a problem simply means you do not understand statistical significance. This is especially so as you have reversed the appropriate comparison by comparing the mean of the prediction with the confidence limit of the observations (it should be the other way round).
4) If you look at the GM section of Figure 2, it is very clear that all three indices used lie, for the most part, within the 1 sigma (~68%) confidence interval of the predictions. I know that you are desperate to beat that fact into a "falsification" of the models, but all that is being falsified is any belief that you are capable of a sensible analysis.
Tom,
I didn't say anything about falsification.
But you didn't comment on this, except in regards to my comment. Why?
Same as above. You have a problem with the paper, but point it out when commenting on my comment.
I was not trying to say anything "statistically speaking"; I was just pointing out, using the comparison the author chose, that the trends the models predict do not seem to be that good.
Kevin:
There's your problem right there. You are making a claim about trends that are computed using statistical techniques. So if you're not trying to say anything about the statistics, your claim won't be particularly convincing.
Composer99,
Have you expressed these statistical concerns to the author? After all, it was the author who compared an averaged trend with the observed trend. As noted earlier by Tom Curtis, these trends are from different models, and averaging them isn't the best thing.
I don't have all the data. I don't want to do all the calculations. I don't need to. I, again, was just making the point that the author's chosen comparison does not help make his point.
As noted above, the author made a comparison of an average trend to the observed trend. It is interesting that his average does not include any +/- range, which calls into question the statistical legitimacy of this averaging. As such, any comment regarding this comparison does not require a statistical test.
My claim doesn't have to be particularly convincing, the data already is!
Kevin:
Since when is it my responsibility to report to the paper's authors (or, since your claim follows from the OP text rather than from the paper, to Dana) what Tom feels are issues with the way the paper handles the observational datasets?
What I was taking issue with was not the content of the paper itself, but your comment upthread, which you defended because you weren't "trying to say anything 'statistically speaking'".
You are questioning the quantified analysis using... what, exactly? Your gut feelings?
As I said, not very convincing.
Kevin, since you keep going on about short-term trends (the flattish last 10 years), let's see if I understand what you mean.
Am I correct that, deep down, you reject the idea that the trend is mostly due to a negative/neutral ENSO state and believe it is due to some other part of the climate system, and furthermore, that if we only understood this "other part" of the climate system we would realise AGW isn't the problem that we thought? Is this what you believe?
Or alternatively, do you believe that ENSO has undergone a fundamental change (something models should have found but haven't) and that it will remain mostly low and temperatures will be stable from now on?
Composer99,
What I was using is the reality of the facts.
The observable trend over the period is 0.138 +/- 0.028 degrees C/decade.
The reported average trend is 0.167 degrees C/decade.
The fact is 0.138 + 0.028 = 0.166.
The fact is 0.166 is less than 0.167.
All I'm saying is that it is this article that is not very convincing.
Kevin, a model projection is an estimate, from basic principles (Planck's law, Newton's laws of motion and gravitation, the laws of thermodynamics), known current conditions, and projections of future forcings, of the future changes in the climate system. Because of limited computer power they must be run at resolutions in which micro-behaviour is not modelled, where micro-behaviour includes such things as tornadoes and hurricanes. As a result, such micro-behaviour must be matched to the resolution of the model by parametrization. Further, there is uncertainty about the exact values of some current conditions. Each model represents an estimate of the correct parametrization and values of uncertain conditions. Those estimates are not predicted by theory, and though modellers try to constrain them with observations, they cannot entirely do so.
The result is that our best prediction from basic physical principles is uncertain. Each model represents a sample from the range of possible parametrizations given current knowledge, and hence provides a sample from the range of possible predictions from basic physics given our current limitations in computer capacity and knowledge.
Because of that, our best possible prediction from basic physics is determined by the statistical properties of the ensemble of models. As such, our best prediction is the mean of the ensemble, with the uncertainty of the prediction being a function of the range of the predictions by individual models.
If you look at the GM section of figure 3 above, you will see that the mode of the distribution of GM trend predictions is very close to the values observed, but that two models drag the mean away from the mode. The distribution is skewed. In that situation I would have thought it was better to quote the median model trend rather than the mean of the trends, but there are certainly other ways to show this data, including (as the authors did) showing the full range of model projections relative to the observed trends. When you look at that comparison, it becomes obvious that the observations have not falsified the ensemble prediction. Not even close!
In that context, you are focusing on a single comparison to the exclusion of the full range of data presented to try and create the impression that there is a very large discrepancy between the ensemble prediction and observations. In fact, there is only a small discrepancy between ensemble predictions and observations because the observations lie close to the mode (and median) of the individual predictions within the ensemble. That the distribution of the ensemble predictions is skewed needs to be conveyed because science does not proceed by only noting the points that help you make a point, and that fact was conveyed both by Figure 3 and by the note about the mean.
You, however, faced with a useful discussion of the full issue, have chosen to ignore the majority of the data presented to make a case that is not supported by the full range of data. It seems to be a specialty of yours.
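Tom's point about the skewed ensemble is easy to see numerically; a toy example with invented per-model trends (not the actual Table 1 values):

```python
import numpy as np

# Invented per-model GM trends (C/decade): most cluster near 0.14-0.16,
# while two high outliers drag the mean upward, as in Figure 3.
model_trends = np.array([0.13, 0.14, 0.14, 0.15, 0.15, 0.16, 0.22, 0.25])

print(f"mean:   {model_trends.mean():.3f} C/decade")      # ~0.168
print(f"median: {np.median(model_trends):.3f} C/decade")  # 0.150
# The median sits closer to the observed ~0.138, illustrating why the
# skew of the ensemble matters when quoting a single central value.
```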
Kevin:
Look at what the article said:
>> Some models (particularly cccma_cgcm3_1 [1 in Figure 3] and ncar_ccsm3_0 [6 in Figure 3]) predict more overall global surface warming than observed, although most models simulate the observed average global surface warming accurately. Due to those overpredictions, on average the models simulate a 0.167°C per decade average global surface warming trend from 1961-2010, whereas the observed trend is approximately 0.138 ± 0.028°C per decade, approximately 20% lower.
As Tom and/or others pointed out:
a) It appears that some models are off from the others. If we remove those stray cases, the ensemble average gets rather close to the "observed trend". The study highlights that point, perhaps suggesting future improvements to IPCC projections might be in filtering out the models that are far off the mode before calculating the new mean. [Haven't read the paper.]
b) The error bars you quoted are, I think, from our attempt to pin down the observed trends, because there is inherently error in observation. They aren't the error bars of the models. If the observations were exact, there would be no error bars around that 0.138 value. On the other hand, a particular model ensemble might predict a trend of 0.167/decade with, say, a 95% confidence envelope through the first 3 decades of +/- 0.1. So if we had this model and the current observed values with error bars, then we'd have this: the observed might be as high as 0.138+0.028=0.166 while the model predicts that the temp might be as low as 0.167-0.1=0.067. In this case, we have that the actual temp -- best we can observe -- is possibly much higher than the lower bounds of the models.
Tom Curtis 31 >> That the distribution of the ensemble predictions is skewed needs to be conveyed because science does not proceed by only noting the points that help you make a point, and that fact was conveyed both by Figure 3 and by the note about the mean.
OK, so maybe the paper wasn't suggesting that the models tending far from the average be removed (contrary to what I guessed in Jose_X 32).
Kevin @30,
Do you realise your mistake in post #30 now? Putting aside that your objection is based on picking just one model, and misses the big picture, you have the mathematical argument exactly backwards: you are using the wrong confidence interval, as others have pointed out. If you can't see this, further discussion is appropriate.
The problem with throwing out all these spurious Gish-Gallop-style objections is that some readers might find your simplistic 'facts' easier to follow than the actual statistical argument that follows. Some acknowledgement of your errors, or at least further discussion of where you got confused, might be appropriate to show that your post @30 is not simply a trolling exercise.