Conservative media can't stop denying there was no global warming 'pause'
Posted on 10 January 2017 by dana1981
Scientists have proven time and time again that global warming continues unabated. Most recently, a study published last week showed that over the past two decades, the oceans have warmed faster than prior estimates. This study affirmed the findings of a 2015 NOAA paper – not surprisingly attacked by deniers – that removed a cool bias in the data, finding there never was a global warming “pause.”
This particular myth has been a favorite of deniers over the past decade for one simple reason – if people can be convinced that global warming stopped, they won’t consider it a threat that we need to urgently address by cutting fossil fuel consumption. It’s thus become one of the most common myths peddled by carbon polluters and their allies.
One of those allies is the anti-climate policy advocacy group Global Warming Policy Foundation (GWPF), which tries to make the case that aggressive climate policy isn’t needed. This weekend, its “science” editor David Whitehouse wrote for the conservative UK Spectator periodical – which often promotes climate denial – denying that the “pause” is dead:
their case rests on the El Nino temperature increase and will be destroyed when the El Nino subsides, as it is currently doing. A temporary victory over the ‘pause’.
The ‘pause’ can be accommodated into global warming – but not for very much longer. The world’s temperature has to increase outside the El Nino effect.
Testing the myth
If Whitehouse is correct and temperatures are not increasing outside the El Niño effect, then 2015 and 2016 should be no hotter than previous El Niño years. It’s a relatively simple test to run. In the video below, I’ve broken out the temperature data into years with an El Niño warming influence, years with a La Niña cooling influence, and neutral years.
Whitehouse’s argument immediately crumbles. Among just El Niño years over the past five decades, there’s a 0.18°C per decade warming trend. Among La Niña years it’s also 0.18°C per decade, and among neutral years it’s 0.16°C per decade. And recent years aren’t falling below the long-term trend lines.
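For readers who want to try this themselves, here is a minimal sketch of the same test in Python. The anomaly values and ENSO labels below are synthetic stand-ins (a real run would use annual anomalies from a dataset such as NASA GISTEMP and classify years with an index such as NOAA's ONI); the point is only to show the mechanics of fitting a separate trend within each ENSO category.

```python
import numpy as np

# Synthetic stand-ins: one global mean anomaly (C) and one ENSO label per year.
# A real analysis would use e.g. GISTEMP anomalies and NOAA's ONI classification.
rng = np.random.default_rng(0)
years = np.arange(1966, 2017)
anoms = 0.017 * (years - 1966) + rng.normal(0, 0.1, years.size)
enso = rng.choice(["el nino", "la nina", "neutral"], years.size)

# Fit an ordinary least-squares trend within each ENSO category separately.
for phase in ["el nino", "la nina", "neutral"]:
    mask = enso == phase
    slope = np.polyfit(years[mask], anoms[mask], 1)[0]
    print(f"{phase}: {10 * slope:+.2f} C/decade")
```

If warming were purely an El Niño artifact, the La Niña-only and neutral-only trends would be flat; in the real data all three categories warm at similar rates.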
In fact, 2016 is well above the El Niño trend line, as was 1998, because both saw particularly strong El Niño events. However, 2016 was 0.35°C hotter than 1998. How is it that the “pause” supposedly started in 1998, any subsequent warming is supposedly due to El Niño, and yet 2016 was 0.35°C hotter than 1998?
The answer is that global warming has continued unabated over the past 18 years. There are of course natural temperature influences superimposed on top of that human-caused warming trend. It just so happens that 2008, 2009, 2011, and 2012 were all influenced by La Niña cooling, which along with some other factors, acted to temporarily dampen the warming.
But those La Niña years were about 0.2°C warmer than the La Niña years around the turn of the century. That’s because human-caused global warming has continued to push temperatures higher, despite cherry picked arguments to the contrary.
The faux pause was debunked before 2015
Whitehouse’s argument also falls apart because scientists debunked the faux pause myth before the El Niño of 2015–2016. For example, in the summer of 2015, Grant Foster and John Abraham published a paper showing that there was no statistical evidence of a pause:
A barrage of statistical tests was applied to global surface temperature time series to search for evidence of any significant departure from a linear increase at constant rate since 1970. In every case, the analysis not only failed to establish a trend change with statistical significance, it failed by a wide margin.
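One simple member of such a "barrage" can be sketched in a few lines. This is not Foster and Abraham's exact procedure, just an illustrative version on synthetic data: fit a single linear trend since 1970, fit a second model that allows the slope to change in 1998, and use an F-test to ask whether the extra parameter is statistically justified.

```python
import numpy as np
from scipy import stats

# Synthetic series: steady warming plus noise (illustrative values only).
rng = np.random.default_rng(0)
years = np.arange(1970, 2017)
temps = 0.017 * (years - 1970) + rng.normal(0, 0.09, years.size)

# Model 1: a single straight line (2 parameters).
resid1 = temps - np.polyval(np.polyfit(years, temps, 1), years)
rss1 = np.sum(resid1**2)

# Model 2: add a hinge term letting the slope change after 1998 (3 parameters).
hinge = np.clip(years - 1998, 0, None)
X = np.column_stack([np.ones_like(years), years, hinge]).astype(float)
beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
rss2 = np.sum((temps - X @ beta) ** 2)

# F-test for the extra parameter; a small p-value would support a real "pause".
f = (rss1 - rss2) / (rss2 / (years.size - 3))
p = stats.f.sf(f, 1, years.size - 3)
print(f"F = {f:.2f}, p = {p:.2f}")  # with steady warming, p is typically large
```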
A few months later, a study by Stephan Lewandowsky, James Risbey, and Naomi Oreskes showed that not only did the myth lack statistical support, but in a blind test, economists found “pause” claims “misleading and ill-informed.” In fact, by late 2015, at least six papers had been published debunking this myth. The record-shattering hot temperatures of 2015 and 2016 were simply more nails in its coffin. It’s a coffin with so many nails it’s hard to find room for more.
If the denier/delayers lose traction with their 'pause' propaganda they will surely move to another delaying tactic like saying they need to see another 10 years of data to be convinced. That can be repeated in 10 years, as many times as it works for them.
Promoting denial is just one of the actions they will try to benefit from. Ultimately the biggest trouble-makers are the ones with the most to lose, because they chose to deny that they should dramatically reduce their reliance on profiting from the global burning of non-renewable, buried ancient hydrocarbons. They gambled big on getting away with behaving less acceptably.
Clearly it is irresponsible for anyone to try to delay actions that would develop truly lasting improvements far into the future - almost perpetual ways for all of humanity to live decently on this amazing planet.
It is equally clear that 'the conservative media' are not the real problem. The problem is any individual or organization that pursues increased perceptions of success ("Winning" popularity and profitability) without responsibly limiting its actions to "Helping" advance humanity toward a lasting better future for all.
Winners who are not Honestly Helpful are actually Threats regardless of temporarily created perceptions in the minds of a portion of current day humanity.
Not only are El Nino years hotter than previous El Nino years and La Nina years hotter than previous La Nina years, but we also have La Nina years that are hotter than earlier El Nino years. If that isn't evidence of warming what is?
I like to remind people that 2008, which was the coolest year since 2000, was still warmer than all but one year of the 1900s. So a year that is actually in the 99th percentile of warmest years is now what counts as a 'cool' year in the 21st century.
My conservative friends still claim that the Earth is in a cooling period, but we know that has now become part of their political identity and has nothing at all to do with the scientific facts.
Thanks Dana for another good article, but its title, with 3 negations "can't stop", "denying" and "no global warming 'pause'" is virtually impossible to grasp even for me, a fluent English reader. I grasped it only by inference, because I know Dana and what he thinks on the subject, and of course from the text.
However, those less fluent (e.g. what google translate would do with it if some need to read in another language?) or who don't read the text carefully, will just remain confused.
Simplicity and clarity of arguments & language is what separates this website from the denialist crowd, so let's not forget about titles...
"Conservative media keep denying that the global warming 'pause' was debunked" would be far better. Or a different take: "Science denying conservative media keep believing in a myth of the global warming 'pause'"
Like chriskoz, I didn't like the headline. But you can't always not get what you don't want.
But let's get serious. I really agree with chriskoz that this website needs to be easy to read. I come here frequently to keep myself informed and find the site highly valuable.
I am not a native English speaker and I am so-so fluent, but I have a way with words and I have a way with logic. Still I needed to read the 3-negation headline several times to get the meaning right.
You state that we've had warming of between 0.16°C and 0.18°C per decade. How does that compare to model predictions?
[Rob P] Here's an update to a figure which appeared in the IPCC AR5.
BBHY, would you therefore agree that the pattern markets need to see before a consolidatory approach toward self-imposed carbon intensity efficiency would be: a temperature trend of La Niñas above that of El Niños?
Thanks Rob P for posting that graph but virtually all that it shows is that models are good at predicting the past and that isn't something I'm terribly interested in. But when the model was actually predicting the future, from say 2000 onwards, your graph clearly shows that actual recorded temperatures were quite flat whereas the prediction was for them to increase (which is ironic as this article is all about there not being a pause when your own graph clearly shows that there was!) and the measured temperature was falling rapidly down the confidence limits. Granted, the recent El Nino has upped the measured temperatures, but my understanding is that this will only be temporary as it was in 1998.
So rather than show a graph of hindcast predictions could you please let me know what the predicted global temperature rise is according to our latest models for the next few decades? Don't show a graph, just write down the number in degrees C per decade that we are expecting for the next 20 or 30 years. My understanding was that it was 0.3C/decade but please correct me if I'm wrong.
[Rob P] ".....your graph clearly shows that actual recorded temperatures....."
The ol' eyecrometer is not a statistical tool. Note what appeared in the post above from Foster & Abraham [2015]: the analysis "not only failed to establish a trend change with statistical significance, it failed by a wide margin."
"My understanding was that it was 0.3C/decade....."
In the IPCC's Second Assessment Report, the projected global annual mean rate of warming for the early 21st century (slightly larger than scenario IS92a), including greenhouse gases and sulfate aerosols, was 0.2°C per decade.
Sean OConnor @7, I compared the RCP8.5 model runs for CMIP5 to the Berkeley Earth Surface Temperature Land Ocean Temperature Index (BEST LOTI) from 1861 to 2015 (that being currently the most recently available annual data point for BEST LOTI). Taking running 10 year means over that interval, the BEST LOTI has a mean difference from the CMIP5 RCP8.5 ensemble mean of 0.01 standard deviations (ie, a normalized error of 0.01), with a standard deviation of the normalized error of 1.11. That is, over that period the historical record shows more variability than the ensemble mean, but the running 10 year trends show no significant bias. Indeed, the trend of the normalized error is -0.004 +/- 0.004, so that over time the mean normalized error has decreased - a trend which is almost but not quite significant.
The greater variability of the historical record is expected. That is because the mean of the CMIP5 RCP8.5 ensemble is the average of 39 different runs, each of which varies in different locations. That is due to different timings of simulated ENSO events, along with other quasiperiodic cycles represented in the model. Consequently the ensemble mean is far smoother than any individual run.
Now you might correctly point out that the majority of that period represents a hindcast, and that we are more interested in the accuracy of forecasts. However, the proper test of the forecast's accuracy is that it is not statistically distinguishable from the accuracy of the hindcast. If we apply an absolute test of accuracy to the forecast, we treat the actual record as though it were an ensemble mean, rather than as the logical equivalent of just another ensemble member. We know from the hindcast that the timings of ensemble members' ENSO events (and the like) are not coordinated, and that those drive the substantial year-to-year variability from the ensemble mean. We also know that the historical record exhibits the same behaviour. Therefore we expect the variability from future ENSO events to produce considerable short term variability in the trend relative to the ensemble mean, but that over time that variability will average out.
As it happens, the normalized error in the 10 year trend over 2006-2015 is -0.67. That is, it runs two thirds of a standard deviation below the ensemble mean. Therefore it does not even hint at a problem with the accuracy of the predictions.
Note that the 2006-2015 interval has high values at either end and low values in the middle, and so would be expected to have a flat trend. Thus, from our knowledge of the ENSO record, and given that the ensemble mean averages out the effects of ENSO, we would expect a lower than average 10 year trend to 2015.
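For anyone who wants to see the mechanics of a normalized-error comparison like the one above, here is a rough Python sketch. The forced warming curve, noise level, and 39-member ensemble are invented stand-ins, not the actual CMIP5 or BEST LOTI data; the structure of the calculation is the point.

```python
import numpy as np

# Invented stand-ins for the CMIP5 RCP8.5 ensemble and the observed record.
rng = np.random.default_rng(0)
years = np.arange(1861, 2016)
forced = 0.008 * np.maximum(years - 1900, 0)               # toy forced warming (C)
ensemble = forced + rng.normal(0, 0.12, (39, years.size))  # 39 runs, each with own "ENSO" noise
obs = forced + rng.normal(0, 0.12, years.size)             # observations as one more "member"

def trend(y):
    """Least-squares slope in C/decade."""
    return 10 * np.polyfit(np.arange(y.size), y, 1)[0]

# Running 10 year trends for the observations and every ensemble member.
windows = [slice(i, i + 10) for i in range(years.size - 9)]
obs_tr = np.array([trend(obs[w]) for w in windows])
ens_tr = np.array([[trend(run[w]) for w in windows] for run in ensemble])

# Normalized error: (observed - ensemble mean) / ensemble standard deviation.
norm_err = (obs_tr - ens_tr.mean(axis=0)) / ens_tr.std(axis=0)
print(f"mean normalized error: {norm_err.mean():+.2f}")
```

Because the observations behave like one more ensemble member, the mean normalized error hovers near zero while individual windows swing well above and below it.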
OK, I'll ask for a third time: What do the models predict that the rate of global warming will be over the next few decades?
The article states that we've been seeing something like 0.16°C to 0.18°C per decade as measured by scientific instruments.
My understanding was that the prediction was 0.3C per decade. So I am surprised to see an article on Skeptical Science giving a figure of about half what was predicted.
@9 Sean OConnor,
Models don't predict, they project. There is a difference. In other words:
If ABC.... then D +/- an uncertainty factor
ABC... has some factors that are known and some factors that can't be known until they happen. So because all the factors ABC... can't be known ahead of time, they can only be projected as to what range of values they might have in the future, depending on our actions and many events that can't be known ahead of time.
For example, exactly how many new solar panels will be bought next year? Wind generators? Do you know? When will the next El Niño or La Niña arrive? The next major volcanic eruption, and how large will it be? What will the average fuel efficiency of next year's car and truck models be? How many will be sold? How much will they be driven? We can project possible numbers for all of these within ranges, but no one can predict ahead of time those exact figures, and hundreds more, until they actually happen!
As of yet NO ONE can accurately predict the future. What climate scientists can do with their models is project possible outcomes depending upon how a multitude of future events unfold. Then those projections are tested against the empirical evidence after it happens and is measured, and it gives scientists new knowledge about how our climate system works.
The reason you keep needing to ask over and over and are frustrated at not getting a satisfactory answer, is because you are asking the wrong question.
Maybe this will help:
Gavin Schmidt: The emergent patterns of climate change
OK, I'll make it easy. If the level of CO2 continues to grow for the next couple of decades as it did for the last couple of decades, what is the projected rate of global temperature rise over that time (assuming we continue to get similar numbers of volcanoes, El Niños, La Niñas, etc.)?
Sean OConnor @9, what you're really saying is that you want a gotcha moment, not actual understanding.
For the record, the running 10 year trends in the CMIP5 RCP8.5 ensemble, centered from 2011.5 (ie, the period from 2007-2016) to 2020.5 (ie, 2016-2025), average 0.284 +/- 0.278 C/decade. The mean value is meaningless without the uncertainty. It would only be a meaningful value if, contrary to fact, we lived in a world without short term, unpredictable influences on temperature such as ENSO. With that uncertainty, any observed trend during that interval from 0.006 to 0.562 C/decade lies within the predicted model range.
More importantly, the model projection for 2011-2015 was a 10 year trend of 0.255 +/- 0.323 C/decade, with an observed trend of 0.148 C/decade - well within the uncertainty level. Indeed, according to the models, there is a 19-25% chance of a trend that low, or lower. If you conclude from those statistics that the models have been falsified, you are like a person who, seeing a 1 rolled on a six-sided die, concludes that probability theory is bunk because it predicted only a 1 in 6 (16.67%) chance of such an event occurring in a single roll.
That leaves aside such details as that the models are projections, ie, predictions on the assumption that a particular forcing history has occurred. That is particularly significant because it is known the observed forcing history showed a lower growth than the RCP8.5 forcing scenario; which would lead us to expect the observed trend to be less than the projected trend.
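The "19-25% chance" figure above can be sanity-checked in a couple of lines, under the assumption (not stated in the comment) that the +/- 0.323 spread is a 2-sigma range on a normally distributed trend:

```python
from scipy.stats import norm

# Values from the comment above (C/decade); the 2-sigma reading is an assumption.
projected, spread, observed = 0.255, 0.323, 0.148
sigma = spread / 2
p_low = norm.cdf((observed - projected) / sigma)
print(f"P(trend <= {observed}) ~= {p_low:.0%}")  # about 25%
```

The lower 19% end of the quoted range presumably comes from a different distributional assumption; either way, an observed trend this "low" is entirely unremarkable under the models.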
Sean, your statement that "the measured temperature was falling rapidly down the confidence limits" is incorrect, because those are not "confidence limits" in the usual statistical sense. You are not at all alone in misunderstanding that; climate scientists, like almost all other scientists (me included!), usually speak and write in shorthand about information that they all know. The "envelope" of model runs in the graph that Rob posted for you is shading to cover the area spanned by all the CMIP5 model runs. The individual model runs can be represented by a spaghetti graph that most people find hard to read, so usually the shading is substituted for the strands of spaghetti. See Rob Painting's post's Figure 2 as a schematic to understand that, and his Figure 3 for all the actual model run spaghetti strands.
The CMIP5 project had multiple models, most produced by different teams. Each model was run at least once, but some were run multiple times with different parameter values. The set of all model runs is a "convenience" sample of the population of all possible model runs. Indeed, it is only a "convenience" sample of all possible models. "Convenience" sampling in science does not have the "casual" or "lazy" implication that the word "convenience" does in lay language. It means that the sample is not a random selection from the population, and not even a stratified random sample. In this case, it is impossible to randomly sample from those populations of all possible model runs and all possible models. Therefore the usual "confidence limits" related concepts of inferential statistics do not apply.
So what does this distribution of model runs mean? It is multiple researchers' attempts to create models and model parameterizations that span the ranges of those researchers' best estimates of a whole bunch of things. So it does represent "confidence" and "uncertainty," but in more of a subjective judgement way than what you probably were thinking. Read Rob's post for more explanation.
Notably, none of the individual model runs has the shape of the multi-model ("model ensemble") mean line. In other words, we expect the global temperature to not follow that multi-model mean line. That's a stronger statement than "we don't expect the global temperature to exactly follow the multi-model mean line." It would be disappointing if any of the individual model runs followed that mean line, because it is quite clear that the global temperature varies a lot more than that. That's because the global temperature in the short term is weather by definition, and only in the long term is climate. So what we expect is for global temperature to vary a lot day to day, month to month, year to year, and even decade to decade, in response to variations in internal variations such as ENSO; and to variations in forcings such as volcanoes, insolation, greenhouse gas emissions, and reflective aerosol emissions.
We do expect that the resulting wavy actual global temperature line will follow the general pattern of all those model runs. That includes expecting the actual temperature line to stay within the range of all those model runs (the bounds of the ensemble). We expect it will not hug the ensemble mean; we expect it will swing up and down across that mean line, sometimes all the way to the edge of the range (not just to the edge of 95% of the range). We expect 10 year actual trends to deviate substantially from that mean line. We expect 20 year actual trends to deviate significantly from that mean line. We expect 30 year trends to deviate somewhat from that mean line. We expect 50 year trends to deviate slightly from that mean line. Beyond 50 years into the future the uncertainty starts increasing again.
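A toy calculation makes the point about window lengths concrete. With a synthetic series built from a fixed forced trend plus year-to-year noise (all values illustrative), the spread of windowed trends around the forced trend shrinks steadily as the window grows:

```python
import numpy as np

# One synthetic "realization": fixed forced trend plus year-to-year noise.
rng = np.random.default_rng(0)
n_years, forced_slope = 100, 0.02  # C/year, illustrative only
run = forced_slope * np.arange(n_years) + rng.normal(0, 0.12, n_years)

for window in (10, 20, 30, 50):
    slopes = [10 * np.polyfit(np.arange(window), run[i:i + window], 1)[0]
              for i in range(n_years - window)]
    spread = np.std(np.array(slopes) - 10 * forced_slope)
    print(f"{window}-year trends: spread about forced trend = {spread:.3f} C/decade")
```

Short windows wander far from the underlying trend purely because of noise; only multi-decade windows pin it down.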
Read Rob's post, then with that knowledge as context, re-read Tom Curtis's replies to you.
Sean, following up on my reply to you: The climate models do a good job of reproducing the sizes and durations of internal variability of temperature--those large swings above and below the model ensemble mean that I described previously. So changes in short term trends such as those you referred to as "flattening" are expected--projected by the models.
What the climate models do poorly is project the timings of those short term changes--for example, internal variability's oscillations due to ENSO. The sizes and durations of temperature oscillations due to ENSO are projected well, but the phase alignments of those oscillations with the calendar are poorly projected.
That's due to the inherent difficulty of modeling those things, but also to the difference between climate models and weather models. Those two types of models essentially are identical, except that weather models are initialized with current conditions in attempts to project those very details of timings that climate models project poorly. Weather models do well up through at least 5 days into the future, but after about 10 days get really poor. Climate models, in contrast, are not initialized with current conditions. They are initialized with conditions far in the past, the Sun is turned on, and they are run until they stabilize. It turns out that it doesn't matter much what the initialization condition details are, because fundamental physics of energy balance ("boundary conditions") constrain the weather within boundaries that are "climate." You might think of it as the mathematical (not the normal English!) concept of "chaos," with weather being the poorly predictable variations around stable "attractors." (Type "chaos" in the Search field at the top left of this page to see relevant posts.)
Evidence that the models project the durations and sizes of temperature swings well can be seen if you pick out, from those model run spaghetti lines, the runs whose timings/phasings of major internal variability in ocean activity just happen to match (by sheer accident) the actual calendar timings of those. Risbey et al. did that, as described well by Stephan Lewandowsky along with several other approaches.