Thursday, August 14, 2014

A Clear Example of IPCC Ideology Trumping Fact

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.
———

When it comes to global warming, facts often take a back seat to fiction. This is especially true of proclamations coming from the White House. But who can blame them? They are just following the lead of Big Green groups (aka “The Green Blob”), the U.S. Global Change Research Program (responsible for the U.S. National Climate Assessment Report), and, of course, the U.N.’s Intergovernmental Panel on Climate Change (IPCC).
We have documented this low regard for the facts (some might say, deception) on many occasions, but recently we have uncovered a particularly clear example where the IPCC’s ideology trumps the plain facts, giving the impression that climate models perform a lot better than they actually do. This is an important façade for the IPCC to keep up, for without the overheated climate model projections of future climate change, the issue would be a lot less politically interesting (and government money could be used for other things…or simply not extorted from us in the first place).
The IPCC is given deference when it comes to climate change opinion at all Northwest Washington, DC cocktail parties (which means also by the U.S. federal government) and by other governments around the world. We tirelessly point out why this is not a good idea. By the time you get to the end of this post, you will see that the IPCC does not seek to tell the truth—the inconvenient one being that it dramatically overstated the case for climate worry in its previous reports. Instead, it continues to obfuscate.
This exacts a cost. The IPCC is harming the public health and welfare of all mankind as it pressures governments to limit energy choices instead of seeking ways to help expand energy availability (or, one would hope, to just stay out of the market).
Everyone knows that global warming (as represented by the rise in the earth’s average surface temperature) has stopped for nearly two decades. As historians of science have noted, scientists can be very creative when defending the paradigm that pays. In fact, there are already several dozen explanations.
Climate modelers are scrambling to try to save their precious children’s reputations—because the one thing that they do not want to have to admit is that they exaggerate the amount that the earth’s average temperature will increase as a result of human greenhouse gas emissions. If the models are overheated, then so too are all the projected impacts that derive from the model projections—and that would be a disaster for all those pushing for regulations limiting our use of fossil fuels for energy. It’s safe to say that the number of people employed in creating, legislating, lobbying for, and enforcing these regulations is huge, as in “The Green Blob.”
In the Summary for Policymakers (SPM) section of its Fifth Assessment Report, the IPCC pays brief attention to the recent divergence between model simulations and real-world observations:
“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2012).”
But, lest you foolishly think that there may be some problem with the climate models, the IPCC clarifies:
“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”
Whew! For a minute there it seemed like the models were struggling to capture reality, but we can rest assured that over the long haul (say, since the middle of the 20th century), model simulations and observations, according to the IPCC, “agree” as to what is going on.
The IPCC references its “Box 9.2” in support of the statements quoted above.
In “Box 9.2” the IPCC helpfully places the observed trends in the context of the distribution of simulated trends from the collection of climate models it uses in its report. The highlights from Box 9.2 are reproduced below (as our Figure 1). In this Figure, the observed trend for different periods is in red and the distribution of model trends is in grey.

Figure 1. Distribution of the trend in the global average surface temperature from 114 model runs used by the IPCC (grey) and the observed temperatures as compiled by the U.K.’s Hadley Centre (red). (Figure from the IPCC Fifth Assessment Report.)
As can be readily seen in Panel (a), during the period 1998-2012, the observed trend lies below almost all the model trends.  The IPCC describes this as:
…111 out of 114 realizations show a GMST [global mean surface temperature] trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble
This gives rise to the IPCC SPM statement (quoted above) that
“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2012).”
No kidding!
Now let’s turn our attention to the period 1951-2012, Panel (c) in Figure 1.
The IPCC describes the situation depicted there as:
Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade…
This sounds like the models are doing pretty well—only off by 0.02°C per decade. And this is the basis for the IPCC SPM statement (also quoted above):
The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.
Interestingly, the IPCC doesn’t explicitly tell you how many of the 114 climate model runs produced trends greater than the observed trend for the period 1951-2012. And it is basically impossible to figure that out for yourself from their Panel (c), since some of the bars of the histogram run off the top of the chart and the x-axis scale is so coarse that the trends are bunched into only six populated bins for the 114 model runs. Consequently, you really can’t assess how well the models are doing or how large a difference of 0.02°C/decade over 62 years really is. You are left to take the IPCC’s word for it.
Don’t.
The website Climate Explorer archives and makes available the large majority of the climate model output used by the IPCC. From there, you can access 108 (of the 114) climate model runs incorporated into the IPCC graphic—a large enough majority to quite accurately reproduce the results.
We do this in our Figure 2. However, we adjust both axes of the graph so that all the data are shown and you can see the inconvenient details.

Figure 2. Distribution of the trend in the global average surface temperature from 108 model runs used by the IPCC (blue) and the observed temperatures as compiled by the U.K.’s Hadley Centre (red) for the period 1951-2012 (the model trends are calculated from historical runs, with the RCP4.5 scenario results appended after 2006). This presents nearly the same data as Figure 1, Panel (c).
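For readers who want to check our arithmetic, here is a minimal sketch of the trend calculation in Python. It assumes the annual global-mean temperature series have already been downloaded from Climate Explorer into plain-text files; the file names and array shapes below are hypothetical placeholders, not actual Climate Explorer products.

```python
# Minimal sketch: least-squares trends for the 108 model runs and the
# HadCRUT4 observations over 1951-2012. File names and array shapes are
# hypothetical placeholders for data exported from Climate Explorer.
import numpy as np

years = np.arange(1951, 2013)                  # 62 years, 1951-2012
models = np.loadtxt("cmip5_gmst_annual.txt")   # assumed shape: (108, 62)
obs = np.loadtxt("hadcrut4_annual.txt")        # assumed shape: (62,)

# Slope of the least-squares fit, converted from degC/year to degC/decade.
model_trends = np.array([np.polyfit(years, run, 1)[0] * 10 for run in models])
obs_trend = np.polyfit(years, obs, 1)[0] * 10

print(f"models: {model_trends.min():.2f} to {model_trends.max():.2f} °C/decade")
print(f"observed: {obs_trend:.3f} °C/decade")
```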
What we find is that 90 of the 108 model runs simulate more global warming from 1951-2012 than actually occurred, while 18 simulate less. That is another way of saying that the observations fall at the 16th percentile of the model runs (the 50th percentile being the median model trend value).
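Continuing that sketch, the percentile rank is nothing more than the share of model trends that fall below the observed trend:

```python
# Percentile rank of the observed trend within the model distribution.
# In our reconstruction, 18 of 108 runs fall below the observation:
# 18/108 ≈ 16.7%, i.e., roughly the 16th percentile.
runs_below = int(np.sum(model_trends < obs_trend))
runs_above = int(np.sum(model_trends > obs_trend))
percentile = 100 * runs_below / model_trends.size
print(f"{runs_above} runs warmer, {runs_below} cooler -> {percentile:.0f}th percentile")
```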
So let us ask you this question: on a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:
The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.
OK. You got your answer?
Our answer is, maybe, “medium”, and there is plenty of room for improvement.
The model range should be much tighter, indicating that the models were in better agreement with one another as to what the simulated trend should have been. As it is now, the model range during the period 1951-2012 extends from 0.07°C/decade to 0.21°C/decade (with the observed trend at 0.107°C/decade). And this is from models that were run largely with observed changes in climate forcings (such as greenhouse gas emissions, aerosol emissions, volcanoes, etc.), and for a period of time (62 years) during which short-term weather variations should all average out. In other words, they are all over the place.
Another way the agreement between model simulations and real-world observations could be improved would be if the observed trend fell closer to the center of the distribution of model projections. For instance, the agreement would be better if, say, 58 model runs produced more warming and the other 50 produced less warming.
What would lower our confidence?
The opposite set of tendencies. The model distribution could be even wider than it is currently, indicating that the models agreed with each other even less than they do now as to how the earth’s surface temperature should evolve in the real world (or that natural variability was very large over the period of trend analysis). Or the observed trend could move further from the center point of the model trend distribution. This would indicate an increased mismatch between observations and models (more similar to what has taken place over the period 1998-2012).
Unfortunately, that’s what is happening.
Figure 3 shows at which percentile the observed trend falls for each period of time starting from 1951 and ending each year from 1980 through 2013.
 
Figure 3. The percentile rank of the observed trend in the global average surface temperature beginning in the year 1951 and ending in the year indicated on the x-axis within the distribution of 108 climate model simulated trends for the same period. The 50th percentile is the median trend simulated by the collection of climate models.
After peaking at the 42nd percentile (still below the median model simulation, which is the 50th percentile) during the period 1951-1998, the observed trend has steadily fallen in percentile rank, and currently (for the period 1951-2013) is at its lowest point ever (14th percentile) and is continuing to drop. Clearly, as anyone can see, this “tendency within a trend” (which Casey Stengel or Yogi Berra would doubtless have called the “trendency”) is looking bad for the models, as the level of agreement with observations is steadily decreasing with time.
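The curve in Figure 3 comes from an expanding-window version of the same calculation. A sketch, again with hypothetical array names (`obs_full` and `models_full` are assumed to hold annual values for 1951-2013):

```python
# Expanding-window percentile rank: every window starts in 1951 and ends
# in each year from 1980 through 2013 (the logic behind Figure 3).
# `obs_full` (63,) and `models_full` (108, 63) are assumed inputs.
import numpy as np

def decadal_trend(series, yrs):
    return np.polyfit(yrs, series, 1)[0] * 10   # °C per decade

all_years = np.arange(1951, 2014)
for end in range(1980, 2014):
    n = end - 1951 + 1
    obs_t = decadal_trend(obs_full[:n], all_years[:n])
    mod_t = np.array([decadal_trend(run[:n], all_years[:n]) for run in models_full])
    pct = 100 * np.sum(mod_t < obs_t) / mod_t.size
    print(f"1951-{end}: observed trend at the {pct:.0f}th percentile")
```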
In statistical parlance, if the observed trend drops beneath the 2.5th percentile, the evidence would widely be considered strong enough to indicate that the observations were not drawn from the population of model results. In other words, a statistician would describe that situation by saying that the models disagree with the observations with “very high confidence.” Some researchers use a more lax standard and would consider falling below the 5th percentile enough to conclude that the observations do not agree with the models. We could describe that case as “high confidence” that the models and observations disagree with one another.
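Expressed as code, that mapping from percentile rank to a confidence statement is just a pair of one-sided cutoffs (our labels, following the thresholds described above):

```python
# One-sided thresholds from the discussion above: below the 2.5th
# percentile -> "very high confidence" the observations disagree with the
# models; below the 5th -> "high confidence"; otherwise inconclusive.
def disagreement_confidence(percentile_rank: float) -> str:
    if percentile_rank < 2.5:
        return "very high confidence: observations disagree with the models"
    if percentile_rank < 5.0:
        return "high confidence: observations disagree with the models"
    return "not statistically distinguishable at these thresholds"
```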
So, just how far away from either of these situations are we?
It all depends on how the earth’s average surface temperature evolves in the near future.
We explore three different scenarios between now and the year 2030.
Scenario 1: The earth’s average temperature during each year of the period 2014-2030 remains the same as the average temperature observed during the first 13 years of this century (2001-2013). This scenario represents a continuation of the ongoing “pause” in the rise of global temperatures.
Scenario 2: The earth’s temperature increases year-over-year at a rate equal to that observed during the period 1951-2012 (0.107°C/decade). This represents a continuation of the observed long-term trend.
Scenario 3: The earth’s temperature increases year-over-year during the period 2014-2030 at a rate equal to that observed during the period 1977-1998—the period often identified as the 2nd temperature rise of the 20th century. The rate of temperature increase during this period was 0.17°C/decade. This represents a scenario in which the temperature rises at the most rapid rate observed during the period often associated with an anthropogenic influence on the climate.
Figure 4 shows how the percentile rank of the observations evolves under all three scenarios from 2013 through 2030. Under Scenario 1, the observed trend (beginning in 1951) would fall below the 5th percentile of the distribution of model simulations in the year 2018 and beneath the 2.5th percentile in 2023. Under Scenario 2, the years to reach the 5th and 2.5th percentiles are 2019 and 2026, respectively. And under Scenario 3, the observed trend would fall beneath the 5th percentile of model-simulated trends in the year 2020 and beneath the 2.5th percentile in 2030.
 
Figure 4. Percentile rank of the observed trend within the distribution of model simulations beginning in 1951 and ending at the year indicated on the x-axis under the application of the three scenarios of how the observed global average temperature will evolve between 2014 and 2030. The climate models are run with historical forcing from 1951 through 2006 and the RCP4.5 greenhouse gas scenario thereafter.
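The scenario arithmetic behind Figure 4 can be sketched the same way. As before, the array names are hypothetical: `obs_full` holds observed annual values for 1951-2013, and `models_2030` holds the 108 runs extended through 2030 with RCP4.5.

```python
# Project the observations through 2030 under each scenario and report
# when the 1951-to-date trend first drops below the 5th percentile of
# the model trend distribution. Array names are hypothetical placeholders.
import numpy as np

def percentile_rank(obs_series, model_array, end):
    yrs = np.arange(1951, end + 1)
    n = len(yrs)
    obs_t = np.polyfit(yrs, obs_series[:n], 1)[0]
    mod_t = np.array([np.polyfit(yrs, run[:n], 1)[0] for run in model_array])
    return 100 * np.sum(mod_t < obs_t) / mod_t.size

def extend(obs_hist, rate_per_decade=None, flat_value=None, through=2030):
    """Append 2014 through `through`: flat at `flat_value` (Scenario 1),
    or rising at `rate_per_decade` °C/decade (Scenarios 2 and 3)."""
    n = through - 2013
    if flat_value is not None:
        future = np.full(n, flat_value)
    else:
        future = obs_hist[-1] + (rate_per_decade / 10) * np.arange(1, n + 1)
    return np.concatenate([obs_hist, future])

scenarios = {
    "1 (pause)":           extend(obs_full, flat_value=obs_full[-13:].mean()),
    "2 (0.107 °C/decade)": extend(obs_full, rate_per_decade=0.107),
    "3 (0.17 °C/decade)":  extend(obs_full, rate_per_decade=0.17),
}
for name, series in scenarios.items():
    for end in range(2014, 2031):
        if percentile_rank(series, models_2030, end) < 5.0:
            print(f"Scenario {name}: below the 5th percentile by {end}")
            break
```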
It is clearly not a good situation for climate models when even a sustained temperature rise equal to the fastest yet observed (Scenario 3) still leads to complete model failure within two decades.
So let’s review.
1) Examination of 108 climate model runs spanning the period 1951-2012 shows that the model-simulated trends in the global average temperature vary by a factor of three—hardly a high level of agreement among the models as to what should have taken place.
2) The observed trend during the period 1951-2012 falls at the 16th percentile of the model distribution, with 18 model runs producing a smaller trend and 90 climate model runs yielding a greater trend. Not particularly strong agreement.
3) The observed trend has been sliding farther and farther away from the model median and towards ever-lower percentiles for the past 15 years. The agreement between the observed trend and the modeled trends is steadily getting worse.
4) Within the next 5 to 15 years, the long-term observed trend (beginning in 1951) will more than likely fall so far below model simulations as to be statistically recognized as not belonging to the modeled population of outcomes. This disagreement between observed trends and model trends would be complete.
So with all this information in hand, we’ll give you a moment to revisit your initial response to this question:
On a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:
The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.
Got your final answer?
OK, let’s compare that to the IPCC’s assessment of the situation.
The IPCC gave it “very high confidence”—the highest level of confidence that they assign.
Do we hear stunned silence?
This, in a nutshell, sums up the IPCC process. The facts show that the agreement between models and observations is tenuous, steadily eroding, and on course to become statistically unacceptable in about a decade, and yet the IPCC tells us with “very high confidence” that models agree with observations, and therefore are a reliable indicator of future climate changes.
Taking the IPCC at its word is not a good idea.
[This is a major revision of a post that first appeared at Watts Up With That on April 16, 2014.]
