Tuesday, September 24, 2019

A skeptic attempts to break the ‘pal review’ glass ceiling in climate modeling

Propagation of Error and the Reliability of Global Air Temperature Projections
Guest essay by Pat Frank
Regular readers at Anthony’s Watts Up With That will know that for several years, since July 2013 in fact, I have been trying to publish an analysis of climate model error.
The analysis propagates a lower limit calibration error of climate models through their air temperature projections. Anyone reading here can predict the result. Climate models are utterly unreliable. For a more extended discussion see my prior WUWT post on this topic (thank-you Anthony).
The bottom line is that when it comes to a CO2 effect on global climate, no one knows what they’re talking about.
Before continuing, I would like to extend a profoundly grateful thank-you! to Anthony for providing an uncensored voice to climate skeptics, over against those who would see them silenced. By “climate skeptics” I mean science-minded people who have assessed the case for anthropogenic global warming and have retained their critical integrity.
In any case, I recently received my sixth rejection; this time from Earth and Space Science, an AGU journal. The rejection followed the usual two rounds of uniformly negative but scientifically meritless reviews (more on that later).
After six tries over more than four years, I now despair of ever publishing the article in a climate journal. The stakes are just too great. It’s not the trillions of dollars that would be lost to sustainability troughers.
Nope. It’s that if the analysis were published, the career of every single climate modeler would go down the tubes, starting with James Hansen. Their competence comes into question. Grants disappear. Universities lose enormous income.
Given all that conflict of interest, what consensus climate scientist could possibly provide a dispassionate review? They will feel justifiably threatened. Why wouldn’t they look for some reason, any reason, to reject the paper?
Somehow climate science journal editors have seemed blind to this obvious conflict of interest as they chose their reviewers.
With the near hopelessness of publication, I have decided to make the manuscript widely available as samizdat literature.
The manuscript with its Supporting Information document is available without restriction here (13.4 MB pdf).
Please go ahead and download it, examine it, comment on it, and send it on to whomever you like. For myself, I have no doubt the analysis is correct.
Here’s the analytical core of it all:
Climate model air temperature projections are just linear extrapolations of greenhouse gas forcing. Therefore, they are subject to linear propagation of error.
Complicated, isn’t it. I have yet to encounter a consensus climate scientist able to grasp that concept.
Willis Eschenbach demonstrated that climate models are just linearity machines back in 2011, by the way, as did I in my 2008 Skeptic paper and at CA in 2006.
The manuscript shows that this linear equation …

ΔT(K) = fCO2 × 33 K × [(F0 + ΣΔFi)/F0] + a

… will emulate the air temperature projection of any climate model; fCO2 reflects climate sensitivity, 33 K is the net greenhouse temperature, and “a” is an offset. Both coefficients vary with the model. The parenthetical term is just the fractional change in total greenhouse forcing. The air temperature projections of even the most advanced climate models are hardly more than y = mx + b.
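For readers who want to experiment, here is a minimal Python sketch of that emulation equation. It is an illustration under stated assumptions, not the manuscript’s code: the default fCO2, F0, and a values below are placeholders, whereas the manuscript fits fCO2 and a separately to each model’s projections.

```python
import numpy as np

def emulate_anomaly(dF, F0=34.0, f_co2=0.42, a=0.0):
    """Linear emulation of a GCM air-temperature anomaly projection.

    dF    : per-step greenhouse-gas forcing increments (W/m^2)
    F0    : baseline total greenhouse forcing (W/m^2); placeholder value
    f_co2 : model-specific sensitivity fraction (placeholder; fitted per GCM)
    a     : model-specific offset in K (placeholder; fitted per GCM)
    """
    # 33 K is the net greenhouse temperature; the bracketed term is the
    # fractional change in total forcing after each increment.
    return f_co2 * 33.0 * ((F0 + np.cumsum(dF)) / F0) + a

# Toy usage: an RCP-like forcing increase of 0.04 W/m^2 per year for 95 years.
projection = emulate_anomaly(np.full(95, 0.04))
```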
The manuscript demonstrates dozens of successful emulations, such as these:
[Figure: CMIP5 GCM projections (points) with linear emulations (PWM lines); legend below.]
Legend: points are CMIP5 RCP4.5 and RCP8.5 projections. Panel ‘a’ is the GISS GCM Model-E2-H-p1. Panel ‘b’ is the Beijing Climate Center Climate System GCM Model 1-1 (BCC-CSM1-1). The PWM lines are emulations from the linear equation.
CMIP5 models display an inherent calibration error of ±4 W/m² in their simulations of longwave cloud forcing (LWCF). This is a systematic error that arises from incorrect physical theory. It propagates into every single iterative step of a climate simulation. A full discussion can be found in the manuscript.
The next figure shows what happens when this error is propagated through CMIP5 air temperature projections (starting at 2005).
[Figure: CMIP5 projections with propagated LWCF calibration-error envelopes; legend below.]
Legend: Panel ‘a’ points are the CMIP5 multi-model mean anomaly projections of the AR5 RCP4.5 and RCP8.5 scenarios. The PWM lines are the linear emulations. In panel ‘b’, the colored lines are the same two RCP projections. The uncertainty envelopes are from propagated model LWCF calibration error.
For RCP4.5, the emulation departs from the mean near projection year 2050 because the GHG forcing has become constant.
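Here is a minimal sketch of the propagation rule behind those envelopes, assuming annual simulation steps and treating the ±4 W/m² LWCF calibration error as the per-step uncertainty. The fCO2 and F0 values are the same placeholders used in the earlier sketch:

```python
import numpy as np

def propagated_uncertainty(n_years, lwcf_error=4.0, F0=34.0, f_co2=0.42):
    """Lower-limit uncertainty envelope from step-wise propagation of the
    +/-4 W/m^2 LWCF calibration error through annual projection steps.
    """
    # Temperature uncertainty implied by the calibration error in one step.
    u_step = f_co2 * 33.0 * lwcf_error / F0
    # Root-sum-square accumulation over n steps grows as sqrt(n); the result
    # is an uncertainty statistic, not a predicted temperature excursion.
    return u_step * np.sqrt(np.arange(1, n_years + 1))
```

With these toy numbers the envelope reaches roughly ±16 K after a century of steps, which is why the panel ‘b’ envelopes dwarf the projections themselves.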
As a monument to the extraordinary incompetence that reigns in the field of consensus climate science, I have made the 29 reviews and my responses for all six submissions available here for public examination (44.6 MB zip file, checked with Norton Antivirus).
When I say incompetence, here’s what I mean and here’s what you’ll find.
Consensus climate scientists:
1. Think that precision is accuracy
2. Think that a root-mean-square error is an energetic perturbation on the model
3. Think that climate models can be used to validate climate models
4. Do not understand calibration at all
5. Do not know that calibration error propagates into subsequent calculations
6. Do not know the difference between statistical uncertainty and physical error
7. Think that “±” uncertainty means positive error offset
8. Think that fortuitously cancelling errors remove physical uncertainty
9. Think that projection anomalies are physically accurate (never demonstrated)
10. Think that projection variance about a mean is identical to propagated error (a toy illustration follows below)
11. Think that a “±K” uncertainty is a physically real temperature
12. Think that a “±K” uncertainty bar means the climate model itself is oscillating violently between ice-house and hot-house climate states
Item 12 is especially indicative of the general incompetence of consensus climate scientists.
Not one of the PhDs making that supposition noticed that a “±” uncertainty bar passes through, and cuts vertically across, every single simulated temperature point. Not one of them figured out that their “±” vertical oscillations meant that the model must occupy the ice-house and hot-house climate states simultaneously!
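To make items 1 and 10 concrete, here is a toy contrast between ensemble spread about a mean (a precision statistic) and a propagated calibration envelope (an accuracy statement). It reuses the hypothetical propagated_uncertainty sketch above, and every number is invented for illustration:

```python
import numpy as np

n = 100
trend = 0.02 * np.arange(n)  # a toy 2 K/century mean projection

# Precision: the spread of an ensemble of runs about their mean.
rng = np.random.default_rng(0)
runs = trend + rng.normal(0.0, 0.2, size=(30, n))
ensemble_spread = runs.std(axis=0)   # hovers near 0.2 K at every step

# Accuracy: the propagated calibration envelope (sketch above), which
# grows as sqrt(n) no matter how tightly the runs cluster.
envelope = propagated_uncertainty(n)
```

Tight clustering of runs says nothing about how far the whole cluster sits from the physically correct answer; that is the precision-versus-accuracy distinction of item 1.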
If you download them, you will find these mistakes repeated and ramified throughout the reviews.
Nevertheless, my manuscript editors apparently accepted these obvious mistakes as valid criticisms. Several have the training to know the manuscript analysis is correct.
For that reason, I have decided their editorial acuity merits our applause.
Here they are:
  • Steven Ghan, Journal of Geophysical Research-Atmospheres
  • Radan Huth, International Journal of Climatology
  • Timothy Li, Earth Science Reviews
  • Timothy DelSole, Journal of Climate
  • Jorge E. Gonzalez-Cruz, Advances in Meteorology
  • Jonathan Jiang, Earth and Space Science
Please don’t contact or bother any of these gentlemen. On the other hand, one can hope some publicity leads them to blush in shame.
After submitting my responses showing the reviews were scientifically meritless, I asked several of these editors to have the courage of a scientist and publish over the meritless objections. After all, in science, analytical demonstrations are bulletproof against criticism. However, none of them rose to the challenge.
If any journal editor or publisher out there wants to step up to the scientific plate after examining my manuscript, I’d be very grateful.
The above journals agreed to send the manuscript out for review. Determined readers might enjoy the few peculiar stories of non-review rejections in the appendix at the bottom.
Really weird: several reviewers inadvertently validated the manuscript while rejecting it.
For example, the third reviewer in JGR round 2 (JGR-A R2#3) wrote that,
“[emulation] is only successful in situations where the forcing is basically linear …” and “[emulations] only work with scenarios that have roughly linearly increasing forcings. Any stabilization or addition of large transients (such as volcanoes) will cause the mismatch between this emulator and the underlying GCM to be obvious.”
The manuscript directly demonstrated that every single climate model projection was linear in forcing. The reviewer’s admission of linearity is tantamount to a validation.
But the reviewer also set a criterion by which the analysis could be verified — emulate a projection with non-linear forcings. He apparently didn’t check his claim before making it (big oh, oh!) even though he had the emulation equation.
My response included this figure:
clip_image008
Legend: The points are Jim Hansen’s 1988 scenario A, B, and C. All three scenarios include volcanic forcings. The lines are the linear emulations.
The volcanic forcings are non-linear, but climate models extrapolate them linearly. The linear equation will successfully emulate linear extrapolations of non-linear forcings. Simple. The emulations of Jim Hansen’s GISS Model II simulations are as good as those of any climate model.
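Using the hypothetical emulate_anomaly sketch from earlier, the reviewer’s test is easy to reproduce with a toy forcing series that includes volcanic pulses; the numbers are invented and are not Hansen’s:

```python
import numpy as np

# Toy forcing increments (W/m^2 per year): a steady GHG rise plus two
# transient negative volcanic pulses that recover the following year.
dF = np.full(60, 0.04)
dF[20] -= 2.5
dF[21] += 2.5
dF[45] -= 3.0
dF[46] += 3.0

anomaly = emulate_anomaly(dF)  # the emulation tracks the spikes point by point
```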
The editor was clearly unimpressed with the demonstration, and that the reviewer inadvertently validated the manuscript analysis.
The same incongruity of inadvertent validations occurred in five of the six submissions: AM R1#1 and R2#1; IJC R1#1 and R2#1; JoC, #2; ESS R1#6 and R2#2 and R2#5.
In his review, JGR R2 reviewer 3 immediately referenced information found only in the debate I had (and won) with Gavin Schmidt at Realclimate. He also used very Gavin-like language. So, I strongly suspect this JGR reviewer was indeed Gavin Schmidt. That’s just my opinion, though. I can’t be completely sure because the review was anonymous.
So, let’s call him Gavinoid Schmidt-like. Three of the editors recruited this reviewer. One expects they called in the big gun to dispose of the upstart.
The Gavinoid responded with three mostly identical reviews. They were among the most incompetent of the 29. Every one of the three included mistake #12.
Here’s Gavinoid’s deep thinking:
“For instance, even after forcings have stabilized, this analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”
And there it is. Gavinoid thinks the increasingly large “±K” projection uncertainty bars mean the climate model itself is oscillating increasingly wildly between ice-house and hot-house climate states. He thinks a statistic is a physically real temperature.
A naïve freshman mistake, and the Gavinoid is undoubtedly a PhD-level climate modeler.
Most of Gavinoid’s analytical mistakes fall under list items 2, 5, 6, 10, and 11. If you download the paper and Supporting Information, section 10.3 of the SI includes a discussion of the total hash Gavinoid made of a Stefan-Boltzmann analysis.
And if you’d like to see an extraordinarily bad review, check out ESS round 2 review #2. It apparently passed editorial muster.
I can’t finish without mentioning Dr. Patrick Brown’s video criticizing the YouTube presentation of the manuscript analysis, my 2016 talk for the Doctors for Disaster Preparedness. Dr. Brown’s presentation was also cross-posted at “andthentheresphysics” (named with no appreciation of the irony) and on YouTube.
Dr. Brown is a climate modeler and post-doctoral scholar working with Prof. Kenneth Caldeira at the Carnegie Institution for Science, Stanford University. He kindly notified me after posting his critique. Our conversation about it is in the comments section below his video.
Dr. Brown’s objections were classic climate modeler, making list mistakes 2, 4, 5, 6, 7, and 11.
He also made the nearly unique mistake of confusing a root-sum-square average of calibration error statistics with an average of physical magnitudes; nearly unique because one of the ESS reviewers made the same mistake.
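A one-line toy illustration of that distinction: the root-sum-square of a set of ± calibration errors is a magnitude statistic, and is neither the same number nor the same kind of quantity as the plain average of signed physical errors. The values below are invented:

```python
import numpy as np

errors = np.array([+4.0, -3.0, +2.0, -4.0])  # toy signed errors (W/m^2)
rss = np.sqrt(np.mean(errors**2))            # ~3.35: an uncertainty magnitude
avg = errors.mean()                          # -0.25: a signed physical average
```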
Mr. andthentheresphysics weighed in with his own mistaken views, both at Patrick Brown’s site and at his own. His blog commentators expressed fatuous insubstantialities and his moderator was tediously censorious.
That’s about it. Readers moved to mount analytical criticisms are urged to first consult the list and then the reviews. You’re likely to find your objections critically addressed there.
I made the reviews easy to appraise by starting them with a summary list of reviewer mistakes. That didn’t seem to help the editors, though.
Thanks for indulging me by reading this.
I felt a true need to go public, rather than submitting in silence to what I see as reflexive intellectual rejectionism and indeed a noxious betrayal of science by the very people charged with its protection.
Appendix of Also-Ran Journals with Editorial ABM* Responses
Risk Analysis. L. Anthony (Tony) Cox, chief editor; James Lambert, manuscript editor.
This was my first submission. I expected a positive result because they had no dog in the climate fight, their website boasts competence in mathematical modeling, and they had published papers on error analysis of numerical models. What could go wrong?
Reason for declining review: “the approach is quite narrow and there is little promise of interest and lessons that transfer across the several disciplines that are the audience of the RA journal.”
Chief editor Tony Cox agreed with that judgment.
A risk-analysis audience not interested in discovering that there is no knowable risk from CO2 emissions.
Right.
Asia-Pacific Journal of Atmospheric Sciences. Songyou Hong, chief editor; Sukyoung Lee, manuscript editor. Dr. Lee is a professor of atmospheric meteorology at Penn State, a colleague of Michael Mann, and altogether a wonderful prospect for unbiased judgment.
Reason for declining review: “model-simulated atmospheric states are far from being in a radiative convective equilibrium as in Manabe and Wetherald (1967), which your analysis is based upon.” and because the climate is complex and nonlinear.
Chief editor Songyou Hong supported that judgment.
The manuscript is about error analysis, not about climate. It uses data from Manabe and Wetherald but is very obviously not based upon it.
Dr. Lee’s rejection follows either a shallow analysis or a convenient pretext.
I hope she was rewarded with Mike’s appreciation, anyway.
Science Bulletin. Xiaoya Chen, chief editor, unsigned email communication from “zhixin.”
Reason for declining review: “We have given [the manuscript] serious attention and read it carefully. The criteria for Science Bulletin to evaluate manuscripts are the novelty and significance of the research, and whether it is interesting for a broad scientific audience. Unfortunately, your manuscript does not reach a priority sufficient for a full review in our journal. We regret to inform you that we will not consider it further for publication.”
An analysis that invalidates every single climate model study for the past 30 years, demonstrates that a global climate impact of CO2 emissions, if any, is presently unknowable, and that indisputably proves the scientific vacuity of the IPCC, does not reach a priority sufficient for a full review in Science Bulletin.
Right.
Science Bulletin then courageously went on to immediately block my email account.
*ABM = anyone but me; a syndrome widely apparent among journal editors.
