Friday, July 28, 2017

+Myth: Santer et al. Show that Climate Models are Very Flawed

The outline for this post is as follows:
  1. The Myth and Its Flaws
  2. Context and Analysis (divided into multiple sections)
  3. Posts Providing Further Information and Analysis
  4. References

This is the "+References" version of this post, which means that this post contains my full list of references and citations. If you would like an abbreviated and easier to read version, then please go to the "main version" of this post.

References are cited as follows: "[#]", with "#" corresponding to the reference number given in the References section at the end of this post.

1.  The Myth and Its Flaws

Climate scientist Ben Santer recently co-authored a paper [1]. Santer and his co-authors showed that climate models over-estimated recent atmospheric warming, thereby revealing a deep flaw in the models. The climate models likely exaggerated CO2-induced atmospheric warming, thus vindicating John Christy's and Ted Cruz's claims about the models.

Different myth proponents accept different parts of the above myth. The Daily Caller's Ryan Maue, Michael Bastasch [2; 3], and Ned Nikolov imply that the models over-estimated the warming due to a flaw in the models [106]; the Australian's Graham Lloyd concurs [4]. Judith Curry, Maue, and Bastasch state that Santer's co-authored paper vindicates Christy and Cruz's claims [2; 3; 5]. And Roger Pielke Sr. takes the myth further by attributing the over-estimation to the models exaggerating CO2's effect on climate [6; 7]:

Figure 1: A portion of a tweet from Pielke Sr., in which Pielke Sr. comments on Santer's co-authored paper [6].

The myth's flaws: According to Ben Santer's co-authored paper, observed atmospheric warming does not show that climate models over-estimate CO2-induced warming [1, pages 482 and 483]. So Santer's paper contradicts claims made by John Christy [8, page 379; 26, pages 3 and 4] and Ted Cruz [13, pages 1 - 4] regarding the models. In fact, Santer's paper explicitly argues against Christy's position [1, pages 482 and 483], consistent with Santer's previously published work [8]. Santer's paper also argues that there are errors in the climate information inputted into the climate models, and that these input errors (not an error in the models themselves) account for much of the difference between observed atmospheric warming vs. the amount of atmospheric warming projected by climate models [1].

2. Context and Analysis

Section 2.1: Factors influencing model-based trends and observational analyses

The following analogy may help in understanding this myth:

Suppose Harvey generates a model that predicts coin flips; if you input information into the model, then the model will predict the number of heads and tails you will get from your coin flips. You can input conditions such as how many times you flipped the coin, whether the coin is fair or slightly loaded on one side, etc. So suppose Yomotsu tells Harvey that Yomotsu's coin is fair/unloaded and that Yomotsu flipped the coin 40 times. Harvey inputs this information into his model and then runs the model on his computer 10,000 times, generating an output of 10,000 individual model runs. A small number of these runs generate ratios such as "29 heads, 11 tails" or "15 heads, 25 tails". But the average of the model runs is "20 heads, 20 tails."

Then Yomotsu claims he flipped his fair coin 40 times, and ended up with a ratio of 12 heads to 28 tails. This differs from the "20 heads, 20 tails" average for Harvey's model runs. There are multiple possible explanations for the discrepancy between Harvey's model average and Yomotsu's claims. These explanations include:

  • Observational uncertainty: There is error or uncertainty with respect to Yomotsu's reported observations, and this error/uncertainty affects Harvey's comparison of his model's output with Yomotsu's reported observations. For instance, Yomotsu may have misremembered the number of heads and tails for his 40 coin flips. This explanation implies a flaw in Yomotsu's reported observations, but not necessarily a flaw in Harvey's model.
  • Error in the inputs: This explanation implies an error in the information inputted into Harvey's model; this leads to the model generating incorrect output. So, for example, Yomotsu's coin may be loaded on one side and thus not fair, despite Yomotsu's claims to the contrary. The error would then be with Yomotsu's inputted claims, not Harvey's model.
  • Natural variability and/or model uncertainty: Yomotsu's results may be real, but due to chance. Chance does not mean magic or something unnatural, since one can (in principle) give a scientific explanation for why each flip came up the way it did. This explanation would include information about the motion of Yomotsu's hand during each flip, differences in air pressure as the coin moved through the air, etc. These natural phenomena underlie the natural variability that results in chance fluctuations in the ratio of heads-to-tails. Chance fluctuations will affect smaller sample sizes more than larger sample sizes; for instance, one is more likely to get (by chance) 3 heads with 5 flips of a fair coin vs. getting 30,000 heads in 50,000 flips of said coin, despite the fact that these are both 3-to-5 ratios. So unlikely results can occur by chance, especially in smaller sample sizes. Even Harvey's model output illustrates this point, since some of his 40-flip model runs produced unlikely results; this yields model uncertainty, where individual model runs can differ from the model average. Thus natural variability contributes to model uncertainty. Even accurate models can have this model uncertainty due to natural variability / chance. And both model uncertainty and natural variability can account for why Yomotsu's results differ from Harvey's model average, even if Harvey's model is correct.
  • Model error: On this explanation, Yomotsu's results differ from Harvey's model average because of an error in Harvey's model. Harvey's model, for example, may contain inaccurate equations relating a coin's geometry to the coin's flight through the air during a coin flip.

Note that the first three explanations explain Yomotsu's claims without implying a flaw in Harvey's model, while only the fourth explanation implies a flaw in Harvey's model. Furthermore, these explanations are not mutually exclusive, since these four explanations could all simultaneously contribute to the difference between Harvey's model average and Yomotsu's reported results.
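The analogy lends itself to a short simulation. The sketch below is purely illustrative (the names and numbers come from the analogy above, not from any cited source): it runs Harvey's fair-coin model 10,000 times and checks how often chance alone produces a result at least as lopsided as Yomotsu's reported 12 heads.

```python
import random

def model_run(n_flips=40, p_heads=0.5):
    """One model run: simulate n_flips coin flips, return the number of heads."""
    return sum(1 for _ in range(n_flips) if random.random() < p_heads)

random.seed(0)  # fixed seed so the illustration is reproducible

# Harvey's 10,000 model runs for a fair, 40-flip scenario
runs = [model_run() for _ in range(10_000)]
mean_heads = sum(runs) / len(runs)

# How often does an individual run stray as far from the 20-head average
# as Yomotsu's reported result of 12 heads (i.e., 8 or more heads away)?
extreme = sum(1 for h in runs if abs(h - 20) >= 8) / len(runs)

print(f"average heads per run: {mean_heads:.2f}")  # close to 20
print(f"fraction of runs at least as extreme as 12 heads: {extreme:.4f}")
```

Even with a perfectly accurate model, a small minority of runs land 8 or more heads away from the 20-head average; this is the "natural variability / model uncertainty" explanation in miniature.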

Just as Yomotsu's claims diverged from Harvey's model-based projections, global warming observations can diverge from climate model projections of this warming. In addition to projecting warming of Earth's land surface and oceans, climate models also project warming of the troposphere, the atmospheric layer closest to the Earth's surface air [1; 8; 24]. In his Congressional testimony, climate scientist John Christy depicted differences between observed tropospheric warming vs. modeled tropospheric warming [26, pages 2 - 4]. Figure 2 depicts this difference, as presented in one of Christy's graphs from his Congressional testimony:

Figure 2: Christy's comparison of the relative global mid-tropospheric temperature ("TMT") as projected by models versus as determined using observational analyses that rely on weather balloons and satellites [145, page 2]. Christy presents similar figures in other political testimony [26, page 3].

[The above graph is deeply flawed, for reasons I go over in "John Christy, Climate Models, and Long-term Tropospheric Warming". For example, the graph under-estimates observational uncertainty, obscures how natural variability affects model uncertainty, and uses a very short baseline that exaggerates any differences between the model-based projections vs. the observational analyses.]

One could offer a number of different explanations for figure 2's discrepancy between model-based projections vs. observations. These climate science explanations parallel the coin flip explanations previously discussed:

  • Explanation A1: Uncertainty or error in the observations, due to data correction for known errors (also known as homogenization) [8; 10; 11; 32; 44 - 50; 66], differences between the temperature trends produced by different research groups [1, page 478; 8, pages 374 and 379; 9; 10, page 3; 11; 12, page 24; 13, pages 2 - 4; 14; 15; 32, table 4 on page 2285; 43; 55; 67; 114; 115], the 1998 transition in satellite equipment for monitoring atmospheric temperature [11, pages 69 and 72; 13, pages 2 and 3], changes in weather balloon equipment [68 - 70], etc. [141; 142]. This observational uncertainty does not imply a flaw in the climate models [13, pages 3 and 4; 66; 96, pages 1706 and 1707; 124].
  • Explanation A2: Errors in measurements of human, solar, volcanic, and other factors [1, pages 478 and 483; 8, page 379; 12, page 27; 13, page 4; 14; 15; 66; 89; 103, as per 104; 113, box 9.2 on pages 769 - 772; 114]. Estimates of these factors serve as input for the climate models, and the climate models then use this input (in addition to other information) to predict future trends under various scenarios. So errors in this model input could lead to incorrect model projections, even if the climate models themselves were perfect with respect to the physical processes relevant to climate change [13, page 4; 89, page 188; 96, pages 1706 and 1707; 124].
  • Explanation A3: Differences in natural or internal variability. Certain factors strongly influence shorter-term climate trends, while different factors more strongly influence longer-term climate trends [9; 10, page 3; 12, pages 16 and 27; 13, page 4; 15; 66; 90; 113, box 9.2 on pages 769 - 772; 125; 140; 144]. Shorter-term factors include aerosols and the El Niño phase of the El Niño-Southern Oscillation (ENSO) [1, page 478; 8, pages 379 - 381; 11, pages 70 and 72; 12, page 27; 13, pages 4 and 5; 16, page 194; 100; 101; 103, as per 104; 140; 144]. ENSO-based variability in particular skews shorter-term model-based temperature projections beginning with a strong El Niño year like 1998 [8; 11; 13; 15; 16, page 194; 18 - 23; 140, section 3; 144]. Shorter-term temperature variability is also often due to the randomness / stochastic noise that afflicts smaller sample sizes. This randomness tends to have less of an effect on larger sample sizes [16, page 194; 17, figure 3; 140]. This shorter-term, natural variability can cause a discrepancy between observed short-term trends and model projections, even if the climate models themselves were perfect with respect to the physical processes relevant to climate change [13, page 4; 90; 96, pages 1706 and 1707; 124].
  • Explanation A4: Model error [56; 57; 103, as per 104; 112; 113, box 9.2 on pages 769 - 772] due to, for example, the models being too sensitive to CO2 and thus over-estimating how much warming CO2 causes [8, page 379; 13, page 4; 14; 95, page 1551].

As in the case of the coin flip explanations, explanations A1 to A3 do not imply a flaw in the climate models, while only explanation A4 implies a flaw in the models.
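Explanation A3's start-point sensitivity is easy to demonstrate with synthetic data. The sketch below is illustrative only (the 0.02 °C/yr trend, the noise level, and the size of the 1998 spike are made-up numbers, not values from any cited analysis): it shows how starting a trend calculation at a strong El Niño year biases the resulting short-term trend low, even when the underlying long-term trend is unchanged.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic temperature series: a made-up 0.02 C/yr warming trend plus noise,
# with an ENSO-like warm spike added in the strong El Nino year 1998.
years = np.arange(1979, 2017)
temps = 0.02 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
temps[years == 1998] += 0.5  # hypothetical El Nino spike

def ols_trend(x, y):
    """Ordinary least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

full_trend = ols_trend(years, temps)                       # 1979 - 2016
mask = years >= 1998
short_trend = ols_trend(years[mask], temps[mask])          # starts at the spike

print(f"trend from 1979: {full_trend:+.4f} C/yr")
print(f"trend from 1998: {short_trend:+.4f} C/yr")  # biased low by the warm start
```

Because the warm spike sits at the very start of the short window, it tilts the fitted line downward; the same spike has almost no effect on the longer record.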

Section 2.2: Scientists correct Christy and Cruz's distortions of model-based trends vs. observational analyses

Explanations A1 to A4 are not mutually exclusive, so each explanation could contribute to explaining differences between observed warming vs. model-projected warming [1, page 480; 8, page 379; 13, page 4]. Michael Mann and other climate scientists published research showing that observational uncertainty (explanation A1) accounts for much of the difference between observed surface warming and climate model projections of this warming [18; 24; 35; 107 - 111; 114; 115; 125]. Other scientists argued for a contribution from forcing errors (explanation A2) [33; 35; 41; 114] or internal variability (explanation A3) [34; 35; 40; 41; 144]. These explanations may also be relevant to tropospheric warming, since surface warming often rises to the troposphere, especially in the tropics [8, page 27; 36, page 4; 37 - 39; 55] (see "Myth: The Tropospheric Hot Spot does not Exist" for more on this).

For example, in 2008 John Christy said the following with respect to a (debunked [96; 133; 139, from 31:13 to 42:40 {explained at a layman's level in GavinCawley's comments: 146; 147}]) paper he co-authored [132] on model-projected tropical troposphere warming vs. observational analyses:

"In other words we asked a very simple question, “If models had the same tropical SURFACE trend as is observed, then how would the UPPER AIR model trends compare with observations?” As it turned out, models show a very robust, repeatable temperature profile … that is significantly different from observations [131]."

Christy's critique of the models lacks merit. For instance, he co-authored a 2018 paper showing that model-based projections and updated weather balloon (radiosonde) analyses displayed about the same ratio of tropical upper tropospheric warming to tropical near-surface warming [102, figure 18].
(This 2018 paper suffers from serious flaws that I discuss in section 2.2 of "Myth: Evidence Supports Curry's Claims Regarding Satellite-based Analyses and the Hot Spot", along with a separate multi-tweet Twitter thread [119].)

Another recent paper argued that once sea surface warming is accounted for, a particular climate model (the Whole Atmosphere Community Climate Model, a.k.a. WACCM) performed fairly well in representing tropospheric warming [79]. Four other papers supported a similar conclusion with respect to other climate models and tropical upper tropospheric warming [92; 93, figure 1; 116, sections 4 and 5; 138], while two other papers [8, figure 9 and page 384; 57] showed that models performed fairly well with respect to the ratio of tropical upper tropospheric warming to lower tropical tropospheric warming.

Consistent with these results, another paper argued that climate models accurately represent the ratio of surface warming to mid-to-upper tropospheric warming in the tropics, based on two other satellite-based analyses from teams at the University of Washington (UW) and the National Oceanic and Atmospheric Administration Center for Satellite Applications and Research (NOAA/STAR) [32]. The most recent satellite-based analysis from Remote Sensing Systems (RSS) [85] supports the same conclusion, by showing mid-to-upper tropical tropospheric warming on par with the NOAA/STAR analysis [8; 79]. Thus climate models represent tropospheric warming better, once near-surface warming is accounted for using explanations A1, A2, and A3.

And now we reach one of the central questions relevant to the myth: 
Q1: How much do explanations A1 to A4 contribute to explaining Christy's model-observations discrepancy from figure 2?

Let's start with the radiosonde trends. Christy implied that model error explains the discrepancy between radiosonde analyses vs. model-based projections [8, page 379; 26, pages 3 and 4]. But Christy's conclusion was premature. For years scientists have known that radiosonde analyses contain spurious cooling in the tropical troposphere [9; 11; 12, page 19; 32; 58; 59; 68 - 70; 94, pages 74 and 121; 121; 122], as pointed out in a report that Christy co-authored [10, pages 3 and 7; 94, pages 74 and 121]. Christy commented on this cold bias before [60; 102, section 3.5; 120], so he had no excuse for not being aware of it.

Christy should be aware of this cooling for another reason: over a decade ago, Christy emphasized how radiosonde analyses fit with his small tropospheric warming trend from his work at the University of Alabama in Huntsville (UAH) [61; 63; 78; 88]. However, researchers at RSS then showed Christy that his tropospheric warming trend was spuriously low and needed to be adjusted upwards [62; 63]; I discuss this more later in this section. Thus Christy should be aware of the dangers of relying on spuriously cool, radiosonde-based trends. Yet in figure 2 Christy made the same error over a decade later. Christy continues to exploit this spurious cooling in order to exaggerate the difference between models vs. radiosonde analyses [64; 65].

The spuriously cool radiosonde trends likely resulted from changes in radiosonde equipment during the 1980s [68 - 70; 94, pages 74 and 121; 121; 122]. Including pre-1979 radiosonde data dilutes the effect of these 1980s changes, and thus greatly reduces the difference between radiosonde trends vs. model-based projections for tropical tropospheric warming [9, figure 2; 66]. Accounting for the spurious post-1979 cooling (explanation A1), along with internal variability (explanation A3), explains most of the difference between models vs. radiosonde analyses with respect to the amount of tropical tropospheric warming [9; 90; 143, slide 30]. Similar explanations likely account for model-data differences outside of the tropics, though the differences are more pronounced in the tropics [68 - 70; 94, pages 74 and 121]. So Christy was wrong when he prematurely accepted model error (explanation A4) as an explanation. That addresses Q1 for the radiosonde analyses in figure 2. So what about the satellite-based analyses?

Climate scientist Ben Santer co-authored an unpublished document addressing the satellite-based portion of Q1. In this document he cited evidence on explanations A1 to A4 [13, pages 3 - 5]. Let's call this unpublished document Santer et al. #1 [13]. Santer et al. #1 criticizes US Senator Ted Cruz for offering only explanation A4 as an account of Christy's depicted differences between observed warming vs. modeled warming:

"[Senator Cruz argues that the] mismatch between modeled and observed tropospheric warming in the early 21st century has only one possible explanation – computer models are a factor of three too sensitive to human-caused changes in greenhouse gases (GHGs) [13, pages 1 and 2]. 
The hearing also failed to do justice to the complex issue of how to interpret differences between observed and model-simulated tropospheric warming over the last 18 years. Senator Cruz offered only one possible interpretation of these differences – the existence of large, fundamental errors in model physics [...]. In addition to this possibility, there are at least three other plausible explanations for the warming rate differences [...] [13, pages 3 and 4]."

These points from Santer et al. #1 re-iterate similar points Santer made in a 2000 paper [95, page 1551], a 2008 paper [96, pages 1706 and 1707], and a 2014 paper [89, page 188]. Let's call these papers Santer et al. #2 [95], Santer et al. #3 [96], and Santer et al. #4 [89], respectively. Santer followed up Santer et al. #1, #2, #3, and #4 with a 2017 paper which again cited evidence on explanations A1 to A4 [8, pages 379 and 380]. Let's call this paper Santer et al. #5 [8]. In Santer et al. #5, Santer and his co-authors show that satellite readings of tropospheric temperature were contaminated by satellite readings of cooling in the stratosphere, an atmospheric layer higher than the troposphere [8]. This contamination introduced spurious cooling into satellite-based mid- to upper tropospheric temperature trends [8].

Scientists have known about this bogus cooling since at least 1997 [30; 80]. The cooling is accounted for by 4 of the 5 major research groups (including RSS) that generate satellite-based tropospheric temperature measurements [8; 30; 31, page 2; 32, table 4 on page 2285; 71 - 74; 79, section 4.1; 80; 97, page 2]. Only the UAH research team fails to adequately correct for this cooling, since they object [77; 98; 99, section 1] to the validated [75; 76; 129; 130] correction method used by three of the other research groups [1; 8; 23; 30; 31, page 2; 32, table 4 on page 2285; 71 - 76; 79, section 4.1; 129; 130]; I discuss this correction further in section 2.3 of "Myth: The Tropospheric Hot Spot does not Exist".
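The correction the other research groups apply is, at heart, a weighted combination of satellite channels. A commonly cited form (after Fu and colleagues; the exact coefficients vary by product and region, so treat this as a sketch rather than any group's exact recipe) subtracts a scaled stratospheric (TLS) record from the mid-tropospheric (TMT) record:

```python
def tropospheric_temperature(tmt, tls):
    """Remove stratospheric contamination from a mid-troposphere record.

    Uses the commonly cited linear combination TTT = 1.1*TMT - 0.1*TLS;
    the coefficients here are illustrative, not a specific group's values.
    """
    return 1.1 * tmt - 0.1 * tls

# Illustrative (made-up) trends in C/decade: modest mid-tropospheric warming
# alongside strong stratospheric cooling, which drags raw TMT trends downward.
tmt_trend = 0.08
tls_trend = -0.25

corrected = tropospheric_temperature(tmt_trend, tls_trend)
print(f"raw TMT trend:       {tmt_trend:+.3f} C/decade")
print(f"corrected TTT trend: {corrected:+.3f} C/decade")  # warmer after correction
```

Because the stratosphere is cooling, subtracting a piece of its record raises the tropospheric trend, which is why uncorrected TMT analyses understate tropospheric warming.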

UAH team member John Christy then cited satellite-based tropospheric temperature trends containing this bogus cooling [26, pages 2 - 4]; figure 2 shows this spuriously cool, satellite-based temperature trend. Christy then used this flawed trend to exaggerate the discrepancy between observed satellite-based tropospheric warming vs. model projections of this warming [26, pages 2 - 4]. Finally, Christy implied that model error (explanation A4) explains this exaggerated discrepancy [8, page 379; 26, pages 3 and 4]. He did this despite the fact that he and his UAH team know about the stratospheric contamination of this analysis [102, sections 1 and 3.5; 117, section 7a; 118; 123, page S18].

But as we just saw, Christy's stated model-observations discrepancy was largely due to observational error (explanation A1) resulting from the spurious cooling in Christy's reported tropospheric warming trend [8]. Santer et al. #5 corrects Christy's spuriously cooling trend, as shown in figure 3 below:

Figure 3: (A),(B) 1979 - 2016 near-global mid- to upper tropospheric warming trends predicted by climate models and observed in satellite data analyses from UAH, Remote Sensing Systems (RSS), and the National Oceanic and Atmospheric Administration Center for Satellite Applications and Research (NOAA/STAR). Trends are presented as an average of all the trend values for a given trend length. Trends are not corrected for stratospheric cooling (A) or corrected for stratospheric cooling (B). (C),(D) Ratio between the tropospheric warming trend predicted by the climate models vs. the tropospheric warming trend observed in the satellite data analyses. The dotted lines in (C) show the ratios Christy reported to Congress [26, page 3]. Trend ratios are not corrected for stratospheric cooling (C) or corrected for stratospheric cooling (D) [8, figure 2 on page 377].

Santer et al. #5 applies this stratospheric cooling correction to Christy's flawed analysis from figure 2 above. This correction allows for a more accurate comparison between the observations and the model-based projections, as shown in figure 4:

Figure 4: Near-global, mid- to upper tropospheric relative temperature projected by climate models and observed in satellite data analyses. The pink line is the observed tropospheric warming trend, corrected for stratospheric cooling and shown as an average of the UAH, Remote Sensing Systems (RSS), and the National Oceanic and Atmospheric Administration Center for Satellite Applications and Research (NOAA/STAR) satellite data analyses. The black line shows the average warming trend from an ensemble of climate models, while the gray region shows the range of values taken by different realizations of each model; different realizations have slightly different internal/natural variability [8, figure 1 on page 376].

Figure 4 incorporates data from the UAH. But the UAH's mid- to upper tropospheric warming trend remains much lower than the warming trends from RSS and a research team at the National Oceanic and Atmospheric Administration Center for Satellite Applications and Research (NOAA/STAR). The UAH analysis is most likely the flawed analysis since:

  • UAH has a long history of under-estimating tropospheric warming due to UAH's faulty homogenization [11; 59; 60; 62; 81; 82; 83, from 36:31 to 37:10; 84, pages 5 and 6; 126 - 128].
  • Other scientists have critiqued UAH's homogenization methods [12, pages 17 - 19; 30; 32; 47; 48; 72; 81; 82; 85; 86; 126 - 128].
  • UAH's satellite-based temperature analyses often diverge from analyses made by other research groups, in both the troposphere and other atmospheric layers [12, pages 17 - 19; 30; 32; 47; 48; 72; 81; 85; 87; 123, pages S17 and S18; 126 - 130].

These points support Santer et al. #5's contention that residual errors in the UAH analysis cause the analysis to under-estimate tropospheric warming [8, page 384]. Since figure 4 incorporates a UAH analysis with a spuriously low warming trend, figure 4 likely under-estimates mid- to upper tropospheric warming. Thus explanation A1 would account for some of the model-observations discrepancy for figure 4.

In addition to defending explanation A1, Santer et al. #5 also rejects Christy's leap to explanation A4:

"It is incorrect to assert that a large model error in the climate sensitivity to greenhouse gases is the only or most plausible explanation for differences in simulated and observed warming rates (Christy 2015) [8, page 379]."

By 2015, Christy had no excuse for leaping to explanation A4 and evading explanation A1. After all, explanation A1 debunked a number of his past claims regarding observational analyses. For example, in the 1990s Christy used his UAH analysis to falsely claim that the troposphere had not warmed. Other research teams corrected Christy's erroneous claim. These research teams showed that Christy's UAH analysis did not contain homogenization for known artifacts/errors in the data [11; 59; 62; 82; 83, from 36:31 to 37:10; 84, pages 5 and 6; 126 - 128; 137] (I discuss homogenization in more detail in section 3.1 of "John Christy, Climate Models, and Long-term Tropospheric Warming", with examples of scientists validating homogenization techniques [8; 32; 134]).

Two important examples of homogenization are a correction for satellites decaying in their orbits, and a diurnal drift correction to account for the fact that satellite measurements occur at different times of day [32; 60; 62; 85; 134 - 136]. Since temperature at noon will likely be warmer than temperature at midnight, correcting for this time-of-day effect remains crucial for discovering any underlying tropospheric warming trends. Even when UAH began using these corrections (after other researchers showed UAH previously failed to apply these corrections [60; 137]), the RSS team revealed that UAH bungled the diurnal drift homogenization in a way that spuriously reduced UAH's tropospheric warming trend [60; 62]. According to UAH team members Spencer and Christy, correcting the UAH team's error increased UAH's lower tropospheric warming trend by ~40% [60]. RSS' own warming trend was even larger than this [62].
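A toy version of a diurnal drift correction may make the idea concrete. The sketch below uses a simple cosine as a stand-in for the diurnal cycle (real corrections use model- or observation-based diurnal climatologies, not this formula, and all numbers here are made up): it shifts each measurement to a common reference observing time, so that a drifting overpass time no longer masquerades as a temperature trend.

```python
import math

def diurnal_cycle(local_hour, amplitude=1.0):
    """Toy diurnal cycle: warmest mid-afternoon, coldest before dawn.

    A cosine stand-in for illustration only; not any group's actual climatology.
    """
    return amplitude * math.cos(2 * math.pi * (local_hour - 15) / 24)

def drift_corrected(measured_temp, local_hour, reference_hour=12):
    """Shift a measurement to a fixed reference observing time."""
    return measured_temp - diurnal_cycle(local_hour) + diurnal_cycle(reference_hour)

# A satellite whose overpass time drifts from 15:00 toward 17:00 samples
# progressively cooler parts of the diurnal cycle; uncorrected, that drift
# masquerades as a cooling trend even though the true temperature (10.0
# in this toy example) never changes.
for hour in (15.0, 16.0, 17.0):
    raw = 10.0 + diurnal_cycle(hour)        # same true temperature, drifting time
    fixed = drift_corrected(raw, hour)
    print(f"local time {hour:4.1f}h  raw {raw:7.4f}  corrected {fixed:7.4f}")
```

After correction, every measurement maps to the same value, as it should when the underlying temperature is constant; getting the assumed shape of the diurnal cycle wrong (as UAH did) injects a spurious trend instead of removing one.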

The UAH team's error occurred because the UAH team falsely assumed that the lower troposphere warmed at midnight and cooled at mid-day [60]. When Christy admitted this error, RSS members Carl Mears and Frank Wentz offered a priceless reply [60].

Yes, one wonders why the UAH team adopted an obviously wrong adjustment that conveniently reduced their stated amount of lower tropospheric warming. Maybe because it made it easier for UAH team member John Christy to claim that models were wrong, as per explanation A4? In any event, this episode shows how observational uncertainty arising from UAH's flawed data analysis (explanation A1) explained the difference between model-projected tropospheric warming vs. the lack of tropospheric warming in the UAH analysis [11; 59; 62; 82; 83, from 36:31 to 37:10; 84, pages 5 and 6; 126 - 128; 137]. 

Similarly, UAH's flawed data analysis (explanation A1) likely explains why the UAH analysis does not show tropical amplified upper tropospheric warming, while this amplified warming appears in model-based projections and most other up-to-date observational analyses, as I discuss in "Myth: The Tropospheric Hot Spot does not Exist". In "John Christy and Atmospheric Temperature Trends", I summarize other instances in which Christy (unintentionally and/or intentionally) distorted observational analyses in a way that created or exaggerated discrepancies between the analyses and model-based projections, as per explanation A1. 

So Christy should take explanation A1 more seriously, since A1 keeps contributing to the discrepancies he discusses between models vs. observational analyses. Christy is also aware of explanations involving errors in inputted forcings [91, page 517] and internal variability [91, page 517; 137] (explanations A2 and A3, respectively); he admits he cannot fully discount these explanations [91, page 517]. But Christy continues to leap to model error as an explanation anyway [8, page 379; 91, pages 515 - 517; 102, section 5]. This makes it easier for him to argue against using climate models to support policies he dislikes, such as government regulation of CO2 emissions (I discuss this issue more in section 2.5 of "Myth: The Sun Caused Recent Global Warming and the Tropical Stratosphere Warmed"). Unlike Christy, Santer et al. #5 both pays sufficient attention to explanation A1 and explains why Christy was wrong to leap to explanation A4 [8].

Section 2.3: Myth proponents distort Santer et al.'s claims regarding climate models

After rebutting Christy's position in Santer et al. #5 [8, page 379], Santer co-authored another 2017 paper with Michael Mann and other climate scientists. This paper further investigated explanations of the model-observations discrepancy in the troposphere [1]. Let's call this paper Santer et al. #6 [1]. Santer et al. #6 is the subject of our myth: myth proponents misrepresent Santer et al. #6.

Santer et al. #6 argues for the "error in inputted forcings" explanation A2, after accounting for the spurious stratospheric contamination discussed in Santer et al. #5. Santer et al. #6 also offers three arguments against the "oversensitive models" explanation A4 as an account of most of the post-1998 model-observations divergence in the troposphere:
  1. If the models are much too sensitive to CO2, then there should be a specific discrepancy between observed climate responses to volcanic eruptions vs. the models' predicted response to said eruptions. But this discrepancy does not appear [1, page 483], as shown in Santer et al. #4 [89].
  2. If the over-sensitivity accounts for most of the post-1998 model-observations discrepancy, then models should exaggerate pre-1998 CO2-induced warming as well. So there should be a similar pre-1998 model-observations discrepancy with respect to tropospheric warming. Yet this pre-1998 discrepancy is not evident [1, page 483], as shown in figure 4 above.
  3. A statistical, model-based test using a proxy for each model's climate sensitivity argues against model-sensitivity explanation A4 [1, page 483].

Thus Santer et al. #4 and Santer et al. #6 argue against the "oversensitive models" explanation from Christy's testimony to Congress:

"It has been posited that the differences between modelled and observed tropospheric warming rates are solely attributable to a fundamental error in model sensitivity to anthropogenic greenhouse gas increases [by John Christy in reference 25 of this blogpost]. Several aspects of our results cast doubt on the ‘sensitivity error’ explanation [1, pages 482 and 483]."

So Santer et al. #6 does not confirm Christy and Cruz's "over-sensitive models" explanation [1, pages 482 and 483], in line with the stance taken by Santer et al. #1 [13, pages 1 - 4], #2 [95, page 1551], #3 [96, pages 1706 and 1707], #4 [89, page 188], and #5 [8, page 379]. In a recent paper, Christy failed to rebut any of these arguments against the "over-sensitive models" explanation, and he admitted he could not fully discount the effects of internal variability and errors in inputted forcings (explanations A3 and A2, respectively) [91, page 517]. And in contrast to other researchers [24; 33; 35], Christy did not use updated forcing estimates [91; 102], which conveniently allowed him to further exaggerate differences between model-based projections vs. observational analyses. Yet he still jumped to the "over-sensitive models" explanation anyway [91], contrary to the rebuttal Santer gave in Santer et al. #6 [1, pages 482 and 483].

Myth proponent Judith Curry is therefore wrong when she claims that Santer et al. #6 confirms what Christy and Cruz have been saying:

"The paper confirms what John Christy has been saying for the last decade, and also supports the ‘denier’ statements made by Ted Cruz about the hiatus [5]."

Curry's above "hiatus" statement [5] is a distortion, since Santer et al. #5 [8] and Santer et al. #6 [1, figure 1] rebut the idea of a "hiatus" by showing tropospheric warming over the past two decades (I discuss further evidence against this "hiatus" claim in "Myth: No Global Warming for Two Decades"). Curry evidently does not agree with Santer et al. #6's conclusion [5]. But instead of accurately reporting Santer et al. #6's claims and then stating why she disagrees with those claims (which would be a fine thing for Curry to do), Curry claims that Santer et al. #6 confirms a position that Santer et al. #6 actually argues against. Maue and Bastasch engage in a similar distortion when they say that Santer and Christy "seem to be on the same page [2; 3]." Thus Maue, Bastasch, and Curry employ a common tactic used by critics of mainstream science: they misrepresent sources [28; 29]. There is no excuse for their misrepresentations, especially since Santer has made these points for at least 17 years, dating back to at least Santer et al. #2 in 2000 [95, page 1551].

Maue [2; 3], Bastasch [2; 3], and Lloyd [4] also use Santer et al. #6 to claim that climate models are flawed. But this too is a misrepresentation of Santer et al. #6, since Santer et al. #6 argues for the "error in inputted forcings" explanation A2 [1], and this explanation does not imply a flaw in the climate models [13, page 4; 89, page 188; 96, pages 1706 and 1707; 124]. Santer et al. #6 would need to support the "model error" explanation A4 in order for Maue, Bastasch, and Lloyd to be right. Yet Santer et al. #6 argues against explanation A4 [1, pages 482 and 483]. So Maue, Bastasch, and Lloyd are again distorting the implications of Santer et al. #6.

Pielke Sr., another myth proponent, states that Santer et al. #6 shows that CO2 is not the primary controller of climate changes on multi-decadal time-scales [6] (in making this comment, Pielke may be throwing shade at an often-cited paper that called CO2 the primary control knob of long-term climate [27]). Pielke's claim would make sense only if Santer et al. #6 supported the "oversensitive models" explanation A4. But Santer et al. #6 instead argues against this explanation [1, pages 482 and 483]. Thus Pielke also misrepresents Santer et al. #6. This represents another instance of a pattern extending back at least a decade: Pielke distorts science, and Santer addresses the distortion [105]. And as we saw in section 2.2, Santer went through the same pattern with Christy as well. Some things never change.

Section 2.4: Further context and future work on climate models and observational analyses

What's sad about all this is that many genuinely curious people will not read Santer et al. #6. Instead these people will trust the claims Pielke [6; 7], Curry [5], Maue [2; 3], etc. make about Santer's paper. After all, why would climate scientists (Pielke and Curry), a meteorologist (Maue), and press sources (Maue, Bastasch, and Lloyd) blatantly misrepresent a scientific paper to an inquiring public? I am not going to answer that question now, but I have my suspicions about what the right answer is. All I will note here is that these myth proponents misrepresented Santer's paper, as others have noted before me [42]. There is no need to trust these myth proponents, when the authors of Santer et al. #6 have offered a non-technical fact sheet summarizing the main points of Santer et al. #6 [51].

So where does this topic go from here? Genuinely curious people can keep an eye out for at least two things:

  • Feel free to check the peer-reviewed scientific literature regularly, to see if climate scientists incorporated the updated forcings into model-based tropospheric warming projections. These temperature projections would come from the Coupled Model Intercomparison Project Phase 5 (CMIP5) model ensemble discussed by Santer in Santer et al. #4 [89], Santer et al. #5 [8], and Santer et al. #6 [1]. I think this incorporation will likely occur, since scientists previously updated the forcings for model-based surface warming projections [24; 33; 35].
  • At the 2017 European Geosciences Union (EGU) General Assembly, a scientist submitted a poster abstract comparing satellite-based tropospheric warming trends with model-based projections that use observed forcings [52]. This abstract uses the Whole Atmosphere Community Climate Model (WACCM) for its comparison [52]. Another abstract [53] from the same poster session [54] compared weather-balloon-based tropical tropospheric warming trends with model-based warming projections. This weather-balloon-based abstract used the CMIP5 model ensemble and the Max Planck Institute Earth System model (MPI-ESM) ensemble; the abstract then formed the basis of a peer-reviewed paper published earlier this year [9]. So, hopefully, the satellite-based WACCM abstract will also lead to a peer-reviewed publication in the near future. Feel free to keep an eye out for that satellite-based paper (added note: the paper was subsequently published on September 25, 2017 [79]).

Christy also recently co-authored two papers again emphasizing comparisons of models and climate data. I address his 2017 paper on lower tropospheric temperature trends [91] in my blogpost "John Christy Fails to Show that Climate Models Exaggerate CO2-induced Warming". And I briefly cover his March 2018 paper on mid-to-upper tropospheric temperature trends [102] in section 2.2 of "Myth: Evidence Supports Curry's Claims Regarding Satellite-based Analyses and the Hot Spot", along with a separate multi-tweet Twitter thread [119].

3. Posts Providing Further Information and Analysis

4. References

  1. "Causes of differences in model and satellite tropospheric warming rates"
  8. "Comparing tropospheric warming in climate models and satellite data"
  9. "Internal variability in simulated and observed tropical tropospheric temperature trends"
  10. "Executive summary: Temperature trends in the lower atmosphere - Understanding and reconciling differences"
  11. "Tropospheric temperature trends: history of an ongoing controversy"
  12. "Extended summary of the Climate Dialogue on the (missing) tropical hot spot"
  13. "A response to the “Data or Dogma?” hearing"
  16. "Climate change 2013: The physical science basis; Chapter 2: Observations: Atmosphere and Surface"
  17. "A reassessment of temperature variations and trends from global reanalyses and monthly surface climatological datasets"
  18. "Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends"
  19. "Debunking the climate hiatus"
  20. "Sensitivity to factors underlying the hiatus"
  21. "Misdiagnosis of Earth climate sensitivity based on energy balance model results"
  22. "Natural variability, radiative forcing and climate response in the recent hiatus reconciled"
  23. "Tropospheric warming over the past two decades"
  24. "Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures"
  26. "Testimony. Data or dogma? Promoting open inquiry in the debate over the magnitude of human impact on Earth’s climate. Hearing in front of the U.S. Senate Committee on Commerce, Science, and Transportation, Subcommittee on Space, Science, and Competitiveness, 8 December 2015"
  27. "Atmospheric CO2: Principal control knob governing Earth’s temperature"
  28. "How the growth of denialism undermines public health"
  29. "Denialism: what is it and how should scientists respond?"
  30. "Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends"
  31. "Temperature trends at the surface and in the troposphere"
  32. "Removing diurnal cycle contamination in satellite-derived tropospheric temperatures: understanding tropical tropospheric trend discrepancies"
  33. "Reconciling warming trends"
  34. "Natural variability, radiative forcing and climate response in the recent hiatus reconciled"
  35. "Reconciling controversies about the ‘global warming hiatus’"
  36. "Response of the large-scale structure of the atmosphere to global warming"
  37. "Physical mechanisms of tropical climate feedbacks investigated using temperature and moisture trends"
  38. "Regional variation of the tropical water vapor and lapse rate feedbacks"
  39. "Elevation-dependent warming in mountain regions of the world"
  40. "Investigating the recent apparent hiatus in surface temperature increases: 2. Comparison of model ensembles to observational estimates"
  41. "Forcing, feedback and internal variability in global temperature trends"
  43. "A comparative analysis of data derived from orbiting MSU/AMSU instruments"
  44. "Classic examples of inhomogeneities in climate datasets"
  46. "Homogenized monthly upper-air temperature data set for Australia"
  47. "A bias in the midtropospheric channel warm target factor on the NOAA-9 Microwave Sounding Unit"
  48. "Reply to “Comments on 'A bias in the midtropospheric channel warm target factor on the NOAA-9 Microwave Sounding Unit'"
  49. "Homogenization of the global radiosonde temperature dataset through combined comparison with reanalysis background series and neighboring stations"
  50. "Discrepancies in tropical upper tropospheric warming between atmospheric circulation models and satellites"
  51. "Fact sheet for “Causes of differences between model and satellite tropospheric warming rates”"
  52. EGU General Assembly 2017: "Comparisons of historic satellite temperature trends with ensemble simulations from WACCM constrained by observed forcings"
  53. EGU General Assembly 2017: "Internal variability in simulated and observed tropical tropospheric temperature trends"
  54. EGU General Assembly 2017, Posters AS1.25/CL4.14: Past and future atmospheric temperature changes and their drivers
  55. "Tropical temperature trends in Atmospheric General Circulation Model simulations and the impact of uncertainties in observed SSTs"
  56. "The distribution of precipitation and the spread in tropical upper tropospheric temperature trends in CMIP5/AMIP simulations"
  57. "Vertical structure of warming consistent with an upward shift in the middle and upper troposphere"
  58. "Reexamining the warming in the tropical upper troposphere: Models versus radiosonde observations"
  59. "The reproducibility of observational estimates of surface and atmospheric temperature change"
  60. "Correcting temperature data sets"
  61. "Error estimates of Version 5.0 of MSU–AMSU bulk atmospheric temperatures"
  62. "The effect of diurnal correction on satellite-derived lower tropospheric temperature"
  66. "A quantification of uncertainties in historical tropical tropospheric temperature trends from radiosondes"
  67. "Uncertainties in climate trends: Lessons from upper-air temperature records"
  68. "Biases in stratospheric and tropospheric temperature trends derived from historical radiosonde data"
  69. "Radiosonde daytime biases and late-20th century warming"
  70. "Toward elimination of the warm bias in historic radiosonde temperature records—Some new results from a comprehensive intercomparison of upper-air data"
  71. "Robustness of tropospheric temperature trends from MSU channels 2 and 4"
  72. "Satellite-derived vertical dependence of tropical tropospheric temperature trends"
  73. "Error structure and atmospheric temperature trends in observations from the Microwave Sounding Unit"
  74. "Stability of the MSU-derived atmospheric temperature trend"
  75. "Atmospheric science: Stratospheric cooling and the troposphere"
  76. "Atmospheric science: Stratospheric cooling and the troposphere (reply)"
  77. "Estimation of tropospheric temperature trends from MSU channels 2 and 4"
  78. "What may we conclude about global tropospheric temperature trends?"
  79. "Troposphere-stratosphere temperature trends derived from satellite data compared with ensemble simulations from WACCM"
  80. "Difficulties in obtaining reliable temperature trends: Reconciling the surface and satellite microwave sounding unit records"
  81. "Spurious trends in satellite MSU temperatures from merging different satellite records"
  82. "Effects of orbital decay on satellite-derived lower-tropospheric temperature trends"
  83. Ray Pierrehumbert's 2012 video: "Tyndall Lecture: GC43I. Successful Predictions - 2012 AGU Fall Meeting"
  84. "Review of the consensus and asymmetric quality of research on human-induced climate change"
  85. "Sensitivity of satellite-derived tropospheric temperature trends to the diurnal cycle adjustment"
  86. "A comparative analysis of data derived from orbiting MSU/AMSU instruments"
  87. "Stratospheric temperature changes during the satellite era"
  88. "How accurate are satellite ‘thermometers’?"
  89. "Volcanic contribution to decadal changes in tropospheric temperature"
  90. "Recent slowdown of tropical upper tropospheric warming associated with Pacific climate variability"
  91. "Satellite bulk tropospheric temperatures as a metric for climate sensitivity"
  92. "Revisiting the controversial issue of tropical tropospheric temperature trends"
  93. "Common warming pattern emerges irrespective of forcing location"
  94. "Temperature trends in the lower atmosphere: Steps for understanding and reconciling differences"
  95. "Amplification of surface temperature trends and variability in the tropical atmosphere"
  96. "Consistency of modelled and observed temperature trends in the tropical troposphere"
  97. "30-year atmospheric temperature record derived by one-dimensional variational data assimilation of MSU/AMSU-A observations"
  98. "The role of remote sensing in monitoring global bulk tropospheric temperatures"
  99. "What do observational datasets say about modeled tropospheric temperature trends since 1979?"
  100. "Distinct global warming rates tied to multiple ocean surface temperature changes"
  101. "The subtle origins of surface-warming hiatuses"
  102. "Examination of space-based bulk atmospheric temperatures used in climate research"
  103. "Overestimated global warming over the past 20 years"
  104. "Recent observed and simulated warming"
  105. "Response to Comment on "Contributions of anthropogenic and natural forcing to recent tropopause height changes""
  107. "Recently amplified arctic warming has contributed to a continual global warming trend"
  108. "Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. UPDATE COBE-SST2 based land-ocean dataset"
  109. "Arctic warming in ERA‐Interim and other analyses"
  110. "An investigation into the impact of using various techniques to estimate arctic surface air temperature anomalies"
  111. "Response to Gleisner et al (2015): Recent global warming hiatus dominated by low latitude temperature trends in surface and troposphere data" [A comment on: "Recent global warming hiatus dominated by low‐latitude temperature trends in surface and troposphere data"]
  112. "Plausible reasons for the inconsistencies between the modeled and observed temperatures in the tropical troposphere"
  113. "Climate change 2013: The physical science basis; Chapter 9: Evaluation of climate models"
  114. "Reconciled climate response estimates from climate models and the energy budget of Earth"
  115. "Statistical analysis of coverage error in simple global temperature estimators"
  116. "New estimates of tropical mean temperature trend profiles from zonal mean historical radiosonde and pilot balloon wind shear observations"
  117. "UAH version 6 global satellite temperature products: Methodology and results"
  119. Multi-tweet Twitter thread
  120. ["HadAT2, using a more conservative methodology for detecting shifts in balloon measurements, likely has retained spurious upper troposphere/lower stratosphere cooling from radiosonde equipment changes over time which contributes to its relatively “cool” trend"]
  121. "Temporal homogenization of monthly radiosonde temperature data. Part I: Methodology"
  122. "Temporal homogenization of monthly radiosonde temperature data. Part II: Trends, sensitivities, and MSU comparison"
  123. "State of the climate in 2017"
  125. "Comparing climate projections to observations up to 2011"
  126. "Global warming deduced from MSU"
  127. "Comments on "Analysis of the merging procedure for the MSU daily temperature time series""
  128. "Global warming- Evidence from satellite observations"
  129. "Stratospheric influences on MSU-derived tropospheric temperature trends: A direct error analysis"
  130. "On using global climate model simulations to assess the accuracy of MSU retrieval methods for tropospheric warming trends"
  132. "A comparison of tropical temperature trends with model predictions"
  133. "Open letter to the climate science community: Response to "A Climatology Conspiracy?""
  134. "A satellite-derived lower tropospheric atmospheric temperature dataset using an optimized adjustment for diurnal effects"
  135. "Effects of diurnal adjustment on biases and trends derived from inter-sensor calibrated AMSU-A data"
  136. "New generation of US satellite microwave sounder achieves high radiometric stability performance for reliable climate change detection"
  137. "Among global thermometers, warming still wins out"
  138. "Estimating low-frequency variability and trends in atmospheric temperature using ERA-Interim"
  139. Ben Santer's 2010 video: "The General Public: Why Such Resistance?"
  140. "Separating signal and noise in atmospheric temperature changes: The importance of timescale"
  141. "Assessing uncertainty in estimates of atmospheric temperature changes from MSU and AMSU using a Monte-Carlo estimation technique"
  142. "Assessing the value of Microwave Sounding Unit–radiosonde comparisons in ascertaining errors in climate data records of tropospheric temperatures"
  143. "WP4 Estimating and reducing uncertainty of Reanalyses and observations"
  144. "The extreme El Niño of 2015–2016 and the end of global warming hiatus"
  145. "U.S. House Committee on Science, Space & Technology; 2 Feb 2016; Testimony of John R. Christy; University of Alabama in Huntsville"
  146. ["It is worth noting that the statistical test used in Douglass et al. (2008) is obviously inappropriate as a perfect climate model is almost guaranteed to fail it! [...]"   "The discussion in the 2013 paper does not include a discussion of the validity of the statistical test used, so it fails to address the criticism raised in my comment."    "Prof Christy: You have missed the point. [...] When we compare the observed trend with the GCMs we are comparing ONE realisation of a chaotic process with the MEAN of a set of simulations of that chaotic process. Even if the model producing the simulations is absoultely [sic] perfect, there is no reason to expect the realisation we actually observe to be any closer to the MEAN than any of the individual simulations."]