
Friday, July 28, 2017

+Myth: Santer et al. Show that Climate Models are Very Flawed

The outline for this post is as follows:
  1. The Myth and Its Flaws
  2. Context and Analysis
  3. Posts Providing Further Information and Analysis
  4. References

This is the "+References" version of this post, which means that this post contains my full list of references and citations. If you would like an abbreviated and easier to read version, then please go to the "main version" of this post.

References are cited as follows: "[#]", with "#" corresponding to the reference number given in the References section at the end of this post.




1.  The Myth and Its Flaws



Climate scientist Ben Santer recently co-authored a paper [1]. Santer and his co-authors showed that climate models over-estimated recent atmospheric warming, thereby revealing a deep flaw in the models. The climate models likely exaggerated CO2-induced atmospheric warming, thus vindicating John Christy's and Ted Cruz's claims about the models.

Different myth proponents accept different parts of the above myth. The Daily Caller's Ryan Maue and Michael Bastasch imply that the models over-estimated the warming due to a flaw in the models [2; 3]; the Australian's Graham Lloyd concurs [4]. Judith Curry, Maue, and Bastasch state that Santer's co-authored paper vindicates Christy and Cruz's claims [2; 3; 5]. And Roger Pielke Sr. takes the myth further by attributing the over-estimation to the models exaggerating CO2's effect on climate [6; 7]:


Figure 1: A portion of a tweet in which Pielke Sr. comments on Santer's co-authored paper [6].

The myth's flaws: According to Ben Santer's co-authored paper, observed atmospheric warming does not show that climate models over-estimate CO2-induced warming [1, pages 482 and 483]. So Santer's paper contradicts claims made by John Christy [8, page 379; 26, pages 3 and 4] and Ted Cruz [13, pages 1 - 4] regarding the models. In fact, Santer's paper explicitly argues against Christy's position [1, pages 482 and 483], consistent with Santer's previously published work [8]. Santer's paper also argues that there are errors in the climate information inputted into the climate models, and that these input errors (not an error in the models themselves) account for much of the difference between observed atmospheric warming and the amount of atmospheric warming projected by climate models [1].



2. Context and Analysis



The following analogy may help in understanding this myth:

Suppose Harvey generates a model that predicts coin flips; if you input information into the model, then the model will predict the number of heads and tails you will get from your coin flips. You can input conditions such as how many times you flipped the coin, whether the coin is fair or slightly loaded on one side, etc. So suppose Yomotsu tells Harvey that Yomotsu's coin is fair/unloaded and that Yomotsu flipped the coin 40 times. Harvey inputs this information into his model and then runs the model on his computer 10,000 times, generating an output of 10,000 individual model runs. A small number of these runs generate ratios such as "29 heads, 11 tails" or "15 heads, 25 tails", but the average of the model runs is "20 heads, 20 tails."
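
To make the analogy concrete, here is a minimal sketch (in Python) of what a toy model like Harvey's might look like. Harvey, Yomotsu, and the parameter values are part of the analogy only; none of this comes from Santer's papers:

import random

def harveys_model(n_flips=40, p_heads=0.5, n_runs=10_000, seed=0):
    """Simulate n_runs realizations of n_flips coin flips.

    p_heads encodes the inputted assumption about the coin (0.5 = fair/unloaded).
    Returns the number of heads obtained in each model run.
    """
    rng = random.Random(seed)
    return [sum(rng.random() < p_heads for _ in range(n_flips))
            for _ in range(n_runs)]

runs = harveys_model()
print("average heads per run:", sum(runs) / len(runs))   # close to 20
print("range across runs:", min(runs), "to", max(runs))   # individual runs stray well away from 20

The average of the runs sits near "20 heads, 20 tails," even though individual runs wander far from that average.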

Then Yomotsu claims he flipped his fair coin 40 times, and ended up with a ratio of 12 heads to 28 tails. This differs from the "20 heads, 20 tails" average for Harvey's model runs. There are multiple possible explanations for the discrepancy between Harvey's model average and Yomotsu's claims. These explanations include:

  • Observational uncertainty: There is error or uncertainty in Yomotsu's reported observations, and this error affects Harvey's comparison of his model's output with Yomotsu's reported observations. For instance, Yomotsu may have misremembered the number of heads and tails for his 40 coin flips. This explanation implies a flaw in Yomotsu's reported observations, but not necessarily a flaw in Harvey's model.
  • Error in the inputs: This explanation implies an error in the information inputted into Harvey's model; this error leads to the model generating incorrect output. So, for example, Yomotsu's coin may be loaded on one side and thus not fair, despite Yomotsu's claims to the contrary. The error would then be with Yomotsu's inputted claims, not Harvey's model.
  • Natural variability and/or model uncertainty: Yomotsu's results may be real, but due to chance. Chance does not mean magic or something unnatural, since one can (in principle) give a scientific explanation for why each flip came up the way it did. This explanation would include information about the motion of Yomotsu's hand during each flip, differences in air pressure as the coin moved through the air, etc. These natural phenomena underlie the natural variability that results in chance fluctuations in the ratio of heads-to-tails. Chance fluctuations affect smaller sample sizes more than larger sample sizes; for instance, one is more likely to get (by chance) 3 heads in 5 flips of a fair coin than 30,000 heads in 50,000 flips of said coin (see the probability sketch after this list). So unlikely results can occur by chance, especially in smaller sample sizes. Even Harvey's model output illustrates this point, since some of his 40-flip model runs produced unlikely results; this results in model uncertainty, where individual model runs can differ from the model average. Thus natural variability contributes to model uncertainty. Even accurate models can have this model uncertainty due to natural variability / chance. And both model uncertainty and natural variability can account for why Yomotsu's results differ from Harvey's model average, even if Harvey's model is correct.
  • Model error: On this explanation, Yomotsu's results differ from Harvey's model average because of an error in Harvey's model. Harvey's model, for example, may contain inaccurate equations relating a coin's geometry to the coin's flight through the air during a coin flip.
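
As a quick check on the natural-variability point above, here is a small sketch comparing the chance of getting at least 60% heads from a fair coin in a small sample versus a large one. It uses an exact binomial calculation for the small sample and a normal approximation for the large one:

from math import comb, sqrt
from statistics import NormalDist

def prob_at_least_exact(k, n, p=0.5):
    """Exact P(X >= k) for X ~ Binomial(n, p); practical for small n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def prob_at_least_normal(k, n, p=0.5):
    """Normal approximation to the same tail probability, for large n."""
    mean, sd = n * p, sqrt(n * p * (1 - p))
    return 1.0 - NormalDist(mean, sd).cdf(k - 0.5)   # continuity correction

print(prob_at_least_exact(3, 5))              # 0.5: 60% heads is common in 5 flips
print(prob_at_least_normal(30_000, 50_000))   # ~0.0: 60% heads is essentially impossible in 50,000 flips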

Note that the first three explanations explain Yomotsu's claims without implying a flaw in Harvey's model, while only the fourth explanation implies a flaw in Harvey's model. Furthermore, these explanations are not mutually exclusive, since these four explanations could all simultaneously contribute to the difference between Harvey's model average and Yomotsu's reported results.

Just as Yomotsu's claims diverged from Harvey's model-based projections, global warming observations can diverge from climate model projections of this warming. In addition to projecting warming of Earth's land surface and oceans, climate models also project warming of the troposphere, the atmospheric layer closest to the Earth's surface [1; 8; 24]. In his Congressional testimony, climate scientist John Christy depicted differences between observed and modeled tropospheric warming [26, pages 2 - 4]. Figure 2 depicts this difference, as presented in one of Christy's graphs from his Congressional testimony:


Figure 2: Christy's comparison of the relative global mid-tropospheric temperature ("TMT") as projected by models versus as determined using observations from weather balloons and satellites [26, page 3].

[The above graph is deeply flawed, for reasons I go over in "John Christy, Climate Models, and Long-term Tropospheric Warming".]

One could offer a number of different explanations for figure 2's discrepancy between model-based projections and observations. These climate science explanations parallel the coin flip explanations previously discussed:

  • Explanation A1: Uncertainty or error in the observations, due to data correction for known errors (also known as homogenization) [8; 10; 11; 32; 44 - 50; 66], differences between the temperature trends produced by different research groups [1, page 478; 8, pages 374 and 379; 9; 10, page 3; 11; 12, page 24; 13, pages 2 - 4; 14; 15; 32, table 4 on page 2285; 43; 55; 67], the 1998 transition in satellite equipment for monitoring atmospheric temperature [11, pages 69 and 72; 13, pages 2 and 3], changes in weather balloon equipment [68 - 70], etc. This observational uncertainty does not imply a flaw in the climate models [13, pages 3 and 4; 66].
  • Explanation A2: Errors in measurements of human, solar, volcanic, and other factors (the forcings) [1, pages 478 and 483; 8, page 379; 12, page 27; 13, page 4; 14; 15; 66]. Estimates of these factors serve as input for the climate models, and the climate models then use this input (in addition to other information) to predict future trends under various scenarios. So errors in this model input could lead to incorrect model projections, even if the climate models themselves were perfect with respect to the physical processes relevant to climate change [13, page 4].
  • Explanation A3: Differences in natural or internal variability. Certain factors strongly influence shorter-term climate trends, while different factors more strongly influence longer-term climate trends [9; 10, page 3; 12, pages 16 and 27; 13, page 4; 15; 66]. Shorter-term factors include aerosols and the El Niño phase of the El Niño-Southern Oscillation (ENSO) [1, page 478; 8, pages 379 - 381; 11, pages 70 and 72; 12, page 27; 13, pages 4 and 5; 16, page 194]. ENSO-based variability in particular skews shorter-term model-based temperature projections beginning with a strong El Niño year like 1998 [8; 11; 13; 15; 16, page 194; 18 - 23]. Shorter-term temperature variability is also often due to the randomness / stochastic noise that afflicts smaller sample sizes; this randomness tends to have less of an effect on larger sample sizes [16, page 194; 17, figure 3] (see the trend sketch after this list). This shorter-term, natural variability can cause a discrepancy between observed short-term trends and model projections, even if the climate models themselves were perfect with respect to the physical processes relevant to climate change [13, page 4].
  • Explanation A4: Model error [56; 57] due to, for example, the models being too sensitive to CO2 and thus over-estimating how much warming CO2 causes [8, page 379; 13, page 4; 14].
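
The effect of short-term noise on trends can be illustrated with a small sketch: generate synthetic temperature series with the same underlying trend plus random year-to-year noise, and compare the spread of estimated trends for short versus long records. The trend and noise values below are made up purely for illustration, and the noise is treated as simple white noise (real internal variability, such as ENSO, is more structured than this):

import random

def ols_slope(y):
    """Ordinary least-squares slope of y against the time index 0..len(y)-1."""
    n = len(y)
    x_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((x - x_mean) * (yv - y_mean) for x, yv in enumerate(y))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

def trend_range(true_trend=0.02, noise_sd=0.15, years=18, n_series=1000, seed=1):
    """Smallest and largest estimated trends across synthetic noisy series."""
    rng = random.Random(seed)
    trends = []
    for _ in range(n_series):
        series = [true_trend * t + rng.gauss(0, noise_sd) for t in range(years)]
        trends.append(ols_slope(series))
    return min(trends), max(trends)

print("18-year trends range:", trend_range(years=18))   # wide spread around the true trend of 0.02
print("60-year trends range:", trend_range(years=60))   # much tighter spread around 0.02

Short records allow individual trend estimates to stray far from the true underlying trend, while long records do not.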

As in the case of the coin flip explanations, explanations A1 to A3 do not imply a flaw in the climate models, while only explanation A4 implies a flaw in the models.

Explanations A1 to A4 are not mutually exclusive, so each explanation could contribute to explaining differences between observed and model-projected warming [1, page 480; 8, page 379; 13, page 4]. Michael Mann and other climate scientists published research showing that observational uncertainty (explanation A1) accounts for much of the difference between observed surface warming and climate model projections of this warming [18; 24; 35]. Other scientists argued for a contribution from forcing errors (explanation A2) [33; 35; 41] or internal variability (explanation A3) [34; 35; 40; 41]. These explanations may also be relevant to tropospheric warming, since surface warming often rises to the troposphere, especially in the tropics [8, page 27; 36, page 4; 37 - 39; 55] (see "Myth: The Tropospheric Hot Spot does not Exist" for more on this).

And now we reach one of the central questions relevant to the myth: 
Q1: How much do explanations A1 to A4 contribute to explaining Christy's model-observations discrepancy from figure 2?

Let's start with the weather balloon (radiosonde) trends. Christy implied that model error explains the discrepancy between radiosonde analyses and model-based projections [8, page 379; 26, pages 3 and 4]. But Christy's conclusion was premature. For years scientists have known that radiosonde analyses contain spurious cooling in the tropical troposphere [5; 9; 11; 12, page 19; 32; 58; 59], as pointed out in a report that Christy co-authored [10, pages 3 and 7]. Christy has commented on this cold bias before [60], so he has no excuse for not being aware of it.

Christy should be aware of this cooling for another reason: over a decade ago, Christy emphasized how radiosonde analyses fit with the small tropospheric warming trend from his work at the University of Alabama in Huntsville (UAH) [61; 63; 78; 88]. However, researchers at Remote Sensing Systems (RSS) then showed Christy that his tropospheric warming trend was spuriously low and needed to be adjusted upwards [62; 63]. Thus Christy should be aware of the dangers of relying on spuriously cool, radiosonde-based trends. Yet in figure 2 Christy made the same error over a decade later. Christy continues to exploit this spurious cooling in order to exaggerate the difference between models and radiosonde analyses [64; 65].

The spuriously cool radiosonde trends likely resulted from changes in radiosonde equipment during the 1980s [68 - 70]. Accounting for the spurious cooling (explanation A1), along with internal variability (explanation A3), explains most of the difference between models and radiosonde analyses with respect to tropical tropospheric warming [9]. Similar explanations likely account for model-data differences outside of the tropics, though the differences are more pronounced in the tropics [68 - 70]. So Christy was wrong when he prematurely accepted model error (explanation A4) as an explanation. That addresses Q1 for the radiosonde analyses in figure 2. So what about the satellite-based analyses?

Climate scientist Ben Santer co-authored an unpublished document addressing the satellite-based portion of Q1. In this document he cited evidence on explanations A1 to A4 [13, pages 3 - 5]. Let's call this unpublished document Santer et al. #1 [13]. Santer et al. #1 criticizes US Senator Ted Cruz for offering only explanation A4 as an account of Christy's depicted differences between observed warming vs. modeled warming:

"[Senator Cruz argues that the] mismatch between modeled and observed tropospheric warming in the early 21st century has only one possible explanation – computer models are a factor of three too sensitive to human-caused changes in greenhouse gases (GHGs) [13, pages 1 and 2]. 
[...]
The hearing also failed to do justice to the complex issue of how to interpret differences between observed and model-simulated tropospheric warming over the last 18 years. Senator Cruz offered only one possible interpretation of these differences – the existence of large, fundamental errors in model physics [...]. In addition to this possibility, there are at least three other plausible explanations for the warming rate differences [...] [13, pages 3 and 4]."

Santer followed up Santer et al. #1 with a 2017 paper that again cited evidence on explanations A1 to A4 [8, pages 379 and 380]. Let's call this paper Santer et al. #2 [8]. In Santer et al. #2, Santer and his co-authors show that satellite readings of tropospheric temperature were contaminated by satellite readings of cooling in the stratosphere, an atmospheric layer higher than the troposphere [8]. This contamination introduced spurious cooling into satellite-based mid- to upper tropospheric temperature trends [8].

Scientists have known about this bogus cooling since at least 1997 [30; 80]. The cooling is accounted for by 4 of the 5 major research groups (including RSS) that generate satellite-based tropospheric temperature measurements [8; 30; 31, page 2; 32, table 4 on page 2285; 71 - 74; 80]. Only the UAH research team fails to correct for this cooling, since they object [77] to the validated [30; 71; 75; 76] correction method used by three of the other research groups [8; 30; 31, page 2; 32, table 4 on page 2285; 71 - 74]. And it was UAH team member John Christy who cited satellite-based tropospheric temperature trends containing this bogus cooling [26, pages 2 - 4]; figure 2 shows this spuriously cool, satellite-based temperature trend. Christy then used this flawed trend to exaggerate the discrepancy between observed satellite-based tropospheric warming and model projections of this warming [26, pages 2 - 4]. And Christy implied that model error (explanation A4) explains this exaggerated discrepancy [8, page 379; 26, pages 3 and 4].
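
For readers who want a sense of how this kind of correction works, here is a minimal sketch of the general idea: subtract a weighted portion of the lower-stratospheric (TLS) trend from the mid-tropospheric (TMT) trend, so that stratospheric cooling no longer drags the tropospheric trend down. The weights and trend values below are placeholder numbers for illustration only; they are not the coefficients or data used by the research groups or by Santer et al. #2:

def corrected_tmt_trend(tmt_trend, tls_trend, a_tmt=1.1, a_tls=-0.1):
    """Remove stratospheric influence from a mid-tropospheric (TMT) trend by
    combining it with the lower-stratospheric (TLS) trend.

    a_tmt and a_tls are illustrative placeholder weights, not the published
    regression coefficients.
    """
    return a_tmt * tmt_trend + a_tls * tls_trend

# Hypothetical trends in deg C per decade: a warming troposphere whose measured
# trend is pulled downward by a strongly cooling stratosphere.
raw_tmt_trend = 0.08   # contaminated TMT trend (made-up value)
tls_trend = -0.30      # stratospheric cooling trend (made-up value)
print(corrected_tmt_trend(raw_tmt_trend, tls_trend))   # 0.118: more warming once the contamination is removed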

But as we just saw, Christy's stated model-observations discrepancy was largely due to observational error (explanation A1) resulting from the spurious cooling in Christy's reported tropospheric warming trend [8]. Santer et al. #2 corrects Christy's spuriously cool trend, as shown in figure 3 below:



Figure 3: (A),(B) 1979 - 2016 near-global mid- to upper tropospheric warming trends predicted by climate models and observed in satellite data analyses from UAH, Remote Sensing Systems (RSS), and the National Oceanic and Atmospheric Administration Center for Satellite Applications and Research (NOAA/STAR). Trends are presented as an average of all the trend values for a given trend length. Trends are not corrected for stratospheric cooling (A) or corrected for stratospheric cooling (B). (C),(D) Ratio between the tropospheric warming trend predicted by the climate models vs. the tropospheric warming trend observed in the satellite data analyses. The dotted lines in (C) show the ratios Christy reported to Congress [26, page 3]. Trend ratios are not corrected for stratospheric cooling (C) or corrected for stratospheric cooling (D) [8, figure 2 on page 377].
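
To clarify what the trend ratios in panels (C) and (D) represent, here is a minimal sketch of the arithmetic: for a given trend length, compute the least-squares trend of every overlapping segment of that length, average those trends, and then divide the model average by the observational average. This shows only the bare arithmetic on made-up inputs; the actual figure uses monthly satellite and multi-model data as described in Santer et al. #2:

def ols_slope(y):
    """Least-squares slope of y against the time index 0..len(y)-1."""
    n = len(y)
    x_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((x - x_mean) * (yv - y_mean) for x, yv in enumerate(y))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

def mean_trend(series, trend_length):
    """Average of the trends of every overlapping segment of the given length."""
    slopes = [ols_slope(series[i:i + trend_length])
              for i in range(len(series) - trend_length + 1)]
    return sum(slopes) / len(slopes)

def model_obs_trend_ratio(model_series, obs_series, trend_length):
    """Ratio of the average model trend to the average observed trend."""
    return mean_trend(model_series, trend_length) / mean_trend(obs_series, trend_length)

# Made-up example: a model series warming at 0.02 per step vs. observations at 0.015 per step.
model = [0.02 * t for t in range(40)]
obs = [0.015 * t for t in range(40)]
print(model_obs_trend_ratio(model, obs, trend_length=10))   # about 1.33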


Santer et al. #2 applied this stratospheric cooling correction to Christy's flawed analysis from figure 2 above. This correction allowed for a more accurate comparison between the observations and the model-based projections, as shown in figure 4:

Figure 4: Near-global, mid- to upper tropospheric relative temperature projected by climate models and observed in satellite data analyses. The pink line is the observed tropospheric warming trend, corrected for stratospheric cooling and shown as an average of the UAH, Remote Sensing Systems (RSS), and the National Oceanic and Atmospheric Administration Center for Satellite Applications and Research (NOAA/STAR) satellite data analyses. The black line shows the average warming trend from an ensemble of climate models, while the gray region shows the range of values taken by different realizations of each model; different realizations have slightly different internal/natural variability [8, figure 1 on page 376].

Figure 4 incorporates data from the UAH. But the UAH's mid- to upper tropospheric warming trend remains much lower than the warming trends from RSS and a research team at the National Oceanic and Atmospheric Administration Center for Satellite Applications and Research (NOAA/STAR). The UAH analysis is most likely the flawed analysis since:

  • UAH has a long history of under-estimating tropospheric warming due to UAH's faulty homogenization [11; 59; 60; 62; 81; 82; 83, from 36:31 to 37:10; 84, pages 5 and 6].
  • Other scientists have critiqued UAH's homogenization methods [12, pages 17 - 19; 30; 32; 47; 48; 72; 81; 82; 85; 86].
  • UAH's satellite-based temperature analyses often diverge from analyses made by other research groups, in both the troposphere and other atmospheric layers [12, pages 17 - 19; 30; 32; 47; 48; 72; 81; 85; 87].

These points support Santer et al. #2's contention that residual errors in the UAH analysis cause the analysis to under-estimate tropospheric warming [8, page 384]. Since figure 4 incorporates a UAH analysis with a spuriously low warming trend, figure 4 likely under-estimates mid- to upper tropospheric warming. Thus explanation A1 would account for some of the model-observations discrepancy for figure 4.

In addition to defending explanation A1, Santer et al. #2 also rejects Christy's leap to explanation A4:

"It is incorrect to assert that a large model error in the climate sensitivity to greenhouse gases is the only or most plausible explanation for differences in simulated and observed warming rates (Christy 2015) [8, page 379]."

After rebutting Christy's position in Santer et al. #2, Santer co-authored another 2017 paper with Michael Mann and other climate scientists. This paper further investigated other explanations for the model-observations discrepancy in the troposphere [1]. Let's call this paper Santer et al. #3 [1]. Santer et al. #3 is the subject of our myth: myth proponents misrepresent Santer et al. #3.

Santer et al. #3 offers three arguments against the "oversensitive models" explanation A4 as an account of most of the post-1998 model-observations divergence in the troposphere:
  1. If the models are much too sensitive to CO2, then there should be a specific discrepancy between observed climate responses to volcanic eruptions vs. the models' predicted response to said eruptions. But this discrepancy does not appear [1, page 483].
  2. If the over-sensitivity accounts for most of the post-1998 model-observations discrepancy, then models should exaggerate pre-1998 CO2-induced warming as well. So there should be a similar pre-1998 model-observations discrepancy with respect to tropospheric warming. Yet this pre-1998 discrepancy is not evident [1, page 483], as shown in figure 4 above (see also the sketch after this list).
  3. A statistical, model-based test using a proxy for each model's climate sensitivity argues against model-sensitivity explanation A4 [1, page 483].
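
Argument 2's logic can be captured in a toy calculation: if a constant sensitivity error were the whole story, the model-to-observation trend ratio should be roughly the same before and after 1998. The trend values below are hypothetical placeholders, not figures from Santer et al. #3; they only illustrate the reasoning:

def implied_sensitivity_factor(model_trend, obs_trend):
    """If a fixed oversensitivity factor explained the discrepancy, this ratio
    should be roughly constant across time periods."""
    return model_trend / obs_trend

# Hypothetical tropospheric warming trends (deg C per decade), for illustration only:
pre_1998 = implied_sensitivity_factor(model_trend=0.28, obs_trend=0.27)
post_1998 = implied_sensitivity_factor(model_trend=0.27, obs_trend=0.17)
print(round(pre_1998, 2), round(post_1998, 2))   # ~1.04 vs. ~1.59: one constant factor cannot explain both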

Thus Santer et al. #3 argues against the "oversensitive models" explanation from Christy's testimony to Congress:

"It has been posited that the differences between modelled and observed tropospheric warming rates are solely attributable to a fundamental error in model sensitivity to anthropogenic greenhouse gas increases [by John Christy in reference 25 of this blogpost]. Several aspects of our results cast doubt on the ‘sensitivity error’ explanation [1, pages 482 and 483]."

So Santer et al. #3 does not confirm Christy and Cruz's "over-sensitive models" explanation [1, pages 482 and 483], in line with the stance taken by Santer et al. #1 [13, pages 1 - 4] and Santer et al. #2 [8, page 379]. Myth proponent Judith Curry is therefore wrong when she claims that Santer et al. #3 confirms what Christy and Cruz have been saying:

"The paper confirms what John Christy has been saying for the last decade, and also supports the ‘denier’ statements made by Ted Cruz about the hiatus [5]."

Curry evidently does not agree with Santer et al. #3's conclusion [5]. But instead of accurately reporting Santer et al. #3's claims and then stating why she disagrees with those claims (which would be a fine thing for Curry to do), Curry instead claims that Santer et al. #3 confirms a position that Santer et al. #3 actually argues against. Maue and Bastasch engage in a similar distortion when they say that Santer and Christy "seem to be on the same page" [2; 3]. Thus Maue, Bastasch, and Curry employ a common tactic used by critics of mainstream science: they misrepresent sources [28; 29]. There is no excuse for their misrepresentations.

Maue [2; 3], Bastasch [2; 3], and Lloyd [4] also use Santer et al. #3 to claim that climate models are flawed. But this too is a misrepresentation of Santer et al. #3, since Santer et al. #3 argues for the "error in inputted forcings" explanation A2 [1] (after accounting for the spurious stratospheric contamination discussed in Santer et al. #2), and this explanation does not imply a flaw in the climate models. Santer et al. #3 would need to support the "model error" explanation A4 in order for Maue, Bastasch, and Lloyd to be right. Yet Santer et al. #3 argues against explanation A4 [1, pages 482 and 483]. So Maue, Bastasch, and Lloyd are again distorting the implications of Santer et al. #3.

Pielke Sr., another myth proponent, states that Santer et al. #3 shows that CO2 is not the primary controller of climate changes on multi-decadal time-scales [6] (in making this comment, Pielke may be throwing shade at an often-cited paper that called CO2 the primary control knob of long-term climate [27]). Pielke's claim would make sense only if Santer et al. #3 supported the "oversensitive models" explanation A4. But Santer et al. #3 instead argues against this explanation. Thus Pielke also misrepresents Santer et al. #3.


What's sad about all this is that many genuinely curious people will not read Santer et al. #3. Instead these people will trust the claims that Pielke [6; 7], Curry [5], Maue [2; 3], etc. make about Santer's paper. After all, why would climate scientists (Pielke and Curry), a meteorologist (Maue), and press sources (Maue, Bastasch, and Lloyd) blatantly misrepresent a scientific paper to an inquiring public? I am not going to answer that question now, but I have my suspicions on what the right answer is. All I will note here is that these myth proponents misrepresented Santer's paper, as others have noted before me [42]. There is no need to trust these myth proponents, when the authors of Santer et al. #3 have offered a non-technical fact sheet summarizing the main points of Santer et al. #3 [51].


So where does this topic go from here? Genuinely curious people can keep an eye out for at least two things:

  • Feel free to check the peer-reviewed scientific literature regularly, to see if climate scientists have incorporated the updated forcings into model-based tropospheric warming projections. These temperature projections would come from the Coupled Model Intercomparison Project Phase 5 (CMIP5) model ensemble discussed by Santer in Santer et al. #2 [8] and Santer et al. #3 [1]. I think this incorporation will likely occur, since scientists previously updated the forcings for model-based surface warming projections [33].
  • At the 2017 European Geosciences Union (EGU) General Assembly, a scientist submitted a poster abstract comparing satellite-based tropospheric warming trends with model-based projections that use observed forcings [52]. This abstract uses the Whole Atmosphere Community Climate Model (WACCM) for its comparison [52]. Another abstract [53] from the same poster session [54] compared weather-balloon-based tropical tropospheric warming trends with model-based warming projections. This weather-balloon-based abstract used the CMIP5 model ensemble and the Max Planck Institute Earth System Model (MPI-ESM) ensemble; the abstract then formed the basis of a peer-reviewed paper published earlier this year [9]. So, hopefully, the satellite-based WACCM abstract will also lead to a peer-reviewed publication in the near future. Feel free to keep an eye out for that satellite-based paper (the paper was published on September 25, 2017 [79]).




3. Posts Providing Further Information and Analysis





4. References


  1. "Causes of differences in model and satellite tropospheric warming rates"
  2. http://dailycaller.com/2017/06/19/take-a-look-at-the-new-consensus-on-global-warming/
  3. https://wattsupwiththat.com/2017/06/20/the-new-consensus-on-global-warming-a-shocking-admission-by-team-climate/
  4. http://www.theaustralian.com.au/news/health-science/climate-models-overestimated-temperature-rises-scientists/news-story/3df40de24758698cba22d98743d4e4c5
  5. https://judithcurry.com/2017/06/24/consensus-enforcers-versus-the-trump-administration/
  6. https://twitter.com/RogerAPielkeSr/status/876888650788810752
  7. https://twitter.com/RogerAPielkeSr/status/876868371664576514
  8. "Comparing tropospheric warming in climate models and satellite data"
  9. "Internal variability in simulated and observed tropical tropospheric temperature trends"
  10. "Executive summary: Temperature trends in the lower atmosphere - Understanding and reconciling differences"
  11. "Tropospheric temperature trends: history of an ongoing controversy"
  12. "Extended summary of the Climate Dialogue on the (missing) tropical hot spot"
  13. "A response to the “Data or Dogma?” hearing"
  14. http://climatefeedback.org/scientists-reactions-us-house-science-committee-hearing-climate-science/
  15. http://www.remss.com/blog/recent-slowing-rise-global-temperatures
  16. "Climate change 2013: The physical science basis; Chapter 2: Observations: Atmosphere and Surface"
  17. "A reassessment of temperature variations and trends from global reanalyses and monthly surface climatological datasets"
  18. "Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends"
  19. "Debunking the climate hiatus"
  20. "Sensitivity to factors underlying the hiatus"
  21. "Misdiagnosis of Earth climate sensitivity based on energy balance model results"
  22. "Natural variability, radiative forcing and climate response in the recent hiatus reconciled"
  23. "Tropospheric warming over the past two decades"
  24. "Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures"
  25. https://www.commerce.senate.gov/public/index.cfm/2015/12/data-or-dogma-promoting-open-inquiry-in-the-debate-over-the-magnitude-of-human-impact-on-earth-s-climate
  26. "Testimony. Data or dogma? Promoting open inquiry in the debate over the magnitude of human impact on Earth’s climate. Hearing in front of the U.S. Senate Committee on Commerce, Science, and Transportation, Subcommittee on Space, Science, and Competitiveness, 8 December 2015"
  27. "Atmospheric CO2: Principal control knob governing Earth’s temperature"
  28. "How the growth of denialism undermines public health"
  29. "Denialism: what is it and how should scientists respond?"
  30. "Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends"
  31. "Temperature trends at the surface and in the troposphere"
  32. "Removing diurnal cycle contamination in satellite-derived tropospheric temperatures: understanding tropical tropospheric trend discrepancies"
  33. "Reconciling warming trends"
  34. "Natural variability, radiative forcing and climate response in the recent hiatus reconciled"
  35. "Reconciling controversies about the ‘global warming hiatus’"
  36. "Response of the large-scale structure of the atmosphere to global warming"
  37. "Physical mechanisms of tropical climate feedbacks investigated using temperature and moisture trends"
  38. "Regional variation of the tropical water vapor and lapse rate feedbacks"
  39. "Elevation-dependent warming in mountain regions of the world"
  40. "Investigating the recent apparent hiatus in surface temperature increases: 2. Comparison of model ensembles to observational estimates"
  41. "Forcing, feedback and internal variability in global temperature trends"
  42. http://blog.hotwhopper.com/2017/06/no-hiatus-or-vacation-from-denial.html
  43. "A comparative analysis of data derived from orbiting MSU/AMSU instruments"
  44. "Classic examples of inhomogeneities in climate datasets"
  45. http://www.metoffice.gov.uk/hadobs/hadat/index.html
  46. "Homogenized monthly upper-air temperature data set for Australia"
  47. "A bias in the midtropospheric channel warm target factor on the NOAA-9 Microwave Sounding Unit"
  48. "Reply to “Comments on 'A bias in the midtropospheric channel warm target factor on the NOAA-9 Microwave Sounding Unit'"
  49. "Homogenization of the global radiosonde temperature dataset through combined comparison with reanalysis background series and neighboring stations"
  50. "Discrepancies in tropical upper tropospheric warming between atmospheric circulation models and satellites"
  51. "Fact sheet for “Causes of differences between model and satellite tropospheric warming rates”"
  52. EGU General Assembly 2017: "Comparisons of historic satellite temperature trends with ensemble simulations from WACCM constrained by observed forcings"
  53. EGU General Assembly 2017: "Internal variability in simulated and observed tropical tropospheric temperature trends"
  54. EGU General Assembly 2017, Posters AS1.25/CL4.14: Past and future atmospheric temperature changes and their drivers
  55. "Tropical temperature trends in Atmospheric General Circulation Model simulations and the impact of uncertainties in observed SSTs"
  56. "The distribution of precipitation and the spread in tropical upper tropospheric temperature trends in CMIP5/AMIP simulations"
  57. "Vertical structure of warming consistent with an upward shift in the middle and upper troposphere"
  58. "Reexamining the warming in the tropical upper troposphere: Models versus radiosonde observations"
  59. "The reproducibility of observational estimates of surface and atmospheric temperature change"
  60. "Correcting temperature data sets"
  61. "Error estimates of Version 5.0 of MSU–AMSU bulk atmospheric temperatures"
  62. "The effect of diurnal correction on satellite-derived lower tropospheric temperature"
  63. http://www.realclimate.org/index.php/archives/2005/08/et-tu-lt/
  64. http://www.drroyspencer.com/2016/10/new-santer-et-al-paper-on-satellites-vs-models-even-cherry-picking-ends-with-model-failure/
  65. https://wattsupwiththat.com/2016/10/20/new-santer-et-al-paper-on-satellites-vs-models-even-cherry-picking-ends-with-model-failure/
  66. "A quantification of uncertainties in historical tropical tropospheric temperature trends from radiosondes"
  67. "Uncertainties in climate trends: Lessons from upper-air temperature records"
  68. "Biases in stratospheric and tropospheric temperature trends derived from historical radiosonde data"
  69. "Radiosonde daytime biases and late-20th century warming"
  70. "Toward elimination of the warm bias in historic radiosonde temperature records—Some new results from a comprehensive intercomparison of upper-air data"
  71. "Robustness of tropospheric temperature trends from MSU channels 2 and 4"
  72. "Satellite-derived vertical dependence of tropical tropospheric temperature trends"
  73. "Error structure and atmospheric temperature trends in observations from the Microwave Sounding Unit"
  74. "Stability of the MSU-derived atmospheric temperature trend"
  75. "Atmospheric science: Stratospheric cooling and the troposphere"
  76. "Atmospheric science: Stratospheric cooling and the troposphere (a reply)"
  77. "Estimation of tropospheric temperature trends from MSU channels 2 and 4"
  78. "What may we conclude about global tropospheric temperature trends?"
  79. "Troposphere-stratosphere temperature trends derived from satellite data compared with ensemble simulations from WACCM"
  80. "Difficulties in obtaining reliable temperature trends: Reconciling the surface and satellite microwave sounding unit records"
  81. "Spurious trends in satellite MSU temperatures from merging different satellite records"
  82. "Effects of orbital decay on satellite-derived lower-tropospheric temperature trends"
  83. Ray Pierrehumbert's 2012 video: "Tyndall Lecture: GC43I. Successful Predictions - 2012 AGU Fall Meeting"
  84. "Review of the consensus and asymmetric quality of research on human-induced climate change"
  85. "Sensitivity of satellite-derived tropospheric temperature trends to the diurnal cycle adjustment"
  86. "A comparative analysis of data derived from orbiting MSU/AMSU instruments"
  87. "Stratospheric temperature changes during the satellite era"
  88. "How accurate are satellite ‘thermometers’?"