Ross McKitrick
Professor of Economics 
Department of Economics and Finance
University of Guelph
ross.mckitrick [at] uoguelph.ca

Global Warming: Model Testing / Hiatus 

JOURNAL ARTICLES

PERVASIVE WARM BIAS IN CMIP6 TROPOSPHERIC LAYERS

John Christy and I have published a paper in Earth and Space Science comparing tropospheric warming rates in the new generation CMIP6 climate models to observations from satellites, weather balloons and reanalysis systems.

  • McKitrick, Ross and John Christy (2020) Pervasive Warm Bias in CMIP6 Tropospheric Layers. Earth and Space Science Vol 7(9) September 2020. 

Every single model overpredicts warming. It has long been known that climate models on average overstate warming in the troposphere over the tropics. This was flagged as a serious inconsistency in 2005 in the first US Climate Change Science Program report and has been mentioned in every IPCC report since. But instead of the problem being corrected, it's gotten worse over time, and the bias is now global. We examined runs over the post-1979 interval in the first 38 climate models made available in the CMIP6 archive, looking at the lower- and mid-troposphere in the tropics and globally. Every model over-predicted warming in both layers, both globally and in the tropics. In most individual cases the bias is statistically significant and on average it is highly significant. We also show that the bias is larger in high-ECS models, but even the models with lower average ECS predict too much warming. If a group of models were to appear that had a realistic representation of global tropospheric warming, it would likely have to have a lower ECS than even the low-ECS members of the CMIP6 ensemble. Data and code here. 
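
As a rough illustration of the kind of comparison involved (this is not the paper's archived code, and the numbers below are invented stand-ins for the CMIP6 and observational trends), a short R sketch of testing whether the average model-minus-observation trend bias is zero:

```r
# Illustrative sketch only: synthetic numbers stand in for the CMIP6 archive.
set.seed(1)

obs_trend    <- 0.17                                # hypothetical observed trend, deg C per decade
model_trends <- rnorm(38, mean = 0.28, sd = 0.05)   # hypothetical trends for 38 models

bias <- model_trends - obs_trend                    # per-model warming bias

t.test(bias, mu = 0)                                # is the average model bias zero?
mean(bias > 0)                                      # share of models over-predicting the trend
```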

ASSESSING LONG TERM CHANGES IN US REGIONAL PRECIPITATION

John Christy and I published a paper in the Journal of Hydrology:

  • McKitrick, Ross R. and John Christy (2019) Assessing Changes in US Regional Precipitation on Multiple Time Scales. Journal of Hydrology vol. 578, November 2019, https://doi.org/10.1016/j.jhydrol.2019.124074

The published version is temporarily available at this link. If that does not work, a pre-print is available here. The Supplement is here. We look at the claim (made by the recent US National Climate Assessment) that US precipitation increased over the 20th century, that precipitation extremes did likewise, and that confidence is high that this is due to greenhouse gases. We discuss 2,000-year drought proxies that reveal Hurst behaviour (long-term persistence), which means spurious trend detection is a risk. We replicate the NCA finding on two regional data sets, both for average precipitation and for various measures of extreme rainfall. We then show that the trend inferences don't hold up when the data are extended back into the 1800s, and that the trend signs reverse over the last four decades of the sample, which is the opposite of what should happen if GHGs are driving the changes. We conclude that natural variability is likely the dominant driver of historical changes in precipitation and hence drought dynamics in the US regions we examine.
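
For readers unfamiliar with Hurst behaviour, the following R sketch (simulated data only, not the drought proxies used in the paper) shows a simple rescaled-range estimate of the Hurst exponent; estimates well above 0.5 indicate the long-term persistence that makes spurious trend detection a risk:

```r
# Illustrative sketch: rescaled-range (R/S) estimate of the Hurst exponent on a simulated series.
# Values near 0.5 suggest no long-term persistence; values approaching 1 suggest strong persistence.
hurst_rs <- function(x, window_sizes) {
  rs_mean <- sapply(window_sizes, function(n) {
    nblocks <- floor(length(x) / n)
    rs <- sapply(seq_len(nblocks), function(b) {
      seg <- x[((b - 1) * n + 1):(b * n)]
      dev <- cumsum(seg - mean(seg))            # cumulative deviations from the block mean
      (max(dev) - min(dev)) / sd(seg)           # rescaled range for this block
    })
    mean(rs)
  })
  fit <- lm(log(rs_mean) ~ log(window_sizes))   # slope of log(R/S) on log(window) ~ Hurst exponent
  unname(coef(fit)[2])
}

set.seed(42)
x <- arima.sim(list(ar = 0.95), n = 2000)       # toy persistent series, not real proxy data
hurst_rs(x, window_sizes = c(25, 50, 100, 200, 400))
```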

TESTING THE MAJOR HYPOTHESIS IN CLIMATE MODELS

John Christy and I published a paper in Earth and Space Science, a publication of the American Geophysical Union:

  • McKitrick, Ross R and John Christy (2018) A Test of the Tropical 200-300mb Warming Rate in Climate Models. Earth and Space Science doi: 10.1029/2018EA000401. Data/code archive here.

There has been a lot of discussion about the relative lack of observed warming in the tropical troposphere compared to model projections. We confirm the mismatch using three 60-year weather balloon records. We also outline four criteria for a valid test of the major component of interest in climate models, namely the moist thermodynamics in the troposphere that generates amplified global warming in response to rising greenhouse gases. The criteria are measurability, specificity, independence and uniqueness. The 200-300mb layer in the tropics satisfies all four, pretty much uniquely in the climate system, making it very suitable as a test target. The results clearly show that models misrepresent a process fundamental to their usability for studying the climate impacts of greenhouse gases. 
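
A simplified R sketch of the basic comparison (synthetic data and an invented model trend; not the paper's method or archive): estimate the observed layer trend with autocorrelation-robust standard errors and ask whether the model trend lies inside the resulting confidence interval.

```r
# Illustrative sketch (synthetic data): estimate a layer trend with Newey-West (HAC) standard
# errors and check whether a hypothetical model trend falls inside the 95% confidence interval.
library(sandwich)
library(lmtest)

set.seed(7)
years <- 1958:2017
temp  <- 0.01 * (years - 1958) + arima.sim(list(ar = 0.6), n = length(years), sd = 0.15)  # toy series

fit <- lm(temp ~ years)
ct  <- coeftest(fit, vcov. = NeweyWest(fit, prewhite = TRUE))
print(ct)

trend_hat <- ct["years", "Estimate"]
trend_se  <- ct["years", "Std. Error"]
ci        <- trend_hat + c(-1.96, 1.96) * trend_se   # 95% CI for the observed trend (deg C/yr)
model_trend <- 0.03                                  # hypothetical model-mean trend
c(lower = ci[1], upper = ci[2], model_inside = model_trend >= ci[1] && model_trend <= ci[2])
```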

MODEL-OBSERVATION COMPARISON 1958-2012 IN THE TROPICAL TROPOSPHERE
Tim Vogelsang and I published a paper in Environmetrics called:
  • McKitrick, Ross R. and Timothy Vogelsang (2014) "HAC-Robust Trend Comparisons Among Climate Series with Possible Level Shifts" Environmetrics, DOI: 10.1002/env.2294.

In it we compare the temperature trends in climate models over the 1958-2012 interval in the tropical troposphere to those observed in weather balloon data. The models tend not only to over-predict observed warming, but also to represent it differently. Models exhibit a relatively smooth upward trend, whereas observations show that almost all the warming took place in a single jump in the late 1970s, and the trend on either side of the jump is practically and statistically zero. Since the tropical troposphere is where models predict the maximum response to GHG forcing should be observed, the absence of a significant trend there over a 55-year interval is a serious inconsistency. Data and code are here. A discussion at Climate Audit is here. 
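
The level-shift point can be illustrated with a toy R example, using simulated data and ordinary Newey-West standard errors rather than the HAC-robust inference developed in the paper. The question is whether an apparent trend survives once a one-time step change is allowed for.

```r
# Illustrative sketch (synthetic data): trend estimation with and without a late-1970s level-shift
# dummy, using Newey-West standard errors in place of the inference methods used in the paper.
library(sandwich)
library(lmtest)

set.seed(11)
years <- 1958:2012
shift <- as.numeric(years >= 1977)                 # hypothetical step date
temp  <- 0.3 * shift + arima.sim(list(ar = 0.5), n = length(years), sd = 0.15)  # step, no trend

fit_trend_only <- lm(temp ~ years)                 # ignores the possible level shift
fit_with_shift <- lm(temp ~ years + shift)         # allows a one-time jump plus a trend

print(coeftest(fit_trend_only, vcov. = NeweyWest(fit_trend_only)))
print(coeftest(fit_with_shift, vcov. = NeweyWest(fit_with_shift)))
```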

CLIMATE MODELS' INABILITY TO GET THE SPATIAL TREND PATTERN RIGHT:
Lise Tole and I published a paper in Climate Dynamics testing the ability of climate models to reproduce the spatial pattern of temperature trends over land. This builds on previous work of mine looking at the correlation between indicators of industrial development over land and the spatial pattern of warming trends, a relationship that is not predicted by models and is supposed to have been filtered out of the surface climate record. The paper is
  • McKitrick, Ross R. and Lise Tole (2012) "Evaluating Explanatory Models of the Spatial Pattern of Surface Climate Trends using Model Selection and Bayesian Averaging Methods" Climate Dynamics, DOI: 10.1007/s00382-012-1418-9
Preprint here; data and code archive here; university press release here. We apply classical and Bayesian methods to examine how well three different types of variables can explain the spatial pattern of temperature trends over 1979-2002. One type is the output of a collection of 22 General Circulation Models (GCMs) used by the IPCC in the Fourth Assessment Report. Another is a collection of measures of socioeconomic development over land. The third is a collection of geographic indicators including latitude, coastline proximity and tropospheric temperature trends. The question is whether one can justify an extreme position that rules out one or more categories of data, or whether some combination of the three types is necessary. I would describe the IPCC position as extreme, since they dismiss the role of socioeconomic factors in their assessments.

In the classical tests, we look at whether any combination of one or two types can "encompass" the third, and whether non-nested tests combining pairs of groups reject either 0% or 100% weighting on either. ("Encompass" means provide sufficient explanatory power not only to fit the data but also to account for the apparent explanatory power of the rival model.) In all cases we strongly reject leaving out the socioeconomic data. In only 3 of 22 cases do we reject leaving out the climate model data, and in one of those cases the correlation is negative, so only 2 count--that is, in 20 of 22 cases we find the climate models are either no better than or worse than random numbers.

We then apply Bayesian Model Averaging to search over the space of 537 million possible combinations of explanatory variables and generate coefficients and standard errors robust to model selection (aka cherry-picking). In addition to the geographic data (which we include by assumption) we identify 3 socioeconomic variables and 3 climate models as the ones that belong in the optimal explanatory model, a combination that encompasses all remaining data. So our conclusion is that a valid explanatory model of the pattern of climate change over land requires both socioeconomic indicators and GCM processes. The failure to include socioeconomic factors in empirical work may be biasing analyses of the magnitude and causes of observed climate trends since 1979. 
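
For the classical encompassing step, the flavour of the comparison can be conveyed with a toy R example (all variable names and data below are invented; the actual tests and data are in the archive linked above):

```r
# Illustrative sketch: encompassing and J tests with invented variable names and simulated data.
library(lmtest)

set.seed(5)
n <- 200
geo   <- rnorm(n)                             # stand-in for geographic controls
socio <- rnorm(n)                             # stand-in for socioeconomic indicators
gcm   <- rnorm(n)                             # stand-in for climate-model output
trend <- 0.5 * socio + 0.4 * geo + rnorm(n)   # toy spatial pattern of temperature trends

dat <- data.frame(trend, geo, socio, gcm)

# Does the socioeconomic model account for the apparent fit of the GCM model, and vice versa?
encomptest(trend ~ geo + socio, trend ~ geo + gcm, data = dat)
jtest(trend ~ geo + socio, trend ~ geo + gcm, data = dat)
```

The Bayesian Model Averaging step is a separate exercise; in R, the BMS package is one commonly used tool for searching large spaces of candidate regressors.
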
COMMENTARY & TECHNICAL REPORTS


A STATISTICALLY-ROBUST DEFINITION OF THE LENGTH OF THE GLOBAL WARMING PAUSE
I have published a paper proposing a definition of the length of the pause that is robust to autocorrelation and cherry-picking endpoints. 
  • McKitrick, R. (2014) HAC-Robust Measurement of the Duration of a Trendless Subsample in a Global Climate Time Series. Open Journal of Statistics, 4, 527-535. doi: 10.4236/ojs.2014.47050.
I make the duration out to be 19 years at the surface and 16-26 years in the lower troposphere depending on the data set used. R Code to generate the graphs, tables and results is here. 
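
A simplified R sketch of the idea behind the definition (simulated data, and ordinary Newey-West inference rather than the HAC approach used in the paper): step backward from the end of the sample and record the longest terminal window over which the trend is statistically indistinguishable from zero.

```r
# Illustrative sketch (simulated data): the longest terminal window with a statistically
# insignificant trend, using Newey-West inference in place of the paper's method.
library(sandwich)
library(lmtest)

set.seed(9)
years <- 1979:2014
temp  <- c(0.02 * (1:22), rep(0.44, 14)) + arima.sim(list(ar = 0.4), n = 36, sd = 0.08)  # toy series

pause_length <- 0
for (start in rev(seq_along(years))) {
  idx <- start:length(years)
  if (length(idx) < 5) next                      # need a few observations to estimate a trend
  fit <- lm(temp[idx] ~ years[idx])
  p   <- coeftest(fit, vcov. = NeweyWest(fit))[2, "Pr(>|t|)"]
  if (p > 0.05) pause_length <- length(idx)      # trend insignificant over this terminal window
}
pause_length                                     # duration of the "pause" in years
```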

ARE CLIMATE MODELS OVERSTATING GLOBAL WARMING? 2019 UPDATE

I have written a number of times over the years about the fact that after 2000, most climate model runs overstate observed surface warming. There were some compelling graphs of this problem circulating around 2014. Even the IPCC noticed the issue. The 2016 El Nino largely eliminated the discrepancy, but that could only be temporary. With the El Nino heat leaving the system, I was curious what the graphs look like now. I have written up a note on the results:

  • McKitrick, Ross R. (2019) Climate Models versus Observations: 2019 Update

R Code and Data to generate the comparison chart are available here. I centered the series on a 1961-1990 mean, but if I'd used a 1971-2010 mean it would give pretty much the same result. What is striking is that we are heading back into a pause-like interval in which observations fail to keep up with (in this case) the RCP4.5 mean. The reckoning was postponed by the El Nino, but not permanently.
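
The baseline point is easy to verify: re-centering on a different reference period shifts each series by a constant, which leaves model-versus-observation comparisons intact. A toy R sketch (made-up data and function name):

```r
# Illustrative sketch (made-up data and function name): re-centering on a different baseline
# period shifts the whole series by a constant, so trend comparisons are unaffected.
center_on_baseline <- function(temps, years, base_start, base_end) {
  temps - mean(temps[years >= base_start & years <= base_end])
}

years <- 1950:2020
temps <- 0.01 * (years - 1950) + rnorm(length(years), sd = 0.1)   # toy temperature series
a6190 <- center_on_baseline(temps, years, 1961, 1990)
a7110 <- center_on_baseline(temps, years, 1971, 2010)
range(a6190 - a7110)          # a constant offset: min and max are the same number
```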

ARE CLIMATE MODELS OVERSTATING WARMING?

There has been a lot of discussion about a new paper tying model over-estimation of warming to the policy agenda, viz. that there is more time than previously claimed to implement emission controls. I have written on this previously, but in light of the current discussion I put up a blog post at Judy Curry's Climate Etc. blog:
  • Ross McKitrick: Are Climate Models Overstating Warming? 

Basically I go through a couple of indicators and arrive at an affirmative answer. 
Data/code here. 

POLICY IMPLICATIONS OF THE PAUSE IN GLOBAL WARMING
I have published a report for the Fraser Institute looking at the economic policy implications of the hiatus in global warming. Had the situation been reversed, namely had there been much more warming than models projected over the past 20 years, there would likely be loud calls for a policy response: a ramping up of current plans and targets. The same reasoning applies under the opposite circumstances, in which there has been much less warming than models projected. Fundamentally, the problem is that the policy models are trained to match climate models, not climate data, and this needs to change. 
  • McKitrick, Ross R. "Climate Policy Implications of the Hiatus in Global Warming." Vancouver: Fraser Institute, October 2, 2014.

My report argues for building a more robust connection between empirical findings on climate processes and the economic models that generate climate policy plans.

[Figure: model projections versus observed temperatures, with corrected line shading]
THE GLOBAL WARMING HIATUS, aka DISCREPANCY

I have a column today (June 17, 2014) in the Financial Post on the widening discrepancy between models and observations. The talk of a "pause" in global warming is somewhat misplaced, since a pause is not out of place amidst a long-term upward trend. What is out of place is an extended pause just where models predict a sharp rise. That is the issue that merits attention, both for the scientific questions it raises and for the potential policy implications. NOTE: the line shades were mislabeled in the article--black should be gray and vice-versa. The above graph has the correct shading. R code to draw that graph is here. 


MODEL-DATA TREND COMPARISONS FOR THE TROPICAL TROPOSPHERE: 
My first foray into this topic looks at how to compare model-generated trends to observations. Some rather simplistic methods have been used before now, based on t-statistics with "effective degrees of freedom" adjustments and the like. The following paper explains more accurate testing methods using panel regression and multivariate trend estimation, which have higher power and greater robustness to complex autocorrelation patterns (a toy sketch of the trend-equivalence idea follows the reference below). The application is to the tropical troposphere, an important region for testing models' ability to quantify the atmospheric response to greenhouse gases. A few recent studies differed on whether models significantly overstate the warming or not. We find that up to 1999 there was only weak evidence for this, but on updated data the models appear to significantly overpredict warming. 
  • McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) "Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets". Atmospheric Science Letters, DOI: 10.1002/asl.290. Data/code archive. 
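
As a heavily simplified stand-in for the paper's panel and multivariate (VF) estimators (synthetic data; not the archived code), the trend-equivalence idea can be illustrated by stacking an observed series and a model series and testing the time-by-group interaction with HAC standard errors:

```r
# Heavily simplified stand-in for the panel/VF tests (synthetic data): stack an observed and a
# model series and test whether their trends differ via the time-by-group interaction.
library(sandwich)
library(lmtest)

set.seed(3)
n    <- 40
time <- 1:n
obs_series   <- 0.10 * time + arima.sim(list(ar = 0.5), n = n, sd = 1)   # toy "observations"
model_series <- 0.25 * time + arima.sim(list(ar = 0.5), n = n, sd = 1)   # toy "model run"

dat <- data.frame(
  y     = c(obs_series, model_series),
  time  = rep(time, 2),
  group = rep(c("obs", "model"), each = n)
)

fit <- lm(y ~ time * group, data = dat)
# The time:groupobs coefficient is the observed-minus-model trend difference.
coeftest(fit, vcov. = vcovHAC(fit))["time:groupobs", ]
```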

CORRECTION to MMH10: In 2010 Steve, Chad and I published a paper that applied panel and multivariate (VF) methods to test the significance of trends and of model-obs differences in the tropical troposphere. There were a couple of typos, and also Chad discovered an error in the GISS data as archived at the PCMDI (not a huge one, just an error splicing pre- and post-2000 runs together). We re-did our analyses and used the updated versions of the observational data for the purpose. The correction has been published:
  • McKitrick, Ross, Stephen McIntyre and Chad Herman (2011) Correction to "Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets". Atmospheric Science Letters, October 7, 2011, DOI: 10.1002/asl.360. 

The GISS correction and data revisions strengthen all our original findings, reducing the observational trends and raising (slightly) the model trends.
  • (a) The combined MSU trends have a p-value just over 0.05; still significant but "marginal".
  • (b) The HadAT 1979-2009 trend in the LT drops from significance to marginal.
  • (c) The average 1979-2009 MT trend across all observational series drops to insignificance.
  • (d) The RICH 1979-2009 MT trend drops to insignificance.
  • (e) The RSS 1979-2009 MT series is now significantly different from models in the panel regression test. For the 1979-2009 interval, all observational series individually and jointly are significantly below models at both the LT and MT layers.
  • (f) Over the 1979-1999 interval the model-obs differences are still marginally significant, but in the MT layer the difference is now at about the 6% level, so it is nearly significant.

OP-EDS ON CLIMATE MODEL FAILURES: I wrote a pair of op-eds in 2012 on this topic. The first part appeared in the Financial Post on June 21. A version with the citations provided is here. Part II is here online, and the version with citations is here. 