Fabius Maximus website

We can end the climate policy wars: demand a test of the models

Summary: This is the last of my series about ways to resolve the public policy debate about climate change. It puts my proposal to test the models in a wider context of science norms and the climate science literature. My experience shows that neither side of the climate wars has much interest in a fair test; both sides want to win through politics. Resolving the climate policy wars through science will require action by us, the American public.

Ending the climate policy debate the right way

Do you trust the predictions of climate models? That is, do they provide an adequate basis on which to make major public policy decisions about issues with massive social, economic, and environmental effects? In response to my posts on several high-profile websites, I’ve had discussions about this with hundreds of people. Some say “yes”; some say “no”. The responses are alike in that both sides have sublime confidence in their answers; the discussions are alike in that they quickly become a cacophony.

The natural result: while a slim majority of the public says they “worry” about climate change, they consistently rank it near or at the bottom of their policy concerns. Accordingly, the US public policy machinery has gridlocked on this issue.

Yet the issue continues to burn, absorbing policy makers’ attention and scarce public funds. Worst of all, the paralysis prevents preparation even for the almost certain repeat of past climate events — as Hurricane Sandy showed NYC, and as several studies have shown for Houston — and distracts attention from other serious environmental problems (e.g., the destruction of ocean ecosystems).

How can we resolve this policy deadlock? Eventually either the weather or science will answer our questions, albeit perhaps at great cost. We could just vote, abandoning the pretense that there is any rational basis for US public policy (FYI, neither Kansas nor Alabama voted that pi = 3).

We can do better. The government can focus the institutions of science on questions of public policy importance. We didn’t wait for the normal course of research to produce an atomic bomb or send men to the moon. We’re paying for it, so the government can set standards for research, as is routinely done for the military and health care industries (e.g., FDA drug approval regulations).

The policy debate turns on the reliability of the predictions of climate models. These can be tested to give “good enough” answers for policy decision-makers, so that they can either proceed or require more research. I proposed one way to do this in Climate scientists can restart the climate change debate & win: test the models! — which includes a long list of cites (with links) to the literature about this topic. This post shows that such a test is in accord with both the norms of science and the work of climate scientists.

We can resolve this policy debate.  So far America lacks only the will to do so. That will have to come from us, the American public.

Vapor Visualization of Hurricane in the Weather Research & Forecasting (WRF) Model. From NCAR/UCAR.

The goal: providing a sound basis for public policy

“Thus an extraordinary claim requires “extraordinary” (meaning stronger than usual) proof.”
— Marcello Truzzi in “Zetetic Ruminations on Skepticism and Anomalies in Science”, Zetetic Scholar, August 1987.

“For such a model there is no need to ask the question ‘Is the model true?’. If ‘truth’ is to be the ‘whole truth’ the answer must be ‘No’. The only question of interest is ‘Is the model illuminating and useful?’”
— G.E.P. Box in “Robustness in the strategy of scientific model building” (1978). He also said “All models are wrong; some are useful.”

Measures proposed to fix climate change range from massive interventions (e.g., carbon taxes and new regulations) to changes in the nature of our economic system (as urged by Pope Francis and Naomi Klein). Such actions require stronger proof than usual in science, where academic disputes are famously vicious because the stakes are so small; here the stakes are enormous. On the other hand, politics is not geometry; it is “the art of the possible” (Bismarck, 1867). Perfect proof is not needed. The norms of science can guide us in constructing useful tests.

Successful predictions: the gold standard for validating theories

“Probably {scientists’} most deeply held values concern predictions: they should be accurate; quantitative predictions are preferable to qualitative ones; whatever the margin of permissible error, it should be consistently satisfied in a given field; and so on.”
— Thomas Kuhn in The Structure of Scientific Revolutions (1962).

“Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.”
— Karl Popper in Conjectures and Refutations: The Growth of Scientific Knowledge (1963).

“The ultimate goal of a positive science is the development of a “theory” or “hypothesis” that yields valid and meaningful (i.e., not truistic) predictions about phenomena not yet observed. … the only relevant test of the validity of a hypothesis is comparison of its predictions with experience.”
— Milton Friedman in “The Methodology of Positive Economics” from Essays in Positive Economics (1966).

“Some annoying propositions: Complex econometrics never convinces anyone. … Natural experiments rule. But so do surprising predictions that come true.”
— Paul Krugman in “What have we learned since 2008” (2016). True as well for climate science.

The policy debate rightly turns on the reliability of climate models. Models produce projections of global temperatures when run with estimates of future conditions (e.g., aerosols and greenhouse gases). When run with observations of those conditions, they produce predictions, which can be compared with actual temperatures to determine a model’s skill. So far such tests have been hindcasts: comparing models’ predictions with temperatures already known to the modelers. That’s a problem.
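
For concreteness, here is a minimal sketch of such a comparison in Python. The numbers are invented for illustration; they are not actual model output or observed temperatures.

```python
import numpy as np

# Hypothetical annual global temperature anomalies (degrees C).
# Invented numbers for illustration only, not real data.
predicted = np.array([0.30, 0.35, 0.42, 0.40, 0.48, 0.55, 0.52, 0.60])
observed  = np.array([0.28, 0.31, 0.45, 0.38, 0.44, 0.50, 0.49, 0.58])

errors = predicted - observed
bias = errors.mean()                  # systematic over- or under-prediction
rmse = np.sqrt((errors ** 2).mean())  # typical size of a miss

print(f"bias: {bias:+.3f} C, RMSE: {rmse:.3f} C")
```

The arithmetic is the easy part. As the quotes below stress, the hard part is ensuring that the model could not have been tuned, even inadvertently, to the data used for scoring.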

“There is an important methodological point here: Distrust conclusions reached primarily on the basis of model results. Models are estimated or parameterized on the basis of historical data. They can be expected to go wrong whenever the world changes in important ways.”
— Lawrence Summers in a WaPo op-ed on 6 Sept 2016, talking about public policy to manage the economy.

“One of the main problems faced by predictions of long-term climate change is that they are difficult to evaluate. … Trying to use present predictions of past climate change across historical periods as a verification tool is open to the allegation of tuning, because those predictions have been made with the benefit of hindsight and are not demonstrably independent of the data that they are trying to predict.”
— “Assessment of the first consensus prediction on climate change”, David J. Frame and Dáithí A. Stone, Nature Climate Change, April 2013.

“…results that are predicted “out-of-sample” demonstrate more useful skill than results that are tuned for (or accommodated).”
— “A practical philosophy of complex climate modelling” by Gavin A. Schmidt and Steven Sherwood, European Journal for Philosophy of Science, May 2015 (ungated copy).

Don’t believe the excuses: models can be thoroughly tested

Models should be tested against out-of-sample observations to prevent “tuning” the model to match known data (even inadvertently), for the same reason that scientists run double-blind experiments. The future is the ideal out-of-sample data, since model designers cannot tune their models to it. Unfortunately…

“…if we had observations of the future, we obviously would trust them more than models, but unfortunately observations of the future are not available at this time.”
— Thomas R. Knutson and Robert E. Tuleya, note in Journal of Climate, December 2005.

There is a solution. The models from the first three IPCC assessment reports can be run with observations made after their design (from their future, our past) — a special kind of hindcast.

“To avoid confusion, it should perhaps be noted explicitly that the “predictions” by which the validity of a hypothesis is tested need not be about phenomena that have not yet occurred, that is, need not be forecasts of future events; they may be about phenomena that have occurred but observations on which have not yet been made or are not known to the person making the prediction.”
— Milton Friedman, ibid.

“However, the passage of time helps with this problem: the scientific community has now been working on the climate change topic for a period comparable to the prediction and the timescales over which the climate is expected to respond to these types of external forcing (from now on simply referred to as the response). This provides the opportunity to start evaluating past predictions of long-term climate change: even though we are only halfway through the period explicitly referred to in some predictions, we think it is reasonable to start evaluating their performance…”
— Frame and Stone, ibid.

We can run the models as they were originally run for the IPCC in the First Assessment Report (1990), the Second (1995), and the Third (2001) — using actual emissions as inputs but with no changes to the algorithms, etc. The results allow testing of their predictions over multi-decade periods.
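
Here is a toy sketch of that protocol in Python. Everything in it is invented for illustration: a one-line trend fit stands in for a climate model, and the “observations” are synthetic. The point is the procedure of freezing the model at its design date and scoring it only on later data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" anomalies: an invented trend plus noise (degrees C).
years = np.arange(1960, 2017)
obs = 0.015 * (years - 1960) + rng.normal(0.0, 0.08, years.size)

# Freeze the "model" at its design date: fit using only data available
# before 1990, exactly as a 1990-era modeler could have done.
design = years < 1990
slope, intercept = np.polyfit(years[design], obs[design], 1)

# Score it only on out-of-sample years (1990 onward), which the fit
# could not have been tuned to.
test = ~design
pred = slope * years[test] + intercept
rmse_out = np.sqrt(((pred - obs[test]) ** 2).mean())
print(f"out-of-sample RMSE, 1990-2016: {rmse_out:.3f} C")
```

Because the post-1990 data played no role in choosing the parameters, a good score on those years is evidence of genuine skill rather than of tuning.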

These older models were considered skillful when published, so determining their skill now will help us decide whether we have sufficiently strong evidence to justify large-scale policy action on climate change. The cost of this test would be trivial compared to the overall cost of climate science research — and even more so compared to the stakes at risk for the world should the high-end impact forecasts prove correct.

Determining models’ skill

“We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions.”
— Schmidt and Sherwood, ibid.
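
One common way to make “skill against more naive predictions” concrete is a mean-squared-error skill score relative to a simple baseline, such as climatology (always predict the long-run mean) or persistence (predict that next year looks like this year). This is a generic forecast-verification metric, not necessarily the specific measure Schmidt and Sherwood propose; the numbers below are invented. A minimal sketch in Python:

```python
import numpy as np

# Invented illustrative series (degrees C), not real data.
observed = np.array([0.30, 0.28, 0.41, 0.36, 0.47, 0.52, 0.50, 0.61])
model    = np.array([0.27, 0.33, 0.38, 0.40, 0.44, 0.55, 0.53, 0.58])

# Naive baseline: climatology, i.e. always predict the long-run mean.
naive = np.full_like(observed, observed.mean())

mse_model = ((model - observed) ** 2).mean()
mse_naive = ((naive - observed) ** 2).mean()

# Skill score: 1 = perfect, 0 = no better than the naive baseline,
# negative = worse than the naive prediction.
skill = 1.0 - mse_model / mse_naive
print(f"skill vs climatology: {skill:.2f}")
```

A model that cannot beat such baselines out of sample has demonstrated no useful skill, however well it fits the past.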

From “The uncertainty loop haunting our climate models” at VOX, 23 October 2016.

Have model predictions been tested?

“The scientific community has placed little emphasis on providing assessments of CMP {climate model prediction} quality in light of performance at severe tests. … CMP quality is thus supposed to depend on simulation accuracy. However, simulation accuracy is not a measure of test severity. If, for example, a simulation’s agreement with data results from accommodation of the data, the agreement will not be unlikely, and therefore the data will not severely test the suitability of the model that generated the simulation for making any predictions.”
— “Should we assess climate model predictions in light of severe tests?” by Joel Katzav in EOS (by the American Geophysical Union), 11 June 2011.

A proposal similar to mine was made by Roger Pielke Jr. in “Climate predictions and observations”, Nature Geoscience, April 2008. Oddly, this has not been done, although there have been simple examinations of model projections (i.e., runs using estimates of future conditions, not predictions made using observations), countless naive hindcasts (predicting the past using observations available to the model developers), and reports of successful predictions of specific components of weather dynamics (which would be useful if done systematically, so that an overall “score” was produced).

For a review of this literature see (f) in the For More Information section of my proposal.

Conclusions

In my experience both sides in the public policy debate have become intransigent and excessively confident. Hence both sides’ lack of interest in testing: they plan to win by brute force, electing politicians who agree with them. I’ve found a few exceptions, such as climate scientists Roger Pielke Sr. and Judith Curry, and meteorologist Anthony Watts. More common are the many scientists who told me that they have shifted to low-profile research topics to avoid the pressure, or have abandoned climate science entirely (e.g., Roger Pielke Jr.).

We — the American public — have to force change in this dysfunctional public policy debate. We pay for most climate science research, and can focus it on providing answers that we need. The alternative is to wait for the weather to answer our questions, after which we pay the costs. They might be high.

Other posts about the climate policy debate

This post is a summary of the information and conclusions from these posts, which discuss these matters in greater detail.

  1. How we broke the climate change debates. Lessons learned for the future.
  2. My proposal: Climate scientists can restart the climate change debate – & win.
  3. Thomas Kuhn tells us what we need to know about climate science.
  4. Daniel Davies’ insights about predictions can unlock the climate change debate.
  5. Karl Popper explains how to open the deadlocked climate policy debate.
  6. Paul Krugman talks about economics. Climate scientists can learn from his insights.
  7. Milton Friedman’s advice about restarting the climate policy debate.
  8. We can end the climate policy wars: demand a test of the models.

For More Information

See chapter 9 of AR5, “Evaluation of Climate Models”.

For more information see The keys to understanding climate change and My posts about climate change.