We can end the climate policy wars: demand a test of the models

Summary: This is the last of my series about ways to resolve the public policy debate about climate change. It puts my proposal to test the models in a wider context of science norms and the climate science literature. My experience shows that neither side of the climate wars has much interest in a fair test; both sides want to win through politics. Resolving the climate policy wars through science will require action by us, the American public.


Ending the climate policy debate the right way

Do you trust the predictions of climate models? That is, do they provide an adequate basis on which to make major public policy decisions about issues with massive social, economic, and environmental effects? In response to my posts on several high-profile websites, I’ve had discussions about this with hundreds of people. Some say “yes”; some say “no”. The responses are alike in that both sides have sublime confidence in their answers; the discussions are alike in that they quickly become a cacophony.

The natural result: while a slim majority of the public says they “worry” about climate change, they consistently rank it at or near the bottom of their policy concerns. Accordingly, the US public policy machinery has gridlocked on this issue.

Yet the issue continues to burn, absorbing policy makers’ attention and scarce public funds. Worst of all, the paralysis prevents efforts to prepare even for the almost certain repeat of past climate events — as Hurricane Sandy showed NYC, and as several studies have shown for Houston — and distracts attention from other serious environmental problems (e.g., the destruction of ocean ecosystems).

How can we resolve this policy deadlock? Eventually, either the weather or science will answer our questions, albeit perhaps at great cost. We could just vote, abandoning the pretense that there is any rational basis for US public policy (FYI, neither Kansas nor Alabama ever voted that pi = 3).

We can do better. The government can focus the institutions of science on questions of public policy importance. We didn’t wait for the normal course of research to produce an atomic bomb or send men to the moon. We’re paying for it, so the government can set standards for research, as is routinely done for the military and health care industries (e.g., FDA drug approval regulations).

The policy debate turns on the reliability of the predictions of climate models. These can be tested to give “good enough” answers for policy decision-makers, so that they can either proceed or require more research. I proposed one way to do this in Climate scientists can restart the climate change debate & win: test the models! — which includes a long list of citations (with links) to the literature about this topic. This post shows that such a test is in accord with both the norms of science and the work of climate scientists.

We can resolve this policy debate.  So far America lacks only the will to do so. That will have to come from us, the American public.

Vapor visualization of a hurricane in the Weather Research & Forecasting (WRF) model. From NCAR/UCAR.

The goal: providing a sound basis for public policy

“Thus an extraordinary claim requires “extraordinary” (meaning stronger than usual) proof.”
— Marcello Truzzi in “Zetetic Ruminations on Skepticism and Anomalies in Science“, Zetetic Scholar, August 1987. See the full text here.

“For such a model there is no need to ask the question ‘Is the model true?’. If ‘truth’ is to be the ‘whole truth’ the answer must be ‘No’. The only question of interest is ‘Is the model illuminating and useful?’”
— G.E.P. Box in “Robustness in the strategy of scientific model building” (1978). He also said “All models are wrong; some are useful.”

Measures to fix climate change range from massive (e.g., carbon taxes and new regulations) to changing the nature of our economic system (as urged by Pope Francis and Naomi Klein). Such actions require stronger proof than the usual standard in science (academic disputes are so vicious because the stakes are so small; here the stakes are enormous). On the other hand, politics is not geometry; it’s “the art of the possible” (Bismarck, 1867). Perfect proof is not needed. The norms of science can guide us in constructing useful tests.

Successful predictions: the gold standard for validating theories

“Probably {scientists’} most deeply held values concern predictions: they should be accurate; quantitative predictions are preferable to qualitative ones; whatever the margin of permissible error, it should be consistently satisfied in a given field; and so on.”
— Thomas Kuhn in The Structure of Scientific Revolutions (1962).

“Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.”
— Karl Popper in Conjectures and Refutations: The Growth of Scientific Knowledge (1963).

“The ultimate goal of a positive science is the development of a “theory” or “hypothesis” that yields valid and meaningful (i.e., not truistic) predictions about phenomena not yet observed. … the only relevant test of the validity of a hypothesis is comparison of its predictions with experience.”
— Milton Friedman in “The Methodology of Positive Economics“ from Essays in Positive Economics (1966).

“Some annoying propositions: Complex econometrics never convinces anyone. … Natural experiments rule. But so do surprising predictions that come true.”
— Paul Krugman in “What have we learned since 2008“ (2016). True as well for climate science.

The policy debate rightly turns on the reliability of climate models. Models produce projections of global temperatures when run with estimates of future conditions (e.g., aerosols and greenhouse gases). When run with observations they produce predictions, which can be compared with actual temperatures to determine a model’s skill. Such tests are hindcasts, comparing models’ predictions with past temperatures. That’s a problem.

“There is an important methodological point here: Distrust conclusions reached primarily on the basis of model results. Models are estimated or parameterized on the basis of historical data. They can be expected to go wrong whenever the world changes in important ways.”
— Lawrence Summers in a WaPo op-ed on 6 Sept 2016, talking about public policy to manage the economy.

“One of the main problems faced by predictions of long-term climate change is that they are difficult to evaluate. … Trying to use present predictions of past climate change across historical periods as a verification tool is open to the allegation of tuning, because those predictions have been made with the benefit of hindsight and are not demonstrably independent of the data that they are trying to predict.”
— “Assessment of the first consensus prediction on climate change“, David J. Frame and Dáithí A. Stone, Nature Climate Change, April 2013.

“…results that are predicted “out-of-sample” demonstrate more useful skill than results that are tuned for (or accommodated).”
— “A practical philosophy of complex climate modelling” by Gavin A. Schmidt and Steven Sherwood, European Journal for Philosophy of Science, May 2015 (ungated copy).
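
The statistical point behind these warnings is easy to demonstrate. Below is a minimal sketch in Python (my illustration with synthetic data, not output from any climate model): a heavily parameterized fit can match its tuning period almost perfectly yet fail badly on years it has never seen.

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(100)
    series = 0.01 * years + rng.normal(0, 0.1, 100)  # slow trend plus weather-like noise

    train_x, test_x = years[:70], years[70:]
    train_y, test_y = series[:70], series[70:]
    for degree in (1, 8):  # a modest fit vs. a heavily tuned one
        coeffs = np.polyfit(train_x, train_y, degree)
        in_rmse = np.sqrt(np.mean((np.polyval(coeffs, train_x) - train_y) ** 2))
        out_rmse = np.sqrt(np.mean((np.polyval(coeffs, test_x) - test_y) ** 2))
        print(f"degree {degree}: in-sample RMSE {in_rmse:.3f}, out-of-sample RMSE {out_rmse:.3f}")

The degree-8 curve hugs the tuning period more closely, but typically does far worse on the held-out years. Agreement with accommodated data is cheap; out-of-sample agreement is not.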

Don’t believe the excuses: models can be thoroughly tested

Models should be tested against out-of-sample observations to prevent “tuning” the model to match known data (even inadvertently), for the same reason that scientists run double-blind experiments. The future is the ideal out-of-sample data, since model designers cannot tune their models to it. Unfortunately…

“…if we had observations of the future, we obviously would trust them more than models, but unfortunately observations of the future are not available at this time.”
— Thomas R. Knutson and Robert E. Tuleya, note in Journal of Climate, December 2005.

There is a solution. The models from the first three IPCC assessment reports can be run with observations made after their design (from their future, our past) — a special kind of hindcast.

“To avoid confusion, it should perhaps be noted explicitly that the “predictions” by which the validity of a hypothesis is tested need not be about phenomena that have not yet occurred, that is, need not be forecasts of future events; they may be about phenomena that have occurred but observations on which have not yet been made or are not known to the person making the prediction.”
— Milton Friedman, ibid.

“However, the passage of time helps with this problem: the scientific community has now been working on the climate change topic for a period comparable to the prediction and the timescales over which the climate is expected to respond to these types of external forcing (from now on simply referred to as the response). This provides the opportunity to start evaluating past predictions of long-term climate change: even though we are only halfway through the period explicitly referred to in some predictions, we think it is reasonable to start evaluating their performance…”
— Frame and Stone, ibid.

We can run the models as they were originally run for the IPCC in the First Assessment Report (1990), in the Second (1995), and in the Third (2001) — using actual emissions as inputs but with no changes to the algorithms, etc. The results allow testing of their predictions over multi-decade periods.
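
The scoring step is mechanically simple. Here is a sketch of the comparison (the numbers below are hypothetical stand-ins; the real inputs would be a model’s re-run predictions and a published global temperature record):

    import numpy as np

    years = np.arange(1991, 2021)
    # Hypothetical stand-in data, for illustration only:
    predicted = 0.25 + 0.022 * (years - 1991)  # anomalies (deg C) from the re-run model
    observed = (0.20 + 0.018 * (years - 1991)
                + np.random.default_rng(1).normal(0, 0.08, years.size))

    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    pred_trend = np.polyfit(years, predicted, 1)[0] * 10  # deg C per decade
    obs_trend = np.polyfit(years, observed, 1)[0] * 10
    print(f"RMSE {rmse:.2f} C; trends: predicted {pred_trend:.2f} vs observed {obs_trend:.2f} C/decade")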

These older models were considered skillful when published, so determining their skill will help us decide if we now have sufficiently strong evidence to take large-scale policy action on climate change. The cost of this test would be trivial compared to the overall cost of climate science research — and even more so compared to the stakes at risk for the world should the high-end impact forecasts prove correct.

Determining models’ skill

“We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions.”
— Schmidt and Sherwood, ibid.
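
Here “skill” has a standard quantitative meaning. The usual mean-squared-error skill score is SS = 1 - MSE(model) / MSE(naive); positive values mean the model beats the naive reference, zero or below means it does not. A sketch of that measure (my formulation of the textbook definition):

    import numpy as np

    def skill_score(model, naive, observed):
        """SS = 1 - MSE(model) / MSE(naive reference)."""
        model, naive, observed = map(np.asarray, (model, naive, observed))
        return 1.0 - (np.mean((model - observed) ** 2)
                      / np.mean((naive - observed) ** 2))

    # A naive reference might be persistence ("no change") or a simple
    # extrapolation of the pre-publication linear trend.
    obs = np.array([0.10, 0.18, 0.15, 0.30, 0.28])
    model = np.array([0.12, 0.16, 0.20, 0.27, 0.31])
    persistence = np.full_like(obs, obs[0])
    print(skill_score(model, persistence, obs))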

The climate change uncertainty loop. From “The uncertainty loop haunting our climate models” at VOX, 23 October 2016.

Have model predictions been tested?

“The scientific community has placed little emphasis on providing assessments of CMP {climate model prediction} quality in light of performance at severe tests. … CMP quality is thus supposed to depend on simulation accuracy. However, simulation accuracy is not a measure of test severity. If, for example, a simulation’s agreement with data results from accommodation of the data, the agreement will not be unlikely, and therefore the data will not severely test the suitability of the model that generated the simulation for making any predictions.”
— “Should we assess climate model predictions in light of severe tests?” by Joel Katzav in EOS (by the American Geophysical Union), 11 June 2011.

A proposal similar to mine was made by Roger Pielke Jr. in “Climate predictions and observations”, Nature Geoscience, April 2008. Oddly, this has not been done, although there have been simple examinations of model projections (i.e., using estimates of future data, not predictions using observations), countless naive hindcasts (predicting the past using observations available to model developers), and reports of successful predictions of specific components of weather dynamics (useful if done systematically, so that an overall “score” was produced).

For a review of this literature see (f) in the For More Information section of my proposal.

Conclusions

In my experience both sides in the public policy debate have become intransigent and excessively confident. Hence both sides’ lack of interest in testing, since they plan to win by brute force: electing politicians who agree with them. I’ve found a few exceptions, such as climate scientists Roger Pielke Sr. and Judith Curry, and meteorologist Anthony Watts. More common are the many scientists who told me that they shifted to low-profile research topics to avoid the pressure, or have abandoned climate science entirely (e.g., Roger Pielke Jr.).

We — the American public — have to force change in this dysfunctional public policy debate. We pay for most climate science research, and can focus it on providing answers that we need. The alternative is to wait for the weather to answer our questions, after which we pay the costs. They might be high.


Other posts about the climate policy debate

This post is a summary of the information and conclusions from these posts, which discuss these matters in greater detail.

  1. How we broke the climate change debates. Lessons learned for the future.
  2. My proposal: Climate scientists can restart the climate change debate – & win.
  3. Thomas Kuhn tells us what we need to know about climate science.
  4. Daniel Davies’ insights about predictions can unlock the climate change debate.
  5. Karl Popper explains how to open the deadlocked climate policy debate.
  6. Paul Krugman talks about economics. Climate scientists can learn from his insights.
  7. Milton Friedman’s advice about restarting the climate policy debate.
  8. We can end the climate policy wars: demand a test of the models.


For More Information

See chapter 9 of AR5: Evaluation of Climate Models.

For more information see The keys to understanding climate change and My posts about climate change. Also see these about the climate policy debate…

55 thoughts on “We can end the climate policy wars: demand a test of the models”

      1. AARI is the Arctic and Antarctic Research Institute in St. Petersburg, Russia. Toward the end of my post, I quote them saying that they know of no physically-based climate model without adjustable parameters that has been rigorously tested against historic climate data.
        I am preparing a post to bring this comparative analysis up to date.

      2. Ron,

        Thank you for this quote! A few notes.

        • When you quote someone, it is helpful to give a citation. In this case, Climate Change in Eurasian Arctic Shelf Seas – Centennial Ice Cover Observations (2009). Authors: Frolov, I.E., Gudkovich, Z.M., Karklin, V.P., Kovalev, E.G., Smolyanitsky, V.M. The small image of the book cover at the end is not enough.
        • I suggest that you not assume that statements in this book are the official opinion of the AARI without specific evidence.
        • The quote from the book is interesting. It is, however, not peer-reviewed research. The book is also 7 years old. A lot has happened since.
      3. Sorry for being too brief. I was on my iPad at the time. I stated they were scientists at AARI but, as you say, it is not an official AARI statement. I am aware of others, such as Zakharov, who have also published along the same lines.
        In any case, the analysis stands on its own, and I have extended it today.

    1. I’m afraid they are evading the obvious. Twenty years of predictions have failed to match reality. It’s really very simple. A hypothesis is used to develop a prediction scheme or schemes based on a sample of data, called the “dependent” sample. The scheme continues to make predictions, which are then compared to a new sample of data, the “independent” sample. The goodness of fit can then be determined.
      Climate models have been making predictions of this new data for 20 years and in spite of manipulations of the data, the fit is not good.
      As Richard Feynman once said, “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

  1. Roger Pielke Sr. is an eminent climate scientist. See his Wikipedia entry, his research, and his Google Scholar page (with links to his research). He has an exceptional H-index of 87.

  2. Since my MIT masters thesis in 1958, the terminology has changed, but the procedure for testing a prediction is the same. In my day we developed our prediction scheme on a sample of data, called the dependent sample. We then tried the scheme on a new set of data, the independent sample. As Richard Feynman once said, “If it doesn’t agree with experiment, it’s wrong.”

    1. Jim,

      The nature of climate timeseries makes it difficult to hold out an out-of-sample segment for validation. The record is short — only a century or so of more-or-less acceptable global data — with a high degree of serial correlation (time dependence) and long natural cycles (decadal and longer).
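
      A rough illustration of that arithmetic (my sketch, using the standard AR(1) approximation for the effective number of independent observations):

          def effective_sample_size(n, lag1_autocorrelation):
              # n_eff ~= n * (1 - r) / (1 + r) for an AR(1) series
              r = lag1_autocorrelation
              return n * (1 - r) / (1 + r)

          print(effective_sample_size(120, 0.9))  # ~6 independent points from 120 years

      A century of strongly autocorrelated annual data contains far fewer independent observations than it appears to, so holding part of it out for validation buys little.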

      Worse yet, after decades of acrimonious conflict, I doubt the skeptics would trust such methodological safeguards when the policy stakes are so high. After all, tuning can be inadvertent.

      My proposal fits with the norms of science and is, I believe, the only way forward. Other than, that is, just letting the bickering continue until the weather answers our questions. I believe each side has fantasies of their politicians scoring a decisive win and defunding scientists with the “wrong views” — so they’re uninterested in supporting a scientific test. The reaction to my proposal might show if I am correct.

      1. If they can’t successfully predict the first 15-20 years, how can they be trusted to predict the next century? We don’t have time to wait for that.

      2. Jim,

        Each side declares victory. You have your version of the situation, they have theirs. Theirs is supported by most (or all) of the major science institutions in the western world.

        Bold assertions of your belief, no matter how confidently said, will not resolve this.


  4. Editor of Fabius, further to your point about AARI, I did find a recent statement of an institutional position:

    MOSCOW, September 26 (RIA Novosti) – The climate shifts in the Arctic, caused by global warming, will be gradual and will not lead to any drastic changes in the region, Alexander Danilov, Deputy Director for Research at the Arctic and Antarctic Research Institute, said at a press conference hosted by RIA Novosti Friday.
    “The Arctic will remain a cold pole, as it always has. This will not change,” Danilov said.

    Read more: http://sputniknews.com/analysis/20140926/193328388/Expert-Arctic-Region-to-Remain-Stable-Despite-Global-Warming.html

  5. No matter what the “skill” of the models, I won’t feel I grok planetary temperature until I see the essential quantitative equations expressed as rigorously as one expects in any other branch of applied physics.

    So far as I can tell, the “models” are some inscrutable massive computer programs written in verbose languages, tended by a blindly adulated priesthood. I found this attitude prevalent even at an NOAA conference I attended last year.

    I would welcome being disabused of this perception.

    1. Bob,

      “So far as I can tell, the “models” are some inscrutable massive computer programs written in verbose languages, tended by a blindly adulated priesthood.”

      So you want them written in English in a simple fashion you can understand? Good luck with that.

      “Tended by a blindly adulated priesthood.”

      Too weird a description to comment on.

      1. No. APL. Or even the traditional notation one finds in a physics text.

        Start with the equations for radiant heat transfer and show the physical “audit trail” from the power spectrum we receive from the Sun to our estimated mean temperature.

        At last year’s NOAA/ESRL conference, I noticed the same “the models say…” attitude, where the computerized oracles were alluded to by the journeyman “climate scientists” as founts of wisdom “above the pay grade” of the uninitiated.


  7. stevefitzpatrick

    Larry,
    This is a thoughtful post. You say “Resolving the climate policy wars through science will require action by us, the American public.”

    The problem is that the disagreement is not, and has never been, a fundamentally scientific disagreement. It is, and has always been, a political disagreement, reflecting strongly held views about values, goals, costs, risks, benefits, obligations, and most of all morality (AKA, right and wrong). You are certainly correct that many on either side are not interested in scientific clarification, for fear it might weaken the popularity of their desired policy outcomes, and many are more than willing to misrepresent, exaggerate, understate, mislead, and worse to achieve their desired policy outcome. And of course, what makes the situation even more difficult is that many scientists who work in the field are simultaneously advocates for specific public policy outcomes. There could be just a tiny bit of bias involved in this scientific field.

    But imagine for a moment we were able to wave a magic wand and know with >99% certainty the true sensitivity to GHG forcing, the consequences for specific public policies (future rate of sea level rise, future rainfall patterns, future regional warming, etc.), and the exact cost and consequence of every mitigation proposal. In sum, we would know the exact future costs, benefits, and consequences of every possible public policy related to energy production and use. Would this reduce the disagreement and lead to a public consensus? Would the Greens suddenly agree that nuclear power is a good idea? Would people suddenly accept that drastically reduced future material wealth is a good thing? HECK NO! The fundamental political disagreement would continue, as intractable as today, independent of the resolution of all current ‘scientific’ questions. And both sides would continue to support the same policies they do today, because the two sides do not share a moral calculus… presented with the same absolutely factual data, their desired policies will always be very different. Compromise on morality just does not take place. One side wins, the other side loses; or IOW, one side imposes its policy views on the other (think Roe v Wade, Shia v Sunni, capitalists v communists, and many, many more).

    Resolution of global warming disagreements will come only with time (probably several decades) and, I hope, via the ballot box, rather than some other means. Sorry, I just don’t think there’s much hope for science helping to resolve a political disagreement.

    1. Steve,

      “It is, and has always been, a political disagreement”

      You miss the point. Yes, the activists on both sides will not change their minds. But the goal of the political debate is — as always — to gain the support of the vast middle. Now they are uninterested, but history shows how that can easily change. A decisive change in an election. Some severe weather events with large damages (to be blamed on CO2). There is a long list of things that can change the political balance.

      Your belief that people’s opinions will not change in the face of new information has been proven false countless times during our own lives, let alone in history. It’s a common belief that people live in Zeno’s world — change is impossible — but that’s quite wrong.

  8. An open source climate model and temperature reconstruction would solve all this. There is 0.000% chance the Hockeystick would be recreated under an open source effort. The IPCC models have an R^2 around 0.00, and are designed to reach a preconceived conclusion. No open source effort would ever produce models worse than the IPCC models.

    1. co2,

      It’s nice that you have such definite views on this. But people on the other side have definite views as well, and agreement of the major climate science agencies. Hence the point of a test, rather than both sides confidently and loudly yelling at one another.

      I don’t know what “open source” would mean in this context. The cost of these systems is large and the pool of people competent to build them is small. On the other hand, a review by a multi-disciplinary team of experts (physics, climate, chemistry, statistics, software engineering, etc.) would tell us much.

    2. CO2,
      An open source model is very doable by a cluster of talented and motivated individuals — in an APL-level language. Here are 2 places I have asserted this: http://cosy.com/CoSy/MinnowBrook2013.html#PlanetTemp and http://cosy.com/y14/CoSyNL201410.html .

      Implementing the physics determining the gray-body temperature in our orbit, based on the temperature of the Sun and our distance from it, is 3 or 4 succinct expressions. Adding the calculations for arbitrary isotropic spectra, another couple of lines. That’s the point I reach in my Heartland talk, http://cosy.com/Science/HeartlandBasicBasics.html , with all the APL included on the slides.

      Mapping a Lambertian cosine over the hemisphere, a few symbols. Applying a measured spectral map of the planet over the sphere, another line.

      In summary, an APL model of the planet can be expressed as succinctly as, or more succinctly than, the physics can be expressed in any traditional textbook notation — but it will execute, and can be “played” with efficiently on any scale of computer. If you can hack even undergraduate physics, you can hack APL. If you can’t hack the physics, you likely won’t be able to grok the APL either. But then you will not be able to really understand planetary physics anyway. In fact, being able to play with the computations in APL is a great help in understanding the physics.

      I continue to contend that a rather thorough planetary model can be expressed in not more than a couple of hundred succinct APL definitions, about the number of equations I would expect to be necessary in a textbook on the subject.

      I find it impressive that Roy Spencer has gotten his satellite data analysis in FORTRAN under 10,000 lines. But it’s hard to imagine the APL to do the same computations being anywhere near even a tenth of that.

      If you are interested in such an open source project, contact me directly. I’ve begun a draft blog page on “Computational Earth Physics” but don’t have time to even get it KickStarted without some other interested parties.
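
      For concreteness: the gray-body step described above really is only a few expressions in any language. A minimal sketch in Python rather than APL (standard textbook radiative balance and constants; an illustration only, not code from CoSy):

          T_SUN = 5772.0      # solar effective temperature, K
          R_SUN = 6.957e8     # solar radius, m
          D_EARTH = 1.496e11  # mean Earth-Sun distance, m

          def equilibrium_temperature(albedo):
              # Radiative-balance temperature of a rapidly rotating sphere.
              return T_SUN * (R_SUN / (2 * D_EARTH)) ** 0.5 * (1 - albedo) ** 0.25

          print(equilibrium_temperature(0.0))    # ~278 K: the zero-albedo gray body
          print(equilibrium_temperature(0.306))  # ~254 K: with Earth's Bond albedo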

      1. I’m not sure what that’s supposed to imply. Many of the most important software projects today, from Linux to Python, are open source.

        You seem quite disparaging of efforts to understand the physics, as opposed to just running statistics on models you care little about actually understanding — the exact attitude toward a modeling “priesthood” I’ve noted.

      2. Bob,

        Your comments make it quite clear you have no idea what you’re talking about. I suggest going to the proposal post — at the end are links to some of the large literature about climate modeling, by people actually doing it. You’ll learn something from it.

        Your contempt for climate scientists is bizarre, and says more about you than them.

      3. Ed,

        I suggest you look more deeply into http://CoSy.com . I get the distinct impression that you have very little understanding of basic physics. I highly respect many of the truly great scientists I have met at Heartland conferences, for instance Roy Spencer. However, very few understand the revolution in programming notation which Ken Iverson fathered a half century ago. https://en.wikipedia.org/wiki/APL_%28programming_language%29 is not a bad summary.

        Incidentally, by the “six degrees of separation” metric, I am only 1 degree separated from Popper, and read Kuhn 50 years ago.

  9. “The results allow testing of their predictions over multi-decade periods. These older models were considered skillful when published, so determination of their skill will help us decide if we now have sufficiently strong evidence to take large-scale policy action on climate change.”

    No, they were not. You have several definitional problems, which I can highlight with an example.

    If you drive a new car, that car will typically have a computer that runs a Distance To Empty model. It measures the level of gas in the tank (poorly), watches how you are consuming gas, and tracks how fast you are going. Based on this it makes a crude model of future gas usage and projects a distance until empty.

    Let’s look at everything the model cannot predict. It can’t predict the grade of the road ahead, the speed you will travel, the wind, or the traffic. It just makes a crude extrapolation that the future will be like the past. And then it hides data from you. You may have 10 gallons in the tank, but it does the calculation to give you a buffer — typically a gallon or so, maybe 2.

    Is the model accurate? No. Is it wrong? Yes. Is it useful? Yes. Useful for WHAT? Useful to plan a stop for gas. Useful for whom? For a class of drivers who would rather be safe than sorry. Who decides if it is useful? The user who has a use.

    It is simple to compare the model to reality. It’s simple to show that it’s wrong. It would be simple to construct cases where it was useless to some drivers in some circumstances.

    When it comes to climate models, the important question is: useful for what?
    a) Defining a precise policy (say, a specific tax rate).
    b) Defining a broad strategy: move away from coal and toward nuclear.
    c) Defining limits but not mechanisms: states need to limit their emissions to X.

    Depending on the use, different levels of accuracy may be good enough.

    Next comes the issue of de-biasing a model. My Distance To Empty model is always wrong by a couple of gallons. So I can choose to drive a little further after the distance goes to zero. I can also add a safety factor. None of this is really science. It’s more about risk management. Useful for whom? Well, who is driving the policy car? Not you. Not me. Imagine if you told me that I should not use my Distance To Empty model because it was always wrong by 2 gallons.

    I’d say: mind your own business. You don’t drive my car.

    The question is thus: what do policy makers find useful? Well, that differs as well. Bottom line: folks want to think that there is a scientific way to make these decisions. There isn’t. A climate model gives guidance. Period. Policy makers can choose to rely on that or not DESPITE its skill. If my DTE model always had 5 gallons left, I could STILL rationally defend using it. I don’t like running out of gas.

    1. Steven,

      Thank you for your interesting comment. It’s quite abstract. Perhaps you can just state your opinion — without the long analogies.

      It sounds like you are saying that models either don’t play a major role in the climate policy debate or should not do so.

      • The former is obviously quite wrong. Almost every advocacy for major policy change relies on forecasts by climate models. I doubt that you seriously dispute this.
      • The latter is an opinion. If that is your view, you are entitled to it (of course). After over 2 decades of debate about models’ forecasts, it seems a bit late to make that point — and it’s far outside the consensus.

      Also: policy here is to work strictly within the IPCC’s framework, both analytically and in how information is presented to decision-makers. I’ll continue to do so, whatever your opinion.

  10. An example of the confusion about climate “predictions”

    From “What Weather Is the Fault of Climate Change?” by Heidi Cullen, op-ed in the NYT, 11 March 2016. Bold emphasis added.

    “Here’s an example that underscores the predictive power of extreme event attribution: A recently published study in the journal Nature Climate Change analyzed record-breaking rains in Britain that flooded thousands of homes and businesses and caused more than $700 million in damage in the winter of 2013-14. Scientists found that such an event had become about 40% more likely. As a result, roughly 1,000 more properties are now at risk of flooding, with potential damage of about $40 million.”

    It appears she means “potential predictive power”. Heidi Cullen is chief scientist at Climate Central, a climate research and communications organization, and the author of The Weather of the Future.

    The study she mentions is “Human influence on climate in the 2014 southern England winter floods and their impacts” by Nathalie Schaller et al, Nature Climate Change, in press. Abstract:

    A succession of storms reaching southern England in the winter of 2013/2014 caused severe floods and £451 million insured losses. In a large ensemble of climate model simulations, we find that, as well as increasing the amount of moisture the atmosphere can hold, anthropogenic warming caused a small but significant increase in the number of January days with westerly flow, both of which increased extreme precipitation. Hydrological modelling indicates this increased extreme 30-day-average Thames river flows, and slightly increased daily peak flows, consistent with the understanding of the catchment’s sensitivity to longer-duration precipitation and changes in the role of snowmelt.

    Consequently, flood risk mapping shows a small increase in properties in the Thames catchment potentially at risk of riverine flooding, with a substantial range of uncertainty, demonstrating the importance of explicit modelling of impacts and relatively subtle changes in weather-related risks when quantifying present-day effects of human influence on climate.

  11. I don’t know what “open source” would mean in this context. The cost of these systems is large and the pool of people competent to build them is small. On the other hand, a review by a multi-disciplinary team of experts (physics, climate, chemistry, statistics, software engineering, etc.) would tell us much.

    Michael Mann almost single-handedly constructed the “Hockeystick,” so resources don’t seem to be an issue, and every university has a computer system capable of constructing the models needed to replicate what the IPCC modelers do. The point of an “Open Source” effort would be that everything is public; people have to contribute data and knowledge that is open for scrutiny. Right now we have to accept the research of Michael Mann and the IPCC’s hand-picked modelers. They are chosen to give the desired results. No open source system would do that. To get a data set added or a line of code added to a model, it has to be publicly announced, explained, and defended. Most importantly, any additional line of code would be objectively measured by its impact on the R^2 of the model. A good starting point would simply be to have the metadata and algorithms used to create the Hockeystick and IPCC models made public for review.

    BTW, the X-Prize’s $10,000,000 award was enough to put a man in space. That is a fraction of what NASA spent. Considering the cost of climate legislation, a similar crowd-funded X-Prize for a climate model and temperature reconstruction would be a tremendously efficient manner of addressing this issue. Facts are, sunlight is the greatest enemy of climate science. The more people look behind the curtain, the more obvious it becomes that there are serious problems.

    1. It’s so nice to see people coming by to prove my statement that neither side is interested in fair tests of the climate models!

      “BTW, the X-Prize’s $10,000,000 award was enough to put a man in space. That is a fraction of what NASA spent.”

      BTW, that’s false. NASA developed the technology that has been used by the private sector since then.

  12. BTW, that’s false. NASA developed the technology that has been used by the private sector since then.

    1) NASA didn’t invent the Rocket Engine
    2) NASA didn’t invent the carbon fiber and other composites used
    3) NASA didn’t invent the airplane

    Facts are, the private sector is rapidly moving ahead with the next generation of rocket technology, and they are doing it far more efficiently and effectively than NASA and others. Amazon is developing a rocket that lands on a sea platform. NASA never attempted space tourism. These are new privately funded technologies.


    1. JP,

      Welcome to the climate science debate. However, you will learn nothing by reading only activists.

      A score of posts review the publications about model validation. In brief, the tuning of models to match past observations means that their ability to predict the past tells us nothing.

      This is a core principle of both quantitative model validation and broad science methodology. Predictions are the gold standard in science.

      Especially severe testing – aka unexpected predictions.

      I would give links, but I doubt you’d read them. Horse to water and all that.

      1. Yes: since they have been tuned to match past observations, their output matches past observations well.

        It’s sad that this method of validation – backtesting, considered ludicrously bogus by experts in validation of quantitative modeling – gets so much attention.

        Also interesting is that there is no interest in running older models, like those used in the third IPCC report, with actual inputs from after their date of design through now — and comparing their temperature predictions with actuals. Although over too short a period to be statistically strong, the results would be interesting.
