How accurate are climate scientists’ findings? Look at ocean warming.

Summary:  This might be one of the more important of our 3500 posts. It looks at an often-asked question about climate science — how accurate are its findings? That is a key factor when we make decisions involving trillions of dollars and affecting billions of people. The example examined is ocean heat content, a vital metric since the oceans absorb 90%+ of global warming. How accurate are those numbers? The error bars look oddly small, especially compared to those of sea surface temperatures. This also shows how work from the frontiers of climate science can provide problematic evidence for policy action. Different fields have different standards.

“The spatial pattern of ocean heat content change is the appropriate metric to assess climate system heat changes including global warming.”
— Climate scientist Roger Pielke Sr. (source).

Warming of the World Ocean

NOAA: Yearly Vertically Averaged Temperature Anomaly 0-2000 meters layer
NOAA website’s current graph of Yearly Vertically Averaged Temperature Anomaly, 0-2000 meters, with error bars (±2 S.E.). Very tiny error bars. Reference period is 1955–2006.

Posts at the FM website report the findings of the peer-reviewed literature and major climate agencies, and compare them with what we get from journalists and activists (of Left and Right). This post does something different. It looks at some research on the frontiers of climate science, and its error bars.

The subject is “World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010” by Sydney Levitus et al, Geophysical Research Letters, 28 May 2012. Also see his presentation. The bottom line: from 1955-2010 the upper 700 meters of the World Ocean warmed (volume mean warming) by 0.18°C (Abraham 2013 says that it warmed by ~0.2°C during 1970-2012). The upper 2,000m warmed by 0.09°C, which “accounts for approximately 93% of the warming of the earth system that has occurred since 1955.”

Levitus 2012 puts that in perspective by giving two illustrations. First…

“If all the heat stored in the world ocean since 1955 was instantly transferred to the lowest 10 km (5 miles) of the atmosphere, this part of the atmosphere would warm by ~65°F. This of course will not happen {it’s just an illustration}.”

Ocean heat content over time, from Levitus 2012: World Ocean heat content (10²² Joules) for the 0–2000 m (red) and 700–2000 m (black) layers, based on running pentadal (five-year) analyses. Reference period is 1955–2006. Each estimate is plotted at the midpoint of its period. The vertical bars represent ±2 S.E.

Second, they show this graph to put that 93% of total warming in perspective with the other 7%.

Components of global warming from Levitus 2013

A large question about confidence

These are impressive graphs of compelling data. How accurate are these numbers? Uncertainty is a complex subject because there are many kinds of errors. Descriptions of errors in studies are seldom explicit about the factors included in their calculation.

Levitus gives the warming of the top 2,000 meters of the world ocean during 1955-2010 as 0.09°C ±0.007°C (±2 S.E.). That translates to 24.0 ±1.9 × 10²² Joules (±2 S.E.). That margin of error is reassuring — an order of magnitude smaller than the temperature change. But is that plausible for measurements of such a large volume over 55 years?
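As a rough consistency check, the temperature figure can be converted into Joules using the approximate mass and specific heat of the upper 2,000 m of the ocean. The volume, density, and heat-capacity values below are round-number assumptions for illustration (they are not taken from Levitus 2012); the point is only that 0.09°C ±0.007°C does translate to roughly the published 24 ±1.9 × 10²² Joules.

```python
# Rough consistency check: convert the 0-2000 m temperature anomaly into Joules.
# The volume, density, and specific-heat values are round-number assumptions,
# not inputs from Levitus 2012.

OCEAN_VOLUME_0_2000M = 6.7e17   # m^3, approximate volume of the 0-2000 m layer (assumption)
SEAWATER_DENSITY     = 1.03e3   # kg/m^3 (approximate)
SPECIFIC_HEAT        = 3.99e3   # J/(kg*K), typical value for seawater

heat_per_degree = OCEAN_VOLUME_0_2000M * SEAWATER_DENSITY * SPECIFIC_HEAT  # J per degC

delta_t     = 0.09    # degC warming, 1955-2010 (Levitus 2012)
delta_t_2se = 0.007   # degC, quoted +/-2 standard errors

print(f"Warming:       {heat_per_degree * delta_t / 1e22:.1f} x 10^22 J")
print(f"Uncertainty: +/-{heat_per_degree * delta_t_2se / 1e22:.1f} x 10^22 J")
# With these assumptions the result is roughly 25 and 1.9 x 10^22 J,
# close to the 24.0 +/-1.9 x 10^22 J quoted above.
```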

Abraham 2013 lists the sources of error in detail. It’s a long list, including the range of technology used (the Argo data became reliable only in 2005), the vast area of the ocean (in three dimensions), and its complex spatial distribution of warming both vertically and horizontally (e.g., the warming in the various oceans ranges from 0.04 to 0.19°C).

We can compare these error bars with those for the sea surface temperature (SST) of the Nino3.4 region of the Pacific — only two dimensions of a much smaller area (6.2 million sq km, about 2% of the world ocean’s area). That uncertainty is ±0.3°C (see the next section for details) — roughly forty times the ±0.007°C (±2 S.E.) margin of error given for the ocean heat content of the top 2,000 meters of the world ocean. Hence the tiny error bars in the graph at the top of this post.

If the actual margin of error is of the same magnitude as that given below for NINO3.4 SST (±0.3°C), then it is several times larger than the 0.09°C warming of the upper 2,000 m during 1955-2010.

How do climate scientists explain this? I cannot find anything in the literature. Such a small margin of error seems unlikely to realistically describe the uncertainty in these estimates.

Map of Pacific El Nino regions
From Australia’s Bureau of Meteorology.

Compare with the uncertainty of SST in the Niño3.4 region

Here Anthony Barnston explains the measurement uncertainty of the sea surface temperature (SST) of the Pacific’s Nino3.4 region. This is from a comment on NOAA’s “December El Niño update“. Barnston is the Chief Forecaster for climate and ENSO forecasting at Columbia’s International Research Institute for Climate and Society.

He does not say if the ±0.3°C accuracy is for current or historic data (NOAA’s record of the Oceanic Niño Index, based on the Niño3.4 region SST, goes back to 1950). Above I conservatively assumed it is for historic data (i.e., that current data has smaller errors). Red emphasis added.

“The accuracy for a single SST-measuring thermometer is on the order of 0.1C. … We’re trying to measure the Nino3.4 region, which extends over an enormous area. There are vast portions of that area where no measurements are taken directly (called in-situ). The uncertainty comes about because of these holes in coverage.

“Satellite measurements help tremendously with this problem. But they are not as reliable as in-situ measurements, because they are indirect (remote sensed) measurements. We’ve come a long way with them, but there are still biases that vary in space and from one day to another, and are partially unpredictable. These can cause errors of over a full degree in some cases. We hope that these errors cancel one another out, but it’s not always the case, because they are sometimes non-random, and large areas have the same direction of error (no cancellation).

“Because of this problem of having large portions of the Nino3.4 area not measured directly, and relying on very helpful but far-from-perfect satellite measurements, the SST in the Nino3.4 region has a typical uncertainty of 0.3C or even more sometimes.

“That’s part of why the ERSSv4 and the OISSTv2 SST data sets, the two most commonly used ones in this country, can disagree by several tenths of a degree. So, while the accuracy of a single thermometer may be a tenth or a hundredth of a degree, the accuracy of our estimates of the entire Nino3.4 region is only about plus or minus 0.3C.“
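A minimal simulation can illustrate Barnston’s distinction between independent errors (which partly cancel when averaged over many grid boxes) and a spatially correlated error (which does not). All numbers below are illustrative assumptions, not NOAA values.

```python
# Sketch of Barnston's point: independent errors shrink when averaged over many
# grid boxes; a bias shared by the whole region does not. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_boxes  = 400       # grid boxes covering the Nino3.4 region (assumption)
n_trials = 10_000

# Case 1: independent errors of +/-1.0 degC (1 sigma) in each box
indep = rng.normal(0.0, 1.0, size=(n_trials, n_boxes)).mean(axis=1)

# Case 2: the same independent noise plus a bias shared by the whole region
shared_bias = rng.normal(0.0, 0.3, size=(n_trials, 1))   # degC
corr = (rng.normal(0.0, 1.0, size=(n_trials, n_boxes)) + shared_bias).mean(axis=1)

print(f"Spread of regional mean, independent errors only: {indep.std():.3f} degC")
print(f"Spread of regional mean, with a shared bias:      {corr.std():.3f} degC")
# The first spread is ~1.0/sqrt(400) = 0.05 degC; the second stays near 0.3 degC,
# because the shared bias does not cancel no matter how many boxes are averaged.
```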

Examples of scientists’ careful treatment of uncertainties

The above does not imply that this is a pervasive problem. Climate scientists often provide clear statements of uncertainty for their conclusions, such as in these four examples.

(1) Explicit statements about their level of confidence

Activists — and their journalist fans — usually report the findings of climate science as certainties. Scientists usually speak in more nuanced terms. NOAA, NASA, and the IPCC routinely qualify their confidence. NOAA and the IPCC do so with clear standards. Here are NOAA’s.

NOAA: describing confidence
NOAA 2014 State of the Climate

(2)  Was 2014 the hottest year on record?

NOAA calculated the margin of error of the 2014 average surface atmosphere temperature: +0.69°C ±0.09 (+1.24°F ±0.16). The increase over the previous record (0.04°C) is less than the margin of error (±0.09°C). That gives 2014 a probability of 48% of being the warmest of the 135 years on record, and 90.4% of being among the five warmest years. NASA came to similar conclusions. This is not a finding from a frontier of climate science, but it is among the most publicized.

NASA-NOAA "Global Analysis for 2014"
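The arithmetic behind that 48% is easy to sketch. NOAA’s analysis compares 2014 against every year in the record; the toy calculation below compares it only against the previous record holder, so it overstates the probability, but it shows why a 0.04°C lead inside a ±0.09°C margin of error leaves the ranking uncertain. Treating ±0.09°C as roughly two standard deviations is my assumption.

```python
# Toy Monte Carlo: how often does 2014 beat the previous record year, given the
# quoted uncertainties? NOAA's real analysis compares 2014 against all 135 years,
# which pushes the probability down to ~48%; this sketch uses only the previous
# record holder, so it overstates the odds.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.09 / 2     # assume the +/-0.09 degC margin of error is ~2 standard deviations
n = 100_000

t_2014 = rng.normal(0.69, sigma, n)          # 2014 anomaly, degC
t_prev = rng.normal(0.69 - 0.04, sigma, n)   # previous record, 0.04 degC cooler

print(f"P(2014 beats the previous record) = {np.mean(t_2014 > t_prev):.2f}")
# Roughly 0.73 with these assumptions. Adding the other 133 years, each with its
# own uncertainty, reduces the probability further, toward NOAA's 48%.
```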

(3) The warmest decades of the past millennium

Scientists use proxies to estimate the weather before the instrument era. Tree rings are a rich source of information: aka dendrochronology (see Wikipedia and this website by Prof Grissino-Mayer at U TN). The latest study is “Last millennium northern hemisphere summer temperatures from tree rings: Part I: The long term context” by Rob Wilson et al in Quaternary Science Reviews, in press.

“1161-1170 is the 3rd warmest decade in the reconstruction followed by 1946-1955 (2nd) and 1994-2003 (1st). It should be noted that these three decades cannot be statistically distinguished when uncertainty estimates are taken into account. Following 2003, 1168 is the 2nd warmest year, although caution is advised regarding the inter-annual fidelity of the reconstruction…”

(4) Finding anthropogenic signals in extreme weather statistics

“Need for Caution in Interpreting Extreme Weather Statistics” by Prashant D. Sardeshmukh et al, Journal of Climate, December 2015 — Abstract…

“Given the reality of anthropogenic global warming, it is tempting to seek an anthropogenic component in any recent change in the statistics of extreme weather. This paper cautions that such efforts may, however, lead to wrong conclusions if the distinctively skewed and heavy-tailed aspects of the probability distributions of daily weather anomalies are ignored or misrepresented. Departures of several standard deviations from the mean, although rare, are far more common in such a distinctively non-Gaussian world than they are in a Gaussian world. This further complicates the problem of detecting changes in tail probabilities from historical records of limited length and accuracy. …”
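Sardeshmukh’s point about heavy tails can be made concrete by comparing tail probabilities. The Student-t distribution below is just a stand-in for a heavy-tailed daily-weather distribution; it is not the distribution used in the paper.

```python
# How much more common are "several sigma" anomalies in a heavy-tailed world?
# The Student-t distribution is only a stand-in for the skewed, heavy-tailed
# daily-weather distributions discussed by Sardeshmukh et al.
import numpy as np
from scipy import stats

df = 4
t_sd = np.sqrt(df / (df - 2))   # standard deviation of a Student-t with 4 degrees of freedom

for z in (3, 4, 5):
    p_gauss = stats.norm.sf(z)              # Gaussian: exceed z standard deviations
    p_heavy = stats.t.sf(z * t_sd, df=df)   # heavy-tailed: exceed the same multiple of its own s.d.
    print(f"{z}-sigma exceedance: Gaussian {p_gauss:.1e}, heavy-tailed {p_heavy:.1e}, "
          f"ratio ~{p_heavy / p_gauss:.0f}x")
# The ratio grows quickly with z: a 5-sigma day is thousands of times more likely
# under the heavy-tailed distribution, so changes in counts of extremes are hard
# to attribute from short, imperfect records.
```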

For More Information

If you liked this post, like us on Facebook and follow us on Twitter. For more information about this vital issue see The keys to understanding climate change and My posts about climate change. Also here are some papers about warming of the oceans…

  1. “The annual variation in the global heat balance of the Earth”, Ellis et al, Journal of Geophysical Research, 20 April 1978.
  2. “Heat storage within the Earth system”, R.A. Pielke Sr., BAMS, March 2003.
  3. “On the accuracy of North Atlantic temperature and heat storage fields from Argo”, R. E. Hadfield et al, Journal of Geophysical Research, January 2007.
  4. “A broader view of the role of humans in the climate system”, R.A. Pielke Sr., Physics Today, November 2008.
  5. “World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010”, Sydney Levitus et al, Geophysical Research Letters, 28 May 2012.
  6. “A review of global ocean temperature observations: Implications for ocean heat content estimates and climate change”, J.P. Abraham et al, Reviews of Geophysics, September 2013.
  7. “An apparent hiatus in global warming?”, Kevin E. Trenberth and John T. Fasullo, Earth’s Future, 5 December 2013 — Also see Trenberth’s “Has Global Warming Stalled?” at The Conversation, 23 May 2014.
  8. “Sixteen years into the mysterious ‘global-warming hiatus’, scientists are piecing together an explanation”, Jeff Tollefson, Nature, 15 January 2014 — Well-written news feature; not research.
  9. “Industrial-era global ocean heat uptake doubles in recent decades”, Peter Gleckler et al, Nature Climate Change, January 2016 — Again, little attention to the uncertainty of the data. Also see “Study: Man-Made Heat Put in Oceans Has Doubled Since 1997” by AP’s Seth Borenstein.

34 thoughts on “How accurate are climate scientists’ findings? Look at ocean warming.”

  1. It is important to emphasise that for measurement errors we can’t do ‘tests’. As the essay above points out, measurement errors are ‘estimates’. How on earth anyone knows what the measurement errors are in sea-surface temperature measurements, when some of the data come from the temperature of sea-water in leather buckets, some come from floating buoys, some come from engine inlet manifolds — I do not know. And I’ve never seen either an estimate or an explanation of how the estimate was calculated.

    1. Don,

      I agree that constructing highly accurate climate data about the past gets increasingly difficult as one looks further back. Technology provides solutions for more recent periods — the Argo floats provide relatively consistent and detailed information about ocean warming, from which error estimates can be constructed. Modern buoy and satellite data do the same for sea surface temperatures.

      But my point is not that constructing error bars is difficult — of course it can be done, reliably if conservative assumptions are used. Rather, some of the error bars we are given look far too small. The first example here shows how this becomes problematic: if the error bars are as large as those given for NINO3.4 SST, then they are much larger than the temperature change found. That would be inconvenient, since ocean heat accumulation is one of the major explanations of the “pause” in warming.

  5. My basic thoughts on ocean heat content are that, given the massive heat content of the seas, any contribution from GHG would be far too small to possibly measure throughout all oceans and depths for at the very least several decades, probably centuries.

    It’s counting angels on pinheads stuff, and open to all sorts of confirmation bias nonsense.

    1. Paul,

      Levitus and other peer-reviewed studies run the energy balance math and conclude that the oceans could have warmed.

      My point is not about the theory. Rather, the change during 1955-2010 appears to be smaller than our instruments of that period could reliably detect (i.e., smaller than the error bars), even with the sophisticated analytics used.

      I have asked some experts to comment. This has been mentioned by several climate scientists on Twitter. Prof Curry mentioned it at Climate Etc. It is going up on a few skeptics' websites.

      We will see what responses come in. From experience I am not hopeful. This is the third time I have played the child asking about the Emperor's new climate change clothes. It is not a fun role.

      I would prefer to play Sam Spade.

    1. ntesdorf,

      You get Best of Thread for imagination! I’m a fan of the first Terminator trilogy.

      On the other hand, I see no evidence whatsoever that there is a “CAGW scam” in the climate sciences. Science is a social process, and it runs better than other social processes — fitfully, with long detours into dead ends and lots of running in circles. Lots of internal politics. These things will resolve themselves, eventually.

      Our public policy needs, however, are ill met by science as usual. Which is why biomedical research runs by different rules. We should apply more of those to climate science.

  6. Re biomedical:

    Yes, it would be nice.

    But the anxiety about global warming is part of a wider and older environmentalist worry about humans’ despoiling nature. The Greens (and their equivalents everywhere) picked this up as an electoral matter in the 1970s, and the major parties were forced to at least take it seriously. One thing led to another, and the outcome is that the science is following political values in the community, funded by the party in power. Fiscally, it is part of forward estimates, meaning that the funding and the context will go ahead forever unless a government starts winding it back. Very hard to do, given the proliferation of interest groups that lobby vociferously for more funding, not less. As others have said, as well as me, what we have now is a kind of secular religion, well and truly enmeshed in government.

    1. Don,

      I agree on all points. The long embrace by Greens of doomster forecasts has come to define them. Their refusal to change tactics despite the repeated failure of the forecasts suggests a faith-based worldview. The public’s refusal to believe CAGW suggests that the public might be learning.

      As for the future, I wonder about the fate of climate science if the extreme forecasts — which have come to be associated with them — fail to come true. The alarmists have staked everything on one or more extreme weather events hitting before their credibility is exhausted. As with all insurgencies, they need only win once.

      Have the leaders and rank-and-file in climate science considered the consequences of lost credibility during the next 5-10 years? Fewer books, TV appearances, lectures, and lavish “genius” awards — and a reduction in public funds — are among the likely results. They might suffer in 2025 even if their forecasts prove correct in 2100.

  7. Oh, and I agree that CAGW is not a scam or a hoax. It is much more complex than that. And yes, it will be dealt with in time with better science. But the policy side can go more quickly, if there are much more powerful competing priorities.

  8. Fabius,
    The question about instrument accuracy has been answered many times by instrumentation engineers and technicians: the accuracy of the climate monitoring instrumentation is not nearly as good as the accuracy claimed. The response from those without an instrumentation background is that averaging the data from multiple instruments increases accuracy. No amount of explaining seems to convince them otherwise.

    Measuring temperature to an accuracy of 0.1 degrees Celsius is difficult even in lab conditions. Measuring something to an accuracy of 0.01 degrees Celsius takes extensive effort and a controlled environment. Manufacturing and engineering practices are to take all possible error sources in a measurement and add them together. What you get is what you have to work with. Period.

  9. The biggest uncertainty may arise from the data before 2003, because before then the data was collected by ships. There were two methods of taking temperature, one before and one after WWII. If modern data collected by buoys after 2003 is spliced onto prior data, then any discontinuity in trend would be suspect.

    Data from 1945 is not shown, so we cannot determine how smooth the transition was between the bucket and intake methods of collecting ship data. However, there are inflections around 2003, presumably because the buoy data was added. As more and more buoys came into service the trend accelerated, which is what we would expect if the buoy data is not consistent with ship data.

    How many ships were traversing the Southern Ocean from 1960 to 1980? The southern hemisphere is 68% ocean, 27% land and 5% Antarctica, with a small fraction of the Earth’s population and shipping. So the ship data was probably sparsely sampled. In the early years the north Pacific data was probably also sparsely sampled.

    If there are problems with the method of acquiring data, then in the first graph above we would expect to see two inflection points, one around 1945 and another around 2003. So we would really need to see the data before, during and after WWII.

    The second and third graphs are interesting because they show the energy absorbed by type of Earth location. Note that in the second graph 0-2000 contains 700-2000 m. To make anything out of this graph, we should also see a line for 0-700 m.

    Qualitatively, we learn nothing from the third graph that we did not know before: only the world ocean could store big amounts of energy during the period covered by the data.

    1. Frederick,

      Thanks for your comment. A few bits of information…

      (1) “Data from 1945 is not shown so we cannot determine how smooth was the transition between the bucket and intake method of collecting ship data. ”

      You appear to be thinking of sea surface temperatures, which were collected by ship intakes. This article discusses the temperature of the depths. It does not mention 1945 because the period examined in Levitus begins in 1955.

      For details about the methods used over time to collect ocean temperatures at various depths see the For More Info section: “A review of global ocean temperature observations: Implications for ocean heat content estimates and climate change“, J.P. Abraham et al, Reviews of Geophysics, September 2013. It does not mention buckets in the relevant period (after 1955), only more sophisticated methods.

      (2) “To make anything out of this graph, we should also see a line for 0-700 m. …”

      This post looks in detail only at the error bars. You can click through to see the actual paper for more info. It also has many more graphs. For a wide range of updated graphs click on the link in the first graph for the NOAA Ocean Heat Content website page.

  10. You have Hadfield et al 2007: “On the accuracy of North Atlantic temperature and heat storage fields from Argo“, R. E. Hadfield et al, Journal of Geophysical Research, January 2007.

    Jo Nova wrote about it: “Study shows ARGO ocean robots uncertainty was up to 100 times larger than advertised“, June 2015. Excerpt:

    Over much of the section, the Argo-based estimates of temperature agree with the cruise measurements to within 0.5ºC. However, there are several regions in the 500–1000 m layer west of about 40W where the differences exceed this value (Figure 9a). Furthermore at the western boundary, west of 74W, the temperature is more than 2ºC warmer in the Argo section than in the cruise section. As expected, the climatological values from the WOA typically show larger differences from the cruise section than the Argo-based sections, particularly in the surface waters across the section (Figure 9b, upper panel) and the upper 1200 m at 65–73W.

    Abstract

    The accuracy with which the Argo profiling float dataset can estimate the upper ocean temperature and heat storage in the North Atlantic is investigated. A hydrographic section across 36°N is used to assess uncertainty in Argo-based estimates of the temperature field. The root-mean-square (RMS) difference in the Argo-based temperature field relative to the section measurements is about 0.6°C. The RMS difference is smaller, less than 0.4°C, in the eastern basin and larger, up to 2.0°C, toward the western boundary.

    In comparison, the difference of the section with respect to the World Ocean Atlas (WOA) is 0.8°C. For the upper 100 m, the improvement with Argo is more dramatic, the RMS difference being 0.56°C, compared to 1.13°C with WOA.

    The Ocean Circulation and Climate Advanced Model (OCCAM) is used to determine the Argo sampling error in mixed layer heat storage estimates. Using OCCAM subsampled to typical Argo sampling density, it is found that outside of the western boundary, the mixed layer monthly heat storage in the subtropical North Atlantic has a sampling error of 10–20 Wm−2 when averaged over a 10° × 10° area. This error reduces to less than 10 Wm−2 when seasonal heat storage is considered.

    Errors of this magnitude suggest that the Argo dataset is of use for investigating variability in mixed layer heat storage on interannual timescales. However, the expected sampling error increases to more than 50 Wm−2 in the Gulf Stream region and north of 40°N, limiting the use of Argo in these areas.

    1. Plazame,

      Thank you for that interesting citation! I added the abstract to your comment, and tweaked the format for clarity. Also, I added Hadfield to the list of studies in this post.

      This is interesting, since the pre-Argo ocean temperature data (roughly 1955-2005) is of far less accuracy than Argo's. So the error bars for average ocean temperature 1955-2010 must be quite large, if accurately calculated.

  11. Editor,

    Where I’ve found myself confused (and apparently in error according to some) is when I’ve tried to use what I presumed was a standard in scientific certainty/uncertainty (the IPCC chart used above) as being the same as that which is used by NOAA/NASA. This led me to understand that NASA’s 38% (NOAA’s 48%) probability would mean it’s more ‘unlikely than likely’ that 2014 was actually the warmest year ever. Since I’ve not found a definition for NOAA/NASA’s use, I cannot be certain.

    1. Danny,

      “The IPCC chart used above. … Since I’ve not found a definition for NOAA/NASA’s use,”

      The probability chart shown is from NOAA’s 2014 State of the Climate (see the caption for the link). I believe the IPCC, NOAA, and NASA all use the same definitions.

      1. Editor,

        Here’s why I stated what I did: “Hi Danny, NOAA did a pretty nice uncertainty analysis of ‘warmest year.’ However, how that was communicated to the public was rather misleading. Note, the formal confidence levels used by the IPCC is not necessarily the same verbiage used in the NOAA/NASA press release, which provides further confusion to people try to pay attention to this.” (http://judithcurry.com/2015/02/21/week-in-review-44/#comment-676692)

        As I said, I’ve not been able to locate the formal definitions of NOAA/NASA for their uses of probabilities and/or confidence levels (such as that detailed by IPCC). Seems there should be some sort of standard in the scientific community.

        Then yet a third source. (Apologies for off topic, yet pertinent in communicating uncertainty when distributing scientific results as your topic relates).

        After all, each term (Ocean Heat Content, Global Average Temperature) is not a measurement but a ‘projection’ based on some method of compilation.

        Regards,

      2. Danny,

        Good catch by Prof Curry, that the IPCC (AR5) and NOAA statements of likelihood are not the same.

        “I’ve not been able to locate the formal definitions of NOAA/NASA for their uses of probabilities … ”

        To repeat what I said before: it is in this post. Click on the link to go to the original on the NOAA website. I don’t know if NASA has one. I don’t believe either has a schedule of confidence levels like that of the IPCC.

  12. Two types of pseudoscience
    There are two types of pseudoscience, distinguishable by the following assumptions:
    1) Valid science is indicated by consensus
    2) Valid science is obvious
    The first tends to be associated with people who are politically liberal.
    The second tends to be associated with people who are politically conservative.

    Very often these two groups are at odds with each other on a particular issue. But when it comes to undermining the credibility of science they play for the same team.

    About ten years ago I confronted the “settled science” of global warming. My examination revealed it as plainly inept. That brought me to wonder if there were not other, deeper, ineptitudes. I found myself examining the foundational assumptions of meteorology and, deeper still, core issues regarding the physical chemistry of H2O. And then I made a discovery:
    BREAKTHROUGH: Hydrogen Bonding as The Mechanism That Neutralizes H2O Polarity
    https://goo.gl/Hrb6Sb

  13. It has been a very long time since I used them, but it would be interesting to review ASME standards for measuring temperature (19.3) and uncertainty (19.1) to see if any lessons can be learned. Unfortunately I do not have immediate access to these docs and do not feel like spending the $138 per. I recall (vaguely) the particularly onerous requirement for measuring condenser cooling water temperatures (required for reporting turbine performance for contractual reasons). This would apply to those engine room intake temperature logs touted to be so precise and reliable.

    1. Jeff,

      That’s an interesting idea!

      “This would apply to those engine room intake temperature logs touted to be so precise and reliable.”

      I too am skeptical about the accuracy of older sea surface temperature records, especially when extrapolated to give global averages.

      I don’t believe engine intakes have any role in computing ocean heat (i.e., temperatures at depth), or modern sea surface temperatures (1990-now? 2000-now?).

  15. I suspect that the error bars on the metric “Yearly Vertically Averaged Temperature Anomaly 0-2000 meters” actually represent a statistical measure called the Probable Error of the Mean (PEM). When a mean is calculated from a set of data the standard deviation is the measure of the scatter in the data. The Probable Error (PE) is the value of the boundary distance above and below the mean that is needed to include 50% of the data. The “Vertically Averaged Temperature” would use multiple data points from 0-2000 meters to produce a single mean value – a single number – and the difference of the raw data from this mean becomes the ‘anomaly’. Half of the data would fall within the range of the mean plus or minus the PE.

    A *Yearly* “Vertically Averaged Temperature Anomaly 0-2000 meters” would average those mean values (!) over a year. The corresponding probable error would be called the Probable Error of the Mean (PEM). The PEM is smaller than the PE by a factor equal to the square root of the number of mean values averaged. The square root of 365 is a little larger than 19.

    By reporting the averaged values of mean values (rather than an average of raw data) they are ‘entitled’ to report the PEM. By failing to distinguish this fact they are being purposely disingenuous, reporting error bars that are 19 times smaller than the probable error of the data used (by dubiously averaging TWICE). This is how they arrive at ‘error bars’ of about +/- 0.01 degrees using raw data that is only accurate, at best, to +/- 0.2 degrees and may range over several degrees.

    1. Tadchem,

      Thanks for the explanation!

      These studies build on one another, as each considers the previous one’s finding a foundation. For the results see Seth Borenstein’s AP story “Study: Man-Made Heat Put in Oceans Has Doubled Since 1997“, based on “Industrial-era global ocean heat uptake doubles in recent decades” by Peter Gleckler et al, Nature Climate Change, January 2016. Lots of firm conclusions built with little consideration of uncertainties in the data.
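The arithmetic Tadchem describes above is worth making explicit. A minimal sketch, using his ±0.2 degree figure as-is and treating the averaged values as having independent errors (the assumption his comment questions):

```python
# The arithmetic behind the comment above: a probable error of the mean (PEM)
# shrinks with the square root of the number of values averaged -- IF those values
# have independent errors. The 0.2 degC figure is the commenter's, used as-is.
import math

pe      = 0.2                      # degC, assumed probable error of the raw data
n_means = 365                      # daily mean values averaged into a yearly value
pem     = pe / math.sqrt(n_means)  # probable error of the yearly mean

print(f"sqrt(365) ~= {math.sqrt(n_means):.1f}")
print(f"PEM = {pe} / sqrt(365) ~= {pem:.3f} degC")
# ~0.010 degC -- the size of the tiny error bars in the NOAA graph. The division
# by sqrt(N) is justified only if the daily errors are independent; any shared
# (systematic) error is not reduced by averaging.
```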

  16. Anthony Barnston’s quote below confirms to me that the people playing with this data know nothing about precision and error. Errors don’t cancel each other out, they accumulate.

    “These can cause errors of over a full degree in some cases. We hope that these errors cancel one another out, but it’s not always the case, because they are sometimes non-random, and large areas have the same direction of error (no cancellation)”.

    The guy is saying “I went to the pub and threw two darts at a dart board. One went exactly 3 feet below, the other went exactly 3 feet above, so I bought everyone a drink because averaged out I got a direct hit on the bullseye”.

    Anthony Barnston’s bio says he’s been producing temperature datasets, maps, and diagrams for 17 years and is an expert in climate data correction and compilation. Shame he knows jack about statistics, and relies on “hope” for his datasets to be correct.

    1. Big bell,

      “Anthony Barnston’s quote below confirms to me that the people playing with this data know nothing about precision and error”

      Not my field, so I won’t comment on the specifics. However I’m confident your statement is false. I suggest you post a comment in reply at the NOAA website. He’s good about replying!

  18. Steven Mosher reminds us of an important point in this comment at WUWT (Mosher is with the BEST project): “There is only one dataset that is actually tied to SI standards. ”

    He points to this excerpt from “A quantification of uncertainties in historical tropical tropospheric temperature trends from radiosondes” by Peter Thorne et al, JGR-Atmospheres, 27 June 2011:

    “With the notable exception of the Keeling curve of CO2 concentration changes [Keeling et al., 1976], to date there exists no climate record that is definitively tied to SI standards. Such records require comprehensive metadata, traceability at every step to absolute (SI) standards, and a careful and comprehensive calculation of error budgets [Immler et al., 2010]. They are expensive, time consuming to produce, and difficult to construct and maintain. It is therefore understandable that virtually all of the historical meteorological data available to the community fail, usually substantially, to measure up to such exacting standards.

    “As a result, there will always be uncertainty in establishing how the climate system has evolved, notwithstanding careful attempts to identify and adjust for all apparent nonclimatic artifacts. Despite some claims to the contrary, no single approach is likely to encapsulate all of the myriad uncertainties in the data set construction process. The issue is most critical for multidecadal trends, since residual errors act as red noise, projecting most strongly onto the longest timescales [Seidel et al., 2004; Thorne et al., 2005b].”
