Measuring the 21st Century: are our eyes on the economy corrupt & incompetent?

Summary: The government’s statistics provide the eyes by which we see the US economy. This NBER article explains how economists are working to keep the numbers accurate as the world changes, and reminds us why we shouldn’t listen to those on the Right who question the statisticians’ skill and integrity.

The bad news: we lost a trillion dollars/year of GDP since the crash (figure 1)

Figure 1: Long-term graph of US real GDP.

Our world is steered by governments and corporations that see the world through economic statistics. We have two problems with these numbers. First, we are hosed with a stream of misinformation from the Right: fanciful stories of incompetent and corrupt government statisticians, circulated as part of their campaign to discredit the government (Zero Hedge runs these almost weekly). Second, the statistics have to change as the world changes. Economists constantly work to update them, but it’s a Red Queen race (running as fast as they can just to stay in place). Worse, the statistical agencies are grossly underfunded (there is no legion of lobbyists fighting for their funding).

As an antidote to the first problem, and to help us better understand the second, here is an essay from one of the organizations at the forefront of this key project — explaining…

Measuring the Economy of the 21st Century

An excerpt from an article by Charles R. Hulten
in the Reporter of the National Bureau of Economic Research, 2015 #4

We are now well into the 21st century, and as with many other great inventions, there are constant challenges in updating the national statistical system to reflect the current technological environment. GDP provides a statistical portrait of the economy as it evolves over time. However, the process of evolution itself has altered these flows in ways that undermine the accuracy or relevance of past concepts and data sources. The rapid transformation of the U.S. economy brought about by the revolution in information technology has introduced a profusion of new products and processes, new market channels, and greater organizational complexity. Parts of the statistical system are struggling to keep up.

The problem is nowhere more evident than in the difficulties associated with the Internet’s contribution to GDP. Valuing the ‘net and the wide range of applications offered with little or no direct charge is challenging because there is no reliable monetary yardstick to guide measurement, and their omission or undervaluation surely affects GDP.

This is important for the recent debate over future living standards and employment. The two percent growth rate of real U.S. GDP since the end of the Great Recession has lagged the long-term historical rate of 3%, inviting speculation about the emergence of a New Normal. [See Figure 1 above.] This view is reinforced by Robert Gordon’s recent suggestion that the growth effects of the information revolution are not of the same order of importance as those of previous technological revolutions and are, in any event, playing out.2 The future may look very different if recent GDP growth is significantly understated because of the mismeasurement of new goods and services.

Past and current efforts {of the NBER to study this} are reviewed in this summary, starting with the importance of accurately accounting for new goods and improvements in the quality of existing ones, and the related problem of measuring the output of the service-producing sectors of the economy. The following sections take a closer look at three of the most important service sectors: health care, education, and finance. Subsequent sections focus on capital and labor in the new economy, the role of entrepreneurship and company formation, and the problem of national income accounting in an increasingly globalized world. A final section sums up.

New Goods and Quality Change

In his discussion of “Effects of the Progress of Improvement upon the Real Price of Manufactures” in The Wealth of Nations, Adam Smith dodged the problem of changing product quality by saying “Quality, however, is so very disputable a matter, that I look upon all information of this kind as somewhat uncertain.”3 He was referring to price trends in the production of cloth, but fast-forward more than two centuries to William Nordhaus writing on the history of lighting, who argues that official price indexes may “miss the most important revolutions in economic history” because of the way they are constructed.4 The quality problem endures and, if anything, has gotten more difficult with the profusion of new and improved goods.

The quality change problem arises when a new version of a good is introduced that embodies characteristics that make it more desirable. The new model may not cost much more than the old, but represents a greater effective amount of output from the user’s standpoint. If the price per unit transacted in the market does not change, the substitution of a new unit for an older model will not affect either nominal or apparent real GDP, because the apparent market price has not changed. However, effective real output has increased, and the benefits of the innovation are lost in the official data. Personal computers are an important example, and in the mid-1980s, the Bureau of Economic Analysis {BEA} began adjusting computer prices to better reflect the technological gains in computing power.
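To make the measurement issue concrete, here is a minimal Python sketch, with a hypothetical product and invented numbers (not the BEA's actual hedonic procedure), showing how the same transaction yields different real output depending on whether the deflator is quality-adjusted.

```python
# A minimal sketch (hypothetical product and numbers, not the BEA's method) of
# why an unadjusted price index misses quality change, and how a crude
# quality adjustment changes measured real output.

# Year 1: old PC model. Year 2: new model sold at the same price, but with
# twice the effective computing power (the characteristics users value).
old = {"price": 1000.0, "units_sold": 100, "quality": 1.0}
new = {"price": 1000.0, "units_sold": 100, "quality": 2.0}

nominal_y1 = old["price"] * old["units_sold"]
nominal_y2 = new["price"] * new["units_sold"]

# Matched-model deflator: the posted price is unchanged, so the index is flat
# and apparent real output does not move.
matched_deflator = new["price"] / old["price"]            # = 1.0
apparent_real_y2 = nominal_y2 / matched_deflator          # = 100,000

# Quality-adjusted deflator: price per unit of quality fell by half, so real
# output doubles even though nominal spending is unchanged.
adjusted_deflator = (new["price"] / new["quality"]) / (old["price"] / old["quality"])  # = 0.5
effective_real_y2 = nominal_y2 / adjusted_deflator        # = 200,000

print(f"Nominal spending, both years:   {nominal_y1:,.0f}")
print(f"Real output, unadjusted index:  {apparent_real_y2:,.0f}")
print(f"Real output, quality-adjusted:  {effective_real_y2:,.0f}")
```

Actual hedonic indexes implement the same idea by regressing prices on observed product characteristics rather than applying a single quality ratio.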

The new goods variant of the product innovation problem is even more challenging because, unlike the quality change problem, there are no prior versions of the good on which to base price comparisons. Current procedures for incorporating new goods into existing price indexes are complicated, but may miss much of the value of these innovations. Jerry Hausman {has} examined the introduction of a new brand of breakfast cereal and found that the treatment (or non-treatment) of new goods in official statistics resulted in a 20% upward bias in that component of the Consumer Price Index.5 He arrived at a similar conclusion in a subsequent paper on mobile cellular telephones, though the magnitude of the bias is larger.6

By implication, the benefits of important new information technology goods, like the Internet and the many applications it enables, may be subject to significant undervaluation.
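Hausman's approach rests on estimating demand for the new good and valuing the consumer surplus that conventional linking procedures never capture. The sketch below illustrates the logic with an invented linear demand curve; the numbers are not his estimates.

```python
# A stylized illustration of the new-goods problem (hypothetical numbers, not
# Hausman's estimates). A conventional index links a new good in only after it
# appears, so it never records the drop from the good's "virtual" (reservation)
# price to its market price. A demand curve lets us approximate the missed
# consumer surplus.

a, b = 200.0, 2.0              # assumed linear demand: quantity = a - b * price
market_price = 40.0            # observed price after introduction

quantity = a - b * market_price        # units bought at the market price
virtual_price = a / b                  # price at which demand falls to zero

# Consumer surplus: triangle under the demand curve and above the market price.
consumer_surplus = 0.5 * (virtual_price - market_price) * quantity

spending = market_price * quantity     # the only number GDP actually records
print(f"Recorded spending on the new good:      {spending:,.0f}")
print(f"Surplus missed by conventional linking: {consumer_surplus:,.0f}")
```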

The question of how much product innovation has been omitted from estimates of real GDP is germane to the issues raised by Gordon. If the upward bias in price indexes is of the magnitude suggested by Nordhaus, Hausman, and others, then the growth in real GDP may be considerably greater than the official estimates suggest.10 Whether the bias has increased in recent years and is large enough to offset the apparent slowdown in recent growth is another matter. It is a subject that will undoubtedly be on the agendas of future {NBER} conferences.

The Services Sector Problem

The private services-producing sectors of the U.S. economy constitute some four-fifths of recent private business value added. Not only do they account for a large fraction of GDP, but these sectors are also essential for understanding the trends in aggregate economic growth. In his introduction to the {NBER} volume Output Measurement in the Service Sectors, Zvi Griliches wrote that the much-discussed productivity slowdown of the 1970s and 1980s might be due to two factors: “Baumol’s disease,” in which relative labor intensity and a high income elasticity of demand doom these sectors to slower productivity growth, and the possibility that the output of these sectors was inherently more difficult to measure.11

Fast forward, again, to the 2007 NBER paper by Barry Bosworth and Jack Triplett on service sector productivity. This paper revisits and updates Griliches’ earlier finding that services were a drag on overall growth during the slowdown. Looking at a longer period, they report a speed-up in services relative to the goods-producing sectors: Labor productivity growth in services rose from an annual rate of 0.7% in 1987–1995 to 2.6% for 1995–2001, while the corresponding numbers for the goods-producing sectors were 1.8% and 2.3%, respectively. They also find that 80% of the increase in overall labor productivity growth after 1995 came from the contribution of information technology in the service sectors, contrary to the Baumol hypothesis that services were inherently resistant to productivity change.
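To see how such sectoral numbers add up to a statement about aggregate growth, here is a back-of-the-envelope sketch; the growth rates are those quoted above, while the sector weights are hypothetical round numbers rather than Bosworth and Triplett's actual shares.

```python
# Back-of-the-envelope sketch of sectoral contributions to aggregate labor
# productivity growth. The growth rates are the ones quoted above; the sector
# shares are hypothetical round numbers, not Bosworth and Triplett's data.

services_growth = {"1987-1995": 0.7, "1995-2001": 2.6}   # percent per year
goods_growth    = {"1987-1995": 1.8, "1995-2001": 2.3}   # percent per year

services_share, goods_share = 0.75, 0.25   # assumed output (or hours) shares

def aggregate_growth(period: str) -> float:
    """Share-weighted aggregate labor productivity growth for a period."""
    return (services_share * services_growth[period]
            + goods_share * goods_growth[period])

speedup = aggregate_growth("1995-2001") - aggregate_growth("1987-1995")
services_part = services_share * (services_growth["1995-2001"] - services_growth["1987-1995"])

print(f"Aggregate growth, 1987-1995: {aggregate_growth('1987-1995'):.2f}%/yr")
print(f"Aggregate growth, 1995-2001: {aggregate_growth('1995-2001'):.2f}%/yr")
print(f"Post-1995 speed-up:          {speedup:.2f} points")
print(f"Share of speed-up from services: {services_part / speedup:.0%}")
```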

Sorting out changing sectoral trends is made more difficult because the output of the services sectors is resistant to accurate measurement, in part because the quality change problem is particularly large in many of these sectors, and in part because of their very nature. Griliches also observed that a “problem arises because in many services sectors it is not exactly clear what is being transacted, what is the output, and what services correspond to the payments made to their providers.”13

A simple contingency-state model illustrates the problem. The outcome of expert advice or intervention (e.g., medical, legal, financial, educational, management consulting) can be thought of as a shift from an initial state of being to a post-intervention state, where “state” refers variously to the condition of wellness, legal or financial position, knowledge, etc. The subject purchases expert services, X, in the expectation or hope that they will have a positive outcome. However, the outcome also depends on the subject’s own efforts and initial state of being. Measured GDP records the payment for X, and perhaps ancillary expenses incurred (e.g., joining a health club), but not necessarily the value of the outcome to the recipient, which may be different and is often complex and subjective.
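A toy numerical version of this model (the functional form and all numbers are invented for illustration) makes the gap explicit between the payment that GDP records and the change of state the buyer actually receives.

```python
# A toy numerical version of the contingency-state model. The functional form
# and every number are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Subject:
    initial_state: float   # e.g., health or knowledge on an arbitrary 0-100 scale
    own_effort: float      # the subject's own contribution to the outcome

def post_intervention_state(subject: Subject, expert_services: float) -> float:
    """The outcome depends on the starting state, the subject's own effort,
    and the quantity X of expert services purchased."""
    return subject.initial_state + 0.5 * subject.own_effort + 2.0 * expert_services

price_per_unit_of_x = 300.0   # fee per visit, hour, or procedure
x_purchased = 4.0

patient = Subject(initial_state=60.0, own_effort=10.0)

gdp_records = price_per_unit_of_x * x_purchased   # what measured GDP captures
state_gain = post_intervention_state(patient, x_purchased) - patient.initial_state

print(f"Payment recorded in GDP:                     {gdp_records:,.0f}")
print(f"Change in the subject's state (other units): {state_gain:.1f}")
```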

A fundamental problem arises when trying to separate X into price and quantity components in order to measure real GDP: In what units do you measure X? Doctors and lawyers may provide information but bill by the visit, or the hour, or the procedure. This is their “output,” and it is not measured in bits or bytes of expert information. The service providers usually do not sell guaranteed outcomes, since the advice they provide may not be heeded and outcomes are often uncertain.

There is a parallel problem in the units in which outcomes are measured: Whatever these units are, they are not necessarily the same for buyers and sellers. But if there are no clear units of measurement, how is it possible to determine the level of output and tell if improvements in technology have increased outcome-based output over time? This is a problem for understanding the factors driving recent GDP growth, given the service sector’s technological dynamism in recent years and the increased availability of expert advice and information on the Internet.

Selected Service Industries

Anyone who remembers a visit to the dentist in the 1950s can testify to the enormous gains in efficacy and patient comfort that have occurred. Huge advances have been made in diagnostics (e.g., the MRI), treatment (e.g., laparoscopic surgery), and drug therapies (e.g., statins). Any attempt to measure real output in the health sector and its contribution to real GDP growth must account for these advances. More technology is on the way, with gene-based therapies, robotic surgery, and diagnoses that make use of the potential of Big Data. A pure expenditure approach misses some of the most important technological advances of the last 50 years.

Adjusting expenditures (X) to reflect better outcomes is not a simple matter, as the contingent-state model illustrates. Outcomes have a subjective component, like improved quality of life, and depend on the pre-treatment state of health. Expenditures are price-denominated, whereas outcomes are not, at least not in their pure state.

The same general line of analysis applies to the education sector. The overall objective of schooling is to move a student from one state of knowledge or capability to another. The “output” of the sector, as measured by educational attainment, has increased dramatically in the United States, as have per capita expenditures. The fraction of the adult population with a bachelor’s degree increased from less than 10% in 1960 to around 30% today, and some two-thirds of high school graduates go on to some form of tertiary education.

The improvement in educational outcomes is another matter. The recent “Nation’s Report Card” from the National Assessment of Education Progress reported that literacy and numeracy scores of 12th graders have been stagnant in recent years, and that a majority of students are stuck at skill levels that are rated below proficient, with one-quarter of students below “basic” in reading and one-third below “basic” in mathematics.15 International comparisons have found similar results.

The difficulty that statisticians have in keeping up with rapid changes in technology and markets is another complicating factor. This is nowhere more apparent than in the financial services sector in the years after the financial crisis and sharp economic downturn. Why wasn’t a crisis of such huge proportion more evident beforehand in official aggregate statistics of the economy?

Part of the answer is that new financial instruments and market innovations are disruptive; they take time to understand and to integrate into large-scale macro data systems like the national accounts, which have requirements of temporal consistency and breadth of coverage that limit the rate at which the accounts can change.

Labor and Capital in the New Economy

The input side of the economy has also been affected by the digital revolution. This is apparent in the 2005 volume Measuring Capital in the New Economy, which is largely devoted to the growing importance of intangible capital formation. This form of capital investment includes scientific and other R&D, brand equity, customer lists and reputation, worker training, and management and human resource systems.

Carol Corrado, Daniel Sichel, and I find that investment in intangibles has become the dominant source of business capital formation, far outstripping investment in tangible plant and equipment, which has been on a downward trajectory. [See Figure 2.] In 2010, the investment rate for the latter was around 8%, versus an estimated 14% for intangibles.21 This is relevant for the debate over slowing productivity growth, since most of the studies do not include intangible capital and thus omit a major and growing source of technological and organizational innovation.

Figure 2: Tangible and intangible rates of investment. Source: C. A. Corrado and C. R. Hulten (2014).

Measuring intangible capital presents a host of problems, since much of it is produced within firms on “own account,” without a market transaction to fix prices and quantities. However, while the problems are difficult, progress is possible. In a major advance in innovation accounting, the Bureau of Economic Analysis successfully incorporated own-account R&D into the national accounts in 2013, along with artistic originals.

Labor markets also are changing, and the 2010 volume Labor in the New Economy takes up some important issues, including the outsourcing of jobs, job security, “good” jobs versus “bad” jobs, the aging of the workforce, different forms of worker compensation, and rising wage inequality.22

——————————— End excerpt ——————————— 

Charles R. Hulten

About the author

Charles R. Hulten is a research associate of the National Bureau of Economic Research and professor of economics at the University of Maryland. He has served for 30 years as chairman of the NBER’s Conference on Research in Income and Wealth, a forum in which academics, members of the policy community, and representatives of the statistical agencies meet regularly to discuss problems with the statistical system.

Hulten’s recent research includes work on measurement of intangible capital and effects of intangibles on innovation, economic growth, and corporate wealth. His research interests also include the measurement of economic depreciation, public and private capital formation, and productivity theory and analysis.

See these articles by C. A. Corrado and C. R. Hulten: “How Do You Measure a Technological Revolution?” American Economic Review, May 2010, and “Innovation Accounting” in Measuring Economic Sustainability and Progress, D. W. Jorgenson, J. S. Landefeld, and P. Schreyer, eds. (2014).


About the National Bureau of Economic Research (NBER)

Founded in 1920, the NBER is the nation’s leading economic research organization: a private, nonprofit, nonpartisan institution dedicated to conducting economic research.

The Bureau’s associates concentrate on four types of empirical research: developing new statistical measurements, estimating quantitative models of economic behavior, assessing the economic effects of public policies, and projecting the effects of alternative policy proposals. The NBER is supported by research grants from government agencies and private foundations, by investment income, and by contributions from individuals and corporations.

For More Information

Please like us on Facebook, follow us on Twitter, and post your comments — because we value your participation.

See these posts about the economic statistics by which our world is guided…

11 thoughts on “Measuring the 21st Century: are our eyes on the economy corrupt & incompetent?”

  1. Any measurement that changes qualitatively from year to year is, by any useful definition, no longer a measurement. Beating one’s head against the wall trying to figure out how to measure apples with oranges is useful only for filling up an itinerary of rainy-day activities. Anything else is just being dishonest about the uncertainties and assumptions involved. A time series with changing definitions of variables and without error bars is just dishonest. When I was a young and naive student in economics, the hubris of the discipline had me convinced these were useful metrics. But then one grows up!

    But that is why most people with common sense ignore these data!

    1. David,

      I don’t see the relevance of your rant to this essay, especially since you provide no specifics.

      “…without error bars is just dishonest”

      Please be more specific. The Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics routinely include error bars on their surveys.

      Some economic statistics are just collected activity data — not surveys — and therefore have no error bars, such as the weekly unemployment claims.

      1. David,

        Thanks for the explanation! About GDP…

        “But that is why most people with common sense ignore these data!”

        That’s false. GDP stats are the highest level economic stats, and are highly conceptual attempts to measure what is pretty much an intangible: national income. It’s a useful number, but users must be aware of its limitations.

        “without error bars is just dishonest.”

        Also false. Many kinds of information do not have calculable error bars. The BEA clearly states that “Because the GDP estimates are based on administrative records and other nonsample data, confidence intervals and standard errors cannot be used to measure accuracy.”

        http://www.bea.gov/about/pdf/jep_spring2008.pdf

  2. Most interesting essay. I had thought about the problems of measuring ‘productivity’. As a writer and essayist, my own productivity, in real terms, has grown enormously because of the Internet and Google, not just because the Mac has replaced my typewriter. But I hadn’t thought of the intangible capital formation, though I’d read about businesses simply acquiring a set of scientists through purchasing the firm that employed them. How do I measure my own productivity? The income is insubstantial, but my sense of worth has increased a lot.

    Good stuff — though no answers yet.

    1. Don,

      “As a writer and essayist my own productivity, in real terms, has grown enormously because of the Internet and Google”

      Mine, on the other hand, has improved only in some dimensions. The proofing is better. The data sources used are better in quality and quantity. But I doubt my articles are more intelligently conceived or written. The volume is not higher. Whatever productivity is gained from going paperless is lost to the need to add visuals (to gain audience) and more complex formatting.

      I suspect that is true overall of business writing. For example, there is lower tolerance for typos and misspellings, even in an interoffice context where they should not matter. And the shift to PowerPoint has probably created a large drop in productivity for both writers and the audience.

  3. Powerpoint! Oh yes. I used it once, when a CEO, and never afterwards.

    What I meant by productivity can be explained by an example. I have been a writer, newspaper columnist, author, and novelist since the mid-1960s. In the old days, if I needed an exact item — like how many Papal visits there have been to my country — I would ring up my friendly newspaper librarian, schmooze a bit, and ask her. Today I can get that item in less than a minute. I am ‘speeded up’. Whether or not the outcome is any better — that is for others to say. It’s tricky, isn’t it?

    1. Don,

      While I agree about the increased speed, efficiency, and scope of research today versus the past, I believe the gains in productivity are more than offset by the increased time spent formatting and preparing visuals. That’s true for me, at least.

      Worse, the core element of writing — thinking and composition — has not accelerated. Yet. Drugs for increased intelligence — steroids of the mind — are coming, and will be used even if they have severe side effects.
