About models, increasingly often the lens through which we see the world

Here is one of Paul Krugman’s many brilliant essays.  It has broad applicability today, as we grow increasingly dependent on mathematical models in fields such as science, public health, banking, and public finance.  Yet both practitioners and laypeople too seldom understand their strengths and weaknesses.

A note about Krugman

One of the saddest aspects of the comments on the FM website is seeing the closed minds, especially the people who judge others’ knowledge by its agreement with their own beliefs.  As in “Don’t pay attention to him, he’s a Liberal/Conservative/climate-denier/racist.”  This is especially strong among conservatives with respect to economics.  Paul Krugman, despite winning the Nobel Prize, is dismissed as a buffoon.  While I am no fan of his part-time work as an attack dog for the Democratic Party (a liberal Spiro Agnew), his professional work is first-rate.  (Unfortunately, like too many scientists these days, he does not distinguish between these two roles.)

Excerpt from “The Fall and Rise of Development Economics”

By Paul Krugman, from Rethinking the Development Experience, edited by Lloyd Rodwin and Donald A. Schön (1994) — I have added section headings.

Metaphors and Models

I have just acknowledged that the tendency of economists to emphasize what they know how to model formally can create blind spots; yet I have also claimed that the insistence on modeling is basically right. What I want to do now is call a time out and discuss more broadly the role of models in social science.

It is said that those who can, do, while those who cannot, discuss methodology. So the very fact that I raise the issue of methodology in this paper tells you something about the state of economics. Yet in some ways the problems of economics and of social science in general are part of a broader methodological problem that afflicts many fields: how to deal with complex systems.

It is in a way unfortunate that for many of us the image of a successful field of scientific endeavor is basic physics. The objective of the most basic physics is a complete description of what happens. In principle and apparently in practice, quantum mechanics gives a complete account of what goes on inside, say, a hydrogen atom. But most things we want to analyze, even in physical science, cannot be dealt with at that level of completeness. The only exact model of the global weather system is that system itself. Any model of that system is therefore to some degree a falsification: it leaves out some (many) aspects of reality.

How, then, does the meteorological researcher decide what to put into his model? And how does he decide whether his model is a good one? The answer to the first question is that the choice of model represents a mixture of judgement and compromise. The model must be something you know how to make — that is, you are constrained by your modeling techniques. And the model must be something you can construct given your resources — time, money, and patience are not unlimited. There may be a wide variety of models possible given those constraints; which one or ones you choose actually to build depends on educated guessing.

And how do you know that the model is good? It will never be right in the way that quantum electrodynamics is right. At a certain point you may be good enough at predicting that your results can be put to repeated practical use, like the giant weather-forecasting models that run on today’s supercomputers; in that case predictive success can be measured in terms of dollars and cents, and the improvement of models becomes a quantifiable matter. In the early stages of a complex science, however, the criterion for a good model is more subjective: it is a good model if it succeeds in explaining or rationalizing some of what you see in the world in a way that you might not have expected.

Notice that I have not specified exactly what I mean by a model. You may think that I must mean a mathematical model, perhaps a computer simulation. And indeed that’s mostly what we have to work with in economics. But a model can equally well be a physical one, and I’d like to describe briefly an example from the pre-computer era of meteorological research: Fultz’s dish-pan.

Fultz’s dish-pan

Dave Fultz was a meteorological theorist at the University of Chicago, who asked the following question: what factors are essential to generating the complexity of actual weather? Is it a process that depends on the full complexity of the world — the interaction of ocean currents and the atmosphere, the locations of mountain ranges, the alternation of the seasons, and so on — or does the basic pattern of weather, for all its complexity, have simple roots?

He was able to show the essential simplicity of the weather’s causes with a “model” that consisted of a dish-pan filled with water, placed on a slowly rotating turntable, with an electric heating element bent around the outside of the pan. Aluminum flakes were suspended in the water, so that a camera perched overhead and rotating with the pan could take pictures of the pattern of flow.

The setup was designed to reproduce two features of the global weather pattern: the temperature differential between the poles and the equator, and the Coriolis force that results from the Earth’s spin. Everything else — all the rich detail of the actual planet — was suppressed. And yet the dish-pan exhibited an unmistakable resemblance to actual weather patterns: a steady flow near the rim evidently corresponding to the trade winds, constantly shifting eddies reminiscent of temperate-zone storm systems, even a rapidly moving ribbon of water that looked like the recently discovered jet stream.

What did one learn from the dish-pan? It was not telling an entirely true story: the Earth is not flat, air is not water, the real world has oceans and mountain ranges and for that matter two hemispheres. The unrealism of Fultz’s model world was dictated by what he was able to or could be bothered to build — in effect, by the limitations of his modeling technique. Nonetheless, the model did convey a powerful insight into why the weather system behaves the way it does.

The important point is that any kind of model of a complex system — a physical model, a computer simulation, or a pencil-and-paper mathematical representation — amounts to pretty much the same kind of procedure. You make a set of clearly untrue simplifications to get the system down to something you can handle; those simplifications are dictated partly by guesses about what is important, partly by the modeling techniques available. And the end result, if the model is a good one, is an improved insight into why the vastly more complex real system behaves the way it does.

Social Sciences

When it comes to physical science, few people have problems with this idea. When we turn to social science, however, the whole issue of modeling begins to raise people’s hackles. Suddenly the idea of representing the relevant system through a set of simplifications that are dictated at least in part by the available techniques becomes highly objectionable. Everyone accepts that it was reasonable for Fultz to represent the Earth, at least for a first pass, with a flat dish, because that was what was practical. But what do you think about the decision of most economists between 1820 and 1970 to represent the economy as a set of perfectly competitive markets, because a model of perfect competition was what they knew how to build? It’s essentially the same thing, but it raises howls of indignation.

Why is our attitude so different when we come to social science? There are some discreditable reasons: like Victorians offended by the suggestion that they were descended from apes, some humanists imagine that their dignity is threatened when human society is represented as the moral equivalent of a dish on a turntable. Also, the most vociferous critics of economic models are often politically motivated. They have very strong ideas about what they want to believe; their convictions are essentially driven by values rather than analysis, but when an analysis threatens those beliefs they prefer to attack its assumptions rather than examine the basis for their own beliefs.

Still, there are highly intelligent and objective thinkers who are repelled by simplistic models for a much better reason: they are very aware that the act of building a model involves loss as well as gain. Africa isn’t empty [see below], but the act of making accurate maps can get you into the habit of imagining that it is. Model-building, especially in its early stages, involves the evolution of ignorance as well as knowledge; and someone with powerful intuition, with a deep sense of the complexities of reality, may well feel that from his point of view more is lost than is gained. It is in this honorable camp that I would put Albert Hirschman and his rejection of mainstream economics.

Clouds predict storms

The cycle of knowledge lost before it can be regained seems to be an inevitable part of formal model-building. Here’s another story from meteorology. Folk wisdom has always said that you can predict future weather from the aspect of the sky, and had claimed that certain kinds of clouds presaged storms. As meteorology developed in the 19th and early 20th centuries, however — as it made such fundamental discoveries, completely unknown to folk wisdom, as the fact that the winds in a storm blow in a circular path — it basically stopped paying attention to how the sky looked. Serious students of the weather studied wind direction and barometric pressure, not the pretty patterns made by condensing water vapor.

It was not until 1919 that a group of Norwegian scientists realized that the folk wisdom had been right all along — that one could identify the onset and development of a cyclonic storm quite accurately by looking at the shapes and altitude of the cloud cover.

The point is not that a century of research into the weather had only reaffirmed what everyone knew from the beginning. The meteorology of 1919 had learned many things of which folklore was unaware, and dispelled many myths. Nor is the point that meteorologists somehow sinned by not looking at clouds for so long. What happened was simply inevitable: during the process of model-building, there is a narrowing of vision imposed by the limitations of one’s framework and tools, a narrowing that can only be ended definitively by making those tools good enough to transcend those limitations.

But that initial narrowing is very hard for broad minds to accept. And so they look for an alternative.

The problem is that there is no alternative to models. We all think in simplified models, all the time. The sophisticated thing to do is not to pretend to stop, but to be self-conscious — to be aware that your models are maps rather than reality.

There are many intelligent writers on economics who are able to convince themselves — and sometimes large numbers of other people as well — that they have found a way to transcend the narrowing effect of model-building. Invariably they are fooling themselves. If you look at the writing of anyone who claims to be able to write about social issues without stooping to restrictive modeling, you will find that his insights are based essentially on the use of metaphor. And metaphor is, of course, a kind of heuristic modeling technique.

In fact, we are all builders and purveyors of unrealistic simplifications. Some of us are self-aware: we use our models as metaphors. Others, including people who are indisputably brilliant and seemingly sophisticated, are sleepwalkers: they unconsciously use metaphors as models.

Another example — Africa isn’t empty: the evolution of ignorance

A friend of mine who combines a professional interest in Africa with a hobby of collecting antique maps has written a fascinating paper called “The evolution of European ignorance about Africa.” The paper describes how European maps of the African continent evolved from the 15th to the 19th centuries.

You might have supposed that the process would have been more or less linear: as European knowledge of the continent advanced, the maps would have shown both increasing accuracy and increasing levels of detail. But that’s not what happened. In the 15th century, maps of Africa were, of course, quite inaccurate about distances, coastlines, and so on. They did, however, contain quite a lot of information about the interior, based essentially on second- or third-hand travellers’ reports. Thus the maps showed Timbuktu, the River Niger, and so forth. Admittedly, they also contained quite a lot of untrue information, like regions inhabited by men with their mouths in their stomachs. Still, in the early 15th century Africa on maps was a filled space.

Over time, the art of mapmaking and the quality of information used to make maps got steadily better. The coastline of Africa was first explored, then plotted with growing accuracy, and by the 18th century that coastline was shown in a manner essentially indistinguishable from that of modern maps. Cities and peoples along the coast were also shown with great fidelity.

On the other hand, the interior emptied out. The weird mythical creatures were gone, but so were the real cities and rivers. In a way, Europeans had become more ignorant about Africa than they had been before.

It should be obvious what happened: the improvement in the art of mapmaking raised the standard for what was considered valid data. Second-hand reports of the form “six days south of the end of the desert you encounter a vast river flowing from east to west” were no longer something you would use to draw your map. Only features of the landscape that had been visited by reliable informants equipped with sextants and compasses now qualified. And so the crowded if confused continental interior of the old maps became “darkest Africa”, an empty space.

Of course, by the end of the 19th century darkest Africa had been explored, and mapped accurately. In the end, the rigor of modern cartography led to infinitely better maps. But there was an extended period in which improved technique actually led to some loss in knowledge.

Another example of the misuse of models

“Can You Really Predict the Success of a Marriage in 15 Minutes?“, Laurie Abraham, Slate, 8 March 2010 — “An excerpt from Laurie Abraham’s The Husbands and Wives Club.”  Excerpt:

John Gottman’s own road to Velcro-level fame started with a 1998 article in the Journal of Marriage and the Family. He and his colleagues at the University of Washington had videotaped newlywed couples discussing a contentious topic for 15 minutes to measure precisely how they fought over it: Did they criticize? Were they defensive? Did either spouse curl his or her lip in contempt? Then, three to six years later, Gottman’s team checked on the same couples’ marital status and announced that based on the coding of the tapes, they could predict with 83% accuracy which ones were divorced.

Over the next decade, Gottman’s narrow, bald head, fringed by a neat, gray beard and topped by a discreet yarmulke, began to appear everywhere—on 20/20 and The Today Show, in the New York Times Magazine and the Atlantic, and in hundreds of newspapers across the country. Malcolm Gladwell devoted most of a chapter to him in his huge best-seller Blink.

… Then, while researching my book The Husbands and Wives Club, I looked into Gottman’s research and saw that there were reasons other than a silly attachment to romance to think twice before trusting his formula—or anyone else’s—to predict the outcome of your marriage. Gottman’s “predictions” are not exactly what most of us think of as real predictions. And the way he reports them in all likelihood makes them seem much more robust than they really are. …
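The flaw Abraham describes is easy to demonstrate with numbers.  Below is a minimal sketch, making no assumptions about Gottman’s actual coding scheme or statistical model: the data are synthetic and the cohort size, score count, and variable names are invented for illustration.  It shows how a classifier scored on the same couples used to build it (in effect “predicting” outcomes already known) can report impressive accuracy even when the underlying signal is pure noise, while its accuracy on couples it has never seen falls to roughly a coin flip.

    # Illustration only: synthetic data, not Gottman's coding scheme or model.
    # Compares accuracy measured on the couples used to fit the model (in-sample)
    # with accuracy on couples the model has never seen (out-of-sample).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_couples, n_scores = 130, 20                 # hypothetical cohort and number of coded behaviors
    X = rng.normal(size=(n_couples, n_scores))    # invented "conflict" scores per couple
    y = rng.integers(0, 2, size=n_couples)        # divorce outcome, pure noise by construction

    X_fit, X_new, y_fit, y_new = train_test_split(X, y, test_size=0.5, random_state=0)
    model = LogisticRegression().fit(X_fit, y_fit)

    print("accuracy on couples used to build the model:", accuracy_score(y_fit, model.predict(X_fit)))
    print("accuracy on couples never seen before:      ", accuracy_score(y_new, model.predict(X_new)))
    # The first number is typically high; the second hovers near 0.5 (chance),
    # because there is no real signal to find. An accuracy figure quoted from the
    # first kind of test says little about true predictive power.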

For more information


Posts on the FM site about the theory and practice of economics:

  1. The greatness of John Maynard Keynes, our only guide in this crisis, 4 December 2008
  2. About the state of economic science, and advice from a famous economist, 8 December 2008
  3. “A depression is for capitalism like a good, cold douche.”, 17 December 2008
  4. Words of wisdom about the global recession, from the greatest economist of our era, 29 December 2008
  5. A very important article by an expert, discussing the necessary next step to solve the financial crisis, 17 February 2009
  6. Economic theory as a guiding light for government action in this crisis, 10 March 2009
  7. Economics in action, 6 June 2009
  8. Where to go to learn about economics, and help you understand what’s happening to America and the world, 16 February 2010
