Has America grown old, and can no longer grow? Or are wonders like the singularity in our future?

Summary:  The future of economic growth has become a hot topic. Here’s a brief look at two of the contrasting visions for our future. Read about both visions and decide for yourself!

No longer a consensus vision of the future


  1. Introduction
  2. Slow growth ahead — or even no growth
  3. Accelerating growth leading to the singularity
  4. For more information: the two theories about economic growth

(1) Introduction

Forecasts of stagnating growth have been commonplace throughout the centuries of economic progress since industrialization. Resource exhaustion, the end of innovation, overpopulation, pollution, wars, and social collapse: these are timeless concerns, by which far-seeing (or pessimistic) people in each generation force us to think ahead — to plan and act now to avoid foreseeable disasters.

Posts on the FM website tend to debunk doomsters, but not all warnings are exaggerated or in vain. Here we look at two contradictory views of economic growth (a narrow but vital perspective). No final judgment can be made here about which is correct. Everyone must choose as they will, and live their lives accordingly.


(2)  Slow growth ahead — or even no growth

“The more important fundamental laws and facts of physical science have all been discovered, and these are so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote. … Many instances might be cited, but these will suffice to justify the statement that ‘our future discoveries must be looked for in the sixth place of decimals.’”

— The great physicist Albert Abraham Michelson (1852-1931) in Light Waves and Their Uses (1903)

Is America, and perhaps the entire developed world, doomed to slow economic growth? Are we all fated to follow Japan into a multi-generational stagnation?

(a) The best-known version of this story

The Great Stagnation: How America Ate All The Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better, an e-book by Tyler Cowen (Prof Economics at George Mason U). About 15,000 words, and costs $4.00. Amazon link: here.

Another common version is “Japanification”:  deflation, low growth, aging population, and slow decay of the government’s solvency. Japan has shown how easily a nation falls into this, and how difficult the cure.

(b) A more analytic version of a no-growth forecast

“Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds”, Robert J. Gordon, National Bureau of Economic Research, August 2012 — Abstract:

This paper raises basic questions about the process of economic growth. It questions the assumption, nearly universal since Solow’s seminal contributions of the 1950s, that economic growth is a continuous process that will persist forever. There was virtually no growth before 1750, and thus there is no guarantee that growth will continue indefinitely. Rather, the paper suggests that the rapid progress made over the past 250 years could well turn out to be a unique episode in human history.

The paper is only about the United States and views the future from 2007 while pretending that the financial crisis did not happen. Its point of departure is growth in per-capita real GDP in the frontier country since 1300, the U.K. until 1906 and the U.S. afterwards. Growth in this frontier gradually accelerated after 1750, reached a peak in the middle of the 20th century, and has been slowing down since. The paper is about “how much further could the frontier growth rate decline?”

The analysis links periods of slow and rapid growth to the timing of the three industrial revolutions (IRs):

  1. steam & railroads — from 1750 to 1830;
  2. electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, & petroleum — from 1870 to 1900; and
  3. computers, the web, & mobile phones — from 1960 to present.

It provides evidence that IR #2 was more important than the others and was largely responsible for 80 years of relatively rapid productivity growth between 1890 and 1972. Once the spin-off inventions from IR #2 (airplanes, air conditioning, interstate highways) had run their course, productivity growth during 1972-96 was much slower than before.

In contrast, IR #3 created only a short-lived growth revival between 1996 and 2004. Many of the original and spin-off inventions of IR #2 could happen only once – urbanization, transportation speed, the freedom of females from the drudgery of carrying tons of water per year, and the role of central heating and air conditioning in achieving a year-round constant temperature.

Even if innovation were to continue into the future at the rate of the two decades before 2007, the U.S. faces six headwinds that are in the process of dragging long-term growth to half or less of the 1.9% annual rate experienced between 1860 and 2007. These include demography, education, inequality, globalization, energy/environment, and the overhang of consumer and government debt. A provocative “exercise in subtraction” suggests that future growth in consumption per capita for the bottom 99% of the income distribution could fall below 0.5% per year for an extended period of decades.
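Gordon's “exercise in subtraction” is simple arithmetic: start from the historical per-capita growth rate and subtract an estimated drag for each headwind. A minimal sketch (the per-headwind drags below are illustrative placeholders, not Gordon's published estimates):

```python
# Gordon-style "exercise in subtraction": start from the historical
# per-capita growth rate and subtract an estimated drag for each headwind.
# The drag values here are illustrative assumptions, not Gordon's numbers.
baseline = 1.9  # % per year: U.S. per-capita real GDP growth, 1860-2007

headwinds = {
    "demography": 0.3,
    "education": 0.2,
    "inequality": 0.5,   # gains accruing to the top 1% bypass the bottom 99%
    "globalization": 0.2,
    "energy/environment": 0.1,
    "debt overhang": 0.2,
}

projected = baseline - sum(headwinds.values())
print(f"Projected growth for the bottom 99%: {projected:.1f}% per year")
```

The point is not these particular numbers but the structure of the argument: several individually plausible drags sum to most of the historical rate.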

From the conclusion of the paper:

Volumes have been and will be written on all the issues identified here, and thus it is beyond the scope of this short article to make any serious attempt to provide solutions. Some of the headwinds contain a sense of inevitability. The most daunting is headwind #4, the interplay between globalization and modern technology, which accelerates the process of catching up of the emerging markets and the downward pressure on wages and real incomes in the advanced nations.

(c)  Posts about economic stagnation or even collapse

The man of tomorrow?
  1. Peak Oil Doomsters debunked, end of civilization called off , 8 May 2008
  2. From the 3rd century BC, Polybius warns us about demographic collapse, 11 June 2008
  3. Is global food production peaking?, 13 January 2010
  4. Recovering lost knowledge about exhaustion of the Earth’s resources (such as Peak Oil), 27 January 2011
  5. A look at forecasts for peak oil – and the end of civilization, 13 July 2012
  6. The Reference Page Demography – studies & reports

(3)  Accelerating growth leading to the singularity

“Everything that can be invented has been invented.”
— Charles H. Duell, Director of U.S. Patent Office, 1899 (a famous fake quote, as false as the idea it expresses)

The singularity is an amalgam of ideas, some contradictory, about a future of accelerating technological evolution leading to a new world beyond our understanding. We cannot understand what will come, much as one cannot see through a singularity in the physical universe.

Its origins go back to the mid-19th century (perhaps earlier). For a good introduction to modern thought about it, see “Three Major Singularity Schools”, Eliezer S. Yudkowsky, Singularity Institute blog, September 2007. Here’s the opening:

Singularity discussions seem to be splitting up into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion. The thing about these three logically distinct schools of Singularity thought is that, while all three core claims support each other, all three strong claims tend to contradict each other.

Accelerating Change

Core claim: Our intuitions about change are linear; we expect roughly as much change as has occurred in the past over our own lifetimes. But technological change feeds on itself, and therefore accelerates. Change today is faster than it was 500 years ago, which in turn is faster than it was 5000 years ago. Our recent past is not a reliable guide to how much change we should expect in the future.

Strong claim: Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.

Advocates: Ray Kurzweil, Alvin Toffler(?), John Smart
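The Accelerating Change strong claim can be stated numerically: if a capability grows on a smooth exponential with a fixed doubling time, the date it crosses any threshold follows from a logarithm. A sketch with made-up numbers (the start level, doubling time, and threshold are illustrative assumptions, not a forecast):

```python
import math

# Kurzweil-style extrapolation: if a capability doubles every T years,
# a smooth exponential lets you solve for the year a threshold is crossed.
# All numbers below are illustrative assumptions, not actual forecasts.

def crossing_year(start_year, start_level, doubling_time, threshold):
    """Year an exponential trend reaches `threshold`, given its level
    in `start_year` and a fixed doubling time (in years)."""
    doublings_needed = math.log2(threshold / start_level)
    return start_year + doublings_needed * doubling_time

# E.g., a capability at 1 unit in 2000, doubling every 2 years,
# asked when it reaches a millionfold improvement:
year = crossing_year(2000, 1.0, 2.0, 1_000_000)
print(round(year))  # ~2040 under these assumptions
```

This is why the school's predictions sound so precise: the math is trivial once you grant the smooth-exponential premise, which is exactly the premise critics dispute.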

Event Horizon

Core claim: For the last hundred thousand years, humans have been the smartest intelligences on the planet. All our social and technological progress was produced by human brains. Shortly, technology will advance to the point of improving on human intelligence (brain-computer interfaces, Artificial Intelligence). This will create a future that is weirder by far than most science fiction, a difference-in-kind that goes beyond amazing shiny gadgets.

Strong claim: To know what a superhuman intelligence would do, you would have to be at least that smart yourself. To know where Deep Blue would play in a chess game, you must play at Deep Blue’s level. Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.

Advocates: Vernor Vinge

Intelligence Explosion

Core claim: Intelligence has always been the source of technology. If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.

Strong claim: This positive feedback cycle goes FOOM, like a chain of nuclear fissions gone critical – each intelligence improvement triggering an average of >1.000 further improvements of similar magnitude – though not necessarily on a smooth exponential pathway. Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons. The ascent rapidly surges upward and creates superintelligence (minds orders of magnitude more powerful than human) before it hits physical limits.

Advocates: I. J. Good, Eliezer Yudkowsky
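The chain-reaction analogy in the strong claim can be made concrete with a toy model: if each improvement triggers on average r further improvements, the process fizzles when r < 1 and diverges when r > 1, just like neutron multiplication. A sketch (purely illustrative, not a claim about actual AI dynamics):

```python
# Toy model of the "intelligence explosion" claim: each improvement
# triggers, on average, r further improvements. Like a nuclear chain
# reaction, r > 1 diverges and r < 1 fizzles out. Purely illustrative.

def total_improvements(r, generations):
    """Expected cumulative improvements after `generations` rounds,
    starting from one seed improvement."""
    total, current = 0.0, 1.0
    for _ in range(generations):
        total += current
        current *= r   # each improvement spawns r more on average
    return total

print(total_improvements(0.9, 50))   # subcritical: converges toward 10
print(total_improvements(1.1, 50))   # supercritical: grows without bound
```

The whole debate between the schools reduces to whether r stays above 1 as intelligence improves, which no toy model can settle.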

Works about the singularity

  1. The Wikipedia entry about the singularity is excellent.
  2. Marooned in Realtime by Vernor Vinge (1986) — Fun sci-fi.
  3. Ray Kurzweil: his website; the Amazon page for his book (The Singularity Is Near: When Humans Transcend Biology).
  4. The Coming Technological Singularity:  How to Survive in the Post-Human Era, Vernor Vinge, 1993

Posts about the singularity

  1. Good news: The Singularity is coming (again), 8 December 2007
  2. The Singularity is in our past, 29 March 2009

(4)  For more information: the two theories about economic growth

(a)  This is a complex subject.  These two Wikipedia entries provide a good introduction to the theories underlying these two visions of the future.

  1. Endogenous growth models
  2. Exogenous growth models

(b)  Posts about the future:

  1. Fears of flying into the future, 25 February 2008
  2. Good news about the 21st century, a counterbalance to the doomsters, 9 May 2008
  3. Some thoughts about the economy of mid-21st century America, 12 January 2009
  4. A look at our history – from the 23rd century, 13 April 2009
  5. A look back at our time from the 2100 A.D. edition of the Encyclopedia Britannica, 24 June 2010

(c)  See all posts about this topic at these Reference Pages:

  1. The 3rd Industrial Revolution has begun
  2. Forecasts – possible futures for America and the World

24 thoughts on “Has America grown old, and can no longer grow? Or are wonders like the singularity in our future?”

  1. We need to carefully separate “change” from “growth” in this discussion. As one author commented, there wasn’t much growth (relative to later decades) before 1750 but there was a lot of change before that time.

    I feel (but cannot prove) that we are heading toward a period of increasing change but cannot predict with any degree of certainty that we are heading toward a period of growth. WWII, for example, was a period of great global change but the overall economy shrank as cities and factories were destroyed and millions of people died.

    I’m not at all comfortable with the futurists’ predictions, in large part because they mostly seem to be Utopians, and Utopians have about the worst track record of accurate predictions in history.

    Wealth is not so much generated by intelligence but by the intelligent application of innovation within the confines of the current society’s power. It doesn’t matter, for example, if you’ve developed the perfect theory for fusion power if there isn’t enough societal infrastructure and demand to make it possible or practical to implement.

  2. Come on, American movies always have a happy ending. Quantum zero point thorium fluxgate capacitor cold fusion is just around the corner.

  3. Predictions of Malthusian catastrophes are generally accurate, given information available at the time of the prediction. Then the information changes and the apocalypse is delayed. These are the singularities that Fabius Maximus so often writes about.
    So, is it a coincidence that technological and social revolutions seem to always stave off extrapolations of doom, or is necessity the mother of invention?
    There have been many cases in history of peoples who were not able to innovate their way out of demographic problems, albeit none yet on a global scale, so I would be inclined to believe the former, or at least prepare for it as a probable outcome.

  4. The “singulatarians’” theories bother me; they appear to me to be cherry-picking their data points and seem to be biased toward over-valuing the importance of recent inventions over older ones. It seems to me as suspicious as a cargo cult, in other words.

    Here’s a model for thinking about the singularity: consider fire and the printing press. “Of course, we didn’t ‘invent’ fire!” they cry. Because otherwise, in terms of significance of technologies humans adopted, it would look like things have been going steadily downhill since the taming of fire, the knife, our treaties with dog and cat, the horse, the spear, the bow, rope, beer, agriculture, the wheel, etc — all down to the printing press, bacterial model of infection, virology, and the iPod. If one wished to compare the importance of the bacterial model of infection to pretty much anything humans have discovered or invented since, we pretty much haven’t accomplished a damn thing other than perfecting the wonder bra, velcro, and internet porn. If one wished to sensibly talk about inventions and make a claim that we were heading toward a singularity, I think they’d need to argue not simply that we are inventing all kinds of cool stuff (which we are) but that the stuff was having a greater and greater effect on humanity, collectively. Put differently, wheels have improved relatively little since they were invented compared to the overall importance of that first revolutionary (if I may make the obvious pun) version.

    It seems to me that the futurologists point to the recent greatest hits of technology because they are still in our short-term memory, and we see incremental changes in, say, computers as being more interesting because we remember them. But if you go to the Musée des Arts et Métiers in Paris (one of the best museums, in my opinion) you can see that many technologies we invented go through rapid evolutionary stages, then stabilize and taper off. There are brilliant exhibits that show how, for example, steam-engine safety systems underwent rapid improvement under extreme pressure until suddenly they were superseded at the point where they were just about perfected. Burt Rutan does an interesting TED talk about this, pointing out that aviation has, in its short lifespan, peaked in the 1960s, and now we’re down to incremental improvements in efficiency (and are having trouble exceeding the capabilities of the vintage SR-71).

    Another argument against singulatarians is that they tend to cherry-pick outside their fields of expertise. Kurzweil, for example, is a well-rounded computer geek, but he makes the horrible mistake of assuming that biology will fit on a progress cycle like computing. That’s the difference between a field we created and control and a field that created and controls us. Ask an evolutionary biologist about biological nano-machines and custom genomes and they’ll laugh outright – biology is still at the point where we’ve only just discovered that cancer is much, much nastier than we realized. There has been progress, but it’s not progress you can put on a scale like hard-drive capacity or processor density. You can see how little the singulatarians understand about biology when they make comments (such as I’ve read online) to the effect that “uploading” will be possible soon – you just model a brain and then simulate it in silicon; all we need to do is figure out the mechanical details of disassembling a brain to determine the total state of its connections. That’s computer geeks misunderstanding that brains are not digital computers; they rely on side-effects of chemical processes that are messy and unpredictable. The “total state” of a brain actually encompasses not only neuron/axon connections but chemical levels in the axons and neurotransmitter levels. As anyone who deals with depression can tell you: you’re not just your neuron/axon wiring; you’re your serotonin and norepinephrine levels, too. Singulatarian comments on biology appear to me to be hopelessly naive and optimistic.

    A better case can be made that collapses are inevitable, because we’ve observed traces of them happening before (unlike singularities). We can see that Malthusian collapses are possible, and we live on a planet on which 99.9% of the species that have ever lived are extinct. That does put some weight behind the collapsist position. Though Earth’s getting whanged by a piece of fast-moving space junk still remains a very credible death from above compared to a generalized Malthusian collapse.

    Empires, like the US and Rome, however, can face walls of a different sort. Rome’s growth strategy was to conquer nearby rich neighbors and loot their treasuries to fuel additional growth. The obvious inherent limit was hit when Rome ran out of wealthy targets and began expending money expanding into places that didn’t add to the coffers as much as they cost. The US’s strategy has been to open markets to US trade, in a sort of Roman-like trade/conquest scenario. That has an inherent limit, too, which is similar to the Romans’.

    1. “…aviation has, in its short lifespan, peaked in the 1960s and now we’re down to incremental improvements in efficiency”

      I think it’s entirely plausible that the same thing will happen with computer and AI development. That is, computers will of course become smaller and more efficient, and certainly clever programmers will write many sophisticated ‘smart’ solutions to various problems we encounter in our lives, but the machines will forever remain essentially dumb.

      This may come as a disappointment to the Singulatarians predicting a Skynet scenario.

    2. Todd’s point on AI is well-taken. We’re actually surrounded by AIs all over the place.

      We’ve got AIs that write newspaper articles (turns out people are very easy to fool about the difference between a real hack author and a software hack author – take that, Turing!); we’ve got AIs that reschedule our plane tickets when the airlines screw up (no comment); we have AIs that talk to us on the telephone, trade stocks, battle us in first-person shooters like Halo, and make witty conversation on our cell phones.

      We have AIs that freestyle jazz (so far, no AI rappers, but I’m sure one is around the corner…)

      We have AIs that beat us at chess and at stupid games of knowledge; we have AIs that do all kinds of things except be creative. The evolution of AIs has been amazingly recent – it was theorized to be possible in the 1940s and ’50s and was well on its way a mere 50 years later. But it has tapered off at creativity. Perhaps not because creativity is hard, but because of what creativity is: a re-mixing of existing ideas and a rapid Darwinian filtering of the results into what’s practical or makes sense in a given context and what doesn’t.

      The asymptote is not that we’ll eventually have AIs as smart as humans; it’s “so what? Humans aren’t that smart, really!” What we want are AIs as smart as gods, but the singulatarians don’t realize that the gods of humans aren’t that smart, really, either. It’s another case of wishing for a skyhook that’ll be able to think its way out of the hole we’ve dug for it.

      1. A follow-up to Marcos’ comment, about AI everywhere.

        I was a member of the Am Assn for AI for several years, and went to two conventions. It was a common saying that AI was the next step in software: once accomplished, it was just a feature, boring and so no longer AI.

        His second point is even more important. AI programs are not creative or even “intelligent” in some high sense. But most jobs require neither creativity nor abstract intelligence. This is the essence of the coming robot revolution — the next wave of automation destroying service jobs.

  5. Marcus Ranum reiterates an extremely important point here, one which FM simply overlooked. In a recent response to Ranum, FM recommended “the complete works of Asimov, Clarke and Heinlein” as an antidote to assertions that we had hit the limits of physics and biology in a wide range of disciplines, from space flight to aerodynamics to computer technology to audio and video reproduction to land travel.

    Asimov, Clarke and Heinlein got it spectacularly wrong. Let’s take some specific examples. Asimov predicted humanoid artificially intelligent robots: humanoid robots able to navigate an arbitrary environment (say, a messy hotel room) are still nowhere on the horizon. A robot capable of making a bed or picking up towels from a bathroom floor remains far beyond our capabilities. Most robots today are not humanoid, and there is no evidence they ever will be: optimal form factors for robots involve wheels. The single biggest problem with robotics remains the power source: a human can consume a half pound of inexpensive hamburger and get enough fuel to climb a cliff, whereas a robot like the U.S. Army’s “big dog” requires dozens of gallons of diesel fuel merely to walk across flat terrain. The second biggest problem with robotics involves computer vision, which has only been “solved” in very limited environments (i.e., freeways free of obstacles and isolated warehouses with no unpredictable objects in the robot’s path). In an open environment where the robot doesn’t know what it will encounter (a bird? a dog? a rock? a fallen tree? a tumbleweed?), today’s robots are still helpless. Artificial intelligence, as we all know, qualifies as one of the great failed projects of science, reminiscent of the quest to find the luminiferous ether, or the effort to distill phlogiston. Speech recognition hit a wall at 80% accuracy and has never been able to get beyond that range for general conversation. In every area that matters, Asimov was wrong.

    Arthur Clarke predicted moon bases, superintelligent computers, and manned exploration and colonization of other planets, as well as undersea cities — all spectacularly wrong predictions. Computers remain stupid. Humans are biologically unsuited to travel outside the earth’s magnetosphere: galactic cosmic rays will give us cancer and kill us, and osteoporosis makes our bones so fragile in zero gravity that humans can’t survive long space trips, much less the violent radiation of planets like Mars (which has no magnetosphere to shield astronauts from cosmic rays) or moons like Io or Europa (bathed in intense magnetic fields and blasted by violent synchrotron radiation from particles accelerated by Jupiter’s immense magnetic field).

    Heinlein’s predictions were so comically incorrect it seems almost uncharitable to review them: triphibian flying cars able to travel underwater and in the air at supersonic speeds as well as on the ground (absurd even in the 1950s when Heinlein made such ridiculous predictions), passenger travel by suborbital rocket (ditto), interplanetary colonization (Mars was known to be uninhabitable even when he claimed in 1954 that “within 10 years intelligent life will be found on Mars,” and thermocouple measurements of Venus had by the mid-1950s shown its surface temperature to be greater than the melting point of lead), “an armed society is a polite society” (review the recent spate of American mass shootings to see what an armed society looks like sociologically), and the alleged wonders of military control of society posited in books like STARSHIP TROOPERS (Heinlein would have done well to have reviewed the history of the Praetorian Guard in Rome before writing such twaddle).

    So instead of that trio of wonderfully entertaining but wholly unrealistic sci fi writers, I would recommend the following trio of writers on the technological future:

    John Horgan: The End Of Science: Facing The Limits Of Knowledge In The Twilight Of The Scientific Age, 1997. (Available on amazon.com)

    Hubert Dreyfus: What computers still can’t do: A critique of artificial reason, 1992

    Gunther Stent: The Coming of the Golden Age: A View of the End of Progress, 1969.

    It’s worth noting that the Chaco Canyon people, the Aztecs, the Easter Islanders, and many others proved unable to overcome the degradation of their ecosystems. These peoples eventually had to move to other environments in order to survive, and their former cities are now overgrown by jungle. With the advent of Peak Oil it’s hard to resist the recognition that regions like Southern California and the American southwest are going to go the way of the cliff cities of the Chaco Canyon peoples. These areas are simply going to empty out as the population migrates elsewhere when gasoline hits $10 per gallon.

    Computers hit a plateau around 2003. We got stuck at 3 GHz clock speeds and have not been able to progress beyond that due to thermal limitations (as integrated circuits get more dense, it becomes impossible to remove heat from the CPU fast enough to keep it from melting down at higher clock speeds, even with the biggest fan). Parallelization has not done anything to speed up computers because most computer applications are serially limited — i.e., you must perform operation 1 before operation 2, as in calculating the digits of pi, so performing many operations simultaneously doesn’t speed your computer up except in very specialized applications like triangularizing sparse matrices. (A classic example of a serially limited operation is rendering a web page: in order to format the page, the browser must first load all its elements, since it is impossible to properly lay out the page on the screen without knowing what all the elements are.)
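    The commenter’s point about serially limited applications is usually stated as Amdahl’s law: if a fraction s of a program must run serially, no number of cores can deliver a speedup beyond 1/s. A sketch (the serial fractions are illustrative, not measurements of real programs):

```python
# Amdahl's law: if a fraction s of a program is inherently serial,
# the maximum speedup from n cores is 1 / (s + (1 - s) / n).
# The serial fractions below are illustrative, not measurements.

def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.05, 0.5, 0.95):
    print(f"serial={s:.0%}: 8 cores -> {amdahl_speedup(s, 8):.1f}x, "
          f"limit -> {1 / s:.1f}x")
```

    Even a modest serial fraction caps the benefit of adding cores, which is the commenter’s argument in one formula.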

    Many technological advances loom, but they’re incremental rather than quantum leaps. Asteroid mining, nanotechnology, and so on may occur, but we won’t see the kind of radically new and unanticipated technologies that broke upon the world in the 1930s and 1940s, like the transistor and the laser, made possible by quantum theory. That’s Horgan’s point. Science will continue to progress, but incrementally: the large picture isn’t perfectly known, but the big outlines are generally understood. No one is going to prove Einstein or Darwin wrong and come up with a radical new theory that, for example, allows cheap low-power antigravity or zero-point energy, to name two favorite examples of currently popular pseudoscience.

    In this context, Marcus Ranum’s call for an end to wasteful overtechnologization, as for example with the Concorde, seems wise, while FM’s reference to the comically failed predictions of Heinlein and Asimov and Clarke seems remarkably shortsighted.

    1. “Asimov, Clarke and Heinlein got it spectacularly wrong.”

      Quite right. Their books are literature, not textbooks. Their value lies in their ability to inspire us, to help us see greater things than we can in our daily lives, to see alternative ways humanity can live.

    2. Los Angeles is now in the process of expanding its mass transit systems such as subway and light rail, partly in response to population pressures, and partly as a matter of forward-looking urban planning. This should help a lot in the event of a prolonged energy shortage, so although we will very likely see the end of the LA car lifestyle by the end of this century, I don’t think the area will become an Angkor Wat.
      Chaco Canyon allegedly experienced multiple prolonged resource shortages before the people migrated to better pastures. So if, in addition to an energy shortage, LA was to also lose its water supply (let’s say an earthquake severed the aqueduct), then the people might not have a choice but to leave.
      Only time will tell.

    3. I suggest that Asimov, Clarke and Heinlein, with their unabashed faith in technoscience, and writing in a period of economic growth, are to the last couple of generations what Jules Verne was in his time, and address a similar audience (one skewing toward adolescent males).

      In a more mature society, where growth has nearly stopped and its failings can no longer be concealed, authors come to the fore with a more somber view of what the future and technology can bring.

      Edwardian authors are a good example. I heartily recommend E. M. Forster’s “The Machine Stops” and H. G. Wells’ “The War in the Air”, for instance. No idea who their literary equivalents would be nowadays.

      1. Great comment! I hadn’t seen that perspective.

        “The Machine Stops” is a great story! By E. M. Forster, 1909.

        A modern equivalent is the dark sci-fi of the late 1960s and 1970s, like “I Have No Mouth, and I Must Scream” by Harlan Ellison, 1967.

        Interestingly, most of these works disappeared during the 1980s and 1990s (I don’t know if they’ve come back). Their authors were quite ticked off at the loss of audience for what they considered works of far higher literary quality than the classic-age sci-fi of Asimov, Heinlein, and Clarke – which continued to sell well.

        For that matter, much of the vibrant, optimistic pulp-era sci-fi continued to sell well, like E. E. Smith’s space-opera Lensman series and the 1970s Gor novels (whose publisher, I heard, learned to their astonishment that a third of their readers were women!).

  6. Another problem for the singulatarians: there doesn’t appear to be anywhere to go. General relativity has been shown to predict, over and over again, how the universe behaves. Its predictions are why computers work, GPS works, deep-space exploration vehicles work, the Hubble Space Telescope’s observations work, etc., etc., ad nauseam. Why does this matter?

    Because there are about 1 trillion lbs of living human meat on the planet, right now, and it’s simply not going to shift itself into orbit and go colonize space. And thanks to The Hubble we now know there’s noplace remotely near us (and by “near” I mean “insanely frickin’ far” but closer than “insanely unbelievably frickin’ far”) worth going anyway. I know FM is skeptical about anthropogenic global warming but if you consider our attempts to screw up the environment as “terraforming” I think it’s safe to say that humans have shown a remarkable inability to build habitable biospheres. So – we’re not going to Alpha Centauri – there’s nothing there. Nor are we going to inhabit the oort cloud, because we can’t inhabit our own damn planet successfully without jeopardizing ourselves and everything else we share a biosphere with.

    The more we understand about physics and physical reality the more convincing it is that there’s no practical path to space folding, teleportation, or other deus ex machina methods of warp drive transportation a la Star Trek.

    Let me repeat that: not only is there no warp drive, there isn’t even a theoretical avenue that leads to a warp drive. The closest theoreticians have come is that if we throw ourselves into a black hole, we’ll have a sort of limited time-travel forward: the universe will appear to stop around us while we enjoy the experience of being ripped apart by tidal forces, and, um… That’s that. Not that there’s a black hole near us that we could get to in any time that wouldn’t dwarf the lifespan of human civilization.

    And that’s the problem: if we assume things like a perfect matter/antimatter energy-conversion space drive, a tank of antimatter 1 km long and 1/2 km wide, and a 5,000 lb payload – we still can’t get anywhere worth going. We could make it to Barnard’s Star in a mere 5,000 years, assuming humans could actually build a machine that would work successfully for 5,000 years (hint: it won’t be running Microsoft Windows). So the singularitarians and futurists are basically playing a role in this comedy of the cargo cultists waiting for John Frum to come give them the secret of how the skyhook works. And it ain’t gonna happen. The vast mass of human meat is evolved to live on this planet, and this planet only, and it’s going to die on this planet, and this planet only.
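The arithmetic behind that travel-time estimate is easy to check. A minimal sketch (my numbers, not the commenter’s: Barnard’s Star at roughly 5.96 light-years, a ship coasting at constant speed, acceleration phases and relativistic effects ignored):

```python
# Back-of-envelope interstellar travel times. Assumptions are illustrative:
# Barnard's Star at ~5.96 light-years, constant cruise speed, no
# acceleration/deceleration phases, relativistic effects ignored
# (negligible at the speeds implied below).

BARNARD_LY = 5.96  # distance to Barnard's Star, light-years (approx.)

def travel_time_years(distance_ly: float, speed_fraction_of_c: float) -> float:
    """Coasting travel time in years at a given fraction of lightspeed."""
    return distance_ly / speed_fraction_of_c

# A 5,000-year trip implies an average speed of only ~0.12% of c:
print(f"implied speed: {BARNARD_LY / 5000:.4%} of c")

# Even a drive sustaining 10% of c -- wildly beyond anything buildable --
# still means a voyage spanning dozens of human generations:
print(f"at 0.10 c: {travel_time_years(BARNARD_LY, 0.10):.0f} years")
```

Either way the comment’s conclusion holds: the machine has to run flawlessly for centuries to millennia.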

    The singularitarians are praying for the “geek rapture,” in which a few deserving ubergeeks are whisked off the planet to, uh, somewhere worth going, to live happily ever after. Never mind the fact that human nature would more likely result in the rest of us holding them down and cutting their throats rather than letting them escape and leave us to die here; the big questions of 1) where to go, 2) how to get there, and 3) how to survive getting there remain unanswered, except that everything we know about physics and the universe around us has answered repeatedly: 1) no place, 2) can’t, 3) can’t.

    Fermi’s famous paradox: “if there is so much universe and so many stars, and some aliens doubtless evolved in the billions of years of vastness – then where are they?” is actually easily answerable: they’re trapped in Einsteinian space just like we are, and they’re so far from us that neither of us will ever hear the other’s final dying screams. It’s that simple. It’s that obvious. I would further theorize that any intelligences that arise elsewhere, if they survive the discovery of nuclear fission, probably build an equivalent of the Hubble Space Telescope, loft it, and slowly – ever so slowly – the great “oh, shit” sinks in. There is only one habitable planet (for all intents and purposes), and it’s the one we’re on, and yes, we’ve screwed it up a bit, but it’s not too late. If the aliens are intelligent enough, or we are, they recognize that they’re trapped like flies in amber and the only sensible thing is “il faut cultiver notre jardin,” as Voltaire wrote: “we must cultivate our garden.”

    Elsewhere I’ve argued that the rational and sustainable thing to do would be to establish the means of making the human experience maintainable for another billion or so years, while we evolve, the planet changes, and the sun dies along with us. I would say that teaching our young the wisdom of Epicurus is more important than ever; and we should have a whole lot fewer young to teach.

    None of that’s going to happen, though. With a population of 7 billion, our collective response is to wonder if we can feed 12 billion. Which brings me to the alternative answer to Fermi’s paradox: all the aliens are as stupid as we are, and follow the same growth-to-collapse path we are on. I don’t think humans will wipe ourselves out (we’re not that capable, really), but what we’ll find eventually is that a sustainable civilization will be forced on us, not elected.

    1. PS – it scares me to realize that I have, literally, read every book published by Clarke, Heinlein, and Asimov. I grew up reading Asimov’s children’s science books and even choked down Heinlein’s jingoistic flirtations with racial nationalism such as “Sixth Column” and “Farnham’s Freehold.” They’re great entertainment, but ultimately Einstein was more right, where it mattered, about things that are much less fun.

      1. Looking back at all the sci-fi I’ve read, the horrific realization is not about star travel (too distant in the future) — but that manned space travel is useless today. And probably for a long time to come, until the economics radically change (probably due to the development of more powerful and cheap energy sources).

        Apollo was a bust in terms of cost-benefit. Ditto the Space Shuttle and the Space Station. Neither could find uses to even fill the space. How many high-school science projects did the Shuttle take into orbit, at mind-blowing expense? That is the key prediction that Clarke, Asimov, and Heinlein got wrong. Even a manned presence in near-earth space isn’t economically worthwhile. Ditto cities on the moon. And a million-fold for interplanetary colonization and commerce.

    2. Two more comments about Marcus’ fascinating comment.

      (1). “I know FM is skeptical about anthropogenic global warming but if you consider our attempts to screw up the environment as “terraforming” ”

      I am not skeptical about our pollution and land use changes. These are very serious problems, deserving immediate attention. Diverting so much attention to uncertain risks with far longer time horizons — far more costly to mitigate — is IMO daft.

      There are problems out there wreaking havoc right now. Collapse of world fisheries, aerosols (soot might be the major driver of Arctic ice loss), loss of tropical forests, accelerating species loss — and many more!

      (2). World population

      I believe demographers are coming to see ten billion as the likely max population, around 2050. And I wonder if that’s possible, as it assumes large growth in areas that might not be able to support the current population.

    3. The comments on this particular Fabius Maximus post remind me of a science fiction book I read when I was younger, The Mote in God’s Eye.

      In this story, humanity has discovered faster-than-light space travel and has used it to colonize the galaxy. The plot centers around the discovery of an advanced alien civilization who *spoiler alert* had never themselves discovered faster-than-light. The alien civilization was therefore confined to one solar system and so their civilization was doomed to endless cycles of overpopulation and collapse.
      The book was no doubt inspired by the dire predictions of the Club of Rome, which were released a couple years prior to it.

      I think the theme of that story speaks directly to the points of Marcus, Thomas, and Fabius here, about how continued economic and demographic expansion depends largely on whether our potential for scientific and technological innovation is accelerating or decelerating.

      1. Another great sci-fi novel!

        This was published in 1974, when fears of ever-increasing population were in vogue. As Todd notes, that’s also the background for the mostly-misrepresented Club of Rome report (which didn’t say many of the things it’s famous for saying).

        Now we see fertility dropping almost everywhere, and global population probably peaking at 10 billion in the middle of the century, and beginning a long fall we cannot see the end of.

        Another interesting aspect of that book is its setting in Pournelle’s future history of the rise and fall of empires. “Eye” is set in the early days of the second human stellar empire. This assumes no improvement in human cognition, in either individuals or groups, from biological or machine mechanisms. Therefore we go to the stars, but keep making the same stupid mistakes over and over.

        That’s the core assumption of much classical sci-fi. But the great works go a different route. “Childhood’s End” by Clarke, Asimov’s robot stories and Foundation stories. EE Smith’s Lensman stories. All show our current lives, humanity, as a stage in something greater.

        These are the stories that justify sci-fi as literature and as worthwhile, IMO.

    4. I loved The Mote in God’s Eye. That was back at the height of the Pournelle/Niven era. Good stuff.

      I have an alternate form of the story that I like. And it goes like this: alien von Neumann probes (governed by heavy-duty, supersmart AIs) begin to encounter human space. As they get closer, they are treated to an increasing mass of human media to decode and contemplate. It starts with Welles’ “War of the Worlds” and then “The Day the Earth Stood Still,” “Independence Day,” “Alien,” “Predator,” “Battlestar Galactica,” “Star Wars,” “Mars Attacks,” etc., etc. The AIs conclude that humans are unmanageably violent and that our every fantasy reaction to encountering alien life-forms is universally xenophobic. So they aim a chunk of rock at near-C speeds at Earth. The end. Alternate form #2: they actually mistake all those movies for documentaries and conclude that we humans are the greatest xenophobes ever, and a menace to every form of life. With genuine fear they aim a chunk of rock at near-C speeds at Earth. The end.

      Joking aside, has there ever been a “man meets aliens” movie that doesn’t end in war? What message would that send, other than that it’s a good idea to sterilize Earth ASAP, either because we’re all horrible xenophobes, or xenocidal nutcases or because we’re dangerously delusional and don’t play well with others.

      I met a traveller from an antique land
      Who said: Two vast and trunkless legs of stone
      Stand in the desart. Near them, on the sand,
      Half sunk, a shattered visage lies, whose frown,
      And wrinkled lip, and sneer of cold command,
      Tell that its sculptor well those passions read
      Which yet survive, stamped on these lifeless things,
      The hand that mocked them and the heart that fed:
      And on the pedestal these words appear:
      “My name is Ozymandias, king of kings:
      Look on my works, ye Mighty, and despair!”
      Nothing beside remains. Round the decay
      Of that colossal wreck, boundless and bare
      The lone and level sands stretch far away.

      1. Man-meets-aliens movie that doesn’t end in war: First Contact (Star Trek, 1996).

        My guess is that our sci-fi movies are so xenophobic because they’re made by Americans and Brits. Esp the former. We’ve become the 21st-century version of Prussia: we esteem force and the military, and are tightly regimented, belligerent, xenophobic, arrogant.

        What would sci-fi movies made by China, Sweden, or the Swiss look like?

        In any case, we are so technologically primitive it seems unlikely a star-traveling species would worry about us. The USS Ford (CVN-78) is our newest supercarrier, scheduled for launch in 2015. If on its maiden cruise it made first contact with an undiscovered Polynesian island, would the Captain even imagine their people as a threat? “Ensign, radio Washington for permission to nuke this island. They waved their stone axes in a threatening manner as we entered the atoll.”

  7. With respect to FM, the “great” works of sci fi depend on pseudoscience or outright mystical twaddle which has now been definitively debunked.

    ““Childhood’s End” by Clarke, Asimov’s robot stories and Foundation stories. EE Smith’s Lensman stories. All show our current lives, humanity, as a stage in something greater.”

    Childhood’s End depends on ESP, a fantasy utterly debunked now that J. B. Rhine’s experiments have been thoroughly deflated. Asimov’s robot stories depend on hard AI, a delusion which has been exploded as thoroughly as the Philosopher’s Stone. For evidence, take a look at the book Descartes’ Error by the neuroscientist Antonio Damasio, or study any of the peer-reviewed scientific journal articles which provide evidence for the theory of embodied cognition. (It turns out that emotions arise from the body, and emotions prove crucial for human reason: we are incapable of problem-solving without emotions, and since robots and computers have no bodies and thus no emotions, it’s no wonder they can’t solve even the simplest problems that human children easily overcome.)

    The Foundation stories depend on a mythical science called “psychohistory,” which purports to mathematically predict the behaviour of large numbers of people. We actually had several economists claim to accomplish something like this — the Black-Merton-Scholes options pricing model. Trading on models of this kind, a hedge fund called Long-Term Capital Management collapsed in 1998 and nearly crashed the entire world economy. More recently, the global financial meltdown occurred as a result of the failure of mathematical models which purported to predict the behavior of large numbers of investors in world markets, and accordingly claimed to offer insurance (“hedging positions”) against such market swings. The lesson appears to be that human behavior is not mathematically predictable, and all such mathematical models blow up as soon as crises occur and humans begin behaving irrationally. For more about this bizarre episode in economic history, see the book When Genius Failed by Roger Lowenstein.
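For readers curious what such a model looks like, here is the textbook-form Black-Scholes call-price formula as a minimal sketch (the inputs at the bottom are made-up example values). The comment’s point stands either way: the formula is only as good as its assumptions of lognormally distributed prices and constant volatility, which are exactly what fail in a crisis.

```python
# Minimal Black-Scholes European call price -- the kind of model
# discussed above. Illustrative only; example inputs are made up.
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(spot, strike, t_years, rate, vol):
    """Call price under the model's usual assumptions
    (lognormal prices, constant volatility -- what breaks in a crisis)."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * t_years) / (
        vol * math.sqrt(t_years))
    d2 = d1 - vol * math.sqrt(t_years)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t_years) * norm_cdf(d2)

# At-the-money 1-year call: spot 100, strike 100, 5% rate, 20% volatility
print(f"{black_scholes_call(100, 100, 1.0, 0.05, 0.20):.2f}")
```

A dozen lines of clean math; the trouble starts when the volatility input, assumed constant, jumps during a panic.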

    E. E. Smith’s Lensman stories depend on a “Bergenholm drive” which negates inertia and allows ships to travel many times faster than the speed of light, a form of specious pseudoscience which not only grossly violates Einstein’s special theory of relativity (E=mc², so negating inertia would destroy the mass-energy of any object, which the first law of thermodynamics tells us is impossible), but also grossly contradicts the Standard Model of particle physics, according to which all objects acquire their mass from the Higgs field. If you dampen the Higgs field, logically, an object would lose its mass, but that mass would have to be converted instantly to the equivalent amount of energy, meaning you’d get a very large explosion. Perhaps we shouldn’t be so harsh on E. E. Smith, since he was writing in the 1930s, an era when much less was known about physics than today. Regardless, all three of these authors based their visions on pseudoscience which today has been completely debunked. It’s easy enough to have rosy visions about the future if you ignore reality. Alas, the Republican party has been traveling that route for quite some time, and as we saw in the 2003 Iraq invasion, ignoring reality is not a good policy option.

    But to grapple directly with Marcus Ranum’s point: sustainability is the key, and the evidence seems clear that we are already way beyond the carrying capacity of this planet’s ecosystem with 7 billion humans. A much lower human population seems required if we want to avoid, for example, fishing all ocean life out of the seas within 50 years, as scientists now predict, or desertifying much of the equatorial rain forests.

    A key problem with sustainability involves Jevons paradox, which basically says that making technology more efficient doesn’t make it sustainable: if some technological resource like energy, transportation, or data storage becomes more efficient, people simply use more of it.
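A toy model makes the paradox concrete. This sketch assumes (purely for illustration) constant-elasticity demand for the service a resource provides; with elasticity above 1, doubling efficiency raises total resource use:

```python
# Jevons paradox, toy version. Assumptions (illustrative only):
# demand for a service falls with its effective price as Q = p^(-elasticity),
# the effective price is fuel_price / efficiency, and total fuel
# consumed is Q / efficiency.

def fuel_use(efficiency: float, fuel_price: float = 1.0,
             elasticity: float = 1.5) -> float:
    service_price = fuel_price / efficiency
    service_demand = service_price ** (-elasticity)
    return service_demand / efficiency

ratio = fuel_use(efficiency=2.0) / fuel_use(efficiency=1.0)
print(f"fuel use after doubling efficiency: {ratio:.3f}x")
# With elasticity > 1 the ratio exceeds 1: the efficiency gain *increases*
# total consumption. With elasticity < 1 it would shrink instead.
```

Whether real-world demand is that elastic varies case by case, which is why the rebound effect is debated rather than settled.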

    Another big problem with digital technology is Wirth’s Law, which says that software gets slower faster than hardware gets faster.

    It’s all very well to have exuberant optimistic dreams, until you find yourself faced with observed reality. As we saw in the 2003 Iraq invasion, optimistic fantasies and exuberant positive thinking did not represent a good policy option.

    Rather than saying that “America has grown old, and can no longer grow” it seems more accurate to say that America (and the world in general) has grown up and no longer believes in fantasies like leprechauns that can spin straw into gold, or inexhaustible magical furnaces, or faery cities where time stops and everyone stays young forever. The dreams of the Star Trek era depend on visions like space exploration which simply fly in the face of known physics and biology.

    Marcus remarks that this is the only habitable planet in the solar system. For the moment, yes: I would recommend Kim Stanley Robinson’s Mars trilogy about terraforming the Red Planet. This is based on hard science and is doable with current technology, albeit over very long time-scales. It’s quite possible to convert Mars into a fully human-habitable planet as far as we know, so there’s that to consider.

    Also, there loom many other technologies which seem within the realm of possibility and which tantalize with the promise of undreamed-of advances. Biological engineering is still in its infancy. We may well see the day when living houses are grown from seeds, when bioengineered plants generate clean hydrocarbon fuels, and when technologies like rigid-body vacuum dirigibles supplant the current generation of wasteful jets. Mind uploading may become possible; recent advances include MIT’s growth of a mixed matrix of neural cells and nanocircuits, something that seemed like wild science fiction just a few years ago. If we get practical mind uploading, we could theoretically upload a mind into a silicon chip and shoot it off at near lightspeed, then grow a body at the destination in another solar system and download the mind into it there. That might make interstellar travel possible, especially if you changed the subjective rate at which time passes for the uploaded consciousness in the silicon chip.

    Many current sci fi writers have dealt with these possibilities, including Charles Stross and Cory Doctorow and William Gibson. So there’s still quite a bit of hope for technologically wondrous futures, albeit different from the futures of the “golden age” of sci fi.

    1. “With respect to FM, the “great” works of sci fi depend on pseudoscience or outright mystical twaddle which has now been definitively debunked.”

      Yes. Like the Bible, much philosophy (e.g., Plato, Aristotle), etc. Life is more than science. The works I mentioned discuss large questions of who we are and where we’re going, using science as a metaphor.

      They often have transhuman gods, such as Clarke’s Overlords, Smith’s Arisians, Asimov’s robots. Sometimes they show different concepts of society and the meaning of life.

      Saying that tech in science fiction is fiction is like asking for the map of Odysseus’ travels before reading Homer.

  8. I would respectfully have to dispute Marcus Ranum’s claim that “we have AIs” today that do things like reschedule airplanes. This isn’t artificial intelligence, or anything even remotely akin to genuine human-level intelligence in a machine. This is the standard and tiresome ploy of the computer scientists who, faced with the total failure of the hard AI research program, have defined AI down to be something entirely different than it was originally described to be.

    A program that reschedules airline flights is not intelligent; it’s a queue-optimizing program. Nothing intelligent about that. Graph theory and queueing theory from undergrad computer-science courses deal with these algorithms. For large queues or graphs, you run into a combinatorial explosion (the traveling-salesman problem) that gets dealt with by heuristics — AKA “rules of thumb.” Once again, not intelligent, just hints inserted into the code.
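To make the “rules of thumb” point concrete, here is a minimal nearest-neighbour heuristic for the travelling-salesman problem (a sketch; the city coordinates are made up). Instead of searching all n! tours, it greedily visits the closest unvisited city: fast, no guarantee of optimality, and nothing anyone would call understanding.

```python
# Nearest-neighbour TSP heuristic: an O(n^2) greedy rule of thumb
# that sidesteps the combinatorial explosion of exact search.
import math

def nearest_neighbour_tour(cities):
    """Greedy tour starting from cities[0]; returns the visit order
    as a list of indices into `cities` (each a 2-D coordinate pair)."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        # The "hint inserted into the code": always hop to the closest city.
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (5, 5), (1, 0), (6, 5), (2, 1)]
print(nearest_neighbour_tour(cities))  # visits nearby cities first
```

Production schedulers layer many such hints plus domain constraints, but the principle is the same: encoded shortcuts, not machine insight.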

    If you want an example of what true AI was supposed to deliver, consider AI pioneer Herbert Simon’s 1957 prediction that within ten years a digital computer would discover and prove an important new mathematical theorem. A computer able to generate creative and imaginative new math? I’m still waiting.

    Or consider the claims of 1950s machine-translation researchers that within two decades automated translation of literature from one language to another would be a solved problem. We’re still waiting. While you wait, try Yahoo’s Babelfish or Google Translate. Good for lots of laughs. I entered the title of Samuel R. Delany’s sci-fi novel THE EINSTEIN INTERSECTION and Google Translate gave me the German title “THE EINSTEIN STREETCORNER.”

    When I can tell a computer a joke and it can explain why the joke is funny, we’ll have genuine AI. Until then, don’t hold your breath.

    Speaking of jokes, here’s an old one about AI:

    The U.S. military finally builds an intelligent computer and puts it in command of North American Air Defense. The computer flashes an alert and shouts “MISSILES HAVE LAUNCHED AT AMERICA FROM THE NORTH!”

    And the general in charge asks the computer, “The north? The north WHAT?”

    And the artificially intelligent computer responds, “The north SIR!”
