Arthur C. Clarke: use your imagination to see the future

Summary: Arthur C. Clarke warns about the “Hazards of Prophecy”. In part 2 he warns about confident forecasts declaring an innovation to be impossible, and gives some stunning examples of scientists’ failure to imagine the near future. It’s an important lesson to remember as a new industrial revolution begins.


“Hazards of Prophecy: The Failure of Imagination”

By Arthur C. Clarke, inventor and writer of science and science fiction.
From Profiles of the Future: An Inquiry into the Limits of the Possible (1962).

In the last chapter I suggested that many of the negative statements about scientific possibilities, and the gross failures of past prophets to predict what lay immediately ahead of them, could be described as failures of nerve.

All the basic facts of aeronautics were available — in the writings of Cayley, Stringfellow, Chanute, and others — when Simon Newcomb “proved” that flight was impossible. He simply lacked the courage to face those facts.

All the fundamental equations and principles of space travel had been worked out by Tsiolkovsky, Goddard, and Oberth for years — often decades — when distinguished scientists were making fun of would-be astronauts. Here again, the failure to appreciate the facts was not so much intellectual as moral. The critics did not have the courage that their scientific convictions should have given them; they could not believe the truth even when it had been spelled out before their eyes, in their own language of mathematics.

We all know this type of cowardice, because at some time or other we all exhibit it.

The second kind of prophetic failure is less blameworthy, and more interesting. It arises when all the available facts are appreciated and marshaled correctly — but when the really vital facts are still undiscovered, and the possibility of their existence is not admitted.

Profiles of the Future: An Inquiry into the Limits of the Possible
Available at Amazon.

Visions of astronomy.

A famous example of this is provided by the philosopher Auguste Comte, who in his Cours de Philosophie Positive (1835) attempted to define the limits within which scientific knowledge must lie. In his chapter on astronomy (Book 2, Chapter 1) he wrote these words concerning the heavenly bodies:

“We see how we may determine their forms, their distances, their bulk, their motions, but we can never know anything of their chemical or mineralogical structure; and much less, that of organised beings living on their surface. …

“We must keep carefully apart the idea of the solar system and that of the universe, and be always assured that our only true interest is in the former. Within this boundary alone is astronomy the supreme and positive science that we have determined it to be …the stars serve us scientifically only as providing positions with which we may compare the interior movements of our system.”

In other words, Comte decided that the stars could never be more than celestial reference points, of no intrinsic concern to the astronomer. Only in the case of the planets could we hope for any definite knowledge, and even that knowledge would be limited to geometry and dynamics. Comte would probably have decided that such a science as “astrophysics” was a priori impossible.

Yet within half a century of his death, almost the whole of astronomy was astrophysics, and very few professional astronomers had much interest in the planets. Comte’s assertion had been utterly refuted by the invention of the spectroscope, which not only revealed the “chemical structure” of the heavenly bodies but has now told us far more about the distant stars than we know of our planetary neighbors.

Comte cannot be blamed for not imagining the spectroscope; no one could have imagined it, or the still more sophisticated instruments that have now joined it in the astronomer’s armory. But he provides a warning that should always be borne in mind; even things that are undoubtedly impossible with existing or foreseeable techniques may prove to be easy as a result of new scientific breakthroughs. From their very nature, these breakthroughs can never be anticipated; but they have enabled us to bypass so many insuperable obstacles in the past that no picture of the future can hope to be valid if it ignores them.

Atomic Power

Visions of nuclear power.

Another celebrated failure of imagination was that persisted in by Lord Rutherford, who more than any other man laid bare the internal structure of the atom. Rutherford frequently made fun of those sensation mongers who predicted that we would one day be able to harness the energy locked up in matter. Yet only five years after his death in 1937, the first chain reaction was started in Chicago.

What Rutherford, for all his wonderful insight, had failed to take into account was that a nuclear reaction might be discovered that would release more energy than that required to start it. To liberate the energy of matter, what was wanted was a nuclear “fire” analogous to chemical combustion, and the fission of uranium provided this. Once that was discovered, the harnessing of atomic energy was inevitable, though without the pressures of war it might well have taken the better part of a century.
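The self-amplifying character of the nuclear “fire” Clarke describes can be sketched numerically. This is an illustrative toy model (not a physical simulation, and not from Clarke's text): a chain reaction is self-sustaining when each fission triggers, on average, more than one new fission, a quantity conventionally called the effective multiplication factor k.

```python
def neutron_population(n0: float, k: float, generations: int) -> float:
    """Neutron count after a number of fission generations.

    Each generation multiplies the population by k, the average number
    of new fissions triggered per fission.
    """
    return n0 * k ** generations


# Subcritical (k < 1): the reaction dies away on its own.
print(neutron_population(1, 0.9, 50))

# Supercritical (k > 1): exponential growth -- the possibility Rutherford
# failed to take into account, a reaction releasing far more energy
# than was required to start it.
print(neutron_population(1, 1.5, 50))
```

The entire difference between Rutherford's skepticism and the 1942 Chicago pile lies in whether k can be pushed above 1; uranium fission, emitting two to three neutrons per split nucleus, provided exactly that.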

Jules Verne's Nautilus

Visions of the imagination.

The example of Lord Rutherford demonstrates that it is not the man who knows most about a subject, and is the acknowledged master of his field, who can give the most reliable pointers to its future. Too great a burden of knowledge can clog the wheels of imagination; I have tried to embody this fact of observation in Clarke’s Law, which may be formulated as follows:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” {AKA Clarke’s First Law.}

Perhaps the adjective “elderly” requires definition. In physics, mathematics, and astronautics it means over thirty; in the other disciplines, senile decay is sometimes postponed to the forties. There are, of course, glorious exceptions; but as every researcher just out of college knows, scientists of over fifty are good for nothing but board meetings, and should at all costs be kept out of the laboratory!

Too much imagination is much rarer than too little; when it occurs, it usually involves its unfortunate possessor in frustration and failure – unless he is sensible enough merely to write about his ideas, and not to attempt their realization. In the first category we find all the science-fiction authors, historians of the future, creators of utopias — and the two Bacons, Roger and Francis.

Friar Roger Bacon (c. 1214-1292) imagined optical instruments and mechanically propelled boats and flying machines — devices far beyond the existing or even foreseeable technology of his time. It is hard to believe that these words were written in the thirteenth century:

“Instruments may be made by which the largest ships, with only one man guiding them, will be carried with greater velocity than if they were full of sailors. Chariots may be constructed that will move with incredible rapidity without the help of animals. Instruments of flying may be formed in which a man, sitting at his ease and meditating in any subject, may beat the air with his artificial wings after the manner of birds. As also machines which will enable men to walk at the bottom of the seas. …”

This passage is a triumph of imagination over hard fact. Everything in it has come true, yet at the time it was written it was more an act of faith than of logic. It is probable that all long-range prediction, if it is to be accurate, must be of this nature. The real future is not logically foreseeable.

The lost opportunity to start the computer revolution in the mid-19th century.

Babbage’s Difference Engine (his first machine). It works.

Babbage's Difference Engine

A splendid example of a man whose imagination ran ahead of his age was the English mathematician Charles Babbage (1792-1871). As long ago as 1819, Babbage had worked out the principles underlying automatic computing machines. He realized that all mathematical calculations could be broken down into a series of step-by-step operations that could, in theory, be carried out by a machine. With the aid of a government grant which eventually totaled £17,000 — a very substantial sum of money in the 1820’s — he started to build his “analytical engine.”

Though he devoted the rest of his life, and much of his private fortune, to the project, Babbage was unable to complete the machine. What defeated him was the fact that precision engineering of the standard he needed to build his cogs and gears simply did not exist at the time. By his efforts he helped to create the machine-tool industry — so that in the long run the government got back very much more than its 17,000 pounds — and today it would be a perfectly straightforward matter to complete Babbage’s computer, which now stands as one of the most fascinating exhibits in the London Science Museum.
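The principle Babbage mechanized can be shown in a few lines. His Difference Engine exploited the fact that any polynomial has constant n-th finite differences, so an entire table of values can be produced using nothing but repeated addition — the one operation cogs and gears perform well. A minimal sketch of that method (the function name and coefficient convention are my own, for illustration):

```python
def tabulate(poly, count):
    """Tabulate a polynomial at x = 0, 1, ..., count-1 using only addition.

    poly gives coefficients lowest-order first: [a0, a1, a2, ...].
    """
    degree = len(poly) - 1
    f = lambda x: sum(a * x**i for i, a in enumerate(poly))

    # Seed the difference columns from the first degree+1 values,
    # reducing them in place to f(0), delta f(0), delta^2 f(0), ...
    diffs = [f(x) for x in range(degree + 1)]
    for level in range(1, degree + 1):
        for i in range(degree, level - 1, -1):
            diffs[i] -= diffs[i - 1]

    table = []
    for _ in range(count):
        table.append(diffs[0])
        # Advance every column by one step -- pure addition, exactly
        # what a train of gear wheels can do mechanically.
        for i in range(degree):
            diffs[i] += diffs[i + 1]
    return table


# x^2 + 1 for x = 0..5, with no multiplication after the initial setup:
print(tabulate([1, 0, 1], 6))   # [1, 2, 5, 10, 17, 26]
```

Once the difference columns are loaded, each new table entry costs only a handful of additions, which is why the scheme was buildable (in principle) with 1820s machinery.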

In his own lifetime, however, Babbage was only able to demonstrate the operation of a relatively small portion of the complete machine. A dozen years after his death {1871}, his biographer wrote: “This extraordinary monument of theoretical genius accordingly remains, and doubtless will forever remain, a theoretical possibility.”

Editor’s note — The London Science Museum has built a working version of his calculator, the Difference Engine. The design of his Analytical Engine, a programmable mechanical computer, was never completed. Successful completion of either machine in Babbage’s lifetime might have changed history.

— On 15 September 1842 the Chancellor of the Exchequer asked the Astronomer Royal, Sir George Biddell Airy, to evaluate the potential of Babbage’s Analytical Engine.

If Airy had endorsed it, Parliament probably would have funded it. Airy’s one-word reply is high on the list of humanity’s largest missed opportunities. Who knows what Babbage might have accomplished?

There is not much left of that “doubtless” today. At this moment there are thousands of computers working on the principles that Babbage clearly outlined more than a century ago — but with a range and a speed of which he could never have dreamed. For what makes the case of Charles Babbage so interesting, and so pathetic, is that he was not one but two technological revolutions ahead of his time. Had the precision-tool industry existed in 1820, he could have built his “analytical engine” and it would have worked, much faster than a human computer, but very slowly by the standards of today. For it would have been geared — literally — to the speed with which cogs and shafts and cams and ratchets can operate.

Automatic calculating machines could not come into their own until electronics made possible speeds of operation thousands and millions of times swifter than could be achieved with purely mechanical devices. This level of technology was reached in the 1940’s, and Babbage was then promptly vindicated. His failure was not one of imagination: it lay in being born a hundred years too soon.

When everything changed in science.

One can only prepare for the unpredictable by trying to keep an open and unprejudiced mind — a feat which is extremely difficult to achieve, even with the best will in the world. Indeed, a completely open mind would be an empty one, and freedom from all prejudices and preconceptions is an unattainable ideal. Yet there is one form of mental exercise that can provide good basic training for would-be prophets: Anyone who wishes to cope with the future should travel back in imagination a single life-time — say to 1900 — and ask himself just how much of today’s technology would be, not merely incredible, but incomprehensible to the keenest scientific brains of that time.

1900 is a good round date to choose because it was just about then that all hell started to break loose in science. As James B. Conant has put it …

“Somewhere about 1900 science took a totally unexpected turn. There had previously been several revolutionary theories and more than one epoch-making discovery in the history of science, but what occurred between 1900 and, say, 1930 was something different; it was a failure of a general prediction about what might be confidently expected from experimentation.” {Science and Common Sense, 1951.}

P. W. Bridgman has put it even more strongly …

“The physicist has passed through an intellectual crisis forced by the discovery of experimental facts of a sort which he had not previously envisaged, and which he would not even have thought possible.”

The collapse of “classical” science actually began with Roentgen’s discovery of X-rays in 1895; here was the first clear indication, in a form that everyone could appreciate, that the commonsense picture of the universe was not sensible after all. X-rays — the very name reflects the bafflement of scientists and laymen alike — could travel through solid matter, like light through a sheet of glass. No one had ever imagined or predicted such a thing; that one would be able to peer into the interior of the human body — and thereby revolutionize medicine and surgery — was something that the most daring prophet had never suggested.

The discovery of X-rays was the first great breakthrough into the realms where no human mind had ever ventured before. Yet it gave scarcely a hint of still more astonishing developments to come — radioactivity, the internal structure of the atom, relativity, the quantum theory, the uncertainty principle. As a result of this, the inventions and technical devices of our modern world can be divided into two sharply defined classes.

On the one hand there are those machines whose working would have been fully understood by any of the great thinkers of the past; on the other, there are those that would be utterly baffling to the finest minds of antiquity. And not merely of antiquity; there are devices now coming into use that might well have driven Edison or Marconi insane had they tried to fathom their operation.

Let me give some examples to emphasize this point. If you showed a modern diesel engine, an automobile, a steam turbine, or a helicopter to Benjamin Franklin, Galileo, Leonardo da Vinci, and Archimedes — a list spanning two thousand years of time — not one of them would have any difficulty in understanding how these machines worked. Leonardo, in fact, would recognize several from his notebooks. All four men would be astonished at the materials and the workmanship, which would have seemed magical in its precision, but once they had got over that surprise they would feel quite at home — as long as they did not delve too deeply into the auxiliary control and electrical systems.

But now suppose that they were confronted by a television set, an electronic computer, a nuclear reactor, a radar installation. Quite apart from the complexity of these devices, the individual elements of which they are composed would be incomprehensible to any man born before this century. Whatever his degree of education or intelligence, he would not possess the mental framework that could accommodate electron beams, transistors, atomic fission, wave guides and cathode-ray tubes.

The difficulty, let me repeat, is not one of complexity; some of the simplest modern devices would be the most difficult to explain. A particularly good example is given by the atomic bomb (at least, the early models). What could be simpler than banging two lumps of metal together?

Yet how could one explain to Archimedes that the result could be more devastation than that produced by all the wars between the Trojans and the Greeks?

Suppose you went to any scientist up to the late nineteenth century and told him: “Here are two pieces of a substance called uranium 235. If you hold them apart, nothing will happen. But if you bring them together suddenly, you will liberate as much energy as you could obtain from burning ten thousand tons of coal.” No matter how farsighted and imaginative he might be, your pre-twentieth century scientist would have said: “What utter nonsense! That’s magic, not science. Such things can’t happen in the real world.”

Around 1890, when the foundations of physics and thermodynamics had (it seemed) been securely laid, he could have told you exactly why it was nonsense. “Energy cannot be created out of nowhere,” he might have said. “It has to come from chemical reactions, electrical batteries, coiled springs, compressed gas, spinning flywheels, or some other clearly defined source. All such sources are ruled out in this case — and even if they were not, the energy output you mention is absurd. Why, it is more than a million times that available from the most powerful chemical reaction!”
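The hypothetical physicist's “more than a million times” objection is easy to check with back-of-envelope arithmetic. The figures below are rough, standard approximations (not from Clarke's text): about 200 MeV released per U-235 fission versus a few eV per molecule in a chemical reaction.

```python
# Rough energy scales (approximate, for illustration only).
ENERGY_PER_FISSION_EV = 200e6   # ~200 MeV per U-235 fission
ENERGY_PER_CHEMICAL_EV = 4.0    # ~a few eV per molecule burned

# Per-reaction ratio: nuclear vs. chemical.
ratio = ENERGY_PER_FISSION_EV / ENERGY_PER_CHEMICAL_EV
print(f"Nuclear vs. chemical, per reaction: ~{ratio:.0e}x")

# Scaled to bulk matter: joules from fully fissioning 1 kg of U-235,
# expressed as tonnes of coal (coal taken as roughly 3e7 J/kg).
EV_TO_J = 1.602e-19
fissions_per_kg = (1000 / 235) * 6.022e23       # moles x Avogadro's number
joules_per_kg_u235 = fissions_per_kg * ENERGY_PER_FISSION_EV * EV_TO_J
coal_equivalent_tonnes = joules_per_kg_u235 / 3e7 / 1000
print(f"1 kg of U-235 ~ {coal_equivalent_tonnes:.0f} tonnes of coal")
```

The per-reaction ratio comes out in the tens of millions, and one kilogram of fissioned U-235 is worth a few thousand tonnes of coal — so Clarke's “ten thousand tons” for two lumps of metal is the right order of magnitude, and the 1890 physicist's objection was, if anything, an understatement.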

The fascinating thing about this particular example is that, even when the existence of atomic energy was fully appreciated — say right up to 1940 — almost all scientists would still have laughed at the idea of liberating it by bringing pieces of metal together. Those who believed that the energy of the nucleus ever could be released almost certainly pictured complicated electrical devices — “atom smashers” and so forth — doing the job. (In the long run, this will probably be the case; it seems that we will need such machines to fuse hydrogen nuclei on the industrial scale. But once again, who knows?)

The wholly unexpected discovery of uranium fission in 1939 made possible such absurdly simple (in principle, if not in practice) devices as the atomic bomb and the nuclear chain reactor. No scientist could ever have predicted them; if he had, all his colleagues would have laughed at him. …


{T}he only way of discovering the limits of the possible is to venture a little way past them into the impossible. {Clarke’s Second Law} …I have also formulated a Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” As three laws were good enough for Newton, I have modestly decided to stop there.


Books about Babbage and his Analytical Engine

Top recommendation: The Difference Engine by William Gibson and Bruce Sterling. “Charles Babbage built his Analytical Engine. It’s 1865, and the Industrial Revolution is in full swing — driven by steam-driven computers. Three extraordinary characters race toward a rendezvous with the future. Part detective story, part historical thriller, The Difference Engine took the science fiction community by storm …”

For an interesting look at the history of this missed opportunity, see The Cogwheel Brain by Doron Swade.

For More Information


If you found this post of use, like us on Facebook and follow us on Twitter. Also see these posts about forecasts, about doomsters, and especially these…

My favorite book by Clarke; one of the best science fiction novels

A Fall of Moondust
Available at Amazon.

One of my favorite science fiction books is Arthur C. Clarke’s A Fall of Moondust (1961). It is one of the best in the “life in space” genre of speculative fiction, looking at the rhythms of daily life for people living on the moon.

From the publisher…

“Time is running out for the passengers and crew of the tourist cruiser Selene, buried in a sea of choking lunar dust. On the surface, her rescuers find their resources stretched to the limit by the mercilessly unpredictable conditions of a totally alien environment. A Fall of Moondust is a brilliantly imagined story of human ingenuity and survival.”

14 thoughts on “Arthur C. Clarke: use your imagination to see the future”

    1. Steve,

      Thank you for the recommendations! I’ve added a section to the post mentioning those two books about Babbage.

      Re: Rendezvous with Rama

      I found the book somewhat interesting. But the powerful aspect of the book, putting it among the top science fiction classics, is his invention of Project Spaceguard. I’ve discussed this several times, most notably in Why humanity has not gone to space, and why we will.

  1. I read A Fall of Moondust when I was in high school. I always thought it should have been turned into a movie, and was a bit underrated. (Although my favorite Clarke novels were Childhood’s End and Rendezvous With Rama). I actually tend to prefer his short fiction, but that’s not to put down his novels.

    There’s theory, and then there’s understanding the applications of the theory. Even if you have your theory right, you can be way off about the limits of the possible because some tinkerer somewhere applied himself and came up with something you didn’t think of.

    One of the most interesting books I ever read was The Emperor’s New Mind by Roger Penrose. The man’s a theoretical physicist who worked with Stephen Hawking. In the book, he argues that we can’t build a true artificial intelligence yet because we don’t understand the theoretical basis of the physics of thought. He doesn’t claim an understanding of the physics of thought is impossible, just that we’re limited right now because we don’t have the understanding yet. The math is intimidating, and some of it is way beyond me. Still, it gave me a lot to think about and made me wonder what might become possible in the line of AI if we refine our understanding of theoretical physics a bit more.

    1. The Man,

      I agree about Clarke’s novels vs. short stories. I’ll go a step further — in general (with exceptions) science fiction is usually better in short stories or novellas. Few authors can extend the concept to a book. Many of the best books are just collections of short stories in a shared universe.

      Re: Penrose

      I thought The Emperor’s New Mind was fascinating. But a little Penrose goes a long way. I have Shadows of the Mind, but have never gotten into it.

      Have you read Gödel, Escher, Bach by Douglas Hofstadter? I highly recommend it!

  2. I haven’t read it but I’ll check it out. This place tends to be a good source of book recommendations.

    You may in fact be right about short story vs novel in science fiction.

  3. One area where I see a frequent failure of imagination, particularly in science fiction but also in current political dialogue, is the tendency toward a dystopian view of possible outcomes.

    In an era when most of humanity has been recently raised out of poverty, we live in a country far less polluted than that of my youth and global growth continues unabated, the prevailing mood in the western world is one of doom and gloom. Why is it so hard for most people to imagine a world of steadily increasing prosperity, driven by the rapid technological advances currently underway in biotech, systems automation, space flight, etc.?

    I’m more inclined to share the vision of folks like Peter Diamandis (Abundance, Free Press, 2012).

    1. John,

      As readers of the FM website know, I’ve often written about Americans’ current fascination with doomster scenarios. See my posts about doomsters.

      It’s evident almost everywhere — scientists, science fiction writers, journalists, and the public are almost obsessed with dark outcomes. As you note, there is little to justify this.

      Science fiction was dominated by depressing stories in the 1970s. See those by Harlan Ellison, who had a long successful career but caught the 1970s zeitgeist perfectly. Much of the dark 1970s work has dropped out of circulation, while more optimistic work by Heinlein, Asimov, and Clarke still sells.

      Perhaps this is just a passing phase for America.

    2. While I agree with you, I think there are grounds for some pessimism, though it is often taken to farcical extremes by people who are pursuing either clicks or – as best as I can tell – some kind of despair-driven emotional high.

      However I think a lot of it is the overall milieu in which people are born. The big defining events of the century thus far have been 9/11 and the big financial crisis of ’08, and the economic security for these generations is much less than it has been for the Baby Boomers. So it isn’t too shocking, I think, that many of them look at (say) self-driving cars and don’t think “gee whiz! neat!” or even “I wonder if I could take advantage of this trend,” but rather “great, so the taxi drivers and Uber guys will be out of work.”

      It would be interesting to compare this Western pessimism to other regions of the world, although some places (China comes to mind) might be difficult to get unbiased views from. What would probably help is spreading around the wealth more, but that doesn’t seem near-term likely, to put it mildly.

    3. Income distribution is the defining economic issue of the current era. If automation tools are to replace 30% of current human work activities in the next decade, preservation of the consumer-driven global economy requires

      1) invention of productive new job categories and training people to fill them,

      2) dramatically reduced work hours at increased hourly comp on a global basis, or

      3) increased taxation and redistribution of income.

      Without one or a combination of these alternatives, consumer demand will be dramatically curtailed, potentially leading to a global deflationary spiral. So far I have not seen a great deal of evidence that political leadership at the national or global level is addressing the issues. Clearly there is much that could be done in the public sphere to improve life for all, so there is no real shortage of potential jobs to be filled, but creation of these jobs requires spending a lot of dollars or renminbi or bitcoin.

  4. It is almost impossible to predict future breakthroughs. Science fiction writers often do better at this than scientists, because the scientists try to extend current technology, whereas the science fiction writers think of things we want and assume that we will eventually find a way of achieving them.

    An interesting failure by science fiction is Starman Jones, in which the writer imagined interstellar travel but failed to imagine advances in computing. So his starships needed skilled mathematicians using books of tables to navigate them. As a result that book is now both well ahead and well behind our current knowledge.

    Many supposedly correct predictions are often in error; for example, Friar Roger Bacon’s “Instruments of flying may be formed in which a man, sitting at his ease and meditating in any subject, may beat the air with his artificial wings after the manner of birds” shows the mistake made by many early pioneers of assuming they had to imitate birds. It took propellers and subsequently jets to make flying machines, neither of which is possessed by birds. So we do now have flying machines, but they do not “beat the air with his artificial wings after the manner of birds.” This is potentially relevant to other subjects; for example, Roger Penrose’s argument that “we can’t build a true artificial intelligence yet because we don’t understand the theoretical basis of the physics of thought” assumes that computers need to imitate the human physics of thought.

    1. Bill P.

      “It is almost impossible to predict future breakthroughs, ”

      Yes, that is what Clarke says here.

      “An interesting failure by science fiction”

      Starman Jones is a failure of the kind Clarke describes in part one. It was not a failure to predict breakthroughs (as described in this part), but a failure to appreciate the potential of existing technology. The first modern computers were built in 1948. The first commercially manufactured computers were sold in 1951 (e.g., UNIVAC I). The first mass-produced computer went on sale the year after Starman Jones was published: the IBM 650.

    2. We may not be beating the air with artificial wings, but we’re about to do some amazing things with them.

      “New Drone Technology Manipulates Air for Enhanced Flight” at Machine Design, 19 Dec 2017 — “A research project led by BAE Systems and the University of Manchester looks to create future aircraft that can redirect air in flight without the need of control surfaces.”
