A Timeline for the Extinction of Jobs by Machines

Summary: The AI revolution and the coming great extinction of jobs have finally become visible to almost everyone. Only the timing remains unknown. New research reports experts’ forecasts of AI’s growing skills, valuable information that will help us prepare. Let’s use it!


“When Will AI Exceed Human Performance? Evidence from AI Experts”

By Katja Grace (Future of Humanity Institute at Oxford), John Salvatier (AI Impacts), Allan Dafoe (Yale), Baobao Zhang (Yale), and Owain Evans (Oxford).

“Our survey population was all researchers who published at the 2015 NIPS and ICML conferences (two of the premier venues for peer-reviewed research in machine learning). …Our survey used the following definition: High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.

Figure 1: Aggregate subjective probability of ‘high-level machine intelligence’ arrival.


“Each individual respondent estimated the probability of HLMI arriving in future years. Taking the mean over each individual, the aggregate forecast gave a 50% chance of HLMI occurring within 45 years and a 10% chance of it occurring within 9 years. …

“The question defined full automation of labor as: when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers. Forecasts for full automation of labor were much later than for HLMI: the mean of the individual beliefs assigned a 50% probability in 122 years from now and a 10% probability in 20 years. …
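
As an aside on method: “taking the mean over each individual” means averaging the respondents’ probability-by-year curves, then reading off where the average crosses 10% and 50%. Here is a minimal sketch of that idea, with all respondent data invented for illustration (the survey’s actual elicitation and curve-fitting were more involved):

```python
import numpy as np

# Minimal sketch of the aggregation described above (not the authors' code).
# Each simulated respondent has a probability-of-HLMI-by-year curve; the
# aggregate forecast is the mean of those curves. All numbers are invented.
rng = np.random.default_rng(0)
years = np.arange(1, 201)                        # 1..200 years from now

# Pretend each respondent's forecast is an exponential CDF with a random median.
medians = rng.lognormal(mean=np.log(40), sigma=1.0, size=300)
individual_cdfs = np.array([1 - 0.5 ** (years / m) for m in medians])

aggregate = individual_cdfs.mean(axis=0)         # mean over respondents
year_10 = years[np.argmax(aggregate >= 0.10)]    # first year the mean crosses 10%
year_50 = years[np.argmax(aggregate >= 0.50)]    # first year the mean crosses 50%
print(f"aggregate forecast: 10% chance by year {year_10}, 50% by year {year_50}")
```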

Figure 2: Timeline of Median Estimates for AI Achieving Human Performance.
The first graph looks out 200 years; the second looks at the next 30 years.


“Respondents were also asked when 32 ‘milestones’ for AI would become feasible. …Specifically, intervals represent the date range from the 25% to 75% probability of the event occurring …. Circles denote the 50%-probability year. Each milestone is for AI to achieve or surpass human expert/professional performance ….

Two other conclusions.

1.  We asked researchers whether the rate of progress in machine learning was faster in the first or second half of their career. Sixty-seven percent (67%) said progress was faster in the second half of their career and only 10% said progress was faster in the first half. …

2.  Explosive progress in AI after HLMI is seen as possible but improbable. Some authors have argued that once HLMI is achieved, AI systems will quickly become vastly superior to humans in all tasks. This acceleration has been called the ‘intelligence explosion.’ We asked respondents for the probability that AI would perform vastly better than humans in all tasks two years after HLMI is achieved. The median probability was 10% …. We also asked respondents for the probability of explosive global technological improvement two years after HLMI. Here the median probability was 20% ….”

———————————-

For more about the study and its results, see this article by lead researcher Katja Grace.

Abstract

“Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053).

“Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.”

Putting this in a larger context: the last human-built machine

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

— “Speculations Concerning the First Ultraintelligent Machine” by I. J. Good (Irving John Good; British statistician) in Advances in Computers, vol. 6, 1965 – gated. Ungated version here.  He worked as a cryptographer at Bletchley Park during WWII.

See this interesting story of how this might happen: “The Last Invention of Man” by Max Tegmark at Nautilus, October 2017 — “How AI might take over the world.”

Conclusions

“I went through this Ford engine plant about three years ago, when they first opened it. There are acres and acres of machines, and here and there you will find a worker standing at a master switchboard, just watching, green and yellow lights blinking off and on, which tell the worker what is happening in the machine. One of the management people, with a slightly gleeful tone in his voice, said to me, ‘How are you going to collect union dues from all these machines?’ And I replied, ‘You know, that is not what’s bothering me. I’m troubled by the problem of how to sell automobiles to these machines.’”

Walter Reuther (the great UAW leader) in November 1956 to a Council group of the National Education Association. From his Selected papers (1961).

We have had several waves of automation, and adapted to each one — often with social turmoil and violence. The lessons from those can help us in this, the big one. These machines make possible a new, more prosperous, and better world. We need only find political solutions that fairly distribute its benefits. The clock is running. Let’s start now.

For More Information

The new industrial revolution has begun. New research shows more robots = fewer jobs. Also see the famous book by Wassily Leontief (Nobel laureate in economics), The Future Impact of Automation on Workers (1986). Also see the Frequently Asked Questions page at the website of the Machine Intelligence Research Institute.

If you liked this post, like us on Facebook and follow us on Twitter. See all posts about robots and automation, and especially these…

  1. A warning about the robot revolution from a great economist.
  2. How Robots & Algorithms Are Taking Over.
  3. Economists show the perils and potential of the coming robot revolution.
  4. Three visions of our future after the robot revolution.
  5. The coming Great Extinction – of jobs.
  6. Lessons for us about AI from the horse apocalypse.

Books about the coming great wave of automation.

Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford (2015).

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson and Andrew McAfee (2014).


18 thoughts on “A Timeline for the Extinction of Jobs by Machines”

  1. It seems like if many people lose jobs (and thus consuming power), pressure to further improve automation would slacken at some point; sure, it’s advantageous to cut payroll, but why not just close down the factory if demand is falling into the toilet? It isn’t like the robots will be able to print money from thin air.

    I don’t know what the solution to this is, beyond decoupling consumer power from employment, at least to some extent. UBI, national dividend, whatever – the big companies probably have to worry about it too, since it’s one thing to sell cheap T-shirts to people who are now poor but could afford middle-class prices twenty years ago, and another thing to sell T-shirts to people who (generously) are living in shacks off dole food.

    Maybe Marx was just two hundred years early.

    1. SF,

      I added a Conclusion to this post. Remember, the problems of automation are familiar to us — and have been solved before.

      “I went through this Ford engine plant about three years ago, when they first opened it. There are acres and acres of machines, and here and there you will find a worker standing at a master switchboard, just watching, green and yellow lights blinking off and on, which tell the worker what is happening in the machine. One of the management people, with a slightly gleeful tone in his voice, said to me, ‘How are you going to collect union dues from all these machines?’ And I replied, ‘You know, that is not what’s bothering me. I’m troubled by the problem of how to sell automobiles to these machines.’”

      Walter Reuther (the great UAW leader) in November 1956 to a Council group of the National Education Association. From his Selected papers (1961).

      “Maybe Marx was just two hundred years early.”

      Lots of people have worried about that! See The coming big inequality. Was Marx just early?

  2. Once AI’s performance becomes adequate, not necessarily superior, its other advantages may accelerate its adoption. In addition to cost, robots and other AI systems don’t need eight hours of sleep a night, for example, and don’t lie about doctor’s visits so that they can go to their kids’ baseball games.

    1. I agree with Chet. For example, non-AI automated translations are already surprisingly good, and I am sure they are reducing the demand for humans at the low end of translation tasks.

      One automated translator can be available simultaneously for thousands of customers and will predictably offer the same (increasingly) high level of translation service to all customers. There would probably be a maintenance fee but no benefit costs and no vacation schedules. That business model is pretty sweet.

      1. Pluto,

        Yesterday I began using VoiceBase — “Our deep learning speech recognition has revolutionized speech-to-text.” It’s far better than any other automatic transcription service I’ve seen. It still requires line by line re-writes — made easier by InqScribe’s software (makes it easier to edit a transcript while playing a recording).

        VoiceBase is very cheap to use. In another generation or two of upgrades (perhaps a year or two in human time), it or a competitor will become big time!

    2. Chet,

      Lots of potential for runaway progress of AIs! There are the economic advantages you mention. There is the evolutionary one: once AIs develop the ability to write their own software, their evolution will occur in machine time. They’ll advance in seconds what human society takes generations to accomplish.

      Oddly, this aspect of AI is seldom mentioned in fiction — although it might become their most important characteristic. It is the core premise of Colossus: The Forbin Project (1970), based on the novel Colossus by D. F. Jones (1966).

    3. Pluto,

      I’m not sure what non-AI machine translation service you’re speaking of, and the definition of “AI” is notoriously slippery (at what point does “machine learning” become “AI”?), but Google Translate has made enormous leaps and bounds over the last 18 months or so, and currently uses a recurrent neural network to do direct sequence-to-sequence encoding and decoding. This is considered an accomplishment at the very forefront of applied AI. In some cases phrases have been judged to be better translated than human efforts, but looking at the outputs you can see that while they’re far better than in the early days of the service, it still has a way to go to hit complete human equivalence.

      Larry,

      That’s the thing with voice recognition, machine translation, and similar language tasks. They’ve been “one or two generations away” from being fully usable (the point at which the number of manual corrections needed no longer outweighs the convenience of direct dictation) for as long as I can remember. People were saying the same thing about Dragon NaturallySpeaking and similar products in the previous wave of enthusiasm for voice recognition in the mid-1990s. And probably in the 1980s, 1970s, etc. Machine translation has been worked on since the 1950s, at least, with the participants of the Georgetown experiment in 1954 confidently asserting that they were only 3-5 years out from fully solving the problem. But progress in all these things tends to be asymptotic and slow, with occasional leaps forward, as we’re currently experiencing as a result of the back-propagation algorithm making deep neural networks practical. So I’m bullish in the long term, but you’ve got to be careful of over-enthusiasm in the near term.

      1. phageghost,

        I have no idea what you are attempting to say. If you believe that these systems have not improved since the 1950s, 1970s, or 1980s — you are wrong. If you believe that people over-estimate the potential for short-term change — yes, that’s a commonplace.

        “They’ve been “one or two generations away” from being fully usable (the point at which the number of manual corrections needed no longer outweighs the convenience of direct dictation)”

        Voicebase is already at that point. I’m using it this week.

        “People were saying the same thing about Dragon NaturallySpeaking and similar products in the previous wave of enthusiasm for voice recognition in the mid-1990s.”

        I suggest you benchmark by the improvement in capabilities, not by comparing hype.

        “But progress in all these things tends to be asymptotic and slow, ”

        That’s really false. Bizarrely false. It’s a commonplace in tech to say such things during the long basing part of the “S” curve — dismissive of the gradual improvements during this phase. Many areas of machine intelligence are following the usual “S” curve, with some (e.g., translation of speech to text, language to language) now moving into the steeper phase. My first involvement with this was in 1987, working on the team building the second fully automated financial planning expert system. It was a toy compared to what’s available today.

        “So I’m bullish in the long term, but you’ve got to be careful of over-enthusiasm in the near term.”

        I’ve probably posted the “overestimate over short-term – underestimate over long-term” quote several score times on the FM website.

        “the definition of “AI” is notoriously slippery”

        The usual definition used in the 1980s probably still works today: leading-edge functionality, from development through early commercialization, is “AI”. Once it becomes widely used, it becomes just software.

      2. Larry,

        Let me clarify.

        You said:

        “It’s far better than any other automatic transcription service I’ve seen. It still requires line by line re-writes — made easier by InqScribe’s software (makes it easier to edit a transcript while playing a recording).

        VoiceBase is very cheap to use. In another generation or two of upgrades (perhaps a year or two in human time), it or a competitor will become big time!”

        Each correction takes some amount of time. Trivially, the point at which voice recognition becomes faster than typing (and therefore able to replace it broadly) is the point at which the number of corrections times the time it takes to make each one becomes less than the time saved by speaking rather than writing. Assuming both time variables are constant, folks can compute the needed error rate to reach the crossover point. I took from your phrasing that you were averaging one error per line (which for my time values would put me still in the typing-faster-than-speaking region of the curve). Also you said that it needed a couple of generations of upgrades before it was ready for the big time. Well, I heard much the same thing about previous generations — “the error rate is almost there! Just a couple more generations”. I could easily find out for myself by trying it out, I suppose, but dictation wouldn’t be practical for my use cases.
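
        To make that concrete, here’s a back-of-the-envelope version of the crossover calculation; every number is invented for illustration, so plug in your own speeds and correction time:

        ```python
        # Back-of-the-envelope crossover point for dictation vs. typing.
        # Every number here is invented for illustration.
        words = 500                  # length of the passage
        typing_wpm = 60              # typing speed, words per minute
        speaking_wpm = 140           # dictation speed, words per minute
        seconds_per_fix = 6          # time to find and fix one transcription error

        typing_minutes = words / typing_wpm

        def dictation_minutes(word_error_rate):
            errors = words * word_error_rate
            return words / speaking_wpm + errors * seconds_per_fix / 60

        # Highest word error rate at which dictation still beats typing.
        breakeven_wer = (typing_minutes - words / speaking_wpm) * 60 / (seconds_per_fix * words)
        print(f"typing: {typing_minutes:.1f} min")
        print(f"dictation at 5% WER: {dictation_minutes(0.05):.1f} min")
        print(f"break-even word error rate: {breakeven_wer:.1%}")
        ```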

        My overall point is one that I’m sure you’re familiar with in economics, which is that there are diminishing returns such that it takes more and more effort to eke out each next step of improvement (depending, obviously, on what scale we use to quantify performance). For example, I’m quite sure that VoiceBase uses several orders of magnitude more CPU cycles and RAM than ye olde Dragon software. Fortunately Moore’s law takes care of that, so it’s a wash.

        Likewise, contra your thought experiment of self-improving AI (granted, I’m assuming it wasn’t meant to be quantitative), you can’t get an endless exponential increase in the efficiency of code. It’s asymptotic. As a human you usually get some big jumps, orders of magnitude in the first few passes — by the end you’re looking at a few percent here or there and it’s time to stop. A computer could do this faster and probably better, but it will slow down too. And of course there are theoretical hard limits; e.g., you obviously can’t do better than O(N), where N is the size of your inputs plus outputs.

        Since the challenge scales exponentially, the history of progress in AI is a bit like Zeno’s paradox: the backers are confident that the last 10% of the problem will take 10% of the development time and CPU cycles it’s taken so far, when in reality it takes just as much time as the first 90%. And this has led to the cyclical nature of enthusiasm for the field, in which a backlash from dashed expectations is followed by a slow resurgence. We happen to be on another upslope, which I’m happy about, but that’s why I’m cautious about potentially over-ebullient predictions.

        However, to use an analogy to a quote you’ve often repeated here, just because William the Conqueror’s fleet hasn’t shown up yet doesn’t mean it won’t . . .

      3. phageghost,

        I don’t get what you are saying. But time will prove which of us is correct.

        “you can’t get an endless exponential increase in the efficiency of code. It’s asymptotic. ”

        That’s a silly rebuttal, a commonplace folly: “Growth can’t continue because if it continues endlessly…” That something cannot continue until the heat death of the universe doesn’t mean it can’t continue for the time horizons people on Earth use. That there is some limit is incontestable; how far off it is, only time will tell. I’ve been reading for 20+ years that Moore’s Law (in its many variations) is about to break down because semiconductor manufacturers are hitting physical limits. Scientific American ran several such stories when I used to read it, long ago.

        AIs achieving HLMI and rewriting their code in 20 years, followed by one year of explosive progress, is not “endless.” Also, that was just an illustration. The growth might be slower, spread over two or three years, but the resulting leap ahead would be the same.

      4. “you can’t get an endless exponential increase in the efficiency of code. It’s asymptotic. ”

        No, I don’t mean it in the purely abstract sense; I mean it in a functional sense that would be relevant to end users. You’re just not going to get 10,000-fold improvements in efficiency unless the code was very poorly written to begin with; that’s just not how it works. More like 10-fold. But an efficiency increase of even 10-fold only gives you the same effect as about five years of Moore’s law (a 10x speedup is about 3.32 doublings, at roughly 1.5 years per doubling), right? In fact, part of the design philosophy of Microsoft under Bill Gates, at least in the 1980s and 1990s when I was following them, was to ignore efficiency in order to focus on features and let Moore’s law take care of performance.

        I don’t think the benefits of AI-written software will be mainly in code efficiency (though that will be there); they will be in:

        1. Writing MORE code. There are a ton of problems that could benefit from custom-designed software solutions where it’s just not feasible to pay people to write it. But a system that could automatically translate from problem specifications to software solutions (with iterative clarifications, of course) means you could end up with software even for one-off situations. There will just be an endless stream of software being spun out by intelligent agents that you’ll never see or interact with, solving problems on the fly — we’ll just consume it and think of it as we do electricity.

        2. Developing better algorithms (though perhaps this is what you mean by efficiency gains). There are a ton of interesting problems that are just exponential in time complexity and so completely intractable for any interesting size of input, even if Moore’s law continued to the heat death of the universe. Most of them are NP-complete, so the only hope is that a computer could eventually find a polynomial-time solution and thereby show that P=NP, the holy grail. Most folks don’t think that’s possible but of course can’t prove it. But we have puny human brains. Even if a computer doesn’t show P=NP, just proving P!=NP would be awesome. However, there are little algorithmic improvements made even to mature problems all the time, and if an automated system could, for example, find a slightly more efficient way of doing something as well-used as matrix multiplication, that would be huge. For problems on the cutting edge, there are undoubtedly much more efficient algorithms to be found, since within any problem domain you get diminishing returns.

      5. phageghost,

        “You’re just not going to get 10,000-fold improvements in efficiency unless the code was very poorly written to begin with”

        It’s always interesting to see people so confidently making up certainties about the future. If only people doing so would prove their prognostication power by showing the accuracy of their past predictions.

        Improvements of four orders of magnitude are commonplace in tech. Not just in modern semiconductors, but in engines. Thomas Savery’s engine generated 1 HP in 1699. Newcomen’s did 5 HP in 1711. In 1781 Watt’s did 10 HP. A century later, engines of 100,000 HP were commonplace.

        How might software produce such large gains? We can only guess. One method might be AIs writing directly in machine language. There are massive losses of efficiency when going from high-level programming languages to assembly language to machine language. An AI might find those intermediate steps unnecessary, or even harder than simply writing machine code directly. Who can say how many orders of magnitude of improvement that change alone might generate? Who can say what other improvements a machine intelligence might eventually make in software?

        Are these certain? Nothing is certain about the future, other than that time will roll onwards. But preparing for it requires an open mind.

        These conversations about trains being impractical and unsafe, airplanes being impossible, the atomic bomb being impossible, and rockets being impossible are the usual fare in tech history. Boring, but commonplace.

    4. Self-improving code will speed things up but I doubt it will (for the most part) take seconds to do what took generations beforehand, since these are complicated tasks involving interface with boring ol’ slow matter and maybe even human beings.

      Now that doesn’t mean they won’t be scary fast, but even a hypothetically one hundred percent automated chip fabricator and AI Forbin complex would have to design, manufacture, test, install and integrate its new superchips. Similarly, it would need to fabricate its Terminators at some point, if it’s going the Terminator route.

      I think Asimov’s stories are going to get a lot more relevant.

      1. SF,

        “I doubt it will (for the most part) take seconds to do what took generations beforehand …Now that doesn’t mean they won’t be scary fast, but even a hypothetically one hundred percent automated chip fabricator and AI Forbin complex would have to design, manufacture, test, install and integrate its new superchips. ”

        The point being made is the possibility that software improvements alone could make an AI more powerful, more effective, and smarter. Windows 10 is approximately 50 million lines of code (plus more in updatable firmware); the Google software complex is roughly 2 billion lines of code (per Wired). It is easy to imagine that a future AI might rewrite that code to get more juice out of it than did the armies of human software coders. If each generation of AI-designed code were 10% more efficient and took a week to produce, then it would be 142x “better” after one year.

        What would that mean? I doubt we can provide a useful answer to that question today. Our ability to understand animal intelligence is still in its early stage. But an improvement of two orders of magnitude might have big results. As engineers say, “a change of an order of magnitude is a qualitative change.”

        Also — given how well software is designed today, it is possible that a machine might be better at coding than groups of people. So the improvement per generation might be greater than 10% — or take less than a week. Then the results become fantastically high. A gain of fifteen percent every five days gives an improvement of 27 thousand times after a year — four orders of magnitude.
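
        For what it’s worth, the arithmetic behind both of those figures is just a fixed per-generation gain compounded over a year’s worth of generations. A quick illustration:

        ```python
        # Compounding a fixed per-generation gain over a year (illustration only).
        def yearly_multiple(gain_per_generation, days_per_generation, days=365):
            generations = days / days_per_generation
            return (1 + gain_per_generation) ** generations

        print(f"10% per week:   {yearly_multiple(0.10, 7):,.0f}x after a year")   # ~144x (~142x over 52 weeks)
        print(f"15% per 5 days: {yearly_multiple(0.15, 5):,.0f}x after a year")   # ~27,000x
        ```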

  3. “That’s really false. Bizarrely false. It’s a commonplace in tech to say such things during the long basing part of the “S” curve — dismissive of the gradual improvements during this phase. Many areas of machine intelligence are following the usual “S” curve, with some (e.g., translation of speech to text, language to language) now moving into the steeper phase.”

    A sigmoidal curve starts out in a regime of fast, approximately exponential growth, but the growth rate slows over time, such that it passes through an approximately linear growth domain and ends up asymptotically approaching some maximum. But you’re presumably talking about the derivative of the function, not its growth rate, in which case, yes, it’s fastest in the linear region.
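
    A toy example makes the distinction explicit: for a logistic curve, the relative growth rate falls monotonically, while the derivative (the absolute rate of improvement) peaks in the middle, “linear-looking” region:

    ```python
    import numpy as np

    # Logistic curve f(t) = L / (1 + exp(-k*(t - t0))) and its two "speeds".
    L, k, t0 = 1.0, 1.0, 0.0
    t = np.linspace(-6, 6, 13)

    f = L / (1 + np.exp(-k * (t - t0)))
    derivative = k * f * (1 - f / L)   # absolute rate of change: peaks at t0, the "linear" region
    growth_rate = derivative / f       # relative growth rate: falls monotonically

    for ti, fi, di, gi in zip(t, f, derivative, growth_rate):
        print(f"t={ti:+5.1f}  f={fi:.3f}  f'={di:.3f}  f'/f={gi:.3f}")
    ```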

    So the crux of our argument is which domain AI is in. The problem is that we’re in different domains for different problems. For example, autonomous driving definitely seems to be in the linear section: looking at the performance from that early DARPA challenge to now, any performance metric will have seen huge increases. I think these are the types of examples that you’re arguing from.

    However, other problems are certainly at the slow-growth right end of the curve. Look at the improvement in accuracy for the MNIST classification task in recent years, for example: going from 99.5% to 99.75% over a decade or so of intensive research (of course, the error rate is still decaying exponentially, which might be a better way of looking at it, depending on the structure of your cost function). I’m arguing that voice recognition is a problem that’s not as nearly solved as MNIST, but that does seem to be in the slow-growth region.

    Language-based problems seem to be a little more difficult than simple classification tasks to put on this kind of simple curve and discuss quantitatively, though, since something like the “meaning” of a text is really many sets of hierarchically-nested problems of increasing difficulty.

    I’ll defer to your expertise on automated financial planning systems but I have no doubt they’ve improved mightily since 1987 (hmm, you didn’t roll it out on October 19th, did you? ;-)

  4. Definitely interesting stuff. A few final points.

    1. The feedback loop of self-improvement will obviously also involve hardware design, not purely software. Physics AI may be a huge boon to uncovering the quantum weirdness at the bottom of things that could help us build better processors.

    2. One point that’s often overlooked in discussing machine-written software is the efficiency gains that could come from discarding the need to be comprehensible. Long story, but most code sacrifices a fair amount of performance so that it can be read, understood, and maintained by humans. But, to look across disciplines, biological and other evolved systems are free of this constraint, and don’t neatly compartmentalize functions the way a human engineer would (part of what makes them so complex and hard to unravel). Software can keep much more complex models in mind than human programmers, and so may benefit from being freed of the straitjacket of human-readability. However, compilers already do a lot of this under the hood, so it may not be so clear-cut.

    3. One killer app for voice recognition that’s viable at much higher error rates than dictation is, for me, when it’s linked to machine translation. You might have seen that Google Translate has this now for a lot of language pairs. Speak English to it and it speaks back in French, and vice versa. Pretty freaking amazing. Remember the Babel Fish from Hitchhiker’s Guide to the Galaxy? Put this on a little device with the form factor of a Bluetooth headset and travel the world with impunity. Definitely check it out if you haven’t played with it yet.

    Of course, back to your larger point, we still have no plan to deal with the job extinction all this is going to cause, and yeah, time is running out.

    1. Phageghost,

      You might find the links in the For More Info section of interest. Few people realize that the discussion about runaway evolution of AI goes back to 1986. Also see the link to the website of the Machine Intelligence Research Institute.

      All this has been examined and discussed for decades by experts.

  5. Pingback: More Adventures in an “Industrial Society and Its Future” | Head Space
