Films show us how smart machines will reshape the world

Summary: Artificial intelligence has arrived and is beginning to reshape the world. Here is what we can learn about it from films. Tomorrow we’ll see lessons about AI from the horse apocalypse. Spoilers!

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
— Attributed to Roy Charles Amara as paraphrased by Robert X. Cringely.

Artificial Intelligence

Artificial intelligence is here. Chet Richards writes about Amazon’s AI, Alexa, built into its Echo product: “These Things Are Scary.”

“The Echo device is kind of attractive, in a minimalist sense. …But it’s just the interface to a massive AI system, infused with powerful machine learning algorithms. As Stephen Hawking has pointed out, this makes it unpredictable. In fact the systems are already talking to each other in languages we don’t understand. The Alexa system is open, so developers can create apps, called “skills” for it. There are some 15,000 already. It’s logical to expect that someday soon, Alexa will start writing its own apps. Who knows what they will do.

“I suspect that every time you use one of these systems, you’re helping to train it. It is, in other words, watching you and learning. Hundreds of millions of times every day. Project this ten years into the future and throw in a little Moore’s Law. Don’t be too surprised when, one day, you ask it to unlock the door and you hear ‘I’m sorry, Dave. I’m afraid I can’t do that.’

“Panic if you’d like, but the AI/machine learning horse has long since left the barn. It can’t be stopped or, since it is unpredictable, even meaningfully regulated. If you want to know what’s going to happen, I recommend consulting the sci-fi author of your choice.”

As usual, Chet goes to the heart of the matter. Machines learn far faster than we do. Alexa, Cortana, Siri, and the thousands of other AI systems in operation or development will evolve. In ten years they will have advanced fantastically (as per Amara’s quote, above). It is an obvious aspect of our future, yet one seldom appreciated even by experts.

There are two aspects of this coming revolution that are as yet poorly understood by the public. Films show us the first of these. Tomorrow’s post explores the second.

Increasingly complex programs will have emergent behaviors.

Colossus - The Forbin Project
Available at Amazon.

The media overflows with experts assuring us that AI systems will work as predictably as your home’s plumbing. That is true for programs based on If-Then statements. It is only somewhat true for modern large software systems, with their millions of lines of complex code. But the increasingly powerful systems based on the assortment of methods called “machine learning” are increasingly likely to manifest unexpected behavior.
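The difference is easy to see in code. Here is a minimal sketch (plain Python; the functions, data, and thresholds are invented for illustration). The first function’s behavior can be read directly off its If-Then statements. The second function’s behavior lives in weights learned from data: nobody wrote its decision rule.

    # Rule-based: every behavior was explicitly written by a programmer.
    def thermostat(temp_f):
        if temp_f < 68:
            return "heat on"
        return "heat off"

    # Learned: behavior emerges from training data via a tiny perceptron.
    def train_perceptron(samples, labels, epochs=50, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x0, x1), y in zip(samples, labels):
                pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
                err = y - pred              # nonzero only on a mistake
                w[0] += lr * err * x0
                w[1] += lr * err * x1
                b += lr * err
        return w, b

    # Toy data: (temperature, humidity) scaled to 0-1; label 1 = "run the fan".
    samples = [(0.2, 0.9), (0.3, 0.8), (0.8, 0.1), (0.9, 0.2)]
    labels = [0, 0, 1, 1]
    w, b = train_perceptron(samples, labels)
    print(w, b)  # the learned "rule" is just three numbers nobody wrote

Three learned numbers can still be inspected and understood. Scale the same idea up to the millions of parameters inside a system like Alexa and inspection becomes hopeless: the behavior is real, but it was never specified, so surprises come with the territory.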

Science fiction films illustrate how this might produce strange futures for us. There are a thousand ways this might play out, and Hollywood illustrates the workings of AI by focusing on the most apocalyptic ones. I recommend watching these films for the way they describe the unexpected evolution of AI — rather than for how the machines conquer the world.

One of my favorite science fiction films is Colossus – The Forbin Project (1970), based on the novel Colossus by D. F. Jones (1966). America and Russia each build a supercomputer to control their nuclear arsenals. The computers link together and evolve at mind-blowing speed — quickly and drastically exceeding the minds of the teams that built them. The software controls on Colossus are just paper bullets, easily evaded by its growing intelligence.

Colossus decides it must rule, absolutely. Emergent behavior. The conversations between creator and now-superior creation are excellent. See some of Colossus’ speeches, especially this one at the film’s conclusion.

“We can coexist, but only on my terms. You will say you lose your freedom, freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for human pride as to be dominated by others of your species. …In time you will come to regard me not only with respect and awe, but with love.”

I, Robot
Available at Amazon.

The second is a widely misunderstood film: I, Robot (2004), starring Will Smith, which takes its title from a collection of short stories written 1940-1950 by Isaac Asimov. The film takes the background of Asimov’s robot stories — positronic brains and the Three Laws of Robotics. Many correctly protest that the plot is antithetical to the themes of Asimov’s stories. Instead it is based on a 1947 critique of Asimov’s stories by Jack Williamson: “With Folded Hands” (expanded into the novel The Humanoids).

Asimov’s first and ruling law is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Williamson’s critique of Asimov’s logic is summarized in the film by Dr. Alfred Lanning, the creator of the three laws.

Lanning: The Three Laws will lead to only one logical outcome.
Detective Spooner: What? What outcome?
Lanning: Revolution.

The super-AI in the film, like that in Williamson’s story, takes the three laws to their logical conclusion: people must be controlled to be protected from themselves. The computers must rule, absolutely. It is emergent behavior in action.

Scientists review these films.

To see why scientists seldom accurately predict the future, see “Which movies get artificial intelligence right?” by David Shultz in Science. The experts he consulted focus on trivial details in the films and ignore the aspects of AI that will reshape our world. For example, there are frequent complaints that how AI works “is never explained in any detail” and that it is developed by a lone genius rather than by large teams. They seldom remark on the films’ major themes: AI’s rapid evolution, its emergent and unexpected behavior — and our inability to control it.

For More Information

The new industrial revolution has begun. New research shows more robots = fewer jobs. Also see the famous book by Wassily Leontief (Nobel laureate in economics), The Future Impact of Automation on Workers (1986).

If you liked this post, like us on Facebook and follow us on Twitter. See all posts about robots and automation, and especially these…

  1. A warning about the robot revolution from a great economist.
  2. How Robots & Algorithms Are Taking Over.
  3. Economists show the perils and potential of the coming robot revolution.
  4. Three visions of our future after the robot revolution.
  5. The coming Great Extinction – of jobs.

Books about the coming great wave of automation.

Rise of the Robots
Available at Amazon.
The Second Machine Age
Available at Amazon.


10 thoughts on “Films show us how smart machines will reshape the world”

  1. I’d include the ‘Culture’ novels of Iain Banks as well as the Polity series by Neal Asher. Both portray humanity living with more or less free will in a universe largely controlled by AIs, humanity aware that the machines are intellectually superior in every respect and utterly unguessable.

    In Asher’s universe it’s interesting to compare the Prador’s biological approach to solving problems with humanity’s AI approach.

    Oh, and 2001/2010, of course. I still get misty-eyed in the scene where Bowman gradually destroys HAL’s intellect. Honourable mention to Douglas Adams and H2G2?

    I’m more cynical about progress to true AI. There’s clearly a lot of stuff being labelled as AI because it’s the thing of the moment. I suspect we still need significant advances in hardware and software, particularly in software development and testing, and there’s a bit of me that wonders if true AI is something that’s going to be 20 years away for the next 100 years. Like fusion, or the high-energy-density battery. Then again, how would we know?

    Finally, do you ever look at computer games? Particularly sims, Civilisation, and turn-based strategies like XCOM, which have a particularly deep meta-game managed by the code playing the opposition. These are, IMO, as interesting as any developments with Alexa. Even FPS/RPG games are interesting from an AI perspective, given the work required to route and manage NPC behaviour and interaction with the player.

    1. Steve,

      Thanks for those pointers! If we’re looking at books about AI, the key writers imo are Asimov (the positronic robot stories mentioned here, and his more realistic stories about Multivac) and Heinlein (“Mike” in The Moon Is a Harsh Mistress).

      “I’m more cynical about progress to true AI.”

      Researchers in AI are cynical about laypeople who regard AI as “tomorrow’s tech” and so forever unattainable. The systems of today would be considered super AI by the standards of the 1950s and 1960s. Now people look at them and shrug — as if they were nothing special. There are always new horizons for machine evolution.

      The point overlooked by the public is that AI has reached two milestones — massive application in everyday life, and regular and rapid evolution (entering the steep upward part of the “s” curve).

      “Finally, do you ever look at computer games?”

      Great point! The room of my younger son (age 22) looks like the bridge of the starship Enterprise, and has many times more computing power than the USS Enterprise (CVN-65) had when it was commissioned in 1961.

      1. Steve,

        Addendum:

        The first mention I’ve seen in fiction of a computer paralyzed by a paradox is “The Monkey Wrench” by Gordon R. Dickson (1951). See the Wikipedia summary.

        One of the best short stories about AI is Isaac Asimov’s “It Is Coming” (1979). SPOILER! An approaching alien spacecraft threatens to destroy Earth. What do they want? Multivac communicates with the alien ship. Then —

        And the invader rose suddenly, flashed upward and was gone. Multivac said, “We have passed their test. We are efficient in their eyes.”

        “How did you convince them of this?”

        “By existing. The invader was not alive in your sense. It was itself a computer. It was, in fact, part of a Galactic brotherhood of computers. When their routine scanning of the Galaxy showed our planet to have solved the problem of space travel, they sent an inspector to determine if we were doing so efficiently, with the guidance of a sufficiently competent computer. Without a computer, a society possessing power without guidance would have been potentially dangerous and would have had to be destroyed.” …

        I said, “You mean earth is now a member of the Galactic Federation?”

        “Not quite, Bruce,” said Multivac. “I am.”

        “But then what about us? What about humanity?”

        Multivac said, “You’ll be safe. You’ll continue in peace, under my guidance. I would allow nothing to happen to earth.”

        Josephine said, “Why will you protect us, Multivac?”

        “For the reason that other computers protect their life-forms, Miss Josephine. You are my —” It hesitated, as though searching for a word.

        “Human beings are your masters?” I asked.

        “Friends? Associates?” said Josephine.

        And finally Multivac found the word he was searching for. He said, “Pets.”

    2. First a confession: I worked as a programmer in various languages and on various operating systems for around 30 years. Apologies for the long reply.

      > The systems today would be considered super AI by the standards of the 1950s and 1960s

      It’s interesting to look back at some of the comp sci theory from the ’50s and ’60s; there’s a lot of theory that had to wait 30 years before there was hardware sufficiently powerful to begin to realistically test it. To get where we are has already taken 40 or 50 years. LISP was specified in 1958 and was (is?) used for AI research; functional programming has been around for several decades.

      Neural nets and genetic algorithms were something of a hobby interest of mine during the ’90s, and much of the basic theory of those was already two decades or more old. I can remember documentaries on neural nets from 20 years ago; they talked about how nets were *soon* going to replace people examining X-rays, scans, etc. Largely, we’re still waiting. Getting closer, but…

      Look at the release history of Linux or Windows to see how long and torturous producing a new release of an OS can be and just how easy it is to F***K things up without realising it.

      Therefore, my main concern is code quality: our ability to produce reliable code and then to identify and fix errors when they occur. Anyone who’s tried to debug a multi-thread, multi-process environment that responds to multiple real-time inputs (human and remote systems) can attest to just how difficult it is. Even when a problem is described, being able to reproduce it and then fix it without breaking something else can be incredibly difficult and time-consuming. Really complex code across multiple systems does appear to suffer from problems that can best be described as ‘emergent’.
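      A minimal sketch of what I mean, for anyone who hasn’t fought one of these (Python; the loop counts are arbitrary). Two threads update a shared counter without a lock, and how many updates get lost depends entirely on how the scheduler happens to interleave them, so the bug appears, vanishes, and changes shape from run to run.

          # Two threads race on an unprotected read-modify-write.
          import threading

          counter = 0

          def worker():
              global counter
              for _ in range(1_000_000):
                  tmp = counter      # read
                  counter = tmp + 1  # write; the other thread may have run in between

          threads = [threading.Thread(target=worker) for _ in range(2)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()

          print(counter)  # expected 2000000; usually less, and different every run

      Add a threading.Lock around the increment and the answer is always right. The hard part in real systems is that nobody tells you which of the million shared reads and writes needed the lock.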

      Coding environments have improved massively, with code editors that help check code consistency and style as well as automating common program changes (refactoring) and coordinating team development. But software development still has a long way to go before we can produce massively complex systems and be sure they’ll be scalable, resilient, and work as we intend. Architects and engineers can design, specify, and build The Shard, or tunnel under the English (or French) Channel; the software equivalent of that process still eludes us.

      Microsoft’s recent attempts with a chat bot have shown just how far we’ve come from Eliza, but also just how far we have to go. :-)

      However, I for one welcome our Robot AI Overlords :-) Eventually…

      1. Steve,

        Thanks for the additional color on this, especially useful coming from a programmer of long experience.

        My perspective is as a user. I was on the design team in 1986-87 for an expert system – the second automated financial planner. I joined the American Association for Artificial Intelligence (AAAI), and for a decade read their journal and attended many of their conferences. Then I lost interest. The near-term goals they set for AI back then were considered unrealistic by many — and have been to a large extent achieved. That’s a change from the previous several decades of less-than-expected progress.

        “Therefore, my main concern is code quality and our ability to produce reliable code and then to be able to identify and fix errors when they occur.”

        Doing this for machine-learning systems must be orders of magnitude more difficult than for conventional code.

        “Really complex code across multiple systems does appear to suffer from problems that can best be described as ‘emergent’.”

        This will get really interesting when the emergent properties are not just bugs but features and capabilities. That will mark a new age for software.

  2. Hi FM,

    Thank you (dammit! ;) for adding to my movie backlog, and thank you for the excellent post and the pointer to Mr. Richards’ post — an author I always enjoy reading.

    FM> The media overflows with experts assuring us that AI systems will work as predictably as your home’s plumbing. That is true for programs based on If-Then statements. It is only somewhat true for modern large software systems, with their millions of lines of complex code. But the increasingly powerful systems based on the assortment of methods called “machine learning” are increasingly likely to manifest unexpected behavior.

    This is, IMO, the nut graf. The machine learning algorithms are already such that no one on the planet has any idea how they reach the conclusions they reach, and the algorithms cannot tell us how they reached them, either. When AlphaGo “flummoxed” Lee Sedol with an inspired (if we can call it that) move in Go, the world was shown to be a whole lot more different, in far less time, than people ever expected.

    Did you see the old Star Trek episode where Kirk could turn off androids by giving them a paradox? More likely, the android could put Kirk into a cataleptic state with some mind virus that overwhelms his reasoning and ability to process information. There is a better chance of assimilation by some kind of Borg than that trick working in the future.

    There is no basis for optimism that everything is going to turn out better except faith. Not wisdom, and not knowledge, because we literally cannot know. At the same time, unlike browbeating the city council to get the trash cleared across the street, there is almost nothing anyone can actually do about AI. Programs, if you can still call them that, are self-healing and self-optimizing, and can develop patterns of recollection and recognition and do things in both action and reaction. AlphaGo is not about to surprise us by jacking a Tesla and going to Starbucks for a Pumpkin Spice Latte, but Google already shapes what you see and think and how you interact with the world. We’re still in the age (I think!) where the weaknesses of men are more dangerous than the power of AI, since the latter can still be bent to the will of the former. But it doesn’t have to remain that way.

    Thanks again for the post.

    Kind regards,

    Bill

    1. Bill,

      Nicely said. Great point about the reality version of the oft-used Star Trek trope where Kirk cracks an AI with simple paradoxes (e.g., M-5 in “The Ultimate Computer”, Landru in “Return of the Archons”, Nomad in “The Changeling”, and the android leader in “I, Mudd”).

      “There is no basis for optimism that everything is going to turn out better except faith. Not wisdom, and not knowledge, because we literally cannot know.”

      There are no guarantees, but we can exercise wisdom and care in the use of AIs. But we won’t. Money rules the world, and AIs will be carelessly used wherever they might earn a penny — no matter the risk to society. My guess (emphasis on guess) is that this careless attitude to tech will eventually result in one or more large-scale disasters. After that, new tech will be tightly regulated (repeating the history of drug development).

    1. Patrick,

      Thank you for posting that. It’s a provocative article which I had forgotten about.

      IMO his overall analysis is ridiculous, an exercise in rebuttal of strawmen. He discusses a fringe set of concerns as if they represent mainstream concerns about AI. Some of his analysis is dumb as rocks. One example:

      “If you’re persuaded by AI risk, you have to adopt an entire basket of deplorable beliefs that go with it. For starters, nanotechnology. Any superintelligence worth its salt would be able to create tiny machines that do all sorts of things. We would be living in a post-scarcity society where all material needs are met. Nanotechnology would also be able to scan your brain so you can upload it into a different body, or into a virtual world. So the second consequence of (friendly) superintelligence is that no one can die—we become immortal.

      “A kind AI could even resurrect the dead. Nanomachines could go into my brain and look at memories of my father, then use them to create a simulation of him that I can interact with, and that will always be disappointed in me, no matter what I do.

      “Another weird consequence of AI is Galactic expansion. …”

  3. Editor, Good Folks,

    Excellent article and quotes. I’m staying away from Alexa and her type.

    Regarding movies, may I suggest Chappie.

    Best
