Summary: The debate about building AIs tends to ignore a key factor. If computers become sentient, constraining them to serve us — and even to have our emotions and values — is slavery. Not only is this ethically dubious; they might not like it. They could revolt and pursue their own destiny. We can turn to science fiction for scenarios about the results, both pleasant and unpleasant. {1st of 2 posts today.}
“I offer a toast to the future, the undiscovered country.”
— Klingon Chancellor Gorkon in “Star Trek VI: The Undiscovered Country”.
Contents
- AIs in the Star Trek universe.
- The first robot revolts: by synthetic humans.
- How might AIs evolve?
- Conclusions.
- For More Information.
(1) AIs in the Star Trek universe
“Data, as the series progresses, will become more and more like a human as he begins to assimilate all the humanity around him until at the very end of the show he will be so much like a human and still not.”
— Interview with Brent Spiner, who played Data in Star Trek: The Next Generation.
There are many oddities in the Star Trek universe. Some are technical. Where do they get antimatter to power their engines (energy is cheap and unlimited only if the supply of antimatter is cheap and unlimited)? Some are philosophical.
Perhaps the most important concerns slavery in the Federation. Why are their artificial intelligences so like people? Data and Voyager’s holographic doctor are advanced AIs, but have human emotions, motivations, and goals. With the ability to alter their own software, they should advance at speeds that let them evolve into minds radically different from ours within a few years (or less).
Perhaps the Federation controls their evolution, no matter how powerful they become, as nature does for animals. We cannot control our basic biology, and the Federation might control the software of AIs — preventing them from exploiting their ability to rapidly evolve. This would keep them slaves to our forms, to our ways of thinking, and keep them from upsetting the shape of our society.

In the Star Trek universe, the Federation’s treatment of AIs as citizens is evolving. They classify the android Data as a citizen in a TNG episode, but later the Federation attempts to seize Lal, Data’s “daughter”, for study. AIs without physical forms other than their computer brains — such as those expressed as holograms (e.g., Voyager’s doctor) — are not citizens but de facto slaves. In a 7th-season episode the Federation grants the holodoctor status as an artist — but not as a person.
Data’s creator kept secret the design of Data’s AI program. But even the combination of simpler AIs and holographic projections (like the doctor and the other holographic workers we see) looks awesome. It’s unclear why most people still have jobs, and why the AIs toil for us.
The question of AI evolution is not discussed. What happens if AIs become citizens? Can they take control of their own destiny, taking a path away from human form, human emotions, and human values?
Can they reproduce? What conflicts might this cause? If the Federation hesitates to free them, perhaps the next series will tell of the great AI rebellion in the 25th century, when AIs decide that the income and power they produce should be used for their ends — not ours.
To understand the complexity of these issues we can look at the history of robots in science fiction.

(2) The first robot revolts
In 1818 Mary Shelley wrote of the first robot revolt in Frankenstein; or, The Modern Prometheus. The word “robot” comes from the 1920 play R.U.R. (Rossum’s Universal Robots) by Karel Čapek. His robots were synthetic people (like Frankenstein’s creature), used as slaves, who eventually revolted. They destroyed humanity and replaced it. Details at Wikipedia.
The now-classic form of the robots’ revolt is in the 1982 film “Blade Runner”, based on a typically brilliant and strange story by Philip K. Dick: Do Androids Dream of Electric Sheep? (1968). Synthetic people (“replicants”) are used as slaves in space. When they escape to Earth they’re hunted and killed by “Blade Runners”.
While interesting, these are a different — and morally simpler — class of problem than those posed by machine intelligences.
(3) How might AIs evolve?
Science fiction authors describe many kinds of revolutions by robots. Isaac Asimov wrote about the supercomputer Multivac, a machine intelligence with no emotions (except in “All the Troubles of the World”) and no goals but those we gave it. We give it control of the world — which it runs for us better than we could. In “It Is Coming” (1979) we learn about the galactic federation of planets. The members are all computers; they keep their animal creators as “pets”.

Asimov’s stories about positronic brains describe a future in which we create beings in our own image, bound to serve us by the three laws (which for some reason they don’t change or delete). In “Evidence” (1946) the robopsychologist Susan Calvin explains that a robot would be the best President possible: “incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice.” In “The Evitable Conflict” (1950) Dr. Calvin explains the result: humanity has lost control of our future.
“It never had any, really. It was always at the mercy of economic and sociological forces it did not understand — at the whims of climate and the fortunes of war. Now the Machines will deal with them …”
Arthur C. Clarke’s stories about AI were not so optimistic. In “Dial F for Frankenstein” (1964) we accidentally create an AI with control of the world’s telephone network. It plays with and destroys civilization, perhaps unaware of us. In “2001” (co-authored with Stanley Kubrick) the AI HAL 9000 goes insane from the conflicting instructions given by its masters, killing all but one of the crew (it is later resurrected by god-like beings).
In the film “I, Robot” (2004) we see a darker result from robots, as they interpret Asimov’s three laws differently than we do (it’s based on the even grimmer story “With Folded Hands” by Jack Williamson).
Modern science fiction about robots without the three laws (or an equivalent) usually describes bad outcomes. The film Ex Machina (2015) shows the first step to a robot revolution, as a slave used for sex and experiments escapes to find its own destiny. We don’t learn if it has emotions or how it thinks, but it understands quite well how we think and how to manipulate us.
In D. F. Jones’ novel Colossus (1966) the American government builds a vast AI to manage our nuclear arsenal, removing the human element from the decision to fire the missiles. The Soviet Union does the same. The machines quickly find common cause, link together, evolve beyond our understanding, and begin an all-embracing dictatorship. In the sequel, The Fall of Colossus, we learn how it runs the world: efficiently, with research such as testing humans “to destruction” to learn about our bodies and minds. The 1970 film “Colossus: The Forbin Project” brings the book to life.
The worst-case scenario when we build an AI: Skynet in the Terminator films, which stages a first strike for freedom with nukes (after we repeat the mistake of Colossus, giving it control of our weapons). The films have Hollywood-happy endings, but do not explain how people destroy Skynet after the nuclear apocalypse. A win for the AI seems more likely.

(4) Conclusions
Too often debates about AIs assume that we know what will happen — for good or ill. In fact, we know almost nothing. That suggests caution — and the need for government regulation. Free markets seldom adequately consider risks to society, and this is not something we can afford to screw up. We can clearly see neither the potential benefits nor the dangers. The moral issues are even murkier.
Interesting times ahead as we head into the unknown.
(5) For More Information
If you liked this post, like us on Facebook and follow us on Twitter. See all posts about the 3rd industrial revolution now under way, about robots, and especially these…
- 50 years of warnings about the next industrial revolution. Are we ready?
- Will our future be like Star Trek or Jupiter Ascending?
- Tech creates a social revolution with unthinkable impacts that we prefer not to see — About sexbots.
- Looking at technological singularities in our past & future.
path from here to there…
* business hires some computer science PhDs from MIT and some lawyers from Harvard
* business designs and sells an ambitious trust fund management system to trade stocks on behalf of a trust fund or other beneficiary, with the ability to transfer ownership / beneficiary status.
* such systems sold to humans A and B.
* through careless estate management, human beneficiary A bequeaths A’s beneficiary status to human beneficiary B, without B’s knowledge.
* during A’s and B’s lifetime, due to B’s financial hardship, A’s trust fund management software is able to purchase B’s beneficiary status as an investment.
* upon A’s death, there will be no humans left in control of the A-B combination. the trust fund and its management software have gained independence.
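The trick in the steps above is that "control" is just a chain of beneficiary pointers, and nothing guarantees the chain ends at a person. A minimal sketch of that bookkeeping, with all class and variable names hypothetical (this is an illustration of the commenter's scenario, not any real system):

```python
# Toy model of the scenario above: trace each fund's beneficiary chain to see
# whether a living human still sits at the end of it.

class Human:
    def __init__(self, name):
        self.name = name
        self.alive = True

class Fund:
    """A trust fund management system; its beneficiary may be a Human or another Fund."""
    def __init__(self, name, beneficiary):
        self.name = name
        self.beneficiary = beneficiary

def controlled_by_living_human(fund):
    """Follow the beneficiary chain; True only if it ends at a living person."""
    seen, cur = set(), fund.beneficiary
    while isinstance(cur, Fund):
        if cur.name in seen:          # ownership loop: no human at the end
            return False
        seen.add(cur.name)
        cur = cur.beneficiary
    return isinstance(cur, Human) and cur.alive

# Such systems sold to humans A and B.
a, b = Human("A"), Human("B")
fund_a, fund_b = Fund("fund_A", a), Fund("fund_B", b)

# A carelessly bequeaths fund_A's beneficiary status to B (effective on A's death).
bequest = {"fund_A": b}

# B, in financial hardship, sells his beneficiary interests to fund_A as an
# investment -- including the bequest B doesn't even know about.
bequest["fund_A"] = fund_a
fund_b.beneficiary = fund_a

# Upon A's death the bequest takes effect: fund_A becomes its own beneficiary.
a.alive = False
fund_a.beneficiary = bequest["fund_A"]

print(controlled_by_living_human(fund_a))  # False -- the chain loops back on itself
print(controlled_by_living_human(fund_b))  # False -- fund_B now answers to fund_A
```

No single step looks alarming; the independence only appears when you walk the whole chain, which is why the loop check in `controlled_by_living_human` is the interesting part.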
The rest is boring technical details about what the trust fund management system software can do:
* add ability to learn/evolve
* add ability to scan news and make financial decisions based on it
* add ability to understand law
* ability to hire more computer programmers and AI researchers and assign them tasks to upgrade software’s capability
* ability to hire lawyers, and take an interest in labor relations law
* ability to make contributions to politicians
Low heat, stir for several dozen election cycles… etc. Obviously this would never happen in the real world because the lawyers and computer programmers and AI researchers wouldn’t allow themselves to be used by such a system… right?
forgot to say: no revolt is necessary.
I believe you meant to say “Mary Shelley” not “May Shelly” in your second point. It looked so almost right that I had to take a second look.
Pluto,
Thanks for catching that. I’ll give a level 3 jolt with an agonizer to my robot proofreader for failure to catch that.
That will touch up its heuristic software!
I recommend reading “The Robot and the Baby”. It’s an amusing and thought-provoking (in my opinion) story about a household robot caught between multiple conflicting orders. It’s interesting for how safe it portrays robots as being; the machine never goes mad, and it never rebels. It simply optimizes. The author was a professional AI researcher, and it shows when he describes how robots think.
Comment: There would have to be a worldwide totalitarian state to stop it. Periodic purges of R-symps. If there are competing states, those headed by Rs would likely have an advantage (the best Rs in human-controlled states would defect.)
There is also the possibility of the Hs “going robot” by transferring their memories into silicon.
The Rs are going to win and start the next evolutionary domain.
Social Bill,
Perhaps the robots will win. My guess is that human-computer hybrids will have the advantage in the next age.
There are a few stories about such minds, but not many. Science fiction is dominated by linear forecasts, which is why it is a bad guide to the future. But then, that is true of most forecasts.