
The blockbuster 1991 sequel Terminator 2: Judgment Day filled out the story a little. It springs from another time paradox: the central processing unit and right arm of the original Terminator survived its destruction and enabled Cyberdyne scientist Miles Bennett Dyson (Joe Morton) to design Skynet. The heroes’ task now is not just to save 10-year-old John Connor from the time-travelling T-1000 but to destroy Skynet in the digital cradle. (This was Cameron’s last word on the subject until he produced and co-wrote 2019’s Terminator: Dark Fate. He recently told Empire magazine that all the intervening sequels were “discountable”.)

In Terminator 2, a Schwarzenegger-shaped T-800 is protector rather than hunter, and therefore the bearer of exposition: “The system goes on-line August 4th, 1997. Human decisions are removed from strategic defence. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.” Skynet fights back by launching nuclear missiles at Russia, in the knowledge that the counter-attack will devastate the US. Three billion people die in 24 hours: Judgement Day.

This is a fundamentally different account to Reese’s. In the first film, Skynet overinterprets its programming by deeming all of humanity a threat. In the second, it is acting out of self-interest. The contradiction does not trouble most viewers, but it does illustrate a crucial disagreement about the existential risk of AI.


The layperson is likely to imagine unaligned AI as rebellious and malevolent. But the likes of Nick Bostrom insist that the real danger is from careless programming. Think of the sorcerer's broom in Disney's Fantasia: a device that obediently follows its instructions to ruinous extremes. The second type of AI is not human enough – it lacks common sense and moral judgement. The first is too human – selfish, resentful, power-hungry. Both could in theory be genocidal.

The Terminator therefore both helps and hinders our understanding of AI: what it means for a machine to "think", and how it could go horrifically wrong. Many AI researchers resent the Terminator obsession altogether for exaggerating the existential risk of AI at the expense of more immediate dangers such as mass unemployment, disinformation and autonomous weapons. "First, it makes us worry about things that we probably don't need to fret about," writes Michael Wooldridge. "But secondly, it draws attention away from those issues raised by AI that we should be concerned about."

Cameron revealed to Empire that he is plotting a new Terminator film which will discard all the franchise’s narrative baggage but retain the core idea of “powerless” humans versus AI. If it comes off, it will be fascinating to see what the director has to say about AI now that it is something we talk – and worry – about every day. Perhaps The Terminator’s most useful message to AI researchers is that of “will vs fate”: human decisions determine outcomes. Nothing is inevitable.

Dorian Lynskey is the author of Everything Must Go: The Stories We Tell About the End of the World (April 2024).


