‘Avengers: Age of Ultron’ and the Risks of Artificial Intelligence

By E. Paul Zehr
May 1, 2015 (updated May 21, 2019)


Technology enhanced with artificial intelligence is all around us. You might have a robot vacuum cleaner ready to leap into action to clean up your kitchen floor. Maybe you’ve already asked Siri or Google—two prominent consumer applications of AI—for some help today. The continual enhancement of AI and its increased presence in our world speak to achievements in science and engineering that have tremendous potential to improve our lives.

Or destroy us.

At least, that’s the central theme of the new Avengers: Age of Ultron movie, with the titular Ultron serving as an exemplar of AI gone bad. It’s a timely theme, given some high-profile AI warnings in recent months. But is it something we should really be worried about?

Artificial Intelligence Gone Rogue

How bad is Ultron? The Official Handbook of the Marvel Universe lists his occupation as “would-be conqueror, enslaver of men,” with genius intelligence, superhuman speed, stamina, reflexes, and strength, subsonic flight speed, and demi-godlike durability. The good news is that Ultron has “normal” agility and “average hand-to-hand skills.” Meaning that if you can get in close to an autonomous robot with superhuman speed, you should be good to go. At least briefly.

But perhaps most importantly, Ultron represents the ultimate example of artificial intelligence applications gone wrong: intelligence that seeks to overthrow the humans who created it.

Subsequent iterations of Ultron were self-created, each one getting stronger, smarter, and more bent on fulfilling two main desires: survival and bringing peace and order to the universe. The unfortunate part for us humans is that Ultron would like to bring peace and order by eliminating all other intelligent life in the universe. The main theme in Age of Ultron is this fictional conflict between biological beings and artificial intelligence (with a mean streak). But how fictional is it?

Thinking Machines

The answers are found in scientific research related to the fields of machine learning, artificial intelligence, and artificial life. These are fields that continue to expand at a ridiculous, if not superhuman, pace.

One of the most recent breakthroughs was a study in which Volodymyr Mnih and colleagues at Google DeepMind challenged a neural network to learn how to play classic Atari video games.

The point was to see if the software (rather ominously called a “deep Q-network agent”) could apply lessons learned in one game to master another game. For more than half of the games examined, the deep Q-network agent was better than human level. This list includes Boxing, Video Pinball, Robotank (a favorite of mine), and Tutankham.

And though arcade games may seem trivial, the takeaway here really had nothing to do with games per se. What matters is that an AI system could adapt its skills to situations its programmers had never prepared it for. The agent was effectively learning how to apply skills in new ways, basically thinking on its own. That capacity is worth keeping in mind when considering the possibility of an AI going rogue.
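To make the idea concrete, here is a heavily simplified Python sketch (using PyTorch) of the core loop behind deep Q-learning: a neural network estimates the value of each possible action, the agent usually picks the best-looking action but sometimes explores, and it learns by replaying random batches of past experience. The toy environment, network size, and hyperparameters below are illustrative assumptions; the real system learned from raw Atari pixels and used additional stabilizing machinery, such as a separate target network.

```python
# A minimal sketch of the core deep Q-learning update, loosely in the spirit
# of the DeepMind Atari work described above. The "environment" here is a toy
# stand-in (random states and rewards), not an Atari emulator.
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA, EPSILON = 4, 2, 0.99, 0.1  # assumed values

# Q-network: maps a state to one estimated value per action.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def act(state):
    """Epsilon-greedy: usually exploit the best-looking action, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(state).argmax().item()

replay = []  # experience replay buffer of (state, action, reward, next_state)

for step in range(500):
    state = torch.randn(STATE_DIM)   # toy observation
    action = act(state)
    reward = random.random()         # toy reward signal
    next_state = torch.randn(STATE_DIM)
    replay.append((state, action, reward, next_state))

    # Learn from a random minibatch of past experience (the "replay" trick
    # that helped stabilize training in the DeepMind work).
    batch = random.sample(replay, min(32, len(replay)))
    s = torch.stack([b[0] for b in batch])
    a = torch.tensor([b[1] for b in batch])
    r = torch.tensor([b[2] for b in batch])
    s2 = torch.stack([b[3] for b in batch])

    q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)    # Q(s, a)
    with torch.no_grad():
        q_target = r + GAMMA * q_net(s2).max(dim=1).values    # Bellman target
    loss = nn.functional.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The crucial point for the discussion above is that nothing in this loop is specific to one game: swap in a different source of states and rewards, and the same code keeps learning.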

IBM's Watson computer is a well-known instance of AI. Credit: Clockready

Sounding an Alarm

So, is this a problem? Popular media coverage often spins machine learning and artificial intelligence as things to fear. There is a boundary that separates helpful applications of AI—imagine robot-conducted surgery performed in a remote community and overseen by a physician in a distant location—from truly frightening scenarios of near-future military applications. Imagine current combat drone technology combined with artificial intelligence engines that give machines independence in warfare.

The real problem is that we don’t often recognize that we have crossed these kinds of boundaries until we are already on the other side. In science we often push to discover and apply things before we truly understand all the implications—both positive and negative—that will accompany them. We often do things because we can without fully considering if we should, in fact, do them at all.

It’s a sentiment that has been echoed, somewhat surprisingly, by various tech cognoscenti in recent months. In late 2014, Tesla CEO Elon Musk told an MIT symposium, “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” And in January he put his money behind the cause, donating $10 million to the Future of Life Institute, a non-profit focused on AI safety.

Bill Gates revealed his reservations about AI in a Reddit “Ask Me Anything” session later that same month, writing, “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

And last year, Stephen Hawking co-authored an article on the risks of AI, saying it could be the “worst mistake in history.”

A Different Vision

Central to these concerns is artificial intelligence’s theoretical independence from human oversight. To avoid such extreme independence—and that sci-fi endgame of Ultron—maybe we’d be better off adopting the approach of “collaborative intelligence,” as computer scientist Susan Epstein proposed in a recent study.

We traditionally build machines because we need help, Epstein writes. But perhaps a less-capable machine could be equally helpful, by allowing humans to do things that they’re better at anyway, such as pattern recognition and problem solving. In other words, built-in inabilities in our intelligent robots could allow them to perform their jobs better while keeping them in check—though at the cost of requiring more interaction with their human overseers.
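What might that look like in practice? Here is a minimal, purely illustrative Python sketch of one human-in-the-loop pattern consistent with that idea: the machine handles routine cases on its own but is deliberately built so that it must defer to a human overseer whenever its confidence drops below a threshold. The function names and the 0.90 cutoff are assumptions for the sake of the example, not details from Epstein’s proposal.

```python
# A toy sketch of "collaborative intelligence": the machine acts alone on
# routine cases but has no code path for hard cases other than asking a human.

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff below which the machine must defer

def classify(sample):
    """Stand-in for any trained model: returns a (label, confidence) pair."""
    return "anomaly", 0.72  # hypothetical, deliberately uncertain output

def collaborative_decision(sample, ask_human):
    label, confidence = classify(sample)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # routine case: the machine acts on its own
    # Hard case: the machine is built so it *cannot* proceed alone.
    return ask_human(sample, label, confidence)

def human_overseer(sample, label, confidence):
    """In a real system this would route to a person, e.g. the remote surgeon."""
    print(f"Machine suggests {label!r} at {confidence:.0%} confidence; deferring.")
    return "human-reviewed: " + label

print(collaborative_decision("sensor reading", human_overseer))
```

The design choice is the point: deferral is not a courtesy the machine can skip, it is the only path available for uncertain cases.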

In the tradition of sci-fi futurists Jules Verne, H.G. Wells, and Isaac Asimov, the future is “supposed to be a fully automated, atomic-powered, germ-free utopia,” Daniel H. Wilson wrote some years back. A collaborative view of AI, on the other hand, treats robots as tools—sometimes very smart ones—that humans can employ and work with, rather than as a replacement for humans altogether.

This view, though, is at odds with the imperative to instrument and mechanize operations of all sorts wherever they are found. The end game—as Ultron’s creators discover—has disastrous ramifications. We all get to enjoy watching this dystopian future play out on the big screen this week. Luckily for our future selves, in the real world these conversations are still happening as we continue to progress toward smarter and smarter machines.

But maybe not too smart. I still want to win at Robotank.

Top image courtesy Marvel
