The Computational Limits of Deep Learning Are Closer Than You Think

Deep learning eats so much power that even small advances will be unfeasible given the massive environmental damage they will wreak, say computer scientists.

By The Physics arXiv Blog
Jul 25, 2020
(Credit: Zapp2Photo/Shutterstock)


Deep in the bowels of the Smithsonian National Museum of American History in Washington, D.C., sits a large metal cabinet the size of a walk-in wardrobe. The cabinet houses a remarkable computer — the front is covered in dials, switches and gauges, and inside, it is filled with potentiometers controlled by small electric motors. Behind one of the cabinet doors is a 20 by 20 array of light sensitive cells, a kind of artificial eye.

This is the Perceptron Mark I, a simplified electronic version of a biological neuron. It was designed in the late 1950s by the American psychologist Frank Rosenblatt at Cornell University, who taught it to recognize simple shapes such as triangles.

Rosenblatt’s work is now widely recognized as the foundation of modern artificial intelligence but, at the time, it was controversial. Despite its original success, researchers were unable to build on it, not least because more complex pattern recognition required vastly more computational power than was available at the time. This insatiable appetite prevented further study of artificial neurons and the networks they create.

Today’s deep learning machines also eat power, lots of it. And that raises an interesting question about how much they will need in future. Is this appetite sustainable as the goals of AI become more ambitious?

Today we get an answer thanks to the work of Neil Thompson at the Massachusetts Institute of Technology in Cambridge and several colleagues. This team has measured the improvement in deep learning performance in recent years and shows that it depends on increases in computing power.

Environmentally Unsustainable

By extrapolating this trend, they say that future advances will soon become unfeasible. “Progress along current lines is rapidly becoming economically, technically and environmentally unsustainable,” say Thompson and colleagues, echoing the problems that emerged for Rosenblatt in the 1960s.

The team’s approach is relatively straightforward. They analyzed over 1000 papers on deep learning to understand how learning performance scales with computational power. The answer is that the correlation is clear and dramatic.

In 2009, for example, deep learning was too demanding for the computer processors of the time. “The turning point seems to have been when deep learning was ported to GPUs, initially yielding a 5-15x speed-up,” they say.

This provided the horsepower for a neural network called AlexNet, which famously triumphed in a 2012 image recognition challenge where it wiped out the opposition. The victory created huge and sustained interest in deep neural networks that continues to this day.

But while deep learning performance increased by 35x between 2012 and 2019, the computational power behind it increased by an order of magnitude each year. Indeed, Thompson and co say this and other evidence suggest the computational power devoted to deep learning has increased 9 orders of magnitude faster than the performance.

So how much computational power will be required in future? Thompson and co say that the error rate for image recognition is currently 11.5 percent, achieved using 10^14 gigaflops of computational power at a cost of millions of dollars (i.e. 10^6 dollars).

They say achieving an error rate of just 1 percent will require 10^28 gigaflops. And extrapolating at the current rate, this will cost 10^20 dollars. By comparison, the total amount of money in the world right now is measured in trillions, i.e. 10^12 dollars.

What’s more, the environmental cost of such a calculation will be enormous: an increase of 14 orders of magnitude in the amount of carbon produced.
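To make the arithmetic behind these figures concrete, here is a minimal back-of-the-envelope sketch using only the numbers quoted above (11.5 percent error at roughly 10^14 gigaflops and about 10^6 dollars, versus 10^28 gigaflops for 1 percent error). The power-law form of the error curve and the assumption that dollar cost and carbon scale roughly linearly with compute are illustrative assumptions, not the authors' exact model.

```python
import math

# Figures quoted in the article
err_now, flops_now, cost_now = 0.115, 1e14, 1e6   # current error rate, gigaflops, dollars
err_target, flops_target = 0.01, 1e28             # quoted figures for a 1 percent error rate

# Implied exponent if error falls as a power law, err ~ flops**(-alpha)
alpha = math.log(err_now / err_target) / math.log(flops_target / flops_now)

# Assume cost and carbon scale roughly linearly with compute (illustrative assumption)
scale_up = flops_target / flops_now               # a factor of 10^14
cost_target = cost_now * scale_up                 # ~10^20 dollars
carbon_orders = math.log10(scale_up)              # ~14 orders of magnitude more carbon

print(f"implied scaling exponent alpha ≈ {alpha:.3f}")          # a very shallow ~0.076
print(f"estimated cost ≈ 10^{math.log10(cost_target):.0f} dollars")
print(f"carbon increase ≈ {carbon_orders:.0f} orders of magnitude")
```

The very small implied exponent is the crux of the argument: when error falls this slowly with compute, each further percentage point of accuracy demands exponentially more hardware, money and energy.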

The future isn’t entirely bleak, however. Thompson and co’s extrapolations assume that future deep learning systems will use the same kinds of computers that are available today.

Neuromorphic Advances

But various new approaches offer much more efficient computation. For example, in some tasks the human brain can outperform the best supercomputers while running on little more than a bowl of porridge. Neuromorphic computing attempts to copy this. And quantum computing promises orders of magnitude more computing power with relatively little increase in power consumption.

Another option is to abandon deep learning entirely and concentrate on other forms of machine learning that are less power hungry.

Of course, there is no guarantee that these new techniques and technologies will work. But if they don’t, it’s hard to see how artificial intelligence will get much better than it is now.

Curiously, something like this happened after the Perceptron Mark I first appeared: a lull that lasted for decades and is now known as the AI winter. The Smithsonian doesn’t currently have the machine on display, but it surely marks a lesson worth remembering.


Ref: The Computational Limits of Deep Learning. arxiv.org/abs/2007.05558.
