AI Machines Have Beaten Moore's Law Over The Last Decade, Say Computer Scientists

Computational performance has followed Moore's Law since the dawn of the computer age. Not any more.

Since the 1990s, computer scientists have measured the performance of the world’s most powerful supercomputers using benchmarking tasks. Twice a year, they publish a ranking of the top 500 machines, with fierce competition among nations to come out on top. The history of this ranking shows that over time, supercomputing performance has increased in line with Moore’s Law, doubling roughly every 14 months.

But no equivalent ranking exists for AI systems, even though deep learning has produced a step change in what these machines can do. They have become capable of matching or beating humans at tasks such as object recognition, the ancient Chinese game of Go, many video games and a wide variety of pattern recognition tasks.

For computer scientists, that raises the question of how to measure the performance of these AI systems, how to study the rate of improvement and whether these improvements have followed Moore’s Law or outperformed it.

Now we get an answer thanks to the work of Jaime Sevilla at the University of Aberdeen in the UK and colleagues, who have tracked the computational power used to train AI systems since 1959. This team say that during the last ten years, the compute used to train notable AI systems has doubled every six months or so, significantly outpacing Moore’s Law.
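To get a feel for how big that gap is, a quick back-of-envelope calculation helps. The sketch below simply compounds the two doubling times quoted in this article (14 months for supercomputing performance, six months for AI training compute) over a decade:

```python
# Back-of-envelope comparison of the two doubling times quoted above:
# ~14 months for supercomputing performance (Moore's-Law pace) versus
# ~6 months for AI training compute.
def growth_factor(doubling_time_months: float, span_months: float) -> float:
    """Multiplicative growth over span_months at the given doubling time."""
    return 2 ** (span_months / doubling_time_months)

decade = 120  # months
print(f"14-month doubling over a decade: {growth_factor(14, decade):,.0f}x")
print(f"6-month doubling over a decade:  {growth_factor(6, decade):,.0f}x")
# Roughly 380x versus about a million-fold.
```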

This improvement has come about because of the convergence of three factors. The first is the development of new algorithmic techniques, largely based on deep learning and neural networks. The second is the availability of large datasets for training these machines. The final factor is increased computational power.

While the contributions of new datasets and improved algorithms are hard to quantify and rank, computational power is relatively easy to determine. And that has pointed Sevilla and others towards a way to measure the progress of AI systems.

Their approach is to measure the amount of computational power required to train an AI system. Sevilla and colleagues have done this for 123 milestone achievements by AI systems throughout the history of computing.
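Estimates of this kind rest on standard compute accounting. One widely used heuristic is operation counting: the backward pass of training costs roughly twice the forward pass, so each training example processed costs about three forward passes’ worth of floating-point operations. A minimal sketch of that arithmetic (the specific numbers below are illustrative placeholders, not figures from the paper):

```python
def training_compute_flop(forward_pass_flop: float,
                          n_examples: float,
                          n_epochs: float = 1.0) -> float:
    """Estimate total training compute by operation counting.

    Assumes the backward pass costs roughly twice the forward pass,
    so each example processed during training costs about three
    forward passes' worth of floating-point operations. This is a
    common heuristic, not the paper's exact bookkeeping.
    """
    return 3 * forward_pass_flop * n_examples * n_epochs

# Illustrative, hypothetical numbers: a model whose forward pass costs
# 1e9 FLOP, trained for 90 epochs over 1.2 million images.
print(f"{training_compute_flop(1e9, 1.2e6, 90):.2e} FLOP")
```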

They say that between 1959 and 2010, the amount of computational power used to train AI systems doubled every 17 to 29 months. They call this period the pre-Deep Learning Era. “The trend in the pre-Deep Learning Era roughly matches Moore’s law,” conclude Sevilla and co.

The team say that the modern era of deep learning is often thought to have started in 2012 with the creation of an object recognition system called AlexNet. However, Sevilla and co say that their data suggests the sharp improvement in AI performance probably began a little earlier, in 2010.

This, they say, marked the beginning of the Deep Learning Era, and progress between 2010 and 2022 has been far faster. “Subsequently, the overall trend speeds up and doubles every 4 to 9 months,” they say.
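A doubling time like this falls out of a log-linear fit to the milestone data: plot the logarithm of training compute against each system’s date, and the slope of the best-fit line gives the growth rate. A minimal sketch of that calculation, using made-up placeholder milestones rather than the team’s dataset:

```python
import numpy as np

# Hypothetical (year, training FLOP) milestones: placeholders only,
# not the team's dataset.
years   = np.array([2012.0, 2014.5, 2017.0, 2019.5, 2022.0])
compute = np.array([5e17, 1e20, 5e22, 1e24, 5e25])

# Fit log2(compute) against year; the slope is doublings per year,
# so the doubling time in months is 12 divided by the slope.
slope, _ = np.polyfit(years, np.log2(compute), 1)
print(f"doubling time = {12 / slope:.1f} months")
```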

That significantly outpaces Moore’s Law. But how has this been achieved, given that the improvement in the chips themselves has continued to follow Moore’s Law?

Parallel Processing

The answer comes partly from a trend for AI systems to use graphics processing units (GPUs) rather than central processing units (CPUs). GPUs perform many simple calculations simultaneously, which suits the matrix arithmetic at the heart of neural network training.

These processors can also be wired together on a large scale. So another factor that has allowed AI systems to outpace Moore’s Law is the creation of ever larger machines built from ever greater numbers of GPUs.
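That hardware scale feeds directly into the other common way of estimating training compute: multiply the number of processors by their peak speed, the wall-clock training time and an assumed average utilization. A rough sketch, in which all the figures, including the 30 percent utilization, are assumed illustrative values rather than measurements:

```python
def cluster_training_compute(n_gpus: int,
                             peak_flop_per_s: float,
                             training_days: float,
                             utilization: float = 0.3) -> float:
    """Estimate training compute from hardware: number of GPUs times
    peak speed times wall-clock time, scaled by an assumed average
    utilization (real training rarely hits peak throughput)."""
    seconds = training_days * 24 * 3600
    return n_gpus * peak_flop_per_s * utilization * seconds

# Illustrative, hypothetical numbers: 1,000 GPUs at 100 TFLOP/s peak,
# running for 30 days at 30% utilization.
print(f"{cluster_training_compute(1_000, 1e14, 30):.2e} FLOP")
```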

This trend has led to the development of systems such as AlphaGo and AlphaFold, which cracked Go and protein folding respectively. “These large-scale models were trained by large corporations, whose larger training budgets presumably enabled them to break the previous trend,” say Sevilla and co.

The team say the development of these large-scale machines since 2015 has become a trend in itself, which they dub the Large-Scale Era, running in parallel with the Deep Learning Era.

That’s interesting work that reveals the huge investment in AI, and its payoff, over the last decade or so. Sevilla and co are not the only group studying AI progress in this way, and indeed different groups report somewhat different rates of improvement.

However, the common approach suggests that it ought to be possible to measure AI performance on an ongoing basis, perhaps in a way that produces a ranking of the world’s most powerful machines, much like the TOP500 ranking of supercomputers.

The race to build the most powerful AI machines has already begun. Last month, Facebook’s owner, Meta, announced that it had built the world’s most powerful supercomputer devoted to AI. Just where it sits according to Sevilla and co’s measure isn’t clear, but it surely won’t be long before a competitor challenges that position. Perhaps it’s time computer scientists put their heads together to collaborate on a ranking system that will help keep the record straight.


Ref: Compute Trends Across Three Eras Of Machine Learning: arxiv.org/abs/2202.05924
