The Coming Identity Crisis for AI

Artificially intelligent behavior is emergent rather than designed, which is why we have so little understanding of what these systems are capable of. Now computer scientists say this needs to change before AI's widespread deployment.

Artificially intelligent machines have become part of the fabric of technology in recent years. They are revolutionizing law, healthcare, education and numerous business models.

Indeed, AI has reached a watershed moment, according to Rishi Bommasani, Percy Liang and colleagues at Stanford University in the heart of Silicon Valley. “AI is undergoing a paradigm shift with the rise of models that are trained on broad data at scale and are adaptable to a wide range of downstream tasks,” they say. “We call these models foundation models.”

Foundation models have huge potential but they also represent a significant risk. “Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties,” say Bommasani and co.

So this team set out to explore the nature of foundation models, the general-purpose models on which many other AI systems are built, to see where they are being applied and what risks this raises. And their conclusions make for interesting reading.

Emergent Behavior

One problem with these models is that their behavior is emergent rather than designed. So it is not always possible to know what these systems will do. “[Emergence] is both the source of scientific excitement and anxiety about unanticipated consequences,” say the researchers.

Another problem is that these models are now the basis for many others. That means they can be applied to a wide range of circumstances but also that any problems are baked in—they are inherited by all descendants.
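To see how that inheritance works in practice, here is a minimal sketch in Python, assuming the open-source Hugging Face transformers library and a publicly available pretrained checkpoint (illustrative choices, not ones prescribed by the Stanford team). A new classifier starts from the foundation model's weights, so it starts from the foundation model's flaws too.

```python
# A minimal sketch of how a downstream system is built on top of a pretrained
# foundation model. The "transformers" library and the "bert-base-uncased"
# checkpoint are illustrative assumptions, not named in the paper.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the pretrained foundation model. Its weights, and whatever biases or
# flaws they encode, become the starting point for the new task.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",  # the shared parent model every descendant inherits
    num_labels=2,         # a fresh task-specific head, e.g. for sentiment
)

# Fine-tuning on task data adjusts these inherited weights, but behavior
# learned during pretraining is never fully erased, so problems in the
# foundation model can propagate to every model derived from it.
```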

The profit-driven environments of startups and big companies are not necessarily the best places to explore the potential problems this can lead to. “The commercial incentive can lead companies to ignore social externalities such as the technological displacement of labor, the health of an informational ecosystem required for democracy, the environmental cost of computing resources, and the profit-driven sale of technologies to non-democratic regimes,” say Bommasani and co.

Often, when developing a new product, the drive to be first overrides all other considerations and leads to behavior that is hard to justify. The team give the example of Clearview AI, a company that took a legally questionable approach: it developed facial recognition software using photos scraped from the internet, without the consent of either their owners or the image-hosting companies. Clearview then sold the software to organizations such as police departments.

Insidious Consequences

The consequences of widespread use of foundation models could be more insidious. “As a nascent technology, the norms for responsible foundation model development and deployment are not yet well established,” say the researchers.

All this needs to change, and quickly. Bommasani and co say that the academic community is well set up to take on this challenge because it brings together scholars from a wide range of disciplines and is not driven by profit. “We therefore see academia as playing a crucial role in developing foundation models in such a way to promote their social benefit and mitigate their social harms,” say the team. “Universities and colleges may also contribute to the establishment of norms by auditing existing foundation models and publishing their findings, instituting ethics review boards, and developing their own foundation models.”

That will be an important job. Ensuring the fair and equitable use of AI must be a priority for modern democracies, not least because AI has the potential to threaten the livelihoods of a significant portion of the global population. Nobody is quite sure which jobs will be safe from the inevitable progression of automated decision-making, but very few are likely to remain unchanged.

Clearly, for-profit companies are not up to the task of navigating this future with society’s best interests at heart. Other groups from academia and non-profit organizations are better placed to provide leadership. Whether they will is less clear, but Bommasani and co’s contribution is a start.


Ref: On the Opportunities and Risks of Foundation Models: arxiv.org/abs/2108.07258
