Made-Up Words Trick AI Text-To-Image Generators

Carefully crafted patterns can fool face recognition systems. So one scientist designed nonsense words to see whether they could similarly trick text-to-image generators.

(Image credit: Titima Ongkantong/Shutterstock)


Adversarial images are pictures that contain carefully crafted patterns designed to fool computer vision systems. The patterns cause otherwise powerful face or object recognition systems to misidentify objects or faces they would normally identify correctly.
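
To make this concrete, here is a minimal sketch of one classic attack, the fast gradient sign method, which nudges each pixel in the direction that most increases the classifier's error. The PyTorch setup, the ResNet-18 model, and the random stand-in image are illustrative assumptions, not details from the research described in this article.

```python
import torch
import torchvision.models as models

# Any pretrained classifier works; ResNet-18 is just a convenient stand-in.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.rand(1, 3, 224, 224)   # stand-in for a real photo
label = torch.tensor([207])          # the class the model currently predicts

# Compute the gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()

# Step each pixel slightly in the direction that increases the loss.
epsilon = 0.03                       # perturbation budget (barely visible)
adversarial = (image.detach() + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    print(model(adversarial).argmax(dim=1))  # often no longer the original class
```

The perturbation is small enough that a person sees essentially the same picture, yet it can flip the model's prediction.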

This kind of deliberate trickery has important implications, since malicious users could exploit it to bypass security systems.

It also raises interesting questions about other kinds of computational intelligence, such as text-to-image systems, in which a user types a word or phrase and a specially trained neural network conjures up a photorealistic image from it. But are these systems also susceptible to adversarial attack, and if so, how?
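
One way to probe this question, sketched below, rests on an assumption rather than on any published method from the work described here: if the text encoder behind a generator maps a made-up string close to a real concept in embedding space, the generator may render that concept's imagery. The CLIP model stands in for a generator's text encoder, and the nonsense word "falaris vogelis" is a hypothetical example invented for illustration.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# CLIP's text encoder stands in for the text side of a text-to-image system.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

words = ["falaris vogelis", "bird", "car", "house"]  # first entry is made up
inputs = processor(text=words, return_tensors="pt", padding=True)
with torch.no_grad():
    embeddings = model.get_text_features(**inputs)
embeddings = embeddings / embeddings.norm(dim=-1, keepdim=True)

# Cosine similarity between the nonsense word and each real word: a high
# score suggests the made-up string lands near a real concept.
for word, score in zip(words[1:], embeddings[0] @ embeddings[1:].T):
    print(f"{word}: {score:.3f}")
```

If a nonsense string scores much closer to one real word than to the others, that is a hint the generator might depict that word when fed the gibberish, which is the kind of behavior an adversarial-word attack would exploit.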
