Adversarial images are pictures containing carefully crafted patterns designed to fool computer vision systems. The patterns cause otherwise powerful face- or object-recognition systems to misidentify people or objects they would normally recognize.
This kind of deliberate trickery has serious security implications, since malicious users could exploit it to bypass recognition-based security systems.
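The core idea behind crafting such patterns can be sketched with the fast gradient sign method (FGSM), one well-known attack: nudge each pixel of an image in the direction that most increases the classifier's loss. The toy logistic "classifier" below stands in for a real deep network and is purely illustrative; its weights, dimensions, and perturbation budget are assumptions, not details from any particular system.

```python
import numpy as np

# Illustrative FGSM sketch against a toy logistic classifier.
# Real attacks target deep networks, but the principle is the same.
rng = np.random.default_rng(0)
w = rng.normal(size=64)          # toy model weights (hypothetical)
x = rng.normal(size=64)          # a flattened "clean image" (hypothetical)
y = 1.0                          # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(img):
    # Binary cross-entropy of the toy classifier on (img, y)
    p = sigmoid(w @ img)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss with respect to the input pixels
# (analytic for this simple model)
grad = (sigmoid(w @ x) - y) * w

eps = 0.1                        # perturbation budget: kept small so the
                                 # change is barely perceptible
x_adv = x + eps * np.sign(grad)  # FGSM step: push pixels against the model

print(loss(x), loss(x_adv))     # the adversarial image has higher loss
```

Because each pixel moves only by a tiny amount, the perturbed image looks essentially unchanged to a human, yet the classifier's confidence in the correct label drops.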
It also raises interesting questions about other kinds of computational intelligence, such as text-to-image systems, where a user types a word or phrase and a specially trained neural network uses it to conjure up a photorealistic image. Are these systems also susceptible to adversarial attack, and if so, how?