The term “deepfake” has penetrated the 21st-century vernacular, mainly in relation to videos that convincingly replace the likeness of one person with that of another. These often insert celebrities into pornography, or depict world leaders saying things they never actually said.
But anyone with the know-how can also use similar artificial intelligence strategies to fabricate satellite images, a practice known as “deepfake geography.” Researchers caution that such misuse could open new channels of disinformation and even threaten national security.
A recent study led by researchers at the University of Washington is likely the first to investigate how these doctored photos can be created and eventually detected. The technique isn’t traditional photoshopping, but something far more sophisticated, says lead author and geographer Bo Zhao. “The approach is totally different,” he says. “It makes the image more realistic,” and therefore more troublesome.
Is Seeing Believing?
Geographic manipulation is nothing new, the researchers note. In fact, they argue that deception is inherent in every map. “One of the biases about a map is that it is the authentic representation of the territory,” Zhao says. “But a map is a subjective argument that the mapmaker is trying to make.” Think of American settlers pushing their border westward (both on paper and through real-life violence), even as the natives continued to assert their right to the land.
Maps can lie in more overt ways, too. It’s an old trick for cartographers to place imaginary sites, called “paper towns,” within maps to guard against copyright infringement. If a forger unwittingly includes the faux towns — or streets, bridges, rivers, etc. — then the true creator can prove foul play. And over the centuries, nations have frequently wielded maps as just another tool of propaganda.
While people have long tampered with information about our surroundings, deepfake geography comes with a unique problem: its uncanny realism. As with the recent spate of Tom Cruise impersonation videos, these digital imposters can be all but impossible to detect, especially with the naked, untrained eye.
To better understand these phony yet convincing photos, Zhao and his colleagues devised a generative adversarial network, or GAN — a type of machine-learning model that’s often used to create deepfakes. It’s essentially a pair of neural networks designed to compete in a game of wits. One of them, known as the generator, produces fake satellite images based on its experience with thousands of real ones. The other, the discriminator, attempts to detect the frauds by analyzing a long list of criteria like color, texture and sharpness. After many rounds of this contest, the generator’s output appears nearly indistinguishable from reality.
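To make the generator-versus-discriminator contest concrete, here is a minimal, hypothetical training-loop sketch in PyTorch. It is not the study’s actual model: the tile size, network shapes and the random stand-in for “real” imagery are placeholders chosen purely for illustration.

```python
# Minimal GAN sketch (illustrative only, not the researchers' model).
# The generator maps random noise to a fake "satellite tile"; the
# discriminator scores tiles as real or fake. Each network improves by
# competing against the other.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3   # flattened 64x64 RGB tile (toy size, an assumption)
Z = 100             # latent noise dimension

generator = nn.Sequential(
    nn.Linear(Z, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),      # fake tile with values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),     # probability the tile is real
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Stand-in batch of "real" tiles; a real pipeline would load imagery here.
    real = torch.rand(32, IMG) * 2 - 1

    # 1) Train the discriminator to separate real tiles from generated ones.
    fake = generator(torch.randn(32, Z)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to produce tiles the discriminator accepts as real.
    fake = generator(torch.randn(32, Z))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In practice, the study’s hybrids are conditioned on existing base maps rather than generated from pure noise, but the adversarial back-and-forth shown above is the core mechanism.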
Zhao and his colleagues started with a map of Tacoma, Washington, then transferred the visual patterns of Seattle and Beijing onto it. The hybrids don’t exist anywhere in the world, of course, but the viewer could be forgiven for assuming they do — they look as legitimate as the authentic satellite images they were derived from.
Telling Truth From Fiction
This exercise may seem harmless, but deepfake geography can be harnessed for more nefarious purposes (and likely already has been, though such information is typically classified). It has therefore caught the eye of security officials: In 2019, Todd Myers, automation lead for the CIO-Technology Directorate at the National Geospatial-Intelligence Agency, acknowledged the nascent threat at an artificial intelligence summit.
For example, he says, a geopolitical foe could alter satellite data to trick military analysts into seeing a bridge in the wrong place. “So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there,” Myers said at the time. “Then there’s a big surprise waiting for you.”
And it’s easy to dream up other malicious deepfake schemes. The technique could be used to spread all sorts of fake news, like sparking panic about imaginary natural disasters, and to discredit actual reports based on satellite imagery.
To combat these dystopian possibilities, Zhao argues that society as a whole must cultivate data literacy — learning when, how and why to trust what you see online. In the case of satellite images, the first step is to recognize that any particular photo you encounter may come from a less-than-reputable origin rather than a trusted source like a government agency. “We want to demystify the objectivity of satellite imagery,” he says.
Approaching such images with a skeptical eye is essential, as is gathering information from reliable sources. As an extra tool, Zhao is also considering developing a platform where the average person could help verify the authenticity of satellite images, similar to existing crowd-sourced fact-checking services.
The technology behind deepfakes shouldn’t just be viewed as evil, either. Zhao notes that the same machine-learning tactics can improve image resolution, fill the gaps in a series of photos needed to model climate change, or streamline the mapmaking process, which still requires plenty of human supervision. “My research is motivated by the potential malicious use,” he says. “But it can also be used for good purposes. I would rather people develop a more critical understanding about deepfakes.”