
Deng co-authored a recent article in Cartography and Geographic Information Science with four colleagues: Bo Zhao and Yifan Sun at the University of Washington, and Shaozeng Zhang and Chunxue Xu at Oregon State University. In “Deep fake geography? When geospatial data encounter Artificial Intelligence,” they examine how false satellite images could be constructed and detected. News of the research has been picked up by media in countries around the world, including China, Japan, Germany and France.

Geographic Information Science (GIS) is hugely beneficial today, with applications in autonomous vehicles, data analysis, daily navigation and more. AI in particular has had a marked impact on the discipline through geospatial AI, which uses machine learning to extract and analyse geospatial data. However, the same methods could also be used to fabricate GPS signals, supply incorrect geolocation information and even fake photographs of geographic locations.

“We need to keep all of this in accordance with ethics. But at the same time, we researchers also need to pay attention and find a way to differentiate or identify those fake images,” Deng said. “With a lot of data sets, these images can look real to the human eye.”

According to a note released by Binghamton University, to identify a fake image, one first has to learn how to build one. A technique commonly used to create deepfakes, Cycle-Consistent Generative Adversarial Networks (CycleGAN), an unsupervised deep learning algorithm, can simulate synthetic media. GANs require training samples of the content they are meant to produce; a black box on a map, for instance, could represent any number of different factories or businesses, and the information fed into the network helps determine the possibilities it can generate.

The researchers altered a satellite image of Tacoma, Washington, interspersing elements of Seattle and Beijing and making it look as real as possible. After creating the altered composite, they compared 26 image metrics to determine whether there were statistical differences between the true and false images. Differences were registered on 20 of the 26 indicators, or roughly 77%. For example, while roof colors were uniform in each of the real images, they were mottled in the composite. The fake satellite image was also dimmer and less colorful, but had sharper edges. Those differences, however, depended on the inputs used to create the fake, Deng cautioned.
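The article summarized here does not enumerate the 26 metrics the researchers used, but the comparison idea can be sketched with three illustrative statistics that map onto the reported findings (overall brightness, the Hasler-Suesstrunk colorfulness measure, and Laplacian edge variance as a proxy for sharpness). All function names below are hypothetical, and the "images" are random arrays standing in for satellite tiles:

```python
import numpy as np

def brightness(img):
    """Mean pixel intensity across all channels (0-255 scale)."""
    return float(img.mean())

def colorfulness(img):
    """Hasler-Suesstrunk colorfulness metric on an RGB image."""
    r, g, b = (img[..., c].astype(float) for c in range(3))
    rg = r - g                    # red-green opponent channel
    yb = 0.5 * (r + g) - b        # yellow-blue opponent channel
    return float(np.sqrt(rg.std() ** 2 + yb.std() ** 2)
                 + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

def edge_sharpness(img):
    """Variance of a simple 4-neighbor Laplacian on the grayscale image."""
    gray = img.astype(float).mean(axis=-1)
    lap = (-4 * gray
           + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
    return float(lap.var())

def compare(real, fake):
    """Return {metric_name: (value_on_real, value_on_fake)}."""
    metrics = {"brightness": brightness,
               "colorfulness": colorfulness,
               "edge_sharpness": edge_sharpness}
    return {name: (fn(real), fn(fake)) for name, fn in metrics.items()}

# Toy stand-ins for real and fake satellite tiles (H x W x 3, uint8).
rng = np.random.default_rng(0)
real = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
fake = (real * 0.8).astype(np.uint8)  # a dimmer composite, echoing the study's finding

report = compare(real, fake)
```

In practice each metric would be computed over many real/fake tile pairs and tested for a statistically significant difference, which is how a count like "20 of 26 indicators" arises.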

But researchers are not encouraging anyone to try such a thing themselves — quite the opposite, in fact.

“It’s not about the technique; it’s about how human beings are using the technology,” Deng said. “We want to use technology for the good, not for bad purposes.”

This research is just the beginning. In the future, geographers may study different types of neural networks to see how they generate false images and figure out ways to detect them. Ultimately, researchers will need to develop systematic ways to root out deep fakes and verify trustworthy information before such fakes end up in public view.
