What if you could detect deepfakes just by looking into someone's eyes?
Research presented at the Royal Astronomical Society's National Astronomy Meeting in Hull suggests AI-generated fakes can be identified by examining eye reflections, similar to how astronomers analyze galaxy images.
University of Hull MSc student Adejumoke Owolabi's study focuses on reflections in a person's eyes.
If the reflections in the two eyes are consistent, the image likely depicts a real person. If they aren't, it's probably a deepfake.
"The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person," said Kevin Pimbblet, professor of astrophysics and director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull.
Researchers analyzed light reflections on the eyeballs in both real and AI-generated images. They used astronomy methods to quantify these reflections and checked for consistency between the left and right eyes.
Fake images often show inconsistencies in reflections between the eyes, whereas real images generally display identical reflections in both eyes.
"To measure the shapes of galaxies, we analyse whether they're centrally compact, whether they're symmetric, and how smooth they are. We analyse the light distribution," said Professor Pimbblet.
"We detect the reflections in an automated way and run their morphological features through the CAS [concentration, asymmetry, smoothness] and Gini indices to compare similarity between left and right eyeballs.
"The findings show that deepfakes have some differences between the pair."
The Gini coefficient measures how light in a galaxy image is distributed among its pixels. This is done by ordering the pixels by flux and comparing the cumulative distribution to one in which the flux is spread evenly.
A Gini value of 0 indicates light is evenly distributed across all pixels, while a value of 1 shows all light concentrated in a single pixel.
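The Gini calculation described above can be sketched in a few lines. This is a minimal illustration using the form common in galaxy-morphology work, not the researchers' actual code; the function name `gini` is ours.

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of an array of pixel fluxes.

    Returns 0 when flux is spread evenly across all pixels and
    1 when all flux sits in a single pixel, matching the
    interpretation used for galaxy images.
    """
    x = np.sort(np.abs(np.ravel(pixels)))  # order pixels by flux
    n = x.size
    i = np.arange(1, n + 1)
    # Weighted sum of the ordered fluxes, normalised by the mean flux
    return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))

# Evenly lit patch -> 0; all light in one pixel -> 1
print(gini(np.ones(100)))   # prints 0.0
hot = np.zeros(100)
hot[0] = 5.0
print(gini(hot))            # prints 1.0
```

In the deepfake test, one would compute this index for a crop of each eyeball and compare the left and right values; a large mismatch flags a likely fake.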
The team also tested CAS parameters, a tool originally developed to measure galaxy light distribution, but found it wasn't effective in predicting fake eyes.
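For a sense of what a CAS parameter looks like, here is a sketch of the asymmetry term: rotate the image 180 degrees and measure how much it differs from itself. This is a simplified illustration (the background-correction term in the full CAS definition is omitted), not the study's implementation.

```python
import numpy as np

def asymmetry(img):
    """CAS-style asymmetry of a 2D image.

    0 means the image is identical to its 180-degree rotation
    (perfectly symmetric); larger values mean more asymmetry.
    """
    img = np.asarray(img, dtype=float)
    rotated = np.rot90(img, 2)  # 180-degree rotation
    return np.abs(img - rotated).sum() / np.abs(img).sum()

print(asymmetry(np.ones((4, 4))))  # symmetric patch, prints 0.0
```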
"It's important to note that this is not a silver bullet for detecting fake images," Professor Pimbblet added.
"There are false positives and false negatives; it's not going to get everything. But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes."