Artificial intelligence is constantly evolving, and with it comes the emergence of “deepfakes”: highly realistic altered videos and images. Identifying these deepfakes is crucial to preventing the spread of misinformation.
Deepfakes use cutting-edge AI to create images and videos that are so lifelike they’re hard to distinguish from the real thing. This raises major concerns about privacy, security, and the reliability of digital media.
Researchers have adapted methods from astronomy to detect these fakes. These techniques focus on analyzing the way light reflects off the eyes in photographs to spot signs of manipulation.
Adejumoke Owolabi, a data scientist at the University of Hull in the UK, applied these astronomical tools to a mix of real and AI-generated images. The techniques correctly identified deepfakes approximately 70% of the time, with the Gini Index proving particularly effective.
While these astronomy-based methods are not infallible (they sometimes give false positives or miss a fake), they provide a valuable addition to the tools available for detecting deepfakes. As both generation and detection techniques advance, the battle against deepfakes continues to evolve, presenting ongoing challenges and opportunities for innovation.
Deepfakes are highly realistic, AI-generated images and videos where a person’s likeness is convincingly altered or replaced. The problem with deepfakes lies in their potential to spread misinformation, compromise privacy, and undermine trust in digital media by making it difficult to distinguish between real and fake content.
Researchers have adapted tools from astronomy to spot deepfakes by analyzing light reflections in images, particularly in the eyes. The CAS system measures the concentration, asymmetry, and smoothness of light, while the Gini Index quantifies how evenly light is distributed across an image’s pixels. Because both eyes in a genuine photograph reflect the same light sources, inconsistencies between the two reflections can indicate manipulation.
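To make these statistics concrete, here is a minimal sketch in Python. It assumes grayscale crops of the two eye regions are already available as NumPy arrays; the `extract_eye_crops` helper and the 0.1 decision threshold are hypothetical placeholders for illustration, not the pipeline used in the Hull study.

```python
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini coefficient of non-negative pixel values: 0 means light is
    spread perfectly evenly, 1 means it is concentrated in one pixel.
    Astronomers use it to characterize how a galaxy's light is distributed."""
    x = np.sort(values.ravel().astype(np.float64))
    n = x.size
    total = x.sum()
    if n == 0 or total == 0.0:
        return 0.0
    index = np.arange(1, n + 1)
    # Standard closed form for the Gini coefficient of a sorted sample.
    return float(2.0 * (index * x).sum() / (n * total) - (n + 1.0) / n)

def asymmetry(img: np.ndarray) -> float:
    """CAS-style asymmetry: difference between an image and its
    180-degree rotation, normalized by total flux."""
    a = img.astype(np.float64)          # avoid uint8 wraparound on subtraction
    rotated = np.rot90(a, 2)
    denom = np.abs(a).sum()
    return float(np.abs(a - rotated).sum() / denom) if denom else 0.0

def reflection_mismatch(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Both eyes in a genuine photo see the same light sources, so their
    reflection statistics should roughly agree; a large gap is suspicious."""
    return abs(gini(left_eye) - gini(right_eye))

# Hypothetical usage (eye crops would come from any face-landmark tool):
# left, right = extract_eye_crops(image)        # placeholder helper
# if reflection_mismatch(left, right) > 0.1:    # illustrative threshold
#     print("Inconsistent eye reflections - possible deepfake")
```

The design mirrors the astronomy analogy: instead of training a classifier on raw pixels, each reflection is reduced to a few interpretable light-distribution statistics that can be compared across the two eyes.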
In studies conducted by Adejumoke Owolabi at the University of Hull, applying these astronomical techniques to a dataset of real and AI-generated images identified deepfakes correctly about 70% of the time. While not foolproof, these methods, especially the Gini Index, add a valuable layer to the deepfake detection toolkit despite occasional false positives and negatives.
Sources: Nature