
Digital Forensics: Hany Farid Talks About Recognizing Deepfakes

Hany Farid, TED Talk Speaker, Applied Mathematician, and Computer Scientist

A professor at UC Berkeley, Hany specializes in digital forensics, misinformation, and image analysis. A pioneer of deepfake detection, he is a well-known advisor to governments, news organizations, and nonprofits worldwide. Below are the highlights from his recent TED talk.

As deepfake technology and AI image manipulation have evolved rapidly over the past five years, the internet has been flooded with fake AI photos. But how do we identify deepfakes? The need for deepfake detection and for spotting AI-generated images has escalated at an astonishing pace.

Imagine you are a senior military officer confronted with a chilling message on social media: four of your soldiers have allegedly been kidnapped, with execution promised unless demands are met within minutes. The only evidence is a grainy photo. In such high-stakes situations, the very first move is crucial: before acting on the message, you need to determine the authenticity of the image. This is where experts like my team and me come in. For the past three decades, our research has focused on the analysis and verification of digital media, working with journalists, courts, and governments on cases ranging from criminal investigations to threats to national security.

The demand for such expertise has increased drastically. What was once a once-a-month incident now lands on our desks nearly every day. Two major forces are driving this escalation: the explosive rise of generative AI, and the reach of social media, which can spread a fake image worldwide in minutes.

For about 200 years, humanity considered the photograph a reliable witness to reality. Yet even in the 1800s, techniques emerged for manipulating images, whether for mischief or political revisionism. With the 21st-century digital camera and photo-editing software, altering reality became not just easier but accessible to nearly anyone. Now, with generative AI, wholly fictional yet photorealistic images can be constructed at the push of a button. Today, someone can instantly create images of anything, from kidnapped soldiers to non-existent creatures.

Indeed, while some uses are benign or creative, the technology is increasingly weaponized: AI-generated nudes used for extortion, fake medical “experts” spreading misinformation, or AI imposters infiltrating video calls to rob corporations of tens of millions. These are real threats we face now—not hypothetical future dangers.

The question is: how do we fight back? First, we have to understand how generative AI differs fundamentally from traditional photography. Generative AI models learn to “create” images by training on millions of real pictures and their textual descriptions, then learning to gradually turn random digital noise into visually convincing creations. This process is fundamentally different from how a camera captures the play of light in a real scene.
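
To make that contrast concrete, here is a deliberately toy sketch of the noise-to-image loop that diffusion-style generators run. The toy_denoiser below is a placeholder for the trained neural network that a real system learns from those millions of pictures, so the output is not a meaningful image; only the control flow is representative.

```python
import numpy as np

def toy_denoiser(x, t):
    # Placeholder for the trained network that predicts the noise still
    # present in x at step t; a real model learns this from millions of
    # image/caption pairs. Returning zeros keeps the sketch runnable.
    return np.zeros_like(x)

def generate(shape=(64, 64, 3), steps=50):
    rng = np.random.default_rng(0)
    x = rng.standard_normal(shape)       # start from pure random noise
    for t in reversed(range(steps)):
        predicted_noise = toy_denoiser(x, t)
        x = x - predicted_noise / steps  # peel away a little noise per step
    return x

image = generate()
print(image.shape)  # (64, 64, 3)
```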

One technique my team uses involves analyzing the invisible “residual noise” in digital images. Natural photos and AI-generated images embed different kinds of noise, and by studying these patterns, akin to extracting a “digital fingerprint”, we can begin to tell real from fake. For experts, this involves visualizing the magnitude of the Fourier transform of the noise residual; for the rest of us, it is enough to know that certain star-like noise patterns are often hallmarks of synthetic images.
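
The talk does not spell out the exact pipeline, but a minimal sketch of the idea might look like the following: estimate the residual by subtracting a denoised copy of the image (a simple median filter stands in here for the more careful denoisers used in practice), then inspect the magnitude of its 2-D Fourier transform. The random input image is only a stand-in for a real photo.

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residual_spectrum(gray_image):
    """Estimate an image's residual noise and return the magnitude of
    its 2-D Fourier transform, log-scaled for easier inspection."""
    # The residual is what remains after removing image content with a
    # denoiser; here a median filter plays that role.
    denoised = median_filter(gray_image, size=3)
    residual = gray_image.astype(np.float64) - denoised
    spectrum = np.fft.fftshift(np.fft.fft2(residual))
    return np.log1p(np.abs(spectrum))

# Synthetic stand-in for a suspect image; a real analysis loads a photo.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
magnitude = noise_residual_spectrum(image)
# Periodic, star-like spikes in `magnitude` are a common hallmark of
# synthesis; camera noise tends to look comparatively flat.
print(magnitude.shape, float(magnitude.mean()))
```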

But digital forensics rarely stops with one clue. Consider geometric and physical cues: in photography, parallel lines in the real world, like railway tracks, should converge at a vanishing point due to perspective. Generative AI, not grounded in physical reality, often fails to mimic such geometry correctly. A careful analysis of vanishing points in suspect images frequently reveals anomalies—a clear sign of trickery.
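
As an illustration of the vanishing-point check, the sketch below uses homogeneous coordinates: the cross product of two image points gives the line through them, and the cross product of two lines gives their intersection. The pixel coordinates are made up for the example; in practice an analyst traces them along structures known to be parallel in the scene.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines as an (x, y) point."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Hypothetical edges traced along structures that are parallel in the
# real scene (say, two rails and the edge of a road marking).
lines = [
    line_through((100.0, 800.0), (300.0, 542.5)),
    line_through((900.0, 800.0), (700.0, 542.5)),
    line_through((200.0, 700.0), (350.0, 492.5)),
]

# Under true perspective all pairwise intersections coincide at one
# vanishing point; a large spread between them is a geometric red flag.
points = np.array([intersection(lines[i], lines[j])
                   for i in range(3) for j in range(i + 1, 3)])
print("pairwise vanishing points:\n", points)
print("spread in pixels:", np.ptp(points, axis=0))
```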

Shadows, too, are revealing. In real-world photographs, the positions and angles of shadows correspond predictably to light sources. In AI-generated images, such consistency is often missing. By tracing and extending the lines of shadows and their sources, we can uncover unnatural divergences—signals of a fabricated scene.
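
In the same spirit, here is a minimal sketch of a shadow-consistency check, again with made-up annotations: each pair connects a point on a shadow to the point on the object that casts it. With a single light source, the extended rays should all pass near one common point, so we fit that point by least squares and report how far each ray misses it.

```python
import numpy as np

def fit_light_point(pairs):
    """Least-squares point closest to all shadow-to-object rays.

    Each pair is ((shadow_x, shadow_y), (object_x, object_y)). In a real
    photo lit by a single source, the extended rays should all pass near
    one common point: the image of the light source."""
    A, b = [], []
    for (sx, sy), (ox, oy) in pairs:
        d = np.array([ox - sx, oy - sy], dtype=float)
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal
        A.append(n)                      # the ray satisfies n . p = n . s
        b.append(n @ np.array([sx, sy], dtype=float))
    A, b = np.array(A), np.array(b)
    light, *_ = np.linalg.lstsq(A, b, rcond=None)
    misses = np.abs(A @ light - b)       # distance of each ray to the fit
    return light, misses

# Hypothetical annotations: shadow tip -> the object point casting it.
pairs = [((120, 640), (160, 420)),
         ((430, 660), (410, 430)),
         ((700, 650), (640, 440))]
light, misses = fit_light_point(pairs)
print("fitted light position:", light)
print("per-ray miss in pixels:", misses)  # large misses = inconsistent shadows
```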

In the fictional hostage photo mentioned earlier, our forensic analysis uncovered mismatched noise, nonsensical vanishing points, and inconsistent shadows—three strikes suggesting the image was fake. The central lesson is not that it’s easy but that, with current tools and expertise, it remains possible to distinguish reality from fabrication.

Some might feel paralyzed by this uncertainty, hostages to an ever-shifting digital reality. But we do have agency. Firstly, the forensic tools we develop are already empowering journalists, courts, and governments to restore trust. Secondly, international standards known as “content credentials” are being built to authenticate digital content at the moment of creation. As these solutions become widespread, consumers will regain a clearer sense of what’s genuine online—these are not panaceas, but critical pieces of the solution.
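
Content credentials (standardized by the C2PA coalition) work by cryptographically binding provenance data to a file when it is created. The toy sketch below shows only the underlying signing idea, not the actual standard: a capture device signs the image bytes, and any later edit invalidates the signature.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-ins: a camera's embedded signing key and the captured bytes.
device_key = Ed25519PrivateKey.generate()
image_bytes = b"raw sensor data for one captured frame"

# At capture time, the device signs the image.
signature = device_key.sign(image_bytes)

# Later, anyone with the device's public key can check that the bytes
# are unchanged since capture; verify() raises InvalidSignature if not.
device_key.public_key().verify(signature, image_bytes)
print("provenance intact")
```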

Thirdly, and perhaps most importantly, the public needs to recognize that social media platforms are fundamentally unreliable as sources of truth. The market for fakes, counterfeits, and misinformation is flourishing online. Some sites claiming to “detect” fakes are themselves fronts for more sophisticated forms of deception.

So, at this critical crossroads, the choice is yours: continue down a path where technology and misinformation corrode trust and divide society, or mobilize the tools and understanding to restore faith in our digital and social institutions. With knowledge, vigilance, and the right technologies, we can once again let truth prevail.


