
Deepfakes 2.0: The Rising Enterprise Trust Deficit


Diwakar Dayal is the Managing Director and AVP for India and SAARC at SentinelOne. He has over 27 years of experience in IT security, with leadership positions at multinational corporations like Cisco, Juniper, Tenable, NTT, SentinelOne, and Safescrypt (formerly Verisign). Throughout his career, he has successfully established and expanded security businesses from their inception to achieving multimillion-dollar successes.
India's digital economy is on an upward trajectory, with projections indicating it will account for more than 20 percent of the nation's GDP by 2026. This growth, driven by the widespread adoption of artificial intelligence (AI) and digital infrastructure, presents both immense opportunities and serious challenges. To address these challenges, India's cloud security sector is expected to expand at an annual rate of 29.88 percent from 2025 to 2029, achieving a market size of $125.75 million by 2029.
One such challenge that's rapidly emerging as a serious threat to India's digital ecosystem is AI-generated deepfakes. Over 75 percent of Indians online have encountered deepfake content in the past year, exposing the population to risks like misinformation, impersonation, reputational damage, and data privacy violations. These manipulated media pose dangers across social and enterprise landscapes, undermining trust and brand credibility.
Deepfakes weaponize the same AI advances that power digital transformation, drastically reducing the time and skill needed to fabricate convincing audio-visual ‘evidence.’ Today’s easy-to-use, open-source tools require only seconds of sample audio or video to mimic CEOs, board members, or other key decision-makers convincingly — generating synthetic content that traditional security tools simply cannot detect.
One such case took place recently in Karnataka. A fraudulent app named 'Trump Hotel Rental' (featuring AI-generated content of Donald Trump) defrauded over 200 investors of nearly $232,000. The scammers convinced investors to deposit money by promising very high returns, sometimes more than 100 percent profit in a short period.
How Deepfakes Erode Digital Trust
Deepfakes undermine digital trust by turning otherwise trustworthy business media, such as video and audio, into vectors of deception. This erosion of trust occurs through identity-targeted attacks, in which deepfakes are used to circumvent KYC (Know Your Customer) checks, biometric systems, and facial recognition technologies via AI-based impersonations. Employees and customers can no longer confirm authenticity merely by 'seeing' or 'hearing.'
This poses a critical threat to corporate integrity, from impersonating executives on video calls to authorizing fraudulent wire transfers or manipulating investor sentiment. These hyper-realistic forgeries challenge a company's ability to trust what it sees and hears, turning real-time communications, surveillance footage, and even video conferences into potential attack vectors.
Unfortunately, many enterprises suffer from low awareness and inadequate training, leaving cybersecurity teams ill-equipped to detect and respond to video or audio-based social engineering threats in a timely manner. Security responses are also inconsistent, hindered by unclear legal frameworks around digital evidence and data protection. However, the upcoming Digital India Act is expected to address synthetic media governance.
How Cybersecurity Works in an Era of AI-driven Deepfakes
Without AI-powered defenses, enterprises are just one realistic deepfake away from reputational, financial, and operational disaster. That's why it's critical to defend digital environments against generative attacks like deepfakes.
Unlike rule-based systems, modern AI platforms detect, correlate, and respond to threats in milliseconds by analyzing behavior and context. A powerful and robust AI-driven, autonomous security solution must be able to detect and block fraudulent content by:
• Detecting lip-sync errors, timing mismatches, or audio inconsistencies invisible to the human eye or conventional tools.
• Matching voices or images with verified corporate records to flag impersonations.
• Using machine learning to identify suspicious behavioral patterns.
• Neutralizing the threat before it reaches employees or customers, eliminating reliance on human judgment in time-sensitive moments.
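The second capability above, matching voices or images against verified corporate records, can be illustrated with a minimal sketch. The embedding vectors and speaker IDs below are illustrative stand-ins; a production system would derive embeddings from a trained speaker-verification or face-recognition model rather than hand-coded numbers.

```python
# Hypothetical sketch: flag a suspect voice sample by comparing its
# embedding against verified corporate voiceprints using cosine similarity.
# All vectors here are toy data, not outputs of a real model.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_impersonation(sample_embedding, verified_embeddings, threshold=0.85):
    """Return (best_match_id, best_score, is_suspect).

    is_suspect is True when the sample's best similarity against every
    verified record falls below the threshold, i.e. the speaker cannot
    be matched to any known executive voiceprint.
    """
    best_id, best_score = None, -1.0
    for speaker_id, ref in verified_embeddings.items():
        score = cosine_similarity(sample_embedding, ref)
        if score > best_score:
            best_id, best_score = speaker_id, score
    return best_id, best_score, best_score < threshold

# Toy records: verified voiceprints for two executives (illustrative only).
verified = {"ceo": [0.9, 0.1, 0.4], "cfo": [0.1, 0.8, 0.5]}
genuine = [0.88, 0.12, 0.41]   # nearly parallel to the CEO record
spoofed = [0.5, 0.5, -0.7]     # matches no verified voiceprint well

_, score_ok, suspect_ok = flag_impersonation(genuine, verified)
_, score_bad, suspect_bad = flag_impersonation(spoofed, verified)
print(suspect_ok, suspect_bad)
```

Real deployments add liveness checks and per-speaker thresholds, but the core idea is the same: a numeric similarity score against verified records replaces a human judgment of "does this sound like the CEO?"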
A security system with these features built in can make deepfakes "just another threat," detectable and mitigated like any malware or phishing attack, rather than an existential challenge.
The Need to Champion Ethical AI Governance
As India prepares for the implementation of the Digital India Act, enterprises must prepare their operations for new security regulations such as watermarking requirements, rapid-takedown service level agreements (SLAs), and liability clauses for the circulation of manipulated content. To stay ahead of these challenges, regular deepfake awareness simulations and response drills should become as routine as anti-phishing campaigns.
Incident response (IR) playbooks must be revised to address threats related to synthetic audio and video, particularly when it comes to fraud, media distortion, and harm to reputation. Sharing indicators of compromise (IOCs) and threat intelligence among various sectors will be vital for enhancing national-level resilience. Organizations must also actively institute deepfake response protocols as a part of their business continuity and brand safeguarding plans.
At the same time, enterprises must actively advocate for ethical AI governance by endorsing watermarking norms, utilizing liveness detection methods, and establishing company-wide media authentication protocols to safeguard against misinformation, fraud, and identity theft. Only then can enterprises be confident that the top executives they see or hear are really who they purport to be, and thereby protect their business and reputation.