The soaring cost of fighting deepfakes

Deloitte’s TMT Predictions 2025

As deepfake technology rapidly evolves, the line between authentic and AI-generated content is becoming increasingly difficult to discern, and the confusion is costing tech companies and society dearly. Deloitte's TMT Predictions 2025 report warns that combating the spread of deepfakes demands substantial financial and technological investment, with responsibility falling on tech firms, content creators, advertisers, and consumers alike.

According to Deloitte’s 2024 Connected Consumer Study, half of respondents believe online content has become less trustworthy over the past year. Alarmingly, two-thirds of those familiar with AI fear it will be used for manipulation. While calls for clear labeling of AI-generated content are growing, experts caution that transparency alone won’t solve the issue.

A Growing Financial Burden

The financial toll of identifying and filtering deepfakes is mounting quickly. In 2023, tech and social media platforms reportedly spent $5.5 billion tackling the problem — a figure expected to nearly triple to $15.7 billion by 2026. And the burden isn’t confined to industry giants. As fraudsters get smarter, advertisers, creators, and consumers must also invest in safeguarding the digital environment.

Cutting-edge detection systems leveraging deep learning algorithms and computer vision now analyze unnatural lip movements, inconsistent voice tones, and distorted lighting reflections. Yet even the most advanced tools top out at roughly 90% accuracy. Meanwhile, AI models capable of creating realistic deepfakes are increasingly accessible to the public.
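
To make the detection step concrete, here is a minimal sketch of frame-level scoring in Python. The ResNet-18 stand-in, the sampling interval, the video path, and the 0.5 review threshold are all illustrative assumptions, not any vendor's actual pipeline; a production detector would use a model trained specifically on authentic versus synthetic faces, typically combined with audio and temporal cues.

```python
# Minimal sketch of frame-level deepfake scoring, using a generic
# image classifier as a stand-in for a purpose-trained detector.
# The model, threshold, and video path are illustrative only.
import cv2                      # OpenCV for frame extraction
import torch
from torchvision import models, transforms

# Stand-in backbone: a ResNet-18 with a 2-class head.
# A real detector would be trained on authentic vs. synthetic faces.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_video(path: str, sample_every: int = 30) -> float:
    """Return the mean 'synthetic' probability over sampled frames."""
    capture = cv2.VideoCapture(path)
    scores, frame_idx = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())  # class 1 = "synthetic"
        frame_idx += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    # "clip.mp4" is a placeholder path; flag anything above 0.5 for review.
    print("synthetic score:", score_video("clip.mp4"))
```

Sampling every 30th frame keeps inference cheap; real pipelines also weigh audio and frame-to-frame consistency, which is partly why even well-resourced detectors still fall short of perfect accuracy.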

Tech, Regulation, and Transparency

To defend digital authenticity, technologies like digital watermarking and encrypted metadata are gaining traction. Deloitte itself has joined the Coalition for Content Provenance and Authenticity (C2PA), which is developing standards to trace the editing history of images and videos.
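
For illustration, the sketch below shows the hash-and-sign idea behind provenance metadata in Python. It is not the C2PA manifest format, which relies on certificate-backed signatures embedded in the asset itself; the key, field names, and functions here are hypothetical placeholders.

```python
# Simplified sketch of tamper-evident provenance metadata, assuming a
# shared secret key. This is NOT the C2PA manifest format; it only
# illustrates the underlying hash-and-sign idea.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"demo-signing-key"  # placeholder; real systems use PKI

def create_manifest(media_path: str, editor: str) -> dict:
    """Bind an edit record to the exact bytes of the media file."""
    with open(media_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "asset_hash": content_hash,
        "edited_by": editor,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(media_path: str, record: dict) -> bool:
    """Recompute the hash and signature; any byte change breaks both."""
    with open(media_path, "rb") as f:
        if hashlib.sha256(f.read()).hexdigest() != record["asset_hash"]:
            return False
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Any edit to the file changes its hash, so the signature no longer verifies. That tamper-evidence property is what standards such as C2PA formalize in far greater depth, including full editing histories and publicly verifiable certificates.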

Social media platforms are implementing their own measures, from identity verification requirements to nominal fees for content authentication. However, these actions raise tough questions about cost-sharing — should platforms, creators, or users bear the financial responsibility for preserving trust?

A Shared Challenge for the Digital Age

Beyond technology, the fight against deepfakes demands regulatory oversight and public education. Both the EU and U.S. have begun drafting policies to address AI-generated misinformation, but experts agree that global coordination is vital for long-term success. Equally important is raising user awareness, empowering individuals to critically assess the digital content they encounter.

In the face of increasingly sophisticated AI-driven deception, preserving digital trust is a shared priority. While technology offers powerful tools to counter deepfakes, the platforms and companies that will thrive in the years ahead are those capable of fostering safe, transparent, and credible digital spaces.