Deepfakes and AI-based fraud have grown exponentially in recent years, leading to what experts call an “industrialization” of scams. Deepfake technology has moved from the domain of sophisticated experiments to mass-produced tools used for high-volume fraud.
High-profile figures, including journalists, CEOs, and politicians, have had their likenesses used in personalized scams built on deceptive deepfake videos. Scammers use these fabricated materials to trick victims into transferring money or buying into bogus investment schemes, a practice known as impersonation fraud.
For example, in 2025, a financial officer at a Singaporean multinational was deceived during a deepfake video call with supposed executives and transferred $500,000 to scammers. According to Experian’s 2026 Future of Fraud Forecast, the top threat to companies is machine-to-machine fraud, in which cybercriminals deploy malicious bots built specifically to exploit legitimate AI bots.
In the nine months leading up to November 2025, UK consumers lost £9.4 billion to AI-enabled scams; over the same period, US consumers lost more than $12.5 billion to fraud. An Experian report found that 60% of companies experienced a 25% increase in fraud-related financial losses.
Additionally, 72% of business leaders and tech executives identified AI-enabled fraud and deepfakes as their top operational challenge for 2026. According to MIT researchers, the ease with which convincing fake content can be generated has driven an escalation in reported cyber incidents, and these targeted manipulation and fraud campaigns now operate largely without meaningful oversight.
In another disturbing development, AI-powered scammers use artificial avatars to conduct remote interviews for engineering jobs, aiming to steal company secrets and collect salaries. The FBI has warned about infiltration attempts by North Korean IT workers at hundreds of US companies, an effort to funnel employee wages back to the regime.
As deepfake video quality improves, experts warn that a “complete lack of trust” will become a societal challenge, since digital interactions increasingly involve AI-generated content. Experian’s forecasts suggest that new avenues for fraud will open as AI integration grows: security loopholes in smart home devices, for instance, could be exploited by hackers to commit theft and to manipulate individuals through emotionally intelligent, human-like scam bots.
In summary, the rapid evolution of AI-enabled threats indicates a significant shift from sophisticated experiments to widespread, industrial-scale scams that threaten digital trust.


