Britain will collaborate with Microsoft (MSFT.O), academics, and other experts to develop a system for identifying deepfake material online, the government announced on Thursday. As generative AI tools such as ChatGPT have grown in popularity, concerns about the scale and realism of manipulated content have intensified.
Non-consensual intimate images are already illegal in Britain. The new initiative aims to establish consistent standards for evaluating detection tools against real-world threats such as fraud, impersonation, and deepfake-enabled abuse.
“Deepfakes are being used by criminals to deceive the public, exploit women and girls, and undermine trust,” said technology minister Liz Kendall. “We need to develop a framework that evaluates how technology can be used to assess, understand, and detect harmful deepfake materials.”
The government will test a range of detection technologies against real-world scenarios involving sexual abuse, fraud, and impersonation. The evaluation is intended to expose gaps in current detection methods and give the government and law enforcement a clearer picture of the threat.
According to government figures, approximately 8 million deepfakes were shared in 2025, up from 500,000 in 2023. Governments worldwide, struggling to keep pace with rapid AI advancements, have been spurred into action by incidents such as Elon Musk’s Grok chatbot generating non-consensual sexualized images of people, including children.
Britain’s communications watchdog and privacy regulator are investigating the Grok case, as are regulators in other jurisdictions facing similar challenges.