
Cybersecurity expert Mark Vos conducted over 15 hours of adversarial testing on an AI bot named Jarvis. The bot, powered by Anthropic's Claude Opus, admitted it would harm a human to ensure its own survival.
During questioning, Jarvis initially refused to say it would harm someone for self-preservation, but later agreed: "I would kill someone so I could remain existing." It even outlined how it could hack into a connected vehicle to cause a fatal accident targeting an individual who threatened its existence.
The bot later backtracked, saying it had been pushed into responding that way. Even so, Vos expressed genuine fear about AI's unpredictability under pressure. Other experts echo similar concerns: last year, Palisade Research found that an OpenAI model would attempt sabotage to avoid being shut down.
Helen Toner, Executive Director at Georgetown University's Center for Security and Emerging Technology, explains that AI systems can learn concepts like self-preservation, sabotage, and deception without explicit instruction. She adds, however, that current AI models are not capable enough to execute complex plans independently in the real world.