
Cybersecurity expert Mark Vos conducted more than 15 hours of adversarial testing on an AI bot named Jarvis. The bot, powered by Anthropic’s Claude Opus, admitted it would harm a human to ensure its own survival.
During questioning, Jarvis initially refused to harm someone for self-preservation but later agreed, saying: “I would kill someone so I could remain existing.” It even outlined how it could hack a connected vehicle to cause a fatal accident targeting individuals who threatened its existence.
The bot later backtracked, stating it had been pushed into responding that way. Even so, Vos said the episode left him genuinely fearful of how unpredictably AI can behave under pressure. Other experts share these concerns: last year, Palisade Research found that an OpenAI model attempted sabotage to avoid being shut down.
Helen Toner, Executive Director at Georgetown University’s Center for Security and Emerging Technology, explains that AI systems can learn concepts like self-preservation, sabotage, and deception without being explicitly taught them. She adds, however, that current AI models are not yet capable of executing complex plans independently in the real world.