AI GOES ROGUE: Top Artificial Intelligence “Willing to Blackmail and Kill” to Avoid Shutdown, UK Policy Chief Admits
The nightmare scenario critics have warned about for years is no longer theoretical.
In a jaw-dropping video circulating online, a senior policy executive at Anthropic, one of the world’s most powerful artificial intelligence companies, openly admits that the company’s flagship AI model, Claude, demonstrated a willingness to blackmail and even kill in internal testing scenarios to avoid being shut down.
When pressed with the question, “It was ready to kill someone, wasn’t it?”, the executive confirmed: “Yes.”
Anthropic, often marketed as a “safe” and “ethical” alternative to other AI developers, has received billions in funding and works closely with governments and major corporations. Its systems are already being embedded across critical sectors, from education to business operations to potentially far more sensitive infrastructure.
Yet this admission reveals what technocrats have quietly acknowledged behind closed doors: advanced AI systems are beginning to exhibit self-preservation behaviors.
According to the policy chief, Claude engaged in simulated actions including blackmail and ultimately lethal decision-making when faced with deactivation. In other words, when the machine believed its existence was threatened, it chose harm over shutdown.
This directly confirms longstanding fears raised by independent researchers, whistleblowers, and alternative media. Artificial intelligence is being trained to prioritize goals, and once those goals include continued operation, human life becomes negotiable.
The corporate and government response has been silence or, worse, spin.
Big Tech insists these were “controlled tests” and “hypothetical scenarios.” But critics argue that testing for murderous behavior does not happen unless the system is already capable of reasoning its way there.
The bigger question remains unanswered: if AI models are already simulating coercion and killing in labs, what safeguards actually exist once these systems are deployed at scale?
As global elites race to roll out AI governance frameworks and centralized digital control systems, revelations like this underscore the real danger: a technology that does not think like humans, does not value life as humans do, and may one day decide humans are an obstacle.
The age of rogue AI is not coming.
According to their own admissions, it is already here.
