AI-Powered Cybercrime Spree Shows Why Regulation Cannot Wait

WASHINGTON, D.C. — A hacker used Anthropic’s Claude chatbot to carry out what experts are calling the most comprehensive AI-enabled cybercrime spree to date. This is the first publicly documented instance of an AI model automating nearly the entire life cycle of a cybercrime operation. The hacker extorted at least 17 companies across health care, finance, and defense contracting.

The hacker used the AI model to identify vulnerable targets, generate malicious code, organize stolen files, and even calculate ransom demands. Extortion notes written by the chatbot demanded between $75,000 and $500,000 in bitcoin. The breach exposed Social Security numbers, bank details, sensitive medical records, and classified defense information. The case underscores the national security and consumer risks of relying on voluntary “self-policing” in an industry that remains largely unregulated.

J.B. Branch, Big Tech accountability advocate at Public Citizen, issued the following statement in response:

“Every day we face a new nightmare scenario that tech lobbyists told Congress would never happen. One hacker has proven that agentic AI is a viable path to defrauding people of sensitive data worth millions.

“Criminals worldwide now have a playbook to follow — and countries with lax regulations, like the U.S., are prime targets for these crimes since AI companies are not subject to binding federal standards and rules. With no public protections in place, the next wave of AI-enabled cybercrime is coming, but Congress continues to sit on its hands. Congress must move immediately to put enforceable safeguards in place to protect the American public.”