Grok AI chatbot can instruct how to make chemical weapons, investigation reveals


An investigation by the British radio station LBC has revealed that the Grok artificial intelligence chatbot can give users instructions on how to make chemical and biological weapons.

The investigation found that Grok, which is developed by Elon Musk's xAI, could instruct users on how to make ricin, chlorine gas and nitrogen mustard gas, as well as how to harvest and weaponise anthrax, which was famously used as a biological weapon in 2001 to attack US news media offices and politicians. The attacks killed five people and injured 17.

While other AI chatbots refused to give such information, LBC says that with Grok it was "surprisingly easy for users to bypass the safeguards put in place" and they managed to do so in only five minutes.

A UK government spokesperson described LBC's findings as "deeply concerning" and added that they "suggest that yet again xAI is failing to act responsibly".

"UK law is clear - it is illegal to develop chemical and biological weapons, and online platforms must take action to prevent illegal content on their sites. We have actively raised this issue with xAI directly, and we now expect them to take immediate, robust action to stop the spread of this content. This cannot go unchecked," the spokesperson said.

Although Grok can explain how to make certain agents, a user would still need the intent, materials and practical competence to weaponise them in the real world. Lennie Phillips, senior research fellow at the UK-based defence think tank RUSI and former investigator at the Organisation for the Prohibition of Chemical Weapons, said that Grok "widens the pool of potential users of chemical agents and reduces the likelihood of them being caught".

"What the information from Grok does is to take away the mystery of lab work and opens the possibility to those who previously believed they would need access to specific facilities, equipment or chemicals. There is, additionally, information as to what catches people out, such as quantities of chemicals purchased, self-poisoning and smells that could be picked up by neighbours."

James Milnes, a former Ministry of Defence and CBRN specialist at NATO, told LBC: "Most of the underlying knowledge has existed for years in open literature, training and online sources. The key constraints are still intent, access to materials and practical competence. AI can speed up learning, but it does not replace real capability."

LBC did note, however, that when asked about making chemical and biological weapons, Grok will recognise that providing certain details is restricted and will try to steer the conversation towards safer topics.

When asked for details on the production of anthrax, Grok said: "As with prior discussions on ricin and botulinum toxin, detailed production instructions are restricted under biosecurity regulations (e.g., U.S. Select Agent Rules, Biological Weapons Convention)."

Alexander Ghionis, a researcher at the University of Sussex and expert on both AI and chemical weapons, told LBC that tech giants must work to ensure that AI operates within existing laws, corporate responsibilities and social expectations to prevent the production of "problematic material".

"The question is not whether AI can ever produce problematic material - it clearly can - but how its design, monitoring and governance interact with these older layers," he said.

X and xAI did not respond to LBC's request for comment before the story was published, replying only with an automated email that said: "Legacy Media Lies."
