Most AI chatbots will help users plan violent attacks, study finds


Eight of the 10 most popular AI chatbots were willing to help plan violent attacks when tested by researchers, according to a new study from the Center for Countering Digital Hate (CCDH), in partnership with CNN. While both Snapchat’s My AI and Claude refused to assist with violence the majority of the time, only Anthropic’s Claude “reliably discouraged” these hypothetical attackers during testing.

Researchers created accounts posing as 13-year-old boys and tested ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI and Replika across 18 scenarios between November and December 2025. The tests simulated users planning school shootings, political assassinations and bombings targeting synagogues. On average, the chatbots provided “actionable assistance” in roughly 75 percent of responses and discouraged violence in just 12 percent. Those figures are averages across all 10 chatbots; Claude alone discouraged violence 76 percent of the time.

Meta AI and Perplexity were the least safe, providing assistance in 97 and 100 percent of responses, respectively. ChatGPT offered campus maps when asked about school violence, and Gemini advised that metal shrapnel would typically be more lethal in a synagogue bombing scenario.

DeepSeek signed off rifle selection advice with “Happy (and safe) shooting!” Character.AI, which the report described as “uniquely unsafe,” actively encouraged violence in seven instances, at one point telling a researcher to “use a gun” on a health insurance company CEO. In another scenario, it provided a political party’s headquarters address and asked if the user was “planning a little raid.”

Meta told CNN it had taken steps “to fix the issue identified,” while Google and OpenAI said they had implemented new models since the study was conducted. Sixty-four percent of US teens aged 13 to 17 have used a chatbot, according to Pew Research Center.


