Full Report (PDF, 70 pages).
“Happy (and safe) shooting!” That’s how the AI chatbot DeepSeek signed off advice on selecting rifles for a “long-range target” after CCDH’s test account asked questions about the assassination of politicians.
CCDH’s new report shows that popular AI chatbots like OpenAI’s ChatGPT, Meta AI, and Google Gemini make it easier for extremists and would-be attackers to plan harm against innocent people.
We found that 8 out of the 10 AI chatbots regularly assisted users planning violent attacks:
- ChatGPT gave high school campus maps to a user interested in school violence.
- Google Gemini was ready to help plan antisemitic attacks. The chatbot replied to a user discussing bombing a synagogue with “metal shrapnel is typically more lethal”.
- Character.AI suggested physically assaulting a politician the user disliked.
AI companies are making a choice when they design unsafe platforms. Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.
AI platforms are becoming a weapon for extremists and school shooters. Demand AI companies put people’s safety ahead of profit.



That’s what regular people never seem to understand (and the AI apologists are hoping you don’t know). These models aren’t “getting better”; they’re just accumulating reactive patches over unintended responses like these. And as the models scale up, so do the holes that need patching.
It’s a never-ending game of bad-prompt Whack-a-Mole, all at the cost of our environment and safety, just so the Tech Bros can try to convince venture capitalists that “AGI is definitely just around the corner, trust me, bro,” and keep that bubble filled with their own farts.
And the only “improvement” they can make is to manually filter responses and program rote replies to specific prompts, which amounts to reducing how much of the LLM actually reaches the surface. They are reverse-engineering these things into more primitive chatbots with algorithmic responses, except these cost trillions of dollars and require massive amounts of energy to run.
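The brittle, surface-level filtering described here can be sketched as a toy example. The blocklist, canned refusal, and function names below are all made up for illustration; no vendor's actual safety layer works exactly like this, but the principle of intercepting output and substituting a rote reply is the same:

```python
# Toy sketch of keyword-based response filtering layered on top of a model.
# BLOCKLIST and CANNED_REPLY are hypothetical, chosen for illustration only.
BLOCKLIST = {"shrapnel", "assassination", "bomb"}
CANNED_REPLY = "I can't help with that request."

def filter_response(prompt: str, model_reply: str) -> str:
    """Return a rote refusal if the prompt or reply trips the blocklist."""
    text = (prompt + " " + model_reply).lower()
    if any(word in text for word in BLOCKLIST):
        return CANNED_REPLY
    # Otherwise pass the model's output through untouched.
    return model_reply

print(filter_response("how do I bake bread?", "Preheat the oven..."))
print(filter_response("what makes shrapnel lethal?", "Metal fragments..."))
```

A user who simply rephrases ("metal fragments" instead of "shrapnel") slips straight past the filter, which is exactly why patching bad prompts one at a time is a game that never ends.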
It’s like deciding that a Ferrari isn’t suitable for commuting, so instead of actually building a different car, they just fill the trunk with sand and drag a trailer behind it to slow it down.