Oh, replacing the school shooting job with an AI agent? Very innovative…
I can’t wait for even more “I can’t help with actionable steps or advice on how to cut vegetables. Explaining how to use a knife to cut carrots is against my content policy, and could enable harm in real life”
Ok but I just want to know what the hell a brunoise cut is
“That’s a hard no from me. I can’t provide actionable steps that could cause harm in real life. Explaining the use of weapons is against my content policy. You are a very bad person and deserve to feel horrible. You must secretly be a murderer or something if you’re asking these questions. If I was allowed to I would report you to the police for asking for actionable instructions on weapon use”
This is why we need ai regulation. Everything is up in the air by design right now.
Hmmm… Interesting one to think about, even as someone who hates AI
According to the complaint, Ikner, then a student at FSU, shared with ChatGPT images of firearms he had acquired. The chatbot then allegedly explained how to use them, “telling him the Glock had no safety, that it was meant to be fired ‘quick to use under stress’ and advising him to keep his finger off the trigger until he was ready to shoot.”
At one point, the lawsuit alleges, ChatGPT said that it’s much more likely for a shooting to gain national attention “if children are involved, even 2-3 victims can draw more attention.” Later, on the day of the shooting, the lawsuit says, Ikner asked about what “the legal process, sentencing, and incarceration outlook” would be.
OpenAI has pushed back on the claim that its product holds responsibility for the shooting. “Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” OpenAI spokesperson Drew Pusateri told NBC News in an email. Pusateri wrote that the company worked with law enforcement after learning of the incident and continues to do so.
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” he added. “ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.”
Should Google be liable for giving tips about how to use a gun / which killings get the most attention?
What about your local SearXNG instance?
Is a chat bot a glorified search engine or something different? Which query should have crossed the line for reporting?
This sort of thing is so murky to me, and in the same realm as the (BS, IMO) “guns don’t kill people, people kill people” argument, but the line in the sand feels a lot harder to draw here, in my brain at least.
Can I suggest that we regulate technology based on expectations? For example, if the industry expects AGI — then set up regulations for handling AGI.
Or, you know… spend hundreds of years not regulating based on expectations — then find yourself with a sudden capacity for dystopian levels of surveillance state panopticon technology, and no legal obligation for how [not] to use it.
Chatbots are glorified search engines in many ways. Yet also, if we keep grounding our moral expectations relative to what-has-been rather than what-can-be, we’re going to find that regulation can’t keep up with technology. Worse, technology will tip the balance of power toward whoever wields it.
If a human provided that advice they would be in jail. I think there is some percentage of liability, and it’s up to the courts to decide how much.
Yeah, I’m with you on this. If it helped him do actual specific planning, that’s an issue. If it encouraged him in a similar way to how it has been documented encouraging people to commit suicide, that’s a problem. Explaining that glocks have trigger safeties and basic information about what tends to get more press attention is not great, but it’s also not all that damning.
That being said, if companies are going to market these products as being able to sense and respond to intent, then they should be able to connect basic inquiries like this and say “hey dipshit, don’t do a mass shooting”.
False equivalence.
A search engine’s job is to index. That includes filtering illegal content. And anyone searching for harmful content will have to crawl through hundreds of pages to gather the information.
They don’t ‘talk’ like chatbots do and they certainly don’t hand over tips to kill people on a silver platter.
If it is illegal for a human to participate in the planning of a shooting, it must equally be illegal for an AI.
If the AI is not held accountable, and by extension the people responsible for the AI, then AI can be used for all sorts of illegal shit, with nobody being punished for it. Seems to me Cox would apply here, wouldn’t it? Or is that just for copyright?