Hmmm… Interesting one to think about, even as someone who hates AI
According to the complaint, Ikner, then a student at FSU, shared with ChatGPT images of firearms he had acquired. The chatbot then allegedly explained how to use them, “telling him the Glock had no safety, that it was meant to be fired ‘quick to use under stress’ and advising him to keep his finger off the trigger until he was ready to shoot.”
At one point, the lawsuit alleges, ChatGPT said that it’s much more likely for a shooting to gain national attention “if children are involved, even 2-3 victims can draw more attention.” Later, on the day of the shooting, the lawsuit says, Ikner asked about what “the legal process, sentencing, and incarceration outlook” would be.
OpenAI has pushed back on the claim that its product holds responsibility for the shooting. “Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” OpenAI spokesperson Drew Pusateri told NBC News in an email. Pusateri wrote that the company worked with law enforcement after learning of the incident and continues to do so.
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” he added. “ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.”
Should Google be liable for giving tips about how to use a gun / which killings get the most attention?
What about your local SearXNG instance?
Is a chatbot a glorified search engine or something different? Which query should have crossed the line for reporting?
This sort of thing is so murky to me, and in the same realm as the (BS, IMO) “guns don’t kill people, people kill people” argument, but the line in the sand feels a lot harder to draw here, in my brain at least.
Can I suggest that we regulate technology based on expectations? For example, if the industry expects AGI — then set up regulations for handling AGI.
Or, you know… spend hundreds of years not regulating based on expectations — then find yourself with a sudden capacity for dystopian levels of surveillance state panopticon technology, and no legal obligation for how [not] to use it.
Chatbots are glorified search engines in many ways. Yet also, if we keep grounding our moral expectations relative to what-has-been rather than what-can-be, we’re going to find that regulation can’t keep up with technology. Worse, technology will tip the balance of power toward whoever wields it.
If a human provided that advice, they would be in jail. I think there is some percentage of liability here, and it’s up to the courts to decide how much.
What is this based on? Everything I saw that was provided in the article is stuff you would see on a gun forum, reddit, or similar. I didn’t see advice and I’m not aware of someone going to jail for anything comparable.
Yeah, I’m with you on this. If it helped him do actual specific planning, that’s an issue. If it encouraged him in a way similar to how it has been documented encouraging people to commit suicide, that’s a problem. Explaining that Glocks have trigger safeties, and giving basic information about what tends to get more press attention, is not great, but it’s also not all that damning.
That being said, if companies are going to market these products as being able to sense and respond to intent, then they should be able to connect basic inquiries like this and say “hey dipshit, don’t do a mass shooting”.
False equivalence.
A search engine’s job is to index, and that includes filtering out illegal content. Anyone searching for harmful content will have to crawl through hundreds of pages to gather the information.
They don’t ‘talk’ like chatbots do and they certainly don’t hand over tips to kill people on a silver platter.
https://www.youtube.com/watch?v=Ykvf3MunGf8