Right! If you don’t count the mass surveillance boost, the autonomous killing machines they’re trying to make, the environmental impact, the pillaging of our individual experiences, and the destruction of all our shared spaces online, AI is a pretty cool tool.
All of that is because the incentives are coming from those with the most power and money - the most psychotic cancer cells in the history of the world. You’re only aware of a tiny sliver of what AI is used for, because the most problematic uses get the most news. Those are all huge problems that need to be solved, but the cause isn’t AI. AI is just an accelerant for a sick hypercapitalist society that is doomed to collapse. AI itself has been used for millions of great things that improve all of life on earth, but in the hands of these psychopaths it’s just being used for the ultimate triumph of Capital over Labor, at the expense of literally everything else on earth.
I had, like, a bunch of paragraphs lined up because I thought you didn’t understand this. But as it turns out, you seem to be perfectly okay with the world being raped to death.
I hope your academic field is entertaining, at least.
All those things being true is enough for me to hate AI.
Edit: As my dad says, one “aw shit” wipes away a million “attaboys.”
Do you hate the concept of iron alloy? Because it was used for hundreds of years in swords and weapons to kill millions of people. See how silly that sounds?
Iron alloy doesn’t convince people they shouldn’t leave their noose visible where someone might see it and intervene. You’re not going to change my mind. Once the bubble has popped, all our lives are worse, and three people control all the technology, it’s not going to matter that it saves people time or creates efficiency.
You’re not um… you’re not even reading, but ok. Keep living in your echo chamber, I guess.
Just because you don’t like my points doesn’t mean I’m arguing in bad faith, and I find it a little insulting that you’re trying to dodge instead of responding to my point by insinuating I am.
No, I’m saying you’re not even trying to understand; you’re just saying you don’t like it no matter what. To that I said: ok, keep living in your echo chamber. I’m not saying that’s bad faith, it’s just not trying to reach truth.
And what is the truth? You don’t get to define away all the bad parts of the technology and just point out the good parts. My life is materially worse because of how this technology is developing and being implemented. Some extremely vague wins aren’t enough to convince me to change my mind. I have heard your argument, I have measured it and found it wanting.
Electricity -> electrocutions
Gasoline -> fire bombs
Axes -> axe murders
We really need to get back to throwing rocks at each other. It’s much less environmentally impactful and puts us on a much more level playing field; as it stands, only the rich control all these techno-marvels.
If you have anything else to add besides hyperbole now is the time. Otherwise I think we’re done here.
Narrator: actually, no it was not.
e.g. it still spreads misinformation.
Making no mistakes is a much higher standard than the one we hold ourselves to. Why are people moving the goalposts of intelligence or usefulness behind perfection?
Technology up to the dawn of the AI slop era was indeed expected to be perfect. When it wasn’t, we fixed it so it would be.
Why should AI be exempt from this? Techbros have convinced you that it should be so that their favourite lines go up.
There’s literally nothing more to it. A hammer is useless if it only drives 50% of the nails you hit with it. Why the fuck should we expect anything less than triple- or quad-nine accuracy from AI if it’s so goddamned “intelligent”?
B-b-be-be-because shut up you, that’s why!
Won’t someone think of the poor shareholders?
(/s)
Bc when I use a calculator, I actually DO expect literal perfection. And when I use Google search, I expect it to be “useful”. And when I find information on Wikipedia, I expect it to be somewhat authoritative, even if incomplete. And if I use automated driving features, I expect them not to completely take over the wheel and crash me into a brick wall… or into a little child in a crosswalk right in front of me.
People who drive drunk lose their driving privileges. Employees who screw up that often get fired. Doctors who dispense incorrect medical advice lose their ability to practice medicine, plus get exposed to lawsuits. Counselors who tell their patients to kill themselves… Anyway, people DO experience the consequences of their actions, like ALL THE FUCKING TIME.
Whereas AI, in contrast, is said to be “going to be” great, not great now. Fine, finish it and then we’ll talk. In the meantime, stop shoving it in my face.
If AI is like a human, it’s at best a 2-year-old, and at worst more like a 6-month-old. It should not be “in charge”, e.g. of dispensing medical advice. And since it takes so much time to check its results for errors, it is sometimes (often, in fact) literally slower and more painful to use it than not to.
You have a point buried somewhere in your mind, as revealed by the insightful first sentence, but your phrasing in the second sentence reads like sea-lioning and is not helping. Nobody is asking for “behind perfection”, as that is literally mathematically impossible, and that is not what “moving the goalposts” means. It should not be enough to sound intelligent - we need to actually be intelligent (and the same goes for AI).
And you have calculators.
And Google search has been spotty since the beginning.
And Wikipedia article quality … varies.
Like people, if you give AI a sufficiently complex problem, it won’t get it 100% right on the first pass. But, if you give it enough detail to distinguish an acceptable solution from an unacceptable one, it might get 80% of what you’re looking for on the first pass, boost that to 96% on the 2nd pass, 99% on the 3rd pass, and eventually what’s left is simple enough that it finally does get it 100% right.
Anybody who accepts the first thing AI tells them with today’s tech is using it wrong.
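The multi-pass workflow described above - draft, check what’s still wrong, revise, repeat - can be sketched in a few lines. This is a hypothetical illustration only; the `check` and `revise` functions stand in for “prompt the model again with the list of problems”:

```python
# Hypothetical sketch of the review loop: keep asking the model to fix
# whatever a checker flags, until the checks pass or we give up.

def refine(draft, check, revise, max_passes=4):
    """check(draft) returns a list of problems; revise(draft, problems)
    returns an improved draft. Both stand in for re-prompting an AI."""
    for _ in range(max_passes):
        problems = check(draft)
        if not problems:
            return draft  # finally 100% right
        draft = revise(draft, problems)
    return draft  # best effort after max_passes

# Toy demo: the "draft" is a list of numbers, a "problem" is any odd
# number, and each revision pass fixes one flagged item.
check = lambda xs: [i for i, x in enumerate(xs) if x % 2]

def revise(xs, problems):
    xs = list(xs)
    xs[problems[0]] += 1  # fix only the first flagged item per pass
    return xs

print(refine([1, 2, 3, 4], check, revise))  # -> [2, 2, 4, 4]
```

The point of the sketch is the same as the comment: the loop only converges if `check` can actually tell an acceptable answer from an unacceptable one, i.e. if you gave the AI enough detail to make that distinction.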
Your “if” there is doing an awful lot of the heavy lifting. Fwiw, I’m not talking about special-purpose, custom-built LLMs - a large part of the problem is how imprecisely language describes the concepts under discussion.
An example: https://lemmy.world/post/46390157
Another example: https://discuss.tchncs.de/post/59584533
Both of these would be better called “cheating” than “AI”. But AI makes cheating easier, and - more to the point - so many companies (such as Oracle) are literally pushing their programmers (those remaining, anyway) to write programs exclusively with AI rather than by themselves, that the very definition of “cheating” will need to be reexamined as a result.
In the examples, also take note of how poor the quality of the LLM output is - e.g. regardless of whether the source is Grok or Claude or whatever, those therapy examples are not helpful in the slightest. Your counterargument might be that these are the “cheap” (aka free) AIs, but preemptively I will say in response: they still count as “AI”, especially in the context of the OP.
As far as “cheating” goes, ever since I got out of the game of paying a bunch of academics to judge and label me, I have been actively encouraged to “cheat” by the people who pay me money… that’s real life.
If you’re using a Ginsu knife to knead dough, you might not have optimal results. Claude has been pretty good at code for the last 4-6 months. Grok? The last time I asked Grok for anything, it was the fastest LLM on the market, and the most nonsensical - useless trash.
(I did not downvote you btw)
Okay, but Grok is still surely part of the “Anxiety around AI is growing rapidly in the US, research shows” phenomenon, as Grok is one of the various AIs that people are aware of, and anxious about.
Your words read to me like you have kept yourself aware of the positive benefits of using AI - which many people on Lemmy, including to some degree myself, have done far less of.
But there are some negatives as well…