It’s not AI that’s the problem. AI is an amazingly powerful tool (I’m an AI researcher).
The problem is that it’s in the hands of psychotic technofascist greedy subhumans that want to destroy basically all of society so their stock can go up 0.001%. If we can cut out the source of the cancer, the body can begin to heal itself.
It’s truly amazing that when an expert in the field says something, they still cover their eyes and ears and say you’re wrong, they’re always right.
If someone did this with any field, they’d be called willfully ignorant. But because you work in Current Thing, you’re now against them, for being honest with the reality of your job.
Bet these are the same people who think they’re the rational ones and everyone else is a fool or paid actors.
Is it? I keep hearing people parrot this, but what big advancements have we made because of AI?
As a developer, I keep hearing this but all I see is low quality software that is all smoke and mirrors. Pumping out low quality code at a high pace is worse than pumping out less but higher quality code.
Name literally any industry, and AI has vastly pushed it forward. It’s way too big to cover here. Just off the top of my head: climate, pharmaceutical, other biomedical stuff (neuroscience, genetics, medical advances in every possible body system), energy (that alone has THOUSANDS of huge advances), science in general (astrophysics, geophysics, chemistry, agriculture, I mean every single scientific field). I’m listing every field I can think of, because it’s that pervasive.
The most visible advances, in business/productivity for the sake of making money, are I’d argue the least important. They matter most to a capitalist society that values profit over all else, but that’s a recipe for collapse, which is where we’re quickly headed.
🙄👍
You think AI has made improvements to our climate???
Can’t believe I read this on lemmy
lol please, go research something before you make any claims on it. No I’m not talking about datacenters fucking over the water supply or using fossil fuels, that’s bad obviously. Literally right now go google “AI used in climate science”. Just go do it. You’ll learn.
Are we talking about machine learning which has been around for a decade or generative AI? People usually mean the latter. Machine learning isn’t what caused the AI craze.
I’m honestly curious how an LLM could improve the climate in any way.
And imo leaving the datacenters out is kind of a bad-faith argument; they’re the only reason it’s everywhere. It wouldn’t be a problem if it were basically a new computation tool used by niche professions.
I know I’m being pedantic, but machine learning has been around for many decades
go google what I said
That is not how socializing on the internet works. You make the claim; you back it up or be discredited.
It’s cute that you think you’re somehow different
aww thanks!
I want to agree with you, but AI is just another psychopath in a world where we don’t need any more psychopaths.
Right! If you don’t count the mass surveillance boost, the autonomous killing machines they’re trying to make, the environmental impact, the pillaging of our individual experiences, and the destruction of all our shared spaces online, AI is a pretty cool tool.
All of that is because the incentives are coming from those with the most power/money who are the most psychotic cancer cells in the history of the world. You’re only aware of such a tiny sliver of it because that’s the most problematic and gets the most news. Those are all huge problems that need to be solved, but the cause isn’t AI. AI is just an accelerant for a sick hypercapitalist society that is doomed to collapse. AI itself has been used for millions of great things that improve all of life on earth, but in the hands of these psychopaths it’s just being used for the ultimate triumph of Capital over Labor, at the expense of literally everything else on earth.
I had, like, a bunch of paragraphs lined up because I thought you didn’t understand this. But as it turns out, you seem to be perfectly okay with the world being raped to death.
I hope your academic field is entertaining, at least.
All those things being true is enough for me to hate AI.
Edit: As my dad says, One aw shit wipes away a million attaboys.
Do you hate the concept of iron alloy? Because it was used for hundreds of years in swords and weapons to kill millions of people. See how silly that sounds?
Iron alloy doesn’t convince people they shouldn’t have their noose visible in case someone might see it and intervene. You’re not going to change my mind. Once the bubble is popped and all our lives get worse and 3 people control all the technology it’s not going to matter that it saves people time, or it creates efficiency.
You’re not um… you’re not even reading, but ok. Keep living in your echo chamber I guess.
Just because you don’t like my points doesn’t mean I’m arguing in bad faith, and I find it a little insulting that you’re trying to dodge instead of responding to my point by insinuating I am.
No, I’m saying you’re not even trying to understand; you’re just saying you don’t like it no matter what. To that I said: ok, keep living in your echo chamber. I’m not saying that’s bad faith, it’s just not trying to reach truth.
Electricity -> electrocutions
Gasoline -> fire bombs
Axes -> axe murders
we really need to get back to throwing rocks at each other; it’s much less environmentally impactful and puts us on a much more level playing field, since only the rich control all these techno-marvels.
If you have anything else to add besides hyperbole now is the time. Otherwise I think we’re done here.
Narrator: actually, no it was not.
e.g. it still spreads misinformation.
Making no mistakes is a much higher standard than that which we hold to ourselves. Why are people moving the goalposts of intelligence or usefulness behind perfection?
Technology up to the dawn of the AI slop era was indeed expected to be perfect. When it wasn’t, we fixed it so it would be.
Why should AI be exempt from this? Techbros have convinced you that it should be so that their favourite lines go up.
There’s literally nothing more to it. A hammer is useless if it only drives 50% of the nails you hit with it. Why the fuck should we expect anything less than triple- or quad-9 accuracy from AI if it’s so goddamned “intelligent”?
B-b-be-be-because shut up you, that’s why!
Won’t someone think of the poor shareholders?
(/s)
Bc when I use a calculator, I actually DO expect literal perfection. And when I use google search, I expect it to be “useful”. And when I find information in Wikipedia, I expect it to be somewhat authoritative, even if incomplete. And if I use automated driving features, I expect them not to completely take over the wheel and crash me into a brick wall… or into a little child in a crosswalk right in front of me.
People who drive drunk lose their driving privileges. Employees who screw up that often get fired. Doctors who dispense incorrect medical advice lose their ability to practice medicine, plus get exposed to lawsuits. Counselors who tell their patients to kill themselves… Anyway, people DO experience the consequences of their actions, like ALL THE FUCKING TIME.
Whereas AI, in contrast, is said to be “going to be” great, not great now. Fine, finish it and then we’ll talk. In the meantime, stop shoving it in front of my face.
If AI is like a human, it’s at best a 2-year-old and at worst more like a 6-month-old. It should not be “in charge”, e.g. of dispensing medical advice. And since it takes so much time to check its results for errors, it is literally slower and more painful to use it than not to use it (sometimes; often, in fact).
You have a point somewhere buried in your mind, as revealed by the insightful first sentence, but your phrasing in the second sentence reads like sea-lioning and is not helping. Nobody is asking for “behind perfection” as that is literally mathematically impossible, and that is not what “moving the goalposts” means. It should not be enough to sound intelligent - we need to actually be such (same for AI as well).
And you have calculators.
And Google search has been spotty since the beginning.
And Wikipedia article quality … varies.
Like people, if you give AI a sufficiently complex problem, it won’t get it 100% right on the first pass. But, if you give it enough detail to distinguish an acceptable solution from an unacceptable one, it might get 80% of what you’re looking for on the first pass, boost that to 96% on the 2nd pass, 99% on the 3rd pass, and eventually what’s left is simple enough that it finally does get it 100% right.
Anybody who accepts the first thing AI tells them with today’s tech, is using it wrong.
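That multi-pass convergence can be sketched as a toy model (the fixed per-pass fix rate is an assumption for illustration, not a measurement of any real LLM):

```python
# Toy model of the multi-pass workflow described above: each review pass
# closes most of the remaining gap to an acceptable solution, so quality
# converges quickly even though no single pass is perfect.

def refine(quality: float, fix_rate: float = 0.8) -> float:
    """One pass fixes `fix_rate` of whatever is still wrong."""
    return quality + (1.0 - quality) * fix_rate

quality = 0.0
for pass_no in range(1, 4):
    quality = refine(quality)
    print(f"pass {pass_no}: {quality:.0%}")
```

With an 80% fix rate this reproduces the 80% → 96% → 99% progression above. The point is just that the acceptance criteria (“enough detail to distinguish an acceptable solution from an unacceptable one”) are what make each pass converge instead of wander.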
Your “if” there is doing an awful lot of the heavy lifting. Fwiw, I’m not talking about special-purpose, custom-built LLMs - a large part of the problem is the lack of precision in the language used to describe the concepts under discussion.
An example: https://lemmy.world/post/46390157
Another example: https://discuss.tchncs.de/post/59584533
Both of these would be better called “cheating” than “AI”. But AI makes cheating easier, and more to the point, so many companies (such as Oracle) are literally pushing their programmers (those remaining, anyway) to write programs exclusively with AI rather than by themselves that the very definition of “cheating” will need to be reexamined as a result.
In the examples, also take note of how poor the quality of the LLM output is - e.g. regardless of whether the source is Grok or Claude or whatever, those therapy examples are not helpful in the slightest. Your counterargument might be that these are the “cheap” (aka free) AIs, but preemptively I will say in response: they still count as “AI”, especially in the context of the OP.
As far as “cheating” goes, ever since I got out of the game of paying a bunch of academics to judge and label me, I have been actively encouraged to “cheat” by the people who pay me money… that’s real life.
If you’re using a Ginsu knife to knead dough, you might not have optimal results. Claude has been pretty good at code since about 4-6 months ago. Grok? The last time I asked Grok for anything, it was the fastest LLM on the market, and the most nonsensical. Useless trash.
(I did not downvote you btw)
Okay, but Grok is still surely part of the “Anxiety around AI is growing rapidly in the US, research shows” phenomenon, as Grok is one of the various AIs that people are aware of, and anxious about.
Your words read to me like you have kept yourself aware of the positive benefits of using AI - which many people on Lemmy, including to some degree myself, have done far less of.
But there are some negatives as well…
I was excited about the idea of purpose-built systems trained on specific datasets to help find complex patterns to diagnose diseases or suggest potential molecules for specific purposes.
Then the LLM shit started and everyone started fantasizing about intelligent “AI” just because it was able to reproduce patterns of language that seem relevant to a given input. Some of those funding it kept chasing that dream and are convinced that, if they just throw more compute at the problem, they can evolve the renaissance AGI that can do anything. Then they can fire every worker and be bazillionaires with robot slaves and never have to work another day of their lives… and fuck everyone and everything else.
It’s amazing what we can ruin when we let greed and selfishness drive our society.
They’ve been fantasizing about that ever since “computers” started growing in accessibility - in the 1960s…
The current crop is just the first time such things have been delivered with something resembling “average” human responses.
The LLM craze is a natural maturation point of the AI field though, and now it’s expanded into foundation models (FMs), which you would still probably just call LLMs because most people don’t know the differences. FMs are getting close to that point of being a magical universal computer: you can tell it to do anything about anything and it just works. There are specific FM applications, like FMs for earth science or remote sensing (which I work in), but the big money coming from this technofascist elite is pushing for FMs for everything, along with Agentic AI, which is the ultimate state to replace pesky human workers overall. They seek the ultimate triumph of Capital over Labor.
There are competing incentives driving the industry, but by far the strongest one comes from whoever has the most money, and those who have the most money are the worst possible people, who should have no say in how anything works. Scary times we’re in.
At 1 million I could already stop working and live a decent life :/. I really don’t get why, past 1 billion, they continue to search for more.
They actually have a disorder or disease. However in this case their disorder is destroying the rest of the world. There’s a fast approaching point that the world organism will self-heal to prevent its own death.
It’s a sickness
Maybe it’s because I’ve only ever had at most a comfortable income but I truly don’t understand the mentality of needing so much money.
I don’t get paid as much as my peers but I make enough to be comfortable. I am my own department and, aside from emergencies and other high priority situations, I manage myself and choose what to work on when. I have a decent work life balance. Because I make enough to be comfortable (in large part because my landlord promised not to raise our rent - early in the COVID lockdown - if we were “good tenants” and has managed to keep true to her word) I don’t feel the need for more. That balance is worth not making the 20% more a year I might get somewhere else because I can’t guarantee I won’t have a shitty boss that doesn’t let me have that work/life balance.
The lack of regulation of AI is absolutely a serious problem; there are so many problems your comment isn’t even funny.
Problems with people using it for health advice.
Problems with teens using it instead of friends.
Problems with AI giving absurdly incorrect advice to people in general, but also professionals like managers and CEO’s.
Problems with the data centers that host these AI systems requiring enormous amounts of power - so much that researchers have shown these data centers are drying up vast areas around them.
The techno-fascists are in all sorts of businesses; that’s not special to AI. The problem is that with AI the techno-fascists aren’t regulated in any way.
Neither in how their data centers impact the environment and the electric grid, nor in how AI has actual bad effects on their customers, because there is no regulation on the use or supply of AI services.
100% agree with every point you made. Everything you’re saying is specific to this iteration of LLMs though. That’s just one tiny piece (well large in terms of public perception and capital acquisition but small in terms of the research space).
gee maybe people like you shouldn’t have put those tools into the shitbag’s hands?
I remember a decade ago multiple movements to rein in AI before it became uncontrollable, and any chance of that is long fuckin gone. We’re gonna barrel forward heedless of the danger, because fuck you, that guy wants profits and doesn’t care about humanity.
and people like you made the tools and gave it to 'em.
I fucking work on climate models, you jabroni. You have no idea about the industry, or really anything other than what your most echo-chambered influencers tell you to think.
Doubtful. And you thought that AI would stay in modeling? You made them something dangerous, and you thought it wouldn’t be weaponized?
you fucking moron. you either made yourself their bitch, or were used as their bitch unknowingly. science is ashamed of idiots like you who enable the worst.
deleted by creator
that’s why you see lots of chemical and biowarfare?
or continued CFC use huh?
you simpering dolt.
That seems terribly extreme. It’s not like it’s a bomb that is obviously for blowing people up. Someone made something with some cool applications; then some guys with many times more money and resources than anyone should be allowed to have took the idea and ran with it toward a bunch of psychotic ends.
The problem isn’t that people can use good things for bad purposes, nor is it the people that make or improve those things. The root cause is that western society is currently structured in a way that ends up rewarding certain types of madness, and the reward structure is set up such that individuals can get a vast undue amount of influence and power. Under these conditions, it is natural that even a tiny number of such individuals can overtake the system like a single cancer cell can eventually kill someone. All of these alarming things going on for over 60 years are symptoms of that societal illness. Please don’t blame scientists for sciencing.
bingo
Indeed.
To cut off their data and revenue streams, stick to Open Source, locally run, models/chatbots.
Almost all research sharing is done through open source. Of course there are specific agreements between two companies if they wish to collaborate on private products, but the vast majority is just sharing a code base on github, writing a paper, and letting others review and try it out.
It’s amazing how open source has benefitted the individual. The monopolization of compute is still a barrier we’ll have to crash through
- is an AI researcher
- immediately uses Nazi lingo after introducing themselves
you can’t be more obvious than this about the ideology of AI💀
lol get offline a bit, not everything is Nazi. You’re saying “subhuman” is Nazi coded?
I believe Peter Thiel, Musk, Andreeson, Horowitz, Yarvin, and about 100 others who are actively trying to erode our society to the point of collapse, so they can rise as god-kings from the ashes, need to not exist in our free society. They have broken the inherent social contract and have therefore lost the privilege.
It’s always been that way, it’s just that until now the general public could say “well at least they pay me.”
So ironically this rise in anxiety is itself being driven by self-interest. People were fine with those people being in charge as long as they got a comfortable lifestyle out of it. A pattern seen throughout history.