In addition to making people stupid, I wonder what effect LLMs like Claude will have on programmers? How will new programmers learn if companies start using Claude?
By design.
Tech in general is making all of us more stupider, or at the very least, suckers and rubes.
Absolutely shocking.
The hilarious part about this is that nobody seems to love using these things more than the C suite. Let’s see how dumb they can get before their brains can’t even handle their bodily functions.

Hi Lisa.
Hi Super Nintendo Chalmers.
I’m lernding
@grok this true?
The vast majority of people don’t understand how their brain works…
What we think of “us” isn’t our brains, it’s just our consciousness. And that’s just a middle manager that’s getting all types of shit thrown at it.
Our consciousness can’t tell the difference between the prefrontal lobe handling something, or a laptop with a chatbot open.
It just takes the input and processes it.
When we throw stuff to an AI, the part of our brain that normally handles it, just starts doing other stuff.
If you don’t have the AI, your prefrontal lobe doesn’t want to take the old stuff back; it’s already got its plate full with the new stuff it picked up.
Your consciousness knows the chatbot can puke out an answer, so when your prefrontal lobe won’t/can’t do it, you just get hyperfocused on getting access to the chatbot.
It’s “making people stupider,” but the real problem is that it’s abusing how every mammal’s brain has worked for millions of years. It’s not something people can resist; it’s the brain as a whole working as intended. We just didn’t evolve for something that at any moment could become prohibitively expensive.
Think of how Uber was cheap till people needed it.
If people get hooked on cheap AI, they’re not gonna be able to survive without it and will pay anything. I think this is why it’s pushed on coders so hard: they want everyone to use it so everyone becomes dependent on it. Instead of paying for a 4-8 year degree, people will have to pay monthly for an AI just to earn a living.
That’s the end goal of the techbros. No one being able to work unless they pay for AI.
Asking from a place of agreement, curious if you have any readings to suggest, on that impact to the brain. Always looking for solid content to send along to others
(beyond this article and the MIT research it cites)
on that impact to the brain.
I mean, I pulled a whole bunch of stuff together in that comment, I’d be shocked if any source existed that touched on every part.
As far as “us” delegating tasks to other parts of the brain, this looks pretty good:
The driving force behind human brain evolution
Although many species can transfer behavior from volitional to habitual function (Poldrack et al., 2005; Barton, 2007; Seger and Spiering, 2011; Krubitzer and Seelke, 2012; Barton and Venditti, 2013), the shift from quadrupedal to bipedal locomotion nonetheless may have been a powerful driver for the rapid elaboration of the distinctively human “delegation” mode of information processing. Bipedality is rare in mammals, seen commonly only in humans and in some apes (Hardman et al., 2002; Alexander, 2004; Doyon et al., 2009). Although bipedality plausibly affords a number of adaptive advantages (e.g., it facilitates surveillance in densely vegetated areas, and frees the arms for other tasks Carrier, 2011), it also imposes a massive information-processing challenge. Compared to the stability conferred by quadrupedal locomotion, a bipedal organism rests its body mass on only two support points. This inherently unstable posture means that even a tiny shift in position will cause a fall, unless the animal instantly detects and responds to that change. Presumably for this reason, quadrupedal animals that resort to bipedality for surveillance typically do so only briefly, or in highly stereotyped poses (as is the case with meerkats). Moving about while bipedal poses extraordinary challenges, whereby the individual must constantly respond to ever-changing subtle shifts in weight distribution (Preuschoft, 2004), reducing its ability to attend to other aspects of its environment (such as the detection of food sources or approaching predators).
Despite these challenges, adult humans spend little time consciously thinking about maintaining their balance as they move around, except when placed in a challenging circumstance, such as walking on a narrow beam or when leaving a pub. The means of achieving that liberation is very clear as one watches a young child learning to walk. This is a long process, with every step initially requiring full concentration. Through time, however, the skills develop as control over fine motor movements improves–and full concentration on movement is no longer needed as the tasks involved become “automatized” and are delegated to other parts of the brain, such as the basal ganglia (Poldrack et al., 2005; Ashby et al., 2010; Seger and Spiering, 2011; Sepulcre et al., 2012) and the cerebellum (Duncan, 2001; Desmurget and Turner, 2010; Balsters and Ramnani, 2011; Callu et al., 2013). Plausibly, then, the adoption of bipedalism in proto-humans posed a strong selective advantage for individuals with brains capable of using their full processing power to learn bipedalism, but that were also able to delegate the basic tasks of walking and running to “lower” neural centers, freeing up the higher segments for detecting unpredictable opportunities and challenges (be they related to predators, food, or social cues), and rapidly responding to that information.
In summary, we suggest that (1) the ability to delegate routine tasks from the cortex to other parts of the brain is more highly developed in humans than other species; and (2) that elaboration arose during our evolutionary history because the computational challenges associated with balancing on two legs enhanced individual fitness in proto-humans who were capable of transferring the control of routine tasks in this way. To this we can add (3) that once this “delegation” mode of neural functioning had evolved, it was co-opted for many other cognitive tasks–essentially, liberating the cortex to deal with novel unpredictable events.
https://pmc.ncbi.nlm.nih.gov/articles/PMC4010745/
Although, to be upfront, I didn’t take the time to read the whole study, I just skimmed it. I was already aware of how this works from school and just searched real quick for a source.
But that study starts out assuming what pushed us toward delegation-heavy brains was how fucking hard it is to stand on two feet without a giant tail. And once we got good at delegating that away from conscious thinking, why wouldn’t we keep delegating everything else, as long as there isn’t an immediate negative consequence?
Thank you. That was a cool read. We squander this amazing organ. I’m with you that it’s hard to find all these concepts in one place. Sapolsky’s lectures capture some of it.
It’s pushed on coders because it gives every developer a team of never sleeping junior devs for a fraction of the price.
And if the competition is doing it, you won’t compete unless you do it too. Until the price matches that team of junior coders.
a team of never sleeping junior devs
As a senior dev, that sounds like my worst nightmare tbh
I couldn’t reach you at 2:41 AM so I went ahead and merged directly to prod!
Uber is still quite cheap, I find, but certainly not as cheap as it used to be.
I have no doubt “AI” companies are sitting on studies proving their shit causes irreversible brain damage, much like tobacco cartels used to sit on studies proving their shit caused cancer.
By the time the bubble pops and their shit gets properly regulated it’ll have crippled a whole generation (on top of all the other damage like destroying the Internet, causing unfathomable damage to science, culture, and society in general, and infecting any information produced after this shit became commonly used).
I have very little hope for our civilization being able to survive this self-inflicted disaster (and given how we’ve squandered natural resources and caused a runaway greenhouse effect that’ll make our world mostly uninhabitable for humans without massive industrial effort, which will be impossible after our fall, no new civilization will rise after this dark age). But hey, at least some sociopath CEOs will have made a lot of money out of it. Who cares if they murdered the future for their short-term profit.
It’s like becoming middle managers without the people-management experience.
Peter Principle is real and I’m tired of pretending it’s not.
It’s speed. Grunt work and time get in the way of seeing ideas through and making them realities. You can test ideas significantly faster and get to answers much quicker, so you’re moving mountains instead of pebbles.
If I’m working on something that is properly documented, it’s a waste of time to ask a human or search for something when a machine can find it in seconds so I can continue my work instead of spinning in circles for an hour.
Why am I as a single dad having to spend an entire night working on a meal plan for the week when I can have AI take my parameters, point it to my sources and drop me the plan every Sunday at 5pm without paying a service to do it?
Why should I look through article after article of local events when I can have my local AI ingest everything and bubble up what’s important?
I can’t trust news that pops up on Reddit by bots or government controlled media outlets. But I also don’t sit around on my ass all day to spend hours on news. Why not have my local models point at places like AP and Reuters and pull down all the articles in the day and summarize it for me for a 15 min read?
The internet has been enshittified; if I want to learn about something and research whatever, I have to dig through layers of sponsored content, clickbait, and straight-up lies. Why not build a RAG pointed at places I can trust and ingest data dumps from highly respected scientific and medical studies so I can get -real- answers?
I’m a high output army of 1 with responsibilities taller than me. My greatest enemy is time and I’m regularly trying to shove 35 hours into a 24 hour day.
AI has been a gift.
But you need to do it local and design your own stuff.
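The "local RAG over trusted sources" idea above can be sketched with nothing but stdlib Python: index a handful of documents as bag-of-words vectors, then retrieve the closest match for a query. This is a minimal illustration only; a real setup would swap in a proper embedding model and a local LLM, and the documents and function names here are purely made up for the example.

```python
# Minimal sketch of a local retrieval step for a DIY RAG pipeline.
# Bag-of-words cosine similarity stands in for real embeddings.
import math
from collections import Counter

def vectorize(text):
    # Crude tokenization: lowercase, split on whitespace, count words.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Return the k documents most similar to the query.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

# Toy "trusted corpus" -- in practice these would be ingested articles.
docs = [
    "New trial shows vaccine efficacy above 90 percent",
    "Local council approves new bike lanes downtown",
    "Researchers publish study on sleep and memory",
]

print(retrieve("vaccine trial results", docs)[0])
# → New trial shows vaccine efficacy above 90 percent
```

The retrieved passages would then be stuffed into the local model’s prompt as context, which is the part that keeps answers grounded in sources you picked rather than whatever the model memorized.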
Has anyone found an effective way to pair-up and “learn” the syntax faster/better compared to not using AI?
I’ve written a lot of code in the past, but recently started doing more with golang… and have been using AI for an assist. At the end of the day (and enough iterations) it creates readable and maintainable code. But, unfortunately, I don’t think I could rewrite it.
I was contemplating seeing how I could change my workflow, so I’d write the code, but AI would offer fast guidance.
Invest more time before you ask for help. Same as when you’re working with a person.
Also, add instructions to your agent to use a Socratic question teaching mode.
You need to force yourself to think, else you won’t learn.
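For example, the Socratic-mode instruction could look something like this (the wording is just a suggestion; adapt it to whatever config or system-prompt format your agent uses):

```
When helping me with Go:
- Never paste a complete solution first.
- Ask me one question at a time that nudges me toward the answer,
  e.g. "what does this function return when the slice is empty?"
- Only show code after I've made an attempt myself, and then explain
  what differs from my attempt and why.
```

The point is the same as with a human mentor: the friction of answering the questions yourself is what makes the syntax stick.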
If any of you actually read the article, they only tested 54 college students on writing a fucking essay. It’s also still undergoing a limited peer review.
Ultimately that tells you nothing that reflects reality, and only provides justification for the feelings of people who were anti-AI already.
The study might not be thorough, but it’s pretty obvious that if you don’t use your brain, you’re going to lose your skills. Even without the study this should be obvious: the more you practice a skill, the better you get. If you rely on ChatGPT, you will not get better and might get worse.
The sun still rose every day before we knew the planet spins…
Tobacco still caused cancer before the studies came out…
If you’ve studied biopsychology, you know what happens when any cognitive function gets offloaded.
That it’s being handed off specifically to AI just doesn’t matter in the slightest, because the offloading itself is what causes the atrophy.
The issue here, is what is being offloaded is critical thinking…
Which makes it incredibly difficult to explain what is happening to someone who is experiencing it, somewhat like Alzheimer’s.
People reliant on chatbots to do their critical thinking, simply don’t have the critical thinking to understand the problem. The only way to get them out of it, is making them go cold turkey like with drugs. And eventually the brain will begrudgingly start doing critical thinking again, but it’s gonna take a while, because offloading cognitive tasks from the conscious mind is literally why humans are the dominant species.
It’s why it takes so little time for people to become reliant on it.
The headline subtly implies we were already stupid lol
If you’re relying on AI, well then…
(Yes, I know they can have their uses if done properly, but for too many that’s a HUGE “if”.)
Everyone is stupid about something. People we label as stupid are stupid about most things.
I wonder… are Google and Bing search indexes being intentionally left to moulder specifically to drive people to Gemini and ChatGPT?