

Freedom is the right to tell people what they do not want to hear.
Well let’s hear some suggestions then.
Artificial intelligence isn’t designed to maximize human fulfillment. It’s built to minimize human suffering.
What it cannot do is answer the fundamental questions that have always defined human existence: Who am I? Why am I here? What should I do with my finite time on Earth?
Expecting machines to resolve existential questions is like expecting a calculator to write poetry. We’re demanding the wrong function from the right tool.
Pretty weird statements. There’s no such thing as just “AI” - they should be more specific. LLMs aren’t designed to maximize human fulfillment or minimize suffering. They’re designed to generate natural-sounding language. If they’re talking about AGI, then that’s not designed for any one thing - it’s designed for everything.
Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.
It won’t solve anything
Go tell that to AlphaFold, which solved a decades‑old problem in biology by predicting protein structures with near lab‑level accuracy.
What are you suggesting, exactly? Do you have an actual solution to offer, or do you just want to be a smartass?
When people have sex, they usually do it in private, without any witnesses. Whatever happens during that time is often difficult to prove afterward, since it typically comes down to one person’s word against the other’s. Unless there’s clear physical evidence of assault, it can be extremely hard to establish that something was done against someone’s will. Most reasonable people would agree that “she said so” alone doesn’t amount to proof - and isn’t, by itself, a valid basis for sending someone to prison.
“If we just trusted women”
We don’t trust people based on their gender. We trust them based on credibility and evidence. If there’s even the tiniest amount of doubt, then it’s better to let the guilty walk free than to put an innocent person in jail. And I’m speaking broadly here - not about Trump specifically.
I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.
I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.
Did you genuinely not understand the point I was making, or are you just being pedantic? “Silicon” obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as “in non-biological substrates,” I’m happy to oblige - but I have a feeling you already knew that.
We’re not even remotely close.
That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.
Don’t confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn’t be further apart when it comes to cognitive capabilities.
The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:
Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,
Or we wipe ourselves out before we get the chance.
Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That’s what humans do: improve our technology.
The article points to cloning as a counterexample, but that’s not a technological dead end - it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.
They’re generally just referred to as “deep learning” or “machine learning”. The models themselves usually have names of their own, such as AlphaFold, PathAI and Enlitic.
The term AGI was first used in 1997 by Mark Avrum Gubrud in an article titled ‘Nanotechnology and International Security’:
By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.
You’re moving the goalposts. First you claimed understanding requires awareness, now you’re asking whether an AI knows what a molecule is - as if that’s even the standard for functional intelligence.
No, AI doesn’t “know” things the way a human does. But it can still reliably identify ungrammatical sentences or predict molecular interactions based on training data. If your definition of “understanding” requires some kind of inner experience or conscious grasp of meaning, then fine. But that’s a philosophical stance, not a technical one.
The point is: you don’t need subjective awareness to model relationships in data and produce useful results. That’s what modern AI does, and that’s enough to call it intelligent in the functional sense - whether or not it “knows” anything in the way you’d like it to.
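To make that concrete, here’s a toy sketch (the sentences and the model are made up for illustration, and it’s obviously nothing like a real LLM): a simple bigram model that “judges” grammaticality purely from statistical patterns in the data it was fit on, with no awareness anywhere in the picture.

```python
# Toy illustration: a bigram model scores grammaticality purely from
# statistics it has seen - no awareness or inner experience involved.
# The corpus and sentences are invented for this sketch.
from collections import Counter
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat ran to the door",
    "the dog ran to the mat",
]

# Count word pairs (bigrams) and single words (unigrams) in the corpus.
bigrams, unigrams = Counter(), Counter()
for sentence in corpus:
    words = sentence.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

def score(sentence):
    """Average log-probability of a sentence under the bigram model,
    with add-one smoothing so unseen word pairs don't get probability zero."""
    vocab_size = len(unigrams)
    words = sentence.split()
    total = 0.0
    for prev, curr in zip(words, words[1:]):
        p = (bigrams[(prev, curr)] + 1) / (unigrams[prev] + vocab_size)
        total += math.log(p)
    return total / max(len(words) - 1, 1)

print(score("the cat sat on the rug"))   # familiar word order: higher score
print(score("rug the on sat cat the"))   # scrambled word order: lower score
```

The scrambled sentence gets a lower score not because anything “knows” what grammar is, but because its word pairs never showed up in the data - and that’s the functional sense of understanding I’m talking about.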
Most definitions are imperfect - that’s why I said the term AGI, at its simplest, refers to a system capable of performing any cognitive task typically done by humans. Doing things faster, or even doing things humans can’t do at all, doesn’t conflict with that definition.
Humans are unarguably generally intelligent, so it’s only natural that we use “human-level intelligence” as the benchmark when talking about general intelligence. But personally, I think that benchmark is a red herring. Even if an AI system isn’t any smarter than we are, its memory and processing capabilities would still be vastly superior. That alone would allow it to immediately surpass the “human-level” threshold and enter the realm of Artificial Superintelligence (ASI).
As for something like making a sandwich - that’s a task for robotics, not AI. We’re talking about cognitive capabilities here.
“Understanding requires awareness” isn’t some settled fact - it’s just something you’ve asserted. There’s plenty of debate around what understanding even is, especially in AI, and awareness or consciousness is not a prerequisite in most definitions. Systems can model, translate, infer, and apply concepts without being “aware” of anything - just like humans often do things without conscious thought.
You don’t need to be self-aware to understand that a sentence is grammatically incorrect or that one molecule binds better than another. It’s fine to critique the hype around AI - a lot of it is overblown - but slipping in homemade definitions like that just muddies the waters.
The issue here is that machine learning also falls under the umbrella of AI.
So… not intelligent.
But they are intelligent - just not in the way people tend to think.
There’s nothing inherently wrong with avoiding certain terminology, but I’d caution against deliberately using incorrect terms, because that only opens the door to more confusion. It might help when explaining something one-on-one in private, but in an online discussion with a broad audience, you should be precise with your choice of words. Otherwise, you end up with what looks like disagreement, when in reality it’s just people talking past each other - using the same terms but with completely different interpretations.
Both that and LLMs fall under the umbrella of machine learning, but they branch in different directions. LLMs are optimized for generating language, while the systems used in drug discovery focus on pattern recognition, prediction, and simulations. Same foundation - different tools for different jobs.
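If it helps, here’s a rough sketch of what I mean by “same foundation, different jobs” (purely illustrative - the data, the objectives and the tiny linear models are invented for the example; this is not how AlphaFold or any real LLM is built): both jobs boil down to fitting a model to data by following the gradient of a loss. Only the data and the loss change.

```python
# Illustrative only: two tiny models sharing the same "learn from data by
# minimizing a loss" foundation, trained for completely different jobs.
# All data and numbers are invented for the sketch.
import numpy as np

rng = np.random.default_rng(0)

def train(features, targets, loss_grad, steps=500, lr=0.1):
    """Shared foundation: plain gradient descent on a linear model."""
    weights = np.zeros(features.shape[1])
    for _ in range(steps):
        predictions = features @ weights
        weights -= lr * features.T @ loss_grad(predictions, targets) / len(targets)
    return weights

# Job 1: "language-ish" model - predict whether the next token is "mat",
# given a one-hot encoding of the previous token (the / cat / sat).
prev_tokens = np.eye(3)[rng.integers(0, 3, size=200)]
next_is_mat = (prev_tokens[:, 0] == 1).astype(float)   # in this toy data, "mat" follows "the"
lm_weights = train(prev_tokens, next_is_mat,
                   loss_grad=lambda p, y: 1 / (1 + np.exp(-p)) - y)   # logistic loss

# Job 2: "drug-discovery-ish" model - predict a binding score from
# two made-up molecular descriptors.
descriptors = rng.normal(size=(200, 2))
binding = 0.8 * descriptors[:, 0] - 0.3 * descriptors[:, 1] + rng.normal(0, 0.1, 200)
qsar_weights = train(descriptors, binding,
                     loss_grad=lambda p, y: p - y)                    # squared-error loss

print("token model weights:  ", lm_weights)
print("binding model weights:", qsar_weights)
```

Same training loop, completely different objectives - which is the whole point: the shared foundation doesn’t make the tools interchangeable.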
Way to move the goalposts.
If you take that question seriously for a second - AlphaFold doesn’t spew chemicals or drain lakes. It’s a piece of software that runs on GPUs in a data center. The environmental cost is just the electricity it uses during training and prediction.
Now compare that to the way protein structures were solved before: years of wet lab work with X‑ray crystallography or cryo‑EM, running giant instruments, burning through reagents, and literally consuming tons of chemicals and water in the process. AlphaFold collapses that into a few megawatt‑hours of compute and spits out a 3D structure in hours instead of years.
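Back-of-the-envelope, and to be clear: every number below is an assumption I’m plugging in for illustration, not a measured figure from DeepMind or any lab.

```python
# Back-of-the-envelope energy comparison per solved structure.
# Every number here is an assumption picked for illustration, not a measurement.
GPU_POWER_KW = 0.4                # assumed draw of one data-center GPU
GPU_HOURS_PER_STRUCTURE = 2       # assumed inference time for one prediction
TRAINING_MWH = 3                  # assumed one-off training cost ("a few megawatt-hours")
STRUCTURES_AMORTIZED = 1_000_000  # assumed number of predictions the training is spread over

LAB_POWER_KW = 30                 # assumed average draw of crystallography / cryo-EM kit
LAB_HOURS_PER_STRUCTURE = 2_000   # assumed instrument and facility time over months of work

ai_kwh = GPU_POWER_KW * GPU_HOURS_PER_STRUCTURE + TRAINING_MWH * 1000 / STRUCTURES_AMORTIZED
lab_kwh = LAB_POWER_KW * LAB_HOURS_PER_STRUCTURE

print(f"AI prediction: ~{ai_kwh:.1f} kWh per structure")
print(f"wet lab:       ~{lab_kwh:,.0f} kWh per structure")
print(f"ratio:         roughly {lab_kwh / ai_kwh:,.0f}x")
```

Even if those assumptions are off by an order of magnitude in either direction, the comparison points the same way.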
So if the concern is environmental footprint, the AI way is dramatically cleaner than the old human‑only way.