If you want something more complex than an alarm clock, this does kinda work for anything. Emphasis on “kinda.”
Neural networks are universal approximators. People get hung up on the approximation part, like that cancels out the potential in… universal. You can make a model that does any damn thing. Only recently has that seriously meant you can - backpropagation works, and it works on video-game hardware.
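For the skeptical, here’s a minimal sketch of that claim: a one-hidden-layer network trained by plain backpropagation, in pure Python with no libraries, learning x² on [-1, 1]. The hidden size, learning rate, epoch count, and target function are all arbitrary choices for illustration, not anything canonical.

```python
import math
import random

random.seed(0)
H = 16  # hidden units (arbitrary)

# One hidden tanh layer, one linear output.
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    y = sum(w2[j] * h[j] for j in range(H)) + b2
    return h, y

xs = [i / 20 - 1 for i in range(41)]  # grid on [-1, 1]
lr = 0.05
for _ in range(2000):
    for x in xs:
        h, y = forward(x)
        dy = y - x * x  # dL/dy for L = 0.5 * (y - target)^2
        for j in range(H):
            dh = dy * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            w1[j] -= lr * dh * x
        b2 -= lr * dy

mse = sum((forward(x)[1] - x * x) ** 2 for x in xs) / len(xs)
print(f"mse = {mse:.5f}")  # small: the net approximates x^2
```

Twenty lines of chain rule and it fits the curve. That’s the whole trick; everything else is scale.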
“AI is whatever hasn’t been done yet” has been the punchline for decades. For any advancement in the field, people only notice once you tell them it’s related to AI, and then they just call it “AI,” and later complain that it’s not like on Star Trek.
And yet it moves. Each advancement makes new things possible, and old things better. Being right most of the time is good, actually. 100% would be better than 99%, but the 100% version does not exist, so 99% is better than never.
Telling the grifters where to shove it should not condemn the cool shit they’re lying about.
I’m not sure we’re disagreeing very much, really.
My main point WRT “kinda” is that there are a tonne of applications that 99% isn’t good enough for.
For example, one use that all the big players in the phone world seem to be pushing ATM is that of sorting your emails for you. If you rely on that and it classifies an important email as unimportant, so you miss it, then that’s actually a useless feature. Either you have to check all your emails manually yourself, in which case it’s quicker to just do that in the first place and the AI offers no value, or you rely on it and end up missing something that it was important you didn’t miss.
And it doesn’t matter if it gets it wrong one time in a hundred, that one time is enough to completely negate all potential positives of the feature.
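That one-in-a-hundred claim is easy to put numbers on, because per-email misses compound across volume. (The 50-important-emails-a-week figure below is an assumption purely for illustration.)

```python
# Chance of missing at least one important email, given a
# hypothetical 1% per-email misclassification rate.
p_correct = 0.99

weekly = 1 - p_correct ** 50          # ~50 important emails a week (assumed)
yearly = 1 - p_correct ** (50 * 52)   # same rate over a year

print(f"miss something this week: {weekly:.1%}")  # roughly 40%
print(f"miss something this year: {yearly:.1%}")  # effectively certain
```

So at any realistic volume, “it will eventually misfile something important” isn’t a risk, it’s a certainty; the argument is only about what that costs you.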
As you say, 100% isn’t really possible.
I think where it’s useful is for things like analysing medical data and helping coders who know what they’re doing with their work. In terms of search it’s also good at “what’s the name of that thing that’s kinda like this?”-type queries - kind of the opposite of traditional search engines, where you’re trying to find out information about a specific thing, and where I think non-Google engines are still better.
Your example of catastrophic failure is… e-mail? Spam filters are wrong all the time, and they’re still fantastic. Glancing in the folder for rare exceptions is cognitively easier than categorizing every single thing one-by-one.
If there’s one false negative, you don’t go “Holy shit, it’s the actual prince of Nigeria!”
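The “glancing is cheaper” point can be made concrete with a toy cost model. Every per-email time and count here is invented, purely to show the shape of the trade-off:

```python
# Toy time-cost model: hand-triaging everything vs. trusting a filter
# and skimming its junk folder for mistakes. All numbers are made up.
n_emails = 200          # emails per week (assumed)
triage_s = 5            # seconds to properly hand-sort one email (assumed)
skim_s = 1              # seconds to glance at one junk-folder subject line

by_hand = n_emails * triage_s                  # sort everything yourself

flagged_important = 20                         # what the filter lets through
with_filter = flagged_important * triage_s + (n_emails - flagged_important) * skim_s

print(by_hand, with_filter)  # 1000 vs 280 seconds
```

The imperfect filter wins as long as skimming for its mistakes is meaningfully cheaper than doing its job yourself, which is exactly the spam-folder situation.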
But sure, let’s apply flawed models somewhere safe, like analyzing medical data. What?
“And it doesn’t matter if it gets it wrong one time in a hundred, that one time is enough to completely negate all potential positives of the feature.”
Obviously fucking not.
Even in car safety, a literal life-and-death context, a camera that beeps when you’re about to screw up can catch plenty of moments where you’d have guessed wrong. Yeah - if you straight-up do not look, and blindly trust the beepy camera, bad things will happen. That’s why you have the camera and look.
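The beepy-camera point is just independent-failure multiplication: if the driver and the camera each miss hazards independently, the combined system only fails when both do. The failure rates below are invented for illustration, and real-world failures are never perfectly independent - but the shape of the math holds.

```python
# Two imperfect, roughly independent safety layers beat either one alone.
p_driver_misses = 0.01   # hypothetical: driver misses 1% of hazards
p_camera_misses = 0.05   # hypothetical: camera misses 5% of hazards

p_both_miss = p_driver_misses * p_camera_misses  # 0.0005, i.e. 1 in 2000

print(p_both_miss)
```

An unreliable layer stacked on another unreliable layer still cuts the miss rate by orders of magnitude. That’s the whole case for imperfect assistants as a second set of eyes.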
If a single fuckup renders the whole thing worthless, I have terrible news about human programmers.
Okay, so you can’t conceive of an email that it’s important you don’t miss.
Let’s go with what Apple sold Apple Intelligence on, shall we? You say to Siri “what time do I need to pick my mother up from the airport?” and Siri combs through your messages for the flight time, checks the time of arrival from the airline’s website, accesses maps to get journey time accounting for local traffic, and tells you when you need to leave.
With LLMs, absolutely none of those steps can be trusted. You have to check each one yourself. Because if they’re wrong, then the output is wrong. And it’s important that the output is right. And if you have to check every step, then what do you save by having Siri do it in the first place? It’s actually taking you more time than it would have taken to do everything yourself.
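This part of the argument does have real teeth: when steps are chained and every one must be right, per-step reliabilities multiply. Taking the four steps described (find the message, get the flight time, check arrival, estimate the drive) and an assumed 99% reliability per step:

```python
# End-to-end reliability of a chain where every step must succeed.
per_step = 0.99   # assumed per-step reliability
steps = 4         # messages -> flight time -> arrival check -> route time

end_to_end = per_step ** steps
print(f"{end_to_end:.4f}")  # 0.9606 - errors compound across the chain
```

So a chained assistant is always less reliable than its shakiest step, and the question becomes whether verifying its answer is cheaper than producing it - which is where the two sides here actually disagree.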
AI assistants are being sold as saving you time and taking meaningless busywork away from you. In some applications, like writing easy, boring code, or crunching more data than a human could in a very short time frame, they are. But for the applications they’re being sold on for phones? Not without being reliable. Which they can’t be, because of their architecture.
This absolutism is jarring against your suggestion of applying the same technology to medicine.
Siri predates this architecture by a decade. And you still want to write off the whole thing as literally useless if it’s ever ever ever wrong… because god forbid you have to glance at whatever e-mail it points to. Like skimming one e-mail to confirm it’s from your mom, about a flight, and mentions the time… is harder than combing through your inbox by hand.
Confirming an answer is a lot easier than finding it from scratch. And if you’re late to the airport anyway, oh no, how terrible. Everything is ruined forever. Burn your computers and live in the woods, apparently, because one important e-mail was skipped. Your mother had to call you and then wait comfortably for an entire hour.
Perfect reliability does not exist. No technology provides it. Even with your prior example, phone alarms - I’ve told Android to re-use the last timer when I said I wanted twenty minutes, and it didn’t go off until 6:35 PM, because yesterday I said that at 6:15. I’ve had physical analog alarm clocks fail to go off in the morning. I did not abandon the concept of time following that betrayal.
The world did not end because a machine fucked up.