You are anthropomorphizing it. It can give you truth or falsehood the same way the pages of a book or a funhouse mirror can.
If you ask me today, “what is the meaning of life?” I might give you an answer. And if you ask me tomorrow, I might give a different one. You have no way of knowing whether I’m correct today, tomorrow, or ever. But if one of those answers, right or wrong, helps you find meaning, it’s still useful. (As a rhetorical point. I’m definitely the last person anyone should look to for meaning.)
AI is a lot like that. You give it input, it gives you output, and whether you get anything of value depends greatly on what you are looking for.
I’ve asked it for advice on improving some of my writing. Some of the advice I took, some I ignored, and some I modified before using. I think the writing turned out better, and since I largely write for myself, I’m pretty happy with that.
I’ve asked it for help programming, and at times it was helpful; other times it cost me hours circling around the same old wrong answers. But there’s every chance I would’ve struggled just as much looking online.
The other day my daughter was making a slushie and it was coming out really wet and gross, so I explained to an AI what we’d done and asked if it had any idea what went wrong. It turns out we were using zero-sugar soda, which doesn’t work: the dissolved sugar lowers the freezing point, and that’s what gives a slushie its soft, icy texture instead of a split of solid ice and liquid. So we added some simple syrup and it turned out perfectly.
And it was much faster and easier than Google. But if the advice had been wrong, nothing of value would’ve been lost.
I have access to AI integrated with my IDE. It mostly guesses at the line I’m going to write. It probably gets it right 50% of the time.
It also very, very often suggests stuff that works but isn’t very good. Like, it offered a convoluted suggestion for adding audit fields to Firebase. Ultimately it did suggest the solution I went with, but only after starting down the road of stupid ideas.
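For what it’s worth, the boring approach here is just server-side timestamps. Here’s a rough sketch using the Firestore Java client; the collection and field names are made up for illustration, not pulled from my actual code:

```java
import com.google.cloud.firestore.DocumentReference;
import com.google.cloud.firestore.FieldValue;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreOptions;

import java.util.HashMap;
import java.util.Map;

public class AuditFieldsExample {
    public static void main(String[] args) throws Exception {
        Firestore db = FirestoreOptions.getDefaultInstance().getService();

        // "orders" and the field names below are hypothetical examples.
        DocumentReference doc = db.collection("orders").document("order-123");

        // On create: stamp both audit fields with the server's clock,
        // so client clock skew can't misreport when things happened.
        Map<String, Object> data = new HashMap<>();
        data.put("status", "NEW");
        data.put("createdAt", FieldValue.serverTimestamp());
        data.put("updatedAt", FieldValue.serverTimestamp());
        doc.set(data).get();

        // On update: only touch updatedAt.
        doc.update("status", "SHIPPED",
                   "updatedAt", FieldValue.serverTimestamp()).get();
    }
}
```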
Like, if your code base is pretty good and you just need to tweak stuff that’s already there, that’s one thing. But I frequently look at our code base and wonder whether whoever implemented it really knows Java at all.
I suppose it might be fair to assume a huge technology company would have their shit together, but technically I work for a huge tech company… just not the same core business. Tech enough that we have a whole mess of internal AI tooling to create AIs for specific things.
We can create an AI agent, but we can’t follow simple fucking REST standards.
Anyway, it’s hard to quantify, but I get less mileage out of the integrated AI tools than I do bouncing ideas off ChatGPT.