
I’m certain they know.
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)



It’s not the clarity alone. Chatbots are completion engines and respond in a way that feels cohesive. It’s not that a question isn’t asked clearly; it’s that in the examples the chatbot is trained on, certain types of questions get certain types of answers.
It’s like if you ask ChatGPT what the meaning of life is, you’ll probably get back some philosophical answer, but if you ask it what the answer to life, the universe, and everything is, it’s more likely to say 42 (I should test that before posting, but I won’t).
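A toy sketch of the "completion engine" idea above (deliberately oversimplified, and nothing like a real LLM internally): the engine just returns whichever answer most often followed a matching prompt in its training examples, so different phrasings of "the same" question pull up different answers.

```python
# Toy completion engine: answers are whatever most frequently followed
# the prompt in the (hypothetical, made-up) training examples below.
from collections import Counter

training = [
    ("what is the meaning of life", "that depends on your philosophy"),
    ("what is the meaning of life", "philosophers have debated this for millennia"),
    ("what is the answer to life, the universe, and everything", "42"),
]

def complete(prompt: str) -> str:
    # Count every answer seen after this exact prompt, return the most common one.
    answers = Counter(a for p, a in training if p == prompt)
    return answers.most_common(1)[0][0] if answers else "no idea"

print(complete("what is the answer to life, the universe, and everything"))  # 42
```

Real models match on much fuzzier similarity than exact string equality, but the mechanism being described is the same: the phrasing of the question selects the neighborhood of training examples the answer is drawn from.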


Interesting. Is it interpreting the prompt as some sort of Caribbean patois and trying to respond back in kind? I’m not familiar enough to know if that sentence structure is indicative of that region.
If that’s the case, it makes sense that the answers would be lower quality, because when patois is written, it’s almost never for quality informational content but for “entertainment” reading.
Probably fixable with instructions, but one would have to know how to do that in the first place and that it needs to be done.
Interesting that this causes a problem and yet it has very little problem with my 3 wildly incorrect autocorrect disasters per sentence.


That was like watching a wheelchair race in a retirement community.


I mean there is a cost associated with it, just like there is a cost associated with having free soda in the break room, but it was free for the person doing the project. It’s absorbed into operational costs.


Well it’s Anthropic, creators of Claude. It’s a way to show off and convince people AI can do it. $20k is what it would cost you or me, but it’s just free for them.
I don’t even hate AI but it’s kinda sickening the way they overstate the capabilities. But let me tell you how excited the top leadership at my company is about this…


“I want to add a command line option that auto generates helloworld.exe”
“That’ll be $21,000.”


If you were so smart you’d have wads of cash like them. They got where they are through sheer grit and bootstraps and a paltry $50 million from their family.


I agree with you on a technical level. I still think LLMs are transformative of the original text, and if the number of sources that ultimately created the volume of the N-dimensional probabilistic space they’re following is very low, then the solution is to feed it even more relevant data. But I appreciate your perspective. I still disagree, but I respect your point of view.
I’ll give what you’ve written some more thought and maybe respond in greater depth later but I’m getting pulled away. Just wanted to say thanks for the detailed and thorough response.


This is interesting, and the article makes this very clear up front, but the title is a little clickbait-y because the attack requires a fully compromised device. It should be fairly obvious that if your device is fully compromised, built-in software safeguards are not reliable.


Thank you. Great addition. That was a very interesting read, though I need to be more awake for reading technical writing like that 🥱.
My point about spending $20k to produce garbage, then, was actually realized in this “perfect” use case.


Hey, so I started this comment to disagree with you and correct some common misunderstandings that I’ve been fighting against for years. Instead, as I was formulating my response, I realized you’re substantially right and I’ve been wrong — or at least my thinking was incomplete. I figured I’d mention it because the common perception is that arguing with strangers on the internet never accomplishes anything.
LLMs are not fundamentally the plagiarism machines everyone claims they are. If a model reproduces any substantial text verbatim, it’s because the LLM is overtrained on too small a data set, and the solution is, somewhat paradoxically, to feed it more relevant text. That has been the crux of my argument for years.
That being said, Anthropic’s and OpenAI’s products aren’t just LLMs. They are backed by RAG pipelines, which insert verbatim text into the context when it is relevant to the task at hand. And that fact had been escaping my consideration until now. Thank you.
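To make the RAG distinction concrete, here is a minimal sketch (all names and the scoring function are hypothetical, not any vendor’s actual pipeline): retrieval picks the most relevant stored passage and splices it into the prompt verbatim, which is exactly where word-for-word source text can resurface regardless of how the model itself was trained.

```python
# Minimal RAG-style retrieval sketch: crude word-overlap scoring,
# then verbatim insertion of the best passage into the prompt context.

def score(query: str, passage: str) -> int:
    """Relevance as a count of shared lowercase words (deliberately crude)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def build_context(query: str, passages: list[str]) -> str:
    best = max(passages, key=lambda p: score(query, p))
    # The retrieved passage goes into the context word-for-word -- this is
    # the verbatim step, separate from anything stored in model weights.
    return f"Context:\n{best}\n\nQuestion: {query}"

passages = [
    "The reference compiler is written in Rust and targets an LLVM backend.",
    "Free soda in the break room is absorbed into operational costs.",
]
print(build_context("which backend does the compiler target?", passages))
```

Production systems use embeddings instead of word overlap, but the structural point is the same: the model generates, the pipeline quotes.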


I just posted where I found the source in another comment. It probably has the information you’re interested in.


Here is the original cite that my company pulled that from if you want more details.
I’ve never written a compiler, nor anything in Rust, so I have no idea of the effort involved. I’m just boggling over the price tag. I’ll bet that’s the cost of an entire offshore team.


At work today we had a little presentation about Claude Cowork. And I learned someone used it to write a C (maybe C++?) compiler in Rust in two weeks at a cost of $20k and it passed 99% of whatever hell test suite they use for evaluating compilers. And I had a few thoughts.
I think this is a cool thing in the abstract. But in reality, they cherry-picked the best possible use case in the world, and anyone expecting their custom project to go like this will be lighting huge piles of money on fire.


men & women
males & females
men & females
It does feel kinda weird, right?

I am antifa, and I’m completely disorganized.


Yeah. I’m pretty sure for-profit social media isn’t good for anyone. Adults shouldn’t use it either. But we decided long ago that you can’t stop adults from drinking or smoking weed, the assumption being that adults are mature enough to handle it, or at least to approach lies and manipulation with more skepticism; I look around and see that’s not true.
I feel like at least things like Lemmy and Mastodon are much easier to filter or walk away from when you aren’t in the right emotional space.


What we have now is “neat.” It’s freaking amazing it can do what it does. However it is not the AI from science fiction.
Hope it doesn’t take a huge retooling effort. Someone is going to be left holding the bag when the bubble pops, and it’s going to be a lot of suppliers who invested hugely in something no one wants any more.