
Everyone told him to fuck off and leave Greenland alone, so Iran, you’re the next contestant on The Bribe Is Right.
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)



I have to admit, this is more entertaining than counting 'r's in strawberry. Novel logic puzzles really are all but impossible, because there is no "logic" input in token selection.
That being said, the first thing that came to my mind is that at some point the (presumably) adult parties, me and the priest, are going to be on the boat together, which would necessarily leave the baby alone on one shore or the other.
Clearly, the only viable solution is the baby eats the candy, and then the priest eats the baby.
It’s situational. My one upvote isn’t usually going to have a big impact other than offset some of the downvotes. I would want the response to have higher upvotes than the incorrect comment and if I thought my vote was tipping that scale I wouldn’t. But like most voting processes, I’m just one drop in the river and for the most part the river will go where it goes.


I visited Thailand for a few reasons, but definitely being able to afford a lavish vacation was part of the draw. But as it turned out, I got to know a few locals and really fell in love with the country. Sadly, I haven’t had a chance to go back because the flight is so long and expensive.
I was on a sort of cultural tour. Yeah, we visited a clothier and a jewelry store and super-upscale restaurants, but we also visited roadside booths, temples, a school, a Karen tribe, and walking markets. And I’m a bit of an introvert, but I made a real effort to interact with and get to know some of the locals.
Going there changed me. Not in any way that is easy to describe. I didn't go in a nazi and come back a communist or anything. But that experience has kinda echoed forward through the rest of my life. It has reframed my thinking about some things.
Anyway, I would just suggest that while you’re probably largely right, sometimes folks get enlightened by the experience through no intent of their own.
I sort of agree, but I think any comment that facilitates further on-topic discussion is worth an upvote. It doesn’t need to be exceptional in any way. In rare cases I’ve upvoted incorrect comments before to put more visibility on the correction in the response.
But 100% agree with not downvoting comments just because I disagree. Anyone I bother replying to, even if I vehemently disagree, I probably don’t downvote — because they led to more conversation.
It’s only when I see a comment so self-evidently idiotic or trolling, that I downvote and move on without further engagement.

I don’t think anyone paying attention would suggest it had no impact. In isolation it wasn’t big enough to have swung the election in a single state, but it might still be significant enough to be concerning to the DNC.


Hope it doesn't take a huge retooling effort. Someone is going to be left holding the bag when the bubble pops, and it's going to be a lot of suppliers who invested heavily in something no one wants any more.

I’m certain they know.


It's not the clarity alone. Chatbots are completion engines, and they respond in a way that feels cohesive. It's not that a question isn't asked clearly; it's that in the examples the chatbot is trained on, certain types of questions get certain types of answers.
It's like if you ask ChatGPT what the meaning of life is, you'll probably get back some philosophical answer, but if you ask it what the answer to life, the universe, and everything is, it's more likely to say 42 (I should test that before posting, but I won't).


Interesting. Is it interpreting the prompt as some sort of Caribbean patois and trying to respond back in kind? I’m not familiar enough to know if that sentence structure is indicative of that region.
If that’s the case, it makes sense that the answers would be lower quality because when patois is written, it’s almost never for quality informational content but “entertainment” reading.
Probably fixable with instructions, but one would have to know how to do that in the first place and that it needs to be done.
Interesting that this causes a problem and yet it has very little problem with my 3 wildly incorrect autocorrect disasters per sentence.


That was like watching a wheelchair race in a retirement community.


I mean there is a cost associated with it, just like there is a cost associated with having free soda in the break room, but it was free for the person doing the project. It’s absorbed into operational costs.


Well it’s Anthropic, creators of Claude. It’s a way to show off and convince people AI can do it. $20k is what it would cost you or me, but it’s just free for them.
I don’t even hate AI but it’s kinda sickening the way they overstate the capabilities. But let me tell you how excited the top leadership at my company is about this…


“I want to add a command line option that auto generates helloworld.exe”
“That’ll be $21,000.”


If you were so smart you’d have wads of cash like them. They got where they are through sheer grit and bootstraps and a paltry $50 million from their family.


I agree with you on a technical level. I still think LLMs are transformative of the original text, and if, as you say, "the number of sources that ultimately created the volume of the N-dimensional probabilistic space they're following is very low," then the solution is to feed it even more relevant data. But I appreciate your perspective. I still disagree, but I respect your point of view.
I’ll give what you’ve written some more thought and maybe respond in greater depth later but I’m getting pulled away. Just wanted to say thanks for the detailed and thorough response.


This is interesting, and the article makes this very clear up front, but the title is a little clickbait-y because this requires a fully compromised device. I think it should be fairly obvious that if your device is fully compromised, built-in software safeguards are not reliable.


Thank you. Great addition. That was a very interesting read, though I need to be more awake for reading technical writing like that 🥱.
My point about spending $20k to produce garbage, then, was actually realized in this “perfect” use case.


Hey, so I started this comment to disagree with you and correct some common misunderstandings that I’ve been fighting against for years. Instead, as I was formulating my response, I realized you’re substantially right and I’ve been wrong — or at least my thinking was incomplete. I figured I’d mention because the common perception is arguing with strangers on the internet never accomplishes anything.
LLMs are not fundamentally the plagiarism machines everyone claims they are. If a model reproduces any substantial text verbatim, it’s because the LLM is overtrained on too small of a data set and the solution is, somewhat paradoxically, to feed it more relevant text. That has been the crux of my argument for years.
That being said, Anthropic and OpenAI aren't just LLM models. They are backed by RAG pipelines, which insert verbatim text into the context when it is relevant to the task at hand. And that fact had been escaping my consideration until now. Thank you.
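For anyone unfamiliar, the RAG idea above can be sketched in a few lines of Python. This is purely illustrative (the toy word-overlap scoring and sample documents are mine, not anything Anthropic or OpenAI actually does): retrieve the passages most relevant to a query and paste them verbatim into the prompt, ahead of the question.

```python
def score(query, doc):
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=2):
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Insert the retrieved passages verbatim into the context."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge base for demonstration only.
docs = [
    "The capital of France is Paris.",
    "Java was released by Sun Microsystems in 1995.",
    "Thailand's walking markets are popular with tourists.",
]

print(build_prompt("When was Java released?", docs))
```

Real pipelines use embedding similarity instead of word overlap, but the key point is the same: the model sees source text verbatim in its context, regardless of how the weights were trained.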
I better go check my NRA-approved 2A handbook for instructions on dealing with masked thugs breaking into my home…