25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)

  • 0 Posts
  • 847 Comments
Joined 1 year ago
Cake day: October 14th, 2024



  • I have to admit, this is more entertaining than counting ‘r’s in strawberry. Novel logic puzzles are all but impossible because there is no “logic” input in token selection.

    That being said, the first thing that came to my mind is that at some point the (presumed) adults, me and the priest, are going to be on the boat together, which would necessarily leave the baby alone on one shore or the other.

    Clearly, the only viable solution is the baby eats the candy, and then the priest eats the baby.


  • It’s situational. My one upvote isn’t usually going to have a big impact other than to offset some of the downvotes. I would want the response to have more upvotes than the incorrect comment, and if I thought my vote was tipping that scale, I wouldn’t. But like most voting processes, I’m just one drop in the river, and for the most part the river will go where it goes.


  • I visited Thailand for a few reasons, but definitely being able to afford a lavish vacation was part of the draw. But as it turned out, I got to know a few locals and really fell in love with the country. Sadly, I haven’t had a chance to go back because the flight is so long and expensive.

    I was on a sort of cultural tour. Yeah, we visited a clothier and a jewelry store and super-upscale restaurants, but we also visited roadside booths, temples, a school, a Karen tribe, and walking markets. And I’m a bit of an introvert, but I made a real effort to interact with and get to know some of the locals.

    Going there changed me. Not in any way that is easy to describe. I didn’t leave a nazi and come back a communist or anything. But that experience has kinda echoed forward through the rest of my life. It has reframed my thinking about some things.

    Anyway, I would just suggest that while you’re probably largely right, sometimes folks get enlightened by the experience through no intent of their own.


  • I sort of agree, but I think any comment that facilitates further on-topic discussion is worth an upvote. It doesn’t need to be exceptional in any way. In rare cases I’ve upvoted incorrect comments before to put more visibility on the correction in the response.

    But 100% agree with not downvoting comments just because I disagree. Anyone I bother replying to, even if I vehemently disagree, I probably don’t downvote — because they led to more conversation.

    It’s only when I see a comment so self-evidently idiotic or trolling that I downvote and move on without further engagement.


  • I agree with you on a technical level. I still think LLMs are transformative of the original text, even if

    the number of sources that ultimately created the volume of the N-dimensional probabilistic space they’re following is very low.

    If so, then the solution is to feed it even more relevant data. But I appreciate your perspective. I still disagree, but I respect your point of view.

    I’ll give what you’ve written some more thought and maybe respond in greater depth later but I’m getting pulled away. Just wanted to say thanks for the detailed and thorough response.




  • Hey, so I started this comment to disagree with you and correct some common misunderstandings that I’ve been fighting against for years. Instead, as I was formulating my response, I realized you’re substantially right and I’ve been wrong — or at least my thinking was incomplete. I figured I’d mention it because the common perception is that arguing with strangers on the internet never accomplishes anything.

    LLMs are not fundamentally the plagiarism machines everyone claims they are. If a model reproduces any substantial text verbatim, it’s because the LLM is overtrained on too small a data set, and the solution is, somewhat paradoxically, to feed it more relevant text. That has been the crux of my argument for years.

    That being said, the Anthropic and OpenAI products aren’t just bare LLMs. They are backed by RAG pipelines, which insert verbatim source text into the context when it is relevant to the task at hand. And that fact had escaped my consideration until now. Thank you.
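
    To illustrate what I mean by a RAG pipeline, here’s a minimal sketch. All the names are made up for illustration — real systems use vector embeddings and dedicated retrieval services, not keyword overlap — but the core move is the same: fetch relevant text and paste it verbatim into the prompt.

    ```python
    # Toy RAG step: retrieve relevant source text and insert it verbatim
    # into the model's context. Names and scoring are illustrative only,
    # not any vendor's actual API.

    def retrieve(query, documents, top_k=2):
        """Naive keyword-overlap retrieval; real pipelines use embeddings."""
        q_words = set(query.lower().split())
        def score(doc):
            return len(q_words & set(doc.lower().split()))
        return sorted(documents, key=score, reverse=True)[:top_k]

    def build_prompt(query, documents):
        """Insert the retrieved passages verbatim into the context."""
        context = "\n---\n".join(retrieve(query, documents))
        return f"Context:\n{context}\n\nQuestion: {query}"

    docs = [
        "Overtraining on a small corpus causes verbatim memorization.",
        "RAG inserts retrieved passages verbatim into the model's context.",
        "The model then answers conditioned on that inserted text.",
    ]
    prompt = build_prompt("How does RAG use retrieved passages?", docs)
    ```

    The point being: even a model that never memorized anything will emit verbatim text, because the pipeline deliberately hands it verbatim text.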