• FinnFooted@lemmy.world
    5 hours ago

    Technology companies these days pretty much always lose money at the start. It's a really stupid feature of modern startups IMO: get people dependent, then make money later. I don't agree with it. I don't really think our entire economic system is viable though, and that's another conversation.

    But LLMs have been improving exponentially. I was on board with everything you're saying just a year ago, even the part about how they suck and are going to hit a wall. But they don't need more training data or more processing power; they already have those, and now they're refining the models themselves. I have a local LLM on my computer that performs better than ChatGPT did a year ago, and it's only a few GB. I run it on a shitty laptop.
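
    For what it's worth, getting one of these running takes almost nothing. A minimal sketch, assuming llama-cpp-python and some few-GB quantized GGUF file (the model file name and path below are just placeholders):

        # Minimal sketch: run a small quantized model locally with llama-cpp-python.
        # The model file below is a placeholder; any few-GB GGUF chat model works.
        from llama_cpp import Llama

        llm = Llama(
            model_path="models/small-chat-model-q4_k_m.gguf",  # hypothetical local file
            n_ctx=2048,    # modest context window keeps RAM usage laptop-friendly
            n_threads=4,   # plain CPU threads, no GPU required
        )

        out = llm("Explain what a quantized model is in one sentence.", max_tokens=64)
        print(out["choices"][0]["text"])

    That's the whole setup: a single CPU-only dependency and one model file, which is why it runs fine even on cheap hardware.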

    • taladar@sh.itjust.works
      3 hours ago

      I experimented with quite a few local LLMs too, and granted, some perform a lot better than others, but they all share the same major issues. They don't get smarter, they just produce the same nonsense faster (or rather, it often feels like they're just more verbose about the same nonsense).