
  • I’m not sure how you’d solve the problem of big corpos becoming cheap content farms while avoiding harming the people who use these tools to make something rich and beautiful, but I have to believe there’s a way to thread that needle.

    Easy: local AI.

    Keep generative AI locally runnable instead of corporate-hosted. Make it free, open, and accessible. That gives the little guys the cost advantage and takes away the scaling advantage of the mega publishers. Lemmy users should be familiar with this concept; there's a concrete sketch at the end of this comment.

    Whenever I hear people rail against AI as a whole, I tell them they are handing the world to Sam Altman and his dystopia, to people who do not care about stolen content, equality, or them. I get a lot of hate for it, but the battle they need to be fighting is corporate AI versus open AI.
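
    To make that concrete, here's a minimal sketch of "local AI" in practice using llama-cpp-python; the model path, settings, and prompts below are placeholders for illustration, not a recommendation.

    ```python
    # Minimal local-inference sketch with llama-cpp-python.
    # The GGUF path is a placeholder; point it at whatever quantized
    # model you actually have on disk.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/qwen2.5-32b-instruct-q4_k_m.gguf",  # hypothetical local file
        n_ctx=8192,       # context window
        n_gpu_layers=-1,  # offload as many layers as possible to the GPU
    )

    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Summarize why local inference has no per-token cost."},
        ],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])
    ```

    No API key, no per-token bill, and nothing leaves the machine, which is the whole point.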

  • Maybe I am just out of touch, but I smell another bubble bursting when I look at how enshittified all major web services are simultaneously becoming.

    It feels like something has to give, right?

    We have YouTube, Reddit, Twitter, and more racing to enshittify faster than I can believe, and Google Search racing to destroy the internet, yet they have also hit the 'too big to fail' critical mass and have already shoved out all of their major competitors (other than Discord, I guess).

  • It’s useful.

    I keep Qwen 32B loaded on my desktop pretty much whenever it's on, as an (unreliable) assistant for analyzing or parsing big texts, doing quick chores, writing scripts, bouncing ideas around, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).

    It does "feel" different when the LLM is local: you can manipulate the prompt syntax freely, hammer it with multiple requests that come back fast when it seems to get something wrong, and not worry about refusals or data leakage. A rough sketch of that workflow is below.
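
    For illustration only, here's what that loop might look like against a local OpenAI-compatible endpoint (llama.cpp's server, Ollama, vLLM, and the like all expose one); the URL, model name, and prompts are made up for the example.

    ```python
    # Sketch of the "hammer it with variations" workflow against a local
    # OpenAI-compatible server. URL and model name are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

    system_variants = [
        "You are a terse code reviewer. Point out bugs only.",
        "You are a careful code reviewer. Explain each issue in one sentence.",
    ]

    snippet = "def add(a, b): return a - b"

    # Re-asking with tweaked system prompts costs nothing but time when the
    # model runs locally, so it's natural to fire off several variants and compare.
    for system_prompt in system_variants:
        resp = client.chat.completions.create(
            model="qwen2.5-32b-instruct",  # whatever name the local server exposes
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": f"Review this function:\n{snippet}"},
            ],
            temperature=0.2,
        )
        print(resp.choices[0].message.content)
    ```

    Because every retry is free and private, tweaking the prompt and re-asking a few times beats agonizing over one perfect request to a metered API.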