Doing the Lord’s work in the Devil’s basement

  • 0 Posts
  • 93 Comments
Joined 6 months ago
Cake day: May 8th, 2024

  • The only reason people are throwing bitch fits over AI/LLMs is that it’s the first time the “art” industry is experiencing its own futility.

    I would even go further and argue that the art industry doesn’t really care about AI. The people white-knighting on the topic are evidently not artists and probably don’t know anybody legitimately making a living from their art.

    The intellectual property angle makes this most obvious. Typically, independent artists don’t care about IP because they don’t have the means to enforce it. They make zero money from their IP and their business is absolutely not geared towards that - they are artists selling art, not patent trolls selling lawsuits. Copying their “style” or “general vibes” is not harming them, just like recording a piano cover of a musician’s song doesn’t make them lose any ticket sales or sell fewer vinyls (which are the bulk of their revenue).

    AI is not coming for the job of your independent illustrator pouring their heart and soul into their projects. It is coming for the jobs of corporate artists illustrating corporate blogs, and of those who work in content farms - basically swapping shitty human-made slop for shitty computer-made slop. Same for music: if you know any musician who’s losing business because of Suno, it’s on them, because Suno is really mediocre.

    I have yet to meet any artist with this kind of deep anti-AI sentiment. They are either vaguely anxious about the idea of the thing but don’t touch it because they’re busy practicing their craft, or they use the hallucination engines as a tool for inspiration. At any rate, there’s no indication that their business has seen much of a slowdown linked to AI.

  • Yeah, I did some looking up in the meantime, and indeed you’re going to have a context-size issue. That’s why it’s only summarizing the last few thousand characters of the text: that’s the size of its attention window.
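
    For intuition, here’s a minimal sketch of that limit, using the rough rule of thumb of ~4 characters per token (the exact ratio depends on the tokenizer, and the input filename is just a placeholder):

    ```python
    # Rough estimate of whether a document fits in a model's context window.
    # Uses the common ~4 characters-per-token heuristic; real counts vary by tokenizer.
    def fits_in_context(text: str, n_ctx: int = 4096, chars_per_token: float = 4.0) -> bool:
        est_tokens = len(text) / chars_per_token
        print(f"~{est_tokens:.0f} estimated tokens vs. a {n_ctx}-token window")
        return est_tokens <= n_ctx

    with open("document.txt") as f:  # hypothetical input file
        doc = f.read()

    if not fits_in_context(doc):
        print("Too long: the model effectively only sees the tail of the text.")
    ```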

    There are some models fine-tuned to an 8K-token context window, some even to 16K, like this Mistral brew. If you have a GPU with 8 GB of VRAM you should be able to run it using one of the quantized versions (Q4 or Q5 should be fine). Summarization should still be reasonably good.
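
    For example, a minimal sketch of running such a quant locally with llama-cpp-python (the model filename is a placeholder for whichever GGUF quant you download):

    ```python
    # Local summarization with a quantized GGUF model via llama-cpp-python.
    # pip install llama-cpp-python (built with GPU support)
    from llama_cpp import Llama

    llm = Llama(
        model_path="mistral-7b-instruct.Q4_K_M.gguf",  # placeholder: any Q4/Q5 quant
        n_ctx=16384,      # the extended 16K context window
        n_gpu_layers=-1,  # offload all layers; a Q4 7B model should fit in ~8 GB VRAM
    )

    with open("document.txt") as f:  # hypothetical input file
        doc = f.read()

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": f"Summarize the following text:\n\n{doc}"}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])
    ```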

    If 16K isn’t enough for you, then that’s probably not something you can do locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them; it should only set you back a few dollars. As far as I know this approach should still be compatible with Open WebUI.
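
    If you go that route, a minimal sketch using the huggingface_hub client (the endpoint URL and token are placeholders for whatever endpoint you spin up):

    ```python
    # Querying a model hosted on Hugging Face infrastructure.
    # pip install huggingface_hub
    from huggingface_hub import InferenceClient

    client = InferenceClient(
        model="https://your-endpoint.endpoints.huggingface.cloud",  # placeholder URL
        token="hf_...",  # your API token
    )

    with open("document.txt") as f:  # hypothetical input file
        doc = f.read()

    summary = client.text_generation(
        "Summarize the following text:\n\n" + doc,
        max_new_tokens=512,
    )
    print(summary)
    ```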