  • until I find out that it’s AI art.

    This is a description of prejudice. You saw something - you felt something - and then you tried to un-care. Your judgement of the work changes based on its provenance. The text has become wrong, because the artist was a robot. It was generated degenerate.

    The feeling isn’t a choice.

    It is, though. You can freely abandon this opinion. Changing your mind doesn’t take a miracle.

    CGI is a great comparison, because it has robbed us of a certain magic. You no longer go 'how the fuck did they do that?!' because the answer is: computers. They used computers. The cameraman didn't need hiding; they drew over his reflection. The camera didn't fit through there; they composited two shots. No stunt double was in danger; they rendered the actor.

    All the old magic remains possible… but it's no longer necessary. It is still wonderful when movies like Crank abuse tiny cameras to pass one through a moving vehicle. It is still wonderful when a movie shows a crowd and used hundreds of real people to fill it. But now there are other ways to put those images in front of people's eyeballs. The new ways are massively simpler and flexible to a fault. Recognizing and spitting at those ways was a popular group-bonding activity for a couple of decades. And then we got over it.

    Nobody seems to want it.

    I want it. A program that draws anything is cool, actually. Photoreal video, on demand, is obviously fantastic. That shit was nigh impossible even with computers. Now it’s a filter. I am excited for the few people using these tools properly, instead of going ‘look what I made!’ or performatively scoffing.

    Nobody worth giving a damn about wants this technology.

    Ah, no true Scotsman wants it. Since I do not kneejerk un-care when I notice a funny idea was described to a robot, I am an uncultured charlatan.

    When you observe talent in others, what you’re mostly seeing is skill.

    Distinction without difference.

    No shit I could play piano. But I can’t. Ability is present-tense. Aside from rare dipshits who think every piano player is a natural virtuoso, folks understand talent and skill are the same damn thing.

    I’ve written music, though. Developing the ability to play it myself, instead of hearing the computer play it, was not necessary. And by letting the computer play it, I was able to focus on what I wanted it to sound like, instead of whether I was playing it right.



  • Right, should say deep neural networks. Perceptrons hit a brick wall because there are some problems they simply cannot handle - XOR is the textbook example (sketched below). Multi-layer networks stalled because nobody went 'what if we just pretend there's a gradient?' until twenty-goddamn-twelve.
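
    A minimal sketch of that brick wall, under assumptions that are mine rather than the comment's (numpy, sigmoid units, plain gradient descent): XOR is not linearly separable, so a single linear layer can never learn it, while one hidden layer trained with ordinary backprop gets there.

    ```python
    # Assumed toy example: a single-layer perceptron cannot represent XOR,
    # but a tiny two-layer network trained with backprop can.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer: 2 inputs -> 4 units
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer: 4 units -> 1 output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(20_000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        # backward pass: plain gradient descent on squared error
        d_p = (p - y) * p * (1 - p)
        d_h = (d_p @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_p
        b2 -= lr * d_p.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(p.round(2).ravel())  # should head toward [0, 1, 1, 0]; no single layer ever will
    ```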

    Broad applications will emerge and succeed. LLMs kinda-sorta-almost work for nearly anything. What current grifters have proven is that billions of dollars won't overcome fundamental problems in network design. "What's the next word?" is simply the wrong question for a combination chatbot / editor / search engine / code generator / puzzle solver / chess engine / air fryer. But it's obviously possible for one program to do all those things. (Assuming you place your frozen shrimp directly atop the video card.) Developing that program will closely resemble efforts to uplift LLMs. We're just never gonna get there from LLMs specifically.
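
    To make "what's the next word?" concrete, here's a deliberately dumb sketch, assuming nothing but a hand-built bigram table (no learning, purely illustrative): the entire interface to the model is "append one more token," which is exactly why it's an odd foundation for a chess engine or an air fryer.

    ```python
    # Assumed toy example: language generation reduced to its one question.
    # A real LLM conditions on the whole context with a transformer, but the loop is the same.
    import random

    bigrams = {
        "the": ["cat", "dog"],
        "cat": ["sat"],
        "dog": ["ran"],
        "sat": ["down."],
        "ran": ["away."],
    }

    def continue_text(prompt: str, steps: int = 4) -> str:
        words = prompt.split()
        for _ in range(steps):
            options = bigrams.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))  # the only question ever asked
        return " ".join(words)

    print(continue_text("the"))  # e.g. "the cat sat down."
    ```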


  • We’ve been trying to abstract hardware since… C. We’ve had much better virtual machines, but they never catch on.

    Adoption is a feature you can’t design.

    But as for LLMs digging any deeper than they already have? Lol, no. Microsoft bet the farm and demanded a whole new keyboard key. People see it as an unreliable convenience at best. It's not getting any better until after the bubble pops.



  • Neural networks will inevitably be a big deal for a wide variety of industries.

    LLMs are the wrong approach to basically all of them.

    There are five decades of what-ifs waiting to be defictionalized, now that we can actually do neural networks. Training them became practical, and 'just train more' was proven effective. Immense scale is useful but not necessary.

    But all the hype has been forced into spicy autocomplete and a denoiser - LLMs and diffusion models, respectively - and only the denoiser is doing the witchcraft people want.