25+ yr Java/JS dev
Linux novice - running Ubuntu (no Windows/Mac)

  • 0 Posts
  • 670 Comments
Joined 1 year ago
Cake day: October 14th, 2024


  • I’d have to assume this is for local LLMs because it would get expensive quickly to test the performance of a prompt well.

    It never really occurred to me to even try because the output can be so subjective. And to grab something at random: “must cite source if factual claim”.

    • There are lots of ways to cite a source. How would you ensure you capture all of them?
    • A source can be hallucinated. You still have to curl any links.
    • A source can be misunderstood and not say what the bot thinks it says. The only way to test this is by hand, or to write another AI to do it, and now you’re testing that.
    • Last but not least: what if the source exists, and says what the bot thinks it does, but it’s a garbage source?

    In short, passing a unit test does nothing to guarantee any quality of output. You’d be further ahead effort-wise to just give the LLM multishot examples and actually manually review each output for quality.
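
    To make that concrete, about the most a unit test could realistically automate for that “must cite source” rule is a surface check, something like this (a rough sketch in TypeScript; extractCitations is a made-up helper, and a link that resolves tells you nothing about whether the source actually supports the claim):

    ```typescript
    // Rough sketch: the most a "must cite source" check can verify automatically
    // is that the output contains links and that they resolve. Whether a source is
    // real, relevant, or garbage still needs a human (or yet another model, which
    // then needs testing itself).

    // Made-up helper: pull anything that looks like a URL out of the model output.
    function extractCitations(output: string): string[] {
      return output.match(/https?:\/\/\S+/g) ?? [];
    }

    async function citesSomethingReachable(output: string): Promise<boolean> {
      const links = extractCitations(output);
      if (links.length === 0) return false; // no citation at all

      for (const url of links) {
        try {
          const res = await fetch(url, { method: "HEAD" });
          if (!res.ok) return false; // dead or hallucinated link
        } catch {
          return false; // unreachable
        }
      }
      // Passing only means "links exist and resolve", not that they say what the
      // model claims they say, or that they are worth citing in the first place.
      return true;
    }
    ```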

    What you’re doing here is spending all your time churning your prompt instead of accomplishing anything. Take it from someone who spent like six months prompt churning. Some prompts are better than others but at the end of the day the output is random and you’ll never anticipate all the possible inputs. Tweak it until the output feels right and that’s that. Play with it when you have nothing better to do.




  • It’s perfectly fine to like something that isn’t art. Hell, it’s perfectly fine to have a definition of art that can include AI; that’s just a framing for talking about the things AI does well vs. the things it doesn’t. I find that where a human can mix different things together in a way that enriches the whole, AI mixes things together in contradictory ways because it lacks human experience. It’s why AI pictures usually come out flat and lifeless, or include nonsense details that don’t fit, or include requested details in incongruous ways.

    That said, I only know about Hatsune Miku through my kids. I don’t really know anything about that specifically.











  • I think we could have a fascinating discussion about this offline. But in short here’s my understanding: they look at a bunch of queries and try to deduce the vector that represents a particular idea—like let’s say “sphere”. So then without changing the prompt, they inject that concept.

    How does this injection take place?

    I played with a service a few years ago where we could upload a corpus of text and from it train a “prefix” that would be sent along with every prompt, “steering” the output ostensibly to be more like the corpus. I found the influence to be undetectably subtle on that model, but that sounds a lot like what is going on here. And if that’s not it then I don’t really follow exactly what they are doing.
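
    If I had to guess at the mechanics (and this is pure speculation on my part, with plain arrays standing in for real activations), it’d be something in the spirit of:

    ```typescript
    // Speculative sketch of "concept injection" as vector arithmetic on activations.
    // Plain number arrays stand in for hidden states; nothing here touches a real model.

    type Vec = number[];

    const mean = (vs: Vec[]): Vec =>
      vs[0].map((_, i) => vs.reduce((sum, v) => sum + v[i], 0) / vs.length);

    // Deduce the "sphere" direction: average activations from prompts about spheres
    // minus average activations from unrelated prompts.
    function conceptVector(withConcept: Vec[], without: Vec[]): Vec {
      const a = mean(withConcept);
      const b = mean(without);
      return a.map((x, i) => x - b[i]);
    }

    // Injection: nudge a hidden state along that direction by some strength,
    // without the word "sphere" ever appearing in the prompt text.
    function inject(hidden: Vec, concept: Vec, strength = 4): Vec {
      return hidden.map((x, i) => x + strength * concept[i]);
    }
    ```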

    Anyway my point is, that concept of a sphere is still going into the context mathematically even if it isn’t in the prompt text. And that concept influences the output—which is entirely the point, of course.

    None of that part is introspective at all. The introspection claim seems to come from unprompted output such as “round things are really on my mind.” To my way of thinking, that sounds like a model trying to bridge the gap between its answer and the influence. Like showing me a Rorschach blot and asking me about work and suddenly I’m describing things using words like fluttering and petals and honey and I’m like “weird that I’m making work sound like a flower garden.”

    And then they do the classic “why did you give that answer” which naturally produces bullshit—which they at least acknowledge awareness of—and I’m just not sure the output of that is ever useful.

    Anyway, I could go on at length, but this is more speculation than fact and a dialog would be a better format. This sounds a lot like researchers anthropomorphizing math by conflating it with thinking, and I don’t find it all that compelling.

    That said, I see analogs in human thought and I expect some of our own mechanisms may be reflected in LLM models more than we’d like to think. We also make decisions on words and actions based on instinct (a sort of concept injection) and we can also be “prefixed” for example by showing a phrase over top of an image to prime how we think about those words. I think there are fascinating things that can be learned about our own thought processes here, but ultimately I don’t see any signs of introspection—at least not in the way I think the word is commonly understood. You can’t really have meta-thoughts when you can’t actually think.

    Shit, this still turned out to be about 5x as long as I intended. This wasn’t “in short” at all. Is that introspection or just explaining the discrepancy between my initial words and where I’ve arrived?


  • They aren’t “self-aware” at all. These thinking models spend a lot of tokens coming up with chains of reasoning. They focus on the reasoning first, and their reasoning primes the context.

    Like if I asked you to compute the area of a rectangle you might first say to yourself: “okay. There’s a formula for that. LxW. This rectangle is 4 by 5, so the calculation is 4x5, which is 20.” They use tokens to delineate the “thinking” from their response and only give you the response, but most will also show the thinking if you want.
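
    Mechanically, that delineation is pretty mundane: the wrapper just splits the generated text on special markers. Something like this (the <think> tag is only a common convention; actual markers vary by model):

    ```typescript
    // Sketch of how a wrapper separates "thinking" tokens from the visible answer.
    // The <think> tag is just a common convention; actual markers vary by model.

    function splitThinking(raw: string): { thinking: string; answer: string } {
      const match = raw.match(/<think>([\s\S]*?)<\/think>/);
      return {
        thinking: match?.[1].trim() ?? "",
        answer: raw.replace(/<think>[\s\S]*?<\/think>/, "").trim(),
      };
    }

    // Using the rectangle example:
    const raw = "<think>There's a formula for that. LxW. 4x5 is 20.</think>The area is 20.";
    const { thinking, answer } = splitThinking(raw);
    // `answer` is what you see; `thinking` is hidden unless you ask for it,
    // but both were generated into the same context.
    ```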

    In contrast, if you ask an AI how it arrived at an answer after it gives it, it needs to either have the thinking in context or it is 100% bullshitting you. The reason injecting a thought affects the output is because that injected thought goes into the context. It’s like if you’re trying to count cash and I shout numbers at you, you might keep your focus on the task or the numbers might throw off your response.

    Literally all LLMs do is predict tokens, but we’ve gotten pretty good at finding more clever ways to do it. Most of the advancements in capabilities have been very predictable. I had a crude Google-augmented context before ChatGPT released browsing capabilities, for instance. Tool use is just a low-randomness, high-confidence model that the wrapper uses to generate shell commands, which it then runs. That’s why you can ask it to do a task 100 times and it’ll execute correctly 99 times and then fail once: it got a bad generation.
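
    Strip away the branding and that wrapper is basically a loop like this (a bare-bones sketch; generateCommand stands in for whatever model call the wrapper actually makes, and “DONE” is just a made-up stop sentinel):

    ```typescript
    import { execSync } from "node:child_process";

    // Bare-bones sketch of tool use: the model emits a shell command as plain text,
    // the wrapper executes it, and the output goes back into the context for the
    // next prediction. generateCommand() stands in for the real model call.
    function runTask(
      task: string,
      generateCommand: (context: string) => string,
      maxSteps = 5,
    ): string {
      let context = `Task: ${task}\n`;
      for (let step = 0; step < maxSteps; step++) {
        const command = generateCommand(context); // still just token prediction
        if (command.trim() === "DONE") break;     // made-up stop sentinel
        let output: string;
        try {
          output = execSync(command, { encoding: "utf8" });
        } catch (err) {
          output = `error: ${String(err)}`;
        }
        // One bad generation anywhere in this loop and the whole run fails --
        // hence 99 out of 100.
        context += `$ ${command}\n${output}\n`;
      }
      return context;
    }
    ```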

    My point is we are finding very smart ways of using this token prediction, but in the end that’s all it is. And something many researchers shockingly fail to grasp is that by putting anything into context—even a question—you are biasing the output. It simply predicts how it should respond to the question based on what is in its context. That is not at all the same thing as answering a question based on introspection or self-awareness. And that’s obviously the case because their technique only “succeeds” 20% of the time.

    I’m not a researcher. But I keep coming across research like this and it’s a little disconcerting that the people inventing this shit sometimes understand less about it than I do. Don’t get me wrong, I know they have way smarter people than me, but anyone who just asks LLMs questions and calls themselves a researcher is fucking kidding themselves.

    I use AI all the time. I think it’s a great tool and I’m investing a lot of my own time into developing tools for my own use. But it’s a bullshit machine that just happens to spit out useful bullshit, and people are desperate for it to have a deeper meaning. It… doesn’t.


  • Not sure I follow the reasoning. California doesn’t have an abundance of water, and there are huge water rights issues over it, which makes producing almonds there even more outrageous. Data centers might be built where water is more abundant. Even in California, data centers account for a fraction of a percent of water usage and shouldn’t be restricted on that basis while vastly more wasteful industries continue to exist there.