Much has been written about such problems in computer science volumes. But I’m an LLM luddite: I’ve never tried it and have no idea whether it can even work. At the very least, I assume they have some sort of limiter to keep them from running completely out of control. They may also have guardrails that can recognize some problems of this type and refuse to go down the rabbit hole.
My idea of getting them to consume tokens in an (iterative or recursive) loop is also entirely hypothetical, at least to me.
Maybe some LLM developer or prompt engineer can shed some light.
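For what it’s worth, the "limiter" imagined above is roughly how agent loops are capped in practice: a hard budget on steps (or tokens), so a task that never terminates gets cut off instead of running forever. Here is a minimal sketch of that idea; every name in it is hypothetical, not any real LLM API.

```python
# Hypothetical sketch of a "limiter": a hard cap on loop iterations so a
# non-terminating task can't consume resources forever. All names are
# made up for illustration; this is not any real LLM framework's API.

def run_with_budget(step, max_steps=10):
    """Call `step(i)` repeatedly until it reports completion or the
    budget runs out. `step` returns a (done, output) pair."""
    for i in range(max_steps):
        done, output = step(i)
        if done:
            return ("finished", output)
    # Budget exhausted: bail out rather than loop indefinitely.
    return ("budget_exhausted", None)

# A "task" that never finishes -- analogous to sending a model down
# an unsolvable rabbit hole.
def never_done(i):
    return (False, None)

# A task that finishes on its fourth step.
def done_at_3(i):
    return (i == 3, "answer")

print(run_with_budget(never_done))  # ('budget_exhausted', None)
print(run_with_budget(done_at_3))   # ('finished', 'answer')
```

Real systems layer several such caps (max output tokens per response, max tool-call rounds, wall-clock timeouts), but the shape is the same: the loop cannot outlive its budget.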
https://theconversation.com/limits-to-computing-a-computer-scientist-explains-why-even-in-the-age-of-ai-some-problems-are-just-too-difficult-191930
Look, all I’m asking for is an example I can plug into Chipotle right now. Fuck AI