• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • I agree, there’s more going on in a human brain. But fundamentally both humans and LLMs use neural networks. The design of the neural network in an LLM is much simpler than the neural network in a human brain.

    But they both “think” to come up with an answer. They both cross-reference learned information. They both are able to come up with an answer that is statistically likely to be correct based on their learned information.

    There’s a ton of potential to take the neural networks in LLMs beyond just language. To have them conceptualize abstract ideas the way a human would. To add specialized subsections to the model for math and logic. I think we’re going to see a ton of development in this area.

    And I think you’re right that they’re not exactly the same as humans. But fundamentally there is a lot of similarity. At the end of the day, they are modeled after human brains.


  • Humans also generate something that sounds like it would answer the prompt. If I ask you “What country is Machu Picchu in?”, you’ll ponder for a moment and give me what you think the answer is. You might answer Peru, or you might answer with something else that seems reasonable to you, like Argentina.

    Humans answer questions incorrectly all the time. When questioned, they too are just trying to come up with a reasonable response to the prompt.