I’m not sure what you’re claiming I blew past here. I simply pointed out that nobody expects LLMs to validate the solutions they come up with on their own, or trusts them to arrive at a correct solution independently. Ironic that you’re the one who actually decided to blow past what I wrote to make a personal attack.
I want to point you towards (gestures broadly at everything) where LLMs are being sold as a panacea and things like hallucinations and overconfidence are being minimized. The AI industry is claiming these tools are trustworthy and a way to remove the human from the loop.
You may understand that an LLM is a starting point, not an end, but the way most people are being sold on them is different and dangerous. Articles like the one you posted, which also downplayed the mistakes the model made (“Gemini made some minor numerical errors…”) while suggesting it made a novel discovery about the source material, are problematic. How many people read just the headline, or even the whole article, and now assume the data presented is fact, when it is an unconfirmed opinion at best and made up at worst?
LLMs are really good at producing “smart-sounding” text that reads like someone intelligent wrote it, but we have tons of examples where the “smart-sounding” and “factual” circles of the Venn diagram don’t overlap.
You may not realize how the scientific process works, but it’s not based on trust. What actually happens is that researchers publish a paper that explains their thesis and provides supporting evidence. Then you have this thing called peer review, where other experts in the field examine the findings and make their own assessments. The reality is that hallucinations and fabrications aren’t exclusive to LLMs; humans do this stuff all the time on their own. This is why we have the scientific method in the first place.
You may not realize that AI evangelism is not confined to the scientific community. Articles like this, claiming AI is doing something amazing, something humans have been unable to do, are propaganda.
Peer review is great. But the average reader of that article assumes the peer review is complete by the time they read it, even when that isn’t true. The takeaway for them is not “this annotation was likely made by a German scribe at XYZ date”; the takeaway is “Gemini figured out something that stumped human researchers”.
And I dispute the original premise: the notes are not inscrutable, and I doubt that no human has ever translated the numerals or analyzed the script to estimate its age. It just wasn’t important enough to write an article about until it made AI look good.
The article is propaganda. It’s neat, but if you read it as “look at this cool thing LLMs can do…”, then you fell for it.
The fact of the matter is that this is a perfect example of an LLM actually doing something useful and helping researchers figure things out. I also love how you’re now playing an expert on deciphering ancient scripts. You should go let the researchers know ASAP what a bunch of dummies they are for not being able to figure it out on their own.
Maybe find a new hobby other than sealioning into threads to screech about how much you hate LLMs. Frankly, it’s tiring watching people perseverate over it.
And that’s not what the commenter was talking about. He wasn’t expecting anything else from the LLM. He wanted to see actual proof that any of this happened and that it was verified by a human. All the article said was that this happened and it worked. If that’s true, what were the results, and how were they verified?
In other words, you’re saying neither of you could be arsed to click through to the actual discussion on the project page before making vapid comments? https://blog.gdeltproject.org/gemini-as-indiana-jones-how-gemini-3-0-deciphered-the-mystery-of-a-nuremberg-chronicle-leafs-500-year-old-roundels/
Again, you didn’t answer the question. This is just the prompt and the answer. Where is the proof of the truth claim? Where is the actual human saying, “I’m an expert in this field, and this is how I know it’s true”? Just because it has a good explanation for how it did the translation doesn’t mean the translation is correct. If I missed it somewhere in this wall of text, feel free to point me to the quote, but to me that is just an AI pastebin.
Nobody was claiming proof; that’s just the straw man the two of you have been using. What the article and the original post from the researchers say is that it helped them come up with a plausible explanation. Maybe actually try to engage with the content you’re discussing?
You posted in science and are upset that people asked for proof. Don’t know what you expected. We are already well aware that when you give an AI a prompt, it will confidently give you an answer. The crux of any of these claims comes down to whether or not it is actually true.
I get the impression that you don’t understand how science actually works. Science is about examining the evidence, forming hypotheses, and testing them to see if they’re viable. Proof is never guaranteed in the scientific process, and it’s rarely definitive. Seems to me like you just wanted to bray about AI here without actually having anything to say.
And the stance you must maintain through the entire process is skepticism. You assume you’re wrong and try to prove it. You look for holes in your theory and chase down any problems those holes reveal. I’m not seeing any attempt at that.
You literally just made up a baseless claim that the researchers aren’t doing due diligence. I’m skeptical of your thesis, and I’m not seeing any attempt on your part to provide supporting evidence for it.