People have always misused search engines by typing whole questions as a search query…

With AI they can still do that and get what is, in their opinion at least, a better result.

  • chonglibloodsport@lemmy.world
    ↑6 · 4 hours ago

    They get an answer but unlike a search engine, the AI doesn’t show its work. I want a citation with the answer, I’m not taking your word for it!

  • Jo Miran@lemmy.ml
    ↑9 · 5 hours ago (edited)

    People who use LLMs as search engines run a very high risk of “learning” misinformation. LLMs excel at being “confidently incorrect”. Not always, but not seldom either, an LLM slips a bit of false information into a result. That confident packaging, along with the fact that the misinformation is likely surrounded by actual facts, often convinces people that everything the LLM returned is correct.

    Don’t use an LLM as your sole source of information or as a complete replacement for search.

    EDIT: Treat LLM results as gossip or rumor.

    • TranquilTurbulence@lemmy.zip
      ↑3 · 4 hours ago

      Just had a discussion with an LLM about the plot of a particular movie, particularly the parts where the plot falls short. I asked it to list all the parts that feel contrived.

      It gave me 7 points that were OK, but the 8th one was 100% hallucinated. That event is not in the movie at all. It also totally missed the 5 completely obvious contrived screw-ups in the movie’s ending, so I was not very convinced by this plot analysis.

      • jmill@lemmy.zip
        ↑2 · 2 hours ago

        I would never expect a good analysis of a movie from an LLM. It can’t actually produce original thought, and it can’t even watch the movie itself. It may have some version of the script in its training data, and it definitely has things that people have said about the movie, similar movies, similar books, and whatever else was scraped. It just returns words that are often grouped together and that have a high likelihood of relevance to your query.

        • TranquilTurbulence@lemmy.zip
          ↑1 · 56 minutes ago

          With popular movies, there’s no shortage of critical blog posts and other material. All of that is obviously already in the training material. However, anything that didn’t make a gazillion dollars probably isn’t that well documented, so the model might not have much to say about it. It will just fill those gaps with random word salad that makes sense as long as you have enough cocaine in your nostrils.

          If I had asked about Casablanca, Psycho, Titanic or Avengers, the answer would have probably been a bit less crappy.

    • morto@piefed.social
      ↑2 · 5 hours ago

      That’s my main issue with LLMs. If I need to fact-check the information anyway, I’d save time by looking for it elsewhere directly. It makes no sense to me.

  • NONE@lemmy.world
    ↑29 · 6 hours ago (edited)

    I tend to think that people use AI (and yeah, search engines too) the way children use their parents:

    “Mom, why is the sky blue?” “Mom, where is China?” “Mom, can you help me with this school project?” (The mother ends up doing everything).

    The thing is, unlike a parent, AI is unable to tell users that it doesn’t know everything and that users should do things on their own. Because that would reduce the number of users.

    • [deleted]@piefed.world
      ↑8 · 5 hours ago

      The thing is, unlike a parent, AI is unable to tell users that it doesn’t know everything and that users should do things on their own.

      The world would be a better place if most parents did that instead of confidently spewing bigotry, misogyny, and other terrible opinions. I only knew a few who were able to say “I don’t know” when I was a kid, and the ratio is about the same with adults.

      • NONE@lemmy.world
        ↑6 · 5 hours ago

        Blame the Dunning-Kruger effect. The people I have seen who are most likely to acknowledge their lack of knowledge in a given area have been those who are very wise and well-versed in at least one field, such as science, history (like my mom), art, etc.

        Mediocre people are mostly convinced that they know everything.

    • MagicShel@lemmy.zip
      ↑5 ↓1 · 6 hours ago

      AI has a lot more surface knowledge about a lot more things than my parents ever did. I think one of the more insidious things about AI, though, is that with a human you can generally tell when they are out of their depth. They grasp for words. Their speech cadence gets more hesitant. Their hesitation is palpable. (I think “palpable” might be considered slop these days, but fuck haters, it’s how I write — emdashes and all.)

      AI never gives you that hint. It’s like an autistic encyclopedia. “You want to know about the sun? I’ve read just the book. Turns out there’s a god who pulls it across the sky every day.” And then it proceeds to gaslight you when you ask probing questions.

      (It has gotten better about this due to the advanced meta prompting behind the scenes and other improvements, but the guardrails are leaky.)

    • Admiral Patrick@dubvee.org
      ↑4 · 6 hours ago

      Maybe AI should be more like a parent and simply say “I don’t know. Go read a book, find out, and let me know”.

      Pretty sure my mom did know the answer, but I learned more by reading a book and telling her what I learned.

      • NONE@lemmy.world
        ↑5 · 6 hours ago

        Me too! Nothing helped me think for myself more than my mother yelling at me, “I don’t know! The encyclopedia is right there! Go read it and let me cook, for God’s sake!”

  • Lembot_0006@programming.dev
    ↑11 · 7 hours ago

    An LLM can be used as a search engine for things you know absolutely zero terminology about. That’s convenient. You can’t ask Google for “tiny striped barrels with wires” and expect to get an explanation of resistor markings.
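
    For reference, those stripes encode the resistance value via the standard EIA color code. A minimal sketch of decoding a common 4-band resistor (the helper name and the ignored tolerance band are simplifications here):

```python
# Standard EIA color code: each band color maps to a digit (also used
# as the power-of-ten multiplier on the third band).
DIGITS = {
    "black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
    "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9,
}

def decode_4band(band1, band2, multiplier):
    """Decode a 4-band resistor: two digit bands plus a multiplier band.
    (The fourth band encodes tolerance and is ignored in this sketch.)"""
    return (DIGITS[band1] * 10 + DIGITS[band2]) * 10 ** DIGITS[multiplier]

# yellow-violet-red: 47 * 10^2 = 4700 ohms, i.e. a common 4.7 kΩ part.
print(decode_4band("yellow", "violet", "red"))  # → 4700
```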

    • [deleted]@piefed.world
      ↑7 · 5 hours ago

      10-15 years ago, Google returned the correct answers even when I used the wrong words. For example, it would most likely have returned resistors for that query because of the stripes, and if you left off “stripes” it would have been capacitors.

      AI isn’t nearly as good as Google was 10+ years ago.

      • Cherry@piefed.social
        ↑4 · 3 hours ago

        There’s a theory that it’s by design. They have made search so bad that we now turn to AI to give us what search once could, and that way they can effectively charge us for searching, an idea we would generally balk at.

    • Björn@swg-empire.de
      ↑4 · 6 hours ago (edited)

      It worked yesterday when I tried to find a video by describing it and what I remembered from the thumbnail. That was great. I want that for my own photos and videos without having to upload them somewhere.

    • BenderRodriguez@lemmy.world
      ↑3 · 6 hours ago

      It sounds like you might be referring to miniature striped barrels used in crafts or model-making, often decorated or with wire elements for embellishment or functionality. These barrels can be used in various DIY projects, including model railroads, dioramas, or even as decorative items.

    • morto@piefed.social
      ↑2 ↓1 · 5 hours ago (edited)

      Reverse image search would let you find that answer more accurately than some LLM.

        • morto@piefed.social
          ↑2 ↓1 · 4 hours ago

          When you see something and have no idea what it is, you just take a photo and do a reverse search, finding other similar photos and the name of the thing. You don’t even need to spend time describing what you see, and there’s no chance of getting a confidently wrong answer. Reverse image search has existed for more than a decade and doesn’t use LLMs.
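
          The “finding other similar photos” step typically compares compact fingerprints rather than raw pixels. A toy average-hash sketch (real systems use far more robust features; the flat 16-pixel thumbnail and function names here are illustrative assumptions):

```python
def average_hash(pixels):
    """Toy perceptual hash: pixels is a flat list of grayscale values
    from a tiny downscaled thumbnail. Each bit records whether a pixel
    is brighter than the image's mean, so the fingerprint survives
    resizing and small global edits."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Count differing bits; a small distance means a likely match."""
    return bin(a ^ b).count("1")

# A 4x4 grayscale thumbnail and a uniformly brightened copy of it.
img = [10, 200, 30, 220, 15, 210, 25, 215, 12, 205, 28, 218, 11, 202, 27, 216]
near = [p + 5 for p in img]

# Brightening shifts every pixel and the mean equally, so the hash is
# unchanged and the distance is zero: a near-duplicate match.
print(hamming(average_hash(img), average_hash(near)))  # → 0
```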

          • Lembot_0006@programming.dev
            ↑1 · 3 hours ago

            ML is ML, no matter whether it is an LLM or not. And the question “What is this thing?” covers a negligibly tiny percentage of search requests.

            • morto@piefed.social
              ↑1 · 1 hour ago

              It’s not all the same. Application-specific ML models tend to be much smaller and demand far fewer resources than LLMs. They also tend to be more precise.

              And the question “What is this thing?” covers a negligibly tiny percent of search requests.

              I was just addressing the given example.

  • galaxy_nova@lemmy.world
    ↑8 · 6 hours ago

    I’ve unfortunately noticed that as LLMs have gained traction, search engines have, in my experience, gotten worse. Sometimes I have to do two or three searches to get articles that actually relate to what I’m looking for. On the other hand, LLMs are great for asking a question directly and figuring out exactly what you’re looking for, then going to a search engine and doing some research on your own. It would be nice if there were a way to somehow combine the two without the ridiculously egregious environmental and intellectual issues of LLMs.

    • Zahille7@lemmy.world
      ↑1 · 5 hours ago

      Isn’t that what Google does now? It gives you a little AI summary with information taken from the first few results and breaks it down into a more easily digestible version.

      • galaxy_nova@lemmy.world
        ↑1 · 5 hours ago

        I guess? I only use Google at work, though, so I’m not too familiar with it. But it still hits my issues with LLMs, and it’s forced on in Google, I believe.

  • TranquilTurbulence@lemmy.zip
    ↑5 ↓3 · 4 hours ago (edited)

    It used to be funny when someone wrote a two-sentence-long “search query” on Google. Nowadays, you can literally do that on any LLM and you’ll get a summary based on a few results. There are a whole bunch of problems with that, but I’ll just let the people from [email protected] elaborate.

    Anyway, I gave this query to DDG: “I just bought a bag of carrots and I don’t know what to do with them. Should I make soup or something? What are the other ingredients I would need for that?”

    and got this response:

    “You can make a simple carrot soup with just a few ingredients. You’ll need carrots, onions, garlic, broth, and cream or coconut milk. Some recipes also include butter, olive oil, and spices like curry paste or ginger for extra flavor.”

    Gotta say, that wasn’t too bad. I didn’t need to open a single cooking blog to figure out what I need.

    • 4am@lemmy.zip
      ↑7 ↓1 · 3 hours ago

      You already told it you were interested in soup. It didn’t provide cook times, necessary prep work, or portions. It didn’t mention any other alternatives or possibilities.

      You will need to open a recipe blog anyway, after taking the time to read that and determine it’s not everything you need to know, and it drank a Honda Civic’s volume of water and used enough electricity to heat your house with resistive space heaters for 17 hours in below-zero F weather.

      It created that answer by comparing its statistical word tree to other, similar word combinations and then autocompleting the next most likely word you might want to hear. It did not consider your topic in any way; it doesn’t know what a carrot is, only its token number and that it kind of belongs in paragraphs that roughly resemble the one it gave you. It is a reverse Gaussian blur of a Gaussian-blurred overlay of a million photos of paragraphs about carrots, soups, and carrot soups.
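
      The “next most likely word” loop can be sketched with a toy bigram model. Real LLMs use transformers over subword tokens rather than word-pair counts, and the tiny corpus below is invented for illustration, but the autocomplete shape is the same:

```python
import random
from collections import Counter, defaultdict

corpus = ("carrot soup needs carrots and onions and garlic "
          "carrot soup needs broth and cream").split()

# Count which word follows which: the "statistical word tree".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n, seed=0):
    """Autocomplete: repeatedly sample a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows[out[-1]]
        if not options:
            break  # dead end: this word never had a successor
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("carrot", 4))  # e.g. "carrot soup needs broth and"
```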

      It carved away forests and poisoned nearby pensioners’ air just to give you this gray area of an answer, devoid of all thought or creativity. It is objectively worse than the ad-strewn sites written by an actual person, in every way, and you’d have to be a fucking madman to offer any praise upon it.

  • kbal@fedia.io
    ↑4 · 6 hours ago

    Some people like AI because they treat it as if it’s the voice of God speaking directly to them.

  • Alpha71@lemmy.world
    ↑1 · 5 hours ago

    Yeah, that’s what I use it for mostly. On DDG I’ll ask it stuff like someone’s age, or when someone passed, etc., to get a quick description of something. And if I need more info I’ll look it up on my own.