• Railcar8095@lemmy.world · ↑15 ↓3 · 1 day ago

    Devil’s advocate: AI might be an agent that detects tampering, with an NLP frontend.

    Not all AI is LLMs.

    • MagicShel@lemmy.zip · ↑36 ↓1 · edited · 1 day ago

      A “chatbot” is not a specialized AI.

      (I feel like maybe I need to put this boilerplate in every comment about AI, but I’d hate that.) I’m not against AI or even chatbots. They have their uses. This is not using them appropriately.

      • Railcar8095@lemmy.world · ↑10 ↓2 · edited · 1 day ago

        A chatbot can be the user-facing side of a specialized agent.

        That’s actually how the original chatbots worked. Siri didn’t know how to get the weather; it classified the question as a weather question, parsed the time and location, and decided which APIs to call in those cases.
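
        Roughly that pattern, as a minimal sketch (the intent labels, the keyword matching, and the get_weather helper are all invented for illustration, not how Siri actually worked internally):

        ```python
        # Toy version of that routing: the "chatbot" frontend only classifies the
        # question and fills in slots; a separate backend does the actual work.
        import re

        def classify_intent(text: str) -> str:
            # Crude keyword matching standing in for a real NLP classifier.
            if re.search(r"\b(weather|rain|temperature)\b", text, re.I):
                return "weather"
            return "unknown"

        def parse_slots(text: str) -> dict:
            # Pull out the location and time the backend API will need.
            m = re.search(r"\bin ([A-Z][a-z]+)", text)
            return {
                "location": m.group(1) if m else "here",
                "when": "tomorrow" if "tomorrow" in text.lower() else "today",
            }

        def get_weather(location: str, when: str) -> str:
            # Stand-in for a call to a real weather API.
            return f"Forecast for {location} ({when}): 18°C, light rain."

        def handle(text: str) -> str:
            if classify_intent(text) == "weather":
                return get_weather(**parse_slots(text))
            return "Sorry, I can't help with that."

        print(handle("Will it rain in London tomorrow?"))
        ```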

        • MagicShel@lemmy.zip · ↑22 ↓1 · edited · 1 day ago

          Okay, I get that you’re playing devil’s advocate here, but set that aside for a moment. Is it more likely that the BBC has a specialized chatbot that orchestrates expert APIs, including one for analyzing photos, or that the reporter asked ChatGPT? Even in the unlikely event I’m wrong, what is the message to the audience? That ChatGPT can investigate just as well as the BBC. Which may well be the case, but it oughtn’t be.

          My second point still stands. If you sent someone to look at the thing and it’s fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.

          • squaresinger@lemmy.world · ↑2 ↓2 · 21 hours ago

            ChatGPT is a frontend for specialized modules.

            If you ask it to do maths, for example, it will not do it via the LLM but will run it through a maths module.

            I don’t know for a fact whether it has a photo analysis module, but I’d be surprised if it didn’t.
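
            The general pattern being described here, an LLM frontend handing structured work to a dedicated module, is essentially function calling. A minimal sketch with the OpenAI Python SDK; the evaluate_math tool and the model name are placeholders, and whether ChatGPT internally routes maths or photo checks this way is an assumption, not something the API exposes:

            ```python
            # Function-calling sketch: the model doesn't do the arithmetic itself; it
            # emits a structured call that the application routes to a real module.
            from openai import OpenAI

            client = OpenAI()

            tools = [{
                "type": "function",
                "function": {
                    "name": "evaluate_math",  # hypothetical tool name
                    "description": "Evaluate an arithmetic expression exactly.",
                    "parameters": {
                        "type": "object",
                        "properties": {"expression": {"type": "string"}},
                        "required": ["expression"],
                    },
                },
            }]

            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": "What is 1234 * 5678?"}],
                tools=tools,
            )

            msg = resp.choices[0].message
            if msg.tool_calls:
                call = msg.tool_calls[0]
                print(call.function.name, call.function.arguments)
                # The app evaluates the expression itself, then sends the result back
                # in a follow-up message for the model to phrase as the final answer.
            else:
                print(msg.content)
            ```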

          • Railcar8095@lemmy.world · ↑3 ↓4 · 1 day ago

            It’s not like the BBC is a single person with no skill other than a driving license and at least one functional eye.

            Hell, they don’t even need to go; they can just call the local services.

            For me, it’s more likely that they have a specialized tool than an LLM correctly detecting tampering with the photo.

            But if you say it’s unlikely you’re wrong, then I must be wrong, I guess.

            • MagicShel@lemmy.zip · ↑8 · 1 day ago

              > what is the message to the audience? That ChatGPT can investigate just as well as the BBC.

              What about this part?

              Either it’s irresponsible to use ChatGPT to analyze the photo or it’s irresponsible to present to the reader that chatbots can do the job. Particularly when they’ve done the investigation the proper way.

              Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead an AI to tell them a photo is fake and think that’s just as valid as BBC reporting.

                • MagicShel@lemmy.zip · ↑5 · edited · 1 day ago

                  “AI Chatbot”. Which means ChatGPT to 99% of people, almost certainly including the journalist, who doesn’t live under a rock. They are just avoiding naming it.

              • Riskable@programming.dev · ↑1 ↓3 · edited · 23 hours ago

                I don’t think it’s irresponsible to suggest to readers that they can use an AI chatbot to examine any given image to see if it was AI-generated. Even the lowest-performing multimodal chatbots (e.g. Grok and ChatGPT) can do that pretty effectively.

                Also: Why stop at one? Try a whole bunch! Especially if you’re a reporter working for the BBC!

                It’s not like they give an answer, “yes: Definitely fake” or “no: Definitely real.” They will analyze the image and give you some information about it such as tell-tale signs that an image could have been faked.

                But why speculate? Try it right fucking now: Ask ChatGPT or Gemini (the current king at such things BTW… For the next month at least hahaha) if any given image is fake. It only takes a minute or two to test it out with a bunch of images!

                Then come back and tell us that’s irresponsible with some screenshots demonstrating why.
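
                For anyone who wants to try it outside the chat UI, a minimal sketch with the OpenAI Python SDK (the model name and image URL are placeholders; other providers have equivalent vision endpoints). The prompt is deliberately phrased in both directions, since a leading question tends to get a leading answer:

                ```python
                # Ask a multimodal model to describe evidence for AND against manipulation,
                # rather than asking a leading yes/no question.
                from openai import OpenAI

                client = OpenAI()

                resp = client.chat.completions.create(
                    model="gpt-4o",  # placeholder; any vision-capable model
                    messages=[{
                        "role": "user",
                        "content": [
                            {"type": "text",
                             "text": ("Describe any signs that this photo was AI-generated or "
                                      "digitally manipulated, and any signs that it was not.")},
                            {"type": "image_url",
                             "image_url": {"url": "https://example.com/photo-to-check.jpg"}},  # placeholder URL
                        ],
                    }],
                )
                print(resp.choices[0].message.content)
                ```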

                • MagicShel@lemmy.zip · ↑4 · edited · 23 hours ago

                  I don’t need to do that. And what’s more, it wouldn’t be any kind of proof because I can bias the results just by how I phrase the query. I’ve been using AI for 6 years and use it on a near-daily basis. I’m very familiar with what it can do and what it can’t.

                  Between bias and randomness, you will have images that are evaluated as both fake and real at different times and by different people. What use is that?