• PlantJam@lemmy.world · 1 day ago

    A salesman for an AI consulting company made the comment that we don’t expect perfection from humans, so why should we expect it from AI? He was smug about it, too, like it was his big gotcha. Joke’s on him, I’m the one that talked the bosses out of spending money with them.

    • RidcullyTheBrown@lemmy.world · 23 hours ago

      That’s such a bad argument, too. The whole point of technology is to improve on the output of humans. Why would we buy technology that is known not to do that?

      • PlantJam@lemmy.world · 23 hours ago

        “You can get pretty good results most of the time and save money on labor!” Not like our whole business model is focused on expertise and compliance or anything. Surely our clients won’t mind a few little mistakes here and there, as a treat.

    • Passerby6497@lemmy.world · 21 hours ago

      we don’t expect perfection from humans, so why should we expect it from AI?

      If we can’t expect better from an AI than from a human, why should we use the AI (other than so you don’t have to pay workers)?

      • RidcullyTheBrown@lemmy.world · 21 hours ago

        I think there’s an important semantic difference between worse performance and incorrectness. Tools like AI can underperform compared to humans and still be very useful and worth investing in, but only as long as they perform correctly.

        • Passerby6497@lemmy.world · 16 hours ago

          Tools, like AI, can underperform when compared to humans and still be very useful and worth investing into, but that’s only as long as they perform correctly.

          Yeah, the ‘but’ is the entire problem. In my experience, LLM chatbots are like making a 12-year-old a junior admin and feeding them speed: very quick to give you a confident answer, but wrong more often than not. The worst part is that a lot of what I’m doing is coding, and it gets basic commands and syntax wrong.