Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”

  • merc@sh.itjust.works · 5 hours ago

    What is AI good at? Creating thousands of lines of code that look plausibly correct in seconds.

    What are humans bad at? Reviewing changes containing thousands of lines of plausibly correct code.

    This is a great way to force senior devs to take the blame for things. But if the goal is actually to avoid outages rather than just to assign blame for them, they’ll need people to submit small, focused changes that the submitter understands and can explain clearly. Wouldn’t it be simpler just to say “No AI”?

    • Joeffect@lemmy.world · 41 minutes ago

      If you ask a writer what AI is good for, they’ll say it’s good for art, but they’d never use it for writing, because it’s terrible at that.

      If you ask an artist what AI is good for, they’ll say it’s good for writing, but they’d never use it for art, because it’s terrible at that.

    • Earthman_Jim@lemmy.zip · 4 hours ago

      AI’s greatest feature in the eyes of the Epstein class is the ability to shift responsibility. People will do all kinds of fucked up shit if they can shift the blame to someone else, and AI is the perfect bag holder.

      Just ask the school of little girls in Iran, likely a target picked by AI using out-of-date information that said the site was a barracks. Why bother confirming the target with current intel from the ground when no one’s going to take the blame anyway?

      • merc@sh.itjust.works · 2 hours ago

        In my experience, LLMs suck at making smart, small changes. To do that, they need to “understand” the entire codebase, and that’s expensive.