• tinsukE@lemmy.world · 7 hours ago

    Why?

    Google added that it doesn’t believe its own Gemini models were used, but still has “high confidence” an AI model was part of discovering the vulnerability and weaponizing an exploit.

    No, really, why? If Google itself or their models didn’t discover the vulnerability, how would they know genAI was used in the discovery of the vulnerability and the weaponization (interestingly, not “creation”) of an exploit?

    • KuroiKaze@lemmy.world · 4 hours ago

      Google and all large companies employ a fleet of both full-time security researchers and third-party security firms to constantly stay on the edge of security threats. They are constantly looking at artifacts from the wild, white papers, etc.

    • HeartyOfGlass@piefed.social · 5 hours ago

      Because it makes the big line go up. Anything to prove that “AI” is anything other than a dumb series of if/then statements.

  • dan1101@lemmy.world · 8 hours ago

    Generative AI didn’t make anything, it just retrieved indexed code that already existed.

    • BorgDrone@feddit.nl · 10 hours ago

      They sell you the AI to create the buggy code, and then they sell you more AI to fix the bugs. Amazing. Just think of the amount of profit for the shareholders.

    • technocrit@lemmy.dbzer0.com · 6 hours ago

      There will be many more made because of AI computers

      “AI” doesn’t exist, but computers will continue to compute.

  • unitedwithme@lemmy.today · 9 hours ago

    Google circa 2002: “Don’t be evil”

    Google 2026: “Try to be slightly less evil than the top 5 evil things combined”

    • pomegranatefern@sh.itjust.works · 6 hours ago

      In retrospect, it’s kind of wild how many of us (myself sadly included) actually believed Google’s “Don’t be evil” thing, instead of seeing it as a “my ‘Not involved in human trafficking’ T-shirt has people asking a lot of questions already answered by my shirt” situation.

    • NTesla@lemmy.zip · 8 hours ago

      If you are powerful and evil, you use your power to redefine what evil is.

  • technocrit@lemmy.dbzer0.com · 6 hours ago (edited)

    made with AI

    Fake. “AI” doesn’t exist. There’s no need to even read these articles when the headline alone is straight bullshit.

    But if you read the article, it gets even phonier. This is just another example of supposedly generated code, which is absolutely nothing new. And they don’t really know; the grifter headline states speculation as fact.

  • Katherine 🪴@piefed.social · 10 hours ago

    Google being nothing more than an AI slop factory these days makes sense given how terrible Android is now.

    • MountingSuspicion@reddthat.com · 8 hours ago

      I think there’s some confusion around the title. Google found that someone was using AI to identify exploits. Google itself is not announcing that AI made something that has a zero day exploit.

      Seems like the exploiter was using Google’s AI to try to find exploits, and that’s probably what alerted them.