- cross-posted to:
- [email protected]
Why?
Google added that it doesn’t believe its own Gemini models were used, but still has “high confidence” an AI model was part of discovering the vulnerability and weaponizing an exploit.
No, really, why? If Google itself or their models didn’t discover the vulnerability, how would they know genAI was used on the discovery of the vulnerability and weaponization (interestingly, not “creation”) of an exploit?
Google and all large companies employ a fleet of both full-time security researchers and third-party security firms to constantly stay on the edge of security threats. They are constantly looking at artifacts from the wild, white papers, etc.
Because it makes the big line go up. Anything to prove that “AI” is anything other than a dumb series of if/then statements.
Generative AI didn’t make anything, it just retrieved indexed code that already existed.
There will be many more made because of AI.
They sell you the AI to create the buggy code, and then they sell you more AI to fix the bugs. Amazing. Just think of the amount of profit for the shareholders.
Not if you’re the shareholder of a company suffering from a zero day exploit.
They should’ve paid for “AI” to protect them from the “AI” attacking them.
There will be many more made because of ~~AI~~ computers.
“AI” doesn’t exist, but computers will continue to compute.
Google circa 2002: “Don’t be evil”
Google 2026: “Try to be slightly less evil than the top 5 evil things combined”
In retrospect, it’s kind of wild how many of us (myself sadly included) actually believed Google’s “Don’t be evil” thing instead of seeing it as a “My “Not involved in human trafficking” T-shirt has people asking a lot of questions already answered by my shirt.” situation.
If you are powerful and evil, you use your power to redefine what evil is.
Google being nothing more than an AI slop factory these days makes sense given how terrible Android is now.
I think there’s some confusion around the title. Google found that someone was using AI to identify exploits. Google itself is not announcing that AI made something that has a zero day exploit.
Seems like the exploiter was using Google’s AI to try and find exploits and that’s probably what alerted them.
made with AI
Fake. “AI” doesn’t exist. There’s no need to even read these articles when the headline alone is straight bullshit.
But if you read the article, it gets even phonier. This is just another example of supposedly AI-generated code, which is absolutely nothing new. But they don’t really know; the grifter headline states speculation as fact.
Denying that AI exists when you know exactly what is being talked about is quite the choice.