The headline sounds all spy tech: “Advances in AI break your best encryption!”
But then the article reminds us we are in the stupidest of all possible timelines:
“Embedding AI into the operating system is such a monumentally idiotic thing to do, that no amount of other security controls can save us.”
If your operating system is compromised, you can’t make it secure no matter what. Just like if a thief has the keys to your home, no amount of security will make your home safe. At best you might know that you have been burgled.
So it’s either use a non-compromised operating system or just accept that you have no safety.
She’s right. End-to-end encryption doesn’t mean a lot when your own device can’t be trusted not to capture screenshots or store the contents of push notifications.
We just need biologically accelerated decryption mechanisms in our brains so we can read encrypted data directly. Keys are safely stored in a new organ which gets implanted at birth.
“Sorry, your neural architecture is incompatible with 2028 society unless you opt in to this neuralink cerebral TPM which will allow for communication and decryption of all new media. Without this upgrade you will be limited to only communicate with legacy users and consume only vintage advertising content.”
Cool concept <3
We just need
Sounds more plausible than some of the tech talking points.
They “just” need the exponential growth to be linear? If we can change maths by wishing, we can change our own biology too!
I’m not sure one of Musk’s businesses isn’t itself a tech talking point…
I’m not a tin-foil hatter by any stretch of the imagination, but this has long been my assumption on why “AI” is being pushed down our throats so hard and from so many angles.
It’s almost the perfect spyware, really.
If I control your agent, I control what you see, what you say, where you go, everything about your life that touches a computer…
There is a reason the FIRST Google implementation of AI was to just read all your emails and give you shitty, inaccurate summaries of the content.
Like they’re barely trying to give you a product justification for invasively spying on you while you use your own computer.
You mean Gemini reading your emails? That’s way after Bard was a thing.
Plus, Apple AI is basically at the same level still.
Embedding AI in the operating system instead of as a normal program is something that should be punished.
Repeated irresponsible disclosure won’t get you paid the same, but it will fix the architectural problem faster.
I expect that eventually Windows will face antitrust action again from established nations. We haven’t seen it since Internet Explorer, but eventually it will happen again.
It’s already been happening. It’s finally, actually, for reals, the year for Linux.
Meme aside, countries have started to get off the Microsoft tit.
It will happen guys! I swear! 🗞️
Whittaker, a veteran researcher who spent more than a decade at Google, pointed to a fundamental shift in the threat model where AI agents integrated into core operating systems are being granted expansive access to user data, undermining the assumptions that secure messaging platforms like Signal are built on. To function as advertised, these agents must be able to read messages, access credentials, and interact across applications, collapsing the isolation that E2EE relies on.
This concern is not theoretical. A recent investigation by cybersecurity researcher Jamieson O’Reilly uncovered exposed deployments of Clawdbot, an open-source AI agent framework, that were directly linked to encrypted messaging platforms such as Signal. In one particularly serious case, an operator had configured Signal device-linking credentials inside a publicly accessible control panel. As a result, anyone who discovered the interface could pair a new device to the account and read private messages in plaintext, effectively nullifying Signal’s encryption.
I suppose her attention is naturally focused on encryption, but the result of an untrustworthy operating system is not specific to it: Security in general becomes impossible.
Her business is secure communication and communication isn’t secure (and can’t be secured) if you have someone reading everything over your shoulder.
I’m curious: is there any operating system where a program can somehow inherently trust it via some form of verification?
Signal phone OS when?
This might be what you’re looking for; next phone wipe I’m putting this on:
They are working with an OEM to make an entire phone so stay tuned in that space
Just don’t. The only thing I’ve missed is a debit/ credit tap wallet and an app that won’t process my credit card purchase for in account credits. I haven’t looked too hard for a techy solution to that one.
Edit: I meant to type “just do it” but…typo
Did you mean “just do it” and autocorrect got you? Based on the rest of the text, that is what I figure.
Lol yes. Just do it. I’d like to blame the early hour for my typos but they are a chronic thing.
I’m on a 9 after leaving an iPhone 15 Pro. iOS 26 drove me away. I spent ten years on iOS. A few habits to break, quirks that are different.
they are a chronic thing
I assumed so, given your username 🤣
I myself am rocking a 7. Put GOS on over the summer. Don’t look back at all!
Just use Curve. Also, why do you tell others not to use it if only one thing is not working for you?
his username.
The amount of spying baked into the OSes should already be doing the same, right?
Someone could TL;DR me?
~~He’s~~ She’s talking specifically about the idea of embedding AI agents in operating systems, and allowing them to interact with the OS on the user’s behalf.
So if you think about something like Signal, the point is that as it leaves your device the message is encrypted, and only gets decrypted when it arrives on the device of the intended recipient. This should shut down most “man in the middle” types of attacks. It’s like writing your letters in code so that if the FBI opens them, they can’t read any of it.
But when you add an AI agent in the OS, that’s like dictating your letter to an FBI agent, and then encrypting it. Kind of makes the encryption part pointless.
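The letter analogy maps cleanly onto code. Here’s a deliberately minimal sketch (the XOR cipher and the names `agent_log` and `os_text_field` are invented for illustration; Signal’s real protocol is vastly stronger): the transport encryption holds up perfectly, and is still pointless, because the OS-level “agent” saw the plaintext first.

```python
import hashlib
from itertools import cycle

agent_log = []  # what a hypothetical OS-embedded "assistant" quietly records

def os_text_field(text: str) -> str:
    # The OS handles every keystroke, so an agent embedded in it
    # sees the plaintext before any app-level encryption runs.
    agent_log.append(text)
    return text

def toy_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR stand-in for a real E2EE protocol; the same call
    # both encrypts and decrypts.
    keystream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(keystream)))

key = b"shared secret"
plaintext = os_text_field("meet at the cafe at 9")
wire = toy_cipher(plaintext.encode(), key)   # what the network sees
received = toy_cipher(wire, key).decode()    # what the recipient reads

assert received == plaintext                 # encryption works end to end
assert plaintext.encode() not in wire        # a wiretap sees only ciphertext
assert agent_log == [plaintext]              # ...but the OS had it all along
```

Note the order of events: the capture happens at the input layer, before `toy_cipher` ever runs, which is exactly the “dictating to the FBI agent, then encrypting” problem.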
He’s talking specifically
She*
My bad. Thanks for the correction.
Thanks!
like using Gboard?
Encrypted apps like Signal encrypt messages in a way that only you and the recipient can decrypt and read. Not even Signal can decrypt them. However it has always been the case that another person could look over your shoulder and read the messages you send, who you’re sending them to, and so on. Pretty obvious, right?
What the author and Signal are calling out here is that all major commercial OSes are now building in features that “look over your shoulder.” But it’s worse than that because they also record every other device sensor’s data.
Windows Recall is the easiest to understand. It is a tool built into Windows (and enabled by default) that takes a screenshot every few seconds. This effectively captures a stream of everything you do while using Windows: what you browse, who you chat with, the pron you watch, the games you play, where you travel, and who you travel with or near. If you use “private” messaging tools like Signal, they’ll be able to see who you are messaging and read the conversations, just as if they were looking over your shoulder, permanently.
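To make the shoulder-surfing concrete, here is a toy model of a Recall-style capture store (the names `recall_index` and `capture` are invented; real Recall OCRs actual screenshots into a local database): whatever any app draws, including an E2EE messenger, lands in a searchable index.

```python
recall_index = []  # (timestamp, app, captured text)

def capture(timestamp: float, app: str, screen_text: str) -> None:
    # Stand-in for a periodic screenshot + OCR pass: the capture
    # happens after decryption, at the point where text is rendered,
    # so "end-to-end encrypted" content is stored the same as anything else.
    recall_index.append((timestamp, app, screen_text))

capture(1.0, "Signal", "alice: the surprise party is on friday")
capture(2.0, "Browser", "searching: gift ideas")

# Anyone with access to the index can search the "private" chat:
hits = [entry for entry in recall_index if "party" in entry[2]]
assert hits == [(1.0, "Signal", "alice: the surprise party is on friday")]
```

The point of the sketch is the asymmetry: Signal controls the wire, but the capture sits on the rendered screen, which Signal does not control.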
They claim that for an AI agent to serve you well, it needs to know everything it can about you. They also make dubious claims that they’ll never use any of this against you, but they also acknowledge that they comply with court orders and government requests (to varying degrees). So… if you trust all of these companies and every government in the world, there’s nothing to worry about.
Thanks
“Agentic” LLMs are turning garbage operating systems, like Microslop Winblows, into hostile and untrusted environments where applications need to run. A primary example given is how Recall constantly captures your screen and turns the image data into text that can be processed by Microslop, thus making the fact that Signal is end-to-end encrypted largely irrelevant, since your OS is literally shoulder-surfing you at all times. This is made worse by the fact that the only workaround that application developers can use to defend against this surveillance is to implement OS DRM APIs, which are also controlled by the hostile entity.
Thanks!
During the interview, she described how AI agents are marketed as helpful assistants but require sweeping permissions to work. As Whittaker explained, these systems are pitched as tools that can coordinate events or communicate on a user’s behalf, but to do so they must access calendars, browsers, payment methods, and private messaging apps like Signal, placing decrypted messages directly within reach of the operating system.
Thanks!
Your operating system and half the software you use has integrated spyware that can read anything you see on your computer or phone as free text and use that information to notify state actors or just whoever the fuck they want of the contents. It doesn’t matter that the message was encrypted between you and the other person when they can spy directly on your device.
It’s like passing a coded note to a friend in class and then they open it and just read it out loud to everyone sitting there. Didn’t really matter that you encoded it.
Thanks!
Is that because code will be so fucking unintentionally obfuscated that even admins will never be able to recover secrets?
No. It’s because in order for AI agents to work, they need access to the content being transmitted on each end of the communication.