AI Can Now Easily Unmask Your Secret Online Life for $4
A new research paper from ETH Zurich, Anthropic, and MATS demonstrates that Large Language Models can automatically de-anonymize users across platforms like Reddit and Hacker News.
The AI acts like a digital detective using a method called ESRC (Extract, Search, Reason, Calibrate). It scans a user’s post history for subtle clues (hobbies, writing style, locations), searches the wider internet (LinkedIn, other forums) for matches, and uses complex reasoning to confirm the identity.
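The paper does not publish its implementation, but the four-stage loop can be sketched roughly. The snippet below is a toy illustration, not the authors' code: the clue index, helper names (`extract_clues`, `search_web`, `reason_and_calibrate`), and the scoring rule are all invented for demonstration, with the real system using an LLM at each stage.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    identity: str
    evidence: list = field(default_factory=list)
    confidence: float = 0.0

def extract_clues(posts):
    # Extract: stand-in for an LLM pass that pulls hobbies,
    # locations, and stylistic tells from a post history.
    return [clue for post in posts for clue in post.get("clues", [])]

def search_web(clues):
    # Search: stand-in for web search; a tiny hard-coded index maps
    # clues to candidate profiles elsewhere on the web.
    index = {
        "vintage synths": ["linkedin:jane-doe"],
        "zurich": ["linkedin:jane-doe", "forum:max"],
    }
    hits = {}
    for clue in clues:
        for ident in index.get(clue, []):
            hits.setdefault(ident, []).append(clue)
    return hits

def reason_and_calibrate(hits, threshold=0.5):
    # Reason + Calibrate: score each candidate by the share of clue
    # matches it accounts for, and only emit a "firm guess" when the
    # score clears the calibration threshold.
    total = sum(len(ev) for ev in hits.values()) or 1
    candidates = [Candidate(i, ev, len(ev) / total) for i, ev in hits.items()]
    best = max(candidates, key=lambda c: c.confidence, default=None)
    return best if best and best.confidence >= threshold else None

posts = [{"clues": ["vintage synths", "zurich"]}]
match = reason_and_calibrate(search_web(extract_clues(posts)))
```

The calibration step is what drives the reported "90% accuracy on firm guesses": the system abstains when no candidate's score clears the bar, trading coverage for precision.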
The terrifying results:
- It correctly linked secret Hacker News usernames to real people 67% of the time (with 90% accuracy when it made a firm guess).
- It successfully matched a person’s separate Reddit accounts from different years 68% of the time.
- The entire automated process costs only $4 per target.
“Practical obscurity”, the idea that you’re safe online because it takes too much human effort to connect your digital breadcrumbs, is dead. Anyone with a few dollars and an LLM API can now mass-dox thousands of pseudonymous accounts in minutes.
Good thing I’ve deleted my previous two Reddit accounts, and I’m planning to delete my current one as well.
There have long been easy ways to compile tidbits on a person from Reddit. Connecting them to other sources on the web in an automated way may be novel, but I’m skeptical about the accuracy claims. An AI telling you who someone could be is still a guess, and if you can’t prove it some other way, it remains a guess. The important part, I would say, is the possibility of surveillance efforts using this kind of tool to connect who is who more easily, which is something they’ve probably already been doing for years.
It’s all the more reason to be careful about what details you share about yourself and to never assume anonymity will protect you. Always be working to manage risk, never with a false belief that you can eliminate it entirely.
Imageboards are safe for now, I guess



