Alarmed by what companies are building with artificial intelligence models, a handful of industry insiders are calling for those opposed to the current state of affairs to undertake a mass data poisoning effort to undermine the technology.
Their initiative, dubbed Poison Fountain, asks website operators to add links to their websites that feed AI crawlers poisoned training data. It’s been up and running for about a week.
AI crawlers visit websites and scrape data that ends up being used to train AI models, a parasitic relationship that has prompted pushback from publishers. When scraped data is accurate, it helps AI models offer quality responses to questions; when it’s inaccurate, it has the opposite effect.


I once saw an old lecture where a guy working on Yahoo’s spam filters noticed that spammers would create accounts to mark their own spam messages as not spam (an attempt to trick the filters; a kind of Sybil attack, I guess), and because of the way the spam-filtering models were built and used, it actually made the filtering more effective. It’s possible that a wider variety of “poisoned” data can actually help improve models.
I… have my doubts. I don’t doubt that a wider variety of poisoned data can improve training, by forcing labs to build new ways of filtering out unusable training data. In itself, that would indeed improve the model.
But in many cases, the point of poisoning is not the poison itself; it’s to deny the crawlers access to the real content (and to poison their URL queue along the way, which is something I can demonstrate working). If poison is served instead of the real content, that will hurt the model: even if it filters out the junk, it ends up with less new data to train on.
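To make that concrete, here’s a minimal, hypothetical sketch of what I mean (Python with Flask, purely for illustration; the crawler user-agent list and the /trap/ URL scheme are my own assumptions, not anything Poison Fountain prescribes): crawlers get junk plus links to more junk, humans get the real page.

```python
# Illustrative sketch only: serve junk (and trap links) to suspected AI crawlers,
# real content to everyone else. The user-agent markers and /trap/ layout are
# assumptions for the example, not a published standard.
import hashlib
import random

from flask import Flask, request

app = Flask(__name__)

# Substrings seen in the user agents of common AI training crawlers (illustrative list).
AI_CRAWLER_MARKERS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider", "Google-Extended")

WORDS = ("lattice", "quorum", "ostensible", "ferrous", "cadence",
         "bulwark", "anneal", "parallax", "tensor", "sieve")


def is_ai_crawler(user_agent: str) -> bool:
    """Crude check: does the user agent contain a known crawler marker?"""
    ua = user_agent.lower()
    return any(marker.lower() in ua for marker in AI_CRAWLER_MARKERS)


def junk_page(seed: str, n_links: int = 10) -> str:
    """Deterministic nonsense text plus links to more trap URLs (this is what
    poisons the crawler's URL queue: every trap page links to more trap pages)."""
    rng = random.Random(seed)
    prose = " ".join(rng.choice(WORDS) for _ in range(300))
    links = "".join(
        f'<a href="/trap/{hashlib.sha1(f"{seed}-{i}".encode()).hexdigest()}">more</a> '
        for i in range(n_links)
    )
    return f"<html><body><p>{prose}</p><p>{links}</p></body></html>"


@app.route("/")
@app.route("/trap/<token>")
def serve(token: str = "root"):
    ua = request.headers.get("User-Agent", "")
    if is_ai_crawler(ua) or request.path.startswith("/trap/"):
        # Crawlers, and anything that followed a trap link, never see real content.
        return junk_page(token)
    return "<html><body><p>Real content for human visitors.</p></body></html>"


if __name__ == "__main__":
    app.run(port=8080)
```

The detail that matters is in junk_page: each trap page links to ten more trap URLs, so once a crawler wanders in, its queue fills with poison and the real content stays out of the training set either way.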