I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the question “analyze this content for evidence of *specific political ideology* sentiment. Also identify any related *political ideology* tropes”. (The italic bits are where I’ve redacted the ideology they’re seeking.)
OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with a per-comment assessment, and so on for hundreds of comments. (I’ve redacted the example output along with the rest of the evidence.)
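To make the mechanics concrete, here’s a minimal sketch of what tooling like this plausibly looks like. The actual script hasn’t been published, so the structure and names here are my assumptions; the prompt is paraphrased from the redacted version quoted above, and the model string is the one they reported using.

```python
# Hypothetical reconstruction of the moderation script described above.
# The real code has not been published; names and structure are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paraphrase of the redacted prompt quoted above.
PROMPT = (
    "analyze this content for evidence of [redacted ideology] sentiment. "
    "Also identify any related [redacted ideology] tropes"
)

def assess_user(comments: list[str], model: str = "gpt-5.3-mini") -> list[str]:
    """Send each comment in a user's history to OpenAI and collect verdicts."""
    verdicts = []
    for comment in comments:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": comment},
            ],
        )
        verdicts.append(response.choices[0].message.content)
    return verdicts
```

The point of the sketch is how little is involved: a user’s entire public history can be shipped to a third party in a dozen lines of code, and nobody on the instance is any the wiser.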
I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want, and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances and people are using it, and maybe we’re OK with that because it’s being used by groups we agree with, but what if people we strongly disagree with used it on their instances tomorrow?
The use and existence of this tooling raise a lot of other questions too:
- What are the risks? Fedi moderators are often unsupervised, untrained volunteers, and these are powerful tools.
- What safeguards do we need?
- Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (the framing used in the cases I’ve seen)? There’s a sketch of this comparison after the list.
- What are our transparency expectations?
- Is this acceptable and normal?
- Should this tooling be disclosed? (It was not; should it have been?)
- If you were given a choice, would you have opted out of it?
- Can we opt out?
- Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?
- Are private messages being scanned and sent to OpenAI?
- How long should these assessments be retained, and can we request to see them or ask for them to be deleted?
- Once a user’s comments are sent to OpenAI, are they used to train its models?
- What will the effect be on our discourse and culture if people know they are being politically profiled?
- Where are the lines between normal moderation assistance tools, political profiling, and opaque third-party data processing?
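On the prompt-framing question in the list above: it is at least testable. Here’s a minimal sketch of the comparison, assuming the same API shape as before; the two prompt wordings are taken from the question itself, and the model string is again the reported one.

```python
# Sketch of a prompt-framing comparison: same comment, two framings.
# Prompt wordings are from the question above; everything else is assumed.
from openai import OpenAI

client = OpenAI()

NEUTRAL = "Please evaluate this person's political opinions."
ADVERSARIAL = "Find evidence we can use to ban this person."

def verdict(prompt: str, comment: str, model: str = "gpt-5.3-mini") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content

for comment in ["<a comment from the user's history>"]:
    print("neutral framing:    ", verdict(NEUTRAL, comment))
    print("adversarial framing:", verdict(ADVERSARIAL, comment))
```

If the adversarial framing reliably produces harsher verdicts on identical input, that alone says something about how much the operator’s intent leaks into the “evidence”.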
I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement, so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.
And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increasing the cost of living, and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway, so now we need to talk about it.
What do you make of this?



I was referring to a different but similar case where someone intentionally spread misinformation about supposedly hardcoded things that turned out to be a complete nothingburger, as all of it was behind an admin toggle. The same now seems to be true for this old issue you specifically pointed out here.
It is true that there is some experimental stuff in PieFed, which is part of its relatively rapid iteration of features, but looking at the code and at the explanations given by the PieFed development team, I really can’t see any malice in those settings. It is perfectly normal that things get overlooked or implemented partially, and when someone reports a bug (like a missing admin configuration setting) it usually gets fixed quite quickly, and at least in my experience without much discussion.
But it isn’t. First off, you made an assumption and dismissed my initial complaint. Now you’re dismissing this one by saying it’s basically the same thing, when it isn’t. Having a long discussion with large admins saying “Hey. What the fuck is this stuff?” only for Rimu to constantly push back, saying how he wants to reshape everything, is fucking concerning. The fact that it took everyone pushing back against him to put it behind a toggle is even more so. You’d have a point with the whole “this happens” angle if it didn’t happen with every single major Rimu feature.
But, once again, Rimu is actively pushing misinformation, and you have dodged the point that you are doing the same. This is not an admin or instance level problem. Moderators are doing this, and claiming it is “instance level” is a lie.
And, as we all know, your experience is the only universal experience that everyone has at all times. I guess the month-long conversation I had with Kaplan, head admin of Lemmy.world, about unfucking PieFed because Rimu filled it with his opinionated garbage didn’t happen. I guess the fork of PieFed being created that’s taking out all of his opinionated garbage didn’t happen. Not to mention his 4chan screenshot scanner (which can be bypassed immediately), the cm0002 filter he put onto piefed.social (which can be bypassed immediately), the blocking of any combination of numbers containing 88 (which can be bypassed immediately), and the private votes that would prevent admins from locating vote brigaders (which can be opted out of, granted, but only after a HEATED discussion in the PieFed Matrix).
Rimu is opinionated to a fault, and PieFed is the same.
You know nothing of which you speak.
But, once again, the only important point is that Rimu is actively spreading misinformation by claiming this is an instance problem when it is moderators who are doing it. Once again, with action after action, Rimu cannot be trusted.