Thinking Creatively About Ban Requests and Content Policing.

[Image: self-censored page of 'Green Illusions', by Ozzie Zehner]

A particularly insidious problem with online social media platforms is biased and overly restrictive ban patterns. When enough people report someone as violating the site’s Terms of Service, the site will usually accept the reports at face value, because there simply isn’t time to evaluate all of the source materials and apply sophisticated yet consistent judgement.

No matter how large the company, even if it’s Facebook, there will simply never be enough staff to evaluate ban requests well. These companies are only profitable because they maintain low staff-to-user ratios. If policing user-contributed content requires essentially unbounded increases in staff size, that’s a losing proposition, and the companies understandably aren’t going to go there.

One possible solution is for the companies to make better use of the resource that does increase in proportion to user base — namely, users!

When user B reports user Q as violating the site’s ToS, what if the site’s next step were to randomly select one or more other users (who have also seen the same material user B saw) to sanity-check the request? User B doesn’t get to choose who the checkers are, and user B would be anonymous to them: the checkers wouldn’t know who made the ban request, only the basis for it, that is, what user B claimed about user Q. The site would also put its actual Terms of Service conveniently in front of the checkers, to make the process as easy as possible.
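
To make the flow concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the `Report` record, the `notify_checker` stub, the number of checkers); it illustrates the idea, not any real platform’s API.

```python
import random
from dataclasses import dataclass

@dataclass
class Report:
    reporter_id: str   # user B
    reported_id: str   # user Q
    post_id: str
    claim: str         # what B says violates the ToS

TOS_URL = "https://example.com/terms"   # placeholder

def notify_checker(user_id, post_id, claim, tos_url):
    # Stand-in for whatever notification mechanism the site already has.
    print(f"Asking {user_id} to review post {post_id}: {claim!r} (ToS: {tos_url})")

def select_checkers(report, viewers, num_checkers=5):
    """Randomly pick peers who have already seen the reported material."""
    # Neither the reporter nor the reported user may check this case.
    eligible = [u for u in viewers
                if u not in (report.reporter_id, report.reported_id)]
    chosen = random.sample(eligible, min(num_checkers, len(eligible)))
    for user_id in chosen:
        # Checkers see only the claim, the post, and the ToS,
        # never who filed the request.
        notify_checker(user_id, report.post_id, report.claim, TOS_URL)
    return chosen
```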

Now, some percentage of the checkers would ignore the request and just not bother. That’s okay, though: if that percentage is high, that tells you something right there. If user Q is really violating the site’s ToS in some offensive way, there ought to be at least a few other people besides user B who think so, and some of them would respond when asked and support B’s claim. The converse case, in which user Q is perhaps controversial but is not violating the ToS, does not need to be handled symmetrically, because the default is not to ban: freedom of speech implies a bias toward permitting speech when the case for suppressing it is not convincing. In practice, though, if Q is controversial in that way, then some of the checkers would be motivated to respond anyway, because they recognize the situation and want to preserve Q’s ability to speak.
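
As a sketch of that asymmetry, the decision rule might look like the following; the thresholds are made-up illustrations, not recommendations.

```python
def decide(confirmed, rejected, invited,
           min_confirmations=3, min_response_rate=0.25):
    """Default is to leave the material up; ban only on clear peer support.

    Checkers who never respond count against the request, not for it:
    silence is treated as a lack of evidence for the ban.
    """
    responded = confirmed + rejected
    if invited == 0 or responded / invited < min_response_rate:
        return "no_action"        # too little interest to justify acting
    if confirmed >= min_confirmations and confirmed > rejected:
        return "uphold_request"   # enough independent support for B's claim
    return "no_action"            # controversial is not the same as violating
```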

The system scales very naturally. If there aren’t enough other people who have read Q’s post available to confirm or reject the ban, then it is also not very urgent to evaluate the ban in the first place — not many people are seeing the material anyway. ToS violations matter most when they are being widely circulated, and that’s exactly when there will be lots of users available to confirm them.

If user B issues too many ban requests that are not supported by a majority of randomly selected peers, then the site could gradually downgrade the priority of user B’s ban requests in general. In other words, a site can use crowd-sourced checking both to evaluate a specific ban request and to rank the people who request bans by their reliability. The best scores would belong to those who are conservative about reporting and who only do so when (say) they see an actual threat of violence or some other unambiguous violation of the ToS. The worst scores would belong to those who issue ban requests against any speech they don’t like. Users don’t necessarily need to be told what their score is; only the site needs to know.
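
A toy version of that reliability score might look like this, continuing the hypothetical names above; a real site would want smoothing, decay over time, and protection against gaming, none of which is shown here.

```python
from collections import defaultdict

class ReporterReliability:
    """Track how often a user's ban requests end up supported by peer checkers."""

    def __init__(self):
        self.supported = defaultdict(int)
        self.rejected = defaultdict(int)

    def record(self, reporter_id, upheld):
        if upheld:
            self.supported[reporter_id] += 1
        else:
            self.rejected[reporter_id] += 1

    def score(self, reporter_id):
        total = self.supported[reporter_id] + self.rejected[reporter_id]
        return 0.5 if total == 0 else self.supported[reporter_id] / total

    def queue_priority(self, reporter_id):
        # Requests from reliable reporters get looked at sooner.
        # The score itself never has to be shown to the user.
        return self.score(reporter_id)
```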

(Of course, this whole mechanism depends on surveillance — on centralized tracking of who reads what. But let’s face it, that ship sailed long ago. While personally I’m not on Facebook, for that reason among many, lots of other people are. If they’re going to be surveilled, they should at least get some of the benefits!)

Perhaps users who consistently issue illegitimate ban requests should eventually be blocked from issuing further ban requests at all. This does not censor them, nor does it interfere with their access to the site. They can still read and post all they want. The site is just informing them that it no longer trusts their judgement when it comes to ban requests.
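
Building on the toy score above, that cutoff could be as simple as the following; the numbers are arbitrary illustrations.

```python
MIN_SCORE = 0.2     # arbitrary illustration
MIN_HISTORY = 10    # don't cut anyone off on a tiny sample

def may_file_request(reliability, reporter_id):
    """Reading and posting are unaffected; only the report button goes away."""
    total = (reliability.supported[reporter_id]
             + reliability.rejected[reporter_id])
    return total < MIN_HISTORY or reliability.score(reporter_id) >= MIN_SCORE
```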

The main thing is (as I’ve written elsewhere) that right now there’s no cost for issuing unjustified ban requests. Users can do it as often as they want. For anyone seeking to no-platform someone else, it’s all upside and no downside. What is needed is to introduce some downside risk for attempts to silence others.

Other ideas:

  • A site should look more carefully at new ban requests against material that has already been the subject of a rejected ban request, or that was reported by someone with a poor ban-reliability score, because those new requests are more likely to be unjustified as well (see the sketch after this list).

  • A lifetime (or per-year) limit on how many ban requests someone can issue.

  • Make ban requests publicly visible by default, with opt-out anonymity (that is, opt-in to be identified) for the requester.
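
For the first idea in that list, here is one rough way the prioritization could combine the requester’s score with the material’s history; it reuses the hypothetical ReporterReliability sketch from earlier, and the weighting is arbitrary and purely illustrative.

```python
def review_priority(reliability, reporter_id, prior_rejected_requests):
    """Lower numbers mean the request sinks toward the bottom of the review queue."""
    base = reliability.score(reporter_id)
    # Each previously rejected request against the same material halves
    # the weight of the new one.
    return base * (0.5 ** prior_rejected_requests)
```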

Do you have other (hopefully better) ideas? I’d love to hear them in the comments.

If you think over-eager banning isn’t a real problem yet, remember that we have incomplete information as to how bad the problem actually is (though there is some evidence out there). By definition, you mostly don’t know what material you’ve been prevented from seeing.
