False reporting, AI moderation and the law of averages have struck again. I’m not being persecuted; I’m just mathematically guaranteed to wind up banned from more social media platforms than a certain politician in February 2021.
I have made myself intimately familiar with Reddit TOS over the past couple of years, ever since I began posting medical case reports and photos. I developed a following of over 2,700 people and became the queen of a 300,000-member medical subreddit. I started out just posting the occasional interesting medical case I found, discovered people liked it, and began posting daily. I kept a list on my phone of links to medical journal articles and selected some each day to post.
Looking for new material to post, reading it and discussing it with others was a really fun hobby. I learned a lot about medicine from the reports and the discussion they inspired. I used the opportunity to educate against ineffective and harmful quack medicine (two words: “bonesetter’s gangrene”). Multiple Redditors told me they were medical students who used my case reports as a study aid. In one case, a woman recognized her own undiagnosed disorder in a case report I put up and said she was delighted, because the problem had been bothering her for years and now she finally knew what it was and how to fix it.
And now it’s probably over, at least over as it was, and I’m not surprised.
As far as I am aware, none of my content violated Reddit TOS, although it was extremely graphic at times. The medical subreddit I posted to also had its own rules for what could be posted, which I followed to the best of my ability. Everything was presented in an educational context, the genital area was covered if the patient was a minor (even in cases of autopsies and fetuses), and the general atmosphere of that subreddit is very respectful.
But my content had a tendency to get reported a lot, something I know because, as a sub moderator, I was able to monitor reports. Some people just like reporting things to cause trouble. Others were truly shocked by the content: due to what I assume is a Reddit glitch, images in that subreddit apparently aren’t blurred by default like in every other NSFW subreddit, so many people who prefer to keep NSFW images blurred saw more than they wanted. I could do nothing about this except, in my capacity as moderator, clear the reports from the inbox, since I believed my content didn’t violate TOS and the other moderators were fine with it too.
So my stuff accumulated reports a lot and some posts were especially famous for them. Which brings me to the law of averages:
For each report, a decision must be made whether to remove the content and give the poster a violation. And each time there’s a slight chance (let’s pretend it’s 1 in 100) that the decision will be wrong: a post that didn’t violate Reddit TOS gets removed, or a post that does violate it stays up. No one is perfect. I’ve made incorrect moderation decisions before, and so has Reddit.
And so when you’re posting content every day, half of it generates at least one report, and some posts generate dozens, it is mathematically guaranteed that you will eventually accumulate enough violations to get permanently banned. The risk compounds because any post you’ve ever made can get reported and removed, even one that’s been up for years. I once got a violation on photos that had been up for two years.
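The law-of-averages math above can be sketched in a few lines. Assuming (as the 1-in-100 figure pretends) that each reported post independently has a 1% chance of a wrong removal, the chance of at least one wrong call after N reports is 1 − 0.99^N, which climbs toward certainty as reports pile up:

```python
def p_at_least_one_error(n_reports: int, p_error: float = 0.01) -> float:
    """Probability that at least one of n_reports moderation
    decisions is wrong, assuming each decision independently
    has a p_error chance of being a mistake.
    (The 1% default is the essay's pretend number, not a real statistic.)"""
    return 1.0 - (1.0 - p_error) ** n_reports

# With a 1% error rate: after 10 reports the odds of a wrong call are
# under 10%, after 100 reports about 63%, after 500 reports over 99%.
for n in (10, 100, 500):
    print(n, round(p_at_least_one_error(n), 3))
```

This is the same reasoning as repeated coin flips: each individual report is low-risk, but a daily poster whose content routinely draws reports is effectively flipping that coin hundreds of times.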
To Reddit AI (artificial intelligence) moderation, the injuries on the child in the photo looked like images depicting child abuse, which is disallowed under certain circumstances. In fact, the child in question had been attacked by a wild animal, specifically a jackal. I’d made many previous posts about attacks by dogs and bears and big cats, but never by a jackal until that day, and it got me permabanned.
Reddit AI saw only the photo, not the medical journal article I attached to the post explaining the reconstruction done on the child’s face after the jackal attack. A lack of context causes a lot of moderation errors. The truth is, even if the child in my photo had been injured by child abuse, the medical context (how to do reconstruction for this kind of injury on a very young patient in a resource-poor area) would have made the photo permissible under Reddit TOS.
But under AI moderation nuance seems to be impossible.
It’s happened to me on two platforms now. Last time it was Facebook, and the posts weren’t medicine related; they were mostly Holocaust related. Facebook AI moderation can’t tell the difference between content meant to educate about the Holocaust and World War II (which is OK under TOS) and white-supremacist content (which is against Facebook TOS), because the same imagery (the swastika, a gentleman with a certain type of mustache, etc.) is used in both. The last straw was a SpongeBob meme. I did manage to get a Facebook account back, but it took a while and I lost all my old content.
It sucks, but that’s what things are like for people who post material like mine, at the rate I do, in this Wild West era of social media platforms and artificial intelligence. I have to kind of accept that this will happen.
I use Reddit for medical education and Facebook for Holocaust education and got banned for doing this. I don’t know what topic to educate people on next. What platform to get banned from.
If I can’t figure out how to get back on Reddit (I appealed the permaban and have some plans, but it will take time to see how this spirals out, and I’m assuming the account is gone), I might make a second WordPress blog referencing my Reddit username and post my medical journal material and photos there. Or I might find another social media platform to land on.
Fortunately, I don’t need a Reddit account to access content, only to participate in discussion and post my own material. I do need a Facebook account to access material there, so I need one for the Charley Project.