Hi
I am a computer science student just starting my master's thesis. My focus will be on content moderation algorithms, so I am currently exploring how various social media applications moderate content.
If I understand the docs correctly, content moderation on Mastodon is entirely manual? I haven't read anything about automatic detection of Child Sexual Abuse Material (CSAM), for example, which most centralised platforms seem to do.
Another question in the same direction concerns reposts of already-moderated content, for example a racist meme that was posted and removed before. Are there any measures in place to detect this? A rough sketch of the kind of check I have in mind follows below.
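To make that second question concrete: centralised platforms often do this with perceptual hashing, comparing each upload against hashes of previously removed images. Here is a minimal sketch in Python using the Pillow and imagehash libraries; the blocklist hash, the threshold, and the file name are made up for illustration, and I'm not claiming Mastodon works this way.

```python
# Minimal sketch of hash-based re-upload detection.
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

# Hypothetical blocklist: perceptual hashes of images a moderator
# already removed (e.g. the racist meme from the example above).
BLOCKED_HASHES = {imagehash.hex_to_hash("fcf8f0e0c0809fff")}

# Hamming-distance threshold: small distances still catch
# re-encodes, resizes, and minor edits of the same image.
MAX_DISTANCE = 5

def is_known_bad(path: str) -> bool:
    """Return True if the uploaded image matches a blocked hash."""
    h = imagehash.phash(Image.open(path))
    return any(h - blocked <= MAX_DISTANCE for blocked in BLOCKED_HASHES)

if is_known_bad("upload.png"):  # hypothetical upload path
    print("Flag for moderator review before publishing")
```

So essentially I'm asking whether anything like this runs anywhere in the Mastodon/Fediverse moderation pipeline, or whether it's all human review.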
Thank you for your help!
What do you mean? There are tools out there already; Lemmy recently had one made that helps with CSAM detection. Plus, the Fedi is always touting donations as the way forward, so why wouldn't people donate if funds are needed?