Opinion | Why Twitter’s firing of content moderators is a sign of worse to come
- With essential workers such as content moderators among those cut, hate speech has risen quickly on the social media platform
- This does not bode well for a platform that already had content moderation problems, and which might descend into chaos
Content moderators are among the most essential workers at any major social media platform: not just Twitter, but also companies such as Facebook and TikTok. They are responsible for removing inappropriate or graphic content, such as videos of suicide, hate speech, violence, pornography and disinformation, and for reviewing content reported for violations of company policy.
In this sense, moderators are an invisible army essential for protecting Twitterati and other social media users from the depths of human depravity. Given the real-world impact of social media, content moderators also play a vital role in maintaining peace and reducing hate speech and hate crimes.
It is their job, for example, to take down disinformation during election cycles and to remove videos posted by terrorist groups such as Islamic State. But the violent nature of the material they review has left many moderators with symptoms of post-traumatic stress disorder or feelings of isolation, contributing to a high turnover rate: some employees quit after about a year.
In early signs of the effects of the content moderation staff cull, hate speech directed at racial, religious and sexual minorities has increased. For example, the number of tweets using a racial slur directed at African-Americans rose by 500 per cent in the first 12 hours after Musk's takeover. There has also been an uptick in anti-Semitic hate speech on the platform.