The Importance of (Human) Content Moderation
When it comes to mental safety on the internet versus freedom of speech, Content Moderation sits squarely in the crossfire. Elon Musk hates it. Most people working in social media like it. Recently, some people formerly hired by OpenAI came forward to tell the public just how much awful content they had to review to make sure the AI model learns only from safe content.
As someone who worked briefly in this field (but was heavily involved in every aspect of it, from writing the policy, to operationalizing it, to looking after every moderator's mental safety and calibration), I have to say that we will never be able to get rid of the manual process of content moderation. Here are some reasons:
People are AWFUL: let’s be honest, we all know it’s true. We are awful, and plenty of us will make nasty and gross content when no one is looking. Think of all the “creative” things people put on Telegram, from the horrific “Nth Room” chat rooms to the latest betting bot wagering on who climaxes first. We are naturally drawn to obscene content. As sad as it sounds, we will always need some sort of reviewing mechanism to keep viewers safe. I understand some content falls under freedom of speech, but we should also think about the psychological safety of people who would rather not burn their eyes, and about the psychological development of the next generation, eh?
AI and LLM models are not perfect, and only the human mind is controllable (or at least does damage on a smaller scale): Think of Skynet. Then think of Genghis Khan, Attila the Hun, or Hitler. You see my point. When the tyrant is human, the scale and the influence spread more slowly, taking time to evolve to the point of scaring the world. When it comes to change in technology, however, the spread is exponential and rapid, so fast that it cascades across the world before we are even aware of it. Think of ChatGPT’s adoption rate. If we accidentally fed the wrong information to an LLM, it would quickly evolve into something we could hardly recognize, let alone control. Before we know it, it will either decide that Caucasian men should dominate all women and do whatever they want to us, or decide humanity is a threat and launch a full missile attack on all major cities.
Humanity and our perception of ourselves are a constantly evolving project: just over one hundred years ago, women were bound to the home with no career. Now we find female CEOs in Fortune 500 companies. Humanity and culture are, in themselves, a constantly evolving organism. It is impossible to toss that to an LLM or AI model and expect it to keep pace with our ever-changing minds. The best way is still to let humans decide what humans want; only humans can dictate what we include in the next cultural movement.
In conclusion, as much as I hate to disagree with Elon Musk (well, I also disagree with X, but what do I know?), I have to say that Content Moderation is here to stay, and the safety of the human mind lies in human hands.