Will ChatGPT Mean An End to Human Moderation Jobs?

KEY TAKEAWAYS

Content moderation is the challenging task of evaluating and overseeing user-generated content on digital platforms. That content frequently includes harmful and disturbing material, straining the mental well-being of the moderators who face a constant influx of it. Can AI offer a solution, and will it come at the cost of their jobs?

Content moderation is not easy. The hateful and vile user-generated content that moderators must manage can exact a heavy mental toll.

For example, Meta paid $52 million to content moderators after employees sued for compensation over the health issues they suffered following months of moderating disturbing content.

While dealing with disturbing content remains an occupational hazard for content moderators, efforts are underway to enable AI to moderate content instead.

In a blog post, ChatGPT developer OpenAI states:

“Content moderation plays a crucial role in sustaining the health of digital platforms.

“A content moderation system using GPT-4 results in much faster iteration on policy changes, reducing the cycle from months to hours.

“GPT-4 can also interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling.

“We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of many human moderators.

“Anyone with OpenAI API access can implement this approach to create their AI-assisted moderation system.”
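OpenAI’s post doesn’t include sample code, but a minimal sketch of the approach it describes might look like the following, assuming the official openai Python SDK (v1 or later) and an API key in the environment. The policy text, label set, and moderate() function are illustrative stand-ins, not OpenAI’s own:

```python
# Illustrative sketch: GPT-4 as a policy-driven content labeler.
# Assumes the official `openai` SDK (v1+) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

# Hypothetical platform policy; a real one would be far longer.
POLICY = """You are a content moderator. Label the user's content with exactly one of:
ALLOW  - complies with platform policy
FLAG   - borderline, needs human review
REMOVE - clearly violates policy (hate, threats, sexual content)
Respond with the label only."""

def moderate(content: str) -> str:
    """Ask GPT-4 to label one piece of user-generated content."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
        temperature=0,  # deterministic output for consistent labeling
    )
    return response.choices[0].message.content.strip()

print(moderate("You people don't belong here."))  # e.g. "REMOVE"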

While introducing ChatGPT to content moderation promises benefits such as faster moderation and quicker policy iteration, it may also cost content moderators their jobs.

Why the Need for AI-Assisted Moderation?

The mental health toll on content moderators, along with the desire of sizeable digital platforms such as Meta, X, and LinkedIn to expedite content moderation, sets the context for introducing ChatGPT to the job.

In an interview with HBR, Sarah T. Roberts, faculty director of the Center for Critical Internet Inquiry and associate professor of gender studies, information studies, and labor studies at the University of California, Los Angeles (UCLA), said that content moderators contend daily with alarming content, low pay, outdated software, and poor support.

Roberts describes meeting one content moderator who saw herself as someone who takes on other people’s sins for money, because she needs it.

Efficiency and Human Moderators

Content moderation is highly challenging because content is generated faster than reviewers can keep up with it.

For example, one report stated that Meta failed to identify 52% of the hateful content generated on its platforms. Moderation also struggles to distinguish malicious content from legitimate content.

For example, a government may want pornographic images removed, but moderation rules can find it difficult to differentiate exploitative pornography from nudity that serves a legitimate purpose.

Tarleton Gillespie’s book Custodians of the Internet, for instance, discusses the famous photograph of a naked girl fleeing her burning village. Should such images be removed as well?

The Case for ChatGPT’s Introduction to Content Moderation

Two advantages drive ChatGPT’s introduction to content moderation: faster and more accurate moderation, and relieving human moderators of the trauma of reviewing hateful and disturbing content.

Let’s look a bit more deeply into both advantages.

Content moderation is challenging because of the continuous need to adapt to new content policies and updates to existing ones.

Human moderators may interpret the same content differently, leading to a lack of uniformity in moderation decisions. They are also relatively slow to respond to the constantly changing content policies of their organizations.

OpenAI’s GPT-4 large language model (LLM) can apply labels to content quickly and accurately so that appropriate action can follow. LLMs can also respond faster to policy updates, adjusting their labeling as needed.
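Because the policy lives in the prompt rather than in the model’s weights, a rule change is just a text edit. Continuing the illustrative sketch from earlier, with a hypothetical new rule:

```python
# Illustrative: a policy change is a prompt edit, not a retraining run,
# so a backlog can be relabeled immediately under the new rules.
from openai import OpenAI

client = OpenAI()

BASE_POLICY = "Label content ALLOW, FLAG, or REMOVE. Respond with the label only."
# Hypothetical rule added by the policy team this morning:
UPDATED_POLICY = BASE_POLICY + "\nFLAG any content that solicits personal data."

def moderate(policy: str, content: str) -> str:
    """Label one item under whichever policy text is supplied."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": policy},
                  {"role": "user", "content": content}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Relabel existing items under the new rules in a single pass.
backlog = ["Send me your home address and I'll mail you a prize."]
labels = [moderate(UPDATED_POLICY, item) for item in backlog]
```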

Meanwhile, ChatGPT is unmoved by even the most hateful and atrocious content and can do its job unaffected. In its blog post, OpenAI suggests it can thereby relieve the mental burden on human moderators.

Easy Access to GPT-4

Organizations that need content moderation can use the OpenAI API to implement their own AI-driven moderation system. It’s simple and relatively inexpensive.
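It’s worth noting that, separate from building a custom GPT-4 labeler, the OpenAI API also exposes a dedicated moderation endpoint that scores text against a fixed set of categories such as hate, harassment, and violence. A minimal call with the same Python SDK looks like this:

```python
# OpenAI's dedicated moderation endpoint (separate from GPT-4):
# returns per-category scores for a piece of text.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(input="I will hurt you.")
print(result.results[0].flagged)     # True if any category is triggered
print(result.results[0].categories)  # per-category booleans
```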

However, ChatGPT is not a panacea for content moderation problems: it has limitations, at least in its present state.

Chief among them, ChatGPT depends on its training data to interpret social media content.

There have been plenty of controversial cases of ChatGPT producing biased and unacceptable responses.

If the training data fed to ChatGPT is biased, it may treat harmful social media content as harmless, and vice versa.

As a result, harmful content may slip through as a false negative, while benign content gets removed as a false positive. It’s a complex problem that will take time to resolve.
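One way platforms could keep those errors visible is to audit the model’s labels against human judgments and track false-positive and false-negative rates. A small illustrative sketch, with made-up labels, follows:

```python
# Illustrative audit of model labels against human ground truth.
# "positive" here means "flagged as harmful"; the labels are made up.
model_says_harmful = [True, False, True, False, True]
human_says_harmful = [True, False, False, True, True]

# False positive: model flags content a human would allow.
fp = sum(m and not h for m, h in zip(model_says_harmful, human_says_harmful))
# False negative: model misses content a human would flag.
fn = sum(h and not m for m, h in zip(model_says_harmful, human_says_harmful))

negatives = sum(not h for h in human_says_harmful)
positives = sum(human_says_harmful)

print(f"false positive rate: {fp / negatives:.0%}")  # benign content removed
print(f"false negative rate: {fn / positives:.0%}")  # harmful content missed
```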

Will This Mean a Loss of Jobs?

AI is seen as a replacement for humans in various roles, and content moderation could be the next addition to the list.

X (formerly Twitter), for example, employs 15,000 content moderators, and social media platforms collectively employ a huge number of people who now face an uncertain future.

The Bottom Line

OpenAI’s claims notwithstanding, AI in content moderation is not novel. In fact, AI has been used to moderate content for many years by platforms like Meta, YouTube, and TikTok.

But every platform admits that perfect content moderation is impossible, even with AI at scale.

In practice, both human moderators and AI make mistakes.

And with user-generated content growing faster than anyone can review it, moderation will remain a huge and complicated undertaking.

GPT-4 also continues to generate false information, which complicates the situation further.

Content moderation is not a simple task that AI can tick off like a checkbox and declare done.

In this light, OpenAI’s claims sound simplistic and light on detail: the introduction of AI isn’t going to work magic.

Human moderators can breathe easier because their much-touted replacement is still far from ready. But it will undoubtedly play a growing role.

Kaushik Pal

Kaushik is a technical architect and software consultant with over 23 years of experience in software analysis, development, architecture, design, testing, and training. He has an interest in new technology and innovation areas, focusing on web architecture, web technologies, Java/J2EE, open source, WebRTC, big data, and semantic technologies. He has demonstrated expertise in requirement analysis, architecture design and implementation, technical use case preparation, and software development. His experience spans domains like insurance, banking, airlines, shipping, document management, and product development. He has worked with a wide variety of technologies starting from mainframe (IBM S/390),…