ChatGPT may soon moderate illegal content on sites like Facebook

  GPT-4 — the large language model (LLM) that powers ChatGPT Plus — may soon take on a new role as an online moderator, policing forums and social networks for nefarious content that shouldn’t see the light of day. That’s according to a new blog post from ChatGPT developer OpenAI, which says this could offer “a more positive vision of the future of digital platforms.”

  By enlisting artificial intelligence (AI) instead of human moderators, OpenAI says GPT-4 can enact “much faster iteration on policy changes, reducing the cycle from months to hours.” As well as that, “GPT-4 is also able to interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling,” OpenAI claims.
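
  To make that concrete, here is a minimal sketch of how a platform might ask GPT-4 to label a post against its written policy, assuming the publicly available OpenAI Python SDK; the policy text, label names, and example post are invented for illustration, not taken from OpenAI's blog post.

    # Hypothetical sketch: label a post against a written content policy.
    # Assumes the OpenAI Python SDK (pip install openai) with an API key in
    # the OPENAI_API_KEY environment variable; the policy and labels below
    # are illustrative, not OpenAI's published taxonomy.
    from openai import OpenAI

    client = OpenAI()

    POLICY = (
        "K0: No policy violation.\n"
        "K1: Depictions or praise of violence.\n"
        "K2: Facilitating the sale of regulated or illegal goods."
    )

    def moderate(post: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            temperature=0,  # deterministic output keeps labels consistent
            messages=[
                {
                    "role": "system",
                    "content": "You are a content moderator. Apply the policy "
                               "below and reply with exactly one label "
                               "(K0, K1, or K2).\n\n" + POLICY,
                },
                {"role": "user", "content": post},
            ],
        )
        return response.choices[0].message.content.strip()

    print(moderate("Selling untaxed cigarettes, message me."))  # expect: K2

  Because the policy travels with every request as plain text, updating a rule is an edit to the prompt rather than a retraining job, which is what OpenAI means by cutting the iteration cycle from months to hours.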

  For example, the blog post explains that moderation teams could assign labels to pieces of content to indicate whether they fall inside or outside a given platform’s rules. GPT-4 could then take the same data set and assign its own labels, without seeing the answers beforehand.


  The moderators could then compare the two sets of labels and use any discrepancies to reduce confusion and add clarification to their rules. In other words, GPT-4 could act as an everyday user and gauge whether the rules make sense.
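
  The comparison step itself can be mundane code. The sketch below, using hypothetical post IDs and labels, shows how mismatches between the human and model label sets would be surfaced for policy review.

    # Hypothetical sketch: diff human "gold" labels against GPT-4's blind
    # labels; each disagreement flags a rule that may need clarifying.
    human_labels = {"post-1": "K0", "post-2": "K2", "post-3": "K1"}
    model_labels = {"post-1": "K0", "post-2": "K1", "post-3": "K1"}

    disagreements = {
        post_id: (human, model_labels[post_id])
        for post_id, human in human_labels.items()
        if model_labels[post_id] != human
    }

    for post_id, (human, model) in disagreements.items():
        # A mismatch does not mean the model is wrong; it may mean the
        # written rule is ambiguous enough to be read two ways.
        print(f"{post_id}: human={human}, GPT-4={model}; review the wording")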


The human toll

Right now, content moderation on various websites is performed by humans, exposing them to potentially illegal, violent, or otherwise harmful content on a regular basis. We’ve repeatedly seen the awful toll this work can take: in 2020, Facebook agreed to pay $52 million to moderators who developed PTSD from the traumas of the job.

  Reducing the burden on human moderators could improve their working conditions, and because AI models like GPT-4 are immune to the kind of mental stress humans feel when handling disturbing content, they could be deployed without any worry about burnout or PTSD.

  However, it does raise the question of whether using AI in this manner would result in job losses. Content moderation is not always a fun job, but it is a job nonetheless, and if GPT-4 takes over from humans in this area, there will likely be concern that former content moderators will simply be made redundant rather than reassigned to other roles.

  OpenAI does not address this possibility in its blog post, and it is ultimately a decision for the content platforms themselves. But that silence may do little to allay fears that large companies will deploy AI purely as a cost-saving measure, with scant concern for the aftermath.

  Still, if AI can reduce or eliminate the mental devastation faced by the overworked and underappreciated teams who moderate content on the websites used by billions of people every day, there could be some good in all this. It remains to be seen whether that will be tempered by equally devastating redundancies.
