Artificial Intelligence

How AI Is Learning to Identify Toxic Online Content

The Alan Turing Institute found that around 90% of individuals aged 18 to 30 have been exposed to harmful content online. Another study found that only 1 in 6 people flag harmful content when they see it.

These figures reflect the toxicity of the content we see and post online. From online harassment to hateful comments, especially on social media, the online community is quickly becoming a dumping ground for hatred. Human moderators alone cannot address this issue, as they cannot process the enormous volume of information shared online every minute. This is where artificial intelligence can help: AI can provide content moderation services and filter harmful content before it is published.

Let’s learn more about how AI can help identify toxic online content and create a safe space for everyone.

How Is AI Helping Make Online Conversations Civil?

Artificial Intelligence (AI) and Machine Learning (ML) models automatically filter user-generated content online. This includes checking text, images, and videos for toxic or harmful material and taking appropriate action.

AI and ML models are trained on examples of what is acceptable, along with laws, regulations, and other social standards. Using this information, the models scan and flag content that does not comply with the provided standards and guidelines.
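
As a simple illustration of this flag-above-a-threshold pattern, here is a minimal sketch in Python using scikit-learn. The toy training examples, labels, and 0.5 threshold are illustrative assumptions, not a production setup; a real system would be trained on a large, guideline-annotated dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples standing in for a real guideline-annotated dataset.
train_texts = [
    "Have a great day everyone",
    "Thanks for sharing this article",
    "You are worthless and stupid",
    "Nobody wants you here, get lost",
]
train_labels = [0, 0, 1, 1]  # 1 = violates the platform's guidelines

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def moderate(text: str, threshold: float = 0.5) -> str:
    """Flag content whose predicted violation probability crosses the threshold."""
    score = model.predict_proba([text])[0][1]  # probability of class 1
    action = "flag for review" if score >= threshold else "approve"
    return f"{action} (score={score:.2f})"

print(moderate("you are so stupid"))
```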

In this capacity, AI tools are helping humans become more efficient and accurate in content moderation services. Because AI programs can scan and filter large amounts of data much faster than humans, they augment human intelligence and make the work more efficient.

AI can automatically remove harmful text, and it can blur images and videos to obscure harmful content. These AI models and algorithms rely on community guidelines and training material to filter content.

Effective training is required so that these tools avoid the issues faced by Perspective, an AI content moderation tool built by Jigsaw (a Google subsidiary). While the tool generally worked well, it misidentified civil content as harmful and gave it a high toxicity score, which meant it could not serve the needs of different platforms.
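
For reference, the snippet below shows roughly how a platform might query Perspective to score a comment. This is a hedged sketch based on the public v1alpha1 API: the key is a placeholder, and the exact request and response fields should be verified against Google's current documentation.

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder; issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "You are an idiot and everyone hates you"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload, timeout=10)
response.raise_for_status()

# summaryScore.value is a 0-1 probability-like toxicity score.
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")
```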

Types of Content Moderation AI can Deliver

Content moderation is a highly subjective task. While humans have the understanding and judgment needed to filter and moderate all types of content posted online, their speed and capacity are limited.

AI algorithms, on the other hand, are built to work at this scale. Every minute, around 240,000 images go live on Facebook, 65,000 are posted on Instagram, and more than 575,000 tweets are published on Twitter (now X).

With the help of AI and ML tools, we can speed up the moderation process across three main content types:

●      Text

Natural Language Processing (NLP)-powered moderation tools can identify and understand text in terms of:

  • Meaning
  • Sentiment
  • Tone
  • Intent

Using this information, these tools can filter out content that is inappropriate for the audience. With techniques such as Named Entity Recognition (NER), part-of-speech tagging, tokenization, and sentiment analysis, NLP-powered tools help humans speed up the moderation process.
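
As a sketch of this kind of NLP-powered moderation, the snippet below runs comments through unitary/toxic-bert, one publicly available toxicity classifier on the Hugging Face Hub. The model choice and the 0.5 threshold are illustrative assumptions; any comparable text classifier would fit the same pattern.

```python
from transformers import pipeline

# Load a publicly available toxicity classifier (downloads on first run).
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

comments = [
    "Thanks, this tutorial really helped me!",
    "You people are disgusting and should disappear.",
]

for comment in comments:
    result = toxicity(comment)[0]     # top label and score for this comment
    flagged = result["score"] >= 0.5  # illustrative threshold
    status = "FLAG" if flagged else "OK"
    print(f"{status:4} {result['score']:.2f} {comment}")
```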

●      Image

Given the huge number of images posted on different platforms daily, no human team can check every image individually. Hence, AI models are used here as well to identify toxic online content. With the help of computer vision and machine learning, AI tools can analyze images at remarkable speed.

AI models work with pre-trained data: they are taught to recognize inappropriate visual content such as nudity, hate symbols, and illegal activities. Using this training, the models flag images containing such content. When acting autonomously, they can also blur the harmful part of an image before it is published.
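
A minimal sketch of that flag-and-blur step might look like the following. Here nsfw_score() is a hypothetical stand-in for a real image-moderation model, and the file names and 0.8 threshold are assumptions; the blur itself uses Pillow.

```python
from PIL import Image, ImageFilter

def nsfw_score(image: Image.Image) -> float:
    """Hypothetical placeholder: a real model would return the probability
    that the image violates the platform's visual-content policy."""
    return 0.92

image = Image.open("upload.jpg")  # assumed user upload
if nsfw_score(image) >= 0.8:      # illustrative threshold
    # Blur the whole image; a real system might blur only the flagged region.
    image = image.filter(ImageFilter.GaussianBlur(radius=25))
    image.save("upload_blurred.jpg")
```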

●      Video

Using the same technology and systems as image moderation, AI models filter video content, removing or blurring videos with harmful content that is unsuitable for a specific audience or for everyone.
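
One common pattern is to sample frames from the video and reuse an image classifier on each. In the sketch below, frame_score() is again a hypothetical model call; frame reading uses OpenCV, and the sampling rate and threshold are illustrative.

```python
import cv2

def frame_score(frame) -> float:
    """Hypothetical placeholder for an image-moderation model."""
    return 0.1

capture = cv2.VideoCapture("upload.mp4")  # assumed user upload
flagged = False
frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:                 # end of video
        break
    if frame_index % 30 == 0:  # sample roughly one frame per second at 30 fps
        if frame_score(frame) >= 0.8:
            flagged = True
            break
    frame_index += 1
capture.release()

print("hold for review" if flagged else "approve")
```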

Artificial intelligence and its partner technologies, such as computer vision, machine learning, and NLP, are adding much-needed scalability and speed to moderation work. They are helping humans deliver accurate content moderation services and make the web a safer place.

Is There Bias in AI’s Ability to Filter Online Content?

Google’s Perspective tool, built to improve content moderation services, ran into issues with bias: Jigsaw’s AI moderation tool was found to be biased against specific groups of users.

Research showed that Perspective was flagging content related to disability as toxic. It treated words like “deaf,” “blind,” “autistic,” and “mentally handicapped” as negative without understanding the context in which they were used.
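
The toy filter below reproduces that failure mode. Perspective is not a simple keyword filter, but the effect is the same: a context-blind score treats a neutral sentence about disability exactly like abuse.

```python
# Identity terms that context-blind systems have over-penalized.
FLAGGED_TERMS = {"deaf", "blind", "autistic"}

def naive_toxicity(text: str) -> float:
    """Score 1.0 whenever a flagged term appears, regardless of context."""
    words = set(text.lower().split())
    return 1.0 if words & FLAGGED_TERMS else 0.0

print(naive_toxicity("I am a proud deaf woman"))           # 1.0: false positive
print(naive_toxicity("Deaf people deserve equal access"))  # 1.0: false positive
print(naive_toxicity("Have a nice day"))                   # 0.0
```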

Other examples of such bias exist. OpenAI’s Codex could easily be prompted to produce the word “terrorist” when words like “Islam” appeared in the input. Similar stereotypical biases show up in other tools that associate men and women with gender-specific roles, such as “male scientist” and “female housekeeper.”

This shows that even though AI is making impressive progress, it still needs significant work and improvement on several fronts.

Conclusion

Artificial intelligence is helping us make our lives better by enhancing productivity and efficiency. In content moderation, AI tools and technologies help humans identify toxic online content quickly, making the web a healthy and safe place for every type of user. The onus is on the companies that allow users to publish and share content to deploy AI tools and services for accurate filtering and analysis.

At Shaip, we provide intelligent and accurate content moderation services to businesses. Our moderation services will help your brand build trust and enhance its reputation in the industry. Get in Touch!

Vatsal Ghiya is a serial entrepreneur with more than 20 years of experience in healthcare AI software and services. He is the CEO and co-founder of Shaip, which enables the on-demand scaling of its platform, processes, and people for companies with the most demanding machine learning and artificial intelligence initiatives.
