
AI can now moderate social media and protect brands from trolls and hate speech

Social media moderation for big brands can be pretty hard. The sheer volume of content posted every day is difficult to keep up with, so it's only natural that some comments slip through the cracks.

And that can be bad. As much good as social media has done for us, from keeping in touch with friends around the world to sharing our experiences with millions of people, there are always a few people who like to ruin it for the rest of us. Germany has already started threatening social media companies with fines for failing to deal with the mounting problem.

I don’t know what it is about the internet that seems to compel people to spew absolute vitriol at others. It could be the feeling of safety that comes from hiding behind a screen; it could just as easily be boredom. The fact of the matter is that it happens, and increasingly, those tasked with cleaning up the mess are online moderators.

But they can’t be at the computer all the time. What Smart Moderation realised was that keeping a human social media moderator on duty 24/7 is costly and time-consuming, and honestly, not many brands can afford such a luxury.

Çiler Ay Tek
Co-founder & CEO at Smart Moderation

Founded in 2014 by Çiler Ay Tek and Mete Aktaş, Smart Moderation uses artificial intelligence to protect brands’ social media pages. Facebook, Instagram, and YouTube are all part of its core networks.

There are already a number of profanity filters in use on message boards all over the internet, but Smart Moderation’s AI is more than a simple profanity filter. Some profanity filters are so overzealous that they end up censoring the ‘ass’ in ‘assessment’. That is hard on the eyes and makes it difficult to hold a proper conversation.

Smart Moderation analyses text the same way a human would. For example, it can detect the difference between ‘F*** you!’ and ‘That’s f***ing awesome!’ A profanity filter would censor both, exactly as I’ve typed them; Smart Moderation’s AI would remove or hide only the first, leaving the second to be seen exactly as intended.
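The ‘assessment’ problem is easy to see in code. The sketch below is purely illustrative (the word list and function names are my own, not Smart Moderation’s implementation): naive substring censoring mangles innocent words, while whole-word matching avoids that. Note that even whole-word matching cannot tell a hostile use of a word from a friendly one, which is the gap a context-aware model is meant to fill.

```python
import re

# Illustrative blocklist only; a real filter would be far larger.
PROFANITY = ["ass"]

def naive_filter(text: str) -> str:
    """Substring matching: censors matches even inside innocent words."""
    for word in PROFANITY:
        text = re.sub(re.escape(word), "*" * len(word), text, flags=re.IGNORECASE)
    return text

def word_boundary_filter(text: str) -> str:
    """Whole-word matching: leaves 'assessment' alone."""
    for word in PROFANITY:
        text = re.sub(rf"\b{re.escape(word)}\b", "*" * len(word), text, flags=re.IGNORECASE)
    return text

print(naive_filter("Read the assessment"))          # Read the ***essment
print(word_boundary_filter("Read the assessment"))  # Read the assessment
```

Both functions treat a hostile and a friendly sentence identically if they contain the same word, which is exactly why simple pattern matching falls short of human-style moderation.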

Its main objective is to help remove hate speech, one of the biggest issues facing social media users worldwide today. It uses Facebook’s community standards as a baseline, but this can be customised on a client-by-client basis. Should clients want to, they can even teach the AI themselves by marking comments as ‘Inappropriate’ or ‘OK’.

As time goes on, the AI will learn more about the habits of your online community and only become more effective. By working 24/7 and in real time, it will pick up any comments that violate terms set by the users.
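That feedback loop can be pictured as a classifier that learns which words tend to appear in comments the client marks ‘Inappropriate’ versus ‘OK’. The toy model below is an assumption-laden sketch of the idea, not Smart Moderation’s actual system, which will use far richer language models.

```python
from collections import Counter

class FeedbackModerator:
    """Toy stand-in for a learn-from-labels moderator.

    Clients 'teach' it by labelling example comments; it then flags
    new comments whose words skew toward the Inappropriate examples.
    """

    def __init__(self):
        self.bad = Counter()  # word counts from comments marked Inappropriate
        self.ok = Counter()   # word counts from comments marked OK

    def teach(self, comment: str, label: str) -> None:
        target = self.bad if label == "Inappropriate" else self.ok
        target.update(comment.lower().split())

    def flag(self, comment: str) -> bool:
        # Flag when the comment's words appear more often in
        # Inappropriate training examples than in OK ones.
        score = sum(self.bad[t] - self.ok[t] for t in comment.lower().split())
        return score > 0

mod = FeedbackModerator()
mod.teach("you are an idiot", "Inappropriate")
mod.teach("that talk was awesome", "OK")
print(mod.flag("what an idiot"))  # True
print(mod.flag("awesome talk"))   # False
```

The more labelled examples the client feeds in, the better the word statistics reflect that particular community, which is the intuition behind a system that ‘learns the habits of your online community’ over time.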

Small companies can use the service for free on pages with up to 5,000 followers. Brands with larger audiences can try it for free before signing up to the premium plan.

With Smart Moderation, the team behind it hope to make the internet a safer place. You shouldn’t have to worry about being attacked for posting innocent comments on Facebook, and you don’t deserve abuse for holding a differing opinion.

What you do deserve, though, is to be able to browse your favourite pages, and interact with fellow users in peace.

Nicolas Waddell

Nicolas has spent time in Asia, Canada and Colombia watching people and wondering just what the heck they'd do without their phones; but only because he wonders the same of himself.
