
Murky methods: social media’s initial anti-terrorism efforts

As governmental pressure mounts on social media companies to rid their platforms of terrorist content, their initial AI methods are producing results, though how those methods work remains unknown.

YouTube recently pointed proudly to an 83% success rate in its new approach to removing "extremist" content, a method developed by training an AI engine that flags content for review by a team of human monitors.
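YouTube has not published how its system works, but the broad pattern it describes — a model scores content and anything above a threshold is queued for human moderators — can be sketched roughly. Everything here (the scorer, the threshold, the queue) is an illustrative assumption, not YouTube's actual pipeline:

```python
# Hypothetical sketch of an AI-flagging pipeline with human review.
# The scoring model, threshold, and queue are illustrative assumptions;
# YouTube has not disclosed the details of its system.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewQueue:
    """Holds AI-flagged items awaiting a human moderator's decision."""
    pending: List[str] = field(default_factory=list)

    def add(self, item: str) -> None:
        self.pending.append(item)

def triage(items: List[str], score: Callable[[str], float],
           threshold: float, queue: ReviewQueue) -> List[str]:
    """Route items scoring at or above the threshold to human review;
    pass the rest through untouched."""
    passed = []
    for item in items:
        if score(item) >= threshold:
            queue.add(item)  # a human makes the final call
        else:
            passed.append(item)
    return passed

# Toy scorer standing in for a trained model.
def toy_score(text: str) -> float:
    return 1.0 if "extremist" in text.lower() else 0.0

queue = ReviewQueue()
ok = triage(["cat video", "extremist recruitment clip"],
            toy_score, threshold=0.5, queue=queue)
```

The point of the human-in-the-loop design is that the model never deletes anything on its own; it only narrows the haystack for reviewers.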

The results sound strong, but between the opacity of the AI approach and the company's desire to retain proprietary knowledge, the definition of "extremist" remains internal and undisclosed.

The statistics are timely: only recently the EU threatened tech companies with legal repercussions should their platforms fail to remove hate speech quickly.

YouTube has been part of a broad coalition of tech companies that together constitute the Tech Against Terrorism project. The collective endeavour is aimed at self-regulation of terrorist content: in essence, the kids are being left at home by themselves for the first time, no babysitter, and they are intent on proving they're responsible enough for the arrangement to become the norm.

The alternative that social media platforms are trying to avoid is governments stepping in with heavy-handed or overly generalised legislation, which could unduly hinder innovation in more legitimate directions.

However, self-regulation also means the public doesn't get to learn the rules by which these systems operate.

The companies involved have all released their own success stories recently. Twitter offered up some big numbers: it has removed nearly a million accounts since its efforts began in the summer of 2015, and most recently saw a 20% drop in the number of accounts removed in the first half of 2017 compared with previous six-month periods. Twitter attributed the drop in account removal to the efficacy of its approach, having also seen an 80% drop in accounts reported by governments.

That being said, Twitter also boasted that 75% of those accounts were removed before a single post was made. High efficacy is undoubtedly preferable to low efficacy, but the details of the methods remain unclear, which in turn makes it tricky to ascertain how much of the blocked content or how many of the blocked accounts were legitimate expressions of free speech.

Of the big social media platforms, Facebook has been the most forthcoming with the details behind its methods. The social media giant has developed "text-based signals" from previously flagged content to train an AI engine in image matching and language understanding. Facebook is also in the process of hiring 3,000 new staff to monitor the posts flagged by the AI engine.
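Facebook hasn't detailed what its "text-based signals" look like, and its real models (image matching, deep language understanding) are far more sophisticated than anything shown here. As a rough, assumed illustration of the general idea — learning which words distinguish previously flagged posts from benign ones — a naive word-count classifier might look like this:

```python
# Minimal, hypothetical illustration of learning "text-based signals"
# from previously flagged posts. This is NOT Facebook's method, only a
# sketch of training a scorer from two labelled corpora.

from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(flagged, benign):
    """Return per-word log-likelihood ratios from two labelled corpora."""
    f_counts = Counter(w for t in flagged for w in tokenize(t))
    b_counts = Counter(w for t in benign for w in tokenize(t))
    vocab = set(f_counts) | set(b_counts)
    f_total, b_total = sum(f_counts.values()), sum(b_counts.values())
    # Add-one smoothing so unseen words don't dominate the score.
    return {w: math.log((f_counts[w] + 1) / (f_total + len(vocab)))
              - math.log((b_counts[w] + 1) / (b_total + len(vocab)))
            for w in vocab}

def score(text, weights):
    """Positive scores lean toward the flagged class."""
    return sum(weights.get(w, 0.0) for w in tokenize(text))

# Tiny toy corpora standing in for historical moderation data.
weights = train(
    flagged=["join the fight now", "fight for the cause"],
    benign=["great recipe for dinner", "lovely holiday photos"],
)
```

In a real pipeline the scores would feed the same kind of human review queue described above, rather than triggering removal directly.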

So, glass half-full? Perhaps, but let’s at least agree the glass isn’t yet full. It is at least encouraging that the promise by social media companies to deal with terrorist content on their platforms isn’t empty. And governmental legislation on the matter is, for now, the road not taken, so it’s difficult to fathom whether the inevitable clumsiness of any legislation would be worth the corollary transparency.

The methods of social media platforms’ anti-terrorism efforts may unravel if reports start emerging that legitimate content is being removed; at which point both the glass and the promise might start looking half-empty.

Ben Allen

Ben Allen is a traveller, a millennial and a Brit. He worked in the London startup world for a while but really prefers commenting on it than working in it. He has huge faith in the tech industry and enjoys talking and writing about the social issues inherent in its development.
