Murky methods: social media’s initial anti-terrorism efforts

As governmental pressure mounts on social media companies to rid their platforms of terrorist content, their initial AI methods are producing results, though how they do so remains a mystery.

YouTube recently pointed proudly to an 83% success rate for its new approach to removing “extremist” content: an AI engine trained to flag content, which is then reviewed by a team of human monitors.

The results sound strong, but between the opacity of the AI approach and the company’s desire to retain proprietary knowledge, the definition of “extremist” remains an internal secret.

The statistics are timely: only recently, the EU threatened tech companies with legal repercussions should their platforms fail to remove hate speech quickly.

YouTube has been part of a broad coalition of tech companies which together constitute the Tech Against Terrorism project. The collective endeavour is aimed at a self-regulation approach to terrorist content: in essence, the kids are being left at home by themselves for the first time, with no babysitter, and are intent on proving they’re responsible enough for the arrangement to become the norm.

The alternative that social media platforms are trying to avoid is governments stepping in with heavy-handed or overly generalised legislation which could unduly hinder innovation in more legitimate directions.

However, self-regulation also means the public never finds out the rules by which these systems operate.

The companies involved have all released their own success stories recently. Twitter offered up some big numbers: nearly a million accounts removed since its efforts began in the summer of 2015, and, most recently, a 20% drop in account removals in the first half of 2017 compared with previous six-month periods. Twitter attributed the drop to the efficacy of its approach, having also seen an 80% drop in accounts reported by governments.

That being said, Twitter also boasted that 75% of the accounts were removed before a single post was made. High efficacy is undoubtedly preferable to low efficacy, but the details of the methods remain unclear, which in turn makes it tricky to ascertain how much of the blocked content was a legitimate expression of free speech.

Of the big tech social media platforms, Facebook has been the most forthcoming about the details behind its methods. The social media giant has developed “text-based signals” from previously flagged content to train an AI engine in image matching and language understanding. Facebook is also in the process of hiring 3,000 new staff to monitor the posts flagged by the AI engine.
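The pipeline Facebook describes (deriving text signals from previously flagged content, then routing suspect posts to human reviewers) can be sketched in miniature. Everything below is hypothetical: the function names, the word-frequency heuristic, and the 0.2 threshold are crude stand-ins for whatever proprietary models the platforms actually run, which the companies have not disclosed.

```python
from collections import Counter

def train_signals(flagged_posts, benign_posts):
    """Derive crude 'text-based signals': words markedly more common
    in previously flagged posts than in benign ones."""
    flagged = Counter(w for p in flagged_posts for w in p.lower().split())
    benign = Counter(w for p in benign_posts for w in p.lower().split())
    # Hypothetical rule: a word is a signal if it appears in flagged
    # content at least twice as often as in benign content.
    return {w for w, n in flagged.items() if n >= 2 * benign.get(w, 0)}

def flag_for_review(post, signals, threshold=0.2):
    """Route a post to the human review queue if enough of its
    words match known signals. Humans make the final call."""
    words = post.lower().split()
    if not words:
        return False
    score = sum(w in signals for w in words) / len(words)
    return score >= threshold
```

Even this toy version shows why human monitors matter: a naive frequency heuristic happily flags innocuous words that merely co-occur with bad content, so the AI stage is a filter, not a judge.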

So, glass half-full? Perhaps, but let’s agree the glass isn’t yet full, either. It is at least encouraging that the promise by social media companies to deal with terrorist content on their platforms isn’t empty. And governmental legislation on the matter is, for now, the road not taken, so it’s difficult to judge whether the inevitable clumsiness of any legislation would be worth the accompanying transparency.

The anti-terrorism methods of social media platforms may unravel if reports start emerging that legitimate content is being removed, at which point both the glass and the promise might start looking half-empty.

Ben Allen

Ben Allen is a traveller, a millennial and a Brit. He worked in the London startup world for a while but really prefers commenting on it to working in it. He has huge faith in the tech industry and enjoys talking and writing about the social issues inherent in its development.
