As governmental pressure mounts on social media companies to rid their platforms of terrorist content, early AI methods are producing results, though how they work remains opaque.
YouTube recently pointed proudly to an 83% success rate in their new approach to removing “extremist” content, a method developed by training an AI engine which flags content to be reviewed by a team of human monitors.
The results sound strong, but between the opacity of the AI approach and YouTube's desire to protect proprietary knowledge, the definition of "extremist" remains internal and undisclosed.
The statistics are timely: only recently the EU threatened tech companies with legal repercussions should their platforms fail to remove hate speech quickly.
YouTube has been part of a broad coalition of tech companies which together constitute the Tech Against Terrorism project. The collective endeavour aims at self-regulation of terrorist content: in essence, the kids are being left home alone for the first time, no babysitter, and are intent on proving they're responsible enough for the arrangement to become the norm.
The alternative that social media platforms are trying to avoid is governments stepping in with heavy-handed or overly-generalised legislation which could unduly hinder innovation in more legitimate directions.
However, self-regulation also means the public doesn't get to see the rules by which these approaches operate.
The companies involved have all released their own success stories recently. Twitter offered up some big numbers, having removed nearly a million accounts since its efforts began in the summer of 2015 and having most recently seen a 20% drop in the number of accounts removed in the first half of 2017 compared to previous six-month periods. Twitter attributed the drop in account removal to the efficacy of their approach, having also seen an 80% drop in accounts reported by governments.
That being said, Twitter also boasted that 75% of the accounts were removed before a single post was made. High efficacy is undoubtedly preferable to low efficacy, but the details of the methods remain unclear, which in turn makes it tricky to ascertain how many of the blocked accounts or posts were legitimate expressions of free speech.
Of the big tech social media platforms Facebook has been the most forthcoming with the details behind its methods. The social media giant has developed “text-based signals” from previously flagged content to train an AI engine in image matching and language understanding. Facebook is also in the process of hiring 3,000 new staff to monitor the posts flagged by the AI engine.
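To make the flag-then-review pattern concrete, here is a minimal, purely illustrative sketch of such a pipeline. The signal terms, weights, and threshold below are all hypothetical inventions for the example; the real systems at Facebook, YouTube, and Twitter are proprietary and far more sophisticated (trained models rather than keyword weights).

```python
# Illustrative sketch only: a trivial "text-based signals" scorer that
# routes high-scoring posts to a human review queue. All terms, weights,
# and the threshold are hypothetical, not drawn from any real system.

FLAG_THRESHOLD = 0.5

# Hypothetical signals: terms weighted by how often reviewers
# previously flagged content containing them.
FLAGGED_SIGNALS = {"attack": 0.4, "recruit": 0.3, "bomb": 0.6}

def score_post(text: str) -> float:
    """Sum the signal weights of terms appearing in the post, capped at 1.0."""
    words = text.lower().split()
    return min(1.0, sum(FLAGGED_SIGNALS.get(w, 0.0) for w in words))

def triage(posts: list[str]) -> list[str]:
    """Return the posts whose score crosses the threshold for human review."""
    return [p for p in posts if score_post(p) >= FLAG_THRESHOLD]

queue = triage(["how to recruit for an attack", "cute cat video"])
# Only the first post crosses the threshold and reaches human reviewers.
```

Note that even in this toy version the core transparency problem is visible: whoever sets the signal list and the threshold effectively defines "extremist," and nothing in the pipeline exposes that definition to the public.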
So, glass half-full? Perhaps, but let's at least agree the glass isn't yet full. It is encouraging that the promise by social media companies to deal with terrorist content on their platforms isn't empty. And governmental legislation on the matter is, for now, the road not taken, so it's difficult to fathom whether the inevitable clumsiness of any legislation would be worth the attendant transparency.
The methods of social media platforms’ anti-terrorism efforts may unravel if reports start emerging that legitimate content is being removed; at which point both the glass and the promise might start looking half-empty.