Social media service providers are currently facing an enormous backlash for not doing enough to prevent and remove terrorist content and propaganda. Despite having repeatedly promised to step up efforts, it’s clear that far more needs to be done to bring the problem under control. However, Facebook recently shared details of a new artificial intelligence system which, along with detecting and intercepting terrorist content, could also prove effective against other criminal activity.
Drug dealing, gang activity, white nationalism and so on – the new AI being developed by Facebook is said to target far more than Muslim extremism.
The system, which is currently being tested behind the scenes at Facebook, uses natural language identification techniques to pick up on potentially harmful content and conversations. A database of words and phrases that have appeared in already-suspended accounts has been created, and it will be continually expanded and refined as the system evolves.
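At its simplest, the database step described above amounts to matching new posts against a growing list of phrases drawn from content that was already removed. The sketch below illustrates that idea only; the phrase entries, function names, and matching rule are illustrative assumptions, not Facebook's actual data or system.

```python
# Illustrative sketch of a phrase-database check: match posts against phrases
# seen in previously suspended accounts, and expand the list over time.
# All entries and names here are hypothetical examples, not real data.

FLAGGED_PHRASES = {
    "join the caliphate",      # illustrative entries only
    "martyrdom operation",
}

def flag_post(text: str, phrases: set = FLAGGED_PHRASES) -> list:
    """Return any flagged phrases found in a post (case-insensitive)."""
    lowered = text.lower()
    return sorted(p for p in phrases if p in lowered)

def refine_database(new_phrases: set, phrases: set) -> None:
    """Expand the database with phrases drawn from newly removed content."""
    phrases.update(p.lower() for p in new_phrases)
```

A real system would of course need far more than literal substring matching (misspellings, coded language, context), which is where the machine-learning component comes in.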
“We’re currently experimenting with analyzing text that we’ve already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda. That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts,” wrote Facebook’s director of global policy management Monika Bickert and counterterrorism policy manager Brian Fishman in a blog post.
Facebook has outlined a number of potentially effective measures, including photo and video recognition technology and further efforts to prevent and remove fake accounts. The company has also teamed up with a group of counterterrorism experts to gain more detailed insight into appropriate and potentially effective action.
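The photo and video recognition measure typically works by fingerprinting known terrorist material so that re-uploads can be caught automatically. The sketch below shows only the simplest exact-match variant using a cryptographic hash; production systems generally rely on perceptual hashes that survive re-encoding and cropping, which this example does not attempt. All names and data are hypothetical.

```python
# Simplified exact-match media fingerprinting: previously removed files are
# hashed, and any new upload with the same hash is blocked. Real systems use
# perceptual hashing so that edited copies still match; this sketch does not.
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

# Hypothetical set of fingerprints of already-removed videos/images.
KNOWN_REMOVED = {fingerprint(b"bytes of a previously removed video")}

def is_known_match(media_bytes: bytes) -> bool:
    """True if an upload is byte-identical to previously removed material."""
    return fingerprint(media_bytes) in KNOWN_REMOVED
```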
Politicians around the world are calling on social media service providers to intensify efforts to prevent and remove the kind of terrorist propaganda that has become epidemic. A report on hate crimes and extremism released in April by the British House of Commons Home Affairs Committee argued that such companies should take responsibility for the security of the services they provide.
“Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way,” the report stated.
“We recommend that the government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe.”