Under pressure, Facebook has revealed new measures to try to stop the spread of extremist propaganda and inspiration online. Is this the future of counter-terrorism?

  • Facebook announces new counter-terrorist AI that will detect “terrorist propaganda” via data, algorithms and machine learning.
  • Governments, the UK in particular, have been pushing for more action in recent months, with threats of legislation or regulation prompting a response from Facebook.
  • “We want Facebook to be a hostile place for terrorists.”

Can AI and big data lead the fight against online extremism? Facebook thinks so.

In a series of blog posts published on Thursday, tech giant Facebook defended the role that social media has to play in reporting and responding to terrorist attacks, while announcing an ambitious new artificial intelligence (AI) program that uses image matching and language understanding to better identify and remove “terrorist propaganda” posted by online extremists.


Monika Bickert, Facebook’s director of global policy management, based in Silicon Valley, California, wrote:

“We know we can do better at using technology – and specifically artificial intelligence – to stop the spread of terrorist content on Facebook.”

“Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook. We want Facebook to be a hostile place for terrorists.”


“Our technology is going to continue to evolve just as we see the terror threat continue to evolve online,” Ms Bickert told the BBC. “Our solutions have to be very dynamic.”

The UK interior ministry responded positively to Facebook’s efforts but added that technology companies needed to go further.

“This includes the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place,” a ministry official stated.

Terrorist Atrocities Captured On Social Media

During crises, it is easy to report incidents on Facebook and Twitter before traditional media get there, so it is easy to see why Facebook, Twitter and Instagram have a huge part to play in modern news coverage. However, this massive reach is also used for the wrong reasons by groups like ISIS and Al-Shabaab.

Facebook Live killer Steve Stephens

  • Salafist jihadist fundamentalist group al-Shabaab is known to have live-tweeted throughout the 2013 Nairobi Westgate mall attack, in which at least 67 people died and more than 175 were wounded.
  • The Paris attacks of November 2015 were covered by hundreds of people on social media; shocking video of the Bataclan concert being interrupted by gunfire was widely shared on Instagram. Andrew Smith and Benjamin Cazenoves, hiding on the first floor of the Bataclan, used Facebook and Twitter to post updates and urge the police to raid the building, in posts shared over 60,000 times.
  • On 13 June 2016, following the stabbing of policeman Jean-Baptiste Salvaing by Larossi Abballa, the attacker turned to social media to broadcast and justify his actions, dedicating them to his ‘Emir’ before a three-hour stand-off with police negotiators.
  • Footage of the 22 July 2016 Munich shootings was filmed by a man who had a view of the attack from his apartment and live-streamed it to his Facebook page. In the footage, the gunman can be seen on the rooftop of the shopping centre below, outside a McDonald’s.
  • In April 2017, Steve Stephens filmed himself killing 74-year-old Robert Godwin Sr, posting the video to Facebook before going on the run.

Is Big Data the Future of Counter-Terrorism?

Among the AI techniques described in Facebook’s announcement is image matching, which compares photos and videos that users upload to Facebook against a database of “known” images and videos of terrorism and terrorist groups.

When someone attempts to upload a photo or video, the system checks it against known extremist content and, if a match is found, blocks the upload before it ever appears.
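Facebook has not published implementation details, but the general idea of hash-based matching can be sketched in a few lines of Python. The database and helper names below are hypothetical, and the sketch uses exact SHA-256 fingerprints for simplicity; production systems use perceptual hashes designed to survive re-encoding, resizing and cropping.

```python
import hashlib

# Hypothetical database of fingerprints of known extremist images.
# (Seeded here with the SHA-256 of the bytes b"test" purely for demonstration.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the uploaded file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block the upload if its fingerprint matches known content."""
    return fingerprint(upload) in KNOWN_HASHES

print(should_block(b"test"))         # True  – matches the seeded hash
print(should_block(b"holiday pic"))  # False – unknown content passes through
```

The design choice to match at upload time, rather than after publication, is what lets known content be "prevented from being uploaded in the first place", as the UK ministry urged.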

Additionally, Facebook is analysing databases of text previously removed for praising or supporting groups such as ISIS, looking for text-based signals that new content may be terrorist propaganda. This analysis forms a large part of the company’s data-driven counter-terrorism drive.
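Facebook’s actual models are proprietary, but the idea of mining removed text for signals can be illustrated with a toy word-overlap score. All the post data below is invented for illustration, and a real system would train a proper classifier on far larger corpora.

```python
import re
from collections import Counter

# Hypothetical corpora: posts previously removed by moderators vs benign posts.
removed_posts = ["join the fight brothers", "support the caliphate fight"]
benign_posts = ["join us for dinner tonight", "support your local team"]

def tokens(text: str) -> list[str]:
    """Lowercase a post and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

removed_counts = Counter(t for p in removed_posts for t in tokens(p))
benign_counts = Counter(t for p in benign_posts for t in tokens(p))

def propaganda_score(text: str) -> float:
    """Fraction of tokens seen more often in removed text than in benign text.
    A toy signal only; real systems learn weighted features, not raw counts."""
    toks = tokens(text)
    if not toks:
        return 0.0
    hits = sum(1 for t in toks if removed_counts[t] > benign_counts[t])
    return hits / len(toks)

print(propaganda_score("the fight"))      # 1.0 – every token skews 'removed'
print(propaganda_score("dinner tonight")) # 0.0 – every token skews 'benign'
```

Because the score is derived from moderator decisions, each new removal enriches the corpus, which is one way such a system can improve over time.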

In both cases, machine learning should mean that this process will improve over time.

However, the company admitted that “AI can’t catch everything” and that technology is “not yet as good as people when it comes to understanding” what constitutes inappropriate or terrorist content.

Facebook said that, for the near future, it will continue to rely on “human expertise” to review reports and determine their context.