What are social media companies really doing to combat terrorism online?


As technology has advanced, terrorist groups have come not only to use but to rely on online resources for recruitment and the spread of propaganda.


Terrorist operatives mostly target people on social media, which is why the leading platforms sent representatives to a hearing in Washington, D.C. earlier this year before the U.S. Senate Committee on Commerce, Science, and Transportation. There they discussed their current efforts to combat terrorism online, particularly on social media platforms.

It was the first time big companies like Twitter, YouTube, and Facebook spoke openly about online terrorism, and here is what they had to say:

Facebook says it can remove 99% of terrorist content

Facebook’s AI systems have been central to its fight against online terrorism. Thanks to them, Facebook can now recognise and remove 99% of content related to Al Qaeda and ISIS, said Facebook’s head of Product Policy and Counterterrorism, Monika Bickert. The software scans video, images and text posts with close to 100% accuracy, and further improvements are expected.

Users who post terrorist-related content are removed from Facebook’s platform and prevented from creating new accounts, and accounts connected to them are examined by a team of experts. Facebook has added 3,000 people to its review team, which should expand to 20,000 by the end of this year, a clear sign that the company is taking online terrorism seriously.
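The setup Facebook describes, automated classification backed by a human review team for borderline cases, can be pictured with a short sketch. This is a simplified illustration, not Facebook’s actual system: the thresholds, function names and scores are assumptions.

```python
# A minimal sketch of thresholded moderation with human-review escalation.
# Not Facebook's pipeline: thresholds and the scoring interface are assumed
# purely for illustration.

REMOVE_THRESHOLD = 0.95   # assumed cut-off for fully automated removal
REVIEW_THRESHOLD = 0.60   # assumed cut-off for escalation to human reviewers

def moderate(post_id: str, score: float) -> str:
    """Decide what happens to a post given a classifier score in [0, 1]."""
    if score >= REMOVE_THRESHOLD:
        return "remove"        # high-confidence match: automated removal
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # borderline: queue for the review team
    return "allow"             # low score: leave the post up

if __name__ == "__main__":
    for post_id, score in [("post-1", 0.99), ("post-2", 0.72), ("post-3", 0.10)]:
        print(post_id, "->", moderate(post_id, score))
```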

Twitter says it is doing more of the same

Twitter has been suspending terrorist-related accounts for some time, and the total has now passed one million. The suspensions began in mid-2015, with 574,070 accounts banned last year alone. The increase is largely due to improvements in the algorithm that identifies and removes terrorist-related content; this technology supplements reports from Twitter users and makes the job easier for the people responsible for removing harmful content from the platform. Twitter is also taking a different approach to political campaigns this year: some political ad revenue will be donated to charity, and users will be better protected from false information.
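One way to picture how an automated detector can supplement user reports is to blend the two signals into a single priority score for the enforcement team. The sketch below is an illustration under assumed weights and data, not Twitter’s actual method.

```python
# A minimal sketch: rank accounts for review by blending a classifier score
# with the number of user reports. Weights, the report cap and the sample
# accounts are illustrative assumptions, not Twitter's system.

from typing import List, Tuple

def priority(model_score: float, user_reports: int) -> float:
    """Blend a model score in [0, 1] with a capped user-report signal."""
    report_signal = min(user_reports / 10.0, 1.0)   # saturate at 10 reports
    return 0.7 * model_score + 0.3 * report_signal  # assumed weighting

def rank_accounts(accounts: List[Tuple[str, float, int]]) -> List[Tuple[str, float]]:
    """Return (account, priority) pairs sorted with the highest priority first."""
    ranked = [(name, priority(score, reports)) for name, score, reports in accounts]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    sample = [("@account_a", 0.92, 3), ("@account_b", 0.40, 25), ("@account_c", 0.15, 0)]
    for name, p in rank_accounts(sample):
        print(f"{name}: priority {p:.2f}")
```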

YouTube is relying on AI

Machine learning has been playing a key part in removing terrorist content from the internet, and YouTube is no exception. Its AI now removes 98 percent of “violent extremism” videos, up from 40 percent a year ago, and 70 percent of those videos are taken down within eight hours. There is still room for improvement, but removal times are expected to drop to around two hours soon.

Google is taking the matter very seriously. The AI will not work alone: 10,000 flaggers will be added to the review team this year as part of the Trusted Flagger program, which will also involve counter-terrorism groups. Videos that fall into a “grey area” are being restricted as well; they cannot earn ad revenue, and their comments are disabled to prevent unwanted discussions.
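The distinction YouTube draws between outright violations and “grey area” videos can also be written down as a simple policy mapping. The sketch below is only illustrative: the labels and field names are assumptions, not YouTube’s API or internal data model.

```python
# A minimal sketch: map a (hypothetical) classification label to the kind of
# restrictions described above. Labels and fields are illustrative only.

from dataclasses import dataclass

@dataclass
class VideoState:
    removed: bool = False
    monetized: bool = True
    comments_enabled: bool = True

def apply_policy(label: str) -> VideoState:
    """Translate a classification label into a set of restrictions."""
    state = VideoState()
    if label == "violent_extremism":
        state.removed = True            # clear violations are taken down
    elif label == "grey_area":
        state.monetized = False         # no ad revenue for grey-area videos
        state.comments_enabled = False  # comments disabled
    return state                        # anything else is left untouched

if __name__ == "__main__":
    for label in ("violent_extremism", "grey_area", "none"):
        print(label, apply_policy(label))
```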


What should we expect in the future?

The amount of removed content and the number of suspended accounts related to online terrorism have both grown over the past couple of years, which shows that the big social media platforms are making an effort. But is that enough? Some people think that removing anonymous accounts from the internet would solve a large part of the problem, and they may well be right. If every platform required an ID check, there would be far less room for hate speech and terrorist content, leaving a cleaner, regulated space in which online terrorism would struggle to survive. But this would also raise serious questions about online privacy.

Obviously, it is hard to find a balance, so we should begin leaning more heavily on AI, which is maturing and beginning to show better results.


