Google Announces New Policies to Fight Spread of Extremist Content
NEC member Google recently announced new policies to combat the spread of terrorist-related content on YouTube, the company’s online video service.
Google outlined four steps it will take as part of the technology company’s ongoing efforts to fight the spread of extremist content on its online platforms. First, Google will invest increased resources into developing artificial intelligence (AI) software that can be trained to identify and remove terrorist-related content. Second, the company will add 50 expert organizations to YouTube’s Trusted Flagger program, which gives Trusted Flaggers a tool to report multiple videos at a time for YouTube employees to review for removal. Third, Google will take a tougher stance on videos that do not violate YouTube’s rules per se but contain “concerning content,” aiming to reduce the number of users engaging with these videos and to make them harder to find. Finally, YouTube will collaborate with Jigsaw, a subsidiary of Google’s parent company Alphabet, to implement “The Redirect Method,” which uses ad targeting to redirect potential ISIS recruits to video content that “debunks terrorist recruiting messages.”
“Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free. We must not let them,” said Kent Walker, general counsel at Google. “Together, we can build lasting solutions that address the threats to our security and our freedoms. It is a sweeping and complex challenge. We are committed to playing our part.”
The New England Council thanks Google for continuing to aggressively fight the spread of terrorist-related content online. Read more in The New York Times and USA Today.