After increasing pressure from the government, Facebook is implementing new tools to combat terrorism on the social media site.
Natalie Andrews and Deepa Seetharaman of The Wall Street Journal had the day’s news:
Hours after the December shootings in San Bernardino, Calif., Mark Wallace asked his employees at the nonprofit Counter Extremism Project to comb social media for profiles of the alleged attackers.
They failed. A team at Facebook Inc. had already removed a profile for Tashfeen Malik, after seeing her name in news reports.
The incident highlights how Facebook, under pressure from government officials, is more aggressively policing material it views as supporting terrorism. The world’s largest social network is quicker to remove users who back terror groups and investigates posts by their friends. It has assembled a team focused on terrorist content and is helping promote “counter speech,” or posts that aim to discredit militant groups like Islamic State.
The moves come as attacks on Westerners proliferate and U.S. lawmakers and the Obama administration intensify pressure on Facebook and other tech companies to curb extremist propaganda online. Top U.S. officials flew to Silicon Valley on Jan. 8 to press their case with executives including Facebook Chief Operating Officer Sheryl Sandberg. Last week, Twitter Inc. said it suspended 125,000 accounts associated with Islamic State.
Tech companies “have a social responsibility to not just see themselves as a place where people can freely express themselves and debate issues,” Lt. Gen. Michael Flynn, who ran the U.S. Defense Intelligence Agency from 2012 to 2014, said in an interview.
Facebook’s tougher approach puts the company in a tight spot, forcing it to navigate between public safety and the free-speech and privacy rights of its nearly 1.6 billion users.
After the Jan. 8 meeting, the Electronic Frontier Foundation, a nonprofit privacy organization, urged Facebook and other tech companies not to “become agents of the government.”
Facebook said it believes it has an obligation to keep the social network safe.
Amar Toor of The Verge explained how Facebook’s program works:
According to the Journal, a team led by Monika Bickert, Facebook’s head of global policy management, met in December to plan ways to encourage counter speech through competitions and ensure that it reaches target audiences. The company has provided ad credits worth up to $1,000 to those who post counter-extremist messages, and together with the State Department, launched competitions in 45 college classes around the world. Those who participated in the competition were provided a budget of $2,000 and $200 in ad credits.
Last year, Facebook allowed former members of extremist groups to create fake accounts and engage with current members. The experiment delivered encouraging results, a person involved with the test tells the Journal, though it’s unclear whether Facebook’s broader counter speech efforts will be effective.
Combatting online extremism has been a priority for Western governments, as jihadists increasingly focus their recruiting efforts on social media. Last month, executives from tech companies like Apple, Facebook, and Google met with President Obama to discuss strategies including counter speech initiatives and efforts to identify potential terrorists online.
Alice MacGregor of The Stack detailed the pressure social media companies have been under to combat terrorism on their sites:
At the beginning of this year, a number of top U.S. tech firms, including Apple, Facebook and Google, met with the Obama administration to discuss ways of combatting terrorist organisations online – and to thwart ISIS’ recruitment and propaganda drives across social media.
Following the gruesome San Bernardino shootings in December last year, Facebook was accused of being slack in its approach to terrorist material and the amount of hate speech spread over its network. The suspects in the killings were reportedly promoting ISIS through their accounts, and helping to recruit members.
Twitter has also recently made a public stand against extremism and terrorist-related accounts on its platform. This month the microblogging site revealed that it had shut down over 125,000 accounts linked to terrorists since 2015. In a statement released last week, Twitter said: “We condemn the use of Twitter to promote violent terrorism […] This type of behaviour, or any violent threats, is not permitted on our service.”