Kunova writes, “After starting to use the tool, rather than accepting every decision the machine has made, the human moderators checked each decision manually. It took a couple of months to get the moderating decisions right: the machine now catches most sexist and racist comments, despite the sophisticated language the FT readers use to get around it.
“‘It is not perfect and it is still learning,’ Warwick-Ching says after six months.
“However, its impact has been significant. Previously, moderators spent a large portion of their time filtering out negativity. Now, AI takes care of a lot of the heavy lifting, freeing them up to focus on community-building. Readers often share valuable insights, personal stories, and even story leads within the comments. Moderators can now dedicate their time to finding these gems and bringing them to the attention of journalists, enriching FT’s content.
“The benefits are not just about efficiency. Moderating online comments takes an emotional toll. AI now absorbs most of that negativity, protecting humans from the worst abuse.”
Read more here.