Kunova writes, “After starting to use the tool, rather than accepting every decision the machine has made, the human moderators checked each decision manually. It took a couple of months to get the moderating decisions right: the machine now catches most sexist and racist comments, despite the sophisticated language the FT readers use to get around it.
“‘It is not perfect and it is still learning,’ Warwick-Ching says after six months.
“However, its impact has been significant. Previously, moderators spent a large portion of their time filtering out negativity. Now, AI takes care of a lot of the heavy lifting, freeing them up to focus on community-building. Readers often share valuable insights, personal stories, and even story leads within the comments. Moderators can now dedicate their time to finding these gems and bringing them to the attention of journalists, enriching FT’s content.
“The benefits are not just about efficiency. Moderating online comments takes an emotional toll. AI now absorbs most of that negativity, protecting humans from the worst abuse.”
Read more here.