What Reuters is telling its journalists about using artificial intelligence

May 14, 2023

Posted by Chris Roush

Reuters editor in chief Alessandra Galloni and ethics editor Alix Freedman sent out the following to the staff about using artificial intelligence:

Colleagues,

As you know, artificial intelligence (AI) is transforming the world of work, including in the field of journalism, presenting both opportunities and challenges. We want to ensure that Reuters journalists will use AI technology effectively, while maintaining our reputation as the world’s most trusted news organization.

This memo reflects our preliminary thinking about the role of AI in the newsroom. We expect to update this guidance regularly, understanding that the technology is changing very quickly. As we gain more experience, we will also issue formal guidelines.

Our four pillars

First, Reuters regards AI technology, including generative text-based models like ChatGPT, as a breakthrough that offers the potential to enhance our journalism and empower our journalists. From its founding, Reuters has embraced new technologies to deliver information to the world, from pigeons to the telegraph to the Internet. More recently, we have utilized automated systems to find and extract vital economic and corporate data at the speed that our customers demand. The idea of autonomous news content may be new for some media companies, but it is a longstanding and essential practice at Reuters News.

Second, Reuters reporters and editors will be fully involved in – and responsible for – greenlighting any content we may produce that relies on AI. A Reuters story is a Reuters story, regardless of who produces it or how it’s generated, and our editorial ethics and standards apply. If your name is on a story, you are responsible for ensuring that story meets those standards; if a story is published in an entirely autonomous fashion, that will be because Reuters journalists have determined that the underlying technology can deliver the quality and standards we require.

Third, Reuters will make robust disclosures to our global audience about our use of these tools. Transparency is an essential part of our ethos. We will give our readers and customers as much information as possible about the origin of a news story, from the specificity of our sourcing to the methods used to create or publish it. This does not mean that we will disclose every step in the editorial process. But where use of a particular AI tool is material to the result, we will be transparent.

Finally, exploring the possibilities afforded by the new generation of tools is not optional – though we are still examining how to make most appropriate use of them. The Trust Principles require us to “spare no effort to expand, develop and adapt” the news. They also require us to deliver “reliable” news. Given the proliferation of AI-generated content, we must remain vigilant that our sources of content are real. Our mantra: Be skeptical and verify.

In sum, Reuters will harness AI technology to support our journalism when we are confident that the results consistently meet our standards for quality and accuracy – and with rigorous oversight by newsroom editors.

As we uphold the reputation of our unique brand, we hope this memo provides a useful framework for thinking about the key issues surrounding AI. And we, of course, welcome your questions and ideas. If you send them directly to Brian Moss (Brian.Moss@thomsonreuters.com), in our Ethics and Standards Office, he will make sure they reach the right person.

All best,

Alessandra & Alix

***

Q & A

We are providing this Q&A about AI in response to questions raised by colleagues in the newsroom. We want to stress that we view these answers as a snapshot of our current thinking. As we gain experience, we plan to issue a more formal set of guidelines.

Q.  How have we been using technology and automation in the newsroom until now? 

Reuters has for decades used technology to deliver fast, accurate journalism to our customers and the world. We developed the first entirely automated news alerts in the 1990s and now publish over 1,000 pieces of economic data a month without human intervention. We’ve been auto-alerting company results for about 15 years, and last year’s acquisition of PLX AI pushed us even further ahead, using a combination of AI and more traditional forms of natural language processing.

With the emergence of more robust AI technology, we are finding more ways to use it throughout the newsroom. Our local language teams now routinely use AI-assisted machine translation to produce first-pass translations within LEON, and we will soon be piloting entirely automated machine-translated stories for LSEG. Our video teams use voice-to-text transcription AI to produce scripts and subtitles for raw and packaged video.
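
The memo does not describe Reuters’ internal transcription tooling. As a rough illustration of what that kind of subtitle workflow looks like, the sketch below uses the open-source Whisper library; the model size and file names are assumptions, not a description of the newsroom’s actual system.

```python
# Illustrative only: the memo does not name the transcription system Reuters
# uses. This sketch shows the general shape of such a pipeline using the
# open-source Whisper library (pip install openai-whisper; requires ffmpeg).
import whisper

def video_to_srt(video_path: str, srt_path: str) -> None:
    """Transcribe a video's audio track and write SRT subtitles."""
    model = whisper.load_model("base")   # model size is an assumption
    result = model.transcribe(video_path)

    def ts(seconds: float) -> str:
        # SRT timestamps are formatted HH:MM:SS,mmm
        ms = int(seconds * 1000)
        h, ms = divmod(ms, 3_600_000)
        m, ms = divmod(ms, 60_000)
        s, ms = divmod(ms, 1_000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    with open(srt_path, "w", encoding="utf-8") as f:
        for i, seg in enumerate(result["segments"], start=1):
            f.write(f"{i}\n{ts(seg['start'])} --> {ts(seg['end'])}\n"
                    f"{seg['text'].strip()}\n\n")

video_to_srt("raw_package.mp4", "raw_package.srt")  # hypothetical file names
```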

Q.  What is so different about the next generation of AI tools that people are discussing now?

Until recently, most of the AI capabilities we used had been tried and tested over a decade or more, with relatively well-understood outcomes. We have most often used these tools to convert the same set of content from one format to another (English to Chinese, or audio to text). Two things are different with the new generation of generative AI tools like ChatGPT or Stable Diffusion: With very basic written instructions, or prompts, they can create credible, human-like original content (from text to images to music) almost instantly; and the tools are immediately accessible to a mass global audience via a simple, intuitive chat interface.
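
To make “very basic written instructions” concrete: the memo does not reference any particular API, but a minimal sketch of prompt-driven generation using OpenAI’s public Python SDK might look like the following. The model name and prompt are illustrative, and this is not a sanctioned newsroom workflow.

```python
# A minimal sketch of the prompt-driven interface the memo describes, using
# OpenAI's Python SDK (pip install openai). Model name and prompt are
# assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Suggest three headline ideas for a story about "
                    "central banks piloting digital currencies."},
    ],
)
print(response.choices[0].message.content)
```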

AI cannot do original reporting but is increasingly good at learning from what has already been produced to create new content. This means that, in theory, AI could be used to help create summaries of past stories for an Explainer, or to create a Timeline or Factbox. AI prompts could also be used to help edit stories or extract facts to be checked. In all relevant content, we would add a disclaimer making clear the role AI played in the process.

All this output would have to go through a rigorous editing process before going to clients. We plan to set up a system in which journalists who have used AI in their newsgathering or production would log it in a Teams channel. That way, we can encourage creativity while also keeping a close eye on production. At the appropriate time we will designate an editor who will monitor this.
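
The memo does not specify how that Teams logging would work. One plausible implementation, sketched below, posts a usage record to a channel through a Teams incoming webhook; the webhook URL, message format, and helper function are all hypothetical.

```python
# Hypothetical sketch: one common way to post programmatically to a Teams
# channel is an "incoming webhook," which accepts a JSON POST. Nothing here
# describes an actual Reuters system.
import requests
from datetime import datetime, timezone

WEBHOOK_URL = "https://example.webhook.office.com/..."  # placeholder URL

def log_ai_usage(slug: str, tool: str, purpose: str) -> None:
    """Post a one-line AI-usage record to the logging channel."""
    text = (f"{datetime.now(timezone.utc).isoformat()} | story: {slug} | "
            f"tool: {tool} | purpose: {purpose}")
    resp = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()

log_ai_usage("ECB-DIGITAL-EURO", "ChatGPT", "brainstormed headline ideas")
```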

Q.  Are Reuters journalists able to use generative AI to help our reporting?

Reuters journalists can use AI tools such as ChatGPT to brainstorm headline and story ideas. However, we must remain mindful of their limitations and apply the same standards and safeguards we would use in any other circumstances.

Some rules are basic: Just as we would never upload the text of an unpublished news story to Twitter or Facebook, we should not share an unpublished story with any open AI platform or service (such as ChatGPT). Our tech teams are working on safeguards for the tools we use that would protect Reuters content from being saved in open services like OpenAI’s ChatGPT.

In addition, just as we would never trust a set of unverified, unattributed facts sent to us by email from an unknown source, we should never trust unverified, unattributed facts given to us by an AI system (such as ChatGPT). When using AI to brainstorm headline ideas, make sure that the headline we publish is unique.
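
The memo does not say how headline uniqueness would be checked. As a purely illustrative safeguard, a crude check could compare a candidate headline against recently published ones using the standard library’s fuzzy matcher; the stand-in corpus and the 0.9 similarity threshold are assumptions.

```python
# Illustrative only: a crude uniqueness check for an AI-suggested headline,
# comparing it against recently published headlines with the standard
# library's fuzzy matcher. Corpus and threshold are assumptions.
from difflib import SequenceMatcher

def is_unique(candidate: str, published: list[str],
              threshold: float = 0.9) -> bool:
    """Return False if the candidate is near-identical to a published headline."""
    cand = candidate.lower().strip()
    return all(
        SequenceMatcher(None, cand, old.lower().strip()).ratio() < threshold
        for old in published
    )

recent = ["Fed holds rates steady as inflation cools"]  # stand-in corpus
print(is_unique("Fed Holds Rates Steady as Inflation Cools", recent))  # False
```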

Q. How can Reuters journalists safely experiment with AI?

To experiment with content generation, we recommend that Reuters Editorial use OpenArena (https://urldefense.com/v3/__https://trreuters.us.newsweaver.com/reutersmessages/4hywlaicvdp1xauzfc85ue/external?email=true&i=2&a=6&p=19022765&t=506095__;!!GFN0sa3rsbfR8OLyAw!fKQoPl4GyGsw-mZySbr-tD93qu5UH0NUf4pc-noWU0vtUbgEyGve6bC2LvfKsXau0XyRx8GzCqxIEhvt4iq0a3fNWZIiIgISEzsGtKcASt6$), a platform built by TR Labs that provides access to OpenAI. Through OpenArena, we can ensure that what our staff puts into the interface does not get shared back with Microsoft and OpenAI, and TR also benefits from understanding the use cases people are trying. Use this Reuters News-specific link (https://urldefense.com/v3/__https://trreuters.us.newsweaver.com/reutersmessages/rh5arcqhmfr1xauzfc85ue/external?email=true&i=2&a=6&p=19022765&t=506095__;!!GFN0sa3rsbfR8OLyAw!fKQoPl4GyGsw-mZySbr-tD93qu5UH0NUf4pc-noWU0vtUbgEyGve6bC2LvfKsXau0XyRx8GzCqxIEhvt4iq0a3fNWZIiIgISEzsGhGPxb5_$), but first please register for access via this Teams form (https://urldefense.com/v3/__https://trreuters.us.newsweaver.com/reutersmessages/gv92q9iu5lq1xauzfc85ue/external?email=true&a=6&p=19022765&t=506095__;!!GFN0sa3rsbfR8OLyAw!fKQoPl4GyGsw-mZySbr-tD93qu5UH0NUf4pc-noWU0vtUbgEyGve6bC2LvfKsXau0XyRx8GzCqxIEhvt4iq0a3fNWZIiIgISEzsGtcqj3-m$). Please get your manager’s approval. To emphasize, OpenArena is a space to test the possibilities of AI, not to publish, though in some limited cases (data analysis, for instance) we may allow its use for journalism that will be published.

Q.  Why don’t we use generative AI now?

One key limitation of the latest technology is that it does not always generate reliable content. At present, we are experimenting with its capabilities. We will not publish AI-generated stories, videos or photographs, or use AI to edit text stories, until the new generation of AI tools meets our standards for accuracy and reliability.

Q.  Are there any disclaimers we will need to make if we use AI in certain specific ways?

Consistent with our Trust Principles pledge to provide “reliable news,” Reuters strives for transparency about how we create content. For instance, our upcoming auto-translation service will append the disclaimer “This story was translated and published by machine” to stories that were automatically translated. Depending on how AI may be used in the future, content would carry a disclaimer to the effect of, “This story was generated by machine and edited by the Reuters newsroom.”

If the subject of a story we are covering is generative AI technology itself, then the use of a video or photograph as a visual element is permissible, with approval from senior editors and robust disclosure.

Q.  How would we handle AI-related errors?

As ever, we should be fully transparent about errors and corrections, adhering to our usual standards. Editors are responsible for the content they publish, whether a story is created by human or machine. It is critical for editors to cast the same critical eye on any story or content created by AI as they would on content created by a human. That means checking facts, sense, and bias, and correcting any errors.

Q.  What are the legal perils for Reuters News? 

The use of external generative AI tools can make it more difficult to protect the confidentiality of our unpublished journalistic work product. That’s because sharing information with a third party may be considered publication.

Additionally, using these tools may complicate our ability to protect our intellectual property rights. The terms of use of some tools ask users to relinquish legal rights to content, and some countries view AI-generated content as not copyrightable.

Lastly, we generally remain legally responsible for the content we publish, regardless of whether an AI tool was involved in its creation.

When in doubt on these and related issues, please seek guidance from our Legal team.
