Insider editor in chief Nicholas Carlson sent out the following:
Team–
I’ve had conversations with many of you and others – sentient and non-sentient – about what role artificial intelligence should play in our newsroom. Everyone sees huge opportunities. And worrying challenges.
I want this note to encourage you to be excited about the potential opportunities, try out AI, and report back your findings — all while taking seriously the challenges posed by AI.
[Please note that this focuses solely on text-generating AI, specifically, ChatGPT – the only service we are comfortable with the newsroom using at this point. To be clear: We should avoid using other services with restrictive and opaque terms of service. Image and other types of generative AI promise an even more complicated and exciting future, and we need to have more conversations before we can decide if and how to use them.]
I’ve spent many hours working with ChatGPT, and I can already tell having access to it is going to make me a better global editor-in-chief for Insider. Just in the past couple of weeks it helped me think about how and what I wanted to say in this memo, do casual background research for a post I assigned, brainstorm headline ideas, and prepare for a live interview. It read and summarized Alvin Bragg’s indictment and statement of facts in his case against Donald Trump in moments. I fed it some of the episode titles of one of our most popular video series and asked for future episode ideas. (I sent them to one of our executive producers and she said, “Holy moly! There are some really great ideas here. Thank you!”) I asked it to come up with ideas for trips for our travel reporters, asking it to make additional recommendations for other related places a half day’s trip away.
My takeaway after a fair amount of experimentation with ChatGPT is that generative AI can make all of you better editors, reporters, and producers, too.
Do not use ChatGPT or other chatbots and versions of AI to write sentences that you put into your scripts or articles.
This may change in the future. But before we make that change, we are going to ask a pilot group of experienced producers, editors, and reporters to experiment with it as a word processing aid and report back to the rest of us. If you are interested in joining this pilot group, please let me know. It will be exciting and important work for Insider.

Anyone joining this group will receive three important warnings:
- Generative AI can introduce falsehoods into the copy it produces. Research it provided me for this memo was wrong, as I discovered when I fact-checked it. You cannot trust generative AI as a source of truth. Doing so can lead to journalistic disaster. AI can also introduce bias into text it generates. When it comes to facts, generative AI should be viewed as a resource similar to Wikipedia or a factoid at the top of a Google search-results page: that is, a great starting point that helps you find more reliable sources. ChatGPT is a language generator that performs calculations to guess at the next best word. It doesn’t understand facts or meaning, and it doesn’t know whether an assertion is right or wrong, much less fair. It is not a journalist – you are. No matter the tool you use, AI or otherwise, journalists at Insider are ultimately responsible for the accuracy of their stories. Always verify your facts.
- Generative AI may lift passages from other people’s work and present them as original text. Do not plagiarize! Always verify originality. Best company practices for doing so are likely to evolve, but for now, at a minimum, make sure you are running any passages received from ChatGPT through Google search and Grammarly’s plagiarism search.
- A third, less serious, warning is that text generated by AI can be dull and generic. Take what it gives you as a suggestion: something to rewrite into your own voice and in Insider’s style. Make sure you stand by and are proud of what you file.
For these reasons and others, generative AI is tricky to use as a text drafting tool, and that’s why we are limiting experimentation with using it in this way to a small pilot group. We’re excited to hear what they learn. Already it’s obvious that ChatGPT can help a reporter bust through writer’s block and generate ideas for a lede, kicker, or transitions. Our suggestion to the pilot group will be to take copy and rewrite the output until they’re satisfied with it.
But beyond that use case, now is absolutely the time for the rest of us to begin experimenting with this powerful yet poorly understood new technology.

Here are some ideas for how it may help you:
- Use AI to generate outlines for your stories, or to help structure a post that you’re struggling with. This could help a lot with writer’s block.
- Save your editors precious time they are currently spending fixing typos and cleaning up copy. Ask AI to make suggested edits to your writing to make it more readable and concise. Please see below for an important caution on this use case.*
- Use AI to suggest SEO-optimized headlines and meta descriptions.
- Tell AI who you are planning to interview and what you hope to get out of it, and ask for interview question ideas. This also works well for prepping for panels and interviews.
- Ask AI to explain tricky, unfamiliar concepts. (“What happens if the US fails to raise the debt ceiling?”)
- Ask AI to summarize old news stories and suggest lessons that can be learned from them. For example: “How did John Edwards avoid conviction and what lessons should future prosecutors learn from it?”
*Do not put sensitive information, particularly sourcing details, into ChatGPT. The AI companies employ humans who can see conversations with their bots.
I encourage all of you to try all those prompts, and then try a million more of your own creation. Please let me and the rest of us know what’s useful and what’s not. For me, the really exciting thing about using generative AI is that I keep figuring out new ways to use it.
Please also flag to us anything that might give you pause while using these tools. We’ll use your feedback as we build out training for the whole newsroom.
There are perils ahead. Bad actors are going to use AI to make fake news and fake newsworthy moments. You’ve all probably seen the fake images of Trump and Putin getting arrested. Just last month, we learned that a source who reached out to us was actually a bot.
Now more than ever, it will matter to our readers and viewers that they can trust us. So I cannot stress enough that we need to be careful with the tools we use, and be certain of our facts. Used carefully, AI can help us be the trusted source we need to be.

Editors: It is already impossible for you to know if work produced by your colleagues was created using AI. This underscores the vital role you play in challenging reporters on the veracity of facts. It is necessary to step up your vigilance and take the time to ask how every fact in every story is known to your colleague.
Some of you may read this list of ways AI can be used to help our journalists get better and faster and worry that it will take your job. To you, I say: AI is not going to replace you. But hopefully it will reduce the number of menial tasks we ask you to do and free you up to come up with ideas that push us forward in new, innovative ways.
Back in the 1980s, when Steve Jobs was trying to convince regular people they needed to own personal computers, he used to say that computers were a bicycle of the mind.
He’d point out that if you compared the land speed of humans against the rest of the animal kingdom, we are a relatively slow species. That is, until we get on a tool we’ve built for ourselves to go faster: the bicycle.
As the land speed records stand:
- Humans: 28 mph
- House cat: 30 mph
- Elk: 45 mph
- Quarter horse: 47.5 mph
- Lion: 50 mph
- Wildebeest: 50 mph
- Pronghorn: 60 mph
- Cheetah: 75 mph
- Human on a bike: 90 mph
That 90 mph record, by the way, was set by Todd Reichert in Nevada in 2016.
I think generative AI such as ChatGPT can make us faster and better, too. It can be our “bicycle of the mind.”

And, as with a bicycle, we could crash and hurt ourselves and others if we don’t use AI correctly. So we need to be careful with it. Our policy is this: You may, and even should, use AI to make your work better. But it is still your work, and you are responsible for it. You are responsible to our readers and viewers for the accuracy, originality, and quality of your work.
If our newsroom doesn’t figure out how to use these tools to make our work better for our readers and viewers, other newsrooms will — and they will leave us in their dust.
Let’s make like Todd Reichert and never give them the chance!
Nicholas

P.S. I’m sure you have questions and concerns. Please send them through and we will compile them into an FAQ. And please look forward to more guidelines and forums on the topic as we continue to figure out how to carefully embrace this massively important new technology.