Gideon Lichfield of Wired writes about how the magazine will use artificial intelligence software.
Lichfield writes, “We do not publish stories with text generated by AI, except when the fact that it’s AI-generated is the whole point of the story. (In such cases we’ll disclose the use and flag any errors.) This applies not just to whole stories but also to snippets—for example, ordering up a few sentences of boilerplate on how Crispr works or what quantum computing is. It also applies to editorial text on other platforms, such as email newsletters. (If we use it for non-editorial purposes like marketing emails, which are already automated, we will disclose that.)
“This is for obvious reasons: The current AI tools are prone to both errors and bias, and often produce dull, unoriginal writing. In addition, we think someone who writes for a living needs to constantly be thinking about the best way to express complex ideas in their own words. Finally, an AI tool may inadvertently plagiarize someone else’s words. If a writer uses it to create text for publication without a disclosure, we’ll treat that as tantamount to plagiarism.
“We do not publish text edited by AI either. While using AI to, say, shrink an existing 1,200-word story to 900 words might seem less problematic than writing a story from scratch, we think it still has pitfalls. Aside from the risk that the AI tool will introduce factual errors or changes in meaning, editing is also a matter of judgment about what is most relevant, original, or entertaining about the piece. This judgment depends on understanding both the subject and the readership, neither of which AI can do.”