To the VentureBeat Staff:
The debate around the ethics of using generative AI intensified yesterday after Geoffrey Hinton, one of the world’s leading AI researchers, resigned from Google out of concern that companies aren’t ensuring generative AI is used ethically.
The same ethical question is now swirling around journalism: Can journalists use generative AI ethically?
If you haven’t seen the recent Bloomberg story, “How Important Is It to You That a Human Writes the News You Read?” by Rachel Metz — a sensationalized account of how AI tools like ChatGPT are infiltrating newsrooms — I suggest you check it out.
As a panelist during the recent San Francisco Press Club event moderated by Metz, which is referenced in her story, I had the privilege of sharing my thoughts on the growing impact of generative AI tools in the journalism landscape.
The panel has since sparked a heated debate on this issue, both among the public and within our own newsroom. As your Editorial Director, I would like to take this opportunity to share VentureBeat’s ambitious plan for pioneering the responsible integration of AI in journalism.
In short, I believe AI has the potential to revolutionize our profession, much like the internet, social media and personal computers did in previous decades — possibly to a much greater extent. It will enable us to reach more readers, uncover more stories and push creative boundaries as never before. At VentureBeat, we have been at the forefront of covering artificial intelligence for years, and in my role as Editorial Director, I am committed to leading the way in harnessing AI’s power to transform our work.
My vision is for our reporters and editors to use generative AI as a creative partner, not as a replacement for human judgment. To be explicitly clear, we do not blindly copy and paste what tools like ChatGPT generate, nor do we let AI write entire stories. Instead, we will use AI to inspire and strengthen our work — overcoming writer’s block, improving our storytelling, and exploring new angles. Our human copyeditors will always review, edit and fact-check VentureBeat stories, whether they contain AI-influenced content or are written without any generative AI assistance.
Far from being a threat to the industry, I believe generative AI presents a remarkable opportunity to enhance and elevate the craft of journalism. Used responsibly, it can help address some of the biggest challenges facing the industry today, from the relentless pressure to produce more content quickly to the lack of diversity and inclusion in many newsrooms. But we must be aware of the risks and remain committed to the fundamentals of journalism: truth, accuracy and accountability.
Some will argue this approach is risky or that AI can never replicate human judgment and ethics. They’re right that AI systems today are narrow, biased and inconsistent — but so are humans! The solution is to manage AI responsibly while tapping into its potential to overcome human limitations, like fatigue, groupthink and implicit biases.
This is not a particularly radical idea: Generative AI is already used widely, from powering psychotherapy chatbots to generating medical images. If it can assist doctors in clinical settings and patient care, surely it can aid journalists. Large language models like GPT-4 have been trained on vast swaths of humanity’s written output; they are practically engineered to help writers.
Over the next 12 months, I hope we can increasingly use AI to empower our reporting. By employing tools like ChatGPT or Bing Chat (as I mentioned during the panel discussion) for routine brainstorms and ideation on elements of a story — such as headlines and ledes — I hope we can free up time for reporters to conduct more interviews, write more impactful stories and provide more in-depth analysis.
Make no mistake, our duty remains reporting the truth. As long as we uphold accuracy, truth and journalism’s principles, I fully support every journalist in our newsroom using AI to assist with their stories. By year’s end, I envision employing it throughout our story creation process, from pitch to headline to editing and publishing. AI will enable us to produce more and better stories at scale. We are not competing with AI. We are collaborating with it.
VentureBeat is not alone in embracing this visionary leap. Mainstream media outlets such as The Atlantic, Business Insider, The New York Times, Reuters, and The Washington Post have also been experimenting with AI tools for years. The venerable Associated Press has been using AI to generate stories for nearly a decade. We join a growing community of journalists who recognize AI’s potential to reshape the way we tell stories. The organizations that bravely walk this path, as we intend to do at VentureBeat, will unlock a new era of fast, accurate and inclusive storytelling.
The game has changed — it is no longer a choice whether to use AI in the newsroom but a necessity if we want to better serve our readers. The technology has matured to the point where it can be a creative partner rather than just a utility.
We can either compete against AI or collaborate with it. At VentureBeat, we choose to collaborate — and shape the future of media in the process.
Sincerely,
Michael Nuñez, Editorial Director @ VentureBeat