Responsibly leveraging AI in journalism
Generative AI tools have suddenly entered newsrooms, with many journalists already using them to streamline aspects of their work. This raises pressing questions about how to regulate the use of AI responsibly, ensuring transparency without undermining journalism’s credibility.
In recent years, we have reached a turning point in technology: the rise of a wide range of accessible, user-friendly generative AI tools (genAI), now available to almost anyone with basic literacy skills. This technological leap has not gone unnoticed in news production and dissemination, where genAI tools are quickly gaining traction. However, their growing adoption raises essential questions about appropriateness, ethics, and regulation.
The main concern is not that AI will “steal journalists’ jobs.” While AI can now generate original texts, including news articles, research suggests that human readers are still able to tell the difference between AI-generated “churnalism” – low-quality, filler news designed to drive online engagement – and journalism created by humans, which offers original perspectives and thoughtful interpretations of reality.
Ethical Considerations and Responsibilities
AI is a powerful tool that can help streamline journalists’ workflows and simplify certain stages of news production. However, for AI to be an asset to journalism, it must be applied mindfully and responsibly.
A recent study published in the academic journal Journalism Practice by researchers Hannes Cools and Nicholas Diakopoulos examines journalists’ experiences using AI in their work. A critical issue that emerged from their survey is the potential tension between AI use and journalism ethics. Without careful oversight, biases embedded in AI could influence reporting, potentially spreading misinformation or deepening polarization in the public debate.
Survey participants reported finding genAI tools especially helpful for routine, time-consuming tasks, such as transcribing interviews, summarising content-rich documents, and drafting internal reports within the newsroom. GenAI’s ability to assist with language refinement and proofreading was also mentioned as valuable.
However, respondents stressed that for journalism to remain ethical, genAI should never replace human judgement in critical tasks like news verification – a crucial component of credible journalism. As one survey participant said, “Verification will probably be one of the last strongholds when reflecting on the impact of generative AI on the news reporting process.”
Moving Forward with Transparency
In response to the opportunities and challenges that come with the application of genAI in the newsroom, many media outlets are setting up task forces to inform and support journalists in understanding the benefits and risks of these tools. These teams also aim to define clear boundaries for the safe and fair use of genAI in news reporting. Transparency is a central commitment in these efforts: readers have a right to know if AI played a role in creating the articles, newsletters, social media posts, or images they consume, and in general, to understand AI’s role in the news production process.
This commitment to transparency also calls for rigorous monitoring of genAI’s outputs to catch potential inaccuracies, hallucinations, and biases. In addition, the use of genAI in newsrooms requires journalists to regularly update their knowledge and skills on AI technology to prevent naive or exploitative applications that could undermine journalistic quality and erode public trust.
Further reading: