AI in the newsroom: a survey reveals a general lack of guidelines
A new report from the LSE's JournalismAI initiative underlines news organisations' lack of strategy on AI and highlights some concerns about the future of the industry
We have already discussed how much, and for what purposes, AI is used in newsrooms around the world, thanks to a recent report on the topic. A new publication from the JournalismAI initiative at the London School of Economics helps us dig deeper into the subject. Generating Change. A global survey of what news organisations are doing with AI, by Charlie Beckett and Mira Yaseen, is based on a worldwide survey of 105 news organisations in 46 countries.
The survey confirms the significant presence of AI practices in the newsroom: 75% of respondents declared that they use AI-based applications for newsgathering, and 90% for news production. In newsgathering, for example, journalists use AI tools for speech-to-text transcription, saving the time of transcribing manually, or for text extraction. However, some respondents lament the general inaccuracy of the tools they have tried. AI tools that autonomously gather information from various social networks (such as CrowdTangle or Rapidminer) are also used in some cases, particularly for detecting potential disinformation circulating on the platforms.
News production is where AI tools are most involved. Some respondents report that their newsroom is testing a headline-suggestion tool based on ChatGPT. Others admit to using natural language processing applications like ChatGPT to create summaries of articles. Other tools, such as Grammarly, are used for grammar checks, editing, and other proofreading tasks to improve the quality of the articles produced.
In Gauging Generative AI’s Impact on Newsroom, many respondents lamented that their news organisations didn’t have any internal policy on AI in place. The LSE’s publication reports a similar situation, with two-thirds of the journalists reporting that their organisation hasn’t developed an AI strategy.
The ethical issues
Interestingly, only 25% of respondents identified ethics as one of the most pressing challenges for AI integration in the newsroom, yet 60% expressed concerns about the ethical implications of AI use in journalistic production. The most prevalent ethical issues cited were algorithmic bias, the editorial quality of AI-produced or AI-assisted reporting, the role of tech companies and intermediaries, and readers’ perception. A large share of respondents (82%) are also concerned about the effects of AI on the industry at large.
As the two authors say in the publication’s conclusion, AI “is a volatile technology for news organisations. Most are aware of the inherent risks in AI technologies and the dangers of bias or inaccuracy. They are discovering that applying AI in news production has immediate possibilities, but how it will shape future practice is uncertain”. That is why there is a growing need for in-depth analysis and reflection on the contact surface between technology and journalism.
If you want to know more, download the full report: Generating Change. A global survey of what news organisations are doing with AI.
Featured image by Ralph, AI artist, pixexid.com