Flaws in reporting on AI: a checklist tool for reporters
Reporting on AI can be tricky. Journalists do not always have enough expertise in the field to avoid exaggerated claims (both positive and negative), and companies tend to hype their products to maximize media exposure. Nonetheless, reporting on AI is crucial at a moment of great expansion for the sector, given its decisive role in science and in society at large.
Arvind Narayanan is a professor of computer science at Princeton University whose research focuses on the societal impact of AI and of digital technology in general. Sayash Kapoor is a Ph.D. candidate at the same university who works mainly on machine learning methods and their use in science. Together they are co-authoring a book, AI Snake Oil, in which they examine and deconstruct marketing hype around AI (the Wikipedia article on “snake oil” gives a good sense of the expression). You can follow the development of the book by subscribing to their free newsletter.
Among their recent work on the relationship between AI and the media, one piece is particularly interesting for science journalists and for journalists in general. The two researchers analyzed more than 50 articles about AI from major English-language publications, showing in detail how the information these articles provide can be misleading. Theirs is not a comprehensive review of how AI is covered by major media outlets, but it can serve as a reference for flaws that journalists could and should avoid.
The 18 identified flaws are grouped into four categories:
- flawed human-AI comparison;
- hyperbolic, incorrect, or non-falsifiable claims about AI;
- uncritically platforming those with self-interest;
- limitations not addressed.
Each of these categories is broken down in a useful checklist, enriched with links to actual examples from online media, that can be downloaded as a simple PDF document. For example, the pitfall “Comparison with human intelligence” is illustrated by a 2019 CNN piece titled “AI may be as effective as medical specialists at diagnosing disease,” which states: “[The study] focused on an AI technique called deep learning, which employs algorithms, big data, and computing power to emulate human intelligence.”
An example from the third category concerns a New York Times piece about Bakpax, a popular chatbot in the education sector. The pitfall involved is “repeating or re-using PR terms and statements.” In this case, the author of the article wrote, “She uses the platform Bakpax that can read students’ handwriting and auto-grade schoolwork,” echoing a PR document that Narayanan and Kapoor easily found on Bakpax’s company website.
Even though the two researchers’ list is not directly intended for journalists, it is a concrete tool that can help when reporting on AI and related themes.
Download the 18-pitfalls checklist, or find it among the contents of the AI Snake Oil newsletter.