Labelling AI-generated content helps maintain audience trust. Or not?
We already know that artificial intelligence is present in many newsrooms around the globe, and that AI-based tools help generate news content in various ways, often without any explicit disclosure. What if we labelled AI-generated content as such? Would that foster a more trusting relationship with readers? This is the core question two researchers asked in a recent working paper titled "Or they could just not use it?": The Paradox of AI Disclosure for Audience Trust in News.
The University of Minnesota’s Benjamin Toff and the Oxford Internet Institute’s Felix M. Simon designed an experiment to test the public’s reaction to such labelling. A group of participants was asked to read a series of news articles and rate their trust in the content on a scale from 1 to 11. Some articles were labelled as written not by an actual journalist but by a generative AI bot. The results showed a significant difference: AI-labelled content was judged less trustworthy than content attributed to humans.
Herein lies a paradox. Labelling AI-generated news was expected to increase public trust, as a commitment to transparency. The experiment suggests otherwise, and the finding is consistent with the existing literature. The main explanation for this loss of trust is a general wariness toward headlines that were not selected or written by human beings, a hesitance that apparently outweighs any transparency benefit.
Toff and Simon underline another noteworthy aspect of the results. Among the labelled AI-generated articles, those that cited sources performed somewhat better on trust than those that did not. This suggests that fundamental principles of the journalistic process, such as sourcing, play a role in building and keeping readers’ trust.
Of course, the experiment has limitations: the participants were all US citizens, and the group’s political inclination leaned toward the progressive, Democratic side. Toff and Simon acknowledge that different political positions could produce different levels of trust in journalistic content in general. Nonetheless, the working paper is an interesting read on trust and AI-generated journalistic content.
Read the entire working paper: “Or they could just not use it?”: The Paradox of AI Disclosure for Audience Trust in News
AI-generated image by Unreal, Airtist, pixexid.com