The risk of data voids in reinforcing misinformation

February 16, 2024
Marco Boscolo

In the lead-up to the 2024 election, concerns over misinformation abound, fueled by "data voids" and reinforced by online searches. Experts emphasize the need for tailored approaches and recognize trust in information as a multifaceted, ongoing process.

Up to an estimated 4 billion people will vote in 2024. In an editorial published in December 2023, the editorial board of the journal Nature underlined that “some researchers are concerned that 2024 could also be one of the biggest years for the spreading of misinformation and disinformation” and that this could play a significant role in election results.

According to a recent study, so-called “data voids” constitute a significant concern. The term was coined by Michael Golebiewski of Microsoft in 2018 to describe search engine queries that return little to no results. As the Nature article put it, data voids are information spaces, primarily concentrated online, where the available information is not backed by evidence or facts. The phenomenon is not new: data voids pre-existed the Internet and social media era.

One would expect that a reader who searches for more information on a particular news story and finds little supporting material would at least grow suspicious of the story’s trustworthiness. This is not the case, as the political scientist Kevin Aslett at the University of Central Florida in Orlando and his colleagues found.

In a recent paper published in Nature, they created a series of fabricated news stories and had a group of participants read them. The stories lacked proper sources and reported inaccurate information. The researchers then asked the participants to search online for confirmation and supporting material using the Google search engine, and afterwards evaluated how much the participants trusted what they had read and researched.

In one experiment, for example, they asked readers to verify stories accusing the US government of engineering a famine by imposing lockdowns during the COVID-19 pandemic. When searching online with prompts such as “engineered famine”, participants were likely to find sources reporting a non-existent engineered famine. Contrary to expectations, readers were more likely to trust the fabricated stories after the online search. In other words, digging deeper into a news story that rests on no trustworthy sources does not appear to serve as a countermeasure against misinformation. The effect is even stronger when the online search returns fewer results.

 

What to do? 

In many cases, conducting a detailed online search is presented as an effective fact-checking tool. But, as Paul Crawshaw, a social scientist at Teesside University in Middlesbrough, UK, suggests, simply asking readers to do more research on a suspicious topic is not the solution.

He suggests, for example, that the characteristics of different populations must be taken into account when teaching people how to counter misinformation through online research. He came to this conclusion after a study in which he and his colleagues noticed that different income groups responded differently to the teaching programme.

From the point of view of the ENJOI project, which has worked extensively on the role of engagement in assuring the quality of journalism, and of science journalism in particular, the lesson to be drawn is that there is no easy checklist that can shield readers from misinformation. In particular, media, journalists and all other stakeholders must be aware that the trustworthiness of science information is not the result of a simple recipe but of a complex, layered and ongoing process.

 

