Navigating Misinformation in the Digital Age: A Conversation with Andy Ridgway

March 25, 2025
Sara Urbani

Andy Ridgway is a researcher within the Science Communication Unit at the University of the West of England in Bristol and a science journalist with 20 years of experience. He writes for BBC Science Focus magazine and contributes to the International Brain Research Organisation (IBRO). Ridgway is also involved in the communication activities of the European Competence Centre for Science Communication, which is being developed within the COALESCE project, and he is particularly interested in themes such as misinformation and trust.

Andy Ridgway

Misinformation is a significant topic and not particularly new, but nowadays it seems like it's something that an even broader public is aware of (in light of recent news from the US, but not only). Do you think it's a theme that has gained more popularity even outside the science communication community?

Yes, I think so. Most people in the broader public, even if they're not particularly interested in or involved with science communication, are likely aware of the term 'misinformation'. In many cases, however, the term has been misappropriated and used in political contexts. Broadly speaking, people recognize that there is a contestation of facts, a debate about what's true and what's not.

I believe this discussion has been driven by the sheer volume of communication happening online and the technology enabling it. We know that an increasing number of people are getting their news through social media, which raises concerns about echo chambers and how misinformation spreads. What's particularly interesting, and worrying, is the real-world impact of this phenomenon. For instance, in the UK, there are issues with declining vaccination rates among children, and misinformation about vaccines is thought to be a significant contributing factor.

While people are generally aware of misinformation and its challenges, I think there's a broader lack of self-awareness regarding how we interpret the information we encounter. This might be jumping ahead, but when we look back on this digital era, we may realize that we were somewhat naive in how we used social media and other online platforms. It's almost as if we've been using them in an unconstrained, uncritical way, without much thought about how we interpret information, who we trust, and what we trust, and this has had consequences.

Looking ahead, I hope that as we continue to navigate this digital age, we'll become more knowledgeable and self-aware about these issues.


Going back to the example of vaccines in children in the UK, did this issue exist even before the COVID-19 pandemic? Is it related to vaccines like MMR and so on? Because this is happening in other countries as well, where health ministers are launching information campaigns to persuade parents. However, despite all the evidence of their effectiveness, there remains a strong anti-vaxxer movement…

Yes, statistically, it has been shown that concerns are particularly focused on childhood vaccinations. There are areas of concern surrounding what's commonly known as the MMR vaccine, which protects against measles, mumps, and rubella. With a growing proportion of children not receiving this vaccine, there are subsequent consequences, such as an increase in the number of kids contracting measles and other conditions like whooping cough and polio. These are diseases for which we have effective preventative treatments, and yet, due to public perceptions, control of these diseases is being hampered.


However, it started well before the widespread use of social media. There was a major controversy surrounding the MMR vaccine, with some research suggesting a link to autism that was later disproven. This issue predates regular social media use, so it's a long-standing problem, and it seems that social media has exacerbated it. Do you think this is a matter of trust? And how can we, as science communication professionals in different areas, work to shift public trust towards scientifically sound information?

That's right. Some of the research I conducted during my PhD is relevant here. While the specific topic was different, it still concerned parents. The research focused on how parents use information about food, particularly that found on social media, to inform their decisions about what to feed their children and how this, in turn, influences the amount of food wasted. The key finding was the significance of trust, and to be somewhat reductive, parents often trusted those they perceived as similar to themselves, sharing their values and worldviews. They were more inclined to accept information from other parents who also prioritized feeding their children healthy diets. In contrast, broader guidance issued by governments and other large institutions was often seen as too generic and didn't resonate as strongly.

While we must be cautious when extrapolating insights from one area to another, I believe we can confidently state that values and trust are powerful factors. These are frequently mediated through social media. For example, one of Facebook's strengths is its ability to facilitate the formation of groups of like-minded individuals. In the context of food, this leads to groups of parents who share similar approaches to feeding their children, such as specific weaning methods or dietary preferences. One can easily see how these pools of shared information and values develop. Ultimately, this is what parents often rely on; it's who they trust. And it's within this communication context that we encounter other issues, such as those related to vaccines, health, and safety.


But then this is the echo chamber all over again: social media algorithms amplify this effect, meaning you mostly see people who are like-minded. As a result, you rarely have the opportunity to hear alternative perspectives, which could be beneficial if trustworthy or detrimental if not. Does this seem risky to you?

Yes, but taking a positive perspective, large institutions and governments considering how to reach specific groups of people on health-related topics might ask: Who is currently communicating this information? Is it effective to act as a big, anonymous, faceless, and authoritarian source? Or would a different approach, involving individuals with whom the target group feels a sense of connection, be more effective?

Understanding the values of the people you're trying to reach is also important. As with parents, decisions are often based on what they believe is best for their children. Understanding the foundation of these decisions can inform your strategy, helping you grasp the origins of concerns surrounding issues like vaccine safety.


Okay, here we're discussing real people—individuals with faces and stories we can relate to. But what about another major aspect of misinformation: the role of AI? Now, anyone can create highly realistic fake content and spread it widely, which is particularly concerning. Do you think we should regulate AI-generated misinformation strictly, or can we harness its potential to counter these threats?

It's a bit like social media, which, in some sense, is a force for good because it connects people with shared values. Equally, AI is a way of generating and obtaining information about a particular topic quite quickly. And, like social media, I don't think regulation is the answer. Instead, the focus should be on empowering individuals to navigate online information effectively. This involves fostering information literacy, understanding sources, and discerning who and what to trust. Ideally, this would become a core component of educational programs, starting at the school level. This education should address social media, AI, and how AI generates information, acknowledging its potential for inaccuracy, replication of false information, and inherent biases. Increased awareness and navigational skills would be beneficial.

However, numerous ethical concerns surround AI, including generative AI and technologies capable of manipulating video, such as deepfakes. We are now witnessing the impact of these technologies.


And since AI is likely here to stay, it's essential that we educate ourselves about its capabilities and implications…

Definitely. I don't think you can just say 'stop' and it will happen, so I don't think that's feasible. It's more on the recipient's end that we need to change things.

I think the most important thing is to know how these things work—know how social media, algorithms, and AI work—so we're able to make more informed decisions about where we're getting that information from. But it's not a new issue. In journalism, there are lots of questions about AI and whether people will lose their jobs. I remember things like VHS videos and people saying, 'Oh, cinemas are going to die out.' And that just didn't happen.

Other disruptive technologies have come along, but if there's something unique about a particular experience or technology—even if it's quite old—that's usually a reason for it to stay. Cinema is a good example: there's a lot to be said for watching a film with lots of other people. If you're watching a comedy and everyone is laughing together, that shared experience is a really important part of it. Going out, being at an event, leaving the house, maybe putting on different clothes, and meeting people—that social aspect is vital. Cinemas have never died out, and I'm sure in 20 or 40 years, we'll have great home cinema technology, but I wouldn't be surprised if we still have cinemas.

It's the same with AI: what human journalism offers is real-life stories, the telling of individual, personal stories, and making them believable and relatable often sits at the heart of a lot of journalism.

I often tell students that if you want to be really impactful about an issue, such as climate change, it's best to home in on something very specific: maybe a family that lives in a particular location and has been displaced by some climate change-related event. Telling a very human, emotional story is a far more powerful way to communicate, and that's something AI can try to replicate but can never make truly authentic.

I think that authenticity—and the lack of authenticity—is going to be a big block (in a good way) on AI's ability to overtake real human journalism. So I wouldn't be surprised if there are areas where it grows, like factual reports on sporting events, fixtures, and results. I'm sure there will be powerful ways it will speed up those sorts of processes. But in terms of doing the exciting bits that all journalists want to do—telling more human stories about real-life people—I don't think you're ever going to be able to replicate that.
