Generative AI and science journalism: a conversation with Mohamed Elsonbaty Ramadan
Mohamed Elsonbaty Ramadan is an award-winning science journalist, science communication consultant, and trainer. Recently, he founded SciComm-AI, an initiative dedicated to providing training and consultancy for researchers, science journalists, and science communicators interested in effectively integrating generative AI into their work—most importantly, in a responsible way. With 15 years of experience, he has published over 700 articles in outlets such as Scientific American and Nature. He has also held roles in major science communication networks, including the Public Communication of Science and Technology (PCST) and the Arab Forum of Science Media and Communication (AFMSC).
From your experience, what do you see as the advantages and disadvantages of using generative AI in science journalism and science communication in general?
I believe that within the next two or three years, the entire communication landscape—and the journalism landscape in general, not just science journalism—will shift towards much greater use of generative AI. Of course, this comes with many challenges. We need to make sure that science journalists and science communicators are aware of this shift, because we don't want to lose them.
Secondly, using generative AI to communicate science can be tricky. If you consider the most significant limitations of generative AI, there's something called "hallucination". It means that AI models can fabricate information (let's say, they're lying) and provide a lot of inaccurate information. This is the first and most obvious issue.
However, one aspect that also affects its use for communicating science, and which I don't think receives enough attention, is that these models are inherently biased. They were trained on data, and because of a lack of transparency we don't know exactly where that data comes from. What we do know is that most of it comes from the internet, and the internet is not always a reliable source of information.
Can you provide an example of the types of biases that can be found in AI?
Well, most of the content on the internet is in English. My native language, Arabic, is one of the UN's official languages and the fifth most spoken language worldwide, with around 300 million native speakers and around two billion non-native speakers. Yet Arabic makes up less than 1% of the content on the internet, which means it likely accounts for a similarly small share of the data used to train these models.
I can see a significant difference when I attempt to use AI to communicate science in Arabic due to this inherent bias. For example, when I ask the AI to explain a specific scientific topic to an eight-year-old student, it provides an answer based on its assumptions about the audience, which is very Western-biased. But if I ask it to do the same for an Arabic speaker from Egypt, the result is catastrophic.
What exactly do you mean when you say "catastrophic result"?
I remember the first time I tried this, the AI gave me the answer in English. But we can't expect an eight-year-old child in Egypt to speak English; even if they study it from kindergarten, they are usually not able to speak or understand it well. When I asked it to respond in Arabic, it used formal language. In Arabic, we have a spoken language and a formal language; the latter is used mainly for reading and writing. An eight-year-old's proficiency in formal Arabic would not be sufficient.
So, I asked it to use Egyptian Arabic. It honestly sounded like when an American tries to fake a British accent. Then, I asked it to use analogies and metaphors more related to their context or daily life. What I got was a very Western view of how an Egyptian child should think and behave. For example, it said something like, "If you think of this, it's faster than the fastest camel." But in Egypt, we don't have many camels. When I was a child, I thought the fastest animal was a horse, not a camel. It also said things like, "This is like having a genie." We no longer use these kinds of metaphors; we're not living in an Aladdin world. It felt very stereotypical.
I recall discussing this with a friend a few days ago, and she used a very striking phrase to describe it: "This is the new Orientalism." I told her I would quote this because I find it fascinating how it captures the issue in just a few words. This kind of bias is often overlooked, and it doesn't apply only to Arabic. Think, for example, of languages spoken by smaller populations in Europe, or of the contexts of countries in Eastern or Southern Europe—the situation there is unlikely to be very different.
What can we do to better understand and be more aware of the risks associated with using AI in science communication?
We need to train science communicators on how to use AI not only effectively but also responsibly. Three key points need to be discussed.
- We need to understand how AI works—how we ended up with models like ChatGPT, Gemini, or others. If you don't understand this, you won't fully grasp the biases, hallucinations, and other issues involved. You don't need to dive into technical details, but it's essential to know, for example, that AI models are trained on data, and we don't always know what that data is. Some people have fine-tuned the models and assessed and evaluated the answers, but we don't know exactly who did that or based on what criteria. Also, these models learn from our inputs and outputs: whenever I use a model, the input and the generated output are used again to train the model. This creates a loop that will most likely reinforce certain biases. Moreover, it's essential to understand where hallucinations—making things up—come from. After all, generative AI doesn't truly understand what it says; it's a statistical model that simply predicts the most probable next word.
- How to use AI properly. It's not just about typing something and getting an answer. You need a structured approach and a change in how you think about using AI. AI isn't here to make us work faster or to be lazy. In fact, if used effectively, AI can make you a better critical thinker, because it requires you to think critically about questions like, "How can I use AI to help me do this?" or "I need to review this output." The best way to handle AI is to think of it as an intern: if you have an intern in your office, they are trained to do some tasks, but they don't know how to do them exactly your way. You need to supervise, train, and teach them. You must accept that they are allowed to make mistakes, but they learn from them. And of course, you need to review their work—you would never publish an intern's work without review. If you adopt this mentality, everything changes.
- The ethical and responsible use of AI. There are many concerns here. For example, environmental concerns—AI consumes vast amounts of energy and water. What is the environmental impact? It also costs a lot of money. There are also societal impacts: AI may exacerbate the gap between individuals who have access to the latest models and those who don't, or between those who have models trained on their context and those who don't. How will it affect the economy and transparency? Should we, for example, declare that we used generative AI in developing a science communication activity? How will our audience perceive that? Will it increase or decrease trust in science? What is legal? What is ethical? I don't have all the answers, and I don't think anyone does. In my training sessions, I usually say I'm here to help you ask the right questions, but you need to find the answers yourself. You need to develop policies within your organisation and ensure compliance with the country's legal framework. Ultimately, it's about ethics and integrity. As a science journalist, if I use AI to write my piece, then it's not my piece anymore—even if I train it to write in my style using my own words.
Then, if a science journalist asked you how they should use generative AI in science journalism, what would you tell them?
My first piece of advice is: do not use it for writing. Use it for research, for brainstorming, and for finding better analogies or metaphors: better ways to explain complex scientific concepts more simply. Sometimes it can also give you feedback. I find using AI for feedback very helpful, especially when I'm writing in a language that isn't my native one. For example, when I write in English, I use it for proofreading and editing.
Overall, do you feel more optimistic or pessimistic? Do you believe we will be able to use AI responsibly and cautiously?
I would say, honestly, as Mohamed, I'm usually a pessimistic person. I believe that regardless of what we think about generative AI, it will continue to evolve, and we will not be able to control its evolution. For example, when social media first started, everyone saw it as a way to create change. In Egypt, our 2011 revolution began with a Facebook event. However, if you look at Facebook now, it's being used by governments as a tool to control people, because they have learned how to use it. Right now, it's more like a new form of state television, used to control everything.
Usually, humanity doesn't make the right choices. However, the problem is that, whether you think about it or not, the world is moving in that direction. So, I'm neither optimistic nor pessimistic—I just don't know. But I prefer to think pragmatically: I utilise AI when I need it, and I refrain from using it when I believe it's not warranted. I try to be as critical as possible, think about it, take what I like, and leave what I don't. But what will happen in the future—I have no idea.