How can AI be employed ethically in journalism?

January 17, 2025
Sofia Belardinelli
The EBU’s News Report 2024 explores AI’s impact on journalism, highlighting its potential to enhance efficiency while raising ethical concerns. Emphasising the “human in the loop” principle, the report advocates for transparency and responsible AI use, ensuring that journalism’s core values of accuracy and accountability are upheld.

Journalism was born from a groundbreaking technological innovation: letterpress printing. Throughout its history, the profession has always engaged with cutting-edge technological tools. The recent rise of artificial intelligence (AI) marks another significant leap forward, unlocking creative possibilities that could profoundly reshape the field. The European Broadcasting Union’s (EBU) News Report 2024, titled “Trusted Journalism in the Age of AI,” delves into these developments, exploring the challenges, potential, and ethical considerations of integrating AI into the newsroom.

AI has the potential to be a powerful ally for journalists: for one thing, its growing array of tools can help professionals complete technical and repetitive tasks more quickly and efficiently. However, AI tools can do far more than that, and their expanding ability to handle creative tasks introduces both opportunities and risks. While most interviewees in the EBU Report express little fear of “AI replacing their jobs,” many highlight the struggle to strike a balance: using these tools without over-relying on them, while not banning them from newsrooms altogether.


Accountability and transparency

One common challenge for journalists is how to address AI-generated errors. According to the Report, many news organisations adhere to the “human in the loop” principle. This approach requires that all content, whether human- or AI-generated, be reviewed by a human editor for accuracy, proper sourcing, and potential disinformation or misinformation before publication. Ultimately, accountability for errors in published journalistic content rests with humans, not AI systems—a principle at the core of journalistic integrity.

AI tools have been available for quite some time, but the release of ChatGPT was a game-changer for the field. Its sudden accessibility to the public heightened the sense of urgency surrounding AI’s role in journalism. Following ChatGPT’s release, many media organisations swiftly established guidelines for the “ethical use of AI in the newsroom,” emphasising accountability and transparency to their audiences. This movement culminated in November 2023 with the release of the “Paris Charter on AI and Journalism.” Co-signed by 17 media organisations worldwide, the document outlines ten principles to ensure AI is used consistently with journalism’s core values, such as safeguarding democracy, freedom of expression, and universal access to reliable information. The preamble states:

“AI systems can greatly assist media outlets in fulfilling this role, but only if they are used transparently, fairly and responsibly in an editorial environment that staunchly upholds journalistic ethics. In affirming these principles, we uphold the right to information, champion independent journalism, and commit to trustworthy news and media outlets in the era of AI.”

Another critical aspect of ethical AI use is transparency, both from developers — who often fail to fully disclose how they handle the data users provide — and towards news consumers, who have the right to be informed about the methods used to create the content they consume. The fifth principle of the Paris Charter emphasises this: “Any use of AI that has a significant impact on the production or distribution of journalistic content should be clearly disclosed and communicated to everyone receiving information alongside the relevant content.”

The EBU’s 2024 News Report emphasises the need for caution when using AI tools in journalism, particularly regarding how developers might utilise the material journalists input into these systems. To date, transparency on this matter remains limited. Mattia Peretti, an AI consultant, expressed his concerns to EBU, stating: “Using generative AI tools for investigative journalism is risky. We need to be extremely careful about what we type into a chatbot. I always advise: If you wouldn’t put it on a LinkedIn post, don’t type it on ChatGPT” (EBU News Report 2024, p. 114).


Responsible uses

Despite these justified concerns, the responsible use of AI in journalism also offers promising opportunities, and the EBU 2024 News Report highlights several of them. One area where AI could make a significant impact is in fostering trust and increasing audience engagement by tailoring the news experience to different types of audiences.

In this vein, AI further expands the possibilities for personalising the news experience – not just for audience groups but for individual users. This personalisation, however, raises an ethical challenge: remaining true to journalism’s core values. Above all, journalism is a collective service aimed at building public awareness and critical thinking among citizens. It must therefore prioritise this responsibility over merely catering to individual preferences or consumer demands.

Another challenge lies in selecting the appropriate AI tools for newsroom use. While open-source models are preferable for their transparency, many professionals perceive proprietary models as potentially safer and less susceptible to misuse. The EBU Report notes: “[…] from an ethical perspective, both options have their pros and cons, and a singular recommendation is difficult. Nevertheless, being informed about the technology behind the models, their limitations, capabilities, and impact is imperative” (p. 142).

Incorporating AI into the newsroom thus has both positive and negative aspects. Weighing the siren song of highly capable technological tools against journalism’s commitment to reliability and its societal role is no small task. At times, journalists may need to resist the allure of AI’s capabilities; as the EBU Report advises, this responsibility may even mean learning to say no.

In a world increasingly saturated with AI-generated content, the label of “human-made” could become synonymous with quality and trustworthiness. Rasmus Nielsen, director of the Reuters Institute, remarked to EBU: “AI may be helpful for those publishers who are able and willing to define and double down on what makes them different, who are genuinely interested in meeting people where they are, and who can resist the temptation to commodify further the journalism they offer” (p. 151). AI’s role in journalism must be carefully defined to ensure it enhances rather than undermines the values and principles of the profession.

Links to the full EBU Report and the Paris Charter.
