
Artificial Intelligence wedges its way into journalism

Graphic by Sheridan Williamson-Fraser

Over the past decade, artificial intelligence (AI) has become intertwined with the journalism industry as both a tool and a hindrance. From AI-written articles to falsified video content, there is an urgent need to evaluate the ethics of using simulated human intelligence.

Synthetic media is a general term for video, images, text, or audio that has been fully or partially generated using AI. Angela Misri, assistant professor of journalism at Toronto Metropolitan University, says that most newsrooms haven’t been thinking about AI. Misri has been researching current ethical discussions on the use of AI in newsrooms, and so far she has found that these discussions have generally been put off and that ethical concerns about synthetic material have been left to interpretation. Depending on the newsroom or the story, audiences may have no clue that AI was used at all.

Misri recalls interviewing a senior editor of a publication that had recently published multiple articles written entirely using AI. When asked if the audience was aware AI was being used to produce content, the editor initially said yes. However, after a quick look at the publication’s website, the editor realized there was no disclaimer explaining the use of AI.

The lack of consensus among journalists about how AI should be applied in formatting or generating journalism, and how much of that use should be disclosed to audiences, causes concern over the accuracy and truthfulness of AI-generated journalism. According to Misri, there are two ethical issues: ineffective fact-checking of sources, and a lack of transparency in the newsroom when working with AI. She says fact-checking sources to catch hallucinations, the falsely generated information AI systems can produce, is essential to ethically produced journalism. She cites CNET, a technology review publication, as an example of a failure to fact-check AI-produced content.

In November 2022, CNET Money, an editorial team within CNET, launched an AI engine to write basic explainers about different financial services. In January 2023, The Verge reported that CNET had issued corrections to 41 of the 77 stories produced by the AI, many of which included factual errors. “I’m worried about being able to tell [what’s true] from fake,” says Misri. “We’ve gotten lazier as an audience in actually asking, ‘Is that real?’” In 2022, Misri took an online AI training course that required her to complete a quiz displaying content side by side, one piece made by a human and one by AI. “I only scored 45 percent on that,” she says. “I’m a journalist and I’ve been doing this my whole life.”

A study by The Canadian Journalism Foundation (CJF), released in October 2023, found that “48 percent of Canadians admit that they are not confident in their own ability to distinguish the difference between online/social media content generated by AI versus content created by humans.” Misri worries about the growth of ever more convincing fake news and misinformation that critical thinkers, and even trained professionals, may struggle to distinguish from genuine content. She suggests that a disclaimer explaining, case by case, how AI was used in the production process would be a good step toward transparency.

Brodie Fenlon, general manager and editor-in-chief at CBC News, wanted to be proactive with audiences about the use of AI. “I wanted to get ahead of the growth of generative AI as quickly as possible and assure the public that they could trust our journalism,” he says. “That we were not using AI to just create stories—and to explain some initial thinking around how we’re going to use this technology and how we’re not going to use it.”

To that end, in June 2023, Fenlon wrote the CBC editor’s blog post “How CBC News Will Manage the Challenge of AI,” which provided a breakdown of the institution’s AI practices in research and production. Questions about AI had been popping up, but the technology wasn’t at the forefront of everyone’s minds the way it is now. He thinks the guidelines are an effective way for the public to hold CBC’s use of AI to account, and that they provide a transparent look behind the scenes of the institution’s productions.

David McKie, deputy managing editor at Canada’s National Observer, believes it is possible to integrate AI into the newsroom. In simple cases, he thinks there is nothing wrong with publishing an AI-generated story if it consists of straightforward facts with little nuance. He says it’s a waste of time to tell readers every time a journalist uses ChatGPT or a similar system for minimal tasks such as sifting through data or occasionally brainstorming headline ideas.

However, McKie does think journalists should be required to acknowledge the use of AI when producing larger, in-depth stories. And in cases where critical thinking is necessary to produce quality journalism, he says, concerns naturally arise about generative AI being used as a substitute. When a publication generates headlines or other significant synthetic media, he believes it should explain to the audience how it is using the tool and apply that use consistently. The choice is fairly stark for McKie. When used responsibly, he thinks AI technology can strengthen journalism. But when used irresponsibly, he says, “It weakens your journalism and it calls our credibility into question at a time when we can ill afford to do that.”

Ultimately, the onus is on us. “It’s up to media outlets to use this, or any technology, responsibly,” McKie says. “There are minefields but, if we’re careful, we can learn to live with this technology and benefit from it.”
