Fearing replacement, many Canadian journalists don’t trust or talk about AI. But if the industry hopes to survive this latest revolution, that needs to change

A robot looms over a scared man and his keyboard
Illustrations by Alexei Vella

It’s early October 2024, and the Toronto Star’s conference room is abuzz with anticipation. Stephen Ghigliotty, a marketing strategist with expertise in artificial intelligence, is preparing to host a five-hour learning session on the impacts of AI on journalism. The space is warm and welcoming, but beneath the surface runs an undercurrent of fear, and Ghigliotty can feel it the moment he steps in.

The trepidation is palpable, but Ghigliotty doesn’t let it dominate the room. He has a clear mission: to defuse the angst by stripping away the mystery. He needs to show the journalists and their editors how AI can enhance their work and boost the newspaper’s bottom line if they embrace it with a structured and ethical approach.

Recalling the event, Ghigliotty understands the nervousness. “It was welcoming and very positive, in the sense that everyone was there to learn,” he says. “But there was also this underlying vibe, this fear—‘this is going to take my job.’”

Ghigliotty pauses, reflecting on how such apprehension seemed almost inevitable. “It’s natural,” he continues. “AI is this big, intimidating thing for people. It’s this massive unknown, and people are always afraid of what they don’t understand. It’s a bit like staring into the abyss.”

“A lot of the anxiety about AI, especially in journalism and marketing, is the less they know, the more anxious they are about it,” he adds. “People hear AI and instantly think of these dystopian sci-fi scenarios. They think, ‘This is going to replace me.’”

To some, artificial intelligence seems a cold and impersonal force, something that threatens to overtake the humans who have always told the stories. To others, it’s simply a tool that promises to relieve journalists of mundane tasks, making room for more creative and more meaningful work. The truth lies somewhere in between: AI can transform journalism, but it also brings challenges that must be carefully navigated.

This underlying feeling that AI is going to take journalists’ jobs or change the business beyond recognition is not unique to the Star. The advent of AI in the journalism industry has sparked a heated debate about its potential to replace human journalists. While AI has made tremendous progress in generating content that closely mimics human output, including audio clips and human-like text, experts argue that it lacks nuance, empathy, and sensitivity. As humanity ushers in this latest transformative technological revolution, journalism must determine how it can use AI to its advantage—and how to avoid its potentially enormous pitfalls. But what’s clear is that we ignore it at our peril.

Automation or Replacement?

From streamlining newsroom operations to delivering personalized content, integrating AI into journalism promises to transform how news is produced, distributed, and consumed. But as the technology scales across the industry, it also raises significant ethical, economic, and professional concerns.

The potential applications of AI in journalism are extensive. According to research Ghigliotty undertook specifically for the Star session, machine learning, natural language processing, and other computational technologies could all be applied to the newsmaking process. These tools, he says, can automate tasks such as transcription, trend analysis, and even article writing, freeing journalists to focus on the nuanced, creative, human-facing aspects of their work. In practice, he argues, AI can assist with everything from creating bullet-point summaries of lengthy articles to predicting subscriber behaviour.

AI algorithms can analyze reader habits, enabling audience departments to increase subscriber engagement and loyalty with more customized content: European media company Mediahuis Group’s use of predictive models to boost reader retention by 14 percent in three months exemplifies this promise. AI tools can help with operational efficiency—routine tasks such as fact-checking or transcription, for example, can be automated to save time, allowing journalists to prioritize human-facing reporting. They can also identify emerging topics, enabling newsrooms to stay ahead of the curve and produce timely content, according to Ghigliotty’s research.

Despite its benefits, however, AI in journalism raises profound questions about the future of the profession. Ghigliotty argues that the use of AI in content creation and curation must be transparent to avoid misleading and losing audiences. Large language models can also replicate the biases in their training data, perpetuating discrimination against marginalized people. If those biases go unaccounted for, uncritical use of these tools can lead to mistakes that undermine editorial integrity. Ghigliotty found that while AI can assist with fact-checking and content generation, preserving the human element in journalism remains crucial for maintaining trust and accountability.

There is no universal consensus on what AI in journalism should entail, says Ghigliotty. For some, it’s a tool to amplify human creativity and efficiency. For others, it’s a disruptive force threatening traditional practices.

Paradigm Shift

Journalists have adapted to advances in technology before. On September 4, 1998, two PhD students from Stanford University launched a revolution in information retrieval. A search engine designed to organize the vast expanse of the internet, Google arrived with a simple promise: to connect users with the world’s knowledge in an instant. Before Google, journalists gathered information through traditional analogue methods: government reports, research libraries, official documents, and the dependable human contacts of their beat. Google became the journalist’s go-to platform for accessing information quickly and easily. Today, as AI evolves—manifesting in humanoid robots and virtual assistants like Siri—many journalists forget that they have adapted their research and reporting to technological change before. Now, the question on everyone’s mind is: Will AI replace journalists?

It’s a question Canadian journalists have not been paying enough attention to, but one that can no longer be ignored, says Nikita Roy, a Knight Fellow at the International Center for Journalists. The use of AI in journalism has been a contentious issue since it first arrived in newsrooms. “Canada just has a more complicated past with tech companies,” she says, explaining how Canadian journalism has suffered from past developments in Big Tech. “In the last decade, the added prioritization of social media ended up hurting newsrooms,” Roy says. “One of the first questions I get asked everywhere is, ‘How do I know that this is not another pivot-to-video disaster?’” Now, the Big Tech players are here again, telling us that AI is the next big thing. Of course there’s hesitation.

But this isn’t social media. “This is a paradigm shift in how people are going to access information.” AI is “not just another distraction,” Roy says. It’s newsroom infrastructure. It’s going to power how newsrooms work, how stories are told, and how audiences connect with information. The key is that it has to be done responsibly and in alignment with journalistic values.

Cautionary Tale

One of the most notable examples of a newsroom failing to use AI responsibly is Sports Illustrated (SI), which in 2023 published AI-generated articles credited to made-up writers, sparking outrage among journalists.

Angela Misri, a former digital director for The Walrus, says the incident underscores the importance of transparency and ethics in using AI in journalism. “Sports Illustrated made up humans—I mean, that is going to damage an industry-wide understanding of how AI is being used.”

For the journalists paying attention, the SI debacle was a turning point. AI was not going away, and it was up to journalists to learn how to harness its power responsibly. Roy has dedicated herself to fostering responsible and innovative use of AI in media, conducting workshops, building products for newsrooms, and advising journalists on strategies for using AI. Her podcast, Newsroom Robots, has become a resource for journalists looking to stay ahead of the curve. “It is key to define what AI means, since different people may have different interpretations of it,” Roy says. When people talk about artificial intelligence today, the conversation often circles around tools like ChatGPT—software capable of producing essays, articles, and even poetry in seconds. At first glance, it seems revolutionary. But for journalists, AI isn’t entirely new. It’s been quietly embedded in their work for years, assisting with transcription, research, and data analysis.

“One of the biggest mistakes the industry makes is assuming that ChatGPT and Google function the same way,” Roy says, leaning forward with the conviction of someone who has seen too many misconceptions take root. They do not: the former, she argues, represents a fundamental shift in how information is processed and presented. While Google is a tool for finding facts, ChatGPT excels at crafting coherent responses to prompts, drawing on the linguistic patterns of its training data. But this ability comes with a critical caveat: the model doesn’t know what’s true. ChatGPT can hallucinate, she says, “because it’s not a knowledge generator. It is giving you language. We have to be able to learn what it’s best at doing and use it for that purpose.”

Despite these limitations, Roy sees a tremendous opportunity for journalism—if we approach it correctly. For her, the key lies in understanding how to wield AI as a tool rather than shunning it as a replacement. Ever since Google’s rise, we’ve been trained to keep our search keywords concise and exact. But with ChatGPT, “the more context you provide, the better your output will be. We now have to unlearn what we learned when we started with Google.”

Roy’s perspective is grounded in years of working at the intersection of journalism and technology. Her mission, like Ghigliotty’s, is clear: to demystify AI and equip journalists with the tools they need to thrive in an industry on the brink of revolution. “It’s a transformative force,” she says, “and it’s here to stay.”

Roy’s work is crucial in an industry where AI is becoming increasingly prevalent. She notes that introducing AI into large legacy newsrooms such as CBC, The Globe and Mail, and the Star is a challenge that demands new skills, new approaches to newsroom management, and an overall cultural shift. “This is very similar to what happened when the industry was evolving from print to digital,” she says. “History is actually repeating itself, and possibly we haven’t yet learned from our past mistakes.”

Landmark Lawsuit

In a bold move on November 29, 2024, a coalition of prominent Canadian news outlets joined forces to file a lawsuit against OpenAI. The lawsuit—whose plaintiffs are CBC/Radio-Canada, Postmedia, Torstar, The Globe and Mail Inc., and The Canadian Press—accuses ChatGPT’s creator of engaging in “ongoing, deliberate, and unauthorized misappropriation” of the news organizations’ proprietary content to train its AI models. The claims include copyright infringement, breach of terms of use, and circumvention of the companies’ technological protection measures.

The media companies allege that OpenAI used web scraping on their news sites to gather vast amounts of text data—referred to in the claim as “Works”—without obtaining the required permissions. These “Works” consist of articles, investigative reports, and other forms of journalistic content that have been carefully crafted through substantial investment of time, expertise, and resources by the newsrooms. The coalition seeks punitive damages, a share of OpenAI’s profits, and an injunction to stop the unauthorized use of their content. “OpenAI has capitalized on the commercial success of its GPT models, building an expansive suite of GPT-based products and services, and raising significant capital—all without obtaining a valid license from any of the News Media Companies,” the publishers argue in their statement of claim. OpenAI has defended its practices, asserting that its models are trained on publicly available data and that it collaborates with news publishers to provide attribution and links to their content.

The case raises a pivotal question: “Do large language models have the right to read information that is openly available on the internet just as a human would?” Roy asks. The outcome of this high-stakes legal battle could shape the future of journalism in the AI era, with implications for how intellectual property is protected—and exploited—in the digital age.

A man and a robot shake hands amicably
Illustrations by Alexei Vella

Lack of AI Literacy

Roy notes two distinct but equally critical legal issues between AI companies and news producers: training AI models on news, and using AI to summarize news in real time. “The central question around the first is whether this is fair use or not.” Fair use—or fair dealing in Canada—allows copyrighted material to be used without permission under certain circumstances, such as when the content is transformed to the point that it can no longer be considered exact copying. The debate remains unresolved, according to Roy. Generative AI models like ChatGPT are undeniably innovative, but whether they meet the legal definition of transformative use is unclear. “That’s where it gets tricky,” she says. If these models are seen as transformative technologies, the argument can be made that training on copyrighted content falls under fair use.

The second issue, Roy points out, is perhaps even more urgent for the survival of journalism. It centres on how recent inventions like ChatGPT, Perplexity, and AI-powered search engines use news content in real time to generate summaries for users. “This fundamentally disrupts the business model of news,” she says. “When a bot summarizes journalists’ hard work in real time, it removes the incentive for readers to visit the original source.” Roy sees this practice as creating a product based on someone else’s labour—without compensation. “AI companies are essentially building tools that use newsrooms’ hard work for free.”

The implications extend beyond individual newsrooms. Roy notes that this issue feeds into a broader ecosystem in which companies like TollBit and ProRata act as data brokers, exploring ways to compensate news publishers for the use of their work. “It’s more than just training models,” she adds. It’s about how AI is creating new experiences that monetize news content while reducing direct audience engagement with newsrooms.

For Roy, these challenges emphasize the need for newsrooms to be proactive about the adoption of AI. One way is by “building AI systems that have journalistic values embedded in them,” she says. “Whether it’s ensuring fair compensation or advocating for clearer legal frameworks, the survival of the industry depends on addressing these issues now.”

This sense of urgency is shared by Misri, who is also a contributor to the Journalism, Artificial Intelligence, and Ethics explanatory journalism project at Toronto Metropolitan University. As part of her research, Misri examines how openly Canadian newsrooms discuss, use, and address the ethics of AI with their audiences. She found that, two years ago, newsrooms had no guidelines for using AI, and many journalists were not even aware that they were using AI tools every day. “Some of it was just about basic literacy of understanding.”

Despite this, Misri says the situation has improved in the last couple of years, with news leaders gradually developing policies and talking to each other about how to use AI and how to disclose that use to their audiences. Transparency, she emphasizes, remains key: journalists “need to be clear with each other and with the audience when we are using AI.”

Code of Silence

In contrast, American newsrooms have been more open to embracing AI. “The U.S. has been more proactive in thinking about AI and journalism,” Roy says.

Misri has felt that cultural difference through her work. It became evident during her efforts to interview Canadian journalists about AI, when conversations were often cut short by non-disclosure agreements (NDAs) or a general unwillingness to engage. The hesitancy mirrors the guarded approach many Canadian newsrooms take toward AI adoption.

I met this hesitation myself while researching this story. Over several months, I contacted more than a dozen journalists across Canadian newsrooms. Most turned me away, citing NDAs or expressing fear they would lose their jobs if they spoke out. A TSN journalist said they had heard of research departments and other reporters being laid off, their roles replaced by AI systems capable of automating tasks once performed by humans. The journalist also revealed they had personally lost work hours to AI, which had taken over parts of their workload—but declined to share further details, fearing repercussions.

Experiences like these hint at a culture of silence surrounding AI’s implementation in Canadian newsrooms. They highlight a broader challenge within the industry: the lack of transparency and open dialogue about AI’s role and its implications. Journalists’ fear of speaking out amid job insecurity—whether because of NDAs or the threat of termination—only widens the gap between newsrooms and the technological transformations shaping their future, with or without them.

Not everyone, however, was hesitant to share their thoughts on AI in journalism. Dan Berlin, a former CTV NFL analyst and current assistant professor in sport media at Toronto Metropolitan University, sees AI as a positive force in an industry where change is constant. “Much like when the internet came, it comes down to ‘we have to embrace change,’” he argues. “Without question, it’s going to have a major impact on a certain portion of the media who are being sent out to tell those game recap stories.”

At the same time, Berlin believes AI can create new jobs within the industry, such as managing the content AI is producing and editing stories to ensure accuracy and sensibility. “The next wave can fall into who is managing the content AI is producing,” he says. “You’re going to see more people in the iterative process of editing these stories and being able to ensure that they’re accurate and that they’re sensible and being able to post them on these websites.”

Not everyone shares Berlin’s optimism, however. Misri remains adamant that AI should not replace human journalists. “If my job becomes just a fact-checker for AI, I’m quitting,” she says. “I am running away. This is not what I signed up for.”

The ethical concerns surrounding AI in journalism are multifaceted. Brian Hastings, a former TSN story editor, says that with the increasing use of AI, editors tread a “fine line between what is ethical and what is not.” In some cases, it seems they may have already crossed into unethical territory. For his part, Berlin emphasizes that the core of journalism lies in accuracy and fairness but acknowledges that AI’s predictive approach to generating content raises questions about truth and reliability, since it generally prioritizes patterns over factual accuracy.

At the Crossroads

As AI continues to redefine journalism, the industry stands at a crossroads—one where embracing this new era of innovation must go hand in hand with preserving the human connection that builds trust, tells meaningful stories, and ensures journalism remains a cornerstone of truth in an evolving world.

Yet, even as ethical debates continue, one thing is clear: AI can only replace journalists if we let it. By harnessing the power of AI to assist our work rather than lead it, we can ensure that the integrity and subtlety of human journalism remain intact. “It’s about understanding the nuances of storytelling, the context, and the human touch that only a journalist can provide,” Ghigliotty says.

Regardless of how journalists adapt, AI is here to stay, and it will continue to shape the way people interact with information and news. Roy hopes that change will make those human elements even more valuable. “I believe the natural evolution of journalism is toward fostering belonging, by ensuring audiences feel seen, heard, and served,” she says. “That’s where you have the audience’s trust with journalism.”

About the author


Sandra is in the fourth year of her Bachelor of Journalism at TMU, with a minor in English. She has contributed infographics to HerCampus’s local TMU coverage and hopes to work in sports journalism after graduating. Outside of reporting, Sandra can be found gaming on Twitch.
