Newsroom Fact-Checking in the Age of AI

As AI technology makes it harder to tell whether visual content is real, newsrooms must adapt their approach or risk being left behind.

Photo by Beyzanur K via Pexels

Seeing isn’t always believing. With AI-generated content making it increasingly difficult to tell reality from fiction, newsrooms are changing the way they handle photos and videos. However, as AI becomes more advanced, fact-checkers say it’s getting hard to keep up. 

“It’s astonishing how fast the technology is developing. There are things that, even a year ago, we relied on to reveal something as fake that we can’t rely on anymore,” says David Michael Lamb of CBC’s visual verification team. “It means that the work of verification is getting harder and harder, almost by the day.” 

While CBC has separate teams for in-depth digital investigations into AI-generated misinformation, the visual verification team focuses on breaking and developing news. Lamb started working with the team in the summer of 2024, joining at the start of its development. He says CBC developed it to combat the threat of AI, recognizing that the problem was steadily getting worse. 

In 2024, about 24 percent of Canadians used social media as their primary source of information, according to Statistics Canada. Newsrooms have also been taking advantage of social media to quickly gain real-time information on developing situations. Today, however, those platforms are flooded with AI-generated and fake content, which is disrupting the information ecosystem.

“Before, if we saw a video and we wanted to verify its authenticity…it was, ‘Is the video being misrepresented, or has it been edited in some way?’” says Melissa Goldin, an AP news verification reporter and editor who has focused on mis- and disinformation since 2018. “Now, the question has to be, ‘Is it even a real video to begin with?’” 

An AI-generated image that was widely shared on social media, claiming to be of damage wrought by Hurricane Melissa in Jamaica.

While it’s easy to tell whether some visuals are fake, others are less straightforward. News outlets, organizations, and governments have all had to retract or delete posts, articles, and warnings after the images that inspired them were revealed to be fake. Even CBC had to issue corrections after mistakenly broadcasting AI-generated aerial “footage” of Hurricane Melissa in October last year.

AI is now able to do things it couldn’t before, such as generating multiple, slightly different but still consistent images, showing videos from different angles, creating realistic audio, and accurately replicating human anatomy, something that was once a clear giveaway. Generative AI has come a long way from putting the Pope in a puffer jacket.

“The pace of change has accelerated, and it frustrates people, because they learn a skill…and a few months later, you’re saying, like, ‘sorry, that’s not useful anymore,’” Lamb says. “You’re gonna have to learn to do this a different way.”

Lamb says the visual verification team currently has a core group of six to eight people, who are also responsible for training their colleagues to verify content themselves.

Verifying visual content can take anywhere from a few minutes to several hours, and AI-generated material is not the only thing verifiers are looking for. While the process looks different for every newsroom and evolves with each new advancement, there are some consistent steps.

First, look for obvious AI “tells”: count the number of fingers or teeth, and examine clothes and jewellery for inconsistencies. Check whether text is legible and correct, whether faces are warped, whether objects blend into each other, and whether there are remnants of watermarks. AI-generated content also often has an uncanny feeling or looks “too perfect,” and any audio should make sense and be in a real language.

A reverse image search is also important for determining whether content comes from a trusted source. Goldin says a photo or video can occasionally be traced straight back to an “AI content creator.” Finding multiple images or videos of the same event from different angles or perspectives adds credibility, although AI is learning to fake that as well.

Another step is checking the location and making sure it was actually recorded where it says it was. Sometimes this can be done through Google Maps or by looking for inconsistencies in the weather, language, surroundings, or people. 

Sometimes, even content the verification team believes is probably authentic can’t be verified thoroughly enough to publish. Lamb gave the example of a video that claimed to show North Korean soldiers in Russian uniforms preparing to fight in Ukraine. The video was low quality, so while the uniforms looked like they could have been Russian and the soldiers sounded like they might have been speaking Korean, the team couldn’t say for sure. The video was also shot entirely inside a nondescript building, leaving no geographic markers to go on. Ultimately, he says, there just wasn’t enough evidence to be confident it was true.

He adds that anyone who wants to settle into a long-term routine for verifying simply won’t be able to, because visual cues are getting harder to find. As a result, Lamb says they’re starting to rely more on source analysis. 

“Old-fashioned traditional journalistic skills of trying to get to the source of a video, who shot this, who posted it. Can you get a hold of that person?” Lamb adds. “It’s almost like journalism is going back to an old time when the only way you could get a picture in your newspaper was to go take the picture yourself.” 

The need to adapt to AI-generated visuals may end up bringing journalism full circle and shift the need for news back onto traditional media. If nothing online is real, who can people trust to tell the truth?
