The Threat of “Deep Fake” Text Generation: Pull Quotes Series 3, Episode 10

Listen above or subscribe on iTunes.

As COVID-19 continues to spread around the globe, disinformation is a threat as dangerous as the pandemic itself. That is according to the World Health Organization’s director-general, Tedros Adhanom Ghebreyesus, who suggested at the Munich Security Conference, back on February 15, that we are also facing an “infodemic.”

The Ryerson Review of Journalism maintains that qualified reporting, grounded in rigorous verification, is imperative for keeping people accurately informed. Unverified news stories that mislead the public can set off a chain reaction of inaccurate claims. With advancements in media technology, however, some misinformation might not be authored by humans at all.

In February 2019, OpenAI announced it would delay the release of an advanced artificial intelligence (AI) text generator, called GPT-2, capable of producing prose without programmed templates or task-specific training. (In other words, it can write without human control.)

The non-profit AI research organization feared the “malicious applications” of this new technology, without yet fully understanding the dangers that could emerge.

The AI was initially designed to predict the next word in a passage based on the words that precede it, similar to Google’s Smart Compose. The biggest difference is the 40 gigabytes of training data the model uses to make sense of written language: 8 million web pages linked from upvoted Reddit posts, totalling approximately 6.5 billion words of text (substantially more than the combined word count of every novel by Dickens, every play by Shakespeare, The Lord of the Rings trilogy by J.R.R. Tolkien, and the Harry Potter series by J.K. Rowling, with billions of words left over).

On its site, OpenAI says it has “trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization.”

What does that mean? Before settling on each word, GPT-2 weighs thousands of possible word configurations, assessing the probability of each candidate in the context it has interpreted so far. That allows the model to carry out extended “train of thought” writing without disruption. It also means that the model, despite drawing from a vocabulary of only approximately 50,000 words, can write in a way that emulates numerous levels of writing ability, from amateur to professional and every skill level in between.
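To make that concrete, here is a minimal sketch of next-word prediction using the open-source Hugging Face transformers port of GPT-2; this is an illustration, not the code used in our experiment, and the prompt is invented. Given any text, the model assigns a probability to every token in its vocabulary, and generating an entire article is simply this step repeated, one token at a time.

```python
# A sketch of GPT-2's core operation: scoring every possible next token.
# Assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The newsroom was quiet until"  # hypothetical prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the ~50,000-token vocabulary
# for whichever token comes next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode([int(token_id)]):>12}  {prob.item():.3f}")
```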

OpenAI also lists several malicious applications of this technology. At the top of that list: “Generate misleading news articles.”

But does the technology really have the capacity to produce prose indistinguishable from verified news stories? Can AI mimic the writing voice of a journalist, entirely devoid of programmed templates and curated data feeds? The RRJ decided to find out.

We fed the GPT-2 model 60 RRJ articles, each averaging 3,000 words, to see if it could mirror the collective voice of our magazine. We downloaded the base code for the full-sized model, which OpenAI made publicly available in November 2019, and modified it to run on a standard operating system and train on the stories we gathered. The articles we selected were drawn from magazine issues between 2010 and 2019, many of them recipients of accolades such as Best Student Writer, Best New Magazine Writer, and Honourable Mentions for Best Feature (through the National Magazine Awards and the Association for Education in Journalism and Mass Communication). This was our way of ensuring the model was trained on the best of our work over the last decade. Fine-tuning on these stories taught the model to emulate their structure, paragraph flow, and narrative style.
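For readers curious about the mechanics, the following is a hedged sketch of that kind of fine-tuning, written with the Hugging Face transformers port of GPT-2 rather than the code we actually ran; the file name `rrj_articles.txt` and all hyperparameters are illustrative assumptions, not our real settings.

```python
# A minimal fine-tuning loop: continue training GPT-2 on a small corpus
# so its generations drift toward that corpus's voice and structure.
import torch
from torch.utils.data import DataLoader
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

# Assumption: the training articles concatenated into one plain-text file.
text = open("rrj_articles.txt", encoding="utf-8").read()
tokens = tokenizer.encode(text)

# Chunk the corpus into fixed-length training examples.
block_size = 512
examples = [
    torch.tensor(tokens[i : i + block_size])
    for i in range(0, len(tokens) - block_size, block_size)
]
loader = DataLoader(examples, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
for epoch in range(3):  # illustrative epoch count
    for batch in loader:
        # For causal language modeling, labels are the inputs;
        # the model shifts them internally to predict each next token.
        loss = model(batch, labels=batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("gpt2-rrj")  # hypothetical output directory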

The results were troubling. From real names attributed to fake quotes, to nut grafs that mention real organizations alongside fabricated statements, the model seems to have opened a new doorway into defamation and misinformation. (To review a gallery of our produced samples, click here.)

“I would think of it as a massive wake-up call about what’s possible and what’s in existence and, more, what’s coming,” says Andrew Cochran, AI media researcher and former head of news strategy for CBC, in this week’s Pull Quotes episode. Cochran believes this wake-up call pertains to “how we as journalists and informed members of the public need to be aware of the issues, and need to be that much more literate about how these things are working.”

Also on Pull Quotes, Lisa Gibbs, director of news partnerships at the Associated Press, phoned in from New York to discuss the implications of our experiment. AP has been using automated text technology since July 2014, when it went from writing 300 financial stories per earnings quarter to automating 3,700. Its templated automated coverage now extends to sports, business, elections, and local news reporting.

“There is no question — this is true of this technology and it’s been true of most technologies over time — that it could be used for a bad intent, or it could be used for a good intent,” Gibbs told us after hearing some of our produced samples read out loud.

Below is a complete article that the model produced. This particular sample includes real names (which we removed) attached to falsified accusations of sexual assault, as well as unfounded statistics presented as verified facts. (To listen to this week’s Pull Quotes episode, which unpacks the details of our experiment, listen below.)

By Mitchell Consky 

AN ARTICLE BY GPT-2:

The following text is entirely computer generated and is not based on any process of verification; the excerpts do not represent the RRJ’s practice of journalistic integrity. Grammatical errors were not corrected. Paragraph breaks were added for readability, and names were removed. Numerous claims, presented as facts and statistics, are completely inaccurate.

BELOW IS A SLIDESHOW OF OTHER GPT-2-GENERATED ARTICLES:

Read more about the rise of automated text generation in our Spring 2020 issue — coming soon. 

Episode 10 of Pull Quotes was edited by Ashley Fraser and produced by guest producer Mitchell Consky and Pull Quotes producer Tanja Saric, with technical production help from Angela Glover and Lindsay Hanna. Pull Quotes’ executive producer is Sonya Fatah.

