How advances in real-time fact-checking might improve our politics

Jonathan Rauch

IT’S FEBRUARY 2019, and I’m waiting to see whether a robot will call the president of the United States a liar.

I have tuned in to the State of the Union address, a speech that I haven’t missed more than a couple of times in my four decades of adulthood. Some addresses were soaring, some were slogs, and one, a magisterial performance by Ronald Reagan, was thrilling, because I watched it in the House chamber, from a press-gallery perch right behind the president. But I have never had the sense of suspense I feel now, as I sit staring not at a TV, but at a password-protected website called FactStream. I log in and find myself facing a plain screen with a video player. It looks rudimentary, but it might be revolutionary.

At the appointed time, President Donald Trump comes into view. Actually, not at precisely the appointed time; my feed is delayed by about 30 seconds. In that interval, a complicated transaction takes place. First, a piece of software—a bot, in effect—translates the president’s spoken words into text. A second bot then searches the text for factual claims, and sends each to a third bot, which looks at an online database to determine whether the claim (or a related one) has previously been verified or debunked by an independent fact-checking organization. If it has, the software generates a chyron (a caption) displaying the previously fact-checked statement and a verdict: true, false, not the whole story, or whatever else the fact-checkers concluded. If Squash, as the system is code-named, works, I will see the president’s truthfulness assessed as he speaks—no waiting for post-speech reportage, no mental note to Google it later. All in seconds, without human intervention. If it works.

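For the curious, here is what that 30-second relay might look like in code: a minimal Python sketch of the three-bot pipeline, assuming the speech-to-text step has already produced a transcript. The function names, the FactCheck fields, and the keyword-overlap matcher are my own illustrative stand-ins, not the software Adair's team actually built.

```python
# A minimal, hypothetical sketch of a Squash-style pipeline.
# Every name here -- extract_claims, lookup, chyrons_for, the FactCheck
# fields -- is invented for illustration, not the Reporters' Lab code.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class FactCheck:
    claim: str    # a statement previously checked by a fact-checking organization
    verdict: str  # e.g. "true", "false", "not the whole story"


def extract_claims(transcript: str) -> List[str]:
    """Bot 2: pull candidate factual claims out of the transcribed speech.
    A real system would use a trained claim-detection model; this sketch
    simply treats every sentence as a candidate."""
    return [s.strip() for s in transcript.split(".") if s.strip()]


def lookup(claim: str, database: List[FactCheck]) -> Optional[FactCheck]:
    """Bot 3: match a claim against previously published fact-checks,
    here by crude keyword overlap rather than real semantic matching."""
    words = set(claim.lower().split())
    best, best_overlap = None, 0
    for fc in database:
        overlap = len(words & set(fc.claim.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = fc, overlap
    return best if best_overlap >= 3 else None


def chyrons_for(transcript: str, database: List[FactCheck]) -> List[str]:
    """Given the output of Bot 1 (speech-to-text, assumed to run upstream),
    return one caption per claim that matches a stored fact-check."""
    captions = []
    for claim in extract_claims(transcript):
        match = lookup(claim, database)
        if match is not None:
            captions.append(f"{match.claim} | VERDICT: {match.verdict}")
    return captions
```

In the real system, the claim detector and the matcher are the hard parts; the sketch's sentence-splitting and keyword overlap simply mark where that machine learning would slot in.
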
Also watching the experiment, from Duke University, is a journalism professor named Bill Adair, along with researchers at the university’s Reporters’ Lab. A doyen of fact-checking since 2007, when he established PolitiFact.org, Adair has for years dreamt of fact-checking politicians in real time. Squash is his first attempt to teach computers to do just that. In the run-up to the State of the Union, he and a team of journalists, computer scientists, web developers, and students scrambled to prepare what tonight is still jerry-rigged software. “Guys,” he told the group ahead of time, defensively talking down expectations, “I’ll be happy if we just have stuff popping up on the screen.” (Fact-check: Stuff was not the word he used.) But if the system works, for an hour or two we might glimpse a new digital future, one that guides us toward truth instead of away from it.

THE WEB AND its digital infrastructure are sometimes referred to as information technology, but a better term might be misinformation technology. Though many websites favor information that is true over information that is false, the web itself—a series of paths and pipes for raw data—was designed to be indifferent to veracity. In the early days, that seemed fine. Everyone assumed that users could and would gravitate toward true content, which after all seems a lot more useful than bogus content.

Instead, perverse incentives took hold. Because most online business models depend on advertising, and because most advertising depends on clicks, profits flow from content that users notice and, ideally, share—and we users, being cognitively flawed human beings, prefer content that stimulates our emotions and confirms our biases. Dutifully chasing ever more eyeballs, algorithms and aggregators propagate and serve ever more clickbait. In 2016, an analysis by BuzzFeed News found that fake election news outperformed real election news on Facebook. Donald Trump, a self-proclaimed troll, quickly caught on. So did propagandists in Russia, and conspiracy websites, and troll farms, and … well, you already know the rest. By now, multiple studies have confirmed that, as Robinson Meyer reported last year for The Atlantic, “fake news and false rumors reach more people, penetrate deeper into the social network, and spread much faster than accurate stories.”

The resulting situation is odd. Normally, if an information-technology system delivered false results at least as often as true results, we would say it was broken. But when the internet delivers a wrong answer, we say the user is to blame. That cannot be right. Our information systems need to help us avoid error, not promulgate it. As philosophers and politicians have known since at least the time of Plato, human cognition is inherently untrustworthy.
