Fake Video, Shared Reality
SEPTEMBER 3, 2019
One of the great ironies of the information age is that abundant and available knowledge has produced widespread uncertainty. Rather than merely expanding access to authoritative accounts of reality, the web has cast doubt on even the most entrenched forms of institutional authority. “Information, it turned out, has authority in proportion to its scarcity,” analyst Martin Gurri notes. “The more there is, the less people believe.” Whether or not fake news is overblown as a specific problem, fears about it concretize a more pervasive phenomenon: the collapse of shared reality.
More recently, so-called deepfakes -- convincing, face-swapped video generated by artificial intelligence -- have played upon similar fears. Thanks in part to a few eerily convincing examples, deepfakes have been in the news again in recent weeks. Opinions vary: some worry that these videos pose an existential threat to our civil discourse; others argue that the technology itself may not be that harmful, though our overreaction to it could be. A background atmosphere of political polarization hangs over the debate: Republican Senator Ben Sasse and clinical psychologist Jordan Peterson have produced some of the most dystopian warnings, informed largely by their personal enmeshment in political controversy.
Many findings from psychology seem to bear out the worst fears. We have long known, for example, that people tend to forget the source of information they encode in memory, and that mere repetition is often enough for people to remember a false piece of information as true. What’s more, recalling a memory is a constructive process, vulnerable to many of the same errors as ordinary cognition (including political bias). One study has even shown that, when fake photographs are presented alongside news headlines, people can be led to “immediately and confidently remember false news events.” One can only imagine how video might exacerbate this problem.
This vulnerability is rooted in our evolutionary history. As the psychologist Geoffrey Miller notes, social information in nature is rarely irrelevant, and we have good reason to be skeptical of an accused party’s post hoc attempt to revise our judgments. This may be the origin of the infamous “backfire effect” in political science, which suggests that people not only resist updating false beliefs but often double down in the face of correction. Given that the “it’s just a deepfake” defense will be available, at least at first, to both the guilty and the innocent, we should expect similar skepticism to surround any disputes over video evidence. And even if we can rationally accept that a given video is a deepfake, its emotional salience will be much harder to extinguish.
The bleak psychological picture has led many to argue that a solution to the deepfake problem will have to be technological in nature. The computer scientist Stephen Wolfram has sketched out one such possible solution, rooted in cryptography and blockchain technology. Presumably, news organizations will be able to ensure public trust by proving (via cryptographic hash) that a given video was created in a certain way at a certain time.
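To make that idea concrete, here is a minimal sketch of the hashing step such a scheme relies on, written in Python. The file name and the notion of “publishing” the digest are hypothetical stand-ins, and the sketch leaves out the signing and blockchain machinery in Wolfram’s actual proposal; it shows only why a hash published at broadcast time lets anyone later confirm that a video has not been altered.

```python
# A minimal sketch of the hashing step behind video-provenance schemes:
# publish a digest of the footage at a known time, and anyone can later
# re-hash the file to confirm it has not been altered since.
# The file name and the "published" digest below are hypothetical.

import hashlib

def video_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    published = video_digest("newscast.mp4")  # hypothetical footage
    print(f"Digest published at broadcast time: {published}")

    # Later, a skeptical viewer re-computes the digest; any edit to the
    # video, however small, yields a completely different hash.
    assert video_digest("newscast.mp4") == published
```

The hash itself is the easy part; the harder questions -- who published it, when, and why we should trust them -- are what the timestamping and blockchain layers of such proposals are meant to answer. But if, as we’ve suggested, the problem of mass uncertainty is deeper than any one technology, then solving the deepfake problem is just the beginning. The question that remains -- for psychologists and technologists alike -- is how to make the truth count.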