
Deepfake detection with and without content warnings.

Abstract

The rapid advancement of "deepfake" video technology, which uses deep learning algorithms to create fake videos that look real, has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to, and ability to detect, a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.9%) than a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are warned that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video; the remainder erroneously select at least one genuine video as a deepfake.

© 2023 The Authors.
