Deepfake Caricatures

Amplifying attention to artifacts increases deepfake detection by humans and machines

Deepfakes pose a serious threat to our digital society by fueling the spread of misinformation. It is essential to develop techniques that both detect them and effectively alert the human user to their presence. Here, we introduce a novel deepfake detection framework that meets both of these needs. Our approach learns to generate attention maps of video artifacts, semi-supervised on human annotations. These maps make two contributions. First, they improve the accuracy and generalizability of a deepfake classifier, demonstrated across several deepfake detection datasets. Second, they allow us to generate an intuitive signal for the human user, in the form of "Deepfake Caricatures": transformations of the original deepfake video where attended artifacts are exacerbated to improve human recognition. Our approach, based on a mixture of human and artificial supervision, aims to further the development of countermeasures against fake visual content, and grants humans the ability to make their own judgment when presented with dubious visual media.
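As a rough illustration of the detection side (not the paper's actual architecture or loss), the sketch below shows a detector that predicts a per-frame artifact attention map alongside a real/fake score, and adds a supervision term on the map only for videos that carry human annotations. The module names, shapes, and loss weighting are assumptions.

```python
# Minimal sketch, not the authors' implementation: a video classifier with an
# artifact attention head, semi-supervised on human annotations when available.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArtifactAttentionDetector(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.backbone = nn.Sequential(               # placeholder video encoder
            nn.Conv3d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.attention_head = nn.Conv3d(channels, 1, kernel_size=1)  # artifact map
        self.classifier = nn.Linear(channels, 1)                     # real/fake logit

    def forward(self, video):                        # video: (B, 3, T, H, W)
        feats = self.backbone(video)
        attn = torch.sigmoid(self.attention_head(feats))   # (B, 1, T, H, W)
        pooled = (feats * attn).mean(dim=(2, 3, 4))        # attention-weighted pooling
        logit = self.classifier(pooled).squeeze(1)
        return logit, attn

def detection_loss(logit, attn, label, human_map=None, lam=0.5):
    """Classification loss, plus an attention term only when a human map exists."""
    loss = F.binary_cross_entropy_with_logits(logit, label.float())
    if human_map is not None:                        # semi-supervised: annotated subset
        loss = loss + lam * F.mse_loss(attn, human_map)
    return loss
```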

Framework and Data

Human Artifact Maps

Our framework utilizes both machine and human supervision to learn to highlight unnatural deepfake artifacts. This has two advantages: it allows us to build better deepfake detectors, and it gives rise to Deepfake Caricatures, a new way of modifying videos to expose doctoring.
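To make the caricature idea concrete, here is a minimal sketch under the assumption that the exaggeration is applied in pixel space by amplifying frame-to-frame changes wherever the artifact attention map is high; the paper's actual transformation may differ, and the amplification factor `alpha` is hypothetical.

```python
# Minimal sketch of the caricature idea, not the paper's exact procedure:
# exaggerate temporal changes in attended regions so unnatural motion
# becomes easier for a human viewer to spot.
import torch

def caricature(video: torch.Tensor, attn: torch.Tensor, alpha: float = 3.0) -> torch.Tensor:
    """video: (T, 3, H, W) in [0, 1]; attn: (T, 1, H, W) artifact attention maps."""
    out = video.clone()
    for t in range(1, video.shape[0]):
        diff = video[t] - video[t - 1]              # temporal change at frame t
        out[t] = video[t] + alpha * attn[t] * diff  # exaggerate change where attended
    return out.clamp(0.0, 1.0)
```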

Code


Download the code for the Deepfake Caricatures project here.

Paper

  • Camilo Fosco*, Emilie Josephs*, Alex Andonian, Allen Lee, Xi Wang, Aude Oliva
    Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines

Gallery

From right to left: Real Video, Deepfake, Caricature on Fake, and Caricature on Real. Hover to play the videos.

Team