Deepfake Caricatures

Amplifying attention to artifacts increases deepfake detection by humans and machines

Deepfakes can fuel online misinformation. As deepfakes get harder to recognize with the naked eye, human users become more reliant on deepfake detection models to help them decide whether a video is real or fake. Currently, models yield a prediction of a video's authenticity, but do not integrate a method for alerting a human user. We introduce a framework for amplifying artifacts in deepfake videos to make them more detectable by people. We propose a novel, semi-supervised Artifact Attention module, which is trained on human responses to produce attention maps that highlight video artifacts; these artifacts are then magnified to create a new visual indicator we call “Deepfake Caricatures”. In a user study, we demonstrate that Caricatures substantially increase human detection accuracy across video presentation times and user engagement levels. We also introduce a deepfake detection model that incorporates the Artifact Attention module to increase its accuracy and robustness. Overall, we demonstrate the success of a human-centered approach to designing deepfake mitigation methods.

Framework and Data

Human Artifact Maps

Our framework uses both machine and human supervision to learn to highlight unnatural deepfake artifacts. This has two advantages: it allows us to build better deepfake detectors, and it gives rise to Deepfake Caricatures, a new way of modifying videos to expose doctoring.
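The core idea of a Caricature can be illustrated with a minimal sketch: amplify the frame-to-frame changes of a video wherever an attention map flags suspected artifacts. This is not the project's actual implementation; the function name, the `alpha` amplification factor, and the assumption of grayscale frames with a precomputed attention map are all hypothetical simplifications for illustration.

```python
import numpy as np

def caricature_frames(frames, attention, alpha=3.0):
    """Exaggerate temporal artifacts in a video, guided by an attention map.

    frames:    (T, H, W) float array of grayscale frames in [0, 1]
    attention: (T, H, W) float array in [0, 1], high where artifacts are suspected
    alpha:     hypothetical amplification strength

    Returns frames whose frame-to-frame differences are magnified
    wherever the attention map is high, making artifacts easier to see.
    """
    out = frames.copy()
    for t in range(1, len(frames)):
        # Temporal residual: how the pixel values changed since the last frame.
        residual = frames[t] - frames[t - 1]
        # Boost that change only in attended (artifact-prone) regions.
        out[t] = frames[t] + alpha * attention[t] * residual
    return np.clip(out, 0.0, 1.0)
```

In this toy form, regions with no attention are left untouched, while attended regions have their motion exaggerated by a factor of `1 + alpha`, so subtle warping or flicker becomes visible to the naked eye.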

Code


Access the code for the Deepfake Caricatures project here.

Paper

  • Camilo Fosco*, Emilie Josephs*, Alex Andonian, Aude Oliva
    Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines

Gallery

From right to left: Real Video, Deepfake, Caricature on Fake, and Caricature on Real. Hover to play the videos.

Team