Handbook of Digital Face Manipulation and Detection: From DeepFakes to Morphing Attacks.

Bibliographic Details
Superior document: Advances in Computer Vision and Pattern Recognition Series
Contributors:
Place / Publishing House: Cham: Springer International Publishing AG, 2022.
©2022.
Year of Publication: 2022
Edition: 1st ed.
Language: English
Series: Advances in Computer Vision and Pattern Recognition Series
Physical Description: 1 online resource (483 pages)
Table of Contents:
  • Intro
  • Preface
  • Contents
  • Part I Introduction
  • 1 An Introduction to Digital Face Manipulation
  • 1.1 Introduction
  • 1.2 Types of Digital Face Manipulations
  • 1.2.1 Entire Face Synthesis
  • 1.2.2 Identity Swap
  • 1.2.3 Face Morphing
  • 1.2.4 Attribute Manipulation
  • 1.2.5 Expression Swap
  • 1.2.6 Audio-to-Video and Text-to-Video
  • 1.3 Conclusions
  • References
  • 2 Digital Face Manipulation in Biometric Systems
  • 2.1 Introduction
  • 2.2 Biometric Systems
  • 2.2.1 Processes
  • 2.2.2 Face Recognition
  • 2.3 Digital Face Manipulation in Biometric Systems
  • 2.3.1 Impact on Biometric Performance
  • 2.3.2 Manipulation Detection Scenarios
  • 2.4 Experiments
  • 2.4.1 Experimental Setup
  • 2.4.2 Performance Evaluation
  • 2.5 Summary and Outlook
  • References
  • 3 Multimedia Forensics Before the Deep Learning Era
  • 3.1 Introduction
  • 3.2 PRNU-Based Approach
  • 3.2.1 PRNU Estimation
  • 3.2.2 Noise Residual Computation
  • 3.2.3 Forgery Detection Test
  • 3.2.4 Estimation Through Guided Filtering
  • 3.3 Blind Methods
  • 3.3.1 Noise Patterns
  • 3.3.2 Compression Artifacts
  • 3.3.3 Editing Artifacts
  • 3.4 Learning-Based Methods with Handcrafted Features
  • 3.5 Conclusions
  • References
  • Part II Digital Face Manipulation and Security Implications
  • 4 Toward the Creation and Obstruction of DeepFakes
  • 4.1 Introduction
  • 4.2 Backgrounds
  • 4.2.1 DeepFake Video Generation
  • 4.2.2 DeepFake Detection Methods
  • 4.2.3 Existing DeepFake Datasets
  • 4.3 Celeb-DF: the Creation of DeepFakes
  • 4.3.1 Synthesis Method
  • 4.3.2 Visual Quality
  • 4.3.3 Evaluations
  • 4.4 Landmark Breaker: the Obstruction of DeepFakes
  • 4.4.1 Facial Landmark Extractors
  • 4.4.2 Adversarial Perturbations
  • 4.4.3 Notation and Formulation
  • 4.4.4 Optimization
  • 4.4.5 Experimental Settings
  • 4.4.6 Results
  • 4.4.7 Robustness Analysis
  • 4.4.8 Ablation Study
  • 4.5 Conclusion
  • References
  • 5 The Threat of Deepfakes to Computer and Human Visions
  • 5.1 Introduction
  • 5.2 Related Work
  • 5.3 Databases and Methods
  • 5.3.1 DeepfakeTIMIT
  • 5.3.2 DF-Mobio
  • 5.3.3 Google and Jigsaw
  • 5.3.4 Facebook
  • 5.3.5 Celeb-DF
  • 5.4 Evaluation Protocols
  • 5.4.1 Measuring Vulnerability
  • 5.4.2 Measuring Detection
  • 5.5 Vulnerability of Face Recognition
  • 5.6 Subjective Assessment of Human Vision
  • 5.6.1 Subjective Evaluation Results
  • 5.7 Evaluation of Deepfake Detection Algorithms
  • 5.8 Conclusion
  • References
  • 6 Morph Creation and Vulnerability of Face Recognition Systems to Morphing
  • 6.1 Introduction
  • 6.2 Face Morphing Generation
  • 6.2.1 Landmark Based Morphing
  • 6.2.2 Deep Learning-Based Face Morph Generation
  • 6.3 Vulnerability of Face Recognition Systems to Face Morphing
  • 6.3.1 Data Sets
  • 6.3.2 Results
  • 6.3.3 Deep Learning-Based Morphing Results
  • 6.4 Conclusions
  • References
  • 7 Adversarial Attacks on Face Recognition Systems
  • 7.1 Introduction
  • 7.2 Taxonomy of Attacks on FRS
  • 7.2.1 Threat Model
  • 7.3 Poisoning Attacks on FRS
  • 7.3.1 Fast Gradient Sign Method
  • 7.3.2 Projected Gradient Descent
  • 7.4 Carlini and Wagner (CW) Attacks
  • 7.5 ArcFace FRS Model
  • 7.6 Experiments and Analysis
  • 7.6.1 Clean Dataset
  • 7.6.2 Attack Dataset
  • 7.6.3 FRS Model for Baseline Verification
  • 7.6.4 FRS Baseline Performance Evaluation
  • 7.6.5 FRS Performance on Probe Data Poisoning
  • 7.6.6 FRS Performance on Enrolment Data Poisoning
  • 7.7 Impact of Adversarial Training with FGSM Attacks
  • 7.8 Discussion
  • 7.9 Conclusions and Future Directions
  • References
  • 8 Talking Faces: Audio-to-Video Face Generation
  • 8.1 Introduction
  • 8.2 Related Work
  • 8.2.1 Audio Representation
  • 8.2.2 Face Modeling
  • 8.2.3 Audio-to-Face Animation
  • 8.2.4 Post-processing
  • 8.3 Datasets and Metrics
  • 8.3.1 Dataset
  • 8.3.2 Metrics
  • 8.4 Discussion
  • 8.4.1 Fine-Grained Facial Control
  • 8.4.2 Generalization
  • 8.5 Conclusion
  • 8.6 Further Reading
  • References
  • Part III Digital Face Manipulation Detection
  • 9 Detection of AI-Generated Synthetic Faces
  • 9.1 Introduction
  • 9.2 AI Face Generation
  • 9.3 GAN Fingerprints
  • 9.4 Detection Methods in the Spatial Domain
  • 9.4.1 Handcrafted Features
  • 9.4.2 Data-Driven Features
  • 9.5 Detection Methods in the Frequency Domain
  • 9.6 Learning Features that Generalize
  • 9.7 Generalization Analysis
  • 9.8 Robustness Analysis
  • 9.9 Further Analyses on GAN Detection
  • 9.10 Open Challenges
  • References
  • 10 3D CNN Architectures and Attention Mechanisms for Deepfake Detection
  • 10.1 Introduction
  • 10.2 Related Work
  • 10.2.1 Deepfake Detection
  • 10.2.2 Attention Mechanisms
  • 10.3 Dataset
  • 10.4 Algorithms
  • 10.5 Experiments
  • 10.5.1 All Manipulation Techniques
  • 10.5.2 Single Manipulation Techniques
  • 10.5.3 Cross-Manipulation Techniques
  • 10.5.4 Effect of Attention in 3D ResNets
  • 10.5.5 Visualization of Pertinent Features in Deepfake Detection
  • 10.6 Conclusions
  • References
  • 11 Deepfake Detection Using Multiple Data Modalities
  • 11.1 Introduction
  • 11.2 Deepfake Detection via Video Spatiotemporal Features
  • 11.2.1 Overview
  • 11.2.2 Model Component
  • 11.2.3 Training Details
  • 11.2.4 Boosting Network
  • 11.2.5 Test Time Augmentation
  • 11.2.6 Result Analysis
  • 11.3 Deepfake Detection via Audio Spectrogram Analysis
  • 11.3.1 Overview
  • 11.3.2 Dataset
  • 11.3.3 Spectrogram Generation
  • 11.3.4 Convolutional Neural Network (CNN)
  • 11.3.5 Experimental Results
  • 11.4 Deepfake Detection via Audio-Video Inconsistency Analysis
  • 11.4.1 Finding Audio-Video Inconsistency via Phoneme-Viseme Mismatching
  • 11.4.2 Deepfake Detection Using Affective Cues
  • 11.5 Conclusion
  • References
  • 12 DeepFakes Detection Based on Heart Rate Estimation: Single- and Multi-frame
  • 12.1 Introduction
  • 12.2 Related Works
  • 12.3 DeepFakesON-Phys
  • 12.4 Databases
  • 12.4.1 Celeb-DF v2 Database
  • 12.4.2 DFDC Preview
  • 12.5 Experimental Protocol
  • 12.6 Fake Detection Results: DeepFakesON-Phys
  • 12.6.1 DeepFakes Detection at Frame Level
  • 12.6.2 DeepFakes Detection at Short-Term Video Level
  • 12.7 Conclusions
  • References
  • 13 Capsule-Forensics Networks for Deepfake Detection
  • 13.1 Introduction
  • 13.2 Related Work
  • 13.2.1 Deepfake Generation
  • 13.2.2 Deepfake Detection
  • 13.2.3 Challenges in Deepfake Detection
  • 13.2.4 Capsule Networks
  • 13.3 Capsule-Forensics
  • 13.3.1 Why Capsule-Forensics?
  • 13.3.2 Overview
  • 13.3.3 Architecture
  • 13.3.4 Dynamic Routing Algorithm
  • 13.3.5 Visualization
  • 13.4 Evaluation
  • 13.4.1 Datasets
  • 13.4.2 Metrics
  • 13.4.3 Effect of Improvements
  • 13.4.4 Feature Extractor Comparison
  • 13.4.5 Effect of Statistical Pooling Layers
  • 13.4.6 Capsule-Forensics Network Versus CNNs: Seen Attacks
  • 13.4.7 Capsule-Forensics Network Versus CNNs: Unseen Attacks
  • 13.5 Conclusion and Future Work
  • 13.6 Appendix
  • References
  • 14 DeepFakes Detection: the DeeperForensics Dataset and Challenge
  • 14.1 Introduction
  • 14.2 Related Work
  • 14.2.1 DeepFakes Generation Methods
  • 14.2.2 DeepFakes Detection Methods
  • 14.2.3 DeepFakes Detection Datasets
  • 14.2.4 DeepFakes Detection Benchmarks
  • 14.3 DeeperForensics-1.0 Dataset
  • 14.3.1 Data Collection
  • 14.3.2 DeepFake Variational Auto-Encoder
  • 14.3.3 Scale and Diversity
  • 14.3.4 Hidden Test Set
  • 14.4 DeeperForensics Challenge 2020
  • 14.4.1 Platform
  • 14.4.2 Challenge Dataset
  • 14.4.3 Evaluation Metric
  • 14.4.4 Timeline
  • 14.4.5 Results and Solutions
  • 14.5 Discussion
  • 14.6 Further Reading
  • References
  • 15 Face Morphing Attack Detection Methods
  • 15.1 Introduction
  • 15.2 Related Works
  • 15.3 Morphing Attack Detection Pipeline
  • 15.3.1 Data Preparation and Feature Extraction
  • 15.3.2 Feature Preparation and Classifier Training
  • 15.4 Database
  • 15.4.1 Image Morphing
  • 15.4.2 Image Post-Processing
  • 15.5 Morphing Attack Detection Methods
  • 15.5.1 Pre-Processing
  • 15.5.2 Feature Extraction
  • 15.5.3 Classification
  • 15.6 Experiments
  • 15.6.1 Generalisability
  • 15.6.2 Detection Performance
  • 15.6.3 Post-Processing
  • 15.7 Summary
  • References
  • 16 Practical Evaluation of Face Morphing Attack Detection Methods
  • 16.1 Introduction
  • 16.2 Related Work
  • 16.3 Creation of Morphing Datasets
  • 16.3.1 Creating Morphs
  • 16.3.2 Datasets
  • 16.4 Texture-Based Face Morphing Attack Detection
  • 16.5 Morphing Disguising
  • 16.6 Experiments and Results
  • 16.6.1 Within Dataset Performance
  • 16.6.2 Cross Dataset Performance
  • 16.6.3 Mixed Dataset Performance
  • 16.6.4 Robustness Against Additive Gaussian Noise
  • 16.6.5 Robustness Against Scaling
  • 16.6.6 Selection of Similar Subjects
  • 16.7 The SOTAMD Benchmark
  • 16.8 Conclusion
  • References
  • 17 Facial Retouching and Alteration Detection
  • 17.1 Introduction
  • 17.2 Retouching and Alteration Detection-Review
  • 17.2.1 Digital Retouching Detection
  • 17.2.2 Digital Alteration Detection
  • 17.2.3 Publicly Available Databases
  • 17.3 Experimental Evaluation and Observations
  • 17.3.1 Cross-Domain Alteration Detection
  • 17.3.2 Cross Manipulation Alteration Detection
  • 17.3.3 Cross Ethnicity Alteration Detection
  • 17.4 Open Challenges
  • 17.5 Conclusion
  • References
  • Part IV Further Topics, Trends, and Challenges
  • 18 Detecting Soft-Biometric Privacy Enhancement
  • 18.1 Introduction
  • 18.2 Background and Related Work
  • 18.2.1 Problem Formulation and Existing Solutions
  • 18.2.2 Soft-Biometric Privacy Models