In-the-wild deepfake detection using adaptable CNN models with visual class activation mapping for improved accuracy
| Main Authors: | |
|---|---|
| Format: | Conference or Workshop Item |
| Language: | en |
| Published: | 2023 |
| Online Access: | http://eprints.utem.edu.my/id/eprint/28039/1/In-the-wild%20deepfake%20detection%20using%20adaptable%20CNN%20models%20with%20visual%20class%20activation%20mapping%20for%20improved%20accuracy.pdf http://eprints.utem.edu.my/id/eprint/28039/ https://ieeexplore.ieee.org/document/10210096 |
| Summary: | Deepfake technology has become increasingly sophisticated in recent years, making fake images and videos challenging to detect. This paper investigates the performance of adaptable convolutional neural network (CNN) models for detecting Deepfakes. The in-the-wild OpenForensics dataset was used to evaluate four CNN models (DenseNet121, ResNet18, SqueezeNet, and VGG11) at different batch sizes and with various performance metrics. Results show that the adapted VGG11 model with a batch size of 32 achieved the highest accuracy, 94.46%, in detecting Deepfakes, outperforming the other models; DenseNet121 was the second-best performer, achieving 93.89% accuracy at the same batch size. Grad-CAM techniques are used to visualize the decision-making process within the models, aiding understanding of the Deepfake classification process. These findings provide valuable insights into the performance of different deep learning models and can guide the selection of an appropriate model for a specific application. |
|---|
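The summary mentions Grad-CAM visualization of the models' decision-making. As a rough illustration of what that computation involves (not the paper's code), the sketch below implements the core Grad-CAM formula in NumPy: channel weights come from global-average-pooling the gradients of the class score with respect to a convolutional layer's activations, and the heatmap is the ReLU of the weighted sum of activation maps. The activations and gradients here are synthetic stand-ins; in practice they would be captured from a trained CNN such as the adapted VGG11.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap.

    activations: conv-layer feature maps, shape (channels, H, W).
    gradients: gradients of the target class score w.r.t. those
        activations, same shape.
    Returns a heatmap of shape (H, W), normalized to [0, 1].
    """
    # Channel importance weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))             # shape (channels,)
    # Weighted combination of activation maps, then ReLU.
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0)
    # Normalize for visualization (e.g. overlay on the input face crop).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Synthetic example: 4 channels of 7x7 feature maps (illustrative only).
rng = np.random.default_rng(0)
acts = rng.random((4, 7, 7))
grads = rng.random((4, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

In a real pipeline the heatmap would be upsampled to the input resolution and overlaid on the image to show which facial regions drove the real/fake decision.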
