In-the-wild deepfake detection using adaptable CNN models with visual class activation mapping for improved accuracy
Deepfake technology has become increasingly sophisticated in recent years, making fake images and videos challenging to detect. This paper investigates the performance of adaptable convolutional neural network (CNN) models for detecting Deepfakes. The in-the-wild OpenForensics dataset was used to evaluate four CNN models (DenseNet121, ResNet18, SqueezeNet, and VGG11) at different batch sizes and with various performance metrics.
Main Authors: | Saealal, Muhammad Salihin; Ibrahim, Mohd Zamri; Shapiai, Mohd Ibrahim; Fadilah, Norasyikin
---|---
Format: | Conference or Workshop Item
Language: | English
Published: | 2023
Online Access: | http://eprints.utem.edu.my/id/eprint/28039/1/In-the-wild%20deepfake%20detection%20using%20adaptable%20CNN%20models%20with%20visual%20class%20activation%20mapping%20for%20improved%20accuracy.pdf http://eprints.utem.edu.my/id/eprint/28039/ https://ieeexplore.ieee.org/document/10210096
id | my.utem.eprints.28039
record_format | eprints
spelling | my.utem.eprints.28039 2024-10-17T12:22:18Z http://eprints.utem.edu.my/id/eprint/28039/ In-the-wild deepfake detection using adaptable CNN models with visual class activation mapping for improved accuracy. Saealal, Muhammad Salihin; Ibrahim, Mohd Zamri; Shapiai, Mohd Ibrahim; Fadilah, Norasyikin. Deepfake technology has become increasingly sophisticated in recent years, making fake images and videos challenging to detect. This paper investigates the performance of adaptable convolutional neural network (CNN) models for detecting Deepfakes. The in-the-wild OpenForensics dataset was used to evaluate four CNN models (DenseNet121, ResNet18, SqueezeNet, and VGG11) at different batch sizes and with various performance metrics. Results show that the adapted VGG11 model with a batch size of 32 achieved the highest accuracy of 94.46% in detecting Deepfakes, outperforming the other models, with DenseNet121 as the second-best performer at an accuracy of 93.89% with the same batch size. Grad-CAM techniques are used to visualize the decision-making process within the models, aiding understanding of the Deepfake classification process. These findings provide valuable insights into the performance of the different deep learning models and can guide the selection of an appropriate model for a specific application. 2023. Conference or Workshop Item. PeerReviewed. text. en. http://eprints.utem.edu.my/id/eprint/28039/1/In-the-wild%20deepfake%20detection%20using%20adaptable%20CNN%20models%20with%20visual%20class%20activation%20mapping%20for%20improved%20accuracy.pdf Saealal, Muhammad Salihin and Ibrahim, Mohd Zamri and Shapiai, Mohd Ibrahim and Fadilah, Norasyikin (2023) In-the-wild deepfake detection using adaptable CNN models with visual class activation mapping for improved accuracy. In: 5th International Conference on Computer Communication and the Internet, ICCCI 2023, 23 June 2023 through 25 June 2023, Fujisawa. https://ieeexplore.ieee.org/document/10210096
institution | Universiti Teknikal Malaysia Melaka
building | UTEM Library
collection | Institutional Repository
continent | Asia
country | Malaysia
content_provider | Universiti Teknikal Malaysia Melaka
content_source | UTEM Institutional Repository
url_provider | http://eprints.utem.edu.my/
language | English
description | Deepfake technology has become increasingly sophisticated in recent years, making fake images and videos challenging to detect. This paper investigates the performance of adaptable convolutional neural network (CNN) models for detecting Deepfakes. The in-the-wild OpenForensics dataset was used to evaluate four CNN models (DenseNet121, ResNet18, SqueezeNet, and VGG11) at different batch sizes and with various performance metrics. Results show that the adapted VGG11 model with a batch size of 32 achieved the highest accuracy of 94.46% in detecting Deepfakes, outperforming the other models, with DenseNet121 as the second-best performer at an accuracy of 93.89% with the same batch size. Grad-CAM techniques are used to visualize the decision-making process within the models, aiding understanding of the Deepfake classification process. These findings provide valuable insights into the performance of the different deep learning models and can guide the selection of an appropriate model for a specific application.
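The adaptation described above follows the common transfer-learning recipe: take an ImageNet-pretrained backbone, replace its classification head with a two-class (real/fake) output, and apply Grad-CAM at the last convolutional layer to see which face regions drive each prediction. The record contains no code, so the sketch below is a minimal, hypothetical illustration of that recipe, assuming PyTorch/torchvision, a normalized 224x224 face crop as input, and a hand-rolled Grad-CAM; the `grad_cam` helper, the two-logit head, and the choice of hooked layer are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): adapt an ImageNet-pretrained
# VGG11 to binary real/fake classification and compute a Grad-CAM heatmap.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

model = models.vgg11(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 2)  # swap the 1000-class head for real vs. fake
model.eval()

# Hook the last convolutional layer to capture its activations and gradients.
last_conv = [m for m in model.features if isinstance(m, nn.Conv2d)][-1]
store = {}
last_conv.register_forward_hook(lambda mod, inp, out: store.update(act=out))
last_conv.register_full_backward_hook(lambda mod, gin, gout: store.update(grad=gout[0]))

def grad_cam(x, class_idx=1):
    """x: (1, 3, 224, 224) normalized face crop; returns an HxW heatmap in [0, 1]."""
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()                         # gradients for the chosen class
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # channel weights (pooled gradients)
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

# Usage (assumed preprocessing): heatmap = grad_cam(face_tensor, class_idx=1)
```

Swapping `models.vgg11` for `densenet121`, `resnet18`, or `squeezenet1_0` changes only the backbone and the name of its final classifier layer; the Grad-CAM step stays the same apart from which layer is hooked.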
format | Conference or Workshop Item
author | Saealal, Muhammad Salihin; Ibrahim, Mohd Zamri; Shapiai, Mohd Ibrahim; Fadilah, Norasyikin
author_sort | Saealal, Muhammad Salihin
title | In-the-wild deepfake detection using adaptable CNN models with visual class activation mapping for improved accuracy
publishDate | 2023
url | http://eprints.utem.edu.my/id/eprint/28039/1/In-the-wild%20deepfake%20detection%20using%20adaptable%20CNN%20models%20with%20visual%20class%20activation%20mapping%20for%20improved%20accuracy.pdf http://eprints.utem.edu.my/id/eprint/28039/ https://ieeexplore.ieee.org/document/10210096