A framework for robust deep learning models against adversarial attacks based on a protection layer approach

Deep learning (DL) has demonstrated remarkable achievements in various fields. Nevertheless, DL models encounter significant challenges in detecting and defending against adversarial samples (AEs). These AEs are meticulously crafted by adversaries, introducing imperceptible perturbations to clean data to deceive DL models. Consequently, AEs pose potential risks to DL applications. In this paper, we propose an effective framework for enhancing the robustness of DL models against adversarial attacks. The framework leverages convolutional neural networks (CNNs) for feature learning, deep neural networks (DNNs) with softmax for classification, and a defense mechanism to identify and exclude AEs. Evasion attacks are employed to create AEs that evade and mislead the classifier by generating malicious samples during the test phase of the DL models, i.e., CNN and DNN, using the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Projected Gradient Descent (PGD), and Square Attack (SA). A protection layer is developed as a detection mechanism placed before the DNN classifier to identify and exclude AEs. The detection mechanism incorporates a machine learning model, which includes one of the following: Fuzzy ARTMAP, Random Forest, K-Nearest Neighbors, XGBoost, or Gradient Boosting Machine. Extensive evaluations are conducted on the MNIST, CIFAR-10, SVHN, and Fashion-MNIST data sets to assess the effectiveness of the proposed framework. The experimental results indicate the framework's ability to effectively and accurately detect AEs generated by four popular attack methods, highlighting the potential of the developed framework in enhancing robustness against AEs.
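As background on the first of the evasion attacks named in the abstract: FGSM perturbs an input by a small step epsilon in the sign direction of the loss gradient with respect to that input. A minimal numpy-only sketch, using a toy logistic-regression "model" of our own rather than the paper's CNN/DNN pipeline:

```python
import numpy as np

# FGSM sketch (an illustration, not the paper's implementation):
# for a logistic model p = sigmoid(w.x + b) with cross-entropy loss,
# the gradient of the loss w.r.t. the input x is (p - y) * w, and
# FGSM moves x by epsilon in the sign direction of that gradient.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """Return an adversarial copy of x for the logistic model (w, b)."""
    p = sigmoid(np.dot(w, x) + b)          # model confidence for class 1
    grad_x = (p - y) * w                   # d(cross-entropy)/dx
    return x + epsilon * np.sign(grad_x)   # FGSM step

# Toy example: a clean point that the model classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm(x, y, w, b, epsilon=0.5)
p_clean = sigmoid(np.dot(w, x) + b)       # ~0.82 on the clean input
p_adv = sigmoid(np.dot(w, x_adv) + b)     # drops to 0.5 on the perturbed input
print(p_clean, p_adv)
```

BIM and PGD extend this same step iteratively (PGD with projection back into an epsilon-ball), while Square Attack is a black-box method that needs no gradient at all.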


Bibliographic Details
Main Authors: Tan, Shing Chiang, Mohammed Al-Andoli, Mohammed Nasser, Goh, Pey Yun, Sim, Kok Swee, Lim, Chee Peng
Format: Article
Language:English
Published: Institute of Electrical and Electronics Engineers Inc. 2024
Online Access:http://eprints.utem.edu.my/id/eprint/27255/2/0272917012024103253681.PDF
http://eprints.utem.edu.my/id/eprint/27255/
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10400453
id my.utem.eprints.27255
record_format eprints
spelling my.utem.eprints.272552024-07-01T14:28:00Z http://eprints.utem.edu.my/id/eprint/27255/ A framework for robust deep learning models against adversarial attacks based on a protection layer approach Tan, Shing Chiang Mohammed Al-Andoli, Mohammed Nasser Goh, Pey Yun Sim, Kok Swee Lim, Chee Peng Deep learning (DL) has demonstrated remarkable achievements in various fields. Nevertheless, DL models encounter significant challenges in detecting and defending against adversarial samples (AEs). These AEs are meticulously crafted by adversaries, introducing imperceptible perturbations to clean data to deceive DL models. Consequently, AEs pose potential risks to DL applications. In this paper, we propose an effective framework for enhancing the robustness of DL models against adversarial attacks. The framework leverages convolutional neural networks (CNNs) for feature learning, Deep Neural Networks (DNNs) with softmax for classification, and a defense mechanism to identify and exclude AEs. Evasion attacks are employed to create AEs to evade and mislead the classifier by generating malicious samples during the test phase of DL models i.e., CNN and DNN, using the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Projected Gradient Descent (PGD), and Square Attack (SA). A protection layer is developed as a detection mechanism placed before the DNN classifier to identify and exclude AEs. The detection mechanism incorporates a machine learning model, which includes one of the following: Fuzzy ARTMAP, Random Forest, K-Nearest Neighbors, XGBoost, or Gradient Boosting Machine. Extensive evaluations are conducted on the MNIST, CIFAR-10, SVHN, and Fashion-MNIST data sets to assess the effectiveness of the proposed framework. The experimental results indicate the framework's ability to effectively and accurately detect AEs generated by four popular attacking methods, highlighting the potential of our developed framework in enhancing its robustness against AEs. 
Institute of Electrical and Electronics Engineers Inc. 2024 Article PeerReviewed text en http://eprints.utem.edu.my/id/eprint/27255/2/0272917012024103253681.PDF Tan, Shing Chiang and Mohammed Al-Andoli, Mohammed Nasser and Goh, Pey Yun and Sim, Kok Swee and Lim, Chee Peng (2024) A framework for robust deep learning models against adversarial attacks based on a protection layer approach. IEEE Access, 12. pp. 17522-17540. ISSN 2169-3536 https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10400453 10.1109/ACCESS.2024.3354699
institution Universiti Teknikal Malaysia Melaka
building UTEM Library
collection Institutional Repository
continent Asia
country Malaysia
content_provider Universiti Teknikal Malaysia Melaka
content_source UTEM Institutional Repository
url_provider http://eprints.utem.edu.my/
language English
description Deep learning (DL) has demonstrated remarkable achievements in various fields. Nevertheless, DL models encounter significant challenges in detecting and defending against adversarial samples (AEs). These AEs are meticulously crafted by adversaries, introducing imperceptible perturbations to clean data to deceive DL models. Consequently, AEs pose potential risks to DL applications. In this paper, we propose an effective framework for enhancing the robustness of DL models against adversarial attacks. The framework leverages convolutional neural networks (CNNs) for feature learning, deep neural networks (DNNs) with softmax for classification, and a defense mechanism to identify and exclude AEs. Evasion attacks are employed to create AEs that evade and mislead the classifier by generating malicious samples during the test phase of the DL models, i.e., CNN and DNN, using the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Projected Gradient Descent (PGD), and Square Attack (SA). A protection layer is developed as a detection mechanism placed before the DNN classifier to identify and exclude AEs. The detection mechanism incorporates a machine learning model, which includes one of the following: Fuzzy ARTMAP, Random Forest, K-Nearest Neighbors, XGBoost, or Gradient Boosting Machine. Extensive evaluations are conducted on the MNIST, CIFAR-10, SVHN, and Fashion-MNIST data sets to assess the effectiveness of the proposed framework. The experimental results indicate the framework's ability to effectively and accurately detect AEs generated by four popular attack methods, highlighting the potential of the developed framework in enhancing robustness against AEs.
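The protection-layer idea described above can be sketched as a binary detector trained on features labelled clean versus adversarial, with flagged samples excluded before they reach the classifier. A minimal numpy-only 1-nearest-neighbour sketch (our own toy feature space and detector, standing in for the paper's Fuzzy ARTMAP / Random Forest / KNN / XGBoost / GBM options):

```python
import numpy as np

# Protection-layer sketch (an illustration, not the paper's code):
# a 1-NN detector labels each test feature with the class of its
# nearest training feature (0 = clean, 1 = adversarial); only
# unflagged samples are passed on to the downstream classifier.

def nn_detect(train_x, train_y, sample):
    """Label a sample with the class of its nearest training feature."""
    dists = np.linalg.norm(train_x - sample, axis=1)
    return train_y[np.argmin(dists)]

# Toy feature space: clean features cluster near 0, adversarial near 3.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.3, size=(20, 2))
adv = rng.normal(3.0, 0.3, size=(20, 2))
train_x = np.vstack([clean, adv])
train_y = np.array([0] * 20 + [1] * 20)

test_batch = np.array([[0.1, -0.2], [3.1, 2.9], [0.0, 0.3]])
flags = np.array([nn_detect(train_x, train_y, s) for s in test_batch])
passed = test_batch[flags == 0]   # only unflagged samples reach the DNN
print(flags, len(passed))         # -> [0 1 0] 2
```

Any of the detectors listed in the abstract slots into the `nn_detect` role here; the framework's contribution is placing such a detector ahead of the softmax classifier rather than hardening the classifier itself.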
format Article
author Tan, Shing Chiang
Mohammed Al-Andoli, Mohammed Nasser
Goh, Pey Yun
Sim, Kok Swee
Lim, Chee Peng
spellingShingle Tan, Shing Chiang
Mohammed Al-Andoli, Mohammed Nasser
Goh, Pey Yun
Sim, Kok Swee
Lim, Chee Peng
A framework for robust deep learning models against adversarial attacks based on a protection layer approach
author_facet Tan, Shing Chiang
Mohammed Al-Andoli, Mohammed Nasser
Goh, Pey Yun
Sim, Kok Swee
Lim, Chee Peng
author_sort Tan, Shing Chiang
title A framework for robust deep learning models against adversarial attacks based on a protection layer approach
title_short A framework for robust deep learning models against adversarial attacks based on a protection layer approach
title_full A framework for robust deep learning models against adversarial attacks based on a protection layer approach
title_fullStr A framework for robust deep learning models against adversarial attacks based on a protection layer approach
title_full_unstemmed A framework for robust deep learning models against adversarial attacks based on a protection layer approach
title_sort framework for robust deep learning models against adversarial attacks based on a protection layer approach
publisher Institute of Electrical and Electronics Engineers Inc.
publishDate 2024
url http://eprints.utem.edu.my/id/eprint/27255/2/0272917012024103253681.PDF
http://eprints.utem.edu.my/id/eprint/27255/
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10400453
_version_ 1804070306479865856