Defense against adversarial attack in image recognition

Deep neural networks have proven extremely useful and perform exceedingly well in machine learning tasks such as computer vision, speech recognition, and natural language processing, and in domains such as healthcare and autonomous driving. The high accuracy exhibited by deep learning models has encouraged their use in many real-world scenarios, including safety-critical ones. However, high accuracy does not necessarily mean that these models are reliable or robust enough to be deployed directly in daily life. It has recently been found that these high-performance models can be fooled by adversarial examples: perturbed inputs that are almost indistinguishable from normal inputs to the human eye but severely disrupt the behavior of machine learning models. If adversaries exploit this inherent weakness, the consequences can be dire. Hence, defensive methods should be deployed to make machine learning models robust against adversarial examples. In the reviewed literature, researchers have incorporated randomness into models through methods such as RSE and Adv-BNN, but only Adv-BNN combines adversarial training with a Bayesian neural network (BNN) to inject randomness into its defensive strategy; other works have rarely investigated the effect of combining adversarial training and randomness incorporation in a single defensive method, or of combining adversarial purification with adversarial training. In this thesis, we propose a defensive method that combines a noise-introducing preprocessing pipeline, adversarial purification, and adversarial training, and we investigate various ways of incorporating noise and the effect of doing so in an adversarial defense system.
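
To picture the combination the abstract describes, the following is a minimal Python/PyTorch sketch of one training step that couples a noise-introducing preprocessing stage, a purification stage, and adversarial training. It is an illustration under assumptions, not the thesis's implementation: the FGSM attack, the Gaussian noise scale sigma, the perturbation budget eps, and the purify stub are all placeholders.

import torch
import torch.nn.functional as F

def add_gaussian_noise(x, sigma=0.1):
    # Preprocessing: inject random Gaussian noise (sigma is an assumed value).
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

def purify(x):
    # Stand-in for adversarial purification (e.g. a learned denoiser);
    # the thesis's actual purifier is not specified here.
    return x

def fgsm_example(model, x, y, eps=8 / 255):
    # Craft an adversarial example with the fast gradient sign method;
    # FGSM and eps are illustrative choices, not the thesis's attack setup.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y):
    # One adversarial-training step on noise-preprocessed, purified inputs.
    x_adv = fgsm_example(model, x, y)   # worst-case perturbation
    x_adv = add_gaussian_noise(x_adv)   # randomness incorporation
    x_adv = purify(x_adv)               # adversarial purification
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

In a defense of this shape, the same noise-and-purify preprocessing would typically also sit in front of the classifier at inference time, so the randomness an attacker must contend with matches what the model saw during training.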

Bibliographic Details
Main Author: Ng, Shi Qi
Format: Final Year Project / Dissertation / Thesis
Published: 2024
Subjects: T Technology (General)
Online Access: http://eprints.utar.edu.my/6989/1/fyp_CS_2024_NSQ.pdf
http://eprints.utar.edu.my/6989/
Institution: Universiti Tunku Abdul Rahman
Collection: UTAR Institutional Repository
Citation: Ng, Shi Qi (2024) Defense against adversarial attack in image recognition. Final Year Project, UTAR.