Deep learning-based classification of breast tumors in ultrasound images / Ayub Ahmed Omar

Bibliographic Details
Main Author: Ayub Ahmed, Omar
Format: Thesis
Published: 2022
Subjects:
Online Access:http://studentsrepo.um.edu.my/14364/1/Ayub_Ahmed_Omar.jpg
http://studentsrepo.um.edu.my/14364/3/ayub.pdf
http://studentsrepo.um.edu.my/14364/
Description
Summary: The use of ultrasound imaging to diagnose breast cancer at an early stage is a popular and effective method. The issue with traditional breast ultrasound diagnosis is that, unlike magnetic resonance imaging (MRI) and mammography, it is prone to error due to its subjectivity, which can result in missed diagnoses and unnecessary biopsies. In this research project, recent breast tumor classification algorithms are investigated and analyzed, and the limitations and gaps in previous techniques are highlighted. The Breast Ultrasound Images Dataset (BUID) was prepared and preprocessed in order to train both a U-Net model and a convolutional neural network (CNN) classifier. The U-Net model is used to locate tumor growth in the original medical images because of its capacity to classify each pixel of the input image and to produce an output image of the same size as the input. A CNN classifier is then built to classify the mask images generated by the U-Net model as benign, malignant, or normal. Accuracy metrics and the Dice loss function are used to evaluate the performance of both models. The U-Net model achieved an accuracy of 93% and a Dice loss of 0.4391, while the CNN classifier achieved an accuracy of 85%.
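
As a rough illustration of the two-stage pipeline the summary describes, and not the author's actual code, the following Python (TensorFlow/Keras) sketch shows a small U-Net that produces a per-pixel tumor mask of the same size as its input, a CNN that classifies the predicted mask as benign, malignant, or normal, and a Dice loss used for the segmentation stage. All layer sizes, the 128x128 grayscale input shape, and the optimizer settings are illustrative assumptions; the thesis's actual architecture is not reproduced here.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def dice_loss(y_true, y_pred, smooth=1.0):
        # Dice loss = 1 - Dice coefficient; the summary reports a Dice
        # loss of 0.4391 for the trained U-Net.
        y_true = tf.cast(tf.reshape(y_true, [-1]), tf.float32)
        y_pred = tf.cast(tf.reshape(y_pred, [-1]), tf.float32)
        intersection = tf.reduce_sum(y_true * y_pred)
        denom = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
        return 1.0 - (2.0 * intersection + smooth) / (denom + smooth)

    def build_unet(shape=(128, 128, 1)):
        # Shallow U-Net: one encoder/decoder level with a skip
        # connection, so the output mask matches the input size.
        inp = layers.Input(shape=shape)
        c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
        p1 = layers.MaxPooling2D()(c1)
        c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
        u1 = layers.UpSampling2D()(c2)
        m1 = layers.Concatenate()([u1, c1])
        c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m1)
        out = layers.Conv2D(1, 1, activation="sigmoid")(c3)  # per-pixel mask
        return Model(inp, out)

    def build_classifier(shape=(128, 128, 1), n_classes=3):
        # Small CNN that labels a predicted mask as benign, malignant,
        # or normal.
        inp = layers.Input(shape=shape)
        x = layers.Conv2D(16, 3, activation="relu")(inp)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(32, 3, activation="relu")(x)
        x = layers.GlobalAveragePooling2D()(x)
        out = layers.Dense(n_classes, activation="softmax")(x)
        return Model(inp, out)

    unet = build_unet()
    unet.compile(optimizer="adam", loss=dice_loss, metrics=["accuracy"])
    clf = build_classifier()
    clf.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    # Stage 1: unet.fit(images, masks, ...) on the ultrasound images.
    # Stage 2: clf.fit(unet.predict(images), labels, ...) on the
    # generated mask images, matching the order described in the summary.

The two stages are trained separately, in the order the summary gives: the U-Net is fit on image/mask pairs first, and the classifier is then fit on the masks the U-Net generates.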