Vision-Based Leaf Disease Detection Using an Improved ShuffleNet Architecture

Bibliographic Details
Main Author: Chyntia Jaby, Entuni
Format: Thesis
Language: English
Published: UNIMAS 2026
Subjects:
Online Access:http://ir.unimas.my/id/eprint/51543/7/Chyntia%20Jaby%20Anak%20Entuni_PhD%20Thesis.pdf
http://ir.unimas.my/id/eprint/51543/8/Chyntia%20Jaby_PhD%20Thesis%20_24%20pages.pdf
http://ir.unimas.my/id/eprint/51543/9/DOW_Chyntia%20Jaby%20Anak%20Entuni.pdf
http://ir.unimas.my/id/eprint/51543/
Description
Summary: In the agricultural sector, accurate and efficient plant disease detection is essential for ensuring food security, minimizing economic losses, and reducing the environmental impact of excessive pesticide use. Traditional models such as Bag of Features (BoF), DenseNet-201, ResNet-50, and ShuffleNet variants often struggle with real-world complexities, including occlusions, varying lighting conditions, and clustered foliage. These limitations hinder their ability to detect diseased leaves in practical agricultural environments. This research introduces an improved ShuffleNet model, incorporating additional convolutional layers and optimized parameters, combined with Kinect-based imaging for precise disease identification. The proposed approach effectively detects grey spot, discoloured leaf, and leaf curling diseases within dense capsicum leaf clusters, achieving a peak accuracy of 91.94%, significantly surpassing conventional methods. The model's success is attributed to rigorous hyperparameter tuning, with an optimal learning rate of 0.010, 50 epochs, a minibatch size of 64, and the Adam optimizer, balancing accuracy and computational efficiency. Additionally, the model generalizes well across multiple crops, including capsicum, rice, corn, tomato, and citrus. In conclusion, this research provides an innovative deep learning-driven plant disease detection framework, capable of operating in complex agricultural conditions. Overall, the experiments achieved 91.94% accuracy, 92.10% precision, 91.60% recall, and 91.80% F1-score across tested crops, confirming the robustness and scalability of the proposed model. Future work should improve efficiency with knowledge distillation for real-time use on edge devices, and combine hyperspectral, thermal imaging, and IoT platforms to better detect diseases, monitor at scale, and support smart farming.
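As a minimal sketch (not code from the thesis itself), the abstract's reported hyperparameters can be collected into a hypothetical configuration, and the reported F1-score can be checked against the standard definition of F1 as the harmonic mean of precision and recall; the `TRAIN_CONFIG` structure and `f1_score` helper below are illustrative assumptions, while the numeric values are those reported in the abstract:

```python
# Hypothetical config holding the hyperparameters reported in the abstract;
# the dict structure itself is an illustration, not taken from the thesis.
TRAIN_CONFIG = {
    "learning_rate": 0.010,
    "epochs": 50,
    "minibatch_size": 64,
    "optimizer": "Adam",
}

def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported metrics: 92.10% precision, 91.60% recall.
# The harmonic mean gives roughly 91.8%, consistent (up to rounding of
# per-crop averages) with the 91.80% F1-score stated in the abstract.
print(round(f1_score(0.9210, 0.9160), 4))
```

This kind of sanity check is useful when reading reported metrics: if the stated F1 diverged noticeably from the harmonic mean of the stated precision and recall, the figures would likely be averaged differently (e.g. macro vs. micro averaging across crops or classes).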