Automated cervical vertebral maturation staging using deep learning: Enhancing accuracy through random oversampling and memory optimization
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | en |
| Published: | Scientific Scholar, 2025 |
| Subjects: | |
| Online Access: | http://ir.unimas.my/id/eprint/48668/1/Automated%20cervical%20vertebral%20maturation%20staging%20using.pdf http://ir.unimas.my/id/eprint/48668/ https://apospublications.com/automated-cervical-vertebral-maturation-staging-using-deep-learning-enhancing-accuracy-through-random-oversampling-and-memory-optimization/ |
| Summary: | Objectives This study introduces a customized deep convolutional neural network (DCNN) framework for automated classification of cervical vertebral maturation stages (CVMS) from lateral cephalometric radiographs, with targeted strategies to address class imbalance and training inefficiencies. Material and Methods A total of 922 radiographs from subjects aged 7–20 years were independently assessed for CVMS by two orthodontists. Images meeting quality criteria were preprocessed to isolate the C2–C4 cervical vertebrae. To address class imbalance, random oversampling (ROS) was applied. The dataset was split into 70% training and 30% validation, with an additional 10% unseen test set to evaluate model generalization. A custom DCNN was developed, with hyperparameter tuning performed through random search, and trained using the Adam optimizer and categorical cross-entropy loss. Early stopping was implemented to prevent overfitting and ensure optimal model convergence. In addition, a memory reset function was applied before each training session to release memory and reset the model’s weights, optimizing memory usage and preventing unwanted bias accumulation during training. Results Initially, the model showed high training accuracy (98%) but poor generalization (57% validation accuracy) due to dataset imbalance. After applying ROS, dataset restructuring, and early stopping, the model’s validation accuracy improved to 88%. On unseen data, the model achieved 76% accuracy, demonstrating better generalization. Recall analysis revealed notable underestimation for CVMS 4 and CVMS 5 (21% and 15% misclassifications), while CVMS 1 and CVMS 6 exhibited minimal misclassifications (8%), mainly within adjacent stages, indicating reasonable stage-progression accuracy. Conclusion This study highlights the potential of a fully automated DCNN for CVMS classification, with promising results. Future work will focus on enhancing stage differentiation, improving classification accuracy, and leveraging advanced AI techniques to enhance model robustness and generalization. |
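The random oversampling step described in the summary can be sketched as follows. This is a minimal illustration of the general ROS technique (duplicating minority-class samples at random until all classes match the majority count); the study does not publish its exact implementation, and the `random_oversample` helper, the toy label counts, and the random seed are all assumptions made for the example.

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until every class
    reaches the majority-class count (hypothetical helper; the paper's
    actual ROS implementation is not published)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(y == c)
        idx.extend(members)                       # keep every original sample
        extra = target - members.size
        if extra > 0:                             # pad minority classes
            idx.extend(rng.choice(members, size=extra, replace=True))
    idx = np.asarray(idx)
    return X[idx], y[idx]

# Toy imbalanced labels for the six CVMS stages (counts are illustrative only)
y = np.array([1] * 50 + [2] * 40 + [3] * 30 + [4] * 10 + [5] * 8 + [6] * 5)
X = np.arange(y.size, dtype=float)[:, None]       # stand-in for image features
X_bal, y_bal = random_oversample(X, y)
```

After oversampling, every stage contributes the same number of samples to training, which is what lets the loss stop being dominated by the majority stages.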
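The hyperparameter tuning via random search mentioned in the summary amounts to sampling configurations at random from a predefined space and keeping the best-scoring one. The sketch below shows only the sampling step; the search space, trial count, and parameter names are illustrative assumptions, not the study's actual settings.

```python
import random

# Hypothetical search space; the paper does not publish its ranges.
search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "dropout": [0.2, 0.3, 0.5],
    "conv_filters": [16, 32, 64],
}

def random_search(space, n_trials, seed=0):
    """Draw n_trials random hyperparameter configurations, one value
    per parameter per trial (the core of random-search tuning)."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in space.items()}
            for _ in range(n_trials)]

trials = random_search(search_space, n_trials=5)
# In practice each configuration would be used to build and train a
# candidate DCNN, and the one with the best validation score kept.
```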

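The early-stopping mechanism the summary credits with preventing overfitting can be reduced to a few lines of control flow: stop when validation loss has not improved for a fixed number of epochs and keep the best epoch's weights. The function below is a framework-agnostic sketch of that logic (in Keras one would typically use the `EarlyStopping` callback, and the "memory reset" step corresponds to calling `tf.keras.backend.clear_session()` before each run); the patience value and loss curve are invented for illustration.

```python
def early_stopping_best_epoch(val_losses, patience=3):
    """Scan a per-epoch validation-loss curve; stop once the loss has
    failed to improve for `patience` consecutive epochs and return the
    index of the best epoch (sketch of the mechanism, not the paper's code)."""
    best_loss, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break  # training would halt here; restore best weights
    return best_epoch

# Illustrative validation-loss curve: improves, then plateaus.
val_curve = [0.91, 0.74, 0.62, 0.60, 0.63, 0.64, 0.61]
best = early_stopping_best_epoch(val_curve, patience=3)
```

Resetting the session and weights before each training run, as the study describes, ensures successive hyperparameter trials start from a clean state rather than inheriting the previous trial's memory footprint or parameters.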