A framework for multiprocessor neural networks systems
Format: Conference or Workshop Item
Language: English
Published: 2012
Online Access: http://eprints.unisza.edu.my/134/1/FH03-FIK-16-05752.jpg ; http://eprints.unisza.edu.my/134/
Summary: Artificial neural networks (ANN) are able to simplify classification tasks and have been steadily improving in both accuracy and efficiency. However, several issues need to be addressed when constructing an ANN to handle different scales of data, especially data on which accuracy is low. Parallelism is considered a practical solution for handling large workloads, but a comprehensive understanding is needed to build a scalable neural network that achieves optimal training time for a large network. This paper therefore proposes several strategies, including neural ensemble techniques and a parallel architecture, for distributing data across several network processor structures to reduce the time required for recognition tasks without compromising accuracy. Initial results indicate that the proposed strategies improve the speedup of large-scale neural networks while maintaining acceptable accuracy.
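The summary describes a data-parallel, ensemble-based strategy: the training data is split across several processors, each processor trains its own network, and the members' outputs are combined so that recognition time drops without a large accuracy penalty. Below is a minimal sketch of that general idea only, not the authors' implementation; the use of scikit-learn's MLPClassifier, Python's multiprocessing pool, the synthetic dataset, and every parameter value are assumptions made purely for illustration.

```python
# Sketch of a data-parallel neural ensemble (illustrative only; libraries and
# parameters are assumptions, not taken from the paper).
import numpy as np
from multiprocessing import Pool
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier


def train_member(shard):
    """Train one small network on one data shard (runs in its own worker process)."""
    X_shard, y_shard = shard
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    net.fit(X_shard, y_shard)
    return net


def ensemble_predict(members, X):
    """Combine member outputs by majority vote over predicted class labels."""
    votes = np.stack([m.predict(X) for m in members])  # shape: (n_members, n_samples)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)


if __name__ == "__main__":
    X, y = make_classification(n_samples=20000, n_features=40, n_informative=20,
                               n_classes=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)

    n_workers = 4  # assumed number of processors
    shards = list(zip(np.array_split(X_train, n_workers),
                      np.array_split(y_train, n_workers)))

    # Each worker trains on its own shard, so wall-clock training time shrinks
    # roughly with the number of processors (the "speedup" the abstract targets).
    with Pool(processes=n_workers) as pool:
        members = pool.map(train_member, shards)

    y_pred = ensemble_predict(members, X_test)
    print("ensemble accuracy:", (y_pred == y_test).mean())
```

Because each member sees only its own shard, per-member training is faster than training one network on the full set, and the majority vote is one simple ensemble rule for limiting the accuracy loss that per-member undertraining would otherwise cause.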