An improved back propagation learning algorithm using second order methods with gain parameter
Main Authors:
Format: Article
Language: English
Published: Penerbit UTHM, 2018
Subjects:
Online Access: http://eprints.uthm.edu.my/4558/1/AJ%202018%20%28791%29%20An%20improved%20back%20propagation%20leaning%20algorithm%20using%20second%20order%20methods%20with%20gain%20parameter.pdf
http://eprints.uthm.edu.my/4558/
Summary: The Back Propagation (BP) algorithm is one of the oldest learning techniques used by Artificial Neural Networks (ANN) and has been successfully applied to a variety of practical problems. However, the algorithm still suffers from drawbacks such as easily getting stuck at local minima and needing a long time to converge to an acceptable solution. Recently, the introduction of Second Order methods has brought significant improvement to BP learning, but these methods still have drawbacks such as slow convergence and high complexity. To overcome these limitations, this research proposes a modified BP approach that combines two Second Order methods, Conjugate Gradient and Quasi-Newton, with a 'gain' parameter. The performance of the proposed approach is evaluated in terms of lowest number of epochs, lowest CPU time and highest accuracy on five benchmark classification datasets: Glass, Horse, 7-Bit Parity, Indian Liver Patient and Lung Cancer. The results show that the proposed Second Order methods with 'gain' perform better than the standard BP algorithm.
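The 'gain' parameter referred to in the summary is commonly taken to scale the slope of the sigmoid activation. The following is a minimal, hypothetical Python sketch of that idea, pairing a trainable gain with a nonlinear Conjugate Gradient (Fletcher-Reeves) update; the toy dataset, step sizes and update rules here are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch only: a single sigmoid unit with a trainable 'gain' c,
# f(net) = 1 / (1 + exp(-c * net)), trained by nonlinear Conjugate Gradient
# (Fletcher-Reeves). All specifics here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy binary-classification data, just to make this runnable.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(net, gain):
    return 1.0 / (1.0 + np.exp(-gain * net))

def loss_and_grad(params):
    w, gain = params[:-1], params[-1]
    net = X @ w
    out = sigmoid(net, gain)
    err = out - y                            # dE/dout for mean squared error
    dnet = err * out * (1.0 - out)           # chain rule through the sigmoid
    grad_w = X.T @ (dnet * gain) / len(y)    # d(gain*net)/dw brings in the gain
    grad_gain = np.mean(dnet * net)          # d(gain*net)/dgain = net
    loss = 0.5 * np.mean(err ** 2)
    return loss, np.append(grad_w, grad_gain)

def line_search(params, d, loss, g, step=1.0):
    # Simple backtracking line search (Armijo condition).
    while step > 1e-6:
        new_loss, _ = loss_and_grad(params + step * d)
        if new_loss <= loss + 1e-4 * step * (g @ d):
            return step
        step *= 0.5
    return step

# Nonlinear Conjugate Gradient with Fletcher-Reeves beta.
params = np.append(rng.normal(scale=0.1, size=4), 1.0)  # weights + gain = 1
loss, g = loss_and_grad(params)
d = -g
for epoch in range(200):
    step = line_search(params, d, loss, g)
    params = params + step * d
    loss, g_new = loss_and_grad(params)
    beta = (g_new @ g_new) / (g @ g)         # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    g = g_new
print(f"final loss: {loss:.4f}, learned gain: {params[-1]:.3f}")
```

In this sketch the gain appears in both the forward pass and the weight gradient, which is the usual motivation for the parameter: it rescales the effective slope of the activation and thus the size of the error signal propagated back through the network.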