Volume 24 Issue 3
Mar. 1998
Citation: Luo Dehan, Chen Weihai. Active Back-propagation Algorithm Based on Adjusting Error for Multilayer Feed-forward Neural Network[J]. Journal of Beijing University of Aeronautics and Astronautics, 1998, 24(3): 350-353. (in Chinese)

Active Back-propagation Algorithm Based on Adjusting Error for Multilayer Feed-forward Neural Network

  • Received Date: 25 Nov 1997
  • Publish Date: 31 Mar 1998
  • The back-propagation (BP) algorithm has long been used to train multilayer feed-forward neural networks (MLFNN), and several improved BP algorithms have been developed to speed up MLFNN training. The efficiency of these improved algorithms is limited, however, because they ignore the activity of the adjusting error during training. This paper develops an active back-propagation (ABP) algorithm, built on an improved BP algorithm, for training MLFNN. During training, the ABP algorithm alters the adjusting error of the network according to the tendency of the network error, with the aim of making training converge faster. Experiments comparing the ABP algorithm with improved BP algorithms show that ABP trains MLFNN more efficiently.
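
The abstract does not give the exact rule used to alter the adjusting error, so the following is only a minimal Python sketch of the general idea, not the paper's algorithm: the output error of a small MLFNN trained by plain BP is scaled by a hypothetical gain factor that grows while the total error stagnates or rises and relaxes toward 1 while the error is falling. The network size, learning rate, gain bounds, and the XOR task are all illustrative assumptions.

    # Sketch of an "active" error adjustment on top of plain BP.
    # The scaling rule below is an assumption for illustration only;
    # the paper's actual adjustment rule is not given in the abstract.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # 2-4-1 feed-forward network on XOR (illustrative problem, not from the paper)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

    lr = 0.5
    gain = 1.0          # scaling applied to the adjusting error
    prev_sse = np.inf

    for epoch in range(5000):
        # forward pass
        H = sigmoid(X @ W1 + b1)
        Y = sigmoid(H @ W2 + b2)

        E = T - Y
        sse = float(np.sum(E ** 2))

        # error-tendency rule (hypothetical): push harder when the error
        # stagnates or rises, relax toward plain BP when it is falling
        if sse >= prev_sse:
            gain = min(gain * 1.05, 2.0)
        else:
            gain = max(gain * 0.99, 1.0)
        prev_sse = sse

        # backward pass using the adjusted (scaled) error
        delta_out = gain * E * Y * (1 - Y)
        delta_hid = (delta_out @ W2.T) * H * (1 - H)

        W2 += lr * H.T @ delta_out
        b2 += lr * delta_out.sum(axis=0)
        W1 += lr * X.T @ delta_hid
        b1 += lr * delta_hid.sum(axis=0)

    print("final SSE:", sse, "outputs:", Y.ravel())

The only difference from standard BP is the gain applied to delta_out; setting it permanently to 1 recovers the ordinary algorithm, which is what the paper's experimental comparison is built around.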

     

