
Original Article

JJCIT. 2022; 8(3): 218-231


PHYLOGENETIC REPLAY LEARNING IN DEEP NEURAL NETWORKS

Jean-Patrice Glafkides, Gene I. Sher, Herman Akdag.




Abstract

Although substantial advances have been made in training deep neural networks, one problem remains: the vanishing gradient. The very strength of deep neural networks, their depth, is also their weakness, because the vanishing gradient makes it difficult to train the deeper layers thoroughly. This paper proposes "Phylogenetic Replay Learning", a learning methodology that substantially alleviates the vanishing gradient problem. Unlike residual learning methods, it does not restrict the structure of the model; instead, it leverages elements from neuroevolution, transfer learning, and layer-by-layer training. We demonstrate that this new approach produces a better-performing model, and by computing the Shannon entropy of the weights, we show that the deeper layers are trained much more thoroughly and contain statistically significantly more information than when a model is trained in the traditional brute-force manner...
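
As an illustration of the entropy measurement mentioned in the abstract, the sketch below estimates the Shannon entropy of a layer's weights by histogramming the flattened weight values over a fixed range. The binning scheme, the fixed value range, and the weight_entropy helper are illustrative assumptions, not the estimator used in the paper; the idea it demonstrates is that a layer whose weights remain concentrated near their initial values (as under-trained deep layers tend to be) yields a lower entropy estimate than a layer whose weights have spread out during training.

    import numpy as np

    def weight_entropy(weights, num_bins=256, value_range=(-1.0, 1.0)):
        """Estimate the Shannon entropy (in bits) of a layer's weights
        from a histogram of the flattened weight values."""
        counts, _ = np.histogram(weights.ravel(), bins=num_bins, range=value_range)
        probs = counts / counts.sum()
        probs = probs[probs > 0]              # drop empty bins to avoid log2(0)
        return float(-np.sum(probs * np.log2(probs)))

    # Toy comparison (synthetic weights, not the paper's models): a layer whose
    # weights spread out during training vs. one stuck near a small initialization.
    rng = np.random.default_rng(0)
    trained_layer = rng.normal(0.0, 0.3, size=(512, 512))
    under_trained_layer = rng.normal(0.0, 0.01, size=(512, 512))
    print(weight_entropy(trained_layer))        # higher entropy: weights spread across many bins
    print(weight_entropy(under_trained_layer))  # lower entropy: weights crowded into few bins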

Keywords: Neural networks, Neuroevolution, Phylogenetic Replay Learning, Deep learning, Vanishing gradient





