USING BAYESIAN METHODS TO OPTIMIZE TRAINING OF NEURAL NETWORK MODELS FOR APPLICATION TESTING
https://doi.org/10.34822/1999-7604-2022-3-46-56
Abstract
Neural network models are a universal tool for solving probabilistic problems in a wide range of fields. However, on some sets of problems, typical neural network architectures and training algorithms produce overfitted networks, which degrades the accuracy of the posterior distribution computed by the output layer. In such cases the network cannot fully adapt through self-training because the relevant data are absent from the training sample, which reduces its ability to detect anomalies. One way to address this problem is to apply Bayesian methods to both training of and inference in neural networks. In the Bayesian approach, the weights of the network are treated as a probability distribution over all values admissible for the network's parameters. The classical learning and probabilistic inference mechanisms used in Bayesian networks, such as algorithms based on Monte Carlo methods and Markov chains, can then be used to determine the weights. The article considers the use of Bayesian neural networks for modeling the process of web application testing.
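The core idea described in the abstract can be illustrated with a minimal sketch (not taken from the article): the parameters of a toy logistic model are treated as random variables with a Gaussian prior, and their posterior is sampled with random-walk Metropolis, one of the Markov chain Monte Carlo algorithms the abstract mentions. The data, step size, and sample counts below are purely illustrative.

```python
import math
import random

random.seed(0)

# Toy data: scalar input -> binary outcome (e.g., a pass/fail test result).
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

def log_posterior(w, b):
    """Log of a N(0, 1) prior on each parameter plus the Bernoulli likelihood."""
    lp = -0.5 * (w * w + b * b)                   # Gaussian prior on (w, b)
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid "network" output
        p = min(max(p, 1e-12), 1.0 - 1e-12)       # numeric guard
        lp += y * math.log(p) + (1 - y) * math.log(1 - p)
    return lp

def metropolis(n_steps=5000, step=0.5):
    """Random-walk Metropolis sampling over the weight posterior."""
    w, b = 0.0, 0.0
    lp = log_posterior(w, b)
    samples = []
    for _ in range(n_steps):
        w_new = w + random.gauss(0, step)
        b_new = b + random.gauss(0, step)
        lp_new = log_posterior(w_new, b_new)
        if math.log(random.random()) < lp_new - lp:  # accept/reject step
            w, b, lp = w_new, b_new, lp_new
        samples.append((w, b))
    return samples[n_steps // 2:]  # discard the first half as burn-in

samples = metropolis()
w_mean = sum(s[0] for s in samples) / len(samples)
print(f"posterior mean of w: {w_mean:.2f}")
```

Instead of a single point estimate of the weights, the sampler yields a distribution over them, so the output layer's predictions carry calibrated uncertainty; with the positively separated data above, the posterior mass of `w` concentrates on positive values.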
About the Author
P. V. Polukhin
Russian Federation
Candidate of Sciences (Engineering)
E-mail: alfa_force@bk.ru
References
1. Skansi S. Introduction to Deep Learning: From Logical Calculus to Artificial Intelligence. 2018. 191 p.
2. Zuev V. N., Kemaykin V. L. Modified Algorithms for Training Neural Networks // Programmnye produkty i sistemy (Software & Systems). 2019. Vol. 4, No. 2. P. 258‒262. (In Russ.)
3. Vershkov N. A., Kuchukov V. A., Kuchukova N. N. A Theoretical Approach to the Search for a Global Extremum in Neural Network Training // Proceedings of ISP RAS. 2019. Vol. 31, No. 2. P. 41‒52. (In Russ.)
4. Russell S., Norvig P. Artificial Intelligence: A Modern Approach. Hoboken : Pearson, 2020. 1023 p.
5. Bunch P., Godsill S. Improved Particle Approximations to the Joint Smoothing Distribution Using Markov Chain Monte Carlo // IEEE Transactions on Signal Processing. 2013. Vol. 61, Is. 4. P. 946‒953.
6. Mohan K., Pearl J. Graphical Models for Processing Missing Data // Journal of the American Statistical Association. 2018. Vol. 116, Is. 534. P. 1023‒1037.
7. Azarnova T. V., Polukhin P. V., Asnina N. G., Proskurin D. K. Forming the Structure of a Bayesian Network for the Process of Testing the Reliability of Information Systems // Bulletin of Voronezh State Technical University. 2017. Vol. 13, No. 6. 156 p. (In Russ.)
8. Moral P. D., Doucet A. Particle Methods: An Introduction with Application // ESAIM. 2014. Vol. 44. P. 1‒46.
9. Melnikova I. V., Smetannikov D. I. A Study of Equations for Probabilistic Characteristics of Random Processes Defined by Stochastic Equations // Proceedings of IMM UB RAS (Trudy IMM UrO RAN). 2018. Vol. 24, No. 2. P. 185‒193. (In Russ.)
10. Bardenet R., Doucet A., Holmes C. Towards Scaling up Markov Chain Monte Carlo: An Adaptive Subsampling Approach // Proceedings of the 31st International Conference on Machine Learning, June 22‒24, 2014, Beijing, China. 2014. Vol. 32, Is. 1. P. 405‒413.
11. LeCun Y., Bengio Y., Hinton G. Deep Learning // Nature. 2015. Vol. 521. P. 436‒444.
12. Durmus A., Majewski S., Miasojedow B. Analysis of Langevin Monte Carlo via Convex Optimization // Journal of Machine Learning Research. 2019. Vol. 20. P. 1‒46.
For citations:
Polukhin P.V. USING BAYESIAN METHODS TO OPTIMIZE TRAINING OF NEURAL NETWORK MODELS FOR APPLICATION TESTING. Proceedings in Cybernetics. 2022;(3 (47)):46-56. (In Russ.) https://doi.org/10.34822/1999-7604-2022-3-46-56