
Proceedings in Cybernetics


VARIATIONAL LEARNING AND SURVEY ALGORITHMS OF DYNAMIC BAYESIAN NETWORKS IN PARTIAL OBSERVABILITY OF PARAMETERS

https://doi.org/10.34822/1999-7604-2022-2-75-84

Abstract

Probabilistic models based on Bayesian networks are a common mechanism for describing processes that unfold under uncertainty. Optimizing the solution of factorization problems, the propagation of evidence, and the calculation of the complete joint distribution over the vertices of a Bayesian network graph is a current research direction aimed at improving the computation of the variational characteristics of dynamic Bayesian networks. The study considers representing Bayesian networks as hypergraphs, motivated by the need to develop efficient learning algorithms for Bayesian networks and to determine the main approaches to conducting the network survey (query). The specifics of variational inference in forming transition models between adjacent time slices are examined, taking the semantics of dynamic Bayesian networks into account. Algorithms for discrete and continuous models of dynamic Bayesian networks are presented. The proposed approaches make it possible to optimize the computation of a dynamic Bayesian network's prior distributions, simplify its topological structure, and streamline the network survey when new evidence is obtained and the probability distribution over hidden variables must be determined from the evidence data.
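
The survey step described above, determining the probability distribution over hidden variables once new evidence is obtained, can be illustrated with a minimal sketch. The Python fragment below is not the paper's algorithm: it assumes a hypothetical discrete two-slice dynamic Bayesian network with illustrative prior, transition, and observation tables, and performs exact forward filtering (predict through the transition model, then condition on the evidence).

import numpy as np

# Hidden state X and evidence E each take two values {0, 1}.
# All probability tables are illustrative assumptions, not values from the paper.
prior = np.array([0.7, 0.3])                # P(X_0)
transition = np.array([[0.9, 0.1],          # P(X_t | X_{t-1} = 0)
                       [0.2, 0.8]])         # P(X_t | X_{t-1} = 1)
observation = np.array([[0.8, 0.2],         # P(E_t | X_t = 0)
                        [0.3, 0.7]])        # P(E_t | X_t = 1)

def filter_step(belief, evidence):
    """One forward step: predict the next hidden state through the
    transition model, then condition on the evidence and renormalize."""
    predicted = transition.T @ belief           # P(X_t | e_1:t-1)
    updated = observation[:, evidence] * predicted
    return updated / updated.sum()

belief = prior
for t, e in enumerate([1, 1, 0], start=1):      # a short, made-up evidence sequence
    belief = filter_step(belief, e)
    print(f"t={t}, evidence={e}, P(X_t | e_1:t) = {belief.round(3)}")

For continuous models the same predict-and-update cycle is typically realized with Kalman or particle filtering, and a variational treatment replaces the exact posterior with an approximating distribution chosen to minimize the Kullback-Leibler divergence to it.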

About the Author

P. V. Polukhin
Voronezh State University, Voronezh
Russian Federation

Candidate of Sciences (Engineering)

E-mail: alfa_force@bk.ru





For citations:


Polukhin P.V. VARIATIONAL LEARNING AND SURVEY ALGORITHMS OF DYNAMIC BAYESIAN NETWORKS IN PARTIAL OBSERVABILITY OF PARAMETERS. Proceedings in Cybernetics. 2022;(2 (46)):75-84. https://doi.org/10.34822/1999-7604-2022-2-75-84



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1999-7604 (Online)