Research article

Deep Bayesian networks

  • Published: 05 January 2026
  • MSC : 62M05, 60J22, 68T10

  • Deep Bayesian networks (DBNs) with deep recurrent neural network (DRNN) topography, effective forward-backward learning, and innate model-selection capabilities were introduced. DBNs brought randomness, Bayes-factor methods, and efficient gradient-free, expectation-maximization (EM)-based learning to the DRNN layout. The DBNs' learning, simulation, and Bayes-factor capabilities yielded an effective generative adversarial network (GAN) in the sequential (RNN) setting. Consequently, deep fakes with real probabilistic models could be created from training data. Alternatively, DBNs could be viewed as a broad generalization of hidden Markov models (HMMs) that admits inputs and multiple hidden layers. The proofs establishing these claims rested on the novel idea of transforming the whole network into a completely independent network, where the analysis becomes trivial, via a Girsanov-like theorem.
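To ground the HMM special case the abstract mentions: the scaled forward-backward (Baum-Welch) recursions below are the classical, gradient-free EM scheme that DBNs generalize, and the log marginal likelihood they produce is exactly the quantity a Bayes-factor comparison between two candidate models would use. This is a minimal illustrative sketch of that single-layer, no-input special case, not the paper's DBN algorithm; all names (`forward_backward`, `em_step`, `init`, `trans`, `emit`, `n_symbols`) are assumptions for the example.

```python
# Minimal sketch: scaled forward-backward + one EM (Baum-Welch) step for a
# plain HMM -- the single-layer, no-input special case of a DBN. Illustrative
# only; names and structure are assumptions, not the paper's implementation.
import numpy as np

def forward_backward(obs, init, trans, emit):
    """obs: sequence of integer symbols; trans[i, j] = P(x_{t+1}=j | x_t=i).
    Returns smoothed marginals gamma, pairwise marginals xi, and
    the log marginal likelihood log p(obs | model)."""
    T, K = len(obs), len(init)
    alpha, beta, scale = np.zeros((T, K)), np.zeros((T, K)), np.zeros(T)
    # Forward pass, rescaled at each step to avoid numerical underflow.
    alpha[0] = init * emit[:, obs[0]]
    scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    # Backward pass, reusing the forward scaling constants.
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1]) / scale[t + 1]
    gamma = alpha * beta                       # p(x_t | obs), rows sum to 1
    xi = np.stack([(alpha[t][:, None] * trans * emit[:, obs[t + 1]] * beta[t + 1])
                   / scale[t + 1] for t in range(T - 1)])
    return gamma, xi, np.log(scale).sum()      # last term: log-likelihood

def em_step(obs, init, trans, emit, n_symbols):
    """One gradient-free EM update of (init, trans, emit)."""
    gamma, xi, loglik = forward_backward(obs, init, trans, emit)
    obs = np.asarray(obs)
    new_trans = xi.sum(0) / gamma[:-1].sum(0)[:, None]
    new_emit = np.stack([gamma[obs == s].sum(0) for s in range(n_symbols)], axis=1)
    new_emit /= gamma.sum(0)[:, None]
    return gamma[0], new_trans, new_emit, loglik
```

Iterating `em_step` to convergence on two candidate models and exponentiating the difference of their final `loglik` values gives the Bayes factor underlying the model selection described above; the multi-layer, input-driven DBN recursions in the paper extend these formulas rather than replace them.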

    Citation: Michael A. Kouritzin. Deep Bayesian networks. AIMS Mathematics, 2026, 11(1): 272-290. doi: 10.3934/math.2026011

  • © 2026 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)