
Network inference with hidden units

  • Received: 01 December 2012; Accepted: 29 June 2018; Published: 01 September 2013
  • MSC : Primary: 62M45, 82C20; Secondary: 62J02.

  • We derive learning rules for finding the connections between units in stochastic dynamical networks from the recorded history of a "visible" subset of the units. We consider two models. In both of them, the visible units are binary and stochastic. In one model the "hidden" units are continuous-valued, with sigmoidal activation functions, and in the other they are binary and stochastic like the visible ones. We derive exact learning rules for both cases. For the stochastic case, performing the exact calculation requires, in general, repeated summations over a number of configurations that grows exponentially with the size of the system and the data length, which is not feasible for large systems. We derive a mean field theory, based on a factorized ansatz for the distribution of hidden-unit states, which offers an attractive alternative for large systems. We present the results of some numerical calculations that illustrate key features of the two models and, for the stochastic case, the exact and approximate calculations.

    Citation: Joanna Tyrcha, John Hertz. Network inference with hidden units. Mathematical Biosciences and Engineering, 2014, 11(1): 149-165. doi: 10.3934/mbe.2014.11.149
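
    For orientation, the display below sketches the kind of model the abstract refers to, assuming the standard kinetic Ising (Glauber) formulation; the symbols (couplings $J_{ij}$ among visible units $s_i(t)=\pm 1$, couplings $K_{ia}$ from hidden units $\sigma_a(t)$) are illustrative notation, not taken verbatim from the paper.

    \[
      h_i(t) = \sum_j J_{ij}\, s_j(t) + \sum_a K_{ia}\, \sigma_a(t),
      \qquad
      P\bigl(s_i(t+1) \mid \mathbf{s}(t), \boldsymbol{\sigma}(t)\bigr)
        = \frac{\exp\bigl[s_i(t+1)\, h_i(t)\bigr]}{2\cosh h_i(t)} .
    \]

    If every unit were observed, maximizing the log-likelihood of the recorded history would give the familiar gradient rule $\Delta J_{ij} \propto \sum_t \bigl[s_i(t+1) - \tanh h_i(t)\bigr]\, s_j(t)$. With hidden units, the unobserved trajectories $\{\sigma_a(t)\}$ must be summed over, which is where the exponential cost mentioned in the abstract arises; a factorized ansatz $Q(\boldsymbol{\sigma}) = \prod_{t,a} q_a^t\bigl(\sigma_a(t)\bigr)$ for their distribution is the kind of mean-field approximation that keeps the computation tractable for large systems.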

    Related Papers:

    [1] Shichao Wu, Xianzhou Lv, Yingbo Liu, Ming Jiang, Xingxu Li, Dan Jiang, Jing Yu, Yunyu Gong, Rong Jiang . Enhanced SSD framework for detecting defects in cigarette appearance using variational Bayesian inference under limited sample conditions. Mathematical Biosciences and Engineering, 2024, 21(2): 3281-3303. doi: 10.3934/mbe.2024145
    [2] William Chad Young, Adrian E. Raftery, Ka Yee Yeung . A posterior probability approach for gene regulatory network inference in genetic perturbation data. Mathematical Biosciences and Engineering, 2016, 13(6): 1241-1251. doi: 10.3934/mbe.2016041
    [3] Xianli Liu, Yongquan Zhou, Weiping Meng, Qifang Luo . Functional extreme learning machine for regression and classification. Mathematical Biosciences and Engineering, 2023, 20(2): 3768-3792. doi: 10.3934/mbe.2023177
    [4] Jia Yu, Huiling Peng, Guoqiang Wang, Nianfeng Shi . A topical VAEGAN-IHMM approach for automatic story segmentation. Mathematical Biosciences and Engineering, 2024, 21(7): 6608-6630. doi: 10.3934/mbe.2024289
    [5] Mengya Zhang, Qing Wu, Zezhou Xu . Tuning extreme learning machine by an improved electromagnetism-like mechanism algorithm for classification problem. Mathematical Biosciences and Engineering, 2019, 16(5): 4692-4707. doi: 10.3934/mbe.2019235
    [6] Xiwen Qin, Dongmei Yin, Xiaogang Dong, Dongxue Chen, Shuang Zhang . Survival prediction model for right-censored data based on improved composite quantile regression neural network. Mathematical Biosciences and Engineering, 2022, 19(8): 7521-7542. doi: 10.3934/mbe.2022354
    [7] Chao Ma, Yanfeng Lu . Distributed nonsynchronous event-triggered state estimation of genetic regulatory networks with hidden Markovian jumping parameters. Mathematical Biosciences and Engineering, 2022, 19(12): 13878-13910. doi: 10.3934/mbe.2022647
    [8] Jingchi Jiang, Xuehui Yu, Yi Lin, Yi Guan . PercolationDF: A percolation-based medical diagnosis framework. Mathematical Biosciences and Engineering, 2022, 19(6): 5832-5849. doi: 10.3934/mbe.2022273
    [9] Michele La Rocca, Cira Perna . Designing neural networks for modeling biological data: A statistical perspective. Mathematical Biosciences and Engineering, 2014, 11(2): 331-342. doi: 10.3934/mbe.2014.11.331
    [10] R Nandhini Abiram, P M Durai Raj Vincent . Identity preserving multi-pose facial expression recognition using fine tuned VGG on the latent space vector of generative adversarial network. Mathematical Biosciences and Engineering, 2021, 18(4): 3699-3717. doi: 10.3934/mbe.2021186
  • This article has been cited by:

    1. B Bravi, P Sollich, Inference for dynamics of continuous variables: the extended Plefka expansion with hidden nodes, 2017, 2017, 1742-5468, 063404, 10.1088/1742-5468/aa657d
    2. C Battistin, J Hertz, J Tyrcha, Y Roudi, Belief propagation and replicas for inference and learning in a kinetic Ising model with hidden spins, 2015, 2015, 1742-5468, P05021, 10.1088/1742-5468/2015/05/P05021
    3. Braden A. W. Brinkman, Fred Rieke, Eric Shea-Brown, Michael A. Buice, Jeff Beck, Predicting how and when hidden neurons skew measured synaptic interactions, 2018, 14, 1553-7358, e1006490, 10.1371/journal.pcbi.1006490
    4. Yasser Roudi, Graham Taylor, Learning with hidden variables, 2015, 35, 09594388, 110, 10.1016/j.conb.2015.07.006
    5. Ludovica Bachschmid-Romano, Manfred Opper, Inferring hidden states in a random kinetic Ising model: replica analysis, 2014, 2014, 1742-5468, P06013, 10.1088/1742-5468/2014/06/P06013
    6. Daniel Soudry, Suraj Keshri, Patrick Stinson, Min-hwan Oh, Garud Iyengar, Liam Paninski, Matthias Bethge, Efficient "Shotgun" Inference of Neural Connectivity from Highly Sub-sampled Activity Data, 2015, 11, 1553-7358, e1004464, 10.1371/journal.pcbi.1004464
    7. Christian Donner, Manfred Opper, Inverse Ising problem in continuous time: A latent variable approach, 2017, 96, 2470-0045, 10.1103/PhysRevE.96.062104
    8. Yasser Roudi, Benjamin Dunn, John Hertz, Multi-neuronal activity and functional connectivity in cell assemblies, 2015, 32, 09594388, 38, 10.1016/j.conb.2014.10.011
    9. Danh-Tai Hoang, Junghyo Jo, Vipul Periwal, Data-driven inference of hidden nodes in networks, 2019, 99, 2470-0045, 10.1103/PhysRevE.99.042114
    10. Haiping Huang, Effects of hidden nodes on network structure inference, 2015, 48, 1751-8113, 355002, 10.1088/1751-8113/48/35/355002
    11. Benjamin Dunn, Claudia Battistin, The appropriateness of ignorance in the inverse kinetic Ising model, 2017, 50, 1751-8113, 124002, 10.1088/1751-8121/aa59dc
    12. Ludovica Bachschmid-Romano, Manfred Opper, Learning of couplings for random asymmetric kinetic Ising models revisited: random correlation matrices and learning curves, 2015, 2015, 1742-5468, P09016, 10.1088/1742-5468/2015/09/P09016
    13. Francesco Randi, Andrew M. Leifer, Nonequilibrium Green’s Functions for Functional Connectivity in the Brain, 2021, 126, 0031-9007, 10.1103/PhysRevLett.126.118102
    14. Claudia Battistin, Benjamin Dunn, Yasser Roudi, Learning with unknowns: Analyzing biological data in the presence of hidden variables, 2017, 1, 24523100, 122, 10.1016/j.coisb.2016.12.010
    15. David Dahmen, Sonja Grün, Markus Diesmann, Moritz Helias, Second type of criticality in the brain uncovers rich multiple-neuron dynamics, 2019, 116, 0027-8424, 13051, 10.1073/pnas.1818972116
    16. Francesca Puppo, Deborah Pré, Anne G. Bang, Gabriel A. Silva, Super-Selective Reconstruction of Causal and Direct Connectivity With Application to in vitro iPSC Neuronal Networks, 2021, 15, 1662-453X, 10.3389/fnins.2021.647877
    17. Louis-Brahim Beaufort, Pierre-Yves Massé, Antonin Reboulet, Laurent Oudre, Network reconstruction problem for an epidemic reaction--diffusion system, 2022, 10, 2051-1329, 10.1093/comnet/cnac047
    18. Sangwon Lee, Vipul Periwal, Junghyo Jo, Inference of stochastic time series with missing data, 2021, 104, 2470-0045, 10.1103/PhysRevE.104.024119
    19. Weinuo Jiang, Shihong Wang, Detecting hidden nodes in networks based on random variable resetting method, 2023, 33, 1054-1500, 043109, 10.1063/5.0134953
    20. Tong Liang, Braden A. W. Brinkman, Statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances, 2024, 109, 2470-0045, 10.1103/PhysRevE.109.044404
    21. Zhongqi Cai, Enrico Gerding, Markus Brede, Enhanced network inference from sparse incomplete time series through automatically adapted regularization, 2024, 9, 2364-8228, 10.1007/s41109-024-00621-7
  • © 2014 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)