Reaction–diffusion systems provide a powerful mathematical framework for describing dynamic behavior and pattern formation, and they are widely applied to physical, chemical, and biological problems, among others. Understanding their stability often requires solving eigenvalue problems obtained by linearizing the governing equations around time-independent solutions. In most practical cases, analytical solutions to these eigenvalue problems are difficult to obtain, especially for multi-component systems or non-self-adjoint operators, which complicates both analysis and computation. As a result, numerical methods remain the primary practical tool for investigating their spectral properties and gaining insight into system behavior. Recently, physics-informed neural networks (PINNs) have emerged as a promising alternative for solving partial differential equations by embedding physical laws and constraints directly into the training process. In this work, we use a PINN-based approach to compute multiple eigenpairs of the eigenvalue problems arising from reaction–diffusion systems, covering both single- and multi-component cases, including challenging non-self-adjoint problems for which traditional numerical methods often become less effective. We restrict our attention to two one-dimensional reaction–diffusion models: the Zeldovich–Frank–Kamenetsky (ZFK) equation, a single-component system with a self-adjoint structure, and the FitzHugh–Nagumo (FHN) system, a two-component model with non-self-adjoint behavior. By embedding essential physical constraints, such as biorthonormality between left and right eigenfunctions and spectral ordering, directly into the loss function, the method maintains consistency and robustness during training. In both cases, the PINN framework yields accurate eigenvalue and eigenfunction approximations that agree closely with direct numerical simulations. These results highlight the capability of PINNs to serve as a flexible and effective tool for spectral analysis across a broad class of reaction–diffusion problems.
Citation: Burhan Bezekci. Spectral analysis of reaction-diffusion systems via physics-informed neural networks[J]. AIMS Mathematics, 2025, 10(8): 18156-18182. doi: 10.3934/math.2025809
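To make the loss-function ingredients mentioned in the abstract concrete, the sketch below shows, in schematic form, how a PINN for a one-dimensional self-adjoint model eigenvalue problem can combine a PDE residual with a trainable eigenvalue, a normalization term, orthogonality to previously computed modes, and a spectral-ordering penalty. This is a minimal illustration under assumed choices (PyTorch, a small tanh network, a toy potential V(x) = x², the domain size, and the penalty weights are all assumptions), not the implementation used in the paper; the non-self-adjoint FHN case described in the abstract would additionally require a second network for the left (adjoint) eigenfunctions and a biorthonormality term in place of plain orthogonality.

```python
# Hypothetical sketch (not the paper's code): a PINN loss for the 1-D model
# eigenvalue problem  -phi'' + V(x) phi = lambda phi  on [-L, L], phi(+-L) = 0,
# where lambda is a trainable scalar and penalty terms enforce normalization,
# orthogonality to previously found modes, and spectral ordering.
import torch

torch.manual_seed(0)


class EigenNet(torch.nn.Module):
    """Small fully connected network; output vanishes at x = +-L by construction."""

    def __init__(self, L=10.0, width=32):
        super().__init__()
        self.L = L
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1),
        )
        # Trainable eigenvalue estimate.
        self.lam = torch.nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        # Hard Dirichlet constraint: the (L^2 - x^2) factor vanishes at the ends.
        return (self.L**2 - x**2) * self.net(x)


def pde_residual(model, x, potential):
    """Residual of  -phi'' + V(x) phi - lambda phi  at collocation points x."""
    x = x.requires_grad_(True)
    phi = model(x)
    dphi = torch.autograd.grad(phi, x, torch.ones_like(phi), create_graph=True)[0]
    d2phi = torch.autograd.grad(dphi, x, torch.ones_like(dphi), create_graph=True)[0]
    return -d2phi + potential(x) * phi - model.lam * phi


def loss_fn(model, x, potential, lower_modes=(), lam_below=None,
            weights=(1.0, 1.0, 1.0, 1.0)):
    w_pde, w_norm, w_orth, w_order = weights
    res = pde_residual(model, x, potential)
    phi = model(x)
    dx = (x[1] - x[0]).item()  # uniform grid spacing

    loss = w_pde * (res**2).mean()
    # Normalization: Riemann-sum approximation of the integral of phi^2 equals 1.
    loss = loss + w_norm * (torch.sum(phi**2) * dx - 1.0) ** 2
    # Orthogonality to previously converged (frozen, detached) eigenfunctions.
    for psi in lower_modes:
        loss = loss + w_orth * (torch.sum(phi * psi) * dx) ** 2
    # Spectral ordering: penalize lambda dropping below the previous eigenvalue.
    if lam_below is not None:
        loss = loss + w_order * torch.relu(lam_below - model.lam) ** 2
    return loss


if __name__ == "__main__":
    V = lambda x: x**2            # toy quadratic potential, for illustration only
    x = torch.linspace(-10.0, 10.0, 400).reshape(-1, 1)
    model = EigenNet(L=10.0)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(5000):
        opt.zero_grad()
        loss = loss_fn(model, x, V)
        loss.backward()
        opt.step()
    print("estimated lowest eigenvalue:", model.lam.item())
```

Multiplying the raw network output by (L² − x²) imposes the boundary conditions as a hard constraint, so the loss only has to balance the residual and the normalization, orthogonality, and ordering penalties; higher modes would be obtained by rerunning the optimization with the converged eigenfunctions passed in as `lower_modes` and the previous eigenvalue as `lam_below`.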