



    Many phenomena in nature can be mathematically modeled by means of partial differential equations (PDEs) to approximate the underlying physical principles governing these phenomena. The advection–diffusion–reaction (ADR) equation can describe the spatio-temporal evolution of a physical quantity in a flowing medium, such as water or air [1]. This evolution expresses different physical phenomena: The transport of the physical quantity driven by the advection velocity field of the flowing medium, its diffusion from highly dense areas to less dense areas, and the reaction with other components, represented by source terms. The ADR equation plays an important role in many research fields, such as heat conduction [2], transport of particles [3], chemical reactions [4], wildfire propagation [5], etc.

    In most cases, there are no analytical solutions for these PDEs, so they have to be solved by means of numerical methods. The finite volume (FV) method [6,7] is based on the direct discretization of the integral form of the conservation laws, a form that does not require the fluxes to be continuous. Because the FV method stays close to the physical flow conservation laws, it is particularly well suited to equations like the ADR equation. In this article, the FV-based first-order upwind (FOU) scheme is used to discretize the conservation laws by assuming a piecewise constant distribution of the conserved variables within the computational cells [8,9].

    The large number and diversity of problems, together with the vast spatial domains considered in realistic scenarios, require a reduction in computational cost. This has led, in recent years, to the development of a wide range of mathematical strategies and tools in the scientific literature to facilitate, improve, and increase the calculation capacity of the classical methods used in the framework of fluid mechanics. Among many others, reduced-order modeling is one of the most popular in the field. It was originally developed as the reduced-basis strategy for predicting the nonlinear static response of structures [10]. Intrusive reduced-order models (ROMs) based on proper orthogonal decomposition (POD) [11] are among the most interesting tools in fluid mechanics [12]. Intrusive ROMs are intended to be alternative numerical schemes that replace the calculations performed by classical schemes, also called full-order models (FOMs), to save computational cost without losing accuracy in the solutions. These ROMs reside in a reduced space of much smaller dimension than the physical space, which is why they are more efficient than FOMs [13].

    The correct resolution of PDEs by means of numerical schemes requires a thorough calibration of the parameters on which the mathematical model depends. It is therefore of great interest to try to find alternatives that help in this calibration objective in parametrized problems [14]. In this sense, ROMs can be useful for performing multiple simulations in a less expensive way to map different values of the input parameter. ROMs are data-driven methods, which means that they need a training phase prior to their resolution. Training snapshots impose some computational limits on the parameters that define the problem (such as the final time [15] or the initial condition of the advection velocity) that ROMs cannot exceed when computing their solutions. However, it is highly interesting to study what can be done to overcome these limits by means of ROMs.

    The aim of this article is to solve the advection–diffusion–reaction equation by means of parametrized ROMs to obtain solutions with parameter values that do not belong to the training set. There are different strategies available in the literature, such as those based on reduced-basis methods [16,17], interpolation in matrix manifolds [18], the shifted POD method [19], the extrapolation technique carried out in [20], the transformed modes used in [21] and proposed in [22,23], or non-intrusive ROMs such as autoencoders and neural networks [24,25,26]. The technique used in this article is based on generating a multiple training sample from arbitrary values of the input parameters (training values), as done in [27]. The ROM is then solved for some values of interest of the input parameters (target values).

    The novelty of this paper consists of the detailed analysis of the constitution of the training sample, taking into account the particular needs of each parameter studied. To do this, it is necessary to find out what size the training sample should be for the ROM to be sufficiently accurate, whether a minimum number of samples is required or even if only one sample is needed, and to check what the relationship is between the ROM's configuration parameters and the number of training samples. This part of the study is carried out by applying the parametrized ROM strategy to the two-dimensional (2D) ADR equation, whose advection velocity and diffusion coefficient are considered as input parameters, as well as the parameters that define the initial condition (IC). The test cases solved are designed to evaluate the prediction of ROMs with all possible input parameters when considering the 2D ADR equation.

    Within the framework of climate change, forest fires will become more frequent in the following years, with devastating impacts on society [28]. To minimize their risk, mathematical models of forest fire propagation have been developed to predict the evolution of a fire to adequately plan suppression actions, improve the safety of firefighting brigades, and reduce fire damage. Considering all the conclusions obtained via the 2D ADR equation, the parametrized ROM strategy is extended to the 2D wildfire propagation (WP) model [5,29,30], which governs the spatio-temporal evolution of fire in a wildfire and whose resolution in the reduced space is similar to that of the ADR equation.

    The remainder of the article is organized as follows. Section 2 introduces the 2D ADR equation and the 2D WP model and their discretization using the FV method. Section 3 outlines the POD-based ROM applied to both equations and the modification of the ROM strategy to approach parametrized problems. Section 4 presents seven numerical cases in which the ROM is solved for different input parameters. Finally, the concluding remarks are given in Section 5.

    The 2D advection–diffusion–reaction (ADR) equation is defined as follows:

    u_t + \mathbf{a}\cdot\nabla u = \nu\,\nabla^{2}u - r\,u, \qquad (2.1)

    where u = u(x,y,t) is the conserved variable, \mathbf{a} = (a_x, a_y) is the advection velocity, \nu \ge 0 is the diffusion coefficient, and r is the reaction coefficient. To find u(x,y,t) in the domain (x,y,t) \in [0,L_x]\times[0,L_y]\times[0,T], Equation (2.1) has to be solved, subject to appropriate initial conditions (ICs) and boundary conditions (BCs).

    Equation (2.1) can be numerically approximated by means of the FV method [31,32]. The computational domain is discretized using rectangular volume cells of uniform size \Delta x \times \Delta y, where the position of the center of the (i,j)-th cell is (x_i, y_j), with x_i = i\Delta x and y_j = j\Delta y, for i = 1,\dots,I_x and j = 1,\dots,I_y, where I_x and I_y are the numbers of volume cells in the x- and y-directions, respectively. The subsets J_B and J_I contain the pairs of indices (i,j) indicating the cells that belong to the boundary and to the interior of the domain, respectively, with J = J_B \cup J_I, where J = \{(i,j),\ 1\le i\le I_x,\ 1\le j\le I_y\} is the total set of indices.

    Regarding the time discretization, the time step \Delta t = t^{n+1}-t^{n}, with n = 0,\dots,N_T-1, where N_T is the total number of time steps, is selected dynamically using the Courant–Friedrichs–Lewy (CFL) condition [33]

    \Delta t = \frac{\mathrm{CFL}}{\dfrac{|a_x|}{\Delta x} + \dfrac{|a_y|}{\Delta y} + 2\nu\left(\dfrac{1}{\Delta x^{2}} + \dfrac{1}{\Delta y^{2}}\right)}, \qquad (2.2)

    with 0 < \mathrm{CFL} \le 1.
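
    As an illustration, a minimal Python sketch of how the step (2.2) can be evaluated is given below; the function name and default value are illustrative:

```python
def cfl_time_step(ax, ay, nu, dx, dy, cfl=0.9):
    """Time step from the CFL condition (2.2): the advective and diffusive
    characteristic rates are summed and the step is their inverse, scaled by CFL."""
    rate = abs(ax) / dx + abs(ay) / dy + 2.0 * nu * (1.0 / dx**2 + 1.0 / dy**2)
    return cfl / rate
```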

    By means of the FV method, the discretized variables are integrated inside the volume cell (x_{i-1/2}, x_{i+1/2}) \times (y_{j-1/2}, y_{j+1/2}), with (i,j) \in J, in the time interval (t^{n}, t^{n+1}). The numerical fluxes are reconstructed using the first-order upwind method [34,35], and the diffusion term is discretized using central differences, so that the final full-order model (FOM) of the 2D ADR equation is

    u^{n+1}_{i,j} = u^{n}_{i,j}
    - \frac{a_x \Delta t}{2\Delta x}\left(u^{n}_{i+1,j}-u^{n}_{i-1,j}\right)
    + \frac{|a_x| \Delta t}{2\Delta x}\left(u^{n}_{i+1,j}-2u^{n}_{i,j}+u^{n}_{i-1,j}\right)
    - \frac{a_y \Delta t}{2\Delta y}\left(u^{n}_{i,j+1}-u^{n}_{i,j-1}\right)
    + \frac{|a_y| \Delta t}{2\Delta y}\left(u^{n}_{i,j+1}-2u^{n}_{i,j}+u^{n}_{i,j-1}\right)
    + \frac{\nu\Delta t}{\Delta x^{2}}\left(u^{n}_{i+1,j}-2u^{n}_{i,j}+u^{n}_{i-1,j}\right)
    + \frac{\nu\Delta t}{\Delta y^{2}}\left(u^{n}_{i,j+1}-2u^{n}_{i,j}+u^{n}_{i,j-1}\right)
    - \Delta t\, r\, u^{n}_{i,j}, \qquad (2.3)

    where u^{n}_{i,j} \approx u(x_i, y_j, t^{n}) is the average value over the cell (x_{i-1/2}, x_{i+1/2})\times(y_{j-1/2}, y_{j+1/2}) at t^{n}, with (i,j) \in J_I.
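
    A minimal Python sketch of one explicit update of the interior cells according to (2.3) is given below, using array slicing; the boundary cells would be updated separately following (2.4), and all names are illustrative:

```python
import numpy as np

def adr_fou_step(u, ax, ay, nu, r, dx, dy, dt):
    """One explicit step of the FOM (2.3) on the interior cells.
    u is an (Ix, Iy) array of cell averages; boundary cells are left untouched
    here and would be updated separately according to the BCs, as in (2.4)."""
    un = u.copy()
    c = un[1:-1, 1:-1]                      # u^n_{i,j}
    e, w = un[2:, 1:-1], un[:-2, 1:-1]      # u^n_{i+1,j}, u^n_{i-1,j}
    n, s = un[1:-1, 2:], un[1:-1, :-2]      # u^n_{i,j+1}, u^n_{i,j-1}

    adv_x = -0.5 * ax * dt / dx * (e - w) + 0.5 * abs(ax) * dt / dx * (e - 2 * c + w)
    adv_y = -0.5 * ay * dt / dy * (n - s) + 0.5 * abs(ay) * dt / dy * (n - 2 * c + s)
    diff = nu * dt / dx**2 * (e - 2 * c + w) + nu * dt / dy**2 * (n - 2 * c + s)
    reac = -dt * r * c

    u[1:-1, 1:-1] = c + adv_x + adv_y + diff + reac
    return u
```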

    All the variables of interest belonging to the volume cells inside the domain, (i,j) \in J_I, are integrated in time, following the same updating relation indicated in (2.3). However, those on the boundary, (i,j) \in J_B, require a special treatment that takes into account the boundary conditions imposed on them. Two examples of free boundary conditions are given below, for a point placed at the corner (0,0) and for a point placed at the boundary y = 0 of the rectangular domain, respectively:

    u^{n+1}_{1,1} = u^{n}_{1,1}
    - \frac{a_x \Delta t}{2\Delta x}\left(u^{n}_{2,1}-u^{n}_{1,1}\right)
    + \frac{|a_x| \Delta t}{2\Delta x}\left(u^{n}_{2,1}-u^{n}_{1,1}\right)
    - \frac{a_y \Delta t}{2\Delta y}\left(u^{n}_{1,2}-u^{n}_{1,1}\right)
    + \frac{|a_y| \Delta t}{2\Delta y}\left(u^{n}_{1,2}-u^{n}_{1,1}\right)
    + \frac{\nu\Delta t}{\Delta x^{2}}\left(u^{n}_{2,1}-u^{n}_{1,1}\right)
    + \frac{\nu\Delta t}{\Delta y^{2}}\left(u^{n}_{1,2}-u^{n}_{1,1}\right)
    - \Delta t\, r\, u^{n}_{1,1},

    u^{n+1}_{i,1} = u^{n}_{i,1}
    - \frac{a_x \Delta t}{2\Delta x}\left(u^{n}_{i+1,1}-u^{n}_{i-1,1}\right)
    + \frac{|a_x| \Delta t}{2\Delta x}\left(u^{n}_{i+1,1}-2u^{n}_{i,1}+u^{n}_{i-1,1}\right)
    - \frac{a_y \Delta t}{2\Delta y}\left(u^{n}_{i,2}-u^{n}_{i,1}\right)
    + \frac{|a_y| \Delta t}{2\Delta y}\left(u^{n}_{i,2}-u^{n}_{i,1}\right)
    + \frac{\nu\Delta t}{\Delta x^{2}}\left(u^{n}_{i+1,1}-2u^{n}_{i,1}+u^{n}_{i-1,1}\right)
    + \frac{\nu\Delta t}{\Delta y^{2}}\left(u^{n}_{i,2}-u^{n}_{i,1}\right)
    - \Delta t\, r\, u^{n}_{i,1}, \qquad (2.4)

    with 2 \le i \le I_x - 1.

    The wildfire propagation (WP) model is an application case of the ADR equation. Physics-based wildfire propagation models [5,29,30] are usually defined in two horizontal spatial dimensions, as an approach to predict the evolution of the fire on the Earth's surface and require the spatial distribution and characteristics of the biomass [28] as input parameters.

    In the WP model, the energy within the fire is represented by the temperature T = T(x,y,t) > 0. According to this model, the two-dimensional finite layer of the ground is composed of fuel, represented by the presence of biomass Y = Y(x,y,t) \in [0,1], which sustains the evolution of the fire. The WP model, as proposed in [28], is

    T_t + \mathbf{v}\cdot\nabla T = \frac{k}{\rho c_p}\,\nabla^{2}T - \frac{\alpha}{\rho c_p}\left(T - T_\infty\right) + \Psi(T)\,\frac{H}{c_p}\,Y, \qquad (2.5)
    Y_t = -\Psi(T)\,Y, \qquad (2.6)

    where the rate of variation of the mass fraction of fuel is given by the Arrhenius law [36] as follows:

    \Psi(T) = \begin{cases} 0, & \text{if } T < T_{pc}, \\ A\,e^{-T_{pc}/T}, & \text{if } T \ge T_{pc}. \end{cases} \qquad (2.7)

    The heat is generated by the burning reaction of the available fuel in the coupling term, it is transported by the advection term and diffused by the diffusion term, and it is lost to the atmosphere through the cooling (reaction) term.

    The parameters of this model are: \mathbf{v} = (v_x, v_y), the constant advection velocity; k, the thermal conductivity; \rho, the density of the medium; c_p, the specific heat; \alpha, the reaction (cooling) coefficient; T_\infty, the ambient temperature; H, the fuel combustion heat; A, the pre-exponential factor; and T_{pc}, the ignition temperature. It should be mentioned that all the physical variables that appear in the WP model are given in International System units, which are omitted in all cases for the sake of simplicity.
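
    A minimal Python sketch of the reaction rate as reconstructed in (2.7) is given below; the default values of A and T_pc are those used later in the numerical cases, and the function name is illustrative:

```python
import numpy as np

def reaction_rate(T, A=0.05, T_pc=400.0):
    """Arrhenius-type fuel consumption rate (2.7): zero below the ignition
    temperature T_pc, and A*exp(-T_pc/T) once the cell has ignited."""
    T = np.asarray(T, dtype=float)
    return np.where(T < T_pc, 0.0, A * np.exp(-T_pc / T))
```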

    Following the FOU-based FV method, the variables are integrated into each of the volume cells, and the WP model is discretized as follows [37]:

    T^{n+1}_{i,j} = T^{n}_{i,j}
    - \frac{v_x \Delta t}{2\Delta x}\left(T^{n}_{i+1,j}-T^{n}_{i-1,j}\right)
    + \frac{|v_x| \Delta t}{2\Delta x}\left(T^{n}_{i+1,j}-2T^{n}_{i,j}+T^{n}_{i-1,j}\right)
    - \frac{v_y \Delta t}{2\Delta y}\left(T^{n}_{i,j+1}-T^{n}_{i,j-1}\right)
    + \frac{|v_y| \Delta t}{2\Delta y}\left(T^{n}_{i,j+1}-2T^{n}_{i,j}+T^{n}_{i,j-1}\right)
    + \frac{k}{\rho c_p}\,\frac{\Delta t}{\Delta x^{2}}\left(T^{n}_{i+1,j}-2T^{n}_{i,j}+T^{n}_{i-1,j}\right)
    + \frac{k}{\rho c_p}\,\frac{\Delta t}{\Delta y^{2}}\left(T^{n}_{i,j+1}-2T^{n}_{i,j}+T^{n}_{i,j-1}\right)
    - \Delta t\,\frac{\alpha}{\rho c_p}\left(T^{n}_{i,j}-T_\infty\right)
    + \Delta t\,\frac{H}{c_p}\,Y^{n}_{i,j}\,\Psi(T^{n}_{i,j}),

    Y^{n+1}_{i,j} = Y^{n}_{i,j} - \Delta t\,Y^{n}_{i,j}\,\Psi(T^{n}_{i,j}), \qquad (2.8)

    where T^{n}_{i,j} \approx T(x_i,y_j,t^{n}) and Y^{n}_{i,j} \approx Y(x_i,y_j,t^{n}) are the average values over the cell (x_{i-1/2},x_{i+1/2})\times(y_{j-1/2},y_{j+1/2}) at t^{n}, with (i,j) \in J_I. The discretization of all the boundary conditions is similar to the linear case in (2.4). The time step is selected dynamically using the CFL condition in (2.2).

    In this section, the parametrized training methodology used to predict ROM solutions for values of the parameters of study different from those of the training set is presented, together with the standard strategy and the ROMs based on the 2D ADR equation (2.1) and the 2D WP model (2.5).

    The reduced-order modeling strategy consists of two phases: (1) the offline phase, in which the ROM is trained, and (2) the online phase, in which the ROM is solved. This online resolution justifies its use, since it reduces the required computation time by several orders of magnitude with respect to that of the FOM.

    A set of N_T numerical solutions in time computed by the FOM, the training solutions, is assembled in the so-called snapshot matrix

    U = \left(U_1\; U_2\; \cdots\; U_{I_y}\right)^{T}, \qquad (3.1)

    with

    U_j = \begin{pmatrix} u^{1}_{1,j} & u^{2}_{1,j} & \cdots & u^{N_T}_{1,j} \\ \vdots & \vdots & & \vdots \\ u^{1}_{I_x,j} & u^{2}_{I_x,j} & \cdots & u^{N_T}_{I_x,j} \end{pmatrix} \in \mathbb{R}^{I_x \times N_T}. \qquad (3.2)

    The proper orthogonal decomposition (POD) [38] of U by means of the singular value decomposition (SVD) [39] decomposes the snapshot matrix into orthogonal components, also called POD modes

    U = \Phi\,\Sigma\,\Psi^{T},

    where \Sigma \in \mathbb{R}^{I_xI_y \times N_T} is a diagonal matrix whose main-diagonal entries are the singular values of U and represent the magnitude of each POD mode, and \Phi \in \mathbb{R}^{I_xI_y \times I_xI_y} and \Psi \in \mathbb{R}^{N_T \times N_T} are orthogonal matrices. The matrix \Phi = (\phi_1, \dots, \phi_{I_xI_y}), with \phi_k = (\phi_{1,k}, \dots, \phi_{I_xI_y,k})^{T}, consists of the orthogonal eigenvectors of UU^{T}, which are used to define the reduced space.

    Let M_POD be a positive integer such that M_POD \le \min(I_xI_y, N_T). It will be chosen as small as possible without significantly affecting the accuracy of the solution computed with the reduced-order method [12]. The number of POD modes used by the ROM in the test cases proposed in this paper has been obtained a posteriori by analyzing the efficiency obtained. In the following sections, Case 4 illustrates this procedure.
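
    As an illustration, a minimal Python sketch of the offline construction of the reduced basis from the snapshot matrix is given below, using the economy SVD of NumPy; the names are illustrative:

```python
import numpy as np

def pod_basis(snapshots, m_pod):
    """Build the reduced basis from a snapshot matrix.

    snapshots : (Ix*Iy, NT) array whose columns are the flattened FOM
                solutions at each time level (the matrix U of (3.1)-(3.2)).
    m_pod     : number of POD modes retained, M_POD <= min(Ix*Iy, NT).
    Returns the (Ix*Iy, m_pod) matrix Phi of the leading left singular vectors
    and the singular values, whose decay guides the choice of M_POD.
    """
    phi, sigma, _ = np.linalg.svd(snapshots, full_matrices=False)
    return phi[:, :m_pod], sigma
```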

    The standard strategy used to train the ROM, presented in the previous section, needs to be modified to handle the prediction of solutions for values of the input parameters different from those used to train the ROM; these are called the parameters of study and are denoted by μm, with m = 1,...,Mtrain.

    For each set of training parameters, also called a training sample, a snapshot matrix U(μm) is generated from the corresponding FOM solution. All the training sub-matrices are then assembled into one common snapshot matrix, also called the training set, simply by placing one sub-matrix after another:

    U = \left(U(\mu_1)\; U(\mu_2)\; \cdots\; U(\mu_{M_{train}})\right),

    where Mtrain is the number of snapshot sub-matrices of the training set. The SVD is applied without special treatment to obtain the basis functions that define the reduced space for a general value of the parameters. Once the reduced space has been defined, the ROM is solved for the new value of the parameters, namely the test value μT. In this paper, the benefits and limitations of this methodology are studied.
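
    A minimal sketch of this assembly is given below; fom_solver is a hypothetical callable that runs the FOM for one training value of the parameter and returns its snapshot sub-matrix:

```python
import numpy as np

def parametric_training_set(fom_solver, training_params):
    """Assemble the common snapshot matrix by stacking, column-wise, the
    snapshot sub-matrices U(mu_m) computed by the FOM for each training
    value of the parameter."""
    sub_matrices = [fom_solver(mu) for mu in training_params]   # each (Ix*Iy, NT)
    return np.hstack(sub_matrices)                              # (Ix*Iy, Mtrain*NT)
```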

    In order to extend the ROM strategy to parametrized nonlinear problems, such as the WP model (2.5), it is necessary to combine this modified method with the proper interval decomposition (PID) [20]. For this purpose, all the snapshots generated in the different training samples are grouped according to the defined time windows, and the SVD is applied to each time window to obtain the different reduced spaces. For the sake of simplicity, in this paper both the training samples and the solution predicted by the ROM are computed at the same time instants.

    In the following subsections, the ROMs of the 2D ADR equation (2.1) and the 2D WP model (2.5) are presented.

    Intrusive ROMs based on the POD method are alternative numerical schemes that need to be developed from a standard numerical scheme by projecting it from the physical space to the reduced space. The Galerkin method [40] acts as the projection between these two spaces

    u^{n}_{i,j} \approx \sum_{k=1}^{M_{POD}} \hat{u}^{n}_{k}\,\phi_{i+I_x(j-1),k}, \qquad (3.3)

    where \hat{u}^{n}_{k}, with k = 1,\dots,M_{POD}, are the reduced variables, which depend on time, and (i,j) \in J.

    The 2D ADR-based ROM is developed by applying the Galerkin method (3.3) to the 2D ADR-based FOM (2.3) and projecting it to the reduced space

    \hat{u}^{n+1}_{p} = \hat{u}^{n}_{p} + \Delta t \sum_{k=1}^{M_{POD}} \hat{u}^{n}_{k}\,A_{pk}, \qquad (3.4)

    where the coefficients are

    A_{pk} = \sum_{(i,j)\in J_B} b_{i,j}\,\phi_{i+I_x(j-1),p}
    - \frac{a_x}{2\Delta x}\sum_{(i,j)\in J_I}\left(\phi_{i+1+I_x(j-1),k}-\phi_{i-1+I_x(j-1),k}\right)\phi_{i+I_x(j-1),p}
    + \frac{|a_x|}{2\Delta x}\sum_{(i,j)\in J_I}\left(\phi_{i+1+I_x(j-1),k}-2\phi_{i+I_x(j-1),k}+\phi_{i-1+I_x(j-1),k}\right)\phi_{i+I_x(j-1),p}
    - \frac{a_y}{2\Delta y}\sum_{(i,j)\in J_I}\left(\phi_{i+I_xj,k}-\phi_{i+I_x(j-2),k}\right)\phi_{i+I_x(j-1),p}
    + \frac{|a_y|}{2\Delta y}\sum_{(i,j)\in J_I}\left(\phi_{i+I_xj,k}-2\phi_{i+I_x(j-1),k}+\phi_{i+I_x(j-2),k}\right)\phi_{i+I_x(j-1),p}
    + \frac{\nu}{\Delta x^{2}}\sum_{(i,j)\in J_I}\left(\phi_{i+1+I_x(j-1),k}-2\phi_{i+I_x(j-1),k}+\phi_{i-1+I_x(j-1),k}\right)\phi_{i+I_x(j-1),p}
    + \frac{\nu}{\Delta y^{2}}\sum_{(i,j)\in J_I}\left(\phi_{i+I_xj,k}-2\phi_{i+I_x(j-1),k}+\phi_{i+I_x(j-2),k}\right)\phi_{i+I_x(j-1),p}
    - r\sum_{(i,j)\in J}\phi_{i+I_x(j-1),k}\,\phi_{i+I_x(j-1),p}; \qquad (3.5)

    where the boundary coefficients bi,j are given by the BCs considered. An example of free BCs is

    b_{1,1} = \left(-\frac{a_x}{2\Delta x}+\frac{|a_x|}{2\Delta x}+\frac{\nu}{\Delta x^{2}}\right)\left(\phi_{2,k}-\phi_{1,k}\right) + \left(-\frac{a_y}{2\Delta y}+\frac{|a_y|}{2\Delta y}+\frac{\nu}{\Delta y^{2}}\right)\left(\phi_{1+I_x,k}-\phi_{1,k}\right). \qquad (3.6)
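
    Since the FOM update (2.3)-(2.4) is linear, it can be written as u^{n+1} = u^{n} + \Delta t\,L\,u^{n} for a sparse operator L acting on the flattened cell values, and the coefficients (3.5)-(3.6) are then the entries of the projected matrix \Phi^{T} L \Phi. The sketch below illustrates the offline projection and the online update (3.4) under this assumption; the names and the availability of L as an explicit (sparse) matrix are illustrative assumptions:

```python
import numpy as np

def project_operator(L, phi):
    """Offline: Galerkin projection of the (sparse) FOM operator onto the
    reduced basis, A[p, k] = phi[:, p]^T (L phi[:, k]), i.e., Eq (3.5)-(3.6)."""
    return phi.T @ (L @ phi)             # (M_POD, M_POD)

def rom_solve(u0, phi, A, dt, n_steps):
    """Online: project the IC, advance the reduced variables with (3.4),
    and reconstruct the physical solution with the Galerkin expansion (3.3)."""
    u_hat = phi.T @ u0                   # reduced initial condition
    for _ in range(n_steps):
        u_hat = u_hat + dt * (A @ u_hat)
    return phi @ u_hat                   # back to the physical space
```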

    Regarding the WP model, the Galerkin method reduces both variables

    \frac{T^{n}_{i,j}-T_\infty}{T_\infty} \approx \sum_{k=1}^{M_{POD}} \hat{T}^{n}_{k}\,\eta_{i+I_x(j-1),k}, \qquad Y^{n}_{i,j} \approx \sum_{k=1}^{M_{POD}} \hat{Y}^{n}_{k}\,\varphi_{i+I_x(j-1),k}, \qquad (i,j)\in J, \qquad (3.7)

    where the temperature needs to be normalized (here with the ambient temperature T_\infty) to avoid loss of accuracy in the ROM calculation.

    The ROM based on the 2D WP model (2.5) is obtained by applying the Galerkin method (3.7) to the FOM of Eq (2.8)

    \hat{T}^{n+1}_{p} = \hat{T}^{n}_{p} + \Delta t\sum_{k=1}^{M_{POD}} \hat{T}^{n}_{k}\,A_{pk} + \Delta t\sum_{k=1}^{M_{POD}} \hat{Y}^{n}_{k}\,B_{pk}, \qquad \hat{Y}^{n+1}_{p} = \hat{Y}^{n}_{p} + \Delta t\sum_{k=1}^{M_{POD}} \hat{Y}^{n}_{k}\,C_{pk}, \qquad (3.8)

    where the coefficients are

    A_{pk} = -\frac{v_x}{2\Delta x}\sum_{(i,j)\in J_I}\left(\eta_{i+1+I_x(j-1),k}-\eta_{i-1+I_x(j-1),k}\right)\eta_{i+I_x(j-1),p}
    + \frac{|v_x|}{2\Delta x}\sum_{(i,j)\in J_I}\left(\eta_{i+1+I_x(j-1),k}-2\eta_{i+I_x(j-1),k}+\eta_{i-1+I_x(j-1),k}\right)\eta_{i+I_x(j-1),p}
    - \frac{v_y}{2\Delta y}\sum_{(i,j)\in J_I}\left(\eta_{i+I_xj,k}-\eta_{i+I_x(j-2),k}\right)\eta_{i+I_x(j-1),p}
    + \frac{|v_y|}{2\Delta y}\sum_{(i,j)\in J_I}\left(\eta_{i+I_xj,k}-2\eta_{i+I_x(j-1),k}+\eta_{i+I_x(j-2),k}\right)\eta_{i+I_x(j-1),p}
    + \frac{k}{\rho c_p}\,\frac{1}{\Delta x^{2}}\sum_{(i,j)\in J_I}\left(\eta_{i+1+I_x(j-1),k}-2\eta_{i+I_x(j-1),k}+\eta_{i-1+I_x(j-1),k}\right)\eta_{i+I_x(j-1),p}
    + \frac{k}{\rho c_p}\,\frac{1}{\Delta y^{2}}\sum_{(i,j)\in J_I}\left(\eta_{i+I_xj,k}-2\eta_{i+I_x(j-1),k}+\eta_{i+I_x(j-2),k}\right)\eta_{i+I_x(j-1),p}
    + \sum_{(i,j)\in J_B} b_{i,j}\,\eta_{i+I_x(j-1),p}
    - \frac{\alpha}{\rho c_p}\sum_{(i,j)\in J}\eta_{i+I_x(j-1),k}\,\eta_{i+I_x(j-1),p},

    B_{pk} = \frac{H}{T_\infty c_p}\sum_{(i,j)\in J}\Psi(\bar{T}^{w}_{i,j})\,\varphi_{i+I_x(j-1),k}\,\eta_{i+I_x(j-1),p},

    C_{pk} = -\sum_{(i,j)\in J}\Psi(\bar{T}^{w}_{i,j})\,\varphi_{i+I_x(j-1),k}\,\varphi_{i+I_x(j-1),p}.

    The boundary terms bi,j are computed in a similar manner to the linear case (3.6).

    The rate of variation of the mass fraction (2.7) is nonlinear, so its reduction is not straightforward. In this paper, the reduction of \Psi(T^{n}_{i,j}) is carried out by defining different time windows in which the POD basis is renewed:

    \Psi(\bar{T}^{w}_{i,j}) = \begin{cases} 0, & \text{if } \bar{T}^{w}_{i,j}\,T_\infty + T_\infty < T_{pc}, \\ A\,e^{-T_{pc}/(\bar{T}^{w}_{i,j}\,T_\infty + T_\infty)}, & \text{if } \bar{T}^{w}_{i,j}\,T_\infty + T_\infty \ge T_{pc}. \end{cases} \qquad (3.9)

    The ROM is linearized inside each time window, because this function is computed in the off-line phase within each time window by using time-averaged temperatures as follows:

    \bar{T}^{w}_{i,j} = \frac{1}{M_{STW}\,M_{train}}\sum_{n=1}^{M_{STW}}\sum_{m=1}^{M_{train}} \left(T^{n}_{i,j}\right)^{train}_{m},

    where (T^{n}_{i,j})^{train}_{m} are the training samples, with Mtrain being the total number of samples computed by the FOM that comprise the training set, and MSTW being the number of snapshots included in each time window. The number of time windows is selected a posteriori in the numerical cases solved in this paper to improve efficiency.
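
    A minimal sketch of this time-window averaging is given below; the assumed array layout (sample × time × cell) and the names are illustrative:

```python
import numpy as np

def window_averaged_temperature(training_T, window, m_stw):
    """Time-averaged temperature used to freeze the reaction rate inside one
    time window (the quantity written above as the bar-T of window w).

    training_T : (Mtrain, NT, Ix*Iy) array of training temperature snapshots.
    window     : index of the time window.
    m_stw      : number of snapshots per time window, M_STW.
    """
    start = window * m_stw
    block = training_T[:, start:start + m_stw, :]   # all samples, one window
    return block.mean(axis=(0, 1))                  # average over samples and time
```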

    Note that the reduced equation for updating ˆY does not depend on the values of ˆT computed during the online phase and is therefore decoupled from the first equation. In the reduced temperature update equation, the biomass coupling term is precomputed in the coefficients. Because of this, a reduced ADR equation similar to (3.4) is solved at each time level.
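
    The sketch below illustrates the resulting online phase of the reduced WP model, assuming the window-dependent coefficient matrices of (3.8) have been precomputed offline; the change of reduced basis between consecutive windows is omitted for brevity, and all names are illustrative:

```python
def wp_rom_online(T_hat, Y_hat, windows, dt, m_stw):
    """Advance the reduced WP model (3.8) window by window. Each entry of
    `windows` holds the coefficient matrices (A, B, C) of one time window,
    built offline with the frozen reaction rate of that window; m_stw reduced
    steps are taken inside each window."""
    for A, B, C in windows:
        for _ in range(m_stw):
            T_new = T_hat + dt * (A @ T_hat) + dt * (B @ Y_hat)
            Y_hat = Y_hat + dt * (C @ Y_hat)
            T_hat = T_new
    return T_hat, Y_hat
```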

    This section presents the numerical results obtained by applying the parameterized ROM strategy to solve the 2D ADR equation and the 2D WP model.

    First, the 2D ADR equation is used to study the limitations of the proposed methodology when the values of parameters such as the transport velocity, the diffusion coefficient, the initial conditions, or the ROM parameters are modified.

    Case 1. Parameter: Diffusion coefficient

    The parameter studied in this case is the diffusion coefficient μ = ν. Since this parameter is scalar, a one-dimensional problem will be worked with for the sake of simplicity. The time-space domain of the case is defined as (x,y,t) ∈ [0,20]×[0,1]×[0,40]. The initial condition (IC) is defined as the following Gaussian profile

    u(x,y,0) = 1 + e^{-0.2(x-10)^{2}}, \quad \forall y. \qquad (4.1)

    Free boundary conditions are considered. The physical domain is discretized using Ix×Iy=200×1 volume cells and CFL=0.9. The advection velocity is set to zero, ax=ay=0, and the diffusion coefficient is given two different values to generate the training samples. These values, indicated by ν1 and ν2, are shown in Table 1, together with the target value νT.

    Table 1.  Case 1. Values of the diffusion coefficient ν for the training samples and the target value.
    ν1 ν2 νT
    0.01 0.2 0.1


    All the settings of the problem are shown in Table 2.

    Table 2.  Case 1. Settings.
    Lx×Ly T CFL Ix×Iy ax ay ν BC MPOD
    20×1 40 0.9 200×1 0 0 Table 1 Free 5


    Three different subcases have been solved with the ROM. For each of these, the training set has been constructed with different samples. As indicated in Table 3, in Subcase 1, the ROM has been trained with ν1; in Subcase 2, with ν2; and in Subcase 3, with both values ν1 and ν2. The ROM has been solved in all subcases using five POD modes, since this number achieves a good compromise between the accuracy of the solution obtained by the ROM and the central processing unit (CPU) time required to compute it. Later, in Case 4, it is analyzed how the solutions are modified if the value of this parameter is changed.

    Table 3.  Case 1. Training set for the different subcases and differences of the ROM solutions with respect to the test solutions.
    Subcase 1 2 3
    Training set {ν1} {ν2} {ν1,ν2}
    d1 1.96×10⁻¹ 5.90×10⁻⁴ 6.79×10⁻⁴


    The accuracy obtained by the solutions uROM calculated by the ROM is measured by means of the differences with respect to the solutions uFOM calculated by the FOM using the L1 norm

    d_1 = \Delta x \sum_{(i,j)\in J} \left| u^{FOM}_{i,j} - u^{ROM}_{i,j} \right|. \qquad (4.2)
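
    A minimal sketch of this difference metric is given below; the optional Δy weight is an assumption for the fully 2D cases, whereas the text uses only Δx:

```python
import numpy as np

def l1_difference(u_fom, u_rom, dx, dy=None):
    """L1 difference (4.2) between the FOM and ROM solutions over all cells.
    For the quasi-1D cases only dx is used, as in the text; pass dy to weight
    the 2D cases by the cell area instead."""
    weight = dx if dy is None else dx * dy
    return weight * np.sum(np.abs(u_fom - u_rom))
```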

    These differences can be computed at each time step, so the time evolution of the errors can be visualized. The differences d1 computed in this case are shown in Table 3. These results are also shown in Figure 1, where the IC is represented by the black line, the ROM solution by the red line, the test solution by the blue line, and the training solution by the gray line. As can be seen from these representations, all three subcases show high accuracy. This means that the ROM is able to predict solutions for larger values of the diffusion coefficient than the one it has been trained with, as shown in Subcase 1. However, as indicated in Subcase 2, using larger values of this parameter in the training improves the accuracy by three orders of magnitude. Conversely, enlarging the training set with more samples does not improve the accuracy further, as reported in Subcase 3.

    Figure 1.  Case 1. ROM solutions are shown in red, together with the test solutions in blue and the training solutions in gray at the final time. The IC is represented by the black solid line.

    It is also possible to study the speed up achieved by the ROM by means of the CPU time it requires, τ^ROM_CPU, and that of the FOM, τ^FOM_CPU, both measured in seconds. The CPU times required by the FOM to compute the test solution and by the ROM to compute the target solution are τ^FOM_CPU = 1.00×10⁻² s and τ^ROM_CPU = 2.61×10⁻⁴ s, respectively, so the speed up achieved by the ROM is ×39.

    Case 2. Parameter: Advection velocity

    The parameter studied in this case is the advection velocity in the x direction, μ = ax (with ay = 0). In this case, only one component of the velocity is used for the sake of simplicity. Later, the two-dimensional character of the velocity field is studied. The time-space domain of the case is defined as (x,t) ∈ [0,20]×[0,10]. The IC is defined as the following Gaussian profile:

    u(x,y,0) = 1 + e^{-0.2(x-6)^{2}}. \qquad (4.3)

    Free boundary conditions are considered. The physical domain is discretized using Ix×Iy=200×1 volume cells and CFL=0.9. The diffusion coefficient is set as ν=0.01, and the advection velocity in the x direction is given six different values to generate the training samples. These values are shown in Table 4, together with the target value (ax)T.

    Table 4.  Case 2. Advection velocity ax values for the training samples and the target value.
    (ax)1 (ax)2 (ax)3 (ax)4 (ax)5 (ax)6 (ax)T
    0.01 0.2 0.4 0.6 0.8 1 0.5


    All the settings of the problem are shown in Table 5.

    Table 5.  Case 2. Settings.
    Lx×Ly T CFL Ix×Iy ax ay ν BC MPOD
    20×1 10 0.9 200×1 Table 4 0 0.01 Free 5


    The ROM has been solved 22 different times, corresponding to the 22 subcases indicated in Figure 2. In each of these subcases, the training set consists of a different combination of the six samples computed for the values of the parameter of interest ax listed in Table 4. In the first six subcases, the ROM has been trained with a single sample, choosing a different one in each subcase, while in the rest, at least two samples are used. For example, in Subcase 5, the ROM has been trained using just Sample 5 ((ax)5 = 0.8); in Subcase 16, the ROM has been trained using Samples 2 and 5 ((ax)2 = 0.2 and (ax)5 = 0.8). Through this exploration of the number of samples and which ones in particular, the most appropriate composition of the parametric training set has been studied to predict the target solution with (ax)T = 0.5. In all subcases, the ROM has been solved using five POD modes.

    Figure 2.  Case 2. Training set.

    The differences between the ROM and FOM solutions d1 are presented in Figure 3. The main conclusion that can be drawn from these results is that it is necessary to train the ROM with a solution obtained with a higher advection velocity than the target; in other words, it is necessary to use a solution that has traveled further in space. In addition, the accuracy of the solution predicted by the ROM is higher the closer the training value is to the target. If the training value moves further away towards higher values, the accuracy drops, although the ROM still works. This is clearly shown in the single-trained Subcases 4–6: Subcase 4 has the best accuracy, and Subcases 5 and 6 worsen with respect to Subcase 4 but remain below Subcases 1–3. The ROM is able to accurately predict the time evolution of the initial Gaussian profile in Subcases 4–6, as shown in Figure 4d–f, whereas in Subcases 1–3 the ROM changes the shape of the Gaussian profile, as shown in Figure 4a–c.

    Figure 3.  Case 2. Differences d1 (black dots) between each subcase and the threshold difference dT1 (horizontal dashed line).
    Figure 4.  Case 2. ROM solutions are shown in red, together with the test solutions in blue and the training solutions in gray at the final time. The IC is represented by the black solid line.

    In Figure 3, of all the subcases, the one showing the smallest differences is Subcase 4, where the training set is composed of a single sample. However, training sets are rarely composed of only one solution and, in addition, a possible lack of knowledge of their composition may mean that the samples are not sufficiently close to the target. Thus, it is necessary to consider sets composed of several samples, as is done in Subcases 7–22. In general, it has been found that the subcases with the smallest differences are those containing Training Sample 4, such as Subcases 9, 10, 17, 19, 21, and 22. In these subcases, the differences obtained are above those of Subcase 4, but they still correspond to good levels of error, as can be seen in the solutions shown in Figure 4g–l. In these subcases, the solutions predicted by the ROM are practically identical to the test solutions. From this, it can be deduced that as long as the differences are below a value of approximately 0.15, the ROM solution will be good. This value of dT1 = 1.5×10⁻¹ will be used as a threshold for selecting solutions with acceptable accuracy. It has been included in the figure by means of a horizontal dashed line, according to which Subcases 4, 9, 17, 19, and 21 would be accepted.

    The CPU times required by the FOM and the ROM are τ^FOM_CPU = 1.08×10⁻³ s and τ^ROM_CPU = 1.93×10⁻⁵ s, respectively, so the speed up is ×56.

    Case 3. Parameter: Initial Gaussian profile

    This test case is designed to study the ability of the ROM to predict solutions when the coefficients defining the IC act as input parameters. Since a detailed analysis of the elements that define the IC is carried out, a one-dimensional setting is used. The time-space domain of the case is defined as (x,y,t) ∈ [0,20]×[0,1]×[0,10]. A Gaussian profile is defined as the IC

    u(x,y,0) = \bar{u} + u_0\,e^{-c(x-x_0)^{2}}, \qquad (4.4)

    where the four coefficients ˉu, u0, c, and x0 are treated as input parameters. Each of these parameters is studied separately to see how they affect the ROM's predictions. They are given six different values structured around the target value, so that six training samples are considered to build the training set according to the distribution of subcases indicated in Figure 5.

    Figure 5.  Case 3. Training set.

    Free boundary conditions are considered. The physical domain is discretized using Ix×Iy=200×1 volume cells and CFL=0.9. The diffusion coefficient is set to ν=0.01, and the advection velocity is set to a=(ax,ay)=(0.5,0). All the settings of the problem are kept fixed in all the following different cases and are shown in Table 6.

    Table 6.  Case 3. Settings.
    Lx×Ly T CFL Ix×Iy ax ay ν BC
    20×1 10 0.9 200×1 0.5 0 0.01 Free


    Parameter of study: ˉu

    First, the offset of the initial Gaussian profile ˉu is considered as the parameter of study and given the values indicated in Table 7. The target value is also included.

    Table 7.  Case 3.1. Training values of ˉu.
    ˉu1 ˉu2 ˉu3 ˉu4 ˉu5 ˉu6 ˉuT
    0 0.2 0.4 0.6 0.8 1 0.5


    The rest of the coefficients defining the initial Gaussian profile are given the following values,

    u0=1,c=0.2,x0=10.

    The ROM has been solved using MPOD=6 POD modes in all subcases.

    The differences d1 of all subcases are shown in Figure 6. In all the single-trained subcases, the ROM obtains very high differences, even in Subcases 3 and 4, where the training values of ū are the closest to the target value. The solutions computed by the ROM are not able to accurately predict the final Gaussian profile of the test solution, as indicated in Figure 7a–f. This is due to the fact that the initial Gaussian profile, when projected onto the reduced space (using a small number of POD modes), is deformed and loses accuracy; when it is then transported in space, it maintains this deformation. This is illustrated by Figure 7g–l, in which the IC is represented together with its reprojection onto the physical space, i.e., ΦΦ^T u(x,0).

    Figure 6.  Case 3.1. Differences d1 (black dots) between each subcase and the threshold difference dT1 (horizontal dashed line).
    Figure 7.  Case 3.1. ROM results and comparison between the IC and its reduced projection.

    However, in subcases where the training set is composed of two or more samples, whatever they are, the ROM is able to predict solutions with high accuracy. Thus, Subcases 7–22 show similar differences that are below the acceptance threshold dT1, as shown in Figure 6. In short, the offset of the initial Gaussian profile ˉu is a very malleable parameter that simply requires a training set with several samples and can be given arbitrary values.

    The CPU times required by the FOM and the ROM are τ^FOM_CPU = 9.05×10⁻⁴ s and τ^ROM_CPU = 1.94×10⁻⁵ s, respectively, so the speed up is ×47.

    Parameter of study: u0

    Second, the amplitude of the initial Gaussian profile u0 is considered as the parameter of study and given the values indicated in Table 8.

    Table 8.  Case 3.2. Training values of u0.
    (u0)1 (u0)2 (u0)3 (u0)4 (u0)5 (u0)6 (u0)T
    0.1 0.28 0.46 0.64 0.82 1 0.5


    The rest of the coefficients defining the initial Gaussian profile are given the following values

    ˉu=1,c=0.2,x0=6.

    The ROM has been solved using MPOD=5 POD modes.

    Similar to what has been observed with the offset coefficient ˉu, the ROM needs two or more samples in the training set when predicting solutions for different values of the amplitude of the initial Gaussian profile. This is shown in Figure 8, where Subcases 7–22 present similar differences below the threshold difference dT1.

    Figure 8.  Case 3.2. Differences d1 (black dots) between each subcase and the threshold difference dT1 (horizontal dashed line).

    The CPU times required by the FOM and the ROM are τ^FOM_CPU = 8.88×10⁻⁴ s and τ^ROM_CPU = 1.42×10⁻⁵ s, respectively, so the speed up is ×63.

    Parameter of study: c

    Third, the width of the initial Gaussian profile c is considered as the parameter of study and given the values indicated in Table 9.

    Table 9.  Case 3.3. Training values of c.
    c1 c2 c3 c4 c5 c6 cT
    0.1 0.28 0.46 0.64 0.82 1 0.5


    The rest of the coefficients defining the initial Gaussian profile are given the following values:

    ˉu=1,u0=1,x0=6.

    The ROM has been solved using MPOD=8 POD modes.

    The width of the initial Gaussian profile is a more complicated parameter, since most of the subcases show higher differences than the threshold difference dT1, as shown in Figure 9. Only in Subcases 12, 17, 20, 21, and 22 is the ROM able to predict accurate solutions, as shown in Figure 10. What these subcases have in common is the presence of Sample 3 in their training set. This sample has been computed with the value of c closest below the target value, i.e., with the initial Gaussian profile whose width is closest to the target from above. In the subcases in which the training set is composed of several samples (including Sample 3), the differences also decrease, as indicated by Subcases 20–22. It can therefore be concluded that the ROM needs to be trained with several samples that include wider profiles.

    Figure 9.  Case 3.3. Differences d1 (black dots) between each subcase and the threshold difference dT1 (horizontal dashed line).
    Figure 10.  Case 3.3. ROM results and comparison between the IC and its reduced projection.

    In addition, with the previous parameters only six and five POD modes were enough to achieve good predictions, but when considering the width of the Gaussian profile it is necessary to increase the number of POD modes to eight to obtain satisfactory differences.

    The CPU times required by the FOM and the ROM are τ^FOM_CPU = 8.86×10⁻⁴ s and τ^ROM_CPU = 3.09×10⁻⁵ s, respectively, so the speed up is ×29.

    Figure 11.  Case 3.4. Differences d1 (black dots) between each subcase and the threshold difference dT1 (horizontal dashed line).

    Parameter of study: x0

    Lastly, the position of the initial Gaussian profile x0 is considered as the parameter of study and given the values indicated in Table 10.

    Table 10.  Case 3.4. Training values of x0.
    (x0)1 (x0)2 (x0)3 (x0)4 (x0)5 (x0)6 (x0)T
    4 4.8 5.6 6.4 7.2 8 6


    The rest of the coefficients defining the initial Gaussian profile are given the following values

    ˉu=1,u0=1,c=0.2.

    The ROM has been solved using MPOD=5 POD modes.

    Subcases 4, 13, 17, 19, 21, and 22 show much smaller differences than the threshold difference dT1. In all these subcases, the training sets include Sample 4, which is computed with the closest value above the target value. The smallest differences are obtained in Subcase 17, where the training set includes just Samples 3 and 4. The differences are increased if more samples are added to the training set, although the ROM continues to obtain good solutions, as shown in Figure 12.

    Figure 12.  Case 3.4. ROM results and comparison between the IC and its reduced projection.

    The CPU times required by the FOM and the ROM are τ^FOM_CPU = 8.96×10⁻⁴ s and τ^ROM_CPU = 1.44×10⁻⁵ s, respectively, so the speed up is ×62.

    Case 4. Parameters: Mtrain and MPOD

    It is possible that the samples composing the training set cannot be designed and that the training set is given in advance. In this case, it is necessary to study how to adjust the ROM parameters that remain accessible. With this objective, the training set has been built with ten samples in which the coefficients that define the initial Gaussian profile (4.4) have been given random values, as shown in Figure 13.

    Figure 13.  Case 4. Values of the training set.

    The time-space domain of the case is defined as (x,y,t) ∈ [0,20]×[0,20]×[0,10]. A Gaussian profile is defined as the IC

    u(x,y,0) = \bar{u} + u_0\,e^{-c_x(x-x_0)^{2} - c_y(y-y_0)^{2}}, \qquad (4.5)

    where the six coefficients ˉu, u0, cx, cy, x0, and y0 are treated as given random values, as shown in Figure 13. Free boundary conditions are considered. The physical domain is discretized using Ix×Iy=100×100 volume cells and CFL=0.4. The diffusion coefficient is set as ν=0.01, and the advection velocity is set as a=(ax,ay)=(0.5,0.5). All the settings of the problem are kept fixed in all the following different cases and are shown in Table 11.

    Table 11.  Case 4. Settings.
    Lx×Ly T CFL Ix×Iy ax ay ν IC BC
    20×20 10 0.4 100×100 0.5 0.5 0.01 Eq (4.5) Free


    The training set has been designed to add samples one by one in increasing order, as follows:

    Mtrain ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}.

    In addition to this, the ROM has been solved using different values of the number of POD modes

    MPOD ∈ {3, 6, 9, 12, 15, 18, 21, 24, 27, 30}.

    Therefore, the subcases solved by the ROM are the combinations of all the values of these two parameters; that is, there are 100 subcases, whose training sets are composed as shown in Figure 14. Each time the training set returns to a set consisting only of the first sample (every 10 subcases), three POD modes are added.
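
    A sketch of how this sweep over Mtrain and MPOD could be organized is given below; rom_runner is a hypothetical placeholder that trains the ROM on the given snapshots, solves the target subcase, and returns the difference d1:

```python
import numpy as np

def sweep_subcases(sub_matrices, rom_runner, mpod_values, mtrain_values):
    """Run the 100 subcases of Case 4: for every number of POD modes and every
    training-set size, stack the first Mtrain sub-matrices and collect the d1
    difference returned by the (hypothetical) rom_runner."""
    results = {}
    for m_pod in mpod_values:
        for m_train in mtrain_values:
            snapshots = np.hstack(sub_matrices[:m_train])
            results[(m_pod, m_train)] = rom_runner(snapshots, m_pod)
    return results
```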

    Figure 14.  Case 4. Training set.

    The differences for all the subcases can be seen in a single row in Figure 15, where the general tendency to decrease as the number of POD modes increases is apparent. In the last four data sets (from Subcase 61 onwards), differences smaller than the threshold difference dT1 are reached. In other words, the ROM predicts good solutions whenever it uses at least 18 POD modes. This is best seen in the 2D representation shown in Figure 16a, where the red line indicates the threshold difference. It can be seen that just one training sample is never enough, since the error is always very large. At least four samples are needed in the training set when they take arbitrary values. From four samples onwards, and with a high number of POD modes, the differences remain at similar values (below the threshold error), for which the predictions are accurate.

    Figure 15.  Case 4. Differences d1 (black dots) between each subcase and the threshold difference dT1 (horizontal dashed line).
    Figure 16.  Case 4. Differences (left) and CPU times (right).

    As the number of POD modes used by the ROM increases, the computation times increase and, consequently, the speed up decreases, as shown in Figure 16b. Thus, it is desirable to use as few POD modes as possible to make the ROM faster than the FOM, but without losing accuracy in the predictions. Taking this into account, the optimal number of POD modes is around 18, since this achieves three orders of magnitude of speed up while the accuracy of the solution is guaranteed.

    Case 5. Parameter: velocity

    In Case 2, it was concluded that it is necessary for the training set to contain a sample with a higher advection velocity than the target for the ROM to predict accurate solutions. In the case of a two-dimensional velocity field, this conclusion could be extended to cases where the target solution is transported in the same direction as the training samples (and its magnitude may vary). That is, the ratio ax/ay is kept constant. However, it is highly interesting to study what happens if the angle of the transport direction of the target solution is modified.

    The training set is composed of a single sample, as indicated in Figure 17a, and the ROM is solved for different target values obtained by modifying the transport direction (and its magnitude), as indicated in Table 12, which lists the values of the study parameters

    μ1 = ax,  μ2 = ay,

    following the formulation indicated in (4.6). The diffusion coefficient is set to ν = 0:

    a_x = a_0\cos\alpha, \qquad a_y = a_0\sin\alpha. \qquad (4.6)

    The training and target values are also shown in Figure 17a, where the circular symmetry of the field can be observed. The target velocities T1, T2, and T3 are equal in magnitude to the training sample but move away from it by 5°, 10°, and 20° on the circumference centered at (0,0), respectively. The velocities T5, T6, and T7 are shifted by the same angles but have a smaller magnitude. The velocities T4 and T8 differ in magnitude from the training value but point in the same direction.

    Table 12.  Case 5. Model parameters.
    Sample a0 α ax ay
    Train 0.5 45 0.5 0.5
    Target 1 0.5 40 0.54167522 0.454519478
    Target 2 0.5 35 0.57922797 0.405579788
    Target 3 0.5 25 0.64085638 0.298836239
    Target 4 0.4 45 0.4472136 0.447213595
    Target 5 0.4 40 0.48448905 0.40653458
    Target 6 0.4 35 0.51807724 0.36276159
    Target 7 0.4 25 0.57319937 0.267287258
    Target 8 0.6 45 0.54772256 0.547722558

    Figure 17.  Case 5. Parameters values, final differences, and IC.

    The time-space domain of the case is defined as (x,y,t) ∈ [0,2]×[0,2]×[0,2]. The IC is the same for all target solutions and is given by

    u(x,y,0) = \begin{cases} 2, & \text{if } (x-0.25)^{2} + (y-0.25)^{2} < 0.1, \\ 1, & \text{otherwise}, \end{cases} \qquad (4.7)

    and can be seen in Figure 17c. Free boundary conditions are considered.

    The physical domain is discretized using Ix×Iy=150×150 volume cells and CFL=0.4. All the settings of the problem are shown in Table 13.

    Table 13.  Case 5. Problem settings.
    Lx×Ly T CFL Ix×Iy ax ay ν BC NT MPOD
    2×2 2 0.4 150×150 Table 12 Table 12 0 Free 467 20


    Regarding the results, the target velocities T4 and T8 agree with the results obtained in Case 2. As shown in Figure 17b, the differences computed for T4 stay at a very low level for the whole simulation time, predicting highly accurate solutions, whereas the differences for T8 grow once its solution passes the final position reached by the training solution (whose velocity is lower). As for the cases in which the direction of transport is modified, it can be seen in Figure 17b how their differences d1 are grouped according to the angle by which they vary, being smaller for smaller angles. Accordingly, the change in magnitude has hardly any impact on these differences.

    In addition to this, the representations shown in Figure 18 were proposed to further test the predictive capability of ROMs. In this figure, the time evolution of the solutions at three different points in the domain is shown. The dashed lines represent the target solution predicted by the ROM, and the solid lines represent the test solution computed by the FOM as reference solution. In this case, the following three points lie on the diagonal:

    P1=(x1,y1)=(0.5,0.5),P2=(x2,y2)=(0.7,0.7),P3=(x3,y3)=(1,1),

    i.e., points for which vx = vy. The figures therefore show how the test solutions decrease in magnitude as their velocities move off the diagonal. In other words, at the point (x3,y3), the tests T1 and T5 present a magnitude greater than that of T2 and T6, while for T3 and T7 the signal even disappears.

    Figure 18.  Case 5. Time evolution of u at different points.

    As can be seen in the figures, the ROM predictions tend to maintain approximately constant magnitudes and do not decrease enough to resemble the tests. This is unacceptable for the targets T2 and T6 (10° of separation from the diagonal) and even more so for T3 and T7 (20° of separation from the diagonal), where there should be no signal at all. It is therefore advisable that the target velocities not be modified by more than 5° with respect to the training velocities. With this in mind, the training sample must be carefully designed so as not to introduce an error that deviates from the expected physics, as shown in Case 6.

    The ROM has been solved using 20 POD modes, requiring a CPU time of τ^ROM_CPU = 5.27×10⁻⁴ s. Since the CPU time required by the FOM is τ^FOM_CPU = 6.64×10⁻¹ s, the speed up achieved is ×1268.

    A series of numerical results are presented below in which the parameterized fire model is solved taking into account the conclusions obtained from the results of the linear equation.

    Case 6. Parameter: velocity

    The parameter studied in this case is the wind velocity, as indicated in order in the following equations:

    μ1=vx,μ2=vy

    These parameters are given the values shown in Figure 19, where the training velocities are grouped into two different magnitudes, i.e., 0.5 and 1. Within each group, the velocities are separated by 10° in circular symmetry, so that the four samples closest (in blue) to the test velocity (in red) are chosen to carry out the training. The rest of the parameters of the wildfire model (2.5) are given the following fixed values.

    Table 14.  Case 6. Model parameters.
    vx vy k ρ cp α H T Tac Tpc A
    Figure 19 Figure 19 1 40 1 0.05 4000 300 400 400 0.05

    Figure 19.  Case 6. Values of the training set (blue) and the target value (red).

    The time-space domain of the case is defined as (x,y,t) ∈ [0,200]×[0,200]×[0,100]. The ICs for the temperature and the fuel mass fraction are given by

    T(x,y,0) = \begin{cases} 670, & \text{if } (x-25)^{2} + (y-25)^{2} < 10, \\ 300, & \text{otherwise}, \end{cases} \qquad Y(x,y,0) = 1, \quad \forall x,y. \qquad (4.8)

    Free boundary conditions are considered. The physical domain is discretized using Ix×Iy=150×150 volume cells and CFL=0.4. All the settings of the problem are shown in Table 15, including the number of POD modes MPOD and the number of snapshots per time window MSTW.

    Table 15.  Case 6. Problem settings.
    Lx×Ly T CFL Ix×Iy BC NT MPOD MSTW
    200×200 100 0.4 150×150 Free 939 30 100


    Figure 20 shows the test solutions of T and Y at the final time, and Figure 21 shows the solutions predicted by the ROM at different time instants. In Figure 21c,f, it can be clearly seen that the solution predicted by the ROM is made up of the training solutions, because their shadows are superimposed. Because of this, the wavefront, which is very well delimited in the test case shown in Figure 20, is very blurred in the ROM case.

    Figure 20.  Case 6. Test solutions of T and Y at the final time.
    Figure 21.  Case 6. Test (left) and predicted (right) solutions of T and Y at the final time.

    Nevertheless, the evolution of the solution at the three gauging points

    P1=(x1,y1)=(35,35),P2=(x2,y2)=(50,50),P3=(x3,y3)=(65,50), (4.9)

    shows an acceptable agreement between the two solutions, as can be seen in Figure 22b. The closer the predicted solution is to the origin of the IC, the better aligned it is with the test solution. However, as it evolves, it becomes out of phase. This can be seen in Figure 22, which shows the time evolution of the absolute value of the position at which the maximum is located. In this figure, it can be seen that up to t = 40 the slope of the solution predicted by the ROM (red) overlaps with the test slope (blue); however, for longer times, it ends up tending towards the closest training solution (gray).

    Figure 22.  Case 6. Time evolution of the differences dT1 and dY1, the time evolution of T at different points, and the speed-rate of the maximum value of T.

    The ROM has been solved using 30 POD modes with 100 snapshots per time window. The CPU times required by the FOM and the ROM are τ^FOM_CPU = 1.6 s and τ^ROM_CPU = 7.70×10⁻² s, respectively, so the speed up achieved is ×21.

    Case 7. Parameter: Diffusion coefficient

    The parameter studied in this case is the diffusion coefficient

    μ=k.

    This parameter is given the values indicated in Table 16, where the two training values and the target value are found. The rest of the parameters of the model are given the fixed values indicated in Table 17.

    Table 16.  Case 7. Training and target values of the diffusion coefficient k.
    k1 k2 kT
    0.01 10 1

    Table 17.  Case 7. Model parameters.
    vx vy k ρ cp α H T Tac Tpc A
    0.5 0.5 Table 16 40 1 0.05 4000 300 400 400 0.05


    The time-space domain of the case is defined as (x,y,t) ∈ [0,200]×[0,200]×[0,100]. The IC for the temperature and the fuel mass fraction is given by (4.8). Free boundary conditions are considered. The physical domain is discretized using Ix×Iy = 150×150 volume cells and CFL = 0.4. All the settings of the problem are shown in Table 18, including the number of POD modes MPOD and the number of snapshots per time window MSTW.

    Table 18.  Case 7. Problem settings.
    Lx×Ly T CFL Ix×Iy BC NT MPOD MSTW
    200×200 100 0.4 150×150 Free 5814 10 153


    The time evolution of the solution compared at the three gauging points located at (4.9) reveals a good correspondence between the solution predicted by the ROM and the test solution, although the ROM takes a little longer to lower the temperature, as can be seen in the tails at each point. This confirms the conclusions obtained with the linear equation: it is possible to train the ROM with only two values of the diffusion coefficient and predict accurate solutions, even using a training range as large as the one used in this case, from 0.01 to 10. The ROM has been solved using 10 POD modes and 153 snapshots per time window, so the CPU times required by the FOM to compute the test solution and by the ROM to predict the target solution are τ^FOM_CPU = 9.71 s and τ^ROM_CPU = 9.30×10⁻² s, respectively, and the corresponding speed up is ×105.

    Figure 23.  Case 7. Time evolution of the differences dT1 and dY1 and the time evolution of T at different points.

    Case 8. Parameter: Pre-exponential factor

    The parameter studied in this case is the pre-exponential factor

    μ=A.

    This parameter is given the values indicated in Table 19, where the two training values and the target value are found. The rest of the parameters of the model are given the fixed values indicated in Table 20.

    Table 19.  Case 8. Training and target values of the pre-exponential factor A.
    A1 A2 AT
    0.033 0.066 0.05

    Table 20.  Case 8. Model parameters.
    vx vy k ρ cp α H T Tac Tpc A
    0.5 0.5 2 40 1 0.05 4000 300 400 400 Table 19


    The time-space domain of the case is defined as (x,y,t) ∈ [0,200]×[0,200]×[0,100]. The IC for the temperature and the fuel mass fraction is given by (4.8). Free boundary conditions are considered. The physical domain is discretized using Ix×Iy = 150×150 volume cells and CFL = 0.4. All the settings of the problem are shown in Table 21.

    Table 21.  Case 8. Settings.
    Lx×Ly T CFL Ix×Iy BC NT MPOD MSTW
    200×200 100 0.4 150×150 Free 751 15 50


    The results predicted by the ROM closely resemble the test results, as confirmed by the time evolution of the solution at the gauging points (4.9) in Figure 24b. The ROM is solved using 15 POD modes and 50 snapshots per time window. The CPU times required by the FOM to compute the test solution and by the ROM to predict the target solution are τ^FOM_CPU = 1.31 s and τ^ROM_CPU = 5.10×10⁻² s, respectively, and the corresponding speed up is ×26.

    Figure 24.  Case 8. Time evolution of the differences dT1 and dY1 and time evolution of T at different points.

    Case 9. Parameter: Reaction coefficient

    The parameter studied in this case is the reaction coefficient

    μ=α.

    This parameter is given the values indicated in Table 22, where the two training values and the target value are found. The rest of the parameters of the model are given the fixed values indicated in Table 23.

    Table 22.  Case 9. Training and target values of the reaction coefficient α.
    α1 α2 αT
    0.001 0.1 0.05

    Table 23.  Case 9. Model parameters.
    vx vy k ρ cp α H T Tac Tpc A
    0.5 0.5 1 40 1 Table 22 4000 300 400 400 0.05


    The time-space domain of the case is defined as (x,y,t) ∈ [0,200]×[0,200]×[0,100]. The IC for the temperature and the fuel mass fraction is given by (4.8). Free boundary conditions are considered. The physical domain is discretized using Ix×Iy = 150×150 volume cells and CFL = 0.4. All the settings of the problem are shown in Table 24.

    Table 24.  Case 9. Settings.
    Lx×Ly T CFL Ix×Iy BC NT MPOD MSTW
    200×200 100 0.4 150×150 Free 751 15 80


    The results predicted by the ROM achieved high accuracy with respect to the test solution. The time evolution of the solution at the gauging points (4.9) confirms that only two training samples allow the ROM to predict accurate solutions when changing the reaction coefficient. The ROM has been solved using 15 POD modes and 80 snapshots per time window. The CPU times required by the FOM to compute the test solution and by the ROM to predict the target solution are τ^FOM_CPU = 1.31 s and τ^ROM_CPU = 2.90×10⁻² s, respectively, and the corresponding speed up is ×45.

    Figure 25.  Case 9. Time evolution of the differences dT1 and dY1 and time evolution of T at different points.

    Case 10. Parameter: Shape of the IC

    In this test case, the parameters studied belong to the IC. As concluded in the test cases solved with the linear equation, the shape of the IC can be modified arbitrarily, whereas its position is more difficult to incorporate into the ROM prediction. For this reason, only the IC parameters related to its shape are considered here, and the IC is given by

    T(x, y, 0) = T₀ if (x − 20)² + (y − 20)² < r², and 300 otherwise;  Y(x, y, 0) = 1 for all x, y.  (4.10)

    Here

    μ₁ = r,  μ₂ = T₀

    are the parameters of study. Their training and target values are given in Table 25. The training ICs and the target IC are shown in Figure 26. The parameters that define the model (2.5) are given the fixed values shown in Table 26.

    Table 25.  Case 10. Training and target values of the IC given in (4.10).
    Parameter Train 1 Train 2 Train 3 Train 4 Target
    r 20 14 7 2 10
    T0 400 550 750 900 670

    Figure 26.  Case 10. Initial conditions of the training set (black) and the target solution (red).
    Table 26.  Case 10. Model parameters.
    vx vy k ρ cp α H T Tac Tpc A
    0.5 0.5 1 40 1 0.05 4000 300 400 400 0.05


    The time-space domain of the case is defined as (x, y, t) ∈ [0, 200] × [0, 200] × [0, 100]. Free boundary conditions are considered. The physical domain is discretized using Ix × Iy = 150 × 150 volume cells and CFL = 0.4. All the settings of the problem are shown in Table 27.

    Table 27.  Case 10. Settings.
    Lx×Ly T CFL Ix×Iy BC NT MPOD MSTW
    200×200 50 0.3 150×150 Free 751 15 80
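    For illustration, the sketch below (NumPy, hypothetical names, not the authors' implementation) assembles the parametrized IC (4.10) on a cell-centred 150 × 150 grid over the 200 × 200 domain, using the target values of Table 25; the circle centre (20, 20) follows the reconstructed form of (4.10) above.

```python
import numpy as np

# Illustrative assembly of the parametrized IC (4.10) on a cell-centred 150 x 150 grid
# over [0, 200] x [0, 200]. Names are hypothetical; the circle centre (20, 20) follows
# the reconstructed form of (4.10).
Lx, Ly, Ix, Iy = 200.0, 200.0, 150, 150
x = (np.arange(Ix) + 0.5) * Lx / Ix              # cell-centre coordinates
y = (np.arange(Iy) + 0.5) * Ly / Iy
X, Y = np.meshgrid(x, y, indexing="ij")

def initial_condition(r, T0, T_amb=300.0):
    """Temperature T0 inside a circle of radius r centred at (20, 20), ambient elsewhere;
    fuel mass fraction equal to 1 everywhere."""
    T = np.where((X - 20.0) ** 2 + (Y - 20.0) ** 2 < r ** 2, T0, T_amb)
    Yf = np.ones_like(T)
    return T, Yf

T_init, Y_init = initial_condition(r=10.0, T0=670.0)   # target IC values from Table 25
```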


    The ROM is able to predict accurate solutions when the values that define the shape of the IC are changed. The time evolution of the solution at the gauging points (4.9) confirms this conclusion: the ROM closely follows the test solution, although the predicted maxima are slightly lower than those of the test (Figure 27). The ROM has been solved using 15 POD modes and 80 snapshots per time window. The CPU times required by the FOM to compute the test solution and by the ROM to predict the target solution are τ_CPU^FOM = 1.28 and τ_CPU^ROM = 3.00 × 10⁻², respectively, and the corresponding speed-up is ×43.

    Figure 27.  Case 10. Time evolution of the differences dT1 and dY1 and time evolution of T at different points.

    A methodology has been proposed to solve parametrized versions of the ADR equation (2.1) and the WP model (2.5) using POD-based ROMs, allowing the prediction of solutions for values of the problem parameters other than the training ones.

    Several numerical cases have been solved to carry out a detailed analysis of the capabilities of this parametrized ROM strategy; they are summarized in Table 28. In the first five cases, the strategy has been applied to the 2D ADR equation (2.1), where solutions for different values of the advection velocity and the diffusion coefficient have been accurately predicted. The coefficients defining the initial condition have also been successfully considered as input parameters. The reduced version of the 2D WP model (2.5) requires the definition of time windows to linearize the nonlinear coupling term. The parametrized ROM strategy has also been applied to this model for several physical parameters, such as the diffusion coefficient, the pre-exponential factor, and the reaction coefficient.
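    As a complement to the summary in Table 28 below, the following Python sketch outlines the offline/online structure that such a parametrized POD-ROM follows for a generic linear semi-discrete system du/dt = A(μ)u. It is illustrative only and not the authors' implementation: the names are hypothetical, the toy problem is a 1D diffusion equation integrated with explicit Euler, and the time-window linearization required for the nonlinear coupling term of (2.5) is omitted.

```python
import numpy as np

# Schematic offline/online split of a parametrized POD-ROM for a generic linear
# semi-discrete system du/dt = A(mu) u. Illustrative only, not the authors' code;
# the time-window linearization of the nonlinear term of the WP model (2.5) is omitted.

def build_pod_basis(training_snapshots, m_pod):
    """Offline: stack snapshots of all training parameter values, keep m_pod modes."""
    S = np.hstack(training_snapshots)
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :m_pod]

def rom_predict(Phi, A_target, u0, dt, n_steps):
    """Online: Galerkin-project the operator for the target parameter and integrate."""
    A_r = Phi.T @ A_target @ Phi
    a = Phi.T @ u0
    for _ in range(n_steps):
        a = a + dt * (A_r @ a)          # explicit Euler on the reduced system
    return Phi @ a                      # reconstructed full-order state

# Toy usage: 1D diffusion with two training diffusivities and one target value.
n, dt, n_steps = 200, 1e-3, 500
lap = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
u0 = np.exp(-0.5 * ((np.arange(n) - n / 2) / 10.0) ** 2)

def fom(nu, every=25):
    """Full-order explicit-Euler solve that also collects snapshots every `every` steps."""
    u, snaps = u0.copy(), [u0.copy()]
    for k in range(1, n_steps + 1):
        u = u + dt * nu * (lap @ u)
        if k % every == 0:
            snaps.append(u.copy())
    return u, np.column_stack(snaps)

(_, S1), (_, S2) = fom(0.01), fom(10.0)
Phi = build_pod_basis([S1, S2], m_pod=10)
u_rom = rom_predict(Phi, 0.05 * lap, u0, dt, n_steps)
u_ref, _ = fom(0.05)
print("relative difference:", np.linalg.norm(u_rom - u_ref) / np.linalg.norm(u_ref))
```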

    Table 28.  Main conclusions of each test case.
    Case   Equation   Parameter        d1      speed up
    1      2D ADR     ν                10⁻⁴    10¹
    2                 ax               10⁻²    10¹
    3.1               IC: ū            10⁻²    10¹
    3.2               IC: u0           10⁻¹    10¹
    3.3               IC: c            10⁻¹    10¹
    3.4               IC: x0           10⁻²    10¹
    4                 (MPOD, Mtrain)   10⁻³    10³
    5                 (ax, ay)         10⁻²    10³
    6      2D WP      (vx, vy)         10⁻¹    10¹
    7                 k                10⁻¹    10²
    8                 A                10⁻¹    10¹
    9                 α                10⁻¹    10¹
    10                IC               10⁻¹    10¹


    The particular conclusions obtained for the different parameters studied are summarized below. The ROM is able to predict solutions for new values of the diffusion and reaction coefficients provided that at least one training value is lower and another is higher than the target, as shown in Cases 1, 7, and 9. As for the advection velocity, it has been noted in Case 2 that an arbitrary set of training samples must contain at least one sample with a velocity value as close as possible to, and above, the target. In Cases 5 and 6, training samples with circularly distributed velocity values have been proposed, and it has been observed that a separation of 5° between the training and the target values allows the prediction of accurate solutions.

    Given an arbitrary training set computed with random values of the parameters that define the IC, the ROM needs at least 18 POD modes (in 2D problems) and 4 training samples to predict accurate solutions with a high speed-up, as observed in Case 4. Regarding the values that define the IC, the ROM is able to predict solutions with a modified offset or amplitude of the initial profile when the training set contains at least two samples with arbitrary values, as shown in Case 3. Regarding the width of the initial profile, the ROM needs to be trained with samples computed for profiles wider than the target. Unlike in the linear problem, the initial position is particularly difficult to handle when solving linearized ROMs such as the 2D WP model (3.8), as shown in [41]. Exploring ways to overcome this limitation is left for future work.

    The authors declare they have not used artificial intelligence (AI) tools in the creation of this article.

    This work was supported by Project PID2022-137334NB-I00, funded by MCIN/AEI/10.13039/501100011033 (Spanish Ministry of Science and Innovation) and by the European Regional Development Fund (ERDF). It has also been funded by the Spanish Ministry of Science, Innovation and Universities – Agencia Estatal de Investigación (10.13039/501100011033) and the Fondo Europeo de Desarrollo Regional (FEDER) under project No. PID2022-141051NA-I00, and is partially funded by the Government of Aragón through the research grant T32_23R Fluid Dynamics Technologies. The authors belong to, and are supported by, the Aragón Institute of Engineering Research (I3A) of the University of Zaragoza.

    A. Navas-Montilla is a guest editor for this special issue and was not involved in the editorial review or the decision to publish this article. The authors declare there is no conflict of interest.


