
Generative adversarial network for inverse design of airfoils with flow control devices

Deep learning has recently gained prominence in fluid dynamics due to advances in computational power, algorithm development, and data availability. While most applications have focused on modeling and control, its potential for design and optimization remains relatively unexplored. In this study, a conditional generative adversarial network (CGAN) was developed for inverse design of airfoils with flow control devices. This CGAN receives the characteristics of the flow (Reynolds number and angle of attack) and the desired aerodynamic characteristics (drag and lift coefficients). Based on those inputs, the CGAN generates an airfoil that fulfills the defined specifications. With this objective, numerical simulations of 4-digit NACA airfoils with variable flap configurations and flow conditions were conducted, in order to obtain the necessary aerodynamic data for training and testing the CGAN. The results demonstrate that the proposed CGAN generates airfoil geometries with high accuracy and efficiency, showing minor deviations from real geometries and achieving aerodynamic performances that approach the desired ones for all the flow conditions considered. Additionally, the model is able to generalize to extreme cases not seen during training, which significantly broadens its application range. This approach offers a significant reduction in computational time compared to traditional iterative optimization methods, making it suitable for rapid design exploration and real-time applications.

    Citation: Alejandro Ballesteros-Coll, Koldo Portal-Porras, Unai Fernandez-Gamiz, Iñigo Aramendia, Daniel Teso-Fz-Betoño. Generative adversarial network for inverse design of airfoils with flow control devices[J]. Electronic Research Archive, 2025, 33(5): 3271-3284. doi: 10.3934/era.2025144




Data-driven differential equations aim to formulate and study various types of differential equations using simulated or available data, without explicit knowledge of the underlying physical principles governing the system. Classical differential equations are typically derived from known physical laws or theories. In purely data-driven approaches, the working assumption is that the data carry the information and properties of the physical system they represent. By applying machine learning mechanisms together with statistical methods, the underlying differential equations can then be estimated directly from the observed data.

To demonstrate the accuracy of the parameter estimation, a set of coupled first-order ODEs is solved using both a numerical method, the explicit Runge-Kutta method of order 5(4) (RK45), and the analytical method, PINNs. As a general note, any suitable numerical method would serve as a baseline against which to compare the trained solutions obtained from the PINNs method. As noted in one of the chapters of [1], the RK method is an effective and widely used method for solving initial-value problems of differential equations. According to the Python SciPy library documentation, the designation 5(4) indicates that steps are taken using the fifth-order accuracy formula, while the error is controlled assuming fourth-order accuracy. It has been verified (see [2]) that the analytical solutions closely match the numerical ones. The same paper illustrates the universal approximation theorem (UAT), which states that neural networks (NNs) are universal: any continuous function can be approximated arbitrarily well by an NN. NNs achieve this by adjusting the weight and bias parameters in the hidden layers to build a staircase-like fit of the target function; the weights determine the width of each step, whereas the biases determine its position.
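For reference, a minimal SciPy sketch of this numerical baseline follows, using the linear system studied later in Eq (4.1); the integration horizon and sampling grid are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Right-hand side of the linear coupled first-order ODE of Eq (4.1)
def rhs(t, u):
    x, y = u
    return [1 + 0.2 * x - 0.3 * y, 2 - 0.4 * x + 0.5 * y]

# Initial condition u[t=0] = [0, 1]; the horizon (0, 5) is an assumed example
sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 1.0], method="RK45", dense_output=True)

t = np.linspace(0.0, 5.0, 200)
x_ref, y_ref = sol.sol(t)  # reference trajectories to compare against PINNs
```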

The beauty of the data-driven approach is shown by using the set of experimental data points in the coefficient search. Simply put, the experimental data points portray the features of the ODE and thereby specify the parameter estimation of the unknown underlying equations. The parameter estimation is achieved by combining the PINNs method with a sparse optimization method. The PINNs method is used because of its ability to reason about the physical phenomena that model the dynamic development of the system of equations, expressed through the derivatives computed by the PINNs method. In this paper, to set up the parameter estimation, the known coupled first-order ODE is kept aside, and the analytical solution is employed as the experimental data. Notation-wise, the experimental data for a coupled first-order ODE are denoted $u[x,y]$ in the algorithm setting. Knowing the values of $u[x,y]$, the derivatives $u_t$ can be computed using the PINNs method. To kickstart the estimation process, a column vector $\phi$ consisting of all potential variables is formed, for example $\phi = [1,\ x,\ x^2,\ x^3,\ y,\ \dots]^T$. If $n$ potential variables are listed in total, then $\phi$ has dimension $n \times 1$. Since all the potential variables are generated from the variables $x$ and $y$, based on the given data $u$ expressed in terms of $x$ and $y$, the numerical values of each variable listed in $\phi$ can be computed accordingly. The desired coefficient matrix is denoted $\Lambda$; it is generally defined with dimension $m \times n$, depending on the definition of the functions to be optimized. Hence, analogous to solving a system of linear equations $Ax = b$, the search for the best coefficient combination is set up as solving $\Lambda^T \phi = u_t$. The $\Lambda$ matrix undergoes a series of training processes to optimize the coefficients, and the outcome of the optimization is expected to suggest the best-fit combination of coefficients representing the underlying equations. The detailed optimization process is illustrated in Section 3.2.
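A minimal sketch of this setup follows; the trajectories below are random stand-ins for the trained data $u[x,y]$, and the five candidate terms are an assumed choice of library.

```python
import numpy as np

N = 200  # number of sample points along the trajectory
rng = np.random.default_rng(0)
x, y = rng.random(N), rng.random(N)  # stand-ins for the trained x(t), y(t)

# Candidate library phi = [1, x, x^2, x^3, y]^T evaluated at every sample,
# stacked column-wise: shape (n, N) with n = 5 candidate terms here
Phi = np.vstack([np.ones(N), x, x**2, x**3, y])

# With derivatives u_t of shape (m, N) from the PINNs method (m = 2 states),
# the coefficient matrix Lambda^T of shape (m, n) must satisfy
# Lambda^T @ Phi ≈ u_t, analogous to solving Ax = b
```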

To the best of our knowledge, the methods most frequently used for solving inverse problems that aim to uncover the coefficients of governing equations are optimization methods, Bayesian inference, and regression techniques. Examples include sparse modeling (demonstrated in [3]) and unconstrained mathematical optimization such as the gradient descent algorithm, which involves iterative training of machine learning models. Recently, Kamyab et al. [4] conducted a survey of current deep learning methods for solving inverse problems. They categorized the existing analytical methods into analytic inversion, iterative methods, discretization as regularization, and variational methods, which often regularize and constrain the problem to obtain numerically stable solutions. Their paper mentions the alternating direction method of multipliers (ADMM), which can deal with huge collections of images that need to be optimized with a regularizer; by combining the features of dual decomposition and the augmented Lagrangian method, the ADMM algorithm can efficiently handle large-scale problems. ADMM uses the augmented Lagrangian method as the optimization catalyst to search for the best combinations of coefficients. Chen et al. [5] further described the usage of ADMM in discovering the coefficients of an underlying equation, and [6] outlined the idea of ADMM in low-rank matrix recovery. In fact, ADMM is an optimization algorithm used to solve convex optimization problems, particularly those with separable objectives or constraints. The intention of this paper is to apply ADMM for parameter estimation. The optimization problem considered here contains both differentiable and non-differentiable parts. In this context, the "alternating" in ADMM refers to alternating between optimizing the differentiable part and the non-differentiable part while updating the Lagrange multipliers. The PINNs method is used to optimize the differentiable part. The sparse optimization method is then applied to optimize the non-differentiable part, which suggests the best combination of parameters and hence the simplest equations. PINNs and sparse optimization are therefore applied alternately in a loop: the trainable variables (weights and biases) are kept updated while optimized sparse solutions are sought. Both methods attempt to extract the underlying properties of the differential equations describing the relationships between variables within a system, based solely on experimental data. This approach allows for the discovery or modeling of complex systems whose governing equations are unknown or difficult to ascertain through numerical methods.

It is commonly known, at least to the best of our literature review, that the inverse problem of recovering unknown underlying equations is difficult. That is, given a set of data from a physical phenomenon, it is a great challenge to search for the underlying equations that govern it, precisely because those equations are unknown. Hence, understanding the features of the given data set is important. Since the experimental data are trained using the proposed PINNs method, the modeling process can be sped up when the coupled first-order ODE has been well studied beforehand. The specialty of the PINNs method, which allows the training process to be done with physical reasoning, indirectly assists the understanding of the dynamics of the system.

The aim is to uncover the equations governing a physical phenomenon directly from data, in the spirit of machine learning or statistical learning. This work develops a novel framework to discover the underlying governing equations of a dynamical system. The proposed combination of algorithms provides a way to discover sparse approximations of differential equations from data. This data-driven approach provides insights and a step-by-step algorithmic guide, opening research opportunities to explore the possibility of representing any physical phenomenon with differential equations. The illustration of how the proposed method works on a simple linear coupled first-order ODE serves as a ground-truth check of the reliability and effectiveness of the method; the mechanism can then be extended to complex systems.

    This section will outline the details of the PINNs method and the sparse optimization method.

In every neural-network training process, trainable variables are initialized. Similar to the setting of conventional neural networks (CNNs) and multi-layer perceptrons (MLPs) as described in [7], the PINNs method requires the setting of hyperparameters. Since hyperparameters are fixed before the training process, they do not learn from the data; instead, they determine the training behavior and model architecture, specifying how models are learned or formed. Initially, the model architecture is set up by deciding the number of input, hidden, and output layers, the number of neurons, and the activation function in use. The neurons serve as the communication platform between the input layer and the hidden layers, and then between the hidden layers and the output layer, during the training process; all layers of neurons are connected by weights. Hyperparameters such as the learning rate, batch size, number of epochs, and type of optimizer are initialized for model compilation in the PINNs method. Every epoch of the training process updates the trainable variables, i.e., the weights and biases, with backpropagation, particularly in the hidden layers. CNNs, MLPs, and PINNs all require an activation function; during training, the activation function acts as a catalyst to increase the accuracy of model training. Some commonly used activation functions are ReLU, sigmoid, and tanh. Lastly, the output layer provides the trained solutions. Unlike CNNs and MLPs, the PINNs method is designed to approximate solutions to partial differential equations (PDEs) or ordinary differential equations (ODEs). This is achieved because the PINNs method incorporates the underlying physics of a problem by embedding differential-equation constraints into the loss function. This uniqueness of the PINNs method, incorporating both the data loss and the derivative loss into the model, is classified as the regularization step in neural networks. This regularization step ensures the neural network learns solutions consistent with the underlying physical laws and generalizes well to unseen scenarios. The entire training process of PINNs is illustrated in Figure 1. With this, model compilation is initiated to build the model so that the embedded loss function can ensure that optimization achieves data consistency while complying with physical laws.

    Figure 1.  Procedure of PINNs.

Usually, the loss function is formed from the mean-square differences of the data-values part and the derivatives (gradients) part, as described in [8]. During this optimization process, the optimizer algorithm is also specified. The optimizer selected is Adam with a default learning rate of 0.001. Adam is chosen because it adapts the learning rate for each parameter individually, which is especially useful in solving differential equations. The default learning rate of 0.001 in Adam has been empirically tested and works well in many deep learning applications (see [9]); the same learning rate of 0.001 is also adopted in Scikit-learn, the Python library that provides tools for machine learning, data mining, and data analysis. Once the model is compiled, the next training step is to fine-tune and seek the best-trained solution with the smallest loss value. This is where model fitting is executed: the actual training process begins, using the training data with forward- and back-propagation passes to update the trainable variables in each epoch.
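A minimal sketch of such a composite loss is given below, assuming an equal (1:1) weighting of the data and derivative terms, which the description above does not fix.

```python
import tensorflow as tf

def pinn_loss(u_pred, u_obs, ut_pred, ut_rhs):
    """Mean-square data misfit plus mean-square derivative (physics) misfit."""
    data_loss = tf.reduce_mean(tf.square(u_pred - u_obs))    # fit the data values
    phys_loss = tf.reduce_mean(tf.square(ut_pred - ut_rhs))  # fit the gradients
    return data_loss + phys_loss  # assumed 1:1 weighting of the two parts

# Adam with the default learning rate discussed above
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
```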

In this paper, the PINNs method is used to obtain the experimental data sets that are passed over to the sparse optimization part to retrieve the coefficients of the underlying equations. The PINNs method is a machine learning method that combines neural networks with physical characteristics to solve differential equations and model physical systems. Hence, unlike ordinary neural networks, the PINNs method is able to train on the data and suggest a solution with physical reasoning. This statement is supported by [10], which also lists some common activation functions, such as the sigmoid, hyperbolic tangent, rectified linear unit (ReLU), and Gaussian error linear unit (GELU). The activation function is commonly assumed to be sufficiently smooth in practice. The selection of activation function and optimizer is subject to the suitability of the fitting problem, and there is no strict guide on which is better. Raissi et al. [11] are among the pioneers who used the PINNs method to solve forward and inverse problems for partial differential equations. Mishra et al. [12] also demonstrated that the PINNs method can efficiently approximate inverse problems for PDEs. As stated by Stiasny et al. [13], the PINNs method can universally approximate any continuous function to an arbitrary degree of accuracy. This means that the PINNs method not only ensures the accuracy of the suggested solution but also makes sure that the gradients of the equation are satisfied.

The computation of the gradients of the equation is performed by GradientTape (see [14]), a built-in mechanism in TensorFlow sometimes referred to as automatic differentiation. TensorFlow uses a "tape" to record the operations and input values from the forward pass and then computes the gradients using backpropagation. After the computation, all objects temporarily stored on the tape are discarded, and TensorFlow releases any associated resources. The computation involved in automatic differentiation can be understood through the dual-numbers algorithm: dual numbers encode both the function value and its derivative simultaneously, enabling efficient computation of derivatives through the chain rule.
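A minimal sketch of this mechanism follows; the small network is purely illustrative. A persistent tape records the forward pass so that the derivative of each output component with respect to $t$ can be queried separately.

```python
import tensorflow as tf

# Illustrative network mapping t -> u = [x(t), y(t)]
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="tanh"),
    tf.keras.layers.Dense(2),
])

t = tf.reshape(tf.linspace(0.0, 5.0, 100), (-1, 1))

# persistent=True: the tape is reused for two gradient queries, then freed
with tf.GradientTape(persistent=True) as tape:
    tape.watch(t)
    u = model(t)
    x, y = u[:, 0:1], u[:, 1:2]

dx_dt = tape.gradient(x, t)  # dx/dt at every sample point
dy_dt = tape.gradient(y, t)  # dy/dt at every sample point
del tape  # release the recorded operations and associated resources
```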

Sparse optimization solves optimization problems while encouraging sparsity in the solution, meaning that the solution is expected to have many zero or close-to-zero elements. Sparse solutions aim to identify only the few parameters or variables that contribute significantly, while the rest stay close to zero. In the paper by Schaeffer [15], it is stated that sparsity plays a key role in optimization and data science; in particular, the $\ell_1$-norm regularization term is often used as a proxy for sparsity. That paper uses the $\ell_1$ norm to penalize the number of nonzero coefficients and thereby promote coefficient sparsity. Therefore, sparse optimization often employs regularization terms as a "penalty" to fine-tune the magnitude of the coefficients or parameters in the optimization problem. In the survey by Li et al. [16], several sparse optimization methods for modeling, such as LASSO and the elastic net, are reviewed and compared. Their paper introduces sparse modeling for feature selection and emphasizes that features with nonzero estimated coefficients are the ones selected. Following the same principle, in this paper the best coefficients are obtained based on nonzero estimated coefficients.

Sparse optimization is particularly useful in situations where the underlying data or parameters have a sparse structure, or where simplicity and interpretability of the solution are desirable. In machine learning, sparse optimization is employed for feature selection. Especially when dealing with massive data (see [17]), sparse optimization can identify the most relevant features or variables contributing to a predictive model. As indicated in that paper, while searching for relevant patterns and aiming to strike a balance between accuracy and simplicity, sparse optimization efficiently selects the crucial and essential variables while ignoring the less important ones.

A non-smooth function is not differentiable everywhere and therefore cannot be handled directly by well-known optimization methods such as steepest descent or conjugate gradient. Moreover, for a convex function, a local or global minimum point need not exist. To tackle such problems, as stated in the paper by Huang et al. [18], one can use ADMM, which efficiently decomposes a complex problem into smaller pieces, each easier to handle, thereby speeding up the optimization process. The construction of ADMM involves the minimization of a sum of functions, each component possibly subject to some constraints. As described in [19], ADMM has recently become a well-known optimization framework for many conventional machine learning problems: it serves as a gradient-free optimizer that can efficiently overcome the vanishing-gradient problem and poorly conditioned optimization problems. In summary, ADMM restructures the constrained objective function into smaller parts, and these smaller parts are less complicated and comparatively easier to solve. The restructuring technique in ADMM uses the augmented Lagrangian method, so that the resulting objective function is unconstrained and can be solved term by term.

Generally, to illustrate the algorithm, recall that our intention is to uncover the underlying equations. Hence, $f(x)$ denotes the smooth convex part of the problem, obtained from the relation $\Lambda^T \phi - u_t = 0$, which is to be minimized: $\min_x f(x)$. Due to the convexity of the coupled first-order ODE problem, another function $g(x)$ is introduced to represent the non-smooth part, so the objective becomes $\min_x f(x) + g(x)$. This construction of the objective function tallies with the one recommended by Nishihara et al. [20]. To avoid confusion between the terms, the objective function can be rewritten as

$$ \min_{x,z}\; f(x) + g(z), \quad \text{subject to } h(x,z) = 0. \tag{3.1} $$

The constraint $h(x,z) = x - z = 0$ is artificially created. During the minimization of $f(x)$, an $\ell_1$ regularization term is imposed through the function $g(z) = \lambda \|z\|_1$ to ensure that the suggested outcome (a combination of coefficients of the model) is in its simplest form. The constraint $h(x,z) = 0$ compares the difference between the smooth ($x$) and non-smooth ($z$) terms, which is assumed to be approximately zero; ideally, no dissimilarity between $x$ and $z$ should appear, i.e., $x$ and $z$ should match exactly. As suggested by Yuan et al. [21], the formulation of Eq (3.1) is a nonconvex sparsity-constrained/sparse-regularized optimization problem, which is difficult to solve. Therefore, by applying ADMM, the constrained optimization problem can be transformed into an unconstrained one, and the general augmented Lagrangian form of the objective function is:

$$ \min_{x,z}\; f(x) + g(z) + y^T h(x,z) + \frac{\rho}{2}\,\|h(x,z)\|_2^2. \tag{3.2} $$

In the objective function of Eq (3.2), the notations $f(x)$ and $g(z)$ carry the same meaning as in Eq (3.1). The term $y$, entering through $y^T h(x,z)$, is the Lagrange multiplier for the constraint $h(x,z)$, whereas $\rho$ is the augmentation parameter controlling the penalty on the constraint violation. Usually, $\rho$ is assigned the value of one as the natural scaling of the baseline result. Specifically, for the coupled first-order ODE in this paper, the augmented Lagrangian form is:

$$ \min_{\Lambda,\, z}\; \frac{1}{2}\left\|\Lambda^T \phi - u_t\right\|_2^2 + \lambda \|z\|_1 + \sum_{i,j} y_{ij}\left(\Lambda^T - z^T\right)_{ij} + \frac{\rho}{2}\left\|\Lambda^T - z^T\right\|_2^2. \tag{3.3} $$

Referring to Eq (3.3), the terms $\Lambda^T \phi - u_t$, $z$, and $y_{ij}(\Lambda^T - z^T)_{ij}$ all follow the dimension $m \times n$. The smooth function $\frac{1}{2}\|\Lambda^T \phi - u_t\|_2^2$ is usually solved by the least-squares estimator method, which minimizes the sum of the squares of the differences between the entries of $\Lambda^T \phi$ and $u_t$. As for the non-smooth part, observe that $g(z) = \lambda \|z\|_1$ uses the $\ell_1$ norm, which tries to drive $z$ toward zero. The accompanying coefficient $\lambda$ is a penalty controlling how sparse the outcome should be: the larger the value of $\lambda$, the higher the sparsity of the outcome, leading to more zeros among the coefficients. As detailed in [22] on ADMM in optimization, sparsity is achieved in the suggested outcome, i.e., the sparse matrix, because of the $\ell_1$ regularization term. To deal with the non-smooth term in $z$, the soft-thresholding operator, denoted $S$, is introduced to "smoothen" the term $\|z\|_1$. The multiplier $y$ serves as an indicator of the minimization of $\Lambda^T - z^T$: since $\Lambda^T - z^T$ approaches zero, the term $\sum_{i,j} y_{ij}(\Lambda^T - z^T)_{ij}$ converges to zero. The last term, involving $\Lambda^T - z^T$, is accompanied by another penalty, $\rho$, which controls how strictly the condition $\Lambda^T = z^T$ must be satisfied: the larger $\rho$ is, the more strictly $\Lambda^T - z^T$ is forced to zero.
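Concretely, the soft-thresholding operator has the elementwise closed form $S(\kappa, a) = \mathrm{sign}(a)\max(|a| - \kappa,\, 0)$; the small sketch below shows how it zeroes small entries.

```python
import numpy as np

def soft_threshold(kappa, a):
    """Elementwise proximal operator of kappa * ||.||_1: shrinks entries toward zero."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

print(soft_threshold(0.1, np.array([0.8, -0.05, 0.02, -1.3])))
# [ 0.7 -0.   0.  -1.2]  -- small coefficients are zeroed, promoting sparsity
```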

Solving Eq (3.3) under sparse optimization involves iterative updates of the terms $\Lambda$, $z$, and $y$:

$$ \Lambda^T_{k+1}\left(\phi \phi^T + \rho I\right) = u_t \phi^T - y_k + \rho\, z^T_k, \tag{3.4} $$
$$ z^T_{k+1} = S\!\left(\frac{\lambda}{\rho},\; \frac{y_k}{\rho} + \Lambda^T_{k}\right), \tag{3.5} $$
$$ y_{k+1} = y_k + \rho\left(\Lambda^T_{k+1} - z^T_{k+1}\right). \tag{3.6} $$

    During iteration updates, components Λ, z, and y are updated to suggest the best combinations of coefficients for the coefficient matrix, Λ.
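A compact NumPy sketch of the updates (3.4)-(3.6) follows. The trajectories, $\lambda$, $\rho$, and the iteration count are assumed stand-ins (in practice $\phi$ and $u_t$ come from the PINNs stage), and the $z$-update uses the freshly updated $\Lambda^T$, following common ADMM practice.

```python
import numpy as np

def soft_threshold(kappa, a):
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

# Synthetic stand-ins: in practice Phi and Ut come from the trained PINN
t = np.linspace(0.0, 5.0, 200)
x, y = np.sin(t), np.cos(t)
Phi = np.vstack([np.ones_like(t), x, y])        # (n=3, N) candidate library
true_LT = np.array([[1.0, 0.2, -0.3],
                    [2.0, -0.4, 0.5]])          # ground-truth coefficients
Ut = true_LT @ Phi                              # (m=2, N) derivatives

lam, rho = 1e-3, 1.0                            # assumed penalty values
LT = np.ones((2, 3))                            # Lambda^T, initialized to all ones
ZT = np.random.default_rng(0).random((2, 3))    # z, random initialization
Y = np.ones((2, 3))                             # Lagrange multipliers

A_inv = np.linalg.inv(Phi @ Phi.T + rho * np.eye(3))
for _ in range(50):
    LT = (Ut @ Phi.T - Y + rho * ZT) @ A_inv    # Eq (3.4)
    ZT = soft_threshold(lam / rho, Y / rho + LT)  # Eq (3.5)
    Y = Y + rho * (LT - ZT)                     # Eq (3.6)

print(np.round(LT, 3))  # should approach the ground-truth coefficients
```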

As mentioned earlier, with the intention of building the ground truth of the problem, this paper combines the PINNs method and the sparse optimization approach to uncover the coefficients of the underlying equations; in short, the combination is named the PINNs-sparse method. The PINNs method is specifically applied to obtain the analytical solutions of the coupled first-order ODE. First, given an initial-value coupled first-order ODE problem with $[\frac{dx}{dt}, \frac{dy}{dt}]$, the analytical solutions are estimated using the PINNs method with time $t$ as the input data and $u$ as the output trained solutions in terms of $x$ and $y$, i.e., $u[x,y]$. Then, the $\phi$ array, consisting of guesses of the potential variables, is constructed and computed numerically from the trained experimental data $x$ and $y$.

The data-driven parameter estimation process kickstarts with the PINNs method, when the experimental data $u[x,y]$ are passed to the model setting and compilation process to set up the model and obtain the derivatives $u_t$. The coefficient estimation then proceeds by alternating the PINNs algorithm and the sparse optimization iterations in a loop. After the initial $\Lambda^T$ coefficient matrix, filled with all ones, is passed to the sparse optimization part together with $u_t$, the process begins with the updates of the $\Lambda^T$, $z$, and $y$ components. The updated $\Lambda^T$ coefficient matrix is then passed back to the PINNs algorithm to repeat the training process. The PINNs part is always in charge of updating the trainable variables, resulting in changes to the $u_t$ passed to the sparse optimization part. This alternating process is repeated until the loss values from PINNs and sparse optimization are both at a minimum. In other words, once the $\Lambda$ coefficient matrix is stable and the loss values from both PINNs and sparse optimization remain unchanged at their minima, the coefficient estimation process is complete. Ideally, training stops when the values of $u_t$ and $\Lambda^T$ remain unchanged. As a check, the product of the outcome obtained from sparse optimization, $\Lambda^T$, and the experimental data array, $\phi$, is expected to be approximately equal to the $u_t$ obtained from the PINNs method, i.e., $\Lambda^T \phi = u_t$. Therefore, upon completion of the estimation process, the accuracy of the coefficient matrix can be assessed via the mean square error between $\Lambda^T \phi$ and $u_t$. The step-by-step PINNs-sparse algorithm is summarized below.

Algorithm 1 PINNs-Sparse Algorithm
Require: Initialization of hyperparameters, variables, and the model architecture setting
Ensure: Data: Analytical solutions, $u[x,y]$, from the PINNs method.
    1. Compute the derivatives, $u_t$, with PINNs and pass them to Sparse.             ▷ First PINNs
    2. Find the numerical values of $u_t$ and $\Lambda^T$ (Eq (3.3)).
    3. Solve Eq (3.3) by performing iterative updates on $\Lambda^T$ (3.4), $z$ (3.5), and $y$ (3.6).     ▷ First Sparse
    4. Pass $\Lambda^T$ back from sparse optimization to the PINNs method.
    5. Update $u_t$ in PINNs, then pass it to Sparse.                     ▷ Second PINNs
    6. Repeat Steps 3-5 until $u_t$ and $\Lambda^T$ remain unchanged.     ▷ Looping between PINNs and Sparse
    7. Compute the loss function and check the accuracy between $\Lambda^T \phi$ and $u_t$.
    8. Produce the graphical representation based on the estimated coefficient model.

In short, the complexity of the neural network falls under the setup of the model architecture, as there is so far no specific guide on the ideal number of hidden layers needed in neural-network training. However, the computation time and optimization process of the PINNs method are short and efficient: the training can be done in just a few iterations, as illustrated by the examples in the Results section. Commonly used optimization methods such as gradient descent and conjugate gradient are generally unsuitable for large-scale, non-smooth convex optimization problems because they are sensitive to the step size and the gradients are not well defined everywhere, causing the solutions to converge slowly or fail to converge. By comparison, the ADMM within the PINNs-sparse algorithm excels at large-scale, non-smooth convex optimization problems because it decomposes the smooth and non-smooth components into smaller parts to solve, with the assistance of the augmented Lagrangian formulation and the $\ell_1$-norm regularization introduced above.

In this section, the results of the coefficient search for a linear coupled first-order ODE using the PINNs-sparse method are demonstrated.

    The linear coupled first-order ODE illustrated in this paper is

$$ \begin{cases} \dfrac{dx}{dt} = 1 + 0.2x - 0.3y, \\[4pt] \dfrac{dy}{dt} = 2 - 0.4x + 0.5y, \end{cases} \tag{4.1} $$

with initial condition $u[t=0] = [0, 1]$. From the coupled first-order ODE, it can be observed that this is similar to solving a system of two linear equations; hence the solution $u$ has dimension $2 \times 1$. Letting the variable array $\phi$ have dimension $3 \times 1$, and by convention letting the coefficient matrix follow dimension $3 \times 2$, the equation in vector form is written as $u_t = \Lambda^T \phi$. In this problem, the hyperparameter setting is seven hidden layers with 8, 16, 32, 64, 32, 16, and 8 neurons, the GELU activation function, 100 epochs, and a batch size of 4. GELU was selected because it has a significantly smoother gradient transition than the sharp, abrupt ReLU; an advantage shared by ReLU and GELU is that neither activates all the neurons at the same time.
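A plausible Keras realization of this configuration is sketched below; the plain MSE fit shown stands in for the full composite PINNs loss, which additionally includes the derivative term described earlier.

```python
import tensorflow as tf

widths = (8, 16, 32, 64, 32, 16, 8)  # the seven hidden layers
model = tf.keras.Sequential(
    [tf.keras.Input(shape=(1,))]                                   # input: time t
    + [tf.keras.layers.Dense(w, activation="gelu") for w in widths]
    + [tf.keras.layers.Dense(2)]                                   # output: u = [x(t), y(t)]
)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")  # stand-in for the composite PINNs loss
# model.fit(t_train, u_train, epochs=100, batch_size=4)  # hypothetical training data
```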

The linear coupled first-order ODE is solved using both the numerical method, RK45, and the analytical method, PINNs. The intention of solving it numerically is to make sure the PINNs method produces the same solutions. The trained solutions $u[x,y]$ obtained from the PINNs method are compared with the numerical solutions from RK45; the comparison is shown below. It can be observed from Figure 2 that the solution trajectories from the PINNs method nearly overlap with those of the numerical method, RK45.

    Figure 2.  Results between RK45 and PINNs for the linear coupled first-order ODE.

Putting the coupled first-order ODE aside, the trained solutions $u[x,y]$ are used in the search for the best combination of coefficients to uncover the underlying equations. The known coupled first-order ODE is not used for any computation until the end of the coefficient search, where it serves as a reference for verifying the parameter-estimation performance of the PINNs-sparse method.

In this paper, to demonstrate the ability of the PINNs-sparse method in the coefficient search, the $\phi$ array is set to include only the definite potential variables, i.e., $[1,\ x,\ y]^T$. The matrices $\Lambda$ and $y$ are initialized with all ones, while the $z$ matrix is initialized with random numbers. Performing the PINNs-sparse procedure steps listed in Section 3.3, the coefficient search is stopped once the values of $u_t$ and $\Lambda$ are observed to be unchanged. Figure 3 shows the PINNs part of the PINNs-sparse method, verified by comparison with the trajectory solutions of the numerical method, RK45.

    Figure 3.  PINNs part training results between RK45 and PINNs for the linear coupled first-order ODE.

The outcome obtained from the PINNs-sparse method for the suggested parameters of the coupled first-order ODE is shown in Eq (4.2). Compared with the known linear coupled first-order ODE shown earlier in Eq (4.1), the coefficients can be seen to be quite similar. In particular, this best-fit combination of coefficients was generated in 10 iterations of switching between PINNs and sparse optimization. A learning rate is not applicable in this problem because the maximum likelihood of the function is set equal to zero, which is optimal.

$$ \begin{cases} \dfrac{dx}{dt} = 0.9993 - 1.1657x - 0.1711y, \\[4pt] \dfrac{dy}{dt} = 1.9588 - 3.1502x + 0.7653y. \end{cases} \tag{4.2} $$

To visualize the performance of the suggested parameters for the linear coupled first-order ODE, the coefficient estimates obtained from the PINNs-sparse method are substituted into the system, which is then solved with the numerical method, RK45. The resulting trajectories, based on the estimated coefficients, are compared with the trajectories of the known coupled first-order ODE, also solved by RK45. As the graphical result in Figure 4 shows, the parameters fit well. As an additional remark, the loss value contributed by the PINNs part is 0.4736, whereas that contributed by the sparse part is 0.01011; hence, the total loss value of the PINNs-sparse method is 0.4837.

    Figure 4.  Training results between RK45 and the PINNs-sparse method for the linear coupled first-order ODE.

As a side note, the convergence rate of PINNs in solving the direct (4.1) and inverse (4.2) problems is assessed from the log-loss of PINNs, as shown in Figure 5. It can be observed that in the direct problem, the PINNs method converges after approximately 10 epochs, while the PINNs part of the inverse problem (4.2) converges after 2 epochs.

    Figure 5.  Log-loss of PINNs between Figures 2 and 3.

The non-linear coupled ODE showcases how the proposed method can uncover the underlying equations in their simplest form, verified by the exact trajectory match. The proposed problem is

$$ \begin{cases} \dfrac{dx}{dt} = 1 + 0.2x^2, \\[4pt] \dfrac{dy}{dt} = 2 - 0.4xy - 0.3y, \end{cases} \tag{4.3} $$

with initial condition $u[t=0] = [0, 1]$. In this problem, the hyperparameters are set to seven hidden layers with 8, 16, 32, 64, 32, 16, and 8 neurons, the GELU activation function, 100 epochs, and a batch size of 4. The optimizer selected is Adam with the default learning rate of 0.001.

With a similar approach, the nonlinear coupled ODE is solved using both the numerical method, RK45, and the analytical method, PINNs. It is verified that the PINNs method also works well on nonlinear coupled equations, since both trajectories match exactly, as shown in Figure 6.

    Figure 6.  Results between RK45 and PINNs for the non-linear coupled ODE.

Putting the equations aside, the trained solutions $u[x,y]$ obtained from the PINNs method are used in the search for the best combination of coefficients to uncover the underlying equations. The known equations are not used for any computation until the end of the coefficient search, where they serve as a reference for verifying the parameter-estimation performance of the PINNs-sparse method.

In this problem, to demonstrate the ability of the PINNs-sparse method in the coefficient search, the $\phi$ array is set to include the simplest form of the potential variables, i.e., $[1,\ x,\ y]^T$. The matrices $\Lambda$ and $y$ are initialized with all ones, while the $z$ matrix is initialized with random numbers. Performing the PINNs-sparse procedure steps listed in Section 3.3, the coefficient search is stopped once the values of $u_t$ and $\Lambda$ are observed to be unchanged. Figure 7 shows the PINNs part of the PINNs-sparse method, verified by comparison with the trajectory solutions of the numerical method, RK45.

    Figure 7.  PINNs part training results between RK45 and PINNs for the non-linear coupled ODE.

The outcome obtained from the PINNs-sparse method for the suggested parameters of the non-linear coupled ODE is shown in Eq (4.4), compared with the known non-linear coupled first-order ODE shown earlier in Eq (4.3).

$$ \begin{cases} \dfrac{dx}{dt} = 0.4210 - 1.01478x + 1.2057y, \\[4pt] \dfrac{dy}{dt} = 0.8545 - 1.8863x + 0.7413y. \end{cases} \tag{4.4} $$

This set of coefficients provides a best-fit estimation. Although the terms differ from those of the original equations, the trajectories of the estimated solutions and of the known non-linear coupled first-order ODE fit well, as shown in Figure 8. This is often the case and is reasonable: when the underlying equations are not known, the estimated coefficients are free to take a different form. As an additional remark, the loss value contributed by the PINNs part is 0.2636, whereas that contributed by the sparse part is 0.004619; hence, the total loss value of the PINNs-sparse method in this case is 0.2682.

    Figure 8.  Training results between RK45 and the PINNs-sparse for the non-linear coupled ODE.

Similarly, the convergence rate of PINNs in solving the direct (4.3) and inverse (4.4) problems is assessed from the log-loss of PINNs, as shown in Figure 9. It can be observed that in the direct problem, the PINNs method converges after approximately 2 epochs, while the PINNs part of the inverse problem converges after 3 epochs.

    Figure 9.  Log-loss of PINNs between Figures 6 and 7.

Regarding the comparison of computational resource consumption and accuracy between the PINNs method and a numerical method (here, RK45), numerical methods are well known for their efficiency and accuracy when dealing with well-defined problems. However, in situations where the equations are unknown and only experimental and derivative data are available, the PINNs method can be applied to estimate the coefficients of the unknown equations. Generally, numerical methods are good at computing unique solutions under specific initial and boundary conditions; when the optimization problem involves multiple conditions that must be incorporated, the PINNs method can instead optimize the parameter estimation by satisfying all the conditions while also matching the gradient of the solution trajectory. This is because the PINNs method naturally incorporates boundary conditions, initial conditions, and any additional physical constraints directly into the loss function as regularization.

In conclusion, this paper has made use of ADMM to investigate data-driven differential equations, focusing in particular on solving convex optimization problems. The iterative process, involving multiple loops of updating weights and biases with PINNs and employing ADMM for restructuring the constrained optimization, underscores the complexity and efficacy of the proposed approach. This paper has provided insights and shown how the novel PINNs-sparse method can uncover the governing differential equations. This initiative provides an alternative option for similar research problems on recovering underlying differential equations. The simple approach demonstrates the potential for retrieving the underlying differential equations based solely on data. During the search for the governing differential equations, optimization techniques are applied to ensure that the combination of coefficients is the ideal one with minimum error, which also increases the reliability of the research.

In this paper, we have demonstrated the proposed method on linear and non-linear coupled ODEs for both direct and inverse problems. In real applications, linear coupled ODEs have several uses in finance and tomography, particularly in modeling systems where multiple variables influence each other dynamically over time. Research is continuously being carried out to extend the concept of parameter estimation using the PINNs-sparse method to nonlinear or more complex systems. It is believed that this study contributes to the broader understanding and application of this versatile optimization method in solving complex, decomposable problems across various fields. For example, it is on our radar to apply the proposed method to the comprehensive collection of machine learning data sets encompassing 15 terabytes of numerical simulations across various spatiotemporal physical systems (see [23]) in order to model complex physical systems. The beauty of unveiling hidden equations from given data is encouraging for finding out how the world works and discovering previously unknown aspects of physical phenomena. It is particularly useful when an extreme physical phenomenon comes to our attention, such as a flood, extreme weather, or traffic problems, where the method proposed in this paper could be applied to explain the scenario with functions or equations.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work is supported and funded by Sunway University under the Kick-Start Grant Scheme (GRTIN-KSGS(02)-DAPS-2022) and the Publication Support Scheme. The authors would also like to thank the reviewers for the constructive feedback provided.

    The authors declare that they have no competing interests.



    [1] N. Thuerey, K. Weißenow, L. Prantl, X. Hu, Deep learning methods for Reynolds-averaged Navier–Stokes simulations of airfoil flows, AIAA J., 58 (2020), 25–36. https://doi.org/10.2514/1.J058291 doi: 10.2514/1.J058291
    [2] H. Chen, L. He, W. Qian, S. Wang, Multiple aerodynamic coefficient prediction of airfoils using a convolutional neural network, Symmetry, 12 (2020), 544. https://doi.org/10.3390/sym12040544 doi: 10.3390/sym12040544
    [3] E. Yilmaz, B. German, A convolutional neural network approach to training predictors for airfoil performance, in 18th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, AIAA, 2017. https://doi.org/10.2514/6.2017-3660
    [4] K. Portal-Porras, U. Fernandez-Gamiz, E. Zulueta, A. Ballesteros-Coll, A. Zulueta, CNN-based flow control device modelling on aerodynamic airfoils, Sci. Rep., 12 (2022), 8205. https://doi.org/10.1038/s41598-022-12157-w doi: 10.1038/s41598-022-12157-w
    [5] K. Portal-Porras, U. Fernandez-Gamiz, E. Zulueta, R. Garcia-Fernandez, X. Uralde-Guinea, CNN-based vane-type vortex generator modelling, Eng. Appl. Comput. Fluid Mech., 18 (2024), 2300481. https://doi.org/10.1080/19942060.2023.2300481 doi: 10.1080/19942060.2023.2300481
    [6] S. J. Jacob, M. Mrosek, C. Othmer, H. Köstler, Deep learning for real-time aerodynamic evaluations of arbitrary vehicle shapes, SAE Int. J. Passeng. Veh. Syst., 15 (2022). https://doi.org/10.4271/15-15-02-0006 doi: 10.4271/15-15-02-0006
    [7] R. Garcia-Fernandez, K. Portal-Porras, O. Irigaray, Z. Ansa, U. Fernandez-Gamiz, CNN-based flow field prediction for bus aerodynamics analysis, Sci. Rep., 13 (2023), 21213. https://doi.org/10.1038/s41598-023-48419-4 doi: 10.1038/s41598-023-48419-4
    [8] B. Du, P. D. Lund, J. Wang, Combining CFD and artificial neural network techniques to predict the thermal performance of all-glass straight evacuated tube solar collector, Energy, 220 (2021), 119713. https://doi.org/10.1016/j.energy.2020.119713 doi: 10.1016/j.energy.2020.119713
    [9] J. Ren, H. Wang, K. Luo, J. Fan, A priori assessment of convolutional neural network and algebraic models for flame surface density of high Karlovitz premixed flames, Phys. Fluids, 33 (2021), 036111. https://doi.org/10.1063/5.0042732 doi: 10.1063/5.0042732
    [10] J. Rabault, M. Kuchta, A. Jensen, U. Réglade, N. Cerardi, Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control, J. Fluid Mech., 865 (2019), 281–302. https://doi.org/10.1017/jfm.2019.62 doi: 10.1017/jfm.2019.62
    [11] F. Ren, J. Rabault, H. Tang, Applying deep reinforcement learning to active flow control in weakly turbulent conditions, Phys. Fluids, 33 (2021), 037121. https://doi.org/10.1063/5.0037371 doi: 10.1063/5.0037371
    [12] D. Fan, L. Yang, Z. Wang, M. S. Triantafyllou, G. E. Karniadakis, Reinforcement learning for bluff body active flow control in experiments and simulations, Proc. Natl. Acad. Sci. U. S. A., 117 (2020), 26091–26098. https://doi.org/10.1073/pnas.2004939117 doi: 10.1073/pnas.2004939117
    [13] B. Z. Han, W. X. Huang, C. X. Xu, Deep reinforcement learning for active control of flow over a circular cylinder with rotational oscillations, Int. J. Heat Fluid Flow, 96 (2022), 109008. https://doi.org/10.1016/j.ijheatfluidflow.2022.109008 doi: 10.1016/j.ijheatfluidflow.2022.109008
    [14] K. Portal-Porras, U. Fernandez-Gamiz, E. Zulueta, R. Garcia-Fernandez, S. E. Berrizbeitia, Active flow control on airfoils by reinforcement learning, Ocean Eng., 287 (2023), 115775. https://doi.org/10.1016/j.oceaneng.2023.115775 doi: 10.1016/j.oceaneng.2023.115775
    [15] G. Lee, Y. Joo, S. U. Lee, T. Kim, Y. Yu, H. G. Kim, Design optimization of heat exchanger using deep reinforcement learning, Int. Commun. Heat Mass Transfer, 159 (2024), 107991. https://doi.org/10.1016/j.icheatmasstransfer.2024.107991 doi: 10.1016/j.icheatmasstransfer.2024.107991
    [16] Y. Wang, W. Wang, G. Tao, H. Li, Y. Zheng, J. Cui, Optimization of the semi-sphere vortex generator for film cooling using generative adversarial network, Int. J. Heat Mass Transfer, 183 (2022), 122026. https://doi.org/10.1016/j.ijheatmasstransfer.2021.122026 doi: 10.1016/j.ijheatmasstransfer.2021.122026
    [17] S. A. A. Mehrjardi, A. Khademi, M. Fazli, Optimization of a thermal energy storage system enhanced with fins using generative adversarial networks method, Therm. Sci. Eng. Prog., 49 (2024), 102471. https://doi.org/10.1016/j.tsep.2024.102471 doi: 10.1016/j.tsep.2024.102471
    [18] S. A. A. Mehrjardi, A. Khademi, S. M. M. Safavi, Machine learning approach to balance heat transfer and pressure loss in a dimpled tube: Generative adversarial networks in computational fluid dynamics, Therm. Sci. Eng. Prog., 57 (2025), 103116. https://doi.org/10.1016/j.tsep.2024.103116 doi: 10.1016/j.tsep.2024.103116
    [19] E. Yilmaz, B. German, Conditional generative adversarial network framework for airfoil inverse design, in AIAA Aviation 2020 Forum, AIAA, 2020. https://doi.org/10.2514/6.2020-3185
    [20] V. Sekar, M. Zhang, C. Shu, B. C. Khoo, Inverse design of airfoil using a deep convolutional neural network, AIAA J., 57 (2019), 993–1003. https://doi.org/10.2514/1.J057894 doi: 10.2514/1.J057894
    [21] S. Oh, Y. Jung, S. Kim, I. Lee, N. Kang, Deep generative design: Integration of topology optimization and generative models, J. Mech. Des., 141 (2019), 111405. https://doi.org/10.1115/1.4044229 doi: 10.1115/1.4044229
    [22] D. Shu, J. Cunningham, G. Stump, S. W. Miller, M. A. Yukish, T. W. Simpson, et al., 3D design using generative adversarial networks and physics-based validation, J. Mech. Des., 142 (2019), 071701. https://doi.org/10.1115/1.4045419 doi: 10.1115/1.4045419
    [23] G. Achour, W. J. Sung, O. J. Pinon-Fischer, D. N. Mavris, Development of a conditional generative adversarial network for airfoil shape optimization, in AIAA Scitech 2020 Forum, Orlando, FL, 2020. https://doi.org/10.2514/6.2020-2261
    [24] W. Chen, K. Chiu, M. Fuge, Aerodynamic design optimization and shape exploration using generative adversarial networks, in AIAA Scitech 2019 Forum, San Diego, CA, 2019. https://doi.org/10.2514/6.2019-2351
    [25] X. Tan, D. Manna, J. Chattoraj, L. Yong, X. Xinxing, D. M. Ha, et al., Airfoil inverse design using conditional generative adversarial networks, in 2022 17th International Conference on Control, Automation, Robotics and Vision (ICARCV), IEEE, (2022), 143–148. https://doi.org/10.1109/ICARCV57592.2022.10004343
    [26] J. Wang, R. Li, C. He, H. Chen, R. Cheng, C. Zhai, et al., An inverse design method for supercritical airfoil based on conditional generative models, Chin. J. Aeronaut., 35 (2022), 62–74. https://doi.org/10.1016/j.cja.2021.03.006 doi: 10.1016/j.cja.2021.03.006
    [27] M. Drela, XFOIL: An analysis and design system for low Reynolds number airfoils, in Low Reynolds Number Aerodynamics. Lecture Notes in Engineering (eds. T. J. Mueller), Springer, Berlin, Heidelberg, 54 (1989). https://doi.org/10.1007/978-3-642-84010-4_1
    [28] MATLAB, Mathworks. Available from: https://es.mathworks.com/products/matlab.html.
    [29] Deep Learning Toolbox, Mathworks. Available from: https://es.mathworks.com/products/deep-learning.html.
    50. Hugo Lhachemi, Christophe Prieur, Feedback Stabilization of a Class of Diagonal Infinite-Dimensional Systems With Delay Boundary Control, 2021, 66, 0018-9286, 105, 10.1109/TAC.2020.2975003
    51. Li Zhang, Gen Qi Xu, Hao Chen, Uniform stabilization of 1-d wave equation with anti-damping and delayed control, 2020, 357, 00160032, 12473, 10.1016/j.jfranklin.2020.09.034
    52. Hai-E. Zhang, Gen-Qi Xu, Hao Chen, Min Li, Stability of a Variable Coefficient Star-Shaped Network with Distributed Delay, 2022, 35, 1009-6124, 2077, 10.1007/s11424-022-1157-x
    53. Yanni Guo, Yunlan Chen, Genqi Xu, Yaxuan Zhang, Exponential stabilization of variable coefficient wave equations in a generic tree with small time-delays in the nodal feedbacks, 2012, 395, 0022247X, 727, 10.1016/j.jmaa.2012.05.079
    54. Dongyi Liu, Rumeng Han, Genqi Xu, Controller design for distributed parameter systems with time delays in the boundary feedbacks via the backstepping method, 2020, 93, 0020-7179, 1220, 10.1080/00207179.2018.1500717
    55. Christophe Prieur, Emmanuel Trelat, Feedback Stabilization of a 1-D Linear Reaction–Diffusion Equation With Delay Boundary Control, 2019, 64, 0018-9286, 1415, 10.1109/TAC.2018.2849560
    56. Hugo Lhachemi, Christophe Prieur, Robert Shorten, Robustness of constant-delay predictor feedback for in-domain stabilization of reaction–diffusion PDEs with time- and spatially-varying input delays, 2021, 123, 00051098, 109347, 10.1016/j.automatica.2020.109347
    57. E. M. Ait Benhassi, K. Ammari, S. Boulite, L. Maniar, Feedback stabilization of a class of evolution equations with delay, 2009, 9, 1424-3199, 103, 10.1007/s00028-009-0004-z
    58. Genqi Xu, Hongxia Wang, Stabilisation of Timoshenko beam system with delay in the boundary control, 2013, 86, 0020-7179, 1165, 10.1080/00207179.2013.787494
    59. Emilia Fridman, Serge Nicaise, Julie Valein, Stability of second order evolution equations with time-varying delays, 2009, 42, 14746670, 112, 10.3182/20090901-3-RO-4009.00016
    60. E. M. Ait Benhassi, K. Ammari, S. Boulite, L. Maniar, Exponential energy decay of some coupled second order systems, 2013, 86, 0037-1912, 362, 10.1007/s00233-012-9440-0
    61. Yanni Guo, Genqi Xu, Yansha Guo, Stabilization of the wave equation with interior input delay and mixed Neumann-Dirichlet boundary, 2016, 21, 1531-3492, 2491, 10.3934/dcdsb.2016057
    62. Emilia Fridman, Serge Nicaise, Julie Valein, Stabilization of Second Order Evolution Equations with Unbounded Feedback with Time-Dependent Delay, 2010, 48, 0363-0129, 5028, 10.1137/090762105
    63. Xiu Fang Liu, Gen Qi Xu, Output-Based Stabilization of Timoshenko Beam with the Boundary Control and Input Distributed Delay, 2016, 22, 1079-2724, 347, 10.1007/s10883-015-9293-4
    64. Lila Ihaddadene, Ammar Khemmoudj, General decay for a wave equation with Wentzell boundary conditions and nonlinear delay terms, 2022, 95, 0020-7179, 2565, 10.1080/00207179.2021.1919318
    65. E.M. Ait Ben Hassi, K. Ammari, S. Boulite, L. Maniar, Stability of abstract thermo-elastic semigroups, 2016, 435, 0022247X, 1021, 10.1016/j.jmaa.2015.11.010
    66. Gabriel Rivière, Julien Royer, Spectrum of a non-selfadjoint quantum star graph, 2020, 53, 1751-8113, 495202, 10.1088/1751-8121/abbfbe
    67. Yaru Xie, Genqi Xu, Exponential stability of 1-d wave equation with the boundary time delay based on the interior control, 2017, 10, 1937-1179, 557, 10.3934/dcdss.2017028
    68. Ying Feng Shang, Gen Qi Xu, Yun Lan Chen, Stability analysis of Euler-Bernoulli beam with input delay in the boundary control, 2012, 14, 15618625, 186, 10.1002/asjc.279
    69. Hugo Lhachemi, Robert Shorten, Boundary feedback stabilization of a reaction–diffusion equation with Robin boundary conditions and state-delay, 2020, 116, 00051098, 108931, 10.1016/j.automatica.2020.108931
    70. Gen Qi Xu, Li Zhang, Uniform Stabilization of 1-D Coupled Wave Equations with Anti-dampings and Joint Delayed Control, 2020, 58, 0363-0129, 3161, 10.1137/19M1289145
    71. Serge Nicaise, Julie Valein, Stabilization of second order evolution equations with unbounded feedback with delay, 2010, 16, 1292-8119, 420, 10.1051/cocv/2009007
    72. Mokhtar Kirane, Belkacem Said-Houari, Existence and asymptotic stability of a viscoelastic wave equation with a delay, 2011, 62, 0044-2275, 1065, 10.1007/s00033-011-0145-0
    73. Lucie Baudouin, Emmanuelle Crépeau, Julie Valein, Global Carleman estimate on a network for the wave equation and application to an inverse problem, 2011, 1, 2156-8499, 307, 10.3934/mcrf.2011.1.307
    74. Serge Nicaise, Cristina Pignotti, Julie Valein, Exponential stability of the wave equation with boundary time-varying delay, 2011, 4, 1937-1179, 693, 10.3934/dcdss.2011.4.693
    75. Zhong-Jie Han, Gen-Qi Xu, Dynamical behavior of networks of non-uniform Timoshenko beams system with boundary time-delay inputs, 2011, 6, 1556-181X, 297, 10.3934/nhm.2011.6.297
    76. Xiu Fang Liu, Gen Qi Xu, Exponential stabilization for Timoshenko beam with different delays in the boundary control, 2015, 0265-0754, dnv036, 10.1093/imamci/dnv036
    77. Ya-Xuan Zhang, Zhong-Jie Han, Gen-Qi Xu, Stability and Spectral Properties of General Tree-Shaped Wave Networks with Variable Coefficients, 2019, 164, 0167-8019, 219, 10.1007/s10440-018-00236-y
    78. Ionu Munteanu, Stabilisation of non-diagonal infinite-dimensional systems with delay boundary control, 2022, 0020-7179, 1, 10.1080/00207179.2022.2063193
    79. Kaïs Ammari, Serge Nicaise, 2015, Chapter 1, 978-3-319-10899-5, 1, 10.1007/978-3-319-10900-8_1
    80. Delio Mugnolo, Damped wave equations with dynamic boundary conditions, 2011, 17, 1425-6908, 10.1515/jaa.2011.015
    81. Shun-Tang Wu, ASYMPTOTIC BEHAVIOR FOR A VISCOELASTIC WAVE EQUATION WITH A DELAY TERM, 2013, 17, 1027-5487, 10.11650/tjm.17.2013.2517
    82. Zaiyun Zhang, Jianhua Huang, Zhenhai Liu, Mingbao Sun, Boundary Stabilization of a Nonlinear Viscoelastic Equation with Interior Time-Varying Delay and Nonlinear Dissipative Boundary Feedback, 2014, 2014, 1085-3375, 1, 10.1155/2014/102594
    83. Jérôme Lohéac, Enrique Zuazua, Averaged controllability of parameter dependent conservative semigroups, 2017, 262, 00220396, 1540, 10.1016/j.jde.2016.10.017
    84. Abdelkader Moulay, Abdelghani Ouahab, 2019, Chapter 17, 978-3-030-26986-9, 265, 10.1007/978-3-030-26987-6_17
    85. Martin Gugat, Markus Dick, Günter Leugering, Gas Flow in Fan-Shaped Networks: Classical Solutions and Feedback Stabilization, 2011, 49, 0363-0129, 2101, 10.1137/100799824
    86. Houssem Herbadji, Ammar Khemmoudj, Stability of coupled wave equations with variable coefficients, localised Kelvin–Voigt damping and time delay, 2024, 109, 0037-1912, 390, 10.1007/s00233-024-10453-7
    87. Cuiying Li, Yi Cheng, Donal O’Regan, Exponential stability of a geometric nonlinear beam with a nonlinear delay term in boundary feedbacks, 2023, 74, 0044-2275, 10.1007/s00033-023-02018-5
    88. Désiré Saba, Gilbert Bayili, Serge Nicaise, Polynomial stabilization of the wave equation with a time varying delay term in the dynamical control, 2024, 538, 0022247X, 128441, 10.1016/j.jmaa.2024.128441
    89. Martin Gugat, Stabilization of a cyclic network of strings by nodal control, 2025, 25, 1424-3199, 10.1007/s00028-024-01030-0
    90. Serge Nicaise, Lassi Paunonen, David Seifert, Stability of abstract coupled systems, 2025, 00221236, 110909, 10.1016/j.jfa.2025.110909
    91. Soumaya Amara, Mohamed Jellouli, Spectral Analysis for a System of Wave Equations and Applications, 2025, 0170-4214, 10.1002/mma.10956
    92. Yaru Xie, Ruiqing Gao, Exponential stabilization of 1-D wave network with boundary delay, 2025, 2095-6983, 10.1007/s11768-025-00256-8
  • Reader Comments
© 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).