With the remarkable success of neural networks in machine learning tasks such as image recognition, computer vision, natural language processing, and cognitive science, together with the prospect of harnessing the computing power of specialized hardware, there has been much interest in investigating their suitability for high-performance computing tasks. The result is an exciting new research field known as scientific machine learning, where techniques such as deep neural networks and statistical learning are applied to classical problems of applied mathematics. Thanks to the universal approximation property of neural networks, it is natural to use them to approximate the solutions of governing PDEs. As the predominant method for data-driven problems, deep neural networks (DNNs) are used as surrogate models of PDE solvers to accelerate optimization. A DNN is generally trained as a supervised machine learning task that establishes a nonlinear mapping between input and output data pairs [1,2,3,4,5,6]; that is, a specific model is learned from training data and a predefined architecture, so its quality is closely tied to the training data and their distribution. Such models have yielded remarkable success in data-rich domains, yet in many fields of physics and engineering the training data carry prior knowledge, for example, flow field data in fluid mechanics must satisfy the conservation of mass and momentum, and this prior knowledge is not exploited by classical machine learning algorithms.

Physics-informed neural networks (PINNs) [6,7], which combine data-driven machine learning with the advantages of physical models, can train models that automatically satisfy physical constraints using only a small amount of training data. The PINN algorithm has good generalization performance and can predict the important physical parameters of the model. PINNs can accurately solve forward problems by minimizing a mean squared error loss function, so that one obtains numerical solutions of the PDEs by solving an optimization problem that drives the loss toward zero. The loss function of a PINN contains initial and boundary loss terms as well as the residual of the governing equation contributed by the physics-informed part. It should be noted that the way boundary conditions are imposed during PINN training has a significant impact on the results. Usually, the boundary condition is imposed as a soft constraint through a boundary loss term, whose weight is controlled by a penalty coefficient to accelerate the convergence of the optimization problem. The penalty coefficient is often tuned by experience, and an improper choice easily leads to an abnormal solution. Wang et al. [8] proposed an adaptive learning rate annealing algorithm that utilizes the back-propagated gradient statistics during model training to assign an appropriate weight to each term in the composite loss function, aiming to balance the interplay between data fit and regularization. This empirical parameter adjustment technique does not explain why PINNs sometimes fail to train. To investigate this question, Wang et al. utilized the neural tangent kernel (NTK) in their subsequent work [9] and proved that the NTK of PINNs converges to a deterministic kernel that stays constant during training in the infinite-width limit of fully connected networks. They developed a novel adaptive training strategy that exploits the eigenvalues of the NTK to adaptively calibrate the convergence rate of the total training error. When using PINNs to solve stiff ODE systems, Ji et al. [10] noticed that stiffness could cause the failure of the regular PINN; they therefore developed the stiff-PINN approach, which applies PINN to non- or mildly stiff systems obtained by employing quasi-steady-state assumptions (QSSA) to reduce the stiffness.

A neural network can be regarded as a composition of linear and nonlinear transformations, in which the activation function determines the nonlinear approximation capability of the network. The activation function plays an important role in PINN training because the derivative of the loss function with respect to the optimization parameters depends, in turn, on the derivative of the activation function. In the PINN algorithm, various activation functions such as tanh and sin are used for different problems; there is no unified selection criterion, since the choice is often problem-specific. For ordinary neural networks, several studies [11,12] have confirmed that a well-designed adaptive activation function can accelerate convergence. Jagtap et al. [13] introduced a scalable hyper-parameter in the activation function, optimized and updated synchronously with the network parameters, which dynamically adjusts the derivative of the activation function. Compared with a fixed activation function, this method significantly accelerates the convergence rate and improves the accuracy of PINN training. To further accelerate convergence, different scalable parameters have been introduced into the activation function of each layer or each neuron separately [14]. Lu et al. [7] developed a Python library for PINN, DeepXDE, which can solve forward problems with initial and boundary conditions as well as inverse problems. DeepXDE has contributed to the rapid popularization and extensive application of PINNs in fields such as fluid mechanics [15], biomedicine [16], and cardiovascular flow [17]. For more applications, the interested reader is referred to the review [18]. In [16], Sahli et al. improved predictability in diagnosing atrial fibrillation by using a PINN to solve a nonlinear wave dynamics equation satisfied by cardiac activation mapping. Kissas et al. [17] trained a PINN model on noisy and scattered clinical data of flow and wall displacement to predict blood flow in the cardiovascular system. Beyond these applications, PINNs are being actively explored and improved for localized wave solutions [19,20], high-dimensional integrable systems [21], porous flow [22], the seepage equation [23], and so on.

Partial differential equations govern several important phenomena in physics, engineering and biology, with a wide range of applications in epidemiological transmission [24], tumor growth and wound healing [25,26], bacterial aggregation [27,28], cardiomyocyte potential propagation models [29,30] and other fields. As fundamental equations in such areas, Hamilton-Jacobi equations have been solved numerically by many high-order accurate methods, including, but not limited to, central schemes [31,32], Godunov-type central schemes [33,34,35] and WENO schemes [36,37,38]. Using neural networks to represent the viscosity solution of certain Hamilton-Jacobi (HJ) PDEs is not by itself a new idea, and some neural network architectures have led to promising results [39,40]. Graber et al. [39] studied optimal control problems on generalized networks where the controllability assumptions are not satisfied around the junctions, and characterized the value function as the unique solution to a system of HJ equations in a bilateral viscosity sense. In [40], high-dimensional Hamilton-Jacobi-Bellman (HJB) PDEs are solved by the Deep Galerkin Method, which approximates the solution by a deep neural network trained to satisfy the differential operator, boundary conditions, and initial conditions. An adaptive deep learning network was proposed in [41] to model semi-global solutions of high-dimensional HJB PDEs. In [42,43], the authors proposed shallow neural network architectures that express the viscosity solution of certain HJ PDEs with particular forms of convex initial data and specific Hamiltonians, where physical constraints satisfying certain conditions are naturally encoded into the networks. Moreover, [43] showed that two network architectures exactly represent the Lax-Oleinik solution of certain HJ PDEs whose initial data and convex Hamiltonian satisfy certain assumptions. Note that the Lax-Oleinik formula gives the viscosity solution when the Hamiltonian depends on neither the state variable $ {x} $ nor the time variable $ {t} $. However, the performance of PINNs has not yet been fully investigated for HJ PDEs with non-convex Hamiltonians, which sparks our interest in applying the PINN algorithm to more general HJ PDEs.

The paper is structured as follows. In Section 2, we discuss the problem setup for HJ PDEs. Section 3 explores the generality of the PINN training algorithm for solving HJ equations, exactly embedding Dirichlet or periodic boundary conditions and physical constraints in the neural network architecture. To further improve predictive accuracy, a physics-informed neural network based on adaptive weighted loss functions (AW-PINN) is trained to solve unsupervised learning tasks for HJ PDEs with fewer training data while physical constraints are imposed during training. In Section 4, we demonstrate the effectiveness and convergence of the AW-PINN training algorithm for convex and non-convex Hamiltonians. A series of numerical experiments illustrates that the proposed algorithm achieves noticeable improvements in predictive accuracy and in the convergence rate of the total training error. A comparison with the original PINN algorithm for HJ equations indicates that the proposed AW-PINN algorithm can train the solutions more accurately with fewer iterations. Finally, we summarize and discuss our results.

    We consider the time-dependent Hamilton-Jacobi (HJ) equations

$ \left\{\begin{array}{ll}
\varphi_t+H(\nabla_{\boldsymbol{x}} \varphi(\boldsymbol{x}, t)) = 0, & \boldsymbol{x}\in\Omega\subseteq\mathbb{R}^n, \; t\in(0, +\infty), \\
\varphi(\boldsymbol{x}, 0) = h(\boldsymbol{x}), & \boldsymbol{x}\in\Omega, \\
\varphi(\boldsymbol{x}, t) = g(\boldsymbol{x}, t), & \boldsymbol{x}\in\partial\Omega, \; t\in(0, +\infty),
\end{array}\right. $
(2.1)

with Dirichlet or periodic boundary conditions on $ \partial\Omega $. The partial derivative with respect to $ t $ and the gradient vector with respect to $ \boldsymbol{x} $ of the solution $ \varphi(\boldsymbol{x}, t) $ are denoted by $ \varphi_t $ and $ \nabla_{\boldsymbol{x}} \varphi(\boldsymbol{x}, t) = \left(\frac{\partial\varphi(\boldsymbol{x}, t)}{\partial x_1}, \dots, \frac{\partial\varphi(\boldsymbol{x}, t)}{\partial x_n} \right) $, respectively. The Hamiltonian $ H $ depends on $ \nabla_{\boldsymbol{x}} \varphi(\boldsymbol{x}, t) $ and possibly on $ \boldsymbol{x} $ and $ t $. The solution of an HJ equation may develop discontinuities in its derivatives even when the initial data is smooth. As in conservation laws, the unique physically relevant solution can be singled out through the notion of viscosity solutions, which provides a consistent definition of a weak solution of Eq (2.1). Thus, we want to design a neural network that approximates the viscosity solution of HJ equations from small training data. Here we draw motivation from the PINN algorithm [6,8,9] and from neural network architectures for some HJ PDEs [43]. We construct the PINN training method for HJ PDEs with different kinds of Hamiltonians and boundary conditions, and then optimize the weight coefficients of each term in the loss function so that their gradients during back-propagation are similar in magnitude. Solving the PDE is thereby converted into a multi-objective optimization problem in which certain constraints are introduced when minimizing the loss functional.

We consider $ N^L:\mathbb{R}^{D_i} \rightarrow \mathbb{R}^{D_o} $ to be a fully connected feed-forward neural network of $ L $ layers with $ N_k $ neurons in the $ k\text{th} $ layer $ \left(N_0 = D_i \; \text{and}\; N_L = D_o\right) $. The input vector is denoted by $ \boldsymbol{z}\in \mathbb{R}^{D_i} $, the output vector at the $ k\text{th} $ layer by $ N^k(\boldsymbol{z}) $, and $ N^0(\boldsymbol{z}) = \boldsymbol{z} $. Each hidden layer receives the output $ N^{k-1}(\boldsymbol{z})\in \mathbb{R}^{N_{k-1}} $ of the previous layer, to which an affine transformation of the form

$ N^k(\boldsymbol{z}) := \boldsymbol{W}^k N^{k-1}(\boldsymbol{z})+\boldsymbol{b}^k, $
(3.1)

is applied. The weight matrix and bias vector in the $ k\text{th} $ layer $ \left(1 \leq k \leq L\right) $ are denoted by $ \boldsymbol{W}^k \in \mathbb{R}^{N_k\times N_{k-1}} $ and $ \boldsymbol{b}^k \in \mathbb{R}^{N_k} $, respectively, and are initialized from independent and identically distributed samplings. Such a linear model is simple to train but limited in its capacity to represent complex solutions. We use a smooth nonlinear activation function so that arbitrary-order derivatives can be computed efficiently during back-propagation; in this study, we choose the hyperbolic tangent ($ \tanh $). The nonlinear activation function $ \sigma(\cdot) $ is applied to each component of the transformed vector before it is sent as input to the next layer. The $ L $-layer fully connected feed-forward neural network is then defined as

$ N^k(\boldsymbol{z}) = \sigma\left(\boldsymbol{W}^k N^{k-1}(\boldsymbol{z})+\boldsymbol{b}^k\right), \quad 1\leq k \leq L. $
(3.2)

The activation function is the identity in the last layer. Taking $ \boldsymbol{\theta} = \{\boldsymbol{W}^k, \boldsymbol{b}^k\} $ as the collection of all weights and biases, i.e., the trainable parameters of the network, we can write the neural network as

$ \tilde{\varphi}(\boldsymbol{z}) := N^L(\boldsymbol{z};\boldsymbol{\theta}). $
(3.3)

    The distribution of training points has a certain impact on the flexibility of PINN. In unsupervised learning for solving PDEs, training data only consist of the initial and boundary conditions and the residual point locations in the domain, which is done without using true solution information. Figure 1 shows two different ways to select the residual point locations in the domain. The lattice-like training points are the same as the finite difference grid points, which are equispaced in the spatio-temporal domain. The scattered training points can be taken from certain quasi-random sequences, such as the Sobol sequences or the Latin hypercube sampling. The available sets of measurements can also be used to train the neural network model for some practical problems.
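For instance, the scattered collocation points can be drawn with a Latin hypercube sampler. The sketch below relies on `scipy.stats.qmc`, which is our tooling choice since the text only names the sampling strategy:

```python
import numpy as np
from scipy.stats import qmc

def sample_collocation(n_r, lo=(0.0, 0.0), hi=(2.0 * np.pi, 1.0), seed=0):
    """Scattered residual points (x, t) via Latin hypercube sampling."""
    sampler = qmc.LatinHypercube(d=2, seed=seed)
    pts = qmc.scale(sampler.random(n_r), lo, hi)  # map [0,1]^2 to the domain
    return pts.astype(np.float32)                 # columns: x and t
```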

    Figure 1.  Two distributions of training points for computing equation residual.

Let $ F(\boldsymbol{x}, t) $ denote the left-hand side of the first equation in (2.1), i.e.,

$ F(\boldsymbol{x}, t) := \varphi_t+H(\nabla_{\boldsymbol{x}} \varphi(\boldsymbol{x}, t)). $
(3.4)

Following the original idea of PINN in [6], we approximate $ \varphi(\boldsymbol{x}, t) $ by the neural network $ \tilde{\varphi}(\boldsymbol{x}, t) $, whose parameters $ \boldsymbol{\theta} $ consist of the weights $ \boldsymbol{W}^k $ and biases $ \boldsymbol{b}^k $. A schematic diagram of the network with multiple hidden layers is shown in Figure 2. The residual of (2.1) is defined as

$ r(\boldsymbol{x}, t;\boldsymbol{\theta}) := \partial_t\tilde{\varphi}(\boldsymbol{x}, t)+H(\nabla_{\boldsymbol{x}} \tilde{\varphi}(\boldsymbol{x}, t)), $
(3.5)
    Figure 2.  Schematic of the AW-PINN for the HJ PDEs. Along with neural networks, PDE parts and initial/boundary conditions, AW-PINN training algorithm adaptively update weights to loss terms.

where the partial derivatives of the neural network with respect to the space and time coordinates can be readily computed by automatic differentiation [44]. To learn a good set of candidate parameters $ \boldsymbol{\theta} $ of the neural network $ \tilde{\varphi}(\boldsymbol{x}, t) $, we minimize via gradient descent the composite loss function of the general form

$ L(\boldsymbol{\theta}) = \lambda_r L_r(\boldsymbol{\theta})+\sum\limits_{i = 1}^{M}\lambda_i L_i(\boldsymbol{\theta}), $
(3.6)

where $ \lambda_r $ and $ \lambda_i $ are hyper-parameters used to balance the interplay between the different loss terms. Here, $ L_r(\boldsymbol{\theta}) $ is a loss term that penalizes the PDE residual, and $ L_i(\boldsymbol{\theta}), i = 1, \dotsc, M, $ correspond to data-fit terms (e.g., measurements, initial or boundary conditions, etc.). For a typical initial and boundary value problem, these loss functions take the specific form

$ L_0(\boldsymbol{\theta}) = \frac{1}{N_0}\sum\limits_{i = 1}^{N_0}\left|\tilde{\varphi}(\boldsymbol{x}_0^i, 0)-h(\boldsymbol{x}_0^i)\right|^2, $
(3.7)
$ L_b(\boldsymbol{\theta}) = \frac{1}{N_b}\sum\limits_{i = 1}^{N_b}\left|\tilde{\varphi}(\boldsymbol{x}_b^i, t_b^i)-g(\boldsymbol{x}_b^i, t_b^i)\right|^2, $
(3.8)
$ L_r(\boldsymbol{\theta}) = \frac{1}{N_r}\sum\limits_{i = 1}^{N_r}\left|r(\boldsymbol{x}_r^i, t_r^i;\boldsymbol{\theta})\right|^2, $
(3.9)

where $ \{(\boldsymbol{x}_0^i, 0), h(\boldsymbol{x}_0^i)\}_{i = 1}^{N_0} $ denotes the initial data, $ \{(\boldsymbol{x}_b^i, t_b^i), g(\boldsymbol{x}_b^i, t_b^i)\}_{i = 1}^{N_b} $ denotes the boundary data, and $ \{(\boldsymbol{x}_r^i, t_r^i)\}_{i = 1}^{N_r} $ denotes a set of collocation points randomly placed inside the domain $ \Omega $ in order to minimize the PDE residual. Consequently, $ L_r $ penalizes the equation for not being satisfied on a finite set of collocation points, which constitutes the physics-informed part of the network. The loss terms $ L_0(\boldsymbol{\theta}) $ and $ L_b(\boldsymbol{\theta}) $ correspond to the initial and boundary data, which must be satisfied by the neural network solution.
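In an implementation, the residual (3.5) and the loss terms (3.7)-(3.9) can be assembled by automatic differentiation. The sketch below assumes the hypothetical `forward` network from the earlier sketch, a one-dimensional spatial variable, and a user-supplied Hamiltonian `H` acting on $ \varphi_x $:

```python
import tensorflow as tf

def residual(params, x, t, H):
    """PDE residual (3.5): r = phi_t + H(phi_x), via tf.GradientTape."""
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([x, t])
        phi = forward(tf.concat([x, t], axis=1), params)
    return tape.gradient(phi, t) + H(tape.gradient(phi, x))

def loss_terms(params, data, H):
    """Mean squared errors (3.7)-(3.9); `data` holds the training points."""
    phi0 = forward(tf.concat([data["x0"], tf.zeros_like(data["x0"])], 1), params)
    L0 = tf.reduce_mean(tf.square(phi0 - data["h0"]))     # initial term (3.7)
    phib = forward(tf.concat([data["xb"], data["tb"]], 1), params)
    Lb = tf.reduce_mean(tf.square(phib - data["gb"]))     # boundary term (3.8)
    r = residual(params, data["xr"], data["tr"], H)
    Lr = tf.reduce_mean(tf.square(r))                     # residual term (3.9)
    return L0, Lb, Lr
```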

    The resulting optimization problem leads to finding the minimum of a loss function by optimizing the parameters, i.e., we seek to find

$ \boldsymbol{\theta}^{*} = \mathop{\arg\min}\limits_{\boldsymbol{\theta}}\; L(\boldsymbol{\theta}). $
(3.10)

    One can solve this minimization problem by the stochastic gradient descent (SGD) algorithm which is widely used in the machine learning community, i.e.,

$ \boldsymbol{\theta}_{n+1} = \boldsymbol{\theta}_n-\eta\nabla_{\boldsymbol{\theta}} L(\boldsymbol{\theta}_n) = \boldsymbol{\theta}_n-\eta\lambda_r\nabla_{\boldsymbol{\theta}} L_r(\boldsymbol{\theta}_n)-\eta\sum\limits_{i = 1}^{M}\lambda_i\nabla_{\boldsymbol{\theta}} L_i(\boldsymbol{\theta}_n). $
(3.11)

Here $ \eta > 0 $ is the learning rate and $ L(\boldsymbol{\theta}_{n}) $ is the loss function at the $ n\text{th} $ iteration; the SGD method is initialized with some starting value $ \boldsymbol{\theta}_0 $. We see how the constants $ \lambda_r $ and $ \lambda_i $ effectively rescale the learning rate corresponding to each loss term. In particular, the weights are updated as

$ \boldsymbol{W}_{n+1} = \boldsymbol{W}_n-\eta\lambda_r\nabla_{\boldsymbol{W}} L_r(\boldsymbol{W}_n)-\eta\sum\limits_{i = 1}^{M}\lambda_i\nabla_{\boldsymbol{W}} L_i(\boldsymbol{W}_n). $
(3.12)

An appropriate optimizer such as Adam [45], AdaGrad [46] or L-BFGS [47] can be chosen according to the features of the neural network.
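A single weighted descent step then reads as below; a hedged sketch built on the `loss_terms` helper above, with Adam standing in for plain SGD and $ \lambda_r $ fixed to 1 (as adopted later in this section):

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def train_step(params, data, H, lam0, lamb):
    """One descent step on the weighted loss (3.6)/(3.11), with lambda_r = 1."""
    variables = [v for pair in params for v in pair]
    with tf.GradientTape() as tape:
        L0, Lb, Lr = loss_terms(params, data, H)
        loss = Lr + lam0 * L0 + lamb * Lb
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```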

The weight coefficients of the loss function play an important role in improving the trainability of the network, and can be user-defined or tuned automatically. One can choose $ \lambda_i $ via a trial-and-error procedure, yet this manual hyperparameter tuning may not produce satisfactory results. Moreover, the optimal weights need to be reconstructed for different governing equations, which means no fixed empirical formula is transferable across problems. Most importantly, the loss function needs to be tailored to the form of the PDEs, and it is impractical to set optimal weights for the different loss terms without enough prior knowledge. Wang et al. made pioneering explorations of adaptive weighting by utilizing back-propagated gradient statistics [8] and by exploiting the eigenvalues of the NTK during training [9]. Meer et al. introduced a scaling parameter as a loss weight to balance the relative importance of the different constraints [48].

Here we draw motivation from the above works and from the Adam algorithm [45] to derive an adaptive estimate for choosing the weights during training. The Adam algorithm adaptively tunes the learning rate associated with each parameter in the $ \boldsymbol{\theta} $ vector, based on tracking the first- and second-order moments of the back-propagated gradients during training. Following a similar idea, our goal is to adaptively assign to each loss term a weight such that the gradients of the terms are similar in magnitude during back-propagation. We set $ \lambda_r = 1 $ so that the residual loss generally dominates the other loss terms. For the given initial and boundary loss terms $ L_i $, find $ \hat{\lambda}_i $ satisfying

$ \hat{\lambda}_i\cdot\text{mean}\left\{\left|\nabla_{\boldsymbol{\theta}_n} L_i(\boldsymbol{\theta}_n)\right|\right\} = \max\left\{\left|\nabla_{\boldsymbol{\theta}_n} L_r(\boldsymbol{\theta}_n)\right|\right\}, \quad i = 0, b, $
(3.13)

where $ \bigl|\cdot\bigr| $ denotes the elementwise absolute value and the mean denotes the average of all elementwise values of $ \bigl|\nabla_{\boldsymbol{\theta}_n} L_i(\boldsymbol{\theta}_{n})\bigr| $. Therefore, it follows that

$ \hat{\lambda}_i = \frac{\max\left\{\left|\nabla_{\boldsymbol{\theta}_n} L_r(\boldsymbol{\theta}_n)\right|\right\}}{\text{mean}\left\{\left|\nabla_{\boldsymbol{\theta}_n} L_i(\boldsymbol{\theta}_n)\right|\right\}}, \quad i = 0, b. $
(3.14)

    Then update the weight coefficients $ \lambda_i $ using the logarithmic mean [49] of the form

$ \lambda_i^{n+1} = \frac{\lambda_i^n-\hat{\lambda}_i}{\ln(\lambda_i^n)-\ln(\hat{\lambda}_i)}, \quad i = 0, b, $
(3.15)

which admits a numerically stable implementation whose details are given in the Appendix. Moreover, this updating method avoids introducing an additional hyperparameter. This choice of $ \lambda_r $ and $ \lambda_i $ leads to the adaptively weighted loss function

$ L(\boldsymbol{\theta}) = \frac{1}{N_r}\sum\limits_{i = 1}^{N_r}\left|r(\boldsymbol{x}_r^i, t_r^i;\boldsymbol{\theta})\right|^2+\frac{\lambda_0}{N_0}\sum\limits_{i = 1}^{N_0}\left|\tilde{\varphi}(\boldsymbol{x}_0^i, t_0^i)-\varphi(\boldsymbol{x}_0^i, t_0^i)\right|^2+\frac{\lambda_b}{N_b}\sum\limits_{i = 1}^{N_b}\left|\tilde{\varphi}(\boldsymbol{x}_b^i, t_b^i)-\varphi(\boldsymbol{x}_b^i, t_b^i)\right|^2. $
(3.16)

Thus, we propose a physics-informed neural network based on an adaptive weighted loss function, which adaptively assigns an appropriate weight to each term in the loss function during model training; an illustrative schematic is shown in Figure 2. Moreover, the logarithmic mean is employed in updating the weights, which avoids the additional hyperparameter required by learning rate annealing for PINN (LRA-PINN) [8].
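The update (3.14)-(3.15) amounts to a few lines of gradient statistics. Below is a sketch for scalar weights with gradients supplied as NumPy arrays, using the stable logarithmic mean detailed in the Appendix; the function names are ours:

```python
import numpy as np

def log_mean(a, b, eps=1e-2):
    """Numerically stable logarithmic mean of two positive scalars (see appendix)."""
    f = (a - b) / (a + b)                 # equals (zeta-1)/(zeta+1) with zeta = a/b
    u = f * f
    F = (1.0 + u / 3.0 + u * u / 5.0 + u * u * u / 7.0
         if u < eps else np.log(a / b) / (2.0 * f))
    return (a + b) / (2.0 * F)

def update_weight(lam, grads_r, grads_i):
    """Eqs (3.14)-(3.15): candidate weight from gradient statistics, then log-mean."""
    max_r = max(np.max(np.abs(g)) for g in grads_r)
    mean_i = np.mean(np.concatenate([np.abs(g).ravel() for g in grads_i]))
    return log_mean(lam, max_r / mean_i)  # lambda^{n+1} = logmean(lambda^n, lambda_hat)
```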

Algorithm 1: AW-PINN algorithm
    Step 1: Specify the training set over all domain
    Initial and boundary training data: $ \{\boldsymbol{x}_{\varphi}^i, t_{\varphi}^i, \varphi^i\}_{i = 1}^{N_{\varphi}} $.
    Residual training points: $ \{\boldsymbol{x}_r^i, t_r^i\}_{i = 1}^{N_r} $.
    Step 2: Construct neural networks $ NN(\boldsymbol{x}, t;\boldsymbol{\theta}) $ with the initialization of parameters $ \boldsymbol{\theta} $.
    Step 3: Construct the residual neural network $ r $ by substituting surrogate $ \tilde{\varphi} $ into the governing equations using automatic differentiation.
Step 4: Specify the adaptively weighted loss function as shown in Eq (3.16) and initialize the weight parameters $ \lambda_i $ to 1. Then use a gradient descent algorithm to update the parameters $ \boldsymbol{\theta} $ as:
    for $ n = 1, \dots, S $ do
    (a) Compute the weights $ \hat{\lambda}_i $ by (3.14).
    (b) Update the adaptive weight coefficients $ \lambda_i $ using the logarithmic mean (3.15).
    (c) Update the parameters $ \boldsymbol{\theta} $ via gradient descent Eq (3.11).
    end for

As summarized in Algorithm 1, our proposed AW-PINN algorithm assigns an appropriate weight to each term in the loss function such that the learning rate is adaptively tuned as shown in Eq (3.11) and the gradients of the terms in the loss function are similar in magnitude. The proposed AW-PINN algorithm is a modification of the original PINN algorithm [6] and of learning rate annealing for PINN [8]. We remark that the adaptive weights in Eqs (3.14) and (3.15) can be updated either at every iteration of the gradient descent loop or at a frequency specified by the user. The proposed AW-PINN algorithm extends easily to loss functions consisting of multiple terms, such as multiple boundary conditions for multivariate problems, with only the gradient statistics in Eqs (3.14) and (3.15) needing to be computed. Moreover, the AW-PINN algorithm can be used to compute the solution of HJ PDEs with different kinds of Hamiltonian $ H $, initial data, and boundary conditions, which further confirms the generality of physics-informed neural networks. In particular, if the adaptive weights $ \lambda_r, \; \lambda_i $ in (3.6) are all taken to be 1, then AW-PINN reduces to the conventional PINN, i.e.,

$ L(\boldsymbol{\theta}) = L_r(\boldsymbol{\theta})+\sum\limits_{i = 1}^{M} L_i(\boldsymbol{\theta}). $
(3.17)

Here we also explore the generality of the PINN algorithm for solving Hamilton-Jacobi equations, exactly embedding Dirichlet or periodic boundary conditions and physical constraints in the neural network architecture.
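Putting the pieces together, Algorithm 1 becomes the loop sketched below. This is a hypothetical driver that assumes the earlier sketches (`init_network`, `loss_terms`, `train_step`, `update_weight`) together with a training-data dictionary `data` and Hamiltonian `H`; the weight-update frequency is left to the user, as remarked above:

```python
import tensorflow as tf

# Hypothetical driver; `data`, `H`, and the helper sketches are assumptions.
params = init_network([2, 100, 100, 100, 100, 1])  # 4 hidden layers, 100 neurons each
lam0, lamb = 1.0, 1.0                              # Step 4: weights initialized to 1
for n in range(50000):
    if n % 10 == 0:  # steps (a)-(b): refresh weights from gradient statistics
        variables = [v for pair in params for v in pair]
        with tf.GradientTape(persistent=True) as tape:
            L0, Lb, Lr = loss_terms(params, data, H)
        grads_r = [g.numpy() for g in tape.gradient(Lr, variables)]
        lam0 = update_weight(lam0, grads_r,
                             [g.numpy() for g in tape.gradient(L0, variables)])
        lamb = update_weight(lamb, grads_r,
                             [g.numpy() for g in tape.gradient(Lb, variables)])
    train_step(params, data, H, lam0, lamb)        # step (c): weighted step (3.11)
```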

In this section, we provide a series of numerical examples to capture the viscosity solution and illustrate the capacity of the proposed AW-PINN algorithm for solving HJ equations with both convex and nonconvex Hamiltonians. The original method introduced in [6] is also included in our experiments for comparison. For the sake of generality, only fully connected feedforward neural networks are considered. Unless otherwise stated, the proposed AW-PINN algorithm uses the following setup of hyper-parameters: 4 hidden layers with 100 neurons in each layer, and the optimization procedure is the Adam optimizer with an initial learning rate of 0.001 for 50000 iterations, followed by the L-BFGS-B optimizer, whose training stops when the relative error between two neighboring training steps is less than $ \varepsilon = 10^{-8} $. The additional L-BFGS-B stage is used to accelerate the convergence rate of training. Moreover, the AW-PINN algorithm is initialized using the Glorot scheme [50] and implemented in TensorFlow.

For the one-dimensional case, unless otherwise stated, all randomly sampled collocation points inside the computational region are generated using a space-filling Latin hypercube sampling strategy; see Figure 3 for Example 4.1.1.

    Figure 3.  Distributions of the randomly distributed training points.

Example 4.1.1 The variable-coefficient linear equation with a periodic boundary condition is given by

$ \left\{\begin{array}{ll}
\varphi_t+\sin(x)\varphi_x = 0, & x\in[0, 2\pi], \; t\in[0, 1], \\
\varphi(x, 0) = \sin(x), & \\
\varphi(0, t) = \varphi(2\pi, t), &
\end{array}\right. $
(4.1)

    The exact solution is given as

$ \varphi(x, t) = \sin\left(2\arctan\left(e^{-t}\tan\left(\tfrac{x}{2}\right)\right)\right). $
(4.2)

Our goal here is to use this canonical benchmark problem to systematically analyze the performance of the AW-PINN algorithm. A neural network $ \tilde{\varphi} $ approximating the solution of (4.1) can now be trained by minimizing the mean squared error loss

$ L(\boldsymbol{\theta}) = L_r(\boldsymbol{\theta})+\lambda_0 L_0(\boldsymbol{\theta})+\lambda_b L_b(\boldsymbol{\theta}), $
(4.3)

where $ L_0(\boldsymbol{\theta}) $ and $ L_r(\boldsymbol{\theta}) $ are defined by Eqs (3.7) and (3.9). The periodic boundary condition imposes the periodicity requirement on the function and its derivatives up to a finite order. Thus, the boundary loss term $ L_b(\boldsymbol{\theta}) $ for the periodic boundary condition can be defined as

$ L_b(\boldsymbol{\theta}) = \frac{1}{N_b}\left(\sum\limits_{i = 1}^{N_b}\left|\tilde{\varphi}(0, t_b^i)-\tilde{\varphi}(2\pi, t_b^i)\right|^2+\sum\limits_{i = 1}^{N_b}\left|\partial_t\tilde{\varphi}(0, t_b^i)-\partial_t\tilde{\varphi}(2\pi, t_b^i)\right|^2\right), $
(4.4)

where $ \{x_b^i, t_b^i\}_{i = 1}^{N_b} $ denotes the boundary training data. The training point set consists of $ N_b = 50 $ boundary points randomly sampled from a uniform grid with spacing $ \delta t = 0.01 $ in $ [0, 1] $, $ N_0 = 50 $ initial points randomly drawn from a uniform grid with spacing $ \delta x = 0.01 $ in $ [0, 2\pi] $, as well as $ N_r = 2000 $ randomly sampled collocation points to enforce Eq (4.1) inside the solution domain. Figure 4 presents the approximate solutions obtained by the proposed AW-PINN algorithm with 50000 gradient descent iterations. The predicted solution obtained by the AW-PINN algorithm, shown in Figure 4(b), agrees well with the exact solution in Figure 4(a). From the comparison of the exact, PINN, LRA-PINN and AW-PINN solutions at time $ t = 1 $ shown in Figure 4(c), the predicted solutions of all three algorithms agree well with the exact one. However, after 50000 Adam iterations and 9 L-BFGS-B iterations in about 1667.092 seconds, the relative error defined by $ \Vert{\varphi(x, t)-\tilde{\varphi}(x, t)}\Vert_{L^2}/\Vert\varphi(x, t)\Vert_{L^2} $ for our AW-PINN is 6.1164e-04, which is smaller than the 1.4195e-02 for the traditional PINN after 50000 Adam iterations and 278 L-BFGS-B iterations in about 852.0633 seconds, as shown in Table 1. The LRA-PINN algorithm achieves a relative error of 1.4967e-03 after 50000 Adam iterations and 31 L-BFGS-B iterations in about 1994.6314 seconds. Figure 4(d) shows the loss history over the number of iterations, where the loss of the AW-PINN decreases faster than the others. It can be deduced that the learning capabilities of AW-PINN are better, as it greatly improves the convergence rate (accelerating the training), especially at the early stage. It is easy to see from Figure 5(a) that the weight coefficients $ \lambda_0 $ and $ \lambda_b $ change adaptively with the iterations. Besides, from the evolution of the loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ over the number of iterations shown in Figure 5(b), we observe that all amplitudes gradually decrease with the iterations and that the PDE residual loss term is dominant. Table 2 summarizes our results for different methods and different neural network architectures. The relative $ L^2 $ errors of the AW-PINN are generally smaller than those of the conventional PINN and LRA-PINN, and the proposed AW-PINN algorithm appears to be more robust with respect to network architectures.
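For reference, the periodic boundary term (4.4) used above can be coded as follows; a sketch assuming the hypothetical `forward` network from Section 3, matching the function and its $ t $-derivative at $ x = 0 $ and $ x = 2\pi $:

```python
import numpy as np
import tensorflow as tf

def periodic_bc_loss(params, tb):
    """Boundary loss (4.4): periodicity of phi and phi_t between x = 0 and x = 2*pi."""
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(tb)
        phi_l = forward(tf.concat([tf.zeros_like(tb), tb], 1), params)
        phi_r = forward(tf.concat([2.0 * np.pi * tf.ones_like(tb), tb], 1), params)
    return (tf.reduce_mean(tf.square(phi_l - phi_r))
            + tf.reduce_mean(tf.square(tape.gradient(phi_l, tb)
                                       - tape.gradient(phi_r, tb))))
```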

    Figure 4.  Comparison of the conventional PINN and AW-PINN predicted solution and the loss value over iterations during the training of conventional PINN and AW-PINN with 4 hidden layers and 100 neurons per layer for 50000 iterations using the Adam optimizer.
    Table 1.  Iteration times and relative $ L^2 $ and $ L^{\infty} $ errors of the approximations that were obtained by the three methods.
| | Conventional PINN | LRA-PINN | AW-PINN |
| --- | --- | --- | --- |
| Iteration times (Adam; L-BFGS-B) | 50000; 278 | 50000; 31 | 50000; 9 |
| Relative $ L^2 $ error | 1.4195e-02 | 1.4967e-03 | 6.1164e-04 |
| Relative $ L^{\infty} $ error | 3.9156e-02 | 3.4360e-03 | 1.9243e-03 |

    Figure 5.  Evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $, and loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ of the AW-PINN algorithm with 4 hidden layers and 100 neurons per layer for 50000 iterations.
    Table 2.  The relative $ L^2 $ errors for the different methods, and for different neural architectures obtained by varying the number of hidden layers and different number of neurons per layer.
| | Conventional PINN | LRA-PINN | AW-PINN |
| --- | --- | --- | --- |
| 30 neurons / 2 hidden layers | 2.8067e-03 | 2.4658e-03 | 1.1468e-03 |
| 60 neurons / 2 hidden layers | 1.3770e-03 | 3.3325e-03 | 1.6954e-03 |
| 120 neurons / 2 hidden layers | 1.8578e-03 | 3.9267e-03 | 1.8192e-03 |
| 30 neurons / 4 hidden layers | 1.5972e-03 | 9.1343e-04 | 1.1114e-03 |
| 60 neurons / 4 hidden layers | 7.1538e-03 | 6.3312e-04 | 3.3524e-04 |
| 120 neurons / 4 hidden layers | 1.3858e-02 | 1.6538e-03 | 9.3797e-04 |
| 30 neurons / 6 hidden layers | 2.6337e-03 | 3.2978e-03 | 2.0124e-03 |
| 60 neurons / 6 hidden layers | 1.2910e-02 | 1.7492e-03 | 1.2216e-03 |
| 120 neurons / 6 hidden layers | 2.4746e-02 | 2.6474e-03 | 8.3961e-04 |


    Example 4.1.2 The strictly convex Hamiltonian is given by

$ \left\{\begin{array}{ll}
\varphi_t+\frac{1}{2}(\varphi_x+1)^2 = 0, & x\in[0, 2], \; t\in[0, 1.5/\pi^2], \\
\varphi(x, 0) = -\cos(\pi x), & \\
\varphi(0, t) = \varphi(2, t), &
\end{array}\right. $
(4.5)

with periodic boundary conditions. The singularity occurs at about $ t = 1/\pi^2 $. The change of variables $ v = \varphi_x +1 $ transforms Eq (4.5) into a conservation law, which can be easily solved via the method of characteristics [51]. Here we randomly choose an initial training subset with $ N_0 = 50 $ from a uniform grid with $ n_x = 201 $ points in the space domain, a boundary training subset with $ N_b = 50 $ from a uniform grid with $ n_t = 101 $ points in the time domain, and $ N_r = 2000 $ collocation points using a space-filling Latin hypercube sampling strategy. Figure 6 shows the comparison of the exact solution and the predicted solutions for the strictly convex HJ PDE (4.5) trained by the conventional PINN, LRA-PINN and AW-PINN algorithms. Compared with the exact solution given in Figure 6(a), the neural network trained by our AW-PINN algorithm, presented in Figure 6(b), predicts the solution very well. After the formation of the singularity, the comparison of the exact, conventional PINN, LRA-PINN and AW-PINN solutions at $ t = 1.5/\pi^2 $ is given in Figure 6(c); all three algorithms accurately capture the singularity of the solution. Table 3 lists the iteration counts and relative errors of the approximations. Figure 6(d) shows the comparison of the loss histories for the conventional PINN, LRA-PINN and AW-PINN algorithms; the loss of the AW-PINN algorithm converges faster to the global minimum after 20000 iterations, without the occasional large spikes. Figure 7(a) provides a more detailed view of the evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ used to scale the initial and boundary condition losses during the training procedure of the AW-PINN. Figure 7(b) shows the evolution of the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ over the number of iterations, where all loss terms gradually converge. Evidently, the residual loss term dominates the others.

    Figure 6.  Comparison of the conventional PINN and AW-PINN predicted solution and the loss value over iterations during the training procedure with 4 hidden layers and 100 neurons per layer for 50000 iterations using the Adam optimizer.
    Table 3.  Iteration times and relative $ L^2 $ and $ L^{\infty} $ errors of the approximations that were obtained by the three methods.
| | Conventional PINN | LRA-PINN | AW-PINN |
| --- | --- | --- | --- |
| Iteration times (Adam; L-BFGS-B) | 50000; 6349 | 50000; 40 | 50000; 22 |
| Relative $ L^2 $ error | 4.7554e-03 | 3.6710e-03 | 6.7551e-03 |
| Relative $ L^{\infty} $ error | 2.6565e-02 | 1.7579e-02 | 2.2131e-02 |

    Figure 7.  Evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ of the AW-PINN training algorithm with 4 hidden layers and 100 neurons per layer for 50000 iterations.

    Example 4.1.3 The one-dimensional Eikonal equation is given by

$ \left\{\begin{array}{ll}
\varphi_t+\left|\varphi_x\right| = 0, & x\in[0, 2\pi], \; t\in[0, 1], \\
\varphi(x, 0) = \sin(x), & \\
\varphi(0, t) = \varphi(2\pi, t), &
\end{array}\right. $
(4.6)

with periodic boundary conditions. Despite its origins in geometric optics and wave propagation theory, the Eikonal equation is widely applied in science and engineering. For example, it is used to infer 3D surface shapes, compute distance fields, perform image denoising and segmentation in image processing, and plan optimal paths in robotics. The Eikonal equation is also used to compute geodesic distances in computer graphics, to calculate travel-time fields in seismology, and for etching, deposition, and lithography simulations in semiconductor manufacturing, so studying this equation is of great practical significance. The viscosity solution of this equation has a shock forming in $ \varphi_x $ at $ x = \pi/2 $ and a rarefaction wave at $ x = 3\pi/2 $. The exact solution can be obtained via the Lax-Hopf formula [52]. Our training data are composed of $ N_0 = 50 $ initial points randomly drawn from a uniform grid with $ n_x = 201 $ points in $ [0, 2\pi] $, $ N_b = 50 $ boundary points randomly sampled from a uniform grid with $ n_t = 101 $ points in the time domain, as well as $ N_r = 2000 $ collocation points from a space-filling Latin hypercube sampling strategy. The top row of Figure 8 demonstrates the contour plot of the solution of the Eikonal equation on the $ x-t $ domain. Figure 8(c) provides a more detailed visual comparison of the exact, conventional PINN, LRA-PINN and AW-PINN predicted solutions at time $ t = 1 $. One can observe that the AW-PINN algorithm is able to capture the kink formed at $ \frac{\pi}{2} $ and the rarefaction wave at $ \frac{3\pi}{2} $. After 50000 Adam iterations and a varying number of L-BFGS-B iterations, the relative prediction error of AW-PINN is 7.6622e-03, about three times smaller than that of the conventional PINN (2.2495e-02), as shown in Table 4. Figure 8(d) shows that the loss of the AW-PINN algorithm converges faster towards the global minimum, which indicates that the proposed AW-PINN algorithm can train a superior solution more quickly and has good stability. Figure 9(a) shows the evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ used to scale the initial and boundary condition loss terms during model training in the AW-PINN algorithm. It can be seen that the weight $ \lambda_0 $ is larger than $ \lambda_b $ throughout the iterations and that both change more slowly after 20000 iterations. Figure 9(b) shows the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ over the number of iterations. Compared with the results obtained by the conventional PINN and LRA-PINN, the loss terms of the AW-PINN converge to a small deterministic value as the iterations increase. Evidently, the proposed AW-PINN training algorithm properly balances the interplay between the initial, boundary, and residual loss terms, and avoids oscillations at extreme points.

    Figure 8.  Comparison of the conventional PINN and AW-PINN predicted solution and the loss value over iterations during the training procedure with 4 hidden layers and 100 neurons per layer for 50000 iterations using the Adam optimizer.
    Table 4.  Iteration times and relative $ L^2 $ and $ L^{\infty} $ errors of the approximations that were obtained by the three methods.
| | Conventional PINN | LRA-PINN | AW-PINN |
| --- | --- | --- | --- |
| Iteration times (Adam; L-BFGS-B) | 50000; 3636 | 50000; 19 | 50000; 16 |
| Relative $ L^2 $ error | 2.2495e-02 | 3.8931e-02 | 7.6622e-03 |
| Relative $ L^{\infty} $ error | 1.2236e-01 | 1.8352e-01 | 2.9233e-02 |

    Figure 9.  Evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ of the AW-PINN algorithm with 4 hidden layers and 100 neurons per layer for 50000 iterations.

    Example 4.1.4 The nonconvex Hamiltonian is given by

$ \left\{\begin{array}{ll}
\varphi_t-\cos(\varphi_x+1) = 0, & x\in[-1, 1], \; t\in[0, 1.5/\pi^2], \\
\varphi(x, 0) = -\cos(\pi x), & \\
\varphi(-1, t) = \varphi(1, t), &
\end{array}\right. $
(4.7)

with periodic boundary conditions. We note that the exact solution is given by $ \varphi(x, t) = -\cos(\pi x_0)+t[(v-1)\sin v+\cos v], \; v = 1+\pi \sin(\pi x_0) $, using the method of characteristics. We approximate the solution by a fully connected neural network $ NN(x, t;\boldsymbol{\theta}) $ of 8 hidden layers with 100 neurons in each layer. Here we choose randomly sampled training points with $ N_0 = 50 $, $ N_b = 50 $ and $ N_r = 2000 $ as the training set. The contour plot of the solution obtained by the AW-PINN algorithm, shown in Figure 10(b), agrees well with the exact solution in Figure 10(a). Figure 10(c) provides a more detailed visual comparison of the exact, conventional PINN, LRA-PINN and AW-PINN predicted solutions at time $ t = \frac{1.5}{\pi^2} $. The predicted solution of the AW-PINN training algorithm outperforms the other two, and its $ L^2 $ relative error of 3.7065e-03 is an order of magnitude better than that of the conventional PINN (5.9719e-02). The numbers of iterations and the relative errors of the three training algorithms are given in Table 5; the proposed algorithm trains the solution more accurately with fewer iterations. The loss of the AW-PINN decreases faster and convergence is accelerated, as shown in Figure 10(d). Figure 11 illustrates the evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $. We observe that the loss terms converge and that the residual loss term dominates the others. This example shows that the AW-PINN algorithm can approximate the solution even when the Hamiltonian is not convex.

    Figure 10.  Comparison of the conventional PINN and AW-PINN predicted solution and the loss value over iterations during the training procedure with 8 hidden layers and 100 neurons per layer for 50000 iterations using the Adam optimizer.
    Table 5.  Iteration times and relative $ L^2 $ and $ L^{\infty} $ errors of the approximations that were obtained by the three methods.
| | Conventional PINN | LRA-PINN | AW-PINN |
| --- | --- | --- | --- |
| Iteration times (Adam; L-BFGS-B) | 50000; 19342 | 50000; 17106 | 50000; 9402 |
| Relative $ L^2 $ error | 5.9719e-02 | 3.8731e-02 | 3.7065e-03 |
| Relative $ L^{\infty} $ error | 2.6706e-01 | 2.0716e-01 | 2.6123e-02 |

    Figure 11.  Evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ of AW-PINN training algorithm with 8 hidden layers and 100 neurons per layer for 50000 iterations.

Table 6 displays the relative $ L^2 $ errors of the AW-PINN method for the different examples and for different neural architectures obtained by varying the number of hidden layers and the number of neurons per layer. The proposed AW-PINN algorithm appears to be robust with respect to network architectures and shows a broadly consistent trend of improving prediction accuracy as the number of hidden layers and neurons is increased. As can be seen from Figures 4(d)-10(d) for the loss value over iterations, the loss at the first step is very large; a possible reason is the Xavier initialization. However, the loss decreases to a stable interval within a few steps. The numerical examples verify that our AW-PINN algorithm is stable and that the random initialization has little effect on the numerical results.

    Table 6.  The relative $ L^2 $ error of the AW-PINN method for different examples, and for different neural architectures obtained by varying the number of hidden layers and the different number of neurons per layer.
| | Example 4.1.1 | Example 4.1.2 | Example 4.1.3 | Example 4.1.4 |
| --- | --- | --- | --- | --- |
| 30 neurons / 2 hidden layers | 1.1468e-03 | 3.6212e-02 | 6.8436e-03 | 4.8704e-02 |
| 60 neurons / 2 hidden layers | 1.6954e-03 | 1.1317e-02 | 1.6412e-02 | 9.2931e-03 |
| 120 neurons / 2 hidden layers | 1.8192e-03 | 1.4429e-02 | 1.0010e-02 | 2.4058e-02 |
| 30 neurons / 4 hidden layers | 1.1114e-03 | 5.4070e-03 | 1.3936e-02 | 8.9544e-02 |
| 60 neurons / 4 hidden layers | 3.3524e-04 | 2.6192e-03 | 5.8579e-03 | 1.5446e-02 |
| 120 neurons / 4 hidden layers | 9.3797e-04 | 3.0318e-03 | 9.4430e-03 | 5.4431e-02 |
| 30 neurons / 6 hidden layers | 2.0124e-03 | 8.7984e-03 | 4.5297e-03 | 1.2776e-02 |
| 60 neurons / 6 hidden layers | 1.2216e-03 | 2.1817e-03 | 7.4221e-03 | 3.4884e-03 |
| 120 neurons / 6 hidden layers | 8.3961e-04 | 4.0882e-03 | 3.3320e-03 | 3.8391e-02 |


    Example 4.2.1 The two-dimensional convex Hamiltonian is given by

$ \left\{\begin{array}{ll}
\varphi_t+\frac{1}{2}\left(\varphi_x+\varphi_y+1\right)^2 = 0, & (x, y)\in\Omega, \; t\in[0, 1.5/\pi^2], \\
\varphi(x, y, 0) = -\cos\left(\pi(x+y)/2\right), & (x, y)\in\Omega,
\end{array}\right. $
(4.8)

with periodic boundary conditions on the domain $ \Omega = [0, 2\pi]\times[0, 2\pi] $. This problem can be reduced to a one-dimensional problem via the coordinate transformation $ \zeta = (x+y)/2, \eta = (x-y)/2 $, so we can use the one-dimensional exact solution to analyze our predicted results. We approximate the solution by a fully connected neural network $ NN(x, y, t;\boldsymbol{\theta}) $ of 5 hidden layers with 200 neurons in each layer. The small data set consists of uniformly spaced grid points in the domain $ \Omega $, with $ n_x = n_y = 41 $ space grid points and $ n_t = 41 $ time grid points. Here we choose randomly sampled points with $ N_0 = 400, N_b = 4000 $ and $ N_r = 30000 $. We present the contour plot of the solution after the singularity formation in Figure 12. Using the same neural architecture hyper-parameters, the trained solutions obtained by the AW-PINN and the conventional PINN algorithms are both consistent with the exact one. However, the $ L^2 $ relative error of 9.8475e-03 for the AW-PINN is smaller than the 1.2731e-02 for the conventional PINN. Moreover, the loss decreases faster and convergence is accelerated, as shown in Figure 12(d). Finally, Figure 13 presents the evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ over iterations during the training of AW-PINN with 50000 iterations using the Adam optimizer. It can be seen from Figure 13(a) that although the weights $ \lambda_0 $ and $ \lambda_b $ are very large at the first step, they decrease to a stable interval within a few steps. Figure 13(b) shows that the residual loss term plays a dominant role in the total loss.

    Figure 12.  Comparison of the conventional PINN and AW-PINN predicted solution at time $ t = 1.5/\pi^2 $, and the loss value over iterations during the training procedure with 5 hidden layers and 200 neurons per layer for 50000 iterations using the Adam optimizer.
    Figure 13.  Evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ of the AW-PINN training algorithm with 5 hidden layers and 100 neurons per layer for 50000 iterations using the Adam optimizer.

    Example 4.2.2 The two-dimensional nonlinear equation is given by

$ \left\{\begin{array}{ll}
\varphi_t+\varphi_x\varphi_y = 0, & (x, y)\in\Omega, \; t\in[0, 0.5], \\
\varphi(x, y, 0) = \sin(x)+\cos(y), & (x, y)\in\Omega,
\end{array}\right. $
(4.9)

with Dirichlet boundary conditions on the domain $ \Omega = [-\pi, \pi]^2 $. This is a genuinely nonlinear problem with a nonconvex Hamiltonian. The exact solution is given implicitly by $ \varphi(x, y, t) = -t\cos(q)\sin(r)+\sin(q)+\cos(r) $, where $ x = q-t\sin(r) $ and $ y = r+t\cos(q) $. We approximate the solution by a fully connected neural network $ NN(x, y, t;\boldsymbol{\theta}) $ of 5 hidden layers with 200 neurons in each layer. The small data set consists of uniformly spaced grid points in the domain $ \Omega $, where the numbers of grid points are $ n_x = n_y = 41 $ and $ n_t = 41 $. Here we choose $ N_0 = 1681, N_b = 4000 $ and $ N_r = 30000 $ randomly sampled points. We present a detailed visual comparison of the exact solution and the solutions predicted by the conventional PINN and the AW-PINN at time $ t = 0.5 $ in Figure 14. Compared with the conventional PINN, the loss of the AW-PINN training algorithm decreases faster and convergence is accelerated after 30000 iterations, as shown in Figure 14(d). Moreover, the $ L^2 $ relative error is $ 1.2702e-01 $, which is smaller than the $ 1.2937e-01 $ for the conventional PINN. Finally, Figure 15 shows the evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ over 50000 iterations. One observes that the variation range of the weights becomes smaller as the number of iterations increases and that the residual loss term dominates the others.

    Figure 14.  Comparison of the conventional PINN and AW-PINN predicted solution at time $ t = 0.5 $, and the loss value over iterations during the training procedure with 5 hidden layers and 200 neurons per layer for 50000 iterations using the Adam optimizer.
    Figure 15.  Evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ of the AW-PINN algorithm with 5 hidden layers and 100 neurons per layer for 50000 iterations using the Adam optimizer.

    Example 4.2.3 The shape-from-shading problem is given by

$ \left\{\begin{array}{ll}
\varphi_t+I(x, y)\sqrt{1+\varphi_x^2+\varphi_y^2}-1 = 0, & (x, y)\in\Omega, \; t\in[0, 1], \\
\varphi(x, y, 0) = (4096/9)\, x y (1-x)(1-y)(x-1/2)(y-1/2), & (x, y)\in\Omega,
\end{array}\right. $
(4.10)

with $ \varphi(x, y, t) = 0 $ on the boundary of the domain $ \Omega = [0, 1]^2 $. Here $ I(x, y) $ is the brightness value with $ 0 < I(x, y)\leq 1 $; specifically, we take

$ I(x, y) = 1\Big/\sqrt{1+\left(2\pi\cos(2\pi x)\sin(2\pi y)\right)^2+\left(2\pi\sin(2\pi x)\cos(2\pi y)\right)^2}. $
(4.11)

The steady-state solution of Eq (4.10) is the shape function that produces the brightness $ I $ under vertical lighting, see [53]. According to [54], the shape-from-shading problem (4.10) and (4.11) has multiple solutions, all of which satisfy the homogeneous boundary condition. As in traditional numerical methods [55], additional "boundary conditions" are introduced at the points where $ I(x, y) = 1 $. In our problem, we consider the following "boundary conditions":

$ \varphi\left(\tfrac{1}{4}, \tfrac{1}{4}\right) = \varphi\left(\tfrac{3}{4}, \tfrac{3}{4}\right) = 1, \quad \varphi\left(\tfrac{1}{4}, \tfrac{3}{4}\right) = \varphi\left(\tfrac{3}{4}, \tfrac{1}{4}\right) = -1, \quad \varphi\left(\tfrac{1}{2}, \tfrac{1}{2}\right) = 0, $
(4.12)

so that the exact steady-state solution is $ \varphi(x, y) = \sin(2\pi x)\sin(2\pi y) $. We approximate the solution by a fully connected neural network $ NN(x, y, t;\boldsymbol{\theta}) $ of 5 hidden layers with 200 neurons in each layer. The small data set consists of uniformly spaced grid points in the domain $ \Omega $, where $ n_x = n_y = 41 $ and $ n_t = 161 $ are the numbers of space and time grid points. Here we take randomly sampled points with $ N_0 = 1681, N_b = 4000 $ and $ N_r = 30000 $. A detailed visual comparison of the exact solution and the solutions predicted by the conventional PINN and the AW-PINN at time $ t = 1 $ is shown in Figure 16. The loss of the AW-PINN training algorithm decreases faster and convergence is accelerated after 30000 iterations, as shown in Figure 16(d), and the $ L^2 $ relative error of $ 1.7686e-02 $ is an order of magnitude better than that of the conventional PINN (1.5982e-01). Finally, Figure 17 presents the evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ over 50000 iterations. For this problem, the residual loss term dominates the others as usual; however, to make the total loss decrease as the number of iterations increases, the weights of the loss terms corresponding to the initial and boundary conditions change more frequently here.

    Figure 16.  Comparison of the conventional PINN and AW-PINN predicted solution at time $ t = 1 $, and the loss value over iterations during the training procedure with 5 hidden layers and 200 neurons per layer for 50000 iterations using the Adam optimizer.
    Figure 17.  Evolution of the adaptive weights $ \lambda_0 $ and $ \lambda_b $ and the mean squared error loss terms $ L_0(\boldsymbol{\theta}), L_b(\boldsymbol{\theta}), L_r(\boldsymbol{\theta}) $ of the AW-PINN training algorithm with 5 hidden layers and 100 neurons per layer for 50000 iterations using the Adam optimizer.

There are valuable works developing specific neural network architectures to solve certain sets of HJ PDEs [42,43] whose Hamiltonian is convex and whose viscosity solution is given by the Lax-Oleinik formula. However, the performance of the physics-informed neural network algorithm has not been fully investigated in the literature for time-dependent HJ PDEs, such as nonconvex Hamiltonian equations whose solutions exhibit shocks and rarefaction waves. The original PINN formulation [6] is trained to solve unsupervised learning tasks by minimizing the mean squared error loss with a physics-informed constraint. This training method is suitable for certain types of problems, such as those suffering from the curse of dimensionality; however, the original PINN has difficulty providing an accurate approximation for nonconvex Hamiltonians. To further improve the predictive accuracy, we have proposed the AW-PINN algorithm, in which the weights of the different loss terms are adaptively updated so that the residual loss term keeps dominating the others. This approach improves on the learning rate annealing for physics-informed neural networks of Wang, Teng and Karniadakis [8]: it updates the weights using the logarithmic mean and provides better predictive accuracy for HJ PDEs. We examined our proposed training algorithm on time-dependent HJ PDEs including a variable-coefficient linear equation, a strictly convex Hamiltonian, the Eikonal equation, a nonconvex Hamiltonian, a two-dimensional Burgers-type Hamiltonian, a two-dimensional nonlinear equation with a nonconvex Hamiltonian, and the shape-from-shading problem. The series of numerical experiments shows that the proposed algorithm effectively achieves noticeable improvements in predictive accuracy and in the convergence rate. Although the numerical results verify that this training algorithm can learn accurate approximate solutions of HJ PDEs and yields practical improvements, the theoretical analysis of the proposed AW-PINN algorithm deserves further research. We will investigate critical applications in computational science and engineering to better understand PINNs, and these numerical results may provide inspiration for subsequent theoretical research. We also note that the AW-PINN algorithm can enforce periodic boundary conditions by imposing the periodicity requirement on the function and its derivatives; the neural network architecture can also be modified to exactly impose Dirichlet and periodic boundary conditions.

    This work is supported by the National Natural Science Foundation of China (Grant Nos. 11871399 and 11901460) and the China Postdoctoral Science Foundation (Grant No. 2022M712600), which are gratefully acknowledged.

    The authors declare there is no conflict of interest.

    The logarithmic mean of $ a_l $ and $ a_r $ is defined as

    $ a^{\ln}(l, r) = \frac{a_l - a_r}{\ln(a_l) - \ln(a_r)}.
    $

    However, this form is numerically ill-behaved when $ a_l \to a_r $, since it approaches $ 0/0 $. To overcome this, let us write the logarithmic mean in another form. Let $ \zeta = \frac{a_l}{a_r} $, so that

    $ a^{\ln}(l, r) = \frac{a_l + a_r}{\ln\zeta}\,\frac{\zeta - 1}{\zeta + 1},
    $

    where $ \ln(\zeta) = 2\Bigl(\frac{\zeta-1}{\zeta+1}+\frac{1}{3}\frac{(\zeta-1)^3}{(\zeta+1)^3}+\frac{1}{5}\frac{(\zeta-1)^5}{(\zeta+1)^5}+\frac{1}{7}\frac{(\zeta-1)^7}{(\zeta+1)^7}+o\bigl((\zeta-1)^9\bigr)\Bigr) $ is used to obtain a numerically stable form of the logarithmic mean. The subroutine for computing the logarithmic mean is as follows: let

    $ \zeta = \frac{a_l}{a_r}, \qquad f = \frac{\zeta - 1}{\zeta + 1}, \qquad u = f\,f,
    $

    and

    $ F = \begin{cases} 1.0 + u/3.0 + u\,u/5.0 + u\,u\,u/7.0, & \text{if } u < \varepsilon;\\ \ln(\zeta)/(2.0\,f), & \text{otherwise}, \end{cases}
    $

    so that $ a^{\ln}(l, r) = \frac{a_l+a_r}{2F} $ with $ \varepsilon = 10^{-2} $.
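
    A minimal Python transcription of this subroutine, assuming scalar inputs (the function name and the use of NumPy are ours, for illustration):

```python
import numpy as np

# Numerically robust logarithmic mean, following the appendix subroutine
# with eps = 1e-2 (illustrative transcription; scalar inputs assumed).
def log_mean(a_l, a_r, eps=1.0e-2):
    zeta = a_l / a_r
    f = (zeta - 1.0) / (zeta + 1.0)
    u = f * f
    if u < eps:
        # Truncated series for ln(zeta)/(2 f); stable as a_l -> a_r
        F = 1.0 + u / 3.0 + u * u / 5.0 + u * u * u / 7.0
    else:
        F = np.log(zeta) / (2.0 * f)
    return (a_l + a_r) / (2.0 * F)

# Examples: log_mean(1.0, 1.0) returns 1.0 exactly (series branch, u = 0);
# log_mean(1.0, 2.0) returns (2 - 1)/ln(2) ~ 1.4427 (direct branch).
```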
