    Let $ \Omega $ be a bounded domain in $ \mathbb{R}^{d} $ with sufficiently smooth boundary $ \partial\Omega $. We consider the following initial boundary value problem (IBVP) for the anomalous diffusion model with multi-term time-fractional derivatives

    $ \left\{\begin{aligned} &\sum\limits_{j = 1}^{s}q_j\, \partial_{0+}^{\alpha_{j}}u(x, t)-(Lu)(x, t) = f(x)p(t), \quad && (x, t)\in \Omega\times I, \\ &u(x, 0) = 0, \quad && x\in \Omega, \\ &u(x, t) = 0, \quad && (x, t)\in \partial\Omega\times I, \end{aligned}\right. $ (1.1)

    where $ \; I = (0, T)\; $ with $ \; T > 0\; $ fixed, and $ \partial_{0+}^{\alpha_{j}} $ is the Caputo fractional derivative defined by

    $ \partial_{0+}^{\alpha_{j}}u(x, t) = \frac{1}{\Gamma(1-\alpha_{j})}\int_{0}^{t}\frac{\partial u(x, s)}{\partial s}\frac{ds}{(t-s)^{\alpha_{j}}}, \quad 0 < \alpha_{j} < 1. $

    For a fixed positive integer $ s $, the orders $ \mathit{\boldsymbol{\alpha}} = (\alpha_1, ..., \alpha_s) $ and the coefficients $ \mathit{\boldsymbol{q}} = (q_1, ..., q_s) $ are restricted to the admissible sets

    $ \mathcal{A}: = \left\{(\alpha_1, ..., \alpha_s)\in\mathbb{R}^s;\ \overline{\alpha}\geq\alpha_1 > \alpha_2 > \cdots > \alpha_s\geq\underline{\alpha}\right\}, $ (1.2)
    $ \mathcal{Q}: = \left\{(q_1, ..., q_s)\in\mathbb{R}^s;\ q_1 = 1, \ q_j\in[\underline{q}, \overline{q}], \ (j = 2, ..., s)\right\}, $ (1.3)

    with fixed $ 0 < \underline{\alpha} < \overline{\alpha} < 1 $ and $ 0 < \underline{q} < \overline{q} $.

    $ \; -L\; $ is a symmetric uniformly elliptic operator defined on $ \; D(-L) = H^2(\Omega)\cap H^1_0(\Omega)\; $ given by

    $ Lu(x, t) = \sum\limits_{i = 1}^{d}\frac{\partial}{\partial x_i}\left(\sum\limits_{j = 1}^{d}a_{ij}(x)\frac{\partial}{\partial x_j}u(x, t)\right)+c(x)u(x, t), \quad x\in\Omega, $

    in which the coefficients satisfy

    $ \begin{aligned} &a_{ij} = a_{ji}, \quad 1\leq i, j\leq d, \quad a_{ij}\in C^1(\bar{\Omega}), \\ &\mu\sum\limits_{i = 1}^{d}\xi_i^2\leq\sum\limits_{i, j = 1}^{d}a_{ij}(x)\xi_i\xi_j, \quad x\in\bar{\Omega}, \ \xi = (\xi_1, \cdots, \xi_d)\in\mathbb{R}^d, \ \text{for a constant}\ \mu > 0, \\ &c(x)\leq 0, \quad x\in\bar{\Omega}, \quad c(x)\in C(\bar{\Omega}). \end{aligned} $

    If the number of fractional derivatives $ s $, the orders $ \mathit{\boldsymbol{\alpha}} $ and their coefficients $ \mathit{\boldsymbol{q}} $, the elliptic operator $ L $ and the source functions $ p(t), \; f(x) $ are given appropriately, then the IBVP Eq (1.1) is a direct problem. Here the spatial source term $ f(x) $ is unknown, so the inverse problem in this paper is to determine the spatial source term $ f(x) $ based on problem Eq (1.1) and the additional terminal data

    $ u(x, T) = g(x), \quad x\in\Omega. $ (1.4)

    Since the measurement is inevitably contaminated by noise, we denote the noisy measurement of $ g $ by $ g^\delta(x) $, satisfying

    $ \|g^{\delta}(x)-g(x)\|\leq\delta. $ (1.5)

    It is well known that time-fractional diffusion equations (TFDEs) have a wide range of applications in physics, chemistry and other fields [1,2,3,4,5]. The most representative example is the continuous-time random walk in general non-Markovian processes. However, with increasing demands on model accuracy, the single-term time-fractional diffusion equation gradually failed to meet the needs of applications, so Schumer et al. [6] proposed using the multi-term time-fractional diffusion equation (MTFDE) to increase the accuracy of the model. The MTFDE is not only a useful tool for describing anomalous diffusion phenomena in highly heterogeneous aquifers and complex viscoelastic materials [7], but can also be applied indirectly in the numerical solution of distributed-order fractional differential equations [8].

    There have been many studies on the direct problem for the IBVP Eq (1.1) in recent years, such as uniqueness and existence results [9], the maximum principle [10], and analytic solutions [11,12]. In terms of numerical work, there are also papers on numerical solutions by finite difference methods [13,14,15] and by finite element methods [16,17,18,19].

    The spectral method needs fewer grid points to obtain a highly accurate solution, so it is well suited to discretizing the MTFDE, whose numerical solution requires heavy computations. Studies of spectral methods for the MTFDE can be divided into two categories. On the one hand, the time-fractional derivative is discretized by a finite difference method and the spectral method is applied to the space variable. Guo et al. [20] studied the numerical approximation of the distributed-order time-space fractional reaction-diffusion equation. Jiang et al. [21] considered a Legendre spectral method on graded meshes for handling the MTFDE with non-smooth solutions in the two-dimensional case. Liu et al. [22] applied an alternating direction implicit Legendre spectral method to handle the multi-term time-fractional Oldroyd-B fluid type diffusion equation in the two-dimensional case. On the other hand, the spectral method is applied both in space and in time. Zheng et al. [23] considered a high-order numerical method for handling the MTFDE. Zaky [24] applied a Legendre spectral tau method to handle the MTFDE. However, we are interested not only in obtaining a numerical solution with high precision by using the spectral method, but also in determining the spatial source term in an inverse problem for the MTFDE.

    As far as we know, the theory and numerical methods for inverse source problems in the single-term case (i.e., $ s = 1 $) of Eq (1.1) are relatively abundant. Zhang et al. [25] proved a uniqueness result for recovering the spatial source term in the one-dimensional case by using one-point Cauchy data and proposed an efficient numerical method. Wei et al. [26] studied the identification of a spatial source term in a multi-dimensional time-fractional diffusion equation from boundary measured data; the uniqueness of the inverse source problem is proved by the Laplace transform method. Yan et al. [27] studied the identification of a spatial source term in a multi-dimensional time-fractional diffusion-wave equation from a part of noisy boundary data; the uniqueness of the inverse spatial source term problem is proved by the Titchmarsh convolution theorem and the Duhamel principle. Sun et al. [28] were devoted to recovering simultaneously the fractional order and the space-dependent source term from partial Cauchy boundary data in a multi-dimensional time-fractional diffusion equation. Recently, Yeganeh et al. [29] came up with an interesting idea: they used a method based on a finite difference scheme in time and a local discontinuous Galerkin method in space to determine a spatial source term in a time-fractional diffusion equation. This had not been seen in previous studies.

    For the multi-term case, however, research results on the inverse source problem are relatively few at present. Jiang et al. [30] established a weak unique continuation property for time-fractional diffusion-advection equations, and they considered an inverse problem of determining the spatial source term from interior measurements. Li et al. [31] considered an inverse problem of recovering a time-dependent source term from the Cauchy data in a MTFDE, and they applied the conjugate gradient method to identify the approximate source term. Recently, Sun et al. [32] investigated the inversion of the spatial source term in a MTFDE with nonhomogeneous boundary conditions from partially disturbed boundary data; they proposed the Levenberg-Marquardt regularization method to solve the inverse source problem. In addition, the simultaneous inversion of the source term and other terms has been studied. For instance, Malik et al. [33] studied an inverse problem of identifying a time-dependent source term along with the diffusion/temperature concentration from a non-local over-specified condition for a space-time fractional diffusion equation. Sun et al. [34] considered a nonlinear inverse problem of simultaneously recovering the potential function and the fractional orders in a MTFDE from noisy boundary Cauchy data in the one-dimensional case. For other inverse source problems, we refer to [35,36,37]. Nevertheless, to the best knowledge of the authors, no one has used the spectral method to determine a spatial source term in an inverse problem of the MTFDE.

    In this paper, we focus on two aspects of the proposed model Eq (1.1). One is the study of the direct problem: the theoretical analysis and the numerical scheme of the Galerkin spectral method are proposed to solve the IBVP Eq (1.1). The other is that we use the Galerkin spectral method to investigate an inverse spatial source term problem for a MTFDE from noisy final data in a general bounded domain. In the proposed method, the multi-term Caputo fractional derivatives are discretized by the L1 formula and the Galerkin spectral method is applied to the space variable. At the same time, a comparison between the Galerkin spectral method and the finite difference method is included in this paper. Finally, we not only demonstrate the validity of the Galerkin spectral method in the application to the MTFDE, but also verify the superiority of the Galerkin spectral method in the forward and inverse problems by comparing the numerical results with those of the finite difference method.

    The remainder of this paper is organized as follows. Some preliminaries are presented in Section 2. The detailed convergence analysis of the presented method is shown in Section 3. Uniqueness and ill-posedness for the inverse problem are shown in Section 4. We present the Galerkin spectral method and the finite difference method algorithms in Section 5. Numerical results for four examples are investigated in Section 6. Finally, we give a conclusion in Section 7.

    In this section, we first introduce some preliminaries.

    Definition 2.1. The multinomial Mittag-Leffler function is defined by (see [9,38])

    $ E_{(\theta_1, \cdots, \theta_s), \theta_0}(z_1, \cdots, z_s): = \sum\limits_{k = 0}^{\infty}\sum\limits_{k_1+\cdots+k_s = k}(k; k_1, \cdots, k_s)\frac{\prod_{j = 1}^{s}z_j^{k_j}}{\Gamma(\theta_0+\sum_{j = 1}^{s}\theta_j k_j)}, $

    where $ \theta_0, \; \theta_j\in \mathbb{R} $, and $ z_j\in \mathbb{C} $ $ (j = 1, \cdots, s) $, and $ (k; k_1, \cdots, k_s) $ denotes the multinomial coefficient

    $ (k; k_1, \cdots, k_s): = \frac{k!}{k_1!\cdots k_s!} \quad \text{with} \quad k = \sum\limits_{j = 1}^{s}k_j, $

    where $ \; k_j\; (j = 1, \cdots, s)\; $ are non-negative integers.
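
    For intuition, the following Python sketch (not part of the original work; the function name and the truncation order K_max are illustrative assumptions) evaluates a truncated version of this series.

```python
# Minimal sketch (illustrative, not from the paper): truncated evaluation of the
# multinomial Mittag-Leffler function E_{(theta_1,...,theta_s),theta_0}(z_1,...,z_s)
# by summing the series for k <= K_max.
from itertools import product
from math import factorial
from scipy.special import gamma

def multinomial_ml(theta, theta0, z, K_max=50):
    s = len(theta)
    total = 0.0
    for k in range(K_max + 1):
        # enumerate all non-negative integer tuples (k_1,...,k_s) with k_1+...+k_s = k
        for ks in product(range(k + 1), repeat=s):
            if sum(ks) != k:
                continue
            coef = factorial(k)
            for kj in ks:
                coef //= factorial(kj)          # multinomial coefficient (k; k_1,...,k_s)
            num = 1.0
            for zj, kj in zip(z, ks):
                num *= zj ** kj                  # prod_j z_j^{k_j}
            total += coef * num / gamma(theta0 + sum(t * kj for t, kj in zip(theta, ks)))
    return total

# sanity check: with s = 1, theta = (1,), theta0 = 1 the series reduces to the
# exponential, so the value below should be close to exp(-1) ~ 0.3679
print(multinomial_ml([1.0], 1.0, [-1.0]))
```

    For moderate arguments the truncated series converges quickly; for large arguments a dedicated evaluation algorithm would be required.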

    For later convenience, if the orders $ \mathit{\boldsymbol{\alpha}} = (\alpha_1, ..., \alpha_s) $ and the coefficients $ \mathit{\boldsymbol{q}} = (q_1, ..., q_s) $ satisfy Eq (1.2) and Eq (1.3), then we adopt the abbreviation

    $ E_{\boldsymbol{\alpha}', \beta}^{(n)}(t): = E_{(\alpha_1, \alpha_1-\alpha_2, \cdots, \alpha_1-\alpha_s), \beta}(-\lambda_n t^{\alpha_1}, -q_2 t^{\alpha_1-\alpha_2}, \cdots, -q_s t^{\alpha_1-\alpha_s}), \quad t > 0, \ n = 1, 2, \cdots, $

    where $ \mathit{\boldsymbol{\alpha}}' = (\alpha_1, \alpha_1-\alpha_2, \cdots, \alpha_1-\alpha_s) $ and $ \lambda_n $ denotes the $ n $-th eigenvalue of the elliptic operator $ -L $ with the homogeneous Dirichlet boundary condition.

    Lemma 2.2. ([9]) Let $ \; 0 < \alpha_s < \alpha_{s-1} < \cdots < \alpha_1 < 1 $. Then,

    $ \frac{d}{dt}\left\{t^{\alpha_1}E_{\boldsymbol{\alpha}', 1+\alpha_1}^{(n)}(t)\right\} = t^{\alpha_1-1}E_{\boldsymbol{\alpha}', \alpha_1}^{(n)}(t), \quad t > 0. $

    Lemma 2.3. ([9,39]) Let $ \; 0 < \alpha_s < \alpha_{s-1} < \cdots < \alpha_1 < 1 $. Then the function $ t^{\alpha_1-1}E_{\boldsymbol \alpha^{'}, \alpha_1}^{(n)}(t) $ is positive for $ t > 0. $

    Lemma 2.4. ([40]) Let $ \; 0 < \alpha_s < \alpha_{s-1} < \cdots < \alpha_1 < 1 $. Then

    $ \left|1-\lambda_n t^{\alpha_1}E_{\boldsymbol{\alpha}', \alpha_1+1}^{(n)}(t)\right|\leq\frac{\sum_{j = 2}^{s}M(1+q_j t^{\alpha_1-\alpha_j})}{1+\lambda_n t^{\alpha_1}}, \quad t > 0, \ n = 1, 2, \cdots, $

    where $ M $ is a positive constant.

    Proposition 2.5. Let $ \; 0 < \alpha_s < \alpha_{s-1} < \cdots < \alpha_1 < 1 $. Then we have $ E_{\alpha^{'}, 1+\alpha_1}^{(n)}(t) > 0 $ for $ t > 0 $.

    Proof. By Lemma 2.2 and 2.3, we know that

    $ \frac{d}{dt}\left\{t^{\alpha_1}E_{\boldsymbol{\alpha}', 1+\alpha_1}^{(n)}(t)\right\} = t^{\alpha_1-1}E_{\boldsymbol{\alpha}', \alpha_1}^{(n)}(t) > 0. $

    Since $ t^{\alpha_1}E_{\boldsymbol \alpha^{'}, 1+\alpha_1}^{(n)}(t) $ vanishes at $ t = 0 $ and is strictly increasing by the above inequality, it is positive for $ t > 0 $; hence $ E_{\boldsymbol \alpha^{'}, 1+\alpha_1}^{(n)}(t) > 0 $.

    Proposition 2.6. For $ \lambda_n > 0 $ and $ 0 < \alpha_s < \alpha_{s-1} < \cdots < \alpha_1 < 1 $, we have $ 0 < 1-\lambda_{n}t^{\alpha_{1}}E_{\boldsymbol \alpha^{'}, 1+\alpha_1}^{(n)}(t) < 1 $ for $ t > 0 $. Moreover, $ 1-\lambda_{n}t^{\alpha_{1}}E_{\boldsymbol \alpha^{'}, 1+\alpha_1}^{(n)}(t) $ is a strictly decreasing function on $ t > 0 $.

    Proof. By Lemma 2.2 and 2.3, we have

    $ \frac{d}{dt}\left\{1-\lambda_n t^{\alpha_1}E_{\boldsymbol{\alpha}', 1+\alpha_1}^{(n)}(t)\right\} = -\lambda_n t^{\alpha_1-1}E_{\boldsymbol{\alpha}', \alpha_1}^{(n)}(t) < 0. $

    We notice that $ 1-\lambda_{n}t^{\alpha_{1}}E_{\boldsymbol \alpha^{'}, 1+\alpha_1}^{(n)}(t) $ is a continuous function on $ t $. Hence, we have $ \lim_{t\rightarrow 0}(1-\lambda_{n}t^{\alpha_{1}}E_{\boldsymbol \alpha^{'}, 1+\alpha_1}^{(n)}(t)) = 1 $. By Lemma 2.4 we know $ \lim_{t\rightarrow \infty}(1-\lambda_{n}t^{\alpha_{1}}E_{\boldsymbol \alpha^{'}, 1+\alpha_1}^{(n)}(t)) = 0 $. The proof is completed.

    Lemma 2.7. For $ \lambda_n > 0 $ and $ 0 < \alpha_s < \alpha_{s-1} < \cdots < \alpha_1 < 1 $, we have

    $ 0\leq E_{\boldsymbol{\alpha}', 1+\alpha_1}^{(n)}(T)\leq\frac{1}{T^{\alpha_1}\lambda_n}. $

    Proof. By Proposition 2.6, we know $ 0 < E_{\alpha^{'}, 1+\alpha_1}^{(n)}(T) < \frac{1}{\lambda_{n}T^{\alpha_{1}}} $.

    Lemma 2.8. Let $ k $ be a positive integer, $ \Delta t > 0 $ and $ \; \lim_{k\rightarrow \infty}k\Delta t = T $. Then the sequence defined by

    $ \omega_k = \sum\limits_{j = 1}^{s}q_j\, \frac{{\Delta t}^{-\alpha_j}}{\Gamma(2-\alpha_j)}\left(k^{1-\alpha_j}-(k-1)^{1-\alpha_j}\right), $ (2.1)

    where $ \alpha_j $ and $ q_j $ are defined in Eq (1.1) and $ \omega_k^{-1} $ denotes $ \frac{1}{\omega_k} $, has the following properties:

    (1) $ \omega_k $ is a decreasing sequence with respect to $ k $;

    (2) $ \omega_1 > \dots > \omega_k > 0 $ for each $ k $;

    (3) $ \omega_k^{-1} $ is an increasing sequence with respect to $ k $;

    (4) $ \lim_{k\rightarrow \infty} k^{-\alpha_1}\omega_k^{-1} = \frac{{\Delta t}^{\alpha_1}}{\frac{q_1(1-\alpha_1)}{\Gamma(2-\alpha_1)}+\sum\limits_{j = 2}^{s}q_j\; \frac{T^{\alpha_1-\alpha_j}(1-\alpha_j)}{\Gamma(2-\alpha_j)}}. $

    Proof. First, let $ \Phi(x) = \omega_x $ be the continuous extension of $ \omega_k $ to real $ x $. Since $ \frac{d\Phi(x)}{dx} < 0, $ $ \Phi(x) $ is a decreasing function, and hence $ \omega_k $ is a decreasing sequence. One can check directly that $ \omega_1 > \dots > \omega_k > 0 $ for each $ k $, and it follows that $ \omega_k^{-1} $ is an increasing sequence. Further, we have

    $ \lim\limits_{k\rightarrow \infty} k^{-\alpha_1}\omega_k^{-1} = \lim\limits_{k\rightarrow \infty}\frac{1}{k^{\alpha_1}\omega_k} = \lim\limits_{k\rightarrow \infty}\frac{1}{\Psi_k^1+\Psi_k^2}, $

    where

    $ \begin{equation} \nonumber \begin{aligned} &\Psi_k^1 = \frac{q_1{\Delta t}^{-\alpha_1}}{\Gamma(2-\alpha_1)}(k^{1-\alpha_1}-(k-1)^{1-\alpha_1})k^{\alpha_1}, \\ &\Psi_k^2 = \sum\limits_{j = 2}^{s}q_j\; \frac{{\Delta t}^{-\alpha_j}}{\Gamma(2-\alpha_j)} (k^{1-\alpha_j}-(k-1)^{1-\alpha_j})k^{\alpha_1}. \end{aligned} \end{equation} $

    First we have

    $ \begin{equation} \nonumber \begin{aligned} \lim\limits_{k\rightarrow \infty}\Psi_k^1 & = \lim\limits_{k\rightarrow \infty}\frac{q_1{\Delta t}^{-\alpha_1}}{\Gamma(2-\alpha_1)} (1-(1-\frac{1}{k})^{1-\alpha_1}) k^{1-\alpha_1}k^{\alpha_1}\\ & = \lim\limits_{k\rightarrow \infty}\frac{q_1{\Delta t}^{-\alpha_1}}{\Gamma(2-\alpha_1)}\lim\limits_{k\rightarrow \infty}(1-1+(1-\alpha_1)\frac{1}{k}+o(\frac{1}{k}))k\\ & = \frac{q_1(1-\alpha_1)}{\Gamma(2-\alpha_1)}{\Delta t}^{-\alpha_1}, \end{aligned} \end{equation} $

    where we use the Taylor expansion of $ (1-1/x)^{1-\alpha_1} $ in the second equality. Similarly, we have

    $ \begin{equation} \nonumber \begin{aligned} \lim\limits_{k\rightarrow \infty}\Psi_k^2 & = \lim\limits_{k\rightarrow \infty}\sum\limits_{j = 2}^{s}q_j\; \frac{{\Delta t}^{-\alpha_j}}{\Gamma(2-\alpha_j)} (k^{1-\alpha_j}-(k-1)^{1-\alpha_j})k^{\alpha_1}\\ & = \lim\limits_{k\rightarrow \infty}\sum\limits_{j = 2}^{s}q_j\; \frac{{\Delta t}^{-\alpha_j}}{\Gamma(2-\alpha_j)}(k^{1-\alpha_j}-(k-1)^{1-\alpha_j}) k^{\alpha_j}k^{-\alpha_j}{\Delta t}^{-\alpha_1}{\Delta t}^{\alpha_1}k^{\alpha_1}\\ & = \lim\limits_{k\rightarrow \infty}\sum\limits_{j = 2}^{s}q_j\; \frac{1}{\Gamma(2-\alpha_j)}(1-(1-\frac{1}{k})^{1-\alpha_j}) k(k\Delta t)^{-\alpha_j}(k\Delta t)^{\alpha_1}{\Delta t}^{-\alpha_1}\\ & = \sum\limits_{j = 2}^{s}q_j\; \frac{T^{\alpha_1-\alpha_j}(1-\alpha_j)}{\Gamma(2-\alpha_j)}{\Delta t}^{-\alpha_1}. \end{aligned} \end{equation} $

    For all these reasons, we arrive at

    $ \begin{equation} \lim\limits_{k\rightarrow \infty} k^{-\alpha_1}\omega_k^{-1} = \frac{{\Delta t}^{\alpha_1}}{\frac{q_1(1-\alpha_1)}{\Gamma(2-\alpha_1)}+\sum\limits_{j = 2}^{s}q_j\; \frac{T^{\alpha_1-\alpha_j}(1-\alpha_j)}{\Gamma(2-\alpha_j)}}. \end{equation} $ (2.2)

    The proof is complete.
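
    The statements of Lemma 2.8 are easy to check numerically. The small sketch below (the values of $ \boldsymbol{\alpha} $, $ \boldsymbol{q} $, $ T $ and $ N $ are assumed examples) computes the weights $ \omega_k $ of Eq (2.1) and compares $ k^{-\alpha_1}\omega_k^{-1} $ at $ k = N $ with the limit in item (4).

```python
# Numerical check (illustrative) of Lemma 2.8 for assumed parameters.
import numpy as np
from scipy.special import gamma

alpha = np.array([0.9, 0.8, 0.7])   # alpha_1 > alpha_2 > alpha_3, as in Eq (1.2)
q = np.array([1.0, 1.0, 1.0])       # q_1 = 1, as in Eq (1.3)
T, N = 1.0, 20000
dt = T / N
k = np.arange(1, N + 1)

# omega_k of Eq (2.1)
omega = sum(qj * dt**(-aj) / gamma(2 - aj) * (k**(1 - aj) - (k - 1)**(1 - aj))
            for aj, qj in zip(alpha, q))

assert np.all(np.diff(omega) < 0) and omega[-1] > 0    # properties (1)-(3)
lhs = k[-1]**(-alpha[0]) / omega[-1]                   # k^{-alpha_1} * omega_k^{-1} at k = N
rhs = dt**alpha[0] / (q[0] * (1 - alpha[0]) / gamma(2 - alpha[0])
                      + sum(qj * T**(alpha[0] - aj) * (1 - aj) / gamma(2 - aj)
                            for aj, qj in zip(alpha[1:], q[1:])))   # limit in property (4)
print(lhs, rhs)                                        # the two values should be close
```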

    In this section, we discuss the semi-discrete and fully discrete schemes for problem Eq (1.1) in the one-dimensional case. First, let $ t_n: = n \Delta t, \; n = 0, 1, \cdots, N, \; $ where $ \Delta t: = \frac{T}{N} $ (at least satisfying $ \Delta t < 1 $) is the time step. We denote by $ u^n\; $ an approximation to $ u(x, t_n) $ and, for convenience, write $ \; p^n, \; f, \; a, \; c\; $ for $ \; p(t_n), \; f(x), \; a(x), \; c(x) $, where $ f(x), \; p(t_n) $ are the source terms of Eq (1.1) and $ a(x), \; c(x) $ come from the definition of the symmetric uniformly elliptic operator $ L $. The time-fractional derivative is approximated by the $ L1 $ formula [41]:

    $ \begin{equation} \label{3.1} \nonumber \sum\limits_{j = 1}^{s}q_j\; \partial_{0+}^{\alpha_{j}}u(x, t_{n}) = \sum\limits_{k = 1}^{n}\omega_k(u(x, t_{n-k+1})-u(x, t_{n-k})) +r_{\Delta t}^n, \end{equation} $

    where $ \omega_k = \sum\limits_{j = 1}^{s}q_j\; \frac{{\Delta t}^{-\alpha_j}}{\Gamma(2-\alpha_j)} (k^{1-\alpha_j}-(k-1)^{1-\alpha_j}) $ and $ r_{\Delta t}^n $ is the truncation error with the estimate

    $ \begin{equation} \nonumber r_{\Delta t}^n\leq c_u {\Delta t}^{2-\alpha_1}+\cdots+c_u {\Delta t}^{2-\alpha_s} \leq c_1 {\Delta t}^{2-\alpha_1}. \end{equation} $

    Here $ c_1 $ is a constant depending on $ u, \; a_j $ and $ T $.
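
    As a quick illustration of this estimate, the following sketch (assumed test function $ u(t) = t^2 $, for which $ \partial_{0+}^{\alpha}t^2 = \frac{2}{\Gamma(3-\alpha)}t^{2-\alpha} $; the parameters are illustrative) applies the $ L1 $ formula at the final time and prints errors that should decay roughly like $ {\Delta t}^{2-\alpha_1} $.

```python
# Rough check (illustrative) of the L1 approximation of the multi-term Caputo
# derivative for u(t) = t^2, for which the exact value at t = T is known.
import numpy as np
from scipy.special import gamma

alpha, q = [0.9, 0.5], [1.0, 1.0]
T = 1.0

def l1_multi_term_at_T(u_vals, dt):
    """L1 approximation of sum_j q_j * D_t^{alpha_j} u at the last time node."""
    N = len(u_vals) - 1
    k = np.arange(1, N + 1)
    omega = sum(qj * dt**(-aj) / gamma(2 - aj) * (k**(1 - aj) - (k - 1)**(1 - aj))
                for aj, qj in zip(alpha, q))
    return np.sum(omega * (u_vals[N - k + 1] - u_vals[N - k]))

exact = sum(qj * 2.0 * T**(2 - aj) / gamma(3 - aj) for aj, qj in zip(alpha, q))
for N in (40, 80, 160, 320):
    t = np.linspace(0.0, T, N + 1)
    print(N, abs(l1_multi_term_at_T(t**2, T / N) - exact))  # decays roughly like dt^(2-alpha_1)
```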

    In this paper, we use the $ H^1 $-norm defined by

    $ \begin{equation} \nonumber \|\nu\|_{H^1(\Omega)} = \left(\|\nu\|^2_{L^2(\Omega)} +\frac{1}{\omega_1}\left\|\frac{d\nu}{dx}\right\|^2_{L^2(\Omega)}\right)^{\frac{1}{2}}. \end{equation} $

    where $ \omega_1 $ is defined in Lemma 2.8.

    In order to establish the complete semi-discrete problem, we define the error term $ r^n $ by

    $ \begin{equation} r^n: = \frac{1}{\omega_1}\left\lbrack \sum\limits_{j = 1}^{n}\omega_j(u(x, t_{n-j+1})-u(x, t_{n-j}))- \sum\limits_{j = 1}^{s}q_j\; \partial_{0+}^{\alpha_{j}}u(x, t_{n}) \right\rbrack. \end{equation} $ (3.1)

    Then we have

    $ \begin{equation} \vert r^n \vert = \frac{1}{\omega_1}\vert r_{\Delta t}^n \vert \leq c_2 {\Delta t}^{2-\alpha_1}, \end{equation} $ (3.2)

    where $ c_2 $ is a constant depending on $ u, \; a_j $ and $ T $.

    Using the first term on the right-hand side of Eq (3.1) as an approximation of $ \sum\limits_{j = 1}^{s}q_j\; \partial_{0+}^{\alpha_{j}}u(x, t_{n}) $ leads to the following finite difference scheme for Eq (1.1)

    $ \begin{equation} \sum\limits_{k = 1}^{n}\omega_k(u^{n-k+1}-u^{n-k})-\left(a\frac{\partial u^n}{\partial x}\right)_x-cu^n = fp^n. \end{equation} $ (3.3)

    Then we have

    $ \begin{equation} u^n-\frac{1}{\omega_1}\left(a\frac{\partial u^n}{\partial x}\right)_x- \frac{1}{\omega_1}cu^n = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}u^{n-k}+\frac{\omega_n}{\omega_1} u^0+ \frac{1}{\omega_1}fp^n. \end{equation} $ (3.4)

    Multiplying both sides by $ \nu\in H_0^1(\Omega) $, integrating over $ \; \Omega\; $ and applying Green's formula, we seek $ u^n\in H_0^1(\Omega) $ such that

    $ \begin{equation} \begin{aligned} (u^n, \nu)+\frac{1}{\omega_1}\left(a\frac{\partial u^n}{\partial x}, \frac{\partial \nu}{\partial x}\right)-\frac{1}{\omega_1}(cu^n, \nu) & = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}(u^{n-k}, \nu)\\ &+\frac{\omega_n}{\omega_1} (u^0, \nu) +\frac{1}{\omega_1}(fp^n, \nu). \end{aligned} \end{equation} $ (3.5)

    For convenience and without loss of generality, we consider the case $ a_{ij}(x)\equiv1, \; c(x) = 0 $. Stability of the scheme Eq (3.5) is given in the following result.

    Theorem 3.1. The semi-discretized problem Eq (3.5) is unconditionally stable in the sense that for all $ \Delta t > 0 $ there holds

    $ \begin{equation} \nonumber \Arrowvert u^n\Arrowvert_{H^1(\Omega)}\leq \Arrowvert u^0\Arrowvert_{L^2(\Omega)} +\frac{1}{\omega_1}\sum\limits_{k = 1}^{n}p^k\Arrowvert f\Arrowvert_{L^2(\Omega)}. \end{equation} $

    Proof. We prove this theorem by induction. When $ n = 1 $, scheme Eq (3.5) gives

    $ \begin{equation} \nonumber (u^1, \nu)+\frac{1}{\omega_1}\left(\frac{\partial u^1}{\partial x} , \frac{\partial \nu}{\partial x}\right) = (u^0, \nu) +\frac{1}{\omega_1}(\sum\limits_{k = 1}^{1}p^1f, \nu). \end{equation} $

    Taking $ \nu = u^1 $ and using the Schwarz inequality, we have

    $ \begin{equation} \Arrowvert u^1\Arrowvert_{H^1(\Omega)}\leq \Arrowvert u^0\Arrowvert_{L^2(\Omega)} +\frac{1}{\omega_1}\sum\limits_{k = 1}^{1}p^k\Arrowvert f\Arrowvert_{L^2(\Omega)}. \end{equation} $ (3.6)

    Now assume that the following inequality holds:

    $ \begin{equation} \Arrowvert u^j\Arrowvert_{H^1(\Omega)}\leq \Arrowvert u^0\Arrowvert_{L^2(\Omega)} +\frac{1}{\omega_1}\sum\limits_{k = 1}^{j}p^k\Arrowvert f\Arrowvert_{L^2(\Omega)}, \quad j = 1, 2, \cdots, n-1. \end{equation} $ (3.7)

    We must prove that $ \Arrowvert u^n\Arrowvert_{H^1(\Omega)}\leq \Arrowvert u^0\Arrowvert_{L^2(\Omega)}+\frac{1}{\omega_1}\sum_{k = 1}^{n}p^k\Arrowvert f\Arrowvert_{L^2(\Omega)} $. Taking $ \nu = u^n $ in Eq (3.5) and using Eq (3.7), we have

    $ \begin{equation} \begin{aligned} \Arrowvert u^n\Arrowvert_{H^1(\Omega)} &\leq\sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}\Arrowvert u^{n-k}\Arrowvert_{L^2(\Omega)}+\frac{\omega_n}{\omega_1}\Arrowvert u^0\Arrowvert_{L^2(\Omega)} +\frac{1}{\omega_1}\Arrowvert f\Arrowvert_{L^2(\Omega)}p^n\\ &\leq\left\lbrack\sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}+\frac{\omega_n}{\omega_1}\right\rbrack \left\lbrack\Arrowvert u^0\Arrowvert_{L^2(\Omega)} +\frac{1}{\omega_1}\sum\limits_{k = 1}^{n-1}p^k\Arrowvert f\Arrowvert_{L^2(\Omega)}\right\rbrack+\frac{1}{\omega_1}p^n\Arrowvert f\Arrowvert_{L^2(\Omega)}. \end{aligned} \end{equation} $ (3.8)

    Hence, we have $ \Arrowvert u^n\Arrowvert_{H^1(\Omega)}\leq \Arrowvert u^0\Arrowvert_{L^2(\Omega)}+\frac{1}{\omega_1}\sum_{k = 1}^{n}p^k\Arrowvert f\Arrowvert_{L^2(\Omega)} $.

    Now, we give the convergence estimate between the exact solution and the solution of the semi-discretized problem.

    Theorem 3.2. Let $ u(x, t) $ be the exact solution of Eq (1.1), $ \{u^n\}_{n = 0}^{N} $ be the numerical solution of the semi-discrete scheme Eq (3.5), and $ u(x, t_0) = u^0 $. Then the following error estimate holds:

    $ \begin{equation} \nonumber \Arrowvert u(x, t_n)-u^n\Arrowvert_{H^1(\Omega)} \leq \frac{c_2 \omega_1 T^{\alpha_1} {\Delta t}^{2-\alpha_1}}{\frac{q_1(1-\alpha_1)}{\Gamma(2-\alpha_1)}+\sum\nolimits_{j = 2}^{s}\frac{q_jT^{\alpha_1-\alpha_j}(1-\alpha_j)}{\Gamma(2-\alpha_j)}}, \end{equation} $

    where $ c_2 $ is a constant depending on $ u, \; a_j $ and $ T $.

    Proof. We first verify the following estimate:

    $ \begin{equation} \Arrowvert u(x, t_j)-u^j \Arrowvert_{H^1(\Omega)} \leq c_2 \omega_1 \omega_{j}^{-1}{\Delta t}^{2-\alpha_1}, \quad j = 1, 2, \cdots, N. \end{equation} $ (3.9)

    We prove it by mathematical induction. Let $ \hat{e}^n = u(x, t_n)-u^n $. For $ j = 1 $, combining Eq (1.1), Eq (3.5) and Eq (3.1), we obtain the error equation

    $ \begin{equation} (\hat{e}^1, \nu)+\frac{1}{\omega_1}\left(\frac{\partial \hat{e}^1}{\partial x}, \frac{\partial \nu}{\partial x}\right) = (\hat{e}^0, \nu)+(r^1, \nu) \quad \nu\in H^1_0(\Omega). \end{equation} $ (3.10)

    Taking $ \nu = \hat{e}^1 $, we have

    $ \begin{equation} \nonumber \Arrowvert u(x, t_1)-u^1\Arrowvert_{H^1(\Omega)} \leq c_2 \omega_1 \omega_1^{-1}{\Delta t}^{2-\alpha_1}. \end{equation} $

    Hence, Eq (3.9) is proven for the case $ j = 1 $. Now suppose the inequality Eq (3.9) holds for all $ j = 1, 2, \cdots, n-1 $. Then we need to prove that it also holds for $ j = n $.

    Similarly to the case $ j = 1 $, we have, for all $ \nu\in H^1_0(\Omega) $,

    $ \begin{equation} (\hat{e}^n, \nu)+\frac{1}{\omega_1}\left(\frac{\partial \hat{e}^n}{\partial x}, \frac{\partial \nu}{\partial x}\right) = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}(\hat{e}^{n-k}, \nu)+\frac{\omega_n}{\omega_1}(\hat{e}^{0}, \nu)+(r^n, \nu). \end{equation} $ (3.11)

    Taking $ \nu = \hat{e}^n $ and using Eq (3.9), we have

    $ \begin{equation} \nonumber \begin{aligned} \Arrowvert \hat{e}^n\Arrowvert_{H^1(\Omega)} &\leq \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1} \Arrowvert \hat{e}^{n-k} \Arrowvert_{L^2(\Omega)}+\frac{\omega_n}{\omega_1}\Arrowvert \hat{e}^{0} \Arrowvert_{L^2(\Omega)}+\Arrowvert r^n \Arrowvert_{L^2(\Omega)}\\ &\leq\left\lbrack\sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}+\frac{\omega_n}{\omega_1}\right\rbrack c_2 \omega_1 \omega_n^{-1} {\Delta t}^{2-\alpha_1}. \end{aligned} \end{equation} $

    Then we obtain

    $ \begin{equation} \nonumber \Arrowvert \hat{e}^n\Arrowvert_{H^1(\Omega)}\leq c_2 \omega_1 \omega_n^{-1} {\Delta t}^{2-\alpha_1}. \end{equation} $

    The estimate Eq (3.9) is proved. Noticing that $ \; \lim_{k\rightarrow \infty}k\Delta t = T\; $ clearly holds, we have

    $ \begin{equation} \begin{aligned} \Arrowvert u(x, t_n)-u^n \Arrowvert_{H^1(\Omega)} &\leq c_2 \omega_1 \omega_{n}^{-1}{\Delta t}^{2-\alpha_1} = c_2 \omega_1 n^{-\alpha_1} \omega_n^{-1}{\Delta t}^{-\alpha_1} T^{\alpha_1} {\Delta t}^{2-\alpha_1}.\\ \end{aligned} \end{equation} $ (3.12)

    Then by Eq (3.12) and Lemma 2.8, we deduce that for sufficiently large $ n $, the assertion of the theorem is valid.

    First, we set

    $ \begin{equation} \nonumber V^{M} = \left\lbrace \nu \in H^1_0(\Omega)\vert \nu\in \mathbb{P}^{M}(\Omega) \right\rbrace, \end{equation} $

    as the space of polynomials of degree $ \leq M $ with respect to $ x $. Let $ P $ be the orthogonal projection operator from $ H_0^1(\Omega) $ into $ V^{M} $. Then for all $ \psi\in H_0^1(\Omega) $, we have $ P\psi \in V^{M} $. Further, for all $ \nu_M\in V^{M} $,

    $ \begin{equation} ({P}\psi, \nu_M)+\frac{1}{\omega_1}\left(\frac{\partial {P}\psi }{\partial x}, \frac{\partial \nu_M}{\partial x}\right) = (\psi, \nu_M)+\frac{1}{\omega_1}\left(\frac{\partial \psi }{\partial x}, \frac{\partial \nu_M}{\partial x}\right). \end{equation} $ (3.13)

    The orthogonal projection operator satisfies the inequality [42]

    $ \begin{equation} \|\psi- {P}\psi\|_{H^1(\Omega)}\leq c M^{1-m}\| \psi\|_{H^m(\Omega)}, \end{equation} $ (3.14)

    where $ \psi\in H^m(\Omega)\cap H^1_0(\Omega), \; m\geq 1 $.

    Now, we consider the Galerkin scheme as follows: find $ u_M^n \in V^{M} $, such that for all $ \nu_M \in V^{M} $

    $ \begin{equation} (u^n_M, \nu_M)+\frac{1}{\omega_1}\left(\frac{\partial u^n_M}{\partial x}, \frac{\partial \nu_M}{\partial x}\right) = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}(u^{n-k}_M, \nu_M)+\frac{\omega_n}{\omega_1}(u^0_M, \nu_M) +\frac{p^n}{\omega_1}(f_M, \nu_M). \end{equation} $ (3.15)

    Theorem 3.3. Let $ u(x, t)\in H^1((0, T), H^m(\Omega)\cap H^1_0(\Omega)), \; m > 1 $ be the exact solution of Eq (1.1), $ \{u^n_M\}_{n = 0}^{N} $ be the numerical solution of the fully discrete scheme Eq (3.15), and $ u_M^0 = \text{P} u^0 $. Then the following error estimate holds:

    $ \begin{equation} \nonumber \Arrowvert u(x, t_n)-u^n_M\Arrowvert_{H^1(\Omega)} \leq \frac{\omega_1 T^{\alpha_1} (c_2 \Delta t^{2-\alpha_1}+cM^{1-m}\|u\|_{L^{\infty}(H^m)})} {\frac{q_1(1-\alpha_1)}{\Gamma(2-\alpha_1)}+\sum\nolimits_{j = 2}^{s}\frac{q_jT^{\alpha_1-\alpha_j}(1-\alpha_j)}{\Gamma(2-\alpha_j)}}, \end{equation} $

    where $ c_2 $ is a constant depending on $ u, \; a_j, \; T $, and $ \|u\|_{L^{\infty}(H^m)}: = sup_{t\in(0, T)}\|u(x, t)\|_{H^{m}(\Omega)} $.

    Proof. From Eq (1.1) and Eq (3.1), we have, for all $ \; \nu\in H^1_0(\Omega) $,

    $ \begin{equation} \begin{aligned} (u(x, t_n), \nu)+\frac{1}{\omega_1}\left(\frac{\partial u(x, t_n)}{\partial x}, \frac{\partial \nu}{\partial x}\right) & = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}(u(x, t_{n-k}), \nu)\\ &+\frac{\omega_n}{\omega_1}(u(x, t_0), \nu)+(r^n, \nu) +\frac{p^n}{\omega_1}(f, \nu). \end{aligned} \end{equation} $ (3.16)

    By projecting $ u(x, t_n) $ onto $ Pu(x, t_n)\in V^{M} $ and applying Eq (3.13), we obtain for all $ \nu_M\in V^{M} $

    $ \begin{equation} \begin{aligned} ({P}u(x, t_n), \nu_M)+\frac{1}{\omega_1}\left(\frac{\partial {P}u(x, t_n) }{\partial x}, \frac{\partial \nu_M}{\partial x}\right) & = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}(u(x, t_{n-k}), \nu_M)\\ &+\frac{\omega_n}{\omega_1}(u(x, t_0), \nu_M)+(r^n, \nu_M) +\frac{p^n}{\omega_1}(f, \nu_M). \end{aligned} \end{equation} $ (3.17)

    Let $ \hat{\epsilon}^n_M = Pu(x, t_n)-u_M^n $ and $ \epsilon^n_M = u(x, t_n)-u_M^n $. Subtracting Eq (3.15) from Eq (3.17), we have

    $ \begin{equation} (\hat{\epsilon}^n_M, \nu_M)+\frac{1}{\omega_1}\left(\frac{\partial \hat{\epsilon}^n_M }{\partial x}, \frac{\partial \nu_M}{\partial x}\right) = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}(\epsilon^{n-k}_M, \nu_M) +\frac{\omega_n}{\omega_1}(\epsilon^0_M, \nu_M)+(r^n, \nu_M) \end{equation} $ (3.18)

    Taking $ \nu_M = \hat{\epsilon}^n_M $ in the above equation and using the triangle inequality $ \|\epsilon^n_M\|_{H^1(\Omega)}\leq \|\hat{\epsilon}^n_M\|_{H^1(\Omega)}+\|u(x, t_n)-Pu(x, t_n)\|_{H^1(\Omega)} $, we obtain

    $ \begin{equation} \begin{aligned} \|\epsilon^n_M\|_{H^1(\Omega)}&\leq \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}\| \epsilon^{n-k}_M\|_{H^1(\Omega)} +\|r^n\|_{H^1(\Omega)}+ \|u(x, t_n)- {P}u(x, t_n)\|_{H^1(\Omega)}\\ &\leq \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}\| \epsilon^{n-k}_M\|_{H^1(\Omega)} +c_2 \Delta t^{2-\alpha_1}+cM^{1-m}\|u(x, t_n)\|_{H^m(\Omega)}. \end{aligned} \end{equation} $ (3.19)

    Here we use mathematical induction. By Eq (3.19), the case $ n = 1 $ is easily verified:

    $ \begin{equation} \|\epsilon^1_M\|_{H^1(\Omega)}\leq \omega_1\omega_1^{-1}(c_2 \Delta t^{2-\alpha_1}+cM^{1-m}\|u(x, t_1)\|_{H^m(\Omega)}) \end{equation} $ (3.20)

    Suppose we have proven

    $ \begin{equation} \|\epsilon^j_M\|_{H^1(\Omega)}\leq \omega_1\omega_{j}^{-1}(c_2 \Delta t^{2-\alpha_1}+cM^{1-m}\|u(x, t_{j})\|_{H^m(\Omega)}) \quad j = 1, 2, \cdots, n-1. \end{equation} $ (3.21)

    Then we need to prove that $ \|\epsilon^n_M\|_{H^1(\Omega)}\leq\omega_1\omega_n^{-1}(c_2 \Delta t^{2-\alpha_1}+cM^{1-m}\|u(x, t_n)\|_{H^m(\Omega)}) $. By Eq (3.19) and Eq (3.21), we have

    $ \begin{equation} \|\epsilon^n_M\|_{H^1(\Omega)}\leq \left\lbrack\sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}+\frac{\omega_n}{\omega_1}\right\rbrack \omega_1\omega_n^{-1} \left\lbrack c_2 \Delta t^{2-\alpha_1}+cM^{1-m} \|u(x, t_n)\|_{H^m(\Omega)} \right\rbrack. \end{equation} $ (3.22)

    Finally, by $ \; \lim_{k\rightarrow \infty}k\Delta t = T $ and the last inequality, we have

    $ \begin{equation} \begin{aligned} \Arrowvert u(x, t_n)-u^n_M\Arrowvert_{H^1(\Omega)} & \leq {\omega_1 \omega_n^{-1} n^{-\alpha_1} \Delta t^{-\alpha_1} T^{\alpha_1} (c_2 \Delta t^{2-\alpha_1}+cM^{1-m}\|u\|_{L^{\infty}(H^m)})}.\\ \end{aligned} \end{equation} $ (3.23)

    Then by Eq (3.23) and Lemma 2.8, we deduce that for sufficiently large $ n $, the assertion of the theorem is valid.

    Now we give error estimates for semi-discrete and full discrete problems in the following result.

    Theorem 3.4. Let $ \{u^n\}_{n = 0}^{N}\subset H^m(\Omega) \cap H^{1}_0(\Omega), \; m > 1 $, be the solution of the semi-discrete problem Eq (3.5), $ \{u^n_M\}_{n = 0}^{N} $ be the solution of the fully discrete problem Eq (3.15), and $ u_M^0 = \text{P} u^0 $. Then we have

    $ \begin{equation} \nonumber \begin{aligned} \Arrowvert u^n-u^n_M\Arrowvert_{H^1(\Omega)} &\leq \frac{c\omega_1 T^{\alpha_1} M^{1-m} \max \limits_{1\leq j\leq n}\|u^j\|_{H^m(\Omega)}} {\frac{q_1(1-\alpha_1)}{\Gamma(2-\alpha_1)}+\sum\nolimits_{j = 2}^{s}\frac{q_jT^{\alpha_1-\alpha_j}(1-\alpha_j)}{\Gamma(2-\alpha_j)}}, \end{aligned} \end{equation} $

    Proof. According to the definition of $ P $, for the solution $ u^{n} $ of the semi-discrete problem, we have

    $ \begin{equation} ({P}u^n, \nu_M)+\frac{1}{\omega_1}\left(\frac{\partial {P}u^n }{\partial x}, \frac{\partial \nu_M}{\partial x}\right) = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}(u^{n-k}, \nu_M) +\frac{\omega_n}{\omega_1}(u^0, \nu_M)+\frac{p^n}{\omega_1}(f, \nu_M). \end{equation} $ (3.24)

    Let $ \hat{e}^n_M = Pu^n-u_M^n $ and $ e^n_M = u^n-u_M^n $. Subtracting Eq (3.15) from Eq (3.24), we have

    $ \begin{equation} (\hat{e}^n_M, \nu_M)+\frac{1}{\omega_1}(\frac{\partial \hat{e}^n_M }{\partial x}, \frac{\partial \nu_M}{\partial x}) = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}(e^{n-k}_M, \nu_M) +\frac{\omega_n}{\omega_1}(e^0_M, \nu_M). \end{equation} $ (3.25)

    Taking $ \nu_M = \hat{e}^n_M $ in the above equation and using the triangle inequality $ \|e^n_M\|_{H^1(\Omega)}\leq \|\hat{e}^n_M\|_{H^1(\Omega)}+\|u^n-Pu^n\|_{H^1(\Omega)} $, we obtain

    $ \begin{equation} \begin{aligned} \|e^n_M\|_{H^1(\Omega)}&\leq \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}\| e^{n-k}_M\|_{H^1(\Omega)} + \|u^n-{P}u^n\|_{H^1(\Omega)}\\ &\leq \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}\| e^{n-k}_M\|_{H^1(\Omega)} +cM^{1-m}\|u^n\|_{H^m(\Omega)}. \end{aligned} \end{equation} $ (3.26)

    Here we use mathematical induction as in Theorem 3.3 and the fact that $ \; \lim_{k\rightarrow \infty}k\Delta t = T $. We obtain the following result:

    $ \begin{equation} \begin{aligned} \Arrowvert u^n-u^n_M \Arrowvert_{H^1(\Omega)} & \leq {\omega_1 \omega_n^{-1} n^{-\alpha_1} \Delta t^{-\alpha_1}T^{\alpha_1} cM^{1-m}\max \limits_{1\leq j\leq n} \|u^j\|_{H^m(\Omega)}}. \end{aligned} \end{equation} $ (3.27)

    Then by Eq (3.27) and Lemma 2.8, we deduce that for sufficiently large $ n $, the assertion of the theorem is valid.

    In this section, we discuss the uniqueness of solution and the ill-posed analysis of the unknown source identification problem.

    Denote the eigenvalues of the operator $ -L $ by $ \lambda_n $ and the corresponding eigenfunctions by $ \varphi_{n}\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega) $; then $ L\varphi_{n} = -\lambda_n\varphi_{n} $. Without loss of generality, suppose the eigenvalues satisfy $ 0 < \lambda_1\leq\lambda_2\leq\cdots\leq\lambda_n\leq\cdots, \; \lim_{n\rightarrow \infty}\lambda_n = +\infty $; then $ \left\{\varphi_{n}\right\}_{n = 1}^{\infty} $ constitutes an orthonormal basis of $ L^2(\Omega) $.

    Definition 4.1. ([43]) For any $ \gamma > 0 $, define

    $ \begin{equation} \nonumber D((-L)^\gamma) = \left\{\psi\in L^{2}(\Omega);\sum\limits^{\infty}_{n = 1}\lambda_{n}^{2\gamma}|(\psi, \varphi_{n})|^{2} < \infty\right\}, \end{equation} $

    where $ (\cdot, \cdot) $ is the inner product in $ L^{2}(\Omega) $, and define its norm

    $ \begin{equation} \nonumber \|\psi\|_{D((-L)^\gamma)} = \left\{\sum\limits^{\infty}_{n = 1}\lambda_{n}^{2\gamma}|(\psi, \varphi_{n})|^{2}\right\}^{\frac{1}{2}}. \end{equation} $

    Lemma 4.2. ([34]) For fixed $ \mathit{\boldsymbol{\alpha}}\in \mathcal{A}, \; \mathit{\boldsymbol{q}}\in \mathcal{Q} $, let $ \; p(t)\in L^d(0, T), \; f(x)\in D((-L)^\gamma)\; $ with some $ d\in[1, \infty] $ and $ \gamma\in[0, 1] $. Then Eq (1.1) admits a unique solution $ u(x, t) $ given by

    $ \begin{equation} u(x, t) = \sum\limits_{n = 1}^\infty f_n Q_n(t)\varphi_n(x), \end{equation} $ (4.1)

    where $ Q_n(t) = \int_0^t p(s)(t-s)^{\alpha_1-1}E^{(n)}_{\mathit{\boldsymbol{\alpha}}', \alpha_1}(t-s)ds $, and $ f_n = (f, \varphi_n) $.

    From Lemma 4.2, if the source function satisfies $ f(x)p(t)\in L^{\infty}(0, T; L^2(\Omega)) $, then the direct problem Eq (1.1) has a unique weak solution $ u\in L^{2}((0, T); H^{2}(\Omega)\cap H^{1}_{0}(\Omega)) $, and the formal solution of Eq (1.1) can be expressed by

    $ \begin{equation} u(x, t) = \sum\limits_{n = 1}^{\infty}f_n\int_{0}^{t}p(s)(t-s)^{\alpha_1-1}E^{(n)}_{\mathit{\boldsymbol{\alpha}}', \alpha_1}(t-s)ds\varphi_{n}(x). \end{equation} $ (4.2)

    Applying $ u(x, T) = g(x) $, we have

    $ \begin{equation} \sum\limits_{n = 1}^{\infty}g_n\varphi_{n}(x) = g(x) = \sum\limits_{n = 1}^{\infty}f_nQ_n(T)\varphi_{n}(x), \end{equation} $ (4.3)

    where $ \; g_n = (g, \varphi_{n}) $ and $ Q_n(T) = \int_{0}^{T}p(s)(T-s)^{\alpha_1-1}E^{(n)}_{\mathit{\boldsymbol{\alpha}}', \alpha_1}(T-s)ds $.

    Theorem 4.3. If $ \; p(t)\in C[0, T]\; $ satisfying $ \; p(t)\geq p_0 > 0\; $ for all $ \; t\in[0, T]\; $, then the solution $ u(x, t), \; f(x) $ of problem Eq (1.1) is unique.

    Proof. By Lemma 2.3, we have $ E^{(n)}_{\mathit{\boldsymbol{\alpha}}', \alpha_1}(t) > 0 $, for $ t > 0 $. From Lemma 2.2 and Proposition 2.5, we know

    $ \begin{equation} Q_n(T)\geq p_0\int_{0}^{T}(T-s)^{\alpha_1-1}E^{(n)}_{\mathit{\boldsymbol{\alpha}}', \alpha_1}(T-s)ds = p_0T^{\alpha_1}E^{(n)}_{\mathit{\boldsymbol{\alpha}}', 1+\alpha_1}(T) > 0. \end{equation} $ (4.4)

    Hence, we know if $ g(x) = 0 $, we have $ f(x) = 0 $, and by Eq (4.2) we know $ u(x, t) = 0 $. The proof is completed.

    In the following, we show that the inverse source problem Eq (1.1) is ill-posed. Without loss of generality, we take the final data $ \; g_k(x) = \frac{\varphi_{k}(x)}{\sqrt{\lambda_k}}\; $ in Eq (1.1), so that $ \|g_k\| = \frac{1}{\sqrt{\lambda_k}}\rightarrow 0 $ as $ k\rightarrow \infty $. The corresponding source terms are $ f_k(x) = \frac{\varphi_{k}(x)}{Q_k(T)\sqrt{\lambda_k}} $, and we have $ \|f_k\| = \frac{1}{Q_k(T)\sqrt{\lambda_k}} $. By Lemma 2.7, we have

    $ \begin{equation} Q_k(T)\leq\|p\|_{C[0, T]}T^{\alpha_1}E^{(k)}_{\mathit{\boldsymbol{\alpha}}', 1+\alpha_1}(T)\leq\frac{\|p\|_{C[0, T]}}{\lambda_k}. \end{equation} $ (4.5)

    Hence $ \; \|f_k\|\geq\frac{\sqrt{\lambda_k}}{\|p\|_{C[0, T]}}\rightarrow \infty\; $ as $ \; k\rightarrow \infty $. Therefore the inverse source problem Eq (1.1) is ill-posed.

    For simplicity of presentation, we restrict ourselves to the one-dimensional case.

    First, let $ L_{M}(x) $ denote the Legendre polynomial of degree $ M $, and let $ \xi_j, \; j = 0, 1, \cdots, M $, be the Legendre-Gauss-Lobatto points, which satisfy $ (1-x^2)L'_{M}(x) = 0 $. We consider the following equation

    $ \begin{equation} \left\{\begin{aligned} &(u^n_M, \nu_M)+\frac{1}{\omega_1}\left(a\frac{\partial u^n_M}{\partial x}, \frac{\partial \nu_M}{\partial x}\right)-\frac{1}{\omega_1}(cu^n_M, \nu_M) = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}(u^{n-k}_M, \nu_M)\\ &+\frac{\omega_n}{\omega_1}(u^0_M, \nu_M) +\frac{p^n}{\omega_1}(f_M, \nu_M), \\ & (u_M^N, \nu_M) = (g_M, \nu_M). \end{aligned}\right. \end{equation} $ (5.1)

    We set

    $ \begin{equation} u_M^n(x) = u_M(x, n\Delta t) = \sum\limits_{i = 0}^{M}\delta_i^{n}h_i(x) , \end{equation} $ (5.2)

    Here, $ h_i $ is the Lagrange polynomial defined on $ \Omega $, i.e., $ h_i\in V^{M}, \; h_i(\xi_j) = \varepsilon_{ij}, $ where $ \varepsilon_{ij} $ is the Kronecker delta symbol. Since the first equation in Eq (1.1) satisfies the homogeneous Dirichlet boundary condition, we have $ \delta_0^{n} = \delta_{M}^{n} = 0 $. Taking $ \nu_M = h_j(x), \; j = 1, 2, \cdots, M-1, $ we obtain

    $ \begin{equation} \nonumber \left\{\begin{aligned} &\left(\sum\limits_{i = 1}^{M-1}\delta_i^{n}h_i, h_j\right)+\frac{1}{\omega_1}\left(a\sum\limits_{i = 1}^{M-1}\delta_i^{n}\frac{\partial h_i}{\partial x}, \frac{\partial h_j}{\partial x}\right)-\frac{1}{\omega_1}\left(c\sum\limits_{i = 1}^{M-1}\delta_i^{n}h_i , h_j\right) = \sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}\left(\sum\limits_{i = 1}^{M-1}\delta_i^{n-k}h_i , h_j\right)\\ &+\frac{\omega_n}{\omega_1}\left(\sum\limits_{i = 1}^{M-1}\delta_i^{0}h_i , h_j\right) +\frac{p^n}{\omega_1}(f_M, h_j)\\ & \left(\sum\limits_{i = 1}^{M-1}\delta_i^{N}h_i, h_j\right) = (g_M, h_j). \end{aligned}\right. \end{equation} $

    Denote

    $ \begin{equation} \nonumber F = ((f_M, h_1), \cdots, (f_M, h_{M-1}))^{T}, \quad G = ((g_M, h_1), \cdots, (g_M, h_{M-1}))^{T}, \end{equation} $

    and set

    $ \begin{equation} \nonumber \delta^{n} = (\delta_1^{n}, \cdots, \delta_{M-1}^{n})^{T}, \quad \end{equation} $

    From the above equations, we have

    $ \begin{equation} \left\{\begin{aligned} &K\delta^{n} = \frac{1}{\omega_1}p^{n}F +\sum\limits_{k = 1}^{n-1}\frac{\omega_k-\omega_{k+1}}{\omega_1} \hat{K}\delta^{n-k} \\ & \hat{K}\delta^{N} = G, \end{aligned}\right. \end{equation} $ (5.3)

    where $ i, j = 1, 2, \cdots, M-1 $, and

    $ \begin{equation} \begin{aligned} &(K)_{ij} = (h_i, h_j)+\frac{1}{\omega_1}(a\frac{\partial h_i}{\partial x}, \frac{\partial h_j}{\partial x})-\frac{1}{\omega_1}(ch_i, h_j), \quad &&(\hat{K})_{ij} = (h_i, h_j). \end{aligned} \end{equation} $ (5.4)

    Here, we use the Legendre Gauss-type quadratures to approximate Eq (5.4), and by the first equation in Eq (5.3) and Eq (5.2), we can solve the forward problem.
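
    To make the assembly and the time stepping concrete, here is a minimal Python sketch (not the authors' implementation; it assumes $ a(x)\equiv1, \; c(x) = 0 $, $ \Omega = (-1, 1) $, zero initial data, and illustrative $ f $ and $ p $) that builds the LGL nodes and weights, the matrices $ K $ and $ \hat{K} $ of Eq (5.4) by LGL quadrature, and runs the recursion in the first equation of Eq (5.3).

```python
# Minimal sketch (illustrative) of the forward Galerkin-spectral solver of Eq (5.3).
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.special import gamma

def lgl_nodes_weights(M):
    """Legendre-Gauss-Lobatto nodes (roots of (1-x^2)L_M'(x)) and weights."""
    LM = Legendre.basis(M)
    x = np.concatenate(([-1.0], np.real(LM.deriv().roots()), [1.0]))
    w = 2.0 / (M * (M + 1) * LM(x) ** 2)
    return x, w

def diff_matrix(x, M):
    """Spectral differentiation matrix for the Lagrange basis at the LGL nodes."""
    Lx = Legendre.basis(M)(x)
    D = np.zeros((M + 1, M + 1))
    for i in range(M + 1):
        for j in range(M + 1):
            if i != j:
                D[i, j] = Lx[i] / (Lx[j] * (x[i] - x[j]))
    D[0, 0] = -M * (M + 1) / 4.0
    D[M, M] = M * (M + 1) / 4.0
    return D

def forward_solve(M, N, T, alpha, q, f, p):
    dt = T / N
    x, w = lgl_nodes_weights(M)
    D = diff_matrix(x, M)
    k = np.arange(1, N + 1)
    omega = sum(qj * dt**(-aj) / gamma(2 - aj) * (k**(1 - aj) - (k - 1)**(1 - aj))
                for aj, qj in zip(alpha, q))
    mass = np.diag(w)                        # (h_i, h_j) is diagonal under LGL quadrature
    stiff = D.T @ np.diag(w) @ D             # (h_i', h_j'), exact for this degree
    idx = slice(1, M)                        # interior nodes (homogeneous Dirichlet BC)
    K = (mass + stiff / omega[0])[idx, idx]
    Khat = mass[idx, idx]
    F = w[idx] * f(x[idx])                   # F_j = (f, h_j) by LGL quadrature
    delta = np.zeros((N + 1, M - 1))         # delta[n]: nodal values at interior LGL points
    for n in range(1, N + 1):
        rhs = p(n * dt) / omega[0] * F
        for kk in range(1, n):               # history term; u^0 = 0 here
            rhs += (omega[kk - 1] - omega[kk]) / omega[0] * (Khat @ delta[n - kk])
        delta[n] = np.linalg.solve(K, rhs)
    return x, delta

# usage with assumed, Example-1-like data
x, delta = forward_solve(M=16, N=200, T=1.0,
                         alpha=[0.9, 0.8, 0.7], q=[1.0, 1.0, 1.0],
                         f=lambda x: np.sin(2 * np.pi * x),
                         p=lambda t: 1.0)
print(delta[-1][:5])
```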

    For the inverse problem, by Eq (5.3), we have

    $ \begin{equation} \left\{\begin{aligned} &\delta^{1} = K^{-1}\frac{1}{\omega_1}p^{1}F = A_1F &&\quad n = 1\\ &\delta^{2} = K^{-1}(\frac{1}{\omega_1}p^{2}F+(1-\frac{\omega_2}{\omega_1})\hat{K}\delta^{1}) = A_2F &&\quad n = 2\\ &\quad\vdots\\ & \delta^{N-1} = A_{N-1}F &&\quad n = N-1. \end{aligned}\right. \end{equation} $ (5.5)

    For $ n = N $, we have

    $ \begin{equation} \left\{\begin{aligned} &K\delta^{N} = \frac{1}{\omega_1}p^{N}F +\sum\limits_{k = 1}^{N-1}\frac{\omega_k-\omega_{k+1}}{\omega_1} \hat{K}\delta^{N-k} \\ & \hat{K}\delta^{N} = G, \end{aligned}\right. \end{equation} $ (5.6)

    By Eq (5.5) and Eq (5.6), we have

    $ \begin{equation} AF = G. \end{equation} $ (5.7)

    Then, by solving the linear system Eq (5.7), we obtain $ F $. Further, we recover $ f_M $ as an approximation of the source term by using a suitable integration rule.
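
    The algebra of Eqs (5.5)-(5.13) can be exercised on a small self-contained toy problem. In the sketch below (illustrative: $ K $ and $ \hat{K} $ are random stand-in matrices rather than the Galerkin matrices, and $ p $, $ \mathit{\boldsymbol{\alpha}} $, $ \mathit{\boldsymbol{q}} $ are assumed values), the terminal data $ G $ is generated by the forward recursion and the source vector $ F $ is then recovered by assembling $ A $ as in Eq (5.13) and solving $ AF = G $.

```python
# Self-contained toy check (illustrative) of the inverse recursion Eqs (5.5)-(5.13).
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)
m, N, T = 6, 50, 1.0
dt = T / N
alpha, q = [0.8, 0.4], [1.0, 1.0]
p = lambda t: 1.0 + t                       # assumed time factor with p(t) > 0

k = np.arange(1, N + 1)
omega = sum(qj * dt**(-aj) / gamma(2 - aj) * (k**(1 - aj) - (k - 1)**(1 - aj))
            for aj, qj in zip(alpha, q))

B = rng.standard_normal((m, m))
K = B @ B.T + m * np.eye(m)                 # stand-in for the matrix K of Eq (5.4)
Khat = np.diag(rng.uniform(0.5, 1.5, m))    # stand-in for the diagonal matrix Khat

# forward recursion, first equation of Eq (5.3), with delta^0 = 0
F_true = rng.standard_normal(m)
delta = np.zeros((N + 1, m))
for n in range(1, N + 1):
    rhs = p(n * dt) / omega[0] * F_true
    for kk in range(1, n):
        rhs += (omega[kk - 1] - omega[kk]) / omega[0] * (Khat @ delta[n - kk])
    delta[n] = np.linalg.solve(K, rhs)
G = Khat @ delta[N]                         # terminal data, second equation of Eq (5.6)

# matrices A_n of Eq (5.12): delta^n = A_n F, for n = 1,...,N-1
Kinv = np.linalg.inv(K)
A_n = [None] * N
for n in range(1, N):
    An = p(n * dt) / omega[0] * Kinv
    for kk in range(1, n):
        An += (omega[kk - 1] - omega[kk]) / omega[0] * (Kinv @ Khat @ A_n[n - kk])
    A_n[n] = An

# coefficient matrix A of Eq (5.13) and recovery of F from AF = G
A = p(N * dt) / omega[0] * (Khat @ Kinv)
for kk in range(1, N):
    A += (omega[N - kk - 1] - omega[N - kk]) / omega[0] * (Khat @ Kinv @ Khat @ A_n[kk])
print(np.max(np.abs(np.linalg.solve(A, G) - F_true)))   # should be small (noise-free data)
```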

    In order to analyze the ill-posedness of Eq (5.7), an upper bound of $ \|A^{-1}\|_2 $ is given below.

    Theorem 5.1. Let the orthogonal basis functions $ \{h_j\}_{j = 1}^{M-1} $ satisfy $ h_j(x)\in H^1(\Omega) $. Then the inverse of the coefficient matrix $ A $ in Eq (5.7) satisfies

    $ \begin{equation} \|A^{-1}\|_2\leq C(M-1)^2\frac{\max\limits_{1\leq j\leq M-1}\|h_j\|^2_{H^1(\Omega)}}{\min\limits_{1\leq j\leq M-1}\|h_j\|^2_{H^1(\Omega)}} \end{equation} $ (5.8)

    Proof. From the first equation of Eq (5.6), we can get

    $ \begin{equation} \delta^{N} = \frac{p^{N}}{\omega_1}K^{-1}F+\sum\limits_{k = 1}^{N-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}K^{-1}\hat{K}\delta^{N-k}. \end{equation} $ (5.9)

    Combining the second equation of Eq (5.6), we have

    $ \begin{equation} \frac{p^{N}}{\omega_1}\hat{K}K^{-1}F+ \sum\limits_{k = 1}^{N-1}\frac{\omega_k-\omega_{k+1}}{\omega_1}\hat{K}K^{-1}\hat{K}\delta^{N-k} = G. \end{equation} $ (5.10)

    Let $ k^{'} = N-k $ (still denoted by $ k $); then we have

    $ \begin{equation} \frac{p^{N}}{\omega_1}\hat{K}K^{-1}F+ \sum\limits_{k = 1}^{N-1}\frac{\omega_{N-k}-\omega_{N-k+1}}{\omega_1}\hat{K}K^{-1}\hat{K}\delta^{k} = G. \end{equation} $ (5.11)

    On the other hand, by Eq (5.5) we have

    $ \begin{equation} \delta^{k} = (\frac{p^{k}}{\omega_1}K^{-1}+\sum\limits_{m = 1}^{k-1}\frac{\omega_m-\omega_{m+1}}{\omega_1}K^{-1}\hat{K}A_{k-m})F = :A_{k}F, \quad k = 1, 2, \cdots, N-1. \end{equation} $ (5.12)

    By substituting Eq (5.12) into Eq (5.11), the equation $ AF = G $ is obtained, where

    $ \begin{equation} A = \frac{p^{N}}{\omega_1}\hat{K}K^{-1}+\sum\limits_{k = 1}^{N-1}\frac{\omega_{N-k}-\omega_{N-k+1}}{\omega_1}\hat{K}K^{-1}\hat{K}A_k. \end{equation} $ (5.13)

    Since $ \omega_k $ is non-negative and decreasing with respect to $ k $, we have

    $ \begin{equation} \frac{\omega_{N-k}-\omega_{N-k+1}}{\omega_1} < 1. \end{equation} $ (5.14)

    The definition of $ A_k $ tells us that the second part of $ A $ is of higher order than the first part. Therefore, we only need to estimate the matrix

    $ \begin{equation} A\approx\widetilde{A}: = \frac{p^{N}}{\omega_1}\hat{K}K^{-1}. \end{equation} $ (5.15)

    Since $ F\approx\widetilde{A}^{-1}G $, we only have to estimate an upper bound of $ \|\widetilde{A}^{-1}\|_2 $. According to Eq (5.4), $ \hat{K} $ is a diagonal matrix, and

    $ \begin{equation} (K\hat{K}^{-1})_{ij} = \frac{(h_i, h_j)}{\|h_j\|^2}+ \frac{1}{\omega_1\|h_j\|^2}(a\frac{\partial h_i}{\partial x}, \frac{\partial h_j}{\partial x} ) -\frac{1}{\omega_1\|h_j\|^2}(ch_i, h_j). \end{equation} $ (5.16)

    Therefore, we have

    $ \begin{equation} \begin{aligned} \|\widetilde{A}^{-1}\|_2 & = \frac{\omega_1}{p^{N}} \|K\hat{K}^{-1}\|_2\leq \frac{\omega_1}{p^{N}}\|K\hat{K}^{-1}\|_F = \frac{\omega_1}{p^{N}}\left(\sum\limits_{i, j = 1}^{M-1}(K\hat{K}^{-1})_{ij}^2\right)^{\frac{1}{2}}\\ & = \frac{\omega_1}{p^{N}}\left(\sum\limits_{i, j = 1}^{M-1} \frac{1}{\|h_j\|^4}\left(\frac{(h_i, h_j)}{\|h_j\|^2}+\frac{1}{\omega_1}(a\frac{\partial h_i}{\partial x}, \frac{\partial h_j}{\partial x} )- \frac{1}{\omega_1}(ch_i, h_j) \right)^2\right)^{\frac{1}{2}}\\ &\leq \frac{\sqrt{3}\omega_1}{\min\limits_{1\leq j\leq M-1}\|h_j\|^2_{H^1(\Omega)}p^{N}} \left(\sum\limits_{i, j = 1}^{M-1}(h_i, h_j)^2+\frac{1}{\omega_1^2}(a\frac{\partial h_i}{\partial x}, \frac{\partial h_j}{\partial x} )^2+ \frac{1}{\omega_1^2}(ch_i, h_j)^2\right)^{\frac{1}{2}} \\ &\leq C(M-1)^2\frac{\max\limits_{1\leq j\leq M-1}\|h_j\|^2_{H^1(\Omega)}}{\min\limits_{1\leq j\leq M-1}\|h_j\|^2_{H^1(\Omega)}}, \end{aligned} \end{equation} $ (5.17)

    where $ C $ is dependent on $ a, \; c, \; M, \; \mathit{\boldsymbol{\alpha}}, \; q_j $.

    Remark 1. It can be seen from Theorem 5.1 that $ \|A^{-1}\|_2 $ is completely determined by the projection dimension $ M-1 $ of the spectral method and the maximum and minimum norms of the basis functions $ h_j(x) $. It can be seen from Eq (5.8) that as the projection dimension increases, the maximum in the numerator increases while the minimum in the denominator decreases. Therefore, the projection dimension $ M-1 $ can be used as a regularization parameter. When an appropriate $ M $ is selected, the minimum value of $ \|A^{-1}\|_2 $ can be guaranteed, thus reducing the ill-posedness of the inverse problem.

    Remark 2. Here we emphasize that the Galerkin spectral method is a projection method. In the projection method, as long as the appropriate dimension of the projection space is selected, the regularization effect can be effectively generated for the linear ill-posed problem, and it is no longer necessary to adopt other regularization techniques for the problem. This phenomenon is sometimes called self-regularization or regularization by projection. A series of descriptions and proofs of this phenomenon can be found in the literature [44].

    In order to compare the finite difference method and the Galerkin spectral method in the application to the multi-term time-fractional diffusion equation, we derive a scheme similar to that of reference [45].

    In the finite difference algorithm, we denote $ \; u_i^{n}\approx u(x_i, t_n) $, where $ \; x_i = i\Delta x, \; i = 0, 1, ..., C, \; $ and $ \; t_n = n\Delta t, \; n = 0, 1, ..., N $.

    The space has the following discrete form:

    $ \begin{equation} \nonumber Lu(x_i, t_{n})\approx\frac{1}{(\Delta x)^2}\left(a_{i+\frac{1}{2}}u_{i+1}^{n}-(a_{i+\frac{1}{2}}+a_{i-\frac{1}{2}})u_{i}^{n}+a_{i-\frac{1}{2}}u_{i-1}^{n}\right)+c(x_i)u_{i}^{n}, \end{equation} $

    for $ \; i = 1, ..., C-1, \; n = 1, ..., N, \; $ where $ \; a_{i+\frac{1}{2}} = a(x_{i+\frac{1}{2}})\; $ with $ \; x_{i+\frac{1}{2}} = (x_i+x_{i+1})/2 $. According to the initial and boundary conditions in Eq (1.1), we can get a numerical solution of the forward problem from the finite difference scheme

    $ \begin{equation} \begin{aligned} &\sum\limits_{k = 1}^{n}\omega_k(u_i^{n-k+1}-u_i^{n-k})\\ & = \frac{1}{(\Delta x)^2}\left(a_{i+\frac{1}{2}}u_{i+1}^{n}-(a_{i+\frac{1}{2}}+a_{i-\frac{1}{2}})u_{i}^{n}+a_{i-\frac{1}{2}}u_{i-1}^{n}\right)+c(x_i)u_{i}^{n}+f(x_i)p(t_{n}). \end{aligned} \end{equation} $ (5.18)

    Denote $ \; U^{n} = (u_1^{n}, u_2^{n}, ..., u_{C-1}^{n})^T, \; Y = (f(x_1), f(x_2), ..., f(x_{C-1}))^T; \; $ then the scheme Eq (5.18) leads to the following iterative scheme

    $ \begin{equation} \begin{aligned} &BU^1 = U^0+\frac{1}{\omega_1}p(t_1)Y, \\ &BU^{n} = c_1U^{n-1}+c_2U^{n-2}+...+c_{n-1}U^1+\frac{\omega_n}{\omega_1}U^0+\frac{1}{\omega_1}p(t_{n})Y, \end{aligned} \end{equation} $ (5.19)

    where $ c_n = (\omega_{n}-\omega_{n+1})/\omega_1 $ and $ B $ is a tridiagonal matrix given by $ B_{ii} = \frac{a_{i+\frac{1}{2}}+a_{i-\frac{1}{2}}}{(\Delta x)^2\omega_1}+1-\frac{c(x_i)}{\omega_1} $ for $ i = 1, 2, \cdots, C-1 $, $ B_{i, i-1} = -\frac{a_{i-\frac{1}{2}}}{(\Delta x)^2\omega_1} $ for $ i = 2, 3, \cdots, C-1 $, and $ B_{i, i+1} = -\frac{a_{i+\frac{1}{2}}}{(\Delta x)^2\omega_1} $ for $ i = 1, 2, \cdots, C-2. $
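
    A minimal implementation of the iteration Eq (5.19) might look as follows (illustrative assumptions: $ a(x)\equiv1, \; c(x) = 0 $, $ \Omega = (0, 1) $, zero initial data, and example $ f $ and $ p $).

```python
# Minimal sketch (illustrative) of the finite difference iteration Eq (5.19).
import numpy as np
from scipy.special import gamma

alpha, q = [0.9, 0.8, 0.7], [1.0, 1.0, 1.0]
C, N, T = 100, 200, 1.0
dx, dt = 1.0 / C, T / N
x = np.linspace(0.0, 1.0, C + 1)[1:-1]          # interior grid points x_1,...,x_{C-1}
f = lambda x: np.sin(2 * np.pi * x)
p = lambda t: 1.0

k = np.arange(1, N + 1)
omega = sum(qj * dt**(-aj) / gamma(2 - aj) * (k**(1 - aj) - (k - 1)**(1 - aj))
            for aj, qj in zip(alpha, q))
c_coef = (omega[:-1] - omega[1:]) / omega[0]    # c_k = (omega_k - omega_{k+1}) / omega_1

# tridiagonal matrix B with a = 1, c = 0
main = (2.0 / (dx**2 * omega[0]) + 1.0) * np.ones(C - 1)
off = -np.ones(C - 2) / (dx**2 * omega[0])
B = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

U = np.zeros((N + 1, C - 1))                    # U[n] approximates u(x_i, t_n); U[0] = 0
Y = f(x)
for n in range(1, N + 1):
    rhs = p(n * dt) / omega[0] * Y              # the omega_n/omega_1 * U^0 term vanishes
    for kk in range(1, n):
        rhs += c_coef[kk - 1] * U[n - kk]
    U[n] = np.linalg.solve(B, rhs)
print(U[-1][:5])
```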

    Inverse source problem based on finite differences. Similarly to Section 5.1, we can derive a linear system

    $ \begin{equation} ZY = D, \end{equation} $ (5.20)

    where $ D = (g(x_1), g(x_2), ..., g(x_{C-1}))^T \; $ and $ Z $ is a matrix.

    In this section, we first verify the stability and validity of the proposed numerical methods. Without loss of generality, let the final time be $ T = 1 $. The noisy data are generated by adding random perturbations, i.e.,

    $ \begin{equation} g^{\delta}(x) = g(x)+\varepsilon g(x)(2\, rand(size(g(x)))-1), \end{equation} $ (6.1)

    where $ \varepsilon $ is the relative noise level and $ rand(\cdot) $ generates random numbers uniformly distributed on $ [0, 1] $. The corresponding noise level is calculated by $ \delta = \|g-g^{\delta}\| $. To show the accuracy of the numerical solution, we compute the approximate $ L^2 $-norm error denoted by

    $ \begin{equation} e(f, \varepsilon) = \|f-f^{\delta}\|, \end{equation} $ (6.2)

    and the approximate $ L^2 $-norm relative error as

    $ \begin{equation} e_r(f, \varepsilon) = \|f-f^{\delta}\|/\|f\|, \end{equation} $ (6.3)

    where $ f^{\delta} $ is the reconstructed source term and $ f $ is the exact one.
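
    The noise model Eq (6.1) and the error measures Eqs (6.2) and (6.3) can be sketched compactly as follows (the grid, the test function and the dummy reconstruction are illustrative assumptions).

```python
# Sketch (illustrative) of the noise model Eq (6.1) and the errors Eqs (6.2)-(6.3).
import numpy as np

def add_noise(g, eps, rng=np.random.default_rng(0)):
    """g_delta = g + eps * g * (2*rand(size(g)) - 1), rand uniform on [0, 1]."""
    return g + eps * g * (2.0 * rng.random(g.shape) - 1.0)

def l2_norm(v, dx):
    return np.sqrt(dx * np.sum(v ** 2))

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
g = np.sin(np.pi * x)
g_delta = add_noise(g, eps=0.01)
print(l2_norm(g - g_delta, dx))                             # noise level delta, cf. Eq (1.5)

f_exact = np.sin(2 * np.pi * x)
f_rec = f_exact + 0.01                                      # dummy reconstruction, for illustration only
print(l2_norm(f_exact - f_rec, dx))                         # e(f, eps),   Eq (6.2)
print(l2_norm(f_exact - f_rec, dx) / l2_norm(f_exact, dx))  # e_r(f, eps), Eq (6.3)
```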

    In addition, for the convenience of writing, we will abbreviate the Galerkin spectral method as GSM and the finite difference method as FDM.

    Example 1. In this example, we first consider a three-term time-fractional diffusion equation with an exact solution. Let $ \Omega_T = (-1, 1)\times(0, T) $ and $ T = 1, \; q_j\equiv1. $ Take the source functions $ p(t) = \frac{2}{\Gamma(3-\alpha_1)}t^{2-\alpha_1}+ \frac{2}{\Gamma(3-\alpha_2)}t^{2-\alpha_2}+ \frac{2}{\Gamma(3-\alpha_3)}t^{2-\alpha_3}+4\pi^2t^2 $ and $ f(x) = \sin(2\pi x) $. Furthermore, we have the exact analytical solution

    $ \begin{equation} u(x, t) = t^2 \sin(2\pi x). \end{equation} $ (6.4)
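
    Indeed, a brief check using the known Caputo derivative $ \partial_{0+}^{\alpha}t^2 = \frac{2}{\Gamma(3-\alpha)}t^{2-\alpha} $ (and taking $ a(x)\equiv1, \; c(x) = 0 $, as in the preceding analysis) confirms that Eq (6.4) satisfies Eq (1.1) with the above data:

    $ \sum\limits_{j = 1}^{3}\partial_{0+}^{\alpha_j}\left(t^2\sin(2\pi x)\right)-\partial_x^2\left(t^2\sin(2\pi x)\right) = \sin(2\pi x)\left(\sum\limits_{j = 1}^{3}\frac{2}{\Gamma(3-\alpha_j)}t^{2-\alpha_j}+4\pi^2 t^2\right) = f(x)p(t). $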

    In the first three graphs of Figure 1, namely (a), (b) and (c), we show the errors in the $ L^2 $-norm and $ L^\infty $-norm for different time steps in the cases $ \mathit{\boldsymbol{\alpha_1}} = (0.9, 0.8, 0.7), \; \mathit{\boldsymbol{\alpha_2}} = (0.6, 0.5, 0.3) $, and $ \mathit{\boldsymbol{\alpha_3}} = (0.3, 0.2, 0.1) $. Here we fix the polynomial degree $ M = 24 $. In the same cases, graphs (d), (e) and (f) of Figure 1 show the errors in the $ L^2 $-norm and $ L^\infty $-norm for different polynomial degrees $ M $; here we take the time step $ \Delta t = 1e-4 $. From Figure 1, we find that the theoretical convergence accuracy (see Theorem 3.3) is in good agreement with the numerical results.

    Figure 1.  The result of the forward problem for Example 1.

    In Table 1 and Figure 2, we make a comparison with the FDM for the forward problem by using the scheme Eq (5.1). Clearly, we observe that the scheme Eq (5.1) has higher accuracy than the FDM on the forward problem. As a further step, we choose $ \Delta t = 1e-4 $ to obtain satisfactory results in both run time and accuracy.

    Table 1.  Error of two approximation methods of Example 1 for $ \; \alpha = (0.3, 0.2, 0.1)\; $.
    time step | method | $ L^2 $ error | $ L^\infty $ error | run time
    $ \Delta t=1e-1 $ | GSM | 0.002121690 | 2.121690e-04 | 88.254s
    $ \Delta t=1e-1 $ | FDM | 0.010039748 | 0.001419834 | 1.341s
    $ \Delta t=1e-2 $ | GSM | 4.382322e-05 | 4.382322e-06 | 90.043s
    $ \Delta t=1e-2 $ | FDM | 0.0085668512 | 0.001211535 | 10.076s
    $ \Delta t=1e-3 $ | GSM | 8.535546e-07 | 8.535546e-08 | 90.622s
    $ \Delta t=1e-3 $ | FDM | 0.008536392 | 0.001207228 | 96.931s
    $ \Delta t=1e-4 $ | GSM | 1.913459e-08 | 1.913459e-09 | 198.829s
    $ \Delta t=1e-4 $ | FDM | 0.008535798 | 0.001207144 | 1348.549s
    $ \Delta t=1e-5 $ | GSM | 1.007804e-08 | 1.007804e-9 | 10834.238s
    $ \Delta t=1e-5 $ | FDM | 0.008535787 | 0.001207142 | 33733.606s

    Figure 2.  Error of two approximation methods of Example 1 for $ \; \alpha = (0.3, 0.2, 0.1)\; $.

    We now turn to the inverse source problem. In order to avoid an 'inverse crime', we take $ M = 20, \Delta t = 1e-4 $ to solve the direct problem by using the GSM Eq (5.1), and take $ M = 20, \Delta t = 1e-3 $ to solve the inverse source problem by the scheme Eq (5.7). The related results are shown in (e), (f) of Figure 3 and Table 2. For the inverse source problem based on the FDM, we take $ C = 100, \Delta t = 5e-3 $. Numerical results for $ \mathit{\boldsymbol{\alpha_2}} = (0.3, 0.2, 0.1) $ with various noise levels $ \varepsilon = 5\%, 1\%, 0.5\%, 0.1\% $ are shown in (c), (d) of Figure 3.

    Figure 3.  Result of Example 1 with $ \mathit{\boldsymbol{\alpha}} = (0.3, 0.2, 0.1) $.
    Table 2.  Relative error of the GSM for Example 1 for different $ \; \alpha\; $ with $ \varepsilon = 5\%, 1\%, 0.5\%, 0.1\% $.
    $ \varepsilon $ | $ \mathit{\boldsymbol{\alpha}} $ | $ e_r(f, \varepsilon) $
    $ \varepsilon=5\% $ | $ (0.3, 0.2, 0.1) $ | 0.082339
    $ \varepsilon=5\% $ | $ (0.9, 0.8, 0.7) $ | 0.074294
    $ \varepsilon=5\% $ | $ (0.6, 0.5, 0.3) $ | 0.059221
    $ \varepsilon=1\% $ | $ (0.3, 0.2, 0.1) $ | 0.030306
    $ \varepsilon=1\% $ | $ (0.9, 0.6, 0.3) $ | 0.034862
    $ \varepsilon=1\% $ | $ (0.6, 0.5, 0.3) $ | 0.049156
    $ \varepsilon=0.5\% $ | $ (0.3, 0.2, 0.1) $ | 0.014814
    $ \varepsilon=0.5\% $ | $ (0.9, 0.8, 0.7) $ | 0.043858
    $ \varepsilon=0.5\% $ | $ (0.6, 0.5, 0.3) $ | 0.043187
    $ \varepsilon=0.1\% $ | $ (0.3, 0.2, 0.1) $ | 0.013495
    $ \varepsilon=0.1\% $ | $ (0.9, 0.8, 0.7) $ | 0.010624
    $ \varepsilon=0.1\% $ | $ (0.6, 0.5, 0.3) $ | 0.013953


    Example 2. In this example, let $ \Omega_T = (0, 1)\times(0, T), \; s = 3, \; q_j\equiv1, \; a(x) = 1, \; c(x) = 0 $. Take the exact source functions $ \; p(t) = 1\; $ and $ f(x) = x(x-0.4)(x-0.6)(x-0.8)(x-1) $. We first solve the direct problem with the GSM Eq (5.1) to obtain the additional data $ g(x) $, and then use the scheme Eq (5.7) to solve the inverse source problem. Numerical results for $ \mathit{\boldsymbol{\alpha_1}} = (0.9, 0.8, 0.7), \; \mathit{\boldsymbol{\alpha_2}} = (0.3, 0.2, 0.1) $, and $ \mathit{\boldsymbol{\alpha_3}} = (0.6, 0.5, 0.3) $ with noise levels $ \varepsilon = 1\%, 0.5\%, 0.1\%, 0.01\% $ are presented in panels (e) and (f) of Figure 4 and in Table 3. For the inverse source problem based on the FDM, we take $ C = 100, \Delta t = 5e-3 $. Numerical results for $ {\alpha_2} = (0.3, 0.2, 0.1) $ with the same noise levels are shown in panels (c) and (d) of Figure 4.

    Figure 4.  Result of Example 2 with $ \mathit{\boldsymbol{\alpha}} = (0.3, 0.2, 0.1) $.
    Table 3.  Relative error of the GSM for Example 2 for different $ \; \alpha\; $ with $ \varepsilon = 1\%, 0.5\%, 0.1\%, 0.01\% $.
    $ \varepsilon $ $ \mathit{\boldsymbol{\alpha}} $ $ e_r(f, \varepsilon) $
    $ \varepsilon=1\% $ $ (0.3, 0.2, 0.1) $ 0.085368
    $ (0.9, 0.8, 0.7) $ 0.092676
    $ (0.6, 0.5, 0.3) $ 0.088179
    $ \varepsilon=0.5\% $ $ (0.3, 0.2, 0.1) $ 0.076024
    $ (0.9, 0.8, 0.7) $ 0.082049
    $ (0.6, 0.5, 0.3) $ 0.079423
    $ \varepsilon=0.1\% $ $ (0.3, 0.2, 0.1) $ 0.073424
    $ (0.9, 0.8, 0.7) $ 0.0800324
    $ (0.6, 0.5, 0.3) $ 0.077136
    $ \varepsilon=0.01\% $ $ (0.3, 0.2, 0.1) $ 0.0652520
    $ (0.9, 0.8, 0.7) $ 0.066370
    $ (0.6, 0.5, 0.3) $ 0.066352


    Example 3. In this example, let $ \Omega_T = (0, 1)\times(0, T), \; s = 3, \; q_j\equiv1, \; a(x) = 1, \; c(x) = 0 $. We consider the continuous, piecewise smooth source function

    $ \begin{equation} f(x) = \left\{\begin{aligned} &2x, \quad && 0\leq x\leq0.5, \\ &-2x+2, \quad && 0.5\leq x < 1.\\ \end{aligned}\right. \end{equation} $ (6.5)

    We first solve the direct problem with the GSM Eq (5.1) to obtain the additional data $ g(x) $, and then use the scheme Eq (5.7) to solve the inverse source problem. Numerical results for $ \mathit{\boldsymbol{\alpha_1}} = (0.9, 0.8, 0.7), \; \mathit{\boldsymbol{\alpha_2}} = (0.3, 0.2, 0.1) $, and $ \mathit{\boldsymbol{\alpha_3}} = (0.6, 0.5, 0.3) $ with noise levels $ \varepsilon = 1\%, 0.5\%, 0.1\%, 0.01\% $ are presented in panels (e) and (f) of Figure 5 and in Table 4. For the inverse source problem based on the FDM, we take $ C = 100, \Delta t = 5e-3 $. Numerical results for $ {\alpha_2} = (0.3, 0.2, 0.1) $ with the same noise levels are shown in panels (c) and (d) of Figure 5.
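    For reference, the hat-shaped source of Eq (6.5) can be evaluated on a spatial grid with a few lines of NumPy (a minimal sketch; the grid is arbitrary and not the paper's discretization):

```python
import numpy as np

def f_hat(x):
    """Continuous, piecewise linear source of Eq (6.5)."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.5, 2.0 * x, -2.0 * x + 2.0)

x_grid = np.linspace(0.0, 1.0, 201)   # illustrative grid for plotting/testing
f_values = f_hat(x_grid)
```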

    Figure 5.  Result of Example 3 with $ \mathit{\boldsymbol{\alpha}} = (0.3, 0.2, 0.1) $.
    Table 4.  Relative error of the GSM for Example 3 for different $ \; \alpha\; $ with $ \varepsilon = 1\%, 0.5\%, 0.1\%, 0.01\% $.
    $ \varepsilon $ $ \mathit{\boldsymbol{\alpha}} $ $ e_r(f, \varepsilon) $
    $ \varepsilon=1\% $ $ (0.3, 0.2, 0.1) $ 0.045909
    $ (0.9, 0.8, 0.7) $ 0.058139
    $ (0.6, 0.5, 0.3) $ 0.058454
    $ \varepsilon=0.5\% $ $ (0.3, 0.2, 0.1) $ 0.042321
    $ (0.9, 0.8, 0.7) $ 0.048615
    $ (0.6, 0.5, 0.3) $ 0.047175
    $ \varepsilon=0.1\% $ $ (0.3, 0.2, 0.1) $ 0.040261
    $ (0.9, 0.8, 0.7) $ 0.048302
    $ (0.6, 0.5, 0.3) $ 0.046372
    $ \varepsilon=0.01\% $ $ (0.3, 0.2, 0.1) $ 0.012787
    $ (0.9, 0.8, 0.7) $ 0.011516
    $ (0.6, 0.5, 0.3) $ 0.011305


    Example 4. Let $ \Omega_T = (0, 1)\times(0, T), \; s = 3, \; q_j\equiv1, \; a(x) = x^2+1, \; c(x) = -1 $. Take the exact source functions $ \; p(t) = 1\; $ and $ f(x) = e^{-x}\sin(7 \pi x) $. We first solve the direct problem with the GSM Eq (5.1) to obtain the additional data $ g(x) $, and then use the scheme Eq (5.7) to solve the inverse source problem. Numerical results for $ \mathit{\boldsymbol{\alpha_1}} = (0.9, 0.8, 0.7), \; \mathit{\boldsymbol{\alpha_2}} = (0.3, 0.2, 0.1) $, and $ \mathit{\boldsymbol{\alpha_3}} = (0.6, 0.5, 0.3) $ with noise levels $ \varepsilon = 10\%, 5\%, 1\%, 0.1\% $ are presented in panels (e) and (f) of Figure 6 and in Table 5. For the inverse source problem based on the FDM, we take $ C = 100, \Delta t = 5e-3 $. Numerical results for $ {\alpha_2} = (0.3, 0.2, 0.1) $ with noise levels $ \varepsilon = 1\%, 0.5\%, 0.1\%, 0.01\% $ are shown in panels (c) and (d) of Figure 6.
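    Example 4 is the only example with variable coefficients. To see how $ a(x) $ and $ c(x) $ enter the Galerkin matrices, the sketch below assembles the bilinear form $ \int_0^1 a(x)\phi_i'\phi_j'\,dx + \int_0^1 c(x)\phi_i\phi_j\,dx $ by Gauss-Legendre quadrature. For brevity it uses a sine basis on $ (0, 1) $ satisfying the homogeneous Dirichlet conditions; this basis is an illustrative choice, not the basis of the GSM Eq (5.1).

```python
import numpy as np

def assemble_stiffness(M, a, c, nquad=200):
    """Assemble A_ij = int_0^1 [ a(x) phi_i'(x) phi_j'(x) + c(x) phi_i(x) phi_j(x) ] dx
    with phi_k(x) = sin(k*pi*x), k = 1..M (illustrative basis, not the paper's)."""
    # Gauss-Legendre nodes/weights on (-1, 1), mapped to (0, 1)
    z, w = np.polynomial.legendre.leggauss(nquad)
    x = 0.5 * (z + 1.0)
    w = 0.5 * w

    k = np.arange(1, M + 1)
    phi  = np.sin(np.outer(k, np.pi * x))                       # (M, nquad)
    dphi = (k[:, None] * np.pi) * np.cos(np.outer(k, np.pi * x))  # derivatives

    ax, cx = a(x), c(x)
    # weighted quadrature sums for both terms of the bilinear form
    A = (dphi * (w * ax)) @ dphi.T + (phi * (w * cx)) @ phi.T
    return A

# coefficients of Example 4
a = lambda x: x**2 + 1.0
c = lambda x: -np.ones_like(x)

A = assemble_stiffness(M=20, a=a, c=c)
```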

    Figure 6.  Result of Example 4 with $ \mathit{\boldsymbol{\alpha}} = (0.3, 0.2, 0.1) $.
    Table 5.  Relative error of the GSM for Example 4 for different $ \; \alpha\; $ with $ \varepsilon = 10\%, 5\%, 1\%, 0.1\% $.
    $ \varepsilon $ $ \mathit{\boldsymbol{\alpha}} $ $ e_r(f, \varepsilon) $
    $ \varepsilon=10\% $ $ (0.3, 0.2, 0.1) $ 0.100485
    $ (0.9, 0.8, 0.7) $ 0.111043
    $ (0.6, 0.5, 0.3) $ 0.107902
    $ \varepsilon=5\% $ $ (0.3, 0.2, 0.1) $ 0.093797
    $ (0.9, 0.8, 0.7) $ 0.068835
    $ (0.6, 0.5, 0.3) $ 0.064510
    $ \varepsilon=1\% $ $ (0.3, 0.2, 0.1) $ 0.072740
    $ (0.9, 0.8, 0.7) $ 0.073513
    $ (0.6, 0.5, 0.3) $ 0.072786
    $ \varepsilon=0.1\% $ $ (0.3, 0.2, 0.1) $ 0.034704
    $ (0.9, 0.8, 0.7) $ 0.037078
    $ (0.6, 0.5, 0.3) $ 0.035328


    In Figures 3–6, panel (a) shows the numerical solution of the forward problem, i.e., the input data; panel (b) shows the numerical approximation of the inverse source problem with noise-free data; panels (c) and (d) show the results of the inverse source problem based on the FDM with and without the regularization method; and panels (e) and (f) show the corresponding results based on the GSM.

    In Tables 6–9, we report the error levels of the two numerical reconstruction methods with and without the regularization method. The results of the inverse problem based on the FDM with the regularization method are clearly better than those without it. For the GSM, by contrast, the results with and without the regularization method differ only slightly; in some cases the regularization method is not even needed to obtain a good result. Hence, compared with the FDM, the scheme Eq (5.7) itself plays a regularizing role in the inverse source problem. Next, we explain why the scheme Eq (5.7) is not sensitive to the noise level.
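    To make the comparison concrete, the two reconstruction strategies contrasted in Tables 6–9 can be viewed as solving the discrete system $ A\mathbf{f} = \mathbf{g}^{\varepsilon} $ directly versus solving a regularized counterpart. The following sketch assumes a Tikhonov-type regularization with parameter $ \mu $; the regularization method and parameter choice rule actually used in the paper may differ.

```python
import numpy as np

def solve_direct(A, g_noisy):
    """Un-regularized reconstruction: f = A^{-1} g^eps."""
    return np.linalg.solve(A, g_noisy)

def solve_tikhonov(A, g_noisy, mu):
    """Tikhonov-type reconstruction: f_mu = (A^T A + mu I)^{-1} A^T g^eps.
    (Assumed form; the paper's regularization may differ.)"""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ g_noisy)
```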

    Table 6.  Comparison of relative error of numerical results for Example 1 for $ \; \alpha = (0.3, 0.2, 0.1)\; $ with $ \varepsilon = 5\%, 1\%, 0.5\%, 0.1\% $.
    method $ \varepsilon=5\% $ $ \varepsilon=1\% $ $ \varepsilon=0.5\% $ $ \varepsilon=0.1\% $
    GSM (no regularization) 0.2501 0.0832 0.0290 0.0079
    GSM (regularization) 0.0823 0.0303 0.0134 0.0148
    FDM (no regularization) 4.3792 0.7871 0.4036 0.0965
    FDM (regularization) 0.0301 0.0052 0.0025 6.273e-4

    Table 7.  Comparison of relative error of numerical results for Example 2 for $ \alpha = (0.3, 0.2, 0.1) $ with $ \varepsilon = 1\%, 0.5\%, 0.1\%, 0.01\% $.
    method $ \varepsilon=1\% $ $ \varepsilon=0.5\% $ $ \varepsilon=0.1\% $ $ \varepsilon=0.01\% $
    GSM (no regularization) 0.0944 0.0635 0.0119 0.0010
    GSM (regularization) 0.0853 0.0760 0.0734 0.0652
    FDM (no regularization) 2.6407 1.220 0.2478 0.027
    FDM (regularization) 0.2043 0.0520 0.0208 0.0164

    Table 8.  Comparison of relative error of numerical results for Example 3 for $ \alpha = (0.3, 0.2, 0.1) $ with $ \varepsilon = 1\%, 0.5\%, 0.1\%, 0.01\% $.
    method $ \varepsilon=1\% $ $ \varepsilon=0.5\% $ $ \varepsilon=0.1\% $ $ \varepsilon=0.01\% $
    GSM (no regularization) 0.4327 0.2868 0.0365 0.0126
    GSM (regularization) 0.0459 0.0423 0.0402 0.0127
    FDM (no regularization) 11.808 5.0791 1.4387 0.1025
    FDM (regularization) 0.0353 0.0239 0.0116 0.0102

    Table 9.  Comparison of relative error of numerical results for Example 4 for $ \alpha = (0.3, 0.2, 0.1) $ with $ \varepsilon = 10\%, 5\%, 1\%, 0.1\% $.
    method $ \varepsilon=10\% $ $ \varepsilon=5\% $ $ \varepsilon=1\% $ $ \varepsilon=0.1\% $
    GSM (no regularization) 0.2924 0.0878 0.0425 0.0233
    GSM (regularization) 0.1004 0.0937 0.0727 0.0347
    FDM (no regularization) 2.79904 1.1476 0.2951 0.1435
    FDM (regularization) 0.1435 0.0846 0.0181 0.0014


    In Table 10, we show the 2-norms of the matrices $ G $, $ {A}^{-1} $ and $ {A}^{-1}G $ for Example 1. Clearly, $ \|{A}^{-1}G\|_{2} $ is small and $ \|{A}^{-1}\|_{2} $ has a moderate size, so the numerical solution is not sensitive to perturbations in the input data. In contrast, Table 11 shows that the FDM scheme is sensitive to such perturbations. Therefore, the GSM is more robust to noise than the FDM. On the other hand, from Tables 12 and 13 we find that, although the GSM is more robust than the FDM, it cannot completely dispense with the regularization method when the unknown source has low regularity and the measured data has a high noise level.
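    The quantities reported in Tables 10–13 are ordinary spectral (2-) norms and can be reproduced directly once the corresponding matrices have been assembled. A minimal sketch, where $ A $ and $ G $ (or $ Z $ and $ D $ for the FDM) are assumed to come from the discretizations described above:

```python
import numpy as np

def sensitivity_indicators(A, G):
    """Return ||G||_2, ||A^{-1}||_2 and ||A^{-1} G||_2 as in Tables 10-13.
    A is assumed square and invertible; G is the data matrix/vector of the discrete system."""
    A_inv = np.linalg.inv(A)
    return (np.linalg.norm(G, 2),
            np.linalg.norm(A_inv, 2),
            np.linalg.norm(A_inv @ G, 2))
```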

    Table 10.  Ill-posedness indicators of the GSM for Example 1 for different $ \varepsilon $.
    $ \varepsilon $ $ \|G\|_{2} $ $ \|{A}^{-1}\|_{2} $ $ \|{A}^{-1}G\|_{2} $
    0.05 0.356708 86.11457 0.369712
    0.01 0.360401 86.11457 0.360933
    0.005 0.360111 86.11457 0.360085
    0.001 0.360341 86.11457 0.360345
    0.0001 0.360348 86.11457 0.360348

    Table 11.  Ill-posedness indicators of the FDM for Example 1 for different $ \varepsilon $.
    $ \varepsilon $ $ \|D\|_{2} $ $ \|{Z}^{-1}\|_{2} $ $ \|{Z}^{-1}D\|_{2} $
    0.05 7.0881 2.322e+02 27.8227
    0.01 7.0743 2.322e+02 8.8667
    0.005 7.0809 2.322e+02 7.5734
    0.001 7.0804 2.322e+02 7.0929
    0.0001 7.0796 2.322e+02 7.0712

    Table 12.  Ill-posedness indicators of the GSM for Example 3 for different $ \varepsilon $.
    $ \varepsilon $ $ \|G\|_{2} $ $ \|{A}^{-1}\|_{2} $ $ \|{A}^{-1}G\|_{2} $
    0.01 0.013187 1.5104e+04 0.161897
    0.005 0.013226 1.5104e+04 0.164711
    0.001 0.013227 1.5104e+04 0.159407
    0.0001 0.013228 1.5104e+04 0.159405

    Table 13.  Ill-posedness indicators of the FDM for Example 3 for different $ \varepsilon $.
    $ \varepsilon $ $ \|D\|_{2} $ $ \|{Z}^{-1}\|_{2} $ $ \|{Z}^{-1}D\|_{2} $
    0.01 0.459280 3.9992e+04 76.739424
    0.005 0.459857 3.9992e+04 27.761398
    0.001 0.459380 3.9992e+04 7.957420
    0.0001 0.459332 3.9992e+04 5.798087


    In this paper, we first obtain a high-accuracy numerical solution by using the GSM, give error estimates between the exact solution and the semi-discrete and fully discrete solutions, and compare with the FDM; the results indicate that our method achieves better accuracy. Secondly, the GSM is extended to solve the inverse source problem. Moreover, we find that this method can effectively reduce the ill-posedness of the inverse source problem compared with the traditional FDM, so the spectral method itself can play a regularizing role. It should be mentioned that the estimates given in the paper are also valid in the two- and three-dimensional cases. In future work, we will continue to optimize the GSM scheme so that it plays the role of a regularization method more effectively, for example by replacing the original basis functions with smooth periodic functions or equally spaced trigonometric functions to avoid the Runge phenomenon.

    This work is supported by the NSF of China (grant no. 12201502), the Youth Science and Technology Fund of Gansu Province (grant no. 20JR10RA099), the Innovation Capacity Improvement Project for Colleges and Universities of Gansu Province (grant no. 2020B-088) and the Innovation star of Gansu Province (grant no. 2022CXZX-324).

    The authors declare there is no conflict of interest.
