1. Introduction
The balance between linear diffusion and nonlinear reaction or multiplication was studied in the 1930s by Fisher [12]. The generalized Fisher's equation with boundary and initial conditions is given as
where $\mu$ is the (constant) reaction factor and $f$ is the nonlinear reaction term. This equation was first proposed as a model for the propagation of a mutant gene, with $u$ denoting the density of an advantageous gene. It is encountered in population dynamics and chemical kinetics, covering problems such as the neutron population in a nuclear reaction and the nonlinear evolution of a population in a one-dimensional habitat [14,19].
Many methods and computational techniques have been employed to deal with these topics. Mittal et al. [20] used an efficient B-spline scheme to solve Fisher's equation; they proved the stability of the method and reduced the computational cost. A Sinc collocation method is proposed in [18] for solving this equation. Mittal et al. [21] obtained numerical solutions of equation (1.1) using a wavelet Galerkin method and showed that their method can handle large values of the linear growth rate. Olmos et al. [22] developed an efficient pseudospectral solution of Fisher's equation. The viability of applying moving mesh methods to simulate traveling wave solutions of Fisher's equation is investigated in [23]. Cattani et al. [9] proposed a multiscale analysis of the equation; their article is among the first to solve the problem numerically. For more related numerical results, we refer the interested reader to [1,11,13,16] and the references therein.
Mixed methods combining finite differences with Galerkin or collocation schemes have been used to solve various PDEs, and their use can be observed in many studies. Başan [4] applied a mixed algorithm based on the Crank-Nicolson scheme combined with a modified cubic B-spline DQM to solve the coupled KdV equation. In [7], a mixed method combining quintic B-splines and the Crank-Nicolson scheme is utilized for solving the nonlinear Schrödinger equation. In [5], a numerical solution to the coupled Burgers' equation is obtained via a combination of the Crank-Nicolson and differential quadrature methods. In [28], an algorithm based on the θ-weighted method and the wavelet Galerkin method is proposed to solve the Benjamin-Bona-Mahony equation. For more related cases, we refer the reader to [6,8,25].
Wavelets, and especially multiwavelets, have been found to be an interesting basis for solving a variety of equations [10,27]. Multiwavelets share key properties of wavelets, such as orthogonality, vanishing moments and compact support. Note that they can be both symmetric and orthogonal. In contrast to wavelets and biorthogonal wavelets, they can combine high smoothness and high approximation order with short support [17]. In addition, some multiwavelets, such as Alpert's multiwavelets, have interpolating properties. Contrary to biorthogonal wavelets, multiwavelets can have high vanishing moments without enlarging their support [15]. These bases are well suited for high-order adaptive solvers of partial differential equations; moreover, integro-differential equations have sparse representations in these bases. In general, multiwavelets are a very powerful tool for representing a variety of operators. In the present work, we apply the Alpert multiwavelets constructed in [2,3].
This paper is organized as follows. In Section 2, we review the definition and properties of the Alpert multi-wavelets required for our subsequent development. In Section 3, we proceed to the main results: a convergent scheme is constructed via the multiwavelet Galerkin method, and the proposed method is examined along with an analysis of its convergence and stability. Section 4 contains some numerical examples that illustrate the efficiency and accuracy of the scheme.
2. Alpert's multi-wavelets
Assume that $\Omega := [\alpha,\beta] = \bigcup_{b\in B} X_{J,b}$ is a finite discretization of $\Omega$, where the cells $X_{J,b} := [x_b, x_{b+1}]$, $b \in B := \{\alpha 2^J, \dots, (\beta-1)2^J + 2^J - 1\}$ with $J \in \mathbb{Z}^+ \cup \{0\}$, are determined by the points $x_b := b/2^J$. On this discretization, applying the dilation operator $D_{2^J}$ and the translation operator $T_b$ to the primal scaling functions $\{\phi^{0}_{0,\alpha 2^J}, \dots, \phi^{r-1}_{0,(\beta-1)2^J+2^J-1}\}$, one can introduce the subspaces
of scaling functions. Here $R = \{0,1,\dots,r-1\}$ and the primal scaling functions are the Lagrange polynomials of degree less than $r$ introduced in [2].
Every function p∈L2(Ω) can be represented in the form
where $\langle\cdot,\cdot\rangle$ denotes the $L^2$-inner product and $P_J^r$ is the orthogonal projection that maps $L^2(\Omega)$ onto the subspace $V_J^r$. The coefficients $p_{J,b}^k$ are determined by $\langle p, \phi_{J,b}^k\rangle = \int_{X_{J,b}} p(x)\,\phi_{J,b}^k(x)\,dx$, so these integrals must be computed. To evaluate them, we apply the $r$-point Gauss-Legendre quadrature with a suitable choice of the weights $\omega_k$ and nodes $\tau_k$ for $k\in R$ [2,26], via
Convergence analysis of the projection PrJ(p) is investigated for the r-times continuously differentiable function p∈Cr(Ω).
For the full proof of this approximation and further details, we refer the reader to [3]. Thus we can conclude that $P_J^r(p)$ converges to $p$ with rate of convergence $O(2^{-Jr})$.
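As an illustration of this projection and quadrature step, the following sketch (all names are hypothetical) computes the scaling coefficients of a function cell by cell. For simplicity it uses $L^2$-normalized shifted Legendre polynomials as the orthonormal basis of polynomials of degree less than $r$ on each cell, which span the same piecewise-polynomial space $V_J^r$ as Alpert's scaling functions.

```python
import numpy as np

def scaling_coefficients(p, r, J, alpha=0.0, beta=1.0):
    """Coefficients <p, phi_{J,b}^k> via r-point Gauss-Legendre on each cell.

    Illustration only: normalized shifted Legendre polynomials stand in for
    Alpert's scaling functions (they span the same space V_J^r per cell).
    """
    tau, w = np.polynomial.legendre.leggauss(r)     # nodes/weights on [-1, 1]
    n_cells = int((beta - alpha) * 2**J)
    h = (beta - alpha) / n_cells                    # cell width
    coeffs = np.zeros((n_cells, r))
    for b in range(n_cells):
        x0 = alpha + b * h
        x = x0 + h * (tau + 1.0) / 2.0              # quadrature nodes in the cell
        for k in range(r):
            # L2-normalized shifted Legendre polynomial on the cell
            phi = np.sqrt((2 * k + 1) / h) * np.polynomial.legendre.Legendre.basis(k)(tau)
            coeffs[b, k] = (h / 2.0) * np.sum(w * p(x) * phi)
    return coeffs
```

Since the basis per cell is orthonormal, the squared coefficients of a polynomial $p$ of degree less than $r$ sum exactly to $\|p\|_{L^2}^2$, which gives a quick correctness check.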
Let $\Phi_J^r$ be the vector function $\Phi_J^r := [\Phi_{r,J,\alpha 2^J},\dots,\Phi_{r,J,(\beta-1)2^J+2^J-1}]^T$ consisting of the vectors $\Phi_{r,J,b} := [\phi_{J,b}^0,\dots,\phi_{J,b}^{r-1}]$. The vector function $\Phi_J^r$ collects the scaling functions and is called the multi-scaling function. Furthermore, defining the vector $P$ with entries $p_{J,b}^k$, we can rewrite Eq (2.2) as follows
where $P$ is an $N$-dimensional vector ($N := (\beta-\alpha)r2^J$). The building blocks of this basis construction can be applied to approximate higher-dimensional functions. To this end, one can introduce the two-dimensional subspace $V_J^{r,2} := V_J^r \times V_J^r \subset L^2(\Omega^2)$ that is spanned by
Thus, under this assumption, an approximation of a function $p \in L^2(\Omega^2)$ by the projection operator $P_J^r$ is given by
where the entries of the square matrix $P$ of order $N$ are obtained by
where $\hat\tau_k = (\tau_k+1)/2$. Assume that the $2r$-th partial derivatives of $p:\Omega^2 \to \mathbb{R}$ are continuous. Under this assumption, the error of the approximation can be bounded as follows
where $M_{\max}$ is a constant.
By reviewing the spaces $V_J^r$, it is obvious that these spaces are nested. Hence there exist complement spaces $W_J^r$ such that
where $\oplus$ denotes the orthogonal sum. These subspaces are spanned by the multi-wavelet bases
According to (2.8), the space $V_J^r$ may be decomposed inductively as $V_J^r = V_0^r \oplus \left(\bigoplus_{j=0}^{J-1} W_j^r\right)$. This is called the multi-scale decomposition, which is spanned by the multi-wavelet bases together with the single-scale bases. This leads us to introduce the multi-scale projection operator $M_J^r$. Assume that $Q_j^r$ is the projection operator that maps $L^2(\Omega)$ onto $W_j^r$. Thus we obtain
and consequently, any function p∈L2(Ω) can be approximated as a linear combination of multi-wavelet bases
where
Note that we can compute the coefficients $p_{0,0}^k$ by using (2.2), but the multi-wavelet coefficients from level zero up to the highest level $J-1$ must in many cases be evaluated numerically. To avoid this problem, we use the multi-wavelet transform matrix $T_J$ introduced in [24,26]. This matrix connects the multi-wavelet bases and the multi-scaling functions via
where $\Psi_J^r := [\Phi_{r,0,b}, \Psi_{r,0,b}, \Psi_{r,1,b},\dots,\Psi_{r,J-1,b}]^T$ is a vector with the same dimension as $\Phi_J^r$ (here $\Psi_{r,j,b} := [\psi_{j,b}^0,\dots,\psi_{j,b}^{r-1}]$). This representation helps to rewrite Eq (2.10) in the form
where the $N$-dimensional vector $\tilde P_J$, whose entries are $p_{0,0}^k$ and $\tilde p_{j,b}^k$, is given by employing the multi-wavelet transform matrix $T_J$ as $\tilde P_J = T_J P_J$. Note that, according to the properties of $T_J$, we have $T_J^{-1} = T_J^T$.
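For the special case $r=1$, Alpert's multiwavelets reduce to the classical Haar system, and one level of the transform takes a particularly simple form. The sketch below (hypothetical names) illustrates the multi-scale decomposition and the orthogonality property $T_J^{-1} = T_J^T$: the map preserves the $\ell^2$ norm of the coefficients.

```python
import numpy as np

def haar_decompose(p):
    """One level of the multiwavelet transform for r = 1 (the Haar case).

    Maps scaling coefficients at level J to scaling coefficients at level
    J - 1 plus wavelet details; applied recursively, this yields the full
    multi-scale decomposition V_J = V_0 + W_0 + ... + W_{J-1}. The map is
    orthogonal, mirroring T_J^{-1} = T_J^T.
    """
    p = np.asarray(p, dtype=float)
    s = (p[0::2] + p[1::2]) / np.sqrt(2.0)   # coarse scaling coefficients
    d = (p[0::2] - p[1::2]) / np.sqrt(2.0)   # wavelet details
    return s, d
```

The details $d$ vanish wherever the input is locally constant, the $r=1$ instance of the $O(2^{-Jr})$ decay of the details in smooth regions.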
The multi-wavelet coefficients (details) become small with increasing refinement level wherever the underlying function is (locally) smooth. If the multi-wavelet bases have $N_\psi^r$ vanishing moments, then the details decay at the rate $2^{-J N_\psi^r}$ [15]. Because the number of vanishing moments of Alpert's multi-wavelets equals $r$, one consequently obtains $\tilde p_{J,b}^k = O(2^{-Jr})$. This allows us to truncate the full wavelet transform while preserving most of the necessary data. Thus, using the thresholding operator $C_\varepsilon$, we can set to zero all details that fall below a certain threshold $\varepsilon$
and the elements of $\bar P_J$ are determined by
where $D_\varepsilon := \{(j,b,k) : |\tilde p_{j,b}^k| > \varepsilon\}$. Now we can bound the approximation error after thresholding via
where $P_{J,D_\varepsilon}^r(p)$ is the projection operator after thresholding with threshold $\varepsilon$ and $C_{\mathrm{thr}} > 0$ is a constant independent of $J$ and $\varepsilon$.
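A minimal sketch of the thresholding operator $C_\varepsilon$ (hypothetical names): details indexed by $D_\varepsilon$, i.e., those with magnitude above $\varepsilon$, are kept, and all others are set to zero.

```python
import numpy as np

def hard_threshold(details, eps):
    """Thresholding operator C_eps: keep details with |d| > eps, zero the rest."""
    details = np.asarray(details, dtype=float)
    kept = np.abs(details) > eps          # indicator of the index set D_eps
    return np.where(kept, details, 0.0), int(np.sum(kept))
```

Returning the size of $D_\varepsilon$ alongside the truncated coefficients makes the achieved compression easy to monitor.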
3. The mixed finite difference and Galerkin method (MFDGM)
The building block of the time discretization is the θ-weighted scheme applied to the generalized Fisher's equation. For this purpose we introduce the time discretization $t_{n+1} := t_n + \delta t$, where the time step $\delta t$ is assumed to be constant. To derive a stable method, we will see in the next section that $\delta t$ must satisfy certain conditions. Here we consider the generalized Fisher's equation with initial and boundary conditions
where $\kappa$ and $\mu$ are constants. We suppose that the initial data $g(x)$ is several times differentiable and that the nonlinear term $uu^\kappa$ satisfies a Lipschitz condition with Lipschitz constant $L$.
Let $u^n := u(x,t_n)$, where $t_n = n\,\delta t$, $n \in \mathcal{N} = \{0,1,\dots,T/\delta t\}$. Then the θ-weighted scheme reads
where $0 \le \theta \le 1$ and $R < \delta t\, C$ is a small term for a positive constant $C$. Omitting the small term $R$ and rearranging (3.2), we obtain for the Crank-Nicolson method ($\theta = \frac{1}{2}$),
To derive a multiwavelet Galerkin method for the generalized Fisher's equation (3.1), assume that the approximate solution at each step can be written as
where $U_n$ is an $N$-dimensional vector whose elements must be found. The derivative operator $\frac{d^2}{dx^2}$ can be approximated by
where Dψ is the operational matrix of derivative for multiwavelets introduced in [8].
Inserting (3.4) and (3.5) into (3.3) and using the multi-scale projection operator for $(uu^\kappa)^n$ yields
where $(uu^\kappa)^n \approx E_n^T \Psi_J^r$. To obtain the approximate solution of (3.6) using the multiwavelet Galerkin method, we multiply (3.6) by $(\Psi_J^r)^T(x)$ and integrate over its support $\Omega$. Therefore, using the orthonormality of this system of multiwavelets, we have
where M2 is the right hand side of (3.6), and
The system (3.7) consists of $N$ equations. Since two of them are linearly dependent, we replace these dependent equations with two others derived from the boundary conditions (3.1). A new system of equations is obtained by replacing the first and last columns of $M_1$ with $\Psi_J^r(\alpha)$ and $\Psi_J^r(\beta)$ and the first and last elements of $M_2$ with $h_1(t_{n+1})$ and $h_2(t_{n+1})$, i.e.,
To start the time stepping, we use the initial condition. The initial condition in (3.1) can be approximated as
Setting $U_0 = G$, a linear system of algebraic equations arises at every time step; by solving this system, the unknown coefficients $U_{n+1}$ can be found.
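To make the time-stepping structure concrete, the sketch below advances $u_t = u_{xx} + \mu u(1-u^\kappa)$ by one θ-weighted step. It is not the paper's multiwavelet Galerkin discretization: as a stand-in, space is discretized by standard second-order finite differences, while, as in scheme (3.3), the diffusion term is θ-weighted between time levels and the nonlinear term is lagged at the known level $n$. All names are hypothetical.

```python
import numpy as np

def fisher_theta_step(u, dt, dx, mu=1.0, kappa=1, theta=0.5):
    """One theta-weighted step for u_t = u_xx + mu*u*(1 - u^kappa).

    Finite-difference stand-in for the multiwavelet Galerkin discretization:
    diffusion is theta-weighted between levels n and n+1, the nonlinear term
    is evaluated at level n, and Dirichlet values u[0], u[-1] are held fixed.
    """
    n = len(u)
    # 1D Laplacian on interior nodes (boundary rows stay zero)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A /= dx**2
    I = np.eye(n)
    lhs = I - theta * dt * A
    rhs = (I + (1.0 - theta) * dt * A) @ u + dt * mu * (u - u**(kappa + 1))
    # enforce the Dirichlet boundary conditions
    lhs[0, :], lhs[-1, :] = I[0, :], I[-1, :]
    rhs[0], rhs[-1] = u[0], u[-1]
    return np.linalg.solve(lhs, rhs)
```

As a sanity check, the constant states $u \equiv 0$ and $u \equiv 1$ are fixed points of the update, matching the equilibria of Fisher's equation.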
3.1. Convergence analysis
Let $H^k(\Omega) = W^{k,2}(\Omega)$ be the usual Sobolev space of order $k$ on $\Omega$. With this definition, the Sobolev space $H^2$ admits the natural norm
where $\langle\cdot,\cdot\rangle$ is the $L^2(\Omega)$-inner product.
Theorem 3.1. Assume that the nonlinear term $p(u) := uu^\kappa$ satisfies a Lipschitz condition with respect to $u$,
where the Lipschitz constant $L < \infty$ is supposed to be large enough. Then the time-discrete numerical scheme defined by (3.3) is stable in the $H^2$-norm when $\delta t < \frac{1}{\mu L}$.
Proof. Subtracting equation (3.3) from
one finds that the roundoff error $e^n = u^n - \hat u^n$, for $n = 0,1,\dots,T/\delta t$, satisfies
where un and ˆun are the exact and approximate solutions of (3.3), respectively.
Applying the Lipschitz condition (3.10) and then simplifying, it follows that
Multiplying (3.12) by $e^{n+1}$ and integrating over $\Omega$ yields
Performing integration by parts, we obtain
Now, it follows from the Schwarz inequality ($|\langle v,w\rangle| \le \|v\|\,\|w\|$) that
where we used the inequality $vw \le \frac{1}{2}(v^2 + w^2)$. Rearranging (3.14), we have
Since $\delta t < \frac{1}{\mu L}$ and $L$ is assumed to be sufficiently large, one has
and then it follows from $1 - \mu\,\delta t\, L > 0$ that
Hence, one finds for $n = 0,1,\dots,T/\delta t - 1$
The proof is completed by taking the limit as $n \to \infty$,
Theorem 3.2. Under the assumptions of Theorem 3.1, the time-discrete solution $\hat u^n$ is $H^2$-convergent to $u^n$.
Proof. Assume that $e^n = u^n - \hat u^n$ is the perturbation error. Since $\hat u^n$ is the approximate solution of (3.2) at time step $n$ and satisfies the initial and boundary conditions (3.1), it follows that $e^n$ satisfies (3.2)
Applying Lipschitz condition (3.10), one can write after simplification
Multiplying (3.20) by en+1, and integrating over Ω
Using integration by parts, one has
It follows from the Schwarz inequality that
Further, by Young's inequality ($ab \le \frac{1}{2\varepsilon}a^2 + \frac{\varepsilon}{2}b^2$) with $\varepsilon = \delta t$, we obtain
By simplification of the above relation, we obtain
Since $\delta t < \frac{1}{\mu L}$, it follows that
Repeating this relation for $n = 0,1,\dots,T/\delta t - 1$, one can write
Because e0=0, one can show that
Then, since
it follows from $R \le C\,\delta t$ that
Since $n\,\delta t = T$ is fixed, $\delta t \to 0$ as $n \to \infty$, and then we obtain
This completes the proof.
4. Numerical experiments
In this section, some numerical examples are presented to illustrate the validity and merits of the new technique. The accuracy of the method is measured by the $L^2$-error, i.e.,
where $t_i = n\,\delta t/2^m$, $n = 0,\dots,10T\,2^m - 1$. In all examples, we take the initial time step size $\delta t := 0.1$.
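The error measure and the convergence order observed from successive halvings of $\delta t$ can be sketched as follows (hypothetical names):

```python
import numpy as np

def l2_error(u_num, u_exact, dx):
    """Discrete L2-error on a uniform grid with spacing dx."""
    return np.sqrt(dx * np.sum((np.asarray(u_num) - np.asarray(u_exact))**2))

def observed_orders(errors):
    """Observed convergence orders log2(e_m / e_{m+1}) for halved step sizes."""
    return [np.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
```

A sequence of errors that quarters each time the step is halved, for instance, yields observed orders close to 2, consistent with the second-order Crank-Nicolson discretization.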
Example 4.1. Consider the Fisher's equation:
The exact solution is given in [29]
and the initial and boundary conditions can be extracted from the exact solution.
The effects of the refinement level $J$, the multiplicity parameter $r$ and the time step size $\delta t$ on the $L^2$-error are given in Table 1. Figure 1 is plotted to show the effect of the time step size on the accuracy. As the time step size decreases, it can be seen that the error decreases and the approximate solution converges to the exact solution. Figure 2 illustrates the approximate solution and the $L^\infty$-error taking $r=3$, $J=2$ and $m=8$. Table 2 displays the $L^2$-error using the presented method taking $r=3$, $J=2$, $\delta t = 0.1/2^m$, $m=1,\dots,8$. The results have been compared with the implicit ($\theta=1$) and explicit ($\theta=0$) methods.
Example 4.2. Consider the Fisher's equation:
The exact solution is given by [14]
and the boundary and initial conditions can be obtained from it.
Table 3 shows the effects of the refinement level $J$, the multiplicity parameter $r$ and the time step size $\delta t$ on the $L^2$-error. Figure 3 is also provided for further observations. Figure 4 shows that the approximate solution converges to the exact solution as the time step size decreases. The approximate solution and the $L^\infty$-error are presented graphically for $r=3$, $J=2$ and $m=8$ in Figure 5. The results demonstrate the efficiency and accuracy of the proposed method. In Table 4, we compare the $L^2$-errors taking $r=3$, $J=2$ and $\delta t = 0.1/2^{m-1}$ at time $t=1$ between the Crank-Nicolson method and the implicit method ($\theta=1$).
Example 4.3. Consider the Fisher's equation:
The exact solution is given by [14]
Figure 6 shows the approximate solution and the $L^2$-error taking $r=3$, $J=2$ and $m=8$. One can see the effect of the refinement level $J$, the multiplicity parameter $r$ and the time step size on the $L^2$-error in Figure 7. We observe that the $L^2$-error decreases as the refinement level $J$ and the multiplicity parameter $r$ increase. Table 5 displays the $L^2$-error using the presented method taking $r=3$, $J=2$, $\delta t = 0.1/2^m$, $m=1,\dots,8$. The results have been compared with the implicit ($\theta=1$) and explicit ($\theta=0$) methods.
5. Conclusion
The multiwavelet Galerkin method is applied to solve Fisher's equation. After discretization in time using the Crank-Nicolson method, an ordinary differential equation in space arises at each time step. The multiwavelet Galerkin method is then used to solve this equation, which results in a system of algebraic equations at each time step. By solving this system, one can find the approximate solution at any time. The convergence and stability analyses are carried out, and numerical simulations indicate that the proposed method gives a satisfactory approximation to the exact solution.
Acknowledgments
This project was supported by Researchers Supporting Project number (RSP-2020/210), King Saud University, Riyadh, Saudi Arabia.
Conflict of interest
The author declares no conflicts of interest in this paper.