1. Introduction
Fractional diffusion equations (FDEs) have been utilized to model anomalous diffusion phenomena in the real world; see for instance [1,2,4,18,19,22,23,25]. One of the main features of the fractional differential operator is nonlocality, which poses a significant challenge for the numerical solution of FDEs: the coefficient matrix of the discretized FDEs is typically dense, so a direct solution method requires $O(N^3)$ computational cost and $O(N^2)$ memory storage, where $N$ is the number of unknowns. However, by exploiting the Toeplitz-like structure of the coefficient matrices, many efficient algorithms have been developed; see for instance [6,9,12,13,14,15,16,21].
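The key observation behind such fast algorithms is that a Toeplitz matrix-vector product can be computed in $O(N\log N)$ operations by embedding the Toeplitz matrix into a circulant matrix, which the FFT diagonalizes. A minimal sketch (in NumPy; the function and its interface are illustrative, not taken from the cited works):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    # Multiply the N x N Toeplitz matrix T with first column c and first
    # row r (c[0] == r[0]) by a vector x in O(N log N) operations.
    n = len(x)
    # First column of the 2N x 2N circulant matrix that embeds T.
    col = np.concatenate([c, [0.0], r[:0:-1]])
    eig = np.fft.fft(col)                         # eigenvalues of the circulant
    y = np.fft.ifft(eig * np.fft.fft(x, 2 * n))   # circulant times zero-padded x
    return y[:n].real                             # top block equals T @ x
```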
In this paper, we consider the following initial-boundary value problem of spatial balanced FDEs [7,24]:
where $d_\pm(x,t)>0$ are diffusion coefficients, $f(x,t)$ is the source term, and $\alpha,\beta$ are the fractional orders satisfying $\frac{1}{2}<\alpha,\beta<1$. Here ${}^{RL}_{x_L}D^{\gamma}_{x}$ and ${}^{RL}_{x}D^{\gamma}_{x_R}$ denote the left-sided and right-sided Riemann-Liouville fractional derivatives for $0<\gamma<1$, respectively, and are defined by [17]
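namely, for $x\in(x_L,x_R)$,
$${}^{RL}_{x_L}D^{\gamma}_{x}u(x)=\frac{1}{\Gamma(1-\gamma)}\frac{d}{dx}\int_{x_L}^{x}\frac{u(s)}{(x-s)^{\gamma}}\,ds,\qquad {}^{RL}_{x}D^{\gamma}_{x_R}u(x)=\frac{-1}{\Gamma(1-\gamma)}\frac{d}{dx}\int_{x}^{x_R}\frac{u(s)}{(s-x)^{\gamma}}\,ds,$$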
where $\Gamma(\cdot)$ is the Gamma function, while ${}^{C}_{x_L}D^{\gamma}_{x}$ and ${}^{C}_{x}D^{\gamma}_{x_R}$ denote the left-sided and right-sided Caputo fractional derivatives for $0<\gamma<1$, respectively, and are defined by [17]
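namely,
$${}^{C}_{x_L}D^{\gamma}_{x}u(x)=\frac{1}{\Gamma(1-\gamma)}\int_{x_L}^{x}\frac{u'(s)}{(x-s)^{\gamma}}\,ds,\qquad {}^{C}_{x}D^{\gamma}_{x_R}u(x)=\frac{-1}{\Gamma(1-\gamma)}\int_{x}^{x_R}\frac{u'(s)}{(s-x)^{\gamma}}\,ds.$$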
The fractional differential operator in (1.1) is called the balanced central fractional derivative, which was studied in [24]. One advantage of this operator is that its variational formulation has a symmetric bilinear form, which greatly benefits theoretical investigation.
Recently, a finite volume approximation for the spatial balanced FDEs (1.1) was proposed in [7]. By applying a standard first-order difference scheme to the time derivative and a finite volume discretization to the spatial balanced fractional differential operator, a sequence of linear systems is generated, whose coefficient matrices share the form of the sum of a tridiagonal matrix and two Toeplitz-times-diagonal-times-Toeplitz matrices. One attractive feature of these coefficient matrices is that they are symmetric positive definite, so the linear systems can be solved by the CG method, whose three-term recurrence significantly reduces the computational and storage cost. However, due to the ill-conditioning of the coefficient matrices, the CG method usually converges very slowly when applied to the resulting linear systems. Therefore, preconditioners should be applied to improve the computational efficiency. In [7], the authors proposed two preconditioners: a circulant preconditioner for the constant diffusion coefficient case and a banded preconditioner for the variable diffusion coefficient case.
In this paper, we consider approximate inverse preconditioners for the linear systems arising from the finite volume discretization of the spatial balanced FDEs (1.1). Our preconditioner is based on the symmetric approximate inverse strategy studied in [11] and the Sherman-Morrison-Woodbury formula. Rigorous analysis shows that the preconditioned matrix can be written as the sum of the identity matrix, a small-norm matrix, and a low-rank matrix. Therefore, fast convergence of the CG method for the preconditioned linear systems is expected. Numerical examples, for both one-dimensional and two-dimensional cases, are given to demonstrate the robustness and effectiveness of the proposed preconditioner. We remark that our preconditioner can also be applied to another class of conservative balanced FDEs studied in [8,10].
The rest of the paper is organized as follows. In Section 2, we present the discretized linear systems of the balanced FDEs. Our new preconditioners are given in Section 3, and their properties are investigated in detail in Section 4. In Section 5, we carry out numerical experiments to demonstrate the performance of the proposed preconditioners. Finally, we give concluding remarks in Section 6.
2. Discretization of the spatial balanced FDEs
Let $\Delta t = T/M_t$ be the time step, where $M_t$ is a given positive integer. We define a temporal partition $t_j = j\Delta t$ for $j=0,1,2,\ldots,M_t$. The first-order time derivative in (1.1) can be discretized by the standard backward Euler scheme, and we obtain the following semidiscrete form:
for $j=1,2,\ldots,M_t$. Let $\Delta x = (x_R - x_L)/(N+1)$ be the size of the spatial grid, where $N$ is a positive integer. We define a spatial partition $x_i = x_L + i\Delta x$ for $i=0,1,2,\ldots,N+1$, and denote by $x_{i-\frac{1}{2}} = \frac{x_{i-1}+x_i}{2}$ the midpoint of the interval $[x_{i-1},x_i]$. Integrating both sides of (2.1) over $[x_{i-\frac{1}{2}}, x_{i+\frac{1}{2}}]$ gives
Let $S_{\Delta x}(x_L,x_R)$ be the space of continuous and piecewise-linear functions with respect to the spatial partition, and define the nodal basis functions $\phi_k(x)$ as
for $k=1,2,\ldots,N$, and
The approximate solution $u_{\Delta x}(x,t_j)\in S_{\Delta x}(x_L,x_R)$ can be expressed as
Therefore, the corresponding finite volume scheme leads to
which can be further approximated to obtain [7]
where $\eta_\alpha = \frac{\Delta t}{\Gamma(2-\alpha)^{2}\,\Delta x^{2\alpha}}$, $\eta_\beta = \frac{\Delta t}{\Gamma(2-\beta)^{2}\,\Delta x^{2\beta}}$, and
for $\gamma = \alpha, \beta$, with
The initial value and boundary condition are
Collecting the equations for all components $i$ into a single matrix system, we obtain
where
are diagonal matrices, $M = \frac{1}{8}\,\mathrm{tridiag}(1,6,1)$, $u^{(j)} = [u^{(j)}_1, u^{(j)}_2, \ldots, u^{(j)}_N]^{\intercal}$, and $f^{(j)} = [f^{(j)}_1, f^{(j)}_2, \ldots, f^{(j)}_N]^{\intercal}$ with
and
with
The matrices $\tilde{G}_\alpha$, $\tilde{G}_\beta$ are $N\times(N+1)$ Toeplitz matrices defined by
We remark that the entries of $\tilde{G}_\alpha$ and $\tilde{G}_\beta$ are independent of the time level $t_j$.
We denote the coefficient matrix by $A^{(j)}$, that is,
As $M$, $\tilde{D}^{(j)}_+$, and $\tilde{D}^{(j)}_-$ are symmetric positive definite, the coefficient matrix $A^{(j)}$ is symmetric positive definite as well.
For the entries of $\tilde{G}_\alpha$ and $\tilde{G}_\beta$, we have the following result, which is directly obtained from Lemma 2.3 in [15].
Lemma 2.1. Let $g^{(\gamma)}_k$ be defined by (2.5) with $\frac{1}{2}<\gamma<1$. Then
(1) $g^{(\gamma)}_0 > 0$ and $g^{(\gamma)}_1 < g^{(\gamma)}_2 < \cdots < g^{(\gamma)}_k < \cdots < 0$;
(2) $|g^{(\gamma)}_k| < \frac{c_\gamma}{(k-1)^{\gamma+1}}$ for $k=2,3,\ldots$, where $c_\gamma = \gamma(1-\gamma)$;
(3) $\lim\limits_{k\to\infty} |g^{(\gamma)}_k| = 0$, $\sum\limits_{k=0}^{\infty} g^{(\gamma)}_k = 0$, and $\sum\limits_{k=0}^{p} g^{(\gamma)}_k > 0$ for $p\geq 1$.
3. The approximate inverse preconditioner
As the coefficient matrix is symmetric positive definite, we can apply the CG method to solve the linear systems (2.6). In order to improve the performance and reliability of the CG method, it is necessary to employ preconditioning. In this section, we develop the approximate inverse preconditioner for the linear system (2.6).
For the convenience of analysis, we omit the superscript "$(j)$" for the $j$th time step, that is, we denote the coefficient matrix (2.7) as
where $\tilde{D}_+ = \tilde{D}^{(j)}_+$ and $\tilde{D}_- = \tilde{D}^{(j)}_-$.
Let $g_\alpha$ and $G_\alpha$ be the first column and the last $N$ columns of $\tilde{G}_\alpha$, respectively, that is, $\tilde{G}_\alpha = [g_\alpha, G_\alpha]$, where
Analogously, we denote by $g_\beta$ and $G_\beta$ the last column and the first $N$ columns of $\tilde{G}_\beta$, respectively, that is, $\tilde{G}_\beta = [G_\beta, g_\beta]$, where
Then we have
where
Therefore, $A$ can be written as
where
and
According to the Sherman-Morrison-Woodbury formula, we have
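Writing the low-rank term in the generic symmetric form $A = \hat{A} + UU^{\intercal}$ with $U\in\mathbb{R}^{N\times 2}$ (an assumption on notation that is consistent with the $2\times 2$ matrix $I + U^{\intercal}P_3^{-1}U$ used below), the formula reads
$$A^{-1} = \hat{A}^{-1} - \hat{A}^{-1}U\left(I + U^{\intercal}\hat{A}^{-1}U\right)^{-1}U^{\intercal}\hat{A}^{-1}.$$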
In the following, we consider the approximation of $\hat{A}^{-1}$. To this end, we first define a matrix $\tilde{A}$ with
where $D_+ = \mathrm{diag}(\{d_+(x_k)\}_{k=1}^{N})$ and $D_- = \mathrm{diag}(\{d_-(x_k)\}_{k=1}^{N})$.
Assume $d_+(x), d_-(x)\in C[x_L, x_R]$. Then it is easy to see that, for any $\epsilon>0$, when the spatial grid size $\Delta x$ is small enough, we have
Meanwhile, we learn from Lemma 2.1 that there exist $c_\alpha, c_\beta > 0$ such that $\|G_\alpha\|_2 \leq c_\alpha$ and $\|G_\beta\|_2 \leq c_\beta$. Let
Then it holds that
This indicates that $\tilde{A}$ can be a good approximation of $\hat{A}$ when $\Delta x$ is very small. However, it is not easy to compute the inverse of $\tilde{A}$.
Motivated by the symmetric approximate inverse preconditioner proposed in [11], we consider the following approximations
where $e_i$ is the $i$-th column of the identity matrix $I$ and
which is symmetric positive definite. That is, we approximate the $i$-th column of $\tilde{A}^{-1/2}$ by the $i$-th column of $K_i^{-1/2}$. Then we propose the following symmetric approximate inverse preconditioner
Although $K_i$ is symmetric positive definite, it is not easy to compute the inverse of its square root. Hence we further approximate $K_i$ by circulant matrices, whose inverse square roots can be easily obtained by the FFT.
Let $C_M$, $C_\alpha$ and $C_\beta$ be Strang's circulant approximations [5] of $M$, $G_\alpha$ and $G_\beta$, respectively. Then we obtain the following preconditioner
where $C_i = C_M + \eta_\alpha d_+(x_i)\, C_\alpha C_\alpha^{\intercal} + \eta_\beta d_-(x_i)\, C_\beta C_\beta^{\intercal}$ is a circulant matrix.
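Applying $C_i^{-1/2}$ to a vector then reduces to FFTs. A minimal sketch in NumPy (assuming only that `c` holds the first column of one symmetric positive definite circulant $C_i$; assembling the Strang approximations is omitted, and the function name is ours):

```python
import numpy as np

def circ_inv_sqrt_apply(c, v):
    # The FFT diagonalizes a circulant matrix: C = F* diag(lam) F with
    # lam = fft(c). Hence C^{-1/2} v = F* diag(lam^{-1/2}) F v, at O(N log N) cost.
    lam = np.fft.fft(c).real      # eigenvalues of C (real, since C is symmetric)
    return np.fft.ifft(np.fft.fft(v) / np.sqrt(lam)).real
```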
In order to make the preconditioner more practical, similar to the idea in [11,13], we utilize an interpolation technique. Let $\{\tilde{x}_k\}_{k=1}^{\ell}$ be $\ell$ distinct interpolation points in $[x_L, x_R]$ with a small integer $\ell$ ($\ell \ll N$), and denote $\lambda \triangleq (\lambda_M, \lambda_\alpha, \lambda_\beta)$, where $\lambda_M$, $\lambda_\alpha$, $\lambda_\beta$ are certain positive real numbers. Then we define the function
Let
be the piecewise-linear interpolation of $g_\lambda(x)$ based on the $\ell$ points $\{(\tilde{x}_k, g_\lambda(\tilde{x}_k))\}_{k=1}^{\ell}$. Define
Then $\tilde{C}_k$ is symmetric positive definite and can be diagonalized by the FFT, that is,
where $F$ is the Fourier matrix and $\tilde{\Lambda}_k$ is a diagonal matrix whose diagonal entries are the eigenvalues of $\tilde{C}_k$. By making use of the interpolation technique, we approximate $C_i^{-1/2}$ by
By substituting these approximations into $P_2^{-1}$, we obtain the practical preconditioner
where $\Phi_k = \mathrm{diag}\left(\phi_k(x_1), \phi_k(x_2), \ldots, \phi_k(x_N)\right)$. Now applying $P_3^{-1}$ to a vector requires about $O(\ell N \log N)$ operations, which is acceptable for a moderate number $\ell$.
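To make the cost claim concrete, the sketch below applies an operator assembled from $\ell$ terms of the form $\Phi_k F^{*}\tilde{\Lambda}_k^{-1/2}F$, as in the approximation above (illustrative only; the exact composition of $P_3^{-1}$ is the one displayed above, and the function name and data layout are ours):

```python
import numpy as np

def interp_inv_sqrt_apply(phis, lams, v):
    # phis[k] holds the diagonal of Phi_k, i.e. the values phi_k(x_i);
    # lams[k] holds the eigenvalues of the circulant C~_k.
    vhat = np.fft.fft(v)                      # one shared forward FFT
    out = np.zeros(len(v))
    for phi, lam in zip(phis, lams):          # ell inexpensive terms
        out += phi * np.fft.ifft(vhat / np.sqrt(lam)).real
    return out                                # total cost O(ell N log N)
```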
Finally, by substituting $P_3^{-1}$ for $\hat{A}^{-1}$ in the Sherman-Morrison-Woodbury formula (3.5), we obtain the preconditioner
We remark that both $P_3^{-1}$ and $P_4^{-1}$ can be taken as preconditioners. It is clear that implementing $P_4^{-1}$ requires more computational work than $P_3^{-1}$. However, $P_4^{-1}$ is a rank-2 update of $P_3^{-1}$, and $g_\alpha$, $g_\beta$ are independent of $t_j$. This means that, when applying the preconditioner $P_4^{-1}$ within the CG method, we can precompute $P_3^{-1}U$, as well as the product $U^{\intercal}P_3^{-1}U$ and the inverse of the 2-by-2 matrix $I + U^{\intercal}P_3^{-1}U$, ahead of time. Therefore, at each CG iteration with the preconditioner $P_4^{-1}$, besides the matrix-vector product with $P_3^{-1}$, only two inner products and two vector updates are needed. Hence, it is expected that $P_4^{-1}$ may perform better for one-dimensional problems.
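For illustration, this precomputation can be sketched as follows (assuming, as above, $A = \hat{A} + UU^{\intercal}$ with $U\in\mathbb{R}^{N\times 2}$; `apply_p3` stands for a routine applying $P_3^{-1}$ to a vector, and all names are ours):

```python
import numpy as np

def p4_setup(apply_p3, U):
    # Precompute the t_j-independent pieces once: P3U = P_3^{-1} U and the
    # inverse of the 2 x 2 capacitance matrix I + U^T P_3^{-1} U.
    P3U = np.column_stack([apply_p3(U[:, 0]), apply_p3(U[:, 1])])
    S = np.linalg.inv(np.eye(2) + U.T @ P3U)
    return P3U, S

def p4_apply(apply_p3, P3U, S, U, v):
    # P_4^{-1} v = w - P3U (S (U^T w)) with w = P_3^{-1} v: beyond one
    # application of P_3^{-1}, only two inner products and two vector
    # updates are needed per CG iteration.
    w = apply_p3(v)
    return w - P3U @ (S @ (U.T @ w))
```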
4. Analysis of the preconditioner
Since $P_4^{-1}$ is obtained by substituting $P_3^{-1}$ for $\hat{A}^{-1}$ in the expression of $A^{-1}$, the approximation property of $P_4^{-1}$ to $A^{-1}$ depends on how close $P_3^{-1}$ is to $\hat{A}^{-1}$. Therefore, in this section, we study the difference between $P_3^{-1}$ and $\hat{A}^{-1}$.
We first introduce the off-diagonal decay property [20], which is crucial for studying the circulant approximation of the Toeplitz matrix.
Definition 4.1. Let $A = [a_{i,j}]_{i,j\in I}$ be a matrix, where the index set is $I = \mathbb{Z}$, $\mathbb{N}$, or $\{1,2,\ldots,N\}$. We say $A$ belongs to the class $\mathcal{L}_s$ if
holds for $s>1$ and some constant $c>0$, and we say $A$ belongs to the class $\mathcal{E}_r$ if
holds for $r>0$ and some constant $c>0$.
The following results hold for the off-diagonal decay matrix classes $\mathcal{L}_s$ and $\mathcal{E}_r$ [3,11,15,20].
Lemma 4.1. Let $A = [a_{i,j}]_{i,j\in I}$ be a nonsingular matrix, where the index set is $I = \mathbb{Z}$, $\mathbb{N}$, or $\{1,2,\ldots,N\}$.
(1) If $A\in\mathcal{L}_s$ for some $s>1$, then $A^{-1}\in\mathcal{L}_s$.
(2) If $A\in\mathcal{L}_{1+s_1}$ and $B\in\mathcal{L}_{1+s_2}$ are finite matrices, then $AB\in\mathcal{L}_{1+s}$, where $s = \min\{s_1, s_2\}$.
(3) If $A\in\mathcal{E}_{r_1}$ and $B\in\mathcal{E}_{r_2}$ are finite matrices, then $AB\in\mathcal{E}_{r}$ for some constant $0<r<\min\{r_1, r_2\}$.
(4) Let $A$ be a banded finite matrix that is Hermitian positive definite, and let $f$ be an analytic function on $[\lambda_{\min}(A), \lambda_{\max}(A)]$ such that $f(\lambda)$ is real for real $\lambda$, where $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimal and maximal eigenvalues of $A$, respectively. Then $f(A)$ has the off-diagonal exponential decay property (4.2). In particular, taking $f(x) = x^{-1}$, $x^{-1/2}$, and $x^{1/2}$, respectively, shows that $A^{-1}$, $A^{-1/2}$, and $A^{1/2}$ have the off-diagonal exponential decay property (4.2).
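As a quick numerical illustration of item (4) (not part of the analysis; the tridiagonal matrix below is a generic symmetric positive definite stand-in):

```python
import numpy as np

n = 200
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # banded SPD matrix
w, V = np.linalg.eigh(A)                                  # A = V diag(w) V^T
inv_sqrt_A = (V * w ** -0.5) @ V.T                        # f(A) with f(x) = x^{-1/2}
row = np.abs(inv_sqrt_A[n // 2])
print(row[n // 2 : n // 2 + 8])   # entries shrink geometrically off the diagonal
```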
Assuming that $d_+(x), d_-(x)\in C[x_L, x_R]$, it follows from Lemma 2.1 and Lemma 4.1 that we have
for $\frac{1}{2}<\alpha,\beta<1$. Therefore, it holds that $\tilde{A}, \tilde{A}^{-1}\in\mathcal{L}_{1+\min\{\alpha,\beta\}}$.
4.1. Approximation property of $P_1^{-1}$ to $\tilde{A}^{-1}$
Given an integer $m$, let $G_{\alpha,m}$ and $G_{\beta,m}$ be the $(2m+1)$-banded approximations of $G_\alpha$ and $G_\beta$, respectively, that is,
In the following discussion, we let
which can be regarded as a kind of banded approximation of the matrix $K_i$ defined in (3.8). We remark that, in actual applications, we still employ $K_i$ in (3.8) to construct the preconditioners.
Define
As $\tilde{A}_m$ and $K_i$ are banded and symmetric positive definite, it follows from Lemma 4.1 that $\tilde{A}_m^{-1}$, $K_i^{-1}$ and $K_i^{-1/2}$ have the off-diagonal exponential decay property. Meanwhile, we have
and hence
For a given $\epsilon>0$, it follows from Theorem 12 in [7] that there exists an integer $m_1>0$ such that for all $m\geq m_1$ we have
Therefore, it holds that
and
Now we turn to estimating the upper bound of $\|P_1^{-1} - \tilde{A}_m^{-1}\|_2$. Since both $P_1^{-1}$ and $\tilde{A}_m^{-1}$ are symmetric, we have
For the first term, we have the following estimate.
Lemma 4.2. Let $K_j$ be defined by (4.3). Assume that $0 < d_{\min} \leq d_+(x), d_-(x) \leq d_{\max}$. Then for a given $\epsilon>0$, there exist an integer $N_1>0$ and two positive constants $c_3, c_4$ such that
where $\Delta_{N_1}d_{+,j} = \max\limits_{j-N_1\leq k\leq j+N_1} |d_{+,k} - d_{+,j}|$ and $\Delta_{N_1}d_{-,j} = \max\limits_{j-N_1\leq k\leq j+N_1} |d_{-,k} - d_{-,j}|$.
Proof. We have
As shown in Theorem 2.2 of [3], for a given $\epsilon>0$, we can find a polynomial $p_k(t) = \sum_{\ell=0}^{k} a_\ell t^\ell$ of degree $k$ such that
where $[\,\cdot\,]_{l,r}$ denotes the $(l,r)$-th entry of a matrix. Then we can write (4.7) as
where
Note that $p_k(K_i)$ is a $(2km+1)$-banded matrix and $K_i^{-1/2}$ has the off-diagonal exponential decay property, so $K_i^{-1/2} - p_k(K_i)$ also has the off-diagonal exponential decay property. According to Lemma 4.1, $H^{(1)}_{ij}$ has the off-diagonal exponential decay property as well. Hence, analogous to the proof of Lemma 3.8 in [11], we can find constants $\hat{c}>0$ and $\hat{r}>0$ such that
On the other hand, we denote the $j$-th column of $H^{(1)}_{ij}$ by $[h_{1,j}, h_{2,j}, \ldots, h_{N,j}]^{\intercal}$, and let $\tilde{K}_i \triangleq K_i^{-1/2} - p_k(K_i)$. Observe that
Then, based on (4.8) and the fact that $K_j^{-1/2}$ has the off-diagonal exponential decay property, we can show that there exists a constant $\hat{c}_1$ such that
Denote
From (4.10) and (4.11), we can show that there exists a constant $c_1>0$ such that
Analogously, we can show that there exists a constant $c_2>0$ such that
Now we turn to $H^{(2)}_{ij}$. It follows from the proof of Lemma 3.11 in [11] that $p_k(K_i) - p_k(K_j)$ has the form
where
It is easy to see that both $\tilde{H}^{(2)}_{ij}$ and $\hat{H}^{(2)}_{ij}$ have the off-diagonal exponential decay property. As
there exists an integer $N_1$ such that
where $\Delta_{N_1}d_{+,j} = \max\limits_{j-N_1\leq k\leq j+N_1} |d_{+,k} - d_{+,j}|$ and $\Delta_{N_1}d_{-,j} = \max\limits_{j-N_1\leq k\leq j+N_1} |d_{-,k} - d_{-,j}|$.
Let $c_3 = \tilde{c}$ and $c_4 = c_1 + c_2 + 4d_{\max}$. It follows from (4.12), (4.13) and (4.15) that
□
Now we consider the second term in (4.6).
Lemma 4.3. Let $K_j$ and $\tilde{A}_m$ be defined by (4.3) and (4.4), respectively. Assume that $0 < d_{\min} \leq d_+(x), d_-(x) \leq d_{\max}$. Then for a given $\epsilon>0$, there exist an integer $N_2>0$ and constants $c_5, c_6>0$ such that
Proof. Let
then we have
On the one hand, observe that
we have
where
and
As $K_j^{-1}$ has the off-diagonal exponential decay property, similar to the derivation of (4.15) we can show that, given an $\epsilon>0$, there exist an integer $\tilde{N}_1>0$ and a constant $\tilde{c}_1>0$ such that
and
Therefore, there exist positive constants $\tilde{c}_2$ and $\tilde{c}_3$ such that
On the other hand, observe that
we have
Since $G_\alpha$ and $G_\beta$ have the off-diagonal decay property, as shown in the proof of Lemma 4.7 in [15], given an $\epsilon>0$, there exist an integer $\tilde{N}_2>0$ and a constant $\tilde{c}_4>0$ such that
and
where $\Delta_{\tilde{N}_2}d_{+,k} = \max\limits_{k-\tilde{N}_2\leq l\leq k+\tilde{N}_2} |d_{+,l} - d_{+,k}|$ and $\Delta_{\tilde{N}_2}d_{-,k} = \max\limits_{k-\tilde{N}_2\leq l\leq k+\tilde{N}_2} |d_{-,l} - d_{-,k}|$. Therefore, there exist positive constants $\tilde{c}_5$ and $\tilde{c}_6$ such that
Now, let
It follows from (4.18) and (4.19) that, given an $\epsilon>0$, there exist an integer $N_2>0$ and constants $c_5, c_6>0$ such that
The proof is completed.
□
Combining (4.5), Lemma 4.2 and Lemma 4.3, we obtain the following theorem.
Theorem 4.1. Assume $0 < d_{\min} \leq d_+(x), d_-(x) \leq d_{\max}$. Then, given an $\epsilon>0$, there exist an integer $N_3>0$ and constants $c_7, c_8>0$ such that
Remark 1. If $d_+(x), d_-(x)\in C[x_L, x_R]$, then $\max\limits_{1\leq k\leq N}\max\{\Delta_{N_3}d_{+,k}, \Delta_{N_3}d_{-,k}\}$ can be made very small for sufficiently large $N$, which implies that $P_1^{-1}$ will then be a good approximation to $\tilde{A}^{-1}$.
4.2. Approximation of $P_2^{-1}$ to $P_1^{-1}$
Now we consider the difference between $P_2^{-1}$ and $P_1^{-1}$. As shown in Section 3.3 of [11], for a given $\epsilon>0$, there exists a polynomial $p_k(t) = \sum_{\ell=0}^{k} a_\ell t^\ell$ of degree $k$ such that
Define
and
Then we have
Lemma 4.4. Let $\tilde{P}_1$ and $\tilde{P}_2$ be defined by (4.22) and (4.23), respectively. Then we have
Proof. Observe that
It is easy to see that
The matrix $G_\alpha$ and its Strang circulant approximation $C_\alpha$ have the following form
where $K_0, K_1\in\mathbb{R}^{m\times m}$. Therefore, we have
Similarly, we can show that
Therefore, we can see from (4.25), (4.26), (4.27) and (4.28) that $K_i - C_i$ is of the form
where "$+$" denotes an $m$-by-$m$ block matrix.
Analogous to the proof of Lemma 3.11 in [11], it follows from
and
that $\mathrm{rank}(\tilde{P}_2 - \tilde{P}_1)\leq 8km$. □
On the other hand, for $P_1^{-1} - \tilde{P}_1$ and $P_2^{-1} - \tilde{P}_2$, we can directly apply the proof of Lemma 3.12 in [11] to obtain the following lemma.
Lemma 4.5. Let $\tilde{P}_1$ and $\tilde{P}_2$ be defined by (4.22) and (4.23), respectively. Then, given an $\epsilon>0$, there exist constants $\tilde{c}_7>0$ and $\tilde{c}_8>0$ such that
By Lemma 4.4 and Lemma 4.5, we obtain the following theorem.
Theorem 4.2. Let $P_1^{-1}$ and $P_2^{-1}$ be defined by (3.9) and (3.10), respectively. Then, given an $\epsilon>0$, there exist constants $\tilde{c}_9>0$ and $k>0$ such that
where $E = (P_2^{-1} - \tilde{P}_2) + (\tilde{P}_1 - P_1^{-1})$ is a small-norm matrix with $\|E\|_2 < \tilde{c}_9\,\epsilon$, and $M = \tilde{P}_2 - \tilde{P}_1$ is a low-rank matrix with $\mathrm{rank}(M)\leq 8km$.
4.3. Approximation of $P_3^{-1}$ to $P_2^{-1}$
We can directly apply the analysis of Section 3.4 in [11] to reach the following conclusion.
Lemma 4.6. Let $P_2^{-1}$ and $P_3^{-1}$ be defined by (3.10) and (3.11), respectively. Then, for a given $\epsilon>0$, there exist an integer $\ell_0>0$ and a constant $c_9>0$ such that
holds when the number of interpolation points in $P_3^{-1}$ satisfies $\ell\geq\ell_0$.
Summarizing our analysis, we have the following theorem.
Theorem 4.3. Let $P_3^{-1}$ and $A$ be defined by (3.11) and (3.2), respectively. Then we have
where $E$ is a low-rank matrix and $S$ is a small-norm matrix.
Proof. Observe that
By (3.2), (3.7), Theorems 4.1 and 4.2, and Lemma 4.6, we can show that
where $E_1$ and $E_2$ are two low-rank matrices, and $S_1$ and $S_2$ are two small-norm matrices. Therefore, it is easy to see from (4.29) that
where $E$ is a low-rank matrix and $S$ is a small-norm matrix.
□
5. Numerical experiments
In this section, we carry out numerical experiments to show the performance of the proposed preconditioners $P_3$ and $P_4$. We use the preconditioned conjugate gradient (PCG) method to solve the symmetric positive definite linear systems (2.6). In all experiments, we set the stopping criterion as
where $r_k$ is the residual vector after $k$ iterations and $b$ is the right-hand side.
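For reference, a plain PCG loop implementing this kind of stopping rule looks as follows (a sketch: `apply_A` and `apply_P` stand for routines applying the coefficient matrix and a preconditioner, and the tolerance value is an assumption, not necessarily the one fixed above):

```python
import numpy as np

def pcg(apply_A, b, apply_P, tol=1e-7, maxit=500):
    # Preconditioned CG with the stopping rule ||r_k||_2 / ||b||_2 <= tol.
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_P(r)                    # z = P^{-1} r
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for k in range(maxit):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) / nb <= tol:
            return x, k + 1
        z = apply_P(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```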
For the sake of comparison, we also test two other preconditioners, which were proposed in [7]. One is the Strang circulant preconditioner $C_s$, defined as
where $s(M)$, $s(G_\alpha)$ and $s(G_\beta)$ are the Strang circulant approximations of the Toeplitz matrices $M$, $G_\alpha$ and $G_\beta$, and $\tilde{d}_+$ and $\tilde{d}_-$ are the mean values of the diagonals of $\tilde{D}_+$ and $\tilde{D}_-$, respectively. The other is the banded preconditioner $B(\ell)$, defined as
where $\tilde{B}_\alpha$ and $\tilde{B}_\beta$ are banded approximations of $\tilde{G}_\alpha$ and $\tilde{G}_\beta$, which are of the form
respectively.
Example 1. We consider the balanced fractional diffusion equation (1.1) with $(x_L, x_R) = (0, 1)$ and $T = 1$. The diffusion coefficients are given by
and $f(x,t) = 1000$.
In the numerical tests, we set the number of temporal grids as $M_t = N/2$. The results for different values of $\alpha$ and $\beta$ are reported in Table 1. We do not report the numerical results of the unpreconditioned CG method as it converges very slowly.
In the table, we use "$P_3(\ell)$" and "$P_4(\ell)$" to denote our proposed preconditioners with $\ell$ being the number of interpolation points, "Iter" to denote the average number of iteration steps required to solve the discrete fractional diffusion equation (2.6) at each time step, and "CPU" to denote the total CPU time in seconds for solving the whole discretized system.
From Table 1, we can see that the banded preconditioner $B(\ell)$ and our proposed preconditioners $P_3(\ell)$ and $P_4(\ell)$ perform much better than the circulant preconditioner $C_s$ in terms of both iteration count and elapsed time. The banded preconditioner $B(\ell)$ performs better when $\alpha = \beta = 0.9$; in this case, the off-diagonal entries of the coefficient matrix decay to zero very quickly. In the other cases, $P_3(\ell)$ and $P_4(\ell)$ perform much better.
We also see that the performance of $P_3(\ell)$ and $P_4(\ell)$ is very robust, in the sense that the average iteration counts are almost the same for different values of $\alpha$ and $\beta$, as well as for different mesh sizes.
Example 2. Consider the two-dimensional balanced fractional diffusion equation
where $\frac{1}{2} < \alpha_1, \beta_1, \alpha_2, \beta_2 < 1$, and $d_\pm(x,y,t) > 0$, $e_\pm(x,y,t) > 0$ for $(x,y,t)\in\Omega\times(0,T]$.
As was shown in [7], the finite volume discretization leads to
where
Here $M_x$, $M_y$ are tridiagonal matrices and $D_\pm^{(j)}$, $E_\pm^{(j)}$ are block diagonal matrices
where $N_x$ and $N_y$ denote the number of grid points in the $x$-direction and $y$-direction, respectively.
Similar to the construction of $P_3$ in (3.11), we can define an approximate inverse preconditioner for the two-dimensional problem. For convenience of exposition, we omit the superscript in $A^{(j)}$. To construct the preconditioner, we define the following circulant matrices
Then we choose the interpolation points $\{(\tilde{x}_i, \tilde{y}_j)\}$, $i = 1,2,\ldots,\ell_x$, $j = 1,2,\ldots,\ell_y$, and we approximate $\tilde{C}_{i,j}^{-1/2}$ by
where $F$ refers to the two-dimensional discrete Fourier transform matrix of size $N_x N_y$, and $\tilde{\Lambda}_{k,l}$ is the diagonal matrix whose diagonal entries are the eigenvalues of $\tilde{C}_{k,l}$ for $1\leq k\leq \ell_x$ and $1\leq l\leq \ell_y$. Then we obtain the resulting preconditioner
where $\Phi_{k,l} = \mathrm{diag}\left(\phi_{k,l}(x_1,y_1), \phi_{k,l}(x_1,y_2), \ldots, \phi_{k,l}(x_{N_x},y_{N_y})\right)$ for $1\leq k\leq \ell_x$ and $1\leq l\leq \ell_y$.
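As in the one-dimensional case, each term reduces to FFTs, now two-dimensional ones. A minimal sketch (assuming each $\tilde{C}_{k,l}$ is block-circulant with circulant blocks, so that the two-dimensional DFT diagonalizes it; the names are ours):

```python
import numpy as np

def bccb_inv_sqrt_apply(c2d, V):
    # Apply C^{-1/2} to a grid function V (shape Ny x Nx), where C is a
    # symmetric positive definite block-circulant matrix with circulant
    # blocks whose first column, reshaped to Ny x Nx, is c2d. The 2D DFT
    # diagonalizes C, so the cost is O(Nx Ny log(Nx Ny)).
    lam = np.fft.fft2(c2d).real
    return np.fft.ifft2(np.fft.fft2(V) / np.sqrt(lam)).real
```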
In the tests, we set $\Omega = (0,1)\times(0,1)$, $T = 1$,
and $f(x,y,t) = 1000$. We also set $N_x = N_y = N$, $M_t = N/2$ and $\ell_x = \ell_y = \ell$. The maximum iteration number is set to 500. As it is expensive to compute the second term of the Sherman-Morrison-Woodbury formula (3.5) for the two-dimensional problem, we only test the preconditioner $P_3(\ell)$.
The results are reported in Table 2. We can see that, for the two-dimensional problem, although our proposed preconditioners $P_3(3)$ and $P_3(4)$ take more iteration steps to converge than the banded preconditioners $B(3)$ and $B(4)$, our new preconditioners are much cheaper to apply than the banded and circulant preconditioners in terms of CPU time.
6. Concluding remarks
In this paper, we considered preconditioners for the linear systems resulting from the discretization of balanced fractional diffusion equations. The coefficient matrices are symmetric positive definite and can be written as the sum of a tridiagonal matrix and two Toeplitz-times-diagonal-times-Toeplitz matrices. We investigated approximate inverse preconditioners and showed that the spectra of the preconditioned matrices are clustered around 1. Numerical experiments show that the conjugate gradient method, when applied to the preconditioned systems, converges very quickly. Moreover, we extended our preconditioning technique to two-dimensional problems.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This research is supported by NSFC grant 11771148 and Science and Technology Commission of Shanghai Municipality grant 22DZ2229014.
Conflict of interest
All authors declare no conflicts of interest in this paper.