
Image blur, such as motion blur, is a common disturbance in real-world photography, so image deblurring is of great importance for downstream vision tasks. Motion blur can be modeled as the convolution of the sharp image with a blur kernel, which is typically unknown in real-world scenarios. The image degradation can be modeled as
$$B = L \otimes K + n, \qquad (1.1)$$
where B, L, and K denote the motion-blurred image, the sharp image, and the blur kernel (point spread function), respectively, and n represents the additive white Gaussian noise with a mean of 0 and a standard deviation of σ, which is introduced during the image degradation process. The symbol ⊗ denotes the convolution operator.
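To make the degradation model concrete, the following minimal sketch (in Python, assuming a grayscale image with intensities in [0, 1]; the function and parameter names are illustrative, not from the paper) synthesizes an observation according to Eq (1.1). Clipping the result to [0, 1] is also what produces the saturated pixels that later violate the linear model.

```python
import numpy as np
from scipy.signal import convolve2d

def degrade(sharp, kernel, sigma=0.01, rng=None):
    """Synthesize a blurred observation B = L (convolved with) K + n, Eq (1.1)."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve2d(sharp, kernel, mode="same", boundary="symm")
    noisy = blurred + rng.normal(0.0, sigma, size=blurred.shape)
    return np.clip(noisy, 0.0, 1.0)   # clipping creates saturated (outlier) pixels

# Example: a 15x15 horizontal motion kernel applied to a stand-in image.
kernel = np.zeros((15, 15))
kernel[7, :] = 1.0
kernel /= kernel.sum()                 # K(z) >= 0 and sum K(z) = 1
sharp = np.random.default_rng(0).random((64, 64))
observed = degrade(sharp, kernel)
```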
Blind deblurring aims to reconstruct both the blur kernel K and the sharp latent image L from a single blurred input B. However, this problem is ill-posed because different combinations of L and K can produce the same observation B. To address this, it is essential to incorporate prior knowledge to avoid trivial or locally optimal solutions.
In recent years, researchers have extensively explored blur kernel estimation based on prior knowledge of images [1,2,3]. Li et al. [4] utilized a deep network to formulate the image prior as a binary classifier. Levin et al. [5] employed hyper-Laplacian priors to model the latent image and derived a simple approximation method to optimize the maximum a posteriori (MAP) estimate. In the pursuit of efficient blind deblurring, various prior terms tailored to enhance image clarity have been integrated into the MAP framework [6,7,8]. Krishnan et al. [9] utilized an L1/L2 regularization scheme to sparsely represent the gradient image; its main feature is to adapt the L1 norm regularization by using the L2 norm of the image gradient as a weight during the iterations. However, this approach is not conducive to recovering image details in the early stages of the optimization. Meanwhile, Xu et al. [10] proposed an unnatural L0 sparse representation to eliminate detrimental small-amplitude structures, providing a unified framework for both uniform and non-uniform motion deblurring. Liu et al. [11] observed that the surface maps of intermediate latent images containing detrimental structures typically have a large surface area, and introduced an additional surface-aware prior based on the L0 norm to enforce sparsity on the image gradient, thereby preserving sharp edges while removing unfavorable microstructures from the intermediate latent images.
These methods still fail on images with many saturated pixels and large blur kernels. Therefore, recent works concentrate on image reconstruction with outliers for non-blind deblurring [12] and blind deblurring tasks [13,14,15]. Chen et al. [16] proposed to remove outliers by adopting a confidence map and further shrank the outliers by multiplying with its inverse [17]. Zhang et al. [18] proposed an intermediate image correction method for saturated pixels that improves the quality of saturated image restoration by screening the intermediate image with a Bayesian posterior estimate and excluding pixels that adversely affect the blur kernel estimation. Although much progress has been made in blur estimation for natural images and in image reconstruction, current blind deblurring algorithms still face several major problems. First, most motion blur estimation methods assume a linear blurring process [19,20,21]. In practice, blurred images are often accompanied by heavy noise and outliers, such as saturated pixels, which the linear blur model cannot describe effectively, leading to poor performance on blurred images with outlier pixels. In particular, blurred images taken in low-light environments contain heavy noise and many outliers. Therefore, effectively coping with the interference caused by saturated pixels has great practical value.
Recently, deep learning methods based on Bayesian theory have also been developed [22,23,24]. Kingma et al. [22] proposed the auto-encoding variational Bayes algorithm, where the encoder maps the input into a distribution within the latent space, and the decoder maps sample points from the latent space back to the input space. Zhang et al. [20] and Ren et al. [23] constructed blind deblurring networks based on MAP estimation. However, these deep learning-based methods can easily fail when the data distribution differs from that of the training data. For this reason, the proposed method focuses on the conventional iterative blind deblurring approach.
This work investigates a blind deblurring optimization model for saturated pixels established under the MAP framework. The intermediate image and blur kernel are solved by alternating iterations, so that the kernel estimate eventually converges to the blur kernel of the observed image. To cope with the highly ill-posed nature of blind deblurring, image and blur kernel regularization terms are used to constrain the model. Although the dark channel prior (DCP) has achieved excellent results, it often performs unsatisfactorily on images with large blur kernels or saturated pixels. Therefore, we utilize the pixel screening strategy [18] to further correct the intermediate images in these cases. By distinguishing whether a pixel conforms to the linear degradation assumption, the proposed method reduces the influence of unfavorable structures and obtains a more accurate blur kernel.
We use maximum a posteriori (MAP) estimation to construct a probabilistic model relating the sharp image, the blur kernel, and the blurred image. Given the blurred image, the sharp image and the blur kernel are estimated by maximizing the posterior probability, under the assumption that the sharp image L and the blur kernel K are independent of each other. According to the conditional probability formula, we obtain
$$(L, K) = \arg\max_{L,K} P(L, K \mid B) = \arg\max_{L,K} \frac{P(B \mid L, K)\, P(L)\, P(K)}{P(B)}. \qquad (2.1)$$
Taking the negative logarithm of both sides of the above equation and noting that P(B) does not depend on L or K, we derive an equivalent form:
$$-\log P(L, K \mid B) \propto -\log P(B \mid L, K) - \log P(L) - \log P(K). \qquad (2.2)$$
Assume that n is additive white Gaussian noise with a mean of 0 and a variance of $\sigma^2$, so that B follows a normal distribution when L and K are given. The solution of L and K is then transformed into the following minimization problem:
$$(L, K) = \arg\min_{L,K} \|L \otimes K - B\|_2^2 + \Phi(L) + \Psi(K). \qquad (2.3)$$
The first term on the right-hand side is the data fidelity term, and the second and third terms are regularization terms that encode prior knowledge, including statistical and distributional properties of the sharp image and the blur kernel. Blind deblurring first estimates the blur kernel and then recovers the sharp image from the blurred input.
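For completeness, a short derivation, with multiplicative constants absorbed into the regularizers, shows how the Gaussian noise assumption turns (2.2) into (2.3):

$$-\log P(B \mid L, K) = \frac{1}{2\sigma^{2}}\, \|L \otimes K - B\|_{2}^{2} + \text{const},$$

so that, writing $\Phi(L) \propto -\log P(L)$ and $\Psi(K) \propto -\log P(K)$ and dropping the additive constant, minimizing (2.2) over $(L, K)$ is equivalent, up to a positive scaling of the data term, to the problem in (2.3).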
Motion blur is usually caused by relative motion between the camera and the subject, which shifts pixels along a specific direction and distance and thus degrades the image. We assume that all entries of the blur kernel are non-negative and sum to one, that is,
$$K(z) \ge 0, \qquad \sum_{z \in \Omega_k} K(z) = 1,$$
where $\Omega_k$ denotes the support domain of the blur kernel.
Since blur kernels are sparse, we constrain the possible blur kernels as follows:
$$\Psi(K) = \|K\|_p, \qquad (2.4)$$
where $\|\cdot\|_p$ denotes the $p$-norm. Since the L2 norm constraint emphasizes the smoothness of the blur kernel, it leads to more stable kernel estimates. Therefore, we use the L2 norm to constrain the blur kernel in this paper.
The dark channel is a natural metric for distinguishing sharp images from blurry ones [25]. He et al. [26] first proposed the dark channel for image haze removal. The dark channel of an image L is defined as the minimum intensity within a local image patch, as follows:
$$D_{i,j}(L) = \min_{(x,y) \in N(i,j)} \Big( \min_{c \in \{r,g,b\}} L^{c}_{x,y} \Big), \qquad (2.5)$$
where N(i,j) is the image patch centered at pixel (i,j). Experiments show that the dark channels of sharp images are sparser than those of blurred images. A likely reason is that blur replaces each pixel with a weighted sum of pixel values in its local neighborhood, which increases the dark channel values. Therefore, we use the L0 norm of the dark channel as the image regularization term.
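A minimal sketch of the dark channel computation in Eq (2.5), in Python; the patch size of 35 and the function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=35):
    """Dark channel of an H x W x 3 image (Eq 2.5): minimum over the color
    channels, followed by a minimum filter over the local patch N(i, j)."""
    per_pixel_min = image.min(axis=2)                       # min over c in {r, g, b}
    return minimum_filter(per_pixel_min, size=patch_size)   # min over the patch

def dark_channel_sparsity(image, patch_size=35, eps=1e-3):
    """Count of non-(near-)zero dark channel entries, a proxy for ||D(L)||_0."""
    return int(np.count_nonzero(dark_channel(image, patch_size) > eps))
```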
The deblurring model based on the DCP solves the following problem:
$$\min_{L,K} \|L \otimes K - B\|_2^2 + \lambda \|D(L)\|_0 + \mu \|\nabla L\|_0 + \gamma \|K\|_2^2. \qquad (2.6)$$
The first term is a fidelity term that constrains the convolution of the recovered image with the blur kernel to be as close as possible to the observation. The $\|\nabla L\|_0$ term preserves large image gradients, and $\|D(L)\|_0$ measures the sparsity of the dark channel. Blind deconvolution methods commonly optimize L and K alternately during the iterations; the purpose of this alternating optimization is to progressively refine the motion blur kernel K and the latent image L.
In this work, the following two subproblems are solved by the alternating iteration method:
$$\min_{L} \|L \otimes K - B\|_2^2 + \lambda \|D(L)\|_0 + \mu \|\nabla L\|_0, \qquad \min_{K} \|L \otimes K - B\|_2^2 + \gamma \|K\|_2^2. \qquad (2.7)$$
Specifically, at the k-th iteration, L can be solved efficiently using the fast Fourier transform. When L is given, the kernel estimation in Eq (2.7) is a least-squares problem. Gradient-based kernel estimation has shown superior performance [11], and the kernel estimation model is formulated as
$$K^{k+1} = \arg\min_{K} \|\nabla L^{k+1} \otimes K - \nabla B\|_2^2 + \gamma \|K\|_2^2. \qquad (2.8)$$
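Since Eq (2.8) is a quadratic problem in K, it admits a closed-form solution in the Fourier domain under a circular-convolution assumption. The sketch below is one plausible implementation; the function name, gradient filters, kernel size, and cropping/normalization details are assumptions for illustration rather than the authors' exact code:

```python
import numpy as np

def estimate_kernel(L, B, gamma=2.0, ksize=31):
    """Solve Eq (2.8) for the kernel with FFTs (Tikhonov-regularized least
    squares in the gradient domain). L and B are same-sized grayscale arrays."""
    def grads(img):
        gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    Lx, Ly = grads(L)
    Bx, By = grads(B)
    FLx, FLy = np.fft.fft2(Lx), np.fft.fft2(Ly)
    FBx, FBy = np.fft.fft2(Bx), np.fft.fft2(By)

    numer = np.conj(FLx) * FBx + np.conj(FLy) * FBy
    denom = np.abs(FLx) ** 2 + np.abs(FLy) ** 2 + gamma
    K_full = np.real(np.fft.ifft2(numer / denom))

    # Crop around the center and project onto the constraints K >= 0, sum K = 1.
    K_full = np.fft.fftshift(K_full)
    cy, cx = K_full.shape[0] // 2, K_full.shape[1] // 2
    h = ksize // 2
    K = np.maximum(K_full[cy - h:cy + h + 1, cx - h:cx + h + 1], 0.0)
    return K / max(K.sum(), 1e-12)
```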
Normally, blind image deblurring follows the basic linear blurring assumption of Eq (1.1). However, methods based on this assumption do not yield satisfactory results on images with many saturated pixels. When outliers are present, intermediate latent images estimated with the traditional data fidelity term contain significant artifacts, and even a small number of outliers severely degrades the quality of the estimated blur kernel, because these outliers do not fit the linear model.
An effective way to identify and discard outliers during the iterations is to assign different weights to the pixels while updating the latent image and the blur kernel. Pixels categorized as outliers receive a weight of zero so that they do not affect subsequent iterations [18]. We introduce a variable Z to indicate whether pixel (i,j) complies with the linearity assumption [12], and define the intermediate correction operator as
$$P^{k+1}_{i,j} = P(Z^{k+1}_{i,j} = 1 \mid B_{i,j}, K^{k}, L^{k+1}). \qquad (2.9)$$
According to the Bayes formula, we have
$$P(Z^{k+1}_{ij} = 1 \mid B_{ij}, K^{k}, L^{k+1}) = \frac{P(B_{ij} \mid Z^{k+1}_{ij} = 1, K^{k}, L^{k+1})\, P(Z^{k+1}_{ij} = 1 \mid K^{k}, L^{k+1})}{P(B_{ij} \mid K^{k}, L^{k+1})}. \qquad (2.10)$$
In this work, we assume that the noise n obeys a Gaussian distribution with a mean of 0 and a variance of $\sigma^2$. When $Z^{k+1}_{ij} = 1$, the degradation assumption holds, and we obtain
$$P(B_{ij} \mid Z^{k+1}_{ij} = 1, K^{k}, L^{k+1}) = \varphi_{ij}, \qquad (2.11)$$
where $\varphi_{ij} \sim \mathcal{N}\big((L^{k+1} \otimes K^{k})_{ij}, \sigma^2\big)$.
When $Z^{k+1}_{ij} = 0$, pixel (i,j) is considered an outlier, and the corresponding likelihood is approximated by a uniform distribution:
$$P(B_{ij} \mid Z^{k+1}_{ij} = 0, K^{k}, L^{k+1}) = \frac{1}{b - a}, \qquad (2.12)$$
where b and a correspond to the maximum and minimum values of the input image, respectively.
Given the intermediate image $L^{k+1}$ and kernel $K^{k}$, we use $p_0$ to denote the fraction of image pixels that deviate from the linear model, so the prior probability that a pixel deviates from Eq (1.1) is
$$P(Z^{k+1}_{ij} = 0 \mid K^{k}, L^{k+1}) = p_0, \qquad (2.13)$$
and we generally assume that roughly 0–10% of the pixels deviate. Accordingly, the probability that a pixel satisfies the linearity assumption of Eq (1.1), given the intermediate blur kernel and intermediate image, is
$$P(Z^{k+1}_{ij} = 1 \mid K^{k}, L^{k+1}) = 1 - p_0. \qquad (2.14)$$
According to the law of total probability, we obtain
$$P(B_{ij} \mid K^{k}, L^{k+1}) = \sum_{Z^{k+1}_{ij} \in \{0, 1\}} P(B_{ij} \mid Z^{k+1}_{ij}, K^{k}, L^{k+1})\, P(Z^{k+1}_{ij} \mid K^{k}, L^{k+1}) = \varphi_{ij}(1 - p_0) + \frac{p_0}{b - a}. \qquad (2.15)$$
Thus, with the above definitions, the pixel screening operator P is calculated as follows:
$$P^{k+1}_{i,j} = \frac{\varphi_{ij}(1 - p_0)}{\varphi_{ij}(1 - p_0) + p_0/(b - a)}. \qquad (2.16)$$
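A minimal sketch of the screening operator in Eqs (2.11)–(2.16), in Python; the noise level sigma, the outlier prior p0, and the function name are illustrative assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def screening_map(B, L, K, sigma=0.05, p0=0.05):
    """Per-pixel posterior probability that the linear model holds (Eq 2.16).
    Values near 1 keep a pixel; values near 0 flag an outlier such as a
    saturated pixel."""
    pred = convolve2d(L, K, mode="same", boundary="symm")      # (L conv K)_ij
    phi = np.exp(-(B - pred) ** 2 / (2.0 * sigma ** 2)) \
          / (np.sqrt(2.0 * np.pi) * sigma)                     # Gaussian likelihood, Eq (2.11)
    uniform = 1.0 / (B.max() - B.min() + 1e-12)                # 1 / (b - a), Eq (2.12)
    return phi * (1.0 - p0) / (phi * (1.0 - p0) + p0 * uniform)

# The corrected intermediate image L * screening_map(B, L, K) is then used
# in the kernel update of Eq (2.17).
```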
During the iterations, after obtaining the estimated intermediate image, we alternately estimate the blur kernel. Based on the intermediate correction operator, we screen and correct the pixels of the intermediate image: pixels with a high probability of deviation, which would otherwise harm the blur kernel estimation, are appropriately down-weighted. With the corrected intermediate image, we solve the following model to estimate the blur kernel:
$$K^{k+1} = \arg\min_{K} \|\nabla (L^{k+1} \circ P) \otimes K - \nabla B\|_2^2 + \gamma \|K\|_2^2, \qquad (2.17)$$
where $\circ$ denotes element-wise (Hadamard) multiplication.
As shown in Figure 1, this work operates in a multi-scale deblurring framework, in which kernel estimation proceeds from coarse to fine over an image pyramid. Given a color input image, we first convert it to grayscale, build an image pyramid, and resize the blur kernel with a down-sampling operation, obtaining a set of multi-resolution images. Starting from the coarsest level, the overall structure of the image is restored and a rough blur kernel is recovered using the correction operator. As the image and kernel resolution increase, finer details are gradually restored.
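The outer coarse-to-fine loop might be organized as in the following sketch, assuming the screening_map and estimate_kernel sketches above are in scope and that update_latent_image is a hypothetical helper solving the L-subproblem of Eq (2.7); the level count, scale factor, and inner iteration count are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine(B, ksize=31, n_levels=5, scale=0.75, n_iters=5):
    """Multi-scale kernel estimation loop (Figure 1): estimate a rough kernel
    at the coarsest level and refine it as the resolution increases."""
    K = np.ones((3, 3)) / 9.0                                  # rough initial kernel
    for level in reversed(range(n_levels)):
        B_s = zoom(B, scale ** level, order=1)                 # down-sampled observation
        k_s = max(3, int(round(ksize * scale ** level)) | 1)   # odd kernel size at this level
        K = zoom(K, (k_s / K.shape[0], k_s / K.shape[1]), order=1)
        K = np.maximum(K, 0.0)
        K /= K.sum()
        for _ in range(n_iters):
            L = update_latent_image(B_s, K)                    # L-subproblem of Eq (2.7), assumed helper
            P = screening_map(B_s, L, K)                       # Eq (2.16)
            K = estimate_kernel(L * P, B_s, ksize=k_s)         # Eq (2.17)
    return K
```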
To verify the effectiveness of the method, we conduct numerical experiments on both synthetic and real-world image datasets and compare the dark channel blind deblurring method before and after the correction improvement. We set the parameters $\lambda = 0.003$, $\mu = 0.003$, and $\gamma = 2$, and $p$ is an adjustable parameter in the range of 0.02 to 0.1. Figure 2 compares the results on the Levin dataset [5] as $p$ is varied from 0.02 to 0.16. The results show that the deblurring performance depends on the choice of $p$: the more outliers are present, the larger the value of $p$ that yields the best results.
The experimental hardware configuration is an Intel Core i5-10300 CPU, an NVIDIA GeForce GTX 1650 GPU, and 16.0 GB RAM, and the operating system is Windows 10 (64-bit). We use the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) as evaluation metrics.
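A small sketch of how these metrics can be computed with scikit-image, assuming the restored and reference images are float arrays on the same intensity range; the helper name is illustrative:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(restored, reference, data_range=1.0):
    """PSNR and SSIM between a restored image and its sharp reference."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=data_range)
    ssim = structural_similarity(reference, restored, data_range=data_range)
    return psnr, ssim
```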
We use the Levin dataset [5] and the Köhler dataset [27] to evaluate our method. The Levin dataset is a standard benchmark consisting of 32 blurred images synthesized from 4 original images and 8 different convolution kernels, each of size 255×255. The Köhler dataset is a standard benchmark consisting of 48 blurred images synthesized from 4 original images and 12 different convolution kernels, each of size 800×800. We compare our method with DCP [25], PMP [28], LMG [21], and Sat [17] to demonstrate its effectiveness.
In Figure 3, the left plot shows the PSNR comparison between the proposed method and state-of-the-art methods, where our method significantly improves the PSNR. The right plot shows the error-ratio comparison with and without intermediate correction; the proposed method attains the smallest error ratio. As shown in Figure 3 and Table 1, experimental results on the Levin dataset demonstrate that the proposed deblurring algorithm achieves significant performance improvements across a wide range of blur types and degrees. The improved method obtains higher PSNR and SSIM values, and its ability to reach a 100% success rate faster confirms its effectiveness in removing blur of different types and degrees.
Figure 4 shows that our method recovers the image and kernel with fewer artifacts and higher quality.
As shown in Figure 5 and Table 2, experimental results on the Köhler dataset show that the proposed deblurring method achieves significant performance improvement, with higher PSNR and SSIM values, demonstrating its effectiveness for image quality improvement. Figure 6 shows that the deblurred image produced by the proposed method achieves the best restoration with the fewest ringing artifacts; the restored kernel is cleaner and the image has the best visual quality.
As shown in Figure 7, we compare the dark channels of intermediate results with and without intermediate correction. Without the correction strategy, our method reduces to the DCP-based method [25]. The intermediate results show that our method restores sharper edges and clearer blur kernels, and the final recovered image contains more details, demonstrating that our method improves the deblurring quality for saturated images.
Estimating motion kernels from blurred images with saturated pixel regions is a challenging problem in image processing. As shown in Figure 8, we present three blurry images with saturated pixels to demonstrate the performance of our method. The first column shows the blurry images, and the second and third columns are the results of the DCP [25] and of our method, respectively. The results show that the intermediate correction not only improves the quality of the recovered images but also recovers clearer blur trajectories.
In this work, we introduce a blind deblurring method based on the DCP with an intermediate image correction strategy. To remove the adverse effect of outliers such as saturated pixels, we correct the intermediate image during the deblurring process. By assigning different weights to the intermediate image pixels, we improve the kernel estimation and thus enhance the final restoration quality. Experimental results show that our method significantly improves the accuracy and robustness of blur estimation for blurred images containing noise and outlier pixels.
Min Xiao: writing—original draft; Jinkang Zhang: writing—original draft; Zijin Zhu: writing—review and editing; Meina Zhang: methodology, supervision. All authors have read and agreed to the published version of the manuscript.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work is supported by the Science Foundation of China University of Petroleum, Beijing (No. 2462023YJRC008), Foundation of National Key Laboratory of Computational Physics (No. 6142A05QN23005), Postdoctoral Fellowship Program of CPSF (Nos. GZC20231997 and 2024M752451), National Natural Science Foundation of China (No. 62372467).
The authors have no conflicts to disclose.