
Citation: Peixian Zhuang, Xinghao Ding, Jinming Duan. Subspace-based non-blind deconvolution[J]. Mathematical Biosciences and Engineering, 2019, 16(4): 2202-2218. doi: 10.3934/mbe.2019108
Image deblurring has attracted considerable attention due to its wide applications in biomedical imaging [1,2], remote sensing [3,4], multimedia security [5,6], video monitoring [7,8], and related fields. Image blur commonly arises from hand-held camera shake and fast-moving objects during image capture. Deblurring problems fall into two classes: non-blind deconvolution, which restores the ideal image from the observed blurry image and a known blur kernel, and blind deconvolution, which recovers both the ideal image and the unknown blur kernel from the blurry image alone. This paper focuses on non-blind deconvolution, which is commonly addressed by introducing image priors to regularize an optimization objective function as follows:
\arg\min_{x} D(y, k \otimes x) + \lambda R(x) \tag{1.1}
where D(y, k⊗x) and R(x) are the data fidelity term and the prior regularization term, respectively, and λ is a weighting parameter that balances the two terms. Here y is the blurry image, x is the ideal image, k is the blur kernel, and ⊗ is the convolution operator. Non-blind deconvolution has developed significantly in recent years; existing methods are briefly reviewed below from the two viewpoints of data fidelity and prior regularization.
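To make the degradation model underlying the data term in (1.1) concrete, the following is an illustrative NumPy sketch (not the authors' code); it assumes periodic boundary conditions so that convolution becomes an element-wise product in the Fourier domain, and uses a hypothetical toy image and box kernel:

```python
import numpy as np

def blur(x, k):
    """Circular convolution k ⊗ x via the FFT (periodic boundary assumption)."""
    X = np.fft.fft2(x)
    K = np.fft.fft2(k, s=x.shape)  # zero-pad the kernel to the image size
    return np.real(np.fft.ifft2(K * X))

# Simulate a blurry observation y = k ⊗ x + n for a toy image.
rng = np.random.default_rng(0)
x = rng.random((32, 32))
k = np.ones((3, 3)) / 9.0          # simple normalized box-blur kernel
y = blur(x, k) + 0.01 * rng.standard_normal(x.shape)
```

Since the kernel sums to one, blurring a constant image leaves it unchanged, which is a quick sanity check on the FFT-based convolution.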
The first category of non-blind deconvolution is based on data fidelity, which measures the energy error between the blurry image and the convolution of the ideal image with the blur kernel. The form of the data fidelity term is mainly modelled according to the noise distribution. Most non-blind deconvolution methods employ the ℓ2 norm for data fidelity based on Gaussian noise statistics. Modified versions of the ℓ2-norm data fidelity model the spatial randomness of Gaussian noise by using partial derivatives of image noise of several orders [9,10]. A weighted ℓ2 norm of data fidelity has been developed to remove Poisson and mixed Poisson-Gaussian noise in image deblurring [11,12], and to tackle impulse noise removal during deblurring, the ℓ1 norm of data fidelity is presented in [13,14]. Both the ℓ2 and ℓ1 norms presume that the pixels of the image energy error obey the same distribution, i.e., that every pixel of the energy error carries the same visual importance for deblurring. However, image edges and details matter more than smooth areas for visual quality, and better visual results are achieved when the reconstruction errors on edges and details are smaller. The above-mentioned methods do not apply differential processing to different image contents and therefore fail to account for the difference in visual importance between smooth areas and image structures.
The second family of non-blind deconvolution is based on prior regularization. It exploits sparse priors of the image, its gradients, or its patches to constrain the optimization objective function so that an accurate solution is obtained. Many prior-regularization methods [15,16,17,18,19,20,21,22,23,24] have been put forward with various types of image priors, such as the ℓ2, ℓ1, ℓp (0<p<1), ℓ1/ℓ2 and ℓ0 norms. In [15], the Tikhonov regularization method uses the ℓ2 norm as the image prior by assuming the image sparse prior is Gaussian; the resulting objective function is a system of linear equations that can be optimized inexpensively. Compared with Gaussian priors, non-Gaussian priors of image gradients better preserve image edges and details. Total variation (TV) regularization [16,17] is the well-known ℓ1-norm gradient prior, which imposes small penalties on salient edges and details and large penalties on smooth areas. The works [18], [19] and [20,21] approximate the gradient distributions of natural images by employing the ℓp (0<p<1) (HLP), ℓ1/ℓ2 and ℓ0 norms, respectively. In addition, the nonlocal self-similarity of image patches has been widely used to restore edges and details. The centralized sparse representation model (CSR) [22,23] employs nonlocal self-similarity priors of image patches for the sparse coefficients and minimizes the sparse coding noise to improve performance. The joint statistical modeling method (JSM) [24] exploits image nonlocal self-similarity in the transform domain and local smoothness in the spatial domain. Unfortunately, these methods focus on the sparse prior of the overall image but ignore the differences among the sparse priors of different image subspaces.
In this paper, we develop a new subspace-based non-blind deconvolution method (named SND). The main contributions of this paper are as follows: (1) Considering the visual importance difference between image structures and smooth areas, we propose a subspace data fidelity that applies differential processing to different image contents, preserving image structures while suppressing noise and artifacts. (2) Observing both the differences among subspace priors and the difference between the whole-image prior and the subspace priors, we model the subspace priors differentially to exploit these differences for performance improvement. (3) We employ a least square integration method to fuse the deblurred estimates into the final recovered image and to compensate for the information loss of the subspace deblurrings. (4) We derive an efficient optimization method based on least squares and the fast Fourier transform for solving the proposed objective function.
The paper is organized as follows: in section 2, the motivations of data fidelity and subspace prior are described and then the implementation of the proposed algorithm is detailed. The effectiveness of the proposed algorithm is verified by numerous experimental results provided in section 3, and the conclusion is finally drawn in section 4.
Most non-blind deconvolution methods assume that the image pixels in the data fidelity term obey the same distribution, i.e., that they contribute equally to non-blind deconvolution in visual terms. However, the works [25,26] indicate that image edges and details are the features most sensitive to the human visual system: edges are the basic features for analyzing and understanding image contents, and their changes determine the basic contents of subjective perception. Figure 1(a) shows the data-fidelity error distributions (e = y − k⊗x and e_i = h_i ⊗ (y − k⊗x), i = 1, 2, 3, where the filters {h_i}_{i=1}^{3} are detailed in section 2.2) of the original and subspace images of Lena, and Figure 1(b) shows the corresponding distributions averaged over 100 images (the Google dataset in section 3). We can observe that the data-fidelity error distributions of the subspace images differ from those of the original images, and that different subspace images have different error distributions. Furthermore, different subspaces have different relative reconstruction accuracy for image edges and details, and better reconstruction of edges and details can be obtained with better deblurring methods. Based on these observations, differential processing of image structures and smooth areas is necessary for performance improvement.
Existing non-blind deconvolution methods employ the sparse prior of the original image in the spatial domain or other transform domains; however, they neither account for the prior difference between individual subspaces and the original image nor exploit the prior differences among subspaces. In fact, the sparse prior of the original image combines the priors of the different subspaces, and different subspaces have different sparse priors. Figure 2 shows the average gradient prior distributions over 100 images (the Google dataset in section 3) for the original image x and the subspace images x_i = h_i ⊗ x, i = 1, 2, 3 (the filters {h_i}_{i=1}^{3} are detailed in section 2.2). The gradient distributions of the original images differ from those of the subspace images, and different subspaces have different gradient distributions. Therefore, it is essential to precisely exploit the differences among subspace priors for performance improvement.
We develop a novel subspace-based non-blind deconvolution algorithm. The subspace data fidelity term, based on the visual importance difference between image structures and smooth areas, effectively realizes differential modeling of different image contents. Meanwhile, the subspace prior terms differentially model the sparse priors of the different subspaces. The final optimization objective function takes both the subspace data fidelity and the subspace priors into consideration:
x = \arg\min_{x} \sum_{i=1}^{M} \left\{ \| h_i \otimes (y - k \otimes x) \|_2^2 + \lambda_i \| h_i \otimes x \|_1 \right\} \tag{2.1}
where \sum_{i=1}^{M} ‖h_i ⊗ (y − k⊗x)‖_2^2 are the subspace data fidelity terms, which distinguish image structures from smooth areas, and \sum_{i=1}^{M} ‖h_i ⊗ x‖_1 are the subspace prior terms that capture the differences among subspace priors. M is the total number of image subspaces, and {λ_i}_{i=1}^{M} are weights balancing the terms. {h_i}_{i=1}^{M} denote linear and complete filters that divide the whole image x into subspaces x_i = h_i ⊗ x, i = 1, 2, ..., M. Three convolutional filters are adopted for their simplicity and effectiveness: the first-order derivative filters h_2 = [1, −1] and h_3 = [1; −1] extract two subspaces x_2 and x_3 that contain the high-frequency components of x, and h_1 is defined so that x is uniquely determined by x_1, x_2 and x_3; its frequency response H_1 satisfies H_1^2 = 1 − H_2^2 − H_3^2, where H_2 and H_3 are the frequency responses of h_2 and h_3, respectively, and 1 is the all-ones matrix. In addition, the ℓ1 norm is used to enforce the sparsity of the subspace priors because it better constrains image edges and details [28].
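The derivative-filter subspaces above can be illustrated with a short NumPy sketch (ours, not the authors' implementation; periodic convolution is assumed). It shows that h_2 and h_3 are high-pass filters: their frequency responses vanish at the zero frequency, so x_2 = h_2 ⊗ x and x_3 = h_3 ⊗ x drop the DC component and keep the high-frequency content of x:

```python
import numpy as np

def freq_response(h, shape):
    """2-D frequency response of a small filter h, zero-padded to `shape`."""
    return np.fft.fft2(h, s=shape)

shape = (16, 16)
h2 = np.array([[1.0, -1.0]])     # horizontal first-order derivative filter
h3 = np.array([[1.0], [-1.0]])   # vertical first-order derivative filter
H2 = freq_response(h2, shape)
H3 = freq_response(h3, shape)

def subspace(x, h):
    """Extract the subspace h ⊗ x by filtering in the Fourier domain."""
    return np.real(np.fft.ifft2(freq_response(h, x.shape) * np.fft.fft2(x)))
```

A flat (constant) image has only a DC component, so its derivative subspaces are identically zero — consistent with x_2 and x_3 carrying only high-frequency structure.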
We divide the problem (2.1) into three sub-problems below:
\hat{x} = \arg\min_{x} \| h_i \otimes (y - k \otimes x) \|_2^2 + \lambda_i \| h_i \otimes x \|_1, \quad i = 1, 2, 3 \tag{2.2}
where \hat{x} denotes the deblurred estimate of the ideal image x obtained by solving the i-th sub-problem; three deblurred estimates are obtained (i = 1, 2, 3). To solve the i-th sub-problem, we introduce an auxiliary variable β_i for the ℓ1-norm approximation and recast problem (2.2) in the following form:
(\hat{x}, \beta_i) = \arg\min_{(x, \beta_i)} \| h_i \otimes (y - k \otimes x) \|_2^2 + \lambda_i \left\{ \| h_i \otimes x - \beta_i \|_2^2 + \rho_i \| \beta_i \|_1 \right\} \tag{2.3}
where ρ_i is a weighting parameter balancing the terms. We then adopt an alternating iterative optimization method [17,18] to seek a local optimum of (2.3). The procedure cycles between two sub-problems that are optimized individually and iteratively; their k-th iterations are formulated below:
\beta_i^{(k)} = \arg\min_{\beta_i} \| h_i \otimes \hat{x}^{(k-1)} - \beta_i \|_2^2 + \rho_i \| \beta_i \|_1 \tag{2.4}
\hat{x}^{(k)} = \arg\min_{x} \| h_i \otimes (y - k \otimes x) \|_2^2 + \lambda_i \| h_i \otimes x - \beta_i^{(k)} \|_2^2 \tag{2.5}
We present the update algorithms for addressing (2.4) and (2.5) below:
Update for (2.4): A shrinkage operation [17] is used to update the auxiliary variable β_i at the k-th iteration:
\beta_i^{(k)} = \mathrm{shrink}(h_i \otimes \hat{x}^{(k-1)}, \rho_i) \tag{2.6}
where \mathrm{shrink}(x, \rho) = \frac{x}{|x|} \max(|x| - \rho, 0), with \frac{x}{|x|} = 0 when |x| = 0.
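For real-valued inputs, the shrinkage operator above reduces to the familiar sign-based soft-thresholding form; a minimal NumPy sketch:

```python
import numpy as np

def shrink(x, rho):
    """Soft-thresholding: (x/|x|) * max(|x| - rho, 0), with value 0 at x = 0.

    For real arrays, x/|x| equals sign(x), and np.sign(0) = 0 handles the
    x = 0 case automatically.
    """
    return np.sign(x) * np.maximum(np.abs(x) - rho, 0.0)
```

Values with magnitude below the threshold ρ are set exactly to zero, which is what enforces the sparsity of β_i in (2.4).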
Update for (2.5): Setting the derivative of the convex sub-problem (2.5) to zero and applying the least square method [25] with the fast Fourier transform (FFT) [17,18,29] to speed up the convolution operations, the closed-form solution \hat{x} at the k-th iteration can be derived as
\hat{x}^{(k)} = \mathcal{F}^{-1}\left\{ \frac{\Phi \odot \Lambda + \lambda_i \mathcal{F}^{*}(h_i) \odot \mathcal{F}(\beta_i^{(k)})}{\Psi \odot \Lambda + \lambda_i \Lambda} \right\} \tag{2.7}
where \Phi = \mathcal{F}^{*}(k) \odot \mathcal{F}(y), \Lambda = \mathcal{F}^{*}(h_i) \odot \mathcal{F}(h_i), and \Psi = \mathcal{F}^{*}(k) \odot \mathcal{F}(k). \mathcal{F} is the FFT operator, \mathcal{F}^{*} and \mathcal{F}^{-1} are its conjugate transpose and inverse, respectively, and \odot denotes element-wise multiplication; all addition, multiplication and division in (2.7) are performed component-wise.
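The closed-form update (2.7) might be implemented as below (a NumPy sketch under periodic boundary conditions, not the authors' code). Note that the small epsilon added to the denominator is our own numerical safeguard, not part of the paper's formula: the denominator Ψ⊙Λ + λ_iΛ vanishes at frequencies where Λ = 0 (e.g., the DC component for the derivative filters h_2 and h_3):

```python
import numpy as np

def x_update(y, k, h, beta, lam, eps=1e-12):
    """One x-step of Eq. (2.7): frequency-domain least squares solution."""
    s = y.shape
    K = np.fft.fft2(k, s=s)
    H = np.fft.fft2(h, s=s)
    Y = np.fft.fft2(y)
    B = np.fft.fft2(beta)
    Phi = np.conj(K) * Y          # F*(k) ⊙ F(y)
    Lam = np.conj(H) * H          # |F(h)|^2
    Psi = np.conj(K) * K          # |F(k)|^2
    num = Phi * Lam + lam * np.conj(H) * B
    den = Psi * Lam + lam * Lam
    # eps guards the 0/0 frequencies (our assumption, not in the paper)
    return np.real(np.fft.ifft2(num / (den + eps)))
```

As a sanity check, with an identity (delta) kernel and filter and β set to y, the update should return y itself.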
After obtaining the three deblurred estimates \hat{x}^{(j)}, j = 1, 2, 3, from the subspace-based recovery above, we employ the least square integration method, which incorporates the degraded image to compensate for the information lost in the subspace deblurrings. The objective function is formulated as
x = \arg\min_{x} \| y - k \otimes x \|_2^2 + \sum_{j=1}^{3} v_j \| x - \hat{x}^{(j)} \|_2^2 \tag{2.8}
where {v_j}_{j=1}^{3} are weighting parameters balancing the terms. Applying the least square and FFT methods to the convex problem (2.8), the final closed-form solution x can be derived as
x = \mathcal{F}^{-1}\left\{ \frac{\Phi + \sum_{j=1}^{3} v_j \mathcal{F}[\hat{x}^{(j)}]}{\Psi + \sum_{j=1}^{3} v_j} \right\} \tag{2.9}
Note that all above operations in (2.9) are performed component-wise.
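The fusion step (2.9) amounts to a weighted combination in the Fourier domain; a hedged NumPy sketch under the same periodic-boundary assumption as above (illustrative, not the authors' code):

```python
import numpy as np

def fuse(y, k, estimates, v):
    """Least-square fusion of Eq. (2.9): combine the subspace deblurred
    estimates with the degraded image y. All Fourier-domain operations
    are element-wise."""
    s = y.shape
    K = np.fft.fft2(k, s=s)
    Phi = np.conj(K) * np.fft.fft2(y)   # F*(k) ⊙ F(y)
    Psi = np.conj(K) * K                # |F(k)|^2
    num = Phi + sum(vj * np.fft.fft2(xj) for vj, xj in zip(v, estimates))
    den = Psi + sum(v)                  # sum(v) > 0 keeps den nonzero
    return np.real(np.fft.ifft2(num / den))
```

Because Σv_j > 0, the denominator never vanishes, so the fusion is well-posed even at frequencies where the kernel response |K|² is zero — this is how the degraded image and the estimates jointly compensate for the subspace information loss.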
We summarize the main steps of the proposed algorithm in Algorithm 1.
Algorithm 1: Outline of Proposed Method |
Input: blurry image y, blur kernel k, subspace filters {h_i}, weighting parameters λ_i, ρ_i, v_j (i = 1, ..., M; j = 1, 2, 3), subspace number M = 3, maximum iteration number T = 10.
for i = 1, ..., M do
    initialization: x̂^(0) ← y, β_i^(0) ← 0
    for k = 1, ..., T do
        update β_i^(k) via Eq. (2.6)
        update x̂^(k) via Eq. (2.7)
    end for
end for
update x via Eq. (2.9)
Output: deblurred image x.
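Putting the pieces together, the steps of Algorithm 1 might be sketched end-to-end as follows (an illustrative NumPy sketch under periodic boundary conditions; the epsilon regularizer is our own assumption, not specified by the paper):

```python
import numpy as np

def F(a, s):
    """FFT of a small array a, zero-padded to shape s."""
    return np.fft.fft2(a, s=s)

def shrink(x, rho):
    """Soft-thresholding operator of Eq. (2.6)."""
    return np.sign(x) * np.maximum(np.abs(x) - rho, 0.0)

def snd_deblur(y, k, filters, lams, rhos, vs, T=10, eps=1e-12):
    """Sketch of Algorithm 1: per-subspace deblurring, then fusion (2.9)."""
    s = y.shape
    K = F(k, s)
    Y = np.fft.fft2(y)
    Phi, Psi = np.conj(K) * Y, np.conj(K) * K
    estimates = []
    for h, lam, rho in zip(filters, lams, rhos):
        H = F(h, s)
        Lam = np.conj(H) * H
        x = y.copy()                               # initialize x̂^(0) ← y
        for _ in range(T):
            # β-step, Eq. (2.6): shrink the filtered current estimate
            beta = shrink(np.real(np.fft.ifft2(H * np.fft.fft2(x))), rho)
            # x-step, Eq. (2.7): closed-form frequency-domain update
            num = Phi * Lam + lam * np.conj(H) * np.fft.fft2(beta)
            den = Psi * Lam + lam * Lam
            x = np.real(np.fft.ifft2(num / (den + eps)))  # eps: our safeguard
        estimates.append(x)
    # fusion step, Eq. (2.9)
    num = Phi + sum(v * np.fft.fft2(e) for v, e in zip(vs, estimates))
    return np.real(np.fft.ifft2(num / (Psi + sum(vs))))
```

In a faithful run one would use M = 3 with the filters h_1, h_2, h_3 and the parameter values reported in section 3; the sketch accepts arbitrary filter lists only for clarity of structure.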
In this section, numerous experiments validate the effectiveness of the proposed algorithm. All experiments are conducted in Matlab R2016a on a PC with an Intel Core i7-4790 CPU (3.60 GHz) and 8 GB RAM. The improvement in signal-to-noise ratio (ISNR) [30] and the structural similarity (SSIM) [26] serve as quantitative measurements, while the visual quality of the deblurred images serves as a qualitative evaluation; ISNR measures deblurring performance, and SSIM assesses the reconstruction of image structures. In the proposed algorithm, the maximum iteration number T is set to 10, at which stable convergence of Algorithm 1 is achieved. Based on empirical performance, the weighting parameters {λ_i}_{i=1}^{3} are set to 2×10⁻⁵, 1.25×10⁻⁴ and 1×10⁻⁴, respectively; {ρ_i}_{i=1}^{3} are all set to 0.01; and {v_j}_{j=1}^{3} are set to 0.7, 2 and 1.7, respectively. First, we compare the proposed method (named SND) with four methods: FTVd [17], HLP [18], JSM [24] and CSR [22]. FTVd and HLP are based on different regularization priors of image gradients, while JSM and CSR are based on nonlocal self-similarity priors of image patches. For a fair comparison, the parameters of the four competing methods use the default settings of [17,18,22,24]. To study noise effects, we compare SND with the four above-mentioned methods under different levels of Gaussian noise. We then validate the effectiveness of the subspace data fidelity and the subspace priors separately. Finally, we compare different integration strategies for the image subspaces to demonstrate the effectiveness of least square integration.
In addition, all above parameters are universally fixed for all test images, and the reasonability of their settings is shown in the following subsection of parameters evaluation.
We compare the proposed method (SND) with FTVd [17], HLP [18], JSM [24] and CSR [22] on the Levin [32] and Google datasets. The Levin dataset* includes 4 images (row 1 in Figure 3: Im05-Im08) and 10 types of blur kernels (row 3 in Figure 3). The Google dataset† comprises 100 images collected from the Google website (one representative in row 2 of Figure 3) and 10 types of blur kernels. Each test image is blurred with one blur kernel and then corrupted with Gaussian noise of the same standard deviation. Figure 4 shows the ISNR and SSIM values of FTVd, HLP, JSM, CSR and the proposed method. The proposed method outperforms FTVd, HLP, JSM and CSR by large margins in both ISNR and SSIM. Meanwhile, Figure 5 shows the deblurring results of the different methods: SND achieves better image-structure preservation and noise suppression, whereas residual noise remains in the results of FTVd, HLP, JSM and CSR. It is most obvious for FTVd (Figure 5(b)), owing to its initialization sensitivity and the limitations of total variation. HLP (Figure 5(c)) leaves less residual noise by employing hyper-Laplacian distributions of image gradients, and both JSM (Figure 5(d)) and CSR (Figure 5(e)) produce less noisy results by using nonlocal self-similarity priors of image patches. In addition, we test the different non-blind deconvolution methods on the Google dataset to demonstrate the generality of our model; the average ISNR and SSIM values are presented in Figure 6. Compared with the other methods, our method not only attains the best ISNR and SSIM values in all cases, but also yields better perceptual quality in terms of structure preservation and noise reduction.
*http://webee.technion.ac.il/people/anat.levin/
†http://www.escience.cn/people/zhuangpeixian/index.html
We test the proposed method against FTVd [17], HLP [18], JSM [24] and CSR [22] on the Cameraman and Lena images, both blurred with a motion blur kernel and then corrupted with Gaussian noise of the same standard deviation. The motion blur kernel is generated by the Matlab function fspecial('motion', length, theta), which simulates linear camera motion of length = 15 pixels at an angle of theta = 45 degrees in a counter-clockwise direction. Table 1 tabulates the ISNR and SSIM results of the different methods, and Figure 7 illustrates the corresponding deblurred results for the Cameraman image. The proposed method outperforms FTVd, HLP, JSM and CSR in both ISNR and SSIM, and this advantage becomes more prominent as the image size increases. Meanwhile, SND yields more visually pleasing results in terms of structure preservation and suppression of noise and artifacts. Among the deblurred results, FTVd (Figure 7(b)) produces serious residual noise, JSM (Figure 7(d)) and CSR (Figure 7(e)) exhibit obvious artifacts and noise, and HLP (Figure 7(c)) has less noise and artifacts, whereas SND (Figure 7(f)) yields sharper visual results. These results demonstrate the effectiveness of the subspace data fidelity and subspace priors in the proposed method.
Image (ISNR/SSIM) | FTVd [17] | HLP [18] | JSM [24] | CSR [22] | SND |
Cameraman | 4.55/0.51 | 8.15/0.75 | 5.24/0.64 | 7.41/0.72 | 8.91/0.88 |
Lena | 0.32/0.48 | 5.71/0.76 | 4.05/0.67 | 6.04/0.78 | 7.34/0.93 |
We compare the proposed method with FTVd [17], HLP [18], JSM [24] and CSR [22] under different levels of Gaussian noise. All tests use the Barbara image, corrupted by blur kernel 5 and then by Gaussian noise of varying levels. Figure 8 shows that the ISNR and SSIM curves of the proposed method lie consistently above those of the other methods, and the advantage becomes more prominent as the noise standard deviation increases, demonstrating that SND better preserves image structures and suppresses noise. Artifacts and noise appear in the deblurred results of the competing methods, while our method better protects image edges and details and suppresses both noise and artifacts.
We conduct experiments to demonstrate the effectiveness of the weighted (subspace) data fidelity and the subspace priors. First, we compare SND with FTVd (ℓ2-norm fidelity and whole-image prior) [17], JSM (ℓ2-norm fidelity with both local and nonlocal image priors) [24], L2-SP (ℓ2-norm fidelity and subspace priors) [27], and Outlier-Handling (removing artifacts caused by outliers) [31] in the case of mixed noise, which tests the weighted data fidelity. Figure 9 shows the deblurring results of the different methods on the Barbara image, blurred with blur kernel 10 and then corrupted by mixed Gaussian and salt-and-pepper noise. FTVd, JSM and L2-SP produce serious artifacts and noise in their results because of the salt-and-pepper noise; Outlier-Handling removes these artifacts but over-smoothes image edges and details; our method yields clearer visual results, a superiority that stems from the effectiveness of the weighted data fidelity in salt-and-pepper noise removal. Compared with L2-SP, our data fidelity outperforms the ℓ2-norm data fidelity in both artifact suppression and salt-and-pepper noise removal, showing that the weighted data fidelity exploits the visual importance difference between image structures and smooth areas. Second, we compare different combinations of data fidelity and image priors to validate the weighted data fidelity and subspace priors. Figure 10 shows the average ISNR and SSIM values of the different schemes on the Google dataset. The method with subspace priors outperforms that with the whole-image prior, a superiority attributed to exploiting and modeling the differences among subspace priors. Under the same subspace priors, the proposed method with weighted data fidelity clearly surpasses that with ℓ2-norm data fidelity.
Therefore, both the subspace data fidelity and the subspace priors contribute to the performance improvement, and SND, which combines the two, achieves higher deblurring performance with better structure preservation and suppression of both noise and artifacts.
We compare different subspace integration schemes to validate the reasonability of least square integration. Figure 11 shows the average ISNR and SSIM values of FTVd (Original), a method with simple sum integration of the deblurred estimates (Sum Integration), and the proposed method with least square integration (Least Square Integration). The least square integration method outperforms both the sum integration and the original method in ISNR and SSIM; this advantage derives from employing the degraded image to compensate for the information loss of the subspace deblurrings, demonstrating that least square integration is effective for combining the subspaces.
We investigate the sensitivity of our algorithm to the parameter settings by changing one parameter at a time while fixing the rest at their current values. The plots of ISNR versus these parameters are shown in Figure 12. The convergence of the proposed algorithm is first analyzed in Figure 12(a), which plots ISNR versus the iteration number T. The convergence becomes stable once the iteration number reaches 10, which is therefore a suitable maximum iteration number given the tradeoff between performance and computational efficiency. In Figure 12(e)-(g), similar trends appear in the plots of ISNR versus the three parameters {ρ_i}_{i=1}^{3}, and 0.01 is a suitable value for all of them. For the remaining parameters, the ISNR first increases and then degrades: Figure 12(b)-(d) show that better ISNR values are achieved when the three weighting parameters {λ_i}_{i=1}^{3} are set to 2×10⁻⁵, 1.25×10⁻⁴ and 1×10⁻⁴, respectively, and Figure 12(h)-(j) show that the ISNR peaks when {v_j}_{j=1}^{3} are set to 0.7, 2 and 1.7, respectively. These plots support the reasonability of the parameter settings in the proposed method.
In this paper, an algorithm based on subspace data fidelity and subspace priors has been presented for non-blind deconvolution. Using the visual importance difference between image structures and smooth areas, the subspace data fidelity outperforms the ℓ2-norm data fidelity in protecting image edges and details while suppressing noise and artifacts. Exploiting the differences among subspace priors, differential modelings of the subspace priors are proposed for performance improvement. The least square integration method then fuses the deblurred estimates and compensates for the information loss of the subspace deblurrings, and an efficient optimization scheme based on least squares and the fast Fourier transform solves the proposed objective function. Experiments provide a comprehensive objective and subjective analysis of the proposed algorithm; compared with other methods, it yields higher deblurring performance and better visual results.
This work was supported in part by the National Natural Science Foundation of China under Grant 61701245, in part by The Startup Foundation for Introducing Talent of NUIST 2243141701030, in part by A Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.
All authors declare no conflicts of interest in this paper.
[1] | G. Wang, D. L. Snyder and J. A. O'Sullivan, et al., Iterative deblurring for CT metal artifact reduction, IEEE Trans. Med. Imag., 28 (1996), 657–664. |
[2] | M. Jiang, G. Wang and M. W. Skinner, et al., Blind deblurring of spiral CT images, IEEE Trans. Med. Imag., 22 (2003), 837–845. |
[3] | X. Kan, Y. Zhang and L. Zhu, et al., Snow cover mapping for mountainous areas by fusion of MODIS L1B and geographic data based on stacked denoising auto-encoders, Computers, Materials and Continua, 57 (2018), 49–68. |
[4] | L. He, D. Ouyang and M. Wang, et al. , A method of identifying thunderstorm clouds in satellite cloud image based on clustering, Computers, Materials and Continua, 57 (2018), 549–570. |
[5] | C. Thorpe, F. Li and Z. Li, et al., A coprime blur scheme for data security in video surveillance, IEEE Trans. Pattern Anal. Machine Intell., 35 (2013), 3066–3072. |
[6] | J. Wang, T. Li and X. Luo, et al., Identifying computer generated images based on quaternion central moments in color quaternion wavelet domain, IEEE Trans. Circuits Syst. Video Technol., (2018), 1. |
[7] | S. Zhou, W. Liang and J. Li, et al., Improved VGG model for road traffic sign recognition, Computers, Materials and Continua, 57 (2018), 11–24. |
[8] | J. Liu, N. Sun and X. Li, et al., Rare bird sparse recognition via part-based gist feature fusion and regularized intraclass dictionary learning, Computers, Materials and Continua, 55 (2018), 435–446. |
[9] | Q. Shan, J. Jia and A. Agarwala, High-quality motion deblurring from a single image, ACM Trans. Graphics., 27 (2008), 73. |
[10] | S. Cho and S. Lee, Fast motion deblurring, ACM Trans. Graphics., 28 (2009), 145. |
[11] | I. Csiszár, Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems, Ann. Stat., 19 (1991), 2032–2066. |
[12] | J. Li, Z. Shen and R. Yin, et al., A reweighted ℓ2 method for image restoration with Poisson and mixed Poisson-Gaussian noise, UCLA Preprint 68 (2012). |
[13] | P. Zhuang, Y. Huang and D. Zeng, et al., Non-blind deconvolution using ℓ1-norm high-frequency fidelity, Multimed. Tools Appl., 76 (2016), 1–9. |
[14] | J. Yang, Y. Zhang and W. Yin, An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise, SIAM J. Sci. Comput., 31 (2009), 2842–2865. |
[15] | A. Tikhonov, On the stability of inverse problems, Dokl. Akad. Nauk SSSR, 39 (1943), 195–198. |
[16] | S. Osher, L. Rudin and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D, 60 (1992), 259–268. |
[17] | Y. Wang, J. Wang and W. Yin, et al., A new alternating minimization algorithm for total variation image reconstruction, SIAM J. Imaging Sci., 1 (2008), 248–272. |
[18] | D. Krishnan and R. Fergus, Fast image deconvolution using hyper-laplacian priors, Adv. Neural Inf. Process. Syst., (2009), 1033–1041. |
[19] | D. Krishnan, T. Tay and R. Fergus, Blind deconvolution using a normalized sparsity measure, IEEE Conf. Comput. Vis. Pattern Recognit., (2011), 2657–2664. |
[20] | L. Xu, C. Lu and Y. Xu, et al., Image smoothing via ℓ0 gradient minimization, ACM Trans. Graphics., 30 (2011), 174. |
[21] | L. Xu, S. Zheng and J. Jia, Unnatural ℓ0 sparse representation for natural image deblurring, IEEE Conf. Comput. Vis. Pattern Recognit., (2013), 1107–1114. |
[22] | W. Dong, L. Zhang and G. Shi, Centralized sparse representation for image restoration, IEEE Int. Conf. Comput. Vis., (2011), 1259–1266. |
[23] | W. Dong, L. Zhang and G. Shi, et al., Nonlocally centralized sparse representation for image restoration, IEEE Trans. Image Process., 22 (2013), 1620–1630. |
[24] | J. Zhang, D. Zhao and R. Xiong, et al., Image restoration using joint statistical modeling in a space-transform domain, IEEE Trans. Circuits Syst. Video Technol., 24 (2014), 915–928. |
[25] | V. M. Patel, R. Maleh and A. C. Gilbert, et al., Gradient-based image recovery methods from incomplete Fourier measurements, IEEE Trans. Image Process. 21 (2012), 94–105. |
[26] | Z. Wang, A. C. Bovik and H. R. Sheikh, et al., Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process., 13 (2004), 600–612. |
[27] | P. Zhuang, X. Fu and Y. Huang, et al., A novel framework method for non-blind deconvolution using subspace images priors, Signal Processing: Image Communication, 46 (2016), 17–28. |
[28] | T. Chan, S. Esedoglu and F. Park, et al., Recent developments in total variation image restoration, Mathematical Models of Computer Vision, 17 (2015). |
[29] | L. Xu and J. Jia, Two-phase kernel estimation for robust motion deblurring, Eur. Conf. Comput. Vis., (2010), 157–170. |
[30] | C. C. Lee and W. L. Hwang, Sparse representation of a blur kernel for out-of-focus blind image restoration, IEEE Int. Conf. Image Process., (2016), 2698–2702. |
[31] | S. Cho, J. Wang and S. Lee, Handling outliers in non-blind image deconvolution, IEEE Int. Conf. Comput. Vis., (2011), 495–502. |
[32] | A. Levin, Y. Weiss and F. Durand, et al., Understanding and evaluating blind deconvolution algorithms, IEEE Conf. Comput. Vis. Pattern Recognit., (2009), 1964–1971. |