
[1] | Fang Wang, Weiguo Li, Wendi Bao, Li Liu . Greedy randomized and maximal weighted residual Kaczmarz methods with oblique projection. Electronic Research Archive, 2022, 30(4): 1158-1186. doi: 10.3934/era.2022062 |
[2] | Ranran Li, Hao Liu . On global randomized block Kaczmarz method for image reconstruction. Electronic Research Archive, 2022, 30(4): 1442-1453. doi: 10.3934/era.2022075 |
[3] | Koung Hee Leem, Jun Liu, George Pelekanos . A regularized eigenmatrix method for unstructured sparse recovery. Electronic Research Archive, 2024, 32(7): 4365-4377. doi: 10.3934/era.2024196 |
[4] | Qin Guo, Binlei Cai . Learning capability of the rescaled pure greedy algorithm with non-iid sampling. Electronic Research Archive, 2023, 31(3): 1387-1404. doi: 10.3934/era.2023071 |
[5] | John Daugherty, Nate Kaduk, Elena Morgan, Dinh-Liem Nguyen, Peyton Snidanko, Trung Truong . On fast reconstruction of periodic structures with partial scattering data. Electronic Research Archive, 2024, 32(11): 6481-6502. doi: 10.3934/era.2024303 |
[6] | Zhengyu Liu, Yufei Bao, Changhai Wang, Xiaoxiao Chen, Qing Liu . A fast matrix completion method based on truncated L2,1 norm minimization. Electronic Research Archive, 2024, 32(3): 2099-2119. doi: 10.3934/era.2024095 |
[7] | Jingqian Xu, Ma Zhu, Baojun Qi, Jiangshan Li, Chunfang Yang . AENet: attention efficient network for cross-view image geo-localization. Electronic Research Archive, 2023, 31(7): 4119-4138. doi: 10.3934/era.2023210 |
[8] | Daochang Zhang, Dijana Mosić, Liangyun Chen . On the Drazin inverse of anti-triangular block matrices. Electronic Research Archive, 2022, 30(7): 2428-2445. doi: 10.3934/era.2022124 |
[9] | Huimin Qu, Haiyan Xie, Qianying Wang . Multi-convolutional neural network brain image denoising study based on feature distillation learning and dense residual attention. Electronic Research Archive, 2025, 33(3): 1231-1266. doi: 10.3934/era.2025055 |
[10] | Jiange Liu, Yu Chen, Xin Dai, Li Cao, Qingwu Li . MFCEN: A lightweight multi-scale feature cooperative enhancement network for single-image super-resolution. Electronic Research Archive, 2024, 32(10): 5783-5803. doi: 10.3934/era.2024267 |
In the modern fields of scientific research and medical diagnostics [1,2,3], there is an increasing reliance on image restoration techniques [4,5,6,7], which are particularly prominent in medical imaging. Ensuring image quality is paramount for the authenticity of the data [8]. By processing X-ray projection data obtained from CT scans, we can reconstruct clear tomographic images with a resolution of $n \times n$ pixels. This technology encompasses two main approaches: analytical methods [9], such as the filtered back projection method [11], which are widely used in CT imaging; and iterative methods [10], which excel at handling noise interference and data loss. With technological advances, image restoration is moving toward greater precision and efficiency.
To reconstruct images, we must solve a large-scale system of linear equations with multiple right-hand sides:
$$AX = B. \tag{1}$$
In contemporary mathematics and engineering, solving systems of equations is a pivotal task, particularly in scientific research and industrial applications where efficiency and precision are paramount. A variety of numerical methods have been widely adopted [12,13,14,15], among which the Kaczmarz algorithm [16] is renowned for its iterative projection approach to approximating the true solution. The algorithm's simplicity has facilitated its application across various domains, including image reconstruction [12,17], medical imaging [11,18], and signal processing [19,20]. Advances in technology have given rise to multiple enhanced versions of the Kaczmarz algorithm [21,22,23,24,25,26,27,28,29,30,31], improving performance in large-scale parallel computing and in noisy data environments. Notably, with the development of pseudo-inverse-free techniques, Du and Sun [24] extended the randomized extended average block Kaczmarz (REABK) method, proposing a class of pseudo-inverse-free stochastic block iterative methods for solving both consistent and inconsistent systems of linear equations; avoiding the pseudo-inverse accelerates convergence. Pseudo-inverse approximation is a special case of generalized inverse techniques; for more theoretical analysis and applications, see [32]. Inspired by [33,34], this paper introduces a faster lazy free greedy block Kaczmarz (LFGBK) method, exploring matrix sketching as a key tool for accelerating matrix operations. The method employs sampling based on an approximate maximum-distance criterion, which excels at selecting small, representative samples from large datasets for more efficient computation. The adaptive randomized block Kaczmarz (ARBK) [35] algorithm integrates adaptive and randomized block-selection strategies; however, for certain matrix structures, ARBK may converge slowly or even fail to converge.
Additionally, when dealing with large-scale sparse matrices, the ARBK algorithm incurs significant memory overhead. Our improved algorithm successfully overcomes these drawbacks, offering a more stable and efficient solution.
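As background, the classical Kaczmarz iteration cyclically projects the current iterate onto the hyperplane defined by a single row of the system. The following minimal NumPy sketch is illustrative only (it is not the implementation used in this paper):

```python
import numpy as np

def kaczmarz(A, b, iters=5000):
    """Classical Kaczmarz: cyclically project onto one row's hyperplane."""
    m, n = A.shape
    x = np.zeros(n)
    for k in range(iters):
        i = k % m                                  # sweep the rows cyclically
        ai = A[i]
        x += (b[i] - ai @ x) / (ai @ ai) * ai      # orthogonal projection step
    return x

# Consistent toy system: the iterates converge to the unique solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x = kaczmarz(A, b)
assert np.linalg.norm(x - x_true) < 1e-8
```

Each step touches only one row of $A$, which is why Kaczmarz-type methods scale to systems too large for direct factorization.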
Moreover, the proposed method not only addresses linear systems with a single right-hand side but also extends to systems with multiple right-hand sides, resolving the memory-overflow issues that traditional algorithms face when dealing with image-processing vectors. A comprehensive theoretical framework supports the convergence of these methods, and numerical experiments validate their effectiveness, demonstrating improved computational efficiency and laying a solid foundation for further optimization of matrix sketching techniques.
The structure of this paper is as follows: Section 2 introduces the necessary background knowledge. Section 3 presents the faster pseudo-inverse-free GBK method for linear systems with a single right-hand side. Section 4 proposes the leverage score sampling pseudo-inverse-free GBK method for linear systems with multiple right-hand sides. Section 5 details the numerical experiments, and Section 6 concludes the paper.
In this article, we adopt the same notation as in reference [27]. For example, $A^{(i)}$, $A^T$, $A^\dagger$, $\|A\|_2$, $\|A\|_F$, and $[n]$ denote, respectively, the $i$-th row of the coefficient matrix $A$, its transpose, its generalized inverse, the spectral norm, the Frobenius norm, and the set $\{1,2,\ldots,n\}$.
Recently, Niu and Zheng combined greedy strategy with the block Kaczmarz method, proposing the GBK [29] method for solving large-scale consistent linear equation systems. See Algorithm 1 for the specific process.
Algorithm 1 GBK method
1: Input: $A$, $b$, $\ell$, $x_0 \in \operatorname{range}(A^T)$, and $\eta \in (0,1)$.
2: Output: $x_\ell$.
3: for $k = 0, 1, \ldots, \ell-1$ do
4:   Compute $\varepsilon_k = \eta \max_{1 \le i \le m} \big\{ |b_i - A^{(i)}x_k|^2 / \|A^{(i)}\|_2^2 \big\}$.
5:   Determine the index set $T_k = \big\{ i_k : |b_{i_k} - A^{(i_k)}x_k|^2 \ge \varepsilon_k \|A^{(i_k)}\|_2^2 \big\}$.
6:   Compute $x_{k+1} = x_k + A_{T_k}^{\dagger}(b_{T_k} - A_{T_k}x_k)$.
7: end for
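The greedy selection and block projection of Algorithm 1 can be sketched in NumPy as follows (an illustrative version; the dense `pinv` call is exactly the cost that the pseudo-inverse-free methods below avoid):

```python
import numpy as np

def gbk(A, b, iters=1000, eta=0.5):
    """Greedy block Kaczmarz (Algorithm 1): greedy index set + block projection."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms2 = np.sum(A * A, axis=1)
    for _ in range(iters):
        r = b - A @ x
        if np.linalg.norm(r) < 1e-13:              # already converged
            break
        ratios = r**2 / row_norms2
        eps_k = eta * ratios.max()                 # greedy threshold
        T = np.where(ratios >= eps_k)[0]           # index set T_k
        x += np.linalg.pinv(A[T]) @ (b[T] - A[T] @ x)   # block projection
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = gbk(A, b)
assert np.linalg.norm(x - x_true) < 1e-6
```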
Convergence analysis of the GBK method is described as follows:
Theorem 1 ([29]). If the linear system (1) is consistent, then the iterative sequence $\{x_k\}_{k=0}^{\infty}$ generated by Algorithm 1 converges to the minimum-norm solution $x_* = A^\dagger b$ of the system, and for any $k \ge 0$ it satisfies
$$\|x_{k+1} - x_*\|_2^2 \le \left(1 - \gamma_k(\eta)\,\frac{\sigma_{\min}^2(A)}{\|A\|_F^2}\right)^{k+1} \|x_0 - x_*\|_2^2.$$
Here $\gamma_k(\eta) = \eta\,\dfrac{\|A\|_F^2}{\|A\|_F^2 - \|A_{T_{k-1}}\|_F^2}\cdot\dfrac{\|A_{T_k}\|_F^2}{\sigma_{\max}^2(A_{T_k})}$ for $k \ge 1$, with $\gamma_0(\eta) = \eta\,\|A_{T_0}\|_F^2 / \sigma_{\max}^2(A_{T_0})$, where $\eta \in (0,1]$, and $\sigma_{\min}(A)$ and $\sigma_{\max}(A)$ denote the smallest nonzero singular value and the largest singular value of the matrix $A$.
In matrix sketching, leverage score sampling [36,37] selects rows based on their leverage scores: each row is chosen with probability proportional to its leverage score, so rows with higher scores (i.e., greater influence within the dataset) have a greater chance of being selected.
Algorithm 2 Leverage score sampling method
1: Input: $A \in \mathbb{R}^{m\times n}$ and sample size $d$.
2: Initialize $C$ as a $d \times n$ zero matrix.
3: Initialize the variable sum to 0.
4: Compute the singular value decomposition $A = USV^T$.
5: for $k = 1, \ldots, m$ do
6:   Add the squared norm of the $k$-th row of $U$ to sum.
7: end for
8: Compute each row's sampling probability: the squared norm of the corresponding row of $U$ divided by sum.
9: Sample $d$ row indices according to these probabilities.
10: Set $C = A[\text{indices}, :]$.
11: Return $C \in \mathbb{R}^{d\times n}$.
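A compact sketch of Algorithm 2; whether the rows are drawn with or without replacement is not specified above, so sampling without replacement is our assumption here:

```python
import numpy as np

def leverage_score_sample(A, d, rng=None):
    """Sample d rows of A with probability proportional to leverage scores."""
    rng = np.random.default_rng() if rng is None else rng
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    scores = np.sum(U * U, axis=1)        # leverage score of each row of A
    probs = scores / scores.sum()         # the scores sum to rank(A)
    idx = rng.choice(A.shape[0], size=d, replace=False, p=probs)
    return A[idx], idx

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 10))
C, idx = leverage_score_sample(A, d=50, rng=rng)
assert C.shape == (50, 10) and len(np.unique(idx)) == 50
```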
The advantage of this sampling method lies in its ability to select a small, representative subset of samples from a large dataset, enabling more efficient computation. The resulting sketching matrix $S \in \mathbb{R}^{d\times m}$, where $d$ is the chosen number of samples, can be used to estimate various properties of the original matrix $A$, such as its singular value decomposition or principal components. The derivation of the LFGBK algorithm is similar to that in reference [38].
Remark: In addition to the method proposed in this paper, there are other sampling techniques such as random Gaussian matrices, subsampled randomized Hadamard transform (SRHT), and uniform sampling. In previous experiments, we also tried other methods like random sparse sampling. Through comparison, we found that the method proposed in this paper excels in extracting the main features of matrices. Other methods not only have higher computational complexity but may also extract all-zero vectors during actual image processing, rendering calculations impossible. Our method effectively avoids these issues, which is why we chose to adopt it.
This section introduces the leverage score sampling pseudo-inverse-free GBK method for a single right-hand side and proves the corresponding convergence theory.
First, each step follows the approximate maximum distance criterion
$$\frac{|b_{i_k} - A^{(i_k)}x_k|^2}{\|A^{(i_k)}\|_2^2} \ge \eta \max_{1\le i\le m}\left\{\frac{|b_i - A^{(i)}x_k|^2}{\|A^{(i)}\|_2^2}\right\}.$$
Second, select the index set $T_k$ for the block matrix $A_{T_k}$; then project the current estimate onto each row of the block; finally, average the projections to determine the next iterate:
$$x_{k+1} = x_k + \sum_{i\in T_k} w_i\,\frac{b_i - A^{(i)}x_k}{\|A^{(i)}\|_2^2}\,(A^{(i)})^T.$$
Before presenting the convergence theory for Algorithm 3, let us first introduce a lemma.
Algorithm 3 SLFGBK method
1: Input: $A$, $b$, $\ell$, $x_0 \in \operatorname{range}(A^T)$, parameter $\eta \in (0,1)$, and the sequences of step sizes $(\alpha_k)_{k\ge0}$ and weights $(w_k)_{k\ge0}$.
2: Output: $x_\ell$.
3: Initialization: Algorithm 2 generates a leverage score sampling transformation matrix $\tilde{A} = SA \in \mathbb{R}^{d\times n}$ and vector $\tilde{b} = Sb$, where $d \ll m$; leverage score sampling selects a small, representative subset from a large dataset, enabling more efficient computation.
4: for $k = 0, 1, \ldots, \ell-1$ do
5:   Compute $\tilde{\varepsilon}_k = \eta \max_{1 \le i \le d}\big\{ |\tilde{b}_i - \tilde{A}^{(i)}x_k|^2 / \|\tilde{A}^{(i)}\|_2^2 \big\}$.
6:   Determine the index set $T_k = \big\{ i_k : |\tilde{b}_{i_k} - \tilde{A}^{(i_k)}x_k|^2 \ge \tilde{\varepsilon}_k \|\tilde{A}^{(i_k)}\|_2^2 \big\}$.
7:   Compute $x_{k+1} = x_k + \sum_{i\in T_k} w_i \dfrac{\tilde{b}_i - \tilde{A}^{(i)}x_k}{\|\tilde{A}^{(i)}\|_2^2}\,(\tilde{A}^{(i)})^T$.
8: end for
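Algorithm 3 can be sketched as below, using uniform weights $w_i = 1/|T_k|$ as in the proof of Theorem 2; the concrete sketch size `d` and the without-replacement sampling are our assumptions:

```python
import numpy as np

def slfgbk(A, b, d, iters=4000, eta=0.8, rng=None):
    """Sketch of Algorithm 3: leverage-score sketch + pseudo-inverse-free GBK."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    # Step 3: leverage score sketch (Algorithm 2) keeps d representative rows.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    probs = np.sum(U * U, axis=1)
    probs /= probs.sum()
    idx = rng.choice(m, size=d, replace=False, p=probs)
    At, bt = A[idx], b[idx]
    norms2 = np.sum(At * At, axis=1)
    x = np.zeros(n)
    for _ in range(iters):
        r = bt - At @ x
        ratios = r**2 / norms2
        T = np.where(ratios >= eta * ratios.max())[0]   # greedy index set
        # Averaged, pseudo-inverse-free block update with weights 1/|T_k|.
        x += (r[T] / norms2[T]) @ At[T] / len(T)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((400, 15))
x_true = rng.standard_normal(15)
b = A @ x_true
x = slfgbk(A, b, d=120, rng=rng)
assert np.linalg.norm(x - x_true) < 1e-6
```

Note that no pseudo-inverse appears in the loop: the block update is a weighted sum of rank-one row projections.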
Lemma 1 ([27]). If a vector $u$ belongs to the range of $A^T$, then
$$\|Au\|_2^2 \ge \lambda_{\min}(A^T A)\,\|u\|_2^2,$$
where $\lambda_{\min}(A^T A)$ denotes the smallest nonzero eigenvalue of $A^T A$.
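Lemma 1 can be checked numerically; for a rank-deficient $A$ the bound is only meaningful with the smallest nonzero eigenvalue, and only for $u$ restricted to $\operatorname{range}(A^T)$:

```python
import numpy as np

# Rank-deficient A: lambda_min(A^T A) = 0, so the bound uses the smallest
# *nonzero* eigenvalue, and it holds for u restricted to range(A^T).
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 12))   # rank 3
s = np.linalg.svd(A, compute_uv=False)
lam = s[s > 1e-10].min() ** 2        # smallest nonzero eigenvalue of A^T A

margins = []
for _ in range(100):
    u = A.T @ rng.standard_normal(8)                 # u lies in range(A^T)
    margins.append(np.linalg.norm(A @ u)**2 - lam * np.linalg.norm(u)**2)
assert min(margins) >= -1e-6         # ||Au||^2 >= lambda_min^+ ||u||^2 holds
```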
Theorem 2. Suppose the leverage score transformation $S$ satisfies $d = O(n^2/(\delta\theta^2))$, and let $x_* = A^\dagger b$ be the minimum-norm solution of the single right-hand side linear system. Then, for any $k \ge 0$,
$$\|x_{k+1} - x_*\|_2^2 \le \left(1 - \tilde{\psi}_k(\eta)\,\frac{\sigma_{\min}^2(\tilde{A})}{\|\tilde{A}\|_F^2}\right) \|x_k - x_*\|_2^2. \tag{2}$$
Here $\tilde{\psi}_k(\eta) = \eta t\left(\dfrac{2}{t} - \dfrac{1}{t^2}\sigma_{\max}^2(\hat{A}_{T_k}^T)\right)$ with $\eta \in (0,1]$, where $\operatorname{range}(A)$, $\sigma_{\min}(A)$, and $\sigma_{\max}(A)$ denote the range, the smallest nonzero singular value, and the largest singular value of the matrix $A$, respectively.
Proof. By Algorithm 3, with $\tilde{r}_k = \tilde{b} - \tilde{A}x_k$ and uniform weights $w_i = 1/|T_k|$, we obtain
$$x_{k+1} = x_k + \sum_{i \in T_k} \frac{1}{|T_k|}\,\tilde{r}_{i}^{k}\,\frac{(\tilde{A}^{(i)})^T}{\|\tilde{A}^{(i)}\|_2^2}. \tag{3}$$
Writing $|T_k| = t$ and $T_k = \{j_{k,1}, \ldots, j_{k,t}\}$, Eq (3) expands as
$$x_{k+1} = x_k + \sum_{i \in T_k} \frac{1}{t}\,\frac{(\tilde{A}^{(i)})^T e_i^T \tilde{r}_k}{\|\tilde{A}^{(i)}\|_2^2} = x_k + \frac{1}{t}\left(\frac{(\tilde{A}^{(j_{k,1})})^T e_{j_{k,1}}^T \tilde{r}_k}{\|\tilde{A}^{(j_{k,1})}\|_2^2} + \cdots + \frac{(\tilde{A}^{(j_{k,t})})^T e_{j_{k,t}}^T \tilde{r}_k}{\|\tilde{A}^{(j_{k,t})}\|_2^2}\right) = x_k + \frac{1}{t}\hat{A}_{T_k}^T \hat{I}_{T_k}\tilde{r}_k. \tag{4}$$
Here
$$\hat{A}_{T_k}^T = \left[\frac{(\tilde{A}^{(j_{k,1})})^T}{\|\tilde{A}^{(j_{k,1})}\|_2}, \frac{(\tilde{A}^{(j_{k,2})})^T}{\|\tilde{A}^{(j_{k,2})}\|_2}, \ldots, \frac{(\tilde{A}^{(j_{k,t})})^T}{\|\tilde{A}^{(j_{k,t})}\|_2}\right] \in \mathbb{R}^{n\times t} \tag{5}$$
and
$$\hat{I}_{T_k} = \left[\frac{e_{j_{k,1}}}{\|\tilde{A}^{(j_{k,1})}\|_2}, \frac{e_{j_{k,2}}}{\|\tilde{A}^{(j_{k,2})}\|_2}, \ldots, \frac{e_{j_{k,t}}}{\|\tilde{A}^{(j_{k,t})}\|_2}\right]^T \in \mathbb{R}^{t\times d}. \tag{6}$$
Subtracting $x_*$ from both sides of Eq (4) and using $\tilde{b} = \tilde{A}x_*$ yields
$$x_{k+1} - x_* = x_k - x_* - \frac{1}{t}\hat{A}_{T_k}^T \hat{I}_{T_k}\tilde{A}(x_k - x_*) = \left(I - \frac{1}{t}\hat{A}_{T_k}^T \hat{A}_{T_k}\right)(x_k - x_*). \tag{7}$$
Taking squared Euclidean norms on both sides of Eq (7), and using the fact that $Q^2 \preceq \lambda_{\max}(Q)\,Q$ for any positive semi-definite matrix $Q$, we obtain
$$\begin{aligned}\|x_{k+1}-x_*\|_2^2 &= \left\|(x_k-x_*) - \frac{1}{t}\hat{A}_{T_k}^T\hat{A}_{T_k}(x_k-x_*)\right\|_2^2\\ &= \|x_k-x_*\|_2^2 - \frac{2}{t}\|\hat{A}_{T_k}(x_k-x_*)\|_2^2 + \frac{1}{t^2}\|\hat{A}_{T_k}^T\hat{A}_{T_k}(x_k-x_*)\|_2^2\\ &\le \|x_k-x_*\|_2^2 - \left(\frac{2}{t} - \frac{1}{t^2}\sigma_{\max}^2(\hat{A}_{T_k}^T)\right)\|\hat{A}_{T_k}(x_k-x_*)\|_2^2.\end{aligned} \tag{8}$$
Using Eq (5) and the inequality $|\tilde{b}_{j_k} - \tilde{A}^{(j_k)}x_k|^2 \ge \tilde{\varepsilon}_k \|\tilde{A}^{(j_k)}\|_2^2$, a straightforward calculation gives
$$\|\hat{A}_{T_k}(x_k-x_*)\|_2^2 = \sum_{j_k\in T_k} \frac{|\tilde{r}_{j_k}^k|^2}{\|\tilde{A}^{(j_k)}\|_2^2} \tag{9}$$
$$\ge \sum_{j_k\in T_k} \frac{\tilde{\varepsilon}_k\,\|\tilde{A}^{(j_k)}\|_2^2}{\|\tilde{A}^{(j_k)}\|_2^2} = t\,\tilde{\varepsilon}_k. \tag{10}$$
Substituting $\tilde{\varepsilon}_k = \eta \max_{1\le i\le d}\big\{|\tilde{b}_i - \tilde{A}^{(i)}x_k|^2 / \|\tilde{A}^{(i)}\|_2^2\big\}$ into the above inequality yields
$$\|\hat{A}_{T_k}(x_k-x_*)\|_2^2 \ge \sum_{j_k\in T_k} \eta \max_{1\le i\le d}\left\{\frac{|\tilde{r}_i^k|^2}{\|\tilde{A}^{(i)}\|_2^2}\right\} \ge \eta t \sum_{i=1}^{d} \frac{|\tilde{r}_i^k|^2}{\|\tilde{A}^{(i)}\|_2^2}\cdot\frac{\|\tilde{A}^{(i)}\|_2^2}{\|\tilde{A}\|_F^2} = \frac{\eta t\,\|\tilde{r}_k\|_2^2}{\|\tilde{A}\|_F^2}. \tag{11}$$
Given $x_0 \in \operatorname{range}(A^T)$ and $x_* = A^\dagger b$, we have $x_k - x_* \in \operatorname{range}(A^T)$. Therefore, by Lemma 1,
$$\|\tilde{r}_k\|_2^2 = \|\tilde{A}(x_k - x_*)\|_2^2 \ge \lambda_{\min}(\tilde{A}^T\tilde{A})\,\|x_k - x_*\|_2^2 = \sigma_{\min}^2(\tilde{A})\,\|x_k - x_*\|_2^2.$$
Combining this with Eq (11), we obtain
$$\|\hat{A}_{T_k}(x_k-x_*)\|_2^2 \ge \frac{\eta t\,\sigma_{\min}^2(\tilde{A})}{\|\tilde{A}\|_F^2}\,\|x_k - x_*\|_2^2. \tag{12}$$
From Eqs (8) and (12) we obtain Eq (2), and the iterative sequence $\{x_k\}_{k=0}^{\infty}$ converges to the minimum-norm solution $x_* = A^\dagger b$ of the system. Hence, Theorem 2 is proved.
In this section, we introduce the multi-right-hand-side leverage score sampling pseudo-inverse-free GBK method, and we prove the corresponding convergence theory.
For most image reconstruction tasks, the system of equations takes the form
$$AX = B, \tag{13}$$
where $A \in \mathbb{R}^{m\times n}$, $X = [x_1, x_2, \ldots, x_d] \in \mathbb{R}^{n\times d}$, $B = [b_1, b_2, \ldots, b_d] \in \mathbb{R}^{m\times d}$, and $d > 1$. This paper addresses the solution of linear systems with multiple right-hand sides. For a single right-hand side, the working row at each iteration is selected by
$$\max_{1\le i\le m}\left\{\frac{|b_i - A^{(i)}x_k|}{\|A^{(i)}\|_2^2}\right\}, \tag{14}$$
and the iteration proceeds by the single-row Kaczmarz step
$$x_{k+1} = x_k + \frac{b_i - A^{(i)}x_k}{\|A^{(i)}\|_2^2}\,(A^{(i)})^T.$$
To solve linear systems with multiple right-hand sides, our approach works with the $j$-th right-hand side vector $b_j$, selecting the working row $T_{kj}$ according to the criterion established in Eq (15):
$$\max_{1\le i\le m}\left\{\frac{|B(i,j) - A^{(i)}x_j^{(k)}|}{\|A^{(i)}\|_2^2}\right\}, \quad j = 1, 2, \ldots, d. \tag{15}$$
The iterative formula for the solution corresponding to the $j$-th right-hand side is
$$x_j^{(k+1)} = x_j^{(k)} + \frac{\tilde{B}(T_{kj}, j) - \tilde{A}^{(T_{kj})}x_j^{(k)}}{\|\tilde{A}^{(T_{kj})}\|_2^2}\,(\tilde{A}^{(T_{kj})})^T. \tag{16}$$
According to the criterion for selecting the index set in Eq (15), each right-hand side corresponds to the selection of one working row. For the system with multiple right-hand sides in Eq (13), the Kaczmarz method can therefore be extended to an iterative format that updates all right-hand sides simultaneously:
$$X^{(k+1)} = X^{(k)} + \big(\tilde{A}^T_{j_{k1}}, \ldots, \tilde{A}^T_{j_{kd}}\big)\operatorname{diag}\left(\frac{\tilde{B}(j_{k1},1) - \tilde{A}_{j_{k1}}x_1^{(k)}}{\|\tilde{A}_{j_{k1}}\|_2^2}, \ldots, \frac{\tilde{B}(j_{kd},d) - \tilde{A}_{j_{kd}}x_d^{(k)}}{\|\tilde{A}_{j_{kd}}\|_2^2}\right),$$
where $X^{(k+1)} = (x_1^{(k+1)}, x_2^{(k+1)}, \ldots, x_d^{(k+1)})$ denotes the approximate solution at the $(k+1)$-th iteration.
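One simultaneous step of this multi-right-hand-side iteration can be sketched as follows; we use the squared-residual form of the row-selection rule, and the sketch is illustrative rather than the authors' code:

```python
import numpy as np

def multi_rhs_step(A, B, X):
    """One simultaneous greedy Kaczmarz step over all right-hand sides."""
    norms2 = np.sum(A * A, axis=1)
    R = B - A @ X                                      # one residual column per RHS
    rows = np.argmax(R**2 / norms2[:, None], axis=0)   # working row per column
    for j, i in enumerate(rows):                       # rank-1 update per column
        X[:, j] += (B[i, j] - A[i] @ X[:, j]) / norms2[i] * A[i]
    return X

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 8))
X_true = rng.standard_normal((8, 4))
B = A @ X_true
X = np.zeros((8, 4))
for _ in range(2000):
    X = multi_rhs_step(A, B, X)
assert np.linalg.norm(X - X_true) < 1e-6
```

All columns share the same residual computation, which is where the simultaneous formulation saves work over solving each right-hand side independently.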
To reduce computational effort, Algorithm 2 is employed to extract from matrix $A$ the rows corresponding to the index set $T_{kj}$; the sub-matrix formed by these rows is denoted $\tilde{A}$, while the sub-matrix of $B$ corresponding to $T_{kj}$ is denoted $\tilde{B}$. Their specific configurations are
$$\tilde{B}^T_{T_{kj}} = \big(B^T_{T_{kj,1}}, B^T_{T_{kj,2}}, \ldots, B^T_{T_{kj,t}}\big) \tag{17}$$
and
$$\tilde{A}^T_{T_{kj}} = \big(A^T_{T_{k,1}}, A^T_{T_{k,2}}, \ldots, A^T_{T_{k,t}}\big). \tag{18}$$
The transposed row $B^T_{T_{kj,i}}$ is $(B(T_{kj,i},1), B(T_{kj,i},2), \ldots, B(T_{kj,i},d))$, where $T_{kj,i}$ is an element of the index set $T_{kj}$. Here $B(T_{kj,i},t)$ denotes the element in the $i$-th row and $t$-th column of the sub-matrix $\tilde{B}$, and $|T_{kj}| = t$ is the number of row indices contained in $T_{kj}$.
The multi-right-hand-side method proposed in this paper is a specialized block approach for solving large-scale linear systems with multiple right-hand sides by a leverage-based pseudo-inverse-free GBK method. We provide a detailed framework for this method, which computes all right-hand sides simultaneously: each iteration works with a matrix $B \in \mathbb{R}^{m\times t}$ containing $t$ right-hand sides, resulting in the selection of $t$ row indices.
This article exclusively discusses the scenario where $\alpha_k = 1$. The convergence theory of the LFGBK method for large-scale linear systems with multiple right-hand sides is presented below:
Theorem 3. Assume that the linear system (13) is consistent, and so are its extracted subsystems. The iterative sequence $\{X^{(k)}\}_{k=0}^{\infty}$ generated by Algorithm 4 from the initial value $X^{(0)}$ converges to the least-squares solution $X_* = A^\dagger B$, and the expected error satisfies
$$\mathbb{E}\,\|X^{(k+1)} - X_*\|_F^2 \le \left(1 - \frac{\sigma_{\min}^2(\tilde{A}^{(T_{kj})})}{\|\tilde{A}^{(T_{kj})}\|_F^2}\right) \mathbb{E}\,\|X^{(k)} - X_*\|_F^2. \tag{19}$$
Algorithm 4 MLFGBK method
1: Input: $A$, $B$, $\ell$, $X^{(0)}$ with columns in $\operatorname{range}(A^T)$, parameter $\eta \in (0,1)$, and the sequences of step sizes $(\alpha_k)_{k\ge0}$ and weights $(w_k)_{k\ge0}$.
2: Output: $X^{(\ell)}$.
3: Initialization: Algorithm 2 generates a leverage score sampling transformation matrix $\tilde{A} = SA \in \mathbb{R}^{d\times n}$ and matrix $\tilde{B} = SB$, where $d \ll m$.
4: for $k = 0, 1, \ldots, \ell-1$ do
5:   Compute $\tilde{\varepsilon}_{kj} = \eta \max_{1\le i\le d}\big\{|\tilde{B}(i,j) - \tilde{A}^{(i)}x_j^{(k)}| / \|\tilde{A}^{(i)}\|_2^2\big\}$, $j = 1, 2, \ldots, d$.
6:   Determine the index sets $T_{kj} = \big\{i_{kj} : |\tilde{B}(i_{kj},j) - \tilde{A}^{(i_{kj})}x_j^{(k)}|^2 \ge \tilde{\varepsilon}_{kj}\|\tilde{A}^{(i_{kj})}\|_2^2\big\}$.
7:   Compute $X^{(k+1)} = X^{(k)} + \big(\tilde{A}^T_{j_{k1}}, \ldots, \tilde{A}^T_{j_{kd}}\big)\operatorname{diag}\left(\dfrac{\tilde{B}(j_{k1},1) - \tilde{A}_{j_{k1}}x_1^{(k)}}{\|\tilde{A}_{j_{k1}}\|_2^2}, \ldots, \dfrac{\tilde{B}(j_{kd},d) - \tilde{A}_{j_{kd}}x_d^{(k)}}{\|\tilde{A}_{j_{kd}}\|_2^2}\right)$.
8: end for
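A compact sketch of Algorithm 4, combining the leverage score sketch with per-column greedy row updates; the concrete sketch size `d` and the without-replacement sampling are our assumptions:

```python
import numpy as np

def mlfgbk(A, B, d, iters=2000, rng=None):
    """Sketch of Algorithm 4: leverage-score sketch + per-column greedy updates."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    p = np.sum(U * U, axis=1)
    p /= p.sum()
    idx = rng.choice(m, size=d, replace=False, p=p)
    At, Bt = A[idx], B[idx]                  # sketched system, d << m
    norms2 = np.sum(At * At, axis=1)
    X = np.zeros((n, B.shape[1]))
    for _ in range(iters):
        R = Bt - At @ X
        rows = np.argmax(R**2 / norms2[:, None], axis=0)   # working row per RHS
        for j, i in enumerate(rows):
            X[:, j] += (Bt[i, j] - At[i] @ X[:, j]) / norms2[i] * At[i]
    return X

rng = np.random.default_rng(6)
A = rng.standard_normal((800, 10))
X_true = rng.standard_normal((10, 3))
B = A @ X_true
X = mlfgbk(A, B, d=100, rng=rng)
assert np.linalg.norm(X - X_true) < 1e-6
```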
Here the tilde-decorated matrices $\tilde{A}$ and $\tilde{B}$ denote the sub-matrices formed by the rows of $A$ and $B$ corresponding to the index set $T_{kj}$.
Proof. According to Algorithm 4 and reference [39], we have
$$x_j^{(k+1)} = x_j^{(k)} + \frac{\tilde{B}(T_{kj}, j) - \tilde{A}^{(T_{kj})}x_j^{(k)}}{\|\tilde{A}^{(T_{kj})}\|_2^2}\,(\tilde{A}^{(T_{kj})})^T.$$
Since $X_* = A^\dagger B$ is the least-squares solution, we infer that $x_j^* = A^\dagger B(:,j)$ for $j = 1, 2, \ldots, d$, where $B(:,j)$ denotes the $j$-th column of matrix $B$.
Further,
$$\mathbb{E}\,\|X^{(k+1)} - X_*\|_F^2 = \mathbb{E}\sum_{j=1}^{d}\|x_j^{(k+1)} - x_j^*\|_2^2 = \sum_{j=1}^{d}\mathbb{E}\,\|x_j^{(k+1)} - x_j^*\|_2^2. \tag{20}$$
For j=1,2,…,d, it can be deduced that
$$\mathbb{E}\,\|x_j^{(k+1)} - x_j^*\|_2^2 = \|x_j^{(k)} - x_j^*\|_2^2 - \mathbb{E}\,\|x_j^{(k+1)} - x_j^{(k)}\|_2^2.$$
Then
$$\mathbb{E}\,\|x_j^{(k+1)} - x_j^{(k)}\|_2^2 = \frac{|\tilde{B}(T_{kj},j) - \tilde{A}^{(T_{kj})}x_j^{(k)}|^2}{\|\tilde{A}^{(T_{kj})}\|_2^2} = \max_{T_{kj,i}\in T_{kj}}\left\{\frac{|\tilde{B}(T_{kj,i},j) - \tilde{A}^{(T_{kj,i})}x_j^{(k)}|^2}{\|\tilde{A}^{(T_{kj,i})}\|_2^2}\right\}.$$
The specific details of the term $\mathbb{E}\,\|x_j^{(k+1)} - x_j^{(k)}\|_2^2$ are elaborated in the proof of Theorem 3.2 in [39].
Due to
$$\max_{T_{kj,i}\in T_{kj}}\left\{\frac{|\tilde{B}(T_{kj,i},j)-\tilde{A}^{(T_{kj,i})}x_j^{(k)}|^2}{\|\tilde{A}^{(T_{kj,i})}\|_2^2}\right\} \ge \frac{\|\tilde{A}^{(T_{kj})}x_j^{(k)}-\tilde{B}(:,j)\|_2^2}{\sum_{i=1}^{t}\|\tilde{A}^{(T_{kj,i})}\|_2^2} \ge \frac{\sigma_{\min}^2(\tilde{A}^{(T_{kj})})}{\|\tilde{A}^{(T_{kj})}\|_F^2}\,\|x_j^{(k)}-x_j^*\|_2^2, \tag{21}$$
where the first inequality holds because the maximum of the per-row terms is at least their norm-weighted average.
In this context, $\tilde{B}(:,j)$ denotes the $j$-th column of the matrix $\tilde{B}$. Hence
$$\mathbb{E}\,\|x_j^{(k+1)} - x_j^*\|_2^2 = \|x_j^{(k)} - x_j^*\|_2^2 - \mathbb{E}\,\|x_j^{(k+1)} - x_j^{(k)}\|_2^2 \le \left(1 - \frac{\sigma_{\min}^2(\tilde{A}^{(T_{kj})})}{\|\tilde{A}^{(T_{kj})}\|_F^2}\right)\|x_j^{(k)} - x_j^*\|_2^2. \tag{22}$$
Thus, by combining Eqs (20) and (22), we can derive inequality (19).
The proof of Theorem 3 shows that the convergence rate of Algorithm 4 depends on the sub-matrix $\tilde{A}$ sampled at each leverage score sampling step.
In this section, we evaluate several algorithms on large-scale image restoration systems through comparative examples: the GBK method [29], the semi-stochastic block Kaczmarz method with simple random sampling for a single right-hand side (SSRBK) [40], the LFGBK method for a single right-hand side (SLFGBK), the semi-stochastic block Kaczmarz method with simple random sampling for multiple right-hand sides (MSRBK) [40], and the LFGBK method for multiple right-hand sides (MLFGBK). All experiments are implemented in MATLAB, and we report the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), mean squared error (MSE), and computational time (CPU, in seconds). Following the approach of [27], the PSNR, SSIM, MSE, and CPU figures are averages over 50 repeated runs. In all computations we initialize the matrix x = zeros(n,m) and set the right-hand side B = AX, where X represents images from MATLAB's Cameraman, Phantom, and Mri datasets, each sized 100×100. Matrix A is constructed with a number of rows equal to the product of the number of X-ray beams (4), the range of scanning angles t (0° to 179°), and the image pixels. The stopping criterion is $\mathrm{RSE} = \|X_k - X_*\|_2^2 / \|X_*\|_2^2 \le 10^{-6}$. In practice, the condition $d = O(n^2/(\delta\theta^2))$ is quite stringent, but in many real-world computations a sketching size of $d = n^2$ yields satisfactory results. We consider matrices $A \in \mathbb{R}^{(4\cdot 180\cdot n)\times m}$ of two types: type (a), generated by radon(), and type (b), generated by randn().
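For reference, the reported error metrics can be computed as follows (SSIM is omitted since it requires windowed local statistics; the peak value of 1.0 is an assumption for images scaled to [0, 1]):

```python
import numpy as np

def mse(x, x_ref):
    """Mean squared error between reconstruction x and reference image x_ref."""
    return np.mean((x - x_ref) ** 2)

def psnr(x, x_ref, peak=1.0):
    """Peak signal-to-noise ratio in dB, for images scaled to [0, peak]."""
    return 10 * np.log10(peak ** 2 / mse(x, x_ref))

def rse(xk, x_star):
    """Relative squared error used as the stopping criterion."""
    return np.linalg.norm(xk - x_star) ** 2 / np.linalg.norm(x_star) ** 2

x_ref = np.linspace(0, 1, 100).reshape(10, 10)   # toy "image"
x = x_ref + 0.01                                 # uniform reconstruction error
assert abs(mse(x, x_ref) - 1e-4) < 1e-12         # MSE of a constant 0.01 error
assert abs(psnr(x, x_ref) - 40.0) < 1e-9         # PSNR = 10*log10(1/1e-4) = 40 dB
assert rse(x, x_ref) < 1e-3                      # far from the 1e-6 threshold
```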
Based on the numerical results in Tables 1 and 2, where A is generated using radon, we can draw the following conclusions: ⅰ) The GBK, SSRBK, SLFGBK, MSRBK, and MLFGBK methods are all effective in solving the image restoration systems. ⅱ) The five methods achieve similar restoration quality; however, the SLFGBK and MLFGBK methods significantly outperform the GBK, SSRBK, and MSRBK methods in time efficiency. ⅲ) The SLFGBK method exhibits a stronger acceleration effect at η = 0.9 than at η = 0.8. ⅳ) The MLFGBK method likewise shows a more pronounced acceleration effect at η = 0.9 than at η = 0.8.
Table 1. Numerical results for GBK, SSRBK, and SLFGBK with A generated by radon().

| data | Metrics | GBK η=0.8 | GBK η=0.9 | SSRBK η=0.8 | SSRBK η=0.9 | SLFGBK η=0.8 | SLFGBK η=0.9 |
|------|---------|-----------|-----------|-------------|-------------|--------------|--------------|
| Cameraman | PSNR | 19.595 | 17.996 | 19.076 | 17.640 | 17.595 | 17.519 |
| | SSIM | 0.999 | 0.998 | 0.999 | 0.998 | 0.999 | 0.999 |
| | MSE | 0.011 | 0.015 | 0.012 | 0.017 | 0.017 | 0.017 |
| | CPU | 234.78 | 335.37 | 55.00 | 94.755 | 11.581 | 7.543 |
| Phantom | PSNR | 73.636 | 72.694 | 73.639 | 73.655 | 72.377 | 72.430 |
| | SSIM | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | MSE | 0 | 0 | 0 | 0 | 0 | 0 |
| | CPU | 266.67 | 279.43 | 56.186 | 75.168 | 12.496 | 8.556 |
| Mri | PSNR | 29.319 | 28.936 | 28.068 | 29.023 | 28.057 | 28.112 |
| | SSIM | 0.969 | 0.975 | 0.924 | 0.919 | 0.962 | 0.978 |
| | MSE | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| | CPU | 259.86 | 375.48 | 60.963 | 162.06 | 19.340 | 15.563 |
Table 2. Numerical results for SLFGBK, MSRBK, and MLFGBK with A generated by radon().

| data | Metrics | SLFGBK η=0.8 | SLFGBK η=0.9 | MSRBK η=0.8 | MSRBK η=0.9 | MLFGBK η=0.8 | MLFGBK η=0.9 |
|------|---------|--------------|--------------|-------------|-------------|--------------|--------------|
| Cameraman | PSNR | 17.595 | 17.519 | 17.611 | 17.732 | 17.894 | 17.876 |
| | SSIM | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| | MSE | 0.017 | 0.017 | 0.017 | 0.016 | 0.016 | 0.016 |
| | CPU | 11.581 | 7.543 | 18.027 | 14.207 | 3.196 | 2.880 |
| Phantom | PSNR | 72.377 | 72.430 | 72.492 | 72.532 | 72.486 | 72.760 |
| | SSIM | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | MSE | 0 | 0 | 0 | 0 | 0 | 0 |
| | CPU | 12.496 | 8.556 | 43.106 | 38.075 | 20.874 | 20.023 |
| Mri | PSNR | 28.057 | 28.112 | 28.081 | 28.234 | 28.161 | 28.506 |
| | SSIM | 0.962 | 0.978 | 0.901 | 0.899 | 0.895 | 0.891 |
| | MSE | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| | CPU | 19.340 | 15.563 | 42.535 | 37.575 | 18.237 | 17.539 |
Based on the numerical results in Tables 3 and 4, where A is generated using randn, we can draw the following conclusions: ⅰ) The GBK, SSRBK, SLFGBK, MSRBK, and MLFGBK methods are all effective in solving the image restoration systems. ⅱ) The five methods achieve similar restoration quality; however, the SLFGBK and MLFGBK methods significantly outperform the GBK, SSRBK, and MSRBK methods in time efficiency. ⅲ) The SLFGBK method exhibits a stronger acceleration effect at η = 0.9 than at η = 0.8. ⅳ) The MLFGBK method likewise shows a more pronounced acceleration effect at η = 0.9 than at η = 0.8.
Table 3. Numerical results for GBK, SSRBK, and SLFGBK with A generated by randn().

| data | Metrics | GBK η=0.8 | GBK η=0.9 | SSRBK η=0.8 | SSRBK η=0.9 | SLFGBK η=0.8 | SLFGBK η=0.9 |
|------|---------|-----------|-----------|-------------|-------------|--------------|--------------|
| Cameraman | PSNR | 20.122 | 19.195 | 18.419 | 17.594 | 17.519 | 17.552 |
| | SSIM | 0.999 | 0.998 | 0.998 | 0.998 | 0.999 | 0.999 |
| | MSE | 0.009 | 0.012 | 0.014 | 0.017 | 0.017 | 0.017 |
| | CPU | 3.651 | 6.916 | 1.433 | 3.596 | 1.698 | 0.723 |
| Phantom | PSNR | 73.904 | 72.475 | 74.040 | 73.704 | 72.619 | 72.464 |
| | SSIM | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | MSE | 0 | 0 | 0 | 0 | 0 | 0 |
| | CPU | 68.434 | 129.151 | 19.739 | 35.465 | 7.776 | 6.878 |
| Mri | PSNR | 30.615 | 29.072 | 29.289 | 28.899 | 28.080 | 28.169 |
| | SSIM | 0.912 | 0.900 | 0.893 | 0.884 | 0.892 | 0.905 |
| | MSE | 0 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| | CPU | 63.758 | 122.983 | 28.132 | 31.338 | 9.451 | 8.641 |
Table 4. Numerical results for SLFGBK, MSRBK, and MLFGBK with A generated by randn().

| data | Metrics | SLFGBK η=0.8 | SLFGBK η=0.9 | MSRBK η=0.8 | MSRBK η=0.9 | MLFGBK η=0.8 | MLFGBK η=0.9 |
|------|---------|--------------|--------------|-------------|-------------|--------------|--------------|
| Cameraman | PSNR | 17.519 | 17.552 | 17.748 | 17.762 | 17.860 | 17.805 |
| | SSIM | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| | MSE | 0.017 | 0.017 | 0.016 | 0.016 | 0.016 | 0.016 |
| | CPU | 1.698 | 0.723 | 14.582 | 11.879 | 2.005 | 1.672 |
| Phantom | PSNR | 72.619 | 72.464 | 72.398 | 72.629 | 72.695 | 72.353 |
| | SSIM | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| | MSE | 0 | 0 | 0 | 0 | 0 | 0 |
| | CPU | 7.776 | 6.878 | 47.513 | 45.431 | 13.824 | 13.757 |
| Mri | PSNR | 28.080 | 28.169 | 28.241 | 28.093 | 28.241 | 28.328 |
| | SSIM | 0.892 | 0.905 | 0.904 | 0.904 | 0.902 | 0.897 |
| | MSE | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| | CPU | 9.451 | 8.641 | 54.802 | 38.773 | 11.533 | 10.332 |
This study explores a new image reconstruction algorithm, the LFGBK method, which utilizes a greedy strategy. Its core idea is to transform the image reconstruction problem into solving a linear system with multiple right-hand sides. In traditional Kaczmarz algorithms, the iterative process of gradually approaching the solution is often inefficient and sensitive to the choice of initial value. The LFGBK method introduces leverage score sampling and extends from linear systems with a single right-hand side to systems with multiple right-hand sides, which is crucial for handling large-scale problems because pseudo-inverse computation on large matrices is time-consuming and resource-intensive. By avoiding pseudo-inverse calculation, the LFGBK method significantly improves computational efficiency. To validate the proposed algorithm, this study combines theoretical analysis and simulation experiments: the theoretical analysis confirms the convergence of the LFGBK method, ensuring its reliability and stability, while the simulation experiments, compared with the traditional filtered back projection (FBP) method, demonstrate its advantages in image reconstruction quality. The experimental results show that the LFGBK method preserves image details and reduces noise, proving its practicality and superiority in image reconstruction tasks.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors declare there is no conflict of interest.
[1] | A. Biancofiore, W. Stoll, Another proof of the lemma of the logarithmic derivative in several complex variables, In: J. Fornaess (ed.), Recent developments in several complex variables, Princeton University Press, Princeton, 1981, 29–45. |
[2] |
T. B. Cao, R. J. Korhonen, A new version of the second main theorem for meromorphic mappings intersecting hyperplanes in several complex variables, J. Math. Anal. Appl., 444 (2016), 1114–1132. doi: 10.1016/j.jmaa.2016.06.050
![]() |
[3] |
Y. M. Chiang, S. J. Feng, On the Nevanlinna characteristic of f(z+η) and difference equations in the complex plane, Ramanujan J., 16 (2008), 105–129. doi: 10.1007/s11139-007-9101-1
![]() |
[4] | R. Courant, D. Hilbert, Methods of Mathematical Physics, Vol II, partial differential equations, Interscience, New York, 1962. |
[5] | L. Y. Gao, Entire solutions of two types of systems of complex differential-difference equations, Acta Math. Sinica, Chin. Ser., 59 (2016), 677–685. |
[6] | P. R. Garabedian, Partial Differential Equations, Wiley, New York, 1964. |
[7] |
F. Gross, On the equation fn+gn=1, Bull. Am. Math. Soc., 72 (1966), 86–88. doi: 10.1090/S0002-9904-1966-11429-5
![]() |
[8] |
R. G. Halburd, R. J. Korhonen, Difference analogue of the lemma on the logarithmic derivative with applications to difference equations, J. Math. Anal. Appl., 314 (2006), 477–487. doi: 10.1016/j.jmaa.2005.04.010
![]() |
[9] |
R. G. Halburd, R. J. Korhonen, Finite-order meromorphic solutions and the discrete Painlevé equations, Proc. London Math. Soc., 94 (2007), 443–474. doi: 10.1112/plms/pdl012
![]() |
[10] | R. G. Halburd, R. J. Korhonen, Nevanlinna theory for the difference operator, Ann. Acad. Sci. Fenn. Math., 31 (2006), 463–478. |
[11] |
J. Heittokangas, R. Korhonen, I. Laine, J. Rieppo, K. Tohge, Complex difference equations of Malmquist type, Comput. Methods Funct. Theory, 1 (2001), 27–39. doi: 10.1007/BF03320974
![]() |
[12] | P. C. Hu, Malmquist type theorem and factorization of meromorphic solutions of partial differential equations, Complex Var., 27 (1995), 269–285. |
[13] |
P. C. Hu, B. Q. Li, On meromorphic solutions of nonlinear partial differential equations of first order, J. Math. Anal. Appl., 377 (2011), 881–888. doi: 10.1016/j.jmaa.2010.12.004
![]() |
[14] | P. C. Hu, P. Li, C. C. Yang, Unicity of Meromorphic Mappings, Advances in Complex Analysis and its Applications, Vol. 1, Kluwer Academic Publishers, Dordrecht, Boston, London, 2003. |
[15] | P. C. Hu, C. C. Yang, Uniqueness of meromorphic functions on Cm, Complex Var., 30 (1996), 235–270. |
[16] |
D. Khavinson, A note on entire solutions of the eiconal equation, Am. Math. Mon., 102 (1995), 159–161. doi: 10.1080/00029890.1995.11990551
![]() |
[17] |
R. J. Korhonen, A difference Picard theorem for meromorphic functions of several variables, Comput. Methods Funct. Theory, 12 (2012), 343–361. doi: 10.1007/BF03321831
![]() |
[18] | I. Laine, Nevanlinna Theory and Complex Differential Equations, Walter de Gruyter, Berlin, 1993. |
[19] | B. Q. Li, On entire solutions of Fermat type partial differential equations, Int. J. Math., 15 (2004), 473–485. doi: 10.1142/S0129167X04002399 |
[20] | B. Q. Li, Entire solutions of (u_{z_1})^m + (u_{z_2})^n = e^g, Nagoya Math. J., 178 (2005), 151–162. doi: 10.1017/S0027763000009156 |
[21] | B. Q. Li, Entire solutions of eiconal type equations, Arch. Math., 89 (2007), 350–357. doi: 10.1007/s00013-007-2118-2 |
[22] | K. Liu, Meromorphic functions sharing a set with applications to difference equations, J. Math. Anal. Appl., 359 (2009), 384–393. doi: 10.1016/j.jmaa.2009.05.061 |
[23] | K. Liu, T. B. Cao, H. Z. Cao, Entire solutions of Fermat type differential-difference equations, Arch. Math., 99 (2012), 147–155. doi: 10.1007/s00013-012-0408-9 |
[24] | K. Liu, T. B. Cao, Entire solutions of Fermat type difference differential equations, Electron. J. Differ. Equ., 2013 (2013), 1–10. doi: 10.1186/1687-1847-2013-1 |
[25] | F. Lü, W. R. Lü, C. P. Li, J. F. Xu, Growth and uniqueness related to complex differential and difference equations, Results Math., 74 (2019), 30. doi: 10.1007/s00025-018-0945-z |
[26] | F. Lü, Z. Li, Meromorphic solutions of Fermat type partial differential equations, J. Math. Anal. Appl., 478 (2019), 864–873. doi: 10.1016/j.jmaa.2019.05.058 |
[27] | P. Montel, Leçons sur les familles normales de fonctions analytiques et leurs applications, Gauthier-Villars, Paris, 1927, 135–136. |
[28] | A. Naftalevich, On a differential-difference equation, Mich. Math. J., 22 (1976), 205–223. |
[29] | A. Naftalevich, On meromorphic solutions of a linear differential-difference equation with constant coefficients, Mich. Math. J., 22 (1990), 205–223. |
[30] | G. Pólya, On an integral function of an integral function, J. Lond. Math. Soc., 1 (1926), 12–15. |
[31] | X. G. Qi, Y. Liu, L. Z. Yang, A note on solutions of some differential-difference equations, J. Contemp. Math. Anal. (Arm. Acad. Sci.), 52 (2017), 128–133. doi: 10.3103/S1068362317030037 |
[32] | X. G. Qi, L. Z. Yang, Entire solutions of some differential-difference equations, Bull. Iran. Math. Soc., 46 (2020), 579–591. doi: 10.1007/s41980-019-00277-5 |
[33] | J. Rieppo, On a class of complex functional equations, Ann. Acad. Sci. Fenn. Math., 32 (2007), 151–170. |
[34] | L. I. Ronkin, Introduction to the Theory of Entire Functions of Several Variables, Moscow: Nauka, 1971 (Russian), American Mathematical Society, Providence, 1974. |
[35] | E. G. Saleeby, Entire and meromorphic solutions of Fermat type partial differential equations, Analysis, 19 (1999), 69–76. doi: 10.1524/anly.1999.19.1.69 |
[36] | W. Stoll, Holomorphic Functions of Finite Order in Several Complex Variables, American Mathematical Society, Providence, 1974. |
[37] | H. Y. Xu, S. Y. Liu, Q. P. Li, Entire solutions for several systems of nonlinear difference and partial differential-difference equations of Fermat-type, J. Math. Anal. Appl., 483 (2020), 123641. doi: 10.1016/j.jmaa.2019.123641 |
[38] | H. Y. Xu, H. Wang, Notes on the existence of entire solutions for several partial differential-difference equations, Bull. Iran. Math. Soc., 46 (2020), 1–13. doi: 10.1007/s41980-019-00237-z |
[39] | L. Xu, T. B. Cao, Solutions of complex Fermat-type partial difference and differential-difference equations, Mediterr. J. Math., 15 (2018), 1–14. doi: 10.1007/s00009-017-1047-y |
[40] | L. Xu, T. B. Cao, Correction to: Solutions of complex Fermat-type partial difference and differential-difference equations, Mediterr. J. Math., 17 (2020), 1–4. doi: 10.1007/s00009-019-1430-y |
[41] | Z. Ye, On Nevanlinna's second main theorem in projective space, Invent. Math., 122 (1995), 475–507. doi: 10.1007/BF01231453 |
| Data | Metric | GBK (η=0.8) | GBK (η=0.9) | SSRBK (η=0.8) | SSRBK (η=0.9) | SLFGBK (η=0.8) | SLFGBK (η=0.9) |
|---|---|---|---|---|---|---|---|
| Cameraman | PSNR | 19.595 | 17.996 | 19.076 | 17.640 | 17.595 | 17.519 |
| Cameraman | SSIM | 0.999 | 0.998 | 0.999 | 0.998 | 0.999 | 0.999 |
| Cameraman | MSE | 0.011 | 0.015 | 0.012 | 0.017 | 0.017 | 0.017 |
| Cameraman | CPU | 234.78 | 335.37 | 55.00 | 94.755 | 11.581 | 7.543 |
| Phantom | PSNR | 73.636 | 72.694 | 73.639 | 73.655 | 72.377 | 72.430 |
| Phantom | SSIM | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Phantom | MSE | 0 | 0 | 0 | 0 | 0 | 0 |
| Phantom | CPU | 266.67 | 279.43 | 56.186 | 75.168 | 12.496 | 8.556 |
| Mri | PSNR | 29.319 | 28.936 | 28.068 | 29.023 | 28.057 | 28.112 |
| Mri | SSIM | 0.969 | 0.975 | 0.924 | 0.919 | 0.962 | 0.978 |
| Mri | MSE | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Mri | CPU | 259.86 | 375.48 | 60.963 | 162.06 | 19.340 | 15.563 |
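The PSNR and MSE columns above are mutually consistent under the usual convention PSNR = 10·log10(MAX²/MSE) with pixel intensities normalized to [0, 1] (so MAX = 1); e.g., an MSE of 0.011 gives roughly the reported 19.595 dB for Cameraman/GBK at η = 0.8. A minimal sketch of that check, under the normalization assumption (the helper name `psnr_from_mse` is ours):

```python
import math

def psnr_from_mse(mse: float, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, assuming pixel values in [0, max_val]."""
    if mse <= 0:
        return float("inf")  # MSE of 0 means a perfect reconstruction
    return 10.0 * math.log10(max_val ** 2 / mse)

# Cameraman / GBK / eta = 0.8: the table reports MSE = 0.011 and PSNR = 19.595;
# recomputing PSNR from the (rounded) MSE recovers the tabulated value
# up to the rounding of the reported MSE.
print(round(psnr_from_mse(0.011), 2))  # ~19.59
```

The Phantom rows, with MSE reported as 0 to three decimals, correspondingly show PSNR values above 72 dB.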
| Data | Metric | SLFGBK (η=0.8) | SLFGBK (η=0.9) | MSRBK (η=0.8) | MSRBK (η=0.9) | MLFGBK (η=0.8) | MLFGBK (η=0.9) |
|---|---|---|---|---|---|---|---|
| Cameraman | PSNR | 17.595 | 17.519 | 17.611 | 17.732 | 17.894 | 17.876 |
| Cameraman | SSIM | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| Cameraman | MSE | 0.017 | 0.017 | 0.017 | 0.016 | 0.016 | 0.016 |
| Cameraman | CPU | 11.581 | 7.543 | 18.027 | 14.207 | 3.196 | 2.880 |
| Phantom | PSNR | 72.377 | 72.430 | 72.492 | 72.532 | 72.486 | 72.760 |
| Phantom | SSIM | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Phantom | MSE | 0 | 0 | 0 | 0 | 0 | 0 |
| Phantom | CPU | 12.496 | 8.556 | 43.106 | 38.075 | 20.874 | 20.023 |
| Mri | PSNR | 28.057 | 28.112 | 28.081 | 28.234 | 28.161 | 28.506 |
| Mri | SSIM | 0.962 | 0.978 | 0.901 | 0.899 | 0.895 | 0.891 |
| Mri | MSE | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Mri | CPU | 19.340 | 15.563 | 42.535 | 37.575 | 18.237 | 17.539 |
| Data | Metric | GBK (η=0.8) | GBK (η=0.9) | SSRBK (η=0.8) | SSRBK (η=0.9) | SLFGBK (η=0.8) | SLFGBK (η=0.9) |
|---|---|---|---|---|---|---|---|
| Cameraman | PSNR | 20.122 | 19.195 | 18.419 | 17.594 | 17.519 | 17.552 |
| Cameraman | SSIM | 0.999 | 0.998 | 0.998 | 0.998 | 0.999 | 0.999 |
| Cameraman | MSE | 0.009 | 0.012 | 0.014 | 0.017 | 0.017 | 0.017 |
| Cameraman | CPU | 3.651 | 6.916 | 1.433 | 3.596 | 1.698 | 0.723 |
| Phantom | PSNR | 73.904 | 72.475 | 74.040 | 73.704 | 72.619 | 72.464 |
| Phantom | SSIM | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Phantom | MSE | 0 | 0 | 0 | 0 | 0 | 0 |
| Phantom | CPU | 68.434 | 129.151 | 19.739 | 35.465 | 7.776 | 6.878 |
| Mri | PSNR | 30.615 | 29.072 | 29.289 | 28.899 | 28.080 | 28.169 |
| Mri | SSIM | 0.912 | 0.900 | 0.893 | 0.884 | 0.892 | 0.905 |
| Mri | MSE | 0 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Mri | CPU | 63.758 | 122.983 | 28.132 | 31.338 | 9.451 | 8.641 |
| Data | Metric | SLFGBK (η=0.8) | SLFGBK (η=0.9) | MSRBK (η=0.8) | MSRBK (η=0.9) | MLFGBK (η=0.8) | MLFGBK (η=0.9) |
|---|---|---|---|---|---|---|---|
| Cameraman | PSNR | 17.519 | 17.552 | 17.748 | 17.762 | 17.860 | 17.805 |
| Cameraman | SSIM | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| Cameraman | MSE | 0.017 | 0.017 | 0.016 | 0.016 | 0.016 | 0.016 |
| Cameraman | CPU | 1.698 | 0.723 | 14.582 | 11.879 | 2.005 | 1.672 |
| Phantom | PSNR | 72.619 | 72.464 | 72.398 | 72.629 | 72.695 | 72.353 |
| Phantom | SSIM | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| Phantom | MSE | 0 | 0 | 0 | 0 | 0 | 0 |
| Phantom | CPU | 7.776 | 6.878 | 47.513 | 45.431 | 13.824 | 13.757 |
| Mri | PSNR | 28.080 | 28.169 | 28.241 | 28.093 | 28.241 | 28.328 |
| Mri | SSIM | 0.892 | 0.905 | 0.904 | 0.904 | 0.902 | 0.897 |
| Mri | MSE | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Mri | CPU | 9.451 | 8.641 | 54.802 | 38.773 | 11.533 | 10.332 |