
Quaternionic Hilbert (Q-Hilbert) spaces are frequently used in applied physical sciences and especially in quantum physics. In order to solve some problems of many nonlinear physical systems, the frame theory of Q-Hilbert spaces was studied. Frames in Q-Hilbert spaces not only retain the frame properties, but also have some advantages, such as a simple structure for approximation. In this paper, we first characterized Hilbert (orthonormal) bases, frames, dual frames and Riesz bases, and obtained the accurate expressions of all dual frames of a given frame by taking advantage of pre-frame operators. Second, we discussed the constructions of frames with the help of the pre-frame operators and gained some more general methods to construct new frames. Moreover, we obtained a necessary and sufficient condition for the finite sum of frames to be a (tight) frame, and the obtained results further enriched and improved the frame theory of the Q-Hilbert space.
Citation: Yan Ling Fu, Wei Zhang. Some results on frames by pre-frame operators in Q-Hilbert spaces[J]. AIMS Mathematics, 2023, 8(12): 28878-28896. doi: 10.3934/math.20231480
Throughout the evolution of digital image processing, a variety of processing technologies have been developed, including the wavelet transform, partial differential equations (PDEs), and stochastic models. In image processing, edges are the most important visual features of an image. In 1992, Rudin et al. proposed the well-known total variation (TV) model [1], now commonly called the ROF model. The ROF model can balance edge preservation and noise removal because it exploits the inherent regularity of the image. The ROF model is as follows:
$$\min_u \int_\Omega \frac{\lambda}{2}\|u-u_0\|_2^2+\|u\|_{TV}\,d\Omega, \quad (1)$$
where $\Omega\subset\mathbb{R}^n$ is an open bounded set with $n\ge 2$ [2], $u_0(x,y)$ denotes the noisy image, and $u(x,y)$ denotes the desired clean image. $\lambda$ is a positive real number and $\|u\|_{TV}$ denotes the TV of $u(x,y)$, defined as $\|\nabla u\|_1$. The ROF model has played an important role in image denoising, deblurring and inpainting. However, the solution of the ROF model is a piecewise constant function, so it easily produces a blocky effect in flat regions. To reduce this effect, scholars have proposed a fourth-order PDE [3] and the LLT model [4], both of which can effectively remove noise while reducing the blocky effect. The LLT model is as follows:
$$\min_u \int_\Omega \frac{\lambda}{2}\|u-u_0\|_2^2+\|\Delta u\|_1\,d\Omega, \quad (2)$$
where $\Delta u=(\partial_x^2 u,\partial_y^2 u)$ and $\|\Delta u\|_1=|\partial_x^2 u|+|\partial_y^2 u|$. The disadvantage of the LLT model is that it oversmooths edge regions. To address this problem, an adaptive fourth-order PDE has been proposed [5]. Both the ROF model and the LLT model use an $L_2$ data fidelity term. The choice of data fidelity term depends on the type of noise that corrupts the image, and in general images are affected by several types of noise. If an image is affected only by a mixture of Gaussian and Poisson noise, the noise can be converted into additive Gaussian noise; this is probably why most of the literature is devoted to removing Gaussian noise. The $L_2$ data fidelity term is suitable for removing additive Gaussian noise, but it is almost ineffective against other kinds of noise. The $L_1$ data fidelity term can effectively remove noise that is not additive Gaussian, such as Laplacian noise and impulse noise [6,7]. The $L_1/TV$ model is as follows:
$$\min_u \int_\Omega \frac{\lambda}{2}\|u-u_0\|_1+\|u\|_{TV}\,d\Omega. \quad (3)$$
The $L_1/TV$ model has some unique features: it does not destroy the geometric structures or the morphological invariance of the images under processing [8,9]. Therefore, the $L_1/TV$ denoising model is widely used in practical applications, such as face recognition [10], shape denoising [11] and image texture decomposition [12]. In fact, images are generally not corrupted by only one type of noise; the mixture of Gaussian and salt-and-pepper noise is considered in this paper. In particular, salt-and-pepper noise is a simple type of impulse noise [13]. An $L_1$-$L_2$ data fidelity term was introduced in [14] and shown to be suitable for removing mixtures of Gaussian and impulse noise. The $L_1L_2/TV$ model is as follows:
$$\min_u \int_\Omega \lambda\|u-u_0\|_2^2+\mu\|u-u_0\|_1+\|u\|_{TV}\,d\Omega, \quad (4)$$
where $\lambda,\mu\ge 0$. The $L_1L_2/TV$ model (4) generalizes (1) and (3): setting $\lambda=0$ in (4) yields the $L_1/TV$ model, and setting $\mu=0$ yields the $L_2/TV$ model. The choice of parameters critically affects the quality of image restoration. Small values of $\lambda$ and $\mu$ lead to an oversmoothed reconstruction that eliminates detail along with noise, whereas large values of $\lambda$ and $\mu$ retain noise [15]. An improvement of the $L_1L_2/TV$ model was proposed in [16], where $\|Wu\|_1$ replaces the TV term. In [17], the authors used the second-order total generalized variation [18] as a regularization term and incorporated box constraints.
In this paper, the fractional-order TV regularization term is the focus. We propose a combined model with a fractional-order TV regularization term, an $L_1$ data fidelity term, and an $L_2$ data fidelity term, which we call the $L_1L_2/TV^\alpha$ model. The model aims to remove mixtures of Gaussian and salt-and-pepper noise.
It is difficult to minimize the objective function because the fractional-order TV regularization term is non-differentiable, and numerous efforts have been devoted to this issue. Existing methods for solving the fractional-order TV model include the primal-dual algorithm [19], fractional-order Euler-Lagrange equations [20], the alternating projection algorithm for the fractional-order multi-scale variational model [21,22], and the majorization-minimization algorithm [23]. The split Bregman iterative algorithm [24] and the alternating direction method of multipliers [25] can also handle non-differentiable terms effectively. Recently, proximity algorithms [26-30] for solving the ROF model or the $L_1/TV$ denoising model have attracted widespread attention in digital image processing. The method represents the non-differentiable term $\|u\|_{TV}$ as a convex function composed with a linear transformation. Computing the proximity operator of the convex function can then be reformulated as solving a fixed-point equation, from which the proximity operator is obtained. The convergence of the fixed-point proximity algorithm has been proven [26]. The $L_1/TV$ model requires solving two fixed-point equations because of the non-differentiability of the $L_1$ data fidelity term [28]. In this paper, the proximity algorithm is used to solve the $L_1L_2/TV^\alpha$ model.
The structure of the paper is as follows. Section 1 introduces the prior works and our motivation. Section 2 proposes the L1L2/TVα model and proves the existence of its solution. The proximity algorithm is applied to solve the model and the convergence of the algorithm is proved. Section 3 presents several numerical experiments and shows the results. Finally, Section 4 concludes the paper.
This section first introduces two important concepts for convex functions: the proximity operator and the subdifferential. The relationship between them will also be given.
First, we introduce some notation. We denote the $m$-dimensional Euclidean space by $\mathbb{R}^m$. For $x,y\in\mathbb{R}^m$, we define the standard inner product on $\mathbb{R}^m$ as $\langle x,y\rangle:=\sum_{i=1}^m x_i y_i$ and the $p$-norm of a vector $x\in\mathbb{R}^m$ as $\|x\|_p:=\left(\sum_{i=1}^m |x_i|^p\right)^{1/p}$. The proximity operator was introduced in [31]. We recall its definition as follows.
Definition 2.1. (Proximity operator): Let $f$ be a proper lower semi-continuous convex function on $\mathbb{R}^m$. The proximity operator of $f$ is defined for any $x\in\mathbb{R}^m$ by $\mathrm{prox}_f(x)=\operatorname*{argmin}_u\left\{\frac{1}{2}\|u-x\|_2^2+f(u):u\in\mathbb{R}^m\right\}$.
Definition 2.2. (Subdifferential): Let $f$ be a proper lower semi-continuous convex function on $\mathbb{R}^m$. The subdifferential of $f$ at $x\in\mathbb{R}^m$ is defined by $\partial f(x):=\{y\in\mathbb{R}^m : f(z)\ge f(x)+\langle y,z-x\rangle,\ \forall z\in\mathbb{R}^m\}$.
The following lemma describes the relationship between the proximity operator and the subdifferential of a convex function.
Lemma 2.1. (Proposition 2.6 in [27]): If $f$ is a convex function on $\mathbb{R}^m$ and $x\in\mathbb{R}^m$, then
$$y\in\partial f(x)\ \text{if and only if}\ x=\mathrm{prox}_f(x+y). \quad (5)$$
The proof of this lemma is given in [27]. Based on Lemma 2.1, we get that
$$y\in\partial f(x)\ \text{if and only if}\ y=(I-\mathrm{prox}_f)(x+y). \quad (6)$$
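To make relations (5) and (6) concrete, here is a minimal numerical sketch (in Python with NumPy, our illustration language; the paper itself contains no code): for $f=t\|\cdot\|_1$ the proximity operator is component-wise soft-thresholding, and both fixed-point relations can be checked directly. The helper name `prox_l1` is ours.

```python
import numpy as np

def prox_l1(x, t):
    # Proximity operator of f(u) = t * ||u||_1: component-wise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=5)            # almost surely no zero entries
t = 0.5
y = t * np.sign(x)                # a subgradient of t*||.||_1 at x

# Eq. (5): y in the subdifferential of f at x  <=>  x = prox_f(x + y)
assert np.allclose(x, prox_l1(x + y, t))
# Eq. (6): y in the subdifferential of f at x  <=>  y = (I - prox_f)(x + y)
assert np.allclose(y, (x + y) - prox_l1(x + y, t))
```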
It was recently demonstrated in [13] that the $L_1L_2/TV$ model is effective at removing mixtures of Gaussian and impulse noise. In this approach, an image is restored by solving the following problem:
$$\min_p \int_\Omega \lambda\|p-p_0\|_2^2+\mu\|p-p_0\|_1+\|p\|_{TV}\,d\Omega, \quad (7)$$
where $p_0\in\mathbb{R}^{N\times N}$ denotes the noisy image, $N$ is a positive integer, $p\in\mathbb{R}^{N\times N}$ denotes the denoised image, and $\lambda,\mu$ are the parameters of the $L_2$ and $L_1$ data fidelity terms, respectively. This model combines the two kinds of data fidelity terms, $L_1$ and $L_2$, and thus the advantages of both norms. Therefore, it is significantly effective at removing mixtures of Gaussian and salt-and-pepper noise.
However, we observe that the numerical solution produced by the $L_1L_2/TV$ model exhibits a substantial block effect, and the model fails to completely remove salt-and-pepper noise. The fractional-order TV regularization term has been proven to effectively reduce the block effect. This section introduces a minimization denoising model, termed the $L_1L_2/TV^\alpha$ model. It includes three terms: an $L_2$ data fidelity term for Gaussian noise, an $L_1$ data fidelity term for salt-and-pepper noise, and a fractional-order TV regularization term that balances detail preservation and noise reduction. The model is as follows:
$$\min_p E(p)=\min_p \int_\Omega \left(\lambda\|p-p_0\|_2^2+\mu\|p-p_0\|_1+\|p\|_{TV^\alpha}\right)d\Omega, \quad (8)$$
where $p_0\in\mathbb{R}^{N\times N}$ denotes the noisy image and $p\in\mathbb{R}^{N\times N}$ denotes the denoised image. $\|p\|_{TV^\alpha}$ is the $\alpha$-order fractional TV of $p$, defined as $\|\nabla^\alpha p\|_1$, where $\nabla^\alpha p=(\partial_x^\alpha p,\partial_y^\alpha p)$ and $\|\nabla^\alpha p\|_1=|\partial_x^\alpha p|+|\partial_y^\alpha p|$. In particular, note the following:
● When $\lambda=0$, model (8) reduces to the $L_1/TV^\alpha$ model.
● When $\mu=0$, model (8) reduces to the $L_2/TV^\alpha$ model.
These parameter settings of $\lambda$ and $\mu$ demonstrate the flexibility of the $L_1L_2/TV^\alpha$ model.
To prove the existence of a solution to the $L_1L_2/TV^\alpha$ model, it is essential to first establish the boundedness of any potential solution [33].
Lemma 2.2. (Boundedness): Let $p_0\in L^2(\Omega)$, where $\Omega\subset\mathbb{R}^n$ ($n\ge 2$) is an open bounded set. Given $\inf_\Omega p_0>0$, if the model has a solution $\hat{p}$, then $\inf_\Omega p_0\le\hat{p}\le\sup_\Omega p_0$.
Proof of Lemma 2.2. Let $\omega=\inf_\Omega p_0$ and $\nu=\sup_\Omega p_0$. For $p>p_0$, the functions $|p-p_0|$ and $(p-p_0)^2$ are monotonically increasing in $p$. Then,
$$\int_\Omega \|\inf(p,\nu)-p_0\|_1\,d\Omega \le \int_\Omega \|p-p_0\|_1\,d\Omega, \quad (9)$$
$$\int_\Omega \|\inf(p,\nu)-p_0\|_2^2\,d\Omega \le \int_\Omega \|p-p_0\|_2^2\,d\Omega, \quad (10)$$
where $\inf(p,\nu)$ denotes the pointwise minimum of $p$ and $\nu$.
Moreover, by Lemma 2 in [34], $TV^\alpha(\inf(p,\nu))\le TV^\alpha(p)$. Thus, we have
$$E(\inf(p,\nu))\le E(p), \quad (11)$$
where equality holds if and only if $p\le\nu$.
Since $\hat{p}$ is a minimizer of the optimization problem (8), equality must hold at $p=\hat{p}$, and hence $\hat{p}\le\nu$. Similarly, from $E(\sup(p,\omega))\le E(p)$ we obtain $\hat{p}\ge\omega$. In summary, $\inf_\Omega p_0\le\hat{p}\le\sup_\Omega p_0$.
In what follows, we will give the existence of a solution for the optimization problem (8).
Lemma 2.3. (Existence): Let $p_0\in L^2(\Omega)$, where $\Omega\subset\mathbb{R}^n$ ($n\ge 2$) is an open bounded set. Given $\inf_\Omega p_0>0$, the optimization problem (8) has at least one solution in the space $BV^\alpha(\Omega)$.
Proof of Lemma 2.3. The space of functions of bounded fractional variation is $BV^\alpha(\Omega)=\{f\in L^1(\Omega): TV^\alpha(f)<+\infty\}$, which is a Banach space under the norm $\|f\|_{BV^\alpha}=\|f\|_{L^1}+TV^\alpha(f)$.
Define $\omega=\inf_\Omega p_0$ and $\nu=\sup_\Omega p_0$. Because $p=\nu\in BV^\alpha(\Omega)$, the feasible set is not empty [35]. Let $\{p_n\}\subset BV^\alpha(\Omega)$ be a minimizing sequence of (8) with $\omega\le p_n\le\nu$.
Because $\Omega$ is bounded and $\omega\le p_n\le\nu$, it follows that
$$\|p_n\|_{L^1}=\int_\Omega |p_n|\,d\Omega<+\infty. \quad (12)$$
Moreover, because $\{p_n\}$ is a minimizing sequence, there exists a constant $C>0$ such that $E(p_n)\le C$. Because $\int_\Omega \|p-p_0\|_2^2+\|p-p_0\|_1\,d\Omega$ is nonnegative, there is a constant $C'>0$ such that
$$TV^\alpha(p_n)\le C'. \quad (13)$$
Equations (12) and (13) show that $\{p_n\}$ is uniformly bounded in $BV^\alpha(\Omega)$. By the compactness of the embedding of $BV^\alpha(\Omega)$ in $L^1(\Omega)$, there exist a subsequence $\{p_{n_j}\}$ of $\{p_n\}$ and a function $p\in BV^\alpha(\Omega)$ such that
$$p_{n_j}\to p \quad \text{in } L^1(\Omega).$$
By the Lebesgue dominated convergence theorem, we obtain
$$\int_\Omega \|p-p_0\|_1\,d\Omega=\lim_{j\to\infty}\int_\Omega \|p_{n_j}-p_0\|_1\,d\Omega, \quad (14)$$
$$\int_\Omega \|p-p_0\|_2^2\,d\Omega=\lim_{j\to\infty}\int_\Omega \|p_{n_j}-p_0\|_2^2\,d\Omega. \quad (15)$$
By the lower semi-continuity of $TV^\alpha$, the following inequality holds:
$$E(p)\le\liminf_{n\to\infty} E(p_n). \quad (16)$$
Since $\{p_n\}$ is a minimizing sequence, $p$ is a solution of the optimization problem (8).
Consider an image represented by a grid of $N\times N$ pixels. The data fidelity terms are discretized as
$$\int_\Omega \|p-p_0\|_2^2\,d\Omega \approx \sum_{i,j}\left(p_{i,j}-(p_0)_{i,j}\right)^2, \qquad \int_\Omega \|p-p_0\|_1\,d\Omega \approx \sum_{i,j}\left|p_{i,j}-(p_0)_{i,j}\right|,$$
where $(i,j)$ denotes the pixel coordinates. For the fractional-order TV term, we obtain the discretization
$$\int_\Omega \|\nabla^\alpha p\|_1\,d\Omega \approx \sum_{i,j}\left(|\nabla_x^\alpha p_{i,j}|+|\nabla_y^\alpha p_{i,j}|\right),$$
$$\nabla_x^\alpha p_{i,j}=\sum_{k=0}^{K-1}(-1)^k C_k^\alpha\, p_{i-k,j}, \qquad \nabla_y^\alpha p_{i,j}=\sum_{k=0}^{K-1}(-1)^k C_k^\alpha\, p_{i,j-k},$$
where $C_k^\alpha=\frac{\Gamma(\alpha+1)}{\Gamma(k+1)\Gamma(\alpha-k+1)}$ and $\Gamma(\cdot)$ is the gamma function.
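As an illustration of this discretization, the following sketch computes the coefficients $C_k^\alpha$ by the stable recursion $C_k=C_{k-1}(\alpha-k+1)/k$, which is equivalent to the gamma-function formula above, and applies the truncated fractional difference along one image axis. The names `gl_coefficients` and `frac_diff_x` and the truncation depth `K` are our illustrative choices, not the paper's.

```python
import numpy as np

def gl_coefficients(alpha, K):
    # C_k^alpha = Gamma(alpha+1) / (Gamma(k+1) * Gamma(alpha-k+1)) = binom(alpha, k),
    # computed by the recursion C_k = C_{k-1} * (alpha - k + 1) / k (avoids gamma poles).
    C = np.ones(K)
    for k in range(1, K):
        C[k] = C[k - 1] * (alpha - k + 1) / k
    return C

def frac_diff_x(p, alpha, K=20):
    # Discrete fractional difference along the first image axis:
    # (grad_x^alpha p)_{i,j} = sum_k (-1)^k C_k^alpha p_{i-k,j}, treating p as zero
    # outside the image.
    C = gl_coefficients(alpha, K)
    out = np.zeros_like(p, dtype=float)
    for k in range(min(K, p.shape[0])):
        out[k:, :] += (-1) ** k * C[k] * p[: p.shape[0] - k, :]
    return out

# Sanity check: for alpha = 1 this reduces to the usual backward difference,
# since C_0 = C_1 = 1 and C_k = 0 for k >= 2.
```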
Considering that the proximity algorithm operates on vectors, we transform the image matrices $p$ and $p_0$ into vectors $u$ and $u_0$ via $p_{i,j}=u_{(j-1)N+i}$ and $(p_0)_{i,j}=(u_0)_{(j-1)N+i}$, $i,j=1,2,\ldots,N$. The minimization problem (8) then becomes
$$\operatorname*{argmin}_u\left\{\lambda\|u-u_0\|_2^2+\mu\|u-u_0\|_1+\|\nabla^\alpha u\|_1\right\}, \quad (17)$$
where $u,u_0\in\mathbb{R}^m$ and $m=N^2$.
The proximity operator of $\|\nabla^\alpha u\|_1$ is not easy to compute. To overcome this difficulty, we treat $\|\nabla^\alpha u\|_1$ as the composition of a convex function with a fractional-order difference operator via $\|\nabla^\alpha u\|_1=(\phi\circ B^\alpha)(u)$. Here $\phi:\mathbb{R}^{2m}\to\mathbb{R}$ is the norm $\|\cdot\|_1$, $B^\alpha$ is a $2m\times m$ matrix, and $\nabla^\alpha u$ is represented as $B^\alpha u$. The $n$-th component pair of $\nabla^\alpha u$ can thus be represented as the multiplication of the vector $u\in\mathbb{R}^m$ by a matrix $B_n^\alpha\in\mathbb{R}^{2\times m}$, for $n=1,2,\ldots,m$:
$$B_n^\alpha u=\begin{cases}\left(\sum_{k=0}^{i-1}(-1)^k C_k^\alpha u_{n-k},\ \sum_{k=0}^{j-1}(-1)^k C_k^\alpha u_{n-Nk}\right)^T, & i>1,\ j>1,\\[4pt] \left(u_n,\ \sum_{k=0}^{j-1}(-1)^k C_k^\alpha u_{n-Nk}\right)^T, & i=1,\ j>1,\\[4pt] \left(\sum_{k=0}^{i-1}(-1)^k C_k^\alpha u_{n-k},\ u_n\right)^T, & i>1,\ j=1,\\[4pt] \left(u_n,\ u_n\right)^T, & i=1,\ j=1,\end{cases} \quad (18)$$
where the blocks $B_1^\alpha,B_2^\alpha,\ldots,B_m^\alpha$ are stacked to form the matrix $B^\alpha\in\mathbb{R}^{2m\times m}$ [29]. Therefore, we describe the minimization problem as follows:
$$\operatorname*{argmin}_u\left\{\lambda\|u-u_0\|_2^2+\mu\|u-u_0\|_1+(\phi\circ B^\alpha)(u)\right\}. \quad (19)$$
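One way to materialize $B^\alpha$ as a sparse matrix, following the boundary cases of (18), is sketched below; it assumes the `gl_coefficients` helper from the earlier sketch and uses 0-based indices, so the boundary cases $i=1$, $j=1$ of (18) appear here as $i=0$, $j=0$. The name `build_B_alpha` is ours.

```python
import numpy as np
from scipy.sparse import lil_matrix

def build_B_alpha(N, alpha, K=None):
    # Sparse B^alpha in R^{2m x m}, m = N*N, acting on the column-stacked image u.
    # Rows 0..m-1 carry the x-differences, rows m..2m-1 the y-differences,
    # matching the boundary cases of Eq. (18).
    m = N * N
    K = K if K is not None else N
    C = gl_coefficients(alpha, K)           # from the previous sketch
    B = lil_matrix((2 * m, m))
    for n in range(m):                      # 0-based pixel index n = j*N + i
        i, j = n % N, n // N
        for k in range(min(i + 1, K)):      # x-direction; reduces to u_n when i = 0
            B[n, n - k] += (-1) ** k * C[k]
        for k in range(min(j + 1, K)):      # y-direction; reduces to u_n when j = 0
            B[m + n, n - N * k] += (-1) ** k * C[k]
    return B.tocsr()
```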
Define the convex function $\varphi$ on $\mathbb{R}^m$ by
$$\varphi(u)=\lambda\|u-u_0\|_2^2+\mu\|u-u_0\|_1. \quad (20)$$
Then the minimization problem can be written as
$$\operatorname*{argmin}_u\left\{\varphi(u)+(\phi\circ B^\alpha)(u)\right\}. \quad (21)$$
Proposition 2.1. Let $\phi$ be a proper convex function on $\mathbb{R}^{2m}$ and let $B^\alpha$ be a $2m\times m$ matrix. If $u\in\mathbb{R}^m$ is a solution of model (21), then for any positive numbers $\beta_1,\beta_2>0$ there exists a vector $b\in\mathbb{R}^{2m}$ such that
$$u=\mathrm{prox}_{\frac{1}{\beta_1}\varphi}\left(u-\frac{\beta_2}{\beta_1}(B^\alpha)^T b\right), \quad (22)$$
$$b=\left(I-\mathrm{prox}_{\frac{1}{\beta_2}\phi}\right)(B^\alpha u+b). \quad (23)$$
Conversely, if $b\in\mathbb{R}^{2m}$ and $u\in\mathbb{R}^m$ satisfy (22) and (23) for some positive $\beta_1,\beta_2>0$, then $u$ is a solution of (21).
Proof. If $u\in\mathbb{R}^m$ is a solution of (21), then by Fermat's rule in convex analysis,
$$0\in\partial\left(\varphi(u)+(\phi\circ B^\alpha)(u)\right).$$
By the chain rule,
$$\partial\left((\phi\circ B^\alpha)(u)\right)=(B^\alpha)^T\partial\phi(B^\alpha u),$$
so
$$0\in\partial\varphi(u)+(B^\alpha)^T\partial\phi(B^\alpha u). \quad (24)$$
For any $\beta_1,\beta_2>0$, we choose two vectors $a\in\frac{1}{\beta_1}\partial\varphi(u)$ and $b\in\frac{1}{\beta_2}\partial\phi(B^\alpha u)$ such that
$$0=\beta_1 a+\beta_2 (B^\alpha)^T b. \quad (25)$$
By (5) and $a\in\frac{1}{\beta_1}\partial\varphi(u)$, we have
$$u=\mathrm{prox}_{\frac{1}{\beta_1}\varphi}(u+a). \quad (26)$$
From (25), $a=-\frac{\beta_2}{\beta_1}(B^\alpha)^T b$; substituting $a$ into (26) gives (22). Applying (6) and $b\in\frac{1}{\beta_2}\partial\phi(B^\alpha u)$ gives (23). Conversely, if there exist $\beta_1,\beta_2>0$, $b\in\mathbb{R}^{2m}$ and $u\in\mathbb{R}^m$ satisfying (22) and (23), then by Lemma 2.1 we obtain $b\in\frac{1}{\beta_2}\partial\phi(B^\alpha u)$ and $-\frac{\beta_2}{\beta_1}(B^\alpha)^T b\in\frac{1}{\beta_1}\partial\varphi(u)$. It follows that
$$0=\beta_1\left(-\frac{\beta_2}{\beta_1}(B^\alpha)^T b\right)+\beta_2(B^\alpha)^T b\in\partial\varphi(u)+(B^\alpha)^T\partial\phi(B^\alpha u).$$
This implies that $u\in\mathbb{R}^m$ is a solution of (21).
According to Proposition 2.1, we can conclude the following corollary.
Corollary 2.1. Suppose that $u_0\in\mathbb{R}^m$ is given, $\lambda,\mu$ are two positive numbers, $B^\alpha$ is a $2m\times m$ matrix, $\varphi$ is the function defined by (20), and $\phi$ is a differentiable convex function on $\mathbb{R}^{2m}$. If $u\in\mathbb{R}^m$ is a solution of (21), then for any $\beta_1>0$,
$$u=\mathrm{prox}_{\frac{1}{\beta_1}\varphi}\left(u-\frac{1}{\beta_1}(B^\alpha)^T\nabla\phi(B^\alpha u)\right). \quad (27)$$
Conversely, if for some $\beta_1>0$ there exists $u\in\mathbb{R}^m$ satisfying (27), then $u\in\mathbb{R}^m$ is a solution of (21).
Proof. By Proposition 2.1, a solution $u\in\mathbb{R}^m$ of (21) satisfies (22) and (23). If the function $\phi$ is differentiable, then $\partial\phi(u)=\{\nabla\phi(u)\}$, where $\nabla\phi(u)$ is the gradient of $\phi$ at $u$. Therefore, (6) and (23) imply that $b=\frac{1}{\beta_2}\nabla\phi(B^\alpha u)$. Hence, (22) yields the fixed-point equation (27).
The fixed-point equation (27) can be viewed as an instance of the forward-backward splitting scheme [31]. Suppose that $\nabla\phi$ is Lipschitz continuous with Lipschitz constant $L$, that is,
$$\|\nabla\phi(p)-\nabla\phi(q)\|_2\le L\|p-q\|_2,\quad \forall p,q\in\mathbb{R}^{2m}, \quad (28)$$
and that $\beta_1$ is chosen to satisfy
$$\frac{1}{\beta_1}<\frac{2}{L\|B^\alpha\|_2^2}. \quad (29)$$
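Condition (29) requires an estimate of $\|B^\alpha\|_2$. A standard way to obtain one is power iteration on $(B^\alpha)^T B^\alpha$, sketched below under the assumption that a Lipschitz constant $L$ for the gradient of $\phi$ (or of a smoothed surrogate) is available; the name `spectral_norm` is ours.

```python
import numpy as np

def spectral_norm(B, iters=200, seed=0):
    # Power iteration on B^T B to estimate ||B||_2 (largest singular value of B);
    # works for dense arrays and scipy sparse matrices alike.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=B.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = B.T @ (B @ v)
        v = w / np.linalg.norm(w)
    return np.sqrt(np.linalg.norm(B.T @ (B @ v)))

# Condition (29) asks 1/beta_1 < 2/(L * ||B^alpha||_2^2), so one safe (illustrative)
# choice is beta_1 slightly larger than L * spectral_norm(B)**2 / 2.
```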
It was proved in [33] that for any initial point $u^0\in\mathbb{R}^m$, the Picard iteration
$$u^{k+1}=\mathrm{prox}_{\frac{1}{\beta_1}\varphi}\left(u^k-\frac{1}{\beta_1}(B^\alpha)^T\nabla\phi(B^\alpha u^k)\right) \quad (30)$$
converges to a fixed point of (27), which is a minimizer of (21).
Let $Hu:=u-\frac{1}{\beta_1}(B^\alpha)^T\nabla\phi(B^\alpha u)$ and $Qu:=\left(\mathrm{prox}_{\frac{1}{\beta_1}\varphi}\circ H\right)u$. To prove that (30) converges, it suffices to show that $H$ and $Q$ are non-expansive averaged operators. We recall the relevant definitions [31].
Definition 2.3. (Non-expansive operator): An operator $T$ on $\mathbb{R}^m$ is non-expansive if $\|Tx-Ty\|_2\le\|x-y\|_2$ for all $x,y\in\mathbb{R}^m$.
Both $\mathrm{prox}_f$ and $I-\mathrm{prox}_f$ are non-expansive operators; see [31].
Definition 2.4. (Firmly non-expansive operator): An operator $T$ on $\mathbb{R}^m$ is firmly non-expansive if $\|Tx-Ty\|_2^2\le\langle x-y,Tx-Ty\rangle$ for all $x,y\in\mathbb{R}^m$.
Definition 2.5. (Non-expansive averaged operator): A non-expansive operator $Q$ on $\mathbb{R}^m$ is averaged if there exist $k\in(0,1)$ and a non-expansive operator $P$ such that $Q=kI+(1-k)P$. If $k=\frac{1}{2}$, then $Q$ is firmly non-expansive.
Both $\mathrm{prox}_f$ and $I-\mathrm{prox}_f$ are firmly non-expansive operators; see [32].
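Definition 2.4 can be checked numerically for a concrete proximity operator. The sketch below tests soft-thresholding (the prox of $t\|\cdot\|_1$) on random pairs of points; the small tolerance guards against floating-point round-off, and the whole check is ours, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
t = 0.7
soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # prox of t*||.||_1

for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    Tx, Ty = soft(x), soft(y)
    # Definition 2.4: ||Tx - Ty||^2 <= <x - y, Tx - Ty>
    assert np.sum((Tx - Ty) ** 2) <= np.dot(x - y, Tx - Ty) + 1e-12
```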
Proposition 2.2. If $\phi$ is a convex function and $B^\alpha$ is a $2m\times m$ matrix, then $H$ is firmly non-expansive.
Proof. By the definition of the operator $H$, for all $x,y\in\mathbb{R}^m$ we have
$$Hx-Hy=x-y-\frac{1}{\beta_1}(B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right), \quad (31)$$
$$(I-H)x-(I-H)y=\frac{1}{\beta_1}(B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right). \quad (32)$$
Hence,
$$\|Hx-Hy\|_2^2=\|x-y\|_2^2-\frac{2}{\beta_1}\left\langle (B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right),x-y\right\rangle+\frac{1}{\beta_1^2}\left\|(B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right)\right\|_2^2, \quad (33)$$
$$\|(I-H)x-(I-H)y\|_2^2=\frac{1}{\beta_1^2}\left\|(B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right)\right\|_2^2. \quad (34)$$
By the subgradient inequality for convex functions,
$$\left\langle \nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y),\ B^\alpha x-B^\alpha y\right\rangle\ge 0. \quad (35)$$
Substituting (35) into (33), we get
$$\|Hx-Hy\|_2^2\le\|x-y\|_2^2+\frac{1}{\beta_1^2}\left\|(B^\alpha)^T\left(\nabla\phi(B^\alpha x)-\nabla\phi(B^\alpha y)\right)\right\|_2^2. \quad (36)$$
Combining (36) with (34), we obtain
$$\|Hx-Hy\|_2^2\le\|x-y\|_2^2+\|(I-H)x-(I-H)y\|_2^2. \quad (37)$$
It follows that
$$\|Hx-Hy\|_2^2\le\langle x-y,Hx-Hy\rangle.$$
This completes the proof.
If $H:\mathbb{R}^m\to\mathbb{R}^m$ is firmly non-expansive, then $H$ is a non-expansive $\frac{1}{2}$-averaged operator (see Lemma 3.8 in [26]). Thus $Q$ is a non-expansive averaged operator (see Lemma 3.7 in [26]).
This proves the convergence of (30). To find an iterative scheme equivalent to (30), we make the substitution
$$v=u^k-\frac{1}{\beta_1}(B^\alpha)^T\nabla\phi(B^\alpha u^k). \quad (38)$$
For a given vector $v\in\mathbb{R}^m$, the proximity operator of $\frac{1}{\beta_1}\varphi$ is
$$\mathrm{prox}_{\frac{1}{\beta_1}\varphi}(v)=\operatorname*{argmin}_x\left\{\frac{1}{2}\|x-v\|_2^2+\frac{\lambda}{\beta_1}\|x-u_0\|_2^2+\frac{\mu}{\beta_1}\|x-u_0\|_1\right\}. \quad (39)$$
We have
$$\mathrm{prox}_{\frac{1}{\beta_1}\varphi}(v)=u_0+\operatorname*{argmin}_x\left\{\frac{1}{2}\|x-v+u_0\|_2^2+\frac{\lambda}{\beta_1}\|x\|_2^2+\frac{\mu}{\beta_1}\|x\|_1\right\}. \quad (40)$$
Let $g$ and $f$ be the functions on $\mathbb{R}^m$ given by
$$g(x)=\frac{1}{2}\|x-v+u_0\|_2^2+\frac{\lambda}{\beta_1}\|x\|_2^2, \quad (41)$$
$$f(x)=\frac{\mu}{\beta_1}\|x\|_1. \quad (42)$$
Because the function $g$ is differentiable (indeed quadratic), it can be expanded around $(v-u_0)\in\mathbb{R}^m$:
$$g(x)=g(v-u_0)+\langle\nabla g(v-u_0),x-v+u_0\rangle+\frac{1}{2r}\|x-v+u_0\|_2^2, \quad (43)$$
where $r>0$ is a constant; since $g$ is quadratic with Hessian $\left(1+\frac{2\lambda}{\beta_1}\right)I$, the expansion (43) is exact for $r=\frac{\beta_1}{\beta_1+2\lambda}$.
Using (43), the minimization reduces to
$$\operatorname*{argmin}_x\{g(x)+f(x)\}=\operatorname*{argmin}_x\left\{g(v-u_0)+\langle\nabla g(v-u_0),x-v+u_0\rangle+\frac{1}{2r}\|x-v+u_0\|_2^2+f(x)\right\}$$
$$=\operatorname*{argmin}_x\left\{\frac{1}{2r}\|x-v+u_0+r\nabla g(v-u_0)\|_2^2+f(x)\right\}=\mathrm{prox}_{rf}\left(v-u_0-r\nabla g(v-u_0)\right). \quad (44)$$
By (41),
$$\nabla g(x)=(x-v+u_0)+\frac{2\lambda}{\beta_1}x=\left(1+\frac{2\lambda}{\beta_1}\right)x-v+u_0, \quad (45)$$
so
$$\nabla g(v-u_0)=\frac{2\lambda}{\beta_1}(v-u_0). \quad (46)$$
Therefore, substituting (42), (44) and (46) into (40), we conclude that
$$\mathrm{prox}_{\frac{1}{\beta_1}\varphi}(v)=u_0+\mathrm{prox}_{\frac{r\mu}{\beta_1}\|\cdot\|_1}\left(\frac{\beta_1-2\lambda r}{\beta_1}(v-u_0)\right). \quad (47)$$
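A direct implementation of the closed form (47) is short; the sketch below fixes $r=\beta_1/(\beta_1+2\lambda)$, the value for which expansion (43) is exact, and the helper name `prox_varphi` is ours.

```python
import numpy as np

def prox_varphi(v, u0, lam, mu, beta1):
    # Closed form (47) for prox of (1/beta1)*varphi, where
    # varphi(u) = lam*||u - u0||_2^2 + mu*||u - u0||_1.
    r = beta1 / (beta1 + 2.0 * lam)                   # makes expansion (43) exact
    w = (beta1 - 2.0 * lam * r) / beta1 * (v - u0)    # note: this factor equals r
    t = r * mu / beta1                                # soft-threshold level
    return u0 + np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
```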
Combining (38) and $u^{k+1}=\mathrm{prox}_{\frac{1}{\beta_1}\varphi}(v)$ with (47) gives
$$u^{k+1}=u_0+\mathrm{prox}_{\frac{r\mu}{\beta_1}\|\cdot\|_1}\left(\frac{\beta_1-2\lambda r}{\beta_1}\left(u^k-\frac{1}{\beta_1}(B^\alpha)^T\nabla\phi(B^\alpha u^k)-u_0\right)\right). \quad (48)$$
Substituting $b^k=\frac{1}{\beta_2}\nabla\phi(B^\alpha u^k)$ into (48) shows that (49) and (50) are iterations equivalent to (30):
$$u^{k+1}=u_0+\mathrm{prox}_{\frac{r\mu}{\beta_1}\|\cdot\|_1}\left(\frac{\beta_1-2\lambda r}{\beta_1}\left(u^k-\frac{\beta_2}{\beta_1}(B^\alpha)^T b^k-u_0\right)\right), \quad (49)$$
$$b^{k+1}=\left(I-\mathrm{prox}_{\frac{1}{\beta_2}\phi}\right)(B^\alpha u^{k+1}+b^k). \quad (50)$$
Hence, based on the iterative equations (49) and (50), we propose the following algorithm.
Algorithm
1. Input: noisy image $u_0\in\mathbb{R}^m$; choose $\lambda\ge 0$, $\mu\ge 0$, $\beta_1>0$, $\beta_2>0$.
2. Initialization: $u^0=u_0$, $b^0=0$.
3. For $k\in\mathbb{N}$, update $u$ and $b$ as follows:
$u^{k+1}\leftarrow u_0+\mathrm{prox}_{\frac{r\mu}{\beta_1}\|\cdot\|_1}\left(\frac{\beta_1-2\lambda r}{\beta_1}\left(u^k-\frac{\beta_2}{\beta_1}(B^\alpha)^T b^k-u_0\right)\right)$
$b^{k+1}\leftarrow\left(I-\mathrm{prox}_{\frac{1}{\beta_2}\phi}\right)(B^\alpha u^{k+1}+b^k)$
4. Stop if the preset stopping criterion is met; otherwise, return to step 3.
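A compact sketch of the whole iteration (49)-(50) follows; it assumes the `prox_varphi` and `build_B_alpha` helpers from the earlier sketches, and uses the standard identity that for $\phi=\|\cdot\|_1$ the residual $(I-\mathrm{prox}_{\frac{1}{\beta_2}\phi})(z)$ is the component-wise clipping of $z$ to $[-1/\beta_2,\,1/\beta_2]$. The function name `denoise` and the iteration cap are our choices.

```python
import numpy as np

def denoise(u0, B, lam, mu, beta1, beta2, tol=1e-3, max_iter=500):
    # Fixed-point iterations (49)-(50). u0: vectorized noisy image (length m);
    # B: the sparse matrix B^alpha (2m x m) from the earlier sketch.
    m = u0.size
    u, b = u0.copy(), np.zeros(2 * m)
    for _ in range(max_iter):
        v = u - (beta2 / beta1) * (B.T @ b)
        u_new = prox_varphi(v, u0, lam, mu, beta1)             # Eq. (49), via (47)
        # (I - prox_{(1/beta2)||.||_1})(z) = clip(z, -1/beta2, 1/beta2):
        b = np.clip(B @ u_new + b, -1.0 / beta2, 1.0 / beta2)  # Eq. (50)
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):  # criterion (52)
            u = u_new
            break
        u = u_new
    return u
```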
This section describes several image denoising experiments that were conducted to demonstrate the behavior of the proposed algorithm. The peak signal-to-noise ratio (PSNR) is currently the most widely used tool for objectively evaluating image quality, and it is consistent with human subjective perception; a larger PSNR value indicates better quality of the recovered image. It is defined as follows:
$$\mathrm{PSNR}=10\log_{10}\frac{255^2 n^2}{\|u^*-u\|_2^2}\ (\mathrm{dB}), \quad (51)$$
where $u^*$ is the original image, $u$ is the denoised image, and $n$ is the image side length. In all experiments, the iteration was stopped when the following criterion was satisfied:
$$\frac{\|u^k-u^{k+1}\|}{\|u^{k+1}\|}\le 0.001. \quad (52)$$
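For reference, (51) translates into a few lines; `psnr` is our helper name, and `u_star.size` plays the role of $n^2$ for an $n\times n$ image with intensities in $[0,255]$.

```python
import numpy as np

def psnr(u_star, u):
    # Eq. (51): PSNR = 10*log10(255^2 * n^2 / ||u* - u||_2^2) in dB.
    return 10.0 * np.log10(255.0 ** 2 * u_star.size / np.sum((u_star - u) ** 2))
```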
In this study, images of size 256×256 pixels were used for the numerical experiments, with $r=\frac{\beta_1}{\beta_1+2\lambda}$. We used the $L_1L_2/TV^\alpha$ model to remove Gaussian noise, salt-and-pepper noise, and mixed noise. The original images are shown in Figure 1, and the different noise regimes yield different results, as shown in Figure 2. Salt-and-pepper noise sets a pixel to the minimal or maximal value of the image intensity range, whereas Gaussian noise may extend this range. We therefore added salt-and-pepper noise to the original image after the Gaussian noise.
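A sketch of this noise model, in the order just described (Gaussian first, then salt-and-pepper), might look as follows; `add_mixed_noise` and the default levels are our illustrative choices.

```python
import numpy as np

def add_mixed_noise(img, sigma=20.0, s=0.03, seed=0):
    # Gaussian noise (standard deviation sigma) is added first; salt-and-pepper
    # noise then overwrites a fraction s of the pixels with the extreme values.
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    mask = rng.random(img.shape)
    noisy[mask < s / 2] = 0.0          # pepper: minimal intensity
    noisy[mask > 1 - s / 2] = 255.0    # salt: maximal intensity
    return noisy
```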
This study included four groups of experiments. The first experiment restored images corrupted by Gaussian noise with standard deviation $\sigma=20$. The second restored images corrupted by salt-and-pepper noise at level $s=0.03$. The third restored images corrupted by mixed noise. The fourth explored the convergence of our proposed fractional-order TV denoising algorithm.
We began by investigating the effects of different parameters on the experimental results. Inspired by [27], we consistently chose $\beta_1=6$ and $\beta_2=128$. We determined the most suitable values of $\lambda$ and $\mu$ by trial and error. When Gaussian noise with $\sigma=20$ was added to the 'Lena' image, we found that $\lambda=0.07,\ \mu=0$ performed better. When salt-and-pepper noise with $s=0.03$ was added to the 'Square' image, we found that $\lambda=0,\ \mu=3$ performed better. We verified that these parameters were effective for other images with the same noise levels. Additionally, we varied $\alpha$ from 0.8 to 1.9 in increments of 0.1.
In the first experiment, Gaussian noise was added to the 'Lena' image at different levels, and we chose $\lambda=0.07,\ \mu=0$ to deal with the noisy images. Table 1 reports the PSNR values, while Figure 3 shows the experimental results. In addition, Gaussian noise at $\sigma=20$ was added to the other images; Table 2 reports the PSNR values. The results demonstrate that $\alpha$ affects the denoising quality, and the best result often did not occur at $\alpha=1$. Therefore, the fractional-order TV model can improve on the denoising performance of the TV model.
Table 1. PSNR (dB) for the 'Lena' image corrupted by Gaussian noise at different levels $\sigma$.
α | σ=15 | σ=20 | σ=25 | σ=30
0.8 | 29.7892 | 27.4593 | 25.0759 | 23.0262 |
0.9 | 30.2569 | 27.8891 | 25.4172 | 23.3055 |
1 | 30.5238 | 28.2065 | 25.7044 | 23.5671 |
1.1 | 30.5492 | 28.3853 | 25.9207 | 23.7875 |
1.2 | 30.5086 | 28.5046 | 25.1172 | 24.0012 |
1.3 | 30.4731 | 28.6112 | 26.3034 | 24.2273 |
1.4 | 30.4378 | 28.7046 | 26.4887 | 24.4474 |
1.5 | 30.3991 | 28.7886 | 26.6701 | 24.6726 |
1.6 | 30.3599 | 28.8649 | 26.8453 | 24.8976 |
1.7 | 30.3170 | 28.9319 | 27.0052 | 25.1174 |
1.8 | 30.2558 | 28.9798 | 27.1521 | 25.3290 |
1.9 | 30.1917 | 29.0035 | 27.2755 | 25.5298 |
Table 2. PSNR (dB) for the other images corrupted by Gaussian noise at $\sigma=20$.
α | Man | Pepper | Square
0.8 | 27.5666 | 27.3701 | 29.7428 |
0.9 | 27.9925 | 27.8178 | 30.4608 |
1 | 28.2736 | 28.1412 | 31.8894 |
1.1 | 28.3151 | 28.1936 | 31.1472 |
1.2 | 28.2914 | 28.3959 | 31.3280 |
1.3 | 28.2809 | 28.4796 | 31.4950 |
1.4 | 28.2808 | 28.5583 | 31.6557 |
1.5 | 28.2824 | 28.6456 | 31.8110 |
1.6 | 28.2666 | 28.7134 | 31.9377 |
1.7 | 28.2452 | 28.7684 | 32.0184 |
1.8 | 28.2056 | 28.8044 | 32.0599 |
1.9 | 28.1466 | 29.8223 | 32.0439 |
In the second experiment, we chose $\lambda=0,\ \mu=4.8$ for the 'Square' image corrupted by salt-and-pepper noise at levels 0.01, 0.02, 0.03 and 0.05. Table 3 reports the PSNR values, while Figure 4 shows the experimental results. In addition, salt-and-pepper noise at $s=0.03$ was applied to the other images; Table 4 reports the PSNR values. The experimental results indicate that larger $\alpha$ yields better performance.
Table 3. PSNR (dB) for the 'Square' image corrupted by salt-and-pepper noise at different levels $s$.
α | s=0.01 | s=0.02 | s=0.03 | s=0.05
0.8 | 26.6361 | 23.5071 | 21.5444 | 19.3476 |
0.9 | 26.8099 | 23.6805 | 21.7061 | 19.5088 |
1 | 26.9692 | 23.8357 | 21.8529 | 19.6589 |
1.1 | 27.184 | 24.0518 | 22.0587 | 19.8782 |
1.2 | 27.3262 | 24.1908 | 22.1921 | 20.0358 |
1.3 | 28.6720 | 25.4874 | 23.4443 | 21.2008 |
1.4 | 30.3872 | 27.1640 | 25.0587 | 22.7285 |
1.5 | 32.3797 | 29.0880 | 26.8780 | 24.4160 |
1.6 | 34.7346 | 31.2814 | 26.8750 | 26.2035 |
1.7 | 37.4620 | 33.7378 | 30.9774 | 27.7989 |
1.8 | 40.4214 | 36.6253 | 33.2592 | 30.0331 |
1.9 | 42.8820 | 39.2325 | 35.0649 | 31.0890 |
Table 4. PSNR (dB) for the other images corrupted by salt-and-pepper noise at $s=0.03$.
α | Lena | Man | Pepper
0.8 | 22.4379 | 22.3227 | 22.5437 |
0.9 | 22.7182 | 22.5793 | 22.8359 |
1 | 22.9806 | 22.8058 | 23.1135 |
1.1 | 23.2306 | 23.0675 | 23.3906 |
1.2 | 23.4360 | 23.2628 | 23.6167 |
1.3 | 24.7680 | 24.4272 | 24.8107 |
1.4 | 26.8144 | 26.1801 | 26.6391 |
1.5 | 29.1850 | 28.1721 | 28.6084 |
1.6 | 31.5879 | 30.1103 | 30.4547 |
1.7 | 33.4039 | 31.3361 | 31.7732 |
1.8 | 34.7360 | 31.8994 | 32.7549 |
1.9 | 35.4855 | 32.2108 | 32.4564 |
In the third experiment, we added Gaussian noise at $\sigma=20$ and salt-and-pepper noise at $s=0.03$ to four images and explored the performance of the algorithm, choosing $\lambda=0.009,\ \mu=2.3$. Table 5 reports the PSNR values, and Figure 5 shows the experimental results. Figure 6 shows the original image, the noisy image, and the denoised images for different values of $\alpha$ (from 0.8 to 1.9); the third and fourth rows show the corresponding contour maps. The data in Table 5 indicate that a larger $\alpha$ yields better denoising performance. Consequently, the fractional-order TV model outperformed the traditional TV model under mixed noise.
Table 5. PSNR (dB) for images corrupted by mixed noise ($\sigma=20$, $s=0.03$).
α | Lena | Man | Pepper | Square
0.8 | 23.0276 | 22.7010 | 23.0907 | 22.5688 |
0.9 | 23.5519 | 23.128 | 23.5879 | 22.9819 |
1 | 24.0592 | 23.5512 | 24.0694 | 23.4123 |
1.1 | 24.6520 | 24.0576 | 24.6195 | 24.0678 |
1.2 | 25.3215 | 24.6112 | 25.2445 | 24.8745 |
1.3 | 25.8844 | 25.0583 | 25.7745 | 25.6832 |
1.4 | 26.3067 | 25.3817 | 26.1841 | 26.4623 |
1.5 | 26.6177 | 25.5930 | 26.4867 | 27.1617 |
1.6 | 26.8404 | 25.7059 | 26.7017 | 27.7913 |
1.7 | 27.0049 | 25.7356 | 26.8421 | 28.2917 |
1.8 | 27.1119 | 25.7179 | 26.9169 | 28.6416 |
1.9 | 27.1679 | 25.6679 | 26.9147 | 28.8466 |
Based on the PSNR values and denoised images from the first three experiments, we can see that the fractional-order TV model can effectively reduce the block effect and perform better than the TV model.
The fourth experiment focused on the convergence of the algorithm. We took $\alpha=1.8$ as a representative value from the range 0.8 to 1.9 and recorded the PSNR value at each iteration. Figure 6 shows the experimental results: the blue line corresponds to the noisy image with $\sigma=15,\ s=0.01$, the red line to $\sigma=20,\ s=0.03$, and the yellow line to $\sigma=20,\ s=0.05$. Figure 6 makes it evident that our proposed fractional-order TV denoising algorithm converges.
Furthermore, we show that the proposed model performs well at removing mixed noise. For this purpose, we added Gaussian noise at $\sigma=20$ and salt-and-pepper noise at $s=0.03$ to the 'Lena' image and chose $\alpha=1.9$. Figure 7 shows the 60th and 100th rows of the 'Lena' image from a one-dimensional perspective; the original, noisy and denoised images are represented by black, pink and blue lines, respectively. The blue and black solid lines nearly coincide, which indicates that the proposed model has good denoising performance. Figure 8 shows that the histogram of the noisy image differs completely from that of the original image, while the histogram of the denoised image is similar to the original's. We also took a small part of the 'Lena' image and marked it with a red rectangle; the corresponding results can be seen in Figure 9.
In this paper, we developed a fractional-order TV ($L_1L_2/TV^\alpha$) model to remove mixtures of Gaussian and salt-and-pepper noise by incorporating an $L_1$ data fidelity term and an $L_2$ data fidelity term into the model. The existence of a solution of this model has been proved. We solved the proposed model with the proximity algorithm, which handles the non-differentiability of the fractional-order TV regularization term, and proved the convergence of the algorithm. The numerical experiments revealed the following: (1) The $L_1L_2/TV^\alpha$ model can effectively reduce the block effect and achieves better denoising performance than the $L_1L_2/TV$ model. (2) The $L_1L_2/TV^\alpha$ model effectively removes mixtures of Gaussian and salt-and-pepper noise owing to the proximity algorithm. (3) In the $L_1L_2/TV^\alpha$ model, $\alpha$ should range from 0.8 to 1.9, and different images have different optimal values of $\alpha$.
Donghong Zhao: Conceptualization, funding acquisition, supervision, methodology; Ruiying Huang: Writing-original draft, writing-review & editing, software, formal analysis; Li Feng: Writing-original draft, software.
The authors declare that they have not used artificial intelligence tools in the creation of this article.
This research was funded by Graduate Online Course Research Project of USTB (2024ZXB002), the National Natural Science Foundation of China (grant number 12371481) and Youth Teaching Talents Training Program of USTB (2016JXGGRC-002).
All authors declare no conflict of interest.
[1] R. J. Duffin, A. C. Schaeffer, A class of nonharmonic Fourier series, Trans. Amer. Math. Soc., 72 (1952), 341–366. doi: 10.2307/1990760
[2] I. Daubechies, A. Grossmann, Y. Meyer, Painless nonorthogonal expansions, J. Math. Phys., 27 (1986), 1271–1283. doi: 10.1063/1.527388
[3] O. Christensen, An introduction to frames and Riesz bases, Boston: Birkhäuser, 2003. doi: 10.1007/978-3-319-25613-9
[4] P. G. Casazza, The art of frame theory, Taiwanese J. Math., 4 (2000), 129–201. doi: 10.11650/twjm/1500407227
[5] T. Strohmer, R. W. Heath, Grassmannian frames with applications to coding and communication, Appl. Comput. Harmon. Anal., 14 (2003), 257–275. doi: 10.1016/S1063-5203(03)00023-X
[6] D. Han, W. Sun, Reconstruction of signals from frame coefficients with erasures at unknown locations, IEEE Trans. Inform. Theory, 60 (2014), 4013–4025. doi: 10.1109/TIT.2014.2320937
[7] S. Li, On general frame decompositions, Numer. Funct. Anal. Optim., 16 (1995), 1181–1191. doi: 10.1080/01630569508816668
[8] J. P. Gabardo, D. G. Han, Frames associated with measurable spaces, Adv. Comput. Math., 18 (2003), 127–147. doi: 10.1023/A:1021312429186
[9] B. Daraby, F. Delzendeh, A. Rostami, A. Rahimi, Fuzzy normed linear spaces and fuzzy frames, Azerbaijan J. Math., 9 (2019), 96–121.
[10] S. M. Ramezani, Soft g-frames in soft Hilbert spaces, arXiv: 2307.14390, 2023. doi: 10.48550/arXiv.2307.14390
[11] S. L. Adler, Quaternionic quantum mechanics and quantum fields, New York: Oxford University Press, 1995. doi: 10.1063/1.2807659
[12] R. Ghiloni, V. Moretti, A. Perotti, Continuous slice functional calculus in quaternionic Hilbert spaces, Rev. Math. Phys., 25 (2013), 1350006. doi: 10.1142/S0129055X13500062
[13] G. Birkhoff, J. von Neumann, The logic of quantum mechanics, Ann. Math., 37 (1936), 823–843.
[14] D. Aerts, Quantum axiomatics, In: Handbook of quantum logic and quantum structures, Amsterdam: Elsevier/North-Holland, 2 (2009), 79–126.
[15] C. Piron, Axiomatique quantique, Helv. Phys. Acta, 37 (1964), 439–468.
[16] F. Colombo, J. Gantner, D. P. Kimsey, Spectral theory on the S-spectrum for quaternionic operators, Cham: Birkhäuser, 2018. doi: 10.1007/978-3-030-03074-2
[17] M. Khokulan, K. Thirulogasanthar, S. Srisatkunarajah, Discrete frames on finite dimensional quaternion Hilbert spaces, Axioms, 6 (2017), 3. doi: 10.3390/axioms6010003
[18] S. K. Sharma, Virender, Dual frames on finite dimensional quaternionic Hilbert space, Poincare J. Anal. Appl., 2 (2016), 79–88. doi: 10.46753/PJAA.2016.V03I02.004
[19] S. K. Sharma, S. Goel, Frames in quaternionic Hilbert spaces, J. Math. Phys. Anal. Geom., 15 (2019), 395–411. doi: 10.15407/mag15.03.395
[20] S. K. Sharma, G. Singh, S. Sahu, Duals of a frame in quaternionic Hilbert spaces, arXiv: 1803.05773, 2018. doi: 10.48550/arXiv.1803.05773
[21] H. Ellouz, Some properties of K-frames in quaternionic Hilbert spaces, Complex Anal. Oper. Theory, 14 (2020). doi: 10.1007/s11785-019-00964-5
[22] W. Zhang, Y. Z. Li, Characterizations of Riesz bases in quaternionic Hilbert spaces, Chin. J. Contemp. Math., 44 (2023), 87–100. doi: 10.16205/j.cnki.cama.2023.0008
[23] X. Guo, Operator characterizations, rigidity and constructions of (Ω, μ)-frames, Numer. Funct. Anal. Optim., 39 (2017), 346–360. doi: 10.1080/01630563.2017.1364265
[24] S. Obeidat, S. Samarah, P. G. Casazza, J. C. Tremain, Sums of Hilbert space frames, J. Math. Anal. Appl., 351 (2009), 579–585. doi: 10.1016/j.jmaa.2008.10.040
[25] R. Chugh, S. Goel, On finite sum of g-frames and near exact g-frames, Electron. J. Math. Anal. Appl., 2 (2014), 73–80. doi: 10.1007/s00009-014-01811-8
[26] D. Li, J. Leng, T. Huang, G. Sun, On sum and stability of g-frames in Hilbert spaces, Linear Multilinear Algebra, 66 (2018), 1578–1592. doi: 10.1080/03081087.2017.1364338