This paper proposes a model that removes combined additive and multiplicative noise by merging the Rudin, Osher, and Fatemi (ROF) and I-divergence data fidelity terms. The model incorporates several important techniques for denoising and restoring medical images. Adding the I-divergence fidelity term increases the complexity of the model and the difficulty of its solution compared with the ROF model. To solve this model, we first proposed the generalized concept of the maximum common factor based on the inverse scale space algorithm. Unlike general denoising algorithms, the inverse scale space method exploits the fact that u starts at some value c0 and gradually approaches the noisy image f over time, which handles noise better while preserving image details, yielding sharper and more natural-looking images. Furthermore, a proof of the existence and uniqueness of the minimum solution of the model is provided. The experimental findings reveal that the proposed model has an excellent denoising effect on images corrupted by additive and multiplicative noise simultaneously. Compared with general methods, numerical results demonstrate that the nonlinear inverse scale space method achieves better performance and faster running time on medical images with combined noise, especially lesion images.
Citation: Chenwei Li, Donghong Zhao. Restoring medical images with combined noise base on the nonlinear inverse scale space method[J]. Mathematical Modelling and Control, 2025, 5(2): 216-235. doi: 10.3934/mmc.2025016
The weak core inverse was introduced in [1], where the authors presented some of its characterizations and properties. In [2], the authors introduced an extension of the weak core inverse. Continuing previous research on the weak core inverse, our purpose is to present new characterizations and representations of the weak core inverse. Additionally, we give several equivalent conditions for a matrix to be a weak core matrix.
Let Cm×n denote the set of all m×n complex matrices and Z+ the set of all positive integers. The symbols R(A), N(A), A∗, r(A), and In denote the range space, null space, conjugate transpose, and rank of A∈Cm×n, and the identity matrix of order n, respectively. Ind(A) denotes the index of A∈Cn×n. Let Cn×nk be the set of all n×n complex matrices with index k. The symbol dim(S) represents the dimension of a subspace S⊆Cn. PL stands for the orthogonal projection onto the subspace L. PA and PA∗ denote the orthogonal projections onto R(A) and R(A∗), respectively, i.e., PA=AA†, PA∗=A†A.
We will now introduce definitions of several generalized inverses that will be used throughout the paper. The Moore-Penrose inverse of A∈Cm×n, denoted by A†, is defined as the unique matrix X∈Cn×m satisfying [3]:
(1) AXA=A, (2) XAX=X, (3) (AX)∗=AX, (4) (XA)∗=XA.
In particular, X is an outer inverse of A which is denoted as A(2) if XAX=X. For any matrix A∈Cm×n with r(A)=r, let T⊆Cn, S⊆Cm be two subspaces such that dim(T)=t≤r and dim(S)=m−t. Then A has an outer inverse X that satisfies R(X)=T and N(X)=S if and only if AT⊕S=Cm. In that case X is unique and denoted by A(2)T,S [4].
The Drazin inverse of A∈Cn×nk, denoted by AD, is the unique matrix X∈Cn×n satisfying [5]: XAX=X, AX=XA, XAk+1=Ak.
For any matrix A∈Cn×n1, a new generalized inverse, called the core inverse [6], was introduced. Two other generalizations of the core inverse for A∈Cn×nk, namely the core-EP inverse [7] and the DMP inverse [8], were also introduced.
In 2018, Wang and Chen [9] defined the weak group inverse of A∈Cn×nk, denoted by AⓌ, as the unique matrix X∈Cn×n such that [9]: AX²=X and AX=A⊕A, where A⊕ denotes the core-EP inverse of A. Moreover, it was verified that AⓌ=(A⊕)²A.
Recently, Ferreyra et al. introduced a new generalization of core inverse called the weak core inverse of A∈Cn×nk, denoted by AⓌ,†(or, in short, WC inverse). It is defined as the unique matrix X∈Cn×n satisfying [1]:
XAX=X, AX=CA†, XA=ADC,
where C=AAⓌA. Moreover, it is proved that AⓌ,†=ADCA†=AⓌAA†.
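As a quick numerical illustration of the three defining equations and of the representation AⓌ,†=ADCA†, the following NumPy sketch uses the small 3×3 index-2 matrix of Example 3.4 below. It hardcodes the weak group inverse AⓌ of this particular matrix (known in closed form here) and computes the Drazin inverse through the standard representation A^D = A^l (A^(2l+1))^† A^l for l ≥ Ind(A), which is an assumption imported from the literature rather than a formula of this paper:

```python
import numpy as np

# A small matrix with Ind(A) = 2 (the 3x3 matrix of Example 3.4 below).
A = np.array([[1., 0., 0.],
              [0., 0., 3.],
              [0., 0., 0.]])
k = 2

# Weak group inverse of this particular matrix (known in closed form here;
# in general it must be computed from the core-EP decomposition).
AW = np.diag([1., 0., 0.])

# Drazin inverse via the representation A^D = A^l (A^(2l+1))^+ A^l, l >= Ind(A)
# (a standard formula, assumed here).
Ak = np.linalg.matrix_power(A, k)
AD = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

C = A @ AW @ A
X = AD @ C @ np.linalg.pinv(A)   # WC inverse via A^{W,+} = A^D C A^+

# The three defining equations of the WC inverse:
assert np.allclose(X @ A @ X, X)                   # XAX = X
assert np.allclose(A @ X, C @ np.linalg.pinv(A))   # AX  = C A^+
assert np.allclose(X @ A, AD @ C)                  # XA  = A^D C
```

For this matrix the computation returns X = diag(1, 0, 0), the weak core inverse displayed later in Example 3.4.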
The structure of this paper is as follows: In Section 2, we give some preliminaries which will be made use of later in this paper. In Section 3, we discuss some characterizations of the WC inverse based on its range space, null space and matrix equations. In Section 4, several new representations of the WC inverse are proposed. Section 5 is devoted to deriving some properties of the WC inverse by the core-EP decomposition. Moreover, in Section 6, we present several equivalent conditions for a matrix to be a weak core matrix.
For convenience, we will use the following notations: CCMn, CEPn, CPn and COPn will denote the subsets of Cn×n consisting of core matrices, EP matrices, idempotent matrices and Hermitian idempotent matrices, respectively, i.e.,
● CCMn={A∣A∈Cn×n,r(A2)=r(A)};
● CEPn={A∣A∈Cn×n,R(A)=R(A∗)};
● CPn={A∣A∈Cn×n,A2=A};
● COPn={A∣A∈Cn×n,A2=A=A∗}.
Before giving characterizations of the WC inverse, we first present the following auxiliary lemmas which will be repeatedly used throughout this paper.
Lemma 2.1. [10] Let A∈Cn×nk. Then A can be represented as
A = U\begin{bmatrix} T & S \\ 0 & N \end{bmatrix}U^{*},  (2.1)
where T∈Ct×t is nonsingular and t=r(T)=r(Ak), N is nilpotent with index k, and U∈Cn×n is unitary.
Moreover, the representation of A given by (2.1) is unique [10,Theorem 2.4]. In that case, we have that
A^{k} = U\begin{bmatrix} T^{k} & \tilde{T} \\ 0 & 0 \end{bmatrix}U^{*},  (2.2)

where \tilde{T}=\sum_{j=0}^{k-1}T^{j}SN^{k-1-j}.
Lemma 2.2. [1,9,10,11,12] Let A∈Cn×nk be given by (2.1). Then:

A† = U\begin{bmatrix} T^{*}\triangle & -T^{*}\triangle SN^{\dagger} \\ (I_{n-t}-N^{\dagger}N)S^{*}\triangle & N^{\dagger}-(I_{n-t}-N^{\dagger}N)S^{*}\triangle SN^{\dagger} \end{bmatrix}U^{*},  (2.3)

AD = U\begin{bmatrix} T^{-1} & (T^{k+1})^{-1}\tilde{T} \\ 0 & 0 \end{bmatrix}U^{*},  (2.4)

A⊕ = U\begin{bmatrix} T^{-1} & 0 \\ 0 & 0 \end{bmatrix}U^{*},  (2.5)

AD,† = U\begin{bmatrix} T^{-1} & (T^{k+1})^{-1}\tilde{T}NN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*},  (2.6)

A†,D = U\begin{bmatrix} T^{*}\triangle & T^{*}\triangle T^{-k}\tilde{T} \\ (I_{n-t}-N^{\dagger}N)S^{*}\triangle & (I_{n-t}-N^{\dagger}N)S^{*}\triangle T^{-k}\tilde{T} \end{bmatrix}U^{*},  (2.7)

AⓌ = U\begin{bmatrix} T^{-1} & T^{-2}S \\ 0 & 0 \end{bmatrix}U^{*},  (2.8)

AⓌ,† = U\begin{bmatrix} T^{-1} & T^{-2}SNN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*},  (2.9)

where \tilde{T}=\sum_{j=0}^{k-1}T^{j}SN^{k-1-j}, \triangle=[TT^{*}+S(I_{n-t}-N^{\dagger}N)S^{*}]^{-1}, and A⊕ denotes the core-EP inverse of A.
\tilde{T} and \triangle will be used frequently throughout this paper.
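The block formulas above are easy to sanity-check numerically: pick a nonsingular T, an arbitrary S, a nilpotent N, and a unitary U, assemble A by (2.1), and compare the block expression (2.9) against AⓌAA†, with AⓌ built from (2.8). A sketch with U = I and the blocks of Example 4.5 later in the paper:

```python
import numpy as np

# Core-EP blocks: T nonsingular, N nilpotent; take U = I for simplicity.
T = np.array([[2., 2.], [3., 4.]])
S = np.array([[1., 0.], [2., 0.]])
N = np.array([[0., 1.], [0., 0.]])   # N^2 = 0, so Ind(A) = k = 2
Z = np.zeros((2, 2))

A = np.block([[T, S], [Z, N]])       # the decomposition (2.1) with U = I

Tinv = np.linalg.inv(T)
AW   = np.block([[Tinv, Tinv @ Tinv @ S], [Z, Z]])                          # (2.8)
AWd  = np.block([[Tinv, Tinv @ Tinv @ S @ N @ np.linalg.pinv(N)], [Z, Z]])  # (2.9)

# (2.9) agrees with the representation A^{W,+} = A^W A A^+ quoted in Section 1,
assert np.allclose(AWd, AW @ A @ np.linalg.pinv(A))
# and r(A^{W,+}) = r(A^k), as in Lemma 2.4(b).
assert np.linalg.matrix_rank(AWd) == np.linalg.matrix_rank(A @ A)
```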
Lemma 2.3. [1] Let A∈Cn×nk. Then
(a) AⓌ,†=A(2)R(Ak), N((Ak)∗A2A†);
(b) AAⓌ,†=PR(Ak), N((Ak)∗A2A†);
(c) AⓌ,†A=PR(Ak), N((Ak)∗A2).
Lemma 2.4. Let A∈Cn×nk and C=AAⓌA. The following conditions hold:
(a) [1]
(b) [1] r(AⓌ,†)=r(Ak);
(c) [1] CA†C=C;
(d) [9] AⓌAk+1=Ak;
(e) Ck=AkAⓌA.
Proof. Item (e) can be directly verified by (2.1), (2.2) and (2.8).
Applying known results on the WC inverse involving R(X)=R(Ak) and N(X)=N((Ak)∗A2A†), we derive some new characterizations of the WC inverse in the next theorem.
Theorem 3.1. Let A∈Cn×nk and C=AAⓌA. The following statements are equivalent:
(a) X=AⓌ,†;
(b) N(X)=N((Ak)∗A2A†) and XAA∗=AⓌAA∗;
(c) N(X)=N((Ak)∗A2A†) and XA=AⓌA;
(d) R(X)=R(Ak) and A∗AX=A∗CA†;
(e) R(X)=R(Ak) and AX=CA†;
(f) R(X)=R(Ak) and AkX=CkA†;
(g) R(X)=R(Ak), N(X)=N((Ak)∗A2A†) and XAAⓌ=AⓌ;
(h) R(X)=R(Ak), N(X)=N((Ak)∗A2A†) and XAk+1=Ak.
Proof. (a)⇒(b). By the definition of AⓌ,†, we have that XAA∗=ADCA∗=ADAAⓌAA∗=AⓌAA∗. Hence, by (a) of Lemma 2.3, we now obtain that (b) holds.
(b)⇒(c). Postmultiplying XAA∗=AⓌAA∗ by (A†)∗, we obtain that XA=AⓌA.
(c)⇒(d). From N(X)=N((Ak)∗A2A†), we have that N(AA†)⊆N((Ak)∗A2A†)=N(X), which leads to X=XAA†. Thus we get that X=XAA†=AⓌAA†=AⓌ,† by XA=AⓌA. Hence, by the definition of AⓌ,† and (a) of Lemma 2.3, we have that (d) holds.
(d)⇒(e). Evidently.
(e)⇒(f). Since C=AAⓌA and Ck=AkAⓌA, premultiplying AX=CA† by Ak−1, we have that AkX=CkA†.
(f)⇒(g). From (2.2) and R(X)=R(Ak), we can set X=U[X1X200]U∗, where X1∈Ct×t, X2∈Ct×(n−t) and t=r(Ak). Furthermore, it follows from AkX=CkA† and (2.9) that X=AⓌ,†. Therefore, by the definition of AⓌ,† and (a) of Lemma 2.3, we obtain that (g) holds.
(g)⇒(h). It follows from AⓌAk+1=Ak and XAAⓌ=AⓌ that XAk+1=XAAⓌAk+1=AⓌAk+1=Ak.
(h)⇒(a). By R(X)=R(Ak) and XAk+1=Ak, we get that XAX=X. Hence, by (a) of Lemma 2.3, we get that X=AⓌ,†.
Now we will consider other characterizations of the WC inverse by the fact that AⓌ,†AAⓌ,†=AⓌ,†.
Theorem 3.2. Let A∈Cn×nk and C=AAⓌA. The following statements are equivalent:
(a) X=AⓌ,†;
(b) XAX=X, R(X)=R(Ak) and N(X)=N((Ak)∗A2A†);
(c) XAX=X, R(X)=R(Ak) and AX=CA†;
(d) XAX=X, AX=CA† and XAk=AⓌAk;
(e) XAX=X, XA=AⓌA and AkX=CkA†;
(f) XAX=X, XA=AⓌA and N(X)=N((Ak)∗A2A†).
Proof. (a)⇒(b). The proof can be demonstrated by (a) of Lemma 2.3.
(b)⇒(c). By the definition of AⓌ,† and (b) of Lemma 2.3, we get that AX∈CPn, R(AX)=AR(X)=R(Ak+1)=R(Ak)=R(AAⓌ,†)=R(CA†) and N(AX)=N(X)=N((Ak)∗A2A†)=N(AAⓌ,†)=N(CA†). On the other hand, Lemma 2.4 (c) implies CA†∈CPn, hence AX=CA†.
(c)⇒(d). By item (c) of Lemma 2.3, we obtain that R(X)=R(Ak)=R(AⓌ,†A). So we get that AⓌ,†AX=X, which implies that XAk=AⓌ,†AXAk=AⓌ,†CA†Ak=AⓌ,†AAⓌAA†Ak=AⓌAk.
(d)⇒(e). By conditions and AAⓌ=Ak(AⓌ)k, we can infer that X=XCA†=XAAⓌAA†=XAk(AⓌ)kAA†=AⓌAk(AⓌ)kAA†=AⓌ,†. Hence, by AⓌ,†=AⓌAA† and AkAⓌA=Ck, we obtain that (e) holds.
(e)⇒(f). Since XAX=X, XA=AⓌA, we have that R(X)=R(XA)=R(AⓌA)=R(Ak). We now obtain that X=AⓌ,† by (f) of Theorem 3.1. Hence (f) holds by (a) of Lemma 2.3.
(f)⇒(a). It follows from XAX=X that N(AX)=N(X), by conditions and (a) of Lemma 2.3. We now obtain that X=XAX=AⓌAX=AⓌAA†AX=AⓌ,†PR(AX), N(AX)=AⓌ,†.
Notice the fact that XAk+1=Ak if X=AⓌ,†. Therefore, we will characterize the WC inverse in terms of AⓌ,†Ak+1=Ak.
Theorem 3.3. Let A∈Cn×nk and C=AAⓌA. The following statements are equivalent:
(a) X=AⓌ,†;
(b) XAk+1=Ak, A∗AX=A∗CA† and r(X)=r(Ak);
(c) XAk+1=Ak, AX=CA† and r(X)=r(Ak);
(d) XAk+1=Ak, AkX=CkA† and r(X)=r(Ak).
Proof. (a)⇒(b). Since AⓌ,†=AⓌAA†, we can show that XAk+1=Ak, A∗AX=A∗CA†. Then, by (b) of Lemma 2.4, we get that (b) holds.
(b)⇒(c). Obviously.
(c)⇒(d). Premultiplying AX=CA† by Ak−1, we have that AkX=CkA† from AkAⓌA=Ck.
(d)⇒(a). It follows from XAk+1=Ak and r(X)=r(Ak) that R(X)=R(Ak). Hence, we obtain that X=AⓌ,† from (f) of Theorem 3.1.
In the following example, we show that the condition r(X)=r(Ak) in Theorem 3.3 is necessary.
Example 3.4. Let
A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{bmatrix}, \quad X = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}.
Then Ind(A)=2,
A† = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1/3 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad and \quad AⓌ,† = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.
It can be directly verified that XA3=A2, A∗AX=A∗CA† and r(X)≠r(A2), but X≠AⓌ,†. The other cases follow similarly.
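The claims of this example can be confirmed with a short NumPy check (the pseudoinverse is computed numerically rather than taken from the displayed A†):

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 0., 3.],
              [0., 0., 0.]])
X = np.array([[1., 0., 0.],
              [0., 0., 2.],
              [0., 0., 0.]])
A2 = A @ A
Apinv = np.linalg.pinv(A)

C = np.zeros((3, 3)); C[0, 0] = 1.0   # C = A A^W A for this matrix
AWd = C.copy()                        # the WC inverse stated in the example

# X satisfies two of the three conditions in Theorem 3.3(b) ...
assert np.allclose(X @ np.linalg.matrix_power(A, 3), A2)   # X A^3 = A^2
assert np.allclose(A.T @ A @ X, A.T @ C @ Apinv)           # A* A X = A* C A^+
# ... but violates the rank condition, and indeed X != A^{W,+}.
assert np.linalg.matrix_rank(X) != np.linalg.matrix_rank(A2)
assert not np.allclose(X, AWd)
```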
By Lemma 2.3, it is clear that AX=PR(Ak), N((Ak)∗A2A†) and XA=PR(Ak), N((Ak)∗A2) if X=AⓌ,†. However, the converse is invalid as shown in the next example:
Example 3.5. Let A, X be the same as in Example 3.4. Then
AX = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad XA = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad and \quad AⓌ,† = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.
It can be directly verified that AX=PR(A2), N((A2)∗A2A†) and XA=PR(A2), N((A2)∗A2), but X≠AⓌ,†.
In the next result, we will present some new equivalent conditions for the converse implication:
Theorem 3.6. Let A∈Cn×nk and X∈Cn×n. The following statements are equivalent:
(a) X=AⓌ,†;
(b) AX=PR(Ak), N((Ak)∗A2A†),XA=PR(Ak), N((Ak)∗A2) and r(X)=r(Ak);
(c) AX=PR(Ak), N((Ak)∗A2A†),XA=PR(Ak), N((Ak)∗A2) and XAX=X;
(d) AX=PR(Ak), N((Ak)∗A2A†),XA=PR(Ak), N((Ak)∗A2) and AX2=X.
Proof. (a)⇒(b). The proof can be demonstrated by (b) and (c) of Lemma 2.3 and (b) of Lemma 2.4.
(b)⇒(c). By R(XA)=R(Ak) and r(X)=r(Ak), we obtain that R(X)=R(XA)=R(Ak), hence we further derive that XAX=X.
(c)⇒(d). By conditions and (a) of Lemma 2.3, we have that X=AⓌ,†. Therefore, by (2.9), it can be directly verified that AX2=X.
(d)⇒(a). From AX2=X, we have that X=AX2=A2X3=⋯=AkXk+1, which implies R(X)⊆R(Ak). Combining with the condition R(Ak)=R(XA)⊆R(X), we get that R(X)=R(Ak). From (2.2), we now set X=U[X1X200]U∗, where X1∈Ct×t, X2∈Ct×(n−t) and t=r(Ak). On the other hand, it follows from N(AX)=N((Ak)∗A2A†) that (Ak)∗A2A†=(Ak)∗A2X, which yields X1=T−1 and X2=T−2SNN†. Therefore, by (2.9), we obtain that X=AⓌ,†.
In [1], the authors introduced the definition of AⓌ,† with an algebraic approach. In the next result, we will consider characterization of AⓌ,† with a geometrical point of view.
Theorem 3.7. Let A∈Cn×nk. Then:
(a) AⓌ,† is the unique matrix X that satisfies:
AX=PR(Ak), N((Ak)∗A2A†), R(X)⊆R(Ak). | (3.1) |
(b) AⓌ,† is the unique matrix X that satisfies:
XA=PR(Ak), N((Ak)∗A2), N(A∗)⊆N(X). | (3.2) |
Proof. (a). Since R(AD)=R(Ak), it is a consequence of [2, Corollary 3.2] and properties of the Drazin and Moore-Penrose inverses.
(b). By item (c) of Lemma 2.3, AⓌ,† satisfies XA=PR(Ak), N((Ak)∗A2). Additionally, we derive that N(A∗)=N(A†)⊆N(AⓌAA†)=N(X). Now it remains to prove that X is unique.
Assume that X1 and X2 satisfy (3.2). Then X1A=X2A, N(A∗)⊆N(X1) and N(A∗)⊆N(X2). Furthermore, we get that (X1−X2)A=0 and R(X∗i)⊆R(A) for i=1,2, which further imply that A∗(X∗1−X∗2)=0 and R(X∗1−X∗2)⊆R(A). Therefore we have that R(X∗1−X∗2)⊆R(A)∩N(A∗)=R(A)∩R(A)⊥={0}. Thus, X∗1=X∗2, i.e., X1=X2.
Remark 3.8. In Theorem 3.7, R(X)⊆R(Ak) in (3.1) can be replaced by R(X)=R(Ak). However, if we replace N(A∗)⊆N(X) with N(A∗)=N(X) in (3.2), item (b) of Theorem 3.7 does not hold.
Characterizations of some generalized inverses by using its block matrices have been investigated in [13,14,15,16,17]. In [18,Theorem 3.2], the authors presented a characterization for the WC inverse using its block matrices. Next we will give another proof of it by using characterization of projection operator.
Theorem 3.9. Let A∈Cn×nk and r(Ak)=t. Then there exist a unique matrix P such that
P2=P, PAk=0, (Ak)∗A2P=0, r(P)=n−t, | (3.3) |
a unique matrix Q such that
Q2=Q, QAk=0, (Ak)∗A2A†Q=0, r(Q)=n−t, | (3.4) |
and a unique matrix X such that
r\left(\begin{bmatrix} A & I−Q \\ I−P & X \end{bmatrix}\right) = r(A).  (3.5)
Furthermore, X is the WC inverse AⓌ,† of A and
P=PN((Ak)∗A2), R(Ak), Q=PN((Ak)∗A2A†), R(Ak). | (3.6) |
Proof. It is not difficult to prove that
the conditions in (3.3) hold ⟺ (I−P)²=I−P, (I−P)Ak=Ak, (Ak)∗A2(I−P)=(Ak)∗A2, r(P)=n−t
⟺ I−P=PR(Ak), N((Ak)∗A2)
⟺ P=PN((Ak)∗A2), R(Ak).
Similarly, we can show that (3.4) have the unique solution Q=PN((Ak)∗A2A†), R(Ak).
Furthermore, comparing (3.6) and items (b) and (c) of Lemma 2.3 immediately leads to the conclusion that
r\left(\begin{bmatrix} A & I−Q \\ I−P & X \end{bmatrix}\right) = r\left(\begin{bmatrix} A & AAⓌ,† \\ AⓌ,†A & X \end{bmatrix}\right) = r(A)+r(X−AⓌ,†).
By (3.5), we obtain that X=AⓌ,†.
In [19], Drazin introduced the (b,c)-inverse in semigroups. In [20], Benítez et al. investigated the (B,C)-inverse of A∈Cm×n as the unique matrix X∈Cn×m satisfying [20]:
CAX=C, XAB=B, R(X)=R(B), N(X)=N(C),
where B,C∈Cn×m. In the next result, we will show that the WC inverse is a special (B,C)-inverse.
Theorem 4.1. Let A∈Cn×nk. Then
AⓌ,†=A(Ak,(Ak)∗A2A†). |
Proof. According to Lemma 2.3, we get that
R(AⓌ,†)=R(Ak), N(AⓌ,†)=N((Ak)∗A2A†)). |
Observe that AⓌ,†AAk=AⓌAk+1=Ak and (Ak)∗A2A†AAⓌ,†=(Ak)∗A2AⓌAA†=(Ak)∗A2A†. Thus, we obtain AⓌ,†=A(Ak,(Ak)∗A2A†).
In [21], the authors introduced the Bott-Duffin inverse of A∈Cn×n when APL+PL⊥ is nonsingular, i.e., A^{(−1)}_{L}=PL(APL+PL⊥)^{−1}=PL(APL+I−PL)^{−1}. In [22], the authors expressed the weak group inverse as a special Bott-Duffin inverse. Next we will show that the WC inverse of A is indeed the Bott-Duffin inverse of A² with respect to R(Ak).
Theorem 4.2. Let A∈Cn×nk be given by (2.1). Then
AⓌ,† = (A²)^{(−1)}_{(R(Ak))}APA = (PAkA²PAk)†APA.  (4.1)
Proof. It follows from (2.3) and (2.2) that
PA = U\begin{bmatrix} I_{t} & 0 \\ 0 & NN^{\dagger} \end{bmatrix}U^{*},  (4.2)

PAk = U\begin{bmatrix} I_{t} & 0 \\ 0 & 0 \end{bmatrix}U^{*}.  (4.3)
We now obtain that
(A²)^{(−1)}_{(R(Ak))}APA = PAk(A²PAk+I−PAk)^{−1}APA
= U\begin{bmatrix} I_{t} & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} T^{2} & 0 \\ 0 & I_{n−t} \end{bmatrix}^{−1}\begin{bmatrix} T & S \\ 0 & N \end{bmatrix}\begin{bmatrix} I_{t} & 0 \\ 0 & NN^{\dagger} \end{bmatrix}U^{*}
= U\begin{bmatrix} T^{−1} & T^{−2}SNN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*} = AⓌ,†.
Similarly, by a direct calculation, we can derive that AⓌ,†=(PAkA2PAk)†APA.
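Formula (4.1) is directly computable with NumPy; the following sketch uses the 4×4 index-2 matrix of Example 4.5 and compares the result with the value of AⓌ,† stated there:

```python
import numpy as np

A = np.array([[2., 2., 1., 0.],
              [3., 4., 2., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])   # Ind(A) = 2
k = 2
I = np.eye(4)

Ak   = np.linalg.matrix_power(A, k)
P_Ak = Ak @ np.linalg.pinv(Ak)    # orthogonal projector onto R(A^k)
P_A  = A @ np.linalg.pinv(A)      # orthogonal projector onto R(A)

# Formula (4.1): A^{W,+} = P_{A^k} (A^2 P_{A^k} + I - P_{A^k})^{-1} A P_A.
AWd = P_Ak @ np.linalg.inv(A @ A @ P_Ak + I - P_Ak) @ A @ P_A

expected = np.array([[ 2. , -1., -0.5, 0.],
                     [-1.5,  1.,  0.5, 0.],
                     [ 0. ,  0.,  0. , 0.],
                     [ 0. ,  0.,  0. , 0.]])
assert np.allclose(AWd, expected)
```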
Working with the fact that P=PN((Ak)∗A2), R(Ak) and Q=PN((Ak)∗A2A†), R(Ak) in Theorem 3.9, we will consider other representations of AⓌ,† in the next theorem.
Theorem 4.3. Let A∈Cn×nk and P=PN((Ak)∗A2), R(Ak), Q=PN((Ak)∗A2A†), R(Ak). Then for any a,b≠0, we have
AⓌ,†=(A+aP)−1(I−Q)=(I−P)(A+bQ)−1. | (4.4) |
Proof. From items (b) and (c) of Lemma 2.3, it is not difficult to conclude that
(A+aP)AⓌ,†=I−Q. |
Now we only need to show the invertibility of A+aP. Assume that α=U\begin{bmatrix} α_{1} \\ α_{2} \end{bmatrix}∈Cn is such that (A+aP)α=0, i.e., Aα=−aPα, where α1∈Ct and α2∈Cn−t. Now it follows from condition (c) of Lemma 2.3 and (3.6) that

\begin{bmatrix} T & S \\ 0 & N \end{bmatrix}\begin{bmatrix} α_{1} \\ α_{2} \end{bmatrix} = −a\begin{bmatrix} 0 & −T^{−1}S−T^{−2}SN \\ 0 & I \end{bmatrix}\begin{bmatrix} α_{1} \\ α_{2} \end{bmatrix},

implying α1=0 and α2=0, since a≠0, N is nilpotent, and T is nonsingular. Thus A+aP is nonsingular.
Analogously, we can prove that A+bQ is invertible and AⓌ,†=(I−P)(A+bQ)−1.
The limit expressions for some generalized inverses of matrices have been given in [14,15,16,17,23,24]. Similarly, the WC inverse can also be characterized as a limit, as shown in the next result:
Theorem 4.4. Let A∈Cn×nk. Then:
(a) AⓌ,†=limλ→0Ak(λIn+(Ak)∗Ak+2)−1(Ak)∗A2A∗(λIn+AA∗)−1;
(b) AⓌ,†=limλ→0Ak(Ak)∗A(λIn+Ak+1(Ak)∗A)−1AA∗(λIn+AA∗)−1;
(c) AⓌ,†=limλ→0(λIn+Ak(Ak)∗A2)−1Ak(Ak)∗A2A∗(λIn+AA∗)−1;
(d) AⓌ,†=limλ→0Ak(Ak)∗A2A∗(λIn+AA∗)−1(λIn+Ak+1(Ak)∗A2A∗(λIn+AA∗)−1)−1.
Proof. According to condition (a) of Lemma 2.3, it is not hard to show that
AⓌ,†=A(2)R(Ak), N((Ak)∗A2A†)=A(2)R(Ak(Ak)∗A2A†), N(Ak(Ak)∗A2A†). |
Thus, by [25,Theorem 2.1], we have the following results:
(a) Let X=Ak, Y=(Ak)∗A2A† and by A†=limλ→0A∗(λIn+AA∗)−1. We have
AⓌ,†=limλ→0Ak(λIn+(Ak)∗Ak+2)−1(Ak)∗A2A∗(λIn+AA∗)−1. |
(b) Let X=Ak(Ak)∗A, Y=AA† and by A†=limλ→0A∗(λIn+AA∗)−1. We have
AⓌ,†=limλ→0Ak(Ak)∗A(λIn+Ak+1(Ak)∗A)−1AA∗(λIn+AA∗)−1. |
(c) Let X=In, Y=Ak(Ak)∗A2A† and by A†=limλ→0A∗(λIn+AA∗)−1. We have
AⓌ,†=limλ→0(λIn+Ak(Ak)∗A2)−1Ak(Ak)∗A2A∗(λIn+AA∗)−1. |
(d) Let X=Ak(Ak)∗A2A†, Y=In and by A†=limλ→0A∗(λIn+AA∗)−1. We have
AⓌ,†=limλ→0Ak(Ak)∗A2A∗(λIn+AA∗)−1(λIn+Ak+1(Ak)∗A2A∗(λIn+AA∗)−1)−1. |
We end this section with three examples of computing the WC inverse of a matrix using the three different expressions in Theorems 4.2–4.4.
Example 4.5. Let
A = \begin{bmatrix} 2 & 2 & 1 & 0 \\ 3 & 4 & 2 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}.  (4.5)
Then Ind(A)=2 and the weak core inverse of A is
AⓌ,† = A²(A⁴)†A²A† = \begin{bmatrix} 2 & −1 & −1/2 & 0 \\ −3/2 & 1 & 1/2 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.
First, we use the expression (4.1) to compute the WC inverse of A. Then

(A²PA²+I−PA²)^{−1} = \begin{bmatrix} 11/2 & −3 & 0 & 0 \\ −9/2 & 5/2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad and \quad (PA²A²PA²)† = \begin{bmatrix} 11/2 & −3 & 0 & 0 \\ −9/2 & 5/2 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.
After simplification, it follows that (A2)(−1)(R(A2))APA=AⓌ,† and (PA2A2PA2)†APA=AⓌ,†.
Secondly, using the expression (4.4), we obtain
(A−6P)^{−1} = \begin{bmatrix} 2 & −1 & −1/2 & −19/12 \\ −3/2 & 1 & 7/12 & 97/72 \\ 0 & 0 & −1/6 & −1/36 \\ 0 & 0 & 0 & −1/6 \end{bmatrix} \quad and \quad (A+15Q)^{−1} = \begin{bmatrix} 2 & −1 & −1/2 & 1/30 \\ −3/2 & 1 & 7/15 & −7/225 \\ 0 & 0 & 1/15 & −1/225 \\ 0 & 0 & 0 & 1/15 \end{bmatrix}.

Therefore, it can be directly verified that (A−6P)^{−1}(I−Q)=AⓌ,† and (I−P)(A+15Q)^{−1}=AⓌ,†.
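This step can be reproduced numerically: with AⓌ,† known, the two projectors follow from Lemma 2.3 as P = I − AⓌ,†A and Q = I − AAⓌ,†, and (4.4) can then be checked for the values a = −6 and b = 15 used above (a sketch):

```python
import numpy as np

A = np.array([[2., 2., 1., 0.],
              [3., 4., 2., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
AWd = np.array([[ 2. , -1., -0.5, 0.],
                [-1.5,  1.,  0.5, 0.],
                [ 0. ,  0.,  0. , 0.],
                [ 0. ,  0.,  0. , 0.]])   # A^{W,+} from Example 4.5
I = np.eye(4)

# Lemma 2.3: I - P = A^{W,+} A and I - Q = A A^{W,+}.
P = I - AWd @ A
Q = I - A @ AWd

# Theorem 4.3 with a = -6 and b = 15 (any nonzero a, b work):
assert np.allclose(np.linalg.inv(A - 6 * P) @ (I - Q), AWd)
assert np.allclose((I - P) @ np.linalg.inv(A + 15 * Q), AWd)
```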
Finally, we use the limit expression of item (a) in Theorem 4.4.
Let B=A2(λIn+(A2)∗A4)−1(A2)∗A2A∗(λIn+AA∗)−1, then
B = A²(λIn+(A²)∗A⁴)^{−1}(A²)∗A²A∗(λIn+AA∗)^{−1} = \begin{bmatrix} 2(30321λ²+5361λ+580)/λ_{1} & 2(54629λ²+6699λ−290)/λ_{1} & (1305λ−58)/λ_{2} & 0 \\ (110503λ²+19405λ−870)/λ_{1} & 4(49773λ²+6090λ+145)/λ_{1} & (2378λ+58)/λ_{2} & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},

where λ1=λ⁴+38734λ³+1470569λ²+197888λ+580 and λ2=λ³+38697λ²+38812λ+116.
After simplification, it follows that
limλ→0B=limλ→0A2(λIn+(A2)∗A4)−1(A2)∗A2A∗(λIn+AA∗)−1=AⓌ,†. |
The other cases in Theorem 4.4 can be similarly verified.
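The limit expressions are also easy to test numerically by evaluating them at a small λ > 0. A sketch for item (a) with the matrix of Example 4.5 (since A is real, A∗ is just the transpose; λ = 1e−6 already reproduces AⓌ,† to a few decimal places in double precision):

```python
import numpy as np

A = np.array([[2., 2., 1., 0.],
              [3., 4., 2., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
k, n = 2, 4
I = np.eye(n)
Ak = np.linalg.matrix_power(A, k)

def wc_limit(lam):
    """Item (a) of Theorem 4.4 evaluated at a finite lambda > 0."""
    left  = Ak @ np.linalg.inv(lam * I + Ak.T @ np.linalg.matrix_power(A, k + 2))
    right = Ak.T @ A @ A @ A.T @ np.linalg.inv(lam * I + A @ A.T)
    return left @ right

expected = np.array([[ 2. , -1., -0.5, 0.],
                     [-1.5,  1.,  0.5, 0.],
                     [ 0. ,  0.,  0. , 0.],
                     [ 0. ,  0.,  0. , 0.]])
# lambda = 1e-6 approximates A^{W,+} well; too small a lambda would hurt
# the conditioning of the regularized inverses.
assert np.allclose(wc_limit(1e-6), expected, atol=1e-2)
```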
In this section, we discuss some properties of the WC inverse and consider the connection between the WC inverse and other known classes of matrices.
Lemma 5.1. Let A∈Cn×nk be given by (2.1). Then:
(a) A∈CEPn⇔S=0 and N=0;
(b) A∈CPn⇔T=It and N=0;
(c) A∈COPn⇔T=It, S=0 and N=0.
Proof. (a) The proof can be easily verified from (2.1) and (2.3).
(b) By (2.1), we obtain that A∈CPn is equivalent with
U\begin{bmatrix} T^{2} & TS+SN \\ 0 & N^{2} \end{bmatrix}U^{*} = U\begin{bmatrix} T & S \\ 0 & N \end{bmatrix}U^{*},

which is further equivalent to T²=T, TS+SN=S and N²=N. Hence, by the nonsingularity of T and Nk=0, we can conclude that A∈CPn if and only if T=It and N=0.
(c) Since COPn⊆CPn, it is a direct consequence of item (b) and (2.1).
Theorem 5.2. Let A∈Cn×nk be given by (2.1). The following statements hold:
(a) AⓌ,†=0 ⇔ A is nilpotent;
(b) AⓌ,†=A⇔A∈CEPn and A3=A;
(c) AⓌ,†=A∗⇔A∈CEPn and AA∗=PAk;
(d) AⓌ,†=PA⇔A∈CPn;
(e) AⓌ,†=PA∗⇔A∈COPn.
Proof. (a) By (2.1) and (2.9), we directly get that
AⓌ,†=0⟺r(Ak)=t=0⟺A is nilpotent. |
(b) It follows from (2.1), (2.9) and (a) of Lemma 5.1 that

AⓌ,†=A ⟺ U\begin{bmatrix} T^{−1} & T^{−2}SNN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*} = U\begin{bmatrix} T & S \\ 0 & N \end{bmatrix}U^{*} ⟺ S=0, N=0 and T^{3}=T ⟺ A∈CEPn and A³=A.
(c) By (2.1), (2.9) and (a) of Lemma 5.1, we have that

AⓌ,†=A∗ ⟺ U\begin{bmatrix} T^{−1} & T^{−2}SNN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*} = U\begin{bmatrix} T^{*} & 0 \\ S^{*} & N^{*} \end{bmatrix}U^{*} ⟺ S=0, N=0 and TT^{*}=I_{t} ⟺ A∈CEPn and AA∗=PAk.
(d) From (2.9), (4.2) and (b) of Lemma 5.1, we obtain that

AⓌ,†=PA ⟺ U\begin{bmatrix} T^{−1} & T^{−2}SNN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*} = U\begin{bmatrix} I_{t} & 0 \\ 0 & NN^{\dagger} \end{bmatrix}U^{*} ⟺ T=I_{t}, N=0 ⟺ A∈CPn.
(e) It follows from (2.1) and (2.3) that
PA∗ = A†A = U\begin{bmatrix} T^{*}\triangle T & T^{*}\triangle S(I_{n−t}−N^{\dagger}N) \\ (I_{n−t}−N^{\dagger}N)S^{*}\triangle T & N^{\dagger}N+(I_{n−t}−N^{\dagger}N)S^{*}\triangle S(I_{n−t}−N^{\dagger}N) \end{bmatrix}U^{*}.  (5.1)
By (2.9) and (5.1), we now get that AⓌ,†=PA∗ is equivalent to

U\begin{bmatrix} T^{−1} & T^{−2}SNN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*} = U\begin{bmatrix} T^{*}\triangle T & T^{*}\triangle S(I_{n−t}−N^{\dagger}N) \\ (I_{n−t}−N^{\dagger}N)S^{*}\triangle T & N^{\dagger}N+(I_{n−t}−N^{\dagger}N)S^{*}\triangle S(I_{n−t}−N^{\dagger}N) \end{bmatrix}U^{*},

which is further equivalent to T^{−1}=T^{*}\triangle T, (I_{n−t}−N^{\dagger}N)S^{*}\triangle T=0 and N^{\dagger}N+(I_{n−t}−N^{\dagger}N)S^{*}\triangle S(I_{n−t}−N^{\dagger}N)=0. Hence, by the nonsingularity of \triangle T and (c) of Lemma 5.1, we can conclude that AⓌ,†=PA∗ if and only if A∈COPn.
From Lemma 2.3, we know that both AAⓌ,† and AⓌ,†A are oblique projectors. The next theorem will further discuss other characterizations of AAⓌ,† and AⓌ,†A.
Theorem 5.3. Let A∈Cn×nk be given by (2.1). The following statements hold:
(a) AAⓌ,†=PA⇔A∈CCMn; (b) AAⓌ,†=PA∗⇔A∈CEPn;
(c) AⓌ,†A=PA⇔A∈CEPn; (d) AⓌ,†A=PA∗⇔A∈CEPn.
Proof. It follows from (2.1) and (2.9) that
AAⓌ,†=U[ItT−1SNN†00]U∗, | (5.2) |
AⓌ,†A=U[ItT−1S+T−2SN00]U∗. | (5.3) |
(a) By (4.2) and (5.2), the result can be directly verified.
(b) By (5.1) and (5.2), we can show that AAⓌ,†=PA∗ if and only if (In−t−N†N)S∗△T=0 and N†N+(In−t−N†N)S∗△S(In−t−N†N)=0, which is further equivalent with S=0 and N=0, i.e., A∈CEPn.
(c) It follows from (4.2) and (5.3) that AⓌ,†A=PA is equivalent with A∈CEPn.
(d) From (5.1) and (5.3), it is similar to the proof of (b).
Recall from [6] that the core inverse is necessarily EP. The next theorem shows that this is not the case for the WC inverse.
Theorem 5.4. Let A∈Cn×nk be given by (2.1) and t∈Z+. The following statements are equivalent:

(a) AⓌ,†∈CEPn; (b) SN=0;

(c) AⓌ,†A=A⊕A; (d) AⓌ,†=A⊕;

(e) AⓌ,†At=AtAⓌ,

where A⊕ denotes the core-EP inverse of A.
Proof. (a)⇔(b). Note that AⓌ,†∈CEPn is equivalent to R(AⓌ,†)=R((AⓌ,†)∗). Using (2.9), we have that AⓌ,†∈CEPn if and only if SN=0.
(c)⇔(b). By (2.5) and (5.3), it can be directly verified that AⓌ,†A=A⊕A if and only if SN=0.
(d)⇔(b). By (2.5) and (2.9), it follows that

AⓌ,†=A⊕ ⟺ U\begin{bmatrix} T^{−1} & T^{−2}SNN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*} = U\begin{bmatrix} T^{−1} & 0 \\ 0 & 0 \end{bmatrix}U^{*} ⟺ T^{−2}SNN^{\dagger}=0 ⟺ SN=0.
(e)⇔(b). From (2.8) and (2.9), it follows that
AⓌ,†At=AtAⓌ ⟺ U\begin{bmatrix} T^{t−1} & T^{t−2}S+T^{−2}T_{t}N \\ 0 & 0 \end{bmatrix}U^{*} = U\begin{bmatrix} T^{t−1} & T^{t−2}S \\ 0 & 0 \end{bmatrix}U^{*} ⟺ T^{−2}T_{t}N=0 ⟺ SN=0,

where T_{t}=\sum_{j=0}^{t−1}T^{j}SN^{t−1−j}.
In [26], the authors introduced the notion of a weak group matrix: A∈CWGn, which is equivalent to SN=0. Therefore, we have the following remark:
Remark 5.5. It is worth noting that conditions (a), (c)–(e) in Theorem 5.4 are equivalent with A∈CWGn.
The next theorems provide some equivalent conditions for AⓌ,†∈CPn and AⓌ,†∈COPn.
Theorem 5.6. Let A∈Cn×nk be given by (2.1). The following statements are equivalent:
(a) AⓌ,†∈CPn; (b) T=It;
(c) AAⓌ,†=AⓌ,†; (d) AⓌ,†Ak=Ak;
(e) Ak(AⓌ,†)k=AⓌ,†; (f) A(AⓌ,†)k=(AⓌ,†)k.
Proof. (a)⇔(b). From (2.9), it is not hard to prove that AⓌ,†∈CPn is equivalent with T=It.
(c)⇔(b). From (2.9) and (5.3), it follows that AAⓌ,†=AⓌ,† if and only if T=It.
(d)⇔(b). By (2.2) and (2.9), it is easy to verify that AⓌ,†Ak=Ak if and only if T=It.
The proofs (e)⇔(b) and (f)⇔(b) are similar to the proof (d)⇔(b).
Theorem 5.7. Let A∈Cn×nk be given by (2.1). The following statements are equivalent:
(a) AⓌ,†∈COPn; (b) T=It and SN=0;
(c) AAⓌ,†=(AⓌ,†)∗; (d) AⓌ,†A=AⓌ;
(e) (AⓌ,†)kAk=AⓌ; (f) (AⓌ,†)kA=(AⓌ)k.
Proof. (a)⇔(b). From (2.9) and Theorem 5.6, we can show that AⓌ,†∈COPn is equivalent with T=It and SN=0.
(c)⇔(b). By (2.9) and (5.3), it follows from AAⓌ,†=(AⓌ,†)∗ that
\begin{bmatrix} I_{t} & T^{−1}SNN^{\dagger} \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} (T^{−1})^{*} & 0 \\ (T^{−2}SNN^{\dagger})^{*} & 0 \end{bmatrix}.
Hence, we get that AAⓌ,†=(AⓌ,†)∗ is equivalent with T=It and SN=0.
The proofs of (d)⇔(b), (e)⇔(b) and (f)⇔(b) are similar to the proof of (c)⇔(b).
Corollary 5.8. Let A∈Cn×nk. Then A∈COPn if and only if AⓌ,†∈CPn∩CEPn.
Proof. It is a direct consequence from Theorem 5.4 and Theorem 5.6.
Working with Theorem 5.6 and Theorem 5.7, we have the following corollary.
Corollary 5.9. Let A∈Cn×nk and let l∈N with l≥k. The following statements hold:

(a) A∈CPn ⇔ AⓌ,†∈CPn and Al=A;

(b) A∈COPn ⇔ AⓌ,†∈CPn and Al=A∗.
Proof. (a) The result can be easily derived from Lemma 5.1 and Theorem 5.6.
(b) From Lemma 5.1 and Theorem 5.7, we can show that (b) holds.
Ferreyra et al. [1] introduced the weak core matrix. The set of all n×n weak core matrices is denoted by CWCn, that is:
CWCn={A∣A∈Cn×n,AⓌ,†=AD,†}. |
In this section, we discuss some equivalent conditions satisfied by a matrix A such that A∈CWCn using the core-EP decomposition. For convenience, we introduce a necessary lemma.
Lemma 6.1. [1] Let A∈Cn×nk be given by (2.1). Then the following statements are equivalent:
(a) A∈CWCn;
(b) SN2=0;
(c) AⓌ=AD.
Theorem 6.2. Let A∈Cn×nk be given by (2.1) and t∈Z+. The following statements are equivalent:
(a) A∈CWCn;
(b) AⓌA=ADA;
(c) AtAⓌA=AtADA;
(d) AtAⓌ,†=AtAD,†;
(e) AkAⓌ,†=AkA†;
(f) AkAⓌA=Ak.
Proof. (a)⇒(b). It is a direct consequence from condition (c) of Lemma 6.1.
(b)⇒(c). Evident.
(c)⇒(a). By condition, it follows from (2.4) and (2.8) that
\begin{bmatrix} T^{t} & T^{t−1}S+T^{t−2}SN \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} T^{t} & T^{t−1}S+T^{t−k−1}\tilde{T}N \\ 0 & 0 \end{bmatrix},
which implies Tt−k−1(Tk−2SN2+⋯+TSNk−1)=0. We now obtain that SN2=0 since T is invertible. By Lemma 6.1, we obtain that A∈CWCn.
(a)⇔(d). It follows from (2.6), (2.9) and Lemma 6.1 that

AtAⓌ,†=AtAD,† ⟺ U\begin{bmatrix} T^{t−1} & T^{t−2}SNN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*} = U\begin{bmatrix} T^{t−1} & T^{t−k−1}\tilde{T}NN^{\dagger} \\ 0 & 0 \end{bmatrix}U^{*} ⟺ T^{t−2}SNN^{\dagger}=T^{t−k−1}\tilde{T}NN^{\dagger} ⟺ SN²=0 ⟺ A∈CWCn.
(a)⇒(e). From the definition of the weak core matrix, we have that AkAⓌ,†=AkAD,†=AkA†.
(e)⇒(f). Evident.
(f)⇒(a). If AkAⓌA=Ak, then by (2.2) and (2.8) we can conclude that SN2=0. Hence item (a) holds.
Corollary 6.3. Let A∈Cn×nk be given by (2.1). Then A∈CWCn if and only if Ak=Ck, where C=AAⓌA.
Proof. Since AkAⓌA=Ck, the result is a direct consequence of item (f) of Theorem 6.2.
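Since N² = 0 whenever Ind(A) = 2, the condition SN² = 0 of Lemma 6.1 holds automatically for index-2 matrices, so the matrix of Example 4.5 lies in CWCn. Corollary 6.3 can then be checked numerically for it (a sketch, with AⓌ taken from its block form (2.8) for this particular matrix):

```python
import numpy as np

A = np.array([[2., 2., 1., 0.],
              [3., 4., 2., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])   # Ind(A) = 2, hence SN^2 = 0 and A is weak core
k = 2

# Weak group inverse from the block form (2.8) for this particular matrix.
AW = np.array([[ 2. , -1., -0.5, 0.],
               [-1.5,  1.,  0.5, 0.],
               [ 0. ,  0.,  0. , 0.],
               [ 0. ,  0.,  0. , 0.]])

C = A @ AW @ A
assert np.allclose(np.linalg.matrix_power(A, k),
                   np.linalg.matrix_power(C, k))   # Corollary 6.3: A^k = C^k
```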
Theorem 6.4. Let A∈Cn×nk be given by (2.1) and let t∈Z+. The following statements are equivalent:
(a) A∈CWCn;
(b) A(AⓌ)tA=(AⓌ)tA2;
(c) (AⓌ)tA=(AⓌ)t+1A2;
(d) A(AⓌ)tA commutes with (AⓌ)tA2;
(e) (AⓌ)tA commutes with (AⓌ)t+1A2.
Proof. By (2.1) and (2.8), we get that
A(AⓌ)tA = U\begin{bmatrix} T^{−t+2} & T^{−t+1}S+T^{−t}SN \\ 0 & 0 \end{bmatrix}U^{*},  (6.1)

(AⓌ)tA² = U\begin{bmatrix} T^{−t+2} & T^{−t+1}S+T^{−t}SN+T^{−t−1}SN^{2} \\ 0 & 0 \end{bmatrix}U^{*}.  (6.2)
(a)⇔(b). By (6.1), (6.2) and Lemma 6.1, we get that A(AⓌ)tA=(AⓌ)tA2 if and only if A∈CWCn.
(a)⇔(c). Similar to the part (a)⇔(b).
(a)⇔(d). It follows from (6.1), (6.2) and Lemma 6.1 that
A(AⓌ)tA(AⓌ)tA² − (AⓌ)tA²A(AⓌ)tA = U\begin{bmatrix} 0 & T^{−2t+1}SN^{2} \\ 0 & 0 \end{bmatrix}U^{*},
which implies that A(AⓌ)tA commutes with (AⓌ)tA2 if and only if A∈CWCn.
(a)⇔(e). It is analogous to that of the part (a)⇔(d).
Corollary 6.5. Let A∈Cn×nk and t∈Z+. The following statements are equivalent:
(a) A∈CWCn;
Proof. Since , Corollary 6.5 can be directly verified.
In this paper, new characterizations and properties of the WC inverse were derived by using range spaces, null spaces, and matrix equations. Several expressions of the WC inverse were also given. Finally, we presented various characterizations of the weak core matrix.
In view of the current research background, further characterizations and applications of the WC inverse are worthy of study, as follows:
1) Characterizing the WC inverse by maximal classes of matrices, full rank decomposition, integral expressions and so on;
2) New iterative algorithms and splitting methods for computing the WC inverse;
3) Using the WC inverse to solve appropriately constrained systems of linear equations;
4) Investigating the WC inverse of tensors.
This work was supported by the Natural Science Foundation of China under Grant 11961076. The authors are thankful to the two anonymous referees for their careful reading, detailed corrections, and pertinent suggestions on the first version of the paper, which distinctly enhanced the presentation of the results.
All authors read and approved the final manuscript. The authors declare no conflict of interest.
[1] | G. Aubert, P. Kornprobst, Mathematical problems in image processing–-partial differential equations and the calculus of variations, 2 Eds., Applied Mathematical Sciences, 2006. http://dx.doi.org/10.1007/978-0-387-44588-5 |
[2] |
D. L. Donoho, I. M. Johnstone, Adapting to unknown smoothness via wavelet shrinkage, J. Am. Stat. Assoc., 90 (1995), 1200–1224. http://dx.doi.org/10.1080/01621459.1995.10476626 doi: 10.1080/01621459.1995.10476626
![]() |
[3] |
S. Osher, M. Burger, D. Goldfarb, J. Xu, W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul., 4 (2005), 460–489. http://dx.doi.org/10.1137/040605412 doi: 10.1137/040605412
![]() |
[4] |
X. Yu, D. Zhao, A weberized total variance regularization-based image multiplicative noise model, Image Anal. Stereol., 42 (2023), 65–76. http://dx.doi.org/10.5566/ias.2837 doi: 10.5566/ias.2837
![]() |
[5] |
P. Kornprobst, R. Deriche, G. Aubert, Image sequence analysis via partial differential equations, J. Math. Imaging Vis., 11 (1999), 5–26. http://dx.doi.org/10.1023/A:1008318126505 doi: 10.1023/A:1008318126505
![]() |
[6] |
S. M. A. Sharif, R. A. Naqvi, Z. Mehmood, J. Hussain, A. Ali, S. W. Lee, Meddeblur: medical image deblurring with residual dense spatial-asymmetric attention, Mathematics, 11 (2023), 115. http://dx.doi.org/10.3390/math11010115 doi: 10.3390/math11010115
![]() |
[7] |
R. A. Naqvi, A. Haider, H. S. Kim, D. Jeong, S. W. Lee, Transformative noise reduction: leveraging a transformer-based deep network for medical image denoising, Mathematics, 12 (2024), 2313. http://dx.doi.org/10.3390/math12152313 doi: 10.3390/math12152313
![]() |
[8] | S. Umirzakova, S. Mardieva, S. Muksimova, S. Ahmad, T. Whangbo, Enhancing the super-resolution of medical images: introducing the deep residual feature distillation channel attention network for optimized performance and efficiency, Bioengineering, 10 (2023), 1332. http://dx.doi.org/10.3390/bioengineering10111332 |
[9] | M. Shakhnoza, S. Umirzakova, M. Sevara, Y. I. Cho, Enhancing medical image denoising with innovative teacher-student model-based approaches for precision diagnostics, Sensors, 23 (2023), 9502. http://dx.doi.org/10.3390/s23239502 |
[10] | L. I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D, 60 (1992), 259–268. http://dx.doi.org/10.1016/0167-2789(92)90242-F |
[11] | A. Boukdir, M. Nachaoui, A. Laghrib, Hybrid variable exponent model for image denoising: a nonstandard high-order PDE approach with local and nonlocal coupling, J. Math. Anal. Appl., 536 (2024), 128245. http://dx.doi.org/10.1016/j.jmaa.2024.128245 |
[12] | W. Lian, X. Liu, Non-convex fractional-order TV model for impulse noise removal, J. Comput. Appl. Math., 417 (2023), 114615. http://dx.doi.org/10.1016/j.cam.2022.114615 |
[13] | J. Shen, On the foundations of vision modeling: I. Weber's law and Weberized TV restoration, Phys. D, 175 (2003), 241–251. http://dx.doi.org/10.1016/S0167-2789(02)00734-0 |
[14] | C. Liu, X. Qian, C. Li, A texture image denoising model based on image frequency and energy minimization, Springer Berlin Heidelberg, 2012, 939–949. http://dx.doi.org/10.1007/978-3-642-34531-9_101 |
[15] | C. Li, D. Zhao, A non-convex fractional-order differential equation for medical image restoration, Symmetry, 16 (2024), 258. http://dx.doi.org/10.3390/sym16030258 |
[16] | J. S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell., PAMI-2 (1980), 165–168. http://dx.doi.org/10.1109/TPAMI.1980.4766994 |
[17] | A. Achim, A. Bezerianos, P. Tsakalides, Novel Bayesian multiscale method for speckle removal in medical ultrasound images, IEEE Trans. Med. Imaging, 20 (2001), 772–783. http://dx.doi.org/10.1109/42.938245 |
[18] | L. Rudin, P. L. Lions, S. Osher, Multiplicative denoising and deblurring: theory and algorithms, In: Geometric level set methods in imaging, vision, and graphics, New York: Springer, 2003, 103–119. https://dx.doi.org/10.1007/0-387-21810-6_6 |
[19] | G. Aubert, J. F. Aujol, A variational approach to removing multiplicative noise, SIAM J. Appl. Math., 68 (2008), 925–946. http://dx.doi.org/10.1137/060671814 |
[20] | J. Shi, S. Osher, A nonlinear inverse scale space method for a convex multiplicative noise model, SIAM J. Imaging Sci., 1 (2008), 294–321. http://dx.doi.org/10.1137/070689954 |
[21] | G. Steidl, T. Teuber, Removing multiplicative noise by Douglas-Rachford splitting methods, J. Math. Imaging Vis., 36 (2010), 168–184. http://dx.doi.org/10.1007/s10851-009-0179-5 |
[22] | L. Xiao, L. L. Huang, Z. H. Wei, A weberized total variation regularization-based image multiplicative noise removal algorithm, EURASIP J. Adv. Signal Process., 2010 (2010), 1–15. http://dx.doi.org/10.1155/2010/490384 |
[23] | L. Huang, L. Xiao, Z. Wei, A nonlinear inverse scale space method for multiplicative noise removal based on weberized total variation, 2009 Fifth International Conference on Image and Graphics, Xi'an, China, 2009, 119–123. http://dx.doi.org/10.1109/ICIG.2009.19 |
[24] | X. Liu, T. Sun, Hybrid non-convex regularizers model for removing multiplicative noise, Comput. Math. Appl., 126 (2022), 182–195. http://dx.doi.org/10.1016/j.camwa.2022.09.012 |
[25] | C. Li, C. He, Fractional-order diffusion coupled with integer-order diffusion for multiplicative noise removal, Comput. Math. Appl., 136 (2023), 34–43. http://dx.doi.org/10.1016/j.camwa.2023.01.036 |
[26] | K. Hirakawa, T. W. Parks, Image denoising using total least squares, IEEE Trans. Image Process., 15 (2006), 2730–2742. http://dx.doi.org/10.1109/TIP.2006.877352 |
[27] | N. Chumchob, K. Chen, C. Brito-Loeza, A new variational model for removal of combined additive and multiplicative noise and a fast algorithm for its numerical approximation, Int. J. Comput. Math., 90 (2013), 140–161. http://dx.doi.org/10.1080/00207160.2012.709625 |
[28] | A. Ullah, W. Chen, M. A. Khan, H. G. Sun, An efficient variational method for restoring images with combined additive and multiplicative noise, Int. J. Appl. Comput. Math., 3 (2017), 1999–2019. http://dx.doi.org/10.1007/s40819-016-0219-y |
[29] | Y. Chen, W. Feng, R. Ranftl, H. Qiao, T. Pock, A higher-order MRF based variational model for multiplicative noise reduction, IEEE Signal Process Lett., 21 (2014), 1370–1374. http://dx.doi.org/10.1109/LSP.2014.2337274 |
[30] | P. Ochs, Y. Chen, T. Brox, T. Pock, iPiano: inertial proximal algorithm for nonconvex optimization, SIAM J. Imaging Sci., 7 (2014), 1388–1419. http://dx.doi.org/10.1137/130942954 |
[31] | O. Scherzer, C. Groetsch, Inverse scale space theory for inverse problems, In: Scale-space and morphology in computer vision, Springer, Berlin, Heidelberg, 2106 (2001), 317–325. http://dx.doi.org/10.1007/3-540-47778-0_29 |
[32] | C. W. Groetsch, O. Scherzer, Nonstationary iterated Tikhonov-Morozov method and third order differential equations for the evaluation of unbounded operators, Math. Methods Appl. Sci., 23 (2000), 1287–1300. |
[33] | M. Burger, S. Osher, J. Xu, G. Gilboa, Nonlinear inverse scale space methods for image restoration, In: Variational, geometric, and level set methods in computer vision, Springer, Berlin, Heidelberg, 3752 (2005), 25–36. http://dx.doi.org/10.1007/11567646_3 |
[34] | M. Burger, G. Gilboa, S. Osher, J. Xu, Nonlinear inverse scale space methods, Commun. Math. Sci., 4 (2006), 179–212. http://dx.doi.org/10.4310/CMS.2006.v4.n1.a7 |
[35] | I. Csiszár, Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems, Ann. Statist., 19 (1991), 2032–2066. http://dx.doi.org/10.1214/AOS/1176348385 |