
In recent years, the compressive sensing (CS) problem has been widely applied in the fields of signal processing and statistical inference. The alternating direction method of multipliers (ADMM) is applicable to equivalent forms of the basis pursuit denoising (BPDN) problem in CS. However, its solving speed and accuracy deteriorate as the dimension increases greatly. In this paper, a proximal ADMM with a new iterative format, which retains fast solving speed and high accuracy as the dimension grows, is proposed to solve the BPDN problem. Global convergence of the new proximal ADMM is established in detail, and an R-linear convergence rate is exhibited under suitable conditions. Moreover, the new algorithm is applied to solve different types of BPDN problems. Compared with state-of-the-art algorithms for the BPDN problem, the proposed algorithm is more accurate and efficient.
Citation: Bing Xue, Jiakang Du, Hongchun Sun, Yiju Wang. A linearly convergent proximal ADMM with new iterative format for BPDN in compressed sensing problem[J]. AIMS Mathematics, 2022, 7(6): 10513-10533. doi: 10.3934/math.2022586
The compressive sensing (CS) problem is to recover a sparse signal ˉx from an underdetermined linear system Aˉx=b, where ˉx∈Rn, A∈Rm×n (m≪n) is the sensing matrix, and b∈Rm is the observed signal. A fundamental decoding model of the CS problem is the following basis pursuit denoising (BPDN) problem:
\min_{x\in\mathbb{R}^n}\ \frac{1}{2}\|Ax-b\|_2^2+\mu\|x\|_1, | (1.1) |
where μ>0 is a regularization parameter, and ‖⋅‖1 and ‖⋅‖2 denote the ℓ1-norm and the Euclidean 2-norm, respectively.
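For concreteness, the objective of (1.1) can be evaluated in MATLAB by a single line (a minimal sketch; the sensing matrix A, the observation b, the parameter mu and a candidate x are assumed to be available in the workspace):
f = 0.5*norm(A*x - b)^2 + mu*norm(x, 1);   % BPDN objective value of (1.1) at x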
A large number of numerical algorithms for the BPDN problem have been developed in recent years. In fact, the BPDN problem can be equivalently converted into a separable convex program by introducing an auxiliary variable. Thus, numerical methods for separable convex programming are applicable to the BPDN problem, such as the alternating direction method of multipliers and its linearized versions [1,2,3,4], the Peaceman-Rachford splitting method (PRSM) or Douglas-Rachford splitting method (DRSM) of multipliers [5,6,7,8], the symmetric alternating direction method of multipliers [9], etc. Yang and Zhang [1] investigated the use of alternating direction algorithms for several ℓ1-norm minimization problems arising from sparse solution recovery in CS, including the basis pursuit problem, the basis pursuit denoising problem, and so on. Yuan [2] presented a descent-like method that obtains a descent direction and an appropriate step size and improves the proximal alternating direction method. Yu et al. [5] applied the primal DRSM to an equivalent transformation of the BPDN problem. He and Han [6] further studied the multi-block separable convex minimization problem with linear constraints via the primal application of DRSM, and presented the exact and inexact versions of the new method in a unified framework. Compared with the DRSM, the PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. He, Liu et al. [7] explained the reason for this difference and developed a modified PRSM for separable convex programming, which includes the BPDN problem as a special case. Sun and Liu [8] developed a generalized PRSM for the BPDN problem, in which both subproblems are approximated by a linearization technique. He et al. [9] obtained the convergence of the symmetric version of ADMM with step sizes that can be enlarged by Fortin and Glowinski's constant. On the other hand, the BPDN problem can be equivalently transformed into an equation or a variational inequality problem by splitting techniques [10,11,12,13,14,15], which can be solved by standard methods such as conjugate gradient methods, proximal point algorithms and projection-type algorithms. Xiao and Zhu [10] transformed the BPDN problem into a convex constrained monotone equation and presented a conjugate gradient method for this equivalent form. Sun and Tian [11] proposed a class of derivative-free conjugate gradient (CG) projection methods for nonsmooth equations with convex constraints, including the BPDN problem. Sun et al. [12] reformulated the BPDN problem as a variational inequality problem by splitting the decision variable into two nonnegative auxiliary variables, and proposed a novel inverse matrix-free proximal point algorithm for the BPDN problem. Based on the same transformation of the BPDN problem, Feng and Wang [13] also proposed a projection-type algorithm without any backtracking line search. Although there are many ways to solve the problem, the solving speed and accuracy still need to be improved; in particular, both deteriorate as the dimension increases greatly. In this paper, we establish a proximal ADMM with a new iterative format whose subproblems have closed-form solutions. The motivation behind this is better numerical performance as the dimension increases.
Furthermore, a linear convergence rate result for the new algorithm is established, which is another important contribution of this work.
The rest of this paper is organized as follows. In Section 2, some equivalent reformulations of (1.1) and related theory, which form the basis of our analysis, are given. In Section 3, based on a special iterative format, we present a new proximal ADMM in which the corresponding subproblems have closed-form solutions; the global convergence of the new method is discussed in detail, and an R-linear convergence rate theorem is established under a suitable condition. In Section 4, some numerical experiments on sparse signal recovery are reported, where we compare the CPU time and the relative error among the Peaceman-Rachford splitting method of multipliers [8], the conjugate gradient method [11], the proximal point algorithm [12], the projection-type algorithm [13] and our algorithm, and show that our algorithm is more accurate and efficient than the others. Finally, some concluding remarks are drawn in the last section.
We end this section with some notations used in this paper. Vectors considered in this paper are all taken in Euclidean space equipped with the standard inner product. The notation Rm×n stands for the set of all m×n real matrices. For x,y∈Rn, we use (x;y) to denote the column vector (x⊤,y⊤)⊤. If G∈Rn×n is a symmetric positive definite matrix, we denote by ‖x‖G=√x⊤Gx the G-norm of the vector x.
In this section, we first establish some equivalent reformulations of the BPDN problem via related optimality theory.
It is obvious that the BPDN problem can be equivalently reformulated as the following optimization problem [8]
\begin{array}{ll}\min & \frac{1}{2}\|Ax_1-b\|_2^2+\mu\|x_2\|_1\\ \mathrm{s.t.} & x_1-x_2=0,\ x_1\in\mathbb{R}^n,\ x_2\in\mathbb{R}^n.\end{array} | (2.1) |
Then the augmented Lagrangian function of the convex programming (2.1) can be written as
L_\beta(x_1,x_2,\lambda):=\theta_1(x_1)+\theta_2(x_2)-\langle\lambda,x_1-x_2\rangle+\frac{\beta}{2}\|x_1-x_2\|^2, | (2.2) |
where λ is the Lagrangian multiplier for the linear constraints of (2.1) and λ∈Rn,
\theta_1(x_1)=\frac{1}{2}\|Ax_1-b\|_2^2,\qquad \theta_2(x_2)=\mu\|x_2\|_1.
By invoking the first-order optimality condition for convex programming, we can equivalently reformulate problem (2.1) as the following variational inequality problem: find vectors x∗=(x∗1;x∗2)∈Rn×Rn and λ∗∈Rn such that
\begin{cases}\theta_1(x_1)-\theta_1(x_1^*)-(x_1-x_1^*)^\top\lambda^*\ge 0, & \forall x_1\in\mathbb{R}^n,\\ \theta_2(x_2)-\theta_2(x_2^*)+(x_2-x_2^*)^\top\lambda^*\ge 0, & \forall x_2\in\mathbb{R}^n,\\ x_1^*-x_2^*=0.\end{cases} | (2.3) |
Obviously, the system (2.3) is equivalent to the following problem: Find a vector w∗∈W such that
θ(x)−θ(x∗)+(w−w∗)⊤F(w∗)≥0,∀w∈W, | (2.4) |
where w=(x1;x2;λ)∈W=Rn×Rn×Rn, θ(x)=θ1(x1)+θ2(x2), and
F(w):=\begin{pmatrix}-\lambda\\ \lambda\\ x_1-x_2\end{pmatrix}=\begin{pmatrix}0&0&-I_n\\ 0&0&I_n\\ I_n&-I_n&0\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ \lambda\end{pmatrix}. | (2.5) |
We denote the solution set of (2.4) by W∗.
It is easy to verify that the mapping F(⋅) is not only monotone but also satisfies the following nice property
(w′−w)⊤(F(w′)−F(w))=0,∀w′,w∈W. |
To proceed, we present the following definition, which will be needed in the sequel.
Definition 2.1. For a vector sequence μk=(μk1,μk2,⋯,μkn)⊤∈Rn (k=1,2,⋯), we define two new functions ψ(μk) and δ(μk) as follows.
(i) The function ψ(μk)=(ψ1(μk1),ψ1(μk2),⋯,ψ1(μkn))⊤, and
\psi_1(\mu_i^k)=\begin{cases}0, & \text{if } |\mu_i^k|\le \frac{Cn}{2^k},\\[1mm] \mu_i^k, & \text{if } |\mu_i^k|>\frac{Cn}{2^k},\end{cases}\qquad i=1,2,\cdots,n, | (2.6) |
where k is a positive integer and C>0 is a constant.
(ii) The function δ(μk)=μk−ψ(μk).
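In MATLAB, the two functions of Definition 2.1 admit a one-line componentwise implementation (a sketch; the constant C is user-chosen, mu_k is a column vector, and using length(mu_k) for n is an assumption when the functions are applied to stacked vectors):
psi_fun   = @(mu_k, k, C) mu_k .* (abs(mu_k) > C*length(mu_k)/2^k);   % keep the large components, zero the rest
delta_fun = @(mu_k, k, C) mu_k - psi_fun(mu_k, k, C);                 % the discarded small part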
In this section, we present a new type proximal ADMM for solving (2.1) by a special iterative format, and the global convergence of new method is also established in detail.
Algorithm 3.1.
Step 0. Select constants β,γ,ε>0, two positive semi-definite matrices Ri∈Rn×n(i=1,2). Choose an arbitrarily initial point w0=(x01;x02;λ0)∈Rn×Rn×Rn. Take
\eta=\begin{cases}\gamma, & \text{if } 0<\gamma\le 1,\\[1mm] \frac{1}{\gamma}, & \text{if } \gamma>1.\end{cases} | (3.1) |
Set k=0.
Step 1. By current iterate wk, compute the new iterate ˆwk=(ˆxk1;ˆxk2;ˆλk) via
\begin{cases}\hat{x}_1^k\in\arg\min_{x_1\in\mathbb{R}^n}\ L_\beta(x_1,x_2^k,\lambda^k)+\frac{1}{2}\|x_1-x_1^k\|_{R_1}^2,\\[1mm] \hat{x}_2^k\in\arg\min_{x_2\in\mathbb{R}^n}\ L_\beta(\hat{x}_1^k,x_2,\lambda^k)+\frac{1}{2}\|x_2-x_2^k\|_{R_2}^2,\\[1mm] \hat{\lambda}^k=\lambda^k-\gamma\beta(\hat{x}_1^k-\hat{x}_2^k). \end{cases} | (3.2) |
Step 2. If ‖wk−ˆwk‖≤ε, then stop; otherwise, go to Step 3.
Step 3. Set wk+1=ρˆwk+(1−ρ)ψ(wk), where ρ∈(0,η) and η is a constant defined in (3.1). Go to Step 1.
In the following, we show that it is reasonable to use ‖wk−ˆwk‖≤ε to terminate Algorithm 3.1.
Lemma 3.1. If wk=ˆwk, then the vector wk is a solution of (2.4).
Proof. For x1-subproblem in (3.2), using the first-order optimality condition, for any x1∈Rn, we obtain
0≤θ1(x1)−θ1(ˆxk1)+(x1−ˆxk1)⊤{−λk+β(ˆxk1−xk2)+R1(ˆxk1−xk1)}=θ1(x1)−θ1(ˆxk1)+(x1−ˆxk1)⊤{−ˆλk+β(1−γ)(ˆxk1−ˆxk2)−β(xk2−ˆxk2)+R1(ˆxk1−xk1)}=θ1(x1)−θ1(ˆxk1)+(x1−ˆxk1)⊤(−ˆλk)+(x1−ˆxk1)⊤{1−γγ(λk−ˆλk)−β(xk2−ˆxk2)}+(x1−ˆxk1)⊤R1(ˆxk1−xk1), |
where the first and the second equalities are by λk=ˆλk+γβ(ˆxk1−ˆxk2), i.e.,
θ1(x1)−θ1(ˆxk1)+(x1−ˆxk1)⊤(−ˆλk)≥(x1−ˆxk1)⊤R1(xk1−ˆxk1)+β(x1−ˆxk1)⊤(xk2−ˆxk2)−1−γγ(x1−ˆxk1)⊤(λk−ˆλk). | (3.3) |
For x2-subproblem in (3.2), similar to discussion above, one has
0≤θ2(x2)−θ2(ˆxk2)+(x2−ˆxk2)⊤{λk−β(ˆxk1−ˆxk2)+R2(ˆxk2−xk2)}=θ2(x2)−θ2(ˆxk2)+(x2−ˆxk2)⊤{ˆλk−β(1−γ)(ˆxk1−ˆxk2)+R2(ˆxk2−xk2)}=θ2(x2)−θ2(ˆxk2)+(x2−ˆxk2)⊤ˆλk−(x2−ˆxk2)⊤{1−γγ(λk−ˆλk)}+(x2−ˆxk2)⊤R2(ˆxk2−xk2), |
where the first and second equalities are by λk=ˆλk+γβ(ˆxk1−ˆxk2), i.e.,
θ2(x2)−θ2(ˆxk2)+(x2−ˆxk2)⊤ˆλk≥(x2−ˆxk2)⊤R2(xk2−ˆxk2)+1−γγ(x2−ˆxk2)⊤(λk−ˆλk). | (3.4) |
For λ-subproblem in (3.2), for any λ∈Rn, one has
(λ−ˆλk)⊤(ˆxk1−ˆxk2)=1βγ(λ−ˆλk)⊤(λk−ˆλk). | (3.5) |
By (3.3)–(3.5), we get
θ(x)−θ(ˆxk)+(w−ˆwk)⊤F(ˆwk)≥(w−ˆwk)⊤G(wk−ˆwk), ∀w∈W, | (3.6) |
where
G=\begin{pmatrix}R_1 & \beta I_n & -\frac{1-\gamma}{\gamma}I_n\\ 0 & R_2 & \frac{1-\gamma}{\gamma}I_n\\ 0 & 0 & \frac{1}{\beta\gamma}I_n\end{pmatrix}. | (3.7) |
Combining wk=ˆwk with (3.6), one has
θ(x)−θ(ˆxk)+(w−ˆwk)⊤F(ˆwk)≥0, ∀w∈W. | (3.8) |
Substituting ˆwk and ˆxk in (3.8) with wk and xk, respectively, we obtain
θ(x)−θ(xk)+(w−wk)⊤F(wk)≥0, ∀w∈W, |
which indicates that wk is a solution of (2.4).
Remark 3.1. By Lemma 3.1, if Algorithm 3.1 stops at Step 2, then wk is an approximate solution of (2.4).
In the following, we assume that Algorithm 3.1 generates infinite sequences {wk} and {ˆwk}. For convenience, we define two matrices to simplify our notation in the later analysis.
M=\begin{pmatrix}R_1 & 0 & 0\\ 0 & \beta I_n+R_2 & 0\\ 0 & 0 & \frac{1}{\beta\gamma}I_n\end{pmatrix},\qquad Q=\begin{pmatrix}R_1 & 0 & 0\\ 0 & \beta I_n+R_2 & -\frac{1}{2\gamma}I_n\\ 0 & -\frac{1}{2\gamma}I_n & \frac{1}{\beta\gamma^2}I_n\end{pmatrix}. | (3.9) |
The following lemma gives some interesting properties of the two matrices M,Q just defined. These properties are crucial in the convergence analysis of Algorithm 3.1.
Lemma 3.2. If R1 and R2 are two positive semi-definite matrices, then we have
(i) Both matrices M and Q are positive semi-definite;
(ii) The matrix H1:=2Q−γM is positive semi-definite if 0<γ≤1;
(iii) The matrix H2:=2γQ−M is positive semi-definite if γ>1.
Proof. (i) For any w=(x1;x2;λ), one has
w⊤Mw=‖x1‖2R1+β‖x2‖2+‖x2‖2R2+1βγ‖λ‖2≥0. |
So the matrix M is positive semi-definite.
The matrix Q can be written as
Q=\begin{pmatrix}R_1&0&0\\ 0&R_2&0\\ 0&0&0\end{pmatrix}+\begin{pmatrix}0&0&0\\ 0&\beta I_n&-\frac{1}{2\gamma}I_n\\ 0&-\frac{1}{2\gamma}I_n&\frac{1}{\beta\gamma^2}I_n\end{pmatrix}:=Q_1+Q_2.
Obviously, the matrix Q1 is positive semi-definite, and we have
w^\top Q_2w=\beta x_2^\top x_2+\frac{1}{\beta\gamma^2}\lambda^\top\lambda-\frac{1}{\gamma}x_2^\top\lambda\ \ge\ \beta x_2^\top x_2+\frac{1}{\beta\gamma^2}\lambda^\top\lambda-\frac{1}{\gamma}\|x_2\|\|\lambda\|\ \ge\ \beta x_2^\top x_2+\frac{1}{\beta\gamma^2}\lambda^\top\lambda-\left(\frac{\beta}{4}\|x_2\|^2+\frac{1}{\beta\gamma^2}\|\lambda\|^2\right)\ \ge\ \frac{3\beta}{4}\|x_2\|^2\ \ge\ 0,
where the first inequality is obtained by the Cauchy-Schwartz inequality, the second inequality follows from the fact that a2+b2≥2ab,∀a,b∈R+, and the desired result follows.
(ii) For the matrix H1. By a direct computation, it yields that
H_1=2Q-\gamma M=\begin{pmatrix}(2-\gamma)R_1&0&0\\ 0&(2-\gamma)(\beta I_n+R_2)&-\frac{1}{\gamma}I_n\\ 0&-\frac{1}{\gamma}I_n&\frac{2-\gamma^2}{\beta\gamma^2}I_n\end{pmatrix}=(2-\gamma)\begin{pmatrix}R_1&0&0\\ 0&R_2&0\\ 0&0&0\end{pmatrix}+\begin{pmatrix}0&0&0\\ 0&(2-\gamma)\beta I_n&-\frac{1}{\gamma}I_n\\ 0&-\frac{1}{\gamma}I_n&\frac{2-\gamma^2}{\beta\gamma^2}I_n\end{pmatrix}:=(2-\gamma)Q_1+Q_3.
Obviously, by 0<γ≤1, the first part is positive semi-definite, and
w^\top Q_3w=(2-\gamma)\beta x_2^\top x_2+\frac{2-\gamma^2}{\beta\gamma^2}\lambda^\top\lambda-\frac{2}{\gamma}x_2^\top\lambda\ \ge\ (2-\gamma)\beta x_2^\top x_2+\frac{2-\gamma^2}{\beta\gamma^2}\lambda^\top\lambda-\frac{2}{\gamma}\|x_2\|\|\lambda\|\ \ge\ (2-\gamma)\beta x_2^\top x_2+\frac{2-\gamma^2}{\beta\gamma^2}\lambda^\top\lambda-\left(\frac{\beta}{2-\gamma^2}\|x_2\|^2+\frac{2-\gamma^2}{\beta\gamma^2}\|\lambda\|^2\right)\ \ge\ \left((2-\gamma)-\frac{1}{2-\gamma^2}\right)\beta\|x_2\|^2\ \ge\ 0,
and thus the desired result follows.
(iii) For the matrix H2. By a direct computation, it yields that
H_2=2\gamma Q-M=\begin{pmatrix}(2\gamma-1)R_1&0&0\\ 0&(2\gamma-1)(\beta I_n+R_2)&-I_n\\ 0&-I_n&\frac{1}{\beta\gamma}I_n\end{pmatrix}=(2\gamma-1)\begin{pmatrix}R_1&0&0\\ 0&R_2&0\\ 0&0&0\end{pmatrix}+\begin{pmatrix}0&0&0\\ 0&(2\gamma-1)\beta I_n&-I_n\\ 0&-I_n&\frac{1}{\beta\gamma}I_n\end{pmatrix}:=(2\gamma-1)Q_1+Q_4.
Similar to discussion above, using γ>1, we obtain
w^\top Q_4w=(2\gamma-1)\beta x_2^\top x_2+\frac{1}{\beta\gamma}\lambda^\top\lambda-2x_2^\top\lambda\ \ge\ (2\gamma-1)\beta x_2^\top x_2+\frac{1}{\beta\gamma}\lambda^\top\lambda-2\|x_2\|\|\lambda\|\ \ge\ (2\gamma-1)\beta x_2^\top x_2+\frac{1}{\beta\gamma}\lambda^\top\lambda-\left(\beta\gamma\|x_2\|^2+\frac{1}{\beta\gamma}\|\lambda\|^2\right)\ \ge\ (\gamma-1)\beta\|x_2\|^2\ \ge\ 0.
Combining this with the positive semi-definiteness of Q1, the desired result follows.
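The following MATLAB snippet gives a quick numerical sanity check of Lemma 3.2; the particular values of n, β, γ, R1 and R2 below are illustrative assumptions only:
n = 4; beta = 0.2; gamma = 0.8;            % any 0 < gamma <= 1 for part (ii)
R1 = eye(n); R2 = zeros(n); I = eye(n); Z = zeros(n);
M = [R1 Z Z; Z beta*I+R2 Z; Z Z I/(beta*gamma)];
Q = [R1 Z Z; Z beta*I+R2 -I/(2*gamma); Z -I/(2*gamma) I/(beta*gamma^2)];
min(eig(2*Q - gamma*M))                    % nonnegative (up to rounding), as Lemma 3.2 (ii) asserts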
Lemma 3.3. Let {wk} and {ˆwk} be two sequences generated by Algorithm 3.1. Then we have
(wk−w∗)⊤M(wk−ˆwk)≥‖wk−ˆwk‖2Q, ∀w∗∈W∗. | (3.10) |
Proof. From the definitions of M in (3.9) and G in (3.7), one has
G=M+˜M, |
where \tilde{M}=\begin{pmatrix}0&\beta I_n&-\frac{1-\gamma}{\gamma}I_n\\ 0&-\beta I_n&\frac{1-\gamma}{\gamma}I_n\\ 0&0&0\end{pmatrix}. By a direct computation, it yields that
0≤θ(ˆxk)−θ(x∗)+(ˆwk−w∗)⊤F(ˆwk)≤(ˆwk−w∗)⊤G(wk−ˆwk)=(ˆwk−w∗)⊤M(wk−ˆwk)+(ˆwk−w∗)⊤˜M(wk−ˆwk)=(ˆwk−w∗)⊤M(wk−ˆwk)+β[(ˆxk1−x∗1)−(ˆxk2−x∗2)]⊤(xk2−ˆxk2)−1−γγ[(ˆxk1−x∗1)−(ˆxk2−x∗2)]⊤(λk−ˆλk)=(ˆwk−w∗)⊤M(wk−ˆwk)+β(ˆxk1−ˆxk2)⊤(xk2−ˆxk2)−1−γγ(ˆxk1−ˆxk2)⊤(λk−ˆλk)=(ˆwk−w∗)⊤M(wk−ˆwk)+1γ(λk−ˆλk)⊤(xk2−ˆxk2)−1−γγ[1βγ(λk−ˆλk)⊤](λk−ˆλk)=(ˆwk−w∗)⊤M(wk−ˆwk)+1γ(λk−ˆλk)⊤(xk2−ˆxk2)−1−γβγ2‖λk−ˆλk‖2, | (3.11) |
where the first inequality follows from (2.4) with w∗∈W∗ and w=ˆwk∈W (together with the property (w′−w)⊤(F(w′)−F(w))=0), the second inequality follows from (3.6) with x=x∗ and w=w∗ (since w∗∈W∗⊆W), the third equality follows from x∗1=x∗2, and the fourth equality comes from the fact that ˆxk1−ˆxk2=(1/(γβ))(λk−ˆλk). Applying (3.11), we get
(ˆwk−w∗)⊤M(wk−ˆwk)≥−1γ(λk−ˆλk)⊤(xk2−ˆxk2)+1−γβγ2‖λk−ˆλk‖2, |
and one has
(wk−w∗)⊤M(wk−ˆwk)=(wk−ˆwk)⊤M(wk−ˆwk)+(ˆwk−w∗)⊤M(wk−ˆwk)≥(wk−ˆwk)⊤M(wk−ˆwk)−1γ(λk−ˆλk)⊤(xk2−ˆxk2)+1−γβγ2‖λk−ˆλk‖2=‖wk−ˆwk‖2Q, |
where the second equality follows from the definition of matrix Q in (3.9), and the desired result follows.
Lemma 3.4. For any solution w∗=(x∗1;x∗2;λ∗) of (2.4), the sequence {wk} generated by Algorithm 3.1 satisfies
‖ρˆwk+(1−ρ)wk−w∗‖2M≤‖wk−w∗‖2M−ρ‖ˆwk−wk‖2H, | (3.12) |
where the matrix H is defined by
H=(η−ρ)M, | (3.13) |
and η is defined in (3.1).
Proof. From (3.1), if 0<γ≤1, then η=γ. A direct computation yields that
‖ρ(ˆwk−wk)+(wk−w∗)‖2M=‖wk−w∗‖2M+2ρ(wk−w∗)⊤M(ˆwk−wk)+ρ2‖ˆwk−wk‖2M≤‖wk−w∗‖2M−2ρ‖ˆwk−wk‖2Q+ρ2‖ˆwk−wk‖2M=‖wk−w∗‖2M−2ρ‖ˆwk−wk‖2(12H1+γ2M)+ρ2‖ˆwk−wk‖2M=‖wk−w∗‖2M−(ˆwk−wk)⊤(ρH1+(ργ−ρ2)M)(ˆwk−wk)=‖wk−w∗‖2M−(ˆwk−wk)⊤ρ(γ−ρ)M(ˆwk−wk))−(ˆwk−wk)⊤ρH1(ˆwk−wk)≤‖wk−w∗‖2M−ρ‖ˆwk−wk‖2(γ−ρ)M, | (3.14) |
where the first inequality follows from (3.10), the second equality follows from H1=2Q−γM in Lemma 3.2 (ii), and the second inequality follows from the fact that the matrix H1 is positive semi-definite in Lemma 3.2.
If γ>1, then η=1/γ. Similar to the discussion of (3.14), we can also obtain
‖ρ(ˆwk−wk)+(wk−w∗)‖2M≤‖wk−w∗‖2M−2ρ‖ˆwk−wk‖2Q+ρ2‖ˆwk−wk‖2M=‖wk−w∗‖2M−2ρ‖ˆwk−wk‖2(12γH2+12γM)+ρ2‖ˆwk−wk‖2M=‖wk−w∗‖2M−(ˆwk−wk)⊤(ργH2+ρ(1γ−ρ)M(ˆwk−wk)=‖wk−w∗‖2M−ρ‖ˆwk−wk‖2(1γ−ρ)M−ργ(ˆwk−wk)⊤H2(ˆwk−wk)≤‖wk−w∗‖2M−ρ‖ˆwk−wk‖2(1γ−ρ)M, | (3.15) |
where the first equality comes from H2=2γQ−M in Lemma 3.2 (iii), and the second inequality follows from the fact that the matrix H2 is positive semi-definite in Lemma 3.2.
The desired result follows by combining the two cases above.
Remark 3.2. By the definition of η in (3.1), the matrix H in (3.13) is positive semi-definite.
Theorem 3.1. For any solution w∗=(x∗1;x∗2;λ∗) of (2.4), the sequence {wk} generated by Algorithm 3.1 satisfies
‖wk+1−w∗‖M≤‖wk−w∗‖M+(1−ρ)‖δ(wk)‖M. | (3.16) |
Proof. Since the matrix H in (3.13) is positive semi-definite, by (3.12), one has
‖wk+1−w∗‖M=‖ρˆwk+(1−ρ)ψ(wk)−w∗‖M=‖ρ(ˆwk−wk)+(wk−w∗)+(ρ−1)(wk−ψ(wk))‖M≤‖ρ(ˆwk−wk)+(wk−w∗)‖M+(1−ρ)‖δ(wk)‖M≤‖wk−w∗‖M+(1−ρ)‖δ(wk)‖M. | (3.17) |
Thus, the desired result follows.
Theorem 3.2. Assume that the matrix R1 is positive definite. Then the sequence {wk} generated by Algorithm 3.1 converges to some ˉw∈W∗.
Proof. By Definition 2.1, every nonzero component of δ(wk) is bounded in absolute value by Cn/2^k, so there exists a constant c1>0 such that ‖δ(wk)‖M≤c1/2^k. Combining this with (3.16), we obtain
‖wk+1−w∗‖M≤‖wk−w∗‖M+(1−ρ)c112k≤‖wk−1−w∗‖M+(1−ρ)c1(12k+12k−1)≤⋯⋯⋯≤‖w1−w∗‖M+(1−ρ)c1∑km=112m≤‖w1−w∗‖M+(1−ρ)c1∑∞m=112m, | (3.18) |
where w∗∈W∗. Since the matrix R1 is positive definite, it is easy to see that the matrix M is positive definite. Combining this with (3.18), we have that {wk} is bounded.
Now, we break up the discussion into two cases.
Case 1. Suppose there exists a subsequence {wkj} such that ‖wkj−w∗‖M≤(1−ρ)‖δ(wkj)‖M, i.e., ‖wkj−w∗‖M≤(1−ρ)c1/2^{kj}; then {wkj} converges to w∗. Since the series of positive terms ∑∞k=1 1/2^k is convergent, by the Cauchy convergence criterion, for any ϵ>0, there exists a positive integer m such that
\frac{1}{2^k}+\frac{1}{2^{k-1}}+\cdots+\frac{1}{2^{k_j}}\le\frac{\epsilon}{2c_1(1-\rho)} | (3.19) |
as positive integer k,kj≥m(k≥kj). From limj→∞wkj=w∗, for ϵ>0 above, there exists an integer j, such that
‖wkj−w∗‖M<ϵ2. | (3.20) |
Combining (3.19) with (3.20), for sufficiently large positive integer k,kj(k≥kj), similar to discussion (3.18), we obtain
‖wk+1−w∗‖M≤‖wk−w∗‖M+(1−ρ)c112k≤‖wk−1−w∗‖M+(1−ρ)c1(12k+12k−1)≤⋯⋯⋯≤‖wkj−w∗‖M+(1−ρ)c1(12k+12k−1+⋯+12kj)<ϵ2+ϵ2=ϵ, |
which indicates that the sequence {wk} converges globally to the point w∗.
Case 2. Suppose that, for every subsequence {wkj}, ‖wkj−w∗‖M>(1−ρ)‖δ(wkj)‖M; in particular, ‖wk+1−w∗‖M>(1−ρ)‖δ(wk)‖M for all sufficiently large k.
Since {wk} is bounded, there exists constant c2>0 such that ‖wk+1−w∗‖≤c2. By definition of wk+1 in Step 3 of Algorithm 3.1, we have
‖ρ(ˆwk−wk)+(wk−w∗)‖2M=‖wk+1−w∗+(1−ρ)δ(wk)‖2M≥(‖wk+1−w∗‖−(1−ρ)‖δ(wk)‖M)2=‖wk+1−w∗‖2M+(1−ρ)2‖δ(wk)‖2M−2(1−ρ)‖wk+1−w∗‖‖δ(wk)‖≥‖wk+1−w∗‖2M+(1−ρ)2‖δ(wk)‖2M−2(1−ρ)c2‖δ(wk)‖. | (3.21) |
Combining this with (3.12), one has
∑∞k=0‖wk−ˆwk‖2H≤ρ−1∑∞k=0(‖wk−w∗‖2M−‖ρˆwk+(1−ρ)wk−w∗)‖2M)≤2ρ−1(1−ρ)c2∑∞k=0‖δ(wk)‖−ρ−1(1−ρ)2∑∞k=0‖δ(wk)‖2M+ρ−1∑∞k=0(‖wk−w∗‖2M−‖wk+1−w∗‖M)≤2ρ−1(1−ρ)c1c2∑∞k=012k+ρ−1‖w0−w∗‖2M, | (3.22) |
which together with the positive definiteness of H yields
limk→∞‖wk−ˆwk‖=0, | (3.23) |
and one has
limk→∞‖xk1−ˆxk1‖=0, | (3.24) |
limk→∞‖xk2−ˆxk2‖=0, | (3.25) |
limk→∞‖λk−ˆλk‖=0. | (3.26) |
By (3.23), we know that the sequence {ˆwk} is also bounded since {wk} is bounded. Thus, it has at least one cluster point, say w∞:=(x∞1;x∞2;λ∞); suppose that the subsequence {ˆwki} converges to w∞. By (3.26), one has limki→∞‖λki−ˆλki‖=0. Taking limits on both sides of
ˆxki1−ˆxki2=1γβ(λki−ˆλki), |
we have
x∞1−x∞2=0. |
Furthermore, taking limits on both sides of (3.3) and (3.4), and using (3.24)–(3.26), we obtain
θ1(x1)−θ1(x∞1)+(x1−x∞1)⊤(−λ∞)≥0, ∀x1∈Rn, |
and
θ2(x2)−θ2(x∞2)+(x2−x∞2)⊤λ∞≥0, ∀x2∈Rn. |
Therefore, (x∞1,x∞2,λ∞)∈W∗.
Since the series of positive terms ∑∞k=1 1/2^k is convergent, by the Cauchy convergence criterion, for any ϵ>0, there exists a positive integer m such that
\frac{1}{2^k}+\frac{1}{2^{k-1}}+\cdots+\frac{1}{2^{k_l}}\le\frac{\epsilon}{3c_1(1-\rho)} | (3.27) |
as positive integer k,kl≥m(k≥kl). By (3.23) and limj→∞ˆwkj=w∞, for ϵ>0 above, there exists an integer l, such that
‖wkl−ˆwkl‖M<ϵ3,‖ˆwkl−w∞‖M<ϵ3. | (3.28) |
Combining (3.27) with (3.28), for sufficiently large positive integer k,kl(k≥kl), similar to discussion (3.18), we obtain
‖wk+1−w∞‖M≤‖wk−w∞‖M+(1−ρ)c112k≤‖wk−1−w∞‖M+(1−ρ)c1(12k+12k−1)≤⋯⋯⋯≤‖wkl−w∞‖M+(1−ρ)c1(12k+12k−1+⋯+12kl)≤‖wkl−ˆwkl‖M+‖ˆwkl−w∞‖M+(1−ρ)c1(12k+12k−1+⋯+12kl)<ϵ3+ϵ3+ϵ3=ϵ, |
which indicates that the sequence {wk} converges globally to the point w∞. The proof is completed.
At the end of this section, we discuss the convergence rate of Algorithm 3.1. To this end, the following assumption is needed.
Assumption 3.1. For the two sequences {wk} and {ˆwk}, assume that there exists a positive constant σ such that, for each wk, there exists w∗∈W∗ satisfying
‖wk−w∗‖≤σ‖∇Lk(wk)‖, |
where
Lk(w):=θ1(x1)+μ∂(‖ˆxk2‖1)⊤x2−⟨λk,x1−x2⟩+β2‖x1−xk2‖2+β2‖ˆxk1−x2‖2+12‖x1−xk1‖2R1+12‖x2−xk2‖2R2+12λ⊤λ−⟨λ,λk−γβ(ˆxk1−ˆxk2)⟩, | (3.29) |
and ∂(‖ˆxk2‖1) denotes the subdifferential of the function ‖x2‖1 at the point ˆxk2.
Lemma 3.5. Suppose that Assumption 3.1 holds. Then there exists a positive constant ˆμ such that
‖wk−w∗‖≤ˆμ‖wk−ˆwk‖. |
Proof. From (3.2), we have
\begin{cases}\hat{x}_1^k=(A^\top A+\beta I+R_1)^{-1}(A^\top b+\lambda^k+\beta x_2^k+R_1x_1^k),\\[1mm] \hat{x}_2^k=(\beta I+R_2)^{-1}\big(-\mu\,\partial(\|\hat{x}_2^k\|_1)-\lambda^k+\beta\hat{x}_1^k+R_2x_2^k\big),\\[1mm] \hat{\lambda}^k=\lambda^k-\gamma\beta(\hat{x}_1^k-\hat{x}_2^k).\end{cases} | (3.30) |
By (3.29) and (3.30), a direct computation yields that
\nabla L_k(w)=\left(\frac{\partial L_k}{\partial x_1};\frac{\partial L_k}{\partial x_2};\frac{\partial L_k}{\partial\lambda}\right)=\begin{pmatrix}A^\top(Ax_1-b)-\lambda^k+\beta(x_1-x_2^k)+R_1(x_1-x_1^k)\\ \mu\,\partial(\|\hat{x}_2^k\|_1)+\lambda^k-\beta(\hat{x}_1^k-x_2)+R_2(x_2-x_2^k)\\ \lambda-[\lambda^k-\gamma\beta(\hat{x}_1^k-\hat{x}_2^k)]\end{pmatrix}=\begin{pmatrix}(A^\top A+\beta I+R_1)\big[x_1-(A^\top A+\beta I+R_1)^{-1}(A^\top b+\lambda^k+\beta x_2^k+R_1x_1^k)\big]\\ (\beta I+R_2)\big[x_2-(\beta I+R_2)^{-1}(-\mu\,\partial(\|\hat{x}_2^k\|_1)-\lambda^k+\beta\hat{x}_1^k+R_2x_2^k)\big]\\ \lambda-[\lambda^k-\gamma\beta(\hat{x}_1^k-\hat{x}_2^k)]\end{pmatrix}
=\begin{pmatrix}A^\top A+\beta I+R_1&0&0\\ 0&\beta I+R_2&0\\ 0&0&I\end{pmatrix}\begin{pmatrix}x_1-\hat{x}_1^k\\ x_2-\hat{x}_2^k\\ \lambda-\hat{\lambda}^k\end{pmatrix}=\bar{Q}(w-\hat{w}^k),
where \bar{Q}=\begin{pmatrix}A^\top A+\beta I+R_1&0&0\\ 0&\beta I+R_2&0\\ 0&0&I\end{pmatrix}. Combining this with Assumption 3.1, we obtain
‖wk−w∗‖≤σ‖∇Lk(wk)‖=σ‖ˉQ(wk−ˆwk)‖≤σ‖ˉQ‖‖wk−ˆwk‖. |
Let ˆμ=σ‖ˉQ‖. Then the desired result follows.
Theorem 3.3. Suppose that the hypotheses of Theorem 3.2 and Assumption 3.1 hold, and that (3/4)ˆμ2<(η−ρ)ρ<(5/4)ˆμ2. Then the sequence {wk} generated by Algorithm 3.1 converges to a solution of (2.4) R-linearly.
Proof. By Theorem 3.2, without loss of generality we may assume that the sequence {wk} converges to some w∗∈W∗. A direct computation yields that
‖wk+1−w∗‖2M=‖ρˆwk+(1−ρ)ψ(wk)−w∗‖2M=‖ρ(ˆwk−wk)+(wk−w∗)+(ρ−1)(wk−ψ(wk))‖2M≤(‖ρ(ˆwk−wk)+(wk−w∗)‖M+(1−ρ)‖δ(wk)‖M)2≤2‖ρ(ˆwk−wk)+(wk−w∗)‖2M+2(1−ρ)2‖δ(wk)‖2M≤2‖wk−w∗‖2M−2ρ‖ˆwk−wk‖2H+2(1−ρ)2‖δ(wk)‖2M≤2‖wk−w∗‖2M−2ρˆμ2‖wk−w∗‖2H+2(1−ρ)2‖δ(wk)‖2M≤2(1−(η−ρ)ρˆμ2)‖wk−w∗‖2M+2(1−ρ)2‖δ(wk)‖2M. | (3.31) |
where the second inequality follows from the fact that a2+b2≥2ab,∀a,b∈R, the third inequality is obtained by (3.12), the fourth inequality follows by Lemma 3.5, and the fifth inequality is by (3.13).
Since the sequence {wk} converges to w∗, it follows that there exists a positive integer k0, for all k≥k0, we obtain the following conclusions.
If ‖wk−w∗‖M≤2(1−ρ)‖δ(wk)‖M, then by Definition 2.1, the following holds
\|w^k-w^*\|_M\le 2(1-\rho)c_1\frac{1}{2^k}. | (3.32) |
If ‖wk−w∗‖M>2(1−ρ)‖δ(wk)‖M, then combining this with (3.31), we obtain
‖wk+1−w∗‖2M‖wk−w∗‖2M≤[2(1−(η−ρ)ρˆμ2)‖wk−w∗‖2M‖wk−w∗‖2M+2(1−ρ)2‖δ(wk)‖2M‖wk−w∗‖2M]≤2(1−(η−ρ)ρˆμ2)+12. |
Let ˆτ=2(1−(η−ρ)ρˆμ2)+12. Then 0<ˆτ<1 by 34ˆμ2<(η−ρ)ρ<54ˆμ2. Therefore, we have
\|w^{k+1}-w^*\|_M\le\sqrt{\hat{\tau}}\,\|w^k-w^*\|_M\le(\sqrt{\hat{\tau}})^2\|w^{k-1}-w^*\|_M\le\cdots\le(\sqrt{\hat{\tau}})^{k-k_0+1}\|w^{k_0}-w^*\|_M\le c_2(\sqrt{\hat{\tau}})^k,
where c2 is a positive constant, i.e., ‖wk−w∗‖M ≤ c2(√ˆτ)^{−1}(√ˆτ)^k. Combining this with (3.32), one has
\|w^k-w^*\|_M\le c_3\max\left\{\frac{1}{2^k},(\sqrt{\hat{\tau}})^k\right\},
where c3 is a positive constant, and thus the desired result follows.
In this section, we provide some numerical tests on the BPDN problem to show the efficiency of the method proposed in this paper. All codes are written in MATLAB 9.2.0.538062 and run on a Windows 10 PC with an AMD FX-7500 Radeon R7 (10 compute cores, 4C+6G), 2.10 GHz CPU and 8 GB of memory. In the experiments, we set μ=0.001, and the measurement matrix A is generated by the MATLAB script:
[Q,R] = qr(A',0); A = Q';
The original signal ˉx is generated by p=randperm(n);x(p(1:k))=randn(k,1).
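For completeness, a full data-generation script consistent with the fragments above reads as follows (a sketch; the initial Gaussian draw of A, the zero initialization of x and the noise level sigma are assumptions, since they are not spelled out in the text):
n = 2^12; m = 2^10; k = 2^7;             % dimensions used in Subsection 4.1
A = randn(m, n);                          % assumed raw sensing matrix
[Q, R] = qr(A', 0); A = Q';               % orthonormalize the rows, as above
x = zeros(n, 1); p = randperm(n);
x(p(1:k)) = randn(k, 1);                  % k-sparse original signal (x plays the role of x-bar)
sigma = 1e-3;                             % assumed noise level for the Gaussian-noise tests
b = A*x + sigma*randn(m, 1);              % observed signal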
To simplify calculations of (3.2), we set R1=τIn−A⊤A, where τ>0 is a constant. Combining this with the first equality in (3.30), one has
\hat{x}_1^k=\frac{1}{\beta+\tau}\big(\lambda^k+\tau x_1^k+\beta x_2^k-g^k\big), | (4.1) |
where gk=A⊤(Axk1−b).
For the second equality in (3.30), we set R2=0, then
\hat{x}_2^k=\hat{x}_1^k-\frac{1}{\beta}\lambda^k-\frac{\mu}{\beta}\partial(\|\hat{x}_2^k\|_1). | (4.2) |
The subdifferential of the absolute value function |t| is given as follows
\partial(|t|)=\begin{cases}-1, & \text{if } t<0,\\ [-1,1], & \text{if } t=0,\\ 1, & \text{if } t>0.\end{cases}
Combining this with (4.2), for i=1,2,⋯,n, we obtain
(\hat{x}_2^k)_i=\begin{cases}\left(\hat{x}_1^k-\frac{1}{\beta}\lambda^k\right)_i+\frac{\mu}{\beta}, & \text{if }\left(\hat{x}_1^k-\frac{1}{\beta}\lambda^k\right)_i<-\frac{\mu}{\beta},\\[1mm] 0, & \text{if }\left|\left(\hat{x}_1^k-\frac{1}{\beta}\lambda^k\right)_i\right|\le\frac{\mu}{\beta},\\[1mm] \left(\hat{x}_1^k-\frac{1}{\beta}\lambda^k\right)_i-\frac{\mu}{\beta}, & \text{if }\left(\hat{x}_1^k-\frac{1}{\beta}\lambda^k\right)_i>\frac{\mu}{\beta},\end{cases}\ =\ \left(\mathrm{shrink}_{\frac{\mu}{\beta}}\left(\hat{x}_1^k-\frac{1}{\beta}\lambda^k\right)\right)_i,
where shrinkc(∗) is the soft-thresholding operator defined as
\mathrm{shrink}_c(k):=k-\min\{c,|k|\}\,\frac{k}{|k|},\qquad\forall k\in\mathbb{R},
and c>0 is a constant. In addition, when k=0, k/|k| should be taken as 0. Therefore, we have the following formula for calculating ˆxk2:
ˆxk2=shrinkμβ(ˆxk1−1βλk). | (4.3) |
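In MATLAB, the soft-thresholding operator above can be written as a one-line vectorized helper (a minimal sketch):
shrink = @(v, c) sign(v) .* max(abs(v) - c, 0);   % equals v - min(c,|v|).*v./|v|, with the value 0 at v = 0
For example, (4.3) then reads x2hat = shrink(x1hat - lam/beta, mu/beta).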
Applying (4.1) and (4.3), the iteration (3.2) in Algorithm 3.1 can be written as follows:
{ˆxk1=1β+τ(λk+τxk1+βxk2−gk),ˆxk2=shrinkμβ(ˆxk1−1βλk),ˆλk=λk−γβ(ˆxk1−ˆxk2). | (4.4) |
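Putting the pieces together, one run of Algorithm 3.1 with the closed-form updates (4.4) and the correction step of Step 3 may be sketched in MATLAB as follows; the loop length, the choice rho = 0.5 (any value in (0, η) is admissible) and the constant C = 1 in ψ are illustrative assumptions, and n, A, b and the original signal x are taken from the data-generation sketch above:
beta = 0.2; mu = 0.01; gamma = 1.9; tau = 0.5;              % parameter values of Subsection 4.1
eta = min(gamma, 1/gamma); rho = 0.5; C = 1;                % rho in (0, eta) is assumed
shrink = @(v, c) sign(v) .* max(abs(v) - c, 0);
x1 = zeros(n,1); x2 = zeros(n,1); lam = zeros(n,1); fprev = Inf;
for kk = 1:5000
    gk     = A' * (A*x1 - b);                               % g^k in (4.1)
    x1hat  = (lam + tau*x1 + beta*x2 - gk) / (beta + tau);  % update (4.1)
    x2hat  = shrink(x1hat - lam/beta, mu/beta);             % update (4.3)
    lamhat = lam - gamma*beta*(x1hat - x2hat);              % multiplier update
    w      = [x1; x2; lam];
    psiw   = w .* (abs(w) > C*n/2^kk);                      % psi(w^k), threshold Cn/2^k of Definition 2.1
    wnew   = rho*[x1hat; x2hat; lamhat] + (1 - rho)*psiw;   % Step 3 correction
    x1 = wnew(1:n); x2 = wnew(n+1:2*n); lam = wnew(2*n+1:end);
    f = 0.5*norm(A*x1 - b)^2 + mu*norm(x1, 1);              % objective (1.1)
    if abs(f - fprev)/abs(fprev) < 1e-6, break; end         % stopping rule used in the experiments
    fprev = f;
end
RelErr = norm(x1 - x) / norm(x);                            % relative recovery error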
For all methods, the stopping criterion is
\frac{\|f_k-f_{k-1}\|}{\|f_{k-1}\|}<10^{-6},
where fk denotes the objective value of (1.1) at the iterate xk.
In each test, we calculate the relative error
\mathrm{RelErr}=\frac{\|x^{k+1}-\bar{x}\|}{\|\bar{x}\|},
where ˉx denotes the original signal and xk+1 the recovered signal.
In this subsection, we apply Algorithm 2.3 in [11] (DFCGPM), Algorithm 2 in [12] (IMFPPA) and Algorithm 3.1 in this paper to recover a simulated sparse signal whose observation data are corrupted by additive white Gaussian noise. We set n=2^12, m=2^10, k=2^7, and the parameters of the tested algorithms are listed as follows:
Algorithm 3.1: β=0.2, μ=0.01, γ=1.9, τ=0.5;
IMFPPA: ρ=0.01,γ=0.01,τ=0.2;
DFCGPM: C=1,r=0,η=1,ρ=0.4,σ=0.01,γ=1.9.
The original signal, the measurement and the reconstructed signal (marked by red points) obtained by Algorithm 3.1, DFCGPM and IMFPPA are given in Figure 1. Obviously, from the first, third, fourth and last subplots in Figure 1, all elements of the original signal are circled by the red points, which indicates that the three methods can recover the original signal quite well. Furthermore, we record the number of iterations (abbreviated as Iter), the CPU time in seconds (abbreviated as CPU Time) and the relative error (abbreviated as RelErr) of the three methods. Figure 1 indicates that Algorithm 3.1 has higher accuracy than the IMFPPA method, and Algorithm 3.1 is also faster than the DFCGPM and IMFPPA methods. Thus, Algorithm 3.1 is an efficient method for sparse signal recovery.
In this subsection, Algorithm 3.1 proposed in this paper is compared with IMFPPA [12], DFCGPM [11], Algorithm 3.1 in [8] (PPRSM) and Algorithm 3.1 in [13] (LAPM) in terms of the CPU Time and the RelErr, where the parameters of PPRSM and LAPM are listed as follows:
PPRSM: γ=0.2,σ=0.1;
LAPM: β=0.25,τ=0.6.
Each algorithm was run 5 times, and the averages of the CPU Time and the RelErr were recorded. The numerical results are listed in Table 1. From Table 1, it is obvious that the CPU time of Algorithm 3.1 is less than that of the other algorithms for different k-sparse signals, whether the data are noise-free or contaminated by Gaussian noise, which shows that Algorithm 3.1 is faster. In addition, we find that the accuracy of Algorithm 3.1 is also better than that of the other algorithms. Hence, Algorithm 3.1 is a more efficient method for recovering different k-sparse signals.
k-Sparse signal | Methods | CPU Time (no noise) | RelErr (no noise) | CPU Time (Gaussian noise) | RelErr (Gaussian noise)
80 | Algorithm 3.1 | 1.0937 | 3.3782 | 1.1250 | 3.4187 |
IMFPPA | 4.6253 | 4.1837 | 4.7031 | 4.7528 | |
DFCGPM | 4.9585 | 3.8375 | 5.2744 | 3.8501 | |
PPRSM | 7.2164 | 4.6751 | 7.5363 | 4.6751 | |
LAPM | 12.8906 | 4.5521 | 13.4894 | 4.5095 | |
120 | Algorithm 3.1 | 1.4531 | 3.0063 | 1.4688 | 3.1883 |
IMFPPA | 6.8872 | 4.9351 | 7.2431 | 5.2315 | |
DFCGPM | 7.1964 | 3.5026 | 7.2173 | 3.8449 | |
PPRSM | 8.3361 | 4.3693 | 8.5913 | 4.3559 | |
LAPM | 13.2656 | 4.1577 | 14.7813 | 4.4311 | |
140 | Algorithm 3.1 | 1.6719 | 3.8609 | 1.7813 | 3.7207 |
IMFPPA | 7.1722 | 5.1312 | 8.8313 | 4.8873 | |
DFCGPM | 7.6563 | 3.9326 | 8.3519 | 4.4939 | |
PPRSM | 8.9961 | 4.5807 | 9.1320 | 4.5431 | |
LAPM | 14.1875 | 4.8401 | 15.6406 | 4.9417 | |
160 | Algorithm 3.1 | 1.9844 | 4.4808 | 2.1875 | 4.3251 |
IMFPPA | 8.2192 | 4.9317 | 9.2268 | 4.9718 | |
DFCGPM | 8.3548 | 3.7487 | 8.8897 | 4.4939 | |
PPRSM | 9.4201 | 4.6442 | 9.8942 | 4.4337 | |
LAPM | 16.3594 | 4.9220 | 17.6375 | 4.9417 |
In this subsection, Algorithm 3.1 proposed in this paper is compared with DFCGPM [11] and IMFPPA [12] in terms of the CPU Time and the RelErr for different dimensions, where the parameters of DFCGPM and IMFPPA are given in Subsection 4.1. We set m=n/4, k=n/32, and no additive white Gaussian noise is added. Each algorithm was run 5 times, and the averages of the CPU Time and the RelErr were recorded. Some numerical results are listed in Table 2; IMFPPA and DFCGPM fail to solve the problem on our computer when n≥10,000 owing to its configuration constraints. The results are also plotted in Figure 2. The numerical results in Table 2 and Figure 2 indicate that the CPU Time and RelErr of Algorithm 3.1 are smaller than those of the other two tested methods, which shows that Algorithm 3.1 is more effective and very suitable for solving large-scale problems.
Dimension (n) | CPU Time (Algorithm 3.1) | RelErr(%) (Algorithm 3.1) | CPU Time (IMFPPA) | RelErr(%) (IMFPPA) | CPU Time (DFCGPM) | RelErr(%) (DFCGPM)
1024 | 0.0625 | 2.6592 | 0.6250 | 4.2036 | 1.1205 | 2.9219 |
2048 | 0.2813 | 3.6120 | 2.3281 | 4.0544 | 2.1701 | 3.6048 |
3072 | 0.7188 | 3.2769 | 3.7188 | 4.2890 | 5.2639 | 3.1776 |
4096 | 1.5313 | 3.2059 | 7.6563 | 4.8450 | 9.7066 | 3.6737 |
5120 | 2.4688 | 3.3400 | 12.6094 | 4.7120 | 13.0390 | 4.2521 |
6144 | 3.6094 | 3.5239 | 16.8594 | 4.8617 | 16.3323 | 4.7696 |
7168 | 4.3594 | 3.4662 | 22.7031 | 5.0325 | 24.6479 | 3.3939 |
8192 | 6.1153 | 3.2848 | 29.1563 | 4.8082 | 32.3971 | 3.5910 |
9216 | 7.7188 | 3.4343 | 39.4844 | 4.9214 | 41.6752 | 3.8263 |
10238 | 9.5469 | 3.6712 | - | - | - | - |
11262 | 10.4688 | 3.4449 | - | - | - | - |
12286 | 12.6875 | 3.2772 | - | - | - | - |
13310 | 14.2969 | 3.3361 | - | - | - | - |
14334 | 17.5313 | 3.2459 | - | - | - | - |
The method proposed in this work has several possible extensions. Firstly, it could be numerically beneficial to tune the parameters, and thus it is meaningful to investigate the global convergence of the proposed method with adaptively adjusted parameters. Secondly, we may establish a global error bound for (1.1) just as was done for the generalized linear complementarity problem in [16,17,18], and may use the error bound estimation to establish a sharper convergence rate of the new Algorithm 3.1 for solving (1.1). This is a topic for future research.
Since the RNNM model defined below is a convex program, we will also explore the possibility of applying the proposed algorithm, developed for the BPDN model, to the RNNM model, both in terms of theoretical results and numerical experiments. This will be our further research direction.
The Regularized Nuclear Norm Minimization (RNNM) model is defined as follows [19,20]:
\min_{X}\ \frac{1}{2}\|A(X)-b\|^2+\mu\|X\|_*,
where μ>0 is a parameter, b∈Rm is an observed vector, A:Rn1×n2⟶Rm is a known linear measurement map defined as
A(X)=[tr(X⊤A(1)),tr(X⊤A(2)),⋯,tr(X⊤A(m))]⊤. |
Here, A(i) for i=1,2,⋯,m is a matrix of size n1×n2, tr(⋅) is the trace function, and ‖⋅‖∗ denotes the nuclear norm.
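As an illustration, the linear measurement map A(⋅) can be evaluated in MATLAB as follows (a sketch; the cell array Acell holding the matrices A(1),…,A(m) is an assumed data structure):
Amap = @(X, Acell) cellfun(@(Ai) trace(X' * Ai), Acell(:));   % returns the m-vector [tr(X'A(1)); ...; tr(X'A(m))]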
In this paper, by choosing a special iterative format, we have developed a new proximal ADMM whose subproblems have closed-form solutions. Thus, it retains fast solving speed and high accuracy as the dimension increases, which makes the new algorithm very attractive for solving large-scale problems. The global convergence of the new method is discussed in detail, and a linear convergence rate result for the new algorithm is established. Some numerical experiments on sparse signal recovery are reported; compared with the state-of-the-art algorithms in [8,11,12,13], the method proposed in this paper is more accurate and efficient.
The authors wish to give their sincere thanks to the Editor-in-chief, the Associated Editor and the anonymous referees for their valuable suggestions and helpful comments which improved the presentation of the paper.
This study was funded by the Natural Science Foundation of China (Nos. 12071250, 11801309), and Shandong Provincial Natural Science Foundation (ZR2021MA088).
No potential conflict of interest was reported by the authors.
[1] J. F. Yang, Y. Zhang, Alternating direction algorithms for ℓ1-problems in compressive sensing, SIAM J. Sci. Comput., 33 (2011), 250–278. https://doi.org/10.1137/090777761
[2] X. M. Yuan, An improved proximal alternating direction method for monotone variational inequalities with separable structure, Comput. Optim. Appl., 49 (2011), 17–29. https://doi.org/10.1007/s10589-009-9293-y
[3] M. H. Xu, T. Wu, A class of linearized proximal alternating direction methods, J. Optim. Theory Appl., 151 (2011), 321–337. https://doi.org/10.1007/s10957-011-9876-5
[4] Y. H. Xiao, H. N. Song, An inexact alternating directions algorithm for constrained total variation regularized compressive sensing problems, J. Math. Imaging Vis., 44 (2012), 114–127. https://doi.org/10.1007/s10851-011-0314-y
[5] Y. C. Yu, J. G. Peng, X. L. Han, A. G. Cui, A primal Douglas-Rachford splitting method for the constrained minimization problem in compressive sensing, Circuits Syst. Signal Process., 36 (2017), 4022–4049. https://doi.org/10.1007/s00034-017-0498-5
[6] H. J. He, D. R. Han, A distributed Douglas-Rachford splitting method for multi-block convex minimization problems, Adv. Comput. Math., 42 (2016), 27–53. https://doi.org/10.1007/s10444-015-9408-1
[7] B. S. He, H. Liu, Z. R. Wang, X. M. Yuan, A strictly contractive Peaceman-Rachford splitting method for convex programming, SIAM J. Optim., 24 (2014), 1011–1040. https://doi.org/10.1137/13090849X
[8] M. Sun, J. Liu, A proximal Peaceman-Rachford splitting method for compressive sensing, J. Appl. Math. Comput., 50 (2016), 349–363. https://doi.org/10.1007/s12190-015-0874-x
[9] B. S. He, F. Ma, X. M. Yuan, Convergence study on the symmetric version of ADMM with larger step sizes, SIAM J. Imaging Sci., 9 (2016), 1467–1501. https://doi.org/10.1137/15M1044448
[10] Y. H. Xiao, H. Zhu, A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing, J. Math. Anal. Appl., 405 (2013), 310–319. https://doi.org/10.1016/j.jmaa.2013.04.017
[11] M. Sun, M. Y. Tian, A class of derivative-free CG projection methods for nonsmooth equations with an application to the LASSO problem, Bull. Iran. Math. Soc., 46 (2020), 183–205. https://doi.org/10.1007/s41980-019-00250-2
[12] H. C. Sun, M. Sun, B. H. Zhang, An inverse matrix-free proximal point algorithm for compressive sensing, ScienceAsia, 44 (2018), 311–318. https://doi.org/10.2306/scienceasia1513-1874.2018.44.311
[13] D. X. Feng, X. Y. Wang, A linearly convergent algorithm for sparse signal reconstruction, J. Fixed Point Theory Appl., 20 (2018), 154. https://doi.org/10.1007/s11784-018-0635-1
[14] W. T. Yin, S. Osher, D. Goldfarb, J. Darbon, Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing, SIAM J. Imaging Sci., 1 (2008), 143–168. https://doi.org/10.1137/070703983
[15] Y. H. Xiao, Q. Y. Wang, Q. J. Hu, Non-smooth equations based method for ℓ1-norm problems with applications to compressed sensing, Nonlinear Anal.-Theor., 74 (2011), 3570–3577. https://doi.org/10.1016/j.na.2011.02.040
[16] H. C. Sun, Y. J. Wang, L. Q. Qi, Global error bound for the generalized linear complementarity problem over a polyhedral cone, J. Optim. Theory Appl., 142 (2009), 417–429. https://doi.org/10.1007/s10957-009-9509-4
[17] H. C. Sun, Y. J. Wang, Further discussion on the error bound for generalized linear complementarity problem over a polyhedral cone, J. Optim. Theory Appl., 159 (2013), 93–107. https://doi.org/10.1007/s10957-013-0290-z
[18] H. C. Sun, Y. J. Wang, S. J. Li, M. Sun, A sharper global error bound for the generalized linear complementarity problem over a polyhedral cone under weaker conditions, J. Fixed Point Theory Appl., 20 (2018), 75. https://doi.org/10.1007/s11784-018-0556-z
[19] E. J. Candès, Y. Plan, Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements, IEEE T. Inform. Theory, 57 (2011), 2342–2359. https://doi.org/10.1109/TIT.2011.2111771
[20] W. D. Wang, F. Zhang, J. J. Wang, Low-rank matrix recovery via regularized nuclear norm minimization, Appl. Comput. Harmon. Anal., 54 (2021), 1–19. https://doi.org/10.1016/j.acha.2021.03.001