Research article

A linearly convergent proximal ADMM with new iterative format for BPDN in compressed sensing problem

  • Received: 18 December 2021 Revised: 11 March 2022 Accepted: 15 March 2022 Published: 28 March 2022
  • MSC : 90C30, 90C33

  • In recent years, the compressive sensing (CS) problem has been widely applied in signal processing and statistical inference. The alternating direction method of multipliers (ADMM) is applicable to the equivalent forms of the basis pursuit denoising (BPDN) problem in CS. However, the solving speed and accuracy deteriorate when the dimension increases greatly. In this paper, a new iterative format of proximal ADMM, which retains fast solving speed and high accuracy as the dimension increases, is proposed to solve the BPDN problem. Global convergence of the new proximal ADMM is established in detail, and an R-linear convergence rate is exhibited under suitable conditions. Moreover, we apply the new algorithm to solve different types of BPDN problems. Compared with state-of-the-art algorithms for the BPDN problem, the proposed algorithm is more accurate and efficient.

    Citation: Bing Xue, Jiakang Du, Hongchun Sun, Yiju Wang. A linearly convergent proximal ADMM with new iterative format for BPDN in compressed sensing problem[J]. AIMS Mathematics, 2022, 7(6): 10513-10533. doi: 10.3934/math.2022586




    The compressive sensing (CS) problem is to recover a sparse signal $\bar x$ from an underdetermined linear system $A\bar x=b$, where $\bar x\in\mathbb{R}^n$, $A\in\mathbb{R}^{m\times n}$ ($m\ll n$) is the sensing matrix, and $b\in\mathbb{R}^m$ is the observed signal. A fundamental decoding model of the CS problem is the following basis pursuit denoising (termed BPDN) problem:

    $$\min_{x\in\mathbb{R}^n}\ \frac{1}{2}\|Ax-b\|_2^2+\mu\|x\|_1, \tag{1.1}$$

    where $\mu>0$ is a parameter, and $\|\cdot\|_1$ and $\|\cdot\|_2$ denote the $\ell_1$-norm and the Euclidean $2$-norm, respectively.
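    As a quick illustration, the objective in (1.1) can be evaluated with a one-line MATLAB handle; the name bpdn_obj is ours, and A, b and mu are assumed to be given.

```matlab
% Evaluate the BPDN objective (1.1) at a point x (illustrative sketch).
bpdn_obj = @(A, b, mu, x) 0.5*norm(A*x - b, 2)^2 + mu*norm(x, 1);
```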

    Recently, numerous numerical algorithms for the BPDN problem have been developed. In fact, the BPDN problem can be equivalently converted into a separable convex program by introducing an auxiliary variable. Thus the numerical methods that can be used to solve separable convex programs are applicable to the BPDN problem, such as the alternating direction method of multipliers and its linearized versions [1,2,3,4], the Peaceman-Rachford splitting method (PRSM) or Douglas-Rachford splitting method (DRSM) of multipliers [5,6,7,8], the symmetric alternating direction method of multipliers [9], etc. Yang and Zhang [1] investigate the use of alternating direction algorithms for several $\ell_1$-norm minimization problems arising from sparse solution recovery in CS, including the basis pursuit problem, the basis pursuit denoising problem, and so on. Yuan [2] presents a descent-like method, which obtains a descent direction and an appropriate step size and improves the proximal alternating direction method. Yu et al. [5] apply the primal DRSM to solve an equivalent transformed form of the BPDN problem. He and Han [6] further study the multi-block separable convex minimization problem with linear constraints by the primal application of DRSM, and present exact and inexact versions of the new method in a unified framework. Compared to the DRSM, the PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. He et al. [7] illustrate the reason for this difference, and develop a modified PRSM for separable convex programming, which includes the BPDN problem as a special case. Sun and Liu [8] develop a generalized PRSM for the BPDN problem, in which both subproblems are approximated by a linearization technique. He et al. [9] obtain the convergence of the symmetric version of ADMM with step sizes that can be enlarged by Fortin and Glowinski's constant. On the other hand, the BPDN problem can be equivalently transformed into an equation or a variational inequality problem by a splitting technique [10,11,12,13,14,15], which can then be solved by standard methods such as conjugate gradient methods, proximal point algorithms and projection-type algorithms. Xiao and Zhu [10] transform the BPDN problem into a convex constrained monotone equation, and present a conjugate gradient method for the equivalent form of the problem. Sun and Tian [11] propose a class of derivative-free conjugate gradient (CG) projection methods for nonsmooth equations with convex constraints, including the BPDN problem. Sun et al. [12] reformulate the BPDN problem as a variational inequality problem by splitting the decision variable into two nonnegative auxiliary variables, and propose a novel inverse matrix-free proximal point algorithm for the BPDN problem. Based on the same transformation of the BPDN problem, Feng and Wang [13] also propose a projection-type algorithm without any backtracking line search. Although there are many ways to solve the problem, the solving speed and accuracy still need to be improved; in particular, both deteriorate as the dimension increases greatly. In this paper, we establish a new iterative format of proximal ADMM whose subproblems have closed-form solutions. The motivation behind this is better numerical performance as the dimension increases. Furthermore, establishing a linear convergence rate for the new algorithm is another important motivation.

    The rest of this paper is organized as follows. In Section 2, some equivalent reformulations of (1.1) and related theories, which are the basis of our analysis, are given. In Section 3, based on a special iterative format, we present a new type of proximal ADMM, in which the corresponding subproblems have closed-form solutions. The global convergence of the new method is discussed in detail, and we establish an R-linear convergence rate theorem under suitable conditions. In Section 4, some numerical experiments on sparse signal recovery are given; we compare the CPU time and the relative error among the Peaceman-Rachford splitting method of multipliers [8], the conjugate gradient method [11], the proximal point algorithm [12], the projection-type algorithm [13] and our algorithm, and show that our algorithm is more accurate and efficient than the other algorithms. Finally, some concluding remarks are drawn in the last section.

    We end this section with some notations used in this paper. All vectors considered in this paper are taken in Euclidean space equipped with the standard inner product. The notation $\mathbb{R}^{m\times n}$ stands for the set of all $m\times n$ real matrices. For $x,y\in\mathbb{R}^n$, we use $(x;y)$ to denote the column vector $(x^\top,y^\top)^\top$. If $G\in\mathbb{R}^{n\times n}$ is a symmetric positive definite matrix, we denote by $\|x\|_G=\sqrt{x^\top Gx}$ the $G$-norm of the vector $x$.

    In this section, we first establish some equivalent reformulations of the BPDN problem via related optimality theories.

    It is obvious that the BPDN problem can be equivalently reformulated as the following optimization problem [8]

    $$\min\ \frac{1}{2}\|Ax_1-b\|_2^2+\mu\|x_2\|_1\quad \text{s.t.}\ x_1-x_2=0,\ x_1\in\mathbb{R}^n,\ x_2\in\mathbb{R}^n. \tag{2.1}$$

    Then the augmented Lagrangian function of the convex program (2.1) can be written as

    $$\mathcal{L}_\beta(x_1,x_2,\lambda):=\theta_1(x_1)+\theta_2(x_2)-\langle\lambda,\,x_1-x_2\rangle+\frac{\beta}{2}\|x_1-x_2\|^2, \tag{2.2}$$

    where $\lambda\in\mathbb{R}^n$ is the Lagrangian multiplier for the linear constraint of (2.1), and

    $$\theta_1(x_1)=\frac{1}{2}\|Ax_1-b\|_2^2,\qquad \theta_2(x_2)=\mu\|x_2\|_1.$$
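    For reference, a small MATLAB sketch of $\theta_1$, $\theta_2$ and the augmented Lagrangian (2.2); the handle names are ours, and A, b, mu, beta are assumed to be given.

```matlab
% Augmented Lagrangian (2.2) of the splitting (2.1).
theta1 = @(x1) 0.5*norm(A*x1 - b)^2;     % smooth data-fit term
theta2 = @(x2) mu*norm(x2, 1);           % nonsmooth l1 term
Lbeta  = @(x1, x2, lam) theta1(x1) + theta2(x2) - lam'*(x1 - x2) ...
         + (beta/2)*norm(x1 - x2)^2;
```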

    By invoking the first-order optimality condition for convex programming, we can equivalently reformulate problem (2.1) as the following variational inequality problem: find a vector $x^*=(x_1^*;x_2^*)\in\mathbb{R}^n\times\mathbb{R}^n$ and $\lambda^*\in\mathbb{R}^n$ such that

    $$\begin{cases}\theta_1(x_1)-\theta_1(x_1^*)-(x_1-x_1^*)^\top\lambda^*\ge 0, & \forall x_1\in\mathbb{R}^n,\\ \theta_2(x_2)-\theta_2(x_2^*)+(x_2-x_2^*)^\top\lambda^*\ge 0, & \forall x_2\in\mathbb{R}^n,\\ x_1^*-x_2^*=0.\end{cases} \tag{2.3}$$

    Obviously, the system (2.3) is equivalent to the following problem: find a vector $w^*\in\mathcal{W}$ such that

    $$\theta(x)-\theta(x^*)+(w-w^*)^\top F(w^*)\ge 0,\quad \forall w\in\mathcal{W}, \tag{2.4}$$

    where $w=(x_1;x_2;\lambda)\in\mathcal{W}=\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n$, $\theta(x)=\theta_1(x_1)+\theta_2(x_2)$, and

    $$F(w):=\begin{pmatrix}-\lambda\\ \lambda\\ x_1-x_2\end{pmatrix}=\begin{pmatrix}0&0&-I_n\\ 0&0&I_n\\ I_n&-I_n&0\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ \lambda\end{pmatrix}. \tag{2.5}$$

    We denote the solution set of (2.4) by $\mathcal{W}^*$.

    It is easy to verify that the mapping $F(\cdot)$ is not only monotone but also satisfies the following nice property:

    $$(w-\bar w)^\top\big(F(w)-F(\bar w)\big)=0,\quad \forall\, w,\bar w\in\mathcal{W}.$$
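    The identity above holds because the matrix representation of $F$ in (2.5) is skew-symmetric; a tiny MATLAB check (purely illustrative, with an arbitrary small dimension) is:

```matlab
% Numerical sanity check of (w - wbar)'*(F(w) - F(wbar)) = 0 for the
% skew-symmetric linear operator F in (2.5).
n = 5;
Fmat = [zeros(n) zeros(n) -eye(n); zeros(n) zeros(n) eye(n); eye(n) -eye(n) zeros(n)];
w = randn(3*n, 1); wbar = randn(3*n, 1);
disp((w - wbar)'*(Fmat*(w - wbar)))   % zero up to rounding error
```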

    To proceed, we present the following definition, which will be needed in the sequel.

    Definition 2.1. For a vector sequence $\mu^k=(\mu_1^k,\mu_2^k,\ldots,\mu_n^k)\in\mathbb{R}^n$ $(k=1,2,\ldots)$, we define two new functions $\psi(\mu^k)$ and $\delta(\mu^k)$.

    (i) The function $\psi(\mu^k)=\big(\psi_1(\mu_1^k),\psi_1(\mu_2^k),\ldots,\psi_1(\mu_n^k)\big)$, where

    $$\psi_1(\mu_i^k)=\begin{cases}0, & \text{if } |\mu_i^k|\le \dfrac{C}{n2^k},\\[1mm] \mu_i^k, & \text{if } |\mu_i^k|>\dfrac{C}{n2^k},\end{cases}\qquad (i=1,2,\ldots,n) \tag{2.6}$$

    where $k$ is a positive integer and $C>0$ is a constant.

    (ii) The function $\delta(\mu^k)=\mu^k-\psi(\mu^k)$.
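    A componentwise MATLAB sketch of $\psi$ and $\delta$ (the handle names psi_k and delta_k are ours):

```matlab
% psi from (2.6): zero out entries whose magnitude is at most C/(n*2^k);
% delta keeps exactly the entries that psi discards.
psi_k   = @(muk, k, C) muk .* (abs(muk) > C/(numel(muk)*2^k));
delta_k = @(muk, k, C) muk - psi_k(muk, k, C);
```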

    In this section, we present a new type of proximal ADMM for solving (2.1) based on a special iterative format, and the global convergence of the new method is also established in detail.

    Algorithm 3.1.

    Step 0. Select constants $\beta,\gamma,\varepsilon>0$ and two positive semi-definite matrices $R_i\in\mathbb{R}^{n\times n}$ $(i=1,2)$. Choose an arbitrary initial point $w^0=(x_1^0;x_2^0;\lambda^0)\in\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n$. Take

    $$\eta=\begin{cases}\gamma, & \text{if } 0<\gamma\le 1,\\[1mm] \dfrac{1}{\gamma}, & \text{if } \gamma>1.\end{cases} \tag{3.1}$$

    Set $k=0$.

    Step 1. Given the current iterate $w^k$, compute the new iterate $\hat w^k=(\hat x_1^k;\hat x_2^k;\hat\lambda^k)$ via

    $$\begin{cases}\hat x_1^k\in\arg\min_{x_1\in\mathbb{R}^n}\ \mathcal{L}_\beta(x_1,x_2^k,\lambda^k)+\frac{1}{2}\|x_1-x_1^k\|_{R_1}^2,\\[1mm] \hat x_2^k\in\arg\min_{x_2\in\mathbb{R}^n}\ \mathcal{L}_\beta(\hat x_1^k,x_2,\lambda^k)+\frac{1}{2}\|x_2-x_2^k\|_{R_2}^2,\\[1mm] \hat\lambda^k=\lambda^k-\gamma\beta(\hat x_1^k-\hat x_2^k).\end{cases} \tag{3.2}$$

    Step 2. If $\|w^k-\hat w^k\|\le\varepsilon$, then stop; otherwise, go to Step 3.

    Step 3. Set $w^{k+1}=\rho\hat w^k+(1-\rho)\psi(w^k)$, where $\rho\in(0,\eta)$ and $\eta$ is the constant defined in (3.1). Set $k:=k+1$ and go to Step 1.

    In the following, we show that it is reasonable to use $\|w^k-\hat w^k\|\le\varepsilon$ to terminate Algorithm 3.1.

    Lemma 3.1. If $w^k=\hat w^k$, then the vector $w^k$ is a solution of (2.4).

    Proof. For the $x_1$-subproblem in (3.2), using the first-order optimality condition, for any $x_1\in\mathbb{R}^n$ we obtain

    $$\begin{aligned}0\le{}&\theta_1(x_1)-\theta_1(\hat x_1^k)+(x_1-\hat x_1^k)^\top\big\{-\lambda^k+\beta(\hat x_1^k-x_2^k)+R_1(\hat x_1^k-x_1^k)\big\}\\
    ={}&\theta_1(x_1)-\theta_1(\hat x_1^k)+(x_1-\hat x_1^k)^\top\big\{-\hat\lambda^k+\beta(1-\gamma)(\hat x_1^k-\hat x_2^k)-\beta(x_2^k-\hat x_2^k)+R_1(\hat x_1^k-x_1^k)\big\}\\
    ={}&\theta_1(x_1)-\theta_1(\hat x_1^k)+(x_1-\hat x_1^k)^\top(-\hat\lambda^k)+(x_1-\hat x_1^k)^\top\Big\{\frac{1-\gamma}{\gamma}(\lambda^k-\hat\lambda^k)-\beta(x_2^k-\hat x_2^k)\Big\}+(x_1-\hat x_1^k)^\top R_1(\hat x_1^k-x_1^k),\end{aligned}$$

    where the first and second equalities follow from $\lambda^k=\hat\lambda^k+\gamma\beta(\hat x_1^k-\hat x_2^k)$, i.e.,

    $$\theta_1(x_1)-\theta_1(\hat x_1^k)+(x_1-\hat x_1^k)^\top(-\hat\lambda^k)\ge (x_1-\hat x_1^k)^\top R_1(x_1^k-\hat x_1^k)+\beta(x_1-\hat x_1^k)^\top(x_2^k-\hat x_2^k)-\frac{1-\gamma}{\gamma}(x_1-\hat x_1^k)^\top(\lambda^k-\hat\lambda^k). \tag{3.3}$$

    For the $x_2$-subproblem in (3.2), similarly to the discussion above, one has

    $$\begin{aligned}0\le{}&\theta_2(x_2)-\theta_2(\hat x_2^k)+(x_2-\hat x_2^k)^\top\big\{\lambda^k-\beta(\hat x_1^k-\hat x_2^k)+R_2(\hat x_2^k-x_2^k)\big\}\\
    ={}&\theta_2(x_2)-\theta_2(\hat x_2^k)+(x_2-\hat x_2^k)^\top\big\{\hat\lambda^k-\beta(1-\gamma)(\hat x_1^k-\hat x_2^k)+R_2(\hat x_2^k-x_2^k)\big\}\\
    ={}&\theta_2(x_2)-\theta_2(\hat x_2^k)+(x_2-\hat x_2^k)^\top\hat\lambda^k-(x_2-\hat x_2^k)^\top\Big\{\frac{1-\gamma}{\gamma}(\lambda^k-\hat\lambda^k)\Big\}+(x_2-\hat x_2^k)^\top R_2(\hat x_2^k-x_2^k),\end{aligned}$$

    where the first and second equalities follow from $\lambda^k=\hat\lambda^k+\gamma\beta(\hat x_1^k-\hat x_2^k)$, i.e.,

    $$\theta_2(x_2)-\theta_2(\hat x_2^k)+(x_2-\hat x_2^k)^\top\hat\lambda^k\ge (x_2-\hat x_2^k)^\top R_2(x_2^k-\hat x_2^k)+\frac{1-\gamma}{\gamma}(x_2-\hat x_2^k)^\top(\lambda^k-\hat\lambda^k). \tag{3.4}$$

    For the $\lambda$-update in (3.2), for any $\lambda\in\mathbb{R}^n$, one has

    $$(\lambda-\hat\lambda^k)^\top(\hat x_1^k-\hat x_2^k)=\frac{1}{\beta\gamma}(\lambda-\hat\lambda^k)^\top(\lambda^k-\hat\lambda^k). \tag{3.5}$$

    By (3.3)–(3.5), we get

    $$\theta(x)-\theta(\hat x^k)+(w-\hat w^k)^\top F(\hat w^k)\ge (w-\hat w^k)^\top G(w^k-\hat w^k),\quad \forall w\in\mathcal{W}, \tag{3.6}$$

    where

    $$G=\begin{pmatrix}R_1 & \beta I_n & -\frac{1-\gamma}{\gamma}I_n\\ 0 & R_2 & \frac{1-\gamma}{\gamma}I_n\\ 0 & 0 & \frac{1}{\beta\gamma}I_n\end{pmatrix}. \tag{3.7}$$

    Combining $w^k=\hat w^k$ with (3.6), one has

    $$\theta(x)-\theta(\hat x^k)+(w-\hat w^k)^\top F(\hat w^k)\ge 0,\quad \forall w\in\mathcal{W}. \tag{3.8}$$

    Substituting $\hat w^k$ and $\hat x^k$ in (3.8) with $w^k$ and $x^k$, respectively, we obtain

    $$\theta(x)-\theta(x^k)+(w-w^k)^\top F(w^k)\ge 0,\quad \forall w\in\mathcal{W},$$

    which indicates that $w^k$ is a solution of (2.4).

    Remark 3.1. From Lemma 3.1, if Algorithm 3.1 stops at Step 2, then $w^k$ is an approximate solution of (2.4).

    In the following, we assume that Algorithm 3.1 generates infinite sequences $\{w^k\}$ and $\{\hat w^k\}$. For convenience, we define two matrices to simplify our notation in the later analysis:

    $$M=\begin{pmatrix}R_1&0&0\\ 0&\beta I_n+R_2&0\\ 0&0&\frac{1}{\beta\gamma}I_n\end{pmatrix},\qquad Q=\begin{pmatrix}R_1&0&0\\ 0&\beta I_n+R_2&-\frac{1}{2\gamma}I_n\\ 0&-\frac{1}{2\gamma}I_n&\frac{1}{\beta\gamma^2}I_n\end{pmatrix}. \tag{3.9}$$

    The following lemma gives some interesting properties of the two matrices M,Q just defined. These properties are crucial in the convergence analysis of Algorithm 3.1.

    Lemma 3.2. If $R_1$ and $R_2$ are two positive semi-definite matrices, then we have

    (i) Both matrices $M$ and $Q$ are positive semi-definite;

    (ii) The matrix $H_1:=2Q-\gamma M$ is positive semi-definite if $0<\gamma\le 1$;

    (iii) The matrix $H_2:=2\gamma Q-M$ is positive semi-definite if $\gamma>1$.

    Proof. (i) For any $w=(x_1;x_2;\lambda)$, one has

    $$w^\top Mw=\|x_1\|_{R_1}^2+\beta\|x_2\|^2+\|x_2\|_{R_2}^2+\frac{1}{\beta\gamma}\|\lambda\|^2\ge 0.$$

    So the matrix M is positive semi-definite.

    The matrix Q can be written as

    $$Q=\begin{pmatrix}R_1&0&0\\ 0&R_2&0\\ 0&0&0\end{pmatrix}+\begin{pmatrix}0&0&0\\ 0&\beta I_n&-\frac{1}{2\gamma}I_n\\ 0&-\frac{1}{2\gamma}I_n&\frac{1}{\beta\gamma^2}I_n\end{pmatrix}:=Q_1+Q_2.$$

    Obviously, the matrix Q1 is positive semi-definite, and we have

    $$\begin{aligned}w^\top Q_2w&=\beta x_2^\top x_2+\frac{1}{\beta\gamma^2}\lambda^\top\lambda-\frac{1}{\gamma}x_2^\top\lambda\\
    &\ge \beta x_2^\top x_2+\frac{1}{\beta\gamma^2}\lambda^\top\lambda-\frac{1}{\gamma}\|x_2\|\,\|\lambda\|\\
    &\ge \beta x_2^\top x_2+\frac{1}{\beta\gamma^2}\lambda^\top\lambda-\Big(\frac{\beta}{4}\|x_2\|^2+\frac{1}{\beta\gamma^2}\|\lambda\|^2\Big)\\
    &=\frac{3\beta}{4}\|x_2\|^2\ge 0,\end{aligned}$$

    where the first inequality is obtained by the Cauchy-Schwarz inequality, and the second inequality follows from the fact that $a^2+b^2\ge 2ab$, $\forall a,b\in\mathbb{R}_+$; the desired result follows.

    (ii) For the matrix $H_1$, a direct computation yields

    $$\begin{aligned}H_1=2Q-\gamma M&=\begin{pmatrix}(2-\gamma)R_1&0&0\\ 0&(2-\gamma)(\beta I_n+R_2)&-\frac{1}{\gamma}I_n\\ 0&-\frac{1}{\gamma}I_n&\frac{2-\gamma}{\beta\gamma^2}I_n\end{pmatrix}\\
    &=(2-\gamma)\begin{pmatrix}R_1&0&0\\ 0&R_2&0\\ 0&0&0\end{pmatrix}+\begin{pmatrix}0&0&0\\ 0&(2-\gamma)\beta I_n&-\frac{1}{\gamma}I_n\\ 0&-\frac{1}{\gamma}I_n&\frac{2-\gamma}{\beta\gamma^2}I_n\end{pmatrix}:=(2-\gamma)Q_1+Q_3.\end{aligned}$$

    Obviously, by $0<\gamma\le 1$, the first part is positive semi-definite, and

    $$\begin{aligned}w^\top Q_3w&=(2-\gamma)\beta x_2^\top x_2+\frac{2-\gamma}{\beta\gamma^2}\lambda^\top\lambda-\frac{2}{\gamma}x_2^\top\lambda\\
    &\ge (2-\gamma)\beta x_2^\top x_2+\frac{2-\gamma}{\beta\gamma^2}\lambda^\top\lambda-\frac{2}{\gamma}\|x_2\|\,\|\lambda\|\\
    &\ge (2-\gamma)\beta x_2^\top x_2+\frac{2-\gamma}{\beta\gamma^2}\lambda^\top\lambda-\Big(\frac{\beta}{2-\gamma}\|x_2\|^2+\frac{2-\gamma}{\beta\gamma^2}\|\lambda\|^2\Big)\\
    &=\Big[(2-\gamma)-\frac{1}{2-\gamma}\Big]\beta\|x_2\|^2\ge 0,\end{aligned}$$

    and thus the desired result follows.

    (iii) For the matrix $H_2$, a direct computation yields

    $$\begin{aligned}H_2=2\gamma Q-M&=\begin{pmatrix}(2\gamma-1)R_1&0&0\\ 0&(2\gamma-1)(\beta I_n+R_2)&-I_n\\ 0&-I_n&\frac{1}{\beta\gamma}I_n\end{pmatrix}\\
    &=(2\gamma-1)\begin{pmatrix}R_1&0&0\\ 0&R_2&0\\ 0&0&0\end{pmatrix}+\begin{pmatrix}0&0&0\\ 0&(2\gamma-1)\beta I_n&-I_n\\ 0&-I_n&\frac{1}{\beta\gamma}I_n\end{pmatrix}:=(2\gamma-1)Q_1+Q_4.\end{aligned}$$

    Similarly to the discussion above, using $\gamma>1$, we obtain

    $$\begin{aligned}w^\top Q_4w&=(2\gamma-1)\beta x_2^\top x_2+\frac{1}{\beta\gamma}\lambda^\top\lambda-2x_2^\top\lambda\\
    &\ge (2\gamma-1)\beta x_2^\top x_2+\frac{1}{\beta\gamma}\lambda^\top\lambda-2\|x_2\|\,\|\lambda\|\\
    &\ge (2\gamma-1)\beta x_2^\top x_2+\frac{1}{\beta\gamma}\lambda^\top\lambda-\Big(\beta\gamma\|x_2\|^2+\frac{1}{\beta\gamma}\|\lambda\|^2\Big)\\
    &=(\gamma-1)\beta\|x_2\|^2\ge 0.\end{aligned}$$

    Combining this with the positive semi-definiteness of $Q_1$, the desired result follows.

    Lemma 3.3. Let $\{w^k\}$ and $\{\hat w^k\}$ be two sequences generated by Algorithm 3.1. Then we have

    $$(w^k-w^*)^\top M(w^k-\hat w^k)\ge \|w^k-\hat w^k\|_Q^2,\quad \forall w^*\in\mathcal{W}^*. \tag{3.10}$$

    Proof. From the definitions of M in (3.9) and G in (3.7), one has

    $$G=M+\widetilde M,$$

    where $\widetilde M=\begin{pmatrix}0&\beta I_n&-\frac{1-\gamma}{\gamma}I_n\\ 0&-\beta I_n&\frac{1-\gamma}{\gamma}I_n\\ 0&0&0\end{pmatrix}$. A direct computation yields

    $$\begin{aligned}0&\le \theta(\hat x^k)-\theta(x^*)+(\hat w^k-w^*)^\top F(\hat w^k)\\
    &\le (\hat w^k-w^*)^\top G(w^k-\hat w^k)\\
    &=(\hat w^k-w^*)^\top M(w^k-\hat w^k)+(\hat w^k-w^*)^\top \widetilde M(w^k-\hat w^k)\\
    &=(\hat w^k-w^*)^\top M(w^k-\hat w^k)+\beta\big[(\hat x_1^k-x_1^*)-(\hat x_2^k-x_2^*)\big]^\top(x_2^k-\hat x_2^k)-\frac{1-\gamma}{\gamma}\big[(\hat x_1^k-x_1^*)-(\hat x_2^k-x_2^*)\big]^\top(\lambda^k-\hat\lambda^k)\\
    &=(\hat w^k-w^*)^\top M(w^k-\hat w^k)+\beta(\hat x_1^k-\hat x_2^k)^\top(x_2^k-\hat x_2^k)-\frac{1-\gamma}{\gamma}(\hat x_1^k-\hat x_2^k)^\top(\lambda^k-\hat\lambda^k)\\
    &=(\hat w^k-w^*)^\top M(w^k-\hat w^k)+\frac{1}{\gamma}(\lambda^k-\hat\lambda^k)^\top(x_2^k-\hat x_2^k)-\frac{1-\gamma}{\gamma}\Big[\frac{1}{\beta\gamma}(\lambda^k-\hat\lambda^k)\Big]^\top(\lambda^k-\hat\lambda^k)\\
    &=(\hat w^k-w^*)^\top M(w^k-\hat w^k)+\frac{1}{\gamma}(\lambda^k-\hat\lambda^k)^\top(x_2^k-\hat x_2^k)-\frac{1-\gamma}{\beta\gamma^2}\|\lambda^k-\hat\lambda^k\|^2,\end{aligned} \tag{3.11}$$

    where the first inequality is by (2.4) with $w=\hat w^k\in\mathcal{W}$ together with the property $(w-\bar w)^\top(F(w)-F(\bar w))=0$; since $w^*\in\mathcal{W}^*\subseteq\mathcal{W}$, using (3.6) with $x=x^*$ and $w=w^*$, the second inequality holds; the third equality follows from $x_1^*=x_2^*$, and the fourth equality comes from the fact that $\hat x_1^k-\hat x_2^k=\frac{1}{\gamma\beta}(\lambda^k-\hat\lambda^k)$. Applying (3.11), we get

    $$(\hat w^k-w^*)^\top M(w^k-\hat w^k)\ge -\frac{1}{\gamma}(\lambda^k-\hat\lambda^k)^\top(x_2^k-\hat x_2^k)+\frac{1-\gamma}{\beta\gamma^2}\|\lambda^k-\hat\lambda^k\|^2,$$

    and one has

    $$\begin{aligned}(w^k-w^*)^\top M(w^k-\hat w^k)&=(w^k-\hat w^k)^\top M(w^k-\hat w^k)+(\hat w^k-w^*)^\top M(w^k-\hat w^k)\\
    &\ge (w^k-\hat w^k)^\top M(w^k-\hat w^k)-\frac{1}{\gamma}(\lambda^k-\hat\lambda^k)^\top(x_2^k-\hat x_2^k)+\frac{1-\gamma}{\beta\gamma^2}\|\lambda^k-\hat\lambda^k\|^2\\
    &=\|w^k-\hat w^k\|_Q^2,\end{aligned}$$

    where the second equality follows from the definition of matrix Q in (3.9), and the desired result follows.

    Lemma 3.4. For any solution $w^*=(x_1^*;x_2^*;\lambda^*)$ of (2.4), the sequence $\{w^k\}$ generated by Algorithm 3.1 satisfies

    $$\|\rho\hat w^k+(1-\rho)w^k-w^*\|_M^2\le \|w^k-w^*\|_M^2-\rho\|\hat w^k-w^k\|_H^2, \tag{3.12}$$

    where the matrix $H$ is defined by

    $$H=(\eta-\rho)M, \tag{3.13}$$

    and $\eta$ is defined in (3.1).

    Proof. From (3.1), if $0<\gamma\le 1$, then $\eta=\gamma$. A direct computation yields

    $$\begin{aligned}\|\rho(\hat w^k-w^k)+(w^k-w^*)\|_M^2&=\|w^k-w^*\|_M^2+2\rho(w^k-w^*)^\top M(\hat w^k-w^k)+\rho^2\|\hat w^k-w^k\|_M^2\\
    &\le \|w^k-w^*\|_M^2-2\rho\|\hat w^k-w^k\|_Q^2+\rho^2\|\hat w^k-w^k\|_M^2\\
    &=\|w^k-w^*\|_M^2-2\rho\|\hat w^k-w^k\|_{(\frac{1}{2}H_1+\frac{\gamma}{2}M)}^2+\rho^2\|\hat w^k-w^k\|_M^2\\
    &=\|w^k-w^*\|_M^2-(\hat w^k-w^k)^\top\big(\rho H_1+(\rho\gamma-\rho^2)M\big)(\hat w^k-w^k)\\
    &=\|w^k-w^*\|_M^2-(\hat w^k-w^k)^\top\rho(\gamma-\rho)M(\hat w^k-w^k)-(\hat w^k-w^k)^\top\rho H_1(\hat w^k-w^k)\\
    &\le \|w^k-w^*\|_M^2-\rho\|\hat w^k-w^k\|_{(\gamma-\rho)M}^2,\end{aligned} \tag{3.14}$$

    where the first inequality follows from (3.10), the second equality follows from $H_1=2Q-\gamma M$ in Lemma 3.2 (ii), and the second inequality follows from the fact that the matrix $H_1$ is positive semi-definite by Lemma 3.2.

    If $\gamma>1$, then $\eta=\frac{1}{\gamma}$. Similarly to the derivation of (3.14), we can also obtain

    $$\begin{aligned}\|\rho(\hat w^k-w^k)+(w^k-w^*)\|_M^2&\le \|w^k-w^*\|_M^2-2\rho\|\hat w^k-w^k\|_Q^2+\rho^2\|\hat w^k-w^k\|_M^2\\
    &=\|w^k-w^*\|_M^2-2\rho\|\hat w^k-w^k\|_{(\frac{1}{2\gamma}H_2+\frac{1}{2\gamma}M)}^2+\rho^2\|\hat w^k-w^k\|_M^2\\
    &=\|w^k-w^*\|_M^2-(\hat w^k-w^k)^\top\Big(\frac{\rho}{\gamma}H_2+\rho\big(\tfrac{1}{\gamma}-\rho\big)M\Big)(\hat w^k-w^k)\\
    &=\|w^k-w^*\|_M^2-\rho\|\hat w^k-w^k\|_{(\frac{1}{\gamma}-\rho)M}^2-\frac{\rho}{\gamma}(\hat w^k-w^k)^\top H_2(\hat w^k-w^k)\\
    &\le \|w^k-w^*\|_M^2-\rho\|\hat w^k-w^k\|_{(\frac{1}{\gamma}-\rho)M}^2,\end{aligned} \tag{3.15}$$

    where the first equality comes from $H_2=2\gamma Q-M$ in Lemma 3.2 (iii), and the second inequality follows from the fact that the matrix $H_2$ is positive semi-definite by Lemma 3.2.

    The desired result follows by combining the two cases above.

    Remark 3.2. By the definition of η in (3.1), the matrix H in (3.13) is positive semi-definite.

    Theorem 3.1. For any solution $w^*=(x_1^*;x_2^*;\lambda^*)$ of (2.4), the sequence $\{w^k\}$ generated by Algorithm 3.1 satisfies

    $$\|w^{k+1}-w^*\|_M\le \|w^k-w^*\|_M+(1-\rho)\|\delta(w^k)\|_M. \tag{3.16}$$

    Proof. Since the matrix $H$ in (3.13) is positive semi-definite, by (3.12), one has

    $$\begin{aligned}\|w^{k+1}-w^*\|_M&=\|\rho\hat w^k+(1-\rho)\psi(w^k)-w^*\|_M\\
    &=\|\rho(\hat w^k-w^k)+(w^k-w^*)+(\rho-1)(w^k-\psi(w^k))\|_M\\
    &\le \|\rho(\hat w^k-w^k)+(w^k-w^*)\|_M+(1-\rho)\|\delta(w^k)\|_M\\
    &\le \|w^k-w^*\|_M+(1-\rho)\|\delta(w^k)\|_M.\end{aligned} \tag{3.17}$$

    Thus, the desired result follows.

    Theorem 3.2. Assume that the matrix $R_1$ is positive definite. Then the sequence $\{w^k\}$ generated by Algorithm 3.1 converges to some $\bar w\in\mathcal{W}^*$.

    Proof. Using Definition 2.1, there exists a constant $c_1>0$ such that $\|\delta(w^k)\|_M\le \frac{c_1}{2^k}$. Combining this with (3.16), we obtain

    $$\begin{aligned}\|w^{k+1}-w^*\|_M&\le \|w^k-w^*\|_M+(1-\rho)c_1\frac{1}{2^k}\\
    &\le \|w^{k-1}-w^*\|_M+(1-\rho)c_1\Big(\frac{1}{2^k}+\frac{1}{2^{k-1}}\Big)\\
    &\le\cdots\le \|w^1-w^*\|_M+(1-\rho)c_1\sum_{m=1}^{k}\frac{1}{2^m}\\
    &\le \|w^1-w^*\|_M+(1-\rho)c_1\sum_{m=1}^{\infty}\frac{1}{2^m},\end{aligned} \tag{3.18}$$

    where $w^*\in\mathcal{W}^*$. Since the matrix $R_1$ is positive definite, it is easy to see that the matrix $M$ is positive definite. Combining this with (3.18), we have that $\{w^k\}$ is bounded.

    Now, we break up the discussion into two cases.

    Case 1. There exists a subsequence $\{w^{k_j}\}$ such that $\|w^{k_j}-w^*\|_M\le (1-\rho)\|\delta(w^{k_j})\|_M$, i.e., $\|w^{k_j}-w^*\|_M\le (1-\rho)c_1\frac{1}{2^{k_j}}$. Then $\{w^{k_j}\}$ converges to $w^*$. Since the series of positive terms $\sum_{k=1}^{\infty}\frac{1}{2^k}$ is convergent, by the Cauchy convergence criterion, for any $\epsilon>0$ there exists a positive integer $m$ such that

    $$\frac{1}{2^k}+\frac{1}{2^{k-1}}+\cdots+\frac{1}{2^{k_j}}\le \frac{\epsilon}{2c_1(1-\rho)} \tag{3.19}$$

    for all positive integers $k,k_j\ge m$ with $k\ge k_j$. From $\lim_{j\to\infty}w^{k_j}=w^*$, for the $\epsilon>0$ above, there exists an integer $j$ such that

    $$\|w^{k_j}-w^*\|_M<\frac{\epsilon}{2}. \tag{3.20}$$

    Combining (3.19) with (3.20), for sufficiently large positive integers $k,k_j$ with $k\ge k_j$, similarly to the derivation of (3.18), we obtain

    $$\begin{aligned}\|w^{k+1}-w^*\|_M&\le \|w^k-w^*\|_M+(1-\rho)c_1\frac{1}{2^k}\\
    &\le \|w^{k-1}-w^*\|_M+(1-\rho)c_1\Big(\frac{1}{2^k}+\frac{1}{2^{k-1}}\Big)\\
    &\le\cdots\le \|w^{k_j}-w^*\|_M+(1-\rho)c_1\Big(\frac{1}{2^k}+\frac{1}{2^{k-1}}+\cdots+\frac{1}{2^{k_j}}\Big)\\
    &<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon,\end{aligned}$$

    which indicates that the sequence $\{w^k\}$ converges globally to the point $w^*$.

    Case 2. For any subsequence $\{w^{k_j}\}$ we have $\|w^{k_j}-w^*\|_M>(1-\rho)\|\delta(w^{k_j})\|_M$, i.e., $\|w^{k+1}-w^*\|_M>(1-\rho)\|\delta(w^k)\|_M$ for all $k$.

    Since $\{w^k\}$ is bounded, there exists a constant $c_2>0$ such that $\|w^{k+1}-w^*\|_M\le c_2$. By the definition of $w^{k+1}$ in Step 3 of Algorithm 3.1, we have

    $$\begin{aligned}\|\rho(\hat w^k-w^k)+(w^k-w^*)\|_M^2&=\|w^{k+1}-w^*+(1-\rho)\delta(w^k)\|_M^2\\
    &\ge \big(\|w^{k+1}-w^*\|_M-(1-\rho)\|\delta(w^k)\|_M\big)^2\\
    &=\|w^{k+1}-w^*\|_M^2+(1-\rho)^2\|\delta(w^k)\|_M^2-2(1-\rho)\|w^{k+1}-w^*\|_M\|\delta(w^k)\|_M\\
    &\ge \|w^{k+1}-w^*\|_M^2+(1-\rho)^2\|\delta(w^k)\|_M^2-2(1-\rho)c_2\|\delta(w^k)\|_M.\end{aligned} \tag{3.21}$$

    Combining this with (3.12), one has

    $$\begin{aligned}\sum_{k=0}^{\infty}\|w^k-\hat w^k\|_H^2&\le \rho^{-1}\sum_{k=0}^{\infty}\Big(\|w^k-w^*\|_M^2-\|\rho\hat w^k+(1-\rho)w^k-w^*\|_M^2\Big)\\
    &\le 2\rho^{-1}(1-\rho)c_2\sum_{k=0}^{\infty}\|\delta(w^k)\|_M-\rho^{-1}(1-\rho)^2\sum_{k=0}^{\infty}\|\delta(w^k)\|_M^2+\rho^{-1}\sum_{k=0}^{\infty}\Big(\|w^k-w^*\|_M^2-\|w^{k+1}-w^*\|_M^2\Big)\\
    &\le 2\rho^{-1}(1-\rho)c_1c_2\sum_{k=0}^{\infty}\frac{1}{2^k}+\rho^{-1}\|w^0-w^*\|_M^2,\end{aligned} \tag{3.22}$$

    which together with the positive definiteness of H yields

    $$\lim_{k\to\infty}\|w^k-\hat w^k\|=0, \tag{3.23}$$

    and one has

    $$\lim_{k\to\infty}\|x_1^k-\hat x_1^k\|=0, \tag{3.24}$$
    $$\lim_{k\to\infty}\|x_2^k-\hat x_2^k\|=0, \tag{3.25}$$
    $$\lim_{k\to\infty}\|\lambda^k-\hat\lambda^k\|=0. \tag{3.26}$$

    By (3.23), the sequence $\{\hat w^k\}$ is also bounded since $\{w^k\}$ is bounded. Thus, it has at least one cluster point, say $w^\infty:=(x_1^\infty;x_2^\infty;\lambda^\infty)$, and suppose that the subsequence $\{\hat w^{k_l}\}$ converges to $w^\infty$. By (3.26), one has $\lim_{l\to\infty}\|\lambda^{k_l}-\hat\lambda^{k_l}\|=0$. Taking limits on both sides of

    $$\hat x_1^{k_l}-\hat x_2^{k_l}=\frac{1}{\gamma\beta}(\lambda^{k_l}-\hat\lambda^{k_l}),$$

    we have

    $$x_1^\infty-x_2^\infty=0.$$

    Furthermore, taking limits on both sides of (3.3) and (3.4), and using (3.24)–(3.26), we obtain

    $$\theta_1(x_1)-\theta_1(x_1^\infty)+(x_1-x_1^\infty)^\top(-\lambda^\infty)\ge 0,\quad \forall x_1\in\mathbb{R}^n,$$

    and

    $$\theta_2(x_2)-\theta_2(x_2^\infty)+(x_2-x_2^\infty)^\top\lambda^\infty\ge 0,\quad \forall x_2\in\mathbb{R}^n.$$

    Therefore, $(x_1^\infty,x_2^\infty,\lambda^\infty)\in\mathcal{W}^*$.

    Since the series of positive terms $\sum_{k=1}^{\infty}\frac{1}{2^k}$ is convergent, by the Cauchy convergence criterion, for any $\epsilon>0$ there exists a positive integer $m$ such that

    $$\frac{1}{2^k}+\frac{1}{2^{k-1}}+\cdots+\frac{1}{2^{k_l}}\le \frac{\epsilon}{3c_1(1-\rho)} \tag{3.27}$$

    for all positive integers $k,k_l\ge m$ with $k\ge k_l$. By (3.23) and $\lim_{l\to\infty}\hat w^{k_l}=w^\infty$, for the $\epsilon>0$ above, there exists an integer $l$ such that

    $$\|w^{k_l}-\hat w^{k_l}\|_M<\frac{\epsilon}{3},\qquad \|\hat w^{k_l}-w^\infty\|_M<\frac{\epsilon}{3}. \tag{3.28}$$

    Combining (3.27) with (3.28), for sufficiently large positive integers $k,k_l$ with $k\ge k_l$, similarly to the derivation of (3.18), we obtain

    $$\begin{aligned}\|w^{k+1}-w^\infty\|_M&\le \|w^k-w^\infty\|_M+(1-\rho)c_1\frac{1}{2^k}\\
    &\le \|w^{k-1}-w^\infty\|_M+(1-\rho)c_1\Big(\frac{1}{2^k}+\frac{1}{2^{k-1}}\Big)\\
    &\le\cdots\le \|w^{k_l}-w^\infty\|_M+(1-\rho)c_1\Big(\frac{1}{2^k}+\frac{1}{2^{k-1}}+\cdots+\frac{1}{2^{k_l}}\Big)\\
    &\le \|w^{k_l}-\hat w^{k_l}\|_M+\|\hat w^{k_l}-w^\infty\|_M+(1-\rho)c_1\Big(\frac{1}{2^k}+\frac{1}{2^{k-1}}+\cdots+\frac{1}{2^{k_l}}\Big)\\
    &<\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}=\epsilon,\end{aligned}$$

    which indicates that the sequence $\{w^k\}$ converges globally to the point $w^\infty\in\mathcal{W}^*$. The proof is completed.

    At the end of this section, we discuss the convergence rate of Algorithm 3.1. To this end, the following assumption is needed.

    Assumption 3.1. For the two sequences $\{w^k\}$ and $\{\hat w^k\}$, assume that there exists a positive constant $\sigma$ such that, for each $w^k$, there exists $w^*\in\mathcal{W}^*$ satisfying

    $$\|w^k-w^*\|\le \sigma\|\nabla L_k(w^k)\|,$$

    where

    $$\begin{aligned}L_k(w):={}&\theta_1(x_1)+\mu\big\langle \partial(\|\hat x_2^k\|_1),\,x_2\big\rangle-\langle\lambda^k,\,x_1-x_2\rangle+\frac{\beta}{2}\|x_1-x_2^k\|^2+\frac{\beta}{2}\|\hat x_1^k-x_2\|^2\\
    &+\frac{1}{2}\|x_1-x_1^k\|_{R_1}^2+\frac{1}{2}\|x_2-x_2^k\|_{R_2}^2+\frac{1}{2}\lambda^\top\lambda-\big\langle\lambda,\,\lambda^k-\gamma\beta(\hat x_1^k-\hat x_2^k)\big\rangle,\end{aligned} \tag{3.29}$$

    and $\partial(\|\hat x_2^k\|_1)$ denotes the subdifferential of the function $\|x_2\|_1$ at the point $\hat x_2^k$.

    Lemma 3.5. Suppose that Assumption 3.1 holds. Then there exists a positive constant $\hat\mu$ such that

    $$\|w^k-w^*\|\le \hat\mu\|w^k-\hat w^k\|.$$

    Proof. From (3.2), we have

    $$\begin{cases}\hat x_1^k=(A^\top A+\beta I+R_1)^{-1}(A^\top b+\lambda^k+\beta x_2^k+R_1x_1^k),\\[1mm] \hat x_2^k=(\beta I+R_2)^{-1}\big(-\mu\,\partial(\|\hat x_2^k\|_1)-\lambda^k+\beta\hat x_1^k+R_2x_2^k\big),\\[1mm] \hat\lambda^k=\lambda^k-\gamma\beta(\hat x_1^k-\hat x_2^k).\end{cases} \tag{3.30}$$

    By (3.29) and (3.30), a direct computation yields that

    $$\begin{aligned}\nabla L_k(w)=\begin{pmatrix}\nabla_{x_1}L_k\\ \nabla_{x_2}L_k\\ \nabla_{\lambda}L_k\end{pmatrix}&=\begin{pmatrix}A^\top(Ax_1-b)-\lambda^k+\beta(x_1-x_2^k)+R_1(x_1-x_1^k)\\ \mu\,\partial(\|\hat x_2^k\|_1)+\lambda^k-\beta(\hat x_1^k-x_2)+R_2(x_2-x_2^k)\\ \lambda-[\lambda^k-\gamma\beta(\hat x_1^k-\hat x_2^k)]\end{pmatrix}\\[1mm]
    &=\begin{pmatrix}(A^\top A+\beta I+R_1)\big[x_1-(A^\top A+\beta I+R_1)^{-1}(A^\top b+\lambda^k+\beta x_2^k+R_1x_1^k)\big]\\ (\beta I+R_2)\big[x_2-(\beta I+R_2)^{-1}(-\mu\,\partial(\|\hat x_2^k\|_1)-\lambda^k+\beta\hat x_1^k+R_2x_2^k)\big]\\ \lambda-[\lambda^k-\gamma\beta(\hat x_1^k-\hat x_2^k)]\end{pmatrix}\\[1mm]
    &=\begin{pmatrix}A^\top A+\beta I+R_1&0&0\\ 0&\beta I+R_2&0\\ 0&0&I\end{pmatrix}\begin{pmatrix}x_1-\hat x_1^k\\ x_2-\hat x_2^k\\ \lambda-\hat\lambda^k\end{pmatrix}=\bar Q(w-\hat w^k),\end{aligned}$$

    where $\bar Q=\begin{pmatrix}A^\top A+\beta I+R_1&0&0\\ 0&\beta I+R_2&0\\ 0&0&I\end{pmatrix}$. Combining this with Assumption 3.1, we obtain

    $$\|w^k-w^*\|\le \sigma\|\nabla L_k(w^k)\|=\sigma\|\bar Q(w^k-\hat w^k)\|\le \sigma\|\bar Q\|\,\|w^k-\hat w^k\|.$$

    Let $\hat\mu=\sigma\|\bar Q\|$. Then the desired result follows.

    Theorem 3.3. Suppose that the hypothesis of Theorem 3.2 and Assumption 3.1 hold, and that $\frac{3}{4}\hat\mu^2<(\eta-\rho)\rho<\frac{5}{4}\hat\mu^2$. Then the sequence $\{w^k\}$ generated by Algorithm 3.1 converges to a solution of (2.4) R-linearly.

    Proof. By Theorem 3.2, without loss of generality, we may assume that the sequence $\{w^k\}$ converges to $w^*\in\mathcal{W}^*$. A direct computation yields

    $$\begin{aligned}\|w^{k+1}-w^*\|_M^2&=\|\rho\hat w^k+(1-\rho)\psi(w^k)-w^*\|_M^2\\
    &=\|\rho(\hat w^k-w^k)+(w^k-w^*)+(\rho-1)(w^k-\psi(w^k))\|_M^2\\
    &\le \big(\|\rho(\hat w^k-w^k)+(w^k-w^*)\|_M+(1-\rho)\|\delta(w^k)\|_M\big)^2\\
    &\le 2\|\rho(\hat w^k-w^k)+(w^k-w^*)\|_M^2+2(1-\rho)^2\|\delta(w^k)\|_M^2\\
    &\le 2\|w^k-w^*\|_M^2-2\rho\|\hat w^k-w^k\|_H^2+2(1-\rho)^2\|\delta(w^k)\|_M^2\\
    &\le 2\|w^k-w^*\|_M^2-\frac{2\rho}{\hat\mu^2}\|w^k-w^*\|_H^2+2(1-\rho)^2\|\delta(w^k)\|_M^2\\
    &\le 2\Big(1-\frac{(\eta-\rho)\rho}{\hat\mu^2}\Big)\|w^k-w^*\|_M^2+2(1-\rho)^2\|\delta(w^k)\|_M^2,\end{aligned} \tag{3.31}$$

    where the second inequality follows from the fact that $a^2+b^2\ge 2ab$, $\forall a,b\in\mathbb{R}$, the third inequality is obtained by (3.12), the fourth inequality follows from Lemma 3.5, and the fifth inequality is by (3.13).

    Since the sequence $\{w^k\}$ converges to $w^*$, there exists a positive integer $k_0$ such that, for all $k\ge k_0$, one of the following two situations occurs.

    If $\|w^k-w^*\|_M\le 2(1-\rho)\|\delta(w^k)\|_M$, then by Definition 2.1 the following holds:

    $$\|w^k-w^*\|_M\le 2(1-\rho)c_1\frac{1}{2^k}. \tag{3.32}$$

    If $\|w^k-w^*\|_M>2(1-\rho)\|\delta(w^k)\|_M$, then combining this with (3.31), we obtain

    $$\frac{\|w^{k+1}-w^*\|_M^2}{\|w^k-w^*\|_M^2}\le \Bigg[\frac{2\Big(1-\frac{(\eta-\rho)\rho}{\hat\mu^2}\Big)\|w^k-w^*\|_M^2}{\|w^k-w^*\|_M^2}+\frac{2(1-\rho)^2\|\delta(w^k)\|_M^2}{\|w^k-w^*\|_M^2}\Bigg]\le 2\Big(1-\frac{(\eta-\rho)\rho}{\hat\mu^2}\Big)+\frac{1}{2}.$$

    Let $\hat\tau=\sqrt{2\big(1-\frac{(\eta-\rho)\rho}{\hat\mu^2}\big)+\frac{1}{2}}$. Then $0<\hat\tau<1$ by $\frac{3}{4}\hat\mu^2<(\eta-\rho)\rho<\frac{5}{4}\hat\mu^2$. Therefore, we have

    $$\|w^{k+1}-w^*\|_M\le \hat\tau\|w^k-w^*\|_M\le \hat\tau^2\|w^{k-1}-w^*\|_M\le\cdots\le \hat\tau^{k-k_0+1}\|w^{k_0}-w^*\|_M\le c_2\hat\tau^k,$$

    where $c_2$ is a positive constant, i.e., $\|w^k-w^*\|_M\le \frac{c_2}{\hat\tau}\,\hat\tau^k$. Combining this with (3.32), one has

    $$\|w^k-w^*\|_M\le c_3\max\Big\{\frac{1}{2^k},\,\hat\tau^k\Big\},$$

    where $c_3$ is a positive constant, and thus the desired result follows.

    In this section, we provide some numerical tests on the BPDN problem to show the efficiency of the method proposed in this paper. All codes are written in MATLAB 9.2.0.538062 and run on a Windows 10 PC with an AMD FX-7500 Radeon R7 (10 Compute Cores 4C+6G, 2.10 GHz) CPU and 8 GB of memory. In the experiments, we set $\mu=0.001$, and the measurement matrix $A$ is generated by the MATLAB script:

    [Q, R] = qr(A', 0); A = Q'.

    The original signal $\bar x$ is generated by p = randperm(n); $\bar x$(p(1:k)) = randn(k, 1).
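    Putting these pieces together, a plausible MATLAB sketch of the test-data generation is given below; the construction of the observation $b$ and the noise level sigma are not stated explicitly in the text and are illustrative assumptions.

```matlab
% Sketch of the test data used in Subsection 4.1 (sizes n, m, k from the text;
% the noisy observation b and its noise level sigma are our assumptions).
n = 2^12; m = 2^10; k = 2^7;
A = randn(m, n);                      % random Gaussian sensing matrix
[Q, R] = qr(A', 0);  A = Q';          % orthonormalize the rows of A
xbar = zeros(n, 1);                   % k-sparse original signal
p = randperm(n);  xbar(p(1:k)) = randn(k, 1);
sigma = 0.01;                         % assumed noise standard deviation
b = A*xbar + sigma*randn(m, 1);       % observation corrupted by Gaussian noise
```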

    To simplify the calculations in (3.2), we set $R_1=\tau I_n-A^\top A$, where $\tau>0$ is a constant. Combining this with the first equality in (3.30), one has

    $$\hat x_1^k=\frac{1}{\beta+\tau}\big(\lambda^k+\tau x_1^k+\beta x_2^k-g^k\big), \tag{4.1}$$

    where $g^k=A^\top(Ax_1^k-b)$.

    For the second equality in (3.30), we set $R_2=0$; then

    $$\hat x_2^k=\hat x_1^k-\frac{1}{\beta}\lambda^k-\frac{\mu}{\beta}\partial(\|\hat x_2^k\|_1). \tag{4.2}$$

    The subdifferential of the absolute value function |t| is given as follows

    $$\partial(|t|)=\begin{cases}-1, & \text{if } t<0,\\ [-1,1], & \text{if } t=0,\\ 1, & \text{if } t>0.\end{cases}$$

    Combining this with (4.2), for $i=1,2,\ldots,n$, we obtain

    $$(\hat x_2^k)_i=\begin{cases}\big(\hat x_1^k-\frac{1}{\beta}\lambda^k\big)_i+\frac{\mu}{\beta}, & \text{if } \big(\hat x_1^k-\frac{1}{\beta}\lambda^k\big)_i<-\frac{\mu}{\beta},\\[1mm] 0, & \text{if } \big|\big(\hat x_1^k-\frac{1}{\beta}\lambda^k\big)_i\big|\le\frac{\mu}{\beta},\\[1mm] \big(\hat x_1^k-\frac{1}{\beta}\lambda^k\big)_i-\frac{\mu}{\beta}, & \text{if } \big(\hat x_1^k-\frac{1}{\beta}\lambda^k\big)_i>\frac{\mu}{\beta},\end{cases}\ =\ \Big(\mathrm{shrink}_{\frac{\mu}{\beta}}\big(\hat x_1^k-\tfrac{1}{\beta}\lambda^k\big)\Big)_i,$$

    where $\mathrm{shrink}_c(\cdot)$ is the soft-thresholding operator defined as

    $$\mathrm{shrink}_c(t):=t-\min\{c,|t|\}\frac{t}{|t|},\quad t\in\mathbb{R},$$

    and $c>0$ is a constant. In addition, when $t=0$, $t/|t|$ should be taken as $0$. Therefore, we have the following formula to calculate $\hat x_2^k$:

    $$\hat x_2^k=\mathrm{shrink}_{\frac{\mu}{\beta}}\Big(\hat x_1^k-\frac{1}{\beta}\lambda^k\Big). \tag{4.3}$$

    Applying (4.1) and (4.3), the iteration (3.2) in Algorithm 3.1 can be written as follows:

    $$\begin{cases}\hat x_1^k=\dfrac{1}{\beta+\tau}\big(\lambda^k+\tau x_1^k+\beta x_2^k-g^k\big),\\[1mm] \hat x_2^k=\mathrm{shrink}_{\frac{\mu}{\beta}}\big(\hat x_1^k-\frac{1}{\beta}\lambda^k\big),\\[1mm] \hat\lambda^k=\lambda^k-\gamma\beta(\hat x_1^k-\hat x_2^k).\end{cases} \tag{4.4}$$
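    For concreteness, the following MATLAB sketch implements Algorithm 3.1 with the closed-form updates (4.4) and the relative-objective stopping rule used below. The function name prox_admm_bpdn, the iteration cap maxit, and the reading of $\psi$ in (2.6) as a componentwise truncation of the stacked vector $w^k$ are our own choices, not specified in the paper.

```matlab
% MATLAB sketch of Algorithm 3.1 using the closed-form updates (4.4).
function x2 = prox_admm_bpdn(A, b, mu, beta, gamma, tau, rho, C, maxit)
    [~, n] = size(A);
    x1 = zeros(n,1); x2 = zeros(n,1); lam = zeros(n,1);
    shrink = @(v, c) sign(v).*max(abs(v) - c, 0);          % soft-thresholding
    fold = 0.5*norm(A*x2 - b)^2 + mu*norm(x2, 1);          % objective (1.1)
    for k = 1:maxit
        g    = A'*(A*x1 - b);                              % g^k = A'(A*x1^k - b)
        x1h  = (lam + tau*x1 + beta*x2 - g)/(beta + tau);  % (4.1)
        x2h  = shrink(x1h - lam/beta, mu/beta);            % (4.3)
        lamh = lam - gamma*beta*(x1h - x2h);               % multiplier update
        % Step 3: w^{k+1} = rho*what^k + (1-rho)*psi(w^k), psi as in (2.6)
        w    = [x1; x2; lam];
        psiw = w .* (abs(w) > C/(n*2^k));                  % truncate small entries
        wnew = rho*[x1h; x2h; lamh] + (1 - rho)*psiw;
        x1 = wnew(1:n); x2 = wnew(n+1:2*n); lam = wnew(2*n+1:end);
        f = 0.5*norm(A*x2 - b)^2 + mu*norm(x2, 1);
        if abs(f - fold)/max(abs(fold), eps) < 1e-6, break; end
        fold = f;
    end
end
```

    With the Subsection 4.1 settings one could call, e.g., x = prox_admm_bpdn(A, b, 0.01, 0.2, 1.9, 0.5, 0.3, 1, 500), where rho = 0.3 is an illustrative choice in $(0,\eta)=(0,1/\gamma)$, and C = 1 and maxit = 500 are also illustrative.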

    For all methods, the stopping criterion is

    $$\frac{|f_k-f_{k-1}|}{|f_{k-1}|}<10^{-6},$$

    where $f_k$ denotes the objective value of (1.1) at the iterate $x^k$.

    In each test, we calculate the relative error

    $$\mathrm{RelErr}=\frac{\|x^{k+1}-\bar x\|}{\|\bar x\|},$$

    where $x^{k+1}$ denotes the recovered signal and $\bar x$ is the original signal.

    In this subsection, we apply Algorithm 2.3 in [11] (DFCGPM), Algorithm 2 in [12] (IMFPPA) and Algorithm 3.1 in this paper to recover a simulated sparse signal whose observation data is corrupted by additive white Gaussian noise. We set $n=2^{12}$, $m=2^{10}$, $k=2^{7}$, and the parameters of the tested algorithms are listed as follows:

    Algorithm 3.1: $\beta=0.2$, $\mu=0.01$, $\gamma=1.9$, $\tau=0.5$;

    IMFPPA: $\rho=0.01$, $\gamma=0.01$, $\tau=0.2$;

    DFCGPM: $C=1$, $r=0$, $\eta=1$, $\rho=0.4$, $\sigma=0.01$, $\gamma=1.9$.

    The original signal, the measurement and the reconstructed signal (marked by red points) obtained by Algorithm 3.1, DFCGPM and IMFPPA are given in Figure 1. Obviously, in the first, third, fourth and last subplots of Figure 1, all elements of the original signal are circled by the red points, which indicates that the three methods can recover the original signal quite well. Furthermore, we record the number of iterations (abbreviated as Iter), the CPU time in seconds (abbreviated as CPU Time) and the relative error (abbreviated as RelErr) of the three methods. Figure 1 indicates that Algorithm 3.1 has higher accuracy than the IMFPPA method, and that Algorithm 3.1 is also faster than the DFCGPM and IMFPPA methods. Thus, Algorithm 3.1 is an efficient method for sparse signal recovery.

    Figure 1.  Signal recovery result.

    In this subsection, Algorithm 3.1 proposed in this paper is compared with IMFPPA [12], DFCGPM [11], Algorithm 3.1 in [8] (PPRSM) and Algorithm 3.1 in [13] (LAPM) in terms of the CPU Time and the RelErr, where the parameters of PPRSM and LAPM are listed as follows:

    PPRSM: $\gamma=0.2$, $\sigma=0.1$;

    LAPM: $\beta=0.25$, $\tau=0.6$.

    All algorithms were run 5 times, and the averages of the CPU Time and the RelErr were recorded. The numerical results are listed in Table 1. From Table 1, it is obvious that the CPU time of Algorithm 3.1 is smaller than that of the other algorithms for different k-sparse signals, whether noise-free or with Gaussian noise, which shows that Algorithm 3.1 is faster. In addition, the accuracy of Algorithm 3.1 is also better than that of the other algorithms. So, Algorithm 3.1 is a more efficient method for recovering different k-sparse signals.

    Table 1. Comparison between noise-free and Gaussian-noise recovery for different k-sparse signals (Subsection 4.2).

    | k-Sparse signal | Method | CPU Time (no noise) | RelErr (no noise) | CPU Time (Gaussian noise) | RelErr (Gaussian noise) |
    | --- | --- | --- | --- | --- | --- |
    | 80 | Algorithm 3.1 | 1.0937 | 3.3782 | 1.1250 | 3.4187 |
    | 80 | IMFPPA | 4.6253 | 4.1837 | 4.7031 | 4.7528 |
    | 80 | DFCGPM | 4.9585 | 3.8375 | 5.2744 | 3.8501 |
    | 80 | PPRSM | 7.2164 | 4.6751 | 7.5363 | 4.6751 |
    | 80 | LAPM | 12.8906 | 4.5521 | 13.4894 | 4.5095 |
    | 120 | Algorithm 3.1 | 1.4531 | 3.0063 | 1.4688 | 3.1883 |
    | 120 | IMFPPA | 6.8872 | 4.9351 | 7.2431 | 5.2315 |
    | 120 | DFCGPM | 7.1964 | 3.5026 | 7.2173 | 3.8449 |
    | 120 | PPRSM | 8.3361 | 4.3693 | 8.5913 | 4.3559 |
    | 120 | LAPM | 13.2656 | 4.1577 | 14.7813 | 4.4311 |
    | 140 | Algorithm 3.1 | 1.6719 | 3.8609 | 1.7813 | 3.7207 |
    | 140 | IMFPPA | 7.1722 | 5.1312 | 8.8313 | 4.8873 |
    | 140 | DFCGPM | 7.6563 | 3.9326 | 8.3519 | 4.4939 |
    | 140 | PPRSM | 8.9961 | 4.5807 | 9.1320 | 4.5431 |
    | 140 | LAPM | 14.1875 | 4.8401 | 15.6406 | 4.9417 |
    | 160 | Algorithm 3.1 | 1.9844 | 4.4808 | 2.1875 | 4.3251 |
    | 160 | IMFPPA | 8.2192 | 4.9317 | 9.2268 | 4.9718 |
    | 160 | DFCGPM | 8.3548 | 3.7487 | 8.8897 | 4.4939 |
    | 160 | PPRSM | 9.4201 | 4.6442 | 9.8942 | 4.4337 |
    | 160 | LAPM | 16.3594 | 4.9220 | 17.6375 | 4.9417 |


    In this subsection, Algorithm 3.1 proposed in this paper is compared with DFCGPM [11] and IMFPPA [12] in terms of the CPU Time and the RelErr for different dimensions, where the parameters of DFCGPM and IMFPPA are given in Subsection 4.1. We set $m=n/4$, $k=n/32$ and use no additive white Gaussian noise. All algorithms were run 5 times, and the averages of the CPU Time and the RelErr were recorded. Some numerical results are listed in Table 2; IMFPPA and DFCGPM have difficulty solving the problem on our computer when $n\ge 10{,}000$ owing to the hardware constraints. The results are also drawn in Figure 2. The numerical results in Table 2 and Figure 2 indicate that the CPU Time and RelErr of Algorithm 3.1 are smaller than those of the other two tested methods, which shows that Algorithm 3.1 is more effective for large-scale problems. Thus, Algorithm 3.1 is very suitable for solving large-scale problems.

    Table 2. Comparison with the DFCGPM and IMFPPA methods for different dimensions (Subsection 4.3).

    | Dimension (n) | Algorithm 3.1 CPU Time | Algorithm 3.1 RelErr(%) | IMFPPA CPU Time | IMFPPA RelErr(%) | DFCGPM CPU Time | DFCGPM RelErr(%) |
    | --- | --- | --- | --- | --- | --- | --- |
    | 1024 | 0.0625 | 2.6592 | 0.6250 | 4.2036 | 1.1205 | 2.9219 |
    | 2048 | 0.2813 | 3.6120 | 2.3281 | 4.0544 | 2.1701 | 3.6048 |
    | 3072 | 0.7188 | 3.2769 | 3.7188 | 4.2890 | 5.2639 | 3.1776 |
    | 4096 | 1.5313 | 3.2059 | 7.6563 | 4.8450 | 9.7066 | 3.6737 |
    | 5120 | 2.4688 | 3.3400 | 12.6094 | 4.7120 | 13.0390 | 4.2521 |
    | 6144 | 3.6094 | 3.5239 | 16.8594 | 4.8617 | 16.3323 | 4.7696 |
    | 7168 | 4.3594 | 3.4662 | 22.7031 | 5.0325 | 24.6479 | 3.3939 |
    | 8192 | 6.1153 | 3.2848 | 29.1563 | 4.8082 | 32.3971 | 3.5910 |
    | 9216 | 7.7188 | 3.4343 | 39.4844 | 4.9214 | 41.6752 | 3.8263 |
    | 10238 | 9.5469 | 3.6712 | - | - | - | - |
    | 11262 | 10.4688 | 3.4449 | - | - | - | - |
    | 12286 | 12.6875 | 3.2772 | - | - | - | - |
    | 13310 | 14.2969 | 3.3361 | - | - | - | - |
    | 14334 | 17.5313 | 3.2459 | - | - | - | - |

    Figure 2.  Comparison with DFCGPM and IMFPPA in terms of CPU Time and RelErr (Subsection 4.3).

    The method proposed in this work has several possible extensions. Firstly, it could be numerically beneficial to tune the parameters, and thus it is meaningful to investigate the global convergence of the proposed method with adaptively adjusted parameters. Secondly, we may establish a global error bound for (1.1), just as was done for the generalized linear complementarity problem in [16,17,18], and may use the error bound estimation to establish a fast convergence rate of the new Algorithm 3.1 for solving (1.1). This is a topic for future research.

    Since the RNNM model introduced below is a convex program, we will also explore the possibility of applying the algorithm developed here for the BPDN model to the RNNM model, both theoretically and numerically. This will be a further research direction.

    The Regularized Nuclear Norm Minimization (RNNM) model is defined as follows [19,20]:

    $$\min_{X}\ \frac{1}{2}\|\mathcal{A}(X)-b\|_2^2+\mu\|X\|_*,$$

    where $\mu>0$ is a parameter, $b\in\mathbb{R}^m$ is an observed vector, and $\mathcal{A}:\mathbb{R}^{n_1\times n_2}\to\mathbb{R}^m$ is a known linear measurement map defined as

    $$\mathcal{A}(X)=\big[\mathrm{tr}(X^\top A^{(1)}),\ \mathrm{tr}(X^\top A^{(2)}),\ \ldots,\ \mathrm{tr}(X^\top A^{(m)})\big]^\top.$$

    Here, $A^{(i)}$ for $i=1,2,\ldots,m$ denotes a matrix of size $n_1\times n_2$, $\mathrm{tr}(\cdot)$ is the trace function, and $\|\cdot\|_*$ denotes the nuclear norm.

    In this paper, by choosing a special iterative format, we have developed a new proximal ADMM whose subproblems have closed-form solutions. Thus, it retains fast solving speed and high accuracy as the dimension increases, which makes the new algorithm very attractive for solving large-scale problems. The global convergence of the new method is discussed in detail. Furthermore, a linear convergence rate for the new algorithm is established. Some numerical experiments on sparse signal recovery are given, and compared with the state-of-the-art algorithms in [8,11,12,13], the method proposed in this paper is more accurate and efficient.

    The authors wish to give their sincere thanks to the Editor-in-chief, the Associated Editor and the anonymous referees for their valuable suggestions and helpful comments which improved the presentation of the paper.

    This study was funded by the Natural Science Foundation of China (Nos. 12071250, 11801309), and Shandong Provincial Natural Science Foundation (ZR2021MA088).

    No potential conflict of interest was reported by the authors.



    [1] J. F. Yang, Y. Zhang, Alternating direction algorithms for $\ell_1$-problems in compressive sensing, SIAM J. Sci. Comput., 33 (2011), 250–278. https://doi.org/10.1137/090777761 doi: 10.1137/090777761
    [2] X. M. Yuan, An improved proximal alternating direction method for monotone variational inequalities with separable structure, Comput. Optim. Appl., 49 (2011), 17–29. https://doi.org/10.1007/s10589-009-9293-y doi: 10.1007/s10589-009-9293-y
    [3] M. H. Xu, T. Wu, A class of linearized proximal alternating direction methods, J. Optim. Theory Appl., 151 (2011), 321–337. https://doi.org/10.1007/s10957-011-9876-5 doi: 10.1007/s10957-011-9876-5
    [4] Y. H. Xiao, H. N. Song, An inexact alternating directions algorithm for constrained total variation regularized compressive sensing problems, J. Math. Imaging Vis., 44 (2012), 114–127. https://doi.org/10.1007/s10851-011-0314-y doi: 10.1007/s10851-011-0314-y
    [5] Y. C. Yu, J. G. Peng, X. L. Han, A. G. Cui, A primal Douglas-Rachford splitting method for the constrained minimization problem in compressive sensing, Circuits Syst. Signal Process, 36 (2017), 4022–4049. https://doi.org/10.1007/s00034-017-0498-5 doi: 10.1007/s00034-017-0498-5
    [6] H. J. He, D. R. Han, A distributed Douglas-Rachford splitting method for multi-block convex minimization problems, Adv. Comput. Math., 42 (2016), 27–53. https://doi.org/10.1007/s10444-015-9408-1 doi: 10.1007/s10444-015-9408-1
    [7] B. S. He, H. Liu, Z. R. Wang, X. M. Yuan, A strictly contractive Peaceman-Rachford splitting method for convex programming, SIAM J. Optim., 24 (2014), 1011–1040. https://doi.org/10.1137/13090849X doi: 10.1137/13090849X
    [8] M. Sun, J. Liu, A proximal Peaceman-Rachford splitting method for compressive sensing, J. Appl. Math. Comput., 50 (2016), 349–363. https://doi.org/10.1007/s12190-015-0874-x doi: 10.1007/s12190-015-0874-x
    [9] B. S. He, F. Ma, X. M. Yuan, Convergence study on the symmetric version of ADMM with larger step sizes, SIAM J. Imaging Sci., 9 (2016), 1467–1501. https://doi.org/10.1137/15M1044448 doi: 10.1137/15M1044448
    [10] Y. H. Xiao, H. Zhu, A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing, J. Math. Anal. Appl., 405 (2013), 310–319. https://doi.org/10.1016/j.jmaa.2013.04.017 doi: 10.1016/j.jmaa.2013.04.017
    [11] M. Sun, M. Y. Tian, A class of derivative-free CG projection methods for nonsmooth equations with an application to the LASSO problem, Bull. Iran. Math. Soc., 46 (2020), 183–205. https://doi.org/10.1007/s41980-019-00250-2 doi: 10.1007/s41980-019-00250-2
    [12] H. C. Sun, M. Sun, B. H. Zhang, An inverse matrix-Free proximal point algorithm for compressive sensing, ScienceAsia, 44 (2018), 311–318. https://doi.org/10.2306/scienceasia1513-1874.2018.44.311 doi: 10.2306/scienceasia1513-1874.2018.44.311
    [13] D. X. Feng, X. Y. Wang, A linearly convergent algorithm for sparse signal reconstruction, J. Fixed Point Theory Appl., 20 (2018), 154. https://doi.org/10.1007/s11784-018-0635-1 doi: 10.1007/s11784-018-0635-1
    [14] W. T. Yin, S. Osher, D. Goldfarb, J. Darbon, Bregman iterative algorithms for $\ell_1$-minimization with applications to compressed sensing, SIAM J. Imaging Sci., 1 (2008), 143–168. https://doi.org/10.1137/070703983 doi: 10.1137/070703983
    [15] Y. H. Xiao, Q. Y. Wang, Q. J. Hu, Non-smooth equations based method for $\ell_1$-norm problems with applications to compressed sensing, Nonlinear Anal.-Theor., 74 (2011), 3570–3577. https://doi.org/10.1016/j.na.2011.02.040 doi: 10.1016/j.na.2011.02.040
    [16] H. C. Sun, Y. J. Wang, L. Q. Qi, Global error bound for the generalized linear complementarity problem over a polyhedral cone, J. Optim. Theory Appl., 142 (2009), 417–429. https://doi.org/10.1007/s10957-009-9509-4 doi: 10.1007/s10957-009-9509-4
    [17] H. C. Sun, Y. J. Wang, Further discussion on the error bound for generalized linear complementarity problem over a polyhedral cone, J. Optim. Theory Appl., 159 (2013), 93–107. https://doi.org/10.1007/s10957-013-0290-z doi: 10.1007/s10957-013-0290-z
    [18] H. C. Sun, Y. J. Wang, S. J. Li, M. Sun, A sharper global error bound for the generalized linear complementarity problem over a polyhedral cone under weaker conditions, J. Fixed Point Theory Appl., 20 (2018), 75. https://doi.org/10.1007/s11784-018-0556-z doi: 10.1007/s11784-018-0556-z
    [19] E. J. Candès, Y. Plan, Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements, IEEE T. Inform. Theory, 57 (2011), 2342–2359. https://doi.org/10.1109/TIT.2011.2111771 doi: 10.1109/TIT.2011.2111771
    [20] W. D. Wang, F. Zhang, J. J. Wang, Low-rank matrix recovery via regularized nuclear norm minimization, Appl. Comput. Harmon Anal., 54 (2021), 1–19. https://doi.org/10.1016/j.acha.2021.03.001 doi: 10.1016/j.acha.2021.03.001
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)