    We consider the following problem with inequality constraints:

$$\min f(x) \quad \text{s.t.} \quad g_i(x)\le 0,\ i\in I, \tag{P}$$

where $I=\{1,2,\dots,m\}$ is a finite set of integers, and $f:\mathbb{R}^n\to\mathbb{R}$, $g_i:\mathbb{R}^n\to\mathbb{R}$, $i\in I$, are continuously differentiable. The feasible set of (P) is denoted by $X=\{x\in\mathbb{R}^n \mid g_i(x)\le 0,\ i\in I\}$.

Many methods have been proposed to solve (P), such as interior-point methods [1,2], sequential quadratic programming (SQP) methods [3,4], feasible direction methods [5,6,7], and penalty function methods [8,9,10]. The penalty function method is widely used because of its simplicity and efficiency. Courant [8] first proposed the exterior penalty function method, and several variants have since been presented. Most of these methods add a multiple of the violation of each constraint to the objective function, and a sequence of unconstrained problems is solved in place of the original constrained problem. In the exact penalty method, by contrast, a single unconstrained problem (rather than a sequence) replaces the original constrained problem. A penalty function is exact if there is some $\alpha^*$ such that an optimal solution to (P) is also an optimal solution to the penalty optimization problem for all $\alpha\ge\alpha^*$. In 1967, Zangwill [9] proposed the following popular exact penalty function:

$$F_1(x,\alpha)=f(x)+\alpha\sum_{i\in I}\max\{g_i(x),0\},$$

where $\alpha>0$ is a penalty parameter, and the corresponding penalty problem of (P) is given by the following:

$$\min F_1(x,\alpha)\quad \text{s.t.}\quad x\in\mathbb{R}^n. \tag{P_1}$$

    In 1981, Fletcher unified the nonsmooth exact penalty function and defined it as follows:

$$F_p(x,\alpha)=f(x)+\alpha\|g^+(x)\|_p,$$

    where the corresponding penalty problem of (P) is given by the following:

$$\min F_p(x,\alpha)\quad\text{s.t.}\quad x\in\mathbb{R}^n, \tag{P_p}$$

where $\alpha$ is a penalty parameter, $1\le p\le\infty$, and $g_i^+=\max\{g_i,0\}$. When $\alpha\ge\|\lambda\|$, where $\lambda$ is a Lagrange multiplier vector of (P), and $g^+(x^*)=0$, the minimizer $x^*$ of $(P_p)$ is an optimal solution of the problem (P). In particular, $(P_p)$ is equivalent to $(P_1)$ when $p=1$. Moreover, when $p=\infty$, the $L_\infty$ penalty function [10] can be written as follows:

$$F_\infty(x,\alpha)=f(x)+\alpha\|g^+(x)\|_\infty=f(x)+\alpha\max\{0,g_1(x),\dots,g_m(x)\}.$$
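To make the nonsmoothness concrete, the following is a minimal sketch in Python (the helper names are ours, not from the paper) of the $L_1$ and $L_\infty$ penalties above; both are exact but nondifferentiable wherever some $g_i(x)=0$.

```python
# Minimal sketch of the L1 and L-infinity exact penalties (helper names are ours).
def l1_penalty(f, gs, alpha):
    """Zangwill's penalty F_1(x, alpha) = f(x) + alpha * sum_i max(g_i(x), 0)."""
    return lambda x: f(x) + alpha * sum(max(g(x), 0.0) for g in gs)

def linf_penalty(f, gs, alpha):
    """F_inf(x, alpha) = f(x) + alpha * max(0, g_1(x), ..., g_m(x))."""
    return lambda x: f(x) + alpha * max([0.0] + [g(x) for g in gs])
```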

The exact penalty function has been extensively discussed [11,12,13]. Most exact penalty functions are nonsmooth because of the presence of the max-value function, yet most existing unconstrained optimization algorithms require a smooth objective. Consequently, numerous smoothing techniques [14,15,16] have been devised to approximate the max-value function. In practice, existing exact penalty function algorithms must gradually increase the penalty parameter to find an optimal solution, and excessively large penalty parameters may produce ill-conditioned problems. To address these limitations, new forms of penalty functions have been proposed.

The objective penalty function, a special form of the penalty function, is discussed in [17,18,19], where one kind of objective penalty function is defined as follows:

$$F(x,M)=(f(x)-M)^p+\sum_{i\in I}\big(g_i^+(x)\big)^p, \tag{P_M}$$

where $p>0$, and $M$ is an objective penalty parameter. Assume that $x^*$ is an optimal solution and $f(x^*)$ is the optimal value of (P). The optimal solution $x(M_k)$ of $(P_M)$ tends to $x^*$ as a convergent sequence $M_k$ tends to $f(x^*)$. In 2011, Meng et al. [20] proposed the following objective penalty function:

$$F_Q(x,M)=Q(f(x)-M)+\sum_{i\in I}P(g_i(x)), \tag{P_Q}$$

where $Q(\cdot):\mathbb{R}\to\mathbb{R}\cup\{+\infty\}$ and $P(\cdot):\mathbb{R}\to\mathbb{R}\cup\{+\infty\}$ respectively satisfy the following conditions:

$$\begin{cases}Q(t)>0, & \text{for all } t\in\mathbb{R}\setminus\{0\},\\ Q(0)=0,\\ Q(t_1)<Q(t_2), & \text{for } 0\le t_1<t_2,\end{cases} \tag{1.1}$$

    and

$$\begin{cases}P(t)=0, & \text{if and only if } t\le 0,\\ P(t)>0, & \text{if and only if } t>0.\end{cases}$$
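For instance, $Q(t)=t^2$ (the choice used in the experiments of Section 4) satisfies (1.1), and $P(t)=(\max\{t,0\})^2$ satisfies the conditions on $P$; both are differentiable, as the next paragraph requires. A one-line sketch (an illustrative choice, not prescribed by the paper):

```python
# An admissible differentiable pair (illustrative choice):
Q = lambda t: t * t             # Q(0) = 0, Q(t) > 0 otherwise, increasing on t >= 0
P = lambda t: max(t, 0.0) ** 2  # zero exactly on t <= 0, positive for t > 0
```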

If $Q(t)$, $P(t)$, $f(x)$, and $g_i(x)\ (i\in I)$ are all differentiable, then $F_Q(x,M)$ is clearly also differentiable. For a comprehensive discussion of the objective penalty function: the sufficient and necessary stability condition used to determine whether the objective penalty function is exact for a global solution is proved in [21]; different forms of the objective penalty function have been proposed in [22,23]; the smoothness of the objective penalty function is discussed in [24,25,26]; and lower-order smoothed objective penalty functions based on filling properties are discussed in [27].

To avoid the ill-conditioning caused by overly large penalty parameters in the traditional penalty function method, in this study we adopt the approach of adjusting the objective penalty parameter. Furthermore, to improve computational efficiency on constrained optimization problems with a large number of constraints, a smooth approximation is employed using a modified flattened aggregate function.

This paper presents a new objective penalty function built on the flattened aggregate function, which is both smooth and exact. The proposed algorithm offers a powerful way to tackle inequality-constrained optimization problems, even with many constraints. The rest of this paper is organized as follows: Section 2 proposes a modified flattened aggregate function to smoothly approximate the max-value function and discusses its properties; Section 3 constructs an objective penalty function from the modified flattened aggregate function and proposes an algorithm based on it, whose convergence is proven; and Section 4 demonstrates the efficiency of the algorithm through numerical examples.

    In this section, we will construct a modified flattened aggregate function to smoothly approximate the max-value function; its specific form is as follows:

$$\hat f_p(x)=\frac{1}{p}\ln\Big(\sum_{i\in I}\exp\big(p\,\phi(g_i(x))\big)\Big),$$

    where

$$\phi(t)=\begin{cases}0, & \text{if } t\le 0,\\ t, & \text{if } t>\epsilon,\\ -\dfrac{1}{\epsilon^2}t^3+\dfrac{2}{\epsilon}t^2, & \text{if } 0<t\le\epsilon.\end{cases}$$
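A small numerical sketch of $\phi$ and $\hat f_p$ follows (our own rendering, not code from the paper; it assumes NumPy/SciPy and evaluates $\hat f_p$ through `logsumexp` to stay stable for large $p$):

```python
import numpy as np
from scipy.special import logsumexp

def phi(t, eps):
    """Smoothing kernel: 0 for t <= 0, t for t > eps, and the cubic
    -t^3/eps^2 + 2t^2/eps on (0, eps], which matches both neighboring
    pieces in value and slope (phi(eps) = eps, phi'(eps) = 1, phi'(0) = 0)."""
    t = np.asarray(t, dtype=float)
    cubic = -t**3 / eps**2 + 2.0 * t**2 / eps
    return np.where(t <= 0.0, 0.0, np.where(t > eps, t, cubic))

def f_hat(gvals, p, eps):
    """Flattened aggregate function (1/p) * ln(sum_i exp(p * phi(g_i)))."""
    return logsumexp(p * phi(gvals, eps)) / p
```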

Some properties of the flattened aggregate function $\hat f_p(x)$ are discussed as follows.

Proposition 2.1. For any $x\in\mathbb{R}^n$, let $g_{\max}(x):=\max\{0,g_1(x),\dots,g_m(x)\}$. We have the following:

(1) $\hat f_p(x)=\frac{1}{p}\ln m$, if $g_{\max}(x)\le 0$.

(2) $\frac{1}{p}\ln m<\hat f_p(x)\le\frac{1}{p}\ln m+\epsilon$, if $0<g_{\max}(x)\le\epsilon$.

(3) $g_{\max}(x)\le\hat f_p(x)\le g_{\max}(x)+\frac{1}{p}\ln m$, if $g_{\max}(x)>\epsilon$.

Proof. (1) If $g_{\max}(x)\le 0$, then $\phi(g_i(x))=0$ for every $i\in I$, so

$$\hat f_p(x)=\frac{1}{p}\ln\big(m\exp(0)\big)=\frac{1}{p}\ln m.$$

(2) A simple verification shows that $\phi(t)$ is monotonically increasing. For a given $x$, we obtain the following:

$$\phi(g_i(x))\le\phi(g_{\max}(x)).$$

If $0<g_{\max}(x)\le\epsilon$, we have $\phi(g_i(x))\le\epsilon$, and then we obtain the following:

$$\hat f_p(x)=\frac{1}{p}\ln\Big(\sum_{i\in I}\exp\big(p\,\phi(g_i(x))\big)\Big)\le\frac{1}{p}\ln\big(m\exp(p\epsilon)\big)=\frac{1}{p}\ln m+\epsilon.$$

Moreover, since $g_{\max}(x)>0$ implies $\phi(g_i(x))>0$ for at least one $i$, we have $\sum_{i\in I}\exp\big(p\,\phi(g_i(x))\big)>m$. Hence, by combining (1), we have the following:

$$\frac{1}{p}\ln m<\hat f_p(x)\le\frac{1}{p}\ln m+\epsilon.$$

(3) If $g_{\max}(x)>\epsilon$, then $\phi(g_{\max}(x))=g_{\max}(x)$, and we obtain the following:

$$\hat f_p(x)\le\frac{1}{p}\ln\Big(m\exp\big(p\,\phi(g_{\max}(x))\big)\Big)=\frac{1}{p}\ln\Big(m\exp\big(p\,g_{\max}(x)\big)\Big)=\frac{1}{p}\ln m+g_{\max}(x).$$

Moreover, since $\sum_{i\in I}\exp\big(p\,\phi(g_i(x))\big)\ge\exp\big(p\,g_{\max}(x)\big)$, we have $g_{\max}(x)\le\hat f_p(x)$.

Hence,

$$g_{\max}(x)\le\hat f_p(x)\le g_{\max}(x)+\frac{1}{p}\ln m.$$
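The three bounds can be checked numerically with the sketch above (`f_hat` and `phi` are the illustrative helpers defined earlier; all values here are arbitrary):

```python
# Numerical check of Proposition 2.1 (m = 5, p = 1e3, eps = 1e-2 are arbitrary).
m, p, eps = 5, 1e3, 1e-2
cases = [np.array([-1.0, -0.5, -2.0, -0.1, -3.0]),       # g_max = 0
         np.array([-1.0, 0.5 * eps, -2.0, -0.1, -3.0]),  # 0 < g_max <= eps
         np.array([-1.0, 0.7, -2.0, 0.3, -3.0])]         # g_max > eps
for g in cases:
    gmax = max(0.0, g.max())
    print(f"g_max = {gmax:.4f}, f_hat = {f_hat(g, p, eps):.6f}, "
          f"ln(m)/p = {np.log(m)/p:.6f}")
```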

Proposition 2.2. For any given $x\in\mathbb{R}^n$, the new flattened aggregate function $\hat f_p(x)$ monotonically decreases as $p$ increases. When $p\to\infty$ and $\epsilon\to0$, $\hat f_p(x)$ converges to the max-value function $\max\{0,g_1(x),\dots,g_m(x)\}$.

Proof. Consider the vector-valued function $\varphi(x)$ whose components are as follows:

$$\varphi_i(x)=\exp\{\phi(g_i(x))\},\quad i=1,2,\dots,m,$$

and its $L_p$ norm can be given as follows:

$$\|\varphi(x)\|_p=\Big(\sum_{i=1}^m(\varphi_i(x))^p\Big)^{1/p}=\Big(\sum_{i=1}^m\exp\big(p\,\phi(g_i(x))\big)\Big)^{1/p}.$$

By the monotonicity of $L_p$ norms (a consequence of Jensen's inequality), if $q\ge p\ge 1$, then we obtain the following:

$$\|\varphi(x)\|_p\ge\|\varphi(x)\|_q. \tag{2.1}$$

Taking the natural logarithm on both sides of inequality (2.1) and noting that $\ln\|\varphi(x)\|_p=\hat f_p(x)$, we have the following:

$$\hat f_p(x)\ge\hat f_q(x).$$

Hence, $\hat f_p(x)$ monotonically decreases as $p$ increases.

Next, we prove the second half of the proposition. By Proposition 2.1, if $g_{\max}(x)\le 0$, then $\hat f_p(x)=\frac{1}{p}\ln m$. When $p\to\infty$, we obtain the following:

$$\lim_{p\to\infty}\hat f_p(x)=\lim_{p\to\infty}\frac{1}{p}\ln m=0.$$

If $0<g_{\max}(x)\le\epsilon$, then $\frac{1}{p}\ln m<\hat f_p(x)\le\frac{1}{p}\ln m+\epsilon$. When $p\to\infty$ and $\epsilon\to 0$, we obtain the following:

$$\lim_{p\to\infty,\ \epsilon\to 0}\hat f_p(x)=\lim_{p\to\infty,\ \epsilon\to 0}\Big(\frac{1}{p}\ln m+\epsilon\Big)=0.$$

Similarly, if $g_{\max}(x)>\epsilon$, then $g_{\max}(x)\le\hat f_p(x)\le g_{\max}(x)+\frac{1}{p}\ln m$. When $p\to\infty$, we obtain the following:

$$\lim_{p\to\infty}\hat f_p(x)\le\lim_{p\to\infty}\Big(g_{\max}(x)+\frac{1}{p}\ln m\Big)=g_{\max}(x).$$

Then, we have $\lim_{p\to\infty}\hat f_p(x)=g_{\max}(x)$. Hence,

$$\lim_{p\to\infty,\ \epsilon\to 0}\hat f_p(x)=\max\{0,g_1(x),\dots,g_m(x)\}.$$
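Proposition 2.2 can be illustrated with the earlier sketch (illustrative values again; `f_hat` is the hypothetical helper defined above): as $p$ grows, $\hat f_p$ decreases toward $g_{\max}(x)$.

```python
# f_hat_p decreases monotonically in p and tends to g_max = 0.4 here.
g = np.array([-0.2, 0.4, 0.1])
for p in (10.0, 1e2, 1e3, 1e4):
    print(p, f_hat(g, p, eps=1e-3))  # the ln(3)/p overshoot shrinks as p grows
```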

In this section, we propose a new objective penalty function

$$F(x,M,\rho)=Q(f(x)-M)+\rho\max\{0,g_1(x),\dots,g_m(x)\}, \tag{P(M,\rho)}$$

where $Q(\cdot)$ is defined by Eq (1.1). Due to the presence of the max-value function $\max\{0,g_1(x),\dots,g_m(x)\}$, $F(x,M,\rho)$ is not differentiable. In Section 2, we have already shown that the flattened aggregate function smoothly approximates the max-value function. Therefore, we consider the following smooth unconstrained optimization problem:

$$\min F_p(x,M,\rho),\quad x\in Y, \tag{P(p,M,\rho)}$$

where $X=\{x\in\mathbb{R}^n \mid g_i(x)\le 0,\ i\in I\}\subseteq Y\subseteq\mathbb{R}^n$ and $F_p(x,M,\rho)$ can be written as follows:

$$F_p(x,M,\rho)=Q(f(x)-M)+\frac{\rho}{p}\ln\Big(\sum_{i\in I}\exp\big(p\,\phi(g_i(x))\big)\Big). \tag{3.1}$$
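In code, (3.1) simply composes the earlier `f_hat` sketch with $Q$; the following is our illustrative rendering (with $Q(t)=t^2$, the choice used in Section 4):

```python
def F_p(x, M, rho, f, gs, p=1e9, eps=1e-3, Q=lambda t: t * t):
    """Smoothed objective penalty (3.1):
    Q(f(x) - M) + (rho/p) * ln(sum_i exp(p * phi(g_i(x))))."""
    gvals = np.array([g(x) for g in gs])
    return Q(f(x) - M) + rho * f_hat(gvals, p, eps)
```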

If there is an $M^*$ such that an optimal solution $x^*$ to (P) is also an optimal solution to $(P(p,M,\rho))$ for all $M<M^*$, then $F(x,M,\rho)$ is called an exact objective penalty function. If an optimal solution of $(P(p,M,\rho))$ is also an optimal solution to (P), then $M$ is called an exact value of the objective penalty parameter. The assumptions and proofs of the subsequent theorems rely fundamentally on the properties of the flattened aggregate function established above. Next, we discuss the exactness of $F_p(x,M,\rho)$.

Theorem 3.1. Let $x^*$ be an optimal solution to (P). For some $\rho>0$, $p>0$, if $M=f(x^*)$, then $x^*$ is an optimal solution to $(P(p,M,\rho))$.

Proof. Since $x^*$ is an optimal solution to (P), $g_i(x^*)\le 0$ for all $i\in I$. We obtain the following:

$$F_p(x^*,M,\rho)=Q(f(x^*)-M)+\frac{\rho}{p}\ln\Big(\sum_{i\in I}\exp\big(p\,\phi(g_i(x^*))\big)\Big)=Q(f(x^*)-M)+\frac{\rho}{p}\ln m.$$

It follows from $M=f(x^*)$ that $F_p(x^*,M,\rho)=\frac{\rho}{p}\ln m$. Since $F_p(x,M,\rho)\ge\frac{\rho}{p}\ln m$ for all $x\in Y$, we conclude that $x^*$ is an optimal solution to $(P(p,M,\rho))$.

    This completes the proof.

Theorem 3.2. Let $x^*$ be an optimal solution to (P), and let $x_M\in Y$ be an optimal solution to $(P(p,M,\rho))$. Assume that the feasible set $X$ is connected and compact, $f_{\min}=\min_{x\in X}f(x)$, $f_{\max}=\max_{x\in X}f(x)$, and $f(x)$ is continuous. For a given $M$, $\rho>0$, and $p$ large enough, the following three assertions hold:

(1) If $F_p(x_M,M,\rho)=\frac{\rho}{p}\ln m$, then $x_M$ is a feasible solution to (P) and $f_{\min}\le M\le f_{\max}$.

(2) If $F_p(x_M,M,\rho)>\frac{\rho}{p}\ln m$ and $M\le f_{\max}$, then $M<f_{\min}$.

(3) If $F_p(x_M,M,\rho)>\frac{\rho}{p}\ln m$, $M<f_{\min}$, and $x_M$ is a feasible solution to (P), then $x_M$ is an optimal solution to (P).

Proof. (1) From the properties of the flattened aggregate function, we know that

$$F_p(x_M,M,\rho)\ge\frac{\rho}{p}\ln m.$$

It follows from (3.1) that

$$F_p(x_M,M,\rho)=Q(f(x_M)-M)+\frac{\rho}{p}\ln\Big(\sum_{i\in I}\exp\big(p\,\phi(g_i(x_M))\big)\Big)=\frac{\rho}{p}\ln m.$$

Since $Q(\cdot)\ge 0$, we have the following:

$$Q(f(x_M)-M)=0,$$

and

$$\frac{\rho}{p}\ln\Big(\sum_{i\in I}\exp\big(p\,\phi(g_i(x_M))\big)\Big)=\frac{\rho}{p}\ln m.$$

Then, we obtain $f(x_M)=M$ and $g_i(x_M)\le 0$ for all $i\in I$. Hence, $x_M$ is a feasible solution to (P) and $f_{\min}\le M\le f_{\max}$.

(2) Since $F_p(x_M,M,\rho)>\frac{\rho}{p}\ln m$, we have the following:

$$\frac{\rho}{p}\ln m<F_p(x_M,M,\rho)\le F_p(x,M,\rho)=Q(f(x)-M)+\frac{\rho}{p}\ln m,\quad\forall x\in X.$$

Suppose that $f_{\min}\le M$; then, we have $f_{\min}\le M\le f_{\max}$. Since $f$ is continuous on the connected and compact set $X$, there is some $\hat x\in X$ such that $f(\hat x)=M$. Then, we obtain $F_p(\hat x,M,\rho)=\frac{\rho}{p}\ln m$.

Furthermore, $x_M$ is an optimal solution to $(P(p,M,\rho))$ and $F_p(x_M,M,\rho)>\frac{\rho}{p}\ln m$; then, we have the following:

$$\frac{\rho}{p}\ln m<F_p(x_M,M,\rho)\le F_p(\hat x,M,\rho)=\frac{\rho}{p}\ln m,$$

which leads to a contradiction. Hence, $M<f_{\min}$.

(3) From the given conditions, we obtain $F_p(x_M,M,\rho)\le F_p(x,M,\rho)$ for all $x\in X$, that is

$$Q(f(x_M)-M)+\frac{\rho}{p}\ln\Big(\sum_{i\in I}\exp\big(p\,\phi(g_i(x_M))\big)\Big)\le Q(f(x)-M)+\frac{\rho}{p}\ln\Big(\sum_{i\in I}\exp\big(p\,\phi(g_i(x))\big)\Big).$$

Since both $x$ and $x_M$ are feasible solutions to the original problem, the two logarithmic terms equal $\frac{\rho}{p}\ln m$, and we obtain the following:

$$Q(f(x_M)-M)\le Q(f(x)-M),\quad\forall x\in X. \tag{3.2}$$

Suppose that there is some $\bar x\in X$ such that $f(\bar x)\le M$; then, since $M<f_{\min}\le f(x^*)$, we have $f(\bar x)\le M<f(x^*)$, which contradicts the fact that $x^*$ is an optimal solution to (P). Therefore, for any $x\in X$, we have the following:

$$f(x)-M>0. \tag{3.3}$$

Additionally, since $x_M$ is a feasible solution to (P), it follows from (3.3) that

$$f(x_M)-M>0. \tag{3.4}$$

It follows from (3.2)–(3.4) and the monotonicity of $Q(\cdot)$ on $[0,+\infty)$ that

$$f(x_M)-M\le f(x)-M,\quad\forall x\in X.$$

Hence, $f(x_M)\le f(x)$ for all $x\in X$. This completes the proof.

Based on Theorems 3.1 and 3.2, under the stated conditions, an optimal solution of $(P(p,M,\rho))$ is an optimal solution of (P). This establishes that the objective penalty function is both smooth and exact. Based on the preceding analysis, we develop a new algorithm called the flattened aggregate objective penalty function (FAOPF) algorithm.

Definition 3.1. $x$ is an $\epsilon$-approximately optimal solution if

$$|f(x)-f(x^*)|\le\epsilon,$$

where $f(x^*)$ is the optimal value of (P).

    FAOPF Algorithm

Step 1. Choose $x_0\in X$ and $a_1<\min_{x\in X}f(x)<b_1$ with $a_1<M_1<b_1$; $a_1$ and $b_1$ are both randomly selected within a given range, and $M_1=\frac{3}{4}b_1+\frac{1}{4}a_1$ or $M_1=\frac{1}{4}b_1+\frac{3}{4}a_1$. Given $\rho_1>0$, $n=2$, $\epsilon=10^{-4}$, and $p=10^9$, let $k:=1$.

Step 2. Solve $\min_{x\in Y}F_p(x,M_k,\rho_k)$. Let $x_k$ be the minimizer.

Step 3. If $x_k$ is not a feasible solution to (P), then let $b_{k+1}=b_k$, $a_{k+1}=M_k$, $M_{k+1}=a_{k+1}+\frac{3}{4}(b_{k+1}-a_{k+1})$, $\rho_{k+1}=n\rho_k$, and go to Step 5; otherwise, if $x_k\in X$, then go to Step 4.

Step 4. If $F_p(x_k,M_k,\rho_k)=\frac{\rho_k}{p}\ln m$, then let $a_{k+1}=a_k$, $b_{k+1}=M_k$, $M_{k+1}=a_{k+1}+\frac{1}{4}(b_{k+1}-a_{k+1})$, $\rho_{k+1}=\rho_k$, and go to Step 5; otherwise, if $F_p(x_k,M_k,\rho_k)>\frac{\rho_k}{p}\ln m$, then stop: $x_k$ is an optimal solution to (P).

Step 5. If $|b_{k+1}-a_{k+1}|<\epsilon$, then stop: $x_k$ is an $\epsilon$-approximately optimal solution to (P). Otherwise, let $k:=k+1$ and go to Step 2.

Remark. In the FAOPF algorithm, it is assumed that one can always obtain $a_1<\min_{x\in X}f(x)$.
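The following Python sketch of Steps 1–5 is ours, not the authors' MATLAB code. It reuses the `F_p` sketch above, uses BFGS for the inner minimization of Step 2, tests feasibility and the Step 4 equality within small tolerances (`feas_tol` and `1e-8`, our own safeguards), and caps the outer loop with `max_iter`:

```python
import numpy as np
from scipy.optimize import minimize

def faopf(f, gs, x0, a, b, rho=1.0, n=2.0, tol=1e-4, p=1e9, eps=1e-3,
          feas_tol=1e-8, max_iter=100):
    """Sketch of the FAOPF algorithm; assumes a < min_{x in X} f(x) < b."""
    m = len(gs)
    M = 0.75 * b + 0.25 * a                 # M_1 (one of the two Step 1 choices)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Step 2: minimize the smoothed objective penalty F_p(., M_k, rho_k).
        res = minimize(lambda y: F_p(y, M, rho, f, gs, p, eps), x, method="BFGS")
        x = res.x
        if sum(max(g(x), 0.0) for g in gs) > feas_tol:  # Step 3: x_k infeasible
            a, rho = M, n * rho
            M = a + 0.75 * (b - a)
        else:                                           # Step 4: x_k feasible
            if res.fun > rho * np.log(m) / p + 1e-8:    # F_p > (rho_k/p) ln m
                return x                                # exact case: x_k optimal
            b = M
            M = a + 0.25 * (b - a)
        if b - a < tol:                                 # Step 5
            return x                                    # eps-approximately optimal
    return x
```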

Next, we prove that the FAOPF algorithm is well-defined. Assume that $g_i(x)\ (i\in I)$ are continuous on $\mathbb{R}^n$. Let

$$S(L,f)=\{x_k \mid L\ge Q(f(x_k)-M_k),\ k=1,2,\dots\},$$

which is called a Q-level set. The Q-level set is said to be bounded if, for any given $L>0$ and any convergent sequence $M_k\to\min_{x\in X}f(x)$, the set $S(L,f)$ is bounded.

    The convergence of the FAOPF algorithm is demonstrated in the following theorem.

Theorem 3.3. Let $Y=\mathbb{R}^n$ or let $Y$ be an open set. Suppose that $Q(\cdot)$, $f(x)$, and $g_i(x)\ (i\in I)$ are continuously differentiable on $\mathbb{R}^n$, and the Q-level set is bounded. Let $\{x_k\}$ be the sequence generated by the FAOPF algorithm. The following three assertions hold:

(1) If $\{x_k\}\ (k=1,2,\dots,\hat k)$ is a finite sequence (i.e., the FAOPF algorithm stops at the $\hat k$-th iteration), then $x_{\hat k}$ is either an optimal solution or an $\epsilon$-approximately optimal solution to (P).

(2) If $\{x_k\}$ is an infinite sequence, then $\{x_k\}$ is bounded and any limit point of it is either an optimal solution or an approximately optimal solution to (P).

(3) If $\{x_k\}$ is an infinite sequence, then for any limit point $\bar x$ of it, there exist $\tau_i>0$ such that

$$\nabla f(\bar x)+\sum_{i\in I_\epsilon(\bar x)}\tau_i\nabla_x g_i(\bar x)=0.$$

Proof. First, we prove that the sequence $\{a_k\}$ increases and $\{b_k\}$ decreases with

$$a_k<M_k<b_k,\quad k=1,2,\dots, \tag{3.5}$$

and

$$b_{k+1}-a_{k+1}=\frac{1}{4}(b_k-a_k),\quad k=1,2,\dots. \tag{3.6}$$

We proceed by induction. From the setting of the initial values, it is clear that $a_1<M_1<b_1$. If Step 3 applies, then $b_2=b_1$ and $a_2=M_1$. Taking $M_1=\frac{1}{4}a_1+\frac{3}{4}b_1$, we have the following:

$$b_2-a_2=b_1-M_1=\frac{1}{4}(b_1-a_1).$$

Similarly, if Step 4 applies, then $b_2=M_1$ and $a_2=a_1$. Taking $M_1=\frac{1}{4}b_1+\frac{3}{4}a_1$, we have the following:

$$b_2-a_2=M_1-a_1=\frac{1}{4}(b_1-a_1).$$

Suppose that (3.5) and (3.6) hold for some $k>1$, and consider the case of $k+1$. By Step 3, we have $b_{k+1}=b_k$, $a_{k+1}=M_k$, and $M_{k+1}=a_{k+1}+\frac{3}{4}(b_{k+1}-a_{k+1})$. Then, we obtain the following:

$$M_{k+1}=a_{k+1}+\frac{3}{4}(b_{k+1}-a_{k+1})=\frac{1}{4}a_{k+1}+\frac{3}{4}b_{k+1}=\frac{1}{4}M_k+\frac{3}{4}b_k>M_k=a_{k+1},$$

and

$$M_{k+1}=\frac{1}{4}M_k+\frac{3}{4}b_k<b_k=b_{k+1}.$$

We obtain $a_k<M_k=a_{k+1}$, $b_{k+1}=b_k$, $a_{k+1}<M_{k+1}<b_{k+1}$, and

$$b_{k+1}-a_{k+1}=b_k-M_k=b_k-\Big(a_k+\frac{3}{4}(b_k-a_k)\Big)=\frac{1}{4}(b_k-a_k).$$

By Step 4, we have $a_{k+1}=a_k$, $b_{k+1}=M_k$, and $M_{k+1}=a_{k+1}+\frac{1}{4}(b_{k+1}-a_{k+1})$. Then, we have the following:

$$M_{k+1}=a_{k+1}+\frac{1}{4}(b_{k+1}-a_{k+1})=\frac{3}{4}a_{k+1}+\frac{1}{4}b_{k+1}=\frac{3}{4}a_k+\frac{1}{4}M_k>a_k=a_{k+1},$$

and

$$M_{k+1}=\frac{3}{4}a_k+\frac{1}{4}M_k<M_k=b_{k+1}.$$

We get $b_k>M_k=b_{k+1}$, $a_{k+1}=a_k$, $a_{k+1}<M_{k+1}<b_{k+1}$, and

$$b_{k+1}-a_{k+1}=M_k-a_k=a_k+\frac{1}{4}(b_k-a_k)-a_k=\frac{1}{4}(b_k-a_k).$$

Hence, (3.5) and (3.6) hold. It is obvious that $\{a_k\}$ is increasing and $\{b_k\}$ is decreasing; then, both $\{a_k\}$ and $\{b_k\}$ are convergent. Let $a_k\to\bar a$ and $b_k\to\bar b$. By (3.6), we have $\bar a=\bar b$. Additionally, by (3.5), we obtain $M_k\to\bar a$.

(1) If the FAOPF algorithm stops at the $\hat k$-th iteration through the second case of Step 4, then $x_{\hat k}$ is a feasible solution to (P) and $F_p(x_{\hat k},M_{\hat k},\rho_{\hat k})>\frac{\rho_{\hat k}}{p}\ln m$. By Theorem 3.2(3), $x_{\hat k}$ is an optimal solution to (P). Otherwise, the algorithm stops with $|b_{\hat k+1}-a_{\hat k+1}|<\epsilon$ and $\frac{\rho_{\hat k}}{p}\ln\big(\sum_{i\in I}\exp(p\,\phi(g_i(x_{\hat k})))\big)<\epsilon$; then, $x_{\hat k}$ is an $\epsilon$-approximately optimal solution to (P).

(2) First, we demonstrate that the sequence $\{x_k\}$ is bounded. Since $x_k$ is an optimal solution to $\min_{x\in Y}F_p(x,M_k,\rho_k)$ and $x_0\in X$,

$$F_p(x_k,M_k,\rho_k)\le Q(f(x_0)-M_k)+\frac{\rho_k}{p}\ln m,\quad k=1,2,\dots.$$

Since $M_k\to\bar a$ as $k\to+\infty$, there is some $L>0$ such that

$$L>F_p(x_k,M_k,\rho_k)\ge Q(f(x_k)-M_k).$$

Hence, $x_k\in S(L,f)$ for all $k$; since the Q-level set is bounded, the sequence $\{x_k\}$ is bounded.

Next, we prove the second half of the assertion. Let $M^*=\min_{x\in X}f(x)$. Without loss of generality, we assume that $x_k\to\bar x$. We have shown that

$$a_k<M_k<b_k,\quad k=1,2,\dots,$$

and that all the sequences $\{a_k\}$, $\{b_k\}$, and $\{M_k\}$ converge to $\bar a$. By Step 4 and Theorem 3.2(1), we know $M^*\le M_k=b_{k+1}$, $k=1,2,\dots$. Additionally, by Step 3 and Theorem 3.2(2), we know $a_{k+1}=M_k<M^*$, $k=1,2,\dots$. Therefore, $a_{k+1}<M^*\le b_{k+1}$. Letting $k\to+\infty$, we obtain $\bar a=M^*$. Assume that $z^*$ is an optimal solution to (P); then, $M^*=f(z^*)$. By the exactness of $F_p$, there is some $\rho$ such that

$$F_p(x_k,M_k,\rho)\le F_p(z^*,M_k,\rho)=Q(f(z^*)-M_k)+\frac{\rho}{p}\ln m.$$

Letting $k\to+\infty$ in the above inequality, we have

$$F_p(\bar x,M^*,\rho)\le\frac{\rho}{p}\ln m,$$

which, together with $F_p\ge\frac{\rho}{p}\ln m$, implies $Q(f(\bar x)-M^*)=0$, i.e., $f(\bar x)=M^*=f(z^*)$. Hence, $\bar x$ is an optimal solution to (P).

(3) In assertion (2), we have shown that the sequence $\{x_k\}$ is bounded. For any $x\in Y$, set $I_\epsilon(x)=\{i\in I \mid g_i(x)>0\}$, and denote the cardinality of $I_\epsilon(x)$ by $|I_\epsilon(x)|$. Then, $F_p(x_k,M_k,\rho_k)$ can be rewritten as follows:

$$F_p(x_k,M_k,\rho_k)=Q(f(x_k)-M_k)+\frac{\rho_k}{p}\ln\Big(\sum_{i\in I_\epsilon(x_k)}\exp\big(p\,\phi(g_i(x_k))\big)+\big(m-|I_\epsilon(x_k)|\big)\Big).$$

Differentiating $F_p(x_k,M_k,\rho_k)$ and using the first-order optimality condition at the minimizer $x_k$ (recall that $Y$ is open), we have, for $k=1,2,\dots$, the following:

$$\nabla F_p(x_k,M_k,\rho_k)=Q'(f(x_k)-M_k)\nabla f(x_k)+\rho_k\sum_{i\in I_\epsilon(x_k)}\delta_i^p(x_k)\nabla_x g_i(x_k)=0,$$

where $\delta_i^p(x_k)=\dfrac{\exp\big(p\,\phi(g_i(x_k))\big)\,\phi'\big(g_i(x_k)\big)}{\sum_{i\in I_\epsilon(x_k)}\exp\big(p\,\phi(g_i(x_k))\big)+m-|I_\epsilon(x_k)|}$.

Set

$$\eta_k=1+\rho_k\sum_{i\in I_\epsilon(x_k)}\delta_i^p(x_k);$$

then, we know that $\eta_k>1$ and

$$\frac{1}{\eta_k}Q'(f(x_k)-M_k)\nabla f(x_k)+\frac{\rho_k}{\eta_k}\sum_{i\in I_\epsilon(x_k)}\delta_i^p(x_k)\nabla_x g_i(x_k)=0.$$

Let

$$\mu_k=\frac{1}{\eta_k},\qquad \lambda_k=\frac{1}{\eta_k}Q'(f(x_k)-M_k),\qquad \omega_{i,k}=\frac{\rho_k}{\eta_k}\delta_i^p(x_k).$$

Then, for any $k$ and $i\in I$,

$$\mu_k+\sum_{i\in I_\epsilon(x_k)}\omega_{i,k}=1.$$

Passing to a subsequence if necessary, $\mu_k\to\mu\in(0,1)$ and $\omega_{i,k}\to\omega_i\in(0,1)$, $i\in I$. We have shown that $M_k\to M^*$ as $k\to+\infty$ and that $\{x_k\}$ is bounded with $x_k\to\bar x$. Since $f(x)$ and $Q(\cdot)$ are continuously differentiable, $Q'(f(x_k)-M_k)\to Q'(f(\bar x)-M^*)$. Additionally, we know that $M^*<f(\bar x)$, so $\lambda_k\to\lambda>0$ as $k\to+\infty$. Then,

$$\nabla f(\bar x)+\sum_{i\in I_\epsilon(\bar x)}\frac{\omega_i}{\lambda}\nabla_x g_i(\bar x)=0.$$

Letting $\tau_i=\frac{\omega_i}{\lambda}>0$, we obtain

$$\nabla f(\bar x)+\sum_{i\in I_\epsilon(\bar x)}\tau_i\nabla_x g_i(\bar x)=0.$$

This completes the proof.

In this section, we provide some examples to illustrate the efficiency of the FAOPF algorithm. We compare the numerical performance of FAOPF with three other algorithms: the two-parameter exact penalty (TPEP) algorithm described in [24], the smoothing method of penalty function (SMPF) algorithm described in [25], and the smoothing objective penalty function (SOPF) algorithm described in [20]. Since each algorithm requires solving unconstrained optimization subproblems, we use either gradient-type or Newton-type methods. The specific forms of the penalty functions employed in the comparative experiments are as follows:

    TPEP

$$H(x,M,\rho,\epsilon)=(f(x)-M)^2+\rho\sum_{i\in I}P_\epsilon(g_i(x)),$$

    where

$$P_\epsilon(t)=\begin{cases}\dfrac{1}{2}\epsilon\sin\dfrac{t}{\epsilon}, & t\le 0,\\[4pt] t-\dfrac{1}{2}\epsilon\sin\dfrac{t}{\epsilon}, & t>0.\end{cases}$$

    SMPF

$$G(x,M,\epsilon)=(f(x)-M)^2+\sum_{i\in I}P_\epsilon(g_i(x)),$$

    where

$$P_\epsilon(t)=\begin{cases}\dfrac{1}{4}\epsilon e^{2t/\epsilon}, & t\le 0,\\[4pt] t+\dfrac{1}{4}\epsilon e^{-2t/\epsilon}, & t>0.\end{cases}$$

    SOPF

$$F_\epsilon(x,M,\rho)=Q(f(x)-M)+\rho\sum_{i\in I}P_\epsilon^\tau(g_i(x)),$$

    where

$$P_\epsilon^\tau(t)=\begin{cases}0, & t\le 0,\\[2pt] \dfrac{1}{a}\epsilon^{\tau-a\tau}t^{a\tau}, & 0\le t\le\epsilon,\\[4pt] t^\tau-\Big(1-\dfrac{1}{a}\Big)\epsilon^\tau, & t\ge\epsilon.\end{cases}$$

In the following examples, we choose

$$Q(t)=t^2.$$

We define the constraint error at the approximate optimal solution as follows:

$$e(x)=\sum_{i=1}^m\max\{g_i(x),0\}.$$

We say that $x$ is an $\epsilon$-feasible solution to (P) if $e(x)<\epsilon$.

Our tests are performed on a PC with an Intel Core i7-12700H CPU (2.30 GHz) and 16 GB RAM, using MATLAB R2021b. Five test problems were selected for analysis. Problems 1–3 and Problem 5 were obtained from [28], and Problem 4 was obtained from [29]. The numerical results are shown in the following tables. The symbol $m$ represents the number of constraints, $f(x)$ the approximate minimum, $x$ the approximate optimal solution of the penalty problem, $e(x)$ the constraint error at the approximate optimal solution, $k$ the number of iterations, and CPU the time required to solve the penalty problem. 'Fail' indicates a solution failure.

    Example 4.1.

$$f(x)=\sum_{1\le k\le n}(x_k-1)^2,\qquad g_i(x)=x_1e^{x_2t_i}\cos(x_3t_i+x_4)+x_3x_2e^{x_1t_i}\sin(x_2t_i)+x_5e^{x_6t_i}-2,$$
$$t_i=\pi i/m,\quad i=1,\dots,m,\qquad x_0=(2,2,7,0,2,1)\in\mathbb{R}^6.$$

FAOPF/TPEP/SMPF: $a_1=-1000$, $b_1=1000$, $\epsilon_1=0.001$, and $\rho_1=1$.

    Example 4.2.

$$f(x)=-x_2,\qquad g_i(x)=x_1\cos(2\pi t_i)+x_2\sin(2\pi t_i)-1,\qquad t_i=i/m,\quad i=1,\dots,m,\qquad x_0=(0.8,0.5)\in\mathbb{R}^2.$$

FAOPF/TPEP/SMPF: $a_1=-100$, $b_1=0$, $\epsilon_1=0.001$, and $\rho_1=1$.

SOPF: $a_1=-100$, $b_1=0$, $\tau=0.5$, $a=4$, $\epsilon_1=0.001$, and $\rho_1=1$.
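As a usage illustration, the following runs the `faopf` sketch from Section 3 on Example 4.2 with $m=10$ (our own setup; the constraint signs follow the reconstruction above, and the expected output is read off Table 2):

```python
import numpy as np
# Example 4.2 with m = 10 constraints, using the faopf sketch above.
m = 10
ts = np.arange(1, m + 1) / m
f = lambda x: -x[1]
gs = [lambda x, t=t: x[0] * np.cos(2 * np.pi * t)
                   + x[1] * np.sin(2 * np.pi * t) - 1.0 for t in ts]
x = faopf(f, gs, x0=np.array([0.8, 0.5]), a=-100.0, b=0.0, rho=1.0)
print(x, f(x))   # roughly x2 ~ 1.05 and f ~ -1.05 for m = 10 (cf. Table 2)
```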

    Example 4.3.

$$f(x)=\frac{1}{n}\sum_{1\le k\le n}(x_k-1)^2,\qquad g_i(x)=\sum_{1\le k\le n}\cos(t_ix_k)+t_i\sum_{1\le k\le n}x_k^3,$$
$$t_i=1/2+\pi i/m,\quad i=1,\dots,m,\qquad x_0=(2,\dots,2)\in\mathbb{R}^n.$$

FAOPF/TPEP/SMPF: $a_1=-100$, $b_1=100$, $\epsilon_1=0.001$, $\rho_1=1$, and $n=10$ (or $n=100$).

SOPF: $a_1=-100$, $b_1=100$, $a=4$, $\epsilon=0.0001$, $\rho_1=1$, and $n=10$ (or $n=100$).

    Example 4.4.

$$f(x)=\frac{x_1^2}{3}+\frac{x_1}{2}+x_2^2,\qquad g_i(x)=\big(1-x_1^2t_i^2\big)^2-x_1t_i^2-x_2^2+x_2,\qquad t_i=i/m,\quad i=1,\dots,m,\qquad x_0=(1,100)\in\mathbb{R}^2.$$

FAOPF/TPEP/SMPF: $a_1=-100$, $b_1=100$, $\rho_1=1$, and $\epsilon_1=0.001$.

SOPF: $a_1=-100$, $b_1=100$, $\tau=0.5$, $a=4$, $\epsilon=0.0001$, and $\rho_1=1$.

    Example 4.5.

$$f(x)=\sin(x_1+1.5\pi)+\sum_{2\le k\le 1000}\sin\big(x_k+1.5\pi+x_{k-1}^2\big),$$
$$g_i(x)=\begin{cases}x_1-\pi, & i=1,\\ x_{i-1}^2-x_i-\pi, & i=2,\dots,1000,\\ -x_1-\pi, & i=1001,\\ x_{i-1001}^2+x_{i-1000}-\pi, & i=1002,\dots,2000,\end{cases}\qquad x_0=(1,\dots,1)\in\mathbb{R}^{1000}.$$

FAOPF/TPEP/SMPF: $a_1=-99910$, $b_1=-99900$, $\rho_1=1$, and $\epsilon_1=0.001$.

The results of these problems are shown in Figure 1 and Tables 1–6.

    Figure 1.  Performance profile for CPU time.
    Table 1.  Results of problem 1.
    m     Method  f(x)     e(x)         k   CPU      x
    10    FAOPF   0.0000   0.00000000   1   0.0078   (1.0000,...,1.0000) ∈ R^6
          TPEP    0.0000   0.00000000   15  0.0975   (1.0000,...,1.0000) ∈ R^6
          SMPF    0.0000   0.00000000   15  0.0835   (1.0000,...,1.0000) ∈ R^6
          SOPF    Fail
    10^2  FAOPF   0.0000   0.00000000   1   0.0060   (1.0000,...,1.0000) ∈ R^6
          TPEP    0.0000   0.00000000   15  0.2599   (1.0000,...,1.0000) ∈ R^6
          SMPF    0.0000   0.00000000   15  0.1560   (1.0000,...,1.0000) ∈ R^6
          SOPF    Fail
    10^3  FAOPF   0.0000   0.00000000   1   0.0192   (1.0000,...,1.0000) ∈ R^6
          TPEP    0.0000   0.00000000   15  2.1690   (1.0000,...,1.0000) ∈ R^6
          SMPF    0.0000   0.00000000   15  1.0915   (1.0000,...,1.0000) ∈ R^6
          SOPF    Fail
    10^4  FAOPF   0.0000   0.00000000   1   0.0640   (1.0000,...,1.0000) ∈ R^6
          TPEP    0.0000   0.00000000   15  15.563   (1.0000,...,1.0000) ∈ R^6
          SMPF    0.0000   0.00000000   15  6.2424   (1.0000,...,1.0000) ∈ R^6
          SOPF    Fail
    10^5  FAOPF   0.0000   0.00000000   1   0.2768   (1.0000,...,1.0000) ∈ R^6
          TPEP    0.0000   0.00000000   15  218.00   (1.0000,...,1.0000) ∈ R^6
          SMPF    0.0000   0.00000000   15  79.626   (1.0000,...,1.0000) ∈ R^6
          SOPF    Fail

    Table 2.  Results of problem 2.
    m     Method  f(x)      e(x)         k   CPU       x
    10    FAOPF   -1.0503   0.00000000   18  0.0704    (0.0029, 1.0503)
          TPEP    -0.6179   0.00257560   14  0.0192    (0.7903, 0.6179)
          SMPF    -1.0513   0.00000000   14  0.0447    (0.0000, 1.0513)
          SOPF    -1.0515   0.00000000   14  0.6546    (0.0000, 1.0515)
    10^2  FAOPF   -1.0000   0.00001262   14  0.0498    (0.0312, 1.0000)
          TPEP    -0.7293   0.14098368   14  0.0846    (0.7179, 0.7293)
          SMPF    -1.0000   0.00000000   14  0.0858    (0.0000, 1.0000)
          SOPF    Fail
    10^3  FAOPF   -1.0000   0.00001195   14  0.1366    (0.0011, 1.0000)
          TPEP    -1.0088   0.25050601   14  0.6236    (0.0000, 1.0088)
          SMPF    -0.9998   0.00000000   14  0.6159    (0.0000, 0.9998)
          SOPF    Fail
    10^4  FAOPF   -0.9991   0.00000037   12  0.5572    (0.0421, 0.9991)
          TPEP    -1.0075   1.93291483   14  6.5033    (0.0000, 1.0075)
          SMPF    -1.0000   0.00000000   14  5.9534    (0.0000, 1.0000)
          SOPF    Fail
    10^5  FAOPF   -1.0000   0.00144653   14  4.7819    (0.0000, 1.0000)
          TPEP    -0.9282   0.00000000   14  69.2135   (0.0000, 0.9282)
          SMPF    -1.0000   0.00000000   14  87.9373   (0.0000, 0.9282)
          SOPF    Fail

    Table 3.  Results of problem 3 (n=10).
    m     Method  f(x)     e(x)         k   CPU       x
    10    FAOPF   2.3149   0.00000000   23  0.0473    (-0.5225,...,-0.5225) ∈ R^10
          TPEP    2.2813   0.05459490   18  0.0540    (-0.5104,...,-0.5104) ∈ R^10
          SMPF    2.3157   0.00000000   18  0.0503    (-0.5218,...,-0.5218) ∈ R^10
          SOPF    2.3149   0.00000000   18  0.1112    (-0.5215,...,-0.5215) ∈ R^10
    10^2  FAOPF   2.3149   0.00000000   21  0.0742    (-0.5225,...,-0.5225) ∈ R^10
          TPEP    2.3002   0.02394616   18  0.1642    (-0.5167,...,-0.5167) ∈ R^10
          SMPF    2.3153   0.00000000   18  0.1212    (-0.5216,...,-0.5216) ∈ R^10
          SOPF    2.3149   0.00000089   18  0.5165    (-0.5215,...,-0.5215) ∈ R^10
    10^3  FAOPF   2.3149   0.00000000   23  0.2736    (-0.5225,...,-0.5225) ∈ R^10
          TPEP    2.3101   0.00825778   18  1.6351    (-0.5199,...,-0.5199) ∈ R^10
          SMPF    2.3157   0.00000000   18  0.6453    (-0.5218,...,-0.5218) ∈ R^10
          SOPF    2.3149   0.00000000   18  2.2040    (-0.5215,...,-0.5215) ∈ R^10
    10^4  FAOPF   2.3149   0.00000000   23  1.1659    (-0.5225,...,-0.5225) ∈ R^10
          TPEP    2.3417   0.00000000   18  5.6564    (-0.5303,...,-0.5303) ∈ R^10
          SMPF    2.3145   0.00061067   18  3.7953    (-0.5214,...,-0.5214) ∈ R^10
          SOPF    2.3149   0.00000000   18  10.8069   (-0.5215,...,-0.5215) ∈ R^10
    10^5  FAOPF   2.3149   0.00000000   23  12.4410   (-0.5215,...,-0.5215) ∈ R^10
          TPEP    2.4367   0.00000000   18  141.000   (-0.5610,...,-0.5610) ∈ R^10
          SMPF    2.3149   0.00002634   18  49.2752   (-0.5215,...,-0.5215) ∈ R^10
          SOPF    2.3149   0.00000000   18  147.013   (-0.5215,...,-0.5215) ∈ R^10

    Table 4.  Results of problem 3 (n=100).
    m     Method  f(x)      e(x)         k   CPU       x
    10    FAOPF   1.4915    0.00000000   17  0.2432    (-0.2213,...,-0.2213) ∈ R^100
          TPEP    2.4059    0.00000000   15  0.5205    (-0.5511,...,-0.5511) ∈ R^100
          SMPF    1.4917    0.00000000   15  0.3057    (-0.2213,...,-0.2213) ∈ R^100
          SOPF    1.4954    0.00000000   15  0.4231    (-0.2228,...,-0.2228) ∈ R^100
    10^2  FAOPF   1.4915    0.00000000   17  0.3328    (-0.2213,...,-0.2213) ∈ R^100
          TPEP    13.4435   0.00000000   15  1.2428    (-2.6665,...,-2.6665) ∈ R^100
          SMPF    1.4917    0.00000000   15  0.3422    (-0.2213,...,-0.2213) ∈ R^100
          SOPF    1.4954    0.00000089   15  0.7039    (-0.2228,...,-0.2228) ∈ R^100
    10^3  FAOPF   1.4915    0.00000000   17  1.8971    (-0.2213,...,-0.2213) ∈ R^100
          TPEP    1.3497    29.9112787   15  2.1453    (-0.1618,...,-0.1618) ∈ R^100
          SMPF    1.4920    0.00000000   15  1.2871    (-0.2215,...,-0.2215) ∈ R^100
          SOPF    1.4954    0.00000000   15  2.6963    (-0.2228,...,-0.2228) ∈ R^100
    10^4  FAOPF   1.4915    0.00000000   17  26.9781   (-0.2213,...,-0.2213) ∈ R^100
          TPEP    Fail
          SMPF    1.4913    0.00038001   15  13.1280   (-0.2212,...,-0.2212) ∈ R^100
          SOPF    1.4954    0.00000000   15  33.1198   (-0.2228,...,-0.2228) ∈ R^100
    10^5  FAOPF   1.4915    0.00000000   17  233.263   (-0.2213,...,-0.2213) ∈ R^100
          TPEP    Fail
          SMPF    1.4915    0.00000000   15  223.992   (-0.2213,...,-0.2213) ∈ R^100
          SOPF    1.4954    0.00000000   15  368.986   (-0.2228,...,-0.2228) ∈ R^100

    Table 5.  Results of problem 4.
    m     Method  f(x)     e(x)         k   CPU       x
    10    FAOPF   2.4319   0.00000000   15  0.0630    (-0.7579, 1.6185)
          TPEP    2.4262   0.00337124   15  0.1059    (-0.7579, 1.6185)
          SMPF    2.4267   0.00274804   15  0.1561    (-0.7703, 1.6168)
          SOPF    2.4353   0.00000000   15  0.1051    (-0.7579, 1.6195)
    10^2  FAOPF   2.4319   0.00000000   15  0.0831    (-0.7579, 1.6185)
          TPEP    2.4327   0.00000000   15  0.3213    (-0.7068, 1.6185)
          SMPF    2.4300   0.00137691   15  0.3046    (-0.7758, 1.6178)
          SOPF    2.4353   0.00000000   15  0.2740    (-0.7579, 1.6195)
    10^3  FAOPF   2.4319   0.00000000   15  0.3610    (-0.7579, 1.6185)
          TPEP    2.4313   0.00000000   15  1.5127    (-0.7084, 1.6181)
          SMPF    2.4308   0.00000000   15  1.7243    (-0.7769, 1.6180)
          SOPF    2.4353   0.00000000   15  1.5236    (-0.7579, 1.6195)
    10^4  FAOPF   2.4319   0.00000000   15  1.8437    (-0.7579, 1.6185)
          TPEP    2.6163   0.00000000   15  14.1481   (-0.9297, 1.6713)
          SMPF    2.4308   0.00047849   15  17.9017   (-0.7770, 1.6180)
          SOPF    2.4353   0.00000000   15  15.0246   (-0.7579, 1.6195)
    10^5  FAOPF   2.4319   0.00000000   15  16.2051   (-0.7579, 1.6185)
          TPEP    2.4616   13.8075420   15  269.000   (-1.0260, 1.6198)
          SMPF    2.4308   0.00020298   15  146.000   (-0.7770, 1.6180)
          SOPF    2.4353   0.00000000   15  228.745   (-0.7579, 1.6195)

    Table 6.  Results of problem 5.
    m     Method  f(x)     e(x)         k   CPU      x
    10    FAOPF   -99901   0.00000000   1   0.0077   (1.0000,...,1.0000) ∈ R^1000
          TPEP    -99901   0.00000000   10  0.2173   (1.0000,...,1.0000) ∈ R^1000
          SMPF    -99901   0.00000000   10  0.2759   (1.0000,...,1.0000) ∈ R^1000
          SOPF    Fail


The numerical results are summarized as follows to explain the numerical efficiency.

(1) From Figure 1, it is obvious that the FAOPF algorithm exhibits the best performance with respect to CPU time.

(2) The flattened aggregate function works similarly to an active-set strategy: only a portion of the constraint functions enters the computation in each iteration, which significantly reduces the amount of gradient computation. As a result, even when the FAOPF algorithm requires more iterations than the others, the actual computations are fewer, especially in problem 3. Considering the CPU time of each algorithm, the FAOPF algorithm shows better numerical performance as the number of constraints increases.

(3) The FAOPF algorithm exhibits the most stable numerical performance of the compared methods. Regardless of changes in the number of constraints, the approximate minimum value, the approximate optimal solutions, and the constraint error remain relatively constant.

(4) In our numerical experiments, we found that a moderately sized initial value of $\rho$ is adequate, with $\rho_1=1$ proving generally sufficient.

(5) The penalty function in the SOPF algorithm is a smooth approximation based on $F(x,M)=Q(f(x)-M)+\rho\sum_{i\in I}P^\tau(g_i(x))$, where

$$P^\tau(t)=\begin{cases}0, & t\le 0,\\ t^\tau, & t>0.\end{cases}$$

According to the results of the numerical experiments, this penalty function cannot solve all of the above problems. However, on the problems it can solve, its numerical behavior is more stable than that of the TPEP and SMPF algorithms.

(6) When the constraint error is greater than 0, the approximate optimal solution obtained violates the constraints of the original problem to some extent. With the same parameter settings, the TPEP and SMPF algorithms produce constraint errors greater than 0 in a larger number of instances.

In this paper, a modified flattened aggregate function was used to smoothly approximate the max-value function and, combined with the objective penalty parameter, a new objective penalty function was proposed. A penalty function algorithm (FAOPF) was then designed to compute an approximate optimal solution of the original problem (P), and the efficiency of the FAOPF algorithm was demonstrated by numerical experiments. Especially when faced with a large number of constraints, the objective penalty function presented in this paper demonstrates superior effectiveness.

    Zhuolin Yan: Conceptualization, methodology, writing-original draft, writing-review and editing, software, validation; Xiaowei Jiang: Conceptualization, methodology, writing-original draft; Siyao Wang: Writing-review and editing, software, validation. All authors contributed equally to the manuscript. All authors have read and approved the final version of the manuscript for publication.

This work was supported by the Key Project of the Natural Science Foundation Joint Fund of Jilin Province (YDZJ202101ZYTS167, YDZJ202201ZYTS303) and the Graduate Innovation Project of Beihua University (2023002).

    All authors declare no conflict of interest in this paper.


