Research article

Impact of COVID-19 on the environment sector: a case study of Central Visayas, Philippines

  • Received: 31 October 2021 Revised: 29 January 2022 Accepted: 08 February 2022 Published: 15 March 2022
  • The pandemic has underscored the importance of the environment. In this study, the environmental condition of Central Visayas, Philippines was assessed and evaluated before and during the onset of the COVID-19 pandemic to examine possible associations between environmental indicators and the pandemic. The relationships between key environmental variables, namely air quality, air pollution, water quality, water pollution, and solid waste management, were quantified. The study utilized secondary data from a review of records of government agencies and LGUs in Region 7. The study also provides a framework for situating pandemics and epidemics within their environmental aspects. The paper concludes by encouraging researchers and policymakers to promote changes in environmental policies and offers recommendations for adequately controlling future pandemic and epidemic threats in Central Visayas, Philippines.

    Citation: Clare Maristela V. Galon, James G. Esguerra. Impact of COVID-19 on the environment sector: a case study of Central Visayas, Philippines[J]. AIMS Environmental Science, 2022, 9(2): 106-121. doi: 10.3934/environsci.2022008




    Sparse optimization is a core problem in compressed sensing [1,2,3], signal and image processing [4,5,6,7], high-dimensional statistical learning [4,8], etc. Sparsity is usually characterized by the $\ell_0$ norm, which is the cardinality of the support set of a vector $x\in\mathbb{R}^n$, denoted by $\|x\|_0=|\mathrm{supp}(x)|=|\{i\in\{1,2,\dots,n\}:x_i\neq 0\}|$. The penalized formulation of sparse optimization can be expressed as the following cardinality regularized optimization problem:

    $$\min_{x\in\mathbb{R}^n}\ f(x)+\lambda\|x\|_0, \tag{1.1}$$

    where $f:\mathbb{R}^n\to\mathbb{R}$ is a loss function depending on the application and $\lambda>0$ is the penalty parameter.
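For concreteness, the objective (1.1) can be evaluated directly; the sketch below uses a least squares loss purely as an illustrative choice of $f$ (the paper later works with nonsmooth losses), and the tolerance in the $\ell_0$ count is an implementation detail, not part of the definition:

```python
import numpy as np

def l0_norm(x, tol=1e-10):
    """||x||_0 = |supp(x)|: number of entries with |x_i| > tol.
    The tolerance guards against floating-point noise."""
    return int(np.sum(np.abs(x) > tol))

def objective(x, f, lam):
    """Penalized sparse objective (1.1): f(x) + lambda * ||x||_0."""
    return f(x) + lam * l0_norm(x)

# Illustrative loss (an assumption for this sketch): least squares
# f(x) = 0.5 * ||Ax - b||_2^2 on a tiny synthetic instance.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 1.0])
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2

x = np.array([0.0, 1.0, 1.0])      # two nonzero entries
print(objective(x, f, lam=0.1))    # f(x) = 0.5, penalty = 0.2  ->  0.7
```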

    Compared with plain sparse optimization, composite sparse optimization problems enforce certain structural constraints instead of pure sparsity on the coefficients. Such problems arise in many important applications in various fields, such as structural health monitoring [10], fault diagnosis [11], motion planning [12] and impact force identification [13]. The most common approach is to promote the sparsity of variables through a linear transformation [14]. By imposing a regularization matrix $W\in\mathbb{R}^{p\times n}$ with rows $W_1,\dots,W_p$ on the vector $x$, composite sparse optimization is nicely encapsulated in the following optimization formulation:

    $$\min_{x\in\mathbb{R}^n}\ f(x)+\lambda\|Wx\|_0. \tag{1.2}$$

    A typical choice of the function $f$ in problem (1.2) is the $\ell_2$ loss function $f(x)=\frac12\|Ax-b\|_2^2$, where $A\in\mathbb{R}^{m\times n}$ has rows $A_1,\dots,A_m$ and $b=(b_1,\dots,b_m)^\top\in\mathbb{R}^m$, together with the $\ell_1$ relaxation $\|Wx\|_1$ of the $\ell_0$ norm; this was first defined and summarized as the generalized LASSO with a general matrix $W$ [15]. Unlike the traditional LASSO, solving the generalized LASSO efficiently on high-dimensional data is very challenging. A few attempts have been made to improve the efficiency of the generalized LASSO, but these require a specific form of $W$ to work well [14,15,16], such as the fused LASSO problem [17], the TV regularizer [18] and trend filtering [19].

    However, many loss functions of composite sparse optimization problems cannot be expressed as differentiable functions. The results in [20] showed that the least squares loss function can solve a class of linear regression problems but is not suitable for all types of data. We can instead choose a loss function with strong resistance to outliers, such as the $\ell_1$ function, the quantile regression function, or the more general Huber class of functions. On the other hand, J. Fan et al. [20] pointed out that using the $\ell_1$ relaxation often results in a biased estimator, and various continuous nonconvex relaxation functions for the $\ell_0$ norm have been proposed, such as the smoothly clipped absolute deviation (SCAD) function [20], the hard thresholding function [21], the capped-$\ell_1$ function [22,23,24,25,26], the transformed $\ell_1$ function [27], etc. Here, we are interested in the capped-$\ell_1$ function as the relaxation of the $\ell_0$ norm, which is a simple relaxation function satisfying specific properties. Z. Shen et al. [40] applied locally Lipschitz continuous scaled folded concave functions to approximate the $\ell_0$ pseudo-norm. A generic nonsmooth but convex framework was established to gradually approximate the scaled folded concave functions, and numerical results showed that the proposed framework and algorithms admit the exact sparsity-inducing capability of the $\ell_0$ pseudo-norm. Q. Chen et al. [41] first explored using a class of locally Lipschitz scaled folded concave functions to approach the $\ell_0$ norm; a convex half-absolute method was then proposed to precisely approximate these nonconvex nonsmooth functions, and a double iterative algorithm was considered to solve the convex-relaxed composite optimization problems. Both [40] and [41] established a generic nonsmooth convex framework that gradually approximates these scaled folded concave functions based on the Legendre-Fenchel transformation, so as to avoid directly solving a nonconvex optimization problem.

In contrast, we use a smoothing function for approximation to achieve the goal of solving the nonconvex optimization problem. The advantages of the capped-$\ell_1$ function have been explored in various fields. For example, the authors in [28] put forward a capped $\ell_1$-norm sparse representation method (CSR) for graph clustering; the proposed model learns the optimal graph for clustering by integrating sparse representation and a capped $\ell_1$-norm loss function. To utilize the advantages of the twin extreme learning machine and FDA, Z. Xue et al. [29] first put forward a novel classifier named the Fisher-regularized twin extreme learning machine (FTELM); considering the instability of the $\ell_2$-norm for outliers, the authors then introduced the capped $\ell_1$-norm into the FTELM model and proposed a more robust capped $\ell_1$-norm FTELM model. The capped-$\ell_1$ function was also discussed in the context of sparse group $\ell_0$ regularized algorithms in [30]. It is worth noting that reference [9] gave an exact continuous relaxation problem with capped-$\ell_1$ penalty for nonsmooth convex loss functions with cardinality penalty, in the sense that both problems have the same optimal solution set; moreover, a smoothing proximal gradient algorithm for finding a lifted stationary point of the continuous relaxation model was proposed. Regarding the solution of relaxation problems, T. Zhang [42] presented a multi-stage convex relaxation scheme for solving problems with nonconvex objective functions; however, only parameter estimation performance was analyzed in [42], and the result there does not directly imply that multi-stage convex relaxation achieves unbiased recovery of the support set. H. Zhou et al. [43] proposed a new unified algorithm based on the local linear approximation (LLA) for maximizing the penalized likelihood for a broad class of concave penalty functions, but it did not eliminate the bias issue.

Here, we extend the results in [9] to composite sparse optimization and give a smoothing gradient descent algorithm for the continuous relaxation problem. The new algorithm exploits the piecewise linearity of the capped-$\ell_1$ penalty term in the relaxation problem. In view of the composite sparsity, if the subproblem in the algorithm does not have a closed-form solution, then our relaxation model is treated by analogy with the $\ell_1$ penalty model for LASSO problems, using the smoothing gradient method to solve the subproblem. We prove that if the sequence generated by the algorithm has an accumulation point, then it is a lifted stationary point of the relaxation problem.

    In this paper, we consider the following composite sparse regression problem with cardinality penalty,

    $$\min_{x\in\Omega}\ W_0(x):=f(x)+\lambda\|Wx\|_0, \tag{1.3}$$

    where $f:\mathbb{R}^n\to\mathbb{R}$ is convex (not necessarily smooth) and bounded from below, $\lambda$ is a positive parameter, and $\Omega=\{x\in\mathbb{R}^n:l\le Wx\le u\}$. Examples include the $\ell_1$ loss function given by

    $$f(x)=\frac{1}{m}\|Ax-b\|_1, \tag{1.4}$$

    or the censored regression problem with

    $$f(x)=\frac{1}{m}\sum_{i=1}^m|\max\{A_i x,0\}-b_i|. \tag{1.5}$$

    For a given parameter $\nu>0$, the continuous relaxation of the $\ell_0$ penalty by the capped-$\ell_1$ function $\phi$ is given by

    $$\phi(t)=\min\Big\{1,\frac{|t|}{\nu}\Big\}. \tag{1.6}$$

    We consider the following continuous optimization problem to solve (1.3):

    $$\min_{x\in\Omega}\ W(x):=f(x)+\lambda\Phi(Wx), \tag{1.7}$$

    where $\Phi(Wx)=\sum_{i=1}^p\phi(W_i x)$.
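The capped-$\ell_1$ relaxation is one line of code; the sketch below (with illustrative $W$, $x$ and $\nu$, not values from the paper) also shows the capping at work: entries with $|W_i x|\ge\nu$ contribute exactly 1, as the $\ell_0$ norm would.

```python
import numpy as np

def phi(t, nu):
    """Capped-l1 function (1.6): phi(t) = min{1, |t|/nu}."""
    return np.minimum(1.0, np.abs(t) / nu)

def Phi(W, x, nu):
    """Relaxed penalty in (1.7): Phi(Wx) = sum_i phi(W_i x)."""
    return float(np.sum(phi(W @ x, nu)))

nu = 0.5
W = np.eye(3)                      # illustrative choice of W
x = np.array([0.0, 0.2, 2.0])
# phi(0) = 0, phi(0.2) = 0.4, phi(2.0) is capped at 1, so Phi = 1.4;
# once every nonzero |W_i x| reaches nu, Phi(Wx) equals ||Wx||_0 exactly.
print(Phi(W, x, nu))
```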

    Composite sparse optimization has attracted much attention recently. In [15], a dual path algorithm was proposed for the generalized LASSO problem with an arbitrary matrix $W$. If the composite optimization problem is convex and $W$ is a general linear map, one feasible approach is the alternating direction method of multipliers (ADMM) [31]. In [31], the author proposed a dual method for the variational problem of the form $\inf_v\{f(Av)+g(v)\}$. Z. J. Bai [32] provided a coordinate gradient descent method with stepsize chosen by an Armijo-type rule to solve the problems $\min_x f(x)+c\|Lx\|_1$ and $\min_x\|Ax-b\|_2^2+c\|Lx\|_1$ efficiently, especially when the problem dimension is large.

    In this paper, we use the exact continuous relaxation problem with the capped-$\ell_1$ function to solve optimization problem (1.3) and present a smoothing gradient descent (SGD) algorithm. Since $W$ is a general linear mapping, $\Phi(Wx)$ is a nonseparable function, which prevents the proximal gradient step from being applied explicitly; we therefore solve the subproblem in the SGD algorithm approximately. We prove that if the iterates have an accumulation point, then the accumulation point is a lifted stationary point of (1.7).

    Notation. We denote $\mathbb{N}=\{0,1,\dots\}$. For $x\in\mathbb{R}^n$ and $\delta>0$, let $\|x\|:=\|x\|_2$ and let $B_\delta(x)$ denote the open ball centered at $x$ with radius $\delta$. For a nonempty, closed, and convex set $\Omega\subseteq\mathbb{R}^n$, $N_\Omega(x)$ denotes the normal cone to $\Omega$ at $x\in\Omega$. $\sigma_{\min}(W)$ is the minimum singular value of $W$. We denote $\|x\|_\infty=\max\{|x_1|,|x_2|,\dots,|x_n|\}$.

    Before starting this section, we first make the following two assumptions:

    Assumption 2.1. Function $f$ is Lipschitz continuous on $\Omega$ with Lipschitz constant $L_f>0$, and matrix $W$ has full column rank.

    Assumption 2.2. Parameter $\nu$ in (1.6) satisfies $0<\nu<\bar\nu:=\frac{\lambda\sigma_{\min}(W)}{L_f}$.

    We suppose Assumptions 2.1 and 2.2 hold throughout the paper and assume that $L_f$ is large enough such that $L_f\ge\frac{\lambda\sigma_{\min}(W)}{\Gamma}$, where

    $$\Gamma:=\min\{|l_i|,u_i:l_i\neq 0,\ u_i\neq 0,\ i=1,\dots,p\}.$$

    When $f$ is defined by the $\ell_1$ loss function (1.4) or the censored regression loss function (1.5), $L_f$ can be taken as $L_f=\|A\|$.

    We first give the definition of lifted stationary points of (1.7) as in [33]. Since function $\phi$ in (1.6) can be rephrased as

    $$\phi(t)=\frac{1}{\nu}|t|-\max\{\theta_1(t),\theta_2(t),\theta_3(t)\}$$

    with $\theta_1(t)=0$, $\theta_2(t)=\frac{t}{\nu}-1$ and $\theta_3(t)=-\frac{t}{\nu}-1$ for $t\in\mathbb{R}$, we denote

    $$D(t):=\{i\in\{1,2,3\}:\theta_i(t)=\max\{\theta_1(t),\theta_2(t),\theta_3(t)\}\} \tag{2.1}$$

    and

    $$D^p(Wx):=\Pi_{i=1}^p D(W_i x). \tag{2.2}$$

    Definition 2.1. Point $x\in\Omega$ is called a lifted stationary point of (1.7) if there exists $d=(d_1,\dots,d_p)\in D^p(Wx)$ such that

    $$\lambda\sum_{i=1}^p\nabla\big(\theta_{d_i}(W_i x)\big)\in\partial f(x)+\frac{\lambda}{\nu}\sum_{i=1}^p W_i^{\top}\vartheta_i(x)+N_\Omega(x), \tag{2.3}$$

    where

    $$\vartheta_i(x)\ \begin{cases}=1 & \text{if } W_i x>0,\\ \in[-1,1] & \text{if } W_i x=0,\\ =-1 & \text{if } W_i x<0.\end{cases} \tag{2.4}$$

    Under the restriction on the range of $\nu$ in Assumption 2.2, we first prove that the element of $D^p(Wx)$ satisfying (2.3) at a lifted stationary point is unique.

    Proposition 2.2. If $\bar x$ is a lifted stationary point of (1.7), then the vector $d^{W\bar x}=(d^{W\bar x}_1,\dots,d^{W\bar x}_p)\in D^p(W\bar x)$ satisfying (2.3) is unique. In particular, for $i=1,\dots,p$,

    $$d^{W\bar x}_i=\begin{cases}1 & \text{if } |W_i\bar x|<\nu,\\ 2 & \text{if } W_i\bar x\ge\nu,\\ 3 & \text{if } W_i\bar x\le-\nu.\end{cases} \tag{2.5}$$

    Proof. If $|W_i\bar x|\neq\nu$, the statement is clearly valid. Hence, we only need to consider the case $|W_i\bar x|=\nu$. When $W_i\bar x=\nu$, since $D(W_i\bar x)=\{1,2\}$, arguing by contradiction, we assume that (2.3) holds with $d^{W\bar x}_i=1$, so $\vartheta_i(\bar x)=1$. By $\nu<\bar\nu$, we have $W_i\bar x\in(l_i,u_i)$, and by (2.3), there exists $\xi(\bar x)\in\partial f(\bar x)$ such that

    $$0=\xi(\bar x)+\frac{\lambda}{\nu}\sum_{i:|W_i\bar x|\le\nu}W_i^{\top}\vartheta_i(\bar x), \tag{2.6}$$

    where $\vartheta_i(\bar x)$ is defined in (2.4).

    It is easy to observe that the following relation holds:

    $$\Big\|\sum_{i:|W_i\bar x|\le\nu}W_i^{\top}\vartheta_i(\bar x)\Big\|_2\ge\sigma_{\min}(W). \tag{2.7}$$

    In fact, from the definition of the minimum singular value of $W$,

    $$\sigma_{\min}(W)=\min\Big\{\frac{\|Wx\|_2}{\|x\|_2}:x\neq 0\Big\}=\min\{\|Wx\|_2:\|x\|_2=1\},$$

    we have

    $$\sigma_{\min}(W)=\min\Big\{\frac{\|Wx\|_2}{\|x\|_2}:x\neq 0\Big\}\le\min\Big\{\frac{\|Wx\|_2}{\|x\|_2}:\|x\|_2\ge 1,\ \|x\|_\infty\ge 1\Big\}\le\min\{\|Wx\|_2:\|x\|_2\ge 1,\ \|x\|_\infty\ge 1\}\le\min\{\|Wx\|_2:\|x\|_2=1\}=\sigma_{\min}(W).$$

    Then, we see that

    $$\sigma_{\min}(W)=\min\{\|Wx\|_2:\|x\|_2\ge 1,\ \|x\|_\infty\ge 1\}.$$

    From the definition of $\vartheta_i(\bar x)$ in (2.4), this yields that

    $$\Big\|\sum_{i:|W_i\bar x|\le\nu}W_i^{\top}\vartheta_i(\bar x)\Big\|_2\ge\sigma_{\min}(W_I)\ge\sigma_{\min}(W),$$

    where $W_I$ is the submatrix consisting of the rows of $W$ indexed by $I:=\{i:|W_i\bar x|\le\nu\}$ [34].

    Combining (2.6) and (2.7), we have

    $$\frac{\lambda\sigma_{\min}(W)}{\nu}\le\frac{\lambda}{\nu}\Big\|\sum_{i:|W_i\bar x|\le\nu}W_i^{\top}\vartheta_i(\bar x)\Big\|_2=\|\xi(\bar x)\|_2\le L_f.$$

    This leads to a contradiction with $\nu<\frac{\lambda\sigma_{\min}(W)}{L_f}$. Then, (2.5) holds for $W_i\bar x=\nu$. A similar analysis can be given for the case $W_i\bar x=-\nu$, which completes the proof.
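The selection rule (2.5) is straightforward to evaluate componentwise; a minimal sketch (illustrative values of $W\bar x$ and $\nu$, not data from the paper):

```python
import numpy as np

def lifted_index(Wx, nu):
    """Unique index vector of Proposition 2.2, entrywise rule (2.5):
    d_i = 1 if |W_i x| < nu, 2 if W_i x >= nu, 3 if W_i x <= -nu."""
    d = np.ones(len(Wx), dtype=int)   # case 1: |W_i x| < nu
    d[Wx >= nu] = 2                   # case 2
    d[Wx <= -nu] = 3                  # case 3
    return d

print(lifted_index(np.array([0.1, 0.7, -0.9]), nu=0.5))  # -> [1 2 3]
```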

    For a given $d=(d_1,\dots,d_p)\in D^p:=\{d\in\mathbb{R}^p:d_i\in\{1,2,3\},\ i=1,\dots,p\}$, we define

    $$\Phi_d(Wx):=\sum_{i=1}^p\frac{|W_i x|}{\nu}-\sum_{i=1}^p\theta_{d_i}(W_i x), \tag{2.8}$$

    which is convex with respect to $x$. It can be verified that

    $$\Phi(Wx)=\min_{d\in D^p}\Phi_d(Wx),\quad\forall x\in\Omega.$$

    In particular, for a fixed $\bar x\in\Omega$, $\Phi(W\bar x)=\Phi_{d^{W\bar x}}(W\bar x)$ and the following relation holds:

    $$\bar x\ \text{is a lifted stationary point of (1.7)}\iff\bar x\in\arg\min_{x\in\Omega}f(x)+\lambda\Phi_{d^{W\bar x}}(Wx). \tag{2.9}$$
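The decomposition (2.8) and the identity $\Phi(Wx)=\min_{d\in D^p}\Phi_d(Wx)$ can be checked numerically by brute force over all index vectors; in the small sketch below (illustrative values), the minimum is attained at $d=(1,2,3)$, the vector given by rule (2.5):

```python
import numpy as np
from itertools import product

nu = 0.5

def theta(t, i):
    """theta_1(t) = 0, theta_2(t) = t/nu - 1, theta_3(t) = -t/nu - 1."""
    return {1: 0.0, 2: t / nu - 1.0, 3: -t / nu - 1.0}[i]

def Phi_d(Wx, d):
    """Convex function (2.8): sum_i |W_i x|/nu - sum_i theta_{d_i}(W_i x)."""
    return sum(abs(t) / nu - theta(t, di) for t, di in zip(Wx, d))

def Phi(Wx):
    """Capped-l1 penalty: sum_i min{1, |W_i x|/nu}."""
    return sum(min(1.0, abs(t) / nu) for t in Wx)

Wx = np.array([0.1, 0.7, -0.9])
# Brute force over all d in {1,2,3}^3: the minimum of Phi_d equals Phi.
best = min(Phi_d(Wx, d) for d in product([1, 2, 3], repeat=3))
print(best, Phi(Wx))  # both 2.2
```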

    The next lemma describes a lower bound property.

    Lemma 2.3. If $\bar x\in\Omega$ is a lifted stationary point of (1.7), then it holds that

    $$W_i\bar x\in(-\nu,\nu)\ \Longrightarrow\ W_i\bar x=0,\quad i=1,\dots,p. \tag{2.10}$$

    Proof. Suppose that $\bar x$ is a lifted stationary point of (1.7). Assume that $W_i\bar x\in(-\nu,\nu)\setminus\{0\}$ for some $i\in\{1,\dots,p\}$. From (2.5) and Assumption 2.1, we have $d^{W\bar x}_i=1$ and $W_i\bar x\in(l_i,u_i)$. By (2.3), there exists $\xi(\bar x)\in\partial f(\bar x)$ such that

    $$0=\xi(\bar x)+\frac{\lambda}{\nu}\sum_{i:|W_i\bar x|<\nu}W_i^{\top}\vartheta_i(\bar x).$$

    Through the analysis in the proof of Proposition 2.2, combining (2.6) and (2.7), we have

    $$\frac{\lambda\sigma_{\min}(W)}{\nu}\le\frac{\lambda}{\nu}\Big\|\sum_{i:|W_i\bar x|<\nu}W_i^{\top}\vartheta_i(\bar x)\Big\|_2=\|\xi(\bar x)\|_2\le L_f,$$

    which leads to a contradiction with $\nu<\frac{\lambda\sigma_{\min}(W)}{L_f}$. Consequently, $W_i\bar x\in(-\nu,\nu)$ implies $W_i\bar x=0$ for $i=1,\dots,p$, and the proof is completed.

    This subsection discusses the relationship between the global minimizers and local minimizers of (1.3) and (1.7). First, Theorem 2.4 discusses the relationship between the local minimizers of (1.3) and (1.7). Second, Theorem 2.5 states that (1.3) and (1.7) have the same global minimizers. We use the lower bound property mentioned in Lemma 2.3 to prove Theorems 2.4 and 2.5.

    Theorem 2.4. If $\bar x$ is a lifted stationary point of (1.7), then it is a local minimizer of (1.3) and the objective functions have the same value at $\bar x$, i.e., $f(\bar x)+\lambda\Phi(W\bar x)=f(\bar x)+\lambda\|W\bar x\|_0$.

    Proof. Combining the lower bound property of $W\bar x$ in (2.10) and the definition of $\Phi_{d^{W\bar x}}$ in (2.8), for any $x\in\mathbb{R}^n$, we have

    $$\Phi_{d^{W\bar x}}(Wx):=\sum_{i=1}^p\frac{|W_i x|}{\nu}-\sum_{i=1}^p\theta_{d^{W\bar x}_i}(W_i x)=\sum_{i:|W_i\bar x|\ge\nu}1+\sum_{i:|W_i\bar x|<\nu}\frac{|W_i x|}{\nu}=\|W\bar x\|_0+\sum_{i:W_i\bar x=0}\frac{|W_i x|}{\nu}.$$

    Then

    $$\Phi_{d^{W\bar x}}(Wx)\le\|Wx\|_0,\quad\forall x\in B_\varrho(\bar x),\ \text{for some}\ \varrho>0. \tag{2.11}$$

    Combining this with $\Phi(W\bar x)=\|W\bar x\|_0$ and (2.9), we have

    $$f(\bar x)+\lambda\|W\bar x\|_0\le f(x)+\lambda\|Wx\|_0,\quad\forall x\in\Omega\cap B_\varrho(\bar x).$$

    Thus, $\bar x$ is a local minimizer of (1.3).

    Theorem 2.4 indicates that any lifted stationary point of (1.7) is a local minimizer of (1.3); in particular, any local minimizer of (1.7) is also a local minimizer of (1.3).

    Theorem 2.5. A point $\bar x\in\Omega$ is a global minimizer of (1.3) if and only if it is a global minimizer of (1.7). Moreover, problems (1.3) and (1.7) have the same optimal value.

    Proof. On the one hand, let $\bar x\in\Omega$ be a global minimizer of (1.7); then, according to Definition 2.1, $\bar x$ is a lifted stationary point of (1.7). By (2.10), $W_i\bar x\in(-\nu,\nu)$ implies $W_i\bar x=0$, so $\Phi(W\bar x)=\|W\bar x\|_0$. We have

    $$f(\bar x)+\lambda\|W\bar x\|_0=f(\bar x)+\lambda\Phi(W\bar x)\le f(x)+\lambda\Phi(Wx)\le f(x)+\lambda\|Wx\|_0,\quad\forall x\in\Omega,$$

    where the last inequality uses $\Phi(Wx)\le\|Wx\|_0$. Therefore, $\bar x$ is a global minimizer of (1.3).

    On the other hand, let $\bar x\in\Omega$ be a global minimizer of (1.3). Assume on the contrary that $\bar x$ is not a solution of (1.7). Let $\hat x$ be a global minimizer of (1.7); we obtain

    $$f(\hat x)+\lambda\Phi(W\hat x)<f(\bar x)+\lambda\Phi(W\bar x).$$

    From

    $$\Phi(W\hat x)=\|W\hat x\|_0\quad\text{and}\quad\Phi(W\bar x)\le\|W\bar x\|_0,$$

    we have

    $$f(\hat x)+\lambda\|W\hat x\|_0<f(\bar x)+\lambda\|W\bar x\|_0.$$

    This contradicts the global optimality of $\bar x$ for (1.3). Hence $\bar x$ is a global minimizer of (1.7). Therefore, (1.3) and (1.7) have the same global minimizers and optimal values.

    When $f$ is convex, $\bar x$ is a local minimizer of (1.3) if and only if $\bar x\in\Omega$ satisfies

    $$0\in\partial f(\bar x)+N_\Omega(\bar x), \tag{2.12}$$

    which is often used as a criterion for the local minimizers of problem (1.3).

    Definition 2.6. We call $\bar x\in\Omega$ a $\nu$-strong local minimizer of (1.3) if there exist $\bar\xi\in\partial f(\bar x)$ and $\bar\eta\in N_\Omega(\bar x)$ such that for any $i\in\mathrm{supp}(W\bar x)$ it holds that

    $$0=\bar\xi+\bar\eta\quad\text{and}\quad|W_i\bar x|\ge\nu.$$

    By (2.12), any ν-strong local minimizer of (1.3) is a local minimizer of it. Below we provide a result on the relationship between the ν-strong local minimizers of (1.3) and the lifted stationary points of (1.7).

    Proposition 2.7. $\bar x\in\Omega$ is a $\nu$-strong local minimizer of (1.3) if and only if it is a lifted stationary point of (1.7).

    Proof. First, by (2.9), we see that if $\bar x$ is a lifted stationary point of (1.7), then

    $$W_0(\bar x)=f(\bar x)+\lambda\|W\bar x\|_0=f(\bar x)+\lambda\Phi(W\bar x)=f(\bar x)+\lambda\Phi_{d^{W\bar x}}(W\bar x)\le f(x)+\lambda\Phi_{d^{W\bar x}}(Wx),\quad\forall x\in\Omega.$$

    Combining Lemma 2.3 and $\Phi_{d^{W\bar x}}(Wx)\le\|Wx\|_0$, $\forall x\in B_\varrho(\bar x)$ with some $\varrho>0$, from (2.11), we then have

    $$W_0(\bar x)\le W_0(x),\quad\forall x\in\Omega\cap B_\varrho(\bar x),$$

    so $\bar x$ is a $\nu$-strong local minimizer of (1.3).

    Next, because $\bar x$ is a $\nu$-strong local minimizer of (1.3), it is also a local minimizer of (1.3). Suppose $\bar x$ is a local minimizer of (1.3) but not a local minimizer of (1.7). Then there exists a local minimizer of (1.7), denoted by $\hat x$; combining (2.9), $\Phi_{d^{W\hat x}}(W\bar x)\le\|W\bar x\|_0$, $\forall\bar x\in B_\varrho(\hat x)$ in (2.11) and $\Phi(W\hat x)=\|W\hat x\|_0$, we have

    $$f(\hat x)+\lambda\|W\hat x\|_0=f(\hat x)+\lambda\Phi(W\hat x)=f(\hat x)+\lambda\Phi_{d^{W\hat x}}(W\hat x)\le f(\bar x)+\lambda\Phi_{d^{W\hat x}}(W\bar x)\le f(\bar x)+\lambda\|W\bar x\|_0,\quad\forall\bar x\in B_\varrho(\hat x),$$

    which leads to a contradiction. Thus, every local minimizer of (1.3) is a local minimizer of (1.7); that is to say, the $\nu$-strong local minimizer of (1.3) is a lifted stationary point of (1.7).

    We use Table 1 to clearly demonstrate the relationship between (1.3) and (1.7).

    Table 1.  Link between problems (1.3) and (1.7).


    The main goal of this section is to find a lifted stationary point of (1.7). Due to the presence of the matrix $W$, we cannot write an explicit solution of the subproblem via the proximal gradient method. We first approximate $f$ by a smoothing function and present some preliminary theory on smoothing methods; the second subsection proposes our algorithm; and the third subsection gives a convergence analysis of the proposed algorithm for solving (1.7).

    Throughout this paper, we approximate the loss function $f$ in (1.7) by a smoothing function $\tilde f$. When it is clear from the context, the gradient of $\tilde f(x,\mu)$ with respect to $x$ is simply denoted $\nabla\tilde f(x,\mu)$. We denote

    $$\tilde W_d(x,\mu):=\tilde f(x,\mu)+\lambda\Phi_d(Wx),\qquad\tilde W(x,\mu):=\tilde f(x,\mu)+\lambda\Phi(Wx),$$

    where the smoothing parameter $\mu>0$ and $d\in D^p$. For any fixed $\mu>0$ and $d\in D^p$, $\tilde W_d(x,\mu)$ is nonsmooth and convex, and $\tilde W(x,\mu)$ is nonsmooth and nonconvex. Due to

    $$\Phi(Wx)=\min_{d\in D^p}\Phi_d(Wx),\quad\forall x\in\Omega,$$

    we obtain

    $$\tilde W_d(x,\mu)\ge\tilde W(x,\mu),\quad\forall d\in D^p,\ x\in\Omega,\ \mu\in(0,\bar\mu]. \tag{3.1}$$

    The following definition collects the properties of the smoothing function $\tilde f$ that are frequently used in the convergence analysis.

    Definition 3.1. We call $\tilde f:\mathbb{R}^n\times[0,\bar\mu]\to\mathbb{R}$ with $\bar\mu>0$ a smoothing function of the convex function $f$ in (1.7) if $\tilde f(\cdot,\mu)$ is continuously differentiable in $\mathbb{R}^n$ for any fixed $\mu>0$ and satisfies the following conditions:

    (ⅰ) $\lim_{z\to x,\,\mu\downarrow 0}\tilde f(z,\mu)=f(x)$, $\forall x\in\Omega$;

    (ⅱ) (convexity) $\tilde f(x,\mu)$ is convex with respect to $x$ in $\Omega$ for any fixed $\mu>0$;

    (ⅲ) (gradient consistency) $\{\lim_{z\to x,\,\mu\downarrow 0}\nabla_z\tilde f(z,\mu)\}\subseteq\partial f(x)$, $\forall x\in\Omega$;

    (ⅳ) (Lipschitz continuity with respect to $\mu$) there exists a positive constant $\kappa$ such that

    $$|\tilde f(x,\mu_2)-\tilde f(x,\mu_1)|\le\kappa|\mu_1-\mu_2|,\quad\forall x\in\Omega,\ \mu_1,\mu_2\in[0,\bar\mu];$$

    (ⅴ) (Lipschitz continuity with respect to $x$) there exists a constant $L>0$ such that for any $\mu\in(0,\bar\mu]$, $\nabla_x\tilde f(\cdot,\mu)$ is Lipschitz continuous on $\Omega$ with Lipschitz constant $L\mu^{-1}$.

    By virtue of Definition 3.1-(ⅳ), we obtain that

    $$|\tilde f(x,\mu)-f(x)|\le\kappa\mu,\quad\forall x\in\Omega,\ 0<\mu\le\bar\mu. \tag{3.2}$$

    Next, we aim to solve the following problem with $\mu>0$ and a vector $d\in D^p$:

    $$\min_{x\in\Omega}\ \tilde W_d(x,\mu)=\tilde f(x,\mu)+\lambda\Phi_d(Wx), \tag{3.3}$$

    by introducing an approximation of $\tilde W_d(\cdot,\mu)$ around a given point $z$ as follows:

    $$G_{d,\gamma}(x,z,\mu)=\tilde f(z,\mu)+\langle x-z,\nabla\tilde f(z,\mu)\rangle+\frac{1}{2}\gamma\mu^{-1}\|x-z\|^2+\lambda\Phi_d(Wx) \tag{3.4}$$

    with a constant $\gamma>0$. Since $\Phi_d(Wx)$ is convex with respect to $x$ for any fixed $d\in D^p$, the function $G_{d,\gamma}(x,z,\mu)$ is strongly convex with respect to $x$ for any fixed $d$, $\gamma$, $z$ and $\mu$. Then, we solve the problem

    $$\min_{x\in\Omega}\ G_{d,\gamma}(x,z,\mu)$$

    to find an approximate solution of (3.3).
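The strong convexity of $G_{d,\gamma}(\cdot,z,\mu)$ (modulus $\gamma\mu^{-1}$, coming from the quadratic term) can be checked numerically. The sketch below is an illustration on randomly generated $W$, $z$ and a stand-in gradient vector (none of these come from the paper); it verifies the midpoint inequality that strong convexity implies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = p = 4
W = rng.standard_normal((p, n))
z = rng.standard_normal(n)
grad = rng.standard_normal(n)      # stands in for the gradient of f~(z, mu)
lam, nu, gamma, mu = 0.1, 0.5, 1.0, 0.2
Wz = W @ z
d = np.where(Wz >= nu, 2, np.where(Wz <= -nu, 3, 1))  # rule (2.5)

def theta(t, i):
    return {1: 0.0, 2: t / nu - 1.0, 3: -t / nu - 1.0}[i]

def G(x):
    """Surrogate (3.4) with the constant f~(z, mu) dropped:
    <x - z, grad> + (gamma/(2 mu)) ||x - z||^2 + lam * Phi_d(Wx)."""
    Phi_d = sum(abs(t) / nu - theta(t, di) for t, di in zip(W @ x, d))
    return (x - z) @ grad + 0.5 * gamma / mu * (x - z) @ (x - z) + lam * Phi_d

# Midpoint test of strong convexity with modulus gamma/mu:
x, y = rng.standard_normal(n), rng.standard_normal(n)
m = 0.5 * (x + y)
lhs = G(m)
rhs = 0.5 * (G(x) + G(y)) - gamma / (8 * mu) * (x - y) @ (x - y)
print(lhs <= rhs + 1e-10)  # -> True
```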

    In this subsection, we propose a new algorithm (see Algorithm 1) for finding a lifted stationary point of (1.7). Since $W$ is a general linear mapping and $\Phi(Wx)$ is a nonseparable function, the smoothing proximal gradient algorithm of [9] cannot solve the subproblem explicitly. The proposed algorithm combines the smoothing method with a gradient descent scheme, so we call it the smoothing gradient descent (SGD) algorithm; we use it to obtain approximate solutions of the subproblem. Let

    $$P_s=\{k\in\mathbb{N}:\mu_{k+1}\neq\mu_k\},$$

    and denote by $p^s_r$ the $r$th smallest number in $P_s$. Then, we obtain the following updating rule for $\{\mu_k\}$:

    $$\mu_k=\mu_{p^s_r+1}=\frac{\mu_0}{(p^s_r+1)^\sigma},\quad p^s_r+1\le k\le p^s_{r+1}, \tag{3.5}$$

    which will be used in the proofs of Lemmas 3.2 and 3.4.

    Algorithm 1: Smoothing Gradient Descent (SGD) algorithm
    Require: Take $x^{-1}=x^0\in\Omega$ and $\mu_{-1}=\mu_0\in(0,\bar\mu]$. Give parameters $\rho>1$, $\sigma>\frac12$, $\alpha>0$ and $0<\underline\gamma<\bar\gamma$. Set $k=0$.
    1: while a termination criterion is not met do
    2:      Step 1. Choose $\gamma_k\in[\underline\gamma,\bar\gamma]$ and let $d^k:=d^{Wx^k}$, where $d^{Wx^k}$ is defined in (2.5).
    3:      Step 2.
    4:      2a) Compute

    $$\hat x^{k+1}=\arg\min_{x\in\Omega}G_{d^k,\gamma_k}(x,x^k,\mu_k). \tag{3.6}$$

    5:      2b) If $\hat x^{k+1}$ satisfies

    $$\tilde W_{d^k}(\hat x^{k+1},\mu_k)\le G_{d^k,\gamma_k}(\hat x^{k+1},x^k,\mu_k), \tag{3.7}$$

    6:          set

    $$x^{k+1}=\hat x^{k+1} \tag{3.8}$$

    7:          and go to Step 3. Otherwise, let $\gamma_k=\rho\gamma_k$ and return to 2a).
    8:      Step 3. If

    $$\tilde W(x^{k+1},\mu_k)+\kappa\mu_k\le\tilde W(x^k,\mu_{k-1})+\kappa\mu_{k-1}-\alpha\mu_k^2, \tag{3.9}$$

    9:              set $\mu_{k+1}=\mu_k$; otherwise, set

    $$\mu_{k+1}=\frac{\mu_0}{(k+1)^\sigma}. \tag{3.10}$$

    10:             Increment $k$ by one and return to Step 1.
    11: end while
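To make the scheme concrete, the following sketch instantiates the main loop for the special case $W=I$ and $\Omega=\mathbb{R}^n$ with the smoothed $\ell_1$ loss of (4.3); the $\mu$-update is simplified to a fixed geometric schedule rather than the test (3.9)-(3.10), and all data are synthetic, so this is an illustration of the idea rather than the paper's algorithm. With $W=I$, $\Phi_{d^k}$ is separable and the subproblem (3.6) reduces to a shifted soft-thresholding step:

```python
import numpy as np

def soft(v, tau):
    """Soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sgd_capped_l1(A, b, lam=1e-3, nu=0.1, mu0=1.0, K=400):
    """Illustrative sketch with W = I and Omega = R^n (simplifications;
    the paper treats general W, a box Omega, and the mu-test (3.9)-(3.10)).
    Loss: f(x) = (1/m)||Ax - b||_1, smoothed as in (4.3)."""
    m, n = A.shape
    gamma = np.linalg.norm(A, 2) ** 2 / m + 1.0   # safe curvature constant
    x = np.zeros(n)
    for k in range(K):
        mu = mu0 * 0.95 ** k                      # simplified mu-schedule
        r = A @ x - b
        dtheta = np.where(np.abs(r) > mu, np.sign(r), r / mu)
        g = A.T @ dtheta / m                      # gradient of smoothed loss
        s = np.where(x >= nu, 1.0, np.where(x <= -nu, -1.0, 0.0))  # (2.5)
        t = mu / gamma
        # coordinatewise minimizer of <x - z, g> + (gamma/(2 mu))||x - z||^2
        #                             + (lam/nu)(||x||_1 - s^T x):
        x = soft(x - t * (g - lam * s / nu), t * lam / nu)
    return x

rng = np.random.default_rng(1)
n = 10
A = np.linalg.qr(rng.standard_normal((n, n)))[0]  # orthogonal test matrix
x_true = np.zeros(n); x_true[2] = 1.0; x_true[7] = -1.5
b = A @ x_true
x_hat = sgd_capped_l1(A, b)
print(np.max(np.abs(x_hat - x_true)))             # small recovery error
```

Note the role of the shift $s$: for entries with $|x_i|\ge\nu$ the linear part of $\theta_{d_i}$ cancels the $\ell_1$ shrinkage exactly, which is how the capped penalty avoids the bias of plain soft-thresholding on large coefficients.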

    Lemma 3.2. The proposed SGD algorithm is well defined, and the sequences $\{x^k\}$, $\{\gamma_k\}$ and $\{\mu_k\}$ generated by it have the following properties:

    (ⅰ) $\{x^k\}\subseteq\Omega$ and $\{\gamma_k\}\subseteq[\underline\gamma,\max\{\bar\gamma,\rho L\}]$;

    (ⅱ) there are infinitely many elements in $P_s$ and $\lim_{k\to\infty}\mu_k=0$.

    Proof. (ⅰ) By rearranging (3.7), we obtain

    $$\tilde f(\hat x^{k+1},\mu_k)\le\tilde f(x^k,\mu_k)+\langle\nabla\tilde f(x^k,\mu_k),\hat x^{k+1}-x^k\rangle+\frac{1}{2}\gamma_k\mu_k^{-1}\|\hat x^{k+1}-x^k\|^2.$$

    According to Definition 3.1-(ⅴ), (3.7) holds whenever $\gamma_k\ge L$. Thus $\gamma_k$ is updated at most $\log_\rho(L/\underline\gamma)+1$ times at each iteration. Hence, the SGD algorithm is well defined, and we have $\gamma_k\le\max\{\bar\gamma,\rho L\}$, $\forall k\in\mathbb{N}$. From (3.8), it is easy to verify that $x^{k+1}\in\Omega$ by $x^k\in\Omega$ and $\hat x^{k+1}\in\Omega$.

    (ⅱ) Since $\{\mu_k\}$ is non-increasing, to prove (ⅱ) we assume by contradiction that $\lim_{k\to\infty}\mu_k=\hat\mu>0$. If $\{\mu_k\}$ converges to a nonzero value, then (3.10) is invoked only finitely many times, which means that there exists $K\in\mathbb{N}$ such that $\mu_k=\hat\mu$, $\forall k\ge K$. Substituting $\hat\mu$ into (3.9), we obtain

    $$\tilde W(x^{k+1},\mu_k)+\kappa\mu_k\le\tilde W(x^k,\mu_{k-1})+\kappa\mu_{k-1}-\alpha\hat\mu^2,\quad\forall k\ge K+1.$$

    By the above inequality, we have

    $$\lim_{k\to\infty}\tilde W(x^{k+1},\mu_k)+\kappa\mu_k=-\infty. \tag{3.11}$$

    However, by $\{x^k\}\subseteq\Omega$, (3.2) and Theorem 2.5,

    $$\tilde W(x^{k+1},\mu_k)+\kappa\mu_k\ge W(x^{k+1})\ge\min_{x\in\Omega}W(x)=\min_{x\in\Omega}W_0(x),\quad\forall k\ge K. \tag{3.12}$$

    Since (3.11) and (3.12) are contradictory, (ⅱ) holds.

    Lemma 3.3. For any $k\in\mathbb{N}$, we have

    $$\tilde W(x^{k+1},\mu_k)\le\tilde W(x^k,\mu_k)-\frac{1}{2}\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|^2, \tag{3.13}$$

    which implies that $\{\tilde W(x^{k+1},\mu_k)+\kappa\mu_k\}$ is non-increasing and $\lim_{k\to\infty}\tilde W(x^{k+1},\mu_k)=\lim_{k\to\infty}W(x^k)$.

    Proof. Since $G_{d^k,\gamma_k}(x,x^k,\mu_k)$ is strongly convex with modulus $\gamma_k\mu_k^{-1}$, we have, for any $\xi_G\in\partial_x G_{d^k,\gamma_k}(\hat x^{k+1},x^k,\mu_k)$,

    $$G_{d^k,\gamma_k}(x,x^k,\mu_k)\ge G_{d^k,\gamma_k}(\hat x^{k+1},x^k,\mu_k)+\langle\xi_G,x-\hat x^{k+1}\rangle+\frac{1}{2}\gamma_k\mu_k^{-1}\|\hat x^{k+1}-x\|^2. \tag{3.14}$$

    Using the definition of $\hat x^{k+1}$ in (3.6) and $x^{k+1}=\hat x^{k+1}$ when (3.7) holds, we obtain

    $$G_{d^k,\gamma_k}(x^{k+1},x^k,\mu_k)\le G_{d^k,\gamma_k}(x,x^k,\mu_k)-\frac{1}{2}\gamma_k\mu_k^{-1}\|x^{k+1}-x\|^2,\quad\forall x\in\Omega.$$

    By the definition of the function $G_{d^k,\gamma_k}$ given in (3.4), rearranging the inequality above, we have

    $$\lambda\Phi_{d^k}(Wx^{k+1})\le\lambda\Phi_{d^k}(Wx)+\langle x-x^{k+1},\nabla\tilde f(x^k,\mu_k)\rangle+\frac{1}{2}\gamma_k\mu_k^{-1}\|x-x^k\|^2-\frac{1}{2}\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|^2-\frac{1}{2}\gamma_k\mu_k^{-1}\|x^{k+1}-x\|^2. \tag{3.15}$$

    Moreover, (3.7) can be written as

    $$\tilde W_{d^k}(x^{k+1},\mu_k)\le\tilde f(x^k,\mu_k)+\langle x^{k+1}-x^k,\nabla\tilde f(x^k,\mu_k)\rangle+\frac{1}{2}\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|^2+\lambda\Phi_{d^k}(Wx^{k+1}). \tag{3.16}$$

    Summing up (3.15) and (3.16), we obtain that

    $$\tilde W_{d^k}(x^{k+1},\mu_k)\le\tilde f(x^k,\mu_k)+\lambda\Phi_{d^k}(Wx)+\langle x-x^k,\nabla\tilde f(x^k,\mu_k)\rangle+\frac{1}{2}\gamma_k\mu_k^{-1}\|x-x^k\|^2-\frac{1}{2}\gamma_k\mu_k^{-1}\|x^{k+1}-x\|^2,\quad\forall x\in\Omega. \tag{3.17}$$

    For a fixed $\mu>0$, the convexity of $\tilde f(x,\mu)$ with respect to $x$ indicates

    $$\tilde f(x^k,\mu_k)+\langle x-x^k,\nabla\tilde f(x^k,\mu_k)\rangle\le\tilde f(x,\mu_k),\quad\forall x\in\Omega. \tag{3.18}$$

    Combining (3.17) and (3.18) and utilizing the definition of $\tilde W_{d^k}$, one has

    $$\tilde W_{d^k}(x^{k+1},\mu_k)\le\tilde W_{d^k}(x,\mu_k)+\frac{1}{2}\gamma_k\mu_k^{-1}\|x-x^k\|^2-\frac{1}{2}\gamma_k\mu_k^{-1}\|x^{k+1}-x\|^2,\quad\forall x\in\Omega. \tag{3.19}$$

    Letting $x=x^k$ in (3.19) and using $d^k=d^{Wx^k}$, we obtain

    $$\tilde W_{d^k}(x^{k+1},\mu_k)+\frac{1}{2}\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|^2\le\tilde W(x^k,\mu_k). \tag{3.20}$$

    Because $\tilde W_{d^k}(x^{k+1},\mu_k)\ge\tilde W(x^{k+1},\mu_k)$, (3.20) leads to (3.13). Due to Definition 3.1-(ⅳ), we have

    $$\tilde f(x^k,\mu_k)-\tilde f(x^k,\mu_{k-1})\le\kappa(\mu_{k-1}-\mu_k),$$

    and then it follows that

    $$\tilde W(x^k,\mu_k)\le\tilde W(x^k,\mu_{k-1})+\kappa(\mu_{k-1}-\mu_k).$$

    By (3.13), we obtain

    $$\tilde W(x^{k+1},\mu_k)+\kappa\mu_k+\frac{1}{2}\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|^2\le\tilde W(x^k,\mu_{k-1})+\kappa\mu_{k-1}, \tag{3.21}$$

    and (3.21) implies the non-increasing property of $\{\tilde W(x^{k+1},\mu_k)+\kappa\mu_k\}$. This result and (3.12) ensure the existence of $\lim_{k\to\infty}\tilde W(x^{k+1},\mu_k)+\kappa\mu_k$. By virtue of $\lim_{k\to\infty}\mu_k=0$ and Definition 3.1-(ⅰ), we obtain

    $$\lim_{k\to\infty}\tilde W(x^{k+1},\mu_k)=\lim_{k\to\infty}W(x^k).$$

    The proof is completed.

    Lemma 3.4. The following statements hold:

    (ⅰ) $\sum_{k=0}^\infty\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|^2\le 2\big(\tilde W(x^0,\mu_{-1})+\kappa\mu_{-1}-\min_\Omega W\big)$;

    (ⅱ) $\sum_{k=0}^\infty\mu_k^2\le\Lambda$ with $\Lambda=\frac{1}{\alpha}\big(\tilde W(x^0,\mu_{-1})+\kappa\mu_{-1}-\min_{x\in\Omega}W(x)\big)+\frac{2\mu_0^2\sigma}{2\sigma-1}<\infty$.

    Proof. (ⅰ) Recalling (3.21), for all $k\in\mathbb{N}$, we obtain

    $$\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|^2\le 2\big(\tilde W(x^k,\mu_{k-1})+\kappa\mu_{k-1}-\tilde W(x^{k+1},\mu_k)-\kappa\mu_k\big). \tag{3.22}$$

    Adding up the above inequality over $k=0,\dots,K$ gives

    $$\sum_{k=0}^K\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|^2\le 2\big(\tilde W(x^0,\mu_{-1})+\kappa\mu_{-1}-\tilde W(x^{K+1},\mu_K)-\kappa\mu_K\big). \tag{3.23}$$

    Letting $K$ tend to infinity in (3.23) and invoking (3.12), we obtain (ⅰ).

    (ⅱ) From (3.5), we have

    $$\sum_{k\in P_s}\mu_k^2=\sum_{r=1}^\infty\frac{\mu_0^2}{(p^s_r+1)^{2\sigma}}\le\sum_{k=1}^\infty\frac{\mu_0^2}{k^{2\sigma}}\le\frac{2\mu_0^2\sigma}{2\sigma-1}, \tag{3.24}$$

    where $p^s_r$ is the $r$th smallest element in $P_s$. When $k\notin P_s$, (3.9) gives

    $$\alpha\mu_k^2\le\tilde W(x^k,\mu_{k-1})+\kappa\mu_{k-1}-\tilde W(x^{k+1},\mu_k)-\kappa\mu_k,$$

    which together with the non-increasing property of $\{\tilde W(x^{k+1},\mu_k)+\kappa\mu_k\}$ and (3.12) implies

    $$\sum_{k\notin P_s}\mu_k^2\le\frac{1}{\alpha}\big(\tilde W(x^0,\mu_{-1})+\kappa\mu_{-1}-\min_\Omega W\big). \tag{3.25}$$

    Combining (3.24) and (3.25), the proof of (ⅱ) is completed.

    Theorem 3.5. If $\{x^k:k\in P_s\}$ has an accumulation point, then the accumulation point is a lifted stationary point of (1.7).

    Proof. Since (3.9) fails for $k\in P_s$, rearranging (3.21) yields

    $$\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|^2\le 2\alpha\mu_k^2,$$

    which gives

    $$\|x^{k+1}-x^k\|\le\sqrt{2\alpha\gamma_k^{-1}\mu_k^3}.$$

    Thus,

    $$\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|\le\sqrt{2\alpha\gamma_k\mu_k},$$

    which together with $\lim_{k\to\infty}\mu_k=0$ and $\{\gamma_k\}\subseteq[\underline\gamma,\max\{\bar\gamma,\rho L\}]$ implies

    $$\lim_{k\in P_s,\,k\to\infty}\gamma_k\mu_k^{-1}\|x^{k+1}-x^k\|=0\quad\text{and}\quad\lim_{k\in P_s,\,k\to\infty}\|x^{k+1}-x^k\|=0. \tag{3.26}$$

    Let $\bar x$ be an accumulation point of $\{x^k\}_{k\in P_s}$; then $\{x^k\}$ has a subsequence $\{x^{k_t}\}_{k_t\in P_s}$ converging to $\bar x$, and (3.26) implies

    $$\lim_{t\to\infty}\gamma_{k_t}\mu_{k_t}^{-1}\|x^{k_t+1}-x^{k_t}\|=0\quad\text{and}\quad\lim_{t\to\infty}x^{k_t+1}=\bar x. \tag{3.27}$$

    Recalling $x^{k_t+1}=\hat x^{k_t+1}$ defined in (3.6) and its first-order necessary optimality condition, there exists $\zeta^{k_t}\in\partial\Phi_{d^{k_t}}(Wx^{k_t+1})$ such that

    $$\big\langle\nabla\tilde f(x^{k_t},\mu_{k_t})+\gamma_{k_t}\mu_{k_t}^{-1}(x^{k_t+1}-x^{k_t})+\lambda\zeta^{k_t},\,x-x^{k_t+1}\big\rangle\ge 0,\quad\forall x\in\Omega. \tag{3.28}$$

    Since the elements of $\{d^{k_t}:t\in\mathbb{N}\}$ are finitely many and $\lim_{t\to\infty}x^{k_t+1}=\bar x$, there exist a subsequence of $\{k_t\}$, denoted $\{k_{t_j}\}$, and $\bar d\in D^p(W\bar x)$ such that $d^{k_{t_j}}=\bar d$, $\forall j\in\mathbb{N}$. By the upper semicontinuity of $\partial\Phi_{\bar d}$ and $\lim_{j\to\infty}x^{k_{t_j}+1}=\bar x$, it holds that

    $$\Big\{\lim_{j\to\infty}\zeta^{k_{t_j}}:\zeta^{k_{t_j}}\in\partial\Phi_{d^{k_{t_j}}}(Wx^{k_{t_j}+1})\Big\}\subseteq\partial\Phi_{\bar d}(W\bar x). \tag{3.29}$$

    Along the subsequence $\{k_{t_j}\}$, letting $j\to\infty$ in (3.28), from Definition 3.1-(ⅲ), (3.27) and (3.29), we obtain that there exist $\bar\xi\in\partial f(\bar x)$ and $\bar\zeta_{\bar d}\in\partial\Phi_{\bar d}(W\bar x)$ such that

    $$\langle\bar\xi+\lambda\bar\zeta_{\bar d},\,x-\bar x\rangle\ge 0,\quad\forall x\in\Omega. \tag{3.30}$$

    By $\bar d\in D^p(W\bar x)$ and the convexity of $f+\lambda\Phi_{\bar d}$, (3.30) implies

    $$f(x)+\lambda\Phi_{\bar d}(Wx)-f(\bar x)-\lambda\Phi_{\bar d}(W\bar x)\ge\langle\bar\xi+\lambda\bar\zeta_{\bar d},\,x-\bar x\rangle\ge 0,\quad\forall x\in\Omega,$$

    which implies that $\bar x$ is a lifted stationary point of (1.7).

    The purpose of this section is to verify the theoretical results and the properties of the SGD algorithm through numerical experiments. We present Examples 4.1 and 4.2, which are respectively an under-determined linear regression problem and an over-determined censored regression problem. In particular, the process of solving subproblem (3.6) is very similar to that of solving the LASSO problem.

    All experiments are performed in MATLAB 2016a on a Lenovo PC with an Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz and 8GB RAM. In the following examples, the stopping criterion is set as

    $$\text{number of iterations}\ \ge\ \text{Maxiter}\quad\text{or}\quad\mu_k\le\varepsilon. \tag{4.1}$$

    That is, we stop the proposed algorithm if the number of iterations exceeds Maxiter or the smoothing parameter falls below $\varepsilon$. Denote by $\bar x$ the output of the iterate $x^k$. The parameter $\alpha=1$ is fixed throughout the numerical experiments.

    Example 4.1. (Linear regression problem) Linear regression problems have been widely used in information theory [1], signal processing [35,36] and image restoration [6,36]. As pointed out in [20], the $\ell_1$ loss function is nonsmooth, but it is more robust and has a stronger capability of outlier resistance than the least squares loss function in linear regression problems. We therefore consider the following $\ell_0$ regularized linear regression problem with $\ell_1$ loss function:

    $$\min_{x\in\Omega}\ W_0(x):=\frac{1}{m}\|Ax-b\|_1+\lambda\|Wx\|_0, \tag{4.2}$$

    where $A\in\mathbb{R}^{m\times n}$ with $m=n$ and $b\in\mathbb{R}^m$. A smoothing function of the $\ell_1$ loss function can be defined by

    $$\tilde f(x,\mu)=\frac{1}{m}\sum_{i=1}^m\tilde\theta(A_i x-b_i,\mu)\quad\text{with}\quad\tilde\theta(s,\mu)=\begin{cases}|s| & \text{if } |s|>\mu,\\[2pt] \dfrac{s^2}{2\mu}+\dfrac{\mu}{2} & \text{if } |s|\le\mu.\end{cases} \tag{4.3}$$
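The smoothing (4.3) is a Huber-type function; a quick numerical check (an illustration, not the paper's MATLAB code) confirms the uniform bound $|\tilde\theta(s,\mu)-|s||\le\frac{\mu}{2}$, consistent with the constant $\kappa=\frac12$ in Definition 3.1-(ⅳ):

```python
import numpy as np

def theta_smooth(s, mu):
    """Huber-type smoothing (4.3) of |s|:
    |s| if |s| > mu, else s^2/(2 mu) + mu/2."""
    return np.where(np.abs(s) > mu, np.abs(s), s ** 2 / (2 * mu) + mu / 2)

s = np.linspace(-2.0, 2.0, 401)
for mu in (0.5, 0.1, 0.01):
    gap = np.max(np.abs(theta_smooth(s, mu) - np.abs(s)))
    print(mu, gap)   # largest gap is mu/2, attained at s = 0
```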

    Denote s the 0 norm of true solution x, i.e., Wx0=s. For the given positive integers m,n and s, the data are generated by

    $$W=\text{randn}(p,n);\quad B=\text{randn}(n,m);\quad A=\text{orth}(B);\quad b=Ax^*+0.01\,\text{randn}(m,1).$$
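    For readers outside MATLAB, the same generation can be mirrored in NumPy. This is an illustrative sketch under stated assumptions: the seed, the use of a QR factor in place of `orth` (valid here since $m=n$ and $B$ is almost surely full rank), and the particular sparse `x_true` are all choices of ours, not the paper's.

```python
import numpy as np

# NumPy mirror of the MATLAB data generation in Example 4.1.
rng = np.random.default_rng(0)
m, n, p, s = 45, 45, 45, 2

W = rng.standard_normal((p, n))
B = rng.standard_normal((n, m))
A, _ = np.linalg.qr(B)            # Q factor plays the role of orth(B)
x_true = np.zeros(n)
x_true[:s] = 1.0                  # illustrative sparse true solution
b = A @ x_true + 0.01 * rng.standard_normal(m)
```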

    In the algorithm, we set the parameters as follows: $\underline{\gamma}=\bar{\gamma}=1$, $\mu_0=3.533$, $\text{Maxiter}=10^3$, $\nu=35.6014$, $\sigma=3.0003$, $\rho=1.0001$, $\kappa=1/2$. We generate $A$, $b$ and $x^*$ with $m=n=45$, $p=45$ and $s=2$, and set $\lambda=10^{-3}$ in (4.2) and $\varepsilon=10^{-3}$ in the stopping criterion (4.1). We set $x^0=\text{ones}(n,1)$. Figure 1 plots $x^*$ and $\bar{x}$, where $x^*$ is the original signal (the true solution) and $\bar{x}$ is the output of the iterates $x^k$ from the SGD algorithm. From Figure 1, we can see that the output is very close to the original signal.
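    For concreteness, the objective value of (4.2) can be evaluated as follows. This NumPy sketch, including the zero tolerance `tol` used to count the $\ell_0$ term, is illustrative and not the paper's code.

```python
import numpy as np

# Evaluate W_0(x) = (1/m) * ||Ax - b||_1 + lambda * ||Wx||_0 from (4.2).
# A small tolerance is used for the l0 count, since floating-point
# products are rarely exactly zero.
def objective(A, b, W, x, lam, tol=1e-10):
    m = A.shape[0]
    loss = np.linalg.norm(A @ x - b, 1) / m           # l1 data-fitting term
    penalty = np.count_nonzero(np.abs(W @ x) > tol)   # ||Wx||_0
    return loss + lam * penalty

# Tiny check: A = I, b = 0, W = I, x = (1, 0) gives loss 1/2, penalty 1.
val = objective(np.eye(2), np.zeros(2), np.eye(2), np.array([1.0, 0.0]), lam=1e-3)
```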

    Figure 1.  Digital experiment of the SGD algorithm in Example 4.1 under the first form of W.

    Now we use another form of the matrix $W$ to solve Example 4.1:

    $$W=\begin{pmatrix}3 & 1 & 0 & \cdots & 0\\ 0 & 3 & 1 & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & 3 & 1\end{pmatrix}_{p\times n}.$$

    Set $\underline{\gamma}=\bar{\gamma}=1$, $\mu_0=3.533$, $\text{Maxiter}=10^3$, $\nu=36$, $\sigma=7$, $\rho=1.0001$ and $\kappa=1/2$. We randomly generate the data as follows:

    $$B=\text{randn}(n,m);\quad A=\text{orth}(B);\quad b=Ax^*+0.01\,\text{randn}(m,1).$$

    We run numerical experiments with $(m,n,p,s)=(45,45,45,2)$. Set $\lambda=10^{-3}$ in (4.2) and $\varepsilon=10^{-3}$ in the stopping criterion (4.1), and initialize $x^0=\text{randn}(n,1)$. From Figure 2, we can see that the output of $x^k$ obtained by the SGD algorithm is also close to the true solution $x^*$.

    Figure 2.  Digital experiment of SGD algorithm in Example 4.1 under the second form of W.

    The last special case of $W$ is the penalty matrix of the one-dimensional fused LASSO [39]:

    $$W=\begin{pmatrix}1 & -1 & 0 & \cdots & 0\\ 0 & 1 & -1 & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & 1 & -1\end{pmatrix}_{p\times n}.$$
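    The fused-LASSO penalty matrix is a first-difference matrix and is easy to build programmatically. The sketch below assumes the sign convention with rows $(\dots,1,-1,\dots)$ and the experiment's sizes $p=45$, $n=p+1=46$; the helper name is ours.

```python
import numpy as np

# 1-D fused-LASSO penalty matrix: the p-by-n first-difference matrix
# with rows (..., 1, -1, ...), where p = n - 1. Then (W x)_i = x_i - x_{i+1}.
def fused_lasso_matrix(n):
    p = n - 1
    W = np.zeros((p, n))
    idx = np.arange(p)
    W[idx, idx] = 1.0
    W[idx, idx + 1] = -1.0
    return W

W = fused_lasso_matrix(46)   # p = 45, n = 46 as in the experiment
```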

    Set $\nu=40$, $p=45$, $n=46$ and $x^0=\text{ones}(n,1)$; the remaining parameter values are the same as in the previous case (see Figure 3). From Figures 1–3, we can see that the output of $x^k$ obtained by the SGD algorithm is close to the true solution $x^*$.

    Figure 3.  Digital experiment of SGD algorithm in Example 4.1 under the third form of W.

    Example 4.2. (Censored regression problem) Censored regression problems arise in machine learning [37], economics [38], and biomedical and technical systems. The censored regression problem below is also a typical composite sparse optimization problem with a nonsmooth convex loss function. We consider the following $\ell_0$ regularized censored regression problem:

    $$\min_{x\in\Omega} W_0(x):=\frac{1}{m}\|\max\{Ax,0\}-b\|_1+\lambda\|Wx\|_0,$$

    where $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$. A smoothing function of the loss in (1.5) can be defined by

    $$\tilde{f}(x,\mu)=\frac{1}{m}\sum_{i=1}^{m}\tilde{\theta}(\tilde{\phi}(A_i x,\mu)-b_i,\mu) \quad\text{with}\quad \tilde{\phi}(s,\mu)=\begin{cases}\max\{s,0\} & \text{if } |s|>\mu,\\[4pt] \dfrac{(s+\mu)^2}{4\mu} & \text{if } |s|\le\mu.\end{cases}$$
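    The inner smoothing $\tilde{\phi}$ replaces the kink of $\max\{s,0\}$ with a quadratic on $|s|\le\mu$. A scalar Python sketch, for illustration:

```python
# Smoothing of max{s, 0}: quadratic transition on |s| <= mu.
# Continuity at the breakpoints: at s = mu, (2*mu)^2 / (4*mu) = mu = max{mu, 0};
# at s = -mu, (0)^2 / (4*mu) = 0 = max{-mu, 0}.
def tilde_phi(s, mu):
    if abs(s) > mu:
        return max(s, 0.0)
    return (s + mu) ** 2 / (4.0 * mu)
```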

    Set $\varepsilon=10^{-2}$, $\nu=16.0009$, $\lambda=10^{-3}$, $\mu_0=10.8999$, $\sigma=4.0003$, $\kappa=1/2$, $\rho=1.2006$ and $x^0=\text{randn}(n,1)$. In this example, we run numerical experiments with $(m,n,p,s)=(40,40,40,2)$ and randomly generate the problem data as follows:

    $$A=\text{randn}(m,n);\quad W=\text{randn}(p,n);\quad b=\max(Ax^*+0.01\,\text{randn}(m,1),0).$$

    The computational results for $x^*$ and $\bar{x}$ are shown in Figure 4.

    Figure 4.  Numerical results of the SGD algorithm for Example 4.2.

    We use the following form of W to solve Example 4.2:

    $$W=\begin{pmatrix}1 & 2 & 0 & \cdots & 0 & 0\\ 2 & 1 & 3 & \cdots & 0 & 0\\ 0 & 3 & 1 & 4 & \cdots & 0\\ \vdots & & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & 0 & 1 & m\\ 0 & 0 & \cdots & 0 & m & 1\end{pmatrix}_{p\times n}.$$

    Set $\underline{\gamma}=\bar{\gamma}=3$, $\mu_0=2$, $\text{Maxiter}=10^3$, $\nu=1.2200$, $\sigma=2.6915$, $\rho=1.0001$ and $\kappa=0.711$. For the given positive integers $m,n$ and $s$, the data are generated by

    $$A=\text{randn}(m,n);\quad b=Ax^*+0.01\,\text{randn}(m,1).$$

    We run numerical experiments with $(m,n,p,s)=(40,40,40,2)$. Set $\lambda=10^{-3}$ in (4.2), $x^0=\text{ones}(n,1)$ and $\varepsilon=10^{-3}$. From Figures 4 and 5, it can be seen that the output of $x^k$ is very close to the true solution.

    Figure 5.  Numerical results of the SGD algorithm for Example 4.2.

    We have studied the composite sparse optimization problem consisting of the sum of a nonsmooth convex function and the $\ell_0$ penalty of a matrix times the coefficient vector. For the original cardinality penalty problem and an exact continuous relaxation with the capped-$\ell_1$ penalty, we proved several results: global minimizers of the relaxation problem are consistent with those of the original problem, and local minimizers of the relaxation problem are local minimizers of the original problem. We proposed the smoothing gradient descent (SGD) algorithm based on the smoothing method and investigated it from both theoretical and algorithmic points of view. We proved that any accumulation point of the sequence generated by the algorithm is a lifted stationary point of the relaxation problem. This explains, from the theoretical perspective, why the algorithm is expected to perform well, and our initial numerical results confirm the predicted theoretical behavior.

    Wei Yang: Conceptualization, writing-original draft, validation, software; Lili Pan: Conceptualization, supervision, funding acquisition, validation, software; Jinhui Wan: Software, methodology, data curation. All authors have read and approved the final version of the manuscript for publication.

    The work of the authors was supported by the National Natural Science Foundation of China grant 12271309.

    The authors declare no conflicts of interest.


