Editorial Special Issues

Special Issue: Alzheimer’s disease

  • Received: 20 January 2018 Accepted: 22 January 2018 Published: 26 January 2018
  • More than 45 million people worldwide have Alzheimer’s disease (AD), a deterioration of memory and other cognitive domains that leads to death within 3 to 9 years after diagnosis. The principal risk factor for AD is age. As the population ages, the prevalence is expected to approach 131 million cases worldwide by 2050. AD is therefore a global problem, a rapidly growing epidemic and a major threat to healthcare in our societies. It has been more than 20 years since it was first proposed that the neurodegeneration in AD may be caused by deposition of amyloid-β (Aβ) peptides in plaques in brain tissue. According to the amyloid hypothesis, accumulation of Aβ peptides, resulting from a chronic imbalance between Aβ production and Aβ clearance in the brain, is the primary influence driving AD pathogenesis. Currently available medications appear to produce moderate symptomatic benefits but do not stop disease progression. The search for biomarkers as well as novel therapeutic approaches for AD has been a major focus of research. Recent findings, however, show that neuronal-injury biomarkers are independent of Aβ, suggesting epigenetic modifications, gene-gene and/or gene-environment interactions in the disease etiology, and calling for reconsideration of the pathological cascade and assessment of alternative therapeutic strategies. In addition, recent results have been reported on the expression of the β-amyloid precursor protein (APP) gene, which yields various APP-mRNA isoforms, and on their quantification, especially for identifying the most abundant isoform that may be decisive for normal status or disease risk. As such, a more complete understanding of AD pathogenesis will likely require greater insights into the physiological function of APP.

    Citation: Khue Vu Nguyen. Special Issue: Alzheimer’s disease[J]. AIMS Neuroscience, 2018, 5(1): 74-80. doi: 10.3934/Neuroscience.2018.1.74



    Let $ \mathcal{D}\subset\Re^n $ be a nonempty, closed and convex set and $ f : \Re^n\rightarrow \Re^n $ be a given continuous and monotone mapping. The variational inequality problem, denoted by VI($ \mathcal{D}, f $), is to find a vector $ x^*\in \mathcal{D} $ such that

    $ \langle x-x^*, f(x^*)\rangle \geq 0, \quad \forall x\in\mathcal{D}. $ (1.1)

    The VI($ \mathcal{D}, f $) has found many important applications in areas such as nonlinear complementarity problems (where $ \mathcal{D} = \Re_{+}^{n} $) [1], traffic equilibrium and economic problems [2,3]. For recent applications and numerical methods of the VI($ \mathcal{D}, f $), we refer the reader to [4,5,6] and the references therein.
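As a tiny concrete check of the complementarity special case mentioned above ($ \mathcal{D} = \Re^n_+ $), the sketch below (in Python/NumPy rather than the paper's MATLAB; the data are made up for illustration) verifies that for the monotone mapping $ f(x) = x-a $ the vector $ x^* = \max(a, 0) $ satisfies the VI, i.e., the complementarity conditions:

```python
import numpy as np

# Toy VI over D = R^n_+ (the nonlinear complementarity case), with the
# monotone mapping f(x) = x - a. Its solution is x* = max(a, 0): each
# coordinate sits either at the unconstrained root (a_i > 0) or at the
# boundary 0 with f_i(x*) >= 0.
a = np.array([1.0, -1.0, 0.5])       # illustrative data
x_star = np.maximum(a, 0.0)          # candidate solution
f_star = x_star - a                  # f evaluated at the candidate

# The VI over R^n_+ is equivalent to complementarity:
# x* >= 0, f(x*) >= 0, and x*^T f(x*) = 0.
assert np.all(x_star >= 0)
assert np.all(f_star >= -1e-12)
assert abs(x_star @ f_star) < 1e-12
print("complementarity residual:", abs(x_star @ f_star))
```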

    In this paper, we consider a special case of the general VI problem, where the set $ \mathcal{D} $ is assumed to have the following form:

    $ \mathcal{D} = \{x\in\Re^n \ |\ x\in\mathcal{X}, \ Ax = b\}, $ (1.2)

    where $ A\in\Re^{m\times n} $ is a given matrix, $ b\in\Re^{m} $ is a given vector, and $ \mathcal{X}\subset\Re^n $ is a nonempty, closed and convex subset. The solution set of (1.1) and (1.2), denoted by $ \Omega^* $, is assumed to be nonempty. Note that the VI problem (1.1) and (1.2) is closely related to the convex optimization problem with linear equality constraints:

    $ \min\ \theta(x) \quad \text{s.t.}\quad Ax = b, \ x\in\mathcal{X}. $ (1.3)

    To see this, we recall the first-order optimality conditions of problem (1.3). Let $ x^* $ be a minimum point of the convex function $ \theta(x) $ over $ \mathcal{D} $ and $ \xi^* $ be any vector in $ \partial \theta(x^*) $, where $ \partial \theta(\cdot) $ denotes the subdifferential operator of $ \theta(\cdot) $. Then for any feasible direction $ d\in\Re^n $ at $ x^* $, we have $ (\xi^*)^Td \geq 0 $. In other words, the following variational inequality problem is obtained:

    $ \langle x-x^*, \xi^*\rangle \geq 0, \quad \forall x\in\mathcal{D}. $ (1.4)

    Thus, solving the VI problem (1.4) amounts to solving (1.3). This VI characterization is also used in, e.g., [7,8].

    The VI problem (1.1) and (1.2), or its equivalent form (1.3), is one of the fundamental problems in convex optimization. In particular, it includes linear programming, conic and semidefinite optimization as special cases. It finds many applications in compressed sensing, image processing and data mining, see, e.g., [9,10,11]. We refer to [12] for recent examples and discussions. To solve the VI problem (1.1) and (1.2), some proximal-like algorithms have been developed over the past years, see, e.g., [13] for a review. One benchmark method is the augmented Lagrangian method (ALM) [14,15] for nonlinear problems. The ALM is applicable to VI($ \mathcal{D}, f $) (1.1) and (1.2). More specifically, for a given $ u^{k} = (x^{k}, \lambda^{k}) $, ALM uses the following procedure to generate the new iterate $ u^{k+1} = (x^{k+1}, \lambda^{k+1})\in\mathcal{X}\times\Re^m $:

    Find $ {x}^{k+1}\in\mathcal{X} $ such that

    $ \langle x-x^{k+1}, f(x^{k+1})-A^T\lambda^k+\beta A^T(Ax^{k+1}-b)\rangle \geq 0, \quad \forall x\in\mathcal{X}, $ (1.5a)

    then update $ \lambda^{k+1} $ via

    $ \lambda^{k+1} = \lambda^k-\beta(Ax^{k+1}-b), $ (1.5b)

    where $ \beta > 0 $ is a given penalty parameter for the violation of the linear constraints. To make ALM (1.5) more efficient and flexible, some alternative strategies can be used. For example, self-adaptive rules can be applied to adjust the parameter $ \beta $ based on certain strategies [16,17,18,19]. We can also apply correction techniques to the output point [20,21]. Let $ \tilde{ u}^k $ denote the predictor generated by ALM. A simple and effective correction scheme is

    $ u^{k+1} = u^k-\alpha_k(u^k-\tilde{u}^k), $ (1.6)

    where $ 0 < \alpha_{k} < 2 $ is the step size parameter; see, e.g., [22,23] for details.
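To make the ALM procedure (1.5) concrete, here is a minimal sketch (Python/NumPy rather than the paper's MATLAB) on the toy problem $ \min \frac{1}{2}\|x-p\|^2 $ s.t. $ Ax = b $, i.e., $ f(x) = x-p $ and $ \mathcal{X} = \Re^n $, so that subproblem (1.5a) reduces to a linear solve. The data $ A, b, p $ are made up for illustration:

```python
import numpy as np

# ALM (1.5) on min (1/2)||x - p||^2 s.t. Ax = b, i.e. f(x) = x - p, X = R^n.
# With this f, subproblem (1.5a) reduces to the linear system
# (I + beta * A^T A) x = p + A^T lam + beta * A^T b.
A = np.array([[1.0, 1.0]])           # illustrative toy data
b = np.array([1.0])
p = np.array([2.0, 0.0])
beta = 1.0

x, lam = np.zeros(2), np.zeros(1)
K = np.eye(2) + beta * A.T @ A
for _ in range(100):
    x = np.linalg.solve(K, p + A.T @ lam + beta * A.T @ b)   # (1.5a)
    lam = lam - beta * (A @ x - b)                           # (1.5b)

# Analytic solution: x* = p + A^T (A A^T)^{-1} (b - A p) = (1.5, -0.5).
print(x, lam)
```

The dual error contracts geometrically here, so 100 iterations are far more than enough for this toy instance.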

    The main computational cost at each iteration of ALM is solving the subproblem (1.5a), which is still a variational inequality, structurally the same as the original problem. In many cases, the ALM subproblem (1.5a) is easy to solve only if $ A $ is an identity matrix: in that case, the solution of the subproblem (1.5a) corresponds to the proximal mapping of $ f $, which usually has a closed-form solution or can be efficiently computed to high precision. However, in many applications in sparse optimization, we often encounter the case where $ A $ is not an identity matrix, and the resulting subproblem (1.5a) no longer has a closed-form solution. In this case, solving the subproblem (1.5a) can be computationally intensive because of the costly evaluation of $ (A^TA+\frac{1}{\beta}f)^{-1}(A\upsilon) $. Therefore, efficient numerical methods with an implementable iterative scheme are highly desired.

    Recently, several techniques attempting to overcome this difficulty have been proposed. In the framework of the proximal point algorithm (PPA), there are two relevant approaches. The first one is regularization. By adding a customized regularization term to the saddle-point reformulation of (1.1) and (1.2), the primal subproblem becomes easy as it only involves a simple evaluation; see the customized PPA algorithms proposed in, e.g., [24,25,26,27]. We refer the reader to, e.g., [28,29] for the linearized regularization term for the separable case of problem (1.2). The second one employs a prediction-correction technique, which adds an asymmetrical proximal term to make a prediction and then introduces a simple correction step to guarantee convergence; see, e.g., [30,31,32]. In this paper, we propose a new prediction-correction method for the VI($ \mathcal{D}, f $). At each iteration, the method first performs a simple prediction step to obtain a point, and then generates the new iterate via a minor correction to the predictor. The resulting subproblems are easy to solve when the resolvent mapping of $ f $ can be efficiently evaluated. As shown in Section 5, the proposed method converges faster, requiring fewer iterations to achieve the same accuracy in most numerical cases.

    The rest of this paper is organized as follows. In Section 2, we review some preliminaries which are useful for further analysis. In Section 3, we present the implementable prediction-correction method for VI($ \mathcal{D}, f $). In Section 4, we establish the global convergence of the proposed method. The computational experiments are presented in Section 5. Finally, we conclude in Section 6.

    In this section, we reformulate the VI problem (1.1) and (1.2) in a succinct form. Let $ \Omega = \mathcal{X}\times\Re^m $. By attaching a Lagrange multiplier vector $ \lambda\in\Re^m $ to the linear constraints $ Ax = b $, the VI problem (1.1) and (1.2) can be converted to the following form:

    $ \left\{ \begin{array}{l} \langle x-x^*, f(x^*)-A^T\lambda^*\rangle \geq 0, \\ \langle \lambda-\lambda^*, Ax^*-b\rangle \geq 0, \end{array} \right. \quad \forall (x, \lambda)^T\in\Omega. $ (2.1)

    By denoting

    $ u: = \begin{pmatrix} x \\ \lambda \end{pmatrix}, \qquad F(u): = \begin{pmatrix} f(x)-A^T\lambda \\ Ax-b \end{pmatrix}, $ (2.2)

    we can rewrite (2.1) into the following more compact form

    $ \langle u-u^*, F(u^*)\rangle \geq 0, \quad \forall u\in\Omega. $ (2.3)

    Henceforth, we denote the VI problem (2.2) and (2.3) by VI($ \Omega, F $). Now, we make some basic assumptions and summarize some well-known results on VIs, which will be used in the subsequent analysis.

    ${\bf{ (A1)}} $ $ \mathcal{X} $ is a simple closed convex set.

    A set is said to be simple if the projection onto it can be easily obtained. Here, the projection of a point $ a $ onto the closed convex set $ \mathcal{X} $, denoted by $ P_{\mathcal{X}}(a) $, is defined as the nearest point $ x\in\mathcal{X} $ to $ a $, i.e.,

    $ P_{\mathcal{X}}(a) = \text{argmin}\{\| x-a\| \ |\ x\in\mathcal{X}\}. $
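Assumption (A1) only asks that $ P_{\mathcal{X}} $ be cheap to evaluate. A few standard closed-form projections, sketched in Python/NumPy for the Euclidean case $ G = I $ (illustrative, not from the paper):

```python
import numpy as np

# Projections onto a few "simple" sets in the sense of (A1): each has a
# cheap closed form under the Euclidean norm (G = I).
def proj_box(a, lo, hi):             # X = {x : lo <= x <= hi}
    return np.clip(a, lo, hi)

def proj_nonneg(a):                  # X = R^n_+
    return np.maximum(a, 0.0)

def proj_ball(a, radius=1.0):        # X = {x : ||x|| <= radius}
    n = np.linalg.norm(a)
    return a if n <= radius else (radius / n) * a

a = np.array([2.0, -0.5])
print(proj_box(a, 0.0, 1.0))         # componentwise clipping
print(proj_nonneg(a))
print(proj_ball(a, 1.0))             # a rescaled onto the unit sphere
```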

    $ {\bf{(A2)}} $ The mapping $ f $ is point-to-point, monotone and continuous.

    A mapping $ F $ is said to be monotone on $ \Omega $ if

    $ \langle u-\upsilon, F(u)-F(\upsilon)\rangle \geq 0, \quad \forall u, \upsilon\in\Omega. $ (2.4)

    $ {\bf{(A3)}} $ The solution set of (2.2) and (2.3), denoted by $ \Omega^* $, is nonempty.

    Remark 1. The mapping $ F(\cdot) $ defined in (2.2) is monotone with respect to $ \Omega $ since

    $ (F(u)-F(\tilde{u}))^T(u-\tilde{u})\geq 0, \quad \forall \ u, \tilde{u}\in\Omega. $ (2.5)

    Proof.

    $ (F(u)-F(\tilde{u}))^T(u-\tilde{u}) = \begin{pmatrix} f(x)-f(\tilde{x})-A^T(\lambda-\tilde{\lambda}) \\ A(x-\tilde{x}) \end{pmatrix}^T \begin{pmatrix} x-\tilde{x} \\ \lambda-\tilde{\lambda} \end{pmatrix} = (f(x)-f(\tilde{x}))^T(x-\tilde{x})\geq 0, $ (2.6)

    where the last inequality follows from the monotone property of $ f $.

    Let $ G $ be a symmetric positive definite matrix. The $ G $-norm of the vector $ u $ is denoted by $ \| u\|_G: = \sqrt{\langle u, G u\rangle} $. In particular, when $ G = I $, $ \| u\| : = \sqrt{\langle u, u\rangle} $ is the Euclidean norm of $ u $. For a matrix $ A $, $ \|A\| $ denotes its norm $ \|A\|: = \text{max}\{\|Ax\|: \|x\|\leq 1\} $.

    The following well-known properties of the projection operator will be used in the coming analysis. The proofs can be found in textbooks, e.g., [2,33].

    Lemma 1. Let $ \mathcal{X}\subset\Re^n $ be a nonempty closed convex set, and let $ P_{\mathcal{X}}(\cdot) $ be the projection operator onto $ \mathcal{X} $ under the $ G $-norm. Then

    $ \langle y-P_{\mathcal{X}}(y), G(x-P_{\mathcal{X}}(y))\rangle \leq 0, \quad \forall y\in\Re^n, \ \forall x\in\mathcal{X}. $ (2.7)
    $ \|P_{\mathcal{X}}(x)-P_{\mathcal{X}}(y)\|_G \leq \|x-y\|_G, \quad \forall x, y\in\Re^n. $ (2.8)
    $ \|x-P_{\mathcal{X}}(y)\|_G^2 \leq \|x-y\|_G^2-\|y-P_{\mathcal{X}}(y)\|_G^2, \quad \forall y\in\Re^n, \ \forall x\in\mathcal{X}. $ (2.9)

    For any arbitrary positive scalar $ \beta $ and $ u\in\Omega $, let $ e(u, \beta) $ denote the residual function associated with the mapping $ F $, i.e.,

    $ e(u, \beta) = u-P_{\Omega}[u-\beta F(u)]. $ (2.10)

    Lemma 2. Solving VI($ \Omega, F $) is equivalent to finding the zero point of the mapping

    $ e(u, \beta): = \begin{pmatrix} e_1(u, \beta) \\ e_2(u, \beta) \end{pmatrix} = \begin{pmatrix} x-P_{\mathcal{X}}\{x-\beta[f(x)-A^T\lambda]\} \\ \beta(Ax-b) \end{pmatrix}. $ (2.11)

    Proof. See [2], p. 267.
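As a sanity check of Lemma 2, the sketch below (Python/NumPy, illustrative data) evaluates the residual $ e(u, \beta) $ of (2.11) for the toy problem $ \min \frac{1}{2}\|x-p\|^2 $ s.t. $ Ax = b $ with $ \mathcal{X} = \Re^n $ (so $ P_{\mathcal{X}} $ is the identity), at its analytic KKT point, where the residual must vanish:

```python
import numpy as np

# Residual mapping e(u, beta) of (2.11) for min (1/2)||x - p||^2 s.t. Ax = b,
# with X = R^n so P_X is the identity. At a solution u* = (x*, lambda*) the
# residual vanishes (Lemma 2); away from it, it does not.
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); p = np.array([2.0, 0.0])
f = lambda x: x - p

def residual(x, lam, beta=1.0):
    e1 = x - (x - beta * (f(x) - A.T @ lam))   # P_X = identity here
    e2 = beta * (A @ x - b)
    return np.concatenate([e1, e2])

# Analytic KKT point: x* = (1.5, -0.5), lambda* = -0.5
# (check: f(x*) = x* - p = A^T lambda*, and A x* = b).
x_star = np.array([1.5, -0.5]); lam_star = np.array([-0.5])
print(np.linalg.norm(residual(x_star, lam_star)))       # vanishes
print(np.linalg.norm(residual(np.zeros(2), np.zeros(1))))  # does not
```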

    We now formally present the new prediction-correction method for VI($ \Omega, F $) (Algorithm 1). The method can be viewed as a generalization of [31] to the relaxed case.

    Algorithm 1: A prediction-correction based method (PCM) for VI($ \Omega, F $).
    Step 0: Initialization step.
    Given a small number $ \epsilon > 0 $. Take $ \gamma\in(0, 2) $, $ u^0\in\Re^{n+m} $; set $ k = 0 $. Choose the parameters $ r > 0, s > 0 $, such that
        $rs > \frac{1}{4}\|A^TA\|.$     (3.1)
    Step 1: Prediction step.
    Generate the predictor $ \tilde{x}^k $ via solving the following projection equation:
        $\tilde{x}^k = P_{\mathcal{X}}[x^{k}-\frac{1}{r}(f(\tilde{x}^k)-A^T\lambda^k)]. $    (3.2a)
    Then update $ \tilde{\lambda}^k\in\Re^m $ via
        $\tilde{\lambda}^k = \lambda^k-\frac{1}{s}(A\tilde{x}^k-b).$    (3.2b)
    Step 2: Correction step.
    Generate the new iteration $ u^{k+1} = (x^{k+1}, \lambda^{k+1}) $ via
        $ x^{k+1} = x^k-\gamma(x^k-\tilde{x}^k)-\frac{\gamma}{2r}A^T(\lambda^k-\tilde{\lambda}^k), $    (3.3a)
    and
        $\lambda^{k+1} = \lambda^k+\frac{\gamma}{2s}A(x^k-\tilde{x}^k)-\gamma(I-\frac{1}{2rs}AA^T)(\lambda^k-\tilde{\lambda}^k). $    (3.3b)

    Step 3: Convergence verification.
    If $ \|u^{k+1}-u^k\|\leq \epsilon $, stop; otherwise set $ k: = k+1 $; go to Step 1.
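The two-step structure of Algorithm 1 can be sketched generically as follows (Python/NumPy; the helper names are mine, not the paper's). The prediction (3.2a) is supplied through the resolvent of $ f $, assuming $ \mathcal{X} = \Re^n $, and the sketch is then run on a toy quadratic whose resolvent is available in closed form:

```python
import numpy as np

# A minimal sketch of Algorithm 1 (PCM). The prediction (3.2a) is evaluated
# through the resolvent (I + (1/r) f)^{-1}, supplied by the caller, so X = R^n
# here. Function names are illustrative only.
def pcm(resolvent, A, b, x0, lam0, r, s, gamma=1.0, iters=200):
    x, lam = x0.copy(), lam0.copy()
    I_m = np.eye(A.shape[0])
    for _ in range(iters):
        xt = resolvent(x + (1.0 / r) * (A.T @ lam))            # (3.2a)
        lamt = lam - (1.0 / s) * (A @ xt - b)                  # (3.2b)
        dx, dlam = x - xt, lam - lamt
        x = x - gamma * dx - (gamma / (2 * r)) * (A.T @ dlam)              # (3.3a)
        lam = lam + (gamma / (2 * s)) * (A @ dx) \
                  - gamma * (I_m - (A @ A.T) / (2 * r * s)) @ dlam         # (3.3b)
    return x, lam

# Toy problem: min (1/2)||x - p||^2 s.t. Ax = b, i.e. f(x) = x - p, whose
# resolvent is closed-form: (I + (1/r) f)^{-1}(v) = (v + p/r) / (1 + 1/r).
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); p = np.array([2.0, 0.0])
r = s = 1.0                            # rs = 1 > ||A^T A||/4 = 0.5, cf. (3.1)
res = lambda v: (v + p / r) / (1.0 + 1.0 / r)
x, lam = pcm(res, A, b, np.zeros(2), np.zeros(1), r, s)
print(x, lam)   # converges to the analytic solution (1.5, -0.5), -0.5
```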

    Remark 2. Note that the regularized projection Eq (3.2a) amounts to solving the following VI problem

    $ \langle x-\tilde{x}^k, \frac{1}{r}(f(\tilde{x}^k)-A^T\lambda^k)+(\tilde{x}^k-x^k)\rangle \geq 0, \quad \forall x\in\mathcal{X}, $ (3.4)

    which amounts to finding $ \tilde{x}^k\in\mathcal{X} $ such that

    $ 0\in \tilde{x}^k+\frac{1}{r}f(\tilde{x}^k)-\left(\frac{1}{r}A^T\lambda^k+x^k\right). $ (3.5)

    We can rewrite the above equation as

    $ \tilde{x}^k = \left(I+\frac{1}{r}f\right)^{-1}\left(\frac{1}{r}A^T\lambda^k+x^k\right). $ (3.6)

    Thus, the subproblem (3.2a) is equivalent to an evaluation of the resolvent operator $ (I+\frac{1}{r}f)^{-1} $ when $ \mathcal{X} $ is the whole Euclidean space $ \Re^n $ [34]. Notice that ALM (1.5a) needs to evaluate the operator $ (A^TA+\frac{1}{\beta}f)^{-1} $. Therefore, the resulting subproblem (3.2a) can be much easier to solve than the ALM subproblem (1.5a). On the other hand, the correction step only consists of some elementary manipulations. Thus, the resulting method (3.2a)–(3.3b) is easily implementable.

    Remark 3. The parameters $ \frac{1}{r} $ and $ \frac{1}{s} $ in the prediction step can be viewed as two step sizes for the projection step (3.2a) and the dual step, respectively. Under the step size condition $ rs > \frac{1}{4}\|A^TA\| $ of this algorithm, the parameters $ \frac{1}{r} $ and $ \frac{1}{s} $ can take larger values than under the condition $ rs > \|A^TA\| $ of some other primal-dual algorithms, e.g., the linearized ALM and customized PPA algorithms. Such larger step sizes are usually beneficial to the effectiveness and robustness of the algorithm. In Section 5 we will empirically see that our algorithm is significantly faster than some other primal-dual algorithms by allowing larger step sizes.

    In the following, we focus our attention on solving VI($ \Omega, F $). To make the notation of the proof more succinct, we first define the matrices:

    $ G = \begin{pmatrix} rI & \frac{1}{2}A^T \\ \frac{1}{2}A & sI \end{pmatrix}, \qquad Q = \begin{pmatrix} rI & A^T \\ 0 & sI \end{pmatrix}. $ (4.1)

    Obviously, when $ rs > \frac{1}{4}\|A^TA\| $, $ G $ is a positive definite matrix. Now, we start to prove the global convergence of the sequence $ \{ u^k\} $. Towards this end, we follow the work [35] to reformulate the algorithm as a prediction-correction method and establish its convergence results. We first prove some lemmas. The first lemma quantifies the discrepancy between the point $ \tilde{ u}^k $ and a solution point of VI($ \Omega, F $).

    Lemma 3. Let $ \{u^k\} $ be generated by the PCM and $ \{\tilde{u}^k\} $ be generated by PCM (3.2), and the matrix $ Q $ be given in (4.1). Then we have

    $ \langle u-\tilde{u}^k, F(\tilde{u}^k)-Q(u^k-\tilde{u}^k)\rangle \geq 0, \quad \forall u\in\Omega. $ (4.2)

    Proof. Note that the iterates $ \tilde{u}^k $ generated by (3.2) are actually solutions of the following VIs:

    $ \langle x-\tilde{x}^k, f(\tilde{x}^k)-A^T\lambda^k-r(x^k-\tilde{x}^k)\rangle \geq 0, \quad \forall x\in\mathcal{X}, $ (4.3)

    and

    $ \langle \lambda-\tilde{\lambda}^k, A\tilde{x}^k-b-s(\lambda^k-\tilde{\lambda}^k)\rangle \geq 0, \quad \forall \lambda\in\Re^m. $ (4.4)

    Combining (4.3) and (4.4) yields

    $ \left\{ \begin{array}{l} \langle x-\tilde{x}^k, f(\tilde{x}^k)-A^T\tilde{\lambda}^k-A^T(\lambda^k-\tilde{\lambda}^k)-r(x^k-\tilde{x}^k)\rangle \geq 0, \\ \langle \lambda-\tilde{\lambda}^k, A\tilde{x}^k-b-s(\lambda^k-\tilde{\lambda}^k)\rangle \geq 0, \end{array} \right. \quad \forall u\in\Omega. $ (4.5)

    It can be rewritten as

    $ \left\langle \begin{pmatrix} x-\tilde{x}^k \\ \lambda-\tilde{\lambda}^k \end{pmatrix}, \ \begin{pmatrix} f(\tilde{x}^k)-A^T\tilde{\lambda}^k \\ A\tilde{x}^k-b \end{pmatrix}-\begin{pmatrix} rI & A^T \\ 0 & sI \end{pmatrix} \begin{pmatrix} x^k-\tilde{x}^k \\ \lambda^k-\tilde{\lambda}^k \end{pmatrix} \right\rangle \geq 0, \quad \forall u\in\Omega. $ (4.6)

    Using the notation of $ F(u) $ in (2.2) and $ Q $ in (4.1), the assertion (4.2) is proved.

    The following lemma characterizes the correction step by a matrix form.

    Lemma 4. Let $ \{u^k\} $ be generated by the PCM and $ \{\tilde{u}^k\} $ be generated by PCM (3.2). Then we have

    $ u^k-u^{k+1} = \gamma M(u^k-\tilde{u}^k), $ (4.7)

    where

    $ M = \begin{pmatrix} I & \frac{1}{2r}A^T \\ -\frac{1}{2s}A & I-\frac{AA^T}{2rs} \end{pmatrix}. $ (4.8)

    Proof. From the correction Step (3.3a) and (3.3b), we have

    $ \begin{pmatrix} x^{k+1} \\ \lambda^{k+1} \end{pmatrix} = \begin{pmatrix} x^k \\ \lambda^k \end{pmatrix}-\gamma \begin{pmatrix} (x^k-\tilde{x}^k)+\frac{A^T}{2r}(\lambda^k-\tilde{\lambda}^k) \\ -\frac{1}{2s}A(x^k-\tilde{x}^k)+(I-\frac{1}{2rs}AA^T)(\lambda^k-\tilde{\lambda}^k) \end{pmatrix}, $ (4.9)

    which can be written as

    $ \begin{pmatrix} x^k-x^{k+1} \\ \lambda^k-\lambda^{k+1} \end{pmatrix} = \gamma \begin{pmatrix} I & \frac{1}{2r}A^T \\ -\frac{1}{2s}A & I-\frac{AA^T}{2rs} \end{pmatrix} \begin{pmatrix} x^k-\tilde{x}^k \\ \lambda^k-\tilde{\lambda}^k \end{pmatrix}. $ (4.10)

    By noting the matrix $ M $ (4.8), the proof is completed.

    Using the matrices $ Q $ (4.1) and $ M $ (4.8), we define

    $ H = QM^{-1}, $ (4.11)

    then we have $ Q = HM $. The inequality (4.2) can be written as

    $ \gamma\langle u-\tilde{u}^k, F(\tilde{u}^k)-HM(u^k-\tilde{u}^k)\rangle \geq 0, \quad \forall u\in\Omega. $ (4.12)

    Substituting (4.7) into (4.12) and using the monotonicity of $ F $ (see (2.5)), we obtain

    $ \tilde{u}^k\in\Omega, \quad \gamma\langle u-\tilde{u}^k, F(u)\rangle \geq \langle u-\tilde{u}^k, H(u^k-u^{k+1})\rangle, \quad \forall u\in\Omega. $ (4.13)

    Now, we prove a simple fact for the matrix $ H $ in the following lemma.

    Lemma 5. The matrix $ H $ defined in (4.11) is positive definite for any $ r > 0, s > 0 $, $ rs > \frac{1}{4}\|A^TA\| $.

    Proof. For the matrix $ Q $ defined by (4.1), we have

    $ Q^{-1} = \begin{pmatrix} \frac{1}{r}I & -\frac{1}{rs}A^T \\ 0 & \frac{1}{s}I \end{pmatrix}. $

    Thus, it follows from (4.1) that

    $ H^{-1} = MQ^{-1} = \begin{pmatrix} I & \frac{1}{2r}A^T \\ -\frac{1}{2s}A & I-\frac{AA^T}{2rs} \end{pmatrix} \begin{pmatrix} \frac{1}{r}I & -\frac{1}{rs}A^T \\ 0 & \frac{1}{s}I \end{pmatrix} = \begin{pmatrix} \frac{1}{r}I & -\frac{1}{2rs}A^T \\ -\frac{1}{2rs}A & \frac{1}{s}I \end{pmatrix}. $ (4.14)

    For any $ r > 0, s > 0 $ satisfying $ rs > \frac{1}{4}\|A^TA\| $, $ H^{-1} $ is positive definite, and the positive definiteness of $ H $ follows directly.

    Then we show how to deal with the right-hand side of (4.13). The following lemma gives an equivalent expression of it in terms of $ \|u-u^k\|_H $ and $ \|u-u^{k+1}\|_H $.

    Lemma 6. Let $ \{u^k\} $ be generated by the PCM and $ \{\tilde{u}^k\} $ be generated by PCM (3.2). Then we have

    $ \langle u-\tilde{u}^k, H(u^k-u^{k+1})\rangle = \frac{1}{2}\left(\|u-u^{k+1}\|_H^2-\|u-u^k\|_H^2\right)+\frac{1}{2}\|u^k-\tilde{u}^k\|_G^2, \quad \forall u\in\Omega, $ (4.15)

    where the matrix $ G = \gamma(Q^T+Q)-\gamma^2M^THM $.

    Proof. Applying the identity

    $ \langle a-b, H(c-d)\rangle = \frac{1}{2}\left(\|a-d\|_H^2-\|a-c\|_H^2\right)+\frac{1}{2}\left(\|c-b\|_H^2-\|d-b\|_H^2\right), $ (4.16)

    to the right term of (4.13) with $ a = u, b = \tilde{u}^k, c = u^k, d = u^{k+1} $, we obtain

    $ \langle u-\tilde{u}^k, H(u^k-u^{k+1})\rangle = \frac{1}{2}\left(\|u-u^{k+1}\|_H^2-\|u-u^k\|_H^2\right)+\frac{1}{2}\left(\|u^k-\tilde{u}^k\|_H^2-\|u^{k+1}-\tilde{u}^k\|_H^2\right). $ (4.17)

    For the last term of (4.17), we have

    $ \begin{array}{rl} \|u^k-\tilde{u}^k\|_H^2-\|u^{k+1}-\tilde{u}^k\|_H^2 & = \|u^k-\tilde{u}^k\|_H^2-\|(u^k-\tilde{u}^k)-(u^k-u^{k+1})\|_H^2 \\ & \overset{(4.7)}{=} \|u^k-\tilde{u}^k\|_H^2-\|(u^k-\tilde{u}^k)-\gamma M(u^k-\tilde{u}^k)\|_H^2 \\ & = 2\gamma\langle u^k-\tilde{u}^k, HM(u^k-\tilde{u}^k)\rangle-\gamma^2\langle u^k-\tilde{u}^k, M^THM(u^k-\tilde{u}^k)\rangle \\ & \overset{(4.11)}{=} \langle u^k-\tilde{u}^k, (\gamma(Q^T+Q)-\gamma^2M^THM)(u^k-\tilde{u}^k)\rangle, \end{array} $ (4.18)

    and the assertion is proved.

    Now, we investigate the positive definiteness of the matrix $ G $. Using (4.11), we have

    $ \begin{array}{rl} G & = \gamma(Q^T+Q)-\gamma^2M^THM = \gamma(Q^T+Q)-\gamma^2M^TQ \\ & = \gamma\begin{pmatrix} 2rI & A^T \\ A & 2sI \end{pmatrix}-\gamma^2\begin{pmatrix} I & -\frac{1}{2s}A^T \\ \frac{1}{2r}A & I-\frac{AA^T}{2rs} \end{pmatrix}\begin{pmatrix} rI & A^T \\ 0 & sI \end{pmatrix} \\ & = \gamma\begin{pmatrix} 2rI & A^T \\ A & 2sI \end{pmatrix}-\gamma^2\begin{pmatrix} rI & \frac{1}{2}A^T \\ \frac{1}{2}A & sI \end{pmatrix} = (2\gamma-\gamma^2)\begin{pmatrix} rI & \frac{1}{2}A^T \\ \frac{1}{2}A & sI \end{pmatrix}. \end{array} $ (4.19)

    Thus, when $ rs > \frac{1}{4}\|A^TA\| $ and $ \gamma\in(0, 2) $, the matrix $ G $ is guaranteed to be positive definite, and we easily obtain the contraction property of the algorithm. This is given by the following theorem.
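The matrix algebra above is easy to verify numerically. The sketch below (Python/NumPy, random illustrative data) checks the closed form (4.14) of $ H^{-1} $, the positive definiteness of $ H $, and the factorization (4.19) of $ G $:

```python
import numpy as np

# Numerical check of the matrix identities of this section on random data:
# H^{-1} = M Q^{-1} has the closed form (4.14), H is positive definite, and
# gamma(Q^T+Q) - gamma^2 M^T H M collapses to (2*gamma - gamma^2) * G0, cf. (4.19).
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
r = s = 1.01 * np.sqrt(np.linalg.norm(A.T @ A, 2)) / 2    # rs > ||A^T A||/4
gamma = 1.2

In, Im = np.eye(n), np.eye(m)
Q = np.block([[r * In, A.T], [np.zeros((m, n)), s * Im]])           # (4.1)
M = np.block([[In, A.T / (2 * r)],
              [-A / (2 * s), Im - (A @ A.T) / (2 * r * s)]])        # (4.8)
H = Q @ np.linalg.inv(M)                                            # (4.11)

Hinv_closed = np.block([[In / r, -A.T / (2 * r * s)],
                        [-A / (2 * r * s), Im / s]])                # (4.14)
assert np.allclose(np.linalg.inv(H), Hinv_closed)
assert np.min(np.linalg.eigvalsh((H + H.T) / 2)) > 0                # H pos. def.

G0 = np.block([[r * In, A.T / 2], [A / 2, s * Im]])
G = gamma * (Q.T + Q) - gamma**2 * M.T @ H @ M                      # Lemma 6
assert np.allclose(G, (2 * gamma - gamma**2) * G0)                  # (4.19)
print("matrix identities verified")
```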

    Theorem 1. Suppose the condition

    $ rs > \frac{1}{4}\|A^TA\| $ (4.20)

    holds. Let the relaxation factor $ \gamma\in(0, 2) $. Then, for any $ u^{*} = (x^{*}, \lambda^{*})^T\in\Omega^* $, the sequence $ \{u^{k}\} $ generated by PCM satisfies the following inequality:

    $ \|u^{k+1}-u^*\|_H^2 \leq \|u^k-u^*\|_H^2-\|u^k-\tilde{u}^k\|_G^2. $ (4.21)

    Proof.

    Combining (4.13) and (4.15), we have

    $ \tilde{u}^k\in\Omega, \quad \gamma\langle u-\tilde{u}^k, F(u)\rangle \geq \frac{1}{2}\left(\|u-u^{k+1}\|_H^2-\|u-u^k\|_H^2\right)+\frac{1}{2}\|u^k-\tilde{u}^k\|_G^2, \quad \forall u\in\Omega. $ (4.22)

    Note that $ u^*\in\Omega $. We get

    $ \|u^k-u^*\|_H^2-\|u^{k+1}-u^*\|_H^2 \geq \|u^k-\tilde{u}^k\|_G^2+2\gamma\langle \tilde{u}^k-u^*, F(u^*)\rangle. $

    Since $ u^* $ is a solution of (2.3) and $ \tilde{ u}^k\in\Omega $, we have

    $ \langle \tilde{u}^k-u^*, F(u^*)\rangle \geq 0, $ (4.23)

    and thus

    $ \|u^k -u^*\|_H^2 - \|u^{k+1}-u^*\|_H^2 \ge \|u^k-\tilde{u}^k\|_G^2. $

    The assertion (4.21) follows directly.

    Based on the above results, we are now ready to prove the global convergence of the algorithm.

    Theorem 2. Let $ \{u^k\} $ be the sequence generated by PCM for the VI problem (2.2) and (2.3). Then, for any $ r > 0, s > 0 $ satisfying $ rs > \frac{1}{4}\|A^TA\| $ and $ \gamma\in(0, 2) $, the sequence $ \{ u^{k}\} $ converges to a solution of VI($ \Omega, F $).

    Proof. We follow the standard analytic framework of contraction-type methods to prove the convergence of the proposed algorithm. It follows from Theorem 1 that $ \{ u^{k}\} $ is bounded. Then we have

    $ \lim\limits_{k\rightarrow\infty}\|u^k-\tilde{u}^k\|_G = 0. $ (4.24)

    Consequently,

    $ \lim\limits_{k\rightarrow\infty}\|x^k-\tilde{x}^k\| = 0, $ (4.25)

    and

    $ \lim\limits_{k\rightarrow\infty}\|A\tilde{x}^k-b\| = \lim\limits_{k\rightarrow\infty}s\|\lambda^k-\tilde{\lambda}^k\| = 0. $ (4.26)

    Since

    $ \tilde{x}^k = P_{\mathcal{X}}\left[\tilde{x}^k-\frac{1}{r}(f(\tilde{x}^k)-A^T\tilde{\lambda}^k)+(x^k-\tilde{x}^k)+\frac{1}{r}A^T(\lambda^k-\tilde{\lambda}^k)\right], $ (4.27)

    and

    $ \tilde{\lambda}^k = \lambda^k-\frac{1}{s}(A\tilde{x}^k-b), $ (4.28)

    it follows from (4.25) and (4.26) that

    $ \lim\limits_{k\rightarrow\infty} \tilde{x}^k-P_{\mathcal{X}}[\tilde{x}^k-\frac{1}{r}(f(\tilde{x}^k)-A^T\tilde{\lambda}^k)] = 0, $ (4.29a)
    $ \lim\limits_{k\rightarrow\infty} A\tilde{x}^k-b = 0. $ (4.29b)

    Because $ \{\tilde{u}^k\} $ is also bounded, it has at least one cluster point. Let $ u^\infty $ be a cluster point of $ \{\tilde{u}^k\} $ and let $ \{\tilde{u}^{k_j}\} $ be a subsequence converging to $ u^\infty $. It follows from (4.29) that

    $ \lim\limits_{j\rightarrow\infty} \tilde{x}^{k_j}-P_{\mathcal{X}}[\tilde{x}^{k_j}-\frac{1}{r}(f(\tilde{x}^{k_j})-A^T\tilde{\lambda}^{k_j})] = 0, $ (4.30a)
    $ \lim\limits_{j\rightarrow\infty} A\tilde{x}^{k_j}-b = 0. $ (4.30b)

    Consequently, we have

    $ x^\infty-P_{\mathcal{X}}[x^\infty-\frac{1}{r}(f(x^\infty)-A^T\lambda^\infty)] = 0, $ (4.31a)
    $ Ax^\infty-b = 0. $ (4.31b)

    Using the continuity of $ F $ and of the projection operator $ P_{\Omega}(\cdot) $, we conclude that $ u^\infty $ is a solution of VI($ \Omega, F $). On the other hand, by taking limits over the subsequence in (4.24) and using $ \lim_{j\rightarrow \infty} \tilde{u}^{k_j} = u^\infty $, we have that, for any $ k > k_j $,

    $ \|u^{k}-u^\infty\|_H\leq \|u^{k_j}-u^\infty\|_H. $

    Thus, the sequence $ \{ u^{k}\} $ converges to $ u^\infty $, which is a solution of VI($ \Omega, F $).

    In this section, we test the performance of PCM (3.2a)–(3.3b) for solving the basis pursuit (BP) and matrix completion problems. All the simulations are performed on a Lenovo laptop with an Intel CPU at 2.81 GHz and 16 GB of RAM, using Matlab R2015b.

    The BP problem arises in areas such as information theory, signal processing, statistics and machine learning. It seeks to encode a large sparse signal through a relatively small number of linear measurements. The BP problem can be cast as the following equality-constrained $ l_1 $ minimization problem

    $ \min \|x\|_1 \quad \text{s.t.}\quad Ax = b, $ (5.1)

    where $ x\in\Re^n $, the data matrix $ A\in\Re^{m\times n} $ with $ m < n $, and $ b\in\Re^m $. Here we assume that $ A $ has full row rank. By invoking the first-order optimality condition, BP is equivalent to the VI (1.1) and (1.2) with $ f(x) = \partial\|x\|_1 $. Applying PCM with $ \gamma = 1 $ to this problem, we get the following iterative scheme:

    $ \tilde{x}^k\in P_{\Re^n}[x^{k}-\frac{1}{r}(\partial\|\tilde{x}^k\|_1-A^T\lambda^k)], $ (5.2a)
    $ \tilde{\lambda}^k = \lambda^k-\frac{1}{s}(A\tilde{x}^k-b), $ (5.2b)
    $ x^{k+1} = \tilde{x}^k-\frac{1}{2r}A^T(\lambda^k-\tilde{\lambda}^k), $ (5.2c)
    $ \lambda^{k+1} = \lambda^k+\frac{1}{2s}A(x^k-\tilde{x}^k)-(I-\frac{AA^T}{2rs})(\lambda^k-\tilde{\lambda}^k). $ (5.2d)

    Note that the projection (5.2a) is equivalent to the $ l_1 $ shrinkage operation:

    $ \tilde{x}^k : = \mbox{Shrink}(x^k+\frac{1}{r}A^T\lambda^k, \frac{1}{r}), $

    where the $ l_1 $ shrinkage operator, denoted by $ \mbox{Shrink}(M, \xi) $, is defined as

    $ [\mbox{Shrink}(M, \xi)]_{i}: = \left\{ \begin{array}{ll} M_i-\xi, & \text{if } M_i > \xi, \\ M_i+\xi, & \text{if } M_i < -\xi, \\ 0, & \text{if } |M_i|\leq\xi, \end{array} \right. \qquad i = 1, 2, \dots, n. $
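In Python/NumPy the shrinkage operator admits a one-line sketch equivalent to the piecewise definition above:

```python
import numpy as np

# The l1 shrinkage (soft-thresholding) operator: componentwise, values within
# [-xi, xi] are set to zero and the rest are pulled toward zero by xi.
def shrink(v, xi):
    return np.sign(v) * np.maximum(np.abs(v) - xi, 0.0)

print(shrink(np.array([3.0, -2.0, 0.5]), 1.0))   # [ 2. -1.  0.]
```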

    In our experiments, we focus on comparing our algorithm with the linearized ALM [36] (L-ALM) and the customized PPA (C-PPA) in [27] and on verifying its efficiency. Similar to PCM, L-ALM and C-PPA also rely on the Shrink operation and are equally easy to implement.

    The data used in this experiment are similar to those employed in [37]. The basic setup of the problem is as follows. The data matrix $ A $ is an i.i.d. standard Gaussian matrix generated by the randn(m, n) function in MATLAB with $ m = n/2 $. The original sparse signal $ x_{\mbox{true}} $ is sampled from i.i.d. standard Gaussian distributions with $ m/5 $ nonzero values. The observation is then created as $ b = Ax_{\mbox{true}} $. It is desirable to test problems that have a precisely known solution; in fact, when the original signal $ x_{\mbox{true}} $ is sufficiently sparse, it is the unique solution of the $ l_1 $ minimization problem. The parameters used in the numerical experiments are similar to those in [19,38]: we set the relaxation factor $ \gamma = 1 $, $ s\in (25,100) $, $ r = 1.01\|A^TA\|/(4s) $ for PCM and $ s\in(25,100) $, $ r = 1.01\|A^TA\|/s $ for C-PPA. (In order to ensure the convergence of L-ALM and C-PPA, the parameters $ r, s $ should satisfy $ rs > \|A^TA\| $.) We set the criterion error as $ \min\{ \text{rel}_L = \|x^k-x^{k-1}\|, \ \text{rel}_S = \|Ax^k-b\|\} $, and declare successful recovery when this error is less than $ Tol = 10^{-3} $. In all the tests, the initial iterate is $ (x^0, \lambda^0) = (\textbf{0}, \textbf{1}) $.
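The setup above can be sketched at toy scale as follows (Python/NumPy rather than MATLAB; the problem size, the value of $ s $, and the fixed iteration count are chosen for illustration and differ from the paper's experiments), running scheme (5.2) and checking feasibility:

```python
import numpy as np

# Toy-scale sketch of the BP experiment: Gaussian A (m = n/2), sparse x_true
# with m/5 nonzeros, b = A @ x_true, solved by scheme (5.2) with
# soft-thresholding as the resolvent of (1/r)||.||_1.
rng = np.random.default_rng(1)
n = 40; m = n // 2
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, m // 5, replace=False)
x_true[support] = rng.standard_normal(m // 5)
b = A @ x_true

s = 10.0                                           # scaled down for toy size
r = 1.01 * np.linalg.norm(A.T @ A, 2) / (4 * s)    # so that rs > ||A^T A||/4
shrink = lambda v, xi: np.sign(v) * np.maximum(np.abs(v) - xi, 0.0)

x, lam = np.zeros(n), np.ones(m)                   # initial iterate (0, 1)
Im = np.eye(m)
for _ in range(20000):
    xt = shrink(x + A.T @ lam / r, 1.0 / r)                    # (5.2a)
    lamt = lam - (A @ xt - b) / s                              # (5.2b)
    dlam = lam - lamt
    x_new = xt - A.T @ dlam / (2 * r)                          # (5.2c)
    lam = lam + A @ (x - xt) / (2 * s) \
              - (Im - (A @ A.T) / (2 * r * s)) @ dlam          # (5.2d)
    x = x_new
print("feasibility ||Ax - b|| =", np.linalg.norm(A @ x - b))
```

Note that (5.2d) uses the uncorrected $ x^k $, so the dual update must be computed before overwriting the primal variable.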

    We first test the sensitivity of PCM to $ \gamma $. We fix $ s = 50 $ and choose different values of $ \gamma $ in the interval $ [0.6, 1.8] $. The number of iterations required is reported in Figure 1. The curves in Figure 1 indicate that $ \gamma\in (1, 1.4) $ is preferable when we implement Algorithm PCM in practice.

    Figure 1.  Sensitivity test on the relaxation factor $ \gamma $.

    In order to investigate the stability and efficiency of our algorithm, we test $ 8 $ groups of problems with different $ n $; each model was generated 10 times and the average results are reported. The comparisons of these algorithms for small BP problems are presented in Tables 1–3.

    Table 1.  Numerical Results for Basis Pursuit problem $ s = 25 $.
    PCM L-ALM C-PPA
    n Iter. $ \text{rel}_L $ $ \text{rel}_S $ CPU(s) Iter. $ \text{rel}_L $ $ \text{rel}_S $ CPU(s) Iter. $ \text{rel}_L $ $ \text{rel}_S $ CPU(s)
    100 81 4.2e-04 1.9e-04 0.03 161 1.6e-04 7.5e-04 0.03 232 8.2e-05 8.6e-04 0.04
    300 88 5.1e-04 7.3e-04 0.05 207 7.5e-05 6.3e-04 0.07 311 5.3e-05 9.4e-04 0.10
    600 110 9.2e-05 7.9e-04 0.54 262 7.2e-05 8.9e-04 0.62 374 6.0e-05 9.4e-04 1.01
    1000 141 4.4e-05 9.8e-04 1.81 291 3.3e-05 7.9e-04 2.21 411 3.2e-05 9.4e-04 2.86
    1500 139 6.7e-05 6.6e-04 3.95 359 7.0e-05 9.4e-04 5.04 484 6.9e-05 9.0e-04 6.54
    2000 151 6.3e-05 7.9e-04 6.55 406 1.2e-05 9.7e-04 9.24 536 1.3e-05 9.7e-04 12.08
    2500 171 3.9e-05 8.5e-04 11.57 519 1.3e-05 8.9e-04 18.60 635 1.8e-05 9.8e-04 22.69
    3000 199 2.9e-05 8.8e-04 18.82 585 1.7e-05 9.8e-04 29.46 709 1.7e-05 9.0e-04 35.61

    Table 2.  Numerical results for the Basis Pursuit problem, $ s = 50 $.
         | PCM                            | L-ALM                          | C-PPA
    n    | Iter. $ \text{rel}_L $ $ \text{rel}_S $ CPU(s) | Iter. $ \text{rel}_L $ $ \text{rel}_S $ CPU(s) | Iter. $ \text{rel}_L $ $ \text{rel}_S $ CPU(s)
    100  | 80  5.9e-04 7.0e-04 0.03  | 151 3.7e-04 9.6e-04 0.03  | 224 1.4e-04 9.4e-04 0.04
    300  | 87  6.1e-04 8.1e-04 0.05  | 203 1.2e-04 8.5e-04 0.09  | 299 3.3e-05 9.0e-04 0.10
    600  | 110 1.1e-04 6.1e-04 0.49  | 249 3.5e-05 9.9e-04 0.63  | 331 5.7e-05 8.6e-04 0.87
    1000 | 137 4.0e-05 8.5e-04 1.75  | 266 3.2e-05 7.9e-04 1.80  | 370 3.1e-05 8.3e-04 2.62
    1500 | 135 3.5e-05 8.6e-04 3.54  | 312 3.6e-05 8.5e-04 4.35  | 424 3.4e-05 8.2e-04 5.84
    2000 | 146 5.4e-05 7.6e-04 6.38  | 331 1.3e-05 9.8e-04 7.57  | 445 1.5e-05 9.5e-04 10.19
    2500 | 143 4.1e-05 9.7e-04 9.67  | 375 1.4e-05 9.3e-04 13.41 | 491 1.5e-05 9.1e-04 17.63
    3000 | 170 3.1e-05 9.7e-04 16.00 | 418 1.5e-05 9.5e-04 20.70 | 533 1.6e-05 8.9e-04 26.42

    Table 3.  Numerical results for the Basis Pursuit problem, $ s = 75 $.
         | PCM                            | L-ALM                          | C-PPA
    n    | Iter. $ \text{rel}_L $ $ \text{rel}_S $ CPU(s) | Iter. $ \text{rel}_L $ $ \text{rel}_S $ CPU(s) | Iter. $ \text{rel}_L $ $ \text{rel}_S $ CPU(s)
    100  | 82  5.3e-04 5.7e-04 0.02  | 164 2.7e-04 6.2e-04 0.03  | 236 6.3e-05 9.2e-04 0.04
    300  | 99  1.9e-04 7.9e-04 0.05  | 213 9.4e-05 5.9e-04 0.07  | 284 4.2e-05 9.2e-04 0.09
    600  | 117 6.7e-05 8.9e-04 0.55  | 226 1.1e-04 9.9e-04 0.58  | 316 8.5e-05 9.3e-04 0.84
    1000 | 179 4.2e-05 8.7e-04 2.35  | 253 5.4e-05 9.5e-04 1.68  | 369 1.9e-05 9.5e-04 2.47
    1500 | 159 3.4e-05 9.3e-04 4.20  | 299 3.4e-05 8.1e-04 4.13  | 399 3.5e-05 7.9e-04 5.39
    2000 | 142 6.6e-05 9.4e-04 6.18  | 313 1.3e-05 9.0e-04 7.32  | 410 1.9e-05 9.7e-04 9.56
    2500 | 139 2.9e-05 9.8e-04 9.41  | 337 1.4e-05 1.0e-03 12.20 | 444 1.1e-05 9.8e-04 16.00
    3000 | 168 3.7e-05 9.0e-04 15.92 | 370 1.6e-05 8.2e-04 18.54 | 464 2.2e-05 9.0e-04 23.22


    From Tables 1–3, it can be seen that the PCM performs the best, both in terms of the number of iterations and the CPU time, for all test cases. These numerical results illustrate that relaxing the step size condition can indeed yield larger step sizes, which accelerates the convergence of the algorithm.

    To verify the performance of our algorithm, we plot the approximation errors $ \text{rel}_L = \|x^k-x^{k-1}\| $ and $ \text{rel}_S = \|Ax^k - b\| $ achieved for $ n = 1500, s = 50 $ by each of the algorithms versus the iteration number $ k $ in Figures 2 and 3, respectively. It is clear that PCM significantly outperforms all the other algorithms in terms of the number of iterations.

    Figure 2.  Relative errors of $ \mbox{rel}_L = \|x^k-x^{k-1}\| $.
    Figure 3.  Relative errors of $ \mbox{rel}_S = \|Ax^k - b\| $.
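    For concreteness, the two error measures reported in the tables and figures can be computed as in the following minimal NumPy sketch (the function name is ours, not from the paper):

    ```python
    import numpy as np

    def errors(x_k, x_prev, A, b):
        """Compute the two measures reported above:
        rel_L = ||x^k - x^{k-1}||  (change between successive iterates),
        rel_S = ||A x^k - b||      (violation of the linear constraint)."""
        rel_L = np.linalg.norm(x_k - x_prev)
        rel_S = np.linalg.norm(A @ x_k - b)
        return rel_L, rel_S
    ```

    Both quantities are driven below the tolerance $ 10^{-3} $ in the runs reported in Tables 1–3.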

    The matrix completion (MC) problem arises in many fields, such as signal processing, statistics, and machine learning. It seeks to recover a low-rank matrix $ X $ from an incomplete set of known entries. Mathematically, its convex formulation is as follows:

    $ \min\limits_{X}\ \|X\|_* \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i,j)\in\Omega, $ (5.3)

    where $ \|X\|_* $ is the nuclear norm of $ X $, $ M $ is the unknown matrix with $ p $ available sampled entries, and $ \Omega $ is the set of index pairs of cardinality $ p $. By invoking the first-order optimality condition, MC is also equivalent to the VI (1.1) and (1.2) with $ f(x) = \partial\|X\|_* $.
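    Although the text does not spell it out, handling $ f(x) = \partial\|X\|_* $ in a projection or proximal framework typically reduces to singular value thresholding, the standard proximal operator of the nuclear norm. A minimal sketch for illustration (the function name is ours):

    ```python
    import numpy as np

    def svt(Y, tau):
        # Singular value thresholding: the proximal operator of tau * ||.||_*.
        # Soft-threshold the singular values of Y and rebuild the matrix.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt
    ```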

    The basic setup of the problem is as follows. We first generate two random matrices $ M_L \in \Re^{n\times r_a} $ and $ M_R \in \Re^{n\times r_a} $, both with i.i.d. standard Gaussian entries, and then set the low-rank matrix $ M = M_LM_R^T $. The index set $ \Omega $ of available entries is sampled uniformly at random among all index sets of cardinality $ |\Omega| $. We denote the oversampling factor by $ OS = |\Omega|/r_a(2n-r_a) $, i.e., the ratio of the sample size to the degrees of freedom of an asymmetric matrix of rank $ r_a $. The relative error of the approximation $ X $ is defined as

    $ \text{relative error} = \dfrac{\|X_\Omega - M_\Omega\|_F}{\|M_\Omega\|_F}. $ (5.4)

    We set $ Tol = 10^{-5} $ as the tolerance on the relative error for all algorithms. In all tests, the initial iterate is $ (X^0, \Lambda^0) = (\textbf{0}, \textbf{0}) $. The parameters used in the numerical experiments are set as follows: $ r = 0.006 $, $ s = 1.01/(4r) $, $ \gamma = 1 $ for PCM, and $ s = 1.01/r $ for L-ALM and C-PPA.
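    The test setup above can be sketched as follows (hypothetical helpers under the stated assumptions; the function names and the uniform sampling routine are ours):

    ```python
    import numpy as np

    def make_instance(n, r_a, OS, seed=0):
        """Generate an MC test instance: a rank-r_a target M = M_L M_R^T
        and an index set Omega with |Omega| = OS * r_a * (2n - r_a)."""
        rng = np.random.default_rng(seed)
        M = rng.standard_normal((n, r_a)) @ rng.standard_normal((n, r_a)).T
        p = int(round(OS * r_a * (2 * n - r_a)))
        flat = rng.choice(n * n, size=p, replace=False)  # uniform, no repeats
        Omega = np.unravel_index(flat, (n, n))
        return M, Omega

    def relative_error(X, M, Omega):
        """Relative error (5.4), measured on the sampled entries only."""
        return np.linalg.norm(X[Omega] - M[Omega]) / np.linalg.norm(M[Omega])
    ```

    With this convention, an approximation is accepted once `relative_error` drops below $ Tol = 10^{-5} $.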

    Tables 4–6 list the comparison of these algorithms for three different OS values. The results confirm that PCM outperforms the other methods in terms of computation time and number of iterations in all cases.

    Table 4.  Comparison results of PCM, L-ALM, and C-PPA ($ OS = 5 $).
    Problems     | PCM             | L-ALM           | C-PPA
    n    $ r_a $ | Rel.err.  Iter. | Rel.err.  Iter. | Rel.err.  Iter.
    100  5       | 9.71e-06  60    | 6.22e-05  101   | 5.83e-05  101
    100  10      | 9.74e-06  18    | 9.17e-06  38    | 9.15e-06  37
    200  5       | 9.40e-06  69    | 1.41e-04  101   | 1.33e-04  101
    200  10      | 8.88e-06  27    | 9.14e-06  56    | 9.15e-06  55
    500  10      | 9.67e-06  43    | 9.29e-06  72    | 8.96e-06  71
    500  15      | 8.93e-06  31    | 9.87e-06  50    | 9.06e-06  49
    1000 10      | 9.21e-06  71    | 9.23e-06  119   | 9.52e-06  117
    1500 10      | 9.92e-06  95    | 9.67e-06  166   | 9.41e-06  165

    Table 5.  Comparison results of PCM, L-ALM, and C-PPA ($ OS = 10 $).
    Problems     | PCM             | L-ALM           | C-PPA
    n    $ r_a $ | Rel.err.  Iter. | Rel.err.  Iter. | Rel.err.  Iter.
    100  5       | 8.67e-06  14    | 7.66e-06  29    | 7.67e-06  28
    100  10      | 8.39e-06  13    | 9.10e-06  27    | 9.12e-06  26
    200  5       | 6.84e-06  21    | 8.85e-06  36    | 8.67e-06  35
    200  10      | 5.78e-06  7     | 6.17e-06  13    | 8.32e-06  10
    500  10      | 9.13e-06  24    | 8.22e-06  39    | 8.86e-06  38
    500  15      | 8.34e-06  17    | 9.13e-06  26    | 9.74e-06  25
    1000 10      | 8.22e-06  48    | 9.25e-06  77    | 9.33e-06  76
    1500 10      | 9.41e-06  66    | 9.39e-06  112   | 9.40e-06  111

    Table 6.  Comparison results of PCM, L-ALM, and C-PPA ($ OS = 15 $).
    Problems     | PCM             | L-ALM           | C-PPA
    n    $ r_a $ | Rel.err.  Iter. | Rel.err.  Iter. | Rel.err.  Iter.
    100  5       | 9.62e-06  11    | 8.92e-06  23    | 8.99e-06  22
    100  10      | 9.62e-06  13    | 9.15e-06  27    | 9.20e-06  26
    200  5       | 4.73e-06  13    | 9.27e-06  21    | 8.34e-06  20
    200  10      | 7.98e-06  6     | 6.15e-06  13    | 8.34e-06  10
    500  10      | 8.15e-06  17    | 8.51e-06  25    | 9.93e-06  24
    500  15      | 4.58e-06  10    | 4.22e-06  14    | 7.19e-06  13
    1000 10      | 9.35e-06  35    | 9.38e-06  56    | 9.81e-06  55
    1500 10      | 9.75e-06  50    | 9.20e-06  84    | 9.46e-06  83


    This paper proposes a new prediction-correction method for solving monotone variational inequalities with linear constraints. At the prediction step, the implementation is carried out by a simple projection. At the correction step, the method introduces a simple updating step to generate the new iterate. We establish the global convergence of the method. The convergence condition also allows larger step sizes, which can potentially make the algorithm converge faster numerically. The numerical experiments confirm the efficiency of the proposed method. Future work will explore combining a self-adaptive technique with the method; further applications of the method are also expected.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was funded by NSF of Shaanxi Province grant number 2020-JQ-485.

    The authors declare no conflict of interest.
