Research article

Enhancing structural health monitoring with machine learning for accurate prediction of retrofitting effects

  • Received: 05 June 2024 Revised: 15 October 2024 Accepted: 23 October 2024 Published: 28 October 2024
  • MSC : 68T45, 93C95

  • Structural health in civil engineering involves maintaining a structure's integrity and performance over time as it resists loads and environmental effects. Ensuring long-term functionality is vital to prevent accidents, economic losses, and service interruptions. Structural health monitoring (SHM) systems use sensors to detect damage indicators such as vibrations and cracks, which are crucial for predicting service life and planning maintenance. Machine learning (ML) enhances SHM by analyzing sensor data to identify damage patterns often missed by human analysts. ML models capture complex relationships in data, leading to accurate predictions and early issue detection. This research aimed to develop a methodology for training an artificial intelligence (AI) system to predict the effects of retrofitting on civil structures, using data from the KW51 bridge (Leuven). Dimensionality reduction with the Welch transform identified the first seven modal frequencies as key predictors. Unsupervised principal component analysis (PCA) projections and a K-means algorithm achieved 70% accuracy in differentiating data before and after retrofitting. A random forest algorithm achieved 99.19% median accuracy with a nearly perfect receiver operating characteristic (ROC) curve. The final model, tested on the entire dataset, achieved 99.77% accuracy, demonstrating its effectiveness in predicting retrofitting effects for other civil structures.

    Citation: A. Presno Vélez, M. Z. Fernández Muñiz, J. L. Fernández Martínez. Enhancing structural health monitoring with machine learning for accurate prediction of retrofitting effects[J]. AIMS Mathematics, 2024, 9(11): 30493-30514. doi: 10.3934/math.20241472




    Let $M_1, M_2, \ldots, M_s \in \mathbb{R}^{n\times n}$, and let $u_1(z), u_2(z), \ldots, u_s(z): \mathbb{R}^n \to \mathbb{R}^n$ be $s$ nonlinear mappings. The vertical nonlinear complementarity problem (abbreviated as VNCP) is defined as follows: find $z \in \mathbb{R}^n$ such that

    $$r_i = M_i z + u_i(z), \quad 1 \le i \le s, \quad \text{with} \quad \min\{z, r_1, \ldots, r_s\} = 0. \tag{1.1}$$

    Here, the minimum operation is taken component-wise.
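    The componentwise-minimum characterization in (1.1) can be evaluated directly, which is convenient for checking candidate solutions. The sketch below (Python with NumPy, with toy matrices and nonlinearities chosen purely for illustration — they are not from the paper) returns the residual vector $\min\{z, r_1, \ldots, r_s\}$, which vanishes exactly at a solution of the VNCP.

```python
import numpy as np

def vncp_residual(z, Ms, us):
    """Componentwise residual of (1.1): min{z, r_1, ..., r_s},
    where r_i = M_i z + u_i(z). `Ms` is a list of (n, n) arrays and
    `us` a list of callables; both are illustrative inputs."""
    rs = [M @ z + u(z) for M, u in zip(Ms, us)]
    return np.minimum.reduce([z] + rs)

# A 1-D toy instance (s = 2): z* = 0 solves it, since r_1 = r_2 = 0 there,
# so the componentwise minimum vanishes.
M1 = np.array([[4.0]])
M2 = np.array([[3.0]])
u1 = lambda z: z / (1.0 + np.abs(z))   # smooth, monotone toy nonlinearity
u2 = np.arctan
print(vncp_residual(np.zeros(1), [M1, M2], [u1, u2]))  # → [0.]
```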

    The VNCP has many applications in generalized Leontief input-output models, control theory, nonlinear networks, contact problems in lubrication, and so on; e.g., see [11,18,34,35,39]. Taking $u_i(z) = u_i \in \mathbb{R}^n$, $1 \le i \le s$, the VNCP reduces to the vertical linear complementarity problem (abbreviated as VLCP) [9,20,40]. Further, taking $s = 1$, the VLCP reduces to the linear complementarity problem (abbreviated as LCP) [10].

    In the earlier literature, there are several iterative methods for such problems. In [39], Mangasarian's general iterative algorithm was constructed to solve the VLCP. Solving an approximate equation of the nonlinear complementarity problem by a continuation method can be extended to solve the VNCP; see [8]. Furthermore, in [37] the authors approximated the equivalent minimum equation of the VNCP by a sequence of aggregation functions and found the zero solution of the approximated problems. Moreover, some interior-point methods have been applied to complementarity problems, such as the LCP [24,29], nonlinear complementarity problems (abbreviated as NCP) [36], and the horizontal LCP [46]. In recent years, modulus-based matrix splitting (abbreviated as MMS) iteration methods have gained popularity for the numerical solution of various complementarity problems. References [2,12,15,16] detail their application to the LCP, while [13,32,50] focus on the horizontal LCP. The second-order cone LCP is discussed in [27], implicit complementarity problems in [7,14,25], the quasi-complementarity problem in [38], the NCP in [41], and the circular cone NCP in [28]. Numerical examples have demonstrated that MMS iteration methods often outperform state-of-the-art smooth Newton methods in practical applications. Specifically, for the VLCP, a special case of the VNCP, the MMS iteration method was introduced in [31]. Alternatively, an MMS iteration method without auxiliary variables, based on a non-auxiliary-variable equivalent modulus equation, was presented in [23] and shown to be more efficient than the method in [31]. Accelerated techniques and improved results for MMS iteration methods for the VLCP are further detailed in [21,48,49,51]. On the other hand, projected-type methods have also been used to solve the VLCP; see [6,33]. For the VNCP, the only existing work on the MMS iteration method is [42].

    To improve the convergence rate of the MMS iteration method for solving the VNCP, in this work, we aim to construct a two-step MMS iteration method. The two-step splitting technique has been successfully used for other complementarity problems, e.g., see [43,44,45,52]; the main idea is to replace the single iteration in the MMS method with two iterations based on two matrix splittings of the system matrices, which makes fuller use of the information in the matrices and accelerates convergence.

    In the following, after presenting some required preliminaries, the new two-step MMS iteration method is established in Section 2. The convergence analysis of the proposed method is given in Section 3, which can generalize and improve the results in [42]. Some numerical examples are presented to illustrate the efficiency of the proposed method in Section 4, and concluding remarks of the whole work are presented in Section 5.

    First, some notations, definitions, and existing results needed in the following discussion are introduced.

    Let $M = (m_{ij}) \in \mathbb{R}^{n\times n}$ and $M = D_M - B_M$, where $D_M$ is the diagonal part of $M$ and $B_M = D_M - M$ is its off-diagonal part. For two matrices $M$ and $N$, the inequality $M \ge (>)\, N$ means $m_{ij} \ge (>)\, n_{ij}$ for every $i, j$. Let $|M| = (|m_{ij}|)$, and denote the comparison matrix of $M$ by $\langle M\rangle = (\langle m\rangle_{ij})$, where $\langle m\rangle_{ij} = |m_{ij}|$ if $i = j$ and $\langle m\rangle_{ij} = -|m_{ij}|$ if $i \ne j$. If $m_{ij} \le 0$ for all $i \ne j$, $M$ is called a Z-matrix. If $M$ is a nonsingular Z-matrix and $M^{-1} \ge 0$, it is called a nonsingular M-matrix. $M$ is called an H-matrix if $\langle M\rangle$ is a nonsingular M-matrix. If $|m_{ii}| > \sum_{j\ne i}|m_{ij}|$ for all $1 \le i \le n$, $M$ is called a strictly diagonally dominant (abbreviated as s.d.d.) matrix (see [5]). If $M$ is an H-matrix with all its diagonal entries positive, it is called an H$_+$-matrix (e.g., see [1]). If $M$ is a nonsingular M-matrix, it is well known that there exists a positive diagonal matrix $\Lambda$ such that $M\Lambda$ is an s.d.d. matrix with all the diagonal entries of $M\Lambda$ positive [5]. $M = F - G$ is called an H-splitting if $\langle F\rangle - |G|$ is a nonsingular M-matrix [19].

    Let $\mathcal{Y} = \{Y_1, Y_2, \ldots, Y_s\}$ be a set of $n\times n$ real matrices (or $n\times 1$ real vectors). Define the mapping $\varphi$ as follows:

    $$\varphi(\mathcal{Y}) = \sum_{i=1}^{s-1} 2^{s-i-1} Y_i + Y_s.$$
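    Since the mapping $\varphi$ enters every update of the methods below, a direct transcription may help; the sketch below (Python with NumPy, an illustrative aid rather than part of the paper's MATLAB code) evaluates $\varphi$ on a list of matrices or vectors.

```python
import numpy as np

def phi(Ys):
    """phi(Y) = sum_{i=1}^{s-1} 2^{s-i-1} Y_i  +  Y_s,
    for a list Ys = [Y_1, ..., Y_s] of matrices or vectors."""
    s = len(Ys)
    acc = sum((2 ** (s - i - 1)) * Ys[i - 1] for i in range(1, s))  # i = 1..s-1
    return acc + Ys[-1]

# With s = 3 scalar "matrices": phi = 2*Y1 + 1*Y2 + Y3.
print(phi([np.array(1.0), np.array(1.0), np.array(1.0)]))  # → 4.0
```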

    Let $\Omega \in \mathbb{R}^{n\times n}$ be a positive diagonal matrix, $\sigma > 0$ a real scalar, and let $M_i = F_i - G_i$ $(1 \le i \le s)$ be $s$ splittings of the $M_i$. Denote

    $$\mathcal{F} = \{F_1, F_2, \ldots, F_s\}, \quad \mathcal{G} = \{G_1, G_2, \ldots, G_s\}, \quad \mathcal{M} = \{M_1, M_2, \ldots, M_s\},$$
    $$\mathcal{D}_{\mathcal{F}} = \{D_{F_1}, D_{F_2}, \ldots, D_{F_s}\}, \quad \mathcal{B}_{\mathcal{F}} = \{B_{F_1}, B_{F_2}, \ldots, B_{F_s}\},$$

    and

    $$\mathcal{U}(z) = \{u_1(z), u_2(z), \ldots, u_s(z)\}.$$

    By equivalently transforming the VNCP to a fixed-point equation, the MMS iteration method for the VNCP was given in [42].

    Method 2.1. [42] (MMS) Given $x_1^{(0)} \in \mathbb{R}^n$, for $k = 0, 1, 2, \ldots$, compute $x_1^{(k+1)} \in \mathbb{R}^n$ by

    $$\left(2^{s-1}\Omega + \varphi(\mathcal{F})\right) x_1^{(k+1)} = \varphi(\mathcal{G})\, x_1^{(k)} + \left(2^{s-1}\Omega - \varphi(\mathcal{M})\right)|x_1^{(k)}| + \Omega \sum_{i=2}^{s} 2^{s-i+1}|x_i^{(k)}| - \sigma\, \varphi(\mathcal{U}(z^{(k)})),$$

    where

    $$z^{(k)} = \frac{1}{\sigma}\left(x_1^{(k)} + |x_1^{(k)}|\right)$$

    and $x_2^{(k)}, \ldots, x_s^{(k)}$ are computed by

    $$\begin{cases} x_s^{(k)} = \frac{1}{2}\,\Omega^{-1}\left[(M_{s-1} - M_s)\left(|x_1^{(k)}| + x_1^{(k)}\right) + \sigma u_{s-1}(z^{(k)}) - \sigma u_s(z^{(k)})\right], \\[2pt] x_j^{(k)} = \frac{1}{2}\,\Omega^{-1}\left[(M_{j-1} - M_j)\left(|x_1^{(k)}| + x_1^{(k)}\right) + \Omega\left(|x_{j+1}^{(k)}| + x_{j+1}^{(k)}\right) + \sigma u_{j-1}(z^{(k)}) - \sigma u_j(z^{(k)})\right], \quad j = s-1, s-2, \ldots, 2, \end{cases} \tag{2.1}$$

    until the iteration is convergent.
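    For concreteness, a minimal sketch of Method 2.1 for $s = 2$ is given below (Python with NumPy rather than the MATLAB used in Section 4; the test matrices, $\Omega$, $\sigma = 1$, and the Gauss-Seidel-type splittings $F_i = \mathrm{tril}(M_i)$ are illustrative choices, not the paper's exact experimental setup). For $s = 2$ one has $\varphi(\mathcal{F}) = F_1 + F_2$, so the update reads $(2\Omega + F_1 + F_2)x_1^{(k+1)} = (G_1 + G_2)x_1^{(k)} + (2\Omega - M_1 - M_2)|x_1^{(k)}| + 2\Omega|x_2^{(k)}| - \sigma(u_1(z^{(k)}) + u_2(z^{(k)}))$.

```python
import numpy as np

def mms_s2(M1, M2, u1, u2, Omega, sigma=1.0, tol=1e-10, kmax=500):
    """Method 2.1 (MMS) specialized to s = 2, with Gauss-Seidel-type
    splittings F_i = tril(M_i), G_i = F_i - M_i (illustrative choices)."""
    n = M1.shape[0]
    F1, F2 = np.tril(M1), np.tril(M2)
    G1, G2 = F1 - M1, F2 - M2
    A = 2.0 * Omega + F1 + F2            # coefficient matrix of the x_1-update
    x1 = np.ones(n)
    z = (x1 + np.abs(x1)) / sigma
    for _ in range(kmax):
        z = (x1 + np.abs(x1)) / sigma
        # auxiliary variable from (2.1) with s = 2:
        # x_2 = (1/2) Omega^{-1} [(M1 - M2)(|x1| + x1) + sigma u1(z) - sigma u2(z)]
        x2 = 0.5 * np.linalg.solve(Omega, (M1 - M2) @ (np.abs(x1) + x1)
                                   + sigma * (u1(z) - u2(z)))
        rhs = ((G1 + G2) @ x1 + (2.0 * Omega - M1 - M2) @ np.abs(x1)
               + 2.0 * Omega @ np.abs(x2) - sigma * (u1(z) + u2(z)))
        x1 = np.linalg.solve(A, rhs)
        z = (x1 + np.abs(x1)) / sigma
        res = np.minimum.reduce([z, M1 @ z + u1(z), M2 @ z + u2(z)])
        if np.linalg.norm(res) < tol:
            break
    return z

# Small diagonally dominant toy problem; z* = 0 is its unique solution.
M1 = np.array([[4.0, -1.0], [-1.0, 4.0]])
M2 = 4.0 * np.eye(2)
Omega = np.diag(0.5 * (np.diag(M1) + np.diag(M2)) + 1.0)
u1 = lambda z: z / (1.0 + np.abs(z))   # smooth monotone toy nonlinearity
u2 = np.arctan
z = mms_s2(M1, M2, u1, u2, Omega)
print(np.linalg.norm(np.minimum.reduce([z, M1 @ z + u1(z), M2 @ z + u2(z)])))
```

    On this toy instance the modulus iteration contracts rapidly toward $z^* = 0$, so the residual of (1.1) drops below the tolerance within a few dozen steps.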

    Based on Method 2.1, to fully utilize the information in the entries of the matrix set $\mathcal{M}$, for $1 \le i \le s$, consider two matrix splittings of $M_i$, e.g., $M_i = F_i^{(1)} - G_i^{(1)} = F_i^{(2)} - G_i^{(2)}$. Then, the two-step MMS (abbreviated as TMMS) iteration method can be established as follows:

    Method 2.2. (TMMS) Given $x_1^{(0)} \in \mathbb{R}^n$, for $k = 0, 1, 2, \ldots$, compute $x_1^{(k+1)} \in \mathbb{R}^n$ by

    $$\begin{cases} \left(2^{s-1}\Omega + \varphi(\mathcal{F}^{(1)})\right) x_1^{(k+\frac12)} = \varphi(\mathcal{G}^{(1)})\, x_1^{(k)} + \left(2^{s-1}\Omega - \varphi(\mathcal{M})\right)|x_1^{(k)}| + \Omega\sum_{i=2}^{s} 2^{s-i+1}|x_i^{(k)}| - \sigma\,\varphi(\mathcal{U}(z^{(k)})), \\[4pt] \left(2^{s-1}\Omega + \varphi(\mathcal{F}^{(2)})\right) x_1^{(k+1)} = \varphi(\mathcal{G}^{(2)})\, x_1^{(k+\frac12)} + \left(2^{s-1}\Omega - \varphi(\mathcal{M})\right)|x_1^{(k+\frac12)}| + \Omega\sum_{i=2}^{s} 2^{s-i+1}|x_i^{(k+\frac12)}| - \sigma\,\varphi(\mathcal{U}(z^{(k+\frac12)})), \end{cases} \tag{2.2}$$

    where

    $$z^{(k)} = \frac{1}{\sigma}\left(x_1^{(k)} + |x_1^{(k)}|\right), \qquad z^{(k+\frac12)} = \frac{1}{\sigma}\left(x_1^{(k+\frac12)} + |x_1^{(k+\frac12)}|\right),$$

    the vectors $x_2^{(k)}, \ldots, x_s^{(k)}$ are computed by (2.1), and

    $$\begin{cases} x_s^{(k+\frac12)} = \frac{1}{2}\,\Omega^{-1}\left[(M_{s-1} - M_s)\left(|x_1^{(k+\frac12)}| + x_1^{(k+\frac12)}\right) + \sigma u_{s-1}(z^{(k+\frac12)}) - \sigma u_s(z^{(k+\frac12)})\right], \\[2pt] x_j^{(k+\frac12)} = \frac{1}{2}\,\Omega^{-1}\left[(M_{j-1} - M_j)\left(|x_1^{(k+\frac12)}| + x_1^{(k+\frac12)}\right) + \Omega\left(|x_{j+1}^{(k+\frac12)}| + x_{j+1}^{(k+\frac12)}\right) + \sigma u_{j-1}(z^{(k+\frac12)}) - \sigma u_j(z^{(k+\frac12)})\right], \quad j = s-1, s-2, \ldots, 2, \end{cases} \tag{2.3}$$

    until the iteration is convergent.

    Clearly, if we take $F_i^{(1)} = F_i^{(2)}$, Method 2.2 reduces to Method 2.1 immediately. Furthermore, we can obtain a class of relaxation methods from Method 2.2 by suitable choices of the two matrix splittings, similar to those in [43,44,45,49,50,52]. For example, writing $M_i = D_{M_i} - L_{M_i} - U_{M_i}$, with $D_{M_i}$ the diagonal part and $-L_{M_i}$, $-U_{M_i}$ the strictly lower and strictly upper triangular parts of $M_i$, and taking, for $i = 1, 2, \ldots, s$,

    $$\begin{cases} F_i^{(1)} = \frac{1}{\alpha}\left(D_{M_i} - \beta L_{M_i}\right), & G_i^{(1)} = F_i^{(1)} - M_i, \\[2pt] F_i^{(2)} = \frac{1}{\alpha}\left(D_{M_i} - \beta U_{M_i}\right), & G_i^{(2)} = F_i^{(2)} - M_i, \end{cases} \tag{2.4}$$

    one can get the two-step modulus-based accelerated overrelaxation (abbreviated as TMAOR) iteration method, which can reduce to the two-step modulus-based successive overrelaxation (abbreviated as TMSOR) and Gauss-Seidel (abbreviated as TMGS) methods, when (α,β)=(α,α) and (α,β)=(1,1), respectively.
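    The relaxation splittings of the stated form are easy to assemble in practice; the helper below (an illustrative Python/NumPy sketch, not the paper's code) builds $F_i^{(1)}, G_i^{(1)}, F_i^{(2)}, G_i^{(2)}$ from the diagonal and the strictly lower/upper triangular parts of a matrix and confirms that both pairs are genuine splittings of it.

```python
import numpy as np

def aor_two_step_splittings(M, alpha, beta):
    """Two-step AOR splittings in the form of (2.4): write M = D - L - U with
    D diagonal, -L strictly lower, -U strictly upper. TMSOR corresponds to
    beta = alpha, TMGS to alpha = beta = 1."""
    D = np.diag(np.diag(M))
    L = -np.tril(M, -1)                  # so that -L is the strictly lower part of M
    U = -np.triu(M, 1)                   # so that -U is the strictly upper part of M
    F1 = (D - beta * L) / alpha
    F2 = (D - beta * U) / alpha
    return (F1, F1 - M), (F2, F2 - M)    # (F, G) pairs with M = F - G

M = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
(F1, G1), (F2, G2) = aor_two_step_splittings(M, alpha=1.1, beta=1.0)
print(np.allclose(F1 - G1, M) and np.allclose(F2 - G2, M))  # → True
```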

    In this section, the convergence conditions of Method 2.2 are given under the assumption that the VNCP has a unique solution $z^*$, the same as in [42]. Furthermore, for $1 \le i \le s$, $u_i(z)$ is assumed to satisfy the following local Lipschitz smoothness condition: let

    $$u_i(z) = \left(u_i(z_1), u_i(z_2), \ldots, u_i(z_n)\right)^T$$

    be differentiable with

    $$0 \le \frac{\partial u_i(z)}{\partial z} \le U_i, \quad i = 1, 2, \ldots, s. \tag{3.1}$$

    Then, by the Lagrange mean value theorem, there exists $\xi_j$ between $z_j^{(k)}$ and $z_j^*$ such that

    $$u_i(z^{(k)}) - u_i(z^*) = U_i^{(k)}\left(z^{(k)} - z^*\right) = \frac{1}{\sigma} U_i^{(k)}\left[\left(x_1^{(k)} - x_1^*\right) + \left(|x_1^{(k)}| - |x_1^*|\right)\right], \tag{3.2}$$

    where $U_i^{(k)}$ is a nonnegative diagonal matrix with diagonal entries $\left.\frac{\partial u_i(z_j)}{\partial z_j}\right|_{z_j = \xi_j}$, $1 \le j \le n$. Furthermore, by (3.1), we have

    $$0 \le U_i^{(k)} \le U_i. \tag{3.3}$$

    Denote

    $$\mathcal{U}^{(k)} = \{U_1^{(k)}, U_2^{(k)}, \ldots, U_s^{(k)}\}, \qquad \mathcal{U} = \{U_1, U_2, \ldots, U_s\}.$$

    Lemma 3.1. Let $M_i$, $1 \le i \le s$, be H$_+$-matrices, let $\Omega \in \mathbb{R}^{n\times n}$ be a positive diagonal matrix, and let $\sigma > 0$. For $t = 1, 2$, assume that:

    (I) $D_{F_i^{(t)}} > 0$, $i = 1, 2, \ldots, s-1$, and $M_s = F_s^{(t)} - G_s^{(t)}$ is an H-splitting of $M_s$;

    (II)

    $$\begin{cases} \left\langle F_{s-1}^{(t)}\right\rangle \ge \left\langle F_s^{(t)}\right\rangle, \\[2pt] 2^{s-j}\left\langle F_{j-1}^{(t)}\right\rangle \ge \sum_{i=j}^{s-1} 2^{s-i-1}\left\langle F_i^{(t)}\right\rangle + \left\langle F_s^{(t)}\right\rangle, \quad 2 \le j \le s-1; \end{cases} \tag{3.4}$$

    (III) there exists a positive diagonal matrix $\Lambda$ such that $\left(\left\langle F_s^{(t)}\right\rangle - |G_s^{(t)}|\right)\Lambda$ is an s.d.d. matrix.

    Then, $2^{s-1}\Omega + \varphi(\mathcal{F}^{(t)})$ is an H-matrix.

    Proof. Let $e \in \mathbb{R}^n$ be the vector with all entries equal to 1. By reusing (3.4) $s-2$ times, we have

    $$\begin{aligned} \left\langle 2^{s-1}\Omega + \varphi(\mathcal{F}^{(t)})\right\rangle\Lambda e >{}& \varphi\left(\left\langle\mathcal{F}^{(t)}\right\rangle\right)\Lambda e = \left(2^{s-2}\left\langle F_1^{(t)}\right\rangle + \sum_{i=2}^{s-1} 2^{s-i-1}\left\langle F_i^{(t)}\right\rangle + \left\langle F_s^{(t)}\right\rangle\right)\Lambda e \\ \ge{}& 2\left(\sum_{i=2}^{s-1} 2^{s-i-1}\left\langle F_i^{(t)}\right\rangle + \left\langle F_s^{(t)}\right\rangle\right)\Lambda e = 2\left(2^{s-3}\left\langle F_2^{(t)}\right\rangle + \sum_{i=3}^{s-1} 2^{s-i-1}\left\langle F_i^{(t)}\right\rangle + \left\langle F_s^{(t)}\right\rangle\right)\Lambda e \\ \ge{}& \cdots \ge 2^{s-1}\left\langle F_s^{(t)}\right\rangle\Lambda e \ge 2^{s-1}\left(\left\langle F_s^{(t)}\right\rangle - |G_s^{(t)}|\right)\Lambda e > 0. \end{aligned}$$

    Hence, $\left(2^{s-1}\Omega + \varphi(\mathcal{F}^{(t)})\right)\Lambda$ is an s.d.d. matrix, which implies that $2^{s-1}\Omega + \varphi(\mathcal{F}^{(t)})$ is an H-matrix. $\square$

    Lemma 3.2. With the same notations as in Lemma 3.1, denote by $x_i^*$, $i = 1, 2, \ldots, s$, the solution of the VNCP, and let

    $$\delta_i^{(k)} = x_i^{(k)} - x_i^*, \qquad \bar\delta_i^{(k)} = |x_i^{(k)}| - |x_i^*|.$$

    Then, we have

    $$\sum_{i=2}^{s} 2^{s-i+1}\Omega\,|\bar\delta_i^{(k)}| \le \left[2\left(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j + \Gamma_{s-1}\right) + \Theta\right]|\delta_1^{(k)}|, \tag{3.5}$$

    where

    $$\Gamma_j = \begin{cases} |M_{s-1} - M_s|, & \text{if } j = 1, \\[2pt] \left|2^{j-1} M_{s-j} - \sum_{t=s-j+1}^{s-1} 2^{s-t-1} M_t - M_s\right|, & \text{if } 2 \le j \le s-1, \end{cases}$$

    and

    $$\Theta = \sum_{j=1}^{s-1}\left(2^s - 2^{s-j}\right)U_j + \left(2^s - 2\right)U_s.$$

    Proof. By (2.1), we get

    $$\begin{cases} \delta_s^{(k)} = \frac{1}{2}\,\Omega^{-1}\left[(M_{s-1} - M_s) + (U_{s-1}^{(k)} - U_s^{(k)})\right]\left(\bar\delta_1^{(k)} + \delta_1^{(k)}\right), \\[2pt] \delta_j^{(k)} = \frac{1}{2}\,\Omega^{-1}\left[\left((M_{j-1} - M_j) + (U_{j-1}^{(k)} - U_j^{(k)})\right)\left(\bar\delta_1^{(k)} + \delta_1^{(k)}\right) + \Omega\left(\bar\delta_{j+1}^{(k)} + \delta_{j+1}^{(k)}\right)\right], \quad j = s-1, s-2, \ldots, 2. \end{cases} \tag{3.6}$$

    By the first equation of (3.6), we have

    $$2\,\Omega\,|\delta_s^{(k)}| = \left|\left[(M_{s-1} - M_s) + (U_{s-1}^{(k)} - U_s^{(k)})\right]\left(\bar\delta_1^{(k)} + \delta_1^{(k)}\right)\right| \le 2\left(|M_{s-1} - M_s| + U_{s-1} + U_s\right)|\delta_1^{(k)}| = 2\left(\Gamma_1 + U_{s-1} + U_s\right)|\delta_1^{(k)}|.$$

    Then, for the subscript $s-1$, with the second equation of (3.6), we can get

    $$\begin{aligned} 2^2\,\Omega\,|\delta_{s-1}^{(k)}| ={}& \left|\left[2(M_{s-2} - M_{s-1}) + 2(U_{s-2}^{(k)} - U_{s-1}^{(k)})\right]\left(\bar\delta_1^{(k)} + \delta_1^{(k)}\right) + 2\Omega\left(\bar\delta_s^{(k)} + \delta_s^{(k)}\right)\right| \\ ={}& \left|\left[(2M_{s-2} - M_{s-1} - M_s) + (2U_{s-2}^{(k)} - U_{s-1}^{(k)} - U_s^{(k)})\right]\left(\bar\delta_1^{(k)} + \delta_1^{(k)}\right) + 2\Omega\,\bar\delta_s^{(k)}\right| \\ \le{}& 2\left(\Gamma_2 + 2U_{s-2} + U_{s-1} + U_s\right)|\delta_1^{(k)}| + 2\Omega\,|\delta_s^{(k)}| \\ \le{}& 2\left[\Gamma_2 + \Gamma_1 + 2\left(U_{s-2} + U_{s-1} + U_s\right)\right]|\delta_1^{(k)}|. \end{aligned}$$

    By induction, for $2 \le i \le s$, we have

    $$2^{s-i+1}\,\Omega\,|\delta_i^{(k)}| \le \left(\sum_{j=1}^{s-i} 2^{s-i-j+1}\Gamma_j + 2\,\Gamma_{s-i+1} + 2^{s-i+1}\sum_{j=i-1}^{s} U_j\right)|\delta_1^{(k)}|. \tag{3.7}$$

    Then, by (3.7), we get

    $$\begin{aligned} \sum_{i=2}^{s} 2^{s-i+1}\Omega\,|\bar\delta_i^{(k)}| \le{}& \sum_{i=2}^{s} 2^{s-i+1}\Omega\,|\delta_i^{(k)}| \le \sum_{i=2}^{s}\left(\sum_{j=1}^{s-i} 2^{s-i-j+1}\Gamma_j + 2\,\Gamma_{s-i+1} + 2^{s-i+1}\sum_{j=i-1}^{s} U_j\right)|\delta_1^{(k)}| \\ ={}& \left(\sum_{j=1}^{s-2}\sum_{i=2}^{s-j} 2^{s-i-j+1}\Gamma_j + 2\sum_{i=1}^{s-1}\Gamma_i + \sum_{i=2}^{s} 2^{s-i+1}\sum_{j=i-1}^{s} U_j\right)|\delta_1^{(k)}| \\ ={}& \left[\sum_{j=1}^{s-2}\left(2^{s-j} - 2\right)\Gamma_j + 2\sum_{j=1}^{s-1}\Gamma_j + \sum_{j=1}^{s-1}\left(2^s - 2^{s-j}\right)U_j + \left(2^s - 2\right)U_s\right]|\delta_1^{(k)}| \\ ={}& \left[2\left(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j + \Gamma_{s-1}\right) + \Theta\right]|\delta_1^{(k)}|. \end{aligned} \qquad \square$$

    Theorem 3.1. With the same notations and assumptions as in Lemmas 3.1 and 3.2, for $t = 1, 2$, assume that

    $$\begin{cases} |G_{s-1}^{(t)}| \le |G_s^{(t)}|, \\[2pt] 2^{s-j}|G_{j-1}^{(t)}| \le \left|\sum_{i=j}^{s-1} 2^{s-i-1} G_i^{(t)} + G_s^{(t)}\right|, \quad 2 \le j \le s-1. \end{cases} \tag{3.8}$$

    Then, Method 2.2 converges for any initial vector $x_1^{(0)}$ provided that

    $$\Omega \ge 2^{1-s}\left(\varphi(\mathcal{D}_{\mathcal{F}^{(t)}}) + \varphi(\mathcal{U}) + \Theta\right) \tag{3.9}$$

    or

    $$\left[2^{1-s}\left(\varphi(\mathcal{D}_{\mathcal{F}^{(t)}}) + \varphi(\mathcal{U})\right) + 2^{-s}\Theta - \left(\left\langle F_s^{(t)}\right\rangle - |G_s^{(t)}|\right)\right]\Lambda e < \Omega\Lambda e < 2^{1-s}\left(\varphi(\mathcal{D}_{\mathcal{F}^{(t)}}) + \Theta\right)\Lambda e. \tag{3.10}$$

    Proof.

    By (3.2), we get

    $$u_i(z^{(k)}) - u_i(z^*) = \frac{1}{\sigma} U_i^{(k)}\left(\delta_1^{(k)} + \bar\delta_1^{(k)}\right).$$

    By the definition of $\varphi$, we have

    $$\varphi(\mathcal{U}(z^{(k)})) - \varphi(\mathcal{U}(z^*)) = \varphi\left(\mathcal{U}(z^{(k)}) - \mathcal{U}(z^*)\right) = \frac{1}{\sigma}\varphi(\mathcal{U}^{(k)})\left(\delta_1^{(k)} + \bar\delta_1^{(k)}\right). \tag{3.11}$$

    Combining the first equation of (2.2) and (3.11), we can get

    $$\begin{aligned} \left(2^{s-1}\Omega + \varphi(\mathcal{F}^{(1)})\right)\delta_1^{(k+\frac12)} ={}& \varphi(\mathcal{G}^{(1)})\,\delta_1^{(k)} + \left(2^{s-1}\Omega - \varphi(\mathcal{M})\right)\bar\delta_1^{(k)} + \Omega\sum_{i=2}^{s} 2^{s-i+1}\bar\delta_i^{(k)} - \sigma\left[\varphi(\mathcal{U}(z^{(k)})) - \varphi(\mathcal{U}(z^*))\right] \\ ={}& \varphi(\mathcal{G}^{(1)})\,\delta_1^{(k)} + \left(2^{s-1}\Omega - \varphi(\mathcal{M})\right)\bar\delta_1^{(k)} + \Omega\sum_{i=2}^{s} 2^{s-i+1}\bar\delta_i^{(k)} - \varphi(\mathcal{U}^{(k)})\left(\delta_1^{(k)} + \bar\delta_1^{(k)}\right) \\ ={}& \left[\varphi(\mathcal{G}^{(1)}) - \varphi(\mathcal{U}^{(k)})\right]\delta_1^{(k)} + \left[2^{s-1}\Omega - \varphi(\mathcal{M}) - \varphi(\mathcal{U}^{(k)})\right]\bar\delta_1^{(k)} + \Omega\sum_{i=2}^{s} 2^{s-i+1}\bar\delta_i^{(k)}. \end{aligned}$$

    Similarly, by the second equation of (2.2), we have

    $$\left(2^{s-1}\Omega + \varphi(\mathcal{F}^{(2)})\right)\delta_1^{(k+1)} = \left[\varphi(\mathcal{G}^{(2)}) - \varphi(\mathcal{U}^{(k+\frac12)})\right]\delta_1^{(k+\frac12)} + \left[2^{s-1}\Omega - \varphi(\mathcal{M}) - \varphi(\mathcal{U}^{(k+\frac12)})\right]\bar\delta_1^{(k+\frac12)} + \Omega\sum_{i=2}^{s} 2^{s-i+1}\bar\delta_i^{(k+\frac12)}.$$

    Then we have

    $$\begin{cases} |\delta_1^{(k+\frac12)}| \le \left|\left(2^{s-1}\Omega + \varphi(\mathcal{F}^{(1)})\right)^{-1}\right|\left(\left|\varphi(\mathcal{G}^{(1)}) - \varphi(\mathcal{U}^{(k)})\right|\,|\delta_1^{(k)}| + \left|2^{s-1}\Omega - \varphi(\mathcal{M}) - \varphi(\mathcal{U}^{(k)})\right|\,|\bar\delta_1^{(k)}| + \sum_{i=2}^{s} 2^{s-i+1}\Omega\,|\bar\delta_i^{(k)}|\right), \\[4pt] |\delta_1^{(k+1)}| \le \left|\left(2^{s-1}\Omega + \varphi(\mathcal{F}^{(2)})\right)^{-1}\right|\left(\left|\varphi(\mathcal{G}^{(2)}) - \varphi(\mathcal{U}^{(k+\frac12)})\right|\,|\delta_1^{(k+\frac12)}| + \left|2^{s-1}\Omega - \varphi(\mathcal{M}) - \varphi(\mathcal{U}^{(k+\frac12)})\right|\,|\bar\delta_1^{(k+\frac12)}| + \sum_{i=2}^{s} 2^{s-i+1}\Omega\,|\bar\delta_i^{(k+\frac12)}|\right). \end{cases} \tag{3.12}$$

    By Lemma 3.1, $2^{s-1}\Omega + \varphi(\mathcal{F}^{(t)})$ is an H-matrix. By [17], with (3.5) and (3.12), we get

    $$|\delta_1^{(k+1)}| \le P^{(2)-1} Q^{(2)}\, P^{(1)-1} Q^{(1)}\, |\delta_1^{(k)}|, \tag{3.13}$$

    where

    $$P^{(1)} = \left\langle 2^{s-1}\Omega + \varphi(\mathcal{F}^{(1)})\right\rangle, \qquad Q^{(1)} = \left|\varphi(\mathcal{G}^{(1)}) - \varphi(\mathcal{U}^{(k)})\right| + \left|2^{s-1}\Omega - \varphi(\mathcal{M}) - \varphi(\mathcal{U}^{(k)})\right| + 2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j + \Gamma_{s-1}\Big) + \Theta,$$
    $$P^{(2)} = \left\langle 2^{s-1}\Omega + \varphi(\mathcal{F}^{(2)})\right\rangle, \qquad Q^{(2)} = \left|\varphi(\mathcal{G}^{(2)}) - \varphi(\mathcal{U}^{(k+\frac12)})\right| + \left|2^{s-1}\Omega - \varphi(\mathcal{M}) - \varphi(\mathcal{U}^{(k+\frac12)})\right| + 2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j + \Gamma_{s-1}\Big) + \Theta.$$

    First, we estimate $\left\|\Lambda^{-1} P^{(1)-1} Q^{(1)} \Lambda\right\|_\infty$. By Lemma 3.1 and [26], we have

    $$\left\|\Lambda^{-1} P^{(1)-1} Q^{(1)} \Lambda\right\|_\infty = \left\|\left(P^{(1)}\Lambda\right)^{-1}\left(Q^{(1)}\Lambda\right)\right\|_\infty \le \max_{1 \le i \le n} \frac{\left(Q^{(1)}\Lambda e\right)_i}{\left(P^{(1)}\Lambda e\right)_i}. \tag{3.14}$$

    Let

    $$\Gamma_j^{(1)} = \begin{cases} \left|D_{F_{s-1}^{(1)}} - D_{F_s^{(1)}}\right|, & \text{if } j = 1, \\[2pt] \left|2^{j-1} D_{F_{s-j}^{(1)}} - \sum_{t=s-j+1}^{s-1} 2^{s-t-1} D_{F_t^{(1)}} - D_{F_s^{(1)}}\right|, & \text{if } 2 \le j \le s-1, \end{cases}$$
    $$\Gamma_j^{(2)} = \begin{cases} \left|B_{F_{s-1}^{(1)}} - B_{F_s^{(1)}}\right|, & \text{if } j = 1, \\[2pt] \left|2^{j-1} B_{F_{s-j}^{(1)}} - \sum_{t=s-j+1}^{s-1} 2^{s-t-1} B_{F_t^{(1)}} - B_{F_s^{(1)}}\right|, & \text{if } 2 \le j \le s-1, \end{cases}$$

    and

    $$\Gamma_j^{(3)} = \begin{cases} \left|G_{s-1}^{(1)} - G_s^{(1)}\right|, & \text{if } j = 1, \\[2pt] \left|2^{j-1} G_{s-j}^{(1)} - \sum_{t=s-j+1}^{s-1} 2^{s-t-1} G_t^{(1)} - G_s^{(1)}\right|, & \text{if } 2 \le j \le s-1. \end{cases}$$

    Clearly, we have $\Gamma_j \le \Gamma_j^{(1)} + \Gamma_j^{(2)} + \Gamma_j^{(3)}$. By (3.4), we can get

    $$\Gamma_j^{(1)} = \begin{cases} D_{F_{s-1}^{(1)}} - D_{F_s^{(1)}}, & \text{if } j = 1, \\[2pt] 2^{j-1} D_{F_{s-j}^{(1)}} - \sum_{t=s-j+1}^{s-1} 2^{s-t-1} D_{F_t^{(1)}} - D_{F_s^{(1)}}, & \text{if } 2 \le j \le s-1. \end{cases}$$

    Then, by direct computation, we can obtain

    $$\varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) - 2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j^{(1)} + \Gamma_{s-1}^{(1)}\Big) = \sum_{i=1}^{s-1} 2^{s-i-1} D_{F_i^{(1)}} + D_{F_s^{(1)}} - 2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j^{(1)} + \Gamma_{s-1}^{(1)}\Big) = \left(2^s - 1\right) D_{F_s^{(1)}} - \sum_{i=1}^{s-1} 2^{s-i-1} D_{F_i^{(1)}}. \tag{3.15}$$

    Moreover, by reusing (3.4) $s-2$ times, we get

    $$\begin{aligned} 2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j^{(2)} + \Gamma_{s-1}^{(2)}\Big) + 2\left|\varphi(\mathcal{B}_{\mathcal{F}^{(1)}})\right| ={}& 2\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j^{(2)} + 2\Big|2^{s-2} B_{F_1^{(1)}} - \sum_{j=2}^{s-1} 2^{s-j-1} B_{F_j^{(1)}} - B_{F_s^{(1)}}\Big| \\ & + 2\Big|2^{s-2} B_{F_1^{(1)}} + \sum_{j=2}^{s-1} 2^{s-j-1} B_{F_j^{(1)}} + B_{F_s^{(1)}}\Big| \\ \le{}& 2\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j^{(2)} + 2^2\Big(\sum_{j=2}^{s-1} 2^{s-j-1} \big|B_{F_j^{(1)}}\big| + \big|B_{F_s^{(1)}}\big|\Big) \\ \le{}& \cdots \le 2^s \left|B_{F_s^{(1)}}\right|. \end{aligned} \tag{3.16}$$

    Analogously, by reusing (3.8) $s-2$ times, we get

    $$2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j^{(3)} + \Gamma_{s-1}^{(3)}\Big) + 2\left|\varphi(\mathcal{G}^{(1)})\right| \le 2^s \left|G_s^{(1)}\right|. \tag{3.17}$$

    By (3.15)–(3.17), we get

    $$\begin{aligned} P^{(1)}\Lambda e - Q^{(1)}\Lambda e ={}& \Big[\left\langle 2^{s-1}\Omega + \varphi(\mathcal{F}^{(1)})\right\rangle - \left|\varphi(\mathcal{G}^{(1)}) - \varphi(\mathcal{U}^{(k)})\right| - \left|2^{s-1}\Omega - \varphi(\mathcal{M}) - \varphi(\mathcal{U}^{(k)})\right| \\ & - 2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j + \Gamma_{s-1}\Big) - \Theta\Big]\Lambda e \\ \ge{}& \Big\{2^{s-1}\Omega + \Big[\varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) - 2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j^{(1)} + \Gamma_{s-1}^{(1)}\Big)\Big] - \Big[2\left|\varphi(\mathcal{B}_{\mathcal{F}^{(1)}})\right| + 2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j^{(2)} + \Gamma_{s-1}^{(2)}\Big)\Big] \\ & - \Big[2\left|\varphi(\mathcal{G}^{(1)})\right| + 2\Big(\sum_{j=1}^{s-2} 2^{s-j-1}\Gamma_j^{(3)} + \Gamma_{s-1}^{(3)}\Big)\Big] - \varphi(\mathcal{U}^{(k)}) - \left|2^{s-1}\Omega - \varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) - \varphi(\mathcal{U}^{(k)}) - \Theta\right|\Big\}\Lambda e \\ \ge{}& \Big[2^{s-1}\Omega + \left(2^s - 1\right) D_{F_s^{(1)}} - \sum_{i=1}^{s-1} 2^{s-i-1} D_{F_i^{(1)}} - 2^s\left|B_{F_s^{(1)}}\right| - 2^s\left|G_s^{(1)}\right| - \varphi(\mathcal{U}^{(k)}) \\ & - \left|2^{s-1}\Omega - \varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) - \varphi(\mathcal{U}^{(k)}) - \Theta\right|\Big]\Lambda e. \end{aligned} \tag{3.18}$$

    Next, consider the following two cases.

    Case 1. When

    $$\Omega \ge 2^{1-s}\left(\varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) + \varphi(\mathcal{U}) + \Theta\right),$$

    by (3.18), we have

    $$\begin{aligned} P^{(1)}\Lambda e - Q^{(1)}\Lambda e \ge{}& \Big[2^{s-1}\Omega + \left(2^s - 1\right) D_{F_s^{(1)}} - \sum_{i=1}^{s-1} 2^{s-i-1} D_{F_i^{(1)}} - 2^s\left|B_{F_s^{(1)}}\right| - 2^s\left|G_s^{(1)}\right| - \varphi(\mathcal{U}^{(k)}) \\ & - \Big(2^{s-1}\Omega - \varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) - \varphi(\mathcal{U}^{(k)}) - \Theta\Big)\Big]\Lambda e \\ ={}& \left[2^s\left(D_{F_s^{(1)}} - \left|G_s^{(1)}\right| - \left|B_{F_s^{(1)}}\right|\right) + \Theta\right]\Lambda e = \left[2^s\left(\left\langle F_s^{(1)}\right\rangle - \left|G_s^{(1)}\right|\right) + \Theta\right]\Lambda e > 0. \end{aligned}$$

    Case 2. When

    $$\left[2^{1-s}\left(\varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) + \varphi(\mathcal{U})\right) + 2^{-s}\Theta - \left(\left\langle F_s^{(1)}\right\rangle - \left|G_s^{(1)}\right|\right)\right]\Lambda e < \Omega\Lambda e < 2^{1-s}\left(\varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) + \Theta\right)\Lambda e,$$

    by (3.18), we have

    $$\begin{aligned} P^{(1)}\Lambda e - Q^{(1)}\Lambda e \ge{}& \Big[2^{s-1}\Omega + \left(2^s - 1\right) D_{F_s^{(1)}} - \sum_{i=1}^{s-1} 2^{s-i-1} D_{F_i^{(1)}} - 2^s\left|B_{F_s^{(1)}}\right| - 2^s\left|G_s^{(1)}\right| - \varphi(\mathcal{U}^{(k)}) \\ & - \Big(\varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) + \varphi(\mathcal{U}^{(k)}) + \Theta - 2^{s-1}\Omega\Big)\Big]\Lambda e \\ \ge{}& \left[2^s\Omega - 2\varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) - 2\varphi(\mathcal{U}) - \Theta + 2^s\left(D_{F_s^{(1)}} - \left|G_s^{(1)}\right| - \left|B_{F_s^{(1)}}\right|\right)\right]\Lambda e \\ ={}& \left[2^s\Omega - 2\varphi(\mathcal{D}_{\mathcal{F}^{(1)}}) - 2\varphi(\mathcal{U}) - \Theta + 2^s\left(\left\langle F_s^{(1)}\right\rangle - \left|G_s^{(1)}\right|\right)\right]\Lambda e > 0. \end{aligned}$$

    Summarizing Cases 1 and 2, we immediately get $P^{(1)}\Lambda e - Q^{(1)}\Lambda e > 0$, provided that (3.9) or (3.10) holds, which implies that

    $$\left\|\Lambda^{-1} P^{(1)-1} Q^{(1)} \Lambda\right\|_\infty < 1$$

    by (3.14).

    By similar deductions, we can also get

    $$\left\|\Lambda^{-1} P^{(2)-1} Q^{(2)} \Lambda\right\|_\infty < 1,$$

    when (3.9) or (3.10) is satisfied.

    In summary, the spectral radius of the iteration matrix in (3.13) can be estimated as follows:

    $$\rho\left(P^{(2)-1} Q^{(2)}\, P^{(1)-1} Q^{(1)}\right) = \rho\left(\Lambda^{-1} P^{(2)-1} Q^{(2)} \Lambda \cdot \Lambda^{-1} P^{(1)-1} Q^{(1)} \Lambda\right) \le \left\|\Lambda^{-1} P^{(2)-1} Q^{(2)} \Lambda\right\|_\infty \left\|\Lambda^{-1} P^{(1)-1} Q^{(1)} \Lambda\right\|_\infty < 1.$$

    Hence, we can see that $\{x_1^{(k)}\}_{k=0}^{\infty}$ converges by (3.13). $\square$

    Based on Theorem 3.1, we have the next theorem for the TMAOR method.

    Theorem 3.2. With the same notations and assumptions as in Theorem 3.1, for $t = 1, 2$, let $F_i^{(t)}$ and $G_i^{(t)}$ be given by (2.4), $1 \le i \le s$. Assume that

    $$0 < \beta \le \alpha < \frac{2}{1 + \rho\left(D_{M_s}^{-1}\left|B_{M_s}\right|\right)}. \tag{3.19}$$

    Then, the TMAOR converges to the solution of the VNCP.

    Proof. By (2.4), with direct computation, we can get

    $$\left\langle F_s^{(1)}\right\rangle - \left|G_s^{(1)}\right| = \left\langle F_s^{(2)}\right\rangle - \left|G_s^{(2)}\right| = \frac{1 - |1 - \alpha|}{\alpha}\, D_{M_s} - \left|B_{M_s}\right|,$$

    if $0 < \beta \le \alpha$. Since $M_s$ is an H$_+$-matrix, by [5] we have $\rho(D_{M_s}^{-1}|B_{M_s}|) < 1$. Since (3.19) holds, we can easily verify that $\frac{1 - |1-\alpha|}{\alpha} D_{M_s} - |B_{M_s}|$ is a nonsingular M-matrix. Note that all the assumptions of Theorem 3.1 then hold. Hence, the TMAOR is convergent by Theorem 3.1. $\square$
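    The admissible parameter range in (3.19) is straightforward to evaluate numerically. The sketch below (illustrative Python/NumPy, using a small tridiagonal matrix of the same pattern as the $R$ in Example 4.1 below) computes $\rho(D^{-1}|B|)$ for a given $M_s$ and the resulting upper bound $2/(1+\rho)$ on $\alpha$.

```python
import numpy as np

def aor_upper_bound(Ms):
    """Upper bound 2 / (1 + rho(D^{-1}|B|)) from (3.19), with D the diagonal
    part of Ms and B = D - Ms its off-diagonal part."""
    D = np.diag(np.diag(Ms))
    B = D - Ms
    rho = max(abs(np.linalg.eigvals(np.linalg.solve(D, np.abs(B)))))
    return 2.0 / (1.0 + rho)

# Tridiagonal (-1, 4, -1) pattern, small size m = 4 for illustration.
m = 4
R = 4.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
print(aor_upper_bound(R) > 1.2)  # → True
```

    For this matrix the bound is roughly 1.42, so the whole relaxation-parameter range scanned in Section 4 (0.8 to 1.2) lies inside the admissible interval.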

    Next, some numerical examples are given to show the efficiency of Method 2.2 compared to Method 2.1.

    Example 4.1. [42] Consider a VNCP ($s = 2$), where the two system matrices are given by

    $$M_1 = \begin{pmatrix} R & -I_m & & \\ -I_m & R & \ddots & \\ & \ddots & \ddots & -I_m \\ & & -I_m & R \end{pmatrix} \in \mathbb{R}^{n\times n} \quad \text{and} \quad M_2 = \begin{pmatrix} R & & & \\ & R & & \\ & & \ddots & \\ & & & R \end{pmatrix} \in \mathbb{R}^{n\times n},$$

    where

    $$R = \begin{pmatrix} 4 & -1 & & \\ -1 & 4 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 4 \end{pmatrix} \in \mathbb{R}^{m\times m}.$$
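    Block structures of this kind are convenient to assemble with Kronecker products. The sketch below (illustrative Python/NumPy; the off-diagonal blocks of $M_1$ are taken as $-I_m$, consistent with the tridiagonal sign pattern of $R$, and the number of diagonal blocks is a free parameter) builds the two system matrices of Example 4.1.

```python
import numpy as np

def example_41_matrices(m, blocks):
    """M1 = block-tridiag(-I, R, -I), M2 = block-diag(R, ..., R),
    with R = tridiag(-1, 4, -1) in R^{m x m}; n = blocks * m."""
    I = np.eye(m)
    R = 4.0 * I - np.eye(m, k=1) - np.eye(m, k=-1)
    S = np.eye(blocks, k=1) + np.eye(blocks, k=-1)   # block-tridiagonal pattern
    M1 = np.kron(np.eye(blocks), R) - np.kron(S, I)
    M2 = np.kron(np.eye(blocks), R)
    return M1, M2

M1, M2 = example_41_matrices(m=3, blocks=3)
print(M1.shape, np.allclose(M1, M1.T))  # → (9, 9) True
```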

    Example 4.2. [42] Consider a VNCP ($s = 2$), where the two system matrices are given by

    $$M_1 = \begin{pmatrix} T & -0.5 I_m & & \\ -1.5 I_m & T & \ddots & \\ & \ddots & \ddots & -0.5 I_m \\ & & -1.5 I_m & T \end{pmatrix} \in \mathbb{R}^{n\times n} \quad \text{and} \quad M_2 = \begin{pmatrix} T & & & \\ & T & & \\ & & \ddots & \\ & & & T \end{pmatrix} \in \mathbb{R}^{n\times n},$$

    where

    $$T = \begin{pmatrix} 4 & -0.5 & & \\ -1.5 & 4 & \ddots & \\ & \ddots & \ddots & -0.5 \\ & & -1.5 & 4 \end{pmatrix} \in \mathbb{R}^{m\times m}.$$

    Example 4.3. Consider the VNCP ($s = 2$) whose system matrices $M_1, M_2 \in \mathbb{R}^{m^2\times m^2}$ come from the discretization of the Hamilton-Jacobi-Bellman (HJB) equation [4]

    $$\begin{cases} \max\limits_{i=1,2}\left\{L_i z + f_i\right\} = 0 & \text{in } \Gamma, \\ z = 0 & \text{on } \partial\Gamma, \end{cases}$$

    with $\Gamma = \{(x, y) \mid 0 < x < 2,\ 0 < y < 1\}$ and

    $$\begin{cases} L_1 z = 0.002\, z_{xx} + 0.001\, z_{yy} - 20\, u_1(z), & f_1 = 1, \\ L_2 z = 0.001\, z_{xx} + 0.001\, z_{yy} - 10\, u_2(z), & f_2 = 1. \end{cases}$$

    Same as in [42], all the nonlinear functions in Examples 4.1–4.3 are set to $u_1(z) = \frac{z}{1+z}$ and $u_2(z) = \arctan(z)$, both applied componentwise.

    The programming language is MATLAB, and the codes are run on a PC with a 12th Gen Intel(R) Core(TM) i7-12700 2.10 GHz CPU and 32 GB of memory. Consider the Gauss-Seidel (abbreviated as GS), SOR, and AOR splittings, where the SOR splittings are

    $$F_i^{(1)} = \frac{1}{\alpha}\left(D_{M_i} - \alpha L_{M_i}\right), \quad G_i^{(1)} = F_i^{(1)} - M_i, \qquad F_i^{(2)} = \frac{1}{\alpha}\left(D_{M_i} - \alpha U_{M_i}\right), \quad G_i^{(2)} = F_i^{(2)} - M_i,$$

    with $\alpha$ the relaxation parameter, varied from 0.8 to 1.2 with a step size of 0.1, while the parameters of the AOR splittings given by (2.4) satisfy $\alpha = \beta + 0.1$, with $\beta$ varied from 0.8 to 1.2 with a step size of 0.1.

    All initial vectors are set to $x_1^{(0)} = e$. The stopping criterion in the iteration of both Methods 2.1 and 2.2 is taken as

    $$\left\|\min\left\{z^{(k)},\, M_1 z^{(k)} + u_1(z^{(k)}),\, M_2 z^{(k)} + u_2(z^{(k)})\right\}\right\|_2 < 10^{-10},$$

    and $\Omega$ is chosen as

    $$\Omega = \frac{1}{2\alpha}\left(D_{M_1} + D_{M_2}\right) + I.$$

    Let $m = 256, 512, 1024$ for the three examples. For a fair comparison, the relaxation parameters are chosen as the experimentally optimal ones for each size of the three examples; the results of the MMS and TMMS methods are shown in Tables 1–3, where "IT" denotes the number of iteration steps, "CPU" the elapsed CPU time, and

    $$\mathrm{SAVE} = \frac{\mathrm{CPU}_{\mathrm{MMS}} - \mathrm{CPU}_{\mathrm{TMMS}}}{\mathrm{CPU}_{\mathrm{MMS}}} \times 100\%.$$
    Table 1. Numerical results of Example 4.1.

    | m | splitting | Method 2.1 IT | Method 2.1 CPU | Method 2.2 IT | Method 2.2 CPU | SAVE |
    |------|-----|-----|---------|-----|---------|-----|
    | 256 | GS | 98 | 0.6945 | 50 | 0.5976 | 14% |
    | 256 | SOR | 94 | 0.6679 | 46 | 0.5652 | 15% |
    | 256 | AOR | 97 | 0.7469 | 47 | 0.5902 | 21% |
    | 512 | GS | 100 | 3.7686 | 51 | 3.1709 | 16% |
    | 512 | SOR | 94 | 3.6039 | 46 | 3.0237 | 16% |
    | 512 | AOR | 101 | 4.0266 | 47 | 3.1373 | 22% |
    | 1024 | GS | 102 | 22.7605 | 52 | 18.9571 | 17% |
    | 1024 | SOR | 134 | 21.1283 | 47 | 17.7199 | 16% |
    | 1024 | AOR | 97 | 24.4213 | 47 | 18.8083 | 23% |
    Table 2. Numerical results of Example 4.2.

    | m | splitting | Method 2.1 IT | Method 2.1 CPU | Method 2.2 IT | Method 2.2 CPU | SAVE |
    |------|-----|-----|---------|-----|---------|-----|
    | 256 | GS | 86 | 0.5939 | 34 | 0.3792 | 36% |
    | 256 | SOR | 69 | 0.4937 | 31 | 0.3571 | 27% |
    | 256 | AOR | 71 | 0.5252 | 32 | 0.4136 | 21% |
    | 512 | GS | 89 | 3.4883 | 35 | 2.1793 | 38% |
    | 512 | SOR | 70 | 2.6886 | 31 | 1.9244 | 28% |
    | 512 | AOR | 71 | 4.5891 | 32 | 3.3323 | 27% |
    | 1024 | GS | 91 | 19.1887 | 36 | 12.4029 | 35% |
    | 1024 | SOR | 70 | 15.3799 | 32 | 11.5026 | 25% |
    | 1024 | AOR | 72 | 17.7342 | 33 | 13.3755 | 24% |
    Table 3. Numerical results of Example 4.3.

    | m | splitting | Method 2.1 IT | Method 2.1 CPU | Method 2.2 IT | Method 2.2 CPU | SAVE |
    |------|-----|------|----------|-----|----------|-----|
    | 256 | GS | 76 | 0.6216 | 39 | 0.5478 | 12% |
    | 256 | SOR | 59 | 0.4853 | 30 | 0.4394 | 9% |
    | 256 | AOR | 54 | 0.4732 | 28 | 0.4103 | 13% |
    | 512 | GS | 288 | 17.4591 | 145 | 14.2667 | 18% |
    | 512 | SOR | 253 | 12.6084 | 101 | 11.6480 | 7% |
    | 512 | AOR | 204 | 14.9972 | 103 | 12.6497 | 16% |
    | 1024 | GS | 1137 | 172.0573 | 569 | 144.5840 | 16% |
    | 1024 | SOR | 885 | 137.8101 | 443 | 116.6376 | 15% |
    | 1024 | AOR | 805 | 130.3913 | 403 | 111.1060 | 15% |

    It can be seen from the numerical results that Method 2.2 always converges faster than Method 2.1 in all cases. Specifically, Method 2.1 requires almost twice as many iteration steps as Method 2.2; this gain is partly offset by the fact that each iteration of Method 2.2 requires solving two linear systems versus only one in Method 2.1. Focusing on the CPU time, the cost of Method 2.2 is 14%–23% less than that of Method 2.1 for Example 4.1, while the corresponding savings for Examples 4.2 and 4.3 are 21%–38% and 7%–18%, respectively. It is clear that the two-step splitting technique succeeds in accelerating the convergence of the MMS iteration method.

    By means of the two-step matrix splitting technique, we have constructed the TMMS iteration method for solving the VNCP. We have also presented convergence theorems for the TMMS iteration method for arbitrary $s$, which extend the scope of modulus-based methods for the VNCP.

    Note that, to show the effectiveness of the proposed two-step method, the choice of the two-step splittings in Section 4 is the common one in the existing literature. However, since the iteration matrix in (3.13) is essentially an extended bound containing absolute-value operations, it is very hard to minimize its spectral radius. How to choose an optimal two-step splitting, in general or based on some special structure, is still an open question. On the other hand, there are some other acceleration techniques for the MMS iteration method, such as relaxation, preconditioning, and two-sweep strategies. One can also combine such a technique with the two-step splittings for further acceleration, e.g., [3,22,30,47]. More techniques to improve the convergence of the MMS iteration method are worth studying in the future.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by Basic and Applied Basic Research Foundation of Guangdong (No. 2024A1515011822), Scientific Computing Research Innovation Team of Guangdong Province (No. 2021KCXTD052), Science and Technology Development Fund, Macau SAR (No. 0096/2022/A), Guangdong Key Construction Discipline Research Capacity Enhancement Project (No. 2022ZDJS049) and Technology Planning Project of Shaoguan (No. 230330108034184).

    The authors declare that they have no conflict of interest.



    [1] R. Katam, V. D. K. Pasupuleti, P. Kalapatapu, A review on structural health monitoring: past to present, Innov. Infrastruct. Solut., 8 (2023), 248. https://doi.org/10.1007/s41062-023-01217-3 doi: 10.1007/s41062-023-01217-3
    [2] O. S. Sonbul, M. Rashid, Algorithms and techniques for the structural health monitoring of bridges: systematic literature review, Sensors, 23 (2023), 4230. https://doi.org/10.3390/s23094230 doi: 10.3390/s23094230
    [3] M. Shibu, K. P. Kumar, V. J. Pillai, H. Murthy, S. Chandra, Structural health monitoring using AI and ML based multimodal sensors data, Meas. Sens., 27 (2023), 100762. https://doi.org/10.1016/j.measen.2023.100762 doi: 10.1016/j.measen.2023.100762
    [4] Y. J. Cha, R. Ali, J. Lewis, O. Büyüköztürk, Deep learning-based structural health monitoring, Automat. Constr., 161 (2024), 105328. https://doi.org/10.1016/j.autcon.2024.105328 doi: 10.1016/j.autcon.2024.105328
    [5] A. Anjum, M. Hrairi, A. Aabid, N. Yatim, M. Ali, Civil structural health monitoring and machine learning: a comprehensive review, Fratt. Integr. Strutturale, 69 (2024), 43–59. https://doi.org/10.3221/IGF-ESIS.69.04 doi: 10.3221/IGF-ESIS.69.04
    [6] M. Rodrigues, V. L. Miguéis, C. Felix, C. Rodrigues, Machine learning and cointegration for structural health monitoring of a model under environmental effects, Expert Syst. Appl., 238 (2024), 121739. https://doi.org/10.1016/j.eswa.2023.121739 doi: 10.1016/j.eswa.2023.121739
    [7] H. Son, Y. Jang, S. E. Kim, D. Kim, J. W. Park, Deep learning-based anomaly detection to classify inaccurate data and damaged condition of a cable-stayed bridge, IEEE Access, 9 (2021), 124549–124559. https://doi.org/10.1109/ACCESS.2021.3100419 doi: 10.1109/ACCESS.2021.3100419
    [8] K. Maes, L. Van Meerbeeck, E. P. B. Reynders, G. Lombaert, Validation of vibration-based structural health monitoring on retrofitted railway bridge KW51, Mech. Syst. Signal Process., 165 (2022), 108380. https://doi.org/10.1016/j.ymssp.2021.108380 doi: 10.1016/j.ymssp.2021.108380
    [9] C. R. Farrar, K. Worden, Structural health monitoring: a machine learning perspective, John Wiley and Sons, 2012. https://doi.org/10.1002/9781118443118
    [10] C. Scuro, F. Lamonaca, S. Porzio, G. Milani, R. S. Olivito, Internet of Things (IoT) for masonry structural health monitoring (SHM): overview and examples of innovative systems, Constr. Build. Mater., 290 (2021), 123092. https://doi.org/10.1016/j.conbuildmat.2021.123092 doi: 10.1016/j.conbuildmat.2021.123092
    [11] A. Malekloo, E. Ozer, M. AlHamaydeh, M. Girolami, Machine learning and structural health monitoring overview with emerging technology and high-dimensional data source highlights, Struct. Health Monit., 21 (2022), 1906–1955. https://doi.org/10.1177/14759217211036880 doi: 10.1177/14759217211036880
    [12] A. Pelle, B. Briseghella, G. Fiorentino, G. F. Giaccu, D. Lavorato, G. Quaranta, et al., Repair of reinforced concrete bridge columns subjected to chloride-induced corrosion with ultra-high performance fiber reinforced concrete, Struct. Concr., 24 (2023), 332–344. https://doi.org/10.1002/suco.202200555 doi: 10.1002/suco.202200555
    [13] M. Omori Yano, E. Figueiredo, S. da Silva, A. Cury, I. Moldovan, Transfer learning for structural health monitoring in bridges that underwent retrofitting, Buildings, 13 (2023), 2323. https://doi.org/10.3390/buildings13092323 doi: 10.3390/buildings13092323
    [14] C. Flexa, W. Gomes, C. Sales, Data normalization in structural health monitoring by means of nonlinear filtering, 2019 8th Brazilian Conference on Intelligent Systems (BRACIS), 2019,204–209. https://doi.org/10.1109/BRACIS.2019.00044
    [15] K. Worden, L. A. Bull, P. Gardner, J. Gosliga, T. J. Rogers, E. J. Cross, et al., A brief introduction to recent developments in population-based structural health monitoring, Front. Built Environ., 6 (2020), 146. https://doi.org/10.3389/fbuil.2020.00146 doi: 10.3389/fbuil.2020.00146
    [16] P. Gardner, L. A. Bull, N. Dervilis, K. Worden, Domain-adapted Gaussian mixture models for population-based structural health monitoring, J. Civil Struct. Health Monit., 12 (2022), 1343–1353. https://doi.org/10.1007/s13349-022-00565-5 doi: 10.1007/s13349-022-00565-5
    [17] K. Maes, G. Lombaert, Monitoring railway bridge KW51 before, during, and after retrofitting, J. Bridge Eng., 26 (2021), 04721001. https://doi.org/10.1061/(ASCE)BE.1943-5592.0001668 doi: 10.1061/(ASCE)BE.1943-5592.0001668
    [18] P. Welch, The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms, IEEE Trans. Audio Electroacoust., 15 (1967), 70–73. https://doi.org/10.1109/TAU.1967.1161901 doi: 10.1109/TAU.1967.1161901
    [19] R. Katam, V. D. K. Pasupuleti, P. Kalapatapu, A review on structural health monitoring: past to present, Innov. Infrastruct. Solut., 8 (2023), 248. https://doi.org/10.1007/s41062-023-01217-3 doi: 10.1007/s41062-023-01217-3
    [20] F. A. Amjad, H. Toozandehjani, Time-Frequency analysis using Stockwell transform with application to guided wave structural health monitoring, Iran. J. Sci. Technol. Trans. Civ. Eng., 47 (2023), 3627–3647. https://doi.org/10.1007/s40996-023-01224-5 doi: 10.1007/s40996-023-01224-5
    [21] I. T. Joliffe, Principal component analysis, 2 Eds., Springer, 2011.
    [22] D. H. Wolpert, W. G. Macready, No free lunch theorems for optimization, IEEE Trans. Evolut. Comput., 1 (1997), 67–82. https://doi.org/10.1109/4235.585893 doi: 10.1109/4235.585893
    [23] T. K. Ho, Random decision forests, Proceedings of 3rd International Conference on Document Analysis and Recognition, 1 (1995), 278–282. https://doi.org/10.1109/ICDAR.1995.598994
    [24] L. Breiman, Random forests, Mach. Learn., 45 (2001), 5–32. https://doi.org/10.1023/A:1010933404324 doi: 10.1023/A:1010933404324
    [25] M. C. Cheng, M. Bonopera, L. J. Leu, Applying random forest algorithm for highway bridge-type prediction in areas with a high seismic risk, J. Chin. Inst. Eng., 47 (2024), 597–610. https://doi.org/10.1080/02533839.2024.2368464 doi: 10.1080/02533839.2024.2368464
    [26] T. Hastie, R. Tibshirani, J. Friedman, The elements of statistical learning: data mining, inference and prediction, 2 Eds., Springer-Verlag, 2009. https://doi.org/10.1007/978-0-387-84858-7
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
