
Semi-supervised random forest regression model based on co-training and grouping with information entropy for evaluation of depression symptoms severity

  • Received: 08 March 2021 Accepted: 19 May 2021 Published: 27 May 2021
  • Semi-supervised learning is a long-standing topic in machine learning: it exploits a large amount of unlabeled data to improve model performance. This paper combines the co-training strategy with random forests to propose a novel semi-supervised regression algorithm, the semi-supervised random forest regression model based on co-training and grouping with information entropy (E-CoGRF), and applies it to the evaluation of depression symptom severity. The algorithm inherits the ensemble character of the random forest and integrates naturally with co-training. To balance the accuracy and diversity of the co-trained random forests, a grouping strategy for the decision trees is proposed. Moreover, information entropy is used to measure confidence, which avoids unnecessary repeated training and improves the efficiency of the model. For the practical evaluation of depression symptom severity, we collected cognitive-behavioral data on emotional conflict in depressive affective disorder, and on this basis carried out feature construction and normalization preprocessing. Finally, the model was tested on 35 labeled and 80 unlabeled depression patients. The proposed algorithm obtains MAE (Mean Absolute Error) = 3.63 and RMSE (Root Mean Squared Error) = 4.50, which is better than other semi-supervised regression algorithms. The proposed method effectively alleviates the modeling difficulties caused by insufficient labeled samples, and provides a useful reference for the diagnosis of depression symptom severity.

    Citation: Shengfu Lu, Xin Shi, Mi Li, Jinan Jiao, Lei Feng, Gang Wang. Semi-supervised random forest regression model based on co-training and grouping with information entropy for evaluation of depression symptoms severity[J]. Mathematical Biosciences and Engineering, 2021, 18(4): 4586-4602. doi: 10.3934/mbe.2021233




    In 2008, Bhattacharya et al. [5] and Bell et al. [4] showed that the connected bipartite graphs whose largest eigenvalue is maximal among bipartite graphs of fixed order and size are bipartite chain graphs, named therein double nested graphs. Since then, many scholars have studied mathematical properties of chain graphs. Andelic et al. gave some upper and lower bounds on the index of chain graphs [3], and Alazemi et al. proved that any chain graph has its least positive eigenvalue greater than \frac{1}{2} [2]. Zhang et al. proposed upper bounds on the Laplacian spectral radius of chain graphs [13], and Das et al. studied the energy and Laplacian energy of chain graphs [8]. In this paper, we further study some bounds on the energy and Laplacian energy of chain graphs.

    We consider finite undirected connected graphs without loops or multiple edges. Let G be such a graph with vertex set V(G) = \{v_{1}, v_{2}, \dots, v_{n}\} and edge set E(G) , where |E(G)| = m . Let d_{i} be the degree of the vertex v_{i} for i = 1, 2, \dots, n . The minimum vertex degree of G is denoted by \delta(G) . Let N_{G}(v_{i}) be the set of vertices adjacent to v_{i} , so that d_{i} = |N_{G}(v_{i})| . If G has distinct vertices v_{i} and v_{j} with N_{G}(v_{i}) = N_{G}(v_{j}) , then v_{i} and v_{j} are duplicates and (v_{i}, v_{j}) is a duplicate pair.

    Let A(G) be the adjacency matrix of G , and rank(G) be the rank of the adjacency matrix A(G) . Let \lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n} be the eigenvalues of A(G) . We denote by S(G) = \{\lambda_{1}, \lambda_{2}, \dots, \lambda_{n}\} the spectrum of G . The energy of the graph G is defined as [11]

    \begin{equation*} \varepsilon(G) = \sum\limits_{i = 1}^{n}|\lambda_{i}|. \end{equation*}

    For its basic properties and applications, including various lower and upper bounds, see [17], the recent papers [1,7,8,11,12,20] and the references cited therein.
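    As a small numerical illustration (ours, not part of the original paper; numpy assumed), the definition can be checked on the complete bipartite graph K_{2,3} , whose spectrum is \{\pm\sqrt{6}, 0, 0, 0\} , so that \varepsilon(K_{2,3}) = 2\sqrt{6} :

```python
import numpy as np

# Adjacency matrix of the complete bipartite graph K_{2,3}.
A = np.zeros((5, 5))
A[:2, 2:] = 1        # join the 2-vertex side to the 3-vertex side
A = A + A.T

# epsilon(G) = sum of the absolute values of the adjacency eigenvalues.
energy = np.abs(np.linalg.eigvalsh(A)).sum()

# K_{p,q} has spectrum {+-sqrt(pq), 0, ...}, so epsilon(K_{2,3}) = 2*sqrt(6).
print(round(energy, 6))  # 4.898979
```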

    The Laplacian matrix of the graph G is defined as L(G) = D(G)-A(G) , where D(G) is the diagonal matrix of vertex degrees. The matrix L(G) has non-negative eigenvalues \mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{n-1}\geq\mu_{n} = 0 , and the Laplacian spectrum of the graph G is denoted by LS(G) = \{\mu_{1}, \mu_{2}, \dots, \mu_{n}\} . The Laplacian energy of G is defined as [10]

    \begin{equation*} LE(G) = \sum\limits_{i = 1}^{n}\left|\mu_{i}-\frac{2m}{n}\right|. \end{equation*}

    It can also be defined as

    \begin{equation} LE(G) = 2S_{\sigma}(G)-\frac{4m\sigma}{n}, \end{equation} (1.1)

    where \sigma\ (1\leq\sigma\leq n) is the largest positive integer such that \mu_{\sigma}\geq\frac{2m}{n} and S_{k}(G) = \sum\limits_{i = 1}^{k}\mu_{i} .

    For its basic properties, including various lower and upper bounds, see [7,8,10,18,19] and the references cited therein. The Laplacian energy found applications not only in theoretical organic chemistry [12,21], but also in image processing [22] and information theory [16].
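    The two expressions for LE(G) agree, which is easy to confirm numerically; the following sketch (our illustration, numpy assumed) checks both on the star K_{1,5} , where both give \frac{26}{3} :

```python
import numpy as np

# Laplacian of the star K_{1,5}: L = D - A.
n = 6
A = np.zeros((n, n)); A[0, 1:] = 1; A = A + A.T
L = np.diag(A.sum(axis=1)) - A

mu = np.sort(np.linalg.eigvalsh(L))[::-1]   # mu_1 >= ... >= mu_n = 0
m = A.sum() / 2
avg = 2 * m / n                             # average degree 2m/n

# First definition: LE(G) = sum_i |mu_i - 2m/n|.
le_direct = np.abs(mu - avg).sum()

# Definition (1.1): sigma is the largest index with mu_sigma >= 2m/n,
# and S_sigma(G) is the sum of the sigma largest Laplacian eigenvalues.
sigma = int((mu >= avg).sum())
le_sigma = 2 * mu[:sigma].sum() - 4 * m * sigma / n

print(round(le_direct, 6), round(le_sigma, 6))  # 8.666667 8.666667
```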

    Among the bipartite graphs of fixed order and size, those having maximal spectral radius of the adjacency/Laplacian/signless Laplacian matrix are chain graphs. Thus, they can be significant in modeling bipartite networks with large spectral radius. Their applications involve ecological networks, in which graphs with nested properties are considered [14], and they are also used in some applications for economic network modeling.

    We now describe the structure of a (connected) chain graph. The vertex set of any chain graph consists of two color classes U and V . Both of them are partitioned into h non-empty units U_{1}, U_{2}, \dots, U_{h} and V_{1}, V_{2}, \dots, V_{h} , respectively. All the vertices in U_{s} are joined by edges to all vertices in \bigcup_{k = 1}^{h+1-s}V_{k} , for s = 1, 2, \dots, h . Therefore, if u_{i}\in U_{s+1} and u_{j}\in U_{s} , then N_{G}(u_{i})\subseteq N_{G}(u_{j}) ; similarly, if v_{i}\in V_{t+1} and v_{j}\in V_{t} , then N_{G}(v_{i})\subseteq N_{G}(v_{j}) .

    If n_{s} = |U_{s}| and m_{s} = |V_{s}| for s = 1, 2, \dots, h , then G is denoted by G(m_{1}, \dots, m_{h}; n_{1}, \dots, n_{h}) , as shown in Figure 1. And

    \begin{equation*} m = m_{1}\sum\limits_{i = 1}^{h}n_{i}+m_{2}\sum\limits_{i = 1}^{h-1}n_{i}+\cdots+m_{h}n_{1} = \sum\limits_{i = 1}^{h}a_{i}m_{i}, \end{equation*}
    \begin{equation*} m = n_{1}\sum\limits_{i = 1}^{h}m_{i}+n_{2}\sum\limits_{i = 1}^{h-1}m_{i}+\cdots+n_{h}m_{1} = \sum\limits_{i = 1}^{h}b_{i}n_{i}, \end{equation*}
    Figure 1.  Structure of G(m_{1}, \dots, m_{h}; n_{1}, \dots, n_{h}) .

    where

    \begin{equation*} a_{i} = \sum\limits_{k = 1}^{h+1-i}n_{k}, \qquad b_{i} = \sum\limits_{k = 1}^{h+1-i}m_{k}. \end{equation*}

    Moreover,

    \begin{equation*} n = \sum\limits_{k = 1}^{h}m_{k}+\sum\limits_{k = 1}^{h}n_{k}. \end{equation*}
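    The unit structure above is straightforward to realize in code. The following sketch (our illustration, numpy assumed; the helper name chain_graph is ours) builds the adjacency matrix of G(m_{1}, \dots, m_{h}; n_{1}, \dots, n_{h}) and verifies the edge-count identity m = \sum_{i}a_{i}m_{i} for G(1, 2;2, 3) :

```python
import numpy as np

def chain_graph(ms, ns):
    """Adjacency matrix of G(m_1,...,m_h; n_1,...,n_h), where |V_s| = m_s,
    |U_s| = n_s, and U_s is completely joined to V_1, ..., V_{h+1-s}."""
    h, nU = len(ms), sum(ns)
    A = np.zeros((nU + sum(ms), nU + sum(ms)))
    uo = np.cumsum([0] + list(ns))        # vertex offsets of the units U_s
    vo = np.cumsum([0] + list(ms)) + nU   # vertex offsets of the units V_s
    for s in range(h):
        for k in range(h - s):            # V_1, ..., V_{h+1-(s+1)} in 0-based form
            A[uo[s]:uo[s + 1], vo[k]:vo[k + 1]] = 1
    return A + A.T

ms, ns = (1, 2), (2, 3)                   # G(1,2; 2,3), i.e. h = 2
A = chain_graph(ms, ns)
h = len(ms)
a = [sum(ns[:h + 1 - i]) for i in range(1, h + 1)]  # a_i = n_1 + ... + n_{h+1-i}

print(int(A.sum() // 2), sum(ai * mi for ai, mi in zip(a, ms)))  # 9 9
```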

    The second smallest Laplacian eigenvalue \mu_{n-1} of a graph is well known as the algebraic connectivity. It has been proved that \mu_{n-1} = 0 if and only if G is disconnected. The algebraic connectivity is often applied in theoretical chemistry, control theory, combinatorial optimization and other fields [15].

    As usual, K_{n} , K_{p, q}\ (p+q = n) and K_{1, n-1} denote, respectively, the complete graph, the complete bipartite graph and the star on n vertices. For other undefined notation and terminology from graph theory, the reader is referred to [6].

    The paper is organized as follows. In Section 2, we list some previously known results. In Section 3, we obtain some upper and lower bounds on \varepsilon(G) of a chain graph G . In Section 4, we establish an upper bound on LE(G) of chain graphs in terms of the vertex cover number. In Section 5, we attain the maximal Laplacian energy of the bicyclic chain graphs by comparing their algebraic connectivity.

    This section lists some known results to be used in this paper.

    Lemma 2.1. [8] Let B be a p\times p real symmetric matrix and B_{k} be its leading k\times k submatrix. Then for i = 1, 2, \dots, k ,

    \begin{equation*} \lambda_{p-i+1}(B)\leq\lambda_{k-i+1}(B_{k})\leq\lambda_{k-i+1}(B), \end{equation*}

    where \lambda_{i}(B) is the i -th largest eigenvalue of B .

    Lemma 2.2. [9] Let G be a graph with vertices \{v_{1}, v_{2}, \dots, v_{k}\}\subseteq V(G) having the same set of adjacent vertices. Then G has at least k-1 eigenvalues equal to 0 .

    Lemma 2.3. [18] Let G\ncong K_{n} . Then \mu_{n-1}\leq\delta(G) .

    Lemma 2.4. [10] Let A and B be real symmetric matrices of order n . Then for any 1\leq k\leq n ,

    \begin{equation*} \sum\limits_{i = 1}^{k}\lambda_{i}(A+B)\leq\sum\limits_{i = 1}^{k}\lambda_{i}(A)+\sum\limits_{i = 1}^{k}\lambda_{i}(B), \end{equation*}

    where \lambda_{i}(M) denotes the i -th largest eigenvalue of the matrix M .

    Lemma 2.5. [1] If G is a connected bipartite graph of rank r , then

    \begin{equation*} \varepsilon(G)\geq\sqrt{(r+1)^{2}-5}. \end{equation*}

    Lemma 2.6. [11] If G is a connected bipartite graph of rank r , then

    \begin{equation*} LE(G)\geq 2(\varepsilon(G)-r). \end{equation*}

    Lemma 2.7. [8] Let G\cong G(m_{1}, \dots, m_{h}; n_{1}, \dots, n_{h}) be a chain graph of order n . Then

    \begin{equation*} \varepsilon(G)\geq 2\sqrt{n-1}, \end{equation*}

    with equality if and only if G\cong K_{1, n-1} .

    Lemma 2.8. [8] Let G be a graph with vertex set V(G) = \{v_{1}, v_{2}, \dots, v_{n}\} . If G has k-1 duplicate pairs (v_{i}, v_{i+1}) , where i = 1, 2, \dots, k-1 , then G has at least k-1 equal Laplacian eigenvalues, all equal to the cardinality of the common neighbor set.

    Theorem 3.1. Let G\cong G(m_{1}, \dots, m_{h}; n_{1}, \dots, n_{h}) be a chain graph of order n . Then

    \begin{equation} \varepsilon(G)\leq 2\sqrt{hm}, \end{equation} (3.1)

    with equality if and only if G\cong K_{n_{1}, m_{1}} , where n_{1}+m_{1} = n .

    Proof. By Lemma 2.2, 0 is an eigenvalue of A(G) with multiplicity \sum_{i = 1}^{h}(n_{i}+m_{i}-2) , and the remaining eigenvalues are the eigenvalues of the following matrix,

    \begin{equation*} C = \left( \begin{array}{cc} 0 & M \\ N & 0 \\ \end{array} \right), \end{equation*}

    where M and N are the h\times h matrices with entries M_{st} = m_{t} for t\leq h+1-s and M_{st} = 0 otherwise, and N_{st} = n_{t} for t\leq h+1-s and N_{st} = 0 otherwise.

    Let \lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{2h} be the eigenvalues of C . Then

    \begin{equation*} \varepsilon(G) = \sum\limits_{i = 1}^{2h}|\lambda_{i}|. \end{equation*}

    Since G is a bipartite graph, \lambda_{i} and -\lambda_{i} are eigenvalues of G in pairs. Thus we have

    \begin{equation*} \varepsilon(G) = 2\sum\limits_{i = 1}^{h}\lambda_{i}. \end{equation*}

    Obviously,

    \begin{equation*} \sum\limits_{i = 1}^{2h}\lambda_{i}^{2} = Tr(C^{2}) = 2\sum\limits_{i = 1}^{h}\sum\limits_{j = 1}^{h-i+1}m_{j}n_{i} = 2m, \end{equation*}

    that is,

    \begin{equation*} \sum\limits_{i = 1}^{h}\lambda_{i}^{2} = \sum\limits_{i = 1}^{h}\sum\limits_{j = 1}^{h-i+1}m_{j}n_{i} = m. \end{equation*}

    So

    \begin{equation*} \varepsilon(G) = 2\sqrt{\sum\limits_{i = 1}^{h}\lambda_{i}^{2}+2\sum\limits_{1\leq i < j\leq h}\lambda_{i}\lambda_{j}}\leq 2\sqrt{\sum\limits_{i = 1}^{h}\lambda_{i}^{2}+(h-1)\sum\limits_{i = 1}^{h}\lambda_{i}^{2}} = 2\sqrt{h\sum\limits_{i = 1}^{h}\lambda_{i}^{2}} = 2\sqrt{hm}. \end{equation*}

    First we assume that h = 1 . Then G\cong K_{n_{1}, m_{1}} , where n_{1}+m_{1} = n . So S(G) = \{\pm\sqrt{m_{1}n_{1}}, 0, \dots, 0\} and \varepsilon(G) = 2\sqrt{m_{1}n_{1}} = 2\sqrt{m} . Hence equality holds in (3.1).

    Next we assume that h\geq2 . By the definition of a chain graph, G(1, 1;1, 1) , that is, P_{4} , is an induced subgraph of G . By Lemma 2.1, we get \lambda_{2}(G)\geq\lambda_{2}(P_{4}) > 0 . Since G is connected, by the Perron-Frobenius theorem we have \lambda_{1}(G) > \lambda_{2}(G) . Hence the inequality 2\sum_{1\leq i < j\leq h}\lambda_{i}\lambda_{j}\leq(h-1)\sum_{i = 1}^{h}\lambda_{i}^{2} is strict. This completes the proof.
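    Theorem 3.1 is easy to spot-check numerically. The sketch below (our illustration, numpy assumed; chain_graph rebuilds the adjacency matrix of G(m_{1}, \dots, m_{h}; n_{1}, \dots, n_{h}) as described in Section 1) confirms \varepsilon(G)\leq 2\sqrt{hm} on a few chain graphs, with equality in the complete bipartite case h = 1 :

```python
import numpy as np

def chain_graph(ms, ns):
    # adjacency matrix of G(m_1..m_h; n_1..n_h): U_s joined to V_1..V_{h+1-s}
    h, nU = len(ms), sum(ns)
    A = np.zeros((nU + sum(ms), nU + sum(ms)))
    uo = np.cumsum([0] + list(ns))
    vo = np.cumsum([0] + list(ms)) + nU
    for s in range(h):
        for k in range(h - s):
            A[uo[s]:uo[s + 1], vo[k]:vo[k + 1]] = 1
    return A + A.T

for ms, ns in [((4,), (1,)), ((1, 2), (2, 3)), ((2, 1, 3), (1, 2, 2))]:
    A = chain_graph(ms, ns)
    h, m = len(ms), A.sum() / 2
    energy = np.abs(np.linalg.eigvalsh(A)).sum()
    assert energy <= 2 * np.sqrt(h * m) + 1e-9   # Theorem 3.1
    # the first case is h = 1, i.e. the star K_{1,4}, where equality holds
    print(h, round(energy, 4), round(2 * np.sqrt(h * m), 4))
```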

    Theorem 3.2. Let G\cong G(m_{1}, \dots, m_{h}; n_{1}, \dots, n_{h}) be a chain graph of order n . Then

    \begin{equation} \varepsilon(G)\geq\sqrt{(2h+1)^{2}-5}. \end{equation} (3.2)

    Proof. By a direct calculation with the matrix C in the proof of Theorem 3.1, we get

    \begin{equation*} \det C = (-1)^{h}\prod\limits_{i = 1}^{h}m_{i}n_{i}\neq 0. \end{equation*}

    Therefore, all the eigenvalues of the matrix C are non-zero. Hence r(G) = 2h . Using Lemma 2.5, we get the result in (3.2).
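    The fact r(G) = 2h used above can likewise be verified numerically (our sketch, numpy assumed; chain_graph is the builder described in Section 1), together with the lower bound (3.2):

```python
import numpy as np
import math

def chain_graph(ms, ns):
    # adjacency matrix of G(m_1..m_h; n_1..n_h): U_s joined to V_1..V_{h+1-s}
    h, nU = len(ms), sum(ns)
    A = np.zeros((nU + sum(ms), nU + sum(ms)))
    uo = np.cumsum([0] + list(ns))
    vo = np.cumsum([0] + list(ms)) + nU
    for s in range(h):
        for k in range(h - s):
            A[uo[s]:uo[s + 1], vo[k]:vo[k + 1]] = 1
    return A + A.T

for ms, ns in [((1, 2), (2, 3)), ((1, 1, 2), (2, 1, 1))]:
    A = chain_graph(ms, ns)
    h = len(ms)
    r = np.linalg.matrix_rank(A)
    print(r, 2 * h)                        # the rank equals 2h
    # lower bound (3.2): eps(G) >= sqrt((2h+1)^2 - 5)
    energy = np.abs(np.linalg.eigvalsh(A)).sum()
    assert energy >= math.sqrt((2 * h + 1) ** 2 - 5) - 1e-9
```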

    In this section, we give an upper bound on LE(G) of chain graphs in terms of the vertex cover number. A lower bound follows from a known lower bound on the Laplacian energy of a graph in terms of its rank and energy.

    Theorem 4.1. Let G\cong G(m_{1}, \dots, m_{h}; n_{1}, \dots, n_{h}) be a chain graph of order n , and a_{1}\geq b_{1} . Then

    \begin{equation} LE(G)\leq \left\{ \begin{array}{lcl} 2(m+b_{1})-\frac{4m}{n}, & if \; {\frac{2m}{n}\geq b_{1}},\\ 2b_{1}(n-2)-2m+\frac{8m}{n}, & if \; {\frac{2m}{n} < b_{1}}, \end{array}\right. \end{equation} (4.1)

    with equality if and only if G\cong K_{1, n-1} .

    Proof. Let \Gamma = \{v_{11}, v_{12}, \ldots, v_{1m_{1}}, v_{21}, v_{22}, \ldots, v_{2m_{2}}, \ldots, v_{h1}, v_{h2}, \ldots, v_{hm_{h}}\} be a vertex cover set of the graph G , where v_{ij} is the j -th vertex in V_{i} . Hence \{v_{i1}, v_{i2}, \ldots, v_{im_{i}}\}\subseteq V_{i} . We can assume that G_{ij} are spanning subgraphs of G such that V(G) = V(G_{i1}) = V(G_{i2}) = \cdots = V(G_{im_{i}}) , and the edge set of G_{ij} is defined as

    \begin{equation*} E(G_{ij}) = \left\{v_{ij}U_{k}:U_{k}\subseteq N_G(v_{ij})\right\}. \end{equation*}

    Since |N_G(v_{i1})| = |N_G(v_{i2})| = \cdots = |N_G(v_{im_{i}})| = a_{i} ,

    \begin{equation*} G_{ij} = K_{1, a_{i}}\cup (n-a_{i}-1)K_{1}, \end{equation*}

    we have

    \begin{equation*} E(K_{m_{i}, a_{i}}) = E(G_{i1})\cup E(G_{i2})\cup\cdots\cup E(G_{im_{i}}), \end{equation*}

    so

    \begin{equation*} L(K_{m_{i}, a_{i}}) = L(G_{i1})+ L(G_{i2})+ \cdots + L(G_{im_{i}}), \, \, \, \, i = 1, 2, \dots, h. \end{equation*}

    By Figure 1,

    \begin{equation*} E(G) = E(K_{m_{1}, a_{1}})\cup E(K_{m_{2}, a_{2}})\cup\cdots\cup E(K_{m_{h}, a_{h}}), \end{equation*}

    then we can see easily that

    \begin{equation*} L(G) = L(K_{m_{1}, a_{1}}) + L(K_{m_{2}, a_{2}}) + \cdots + L(K_{m_{h}, a_{h}}). \end{equation*}

    Note that

    \begin{equation*} S_{k}(G_{i1}) = S_{k}(G_{i2}) = \cdots = S_{k}(G_{im_{i}})\leq a_{i}+k, \end{equation*}

    where S_{k}(G) is the sum of the k largest Laplacian eigenvalues of graph G .

    By Lemma 2.4, we get

    \begin{equation*} \begin{aligned} S_{k}(G)&\leq m_{1}S_{k}(G_{11})+m_{2}S_{k}(G_{21})+\cdots+m_{h}S_{k}(G_{h1})\\ &\leq m_{1}(a_{1}+k)+m_{2}(a_{2}+k)+\cdots+m_{h}(a_{h}+k)\\ & = \sum\limits_{i = 1}^{h}m_{i}a_{i}+k\sum\limits_{i = 1}^{h}m_{i}\\ & = m+kb_{1}. \end{aligned} \end{equation*}

    So from (1.1), we get

    \begin{equation*} LE(G) = 2S_{\sigma}(G)-\frac{4m\sigma}{n}\leq2\left(m+\sigma b_{1}\right)-\frac{4m\sigma}{n} = 2m+2\sigma\left(b_{1}-\frac{2m}{n}\right). \end{equation*}

    Since G is connected, 1\leq \sigma\leq n-1 . So it suffices to consider the following two cases.

    Case 1. \frac{2m}{n}\geq b_{1} .

    Then we have

    \begin{equation*} LE(G)\leq 2m+2b_{1}-\frac{4m}{n} = 2(m+b_{1})-\frac{4m}{n}. \end{equation*}

    Case 2. \frac{2m}{n} < b_{1} .

    By Lemma 2.3, we get \mu_{n-1}\leq \delta(G)\leq \frac{2m}{n} . Thus it must be 1\leq \sigma\leq n-2 . Hence

    \begin{equation*} LE(G)\leq 2m+2(n-2)\left(b_{1}-\frac{2m}{n}\right) = 2b_{1}(n-2)-2m+\frac{8m}{n}. \end{equation*}

    Next we prove that equality holds for the star.

    If G\cong K_{1, n-1} , we get b_{1} = m_{1} = 1, n_{1} = n-1 , and LS(G) = \{n, \underbrace{1, \dots, 1}_{n-2}, 0\} . Then

    \begin{equation*} LE(K_{1, n-1}) = \sum\limits_{i = 1}^{n} \left|\mu_{i}-\frac{2m}{n}\right| = 2n-\frac{4(n-1)}{n} = 2(m+b_{1})-\frac{4m}{n}. \end{equation*}
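    A numerical spot-check of (4.1) (our sketch, numpy assumed; chain_graph is the builder from Section 1, and the helper names LE and bound_41 are ours). For the star K_{1, 7} = G(1;7) the bound is attained; for G(1, 2;2, 3) it is strict:

```python
import numpy as np

def chain_graph(ms, ns):
    # adjacency matrix of G(m_1..m_h; n_1..n_h): U_s joined to V_1..V_{h+1-s}
    h, nU = len(ms), sum(ns)
    A = np.zeros((nU + sum(ms), nU + sum(ms)))
    uo = np.cumsum([0] + list(ns))
    vo = np.cumsum([0] + list(ms)) + nU
    for s in range(h):
        for k in range(h - s):
            A[uo[s]:uo[s + 1], vo[k]:vo[k + 1]] = 1
    return A + A.T

def LE(A):
    # Laplacian energy: sum |mu_i - 2m/n|
    n = A.shape[0]
    mu = np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A)
    return np.abs(mu - A.sum() / n).sum()

def bound_41(A, ms):
    # upper bound (4.1), with b_1 = m_1 + ... + m_h
    n, m, b1 = A.shape[0], A.sum() / 2, sum(ms)
    if 2 * m / n >= b1:
        return 2 * (m + b1) - 4 * m / n
    return 2 * b1 * (n - 2) - 2 * m + 8 * m / n

star = chain_graph((1,), (7,))     # K_{1,7}: the equality case
g = chain_graph((1, 2), (2, 3))    # here 2m/n = 2.25 < b_1 = 3
print(round(LE(star), 4), round(bound_41(star, (1,)), 4))  # 12.5 12.5
print(LE(g) < bound_41(g, (1, 2)))                          # True
```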

    Theorem 4.2. Let G\cong G(m_{1}, \dots, m_{h}; n_{1}, \dots, n_{h}) be a chain graph of order n . Then

    \begin{equation} LE(G)\geq 4(\sqrt{n-1}-h). \end{equation} (4.2)

    Proof. By Theorem 3.2, we get r(G) = 2h . Using Lemmas 2.6 and 2.7, we get the result in (4.2).
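    The lower bound (4.2) can be spot-checked in the same way (our sketch, numpy assumed; chain_graph is the builder from Section 1):

```python
import numpy as np

def chain_graph(ms, ns):
    # adjacency matrix of G(m_1..m_h; n_1..n_h): U_s joined to V_1..V_{h+1-s}
    h, nU = len(ms), sum(ns)
    A = np.zeros((nU + sum(ms), nU + sum(ms)))
    uo = np.cumsum([0] + list(ns))
    vo = np.cumsum([0] + list(ms)) + nU
    for s in range(h):
        for k in range(h - s):
            A[uo[s]:uo[s + 1], vo[k]:vo[k + 1]] = 1
    return A + A.T

for ms, ns in [((1,), (6,)), ((1, 2), (2, 3)), ((1, 1), (3, 4))]:
    A = chain_graph(ms, ns)
    n, h = A.shape[0], len(ms)
    mu = np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A)
    le = np.abs(mu - A.sum() / n).sum()
    assert le >= 4 * (np.sqrt(n - 1) - h) - 1e-9   # Theorem 4.2
    print(round(le, 4), round(4 * (np.sqrt(n - 1) - h), 4))
```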

    Let G be a connected bicyclic chain graph. We have m = n+1 , and h = 2 or h = 3 . If h = 2 , then G\cong G(1, 1;3, n-5) or G\cong G(1, 2;2, n-5) . If h = 3 , then G\cong G(1, 2, k-3;1, 1, n-k-2) , where 4\leq k\leq n-3 (Figure 2). In this section, we will attain the maximal Laplacian energy of all connected bicyclic chain graphs.

    Figure 2.  Graphs G(1, 1;3, n-5) , G(1, 2;2, n-5) and G(1, 2, k-3;1, 1, n-k-2) .

    Lemma 5.1. Let G be a connected bicyclic chain graph (n\geq8) .

    (1) If G\cong G(1, 1;3, n-5) , then LE(G) = 6+\frac{2(n-4)(n+1)}{n}-2\mu_{n-1} .

    (2) If G\cong G(1, 2;2, n-5) , then LE(G) = 10+\frac{2(n-6)(n+1)}{n}-2\mu_{n-1} .

    (3) If G\cong G(1, 2, k-3;1, 1, n-k-2) , where 4\leq k\leq n-3 , then LE(G) = 10+\frac{2(n-6)(n+1)}{n}-2\mu_{n-1} .

    Proof. (1) Let G\cong G(1, 1;3, n-5) . By Lemma 2.8, we conclude that 2, 2, \underbrace{1, 1, \cdots, 1}_{n-6} are Laplacian eigenvalues of G , and the remaining Laplacian eigenvalues of G satisfy the equation f_{1}(x) = 0 , where f_{1}(x) is the characteristic polynomial of the matrix

    \begin{equation*} A_{1} = \left( \begin{array}{cccc} n-2 & 0 & -3 & 5-n \\ 0 & 3 & -3 & 0 \\ -1 & -1 & 2 & 0 \\ -1 & 0 & 0 & 1 \\ \end{array} \right), \end{equation*}

    that is, f_{1}(x) = x\left(x^{3}-(4+n)x^{2}+(5n-2)x-3n\right).

    Let h_{1}(x) = x^{3}-(4+n)x^{2}+(5n-2)x-3n . Then we obtain h_{1}(0) = -3n < 0 , h_{1}(1) = n-5 > 0 , h_{1}(2) = 3n-12 > 0 , h_{1}(n-1) = -3 < 0 and \lim\limits_{x\rightarrow\infty}h_{1}(x) = \infty . Thus the Laplacian eigenvalues of G are \mu_{1}, \mu_{2}, 2, 2, \underbrace{1, 1, \dots, 1}_{n-6}, \mu_{n-1}, 0 , where \mu_{1}\geq n-1 , 2\leq\mu_{2}\leq n-1 , \mu_{n-1} < 1 and \mu_{1}+\mu_{2}+\mu_{n-1} = n+4 .

    Therefore

    \begin{equation} LE(G) = \sum\limits_{i = 1}^{n}\left|\mu_{i}-\frac{2(n+1)}{n}\right| = 6+\frac{2(n-4)(n+1)}{n}-2\mu_{n-1}. \end{equation} (5.1)

    (2) Let G\cong G(1, 2;2, n-5) . By Lemma 2.8, we conclude that 3, 2, \underbrace{1, 1, \cdots, 1}_{n-6} are Laplacian eigenvalues of G , and the remaining Laplacian eigenvalues of G satisfy the equation f_{2}(x) = 0 , where f_{2}(x) is the characteristic polynomial of the matrix

    \begin{equation*} A_{2} = \left( \begin{array}{cccc} n-3 & 0 & -2 & 5-n \\ 0 & 2 & -2 & 0 \\ -1 & -2 & 3 & 0 \\ -1 & 0 & 0 & 1 \\ \end{array} \right), \end{equation*}

    that is, f_{2}(x) = x\left(x^{3}-(3+n)x^{2}+(5n-8)x-2n\right).

    Let h_{2}(x) = x^{3}-(3+n)x^{2}+(5n-8)x-2n . Then we obtain h_{2}(0) = -2n < 0 , h_{2}(1) = 2n-10 > 0 , h_{2}(3) = 4n-24 > 0 , h_{2}(n-2) = -4 < 0 and \lim\limits_{x\rightarrow\infty}h_{2}(x) = \infty . Thus the Laplacian eigenvalues of G are \mu_{1}, \mu_{2}, 3, 2, \underbrace{1, 1, \dots, 1}_{n-6}, \mu_{n-1}, 0 , where \mu_{1}\geq n-2 , 3\leq\mu_{2}\leq n-2 , \mu_{n-1} < 1 and \mu_{1}+\mu_{2}+\mu_{n-1} = n+3 .

    Therefore

    \begin{equation} LE(G) = \sum\limits_{i = 1}^{n}\left|\mu_{i}-\frac{2(n+1)}{n}\right| = 10+\frac{2(n-6)(n+1)}{n}-2\mu_{n-1}. \end{equation} (5.2)

    (3) Let G\cong G(1, 2, k-3;1, 1, n-k-2) . When 4\leq k\leq \lceil\frac{n}{2}\rceil , by Lemma 2.8, we conclude that 2, \underbrace{1, 1, \cdots, 1}_{n-7} are Laplacian eigenvalues of G , and the remaining Laplacian eigenvalues of G satisfy the equation f_{3}(x) = 0 , where f_{3}(x) is the characteristic polynomial of the matrix

    \begin{equation*} A_{3} = \left( \begin{array}{cccccc} n-k & 0 & 0 & -1 & -1 & 2+k-n\\ 0 & 2 & 0 & -1 & -1 & 0 \\ 0 & 0 & 1 & -1 & 0 & 0 \\ -1 & -2 & 3-k & k & 0 & 0 \\ -1 & -2 & 0 & 0 & 3 & 0 \\ -1 & 0 & 0 & 0 & 0 & 1 \\ \end{array} \right), \end{equation*}

    that is

    \begin{equation} f_{3}(x) = x(x-1)\left(x^{4}-(n+6)x^{3}+(kn+5n-k^{2}+10)x^{2}-(4kn+5n-4k^{2}+12)x+6n\right). \end{equation} (5.3)

    Let g(x) = x^{4}-(n+6)x^{3}+(kn+5n-k^{2}+10)x^{2}-(4kn+5n-4k^{2}+12)x+6n . Then we obtain g(0) = 6n > 0 , g(1) = 3k^{2}-3kn+5n-7 < 0 , g(2) = 4(k-2)(2+k-n) < 0 , g(k) = -(k-2)(k-3)(2k-n)\geq 0 , g(n-k) = (2+k-n)(2k-n)(k-n+3)\leq 0 and \lim\limits_{x\rightarrow\infty}g(x) = \infty . (Since g(x) is the same for k = \lceil\frac{n}{2}\rceil and k = \lfloor\frac{n}{2}\rfloor when n is odd, we take the smaller value k = \lfloor\frac{n}{2}\rfloor .) Thus, together with the root 1 of f_{3}(x) , the Laplacian eigenvalues of G are \mu_{1}, \mu_{2}, \mu_{3}, 2, \underbrace{1, 1, \cdots, 1}_{n-6}, \mu_{n-1}, 0 , where \mu_{1}\geq n-k , k\leq\mu_{2}\leq n-k , 2 < \mu_{3} < k , \mu_{n-1} < 1 .

    Since \sum\limits_{i = 1}^{n}\mu_{i} = 2m = 2(n+1) = 2n+2 , we get \mu_{1}+\mu_{2}+\mu_{3}+\mu_{n-1} = n+6 , that is, \mu_{1}+\mu_{2}+\mu_{3} = n+6-\mu_{n-1} .

    Therefore

    \begin{equation} LE(G) = \sum\limits_{i = 1}^{n}\left|\mu_{i}-\frac{2(n+1)}{n}\right| = 10+\frac{2(n-6)(n+1)}{n}-2\mu_{n-1}. \end{equation} (5.4)

    When \lceil\frac{n}{2}\rceil < k < n-3 , replacing k by n-k in Eq (5.3) yields the same characteristic polynomial, so the Laplacian energy equals that of the case 4\leq k\leq \lceil\frac{n}{2}\rceil .

    When k = n-3 , f_{3}(x) = x(x-1)(x-3)\left(x^{3}-(3+n)x^{2}+(5n-8)x-2n\right) , so the Laplacian energy equals that of G(1, 2;2, n-5) .

    This completes the proof.
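    The three closed forms in Lemma 5.1 can be confirmed numerically. Below is our sketch for n = 12 (numpy assumed; chain_graph is the builder from Section 1), comparing LE computed from the full Laplacian spectrum with the formulas in terms of \mu_{n-1} :

```python
import numpy as np

def chain_graph(ms, ns):
    # adjacency matrix of G(m_1..m_h; n_1..n_h): U_s joined to V_1..V_{h+1-s}
    h, nU = len(ms), sum(ns)
    A = np.zeros((nU + sum(ms), nU + sum(ms)))
    uo = np.cumsum([0] + list(ns))
    vo = np.cumsum([0] + list(ms)) + nU
    for s in range(h):
        for k in range(h - s):
            A[uo[s]:uo[s + 1], vo[k]:vo[k + 1]] = 1
    return A + A.T

def le_and_mu(ms, ns):
    # returns (LE(G), mu_{n-1}), the latter being the second smallest eigenvalue
    A = chain_graph(ms, ns)
    n = A.shape[0]
    mu = np.sort(np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A))
    return np.abs(mu - A.sum() / n).sum(), mu[1]

n, k = 12, 5
le1, mu1 = le_and_mu((1, 1), (3, n - 5))             # case (1)
le2, mu2 = le_and_mu((1, 2), (2, n - 5))             # case (2)
le3, mu3 = le_and_mu((1, 2, k - 3), (1, 1, n - k - 2))  # case (3)

assert abs(le1 - (6 + 2 * (n - 4) * (n + 1) / n - 2 * mu1)) < 1e-8   # (5.1)
assert abs(le2 - (10 + 2 * (n - 6) * (n + 1) / n - 2 * mu2)) < 1e-8  # (5.2)
assert abs(le3 - (10 + 2 * (n - 6) * (n + 1) / n - 2 * mu3)) < 1e-8  # (5.4)
print("ok")
```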

    Lemma 5.2. Let G_{n, k}\cong G(1, 2, k-3;1, 1, n-k-2) , where 4\leq k\leq \lceil\frac{n}{2}\rceil . Then \mu_{n-1}(G_{n, k})\geq \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) , with equality if and only if k = \lceil\frac{n}{2}\rceil . In particular, if n is odd, then \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) = \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-4;1, 1, \lfloor\frac{n}{2}\rfloor-1)\right) .

    Proof. If k = \lceil\frac{n}{2}\rceil , then \mu_{n-1}(G_{n, k}) = \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) . By Lemma 5.1, we obtain that \mu_{1}, \mu_{2}, \mu_{3}, \mu_{n-1} are the roots of the equation P(G_{n, k}, x) = 0 , where

    \begin{equation*} P(G_{n, k},x) = x^{4}-(n+6)x^{3}+(kn+5n-k^{2}+10)x^{2}-(4kn+5n-4k^{2}+12)x+6n, \end{equation*}

    and \mu_{1}\geq n-k , k\leq\mu_{2}\leq n-k , 2 < \mu_{3} < k , \mu_{n-1} < 1 .

    We need to prove that

    \begin{equation*} \mu_{n-1}(G_{n, k}) > \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right), \, \, for\, \, 4\leq k\leq \lceil\frac{n}{2}\rceil-1. \end{equation*}

    Since

    \begin{equation*} P(G_{n, k+1}, x)-P(G_{n, k}, x) = x(x-4)(n-2k-1), \end{equation*}

    we get P(G_{n, k+1}, x)-P(G_{n, k}, x)\leq0 for 0 < x < 1 . Hence P(G_{n, k+1}, x)\leq P(G_{n, k}, x) . When n is odd and k = \lceil\frac{n}{2}\rceil-1 , equality holds.

    Thus we have \mu_{n-1}(G_{n, k}) > \mu_{n-1}(G_{n, k+1}) , that is,

    \begin{equation} \mu_{n-1}(G_{n, 4}) > \mu_{n-1}(G_{n, 5}) > \cdots > \mu_{n-1}(G_{n, \lceil\frac{n}{2}\rceil-1})\geq\mu_{n-1}(G_{n, \lceil\frac{n}{2}\rceil}). \end{equation} (5.5)

    Hence \mu_{n-1}(G_{n, k}) > \mu_{n-1}(G_{n, \lceil\frac{n}{2}\rceil}) = \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) .

    This completes the proof.
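    The monotonicity in (5.5) can be observed directly (our sketch, numpy assumed; chain_graph is the builder from Section 1): \mu_{n-1}(G_{n, k}) decreases as k runs from 4 to \lceil\frac{n}{2}\rceil .

```python
import numpy as np

def chain_graph(ms, ns):
    # adjacency matrix of G(m_1..m_h; n_1..n_h): U_s joined to V_1..V_{h+1-s}
    h, nU = len(ms), sum(ns)
    A = np.zeros((nU + sum(ms), nU + sum(ms)))
    uo = np.cumsum([0] + list(ns))
    vo = np.cumsum([0] + list(ms)) + nU
    for s in range(h):
        for k in range(h - s):
            A[uo[s]:uo[s + 1], vo[k]:vo[k + 1]] = 1
    return A + A.T

def alg_conn(ms, ns):
    # algebraic connectivity mu_{n-1}: second smallest Laplacian eigenvalue
    A = chain_graph(ms, ns)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

for n in (12, 13):
    mus = [alg_conn((1, 2, k - 3), (1, 1, n - k - 2))
           for k in range(4, (n + 1) // 2 + 1)]    # k = 4, ..., ceil(n/2)
    assert all(mus[i] >= mus[i + 1] - 1e-12 for i in range(len(mus) - 1))
    print(n, [round(x, 4) for x in mus])
```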

    Lemma 5.3. Let G be a bicyclic graph of order n\ge 8 . Then \mu_{n-1}(G(1, 2;2, n-5)) > \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right).

    Proof. When k = 3 , we get P(G_{n, 3}, x) = (x-3)h_{2}(x) , so that \mu_{n-1}(G_{n, 3}) = \mu_{n-1}\left(G(1, 2;2, n-5)\right) .

    By the argument of Lemma 5.2, P(G_{n, k+1}, x)\leq P(G_{n, k}, x) for 0 < x < 1 , and P(G_{n, 4}, x)\leq P(G_{n, 3}, x) still holds.

    Similarly to inequality (5.5), we obtain

    \begin{equation*} \mu_{n-1}(G_{n, 3}) > \mu_{n-1}(G_{n, 4}) > \cdots > \mu_{n-1}(G_{n, \lceil\frac{n}{2}\rceil-1})\geq\mu_{n-1}(G_{n, \lceil\frac{n}{2}\rceil}). \end{equation*}

    Hence \mu_{n-1}\left(G(1, 2;2, n-5)\right) > \mu_{n-1}(G_{n, \lceil\frac{n}{2}\rceil}) = \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) for n\geq8 .

    Lemma 5.4. Let G be a bicyclic graph of order n\ge 8 . Then \mu_{n-1}(G(1, 1;3, n-5))- \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) > \frac{2}{n}.

    Proof. For n = 8 and n = 9 , it can be verified by using Maple.

    Let n = 8 , \mu_{n-1}(G(1, 1;3, n-5)) = 0.8377 and \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) = 0.5858 . Then \mu_{n-1}(G(1, 1;3, n-5))- \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) = 0.2519 > \frac{1}{4} , so the conclusion is correct.

    Let n = 9 , \mu_{n-1}(G(1, 1;3, n-5)) = 0.8169 and \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) = 0.5344 . Then \mu_{n-1}(G(1, 1;3, n-5))- \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) = 0.2825 > \frac{2}{9} , so the conclusion is correct.

    Next we prove when n\geq10 , the inequality holds.

    By Lemma 5.3, we have \mu_{n-1}(G(1, 2;2, n-5)) > \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) , so it suffices to prove \mu_{n-1}(G(1, 1;3, n-5))-\mu_{n-1}(G(1, 2;2, n-5)) > \frac{2}{n} . Let \alpha = \mu_{n-1}(G(1, 1;3, n-5)) and \beta = \mu_{n-1}(G(1, 2;2, n-5)) . Then

    \begin{equation*} h_{1}(x) = x^{3}-(4+n)x^{2}+(5n-2)x-3n \, \, with \, \, h_{1}(\alpha) = 0, \end{equation*}
    \begin{equation*} h_{2}(x) = x^{3}-(3+n)x^{2}+(5n-8)x-2n \, \, with \, \, h_{2}(\beta) = 0. \end{equation*}

    By the implicit function theorem and Figure 3, when G\cong G(1, 1;3, n-5) , \alpha decreases as n increases, and h_{1}(x) is monotonically increasing on the interval [0, 1] . Hence h_{1}(0.81) = -3.713+0.39n > 0 and h_{1}(0.69) = -2.956-0.026n < 0 , so 0.69 < \alpha < 0.81 .

    Figure 3.  h_{1}(x) (thin line) and h_{2}(x) (thick line).

    Similarly, h_{2}(0.58) = -5.454+0.56n > 0 and h_{2}(0.43) = -3.915-0.035n < 0 , so 0.43 < \beta < 0.58 . Therefore, \alpha-\beta > 0.11 > \frac{2}{19}\geq\frac{2}{n} for n\geq19 , hence the conclusion is correct.

    When 10\leq n\leq18 , \alpha-\beta > \frac{2}{n} can be verified directly. The results are shown in Table 1.

    Table 1.  The comparison between \alpha-\beta and \frac{2}{n} .

    n      \alpha     \beta      \alpha-\beta   \frac{2}{n}
    10     0.8107     0.5735     0.2372         0.200
    11     0.7899     0.5566     0.2333         0.182
    12     0.7804     0.5438     0.2366         0.167
    13     0.7728     0.5332     0.2396         0.154
    14     0.7666     0.5248     0.2418         0.143
    15     0.7612     0.5176     0.2436         0.133
    16     0.7566     0.5116     0.2450         0.125
    17     0.7526     0.5064     0.2462         0.118
    18     0.7491     0.5020     0.2471         0.111

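    Table 1 can be reproduced directly from the graphs (our sketch, numpy assumed; chain_graph is the builder from Section 1, and values computed this way may differ from the printed table in the last digit):

```python
import numpy as np

def chain_graph(ms, ns):
    # adjacency matrix of G(m_1..m_h; n_1..n_h): U_s joined to V_1..V_{h+1-s}
    h, nU = len(ms), sum(ns)
    A = np.zeros((nU + sum(ms), nU + sum(ms)))
    uo = np.cumsum([0] + list(ns))
    vo = np.cumsum([0] + list(ms)) + nU
    for s in range(h):
        for k in range(h - s):
            A[uo[s]:uo[s + 1], vo[k]:vo[k + 1]] = 1
    return A + A.T

def alg_conn(ms, ns):
    # algebraic connectivity mu_{n-1}
    A = chain_graph(ms, ns)
    return np.sort(np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A))[1]

for n in range(10, 19):
    alpha = alg_conn((1, 1), (3, n - 5))   # G(1,1;3,n-5)
    beta = alg_conn((1, 2), (2, n - 5))    # G(1,2;2,n-5)
    assert alpha - beta > 2 / n            # the inequality behind Lemma 5.4
    print(n, round(alpha, 4), round(beta, 4),
          round(alpha - beta, 4), round(2 / n, 3))
```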

    So we conclude that when n\geq8 ,

    \begin{equation*} \mu_{n-1}(G(1, 1;3, n-5))- \mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) > \frac{2}{n}. \end{equation*}

    Theorem 5.1. Let G be a connected bicyclic chain graph of order n\geq8 . Then G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2) attains the maximal Laplacian energy. In particular, when n is odd, LE\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) = LE\left(G(1, 2, \lceil\frac{n}{2}\rceil-4;1, 1, \lfloor\frac{n}{2}\rfloor-1)\right) .

    Proof. By Lemma 5.1, we can attain the maximal Laplacian energy by comparing \mu_{n-1} in equations (5.1), (5.2) and (5.4). By Lemma 5.3, LE(G(1, 2;2, n-5)) < LE\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) . In particular, when n is odd, LE\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) = LE\left(G(1, 2, \lceil\frac{n}{2}\rceil-4;1, 1, \lfloor\frac{n}{2}\rfloor-1)\right) . So

    \begin{equation*} \begin{aligned} &LE\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right)-LE(G(1, 1;3, n-5))\\ = &10+\frac{2(n-6)(n+1)}{n}-2\mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right)\\ &-6-\frac{2(n-4)(n+1)}{n}+2\mu_{n-1}(G(1, 1;3, n-5))\\ = &2\left(\mu_{n-1}(G(1, 1;3, n-5))-\mu_{n-1}\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right)\right)-\frac{4}{n}. \end{aligned} \end{equation*}

    Hence by Lemma 5.4, LE\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right)-LE(G(1, 1;3, n-5)) > 0 , that is, LE\left(G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2)\right) > LE(G(1, 1;3, n-5)) . In conclusion, G(1, 2, \lceil\frac{n}{2}\rceil-3;1, 1, \lfloor\frac{n}{2}\rfloor-2) has the maximal Laplacian energy among all connected bicyclic chain graphs (n\geq8) .
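    Finally, Theorem 5.1 can be spot-checked by exhaustive comparison for a fixed even order, say n = 12 , where \lceil\frac{n}{2}\rceil = 6 and the predicted maximizer is G(1, 2, 3;1, 1, 4) (our sketch, numpy assumed; chain_graph is the builder from Section 1):

```python
import numpy as np

def chain_graph(ms, ns):
    # adjacency matrix of G(m_1..m_h; n_1..n_h): U_s joined to V_1..V_{h+1-s}
    h, nU = len(ms), sum(ns)
    A = np.zeros((nU + sum(ms), nU + sum(ms)))
    uo = np.cumsum([0] + list(ns))
    vo = np.cumsum([0] + list(ms)) + nU
    for s in range(h):
        for k in range(h - s):
            A[uo[s]:uo[s + 1], vo[k]:vo[k + 1]] = 1
    return A + A.T

def LE(A):
    n = A.shape[0]
    mu = np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A)
    return np.abs(mu - A.sum() / n).sum()

n = 12
cands = {"G(1,1;3,%d)" % (n - 5): ((1, 1), (3, n - 5)),
         "G(1,2;2,%d)" % (n - 5): ((1, 2), (2, n - 5))}
for k in range(4, n - 2):   # all G(1,2,k-3;1,1,n-k-2) with 4 <= k <= n-3
    cands["G(1,2,%d;1,1,%d)" % (k - 3, n - k - 2)] = ((1, 2, k - 3),
                                                      (1, 1, n - k - 2))

les = {name: LE(chain_graph(*p)) for name, p in cands.items()}
best = max(les, key=les.get)
print(best)   # G(1,2,3;1,1,4), i.e. k = ceil(n/2) = 6
```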

    In this paper, we recalled the definition of chain graphs and obtained some upper and lower bounds on \varepsilon(G) of chain graphs. Since the rank of a chain graph is 2h , we also obtained bounds on \varepsilon(G) and LE(G) in terms of h . We presented an upper bound on LE(G) of chain graphs in terms of the vertex cover number. To attain the maximal Laplacian energy of bicyclic chain graphs, we compared the algebraic connectivity of each kind of bicyclic chain graph. It remains open which chain graphs attain the maximal Laplacian energy for given n , and whether the answer is still governed by the algebraic connectivity.

    This work was supported by the National Natural Science Foundation of China (Grant No. 61774137). The authors express their sincere thanks to the anonymous referee for many valuable comments and suggestions.

    The authors declare that they have no conflict of interest in this paper.



    [1] G Casalino, G Castellano, F Galetta, K. Kaczmarek-Majer, Dynamic incremental semi-supervised fuzzy clustering for bipolar disorder episode prediction, in International Conference on Discovery Science, Springer, Cham, (2020), 79-93.
    [2] J. C. Wakefield and S. Demazeux, Introduction: Depression, one and many, Sadness or Depression?, Netherlands, Springer, 2016, 1-15.
    [3] M. E. Gerbasi, A. Eldar-Lissai, S. Acaster, M. Fridman, V. Bonthapally, P. Hodgkins, et al., Associations between commonly used patient-reported outcome tools in postpartum depression clinical practice and the Hamilton Rating Scale for Depression, Arch. Women's Mental Health, 23 (2020), 727-735.
    [4] C. L. Allan, C. E. Sexton, N. Filippini, A. Topiwala, A. Mahmood, E. Zsoldos, et al., Sub-threshold depressive symptoms and brain structure: A magnetic resonance imaging study within the Whitehall Ⅱ cohort, J. Affective Disord., 204 (2016), 219-225.
    [5] X. Li, Z. Jing, B. Hu, J. Zhu, N. Zhong, M. Li, et al., A resting-state brain functional network study in MDD based on minimum spanning tree analysis and the hierarchical clustering, Complexity, 2017 (2017), 9514369.
    [6] K. Yoshida, Y. Shimizu, J. Yoshimoto, M. Takamura, G. Okada, Y. Okamoto, et al., Prediction of clinical depression scores and detection of changes in whole-brain using resting-state functional MRI data with partial least squares regression, Plos One, 12 (2017), e0179638.
    [7] S. Sun, X. Li, J. Zhu, Y. Wang, R. La, X. Zhang, et al., Graph theory analysis of functional connectivity in major depression disorder with high-density resting state EEG data, IEEE Trans. Neural Syst. Rehabil. Eng., 27 (2019), 429-439.
    [8] U. R. Acharya, S. L. Oh, Y. Hagiwara, J. Tan, H. Adeli, D. P. Subha, Automated EEG-based screening of depression using deep convolutional neural network, Comput. Methods Prog. Biomed., 161 (2018), 103-113. doi: 10.1016/j.cmpb.2018.04.012
    [9] R. W. Lam, S. H. Kennedy, R. S. McIntyre, A. Khullar, Cognitive dysfunction in major depressive disorder: effects on psychosocial functioning and implications for treatment, Can. J. Psychiatry, 59 (2014), 649-654. doi: 10.1177/070674371405901206
    [10] R. S. McIntyre, D. S. Cha, J. K. Soczynska, H. O. Woldeyohannes, L. A. Gallaugher, P. Kudlow, et al., Cognitive deficits and functional outcomes in major depressive disorder: determinants, substrates, and treatment interventions, Depression Anxiety, 30 (2013), 515-527.
    [11] Y. Kang, X. Jiang, Y. Yin, Y. Shang, X. Zhou, Deep transformation learning for depression diagnosis from facial images, in Chinese Conference on Biometric Recognition, Springer, Cham, (2017), 13-22.
    [12] A. Haque, M. Guo, A. S. Miner, F. Li, Measuring depression symptom severity from spoken language and 3D facial expressions, preprint, arXiv: 1811.08592.
    [13] M. Muzammel, H. Salam, Y. Hoffmann, M. Chetouani, A. Othmani, AudVowelConsNet: A phoneme-level based deep CNN architecture for clinical depression diagnosis, Mach. Learn. Appl., 2 (2020), 100005.
    [14] J. Zhu, J. Li, X. Li, J. Rao, Y. Hao, Z. Ding, et al., Neural basis of the emotional conflict processing in major depression: ERPs and source localization analysis on the N450 and P300 components, Front. Human Neurosci., 12 (2018), 214.
    [15] B. W. Haas, K. Omura, R. T. Constable, T. Canli, Interference produced by emotional conflict associated with anterior cingulate activation, Cognit. Affective Behav. Neurosci., 6 (2006), 152-156. doi: 10.3758/CABN.6.2.152
    [16] T. Armstrong, B. O. Olatunji, Eye tracking of attention in the affective disorders: a meta-analytic review and synthesis, Clin. Psychol. Rev., 32 (2012), 704-723. doi: 10.1016/j.cpr.2012.09.004
    [17] A. Duque, C. Vázquez, Double attention bias for positive and negative emotional faces in clinical depression: Evidence from an eye-tracking study, J Behav. Ther. Exp. Psychiatry, 46 (2015), 107-114. doi: 10.1016/j.jbtep.2014.09.005
    [18] S. P. Karparova, A. Kersting, T. Suslow, Disengagement of attention from facial emotion in unipolar depression, Psychiatry Clin. Neurosci., 59 (2005), 723-729. doi: 10.1111/j.1440-1819.2005.01443.x
    [19] M. P. Caligiuri, J. Ellwanger, Motor and cognitive aspects of motor retardation in depression, J. Affective Disord., 57 (2000), 83-93. doi: 10.1016/S0165-0327(99)00068-3
    [20] A. Etkin, T. Egner, D. M. Peraza, E. R. Kandel, J. Hirsch, Resolving emotional conflict: a role for the rostral anterior cingulate cortex in modulating activity in the amygdala, Neuron, 51 (2006), 871-882. doi: 10.1016/j.neuron.2006.07.029
    [21] K. Mohan, A. Seal, O. Krejcar, A. Yazidi, FER-net: facial expression recognition using deep neural net, Neural Comput. Appl., (2021), 1-12.
    [22] K. Mohan, A. Seal, O. Krejcar, A. Yazidi, Facial expression recognition using local gravitational force descriptor-based deep convolution neural networks, IEEE Trans. Instrum. Meas., 70 (2020), 1-12.
    [23] Z. Zhou, M. Li, Semi-supervised regression with co-training, in IJCAI, (2005), 908-913.
    [24] M. A. Lei, W. Xili, Semi-supervised regression based on support vector machine co-training, Comput. Eng. Appl., 47 (2011), 177-180.
    [25] Y. Q. Li, M. Tian, A semi-supervised regression algorithm based on co-training with SVR-KNN, in Advanced Materials Research, Trans Tech Publications Ltd, (2014), 2914-2918.
    [26] L. Bao, X. Yuan, Z. Ge, Co-training partial least squares model for semi-supervised soft sensor development, Chemom. Intell. Lab. Syst., 147 (2015), 75-85. doi: 10.1016/j.chemolab.2015.08.002
    [27] D. Li, Y. Liu, D. Huang, Development of semi-supervised multiple-output soft-sensors with Co-training and tri-training MPLS and MRVM, Chemom. Intell. Lab. Syst., 199 (2020), 103970.
    [28] M. F. A. Hady, F. Schwenker, G. Palm, Semi-supervised learning for regression with co-training by committee, in International Conference on Artificial Neural Networks, Springer, Berlin, Heidelberg, (2009), 121-130.
    [29] F. Saitoh, Predictive modeling of corporate credit ratings using a semi-supervised random forest regression, in 2016 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), IEEE, (2016), 429-433.
    [30] J. Levatić, M. Ceci, D. Kocev, S. Džeroski, Self-training for multi-target regression with tree ensembles, Knowl. Based Syst., 123 (2017), 41-60. doi: 10.1016/j.knosys.2017.02.014
    [31] S. Xue, S. Wang, X. Kong, J. Qiu, Abnormal neural basis of emotional conflict control in treatment-resistant depression: An event-related potential study, Clin. EEG Neurosci., 48 (2017), 103-110. doi: 10.1177/1550059416631658
    [32] N. Tottenham, J. W. Tanaka, A. C. Leon, T. McCarry, M. Nurse, T. A. Hare, et al., The NimStim set of facial expressions: Judgments from untrained research participants, Psychiatry Res., 168 (2009), 242-249.
    [33] M. Lei, J. Yang, S. Wang, L. Zhao, P. Xia, G. Jiang, et al., Semi-supervised modeling and compensation for the thermal error of precision feed axes, Int. J. Adv. Manuf. Technol., 104 (2019), 4629-4640.
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)


