
Biomedical image segmentation is a vital task in medical image analysis, covering the detection and delineation of pathological regions or anatomical structures within medical images. It plays a pivotal role in a variety of medical applications, including diagnosis, disease monitoring, and treatment planning. Conventionally, clinicians or expert radiologists have performed biomedical image segmentation manually, which is subjective, time-consuming, and prone to human error. With advances in computer vision and deep learning (DL) algorithms, automated and semi-automated segmentation techniques have attracted much research interest, and DL approaches, particularly convolutional neural networks (CNNs), have revolutionized biomedical image segmentation. Motivated by this, we developed a novel equilibrium optimization algorithm with deep learning-based biomedical image segmentation (EOADL-BIS) technique. The purpose of the EOADL-BIS technique is to integrate the EOA with the Faster R-CNN model for accurate and efficient biomedical image segmentation. To accomplish this, the EOADL-BIS technique employs the Faster R-CNN architecture with ResNeXt as the backbone network for image segmentation. The region proposal network (RPN) efficiently generates a set of region proposals, which are then fed into the ResNeXt network for classification and precise localization. During training of the Faster R-CNN model, the EOA is used to optimize the hyperparameters of the ResNeXt model, which improves the segmentation results and reduces the loss function. The EOADL-BIS algorithm was evaluated on several benchmark medical image databases, and the experimental results demonstrated its greater efficiency compared to other DL-based segmentation approaches.
Citation: Eman A. Al-Shahari, Marwa Obayya, Faiz Abdullah Alotaibi, Safa Alsafari, Ahmed S. Salama, Mohammed Assiri. Accelerating biomedical image segmentation using equilibrium optimization with a deep learning approach[J]. AIMS Mathematics, 2024, 9(3): 5905-5924. doi: 10.3934/math.2024288
Graph theory has become a significant and essential part of predictive toxicology and drug discovery, as it plays a vital role in the analysis of structure-property and structure-activity relationships. Different properties of molecules depend on their structures; therefore, quantitative structure-activity/property/toxicity relationship (QSAR/QSPR/QSTR) research has emerged as a productive field for characterizing the physico-chemical properties and the biological and pharmacological activities of materials and chemical compounds. These studies have been extensively applied in toxicology, pharmacokinetics, pharmacodynamics, chemometrics, and so on [1].
Topological descriptors capture the symmetry of compounds and provide numerical information about molecular size, the presence of heteroatoms, shape, multiple bonds, and branching. They have secured appreciable significance due to the ease and speed with which they can be computed. There are many graph-related numerical descriptors that have confirmed their importance in theoretical chemistry and nanotechnology; thereby, the computation of these topological descriptors is an interesting and attractive line of research. Some productive classes of topological descriptors of graphs are distance-based, counting-related, and degree-based indices; among these, the degree-based indices occupy the most prominent position and can play a central role in characterizing chemical compounds and forecasting their physico-chemical properties, such as density, molecular weight, and boiling and melting points. A valuable subclass of degree-based topological descriptors are the irregularity indices, which measure how irregular the graph in question is: a topological descriptor TI(Γ) of a graph Γ is an irregularity index if TI(Γ)≥0, and TI(Γ)=0 if and only if Γ is regular. Before [2], irregularity indices were not considered to play a significant role in predicting the physico-chemical properties of chemical structures. In [2], a regression analysis was performed to investigate the usefulness of various irregularity indices for evaluating the physico-chemical properties of octane isomers. The authors showed that, using irregularity indices, properties of octane isomers such as the acentric factor (AcenFac), enthalpy of vaporization (HVAP), entropy, and standard enthalpy of vaporization (DHVAP) can be estimated with a correlation coefficient greater than 0.9. For a detailed discussion of different types of indices and their related results, we refer the interested reader to [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18].
Throughout the article, the vertex and edge sets of a graph Γ are denoted by V(Γ) and E(Γ), respectively. The degree of a vertex q of Γ, written dΓ(q), is the number of edges incident with q. If all vertices of a graph have the same degree, it is said to be a regular graph; otherwise, it is an irregular graph. Let the order and size of Γ be n and m, respectively, with V(Γ)={q1,…,qn}. A sequence s1,…,sn, where si∈Z+ for all i=1,…,n, is called a degree sequence of Γ if dΓ(ql)=sl for every l. Let ql represent the number of vertices of degree l, where l=1,2,3,…,n−1. For an edge e=q1q2∈E(Γ), the imbalance of e is defined as imb(e):=|dΓ(q1)−dΓ(q2)|. In 1997, Albertson [19] introduced the irregularity of a graph Γ as follows:
$$ \operatorname{irr}(\Gamma)=\sum_{e\in E(\Gamma)} \operatorname{imb}(e). $$
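As a concrete illustration, the Albertson irregularity can be computed directly from an edge list. The short Python sketch below is only illustrative (it is not taken from the paper) and assumes a simple graph whose vertices are labelled 0,…,n−1; the function name irr is our own choice.

```python
# Albertson irregularity: the sum of the edge imbalances |d(u) - d(v)|.
def irr(n, edges):
    deg = [0] * n                      # degree of each vertex 0..n-1
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(abs(deg[u] - deg[v]) for u, v in edges)

# Example: the star K_{1,3} with centre 0 gives irr = 3 * |3 - 1| = 6.
print(irr(4, [(0, 1), (0, 2), (0, 3)]))  # 6
```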
The Zagreb indices have appreciable applications in chemistry. In 1972, Gutman et al. [20] proposed the first Zagreb index, based on the degrees of the vertices of a graph Γ. The first and second Zagreb indices of a graph Γ are defined as follows:
$$ M_1(\Gamma)=\sum_{q\in V(\Gamma)} d_{\Gamma}^{2}(q), \qquad M_2(\Gamma)=\sum_{q_1q_2\in E(\Gamma)} d_{\Gamma}(q_1)\,d_{\Gamma}(q_2). \tag{1.1} $$
Inspired by the Zagreb indices, Furtula and Gutman [21] introduced the forgotten index of Γ as follows:
$$ F(\Gamma)=\sum_{q\in V(\Gamma)} d_{\Gamma}^{3}(q)=\sum_{q_1q_2\in E(\Gamma)} \big(d_{\Gamma}^{2}(q_1)+d_{\Gamma}^{2}(q_2)\big). \tag{1.2} $$
Recently, Gutman et al. [22] introduced the σ irregularity index of a graph Γ, which is defined as:
$$ \sigma(\Gamma)=\sum_{q_1q_2\in E(\Gamma)} \big(d_{\Gamma}(q_1)-d_{\Gamma}(q_2)\big)^{2}. \tag{1.3} $$
Various properties of the σ irregularity index have been discussed in [23,24]. If the size and order of Γ are m and n, respectively, then the variance of Γ is defined in [25] as follows:
$$ \operatorname{Var}(\Gamma)=\frac{1}{n}\sum_{q\in V(\Gamma)} d_{\Gamma}^{2}(q)-\frac{1}{n^{2}}\Bigg(\sum_{q\in V(\Gamma)} d_{\Gamma}(q)\Bigg)^{2}. \tag{1.4} $$
The irregularity measure discrepancy of a graph Γ was introduced in [26,27] as follows:
$$ \operatorname{Disc}(\Gamma)=\frac{1}{n}\sum_{q\in V(\Gamma)} \bigg| d_{\Gamma}(q)-\frac{2m}{n} \bigg|. \tag{1.5} $$
For the comprehensive discussions about these graph descriptors, we refer the readers to [28,29,30,31,32,33].
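Since all five quantities (1.1)–(1.5) depend only on the degree sequence and the edge list, they are straightforward to compute. The following Python sketch is illustrative only (the function and key names are our own, not from the cited works) and assumes a simple graph on vertices 0,…,n−1.

```python
# Degree-based descriptors (1.1)-(1.5) of a simple graph given as an edge list.
def invariants(n, edges):
    m = len(edges)
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    M1 = sum(d * d for d in deg)                             # first Zagreb index (1.1)
    M2 = sum(deg[u] * deg[v] for u, v in edges)              # second Zagreb index (1.1)
    F = sum(d ** 3 for d in deg)                             # forgotten index (1.2)
    sigma = sum((deg[u] - deg[v]) ** 2 for u, v in edges)    # sigma index (1.3)
    var = M1 / n - (2 * m / n) ** 2                          # variance (1.4)
    disc = sum(abs(d - 2 * m / n) for d in deg) / n          # discrepancy (1.5)
    return {"M1": M1, "M2": M2, "F": F, "sigma": sigma, "Var": var, "Disc": disc}

# Example: the path P_4 has degree sequence (1, 2, 2, 1).
print(invariants(4, [(0, 1), (1, 2), (2, 3)]))
# {'M1': 10, 'M2': 8, 'F': 18, 'sigma': 2, 'Var': 0.25, 'Disc': 0.5}
```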
Definition 2.1. Subdivision graph: For a graph Γ, its subdivision graph S(Γ) is constructed by inserting a vertex of degree 2 into each edge. Therefore, |V(S(Γ))|=n+m, |E(S(Γ))|=2m and
$$ d_{S(\Gamma)}(q)=\begin{cases} d_{\Gamma}(q), & \text{if } q\in V(\Gamma),\\ 2, & \text{if } q\in E(\Gamma). \end{cases} \tag{2.1} $$
Definition 2.2. Line graph: For a graph Γ, its line graph, denoted by L(Γ), is the graph with V(L(Γ))=E(Γ) in which two vertices are adjacent if and only if the corresponding edges are incident in Γ. Clearly, |V(L(Γ))|=m and, by the handshaking lemma, |E(L(Γ))|=M1(Γ)/2−m; for every q=q1q2∈E(Γ) we have
$$ d_{L(\Gamma)}(q)=d_{\Gamma}(q_1)+d_{\Gamma}(q_2)-2. \tag{2.2} $$
Definition 2.3. Semi-total point graph [34]: For a graph Γ, its semi-total point graph, denoted by T1(Γ), is formed by inserting a new vertex into each edge of Γ and then joining it to the end vertices of the corresponding edge. Thus, |V(T1(Γ))|=n+m, |E(T1(Γ))|=|E(S(Γ))|+m=2m+m=3m and
$$ d_{T_1(\Gamma)}(q)=\begin{cases} 2d_{\Gamma}(q), & \text{if } q\in V(\Gamma),\\ 2, & \text{if } q\in E(\Gamma). \end{cases} \tag{2.3} $$
Definition 2.4. Semi-total line graph [34]: For a graph Γ, its semi-total line graph, denoted by T2(Γ), is formed by placing a new vertex on each edge of Γ (thereby subdividing it) and then linking two of these new vertices whenever their corresponding edges are incident in Γ. We have |V(T2(Γ))|=n+m, |E(T2(Γ))|=m+M1(Γ)/2 and
$$ d_{T_2(\Gamma)}(q)=\begin{cases} d_{\Gamma}(q), & \text{if } q\in V(\Gamma),\\ d_{L(\Gamma)}(q)+2=d_{\Gamma}(q_1)+d_{\Gamma}(q_2), & \text{if } q=q_1q_2\in E(\Gamma),\ q_1,q_2\in V(\Gamma). \end{cases} \tag{2.4} $$
Definition 2.5. Total graph: The total graph T(Γ) of a graph Γ is the union of its semi-total point graph and semi-total line graph. Also, |V(T(Γ))|=n+m, |E(T(Γ))|=m+|E(S(Γ))|+|E(L(Γ))|=2m+M1(Γ)/2 and
$$ d_{T(\Gamma)}(q)=\begin{cases} 2d_{\Gamma}(q), & \text{if } q\in V(\Gamma),\\ d_{L(\Gamma)}(q)+2=d_{\Gamma}(q_1)+d_{\Gamma}(q_2), & \text{if } q=q_1q_2\in E(\Gamma),\ q_1,q_2\in V(\Gamma). \end{cases} \tag{2.5} $$
Definition 2.6. Paraline graph: The paraline graph PL(Γ) is the line graph of the subdivision graph, PL(Γ)=L(S(Γ)). Also, |V(PL(Γ))|=|E(S(Γ))|=2m and |E(PL(Γ))|=M1(S(Γ))/2−2m, where M1(S(Γ))=M1(Γ)+4m; therefore |E(PL(Γ))|=M1(Γ)/2.
Definition 2.7. Double graph: Let Γ be a graph with V(Γ)={q1,q2,…,qn}. The vertex set of the double graph D[Γ] is given by the two sets Γ1={x1,x2,…,xn} and Γ2={y1,y2,…,yn}; that is, for every qi∈V(Γ) there are two vertices xi and yi in V(D[Γ]). The double graph D[Γ] contains the original edge set of each copy of Γ, and for every qiqj∈E(Γ) the two additional edges xiyj and xjyi.
Definition 2.8. Strong double graph: Let Γ be a graph with V(Γ)={q1,q2,…,qn}, and let V(SD[Γ]) be partitioned into the sets Γ1={x1,x2,…,xn} and Γ2={y1,y2,…,yn}, so that for each qi∈V(Γ) there are vertices xi and yi in V(SD[Γ]). The strong double graph SD[Γ] contains the original edge set of each copy of Γ, the additional edges xiyj and xjyi for every qiqj∈E(Γ), and the edges xiyi for every i.
Definition 2.9. Extended double cover: Let Γ be a graph with V(Γ)={q1,q2,…,qn}. The extended double cover of Γ, represented by Γ∗ is the bipartite graph with bipartition (Γ1,Γ2) where Γ1={x1,x2,…,xn} and Γ2={y1,y2,…,yn} such that xi and yj are linked if and only if either qi and qj are linked in Γ or i=j.
The different derived graphs of a graph Γ=C6 are illustrated in Figure 1.
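To make Definitions 2.1–2.9 concrete, the sketch below builds several of the derived graphs as (order, edge list) pairs. The 0-based vertex labelling, the convention that the vertex placed on edge k receives label n+k, and all helper names are our own choices for illustration; the paraline graph is obtained by composing the first two helpers, PL(Γ)=L(S(Γ)).

```python
from itertools import combinations

# A graph is a pair (n, edges): vertices 0..n-1, edges as 2-tuples (u, v).

def subdivision(n, edges):
    """S(G): a new degree-2 vertex on every edge (Definition 2.1)."""
    es = []
    for k, (u, v) in enumerate(edges):
        es += [(u, n + k), (n + k, v)]      # vertex n+k subdivides edge k
    return n + len(edges), es

def line_graph(n, edges):
    """L(G): one vertex per edge, adjacent iff the edges share an endpoint."""
    m = len(edges)
    es = [(i, j) for i, j in combinations(range(m), 2)
          if set(edges[i]) & set(edges[j])]
    return m, es

def semi_total_point(n, edges):
    """T1(G): S(G) together with the original edges of G (Definition 2.3)."""
    ns, es = subdivision(n, edges)
    return ns, es + list(edges)

def double_graph(n, edges):
    """D[G]: two copies of G plus the cross edges x_i y_j and x_j y_i."""
    copy2 = [(u + n, v + n) for u, v in edges]
    cross = [(u, v + n) for u, v in edges] + [(v, u + n) for u, v in edges]
    return 2 * n, list(edges) + copy2 + cross

def strong_double(n, edges):
    """SD[G]: D[G] plus the edges x_i y_i (Definition 2.8)."""
    ns, es = double_graph(n, edges)
    return ns, es + [(i, i + n) for i in range(n)]

def extended_double_cover(n, edges):
    """G*: bipartite, x_i ~ y_j iff q_i ~ q_j in G or i = j (Definition 2.9)."""
    cross = [(u, v + n) for u, v in edges] + [(v, u + n) for u, v in edges]
    return 2 * n, cross + [(i, i + n) for i in range(n)]

# Paraline graph PL(G) = L(S(G)):
# pl_order, pl_edges = line_graph(*subdivision(n, edges))
```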
In this section, we present our main results. First of all, we deduce the results related to the variance of derived graphs.
Theorem 3.1. The variance of the subdivision graph S(Γ) of Γ is
$$ \operatorname{Var}(S(\Gamma))=\frac{M_1(\Gamma)+4m}{n+m}-\frac{16m^{2}}{(n+m)^{2}}. $$
Proof. By using Eq 2.1 in formula (1.4), we get
Var(S(Γ))=1n+m∑q∈V(S(Γ))d2S(Γ)(q)−1(n+m)2(∑q∈V(S(Γ))dS(Γ)(q))2=1n+m(∑q∈V(Γ)d2Γ(q)+∑q∈E(Γ)(2)2)−1(n+m)2(∑q∈V(Γ)dΓ(q)+∑q∈E(Γ)(2))2=M1(Γ)+4mn+m−(2m+2m)2(n+m)2=M1(Γ)+4mn+m−16m2(n+m)2. |
This finishes the proof.
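A quick numerical sanity check of Theorem 3.1 on a small example is given below; it is an illustrative sketch under the same 0-based labelling assumptions as before, not part of the proof.

```python
# Theorem 3.1 on the path P_4: Var(S(G)) computed directly versus the
# closed form (M1 + 4m)/(n + m) - 16 m^2 / (n + m)^2.
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
m = len(edges)
deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
M1 = sum(d * d for d in deg)                      # M1(P_4) = 10

# Build S(G): put the new vertex n+k on edge k.
s_edges = []
for k, (u, v) in enumerate(edges):
    s_edges += [(u, n + k), (n + k, v)]
s_n = n + m
s_deg = [0] * s_n
for u, v in s_edges:
    s_deg[u] += 1
    s_deg[v] += 1
direct = sum(d * d for d in s_deg) / s_n - (2 * len(s_edges) / s_n) ** 2
closed = (M1 + 4 * m) / (n + m) - 16 * m ** 2 / (n + m) ** 2
print(direct, closed)   # both equal 10/49, approximately 0.2041
```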
Theorem 3.2. The variance of L(Γ) of Γ is
$$ \operatorname{Var}(L(\Gamma))=\frac{F(\Gamma)+2M_2(\Gamma)-4M_1(\Gamma)+4m}{m}-\frac{\big(M_1(\Gamma)-2m\big)^{2}}{m^{2}}. $$
Proof. By using Eq 2.2 in formula (1.4), we obtain
Var(L(Γ))=1m∑q∈V(L(Γ))d2L(Γ)(q)−1m2(∑q∈V(L(Γ))dL(Γ)(q))2=1m∑q=q1q2∈E(Γ)(dΓ(q1)+dΓ(q2)−2)2−1m2(∑q=q1q2∈E(Γ)(dΓ(q1)+dΓ(q2)−2))2=1m∑q=q1q2∈E(Γ)((d2Γ(q1)+d2Γ(q2))+2dΓ(q1)dΓ(q2)+4−4(dΓ(q1)+dΓ(q2)))−1m2(∑q=q1q2∈E(Γ)((dΓ(q1)+dΓ(q2))−2))2=F(Γ)+2M2(Γ)−4M1(Γ)+4mm−(M1(Γ)−2m)2m2. |
This accomplishes the proof.
Theorem 3.3. The variance of the semi-total point graph T1(Γ) of Γ is given by
$$ \operatorname{Var}(T_1(\Gamma))=\frac{4M_1(\Gamma)+4m}{n+m}-\frac{36m^{2}}{(n+m)^{2}}. $$
Proof. By using Eq 2.3 in formula (1.4), we have
Var(T1(Γ))=1n+m∑q∈V(T1(Γ))d2T1(Γ)(q)−1(n+m)2(∑q∈V(T1(Γ))dT1(Γ)(q))2=1n+m(∑q∈V(Γ)(2dΓ(q))2+∑q∈E(Γ)(2)2)−1(n+m)2(∑q∈V(Γ)2dΓ(q)+∑q∈E(Γ)(2))2=4M1(Γ)+4mn+m−(4m+2m)2(n+m)2=4M1(Γ)+4mn+m−36m2(n+m)2. |
This completes the proof.
Theorem 3.4. The variance of the semi-total line graph T2(Γ) of Γ is
$$ \operatorname{Var}(T_2(\Gamma))=\frac{M_1(\Gamma)+F(\Gamma)+2M_2(\Gamma)}{n+m}-\frac{\big(2m+M_1(\Gamma)\big)^{2}}{(n+m)^{2}}. $$
Proof. By using Eq 2.4 in formula (1.4), we get
Var(T2(Γ))=1n+m∑q∈V(T2(Γ))d2T2(Γ)(q)−1(n+m)2(∑q∈V(T2(Γ))dT2(Γ)(q))2=1n+m(∑q∈V(Γ)d2Γ(q)+∑q=q1q2∈E(Γ)(dΓ(q1)+dΓ(q2))2)−1(n+m)2(∑q∈V(Γ)dΓ(q)+∑q∈E(Γ)(dΓ(q1)+dΓ(q2)))2=1n+m(∑q∈V(Γ)d2Γ(q)+∑q=q1q2∈E(Γ)(d2Γ(q1)+d2Γ(q2)+2dΓ(q1)dΓ(q2)))−1(n+m)2(∑q∈V(Γ)dΓ(q)+∑q∈E(Γ)(dΓ(q1)+dΓ(q2)))2=M1(Γ)+F(Γ)+2M2(Γ)n+m−(2m+M1(Γ))2(n+m)2. |
Thus we obtain the required result.
Theorem 3.5. The variance of the total graph T(Γ) of Γ is
$$ \operatorname{Var}(T(\Gamma))=\frac{4M_1(\Gamma)+F(\Gamma)+2M_2(\Gamma)}{n+m}-\frac{\big(4m+M_1(\Gamma)\big)^{2}}{(n+m)^{2}}. $$
Proof. By using Eq 2.5 in formula (1.4), we have
Var(T(Γ))=1n+m∑q∈V(T(Γ))d2T(Γ)(q)−1(n+m)2(∑q∈V(T(Γ))dT(Γ)(q))2=1n+m(∑q∈V(Γ)(2dΓ(q))2+∑q=q1q2∈E(Γ)(dΓ(q1)+dΓ(q2))2)−1(n+m)2(∑q∈V(Γ)2dΓ(q)+∑q∈E(Γ)(dΓ(q1)+dΓ(q2)))2=1n+m(∑q∈V(Γ)4d2Γ(q)+∑q=q1q2∈E(Γ)(d2Γ(q1)+d2Γ(q2)+2dΓ(q1)dΓ(q2)))−1(n+m)2(∑q∈V(Γ)2dΓ(q)+∑q∈E(Γ)(dΓ(q1)+dΓ(q2)))2=4M1(Γ)+F(Γ)+2M2(Γ)n+m−(4m+M1(Γ))2(n+m)2. |
This finishes the proof.
Theorem 3.6. The variance of the paraline graph PL(Γ)=L(S(Γ)) of Γ is given by
$$ \operatorname{Var}(PL(\Gamma))=\frac{2mF(\Gamma)-\big(M_1(\Gamma)\big)^{2}}{4m^{2}}. $$
Proof. From Theorem 3.2 and |V(PL(Γ))|=|V(L(S(Γ)))|=2m, we get
Var(PL(Γ))=Var(L(S(Γ)))=F(S(Γ))+2M2(S(Γ))−4M1(S(Γ))+4|E(S(Γ))|2m−(M1(S(Γ))−2|E(S(Γ))|)2(2m)2. |
Since F(S(Γ))=F(Γ)+8m, M2(S(Γ))=2M1(Γ), M1(S(Γ))=M1(Γ)+4m, and |E(S(Γ))|=2m, we therefore obtain
Var(PL(Γ))=F(Γ)+8m+4M1(Γ)−4M1(Γ)−16m+8m2m−(M1(Γ)+4m−4m)2(2m)2=F(Γ)2m−(M1(Γ))24m2. |
This completes the proof.
Theorem 3.7. The variance of the double graph D[Γ] of Γ is
$$ \operatorname{Var}(D[\Gamma])=\frac{4\big(nM_1(\Gamma)-4m^{2}\big)}{n^{2}}. $$
Proof. By using dD[Γ](q)=2dΓ(q) in formula (1.4), we get
Var(D[Γ])=12n∑q∈V(D[Γ])d2D[Γ](q)−1(2n)2(∑q∈V(D[Γ])dD[Γ](q))2=12n(2∑q∈V(Γ)(2dΓ(q))2)−14n2(2∑q∈V(Γ)(2dΓ(q)))2=12n(8∑q∈V(Γ)d2Γ(q))−14n2(4(2m))2=8M1(Γ)2n−64m24n2=4(nM1(Γ)−4m2)n2. |
This accomplishes the proof.
Theorem 3.8. The variance of the strong double graph SD[Γ] of Γ is
$$ \operatorname{Var}(SD[\Gamma])=\frac{4M_1(\Gamma)}{n}-\frac{16m^{2}}{n^{2}}. $$
Proof. By using dSD[Γ](q)=2dΓ(q)+1 in formula (1.4), we have
Var(SD[Γ])=12n∑q∈V(SD[Γ])d2SD[Γ](q)−1(2n)2(∑q∈V(SD[Γ])dSD[Γ](q))2=12n(2∑q∈V(Γ)(2dΓ(q)+1)2)−14n2(2∑q∈V(Γ)(2dΓ(q)+1))2=1n(∑q∈V(Γ)(4d2Γ(q)+4dΓ(q)+1))−1n2(∑q∈V(Γ)(2dΓ(q)+1))2=1n(4M1(Γ)+8m+n)−1n2(4m+n)2=4nM1(Γ)+8mn+1−16m2n2−1−8mn=4nM1(Γ)−16m2n2. |
Thus we have the desired result.
Theorem 3.9. The variance of the extended double cover Γ∗ of Γ is
$$ \operatorname{Var}(\Gamma^{*})=\frac{M_1(\Gamma)}{n}-\frac{4m^{2}}{n^{2}}. $$
Proof. By using dΓ∗(q)=dΓ(q)+1 in formula (1.4), we get
Var(Γ∗)=12n∑q∈V(Γ∗)d2Γ∗(q)−1(2n)2(∑q∈V(Γ∗)dΓ∗(q))2=12n(2∑q∈V(Γ)(dΓ(q)+1)2)−14n2(2∑q∈V(Γ)(dΓ(q)+1))2=1n(∑q∈V(Γ)(d2Γ(q)+2dΓ(q)+1))−1n2(∑q∈V(Γ)(dΓ(q)+1))2=1n(M1(Γ)+4m+n)−1n2(2m+n)2=1nM1(Γ)+4mn+1−4m2n2−1−4mn=1nM1(Γ)−4m2n2. |
This finishes the proof.
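The three "copy" constructions can be checked numerically as well. The sketch below compares the variance of D[Γ], SD[Γ] and Γ∗ of the path P_4 with the closed forms of Theorems 3.7–3.9; the labels and helper names are our own, chosen for illustration.

```python
# Theorems 3.7-3.9 on P_4: variance of the double graph, strong double graph
# and extended double cover versus the stated closed forms.
def variance(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(d * d for d in deg) / n - (2 * len(edges) / n) ** 2

n, edges = 4, [(0, 1), (1, 2), (2, 3)]
m = len(edges)
deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
M1 = sum(d * d for d in deg)                                 # M1(P_4) = 10

copy2 = [(u + n, v + n) for u, v in edges]
cross = [(u, v + n) for u, v in edges] + [(v, u + n) for u, v in edges]
D = (2 * n, edges + copy2 + cross)                           # double graph
SD = (2 * n, D[1] + [(i, i + n) for i in range(n)])          # strong double graph
EDC = (2 * n, cross + [(i, i + n) for i in range(n)])        # extended double cover

print(variance(*D), 4 * (n * M1 - 4 * m * m) / n ** 2)       # 1.0  1.0
print(variance(*SD), 4 * M1 / n - 16 * m * m / n ** 2)       # 1.0  1.0
print(variance(*EDC), M1 / n - 4 * m * m / n ** 2)           # 0.25 0.25
```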
In this part, we present the results related to the σ index of the derived graphs.
Theorem 3.10. The σ index of the subdivision graph S(Γ) of Γ is
$$ \sigma(S(\Gamma))=F(\Gamma)-4M_1(\Gamma)+8m. $$
Proof. By using Eq 2.1 in formula (1.3), the σ index can be computed in the following manner:
σ(S(Γ))=∑q1q2∈E(S(Γ))(dS(Γ)(q1)−dS(Γ)(q2))2=∑q1q2∈E(S(Γ)),q1∈V(Γ),q2∈E(Γ)(dΓ(q1)−2)2=∑q∈V(Γ)dΓ(q)(d2Γ(q)+4−4dΓ(q))=F(Γ)+8m−4M1(Γ). |
This accomplishes the proof.
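This identity, too, can be verified on a small instance; the following sketch (illustrative only, 0-based labels) checks it for the path P_4.

```python
# Theorem 3.10 on P_4: sigma(S(G)) versus F(G) - 4*M1(G) + 8m.
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
m = len(edges)
deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
M1 = sum(d ** 2 for d in deg)
F = sum(d ** 3 for d in deg)

s_edges = []
for k, (u, v) in enumerate(edges):
    s_edges += [(u, n + k), (n + k, v)]          # subdivision graph S(G)
s_deg = [0] * (n + m)
for u, v in s_edges:
    s_deg[u] += 1
    s_deg[v] += 1
sigma_S = sum((s_deg[u] - s_deg[v]) ** 2 for u, v in s_edges)
print(sigma_S, F - 4 * M1 + 8 * m)   # 2 2
```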
Theorem 3.11. The σ index of the semi-total point graph T1(Γ) of Γ is
$$ \sigma(T_1(\Gamma))=4\sigma(\Gamma)+4F(\Gamma)-8M_1(\Gamma)+8m. $$
Proof. By using Eq 2.3 in formula (1.3), we get
σ(T1(Γ))=∑q1q2∈E(T1(Γ))(dT1(Γ)(q1)−dT1(Γ)(q2))2=∑q1q2∈E(T1(Γ)),q1,q2∈V(Γ)(2dΓ(q1)−2dΓ(q2))2+∑q1q2∈E(T1(Γ)),q1∈V(Γ),q2∈E(Γ)(2dΓ(q1)−2)2=4∑q1q2∈E(Γ),q1,q2∈V(Γ)(dΓ(q1)−dΓ(q2))2+4∑q1∈V(Γ),q2∈E(Γ)(d2Γ(q1)+1−2dΓ(q2))2=4σ(Γ)+∑q1∈V(Γ)dΓ(q1)(d2Γ(q1)+1−2dΓ(q1))=4σ(Γ)+4F(Γ)−8M1(Γ)+8m. |
This completes the proof.
Theorem 3.12. The σ index of the semi-total line graph T2(Γ) of Γ is
$$ \sigma(T_2(\Gamma))=\sigma(L(\Gamma))+F(\Gamma). $$
Proof. By using Eq 2.4 in formula (1.3), the σ index can be computed in the following manner:
σ(T2(Γ))=∑q1q2∈E(T2(Γ))(dT2(Γ)(q1)−dT2(Γ)(q2))2=∑q1q2∈E(T2(Γ)),q1,q2∈E(Γ)(dL(Γ)(q1)+2−dL(Γ)(q1)−2)2+∑q1q2∈E(T2(Γ)),q1∈V(Γ),q2=q1q3∈E(Γ)(dΓ(q1)−dΓ(q1)−dΓ(q3))2=∑q1q2∈E(L(Γ))(dL(Γ)(q1)−dL(Γ)(q2))2+∑q3∈V(Γ)dΓ(q3)dΓ(q3)=σ(L(Γ))+F(Γ). |
Theorem 3.13. The σ index of the total graph T(Γ) of Γ is
$$ \sigma(T(\Gamma))=6\sigma(\Gamma)+\sigma(L(\Gamma)). $$
Proof. By using Eq 2.5 in formula (1.3), the σ index can be computed in the following manner:
σ(T(Γ))=∑q1q2∈E(T(Γ))(dT(Γ)(q1)−dT(Γ)(q2))2=∑q1q2∈E(T(Γ)),q1,q2∈V(Γ)(2dΓ(q1)−2dΓ(q2))2+∑q1q2∈E(T(Γ)),q1,q2∈E(Γ)(dL(Γ)(q1)+2−dL(Γ)(q2)−2)2+∑q1q2∈E(T(Γ)),q1∈V(Γ),q2=q1q3∈E(Γ)(2dΓ(q1)−dΓ(q1)−dΓ(q3))2=4∑q1q2∈E(Γ),q1,q2∈V(Γ)(dΓ(q1)−dΓ(q2))2+∑q1q2∈E(T(Γ)),q1,q2∈E(Γ)(dL(Γ)(q1)−dL(Γ)(q2))2+4∑q1∈V(Γ),q2=q1q3∈E(Γ)(dΓ(q1)−dΓ(q3))2=6σ(Γ)+σ(L(Γ)). |
Theorem 3.14. The σ index of the double graph D[Γ] of Γ is
$$ \sigma(D[\Gamma])=16\,\sigma(\Gamma). $$
Proof. By the definition of the double graph, it is easy to see that dD[Γ](xi)=dD[Γ](yi)=2dΓ(qi), where qi∈V(Γ) and xi,yi∈V(D[Γ]) are the corresponding clone vertices of qi. Therefore, the σ index of D[Γ] is
σ(D[Γ])=∑q1q2∈E(D[Γ])(dD[Γ](q1)−dD[Γ](q2))2=∑xixj∈E(D[Γ])(dD[Γ](xi)−dD[Γ](xj))2+∑yiyj∈E(D[Γ])(dD[Γ](yi)−dD[Γ](yj))2+∑xiyj∈E(D[Γ])(dD[Γ](xi)−dD[Γ](yj))2+∑xjyi∈E(D[Γ])(dD[Γ](xj)−dD[Γ](yi))2=4∑qiqj∈E(Γ)(2dΓ(qi)−2dΓ(qj))2=16∑qiqj∈E(Γ)(dΓ(qi)−dΓ(qj))2=16σ(Γ). |
Theorem 3.15. The σ index of the strong double graph SD[Γ] of Γ is
$$ \sigma(SD[\Gamma])=16\,\sigma(\Gamma). $$
Proof. By the definition of strong double graph, it is easy to see that dSD[Γ](xi)=dSD[Γ](yi)=2dΓ(qi)+1, where qi∈V(Γ) and xi,yi∈V(SD[Γ]) are the corresponding clone vertices of qi. Therefore, the σ index of SD[Γ] is
σ(SD[Γ])=∑q1q2∈E(SD[Γ])(dSD[Γ](q1)−dSD[Γ](q2))2=∑xixj∈E(SD[Γ])(dSD[Γ](xi)−dSD[Γ](xj))2+∑yiyj∈E(SD[Γ])(dSD[Γ](yi)−dSD[Γ](yj))2+∑xiyj∈E(SD[Γ])(dSD[Γ](xi)−dSD[Γ](yj))2+∑xjyi∈E(SD[Γ])(dSD[Γ](xj)−dSD[Γ](yi))2+n∑i=1(dSD[Γ](xi)−dSD[Γ](yi))2=4∑qiqj∈E(Γ)(2dΓ(qi)+1−2dΓ(qj)−1)2=16∑qiqj∈E(Γ)(dΓ(qi)−dΓ(qj))2=16σ(Γ). |
Theorem 3.16. The σ index of the extended double cover graph Γ∗ of Γ is
$$ \sigma(\Gamma^{*})=2\,\sigma(\Gamma). $$
Proof. By the definition of extended double cover graph, it is easy to observe that dΓ∗(xi)=dΓ∗(yi)=dΓ(qi)+1, where qi∈V(Γ) and xi,yi∈V(Γ∗) are the related clone vertices of qi. Therefore, the σ index of Γ∗ is
σ(Γ∗)=∑q1q2∈E(Γ∗)(dΓ∗(q1)−dΓ∗(q2))2=∑xiyj∈E(Γ∗)(dΓ∗(xi)−dΓ∗(yj))2+∑xjyi∈E(Γ∗)(dΓ∗(xj)−dΓ∗(yi))2+n∑i=1(dΓ∗(xi)−dΓ∗(yi))2=2∑qiqj∈E(Γ)(dΓ(qi)+1−dΓ(qj)−1)2=2∑qiqj∈E(Γ)(dΓ(qi)−dΓ(qj))2=2σ(Γ). |
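The multiplicative relations of Theorems 3.14–3.16 are easy to confirm numerically; the sketch below does so for the path P_4 (illustrative code with our own labelling conventions, not part of the proofs).

```python
# Theorems 3.14-3.16 on P_4: sigma of D[G], SD[G] and G* versus
# 16*sigma(G), 16*sigma(G) and 2*sigma(G), respectively.
def sigma(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum((deg[u] - deg[v]) ** 2 for u, v in edges)

n, edges = 4, [(0, 1), (1, 2), (2, 3)]
copy2 = [(u + n, v + n) for u, v in edges]
cross = [(u, v + n) for u, v in edges] + [(v, u + n) for u, v in edges]
ident = [(i, i + n) for i in range(n)]

s = sigma(n, edges)                                          # sigma(P_4) = 2
print(sigma(2 * n, edges + copy2 + cross), 16 * s)           # 32 32
print(sigma(2 * n, edges + copy2 + cross + ident), 16 * s)   # 32 32
print(sigma(2 * n, cross + ident), 2 * s)                    # 4  4
```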
Finally, we give the results related to the discrepancy of derived graphs.
Theorem 3.17. The discrepancy of subdivision graph S(Γ) of Γ is given by
$$ \frac{2}{(n+m)^{2}}\Big(m(m+n)+2m|n-m|\Big)\le \operatorname{Disc}(S(\Gamma)) \le \frac{2}{(n+m)^{2}}\Big(m(m+3n)+2m|n-m|\Big). $$
Proof. By using Eq 2.1 in formula (1.5), we get
Disc(S(Γ))=1n+m∑q∈V(S(Γ))|dS(Γ)(q)−2(2m)n+m|=1n+m(∑q∈V(Γ)|dΓ(q)−4mn+m|+∑q∈E(Γ)|2−4mn+m|). |
Since |a|−|b|≤|a−b|≤|a|+|b|, we have
Disc(S(Γ))≥1n+m(∑q∈V(Γ)|dΓ(q)|−∑q∈V(Γ)|4mn+m|+m|2n+2m−4m|n+m)=1n+m(2m−4mnn+m)+4m|n−m|(n+m)2=1n+m(2m(n+m−2n)n+m)+4m|n−m|(n+m)2=2(n+m)2(m(m+n)+2m|n−m|). |
Similarly
Disc(S(Γ))≤1n+m(∑q∈V(Γ)|dΓ(q)|+∑q∈V(Γ)|4mn+m|+m|2n+2m−4m|n+m)=1n+m(2m+4mnn+m)+4m|n−m|(n+m)2=1n+m(2m(n+m+2n)n+m)+4m|n−m|(n+m)2=2(n+m)2(m(m+3n)+2m|n−m|). |
This accomplishes the proof.
Theorem 3.18. The discrepancy of line graph L(Γ) of Γ is
$$ 0\le \operatorname{Disc}(L(\Gamma)) \le \frac{2}{m}M_1(\Gamma). $$
Proof. By using Eq 2.2 in formula (1.5), we get
Disc(L(Γ))=1m∑q∈V(L(Γ))|dL(Γ)(q)−2(1/2M1(Γ)−m)m|=1m∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)−2−M1(Γ)−2mm|=1m∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)−M1(Γ)m|. |
Since |a|−|b|≤|a−b|≤|a|+|b|, we have
Disc(L(Γ))≥1m(∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)|−∑q=xy∈E(Γ)|M1(Γ)m|)=1m(M1(Γ)−mM1(Γ)m)=0. |
Similarly
Disc(L(Γ))≤1m(∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)|+∑q=xy∈E(Γ)|M1(Γ)m|)=1m(M1(Γ)+mM1(Γ)m)=2mM1(Γ). |
This finishes the proof.
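The bounds of Theorem 3.18 can also be tested on a small example; the sketch below (illustrative only) checks them for the path P_4, whose line graph is P_3.

```python
# Theorem 3.18 on P_4: 0 <= Disc(L(G)) <= 2*M1(G)/m.
from itertools import combinations

n, edges = 4, [(0, 1), (1, 2), (2, 3)]
m = len(edges)
deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
M1 = sum(d ** 2 for d in deg)

# Line graph: one vertex per edge, adjacent iff the edges share an endpoint.
l_edges = [(i, j) for i, j in combinations(range(m), 2)
           if set(edges[i]) & set(edges[j])]
l_deg = [0] * m
for u, v in l_edges:
    l_deg[u] += 1
    l_deg[v] += 1
disc_L = sum(abs(d - 2 * len(l_edges) / m) for d in l_deg) / m
print(0 <= disc_L <= 2 * M1 / m, disc_L)   # True 0.444...
```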
Theorem 3.19. The discrepancy of semi-total point graph T1(Γ) of Γ is
$$ \frac{2m}{(n+m)^{2}}\big(2m-n+|n-2m|\big)\le \operatorname{Disc}(T_1(\Gamma)) \le \frac{2m}{(n+m)^{2}}\big(2m+5n+|n-2m|\big). $$
Proof. By using Eq 2.3 in formula (1.5), we get
Disc(T1(Γ))=1n+m∑q∈V(T1(Γ))|dT1(Γ)(q)−2(3m)n+m|=1n+m(∑q∈V(Γ)|2dΓ(q)−6mn+m|+∑q∈E(Γ)|2−6mn+m|). |
Since |a|−|b|≤|a−b|≤|a|+|b|, we have
Disc(T1(Γ))≥1n+m(2∑q∈V(Γ)|dΓ(q)|−∑q∈V(Γ)|6mn+m|+2m|n+m−3m|n+m)=2n+m(2m−3mnn+m)+2m|n−2m|(n+m)2=2n+m(2mn+2m2−3mnn+m)+2m|n−2m|(n+m)2=2m(n+m)2(2m−n+|n−2m|). |
Similarly
Disc(T1(Γ))≤1n+m(2∑q∈V(Γ)|dΓ(q)|+∑q∈V(Γ)|6mn+m|+2m|n+m−3m|n+m)=2n+m(2m+3mnn+m)+2m|n−m|(n+m)2=2n+m(2mn+2m2+3mn)n+m)+2m|n−2m|(n+m)2=2m(n+m)2(2m+5n+|n−2m|). |
This accomplishes the proof.
Theorem 3.20. The discrepancy of semi-total line graph T2(Γ) of Γ is
$$ 0\le \operatorname{Disc}(T_2(\Gamma)) \le \frac{2}{n+m}\big(M_1(\Gamma)+2m\big). $$
Proof. By using Eq 2.4 in formula (1.5), we get
Disc(T2(Γ))=1n+m∑q∈V(T2(Γ))|dT2(Γ)(q)−2(1/2M1(Γ)+m)n+m|=1n+m(∑q∈V(Γ)|dΓ(q)−M1(Γ)+2mn+m|+∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)−M1(Γ)+2mn+m|). |
Since |a|−|b|≤|a−b|≤|a|+|b|, we have
Disc(T2(Γ))≥1n+m(∑q∈V(Γ)|dΓ(q)|−∑q∈V(Γ)|M1(Γ)+2mn+m|)+1n+m(∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)|−∑q=xy∈E(Γ)|M1(Γ)+2mn+m|)=1n+m(2m−n(M1(Γ)+2m)n+m+M1(Γ)−m(M1(Γ)+2m)n+m)=0. |
Similarly
Disc(T2(Γ))≤1n+m(∑q∈V(Γ)|dΓ(q)|+∑q∈V(Γ)|M1(Γ)+2mn+m|)+1n+m(∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)|+∑q=xy∈E(Γ)|M1(Γ)+2mn+m|)=1n+m(2m+n(M1(Γ)+2m)n+m+M1(Γ)+m(M1(Γ)+2m)n+m)=1n+m(2m+(n+m)(M1(Γ)+2m)n+m+M1(Γ))=2n+m(M1(Γ)+2m). |
This finishes the proof.
Theorem 3.21. The discrepancy of total graph T(Γ) of Γ is
$$ 0\le \operatorname{Disc}(T(\Gamma)) \le \frac{2}{n+m}\big(M_1(\Gamma)+4m\big). $$
Proof. By using Eq 2.5 in formula (1.5), we get
Disc(T(Γ))=1n+m∑q∈V(T(Γ))|dT(Γ)(q)−2(1/2M1(Γ)+2m)n+m|=1n+m(∑q∈V(Γ)|2dΓ(q)−M1(Γ)+4mn+m|+∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)−M1(Γ)+4mn+m|). |
Since |a|−|b|≤|a−b|≤|a|+|b|, we have
Disc(T(Γ))≥1n+m(∑q∈V(Γ)|2dΓ(q)|−∑q∈V(Γ)|M1(Γ)+4mn+m|)+1n+m(∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)|−∑q=xy∈E(Γ)|M1(Γ)+4mn+m|)=1n+m(4m−n(M1(Γ)+4m)n+m+M1(Γ)−m(M1(Γ)+4m)n+m)=0. |
Similarly
Disc(T(Γ))≤1n+m(∑q∈V(Γ)|2dΓ(q)|+∑q∈V(Γ)|M1(Γ)+4mn+m|)+1n+m(∑q=xy∈E(Γ)|dΓ(x)+dΓ(y)|+∑q=xy∈E(Γ)|M1(Γ)+4mn+m|)=1n+m(4m+n(M1(Γ)+4m)n+m+M1(Γ)+m(M1(Γ)+4m)n+m)=1n+m(4m+(n+m)(M1(Γ)+4m)n+m+M1(Γ))=2n+m(M1(Γ)+4m). |
This accomplishes the proof.
Theorem 3.22. The discrepancy of the paraline graph PL(Γ) of Γ is given by
$$ 0\le \operatorname{Disc}(PL(\Gamma)) \le \frac{1}{2m}\big(M_1(\Gamma)+4m\big). $$
Proof. From formula (1.5), we get
Disc(PL(Γ))=12m∑q∈V(PL(Γ))|dPL(Γ)(v)−2(1/2M1(S(Γ))−2m)2m|=12m∑q=xy∈E(S(Γ))|dS(Γ)(x)+dS(Γ)(y)−2−M1(S(Γ))−4m2m|=12m∑q=xy∈E(S(Γ))|dS(Γ)(x)+dS(Γ)(y)−M1(S(Γ))2m| |
Since |a|−|b|≤|a−b|≤|a|+|b|, we have
Disc(PL(Γ))≥12m(∑q=xy∈E(S(Γ))|dS(Γ)(x)+dS(Γ)(y)|−∑q=xy∈E(S(Γ))|M1(S(Γ))2m|)=12m(M1(S(Γ))−2mM1(S(Γ))2m)=0. |
Similarly
Disc(PL(Γ))≤12m(∑q=xy∈E(S(Γ))|dS(Γ)(x)+dS(Γ)(y)|+∑q=xy∈E(S(Γ))|M1(S(Γ))2m|)=12m(M1(S(Γ))+2mM1(S(Γ))2m)=12mM1(S(Γ)). |
Since M1(S(Γ))=M1(Γ)+4m, it follows that
Disc(PL(Γ))≤12m(M1(Γ)+4m). |
This finishes the proof.
Theorem 3.23. The discrepancy of the double graph D[Γ] of Γ is given by
$$ \frac{2m}{n}\le \operatorname{Disc}(D[\Gamma]) \le \frac{6m}{n}. $$
Proof. From formula (1.5), we get
Disc(D[Γ])=12n∑q∈V(D[Γ])|dD[Γ](v)−2(2m)2n|=22n∑q∈V(Γ)|dΓ(q)−2mn| |
Since |a|−|b|≤|a−b|≤|a|+|b|, we have
Disc(D[Γ])≥1n(∑q∈V(Γ)|2dΓ(q)|−∑q∈V(Γ)|2mn|)=1n(4m−2mnn)=2mn. |
Similarly
Disc(D[Γ])≤1n(∑q∈V(Γ)|2dΓ(q)|+∑q∈V(Γ)|2mn|)=1n(4m+2mnn)=6mn. |
Theorem 3.24. The discrepancy of the strong double graph SD[Γ] of Γ is given by
$$ 0\le \operatorname{Disc}(SD[\Gamma]) \le \frac{8m}{n}. $$
Proof. From formula (1.5), we get
Disc(SD[Γ])=12n∑q∈V(SD[Γ])|dSD[Γ](v)−2(4m+n)2n|=22n∑q∈V(Γ)|2dΓ(q)+1−4m+nn|=1n∑q∈V(Γ)|2dΓ(q)−4mn|. |
Since |a|−|b|≤|a−b|≤|a|+|b|, we have
Disc(SD[Γ])≥1n(∑q∈V(Γ)|2dΓ(q)|−∑q∈V(Γ)|4mn|)=1n(4m−4mnn)=0. |
Similarly
Disc(SD[Γ])≤1n(∑q∈V(Γ)|2dΓ(q)|+∑q∈V(Γ)|4mn|)=1n(4m+4mnn)=8mn. |
This accomplishes the proof.
Theorem 3.25. The discrepancy of extended double cover graph Γ∗ of Γ is
$$ 0\le \operatorname{Disc}(\Gamma^{*}) \le \frac{4m}{n}. $$
Proof. From formula (1.5), we get
Disc(Γ∗)=12n∑q∈V(Γ∗)|dΓ∗(v)−2(2m+n)2n|=22n∑q∈V(Γ)|dΓ(q)+1−2m+nn|=1n∑q∈V(Γ)|dΓ(q)−2mn|. |
Since |a|−|b|≤|a−b|≤|a|+|b|, we have
Disc(Γ∗)≥1n(∑q∈V(Γ)|dΓ(q)|−∑q∈V(Γ)|2mn|)=1n(2m−2mnn)=0. |
Similarly
Disc(Γ∗)≤1n(∑q∈V(Γ)|dΓ(q)|+∑q∈V(Γ)|2mn|)=1n(2m+2mnn)=4mn. |
Thus we have the required result.
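As a quick illustration, the sketch below checks the bounds of Theorems 3.24 and 3.25 on the path P_4; the construction helpers and labels are our own and the code is not part of the proofs.

```python
# Theorems 3.24 and 3.25 on P_4: discrepancy bounds for SD[G] and G*.
def disc(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    avg = 2 * len(edges) / n
    return sum(abs(d - avg) for d in deg) / n

n, edges = 4, [(0, 1), (1, 2), (2, 3)]
m = len(edges)
copy2 = [(u + n, v + n) for u, v in edges]
cross = [(u, v + n) for u, v in edges] + [(v, u + n) for u, v in edges]
ident = [(i, i + n) for i in range(n)]

SD = (2 * n, edges + copy2 + cross + ident)      # strong double graph
EDC = (2 * n, cross + ident)                     # extended double cover

print(0 <= disc(*SD) <= 8 * m / n, disc(*SD))    # True 1.0
print(0 <= disc(*EDC) <= 4 * m / n, disc(*EDC))  # True 0.5
```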
The analysis of graphs using numerical graph invariants is a successful strategy that plays an appreciable role in predicting the physico-chemical properties of a given chemical structure; the computation of topological descriptors is therefore an interesting and attractive line of research. In this paper, we have provided expressions for the variance of vertex degrees and the σ irregularity index, together with bounds for the discrepancy, of the subdivision graph, semi-total point graph, semi-total line graph, total graph, line graph, paraline graph, double graph, strong double graph, and extended double cover of a graph.
The authors declare no conflict of interest.