
In recent years, fractional partial differential equations (FPDEs) have been widely used in natural science and engineering [1,2,3,4]. Their advantage lies in the ability to describe materials and processes that exhibit memory and hereditary properties [5,6]. However, solutions of FPDEs are considerably harder to obtain, and researchers have exploited diverse techniques for their investigation, such as the finite difference method (FDM) [7], the finite element method [8], the spectral method [9], and the virtual element method [10]. Developing effective numerical methods to approximate FPDEs remains an active goal of research.
In recent years, neural networks (NNs) have been successfully applied to problems in various fields [11,12,13]. Owing to the high expressiveness of NNs in function approximation [14,15,16], using NNs to solve differential and integral equations has become an active and important research field. Physics-informed neural networks (PINNs) [17,18,19,20] are machine learning models that combine deep learning with physical knowledge: they embed the PDEs into the loss function of the NNs, enabling the NNs to learn solutions of the PDEs. The PINNs algorithm is meshless and simple, and can be applied to many types of PDEs, including integro-differential equations, FPDEs, and stochastic partial differential equations. Moreover, PINNs solve the inverse problem of PDEs just as easily as the forward problem [17]. PINNs have been successfully applied to various problems in scientific computing [21,22,23]. Pang et al. [24] used the FDM to approximate the fractional derivatives that cannot be differentiated automatically, thus extending PINNs to fPINNs for solving FPDEs.
Despite the past successes of deep learning, solving a wide range of PDEs remains theoretically and practically challenging as complexity increases. Many aspects of PINNs therefore need further improvement to achieve more accurate predictions, higher computational efficiency, and robust training. Lu et al. [25] proposed DeepXDE, a deep learning library for solving PDEs, and introduced a residual-based adaptive refinement method to improve the training efficiency of PINNs: new residual points are added where the residuals of the PDEs are large, so that discontinuities of PDEs can be captured well. Zhang et al. [26] combined fPINNs with the spectral method to solve time-fractional phase field models, reducing the number of discrete approximations of the fractional operators and thereby improving training efficiency and error accuracy. Wu et al. [27] conducted a comprehensive study on two types of sampling for PINNs, non-adaptive uniform sampling and adaptive non-uniform sampling, and their results can serve as a practical guide for selecting sampling methods. Zhang et al. [28] removed the soft constraints of PDEs from the loss function and used the Lie symmetry group to generate labeled data for PDEs to build a supervised learning model, thus effectively predicting the large-amplitude and high-frequency solutions of the Klein-Gordon equation. Zhang et al. [29] introduced the symmetry-enhanced physics-informed neural network (SPINN), which incorporates the invariant surface conditions derived from Lie symmetries or non-classical symmetries of PDEs into the loss function of PINNs to improve their accuracy. Lu et al. [30] and Xie et al. [31] introduced gradient-enhanced physics-informed neural networks (gPINNs) to solve PDEs, and the idea of embedding the gradient information of the PDE residuals into the loss function has also proven effective in other methods such as Gaussian process regression [32].
In this paper, inspired by the above works, gfPINNs are applied to solve the forward and inverse problems of the multiterm time-fractional Burger-type equation. The integer-order derivatives are handled using the automatic differentiation capability of the NNs, while the fractional derivatives are approximated by finite difference discretization [33,34]. The residual information of the equation is then incorporated into the loss function of the NNs, which is optimized to yield the optimal parameters. For the inverse problems of the multiterm time-fractional Burger-type equation, the overall form of the equation is known, but the coefficient and the orders of the time-fractional derivatives are unknown. The gfPINNs explicitly incorporate information from the equation by including its differential operators directly in the loss function; the parameters to be identified appear in these operators and are recovered by minimizing the loss function with respect to them. A numerical comparison between fPINNs and gfPINNs is conducted on several examples, and the results demonstrate the effectiveness of gfPINNs in solving the multiterm time-fractional Burger-type equation.
The structure of this paper is as follows. In Section 2, we define forward and inverse problems for the multiterm time-fractional Burger-type equation. In Section 3, we introduce fPINNs and gfPINNs and give the finite difference discretization to approximate the time-fractional derivatives. In Section 4, we demonstrate the effectiveness of gfPINNs in solving the forward and inverse problems of the multiterm time-fractional Burger-type equation by numerical examples, and compare the experimental results of fPINNs and gfPINNs. Finally, we give the conclusions of this paper in Section 5.
We consider the following multiterm time-fractional Burger-type equation defined on the bounded domain Ω:
$$c_1\,{}^{C}_{0}D^{\alpha}_{t}u(x,t)+c_2\,{}^{C}_{0}D^{\gamma}_{t}u(x,t)+u(x,t)\frac{\partial u(x,t)}{\partial x}=v\frac{\partial^{2}u(x,t)}{\partial x^{2}}+f(x,t),\tag{2.1}$$
where (x,t)∈Ω×[0,T] and the initial and boundary conditions are given as
$$\begin{cases}u(x,t)=0,&x\in\partial\Omega,\\u(x,0)=g(x),&x\in\Omega,\end{cases}\tag{2.2}$$
where $u(x,t)$ is the solution of the equation, $f(x,t)$ is the forcing term whose values are only known at scattered spatio-temporal coordinates, $v$ is the kinematic viscosity of the fluid, $g(x)$ is a sufficiently smooth function, the fractional orders $\alpha$ and $\gamma$ are restricted to $(0,1)$ and $(1,2)$, respectively, and ${}^{C}_{0}D^{\theta}_{t}u(x,t)$ is the Caputo time-fractional derivative of order $\theta$ ($\theta>0$, $n-1\le\theta<n$) of $u(x,t)$ with respect to $t$ [35,36]:
$${}^{C}_{0}D^{\theta}_{t}u(x,t)=\begin{cases}\dfrac{1}{\Gamma(n-\theta)}\displaystyle\int_{0}^{t}(t-s)^{n-1-\theta}\dfrac{\partial^{n}u(x,s)}{\partial s^{n}}\,ds,&\theta\notin\mathbb{Z}^{+},\\[2mm]\dfrac{\partial^{\theta}u(x,t)}{\partial t^{\theta}},&\theta\in\mathbb{Z}^{+},\end{cases}\tag{2.3}$$
where Γ(⋅) is the gamma function.
The forward and inverse problems for the multiterm time-fractional Burger-type equation are described as follows. For the forward problem, given the fractional orders $\alpha$ and $\gamma$, the forcing term $f$, and the initial and boundary conditions, we solve for the solution $u(x,t)$. For the inverse problem, given the initial and boundary conditions, the forcing term $f$, and additional concentration measurements at the final time, $u(x,t)=h(x,t)$, we solve for the fractional orders $\alpha$ and $\gamma$, the flow velocity $v$, and the solution $u(x,t)$.
This subsection introduces the idea of fPINNs and we consider both the forward and inverse problems, along with their corresponding NNs. We first consider the forward problem of the multiterm time-fractional Burger-type equation in the following form:
$$\begin{cases}\mathcal{L}\{u(x,t)\}=f(x,t),&(x,t)\in\Omega\times[0,T],\\u(x,t)=0,&x\in\partial\Omega,\\u(x,0)=g(x),&x\in\Omega,\end{cases}\tag{3.1}$$
where $\mathcal{L}\{\cdot\}$ is a nonlinear operator and $\mathcal{L}\{u(x,t)\}=c_1\,{}^{C}_{0}D^{\alpha}_{t}u(x,t)+c_2\,{}^{C}_{0}D^{\gamma}_{t}u(x,t)+u(x,t)\frac{\partial u(x,t)}{\partial x}-v\frac{\partial^{2}u(x,t)}{\partial x^{2}}$. We divide the nonlinear operator $\mathcal{L}\{\cdot\}$ into two parts, $\mathcal{L}=\mathcal{L}_{AD}+\mathcal{L}_{nonAD}$. The first part is an integer-order derivative operator, which can be differentiated automatically (AD) using the chain rule. We have
$$\mathcal{L}_{AD}\{\cdot\}=\begin{cases}u(x,t)\dfrac{\partial u(x,t)}{\partial x}-v\dfrac{\partial^{2}u(x,t)}{\partial x^{2}},&\alpha\in(0,1),\ \gamma\in(1,2),\\[2mm]c_2\dfrac{\partial^{2}u(x,t)}{\partial t^{2}}+u(x,t)\dfrac{\partial u(x,t)}{\partial x}-v\dfrac{\partial^{2}u(x,t)}{\partial x^{2}},&\alpha\in(0,1),\ \gamma=2,\\[2mm]c_1\dfrac{\partial u(x,t)}{\partial t}+u(x,t)\dfrac{\partial u(x,t)}{\partial x}-v\dfrac{\partial^{2}u(x,t)}{\partial x^{2}},&\alpha=1,\ \gamma\in(1,2),\end{cases}\tag{3.2}$$
and the second category consists of operators that lack automatic differentiation capabilities:
$$\mathcal{L}_{nonAD}\{\cdot\}=\begin{cases}c_1\,{}^{C}_{0}D^{\alpha}_{t}u(x,t)+c_2\,{}^{C}_{0}D^{\gamma}_{t}u(x,t),&\alpha\in(0,1),\ \gamma\in(1,2),\\c_1\,{}^{C}_{0}D^{\alpha}_{t}u(x,t),&\alpha\in(0,1),\ \gamma=2,\\c_2\,{}^{C}_{0}D^{\gamma}_{t}u(x,t),&\alpha=1,\ \gamma\in(1,2).\end{cases}\tag{3.3}$$
For $\mathcal{L}_{nonAD}$, we can discretize it using the FDM; we denote by $\mathcal{L}_{FDM}$ the discretized version of $\mathcal{L}_{nonAD}$.
During the NNs training process, our goal is to optimize its parameters in order to ensure that the approximate solution of the equation closely satisfies the initial and boundary conditions. The approximate solution is chosen as
$$\tilde{u}(x,t)=t\,\rho(x)\,u_{NN}(x,t)+g(x),\tag{3.4}$$
where $u_{NN}$ represents the output of the NNs. The NN acts as a surrogate model, approximating the mapping from the spatio-temporal coordinates to the solution of the equation. It is defined by its weights and biases, which form the parameter vector $\mu$; see Figure 1 for a simple NN. This network is fully connected with a single hidden layer of three neurons. Its two inputs $x$ and $t$ first undergo a linear transformation to give $x_1=w_1x+w_4t+b_1$, $x_2=w_2x+w_5t+b_2$, and $x_3=w_3x+w_6t+b_3$ in the hidden layer, followed by a nonlinear transformation $Y_i=f(x_i)$ for $i=1,2,3$, where we choose the hyperbolic tangent $\tanh(\cdot)$ as the activation function. A final linear transformation of the $Y_i$ gives the output of the NNs, $u_{NN}(x,t;\mu)=w_7Y_1+w_8Y_2+w_9Y_3+b_4$. The parameter vector $\mu$ comprises the weights $w_i$ and biases $b_i$. The auxiliary function $\rho(x)$ is preselected with $\rho(0)=\rho(1)=0$, and $g(x)$ is the initial condition function, so that $\tilde{u}$ satisfies the initial and boundary conditions automatically.
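The forward pass of the small network in Figure 1 can be sketched in a few lines of pure Python; this is our own illustration (the function name `u_nn` and any concrete weight values are ours, not the paper's code):

```python
import math

def u_nn(x, t, w, b):
    """Forward pass of the Figure 1 network: two inputs (x, t), one hidden
    layer of three tanh neurons, and a linear output layer.
    w = (w1, ..., w9) and b = (b1, ..., b4) form the parameter vector mu."""
    w1, w2, w3, w4, w5, w6, w7, w8, w9 = w
    b1, b2, b3, b4 = b
    # linear transformation into the hidden layer
    x1 = w1 * x + w4 * t + b1
    x2 = w2 * x + w5 * t + b2
    x3 = w3 * x + w6 * t + b3
    # nonlinear transformation Y_i = tanh(x_i)
    y1, y2, y3 = math.tanh(x1), math.tanh(x2), math.tanh(x3)
    # linear output layer
    return w7 * y1 + w8 * y2 + w9 * y3 + b4
```

In a framework such as TensorFlow, the same map is built from dense layers, and the derivatives of $u_{NN}$ with respect to $x$ and $t$ that appear in $\mathcal{L}_{AD}$ are obtained by automatic differentiation rather than by hand.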
The loss function of fPINNs for the forward problem with the approximate solution is defined as the mean-squared error of the equation residual
$$L_{FW}=\frac{1}{|S_F|}\sum_{(x,t)\in S_F}\big[\mathcal{L}_{FDM}\{\tilde{u}(x,t)\}+\mathcal{L}_{AD}\{\tilde{u}(x,t)\}-f(x,t)\big]^{2},\tag{3.5}$$
where SF⊂Ω×[0,T] and |SF| represents the number of training points. Then, we train the NNs to optimize the loss function of the forward problem with respect to the NNs parameters μ, thus obtaining the optimal parameters μbest. Finally, we specify a set of arbitrary test points to test the trained NNs and observe the training performance.
The code for solving the forward and inverse problems of the equation using NNs is similar: we only need to add the parameters to be identified in the inverse problem to the loss function optimized in the forward problem, with no other changes. Next, we consider the following form of the inverse problem:
$$\begin{cases}\mathcal{L}_{\xi=\{\alpha,\gamma,v\}}\{u(x,t)\}=f(x,t),&(x,t)\in\Omega\times[0,T],\\u(x,t)=0,&x\in\partial\Omega,\\u(x,0)=g(x),&x\in\Omega,\\u(x,t)=h(x,t),&(x,t)\in\Omega\times[0,T],\end{cases}\tag{3.6}$$
where $\xi$ denotes the parameters of the equation, so the loss function $L_{IV}$ for the inverse problem under consideration is
$$L_{IV}\{\mu,\xi=\{\alpha,\gamma,v\}\}=W_{I_1}\frac{1}{|S_{I_1}|}\sum_{(x,t)\in S_{I_1}}\big[\mathcal{L}^{\{\alpha,\gamma\}}_{FDM}\{\tilde{u}(x,t)\}+\mathcal{L}^{v}_{AD}\{\tilde{u}(x,t)\}-f(x,t)\big]^{2}+W_{I_2}\frac{1}{|S_{I_2}|}\sum_{(x,t)\in S_{I_2}}\big[\tilde{u}(x,t)-h(x,t)\big]^{2},\tag{3.7}$$
where $\alpha\in(0,1)$ and $\gamma\in(1,2)$, $S_{I_1}\subset\Omega\times[0,T]$ and $S_{I_2}\subset\Omega\times[0,T]$ are two sets of different training points, and $W_{I_1}$ and $W_{I_2}$ are preselected weight coefficients. We train the NNs to minimize the loss function, thereby obtaining the optimal orders $\alpha_{best}$ and $\gamma_{best}$, the flow velocity $v_{best}$, and the optimal parameters $\mu_{best}$ of the NNs.
We incorporate the residual information of the equation into the loss function of the NNs and train the NNs to minimize this loss, thus obtaining the optimal parameters. If the residuals of the PDEs are identically zero, then the gradient of the residuals must also be zero; a vanishing residual gradient is therefore a necessary condition for an exact solution, and adding this gradient information to the loss function provides additional supervision for training the NNs. One motivation behind gfPINNs is that the residual in the loss function often fluctuates around zero; penalizing the slope of the residual damps these fluctuations, making the residual closer to zero. In this section, we continue to consider the formulations of the forward and inverse problems of the equation discussed in the previous section.
We first consider the forward problem in the form of (3.1) and provide the loss function of gfPINNs for this form:
$$L_{gFW}=W_F L_{FW}+W_{g_1F}L_{g_1FW}+W_{g_2F}L_{g_2FW},\tag{3.8}$$
where
$$L_{g_1FW}=\frac{1}{|S_{g_1F}|}\sum_{(x,t)\in S_{g_1F}}\Big[\frac{\partial \mathcal{L}_{FDM}\{\tilde{u}(x,t)\}}{\partial x}+\frac{\partial \mathcal{L}_{AD}\{\tilde{u}(x,t)\}}{\partial x}-\frac{\partial f(x,t)}{\partial x}\Big]^{2},\tag{3.9}$$
$$L_{g_2FW}=\frac{1}{|S_{g_2F}|}\sum_{(x,t)\in S_{g_2F}}\Big[\frac{\partial \mathcal{L}_{FDM}\{\tilde{u}(x,t)\}}{\partial t}+\frac{\partial \mathcal{L}_{AD}\{\tilde{u}(x,t)\}}{\partial t}-\frac{\partial f(x,t)}{\partial t}\Big]^{2},\tag{3.10}$$
and the approximate solution of the equation is the same as in Eq (3.4): $\tilde{u}(x,t)=t\,\rho(x)\,u_{NN}(x,t)+g(x)$. The expression $L_{FW}$ is given in Eq (3.5); $W_F$, $W_{g_1F}$, and $W_{g_2F}$ are preselected weighting coefficients, and $S_{g_1F}\subset\Omega\times[0,T]$ and $S_{g_2F}\subset\Omega\times[0,T]$ are two sets of different training points.
Next, we consider the inverse problem in the form of (3.6). The approach of gfPINNs for the inverse problem is similar to that of fPINNs, and its loss function is
$$L_{gIV}=W_I L_{IV}\{\mu,\xi=\{\alpha,\gamma,v\}\}+W_{g_1I}L_{g_1IV}+W_{g_2I}L_{g_2IV},\tag{3.11}$$
where
$$L_{g_1IV}=W_{g_1I_1}\frac{1}{|S_{g_1I_1}|}\sum_{(x,t)\in S_{g_1I_1}}\Big[\frac{\partial \mathcal{L}^{\{\alpha,\gamma\}}_{FDM}\{\tilde{u}(x,t)\}}{\partial x}+\frac{\partial \mathcal{L}^{v}_{AD}\{\tilde{u}(x,t)\}}{\partial x}-\frac{\partial f(x,t)}{\partial x}\Big]^{2}+W_{g_1I_2}\frac{1}{|S_{g_1I_2}|}\sum_{(x,t)\in S_{g_1I_2}}\Big[\frac{\partial\tilde{u}(x,t)}{\partial x}-\frac{\partial h(x,t)}{\partial x}\Big]^{2},\tag{3.12}$$
$$L_{g_2IV}=W_{g_2I_1}\frac{1}{|S_{g_2I_1}|}\sum_{(x,t)\in S_{g_2I_1}}\Big[\frac{\partial \mathcal{L}^{\{\alpha,\gamma\}}_{FDM}\{\tilde{u}(x,t)\}}{\partial t}+\frac{\partial \mathcal{L}^{v}_{AD}\{\tilde{u}(x,t)\}}{\partial t}-\frac{\partial f(x,t)}{\partial t}\Big]^{2}+W_{g_2I_2}\frac{1}{|S_{g_2I_2}|}\sum_{(x,t)\in S_{g_2I_2}}\Big[\frac{\partial\tilde{u}(x,t)}{\partial t}-\frac{\partial h(x,t)}{\partial t}\Big]^{2},\tag{3.13}$$
and the expression $L_{IV}\{\mu,\xi=\{\alpha,\gamma,v\}\}$ is given in Eq (3.7), where $W_I$, $W_{g_1I}$, $W_{g_2I}$, $W_{g_1I_1}$, $W_{g_1I_2}$, $W_{g_2I_1}$, and $W_{g_2I_2}$ are preselected weighting coefficients, and $S_{g_1I_1},S_{g_2I_1},S_{g_1I_2},S_{g_2I_2}\subset\Omega\times[0,T]$ are four sets of different training points.
This defines the loss function of gfPINNs; the training procedure is exactly the same as for fPINNs discussed above: we train the NNs to obtain their optimal parameters.
In the $x$ direction $[0,M]$, we take the mesh points $x_i=ih_x$, $i=0,1,2,\dots,M_1$, and in the $t$ direction $[0,T]$, we take the mesh points $t_n=n\tau$, $n=0,1,\dots,N$, where $h_x=M/M_1$ and $\tau=T/N$ are the uniform spatial and temporal step sizes, respectively. Denote $\Omega_h\equiv\{x_i\mid 0\le i\le M_1\}$ and $\Omega_\tau\equiv\{t_n\mid 0\le n\le N\}$. Suppose $u^{n}_{i}=u(x_i,t_n)$ is a grid function on $\Omega_h\times\Omega_\tau$.
We approximate the fractional derivatives of the equation using the finite difference discretization [33,34].
For $\alpha\in(0,1)$, we have ${}^{C}_{0}D^{\alpha}_{t}u(x,t)\big|_{(x_i,t_n)}=D^{\alpha}_{\tau}\tilde{u}^{n}_{i}+R_1(\tilde{u}^{n}_{i})$, with
$$D^{\alpha}_{\tau}\tilde{u}^{n}_{i}:=\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\Big[a^{\alpha}_{0}\tilde{u}^{n}_{i}+\sum_{k=1}^{n-1}(a^{\alpha}_{n-k}-a^{\alpha}_{n-k-1})\tilde{u}^{k}_{i}-a^{\alpha}_{n-1}\tilde{u}^{0}_{i}\Big],\tag{3.14}$$
where $\tilde{u}^{n}_{i}=\tilde{u}(x_i,t_n)$, $R_1\le C\tau^{2-\alpha}$, and $a^{\alpha}_{k}=(k+1)^{1-\alpha}-k^{1-\alpha}$.
Lemma 3.1. [33] Let $\alpha\in(0,1)$ and $a^{\alpha}_{l}=(l+1)^{1-\alpha}-l^{1-\alpha}$, $l=0,1,2,\dots$ Then:
(1) $1=a^{\alpha}_{0}>a^{\alpha}_{1}>a^{\alpha}_{2}>\cdots>a^{\alpha}_{l}>0$ and $\lim_{l\to\infty}a^{\alpha}_{l}=0$;
(2) $(1-\alpha)l^{-\alpha}<a^{\alpha}_{l-1}<(1-\alpha)(l-1)^{-\alpha}$, $l\ge 1$.
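Both properties of Lemma 3.1 are easy to check numerically. The short sketch below (our own illustration) evaluates the coefficients for $\alpha=0.5$; the finite two-sided bound in property (2) is checked from $l=2$ onward, since at $l=1$ the upper bound is infinite and holds trivially.

```python
def a_coeff(alpha, l):
    """L1 weight a_l^alpha = (l+1)^(1-alpha) - l^(1-alpha) from Lemma 3.1."""
    return (l + 1) ** (1 - alpha) - l ** (1 - alpha)

alpha = 0.5
coeffs = [a_coeff(alpha, l) for l in range(200)]

# property (1): a_0 = 1 and the sequence decreases monotonically toward 0
decreasing = all(c1 > c2 > 0 for c1, c2 in zip(coeffs, coeffs[1:]))

# property (2): (1-alpha) l^(-alpha) < a_{l-1} < (1-alpha)(l-1)^(-alpha)
bounded = all(
    (1 - alpha) * l ** (-alpha) < coeffs[l - 1] < (1 - alpha) * (l - 1) ** (-alpha)
    for l in range(2, 200)
)
```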
For $\gamma\in(1,2)$, ${}^{C}_{0}D^{\gamma}_{t}u(x,t)\big|_{(x_i,t_n)}=D^{\gamma}_{\tau}\tilde{u}^{n}_{i}+R_2(\tilde{u}^{n}_{i})$, with
$$D^{\gamma}_{\tau}\tilde{u}^{n}_{i}:=\frac{\tau^{1-\gamma}}{\Gamma(3-\gamma)}\Big[b^{\gamma}_{0}\delta_{t}\tilde{u}^{n}_{i}+\sum_{k=1}^{n-1}(b^{\gamma}_{n-k}-b^{\gamma}_{n-k-1})\delta_{t}\tilde{u}^{k}_{i}-b^{\gamma}_{n-1}\delta_{t}\tilde{u}^{0}_{i}\Big],\tag{3.15}$$
where $\delta_{t}u(x,t)=\frac{\partial u(x,t)}{\partial t}$, $R_2\le C\tau^{3-\gamma}$, and $b^{\gamma}_{k}=(k+1)^{2-\gamma}-k^{2-\gamma}$.
Given the spatial position $x$, it can be seen from the finite difference discretization that the time-fractional derivative of $\tilde{u}(x,t)$ evaluated at time $t$ depends on the values of $\tilde{u}(x,\cdot)$ at all previous times $0,\tau,2\tau,\dots,t$. We call the current time the training point and the previous times the auxiliary points.
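To make the discretization concrete, the following pure-Python sketch (our own illustration, not the authors' code; the function name `caputo_l1` is ours) evaluates the L1 formula (3.14) for $\alpha\in(0,1)$ at $t_n=n\tau$, given the values of $\tilde{u}$ at the current training point and all auxiliary points. The scheme is exact for linear functions of $t$ and $O(\tau^{2-\alpha})$ accurate for smooth functions.

```python
import math

def caputo_l1(u_vals, alpha, tau):
    """L1 finite-difference approximation (3.14) of the Caputo derivative of
    order alpha in (0,1) at t_n = n*tau, given u_vals = [u(t_0), ..., u(t_n)]."""
    n = len(u_vals) - 1
    a = lambda k: (k + 1) ** (1 - alpha) - k ** (1 - alpha)
    s = a(0) * u_vals[n]                       # current training point
    for k in range(1, n):                      # auxiliary points t_1, ..., t_{n-1}
        s += (a(n - k) - a(n - k - 1)) * u_vals[k]
    s -= a(n - 1) * u_vals[0]                  # initial value t_0
    return tau ** (-alpha) / math.gamma(2 - alpha) * s
```

For $u(t)=t$ the approximation reproduces the exact Caputo derivative $t^{1-\alpha}/\Gamma(2-\alpha)$, and for $u(t)=t^4$ the error decays like $\tau^{2-\alpha}$.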
In this section, we demonstrate the effectiveness of gfPINNs in solving the forward and inverse problems of the multiterm time-fractional Burger-type equation and compare fPINNs with gfPINNs. We solve the forward problems and present the experimental results in Section 4.1, and we solve the inverse problems and present the results in Section 4.2.
We use the fabricated solution $u(x,t)=t^{p}\sin(\pi x)$. In the approximate solution (3.4), the auxiliary function is chosen as $\rho(x)=1-\|x\|^{2}_{2}$. We use the following form of the $L^2$ relative error:
$$\frac{\Big\{\sum_{k}\big[u(x_{test,k},t_{test,k})-\tilde{u}(x_{test,k},t_{test,k})\big]^{2}\Big\}^{1/2}}{\Big\{\sum_{k}\big[u(x_{test,k},t_{test,k})\big]^{2}\Big\}^{1/2}}\tag{4.1}$$
to measure the performance of the NNs, where $\tilde{u}$ denotes the approximate solution, $u$ is the exact solution, and $(x_{test,k},t_{test,k})$ denotes the $k$-th test point.
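The error metric (4.1) is straightforward to implement; here is a minimal sketch (the function name is ours) taking the exact and predicted values at the test points:

```python
import math

def l2_relative_error(u_exact, u_pred):
    """L2 relative error (4.1) between exact and predicted values at test points."""
    num = math.sqrt(sum((ue - up) ** 2 for ue, up in zip(u_exact, u_pred)))
    den = math.sqrt(sum(ue ** 2 for ue in u_exact))
    return num / den
```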
We wrote the code in Python and took advantage of the automatic differentiation capability of TensorFlow [37]. The Adam stochastic gradient descent algorithm [38] was used to optimize the loss function, and the NN parameters were initialized with normalized Glorot initialization [39]. Unless otherwise stated, we set the learning rate, the number of neurons per layer, the number of hidden layers, and the activation function to $1\times10^{-3}$, 20, 4, and $\tanh(x)$, respectively.
In this section, we consider the multiterm time-fractional Burger-type equation of the form (2.1) with initial and boundary conditions (2.2). We let $v=1$, $(x,t)\in[0,1]\times[0,1]$, and $g(x)=0$, considering the smooth fabricated solution $u(x,t)=t^{p}\sin(\pi x)$ and the forcing term
$$f(x,t)=c_1\frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)}t^{p-\alpha}\sin(\pi x)+c_2\frac{\Gamma(p+1)}{\Gamma(p+1-\gamma)}t^{p-\gamma}\sin(\pi x)+\pi t^{2p}\sin(\pi x)\cos(\pi x)+\pi^{2}t^{p}\sin(\pi x).\tag{4.2}$$
Case 1: We choose $c_1=1$, $c_2=0$, and $\alpha=0.5$, considering the smooth fabricated solution $u(x,t)=t^{4}\sin(\pi x)$ and the forcing term $f(x,t)=\frac{\Gamma(5)}{\Gamma(4.5)}t^{3.5}\sin(\pi x)+\pi t^{8}\sin(\pi x)\cos(\pi x)+\pi^{2}t^{4}\sin(\pi x)$. We consider $M_1-1$ training points in the spatial domain, $x_i=ih_x$ for $i=1,2,\dots,M_1-1$, and $N$ training points in the time domain, $t_n=n\tau$ for $n=1,2,\dots,N$. We do not need to place training points at the initial time or on the boundary, since the approximate solution $\tilde{u}(x,t)=t\,x(1-x)\,u_{NN}(x,t;\mu)$ satisfies the initial and boundary conditions automatically. For fPINNs, the loss function can be written as
$$L_{FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[a^{0.5}_{0}\tilde{u}(x_i,t_n)+\sum_{k=1}^{n-1}(a^{0.5}_{n-k}-a^{0.5}_{n-k-1})\tilde{u}(x_i,t_k)\Big]+\tilde{u}(x_i,t_n)\frac{\partial\tilde{u}(x_i,t_n)}{\partial x}-\frac{\partial^{2}\tilde{u}(x_i,t_n)}{\partial x^{2}}-f(x_i,t_n)\Big\}^{2}.\tag{4.3}$$
The gradient-enhanced loss terms of gfPINNs can be written as
$$L_{g_1FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[a^{0.5}_{0}\frac{\partial\tilde{u}(x_i,t_n)}{\partial x}+\sum_{k=1}^{n-1}(a^{0.5}_{n-k}-a^{0.5}_{n-k-1})\frac{\partial\tilde{u}(x_i,t_k)}{\partial x}\Big]+\tilde{u}(x_i,t_n)\frac{\partial^{2}\tilde{u}(x_i,t_n)}{\partial x^{2}}+\Big(\frac{\partial\tilde{u}(x_i,t_n)}{\partial x}\Big)^{2}-\frac{\partial^{3}\tilde{u}(x_i,t_n)}{\partial x^{3}}-\frac{\partial f(x_i,t_n)}{\partial x}\Big\}^{2},\tag{4.4}$$
$$L_{g_2FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[a^{0.5}_{0}\frac{\partial\tilde{u}(x_i,t_n)}{\partial t}+\sum_{k=1}^{n-1}(a^{0.5}_{n-k}-a^{0.5}_{n-k-1})\frac{\partial\tilde{u}(x_i,t_k)}{\partial t}\Big]+\tilde{u}(x_i,t_n)\frac{\partial^{2}\tilde{u}(x_i,t_n)}{\partial x\,\partial t}+\frac{\partial\tilde{u}(x_i,t_n)}{\partial x}\frac{\partial\tilde{u}(x_i,t_n)}{\partial t}-\frac{\partial^{3}\tilde{u}(x_i,t_n)}{\partial x^{2}\,\partial t}-\frac{\partial f(x_i,t_n)}{\partial t}\Big\}^{2}.\tag{4.5}$$
By substituting Eqs (4.3)–(4.5) into Eq (3.8) with $W_F=1$, $W_{g_1F}=1$, and $W_{g_2F}=1$, we obtain the gfPINNs loss function $L_{gFW}$. We selected 2000 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Figures 2–4 present a comparison between the predicted solutions of the fPINNs and gfPINNs models and the exact solution of the equation, demonstrating that gfPINNs can effectively solve the equation. Figure 5 shows the absolute errors between the exact solution and the solutions predicted by fPINNs and gfPINNs; the prediction performance of gfPINNs is better than that of fPINNs. Figure 6 illustrates the $L^2$ relative errors of both models for a single experiment as the iteration count varies, showing that while both can achieve errors as low as $10^{-4}$, gfPINNs exhibits comparatively lower error and reduced oscillation.
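As a consistency check on Case 1 (our own sketch, not the authors' training code; the function name `residual_case1` is ours), one can substitute the exact solution $u=t^4\sin(\pi x)$ for $\tilde{u}$ in the residual of loss (4.3), using analytic spatial derivatives and the L1 approximation for the Caputo term. The remaining mean-squared residual is then purely the $O(\tau^{2-\alpha})$ discretization error and should shrink as the time grid is refined:

```python
import math

def residual_case1(M1, N, alpha=0.5):
    """Mean-squared residual of loss (4.3) with the exact Case 1 solution
    u = t^4 sin(pi x) plugged in for u~; only the O(tau^{2-alpha}) error of
    the L1 discretization of the Caputo term remains."""
    hx, tau = 1.0 / M1, 1.0 / N
    a = lambda k: (k + 1) ** (1 - alpha) - k ** (1 - alpha)
    u = lambda x, t: t ** 4 * math.sin(math.pi * x)
    ux = lambda x, t: math.pi * t ** 4 * math.cos(math.pi * x)
    uxx = lambda x, t: -math.pi ** 2 * t ** 4 * math.sin(math.pi * x)
    # forcing term for c1 = 1, c2 = 0, v = 1 (Case 1)
    f = lambda x, t: (math.gamma(5) / math.gamma(4.5) * t ** 3.5 * math.sin(math.pi * x)
                      + math.pi * t ** 8 * math.sin(math.pi * x) * math.cos(math.pi * x)
                      + math.pi ** 2 * t ** 4 * math.sin(math.pi * x))
    total, count = 0.0, 0
    for i in range(1, M1):
        x = i * hx
        for n in range(1, N + 1):
            t = n * tau
            # L1 approximation of the Caputo term (u(x, 0) = 0, so that term drops out)
            s = a(0) * u(x, t)
            for k in range(1, n):
                s += (a(n - k) - a(n - k - 1)) * u(x, k * tau)
            caputo = tau ** (-alpha) / math.gamma(2 - alpha) * s
            r = caputo + u(x, t) * ux(x, t) - uxx(x, t) - f(x, t)
            total += r * r
            count += 1
    return total / count
```

Refining the time grid (larger $N$) drives this residual toward zero, which is exactly the behavior the training of fPINNs exploits.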
Case 2: We choose $c_1=0$, $c_2=1$, and $\gamma=1.5$, considering the smooth fabricated solution $u(x,t)=t^{4}\sin(\pi x)$ and the forcing term $f(x,t)=\frac{\Gamma(5)}{\Gamma(3.5)}t^{2.5}\sin(\pi x)+\pi t^{8}\sin(\pi x)\cos(\pi x)+\pi^{2}t^{4}\sin(\pi x)$. Similarly, the loss function of fPINNs is
$$L_{FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[b^{1.5}_{0}\frac{\partial\tilde{u}(x_i,t_n)}{\partial t}+\sum_{k=1}^{n-1}(b^{1.5}_{n-k}-b^{1.5}_{n-k-1})\frac{\partial\tilde{u}(x_i,t_k)}{\partial t}\Big]+\tilde{u}(x_i,t_n)\frac{\partial\tilde{u}(x_i,t_n)}{\partial x}-\frac{\partial^{2}\tilde{u}(x_i,t_n)}{\partial x^{2}}-f(x_i,t_n)\Big\}^{2}.\tag{4.6}$$
For gfPINNs, the loss function can be written as
$$L_{g_1FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[b^{1.5}_{0}\frac{\partial^{2}\tilde{u}(x_i,t_n)}{\partial t\,\partial x}+\sum_{k=1}^{n-1}(b^{1.5}_{n-k}-b^{1.5}_{n-k-1})\frac{\partial^{2}\tilde{u}(x_i,t_k)}{\partial t\,\partial x}\Big]+\tilde{u}(x_i,t_n)\frac{\partial^{2}\tilde{u}(x_i,t_n)}{\partial x^{2}}+\Big(\frac{\partial\tilde{u}(x_i,t_n)}{\partial x}\Big)^{2}-\frac{\partial^{3}\tilde{u}(x_i,t_n)}{\partial x^{3}}-\frac{\partial f(x_i,t_n)}{\partial x}\Big\}^{2},\tag{4.7}$$
$$L_{g_2FW}=\frac{1}{(M_1-1)N}\sum_{i=1}^{M_1-1}\sum_{n=1}^{N}\Big\{\frac{\tau^{-0.5}}{\Gamma(1.5)}\Big[b^{1.5}_{0}\frac{\partial^{2}\tilde{u}(x_i,t_n)}{\partial t^{2}}+\sum_{k=1}^{n-1}(b^{1.5}_{n-k}-b^{1.5}_{n-k-1})\frac{\partial^{2}\tilde{u}(x_i,t_k)}{\partial t^{2}}\Big]+\tilde{u}(x_i,t_n)\frac{\partial^{2}\tilde{u}(x_i,t_n)}{\partial x\,\partial t}+\frac{\partial\tilde{u}(x_i,t_n)}{\partial x}\frac{\partial\tilde{u}(x_i,t_n)}{\partial t}-\frac{\partial^{3}\tilde{u}(x_i,t_n)}{\partial x^{2}\,\partial t}-\frac{\partial f(x_i,t_n)}{\partial t}\Big\}^{2}.\tag{4.8}$$
By substituting Eqs (4.6)–(4.8) into Eq (3.8) with $W_F=1$, $W_{g_1F}=0.16$, and $W_{g_2F}=0.16$, we obtain the gfPINNs loss function $L_{gFW}$. We selected 2000 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Figures 7–9 present a comparison between the predicted solutions of the fPINNs and gfPINNs models and the exact solution, demonstrating that gfPINNs can effectively solve the equation. Figure 10 illustrates the absolute errors between the exact solution and the predictions of both models, revealing that gfPINNs exhibit a relatively smaller absolute error. Figure 11 presents the iteration convergence curves of both models for a single experiment: while both can achieve $L^2$ relative errors of $10^{-4}$ with increasing iterations, the prediction errors of gfPINNs are lower and more stable, resulting in superior prediction performance compared to fPINNs.
We use the code that solves the forward problem to solve the inverse problem: we simply add the parameters to be identified to the list of parameters to be optimized, without changing anything else. In this section, gfPINNs are applied to solve the inverse problems of the multiterm time-fractional Burger-type equation of the form (3.6). We let $v=1$, $(x,t)\in[0,1]\times[0,1]$, and $g(x)=0$, and we consider additional concentration measurements at the final time, $u(x,1)=h(x,1)$. We still consider the smooth fabricated solution $u(x,t)=t^{p}\sin(\pi x)$ and the forcing term (4.2).
Case 1: We choose $c_1=1$ and $c_2=0$. Similarly, we obtain the gfPINNs loss function $L_{gIV}$ with $W_I=1$, $W_{g_1I}=0.25$, and $W_{g_2I}=0.25$, and we set the true fractional order $\alpha$ to 0.6. We selected 470 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Figures 12–14 display a comparison between the predicted solutions of the fPINNs and gfPINNs models and the exact solution, demonstrating that gfPINNs can effectively solve the problem. Figure 15 illustrates the absolute errors between the exact solution and the predictions of both models, revealing that gfPINNs exhibit a relatively smaller and more stable absolute error. Figure 16 illustrates the iteration convergence curves of fPINNs and gfPINNs for a single experiment: although gfPINNs incur a higher computational cost for the inverse problem due to the additional loss terms, both models can achieve $L^2$ relative errors of $10^{-4}$ as iterations progress, with gfPINNs showing a lower and more stable error curve than fPINNs.
Case 2: We choose $c_1=0$ and $c_2=1$. Similarly, we obtain the gfPINNs loss function $L_{gIV}$ with $W_I=1$, $W_{g_1I}=0.16$, and $W_{g_2I}=0.0001$, and we set the true fractional order $\gamma$ to 1.6. We selected 400 training points to train fPINNs and gfPINNs; the other parameters of the NNs are set as described at the beginning of this section. Training the NNs and observing the experimental results leads to conclusions similar to those of Case 1. Figures 17–19 display a comparison between the predicted solutions of the fPINNs and gfPINNs models and the exact solution, demonstrating that gfPINNs can effectively solve the problem. Figure 20 illustrates the absolute errors between the exact solution and the predictions of both models, revealing that gfPINNs exhibit a relatively smaller absolute error. Figure 21 compares the $L^2$ relative errors of fPINNs and gfPINNs for a single experiment as iterations progress: while gfPINNs incur a higher computational cost due to the additional loss terms, both models can achieve an $L^2$ relative error of $10^{-3}$, with gfPINNs demonstrating a lower and more stable error curve than fPINNs.
In this paper, the effectiveness of gfPINNs in solving the forward and inverse problems of the multiterm time-fractional Burger-type equation is verified through numerical examples. The $L^2$ relative errors of the solutions predicted by both fPINNs and gfPINNs can reach $10^{-4}$ for the forward problems and $10^{-3}$ or even $10^{-4}$ for the inverse problems. The experimental results indicate that gfPINNs achieve relatively lower and more stable errors as training iterations increase, thereby enhancing prediction performance. Nonetheless, the additional loss terms of gfPINNs may incur a higher computational cost; for example, when solving the inverse problems, fPINNs converge faster than gfPINNs.
Shanhao Yuan, Yanqin Liu, Qiuping Li and Chao Guo: Conceptualization, Methodology; Yibin Xu, Shanhao Yuan and Yanfeng Shen: Software, Visualization, Validation; Shanhao Yuan: Writing–Original draft preparation; Yanqin Liu: Writing–Reviewing & editing. All authors have read and approved the final version of the manuscript for publication.
This work was supported by the Natural Science Foundation of Shandong Province (ZR2023MA062), the National Science Foundation of China (62103079), the Belt and Road Special Foundation of The National Key Laboratory of Water Disaster Prevention (2023491911), and the Open Research Fund Program of the Data Recovery Key Laboratory of Sichuan Province (DRN19020).
The authors declare that they have no conflicts of interest.
[14] |
A. D. Villa, E. Sammut, A. Nair, R. Rajani, R. Bonamini, A. Chiribiri, Coronary artery anomalies overview: the normal and the abnormal, World J. Radiol., 8 (2016), 537. https://doi.org/10.4329/wjr.v8.i6.537 doi: 10.4329/wjr.v8.i6.537
![]() |
[15] |
R. Alizadehsani, M. Abdar, M. Roshanzamir, A. Khosravi, P. M. Kebria, F. Khozeimeh, et al., Machine learning-based coronary artery disease diagnosis: A comprehensive review, Comput. Biol. Med., 111 (2019), 103346. https://doi.org/10.1016/j.compbiomed.2019.103346 doi: 10.1016/j.compbiomed.2019.103346
![]() |
[16] | T. M. Williamson, C. Moran, A. McLennan, S. Seidel, P. P. Ma, M. L. Koerner, T. S. Campbell, Promoting adherence to physical activity among individuals with cardiovascular disease using behavioral counseling: A theory and research-based primer for health care professionals, Prog. Cardiovasc. Dis., (2020). https://doi.org/10.1016/j.pcad.2020.12.007 |
[17] |
J. H. Joloudari, E. H. Joloudari, H. Saadatfar, M. Ghasemigol, S. M. Razavi, A. Mosavi, et al., Coronary artery disease diagnosis; ranking the significant features using a random trees model, Int. J. Environ. Res. Public Health, 17 (2020), 731. https://doi.org/10.3390/ijerph17030731 doi: 10.3390/ijerph17030731
![]() |
[18] | M. V. Dyke, S. Greer, E. Odom, L. Schieb, A. Vaughan, M. Kramer, et al., Heart disease death rates among blacks and whites aged ≥ 35 years-United States, 1968–2015, MMWR Surveillance Summaries, 67 (2018), 1. https://doi.org/10.15585/mmwr.ss6705a1 |
[19] | D. Mozaffarian, E. J. Benjamin, A. S. Go, D. K. Arnett, M. J. Blaha, M. Cushman, et al., Heart disease and stroke statistics—2015 update: a report from the American heart association, Circulation, 131 (2015), e29-e322. |
[20] |
E. J. Benjamin, S. S. Virani, C. W. Callaway, A. M. Chamberlain, A. R. Chang, S. Cheng, et al., Heart disease and stroke statistics—2018 update: a report from the American heart association, Circulation, 137 (2018), e67-e492. https://doi.org/10.1161/CIR.0000000000000573 doi: 10.1161/CIR.0000000000000573
![]() |
[21] | H. Larochelle, Y. Bengio, J. Louradour, P. Lamblin, Exploring strategies for training deep neural networks, J. Mach. Learn. Res., 10 (2009). |
[22] | R. O. Bonow, D. L. Mann, D. P. Zipes, P. Libby, Braunwald's Heart Disease E-Book: A Textbook of Cardiovascular Medicine, Elsevier Health Sciences, 2011. |
[23] |
E.G. Nabel, E. Braunwald, A tale of coronary artery disease and myocardial infarction, New Engl. J. Med., 366 (2012), 54-63. https://doi.org/10.1056/NEJMra1112570 doi: 10.1056/NEJMra1112570
![]() |
[24] |
İ. Babaoglu, O. Findik, E. Ülker, A comparison of feature selection models utilizing binary particle swarm optimization and genetic algorithm in determining coronary artery disease using support vector machine, Expert Syst. Appl., 37 (2010), 3177-3183. https://doi.org/10.1016/j.eswa.2009.09.064 doi: 10.1016/j.eswa.2009.09.064
![]() |
[25] |
M. Kumar, R. B. Pachori, U. R. Acharya, Characterization of coronary artery disease using flexible analytic wavelet transform applied on ECG signals, Biomed. Signal Proces. Control, 31 (2017), 301-308. https://doi.org/10.1016/j.bspc.2016.08.018 doi: 10.1016/j.bspc.2016.08.018
![]() |
[26] | R. Alizadehsani, J. Habibi, M. J. Hosseini, R. Boghrati, A. Ghandeharioun, B. Bahadorian, et al., Diagnosis of coronary artery disease using data mining techniques based on symptoms and ecg features, Eur. J. Scientific Res., 82 (2012), 542-553. |
[27] |
R. Alizadehsani, J. Habibi, M. J. Hosseini, H. Mashayekhi, R. Boghrati, A. Ghandeharioun, et al., A data mining approach for diagnosis of coronary artery disease, Comput. Meth. Prog. Bio., 111(2013), 52-61. https://doi.org/10.1016/j.cmpb.2013.03.004 doi: 10.1016/j.cmpb.2013.03.004
![]() |
[28] |
R. Alizadehsani, J. Habibi, Z. A. Sani, H. Mashayekhi, R. Boghrati, A. Ghandeharioun, et al., Diagnosing coronary artery disease via data mining algorithms by considering laboratory and echocardiography features, Res. Cardiov. Med., 2 (2013), 133. https://doi.org/10.5812/cardiovascmed.10888 doi: 10.5812/cardiovascmed.10888
![]() |
[29] |
R. Alizadehsani, M. H. Zangooei, M. J. Hosseini, J. Habibi, A. Khosravi, M. Roshanzamir, et al., Coronary artery disease detection using computational intelligence methods, Knowledge-Based Syst., 109 (2016), 187-197. https://doi.org/10.1016/j.knosys.2016.07.004 doi: 10.1016/j.knosys.2016.07.004
![]() |
[30] |
A. D. Dolatabadi, S. E. Z. Khadem, B. M. Asl, Automated diagnosis of coronary artery disease (CAD) patients using optimized SVM, Comput. Meth. Prog. Bio., 138 (2017), 117-126. https://doi.org/10.1016/j.cmpb.2016.10.011 doi: 10.1016/j.cmpb.2016.10.011
![]() |
[31] |
Z. Arabasadi, R. Alizadehsani, M. Roshanzamir, H. Moosaei, A. A. Yarifard, et al., Computer aided decision making for heart disease detection using hybrid neural network-Genetic algorithm, Comput. Meth. Prog. Bio., 141 (2017), 19-26. https://doi.org/10.1016/j.cmpb.2017.01.004 doi: 10.1016/j.cmpb.2017.01.004
![]() |
[32] |
R. Alizadehsani, M. J. Hosseini, A. Khosravi, F. Khozeimeh, M. Roshanzamir, N. Sarrafzadegan, et al., Non-invasive detection of coronary artery disease in high-risk patients based on the stenosis prediction of separate coronary arteries, Comput. Meth. Prog. Bio., 162 (2018), 119-127. https://doi.org/10.1016/j.cmpb.2018.05.009 doi: 10.1016/j.cmpb.2018.05.009
![]() |
[33] |
M. Abdar, W. Książek, U. R. Acharya, R. S. Tan, V. Makarenkov, P. Pławiak, A new machine learning technique for an accurate diagnosis of coronary artery disease, Comput. Meth. Prog. Bio., 179 (2019), 104992. https://doi.org/10.1016/j.cmpb.2019.104992 doi: 10.1016/j.cmpb.2019.104992
![]() |
[34] | C. Blake, UCI Repository of Machine Learning Databases, 1998. Available from: http://www.ics.uci.edu/~mlearn/MLRepository.html. |
[35] |
R. W. Hamersvelt, M. Zreik, M. Voskuil, M. A. Viergever, I. Išgum, T. Leiner, et al., Deep learning analysis of left ventricular myocardium in CT angiographic intermediate-degree coronary stenosis improves the diagnostic accuracy for identification of functionally significant stenosis, Eur. Rad., 29 (2019), 2350-2359. https://doi.org/10.1007/s00330-018-5822-3 doi: 10.1007/s00330-018-5822-3
![]() |
[36] |
U. R. Acharya, H. Fujita, O. S. Lih, M. Adam, J. H. Tan, C. K. Chua, Automated detection of coronary artery disease using different durations of ECG segments with convolutional neural network, Knowledge-Based Syst., 132 (2017), 62-71. https://doi.org/10.1016/j.knosys.2017.06.003 doi: 10.1016/j.knosys.2017.06.003
![]() |
[37] |
A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, et al., PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals, Circulation, 101 (2000), e215-e220. https://doi.org/10.1161/01.CIR.101.23.e215 doi: 10.1161/01.CIR.101.23.e215
![]() |
[38] |
J. H. Tan, Y. Hagiwara, W. Pang, I. Lim, S. L. Oh, M. Adam, et al., Application of stacked convolutional and long short-term memory network for accurate identification of CAD ECG signals, Comput. Biol. Med., 94 (2018), 19-26. https://doi.org/10.1016/j.compbiomed.2017.12.023 doi: 10.1016/j.compbiomed.2017.12.023
![]() |
[39] |
U. R. Acharya, H. Fujita, M. Adam, O. S. Lih, V. K. Sudarshan, T. J. Hong, et al., Automated characterization and classification of coronary artery disease and myocardial infarction by decomposition of ECG signals: A comparative study, Inform. Sci., 377 (2017), 17-29. https://doi.org/10.1016/j.ins.2016.10.013 doi: 10.1016/j.ins.2016.10.013
![]() |
[40] |
U. R. Acharya, H. Fujita, S. L. Oh, Y. Hagiwara, J. H. Tan, M. Adam, et al., Deep convolutional neural network for the automated diagnosis of congestive heart failure using ECG signals, Appl. Intell., 49 (2019), 16-27. https://doi.org/10.1007/s10489-018-1179-1 doi: 10.1007/s10489-018-1179-1
![]() |
[41] |
M. M. Ghiasi, S. Zendehboudi, A. A. Mohsenipour, Decision tree-based diagnosis of coronary artery disease: CART model, Comput. Meth. Prog. Bio., 192 (2020), 105400. https://doi.org/10.1016/j.cmpb.2020.105400 doi: 10.1016/j.cmpb.2020.105400
![]() |
[42] |
L. Verma, S. Srivastava, P. Negi, A hybrid data mining model to predict coronary artery disease cases using non-invasive clinical data, J. Med. Syst., 40 (2016), 178. https://doi.org/10.1007/s10916-016-0536-z doi: 10.1007/s10916-016-0536-z
![]() |
[43] |
N. M. Idris, Y. K. Chiam, K. D. Varathan, W. A. W. Ahmad, K. H. Chee, Y. M. Liew, Feature selection and risk prediction for patients with coronary artery disease using data mining, Med. Biol. Eng. Comput., 58 (2020), 3123-3140. https://doi.org/10.1007/s11517-020-02268-9 doi: 10.1007/s11517-020-02268-9
![]() |
[44] |
D, . Velusamy, K. Ramasamy, Ensemble of heterogeneous classifiers for diagnosis and prediction of coronary artery disease with reduced feature subset, Comput. Meth. Prog. Bio., 198 (2020), 105770. https://doi.org/10.1016/j.cmpb.2020.105770 doi: 10.1016/j.cmpb.2020.105770
![]() |
[45] | I. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, Y. Bengio, Maxout networks, in International Conference On Machine Learning, PMLR, (2013), 1319-1327. |
[46] |
J. H. Joloudari, M. Haderbadi, A. Mashmool, M. GhasemiGol, S. S. Band, A. Mosavi, Early detection of the advanced persistent threat attack using performance analysis of deep learning, IEEE Access, 8 (2020), 186125-186137. https://doi.org/10.1109/ACCESS.2020.3029202 doi: 10.1109/ACCESS.2020.3029202
![]() |
[47] |
Y. Ito, Approximation of functions on a compact set by finite sums of a sigmoid function without scaling, Neural Networks, 4 (1991), 817-826. https://doi.org/10.1016/0893-6080(91)90060-I doi: 10.1016/0893-6080(91)90060-I
![]() |
[48] | N. Hassan, N. Akamatsu, A new approach for contrast enhancement using sigmoid function, Inter Arab J. Inf. Techn., 1 (2004). |
[49] |
X. Li, X. Zhang, W. Huang, Q. Wang, Truncation cross entropy loss for remote sensing image captioning, IEEE Transactions Geosci. Remote Sens., 59 (2020), 5246-5257. https://doi.org/10.1109/TGRS.2020.3010106 doi: 10.1109/TGRS.2020.3010106
![]() |
[50] |
C. Otto, D. Wang, A. K. Jain, Clustering millions of faces by identity, IEEE Transactions Pattern Anal. Mach. Intell., 40 (2017), 289-303. https://doi.org/10.1109/TPAMI.2017.2679100 doi: 10.1109/TPAMI.2017.2679100
![]() |
[51] |
E. H. Ruspini, A new approach to clustering, Inform. Control, 15 (1969), 22-32. https://doi.org/10.1016/S0019-9958(69)90591-9 doi: 10.1016/S0019-9958(69)90591-9
![]() |
[52] | R. O. Duda, P. E. Hart, Hart PE Pattern Classification And Scene Analysis, New York: Wiley, 1973. |
[53] |
R. Veloso, F. Portela, M. F. Santos, A. Silva, F. Rua, A. Abelha, et al., A clustering approach for predicting readmissions in intensive medicine, Procedia Technol., 16 (2014), 1307-1316. https://doi.org/10.1016/j.protcy.2014.10.147 doi: 10.1016/j.protcy.2014.10.147
![]() |
[54] |
H. S. Park, C. H. Jun, A simple and fast algorithm for K-medoids clustering, Expert Syst. Appl., 36 (2009), 3336-3341. https://doi.org/10.1016/j.eswa.2008.01.039 doi: 10.1016/j.eswa.2008.01.039
![]() |
[55] | R. O. Duda, P. E. Hart, Pattern Classification And Scene Analysis, Wiley New York, 1973. |
[56] |
J. C. Dunn, Well-separated clusters and optimal fuzzy partitions, J. Cybern., 4 (1974), 95-104. https://doi.org/10.1080/01969727408546059 doi: 10.1080/01969727408546059
![]() |
[57] | J. C. Bezdek, Objective function clustering, in Pattern Recognition With Fuzzy Objective Function Algorithms, Springer, (1981), 43-93. https://doi.org/10.1007/978-1-4757-0450-1_3 |
[58] |
M. S. Yang, A survey of fuzzy clustering, Math. Comput. Model., 18 (1993), 1-16. https://doi.org/10.1016/0895-7177(93)90202-A doi: 10.1016/0895-7177(93)90202-A
![]() |
[59] |
G. Govaert, M. Nadif, Clustering with block mixture models, Pattern Recogn., 36 (2003), 463-473. https://doi.org/10.1016/S0031-3203(02)00074-2 doi: 10.1016/S0031-3203(02)00074-2
![]() |
[60] |
S. Bandyopadhyay, U. Maulik, A. Mukhopadhyay, Multiobjective genetic clustering for pixel classification in remote sensing imagery, IEEE Trans. Geosci. Remote Sens., 45 (2007), 1506-1511. https://doi.org/10.1109/TGRS.2007.892604 doi: 10.1109/TGRS.2007.892604
![]() |
[61] |
R. Xu, D. Wunsch, Survey of clustering algorithms, IEEE Trans. Neural Networks, 16 (2005), 645-678. https://doi.org/10.1109/TNN.2005.845141 doi: 10.1109/TNN.2005.845141
![]() |
[62] |
J. H. Joloudari, H. Saadatfar, A. Dehzangi, S. Shamshirband, Computer-aided decision-making for predicting liver disease using PSO-based optimized SVM with feature selection, Inform. Medicine Unlocked, 17 (2019), 100255. https://doi.org/10.1016/j.imu.2019.100255 doi: 10.1016/j.imu.2019.100255
![]() |
[63] |
M. Abdar, M. Zomorodi-Moghadam, R. Das, I. H. Ting, Performance analysis of classification algorithms on early detection of liver disease, Expert Syst. Appl., 67 (2017), 239-251. https://doi.org/10.1016/j.eswa.2016.08.065 doi: 10.1016/j.eswa.2016.08.065
![]() |
[64] |
C. H. Weng, C. K. Huang, R. P. Han, Disease prediction with different types of neural network classifiers, Telemat. Inform., 33 (2016), 277-292. https://doi.org/10.1016/j.tele.2015.08.006 doi: 10.1016/j.tele.2015.08.006
![]() |
[65] |
M. Diwakar, A. Tripathi, K. Joshi, M. Memoria, P. Singh, Latest trends on heart disease prediction using machine learning and image fusion, Materials Today: Proceed., 37 (2021), 3213-3218. https://doi.org/10.1016/j.matpr.2020.09.078 doi: 10.1016/j.matpr.2020.09.078
![]() |
[66] |
J. H. Moon, W. C. Cha, M. J. Chung, K. S. Lee, B. H. Cho, J. H. Choi, Automatic stenosis recognition from coronary angiography using convolutional neural networks, Comput. Meth. Prog. Bio., 198 (2021), 105819. https://doi.org/10.1016/j.cmpb.2020.105819 doi: 10.1016/j.cmpb.2020.105819
![]() |
1. | Jiawei Wang, Yanqin Liu, Limei Yan, Kunling Han, Libo Feng, Runfa Zhang, Fractional sub-equation neural networks (fSENNs) method for exact solutions of space–time fractional partial differential equations, 2025, 35, 1054-1500, 10.1063/5.0259937 |