
In the last few decades, fractional-order neural networks (FONNs) have attracted increasing attention because of their powerful modeling capabilities [1,2]. Owing to their computing power and information-storage properties, FONNs have been applied successfully to image processing, biological systems, and related fields [3,4,5,6]. Many types of neural network (NN) models have been studied, for example, bidirectional associative memory neural networks (BAMNNs) [7], recurrent NNs [8], Hopfield NNs [9], Cohen-Grossberg NNs [10], and fuzzy neural networks (FNNs) [11]. With the development of fuzzy mathematics, Yang et al. introduced fuzzy logic (fuzzy OR and fuzzy AND operations) into cellular NNs and established FNNs. Moreover, approximation, uncertainty, and fuzziness are unavoidable in many practical problems, and fuzzy logic is considered a promising tool for handling such phenomena. Consequently, the dynamical behaviors of FNNs have been studied extensively and abundant results have been obtained [12,13,14,15].
Complex-valued neural networks (CVNNs) extend real-valued NNs (RVNNs): the relevant variables and parameters belong to the complex field. CVNNs can also address problems that RVNNs cannot, such as certain tasks in machine learning [16] and filtering [17]. In recent years, the dynamic behavior of CVNNs has been a hot research topic. In [16], Nitta analyzed the decision boundaries of CVNNs. In [17], the authors studied an extension of the radial basis function network to complex-valued signals. In [18], Li et al. obtained synchronization results for CVNNs with time delay. Overall, CVNNs can outperform RVNNs because they process two-dimensional data directly.
Time delays are unavoidable in neural systems because of the finite propagation speed between neurons. Many kinds of delay have been studied, such as general time delays [19], leakage delays [20], proportional delays [21], discrete delays [22], time-varying delays [23], and distributed delays [24]. In addition, the size and length of the axonal connections between neurons cause delays of varying duration, which motivates introducing distributed delays into NNs. For example, Si et al. [25] considered a fractional NN with distributed and discrete time delays. Adding distributed delays not only captures the heredity and memory characteristics of neural networks but also reflects the unevenness of delays in the process of information transmission. Therefore, more and more researchers have incorporated distributed delays into NNs and reported new findings [26,27]. On the other hand, because of external perturbations, the exact values of the network parameters cannot be obtained, which leads to parameter uncertainty; such uncertainty also degrades the performance of NNs. Consequently, NN models with uncertain parameters have been studied closely [28,29].
Synchronization control has always been a central topic in the analysis of dynamical systems, and the synchronization of NNs has become a research hotspot. In recent years, several important types of synchronization have been studied, such as complete synchronization (CS) [30], global asymptotic synchronization [31], quasi-synchronization [32], finite-time synchronization [33], projective synchronization (PS) [34], and Mittag-Leffler synchronization (MLS) [35]. Among these, PS is particularly interesting: the drive and response systems synchronize up to a scaling factor, and CS can be regarded as PS with a scaling factor of 1. MLS, in turn, has its own advantage, as it guarantees convergence at a Mittag-Leffler rate, which is typically faster. As a result, some researchers have combined projective synchronization and ML synchronization and studied Mittag-Leffler projective synchronization (MLPS) [36,37]. However, there are few papers on the MLPS of fractional-order fuzzy complex-valued neural networks (FOFCVNNs). Based on the above discussion, the main contributions of this article are as follows:
∙ Building on the FNN model, parameter uncertainty and distributed delays are incorporated, and the influence of time-varying delays, distributed delays, and uncertainty on the global MLPS of FOFCVNNs is further considered in this paper.
∙ An algebraic criterion for MLPS is obtained by applying the complex-valued direct method. The direct method and algebraic inequalities greatly reduce the computational complexity.
∙ The designed nonlinear hybrid controller and adaptive hybrid controller greatly reduce the control cost.
Notations: In this article, C denotes the set of complex numbers; O = O^{R}+iO^{I}\in C with O^{R}, O^{I}\in R, where i is the imaginary unit; C^{n} denotes the set of n-dimensional complex vectors; \overline{O}_{\psi} is the conjugate of O_{\psi}; |O_{\psi}| = \sqrt{O_{\psi}\overline{O}_{\psi}} is the modulus of O_{\psi}; and, for O = (O_{1},O_{2},\ldots,O_{n})^{\top}\in C^{n}, \|O\| = \big(\sum_{\psi=1}^{n}|O_{\psi}|^{2}\big)^{\frac{1}{2}} denotes the norm of O.
In this section, we provide the definitions, lemmas, assumptions, and model details required for this article.
Definition 2.1. [38] The Caputo fractional derivative of order 0<\alpha<1 of a function O(t) is defined as
{}_{t_0}^{C}D_{t}^{\alpha}O(t)=\frac{1}{\Gamma(1-\alpha)}\int_{t_0}^{t}\frac{O'(s)}{(t-s)^{\alpha}}\,ds. |
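For readers who wish to experiment numerically, the Caputo derivative above is commonly approximated on a uniform grid by the classical L1 scheme. The following sketch is our own illustration rather than code from the paper; the function name and the test signal are ours.

```python
import numpy as np
from math import gamma

def caputo_l1(y, h, alpha):
    """L1 approximation of the Caputo derivative of order 0 < alpha < 1.

    y : samples y(t_0), y(t_1), ..., y(t_N) on a uniform grid with step h.
    Returns the approximate derivative at the grid points t_1, ..., t_N.
    """
    n = len(y) - 1
    scale = 1.0 / (gamma(2 - alpha) * h**alpha)
    d = np.empty(n)
    for k in range(1, n + 1):
        j = np.arange(k)                                   # j = 0, ..., k-1
        w = (k - j)**(1 - alpha) - (k - j - 1)**(1 - alpha)
        d[k - 1] = scale * np.sum(w * np.diff(y[:k + 1]))
    return d

if __name__ == "__main__":
    # sanity check: for y(t) = t the Caputo derivative is t^(1-alpha)/Gamma(2-alpha)
    alpha, h = 0.95, 1e-3
    t = np.arange(0.0, 1.0 + h, h)
    approx = caputo_l1(t, h, alpha)
    exact = t[1:]**(1 - alpha) / gamma(2 - alpha)
    print(np.max(np.abs(approx - exact)))                  # should be very small
```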
Definition 2.2. [38] The one-parameter ML function is described as
E_{\Upsilon}(p)=\sum\limits_{\delta=0}^{\infty}\frac{p^{\delta}}{\Gamma(\delta\Upsilon+1)}. |
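The one-parameter ML function can be evaluated directly from this series. The snippet below is a minimal truncated-series implementation of our own, adequate for the moderate arguments that appear in the examples later; dedicated routines should be preferred for large arguments.

```python
import numpy as np
from math import gamma

def mittag_leffler(p, alpha, n_terms=100):
    """Truncated series E_alpha(p) = sum_{k>=0} p^k / Gamma(alpha*k + 1)."""
    p = np.asarray(p, dtype=float)
    total = np.zeros_like(p)
    for k in range(n_terms):
        total += p**k / gamma(alpha * k + 1.0)
    return total

if __name__ == "__main__":
    # E_1(p) reduces to exp(p); a quick consistency check
    print(mittag_leffler(1.5, 1.0), np.exp(1.5))
    # decay profile of the kind appearing in the MLPS estimates below
    t = np.linspace(0.0, 5.0, 6)
    print(mittag_leffler(-0.8 * t**0.95, 0.95))
```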
Lemma 2.1. [39] Let Q_{\psi}, O_{\psi}\in C\ (w,\psi=1,2,\ldots,n); then the fuzzy operators satisfy
\begin{align*} &\Big|\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}f_{\psi}(Q_{\psi})-\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}f_{\psi}(O_{\psi})\Big|\le\sum\limits_{\psi=1}^{n}|\mu_{w\psi}|\,|f_{\psi}(Q_{\psi})-f_{\psi}(O_{\psi})|,\\ &\Big|\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}f_{\psi}(Q_{\psi})-\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}f_{\psi}(O_{\psi})\Big|\le\sum\limits_{\psi=1}^{n}|\upsilon_{w\psi}|\,|f_{\psi}(Q_{\psi})-f_{\psi}(O_{\psi})|. \end{align*} |
Lemma 2.2. [40] If the function \vartheta(t)\in C is differentiable, then
{}_{t_0}^{C}D_{t}^{\alpha}\big(\overline{\vartheta}(t)\vartheta(t)\big)\le\overline{\vartheta}(t)\,{}_{t_0}^{C}D_{t}^{\alpha}\vartheta(t)+\big({}_{t_0}^{C}D_{t}^{\alpha}\overline{\vartheta}(t)\big)\vartheta(t). |
Lemma 2.3. [41] Let \hat{\xi}_{1},\hat{\xi}_{2}\in C; then the inequality
\overline{\hat{\xi}}_{1}\hat{\xi}_{2}+\overline{\hat{\xi}}_{2}\hat{\xi}_{1}\le\chi\overline{\hat{\xi}}_{1}\hat{\xi}_{1}+\frac{1}{\chi}\overline{\hat{\xi}}_{2}\hat{\xi}_{2} |
holds for any constant \chi>0.
Lemma 2.4. [42] If the function \varsigma(t) is nondecreasing and differentiable on [t_{0},\infty), then the following inequality holds:
{}_{t_0}^{C}D_{t}^{\alpha}\big(\varsigma(t)-\jmath\big)^{2}\le 2\big(\varsigma(t)-\jmath\big)\,{}_{t_0}^{C}D_{t}^{\alpha}\varsigma(t),\quad 0<\alpha<1, |
where the constant \jmath is arbitrary.
Lemma 2.5. [43] Suppose that the function \hat{\hbar}(t) is continuous and satisfies
{}_{t_0}^{C}D_{t}^{\alpha}\hat{\hbar}(t)\le-\gamma\hat{\hbar}(t), |
where 0<\alpha<1 and \gamma\in R; then
\hat{\hbar}(t)\le\hat{\hbar}(t_{0})E_{\alpha}\big[-\gamma(t-t_{0})^{\alpha}\big]. |
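Lemma 2.5 is the comparison result that later converts the Lyapunov estimates into ML decay bounds. As a purely illustrative check (our own sketch, not from the paper), one can integrate the scalar test equation D^{\alpha}h(t) = -\gamma h(t) with a Grünwald–Letnikov-type rule applied to h(t)-h(t_0), for which the bound of Lemma 2.5 holds with equality, and compare with h(t_0)E_{\alpha}(-\gamma(t-t_0)^{\alpha}):

```python
import numpy as np
from math import gamma

def ml(p, alpha, n_terms=120):
    """Truncated Mittag-Leffler series E_alpha(p) (illustration only)."""
    return sum(p**k / gamma(alpha * k + 1.0) for k in range(n_terms))

alpha, gam, h0 = 0.95, 0.8, 1.0
dt, N = 0.01, 500

c = np.empty(N + 1)                      # Grunwald-Letnikov binomial coefficients
c[0] = 1.0
for j in range(1, N + 1):
    c[j] = (1.0 - (alpha + 1.0) / j) * c[j - 1]

h = np.empty(N + 1)
h[0] = h0
for k in range(1, N + 1):
    # memory term sum_{j>=1} c_j (h_{k-j} - h0); Caputo form via h - h(0)
    mem = np.dot(c[1:k + 1], h[k - 1::-1] - h0)
    h[k] = (h0 - mem) / (1.0 + gam * dt**alpha)

t = dt * np.arange(N + 1)
bound = np.array([h0 * ml(-gam * tk**alpha, alpha) for tk in t])
print(np.max(np.abs(h - bound)))         # should be small (first-order scheme)
```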
Next, a FOFCVNN model with time-varying and distributed delays and uncertain parameters is considered as the drive system:
\begin{align} {}_{t_0}^{C}D_{t}^{\alpha}O_{w}(t) = & -a_{w}O_{w}(t)+\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}\big(O_{\psi}(t-\tau(t))\big)+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(O_{\psi}(t))dt\\ &+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(O_{\psi}(t))dt+I_{w}(t), \end{align} | (2.1)
where 0<\alpha<1; O(t)=(O_{1}(t),\ldots,O_{n}(t))^{\top}, and O_{w}(t),\ w=1,2,\ldots,n, is the state; a_{w}\in R denotes the self-feedback coefficient; m_{w\psi}\in C stands for a feedback template element; \Delta m_{w\psi}(t)\in C refers to the uncertain parameters; \mu_{w\psi}\in C and \upsilon_{w\psi}\in C are the elements of the fuzzy feedback MIN and MAX templates; \bigwedge and \bigvee denote the fuzzy AND and fuzzy OR operations; \tau(t) is the time-varying delay; I_{w}(t) denotes the external input; and f_{\psi}(\cdot) is the neuron activation function.
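To make the structure of (2.1) concrete, the sketch below (ours, not from the paper) evaluates the right-hand side of the drive system at one time instant. The distributed-delay integrals are read here as kernel-weighted averages of the activation over past states and approximated by a finite quadrature with a single shared grid for the kernels b and c; the fuzzy AND/OR operators are taken separately on real and imaginary parts, one common convention for complex-valued FNNs that the paper does not spell out.

```python
import numpy as np

def cmin(z):
    """Fuzzy AND of complex terms, taken separately on real and imaginary parts
    (one common convention; an assumption on our part)."""
    return np.min(z.real) + 1j * np.min(z.imag)

def cmax(z):
    """Fuzzy OR, analogous to cmin."""
    return np.max(z.real) + 1j * np.max(z.imag)

def drive_rhs(O_now, O_delayed, O_history, kernel_weights,
              a, M, dM, mu, ups, I_ext, f=np.tanh):
    """Right-hand side of (2.1) for all neurons w = 1..n at one time instant.

    O_now          : current states O_psi(t), shape (n,), complex
    O_delayed      : states O_psi(t - tau(t)), shape (n,)
    O_history      : past states on a grid, shape (n_steps, n), most recent last
    kernel_weights : quadrature weights of the kernels on that grid, shape
                     (n_steps,), summing to ~1 in line with Assumption 2.2
    a, I_ext       : shape (n,);  M, dM, mu, ups : shape (n, n)
    """
    n = len(O_now)
    # activation applied to real and imaginary parts separately (as in Example 4.1)
    act = lambda z: f(z.real) + 1j * f(z.imag)
    # distributed-delay term: kernel-weighted average of past activations, per psi
    dist = kernel_weights @ act(O_history)            # shape (n,)
    out = np.empty(n, dtype=complex)
    for w in range(n):
        feedback = np.sum((M[w] + dM[w]) * act(O_delayed))
        fuzzy_and = cmin(mu[w] * dist)
        fuzzy_or = cmax(ups[w] * dist)
        out[w] = -a[w] * O_now[w] + feedback + fuzzy_and + fuzzy_or + I_ext[w]
    return out
```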
Correspondingly, the response system is defined as
\begin{align} {}_{t_0}^{C}D_{t}^{\alpha}Q_{w}(t) = & -a_{w}Q_{w}(t)+\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}\big(Q_{\psi}(t-\tau(t))\big)+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(Q_{\psi}(t))dt\\ &+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(Q_{\psi}(t))dt+I_{w}(t)+U_{w}(t), \end{align} | (2.2)
where U_{w}(t)\in C stands for the controller. In what follows, {}_{t_0}^{C}D_{t}^{\alpha} is abbreviated as D^{\alpha}.
Define the error as k_{w}(t)=Q_{w}(t)-SO_{w}(t), where S is the projective coefficient. Then we have
\begin{align} D^{\alpha}k_{w}(t) = & -a_{w}k_{w}(t)+\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)\tilde{f}_{\psi}k_{\psi}(t-\tau(t))+\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}\big(SO_{\psi}(t-\tau(t))\big)\\ &-\sum\limits_{\psi=1}^{n}S\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}\big(O_{\psi}(t-\tau(t))\big)+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)\tilde{f}_{\psi}k_{\psi}(t)dt\\ &+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(SO_{\psi}(t))dt-S\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(O_{\psi}(t))dt\\ &+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)\tilde{f}_{\psi}k_{\psi}(t)dt+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(SO_{\psi}(t))dt\\ &-S\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(O_{\psi}(t))dt+(1-S)I_{w}(t)+U_{w}(t), \end{align} | (2.3)
where \tilde{f}_{\psi}k_{\psi}(t)=f_{\psi}(Q_{\psi}(t))-f_{\psi}(SO_{\psi}(t)).
Assumption 2.1. For all O,Q\in C there exist constants L_{\varpi}>0 such that f_{\varpi}(\cdot) satisfies
|f_{\varpi}(O)-f_{\varpi}(Q)|\le L_{\varpi}|O-Q|. |
Assumption 2.2. The nonnegative kernel functions b_{w\psi}(t) and c_{w\psi}(t) are continuous and satisfy
(i)\ \int_{0}^{+\infty}b_{w\psi}(t)dt=1,\qquad (ii)\ \int_{0}^{+\infty}c_{w\psi}(t)dt=1. |
Assumption 2.3. The parameter perturbations \Delta m_{w\psi}(t) are bounded; that is,
|\Delta m_{w\psi}(t)|\le M_{w\psi}, |
where the M_{w\psi} are positive constants.
Assumption 2.4. For the fuzzy AND and OR operations, the following inequalities hold:
\begin{align*} (i)\ &\Big[\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)\big(f_{\psi}(O_{\psi})-f_{\psi}(Q_{\psi})\big)dt\Big]\Big[\bigwedge\limits_{\psi=1}^{n}\overline{\mu}_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)\big(f_{\psi}(O_{\psi})-f_{\psi}(Q_{\psi})\big)dt\Big]\\ &\le\sum\limits_{\psi=1}^{n}\delta_{\psi}|\mu_{w\psi}|^{2}\big(f_{\psi}(O_{\psi})-f_{\psi}(Q_{\psi})\big)\big(\overline{f_{\psi}(O_{\psi})}-\overline{f_{\psi}(Q_{\psi})}\big),\\ (ii)\ &\Big[\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)\big(f_{\psi}(O_{\psi})-f_{\psi}(Q_{\psi})\big)dt\Big]\Big[\bigvee\limits_{\psi=1}^{n}\overline{\upsilon}_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)\big(f_{\psi}(O_{\psi})-f_{\psi}(Q_{\psi})\big)dt\Big]\\ &\le\sum\limits_{\psi=1}^{n}\eta_{\psi}|\upsilon_{w\psi}|^{2}\big(f_{\psi}(O_{\psi})-f_{\psi}(Q_{\psi})\big)\big(\overline{f_{\psi}(O_{\psi})}-\overline{f_{\psi}(Q_{\psi})}\big), \end{align*} |
where \delta_{\psi} and \eta_{\psi} are positive numbers.
Definition 2.3. A constant vector O^{*}=(O_{1}^{*},\ldots,O_{n}^{*})^{\top}\in C^{n} satisfying
0=-a_{w}O_{w}^{*}+\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}(O_{\psi}^{*})+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(O_{\psi}^{*})dt+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(O_{\psi}^{*})dt+I_{w}(t) |
is called an equilibrium point of (2.1).
Definition 2.4. System (2.1) achieves ML stability if O^{*}=(O_{1}^{*},\ldots,O_{n}^{*})^{\top}\in C^{n} is the equilibrium point and the following condition holds:
\|O(t)\|\le\big[\Xi(O(t_{0}))E_{\alpha}\big(-\rho(t-t_{0})^{\alpha}\big)\big]^{\eta}, |
where 0<\alpha<1, \rho>0, \eta>0, and \Xi(0)=0.
Definition 2.5. If the FOFCVNNs (2.1) and (2.2) satisfy
\lim\limits_{t\to\infty}\|Q_{w}(t)-SO_{w}(t)\|=0, |
then they are said to achieve projective synchronization, where S\in R is the projective coefficient.
In this section, we establish three theorems. First, using the Banach contraction mapping principle, we derive Theorem 3.1. We then construct a nonlinear hybrid controller and an adaptive hybrid controller and establish two MLPS criteria.
Theorem 3.1. Let B be a Banach space equipped with the norm \|O\|_{1}=\sum_{\psi=1}^{n}|O_{\psi}|. Under Assumptions 2.1–2.4, if
\rho=\sum\limits_{w=1}^{n}\frac{|m_{w\psi}|L_{\psi}+|M_{w\psi}|L_{\psi}+|\mu_{w\psi}|L_{\psi}+|\upsilon_{w\psi}|L_{\psi}}{a_{\psi}}<1,\quad w,\psi=1,2,\ldots,n, | (3.1)
then the FOFCVNN (2.1) has a unique equilibrium point O^{*}=(O_{1}^{*},\ldots,O_{n}^{*})^{\top}\in C^{n}.
Proof. Let \widetilde{O}=(\widetilde{O}_{1},\widetilde{O}_{2},\ldots,\widetilde{O}_{n})^{\top}=(a_{1}O_{1},a_{2}O_{2},\ldots,a_{n}O_{n})^{\top}\in C^{n}, and construct a mapping \varphi:B\to B, \varphi(\widetilde{O})=(\varphi_{1}(\widetilde{O}),\varphi_{2}(\widetilde{O}),\ldots,\varphi_{n}(\widetilde{O}))^{\top}, with
\varphi_{\psi}(\widetilde{O})=\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}\Big(\frac{\widetilde{O}_{\psi}}{a_{\psi}}\Big)+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}\Big(\frac{\widetilde{O}_{\psi}}{a_{\psi}}\Big)dt+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}\Big(\frac{\widetilde{O}_{\psi}}{a_{\psi}}\Big)dt+I_{w},\quad w,\psi=1,2,\ldots,n. | (3.2)
For two different points α=(α1,…,αn)⊤,β=(β1,…,βn)⊤, it has
|φψ(α)−φψ(β)|≤|n∑ψ=1(mwψ+Δmwψ(t))fψ(αψaψ)−n∑ψ=1(mwψ+Δmwψ(t))fψ(βψaψ)|+|n⋀ψ=1μwψ∫+∞0bwψ(t)fψ(αψaψ)dt−n⋀ψ=1μwψ∫+∞0bwψ(t)fψ(βψaψ)dt|+|n⋁ψ=1υwψ∫+∞0cwψ(t)fψ(αψaψ)dt−n⋁ψ=1υwψ∫+∞0cwψ(t)fψ(βψaψ)dt|≤n∑ψ=1|mwψ+Δmwψ(t)||fj(αψaψ)−fψ(βψaψ)|+|n⋀ψ=1μwψfψ(αψaψ)−n⋀ψ=1μwψfψ(βψaψ)|+|n⋁ψ=1υwψfp(αψaψ)−n⋁ψ=1υwψfψ(βψaψ)|. | (3.3) |
According to Assumption 2.1, we get
|φψ(α)−φψ(β)|≤n∑ψ=1|mwψ+Δmwψ(t)|Lψaψ|αψ−βψ|+n∑ψ=1|μwψ|Lψaψ|αψ−βψ|+n∑ψ=1|υwψ|Lψaψ|αψ−βψ|. | (3.4) |
Next, we have
||φ(α)−φ(β)||1=n∑ψ=1|φψ(α)−φψ(β)|≤n∑ψ=1n∑ψ=1|mwψ+Δmwψ(t)|Lψaψ|αψ−βψ|+n∑w=1n∑ψ=1|μwψ|Lψaψ|αψ−βψ|+n∑w=1n∑ψ=1|υwψ|Lψaψ|αψ−βψ|≤n∑ψ=1(n∑ψ=1|mwψ+Δmwψ(t)|Lψaψ+n∑ψ=1|μwψ|Lψaψ+n∑w=1|υwψ|Lψaψ)|αψ−βψ|≤(n∑w=1|mwψ+Δmwψ(t)|Lψaψ+n∑w=1|μwψ|Lψaψ+n∑w=1|υwψ|Lψaψ)n∑ψ=1|αψ−βψ|. | (3.5) |
From Assumption 2.3, we have
\|\varphi(\alpha)-\varphi(\beta)\|_{1}\le\Big(\sum\limits_{w=1}^{n}\frac{|m_{w\psi}|L_{\psi}+|M_{w\psi}|L_{\psi}+|\mu_{w\psi}|L_{\psi}+|\upsilon_{w\psi}|L_{\psi}}{a_{\psi}}\Big)\|\alpha-\beta\|_{1}. | (3.6)
Finally, we obtain
\|\varphi(\alpha)-\varphi(\beta)\|_{1}\le\rho\|\alpha-\beta\|_{1}, | (3.7)
so \varphi is a contraction mapping. Hence there exists a unique fixed point \widetilde{O}^{*}\in C^{n} such that \varphi(\widetilde{O}^{*})=\widetilde{O}^{*}, that is,
\widetilde{O}_{\psi}^{*}=\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}\Big(\frac{\widetilde{O}_{\psi}^{*}}{a_{\psi}}\Big)+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}\Big(\frac{\widetilde{O}_{\psi}^{*}}{a_{\psi}}\Big)dt+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}\Big(\frac{\widetilde{O}_{\psi}^{*}}{a_{\psi}}\Big)dt+I_{w}. | (3.8)
Letting O_{\psi}^{*}=\widetilde{O}_{\psi}^{*}/a_{\psi}, we have
0=-a_{w}O_{w}^{*}+\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}(O_{\psi}^{*})+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(O_{\psi}^{*})dt+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(O_{\psi}^{*})dt+I_{w}. | (3.9)
Consequently, Theorem 3.1 holds.
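Condition (3.1) is a finite set of algebraic inequalities, so it can be checked directly from the network data. A small checker of our own, reading (3.1) column by column in \psi and using purely illustrative numbers, might look as follows:

```python
import numpy as np

def rho_condition(a, m, M_bound, mu, ups, L):
    """Evaluate rho of condition (3.1) for every column psi; return the worst case.

    a, L    : a_psi and Lipschitz constants L_psi (Assumption 2.1), shape (n,)
    m       : feedback template m_{w psi}, shape (n, n), complex
    M_bound : uncertainty bounds M_{w psi} (Assumption 2.3), shape (n, n)
    mu, ups : fuzzy MIN / MAX templates, shape (n, n), complex
    """
    col = np.sum(np.abs(m) + M_bound + np.abs(mu) + np.abs(ups), axis=0)  # sum over w
    rho = np.max(col * L / a)      # condition (3.1), read per column psi
    return rho, bool(rho < 1.0)

# purely illustrative data (not the data of the examples below)
a = np.array([2.0, 2.0])
L = np.array([0.1, 0.1])
m = np.array([[0.9 + 0.2j, -0.8], [0.7j, 1.0 - 0.3j]])
M_bound = np.array([[0.2, 0.1], [0.1, 0.2]])
mu = np.array([[0.5 + 0.2j, 0.4], [0.3, 0.6 - 0.1j]])
ups = np.array([[0.4, 0.3 - 0.2j], [0.5, 0.4 + 0.1j]])

print(rho_condition(a, m, M_bound, mu, ups, L))   # rho well below 1 here
```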
For the error system (2.3), we design the hybrid controller
U_{w}(t)=u_{1w}(t)+u_{2w}(t), | (3.10)
with
u_{1w}(t)=\begin{cases}-\pi k_{w}(t)+\lambda\dfrac{k_{w}(t)\overline{k}_{w}(t-\tau(t))}{\overline{k}_{w}(t)}, & k_{w}(t)\neq 0,\\ 0, & k_{w}(t)=0,\end{cases} | (3.11)
\begin{align} u_{2w}(t) = & -\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}\big(SO_{\psi}(t-\tau(t))\big)+\sum\limits_{\psi=1}^{n}S\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}\big(O_{\psi}(t-\tau(t))\big)\\ &-\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(SO_{\psi}(t))dt+S\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(O_{\psi}(t))dt\\ &-\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(SO_{\psi}(t))dt+S\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(O_{\psi}(t))dt-(1-S)I_{w}(t). \end{align} | (3.12)
Substituting (3.10)–(3.12) into (2.3) yields
\begin{align} D^{\alpha}k_{w}(t) = & -a_{w}k_{w}(t)+\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)\tilde{f}_{\psi}k_{\psi}(t-\tau(t))+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)\tilde{f}_{\psi}k_{\psi}(t)dt\\ &+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)\tilde{f}_{\psi}k_{\psi}(t)dt-\pi k_{w}(t)+\lambda\frac{k_{w}(t)\overline{k}_{w}(t-\tau(t))}{\overline{k}_{w}(t)}. \end{align} | (3.13)
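For later simulation, the hybrid controller (3.10)–(3.12) can be computed from the drive state, the synchronization error, and the (precomputed) distributed-delay averages. The sketch below is our own simplified realization: it assumes the tanh-type activation of the examples, the real/imaginary-part convention for the fuzzy operators used earlier, and a single quadrature for both kernels.

```python
import numpy as np

def act(z):
    # activation used in the examples: tanh on real and imaginary parts
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def cmin(z):  # fuzzy AND on real/imag parts (one common convention)
    return np.min(z.real) + 1j * np.min(z.imag)

def cmax(z):  # fuzzy OR
    return np.max(z.real) + 1j * np.max(z.imag)

def hybrid_controller(O_del, k_now, k_del, dist_fSO, dist_fO,
                      Mt, mu, ups, I_ext, S, pi_gain, lam):
    """Hybrid controller u = u1 + u2 of (3.10)-(3.12), simplified sketch.

    O_del            : drive state at t - tau(t), shape (n,), complex
    k_now, k_del     : error k(t) = Q(t) - S*O(t) and k(t - tau(t)), shape (n,)
    dist_fSO, dist_fO: kernel-weighted averages of f(S*O(.)) and f(O(.)) over
                       past states (the distributed-delay integrals), shape (n,)
    Mt               : m + Delta m(t), shape (n, n); mu, ups: fuzzy templates
    S, pi_gain, lam  : projective coefficient and the gains pi, lambda of (3.11)
    """
    n = len(k_now)
    u = np.zeros(n, dtype=complex)
    for w in range(n):
        # u2 (3.12): cancels the projection-mismatch terms of the error system (2.3)
        u2 = (-np.sum(Mt[w] * act(S * O_del)) + S * np.sum(Mt[w] * act(O_del))
              - cmin(mu[w] * dist_fSO) + S * cmin(mu[w] * dist_fO)
              - cmax(ups[w] * dist_fSO) + S * cmax(ups[w] * dist_fO)
              - (1.0 - S) * I_ext[w])
        # u1 (3.11): linear feedback plus the delay-dependent nonlinear term
        if k_now[w] != 0:
            u1 = -pi_gain * k_now[w] + lam * k_now[w] * np.conj(k_del[w]) / np.conj(k_now[w])
        else:
            u1 = 0.0
        u[w] = u1 + u2
    return u
```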
Theorem 3.2. Under controller (3.10) and Assumptions 2.1–2.4, systems (2.1) and (2.2) achieve MLPS if the following conditions hold:
\begin{align} &\varpi_{1}=\min\limits_{1\le w\le n}\Big\{2a_{w}-\lambda+2\pi-4n-\sum\limits_{\psi=1}^{n}\delta_{w}|\mu_{\psi w}|^{2}L_{w}^{2}-\sum\limits_{\psi=1}^{n}\eta_{w}|\upsilon_{\psi w}|^{2}L_{w}^{2}\Big\}>0,\\ &\varpi_{2}=\max\limits_{1\le w\le n}\Big\{\lambda+\sum\limits_{\psi=1}^{n}\big(|m_{\psi w}|^{2}+|M_{\psi w}|^{2}\big)L_{w}^{2}\Big\},\\ &\varpi_{1}-\varpi_{2}\Omega_{1}>0,\qquad \Omega_{1}>1. \end{align} | (3.14)
Proof. Choose the Lyapunov function
v_{1}(t)=\sum\limits_{w=1}^{n}k_{w}(t)\overline{k}_{w}(t). | (3.15)
Taking the Caputo derivative of v_{1}(t), we obtain
D^{\alpha}v_{1}(t)=\sum\limits_{w=1}^{n}D^{\alpha}\big(k_{w}(t)\overline{k}_{w}(t)\big). | (3.16)
By Lemma 2.2,
D^{\alpha}v_{1}(t)\le\sum\limits_{w=1}^{n}k_{w}(t)D^{\alpha}\overline{k}_{w}(t)+\sum\limits_{w=1}^{n}\overline{k}_{w}(t)D^{\alpha}k_{w}(t). | (3.17)
Substituting Eq (3.13) to (3.17), we can get
Dαv1(t)≤n∑w=1kw(t){−aw¯kw(t)+n∑ψ=1¯(mwψ+△mwψ(t))¯~fψkψ(t−τ(t))+n⋀ψ=1¯μwψ∫+∞0bwψ(t)fψ~kψ(t)dt+n⋁ψ=1¯υwψ∫+∞0cwψ(t)fψ~kψ(t)dt−π¯kw(t)+λ¯kw(t)¯kw(t−τ(t))kw(t)}+n∑w=1¯kw(t){−awkw(t)+n∑ψ=1(mwp+Δmwψ(t))~fψkψ(t−τ(t))+n⋀ψ=1μwψ∫+∞0bwψ(t)fψ~kψ(t)dt+n⋁ψ=1υwψ∫+∞0cwψ(t)fψ~kψ(t)dt−πkw(t)+λkw(t)¯kw(t−τ(t))¯kw(t)}. | (3.18) |
Combining Lemma 2.3, one gets
n∑w=1kw(t)λ¯kw(t)¯kw(t−τ(t))kw(t)+n∑w=1¯kw(t)λkw(t)¯kw(t−τ(t))¯kw(t)≤λn∑w=1(kw(t)¯kw(t)+kw(t−τ(t))¯kw(t−τ(t))). | (3.19) |
Based on (3.19), we derive
n∑w=1kw(t)(−aw¯kw(t)−π¯kw(t)+λ¯kw(t)¯kw(t−τ(t))kw(t))+n∑w=1¯kw(t)(−awkw(t)−πkw(t)+λkw(t)¯kw(t−τ(t))¯kw(t))≤−n∑w=1[2aw−λ+2π]kw(t)¯kw(t)+λn∑w=1kw(t−τ(t))¯kw(t−τ(t)). | (3.20) |
According to Assumptions 2.1 and 2.3, Lemma 2.3, we can get
n∑w=1n∑ψ=1[kw(t)¯mwψ¯~fψkψ(t−τ(t))+kw(t)¯△mwψ(t)¯~fψkψ(t−τ(t))+¯kw(t)mwψ~fψkψ(t−τ(t))+¯kw(t)△mwψ(t)~fψkψ(t−τ(t))]≤n∑w=1n∑ψ=1[2kw(t)¯kw(t)+|mwψ|2~fψkψ(t−τ(t))¯~fψkψ(t−τ(t))+|△mwψ(t)|2~fψkψ(t−τ(t))¯~fψkψ(t−τ(t))]≤n∑w=1n∑ψ=12kw(t)¯kw(t)+n∑w=1n∑ψ=1|mwψ|2L2ψkψ(t−τ(t))¯kψ(t−τ(t))+n∑w=1n∑ψ=1|Mwψ|2L2ψkψ(t−τ(t))¯kψ(t−τ(t))≤n∑w=1n∑ψ=12kw(t)¯kw(t)+n∑w=1n∑ψ=1(|mwψ|2+|Mwψ|2)L2ψkψ(t−τ(t))¯kψ(t−τ(t)). | (3.21) |
Based on Lemma 2.3, one has
n∑w=1[kw(t)n⋀ψ=1¯μwψ∫+∞0bwψ(t)fψ~kψ(t)dt+¯kw(t)n⋀ψ=1μwψ∫+∞0bwψ(t)fψ~kψ(t)dt]≤n∑w=1[kw(t)¯kw(t)+n⋀ψ=1μwψ∫+∞0bwψ(t)fψ~kψ(t)dtn⋀ψ=1¯μwψ∫+∞0bwψ(t)fψ~kψ(t)dt]. | (3.22) |
Combining Assumption 2.4 and Lemma 2.1, one obtains
n∑w=1n⋀ψ=1μwψ∫+∞0bwψ(t)fψ~kψ(t)dtn⋀ψ=1¯μwψ∫+∞0bwψ(t)fψ~kψ(t)dt≤n∑w=1n∑ψ=1δψ|μwψ|2L2ψkψ(t)¯kψ(t). | (3.23) |
Substituting (3.23) into (3.22), it has
n∑w=1[kw(t)n⋀ψ=1¯μwψ∫+∞0bwψ(t)fψ~kψ(t)dt+¯kw(t)n⋀ψ=1μwψ∫+∞0bwψ(t)fψ~kψ(t)dt]≤n∑w=1kw(t)¯kw(t)+n∑w=1n∑ψ=1δψ|μwψ|2L2ψkψ(t)¯kψ(t). | (3.24) |
Thus,
n∑w=1[kw(t)n⋁ψ=1¯υwψ∫+∞0cwψ(t)fψ~kψ(t)dt+¯kw(t)n⋁ψ=1υwψ∫+∞0cwψ(t)fψ~kψ(t)dt]≤n∑w=1kw(t)¯kw(t)+n∑ψ=1n∑ψ=1ηψ|υwψ|2L2ψkψ(t)¯kψ(t). | (3.25) |
Substituting (3.20)–(3.25) into (3.18), we get
Dαv1(t)≤−n∑w=1[2aw−λ+2π]kw(t)¯kw(t)+λn∑w=1kw(t−τ(t))¯kw(t−τ(t))+n∑w=1n∑ψ=12kw(t)¯kw(t)+n∑w=1n∑ψ=1(|mwψ|2+|Mwψ|2)L2ψkψ(t−τ(t))¯kψ(t−τ(t))+n∑w=1kw(t)¯kw(t)+n∑w=1n∑ψ=1δψ|μwψ|2L2ψkψ(t)¯kψ(t)+n∑w=1n∑ψ=1kw(t)¯kw(t)+n∑w=1n∑ψ=1ηψ|υwψ|2L2ψkψ(t)¯kψ(t)≤−n∑w=1[2aw−λ+2π−4n−n∑ψ=1δw|μψw|2L2w−n∑ψ=1ηw|υψw|2L2w]kw(t)¯kw(t)+n∑w=1[λ+n∑ψ=1(|mψw|2+|Mψw|2)L2w]kw(t−τ(t))¯kw(t−τ(t)). | (3.26) |
By the fractional Razumikhin theorem, inequality (3.26) yields
D^{\alpha}v_{1}(t)\le-\big(\varpi_{1}-\varpi_{2}\Omega_{1}\big)v_{1}(t)=-\varpi_{3}v_{1}(t). | (3.27)
Then, from Lemma 2.5,
v_{1}(t)\le v_{1}(0)E_{\alpha}\big[-(\varpi_{1}-\varpi_{2}\Omega_{1})t^{\alpha}\big], | (3.28)
and hence
v_{1}(t)=\sum\limits_{w=1}^{n}k_{w}(t)\overline{k}_{w}(t)=\|k(t)\|^{2}\le v_{1}(0)E_{\alpha}\big[-(\varpi_{1}-\varpi_{2}\Omega_{1})t^{\alpha}\big],\qquad \|k(t)\|\le\big[v_{1}(0)E_{\alpha}\big(-(\varpi_{1}-\varpi_{2}\Omega_{1})t^{\alpha}\big)\big]^{\frac{1}{2}}. | (3.29)
Moreover,
\lim\limits_{t\to\infty}\|k(t)\|=0. | (3.30)
From Definition 2.4, the error system is Mittag-Leffler stable, and from Definition 2.5 together with \lim_{t\to\infty}\|k(t)\|=0, the drive-response systems (2.1) and (2.2) reach MLPS. Therefore, Theorem 3.2 holds.
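Like condition (3.1), criterion (3.14) can be verified numerically once the gains \pi, \lambda and the Razumikhin constant \Omega_{1} are chosen. The following checker is our own sketch and follows the indexing of (3.14) as reconstructed above.

```python
import numpy as np

def mlps_condition_thm32(a, m_abs, M_bound, mu, ups, L, delta, eta,
                         lam, pi_gain, Omega1):
    """Check the MLPS criterion (3.14) of Theorem 3.2 (sketch).

    a, L, delta, eta : vectors of length n (a_w, L_w, delta_w, eta_w)
    m_abs, M_bound   : |m_{w psi}| and the uncertainty bounds M_{w psi}, (n, n)
    mu, ups          : fuzzy templates, shape (n, n), complex
    lam, pi_gain     : the gains lambda and pi of controller (3.11)
    Omega1           : the Razumikhin constant, required > 1
    """
    n = len(a)
    # sums over psi of |mu_{psi w}|^2 etc. (note the transposed index in (3.14))
    mu2 = np.sum(np.abs(mu)**2, axis=0)
    up2 = np.sum(np.abs(ups)**2, axis=0)
    m2 = np.sum(m_abs**2 + M_bound**2, axis=0)
    w1 = np.min(2*a - lam + 2*pi_gain - 4*n - delta*mu2*L**2 - eta*up2*L**2)
    w2 = np.max(lam + m2 * L**2)
    ok = (w1 > 0) and (Omega1 > 1) and (w1 - w2 * Omega1 > 0)
    return w1, w2, ok
```

Supplying the matrices of Example 4.1 together with the (unstated) gain \pi and \Omega_{1} reproduces checks of the kind reported in the next section.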
Unlike controller (3.10), we now design an adaptive hybrid controller
U_{w}(t)=u_{2w}(t)+u_{3w}(t), | (3.31)
u_{3w}(t)=-\varsigma_{w}(t)k_{w}(t),\qquad D^{\alpha}\varsigma_{w}(t)=I_{w}k_{w}(t)\overline{k}_{w}(t)-p\big(\varsigma_{w}(t)-\varsigma_{w}^{*}\big). | (3.32)
Substituting (3.31), together with (3.12) and (3.32), into (2.3) gives
\begin{align} D^{\alpha}k_{w}(t) = & -a_{w}k_{w}(t)+\sum\limits_{\psi=1}^{n}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)\tilde{f}_{\psi}k_{\psi}(t-\tau(t))+\bigwedge\limits_{\psi=1}^{n}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)\tilde{f}_{\psi}k_{\psi}(t)dt\\ &+\bigvee\limits_{\psi=1}^{n}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)\tilde{f}_{\psi}k_{\psi}(t)dt-\varsigma_{w}(t)k_{w}(t). \end{align} | (3.33)
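In a simulation, the adaptive law in (3.32) is itself a fractional differential equation for the gain \varsigma_{w}(t) and must be discretized along with the error dynamics. The fragment below (our own sketch) advances \varsigma_{w} by one explicit step with a Grünwald–Letnikov-type rule applied to \varsigma_{w}-\varsigma_{w}(t_{0}), treating I_{w} and p as design constants.

```python
import numpy as np

def gl_coefficients(alpha, n_steps):
    """Grunwald-Letnikov binomial coefficients c_0, ..., c_n for order alpha."""
    c = np.empty(n_steps + 1)
    c[0] = 1.0
    for j in range(1, n_steps + 1):
        c[j] = (1.0 - (alpha + 1.0) / j) * c[j - 1]
    return c

def adaptive_gain_step(sigma_hist, k_now, sigma_star, I_gain, p, alpha, dt, c):
    """One explicit step of the adaptive law (3.32),
       D^alpha sigma_w(t) = I_w k_w(t) conj(k_w(t)) - p (sigma_w(t) - sigma_w*),
    discretized with a Grunwald-Letnikov-type rule applied to sigma - sigma(t0).

    sigma_hist : past gains sigma_w(t_0), ..., sigma_w(t_{k-1}), shape (k, n)
    k_now      : current synchronization error k_w(t_{k-1}), shape (n,), complex
    c          : coefficients from gl_coefficients(alpha, ...)
    """
    k_steps, n = sigma_hist.shape
    rhs = I_gain * (k_now * np.conj(k_now)).real - p * (sigma_hist[-1] - sigma_star)
    # memory term sum_{j>=1} c_j (sigma_{k-j} - sigma_0)
    mem = c[1:k_steps] @ (sigma_hist[:0:-1] - sigma_hist[0])
    return sigma_hist[0] - mem + dt**alpha * rhs
```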
Theorem 3.3. Suppose that Assumptions 2.1–2.4 hold. Under controller (3.31), if the positive constants \Upsilon_{1}, \Upsilon_{2}, and \Omega_{2} satisfy
\begin{align} &\Upsilon_{1}=\min\limits_{1\le w\le n}\Big[2\sum\limits_{w=1}^{n}a_{w}-4n^{2}-\sum\limits_{w=1}^{n}\sum\limits_{\psi=1}^{n}\delta_{w}|\mu_{w\psi}|^{2}L_{w}^{2}-\sum\limits_{w=1}^{n}\sum\limits_{\psi=1}^{n}\eta_{w}|\upsilon_{w\psi}|^{2}L_{w}^{2}+\sum\limits_{w=1}^{n}\big(\varsigma_{w}(t)+\varsigma_{w}^{*}\big)\Big],\\ &\Upsilon_{2}=\max\limits_{1\le w\le n}\sum\limits_{w=1}^{n}\sum\limits_{\psi=1}^{n}\big(|m_{w\psi}|^{2}+|M_{w\psi}|^{2}\big)L_{w}^{2},\\ &\Upsilon_{1}-\Upsilon_{2}\Omega_{2}>0,\qquad \Omega_{2}>1, \end{align} | (3.34)
then systems (2.1) and (2.2) achieve MLPS.
Proof. Consider the Lyapunov function
v_{2}(t)=\underbrace{\sum\limits_{w=1}^{n}k_{w}(t)\overline{k}_{w}(t)}_{v_{21}(t)}+\underbrace{\sum\limits_{w=1}^{n}\frac{1}{I_{w}}\big(\varsigma_{w}(t)-\varsigma_{w}^{*}\big)^{2}}_{v_{22}(t)}. | (3.35)
By Lemmas 2.2 and 2.4,
D^{\alpha}v_{2}(t)\le\underbrace{\sum\limits_{w=1}^{n}k_{w}(t)D^{\alpha}\overline{k}_{w}(t)+\sum\limits_{w=1}^{n}\overline{k}_{w}(t)D^{\alpha}k_{w}(t)}_{R_{1}}+\underbrace{\sum\limits_{w=1}^{n}\frac{2}{I_{w}}\big(\varsigma_{w}(t)-\varsigma_{w}^{*}\big)D^{\alpha}\varsigma_{w}(t)}_{R_{2}}. | (3.36)
Substituting (3.32) into (3.36) gives
R_{2}=2\sum\limits_{w=1}^{n}\big(\varsigma_{w}(t)-\varsigma_{w}^{*}\big)k_{w}(t)\overline{k}_{w}(t)-\sum\limits_{w=1}^{n}\frac{2p}{I_{w}}\big(\varsigma_{w}(t)-\varsigma_{w}^{*}\big)^{2}. | (3.37)
Substituting (3.33) into R1 yields
R1=n∑w=1kw(t){−aw¯kw(t)+n∑ψ=1¯(mwψ+△mwψ(t))¯~fψkψ(t−τ(t))+n⋀ψ=1¯μwψ∫+∞0bwψ(t)fψ~kψ(t)dt+n⋁ψ=1¯νwψ∫+∞0cwψ(t)fψ~kψ(t)dt−ςw(t)¯kw(t)}+n∑w=1¯kw(t){−awkw(t)+n∑ψ=1(mwψ+Δmwψ(t))~fψkψ(t−τ(t))+n⋀ψ=1μwψ∫+∞0bwψ(t)fψ~kψ(t)dt+n⋁ψ=1νwψ∫+∞0cwψ(t)fψ~kψ(t)dt−ςw(t)kw(t)}. | (3.38) |
According to the inequality (3.20), it has
n∑w=1kw(t){−aw¯kw(t)−ςw(t)¯kw(t)}+n∑w=1¯kw(t){−awkw(t)−ςw(t)kw(t)}≤−2n∑w=1[aw+ςw(t)]kw(t)¯kw(t). | (3.39) |
Substituting (3.39) and (3.21)–(3.25) into (3.38), we have
R1≤−n∑w=1[2(aw+ςw(t))−4n−n∑ψ=1δw|μψw|2L2w−n∑ψ=1ηw|υψw|2L2w]kw(t)¯kw(t)+n∑w=1n∑ψ=1(|mψw|2+|Mψw|2)L2wkw(t−τ(t))¯kw(t−τ(t)). | (3.40) |
Substituting (3.37) and (3.40) into (3.36), we finally get
Dαv2(t)≤−n∑w=1[2(aw+ς∗w)−4n−n∑ψ=1δw|μψw|2L2w−n∑ψ=1ηw|υψw|2L2w]kw(t)¯kw(t)+n∑w=1n∑ψ=1(|mψw|2+|Mψw|2)L2wkw(t−τ(t))¯kw(t−τ(t))−n∑w=12pIw(ςw(t)−ς∗w)2. | (3.41) |
By the fractional Razumikhin theorem, the following estimate holds:
D^{\alpha}v_{2}(t)\le-\big(\Upsilon_{1}-\Upsilon_{2}\Omega_{2}\big)v_{21}(t)-2pv_{22}(t). | (3.42)
Let \Xi=\min\big(\Upsilon_{1}-\Upsilon_{2}\Omega_{2},\,2p\big); then
D^{\alpha}v_{2}(t)\le-\Xi v_{21}(t)-\Xi v_{22}(t)\le-\Xi v_{2}(t). | (3.43)
According to Lemma 2.5,
v_{2}(t)\le v_{2}(0)E_{\alpha}\big(-\Xi t^{\alpha}\big). | (3.44)
From Eq (3.35), we can deduce
v_{21}(t)\le\big(v_{21}(0)+v_{22}(0)\big)E_{\alpha}\big[-(\Upsilon_{1}-\Upsilon_{2}\Omega_{2})t^{\alpha}\big], | (3.45)
and
\|k(t)\|^{2}=\sum\limits_{w=1}^{n}k_{w}(t)\overline{k}_{w}(t)=v_{21}(t)\le v_{2}(t)\le v_{2}(0)E_{\alpha}\big(-\Xi t^{\alpha}\big). | (3.46)
Therefore,
\|k(t)\|\le\big[v_{2}(0)E_{\alpha}\big(-\Xi t^{\alpha}\big)\big]^{\frac{1}{2}}, | (3.47)
and
\lim\limits_{t\to\infty}\|k(t)\|=0. | (3.48)
Hence, from Definition 2.4, the error system is Mittag-Leffler stable, and from Definition 2.5 together with \lim_{t\to\infty}\|k(t)\|=0, systems (2.1) and (2.2) reach MLPS.
Remark 3.1. Unlike the adaptive controllers in [12], the linear feedback control in [16], and the hybrid controller in [34], this article constructs two different types of controllers: a nonlinear hybrid controller and an adaptive hybrid controller. The hybrid controller offers high flexibility, good scalability, strong disturbance rejection, and good real-time performance, while the adaptive hybrid controller additionally inherits the advantages of adaptive control: it reduces the control cost, shortens the synchronization time, and achieves stable tracking accuracy. Different from the studies of MLPS in the literature [33,34,35], this paper adopts a complex-valued fuzzy neural network model and fully accounts for the effects of delays and uncertainty. It is also worth mentioning that, in terms of methods, we use the complex-valued direct method, appropriate inequality techniques, and hybrid control techniques, which greatly reduce the computational complexity.
In this section, MATLAB simulations are used to verify the theoretical results.
Example 4.1. Consider the following two-dimensional FOFCVNN:
\begin{align} {}_{t_0}^{C}D_{t}^{\alpha}O_{w}(t) = & -a_{w}O_{w}(t)+\sum\limits_{\psi=1}^{2}\big(m_{w\psi}+\Delta m_{w\psi}(t)\big)f_{\psi}\big(O_{\psi}(t-\tau(t))\big)+\bigwedge\limits_{\psi=1}^{2}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(O_{\psi}(t))dt\\ &+\bigvee\limits_{\psi=1}^{2}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(O_{\psi}(t))dt+I_{w}(t), \end{align} | (4.1)
where \ O_{w}(t) = O_{w}^{R}(t)+iO_{w}^{I}(t)\in C, \ O_{w}^{R}(t), O_{w}^{I}(t)\in R, \ \tau_1(t) = \tau_2(t) = |\tan(t)|, \ I_1(t) = I_2(t) = 0, \ f_{\psi}(O_{\psi}(t)) = \tanh(O_{\psi}^{R}(t))+i\tanh(O_{\psi}^{I}(t)).
\begin{align*} &A = a_{1} = a_{2} = 1, \\ &B = \Big(m_{w\psi}+\Delta m_{w\psi}(t)\Big)_{2\times2} = \left( {\begin{array}{*{20}{c}} {-2.5+0.3i}&{2.6-1.9i}\\ {-2.3-1.2i}&{2.8+1.7i} \end{array}} \right) + \left( {\begin{array}{*{20}{c}} {-0.4\cos t}&{-0.6\sin t}\\ {-0.5\sin t}&{-0.3\cos t} \end{array}} \right), \\ &C = (\mu_{w\psi})_{2\times2} = \left( {\begin{array}{*{20}{c}} -2.8+1.3i&2.5-1.2i\\ -2.2-1.1i&2.5+1.6i \end{array}} \right), \quad D = (\upsilon_{w\psi})_{2\times2} = \left( {\begin{array}{*{20}{c}} -2.9+0.7i&2.8-1.2i\\ -2.1-1.9i&2.9+1.9i \end{array}} \right). \end{align*} |
The response system is
\begin{align} {}_{{t_0}}^CD_t^\alpha {Q_w}(t) & = -a_{w}Q_{w}(t)+\sum\limits_{\psi = 1}^{2}(m_{w\psi}+\Delta m_{w\psi}(t))f_{\psi}(Q_{\psi}(t-\tau(t))) +\bigwedge_{\psi = 1}^{2}\mu_{w\psi}\int_{0}^{+\infty}b_{w\psi}(t)f_{\psi}(Q_{\psi}(t))dt \\ &\quad+\bigvee_{\psi = 1}^{2}\upsilon_{w\psi}\int_{0}^{+\infty}c_{w\psi}(t)f_{\psi}(Q_{\psi}(t))dt+I_{w}(t)+U_{w}(t), \end{align} | (4.2) |
where {Q_w}(t) = Q_w^R(t) + iQ_w^I(t) \in {C} . The initial values of (4.1) and (4.2) are
\begin{align*} \begin{aligned} &{O_1}(0) = 1.1 - 0.2i , \ {O_2}(0) = 1.3 - 0.4i , \\ &{Q_1}(0) = 1.3 - 0.3i , \ {Q_2}(0) = 1.5 - 0.1i. \end{aligned} \end{align*} |
The phase portraits of system (4.1) are shown in Figure 1. The nonlinear hybrid controller is designed as in (3.10), and we pick \alpha = 0.95, S = 0.55, \delta_{1} = \delta_{2} = 1.1 , \eta_{1} = \eta_{2} = 1.3 , L_{1} = L_{2} = 0.11 , \lambda = 0.5 . By calculation, \varpi_{1} = 4.47 > 0 and \varpi_{2} = 0.71 . Taking \Omega_{1} = 1.5 gives \varpi_{1}-\varpi_{2}\Omega_{1} > 0 , so the conditions of Theorem 3.2 are satisfied, and the MATLAB results agree with the theory. Figures 2 and 3 show the state trajectories of k_{w}(t) and the error norm ||k_{w}(t)|| without controller (3.10); Figures 4 and 5 show the corresponding state trajectories and error norms with controller (3.10).
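The figures were produced with MATLAB. For readers who prefer Python, the abbreviated loop below combines the pieces sketched earlier (Grünwald–Letnikov discretization, drive/response right-hand sides, hybrid controller). It is our own structural sketch only: the fuzzy and distributed-delay terms are omitted, the delay is replaced by a constant illustrative value, and the gain \pi (not reported in Example 4.1) is set arbitrarily, so it will not reproduce Figures 1–5 exactly.

```python
import numpy as np

def act(z):
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

alpha, S, pi_gain, lam = 0.95, 0.55, 12.0, 0.5     # pi_gain is illustrative
a = np.array([1.0, 1.0])
m = np.array([[-2.5 + 0.3j, 2.6 - 1.9j], [-2.3 - 1.2j, 2.8 + 1.7j]])
dt, T, tau = 0.01, 10.0, 0.2                       # constant illustrative delay
N, d = int(T / dt), int(tau / dt)

c = np.empty(N + 1); c[0] = 1.0                    # GL coefficients
for j in range(1, N + 1):
    c[j] = (1 - (alpha + 1) / j) * c[j - 1]

O = np.zeros((N + 1, 2), dtype=complex); O[0] = [1.1 - 0.2j, 1.3 - 0.4j]
Q = np.zeros((N + 1, 2), dtype=complex); Q[0] = [1.3 - 0.3j, 1.5 - 0.1j]

for k in range(1, N + 1):
    t = (k - 1) * dt
    dm = np.array([[-0.4*np.cos(t), -0.6*np.sin(t)], [-0.5*np.sin(t), -0.3*np.cos(t)]])
    Od, Qd = O[max(k - 1 - d, 0)], Q[max(k - 1 - d, 0)]   # constant initial history
    kk, kd = Q[k - 1] - S * O[k - 1], Qd - S * Od
    # hybrid controller: u2 cancels the mismatch terms, u1 adds the feedback (3.11)
    u2 = -(m + dm) @ act(S * Od) + S * ((m + dm) @ act(Od))
    denom = np.where(kk != 0, np.conj(kk), 1.0)
    u1 = np.where(kk != 0, -pi_gain * kk + lam * kk * np.conj(kd) / denom, 0.0)
    fO = -a * O[k - 1] + (m + dm) @ act(Od)
    fQ = -a * Q[k - 1] + (m + dm) @ act(Qd) + u1 + u2
    # Caputo Grunwald-Letnikov step applied to X - X(0)
    for X, rhs in ((O, fO), (Q, fQ)):
        mem = c[1:k] @ (X[k - 1:0:-1] - X[0])
        X[k] = X[0] - mem + dt**alpha * rhs

err = np.abs(Q - S * O)
print("max |k(t)| over the last second:", err[-100:].max())   # should be small
```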
Example 4.2. Take \alpha = 0.95 , S = 0.97, k_{1} = k_{2} = 5.1, \varsigma_{1} = \varsigma_{2} = 0.2, \varsigma_{1}^{*} = 5, \varsigma_{2}^{*} = 8, \delta_{1} = \delta_{2} = 1.1, \eta_{1} = \eta_{2} = 1.5, L_{1} = L_{2} = 0.1, with the remaining parameters as before. By calculation, \Upsilon_{1} = 12.49 and \Upsilon_{2} = 7.36 . Letting \Omega_{2} = 1.1 , we have \Upsilon_{1}-\Upsilon_{2}\Omega_{2} > 0 , so the conditions of Theorem 3.3 hold. Figure 6 depicts the state trajectories of k_{w}(t) with the adaptive controller (3.31), and Figure 7 shows the error norm ||k_{w}(t)|| with controller (3.31). As Figure 8 shows, the control parameters \varsigma_{w}(t) converge to constants.
Example 4.3. Consider the following data:
\begin{align*} &A = a_{1} = a_{2} = 1, \\ &B = \Big(m_{w\psi}+\Delta m_{w\psi}(t)\Big)_{2\times2} = \left( {\begin{array}{*{20}{c}} {-1.6+0.5i}&{1.8-1.6i}\\ {-1.6-1.2i}&{2.1+1.7i} \end{array}} \right) + \left( {\begin{array}{*{20}{c}} {-0.4\cos t}&{-0.6\sin t}\\ {-0.5\sin t}&{-0.3\cos t} \end{array}} \right), \\ &C = (\mu_{w\psi})_{2\times2} = \left( {\begin{array}{*{20}{c}} -1.8+1.3i&1.5-1.1i\\ -1.8-1.1i&1.8+1.6i \end{array}} \right), \quad D = (\upsilon_{w\psi})_{2\times2} = \left( {\begin{array}{*{20}{c}} -1.9+0.7i&1.9-1.4i\\ -1.7-1.3i&2.1+1.8i \end{array}} \right). \end{align*} |
Let the initial value be
\begin{align*} \begin{aligned} &{O_1}(0) = 3.2 - 1.2i , \ {O_2}(0) = 3.0 - 1.4i , \\ &{Q_1}(0) = 3.1 - 1.3i , \ {Q_2}(0) = 3.3 - 1.1i. \end{aligned} \end{align*} |
Picking \alpha = 0.88, S = 0.9, \delta_{1} = \delta_{2} = 1.2 , \eta_{1} = \eta_{2} = 1.5 , L_{1} = L_{2} = 0.1 , \lambda = 0.5 , \Omega_{1} = 2 . After calculation, \varpi_{1} = 6.62 > 0 , \varpi_{2} = 2.28 , and \varpi_{1}-\varpi_{2}\Omega_{1} > 0 . Similar to Example 4.1, Figures 9 and 10 show the state trajectory of k_{w}(t) and ||k_{w}(t)|| without the controller (3.10). Figures 11 and 12 show state trajectories and error norms with the controller (3.10), respectively.
Example 4.4. Take \alpha = 0.88 , S = 0.9, k_{1} = k_{2} = 2.2, \varsigma_{1} = \varsigma_{2} = 0.2, \varsigma_{1}^{*} = 4, \varsigma_{2}^{*} = 9, \delta_{1} = \delta_{2} = 1.1, \eta_{1} = \eta_{2} = 1.5, L_{1} = L_{2} = 0.1, \Omega_{2} = 2 , with the other parameters as in Example 4.3. Then \Upsilon_{1} = 22.29 , \Upsilon_{2} = 10.67 , and \Upsilon_{1}-\Upsilon_{2}\Omega_{2} > 0 , so the conditions of Theorem 3.3 are satisfied. Figure 13 depicts the state trajectories of k_{w}(t) with the adaptive controller (3.31), Figure 14 shows the error norm ||k_{w}(t)|| with controller (3.31), and the control parameters \varsigma_{w}(t) are described in Figure 15.
In this paper, we studied the MLPS problem for delayed FOFCVNNs. First, using the contraction mapping principle, a sufficient criterion for the existence and uniqueness of the equilibrium point of FOFCVNNs was obtained. Second, based on the basic theory of fractional calculus, inequality analysis techniques, the Lyapunov function method, and the fractional Razumikhin theorem, MLPS criteria for FOFCVNNs were derived. Finally, four simulation examples were given to verify the theoretical results. In this work we fully considered delays and parameter uncertainty and used continuous control methods in the synchronization process. However, for fractional calculus there is a remarkable difference between continuous-time and discrete-time systems [14]. Therefore, in future work we may convert the continuous-time system proposed in this paper into a discrete-time system and further study discrete-time MLPS, or investigate finite-time MLPS on the basis of the MLPS results presented here.
Yang Xu: Writing–original draft; Zhouping Yin: Supervision, Writing–review; Yuanzhi Wang: Software; Qi Liu: Writing–review; Anwarud Din: Methodology. All authors have read and approved the final version of the manuscript for publication.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors declare no conflicts of interest.