
As living standards improve, people are placing increasing emphasis on dental health. With the rising number of patients, clinicians face the cumbersome task of image interpretation, which undermines the efficiency of healthcare services. In recent years, intelligent diagnostic systems based on intraoral endoscopy have emerged as vital adjunct tools for oral treatment, capable of conducting dental lesion image segmentation and preliminary diagnosis [1]. However, due to the confined space within the oral cavity and the limited field of view of the endoscope, each capture yields only fragmentary images of a few teeth, making it difficult to provide a continuous, comprehensive diagnostic assessment of the entire jaw. To address this issue, panoramic stitching of half jaw images has become a key research focus.
Compared to other images, intraoral endoscopic images present unique challenges due to their short focal length, limited shooting range and the similar structure of teeth. Furthermore, these images are often affected by variables such as oral cavity structure, tongue and saliva, which can result in uneven lighting or occlusions, manifesting in repetitive and weak textural features. These factors complicate the stitching of intraoral endoscopic images, leading to issues such as a scarcity of feature points and low accuracy in feature matching. Moreover, the spatial relationship between intraoral endoscopic images is generally not horizontal or vertical but instead conforms to the curvature of the dental arch. Therefore, specialized treatment is required during the image fusion stage to accommodate this curvilinear spatial arrangement. Such treatment enables accurate capture of the curved tooth structures. Presently, methods based on local features like SURF [2] and ORB [3] are widely applied in the image stitching domain [4,5]. However, when applied to intraoral endoscopic image stitching, these methods are prone to false detections and mismatches due to similarities in tooth shape and the deformable nature of oral soft tissues. Additionally, traditional panoramic image fusion techniques are ill-suited for the specific alignment of intraoral endoscopic images, leading to issues like noticeable double exposures and artifacts.
In summary, we introduce a method for panoramic stitching of semi-mandibular intraoral endoscopic images to address these unique image characteristics and alignment challenges. The main contributions are as follows:
1) By integrating the concept of Time-weighting [6], the attention mechanism has been enhanced, effectively increasing the quantity of feature-matching pairs. Coupled with the Sinkhorn [7] and RANSAC [8] algorithms, this results in heightened matching accuracy and reduced error rates.
2) We propose a wavelet transform and weighted fusion algorithm based on the dental arch arrangement of intraoral endoscopic images, resolving the applicability issues of intraoral endoscopic image arrangement and facilitating seamless fusion of these images.
3) An intraoral endoscopic image stitching dataset, termed as Intraoral Camera Panorama Album (ICPA), has been constructed. This dataset features image pairs with smaller overlapping regions, averaging around 35%.
Image stitching is the process of merging multiple partially overlapping images into a larger composite image, involving steps such as feature detection, feature matching and image fusion. In the domain of endoscopic medical image stitching [9], early research primarily utilized frequency-domain correlation algorithms and maximum mutual information methods. For example, Y.Hernandez-Mier [10] proposed an automated stitching algorithm specifically designed for 2D cystoscopic sequence images, demonstrating robustness against blurring, variable lighting and non-uniform radial distortions. This algorithm also exploited cancer autofluorescence within the images to detect cancerous lesions. Bergen et al. [11] employed graph-based techniques for stitching cystoscopic video frames, identifying coherent subgraphs from framework graphs to stitch local patches into larger composites. In intestinal endoscopic image stitching, Igarashi et al. [12,13] and Ishii et al. [14] utilized the "shape-from-shading" technique to generate open panoramic images of tubular organs, such as male urethrae, pig colons and human colons. They assumed the organs to be cylindrical and that the light axis was perfectly aligned with the cylindrical axis, generating panoramas from circles extracted around the image center during constant endoscope retraction. In 2002, Can et al. [15] presented mosaics generated from images of the human retina acquired with a fundus microscope. They explicitly exploited vascular structures to register pairs of images and used a quadric surface model to represent the retina. Their work is based on earlier experiments by Becker et al. carried out in 1998 [16]. In 2013, Yi et al. [17] presented real-time visualization technology for capsule endoscopic videos based on gastrointestinal tract unfolding panoramas. However, their approach was solely reliant on homographic descriptions of inter-frame transformations, leading to issues of ghosting and artifacts in the stitched result. Schuster et al. [18] have successfully applied general-purpose stitching software to laryngoscopic image sequences and presented panorama images of the larynx for documentation purposes.
Research and literature on stitching images in intraoral endoscopy are relatively sparse. In 2018, Ruiqing He proposed a teeth occlusal surface panoramic image stitching technique based on local optimization algorithms [19]. This method utilized adaptive SIFT for the stitching process but required image acquisition from devices equipped with a shooting track, making it computationally intensive and time-consuming. In the same year, he also presented a modification of the previous method, introducing a teeth buccal side panoramic image stitching technique based on local optimization algorithms [20]. This updated method employed bundle adjustment to calculate adjacent transformation matrices, thus enhancing the quality of the stitching. However, the time consumption issue persisted due to the continued use of SIFT. Additionally, the requirement for specialized image acquisition equipment with shooting tracks limits the method's universality.
The realm of image stitching has garnered substantial attention in recent research endeavors. A View-Free Image Stitching Network (VFISNet) was proposed by Lang Nie and co-authors [21], which employs deep learning to estimate homography matrices based on global homography, thus enabling effective image stitching. This method successfully mitigates the poor generalizability of previous learning algorithms in scenarios involving flexible views. However, its effectiveness diminishes in the presence of sparse feature points and abundant repetitive textures within images. Subsequently, Lang Nie and associates [22] advanced an Unsupervised Deep Image Stitching (UDIS) technique, specifically designed to enhance the accuracy of homography-based registrations in images featuring large disparities by reconstructing the stitching features. However, its utility is restricted to specific natural scenes endowed with sufficient geometric complexities. Contributing further, Daniel DeTone and collaborators [23] devised the SuperPoint network for feature detection and description in images, which detects a broader spectrum of interest points relative to conventional methods. Moreover, Sarlin and others [24] introduced the SuperGlue methodology, which incorporates graph neural networks and attention mechanisms to address the optimization of feature point assignments. Xiangyang Xu and colleagues [25] also formulated an image stitching method that integrates both global and local features, thus overcoming challenges of large disparities and high-resolution needs.
In summary, methods for stitching intraoral endoscopic images require the ability to identify as many feature points and matching pairs as possible while maintaining accuracy, especially in cases of repetitive and low textures. SuperPoint and SuperGlue demonstrate high performance in both feature detection count and accuracy when applied to intraoral endoscopic images. Therefore, we employ SuperPoint for the task of feature detection, and we borrow and refine the attention mechanism concept from SuperGlue for feature matching. Subsequently, we utilize a wavelet transform weighted fusion approach based on dental arch alignment to achieve panoramic stitching of intraoral endoscopic images.
As illustrated in Figure 1, the overall stitching workflow of the proposed method is delineated, with the green dashed section representing the primary innovations of this paper. Initially, preprocessing steps are applied to the intraoral endoscopic images slated for stitching. These include lighting compensation, resizing and grayscale conversion to counteract issues related to point light source imaging and significant lighting variations. Considering the recurrent and low-texture characteristics often found in intraoral endoscopic images, we employ the SuperPoint deep learning methodology for feature extraction. In addition, we design a feature-matching network that incorporates Time-weighting concepts and iteratively improves upon self-attention mechanisms for more effective feature aggregation. Subsequently, a combination of Sinkhorn and RANSAC algorithms is utilized to ascertain mutually matching feature points between images intended for stitching, thus deriving the homography matrices. Finally, due to the typical arc-shaped arrangement in intraoral endoscopic images, we propose a wavelet-transform-based weighted fusion algorithm aligned with dental arch configurations. This algorithm initially preprocesses image pairs for alignment and utilizes wavelet transformation for image fusion. Moreover, a fade-in, fade-out weighted fusion strategy is deployed for seamless stitching.
First, it is imperative to standardize the dimensions of the input images and perform lighting compensation to ensure that significant discrepancies in lighting intensity across image pairs do not adversely impact the visual perception of the stitched image. Subsequently, we employ a pre-trained SuperPoint network for feature detection. The SuperPoint network incorporates a strategy known as Homographic Adaptation to enhance the detection rate of feature points and their adaptability across different scenarios. Consequently, when confronted with large areas of repetitive textures and low-texture environments, SuperPoint is capable of detecting a greater number of features with higher accuracy compared to traditional feature detection methods.
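As a rough illustration of this preprocessing stage, the sketch below resizes, grayscales and locally equalizes an input frame before it is handed to a pretrained SuperPoint model. The CLAHE-based compensation and all parameter values are stand-ins chosen for illustration (the paper's lighting compensation uses the ACE algorithm described in the dataset section), and running SuperPoint itself is only indicated by a comment.

```python
import cv2
import numpy as np

def preprocess_frame(img_bgr, size=(640, 480)):
    """Resize, convert to grayscale and roughly compensate lighting
    before feature detection (illustrative parameters only)."""
    img = cv2.resize(img_bgr, size)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Local histogram equalization as a simple stand-in for lighting compensation.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)
    # SuperPoint expects a single-channel float image normalized to [0, 1].
    return gray.astype(np.float32) / 255.0

# A pretrained SuperPoint model (loaded separately) would then be run on the
# preprocessed frame to obtain keypoints, detection scores and descriptors.
```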
This phase consists of two main components: the attention-based Graph Neural Network (GNN) section and the matching section. In the GNN section, feature aggregation is iteratively performed through Time-Weighting improved self-attention and cross-attention mechanisms, culminating in the generation of matching descriptors akin to feature descriptors. The matching section takes the output from the GNN as input and establishes an allocation matrix. It then employs the Sinkhorn algorithm in conjunction with the RANSAC method to identify correspondingly matched feature point pairs.
(1) MLP encoder
The attention-based GNN part is shown in Figure 2. For the $i$-th feature point of image A to be stitched, its position is denoted by $p_i^A$ and its feature descriptor by $d_i^A$; the same notation is used for image B. Initially, the feature points of both images are enhanced for unique matching characteristics via a Multilayer Perceptron (MLP) encoder. Subsequently, the concept of Time-Weighting is employed to improve the self-attention mechanism. Feature points and descriptors are then cyclically iterated through self-attention and cross-attention processes to aggregate image features, ultimately yielding matching descriptors analogous to traditional feature descriptors.
Both the location and the descriptor of each feature point contribute to heightened specificity in feature matching. Therefore, the initial representation ${}^{(0)}x_i$ of each feature point combines the position and the descriptor as illustrated in Eq (1).
$x_i = d_i + \mathrm{MLP}_{enc}(p_i)$ | (1) |
Here, $p_i$ represents the position of the $i$-th feature point and $d_i$ its descriptor, while $\mathrm{MLP}_{enc}$ denotes a Multilayer Perceptron (MLP) employed for dimensionality elevation of the low-level positional features, effectively coupling visual appearance with feature point location. The architecture of this encoder allows the subsequent attention mechanisms to fully consider both the appearance and positional similarity of the features.
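A minimal PyTorch sketch of such a keypoint encoder is given below. The layer widths, the use of a detection-score input channel and the Conv1d-over-points formulation are assumptions made for illustration; the paper does not specify the exact encoder architecture.

```python
import torch
import torch.nn as nn

class KeypointEncoder(nn.Module):
    """MLP that lifts keypoint position (x, y) and detection score to the
    descriptor dimension so it can be added to the visual descriptor, as in Eq (1)."""
    def __init__(self, desc_dim=256, hidden=(32, 64, 128)):
        super().__init__()
        layers, in_dim = [], 3                      # (x, y, score) per keypoint
        for out_dim in (*hidden, desc_dim):
            layers += [nn.Conv1d(in_dim, out_dim, kernel_size=1), nn.ReLU()]
            in_dim = out_dim
        self.mlp = nn.Sequential(*layers[:-1])      # drop the ReLU on the last layer

    def forward(self, kpts, desc):
        # kpts: (B, N, 3) keypoints, desc: (B, desc_dim, N) descriptors
        return desc + self.mlp(kpts.transpose(1, 2))
```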
(2) Time-Weighting improved attention mechanism
For a given image, each node in its graph corresponds to a feature point in the image. The graph consists of two types of undirected edges: "intra-image edges," also known as self-edges, which connect feature points within the same image, and "inter-image edges," or cross-edges, which link feature points of one image to all feature points in the other image. Self-edges use self-attention, and cross-edges use cross-attention. Aggregating self-attention and cross-attention yields $m_{\varepsilon \to i}$, as shown in Eq (2).
$m_{\varepsilon \to i} = \sum_{j} \alpha_{ij} v_j$ | (2) |
The attention weight $\alpha_{ij}$ is the softmax over the similarities between the query and the retrieved keys, as shown in Eq (3):
$\alpha_{ij} = \mathrm{Softmax}_j\left(q_i^{\top} k_j\right)$ | (3) |
In Eqs (2) and (3), let the feature point $i$ to be queried be located on the query image $Q$, and all source feature points $j$ be located on the source image $S$. The query $q_i$, key $k_j$ and value $v_j$ can then be written in the form of Eq (4):
$q_i = W_1^{(\ell)} x_i^Q + b_1 \quad \text{and} \quad \begin{bmatrix} k_j \\ v_j \end{bmatrix} = \begin{bmatrix} W_2 \\ W_3 \end{bmatrix}^{(\ell)} x_j^S + \begin{bmatrix} b_2 \\ b_3 \end{bmatrix}$ | (4) |
Each layer $\ell$ has its corresponding set of projection parameters $W$ and biases $b$, which are shared by all feature points. $q_i$ represents the transformed feature point $i$ on the query image, while $k_j$ and $v_j$ are representations of the transformed feature point $j$ on the source image. $\alpha_{ij}$ signifies the similarity between the two features; a higher value indicates greater similarity. This similarity measure is then used to weight and sum the $v_j$, resulting in $m_{\varepsilon \to i}$, which is termed feature aggregation.
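The single-head PyTorch sketch below follows Eqs (2)-(4) directly: queries come from the query image's representations and keys/values from the source image's (the same image for self-attention, the other image for cross-attention). The descriptor width and the omission of multi-head splitting and score scaling are simplifications for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AttentionAggregation(nn.Module):
    """Computes the message m_{eps->i} of Eq (2) with attention weights from Eq (3)
    and the linear projections of Eq (4)."""
    def __init__(self, dim=256):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)   # W1, b1
        self.k_proj = nn.Linear(dim, dim)   # W2, b2
        self.v_proj = nn.Linear(dim, dim)   # W3, b3

    def forward(self, x_query, x_source):
        # x_query: (B, M, dim) representations on the query image
        # x_source: (B, N, dim) representations on the source image
        q = self.q_proj(x_query)
        k = self.k_proj(x_source)
        v = self.v_proj(x_source)
        alpha = torch.softmax(q @ k.transpose(-2, -1), dim=-1)   # Eq (3), shape (B, M, N)
        message = alpha @ v                                      # Eq (2), shape (B, M, dim)
        return message, alpha
```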
Following the idea of Time-Weighting, each attention weight is reweighted after the softmax, as shown in Eq (5):
$\alpha_{ij} = \alpha_{ij} \ast \omega_{ij}$ | (5) |
$\omega_{ij}$ represents the Time-Weighting factor. The weight is relatively low in non-overlapping areas along the image's edges, and higher in the image's central region and in the overlapping areas. Time-Weighting was originally employed as a component of relative position embedding in text recognition. Incorporating Time-Weighting into the self-attention mechanism is motivated by two considerations, followed below by an illustrative sketch.
First, during the process of stitching intraoral endoscopic images, the contributions for different regions, such as teeth and tongue, ought to vary.
Second, for peripheral information with a comparatively low data density, the overall self-attention weight should be reduced.
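For illustration only, the sketch below applies Eq (5) with a hypothetical centre-biased weight map built from keypoint positions; the actual construction of $\omega_{ij}$ used in the paper is not reproduced here, and all function names and constants are assumptions.

```python
import torch

def centre_biased_weights(kpts_xy, img_size, num_source):
    """Hypothetical Time-Weighting factor: keypoints near the image centre keep a
    weight close to 1, keypoints near the borders are down-weighted."""
    h, w = img_size
    cx, cy = w / 2.0, h / 2.0
    d = torch.sqrt(((kpts_xy[..., 0] - cx) / cx) ** 2 +
                   ((kpts_xy[..., 1] - cy) / cy) ** 2)        # (B, M) distance from centre
    w_i = torch.clamp(1.0 - 0.5 * d, min=0.1)
    return w_i.unsqueeze(-1).expand(-1, -1, num_source)       # broadcast to (B, M, N)

# alpha is the (B, M, N) attention matrix from the previous sketch:
# alpha = alpha * centre_biased_weights(kpts_xy, (480, 640), alpha.shape[-1])   # Eq (5)
```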
In self-attention, edges within a single image are aggregated to better focus on all distinctive points, unrestricted by their neighboring positional features. In contrast, cross-attention serves to match features between two images that share similar appearances.
After L iterations of self/cross-attention, the output of the attention-based Graph Neural Network (GNN) for image A can be represented as shown in Equation (6).
$f_i^A = W x_i^A + b, \quad \forall i \in A$ | (6) |
$f_i^A$ can be interpreted as the matching descriptor for the $i$-th feature point of image A, analogous to a feature descriptor, and is specifically designed for feature matching purposes. A similar formulation applies to image B.
The visualization of the aforementioned process is illustrated in Figure 3. In self-attention, edges within a single image are aggregated to heighten focus on all unique points without being limited by neighboring positional features. Conversely, cross-attention is employed to match features between two visually similar images. Analogous to how humans perform feature matching—by tentatively filtering key matching points through iterative scrutiny between two images—the model aims to simulate this human-like approach. The core idea is to leverage Graph Neural Networks (GNNs) based on attention mechanisms to replicate this process, thereby actively seeking context to enhance feature-point specificity and exclude anomalous matches.
(1) Assignment matrix
In the matching section, the objective is to construct an assignment matrix $P$ to determine the pairs of matched features, as outlined in Figure 4. Initially, the inner products of $f_i^A$ and $f_j^B$ obtained from the GNN step are calculated to yield scores $S_{ij}$, which are organized into a score matrix $S$. An "unmatched" channel is incorporated to form $\bar{S}$. Subsequently, the Sinkhorn algorithm, in conjunction with the RANSAC algorithm, is employed to identify and refine feature matches, excluding erroneous matches during each iteration. The ultimate goal is to derive an optimal assignment matrix $P$ from the score matrix $S \in \mathbb{R}^{M \times N}$ of potential matches; the optimization of $P$ is accomplished by maximizing the aggregate score $\sum_{i,j} S_{i,j} P_{i,j}$. From the resulting matrix, the set $m^{AB}$ of mutually matching feature point pairs between images A and B is obtained.
As demonstrated in Figure 5, the yellow rectangles and circles represent the reference image A and its M corresponding feature points within a pair of images to be stitched, while the blue rectangles and circles represent the target image B and its N feature points. Each row of the assignment matrix P represents the potential N matches for a particular feature point originating from the reference image A to the target image B.
In the reference image A, there are three feature points, whereas the target image B has four. Consequently, the dimensions of the assignment matrix $P$ would be 3×4. From the first row of matrix $P$, as shown in Figure 5, the maximum value is 0.6, indicating a match between the first feature point in reference image A and the second feature point in target image B. Likewise, in the first column of $P$, the highest value is 0.5, signifying a match between the first feature in B and the second feature in A. It is worth mentioning that this assignment matrix $P$ is not fully distributed. In an ideal scenario, the sum of each row or column in $P$ would equal 1. This "ideal scenario" assumes that all features in both images A and B have corresponding matches; however, real-world conditions such as occlusions, changes in viewpoint or noise may prevent such perfect matching.
As illustrated in Figure 6, for the third column of matrix P, which corresponds to the third feature in the target image B, no matching feature is identified. Hence, the sum of the third column is less than 1. The subsequent aim is to compute and construct an optimal assignment matrix P.
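A small NumPy sketch of reading matches out of such an assignment matrix follows. The mutual-best rule and the confidence threshold are illustrative assumptions; the toy matrix mirrors the Figure 5 example, where the third feature of image B remains unmatched.

```python
import numpy as np

def matches_from_assignment(P, threshold=0.2):
    """Keep a pair (i, j) only if it is the best entry in both its row and its
    column of the assignment matrix P and exceeds a confidence threshold."""
    row_best = P.argmax(axis=1)     # best target feature for each reference feature
    col_best = P.argmax(axis=0)     # best reference feature for each target feature
    return [(i, j) for i, j in enumerate(row_best)
            if col_best[j] == i and P[i, j] > threshold]

P = np.array([[0.1, 0.6, 0.1, 0.2],
              [0.5, 0.2, 0.1, 0.2],
              [0.2, 0.1, 0.1, 0.6]])
print(matches_from_assignment(P))   # [(0, 1), (1, 0), (2, 3)]
```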
(2) Sinkhorn combined with the RANSAC algorithm to improve accuracy
In the matching section, the inner product of $f_i^A$ and $f_j^B$, obtained through GNN aggregation, is first calculated to yield the score $S_{ij}$, as shown in Eq (7).
$S_{ij} = \langle f_i^A, f_j^B \rangle, \quad \forall (i,j) \in A \times B$ | (7) |
Moreover, a specialized "unmatched" channel is introduced as an additional column and row of the score matrix $S$ to create an augmented matrix $\bar{S}$. This addition addresses instances where no corresponding feature point exists, serving as a mechanism to eliminate erroneous matches.
Feature points from the reference image A are either mapped to corresponding feature points in the target image B or relegated to a designated "unmatched" channel. Under this framework, each "unmatched" is associated with N or M potential matches. Accordingly, the constraints imposed on the assignment matrix are articulated in Eq (8) and (9).
$\bar{P} \mathbf{1}_{N+1} = a, \quad \bar{P}^{\top} \mathbf{1}_{M+1} = b$ | (8) |
$a = \begin{bmatrix} \mathbf{1}_M^{\top} & N \end{bmatrix}^{\top}, \quad b = \begin{bmatrix} \mathbf{1}_N^{\top} & M \end{bmatrix}^{\top}$ | (9) |
The vector $a$ denotes the anticipated count of feature matches from the reference image A, including its dedicated "unmatched" channel. Conventionally, each feature point within image A aligns with a single corresponding point in target image B. However, the feature points that fall into the "unmatched" channel of image A may potentially align with any feature point in image B, thereby introducing $N$ potential matches. Consequently, we have $a = [\mathbf{1}_M^{\top} \ N]^{\top}$. The same reasoning applies to $b$.
As delineated in Algorithm 1, we leverage an integrative approach utilizing both the Sinkhorn and RANSAC algorithms to maximize our scoring metric. The Sinkhorn algorithm is traditionally deployed for optimal transport problems; here, Eqs (8) and (9) act as the marginal constraints of that transport problem. While $\bar{S}$ in classical optimal transport serves as a cost matrix, in this context it represents the cosine similarity between matching descriptors. Consequently, the objective diverges from minimizing cost to maximizing descriptor similarity, i.e., maximizing the aggregate score $\sum_{i,j} S_{i,j} P_{i,j}$. The parameters are set as follows: a regularization term $\lambda$ of 1, confidence $\xi$ of 0.995, error threshold $\iota$ of 10, inlier proportion $\omega$ and a minimum sample count $m$ of 4 for computing the model $H$. By purging outliers, the method achieves a marked improvement in registration precision and minimizes errors.
Algorithm 1 Integration of the Sinkhorn with the RANSAC |
Inputs: Cosine similarity matrix of matching descriptors ¯S. The length and width of the matrix n and m, anticipated count of feature matches a and b, regularization term λ, confidence ξ, error threshold ι |
Output: Feature point matching pairs removing external points P
1: Initialize assignment matrix: $\bar{P} = \exp(\lambda \bar{S})$;
2: while $\bar{P}$ does not converge do // determine whether the Sinkhorn algorithm has converged
3:   for $i \to m$ do
4:     $\bar{P}_{ij} \leftarrow \bar{P}_{ij} / \sum_{j=0}^{m} \bar{P}_{ij} \times a_i$;
5:   for $j \to n$ do
6:     $\bar{P}_{ij} \leftarrow \bar{P}_{ij} / \sum_{i=0}^{n} \bar{P}_{ij} \times b_j$;
7: end while
8: while the number of iterations is less than $K$ do // RANSAC removes outliers
9:   $\bar{M}_{ij} = \mathrm{random}(\bar{P}_{i0}, \bar{P}_{0j})_m$;
10:  $H = \mathrm{FindHomography}(\bar{M}_{ij})$;
11:  $\mathrm{Error} = \bar{P}_{i0} \times H - \bar{P}_{0j}$;
12:  $K = \log(1-\xi) / \log(1-\omega^m)$;
13:  if $\mathrm{Error} < \iota$ then // when the error is less than the set threshold, stop
14:    break;
15:  end if
16: end while
17: return P
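For reference, a simplified dense NumPy/OpenCV sketch of the two stages is given below. Unlike Algorithm 1, it runs a single RANSAC pass on the matches extracted after Sinkhorn rather than purging outliers inside each iteration, and the helper names, iteration count and normalization details are assumptions.

```python
import numpy as np
import cv2

def sinkhorn(S_bar, a, b, lam=1.0, iters=100):
    """Sinkhorn normalization of the augmented score matrix S_bar so that its
    row/column sums approach the expected counts a and b (Eqs 8-9)."""
    P = np.exp(lam * S_bar)
    a = a.reshape(-1, 1)                                     # expected row sums
    b = b.reshape(1, -1)                                     # expected column sums
    for _ in range(iters):
        P *= a / P.sum(axis=1, keepdims=True).clip(1e-9)     # enforce row constraint
        P *= b / P.sum(axis=0, keepdims=True).clip(1e-9)     # enforce column constraint
    return P

def ransac_filter(kpts_a, kpts_b, matches, thresh=10.0):
    """Estimate a homography with RANSAC and drop outlier matches."""
    src = np.float32([kpts_a[i] for i, _ in matches]).reshape(-1, 1, 2)
    dst = np.float32([kpts_b[j] for _, j in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, thresh)
    if H is None:
        return None, []
    return H, [m for m, keep in zip(matches, mask.ravel()) if keep]
```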
Standard approaches to panoramic image stitching usually require that the images align in a horizontal or vertical sequence. However, intraoral endoscopic images inherently follow the curvature of the dental arch, necessitating specialized handling during the image fusion stage. As depicted in Figure 7, we have formulated a wavelet transform and weighted fusion algorithm tailored to the dental arch arrangement of intraoral endoscopic images, which alleviates the suboptimal or incorrect stitching that arises when the images are not horizontally or vertically aligned.
Let $I_s$ stand for the collection of images awaiting stitching, and $H_{ij}$ signify the homography matrix relating the source image $I_i$ to its target image $I_j$. In the preprocessing phase, a binary identifier $T \in \{0, 1\}$ is assigned to each image; specifically, $T_i$ serves as the identifier for image $I_i$.
As delineated in Algorithm 2, the initial procedure is to obtain the source image's bias matrix. The technique involves transforming the corner points of the source image through multiplication with the homography matrix. Following this transformation, the smallest values of the x and y coordinates of these transformed corners serve as the bias offset. The resultant bias matrix, termed biasmatrix, is elaborated upon in Eqs (10) and (11).
Algorithm 2 Wavelet transform and weighted fusion algorithm based on dental arch arrangement intraoral endoscopic images |
Inputs: Images set I1,I2....In, Source image Ii and target image Ij, mutually matched feature points Pi,Pj and position markers Ti and Tj, weighted matrix width θ |
Output: Stitching result image Iresult
1: bias = 0;
2: for $i = 1 \to n-1$ do
3:   $j = i + 1$;
4:   if $T_i == T_j == 1$ then // determine whether the pair belongs to the left half jaw
5:     $H_{ij} = \mathrm{FindHomography}(P_i, P_j)$;
6:     $\mathrm{bias} = |\min(x_i, y_i)|$;
7:     $\mathrm{biasmatrix} = [[1, 0, \mathrm{bias}[0]], [0, 1, \mathrm{bias}[1]], [0, 0, 1]]$; // build bias matrix
8:     $I_{left} = \mathrm{biasmatrix} \ast H_{ij} \ast I_i$;
9:     $h_j, w_j = I_j.\mathrm{shape}()$;
10:    $I_{left}[\mathrm{bias}[0]:\mathrm{bias}[0]+h_j][\mathrm{bias}[1]:\mathrm{bias}[1]+w_j] = I_j$;
11:    waveletfusion(); // perform wavelet fusion
12:    $H_{weight} = \mathrm{getweightmatrix}(\theta, \mathrm{bias})$; // create weighted matrix
13:    $I_{left} = I_{left} \ast H_{weight} + I_j \ast (1 - H_{weight})$; // weighted fusion to process seams
14:  end if
15:  if $T_i == T_j == 0$ then
16:    obtain $I_{right}$ in the same way
17:  end if
18: end for
19: Similarly, merge $I_{left}$ and $I_{right}$ into $I_{result}$
20: return $I_{result}$
$\mathrm{bias} = \min\big((x_i, y_i) H\big)$ | (10) |
$\mathrm{biasmatrix} = \begin{bmatrix} 1 & 0 & \mathrm{bias}[0] \\ 0 & 1 & \mathrm{bias}[1] \\ 0 & 0 & 1 \end{bmatrix}$ | (11) |
Here, $(x_i, y_i)$ refers to the coordinates of the four corners of the source image $I_i$, where $i$ is an integer between 0 and 3. The final homography matrix for the source image is obtained by matrix multiplication between the bias matrix and the original homography matrix, giving the transformed coordinates $(x_{out}, y_{out}) = \mathrm{biasmatrix} \cdot H_{ij} \cdot I_i$ for the resultant source image $I_{out}$.
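A NumPy/OpenCV sketch of this bias computation is shown below; the absolute-value handling of negative offsets and the function name are assumptions consistent with Algorithm 2, and the canvas size in the usage comment is a placeholder.

```python
import numpy as np
import cv2

def bias_matrix(src_img, H):
    """Warp the four corners of the source image with H and build the translation
    (bias) matrix of Eq (11) that keeps the warped image on the output canvas."""
    h, w = src_img.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    bias = np.abs(np.minimum(warped.min(axis=0), 0.0))   # shift only if coordinates go negative
    B = np.array([[1.0, 0.0, bias[0]],
                  [0.0, 1.0, bias[1]],
                  [0.0, 0.0, 1.0]])
    return B, bias

# The warped source is then obtained as cv2.warpPerspective(src_img, B @ H, (out_w, out_h))
# for a suitably sized output canvas (out_w, out_h).
```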
After the transformation, we use the image labels $T_i$ and $T_j$ to ascertain whether the image pair to be stitched belongs to the left or right half jaw. Concretely, when $T_i = T_j = 1$, the pair corresponds to the left half jaw and its area of overlap lies in the upper-left corner of the target image. Based on the coordinates post-transformation, image pairs are meticulously aligned. The aligned images are then fused using a wavelet transformation fusion technique. To ensure the seamlessness of the stitched images, a weighted fusion strategy featuring a fade-in, fade-out weighted matrix is applied.
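The fade-in, fade-out blending can be sketched as below for a simple vertical seam. The paper's weight matrix follows the actual overlap geometry of the arch-aligned pairs, and the wavelet-domain fusion step is omitted here, so this is only a one-dimensional illustration with the weighted-matrix width θ as a parameter.

```python
import numpy as np

def feather_blend(warped_src, placed_dst, seam_x, width=50):
    """Blend two aligned images across a vertical seam of the given width:
    the source weight fades from 1 to 0 over the transition band."""
    h, w = warped_src.shape[:2]
    weight = np.ones((h, w), dtype=np.float32)
    x0 = max(seam_x - width // 2, 0)
    x1 = min(seam_x + width // 2, w)
    weight[:, x0:x1] = np.linspace(1.0, 0.0, x1 - x0)   # fade-out ramp for the source
    weight[:, x1:] = 0.0
    weight = weight[..., None]                           # broadcast over color channels
    return warped_src * weight + placed_dst * (1.0 - weight)
```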
We conducted the experiments on a hardware setup featuring a 64-bit Windows 10 operating system and an AMD Ryzen 9 5900HX with Radeon Graphics, clocked at 3.30 GHz. The code was developed in PyCharm integrated with Anaconda, running in a Python 3.6 environment, and leverages libraries such as PyTorch and OpenCV-Python. The set parameters are as follows: a SuperPoint detection threshold of 0.007, attention iteration count L fixed at 16, Time-Weighting with default settings, a RANSAC regularization term λ set to 1, a confidence value ξ of 0.995, an error cutoff ι of 10, a minimum sample size m of 4 and a weighted matrix width θ of 50. The hyperparameter settings employed in the proposed method are as follows: the learning rate is set at 0.0001, the batch size at 64 and the number of epochs at 150. The model features 102 layers in the hidden layer, with ReLU as the activation function. The Sinkhorn iteration count is set to 150. For the convolutional layer, the kernel size is configured to (1, ) and the stride to (1, ). Batch normalization momentum is set at 0.1, with epsilon at 0.00001, and Stochastic Gradient Descent (SGD) is used as the optimizer. Regularization employs Dropout with a dropout rate of 38.1%, and the weights are initialized randomly. These parameter settings are based on recommended values from relevant literature and best practices in existing research. Parameter settings for comparison methods follow either default configurations or recommendations cited in relevant studies.
In this paper, we establish a specialized intraoral camera dataset, named ICPA, for capturing localized image samples of lower jaw teeth using an A3M model intraoral endoscope. As illustrated in Figure 8, the endoscope lens employed in this dataset features a diameter of 1.2 cm and a viewing angle of 60 degrees. To maximize the richness of the captured image content, a focal length of approximately 1.5 cm is maintained. Utilizing a stationary camera setup, the ICPA dataset captures images of both the upper and lower jaws, accumulating a total of roughly 400 adjacent image pairs across 16 mouths, all collected from real oral environments. A set of 10 half-jaw images yields 9 pairs of adjacent images, with individual image dimensions of 480×640 pixels. The number of feature points per image in the dataset ranges from approximately 400 to 600, and the number of matching feature point pairs ranges from approximately 60 to 180. The sample statistics of the dataset are shown in Table 1. The images encompass common oral features such as crowded dentition, mandibular deviation, sparse tooth arrangement and dental malformations like prognathism, as well as prevalent oral diseases including dental caries, plaque, mouth ulcers and gingival bleeding. During the model training phase, we augmented the 400-pair dataset using techniques like rotation, brightness adjustment and random noise addition, expanding the data to approximately 1600 pairs. Concerning the image overlap rate, a lower overlap rate can challenge the algorithm's ability to precisely match feature points, impacting the stitching's accuracy and overall quality. Conversely, a higher overlap rate, while providing more matching points and enhancing stitching precision, also increases data collection time and cost. In practical dental diagnostics, owing to the necessity of processing a large volume of cases rapidly, diagnostic images typically have a lower overlap rate. To emulate this reality, our study maintained an average of 10 images per half-jaw, with a minimum overlap area of 25% between image pairs and an average overlap rate of about 35%. This setup ensures that the dataset accurately reflects the image processing requirements of real clinical diagnosis and enhances the feasibility and applicability of our research findings in future practical diagnostic applications.
Average number of feature points in the data set | The average number of feature point pairs that match each other | Average number of unmatched feature point pairs | Proportion of feature point pairs that match each other |
486 | 160 | 326 | 32% |
Moreover, excessive exposure in images compromises the clarity of edges and finer details. As depicted in Figure 9, the histogram of a well-exposed dental image maintains a balanced distribution, whereas in an overexposed version, the predominance of high-luminance pixels skews the grayscale histogram to the right. To rectify this imbalance, we implement the ACE (Automatic Color Equalization) algorithm to harmonize the color profile of the dataset. This method not only adjusts the brightness, hue and contrast of images but also takes into account local and nonlinear characteristics, aligning with the Gray World Theory and White Patch Assumption frameworks. A comparative analysis of the dataset pre- and post-ACE algorithm application is presented in Figure 10.
As delineated in Eq (12) to (14), the principal metrics for evaluating feature matching encompass the Matching Score (Ms), Precision (P) and Recall (R). In this context, n and m represent the number of feature points in the two images to be matched. The min(n,m) denotes the smaller value between n and m. TP (True Positives) refers to the feature point pairs that are correctly matched. Conversely, FP (False Positives) signifies the feature point pairs that are incorrectly matched. Lastly, FN (False Negatives) pertains to the feature point pairs that should have been matched but were not. TN (True Negatives) represents pairs of feature points that are considered not to match each other.
$M_s = \dfrac{TP + FP}{\min(n, m)}$ | (12) |
$P = \dfrac{TP}{TP + FP}$ | (13) |
$R = \dfrac{TP}{TP + FN}$ | (14) |
Correctly matched point pairs are identified based on annotated feature-matching pairs in the dataset, which are used to ascertain the ground-truth homography matrix $H$. Utilizing $H$, feature points are projected into the coordinate space of the corresponding image, where distances to the other set of feature points are computed. A pair of feature points with the minimum distance is deemed a correct match. Given the potential for errors in manual annotation, a distance threshold $\gamma$ is established, set at 5 in this study, constraining the distance between correctly matched feature point pairs to within 5 pixels.
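This evaluation rule can be sketched as follows: predicted matches are verified by projecting image A's keypoints with the ground-truth homography and accepting a pair when the projection lands within γ pixels of the matched point in B; the resulting TP count then feeds Eqs (12)-(14). Function and variable names are illustrative.

```python
import numpy as np
import cv2

def count_true_positives(kpts_a, kpts_b, pred_matches, H_gt, gamma=5.0):
    """Count predicted pairs whose projected distance under the ground-truth
    homography is within gamma pixels (gamma = 5 in this study)."""
    pts = np.float32([kpts_a[i] for i, _ in pred_matches]).reshape(-1, 1, 2)
    proj = cv2.perspectiveTransform(pts, H_gt).reshape(-1, 2)
    tp = 0
    for (_, j), p in zip(pred_matches, proj):
        if np.linalg.norm(p - np.float32(kpts_b[j])) <= gamma:
            tp += 1
    return tp

# With FP = len(pred_matches) - TP and FN taken from the annotations,
# Ms, P and R follow Eqs (12)-(14), e.g. precision = tp / (tp + fp).
```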
We evaluate five comparative algorithms: ORB, GMS [26], PointCN [27], OANet [28] and SuperGlue. Experimental trials were undertaken on a test set, and the average results were computed. The data in Table 2 illustrate that the methodology proposed herein outstripped competing approaches, with improvements of 4.5%, 6.3% and 10.6% in the Ms, P and R metrics, respectively. The novel approach amalgamates self-attention with cross-attention to elevate feature point matching specificity and escalate the chances of match success. Additionally, outlier elimination is achieved in each Sinkhorn iteration through the application of the RANSAC algorithm, ensuring the accuracy of feature point matching and consequently yielding a higher recall rate vis-à-vis other methods.
When considering matching methodologies that leverage homography estimation, the yardstick for assessment is the Frobenius norm. The Frobenius norm of an arbitrary matrix A can be computed as illustrated in Eq (15).
$\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}$ | (15) |
Herein, $\|A\|_F$ stands for the Frobenius norm of matrix $A$. The variables $m$ and $n$ correspond to the number of rows and columns in the matrix, respectively, while $a_{ij}$ denotes the element located at the $i$-th row and $j$-th column. In the evaluation of discrepancies between two homography matrices, the Frobenius norm serves as a widely accepted metric. The experimental protocol involves computing the Label homography matrix from labeled feature point pairs and then calculating the absolute difference between its Frobenius norm and that of the estimated homography matrix. A lower value suggests a higher degree of similarity between the estimated and Label homography matrices.
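Computing this metric is a one-liner with NumPy, shown here for completeness under the protocol just described:

```python
import numpy as np

def frobenius_difference(H_est, H_label):
    """Absolute difference between the Frobenius norms of the estimated and
    Label homography matrices (Eq 15); smaller means closer to the Label."""
    return abs(np.linalg.norm(H_est, ord='fro') - np.linalg.norm(H_label, ord='fro'))
```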
The methods employed for comparative analysis are HomographyNet [29], VFISNet and UDIS. As evidenced by Table 2, our technique demonstrates a 31% reduction in the Frobenius norm difference, thereby drawing closer to the Label. It is noteworthy that higher accuracy and recall rates, under the condition of a consistent Label, lead to a homography matrix that is increasingly congruent with the Label's homography matrix. Hence, the homography matrix derived from our proposed method exhibits closer alignment with the Label.
| Method type | Methods | Ms | P (%) | R (%) | Difference in absolute value of $\|A\|_F$ |
| Matching based on feature points | ORB | 11.9 | 10.8 | 2.6 | / |
| | GMS | 10.7 | 33.0 | 7.8 | / |
| | PointCN | 19.5 | 56.2 | 23.6 | / |
| | OANet | 23.8 | 65.0 | 40.9 | / |
| | SuperGlue | 31.4 | 78.3 | 67.8 | / |
| | Ours | 35.9 | 84.6 | 78.4 | / |
| Based on homography matrix estimation | HomographyNet | / | / | / | 113.68 |
| | VFISNet | / | / | / | 84.47 |
| | UDIS | / | / | / | 72.59 |
| | Ours | / | / | / | 49.60 |
Considering the imbalance present in the dataset, this method incorporates the G-mean and Precision-Recall Area Under Curve (PR-AUC) metrics to provide a more nuanced evaluation. The G-mean is defined in Eq (16). The PR-AUC represents the area under the Precision-Recall (PR) curve, which illustrates the relationship between Precision and Recall at various thresholds. Compared with other metrics, the PR-AUC serves as a more valuable performance indicator, particularly when dealing with imbalanced datasets. The specific indicators are shown in Table 3.
$G\text{-}mean = \sqrt{\dfrac{TP}{TP+FN} \times \dfrac{TN}{FP+TN}}$ | (16) |
Methods | G-mean | PR-AUC |
PointCN | 0.445 | 0.484 |
OANet | 0.581 | 0.560 |
SuperGlue | 0.749 | 0.766 |
Ours | 0.832 | 0.813 |
To assess the comparative advantages of our proposed methodology over existing techniques, we conducted experiments using distinctively featured images sourced from the ICPA database, as delineated in Figure 11. The experimental setup comprises three specific groups: The left molar region (inclusive of the third, second and first molars), the right molar region and the anterior incisor region (which includes lateral incisors, central incisors and canines).
Figure 12 provides a comparative analysis of matching outcomes across diverse regions between our proposed technique and extant algorithms. In the inaugural column, algorithms like GMS, PointCN and OANet are observed to perform matches based on the reflections generated by saliva. Our method efficaciously eradicates a substantial number of such feature-point pairs prone to reflective matching. Given that soft tissues such as saliva and the tongue are susceptible to morphological changes during image capture, matches based on reflections usually exhibit diminished confidence levels. In the subsequent two columns, apparent mismatches are also discernible in other techniques. Hence, our approach is proficient at identifying a greater number of credible feature-point matches while effectively filtering out erroneous ones, thereby minimizing error rates.
To authenticate the algorithm's utility, we conducted experiments on three data sets from Figure 11, incorporating a total of 10 variations that encompass translational shifts, rotational adjustments, perspective transformations and variations in overlap rates. Figure 13 illustrates the test dataset, and Figure 14 delineates the matching precision of our proposed method in comparison with existing algorithms like GMS, PointCN, OANet and SuperGlue across mandibular teeth images taken at 10 divergent angles. Remarkably, our method outperforms the other algorithms, achieving an average accuracy rate exceeding 80%.
In Figure 14, an analysis of data from Groups 2 to 5 shows that all examined algorithms maintain a consistent level of matching accuracy under translational variations. However, the data from Groups 6 and 7 reveal a marked reduction in performance for GMS and PointCN when subjected to rotational transformations. Furthermore, in Groups 8 through 10, all three algorithms—GMS, PointCN and OANet—suffer from decreased accuracy. SuperGlue fares poorly in feature-sparse regions like the left (or right) molars but exhibits stable performance in feature-rich zones like the anterior incisors. Remarkably, the method proposed in this study demonstrates stable matching accuracy across different feature compositions.
In the phase dedicated to image fusion, we address a core limitation: Conventional panoramic image stitching demands either a vertical or horizontal positional relationship between images, a constraint not well-suited for intraoral endoscopic image pairs. The principal metrics evaluated are the mean gradient and standard deviation. Higher values in these metrics translate to better preservation of image details and smoother, more natural transitions in the fused image. As evidenced in Table 4, the methodology we propose achieves optimal levels in both these key metrics. This approach employs wavelet transformations for the fusion process and uses a weighted, fade-in, fade-out technique to seamlessly blend image seams. Consequently, relative to traditional approaches, our method yields superior fusion outcomes, preserving a greater extent of image details and facilitating smoother, more natural transitions.
Fusion methods | Average gradient | Standard deviation |
Based on maximum value | 6.65 | 58.71 |
Based on minimum value | 4.73 | 49.15 |
Average weighted | 7.45 | 50.29 |
Laplacian pyramid | 9.332 | 51.36 |
Wavelet transform fusion | 9.66 | 53.75 |
Ours | 11.23 | 58.78 |
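For reference, the two Table 4 metrics can be computed as below; the finite-difference form of the average gradient is one common definition and may differ in constants from the exact formula used in the paper.

```python
import numpy as np

def fusion_quality(img_gray):
    """Average gradient and standard deviation of a fused grayscale image;
    higher values indicate better detail preservation and contrast."""
    img = img_gray.astype(np.float64)
    dx = np.diff(img, axis=1)[:-1, :]        # horizontal differences
    dy = np.diff(img, axis=0)[:, :-1]        # vertical differences
    avg_gradient = np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))
    return avg_gradient, img.std()
```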
In the ablation experiment, we chiefly examine the impact on the quality of image stitching involving either two or multiple images. The reliability of feature point pairs is gauged by their confidence scores, with higher scores indicating a greater likelihood of correct matching. These confidence levels are color-coded, ranging from blue for high confidence to red for low, with intermediary values represented by green and yellow. To enhance visual clarity, a lower mean confidence score will result in a reduced peak value, causing the overall image to take on more yellow or red tones. As indicated in Figure 15, the removal of the Time-Weighting-enhanced self-attention mechanism precipitates a decline in the ability to effectively aggregate feature points, and a corresponding reduction in matched feature point pairs. With the implementation of Time-Weighting, there is an approximate 16% boost in the quantity of feature point pairs. Conversely, the absence of the RANSAC algorithm leads to generally lower confidence scores for feature point pairs, manifesting in conspicuously red and yellow connecting lines.
As illustrated in Figure 16, we employ the same methodology for representing confidence levels in the context of multi-image stitching. Our results reveal that the proposed approach significantly improves the stitching outcome, augmenting the quantity of feature point pairs by an estimated 20%. While integrating the RANSAC method leads to a marginal reduction in the number of feature point pairs, it simultaneously enhances the overall confidence level of the matches.
As depicted in Figure 17, we present examples of semi-jaw data captured from an intraoral endoscope, consisting of three distinct dental photo sets: 10 images in set A, 8 in set B and 12 in set C. The image-capturing process is prone to variations in perspective due to camera shake when held by hand, leading to inconsistent overlap areas and positions between images. Current stitching algorithms generally focus on dual-image stitching, progressing step-by-step to create a panoramic image through iterative dual-image combinations. However, this approach is fraught with challenges, including the accumulation of deformation errors during multi-image stitching, resulting in incomplete oral panoramic imagery in some instances.
As delineated in Figure 18, we employed a comparative analysis featuring ORB, GMS, PointCN, OANet, SuperGlue and our proposed algorithm. Blue rectangles highlight areas of misalignment and ghosting artifacts; green rectangles point out conspicuous distortions; while red rectangles signify incorrect stitching outcomes. ORB, GMS and PointCN tend to yield flawed results, including misalignments that prevent the creation of a complete stitched image. OANet and SuperGlue, on the other hand, do produce panoramic images but suffer from varying degrees of distortion and errors. According to the metrics compiled in Table 5, our proposed approach achieves optimal results in terms of both average feature matching and accuracy. In contrast, both ORB and GMS fail to correctly stitch over half of the total image set, and PointCN often yields incomplete panoramic images due to cumulative errors. OANet and SuperGlue manage to generate panoramas, albeit with diminished accuracy.
| Methods | Incorrect stitching | Ghosting artifacts | Conspicuous distortions | Generate panorama | Successfully stitched images (%) | Average feature matching pairs | Average accuracy (%) |
| ORB | √ | √ | √ | × | 20 | 16.3 | 10.1 |
| GMS | √ | √ | √ | × | 26.6 | 26.4 | 23.1 |
| PointCN | √ | √ | √ | × | 43.3 | 43.7 | 59.7 |
| OANet | | √ | √ | √ | 100 | 62.2 | 63.1 |
| SuperGlue | | | √ | √ | 100 | 89.5 | 79.3 |
| Ours | | | | √ | 100 | 108.4 | 86.2 |
As depicted in Figure 19, the proposed method begins the stitching process with images labeled starting with "a", representing the left side of the dental arch. The approach employs a two-sided sequential stitching strategy: First from one side and then the other, culminating in a fusion of these left and right stitched images. The end result accomplishes a comprehensive semi-jaw panoramic view of the teeth, meticulously preserving the content details from the original images. Moreover, each stitched image exhibits a distortion level that is within an acceptable range, devoid of discernible ghosting or artifacts.
In this paper, we study the stitching problem of intraoral endoscopic images and explore the impact of Time-Weighting combined with the attention mechanism on the number of feature point matches. In addition, the feature point pair matching mechanism combining Sinkhorn and RANSAC is clarified. To accomplish seamless stitching of intraoral endoscopic visuals, a wavelet transform and weighted fusion algorithm based on the dental arch alignment of intraoral endoscopic images was designed. Experimental results show that the integration of Time-Weighting and attention mechanisms substantially augments the volume of feature point matches, whereas the accuracy of feature point pair matching is improved by the combination of Sinkhorn and RANSAC. The algorithm this paper introduces excels in both quantitative metrics and visual aesthetics.
The proposed method currently has the following limitations: 1) Due to the use of point light sources in intraoral endoscopes, the images captured often exhibit uneven brightness, with higher luminance at the center and lower at the edges. This results in an unnatural brightness transition at the seams post-stitching. 2) The process of stitching involves distortion and stretching of the source images, leading to irregular boundaries in the final composite, which are not in line with typical visual perceptions and display modes of imaging devices.
Future research directions include: 1) Developing a global brightness optimization method [30] specifically for panoramic image stitching to ensure a natural luminance transition and mitigate abrupt brightness changes. 2) Devising a method to rectify the stitched panoramic intraoral endoscopic images into a rectangular format, meeting the visual expectations of humans and the display formats of imaging devices.
Furthermore, the dataset constructed for the proposed method is currently not extensive and fails to cover all possible oral features, such as periodontitis and dental cancer. Therefore, future work involves expanding the dataset and collaborating with dental hospitals and other medical institutions to increase the diversity and quantity of samples. This will ensure more comprehensive coverage of potential features in intraoral endoscopic images, enhancing the representativeness of the dataset and the research.
The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by the Shaanxi Natural Science Fundamental Research Program Project (No. 2022JM-508), and in part by the National Natural Science Foundation of China (Grant No. 62101432).
The authors declare there are no conflicts of interest.
[62] | [ X. Liu, Stability results for impulsive differential systems with applications to population growth models, Dynamics and Stability of Systems, 9 (1994), 163-174. |
[63] | [ X. Liu and G. Ballinger, Uniform asymptotic stability of impulsive delay differential equations, Computers and Mathematics with Applications, 41 (2001), 903-915. |
[64] | [ X. Liu, X. Shen, Y. Zhang and Q. Wang, Stability criteria for impulsive systems with time delay and unstable system matrices, IEEE Transactions on Circuits ond Systems-Ⅰ: Regular Papers, 54 (2007), 2288-2298. |
[65] | [ X. Liu and Q. Wang, On stability in terms of two measures for impulsive systems of functional differential equations, Journal of Mathematical Analysis and Applications, 326 (2007), 252-265. |
[66] | [ X. Liu and Q. Wang, The method of Lyapunov functionals and exponential stability of impulsive systems with time delay, Nonlinear Analysis, 66 (2007), 1465-1484. |
[67] | [ X. Liu, S. Zhong and X. Ding, Robust exponential stability of impulsive switched systems with switching delays: A Razumikhin approach, Communications in Nonlinear Science and Numerical Simulation, 17 (2012), 1805-1812. |
[68] | [ J. Lu, D. Ho and J. Cao, A unified synchronization criterion for impulsive dynamical networks, Automatica, 46 (2010), 1215-1221. |
[69] | [ Z. Luo and J. Shen, Stability results for impulsive functional differential equations with infinite delays, J. Comput. Appl. Math, 131 (2001), 55-64. |
[70] | [ Z. Luo and J. Shen, Stability of impulsive functional differential equations via the Lyapunov functional, Appl. Math. Lett, 22 (2009), 163-169. |
[71] | [ X. Lv and X. Li, Finite time stability and controller design for nonlinear impulsive sampleddata systems with applications, ISA Transactions, 70 (2017), 30-36. |
[72] | [ H. Miao, Z. Teng and C. Kang, Stability and Hopf bifurcation of an HIV infection model with saturation incidence and two delays, Discrete & Continuous Dynamical Systems-B, 22 (2017), 2365-2387. |
[73] | [ P. Naghshtabrizi, J. Hespanha and A. Teel, Exponential stability of impulsive systems with application to uncertain sampled-data systems, Systems & Control Letters, 57 (2008), 378-385. |
[74] | [ S. Nersesov and W. Haddad, Finite-time stabilization of nonlinear impulsive dynamical systems, Nonlinear Analysis: Hybrid Systems, 2 (2008), 832-845. |
[75] | [ E. Nikolopoulou, L. R. Johnson, D. Harris, J. D. Nagy, E. C. Stites and Y. Kuang, Tumour-immune dynamics with an immune checkpoint inhibitor, Letters in Biomathematics, 5 (2018), S137-S159. |
[76] | [ T. Nishikawa, Y. Lai and F. Hoppensteadt, Capacity of oscillatory associative-memory networks with error-free retrieval, Phys. Rev. Lett., 92 (2004), 108101. |
[77] | [ L. Pan and J. Cao, Exponential stability of impulsive stochastic functional differential equations, Journal of Mathematical Analysis and Applications, 382 (2011), 672-685. |
[78] | [ L. Pan and J. Cao, Robust stability for uncertain stochastic neural network with delay and impulses, Neurocomputing, 94 (2012), 102-110. |
[79] | [ G. Peralta and K. Kunisch, Interface stabilization of a parabolic-hyperbolic pde system with delay in the interaction, Discrete & Continuous Dynamical Systems-A, 38 (2018), 3055-3083. |
[80] | [ R. Rakkiyappan, R. Sivasamy and X. Li, Synchronization of identical and nonidentical memristor-based chaotic systems via active backstepping control technique, Circuits, Systems, and Signal Processing, 34 (2015), 763-778. |
[81] | [ R. Rakkiyappan, G. Velmurugan, X. Li and D. Regan, Global dissipativity of memristor-based complex-valued neural networks with time-varying delays, Neural Computing and Applications, 27 (2016), 629-649. |
[82] | [ A. Ruiz, D. Owens and S. Townley, Existence, learning, and replication of periodic motion in recurrent neural networks, IEEE Transactions on Neural Networks and Learning Systems, 9 (1998), 651-661. |
[83] | [ E. Rutter and Y. Kuang, Global dynamics of a model of joint hormone treatment with dendritic cell vaccine for prostate cancer, DCDS-B, 22 (2017), 1001-1021. |
[84] | [ J. Shen, J. Li and Q. Wang, Boundedness and periodicity in impulsive ordinary and functional differential equations, Nonlinear Analysis, 65 (2006), 1986-2002. |
[85] | [ J. Shen, Z. Luo and X. Liu, Impulsive stabilization of functional differential equations via liapunov functionals, Journal of Mathematical Analysis and Applications, 240 (1999), 1-15. |
[86] | [ W. Shen and Z. Shen, Transition fronts in nonlocal Fisher-KPP equations in time heterogeneous media, Communications on Pure & Applied Analysis, 15 (2016), 1193-1213. |
[87] | [ X. Song and J. H. Park, Linear optimal estimation for discrete-time measurement-delay systems with multi-channel multiplicative noise, IEEE Transactions on Circuis and Systems Ⅱ-Express Brief, 64 (2017), 156-160. |
[88] | [ X. Song, J. H. Park and X. Yan, Linear estimation for measurement-delay systems with periodic coefficients and multiplicative noise, IEEE Transactions on Automatic Control, 62 (2017), 4124-4130. |
[89] | [ I. Stamova, Stability Analysis of Impulsive Functional Differential Equations, Walter de Gruyter, Berlin, 2009. |
[90] | [ I. Stamova, Vector Lyapunov functions for practical stability of nonlinear impulsive functional differential equations, Journal of Mathematical Analysis and Applications, 325 (2007), 612-623. |
[91] | [ I. Stamova, T. Stamov and X. Li, Global exponential stability of a class of impulsive cellular neural networks with supremums, Internat. J. Adapt. Control Signal Process, 28 (2014), 1227-1239. |
[92] | [ I. Stamova, T. Stamov and N. Simeonova, Impulsive effects on the global exponential stability of neural network models with supremums, European Journal of Control, 20 (2014), 199-206. |
[93] | [ X. Tan, J. Cao and X. Li, Consensus of Leader-Following Multiagent Systems: A Distributed Event-Triggered Impulsive Control Strategy, IEEE Transactions on Cybernetics, 99 (2018), 1-10. |
[94] | [ X. Tan, J. Cao and X. Li, Leader-following mean square consensus of stochastic multi-agent systems with input delay via event-triggered control, IET Control Theory & Applications, 12(2018), 299-309. |
[95] | [ J. Tseng, Global asymptotic dynamics of a class of nonlinearly coupled neural networks with delays, Discrete & Continuous Dynamical Systems-A, 33 (2013), 4693-4729. |
[96] | [ B. Vadivoo, R. Ramachandran, J. Cao, H. Zhang and X. Li, Controllability analysis of nonlinear neutral-type fractional-order differential systems with state delay and impulsive effects, International Journal of Control, Automation and Systems, 16 (2018), 659-669. |
[97] | [ D. Wang, Emergent synchrony in locally coupled neural oscillators, IEEE Transactions on Neural Networks and Learning Systems, 6 (1995), 941-948. |
[98] | [ H. Wang, S. Duan, T. Huang and J. Tan, Synchronization of memristive delayed neural networks via hybrid impulsive control, Neurocomputing, 267 (2017), 615-623. |
[99] | [ H. Wang, S. Duan, C. Li, L. Wang and T. Huang, Stability of impulsive delayed linear differential systems with delayed impulses, Journal of the Franklin Institute, 352 (2015), 3044-3068. |
[100] | [ H. Wang, J. Li and Y. Kuang, Enhanced modeling of the glucose-insulin system and its applications in insulin therapies, J. Biological Dynamics, 3 (2009), 22-38. |
[101] | [ L. Wang, M. Yu and P. Niu, Periodic solution and almost periodic solution of impulsive Lasota-Wazewska model with multiple time-varying delays, Computers and Mathematics with Applications, 64 (2012), 2383-2394. |
[102] | [ Q. Wang and X. Liu, Exponential stability for impulsive delay differential equations by Razumikhin method, Journal of Mathematical Analysis and Applications, 309 (2005), 462-473. |
[103] | [ Q. Wang and X. Liu, Impulsive stabilization of delay differential systems via the LyapunovRazumikhin method, Applied Mathematics Letters, 20 (2007), 839-845. |
[104] | [ Q. Wang and X. Liu, Stability criteria of a class of nonlinear impulsive switching systems with time-varying delays, Journal of the Franklin Institute, 349 (2012), 1030-1047. |
[105] | [ Q. Wang and X. Liu, The method of Lyapunov functionals and exponential stability of impulsive systems with time delay, Nonlinear Analysis, 66 (2007), 1465-1484. |
[106] | [ Y. Wang, X. Shi, Z. Zuo, M. Chen and Y. Shao, On finite-time stability for nonlinear impulsive switched systems, Nonlinear Analysis: Real World Applications, 14 (2013), 807-814. |
[107] | [ N. Wouw, P. Naghshtabrizi, M. Cloosterman and J. Hespanha, Tracking control for sampleddata systems with uncertain timevarying sampling intervals and delays, International Journal of Robust and Nonlinear Control, 20 (2010), 387-411. |
[108] | [ Q. Wu, J. Zhou and L. Xiang, Global exponential stability of impulsive differential equations with any time delays, Applied Mathematics Letters, 23 (2010), 143-147. |
[109] | [ T. Yang, Impulsive Control Theory, Berlin, Germany: Springer, 2001. |
[110] | [ X. Yang, J. Cao and J. Lu, Stochastic synchronization of complex networks with nonidentical nodes via hybrid adaptive and impulsive control, IEEE Transactions On Circuits And Systems-Ⅰ: Regular Papers, 59 (2012), 371-384. |
[111] | [ X. Yang, X. Li and J. Cao, Robust finite-time stability of singular nonlinear systems with interval time-varying delay, Journal of Franklin Institute, 355 (2018), 1241-1258. |
[112] | [ X. Yang and J. Lu, Finite-time synchronization of coupled networks with Markovian topology and impulsive effects, IEEE Transaction on Automatic Control, 61 (2016), 2256-2261. |
[113] | [ X. Zeng, C. Li, T. Huang and X. He, Stability analysis of complex-valued impulsive systems with time delay, Applied Mathematics and Computation, 256 (2015), 75-82. |
[114] | [ W. Zhang, Y. Tang, J. Fang and X. Wu, Stability of delayed neural networks with timevarying impulses, Neural Networks, 36 (2012), 59-63. |
[115] | [ W. Zhang, Y. Tang, Q. Miao and W. Du, Exponential synchronization of coupled switched neural networks with mode-dependent impulsive effects, IEEE Transactions on Neural Networks and Learning Systems, 24 (2013), 1316-1326. |
[116] | [ X. Zhang and X. Li, Input-to-state stability of non-linear systems with distributed delayed impulses, IET Control Theory & Applications, 11 (2017), 81-89. |
[117] | [ X. Zhang, X. Li, J. Cao and F. Miaadi, Design of delay-dependent controllers for finite-time stabilization of delayed neural networks with uncertainty, Journal of Franklin Institute, 355 (2018), 5394-5413. |
[118] | [ X. Zhang, X. Lv and X. Li, Sampled-data based lag synchronization of chaotic delayed neural networks with impulsive control, Nonlinear Dynamics, 90 (2017), 2199-2207. |
[119] | [ Y. Zhang and J. Sun, Stability of impulsive functional differential equations, Nonlinear Analysis: Theory, Methods and Applications, 68 (2008), 3665-3678. |
[120] | [ Y. Zhang and J. Sun, Stability of impulsive infinite delay differential equations, Applied Mathematics Letters, 19 (2006), 1100-1106. |
[121] | [ Y. Zheng, H. Li, X. Ding and Y. Liu, Stabilization and set stabilization of delayed Boolean control networks based on trajectory stabilization, Journal of the Franklin Institute, 354 (2017), 7812-7827. |
[122] | [ Q. Zhu and X. Li, Exponential and almost sure exponential stability of stochastic fuzzy delayed Cohen-Grossberg neural networks, Fuzzy Sets and Systems, 203 (2012), 74-94. |
[123] | [ Q. Zhu, F. Xi and X. Li, Robust exponential stability of stochastically nonlinear jump systems with mixed time delays, Journal of Optimization Theory and Applications, 154 (2012), 154-174. |
[124] | [ M. Zou, A. Liu and Z. Zhang, Oscillation theorems for impulsive parabolic differential system of neutral type, Discrete & Continuous Dynamical Systems-B, 22 (2017), 2351-2363. |
Average number of feature points in the data set | Average number of matched feature point pairs | Average number of unmatched feature point pairs | Proportion of matched feature point pairs
486 | 160 | 326 | 32% |
Method type | Method | Ms | P (%) | R (%) | Difference in absolute value of ||A||_F
Matching based on feature points | ORB | 11.9 | 10.8 | 2.6 | /
 | GMS | 10.7 | 33.0 | 7.8 | /
 | PointCN | 19.5 | 56.2 | 23.6 | /
 | OANet | 23.8 | 65.0 | 40.9 | /
 | SuperGlue | 31.4 | 78.3 | 67.8 | /
 | Ours | 35.9 | 84.6 | 78.4 | /
Based on homography matrix estimation | HomographyNet | / | / | / | 113.68
 | VFISNet | / | / | / | 84.47
 | UDIS | / | / | / | 72.59
 | Ours | / | / | / | 49.60
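For context, P and R in the table above measure the fraction of predicted correspondences that are correct and the fraction of ground-truth correspondences that are recovered, while the last column compares an estimated homography with a reference one. The sketch below illustrates one common way to compute such quantities; the function names are illustrative, and the paper's exact definitions (in particular of the ||A||_F difference) may differ.

```python
import numpy as np

def match_precision_recall(predicted_pairs, ground_truth_pairs):
    # Precision: fraction of predicted correspondences that are correct.
    # Recall: fraction of ground-truth correspondences that are recovered.
    predicted = set(predicted_pairs)
    truth = set(ground_truth_pairs)
    tp = len(predicted & truth)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

def homography_frobenius_error(H_est, H_gt):
    # Normalize both 3x3 homographies so that H[2, 2] = 1, then take the
    # Frobenius norm of their difference as a scalar error (an assumption
    # about how the ||A||_F column is defined).
    H_est = np.asarray(H_est, dtype=np.float64)
    H_gt = np.asarray(H_gt, dtype=np.float64)
    return np.linalg.norm(H_est / H_est[2, 2] - H_gt / H_gt[2, 2], ord="fro")
```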
Methods | G-mean | PR-AUC |
PointCN | 0.445 | 0.484 |
OANet | 0.581 | 0.560 |
SuperGlue | 0.749 | 0.766 |
Ours | 0.832 | 0.813 |
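G-mean and PR-AUC evaluate how well each network separates inlier from outlier correspondences. A minimal sketch of these two metrics, assuming binary inlier labels and per-correspondence confidence scores, is given below; the use of scikit-learn and the 0.5 threshold are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_curve, auc

def inlier_classification_metrics(y_true, y_score, threshold=0.5):
    # y_true: 1 for true inlier correspondences, 0 for outliers.
    # y_score: predicted inlier confidence for each putative correspondence.
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)

    # G-mean: geometric mean of sensitivity (TPR) and specificity (TNR),
    # which is robust to the strong inlier/outlier imbalance of putative matches.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    g_mean = float(np.sqrt(sensitivity * specificity))

    # PR-AUC: area under the precision-recall curve of the raw scores.
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    pr_auc = float(auc(recall, precision))
    return g_mean, pr_auc
```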
Fusion methods | Average gradient | Standard deviation |
Based on maximum value | 6.65 | 58.71 |
Based on minimum value | 4.73 | 49.15 |
Weighted average | 7.45 | 50.29
Laplacian pyramid | 9.332 | 51.36 |
Wavelet transform fusion | 9.66 | 53.75 |
Ours | 11.23 | 58.78 |
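The two fusion quality measures are standard no-reference metrics: the average gradient reflects sharpness and detail preservation, and the standard deviation reflects global contrast. A minimal sketch of one common way to compute them for a grayscale fused image follows; the exact gradient operator used in the paper may differ.

```python
import numpy as np

def average_gradient(image):
    # Mean magnitude of local intensity changes; larger values indicate
    # sharper detail in the fused image. Forward differences are used here.
    img = np.asarray(image, dtype=np.float64)
    gx = img[:-1, 1:] - img[:-1, :-1]   # horizontal differences
    gy = img[1:, :-1] - img[:-1, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def intensity_std(image):
    # Global contrast measure: standard deviation of pixel intensities.
    return float(np.std(np.asarray(image, dtype=np.float64)))
```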
Method | Incorrect stitching | Ghosting artifacts | Conspicuous distortions | Panorama generated | Successfully stitched images (%) | Average feature matching pairs | Average accuracy (%)
ORB | √ | √ | √ | × | 20 | 16.3 | 10.1 |
GMS | √ | √ | √ | × | 26.6 | 26.4 | 23.1 |
PointCN | √ | √ | √ | × | 43.3 | 43.7 | 59.7 |
OANet | √ | √ | √ | 100 | 62.2 | 63.1 | |
SuperGlue | √ | √ | 100 | 89.5 | 79.3 | ||
Ours | √ | 100 | 108.4 | 86.2 |