
The goal of this manuscript is to study the existence theory of solutions for a nonlinear boundary value problem of a tripled system of fractional order hybrid sequential integro-differential equations. The analysis depends on some results from fractional calculus and fixed point theory. As a result, we generalize Darbo's fixed point theorem to form an updated version of the tripled fixed point theorem, which we use to investigate the proposed system. Also, Hyers-Ulam and generalized Hyers-Ulam stability results are established for the considered system. For the illustration of our main results, we provide an example.
Citation: Muhammed Jamil, Rahmat Ali Khan, Kamal Shah, Bahaaeldin Abdalla, Thabet Abdeljawad. Application of a tripled fixed point theorem to investigate a nonlinear system of fractional order hybrid sequential integro-differential equations[J]. AIMS Mathematics, 2022, 7(10): 18708-18728. doi: 10.3934/math.20221029
Noncontact gesture recognition has made a significant contribution to human-computer interaction (HCI) applications amid the enormous growth of artificial intelligence (AI) and computer technology [1]. Hand gesture detection systems, with their natural human-computer interaction features, enable effective and intuitive communication through a computer interface. Furthermore, gesture detection is vision-based and can be broadly applied in AI, natural language communication, virtual reality, and multimedia [2]. The demand for, and the level of, services essential to people is increasing daily. Hand gestures are a main component of face-to-face communication [3]; hence, human body language plays a significant part in face-to-face communication, and many things are expressed with hand gestures, which offers some insight into communication itself [4]. Yet recent automation in this area does not concentrate on using hand gestures in everyday activities. Emerging technology eases the complexity of the user interfaces and computer programs presented to the user, and image processing is now widely used to make this interaction less complex and easier to understand [5].
When communication must take place between a deaf person and a hearing person, there is a strong need for hand gestures. To make such a system smarter, hand gesture images must be fed into it and analyzed further to determine their meaning [6]. Still, conventional hand gesture detection based on image processing methods has not been broadly adopted in HCI because of its complex algorithms, poor real-time capability, and low recognition accuracy [7]. Recently, gesture detection based on machine learning (ML) has advanced quickly in HCI owing to progress in AI and image-processing graphics processing units (GPUs) [8]. ML methods such as neural networks (NNs), local orientation histograms, elastic graph matching, and support vector machines (SVMs) have been broadly utilized. Owing to its learning capability, an NN does not require manual feature engineering: it simulates human learning processes and can be trained on gesture instances to form a network classification and detection map [9]. Currently, deep learning (DL) is a frequently utilized approach for hand gesture recognition (HGR). Recurrent neural networks (RNNs), convolutional neural networks (CNNs), and stacked denoising autoencoders (SDAEs) are usually utilized in HGR applications [10].
This study leverages the Lion Swarm Optimizer with a deep convolutional neural network (LSO-DCNN) for gesture recognition and classification. The aim of the LSO-DCNN technique lies in the proper identification and categorization of the various categories of gestures that exist in the input images. First, the 1D convolutional neural network (1D-CNN) method derives a collection of feature vectors. In the second step, the LSO algorithm optimally chooses the hyperparameter values of the 1D-CNN model. In the final step, the extreme gradient boosting (XGBoost) classifier assigns the proper classes, i.e., recognizes the gestures effectively. To portray the enhanced gesture classification results of the LSO-DCNN algorithm, a wide range of experimental results are investigated. A brief comparative study reports the improvements of the LSO-DCNN technique in the gesture recognition process.
Sun et al. [11] suggested a model based on multi-level feature fusion of a two-stream convolutional neural network (MFF-TSCNN), which comprises three major phases. Initially, a Kinect sensor acquires red, green, blue, and depth (RGB-D) images to establish a gesture dataset, and data augmentation is applied to the training and testing sets. An MFF-TSCNN model is then built and trained. Barioul and Kanoun [12] proposed a new classification model based on an extreme learning machine (ELM) reinforced by an enhanced grasshopper optimization algorithm (GOA) as the basis of a weight-pruning procedure. Myographic modalities such as force myography (FMG) provide promising signals that can form the foundation for recognizing hand signs; FMG was examined to limit the number of sensors to appropriate locations and to provide the signal processing techniques needed for practical use in wearable embedded systems. Gadekallu et al. [13] presented a crow search-based CNN (CS-CNN) method for recognizing gestures in the HCI field. The hand gesture database used in that work is an open database obtained from Kaggle. A one-hot encoding method was employed to convert the categorical values of the data into binary form, after which a crow search algorithm (CSA) was employed to choose the optimal tuning for training the data with the CNN.
Yu et al. [14] employed a particle swarm optimization (PSO) technique to optimize the width and center values of a radial basis function neural network (RBFNN). The authors also used an electromyography (EMG) signal acquisition device and an electrode sleeve to gather the four-channel continuous EMG signals produced by eight serial gestures. In [15], the authors presented an ensemble of CNN-based techniques. First, the gesture segment is identified by a background separation model based on binary thresholding. Then, the contour section is extracted and the hand area is segmented. Finally, the images are resized and fed to three distinct CNN models for parallel training.
Gao et al. [16] developed an effective hand gesture detection model based on deep learning. First, an RGB-D early-fusion technique based on the HSV space was proposed, which effectively mitigates background interference and enhances hand gesture information. Second, a hand gesture classification network (HandClasNet) was proposed to realize hand gesture localization and recognition by identifying the center and corner points of the hand, with gesture detection carried out by a similar EfficientNet-based network. In [17], the authors utilized a CNN approach for the recognition and identification of human hand gestures. The workflow comprises segmenting the hand region of interest using finger segmentation and a mask image, normalizing the segmented finger image, and detection using a CNN classifier. The segmentation isolates the hand area from the whole image by applying mask images.
This study has developed a new LSO-DCNN method for automated gesture recognition and classification. The major intention of the LSO-DCNN method lies in the proper identification and categorization of various categories of gestures that exist in the input images. The presented LSO-DCNN model follows a three-step procedure:
Step 1: The 1D-CNN method derives a collection of feature vectors.
Step 2: The LSO method optimally chooses the hyperparameter values of the 1D-CNN model.
Step 3: The XGBoost classifier assigns appropriate classes, i.e., effectively recognizes the gestures.
First, the 1D-CNN model derives a collection of feature vectors. A CNN is a neural network that uses convolution operations in at least one layer instead of ordinary matrix multiplication [18]. Convolution is a special linear operation, and a convolutional network generally consists of three types of layers: convolutional, activation, and pooling layers. In the image recognition domain, the 2D-CNN is commonly used to extract features from images; classical CNN models include AlexNet, LeNet, ResNet, VGG, GoogleNet, and so on. The 1D-CNN is used here to extract appropriate features from the data. The input of the 1D-CNN is one-dimensional, so its convolutional kernel adopts a 1D architecture, and the output of every convolutional, activation, and pooling layer is a 1D feature vector. In this section, the fundamental structure of the 1D-CNN is introduced.
The convolution layer applies the convolution operation between the 1D input signals and the 1D convolution filters, after which local features are extracted by the activation layer. The data is input to the convolution layer of the 1D-CNN, which implements the convolution operation:
$$x_k^l = \sum_{i=1}^{n} \operatorname{conv}\left(w_{ik}^{l-1}, s_i^{l-1}\right) + b_k^l \tag{1}$$
Here, $x_k^l$ and $b_k^l$ denote the output and bias of the $k$-th neuron in layer $l$; $s_i^{l-1}$ denotes the output of the $i$-th neuron in layer $l-1$; $w_{ik}^{l-1}$ denotes the convolutional kernel connecting the $i$-th neuron in layer $l-1$ to the $k$-th neuron in layer $l$; and $i = 1, 2, \ldots, n$, where $n$ is the number of neurons.
The activation layer applies a non-linear transformation to the input signal through a non-linear function to improve the CNN's expressive power. Typical activation functions are ReLU, sigmoid, and tanh. Since the ReLU function overcomes gradient dispersion and converges quickly, it is widely applied. Thus, the ReLU function is used as the activation function, and it can be represented as
$$y_k^l = f\left(x_k^l\right) = \max\left(0,\, x_k^l\right) \tag{2}$$
where $y_k^l$ denotes the activation value of the $k$-th neuron in layer $l$.
The pooling layer is generally employed after the convolution layer. Downsampling helps to avoid over-fitting, decreases the spatial size of the network's features and the number of parameters, and reduces the amount of computation. The typical pooling operations are max pooling and average pooling; for max pooling,
$$z_k^{l(j)} = \max_{(j-1)r < t \le jr} y_k^{l(t)} \tag{3}$$
where $z_k^{l(j)}$ denotes the $j$-th value in the $k$-th neuron of layer $l$; $y_k^{l(t)}$ denotes the $t$-th activation value in the $k$-th neuron of layer $l$; and $r$ denotes the width of the pooling region.
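As a concrete illustration of Eqs (1)-(3), the following NumPy sketch applies a single-channel 1D convolution, a ReLU activation, and non-overlapping max pooling to a toy signal. The filter values, bias, and pooling width are arbitrary example choices, not parameters from the paper.

```python
import numpy as np

def conv1d(signal, kernel, bias=0.0):
    """Valid 1D convolution of Eq (1) for one input channel: a sliding dot product plus bias."""
    n, m = len(signal), len(kernel)
    out = np.empty(n - m + 1)
    for j in range(n - m + 1):
        out[j] = np.dot(signal[j:j + m], kernel) + bias
    return out

def relu(x):
    """ReLU activation of Eq (2): max(0, x) element-wise."""
    return np.maximum(0.0, x)

def max_pool1d(x, r):
    """Non-overlapping max pooling of Eq (3) with window width r."""
    usable = (len(x) // r) * r
    return x[:usable].reshape(-1, r).max(axis=1)

# Toy 1D input and an arbitrary 3-tap filter (illustrative values only).
s = np.array([0.1, 0.8, -0.3, 0.5, 0.9, -0.2, 0.4, 0.7])
w = np.array([0.2, -0.5, 0.3])

features = max_pool1d(relu(conv1d(s, w, bias=0.05)), r=2)
print(features)  # pooled 1D feature vector
```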
In this work, the LSO approach optimally chooses the hyperparameter values of the 1D-CNN model. This approach is selected for its capacity to navigate the parameter space effectively, adapt to local characteristics, and converge toward optimal settings, which makes it well suited to fine-tuning intricate models. In the LSO algorithm, the lion king conducts a range search around the historical optimum solution to find better solutions [19]. The equation for updating its position is given below:
$$x_i^{k+1} = g^k\left(1 + \gamma\,\left\| p_i^k - g^k \right\|\right) \tag{4}$$
A lioness randomly chooses another lioness to cooperate with, and the position update can be represented as
$$x_i^{k+1} = \frac{p_i^k + p_c^k}{2}\left(1 + \alpha_f\,\gamma\right) \tag{5}$$
A young lion updates its position in one of three ways: following the lion king, following the lioness, or leaving the group:
$$x_i^{k+1} = \begin{cases} \dfrac{g^k + p_i^k}{2}\left(1 + \alpha_c\,\gamma\right), & 0 \le q \le \dfrac{1}{3},\\[2mm] \dfrac{p_m^k + p_i^k}{2}\left(1 + \alpha_c\,\gamma\right), & \dfrac{1}{3} < q \le \dfrac{2}{3},\\[2mm] \dfrac{\underline{g}^k + p_i^k}{2}\left(1 + \alpha_c\,\gamma\right), & \dfrac{2}{3} < q \le 1 \end{cases} \tag{6}$$
In Eq (6), $x_i^k$ denotes the $i$-th individual of the $k$-th generation population; $p_i^k$ represents the best position visited by the $i$-th individual from the 1st to the $k$-th generation; $\gamma$ is a random number drawn from the normal distribution $N(0,1)$; $p_c^k$ is chosen at random from the lioness group of the $k$-th generation; $g^k$ is the optimum position of the $k$-th generation population; $q$ is a random number drawn from the uniform distribution $U[0,1]$; $\underline{g}^k = \underline{low} + \underline{up} - g^k$, and $p_m^k$ is chosen at random from the lion group of the $k$-th generation; $\alpha_f$ and $\alpha_c$ are disturbance factors; and $\underline{low}$ and $\underline{up}$ are the minimum and maximum values of each dimension of the lion activity space:
$$\alpha_f = 0.1\left(\underline{up} - \underline{low}\right) \times \exp\left(-\frac{30t}{T}\right)^{10} \tag{7}$$
$$\alpha_c = 0.1\left(\underline{up} - \underline{low}\right) \times \frac{T - t}{T} \tag{8}$$
where $T$ denotes the maximum number of iterations and $t$ denotes the current iteration number.
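The position updates in Eqs (4)-(8) can be sketched in NumPy as below for a single individual of each role. This is only one assumed reading of the update rules (in particular, Eq (7) is read as exp(-30t/T) raised to the power 10, and the underlined g as low + up - g, following the text above); the bounds, positions, and iteration counts are illustrative values, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

low, up = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # search-space bounds (example)
T, t = 100, 10                                         # maximum and current iteration

# Disturbance factors, Eqs (7) and (8).
alpha_f = 0.1 * (up - low) * np.exp(-30.0 * t / T) ** 10
alpha_c = 0.1 * (up - low) * ((T - t) / T)

g = np.array([0.4, 0.6])       # best position of the current generation
p_i = np.array([0.3, 0.7])     # historical best of individual i
p_c = np.array([0.5, 0.5])     # historical best of a randomly chosen lioness
p_m = np.array([0.2, 0.8])     # historical best of a randomly chosen member of the lion group
gamma = rng.standard_normal()  # N(0, 1) disturbance

# Lion king, Eq (4): local search around the generation's best position.
x_king = g * (1.0 + gamma * np.linalg.norm(p_i - g))

# Lioness, Eq (5): cooperative move with a partner lioness.
x_lioness = (p_i + p_c) / 2.0 * (1.0 + alpha_f * gamma)

# Young lion, Eq (6): one of three moves selected by q ~ U[0, 1].
q = rng.uniform()
g_reflected = low + up - g     # the underlined g used in the third branch
if q <= 1.0 / 3.0:
    x_cub = (g + p_i) / 2.0 * (1.0 + alpha_c * gamma)
elif q <= 2.0 / 3.0:
    x_cub = (p_m + p_i) / 2.0 * (1.0 + alpha_c * gamma)
else:
    x_cub = (g_reflected + p_i) / 2.0 * (1.0 + alpha_c * gamma)

print(x_king, x_lioness, x_cub)
```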
Fitness selection is a vital component of the LSO method. A solution encoding is used to evaluate each candidate solution's aptitude. Here, the fitness function is designed with the precision value $P$ defined below as the main criterion.
$$\text{Fitness} = \max(P) \tag{9}$$
$$P = \frac{TP}{TP + FP} \tag{10}$$
In this expression, $TP$ denotes the number of true positives and $FP$ the number of false positives.
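A minimal sketch of the precision-based fitness in Eqs (9) and (10) is given below, assuming a binary labelling in which TP and FP are counted against a positive class; the example labels are arbitrary, and the maximization in Eq (9) would be taken over all candidate hyperparameter sets evaluated by the LSO.

```python
import numpy as np

def precision_fitness(y_true, y_pred):
    """Fitness of one candidate configuration, Eqs (9)-(10): precision TP / (TP + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

# Example: predictions produced by one candidate 1D-CNN configuration.
print(precision_fitness([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # 0.666...
```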
Finally, the XGBoost classifier assigns the proper classes, i.e., recognizes the gestures effectively. XGBoost is an ensemble ML technique, a gradient boosting method used to improve the performance of a predictive model by combining a series of weak learners into a strong learner [20]. Ensemble methods offer better outcomes than a single model. Figure 2 depicts the architecture of XGBoost. The steps involved are as follows.
Step 1: Initialize
Consider a binary classification problem in which $y_i$ is the actual label, taking the value 1 or 0. The commonly used log loss function is adopted in this case and is given as
$$l\left(y_i, \hat{y}_i^t\right) = -\left(y_i \log\left(p_i\right) + \left(1 - y_i\right)\log\left(1 - p_i\right)\right) \tag{11}$$
where
$$p_i = \frac{1}{1 + e^{-\hat{y}_i^t}}. \tag{12}$$
Based on the values of $p_i$ and $y_i$, the gradient $g_i$ and Hessian $h_i$ are evaluated:
$$g_i = p_i - y_i, \qquad h_i = p_i\left(1 - p_i\right). \tag{13}$$
The predicted value of instance $x_i$ after the $(t-1)$-th tree is denoted by $\hat{y}_i^{(t-1)}$, where the actual value of $x_i$ is $y_i$. For the 0th tree the predicted value is 0, i.e., $\hat{y}_i^{(0)} = 0$.
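The per-instance quantities in Eqs (11)-(13) can be computed as in the short sketch below; the raw scores and labels are arbitrary example values, not data from the paper.

```python
import numpy as np

y_true = np.array([1, 0, 1, 0])          # actual binary labels y_i
y_hat = np.array([0.3, -0.8, 1.2, 0.1])  # raw scores accumulated from the previous trees

p = 1.0 / (1.0 + np.exp(-y_hat))         # Eq (12): sigmoid of the current raw prediction

# Eq (11): binary log loss of each instance.
loss = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# Eq (13): gradient g_i and Hessian h_i used to grow the next tree.
g = p - y_true
h = p * (1 - p)
print(loss, g, h)
```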
Step 2: The Gain value of each candidate feature is computed by traversing the features, in order to determine the splitting mode of the current root node. The Gain values are used to identify the feature node with the maximal Gain score.
Step 3: In this step, the current binary leaf nodes are established. Based on the feature with the maximal Gain, the sample set is split into two parts to obtain two leaf nodes. Step 2 is then repeated on the two leaf nodes until a negative Gain score or another stopping criterion is reached. This step builds the entire tree.
Step 4: The prediction values of all leaf nodes are computed in this step. The prediction value $\omega_j$ of leaf node $j$ is computed as
$$\omega_j = -\frac{G_j}{H_j + \lambda} \tag{14}$$
where $G_j$ and $H_j$ are the sums of $g_i$ and $h_i$ over the instances assigned to leaf $j$, and $\lambda$ is a regularization parameter. The prediction after the second tree is expressed as
$$\hat{y}_i^{(2)} = \hat{y}_i^{(1)} + f_2\left(x_i\right) \tag{15}$$
This establishes the second tree.
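Continuing the toy example, Eq (14) gives a leaf weight from the summed gradients and Hessians of the instances falling into that leaf, and Eq (15) adds the new tree's output to the running prediction. Here the regularization value and the grouping of all four instances into a single leaf are arbitrary simplifications for illustration.

```python
import numpy as np

g = np.array([-0.43, 0.31, -0.23, 0.52])  # gradients of the instances in one leaf
h = np.array([0.24, 0.21, 0.18, 0.25])    # Hessians of the same instances
lam = 1.0                                  # regularization parameter lambda (example value)

# Eq (14): optimal weight of this leaf.
G, H = g.sum(), h.sum()
w = -G / (H + lam)

# Eq (15): the new tree's leaf output is added to the previous round's predictions.
y_prev = np.array([0.3, -0.8, 1.2, 0.1])
y_new = y_prev + w
print(w, y_new)
```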
Step 5: Steps 1 and 2 are repeated to set up further trees until a sufficient number of trees has been introduced. The model prediction is expressed as $\hat{y}_i^{(t)} = \hat{y}_i^{(t-1)} + f_t(x_i)$, where $\hat{y}_i^{(t)}$ denotes the prediction of $t$ trees on instance $x_i$. This procedure creates the $t$-th tree.
$$p_i = \frac{1}{1 + e^{-\hat{y}_i}} \tag{16}$$
Step 6: Eq (16) determines the classifier output of an instance by converting its final predicted value $\hat{y}_i$ into a probability. If $p_i \ge 0.5$, the instance is assigned to class 1; otherwise, it is assigned to class 0.
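In practice, these steps are carried out by an off-the-shelf implementation. A minimal usage sketch with the xgboost Python package on synthetic feature vectors (stand-ins for the 1D-CNN features, with hyperparameter values chosen only for illustration) might look like this:

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(42)

# Synthetic stand-ins for the 1D-CNN feature vectors and gesture labels.
X = rng.normal(size=(200, 16))
y = rng.integers(0, 4, size=200)  # four gesture classes (example only)

# Gradient-boosted trees as the final classifier; parameter values are illustrative.
clf = XGBClassifier(n_estimators=50, max_depth=3, learning_rate=0.1)
clf.fit(X, y)

probs = clf.predict_proba(X[:3])  # class probabilities per instance
labels = clf.predict(X[:3])       # final gesture class assignments
print(labels, probs.shape)
```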
In this section, the results of the LSO-DCNN technique are validated using two benchmark datasets: the sign language digital (SLD) dataset and the sign language gesture image (SLGI) dataset.
In Table 1 and Figure 3, the overall comparative recognition results of the LSO-DCNN technique are examined on the SLD dataset [21]. In terms of accuracy, the LSO-DCNN technique reaches an increased accuracy of 91.32%, while the RF, LR, KNN, XGBoost, and MobileNet-RF models obtain decreased accuracies of 90.19%, 89.29%, 85.79%, 90.18%, and 90.55%, respectively. Next, in terms of precision, the LSO-DCNN approach reaches an increased precision of 91.18%, while the RF, LR, KNN, XGBoost, and MobileNet-RF techniques obtain decreased precisions of 45.77%, 50.59%, 35.53%, 49.26%, and 80.97%, respectively. At the same time, in terms of recall, the LSO-DCNN algorithm attains an increased recall of 91.31%, while the RF, LR, KNN, XGBoost, and MobileNet-RF approaches obtain decreased recalls of 48.67%, 44.55%, 35.83%, 50.12%, and 81.13%, respectively. Finally, in terms of F1 score, the LSO-DCNN method reaches an increased F1 score of 91.78%, while the RF, LR, KNN, XGBoost, and MobileNet-RF models obtain decreased F1 scores of 46.75%, 44.56%, 34.07%, 49.31%, and 80.10%, respectively.
Table 1. Recognition results (%) on the Sign Language Digital (SLD) dataset.
Methods | Accuracy | Precision | Recall | F1 score |
Random Forest | 90.19 | 45.77 | 48.67 | 46.75 |
Logistic Regression | 89.29 | 50.59 | 44.55 | 44.56 |
K-Nearest Neighbor | 85.79 | 35.53 | 35.83 | 34.07 |
XGBoost | 90.18 | 49.26 | 50.12 | 49.31 |
MobileNet-RF | 90.55 | 80.97 | 81.13 | 80.10 |
LSO-DCNN | 91.32 | 91.18 | 91.31 | 91.78 |
Figure 4 shows the training and validation accuracy of the LSO-DCNN method on the SLD dataset. The figure indicates that the accuracy values of the LSO-DCNN method increase over the epochs. Furthermore, the validation accuracy being higher than the training accuracy indicates that the LSO-DCNN approach learns effectively on the SLD dataset.
The training and validation loss of the LSO-DCNN technique on the SLD dataset is given in Figure 5. The results indicate that the LSO-DCNN approach attains close values of training and validation loss, showing that it learns effectively on the SLD dataset.
In Table 2 and Figure 6, the overall comparative recognition results of the LSO-DCNN technique are examined on the SLGI dataset. In terms of accuracy, the LSO-DCNN technique reaches an increased accuracy of 99.09%, while the RF, LR, KNN, XGBoost, and MobileNet-RF approaches obtain decreased accuracies of 97.93%, 97.93%, 93.40%, 98.25%, and 98.31%, respectively. Next, in terms of precision, the LSO-DCNN methodology reaches an increased precision of 98.86%, while the RF, LR, KNN, XGBoost, and MobileNet-RF approaches obtain decreased precisions of 29.08%, 20.49%, 27.34%, 31.15%, and 98.12%, respectively. Simultaneously, in terms of recall, the LSO-DCNN method reaches an increased recall of 99.15%, while the RF, LR, KNN, XGBoost, and MobileNet-RF models obtain decreased recalls of 30.33%, 23.37%, 27.98%, 31.78%, and 98.11%, respectively. Finally, in terms of F1 score, the LSO-DCNN technique reaches an increased F1 score of 99.03%, while the RF, LR, KNN, XGBoost, and MobileNet-RF approaches obtain decreased F1 scores of 29.10%, 19.77%, 27.30%, 30.03%, and 97.89%, respectively.
Table 2. Recognition results (%) on the Sign Language Gesture Image (SLGI) dataset.
Methods | Accuracy | Precision | Recall | F1 score |
Random Forest | 97.93 | 29.08 | 30.33 | 29.10 |
Logistic Regression | 97.93 | 20.49 | 23.37 | 19.77 |
K-Nearest Neighbor | 93.40 | 27.34 | 27.98 | 27.30 |
XGBoost | 98.25 | 31.15 | 31.78 | 30.03 |
MobileNet-RF | 98.31 | 98.12 | 98.11 | 97.89 |
LSO-DCNN | 99.09 | 98.86 | 99.15 | 99.03 |
Figure 7 shows the training and validation accuracy of the LSO-DCNN method on the SLGI dataset. The result shows that the accuracy values of the LSO-DCNN technique increase over the epochs. Moreover, the validation accuracy being higher than the training accuracy indicates that the LSO-DCNN technique learns effectively on the SLGI dataset.
The training and validation loss of the LSO-DCNN approach on the SLGI dataset is shown in Figure 8. The results indicate that the LSO-DCNN method reaches close values of training and validation loss, indicating that it learns effectively on the SLGI dataset.
This study developed a new LSO-DCNN technique for automated gesture recognition and classification. The major intention of the LSO-DCNN approach lies in the proper identification and categorization of the various categories of gestures present in the input images. The presented LSO-DCNN model follows a three-step procedure, namely 1D-CNN based feature extraction, LSO-based hyperparameter tuning, and XGBoost classification. In this work, the LSO method optimally chooses the hyperparameter values of the 1D-CNN model, which helps to recognize the gestures effectively. To demonstrate the enhanced gesture classification results of the LSO-DCNN approach, a wide range of experimental results were investigated. The brief comparative study reported the improvements of the LSO-DCNN technique in the gesture recognition process. In the future, multimodality concepts can be incorporated to further enhance the performance of the LSO-DCNN technique.
The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2023-175.