
This paper presented the formulation and solution of the time fractional q-deformed tanh-Gordon equation, a new extension of the traditional tanh-Gordon equation that uses fractional calculus and a q-deformation parameter. This extension aims to better model physical systems with violated symmetries. The approach combined the controlled Picard method with the Laplace transform technique and the Caputo fractional derivative to find solutions to this equation. Our results indicated that the method is effective for this class of equations. We explored both the existence and the uniqueness of the solution, and included various 2D and 3D graphs to illustrate how different parameters affect the solution's behavior. This work aims to contribute to the theoretical framework of mathematical physics and has potential applications across multiple interdisciplinary fields.
Citation: Khalid K. Ali, Mohamed S. Mohamed, Weam G. Alharbi, M. Maneea. Solving the time fractional q-deformed tanh-Gordon equation: A theoretical analysis using controlled Picard's transform method[J]. AIMS Mathematics, 2024, 9(9): 24654-24676. doi: 10.3934/math.20241201
The reading and interpretation of medical images is usually performed by medical professionals, but even for experienced experts the process of interpreting images and writing reports is prone to error. Staff shortages and heavy workloads can also lead to misjudgments in radiology reports. Writing accurate medical imaging reports is challenging for inexperienced radiologists and pathologists, especially in rural areas and regions where the quality of care is relatively low, while for experienced radiologists and pathologists it is tedious and time-consuming. Automated generation of medical reports can therefore effectively reduce doctors' workload and mistakes.
Several issues must be addressed to automate the generation of auxiliary reports. First, a complete diagnostic report consists of several different forms of information. Second, the relevant image region must be located and described correctly; You et al. [1] automatically extracted machine-learnable annotations from regression data, but the resulting descriptions were still not ideal. Third, the description in an imaging report contains multiple sentences; Krause et al. [2] used a combined image-language structure to generate hierarchical descriptive paragraphs, but generating such long text remains difficult. Fourth, automatically generated statements are often hard to read and do not sound like a natural human voice. The current single-layer LSTM method cannot model long word sequences, and the traditional RNN+CNN architecture struggles to generate long statement sequences. The multimodal recurrent model with attention (MRNA) can model long word sequences, but its accuracy is low and its output lacks readability.
In view of the above problems, two observations are drawn: 1) the verbal information of medical reports is more important than the image information, and 2) the final results are often judged by how well they imitate the doctor's tone. Based on this, the RCLN model is proposed in this paper. RCLN handles the multiple forms of information by establishing a multi-task framework. On the region localization problem, a research team proposed a real-time automatic calibration scheme based on scanning sources, which allows accurate calibration regardless of the path-length variation caused by the non-planar topography of the sample or the scanning of the galvanometer [3]. Multimodal imaging technology has previously been applied to study density changes of melanosomes and lipofuscin granules in retinal pigment epithelium (RPE) cells [4]. There is also an efficient direct time-domain resampling scheme based on phase analysis that shows significant improvements in accuracy and speed, and silica-coated silver nanostructures can serve as excellent contrast agents for optical coherence tomography (OCT) imaging [5]. In our framework, label prediction is treated as a multi-label classification task and long description generation as a text generation task. To solve the image region localization problem, the MRNA model introduced a cooperative attention mechanism that exploits the synergy between visual features and semantics while attending to both the images and the predicted labels. To address the difficulty of generating long text, RCLN uses a hierarchical LSTM that exploits the compositional nature of reports: combined with the cooperative attention mechanism, the hierarchical LSTM first generates high-level topics and then generates fine-grained descriptions according to those topics.
1) To address the confusion of long sentences in traditional medical report generation and the difficulty of locating diseased areas, a new cyclic sentence generation model and an attention-based word-by-word LSTM generation model are proposed, solving the long-text and colloquialism problems and achieving theoretical innovation.
2) Comparative experiments show that the model is more effective than traditional models for chest X-ray report generation.
The first task is to predict the labels of a given image. Label prediction is treated as a multi-label classification (MLC) task. Specifically, features of the given image I are first extracted, and the probability of each label is computed as:
$$p_{l,\text{pred}}\bigl(l_i = 1 \mid \{v_n\}_{n=1}^{N}\bigr) \propto \exp\Bigl(\mathrm{MLC}_i\bigl(\{v_n\}_{n=1}^{N}\bigr)\Bigr) \tag{1}$$
where l ∈ R^L is the label vector, l_i = 1/0 indicates whether the ith label is present, and MLC_i denotes the ith output of the multi-label classification network. A complete diagnostic report is composed of multiple parts with different forms of information: a chest X-ray report contains the impression, usually one sentence; the findings, a longer description; and the tags, a list of keywords. Generating such disparate information within a unified framework is technically demanding.
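Turning the per-label network outputs of Eq (1) into binary label predictions can be sketched as follows. This is a minimal illustration, not the paper's implementation: the logistic sigmoid is used here as a normalized stand-in for the exp(·) scoring, and the scores and threshold are toy values.

```python
import math

def predict_labels(mlc_scores, threshold=0.5):
    """Turn per-label MLC outputs into label predictions: map each score
    to a probability with the logistic sigmoid, then predict l_i = 1
    whenever the probability exceeds the threshold."""
    probs = [1.0 / (1.0 + math.exp(-s)) for s in mlc_scores]
    return [1 if p > threshold else 0 for p in probs], probs

# toy scores for four hypothetical labels
labels, probs = predict_labels([2.3, -1.1, 0.0, 4.0])
```

Note that a score of exactly 0 maps to probability 0.5 and is not predicted, matching the strict threshold.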
Secondly, it is still difficult to locate the lesion area in the image and attach the correct description.
Finally, descriptions in imaging reports are often long, containing multiple sentences or even paragraphs. Suppose y has S sentences, the ith sentence has N_i words, and y_{i,j} is the jth word of the ith sentence. The training loss ℓ(x, y), computed over the distributions produced for each sentence and each word, combines two weighted cross-entropy terms: a sentence loss ℓ_sent on the stop distribution p_i (whether generation stops after sentence i), and a word loss ℓ_word on the word distributions p_{i,j}.
However, generating long texts is indispensable, and traditional methods cannot meet this need.
Both CNN and RNN are extensions of traditional neural networks: results are produced by a forward pass and the model is updated by backward propagation. Each layer can contain multiple neurons horizontally, and multiple layers can be stacked vertically. The significance of combining them is that the combination can process large amounts of information with both spatial and temporal structure, such as video or image-text combinations. Examples include real-scene dialogues and dialogues grounded in images, which make text expressions more specific, and videos, which describe scenes more completely than single pictures.
Feature extraction mainly uses convolution kernels, whose width and height are greater than 1 and which perform cross-correlation only with same-size windows of the image. With no padding and stride 1, the output size is therefore the input size n_h × n_w reduced by the kernel size k_h × k_w, i.e., (n_h − k_h + 1) × (n_w − k_w + 1). The combined sentence/word loss described above is:
$$\ell(x, y) = \lambda_{\text{sent}} \sum_{i=1}^{S} \ell_{\text{sent}}\bigl(p_i, \mathbf{1}[i = S]\bigr) + \lambda_{\text{word}} \sum_{i=1}^{S} \sum_{j=1}^{N_i} \ell_{\text{word}}\bigl(p_{i,j}, y_{i,j}\bigr) \tag{2}$$
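The (n_h − k_h + 1) × (n_w − k_w + 1) output-size rule for valid cross-correlation can be checked with a plain-Python sketch (the 3×3 input and 2×2 kernel are toy values):

```python
def corr2d(X, K):
    """Valid 2-D cross-correlation: slide kernel K over input X with
    no padding and stride 1; the output is (nh-kh+1) x (nw-kw+1)."""
    nh, nw = len(X), len(X[0])
    kh, kw = len(K), len(K[0])
    out = [[0] * (nw - kw + 1) for _ in range(nh - kh + 1)]
    for i in range(nh - kh + 1):
        for j in range(nw - kw + 1):
            out[i][j] = sum(X[i + a][j + b] * K[a][b]
                            for a in range(kh) for b in range(kw))
    return out

X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # 3x3 input
K = [[1, 0], [0, 1]]                    # 2x2 kernel
Y = corr2d(X, K)                        # (3-2+1) x (3-2+1) = 2x2 output
```

Each output entry sums the two input values under the kernel's diagonal, so `Y` is `[[6, 8], [12, 14]]`.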
Image description technology can automatically generate text descriptions for a given image. Most of the image text models studied recently are based on CNN-RNN framework. Vinyals et al. [6] provided image features extracted from the last hidden layer of CNN to LSTM network to generate text. Fang et al. [7] first used CNN to detect anomalies in the image which were used to generate a complete sentence through the language model. Karpathy et al. [8] put forward the use of multimodal recursive neural network to fuse visual and semantic features and then generate image description.
Scientists have studied attention in the field of cognitive neuroscience since the 19th century. Kernel regression [9], proposed in 1964, was an early demonstration of an attention mechanism in machine learning. In mathematical terms, suppose there is a query q ∈ R^q and m key-value pairs (k_1, v_1), …, (k_m, v_m), where k_i ∈ R^k and v_i ∈ R^v. The attention pooling function f is expressed as a weighted sum of the values:
$$h_i = f\bigl(W_i^{(q)} q,\; W_i^{(k)} k_i,\; W_i^{(v)} v_i\bigr) \in \mathbb{R}^{p_v} \tag{3}$$
The attention weight (a scalar) between the query q and the key k_i is obtained by mapping the two vectors to a scalar through the attention scoring function a and then applying the softmax operation; the h attention heads are then concatenated and linearly transformed:
$$W_o \begin{bmatrix} h_1 \\ \vdots \\ h_h \end{bmatrix} \in \mathbb{R}^{p_o} \tag{4}$$
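A single-query dot-product attention step — score each key against the query, softmax the scores, take the weighted sum of values — can be sketched in plain Python. This illustrates one head of Eqs (3)-(4) without the learned projections W; the vectors are toy values.

```python
import math

def attention(q, keys, values):
    """Single-query scaled dot-product attention: softmax over
    query-key scores, then a weighted sum of the value vectors."""
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
              for k in keys]
    m = max(scores)                       # stabilize the softmax
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    w = [e / z for e in exp]              # attention weights, sum to 1
    dim = len(values[0])
    out = [sum(w[i] * values[i][d] for i in range(len(values)))
           for d in range(dim)]
    return out, w

# the query aligns with the first key, so the first value dominates
out, w = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]])
```

With values [1] and [0], the output equals the weight placed on the first key.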
Attention mechanisms have proven useful for image captioning. Xu et al. introduced a spatial visual attention mechanism over image features extracted from a middle CNN layer [10]. Wang et al. [11] proposed a semantic attention mechanism over given image tags in order to make better use of visual features and generate semantic labels.
The design of the LSTM network was inspired by the logic gates of computers. LSTM introduces memory cells (cells for short), whose hidden-layer outputs include both hidden states and memory elements. Only the hidden state is passed to the output layer, while the memory element is entirely internal. Suppose there are h hidden units, the batch size is n, and the input dimension is d. The input is X_t ∈ R^{n×d} and the hidden state of the previous time step is H_{t−1} ∈ R^{n×h}. Accordingly, the gates of time step t are defined as follows: the input gate I_t ∈ R^{n×h}, the forget gate F_t ∈ R^{n×h}, and the output gate O_t ∈ R^{n×h}. They are calculated as:
$$I_t = \sigma(X_t W_{xi} + H_{t-1} W_{hi} + b_i), \quad F_t = \sigma(X_t W_{xf} + H_{t-1} W_{hf} + b_f), \quad O_t = \sigma(X_t W_{xo} + H_{t-1} W_{ho} + b_o) \tag{5}$$
where W_{xi}, W_{xf}, W_{xo} ∈ R^{d×h} and W_{hi}, W_{hf}, W_{ho} ∈ R^{h×h} are the weight parameters, and b_i, b_f, b_o ∈ R^{1×h} are the bias parameters.
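One LSTM time step following Eq (5) can be sketched for the scalar case (d = h = 1). The weights here are arbitrary toy values, not trained parameters; the candidate-cell term g is included since the memory-cell update needs it.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step for scalar input/hidden: compute the input, forget
    and output gates of Eq (5), then update the memory cell and hidden
    state. W is a dict of scalar weights and biases."""
    i = sigmoid(W['xi'] * x + W['hi'] * h_prev + W['bi'])    # input gate
    f = sigmoid(W['xf'] * x + W['hf'] * h_prev + W['bf'])    # forget gate
    o = sigmoid(W['xo'] * x + W['ho'] * h_prev + W['bo'])    # output gate
    g = math.tanh(W['xg'] * x + W['hg'] * h_prev + W['bg'])  # candidate cell
    c = f * c_prev + i * g            # memory cell: keep old, admit new
    h = o * math.tanh(c)              # hidden state exposed to the output
    return h, c

W = dict(xi=1.0, hi=0.5, bi=0.0, xf=1.0, hf=0.5, bf=0.0,
         xo=1.0, ho=0.5, bo=0.0, xg=1.0, hg=0.5, bg=0.0)
h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=0.0, W=W)
```

Only `h` would be passed to the output layer; `c` stays internal, as the text describes.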
As an improved recurrent neural network, LSTM can handle the long-distance dependencies in medical report generation that plain RNNs cannot [12]. Tong et al. [13] studied dense captioning, requiring the model to generate a text description for each detected image region. Lei et al. [14] generated paragraph descriptions for images through hierarchical LSTMs.
The visual features of the image and the semantic features of the previous sentence are combined in a multimodal recurrent generation network (MRNA) that generates the next sentence. The RCLN model proposed in this paper introduces a new cyclic generation model that produces results sentence by sentence, in which each subsequent sentence is conditioned on multimodal input: the preceding sentence and the original image [15]. The multimodal model adopts an attention mechanism to improve performance. The overall architecture takes medical images from multiple views as input and generates a radiology report with impressions and findings. To generate the findings paragraph, an encoder-decoder model first takes image pairs as input and generates the first sentence; the first sentence is then fed into a sentence-encoding network to output its semantic representation [16]. Suppose a findings paragraph containing L sentences is being generated. The probability of generating the ith sentence of length T satisfies:
$$P(S_i = w_1, w_2, \ldots, w_T \mid V; \theta) = P(S_1 \mid V) \prod_{j=2}^{i-1} P(S_j \mid V, S_1, \ldots, S_{j-1}) \; P(w_1 \mid V, S_{i-1}) \prod_{t=2}^{T} P(w_t \mid V, S_{i-1}, w_1, \ldots, w_{t-1}) \tag{6}$$
where V is the given medical image, θ is the model parameter (θ is omitted on the right-hand side hereafter), S_i represents the ith sentence, and w_t is the tth token in the ith sentence. Similar to the n-gram assumption in language models, this paper adopts a Markov assumption to build a 2-gram model at the sentence level, meaning the current sentence depends only on its previous sentence and the image. This simplifies the probability estimate:
$$\hat{P}(S_i = w_1, w_2, \ldots, w_T \mid V; \theta) = \underbrace{P(S_1 \mid V)}_{1} \, \underbrace{\prod_{j=2}^{i-1} P(S_j \mid V, S_{j-1})}_{2} \, \underbrace{P(w_1 \mid V, S_{i-1}) \prod_{t=2}^{T} P(w_t \mid V, S_{i-1}, w_1, \ldots, w_{t-1})}_{3} \tag{7}$$
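Under this sentence-level 2-gram assumption, scoring a generated paragraph in log space reduces to summing per-sentence conditional log-probabilities. A minimal sketch — the probability values are illustrative stand-ins for what a trained model would produce:

```python
import math

def paragraph_log_prob(sentence_log_probs):
    """Sentence-level factorization of Eq (7): the paragraph
    log-probability is log P(S_1|V) plus log P(S_j|V, S_{j-1})
    for each remaining sentence."""
    return sum(sentence_log_probs)

# hypothetical model outputs: log P(S1|V), log P(S2|V,S1), log P(S3|V,S2)
lp = paragraph_log_prob([math.log(0.5), math.log(0.4), math.log(0.25)])
```

Working in log space avoids underflow when paragraphs contain many sentences.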
It can be noted that for small-scale data sets, the verbal information of medical reports is more important than the image information, and the final results tend to care more about the degree of imitation of doctors' tone.
The medical reporting task is closely related to the Image2Text task, so this paper uses an image captioning method to solve it. In this model, an image encoder extracts global and regional visual features from the input image. The background variable c output by the image encoder encodes the information of the entire image input sequence x_1, …, x_T. Given the output sequence y_1, y_2, …, y_{T′} in the training samples, for each time step t′ the conditional probability of the decoder output is based on the previous output sequence y_1, …, y_{t′−1} and the background variable c, i.e., P(y_{t′} ∣ y_1, …, y_{t′−1}, c). Another recurrent neural network can then serve as the decoder for time step t′ of the output sequence: the decoder takes the output y_{t′−1} of the previous time step and the background variable c as input, and transforms them together with the previous hidden state s_{t′−1} into the current hidden state s_{t′}. A function g (a recurrent unit) therefore expresses the hidden-layer transformation of the decoder:
$$s_{t'} = g(y_{t'-1}, c, s_{t'-1}) \tag{8}$$
The image encoder is a CNN that automatically extracts hierarchical visual features from images; this model uses a pre-trained ResNet-152 [10]. The input image is resized to 224 × 224 to stay consistent with the pre-trained ResNet encoder. The local feature matrix f ∈ R^{1024×196} (reshaped from 1024 × 14 × 14) is extracted from the last res layer of ResNet [17]. Each column of f is a regional feature vector, so each image has 196 sub-regions. At the same time, the global feature vector f ∈ R^{2048} is extracted from the last mean pooling layer of ResNet. For multiple input images from multiple views (for example, the frontal and lateral views shown in the body text), their regional and global features are concatenated before being fed into the following layers [18]. For efficiency, all parameters in the layers built from ResNet-152 are fixed during training. Max pooling is then applied to the feature maps extracted from each convolution layer to generate 1024-dimensional feature vectors, and the final sentence feature is a concatenation of feature vectors from different layers. To generate a long paragraph description, a hierarchical recurrent network is used: a two-level RNN, where a paragraph-level RNN first generates topics that a sentence-level RNN then takes as input to generate sentences. A pre-trained dense captioning model can be used to detect the semantic regions of images.
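The reshaping of the 1024 × 14 × 14 ResNet feature map into a 1024 × 196 matrix of regional feature vectors can be sketched in plain Python (the shape follows the text; the zero values are placeholders for real activations):

```python
def flatten_regions(feature_map):
    """Reshape a C x H x W conv feature map into a C x (H*W) matrix
    whose columns are regional feature vectors: one column per spatial
    position, i.e. one per image sub-region."""
    C = len(feature_map)
    H = len(feature_map[0])
    W = len(feature_map[0][0])
    flat = [[feature_map[c][i][j] for i in range(H) for j in range(W)]
            for c in range(C)]
    return flat, H * W

# placeholder 1024 x 14 x 14 feature map (zeros stand in for activations)
fmap = [[[0.0] * 14 for _ in range(14)] for _ in range(1024)]
f, n_regions = flatten_regions(fmap)   # 1024 x 196, i.e. 196 sub-regions
```

The column count 14 × 14 = 196 is exactly the number of sub-regions attended over.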
Natural language is a complex system for expressing human thought, in which words are the basic units of meaning. As the name suggests, a word vector is a vector used to represent the meaning of a word and can also be regarded as the word's feature vector or representation. The technique of mapping words to real vectors is called word embedding, which in recent years has become basic knowledge in natural language processing. Each word is mapped to a fixed-length vector that better expresses similarities and analogies between different words. Word embedding comprises two models, skip-gram and continuous bag of words (CBOW); to obtain semantically meaningful representations, their training relies on conditional probabilities, using some words in a corpus to predict others [19]. Word embedding models are self-supervised, since they train on unlabeled data.
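The word-to-vector mapping described above is, at inference time, a table lookup. A minimal sketch — the vocabulary, the `<unk>` fallback convention, and the 4-dimensional vectors are illustrative, not trained embeddings:

```python
def embed(tokens, vocab, table):
    """Map each token to its fixed-length embedding vector via a lookup
    table; out-of-vocabulary words fall back to the <unk> row."""
    return [table[vocab.get(t, vocab['<unk>'])] for t in tokens]

vocab = {'<unk>': 0, 'lung': 1, 'clear': 2}
table = [[0.0, 0.0, 0.0, 0.0],    # <unk>
         [0.1, -0.2, 0.3, 0.0],   # lung
         [0.4, 0.1, -0.1, 0.2]]   # clear
vecs = embed(['lung', 'fields', 'clear'], vocab, table)
```

In a trained model the table rows are learned by skip-gram or CBOW objectives rather than set by hand.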
For the impression and findings descriptions of medical reports, a QA + hierarchical RNN method is used in this paper [20]. By introducing hidden state variables that store past information alongside the current input, the current output can be determined. The hidden state models the way data are generated: generation is divided into two steps, first selecting a hidden state and then producing an observation from it [21]. "Hidden" means that when data generation is running, only the observation sequence is visible, not the hidden state sequence, although the hidden states are exposed during training [22]. All of this proceeds in units of time steps, where a time step is the time interval of a load sub-step within a load step [23]. In rate-independent analyses such as static analysis and (static) nonlinear analysis, the time step within a load step does not reflect real time; it accumulates to reflect the sequence of load sub-steps [24]. In rate-dependent analyses such as transient analysis, however, the size of the time step reflects the actual length of time.
The original dataset was collected from the OpenI chest radiography open data, which contains 3955 radiology reports from two large hospital systems in the Indiana Patient Care Network database and 7470 associated chest X-rays from the hospital image archiving systems.
First, the original dataset contained 7470 images, 3391 pairs of frontal-lateral chest radiographs, and 3631 reports with more than 4 sentences. To retain the largest usable subset of the data, the maximum number of sentences was set to 8, since more than 90% of reports contain between 4 and 8 sentences; 3111 cases met both conditions. Second, the training and validation sets were split 2811/300, a ratio of about 10:1, and the Adam optimizer based on stochastic gradient descent was used. The unused part of the dataset then served as the test set: 300 reports were randomly selected to form the test set on which all evaluations were performed.
Some common image-caption evaluation metrics, including bilingual evaluation understudy (BLEU), metric for evaluation of translation with explicit ordering (METEOR), and recall-oriented understudy for gisting evaluation (ROUGE), are used to provide quantitative comparisons in this paper. BLEU-1 measures word-level accuracy in medical reports, while higher-order BLEU measures sentence fluency. For a sentence to be translated, a candidate translation is compared against the corresponding group of reference translations over the sets of n-word phrases (n-grams) they contain.
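The word-level accuracy that BLEU-1 measures is a clipped unigram precision. A minimal sketch with a single reference (the brevity penalty of full BLEU is omitted, and the example sentences are invented):

```python
from collections import Counter

def bleu1(candidate, reference):
    """BLEU-1 modified precision: clip each candidate word's count by
    its count in the reference, then divide by the candidate length.
    Higher-order BLEU-n applies the same idea to n-grams."""
    cand, ref = Counter(candidate), Counter(reference)
    clipped = sum(min(n, ref[w]) for w, n in cand.items())
    return clipped / max(len(candidate), 1)

score = bleu1('the lungs are clear'.split(),
              'the lungs are well expanded and clear'.split())
```

Here every candidate word appears in the reference, so the score is 1.0; a candidate word missing from the reference lowers it proportionally.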
The purpose of METEOR is to prevent mistranslation of results due to synonyms [25]. The METEOR score is based on a weighted harmonic mean of unigram precision and unigram recall. To compute it, a set of alignments based on the WordNet thesaurus must be given in advance; METEOR is then the harmonic mean of precision and recall between the best candidate translation and the reference translations, with a penalty that favors fewer, longer contiguous matched chunks in the corresponding statement:
$$\text{Pen} = \gamma \left(\frac{ch}{m}\right)^{\theta} \tag{9}$$

$$F_{\text{mean}} = \frac{P_m R_m}{\alpha P_m + (1 - \alpha) R_m} \tag{10}$$

$$P_m = \frac{|m|}{\sum_{k} h_k(c_i)} \tag{11}$$

$$R_m = \frac{|m|}{\sum_{k} h_k(s_{ij})} \tag{12}$$

$$\text{METEOR} = (1 - \text{Pen}) \, F_{\text{mean}} \tag{13}$$
where α, γ and θ are the default evaluation parameters. The final METEOR score is thus a harmonic mean of matching precision and recall, combined with a chunk-based penalty coefficient Pen, which distinguishes it from BLEU; precision and recall computed over the whole corpus are taken into account to obtain the final measure.
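Eqs (9)-(13) can be assembled directly once the alignment stage has produced a match count and chunk count. A sketch with commonly used default parameters (the match/length/chunk numbers below are invented for illustration):

```python
def meteor(matches, cand_len, ref_len, chunks,
           alpha=0.9, gamma=0.5, theta=3.0):
    """METEOR per Eqs (9)-(13): unigram precision P_m and recall R_m
    from the aligned match count, their weighted harmonic mean F_mean,
    and a fragmentation penalty from the number of contiguous matched
    chunks. alpha/gamma/theta are common default settings."""
    p = matches / cand_len                          # Eq (11)
    r = matches / ref_len                           # Eq (12)
    f_mean = (p * r) / (alpha * p + (1 - alpha) * r)  # Eq (10)
    penalty = gamma * (chunks / matches) ** theta   # Eq (9)
    return (1 - penalty) * f_mean                   # Eq (13)

# hypothetical alignment: 6 matches in 2 chunks, candidate 8 / reference 7 words
score = meteor(matches=6, cand_len=8, ref_len=7, chunks=2)
```

Fewer chunks (more contiguous matches) shrink the penalty, rewarding fluent word order.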
ROUGE evaluates summaries based on the co-occurrence statistics of n-grams: it measures the recall of n-grams between a system summary and reference summaries [26]. The basic idea is that several experts each produce a manual summary, forming a standard reference set; the quality of an automatic summary is then evaluated by counting the overlapping basic units (n-grams, word sequences, and word pairs) between the system summary and the reference summaries.
$$\text{ROUGE-}N = \frac{\sum_{S \in \{\text{ReferenceSummaries}\}} \sum_{\text{gram}_n \in S} \text{Count}_{\text{match}}(\text{gram}_n)}{\sum_{S \in \{\text{ReferenceSummaries}\}} \sum_{\text{gram}_n \in S} \text{Count}(\text{gram}_n)} \tag{14}$$
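Eq (14) is an n-gram recall: matched n-grams over total reference n-grams. A minimal sketch for ROUGE-1 with one reference (the example sentences are invented):

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, references, n=1):
    """ROUGE-N per Eq (14): clipped n-gram matches between candidate and
    each reference, divided by the total n-gram count of the references
    (a recall-oriented measure, unlike BLEU's precision)."""
    cand = ngrams(candidate, n)
    match = total = 0
    for ref in references:
        r = ngrams(ref, n)
        match += sum(min(cnt, cand[g]) for g, cnt in r.items())
        total += sum(r.values())
    return match / total

score = rouge_n('the heart is normal'.split(),
                ['the heart size is normal'.split()])
```

Here 4 of the 5 reference unigrams are recovered, giving 0.8; the missed word "size" is exactly the kind of omission a recall metric penalizes.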
Comparison with expert manual summaries improves the stability and robustness of the evaluation. The neural machine translation (NMT) used in this paper is more powerful than its predecessor, statistical machine translation (SMT): the word order of generated medical reports is often correct, but errors of omission become more frequent, so a recall-oriented indicator like ROUGE is needed to evaluate them.
Firstly, an image encoder is used to extract global and regional visual features from the input image. The image encoder is a CNN that automatically extracts hierarchical visual features. More specifically, we resize the input image to 224 × 224 (corresponding to the image-size parameter).
As shown in Figures 3–5, a dropout layer (corresponding to the dropout-rate parameter) with a rate of 0.3, 0.5 or 0.7 was added to the network to reduce overfitting; the rate is the probability that a unit's output is discarded.
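Dropout of this kind can be sketched as follows. This uses the inverted-dropout convention (survivors rescaled by 1/(1−rate) so the expected output is unchanged), which is an assumption — the paper only states the rates tried:

```python
import random

def dropout(x, rate, training=True, rng=random.Random(0)):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and scale survivors by 1/(1-rate); a no-op at
    test time."""
    if not training or rate == 0.0:
        return list(x)
    keep = 1.0 - rate
    return [xi / keep if rng.random() < keep else 0.0 for xi in x]

out = dropout([1.0] * 10, rate=0.5)   # roughly half the units zeroed
```

At a rate of 0.5, each surviving activation is doubled, so the layer's expected output matches its test-time behavior.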
Word embedding is responsible for processing the caption of each image given as input during training. The output of the word embedding is a vector of size 1 × 256 (corresponding to the word_embedding_size parameter), which is the other input to the decoder sequence.
For training, the batch size is set to 32 (corresponding to the parameter batch_size), the Adam optimizer decays the learning rate from 1e-2 to 1e-4 (corresponding to the parameter learning_rate), and training runs for 50 epochs (parameter epoch_num).
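A learning-rate schedule matching these endpoints can be sketched as an exponential decay. The decay shape is an assumption — the paper only states the start rate, end rate, and epoch count:

```python
def lr_schedule(epoch, epochs=50, lr_start=1e-2, lr_end=1e-4):
    """Exponential decay from lr_start at epoch 0 to lr_end at the last
    epoch, covering the 1e-2 -> 1e-4 range over 50 epochs stated above."""
    decay = (lr_end / lr_start) ** (epoch / (epochs - 1))
    return lr_start * decay

first, mid, last = lr_schedule(0), lr_schedule(25), lr_schedule(49)
```

The resulting rate would typically be fed to the Adam optimizer once per epoch.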
The effect on accuracy of the probability that a network layer's output is discarded is discussed below.
In the following two model tests, the outputs of both the first and the second test fell within the evaluation benchmark range. The label position of the model was adjusted before the second test: moving the label from the end of the whole sentence to the middle of the sentence improved all indicators, at the cost of increased training time. The time complexity of the model is O(n^2). As shown in Figures 6 and 7, the minimum values of the baseline range are all 0, and each score index lies below its maximum value, showing that the model can generate reasonably standard medical reports.
In this paper, two comparison models for medical report generation were also implemented, pre-trained with the same ResNet model. The results are shown in Table 1.
| Model | BLEU_1 | BLEU_2 | BLEU_3 | BLEU_4 | METEOR | ROUGE |
| --- | --- | --- | --- | --- | --- | --- |
| CNN-RNN | 0.3063 | 0.2026 | 0.1480 | 0.0994 | 0.1525 | 0.3273 |
| CNN-RNN-Att | 0.3235 | 0.2374 | 0.1197 | 0.1084 | 0.1484 | 0.3256 |
| MRNA | 0.3773 | 0.2436 | 0.1726 | 0.1284 | 0.1635 | 0.3263 |
| RCLN | 0.4341 | 0.3336 | 0.2623 | 0.1373 | 0.2034 | 0.3663 |
CNN-RNN: the prototype CNN was published by LeCun in 1998 [27], who applied back-propagation to neural networks and proposed a new convolutional neural network. Ronald Williams and David Zipser proposed real-time recurrent learning for RNNs in 1989 [28], which serves as the recurrent basis.
CNN-RNN-Att: an attention mechanism was added on top of the previous model. The attention mechanism was published by the Google DeepMind team in 2014 [29]; in 2017, the Google machine translation team published "Attention is All You Need", in which the self-attention mechanism was used extensively to learn text representations.
Comparing the other models with the RCLN model shows that the model based on the multi-attention mechanism is superior to similar models in terms of BLEU, METEOR and ROUGE, indicating the effectiveness of the multi-attention mechanism for medical report generation [30]. The scores of the RCLN model are much higher than the CNN-RNN series and higher than the MRNA model, proving its effectiveness. Some statements in reports generated by other models are continuous but not coherent; in contrast, the model proposed in this paper is more coherent in context and more colloquial.
This paper mainly focuses on generating detailed findings for chest radiograph medical reports. For impression generation, classification-based methods may be better at distinguishing anomalies before drawing final conclusions. The results show that in the first example the findings and impressions are consistent with the actual situation; however, the findings and impressions generated in the second example leave out some abnormality descriptions. The main reason may be that the model was trained on a small training set with few abnormal samples, plus some inconsistencies caused by real noise in the original reports. Furthermore, the current model does not create high-quality new sentences that never appear in the training set, likely because correct grammar is difficult to learn from a small corpus when syntactic correctness is not part of the training objective.
In conclusion, it is believed that with more curated datasets and better noise reduction during preprocessing, better results will appear [31]. At the same time, multiple loop-processing statements can increase the model's depth, making the results more accurate. During data labeling, adding more high-quality sentences is expected to further improve result quality.
The research is supported by the National Natural Science Foundation of China (No.12105120, No.72174079, No.72101045), Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No.19KJB520004, No.21KJB520033), Jiangsu Province "333" project (BRA2020261), Jiangsu Qinglan Project, Lianyungang "521 project", Science and Technology project of Lianyungang High-tech Zone (No.ZD201912).
The authors declare that there is no conflict of interest.
[1] | A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and applications of fractional differential equations, Amsterdam: Elsevier, 2006. |
[2] | M. P. Lazarevic, Advanced topics on applications of fractional calculus on control problems, system stability and modeling, WSEAS Press, 2014. |
[3] | S. S. Ray, Nonlinear differential equations in physics, Springer Singapore, 2020. https://doi.org/10.1007/978-981-15-1656-6 |
[4] |
A. Elsaid, M. S. A. Latif, M. Maneea, Similarity solutions for multiterm time-fractional diffusion equation, Adv. Math. Phys., 2016 (2016), 7304659. http://dx.doi.org/10.1155/2016/7304659 doi: 10.1155/2016/7304659
![]() |
[5] |
M. S. A. Latif, D. Baleanu, A. H. A. Kader, Exact solutions for a class of variable coefcients fractional diferential equations using Mellin transform and the invariant subspace method, Differ. Equ. Dyn. Syst., 2024. https://doi.org/10.1007/s12591-024-00680-3 doi: 10.1007/s12591-024-00680-3
![]() |
[6] | P. Kulczycki, J. Korbicz, J. Kacprzyk, Fractional dynamical systems: Methods, algorithms and applications, 402 (2022), Switzerland: Springer. https://doi.org/10.1007/978-3-030-89972-1 |
[7] |
K. K. Ali, M. Maneea, M. S. Mohamed, Solving nonlinear fractional models in superconductivity using the q-Homotopy analysis transform method, J. Math., 2023 (2023), 6647375. https://doi.org/10.1155/2023/6647375 doi: 10.1155/2023/6647375
![]() |
[8] |
T. A. Sulaiman, H. Bulut, H. M. Baskonus, Optical solitons to the fractional perturbed NLSE in nano-fibers, Discrete Cont. Dyn. S., 13 (2020), 925–936. http://dx.doi.org/10.3934/dcdss.2020054 doi: 10.3934/dcdss.2020054
![]() |
[9] | K. Engelborghs, V. Lemaire, J. Belair, D. Roose, Numerical bifurcation analysis of delay differential equations arising from physiological modeling, J. Math. Biol., 42 (2001), 361–385. https://doi.org/10.1007/s002850000072 |
[10] | J. F. Gómez, L. Torres, R. F. Escobar, Fractional derivatives with Mittag-Leffler kernel, Switzerland: Springer International Publishing, 194 (2019). https://doi.org/10.1007/978-3-030-11662-0 |
[11] | Z. Y. Fan, K. K. Ali, M. Maneea, M. Inc, S. W. Yao, Solution of time fractional Fitzhugh-Nagumo equation using semi analytical techniques, Results Phys., 51 (2023), 106679. https://doi.org/10.1016/j.rinp.2023.106679 |
[12] | O. G. Gaxiola, S. O. Edeki, O. O. Ugbebor, J. Ruiz de Chavez, Solving the Ivancevic pricing model using the He's frequency amplitude formulation, Eur. J. Pure Appl. Math., 10 (2017), 631–637. |
[13] | K. K. Ali, M. A. Maaty, M. Maneea, Optimizing option pricing: Exact and approximate solutions for the time-fractional Ivancevic model, Alex. Eng. J., 84 (2023), 59–70. https://doi.org/10.1016/j.aej.2023.10.066 |
[14] | A. Arai, Exactly solvable supersymmetric quantum mechanics, J. Math. Anal. Appl., 158 (1991), 63–79. https://doi.org/10.1016/0022-247X(91)90267-4 |
[15] | U. Carow-Watamura, S. Watamura, The q-deformed Schrödinger equation of the harmonic oscillator on the quantum Euclidean space, Int. J. Mod. Phys. A., 9 (1994), 3898–4008. https://doi.org/10.1142/S0217751X94001618 |
[16] | A. Dobrogowska, A. Odzijewicz, Solutions of the q-deformed Schrödinger equation for special potentials, J. Phys. A: Math. Theor., 40 (2023). https://doi.org/10.1088/1751-8113/40/9/008 |
[17] | B. C. Lutfuoglu, A. N. Ikot, E. O. Chukwocha, F. E. Bazuaye, Analytical solution of the Klein-Gordon equation with a multi-parameter q-deformed Woods-Saxon type potential, Eur. Phys. J. Plus, 133 (2018). https://doi.org/10.1140/epjp/i2018-12299-y |
[18] | H. Eleuch, Some analytical solitary wave solutions for the generalized q-deformed Sinh-Gordon equation: $\frac{\partial^2 u}{\partial z\,\partial\zeta} = e^{\Theta u}[\sinh_q(u^\gamma)]^p - \delta$, Adv. Math. Phys., 2018 (2018), 5242757. https://doi.org/10.1155/2018/5242757 |
[19] | H. I. Alrebdi, N. Raza, S. Arshed, A. R. Butt, A. Abdel-Aty, C. Cesarano, et al., A variety of new explicit analytical soliton solutions of q-deformed Sinh-Gordon in (2+1) dimensions, Symmetry, 14 (2022), 2425. https://doi.org/10.3390/sym14112425 |
[20] | N. Raza, S. Arshed, H. I. Alrebdi, A. Abdel-Aty, H. Eleuch, Abundant new optical soliton solutions related to q-deformed Sinh-Gordon model using two innovative integration architectures, Results Phys., 35 (2022), 105358. https://doi.org/10.1016/j.rinp.2022.105358 |
[21] | K. K. Ali, M. S. Mohamed, M. Maneea, Exploring optical soliton solutions of the time fractional q-deformed Sinh-Gordon equation using a semi-analytic method, AIMS Math., 8 (2023), 27947–27968. https://doi.org/10.3934/math.20231429 |
[22] | K. K. Ali, W. G. Alharbi, Exploring unconventional optical soliton solutions for a novel q-deformed mathematical model, AIMS Math., 9 (2024), 15202–15222. https://doi.org/10.3934/math.2024738 |
[23] | A. F. Fareed, M. A. Elsisy, M. S. Semary, M. T. M. M. Elbarawy, Controlled Picard's transform technique for solving a type of time fractional Navier-Stokes equation resulting from incompressible fluid flow, Int. J. Appl. Comput. Math., 8 (2022). https://doi.org/10.1007/s40819-022-01361-x |
[24] | S. G. Samko, A. A. Kilbas, O. L. Marichev, Fractional integrals and derivatives: Theory and applications, New York: Gordon and Breach, 1993. https://api.semanticscholar.org/CorpusID:118631078 |
[25] | I. Podlubny, Fractional differential equations, San Diego: Academic Press, 1999. |
[26] | M. Caputo, M. Fabrizio, A new definition of fractional derivative without singular kernel, Progr. Fract. Differ. Appl., 1 (2015), 73–85. |
[27] | A. Elsaid, M. S. A. Latif, M. Maneea, Similarity solutions for solving Riesz fractional partial differential equations, Progr. Fract. Differ. Appl., 2 (2016), 293–298. https://doi.org/10.18576/pfda/020407 |
[28] | G. Adomian, R. Rach, Modified Adomian polynomials, Math. Comput. Model., 24 (1996), 39–46. https://doi.org/10.1016/S0895-7177(96)00171-9 |
[29] | H. Fatoorehchi, H. Abolghasemi, Improving the differential transform method: A novel technique to obtain the differential transforms of nonlinearities by the Adomian polynomials, Appl. Math. Model., 37 (2013), 6008–6017. https://doi.org/10.1016/j.apm.2012.12.007 |
[30] | G. C. Wu, D. Baleanu, W. H. Luo, Analysis of fractional nonlinear diffusion behaviors based on Adomian polynomials, Therm. Sci., 21 (2017), 813–817. https://doi.org/10.2298/TSCI160416301W |
[31] | M. Turkyilmazoglu, Accelerating the convergence of Adomian decomposition method (ADM), J. Comput. Sci., 31 (2019), 54–59. https://doi.org/10.1016/j.jocs.2018.12.014 |
[32] | A. M. S. Mahdy, A. Mtawa, Numerical study for the fractional optimal control problem using Sumudu transform method and Picard method, Mitt. Klosterneuburg, 66 (2016), 41–59. |
[33] | M. S. Semary, H. N. Hassan, A. G. Radwan, Controlled Picard method for solving nonlinear fractional reaction-diffusion models in porous catalysts, Chem. Eng. Commun., 204 (2017), 635–647. https://doi.org/10.1080/00986445.2017.1300151 |
[34] | R. S. Palais, A simple proof of the Banach contraction principle, J. Fixed Point Theory Appl., 2 (2007), 221–223. https://doi.org/10.1007/s11784-007-0041-6 |
[35] | J. Garcia-Falset, K. Latrach, E. Moreno-Gàlvez, M. A. Taoudi, Schaefer-Krasnoselskii fixed point theorems using a usual measure of weak noncompactness, J. Differ. Equ., 252 (2012), 3436–3452. https://doi.org/10.1016/j.jde.2011.11.012 |