
Surface defects are an important factor affecting the quality of steel plates and strips. More than 60% of the quality complaints raised by users of steel plate and strip products are caused by surface defects, resulting in enormous economic losses for steel companies [1]. With the technological advancement of optical instruments, image processing-based recognition of steel plate surface defects has become a research focus of scholars worldwide [2,3,4]. High-quality images of steel plates can be captured by coordinating the camera, light source, and laser line, and the surface defects can then be detected and classified by a custom algorithm. An automatic surface defect detection system can perform online detection of surface defects and provide timely feedback, both of which are key to improving the surface quality of steel plates and strips. With increasing production line speeds and increasingly stringent user requirements on product quality, it is urgent to improve the accuracy, speed, and efficiency of the defect detection and recognition algorithms in surface inspection systems.
The machine vision detection algorithm consists of two steps: feature extraction and classification. Multiple sets of features describing different aspects, such as gray level, shape, and texture, can be extracted from the defect image and are conducive to the correct classification of defects. However, too many features increase the complexity and degrade the performance of the classifier, so feature selection methods are generally used to achieve the best possible classification result without discarding informative features. Commonly used feature extraction algorithms include the gray-level co-occurrence matrix (GLCM) [5] and the scale-invariant feature transform (SIFT) [6]. With these methods, a high-dimensional space is mapped to a low-dimensional space to generate a linear combination of the original features and reduce the feature dimension. Defect classification falls under the scope of pattern recognition. Commonly used classification algorithms include support vector machines [7], naïve Bayes [8], K-nearest neighbors [9], and random forests (RFs) [10]. These classification algorithms have received increasing attention and have achieved good results in practice. However, the abovementioned traditional detection methods have poor generalization ability and rely on researchers' personal experience for feature engineering, which makes them very difficult to apply in large-scale industrial production.
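As a concrete illustration of such a hand-crafted pipeline, the sketch below extracts GLCM texture statistics and feeds them to an SVM. It is a minimal example assuming scikit-image and scikit-learn; the chosen GLCM properties, distances, and SVM settings are illustrative and are not those used in the cited works.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19 (older: greycomatrix)
from sklearn.svm import SVC

def glcm_features(gray_img, levels=256):
    """Compute a small set of GLCM texture statistics from a uint8 grayscale image."""
    glcm = graycomatrix(gray_img,
                        distances=[1],
                        angles=[0, np.pi / 2],
                        levels=levels,
                        symmetric=True,
                        normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_classifier(images, labels):
    """images: list of uint8 grayscale arrays; labels: integer class ids (hypothetical data)."""
    X = np.stack([glcm_features(img) for img in images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # illustrative hyperparameters
    clf.fit(X, labels)
    return clf
```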
In recent years, deep convolutional neural networks (CNNs) [11,12,13] have sparked a resurgence of visual research because of their ability to learn image features automatically. Inspired by the natural visual perception mechanism of biological systems, CNNs substantially reduce the number of training parameters through weight sharing and are trained mainly by supervised learning with backpropagation. However, in practical applications deep CNNs have a shortcoming: the specific parameters, such as the weights and biases obtained during training, cannot be set directly; only the hyperparameters of learning can be adjusted to control the fitting, after which the weight and bias parameters needed for recognition are generated. This black-box behavior has been criticized by researchers. In steel plate surface defect recognition in particular, the image data have low contrast, and CNNs learn image features through randomly initialized convolution kernels. With such kernels, whose parameters are uncertain, important features may not be mined, which easily leads to misjudgment and limits further improvement of the recognition rate.
To detect cracks in nuclear power plant components, Chen and Jahanshahi [14] proposed a deep learning framework based on a CNN and a naïve Bayes data fusion scheme to analyze individual video frames for crack detection. The authors presented a new data fusion scheme to aggregate the information extracted from each video frame and thereby enhance the overall performance and robustness of the system. He [15] proposed a method to extract potential features of steel plate defects by fusing multiple convolutional network feature layers; however, this method relies on the network to perform the fusion, the learning process involves a certain randomness, and a huge amount of data is needed. Wang et al. [16] proposed an improved RF algorithm with optimal multi-feature-set fusion for distributed defect recognition. This algorithm fuses the histogram of oriented gradients (HOG) feature set and the GLCM feature set by using a multi-feature-set fusion factor to adjust the number of decision trees assigned to each feature set in the RF algorithm. In this paper, a feature fusion preprocessing method is designed. This method achieves a high recognition rate for six categories of defects with a small amount of data, helps the CNN mine the deeper features of steel plate defect images, and fully exploits the learning advantages of neural networks.
Since both the steel plate matrix and the defects are mostly gray and black, the overall contrast between the defects and the background is extremely low. Even if an advanced color camera is used to collect images in red-green-blue format, the color features of steel plate defects are not obvious; therefore, the images acquired by industrial cameras are mostly single-channel grayscale images. In this study, specific contour and texture detection operators are combined to process the original grayscale image acquired by an industrial camera: the extracted feature maps are fused with the original image on the basis of the channel model, and the result is finally converted back to a single channel according to a certain weight ratio. This processing not only increases the feature dimension of the image and makes the image more easily activated by the network for feature extraction, but also keeps the same pixel level as the original grayscale image without incurring extra computational cost. Additionally, this artificial guidance helps the CNN learn image features purposefully and avoids the drawbacks of black-box algorithms, thereby improving the recognition accuracy for low-contrast images.
The main contributions of this study are as follows:
(1) This study promotes the purposeful learning of a CNN, improves the classification capability of the network, and proposes a multichannel fusion strategy based on the combination of feature operators. This strategy not only increases the feature dimension but also retains the feature information of the original image.
(2) This study proposes converting multiple channels into a single channel for network training according to a certain weight ratio to reduce computational cost. Such a conversion not only ensures the classification accuracy but also does not affect the computational speed.
(3) This study obtains the optimal weight ratio of fusion and conversion by using the traversal method, which involves comparing the impacts of different fusion and conversion schemes on the classification results.
When a CNN extracts features from images, the shallow convolution kernels mainly extract edge features, while the deep convolution kernels mainly extract high-level abstract features such as texture; these features are then integrated through the fully connected layers to make the classification. A convolution kernel with a specific template can extract directional features of the image, as with the Sobel [17], Laplace [18], Prewitt [19], and Roberts [20] operators, and texture features can be encoded by comparing and integrating neighboring pixels, as with the local binary pattern (LBP) [21]. In this paper, feature extraction operators are used to extract the edge and texture information of an image so that it is easier for the shallow convolution kernels to learn edge features and for the deep convolution kernels to learn texture features. Additionally, the influence of different fusion schemes on the steel plate surface defect recognition rate is investigated.
Five feature extraction operators are commonly used: Sobel, Laplace, Prewitt, Roberts, and LBP. In this paper, each operator or pair of operators is chosen to process the original grayscale image, and the resulting feature matrices are fused with the original grayscale matrix: the original grayscale matrix is placed in the middle channel, and the images processed by the feature operators are randomly placed on the remaining two channels. The combinations of the same feature operator and of different feature operators yield fifteen schemes, each forming a three-channel color-effect image. The fused three-channel data impose the computational burden of two additional channels on the model, whereas the model for the original grayscale image requires calculations for only a single channel. To mitigate this increase in computational load after multichannel fusion, a traversal method is used to find the optimal weight ratio, and the three-channel image data are converted into single-channel image data according to this weight ratio. The flowchart is shown in Figure 1.
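A minimal sketch of the channel fusion step is shown below, assuming OpenCV. The Sobel/Laplace pair and the fixed channel order (operator, original, operator) are one possible arrangement of the schemes described above, and `fuse_channels` is a hypothetical helper name.

```python
import cv2
import numpy as np

def fuse_channels(gray, op_a="sobel", op_b="laplace"):
    """Stack two operator-filtered maps around the original grayscale image
    (original in the middle channel) to form a three-channel color-effect image."""
    def apply_op(img, name):
        if name == "sobel":
            gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
            out = cv2.magnitude(gx, gy)
        elif name == "laplace":
            out = np.abs(cv2.Laplacian(img, cv2.CV_32F, ksize=3))
        else:
            raise ValueError(f"unsupported operator: {name}")
        # Rescale the filter response back to the 0-255 range of the original image.
        return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    a = apply_op(gray, op_a)
    b = apply_op(gray, op_b)
    return np.dstack([a, gray, b])  # shape (H, W, 3)
```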
In this study, the same CNN framework and hyperparameters are used for the training. The sensitivity of the feature fusion preprocessing model at the defect area is verified using a heat map and a feature activation map of the visual learning area. The accuracy (acc)-loss curve is used to observe the convergence of the model before and after feature preprocessing. The confusion matrix is used to assess the accuracy of network recognition before and after feature fusion preprocessing. The goal of the experiment is to find the optimal fusion scheme and conversion weight.
The NEU steel surface defect database created by Song et al. [22] of Northeastern University, China, was selected for this study. It contains six categories of defects, namely, crazing, inclusion, patches, pitted, rolled-in, and scratches, with 300 samples per category, for a total of 1800 samples. The image resolution is 200 × 200 pixels. Figure 2 shows samples of the defects.
Defect recognition based on the NEU steel plate surface defect database [23] faces three difficulties: (1) Defects of different classes can have similar features; (2) illumination and material changes affect the gray values of the acquired defect images; and (3) various types of noise are present. To prevent overfitting, data augmentation is used in this study to expand the existing database to 5000 samples, increasing the data volume of the experimental set and improving the generalization ability of the model. The experimental set is divided into a training set, a validation set, and a test set at a ratio of 7:2:1, i.e., 3500 training samples, 1000 validation samples, and 500 test samples. To ensure sample consistency, the number of samples per category is kept roughly the same in each subset.
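The following sketch illustrates one way to implement the augmentation to 5000 samples and the stratified 7:2:1 split, assuming Keras' ImageDataGenerator and scikit-learn. The specific augmentation transforms and the arrays `x`, `y` (the 1800 NEU images and their integer labels) are assumptions, since the paper does not list its exact settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings; the paper does not specify its exact transforms.
augmenter = ImageDataGenerator(rotation_range=10,
                               width_shift_range=0.05,
                               height_shift_range=0.05,
                               horizontal_flip=True,
                               vertical_flip=True)

def augment_to(x, y, target=5000, batch_size=100):
    """Keep the originals and draw augmented copies until `target` samples are reached."""
    xs, ys, n = [x], [y], len(x)
    flow = augmenter.flow(x, y, batch_size=batch_size, shuffle=True)
    while n < target:
        xb, yb = next(flow)
        xs.append(xb)
        ys.append(yb)
        n += len(xb)
    return np.concatenate(xs)[:target], np.concatenate(ys)[:target]

# x: (1800, 200, 200, 1) images, y: (1800,) integer labels -- assumed to be loaded elsewhere.
x_aug, y_aug = augment_to(x, y)
# 7:2:1 split of the 5000-sample set, stratified so each class is roughly balanced per subset.
x_train, x_rest, y_train, y_rest = train_test_split(x_aug, y_aug, test_size=0.3,
                                                    stratify=y_aug, random_state=0)
x_val, x_test, y_val, y_test = train_test_split(x_rest, y_rest, test_size=1/3,
                                                stratify=y_rest, random_state=0)
```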
To fairly compare the results before and after feature fusion processing, identical hyperparameters are used: Adam as the optimizer, ReLU as the activation function, random weight initialization, and a learning rate decay strategy with an initial learning rate of 0.00001, 100 epochs, and a batch size of 32.
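A minimal Keras sketch of this training configuration is given below. The `ReduceLROnPlateau` schedule stands in for the unspecified learning rate decay strategy, and `model`, `x_train`, `y_train`, `x_val`, and `y_val` refer to the hypothetical objects defined in the neighboring sketches.

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Adam optimizer with the stated initial learning rate of 1e-5.
optimizer = Adam(learning_rate=1e-5)
# The paper only mentions "learning rate decay"; this particular schedule is an assumption.
lr_decay = ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5, min_lr=1e-7)

model.compile(optimizer=optimizer,
              loss="sparse_categorical_crossentropy",  # integer labels for the six classes
              metrics=["accuracy"])
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=100, batch_size=32,
                    callbacks=[lr_decay])
```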
All experiments were run on a graphics workstation with two 10-core Intel Xeon E5-260 Wv4 central processing units (CPUs), an NVIDIA Titan 1080Ti graphics processing unit (GPU), and 128 GB of memory. The development environment was Python on Windows 10, with Keras as the deep learning framework.
Five classical models, LeNet-5 [24], AlexNet [25], VGG16 [26], InceptionV3 [27], and ResNet50 [28], are selected as candidate CNN frameworks. Taking the original image data as samples, the recognition accuracies of these classical network models are compared under identical hyperparameters; the results are shown in Figure 3. VGG16 achieves the highest recognition rate, 95.55%, indicating that it is the most suitable for learning the features of the NEU data. Therefore, VGG16 is chosen as the main framework.
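The sketch below builds a VGG16 backbone trained from scratch with a small classification head for the six defect categories, using tf.keras.applications. The head size is an assumption, since the paper does not detail its top layers; use `input_shape=(200, 200, 3)` for the three-channel fused inputs and `(200, 200, 1)` for the single-channel images.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_vgg16(input_shape=(200, 200, 1), num_classes=6):
    """VGG16 convolutional backbone (random weights) plus a small dense head."""
    base = VGG16(weights=None, include_top=False, input_shape=input_shape)
    x = layers.Flatten()(base.output)
    x = layers.Dense(256, activation="relu")(x)  # head width is an assumption
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, outputs)

model = build_vgg16()
```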
To study the influence of fusing the same operator with the original image on the model, the image processed by each operator is combined with the original grayscale image: the original grayscale data are placed in the middle channel, and the operator-processed grayscale matrix is placed in the other two channels. This yields five fusion schemes, as shown in Figure 4. Five sets of experiments (Nos. 1 to 5) are carried out for each fusion scheme using the above hyperparameters, for a total of 25 experiments. The results are shown in Table 1. The Sobel:image:Sobel fusion scheme has the highest average accuracy, reaching 96.22%.
Table 1. Recognition accuracy (%) of the five same-operator fusion schemes over five runs.

| Fusion scheme | No. 1 | No. 2 | No. 3 | No. 4 | No. 5 | Average accuracy |
|---|---|---|---|---|---|---|
| Sobel:img:Sobel | 97.24 | 96.54 | 94.27 | 93.28 | 96.77 | 96.22 |
| Roberts:img:Roberts | 95.34 | 95.49 | 94.65 | 96.97 | 95.30 | 95.55 |
| Prewitt:img:Prewitt | 95.25 | 93.94 | 95.21 | 94.24 | 94.71 | 94.67 |
| Laplace:img:Laplace | 88.13 | 87.12 | 86.44 | 87.63 | 87.08 | 87.28 |
| LBP:img:LBP | 92.11 | 92.48 | 92.26 | 93.32 | 94.43 | 92.92 |
To study the influence of the feature operator combination scheme on the recognition accuracy of the model, the image processed by each pair of operators is combined with the original grayscale image: the original grayscale data are placed in the middle channel, and the grayscale matrices processed by the two operators are randomly placed in the other two channels. This yields ten combination schemes based on channel fusion, as shown in Figure 5. The fused data are all in color; the inclusion areas of the images fused with edge operators tend to be darker, while the inclusion area and the steel plate background texture are more prominent in the image data processed by the LBP operator. Using the aforementioned model and hyperparameters, five sets of experiments (Nos. 1 to 5) were carried out for each fusion scheme, for a total of 50 experiments. The accuracy of the fusion results obtained with the different operator pairs is shown in Table 2.
Table 2. Recognition accuracy (%) of the ten two-operator fusion schemes over five runs.

| Fusion scheme | No. 1 | No. 2 | No. 3 | No. 4 | No. 5 | Average accuracy |
|---|---|---|---|---|---|---|
| Sobel:img:Roberts | 98.88 | 97.36 | 98.05 | 99.09 | 97.97 | 98.27 |
| Sobel:img:Laplace | 97.51 | 98.14 | 98.98 | 98.91 | 99.45 | 98.61 |
| Sobel:img:Prewitt | 98.51 | 97.04 | 98.17 | 99.01 | 97.27 | 98.00 |
| Sobel:img:LBP | 76.11 | 76.25 | 75.22 | 74.89 | 75.58 | 75.61 |
| Roberts:img:Laplace | 98.02 | 96.94 | 96.01 | 98.57 | 99.61 | 97.83 |
| Roberts:img:Prewitt | 95.17 | 96.37 | 95.25 | 96.44 | 97.32 | 96.11 |
| Roberts:img:LBP | 81.21 | 81.37 | 83.27 | 81.65 | 80.25 | 81.55 |
| Laplace:img:Prewitt | 97.41 | 97.11 | 97.02 | 96.51 | 95.55 | 96.72 |
| Laplace:img:LBP | 70.52 | 67.61 | 65.84 | 64.32 | 67.26 | 67.11 |
| Prewitt:img:LBP | 19.84 | 14.28 | 15.77 | 16.19 | 17.22 | 16.66 |
Table 2 shows that, among the ten combination schemes, the Sobel:image:Laplace fusion has the highest average accuracy, reaching 98.61%. The fused images involving the LBP operator generally perform poorly: although the CNN is sensitive to the visual feature information of the image, it is not good at interpreting the intrinsic meaning of the abstract texture encoding produced by LBP.
Processing the fused three-channel data increases the computational cost, whereas the original grayscale model needs to process only single-channel data. In this study, to avoid this drawback of multichannel fusion, the three-channel data are converted into a single-channel grayscale image according to a certain weight ratio, as shown in Eq (1):
Final image = α × (1st channel) + β × (2nd channel) + γ × (3rd channel)    (1)
where α, β, and γ are the weight coefficients of the three channels and sum to 1. The traversal method is used with a step size of 0.1, giving a total of 36 combinations (each weight is at least 0.1). Based on the original model, the average accuracy of each combination is tested, as shown in Figure 6.
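A sketch of this traversal is given below: `weight_grid` enumerates all triples with step 0.1 and every weight at least 0.1 (36 combinations), and `weighted_to_single_channel` applies Eq (1). Both function names are hypothetical.

```python
import itertools
import numpy as np

def weighted_to_single_channel(fused, alpha, beta, gamma):
    """Collapse the fused three-channel image back to one channel, as in Eq (1)."""
    f = fused.astype(np.float32)
    single = alpha * f[..., 0] + beta * f[..., 1] + gamma * f[..., 2]
    return np.clip(single, 0, 255).astype(np.uint8)

def weight_grid(step=0.1):
    """All (alpha, beta, gamma) with each weight >= step and summing to 1;
    with step = 0.1 this yields the 36 combinations traversed in the text."""
    ticks = np.arange(step, 1.0, step)
    combos = []
    for a, b in itertools.product(ticks, ticks):
        c = 1.0 - a - b
        if c >= step - 1e-9:
            combos.append((round(float(a), 1), round(float(b), 1), round(float(c), 1)))
    return combos

# Example: evaluate every weight triple on a fused image (model evaluation omitted).
# for alpha, beta, gamma in weight_grid():
#     single = weighted_to_single_channel(fused_img, alpha, beta, gamma)
```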
The results show that when the weight coefficients α, β, and γ are 0.2, 0.6, and 0.2, respectively, the computational cost is the same as with the original data, while the accuracy is 1.16% higher than that obtained with the three-channel fusion. Figure 7 compares the single-channel image converted using the 0.2:0.6:0.2 weight ratio with the original image. The figure shows that the defect edges in the feature fusion image of the Sobel:image:Laplace scheme are more obvious. The weight of 0.6 assigned to the original grayscale image properly retains the information of the original image, which helps the convolution kernels learn their weights, and the weight of 0.2 assigned to each edge extraction operator highlights the defect features in the steel plate image, making them easier for the CNN to capture. The same computational cost as that of the original data is retained, while the recognition rate reaches as high as 99.77%.
The feature activation maps of the first layer of VGG16 before and after feature fusion processing are visualized, and the learning results are analyzed. One image is randomly selected for processing (Figure 8). The yellow areas indicate that the defect features are activated and contain the black inclusion defects; hence, these features produce the correct response. Figure 8 also shows that, in the activation maps without feature fusion, 23 feature maps generate correct activations in the defect area, whereas after feature fusion, 31 feature maps generate correct activations in the defect area. These results indicate that, because the edge extraction operators capture the target defect area in advance and fuse it with the original image, they enhance the edge pixel features of the defect area while ensuring that the original data are not lost. Consequently, it is easier for the convolution kernels to capture the defect.
In this study, the gradient-weighted class activation mapping (Grad-CAM) algorithm [29] is used to further validate the target area that the model finally learns. This algorithm displays the region the model attends to as a heat-affected zone, making it easy to observe whether the final model has learned the features of the correct area. Given an input image, the algorithm obtains the output feature map of a convolution layer from the trained model without modifying the original model structure; the gradient of the target category with respect to each channel of this feature map is pooled and used as a weight to generate a spatial map of the activation intensity of the input image for that category, from which the heat-affected zone is generated. In this paper, this method is used to visualize the model before and after feature fusion. Figure 9 shows the learning result for the inclusion defect. The defect area of the steel plate image processed by feature fusion is red, i.e., a “high-temperature” zone that is accurately located in the image, whereas the defect area of the original image data not processed by the feature operators is not well covered. These results show that the data processed by the feature operators have enhanced pixel values in the defect area, so the area is more likely to be captured by the CNN. The heat map analysis further verifies that feature fusion preprocessing significantly improves the recognition ability of the CNN model.
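For reference, a minimal Grad-CAM sketch in tf.keras is shown below, assuming the functional VGG16-based model sketched earlier. The layer name `block5_conv3` (the last VGG16 convolution layer) is an assumption about which layer is visualized.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, layer_name="block5_conv3", class_index=None):
    """Gradient-weighted class activation map for one image of shape (H, W, C)."""
    conv_layer = model.get_layer(layer_name)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])

    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(x)
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # use the predicted class
        class_score = preds[:, class_index]

    grads = tape.gradient(class_score, conv_out)     # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))     # global-average-pooled gradients
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)[0]
    cam = tf.nn.relu(cam)                            # keep only positive influence
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()                               # upsample and overlay for display
```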
The acc and loss curves of the model before and after feature fusion are plotted in Figure 10. The acc curve after feature operator processing fluctuates within a narrower range and is higher overall, whereas the acc curve without feature operator processing fluctuates over a wider range and is lower overall. Similarly, the loss curve after feature operator processing fluctuates within a narrower range and is lower overall, whereas the loss curve without feature operator processing fluctuates over a wider range and is higher overall. It can be concluded that the model with feature fusion has a higher fitting ability and better convergence than the model without feature fusion.
The confusion matrix is used in this study to further compare the classification results of the model before and after feature fusion. Each row of the matrix represents the results for a predicted category. The optimal results before and after feature fusion are shown as confusion matrices in Figure 11. The recognition accuracy for the crazing category with a steel plate background is 100% both with and without feature fusion; for the inclusion category, it is 99.33% with feature fusion and 97.58% without; for the patches category, 100% with feature fusion and 98.48% without; for the pitted category, 100% with feature fusion and 78.48% without; for the rolled-in category, 100% with feature fusion and 95.76% without; and for the scratches category, 99.33% with feature fusion and 95.15% without. Thus, the recognition accuracy of the model is higher in every category after feature fusion preprocessing than without it, demonstrating the effectiveness of feature fusion preprocessing in improving recognition accuracy.
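Per-category accuracies of this kind can be read off the confusion matrix as in the short sketch below; note that scikit-learn places true labels on the rows, which may differ from the layout in Figure 11. `model`, `x_test`, and `y_test` are the hypothetical objects from the earlier sketches.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Per-class recognition accuracy computed from the test-set predictions.
y_pred = np.argmax(model.predict(x_test), axis=1)
cm = confusion_matrix(y_test, y_pred)            # rows: true classes, columns: predictions
per_class_acc = cm.diagonal() / cm.sum(axis=1)   # recognition accuracy per defect category
```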
This study proposes an image preprocessing method based on feature operator fusion, mainly used for defect recognition in low-contrast grayscale images of steel plate surfaces. Through extensive combination tests, the original grayscale image is processed with the Sobel and Laplace operators, and the resulting feature maps are superimposed with the original grayscale image at a weight ratio of 0.2:0.6:0.2 to form the input to the CNN. The experimental results show that this fusion scheme effectively improves the recognition rate on the test set. Without changing the network framework, the accuracy reaches 99.77%, which is 4.22% higher than that of the CNN trained directly on the original images without fusion.
In this study, the influence of several commonly used operators on the accuracy of the model has been investigated, and a solution from the perspective of optimizing the data source has been offered to address the bottleneck of CNN learning. The results show that this feature operator-based channel fusion strategy is highly effective at enhancing the pixel features of defects and making them easier for the network to capture, thereby successfully improving classification accuracy.
The authors acknowledge the support from the National Key R & D Program of China (Grant No. 2018YFB1701601).
The authors declare that they have no conflicts of interest regarding this work.