
Coal spontaneous combustion is a serious disaster in China: it has caused nearly 4,000 mine fires, with direct economic losses from the combusted coal amounting to billions [1]. It also triggers secondary disasters such as gas and dust explosions and air pollution. Effective prediction is therefore the key to the prevention and control of coal spontaneous combustion. As coal oxidizes and its temperature rises, it releases characteristic indicator gases such as CO, CO2, CH4, C2H6, C2H4, C2H2 and N2, so the relationship between the degree of coal spontaneous combustion and these gas products is highly nonlinear. By establishing the correspondence between the indicator gases and the coal temperature, and by monitoring the gas products of the coal sample reaction, the temperature, the oxygen consumption and other indexes, the signs of coal spontaneous combustion can be detected and its development trend predicted [2]. In the spontaneous combustion process, however, owing to the limitations of experimental conditions, the heating rate of the coal differs from stage to stage.
The most common approach to predicting coal spontaneous combustion is comprehensive evaluation using machine learning methods such as cluster analysis [3], neural networks [4] and support vector machines [5]. Among them, the support vector machine, grounded in VC-dimension theory and the structural risk minimization principle of statistical learning theory, generalizes well from small samples and can therefore make effective and accurate predictions. However, the traditional support vector machine performs poorly on imbalanced sample classification [6].
Based on a proximate analysis of the composition of different coal samples, Deng used multiple regression analysis to set up a regression equation for prediction, tested the significance of the equation and its coefficients, and carried out residual analysis to verify the equation's adequacy [7]. Wang proposed a prediction method that combines the grey GM(1,1) model with a Markov model [8]. Based on an analysis of seam spontaneous combustion at the coal mining face, support vector machine prediction has been applied to assess the spontaneous combustion danger of residual coal in the goaf [9]. Paper [10] introduces fuzzy membership and the least squares method, adopts a neighborhood rough set to reduce the dimension of the input vectors, and uses the PSO algorithm to optimize the parameters of the SVM model; a fuzzy least squares spherical support vector machine is then presented, the quadratic programming problem is solved by sequential minimal optimization, and a coal spontaneous combustion forecast model is established. Building on this line of work, this paper combines the least squares method, transfer learning and the particle swarm algorithm with the support vector machine, and proposes a prediction method based on an adaptive particle swarm optimization least squares support vector machine (APSO-LSSVM).
The computational complexity of the standard SVM algorithm grows with the number of training samples: when the training set is too large, solving the corresponding quadratic programming problem becomes complex and computation slows down accordingly, which greatly restricts the speed of the standard SVM in practical applications. The least squares support vector machine (LS-SVM) algorithm addresses this problem. The main differences between LS-SVM and the standard SVM lie in the loss function and the use of equality constraints.
In the support vector machine, the input space consists of the original observed data, which a kernel function maps into a high-dimensional feature space; there, the support vector machine classifies or fits the data by a linear function. LS-SVM for classification is derived as follows. Let the training set be $S=\{(x_i,y_i)\mid i=1,2,\cdots,m\}$, where $x_i\in\mathbb{R}^n$ is the input data and $y_i\in\mathbb{R}$ is the output data. Differing from the classical SVM, LS-SVM uses the SRM criterion to construct the minimization objective and its constraints as follows [11]:
$$\min J(w,e)=\frac{1}{2}w^Tw+\frac{\gamma}{2}\sum_{i=1}^{m}e_i^2 \quad \text{s.t. } y_i=w^T\Phi(x_i)+b+e_i \tag{1}$$
wherein $w$ is the weight vector, $\gamma$ is the regularization constant, $b$ is the bias term, and $e_i$ is the error variable of the $i$-th sample.
To solve the optimization problem of Eq (1), we introduce Lagrange multipliers $\alpha_i$ and, from the KKT optimality conditions, translate it into the following system of linear equations:
$$\begin{bmatrix} 0 & L^T \\ L & Q+\gamma^{-1}I \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix}=\begin{bmatrix} 0 \\ y \end{bmatrix} \tag{2}$$
wherein $L=[1,1,\cdots,1]^T\in\mathbb{R}^m$, $\alpha=[\alpha_1,\alpha_2,\cdots,\alpha_m]^T\in\mathbb{R}^m$, $I$ is the identity matrix, $Q_{ij}=y_iy_j\Phi(x_i)^T\Phi(x_j)=y_iy_jK(x_i,x_j)$ with the kernel function $K(x_i,x_j)=\exp(-\|x_i-x_j\|^2/2\sigma^2)$ satisfying the Mercer condition, and $y=[y_1,y_2,\cdots,y_m]^T\in\mathbb{R}^m$. The classification decision function of LS-SVM is then [12]:
$$f(x)=\mathrm{sgn}\left(\sum_{i=1}^{m}\alpha_iy_iK(x,x_i)+b\right) \tag{3}$$
Rewriting Eq (2) as a matrix equation $AX=z$, the LS-SVM algorithm solves for $X$ by the least squares method, which requires inverting $A$. For the large-scale problems found in practical engineering, however, the dimension of $A^TA$ is large and the matrix inversion is difficult to carry out [13]. We can instead solve the matrix equation by the iterative computation of a PSO algorithm.
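To make the construction concrete, the following minimal numpy sketch assembles the linear system of Eq (2), solves it directly, and classifies with Eq (3). This is our own illustration under hypothetical names (rbf_kernel, lssvm_train, lssvm_predict), not the paper's code; the np.linalg.solve call stands in for the direct least squares solve that the APSO iteration described later is designed to replace.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma2=0.15):
    # K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2)), the Mercer kernel used here.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma2))

def lssvm_train(X, y, gamma=1000.0, sigma2=0.15):
    # Assemble and solve the (m+1) x (m+1) linear system of Eq (2).
    m = len(y)
    Q = np.outer(y, y) * rbf_kernel(X, X, sigma2)   # Q_ij = y_i y_j K(x_i, x_j)
    A = np.zeros((m + 1, m + 1))
    A[0, 1:] = 1.0                                  # L^T
    A[1:, 0] = 1.0                                  # L
    A[1:, 1:] = Q + np.eye(m) / gamma               # Q + gamma^{-1} I
    z = np.concatenate(([0.0], y))                  # right-hand side [0, y]^T
    sol = np.linalg.solve(A, z)                     # direct solve; APSO replaces this
    return sol[1:], sol[0]                          # alpha, b

def lssvm_predict(Xtr, ytr, alpha, b, Xte, sigma2=0.15):
    # Decision function of Eq (3): f(x) = sgn(sum_i alpha_i y_i K(x, x_i) + b).
    return np.sign(rbf_kernel(Xte, Xtr, sigma2) @ (alpha * ytr) + b)
```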
The LS-SVM algorithm is originally designed for binary classification; to handle multi-class problems, a proper multi-class classifier must be constructed, usually by combining multiple binary classifiers. The common schemes are one-to-one and one-to-many [14]. The one-to-one scheme trains an LS-SVM between every pair of categories, so $k$ categories of samples require $k(k-1)/2$ LS-SVMs; an unknown sample is assigned to the category receiving the most votes. Since every classifier must be traversed, this scheme is complex to train and inefficient at classification. In the one-to-many scheme, the samples of one category are taken as one class and all remaining samples as the other, so $k$ categories require $k$ LS-SVMs; an unknown sample is assigned to the category with the maximum classification function value. Compared with the one-to-one scheme, this greatly reduces the number of classifiers to train and speeds up training. A minimal one-to-many wrapper is sketched below.
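This is our own sketch of the one-to-many construction, reusing the hypothetical lssvm_train and rbf_kernel from the previous sketch; the kernel width sigma2 must match between training and prediction.

```python
def train_one_vs_rest(X, labels, classes, **kw):
    # One LS-SVM per class: samples of class c become +1, all others -1.
    models = {}
    for c in classes:
        y = np.where(labels == c, 1.0, -1.0)
        models[c] = (y, *lssvm_train(X, y, **kw))
    return models

def predict_one_vs_rest(models, Xtr, Xte, sigma2=0.15):
    # Assign each test sample to the class with the maximum decision value.
    classes = list(models)
    K = rbf_kernel(Xte, Xtr, sigma2)
    scores = np.stack([K @ (alpha * y) + b
                       for y, alpha, b in (models[c] for c in classes)])
    return np.array(classes)[np.argmax(scores, axis=0)]
```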
The PSO algorithm is an evolutionary computation technique proposed in 1995 by Eberhart and Kennedy, inspired by observations of bird foraging behavior. Its basic idea is to seek the optimal solution through cooperation and information sharing among the individuals of a group: the system initializes a swarm of random particles and finds the optimal solution by iteration. In every iteration, each particle updates itself by tracking two "extreme values"; having found them, it adjusts its velocity in every dimension and computes its new position [15,16]. The particle evolution formula is
$$v_i^{k+1}=\omega v_i^k+c_1r_1(p_i^k-x_i^k)+c_2r_2(p_g^k-x_i^k), \qquad x_i^{k+1}=x_i^k+v_i^{k+1} \tag{4}$$
in which $r_1,r_2\sim U(0,1)$, $v_i^k$ is the velocity of particle $i$ at iteration $k$, $x_i^k$ is its position at iteration $k$, $p_i^k$ is its individual extreme point at iteration $k$, $p_g^k$ is the current global extreme point of the swarm at iteration $k$, $\omega$ is the inertia weight factor, and the constants $c_1,c_2$ are the acceleration factors.
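For concreteness, one iteration of Eq (4) for a whole swarm can be sketched in numpy as follows (our illustration; per-dimension random numbers $r_1,r_2$ are assumed):

```python
def pso_step(x, v, p_best, g_best, w, c1=2.0, c2=2.0, rng=np.random):
    # x, v, p_best: arrays of shape (n_particles, dim); g_best: shape (dim,).
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq (4)
    return x + v_new, v_new
```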
In the PSO model, if the acceleration factors $c_1,c_2$ or the inertia weight factor $\omega$ take values that are too large, the particle swarm may miss the optimal solution and the algorithm may fail to converge [17,18]. Even when it converges, as the swarm gradually contracts all particles tend toward the same solution, which can cause premature convergence, a slow convergence rate in the later stage of the algorithm, and a loss of precision.
To address this, the adjustment strategy is: when the search space dimension is large, the inertia weight should be increased appropriately to enhance global search ability; when it is small, the inertia weight should be reduced appropriately to ensure search efficiency. Adjusting the inertia weight must balance search ability against search efficiency. To solve the problem, we adjust the inertia weight factor $\omega$, add the constraint factor $\alpha$, and introduce a source-domain factor and a target-domain factor through transfer learning [19], improving the basic model into the following self-adaptive PSO (APSO) model:
$$v_i^{k+1}=\omega v_i^k+c_1r_1\left[\xi_q(p_i^k-x_i^k)+\xi_{q-1}(p_i^{k-1}-x_i^{k-1})\right]+c_2r_2\left[\xi_q(p_g^k-x_i^k)+\xi_{q-1}(p_g^{k-1}-x_i^{k-1})\right] \tag{5}$$
$$v_i=\begin{cases} v_{\max}, & v_i>v_{\max}\\ -v_{\max}, & v_i<-v_{\max} \end{cases} \tag{6}$$
in which $\xi_q,\xi_{q-1}\in\mathbb{R}$ with $\xi_q+\xi_{q-1}=1$; $\xi_q$ is the target-domain factor and $\xi_{q-1}$ is the source-domain factor. In particular, when $\xi_{q-1}=0$ and $\xi_q=1$, PSO becomes a special case of APSO. From a psychological perspective, using knowledge of the source domain amounts to accumulating the particle's individual search experience, which favors faster convergence of the algorithm.
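The adaptive update of Eq (5), together with the velocity clamp of Eq (6), might be sketched as follows. The particular values of $\xi_q$ and $\xi_{q-1}$ are illustrative placeholders, constrained only by $\xi_q+\xi_{q-1}=1$:

```python
def apso_step(x, v, x_prev, p_best, p_prev, g_best, g_prev, w,
              xi_q=0.7, xi_q1=0.3, c1=2.0, c2=2.0, v_max=1.0, rng=np.random):
    # Eq (5): blend current (target-domain) and previous-iteration
    # (source-domain) search experience; xi_q + xi_q1 = 1.
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = (w * v
             + c1 * r1 * (xi_q * (p_best - x) + xi_q1 * (p_prev - x_prev))
             + c2 * r2 * (xi_q * (g_best - x) + xi_q1 * (g_prev - x_prev)))
    v_new = np.clip(v_new, -v_max, v_max)   # velocity constraint, Eq (6)
    return x + v_new, v_new
```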
The inertia weight factor $\omega$ can be adjusted according to the fitness of the particles: in the initial stage of the algorithm, $\omega$ is given a larger positive value to obtain better global search ability, while in the later stage it is given a smaller value to ease convergence. The method proposed here adjusts $\omega$ dynamically according to the swarm's degree of convergence and each particle's fitness, as follows:
When $f(x_i)<f(p_i)$, the particles meeting this condition are the ordinary particles of the swarm, which retain good global and local optimizing capacity. Their inertia weight $\omega$ changes with the search according to:
$$\omega=\omega_{\min}+(\omega_{\max}-\omega_{\min})\frac{k_{\max}-k}{k_{\max}} \tag{7}$$
in which $\omega_{\max}$ is the maximum value of $\omega$ at the start of the search, set to 0.9; $\omega_{\min}$ is the minimum value of $\omega$ at the end of the search, set to 0.2; $k$ is the current iteration step and $k_{\max}$ is the maximum number of iterations.
When $f(x_i)<f(p_g)$, the particles meeting this condition are the better particles of the swarm; they are close to the global optimum, so they are given a smaller inertia weight to accelerate convergence to it. Their inertia weight $\omega$ changes with the search according to:
$$\omega=\omega-(\omega-\omega_{\min})\left|\frac{f(x_i)-f(p_i)}{f(p_g)-f(p_i)}\right| \tag{8}$$
in which $\omega_{\min}$ is the minimum value of $\omega$ at the end of the search, set to 0.2. The better a particle's fitness value, the smaller its inertia weight, which benefits local optimization. A sketch of both rules appears below.
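The two rules can be combined into one helper. This is our sketch: since $f(p_g)\le f(p_i)$ for a minimization problem, the stronger condition of Eq (8) is checked first, and a guard avoids division by zero when $f(p_g)=f(p_i)$.

```python
def inertia_weight(f_xi, f_pi, f_pg, w, k, k_max, w_max=0.9, w_min=0.2):
    if f_xi < f_pg:                      # better particles: Eq (8)
        denom = f_pg - f_pi
        return w_min if denom == 0 else w - (w - w_min) * abs((f_xi - f_pi) / denom)
    if f_xi < f_pi:                      # ordinary particles: Eq (7)
        return w_min + (w_max - w_min) * (k_max - k) / k_max
    return w
```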
The self-adaptive LS-SVM algorithm proposed here translates the least squares solution of the matrix equation $AX=z$ into an iterative solution by the self-adaptive PSO algorithm. In this way matrix inversion is avoided, which ensures the convergence of the algorithm, accelerates computation, and improves solution accuracy. The process is as follows:
Step 1: Choose proper training samples $n_b$ and test samples, and pre-process them.
Step 2: Initialize the particle swarm parameters, including the velocity and position of each particle. Randomly generate $m$ particles $(x_1,x_2,\cdots,x_m)$ in the space $\mathbb{R}^n$ to form the initial swarm $X(k)$, and randomly generate the initial velocities $(v_{i1},v_{i2},\cdots,v_{im})$ to form the velocity matrix $V(k)$. The initial individual best of every particle is its initial value $x_i$.
Step 3: Train the LS-SVM with the training samples and calculate the fitness value $f(x)$ of every particle, with fitness function $f(x_i)=\frac{1}{l}\sum_{i=1}^{l}(y_i-x_i)^2$.
In the formula, $x_i$ is the actual value of the $i$-th sample, $y_i$ is its predicted value, and $l$ is the number of test samples. According to the particles' fitness values, update $p_{id}$ and $p_{gd}$. A minimal fitness helper is sketched below.
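The fitness of Step 3 is simply the mean squared error over the test samples; as a sketch:

```python
def fitness(y_pred, y_true):
    # Mean squared error over the l samples: f = (1/l) * sum (y_i - x_i)^2.
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
```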
Step 4: For each particle, compare the current fitness $f(x_i)$ with the fitness $f(p_i)$ of its best historical position; if $f(x_i)<f(p_i)$, set $p_i=x_i$ and adjust $\omega$ according to Eq (7). Compare the current fitness $f(x_i)$ of every particle with the fitness $f(p_g)$ of the swarm's best position; if $f(x_i)<f(p_g)$, set $p_g=x_i$ and adjust $\omega$ according to Eq (8).
Step 5: Update the velocity and position of every particle according to the improved PSO model $[v_i^{k+1},x_i^{k+1}]$ of Eq (5) to produce a new population $X(k+1)$.
Step 6: Determine whether each velocity vector satisfies the constraint $-v_{\max}\le v_i\le v_{\max}$; if not, adjust it in accordance with Eq (6).
Step 7: Determine whether the fitness value meets the requirement or the maximum number of iterations has been reached. If the stop condition is satisfied, the optimization ends and the global optimal particle is mapped to the optimized LS-SVM model parameters; otherwise set $k=k+1$ and return to Step 3.
Step 8: Solve the LS-SVM using the training sample data and the parameters obtained in Step 7 to obtain the least squares solution of the matrix equation, i.e., the optimal parameters $\alpha_i$ and $b$ of Eq (2). The whole loop is sketched below.
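Putting Steps 2-7 together, the following outline of the optimization loop uses our earlier hypothetical helpers (apso_step, inertia_weight). Each particle position encodes a candidate solution $X=[b,\alpha]^T$ of $AX=z$, and fit_fn would be the prediction error of Step 3 (or, equivalently for the linear system, a residual such as $\|AX-z\|$); for brevity the sketch adapts a single swarm-level $\omega$ from mean fitness, whereas the paper adapts it per particle.

```python
def apso_optimize(fit_fn, dim, n_particles=25, k_max=1000, v_max=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))        # Step 2: init positions
    v = rng.uniform(-v_max, v_max, (n_particles, dim))    # and velocities
    f_p = np.array([fit_fn(xi) for xi in x])              # Step 3: fitness
    p_best = x.copy()
    g = np.argmin(f_p)
    g_best, f_g = p_best[g].copy(), f_p[g]
    x_prev, p_prev, g_prev, w = x.copy(), p_best.copy(), g_best.copy(), 0.9
    for k in range(k_max):                                # Step 7: stop condition
        x_new, v = apso_step(x, v, x_prev, p_best, p_prev, g_best, g_prev,
                             w, v_max=v_max, rng=rng)     # Steps 5-6
        x_prev, p_prev, g_prev = x, p_best.copy(), g_best.copy()
        x = x_new
        f = np.array([fit_fn(xi) for xi in x])
        better = f < f_p                                  # Step 4: personal bests
        p_best[better], f_p[better] = x[better], f[better]
        g = np.argmin(f_p)
        if f_p[g] < f_g:                                  # update the global best
            g_best, f_g = p_best[g].copy(), f_p[g]
        w = inertia_weight(f.mean(), f_p.mean(), f_g, w, k, k_max)
    return g_best, f_g                # Step 8: map back to the (alpha, b) of Eq (2)
```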
The APSO-LSSVM algorithm flow chart is shown in Figure 1.
To verify the performance of APSO-LSSVM, three common benchmark functions are selected as test functions: Rosenbrock is a unimodal function with search space $[-100,100]^D$, while Schwefel and Penalized are multimodal functions with search spaces $[-500,500]^D$ and $[-50,50]^D$, respectively. Figure 2 shows the convergence curves of the three test functions optimized by the different algorithms.
It can be seen that APSO-LSSVM performs very well on all three functions: because the algorithm adopts an adaptive strategy, it effectively maintains the diversity of the population while improving the population's ability to escape local optima and the accuracy of the solution.
In 2013, coal samples were collected from a coal mine in Hebi, a spontaneous ignition test was carried out, and the sample data were collected. We forecast the development trend of coal spontaneous ignition by analyzing characteristic parameters such as the concentration, ratios and evolution rate of the indicator gases produced during spontaneous ignition. The process of coal spontaneous combustion is generally divided into three stages [2]: the preparation period, the spontaneous heating period and the burning period. To better predict the state of coal spontaneous combustion, the spontaneous heating period is further divided into three stages: the early, middle and later periods of spontaneous heating.
The C-SVM, LS-SVM, PSO-LSSVM and APSO-LSSVM algorithms were used to predict the degree of danger. The experiments were programmed in MATLAB 2010 on a machine with a 2.19 GHz CPU and 2 GB of memory.
To improve accuracy, the samples are first normalized to avoid the effect of singular data points on the performance of the support vector machine. The size of the particle swarm is set to 25, the solution space to 350 dimensions, the maximum number of iterations to 1000, the acceleration factors to $c_1=c_2=2$, and the initial inertia weight $\omega$ as given above ($\omega_{\max}=0.9$). The regularization parameter is $\gamma=1000$, the width parameter of the radial basis function is $\sigma^2=0.15$, and 5 LS-SVM classifiers are established.
The test data samples are evaluated with the prediction model obtained by iterating the adaptive PSO, and the standard support vector model (C-SVM), the LS-SVM model and the LS-SVM model with standard PSO are established for comparison with the predictions proposed in this article. The C-SVM model adopts the radial basis function, and the inertia weight $\omega$ in the standard-PSO LS-SVM model is constant. The training-time curves are shown in Figure 3 for training sample counts in [50,300]. The testing-time curves are shown in Figure 4 for testing sample counts in [50,300]. The prediction-accuracy curves are shown in Figure 5 for training sample counts in [50,300], with 300 testing samples.
From these response curves, as the number of training samples increases, the training time of all four classification algorithms clearly increases; however, the training time of APSO-LSSVM is significantly shorter than that of C-SVM, LS-SVM and PSO-LSSVM, which shows that APSO-LSSVM adapts well to different sample sizes and test conditions and has a fast learning process. In terms of testing time, the processing time of the four algorithms increases linearly with the number of test samples, but the processing time of APSO-LSSVM is clearly shorter than that of C-SVM, LS-SVM and PSO-LSSVM, showing good real-time processing ability. Under the same conditions, the classification accuracy of all four algorithms also increases with the number of training samples, with APSO-LSSVM slightly more accurate than C-SVM, LS-SVM and PSO-LSSVM. The adaptive PSO algorithm thus attains higher accuracy during the iterative solution of the matrix equation.
Secondly, to test how the four algorithms' performance and prediction accuracy vary with sample distribution, we selected coal samples from different coal mines for spontaneous ignition tests to obtain a second group of sample data, and established prediction models with different numbers of training samples. The sample composition of the two groups is shown in Table 1, and the measured training time, testing time and forecast accuracy are shown in Table 2.
| Data set | Sample | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 |
| --- | --- | --- | --- | --- | --- | --- |
| First group | Training sample | 55 | 67 | 59 | 67 | 62 |
| First group | Test sample | 33 | 48 | 31 | 43 | 45 |
| Second group | Training sample | 35 | 40 | 36 | 31 | 36 |
| Second group | Test sample | 21 | 31 | 24 | 19 | 25 |
| Group | Algorithm | Training time (s) | Test time (s) | Forecast accuracy (%) |
| --- | --- | --- | --- | --- |
| First group | C-SVM | 0.282 | 0.325 | 82.38 |
| First group | LS-SVM | 0.237 | 0.267 | 84.23 |
| First group | PSO-LSSVM | 0.185 | 0.192 | 88.93 |
| First group | APSO-LSSVM | 0.108 | 0.145 | 91.07 |
| Second group | C-SVM | 0.188 | 0.253 | 83.57 |
| Second group | LS-SVM | 0.145 | 0.226 | 85.76 |
| Second group | PSO-LSSVM | 0.112 | 0.170 | 89.25 |
| Second group | APSO-LSSVM | 0.073 | 0.104 | 92.12 |
The performance statistics for the coal sample data from the two regions show that the training and testing time of APSO-LSSVM is significantly smaller than that of C-SVM, LS-SVM and PSO-LSSVM, which proves that APSO-LSSVM has a competitive advantage on relatively complex problems and on problems requiring high real-time performance. The accuracy of APSO-LSSVM on both the training and testing sets is slightly higher than that of C-SVM, LS-SVM and PSO-LSSVM, and its error is relatively small, showing a better classification effect.
Next, we consider time complexity. Assuming the complexity of training a classifier is $C_h$ and of updating a training sample is $C_w$, the time complexities of C-SVM, LS-SVM, PSO-LSSVM and APSO-LSSVM can be approximated as $C_hO(k_{\max})+C_wO(n_bk_{\max})$, $C_hO(k_{\max})+C_wO(lk_{\max})$, $C_hO(Nk_{\max})+C_wO(lNk_{\max})$ and $C_hO(Nk_{\max})+C_wO(lk_{\max})$, respectively. The average training times of the four algorithms are compared in Figure 6.
To make the comparison objective and scientific, hypothesis testing is applied to the experimental results. Let the variables $X_1,X_2,X_3,X_4$ denote the classification error rates of the APSO-LSSVM, PSO-LSSVM, LS-SVM and C-SVM algorithms, respectively. Since the values of $X_1,X_2,X_3,X_4$ are subject to many random factors, we assume they follow normal distributions, $X_i\sim N(\mu_i,\sigma_i^2)$, $i=1,2,3,4$, and compare the means $\mu_i$: the smaller $\mu_i$ is, the lower the expected classification error rate and the higher the efficiency. Because the sample variance is an unbiased estimate of the population variance, the sample variance is used as the estimate of the population variance. In this experiment the significance level $\alpha$ is set to 0.01.
Table 3 shows the comparison of the $\mu_i$ and the associated statistics. As can be seen, the expected classification error rate of APSO-LSSVM is far below that of the other algorithms.
| Hypothesis | $H_0:\mu_1\ge\mu_2$ vs. $H_1:\mu_1<\mu_2$ | $H_0:\mu_1\ge\mu_3$ vs. $H_1:\mu_1<\mu_3$ | $H_0:\mu_1\ge\mu_4$ vs. $H_1:\mu_1<\mu_4$ |
| --- | --- | --- | --- |
| Statistic | $U_1=\dfrac{\bar{X}_1-\bar{X}_2}{\sqrt{\sigma_1^2/n_1+\sigma_2^2/n_2}}$ | $U_2=\dfrac{\bar{X}_1-\bar{X}_3}{\sqrt{\sigma_1^2/n_1+\sigma_3^2/n_3}}$ | $U_3=\dfrac{\bar{X}_1-\bar{X}_4}{\sqrt{\sigma_1^2/n_1+\sigma_4^2/n_4}}$ |
| Rejection region | $U_1\le-Z_\alpha=-2.325$ | $U_2\le-Z_\alpha=-2.325$ | $U_3\le-Z_\alpha=-2.325$ |
| Value of the statistic | $U_1=-52.58$ | $U_2=-105.64$ | $U_3=-124.68$ |
| Conclusion | accept $H_1:\mu_1<\mu_2$ | accept $H_1:\mu_1<\mu_3$ | accept $H_1:\mu_1<\mu_4$ |
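As a worked illustration of the procedure in Table 3, the large-sample statistic and the rejection rule at $\alpha=0.01$ can be computed as follows (our sketch; the commented call uses placeholder arrays of observed error rates, not values from the experiment):

```python
from math import sqrt

def u_statistic(mean1, mean2, var1, var2, n1, n2):
    # U = (X1bar - X2bar) / sqrt(s1^2/n1 + s2^2/n2), with the sample
    # variances standing in for the population variances.
    return (mean1 - mean2) / sqrt(var1 / n1 + var2 / n2)

# Reject H0: mu1 >= mu2 at alpha = 0.01 when U <= -Z_alpha = -2.325, e.g.:
# e1, e2 = error rates of APSO-LSSVM and PSO-LSSVM over repeated runs
# u1 = u_statistic(e1.mean(), e2.mean(), e1.var(ddof=1), e2.var(ddof=1),
#                  len(e1), len(e2))
# reject_h0 = u1 <= -2.325
```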
To further verify the effectiveness of the algorithm, it is tested on the Wine and Iris datasets from the UCI repository. The Wine dataset contains 3 classes and 178 samples in total; 89 are selected as training samples and 89 as test samples. The Iris dataset contains 3 classes and 150 samples in total; 75 are selected as training samples and 75 as test samples. To improve accuracy, the samples are first normalized to [0, 1]; the support vector machine is C-SVM with the Gaussian radial basis kernel. Training uses K-fold cross-validation to assess the classification accuracy of the samples (a generic K-fold helper is sketched after Table 4). The learning factors are $c_1=c_2=2$, the number of particles is N = 40, the maximum number of iterations is M = 100, and the inertia weight is $\omega=1$. The results are shown in Table 4.
| Dataset | Parameter C | Parameter σ | Parameter optimization time (s) | Training sample accuracy (%) | Test sample accuracy (%) | Range of parameter C | Range of parameter σ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Wine | 68.94 | 0.01 | 20.36 | 96.85 | 95.34 | [0.1, 1000] | [0.01, 1000] |
| Iris | 100 | 0.01 | 13.32 | 97.80 | 98.10 | [0.1, 1000] | [0.01, 1000] |
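For reference, a generic K-fold cross-validation helper of the kind used here might look as follows (our numpy sketch; train_fn and predict_fn are placeholders for any trainer/predictor pair, such as the LS-SVM functions sketched earlier):

```python
def kfold_accuracy(X, y, k, train_fn, predict_fn, seed=0):
    # Split the data into k folds, hold each fold out once, and
    # average the held-out classification accuracy.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    accs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train_fn(X[tr], y[tr])
        accs.append(np.mean(predict_fn(model, X[te]) == y[te]))
    return float(np.mean(accs))
```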
The prediction of coal spontaneous combustion is a very complex problem, and many factors affect the result. The LS-SVM algorithm based on adaptive PSO optimization proposed in this paper uses LS-SVM to handle small samples, nonlinearity, high dimensionality and local minima, while the adaptive PSO algorithm resolves the high computational complexity and slow speed of the LS-SVM model on large-scale samples, so that the optimal solution can consistently be obtained with improved training speed and accuracy. In the coal spontaneous combustion experiments, the training and testing time of the proposed method is significantly smaller than that of the other examined methods, which proves that APSO-LSSVM has a competitive advantage on relatively complex problems and on problems requiring high real-time performance. Its accuracy on both the training and testing sets is slightly higher than that of the other examined methods, with relatively small error, showing a better classification effect.
Building on this article, the following aspects remain to be studied. 1) How to handle unbalanced data in SVM more accurately requires further research, as does the problem of SVM parameter selection: although PSO alleviates it to some extent, the parameters found are optimal only relative to the training set, and how to obtain them in theory is a current research direction. 2) PSO can become trapped in local optima, so the obtained parameters may be sub-optimal rather than globally optimal; how particles can reach the global optimum more stably and efficiently will be studied next. 3) The prediction in this paper is based only on the indicator gases and does not incorporate surrounding environmental factors. To improve the model's effect in practical applications, the next step is to consider other factors comprehensively so that the model can realize its full advantage in practice.
This work was jointly supported by the National Natural Science Foundation of China (Nos. 61473299 and 61876185) and the Fundamental Research Funds for the Central Universities (No. 2015QNB21).
All authors declare no conflicts of interest in this paper.