A mechanical vibration signal is inextricably related to the machine's operating state. To obtain the key information contained in the signal, it is usually collected by a method following the classic Nyquist sampling theorem [1,2,3]. Most acquisition chips currently in use are also based on the Nyquist sampling theorem and uniformly sample signals at equal intervals. The theorem requires that the sampling frequency be at least twice the highest frequency of the signal being tested; in practical applications, this frequency is often 5 to 10 times as high [4,5], in order to avoid aliasing and other phenomena and to obtain complete and accurate signal information. However, this approach not only places high demands on the sampling equipment, but also generates a large volume of highly redundant data, which increases the difficulty of data transmission and analysis [6,7,8]. To reduce excessive data volumes, the data are often compressed, as with the JPEG format for images, the MP3 format for audio, and the ZIP format for files: only the important information is extracted from the data, and a great deal of redundant data is discarded [9,10,11]. Although this approach achieves compression, the compression is carried out after acquisition and requires complex algorithms, which both wastes sampled data and consumes considerable front-end hardware resources. Moreover, such compression is lossy, so the data set itself is inevitably degraded to some degree.
To resolve the problems of the traditional sampling theorem, Donoho et al. [1] formally proposed an entirely new method of signal acquisition and processing based on sparse representation and signal approximation theory, namely, compressed sensing [12,13,14]. Compressed sensing compresses the original signal while collecting it [15,16]. The sampling frequency is determined by the structure and information content of the signal rather than by its bandwidth [17,18,19]. Part of the noise and redundant information in the signal can also be eliminated, which greatly reduces the amount of data collected and relieves pressure on the acquisition end.
When the theory of compressed sensing was introduced, Donoho and others rigorously proved its mathematical correctness. In view of its significant advantages, compressed sensing theory has since quickly attracted the attention of many researchers. In terms of practical applications, Rice University successfully developed a single-pixel compressive digital camera [20,21,22], whose observation process is equivalent to a single-pixel video stream. The camera can take high-quality images from far fewer measurements than the number of pixels in the original image [23,24], a self-adaptive capability that is not available in traditional equipment. Similar devices include a coded aperture camera developed by the Massachusetts Institute of Technology, a hyperspectral imager developed by Yale University, and a DNA microarray sensor developed by Illinois State University [25,26,27].
Compressed sensing technology is currently being used in the fields of image and video compression, nuclear magnetic resonance imaging, compressed sensing radar, data communications, and so forth [28,29,30]. Following breakthroughs in, and more in-depth research on, its theoretical framework, compressed sensing has also been applied in sub-Nyquist sampling systems [31,32,33] and compressed sensing networks [34,35,36], among other fields [36,37]. However, there have been fewer applications of compressed sensing in the fault diagnosis of mechanical equipment, which is the research topic of this paper. Using compressed sensing theory to collect and detect mechanical vibration signals can reduce the sampling frequency and the number of samples while still detecting mechanical failure information with high probability, thus reducing the pressure of information transmission, storage, and processing [38,39]. In addition, some optimization algorithms are used to optimize compressed sensing networks [40,41,42].
This paper uses compressed sensing technology to compress and collect the fault signals of rolling bearings. Through the analysis and selection of the sparse representation algorithm, measurement matrix, and reconstruction algorithm, and a comparative analysis of the impact of each parameter value, the original signal is recovered with high accuracy from a small amount of collected data. This paper also introduces a neural network model into the compressed sensing process: the neural network's predictive ability for nonlinear time series is used to predict compressed sensing observation values, achieving further compression. After testing and selecting the number of neurons and the transfer functions of the neural network, the entire set of observation values can be accurately predicted with high probability while the observation values themselves are greatly compressed. Finally, the fault information in the bearing signal can be accurately identified from the spectrum of the reconstructed signal.
The core idea of compressed sensing theory is the assumption that the signal can be sparsely represented under some transform basis. A linear random observation method is used to project the signal onto a low-dimensional space to obtain a small number of observations containing most of the information of the original signal. A nonlinear optimization algorithm then reconstructs a signal that closely approximates the original.
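As an illustration, the three steps just described (sparse representation, random linear observation, nonlinear reconstruction) can be sketched in a few lines of NumPy. All names and sizes here are illustrative assumptions, not values from the paper, and the basis Ψ is taken as the identity for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: signal length N, observation count M, sparsity K.
N, M, K = 256, 64, 5

# A signal that is K-sparse directly in the canonical basis (Psi = identity).
theta = np.zeros(N)
support = rng.choice(N, K, replace=False)
theta[support] = rng.standard_normal(K)
x = theta                                  # x = Psi @ theta with Psi = I

# Random linear observation: project x down to M dimensions.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                                # y = Phi x = Phi Psi theta

print(y.shape)                             # (64,) -- far fewer values than the 256 originals
```

The reconstruction step (recovering θ from y) is the subject of the optimization problem discussed below.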
A real signal x ∈ ℝ^N can be expressed in terms of an orthogonal basis Ψ ∈ ℝ^(N×N) as

x = Ψθ (1)

where the coefficient vector θ is sparse, that is, only K ≪ N of its entries are significantly nonzero.

Given an M × N measurement matrix Φ with M ≪ N, the signal is observed through the linear projection

y = Φx (2)

where the observation vector y ∈ ℝ^M is far shorter than the original signal. Combining Eqs. (1) and (2) gives

y = Φx = ΦΨθ (3)

By reconstructing the sparse coefficient vector θ from the observations y, the original signal x can be recovered. Since M < N, Eq. (3) is underdetermined, and the sparsest solution is sought through the ℓ1-minimization problem

θ̂ = arg min ‖θ‖1 s.t. y = Φx = ΦΨθ (4)
According to the process of compressed sensing, the core problems can be summarized into the following three points:
(1) Design an appropriate measurement matrix Φ.
(2) According to a priori knowledge, select the sparse basis Ψ.
(3) Use an appropriate reconstruction algorithm to find the optimal solution θ̂.
The sparsity of the signal is an important factor affecting the efficiency and accuracy of compressed sensing. A traditional orthogonal basis dictionary sometimes cannot guarantee sufficient sparsity of the signal. Therefore, methods that use optimized learning algorithms to construct a redundant dictionary have received increasing attention. This paper uses the most representative such method, the K-SVD algorithm, which is widely used because of its relatively good sparse decomposition of various signals [43,44].
When applying the K-SVD algorithm, it is necessary to determine the original signal x, the initial dictionary atom length n, the number of atoms K, the sample set atom number N, the linearly combined atom number L in the sparse representation, and the number of iterations J. The proper parameter value combination in the K-SVD algorithm is the key to ensuring that the sparse coefficient has sufficient sparsity and attenuation. However, there is currently no uniform selection standard, which is usually selected based on the experience of researchers or repeated experiments.
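For readers unfamiliar with the algorithm, the following is a heavily simplified NumPy sketch of the K-SVD training loop: greedy sparse coding of each training sample, followed by atom-by-atom updates via the SVD of the restricted error matrix. The helper names and the toy sizes are our own illustrative assumptions, far smaller than the paper's n = 512, K = 900, N = 1100, L = 8, and J = 6.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_code(D, x, L):
    """Greedy sparse coding: approximate x with at most L atoms of dictionary D."""
    residual = x.copy()
    idx = []
    coef = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(L):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in idx:
            idx.append(j)
        sol, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ sol
    coef[idx] = sol
    return coef

def ksvd(X, K, L, J):
    """Simplified K-SVD on training set X (atom length n x sample count N)."""
    n, N = X.shape
    D = X[:, rng.choice(N, K, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)                   # unit-norm initial atoms
    err = 1.0
    for _ in range(J):
        Theta = np.column_stack([sparse_code(D, X[:, i], L) for i in range(N)])
        for k in range(K):
            users = np.nonzero(Theta[k])[0]          # samples that use atom k
            if users.size == 0:
                continue
            # Error matrix with atom k's contribution removed, restricted to its users.
            E = X[:, users] - D @ Theta[:, users] + np.outer(D[:, k], Theta[k, users])
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                        # rank-1 update of atom k
            Theta[k, users] = S[0] * Vt[0]
        err = np.linalg.norm(X - D @ Theta) / np.linalg.norm(X)  # relative error
    return D, err

# Toy demo, far smaller than the paper's parameter values.
X = rng.standard_normal((16, 40))
D, err = ksvd(X, K=20, L=4, J=3)
print(D.shape, err < 1.0)
```

The relative error returned here corresponds to the "Relative Error" reported in the parameter tests below.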
This paper uses faulty rolling bearing data collected by the Case Western Reserve University Bearing Data Center as the original signal, to test combinations of parameter values in the K-SVD algorithm. The original data were taken from 6205-2RS JEM SKF deep groove ball bearings, and inner ring fault signals were analyzed. The sampling frequency was 12 kHz, the motor speed was 1797 rpm, and the fault was a score on the inner ring.
The atom length n is chosen to be 512. Because the parameter values to be determined are not unique and are interrelated, this study uses single-factor analysis: the initial values of each parameter are preset, and each round of testing fixes three parameter values while adjusting the fourth. The parameters are adjusted in the order K, N, L, and J.
From experience, when parameter K is tested, the initial values of the other parameters are set to N = 1100, L = 10, and J = 10. To ensure a complete dictionary, the value range of K in the test is 600–950. The test results are shown in Table 1.
Value | 700 | 750 | 800 | 850 | 900 | 950 |
Relative Error | 0.432 | 0.425 | 0.425 | 0.417 | 0.412 | 0.438 |
It is not difficult to see that the relative error of reconstruction tends to decrease as K increases. In this paper, K = 900 is therefore selected.
Table 2 compares the reconstruction errors for different values of N. In the test, K takes the value of 900 selected in the previous test, and the values of L and J, both equal to 10, remain unchanged. Given the selected value of K, N must be large enough to ensure sufficient training of the dictionary. Therefore, N = 950, 1000, …, 1200 is used for testing.
Value | 950 | 1000 | 1050 | 1100 | 1150 | 1200 |
Relative Error | 0.424 | 0.418 | 0.414 | 0.413 | 0.413 | 0.414 |
Through the test results, one can find that the relative error of reconstruction is relatively stable, and the reconstruction accuracy when N = 1100 is slightly better than for the other values.
The value of the parameter L is generally small. In this paper, L = 2, 4, ..., 12 is used for the test. The test results are shown in Table 3.
Value | 2 | 4 | 6 | 8 | 10 | 12 |
Relative Error | 0.425 | 0.447 | 0.509 | 0.409 | 0.418 | 0.422 |
It is not difficult to see that, when the value of L is small, the error fluctuation is large, so this paper uses L = 8 to ensure that the error is stable.
The number of iterations J is similar to the parameter L. The values in the relevant literature are usually around eight. In this paper, J = 2, 4, …, 12 is used for testing. The test results are shown in Table 4.
Value | 2 | 4 | 6 | 8 | 10 | 12 |
Relative Error | 0.421 | 0.408 | 0.401 | 0.404 | 0.408 | 0.413 |
To reduce the dictionary training time, the number of iterations J also needs to take on a smaller value. According to the reconstruction error curve, J = 6 is more suitable.
The follow-up experiments use the K-SVD parameter values selected above: n = 512, K = 900, N = 1100, L = 8, and J = 6.
The measurement matrix is the key to the dimensionality-reducing projection of the signal. Measurement matrices are divided into random and deterministic matrices; the former type is used in this paper. Common random measurement matrices include Gaussian random matrices, Bernoulli random matrices, and partial Hadamard matrices. Their performance differs depending on the properties of the signal and on the application, and there is no uniform selection standard. Therefore, this paper compares and analyzes the reconstruction performance of these common random measurement matrices under the same conditions.
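The three candidate matrices can be generated as follows. This is a minimal sketch under common textbook definitions; the function names and normalizations chosen here are assumptions rather than the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_matrix(M, N):
    """Entries i.i.d. Gaussian, scaled so columns have roughly unit norm."""
    return rng.standard_normal((M, N)) / np.sqrt(M)

def bernoulli_matrix(M, N):
    """Entries +-1/sqrt(M) with equal probability."""
    return rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

def partial_hadamard_matrix(M, N):
    """Keep M randomly chosen rows of the N x N Hadamard matrix (N a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < N:                 # Sylvester construction: double until N x N
        H = np.block([[H, H], [H, -H]])
    rows = rng.choice(N, M, replace=False)
    return H[rows] / np.sqrt(N)

M, N = 8, 32
for make in (gaussian_matrix, bernoulli_matrix, partial_hadamard_matrix):
    print(make.__name__, make(M, N).shape)
```

A convenient property of the partial Hadamard matrix is that its rows are exactly orthonormal, which is one reason it often performs well in practice.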
The test selects 512 consecutive data points of the vibration signal of a faulty rolling bearing as the original signal, uses each measurement matrix to perform compressed sensing, and then compares the accuracy of the fault information detected with each matrix. Detection is considered successful when the frequency corresponding to the highest spectral peak of the reconstructed signal matches the fault frequency of the original signal and the amplitude ratio of the secondary spectral peak to the highest spectral peak is less than 0.9.
Research shows that the frequency characteristic corresponding to each harmonic component in the Fourier domain appears as two spectral lines. Therefore, to facilitate the experiment, the signal sparsity K = 2 is used, which is equivalent to detecting only one harmonic component at a time. For different values of the number of observations M, each measurement matrix performs 1000 compressed sensing trials, and the probability of satisfying the above conditions is calculated. The test results are shown in Figure 1.
As can be seen from Figure 1, the success rate of the failure information extraction of the partial Hadamard matrix is significantly higher than that of the other matrices. Therefore, this paper selects the partial Hadamard matrix as the measurement matrix of compressed sensing.
The essence of signal reconstruction is to solve the optimization problem of Eq. (4) and find a solution that is as sparse as possible. At present, there are two main types of methods: basis pursuit and greedy algorithms. The latter is used in this paper. According to the principle of compressed sensing, for a signal of large dimension, solving the L1-norm minimization problem directly is computationally expensive, and the required computation time grows accordingly. Iterative greedy algorithms reduce the computational complexity, shorten the signal recovery time, and are easy to implement in hardware. The most commonly used greedy method is the orthogonal matching pursuit algorithm [45,46,47]. The subspace pursuit algorithm selected in this paper improves on traditional orthogonal matching pursuit in terms of efficiency and stability [48,49,50].
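A minimal sketch of the subspace pursuit iteration (expand the support with the columns most correlated with the residual, refit by least squares, then shrink back to K atoms) might look as follows. The function and its stopping rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def subspace_pursuit(A, y, K, max_iter=20):
    """Recover a K-sparse theta with y ~= A @ theta, where A = Phi Psi."""
    N = A.shape[1]
    support = np.argsort(np.abs(A.T @ y))[-K:]      # initial support estimate
    prev_res = np.inf
    coef = np.zeros(K)
    for _ in range(max_iter):
        # Expand: merge support with the K columns most correlated with the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
        candidates = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])
        # Shrink: refit on the candidates and keep the K largest coefficients.
        coef_c, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
        support = candidates[np.argsort(np.abs(coef_c))[-K:]]
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        res = np.linalg.norm(y - A[:, support] @ coef)
        if res >= prev_res:                         # stop once the residual stalls
            break
        prev_res = res
    theta = np.zeros(N)
    theta[support] = coef
    return theta

# Toy check: recover a 3-sparse vector from 32 random measurements in 64 dimensions.
rng = np.random.default_rng(3)
A = rng.standard_normal((32, 64)) / np.sqrt(32)
truth = np.zeros(64)
truth[[5, 17, 40]] = [1.0, -2.0, 0.5]
est = subspace_pursuit(A, A @ truth, K=3)
print(float(np.linalg.norm(est - truth)))
```

In the noiseless, well-conditioned toy setting the recovery error is at machine-precision level.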
This paper adds neural network prediction theory to the compressed sensing process. First, the signal is collected by the compressed sensing method to obtain a small number of observation values, which are divided into several consecutive data groups. The earlier groups are used to train the neural network, which then predicts the remaining observation values one by one in a rolling manner, so that only part of the observation sequence needs to be acquired and transmitted.
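The grouping of observations into consecutive windows and the rolling prediction can be sketched as follows. The window width of 14 mirrors the 14 input neurons reported later, and the averaging "model" is a placeholder assumption standing in for the trained network.

```python
import numpy as np

def make_windows(series, w):
    """Build (input window, next value) training pairs from the observation sequence."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    t = np.array(series[w:])
    return X, t

def rolling_predict(model, history, w, steps):
    """Predict `steps` future observations, feeding each prediction back as an input."""
    buf = list(history[-w:])
    out = []
    for _ in range(steps):
        nxt = float(model(np.array(buf[-w:])))
        out.append(nxt)
        buf.append(nxt)                    # rolling iteration: predictions become inputs
    return out

# Demo with a trivial "model" that averages its window; in the paper, a trained
# two-hidden-layer MLP plays this role.
y = np.sin(np.linspace(0, 6, 40))
X, t = make_windows(y, w=14)
preds = rolling_predict(lambda win: win.mean(), y[:20], w=14, steps=5)
print(X.shape, t.shape, len(preds))        # (26, 14) (26,) 5
```

The rolling feedback of predictions also explains the error growth observed later: each predicted value becomes an input for subsequent predictions, so errors accumulate.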
Although multilayer perceptron neural networks have been applied quite extensively, there are still no unified guidelines or established formulas for the specific design of their structure in different fields, which is usually chosen based on the designer's experience or on experiments.
The total number of layers of the neural network mainly depends on the number of hidden layers, which is generally one to two. In this paper, the predicted value of compressed sensing is input into the network as the original signal. Because of the high level of randomness of the signal, the use of multiple hidden layers is suitable to ensure the accuracy of the prediction. Therefore, this paper uses a double hidden layer structure.
The transfer function, an important part of the neural network, is a continuous differentiable function whose main role is to introduce nonlinearity into the network. In this paper, the tan-sigmoid function is selected as the transfer function between the input layer and the first hidden layer, and between the first and second hidden layers. The transfer function between the second hidden layer and the output layer is a linear transfer function.
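A forward pass through such a double-hidden-layer network (tan-sigmoid hidden layers, linear output) can be sketched as follows. The 14-11-5-1 layer sizes are those reported later in the paper, while the random weights are placeholders that would in practice be fitted by training.

```python
import numpy as np

rng = np.random.default_rng(4)

def tansig(z):
    """The tan-sigmoid transfer function: tanh, mapping activations into (-1, 1)."""
    return np.tanh(z)

# Layer sizes 14-11-5-1; random placeholder weights stand in for trained ones.
sizes = [14, 11, 5, 1]
weights = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(x):
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = tansig(W @ a + b)              # hidden layers: tan-sigmoid
    return weights[-1] @ a + biases[-1]    # output layer: linear

out = forward(rng.standard_normal(14))
print(out.shape)                           # (1,)
```

The linear output layer lets the network produce unbounded predicted observation values, while the bounded tan-sigmoid hidden layers supply the nonlinearity.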
The training algorithm of the neural network is a method for finding a local minimum. Common algorithms include the gradient descent method, the conjugate gradient method, the Newton method, and the Levenberg–Marquardt method. After several experiments with these training algorithms, the network training is found to reach the upper limit of 20,000 iterations when using the gradient descent or conjugate gradient method, yet the training error still fails to satisfy the requirements. When the Newton method is used, although the stopping condition is satisfied after several thousand iterations, the training error remains too large for practical applications. Therefore, this paper uses the Levenberg–Marquardt algorithm, which trains the network faster and with higher accuracy.
The specific values of the neural network parameters are given later.
This article uses the bearing signal described earlier as the original signal. Given the number of original data points n = 512, the sparsity of the signal is 110, and the K-SVD algorithm is used to sparsely represent the signal. A partial Hadamard matrix is then used to compress and observe the signal. The test uses the compressed data number M = 205 as an example; that is, the data are compressed by about 60%. The observations obtained are then input into the neural network for learning. This experiment compresses the observed values by about 50%: the first 103 data points are fixed values, and the last 102 are predicted. After testing, the numbers of neurons in the input layer, the first and second hidden layers, and the output layer of the neural network are 14, 11, 5, and 1, respectively; this combination guarantees a higher prediction success rate with fewer neurons. Figure 2 compares the predicted and actual observed values. The first 103 data points in panels (a) and (b) are exactly the same, being fixed values. The 104th to 205th data points in panel (b) are the predicted values for the corresponding data in panel (a). The error between the predicted and actual values is shown in Figure 3.
It can be seen that the error of the first 70 predicted data points is almost zero, and the prediction error of the subsequent data increases gradually with the rolling iteration process. The error fluctuation of the 90th to 102nd data points is more obvious. Nevertheless, the overall forecast error is still small compared to the observed amplitude, basically remaining within a range of ±0.15, or only about 0.4% of the total amplitude. It is difficult to distinguish the two from Figure 2.
Finally, all the predicted observations are reconstructed using the subspace pursuit algorithm, and MATLAB software is used to generate the reconstructed signals and their spectrograms, as shown in Figure 4.
It is not difficult to determine that the reconstructed signal obtained with the method in this paper simulates the original signal well, with only some error at a few sampling points. The error of the traditional method is more obvious; for example, the signal near the 160th sampling point is seriously distorted. At the experimental compression ratio, the proposed method can still accurately detect the bearing fault frequency of about 154 Hz from its spectrum, whereas the traditional method already exhibits obvious errors.
The error curves in Figure 5 show the differences between the reconstructed and original signals and between their frequency spectra. In panels (a) and (b), the upper curve is the result for the method in this paper, and the lower curve is the result for the traditional method. Although the difference in the signal error is not obvious, the spectral error comparison shows that the reconstruction error of this method is slightly better than that of the traditional method.
To quantitatively compare the reconstruction accuracy of the neural network prediction method used in this paper with that of the traditional method, the concept of a matching rate (MR) is introduced as a measure of reconstruction accuracy. The MR is calculated as
MR = 1 − ‖|x̂| − |x|‖2 / ‖|x̂| + |x|‖2 (5)
where x represents the original signal and x̂ the reconstructed signal.
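Eq. (5) translates directly into code; this sketch and its toy inputs are purely illustrative.

```python
import numpy as np

def matching_rate(x, x_hat):
    """MR = 1 - || |x_hat| - |x| ||_2 / || |x_hat| + |x| ||_2   (Eq. 5)."""
    return 1.0 - np.linalg.norm(np.abs(x_hat) - np.abs(x)) / np.linalg.norm(np.abs(x_hat) + np.abs(x))

x = np.array([1.0, -2.0, 3.0])
print(matching_rate(x, x))                 # 1.0 for a perfect reconstruction
print(matching_rate(x, np.zeros(3)))       # 0.0 when the reconstruction is all zeros
```

An MR closer to 1 thus indicates a more accurate reconstruction.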
Table 5 presents the average values of the reconstructed signal MR for the prediction and traditional methods for multiple experiments under different compression amounts.
Compression ratio | MR of our method | MR of the traditional K-SVD method
0.72 | 0.8284 | 0.8111
0.76 | 0.8318 | 0.7943
0.80 | 0.8277 | 0.7822
0.84 | 0.8331 | 0.7695
0.88 | 0.8296 | 0.7349
0.92 | 0.8315 | 0.7344
It can be seen from Table 5 that, when the total compression is 72–92%, the MR of the prediction method in this paper is significantly higher than that of the traditional method and remains basically stable at around 0.83, whereas the MR of the traditional method decreases significantly as the compression increases. At a compression of 92%, the gap between the two methods is quite obvious. In the experiment, when the traditional method is used at higher compression amounts, although the difference between its final MR and that of the prediction method is only about 0.1, the traditional method cannot guarantee accurate extraction of the bearing fault information, and the success rate of its compressed sensing process drops significantly; at 92% compression, the success rate falls below 50%.
In addition to inner ring failure, the bearing can undergo outer ring failure and rolling element failure. To verify the applicability of this method to the bearing's outer ring and rolling element fault signals, the original signal is replaced by, first, the outer ring vibration signal and, then, the rolling element vibration signal, and each is tested separately. With the number of original signal data points n = 512, the method described above is used to perform compression observation, observation value prediction, and signal reconstruction. The signal reconstruction results are shown in Figures 6 and 7.
Figure 6 shows that the reconstructed outer ring signal simulates the original signal well; after calculation, its MR is about 0.91. In Figure 7, the reconstructed rolling element fault signal contains significant error, and its MR is about 0.75. Although there is a large difference between the original and reconstructed signals, their overall trends are basically the same, and the amplitude and position of the main spectral peak are basically the same, which is sufficient to identify the failure frequency of the bearing.
In practical applications, the signal is often disturbed by factors such as the environment and the equipment. Among these disturbances, noise interference at the acquisition end refers to the noise contained in the original vibration signal, which affects to a certain extent the observation values that are predicted in the compressed sensing process; neural network prediction places high requirements on the signal structure. To verify the performance of the signal processing method in this paper when the original signal is disturbed by noise, noise is artificially added to the original signal, and its impact is analyzed by comparing the reconstructed signal spectrum and the MR. The noise level is measured by the signal-to-noise ratio (SNR).
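Adding white Gaussian noise at a prescribed SNR, as done in these experiments, can be sketched as follows; the function name and seed are illustrative assumptions.

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Add white Gaussian noise so that 10*log10(P_signal / P_noise) = snr_db."""
    rng = np.random.default_rng(0) if rng is None else rng
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)     # noise power for the requested SNR
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

x = np.sin(np.linspace(0, 20, 512))
noisy = add_noise(x, snr_db=20)
measured = 10 * np.log10(np.mean(x ** 2) / np.mean((noisy - x) ** 2))
print(round(float(measured), 1))                 # empirical SNR, close to the requested 20 dB
```

For a finite sample the empirical SNR fluctuates slightly around the requested value.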
The noise-free original signal used in the experiment is the same as above, but Gaussian white noise is added to it before compressive sensing sampling. The same experiment is carried out for different SNRs. The MR of each reconstructed signal is calculated relative to both the noise-free signal and the noisy signal. The results are shown in Table 6. In the experiment, the MR corresponding to each SNR value is the average MR of the reconstructed signal over five successful predictions by the neural network.
SNR (dB) | MR for a pure signal | MR for a noisy signal
50 | 0.833 | 0.834
30 | 0.834 | 0.834
25 | 0.809 | 0.806
20 | 0.763 | 0.761
15 | 0.682 | 0.677
10 | 0.592 | 0.600
8 | 0.514 | 0.587
5 | 0.475 | 0.559
It is not difficult to see that, when the SNR is at least 30 dB, the MR tends to be stable and approaches the level obtained without added noise in the previous test, indicating that the noise interference is then basically negligible. When the SNR is no more than 25 dB, the MR decreases significantly as the SNR decreases. When the SNR equals 5 dB, the reconstructed signal has difficulty simulating the original signal, and the fault frequency cannot be accurately judged from its frequency spectrum. In addition, when the SNR is at least 15 dB, the MR of the reconstructed signal relative to the undisturbed original signal is slightly greater than that relative to the noisy signal. The reason for this phenomenon is that the compressed sensing process itself has a certain noise reduction capability: if the signal structure contains little interference, weak noise can be eliminated through the computation process itself.
In addition to the original signal, the observations can also be disturbed. A major feature of compressed sensing is that only a small number of compressed observations are transmitted, and noise interference on the observations corresponds to noise picked up during the transmission of analog signals.
The test method involves mixing Gaussian white noise into the observation values before they are input into the neural network for training. The neural network is then trained on the noisy observations, and the latter part of the noisy observation sequence is predicted. Finally, the predicted observations are used to reconstruct the signal and obtain the fault frequency information from the spectrum. The same experiment is conducted under different SNRs, and the MR of the reconstructed signal is calculated in each case. The results are shown in Table 7. In the experiment, the MR corresponding to each SNR value is the average MR over five successful predictions by the neural network.
SNR (dB) | MR
60 | 0.838
55 | 0.835
50 | 0.825
45 | 0.785
40 | 0.735
35 | 0.600
30 | 0.514
It can be seen from Table 7 that, when the SNR is below 50 dB, the signal MR decreases significantly as the SNR decreases. When the SNR is at least 50 dB, the signal MR tends to be stable and is close to the matching degree without noise interference in the previous section.
The experiment finds that, when the SNR of the observations is less than 35 dB, the neural network prediction success rate is significantly reduced, the reconstructed signal cannot accurately simulate the original signal, and it is difficult to judge the fault frequency from its spectrum. The test results indicate that, when the compressed sensing observations are disturbed by noise, the original signal can be reconstructed with high accuracy by this method only if the SNR of the observations is kept at no less than 35 dB.
In addition to noise interference, data loss is also an inevitable phenomenon. To simulate data loss, the loss mechanism before processing the data must be analyzed. At present, the loss mechanism is usually divided into three: nonrandom loss, random loss, and complete random loss. However, there is no good analysis and processing method for the nonrandom loss or complete random loss of data. Therefore, this study simulates the random loss of data.
In the compressed sensing process, data loss can occur either at the acquisition end or in the observation values. However, because a neural network is needed to predict the sequence of observations, this method has strict requirements on the integrity of the observation sequence; if data are lost from the observations, it is difficult for the neural network to make accurate predictions. Therefore, in the analysis of the impact of data loss, only the loss of original data at the acquisition end is considered.
The original signal with no data loss selected for the experiment is the same as previously; based on this, part of the data are randomly set to zero to simulate data loss. The same experiment is conducted for different data loss ratios. The signal is segmented into groups of 9 down to 2 data points, and, in each group, one data point is randomly selected and set to zero, giving data loss ratios of about 11.1–50.0%. Table 8 shows the MRs of the reconstructed signal relative to the original signal with data loss, with the other parameters unchanged. Each MR is the average over 10 successful predictions by the neural network.
Loss ratio (%) | MR
11.1 | 0.732
12.5 | 0.717
14.3 | 0.712
16.7 | 0.694
20.0 | 0.681
25.0 | 0.638
33.3 | 0.603
50.0 | 0.523
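The random-loss mechanism described above (one randomly zeroed sample per group of 2 to 9 points) can be sketched as follows; the function name and the constant test signal are illustrative assumptions.

```python
import numpy as np

def drop_random(signal, group_size, rng=None):
    """Zero one randomly chosen sample in each consecutive group of `group_size` samples."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = signal.astype(float).copy()
    for start in range(0, len(out) - group_size + 1, group_size):
        out[start + rng.integers(group_size)] = 0.0
    return out

x = np.ones(90)
for g in (9, 5, 2):                        # loss ratios of about 11.1%, 20.0%, 50.0%
    lost = drop_random(x, g)
    print(g, round(float(1 - lost.mean()), 3))
```

Because exactly one sample per disjoint group is zeroed, the loss ratio is simply 1/group_size.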
It can be seen from Table 8 that, when data are lost from the original signal, the MR of the reconstructed signal is significantly lower than when no data are lost, as in the previous experiments. As the proportion of lost data increases, the MR gradually decreases. The experiment finds that random data loss destroys the structure of the signal to a certain extent, so the information contained in the observations obtained through compressed sensing is disturbed and, ultimately, the neural network prediction success rate under the same conditions is significantly reduced. In addition, when the proportion of lost data is high, the reconstructed signal has difficulty simulating the original signal, and its frequency spectrum occasionally loses the original fault frequency peak, depending on which data points are actually lost.
From these results, one can conclude that, when the original data are randomly lost, the method in this paper reconstructs the original signal with high accuracy as long as the loss ratio is below 30%, and the reconstructed signal's spectrum can then be used to accurately determine the bearing's fault frequency.
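As a minimal illustration of reading a fault frequency from a reconstructed signal's spectrum, the sketch below locates the dominant spectral peak; the 12 kHz sampling rate and 157 Hz fault frequency are hypothetical values, not taken from the paper:

```python
import numpy as np

fs = 12_000                        # assumed sampling rate, Hz
fault_freq = 157.0                 # hypothetical fault frequency, Hz
t = np.arange(0, 1.0, 1 / fs)      # 1 s of data -> 1 Hz spectral resolution
x = np.sin(2 * np.pi * fault_freq * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
peak_freq = freqs[np.argmax(spectrum)]
print(peak_freq)  # → 157.0
```

With one second of data the FFT bin spacing is exactly 1 Hz, so an integer-valued fault frequency lands on a single bin.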
This paper proposes a method for extracting rolling bearing fault information by combining compressed sensing and a neural network. Four combinations of the main numerical parameters of the K-SVD algorithm are evaluated for sparse signal representation. The reconstruction performance of several common measurement matrices and greedy algorithms is compared and analyzed, and the measurement matrix and signal reconstruction algorithm best suited to this feature extraction method are selected. A neural network predicts the latter part of the observation data obtained by compressive sampling, further compressing the observations, and the original signal is reconstructed from the predicted values. The feasibility of the method is verified by simulation experiments. For the reconstruction of the bearing inner ring, outer ring, and rolling element signals, MRs of 0.83, 0.77, and 0.91, respectively, can be guaranteed, which is clearly superior to the traditional method. Additionally, since real signals are inevitably disturbed in practical applications, two situations are simulated: contamination by noise and random data loss. The minimum SNRs and the maximum data loss ratio that still ensure accurate fault information are obtained: the original signal's SNR must be at least 10 dB, the observations' SNR must be at least 35 dB, and the random loss rate of the original data must be below 30%.
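The noise-disturbance experiments summarized above inject noise at a controlled SNR; a minimal sketch of that step, assuming additive white Gaussian noise (the paper's exact noise model is not specified in this excerpt):

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, seed=0):
    """Add white Gaussian noise scaled so the result has the target SNR (dB).
    The Gaussian model is an assumption of this sketch."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)      # SNR = 10*log10(Ps / Pn)
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

t = np.linspace(0, 1, 4096, endpoint=False)
x = np.sin(2 * np.pi * 50 * t)
y = add_noise_at_snr(x, snr_db=10)                # the 10 dB threshold case
snr_est = 10 * np.log10(np.mean(x ** 2) / np.mean((y - x) ** 2))
print(round(snr_est, 2))  # close to 10 dB
```

The empirical SNR recovered from the noisy signal matches the target up to sampling error, which shrinks as the signal length grows.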
This work was supported in part by the Natural Science Foundation of Liaoning Province under Grants 2019ZD0112 and 2019ZD0099; in part by the National Natural Science Foundation of China under Grants 51475065, 51605068, and 51879027; by the Traction Power State Key Laboratory of Southwest Jiaotong University under Grant TPL2002; and by the Liaoning BaiQianWan Talents Program.
The authors declare no conflicts of interest.