To overcome the low accuracy of the traditional Extreme Learning Machine (ELM) network in the performance evaluation of the Rotate Vector (RV) reducer, a pattern recognition model of the ELM based on Ensemble Empirical Mode Decomposition (EEMD) fusion and an Improved artificial Jellyfish Search (IJS) algorithm is proposed for RV reducer fault diagnosis. Firstly, it is proved theoretically that the torque transmission of the RV reducer is periodic during normal operation, and this data periodicity can be effectively captured by combining the periodicity of the test signals of rotating machinery with EEMD. Secondly, the Logistic chaotic mapping used for population initialization in the JS algorithm is replaced by tent mapping, and a competition mechanism is introduced to form a new IJS algorithm. Simulation results on standard test functions show that the new algorithm converges faster and with higher accuracy. The new algorithm is then used to optimize the input-layer weights of the ELM, establishing the IJS-ELM pattern recognition model. The model's performance is tested on the XJTU-SY bearing experimental data set of Xi'an Jiaotong University, and the results show that the new model is superior to JS-ELM and ELM in multi-classification performance. Finally, the new model is applied to the fault diagnosis of the RV reducer; the results show that the proposed EEMD-IJS-ELM fault diagnosis model achieves higher accuracy and stability than other models.
Citation: Xiaoyan Wu, Guowen Ye, Yongming Liu, Zhuanzhe Zhao, Zhibo Liu, Yu Chen. Application of Improved Jellyfish Search algorithm in Rotate Vector reducer fault diagnosis[J]. Electronic Research Archive, 2023, 31(8): 4882-4906. doi: 10.3934/era.2023250
1. Introduction
Nature-inspired algorithms are well-known methods for finding near-optimal solutions to optimization problems and have successfully solved problems in many different domains [1,2]. These algorithms are inspired by natural behaviors or events among living creatures; examples include the social behavior of the mountain gazelle and the behavior of birds, monkeys, and other animals in the wild [3]. They have shown promise in solving real-world engineering problems and problems in many other fields. Despite the plethora of algorithms designed by researchers over the years, new algorithms remain of interest among optimization enthusiasts because the existing ones share two common problems: entrapment in local optima and slow convergence. The Coot Optimization Algorithm (COA) has shown promise in overcoming these difficulties of classical nature-inspired algorithms. COA is inspired by the social lifestyle exhibited by coots and is an effective and straightforward metaheuristic for finding near-optimal solutions [4]. Despite its great prosperity, the COA suffers from a slow convergence when solving very complex problems [5]. This weakness causes the COA to require an extremely high number of iterations to produce substantially good results, especially on large-scale complex optimization problems [5,6].
The poor balance of the exploration and exploitation stages significantly contributes to the slow convergence [6,7]. While exploitation guarantees the refinement of solutions in attractive parts of the search space, exploration is essential for the global search and to prevent a premature convergence to the local optima [8]. An imbalance between these two stages can seriously impair the algorithm's effectiveness, thus resulting in wasted searches and higher processing expenses that ultimately yield less-than-ideal outcomes [9].
In the literature, scholars have employed a range of corrective measures to enhance the performance of certain algorithms that have comparable shortcomings [10,11]. A typical example is the truncation parameter selection technique, which was adopted to improve the general performance of the Mountain Gazelle Optimizer (MGO) on standard benchmark test functions [12]. To enhance the overall performance of the Gorilla Troop Optimizer (GTO), A. Bright et al. [13] integrated a step adaptive simulation concept into the GTO.
In the case of the COA, Aslan et al. [14] proposed a modification by integrating a randomized mutation technique into the original COA to enhance its global search ability; however, excessive randomization within algorithms causes significant instabilities. Additionally, R. R. Mostafa et al. [7] modified the COA by introducing opposition-based and orthogonal-based learning approaches to improve the algorithm's performance. Moreover, the authors of [15] proposed a controlled randomization and transition-factor-based strategy to enhance the COA; this modification was geared towards the battery parameter estimation problem, so its performance in a variety of optimization fields was not established.
Similarly, this work seeks to modify the COA to improve its general performance through an adaptive sigmoid increasing inertia weight in the "Leader Movement" phase [16]. This is a crucial stage of the COA because it determines how the coots are led by their leader coot on the surface of the water. This influences the dynamics of the search as a whole [6]. The integrated adaptive weight is intended to dynamically balance the exploration and exploitation mechanisms throughout the search process.
To achieve this aim, the exploration is dominant at the early stages of the optimization search process and then gradually improves the exploitation mechanism in the later stages. Therefore, the adaptive sigmoid rising inertia weight is designed to begin with a smaller value and gradually increase [16].
The rest of the paper is organized as follows: the original COA and the proposed modification are presented in Section 2; the performance of the simulation's outcome is discussed in Section 3; and lastly, Section 4 concludes the paper by providing a thorough synopsis of the research and suggestions for potential future studies as recommendations.
2. Adaptive sigmoid increasing inertia weight-based modification of COA
This section presents the methodological approach followed in this research, which consists of the original COA, the proposed modification, and the test implementation.
2.1. The original coot optimization algorithm
The COA mimics the behavior of American coots as they navigate through seas or lakes. American coots have four distinct movement strategies: random movement, chain formation, moving toward group leader positions, or leading the group [6]. The initial generation is created randomly using Eqs (1) and (2):
CootPos(i)=rand(1,d)×(ub−lb)+lb,
(1)
lb = [lb1, lb2, …, lbd], ub = [ub1, ub2, …, ubd],
(2)
where CootPos(i) denotes the position of the ith coot, d represents the number of dimensions, ub and lb signify the upper and lower boundary vectors, respectively, and rand denotes a random vector within the range [0, 1].
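As an illustration, the initialization of Eqs (1) and (2) can be sketched in Python (the paper's own implementation is in MATLAB; the function and variable names here are illustrative):

```python
import random

def init_coots(n_coots, d, lb, ub):
    """Randomly place n_coots coots inside the search box, per Eq (1).

    lb and ub are length-d lists holding the per-dimension lower and
    upper boundaries of Eq (2).
    """
    return [[random.random() * (ub[j] - lb[j]) + lb[j] for j in range(d)]
            for _ in range(n_coots)]
```

Each row of the returned list is one CootPos(i) vector.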
Random movement:
If coots exhibit random movement, then they will consequently migrate towards a position denoted as Q, which can be determined through Eq (3):
Q=rand(1,d)×(ub−lb)+lb.
(3)
To prevent getting trapped in local optimal areas, if coots encounter a failure within a local region, then they will employ a position as determined through Eq (4):
CootPos(i)=CootPos(i)+A×R2×(Q−CootPos(i)),
(4)
where R2 ∈ [0, 1] and A can be calculated using Eq (5):
A = 1 − L × (1/Iter),
(5)
where L and Iter represent the current iteration and the maximum number of iterations, respectively.
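A minimal Python sketch of the random-movement step, combining Eqs (3)–(5) (the names are illustrative; the paper's implementation is in MATLAB):

```python
import random

def random_movement(coot, lb, ub, L, max_iter):
    """Move one coot toward a random target Q, per Eqs (3)-(5)."""
    d = len(coot)
    q = [random.random() * (ub[j] - lb[j]) + lb[j] for j in range(d)]  # Eq (3)
    A = 1 - L * (1 / max_iter)                                         # Eq (5)
    R2 = random.random()
    return [coot[j] + A * R2 * (q[j] - coot[j]) for j in range(d)]     # Eq (4)
```

Because A and R2 both lie in [0, 1], the new position is a point on the segment between the coot and Q, so it remains inside the boundaries.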
Chain movement:
The movement of the chain phase is represented mathematically by the following equation, denoted as Eq (6):
CootPos(i)=0.5×(CootPos(i−1)+CootPos(i)).
(6)
Moving towards group leader:
When adjusting its position according to the leader's position, the coot's movement is governed by the following equation, denoted as Eq (7):
K = 1 + (i MOD NL).
(7)
Here, i represents the index of the current coot, NL represents the number of leaders, and K denotes the leader that coot i follows. The update process utilizes the following equation, referred to as Eq (8):
CootPos(i) = LeaderPos(K) + 2 × R1 × cos(2Rπ) × (LeaderPos(K) − CootPos(i)),
(8)
where LeaderPos(K) is the position of the selected leader and R1 ∈ [0, 1].
Leader movement:
The leaders update their own positions around the best position found so far, as expressed in Eq (9):
LeaderPos(i) = B × R3 × cos(2Rπ) × (gBest − LeaderPos(i)) + gBest, if R4 < 0.5,
LeaderPos(i) = B × R3 × cos(2Rπ) × (gBest − LeaderPos(i)) − gBest, if R4 ≥ 0.5,
(9)
where gBest is the best position found so far, R ∈ [−1, 1], R1, R3, and R4 ∈ [0, 1], and B can be calculated using Eq (10):
B = 2 − L × (1/Iter),
(10)
where L represents the current iteration, and Iter represents the maximum number of iterations.
2.2. Proposed modification
The original COA has great potential for adoption in various optimization fields [7]. However, it converges slowly, which makes it require many iterations to produce a good optimization result [6]. This drawback is caused by a poor balance between exploration and exploitation, which prevents an efficient search.
To remedy this weakness, an adaptive sigmoid increasing inertia weight [16] is incorporated into the Leader Movement phase. This phase of the original COA is expressed in Eq (9), which describes how the leader coots lead the coot group across the water surface. The proposed weight is incorporated as shown in Eq (11):
LeaderPos(i) = B × R3 × cos(2Rπ) × (gBest − LeaderPos(i)) + gBest, if R4 < 0.5,
(11.1)
LeaderPos(i) = ω × B × R3 × cos(2Rπ) × (gBest − LeaderPos(i)) − gBest, if R4 ≥ 0.5,
(11.2)
where the value of the weight, ω, is calculated using the sigmoid increasing inertia weight expressed in Eq (12):
ω(i) = ωmin + (ωmax − ωmin) / (1 + e^(a − b × i / MaxIter)),
(12)
where a and b are parameters for adjustment, which are carefully chosen through numerical simulations. The weights ωmin and ωmax represent the minimum and maximum weight values, respectively, and i and MaxIter represent the current iteration and the maximum number of iterations, respectively.
Finally, ω(i) represents the adaptive weight value at the ith iteration. The proposed weight is applied in Eq (11) only when R4 ≥ 0.5, so that the full benefit of the inertia weight technique is gained while its main drawback, a possible premature convergence, is avoided. Algorithms that use inertia weights can converge too quickly to suboptimal solutions when the weight parameter is not tuned properly, but when properly implemented they provide an effective and fast convergence to global solutions. In this contribution, the weight has therefore been designed to keep the algorithm versatile during each iteration while avoiding these weaknesses: depending on the value of R4, Eq (11) is executed either with the proposed weight, where its good qualities can be exploited, or without the weight, where the algorithm can search the space freely without focusing solely on the temporarily best members of the population.
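The behavior of Eq (12) can be checked with a short Python sketch (the values chosen here for a, b, ωmin, and ωmax are illustrative defaults, not the tuned values from the paper):

```python
import math

def sigmoid_weight(i, max_iter, w_min=0.4, w_max=0.9, a=5.0, b=10.0):
    """Adaptive sigmoid increasing inertia weight of Eq (12).

    Starts near w_min in early iterations and rises smoothly toward
    w_max as i approaches max_iter.
    """
    return w_min + (w_max - w_min) / (1 + math.exp(a - b * i / max_iter))
```

With these defaults the weight begins close to 0.4 and ends close to 0.9, matching the design goal of exploration first, exploitation later.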
The implementation (pseudocode) of the modified COA (mCOA) is presented as shown in Algorithm 1:
Algorithm 1. The pseudo-code of mCOA.
Initialize the first coot population randomly using Eqs (1) and (2).
Initialize the parameters of P=0.5, NL (Number of leaders), Ncoot (Number of coots), ωmin, ωmax, a, b, and MaxIter (max iteration).
Random selection of coot leaders.
Calculate the fitness of coots and leaders.
Find the best coot or leader as the global optimum (gBest).
While (Iter≤MaxIter)
Calculate A, B, and ω using Eqs (5), (10), and (12) respectively.
If rand<P
R1, R2, and R3 are random vectors along the dimensions of the problem.
Else
R1, R2, and R3 are random numbers.
End
For i=1 to the number of coots
Calculate K by Eq (7).
If rand> 0.5
Update the positions of the coots by Eq (8)
Else
If rand < 0.5 and i ≠ 1
Update the positions of the coots by Eq (6)
else
Update the positions of the coots by Eq (4)
End-if
End-if
Calculate the fitness of the coot
If the fitness < the fitness of leader(k)
Temp = leader(k); leader(k)=coot; coot=Temp
End
End-for
For the number of leaders
If rand< 0.5
Update the position of the leader by Eq (11.1)
Else
Update the position of the leader by Eq (11.2)
End-if
If the fitness of the leader<gBest
Temp =gBest; gBest=leader; leader=Temp; (update Global optimum)
End-if
End
Iter=Iter+1;
End-while
The pseudo-code serves as a guide for the implementation of the proposed mCOA.
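Algorithm 1 can be condensed into a runnable Python sketch on a simple test function (the paper's code is in MATLAB). This is a simplified illustration under several assumptions: scalar bounds stand in for the boundary vectors, the vector/scalar random-number distinction is dropped, and the leader-following and leader-movement updates follow the forms reported for the original COA, with the placement of ω in the weighted branch assumed:

```python
import math
import random

def sphere(x):  # minimization test function, optimum 0 at the origin
    return sum(v * v for v in x)

def mcoa(obj, d=5, n_coots=20, n_leaders=4, lb=-10.0, ub=10.0,
         max_iter=200, w_min=0.4, w_max=0.9, a=5.0, b=10.0, seed=0):
    rng = random.Random(seed)
    new_pos = lambda: [rng.random() * (ub - lb) + lb for _ in range(d)]  # Eqs (1)-(3)
    coots = [new_pos() for _ in range(n_coots)]
    leaders = [new_pos() for _ in range(n_leaders)]
    gbest = min(coots + leaders, key=obj)[:]
    hist = []
    for it in range(1, max_iter + 1):
        A = 1 - it / max_iter                                                # Eq (5)
        B = 2 - it / max_iter                                                # Eq (10)
        w = w_min + (w_max - w_min) / (1 + math.exp(a - b * it / max_iter))  # Eq (12)
        for i in range(n_coots):
            k = i % n_leaders                                                # Eq (7)
            if rng.random() > 0.5:            # move toward the group leader, Eq (8)
                R = 2 * rng.random() - 1
                coots[i] = [leaders[k][j] + 2 * rng.random() * math.cos(2 * math.pi * R)
                            * (leaders[k][j] - coots[i][j]) for j in range(d)]
            elif rng.random() < 0.5 and i != 0:   # chain movement, Eq (6)
                coots[i] = [0.5 * (coots[i - 1][j] + coots[i][j]) for j in range(d)]
            else:                                  # random movement, Eq (4)
                q = new_pos()
                R2 = rng.random()
                coots[i] = [coots[i][j] + A * R2 * (q[j] - coots[i][j]) for j in range(d)]
            coots[i] = [min(max(v, lb), ub) for v in coots[i]]
            if obj(coots[i]) < obj(leaders[k]):    # swap coot with its leader
                coots[i], leaders[k] = leaders[k], coots[i]
        for k in range(n_leaders):                 # leader movement, Eq (11)
            R = 2 * rng.random() - 1
            step = B * rng.random() * math.cos(2 * math.pi * R)
            if rng.random() < 0.5:                 # Eq (11.1), unweighted branch
                leaders[k] = [step * (gbest[j] - leaders[k][j]) + gbest[j]
                              for j in range(d)]
            else:                                  # Eq (11.2), assumed placement of w
                leaders[k] = [w * step * (gbest[j] - leaders[k][j]) - gbest[j]
                              for j in range(d)]
            leaders[k] = [min(max(v, lb), ub) for v in leaders[k]]
            if obj(leaders[k]) < obj(gbest):       # update the global optimum
                gbest = leaders[k][:]
        hist.append(obj(gbest))
    return gbest, hist
```

Since gbest is only replaced when a strictly better leader is found, the recorded best-fitness history is non-increasing by construction.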
3. Simulation test setup on benchmark functions
The proposed mCOA was tested on thirteen commonly used standard benchmark test functions to establish its performance [17]. These are the same functions used in the literature of the original COA, the details of which can be found in [6]. The algorithm was coded in MATLAB (R2018a) on an HP Pavilion laptop with an AMD A8-6410 APU with AMD Radeon R5 Graphics at a 2.00 GHz clock speed, 4.00 GB of RAM, and a 64-bit operating system. Table 1 provides detailed information on the benchmark functions.
Table 1. Detailed information on the benchmark functions.
The simulation parameter settings used in the simulation experiment are presented in Table 2. These contain all the parameter settings used in the simulation of the original COA in the literature to facilitate a fair comparison.
On each test function, the simulation was executed thirty (30) times, and relevant statistical indicators were calculated: the best value (Min), the worst value (Max), the mean value (Avg), and the standard deviation (Std). These depict the possible best performance, the worst performance, the average performance, and the possible deviation when the algorithm is applied to a real-world optimization problem. Since no algorithm is optimal for every problem, relatively better-performing algorithms were identified on the basis of these statistical indicators [18].
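The four statistical indicators over the thirty runs can be computed with a small helper (a Python sketch; the function name is illustrative):

```python
import statistics

def run_stats(final_values):
    """Summarize the final objective values of repeated independent runs."""
    return {
        "Min": min(final_values),               # best run
        "Max": max(final_values),               # worst run
        "Avg": statistics.mean(final_values),   # average performance
        "Std": statistics.stdev(final_values),  # sample standard deviation
    }
```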
To establish the efficacy of the proposed mCOA, the simulation results were compared to those of the original COA and other state-of-the-art metaheuristic algorithms in [6], namely the genetic algorithm (GA) and the particle swarm optimization (PSO) algorithm. The comparison of simulation results on the thirteen (13) test functions is presented in Table 3, where the bold numbers represent the best performances.
Table 3. Comparison of results on the benchmark functions.
The results presented in Table 3 show the performance of the proposed mCOA compared to the GA, PSO, and COA. The mCOA produced the smallest Min, Max, Avg, and Std values, which indicates a better performance on minimization problems, on ten (10) of the thirteen (13) benchmark test functions, an average success rate of 76.92%. On F6, the PSO outperformed the other algorithms, while the GA outperformed the other algorithms on F13.
The convergence curves of the mCOA and the COA are presented in Figures 1–13, which show the convergence process from the first iteration to the last. These figures provide a clearer comparison of the two algorithms and help justify the superior performance of the mCOA over the COA.
Figure 1. Convergence characteristics of function F1.
On F1–F4, F7, F9, and F10, the mCOA has markedly better convergence characteristics than the COA: it performs better from the initial iteration through to the final iteration, producing a better final optimization value on these benchmark functions. On F6, F8, and F11–F13, the mCOA produced a slightly better convergence, on average, than the COA. Both algorithms converged quickly on F5.
4. Conclusions
A modification of the COA was developed to improve its global performance by incorporating an adaptive sigmoid increasing inertia weight in the Leader Movement phase. The mCOA was tested on the same 13 standard benchmark test functions used for the original COA, and the simulation outcome in MATLAB was compared to that of the COA, the PSO algorithm, and the GA. The mCOA outperformed the other algorithms on 10 of the 13 test functions while remaining competitive on the other 3. The enhanced version of the COA therefore delivers a better global performance. Based on this simulation performance, the mCOA is recommended for application to real-life optimization problems, especially in the field of engineering.
All authors declare no conflict of interest regarding the publication of this paper.
References
[1]
J. S. Shin, J. H. Kim, J. G. Kim, M. Jin, Development of a lifetime test bench for robot reducers for fault diagnosis and failure prognostics, J. Drive Control, 16 (2019), 33–41.
[2]
H. An, W. Liang, Y. Zhang, J. Tan, Hidden Markov model based rotate vector reducer fault detection using acoustic emissions, Int. J. Sens. Netw., 32 (2020), 116–125. https://doi.org/10.1504/IJSNET.2020.104927 doi: 10.1504/IJSNET.2020.104927
[3]
L. Chen, H. Hu, Z. Zhang, X. Wang, Application of nonlinear output frequency response functions and deep learning to RV reducer fault diagnosis, IEEE Trans. Instrum. Meas., 70 (2020), 1–14. https://doi.org/10.1109/TIM.2020.3029383 doi: 10.1109/TIM.2020.3029383
[4]
A. Rohan, I. Raouf, H. S. Kim, Rotate vector (RV) reducer fault detection and diagnosis system: towards component level prognostics and health management (PHM), Sensors, 20 (2020), 6845. https://doi.org/10.3390/s20236845 doi: 10.3390/s20236845
[5]
S. Wang, J. Tan, J. Gu, D. Huang, Study on torsional vibration of RV reducer based on time-varying stiffness, J. Vib. Eng. Technol., 9 (2021), 73–84. https://doi.org/10.1007/s42417-020-00211-8 doi: 10.1007/s42417-020-00211-8
[6]
I. Raouf, H. Lee, H. S. Kim, Mechanical fault detection based on machine learning for robotic RV reducer using electrical current signature analysis: a data-driven approach, J. Comput. Des. Eng., 9 (2022), 417–433. https://doi.org/10.1093/jcde/qwac015 doi: 10.1093/jcde/qwac015
[7]
Q. Xu, C. Liu, E. Yang, M. Wang, An improved convolutional capsule network for compound fault diagnosis of RV reducers, Sensors, 22 (2022), 6442. https://doi.org/10.3390/s22176442 doi: 10.3390/s22176442
[8]
Y. He, H. Wang, S. Gu, New fault diagnosis approach for bearings based on parameter optimized VMD and genetic algorithm, J. Vibr. Shock, 40 (2021), 184–189. https://doi.org/10.13465/j.cnki.jvs.2021.06.025 doi: 10.13465/j.cnki.jvs.2021.06.025
[9]
Z. Huang, Y. C. Liu, R. J. Liao, X. J. Cao, Thermal error modeling of numerical control machine tools based on neural network optimized by SSO algorithm, J. Northeast. Univ. (Nat. Sci.), 42 (2021), 1569–1578. https://doi.org/10.12068/j.issn.1005-3026.2021.11.008 doi: 10.12068/j.issn.1005-3026.2021.11.008
[10]
Z. Zhu, Y. Lei, G. Qi, Y. Chai, N. Mazur, Y. An, et al., A review of the application of deep learning in intelligent fault diagnosis of rotating machinery, Measurement, 206 (2022), 112346. https://doi.org/10.1016/j.measurement.2022.112346 doi: 10.1016/j.measurement.2022.112346
[11]
X. Huang, G. Qi, N. Mazur, Y. Chai, Deep residual networks-based intelligent fault diagnosis method of planetary gearboxes in cloud environments, Simul. Modell. Pract. Theory, 116 (2022), 102469. https://doi.org/10.1016/j.simpat.2021.102469 doi: 10.1016/j.simpat.2021.102469
[12]
F. He, Q. Ye, A bearing fault diagnosis method based on wavelet packet transform and convolutional neural network optimized by simulated annealing algorithm, Sensors, 22 (2022), 1410. https://doi.org/10.3390/s22041410 doi: 10.3390/s22041410
[13]
F. An, J. Wang, Rolling bearing fault diagnosis algorithm using overlapping group sparse-deep complex convolutional neural network, Nonlinear Dyn., 108 (2022), 2353–2368. https://doi.org/10.1007/s11071-022-07314-9 doi: 10.1007/s11071-022-07314-9
[14]
K. Sharma, D. Goyal, R. Kanda, Intelligent fault diagnosis of bearings based on convolutional neural network using infrared thermography, Proc. Inst. Mech. Eng., Part J: J. Eng. Tribol., 236 (2022), 2439–2446. https://doi.org/10.1177/13506501221082746 doi: 10.1177/13506501221082746
[15]
H. Wang, W. Jing, Y. Li, H. Yang, Fault diagnosis of fuel system based on improved extreme learning machine, Neural Process. Lett., 53 (2021), 2553–2565. https://doi.org/10.1007/s11063-019-10186-7 doi: 10.1007/s11063-019-10186-7
[16]
B. H. Liu, M. M. Wang, Switch machine based on wavelet energy spectrum entropy and improved ELM fault prediction, J. Yunnan Univ. (Nat. Sci. Ed.), 44 (2022), 497–504. https://doi.org/10.7540/j.ynu.20210174 doi: 10.7540/j.ynu.20210174
[17]
X. L. Ge, X. Zhang, Bearing fault diagnosis method using singular energy spectrum and improved ELM, J. Electr. Mach. Control, 25 (2021), 80–87. https://doi.org/10.15938/j.emc.2021.05.010 doi: 10.15938/j.emc.2021.05.010
[18]
V. K. Rayi, S. P. Mishra, J. Naik, P. K. Dash, Adaptive VMD based optimized deep learning mixed kernel ELM autoencoder for single and multistep wind power forecasting, Energy, 244 (2022), 122585. https://doi.org/10.1016/j.energy.2021.122585 doi: 10.1016/j.energy.2021.122585
[19]
J. S. Chou, D. N. Truong, A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean, Appl. Math. Comput., 389 (2021), 125535. https://doi.org/10.1016/j.amc.2020.125535 doi: 10.1016/j.amc.2020.125535
[20]
Z. Zhao, G. Ye, Y. Liu, Z. Zhang, Recognition of fault state of RV reducer based on self-organizing feature map neural network, J. Phys. Conf. Ser., 1986 (2021), 012086. https://doi.org/10.1088/1742-6596/1986/1/012086 doi: 10.1088/1742-6596/1986/1/012086
[21]
L. G. Kong, Analysis Research in Reducer Fault Diagnosis Based on Fault Tree Method, Master's thesis, Dalian University of Technology in Dalian, 2018.
[22]
J. F. Yu, S. Liu, F. F. Han, Z. Y. Xiao, Antlion optimization algorithm based on Cauchy variation (in Chinese), Microelectron. Comput., 36 (2019), 45–49+54. https://doi.org/10.19304/j.cnki.issn1000-7180.2019.06.010 doi: 10.19304/j.cnki.issn1000-7180.2019.06.010
[23]
Y. B. Wang, Research and Application of the Distance-Based Ant Lion Optimizer, Master's thesis, Hunan University in Hunan, 2017.
[24]
R. C. Eberhart, Y. Shi, Comparing inertia weights and constriction factors in particle swarm optimization, in Proceedings of the 2000 Congress on Evolutionary Computation, 1 (2000), 84–88. https://doi.org/10.1109/CEC.2000.870279
[25]
Z. Wu, N. E. Huang, Ensemble empirical mode decomposition: a noise-assisted data analysis method, Adv. Adapt. Data Anal., 1 (2009), 1–41. https://doi.org/10.1142/S1793536909000047 doi: 10.1142/S1793536909000047
[26]
J. M. Chen, Z. C. Liang, Research on speech enhancement algorithm based on EEMD data preprocessing and DNN, J. Ordnance Equip. Eng., 40 (2019), 96–103. https://doi.org/10.11809/bqzbgcxb2019.06.021 doi: 10.11809/bqzbgcxb2019.06.021
[27]
Y. J. Wang, S. Q. Kang, Y. Zhang, X. Liu, Y. C. Jiang, V. I. Mikulovich, Condition recognition method of rolling bearing based on ensemble empirical mode decomposition sensitive intrinsic mode function selection algorithm, J. Electron. Inf. Technol., 36 (2014), 595–600. https://doi.org/10.3724/SP.J.1146.2013.00434 doi: 10.3724/SP.J.1146.2013.00434
[28]
G. B. Huang, Q. Y. Zhu, C. K. Siew, Extreme learning machine: theory and applications, Neurocomputing, 70 (2006), 489–501. https://doi.org/10.1016/j.neucom.2005.12.126 doi: 10.1016/j.neucom.2005.12.126
[29]
B. Wang, Y. Lei, N. P. Li, N. B. Li, A hybrid prognostics approach for estimating remaining useful life of rolling element bearings, IEEE Trans. Reliab., 69 (2018), 401–412. https://doi.org/10.1109/TR.2018.2882682 doi: 10.1109/TR.2018.2882682
[30]
Y. G. Yang, Research on Rolling Bearing Fault Diagnosis Method Based on Neural Network and Support Vector Machine, Master's thesis, Lanzhou Jiaotong University in Lanzhou, 2021.