[1] | A. M. S. M. H. S. Attanayaka, J. P. Karunadasa, K. T. M. U. Hemapala . Estimation of state of charge for lithium-ion batteries - A Review. AIMS Energy, 2019, 7(2): 186-210. doi: 10.3934/energy.2019.2.186 |
[2] | Xiaoyu Zheng, Dewang Chen, Yusheng Wang, Liping Zhuang . Remaining useful life indirect prediction of lithium-ion batteries using CNN-BiGRU fusion model and TPE optimization. AIMS Energy, 2023, 11(5): 896-917. doi: 10.3934/energy.2023043 |
[3] | Saad Jarid, Manohar Das . An Electro-Thermal Model based fast optimal charging strategy for Li-ion batteries. AIMS Energy, 2021, 9(5): 915-933. doi: 10.3934/energy.2021043 |
[4] | François Kremer, Stéphane Rael, Matthieu Urbain . 1D electrochemical model of lithium-ion battery for a sizing methodology of thermal power plant integrated storage system. AIMS Energy, 2020, 8(5): 721-748. doi: 10.3934/energy.2020.5.721 |
[5] | Eduardo Enrique Martinez Jorges, António M.N. Quintino, Diogo M.F. Santos . Economic analysis of lithium-ion battery recycling. AIMS Energy, 2023, 11(5): 960-973. doi: 10.3934/energy.2023045 |
[6] | Steven B. Sherman, Zachary P. Cano, Michael Fowler, Zhongwei Chen . Range-extending Zinc-air battery for electric vehicle. AIMS Energy, 2018, 6(1): 121-145. doi: 10.3934/energy.2018.1.121 |
[7] | Rasool M. Imran, Kadhim Hamzah Chalok . Innovative mode selective control and parameterization for charging Li-ion batteries in a PV system. AIMS Energy, 2024, 12(4): 822-839. doi: 10.3934/energy.2024039 |
[8] | Chi Van Nguyen, Thuy Nguyen Vinh . Design of energy balancing circuit for battery cells connected in series based on modifying the bidirectional CuK converter. AIMS Energy, 2022, 10(2): 219-235. doi: 10.3934/energy.2022012 |
[9] | Samson Obu Showers, Atanda Kamoru Raji . State-of-the-art review of fuel cell hybrid electric vehicle energy management systems. AIMS Energy, 2022, 10(3): 458-485. doi: 10.3934/energy.2022023 |
[10] | Kritanjali Das, Santanu Sharma . Coulombic efficiency estimation technique for eco-routing in electric vehicles. AIMS Energy, 2022, 10(3): 356-374. doi: 10.3934/energy.2022019 |
Lithium-ion batteries (LIBs), known for their high energy density, low self-discharge rate, and long lifespan, have been widely used in electric vehicles, renewable energy storage systems, and portable electronic devices [1,2]. However, with long-term operation, the overall performance of LIBs gradually degrades [3]. Therefore, accurate estimation of the state of health (SOH) of LIBs is crucial for ensuring efficient energy management and the safety of the overall battery energy storage system [4,5]. Generally, the battery SOH is influenced by factors such as the discharge rate, the operating temperature, and the number of charge-discharge cycles [6]. The SOH is commonly defined as the ratio of the current maximum usable capacity to the maximum usable capacity of a new battery [7,8]. Currently, features from the constant current-constant voltage (CC-CV) charging protocol are widely used for the SOH estimation of LIBs due to their simplicity and ease of acquisition [9,10]. However, in practical applications users rarely discharge the battery fully and then fully recharge it in accordance with the CC-CV protocol, which poses a significant challenge for data-driven SOH estimation methods that rely on complete data from the entire charging phase. Therefore, SOH estimation based on partial or optimal charging data has become an urgent requirement in practical applications.
Existing SOH estimation methods can be roughly divided into three categories: 1) empirical or semi-empirical models [11,12]; 2) physics-based models [13,14,15]; and 3) data-driven methods [16,17,18]. Although empirical models are simple, they lack a clear physical interpretation of LIB behavior. Physics-based models are accurate, but using them to estimate the battery SOH is complicated for users and engineers. Nowadays, data-driven methods have gained increasing attention in the battery SOH estimation field due to their model-free nature [19,20,21,22,23]. Wen et al. [19] employed IC curve features with a BP neural network to achieve high estimation precision for lithium battery SOH prediction across various temperature conditions. Ren et al. [20] proposed a novel SOH estimation method for the lithium-ion battery pack based on cross-generative adversarial networks; with a hybrid extreme learning machine based on an adaptive boosting algorithm, this method could achieve accurate SOH estimation with incomplete data. Li et al. [21] extracted health indicators from public datasets and eliminated redundant features through Pearson correlation coefficients and principal component analysis. In addition, Liu et al. [22] proposed a method to capture battery degradation from the measured voltage, temperature, and current data, which could achieve accurate SOH estimation even with minimal training data. Moreover, some studies explored joint data- and model-driven methods for more precise battery state estimation based on the unscented Kalman filter or deep learning methods [18,23]. Although these research efforts have yielded satisfactory accuracy, the feasibility of data acquisition and processing in practical applications still needs to be considered. From the engineering standpoint, obtaining the relevant information from smaller datasets is crucial. Therefore, this work employs a small amount of data based on an optimal charging voltage interval and efficient machine-learning methods to achieve the SOH estimation of the lithium battery.
As is well known, the charging curve of an LIB presents regular changes as its performance degrades [24,25]. Therefore, using data-driven methods with features extracted from the charging curves of LIBs to estimate their SOH has become a new trend. Generally, these data-driven methods for the battery SOH estimation consider the changes over the entire charging phase, which leads to low computational efficiency and high computational cost [26].
To improve these data-driven methods based on the entire charging interval, Xiong et al. [27] proposed a feature selection method based on the correlation coefficient and the ReliefF algorithms. This method could significantly improve the accuracy of the battery SOH estimation. The corresponding results demonstrated that the high-precision SOH estimation could be achieved based on the selected features. Li et al. [28] analyzed the charging time during the constant current and the constant voltage stages, as well as the proportion of the charging time of the constant voltage stage. Then, a correlated feature for the battery SOH estimation could be determined. This study indicated that both the constant current and the constant voltage charging phases could be considered for the feature extraction of the battery SOH estimation. Lin et al. [26] used data-driven algorithms with the features extracted from four partial voltage segments to achieve an accurate estimation of the battery SOH. However, the above studies did not conduct detailed research on the selection of charging voltage intervals. The previous selection methods were only based on engineering experience.
According to previous studies, the incremental capacity (IC) curve has commonly been used for the feature extraction of the battery SOH estimation [29]. The IC curve effectively reflects the capacity and voltage changes during charging and can be used to identify characteristic peaks related to phase transitions during the insertion of lithium ions into the electrode materials. Previous studies indicated that the charging voltage interval selection could be performed based on the peak positions of the IC curve [27,30]. Bian et al. [31] proposed a battery SOH estimation method combining an open-circuit voltage (OCV) model and IC analysis (ICA); according to the OCV model, interference-free IC curves could be obtained, enabling the extraction of a series of morphological features. Li et al. [32] also proposed a method combining the ICA and grey relational analysis for the battery SOH estimation. In addition, Lin et al. [33] used the internal resistance and the peak/valley points of the IC curve as features, on the basis of which accurate SOH estimation could be achieved with a machine learning algorithm. However, these studies did not propose a clear method for the optimal voltage interval selection. In summary, the existing references generally lack a systematic exploration of battery SOH estimation based on a limited or optimal charging interval; the selection of a partial charging voltage interval has been based only on engineering experience and lacks detailed correlation analyses between the selected charging voltage interval and the SOH of LIBs. To improve computational efficiency and reduce computational cost, this paper employs two data-driven methods, combined with the ICA and the Pearson correlation analysis, to select the optimal charging voltage interval for the SOH estimation of LIBs.
The main contributions of this paper are as follows:
(1) The proposed SOH estimation method combines the ICA and Pearson correlation analysis, which can identify and select the optimal charging voltage interval. On this basis, the most relevant health features for the battery SOH estimation can be extracted from the selected charging voltage interval.
(2) With the most relevant health features from the selected charging voltage interval, it is demonstrated that high accuracy in battery SOH estimation can be achieved with the data-driven methods even with a small amount of training data, which can effectively improve computational efficiency and reduce computational cost.
(3) Two typical data-driven methods, the random forest regression (RFR) and the support vector regression (SVR), are analyzed for the SOH estimation. Results show that both methods have their own irreplaceable advantages: the SVR has stronger generalization ability, whereas the RFR possesses better overall fitting capability.
This paper includes four sections after the introduction. Section 2 presents the selection and detailed analyses for the optimal charging voltage interval. Section 3 introduces two data-driven methods for the SOH estimation. Results and discussions are presented in Section 4. Finally, conclusions are given in Section 5.
To obtain the charging data for the SOH estimation, three 18650 LIBs are used for the experimental test. The detailed parameters are shown in Table 1. Each battery first undergoes CC-CV charging to the upper cut-off voltage and then CC discharging to the lower cut-off voltage. Figure 1 shows the capacity degradation curves of the three 18650 LIBs. In the experiment, Cell1 is charged at 1 C and discharged at 0.5 C; after 509 charge-discharge cycles, its SOH reaches 91.8%. Cell2 is charged and discharged at 1 C; its SOH is 81.03% after 720 cycles. Cell3 is charged at 1 C and discharged at 3 C; its SOH is 80.69% after 1181 cycles. All charge and discharge tests for the three 18650 LIBs are conducted at an ambient temperature of 25 ℃, and a 1-hour rest period is applied between the CC-CV charging and the CC discharging.
Positive electrode material | Nickel Manganese Cobalt Oxide |
Negative electrode material | Graphite |
Nominal voltage | 3.7 V |
Nominal capacity | 2.4 Ah |
Operating voltage range | 2.5~4.2 V |
It is worth noting that the tested lithium battery is a modified high-power lithium battery designed for unmanned aerial vehicle (UAV) applications. Such a high-power battery performs better when operated at a suitably high power output than at a low one; therefore, under high-power discharge conditions it may degrade more slowly than under low-power discharge conditions. For more details, please refer to our previous work in [34].
Figure 2 presents the capacity-voltage (C-V) curves based on the experimental data. It can be seen that the envelope area of the C-V curve decreases along with the increasing number of operation cycles. It is evident that the C-V curves contain rich information about the aging or capacity degradation of LIBs. In fact, the voltage points in the regions with the most significant changes on the C-V curves can serve as external behavior characteristics of the battery aging [35].
By differentiating the C-V curves of the three 18650 LIBs, specific voltage points can be observed. The corresponding definition for the IC can be expressed as:
$IC = \dfrac{dQ}{dV}$ | (1)
where Q is the charging capacity and V is the charging voltage.
Furthermore, an appropriate voltage step needs to be designed, as the voltage difference appears in the denominator in numerical differentiation. If the voltage step is too large, the health features from the IC curve would not be obvious. Conversely, if the voltage step is too small, it may result in a sudden jump in the calculation value. After multiple calculations and corrections, a voltage sequence VC with a voltage step of 0.015 V is constructed, as shown in Eq (2). The corresponding capacity values for this voltage sequence are determined by using smooth spline interpolation based on the C-V curves.
$V_C = [2.5, 2.515, 2.530, \ldots, 4.20]$ | (2)
Based on Eqs (1) and (2), the formula for the IC curve can be rewritten as follows:
$IC = \dfrac{\Delta Q}{\Delta V} = \dfrac{Q_{k+1} - Q_k}{0.015}$ | (3)
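As an illustration of this numerical procedure, the short Python sketch below fits a smoothing spline to one charging C-V curve and differentiates it on the 0.015 V grid of Eq (2); the function name and the spline smoothing factor are illustrative assumptions, not details reported in this study.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def ic_curve(voltage, capacity, v_min=2.5, v_max=4.2, step=0.015):
    """Approximate IC = dQ/dV on a fixed voltage grid (Eqs (2) and (3)).

    voltage, capacity: arrays logged along one charging curve, with the
    voltage assumed to be monotonically increasing.
    """
    # Smooth spline fit of the C-V curve; the smoothing factor s is an
    # illustrative choice, not a value reported in the paper.
    spline = UnivariateSpline(voltage, capacity, s=1e-4)
    v_grid = np.arange(v_min, v_max + step / 2, step)  # 2.5, 2.515, ..., ~4.20
    q_grid = spline(v_grid)
    ic = np.diff(q_grid) / step                        # (Q_{k+1} - Q_k) / 0.015
    return v_grid[:-1], ic
```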
Figure 2(d–f) shows the changes of the IC curves over the whole charging voltage interval for different operation cycles. To select the optimal charging voltage interval, Figure 3 illustrates the enlarged and detailed IC curves of Cell2 at different SOH values. It can be seen that the first, second, and third peaks are located around 3.52 V, 3.73 V, and 3.95 V, respectively, while the first and second valleys are located around 3.65 V and 4.02 V, respectively. In addition, the peak and valley points of Cell1 and Cell3 are similar to those of Cell2.
It has been demonstrated that the peak and valley points on the IC curves are highly correlated with the kinetics of electrochemical reactions and the phase transitions within the electrode materials [36]. For these peak points, the rate of change in battery capacity in accordance with the voltage reaches its maximum value, indicating the fastest progress of chemical reactions. Similarly, the valley points also have significant physical meanings. For these valley points, the rate of change in capacity in accordance with the voltage reaches its minimum value, reflecting the changes in the activity of the electrode materials during the charge-discharge process. Therefore, this study employs the Pearson correlation analysis to investigate the degree of correlation between these peak/valley points and the battery aging. The formula for the Pearson correlation analysis can be represented as follows:
$r_{X,Y} = \dfrac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^{2}}\,\sqrt{\sum_{i=1}^{n}(Y_i - \bar{Y})^{2}}}$ | (4)
where $X_i$ is the $i$th feature value, $Y_i$ is the $i$th SOH value, and $\bar{X}$ and $\bar{Y}$ are the average values of the feature sequence and the battery SOH sequence, respectively.
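For reference, a minimal implementation of Eq (4) is sketched below; it mirrors the formula directly with NumPy, and the array names are placeholders for the per-cycle feature and SOH sequences.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between a feature sequence x and the
    SOH sequence y, following Eq (4)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / (np.sqrt(np.sum(xc ** 2)) * np.sqrt(np.sum(yc ** 2)))

# Example (illustrative arrays): r = pearson_r(peak1_positions, soh_values)
```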
The Pearson correlation coefficients between these peak/valley points and the battery SOH are presented in Table 2. The correlation coefficients are all negative, indicating that the peak and valley points exhibit an overall leftward shift during the capacity degradation process, which is consistent with the characteristics illustrated in Figure 2(d–f). Hence, these peak and valley points provide suitable locations for the calibration of the charging voltage intervals.
Cell | peak 1 | peak 2 | peak 3 | valley 1 | valley 2 |
Cell1 | –0.9760 | –0.9762 | –0.9465 | –0.9710 | –0.9800 |
Cell2 | –0.9868 | –0.9877 | –0.9770 | –0.9906 | –0.9868 |
Cell3 | –0.9960 | –0.9959 | –0.9500 | –0.9913 | –0.9948 |
As shown in Table 3, ten voltage intervals can be defined based on the five peak/valley points. Additionally, to achieve comprehensive correlation analyses between different voltage intervals and the battery SOH, the voltage interval covering the entire constant current charging phase is also included in this study. With the defined voltage intervals, the relationship among the charging time, the charging capacity, and the SOH is investigated based on the Pearson correlation analysis. It should be noted that the charging voltage intervals before 3.52 V and after 4.02 V are not suitable for the battery SOH estimation, as existing research has confirmed that the data at the beginning and end of the charging process cannot effectively reflect the battery aging characteristics [27]. Moreover, it can be observed from Figure 2 that the aging trends in these two charging voltage intervals are not obvious. Therefore, the maximum charging voltage interval is designed as 3.52~4.02 V.
Voltage interval | Voltage range |
S1 | 3.52~3.65 V |
S2 | 3.52~3.73 V |
S3 | 3.52~3.95 V |
S4 | 3.52~4.02 V |
S5 | 3.65~3.73 V |
S6 | 3.65~3.95 V |
S7 | 3.65~4.02 V |
S8 | 3.73~3.95 V |
S9 | 3.73~4.02 V |
S10 | 3.95~4.02 V |
As shown in Table 4, the Pearson correlation between the features from different charging voltage intervals and the SOH of the three 18650 LIBs reveals that the S3 segment has the highest correlation with the battery SOH among all voltage intervals. It is noteworthy that the S11 segment (i.e., the entire constant current charging phase) shows a lower correlation than the S3 segment, which indicates the presence of information redundancy over the whole charging phase. The correlation between the S4 segment and battery aging is also very high, second only to that of the S3 segment. Figure 4 presents the correlation analyses among the normalized charging time, the normalized charging capacity, and the battery SOH for different charging voltage intervals, where ΔQ and Δt represent the two battery aging features (i.e., the charging capacity and time) extracted from different voltage intervals. It can be observed that the trend in the S3 segment closely resembles the battery aging trend, which provides a more intuitive demonstration of the effectiveness of the aging features based on the selected S3 segment. The use of ΔQ and Δt as features to reflect the battery aging trend can also be found in the literature [37]. Finally, this study uses the charging capacity and time extracted from the S3 segment as features for the SOH estimation.
Voltage interval | Cell1 | Cell2 | Cell3 |
S1 | 0.6308 | 0.9395 | 0.9282 |
S2 | 0.8323 | 0.9753 | 0.9657 |
S3 | 0.9906 | 0.9988 | 0.9922 |
S4 | 0.9883 | 0.9986 | 0.9919 |
S5 | 0.9237 | 0.9893 | 0.9807 |
S6 | 0.9583 | 0.9917 | 0.9821 |
S7 | 0.9731 | 0.9951 | 0.9838 |
S8 | 0.7430 | 0.9951 | 0.9838 |
S9 | 0.8089 | 0.9631 | 0.8157 |
S10 | 0.5981 | 0.9609 | 0.8483 |
S11 | 0.9533 | 0.9960 | 0.9854 |
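The ΔQ and Δt features discussed above can be obtained by accumulating the logged charging capacity and time inside a chosen voltage window. The sketch below illustrates this for the S3 segment; the function name and the default bounds are assumptions consistent with Table 3, not code from the original study.

```python
import numpy as np

def segment_features(voltage, capacity, time, v_low=3.52, v_high=3.95):
    """Charging capacity (ΔQ) and time (Δt) accumulated inside one voltage
    segment of a single CC charging phase; defaults correspond to S3."""
    mask = (voltage >= v_low) & (voltage <= v_high)
    if not mask.any():
        return np.nan, np.nan
    delta_q = capacity[mask][-1] - capacity[mask][0]  # ΔQ over the segment
    delta_t = time[mask][-1] - time[mask][0]          # Δt over the segment
    return delta_q, delta_t
```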
Nowadays, using data-driven methods for the battery SOH estimation has become a research hotspot [31,32,33]. These data-driven methods employ machine learning, deep learning, or statistical techniques to extract features. Machine learning methods can be categorized into supervised learning, unsupervised learning, and semi-supervised learning [38,39]. Deep learning methods mainly include artificial neural networks (ANN) [26], recurrent neural networks (RNN) [40], and their variants, such as long short-term memory (LSTM) networks [41]. Statistical methods include regression analysis and Bayesian methods, which utilize prior knowledge and data for inference and analysis [42]. Among the aforementioned methods, the machine-learning algorithms RFR and SVR offer significant advantages in computational efficiency and model interpretability, enabling them to be implemented easily. By contrast, traditional neural networks, particularly deep neural networks, typically require substantial training time and high-performance computing resources to deal with large-scale datasets and complex nonlinear calculations. The corresponding steps involved in the training process, such as backpropagation and gradient descent, are often time-consuming, limiting their feasibility for real-time or near real-time applications. Furthermore, due to their numerous parameters and intricate nonlinear transformations, neural networks are often regarded as black-box models. Although advancements in visualization techniques and interpretability algorithms have been made in recent years, directly understanding and interpreting the decision-making processes of neural networks remains a significant challenge.
The RFR makes predictions through an ensemble of decision trees, resulting in satisfactory model performance and a reduced risk of overfitting, while the SVR exhibits strong generalization ability, achieving good fitting accuracy even with a small training set. Therefore, this study employs the SVR and the RFR methods for the battery SOH estimation based on the selected charging voltage interval, with an 80% training and 20% test dataset split. An n × 7 input data matrix is constructed to feed the RFR and the SVR models, as shown in Eq (5).
$\mathrm{Input} = \begin{bmatrix} peak_1^{(1)} & peak_2^{(1)} & peak_3^{(1)} & valley_1^{(1)} & valley_2^{(1)} & \Delta Q^{(1)} & \Delta t^{(1)} \\ peak_1^{(2)} & peak_2^{(2)} & peak_3^{(2)} & valley_1^{(2)} & valley_2^{(2)} & \Delta Q^{(2)} & \Delta t^{(2)} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ peak_1^{(n)} & peak_2^{(n)} & peak_3^{(n)} & valley_1^{(n)} & valley_2^{(n)} & \Delta Q^{(n)} & \Delta t^{(n)} \end{bmatrix}$ | (5)
where $\Delta Q^{(n)}$ and $\Delta t^{(n)}$ represent the charging capacity and charging time within the selected charging voltage interval during the $n$-th cycle, respectively; the value of $n$ is determined by 80% of the length of the input sequence; the peak and valley terms represent the corresponding peak/valley points, and the superscript indicates the operation cycle.
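A minimal sketch of assembling the n × 7 matrix of Eq (5) and performing the 80%/20% split with scikit-learn is given below; keeping the chronological cycle order (shuffle=False) is an assumption, since only the split ratio is stated above.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def build_dataset(peaks, valleys, dq, dt, soh, test_size=0.2):
    """Assemble the n x 7 input matrix of Eq (5) and split it 80 %/20 %.

    peaks:   (n, 3) array of the three IC peak positions per cycle,
    valleys: (n, 2) array of the two IC valley positions per cycle,
    dq, dt:  per-cycle charging capacity and time from the selected segment,
    soh:     per-cycle SOH labels.
    """
    X = np.column_stack([np.asarray(peaks), np.asarray(valleys),
                         np.asarray(dq), np.asarray(dt)])
    y = np.asarray(soh, dtype=float)
    # shuffle=False keeps the cycles in chronological order (an assumption).
    return train_test_split(X, y, test_size=test_size, shuffle=False)

# X_train, X_test, y_train, y_test = build_dataset(peaks, valleys, dq, dt, soh)
```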
The RFR is a powerful ensemble learning method used for predicting continuous outcomes. It builds upon decision trees and ensemble techniques to enhance predictive performance and robustness. The core of the random forest is the decision tree, which partitions the feature space into regions with homogeneous target values. Formally, for a dataset $D = \{(x_i, y_i)\}_{i=1}^{n}$, a decision tree $T$ partitions the data based on the input features so as to minimize a loss function, namely the sum of squared differences between the actual values $y_i$ and the predicted values $\hat{y}_j$ in each leaf node $R_j$:
$\min_{T} \sum_{j=1}^{J} \sum_{x_i \in R_j} (y_i - \hat{y}_j)^{2}$ | (6)
where $J$ is the number of terminal nodes (leaves), $R_j$ represents the region corresponding to the $j$-th leaf, and $\hat{y}_j$ is the predicted value in region $R_j$, i.e., the mean of the $y_i$ in $R_j$.
Single decision trees are prone to overfitting and high variance. Ensemble methods mitigate this by aggregating multiple models to improve generalization. Bagging is one of the ways to achieve ensemble methods, which involves 1) generating B bootstrap samples from the original dataset; 2) training a base learner (e.g., decision tree) on every bootstrap sample; and 3) aggregating the predictions from all base learners. For regression, the bagged predictor $\hat{f}_{\mathrm{bag}}(x)$ is:
$\hat{f}_{\mathrm{bag}}(x) = \dfrac{1}{B}\sum_{b=1}^{B} \hat{f}_b(x)$ | (7)
where $\hat{f}_b(x)$ is the prediction from the $b$-th base learner.
Random forests extend bagging by introducing additional randomness in the model, particularly in feature selection. The random forest comprises B decision trees that are trained on their bootstrap samples. Meanwhile, every decision tree uses a random subset of features when conducting splits. Its key steps include: 1) for each tree b, generating a bootstrap sample $D_b$ from $D$; 2) selecting a random subset of m features from the total p features (m < p) and determining the best split among these at each node in tree b; 3) growing each tree to its maximum depth without pruning. For a new input x, the RFR prediction $\hat{y}(x)$ is the average of the predictions from all trees:
$\hat{y}(x) = \dfrac{1}{B}\sum_{b=1}^{B} \hat{y}_b(x)$ | (8)
where $\hat{y}_b(x)$ is the prediction from the $b$-th tree.
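A minimal RFR sketch with scikit-learn is shown below, assuming the X_train/y_train split from the dataset sketch above; the hyperparameters n_estimators and max_features are illustrative defaults, not values reported in this study, while random_state fixes the seed for reproducibility.

```python
from sklearn.ensemble import RandomForestRegressor

# Illustrative hyperparameters; each tree is trained on a bootstrap sample
# and the final prediction is the average over all trees (Eq (8)).
rfr = RandomForestRegressor(n_estimators=100, max_features="sqrt", random_state=42)
rfr.fit(X_train, y_train)
soh_pred_rfr = rfr.predict(X_test)
```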
The SVR extends the concepts of the support vector machine, originally designed for classification tasks, to handle regression problems. The core idea of the SVR is to model the relationship between the input features and a continuous target variable with a function whose deviation from the actual target values stays within a predefined margin ε for all training data points. The regression function f(x) of the SVR is typically modeled as follows:
$f(x) = w^{T}\Phi(x) + b$ | (9)
where w is the weight vector, x is the input feature vector, Φ(x) is the mapping of x, and b is the bias term.
The SVR employs kernel functions to implicitly map the input features into a high-dimensional space, enabling the modeling of complex patterns. Moreover, the SVR utilizes the ε-insensitive loss function, which disregards errors within a margin of ε. This approach allows the model to focus on significant deviations, promoting robustness and reducing the impact of noise. The ε-insensitive loss for a single data point (xi, yi) is represented as:
$L(y, f(x)) = \max(0, |y - f(x)| - \varepsilon)$ | (10)
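A one-function NumPy sketch of Eq (10) is given below; the default margin value is illustrative only.

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.01):
    """ε-insensitive loss of Eq (10): errors inside the ε-tube cost nothing.
    eps=0.01 is an illustrative margin, not a value from this study."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)
```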
The objective of the SVR is to find the function f(x) that minimizes the complexity of the model while ensuring that the predictions lie within the ε-margin of the actual target values. This balance is achieved through the following constrained optimization:
$\min_{w, b, \xi, \xi^{*}} \; \dfrac{1}{2}\|w\|^{2} + C\sum_{i=1}^{n}(\xi_i + \xi_i^{*})$ | (11)
subject to:
$\begin{cases} y_i - (w^{T}\phi(x_i) + b) \le \varepsilon + \xi_i \\ (w^{T}\phi(x_i) + b) - y_i \le \varepsilon + \xi_i^{*} \\ \xi_i, \xi_i^{*} \ge 0 \end{cases}$ | (12)
where $\xi_i$ and $\xi_i^{*}$ are slack variables representing deviations outside the ε-margin, and $C > 0$ is a regularization parameter used to control the trade-off between model complexity and the penalty for deviations beyond ε.
To deal with non-linear issues, the SVR leverages the kernel trick, allowing the algorithm to operate in a high-dimensional feature space without explicitly computing the coordinates in that space. In other words, the dual formulation of the SVR optimization problem can be addressed based on kernel functions. The Lagrange multipliers αi and α∗i are introduced for the inequality constraints. Then, by optimizing the Lagrangian and applying the Karush-Kuhn-Tucker (KKT) conditions, the dual problem can be derived.
$\max_{\alpha, \alpha^{*}} \; -\dfrac{1}{2}\sum_{i,j=1}^{n}(\alpha_i - \alpha_i^{*})(\alpha_j - \alpha_j^{*})K(x_i, x_j) - \varepsilon\sum_{i=1}^{n}(\alpha_i + \alpha_i^{*}) + \sum_{i=1}^{n} y_i(\alpha_i - \alpha_i^{*})$ | (13)
The dual problem is typically solved using optimization algorithms such as sequential minimal optimization (SMO). Once the optimal Lagrange multipliers $\alpha_i$ and $\alpha_i^{*}$ are determined, the regression function $f(x)$ can be reconstructed in terms of these multipliers and the kernel function as:
$f(x) = \sum_{i=1}^{n}(\alpha_i - \alpha_i^{*})K(x_i, x) + b$ | (14)
The bias term b can be calculated by:
$b = y_i - \sum_{j=1}^{n}(\alpha_j - \alpha_j^{*})K(x_j, x_i)$ | (15)
This calculation ensures that the regression function appropriately aligns with the support vectors. The conventional kernel functions used in the SVR are shown in Table 5. To ensure the reproducibility of the estimation results for the two aforementioned algorithms, a fixed random seed is set during the coding process, and all code is written with scikit-learn 1.4.2 in Python 3. In this study, cross-validation is employed to select the optimal kernel function for the SVR. The penalty parameter C is tested with the values 0.1, 10, and 100, while γ is tested with scikit-learn's built-in options 'scale' and 'auto'. The final results indicate that the linear kernel is the optimal kernel function for the battery SOH estimation, with the penalty parameter C set to 100 and γ set to 'auto'.
Kernel functions | Formulas |
Linear kernel | $K(x, x') = x^{T}x'$ |
Polynomial kernel | $K(x, x') = (\gamma x^{T}x' + r)^{d}$ |
RBF kernel | $K(x, x') = \exp(-\gamma \|x - x'\|^{2})$ |
Sigmoid kernel | $K(x, x') = \tanh(\gamma x^{T}x' + r)$ |
* In this table, all γ are kernel coefficients. r represents the constant term, and d denotes the polynomial degree. |
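The cross-validation procedure described above can be reproduced with a scikit-learn grid search, as sketched below; the candidate kernels, C values, and γ options follow the text, while the number of folds and the scoring metric are assumptions.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = {
    "kernel": ["linear", "poly", "rbf", "sigmoid"],
    "C": [0.1, 10, 100],
    "gamma": ["scale", "auto"],
}
# cv=5 and MAE-based scoring are assumptions; the reported optimum is the
# linear kernel with C=100 and gamma='auto'.
search = GridSearchCV(SVR(), param_grid, scoring="neg_mean_absolute_error", cv=5)
search.fit(X_train, y_train)
svr = search.best_estimator_
soh_pred_svr = svr.predict(X_test)
```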
In this section, the two data-driven methods, the RFR and the SVR, are used to comprehensively analyze the battery SOH estimation accuracy based on the selected charging voltage interval and the other charging voltage intervals. To quantitatively describe the estimation results, the mean absolute error (MAE) and the root-mean-square error (RMSE) are used as indicators of the estimation accuracy, and the coefficient of determination (R²) is used as the fitting accuracy indicator. The three metrics can be expressed as follows:
$\mathrm{MAE} = \dfrac{1}{n}\sum_{i=1}^{n}|y_i - \hat{y}_i|$ | (16)
$\mathrm{RMSE} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^{2}}$ | (17)
$R^{2} = 1 - \dfrac{\sum_{i=1}^{n}(\hat{y}_i - y_i)^{2}}{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}$ | (18)
where $y_i$, $\hat{y}_i$, and $\bar{y}$ represent the real value, the estimated value, and the average value, respectively; $n$ represents the number of samples.
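The three metrics can be computed directly with scikit-learn, as in the short helper below (a sketch; the function name is arbitrary).

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    """Return MAE, RMSE, and R² as defined in Eqs (16)-(18)."""
    mae = mean_absolute_error(y_true, y_pred)
    rmse = float(np.sqrt(mean_squared_error(y_true, y_pred)))
    r2 = r2_score(y_true, y_pred)
    return mae, rmse, r2
```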
The estimated results for the SOH of the three LIBs are shown in Figure 5. It can be seen that the estimated results of Cell2 are the best among the three LIBs. Meanwhile, the comparative analyses of the RFR and the SVR algorithms are conducted based on the selected charging voltage interval. With the RFR algorithm, the R² values for the three cells are 0.9915, 0.9986, and 0.9950, the MAE values are 0.10%, 0.11%, and 0.23%, and the RMSE values are 0.16%, 0.15%, and 0.34%, respectively. As a comparison, with the SVR algorithm, the R² values for the three LIBs are 0.9876, 0.9975, and 0.9907, the corresponding MAE values are 0.15%, 0.16%, and 0.32%, and the corresponding RMSE values are 0.19%, 0.21%, and 0.46%, respectively. Therefore, the fitting accuracy of the RFR method is better than that of the SVR method.
In addition, the remaining charging voltage intervals are also used with the SVR and the RFR data-driven methods to achieve the battery SOH estimation, respectively. The corresponding evaluation analyses are shown in Tables 6 and 7. Overall, the evaluation metrics for the SOH estimation with the S3 segment are the best among all charging voltage intervals. We use the blue color to mark this optimal charging voltage interval. Moreover, the S4 segment, which has a similar coverage area to the S3 segment, also demonstrates impressive estimation accuracy. Similarly, it can be observed that estimated SOH using the charging capacity and time of the whole voltage interval S11 as features also achieves satisfactory estimation accuracy. However, compared with the S3 segment, both the estimated MAE and estimated RMSE based on the S11 segment are increased. Moreover, collecting data for the S11 segment requires more than twice the amount of data compared with the S3 segment. Therefore, the battery SOH estimation based on the selected charging voltage interval not only reduces the time cost associated with data acquisition but also decreases the computational cost involved in regression estimation algorithms. Other charging voltage intervals, such as the S5, S6, and S7 segments, also demonstrate relatively good accuracy for the battery SOH estimation. However, the corresponding accuracy is slightly inferior to that of the S3 segment. These results further support the necessity of selecting efficient health features and the optimal charging voltage interval for the battery SOH estimation.
Interval | Cell1 R² | Cell1 MAE | Cell1 RMSE | Cell2 R² | Cell2 MAE | Cell2 RMSE | Cell3 R² | Cell3 MAE | Cell3 RMSE |
S1 | 0.6729 | 0.76% | 0.99% | 0.9643 | 0.57% | 0.78% | 0.9752 | 0.55% | 0.76% | ||
S2 | 0.9006 | 0.39% | 0.55% | 0.9894 | 0.29% | 0.43% | 0.9810 | 0.47% | 0.66% | ||
S3 | 0.9915 | 0.10% | 0.16% | 0.9986 | 0.11% | 0.15% | 0.9950 | 0.23% | 0.34% | ||
S4 | 0.9907 | 0.11% | 0.17% | 0.9983 | 0.12% | 0.17% | 0.9939 | 0.26% | 0.37% | ||
S5 | 0.9695 | 0.21% | 0.30% | 0.9957 | 0.19% | 0.27% | 0.9893 | 0.34% | 0.50% | ||
S6 | 0.9734 | 0.20% | 0.28% | 0.9978 | 0.15% | 0.19% | 0.9930 | 0.28% | 0.40% | ||
S7 | 0.9809 | 0.17% | 0.24% | 0.9981 | 0.13% | 0.18% | 0.9930 | 0.27% | 0.40% | ||
S8 | 0.8742 | 0.47% | 0.61% | 0.9893 | 0.32% | 0.43% | 0.9579 | 0.37% | 0.99% | ||
S9 | 0.9304 | 0.34% | 0.46% | 0.9928 | 0.26% | 0.35% | 0.9624 | 0.64% | 0.93% | ||
S10 | 0.6748 | 0.74% | 0.99% | 0.9781 | 0.46% | 0.61% | 0.9580 | 0.71% | 0.98% | ||
S11 | 0.9662 | 0.25% | 0.32% | 0.9965 | 0.16% | 0.24% | 0.9878 | 0.36% | 0.53% |
Interval | Cell1 R² | Cell1 MAE | Cell1 RMSE | Cell2 R² | Cell2 MAE | Cell2 RMSE | Cell3 R² | Cell3 MAE | Cell3 RMSE |
S1 | 0.5901 | 0.87% | 1.10% | 0.9501 | 0.66% | 0.91% | 0.9629 | 0.70% | 0.93% | ||
S2 | 0.8840 | 0.46% | 0.59% | 0.9811 | 0.45% | 0.57% | 0.9758 | 0.57% | 0.75% | ||
S3 | 0.9876 | 0.15% | 0.19% | 0.9975 | 0.16% | 0.21% | 0.9907 | 0.32% | 0.46% | ||
S4 | 0.9864 | 0.16% | 0.20% | 0.9975 | 0.17% | 0.21% | 0.9902 | 0.33% | 0.48% | ||
S5 | 0.9591 | 0.25% | 0.35% | 0.9905 | 0.28% | 0.40% | 0.9846 | 0.42% | 0.60% | ||
S6 | 0.9744 | 0.22% | 0.28% | 0.9954 | 0.20% | 0.28% | 0.9886 | 0.34% | 0.51% | ||
S7 | 0.9832 | 0.17% | 0.22% | 0.9965 | 0.18% | 0.24% | 0.9882 | 0.34% | 0.52% | ||
S8 | 0.8416 | 0.55% | 0.69% | 0.9800 | 0.46% | 0.58% | 0.9618 | 0.69% | 0.94% | ||
S9 | 0.8909 | 0.46% | 0.57% | 0.9876 | 0.35% | 0.46% | 0.9651 | 0.66% | 0.90% | ||
S10 | 0.5995 | 0.80% | 1.09% | 0.9654 | 0.59% | 0.77% | 0.9465 | 0.84% | 1.11% | ||
S11 | 0.9617 | 0.29% | 0.34% | 0.9954 | 0.22% | 0.28% | 0.9845 | 0.43% | 0.60% |
Furthermore, the absolute errors of the estimated battery SOH under different charging current rates are shown in Figure 6. The box plots indicate that both the SVR and the RFR methods experience increased prediction errors at high charging rates. The comparative results show that the SVR exhibits fewer outliers, but its interquartile range is wider than that of the RFR, indicating that the typical estimation error of the SVR is larger even though its extreme errors are better controlled. These findings support the argument presented in Section 3 that the SVR has stronger generalization ability, whereas the RFR possesses better fitting accuracy.
This study has successfully developed two data-driven methods for the SOH estimation of LIBs based on a selected charging voltage interval. First, the IC curves and the Pearson correlation analysis were used to select the optimal charging voltage interval. Then, two typical data-driven methods, the RFR and the SVR, were employed for the SOH estimation of LIBs based on the features extracted from the selected charging voltage interval. The originality of this study is that the ICA and the Pearson correlation analysis are used as a bridge to select the optimal charging voltage interval, from which the most relevant features for the battery SOH are extracted. In this way, the data-driven methods based on the most relevant features from the limited and optimal charging voltage interval can significantly improve computational efficiency and reduce the computational cost of the SOH estimation of LIBs, while the estimation accuracy is also effectively guaranteed.
The comparative analyses between the simulation and experimental results verify the effectiveness and accuracy of the two data-driven SOH estimation methods based on the selected charging voltage interval. Both the SVR and the RFR methods demonstrated their respective advantages in the battery SOH estimation: the SVR produced fewer outliers in the estimation errors, while the RFR achieved higher overall estimation accuracy. With the RFR, the R² values for the three LIBs were 0.9915, 0.9986, and 0.9950, the MAE values were 0.10%, 0.11%, and 0.23%, and the RMSE values were 0.16%, 0.15%, and 0.34%, respectively. As a comparison, with the SVR, the R² values for the three LIBs were 0.9876, 0.9975, and 0.9907, the corresponding MAE values were 0.15%, 0.16%, and 0.32%, and the corresponding RMSE values were 0.19%, 0.21%, and 0.46%, respectively. Compared with the other charging voltage intervals, the two data-driven methods with the selected charging voltage interval showed the best fitting performance for the SOH estimation of LIBs.
In summary, the selected optimal charging voltage interval exhibits the highest correlation with the SOH compared with the other charging voltage intervals. This study provides a simple but effective way of estimating the SOH of LIBs, and the findings offer new references for the development and energy-management optimization of battery energy storage systems in practical applications.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research was funded by the China Electric Institute State Key Laboratory of Environmental Adaptability for Industrial Products (Grant No. 2024EASKJ-006), the Aeronautical Science Foundation of China (Grant No. 2024Z039070003), and the Key Research and Development Program of Shaanxi Province (Grant No. 2023-YBGY-376).
The authors declare no conflicts of interest.
Junguang Sun: Software, Writing—original draft, Formal analysis, Data curation, Visualization, Validation. Xiaodong Zhang: Conceptualization, Methodology, Investigation, Writing—review & editing. Wenrui Cao: Software, Formal analysis, Writing—review & editing. Lili Bo: Data curation, Visualization, Validation, Writing—review & editing. Changhai Liu: Data curation, Visualization, Validation, Writing—review & editing. Bin Wang: Resources, Supervision, Writing—review & editing, Funding acquisition.