Data redundancy is a ubiquitous feature of machine learning and big data applications, and it results in high computational load, energy consumption, processing time, and memory usage. By exploiting data redundancy, we can improve the learning performance and reduce the computational cost. An efficient way to exploit data redundancy is to censor non-informative data. Indeed, many works in the literature benefit from data censoring, such as censoring outliers in radar data [1], big data processing [2,3], multi-sensor systems [4], sensor-centric data reduction [5], data censoring for energy-efficient communications [6], wireless sensor networks [7], and data-selective adaptive filters for sparse systems [8,9], just to name a few. These works illustrate the advantages of censoring data over utilizing all data.
Set-membership filtering (SMF) is an efficient technique for dividing data into informative and non-informative sets [10]. In the SMF approach, instead of processing all data in the learning process, we evaluate, select, and then process data at each iteration. Indeed, a set-membership algorithm executes a new update whenever the output estimation error is larger than a predetermined value, i.e., whenever the incoming data set contains enough innovation. Otherwise, the set-membership approach prevents the algorithm from implementing a new update, thus reducing the computational burden. The best-known set-membership algorithms are the set-membership normalized least-mean-square [11,12] and the set-membership affine projection [13] algorithms. They retain the benefits of the normalized least-mean-square and the affine projection algorithms, respectively, but decrease the computational complexity by exploiting data redundancy and censoring data. Furthermore, there are many variants of the set-membership algorithms and their applications in the literature [14,15,16,17,18,19,20,21,22].
The SMF is an efficient approach for censoring data in big data applications; however, owing to the very large amount of data, it is more practical to determine beforehand the amount/percentage of data we desire to use in the learning process. In real applications, many restrictions can affect our ability to analyze the incoming data, such as limitations on available energy, time, and memory. Therefore, to overcome these restrictions and obtain the desired performance, we should be able to identify the informative incoming data. In this paper, we use the desired probability of updating the parameters in order to estimate the threshold parameter of the single threshold set-membership affine projection (ST-SM-AP) algorithm. Indeed, the function of the threshold parameter in the SM-AP algorithm is to censor non-informative data, thus estimating this parameter is crucial to censor data properly.
The set-membership algorithms, including the ST-SM-AP, censor incoming data based on the energy of the error signal. Indeed, if the energy of the error signal is larger than the threshold parameter, the ST-SM-AP algorithm updates the coefficients of the adaptive filter; otherwise, they remain unchanged. However, a very high error does not always indicate the presence of informative data. More precisely, we sometimes observe a very high error signal because of irrelevant data such as outliers, and in these cases censoring the data is more practical. Hence, we propose the double threshold set-membership affine projection (DT-SM-AP) algorithm, which considers an acceptable range for the absolute value of the error signal in order to censor non-informative and irrelevant data and to prevent unnecessary updates. The DT-SM-AP algorithm implements the update whenever the absolute value of the error signal is between two threshold parameters [23]. Also, it is worth mentioning that some data-selective adaptive filtering algorithms have already been proposed in [21]; however, that work implemented an online threshold parameter computation, i.e., it updates the threshold parameter at every iteration. In our method, the threshold parameter is defined offline, i.e., it is computed before running the algorithm to determine the percentage of data we want to utilize to obtain the desired result.
This paper is organized as follows. Section 2 reviews the ST-SM-AP algorithm. Section 3 proposes the estimate of the threshold parameter in order to censor non-informative incoming data. The DT-SM-AP algorithm is introduced in Section 4. Simulations and conclusions are presented in Sections 5 and 6, respectively.
Notations: Scalars are denoted by lowercase letters. Vectors (matrices) are represented by lowercase (uppercase) boldface letters. At iteration k , the optimum solution, the weight vector, and the input vector are denoted by \mathbf{w}_o , \mathbf{w}(k) , and \mathbf{x}(k)\in\mathbb{R}^{N+1} , respectively, where N is the adaptive filter order. For a given iteration k , the error signal is defined as e(k)\triangleq d(k)- \mathbf{w}^T(k) \mathbf{x}(k) , where d(k)\in\mathbb{R} is the desired signal and (\cdot)^T stands for vector and matrix transposition. Moreover, {\rm P}[\cdot] and {\rm E}[\cdot] denote the probability and the expected value operators, respectively.
By taking advantage of old data, data-reusing algorithms can improve the convergence rate of the learning process, particularly when the input signal is correlated. The affine projection (AP) algorithm has traditionally been considered the benchmark among data-reusing algorithms; however, the AP algorithm is not capable of exploiting data redundancy. By incorporating the set-membership technique into the AP algorithm, the single threshold set-membership affine projection (ST-SM-AP) algorithm has already been introduced [13] in order to reduce the computational burden of the AP algorithm. Here, we utilize the ST-SM-AP algorithm in order to censor data and exploit data redundancy in big data applications. First, let us define some necessary variables for the ST-SM-AP algorithm. Suppose that x(k) and d(k) are the input and the desired signals, respectively, and that the last L+1 input vectors and desired signals are available. For a given iteration k , the input matrix \mathbf{X}(k) , the input vector \mathbf{x}(k) , the adaptive filter \mathbf{w}(k) , the desired vector \mathbf{d}(k) , the additive noise vector \mathbf{n}(k) , the constraint vector (CV) \boldsymbol{\gamma}(k) , and the error vector \mathbf{e}(k) are described as follows
\begin{align} \mathbf{X}(k)& = [ \mathbf{x}(k)\; \mathbf{x}(k-1)\;\cdots\; \mathbf{x}(k-L)]\in\mathbb{R}^{(N+1)\times(L+1)}, \\ \mathbf{x}(k)& = [x(k)\;x(k-1)\;\cdots\;x(k-N)]^T\in\mathbb{R}^{N+1}, \\ \mathbf{w}(k)& = [w_0(k)\;w_1(k)\;\cdots\;w_N(k)]^T\in\mathbb{R}^{N+1}, \\ \mathbf{d}(k)& = [d(k)\;d(k-1)\;\cdots\;d(k-L)]^T\in\mathbb{R}^{L+1}, \\ \mathbf{n}(k)& = [n(k)\;n(k-1)\;\cdots\;n(k-L)]^T\in\mathbb{R}^{L+1}, \\ \boldsymbol{\gamma}(k)& = [\gamma_0(k)\;\gamma_1(k)\;\cdots\;\gamma_L(k)]^T\in\mathbb{R}^{L+1}, \\ \mathbf{e}(k)& = [e_0(k)\;e_1(k)\;\cdots\;e_L(k)]^T\in\mathbb{R}^{L+1}, \end{align} | (2.1) |
where N and L are the adaptive filter order and the data-reuse factor, respectively. The entries of \boldsymbol{\gamma}(k) should satisfy |\gamma_i(k)|\leq \overline{\gamma}_1 , for i = 0, 1, \cdots, L , where \overline{\gamma}_1\in\mathbb{R}_+ is the upper bound for the magnitude of the error signal. Moreover, the error vector \mathbf{e}(k) is defined as \mathbf{e}(k) = \mathbf{d}(k)- \mathbf{X}^T(k) \mathbf{w}(k) .
We can now represent the recursion rule of the ST-SM-AP algorithm by [13]
\begin{align} \mathbf{w}(k+1) = \left\{\begin{array}{ll} \mathbf{w}(k)+ \mathbf{X}(k)\Big[ \mathbf{X}^T(k) \mathbf{X}(k)+\delta \mathbf{I}\Big]^{-1}( \mathbf{e}(k)- \boldsymbol{\gamma}(k))&{\rm if\;}|e(k)| \gt \overline{\gamma}_1, \\ \mathbf{w}(k)&{\rm otherwise}, \end{array}\right. \end{align} | (2.2) |
where \delta\in\mathbb{R}_+ is a regularization factor and \mathbf{I}\in\mathbb{R}^{(L+1)\times(L+1)} is the identity matrix; \delta \mathbf{I} is added to \mathbf{X}^T(k) \mathbf{X}(k) in order to avoid numerical problems in the matrix inversion. In the following section, we introduce an approach to estimate \overline{\gamma}_1 such that it leads to the desired update rate in big data applications.
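To make the recursion concrete, the following minimal NumPy sketch implements a single iteration of Equation (2.2). It assumes the simple choice CV used later in the simulations ( \gamma_0(k) = \overline{\gamma}_1 e(k)/|e(k)| and \gamma_i(k) = e_i(k) for i\geq 1 , so only the first component drives the update); the function name and calling convention are illustrative, not part of the original formulation.

```python
import numpy as np

def st_sm_ap_update(w, X, d, gamma_bar, delta=1e-12):
    """One ST-SM-AP iteration (Eq. (2.2)) with the simple choice CV.

    w: current coefficients, shape (N+1,)
    X: input matrix X(k), shape (N+1, L+1)
    d: desired vector d(k), shape (L+1,)
    """
    e = d - X.T @ w                          # error vector e(k)
    if abs(e[0]) > gamma_bar:                # informative data: update
        gamma = e.copy()                     # gamma_i(k) = e_i(k) for i >= 1
        gamma[0] = gamma_bar * np.sign(e[0])
        G = X.T @ X + delta * np.eye(X.shape[1])
        w = w + X @ np.linalg.solve(G, e - gamma)
    return w

# Toy call: |e(k)| = 0.05 <= gamma_bar, so the coefficients stay unchanged.
w_new = st_sm_ap_update(np.zeros(3), np.eye(3)[:, :2], np.array([0.05, 0.0]), 0.1)
```

With this CV and an exact solve, the a posteriori error magnitude on the newest data pair equals \overline{\gamma}_1 (up to the effect of \delta ), which is exactly the set-membership design goal.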
In this section, we obtain an estimate for \overline{\gamma}_1 in the ST-SM-AP algorithm for online censoring in streaming big data applications. In the presence of data redundancy and streaming data, it is desirable to obtain an acceptable solution by using a predetermined percentage of the data instead of processing all received data. The percentage of data we want to use gives us the update rate of the ST-SM-AP algorithm. Therefore, for a given predetermined update rate, we intend to estimate the threshold \overline{\gamma}_1 so that the update rate of the ST-SM-AP algorithm does not exceed the predetermined update rate. In other words, for a given 0 \lt p \lt 1 and considering Equation (2.2), we intend to estimate \overline{\gamma}_1 such that
\begin{align} {\rm P}[|e(k)| \gt \overline{\gamma}_1] = p. \end{align} | (3.1) |
In the above equation, the predetermined update rate p represents the percentage of data we would like to utilize in the learning process. Note that the role of \overline{\gamma}_1 is to select, according to p , the most informative data for the learning process.
If we knew the probability distribution of the error signal e(k) , we could calculate the suitable value of \overline{\gamma}_1 directly. Generally, we do not have access to the probability distribution of the error signal. However, by the central limit theorem [24], when the adaptive filter order is sufficiently large, the error signal e(k) has a zero-mean Gaussian distribution [22]. Therefore, by utilizing the probability distribution of n(k) in (3.1), we will be able to calculate the proper value of \overline{\gamma}_1 .
Note that the error signal e(k) is obtained by subtracting the output of the adaptive filter from the desired signal, thus we have
\begin{align} e(k)& = d(k)- \mathbf{x}^T(k) \mathbf{w}(k) = \mathbf{x}^T(k) \mathbf{w}_o+n(k)- \mathbf{x}^T(k) \mathbf{w}(k) \\ & = \mathbf{x}^T(k)[ \mathbf{w}_o- \mathbf{w}(k)]+n(k) = \widetilde{e}(k)+n(k), \end{align} | (3.2) |
where \widetilde{e}(k) is the noiseless error signal. Furthermore, we know that the ST-SM-AP algorithm is robust [25], i.e., \| \mathbf{w}_o- \mathbf{w}(k+1)\|^2\leq\| \mathbf{w}_o- \mathbf{w}(k)\|^2 for all k\in\mathbb{N} , which means that the ST-SM-AP algorithm never diverges. In general, we have {\rm E}[ \mathbf{w}_o- \mathbf{w}(k)]\approx{\mathbf 0} in the steady state, where {\mathbf 0} stands for the zero vector.
A notable example of a probability distribution for the additive noise signal is the zero-mean Gaussian with variance \sigma_n^2 . Assuming this important case, we are capable of estimating the threshold \overline{\gamma}_1 . The noiseless error signal \widetilde{e}(k) is uncorrelated with the additive noise signal n(k) , thus using (3.2) we get
\begin{align} {\rm E}[e(k)]& = {\rm E}[ \widetilde{e}(k)]+{\rm E}[n(k)] = 0, \end{align} | (3.3) |
\begin{align} {\rm Var}[e(k)]& = {\rm E}[ \widetilde{e}^2(k)]+\sigma_n^2. \end{align} | (3.4) |
The value of {\rm E}[\widetilde{e}^2(k)] is called the excess mean-squared error (EMSE), and for the ST-SM-AP algorithm in the steady state it is given by [27]
\begin{align} {\rm E}[ \widetilde{e}^2(k)] = \frac{(L+1)[\sigma_n^2+ \overline{\gamma}_1^2-2 \overline{\gamma}_1\sigma_n^2\rho]p}{[(2-p)-2(1-p) \overline{\gamma}_1\rho]}\Big(\frac{1-a}{1-a^{L+1}}\Big), \end{align} | (3.5) |
where
\begin{align} \rho& = \sqrt{\frac{2}{\pi(2\sigma_n^2+\frac{1}{L+1} \overline{\gamma}_1^2)}}, \end{align} | (3.6) |
\begin{align} a& = [1-p+2p \overline{\gamma}_1\rho](1-p). \end{align} | (3.7) |
As can be seen in Equation (3.5), computing {\rm E}[\widetilde{e}^2(k)] requires \overline{\gamma}_1 , whereas our target is to estimate \overline{\gamma}_1 itself. Therefore, as a first step, we assume in the steady state that {\rm E}[\widetilde{e}^2(k)] is equal to zero and that the distribution of e(k) is identical to the distribution of n(k) . This means that, initially, {\rm E}[e(k)] = 0 , {\rm Var}[e(k)] = \sigma_n^2 , and the distribution of e(k) is zero-mean Gaussian with variance \sigma_n^2 . Thus, for a given p , using relation (3.1), we can compute the initial estimate of \overline{\gamma}_1 . In other words, we have
\begin{align} {\rm P}[|e(k)| \gt \overline{\gamma}_1] = {\rm P}[e(k) \lt - \overline{\gamma}_1]+{\rm P}[e(k) \gt \overline{\gamma}_1] = p, \end{align} | (3.8) |
since the Gaussian distribution is symmetric, we get
\begin{align} {\rm P}[e(k) \gt \overline{\gamma}_1] = \frac{p}{2}. \end{align} | (3.9) |
Hence, we should compute \overline{\gamma}_1 by equation
\begin{align} \int_{ \overline{\gamma}_1}^\infty\frac{1}{\sqrt{2\pi\sigma_n^2}}{\rm exp}(-\frac{r^2}{2\sigma_n^2})dr = \frac{p}{2}, \end{align} | (3.10) |
where {\rm exp}(\cdot) stands for the exponential function. Thus, for a given update rate p , we can find the initial estimate of \overline{\gamma}_1 using the standard normal distribution table.
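Under the Gaussian assumption, the table lookup implied by Equation (3.10) can equally be performed with an inverse normal CDF. Below is a minimal sketch using only the Python standard library ( statistics.NormalDist ); the helper name is ours, not the authors'.

```python
from statistics import NormalDist

def initial_threshold(p, sigma_n):
    """Initial estimate of the threshold: solves P[|e(k)| > gamma] = p
    for e(k) ~ N(0, sigma_n^2), i.e. Eq. (3.10)."""
    return sigma_n * NormalDist().inv_cdf(1.0 - p / 2.0)

# Example: p = 0.15 and sigma_n = 0.1 give an initial estimate of roughly 0.144.
g0 = initial_threshold(0.15, 0.1)
```

As expected, a larger update rate p yields a smaller threshold, since more data must pass the censoring test.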
Then, with the initial estimate of \overline{\gamma}_1 at hand, we substitute it into Equations (3.5), (3.6), and (3.7) to calculate {\rm E}[\widetilde{e}^2(k)] . Ultimately, having {\rm E}[\widetilde{e}^2(k)] , we compute the variance of e(k) by means of Equation (3.4). Thus, the distribution of the error signal is zero-mean Gaussian with variance \sigma_e^2 = {\rm Var}[e(k)] = {\rm E}[\widetilde{e}^2(k)]+\sigma_n^2 and, by utilizing this probability distribution, we can attain a better estimate for \overline{\gamma}_1 . Indeed, the estimate of \overline{\gamma}_1 can be obtained from the equation
\begin{align} \int_{ \overline{\gamma}_1}^\infty\frac{1}{\sqrt{2\pi\sigma_e^2}}{\rm exp}(-\frac{r^2}{2\sigma_e^2})dr = \frac{p}{2}, \end{align} | (3.11) |
where the new estimate of \overline{\gamma}_1 can again be obtained from the standard normal distribution table. We should point out that the selected update rate p loosely describes the relative importance of the innovation brought by the new incoming input and desired data set.
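The whole two-step procedure can be sketched as follows, assuming zero-mean Gaussian noise; Equations (3.4)–(3.7) and (3.11) are transcribed directly, and the function name is ours. For p = 0.15 , \sigma_n^2 = 0.01 , and L = 2 , the refined value comes out at approximately 0.1716, matching the corresponding entry of Table 1.

```python
import math
from statistics import NormalDist

def refine_threshold(p, sigma_n, L, gamma1):
    """Refine gamma_bar_1: plug the current estimate into the steady-state
    EMSE of Eqs. (3.5)-(3.7), then re-solve P[|e(k)| > gamma] = p with
    e(k) ~ N(0, sigma_e^2) as in Eq. (3.11)."""
    var_n = sigma_n ** 2
    rho = math.sqrt(2.0 / (math.pi * (2.0 * var_n + gamma1 ** 2 / (L + 1))))  # Eq. (3.6)
    a = (1.0 - p + 2.0 * p * gamma1 * rho) * (1.0 - p)                        # Eq. (3.7)
    emse = ((L + 1) * (var_n + gamma1 ** 2 - 2.0 * gamma1 * var_n * rho) * p
            / ((2.0 - p) - 2.0 * (1.0 - p) * gamma1 * rho)
            * (1.0 - a) / (1.0 - a ** (L + 1)))                               # Eq. (3.5)
    sigma_e = math.sqrt(emse + var_n)                                         # Eq. (3.4)
    return sigma_e * NormalDist().inv_cdf(1.0 - p / 2.0)                      # Eq. (3.11)

g0 = 0.1 * NormalDist().inv_cdf(1.0 - 0.15 / 2.0)  # initial estimate, ~0.144
g1 = refine_threshold(0.15, 0.1, 2, g0)            # refined estimate, ~0.1716
```

The refinement raises the threshold, because the steady-state EMSE inflates the error variance above \sigma_n^2 ; in principle the substitution can be iterated further.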
In the previous section, a process to estimate the threshold parameter of the ST-SM-AP algorithm has been addressed such that the update rate of the algorithm does not exceed a predetermined value p . The ST-SM-AP algorithm avoids updating the adaptive filter parameters when the absolute value of the error signal is less than \overline{\gamma}_1 . Indeed, the ST-SM-AP algorithm treats every incoming data set whose absolute error value is larger than \overline{\gamma}_1 as innovative information; however, this is not always true, particularly in the presence of outliers. In this section, we introduce the double threshold set-membership affine projection (DT-SM-AP) algorithm to censor non-informative data, exploit data redundancy, and mitigate the effects of outliers, acquisition system saturation, and impulsive noise.
For a given iteration k , the ST-SM-AP algorithm updates \mathbf{w}(k) if |e(k)| \gt \overline{\gamma}_1 , i.e., it updates \mathbf{w}(k) when |e(k)| is sufficiently large. However, there are iterations in which the error signal is very large yet does not contain new information. In fact, a very large error signal might occur due to irrelevant information in the incoming data, such as outliers, system saturation, and impulsive noise. Hence, when the absolute value of the error signal is very large, it is more practical to avoid a new update of the adaptive filter coefficients. To this end, we define an acceptable range for the absolute value of the error signal by adopting a lower threshold \overline{\gamma}_1 and an upper threshold \overline{\gamma}_2 . Then, at each iteration k , if the incoming data set results in an error e(k) such that \overline{\gamma}_1 \lt |e(k)| \lt \overline{\gamma}_2 , we implement a new update, since the incoming data set brought some innovation; otherwise we avoid a new update, since the incoming data brought non-innovative or irrelevant information.
Finally, the update equation of the DT-SM-AP algorithm can be given by
\begin{align} \mathbf{w}(k+1) = \left\{\begin{array}{ll} \mathbf{w}(k)+ \mathbf{X}(k)\Big[ \mathbf{X}^T(k) \mathbf{X}(k)+\delta \mathbf{I}\Big]^{-1}( \mathbf{e}(k)- \boldsymbol{\gamma}(k))&{\rm if\; } \overline{\gamma}_1 \lt |e(k)| \lt \overline{\gamma}_2, \\ \mathbf{w}(k)&{\rm otherwise}.\end{array}\right. \end{align} | (4.1) |
The task of \overline{\gamma}_1 is to ignore incoming data sets without enough innovation, thus we can utilize the strategy described in the previous section in order to estimate \overline{\gamma}_1 . However, \overline{\gamma}_2 is responsible for detecting irrelevant incoming data sets, such as outliers; thus, depending on the application, we should adopt a sufficiently large value for \overline{\gamma}_2 .
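The DT-SM-AP gate of Equation (4.1) differs from the ST-SM-AP recursion only in its update condition; the following minimal variant (again with the simple choice CV, and with illustrative naming) simply ignores data whose error magnitude exceeds \overline{\gamma}_2 .

```python
import numpy as np

def dt_sm_ap_update(w, X, d, gamma1, gamma2, delta=1e-12):
    """One DT-SM-AP iteration (Eq. (4.1)): update only when
    gamma1 < |e(k)| < gamma2; larger errors are treated as outliers."""
    e = d - X.T @ w                          # error vector e(k)
    if gamma1 < abs(e[0]) < gamma2:          # innovative but not irrelevant
        gamma = e.copy()                     # simple choice CV
        gamma[0] = gamma1 * np.sign(e[0])
        G = X.T @ X + delta * np.eye(X.shape[1])
        w = w + X @ np.linalg.solve(G, e - gamma)
    return w

# An outlier-sized error (|e(k)| = 100 > gamma2) leaves the filter untouched.
w_new = dt_sm_ap_update(np.zeros(3), np.eye(3)[:, :2], np.array([100.0, 0.0]), 0.1, 1.0)
```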
In this section, we apply the AP, the ST-SM-AP, and the DT-SM-AP algorithms to a system identification problem. For the ST-SM-AP algorithm, the threshold parameter \overline{\gamma}_1 is estimated employing the strategy explained in Section 3 in order to run the ST-SM-AP algorithm at the desired update rate p . The unknown system \mathbf{w}_o has 10 coefficients drawn from the normal distribution. We have tested the results for three different input signals, namely binary phase-shift keying (BPSK), zero-mean white Gaussian noise with unit variance (WGN), and a first-order autoregressive signal (AR(1)) produced by x(k) = 0.95x(k-1)+m(k) , where m(k) is a WGN signal. The signal-to-noise ratio is 20 dB, i.e., the additive noise signal has variance \sigma_n^2 = 0.01 . The regularization parameter \delta is set to 10^{-12} , and the adaptive filter is initialized as \mathbf{w}(0) = [0\; \cdots\; 0]^T . The constraint vector (CV) is selected as the simple choice CV [25,26], defined as \gamma_0(k) = \overline{\gamma}_1\frac{e(k)}{|e(k)|} and \gamma_i(k) = e_i(k) for i = 1, \cdots, L . Moreover, the step-size parameter of the AP algorithm is indicated in the legend of each figure. The number of iterations is 5\times10^4 , and the learning curves and the update rates are averaged over 100 independent runs.
In this scenario, we have utilized the estimated threshold parameter \overline{\gamma}_1 in the ST-SM-AP algorithm in order to achieve the desired results for the given update rates p = 0.15 , 0.30, and 0.45. The estimated threshold parameters for L = 0, 1, 2, 3, 4 and p = 0.15 , 0.30, 0.45 are obtained by the argument described in Section 3, and they are listed in Table 1. Furthermore, the update rates of the ST-SM-AP algorithm using these threshold parameters and different input signals over 5\times10^4 iterations are presented in Table 2. As can be seen, by utilizing the estimated values of \overline{\gamma}_1 , for all input signals the resulting update rates in Table 2 are close to the given values of p ; in all cases the difference between the resulting update rate and the given value of p is less than 0.02 (2 percentage points). Therefore, by estimating \overline{\gamma}_1 for a given update rate p , we have censored non-informative incoming data such that, with high precision, we control the amount of data used in the learning process.
Table 1. The estimated threshold parameter \overline{\gamma}_1 for the given update rates p and data-reuse factors L .

| p | L=0 | L=1 | L=2 | L=3 | L=4 |
| --- | --- | --- | --- | --- | --- |
| 0.15 | 0.1661 | 0.1692 | 0.1716 | 0.1736 | 0.1753 |
| 0.30 | 0.1199 | 0.1222 | 0.1244 | 0.1266 | 0.1292 |
| 0.45 | 0.0871 | 0.0900 | 0.0962 | 0.0998 | 0.1037 |
Table 2. Update rates of the ST-SM-AP algorithm using the estimated \overline{\gamma}_1 for different input signals, update rates p , and data-reuse factors L .

| Input signal | p | L=0 | L=1 | L=2 | L=3 | L=4 |
| --- | --- | --- | --- | --- | --- | --- |
| BPSK | 0.15 | 0.1339 | 0.1331 | 0.1337 | 0.1334 | 0.1332 |
| BPSK | 0.30 | 0.2920 | 0.2909 | 0.2933 | 0.2935 | 0.2949 |
| BPSK | 0.45 | 0.4407 | 0.4419 | 0.4441 | 0.4460 | 0.4474 |
| WGN | 0.15 | 0.1341 | 0.1329 | 0.1338 | 0.1332 | 0.1354 |
| WGN | 0.30 | 0.2982 | 0.2960 | 0.2967 | 0.2982 | 0.3003 |
| WGN | 0.45 | 0.4439 | 0.4452 | 0.4456 | 0.4474 | 0.4483 |
| AR(1) | 0.15 | 0.1451 | 0.1445 | 0.1463 | 0.1475 | 0.1484 |
| AR(1) | 0.30 | 0.3007 | 0.3016 | 0.3025 | 0.3034 | 0.3046 |
| AR(1) | 0.45 | 0.4513 | 0.4526 | 0.4522 | 0.4541 | 0.4557 |
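As a rough, self-contained sanity check of this scenario (our own sketch, not the authors' code), the ST-SM-AP recursion can be run on a random 10-tap system with WGN input, \sigma_n^2 = 0.01 , L = 2 , and the estimated threshold 0.1716; the measured update rate should then land in the vicinity of the chosen p = 0.15 .

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 9, 2                      # 10-tap unknown system, data-reuse factor L = 2
sigma_n, delta = 0.1, 1e-12
gamma1 = 0.1716                  # estimated threshold for p = 0.15, L = 2 (Table 1)

w_o = rng.standard_normal(N + 1)             # unknown system
w = np.zeros(N + 1)
x_buf = np.zeros(N + L + 1)                  # input history, newest sample first
d_buf = np.zeros(L + 1)                      # desired-signal history
n_iters, updates = 5000, 0
for k in range(n_iters):
    x_buf = np.roll(x_buf, 1); x_buf[0] = rng.standard_normal()      # WGN input
    d_buf = np.roll(d_buf, 1)
    d_buf[0] = x_buf[:N + 1] @ w_o + sigma_n * rng.standard_normal()
    X = np.column_stack([x_buf[i:i + N + 1] for i in range(L + 1)])  # X(k)
    e = d_buf - X.T @ w
    if abs(e[0]) > gamma1:                   # informative data: update
        g = e.copy(); g[0] = gamma1 * np.sign(e[0])                  # simple choice CV
        w = w + X @ np.linalg.solve(X.T @ X + delta * np.eye(L + 1), e - g)
        updates += 1
rate = updates / n_iters
print(rate)
```

The transient inflates the measured rate slightly above the steady-state values of Table 2, so only coarse agreement with p should be expected from this short run.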
Moreover, Figures 1(a), 1(b), and 1(c) depict the MSE learning curves of the AP and the ST-SM-AP algorithms when the input signal is BPSK, WGN, and AR(1), respectively, with the data-reuse factor chosen as L = 2 . In these figures, the desired update rate is p = 0.15 , so the estimated \overline{\gamma}_1 is 0.1716. For the AP algorithm, two different step-sizes, 0.1 and 0.9, are adopted. In Figures 1(a), 1(b), and 1(c), the update rates of the ST-SM-AP algorithm are 0.1337, 0.1338, and 0.1463, respectively, as presented in Table 2. On the one hand, we observe that when the step-size of the AP algorithm is large, the AP algorithm reaches the steady state as fast as the ST-SM-AP algorithm; however, the MSE of the ST-SM-AP algorithm is lower than that of the AP algorithm. On the other hand, for a small step-size the AP algorithm attains the same MSE as the ST-SM-AP algorithm, but its convergence rate degrades significantly. Therefore, the ST-SM-AP algorithm reaches a low MSE with a high convergence rate compared to the AP algorithm. Furthermore, it is worth mentioning that the ST-SM-AP algorithm has much lower computational complexity than the AP algorithm, since it censors non-informative incoming data and avoids unnecessary updates, whereas the AP algorithm updates the adaptive filter coefficients at every iteration. Hence, the ST-SM-AP algorithm has great potential to reduce the computational burden in big data applications.
In this scenario, we have implemented the AP, the ST-SM-AP, and the DT-SM-AP algorithms in order to identify the unknown system \mathbf{w}_o when the desired signal contains outliers. The parameters of these algorithms are identical to those of Scenario 1. The outlier signal is added to the reference signal, and it is produced by a Bernoulli process, which takes value 1 with probability 0.05, multiplied by uniformly distributed random numbers in the interval (0, 50) . For the DT-SM-AP algorithm, we adopt \overline{\gamma}_2 = 1 in order to detect outliers and avoid updating the adaptive filter coefficients for irrelevant incoming data. Utilizing the threshold parameters \overline{\gamma}_1 presented in Table 1 and \overline{\gamma}_2 = 1 , the update rates of the DT-SM-AP algorithm for different data-reuse factors L and different input signals are given in Table 3. Similarly to Scenario 1, we observe that, in the presence of outliers, by utilizing the estimated values of \overline{\gamma}_1 , the resulting update rates given in Table 3 are close to the given values of p for all input signals. Indeed, the DT-SM-AP algorithm detected the outliers and the non-informative data effectively and avoided unnecessary updates when receiving insignificant or irrelevant incoming data. It is worth noticing that, in Table 3, the difference between the resulting update rates and their corresponding values of p is less than 0.02 (2 percentage points); therefore, the estimated values of \overline{\gamma}_1 and \overline{\gamma}_2 have censored the incoming data as intended.
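The outlier process described above can be generated in a couple of lines (our own sketch; the variable names are illustrative): a Bernoulli(0.05) gate multiplied elementwise by Uniform(0, 50) amplitudes.

```python
import numpy as np

rng = np.random.default_rng(1)
n_iters = 50000
gate = rng.random(n_iters) < 0.05                  # Bernoulli process, P[1] = 0.05
outliers = gate * rng.uniform(0.0, 50.0, n_iters)  # amplitudes in (0, 50)
# 'outliers' is then added to the reference (desired) signal.
print(gate.mean())                                 # empirical rate, close to 0.05
```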
Furthermore, Figures 2(a), 2(b), and 2(c) illustrate the misalignment curves of the AP, the ST-SM-AP, and the DT-SM-AP algorithms for the BPSK, WGN, and AR(1) input signals, respectively. For all algorithms, the data-reuse factor L is 2. Also, the desired update rate is p = 0.15 ; therefore, the estimated \overline{\gamma}_1 is 0.1716 as given in Table 1, and \overline{\gamma}_2 = 1 in order to detect outliers and avoid unnecessary updates. As in Scenario 1, we have adopted the two step-sizes 0.1 and 0.9 for the AP algorithm; the small step-size results in low MSE but slow convergence, whereas the large step-size leads to high MSE and fast convergence. In Figures 2(a), 2(b), and 2(c), the update rates of the DT-SM-AP algorithm are 0.1348, 0.1349, and 0.1458, respectively, as given in Table 3; however, the update rates of the ST-SM-AP algorithm reach the high values 0.8692, 0.8734, and 0.8985, respectively, because of unnecessary updates in the presence of outliers. As can be seen in these figures, the AP algorithm, with neither the large nor the small step-size, can reach the misalignment of the DT-SM-AP algorithm. Moreover, the misalignment and convergence speed of the ST-SM-AP algorithm are the same as those of the AP algorithm using the large step-size 0.9. Therefore, in the presence of outliers, neither the ST-SM-AP nor the AP algorithm can attain the performance of the DT-SM-AP algorithm. Indeed, in the presence of outliers, the DT-SM-AP algorithm outperforms the ST-SM-AP and the AP algorithms by obtaining lower misalignment, a lower update rate, and a higher convergence rate.
Table 3. Update rates of the DT-SM-AP algorithm (with \overline{\gamma}_2 = 1 ) in the presence of outliers for different input signals, update rates p , and data-reuse factors L .

| Input signal | p | L=0 | L=1 | L=2 | L=3 | L=4 |
| --- | --- | --- | --- | --- | --- | --- |
| BPSK | 0.15 | 0.1347 | 0.1342 | 0.1348 | 0.1341 | 0.1339 |
| BPSK | 0.30 | 0.2943 | 0.2934 | 0.2941 | 0.2944 | 0.2951 |
| BPSK | 0.45 | 0.4426 | 0.4435 | 0.4448 | 0.4473 | 0.4472 |
| WGN | 0.15 | 0.1352 | 0.1351 | 0.1349 | 0.1357 | 0.1363 |
| WGN | 0.30 | 0.2973 | 0.2966 | 0.2974 | 0.2989 | 0.2996 |
| WGN | 0.45 | 0.4448 | 0.4454 | 0.4469 | 0.4475 | 0.4487 |
| AR(1) | 0.15 | 0.1459 | 0.1452 | 0.1458 | 0.1471 | 0.1486 |
| AR(1) | 0.30 | 0.29992 | 0.3009 | 0.3015 | 0.3026 | 0.3034 |
| AR(1) | 0.45 | 0.4507 | 0.4518 | 0.4514 | 0.4529 | 0.4540 |
In this paper, the single threshold set-membership affine projection (ST-SM-AP) and the double threshold set-membership affine projection (DT-SM-AP) algorithms have been proposed in order to exploit data redundancy and censor non-informative and irrelevant data, aiming at improving the learning process. For this purpose, we have estimated the threshold parameter to attain the desired update rate in the proposed algorithms. The estimate of the threshold parameter has been obtained with the help of the probability distribution function of the additive noise signal and the steady-state excess mean-squared error of the algorithm. The ST-SM-AP algorithm avoids updating the adaptive filter coefficients when the absolute value of the output estimation error is smaller than the estimated threshold, whereas the two thresholds of the DT-SM-AP algorithm prevent updates whenever the absolute value of the error signal falls outside the range defined by the thresholds. The numerical results indicate that the ST-SM-AP and the DT-SM-AP algorithms outperform the conventional affine projection algorithm.
The author declares no conflicts of interest in this paper.
[1] |
Li J, Wu L, Yang Z (2008) Analysis and upgrading of bio-petroleum from biomass by direct deoxy-liquefaction. J Anal Appl Pyrolysis 81: 199-204. doi: 10.1016/j.jaap.2007.11.004
![]() |
[2] |
Qian Y, Zuo C, Tan J, et al. (2007) Structural analysis of bio-oils from sub-and supercritical water liquefaction of woody biomass. Energy 32: 196-202. doi: 10.1016/j.energy.2006.03.027
![]() |
[3] | Liu Z, Zhang FS (2008) Effects of various solvents on the liquefaction of biomass to produce fuels and chemical feedstocks. Energy Convers Manage 49: 3498-3504. |
[4] | Vardon DR, Sharma BK, Blazina GV, et al. (2012) Thermochemical conversion of raw and defatted algal biomass via hydrothermal liquefaction and slow pyrolysis. Bioresour Technol 109: 178-187. |
[5] |
Chen Y, Mu R, Yang M, et al. (2017) Catalytic hydrothermal liquefaction for bio-oil production over CNTs supported metal catalysts. Chem Eng Sci 161: 299-307. doi: 10.1016/j.ces.2016.12.010
![]() |
[6] |
Wang Y, Wang H, Lin H, et al. (2013) Effects of solvents and catalysts in liquefaction of pinewood sawdust for the production of bio-oils. Biomass Bioenerg 59: 158-167. doi: 10.1016/j.biombioe.2013.10.022
![]() |
[7] |
Toor SS, Rosendahl L, Rudolf A (2011) Hydrothermal liquefaction of biomass: a review of subcritical water technologies. Energy 36: 2328-2342. doi: 10.1016/j.energy.2011.03.013
![]() |
[8] | Saber M, Golzary A, Hosseinpour M, et al. (2016) Catalytic hydrothermal liquefaction of microalgae using nanocatalyst. Appl Energy 183: 566-576. |
[9] | Jiang J, Junming XU, Zhanqian, S (2015) Review of the direct thermochemical conversion of lignocellulosic biomass for liquid fuels. Front Agr Sci Eng 2: 13-27. |
[10] |
Okuda K, Umetsu M, Takami S, et al. (2004) Disassembly of lignin and chemical recovery-rapid depolymerizatin of lignin without char formation in water-phenol mixtures. Fuel Process Technol 85: 803-813. doi: 10.1016/j.fuproc.2003.11.027
![]() |
[11] | Yang C, Jia L, Chen C, et al. (2011) Bio-oil from hydro-liquefaction of Dunaliella salina over Ni/REHY catalyst. Bioresour Technol 102: 4580-4584. |
[12] | Perego C, Bianchi D (2010) Biomass upgrading through acid–base catalysis. Chem Eng J 161: 314-322. doi: 10.1016/j.cej.2010.01.036 |
[13] | Gayubo AG, Alonso A, Valle B, et al. (2010) Hydrothermal stability of HZSM-5 catalysts modified with Ni for the transformation of bioethanol into hydrocarbons. Fuel 89: 3365-3372. doi: 10.1016/j.fuel.2010.03.002 |
[14] | Hamelinck CN, Van Hooijdonk G, Faaij AP (2005) Ethanol from lignocellulosic biomass: techno-economic performance in short-, middle- and long-term. Biomass Bioenerg 28: 384-410. doi: 10.1016/j.biombioe.2004.09.002 |
[15] | Huang HJ, Yuan XZ, Zeng GM, et al. (2013) Thermochemical liquefaction of rice husk for bio-oil production with sub- and supercritical ethanol as solvent. J Anal Appl Pyrolysis 102: 60-67. doi: 10.1016/j.jaap.2013.04.002 |
[16] | Fan SP, Zakaria S, Chia CH, et al. (2011) Comparative studies of products obtained from solvolysis liquefaction of oil palm empty fruit bunch fibres using different solvents. Bioresour Technol 102: 3521-3526. doi: 10.1016/j.biortech.2010.11.046 |
[17] | Aysu T, Turhan M, Küçük MM (2012) Liquefaction of Typha latifolia by supercritical fluid extraction. Bioresour Technol 107: 464-470. doi: 10.1016/j.biortech.2011.12.069 |
[18] | Huang H, Yuan X, Zeng G, et al. (2011) Thermochemical liquefaction characteristics of microalgae in sub- and supercritical ethanol. Fuel Process Technol 92: 147-153. doi: 10.1016/j.fuproc.2010.09.018 |
[19] | Li H, Yuan X, Zeng G, et al. (2010) The formation of bio-oil from sludge by deoxy-liquefaction in supercritical ethanol. Bioresour Technol 101: 2860-2866. doi: 10.1016/j.biortech.2009.10.084 |
[20] | Leng S, Wang X, He X, et al. (2013) NiFe/γ-Al2O3: a universal catalyst for the hydrodeoxygenation of bio-oil and its model compounds. Catal Commun 41: 34-37. doi: 10.1016/j.catcom.2013.06.037 |
[21] | Vishnevetsky I, Epstein M (2007) Production of hydrogen from solar zinc in steam atmosphere. Int J Hydrogen Energy 32: 2791-2802. doi: 10.1016/j.ijhydene.2007.04.004 |
[22] | Brand S, Susanti RF, Kim SK, et al. (2013) Supercritical ethanol as an enhanced medium for lignocellulosic biomass liquefaction: Influence of physical process parameters. Energy 59: 173-182. doi: 10.1016/j.energy.2013.06.049 |
[23] | Xu C, Etcheverry T (2008) Hydro-liquefaction of woody biomass in sub- and super-critical ethanol with iron-based catalysts. Fuel 87: 335-345. doi: 10.1016/j.fuel.2007.05.013 |
[24] | Zhou C, Zhu X, Qian F, et al. (2016) Catalytic hydrothermal liquefaction of rice straw in water/ethanol mixtures for high yields of monomeric phenols using reductive CuZnAl catalyst. Fuel Process Technol 154: 1-6. doi: 10.1016/j.fuproc.2016.08.010 |
[25] | Cheng S, D'cruz I, Wang M, et al. (2010) Highly efficient liquefaction of woody biomass in hot-compressed alcohol−water co-solvents. Energ Fuel 24: 4659-4667. doi: 10.1021/ef901218w |
[26] | Brand S, Hardi F, Kim J, et al. (2014) Effect of heating rate on biomass liquefaction: differences between subcritical water and supercritical ethanol. Energy 68: 420-427. doi: 10.1016/j.energy.2014.02.086 |
[27] | Zhai Y, Chen Z, Chen H, et al. (2015) Co-liquefaction of sewage sludge and oil-tea-cake in supercritical methanol: yield of bio-oil, immobilization and risk assessment of heavy metals. Environ Technol 36: 2770-2777. doi: 10.1080/09593330.2015.1049210 |
[28] | Cheng S, Wei L, Zhao X, et al. (2016) Conversion of Prairie Cordgrass to Hydrocarbon Biofuel over Co-Mo/HZSM-5 Using a Two-Stage Reactor System. Energy Technol 4: 706-713. doi: 10.1002/ente.201500452 |
[29] | Maddi B, Viamajala S, Varanasi S (2011) Comparative study of pyrolysis of algal biomass from natural lake blooms with lignocellulosic biomass. Bioresour Technol 102: 11018-11026. |
[30] | Zhao X, Wei L, Cheng S, et al. (2015) Catalytic cracking of carinata oil for hydrocarbon biofuel over fresh and regenerated Zn/Na-ZSM-5. Appl Catal A 507: 44-55. |
[31] | Cheng S, Wei L, Alsowij MR, et al. (2017) In-situ hydrodeoxygenation upgrading of pine sawdust bio-oil to hydrocarbon biofuel using Pd/C catalyst. J Energy Inst: In press. |
[32] | Cheng S, Wei L, Zhao X, et al. (2016) Hydrodeoxygenation of prairie cordgrass bio-oil over Ni based activated carbon synergistic catalysts combined with different metals. New Biotechnol 33: 440-448. doi: 10.1016/j.nbt.2016.02.004 |
[33] | Zhao X, Wei L, Cheng S, et al. (2016) Hydroprocessing of carinata oil for hydrocarbon biofuel over Mo-Zn/Al2O3. Appl Catal B 196: 41-49. |
[34] | Huang Y, Wei L, Zhao X, et al. (2016) Upgrading pine sawdust pyrolysis oil to green biofuels by HDO over zinc-assisted Pd/C catalyst. Energy Convers Manage 115: 8-16. doi: 10.1016/j.enconman.2016.02.049 |
[35] | Cheng S, Wei L, Zhao X, et al. (2015) Directly catalytic upgrading bio-oil vapor produced by prairie cordgrass pyrolysis over Ni/HZSM-5 using a two stage reactor. AIMS Energy 3: 227-240. doi: 10.3934/energy.2015.2.227 |
[36] | Wei L, Gao Y, Qu W, et al. (2016) Torrefaction of Raw and Blended Corn Stover, Switchgrass, and Prairie Grass. Trans ASABE 59: 717-726. doi: 10.13031/trans.59.10739 |
[37] | Zhao X, Wei L, Cheng S, et al. (2015) Catalytic cracking of camelina oil for hydrocarbon biofuel over ZSM-5-Zn catalyst. Fuel Process Technol 139: 117-126. |
[38] | Zhao X, Wei L, Cheng S, et al. (2015) Optimization of catalytic cracking process for upgrading camelina oil to hydrocarbon biofuel. Ind Crops Prod 77: 516-526. |
[39] | Zhao X, Wei L, Cheng S, et al. (2016) Development of hydrocarbon biofuel from sunflower seed and sunflower meat oils over ZSM-5. J Renew Sust Energ 8: 013109. doi: 10.1063/1.4941911 |
[40] | Xu Y, Zheng X, Yu H, et al. (2014) Hydrothermal liquefaction of Chlorella pyrenoidosa for bio-oil production over Ce/HZSM-5. Bioresour Technol 156: 1-5. |
[41] | Duan P, Savage PE (2010) Hydrothermal liquefaction of a microalga with heterogeneous catalysts. Ind Eng Chem Res 50: 52-61. |
[42] | Akhtar J, Kuang SK, Amin NS (2010) Liquefaction of empty palm fruit bunch (EPFB) in alkaline hot compressed water. Renew Energ 35: 1220-1227. doi: 10.1016/j.renene.2009.10.003 |
[43] | Karagöz S, Bhaskar T, Muto A, et al. (2006) Hydrothermal upgrading of biomass: effect of K2CO3 concentration and biomass/water ratio on products distribution. Bioresour Technol 97: 90-98. doi: 10.1016/j.biortech.2005.02.051 |
[44] | Bhaskar T, Sera A, Muto A, et al. (2008) Hydrothermal upgrading of wood biomass: influence of the addition of K2CO3 and cellulose/lignin ratio. Fuel 87: 2236-2242. doi: 10.1016/j.fuel.2007.10.018 |
[45] | Liu HM, Xie XA, Feng B, et al. (2011) Effect of catalysts on 5-lump distribution of cornstalk liquefaction in sub-critical ethanol. BioResour 6: 2592-2604. |
[46] | Xu CC, Su H, Cang D (2008) Liquefaction of corn distillers dried grains with solubles (DDGS) in hot-compressed phenol. BioResour 3: 363-382. |
[47] | Karagöz S, Bhaskar T, Muto A, et al. (2005) Low-temperature catalytic hydrothermal treatment of wood biomass: analysis of liquid products. Chem Eng J 108: 127-137. doi: 10.1016/j.cej.2005.01.007 |
[48] | Hammerschmidt A, Boukis N, Hauer E, et al. (2011) Catalytic conversion of waste biomass by hydrothermal treatment. Fuel 90: 555-562. doi: 10.1016/j.fuel.2010.10.007 |
[49] | Tymchyshyn M, Xu CC (2010) Liquefaction of biomass in hot-compressed water for the production of phenolic compounds. Bioresour Technol 101: 2483-2490. doi: 10.1016/j.biortech.2009.11.091 |
[50] | Wang Y, Wang H, Lin H, et al. (2013) Effects of solvents and catalysts in liquefaction of pinewood sawdust for the production of bio-oils. Biomass Bioenergy 59: 158-167. doi: 10.1016/j.biombioe.2013.10.022 |
[51] | Zhu Z, Toor SS, Rosendahl L, et al. (2014) Analysis of product distribution and characteristics in hydrothermal liquefaction of barley straw in subcritical and supercritical water. Environ Prog Sustain Energy 33: 737-743. |
[52] | Xue Y, Chen H, Zhao W, et al. (2016) A review on the operating conditions of producing bio-oil from hydrothermal liquefaction of biomass. Int J Energy Res 40: 865-877. doi: 10.1002/er.3473 |
[53] | Zhang B, von Keitz M, Valentas K (2008) Thermal effects on hydrothermal biomass liquefaction. Appl Biochem Biotechnol 147: 143-150. doi: 10.1007/s12010-008-8131-5 |
[54] | Kruse A, Henningsen T, Sinag A, et al. (2003) Biomass gasification in supercritical water: influence of the dry matter content and the formation of phenols. Ind Eng Chem Res 42: 3711-3717. doi: 10.1021/ie0209430 |
[55] | Yang Y, Gilbert A, Xu CC (2009) Production of bio-crude from forestry waste by hydro-liquefaction in sub-/super-critical methanol. AIChE J 55: 807-819. doi: 10.1002/aic.11701 |
[56] | Zhang J, Chen WT, Zhang P, et al. (2013) Hydrothermal liquefaction of Chlorella pyrenoidosa in sub- and supercritical ethanol with heterogeneous catalysts. Bioresour Technol 133: 389-397. |
[57] | Iliopoulou EF, Stefanidis SD, Kalogiannis KG, et al. (2012) Catalytic upgrading of biomass pyrolysis vapors using transition metal-modified ZSM-5 zeolite. Appl Catal B 127: 281-290. doi: 10.1016/j.apcatb.2012.08.030 |
[58] | Adjaye JD, Bakhshi NN (1995) Catalytic conversion of a biomass-derived oil to fuels and chemicals I: Model compound studies and reaction pathways. Biomass Bioenerg 8: 131-149. doi: 10.1016/0961-9534(95)00018-3 |
[59] | Zhao C, Lercher JA (2012) Upgrading Pyrolysis Oil over Ni/HZSM-5 by Cascade Reactions. Angew Chem 124: 6037-6042. doi: 10.1002/ange.201108306 |
[60] | Torri C, Fabbri D, Garcia-Alba L, et al. (2013) Upgrading of oils derived from hydrothermal treatment of microalgae by catalytic cracking over H-ZSM-5: A comparative Py–GC–MS study. J Anal Appl Pyrolysis 101: 28-34. |
[61] | Huynh TM, Armbruster U, Nguyen LH, et al. (2015) Hydrodeoxygenation of Bio-Oil on Bimetallic Catalysts: From Model Compound to Real Feed. J Sustainable Bioenergy Syst 5: 151-160. doi: 10.4236/jsbs.2015.54014 |
[62] | Zhang X, Wang T, Ma L, et al. (2013) Hydrotreatment of bio-oil over Ni-based catalyst. Bioresour Technol 127: 306-311. doi: 10.1016/j.biortech.2012.07.119 |
[63] | Thangalazhy-Gopakumar S, Adhikari S, Gupta RB (2012) Catalytic pyrolysis of biomass over H+ZSM-5 under hydrogen pressure. Energ Fuel 26: 5300-5306. doi: 10.1021/ef3008213 |
[64] | Weng Y, Qiu S, Ma L, et al. (2015) Jet-Fuel Range Hydrocarbons from Biomass-Derived Sorbitol over Ni-HZSM-5/SBA-15 Catalyst. Catalysts 5: 2147-2160. doi: 10.3390/catal5042147 |
[65] | Li X, Su L, Wang Y, et al. (2012) Catalytic fast pyrolysis of Kraft lignin with HZSM-5 zeolite for producing aromatic hydrocarbons. Front Environ Sci Eng 6: 295-303. doi: 10.1007/s11783-012-0410-2 |
[66] | Cheng S, Wei L, Zhao X (2016) Development of a bifunctional Ni/HZSM-5 catalyst for converting prairie cordgrass to hydrocarbon biofuel. Energy Sources Part A 38: 2433-2437. doi: 10.1080/15567036.2015.1065298 |
[67] | Mortensen PM, Grunwaldt JD, Jensen PA, et al. (2011) A review of catalytic upgrading of bio-oil to engine fuels. Appl Catal A 407: 1-19. |
[68] | Ganjkhanlou Y, Groppo E, Bordiga S, et al. (2016) Incorporation of Ni into HZSM-5 zeolites: Effects of zeolite morphology and incorporation procedure. Micropor Mesopor Mat 229: 76-82. doi: 10.1016/j.micromeso.2016.04.002 |
[69] | Lv M, Zhou J, Yang W, et al. (2010) Thermogravimetric analysis of the hydrolysis of zinc particles. Int J Hydrogen Energy 35: 2617-2621. doi: 10.1016/j.ijhydene.2009.04.017 |
[70] | Balat M (2008) Mechanisms of thermochemical biomass conversion processes. Part 3: reactions of liquefaction. Energy Sources Part A 30: 649-659. doi: 10.1080/10407780600817592 |
[71] | Brown TM, Duan P, Savage PE (2010) Hydrothermal liquefaction and gasification of Nannochloropsis sp. Energ Fuel 24: 3639-3646. doi: 10.1021/ef100203u |