
Recently, convolutional neural networks (CNNs) for classification of multi-signal time-domain data have been developed. Although some signals are important for correct classification, others are not. When data that include unimportant signals are fed to the CNN input layer, the computation, memory, and data-collection costs increase. Therefore, identifying and eliminating non-important signals from the input layer is important. In this study, we proposed a feature-gradient-based signal selection algorithm (FG-SSA), which finds and removes signals that are not important for classification by utilizing the feature gradients obtained in the process of gradient-weighted class activation mapping (grad-CAM). Defining ns as the number of signals, the computational complexity of FG-SSA is linear, O(ns); i.e., it has a low calculation cost. We verified the effectiveness of the algorithm using the OPPORTUNITY dataset, an open dataset comprising acceleration signals of human activities. On average, FG-SSA removed 6.55 of the 15 signals (five triaxial sensors) while maintaining high generalization scores of classification. Therefore, FG-SSA can find and remove signals that are not important for CNN-based classification. In the process of FG-SSA, the degree of influence of each signal on each class estimation is quantified; thus, it is possible to visually determine which signals are effective for class estimation and which are not. FG-SSA is a white-box signal selection algorithm because one can understand why each signal was selected. An existing method, Bayesian optimization, was also able to find superior signal sets, but its computational cost was approximately three times greater than that of FG-SSA. We consider FG-SSA to be a low-computational-cost algorithm.
Citation: Yuto Omae, Yusuke Sakai, Hirotaka Takahashi. Features gradient-based signals selection algorithm of linear complexity for convolutional neural networks[J]. AIMS Mathematics, 2024, 9(1): 792-817. doi: 10.3934/math.2024041
Citation: Yue Feng, Yujie Liu, Ruishu Wang, Shangyou Zhang. A conforming discontinuous Galerkin finite element method on rectangular partitions. Electronic Research Archive, 2021, 29(3): 2375-2389. doi: 10.3934/era.2020120
For simplicity, we consider the Poisson equation with a Dirichlet boundary condition as our model problem:
−Δu = f in Ω, (1)

u = g on ∂Ω, (2)

where Ω is a bounded polygonal domain and f, g are given data. Using integration by parts, we can get the variational form: find u ∈ H^1(Ω) with u = g on ∂Ω such that

(∇u, ∇v) = (f, v), ∀ v ∈ H^1_0(Ω). (3)
Various finite element methods have been introduced to solve the Poisson equations (1)-(2), such as the Galerkin finite element methods (FEMs) [2,3], the mixed FEMs [15], and the finite volume methods (FVMs) [6], etc. The FVMs emphasize the local conservation property and discretize the equations by requiring the solution to satisfy flux conservation on a dual mesh consisting of control volumes. The mixed FEMs are another category of methods, based on approximating the primal variable u and the flux variable simultaneously.
The classical conforming finite element method obtains numerical approximations by constructing a finite-dimensional subspace of H^1(Ω) consisting of continuous piecewise polynomials. The conforming scheme reads: find u_h in this subspace, satisfying the boundary condition, such that

(∇u_h, ∇v_h) = (f, v_h), ∀ v_h ∈ V_h^0, (4)

where V_h^0 denotes the subspace of functions vanishing on ∂Ω. Discontinuous finite element methods relax this inter-element continuity requirement, at the cost of more complex formulations.
One obvious disadvantage of discontinuous finite element methods is their rather complex formulations, which are often necessary to ensure connections of discontinuous solutions across element boundaries. For example, the interior penalty discontinuous Galerkin (IPDG) methods add parameter-dependent interior penalty terms. Besides additional programming complexity, one often has difficulties in finding optimal values for the penalty parameters and corresponding efficient solvers. Most recently, Zhang and Ye [21] developed a discontinuous finite element method that has an ultra-simple weak formulation on triangular/tetrahedral meshes. The corresponding numerical scheme can be written as: find u_h ∈ V_h^0 such that

(∇_w u_h, ∇_w v_h) = (f, v_h), ∀ v_h ∈ V_h^0, (5)

where ∇_w denotes a weak gradient operator computed locally on each element.
Following the work in [21,22], we propose a new conforming DG finite element method on rectangular partitions in this work. It can be obtained from the conforming formulation simply by replacing the classical gradient with a weak gradient, while the finite element functions are allowed to be discontinuous across element boundaries.
In this paper, we keep the same finite element space as the DG method, replace the boundary function with the average of the inner function, and use the weak gradient arising from local Raviart–Thomas (RT) elements [5] to approximate the classical gradient. Moreover, the derivation in this paper is based on rectangular RT elements [16]. Error estimates of optimal order are established for the corresponding conforming DG approximation in both a discrete H^1 norm and the L^2 norm.
The rest of this paper is organized as follows: In Section 2, we present the conforming DG finite element scheme for the Poisson equation on rectangular partitions. Section 3 is devoted to a discussion of the stability and solvability of the new method. In Section 4, we prepare ourselves for error estimates by deriving some identities. Error estimates of optimal order in the discrete H^1 norm and the L^2 norm are established in Section 5, and numerical results are reported in the final sections.
Throughout this paper, we adopt the standard definitions of the Sobolev spaces H^s(Ω) with norms ‖·‖_s and semi-norms |·|_s; in particular, ‖·‖ denotes the L^2(Ω) norm. We use the space

H^1_0(Ω) = {v ∈ H^1(Ω) : v|_∂Ω = 0},

and the space

H(div, Ω) = {q ∈ [L^2(Ω)]^d : ∇·q ∈ L^2(Ω)}.
Assume that the domain Ω is covered by a partition T_h consisting of rectangles. Denote by E_h the set of all edges in T_h and by E_h^0 the set of all interior edges.
For any interior edge e ∈ E_h^0 shared by two elements T_1 and T_2, the average {v} and the jump [[v]] of a function v on e are defined by

{v} = (1/2)(v|_∂T_1 + v|_∂T_2), [[v]] = v|_∂T_1 n_1 + v|_∂T_2 n_2, (6)

where n_1 and n_2 are the unit outward normal vectors on ∂T_1 and ∂T_2, respectively. For a boundary edge e ⊂ ∂Ω,

{v} = v|_e and [[v]] = v|_e n. (7)
We define a discontinuous finite element space

V_h = {v ∈ L^2(Ω) : v|_T ∈ Q_k(T), ∀ T ∈ T_h}, (8)

and its subspace

V_h^0 = {v ∈ V_h : v = 0 on ∂Ω}, (9)

where Q_k(T) denotes the space of polynomials on T of degree at most k in each variable.
Definition 2.1. For a given v ∈ V_h and an element T ∈ T_h, the weak gradient ∇_d v ∈ RT_k(T) is defined as the unique polynomial satisfying

(∇_d v, q)_T := −(v, ∇·q)_T + ⟨{v}, q·n⟩_∂T, ∀ q ∈ RT_k(T), (10)

where RT_k(T) = Q_{k+1,k}(T) × Q_{k,k+1}(T) is the Raviart–Thomas element of index k on the rectangle T and n is the unit outward normal vector on ∂T.

The weak gradient operator ∇_d is then defined element-wise on V_h:

(∇_d v)|_T = ∇_d(v|_T).
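For a continuous piecewise polynomial, the weak gradient of Definition 2.1 coincides with the classical gradient (this is Lemma 4.2 below). The following sympy sketch checks this on a single reference square for k = 1; the basis construction and helper names are ours, not from the paper.

```python
import sympy as sp

x, y = sp.symbols('x y')

# RT_[1] on the reference square T = [0,1]^2: Q_{2,1} x Q_{1,2}.
px = [x**i * y**j for i in range(3) for j in range(2)]          # Q_{2,1}
py = [x**i * y**j for i in range(2) for j in range(3)]          # Q_{1,2}
basis = [(p, sp.Integer(0)) for p in px] + [(sp.Integer(0), p) for p in py]

def inner(u, w):
    """(u, w)_T for vector fields on the unit square."""
    return sp.integrate(u[0]*w[0] + u[1]*w[1], (x, 0, 1), (y, 0, 1))

def div(q):
    return sp.diff(q[0], x) + sp.diff(q[1], y)

def bdry(v, q):
    """<v, q.n>_{dT}: sum over the four edges with outward normals."""
    s  = sp.integrate(-v.subs(y, 0) * q[1].subs(y, 0), (x, 0, 1))  # bottom, n = (0,-1)
    s += sp.integrate( v.subs(y, 1) * q[1].subs(y, 1), (x, 0, 1))  # top,    n = (0, 1)
    s += sp.integrate(-v.subs(x, 0) * q[0].subs(x, 0), (y, 0, 1))  # left,   n = (-1,0)
    s += sp.integrate( v.subs(x, 1) * q[0].subs(x, 1), (y, 0, 1))  # right,  n = (1, 0)
    return s

v = x*y  # a continuous Q_1 function; on a single element {v} = v on every edge

# Solve (grad_d v, q_j)_T = -(v, div q_j)_T + <v, q_j.n>_{dT} for all basis q_j.
n = len(basis)
M = sp.Matrix(n, n, lambda i, j: inner(basis[i], basis[j]))
b = sp.Matrix([-sp.integrate(v*div(q), (x, 0, 1), (y, 0, 1)) + bdry(v, q)
               for q in basis])
c = M.solve(b)
grad_d = (sp.expand(sum(c[i]*basis[i][0] for i in range(n))),
          sp.expand(sum(c[i]*basis[i][1] for i in range(n))))
print(grad_d)  # (y, x): exactly the classical gradient of v = x*y
```

Because ∇Q_k(T) ⊂ RT_k(T) on rectangles, the linear system simply reproduces ∇v whenever v is continuous, which is the mechanism behind Lemma 4.2.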
We introduce the following bilinear form:

a(v, w) = (∇_d v, ∇_d w).

The conforming DG algorithm for solving the problem (1)-(2), with g = 0 for simplicity, is given as follows.

Conforming DG algorithm 1. Find u_h ∈ V_h^0 such that

a(u_h, v_h) = (f, v_h), ∀ v_h ∈ V_h^0, (11)

where (f, v_h) denotes the L^2 inner product of f and v_h.
We will prove the existence and uniqueness of the solution of Eq (11). First, we present two useful inequalities needed in the forthcoming analysis.
Lemma 3.1 (trace inequality). Let T_h be a partition of the domain Ω. Then for any T ∈ T_h, any edge e ⊂ ∂T, and any φ ∈ H^1(T),

‖φ‖²_e ≤ C(h_T^{-1}‖φ‖²_T + h_T‖∇φ‖²_T), (12)

where h_T denotes the diameter of T and C is a constant independent of h_T.
Lemma 3.2 (inverse inequality). Let φ be a polynomial of degree at most n on T ∈ T_h. Then

‖∇φ‖_T ≤ C(n) h_T^{-1} ‖φ‖_T, ∀ T ∈ T_h. (13)
Then, we define the following semi-norms on the discontinuous finite element space V_h:

|||v|||² = a(v, v) = Σ_{T∈T_h} ‖∇_d v‖²_T, (14)

‖v‖²_{1,h} = Σ_{T∈T_h} ‖∇v‖²_T + Σ_{e∈E_h^0} h_e^{-1} ‖[[v]]‖²_e. (15)
We have the equivalence between the semi-norms |||·||| and ‖·‖_{1,h}, as stated in the following lemma.

Lemma 3.3. For any v ∈ V_h,

C_1 ‖v‖_{1,h} ≤ |||v||| ≤ C_2 ‖v‖_{1,h}, (16)

where C_1 and C_2 are constants independent of the mesh size h.
Proof. It follows from the definition of ∇_d, integration by parts, the Cauchy–Schwarz inequality, and the trace inequality that, for any T_1 ∈ T_h,

‖∇_d v‖²_{T_1} = (∇_d v, ∇_d v)_{T_1}
 = −(v, ∇·∇_d v)_{T_1} + ⟨{v}, ∇_d v·n⟩_{∂T_1}
 = (∇v, ∇_d v)_{T_1} − ⟨(v − {v})n, ∇_d v⟩_{∂T_1}
 ≤ ‖∇v‖_{T_1}‖∇_d v‖_{T_1} + ‖(v − {v})n‖_{∂T_1}‖∇_d v‖_{∂T_1}
 ≤ C‖∇_d v‖_{T_1}(‖∇v‖_{T_1} + h_{T_1}^{-1/2}‖(v − {v})n‖_{∂T_1}), (17)

where the last step uses the trace and inverse inequalities for ‖∇_d v‖_{∂T_1}.
For any edge e ⊂ ∂T_1 shared with a neighboring element T_2, we have

(v − {v})|_e n_1 = v|_∂T_1 n_1 − (1/2)(v|_∂T_1 + v|_∂T_2) n_1 = (1/2)(v|_∂T_1 n_1 + v|_∂T_2 n_2) = (1/2)[[v]]|_e.

Then we can get

‖(v − {v})n‖²_{∂T_1} ≤ (1/2) Σ_{e⊂∂T_1} ‖[[v]]‖²_e. (18)
Substituting (18) into (17) gives

‖∇_d v‖²_{T_1} ≤ C‖∇_d v‖_{T_1}(‖∇v‖_{T_1} + Σ_{e⊂∂T_1} h_e^{-1/2}‖[[v]]‖_e);

summing over all T_1 ∈ T_h completes the proof of the right-hand inequality of (16).
To prove the left-hand inequality of (16), we consider the following subspace of RT_k(T):

D(k, T) := {q ∈ RT_k(T) : q·n = 0 on ∂T}.

Note that

‖∇v‖_T = sup_{q∈D(k,T)} (∇v, q)_T / ‖q‖_T. (19)
Using integration by parts, the Cauchy–Schwarz inequality, and the definition of ∇_d, we have, for any q ∈ D(k, T),

(∇v, q)_T = −(v, ∇·q)_T + ⟨v, q·n⟩_∂T = (∇_d v, q)_T − ⟨{v}, q·n⟩_∂T = (∇_d v, q)_T ≤ ‖∇_d v‖_T ‖q‖_T,

where we have used the fact that q·n = 0 on ∂T. Together with (19), this yields

‖∇v‖_T ≤ ‖∇_d v‖_T. (20)
For an edge e ⊂ ∂T, we define the space D_e(k, T) := {q ∈ RT_k(T) : q·n = 0 on ∂T∖e}, for which the following representation holds:

‖[[v]]‖_e = sup_{q∈D_e(k,T)} ⟨[[v]], q·n⟩_e / ‖q·n‖_e. (21)
Following integration by parts and the definition of ∇_d, we have, for q ∈ D_e(k, T),

(∇_d v, q)_T = (∇v, q)_T − ⟨v, q·n⟩_e + ⟨{v}, q·n⟩_e.
Together with (20) and the identity (v − {v})|_e n = (1/2)[[v]]|_e, we obtain

|⟨[[v]], q·n⟩_e| = 2|(∇_d v, q)_T − (∇v, q)_T| ≤ 2|(∇_d v, q)_T| + 2|(∇v, q)_T| ≤ C(‖∇_d v‖_T + ‖∇v‖_T)‖q‖_T ≤ C‖∇_d v‖_T‖q‖_T.

Substituting the above inequality into (21) and noting that, by a scaling argument [13], ‖q‖_T ≤ C h^{1/2} ‖q·n‖_e for such q, we have

‖[[v]]‖_e ≤ C‖∇_d v‖_T ‖q‖_T / ‖q·n‖_e ≤ C h^{1/2} ‖∇_d v‖_T. (22)

Combining (20) and (22) gives a proof of the left-hand inequality of (16).
Lemma 3.4. The semi-norm |||·||| defines a norm on the subspace V_h^0.

Proof. We shall only verify the positivity property of |||·|||. Suppose |||v||| = 0 for some v ∈ V_h^0. By Lemma 3.3, ‖v‖_{1,h} = 0, so ∇v = 0 on each element and [[v]] = 0 on each interior edge; hence v is constant in Ω. Since v = 0 on ∂Ω, we conclude v ≡ 0.
The above two lemmas imply the well-posedness of the scheme (11). We prove the existence and uniqueness of the solution of the conforming DG method in Theorem 3.1.

Theorem 3.1. The conforming DG scheme (11) has one and only one solution.
Proof. To prove that the scheme (11) is uniquely solvable, it suffices to verify that the homogeneous equation has zero as its unique solution. To this end, let f = 0 and take v_h = u_h in (11); it follows that

a(u_h, u_h) = 0,

which leads to |||u_h||| = 0 and hence, by Lemma 3.4, u_h ≡ 0.
In this section, we will derive an error equation which will be used for the error estimates. For any q ∈ H(div, Ω), let Π_h q denote its local projection into RT_k(T), which satisfies

(∇·q, v)_T = (∇·Π_h q, v)_T, ∀ v ∈ Q_k(T). (23)

For any w ∈ H^{1+k}(Ω), the projection has the approximation property

‖Π_h(∇w) − ∇w‖ ≤ C h^k ‖w‖_{1+k}. (24)
Moreover, it is easy to verify that the following property holds true.

Lemma 4.1. For any q ∈ H(div, Ω),

Σ_{T∈T_h} (−∇·q, v)_T = Σ_{T∈T_h} (Π_h q, ∇_d v)_T, ∀ v ∈ V_h^0. (25)
Proof. Since the normal component of Π_h q is continuous across interior edges and v = 0 on ∂Ω, we have

Σ_{T∈T_h} ⟨{v}, Π_h q·n⟩_∂T = 0. (26)

By the definition of Π_h in (23), the identity (26), and the definition of the weak gradient (10), we obtain

Σ_{T∈T_h} (−∇·q, v)_T = Σ_T (−∇·Π_h q, v)_T = Σ_T (−∇·Π_h q, v)_T + Σ_T ⟨{v}, Π_h q·n⟩_∂T = Σ_T (Π_h q, ∇_d v)_T.

This completes the proof of the lemma.
Before establishing the error equation, we define a continuous finite element subspace of V_h by

˜V_h = {v ∈ H^1(Ω) : v|_T ∈ Q_k(T), ∀ T ∈ T_h}, (27)

as well as its subspace with zero boundary values,

˜V_h^0 := {v ∈ ˜V_h : v|_∂Ω = 0}. (28)
Lemma 4.2. For any v ∈ ˜V_h, the weak gradient coincides with the classical gradient:

∇_d v = ∇v.

Proof. By the definition of ∇_d and the continuity of v (so that {v} = v on each edge), for any q ∈ RT_k(T),

(∇_d v, q)_T = −(v, ∇·q)_T + ⟨{v}, q·n⟩_∂T = −(v, ∇·q)_T + ⟨v, q·n⟩_∂T = (∇v, q)_T,

which gives

(∇_d v − ∇v, q)_T = 0, ∀ q ∈ RT_k(T).

Letting q = ∇_d v − ∇v, which belongs to RT_k(T) since ∇Q_k(T) ⊂ RT_k(T), completes the proof.
Let I_h u ∈ ˜V_h denote a standard interpolation of u satisfying the approximation properties

‖I_h u − u‖ ≤ C h^{k+1} ‖u‖_{k+1}, (29)

‖∇I_h u − ∇u‖ ≤ C h^k ‖u‖_{k+1}. (30)

It is obvious that I_h u ∈ ˜V_h ⊂ V_h, so by Lemma 4.2, ∇_d I_h u = ∇I_h u.
Lemma 4.3. Denote by e_h = I_h u − u_h the difference between the interpolant of the exact solution u of (1)-(2) and the conforming DG solution u_h of (11). Then, for any v_h ∈ V_h^0,

a(e_h, v_h) = l_u(v_h), (31)

where the functional l_u is defined by

l_u(v_h) = Σ_{T∈T_h} (∇I_h u − Π_h∇u, ∇_d v_h)_T. (32)
Proof. Since I_h u ∈ ˜V_h, Lemma 4.2, the definition (32), Lemma 4.1 with q = ∇u, and −∇·∇u = f give

Σ_{T∈T_h} (∇_d I_h u, ∇_d v_h)_T = Σ_T (∇I_h u, ∇_d v_h)_T
 = Σ_T (∇I_h u − Π_h∇u + Π_h∇u, ∇_d v_h)_T
 = Σ_T (∇I_h u − Π_h∇u, ∇_d v_h)_T + Σ_T (Π_h∇u, ∇_d v_h)_T
 = l_u(v_h) − Σ_T (∇·∇u, v_h)_T
 = l_u(v_h) + (f, v_h).

By the definition of the scheme (11), we have

Σ_{T∈T_h} (∇_d I_h u − ∇_d u_h, ∇_d v_h)_T = l_u(v_h).

This completes the proof of the lemma.
The goal of this section is to derive the error estimates in the |||·||| norm and in the L^2 norm.

Theorem 5.1. Let u ∈ H^{k+1}(Ω) be the exact solution of (1)-(2), let u_h be the conforming DG solution of (11), and set e_h = I_h u − u_h. Then there exists a constant C such that

|||e_h||| ≤ C h^k |u|_{k+1}. (33)
Proof. Letting v_h = e_h in (31), we have

|||e_h|||² = l_u(e_h). (34)

From the Cauchy–Schwarz inequality, the triangle inequality, the definition of |||·|||, and the estimates (24) and (30), it follows that

l_u(v_h) = Σ_{T∈T_h} (∇I_h u − Π_h(∇u), ∇_d v_h)_T
 ≤ Σ_T ‖∇I_h u − Π_h(∇u)‖_T ‖∇_d v_h‖_T
 ≤ (Σ_T ‖∇I_h u − Π_h(∇u)‖²_T)^{1/2} (Σ_T ‖∇_d v_h‖²_T)^{1/2}
 = (Σ_T ‖∇I_h u − ∇u + ∇u − Π_h(∇u)‖²_T)^{1/2} |||v_h|||
 ≤ C (Σ_T ‖∇I_h u − ∇u‖²_T + ‖∇u − Π_h(∇u)‖²_T)^{1/2} |||v_h|||
 ≤ C h^k |u|_{k+1} |||v_h|||.

Then, we have

l_u(e_h) ≤ C h^k |u|_{k+1} |||e_h|||. (35)

Substituting (35) into (34), we obtain

|||e_h|||² ≤ C h^k |u|_{k+1} |||e_h|||,

which completes the proof of the theorem.
It is obvious that the conforming finite element solution ˜u_h ∈ ˜V_h^0 of (1)-(2) satisfies

(∇˜u_h, ∇v) = (f, v), ∀ v ∈ ˜V_h^0. (36)

For any v ∈ ˜V_h^0 ⊂ V_h^0, taking v_h = v in (11), using Lemma 4.2 and subtracting (36), we obtain

(∇_d u_h − ∇˜u_h, ∇v) = 0, ∀ v ∈ ˜V_h^0. (37)
In the rest of this section, we derive an optimal-order error estimate in the L^2 norm for the conforming DG approximation (11) by a duality argument. Consider the dual problem: find Φ ∈ H^1_0(Ω) such that

−∇·(∇Φ) = u_h − ˜u_h in Ω. (38)

Assume that the dual problem has the H^2-regularity

‖Φ‖_2 ≤ C ‖u_h − ˜u_h‖. (39)

In the rest of this paper, we write ε_h = u_h − ˜u_h.
Theorem 5.2. Assume that u ∈ H^{k+1}(Ω) is the exact solution of (1)-(2), that u_h is the conforming DG solution of (11), and that the regularity (39) holds. Then there exists a constant C such that

‖u − u_h‖ ≤ C h^{k+1} |u|_{k+1}. (40)
Proof. First, we derive an optimal-order bound for ‖ε_h‖ = ‖u_h − ˜u_h‖. Let Φ_h ∈ V_h^0 be the conforming DG approximation of the dual problem (38), i.e.,

a(Φ_h, v) = (ε_h, v), ∀ v ∈ V_h^0. (41)
Since I_h Φ ∈ ˜V_h^0, (37) and Lemma 4.2 give

(∇_d u_h − ∇˜u_h, ∇I_h Φ) = 0, ∇_d I_h Φ = ∇I_h Φ,

which gives

(∇_d u_h − ∇˜u_h, ∇_d I_h Φ) = 0. (42)
Setting v = ε_h in (41) and using (42), we obtain

‖ε_h‖² = a(Φ_h, ε_h) = Σ_{T∈T_h} (∇_d Φ_h, ∇_d ε_h)_T
 = Σ_T (∇_d(Φ_h − I_h Φ), ∇_d u_h − ∇˜u_h)_T
 ≤ |||Φ_h − I_h Φ||| (|||u_h − I_h u||| + ‖∇(I_h u − ˜u_h)‖).

Then, by the Cauchy–Schwarz inequality, (33), and (39), we obtain

‖ε_h‖² ≤ C h|Φ|_2 · h^k |u|_{k+1} ≤ C h^{k+1} |u|_{k+1} ‖ε_h‖,

which gives

‖ε_h‖ ≤ C h^{k+1} |u|_{k+1}. (43)

Combining the error estimate for the conforming finite element solution ˜u_h, the triangle inequality, and (43) yields (40), which completes the proof of the theorem.
In this section, we shall present some numerical results for the conforming discontinuous Galerkin method analyzed in the previous sections.
We solve the following Poisson equation on the unit square domain Ω = (0,1)²:

−Δu = 2π² sin(πx) sin(πy) in Ω, (44)

u = 0 on ∂Ω. (45)

The exact solution of the above problem is u = sin(πx) sin(πy).
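As a quick sanity check (a small sympy sketch of ours, not part of the original computations), one can confirm that this u indeed solves (44)-(45):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.sin(sp.pi * x) * sp.sin(sp.pi * y)              # claimed exact solution
f = sp.simplify(-sp.diff(u, x, 2) - sp.diff(u, y, 2))  # f = -Laplacian(u)
print(f)  # 2*pi**2*sin(pi*x)*sin(pi*y), the right-hand side of (44)

# u vanishes on all four sides of the unit square, matching (45):
assert all(sp.simplify(u.subs(s, v)) == 0 for s in (x, y) for v in (0, 1))
```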
We first compute the problem on uniform grids using elements of several polynomial degrees; the errors and the observed orders of convergence are listed in Table 1.
level | error | rate | error | rate | # unknowns
by |
6 | 0.1996E-02 | 1.97 | 0.8887E-02 | 1.98 | 1024 |
7 | 0.5013E-03 | 1.99 | 0.2228E-02 | 2.00 | 4096 |
8 | 0.1255E-03 | 2.00 | 0.5574E-03 | 2.00 | 16384 |
by |
6 | 0.2427E-02 | 1.97 | 0.1027E+00 | 1.02 | 3072 |
7 | 0.6100E-03 | 1.99 | 0.5105E-01 | 1.01 | 12288 |
8 | 0.1527E-03 | 2.00 | 0.2546E-01 | 1.00 | 49152 |
by |
5 | 0.1533E-03 | 3.00 | 0.2042E-01 | 2.03 | 1536 |
6 | 0.1915E-04 | 3.00 | 0.5061E-02 | 2.01 | 6144 |
7 | 0.2394E-05 | 3.00 | 0.1260E-02 | 2.01 | 24576 |
by |
5 | 0.7959E-05 | 4.00 | 0.1965E-02 | 3.00 | 2560 |
6 | 0.4971E-06 | 4.00 | 0.2451E-03 | 3.00 | 10240 |
7 | 0.3140E-07 | 3.98 | 0.3059E-04 | 3.00 | 40960 |
by |
4 | 0.1055E-04 | 4.97 | 0.1421E-02 | 4.05 | 960 |
5 | 0.3314E-06 | 4.99 | 0.8735E-04 | 4.02 | 3840 |
6 | 0.1057E-07 | 4.97 | 0.5417E-05 | 4.01 | 15360 |
by |
2 | 0.2835E-02 | 6.24 | 0.1450E+00 | 5.49 | 84 |
3 | 0.4532E-04 | 5.97 | 0.4718E-02 | 4.94 | 336 |
4 | 0.7115E-06 | 5.99 | 0.1478E-03 | 5.00 | 1344 |
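The observed orders in these tables are computed from successive errors under uniform refinement (h halves at each level); a small sketch of ours, using the first error column of the first block of Table 1:

```python
import math

# L2-type errors from levels 6-8 of the first block of Table 1;
# each refinement halves h, so the observed order is log2(e_l / e_{l+1}).
errors = [0.1996e-2, 0.5013e-3, 0.1255e-3]
rates = [math.log2(a / b) for a, b in zip(errors, errors[1:])]
print([round(r, 2) for r in rates])  # -> [1.99, 2.0], the second-order rate reported above
```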
The same test case is also computed on uniform square grids; the errors and the observed orders of convergence are listed in Table 2.
level | error | rate | error | rate | # unknowns
by |
6 | 0.4006E-03 | 1.99 | 0.2389E-02 | 1.99 | 4096 |
7 | 0.1003E-03 | 2.00 | 0.5982E-03 | 2.00 | 16384 |
8 | 0.2510E-04 | 2.00 | 0.1496E-03 | 2.00 | 65536 |
by |
6 | 0.2360E-04 | 2.99 | 0.3186E-02 | 1.99 | 9216 |
7 | 0.2953E-05 | 3.00 | 0.7976E-03 | 2.00 | 36864 |
8 | 0.3692E-06 | 3.00 | 0.1995E-03 | 2.00 | 147456 |
by |
5 | 0.1413E-04 | 4.08 | 0.1650E-02 | 2.97 | 4096 |
6 | 0.8676E-06 | 4.03 | 0.2072E-03 | 2.99 | 16384 |
7 | 0.5398E-07 | 4.01 | 0.2593E-04 | 3.00 | 65536 |
by |
3 | 0.2226E-02 | 4.59 | 0.5414E-01 | 3.52 | 400 |
4 | 0.9610E-04 | 4.53 | 0.3723E-02 | 3.86 | 1600 |
5 | 0.3279E-05 | 4.87 | 0.2392E-03 | 3.96 | 6400 |
To test the superconvergence of the method, we solve the following problem:

−Δu + u = f in Ω, u = 0 on ∂Ω,

where Ω = (0,1)² and f is chosen so that the exact solution is

u = (x − x²)(y − y³). (46)
Uniform square grids as shown in Figure 1 are used for the numerical computation. The numerical results are listed in Table 3. Surprisingly, for this problem, the observed order of convergence in the H^1-like norm approaches 1.5 rather than the expected first order, i.e., a half-order superconvergence.
level | error | rate | error | rate | # unknowns
by |
3 | 0.8265E-02 | 1.06 | 0.4577E-01 | 1.14 | 16 |
4 | 0.2772E-02 | 1.58 | 0.1732E-01 | 1.40 | 64 |
5 | 0.7965E-03 | 1.80 | 0.6331E-02 | 1.45 | 256 |
6 | 0.2142E-03 | 1.90 | 0.2290E-02 | 1.47 | 1024 |
7 | 0.5564E-04 | 1.94 | 0.8213E-03 | 1.48 | 4096 |
8 | 0.1419E-04 | 1.97 | 0.2928E-03 | 1.49 | 16384 |
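The superconvergence can be read off Table 3 directly; a small sketch of ours computing the observed orders from the second error column (levels 3-8):

```python
import math

# H1-type errors from Table 3, levels 3-8 (uniform refinement, h halved each level).
errors = [0.4577e-1, 0.1732e-1, 0.6331e-2, 0.2290e-2, 0.8213e-3, 0.2928e-3]
rates = [math.log2(a / b) for a, b in zip(errors, errors[1:])]
print([round(r, 2) for r in rates])  # -> [1.4, 1.45, 1.47, 1.48, 1.49], drifting toward 1.5
```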
To further test the superconvergence of the method, we solve the following problem with a variable coefficient:

−∇·(a∇u) = f in Ω, u = 0 on ∂Ω,

where Ω = (0,1)², a is a given variable coefficient, and f is chosen so that the exact solution is

u = (x − x³)(y² − y³). (47)
Uniform square grids as shown in Figure 1 are used for the computation. The numerical results are listed in Table 4. Surprisingly, again, the observed order in the H^1-like norm approaches 1.5, a half-order superconvergence.
level | error | rate | error | rate | # unknowns
by |
3 | 0.4929E-02 | 0.97 | 0.5371E-01 | 0.80 | 16 |
4 | 0.1917E-02 | 1.36 | 0.2401E-01 | 1.16 | 64 |
5 | 0.6004E-03 | 1.67 | 0.9407E-02 | 1.35 | 256 |
6 | 0.1682E-03 | 1.84 | 0.3507E-02 | 1.42 | 1024 |
7 | 0.4457E-04 | 1.92 | 0.1275E-02 | 1.46 | 4096 |
8 | 0.1148E-04 | 1.96 | 0.4576E-03 | 1.48 | 16384 |
In this paper, we establish a new numerical approximation scheme based on rectangular partitions for solving the second-order elliptic equation. We derived the numerical scheme and proved error estimates of optimal order in both a discrete H^1 norm and the L^2 norm; numerical experiments confirm the theory and, in addition, exhibit a half-order superconvergence in the H^1-like norm for the lowest-order element.