In 1999, Fallat and Johnson [1] introduced the concept of $k$-subdirect sums of square matrices, which generalizes the usual sum and the direct sum of matrices [2] and has potential applications in several contexts, such as matrix completion problems [3,4,5], overlapping subdomains in domain decomposition methods [6,7,8], and global stiffness matrices in finite elements [7,9].
Definition 1.1. [1] Let $A\in\mathbb{C}^{n_1\times n_1}$ and $B\in\mathbb{C}^{n_2\times n_2}$, and let $k$ be an integer such that $1\le k\le\min\{n_1,n_2\}$. Suppose that
$$A=\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}\quad\text{and}\quad B=\begin{bmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\end{bmatrix},\tag{1.1}$$
where $A_{22}$ and $B_{11}$ are square matrices of order $k$. Then
$$C=\begin{bmatrix}A_{11}&A_{12}&0\\A_{21}&A_{22}+B_{11}&B_{12}\\0&B_{21}&B_{22}\end{bmatrix}$$
is called the $k$-subdirect sum of $A$ and $B$, denoted by $C=A\oplus_k B$.
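Definition 1.1 can be realized directly in code: the trailing $k\times k$ principal block of $A$ overlaps, and is added to, the leading $k\times k$ principal block of $B$. A minimal Python/NumPy sketch (the function name `subdirect_sum` is our own):

```python
import numpy as np

def subdirect_sum(A, B, k):
    """k-subdirect sum C = A (+)_k B of square matrices A and B.

    The trailing k x k principal block of A overlaps (and is added to)
    the leading k x k principal block of B, as in Definition 1.1.
    """
    n1, n2 = A.shape[0], B.shape[0]
    assert 1 <= k <= min(n1, n2), "k must satisfy 1 <= k <= min(n1, n2)"
    n = n1 + n2 - k
    C = np.zeros((n, n), dtype=complex)
    C[:n1, :n1] += A          # A occupies the leading n1 x n1 block
    C[n1 - k:, n1 - k:] += B  # B occupies the trailing n2 x n2 block
    return C
```

For instance, with $A=B=I_2$ and $k=1$, the result is the $3\times 3$ matrix $\operatorname{diag}(1,2,1)$: the two overlapped diagonal entries add up.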
For $k$-subdirect sums of matrices, an important question is whether, when $A$ and $B$ lie in a certain subclass of $H$-matrices, the $k$-subdirect sum $C$ must also lie in that subclass, since this can be used to analyze the convergence of the Jacobi and Gauss-Seidel methods for solving linearized systems of nonlinear equations [10]. Here, a square matrix $A$ is called an $H$-matrix if there exists a positive diagonal matrix $X$ such that $AX$ is strictly diagonally dominant [11]. To answer this question, several results on subdirect sum problems for $H$-matrices and some subclasses of $H$-matrices have been obtained, covering $S$-strictly diagonally dominant matrices [12], doubly diagonally dominant matrices [13], $\Sigma$-strictly diagonally dominant matrices [14], $\alpha_1$- and $\alpha_2$-matrices [15], Nekrasov matrices [16], weakly chained diagonally dominant matrices [17], QN- (quasi-Nekrasov) matrices [18], SDD($p$)-matrices [10], and $H$-matrices [19]. Besides, subdirect sum problems for some other structured matrices, including $B$-matrices, $B^R_\pi$-matrices, $P$-matrices, doubly non-negative matrices, completely positive matrices, and totally non-negative matrices, have also been studied; for details, see [1,20,21,22] and the references therein.
In 2009, Cvetković, Kostić, and Rauški [23] introduced a new subclass of H-matrices called S-Nekrasov matrices.
Definition 1.2. [23] Let $S$ be any nonempty proper subset of $N:=\{1,2,\dots,n\}$ and let $\bar{S}=N\setminus S$. A matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ is called an $S$-Nekrasov matrix if $|a_{ii}|>h_i^S(A)$ for all $i\in S$, and
$$\bigl(|a_{ii}|-h_i^S(A)\bigr)\bigl(|a_{jj}|-h_j^{\bar{S}}(A)\bigr)>h_i^{\bar{S}}(A)\,h_j^S(A)\quad\text{for all }i\in S,\ j\in\bar{S},$$
where $h_1^S(A)=\sum_{j\in S\setminus\{1\}}|a_{1j}|$ and
$$h_i^S(A)=\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,h_j^S(A)+\sum_{\substack{j=i+1\\ j\in S}}^{n}|a_{ij}|,\qquad i=2,3,\dots,n.\tag{1.2}$$
In particular, if $S=N$, then Definition 1.2 coincides with the definition of Nekrasov matrices [23]; that is, a matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ is called a Nekrasov matrix if $|a_{ii}|>h_i(A)$ for all $i\in N$, where $h_i(A):=h_i^N(A)$. It is worth noticing that the class of $S$-Nekrasov matrices has many potential applications in scientific computing, such as estimating the infinity norm of the inverse of $S$-Nekrasov matrices [24], estimating error bounds for linear complementarity problems [25,26,27], and identifying nonsingular $H$-tensors [28]. However, to the best of the author's knowledge, the subdirect sum problem for $S$-Nekrasov matrices remains open. In this paper, we introduce the class of $\{i_0\}$-Nekrasov matrices and prove that it is a subclass of $S$-Nekrasov matrices; we then focus on the subdirect sum problem for $\{i_0\}$-Nekrasov matrices. We provide some sufficient conditions under which the $k$-subdirect sum of an $\{i_0\}$-Nekrasov matrix and a Nekrasov matrix belongs to the class of $\{i_0\}$-Nekrasov matrices. Numerical examples are presented to illustrate the corresponding results.
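The recursion (1.2) is easy to evaluate in practice. The following Python sketch (our own function names; indices are 0-based, so $S$ is a set of indices in $\{0,\dots,n-1\}$) computes the quantities $h_i^S(A)$ and checks the Nekrasov condition, i.e., the case $S=N$:

```python
import numpy as np

def h_S(A, S):
    """Quantities h_i^S(A) from the recursion (1.2), with 0-based indices."""
    n = A.shape[0]
    h = np.zeros(n)
    # Base case (i = 1 in the paper's 1-based notation).
    h[0] = sum(abs(A[0, j]) for j in S if j != 0)
    for i in range(1, n):
        h[i] = sum(abs(A[i, j]) / abs(A[j, j]) * h[j] for j in range(i)) \
             + sum(abs(A[i, j]) for j in range(i + 1, n) if j in S)
    return h

def is_nekrasov(A):
    """A is a Nekrasov matrix iff |a_ii| > h_i(A) for all i (the case S = N)."""
    n = A.shape[0]
    h = h_S(A, set(range(n)))
    return all(abs(A[i, i]) > h[i] for i in range(n))
```

A strictly diagonally dominant matrix always passes this check, since strict diagonal dominance implies the Nekrasov condition.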
We start with some notation and definitions. For a non-zero complex number $z$, we define $\arg(z)=\{\theta : z=|z|\exp(\mathrm{i}\theta),\ -\pi<\theta\le\pi\}$. As shown in [12], if we let $C=A\oplus_k B=[c_{ij}]$, where $A=[a_{ij}]\in\mathbb{C}^{n_1\times n_1}$ and $B=[b_{ij}]\in\mathbb{C}^{n_2\times n_2}$, then
$$c_{ij}=\begin{cases}a_{ij}, & i\in S_1,\ j\in S_1\cup S_2,\\ 0, & i\in S_1,\ j\in S_3,\\ a_{ij}, & i\in S_2,\ j\in S_1,\\ a_{ij}+b_{i-t,j-t}, & i\in S_2,\ j\in S_2,\\ b_{i-t,j-t}, & i\in S_2,\ j\in S_3,\\ 0, & i\in S_3,\ j\in S_1,\\ b_{i-t,j-t}, & i\in S_3,\ j\in S_2\cup S_3,\end{cases}$$
where $t=n_1-k$ and
$$S_1=\{1,2,\dots,n_1-k\},\quad S_2=\{n_1-k+1,\dots,n_1\},\quad S_3=\{n_1+1,\dots,n\},\tag{2.1}$$
with $n=n_1+n_2-k$. Obviously, $S_1\cup S_2\cup S_3=N$.
We introduce the following subclass of $S$-Nekrasov matrices by requiring $S$ to be a singleton.
Definition 2.1. A matrix $A=[a_{ij}]\in\mathbb{C}^{n\times n}$ is called an $\{i_0\}$-Nekrasov matrix if there exists $i_0\in N$ such that $|a_{i_0,i_0}|>\eta_{i_0}(A)$ and, for all $j\in N\setminus\{i_0\}$,
$$\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\bigl(|a_{jj}|-h_j(A)+\eta_j(A)\bigr)>\bigl(h_{i_0}(A)-\eta_{i_0}(A)\bigr)\cdot\eta_j(A),$$
where $\eta_i(A)=0$ for all $i\in N$ if $i_0=1$; otherwise, $\eta_1(A)=|a_{1,i_0}|$ and
$$\eta_i(A)=\begin{cases}\displaystyle\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,\eta_j(A)+|a_{i,i_0}|, & i=2,\dots,i_0-1,\\[2mm]\displaystyle\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,\eta_j(A), & i=i_0,i_0+1,\dots,n.\end{cases}\tag{2.2}$$
Remark 2.1. (i) If $A$ is an $\{i_0\}$-Nekrasov matrix, then $A$ is an $S$-Nekrasov matrix for $S=\{i_0\}$. In fact, using the recursive relations (1.2) and (2.2), it follows that $h_{i_0}^S(A)=0=\eta_{i_0}(A)$ if $i_0=1$; otherwise, $h_1^S(A)=\eta_1(A)$ and
$$h_i^S(A)=\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,h_j^S(A)+\sum_{\substack{j=i+1\\ j\in S}}^{n}|a_{ij}|=\begin{cases}\displaystyle\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,\eta_j(A)+|a_{i,i_0}|, & i=2,\dots,i_0-1,\\[2mm]\displaystyle\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,\eta_j(A), & i=i_0,i_0+1,\dots,n,\end{cases}\;=\eta_i(A).$$
In addition, $h_i^{\bar{S}}(A)=h_i(A)-\eta_i(A)$ follows from the fact that $h_i(A)=h_i^S(A)+h_i^{\bar{S}}(A)$ for each $i\in N$. These imply that an $\{i_0\}$-Nekrasov matrix is an $S$-Nekrasov matrix for $S=\{i_0\}$.
(ii) Since a Nekrasov matrix is an $S$-Nekrasov matrix for any $S$, it follows that a Nekrasov matrix is an $\{i_0\}$-Nekrasov matrix.
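Definition 2.1 can be checked mechanically: compute the Nekrasov quantities $h_i(A)$, the quantities $\eta_i(A)$ from (2.2), and test the required inequalities. A Python sketch (our own function names; the index $i_0$ is 0-based):

```python
import numpy as np

def nekrasov_h(A):
    """Nekrasov quantities h_i(A) (the case S = N of (1.2)), 0-based."""
    n = A.shape[0]
    h = np.zeros(n)
    h[0] = sum(abs(A[0, j]) for j in range(1, n))
    for i in range(1, n):
        h[i] = sum(abs(A[i, j]) / abs(A[j, j]) * h[j] for j in range(i)) \
             + sum(abs(A[i, j]) for j in range(i + 1, n))
    return h

def eta(A, i0):
    """Quantities eta_i(A) from (2.2) for a 0-based index i0."""
    n = A.shape[0]
    e = np.zeros(n)
    if i0 == 0:                # i0 = 1 in the paper: eta_i(A) = 0 for all i
        return e
    e[0] = abs(A[0, i0])
    for i in range(1, n):
        e[i] = sum(abs(A[i, j]) / abs(A[j, j]) * e[j] for j in range(i))
        if i < i0:             # the extra |a_{i,i0}| term appears only for i < i0
            e[i] += abs(A[i, i0])
    return e

def is_i0_nekrasov(A, i0):
    """Check Definition 2.1 for the given (0-based) index i0."""
    n = A.shape[0]
    h, e = nekrasov_h(A), eta(A, i0)
    if not abs(A[i0, i0]) > e[i0]:
        return False
    lhs = abs(A[i0, i0]) - e[i0]
    return all(lhs * (abs(A[j, j]) - h[j] + e[j]) > (h[i0] - e[i0]) * e[j]
               for j in range(n) if j != i0)
```

Consistently with Remark 2.1(ii), a strictly diagonally dominant (hence Nekrasov) matrix passes this check for every choice of $i_0$.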
The following example shows that the $k$-subdirect sum of two $\{i_0\}$-Nekrasov matrices may not be an $\{i_0\}$-Nekrasov matrix in general.
Example 2.1. Consider the $\{i_0\}$-Nekrasov matrices $A$ and $B$ for $i_0=2$, where
[the matrices $A$ and $B$ are displayed as an image in the original and are not reproduced here]
Then the 3-subdirect sum $C=A\oplus_3 B$ gives
[the matrix $C$ is displayed as an image in the original and is not reproduced here]
It is easy to check that $C=A\oplus_3 B$ is not an $\{i_0\}$-Nekrasov matrix for any index $i_0$. This motivates us to seek simple conditions ensuring that $C=A\oplus_k B$ is an $\{i_0\}$-Nekrasov matrix for any $k$. First, we provide conditions under which $A\oplus_1 B$ is an $\{i_0\}$-Nekrasov matrix, where $A$ is an $\{i_0\}$-Nekrasov matrix and $B$ is a Nekrasov matrix.
Theorem 2.1. Let $A=[a_{ij}]\in\mathbb{C}^{n_1\times n_1}$ be an $\{i_0\}$-Nekrasov matrix with $i_0\in S_1$ and let $B=[b_{ij}]\in\mathbb{C}^{n_2\times n_2}$ be a Nekrasov matrix, partitioned as in (1.1), which defines the sets $S_1$, $S_2$ and $S_3$ as in (2.1), and let $t=n_1-1$. If $\arg(a_{ii})=\arg(b_{i-t,i-t})$ for all $i\in S_2$ and $B_{21}=0$, then the 1-subdirect sum $C=A\oplus_1 B$ is an $\{i_0\}$-Nekrasov matrix.
Proof. Since $A$ is an $\{i_0\}$-Nekrasov matrix and $i_0\in S_1$, it follows that if $i_0=1$, then $|c_{11}|=|a_{11}|>\eta_1(A)=0=\eta_1(C)$; otherwise,
$$|c_{i_0,i_0}|=|a_{i_0,i_0}|>\eta_{i_0}(A)=\sum_{j=1}^{i_0-1}\frac{|a_{i_0,j}|}{|a_{jj}|}\,\eta_j(A)=\sum_{j=1}^{i_0-1}\frac{|c_{i_0,j}|}{|c_{jj}|}\,\eta_j(C)=\eta_{i_0}(C).\tag{2.3}$$
Case 1: For $i\in S_1$, we have
$$h_i(C)=\sum_{j=1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=i+1}^{n}|c_{ij}|=\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,h_j(A)+\sum_{j=i+1}^{n_1}|a_{ij}|=h_i(A),$$
and if $i_0=1$, then $\eta_i(C)=0=\eta_i(A)$, while if $i_0\ne 1$, then $\eta_1(C)=|c_{1,i_0}|=|a_{1,i_0}|=\eta_1(A)$ and
$$\eta_i(C)=\begin{cases}\displaystyle\sum_{j=1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C)+|c_{i,i_0}|, & i=2,\dots,i_0-1,\\[2mm]\displaystyle\sum_{j=1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C), & i=i_0,\dots,n_1-1,\end{cases}=\begin{cases}\displaystyle\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,\eta_j(A)+|a_{i,i_0}|, & i=2,\dots,i_0-1,\\[2mm]\displaystyle\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,\eta_j(A), & i=i_0,\dots,n_1-1,\end{cases}=\eta_i(A).$$
Hence, for all $j\in S_1\setminus\{i_0\}$,
$$\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\bigl(|c_{jj}|-h_j(C)+\eta_j(C)\bigr)=\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\bigl(|a_{jj}|-h_j(A)+\eta_j(A)\bigr)>\bigl(h_{i_0}(A)-\eta_{i_0}(A)\bigr)\cdot\eta_j(A)=\bigl(h_{i_0}(C)-\eta_{i_0}(C)\bigr)\cdot\eta_j(C).$$
Case 2: For $i\in S_2=\{n_1\}$, we have
$$h_{n_1}(C)=\sum_{j=1}^{n_1-1}\frac{|c_{n_1,j}|}{|c_{jj}|}\,h_j(C)+\sum_{j=n_1+1}^{n}|c_{n_1,j}|=\sum_{j=1}^{n_1-1}\frac{|a_{n_1,j}|}{|a_{jj}|}\,h_j(A)+\sum_{j=n_1+1}^{n}|b_{n_1-t,j-t}|=h_{n_1}(A)+h_{n_1-t}(B),$$
and
$$\eta_{n_1}(C)=\sum_{j=1}^{n_1-1}\frac{|c_{n_1,j}|}{|c_{jj}|}\,\eta_j(C)=\sum_{j=1}^{n_1-1}\frac{|a_{n_1,j}|}{|a_{jj}|}\,\eta_j(A)=\eta_{n_1}(A).$$
So,
$$\begin{aligned}
\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\bigl(|c_{n_1,n_1}|-h_{n_1}(C)+\eta_{n_1}(C)\bigr)
&=\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\bigl(|a_{n_1,n_1}+b_{11}|-(h_{n_1}(A)+h_1(B))+\eta_{n_1}(A)\bigr)\\
&=\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\bigl(|a_{n_1,n_1}|-h_{n_1}(A)+|b_{11}|-h_1(B)+\eta_{n_1}(A)\bigr)\\
&>\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\bigl(|a_{n_1,n_1}|-h_{n_1}(A)+\eta_{n_1}(A)\bigr)\\
&>\bigl(h_{i_0}(A)-\eta_{i_0}(A)\bigr)\cdot\eta_{n_1}(A)=\bigl(h_{i_0}(C)-\eta_{i_0}(C)\bigr)\cdot\eta_{n_1}(C).
\end{aligned}$$
Case 3: For $i\in S_3=\{n_1+1,\dots,n\}$, we have
$$\begin{aligned}
h_i(C)&=\sum_{j=1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=i+1}^{n}|c_{ij}|\\
&=\sum_{j=1}^{n_1-1}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\frac{|c_{i,n_1}|}{|c_{n_1,n_1}|}\,h_{n_1}(C)+\sum_{j=n_1+1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=i+1}^{n}|c_{ij}|\\
&=\frac{|b_{i-t,n_1-t}|}{|a_{n_1,n_1}+b_{n_1-t,n_1-t}|}\bigl(h_{n_1}(A)+h_{n_1-t}(B)\bigr)+\sum_{j=n_1+1}^{i-1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,h_{j-t}(B)+\sum_{j=i+1}^{n}|b_{i-t,j-t}|\\
&=\frac{|b_{i-t,n_1-t}|}{|b_{n_1-t,n_1-t}|}\,h_{n_1-t}(B)+\sum_{j=n_1+1}^{i-1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,h_{j-t}(B)+\sum_{j=i+1}^{n}|b_{i-t,j-t}|\quad(\text{by }B_{21}=0)\\
&=h_{i-t}(B).
\end{aligned}$$
It follows from $B_{21}=0$ that
$$\eta_{n_1+1}(C)=\sum_{j=1}^{n_1}\frac{|c_{n_1+1,j}|}{|c_{jj}|}\,\eta_j(C)=\sum_{j=1}^{n_1-1}\frac{|c_{n_1+1,j}|}{|c_{jj}|}\,\eta_j(C)+\frac{|c_{n_1+1,n_1}|}{|c_{n_1,n_1}|}\,\eta_{n_1}(C)=\frac{|b_{n_1+1-t,n_1-t}|}{|b_{n_1-t,n_1-t}|}\,\eta_{n_1}(C)=0,$$
and for each $i=n_1+2,\dots,n$,
$$\eta_i(C)=\sum_{j=1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C)=\frac{|c_{i,n_1}|}{|c_{n_1,n_1}|}\,\eta_{n_1}(C)+\sum_{j=n_1+1}^{i-1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,\eta_j(C)=0.$$
So, for all $j\in S_3$,
$$\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\bigl(|c_{jj}|-h_j(C)+\eta_j(C)\bigr)\ge\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\bigl(|b_{j-t,j-t}|-h_{j-t}(B)\bigr)>0=\bigl(h_{i_0}(C)-\eta_{i_0}(C)\bigr)\cdot\eta_j(C).$$
The conclusion follows from (2.3) and Cases 1-3.
Next, we give some conditions such that $C=A\oplus_k B$ is an $\{i_0\}$-Nekrasov matrix for any $k$, where $A$ is an $\{i_0\}$-Nekrasov matrix and $B$ is a Nekrasov matrix. We first give a lemma that will be used in the sequel.
Lemma 2.1. Let $A=[a_{ij}]\in\mathbb{C}^{n_1\times n_1}$ be an $\{i_0\}$-Nekrasov matrix with $i_0\in S_1\cup S_2$ and let $B=[b_{ij}]\in\mathbb{C}^{n_2\times n_2}$ be a Nekrasov matrix, partitioned as in (1.1); let $k$ be an integer such that $1\le k\le\min\{n_1,n_2\}$, which defines the sets $S_1$, $S_2$ and $S_3$ as in (2.1); and let $t=n_1-k$ and $C=A\oplus_k B$. If $\arg(a_{ii})=\arg(b_{i-t,i-t})$ for all $i\in S_2$, $B_{12}=0$, and $|a_{ij}+b_{i-t,j-t}|\le|a_{ij}|$ for $i\ne j$, $i,j\in S_2$, then
$$h_{i_0}(C)-\eta_{i_0}(C)\le h_{i_0}(A)-\eta_{i_0}(A).$$
Proof. If $i_0\in S_1$, then it follows from the proof of Case 1 in Theorem 2.1 that $h_i(C)-\eta_i(C)=h_i(A)-\eta_i(A)$ for all $i\in S_1$, and thus $h_{i_0}(C)-\eta_{i_0}(C)=h_{i_0}(A)-\eta_{i_0}(A)$.
If $i_0\in S_2=\{n_1-k+1,\dots,n_1\}$, then from the assumptions and $t=n_1-k$ we have
$$\begin{aligned}
h_{t+1}(C)-\eta_{t+1}(C)&=\begin{cases}\displaystyle\sum_{j=1}^{t}\frac{|c_{t+1,j}|}{|c_{jj}|}\bigl(h_j(C)-\eta_j(C)\bigr)+\sum_{j=t+2}^{n}|c_{t+1,j}|-|c_{t+1,i_0}|, & t+1<i_0,\\[2mm]\displaystyle\sum_{j=1}^{t}\frac{|c_{t+1,j}|}{|c_{jj}|}\bigl(h_j(C)-\eta_j(C)\bigr)+\sum_{j=t+2}^{n}|c_{t+1,j}|, & t+1=i_0,\end{cases}\\
&\le\begin{cases}\displaystyle\sum_{j=1}^{t}\frac{|a_{t+1,j}|}{|a_{jj}|}\bigl(h_j(A)-\eta_j(A)\bigr)+\sum_{j=t+2}^{n_1}|a_{t+1,j}|-|a_{t+1,i_0}|, & t+1<i_0,\\[2mm]\displaystyle\sum_{j=1}^{t}\frac{|a_{t+1,j}|}{|a_{jj}|}\bigl(h_j(A)-\eta_j(A)\bigr)+\sum_{j=t+2}^{n_1}|a_{t+1,j}|, & t+1=i_0,\end{cases}\\
&=h_{t+1}(A)-\eta_{t+1}(A).
\end{aligned}$$
Suppose that $h_i(C)-\eta_i(C)\le h_i(A)-\eta_i(A)$ for all $i<t+m$, where $m$ is a positive integer with $1<m\le k$. We next prove that $h_{t+m}(C)-\eta_{t+m}(C)\le h_{t+m}(A)-\eta_{t+m}(A)$. Since
$$\begin{aligned}
h_{t+m}(C)-\eta_{t+m}(C)&=\begin{cases}\displaystyle\sum_{j<t+m}\frac{|c_{t+m,j}|}{|c_{jj}|}\bigl(h_j(C)-\eta_j(C)\bigr)+\sum_{j=t+m+1}^{n}|c_{t+m,j}|-|c_{t+m,i_0}|, & t+m<i_0,\\[2mm]\displaystyle\sum_{j<t+m}\frac{|c_{t+m,j}|}{|c_{jj}|}\bigl(h_j(C)-\eta_j(C)\bigr)+\sum_{j=t+m+1}^{n}|c_{t+m,j}|, & t+m\ge i_0,\end{cases}\\
&\le\begin{cases}\displaystyle\sum_{j<t+m}\frac{|a_{t+m,j}|}{|a_{jj}|}\bigl(h_j(A)-\eta_j(A)\bigr)+\sum_{j=t+m+1}^{n_1}|a_{t+m,j}|-|a_{t+m,i_0}|, & t+m<i_0,\\[2mm]\displaystyle\sum_{j<t+m}\frac{|a_{t+m,j}|}{|a_{jj}|}\bigl(h_j(A)-\eta_j(A)\bigr)+\sum_{j=t+m+1}^{n_1}|a_{t+m,j}|, & t+m\ge i_0,\end{cases}\\
&=h_{t+m}(A)-\eta_{t+m}(A),
\end{aligned}$$
it follows that $h_i(C)-\eta_i(C)\le h_i(A)-\eta_i(A)$ for all $i\in S_2$. Hence, $h_{i_0}(C)-\eta_{i_0}(C)\le h_{i_0}(A)-\eta_{i_0}(A)$. The proof is complete.
Theorem 2.2. Let $A=[a_{ij}]\in\mathbb{C}^{n_1\times n_1}$ be an $\{i_0\}$-Nekrasov matrix with $i_0\in S_1$ and let $B=[b_{ij}]\in\mathbb{C}^{n_2\times n_2}$ be a Nekrasov matrix, partitioned as in (1.1); let $k$ be an integer such that $1\le k\le\min\{n_1,n_2\}$, which defines the sets $S_1$, $S_2$ and $S_3$ as in (2.1), and let $t=n_1-k$. If $\arg(a_{ii})=\arg(b_{i-t,i-t})$ for all $i\in S_2$, $A_{21}=0$, and $|a_{ij}+b_{i-t,j-t}|\le|b_{i-t,j-t}|$ for $i\ne j$, $i,j\in S_2$, then the $k$-subdirect sum $C=A\oplus_k B$ is an $\{i_0\}$-Nekrasov matrix.
Proof. Since $A$ is an $\{i_0\}$-Nekrasov matrix and $i_0\in S_1$, it is obvious that $|c_{i_0,i_0}|>\eta_{i_0}(C)$.
Case 1: For $i\in S_1$, since $h_i(C)=h_i(A)$ and $\eta_i(C)=\eta_i(A)$, it holds that for all $j\in S_1\setminus\{i_0\}$,
$$\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\bigl(|c_{jj}|-h_j(C)+\eta_j(C)\bigr)>\bigl(h_{i_0}(C)-\eta_{i_0}(C)\bigr)\cdot\eta_j(C).$$
Case 2: For $i\in S_2$, by the assumptions, we have
$$h_{n_1-k+1}(C)=\sum_{j=1}^{n_1-k}\frac{|c_{n_1-k+1,j}|}{|c_{jj}|}\,h_j(C)+\sum_{j=n_1-k+2}^{n}|c_{n_1-k+1,j}|\le\sum_{j=n_1-k+2}^{n}|b_{n_1-k+1-t,j-t}|=h_1(B).$$
Similarly, for $i=n_1-k+2,\dots,n_1$,
$$h_i(C)=\sum_{j=1}^{n_1-k}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=n_1-k+1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=i+1}^{n}|c_{ij}|\le\sum_{j=n_1-k+1}^{i-1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,h_{j-t}(B)+\sum_{j=i+1}^{n}|b_{i-t,j-t}|=h_{i-t}(B).$$
And for $i=n_1-k+1$, by $A_{21}=0$,
$$\eta_{n_1-k+1}(C)=\sum_{j=1}^{n_1-k}\frac{|c_{n_1-k+1,j}|}{|c_{jj}|}\,\eta_j(C)=0,$$
implying that for all $i\in S_2$,
$$\eta_i(C)=\sum_{j=1}^{n_1-k}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C)+\sum_{j=n_1-k+1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C)=0.$$
So, for all $j\in S_2$,
$$\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\bigl(|c_{jj}|-h_j(C)+\eta_j(C)\bigr)>\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\bigl(|b_{j-t,j-t}|-h_{j-t}(B)\bigr)>0=\bigl(h_{i_0}(C)-\eta_{i_0}(C)\bigr)\cdot\eta_j(C).\tag{2.4}$$
Analogously to the proof of Case 2, we can obtain that (2.4) also holds for all $j\in S_3$. Combining this with Cases 1 and 2, the conclusion follows.
Example 2.2. Consider the following matrices:
[the matrices $A$ and $B$ are displayed as an image in the original and are not reproduced here]
It is easy to verify that $A$ is an $\{i_0\}$-Nekrasov matrix for $i_0\in S_1=\{1,2\}$ and that $B$ is a Nekrasov matrix, and they satisfy the hypotheses of Theorem 2.2. So, by Theorem 2.2, $A\oplus_2 B$ is an $\{i_0\}$-Nekrasov matrix for $i_0\in S_1=\{1,2\}$. In fact, let $C=A\oplus_2 B$. Then,
[the matrix $C$ is displayed as an image in the original and is not reproduced here]
and from Definition 2.1, one can verify that $C$ is an $\{i_0\}$-Nekrasov matrix for $i_0\in S_1=\{1,2\}$.
Theorem 2.3. Let $A=[a_{ij}]\in\mathbb{C}^{n_1\times n_1}$ be an $\{i_0\}$-Nekrasov matrix with $i_0\in S_1\cup S_2$ and let $B=[b_{ij}]\in\mathbb{C}^{n_2\times n_2}$ be a Nekrasov matrix, partitioned as in (1.1); let $k$ be an integer such that $1\le k\le\min\{n_1,n_2\}$, which defines the sets $S_1$, $S_2$ and $S_3$ as in (2.1), and let $t=n_1-k$. If $\arg(a_{ii})=\arg(b_{i-t,i-t})$ for all $i\in S_2$, $B_{12}=B_{21}=0$, and $|a_{ij}+b_{i-t,j-t}|\le|a_{ij}|$ for $i\ne j$, $i,j\in S_2$, then the $k$-subdirect sum $C=A\oplus_k B$ is an $\{i_0\}$-Nekrasov matrix.
Proof. Since $A$ is an $\{i_0\}$-Nekrasov matrix, it follows that if $i_0\in S_1$, then $|c_{i_0,i_0}|>\eta_{i_0}(C)$, and if $i_0\in S_2$, then
$$\eta_{n_1-k+1}(C)=\begin{cases}\displaystyle\sum_{j=1}^{n_1-k}\frac{|c_{n_1-k+1,j}|}{|c_{jj}|}\,\eta_j(C)+|c_{n_1-k+1,i_0}|, & n_1-k+1\ne i_0,\\[2mm]\displaystyle\sum_{j=1}^{n_1-k}\frac{|c_{n_1-k+1,j}|}{|c_{jj}|}\,\eta_j(C), & n_1-k+1=i_0,\end{cases}\le\begin{cases}\displaystyle\sum_{j=1}^{n_1-k}\frac{|a_{n_1-k+1,j}|}{|a_{jj}|}\,\eta_j(A)+|a_{n_1-k+1,i_0}|, & n_1-k+1\ne i_0,\\[2mm]\displaystyle\sum_{j=1}^{n_1-k}\frac{|a_{n_1-k+1,j}|}{|a_{jj}|}\,\eta_j(A), & n_1-k+1=i_0,\end{cases}=\eta_{n_1-k+1}(A).$$
Similarly, we can obtain that $\eta_j(C)\le\eta_j(A)$ for all $j\in\{n_1-k+2,\dots,n_1\}$. Therefore,
$$\eta_{i_0}(C)=\sum_{j=1}^{i_0-1}\frac{|c_{i_0,j}|}{|c_{jj}|}\,\eta_j(C)=\sum_{j=1}^{n_1-k}\frac{|c_{i_0,j}|}{|c_{jj}|}\,\eta_j(C)+\sum_{j=n_1-k+1}^{i_0-1}\frac{|c_{i_0,j}|}{|c_{jj}|}\,\eta_j(C)\le\sum_{j=1}^{n_1-k}\frac{|a_{i_0,j}|}{|a_{jj}|}\,\eta_j(A)+\sum_{j=n_1-k+1}^{i_0-1}\frac{|a_{i_0,j}|}{|a_{jj}|}\,\eta_j(A)=\eta_{i_0}(A),$$
and
$$|c_{i_0,i_0}|=|a_{i_0,i_0}|+|b_{i_0-t,i_0-t}|>|a_{i_0,i_0}|>\eta_{i_0}(A)\ge\eta_{i_0}(C).$$
Case 1: For $i\in S_1$, proceeding as in the proof of Case 1 in Theorem 2.1, we have $h_i(C)=h_i(A)$ and $\eta_i(C)=\eta_i(A)$, which implies that for all $j\in S_1\setminus\{i_0\}$,
$$\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\bigl(|c_{jj}|-h_j(C)+\eta_j(C)\bigr)>\bigl(h_{i_0}(C)-\eta_{i_0}(C)\bigr)\cdot\eta_j(C).$$
Case 2: For $i\in S_2$, by the assumptions, we have
$$h_i(C)=\sum_{j=1}^{n_1-k}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=n_1-k+1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=i+1}^{n_1}|c_{ij}|\le\sum_{j=1}^{n_1-k}\frac{|a_{ij}|}{|a_{jj}|}\,h_j(A)+\sum_{j=n_1-k+1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,h_j(A)+\sum_{j=i+1}^{n_1}|a_{ij}|=h_i(A),$$
and
$$\eta_i(C)=\begin{cases}\displaystyle\sum_{j=1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C)+|c_{i,i_0}|, & i=n_1-k+1,\dots,i_0-1,\\[2mm]\displaystyle\sum_{j=1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C), & i=i_0,\dots,n_1,\end{cases}\le\begin{cases}\displaystyle\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,\eta_j(A)+|a_{i,i_0}|, & i=n_1-k+1,\dots,i_0-1,\\[2mm]\displaystyle\sum_{j=1}^{i-1}\frac{|a_{ij}|}{|a_{jj}|}\,\eta_j(A), & i=i_0,\dots,n_1,\end{cases}=\eta_i(A).$$
Hence, by Lemma 2.1, it follows that for all $j\in S_2\setminus\{i_0\}$,
$$\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\left(\frac{|c_{jj}|-h_j(C)}{\eta_j(C)}+1\right)>\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\left(\frac{|a_{jj}|-h_j(A)}{\eta_j(A)}+1\right)>h_{i_0}(A)-\eta_{i_0}(A)\ge h_{i_0}(C)-\eta_{i_0}(C).$$
Case 3: For $i\in S_3$, similarly to the proof of Case 3 in Theorem 2.1, we can show that for all $i\in S_3$,
$$h_i(C)=h_{i-t}(B)\quad\text{and}\quad\eta_i(C)=0,$$
which implies that for all $j\in S_3$,
$$\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\bigl(|c_{jj}|-h_j(C)+\eta_j(C)\bigr)>\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\bigl(|b_{j-t,j-t}|-h_{j-t}(B)\bigr)>0=\bigl(h_{i_0}(C)-\eta_{i_0}(C)\bigr)\cdot\eta_j(C).$$
From the above three cases, the conclusion follows.
Example 2.3. Consider the following matrices:
[the matrices $A$ and $B$ are displayed as an image in the original and are not reproduced here]
where $A$ is an $\{i_0\}$-Nekrasov matrix for $i_0\in S_1\cup S_2=\{1,2,3,4\}$ and $B$ is a Nekrasov matrix, and they satisfy the hypotheses of Theorem 2.3. Then, from Theorem 2.3, we get that the 3-subdirect sum $C=A\oplus_3 B$ is also an $\{i_0\}$-Nekrasov matrix for $i_0\in S_1\cup S_2=\{1,2,3,4\}$. Actually, by Definition 2.1, one can check that
[the matrix $C=A\oplus_3 B$, displayed as an image in the original,]
is an $\{i_0\}$-Nekrasov matrix for $i_0\in S_1\cup S_2=\{1,2,3,4\}$.
Theorem 2.4. Let $A=[a_{ij}]\in\mathbb{C}^{n_1\times n_1}$ be an $\{i_0\}$-Nekrasov matrix for some $i_0\in S_2$ and let $B=[b_{ij}]\in\mathbb{C}^{n_2\times n_2}$ be a Nekrasov matrix, partitioned as in (1.1); let $k$ be an integer such that $1\le k\le\min\{n_1,n_2\}$, which defines the sets $S_1$, $S_2$ and $S_3$ as in (2.1), and let $t=n_1-k$. If
(i) $\arg(a_{ii})=\arg(b_{i-t,i-t})$ for all $i\in S_2$,
(ii) $B_{12}=0$, $h_i(A)\le h_{i-t}(B)$, $\eta_i(A)\le\eta_{i-t}(B)$, and $|a_{ij}+b_{i-t,j-t}|\le|a_{ij}|$ for $i\ne j$, $i,j\in S_2$,
(iii) $\bigl(h_{i_0-t}(B)-\eta_{i_0-t}(B)\bigr)\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\ge\bigl(|b_{i_0-t,i_0-t}|-\eta_{i_0-t}(B)\bigr)\bigl(h_{i_0}(A)-\eta_{i_0}(A)\bigr)$,
then the $k$-subdirect sum $C=A\oplus_k B$ is an $\{i_0\}$-Nekrasov matrix.
Proof. Since $A$ is an $\{i_0\}$-Nekrasov matrix and $i_0\in S_2$, it follows from the proof of Case 2 in Theorem 2.3 that $\eta_i(C)\le\eta_i(A)$ for all $i\in S_2$, which leads to
$$|c_{i_0,i_0}|>|a_{i_0,i_0}|>\eta_{i_0}(A)\ge\eta_{i_0}(C).$$
Case 1: For $i\in S_1$, it is obvious that $h_i(C)=h_i(A)$ and $\eta_i(C)=\eta_i(A)$. Hence, for all $j\in S_1$,
$$\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\bigl(|c_{jj}|-h_j(C)+\eta_j(C)\bigr)>\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\bigl(|a_{jj}|-h_j(A)+\eta_j(A)\bigr)>\bigl(h_{i_0}(A)-\eta_{i_0}(A)\bigr)\cdot\eta_j(A)\ge\bigl(h_{i_0}(C)-\eta_{i_0}(C)\bigr)\cdot\eta_j(C).\tag{2.5}$$
Case 2: For $i\in S_2$, it follows from $B_{12}=0$ and $|a_{ij}+b_{i-t,j-t}|\le|a_{ij}|$ for $i\ne j$, $i,j\in S_2$, that $h_i(C)\le h_i(A)$, and thus (2.5) also holds for all $j\in S_2\setminus\{i_0\}$.
Case 3: For $i=n_1+1\in S_3$, by the assumptions, it follows that
$$\begin{aligned}
h_{n_1+1}(C)&=\sum_{j=1}^{n_1-k}\frac{|c_{n_1+1,j}|}{|c_{jj}|}\,h_j(C)+\sum_{j=n_1-k+1}^{n_1}\frac{|c_{n_1+1,j}|}{|c_{jj}|}\,h_j(C)+\sum_{j=n_1+2}^{n}|c_{n_1+1,j}|\\
&\le\sum_{j=n_1-k+1}^{n_1}\frac{|b_{n_1+1-t,j-t}|}{|c_{jj}|}\,h_j(A)+\sum_{j=n_1+2}^{n}|b_{n_1+1-t,j-t}|\\
&\le\sum_{j=n_1-k+1}^{n_1}\frac{|b_{n_1+1-t,j-t}|}{|b_{j-t,j-t}|}\,h_{j-t}(B)+\sum_{j=n_1+2}^{n}|b_{n_1+1-t,j-t}|=h_{n_1+1-t}(B),
\end{aligned}$$
which recursively yields that for $i=n_1+2,\dots,n$,
$$\begin{aligned}
h_i(C)&=\sum_{j=1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=i+1}^{n}|c_{ij}|\\
&=\sum_{j=1}^{n_1-k}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=n_1-k+1}^{n_1}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=n_1+1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,h_j(C)+\sum_{j=i+1}^{n}|c_{ij}|\\
&\le\sum_{j=n_1-k+1}^{n_1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,h_j(A)+\sum_{j=n_1+1}^{i-1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,h_{j-t}(B)+\sum_{j=i+1}^{n}|b_{i-t,j-t}|\\
&\le\sum_{j=n_1-k+1}^{i-1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,h_{j-t}(B)+\sum_{j=i+1}^{n}|b_{i-t,j-t}|=h_{i-t}(B).
\end{aligned}$$
Similarly, we have
$$\eta_{n_1+1}(C)=\sum_{j=1}^{n_1-k}\frac{|c_{n_1+1,j}|}{|c_{jj}|}\,\eta_j(C)+\sum_{j=n_1-k+1}^{n_1}\frac{|c_{n_1+1,j}|}{|c_{jj}|}\,\eta_j(C)=\sum_{j=n_1-k+1}^{n_1}\frac{|b_{n_1+1-t,j-t}|}{|a_{jj}+b_{j-t,j-t}|}\,\eta_j(A)\le\sum_{j=n_1-k+1}^{n_1}\frac{|b_{n_1+1-t,j-t}|}{|b_{j-t,j-t}|}\,\eta_{j-t}(B)=\eta_{n_1+1-t}(B),$$
and for all $i=n_1+2,\dots,n$,
$$\begin{aligned}
\eta_i(C)&=\sum_{j=1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C)=\sum_{j=1}^{n_1-k}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C)+\sum_{j=n_1-k+1}^{n_1}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C)+\sum_{j=n_1+1}^{i-1}\frac{|c_{ij}|}{|c_{jj}|}\,\eta_j(C)\\
&\le\sum_{j=n_1-k+1}^{n_1}\frac{|b_{i-t,j-t}|}{|a_{jj}+b_{j-t,j-t}|}\,\eta_j(A)+\sum_{j=n_1+1}^{i-1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,\eta_{j-t}(B)\\
&\le\sum_{j=n_1-k+1}^{n_1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,\eta_{j-t}(B)+\sum_{j=n_1+1}^{i-1}\frac{|b_{i-t,j-t}|}{|b_{j-t,j-t}|}\,\eta_{j-t}(B)=\eta_{i-t}(B).
\end{aligned}$$
Hence, for all $j\in S_3$,
$$\begin{aligned}
\bigl(|c_{i_0,i_0}|-\eta_{i_0}(C)\bigr)\cdot\left(\frac{|c_{jj}|-h_j(C)}{\eta_j(C)}+1\right)&>\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\left(\frac{|b_{j-t,j-t}|-h_{j-t}(B)}{\eta_j(C)}+1\right)\\
&\ge\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\left(\frac{|b_{j-t,j-t}|-h_{j-t}(B)}{\eta_{j-t}(B)}+1\right)\\
&>\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\frac{h_{i_0-t}(B)-\eta_{i_0-t}(B)}{|b_{i_0-t,i_0-t}|-\eta_{i_0-t}(B)}\\
&\ge\bigl(|a_{i_0,i_0}|-\eta_{i_0}(A)\bigr)\cdot\frac{h_{i_0}(A)-\eta_{i_0}(A)}{|a_{i_0,i_0}|-\eta_{i_0}(A)}=h_{i_0}(A)-\eta_{i_0}(A)\ge h_{i_0}(C)-\eta_{i_0}(C).
\end{aligned}$$
From Cases 1, 2 and 3, we can conclude that $C=A\oplus_k B$ is an $\{i_0\}$-Nekrasov matrix.
Example 2.4. Consider the following matrices:
[the matrices $A$ and $B$ are displayed as an image in the original and are not reproduced here]
where $A$ is an $\{i_0\}$-Nekrasov matrix for $i_0=2$ and $B$ is a Nekrasov matrix. By computation, we have $h_1(A)=3$, $h_2(A)=5$, $h_3(A)=1.25$, $h_4(A)=0.75$, $h_1(B)=5$, $h_2(B)=1.5556$, $h_3(B)=0.8667$, $h_4(B)=0$, $\eta_1(A)=1$, $\eta_2(A)=0.3333$, $\eta_3(A)=0.0167$, $\eta_4(A)=0.1767$, $\eta_1(B)=4$, $\eta_2(B)=0.4444$, $\eta_3(B)=0.5333$, and $\eta_4(B)=0$, which satisfy the hypotheses of Theorem 2.4. Hence, from Theorem 2.4, $A\oplus_3 B$ is also an $\{i_0\}$-Nekrasov matrix for $i_0=2$. In fact, let $C=A\oplus_3 B$. Then
[the matrix $C$ is displayed as an image in the original and is not reproduced here]
and one can verify from Definition 2.1 that $C$ is an $\{i_0\}$-Nekrasov matrix for $i_0=2$.
In this paper, for an $\{i_0\}$-Nekrasov matrix $A$, where the class of $\{i_0\}$-Nekrasov matrices is a subclass of $S$-Nekrasov matrices, and a Nekrasov matrix $B$, we provide some sufficient conditions such that the $k$-subdirect sum $A\oplus_k B$ lies in the class of $\{i_0\}$-Nekrasov matrices. Numerical examples are included to illustrate the advantages of the given conditions. The results obtained here have potential applications in some scientific computing problems, such as matrix completion problems and the convergence of iterative methods for large sparse linear systems. For instance, consider the large-scale linear system
$$Cx=b.\tag{3.1}$$
Note that if the coefficient matrix $C$ in (3.1) is an $H$-matrix, then the Jacobi and Gauss-Seidel iterative methods associated with (3.1) are both convergent [29], but it is not easy in general to determine whether $C$ is an $H$-matrix. However, if $C$ is exactly the subdirect sum of matrices $A$ and $B$, i.e., $C=A\oplus_k B$, where $A$ and $B$ satisfy the sufficient conditions given here, then $C$ is an $\{i_0\}$-Nekrasov matrix, and thus an $H$-matrix.
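As a hypothetical numerical illustration of this remark (the matrices below are our own small example, not taken from the paper), the sketch builds $C$ as a 1-subdirect sum of two strictly diagonally dominant matrices; the resulting $C$ is itself strictly diagonally dominant, hence an $H$-matrix, and the Jacobi iteration matrix indeed has spectral radius below 1:

```python
import numpy as np

# Two strictly diagonally dominant matrices (hypothetical data); such
# matrices are in particular Nekrasov matrices and H-matrices.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])
B = np.array([[5.0, 2.0],
              [1.0, 5.0]])

# 1-subdirect sum C = A (+)_1 B, built blockwise as in Definition 1.1.
n1, n2, k = 2, 2, 1
n = n1 + n2 - k
C = np.zeros((n, n))
C[:n1, :n1] += A
C[n1 - k:, n1 - k:] += B

# Jacobi splitting: iteration matrix M = D^{-1}(D - C) with D = diag(C).
D = np.diag(np.diag(C))
M = np.linalg.solve(D, D - C)
rho = max(abs(np.linalg.eigvals(M)))   # spectral radius of M
print(rho < 1)                         # True: the Jacobi method converges
```

Here $C=\operatorname{diag}$-dominant with the overlapped entry $c_{22}=a_{22}+b_{11}=9$, and the computed spectral radius is about $0.27$, confirming convergence of the Jacobi iteration for this $C$.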
The author wishes to thank the three anonymous referees for their valuable suggestions to improve the paper. The research was supported by the National Natural Science Foundation of China (31600299), the Natural Science Foundations of Shaanxi province, China (2020JM-622), and the Projects of Baoji University of Arts and Sciences (ZK2017095, ZK2017021).
The author declares no conflict of interest.