
The spectral properties of tridiagonal matrices are a well-studied topic for which a vast literature is available (e.g., [1,5,16,17,19,25,27,35], among others), and even formulae for the inverses of these matrices were discussed over the last decades of the twentieth century (see [15] and references therein). Recently, taking advantage of basic properties of the Chebyshev polynomials, some authors established localization theorems for the eigenvalues of real pentadiagonal and heptadiagonal symmetric Toeplitz matrices by expressing them as the zeros of explicit rational functions [12,32]. The eigenvalues of a special kind of heptadiagonal matrices were also derived in [26] by employing other methods, namely determinant properties and recurrence relations.
In fact, the above-mentioned matrices are typical examples of a much wider class called band matrices (see [30], page 13), and the idea of having explicit formulas to compute their eigenvalues and eigenvectors, or to establish other properties, is both appealing and challenging because of their usefulness in many areas of science and engineering (see, for instance, [4,10,11,14,20,24,33]).
As a contribution to this topic, we shall obtain the eigenvalues of the following $n\times n$ heptadiagonal matrix
$\mathbf{H}_n=\begin{bmatrix} \xi & \eta & c & d & 0 & \cdots & \cdots & \cdots & \cdots & 0\\ \eta & a & b & c & d & \ddots & & & & \vdots\\ c & b & a & b & c & \ddots & \ddots & & & \\ d & c & b & a & b & \ddots & \ddots & \ddots & & \\ 0 & d & c & b & a & \ddots & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0\\ \vdots & & \ddots & \ddots & \ddots & \ddots & a & b & c & d\\ & & & \ddots & \ddots & \ddots & b & a & b & c\\ \vdots & & & & \ddots & \ddots & c & b & a & \eta\\ 0 & \cdots & \cdots & \cdots & \cdots & 0 & d & c & \eta & \xi \end{bmatrix}$ | (1.1) |
as the zeros of explicit rational functions, providing in addition upper and lower bounds for each of them that do not depend on any unknown parameter. Further, we shall compute eigenvectors of this sort of matrices in terms of the prescribed eigenvalues. To accomplish these purposes, we will obtain an orthogonal block diagonalization for the matrix (1.1) in which each block is the sum of a diagonal matrix plus dyads, i.e.,
$\mathrm{diag}(d_1,d_2,\ldots,d_\kappa)+\mathbf{u}_1\mathbf{v}_1^\top+\mathbf{u}_2\mathbf{v}_2^\top+\ldots+\mathbf{u}_m\mathbf{v}_m^\top,$ | (1.2) |
where $\mathbf{u}_j,\mathbf{v}_j$, $j=1,\ldots,m$, are $\kappa\times1$ matrices, by exploiting the modification technique introduced by Fasino in [13] for matrices of the type (1.1). This key ingredient allows us to obtain formulas for the characteristic polynomial of $\mathbf{H}_n$ on one hand, and for the inverse of $\mathbf{H}_n$ on the other (assuming, of course, its nonsingularity). With the aim of getting expressions as explicit as possible, we will use not only results concerning the secular equation of diagonal matrices perturbed by the addition of rank-one matrices, developed by Anderson in the nineties [2], but also Miller's formula of the eighties for the inverse of the sum of matrices [29]. In Section 4, applications of the established results are given, showing their potential usage.
Since the class of matrices $\mathbf{H}_n$ includes the ones considered in [12] and [32], our statements necessarily extend the results of those papers. Moreover, the present approach also points a way to obtain localization formulas for the eigenvalues of general symmetric quasi-Toeplitz matrices. In detail, the eigenvalues of any symmetric quasi-Toeplitz matrix admitting a block diagonalization with diagonal blocks of the form (1.2) are precisely the eigenvalues of each of these diagonal blocks, which in turn can be located or computed through rational functions via Anderson's secular equation.
Throughout this paper, $n$ is assumed to be an integer greater than or equal to four, and $a,b,c,d,\xi,\eta$ in (1.1) will be taken as real numbers; in fact, this last restriction can be discarded, because most of the forthcoming statements remain valid when $a,b,c,d,\xi,\eta$ are complex numbers. Moreover, $\mathbf{S}_n$ will be the $n\times n$ symmetric, involutory and orthogonal matrix defined by
$[\mathbf{S}_n]_{k,\ell}:=\sqrt{\dfrac{2}{n+1}}\,\sin\left(\dfrac{k\ell\pi}{n+1}\right).$ | (2.1) |
Our first auxiliary result is an orthogonal diagonalization for the following n×n heptadiagonal symmetric matrix
$\hat{\mathbf{H}}_n=\begin{bmatrix} a-c & b-d & c & d & 0 & \cdots & \cdots & \cdots & \cdots & 0\\ b-d & a & b & c & d & \ddots & & & & \vdots\\ c & b & a & b & c & \ddots & \ddots & & & \\ d & c & b & a & b & \ddots & \ddots & \ddots & & \\ 0 & d & c & b & a & \ddots & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0\\ \vdots & & \ddots & \ddots & \ddots & \ddots & a & b & c & d\\ & & & \ddots & \ddots & \ddots & b & a & b & c\\ \vdots & & & & \ddots & \ddots & c & b & a & b-d\\ 0 & \cdots & \cdots & \cdots & \cdots & 0 & d & c & b-d & a-c \end{bmatrix}.$ | (2.2) |
Lemma 1. Let a,b,c,d be real numbers and
$\lambda_k=a+2b\cos\left(\dfrac{k\pi}{n+1}\right)+2c\cos\left(\dfrac{2k\pi}{n+1}\right)+2d\cos\left(\dfrac{3k\pi}{n+1}\right),\qquad k=1,\ldots,n.$ | (2.3) |
If ˆHn is the n×n matrix (2.2), then
$\hat{\mathbf{H}}_n=\mathbf{S}_n\boldsymbol{\Lambda}_n\mathbf{S}_n,$
where $\boldsymbol{\Lambda}_n=\mathrm{diag}(\lambda_1,\ldots,\lambda_n)$ and $\mathbf{S}_n$ is the matrix (2.1).
Proof. Consider the $n\times n$ matrix
$\boldsymbol{\Omega}_n=\begin{bmatrix} 0 & 1 & 0 & \cdots & \cdots & 0\\ 1 & 0 & 1 & \ddots & & \vdots\\ 0 & 1 & 0 & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & 1 & 0\\ \vdots & & \ddots & 1 & 0 & 1\\ 0 & \cdots & \cdots & 0 & 1 & 0 \end{bmatrix};$
it is a simple matter of routine to verify that
$\hat{\mathbf{H}}_n=(a-2c)\mathbf{I}_n+(b-3d)\boldsymbol{\Omega}_n+c\boldsymbol{\Omega}_n^2+d\boldsymbol{\Omega}_n^3.$
Using the spectral decomposition
$\boldsymbol{\Omega}_n=\sum_{\ell=1}^{n}2\cos\left(\dfrac{\ell\pi}{n+1}\right)\mathbf{s}_\ell\mathbf{s}_\ell^\top,$
where
$\mathbf{s}_\ell=\left[\sqrt{\dfrac{2}{n+1}}\sin\left(\dfrac{\ell\pi}{n+1}\right),\ \sqrt{\dfrac{2}{n+1}}\sin\left(\dfrac{2\ell\pi}{n+1}\right),\ \ldots,\ \sqrt{\dfrac{2}{n+1}}\sin\left(\dfrac{n\ell\pi}{n+1}\right)\right]^\top$
(i.e. the ℓth column of Sn), it follows
$\hat{\mathbf{H}}_n=\sum_{\ell=1}^{n}\left[(a-2c)+2(b-3d)\cos\left(\dfrac{\ell\pi}{n+1}\right)+4c\cos^2\left(\dfrac{\ell\pi}{n+1}\right)+8d\cos^3\left(\dfrac{\ell\pi}{n+1}\right)\right]\mathbf{s}_\ell\mathbf{s}_\ell^\top=\sum_{\ell=1}^{n}\lambda_\ell\,\mathbf{s}_\ell\mathbf{s}_\ell^\top=\mathbf{S}_n\boldsymbol{\Lambda}_n\mathbf{S}_n,$
where $\boldsymbol{\Lambda}_n=\mathrm{diag}(\lambda_1,\ldots,\lambda_n)$ and $\mathbf{S}_n$ is the matrix (2.1). The proof is complete.
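Lemma 1 is easy to check numerically. The following sketch (not part of the paper; NumPy and the sample values of n, a, b, c, d are our own choices) assembles $\hat{\mathbf{H}}_n$, the matrix $\mathbf{S}_n$ of (2.1) and the $\lambda_k$ of (2.3), and verifies the stated diagonalization:

```python
import numpy as np

n, a, b, c, d = 8, 1.0, -2.0, -1.0, 2.0       # sample values, our choice
k = np.arange(1, n + 1)
S = np.sqrt(2/(n+1)) * np.sin(np.outer(k, k)*np.pi/(n+1))   # S_n of (2.1)
lam = (a + 2*b*np.cos(k*np.pi/(n+1)) + 2*c*np.cos(2*k*np.pi/(n+1))
         + 2*d*np.cos(3*k*np.pi/(n+1)))                     # lambda_k of (2.3)

# \hat{H}_n of (2.2): heptadiagonal symmetric Toeplitz band with modified corners
H = (a*np.eye(n) + b*(np.eye(n, k=1) + np.eye(n, k=-1))
       + c*(np.eye(n, k=2) + np.eye(n, k=-2)) + d*(np.eye(n, k=3) + np.eye(n, k=-3)))
H[0, 0] = H[-1, -1] = a - c
H[0, 1] = H[1, 0] = H[-1, -2] = H[-2, -1] = b - d

assert np.allclose(S @ S, np.eye(n))          # S_n is involutory (and orthogonal)
assert np.allclose(S @ np.diag(lam) @ S, H)   # Lemma 1
```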
The next statement is an orthogonal block diagonalization for matrices $\mathbf{H}_n$ of the form (1.1); it extends Proposition 3.1 in [7], which is valid only for heptadiagonal symmetric Toeplitz matrices.
Lemma 2. Let a,b,c,d,ξ,η be real numbers, λk, k=1,…,n be given by (2.3) and Hn be the n×n matrix (1.1).
(a) If n is even,
$\mathbf{x}=\left[\dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{\pi}{n+1}\right),\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{3\pi}{n+1}\right),\ \ldots,\ \dfrac{2}{\sqrt{n+1}}\sin\left[\dfrac{(n-1)\pi}{n+1}\right]\right]^\top,\qquad \mathbf{y}=\left[\dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{2\pi}{n+1}\right),\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{6\pi}{n+1}\right),\ \ldots,\ \dfrac{2}{\sqrt{n+1}}\sin\left[\dfrac{(2n-2)\pi}{n+1}\right]\right]^\top$ | (2.4a) |
and
$\mathbf{v}=\left[\dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{2\pi}{n+1}\right),\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{4\pi}{n+1}\right),\ \ldots,\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{n\pi}{n+1}\right)\right]^\top,\qquad \mathbf{w}=\left[\dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{4\pi}{n+1}\right),\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{8\pi}{n+1}\right),\ \ldots,\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{2n\pi}{n+1}\right)\right]^\top,$ | (2.4b) |
then
$\mathbf{H}_n=\mathbf{S}_n\mathbf{P}_n\begin{bmatrix}\boldsymbol{\Phi}_{\frac n2} & \mathbf{O}\\ \mathbf{O} & \boldsymbol{\Psi}_{\frac n2}\end{bmatrix}\mathbf{P}_n^\top\mathbf{S}_n,$
where Sn is the n×n matrix (2.1), Pn is the n×n permutation matrix defined by
$[\mathbf{P}_n]_{k,\ell}=\begin{cases}1, & k=2\ell-1\ \text{or}\ k=2\ell-n\\ 0, & \text{otherwise}\end{cases}$ | (2.4c) |
and
$\boldsymbol{\Phi}_{\frac n2}=\mathrm{diag}(\lambda_1,\lambda_3,\ldots,\lambda_{n-1})+(c+\xi-a)\mathbf{x}\mathbf{x}^\top+(d+\eta-b)\mathbf{x}\mathbf{y}^\top+(d+\eta-b)\mathbf{y}\mathbf{x}^\top$ | (2.4d) |
$\boldsymbol{\Psi}_{\frac n2}=\mathrm{diag}(\lambda_2,\lambda_4,\ldots,\lambda_{n})+(c+\xi-a)\mathbf{v}\mathbf{v}^\top+(d+\eta-b)\mathbf{v}\mathbf{w}^\top+(d+\eta-b)\mathbf{w}\mathbf{v}^\top.$ | (2.4e) |
(b) If n is odd,
$\mathbf{x}=\left[\dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{\pi}{n+1}\right),\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{3\pi}{n+1}\right),\ \ldots,\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{n\pi}{n+1}\right)\right]^\top,\qquad \mathbf{y}=\left[\dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{2\pi}{n+1}\right),\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{6\pi}{n+1}\right),\ \ldots,\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{2n\pi}{n+1}\right)\right]^\top$ | (2.5a) |
and
$\mathbf{v}=\left[\dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{2\pi}{n+1}\right),\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{4\pi}{n+1}\right),\ \ldots,\ \dfrac{2}{\sqrt{n+1}}\sin\left[\dfrac{(n-1)\pi}{n+1}\right]\right]^\top,\qquad \mathbf{w}=\left[\dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{4\pi}{n+1}\right),\ \dfrac{2}{\sqrt{n+1}}\sin\left(\dfrac{8\pi}{n+1}\right),\ \ldots,\ \dfrac{2}{\sqrt{n+1}}\sin\left[\dfrac{2(n-1)\pi}{n+1}\right]\right]^\top,$ | (2.5b) |
then
$\mathbf{H}_n=\mathbf{S}_n\mathbf{P}_n\begin{bmatrix}\boldsymbol{\Phi}_{\frac{n+1}2} & \mathbf{O}\\ \mathbf{O} & \boldsymbol{\Psi}_{\frac{n-1}2}\end{bmatrix}\mathbf{P}_n^\top\mathbf{S}_n,$
where Sn is the n×n matrix (2.1), Pn is the n×n permutation matrix defined by
$[\mathbf{P}_n]_{k,\ell}=\begin{cases}1, & k=2\ell-1\ \text{or}\ k=2\ell-n-1\\ 0, & \text{otherwise}\end{cases}$ | (2.5c) |
and
$\boldsymbol{\Phi}_{\frac{n+1}2}=\mathrm{diag}(\lambda_1,\lambda_3,\ldots,\lambda_{n})+(c+\xi-a)\mathbf{x}\mathbf{x}^\top+(d+\eta-b)\mathbf{x}\mathbf{y}^\top+(d+\eta-b)\mathbf{y}\mathbf{x}^\top$ | (2.5d) |
$\boldsymbol{\Psi}_{\frac{n-1}2}=\mathrm{diag}(\lambda_2,\lambda_4,\ldots,\lambda_{n-1})+(c+\xi-a)\mathbf{v}\mathbf{v}^\top+(d+\eta-b)\mathbf{v}\mathbf{w}^\top+(d+\eta-b)\mathbf{w}\mathbf{v}^\top.$ | (2.5e) |
Proof. Let $a,b,c,d,\xi,\eta$ be real numbers, $\lambda_k$, $k=1,\ldots,n$, be given by (2.3) and $\mathbf{H}_n$ be the $n\times n$ matrix (1.1). Setting $\theta:=c+\xi-a$, $\vartheta:=d+\eta-b$,
$\hat{\mathbf{E}}_n=\mathrm{diag}(c+\xi-a,\ 0,\ \ldots,\ 0,\ c+\xi-a)$
and
$\hat{\mathbf{F}}_n=\begin{bmatrix} 0 & d+\eta-b & 0 & \cdots & \cdots & 0\\ d+\eta-b & 0 & 0 & \ddots & & \vdots\\ 0 & 0 & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & 0 & 0\\ \vdots & & \ddots & 0 & 0 & d+\eta-b\\ 0 & \cdots & \cdots & 0 & d+\eta-b & 0 \end{bmatrix},$
we have from Lemma 1
$\mathbf{S}_n\mathbf{H}_n\mathbf{S}_n=\mathbf{S}_n(\hat{\mathbf{H}}_n+\hat{\mathbf{E}}_n+\hat{\mathbf{F}}_n)\mathbf{S}_n=\boldsymbol{\Lambda}_n+\mathbf{G}_n+\mathbf{K}_n,$
where Sn is the n×n matrix (2.1), ˆHn is the n×n matrix (2.2),
$\boldsymbol{\Lambda}_n=\mathrm{diag}(\lambda_1,\ldots,\lambda_n),\qquad [\mathbf{G}_n]_{k,\ell}=\dfrac{2\theta}{n+1}\sin\left(\dfrac{k\pi}{n+1}\right)\sin\left(\dfrac{\ell\pi}{n+1}\right)\left[1+(-1)^{k+\ell}\right]$
and
$[\mathbf{K}_n]_{k,\ell}=\dfrac{2\vartheta}{n+1}\left[\sin\left(\dfrac{k\pi}{n+1}\right)\sin\left(\dfrac{2\ell\pi}{n+1}\right)+\sin\left(\dfrac{2k\pi}{n+1}\right)\sin\left(\dfrac{\ell\pi}{n+1}\right)\right]\left[1+(-1)^{k+\ell}\right].$
Since $[\mathbf{G}_n]_{k,\ell}=[\mathbf{K}_n]_{k,\ell}=0$ whenever $k+\ell$ is odd, we can permute the rows and columns of $\boldsymbol{\Lambda}_n+\mathbf{G}_n+\mathbf{K}_n$ according to the permutation matrices (2.4c) and (2.5c), yielding: for $n$ even,
$\mathbf{H}_n=\mathbf{S}_n\mathbf{P}_n\begin{bmatrix}\boldsymbol{\Upsilon}_{\frac n2}+\theta\mathbf{x}\mathbf{x}^\top+\vartheta\mathbf{x}\mathbf{y}^\top+\vartheta\mathbf{y}\mathbf{x}^\top & \mathbf{O}\\ \mathbf{O} & \boldsymbol{\Delta}_{\frac n2}+\theta\mathbf{v}\mathbf{v}^\top+\vartheta\mathbf{v}\mathbf{w}^\top+\vartheta\mathbf{w}\mathbf{v}^\top\end{bmatrix}\mathbf{P}_n^\top\mathbf{S}_n,$
where $\mathbf{P}_n$ is the matrix (2.4c), $\boldsymbol{\Upsilon}_{\frac n2}=\mathrm{diag}(\lambda_1,\lambda_3,\ldots,\lambda_{n-1})$, $\boldsymbol{\Delta}_{\frac n2}=\mathrm{diag}(\lambda_2,\lambda_4,\ldots,\lambda_n)$ and $\mathbf{x}$, $\mathbf{y}$, $\mathbf{v}$, $\mathbf{w}$ are given by (2.4a) and (2.4b); for $n$ odd,
$\mathbf{H}_n=\mathbf{S}_n\mathbf{P}_n\begin{bmatrix}\boldsymbol{\Upsilon}_{\frac{n+1}2}+\theta\mathbf{x}\mathbf{x}^\top+\vartheta\mathbf{x}\mathbf{y}^\top+\vartheta\mathbf{y}\mathbf{x}^\top & \mathbf{O}\\ \mathbf{O} & \boldsymbol{\Delta}_{\frac{n-1}2}+\theta\mathbf{v}\mathbf{v}^\top+\vartheta\mathbf{v}\mathbf{w}^\top+\vartheta\mathbf{w}\mathbf{v}^\top\end{bmatrix}\mathbf{P}_n^\top\mathbf{S}_n,$
with $\mathbf{P}_n$ defined in (2.5c), $\boldsymbol{\Upsilon}_{\frac{n+1}2}=\mathrm{diag}(\lambda_1,\lambda_3,\ldots,\lambda_n)$, $\boldsymbol{\Delta}_{\frac{n-1}2}=\mathrm{diag}(\lambda_2,\lambda_4,\ldots,\lambda_{n-1})$ and $\mathbf{x}$, $\mathbf{y}$, $\mathbf{v}$, $\mathbf{w}$ defined by (2.5a) and (2.5b). The proof is complete.
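The block diagonalization of Lemma 2 can be checked numerically as well. In the sketch below (our own sanity check, not part of the paper; the parameter values are arbitrary choices with n even), the similarity $\mathbf{S}_n\mathbf{H}_n\mathbf{S}_n$ is computed directly and its odd-odd and even-even sub-blocks are compared against $\boldsymbol{\Phi}_{\frac n2}$ and $\boldsymbol{\Psi}_{\frac n2}$ of (2.4d) and (2.4e):

```python
import numpy as np

n, a, b, c, d, xi, eta = 8, 1.0, -2.0, -1.0, 2.0, 9.0, -7.0   # n even, our choice
th, vt = c + xi - a, d + eta - b                               # theta, vartheta
idx = np.arange(1, n + 1)
S = np.sqrt(2/(n+1)) * np.sin(np.outer(idx, idx)*np.pi/(n+1))  # (2.1)
lam = (a + 2*b*np.cos(idx*np.pi/(n+1)) + 2*c*np.cos(2*idx*np.pi/(n+1))
         + 2*d*np.cos(3*idx*np.pi/(n+1)))                      # (2.3)

# H_n of (1.1)
H = (a*np.eye(n) + b*(np.eye(n, k=1) + np.eye(n, k=-1))
       + c*(np.eye(n, k=2) + np.eye(n, k=-2)) + d*(np.eye(n, k=3) + np.eye(n, k=-3)))
H[0, 0] = H[-1, -1] = xi
H[0, 1] = H[1, 0] = H[-1, -2] = H[-2, -1] = eta

odd, even = np.arange(1, n + 1, 2), np.arange(2, n + 1, 2)
x = 2/np.sqrt(n+1) * np.sin(odd*np.pi/(n+1))       # (2.4a)
y = 2/np.sqrt(n+1) * np.sin(2*odd*np.pi/(n+1))
v = 2/np.sqrt(n+1) * np.sin(even*np.pi/(n+1))      # (2.4b)
w = 2/np.sqrt(n+1) * np.sin(2*even*np.pi/(n+1))
Phi = np.diag(lam[odd-1]) + th*np.outer(x, x) + vt*(np.outer(x, y) + np.outer(y, x))
Psi = np.diag(lam[even-1]) + th*np.outer(v, v) + vt*(np.outer(v, w) + np.outer(w, v))

M = S @ H @ S        # permuting odd/even indices must give blkdiag(Phi, Psi)
assert np.allclose(M[0::2, 0::2], Phi)
assert np.allclose(M[1::2, 1::2], Psi)
assert np.allclose(M[0::2, 1::2], 0)
```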
Remark 1. Let us point out that the decomposition for real heptadiagonal symmetric Toeplitz matrices established in Proposition 3.1 of [7] by means of the bordering technique is no longer useful for matrices having the shape (1.1). As a consequence, some results stated by those authors are necessarily extended here, particularly the referred decomposition and a formula for the determinant of real heptadiagonal symmetric Toeplitz matrices (Corollary 3.1 of [7]).
The orthogonal block diagonalization established in Lemma 2 leads us to an explicit formula for the determinant of the matrix $\mathbf{H}_n$.
Theorem 1. Let $a,b,c,d,\xi,\eta$ be real numbers, $\lambda_k$, $k=1,\ldots,n$, be given by (2.3), $x_k=\sin\left(\frac{k\pi}{n+1}\right)$, $k=1,\ldots,2n$, and $\mathbf{H}_n$ the $n\times n$ matrix (1.1). If $\theta:=c+\xi-a$, $\vartheta:=d+\eta-b$ and
(a) n is even, then
$\det(\mathbf{H}_n)=\left[\prod_{j=1}^{\frac n2}\lambda_{2j}+\sum_{k=1}^{\frac n2}\dfrac{4\theta x_{2k}^2+8\vartheta x_{2k}x_{4k}}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac n2}\lambda_{2j}-\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{16\vartheta^2\left(x_{2k}x_{4\ell}-x_{2\ell}x_{4k}\right)^2}{(n+1)^2}\prod_{\substack{j=1\\ j\neq k,\ell}}^{\frac n2}\lambda_{2j}\right]\times\left[\prod_{j=1}^{\frac n2}\lambda_{2j-1}+\sum_{k=1}^{\frac n2}\dfrac{4\theta x_{2k-1}^2+8\vartheta x_{2k-1}x_{4k-2}}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac n2}\lambda_{2j-1}-\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{16\vartheta^2\left(x_{2k-1}x_{4\ell-2}-x_{2\ell-1}x_{4k-2}\right)^2}{(n+1)^2}\prod_{\substack{j=1\\ j\neq k,\ell}}^{\frac n2}\lambda_{2j-1}\right].$
(b) n is odd, then
$\det(\mathbf{H}_n)=\left[\prod_{j=1}^{\frac{n-1}2}\lambda_{2j}+\sum_{k=1}^{\frac{n-1}2}\dfrac{4\theta x_{2k}^2+8\vartheta x_{2k}x_{4k}}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac{n-1}2}\lambda_{2j}-\sum_{1\leqslant k<\ell\leqslant\frac{n-1}2}\dfrac{16\vartheta^2\left(x_{2k}x_{4\ell}-x_{2\ell}x_{4k}\right)^2}{(n+1)^2}\prod_{\substack{j=1\\ j\neq k,\ell}}^{\frac{n-1}2}\lambda_{2j}\right]\times\left[\prod_{j=1}^{\frac{n+1}2}\lambda_{2j-1}+\sum_{k=1}^{\frac{n+1}2}\dfrac{4\theta x_{2k-1}^2+8\vartheta x_{2k-1}x_{4k-2}}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac{n+1}2}\lambda_{2j-1}-\sum_{1\leqslant k<\ell\leqslant\frac{n+1}2}\dfrac{16\vartheta^2\left(x_{2k-1}x_{4\ell-2}-x_{2\ell-1}x_{4k-2}\right)^2}{(n+1)^2}\prod_{\substack{j=1\\ j\neq k,\ell}}^{\frac{n+1}2}\lambda_{2j-1}\right].$
Proof. Since both assertions can be proven in the same way, we only prove (a). Let $a,b,c,d,\xi,\eta$ be real numbers, $x_k=\sin\left(\frac{k\pi}{n+1}\right)$, $k=1,\ldots,2n$, $\lambda_k$, $k=1,\ldots,n$, be given by (2.3), $\theta:=c+\xi-a$, $\vartheta:=d+\eta-b$, and adopt the notations used in Lemma 2. The determinant formula for block-triangular matrices (see [21], page 185) and Lemma 2 ensure $\det(\mathbf{H}_n)=\det(\boldsymbol{\Phi}_{\frac n2})\det(\boldsymbol{\Psi}_{\frac n2})$. We shall first assume $\lambda_k\neq0$ for all $k=1,\ldots,n$,
$\dfrac{4\theta}{n+1}\sum_{k=1}^{\frac n2}\dfrac{x_{2k-1}^2}{\lambda_{2k-1}}\neq-1$ | (3.1a) |
$\dfrac{4\theta}{n+1}\sum_{k=1}^{\frac n2}\dfrac{x_{2k-1}^2}{\lambda_{2k-1}}+\dfrac{4\vartheta}{n+1}\sum_{k=1}^{\frac n2}\dfrac{x_{2k-1}x_{4k-2}}{\lambda_{2k-1}}\neq-1$ | (3.1b) |
$\sum_{k=1}^{\frac n2}\dfrac{4\theta x_{2k-1}^2+8\vartheta x_{2k-1}x_{4k-2}}{(n+1)\lambda_{2k-1}}-\dfrac{16\vartheta^2}{(n+1)^2}\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{(x_{2k-1}x_{4\ell-2}-x_{2\ell-1}x_{4k-2})^2}{\lambda_{2k-1}\lambda_{2\ell-1}}\neq-1$ | (3.1c) |
and
$\dfrac{4\theta}{n+1}\sum_{k=1}^{\frac n2}\dfrac{x_{2k}^2}{\lambda_{2k}}\neq-1$ | (3.2a) |
$\dfrac{4\theta}{n+1}\sum_{k=1}^{\frac n2}\dfrac{x_{2k}^2}{\lambda_{2k}}+\dfrac{4\vartheta}{n+1}\sum_{k=1}^{\frac n2}\dfrac{x_{2k}x_{4k}}{\lambda_{2k}}\neq-1$ | (3.2b) |
$\sum_{k=1}^{\frac n2}\dfrac{4\theta x_{2k}^2+8\vartheta x_{2k}x_{4k}}{(n+1)\lambda_{2k}}-\dfrac{16\vartheta^2}{(n+1)^2}\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{(x_{2k}x_{4\ell}-x_{2\ell}x_{4k})^2}{\lambda_{2k}\lambda_{2\ell}}\neq-1.$ | (3.2c) |
Putting $\boldsymbol{\Upsilon}_{\frac n2}:=\mathrm{diag}(\lambda_1,\lambda_3,\ldots,\lambda_{n-1})$ and $\boldsymbol{\Delta}_{\frac n2}:=\mathrm{diag}(\lambda_2,\lambda_4,\ldots,\lambda_n)$, we have
$\det(\boldsymbol{\Phi}_{\frac n2})=\det\left(\boldsymbol{\Upsilon}_{\frac n2}+\theta\mathbf{x}\mathbf{x}^\top+\vartheta\mathbf{x}\mathbf{y}^\top+\vartheta\mathbf{y}\mathbf{x}^\top\right)=\left[1+\theta\mathbf{x}^\top\boldsymbol{\Upsilon}_{\frac n2}^{-1}\mathbf{x}+2\vartheta\mathbf{x}^\top\boldsymbol{\Upsilon}_{\frac n2}^{-1}\mathbf{y}+\vartheta^2\left(\mathbf{x}^\top\boldsymbol{\Upsilon}_{\frac n2}^{-1}\mathbf{y}\right)^2-\vartheta^2\left(\mathbf{x}^\top\boldsymbol{\Upsilon}_{\frac n2}^{-1}\mathbf{x}\right)\left(\mathbf{y}^\top\boldsymbol{\Upsilon}_{\frac n2}^{-1}\mathbf{y}\right)\right]\det\left(\boldsymbol{\Upsilon}_{\frac n2}\right)=\left[\prod_{j=1}^{\frac n2}\lambda_{2j-1}+\sum_{k=1}^{\frac n2}\dfrac{4\theta x_{2k-1}^2+8\vartheta x_{2k-1}x_{4k-2}}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac n2}\lambda_{2j-1}-\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{16\vartheta^2(x_{2k-1}x_{4\ell-2}-x_{2\ell-1}x_{4k-2})^2}{(n+1)^2}\prod_{\substack{j=1\\ j\neq k,\ell}}^{\frac n2}\lambda_{2j-1}\right]$
and
$\det(\boldsymbol{\Psi}_{\frac n2})=\det\left(\boldsymbol{\Delta}_{\frac n2}+\theta\mathbf{v}\mathbf{v}^\top+\vartheta\mathbf{v}\mathbf{w}^\top+\vartheta\mathbf{w}\mathbf{v}^\top\right)=\left[1+\theta\mathbf{v}^\top\boldsymbol{\Delta}_{\frac n2}^{-1}\mathbf{v}+2\vartheta\mathbf{v}^\top\boldsymbol{\Delta}_{\frac n2}^{-1}\mathbf{w}+\vartheta^2\left(\mathbf{v}^\top\boldsymbol{\Delta}_{\frac n2}^{-1}\mathbf{w}\right)^2-\vartheta^2\left(\mathbf{v}^\top\boldsymbol{\Delta}_{\frac n2}^{-1}\mathbf{v}\right)\left(\mathbf{w}^\top\boldsymbol{\Delta}_{\frac n2}^{-1}\mathbf{w}\right)\right]\det\left(\boldsymbol{\Delta}_{\frac n2}\right)=\left[\prod_{j=1}^{\frac n2}\lambda_{2j}+\sum_{k=1}^{\frac n2}\dfrac{4\theta x_{2k}^2+8\vartheta x_{2k}x_{4k}}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac n2}\lambda_{2j}-\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{16\vartheta^2(x_{2k}x_{4\ell}-x_{2\ell}x_{4k})^2}{(n+1)^2}\prod_{\substack{j=1\\ j\neq k,\ell}}^{\frac n2}\lambda_{2j}\right]$
(see [29], pages 69 and 70), i.e.
$\det(\mathbf{H}_n)=\left[\prod_{j=1}^{\frac n2}\lambda_{2j}+\sum_{k=1}^{\frac n2}\dfrac{4\theta x_{2k}^2+8\vartheta x_{2k}x_{4k}}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac n2}\lambda_{2j}-\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{16\vartheta^2(x_{2k}x_{4\ell}-x_{2\ell}x_{4k})^2}{(n+1)^2}\prod_{\substack{j=1\\ j\neq k,\ell}}^{\frac n2}\lambda_{2j}\right]\times\left[\prod_{j=1}^{\frac n2}\lambda_{2j-1}+\sum_{k=1}^{\frac n2}\dfrac{4\theta x_{2k-1}^2+8\vartheta x_{2k-1}x_{4k-2}}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac n2}\lambda_{2j-1}-\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{16\vartheta^2(x_{2k-1}x_{4\ell-2}-x_{2\ell-1}x_{4k-2})^2}{(n+1)^2}\prod_{\substack{j=1\\ j\neq k,\ell}}^{\frac n2}\lambda_{2j-1}\right].$ | (3.3) |
Since both sides of (3.3) are polynomials in the variables a,b,c,d,ξ,η, conditions (3.1a)–(3.2c) as well as λk≠0 can be dropped, and (3.3) is valid more generally.
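Theorem 1 lends itself to a direct numerical check. The sketch below (our own verification, not part of the paper; the parameter values are arbitrary choices with n even) evaluates the two bracketed factors and compares their product with a dense determinant of $\mathbf{H}_n$:

```python
import numpy as np
from itertools import combinations

n, a, b, c, d, xi, eta = 8, 1.0, -2.0, -1.0, 2.0, 9.0, -7.0
th, vt = c + xi - a, d + eta - b              # theta and vartheta of Theorem 1
kk = np.arange(1, n + 1)
lam = (a + 2*b*np.cos(kk*np.pi/(n+1)) + 2*c*np.cos(2*kk*np.pi/(n+1))
         + 2*d*np.cos(3*kk*np.pi/(n+1)))      # (2.3)
x = lambda k: np.sin(k*np.pi/(n+1))           # x_k of Theorem 1

def bracket(idx):
    """One bracketed factor of Theorem 1, over the eigen-indices in idx."""
    total = np.prod(lam[np.array(idx) - 1])
    for i in idx:
        total += ((4*th*x(i)**2 + 8*vt*x(i)*x(2*i)) / (n+1)
                  * np.prod([lam[j-1] for j in idx if j != i]))
    for i, j in combinations(idx, 2):
        total -= (16*vt**2*(x(i)*x(2*j) - x(j)*x(2*i))**2 / (n+1)**2
                  * np.prod([lam[p-1] for p in idx if p not in (i, j)]))
    return total

# assemble H_n of (1.1) and compare with the closed formula (n even)
H = (a*np.eye(n) + b*(np.eye(n, k=1) + np.eye(n, k=-1))
       + c*(np.eye(n, k=2) + np.eye(n, k=-2)) + d*(np.eye(n, k=3) + np.eye(n, k=-3)))
H[0, 0] = H[-1, -1] = xi
H[0, 1] = H[1, 0] = H[-1, -2] = H[-2, -1] = eta
det_formula = bracket(range(2, n + 1, 2)) * bracket(range(1, n + 1, 2))
assert np.isclose(det_formula, np.linalg.det(H), rtol=1e-8)
```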
Example 1. Consider the following symmetric quasi-Toeplitz matrix
$\mathbf{T}_n=\begin{bmatrix} \xi & b & c & 0 & \cdots & \cdots & \cdots & 0\\ b & a & b & c & \ddots & & & \vdots\\ c & b & a & b & \ddots & \ddots & & \\ 0 & c & b & a & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0\\ \vdots & & \ddots & \ddots & \ddots & a & b & c\\ \vdots & & & \ddots & \ddots & b & a & b\\ 0 & \cdots & \cdots & \cdots & 0 & c & b & \xi \end{bmatrix}$
(when $\xi=a$, we have a pentadiagonal symmetric Toeplitz matrix). Let us note that Theorem 3 of [12] cannot be employed to compute $\det(\mathbf{T}_n)$. However, according to our Theorem 1, we get (with $d=0$, $\eta=b$ and $\vartheta=0$)
$\det(\mathbf{T}_n)=\begin{cases}\left[\prod_{j=1}^{\frac n2}\lambda_{2j}+\sum_{k=1}^{\frac n2}\dfrac{4(c+\xi-a)x_{2k}^2}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac n2}\lambda_{2j}\right]\left[\prod_{j=1}^{\frac n2}\lambda_{2j-1}+\sum_{k=1}^{\frac n2}\dfrac{4(c+\xi-a)x_{2k-1}^2}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac n2}\lambda_{2j-1}\right], & n\ \text{even}\\[10pt] \left[\prod_{j=1}^{\frac{n-1}2}\lambda_{2j}+\sum_{k=1}^{\frac{n-1}2}\dfrac{4(c+\xi-a)x_{2k}^2}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac{n-1}2}\lambda_{2j}\right]\left[\prod_{j=1}^{\frac{n+1}2}\lambda_{2j-1}+\sum_{k=1}^{\frac{n+1}2}\dfrac{4(c+\xi-a)x_{2k-1}^2}{n+1}\prod_{\substack{j=1\\ j\neq k}}^{\frac{n+1}2}\lambda_{2j-1}\right], & n\ \text{odd}\end{cases}$
where
$\lambda_k=a+2b\cos\left(\dfrac{k\pi}{n+1}\right)+2c\cos\left(\dfrac{2k\pi}{n+1}\right),\qquad k=1,\ldots,n,$
and $x_k=\sin\left(\frac{k\pi}{n+1}\right)$, $k=1,\ldots,2n$. Moreover, if $\xi=a-c$ in $\mathbf{T}_n$, then $\det(\mathbf{T}_n)$ simply reduces to $\lambda_1\lambda_2\cdots\lambda_n$ (let us stress that this includes the particular case $c=0$, i.e., the determinant of tridiagonal symmetric Toeplitz matrices).
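The collapsed case $\xi=a-c$ of Example 1 is easy to confirm numerically; a minimal sketch (our own, not part of the paper; the values of n, a, b, c are arbitrary choices):

```python
import numpy as np

n, a, b, c = 9, 2.0, -1.0, 0.5        # sample values, our choice
xi = a - c                            # the corner value that collapses det(T_n)
k = np.arange(1, n + 1)
lam = a + 2*b*np.cos(k*np.pi/(n+1)) + 2*c*np.cos(2*k*np.pi/(n+1))

# pentadiagonal symmetric Toeplitz body with perturbed (1,1) and (n,n) corners
T = (a*np.eye(n) + b*(np.eye(n, k=1) + np.eye(n, k=-1))
       + c*(np.eye(n, k=2) + np.eye(n, k=-2)))
T[0, 0] = T[-1, -1] = xi
assert np.isclose(np.linalg.det(T), np.prod(lam))   # det(T_n) = lambda_1 ... lambda_n
```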
The following lemma will allow us to express the eigenvalues of the key matrices of this paper as the zeros of explicit rational functions, providing, additionally, explicit upper and lower bounds for each one. We will denote the Euclidean norm by $\|\cdot\|$.
Lemma 3. Let a,b,c,d,ξ,η be real numbers and λk, k=1,…,n be given by (2.3).
(a) If n is even,
ⅰ. $\mathbf{x},\mathbf{y}$ are given by (2.4a) and the eigenvalues of
$\mathrm{diag}(\lambda_1,\lambda_3,\ldots,\lambda_{n-1})+(c+\xi-a)\mathbf{x}\mathbf{x}^\top+(d+\eta-b)\mathbf{x}\mathbf{y}^\top+(d+\eta-b)\mathbf{y}\mathbf{x}^\top$ | (3.4a) |
are not of the form $a+2b\cos\left[\frac{(2k-1)\pi}{n+1}\right]+2c\cos\left[\frac{2(2k-1)\pi}{n+1}\right]+2d\cos\left[\frac{3(2k-1)\pi}{n+1}\right]$, $k=1,\ldots,\frac n2$, then the eigenvalues of (3.4a) are precisely the zeros of the rational function
$f(t)=1+\dfrac{4}{n+1}\sum_{k=1}^{\frac n2}\dfrac{(c+\xi-a)\sin^2\left[\frac{(2k-1)\pi}{n+1}\right]+2(d+\eta-b)\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4k-2)\pi}{n+1}\right]}{\lambda_{2k-1}-t}-\dfrac{16(d+\eta-b)^2}{(n+1)^2}\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{\left\{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4\ell-2)\pi}{n+1}\right]-\sin\left[\frac{(4k-2)\pi}{n+1}\right]\sin\left[\frac{(2\ell-1)\pi}{n+1}\right]\right\}^2}{(\lambda_{2k-1}-t)(\lambda_{2\ell-1}-t)}.$ | (3.4b) |
Moreover, if $\mu_1\leqslant\mu_2\leqslant\ldots\leqslant\mu_{\frac n2}$ are the eigenvalues of (3.4a) and $\lambda_{\tau(1)}\leqslant\lambda_{\tau(3)}\leqslant\ldots\leqslant\lambda_{\tau(n-1)}$ are arranged in a nondecreasing order by some bijection $\tau$ defined on $\{1,3,\ldots,n-1\}$, then
$\lambda_{\tau(2k-1)}+\dfrac{(c+\xi-a)-\sqrt{(c+\xi-a)^2+4(d+\eta-b)^2}}{2}\leqslant\mu_k\leqslant\lambda_{\tau(2k-1)}+\dfrac{(c+\xi-a)+\sqrt{(c+\xi-a)^2+4(d+\eta-b)^2}}{2}$ | (3.4c) |
for each $k=1,\ldots,\frac n2$.
ⅱ. $\mathbf{v},\mathbf{w}$ are given by (2.4b) and the eigenvalues of
$\mathrm{diag}(\lambda_2,\lambda_4,\ldots,\lambda_{n})+(c+\xi-a)\mathbf{v}\mathbf{v}^\top+(d+\eta-b)\mathbf{v}\mathbf{w}^\top+(d+\eta-b)\mathbf{w}\mathbf{v}^\top$ | (3.5a) |
are not of the form $a+2b\cos\left(\frac{2k\pi}{n+1}\right)+2c\cos\left(\frac{4k\pi}{n+1}\right)+2d\cos\left(\frac{6k\pi}{n+1}\right)$, $k=1,\ldots,\frac n2$, then the eigenvalues of (3.5a) are precisely the zeros of the rational function
$g(t)=1+\dfrac{4}{n+1}\sum_{k=1}^{\frac n2}\dfrac{(c+\xi-a)\sin^2\left(\frac{2k\pi}{n+1}\right)+2(d+\eta-b)\sin\left(\frac{2k\pi}{n+1}\right)\sin\left(\frac{4k\pi}{n+1}\right)}{\lambda_{2k}-t}-\dfrac{16(d+\eta-b)^2}{(n+1)^2}\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{\left[\sin\left(\frac{2k\pi}{n+1}\right)\sin\left(\frac{4\ell\pi}{n+1}\right)-\sin\left(\frac{4k\pi}{n+1}\right)\sin\left(\frac{2\ell\pi}{n+1}\right)\right]^2}{(\lambda_{2k}-t)(\lambda_{2\ell}-t)}.$ | (3.5b) |
Furthermore, if $\nu_1\leqslant\nu_2\leqslant\ldots\leqslant\nu_{\frac n2}$ are the eigenvalues of (3.5a) and $\lambda_{\sigma(2)}\leqslant\lambda_{\sigma(4)}\leqslant\ldots\leqslant\lambda_{\sigma(n)}$ are arranged in a nondecreasing order by some bijection $\sigma$ defined on $\{2,4,\ldots,n\}$, then
$\lambda_{\sigma(2k)}+\dfrac{(c+\xi-a)-\sqrt{(c+\xi-a)^2+4(d+\eta-b)^2}}{2}\leqslant\nu_k\leqslant\lambda_{\sigma(2k)}+\dfrac{(c+\xi-a)+\sqrt{(c+\xi-a)^2+4(d+\eta-b)^2}}{2}$ | (3.5c) |
for every $k=1,\ldots,\frac n2$.
(b) If n is odd,
ⅰ. $\mathbf{x},\mathbf{y}$ are given by (2.5a) and the eigenvalues of
$\mathrm{diag}(\lambda_1,\lambda_3,\ldots,\lambda_{n})+(c+\xi-a)\mathbf{x}\mathbf{x}^\top+(d+\eta-b)\mathbf{x}\mathbf{y}^\top+(d+\eta-b)\mathbf{y}\mathbf{x}^\top$ | (3.6a) |
are not of the form $a+2b\cos\left[\frac{(2k-1)\pi}{n+1}\right]+2c\cos\left[\frac{2(2k-1)\pi}{n+1}\right]+2d\cos\left[\frac{3(2k-1)\pi}{n+1}\right]$, $k=1,\ldots,\frac{n+1}2$, then the eigenvalues of (3.6a) are precisely the zeros of the rational function
$f(t)=1+\dfrac{4}{n+1}\sum_{k=1}^{\frac{n+1}2}\dfrac{(c+\xi-a)\sin^2\left[\frac{(2k-1)\pi}{n+1}\right]+2(d+\eta-b)\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4k-2)\pi}{n+1}\right]}{\lambda_{2k-1}-t}-\dfrac{16(d+\eta-b)^2}{(n+1)^2}\sum_{1\leqslant k<\ell\leqslant\frac{n+1}2}\dfrac{\left\{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4\ell-2)\pi}{n+1}\right]-\sin\left[\frac{(4k-2)\pi}{n+1}\right]\sin\left[\frac{(2\ell-1)\pi}{n+1}\right]\right\}^2}{(\lambda_{2k-1}-t)(\lambda_{2\ell-1}-t)}.$ | (3.6b) |
Moreover, if $\mu_1\leqslant\mu_2\leqslant\ldots\leqslant\mu_{\frac{n+1}2}$ are the eigenvalues of (3.6a) and $\lambda_{\tau(1)}\leqslant\lambda_{\tau(3)}\leqslant\ldots\leqslant\lambda_{\tau(n)}$ are arranged in a nondecreasing order by some bijection $\tau$ defined on $\{1,3,\ldots,n\}$, then
$\lambda_{\tau(2k-1)}+\dfrac{(c+\xi-a)-\sqrt{(c+\xi-a)^2+4(d+\eta-b)^2}}{2}\leqslant\mu_k\leqslant\lambda_{\tau(2k-1)}+\dfrac{(c+\xi-a)+\sqrt{(c+\xi-a)^2+4(d+\eta-b)^2}}{2}$ | (3.6c) |
for any $k=1,\ldots,\frac{n+1}2$.
ⅱ. $\mathbf{v},\mathbf{w}$ are given by (2.5b) and the eigenvalues of
$\mathrm{diag}(\lambda_2,\lambda_4,\ldots,\lambda_{n-1})+(c+\xi-a)\mathbf{v}\mathbf{v}^\top+(d+\eta-b)\mathbf{v}\mathbf{w}^\top+(d+\eta-b)\mathbf{w}\mathbf{v}^\top$ | (3.7a) |
are not of the form $a+2b\cos\left(\frac{2k\pi}{n+1}\right)+2c\cos\left(\frac{4k\pi}{n+1}\right)+2d\cos\left(\frac{6k\pi}{n+1}\right)$, $k=1,\ldots,\frac{n-1}2$, then the eigenvalues of (3.7a) are precisely the zeros of the rational function
$g(t)=1+\dfrac{4}{n+1}\sum_{k=1}^{\frac{n-1}2}\dfrac{(c+\xi-a)\sin^2\left(\frac{2k\pi}{n+1}\right)+2(d+\eta-b)\sin\left(\frac{2k\pi}{n+1}\right)\sin\left(\frac{4k\pi}{n+1}\right)}{\lambda_{2k}-t}-\dfrac{16(d+\eta-b)^2}{(n+1)^2}\sum_{1\leqslant k<\ell\leqslant\frac{n-1}2}\dfrac{\left[\sin\left(\frac{2k\pi}{n+1}\right)\sin\left(\frac{4\ell\pi}{n+1}\right)-\sin\left(\frac{4k\pi}{n+1}\right)\sin\left(\frac{2\ell\pi}{n+1}\right)\right]^2}{(\lambda_{2k}-t)(\lambda_{2\ell}-t)}.$ | (3.7b) |
Furthermore, if $\nu_1\leqslant\nu_2\leqslant\ldots\leqslant\nu_{\frac{n-1}2}$ are the eigenvalues of (3.7a) and $\lambda_{\sigma(2)}\leqslant\lambda_{\sigma(4)}\leqslant\ldots\leqslant\lambda_{\sigma(n-1)}$ are arranged in a nondecreasing order by some bijection $\sigma$ defined on $\{2,4,\ldots,n-1\}$, then
$\lambda_{\sigma(2k)}+\dfrac{(c+\xi-a)-\sqrt{(c+\xi-a)^2+4(d+\eta-b)^2}}{2}\leqslant\nu_k\leqslant\lambda_{\sigma(2k)}+\dfrac{(c+\xi-a)+\sqrt{(c+\xi-a)^2+4(d+\eta-b)^2}}{2}$ | (3.7c) |
for all $k=1,\ldots,\frac{n-1}2$.
Proof. Let $a,b,c,d,\xi,\eta$ be real numbers, $\lambda_k$, $k=1,\ldots,n$, be given by (2.3), and put $\theta:=c+\xi-a$, $\vartheta:=d+\eta-b$. We shall denote by $S(k,m)$ the collection of all $k$-element subsets of $\{1,2,\ldots,m\}$ written in increasing order; additionally, for any rectangular matrix $\mathbf{M}$, we shall indicate by $\det(\mathbf{M}[I,J])$ the minor determined by the subsets $I=\{i_1<i_2<\ldots<i_k\}$ and $J=\{j_1<j_2<\ldots<j_k\}$. Supposing $\theta\neq0$,
$\mathbf{X}=\begin{bmatrix}\sqrt{\theta}\,\mathbf{x}^\top\\[2pt] \sqrt{\theta}\,\mathbf{x}^\top\\[2pt] \dfrac{\vartheta}{\sqrt{\theta}}\,\mathbf{y}^\top\end{bmatrix}=\begin{bmatrix}\frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right]\\ \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{\pi}{n+1}\right) & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left(\frac{3\pi}{n+1}\right) & \cdots & \frac{2\sqrt{\theta}}{\sqrt{n+1}}\sin\left[\frac{(n-1)\pi}{n+1}\right]\\ \frac{2\vartheta}{\sqrt{\theta(n+1)}}\sin\left(\frac{2\pi}{n+1}\right) & \frac{2\vartheta}{\sqrt{\theta(n+1)}}\sin\left(\frac{6\pi}{n+1}\right) & \cdots & \frac{2\vartheta}{\sqrt{\theta(n+1)}}\sin\left[\frac{(2n-2)\pi}{n+1}\right]\end{bmatrix}$
and
$\mathbf{Y}=\begin{bmatrix}\sqrt{\theta}\,\mathbf{x}^\top\\[2pt] \dfrac{\vartheta}{\sqrt{\theta}}\,\mathbf{y}^\top\\[2pt] \sqrt{\theta}\,\mathbf{x}^\top\end{bmatrix},$ where $\mathbf{x},\mathbf{y}$ are given by (2.4a), so that $\theta\mathbf{x}\mathbf{x}^\top+\vartheta\mathbf{x}\mathbf{y}^\top+\vartheta\mathbf{y}\mathbf{x}^\top=\mathbf{X}^\top\mathbf{Y}$.
Theorem 1 of [2] ensures that ζ is an eigenvalue of (3.4a) if, and only if,
$1+\sum_{k=1}^{\frac n2}\sum_{J\in S(k,\frac n2)}\sum_{I\in S(k,3)}\dfrac{\det(\mathbf{X}[I,J])\det(\mathbf{Y}[I,J])}{\prod_{j\in J}(\lambda_{2j-1}-\zeta)}=0,$
provided that ζ is not an eigenvalue of diag(λ1,λ3,…,λn−1). Since
$1+\sum_{k=1}^{\frac n2}\sum_{J\in S(k,\frac n2)}\sum_{I\in S(k,3)}\dfrac{\det(\mathbf{X}[I,J])\det(\mathbf{Y}[I,J])}{\prod_{j\in J}(\lambda_{2j-1}-\zeta)}=1+\dfrac{4}{n+1}\sum_{k=1}^{\frac n2}\dfrac{\theta\sin^2\left[\frac{(2k-1)\pi}{n+1}\right]+2\vartheta\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4k-2)\pi}{n+1}\right]}{\lambda_{2k-1}-\zeta}-\dfrac{16\vartheta^2}{(n+1)^2}\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{\left\{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4\ell-2)\pi}{n+1}\right]-\sin\left[\frac{(4k-2)\pi}{n+1}\right]\sin\left[\frac{(2\ell-1)\pi}{n+1}\right]\right\}^2}{(\lambda_{2k-1}-\zeta)(\lambda_{2\ell-1}-\zeta)},$
we obtain (3.4b). Considering now θ=0 and setting
$\mathbf{X}=\begin{bmatrix}\sqrt{\vartheta}\,\mathbf{x}^\top\\[2pt] \sqrt{\vartheta}\,\mathbf{y}^\top\end{bmatrix},\qquad \mathbf{Y}=\begin{bmatrix}\sqrt{\vartheta}\,\mathbf{y}^\top\\[2pt] \sqrt{\vartheta}\,\mathbf{x}^\top\end{bmatrix},$ with $\mathbf{x},\mathbf{y}$ given by (2.4a), so that $\vartheta\mathbf{x}\mathbf{y}^\top+\vartheta\mathbf{y}\mathbf{x}^\top=\mathbf{X}^\top\mathbf{Y}$,
we still have that ζ is an eigenvalue of (3.4a) if, and only if,
$1+\sum_{k=1}^{\frac n2}\sum_{J\in S(k,\frac n2)}\sum_{I\in S(k,2)}\dfrac{\det(\mathbf{X}[I,J])\det(\mathbf{Y}[I,J])}{\prod_{j\in J}(\lambda_{2j-1}-\zeta)}=0,$
assuming that ζ is not an eigenvalue of diag(λ1,λ3,…,λn−1). Hence,
$1+\sum_{k=1}^{\frac n2}\sum_{J\in S(k,\frac n2)}\sum_{I\in S(k,2)}\dfrac{\det(\mathbf{X}[I,J])\det(\mathbf{Y}[I,J])}{\prod_{j\in J}(\lambda_{2j-1}-\zeta)}=1+\dfrac{8\vartheta}{n+1}\sum_{k=1}^{\frac n2}\dfrac{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4k-2)\pi}{n+1}\right]}{\lambda_{2k-1}-\zeta}-\dfrac{16\vartheta^2}{(n+1)^2}\sum_{1\leqslant k<\ell\leqslant\frac n2}\dfrac{\left\{\sin\left[\frac{(2k-1)\pi}{n+1}\right]\sin\left[\frac{(4\ell-2)\pi}{n+1}\right]-\sin\left[\frac{(4k-2)\pi}{n+1}\right]\sin\left[\frac{(2\ell-1)\pi}{n+1}\right]\right\}^2}{(\lambda_{2k-1}-\zeta)(\lambda_{2\ell-1}-\zeta)},$
and (3.4b) is established. Let $\mu_1\leqslant\mu_2\leqslant\ldots\leqslant\mu_{\frac n2}$ be the eigenvalues of (3.4a) and $\lambda_{\tau(1)}\leqslant\lambda_{\tau(3)}\leqslant\ldots\leqslant\lambda_{\tau(n-1)}$ be arranged in a nondecreasing order by some bijection $\tau$ defined on $\{1,3,\ldots,n-1\}$. Thus,
$\lambda_{\tau(2k-1)}+\lambda_{\min}\left(\theta\mathbf{x}\mathbf{x}^\top+\vartheta\mathbf{x}\mathbf{y}^\top+\vartheta\mathbf{y}\mathbf{x}^\top\right)\leqslant\mu_k\leqslant\lambda_{\tau(2k-1)}+\lambda_{\max}\left(\theta\mathbf{x}\mathbf{x}^\top+\vartheta\mathbf{x}\mathbf{y}^\top+\vartheta\mathbf{y}\mathbf{x}^\top\right)$ | (3.8) |
for each $k=1,\ldots,\frac n2$ (see [23], page 242). Since the characteristic polynomial of $\theta\mathbf{x}\mathbf{x}^\top+\vartheta\mathbf{x}\mathbf{y}^\top+\vartheta\mathbf{y}\mathbf{x}^\top$ is
$\det\left[t\mathbf{I}_{\frac n2}-\theta\mathbf{x}\mathbf{x}^\top-\vartheta\mathbf{x}\mathbf{y}^\top-\vartheta\mathbf{y}\mathbf{x}^\top\right]=t^{\frac n2-2}\left[t^2-\left(\theta\mathbf{x}^\top\mathbf{x}+\vartheta\mathbf{y}^\top\mathbf{x}+\vartheta\mathbf{x}^\top\mathbf{y}\right)t+\vartheta^2(\mathbf{x}^\top\mathbf{y})(\mathbf{y}^\top\mathbf{x})-\vartheta^2(\mathbf{x}^\top\mathbf{x})(\mathbf{y}^\top\mathbf{y})\right]=t^{\frac n2-2}\left\{t^2-\left(\theta\|\mathbf{x}\|^2+2\vartheta\mathbf{x}^\top\mathbf{y}\right)t+\vartheta^2\left[(\mathbf{x}^\top\mathbf{y})^2-\|\mathbf{x}\|^2\|\mathbf{y}\|^2\right]\right\},$
we have that its spectrum is
$\mathrm{Spec}\left(\theta\mathbf{x}\mathbf{x}^\top+\vartheta\mathbf{x}\mathbf{y}^\top+\vartheta\mathbf{y}\mathbf{x}^\top\right)=\{0,\alpha_-,\alpha_+\},$ | (3.9) |
where $\alpha_\pm:=\dfrac{\theta\|\mathbf{x}\|^2+2\vartheta\mathbf{x}^\top\mathbf{y}\pm\sqrt{\left(\theta\|\mathbf{x}\|^2+2\vartheta\mathbf{x}^\top\mathbf{y}\right)^2-4\vartheta^2\left[(\mathbf{x}^\top\mathbf{y})^2-\|\mathbf{x}\|^2\|\mathbf{y}\|^2\right]}}{2}$. From the identities
$\sum_{k=1}^{\frac n2}\sin^2\left[\dfrac{(2k-1)\pi}{n+1}\right]=\dfrac{n+1}{4}=\sum_{k=1}^{\frac n2}\sin^2\left[\dfrac{(4k-2)\pi}{n+1}\right],\qquad \sum_{k=1}^{\frac n2}\sin\left[\dfrac{(2k-1)\pi}{n+1}\right]\sin\left[\dfrac{(4k-2)\pi}{n+1}\right]=0,$
it follows that $\|\mathbf{x}\|=\|\mathbf{y}\|=1$ and $\mathbf{x}^\top\mathbf{y}=0$. Hence, (3.8) and (3.9) yield (3.4c). The proofs of the remaining assertions are performed in the same way and are therefore omitted.
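Both conclusions of Lemma 3(a)ⅰ can be verified numerically: the eigenvalues of (3.4a) computed by a dense solver should annihilate the secular function (3.4b) and satisfy the bounds (3.4c). A sketch of such a check (our own, not part of the paper; parameter values are arbitrary choices):

```python
import numpy as np

n, a, b, c, d, xi, eta = 8, 1.0, -2.0, -1.0, 2.0, 9.0, -7.0   # our choice
th, vt = c + xi - a, d + eta - b
odd = np.arange(1, n + 1, 2)
lam = (a + 2*b*np.cos(odd*np.pi/(n+1)) + 2*c*np.cos(2*odd*np.pi/(n+1))
         + 2*d*np.cos(3*odd*np.pi/(n+1)))
x = 2/np.sqrt(n+1) * np.sin(odd*np.pi/(n+1))
y = 2/np.sqrt(n+1) * np.sin(2*odd*np.pi/(n+1))
Phi = np.diag(lam) + th*np.outer(x, x) + vt*(np.outer(x, y) + np.outer(y, x))

def f(t):
    """Secular function (3.4b)."""
    s1, s2 = np.sin(odd*np.pi/(n+1)), np.sin(2*odd*np.pi/(n+1))
    val = 1 + 4/(n+1) * np.sum((th*s1**2 + 2*vt*s1*s2) / (lam - t))
    for k in range(len(odd)):
        for l in range(k + 1, len(odd)):
            val -= (16*vt**2/(n+1)**2 * (s1[k]*s2[l] - s2[k]*s1[l])**2
                    / ((lam[k] - t) * (lam[l] - t)))
    return val

mu = np.linalg.eigvalsh(Phi)
assert max(abs(f(t)) for t in mu) < 1e-6       # eigenvalues are zeros of f
gap = np.hypot(th, 2*vt)                        # sqrt(theta^2 + 4*vartheta^2)
lo, hi = np.sort(lam) + (th - gap)/2, np.sort(lam) + (th + gap)/2
assert np.all(lo <= mu + 1e-9) and np.all(mu <= hi + 1e-9)    # bounds (3.4c)
```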
The next statement allows us to locate the eigenvalues of Hn, providing also explicit bounds for each of them.
Theorem 2. Let a,b,c,d,ξ,η be real numbers, λk, k=1,…,n be given by (2.3) and Hn be the n×n matrix (1.1).
(a) If $n$ is even, the eigenvalues of $\boldsymbol{\Phi}_{\frac n2}$ in (2.4d) are not of the form $\lambda_{2k-1}$, $k=1,\ldots,\frac n2$, and the eigenvalues of $\boldsymbol{\Psi}_{\frac n2}$ in (2.4e) are not of the form $\lambda_{2k}$, $k=1,\ldots,\frac n2$, then the eigenvalues of $\mathbf{H}_n$ are precisely the zeros of the rational functions $f(t)$ and $g(t)$ given by (3.4b) and (3.5b), respectively. Moreover, if $\mu_1\leqslant\mu_2\leqslant\ldots\leqslant\mu_{\frac n2}$ are the zeros of $f(t)$ and $\nu_1\leqslant\nu_2\leqslant\ldots\leqslant\nu_{\frac n2}$ are the zeros of $g(t)$ (counting multiplicities in both cases), then $\mu_k$, $k=1,\ldots,\frac n2$, and $\nu_k$, $k=1,\ldots,\frac n2$, satisfy (3.4c) and (3.5c), respectively.
(b) If $n$ is odd, the eigenvalues of $\boldsymbol{\Phi}_{\frac{n+1}2}$ in (2.5d) are not of the form $\lambda_{2k-1}$, $k=1,\ldots,\frac{n+1}2$, and the eigenvalues of $\boldsymbol{\Psi}_{\frac{n-1}2}$ in (2.5e) are not of the form $\lambda_{2k}$, $k=1,\ldots,\frac{n-1}2$, then the eigenvalues of $\mathbf{H}_n$ are precisely the zeros of the rational functions $f(t)$ and $g(t)$ given by (3.6b) and (3.7b), respectively. Furthermore, if $\mu_1\leqslant\mu_2\leqslant\ldots\leqslant\mu_{\frac{n+1}2}$ are the zeros of $f(t)$ and $\nu_1\leqslant\nu_2\leqslant\ldots\leqslant\nu_{\frac{n-1}2}$ are the zeros of $g(t)$ (counting multiplicities in both cases), then $\mu_k$, $k=1,\ldots,\frac{n+1}2$, and $\nu_k$, $k=1,\ldots,\frac{n-1}2$, satisfy (3.6c) and (3.7c), respectively.
Proof. Suppose a,b,c,d,ξ,η are real numbers and λk, k=1,…,n as given by (2.3).
(a) According to Lemma 2 and the determinant formula for block-triangular matrices (see [21], page 185), the characteristic polynomial of Hn for n even is
$\det(t\mathbf{I}_n-\mathbf{H}_n)=\det\left(t\mathbf{I}_{\frac n2}-\boldsymbol{\Phi}_{\frac n2}\right)\det\left(t\mathbf{I}_{\frac n2}-\boldsymbol{\Psi}_{\frac n2}\right),$
where Φn2 and Ψn2 are given by (2.4d) and (2.4e), respectively, so that the thesis is a direct consequence of Lemma 2.
(b) For n odd, we obtain
$\det(t\mathbf{I}_n-\mathbf{H}_n)=\det\left(t\mathbf{I}_{\frac{n+1}2}-\boldsymbol{\Phi}_{\frac{n+1}2}\right)\det\left(t\mathbf{I}_{\frac{n-1}2}-\boldsymbol{\Psi}_{\frac{n-1}2}\right),$
where Φn+12 and Ψn−12 are given by (2.5d) and (2.5e), respectively. The conclusion follows from Lemma 2.
From the Geršgorin theorem (see [23], Theorem 6.1.1), it can also be stated that all eigenvalues of $\mathbf{H}_n$ ($n\geqslant7$) belong to $[h_{\min},h_{\max}]$, where
$h_{\min}:=\min\left\{\xi-|c|-|d|-|\eta|,\ a-|b|-|c|-|d|-|\eta|,\ a-2|b|-2|c|-2|d|\right\}$
and
$h_{\max}:=\max\left\{\xi+|c|+|d|+|\eta|,\ a+|b|+|c|+|d|+|\eta|,\ a+2|b|+2|c|+2|d|\right\}.$
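The Geršgorin interval is cheap to confirm numerically; a minimal sketch (our own check, not part of the paper; the parameter values follow the example discussed later in this section):

```python
import numpy as np

n, a, b, c, d, xi, eta = 20, 0.0, -2.0, -1.0, 2.0, 9.0, -7.0   # sample values
H = (a*np.eye(n) + b*(np.eye(n, k=1) + np.eye(n, k=-1))
       + c*(np.eye(n, k=2) + np.eye(n, k=-2)) + d*(np.eye(n, k=3) + np.eye(n, k=-3)))
H[0, 0] = H[-1, -1] = xi
H[0, 1] = H[1, 0] = H[-1, -2] = H[-2, -1] = eta

hmin = min(xi - abs(c) - abs(d) - abs(eta),
           a - abs(b) - abs(c) - abs(d) - abs(eta),
           a - 2*abs(b) - 2*abs(c) - 2*abs(d))
hmax = max(xi + abs(c) + abs(d) + abs(eta),
           a + abs(b) + abs(c) + abs(d) + abs(eta),
           a + 2*abs(b) + 2*abs(c) + 2*abs(d))
ev = np.linalg.eigvalsh(H)
assert hmin - 1e-12 <= ev.min() and ev.max() <= hmax + 1e-12   # all in [hmin, hmax]
```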
Further, all eigenvalues of the n×n heptadiagonal symmetric Toeplitz matrix
$\mathrm{hepta}_n(d,c,b,a,b,c,d)=\begin{bmatrix} a & b & c & d & 0 & \cdots & \cdots & \cdots & \cdots & 0\\ b & a & b & c & d & \ddots & & & & \vdots\\ c & b & a & b & c & \ddots & \ddots & & & \\ d & c & b & a & b & \ddots & \ddots & \ddots & & \\ 0 & d & c & b & a & \ddots & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & 0\\ \vdots & & \ddots & \ddots & \ddots & \ddots & a & b & c & d\\ & & & \ddots & \ddots & \ddots & b & a & b & c\\ \vdots & & & & \ddots & \ddots & c & b & a & b\\ 0 & \cdots & \cdots & \cdots & \cdots & 0 & d & c & b & a \end{bmatrix}$
are contained in the interval
$\left[\min_{-\pi\leqslant t\leqslant\pi}\varphi(t),\ \max_{-\pi\leqslant t\leqslant\pi}\varphi(t)\right],$
where $\varphi(t)=a+2b\cos(t)+2c\cos(2t)+2d\cos(3t)$, $-\pi\leqslant t\leqslant\pi$ (see [18], Theorem 6.1). As an illustration, the eigenvalues of $\mathbf{H}_n$ and those of $\mathrm{hepta}_n(d,c,b,a,b,c,d)$ with $a=0$, $b=-2$, $c=-1$, $d=2$, $\xi=9$, $\eta=-7$ are depicted in the complex plane for increasing values of $n$.
A distinctive feature of the blue graphics is the existence of two outliers for $\mathbf{H}_n$, i.e., eigenvalues that do not belong to the interval $\left[-\frac{154}{27},7\right]$, which seem to merge into just one as $n\to\infty$. This numerical experiment also reveals that, as the matrix size increases, the spectrum of the quasi-Toeplitz matrix $\mathbf{H}_n$ approaches the spectrum of the Toeplitz matrix $\mathrm{hepta}_n(2,-1,-2,0,-2,-1,2)$ plus the outliers, a scenario consistent with the study presented in [6].
Remark 2. In [12] and [32], similar localization results were established for the eigenvalues of symmetric Toeplitz matrices (pentadiagonal and heptadiagonal, respectively). The referred papers make use of Chebyshev polynomials and their properties to obtain rational functions with a more concise form. However, their statements do not cover the broader class of matrices (1.1).
The decomposition presented in Lemma 2 also allows us to compute eigenvectors for $\mathbf{H}_n$ in (1.1).
Theorem 3. Let a,b,c,d,ξ,η be real numbers, λk, k=1,…,n be given by (2.3) and Hn be the n×n matrix (1.1).
(a) If $n$ is even, $\mathbf{S}_n$ is the $n\times n$ matrix (2.1), $\mathbf{P}_n$ is the $n\times n$ permutation matrix (2.4c), the zeros $\mu_1,\ldots,\mu_{\frac n2}$ of (3.4b) are not of the form $\lambda_{2k-1}$, $k=1,\ldots,\frac n2$, the zeros $\nu_1,\ldots,\nu_{\frac n2}$ of (3.5b) are not of the form $\lambda_{2k}$, $k=1,\ldots,\frac n2$,
$\sum_{j=1}^{\frac n2}\left\{\dfrac{(c+\xi-a)\sin^2\left[\frac{(2j-1)\pi}{n+1}\right]+(d+\eta-b)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_k-\lambda_{2j-1}}\right\}\neq\dfrac{n+1}{4},\qquad \sum_{j=1}^{\frac n2}\left[\dfrac{(c+\xi-a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right)+(d+\eta-b)\sin^2\left(\frac{4j\pi}{n+1}\right)}{\nu_k-\lambda_{2j}}\right]\neq\dfrac{n+1}{4}$
and b≠d+η, then
writing, for brevity, $\rho_k:=\dfrac{8\sum_{j=1}^{\frac n2}\left\{\frac{(c+\xi-a)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]+(d+\eta-b)\sin^2\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_k-\lambda_{2j-1}}\right\}}{n+1-4\sum_{j=1}^{\frac n2}\left\{\frac{(c+\xi-a)\sin^2\left[\frac{(2j-1)\pi}{n+1}\right]+(d+\eta-b)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_k-\lambda_{2j-1}}\right\}}$, the vector
$\mathbf{S}_n\mathbf{P}_n\left[\dfrac{2\sin\left(\frac{2\pi}{n+1}\right)+\rho_k\sin\left(\frac{\pi}{n+1}\right)}{\sqrt{n+1}\,(\mu_k-\lambda_1)},\ \dfrac{2\sin\left(\frac{6\pi}{n+1}\right)+\rho_k\sin\left(\frac{3\pi}{n+1}\right)}{\sqrt{n+1}\,(\mu_k-\lambda_3)},\ \ldots,\ \dfrac{2\sin\left[\frac{(2n-2)\pi}{n+1}\right]+\rho_k\sin\left[\frac{(n-1)\pi}{n+1}\right]}{\sqrt{n+1}\,(\mu_k-\lambda_{n-1})},\ 0,\ \ldots,\ 0\right]^\top$ | (3.10a) |
is an eigenvector of Hn associated to μk, k=1,…,n2, and
writing, for brevity, $\varrho_k:=\dfrac{8\sum_{j=1}^{\frac n2}\left[\frac{(c+\xi-a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right)+(d+\eta-b)\sin^2\left(\frac{4j\pi}{n+1}\right)}{\nu_k-\lambda_{2j}}\right]}{n+1-4\sum_{j=1}^{\frac n2}\left[\frac{(c+\xi-a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right)+(d+\eta-b)\sin^2\left(\frac{4j\pi}{n+1}\right)}{\nu_k-\lambda_{2j}}\right]}$, the vector
$\mathbf{S}_n\mathbf{P}_n\left[0,\ \ldots,\ 0,\ \dfrac{2\sin\left(\frac{4\pi}{n+1}\right)+\varrho_k\sin\left(\frac{2\pi}{n+1}\right)}{\sqrt{n+1}\,(\nu_k-\lambda_2)},\ \dfrac{2\sin\left(\frac{8\pi}{n+1}\right)+\varrho_k\sin\left(\frac{4\pi}{n+1}\right)}{\sqrt{n+1}\,(\nu_k-\lambda_4)},\ \ldots,\ \dfrac{2\sin\left(\frac{2n\pi}{n+1}\right)+\varrho_k\sin\left(\frac{n\pi}{n+1}\right)}{\sqrt{n+1}\,(\nu_k-\lambda_n)}\right]^\top$ | (3.10b) |
is an eigenvector of Hn associated to νk, k=1,…,n2.
(b) If $n$ is odd, $\mathbf{S}_n$ is the $n\times n$ matrix (2.1), $\mathbf{P}_n$ is the $n\times n$ permutation matrix (2.5c), the zeros $\mu_1,\ldots,\mu_{\frac{n+1}2}$ of (3.6b) are not of the form $\lambda_{2k-1}$, $k=1,\ldots,\frac{n+1}2$, the zeros $\nu_1,\ldots,\nu_{\frac{n-1}2}$ of (3.7b) are not of the form $\lambda_{2k}$, $k=1,\ldots,\frac{n-1}2$,
$\sum_{j=1}^{\frac{n+1}2}\left\{\dfrac{(c+\xi-a)\sin^2\left[\frac{(2j-1)\pi}{n+1}\right]+(d+\eta-b)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_k-\lambda_{2j-1}}\right\}\neq\dfrac{n+1}{4},$
$\sum_{j=1}^{\frac{n-1}2}\left[\dfrac{(c+\xi-a)\sin\left(\frac{2j\pi}{n+1}\right)\sin\left(\frac{4j\pi}{n+1}\right)+(d+\eta-b)\sin^2\left(\frac{4j\pi}{n+1}\right)}{\nu_k-\lambda_{2j}}\right]\neq\dfrac{n+1}{4}$
and b≠d+η, then
writing, for brevity, $\rho_k:=\dfrac{8\sum_{j=1}^{\frac{n+1}2}\left\{\frac{(c+\xi-a)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]+(d+\eta-b)\sin^2\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_k-\lambda_{2j-1}}\right\}}{n+1-4\sum_{j=1}^{\frac{n+1}2}\left\{\frac{(c+\xi-a)\sin^2\left[\frac{(2j-1)\pi}{n+1}\right]+(d+\eta-b)\sin\left[\frac{(2j-1)\pi}{n+1}\right]\sin\left[\frac{(4j-2)\pi}{n+1}\right]}{\mu_k-\lambda_{2j-1}}\right\}}$, the vector
$\mathbf{S}_n\mathbf{P}_n\left[\dfrac{2\sin\left(\frac{2\pi}{n+1}\right)+\rho_k\sin\left(\frac{\pi}{n+1}\right)}{\sqrt{n+1}\,(\mu_k-\lambda_1)},\ \dfrac{2\sin\left(\frac{6\pi}{n+1}\right)+\rho_k\sin\left(\frac{3\pi}{n+1}\right)}{\sqrt{n+1}\,(\mu_k-\lambda_3)},\ \ldots,\ \dfrac{2\sin\left(\frac{2n\pi}{n+1}\right)+\rho_k\sin\left(\frac{n\pi}{n+1}\right)}{\sqrt{n+1}\,(\mu_k-\lambda_n)},\ 0,\ \ldots,\ 0\right]^\top$ | (3.11a) |
is an eigenvector of {{\bf{H}}}_{n} associated to \mu_{k} , k = 1, \ldots, \frac{n+1}{2} , and
\begin{equation} {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c} 0 \\ \vdots \\[3pt] 0 \\ \frac{2\sin\left(\frac{4\pi}{n+1} \right)}{\sqrt{n+1}(\nu_{k} - \lambda_{2})} + \frac{8 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]} \frac{\sin \left(\frac{2\pi}{n+1} \right)}{\sqrt{n+1}(\nu_{k} - \lambda_{2})} \\[20pt] \frac{2\sin\left(\frac{8\pi}{n+1} \right)}{\sqrt{n+1}(\nu_{k} - \lambda_{4})} + \frac{8 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]} \frac{\sin \left(\frac{4\pi}{n+1} \right)}{\sqrt{n+1}(\nu_{k} - \lambda_{4})} \\ \vdots \\ \frac{2\sin\left[\frac{2(n-1)\pi}{n+1} \right]}{\sqrt{n+1}(\nu_{k} - \lambda_{n})} + \frac{8 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} \left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]}{n + 1 - 4 \underset{j = 1}{\overset{\frac{n-1}{2}}{\sum}} \left[\frac{(c + \xi - a) \sin \left(\frac{2j\pi}{n + 1} \right) \sin \left(\frac{4j\pi}{n + 1} \right) + (d + \eta - b) \sin^{2} 
\left(\frac{4j\pi}{n + 1} \right)}{\nu_{k} - \lambda_{2j}} \right]} \frac{\sin \left[\frac{(n-1)\pi}{n+1} \right]}{\sqrt{n+1}(\nu_{k} - \lambda_{n})} \end{array} \right] \end{equation} | (3.11b) |
is an eigenvector of {{\bf{H}}}_{n} associated to \nu_{k} , k = 1, \ldots, \frac{n-1}{2} .
Proof. Since both assertions can be proven in the same way, we only prove (a). Let $n$ be even. We can rewrite the matrix equation $(\mu_{k} {{\bf{I}}}_{n} - {{\bf{H}}}_{n}) {{\bf{q}}} = {{\bf{0}}}$ as
\begin{equation} {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c|c} \mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Phi}}_{\frac{n}{2}} & {{\bf{O}}} \\[2pt] \hline {{\bf{O}}} & \mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Psi}}_{\frac{n}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n} {{\bf{q}}} = {{\bf{0}}}, \end{equation} | (3.12) |
where {{\bf{S}}}_{n} is the matrix (2.1), {{\bf{P}}}_{n} is the permutation matrix (2.4c) and {\boldsymbol{\Phi}}_{\frac{n}{2}} and {\boldsymbol{\Psi}}_{\frac{n}{2}} are given by (2.4d) and (2.4e), respectively. Thus,
\begin{gather*} \left[\mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1} \right) - (c + \xi - a) {{\bf{x}}} {{\bf{x}}}^{\top} - (d + \eta - b) {{\bf{x}}} {{\bf{y}}}^{\top} - (d + \eta - b) {{\bf{y}}} {{\bf{x}}}^{\top} \right] {{\bf{q}}}_{1} = {{\bf{0}}}, \\ \left[\mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{2},\lambda_{4},\ldots,\lambda_{n} \right) - (c + \xi - a) {{\bf{v}}} {{\bf{v}}}^{\top} - (d + \eta - b) {{\bf{v}}} {{\bf{w}}}^{\top} - (d + \eta - b) {{\bf{w}}} {{\bf{v}}}^{\top} \right] {{\bf{q}}}_{2} = {{\bf{0}}}, \\ \left[ \begin{array}{c} {{\bf{q}}}_{1} \\ {{\bf{q}}}_{2} \end{array} \right] = {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n} {{\bf{q}}}. \end{gather*} |
That is,
\begin{gather*} {{\bf{q}}}_{1} = \alpha \left[\mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1} \right) - (c + \xi - a) {{\bf{x}}} {{\bf{x}}}^{\top} - (d + \eta - b) {{\bf{x}}} {{\bf{y}}}^{\top} \right]^{-1} {{\bf{y}}} \\ {{\bf{q}}}_{2} = {{\bf{0}}} \end{gather*} |
for \alpha \neq 0 (see [8], page 41), and
\begin{equation*} {{\bf{q}}} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c} \alpha \left[\mu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{1},\lambda_{3},\ldots,\lambda_{n-1} \right) - (c + \xi - a) {{\bf{x}}} {{\bf{x}}}^{\top} - (d + \eta - b) {{\bf{x}}} {{\bf{y}}}^{\top} \right]^{-1} {{\bf{y}}} \\ {{\bf{0}}} \end{array} \right] \end{equation*} |
is a nontrivial solution of (3.12). Thus, choosing \alpha = 1 , we conclude that (3.10a) is an eigenvector of {{\bf{H}}}_{n} associated to the eigenvalue \mu_{k} . Similarly, from (\nu_{k} {{\bf{I}}}_{n} - {{\bf{H}}}_{n}) {{\bf{q}}} = {{\bf{0}}} , we have
\begin{equation*} {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c|c} \nu_{k} {{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Phi}}_{\frac{n}{2}} & {{\bf{O}}} \\[2pt] \hline {{\bf{O}}} & \nu_{k} {{\bf{I}}}_{\frac{n}{2}} - {\boldsymbol{\Psi}}_{\frac{n}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n} {{\bf{q}}} = {{\bf{0}}} \end{equation*} |
and
\begin{equation*} {{\bf{q}}} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{c} {{\bf{0}}} \\ \alpha \left[\nu_{k} {{\bf{I}}}_{\frac{n}{2}} - {{\mathrm{diag}}} \left(\lambda_{2},\lambda_{4},\ldots,\lambda_{n} \right) - (c + \xi - a) {{\bf{v}}} {{\bf{v}}}^{\top} - (d + \eta - b) {{\bf{v}}} {{\bf{w}}}^{\top} \right]^{-1} {{\bf{w}}} \end{array} \right] \end{equation*} |
for \alpha \neq 0 , which is an eigenvector of {{\bf{H}}}_{n} associated to the eigenvalue \nu_{k} .
The orthogonal block diagonalization presented in Lemma 2 and Miller's formula for the inverse of the sum of nonsingular matrices lead us to an explicit expression for the inverse of {{\bf{H}}}_{n} .
Theorem 4. Let a, b, c, d, \xi, \eta be real numbers, \lambda_{k} , k = 1, \ldots, n be given by (2.3) and {{\bf{H}}}_{n} be the nonsingular n \times n matrix (1.1). If \lambda_{k} \neq 0 for every k = 1, \ldots, n and:
(a) n is even, then
\begin{equation*} {{\bf{H}}}_{n}^{-1} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{cc} {{\bf{Q}}}_{\frac{n}{2}} & {{\bf{O}}} \\ {{\bf{O}}} & {{\bf{R}}}_{\frac{n}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n}, \end{equation*} |
where {{\bf{S}}}_{n} is the n \times n matrix (2.1), {{\bf{P}}}_{n} is the n \times n permutation matrix (2.4c),
\begin{equation} \begin{split} {{\bf{Q}}}_{\frac{n}{2}} & = {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{(d + \eta - b) + (d + \eta - b)^{2} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} \left({{\bf{y}}} {{\bf{x}}}^{\top} + {{\bf{x}}} {{\bf{y}}}^{\top} \right) {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} \\ & \quad + \tfrac{(d + \eta - b)^{2}{{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} - (c + \xi - a)}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} + \tfrac{(d + \eta - b)^{2} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}, \end{split} \end{equation} | (3.13a) |
with {\boldsymbol{\Upsilon}}_{\frac{n}{2}} : = {{\mathrm{diag}}} \left(\lambda_{1}, \lambda_{3}, \ldots, \lambda_{n-1} \right) , {{\bf{x}}}, {{\bf{y}}} given by (2.4a),
\begin{equation} \rho = 1 + (c + \xi - a) {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} + 2 (d + \eta - b) {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} + (d + \eta - b)^{2} \left[\big({{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} \big)^{2} - \big({{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} \big) \big({{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} \big) \right] \end{equation} | (3.13b) |
and
\begin{equation} \begin{split} {{\bf{R}}}_{\frac{n}{2}} & = {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{(d + \eta - b) + (d + \eta - b)^{2} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} \left({{\bf{w}}} {{\bf{v}}}^{\top} + {{\bf{v}}} {{\bf{w}}}^{\top} \right) {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} \\ & \quad + \tfrac{(d + \eta - b)^{2}{{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} - (c + \xi - a)}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} + \tfrac{(d + \eta - b)^{2} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}, \end{split} \end{equation} | (3.13c) |
with {\boldsymbol{\Delta}}_{\frac{n}{2}} : = {{\mathrm{diag}}} \left(\lambda_{2}, \lambda_{4}, \ldots, \lambda_{n} \right) , {{\bf{v}}}, {{\bf{w}}} given by (2.5a) and
\begin{equation} \varrho = 1 + (c + \xi - a) {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} + 2 (d + \eta - b) {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} + (d + \eta - b)^{2} \left[\big({{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} \big)^{2} - \big({{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} \big) \big({{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} \big) \right]. \end{equation} | (3.13d) |
(b) n is odd, then
\begin{equation*} {{\bf{H}}}_{n}^{-1} = {{\bf{S}}}_{n} {{\bf{P}}}_{n} \left[ \begin{array}{cc} {{\bf{Q}}}_{\frac{n+1}{2}} & {{\bf{O}}} \\ {{\bf{O}}} & {{\bf{R}}}_{\frac{n-1}{2}} \end{array} \right] {{\bf{P}}}_{n}^{\top} {{\bf{S}}}_{n}, \end{equation*} |
where {{\bf{S}}}_{n} is the n \times n matrix (2.1), {{\bf{P}}}_{n} is the n \times n permutation matrix (2.5c),
\begin{equation} \begin{split} {{\bf{Q}}}_{\frac{n+1}{2}} & = {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} - \tfrac{(d + \eta - b) + (d + \eta - b)^{2} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} \left({{\bf{y}}} {{\bf{x}}}^{\top} + {{\bf{x}}} {{\bf{y}}}^{\top} \right) {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} \\ & \quad + \tfrac{(d + \eta - b)^{2}{{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{y}}} - (c + \xi - a)}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} + \tfrac{(d + \eta - b)^{2} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{y}}} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1}, \end{split} \end{equation} | (3.14a) |
with {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}} : = {{\mathrm{diag}}} \left(\lambda_{1}, \lambda_{3}, \ldots, \lambda_{n} \right) , {{\bf{x}}}, {{\bf{y}}} given by (2.5a),
\begin{equation} \begin{split} \rho & = 1 + (c + \xi - a) {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}} + 2 (d + \eta - b) {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}} \\&+ (d + \eta - b)^{2} \left[\big({{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{y}}} \big)^{2} - \big({{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{x}}} \big) \big({{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n+1}{2}}^{-1} {{\bf{y}}} \big) \right] \end{split} \end{equation} | (3.14b) |
and
\begin{equation} \begin{split} {{\bf{R}}}_{\frac{n-1}{2}} & = {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} - \tfrac{(d + \eta - b) + (d + \eta - b)^{2} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} \left({{\bf{w}}} {{\bf{v}}}^{\top} + {{\bf{v}}} {{\bf{w}}}^{\top} \right) {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} \\ & \quad + \tfrac{(d + \eta - b)^{2}{{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{w}}} - (c + \xi - a)}{\varrho} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} + \tfrac{(d + \eta - b)^{2} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{w}}} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1}, \end{split} \end{equation} | (3.14c) |
with {\boldsymbol{\Delta}}_{\frac{n-1}{2}} : = {{\mathrm{diag}}} \left(\lambda_{2}, \lambda_{4}, \ldots, \lambda_{n-1} \right) , {{\bf{v}}}, {{\bf{w}}} in (2.5b),
\begin{equation} \begin{split} \varrho & = 1 + (c + \xi - a) {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}} + 2 (d + \eta - b) {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}} \\ &+ (d + \eta - b)^{2} \left[\big({{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{w}}} \big)^{2} - \big({{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{v}}} \big) \big({{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n-1}{2}}^{-1} {{\bf{w}}} \big) \right] . \end{split} \end{equation} | (3.14d) |
Proof. Let a, b, c, d, \xi, \eta be real numbers, suppose \lambda_{k} \neq 0 for all k = 1, \ldots, n , with \lambda_{k} given by (2.3), and assume that {{\bf{H}}}_{n} in (1.1) is nonsingular. Recall that if {{\bf{H}}}_{n} is nonsingular, then \rho and \varrho in (3.13b) and (3.13d), respectively, are both nonzero. Setting \theta : = c + \xi - a , \vartheta : = d + \eta - b and assuming that conditions (3.1a) and (3.1b) are satisfied (note that (3.1c) corresponds to \rho \neq 0 ), we have from the main result of [29] (see pages 69 and 70),
\begin{equation*} \begin{split} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} & = {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{\theta}{1 + \theta {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}, \\ \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} & = \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} - \tfrac{\vartheta}{1 + \vartheta {{\bf{y}}}^{\top} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} {{\bf{x}}}} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} {{\bf{x}}} {{\bf{y}}}^{\top} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} \big)^{-1} \\ & = {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{\theta}{1 + \theta {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} + \vartheta {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{\vartheta}{1 + \theta {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} + \vartheta {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} \end{split} \end{equation*} |
and
\begin{equation} \begin{split} &\big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} + \vartheta {{\bf{y}}} {{\bf{x}}}^{\top} \big)^{-1} \\ & \quad = \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} - \tfrac{\vartheta}{1 + \vartheta {{\bf{x}}}^{\top} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} {{\bf{y}}}} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} {{\bf{x}}} {{\bf{y}}}^{\top} \big({\boldsymbol{\Upsilon}}_{\frac{n}{2}} + \theta {{\bf{x}}} {{\bf{x}}}^{\top} + \vartheta {{\bf{x}}} {{\bf{y}}}^{\top} \big)^{-1} \\ & \quad = {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} - \tfrac{\vartheta + \vartheta^{2} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} \left({{\bf{y}}} {{\bf{x}}}^{\top} + {{\bf{x}}} {{\bf{y}}}^{\top} \right) {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} + \tfrac{\vartheta^{2}{{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} - \theta}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} + \tfrac{\vartheta^{2} {{\bf{x}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{x}}}}{\rho} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1} {{\bf{y}}} {{\bf{y}}}^{\top} {\boldsymbol{\Upsilon}}_{\frac{n}{2}}^{-1}, \end{split} \end{equation} | (3.15) |
with {\boldsymbol{\Upsilon}}_{\frac{n}{2}} : = {{\mathrm{diag}}} \left(\lambda_{1}, \lambda_{3}, \ldots, \lambda_{n-1} \right) , {{\bf{x}}}, {{\bf{y}}} given by (2.4a) and \rho in (3.13b). In the same way, supposing (3.2a) and (3.2b) (observe that (3.2c) is \varrho \neq 0 ), we obtain
\begin{equation*} \begin{split} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} & = {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{\theta}{1 + \theta {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}, \\ \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} & = \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} - \tfrac{\vartheta}{1 + \vartheta {{\bf{w}}}^{\top} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} {{\bf{v}}}} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} {{\bf{v}}} {{\bf{w}}}^{\top} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} \big)^{-1} \\ & = {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{\theta}{1 + \theta {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} + \vartheta {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{\vartheta}{1 + \theta {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} + \vartheta {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} \end{split} \end{equation*} |
and
\begin{equation} \begin{split} &\big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} + \vartheta {{\bf{w}}} {{\bf{v}}}^{\top} \big)^{-1} \\ &\quad = \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} - \tfrac{\vartheta}{1 + \vartheta {{\bf{v}}}^{\top} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} {{\bf{w}}}} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} {{\bf{v}}} {{\bf{w}}}^{\top} \big({\boldsymbol{\Delta}}_{\frac{n}{2}} + \theta {{\bf{v}}} {{\bf{v}}}^{\top} + \vartheta {{\bf{v}}} {{\bf{w}}}^{\top} \big)^{-1} \\ & \quad = {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} - \tfrac{\vartheta + \vartheta^{2} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} \left({{\bf{w}}} {{\bf{v}}}^{\top} + {{\bf{v}}} {{\bf{w}}}^{\top} \right) {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} + \tfrac{\vartheta^{2}{{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} - \theta}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} + \tfrac{\vartheta^{2} {{\bf{v}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{v}}}}{\varrho} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1} {{\bf{w}}} {{\bf{w}}}^{\top} {\boldsymbol{\Delta}}_{\frac{n}{2}}^{-1}, \end{split} \end{equation} | (3.16) |
where {\boldsymbol{\Delta}}_{\frac{n}{2}} : = {{\mathrm{diag}}} \left(\lambda_{2}, \lambda_{4}, \ldots, \lambda_{n} \right) , {{\bf{v}}}, {{\bf{w}}} are given by (2.5a) and \varrho by (3.13d). Since the nonsingularity of {{\bf{H}}}_{n} and \lambda_{k} \neq 0 for all k = 1, \ldots, n are sufficient for both sides of (3.15) and (3.16) to be well-defined, the conditions (3.1a), (3.1b), (3.2a) and (3.2b) assumed previously can be dropped. Hence, the block diagonalization provided in (a) of Lemma 2, together with formula 8.5b of [21] (see page 88), establishes assertion (a). The proof of (b) is analogous, so we omit the details.
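As a sanity check, identity (3.15) — Miller's formula applied successively to the rank-one corrections — can be verified numerically. The sketch below (Python with NumPy; a generic nonsingular diagonal {\boldsymbol{\Upsilon}} and random vectors, chosen only for illustration, with \theta and \vartheta standing for c + \xi - a and d + \eta - b ) compares the right-hand side of (3.15) with a directly computed inverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random nonsingular diagonal Upsilon and vectors x, y; the scalars
# theta and vartheta play the roles of c + xi - a and d + eta - b.
m = 6
ups = rng.uniform(1.0, 3.0, m)           # diagonal of Upsilon (nonzero)
Upsilon = np.diag(ups)
x, y = rng.standard_normal(m), rng.standard_normal(m)
theta, vartheta = 0.7, -1.3

Ui = np.diag(1.0 / ups)                  # Upsilon^{-1}
M = Upsilon + theta * np.outer(x, x) + vartheta * (np.outer(x, y) + np.outer(y, x))

# rho as in (3.13b)
a11 = x @ Ui @ x                         # x^T Upsilon^{-1} x
a12 = x @ Ui @ y                         # x^T Upsilon^{-1} y = y^T Upsilon^{-1} x
a22 = y @ Ui @ y                         # y^T Upsilon^{-1} y
rho = 1 + theta * a11 + 2 * vartheta * a12 + vartheta**2 * (a12**2 - a11 * a22)

# Right-hand side of (3.15)
XX = Ui @ np.outer(x, x) @ Ui
YY = Ui @ np.outer(y, y) @ Ui
XY = Ui @ (np.outer(y, x) + np.outer(x, y)) @ Ui
rhs = Ui - (vartheta + vartheta**2 * a12) / rho * XY \
         + (vartheta**2 * a22 - theta) / rho * XX \
         + (vartheta**2 * a11) / rho * YY

assert np.allclose(rhs, np.linalg.inv(M))
```

The same check, with {\boldsymbol{\Delta}} , {{\bf{v}}} , {{\bf{w}}} and \varrho in place of {\boldsymbol{\Upsilon}} , {{\bf{x}}} , {{\bf{y}}} and \rho , covers (3.16).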
It is well known that the fourth derivative can be approximated through the following centered finite-difference formula
\begin{equation} f^{(4)}(x_{k}) \approx \frac{-f(x_{k-3}) + 12 f(x_{k-2}) - 39 f(x_{k-1}) + 56 f(x_{k}) - 39 f(x_{k+1}) + 12 f(x_{k+2}) - f(x_{k+3})}{6h^{4}} \end{equation} | (4.1) |
(see [9], page 556). Consider an interval [a, b] (a < b) , a mesh of points x_{k} = a + k h , k = 0, 1, \ldots, N , where h = (b - a)/N , and a function f\colon [a, b] \longrightarrow \mathbb{R} such that f(a) = 0 = f(b) . By setting
\begin{equation*} \begin{split} f(x_{-2}) : = \alpha f(x_{2}), \\ f(x_{-1}) : = \alpha f(x_{1}), \\ f(x_{N+1}) : = \alpha f(x_{N-1}), \\ f(x_{N+2}) : = \alpha f(x_{N-2}) \end{split} \end{equation*} |
for some \alpha\in \mathbb{R} , the matrix operator corresponding to (4.1) for the fourth derivative is
\begin{equation} \left[ \begin{array}{ccccccccccc} 12 \alpha + 56 & -(\alpha + 39) & 12 & -1 & 0 & \ldots & \ldots & \ldots & \ldots & \ldots & 0 \\ -(\alpha + 39) & 56 & -39 & 12 & -1 & \ddots & & & & & \vdots \\ 12 & -39 & 56 & -39 & 12 & \ddots & \ddots & & & & \vdots \\ -1 & 12 & -39 & 56 & -39 & \ddots & \ddots & \ddots & & & \vdots \\ 0 & -1 & 12 & -39 & 56 & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & 56 & -39 & 12 & -1 & 0 \\ \vdots & & & \ddots & \ddots & \ddots & -39 & 56 & -39 & 12 & -1 \\ \vdots & & & & \ddots & \ddots & 12 & -39 & 56 & -39 & 12 \\ \vdots & & & & & \ddots & -1 & 12 & -39 & 56 & -(\alpha + 39) \\ 0 & \ldots & \ldots & \ldots & \ldots & \ldots & 0 & -1 & 12 & -(\alpha + 39) & 12 \alpha + 56 \end{array} \right]. \end{equation} | (4.2) |
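As an illustration, the sketch below (Python with NumPy; the helper name d4_matrix is ours, not from the text) assembles the operator (4.2) and checks two facts implicit above: the stencil of (4.1) is exact for f(x) = x^{4} , and (4.2) is symmetric for any \alpha :

```python
import numpy as np

# Interior stencil of (4.1): coefficients of f(x_{k-3}), ..., f(x_{k+3}).
c = np.array([-1., 12., -39., 56., -39., 12., -1.])

# Divided by 6 h^4, the stencil reproduces the fourth derivative of x^4
# (which is 24) exactly; here h = 1, centered at 0.
pts = np.arange(-3., 4.)
assert c @ pts**4 / 6.0 == 24.0
assert c @ np.ones(7) == 0.0            # consistency: annihilates constants

def d4_matrix(n, alpha):
    """(Unscaled) operator (4.2) of order n acting on f(x_1), ..., f(x_n)."""
    A = sum(c[j + 3] * np.eye(n, k=j) for j in range(-3, 4))
    # Fold the ghost values f(x_{-1}) = alpha f(x_1), f(x_{-2}) = alpha f(x_2)
    # (and their mirror images at the right end) into the corner entries.
    A[0, 0] += 12 * alpha
    A[0, 1] -= alpha
    A[1, 0] -= alpha
    A[-1, -1] += 12 * alpha
    A[-1, -2] -= alpha
    A[-2, -1] -= alpha
    return A

A = d4_matrix(9, alpha=-1.0)
assert np.allclose(A, A.T)              # (4.2) is symmetric for any alpha
```

Scaling d4_matrix(N - 1, alpha) by 1/(6 h^{4}) gives the discrete fourth-derivative operator on the interior nodes.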
A remarkable example that involves the fourth derivative is the ordinary differential equation that governs the deflection of a laterally loaded symmetrical beam of length L ,
\begin{equation} {{\mathrm{E}}} \, {{\mathrm{I}}}(x) y^{(4)}(x) = q(x), \quad x \in ]0,L[, \end{equation} | (4.3) |
where {{\mathrm{E}}} is the modulus of elasticity of the beam material, {{\mathrm{I}}}(x) is the moment of inertia of the beam cross section and q(x) is the distributed load. The ordinary differential equation (4.3) can be equipped with the boundary conditions y(0) = 0 = y(L) (see, for instance, [22]).
The eigenvalues of derivative matrices are very useful. In fact, they can be compared with those of the exact (continuous) derivative operator to gauge the accuracy of the finite difference approximation. On the other hand, in the context of partial differential equations, the eigenvalues of the spatial operator are considered along with the stability diagram of the time-integration scheme to evaluate the stability of the numerical solution for the partial differential equation [3]. The statements of subsection 3.2 can be employed to locate (bound) the eigenvalues of (4.2).
Another example of a derivative matrix is
\begin{equation} \left[ \begin{array}{ccccccc} -\frac{2}{3} & \frac{2}{3} & 0 & \ldots & \ldots & \ldots & 0 \\ 1 & -2 & 1 & \ddots & & & \vdots \\ 0 & 1 & -2 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & -2 & 1 & 0 \\ \vdots & & & \ddots & 1 & -2 & 1 \\ 0 & \ldots & \ldots & \ldots & 0 & \frac{2}{3} & - \frac{2}{3} \end{array} \right], \end{equation} | (4.4) |
which appears in the discretization of the second-derivative operator via the three-point centered finite-difference formula with Neumann boundary conditions f'(x_{0}) = a and f'(x_{N}) = b (see [3], pages 133 and 134). Our results can also be used to locate (bound) its eigenvalues by noticing that the eigenvalues of (4.4) and
\begin{align*} &{{\mathrm{diag}}}\left(1,\frac{\sqrt{6}}{3},\ldots,\frac{\sqrt{6}}{3},1 \right) \left[ \begin{array}{ccccccc} -\frac{2}{3} & \frac{2}{3} & 0 & \ldots & \ldots & \ldots & 0 \\ 1 & -2 & 1 & \ddots & & & \vdots \\ 0 & 1 & -2 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & -2 & 1 & 0 \\ \vdots & & & \ddots & 1 & -2 & 1 \\ 0 & \ldots & \ldots & \ldots & 0 & \frac{2}{3} & - \frac{2}{3} \end{array} \right] {{\mathrm{diag}}}\left(1,\frac{\sqrt{6}}{2},\ldots,\frac{\sqrt{6}}{2},1 \right) \\ &\quad = \left[ \begin{array}{ccccccc} -\frac{2}{3} & \frac{\sqrt{6}}{3} & 0 & \ldots & \ldots & \ldots & 0 \\ \frac{\sqrt{6}}{3} & -2 & 1 & \ddots & & & \vdots \\ 0 & 1 & -2 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & -2 & 1 & 0 \\ \vdots & & & \ddots & 1 & -2 & \frac{\sqrt{6}}{3} \\ 0 & \ldots & \ldots & \ldots & 0 & \frac{\sqrt{6}}{3} & -\frac{2}{3} \end{array} \right] \end{align*} |
are exactly the same.
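The claimed similarity can be confirmed numerically. In the sketch below (Python with NumPy; the helper name neumann_laplacian is ours), the diagonal scaling produces a symmetric matrix whose spectrum coincides with that of (4.4):

```python
import numpy as np

def neumann_laplacian(n):
    """Matrix (4.4): three-point second-derivative operator with the
    Neumann boundary rows described in the text."""
    A = np.diag(-2.0 * np.ones(n)) \
        + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    A[0, 0] = A[-1, -1] = -2.0 / 3.0
    A[0, 1] = A[-1, -2] = 2.0 / 3.0
    return A

n = 8
A = neumann_laplacian(n)

# Diagonal similarity D A D^{-1} with D = diag(1, sqrt(6)/3, ..., sqrt(6)/3, 1);
# note D^{-1} = diag(1, sqrt(6)/2, ..., sqrt(6)/2, 1), as in the text.
d = np.full(n, np.sqrt(6.0) / 3.0)
d[0] = d[-1] = 1.0
B = np.diag(d) @ A @ np.diag(1.0 / d)

assert np.allclose(B, B.T)                                  # B is symmetric
assert np.allclose(np.sort(np.linalg.eigvals(A).real),      # same spectrum
                   np.sort(np.linalg.eigvalsh(B)))
```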
Consider n pairs of observations (x_{1}, y_{1}), (x_{2}, y_{2}), \ldots, (x_{n}, y_{n}) such that
\begin{equation*} y_{k} = r(x_{k}) + \varepsilon_{k} \quad {\text{and}} \quad \mathbb{E}(\varepsilon_{k}) = 0 \qquad (k = 1,2,\ldots,n), \end{equation*} |
where r is the regression function to be estimated. The estimator of r(x) is usually denoted by \widehat{r}(x) and called a smoother. An estimator \widehat{r} of r is a linear smoother if, for each x , there exists a vector {\boldsymbol{\varsigma}}(x) = (\varsigma_{1}(x), \ldots, \varsigma_{n}(x))^{\top} such that
\begin{equation*} \widehat{r}(x) = \sum\limits_{k = 1}^{n} \varsigma_{k}(x) y_{k}. \end{equation*} |
Defining the vector of fitted values {{\bf{\widehat{y}}}} = (\widehat{r}_{n}(x_{1}), \ldots, \widehat{r}_{n}(x_{n}))^{\top} , it follows that
\begin{equation*} {{\bf{\widehat{y}}}} = {\boldsymbol{\Sigma}} \, {{\bf{y}}}, \end{equation*} |
where {\boldsymbol{\Sigma}} , called the smoothing matrix, is the n \times n matrix whose k^{{{\mathrm{th}}}} row is {\boldsymbol{\varsigma}}(x_{k})^{\top} , and {{\bf{y}}} = (y_{1}, \ldots, y_{n})^{\top} (see [34], page 66).
The eigendecomposition of the smoothing matrix {\boldsymbol{\Sigma}} provides a useful characterization of the properties of a smoother. In fact, if {\boldsymbol{\Sigma}} = \sum_{k = 1}^{n} \lambda_{k} {\boldsymbol{\sigma}}_{k} {\boldsymbol{\sigma}}_{k}^{\top} is the spectral decomposition of the smoothing matrix, where \lambda_{k} are the ordered eigenvalues and {\boldsymbol{\sigma}}_{k} the corresponding eigenvectors, then the fit can be meaningfully decomposed as {{\bf{\widehat{y}}}} = \sum_{k = 1}^{n} \alpha_{k} \lambda_{k} {\boldsymbol{\sigma}}_{k} , where \alpha_{k} are the coefficients of the projection of {{\bf{y}}} onto the space spanned by the eigenvectors, {{\bf{y}}} = \sum_{k = 1}^{n} \alpha_{k} {\boldsymbol{\sigma}}_{k} ; the eigenvectors {\boldsymbol{\sigma}}_{k} show which sequences are preserved or compressed via scalar multiplication. Moreover, {{\mathrm{tr}}}({\boldsymbol{\Sigma}}) = \sum_{k = 1}^{n} \lambda_{k} gives the number of degrees of freedom of a smoother, a measure of the equivalent number of parameters used to obtain the fit {{\bf{\widehat{y}}}} , which allows us to compare alternative filters according to their degree of smoothing (see [28] and the references therein).
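These identities are easy to verify on a toy example. The sketch below (Python with NumPy) uses a small hypothetical symmetric local-averaging smoother — any symmetric {\boldsymbol{\Sigma}} would do — to check both the eigenbasis decomposition of the fit and the trace formula for the degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(1)

# A small symmetric smoothing matrix, chosen only for illustration:
# each fitted value averages a point with its neighbours.
n = 5
S = np.diag(np.full(n, 0.5)) \
    + np.diag(np.full(n - 1, 0.25), 1) + np.diag(np.full(n - 1, 0.25), -1)
S[0, 0] = S[-1, -1] = 0.75              # boundary rows still sum to 1

lam, Q = np.linalg.eigh(S)              # eigenvalues lambda_k, eigenvectors sigma_k
y = rng.standard_normal(n)
alpha = Q.T @ y                         # coefficients of y in the eigenbasis

# Fitted values two ways: y_hat = S y and y_hat = sum_k alpha_k lambda_k sigma_k
assert np.allclose(S @ y, Q @ (lam * alpha))

# Degrees of freedom of the smoother: tr(S) = sum of the eigenvalues
assert np.isclose(np.trace(S), lam.sum())
```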
The smoothing matrix associated to the Beveridge-Nelson smoother (see [31] for details) when the observed series is generated by an {{\mathrm{ARIMA}}}(1, 1, 0) model with -1 < \phi < 0 and (half) bandwidth filter m = 1 is the following tridiagonal matrix:
\begin{equation*} {\boldsymbol{\Sigma}} = \left[ \begin{array}{ccccccc} \frac{1}{1 - \phi} & -\frac{\phi}{1 - \phi} & 0 & \ldots & \ldots & \ldots & 0 \\ -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} & \ddots & & & \vdots \\ 0 & -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} & 0 \\ \vdots & & & \ddots & -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} \\ 0 & \ldots & \ldots & \ldots & 0 & -\frac{\phi}{1 - \phi} & \frac{1}{1 - \phi} \end{array} \right] \end{equation*} |
(see [28]). Since the matrices {\boldsymbol{\Sigma}} and
\begin{align*} &{{\mathrm{diag}}}\left(1,\sqrt{1 - \phi},\ldots,\sqrt{1 - \phi},1 \right) \, {\boldsymbol{\Sigma}} \, {{\mathrm{diag}}}\left(1,\frac{1}{\sqrt{1 - \phi}},\ldots,\frac{1}{\sqrt{1 - \phi}},1 \right) \\ &\quad = \left[ \begin{array}{ccccccc} \frac{1}{1 - \phi} & -\frac{\phi}{\sqrt{(1 - \phi)^{3}}} & 0 & \ldots & \ldots & \ldots & 0 \\ -\frac{\phi}{\sqrt{(1 - \phi)^{3}}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} & \ddots & & & \vdots \\ 0 & -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots &\ddots & \vdots \\ \vdots & & \ddots & \ddots & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{(1 - \phi)^{2}} & 0 \\ \vdots & & & \ddots & -\frac{\phi}{(1 - \phi)^{2}} & \frac{1 + \phi^{2}}{(1 - \phi)^{2}} & -\frac{\phi}{\sqrt{(1 - \phi)^{3}}} \\ 0 & \ldots & \ldots & \ldots & 0 & -\frac{\phi}{\sqrt{(1 - \phi)^{3}}} & \frac{1}{1 - \phi} \end{array} \right] \end{align*} |
share the same eigenvalues, we are able to locate (bound) the eigenvalues of {\boldsymbol{\Sigma}} by using the results of subsection 3.2. Moreover, from the prescribed eigenvalues, an eigendecomposition of {\boldsymbol{\Sigma}} can also be obtained through the statements in subsection 3.3.
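The symmetrization above can be checked numerically. The sketch below (Python with NumPy; the helper name bn_smoothing_matrix is ours) builds {\boldsymbol{\Sigma}} for a sample \phi , applies the diagonal similarity and confirms that the resulting matrix is symmetric with the same spectrum:

```python
import numpy as np

def bn_smoothing_matrix(n, phi):
    """Smoothing matrix of the Beveridge-Nelson smoother for ARIMA(1,1,0),
    -1 < phi < 0, with (half) bandwidth m = 1, as displayed above."""
    S = np.zeros((n, n))
    np.fill_diagonal(S, (1 + phi**2) / (1 - phi)**2)
    idx = np.arange(n - 1)
    S[idx, idx + 1] = S[idx + 1, idx] = -phi / (1 - phi)**2
    # Boundary rows differ from the interior ones
    S[0, 0] = S[-1, -1] = 1 / (1 - phi)
    S[0, 1] = S[-1, -2] = -phi / (1 - phi)
    return S

n, phi = 10, -0.5
S = bn_smoothing_matrix(n, phi)

# Diagonal similarity diag(1, sqrt(1-phi), ..., sqrt(1-phi), 1) from the text
d = np.full(n, np.sqrt(1 - phi))
d[0] = d[-1] = 1.0
B = np.diag(d) @ S @ np.diag(1.0 / d)

assert np.allclose(B, B.T)              # the transform symmetrizes Sigma
assert np.allclose(np.sort(np.linalg.eigvals(S).real),
                   np.sort(np.linalg.eigvalsh(B)))
```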
In this paper, a procedure to express the eigenvalues and associated eigenvectors of a symmetric heptadiagonal quasi-Toeplitz matrix was presented, as well as an explicit formula for its inverse. The proposed method allowed us to obtain rational functions that locate the eigenvalues and closed-form formulas for the corresponding eigenvectors of the class of matrices under analysis, which are not covered by recent works on this subject, and, most of all, it leaves an open door for additional statements on symmetric quasi-Toeplitz matrices in general. The numerical example provided to highlight the differences between the quasi-Toeplitz and Toeplitz cases also raised some open questions. Indeed, although the Geršgorin theorem leads us to an interval containing all eigenvalues of generic quasi-Toeplitz matrices, it would be interesting to have a more precise tool, as in the "pure" Toeplitz case. A method that could predict the number of outliers and their asymptotic behavior as n tends to infinity would also be very welcome. Of course, another open problem closely related to the content of this paper is to obtain a block diagonalization for nonsymmetric quasi-Toeplitz matrices in the same spirit as Lemma 2.
The author declares he has not used Artificial Intelligence (AI) tools in the creation of this article.
The author would like to thank Professor Yongjian Hu for the invitation to submit the manuscript, and also the anonymous referees for their careful reading and very constructive comments, which greatly improved the final version of the paper.
This work is funded by national funds through the FCT - Fundação para a Ciência e a Tecnologia, I.P., under the scope of project UIDB/04035/2020 (GeoBioTec).
The author declares there is no conflict of interest.
[1] |
S. Shabani-Mashcool, S. A. Marashi, S. Gharaghani, NDDSA: A network- and domain-based method for predicting drug-side effect associations, Inform. Process. Manag., 57 (2020), 102357. https://doi.org/10.1016/j.ipm.2020.102357 doi: 10.1016/j.ipm.2020.102357
![]() |
[2] |
Y. J. Ding, J. J. Tang, F. Guo, Identification of drug-side effect association via multiple information integration with centered kernel alignment, Neurocomputing, 325 (2019), 211–224. https://doi.org/10.1016/j.neucom.2018.10.028 doi: 10.1016/j.neucom.2018.10.028
![]() |
[3] |
A. Lakizadeh, S. M. H. Mir-Ashrafi, Drug repurposing improvement using a novel data integration framework based on the drug side effect, Inform. Med. Unlocked, 23 (2021), 100523. https://doi.org/10.1016/j.imu.2021.100523 doi: 10.1016/j.imu.2021.100523
![]() |
[4] |
E. Pauwels, V. Stoven, Y. Yamanishi, Predicting drug side-effect profiles: a chemical fragment-based approach, BMC Bioinformatics, 12 (2011), 169. https://doi.org/10.1186/1471-2105-12-169 doi: 10.1186/1471-2105-12-169
![]() |
[5] |
S. Jamal, S. Goyal, A. Shanker, A. Grover, Predicting neurological adverse drug reactions based on biological, chemical and phenotypic properties of drugs using machine learning models, Sci. Rep., 7 (2017), 872. https://doi.org/10.1038/s41598-017-00908-z doi: 10.1038/s41598-017-00908-z
![]() |
[6] |
Y. Zheng, H. Peng, S. Ghosh, C. Lan, J. Li, Inverse similarity and reliable negative samples for drug side-effect prediction, BMC Bioinformatics, 19 (2019), 554. https://doi.org/10.1186/s12859-018-2563-x doi: 10.1186/s12859-018-2563-x
![]() |
[7] |
M. Liu, Y. Wu, Y. Chen, J. Sun, Z. Zhao, X. W. Chen, et al., Large-scale prediction of adverse drug reactions using chemical, biological, and phenotypic properties of drugs, J. Am. Med. Inform. Assoc., 19 (2012), e28–35. https://doi.org/10.1136/amiajnl-2011-000699 doi: 10.1136/amiajnl-2011-000699
![]() |
[8] |
S. Dey, H. Luo, A. Fokoue, J. Hu, P. Zhang, Predicting adverse drug reactions through interpretable deep learning framework, BMC Bioinformatics, 19 (2018), 476. https://doi.org/10.1186/s12859-018-2544-0 doi: 10.1186/s12859-018-2544-0
![]() |
[9] |
L. Chen, T. Huang, J. Zhang, M. Y. Zheng, K. Y. Feng, Y. D. Cai, et al., Predicting drugs side effects based on chemical-chemical interactions and protein-chemical interactions, BioMed Res. Int., 2013 (2013), 485034. https://doi.org/10.1155/2013/485034 doi: 10.1155/2013/485034
![]() |
[10] |
W. Zhang, F. Liu, L. Luo, J. Zhang, Predicting drug side effects by multi-label learning and ensemble learning, BMC Bioinformatics, 16 (2015), 365. https://doi.org/10.1186/s12859-015-0774-y doi: 10.1186/s12859-015-0774-y
![]() |
[11] |
N. Atias, R. Sharan, An algorithmic framework for predicting side effects of drugs, J. Comput. Biol., 18 (2011), 207–218. https://doi.org/10.1089/cmb.2010.0255 doi: 10.1089/cmb.2010.0255
![]() |
[12] |
E. Muñoz, V. Novácek, P. Y. Vandenbussche, Facilitating prediction of adverse drug reactions by using knowledge graphs and multi-label learning models, Brief. Bioinform., 20 (2017), 190–202. https://doi.org/10.1093/bib/bbx099 doi: 10.1093/bib/bbx099
![]() |
[13] | W. Zhang, Y. Chen, S. Tu, F. Liu, Q. Qu, Drug side effect prediction through linear neighborhoods and multiple data source integration, in IEEE International Conference on Bioinformatics and Biomedicine, (2016), 427–434. https://doi.org/10.1109/BIBM.2016.7822555 |
[14] | E. Muñoz, V. Novácek, P. Y. Vandenbussche, Using drug similarities for discovery of possible adverse reactions, AMIA Annu. Symp. Proc., 2016 (2016), 924–933. |
[15] | X. Zhao, L. Chen, J. Lu, A similarity-based method for prediction of drug side effects with heterogeneous information, Math. Biosci., 306 (2018), 136–144. https://doi.org/10.1016/j.mbs.2018.09.010 |
[16] | H. Liang, L. Chen, X. Zhao, X. Zhang, Prediction of drug side effects with a refined negative sample selection strategy, Comput. Math. Methods Med., 2020 (2020), 1573543. https://doi.org/10.1155/2020/1573543 |
[17] | X. Zhao, L. Chen, Z. H. Guo, T. Liu, Predicting drug side effects with compact integration of heterogeneous networks, Curr. Bioinform., 14 (2019), 709–720. https://doi.org/10.2174/1574893614666190220114644 |
[18] | X. Guo, W. Zhou, Y. Yu, Y. Ding, J. Tang, F. Guo, A novel triple matrix factorization method for detecting drug-side effect association based on kernel target alignment, BioMed Res. Int., 2020 (2020), 4675395. https://doi.org/10.1155/2020/4675395 |
[19] | Y. Ding, J. Tang, F. Guo, Identification of drug-side effect association via semi-supervised model and multiple kernel learning, IEEE J. Biomed. Health Inform., 23 (2019), 2619–2632. https://doi.org/10.1109/JBHI.2018.2883834 |
[20] | H. Tong, C. Faloutsos, J. Pan, Fast random walk with restart and its applications, in Sixth International Conference on Data Mining, (2006), 613–622. https://doi.org/10.1109/ICDM.2006.70 |
[21] | D. E. Carlin, B. Demchak, D. Pratt, E. Sage, T. Ideker, Network propagation in the Cytoscape cyberinfrastructure, PLoS Comput. Biol., 13 (2017), e1005598. https://doi.org/10.1371/journal.pcbi.1005598 |
[22] | M. Kuhn, M. Campillos, I. Letunic, L. J. Jensen, P. Bork, A side effect resource to capture phenotypic effects of drugs, Mol. Syst. Biol., 6 (2010), 343. https://doi.org/10.1038/msb.2009.98 |
[23] | M. Kuhn, D. Szklarczyk, S. Pletscher-Frankild, T. H. Blicher, C. von Mering, L. J. Jensen, et al., STITCH 4: integration of protein–chemical interactions with user data, Nucleic Acids Res., 42 (2014), D401–D407. https://doi.org/10.1093/nar/gkt1207 |
[24] | D. Weininger, SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules, J. Chem. Inf. Comput. Sci., 28 (1988), 31–36. https://doi.org/10.1021/ci00057a005 |
[25] | X. Xiao, W. Zhu, B. Liao, J. Xu, C. Gu, B. Ji, et al., BPLLDA: predicting lncRNA-disease associations based on simple paths with limited lengths in a heterogeneous network, Front. Genet., 9 (2018), 411. https://doi.org/10.3389/fgene.2018.00411 |
[26] | W. Ba-Alawi, O. Soufan, M. Essack, P. Kalnis, V. B. Bajic, DASPfind: new efficient method to predict drug-target interactions, J. Cheminformatics, 8 (2016), 15. https://doi.org/10.1186/s13321-016-0128-4 |
[27] | Z. H. You, Z. A. Huang, Z. Zhu, G. Y. Yan, Z. W. Li, Z. Wen, et al., PBMDA: a novel and effective path-based computational model for miRNA-disease association prediction, PLoS Comput. Biol., 13 (2017), e1005455. https://doi.org/10.1371/journal.pcbi.1005455 |
[28] | J. Gao, B. Hu, L. Chen, A path-based method for identification of protein phenotypic annotations, Curr. Bioinform., 16 (2021), 1214–1222. https://doi.org/10.2174/1574893616666210531100035 |
[29] | S. Kohler, S. Bauer, D. Horn, P. N. Robinson, Walking the interactome for prioritization of candidate disease genes, Am. J. Hum. Genet., 82 (2008), 949–958. https://doi.org/10.1016/j.ajhg.2008.02.013 |
[30] | Y. J. Li, J. C. Patra, Genome-wide inferring gene-phenotype relationship by walking on the heterogeneous network, Bioinformatics, 26 (2010), 1219–1224. https://doi.org/10.1093/bioinformatics/btq108 |
[31] | X. Chen, M. X. Liu, G. Y. Yan, Drug-target interaction prediction by random walk on the heterogeneous network, Mol. BioSyst., 8 (2012), 1970–1978. https://doi.org/10.1039/C2MB00002D |
[32] | L. Chen, T. Liu, X. Zhao, Inferring anatomical therapeutic chemical (ATC) class of drugs using shortest path and random walk with restart algorithms, BBA Mol. Basis Dis., 1864 (2018), 2228–2240. https://doi.org/10.1016/j.bbadis.2017.12.019 |
[33] | L. Chen, Y. H. Zhang, Z. Zhang, T. Huang, Y. D. Cai, Inferring novel tumor suppressor genes with a protein-protein interaction network and network diffusion algorithms, Mol. Ther. Methods Clin. Dev., 10 (2018), 57–67. https://doi.org/10.1016/j.omtm.2018.06.007 |
[34] | S. Lu, K. Zhao, X. Wang, H. Liu, X. Ainiwaer, Y. Xu, et al., Use of Laplacian heat diffusion algorithm to infer novel genes with functions related to uveitis, Front. Genet., 9 (2018), 425. https://doi.org/10.3389/fgene.2018.00425 |
[35] | H. Y. Liang, B. Hu, L. Chen, S. Q. Wang, Aorigele, Recognizing novel chemicals/drugs for anatomical therapeutic chemical classes with a heat diffusion algorithm, BBA Mol. Basis Dis., 1866 (2020), 165910. https://doi.org/10.1016/j.bbadis.2020.165910 |
[36] | M. Imanishi, Y. Hori, M. Nagaoka, Y. Sugiura, Design of novel zinc finger proteins: towards artificial control of specific gene expression, Eur. J. Pharm. Sci., 13 (2001), 91–97. https://doi.org/10.1016/S0928-0987(00)00212-8 |
[37] | M. Alirezaei, E. Mordelet, N. Rouach, A. C. Nairn, J. Glowinski, J. Premont, Zinc-induced inhibition of protein synthesis and reduction of connexin-43 expression and intercellular communication in mouse cortical astrocytes, Eur. J. Neurosci., 16 (2002), 1037–1044. https://doi.org/10.1046/j.1460-9568.2002.02180.x |
[38] | K. H. Ibs, L. Rink, Zinc-altered immune function, J. Nutr., 133 (2003), 1452S–1456S. https://doi.org/10.1093/jn/133.5.1452S |
[39] | Z. A. Bhutta, R. E. Black, K. H. Brown, J. M. Gardner, S. Gore, A. Hidayat, et al., Prevention of diarrhea and pneumonia by zinc supplementation in children in developing countries: pooled analysis of randomized controlled trials, J. Pediatr., 135 (1999), 689–697. https://doi.org/10.1016/S0022-3476(99)70086-7 |
[40] | D. E. Roth, S. A. Richard, R. E. Black, Zinc supplementation for the prevention of acute lower respiratory infection in children in developing countries: meta-analysis and meta-regression of randomized trials, Int. J. Epidemiol., 39 (2010), 795–808. https://doi.org/10.1093/ije/dyp391 |
[41] | D. Hulisz, Efficacy of zinc against common cold viruses: an overview, J. Am. Pharm. Assoc., 44 (2004), 594–603. https://doi.org/10.1331/1544-3191.44.5.594.Hulisz |
[42] | R. O. Suara, J. E. Crowe, Effect of zinc salts on respiratory syncytial virus replication, Antimicrob. Agents Chemother., 48 (2004), 783–790. https://doi.org/10.1128/AAC.48.3.783-790.2004 |
[43] | D. Li, L. Z. Wen, H. Yuan, Observation on clinical efficacy of combined therapy of zinc supplement and jinye baidu granule in treating human cytomegalovirus infection, Zhongguo Zhong xi yi jie he za zhi, 25 (2005), 449–451. |
[44] | F. Femiano, F. Gombos, C. Scully, Recurrent herpes labialis: a pilot study of the efficacy of zinc therapy, J. Oral Pathol. Med., 34 (2005), 423–425. https://doi.org/10.1111/j.1600-0714.2005.00327.x |
[45] | M. Singh, R. R. Das, Zinc for the common cold, Cochrane Database Syst. Rev., 6 (2013), CD001364. https://doi.org/10.1002/14651858.CD001364.pub4 |
[46] | M. Lazzerini, H. Wanzira, Oral zinc for treating diarrhoea in children, Cochrane Database Syst. Rev., 12 (2016), CD005436. https://doi.org/10.1002/14651858.CD005436.pub5 |
[47] | F. Sakai, S. Yoshida, S. Endo, H. Tomita, Double-blind, placebo-controlled trial of zinc picolinate for taste disorders, Acta Oto-Laryngol., 122 (2002), 129–133. https://doi.org/10.1080/00016480260046517 |
[48] | A. R. Watson, A. Stuart, F. E. Wells, I. B. Houston, G. M. Addison, Zinc supplementation and its effect on taste acuity in children with chronic renal failure, Hum. Nutr. Clin. Nutr., 37 (1983), 219–225. |
[49] | J. Cervantes, A. E. Eber, M. Perper, V. M. Nascimento, K. Nouri, J. E. Keri, The role of zinc in the treatment of acne: a review of the literature, Dermatol. Ther., 31 (2018), e12576. https://doi.org/10.1111/dth.12576 |
[50] | A. Y. Bedikian, M. Valdivieso, L. K. Heilbrun, R. H. Withers, G. P. Bodey, E. J. Freireich, Glycerol: an alternative to dexamethasone for patients receiving brain irradiation for metastatic disease, South. Med. J., 73 (1980), 1210–1214. |
[51] | M. S. Frank, M. C. Nahata, M. D. Hilty, Glycerol: a review of its pharmacology, pharmacokinetics, adverse reactions, and clinical use, Pharmacotherapy, 1 (1981), 147–160. https://doi.org/10.1002/j.1875-9114.1981.tb03562.x |
[52] | J. Wang, Y. Ren, S. F. Wang, L. D. Kan, L. J. Zhou, H. M. Fang, et al., Comparative efficacy and safety of glycerol versus mannitol in patients with cerebral oedema and elevated intracranial pressure: a systematic review and meta-analysis, J. Clin. Pharm. Ther., 46 (2021), 504–514. https://doi.org/10.1111/jcpt.13314 |
[53] | J. Wang, Y. Ren, L. J. Zhou, L. D. Kan, H. Fan, H. M. Fang, Glycerol infusion versus mannitol for cerebral edema: a systematic review and meta-analysis, Clin. Ther., 43 (2021), 637–649. https://doi.org/10.1016/j.clinthera.2021.01.010 |
[54] | E. Righetti, M. G. Celani, T. A. Cantisani, R. Sterzi, G. Boysen, S. Ricci, Glycerol for acute stroke, Cochrane Database Syst. Rev., 2 (2004), CD000096. https://doi.org/10.1002/14651858.CD000096.pub2 |
[55] | A. Frei, C. Cottier, P. Wunderlich, E. Lüdin, Glycerol and dextran combined in the therapy of acute stroke. A placebo-controlled, double-blind trial with a planned interim analysis, Stroke, 18 (1987), 373–379. https://doi.org/10.1161/01.STR.18.2.373 |
[56] | E. Lin, Glycerol utilization and its regulation in mammals, Annu. Rev. Biochem., 46 (1977), 765–795. https://doi.org/10.1146/annurev.bi.46.070177.004001 |
[57] | Y. Yu, C. Kumana, I. Lauder, Y. Cheung, F. Chan, M. Kou, et al., Treatment of acute cortical infarct with intravenous glycerol. A double-blind, placebo-controlled randomized trial, Stroke, 24 (1993), 1119–1124. https://doi.org/10.1161/01.STR.24.8.1119 |
[58] | B. á Rogvi-Hansen, G. Boysen, Intravenous glycerol treatment of acute stroke – a statistical review, Cerebrovasc. Dis., 2 (1992), 11–13. https://doi.org/10.1159/000108981 |
[59] | H. L. Philpott, S. Nandurkar, J. Lubel, P. R. Gibson, Drug-induced gastrointestinal disorders, Frontline Gastroenterol., 5 (2014), 49–57. https://doi.org/10.1136/flgastro-2013-100316 |
[60] | S. Saleem, How to induce arrhythmias with dopamine, in Arrhythmia Induction in the EP Lab, Springer, (2019), 81–89. https://doi.org/10.1007/978-3-319-92729-9_9 |
[61] | R. Ceravolo, C. Rossi, E. Del Prete, U. Bonuccelli, A review of adverse events linked to dopamine agonists in the treatment of Parkinson's disease, Expert Opin. Drug Saf., 15 (2016), 181–198. https://doi.org/10.1517/14740338.2016.1130128 |