1. Introduction
Let $A_0, A_1, \ldots, A_n$ be $n+1$ real symmetric $n\times n$ matrices. For any ${\bf c} = (c_1, c_2, \ldots, c_n)^T \in \mathbb{R}^n$, define the matrix
$$A({\bf c}) = A_0 + \sum_{i=1}^{n} c_i A_i, \qquad (1.1)$$
and let $\{\lambda_i(A({\bf c}))\}_{i=1}^{n}$ denote the eigenvalues of $A({\bf c})$, ordered so that $\lambda_1(A({\bf c})) \leq \lambda_2(A({\bf c})) \leq \cdots \leq \lambda_n(A({\bf c}))$. In this note, the inverse eigenvalue problem (IEP) is defined as follows: given $n$ real numbers $\{\lambda_i^*\}_{i=1}^{n}$ with $\lambda_1^* \leq \lambda_2^* \leq \cdots \leq \lambda_n^*$, find a vector ${\bf c}^* \in \mathbb{R}^n$ such that
$$\lambda_i(A({\bf c}^*)) = \lambda_i^*, \quad i = 1, 2, \ldots, n. \qquad (1.2)$$
The IEP arises in a wide range of fields, including the inverse Toeplitz eigenvalue problem [1,2,3], structural dynamics [4], molecular spectroscopy [5], the pole assignment problem [6], the inverse Sturm-Liouville problem [7], as well as problems in mechanics [8,9], structural integrity assessment [10], geophysics [11], particle physics [12], numerical analysis [13], and dynamical systems [14]. For further insights into the diverse practical uses, underlying mathematical principles, and computational techniques of general IEPs, readers may consult the comprehensive review articles [15,16] and the related literature [17,18].
The IEP (1.2) can be represented mathematically as a system of nonlinear equations:
$${\bf f}({\bf c}) := \big(\lambda_1(A({\bf c})) - \lambda_1^*,\ \lambda_2(A({\bf c})) - \lambda_2^*,\ \ldots,\ \lambda_n(A({\bf c})) - \lambda_n^*\big)^T = 0. \qquad (1.3)$$
In situations where the given eigenvalues are distinct, i.e.,
$$\lambda_1^* < \lambda_2^* < \cdots < \lambda_n^*, \qquad (1.4)$$
Newton's method can be applied to the nonlinear equation (1.3) under (1.4). However, as noted in [19,20,21], Newton's method has two disadvantages: (ⅰ) it requires computing the exact eigenvectors at each iteration; (ⅱ) it requires solving a Jacobian equation at each iteration. These two facts make it inefficient from a computational point of view, especially when the problem size n is large. Thus, efforts have focused on removing these disadvantages: (ⅰ) a Newton-like method was proposed in [22,23] that computes approximate eigenvectors instead of exact ones; the quadratic convergence rate of this type of Newton-like method was re-proved in [24]. To alleviate the over-solving problem, Chen et al. proposed in [25] an inexact Newton-like method that stops the inner iterations before convergence. (ⅱ) Shen et al. proposed in [26,27] an Ulm-type method that avoids solving an approximate Jacobian equation at each outer iteration, and hence reduces the instability caused by possible ill-conditioning in solving such equations.
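For concreteness, the classical Newton iteration for (1.3) in the distinct case can be sketched as follows. This is an illustrative toy problem (an additive IEP with diagonal basis matrices and synthetic data), not the algorithm of this paper; the Jacobian entries use the standard formula $[J({\bf c})]_{ij} = q_i({\bf c})^T A_j q_i({\bf c})$, where $q_i({\bf c})$ are unit eigenvectors of $A({\bf c})$.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
M = 0.1 * rng.standard_normal((n, n))
A0 = M + M.T                                 # synthetic symmetric "coupling" part
basis = [np.diag(e) for e in np.eye(n)]      # toy choice A_j = diag(e_j): additive IEP

def A_of(c):
    """Assemble A(c) = A_0 + sum_j c_j A_j as in (1.1)."""
    return A0 + sum(cj * Bj for cj, Bj in zip(c, basis))

def newton_step(c, lam_star):
    """One Newton step for f(c) = lambda(A(c)) - lambda* = 0."""
    lam, Q = np.linalg.eigh(A_of(c))         # ascending eigenvalues and eigenvectors
    # Hellmann-Feynman: d lambda_i / d c_j = q_i^T A_j q_i
    Jac = np.array([[Q[:, i] @ basis[j] @ Q[:, i] for j in range(n)]
                    for i in range(n)])
    return c - np.linalg.solve(Jac, lam - lam_star)

c_star = np.arange(1.0, n + 1)               # chosen "solution", so f(c*) = 0
lam_star = np.linalg.eigvalsh(A_of(c_star))
c = c_star + 1e-3 * rng.standard_normal(n)   # start near the solution
errs = [np.linalg.norm(c - c_star)]
for _ in range(3):
    c = newton_step(c, lam_star)
    errs.append(np.linalg.norm(c - c_star))
```

Each step requires a full eigendecomposition and a linear solve with the Jacobian, which is exactly the pair of costs (ⅰ) and (ⅱ) that the methods surveyed above try to avoid.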
Note that all of the methods mentioned above are quadratically convergent. In order to speed up the convergence rate, Chen et al. [28] proposed a two-step Newton-type method with superquadratic convergence, in which the approximate Jacobian equations are solved inexactly. To reduce the cost further, Wen et al. proposed in [29] a two-step inexact Newton-Chebyshev-like method with cubic root-convergence rate, in which the approximate eigenvectors are obtained by one step of the inverse power method and solving the approximate Jacobian equations is avoided by using the Chebyshev method to approximate the inverse of the Jacobian matrix. In 2022, Wei Ma designed a two-step Ulm-Chebyshev-like Cayley transform method [30] that uses a Cayley transform to find the approximate eigenvectors. However, the convergence analyses of the above methods break down in the absence of distinct eigenvalues, because f fails to be differentiable and the eigenvectors fail to be continuous at multiple eigenvalues [22]. When multiple eigenvalues are present, the numerical methods in the references above remain only quadratically convergent when the analysis is extended to this case.
In this paper, motivated by [30], we propose a two-step Ulm-Chebyshev-like Cayley transform method for solving the IEP (1.2), and we analyze its performance in the presence of multiple eigenvalues. Under an assumption similar to the one used in previous studies, using the Rayleigh quotient as an approximate eigenvalue of a symmetric matrix together with estimates of the eigenvalues, eigenvectors, and the relative generalized Jacobian, we show that the proposed method is still cubically convergent. Numerical experiments demonstrate the efficiency of our method, and comparisons with some known methods are made.
The structure of this paper is as follows. We give some notations and preliminary results of the relative generalized Jacobian and some useful lemmas in Section 2. A novel method, the two-step Ulm-Chebyshev-like Cayley transform approach, is introduced in Section 3 and our main convergence theorems are established for the new method in Section 4. Experimental results are presented in the final section.
2. Preliminaries
Let n be a positive integer. Let Rn denote the n-dimensional Euclidean space, S a subset of Rn, and clS the closure of S. We use B(x,δ1) to denote the open ball in Rn with center x∈Rn and radius δ1>0. Let ‖⋅‖ and ‖⋅‖F denote the Euclidean vector norm (or its induced matrix norm) and the Frobenius norm, respectively, and let I be the identity matrix of appropriate dimension. Then, by (2.3.7) in [31], we have
We define
and
where δ0 and ρ0 are defined in Lemma 2.1, β and ϵ0 are defined in Lemma 2.2, and λ∗ is defined in (2.9).
2.1. Relative generalized Jacobian
Consider a locally Lipschitz continuous function h:Rn→Rm. The Jacobian of h, whenever it exists, is denoted by h′, and Dh denotes the set of points at which h is differentiable. Moreover, the B-differential of h at x∈Rn is defined as in [32].
Considering the composite nonsmooth function h:=φ∘ψ, in which φ:Rt→Rm is nonsmooth and ψ:Rn→Rt is continuously differentiable, the generalized Jacobian ∂Qh(x) [33] and relative generalized Jacobian ∂Q|Sh(x) [34] are respectively defined by
and
For c∈Rn, write
and define
By (1.3), applying the concept of the generalized Jacobian to f, we have [34]
In particular, if J(c) is a singleton, we write ∂Qf(c)={J(c)}. Let
Then, let c∈S and f be continuously differentiable at c. Moreover,
Thus, we get the following relative generalized Jacobian [34]:
2.2. Preliminary results
Throughout this paper, let the given eigenvalues $\{\lambda_i^*\}_{i=1}^{n}$ satisfy $\lambda_1^* \leq \lambda_2^* \leq \cdots \leq \lambda_n^*$. For simplicity and without loss of generality, we assume that
$$\lambda_1^* = \lambda_2^* = \cdots = \lambda_t^* < \lambda_{t+1}^* < \cdots < \lambda_n^*, \qquad (2.8)$$
where $1\leq t\leq n$. Write
Then a solution of the IEP (1.2) can be written by c∗ and Q(c∗) as
where Q(c∗) is an orthogonal matrix. Recall that Q(c) is defined by (2.7). Let Q(c)∈Q(c) and write Q(c)=[Q(1)(c),Q(2)(c)], where Q(1)(c)∈Rn×t and Q(2)(c)∈Rn×(n−t). Let c∗ be a solution of the IEP with (2.8). Define
Clearly, Π is the eigenprojection of A(c∗) for λ∗1 in (2.8). Given an orthogonal matrix P=[P(1),P(2)], where P(1)∈Rn×t and P(2)∈Rn×(n−t), we obtain the QR factorization of ΠP(1) by
where R is a t×t nonsingular upper triangular matrix and ˜Q(1)(c∗) is an n×t matrix whose columns are orthonormal. Let
Obviously, ˜Q(c∗)∈Q(c∗). Moreover, we define the error matrix
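The construction above can be illustrated numerically. The following sketch uses synthetic data and assumes the standard reading of the text: $\Pi = Q^{(1)}(c^*)Q^{(1)}(c^*)^T$ is the orthogonal eigenprojection onto the eigenspace of the multiple eigenvalue, and the QR factorization of $\Pi P^{(1)}$ re-orthonormalizes the projected block.

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 6, 2
# synthetic symmetric matrix whose smallest eigenvalue has multiplicity t
lam = np.array([1.0, 1.0, 2.0, 3.0, 4.0, 5.0])
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = U @ np.diag(lam) @ U.T

_, Q = np.linalg.eigh(A)
Q1 = Q[:, :t]
Pi = Q1 @ Q1.T                       # eigenprojection for the multiple eigenvalue

P = np.linalg.qr(rng.standard_normal((n, n)))[0]   # an arbitrary orthogonal P
Qt1, R = np.linalg.qr(Pi @ P[:, :t])               # QR of Pi P^(1), R is t-by-t

# the Q-factor has orthonormal columns and lies in the eigenspace: Pi Qt1 = Qt1
orth_err = np.linalg.norm(Qt1.T @ Qt1 - np.eye(t))
span_err = np.linalg.norm(Pi @ Qt1 - Qt1)
```

Generically, $\Pi P^{(1)}$ has full column rank, so $R$ is nonsingular and the Q-factor provides the orthonormal block $\tilde Q^{(1)}(c^*)$ aligned with the eigenspace.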
Now, we state the following two lemmas, which are useful for our proof.
Lemma 2.1. [35,36] Let c∗∈Rn and the eigenvalues of the matrix A(c∗) satisfy (2.8). Then, there exist two positive numbers δ0 and ρ0 such that, for each c∈B(c∗,δ0) and [Q(1)(c), Q(2)(c)]∈Q(c), we get
and
Lemma 2.2. There exist two positive numbers ϵ0 and β such that, for any orthogonal matrix P=[P(1),P(2)], if ‖E‖=‖[P(1)−ΠP(1), P(2)−Q(2)(c∗)]‖≤ϵ0 and the skew-symmetric matrix X is defined by eX=PT˜Q(c∗), then we get
in which X(11) is the t-by-t leading block of X. Moreover, if ‖X‖F<1, then
Proof. (2.13) can be found in [22,35,36]. Noting that
If ‖X‖F≤1, we get
which implies that
3. The two-step Ulm-Chebyshev-like Cayley transform method
We first recall the given eigenvalues in (2.8). Suppose that Pk is the current estimate of Q(c∗) and Yk is a skew-symmetric matrix, i.e., YTk=−Yk. Let us write Q(c∗)=PkeYk. Then, by using the Taylor series of the exponential function, we can express (2.10) as
The vector ck is updated as ck+1 by neglecting the second–order term of the above equality in Yk as
We obtain ck+1 by equating the diagonal elements in (3.1) as
in which, Jk and bk are defined by
On the other hand, equating the off-diagonal in (3.1),
and, assuming that the given eigenvalues are defined in (2.8), we obtain the skew-symmetric matrix Yk as
Furthermore, by using the Cayley transform, we calculate the matrix Pk+1 as
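A minimal sketch of the Cayley-transform update, assuming the conventional form $(I + \tfrac{1}{2}Y_k)P_{k+1}^T = (I - \tfrac{1}{2}Y_k)P_k^T$ suggested by Steps 2(c) and 2(g) of Algorithm Ⅰ; since $Y_k$ is skew-symmetric, the update preserves orthogonality exactly.

```python
import numpy as np

def cayley_update(P, Y):
    """One Cayley-transform step: solve (I + Y/2) P_next^T = (I - Y/2) P^T."""
    n = P.shape[0]
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * Y, (I - 0.5 * Y) @ P.T).T

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
Y = M - M.T                                        # a skew-symmetric direction
P = np.linalg.qr(rng.standard_normal((n, n)))[0]   # orthogonal current iterate
P_next = cayley_update(P, Y)

# the Cayley transform of a skew-symmetric matrix is orthogonal,
# so orthogonality of the iterates is preserved to machine precision
orth_err = np.linalg.norm(P_next.T @ P_next - np.eye(n))
```

Note that $I + \tfrac{1}{2}Y_k$ is always nonsingular for skew-symmetric $Y_k$, so the update is well defined without any step-size restriction.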
Finally, by (3.3), (3.4), and the two-step Ulm-Chebyshev iterative procedure [30], we can propose the following two-step Ulm-Chebyshev-like Cayley transform method for solving the IEP with multiple eigenvalues.
Algorithm Ⅰ: The two-step Ulm-Chebyshev-like Cayley transform method
Step 1. Given c0∈Rn, calculate the orthogonal eigenvectors {qi(c0)}ni=1 of A(c0). Let
and J0=J(c0) and the vector b0 are defined as follows:
Let B0∈Rn×n satisfy
where μ is a positive constant.
Step 2. For k=0, until convergence, do:
(a) Calculate yk by
(b) Form the skew-symmetric matrix Yk:
where the matrix A(yk) is defined by (1.1).
(c) Calculate P(yk)=[p1(yk),p2(yk),…,pn(yk)]T=[vk1,vk2,…,vkn]T by solving
where hkj is the jth column of Hk=(I−12Yk)PTk.
(d) Calculate the approximate eigenvalues of A(yk) via
(e) Calculate ck+1 by
(f) Form the skew-symmetric matrix ˆYk:
where the matrix A(ck+1) is defined by (1.1).
(g) Calculate Pk+1=[pk+11,pk+12,…,pk+1n]T=[ˆvk1,ˆvk2,…,ˆvkn]T by solving
where ˆhkj is the jth column of ˆHk=(I−12ˆYk)(P(yk))T.
(h) Form the matrix Jk+1 and the vector bk+1:
(i) Calculate the Chebyshev matrices Bk+1 by
Remark 3.1. For k=0,1,2,…, from (c) and (g) in Step 2 in Algorithm Ⅰ, we have
and
Remark 3.2. When the given eigenvalues are not distinct, the convergence analysis of the two-step Ulm-Chebyshev-like method in [30] no longer applies, owing to the non-differentiability of f and the discontinuity of the eigenvectors corresponding to multiple eigenvalues [22]. Based on the relative generalized Jacobian of the eigenvalue function [32], we propose the improved method above for solving the IEP (1.2) with multiple eigenvalues. Clearly, in the case t=1, Algorithm Ⅰ reduces to the two-step Ulm-Chebyshev-like method proposed in [30] for the distinct case.
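The approximate-inverse update of Step 2(i) is, following [30], the order-three Chebyshev (hyperpower) iteration $B_{k+1} = B_k + B_k(2I - J_kB_k)(I - J_kB_k)$, which also appears explicitly in the proof of Theorem 4.1. A quick synthetic check (with a made-up well-conditioned matrix $J$, not a Jacobian from the paper) confirms the cubic contraction of the residual $R_k = I - JB_k$, since algebraically $R_{k+1} = R_k^3$:

```python
import numpy as np

def chebyshev_update(B, J):
    """Order-3 hyperpower update of an approximate inverse B of J."""
    I = np.eye(J.shape[0])
    return B + B @ (2 * I - J @ B) @ (I - J @ B)

rng = np.random.default_rng(2)
n = 8
J = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # well-conditioned test matrix
B = np.eye(n)                                       # crude initial approximate inverse

res = [np.linalg.norm(np.eye(n) - J @ B)]
for _ in range(3):
    B = chebyshev_update(B, J)
    res.append(np.linalg.norm(np.eye(n) - J @ B))
# the residual norms decrease roughly like r, r**3, r**9, r**27
```

This is why the method never solves a Jacobian equation: it maintains $B_k \approx J_k^{-1}$ by matrix multiplications alone, and the cubic contraction of $I - J B_k$ matches the cubic convergence targeted by the analysis in Section 4.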
4. Convergence analysis
In this section, we analyze the convergence of Algorithm Ⅰ. To ensure cubic convergence, one might assume that all J∈∂Qf(c∗) are nonsingular; however, this is unnecessarily strong, since a suitable choice of eigenvectors already renders J nonsingular in general. Therefore, we assume only that all J∈∂Q|Sf(c∗) are nonsingular.
Let ck, yk, Yk, ˆYk, Pk, P(yk), Jk and Bk be generated by Algorithm Ⅰ with initial point c0. For k=0,1,2,…, let
and
Then, we can get the following lemmas.
Lemma 4.1. Let the given eigenvalues {λ∗i}ni=1 be defined as in (2.8). Then, in Algorithm Ⅰ, there exists a number 0<δ2≤1 such that for k=0,1,2,…, if ‖yk−c∗‖≤δ2, ‖ck+1−c∗‖≤δ2, ‖Ek‖≤δ2 and ‖E(yk)‖≤δ2, then
and
where K and ρ are defined by (2.1) and (2.2), respectively.
Proof. Let eXk:=PTk˜Q(c∗), where Xk is the skew-symmetric matrix and ˜Q(c∗) is defined by (2.11) and (2.12) with P=Pk. By ‖Ek‖≤δ2≤ϵ0 and Lemma 2.2, we have
where β is a positive number and X(11)k is the t-by-t leading block of Xk. On the other hand, by ˜Q(c∗)∈Q(c∗), we derive
Thus, by the fact of eXk=∞∑l=0(Xlkl!), we can express (4.10) as
where
By (2.1) and Lemma 2.2, we get
Thus, (4.3) is seen to hold by (4.11) and (4.12).
In order to prove (4.4) and (4.5), assume that ‖yk−c∗‖⩽δ2. We note by (4.11) that
where t+1≤i≤n, 1≤j≤n,i>j. Combining this with (3.7), we have
in which t+1≤i≤n, 1≤j≤n,i>j. By (4.12), Lemma 2.1, and the fact that {pki}ni=1 are orthogonal, we get
In addition, by the fact that [Yk]ij=0, for each i,j∈[1,t], we have
Then, it follows from (2.1) and (4.9) that
and so
Thus, thanks to the fact that β‖Ek‖≤βδ2≤1 and (2.2), one has
Since ‖Ek‖⩽δ2 and ‖yk−c∗‖⩽δ2, it follows from (2.4) and (4.14) that ‖Yk‖⩽1. Consequently,
Therefore, in the following, we estimate ‖P(yk)−Pk‖ and ‖E(yk)‖. Indeed, by (3.11),
This together with (4.14), (4.15), as well as the orthogonality of Pk indicate that (4.4) holds.
As for (4.5), we note by (3.11) and Xk that
Combining this with eXk=∞∑l=0(Xlkl!), we get
Since Pk is orthogonal, note by (2.14) and (4.15) that
Thus, we derive by using (2.13), (4.13), and (4.14),
where the second inequality holds because of the fact that β‖Ek‖⩽1 while the last inequality holds because of the definition of C. Write P(yk)=[P(yk)(1) P(yk)(2)], where P(yk)(1)∈Rn×t and P(yk)(2)∈Rn×(n−t). Since (I−Π)˜Q(1)(c∗)=0, where 0 is a zero matrix, we have
and
Hence, by (4.2) and (4.16), we obtain
Therefore, (4.5) is proved by (2.2) and (4.17). Now define eYk:=P(yk)T˜Q(c∗), where Yk is the corresponding skew-symmetric matrix; then (4.6)–(4.8) follow similarly. □
Lemma 4.2. Let ρ0 and δ0 be defined in Lemma 2.1. If ‖yk−c∗‖≤δ0 and ˆλ(yk)=(ˆλ1(yk),ˆλ2(yk),…,ˆλn(yk))T, in which ˆλi(yk)=(pi(yk))TA(yk)pi(yk), 1≤i≤n, then
Proof. From the diagonal elements of Λ∗+YkΛ∗−Λ∗Yk−P(yk)TA(c∗)P(yk), we obtain from (4.6) that
which together with Lemma 3.1 gives
Therefore,
which, together with the fact that ‖ˆλ(yk)‖≤‖ˆλ(yk)−λ∗‖+‖λ∗‖, gives (4.18). □
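The role of the Rayleigh quotient in Lemma 4.2 can be illustrated numerically: for a symmetric matrix, the Rayleigh quotient of an $O(\varepsilon)$-accurate unit eigenvector approximates the corresponding eigenvalue to $O(\varepsilon^2)$, which is what makes the eigenvalue estimates of Step 2(d) accurate enough for cubic convergence. The data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
M = rng.standard_normal((n, n))
A = 0.5 * (M + M.T)                          # synthetic symmetric matrix
lam, Q = np.linalg.eigh(A)

eps = 1e-5
v = Q[:, 0] + eps * rng.standard_normal(n)   # O(eps)-perturbed eigenvector
v /= np.linalg.norm(v)
rq = v @ A @ v                               # Rayleigh quotient of v
rq_err = abs(rq - lam[0])                    # O(eps**2) error, not O(eps)
```

The underlying identity is $v^TAv - \lambda_1 = \sum_{i>1}(\lambda_i-\lambda_1)|\langle v,q_i\rangle|^2 \leq (\lambda_n-\lambda_1)\sin^2\theta(v,q_1)$, so the eigenvalue error is quadratic in the eigenvector error regardless of the spectral gap.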
Lemma 4.3. Let the Jacobian matrix of ˜J(c∗) and the vector ˜b be defined as follows:
Then, we have
Proof. Let eYk:=P(yk)T˜Q(c∗) where Yk is the skew-symmetric matrix and ˜Q(c∗) is defined by (2.11) and (2.12) with P=P(yk). By ‖E(yk)‖≤δ2≤ϵ0 and Lemma 2.2, we get
where β is a positive number. By ˆΛ(yk)=P(yk)TA(yk)P(yk) and e−YkˆΛ(yk)eYk=˜Q(c∗)TA(yk)˜Q(c∗), we have
where
By Lemma 2.2, we get
Thus,
By the diagonal entries of ˜Q(c∗)TA(yk)˜Q(c∗)+YkˆΛ(yk)−ˆΛ(yk)Yk−ˆΛ(yk) and (4.20), we get
Therefore, by the definitions of ˜J(c∗),ˆλ(yk),˜b and A(yk), we can get (4.19). □
Lemma 4.4. [35] Let J0 be defined in (3.5). Suppose that J0 is invertible. Let k≥1 such that
Then, the matrix Jk is nonsingular and
Lemma 4.5. Let the vector c∗∈clS and let the given eigenvalues of the matrix A(c∗) satisfy (2.8). Let all J∈∂Q|Sf(c∗) be nonsingular. Then there exist three numbers 0<τ≤1, 0<δ<τ, and 0≤μ≤δ such that, for any c0∈B(c∗,δ)⋂S with ‖c0−c∗‖≤δ and each k=0,1,2,…, the conditions
and
imply
Proof. Since all J∈∂Q|Sf(c∗) are nonsingular, it follows from Theorem 3.2 in [34] that, for each c0∈B(c∗,δ0)⋂S, we have
where ξ>0 and δ0>0. From (2.3), (2.5), and (2.6), we know that
By (2.6) and (4.22), we get
and then, from Lemma 2.2, we obtain
where β>0. By the definitions of Xk and the Taylor series of the exponential function eXk, we have
Since Pk is orthogonal, note by (2.14) and (4.27) that
Similarly, we also have
Thus, by (2.5), (4.22), and (4.28), we have
Therefore, we further have
Since 3n≥2n+1 for each n≥1, we obtain from (4.30) that
which, together with (2.1) and (2.6), yields
Consequently, using Lemma 4.4, we can derive that Jk is nonsingular and moreover
Furthermore, by (2.5), (2.6), and (4.24), we have
On the other hand, considering the diagonal elements of Λ∗+XkΛ∗−Λ∗Xk−PTkA(c∗)Pk, we obtain from (4.3) that
Therefore, by the definitions of λ∗, Jk, bk and A(c∗), we have
From (3.6), we get
It follows that
which together with (4.22)–(4.24), (4.32), and (4.33) gives
By (2.6), (4.22), and (4.34), we have
and
which, together with (4.22) and (4.28), yields
This together with the orthogonality of Pk and ˜Q(c∗) and the Cauchy-Schwarz inequality indicates that
Thus, we get
By (4.24), (4.32), and (4.37), we have
By (4.34)–(4.36) and Lemma 4.1, we obtain
which, together with (2.6), (4.22), and (4.34), gives
which, together with (4.35) and Lemmas 4.2 and 4.3, gives
Together with (3.9) and {\boldsymbol \lambda}^{*} = \tilde{J}({\bf c}^{*}) {\bf c}^{*}+\tilde{ {\bf b}} , we get
It follows from (4.26), (4.32), (4.34), (4.38), and (4.40) that
Finally, by (4.31), (4.32), and (4.41), we have (4.25). □
Next, we can analyze the convergence of Algorithm Ⅰ.
Theorem 4.1. Let the vector {\bf c}^{*}\in clS and the given eigenvalues \{\lambda_{i}^{*}\}_{i = 1}^{n} satisfy (2.8). Suppose that all J\in \partial_{Q|S} {\bf f}({\bf c}^*) are nonsingular. Then Algorithm Ⅰ is locally cubically convergent.
Proof. We first prove by mathematical induction that (4.22)–(4.24) hold for all k\geq 0 . Clearly, by the assumptions \mu\leq\delta and \| {\bf c}^{0}- {\bf c}^{*}\|\leq\delta , (4.23) and (4.24) hold trivially for k = 0 . From Lemma 2.1, we have
and this gives that (4.22) is true for k = 0 .
Now assume that (4.22)–(4.24) hold for all k\leq m-1 . Recalling (2.5) and (2.6), we get
which together with Lemma 4.1, (2.5), (2.6), (4.34), and (4.39) with k = m-1 gives
It follows from Lemma 4.1, (2.5), (4.39), and (4.41) with k = m-1 that
and
Thus, (4.22) and (4.23) hold for k = m . From Lemma 4.5 with k = m-1 , we get \|I-B_{m-1}J_{m-1}\|\leq\tau\Big(\frac{\delta}{\tau}\Big)^{3^{m-1}} and \|B_{m-1}\|\leq2\gamma . Thanks to (4.29) (with k = m ), one can see that
which implies that
Consequently,
Hence,
It follows that
Notice that B_{m} = B_{m-1}+B_{m-1}(2I-J({\bf c}^{m})B_{m-1})(I-J({\bf c}^{m})B_{m-1}) , and we obtain
Together with (2.5), (4.26), and (4.42), one has
and therefore, (4.24) is true for k = m and the proof is complete. □
5. Numerical experiments
In this section, we present the computational performance of Algorithm Ⅰ on IEPs with multiple eigenvalues. Algorithm Ⅰ is compared with the two-step Ulm-Chebyshev-like Cayley transform method (TUCT method), the inexact Cayley transform method (ICT method), and the Ulm-like Cayley transform method (UCT method), as presented in [30,35,36], respectively. The tests were carried out in MATLAB 7.10 on a PC with a 3.0 GHz Intel Pentium Ⅳ CPU. We first consider the inverse eigenvalue problem for a matrix with Toeplitz structure, as previously investigated in [35,36].
The QMR method [37] was used to solve all linear systems in all algorithms, via the MATLAB QMR function with the maximum number of iterations set to 1000. In particular, the approximate Jacobian equations arising in the inexact Cayley transform method were solved by the preconditioned QMR method with a specific stopping tolerance, using MATLAB's incomplete LU factorization as the preconditioner; the drop tolerance in LUINC(A, drop-tolerance) is fixed at 0.001. Furthermore, the stopping tolerance for the linear systems in all methods is set to machine precision, ensuring that the desired solutions are attained. The outer (Newton) iterations of every algorithm are terminated when
Example 5.1. (See [35,36]) Referring to the Toeplitz matrices stated in [35,36], consider \{A_{i}\}_{i = 0}^{n} as
Hence, the matrix A({\bf c}) can be characterized as a symmetric Toeplitz matrix where the first column is identical to the vector {\bf c} .
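The basis matrices of Example 5.1 admit a simple sketch, assuming the standard choice for the Toeplitz IEP [35,36]: $A_0 = 0$ and $A_i$ the symmetric 0/1 Toeplitz matrix with ones on the $(i-1)$st sub- and superdiagonals (so $A_1 = I$), for which $A({\bf c})$ is exactly the symmetric Toeplitz matrix with first column ${\bf c}$.

```python
import numpy as np

n = 5

def basis(i, n):
    """A_i: ones on the (i-1)st sub- and superdiagonals, so that A_1 = I."""
    E = np.diag(np.ones(n - (i - 1)), k=i - 1)
    return E if i == 1 else E + E.T

c = np.arange(1.0, n + 1)                              # intended first column
A_c = sum(c[i - 1] * basis(i, n) for i in range(1, n + 1))

# A(c) should equal the symmetric Toeplitz matrix T with T[i, j] = c[|i - j|]
T = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])
toeplitz_err = np.linalg.norm(A_c - T)
```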
Here, we consider three cases: n = 100, 200, 300 . For each n , we generate a vector \tilde{ {\bf c}}^{*} such that |\lambda_{k+1}(\tilde{ {\bf c}}^{*})-\lambda_{k}(\tilde{ {\bf c}}^{*})| < \eta for some 1\leq k\leq n-1 , where
Set
Subsequently, we choose \{\lambda_{i}^{*}\}_{i = 1}^{n} as the prescribed eigenvalues. Clearly, with this way of selecting \{\lambda_{i}^{*}\}_{i = 1}^{n} , multiple eigenvalues are present. Since all of the algorithms are locally convergent, the initial guess {\bf c}_{0} is formed by chopping the components of \tilde{ {\bf c}}^{*} to six decimal places for n = 100, \ 200, \ 300 . Table 1 presents the average values of \|P_{k}^{T}A(\mathbf{c}^{k})P_{k}-\Lambda^{*}\| , where "it." denotes the averaged number of outer iterations. Table 2 presents the average total number of outer iterations N_{0} over the ten test scenarios and the average total number of inner iterations N_{i} required to solve the IEPs. A comparison of the averaged CPU times of all algorithms over the ten tests for different n is shown in Table 3.
From the data in Table 1, we see that Algorithm Ⅰ requires fewer iterations than the ICT and UCT methods, while the TUCT method fails to converge. The data in Table 2 show that Algorithm Ⅰ outperforms the inexact Cayley transform method and the Ulm-like Cayley transform method. Table 3 shows that Algorithm Ⅰ requires less CPU time than the other approaches.
Example 5.2. [22] This is an inverse problem with multiple eigenvalues and n = 8 . We are given that B = I+WW^T , where
Define
We report our numerical results for different starting points: (a) {\bf c}^0 = 10^{-5}\cdot(1, 1, 1, 1, 1, 1, 1, 1)^T ; (b) {\bf c}^0 = (0, 0, 0, 0, 0, 0, 0, 0)^T .
Table 4 presents the average values of \|P_{k}^{T}A(\mathbf{c}^{k})P_{k}-\Lambda^{*}\| , where "it." denotes the averaged number of outer iterations.
It can be observed from Table 4 that Algorithm Ⅰ requires fewer iterations than the ICT and UCT methods; note that Algorithm Ⅰ is not globally convergent but is locally cubically convergent.
To further illustrate the effectiveness of Algorithm Ⅰ, we present a practical engineering application in vibrations [15,16]. We consider the vibration of a taut string with n beads. Figure 1 shows such a model for the case where n = 4 . Here, we assume that the n beads are placed along the string, where the ends of the string are clamped. The mass of the j th bead is denoted by m_j . The horizontal lengths between masses m_j and m_{j+1} (and between each bead at each end and the clamped support) are set to be a constant L . The horizontal tension is set to be a constant T . Then the equation of motion is governed by
where y_0 = y_{n+1} = 0 . That is, the ends of the string are fixed. The matrix form of (5.1) is given by
where y(t) = (y_1(t), y_2(t), \ldots, y_n(t))^T , C = {{\rm{diag}}}(c_1, c_2, \ldots, c_n) with c_j = \frac{T}{m_jL} , and J is the discrete Laplacian matrix
The general solution of (5.2) is given in terms of the eigenvalue problem
where \lambda is the square of the natural frequency of the vibration system and the nonzero vector y accounts for the interplay between the masses. The inverse problem for the beaded string is to compute the masses \{m_j\}_{j = 1}^n so that the resulting system has a prescribed set of natural frequencies.
It is easy to check that the eigenvalues of J are given by
Thus, J is symmetric and positive definite, and CJ is similar to L^TCL , where L is the Cholesky factor of J = LL^T [31]. The inverse problem is then converted into the form of the IEP with A_0 = 0 and A_j = L^TE_jL , where E_j = {{\rm{diag}}}(e_j) for j = 1, 2, \ldots, n . The beaded string data in Examples 5.3 and 5.4 come from the website http://www.caam.rice.edu/~beads.
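A short sketch of this conversion, with made-up coefficients: $J$ is the tridiagonal discrete Laplacian, whose eigenvalues are the classical $4\sin^2\!\big(\frac{j\pi}{2(n+1)}\big)$, $j = 1, \ldots, n$, and the transformed basis matrices are $A_j = L^TE_jL$ with $J = LL^T$.

```python
import numpy as np

n = 6
J = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # discrete Laplacian
L = np.linalg.cholesky(J)                              # J = L L^T (lower L)

# classical spectrum of the discrete Laplacian confirms positive definiteness
lam_exact = 4 * np.sin(np.arange(1, n + 1) * np.pi / (2 * (n + 1))) ** 2
spec_err = np.linalg.norm(np.linalg.eigvalsh(J) - np.sort(lam_exact))

# A_j = L^T E_j L, so sum_j c_j A_j = L^T C L, which is similar to C J
A_basis = [L.T @ np.diag(np.eye(n)[j]) @ L for j in range(n)]
c = np.linspace(1.0, 2.0, n)                           # made-up positive coefficients
lhs = np.linalg.eigvalsh(sum(cj * Aj for cj, Aj in zip(c, A_basis)))
rhs = np.sort(np.linalg.eigvals(np.diag(c) @ J).real)
similarity_err = np.linalg.norm(lhs - rhs)
```

The check verifies that the symmetric IEP formulation $L^TCL$ carries exactly the spectrum of the nonsymmetric pencil $CJ$, which is why the prescribed natural frequencies can be matched through the symmetric problem.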
Example 5.3. This is an inverse problem for the beaded string with n = 4 beads, where
We report our numerical result for the starting point c^0 = 10^{-5}\cdot(27720.80, 47929.08, 47929.08, 27720.80)^T .
Example 5.4. This is an inverse problem for the beaded string with n = 6 beads, where
We report our numerical results for the starting point c^0 = 10^{-5}\cdot(58081.57, 33592.71, 58081.57, 58081.57, 33592.71, 58081.57)^T .
The information in Table 5 presents the average values of \|P_{k}^{T}A(\mathbf{c}^{k})P_{k}-\Lambda^{*}\| , and Table 6 displays the computed masses for the beaded string.
From Table 5, we see that Algorithm Ⅰ requires fewer iterations than the ICT and UCT methods, while the TUCT method fails to converge. Table 6 shows that the desired masses are recovered. All of these numerical observations agree with our predictions and further validate our theoretical results.
6. Conclusions
In this paper, we have proposed a two-step Ulm-Chebyshev-like Cayley transform method for solving the IEP (1.2) with multiple eigenvalues, which avoids solving (approximate) Jacobian equations at each outer iteration. Furthermore, the proposed algorithm is proved to have cubic convergence under the following nonsingularity condition on the relative generalized Jacobian at a solution {\bf c}^{*} : each J\in \partial_{Q|S} {\bf f}({\bf c}^*) is nonsingular. The method scales to larger problems for the Toeplitz inverse eigenvalue problem and is efficient for small and medium-sized problems in the general setting. Effective methods for general large-scale inverse eigenvalue problems remain a subject for future study.
Author contributions
Wei Ma, Zhenhao Li and Yuxin Zhang: Algorithms, Software, Numerical examples, Writing-original draft, Writing-review & editing. All authors contributed equally to this article. All authors have read and approved the final version of the manuscript for publication.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This work was supported by the Henan Province College Student Innovation and Entrepreneurship Training Program Project (No: 202410481003).
Conflict of interest
The authors declare no conflicts of interest.