
Matrix valued inverse problems on graphs with application to mass-spring-damper systems

  • Received: 01 June 2018 Revised: 01 August 2019
  • Primary: 05C22, 05C50; Secondary: 35R30

  • We consider the inverse problem of finding matrix valued edge or node quantities in a graph from measurements made at a few boundary nodes. This is a generalization of the problem of finding resistors in a resistor network from voltage and current measurements at a few nodes, but where the voltages and currents are vector valued. The measurements come from solving a series of Dirichlet problems, i.e. finding vector valued voltages at some interior nodes from voltages prescribed at the boundary nodes. We give conditions under which the Dirichlet problem admits a unique solution and study the degenerate case where the edge weights are rank deficient. Under mild conditions, the map that associates the matrix valued parameters to boundary data is analytic. This has practical consequences to iterative methods for solving the inverse problem numerically and to local uniqueness of the inverse problem. Our results allow for complex valued weights and give also explicit formulas for the Jacobian of the parameter to data map in terms of certain products of Dirichlet problem solutions. An application to inverse problems arising in networks of springs, masses and dampers is presented.

    Citation: Travis G. Draper, Fernando Guevara Vasquez, Justin Cheuk-Lum Tse, Toren E. Wallengren, Kenneth Zheng. Matrix valued inverse problems on graphs with application to mass-spring-damper systems[J]. Networks and Heterogeneous Media, 2020, 15(1): 1-28. doi: 10.3934/nhm.2020001




    We study a class of inverse problems where the objective is to find matrix valued quantities defined on the edges or vertices (nodes) of a graph from measurements made at a few boundary nodes. The scalar case corresponds to the problem of finding resistors in a resistor network from electrical measurements made at a few nodes, see e.g. [14]. As in the scalar case, the vector potential at all the nodes can be found from its value at a few nodes by solving a Dirichlet problem, i.e. finding a vector potential satisfying a vector version of conservation of currents (Kirchhoff's node law) and having a prescribed value at certain nodes. We study in detail the Dirichlet problem and give conditions guaranteeing it admits a unique solution (sections 2 and 3). We also include cases where the matrix valued weights are rank deficient and uniqueness of the Dirichlet problem holds only up to a known subspace.

    Then we present different inverse problems, where either the matrix valued weights on the edges or the vertices or even their eigenvalues are the unknown parameters that are sought after. All these inverse problems share a common structure that is given in section 4. Any inverse problem that fits this mold has certain desirable properties: mainly the parameter to data map (i.e. the forward map) is analytic and its Jacobian can be computed in terms of products of internal "states" (e.g. solutions to the Dirichlet problem). Analyticity can be used to guarantee local uniqueness for such inverse problems, for almost any parameter within a region of interest provided the Jacobian is injective for one parameter (a generalization of the results in [6]). Moreover, we show that Newton's method applied to such problems is very likely to produce valid steps. Then in sections 5 and 6 we formulate inverse problems with matrix valued weights and determine conditions under which they have the structure of section 4. Some of the inverse problems we consider arise in networks of springs, masses and dampers.

    The discrete conductivity inverse problem consists of finding the resistors in a resistor network from voltage and current measurements made at a few nodes, assuming the underlying graph is known. For this problem, the uniqueness results in [13,14,10,12,11] apply to circular planar graphs (i.e. planar graphs that can be embedded in a disk and where the nodes at which measurements are made can be laid on the disk boundary) and real conductivities. A different approach is taken in [9] where a monotonicity property inspired from the continuum [1] is used to show that if the conductivities satisfy a certain inequality then they can be uniquely determined from measurements, without specific assumptions on the underlying graph. The lack of uniqueness is shown for cylindrical graphs in [21]. For complex conductivities, a condition for "uniqueness almost everywhere" regardless of the underlying graph is given in [6]. Uniqueness almost everywhere means that the set of conductivities that have the same boundary data lie in a zero measure set and that the linearized problem is injective for almost all conductivities in some region.

    Uniqueness for the discrete Schrödinger problem is considered in the real scalar case on circular planar graphs in [2,3,4]. This problem involves a resistor network with known underlying graph and resistors but where every node is connected to the ground (zero voltage) via a resistor with unknown resistance. These unknown resistors are a discrete version of the Schrödinger potential in the Schrödinger equation, and the goal is to find them from measurements made at a few nodes. A discrete Liouville identity [5] can be used to relate the discrete Schrödinger inverse problem for certain Schrödinger potentials to the discrete conductivity inverse problem, also on circular planar graphs. A condition guaranteeing uniqueness almost everywhere for complex valued potentials without an assumption on the graph is given in [6].

    One of the consequences of the present study is a uniqueness almost everywhere result for matrix valued inverse problems on graphs. To the best of our knowledge there are no uniqueness results for the inverse problem with matrix valued edge or node quantities other than the characterization and synthesis results for networks of springs, masses and dampers (discussed in more detail in section 6) that are derived in [7,18,17]. These results solve an inverse problem for networks of springs, masses and dampers that assumes we are free to choose the graph topology. Indeed the constructions in [7,18,17] start from data generated by these networks (the displacement to forces map) and give a network that reproduces this data. We emphasize that in the present study, the underlying graph is always assumed to be known.

    We use the set theory notation $Y^X$ for the set of functions from $X$ to $Y$. For example, $u \in (\mathbb{C}^d)^X$ is a function $u: X \to \mathbb{C}^d$ that to some $x \in X$ associates $u(x) \in \mathbb{C}^d$. For a matrix $a \in \mathbb{C}^{d\times d}$ we write $a \succ 0$ (resp. $a \succeq 0$) to say that $a$ is positive definite (resp. positive semidefinite). When the same notation is used for $a \in (\mathbb{C}^{d\times d})^X$, the generalized inequality is understood componentwise, e.g. for $a \in (\mathbb{C}^{d\times d})^X$, $a \succ 0$ means $a(x) \succ 0$ for all $x \in X$. When we write $a \succeq b$ (or $a \succ b$) we mean $a - b \succeq 0$ (or $a - b \succ 0$). We use the notation $a = \operatorname{Re} a + \jmath \operatorname{Im} a$ for the real part $\operatorname{Re} a$ and imaginary part $\operatorname{Im} a$ of $a$. The complex conjugate is $\overline{a} = \operatorname{Re} a - \jmath \operatorname{Im} a$.

    By ordering a finite set $X$, it can be identified with $\{1,\dots,|X|\}$, where $|X|$ is the cardinality of $X$. Thus $(\mathbb{C}^d)^X$ can be identified with vectors in $\mathbb{C}^{d|X|}$. Similarly, upon fixing an ordering for another finite set $Y$, we can identify linear operators $(\mathbb{C}^d)^X \to (\mathbb{C}^d)^Y$ with matrices in $\mathbb{C}^{d|Y|\times d|X|}$.

    For $A \in \mathbb{C}^{m\times n}$, we denote by $\operatorname{vect}(A) \in \mathbb{C}^{mn}$ the vector representation of the matrix $A$, i.e. the vector obtained by stacking the columns of $A$ in their natural ordering. Similarly, for $a \in (\mathbb{C}^{d\times d})^X$ we denote by $\operatorname{vect}(a) \in \mathbb{C}^{d^2|X|}$ the vector representation of $a$, i.e. the vector obtained by stacking the vector representations $\operatorname{vect}(a(x))$ of the matrices $a(x)$, $x \in X$, in the predetermined ordering of $X$.

    In addition to the usual matrix vector product, we also use a block-wise outer product ($\odot$), the Hadamard product ($\circ$) and the Kronecker product ($\otimes$). For $u,v \in (\mathbb{C}^d)^X$, the (block-wise) outer product $u \odot v \in (\mathbb{C}^{d\times d})^X$ is

    $$(u \odot v)(x) = u(x)[v(x)]^T, \quad x \in X. \tag{1}$$

    The Hadamard or componentwise product of two vectors $a,b \in \mathbb{C}^n$ is denoted by $a \circ b$ and is given by $(a \circ b)(i) = a(i)b(i)$, $i = 1,\dots,n$. The Kronecker product of two matrices $A \in \mathbb{C}^{n\times m}$ and $B \in \mathbb{C}^{p\times q}$ is the $np \times mq$ complex matrix $A \otimes B$ given by (see e.g. [20])

    $$A \otimes B = \begin{bmatrix} A_{11}B & \cdots & A_{1m}B \\ \vdots & & \vdots \\ A_{n1}B & \cdots & A_{nm}B \end{bmatrix}. \tag{2}$$

    When we write $u^T v$ for $u,v \in (\mathbb{C}^d)^X$, we mean

    $$u^T v = \sum_{x \in X} [u(x)]^T v(x) \in \mathbb{C}, \tag{3}$$

    and similarly for $u^* v = \overline{u}^T v$, where the bar denotes the complex conjugate (understood componentwise). With this in mind we can define a norm for $u \in (\mathbb{C}^d)^X$ by $\|u\|^2 = u^* u$.

    We work with graphs $G = (V,E)$, where $V$ is the set of vertices or nodes (assumed finite) and $E$ is the set of edges, $E \subseteq \{\{i,j\} \mid i,j \in V,\ i \neq j\}$. All graphs we consider are undirected and have no self-edges. We partition the nodes $V = B \cup I$ into a (nonempty) set $B$ of "boundary" nodes and a set $I$ of "interior" nodes. The boundary nodes are where we can make measurements; the interior nodes are considered inaccessible.

    By (discrete) conductivity $\sigma$ we mean a symmetric matrix valued function defined on the edges, i.e. $\sigma \in (\mathbb{C}^{d\times d})^E$. Here symmetric means $[\sigma(e)]^T = \sigma(e)$ for all $e \in E$. By (discrete) Schrödinger potential $q$ we mean a symmetric matrix valued function defined on the vertices, i.e. $q \in (\mathbb{C}^{d\times d})^V$.

    The $d$-dimensional discrete gradient is the linear operator $\nabla: (\mathbb{C}^d)^V \to (\mathbb{C}^d)^E$ defined for $u \in (\mathbb{C}^d)^V$ by

    $$(\nabla u)(\{i,j\}) = u(i) - u(j), \quad \{i,j\} \in E.$$

    The discrete gradient assumes an edge orientation that is fixed a priori and that is irrelevant in the remainder of this paper.

    The weighted graph Laplacian is the linear map $L_\sigma: (\mathbb{C}^d)^V \to (\mathbb{C}^d)^V$ defined by

    $$L_\sigma = \nabla^T \operatorname{diag}(\sigma)\, \nabla, \tag{4}$$

    where we used the linear operator $\operatorname{diag}(\sigma): (\mathbb{C}^d)^E \to (\mathbb{C}^d)^E$, which is defined for $v \in (\mathbb{C}^d)^E$ by $(\operatorname{diag}(\sigma)v)(e) = \sigma(e)v(e)$, $e \in E$. Its matrix representation is a block diagonal matrix with the $\sigma(e)$ on its diagonal. The operator $\nabla^T: (\mathbb{C}^d)^E \to (\mathbb{C}^d)^V$ is the adjoint of the $d$-dimensional discrete gradient $\nabla$.

    The discrete Schrödinger operator associated with a conductivity $\sigma$ and a Schrödinger potential $q$ is a block diagonal perturbation (with blocks of size $d\times d$) of the weighted graph Laplacian, i.e.

    $$L_\sigma + \operatorname{diag}(q). \tag{5}$$
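    To make definitions (4) and (5) concrete, here is a minimal sketch (in Python with NumPy; the function names and the edge-list representation are ours, not the paper's) that assembles the $d$-dimensional discrete gradient, the weighted graph Laplacian with matrix valued edge weights, and the discrete Schrödinger operator.

```python
import numpy as np

def discrete_gradient(n_nodes, edges, d):
    """d-dimensional discrete gradient: maps (C^d)^V -> (C^d)^E.
    The edge orientation ("u(i) minus u(j)") is fixed arbitrarily, as in the text."""
    G = np.zeros((d * len(edges), d * n_nodes))
    for k, (i, j) in enumerate(edges):
        G[d*k:d*(k+1), d*i:d*(i+1)] = np.eye(d)
        G[d*k:d*(k+1), d*j:d*(j+1)] = -np.eye(d)
    return G

def weighted_laplacian(n_nodes, edges, sigma, d):
    """L_sigma = grad^T diag(sigma) grad, eq. (4); sigma is a list of d x d edge weights."""
    G = discrete_gradient(n_nodes, edges, d)
    D = np.zeros((d * len(edges), d * len(edges)), dtype=complex)
    for k, s in enumerate(sigma):
        D[d*k:d*(k+1), d*k:d*(k+1)] = s
    return G.T @ D @ G

def schrodinger_operator(n_nodes, edges, sigma, q, d):
    """L_sigma + diag(q), eq. (5); q is a list of d x d node weights."""
    L = weighted_laplacian(n_nodes, edges, sigma, d)
    for i, qi in enumerate(q):
        L[d*i:d*(i+1), d*i:d*(i+1)] += qi
    return L
```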

    We now give two concrete examples of problems that can be described using matrix valued conductivities and Schrödinger potentials.

    Example 1. We consider a network of springs, masses and dampers based on the graph appearing in black in fig. 1. Since this is a planar network we take $d = 2$. We associate to each edge a spring with spring constant $k_{0i}$ and a damper in parallel with damping constant $c_{0i}$, $i = 1,\dots,3$. These physical quantities are defined more precisely in section 6. Each node is associated with a mass $m_i$ and also a damping constant $\nu_i$, $i = 0,\dots,3$, corresponding to the mass moving in a viscous fluid. If we subject the nodes to time harmonic external forces of the form $f_i(t) = \exp[\jmath\omega t]\hat f_i$ (where $t$ is time and $\omega$ the angular frequency), the displacements of the nodes are also time harmonic and satisfy the $8\times 8$ complex linear system

    Figure 1.  A simple spring, mass and damper network that we use in example 1. Here $V = \{0,1,2,3\}$ and $E = \{\{0,1\},\{0,2\},\{0,3\}\}$. The node positions at equilibrium are given in black and correspond to $x_0 = (0,0)^T$ and $x_i = (\cos(2\pi i/3), \sin(2\pi i/3))^T$, $i = 1,\dots,3$. A displaced configuration is given in blue, where the new node positions are $x_i + u_i$, with displacements $u_i$ for $i = 0,\dots,3$.

    $$(-\omega^2 M + \jmath\omega C + K)\hat u = \hat f, \tag{6}$$

    where $\hat u \equiv [\hat u_0^T, \hat u_1^T, \hat u_2^T, \hat u_3^T]^T$ and similarly for $\hat f$. The matrices in (6) are given as follows.

    ● The matrix $M$ is the mass matrix and is given by $M = \operatorname{diag}(m)$, where $m: V \to \mathbb{R}^{2\times 2}$ associates to each vertex a $2\times 2$ matrix, namely $m(i) = m_i I$ for $i = 0,\dots,3$, where $I$ denotes the identity matrix of appropriate dimension.

    ● The matrix $K$ is the stiffness matrix, and can be written as $K = \nabla^T\operatorname{diag}(\sigma)\,\nabla$, where $\nabla$ is the discrete gradient written under the edge ordering of fig. 1,

    $$\nabla = \begin{bmatrix} I & -I & & \\ I & & -I & \\ I & & & -I \end{bmatrix}, \tag{7}$$

    and the "conductivity" $\sigma: E \to \mathbb{R}^{2\times 2}$ is given by

    $$\sigma(\{0,i\}) = k_{0i} P_{0i}, \quad i = 1,\dots,3, \tag{8}$$

    where $P_{0i}$ is the orthogonal projector onto the direction $x_i - x_0$ of edge $\{0,i\}$, i.e.

    $$P_{0i} = \frac{(x_i - x_0)(x_i - x_0)^T}{(x_i - x_0)^T(x_i - x_0)}. \tag{9}$$

    ● The matrix $C$ is the damping matrix and can be written as $C = C_E + C_V$, where $C_E$ accounts for the edge dampers and $C_V$ for the viscous damping at the nodes. These matrices are given by

    $$C_E = \nabla^T\operatorname{diag}(\mu_E)\,\nabla \quad\text{and}\quad C_V = \operatorname{diag}(\nu), \tag{10}$$

    where $\mu_E: E \to \mathbb{R}^{2\times 2}$ and $\nu: V \to \mathbb{R}^{2\times 2}$ are given by

    $$\mu_E(\{0,i\}) = c_{0i} P_{0i},\quad i = 1,\dots,3, \qquad\text{and}\qquad \nu(i) = \nu_i I,\quad i = 0,\dots,3. \tag{11}$$

    The detailed derivation of these relations for general networks of springs, masses and dampers appears later in section 6. Nevertheless we can already see that

    $$K + \jmath\omega C_E = \nabla^T\operatorname{diag}(\sigma + \jmath\omega\mu_E)\,\nabla \tag{12}$$

    is a discrete Laplacian with complex symmetric matrix valued edge weights $\sigma + \jmath\omega\mu_E$. We emphasize that these weights are in general not Hermitian. Moreover

    $$-\omega^2 M + \jmath\omega C_V = \operatorname{diag}(-\omega^2 m + \jmath\omega\nu) \tag{13}$$

    corresponds to complex symmetric matrix valued node weights given by $-\omega^2 m + \jmath\omega\nu$. Again we emphasize that the Schrödinger potential we obtain is in general not Hermitian. Thus we can write the matrix of the system (6) as a discrete Schrödinger operator (5), since

    $$-\omega^2 M + \jmath\omega C + K = L_{\sigma + \jmath\omega\mu_E} + \operatorname{diag}(-\omega^2 m + \jmath\omega\nu). \tag{14}$$
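    The identity (14) is easy to verify numerically for the network of fig. 1. The sketch below (Python/NumPy, our own construction; the spring, damper and mass constants are arbitrary illustrative values) assembles $M$, $C$ and $K$ from (7)–(11) and checks that the dynamical matrix of (6) equals the Schrödinger operator with weights $\sigma + \jmath\omega\mu_E$ and $-\omega^2 m + \jmath\omega\nu$.

```python
import numpy as np

d, omega = 2, 3.0                                    # planar network, some angular frequency
x = [np.array([0.0, 0.0])] + [np.array([np.cos(2*np.pi*i/3), np.sin(2*np.pi*i/3)]) for i in (1, 2, 3)]
edges = [(0, 1), (0, 2), (0, 3)]
k  = {e: 1.0 + 0.5*i for i, e in enumerate(edges)}   # spring constants k_{0i} (illustrative)
c  = {e: 0.1*(i + 1) for i, e in enumerate(edges)}   # edge damping constants c_{0i} (illustrative)
m  = np.array([2.0, 1.0, 1.0, 1.0])                  # nodal masses m_i (illustrative)
nu = np.array([0.05, 0.05, 0.05, 0.05])              # nodal viscous damping nu_i (illustrative)

def proj(e):
    """Orthogonal projector P_{0i} onto the edge direction, eq. (9)."""
    t = x[e[1]] - x[e[0]]
    return np.outer(t, t) / (t @ t)

grad = np.zeros((d*len(edges), d*4))                 # discrete gradient, eq. (7)
for r, (i, j) in enumerate(edges):
    grad[d*r:d*r+d, d*i:d*i+d] = np.eye(d)
    grad[d*r:d*r+d, d*j:d*j+d] = -np.eye(d)

def edge_laplacian(weights):
    """grad^T diag(weights) grad for 2 x 2 edge weights."""
    D = np.zeros((d*len(edges), d*len(edges)), dtype=complex)
    for r, e in enumerate(edges):
        D[d*r:d*r+d, d*r:d*r+d] = weights[e]
    return grad.T @ D @ grad

K  = edge_laplacian({e: k[e]*proj(e) for e in edges})       # stiffness matrix, eqs. (8)-(9)
CE = edge_laplacian({e: c[e]*proj(e) for e in edges})       # edge dampers, eqs. (10)-(11)
CV = np.kron(np.diag(nu), np.eye(d))                        # viscous damping at the nodes
M  = np.kron(np.diag(m), np.eye(d))                         # mass matrix

lhs = -omega**2*M + 1j*omega*(CE + CV) + K                  # matrix of the linear system (6)
rhs = edge_laplacian({e: (k[e] + 1j*omega*c[e])*proj(e) for e in edges}) \
      + np.kron(np.diag(-omega**2*m + 1j*omega*nu), np.eye(d))
print(np.allclose(lhs, rhs))                                # verifies identity (14)
```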

    Example 2. We show how to use matrix valued conductivities and Schrödinger potentials to view the Laplacian of a cylindrical graph $C = P_k \times G$ with scalar weights as a matrix valued Schrödinger operator on the graph $P_k$, a path with $k$ nodes, vertices $V(P_k) = \{1,\dots,k\}$ and edges $E(P_k) = \{\{1,2\},\dots,\{k-1,k\}\}$. Here $\times$ denotes the Cartesian product between graphs. Such cylindrical graphs arise e.g. in a finite difference discretization of the conductivity equation on a rectangle with a Cartesian grid, as illustrated in fig. 2.

    Figure 2.  The Laplacian for a scalar conductivity on the cylindrical graph $C \equiv P_5 \times P_3$ can be seen as a matrix valued Schrödinger operator on the graph $P_5$, as explained in example 2. To fix ideas, $s_4 \in \mathbb{R}^2$ represents the conductivities of $C$ within the 4th group in red and defines the matrix valued Schrödinger potential $q(4)$. The vector $s_{2,3} \in \mathbb{R}^3$ represents the conductivities of the 3 edges between the 2nd and 3rd groups and is used to define the matrix valued conductivity $\sigma(\{2,3\})$.

    Let $s \in (0,\infty)^{E(C)}$ be a scalar conductivity on the cylindrical graph $C$. We view $s$ as a vector and split it into the sub-vectors $s_j \in (0,\infty)^{E(G)}$, $j = 1,\dots,k$, and $s_{j,j+1} \in (0,\infty)^{V(G)}$, $j = 1,\dots,k-1$. The sub-vector $s_j$ represents the scalar conductivity of the $j$th copy of the graph $G$. The sub-vector $s_{j,j+1}$ corresponds to the conductivities linking layer $j$ to layer $j+1$. Define the matrix valued conductivity $\sigma \in (\mathbb{R}^{|V(G)|\times|V(G)|})^{E(P_k)}$ by $\sigma(\{j,j+1\}) = \operatorname{diag} s_{j,j+1}$, $j = 1,\dots,k-1$, and the matrix valued Schrödinger potential $q \in (\mathbb{R}^{|V(G)|\times|V(G)|})^{V(P_k)}$ by $q(j) = L_{s_j}$, i.e. the Laplacian of the graph induced by the vertices in the $j$th copy of $G$, $j = 1,\dots,k$. Then with an appropriate ordering of the vertices we have

    $$L_\sigma + \operatorname{diag}(q) = L_s.$$

    Example 3. To further motivate the symmetry assumptions we make on the matrix valued conductivities and Schrödinger potentials, notice that the Laplacian on the cylindrical graph $C = P_k \times G$ of example 2 can be used to express Kirchhoff's node law in a circuit made of resistors whose conductances are given by $s \in (0,\infty)^{E(C)}$. In other words, the equation $(L_s u)_i = 0$ imposes that the sum of currents at the $i$th node of $C$ be equal to zero. If we allow the conductances to be complex, i.e. $s \in \{z \in \mathbb{C} \mid \operatorname{Re} z > 0\}^{E(C)}$, then $s$ is the admittance of the circuit elements at a particular operating frequency. Proceeding as in example 2, the complex matrix valued conductivity $\sigma$, defined on the edges of $P_k$ by $\sigma(\{j,j+1\}) = \operatorname{diag} s_{j,j+1}$, must be complex symmetric (and not Hermitian). The complex matrix valued Schrödinger potential $q$, defined on the nodes of $P_k$ by $q(j) = L_{s_j}$, must also be complex symmetric (and not Hermitian).

    For a conductivity $\sigma \in (\mathbb{C}^{d\times d})^E$ and a Schrödinger potential $q \in (\mathbb{C}^{d\times d})^V$, the $\sigma,q$ Dirichlet problem consists in finding $u \in (\mathbb{C}^d)^V$ satisfying

    $$\left\{\begin{aligned} \big((L_\sigma + \operatorname{diag}(q))u\big)_I &= 0, \quad\text{and}\\ u_B &= g, \end{aligned}\right. \tag{15}$$

    where $g \in (\mathbb{C}^d)^B$ is the Dirichlet boundary condition and the subscript $I$ (resp. $B$) is used to denote restriction of a quantity in $(\mathbb{C}^d)^V$ to one in $(\mathbb{C}^d)^I$ (resp. $(\mathbb{C}^d)^B$). The Dirichlet to Neumann map, when it exists, is the linear mapping $\Lambda_{\sigma,q}: (\mathbb{C}^d)^B \to (\mathbb{C}^d)^B$ defined by

    $$\Lambda_{\sigma,q}\, g = \big((L_\sigma + \operatorname{diag}(q))u\big)_B, \tag{16}$$

    where $u$ solves the Dirichlet problem (15) with boundary condition $u_B = g \in (\mathbb{C}^d)^B$. We call $\Lambda_{\sigma,q}$ a Dirichlet to Neumann map since this terminology is used when $q = 0$ in the scalar case ($d = 1$), see e.g. [10,13,14]. In the scalar case with $q = 0$, the conditions on the interior nodes in (15) correspond to conservation of currents (Kirchhoff's node law) in a resistor network. Similarly, (16) represents the net currents flowing out of the boundary nodes $B$ when the voltage is set to $g$ on the boundary. These currents can be interpreted as the Neumann boundary data corresponding to the Dirichlet boundary data $g$, hence the name for $\Lambda_{\sigma,0}$.

    The Dirichlet to Neumann map is well defined e.g. when the solution to the Dirichlet problem is uniquely determined by the boundary condition.1 Conditions guaranteeing Dirichlet problem uniqueness are given in the next theorem, together with a formula for the Dirichlet to Neumann map.

    1In section 3.3 we consider Dirichlet problems that do not admit a unique solution and yet the Dirichlet to Neumann map is well defined.

    Theorem 2.1. The $\sigma,q$ Dirichlet problem on a connected graph with connected interior admits a unique solution when $\sigma \in (\mathbb{C}^{d\times d})^E$ and $q \in (\mathbb{C}^{d\times d})^V$ are symmetric and one of the two following conditions is satisfied.

    (i) $\operatorname{Re}\sigma \succ 0$ and $\operatorname{Re} q_I \succ -\lambda_{\min}\big((L_{\operatorname{Re}\sigma})_{II}\big)$.

    (ii) $\operatorname{Re} q_I \succ 0$ and $(L_{\operatorname{Re}\sigma})_{II} \succ -\lambda_{\min}\big(\operatorname{diag}(\operatorname{Re} q_I)\big)$.

    When either of the two conditions above holds, the Dirichlet to Neumann map can be written as

    $$\Lambda_{\sigma,q} = L_{BB} + \operatorname{diag}(q_B) - L_{BI}\big(L_{II} + \operatorname{diag}(q_I)\big)^{-1} L_{IB}, \tag{17}$$

    where we dropped the subscript $\sigma$ in the blocks $(L_\sigma)_{BB}$, etc., for clarity. This is the Schur complement of the block $(L_\sigma + \operatorname{diag}(q))_{II}$ in the matrix $L_\sigma + \operatorname{diag}(q)$.

    Proof. The proof appears in section 3.2.

    In the previous theorem, $\lambda_{\min}(A)$ denotes the smallest eigenvalue of a real symmetric matrix $A$. Recall that we identify operators $A: (\mathbb{C}^d)^V \to (\mathbb{C}^d)^V$ with $\mathbb{C}^{d|V|\times d|V|}$ matrices. We use $A_{XY}$, $X,Y \in \{I,B\}$, to denote the submatrix of $A$ with rows (resp. columns) associated with the vertices in $X$ (resp. $Y$).
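    For reference, here is a short sketch (Python/NumPy; the function and variable names are ours) of how the Schur complement formula (17) can be evaluated once $L_\sigma$ and $q$ have been assembled, assuming one of the conditions of theorem 2.1 holds so that the interior block is invertible. The interior solve in the last line is the same one that yields $u_I$ in the Dirichlet problem (15).

```python
import numpy as np

def dirichlet_to_neumann(L, q_blocks, boundary, interior, d):
    """Schur complement formula (17):
    Lambda_{sigma,q} = L_BB + diag(q_B) - L_BI (L_II + diag(q_I))^{-1} L_IB.
    L is the d|V| x d|V| weighted graph Laplacian, q_blocks a list of d x d node
    weights, boundary/interior are lists of node indices."""
    dofs = lambda nodes: np.concatenate([np.arange(d*i, d*(i+1)) for i in nodes])
    b, i = dofs(boundary), dofs(interior)
    A = np.array(L, dtype=complex)
    for v, qv in enumerate(q_blocks):
        A[d*v:d*(v+1), d*v:d*(v+1)] += qv          # A = L_sigma + diag(q)
    return A[np.ix_(b, b)] - A[np.ix_(b, i)] @ np.linalg.solve(A[np.ix_(i, i)], A[np.ix_(i, b)])
```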

    Unfortunately, theorem 2.1 and the expression (17) for the Dirichlet to Neumann map do not apply to one of the main applications of our results: static spring networks. As we see in more detail in section 6.1, the linearization of Hooke's law we use allows for non-physical floppy modes, i.e. non-zero displacements that can be made with zero forces. A generalization of the static spring network problem is to consider symmetric conductivities with $\operatorname{Re}\sigma \succeq 0$. In this situation, floppy modes may also arise if there are edges $e$ for which $\operatorname{Re}\sigma(e)$ has a non-trivial nullspace. They can be defined as follows.

    Definition 2.2. A non-zero $z \in (\mathbb{C}^d)^V$ is said to be a floppy mode for a symmetric conductivity $\sigma \in (\mathbb{C}^{d\times d})^E$ if $z$ solves the equation

    $$\left\{\begin{aligned} (L_\sigma z)_I &= 0, \\ z_B &= 0. \end{aligned}\right. \tag{18}$$

    If $z$ is a floppy mode, then the solution to the $\sigma,0$ Dirichlet problem cannot be unique. Indeed, if $u$ is a solution to the $\sigma,0$ Dirichlet problem, then so is $u + \alpha z$ for any scalar $\alpha$. The following theorem shows that even in the degenerate case $\operatorname{Re}\sigma \succeq 0$, $q = 0$, there are situations where the Dirichlet problem admits a solution that is unique up to floppy modes.

    Theorem 2.3. The $\sigma,0$ Dirichlet problem on a connected graph with connected interior and with $\sigma(e) \neq 0$ for all $e \in E$ admits a solution, unique up to floppy modes, when $\operatorname{Re}\sigma \succeq 0$ and for each $e \in E$ the inclusion $N(\operatorname{Re}\sigma(e)) \subseteq N(\operatorname{Im}\sigma(e))$ holds. In this case, the Dirichlet to Neumann map can be written as

    $$\Lambda_{\sigma,0} = L_{BB} - L_{BI}\, Q\, (Q^T L_{II} Q)^{-1} Q^T L_{IB}, \tag{19}$$

    where for clarity we dropped the subscript $\sigma$ in the blocks $(L_\sigma)_{BB}$, etc. The matrix $Q$ is real with orthonormal columns ($Q^T Q = I$) and satisfies $R(Q) = R(L_{II})$, i.e. its columns form an orthonormal basis of $R(L_{II})$, and $Q$ is of size $d|I| \times \dim(R(L_{II}))$. Moreover $Q$ depends only on the ranges of $\operatorname{Re}\sigma(e)$ for $e \in E$.

    Here we denote the nullspace or kernel of a matrix $A \in \mathbb{C}^{n\times m}$ by $N(A)$ and the range or column space of $A$ by $R(A)$.

    Proof. The proof appears in section 3.3.1.

    We note that theorem 2.3 applies in particular to the case of real positive semidefinite conductivities.

    Remark 1 (Discrete Dirichlet principle). For real $\sigma \succ 0$ and real $q \succeq 0$, it is easy to show that the Dirichlet problem (15) is equivalent to finding $u \in (\mathbb{R}^d)^V$ minimizing the energy

    $$E(u) = u^T(L_\sigma + \operatorname{diag}(q))u = \sum_{\{i,j\}\in E} (u(i)-u(j))^T\, \sigma(\{i,j\})\, (u(i)-u(j)) + \sum_{k\in V} u(k)^T q(k)\, u(k), \tag{20}$$

    subject to $u_B = g$. The function $E(u)$ is the energy needed to maintain a potential $u$ in the network and is the sum of energies associated with each edge and node. The edge terms are akin to the current-voltage product used to calculate the power dissipated by a two terminal electrical component. The node terms represent the energy leaked by an electrical component linking the node to the ground (zero potential). The conditions $\sigma \succ 0$, $q \succeq 0$ guarantee that $E(u)$ is a convex quadratic function of $u$. The first equality in the Dirichlet problem (15) is identical to $\nabla_{u_I} E(u) = 0$.

    The following lemma is a straightforward generalization to complex matrix valued conductivities and Schrödinger potentials of the interior identities [6,Lemmas 5.1 and 6.1], which are in turn inspired by the continuum interior identities used by Sylvester and Uhlmann [25] to prove uniqueness for the continuum conductivity and Schrödinger problems.

    Lemma 2.4 (Boundary/Interior Identity). Let $\sigma_1,\sigma_2 \in (\mathbb{C}^{d\times d})^E$ be conductivities and $q_1,q_2 \in (\mathbb{C}^{d\times d})^V$ be Schrödinger potentials. Let $u_1,u_2 \in (\mathbb{C}^d)^V$ be solutions to the $\sigma_1,q_1$ and $\sigma_2,q_2$ Dirichlet problems:

    $$\left\{\begin{aligned} \big((L_{\sigma_1}+\operatorname{diag}(q_1))u_1\big)_I &= 0, \\ (u_1)_B &= g_1, \end{aligned}\right. \qquad\text{and}\qquad \left\{\begin{aligned} \big((L_{\sigma_2}+\operatorname{diag}(q_2))u_2\big)_I &= 0, \\ (u_2)_B &= g_2, \end{aligned}\right.$$

    for some boundary conditions $g_1,g_2 \in (\mathbb{C}^d)^B$. Then if the Dirichlet to Neumann maps $\Lambda_{\sigma_i,q_i}$, $i=1,2$, are well defined, we have the identities

    $$\begin{aligned} g_2^T(\Lambda_{\sigma_1,q_1}-\Lambda_{\sigma_2,q_2})g_1 &= u_2^T(L_{\sigma_1}-L_{\sigma_2})u_1 + u_2^T\operatorname{diag}(q_1-q_2)u_1 \\ &= \sum_{e\in E} [(\nabla u_2)(e)]^T[(\sigma_1-\sigma_2)(e)][(\nabla u_1)(e)] + \sum_{i\in V}[u_2(i)]^T[(q_1-q_2)(i)][u_1(i)] \\ &= \operatorname{vect}\big((\nabla u_2)\odot(\nabla u_1)\big)^T\operatorname{vect}(\sigma_1-\sigma_2) + \operatorname{vect}(u_2\odot u_1)^T\operatorname{vect}(q_1-q_2), \end{aligned}$$

    where the outer product $\odot$ is as in (1).

    Proof. Since $u_1$ solves the $\sigma_1,q_1$ Dirichlet problem we have

    $$\begin{aligned} u_2^T L_{\sigma_1}u_1 + u_2^T\operatorname{diag}(q_1)u_1 &= (u_2)_B^T\big((L_{\sigma_1}+\operatorname{diag}(q_1))u_1\big)_B + (u_2)_I^T\big((L_{\sigma_1}+\operatorname{diag}(q_1))u_1\big)_I \\ &= (u_2)_B^T\big((L_{\sigma_1}+\operatorname{diag}(q_1))u_1\big)_B = g_2^T\Lambda_{\sigma_1,q_1}g_1. \end{aligned} \tag{21}$$

    Similarly, we have that

    $$u_2^T L_{\sigma_2}u_1 + u_2^T\operatorname{diag}(q_2)u_1 = g_2^T\Lambda_{\sigma_2,q_2}g_1. \tag{22}$$

    Subtracting (22) from (21) gives the first equality. To obtain the second equality, use the definition of the weighted graph Laplacian to see that

    $$u_2^T L_{\sigma_i}u_1 = \sum_{e\in E} [(\nabla u_2)(e)]^T\sigma_i(e)(\nabla u_1)(e), \quad i = 1,2.$$

    By applying for each $e\in E$ the identity $x^T A y = \operatorname{vect}(xy^T)^T\operatorname{vect}(A)$, which holds for any $x,y\in\mathbb{C}^d$ and $A\in\mathbb{C}^{d\times d}$, we get

    $$u_2^T(L_{\sigma_1}-L_{\sigma_2})u_1 = \sum_{e\in E}\operatorname{vect}\big([(\nabla u_2)(e)][(\nabla u_1)(e)]^T\big)^T\operatorname{vect}\big((\sigma_1-\sigma_2)(e)\big). \tag{23}$$

    By applying the same identity at all nodes $i\in V$ we get

    $$u_2^T\operatorname{diag}(q_1-q_2)u_1 = \sum_{i\in V}\operatorname{vect}\big([u_2(i)][u_1(i)]^T\big)^T\operatorname{vect}\big((q_1-q_2)(i)\big). \tag{24}$$

    The third equality follows from identities (23) and (24).
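    The identities of lemma 2.4 can also be checked numerically. The sketch below (Python/NumPy; it uses a small path graph with random complex symmetric weights of our own choosing, not an example from the paper) compares $g_2^T(\Lambda_{\sigma_1,q_1}-\Lambda_{\sigma_2,q_2})g_1$ with the edge and node sums of the second equality.

```python
import numpy as np
rng = np.random.default_rng(0)

d, n = 2, 4
edges = [(0, 1), (1, 2), (2, 3)]                    # path graph on 4 nodes
B, I = [0, 3], [1, 2]                               # boundary and interior nodes

def sym(shift):
    """Random complex symmetric d x d block with (very likely) positive definite real part."""
    a = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
    return (a + a.T)/2 + shift*np.eye(d)

def laplacian(sigma):
    L = np.zeros((d*n, d*n), dtype=complex)
    for (i, j), s in zip(edges, sigma):
        L[d*i:d*i+d, d*i:d*i+d] += s; L[d*j:d*j+d, d*j:d*j+d] += s
        L[d*i:d*i+d, d*j:d*j+d] -= s; L[d*j:d*j+d, d*i:d*i+d] -= s
    return L

def solve_dirichlet(L, q, g):
    """Returns the solution u of (15) and the Neumann data Lambda g of (16)."""
    A = L.copy()
    for i, qi in enumerate(q):
        A[d*i:d*i+d, d*i:d*i+d] += qi
    b  = np.concatenate([np.arange(d*i, d*i+d) for i in B])
    it = np.concatenate([np.arange(d*i, d*i+d) for i in I])
    u = np.zeros(d*n, dtype=complex); u[b] = g
    u[it] = -np.linalg.solve(A[np.ix_(it, it)], A[np.ix_(it, b)] @ g)
    return u, (A @ u)[b]

sig1, sig2 = [sym(5) for _ in edges], [sym(5) for _ in edges]
q1, q2 = [sym(1) for _ in range(n)], [sym(1) for _ in range(n)]
g1, g2 = rng.standard_normal(d*len(B)), rng.standard_normal(d*len(B))

u1, Lg1 = solve_dirichlet(laplacian(sig1), q1, g1)  # u1 and Lambda_{sigma1,q1} g1
u2, _   = solve_dirichlet(laplacian(sig2), q2, g2)  # u2 solves the sigma2,q2 problem
_,  Mg1 = solve_dirichlet(laplacian(sig2), q2, g1)  # Lambda_{sigma2,q2} g1

grad = lambda u, e: u[d*e[0]:d*e[0]+d] - u[d*e[1]:d*e[1]+d]
lhs = g2 @ (Lg1 - Mg1)                              # g2^T (Lambda_1 - Lambda_2) g1
rhs = sum(grad(u2, e) @ (s1 - s2) @ grad(u1, e) for e, s1, s2 in zip(edges, sig1, sig2)) \
    + sum(u2[d*i:d*i+d] @ (qa - qb) @ u1[d*i:d*i+d] for i, (qa, qb) in enumerate(zip(q1, q2)))
print(np.allclose(lhs, rhs))
```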

    We first focus in section 3.1 on proving theorem 2.1 for the particular case where the conductivity is real positive definite and the Schrödinger potential is zero (lemma 3.3). We then complete the proof of theorem 2.1 by considering either $\operatorname{Re}\sigma \succ 0$ or $\operatorname{Re} q_I \succ 0$ in section 3.2. In both cases the objective is to show that the conditions given in theorem 2.1 are sufficient to guarantee that the matrix $(L_\sigma)_{II} + \operatorname{diag}(q_I)$ is invertible. The case where $\operatorname{Re}\sigma \succeq 0$, $q = 0$ is dealt with in section 3.3, which includes the proof of theorem 2.3, and is more delicate because the matrix $(L_\sigma)_{II}$ may no longer be invertible. However it is still possible to show that the $\sigma,0$ Dirichlet solution is unique up to floppy modes (definition 2.2).

    The purpose of this section is to establish uniqueness for the $\sigma,0$ Dirichlet problem for real $\sigma$ with $\sigma \succ 0$ (lemma 3.3). We need two intermediary results on the discrete graph Laplacian $L_\sigma$ with real matrix valued symmetric conductivity $\sigma \succ 0$. The first one is a discrete version of the first Korn inequality (lemma 3.1). The second shows that a vector potential $u \in N(L_\sigma)$ must be constant on all connected components of the graph $G$ (lemma 3.2). Using these properties, we can show that when $\sigma$ is a real conductivity with $\sigma \succ 0$, we have $(L_\sigma)_{II} \succ 0$, which is sufficient to ensure uniqueness for the corresponding $\sigma,0$ Dirichlet problem.

    The following is a discrete version of the first Korn inequality, which bounds from below the elastic energy stored in a body by the norm of the displacement gradient, see e.g. [22, §1.12].

    Lemma 3.1 (Discrete Korn inequality). Let $\sigma \in (\mathbb{R}^{d\times d})^E$ be a conductivity with $\sigma \succ 0$. Then there is a constant $C > 0$ such that for any $u \in (\mathbb{R}^d)^V$,

    $$\|\nabla u\|^2 \leq C\, u^T L_\sigma u. \tag{25}$$

    Proof. By using Rayleigh quotients,

    $$v^T\sigma(e)v \geq \lambda_{\min}(\sigma(e))\|v\|^2 \quad\text{for all } v\in\mathbb{R}^d \text{ and } e\in E.$$

    Define $\lambda = \min_{e\in E}\lambda_{\min}(\sigma(e)) = \lambda_{\min}(\operatorname{diag}(\sigma))$. Clearly $\sigma \succ 0$ implies $\lambda > 0$. The inequality we seek follows with $C = \lambda^{-1}$ from

    $$u^T L_\sigma u = \sum_{e\in E}[(\nabla u)(e)]^T[\sigma(e)][(\nabla u)(e)] \geq \lambda\sum_{e\in E}\|(\nabla u)(e)\|^2 = \lambda\|\nabla u\|^2.$$

    The next lemma extends to matrix valued conductivities a well known characterization of the nullspace of (scalar) weighted graph Laplacians (see e.g. [8]).

    Lemma 3.2 (Nullspace of graph Laplacian). For real $\sigma \succ 0$, $u \in N(L_\sigma)$ implies that $\nabla u = 0$. In particular, if the graph is connected then $u$ is constant, meaning there is a constant $c \in \mathbb{R}^d$ such that $u(i) = c$ for all $i \in V$.

    Proof. If $u \in N(L_\sigma)$ then $u^T L_\sigma u = 0$. Using the discrete Korn inequality (lemma 3.1), we get $\nabla u = 0$. This means that for any edge $\{i,j\} \in E$ we must have $u(i) = u(j)$. Therefore $u$ must be constant on connected components of the graph.

    We can now prove the first uniqueness result for the Dirichlet problem.

    Lemma 3.3 (Uniqueness for real positive definite conductivities). Assume both the graph $G$ and its subgraph induced by the interior nodes are connected. For real conductivities $\sigma$ with $\sigma \succ 0$, the matrix $(L_\sigma)_{II}$ is invertible and the $\sigma,0$ Dirichlet problem admits a unique solution.

    Proof. Our goal here is to show that $(L_\sigma)_{II} \succ 0$, which implies invertibility and therefore uniqueness for the $\sigma,0$ Dirichlet problem. By definition of the weighted graph Laplacian (4), the matrix $L_\sigma$ must be real and symmetric. Moreover, using the discrete Korn inequality (lemma 3.1), there is a constant $C > 0$ such that for all $u \in (\mathbb{R}^d)^V$:

    $$u^T L_\sigma u \geq C\|\nabla u\|^2.$$

    This implies $L_\sigma \succeq 0$ and hence $(L_\sigma)_{II} \succeq 0$. Now we can write

    $$(L_\sigma)_{II} = L_{\sigma_I} + \operatorname{diag}(f),$$

    where $L_{\sigma_I}$ is the weighted graph Laplacian on the subgraph of $G$ induced by the interior nodes $I$ and $f \in (\mathbb{R}^{d\times d})^I$ is given for $i \in I$ by

    $$f(i) = \sum_{\{i,j\}\in E,\ i\in I,\ j\in B}\sigma(\{i,j\}). \tag{26}$$

    Since a sum of positive definite matrices is positive definite, $\sigma \succ 0$ implies $f(i) \succ 0$ for all nodes $i \in I$ that are connected via an edge to some boundary node, and $f(i) = 0$ otherwise. This guarantees that $\operatorname{diag}(f) \succeq 0$.

    Now take $v \in (\mathbb{R}^d)^I$ with $v^T(L_\sigma)_{II}v = 0$. Since both $L_{\sigma_I}$ and $\operatorname{diag}(f)$ are positive semidefinite, we must have that

    $$\text{(a)}\quad v^T L_{\sigma_I}v = 0 \qquad\text{and}\qquad \text{(b)}\quad v^T\operatorname{diag}(f)v = 0.$$

    By using (a) and lemma 3.2 on the subgraph induced by the interior nodes (which is connected by assumption), we get that $v$ is constant, i.e. $v(i) = v(j)$ for any $i,j \in I$. By using (b), we get that $v(i)^T\sigma(\{i,j\})v(i) = 0$ for all $\{i,j\}\in E$ with $i\in I$ and $j\in B$. Since $G$ is connected, at least one such edge exists, and since $\sigma(\{i,j\}) \succ 0$ there must be at least one $i\in I$ with $v(i) = 0$. Because $v$ is constant on the connected interior subgraph, we conclude that $v = 0$. This gives the desired result $(L_\sigma)_{II} \succ 0$.

    We start with the following lemma that allows us to extend the uniqueness result from lemma 3.3 to complex conductivities and Schrödinger potentials.

    Lemma 3.4. Let $A,B \in \mathbb{R}^{n\times n}$ be symmetric with $A \succ 0$. Then the matrix $M = A + \jmath B$ is invertible.

    Proof. The field of values (or numerical range, see e.g. [20]) of $M \in \mathbb{C}^{n\times n}$ is the complex plane region given by

    $$F(M) = \{v^* M v \mid v \in \mathbb{C}^n,\ \|v\| = 1\}.$$

    Since $A \succ 0$ we have $v^* A v > 0$ for $v \neq 0$. Since $B$ is real symmetric, $v^* B v$ must be real. Therefore $\operatorname{Re}(v^* M v) = v^* A v > 0$ for $v \neq 0$, and the field of values $F(M)$ lies in the right half of the complex plane, excluding the imaginary axis. Since the spectrum of $M$ is contained in $F(M)$, this means that $0$ is not an eigenvalue of $M$ and that $M$ is invertible.

    We are now ready to prove theorem 2.1.

    Proof. First assume condition (i) of theorem 2.1 holds. We want to show that $(L_\sigma)_{II} + \operatorname{diag}(q_I)$ is invertible when $\operatorname{Re}\sigma \succ 0$ and $\operatorname{Re} q_I \succ -\zeta$, for some $\zeta > 0$ to be determined and depending on $\operatorname{Re}\sigma$. By lemma 3.3 and because $\operatorname{Re}\sigma \succ 0$, we have that $(L_{\operatorname{Re}\sigma})_{II} \succ 0$. Since we assume $\operatorname{Re} q_I \succ -\lambda_{\min}((L_{\operatorname{Re}\sigma})_{II})$, we must have $(L_{\operatorname{Re}\sigma})_{II} + \operatorname{diag}(\operatorname{Re} q_I) \succ 0$. We can now use lemma 3.4 with $A \equiv (L_{\operatorname{Re}\sigma})_{II} + \operatorname{diag}(\operatorname{Re} q_I)$ and $B \equiv (L_{\operatorname{Im}\sigma})_{II} + \operatorname{diag}(\operatorname{Im} q_I)$ to conclude that $(L_\sigma)_{II} + \operatorname{diag}(q_I)$ is invertible. Uniqueness follows from the definition of the $\sigma,q$ Dirichlet problem.

    Then assume condition (ii) of theorem 2.1 holds. By the hypothesis, we have that $(L_{\operatorname{Re}\sigma})_{II} + \operatorname{diag}(\operatorname{Re} q_I) \succ 0$. Hence we can use lemma 3.4 with $A \equiv (L_{\operatorname{Re}\sigma})_{II} + \operatorname{diag}(\operatorname{Re} q_I)$ and $B \equiv (L_{\operatorname{Im}\sigma})_{II} + \operatorname{diag}(\operatorname{Im} q_I)$ to conclude that $(L_\sigma)_{II} + \operatorname{diag}(q_I)$ is invertible, and the desired uniqueness of the $\sigma,q$ Dirichlet problem follows.

    It now remains to prove (17). The proof is very similar to the scalar case (see e.g. [13]), but we include it here for completeness. The first equation of (15) can be rewritten as

    $$(L_\sigma)_{IB}u_B + \big[(L_\sigma)_{II} + \operatorname{diag}(q_I)\big]u_I = 0. \tag{27}$$

    Since under the hypotheses of theorem 2.1 the matrix $(L_\sigma)_{II} + \operatorname{diag}(q_I)$ is invertible, we have

    $$u_I = -\big[(L_\sigma)_{II} + \operatorname{diag}(q_I)\big]^{-1}(L_\sigma)_{IB}u_B. \tag{28}$$

    The expression (17) for the Dirichlet to Neumann map can be identified from

    $$\begin{aligned} \Lambda_{\sigma,q}u_B = \big((L_\sigma+\operatorname{diag}(q))u\big)_B &= \big[(L_\sigma)_{BB}+\operatorname{diag}(q_B)\big]u_B + (L_\sigma)_{BI}u_I \\ &= \Big[(L_\sigma)_{BB}+\operatorname{diag}(q_B) - (L_\sigma)_{BI}\big[(L_\sigma)_{II}+\operatorname{diag}(q_I)\big]^{-1}(L_\sigma)_{IB}\Big]u_B. \end{aligned}$$

    The purpose of this section is to prove theorem 2.3, which deals with the situation $\operatorname{Re}\sigma \succeq 0$ and $q = 0$. We start with several intermediary results. The first is a characterization of floppy modes (lemma 3.5) showing that floppy modes for such $\sigma$ are entirely determined by the nullspace of $\operatorname{diag}(\operatorname{Re}\sigma)\nabla$. Then we establish some relations between the ranges and nullspaces of certain blocks of $L_\sigma$ and of $L_{\operatorname{Re}\sigma}$ (lemma 3.6). One key observation is that floppy modes do not affect the boundary data (lemma 3.7).

    First notice that for any $u \in (\mathbb{C}^d)^V$ and real $\sigma \succeq 0$, we have

    $$u^* L_\sigma u = (\nabla u)^*\operatorname{diag}(\sigma)\,\nabla u = \|\operatorname{diag}(\sigma^{1/2})\nabla u\|^2, \tag{29}$$

    where $\sigma^{1/2}(e) = (\sigma(e))^{1/2}$, and the square root of a positive semidefinite matrix is defined by taking the square root of the eigenvalues in its eigendecomposition, see e.g. [20]. We use this identity to give a characterization of the floppy modes for complex conductivities with $\operatorname{Re}\sigma \succeq 0$ and $N(\operatorname{Re}\sigma(e)) \subseteq N(\operatorname{Im}\sigma(e))$, $e \in E$. From this characterization, we can see that floppy modes depend only on the subspaces $R(\operatorname{Re}\sigma(e))$ (or equivalently $N(\operatorname{Re}\sigma(e))$), for $e \in E$.

    Lemma 3.5. For conductivities with $\operatorname{Re}\sigma \succeq 0$ and $N(\operatorname{Re}\sigma(e)) \subseteq N(\operatorname{Im}\sigma(e))$, $e \in E$, the following are equivalent.

    (i) $z$ is a floppy mode (i.e. it satisfies (18));

    (ii) $z$ is a non-zero solution to

    $$\left\{\begin{aligned} \operatorname{diag}(\operatorname{Re}\sigma)\nabla z &= 0, \\ z_B &= 0; \end{aligned}\right. \tag{30}$$

    (iii) $z$ is such that $z_B = 0$ and $z_I \in N\big((L_{\operatorname{Re}\sigma})_{II}\big)$.

    Proof. (ii) $\Rightarrow$ (i). Assume that $z \neq 0$ satisfies (30). Because of the inclusion $N(\operatorname{Re}\sigma(e)) \subseteq N(\operatorname{Im}\sigma(e))$, $e \in E$, we also have $\operatorname{diag}(\operatorname{Im}\sigma)\nabla z = 0$ and $\operatorname{diag}(\sigma)\nabla z = 0$. Hence $L_\sigma z = \nabla^T\operatorname{diag}(\sigma)\nabla z = 0$.

    (i) $\Rightarrow$ (ii). Now we assume that $z$ is a floppy mode, i.e. (18) holds. Clearly this means that $0 = z^* L_\sigma z = z^* L_{\operatorname{Re}\sigma}z + \jmath z^* L_{\operatorname{Im}\sigma}z$. Since both $L_{\operatorname{Re}\sigma}$ and $L_{\operatorname{Im}\sigma}$ are real symmetric, the scalars $z^* L_{\operatorname{Re}\sigma}z$ and $z^* L_{\operatorname{Im}\sigma}z$ must be real. We can conclude that $z^* L_{\operatorname{Re}\sigma}z = 0$. Now we use (29) to realize that $\|\operatorname{diag}((\operatorname{Re}\sigma)^{1/2})\nabla z\|^2 = 0$. Since $\operatorname{Re}\sigma$ and $(\operatorname{Re}\sigma)^{1/2}$ have identical nullspaces, we see that (30) holds as well.

    (i) $\Rightarrow$ (iii). We assume $z \neq 0$ satisfies (18). Since $z_B = 0$ we have $z^* L_\sigma z = z_I^*(L_\sigma)_{II}z_I = 0$. But then

    $$0 = z_I^*(L_{\operatorname{Re}\sigma})_{II}z_I + \jmath z_I^*(L_{\operatorname{Im}\sigma})_{II}z_I$$

    implies that $z_I^*(L_{\operatorname{Re}\sigma})_{II}z_I = 0$. Because of (29) we have $L_{\operatorname{Re}\sigma} \succeq 0$ and also $(L_{\operatorname{Re}\sigma})_{II} \succeq 0$. Therefore $z_I \in N\big((L_{\operatorname{Re}\sigma})_{II}\big)$.

    (iii) $\Rightarrow$ (ii). Now let $z$ be such that $z_B = 0$ and $z_I \in N\big((L_{\operatorname{Re}\sigma})_{II}\big)$. Again since $z_B = 0$, we have $0 = z_I^*(L_{\operatorname{Re}\sigma})_{II}z_I = z^* L_{\operatorname{Re}\sigma}z$. By (29) we get

    $$\|\operatorname{diag}((\operatorname{Re}\sigma)^{1/2})\nabla z\|^2 = 0,$$

    which readily implies (30).

    Next we continue with a technical result, which is a slight generalization of the elastic network result in [18,Lemma 1].

    Lemma 3.6. Let $\sigma$ be a conductivity with $\operatorname{Re}\sigma \succeq 0$ satisfying the inclusion $N(\operatorname{Re}\sigma(e)) \subseteq N(\operatorname{Im}\sigma(e))$ for $e \in E$. Assume the subgraph induced by the interior nodes is connected. Then we have the following inclusions.

    (i) $N\big((L_\sigma)_{II}\big) = N\big((L_{\operatorname{Re}\sigma})_{II}\big)$

    (ii) $N\big((L_\sigma)_{II}\big) \subseteq N\big((L_\sigma)_{BI}\big)$

    (iii) $R\big((L_\sigma)_{II}\big) \supseteq R\big((L_\sigma)_{IB}\big)$

    Proof. Proof of (i). Let $z_I \in N\big((L_\sigma)_{II}\big)$. Because $z_I \in N\big((L_\sigma)_{II}\big)$ we also have

    $$0 = z_I^*(L_\sigma)_{II}z_I = z_I^*(L_{\operatorname{Re}\sigma})_{II}z_I + \jmath z_I^*(L_{\operatorname{Im}\sigma})_{II}z_I.$$

    But the real matrices $L_{\operatorname{Re}\sigma}$ and $L_{\operatorname{Im}\sigma}$ are symmetric, so we can conclude that $z_I^*(L_{\operatorname{Re}\sigma})_{II}z_I = 0$. Again by using (29) we see that $L_{\operatorname{Re}\sigma} \succeq 0$, and thus $z_I \in N\big((L_{\operatorname{Re}\sigma})_{II}\big)$. Now assume $z_I \in N\big((L_{\operatorname{Re}\sigma})_{II}\big)$. By lemma 3.5, the extension $z$ of $z_I$ by zeros on $B$ is a floppy mode, i.e. it satisfies (18). In particular, because $z_B = 0$, we have $0 = (L_\sigma z)_I = (L_\sigma)_{II}z_I$. Thus we get $z_I \in N\big((L_\sigma)_{II}\big)$ and (i) holds.

    Proof of (ii). We now proceed as in the proof of lemma 3.3, and write

    $$(L_{\operatorname{Re}\sigma})_{II} = L_{\operatorname{Re}\sigma_I} + \operatorname{diag}(\operatorname{Re} f), \tag{31}$$

    where $L_{\operatorname{Re}\sigma_I}$ is the weighted graph Laplacian on the subgraph $G_I$ induced by the interior nodes and $f \in (\mathbb{C}^{d\times d})^I$ is given as in (26). Take a $z \in N\big((L_\sigma)_{II}\big)$ and multiply (31) on the left by $z^*$ and on the right by $z$ to obtain

    $$0 = z^*(L_{\operatorname{Re}\sigma})_{II}z = z^* L_{\operatorname{Re}\sigma_I}z + z^*\operatorname{diag}(\operatorname{Re} f)z. \tag{32}$$

    By (29) applied to the subgraph $G_I$, we have $L_{\operatorname{Re}\sigma_I} \succeq 0$. By the assumption $\operatorname{Re}\sigma \succeq 0$, we also have $\operatorname{diag}(\operatorname{Re} f) \succeq 0$. Thus the two last terms in (32) are non-negative and must be zero by the first equality in (32). We have established that $z^*\operatorname{diag}(\operatorname{Re} f)z = 0$. But $\operatorname{Re} f(i)$ is a sum of positive semidefinite matrices (since $\operatorname{Re}\sigma \succeq 0$). Thus we must have that

    $$z(i)^*\operatorname{Re}\sigma(\{i,j\})z(i) = 0 \quad\text{for all } \{i,j\}\in E, \text{ with } i\in I,\ j\in B.$$

    Since conductivities are assumed to have symmetric positive semidefinite real parts, this means that for all $\{i,j\}\in E$ with $i\in I$ and $j\in B$ we have $\operatorname{Re}\sigma(\{i,j\})z(i) = 0$. By the inclusion in the hypothesis of the lemma we get $\operatorname{Im}\sigma(\{i,j\})z(i) = 0$ and also $\sigma(\{i,j\})z(i) = 0$. Now from the definition of the Laplacian we have

    $$\big((L_\sigma)_{BI}z\big)(j) = -\sum_{\{i,j\}\in E,\ i\in I}\sigma(\{i,j\})z(i) = 0, \quad\text{for } j\in B.$$

    This shows the inclusion (ii).

    Proof of (iii). Apply statement (ii) to the conductivity $\overline{\sigma} = \operatorname{Re}\sigma - \jmath\operatorname{Im}\sigma$ and the fundamental theorem of linear algebra to get $R\big((L_{\overline{\sigma}})_{II}\big) \supseteq R\big((L_{\overline{\sigma}})_{IB}\big)$. Since $\operatorname{Re}\sigma$ and $\operatorname{Im}\sigma$ are symmetric, we get the desired result $R\big((L_\sigma)_{II}\big) \supseteq R\big((L_\sigma)_{IB}\big)$.

    The following lemma shows that even if there are floppy modes, these do not influence Neumann (or net current) measurements at the boundary. In other words, floppy modes cannot be observed from boundary measurements.

    Lemma 3.7. Assume the hypotheses of lemma 3.6 hold. Then floppy modes correspond to zero boundary measurements.

    Proof. We need to show that if $z \in (\mathbb{C}^d)^V$ is a floppy mode, then we have zero Neumann boundary data, i.e. $(L_\sigma z)_B = 0$. Since $z$ is a floppy mode, we see from lemma 3.5 that $z_B = 0$ and $z_I \in N\big((L_{\operatorname{Re}\sigma})_{II}\big)$. By inclusions (i) and (ii) of lemma 3.6 we deduce that $z_I \in N\big((L_\sigma)_{II}\big) \subseteq N\big((L_\sigma)_{BI}\big)$. We conclude by noticing that $(L_\sigma z)_B = (L_\sigma)_{BB}z_B + (L_\sigma)_{BI}z_I = 0$.

    Proof. If $u$ is a solution to the $\sigma,0$ Dirichlet problem with boundary condition $g \in (\mathbb{C}^d)^B$, then $u_B = g$ and

    $$(L_\sigma)_{IB}g + (L_\sigma)_{II}u_I = 0. \tag{33}$$

    The inclusion (iii) of lemma 3.6 guarantees that equation (33) admits a solution for all $g \in (\mathbb{C}^d)^B$. The general solution to (33) may be written as

    $$u_I = -\big((L_\sigma)_{II}\big)^\dagger(L_\sigma)_{IB}g + z, \tag{34}$$

    where $z \in N\big((L_\sigma)_{II}\big)$ and the symbol $\dagger$ denotes the Moore-Penrose pseudoinverse. The Neumann boundary data corresponding to such a solution is:

    $$(L_\sigma u)_B = (L_\sigma)_{BB}g - (L_\sigma)_{BI}\big((L_\sigma)_{II}\big)^\dagger(L_\sigma)_{IB}g + (L_\sigma)_{BI}z.$$

    However, the inclusion (ii) in lemma 3.6 (or lemma 3.7) guarantees that $(L_\sigma)_{BI}z = 0$. Hence the Dirichlet to Neumann map is uniquely defined and can be written as

    $$\Lambda_{\sigma,0} = (L_\sigma)_{BB} - (L_\sigma)_{BI}\big((L_\sigma)_{II}\big)^\dagger(L_\sigma)_{IB}. \tag{35}$$

    Now let $Q$ be such that $Q^T Q = I$ and $R(Q) = R\big((L_\sigma)_{II}\big)$. We can always find a real $Q$ because of lemma 3.6 (i), and it can be found from $(L_{\operatorname{Re}\sigma})_{II}$, e.g. via a QR factorization or an eigendecomposition. By the fundamental theorem of linear algebra, the space $R(Q)$ is the orthogonal complement of the interior components of floppy modes, and thus depends only on the subspaces $R(\operatorname{Re}\sigma(e))$, $e \in E$ (see lemma 3.5, (ii)). We can use $Q$ to write the pseudoinverse of $(L_\sigma)_{II}$ as follows

    $$\big((L_\sigma)_{II}\big)^\dagger = Q\big(Q^T(L_\sigma)_{II}Q\big)^{-1}Q^T,$$

    and we get the alternate expression (19) for the Dirichlet to Neumann map.
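    A sketch of how (35), or equivalently (19), can be evaluated in practice (Python/NumPy; the function name is ours, and the routine assumes $L_\sigma$ has been assembled and that the hypotheses of theorem 2.3 hold).

```python
import numpy as np

def dtn_semidefinite(L, b_idx, i_idx, tol=1e-10):
    """Dirichlet to Neumann map (19) for Re(sigma) positive semidefinite:
    Lambda_{sigma,0} = L_BB - L_BI Q (Q^T L_II Q)^{-1} Q^T L_IB,
    where the columns of the real matrix Q form an orthonormal basis of
    R((L_{Re sigma})_II), computed here via an eigendecomposition."""
    LBB = L[np.ix_(b_idx, b_idx)]; LBI = L[np.ix_(b_idx, i_idx)]
    LIB = L[np.ix_(i_idx, b_idx)]; LII = L[np.ix_(i_idx, i_idx)]
    w, V = np.linalg.eigh(LII.real)                  # (L_{Re sigma})_II is real symmetric
    Q = V[:, w > tol * max(w.max(), 1.0)]            # keep eigenvectors of nonzero eigenvalues
    return LBB - LBI @ Q @ np.linalg.solve(Q.T @ LII @ Q, Q.T @ LIB)
```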

    The discrete inverse problems we consider share a common structure that we describe in section 4.1 and that is motivated in section 4.2 by the classic uniqueness proof for the continuum Schrödinger inverse problem [25]. Under the assumptions we make here, the linearization of the problem is readily available (section 4.3) and analyticity of the forward map is ensured. This has practical implications that are described in section 4.4.

    We denote by $p \in \mathbb{C}^m$ the unknown parameter. As we see later in sections 5 and 6, the parameter $p$ may represent a matrix valued quantity (or its eigenvalues) defined on the edges or nodes of a graph. The forward or parameter to data map associates to the parameter $p$ the matrix $\Lambda_p \in \mathbb{C}^{n\times n}$ (the data), provided the parameter $p$ belongs to an admissible set $\mathcal{R} \subseteq \mathbb{C}^m$ of parameters. The inverse problem is to find $p$ from $\Lambda_p$. Furthermore, we assume that the discrete inverse problems we consider satisfy the following assumptions.

    Assumption 1. The parameter $p$ belongs to an open convex set $\mathcal{R} \subseteq \mathbb{C}^m$ of admissible parameters. The forward map that to a parameter $p \in \mathcal{R}$ associates the data $\Lambda_p$ is well defined for $p \in \mathcal{R}$.

    Assumption 2. For all $f,g \in \mathbb{C}^n$ and $p_1,p_2 \in \mathcal{R}$ the following boundary/interior identity holds:

    $$f^T(\Lambda_{p_1}-\Lambda_{p_2})g = b(S_{p_2}g, S_{p_1}f)^T(p_1-p_2), \tag{36}$$

    where, for some state dimension $\ell$, $b: \mathbb{C}^\ell\times\mathbb{C}^\ell \to \mathbb{C}^m$ is a bilinear mapping and $S_p \in \mathbb{C}^{\ell\times n}$ is a matrix defined for $p \in \mathcal{R}$ that associates to a boundary condition $f \in \mathbb{C}^n$ an internal "state" $S_p f \in \mathbb{C}^\ell$. This identity is motivated in section 4.2 by looking at the continuum Schrödinger problem.

    Assumption 3 (Analyticity). The entries of $S_p$ are analytic functions of $p$ for $p \in \mathcal{R}$.

    Here by "analytic" we mean in the sense of analyticity of several complex variables, see e.g. [19]. For completeness, we recall in appendix A all the results we use from the theory of functions of several complex variables.

    The identity (36) in Assumption 2 is a discrete version of a similar identity that plays a key role in the Sylvester and Uhlmann [25] proof of uniqueness for the continuum Schrödinger inverse problem. To motivate (36) we make a short excursion to the continuum (fully contained within this section) and consider an open, connected and bounded domain $\Omega \subseteq \mathbb{R}^d$ with smooth boundary $\partial\Omega$. We say $u$ solves the Schrödinger problem if

    $$-\Delta u + qu = 0 \ \text{ in } \Omega \quad\text{and}\quad u = f \ \text{ on } \partial\Omega, \tag{37}$$

    where $\Delta = \partial_1^2 + \dots + \partial_d^2$ is the Laplacian, $q$ is a Schrödinger potential and $f$ the Dirichlet boundary data. The Dirichlet to Neumann map is the linear mapping $\Lambda_q$ that maps Dirichlet boundary data $u|_{\partial\Omega}$ to Neumann boundary data $n\cdot\nabla u|_{\partial\Omega}$, where $n$ is the outward pointing unit normal to $\partial\Omega$ and $\nabla u = [\partial_1 u,\dots,\partial_d u]^T$ is the gradient of $u$. Now if $u_1$ (resp. $u_2$) solves the Schrödinger problem with $u_1|_{\partial\Omega} = f$ (resp. $u_2|_{\partial\Omega} = g$) and Schrödinger potential $q_1$ (resp. $q_2$), then by using Green's identities one gets the following identity that can be found in [25]:

    $$\int_{\partial\Omega} f\,\big([\Lambda_{q_1}-\Lambda_{q_2}]g\big)\,dS = \int_\Omega (q_1-q_2)\,u_1 u_2\,dx. \tag{38}$$

    This is called a boundary/interior identity because it relates the boundary data (the difference of the Dirichlet to Neumann maps for $q_1$ and $q_2$) to a linear functional of $q_1-q_2$ in the interior of $\Omega$. The linear functional itself is given by the product of the solutions $u_1$ and $u_2$.

    Analogously in the boundary/interior identity (36) we require that the difference in the parameter to data map for two parameters p1 and p2 is given by a linear functional of p1p2, where the linear functional itself is a "product" of the "internal states" corresponding to p1 and p2. What we mean by "product" and "internal state" is left abstract for now, but many concrete examples are given in section 5.

    For a discrete inverse problem satisfying assumptions 1–3, we define the following product of solutions matrix, which is the matrix valued function $W: \mathcal{R}\times\mathcal{R} \to \mathbb{C}^{m\times n^2}$ with columns given by 2

    2We use Matlab style notation where the $j$th column of a matrix $A$ is $A(:,j)$ and its $i$th row is $A(i,:)$.

    $$[W(p_1,p_2)](:,i+(j-1)n) = b\big(S_{p_1}(:,i), S_{p_2}(:,j)\big), \quad i,j = 1,\dots,n. \tag{39}$$

    The next lemma shows that the parameter to data map $\Lambda_p$ must be Fréchet differentiable (specialized versions of this lemma appear in [6, Lemmas 5.4 and 6.3]).

    Lemma 4.1 (Linearization of discrete inverse problem). Let $p \in \mathcal{R}$. For sufficiently small $\delta p \in \mathbb{C}^m$, we have

    $$f^T\Lambda_{p+\delta p}\,g = f^T\Lambda_p\,g + b(S_p f, S_p g)^T\delta p + o(\|\delta p\|). \tag{40}$$

    Proof. Use the boundary/interior identity (36) with $p_1 = p + \epsilon\,\delta p$ and $p_2 = p$, for some scalar $\epsilon$. To conclude, divide both sides by $\epsilon$ and take the limit as $\epsilon \to 0$. Notice that assumption 3 guarantees that $S_p$ is analytic in $p$; therefore we do have continuity of $S_p$ in $p$ and $S_{p+\epsilon\delta p} \to S_p$ as $\epsilon \to 0$.

    A consequence of lemma 4.1 is that $W(p,p)^T$ is an $n^2\times m$ matrix representation of the Jacobian of the parameter to data map at the parameter value $p$. From (39), this matrix representation of the Jacobian is associated with identifying the matrix $\Lambda_p \in \mathbb{C}^{n\times n}$ with the vector $\operatorname{vect}(\Lambda_p) \in \mathbb{C}^{n^2}$. Clearly the linearized inverse problem about $p$ is injective when $N(W(p,p)^T) = \{0\}$, i.e. when the product of solutions matrix $W(p,p)$ has full row rank, i.e. $R(W(p,p)) = \mathbb{C}^m$.

    Another consequence of lemma 4.1 is that the Jacobian of $\Lambda_p$ with respect to $p$ must be analytic for $p \in \mathcal{R}$ (by assumption 3). Clearly the forward map $\Lambda_p$ must also be analytic for $p \in \mathcal{R}$.

    We look at the impact of analyticity on the uniqueness question:

    If $p_1,p_2 \in \mathbb{C}^m$ are parameters with identical data $\Lambda_{p_1} = \Lambda_{p_2}$, can we conclude that $p_1 = p_2$?

    For inverse problems satisfying assumptions 1–3, we can only guarantee uniqueness in a weak sense that we call uniqueness almost everywhere (as in [6]). By this we mean that (a) the linearized problem is injective for almost all parameters $p \in \mathcal{R}$ and (b) for any $p$, the set $\mathcal{M}_p \equiv \{q \in \mathcal{R} \mid \Lambda_q = \Lambda_p\}$ must have zero measure. The set $\mathcal{M}_p$ is the equivalence class of all parameters that give the same data as the parameter $p$. If uniqueness were guaranteed for the inverse problem, we would have $\mathcal{M}_p = \{p\}$. In our case we can only guarantee the much weaker statement that $\mathcal{M}_p$ has zero measure. Both (a) and (b) follow from analyticity of the forward map and from assuming that the Jacobian is injective at a single parameter $\rho \in \mathcal{R}$.

    First assume analyticity of $\Lambda_p$ and that we can find $(\rho_1,\rho_2) \in \mathcal{R}\times\mathcal{R}$ such that $\Lambda_{\rho_1} \neq \Lambda_{\rho_2}$. Define the function $g: \mathcal{R}\times\mathcal{R} \to \mathbb{C}$ by $g(x,y) = [\Lambda_x - \Lambda_y]_{ij}$ for some $i,j \in 1,\dots,n$. Clearly $g$ is analytic on $\mathcal{R}\times\mathcal{R}$ and we can find $i,j$ such that $g(\rho_1,\rho_2) \neq 0$. By analytic continuation, the set $\{(p_1,p_2) \in \mathcal{R}\times\mathcal{R} \mid g(p_1,p_2) = 0\}$ must be a zero measure set. In particular, if we fix $p \in \mathcal{R}$ we have shown that $\mathcal{M}_p\times\mathcal{M}_p$ has zero measure and so $\mathcal{M}_p$ must have zero measure as well. This is a much simpler way of reaching a result similar to that in [6] and was suggested by Druskin [15].

    Analyticity can also be used to deduce that if the Jacobian of the forward map is injective at a parameter $\rho \in \mathcal{R}$, then it must be injective at almost any other parameter $p \in \mathcal{R}$, i.e. (a). Indeed, lemma 4.1 shows that the Jacobian at $p$ can be represented by the $n^2\times m$ matrix $W(p,p)^T$ defined in (39). If $W(\rho,\rho)^T$ is injective for a $\rho \in \mathcal{R}$, then there is an $m\times m$ submatrix $[W(\rho,\rho)]_{:,\alpha}$ of $W(\rho,\rho)$ that is invertible, where $\alpha = (\alpha_1,\dots,\alpha_m) \in \{1,\dots,n^2\}^m$ is a multi-index used to represent the particular choice of columns. Thus the function $f: \mathcal{R} \to \mathbb{C}$ defined by

    $$f(p) = \det\,[W(p,p)]_{:,\alpha} \tag{41}$$

    is analytic for $p \in \mathcal{R}$ and is such that $f(\rho) \neq 0$. By analytic continuation, the zero set of $f$ must be of measure zero. Thus the set of parameters for which the Jacobian is not injective must be a zero measure set.

    Finally, we note that if we can find a parameter $\rho$ for which the Jacobian $W(\rho,\rho)^T$ is injective, then we can use the constant rank theorem (see e.g. [24]) to show that there is a $p \in \mathcal{R}$ in a neighborhood of $\rho$ such that $\Lambda_\rho \neq \Lambda_p$.

    Uniqueness a.e. has several practical applications that are illustrated for the scalar discrete conductivity problem in [6]. We give an outline of these applications for completeness. The first application is a simple test to determine whether uniqueness a.e. holds for a particular discrete inverse problem and that may also indicate sensitivity to noise (section 4.5.1). Once we know uniqueness a.e. holds for a particular discrete inverse problem, we can guarantee that the situations in which Newton's method with line search fails can be easily avoided (section 4.5.2). Naturally a statement about zero measure sets can be translated to a probabilistic setting (section 4.5.3).

    Recall from lemma 4.1 that the Jacobian of the discrete inverse problem at a parameter $p$ can be easily computed as a products of solutions matrix (39) with $p \equiv p_1 = p_2$. As discussed in section 4.4, if we can find a parameter $p \in \mathcal{R}$ for which the Jacobian is injective, then uniqueness a.e. holds for the problem. A numerical test for uniqueness a.e. can be summarized as follows.

    1. Pick a parameter $p \in \mathcal{R}$.

    2. Calculate the Jacobian $W(p,p)^T$ using (39).

    3. Find the largest and smallest singular values $\sigma_{\max}, \sigma_{\min}$ of $W(p,p)$.

    4. If $\sigma_{\min} > \epsilon\,\sigma_{\max}$, where $\epsilon$ is a tolerance set a priori, then uniqueness a.e. holds.

    We point out that if $\sigma_{\min} \leq \epsilon\,\sigma_{\max}$ it is not possible to distinguish between the two following scenarios: (a) uniqueness a.e. holds but $W(p,p)$ is not injective to tolerance $\epsilon$; or (b) uniqueness a.e. does not hold for the problem. Thus the test is inconclusive. However, we know that scenario (a) is very unlikely because we would have had to pick $p$ on the zero measure subset of $\mathcal{R}$ that contains all parameters for which the Jacobian is not injective. Thus the most likely outcome is (b). Finally, we remark that other methods may be used instead of the Singular Value Decomposition (SVD) to find the rank of the Jacobian (e.g. the QR factorization). We prefer the SVD because the ratio $\sigma_{\max}/\sigma_{\min}$ is the condition number of the linear least squares problem associated with the linearization of the discrete inverse problem, and thus measures the sensitivity to noise of the linearization of the inverse problem about the parameter $p$.
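    The four steps above take only a few lines once the products of solutions matrix is available, as in this sketch (Python/NumPy; W_of is a placeholder of ours for a routine that returns $W(p_1,p_2)$ as in (39)).

```python
import numpy as np

def uniqueness_ae_test(W_of, p, eps=1e-8):
    """Steps 1-4: returns True when W(p,p)^T is injective to tolerance eps,
    in which case uniqueness a.e. holds for the discrete inverse problem."""
    W = W_of(p, p)                          # m x n^2 products of solutions matrix, eq. (39)
    s = np.linalg.svd(W, compute_uv=False)  # singular values, largest first
    return s[-1] > eps * s[0]               # sigma_min > eps * sigma_max
```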

    The discrete inverse problem of finding the parameter $p$ from the data $\Lambda_p$ is a non-linear system of equations that can be solved numerically using Newton's method (see e.g. [23]). Let us denote by $D\Lambda_p = W(p,p)^T$ the Jacobian of the Dirichlet to Neumann map about the parameter $p \in \mathcal{R}$. For our particular problem we get the following.

    Newton's method

    $p^{(0)}$ given

    for $k = 0,1,2,\dots$

    Find a step $\delta p^{(k)}$ s.t. $D\Lambda_{p^{(k)}}\,\delta p^{(k)} = -\operatorname{vect}(\Lambda_{p^{(k)}} - \Lambda_p)$

    Choose a step length $t_k > 0$

    Update $p^{(k+1)} = p^{(k)} + t_k\,\delta p^{(k)}$
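    A minimal sketch of this iteration (Python/NumPy; the names forward, jacobian and the fixed step length are placeholders of ours). It already uses the minimum norm least squares step discussed next, which coincides with the Newton step whenever $D\Lambda_{p^{(k)}}$ is injective.

```python
import numpy as np

def newton(forward, jacobian, data_vec, p0, t=1.0, n_iter=20):
    """Newton's method for the discrete inverse problem.
    forward(p) returns vect(Lambda) for the current iterate, jacobian(p) returns
    DLambda_p = W(p,p)^T, and data_vec = vect(Lambda_p) is the measured data."""
    p = np.array(p0, dtype=complex)
    for _ in range(n_iter):
        residual = forward(p) - data_vec                      # vect(Lambda_{p^(k)} - Lambda_p)
        # minimal norm least squares solution of DLambda dp = -residual, cf. (42)
        dp, *_ = np.linalg.lstsq(jacobian(p), -residual, rcond=None)
        p = p + t * dp                                        # step length t_k (fixed here)
    return p
```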

    The first operation in the Newton iteration is to solve a linear problem for the step $\delta p^{(k)}$. This operation can fail either because $\operatorname{vect}(\Lambda_{p^{(k)}} - \Lambda_p) \notin R(D\Lambda_{p^{(k)}})$ or because $N(D\Lambda_{p^{(k)}}) \neq \{0\}$. A remedy to either of these situations is to solve the linear least squares problem

    $$\min_{\delta p}\ \big\|D\Lambda_{p^{(k)}}\,\delta p + \operatorname{vect}(\Lambda_{p^{(k)}} - \Lambda_p)\big\|_2^2, \tag{42}$$

    and pick $\delta p^{(k)}$ as the minimal norm solution to (42). If uniqueness a.e. holds for the problem at hand, then clearly $D\Lambda_{p^{(k)}}$ is injective except on a zero measure set. Therefore we can expect the step in Newton's method to be uniquely defined. Now assume we found a step. If we assume a particular form of analyticity for the entries of $S_p$ (in Assumption 3), then we can guarantee that there are only finitely many choices of the step length $t_k$ for which $D\Lambda_{p^{(k+1)}}$ is not injective. In the unlikely event one encounters one of such points, the step length $t_k$ can be reduced by a small amount to make $D\Lambda_{p^{(k+1)}}$ injective. This is a consequence of the following lemma, which is a generalization of the result for the scalar discrete conductivity inverse problem in [6, Corollary 5.7].

    Lemma 4.2. Consider a discrete inverse problem satisfying assumptions 1–3 and further assume that all entries of $S_p$ are rational functions of $p$ (of the form $P(p)/Q(p)$, where $P$ and $Q$ are polynomials). Let $p \in \mathcal{R}$ and $\delta p \in \mathbb{C}^m$, and assume the Jacobian of $\Lambda_p$ at $p$ is injective. Then there are at most finitely many $t \in \mathbb{R}$ for which $p + t\,\delta p \in \mathcal{R}$ and either

    (i) the Jacobian of $\Lambda_p$ at $p + t\,\delta p$ is not injective, or

    (ii) $\Lambda_{p+t\delta p} = \Lambda_p$.

    Proof. Since the Jacobian of $\Lambda_p$ is injective at $p \in \mathcal{R}$, there is a multi-index $\alpha \in \{1,\dots,n^2\}^m$ such that the function $f$ defined in (41) satisfies $f(p) \neq 0$. Since the admissible set is open and convex, there is an interval $[a,b]$ containing $0$ such that $t \in [a,b] \Rightarrow p + t\,\delta p \in \mathcal{R}$. Since the entries of $S_p$ are rational functions of $p$ and $f$ is defined through a determinant, we can see that the function $g(t) = f(p + t\,\delta p)$ is a rational function of $t$, i.e. it can be written in the form $g(t) = F(t)/G(t)$ where $F(t)$ and $G(t)$ are polynomials. Since $F(t)$ can only have finitely many zeroes, we conclude that there are only finitely many $t$ for which $g(t) = 0$, or in other words, for which the Jacobian at $p + t\,\delta p$ is not injective. This proves (i). To prove (ii) we consider the function $h(t) = \det\,[W(p, p + t\,\delta p)]_{:,\alpha}$, with $\alpha$ being the same multi-index as in (i). The function $h$ is also a rational function of $t$ with finitely many zeroes. Notice that $h(t) \neq 0$ implies the matrix $W(p, p + t\,\delta p)$ has full row rank. Using the boundary/interior identity (36) we see that

    $$\operatorname{vect}(\Lambda_{p+t\delta p} - \Lambda_p) = t\,W(p, p + t\,\delta p)^T\,\delta p \neq 0$$

    whenever $t \neq 0$, $h(t) \neq 0$ and $\delta p \neq 0$. Thus there are at most finitely many $t$ for which $\Lambda_{p+t\delta p} = \Lambda_p$, $(p, p + t\,\delta p) \in \mathcal{R}\times\mathcal{R}$ and $\delta p \neq 0$.

    Remark 2. The assumption on the entries of Sp being rational functions of p is satisfied by all the examples of discrete inverse problems on graphs that we consider in sections 5 and 6. This is a simple consequence of the cofactor formula for the inverse of a matrix.

    The discussion in section 4.4 has a probabilistic flavor, as was remarked for the scalar conductivity problem in [6]. To see this, consider a probability space $(\Omega,\mathcal{F},\mathbb{P})$ (i.e. a sample space $\Omega$, a set of events $\mathcal{F}$ and a probability measure $\mathbb{P}$) and consider a random variable $P: \Omega \to \mathcal{R}\times\mathcal{R}$ with distribution $\mu_P$ that we assume is absolutely continuous with respect to the Lebesgue measure on $\mathbb{C}^m\times\mathbb{C}^m$. Note that this assumption precludes the distribution $\mu_P$ from being supported on a set of Lebesgue measure zero in $\mathcal{R}\times\mathcal{R}$. We write $P \equiv (P_1,P_2)$ when we want to distinguish the components of $P$. If uniqueness a.e. holds for the discrete inverse problem at hand and $M \subseteq \mathcal{R}\times\mathcal{R}$ is a measurable set for which $\mathbb{P}\{P \in M\} > 0$, then we must have

    $$\mathbb{P}\{W(P_1,P_2)^T \text{ is injective} \mid P \in M\} = 1.$$

    To see this, remark that uniqueness a.e. guarantees that the set

    $$Z = \{(p_1,p_2) \in M \mid W(p_1,p_2)^T \text{ is not injective}\}$$

    is of measure zero. Since the distribution is absolutely continuous with respect to the Lebesgue measure, this also means $\mu_P(Z) = 0$. Roughly speaking, if we choose two admissible parameters $p_1,p_2$ at random, we have $W(p_1,p_2)^T$ injective almost surely. Thus we can tell $p_1$ and $p_2$ apart from the data $\Lambda_{p_1}$, $\Lambda_{p_2}$ almost surely.

    A similar observation can be made regarding the injectivity of the Jacobian of the problem. Let $Q: \Omega \to \mathcal{R}$ be a random variable with distribution $\mu_Q$ that is assumed to be absolutely continuous with respect to the Lebesgue measure. If uniqueness a.e. holds and $N \subseteq \mathcal{R}$ is some measurable set with $\mathbb{P}\{Q \in N\} > 0$, then we must have

    $$\mathbb{P}\{\text{the Jacobian at } Q \text{ is injective} \mid Q \in N\} = 1.$$

    Here we use the graph theoretical results from section 2 to give examples of matrix inverse problems on graphs that fit the mold of section 4.

    Given a graph $G = (V,E)$ with boundary, the problem here is to find the matrix valued conductivity $\sigma \in (\mathbb{C}^{d\times d})^E$ from the Dirichlet to Neumann map $\Lambda_{\sigma,0}$. We explain below why this problem satisfies the assumptions of section 4.

    ● Here we take as admissible set:

    $$\mathcal{R} = \{\sigma \in (\mathbb{C}^{d\times d})^E \mid \sigma = \sigma^T \text{ and } \operatorname{Re}\sigma \succ 0\}.$$

    This is an open convex set in $(\mathbb{C}^{d\times d})^E$ which can be identified with an open convex subset of $\mathbb{C}^{d^2|E|}$. The forward map is the map that to $\sigma \in \mathcal{R}$ associates the Dirichlet to Neumann map $\Lambda_{\sigma,0} \in \mathbb{C}^{d|B|\times d|B|}$. This map is well defined for $\sigma \in \mathcal{R}$ because of theorem 2.1.

    ● By lemma 2.4 with $q_i = 0$, $i = 1,2$, we have the boundary/interior identity

    $$g_2^T(\Lambda_{\sigma_1,0}-\Lambda_{\sigma_2,0})g_1 = b(S_{\sigma_2}g_2, S_{\sigma_1}g_1)^T\operatorname{vect}(\sigma_1-\sigma_2).$$

    Here we define $S_\sigma \in \mathbb{C}^{d|E|\times d|B|}$ by its action on some $f \in \mathbb{C}^{d|B|}$:

    $$S_\sigma f = \nabla u,$$

    where $u$ solves the Dirichlet problem (15) with $q = 0$ and $u_B = f$. The bilinear map $b: \mathbb{C}^{d|E|}\times\mathbb{C}^{d|E|} \to \mathbb{C}^{d^2|E|}$ is defined by $b(u,v) = u\odot v$, where the outer product is defined in section 2.1 and we are implicitly identifying $(\mathbb{C}^d)^E$ with $\mathbb{C}^{d|E|}$ and similarly $(\mathbb{C}^{d\times d})^E$ with $\mathbb{C}^{d^2|E|}$.

    Analyticity assumption. From theorem 2.1 (see also lemmas 3.3 and 3.4), the solution $u$ to the Dirichlet problem (15) with $u_B = f$, $\sigma \in \mathcal{R}$ and $q = 0$ is determined by $u_I = -(L_{II})^{-1}L_{IB}f$, where for clarity we omitted the subscript $\sigma$ from the graph Laplacian $L_\sigma$. Therefore the entries of $S_\sigma$ depend analytically on $\sigma$, for $\sigma \in \mathcal{R}$.
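    Putting the pieces above together, here is a sketch of the Jacobian $W(\sigma,\sigma)^T$ for the matrix conductivity problem (Python/NumPy; the assembly of $\nabla$ and $L_\sigma$ is assumed to follow the earlier sketches and all function names are ours).

```python
import numpy as np

def conductivity_jacobian(grad, L, b_idx, i_idx, d, n_edges):
    """Jacobian W(sigma,sigma)^T of the forward map sigma -> Lambda_{sigma,0},
    built from products of the internal states S_sigma e_i = grad u_i, eq. (39).
    grad is the d-dimensional discrete gradient and L = L_sigma the Laplacian."""
    nb = len(b_idx)
    # states: u_i solves (15) with q = 0 and boundary data equal to the i-th basis vector
    U = np.zeros((L.shape[0], nb), dtype=complex)
    U[b_idx, :] = np.eye(nb)
    U[i_idx, :] = -np.linalg.solve(L[np.ix_(i_idx, i_idx)], L[np.ix_(i_idx, b_idx)])
    S = grad @ U                                              # d|E| x d|B| edge gradients
    W = np.zeros((d*d*n_edges, nb*nb), dtype=complex)
    for i in range(nb):
        for j in range(nb):
            col = np.empty(d*d*n_edges, dtype=complex)
            for e in range(n_edges):
                blk = np.outer(S[d*e:d*e+d, i], S[d*e:d*e+d, j])   # block-wise outer product (1)
                col[d*d*e:d*d*(e+1)] = blk.flatten(order='F')      # vect() stacks columns
            W[:, i + j*nb] = col
    return W.T
```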

    Here we show that if the Jacobian for the scalar conductivity problem on a graph is injective at a conductivity $s \in (0,\infty)^E$, then the Jacobian for the matrix conductivity problem on the same graph, but with the conductivity of edge $e \in E$ given by $s(e)I \in \mathbb{R}^{d\times d}$, must also be injective. Because of the discussion in section 4.4, this result shows that if uniqueness a.e. holds for a scalar conductivity problem, then it must also hold for the matrix valued problem. In particular, uniqueness a.e. holds on the critical circular planar graphs that are defined in [13,14].

    Lemma 5.1. Let G=(V,E) be a graph with boundary and let s(0,)E be a scalar conductivity. Define the conductivity σ(Rd×d)E by σ(e)=s(e)I with I being the d×d identity. Then if the Jacobian of the forward problem is injective for the scalar conductivity s, it must also be injective for the matrix valued conductivity σ.

    Proof. We need to show that

    R(W(s,s))=R|E|R(W(σ,σ))=Rd2|E|.

    To this end, we first link the Laplacian Lσ for the matrix valued conductivity σ (a d|V|×d|V| matrix) to the Laplacian for a graph Gd=(Vd,Ed) that corresponds to having d copies of the graph G without any edges between the copies, and each copy of G having the same scalar conductivity s. To be more precise the vertex set of Gd is Vd=V×{1,,d}, the edge set is

    Ed={{(v1,1),(v2,2)}Vd×Vd|1=2and{v1,v2}E}.

    Then with an appropriate ordering of Vd, we have Lσ=Lsd, where the conductivity sd(0,)Ed is defined by

    sd({(v1,1),(v2,2)})=δ1,2s({v1,v2})

    for all {v1,v2}E, 1,2{1,,d}, and with δ1,2 being the Kronecker delta. Now take a solution vRV to the Dirichlet problem on G with scalar conductivity s and let ei be the ith canonical basis vector of Rd. Then up to a reordering of Vd, vei solves the Dirichlet problem on G with matrix valued conductivity σ and boundary data v|Bei, i=1,,d. Here we used the Kronecker product which we recall for convenience in (2). Let vj be the solution to the Dirichlet problem on G with conductivity s such that vj|B=ej, j=1,,|B|. Then we have

$$R(W(s,s))=\operatorname{span}\bigl\{\operatorname{vect}\bigl((\nabla v_i)\odot(\nabla v_{i'})\bigr),\ i,i'=1,\ldots,|B|\bigr\},$$

where we used the outer product defined in (1). Since $\nabla(v\otimes e_i)=(\nabla v)\otimes e_i$ for any $v\in\mathbb{R}^V$, we have that

$$\bigl(\nabla(v_i\otimes e_j)\bigr)\odot\bigl(\nabla(v_{i'}\otimes e_{j'})\bigr)=\bigl[(\nabla v_i)\odot(\nabla v_{i'})\bigr]\otimes\bigl(e_j e_{j'}^T\bigr), \qquad (43)$$

for any $i,i'=1,\ldots,|B|$ and $j,j'=1,\ldots,d$. Now let us consider the subspace $M\subset\mathbb{R}^{d^2|E|}$ spanned by all possible products (43) in vector form, i.e.

$$M=\operatorname{span}\bigl\{\operatorname{vect}\bigl[\bigl(\nabla(v_i\otimes e_j)\bigr)\odot\bigl(\nabla(v_{i'}\otimes e_{j'})\bigr)\bigr],\ i,i'=1,\ldots,|B|,\ j,j'=1,\ldots,d\bigr\}.$$

Since $R(W(s,s))=\mathbb{R}^{|E|}$ we deduce that $M=\mathbb{R}^{d^2|E|}$. Indeed, the $d^2$ subspaces associated with the pairs $(j,j')\in\{1,\ldots,d\}^2$ are mutually orthogonal and each has dimension $|E|$. The desired result follows because we have the inclusion $M\subset R(W(\sigma,\sigma))$.
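The block structure used in the proof can be checked numerically. The sketch below (ours, with made-up numbers) builds the scalar Laplacian $L_s$ and the block Laplacian for $\sigma(e)=s(e)I$ on a small triangle graph; with a node-major ordering of the degrees of freedom the latter equals the Kronecker product $L_s\otimes I_d$, i.e. $d$ decoupled copies of the scalar problem, which is the reordering invoked above.

```python
import numpy as np

def laplacian(n, edges, w, blk=1):
    """Graph Laplacian; w maps an edge to a scalar (blk=1) or a (blk, blk) matrix."""
    L = np.zeros((blk * n, blk * n))
    for (i, j) in edges:
        W = w[(i, j)] * np.eye(blk) if np.isscalar(w[(i, j)]) else w[(i, j)]
        bi, bj = slice(blk * i, blk * (i + 1)), slice(blk * j, blk * (j + 1))
        L[bi, bi] += W
        L[bj, bj] += W
        L[bi, bj] -= W
        L[bj, bi] -= W
    return L

d, edges = 3, [(0, 1), (1, 2), (0, 2)]
s = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 0.5}        # scalar conductivity
Ls = laplacian(3, edges, s)                         # scalar graph Laplacian
sigma = {e: s[e] * np.eye(d) for e in edges}        # sigma(e) = s(e) I
Lsigma = laplacian(3, edges, sigma, blk=d)          # block graph Laplacian

# d decoupled copies of the scalar problem, up to the ordering of the unknowns.
assert np.allclose(Lsigma, np.kron(Ls, np.eye(d)))
```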

Given a graph $G=(V,E)$ with boundary, the inverse problem we consider here is to find the symmetric matrix valued Schrödinger potential $q\in(\mathbb{C}^{d\times d})^V$ from the Dirichlet to Neumann map $\Lambda_{\sigma,q}$, where the conductivity $\sigma\in(\mathbb{C}^{d\times d})^E$ is symmetric with $\operatorname{Re}\sigma\succ 0$ and is assumed to be known. This problem has the structure of the abstract inverse problem of section 4, as we see next.

    ● The admissible set is

$$R=\{q\in(\mathbb{C}^{d\times d})^V \mid q=q^T \text{ and } \operatorname{Re}q_I\succ-\lambda_{\min}\bigl((L_{\operatorname{Re}\sigma})_{II}\bigr)\}.$$

This is an open convex set in $(\mathbb{C}^{d\times d})^V$ which can be identified with an open convex subset of $\mathbb{C}^{d^2|V|}$. The forward map is the map that to $q\in R$ associates the Dirichlet to Neumann map $\Lambda_{\sigma,q}\in\mathbb{C}^{d|B|\times d|B|}$. This map is given by (17) and is well defined for $q\in R$ because of theorem 2.1.

● The boundary/interior identity is given by applying lemma 2.4 with $\sigma_i=\sigma$, $i=1,2$:

$$g_2^T(\Lambda_{\sigma,q_1}-\Lambda_{\sigma,q_2})g_1=b(S_{q_2}g_2,S_{q_1}g_1)^T\operatorname{vect}(q_1-q_2).$$

Here we define $S_q\in\mathbb{C}^{d|V|\times d|B|}$ by its action on $f\in\mathbb{C}^{d|B|}$:

$$S_q f=u,$$

where $u$ solves the Dirichlet problem (15) with $u_B=f$. The bilinear map $b:\mathbb{C}^{d|V|}\times\mathbb{C}^{d|V|}\to\mathbb{C}^{|V|d^2}$ is defined by $b(u,v)=u\odot v$, where the block-wise outer product $\odot$ is defined in (1), and we implicitly identify $(\mathbb{C}^d)^V$ with $\mathbb{C}^{d|V|}$ and $(\mathbb{C}^{d\times d})^V$ with $\mathbb{C}^{|V|d^2}$.

Analyticity assumption. From the proof of theorem 2.1, the solution $u$ to the Dirichlet problem (15) with $u_B=f$ and $q\in R$ is determined by $u_I=-(L_{II}+\operatorname{diag}(q_I))^{-1}L_{IB}f$, where we omitted the subscript $\sigma$ from the graph Laplacian $L_\sigma$. Hence the entries of $S_q$ are analytic for $q\in R$.
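For illustration (ours, with made-up numbers), the solve differs from the conductivity-only solve above merely by the block diagonal term $\operatorname{diag}(q_I)$. On a path graph $0-1-2$ with $d=2$, boundary $\{0,2\}$ and a single interior node, a sketch reads:

```python
import numpy as np

# Path graph 0 - 1 - 2, d = 2, boundary {0, 2}, interior {1}. The same (made-up)
# edge weight sig is used on both edges; q1 is a symmetric potential at node 1.
d = 2
sig = np.array([[2.0, 0.5], [0.5, 1.0]])
LII = 2 * sig                              # interior node 1 touches both edges
LIB = np.hstack([-sig, -sig])              # couplings to the boundary nodes 0 and 2
q1 = np.array([[0.3, 0.0], [0.0, 0.1]])    # Schrodinger potential at the interior node

f = np.ones(2 * d)                               # boundary data u_B = f
uI = -np.linalg.solve(LII + q1, LIB @ f)         # Dirichlet solve with the potential
```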

Here we consider the inverse problem of recovering a conductivity $\sigma\in(\mathbb{C}^{d\times d})^E$ that is rank deficient from its Dirichlet to Neumann map $\Lambda_{\sigma,0}$. To simplify the discussion, we assume that the conductivities of all edges have the same rank $r\geq 1$, i.e. $\operatorname{rank}\sigma(e)=r$ for all $e\in E$. The conductivities we consider here satisfy the inclusion assumption of theorem 2.3, namely that $N(\operatorname{Re}\sigma(e))\subset N(\operatorname{Im}\sigma(e))$ for all $e\in E$. Moreover, we assume that $\operatorname{Re}\sigma(e)$ and $\operatorname{Im}\sigma(e)$ commute, so that they can be diagonalized in the same basis of real eigenvectors. Therefore the eigenvectors of $\sigma(e)$ are real and we may define $x\in(\mathbb{R}^{d\times r})^E$ and $\lambda\in(\mathbb{C}^r)^E$ to write the eigendecomposition of each of the conductivities, i.e.

$$\sigma(e)=[x(e)]\operatorname{diag}(\lambda(e))[x(e)]^T,\qquad e\in E, \qquad (44)$$

with $x(e)^Tx(e)=I$, the $r\times r$ identity. By the hypothesis $\operatorname{Re}\sigma\succeq 0$ we must have $\operatorname{Re}\lambda>0$. We note that the assumptions we make here are more restrictive than those of theorem 2.3, but they suffice for networks of springs, masses and dampers, where $r=1$, as we see later in section 6.1.

    ● We take as admissible set

$$R=\{\lambda\in(\mathbb{C}^r)^E \mid \operatorname{Re}\lambda>0\}.$$

Clearly $R$ is an open convex set in $(\mathbb{C}^r)^E$, which can be identified with an open convex subset of $\mathbb{C}^{r|E|}$. The forward map associates to $\lambda\in R$ the Dirichlet to Neumann map $\Lambda_{\sigma(\lambda),0}$, where $\sigma(\lambda)$ has eigenvectors $x$ and eigenvalues $\lambda$, i.e. $\sigma(\lambda)$ satisfies (44). Theorem 2.3 guarantees that this map is well defined for $\lambda\in R$.

● Let $\lambda_1,\lambda_2\in R$. By lemma 2.4 with $\sigma_i=\sigma(\lambda_i)$ and $q_i=0$, $i=1,2$, we get the boundary/interior identity

$$g_2^T(\Lambda_{\sigma_1,0}-\Lambda_{\sigma_2,0})g_1=b(S_{\lambda_2}g_2,S_{\lambda_1}g_1)^T(\lambda_1-\lambda_2).$$

We define the matrix $S_\lambda\in\mathbb{C}^{r|E|\times d|B|}$ by its action on $f\in\mathbb{C}^{d|B|}$,

$$S_\lambda f=\operatorname{diag}(x)^T\nabla u,$$

where $u$ solves the Dirichlet problem (15) with boundary data $u_B=f$, conductivity $\sigma(\lambda)$ satisfying (44) and $q=0$. Recall that the Dirichlet problem solution is determined up to floppy modes. However, from the floppy mode characterization in lemma 3.5, we see that the definition of $S_\lambda$ is independent of the choice of floppy mode. The bilinear map $b:\mathbb{C}^{r|E|}\times\mathbb{C}^{r|E|}\to\mathbb{C}^{r|E|}$ is simply the Hadamard product, i.e. $b(u,v)=u\odot v$, and as before we identify $(\mathbb{C}^r)^E$ with $\mathbb{C}^{r|E|}$.

Analyticity assumption. From the proof of theorem 2.3 (see section 3.3.1), a solution $u$ to the Dirichlet problem (15) with boundary data $u_B=f$, conductivity $\sigma(\lambda)$ satisfying (44) and $q=0$ is determined by

$$u_I=-Q\bigl(Q^T(L_\sigma)_{II}Q\bigr)^{-1}Q^T(L_\sigma)_{IB}f,$$

where $Q$ is a real matrix such that $Q^TQ=I$ and $R(Q)=R((L_\sigma)_{II})$. Since $Q$ depends only on the graph and the (known a priori) eigenvectors $x$, the entries of $u_I$ are analytic for $\lambda\in R$. Hence the entries of $S_\lambda$ must also be analytic for $\lambda\in R$.
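The projected solve is easy to prototype. The following sketch is ours; the collinear three-node geometry and the numbers are made up. The two springs are aligned, so $(L_\sigma)_{II}$ is singular and a basis $Q$ of its column space is needed; here $Q$ is computed from the SVD of the real part of $(L_\sigma)_{II}$, which under the assumptions above spans the same (real) column space.

```python
import numpy as np

# Three collinear nodes 0 - 1 - 2 on the x axis; boundary {0, 2}, interior {1}.
# Rank-one conductivities sigma(e) = lambda(e) x(e) x(e)^T with both x(e) = e_1,
# so the interior node is free to move vertically (a floppy mode).
d = 2
x = {(0, 1): np.array([1.0, 0.0]), (1, 2): np.array([1.0, 0.0])}
lam = {(0, 1): 1.0 + 0.2j, (1, 2): 2.0 + 0.1j}             # Re(lambda) > 0
sigma = {e: lam[e] * np.outer(x[e], x[e]) for e in x}

L = np.zeros((3 * d, 3 * d), dtype=complex)
for (i, j), s in sigma.items():
    bi, bj = slice(d * i, d * (i + 1)), slice(d * j, d * (j + 1))
    L[bi, bi] += s
    L[bj, bj] += s
    L[bi, bj] -= s
    L[bj, bi] -= s

bidx, iidx = np.r_[0:d, 2 * d:3 * d], np.r_[d:2 * d]
LII, LIB = L[np.ix_(iidx, iidx)], L[np.ix_(iidx, bidx)]

# Orthonormal basis Q of the column space of (L_sigma)_II, from its real part.
U, sv, _ = np.linalg.svd(LII.real)
Q = U[:, sv > 1e-12 * sv.max()]                            # a single column here

f = np.ones(2 * d)                                         # boundary displacements
uI = -Q @ np.linalg.solve(Q.T @ LII @ Q, Q.T @ (LIB @ f))  # one Dirichlet solution
```

Any vector in the nullspace of $(L_\sigma)_{II}$ (here, a vertical displacement of the interior node) can be added to this particular solution; such floppy modes do not change $S_\lambda$, as noted above.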

Consider a graph $G=(V,E)$ with boundary $B$ and let $p\in(\mathbb{R}^d)^V$ be a function representing the equilibrium position of each node in dimension $d=2$ or $3$. Each edge $e\in E$ represents a spring with positive spring constant given by the function $k\in(0,\infty)^E$. Let $u\in(\mathbb{R}^d)^V$ denote the displacements of the nodes with respect to their equilibrium positions; the quantity $\nabla u\in(\mathbb{R}^d)^E$ is the net spring displacement. By Hooke's law, the force exerted by a spring is proportional to the net spring displacement, with proportionality constant $k$. For infinitesimally small displacements, the force exerted by spring $\{i,j\}\in E$ is proportional to the projection of the net displacement of spring $\{i,j\}$ onto the direction $p(i)-p(j)$. In other words, the forces are $\operatorname{diag}(\sigma)\nabla u$, where $\sigma\in(\mathbb{R}^{d\times d})^E$ is the positive semidefinite conductivity

$$\sigma(\{i,j\})=k(\{i,j\})\,\frac{[p(i)-p(j)][p(i)-p(j)]^T}{[p(i)-p(j)]^T[p(i)-p(j)]},\qquad \text{for } \{i,j\}\in E. \qquad (45)$$

Now assume we displace the boundary nodes by an amount $g\in(\mathbb{R}^d)^B$. If the interior nodes are left to move freely, the net forces at the interior nodes must be zero; this condition is equivalent to $(L_\sigma u)_I=0$. Hence finding the displacements in a spring network arising from (static) boundary displacements is the same as solving the Dirichlet problem (15) with the particular matrix valued conductivity (45) and zero Schrödinger potential. Using theorem 2.3, we see that the interior displacements are uniquely determined by the boundary displacements (up to floppy modes) and that the Dirichlet to Neumann map $\Lambda_{\sigma,0}$ is given by (19). In this particular case, this map is called the displacement to forces map.
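For concreteness, the conductivity (45) can be assembled as follows; this is a small sketch with made-up positions and spring constants, and the function name is ours.

```python
import numpy as np

def spring_conductivity(p, edges, k):
    """Rank-one edge conductivities of eq. (45): k(e) times the orthogonal
    projection onto the spring direction p(i) - p(j)."""
    sigma = {}
    for (i, j) in edges:
        a = p[i] - p[j]
        sigma[(i, j)] = k[(i, j)] * np.outer(a, a) / (a @ a)
    return sigma

# Toy triangle of springs in the plane.
p = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}
edges = [(0, 1), (1, 2), (0, 2)]
k = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 0.5}
sigma = spring_conductivity(p, edges, k)
# Each sigma(e) is symmetric positive semidefinite with rank one; its only
# non-zero eigenvalue is k(e), with eigenvector along p(i) - p(j).
```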

We now consider the case where the displacements depend on time, i.e. the function $u:V\times\mathbb{R}\to\mathbb{R}^d$ is such that $u(i,t)$ is the displacement about the equilibrium position $p(i)$ of node $i\in V$ at time $t\in\mathbb{R}$. We use the notation $\dot u=du/dt$ and $\ddot u=d^2u/dt^2$, and we assume that all nodes have a non-zero mass, given by the function $m\in(0,\infty)^V$.

We consider two kinds of viscous damping. The first is spring damping, which is proportional to the net velocity of a spring and is assumed to act along the direction of the spring at equilibrium, with proportionality constant given by a function $c_E\in[0,\infty)^E$. This corresponds to having a damper in parallel with each spring. The net forces associated with this damping are given by $L_\mu\dot u$, where $\mu\in(\mathbb{R}^{d\times d})^E$ is defined by

$$\mu(\{i,j\})=c_E(\{i,j\})\,\frac{[p(i)-p(j)][p(i)-p(j)]^T}{[p(i)-p(j)]^T[p(i)-p(j)]},\qquad \text{for } \{i,j\}\in E. \qquad (46)$$

The second is node damping, meaning that each node sits in a small cavity containing a viscous fluid and is thus subject to a damping force proportional to the node velocity, with proportionality constant given by a function $c_V\in[0,\infty)^V$. The forces associated with this kind of damping are $\operatorname{diag}(q_{\mathrm{damp}})\dot u$, where $q_{\mathrm{damp}}\in(\mathbb{R}^{d\times d})^V$ is defined by $q_{\mathrm{damp}}(i)=c_V(i)I$ for $i\in V$, with $I$ the $d\times d$ identity matrix.

    Putting everything together and applying Newton's second law, we get the equations of motion for a network of springs, masses and dampers:

$$\operatorname{diag}(q_{\mathrm{mass}})\ddot u+(\operatorname{diag}(q_{\mathrm{damp}})+L_\mu)\dot u+L_\sigma u=f, \qquad (47)$$

where $q_{\mathrm{mass}}\in(\mathbb{R}^{d\times d})^V$ is defined by $q_{\mathrm{mass}}(i)=m(i)I$ for $i\in V$. The function $f:V\times\mathbb{R}\to\mathbb{R}^d$ represents the external forces, i.e. $f(i,t)$ is the external force exerted on node $i\in V$ at time $t$. This second order system of ordinary differential equations can be written as

$$M\ddot u+C\dot u+Ku=f, \qquad (48)$$

where $M=\operatorname{diag}(q_{\mathrm{mass}})$ is the mass matrix, $C=\operatorname{diag}(q_{\mathrm{damp}})+L_\mu$ is the damping matrix and $K=L_\sigma$ is the stiffness matrix. We recall that $\sigma$ is defined in (45) and $\mu$ in (46).

For a time harmonic displacement $u(i,t)=\exp[\jmath\omega t]\,\hat u(i,\omega)$, the equations of motion (48) become

$$(-\omega^2 M+\jmath\omega C+K)\hat u=\hat f. \qquad (49)$$

Now consider the problem of finding the (frequency domain) displacements $\hat u_I$ at the interior nodes knowing the displacements $\hat u_B$ at the boundary nodes, assuming there are no external forces at the interior nodes (i.e. $\hat f_I=0$). We immediately see that we have another instance of the Dirichlet problem (15) with complex conductivity $\sigma+\jmath\omega\mu$ and complex Schrödinger potential $-\omega^2 q_{\mathrm{mass}}+\jmath\omega q_{\mathrm{damp}}$. Unfortunately we cannot apply theorem 2.1 directly because we have neither $\sigma\succ 0$ nor $-\omega^2 q_{\mathrm{mass}}\succeq 0$. To remedy this, we assume there is always a small amount of damping at the nodes, i.e. $c_V\in(0,\infty)^V$, in a way reminiscent of the limiting absorption principle for the Helmholtz equation. We rewrite the equations of motion (49) as follows

$$(\jmath\omega M+C+(\jmath\omega)^{-1}K)(\jmath\omega\hat u)=\hat f. \qquad (50)$$

Again if $\hat f_I=0$, this is an instance of the Dirichlet problem (15) with complex conductivity $\mu+(\jmath\omega)^{-1}\sigma$ and complex Schrödinger potential $q_{\mathrm{damp}}+\jmath\omega q_{\mathrm{mass}}$. A positive damping at the nodes guarantees $q_{\mathrm{damp}}\succ 0$, and thus the Dirichlet problem admits a unique solution by theorem 2.1. Indeed, the condition $(L_\mu)_{II}\succ-\lambda_{\min}(q_{\mathrm{damp}})$ always holds in this case because $(L_\mu)_{II}\succeq 0$. Hence the Dirichlet to Neumann map $\Lambda_{\mu+(\jmath\omega)^{-1}\sigma,\;q_{\mathrm{damp}}+\jmath\omega q_{\mathrm{mass}}}$ is well defined by (17), and so is the Dirichlet to Neumann map for the original problem, $\Lambda_{\sigma+\jmath\omega\mu,\;-\omega^2 q_{\mathrm{mass}}+\jmath\omega q_{\mathrm{damp}}}$, as can be seen from a homogeneity argument. Since the latter map takes frequency domain displacements to frequency domain forces, we also call it the displacement to forces map.
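The frequency domain displacement to forces map is straightforward to compute numerically. The sketch below is ours; the geometry, masses and damping values are made up. It assembles $M$, $C$ and $K$ for a small triangle network with one interior node and forms the map at a frequency $\omega$ as the Schur complement of $-\omega^2M+\jmath\omega C+K$ obtained by eliminating the interior degrees of freedom (we assume this Schur complement is the map defined by (17)).

```python
import numpy as np

def laplacian(n, d, w):
    """Block graph Laplacian for (d, d) matrix valued edge weights w."""
    L = np.zeros((d * n, d * n), dtype=complex)
    for (i, j), W in w.items():
        bi, bj = slice(d * i, d * (i + 1)), slice(d * j, d * (j + 1))
        L[bi, bi] += W
        L[bj, bj] += W
        L[bi, bj] -= W
        L[bj, bi] -= W
    return L

# Triangle network: nodes 0 and 2 on the boundary, node 1 interior, d = 2.
d, n, omega = 2, 3, 1.5
p = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}
edges = [(0, 1), (1, 2), (0, 2)]
k = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 0.5}   # spring constants
cE = {e: 0.05 for e in edges}                 # spring dampers
m = [1.0, 2.0, 1.0]                           # masses
cV = [0.1, 0.1, 0.1]                          # node dampers (strictly positive)

proj = {(i, j): np.outer(p[i] - p[j], p[i] - p[j]) / ((p[i] - p[j]) @ (p[i] - p[j]))
        for (i, j) in edges}
K = laplacian(n, d, {e: k[e] * proj[e] for e in edges})    # stiffness, from (45)
C = (laplacian(n, d, {e: cE[e] * proj[e] for e in edges})
     + np.kron(np.diag(cV), np.eye(d)))                    # damping: L_mu + diag(q_damp)
M = np.kron(np.diag(m), np.eye(d))                         # mass matrix

A = -omega**2 * M + 1j * omega * C + K        # frequency domain operator of (49)
bidx, iidx = np.r_[0:d, 2 * d:3 * d], np.r_[d:2 * d]
ABB, ABI = A[np.ix_(bidx, bidx)], A[np.ix_(bidx, iidx)]
AIB, AII = A[np.ix_(iidx, bidx)], A[np.ix_(iidx, iidx)]
Lam = ABB - ABI @ np.linalg.solve(AII, AIB)   # displacement to forces map at omega
```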

Let us consider the inverse problem of finding the spring constants $k\in\mathbb{R}^E$ from the static displacement to forces map $\Lambda_{\sigma(k),0}$ of a network of springs. We assume the equilibrium positions $p\in(\mathbb{R}^d)^V$ of the nodes are known. Uniqueness a.e. for this inverse problem can be established using the result in section 5.3 for rank deficient matrix valued conductivities, which we adapt here to this particular problem. Since we are not aware of a physically relevant interpretation of complex valued spring constants in the static case, we take spring constants in the admissible set

$$R=(0,\infty)^E.$$

The forward map associates to $k\in R$ the displacement to forces map $\Lambda_{\sigma(k),0}$. The conductivity $\sigma(k)$ is defined in (45). For an edge $\{i,j\}$, the spring constant $k(\{i,j\})$ is the only non-zero eigenvalue of the conductivity $\sigma(\{i,j\})$. To write the boundary/interior identity for this problem, we introduce the function $x\in(\mathbb{R}^d)^E$ that to an edge $\{i,j\}\in E$ associates the corresponding normalized eigenvector, i.e.

$$x(\{i,j\})=\frac{p(i)-p(j)}{\|p(i)-p(j)\|}.$$

    The boundary/interior identity is then

$$g_2^T(\Lambda_{\sigma_1,0}-\Lambda_{\sigma_2,0})g_1=([S_{k_1}g_1]\odot[S_{k_2}g_2])^T(k_1-k_2),$$

where $\sigma_i=\sigma(k_i)$, $i=1,2$, and $g_1,g_2$ are vectors in $\mathbb{R}^{d|B|}$. The matrix $S_{k_i}\in\mathbb{R}^{|E|\times d|B|}$ is defined such that the vector $S_{k_i}g_i$ contains the components of $\nabla u_i$ along the spring directions, i.e.

$$(S_{k_i}g_i)(e)=x(e)^T(\nabla u_i)(e),\qquad e\in E, \qquad (51)$$

where $u_i$ is the displacement arising from displacing the boundary nodes by $g_i$, $i=1,2$. Thus this problem fits the mold of section 4, and from section 4.4, if the linearization of the inverse problem for the spring constants in a spring network is injective for particular spring constants, then it must also be injective for almost all other spring constants.
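A sketch of $S_k$ and of one row of the linearization follows (ours; same toy triangle as before, helper names hypothetical). Each entry of `row` is the sensitivity of $g_2^T\Lambda_{\sigma(k),0}\,g_1$ to the spring constant of the corresponding edge, as dictated by the boundary/interior identity above.

```python
import numpy as np

d = 2
p = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}
edges = [(0, 1), (1, 2), (0, 2)]
k = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 0.5}

# Unit spring directions x(e) and the rank-one conductivities (45).
x = {(i, j): (p[i] - p[j]) / np.linalg.norm(p[i] - p[j]) for (i, j) in edges}
sigma = {e: k[e] * np.outer(x[e], x[e]) for e in edges}

L = np.zeros((3 * d, 3 * d))
for (i, j), s in sigma.items():
    bi, bj = slice(d * i, d * (i + 1)), slice(d * j, d * (j + 1))
    L[bi, bi] += s
    L[bj, bj] += s
    L[bi, bj] -= s
    L[bj, bi] -= s
bidx, iidx = np.r_[0:d, 2 * d:3 * d], np.r_[d:2 * d]

def dirichlet(g):
    """Displacement field with boundary displacements g and force-free interior."""
    u = np.zeros(3 * d)
    u[bidx] = g
    u[iidx] = -np.linalg.solve(L[np.ix_(iidx, iidx)], L[np.ix_(iidx, bidx)] @ g)
    return u

def S_k(g):
    """Edge displacement differences projected on the spring axes, as in (51)."""
    u = dirichlet(g)
    return np.array([x[(i, j)] @ (u[d*i:d*(i+1)] - u[d*j:d*(j+1)])
                     for (i, j) in edges])

g1, g2 = np.eye(2 * d)[:, 0], np.eye(2 * d)[:, 1]
row = S_k(g1) * S_k(g2)   # sensitivities of g2^T Lambda g1 to each spring constant
```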

Here we consider the problem where the operating frequency $\omega$, the equilibrium positions of the nodes $p\in(\mathbb{R}^d)^V$, the masses $m\in(0,\infty)^V$ and the node dampers $c_V\in(0,\infty)^V$ are all known, but we want to recover the spring constants $k\in(0,\infty)^E$ and the spring dampers $c_E\in(0,\infty)^E$ from the displacement to forces map at the frequency $\omega$.

    ● The admissible set is

$$R=\{\rho\in\mathbb{C}^E \mid \operatorname{Re}\rho>0,\ \operatorname{sign}(\omega)\operatorname{Im}\rho>0\},$$

where for convenience we grouped the spring constants and spring dampers into a single complex valued edge function $\rho$. To be more precise, if $\rho\in R$ we have $\operatorname{Re}\rho=k$ and $\operatorname{Im}\rho=\omega c_E$. The forward map associates to $\rho\in R$ the displacement to forces map $\Lambda_{\sigma(\rho),q}$, where $\sigma(\rho)$ is defined as in (45) with $k$ replaced by $\rho$, and $q=-\omega^2 q_{\mathrm{mass}}+\jmath\omega q_{\mathrm{damp}}$. The forward map is well defined and given by (16) for all $\rho\in R$ because we assumed damping at the nodes, see section 6.2.3.

    ● The boundary/interior identity is

$$g_2^T(\Lambda_{\sigma_1,q}-\Lambda_{\sigma_2,q})g_1=([S_{\rho_1}g_1]\odot[S_{\rho_2}g_2])^T(\rho_1-\rho_2),$$

where $\sigma_i=\sigma(\rho_i)$, $g_i\in\mathbb{C}^{d|B|}$ and the matrix $S_{\rho_i}$ is defined as in (51), for $i=1,2$.

Analyticity assumption. We can use theorem 2.1 to guarantee that the solution to the Dirichlet problem with boundary displacements $u_B=f$ is given by $u_I=-(L_{II}+\operatorname{diag}(q_I))^{-1}L_{IB}f$, where we omitted the subscript $\sigma(\rho)$ from the Laplacian $L_{\sigma(\rho)}$. This implies that the entries of $S_\rho$ are analytic for $\rho\in R$.

Thus the problem of finding the spring constants when the masses are known fits the mold of section 4. The above argument can be adapted to the case where there are no spring dampers, i.e. $c_E=0$; in that case the admissible set would be $R=(0,\infty)^E$. However, the problem no longer satisfies the assumptions of section 4 if we do not know whether spring dampers are present, because with $c_E\in[0,\infty)^E$ the admissible set would not be open.

Here we assume that the operating frequency $\omega$, the equilibrium positions of the nodes $p\in(\mathbb{R}^d)^V$, the spring constants $k\in(0,\infty)^E$ and the spring dampers $c_E\in[0,\infty)^E$ are known. The inverse problem is to find the masses $m\in(0,\infty)^V$ and the node dampers $c_V\in(0,\infty)^V$ from the displacement to forces map at the frequency $\omega$. As we see next, this problem also satisfies the assumptions of section 4.

    ● The admissible set is

$$R=\{\rho\in\mathbb{C}^V \mid \operatorname{Re}\rho<0,\ \operatorname{sign}(\omega)\operatorname{Im}\rho>0\},$$

where for convenience we grouped the masses and node dampers into a single complex valued function $\rho$ defined on the vertices. If $\rho\in R$, then $\operatorname{Re}\rho=-\omega^2 m$ and $\operatorname{Im}\rho=\omega c_V$. The forward map associates to $\rho\in R$ the displacement to forces map $\Lambda_{\sigma,q(\rho)}$, where $\sigma$ is defined as in (45) with $k$ replaced by $k+\jmath\omega c_E$, and $q(\rho)\in(\mathbb{C}^{d\times d})^V$ is defined by $q(\rho)(i)=\rho(i)I$ for all vertices $i\in V$. The displacement to forces map is well defined and given by (16) for all $\rho\in R$ because we assumed damping at the nodes, see section 6.2.3.

    ● The boundary/interior identity is

$$g_2^T(\Lambda_{\sigma,q(\rho_1)}-\Lambda_{\sigma,q(\rho_2)})g_1=(u_1\odot u_2)^T\operatorname{vect}(q(\rho_1-\rho_2)),$$

where $u_i$ solves the Dirichlet problem (15) with conductivity $\sigma+\jmath\omega\mu$ and Schrödinger potential $q(\rho_i)$, $i=1,2$.

Analyticity follows from the solution $u$ to the Dirichlet problem being well defined for all Schrödinger potentials of the form $q(\rho)$ (see the discussion in section 6.2.3).

    We have presented several inverse problems on graphs that share the common structure of section 4. In these inverse problems, the unknowns are matrices (or their eigenvalues) defined on the edges or nodes of a graph (section 2). By giving sufficient conditions under which the Dirichlet problem on a graph with matrix valued weights admits a unique solution we can deduce a set of parameters on which the forward map is analytic. In cases where the weights are rank deficient, the solution is not unique but can be determined up to "floppy modes" that depend only on the nullspaces of the weights (sections 2 and 3). Thus the forward map can still be shown to be analytic in this case. Analyticity of the forward map and its Jacobian have practical consequences that are given in sections 4.4 and 4.5. Particular examples of inverse problems on graphs are given in section 5, with a focus on inverse problems arising on networks of springs, masses and dampers (section 6) at a single frequency. Multi-frequency or time domain problems are left for future studies.

There remain many open questions. For example, it is not clear how to find a graph on which uniqueness a.e. holds for a given problem. This was done in [6] by trying many random graphs drawn from the Erdős-Rényi model [16]; a similar approach could be taken here. It is also not clear whether direct solution methods, such as the layer peeling algorithm in [13], exist for these matrix inverse problems on networks. Finally, the theoretical results we present here rely on analytic continuation, which is a notoriously unstable procedure.

This work was supported by the National Science Foundation grant DMS-1411577. Most of these results were derived in the Spring 2016 introduction to research class "Network Inverse Problems" at the University of Utah, supported by the same grant. FGV also acknowledges support from the National Science Foundation grant DMS-1439786 while in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Fall 2017 semester. FGV thanks the Laboratoire Jean Kuntzmann for hosting him during the 2017-2018 academic year. We thank Vladimir Druskin for his comments on an earlier version of this manuscript. Finally, the authors would like to thank the referees for their insightful comments, which among other things simplified the assumptions and proof of theorem 2.3.

A function $f:\mathbb{C}^n\to\mathbb{C}$ is analytic on an open set $R\subset\mathbb{C}^n$ if for any $z_0\in R$ the function $f(z)$ can be expressed as a convergent power series, i.e. we can find complex coefficients $c_\alpha$ for which the series

$$f(z)=\sum_{\alpha\in\mathbb{N}^n}c_\alpha(z-z_0)^\alpha$$

converges for all $z$ in a neighborhood of $z_0$. Here we used the notation $z^\alpha=z_1^{\alpha_1}z_2^{\alpha_2}\cdots z_n^{\alpha_n}$ for a multi-index $\alpha\in\mathbb{N}^n$. Rational functions of the form $P(z)/Q(z)$, with $P(z)$ and $Q(z)$ polynomials, are analytic on any connected open set where $Q(z)\neq 0$. Moreover, the product and the sum of two analytic functions are also analytic. The uniqueness lemma below is a consequence of analytic continuation: if $f(z)$ is analytic for $z\in R$ and we can find $z_0$ such that $f(z_0)\neq 0$, then the zero set of $f$,

$$Z=\{z\in R \mid f(z)=0\},$$

    must be a set of measure zero with respect to the Lebesgue measure on R (see e.g. [19]).



    [1] G. Alessandrini, Remark on a paper by H. Bellout and A. Friedman: "Identification problems in potential theory" [Arch. Rational Mech. Anal., 101 (1988), 143–160; MR0921936 (89c: 31003)], Boll. Un. Mat. Ital. A (7), 3 (1989), 243–249.
    [2] Dirichlet-to-Robin matrix on networks. Electronic Notes in Discrete Mathematics (2014) 46: 65-72.
    [3] Dirichlet-to-Robin maps on finite networks. Applicable Analysis and Discrete Mathematics (2015) 9: 85-102.
    [4] Overdetermined partial boundary value problems on finite networks. Journal of Mathematical Analysis and Applications (2015) 423: 191-207.
    [5] A discrete Liouville identity for numerical reconstruction of Schrödinger potentials. Inverse Probl. Imaging (2017) 11: 623-641.
    [6] On the solvability of the discrete conductivity and Schrödinger inverse problems. SIAM J. Applied Math. (2016) 76: 1053-1075.
    [7] Determination of the closure of the set of elasticity functionals. Arch. Ration. Mech. Anal. (2003) 170: 211-245.
    [8] F. R. K. Chung, Spectral Graph Theory, vol. 92 of CBMS Regional Conference Series in Mathematics, Published for the Conference Board of the Mathematical Sciences, Washington, DC, 1997.
    [9] Identification of resistors in electrical networks. J. Korean Math. Soc. (2010) 47: 1223-1238.
    [10] Réseaux électriques planaires. Ⅰ. Comment. Math. Helv. (1994) 69: 351-374.
    [11] Y. Colin de Verdière, Spectres de Graphes, vol. 4 of Cours Spécialisés [Specialized Courses], Société Mathématique de France, Paris, 1998.
    [12] Réseaux électriques planaires. Ⅱ. Comment. Math. Helv. (1996) 71: 144-167.
    [13] Finding the conductors in circular networks from boundary measurements. RAIRO Modél. Math. Anal. Numér. (1994) 28: 781-814.
    [14] Circular planar graphs and resistor networks. Linear Algebra Appl. (1998) 283: 115-150.
    [15] V. Druskin, personal communication, 2015.
    [16] On random graphs. I. Publ. Math. Debrecen (1959) 6: 290-297.
    [17] Characterization and synthesis of Rayleigh damped elastodynamic networks. Networks and Heterogeneous Media (2014) 9: 299-314.
    [18] Complete characterization and synthesis of the response function of elastodynamic networks. J. Elasticity (2011) 102: 31-54.
    [19] R. C. Gunning and H. Rossi, Analytic Functions of Several Complex Variables, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1965.
    [20] (2013) Matrix Analysis. Cambridge: Cambridge University Press.
    [21] Inverse problem in cylindrical electrical networks. SIAM J. Appl. Math. (2012) 72: 767-788.
    [22] J. E. Marsden and T. J. R. Hughes, Mathematical Foundations of Elasticity, Dover Publications, Inc., New York, 1994, Corrected reprint of the 1983 original.
    [23] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd edition, Springer Series in Operations Research and Financial Engineering, Springer, New York, 2006.
    [24] W. Rudin, Principles of Mathematical Analysis, 3rd edition, McGraw-Hill Book Co., New York-Auckland-Düsseldorf, 1976, International Series in Pure and Applied Mathematics.
    [25] J. Sylvester and G. Uhlmann, A global uniqueness theorem for an inverse boundary value problem, Ann. of Math. (2), 125 (1987), 153–169. doi: 10.2307/1971291
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)