
Error bounds for Kalman filters on traffic networks

  • Received: 01 May 2017 Revised: 01 February 2018
  • Primary: 93B07; Secondary: 35L65, 93C20

  • This work analyzes the estimation performance of the Kalman filter (KF) on transportation networks with junctions. To facilitate the analysis, a hybrid linear model describing traffic dynamics on a network is derived. The model, referred to as the switching mode model for junctions, combines the discretized Lighthill-Whitham-Richards partial differential equation with a junction model. The system is shown to be unobservable under nearly all of the regimes of the model, motivating attention to the estimation error bounds in these modes. The evolution of the estimation error is investigated via exploring the interactions between the update scheme of the KF and the intrinsic physical properties embedded in the traffic model (e.g., conservation of vehicles and the flow-density relationship). It is shown that the state estimates of all the cells in the traffic network are ultimately bounded inside a physically meaningful interval, which cannot be achieved by an open-loop observer.

    Citation: Ye Sun, Daniel B. Work. Error bounds for Kalman filters on traffic networks[J]. Networks and Heterogeneous Media, 2018, 13(2): 261-295. doi: 10.3934/nhm.2018012




    The unprecedented growth of sensing and computational capabilities has advanced the development of real-time traffic estimation techniques. However, many estimation algorithms proposed to monitor traffic conditions are only verified through experiments (e.g., [5,13,26,27,42,43]), while a theoretical analysis of the estimator performance is lacking. This is mainly due to the complexity of the models describing traffic (e.g., non-linearity and non-differentiability) [2], and, more importantly, the non-observability [30,37] of systems subject to conservation laws. When a system is not observable, the available sensor measurements (in conjunction with the model describing the system dynamics) are insufficient to correctly reconstruct the full state to be estimated. In traffic estimation problems, the non-observability issue is inevitable when a shock exists and the sensors cannot measure every state variable in the road network of interest.

    This work addresses the lack of theoretical performance analysis for traffic estimation under unobservable scenarios occurring at junctions. Note that in classical estimation / filtering theory, an unobservable system is very likely to result in an estimation error that diverges [1,17]. Nevertheless, in this work we derive theoretical error bounds for traffic estimation on unobservable transportation networks (with junctions), where the state estimate is provided by the Kalman filter (KF). Specifically, when the system is not observable, the error bounds are derived by leveraging the intrinsic properties of traffic models (e.g., mass conservation and the flow-density relationship), and by exploring the interactions between these intrinsic properties and the update scheme of the Kalman filtering algorithm.

    The classical conservation law describing the evolution of traffic density on a road link (i.e., a road section without junctions) is the Lighthill-Whitham-Richards partial differential equation (LWR PDE) [25,32]. The cell transmission model (CTM) [7,8,23] is a discretization of the LWR PDE using the Godunov scheme [11]. In [30,35], the CTM is transformed to a hybrid linear system known as the switching mode model (SMM) that switches among different linear modes, and the observability of each mode is analyzed.

    To extend the traffic model on road links to networks, a model is needed to describe how the traffic exiting the road links on the upstream side of a junction is received by the road links on the downstream side of the junction. A well known issue is that the conservation of vehicles across the junction is insufficient to uniquely define the flows at the junction. To address this issue, a number of junction models [8,9,10,12,14,15,18,19,20,22] have been proposed via additional rules governing the distribution or priority of the flows on different road links.

    In parallel to the ongoing development of traffic models, a number of sequential traffic estimation algorithms have been proposed to integrate model predictions with real-time sensor measurements. For example, the mixture Kalman filter is applied to the SMM in [35] to estimate traffic densities for ramp metering. The parallelized particle filters and the parallelized Gaussian sum particle filter are designed in [27] for computational scalability. In [40], an efficient multiple model particle filter is proposed for joint traffic estimation and incident detection on freeways. Other treatments of traffic estimation include [5,26,29,38,41,42,43]. A comprehensive survey of sequential estimation techniques for traffic models can be found in [34].

    Although many traffic estimation algorithms proposed in the existing literature are verified experimentally, very few theoretical results exist that analyze the performance of traffic estimators (e.g., bounds on the estimation error), especially under unobservable scenarios. The main results to date are as follows. In [16], the KF is applied to a Gaussian approximation of a stochastic traffic model, and the stochastic observability of the system is proved. To ensure observability of the system, a warm-up period is required during which the initial conditions are restricted to free-flow traffic conditions. In [6], the local observability around an equilibrium traffic state is studied using a Lagrangian formulation of the traffic model. The authors of [28] prove performance guarantees for a noise-free Luenberger observer for traffic estimation based on the SMM, which is the first work to provide a theoretical performance analysis for a traffic estimator under both observable and unobservable scenarios. The Luenberger observer in [28] discards measurement feedback in unobservable modes, ensuring that the spatial integral of the estimation error is conserved. Similarly, in [39], the estimator runs an open-loop predictor in unobservable cases to ensure that the estimation error does not diverge. Although this conserves the spatial integral of the estimation error, it is illustrated in [36] that dropping measurement feedback can lead to physically unreasonable estimates. In our earlier work [37], we showed that incorporating measurement feedback under unobservable scenarios is essential for maintaining physically reasonable estimates, i.e., due to the interactions of the model prediction and the measurement feedback in the filtering algorithm, the mean estimates of all state variables are guaranteed to be ultimately bounded inside the physically meaningful interval under unobservable modes. Note that the results in [6,16,28,37,39] are restricted to road stretches without junction dynamics, which are the focus of this article.

    The main contribution of this article is the theoretical analysis of the estimation error bounds when using the KF to estimate traffic densities on transportation networks with junctions. To support a performance study of the KF, we first propose a switched linear system to model the evolution of traffic densities on road networks with junctions, namely the switching mode model for junctions (SMM-J). The SMM-J is an extension of the SMM, in the sense that the SMM-J describes the traffic density evolution on a road section with a junction, while the SMM only considers one-dimensional (i.e., without junctions) road sections. The SMM-J combines a switched linear system representation of the CTM with the junction model developed in [24]; however, other junction models can also be incorporated in a similar fashion to derive linear traffic models on junctions. We also provide the observability result for each mode of the SMM-J, and show that nearly all modes are unobservable due to the irreversibility of the conservation laws in the presence of shocks and junctions. Compared to one-dimensional road sections, the issue of non-observability is encountered more frequently when junctions exist, motivating attention to the error bounds under unobservable scenarios. Next, we prove the estimation error properties of the KF that uses the SMM-J to estimate an unobservable freeway section with a junction inside. Although an unobservable system typically results in diverging estimation errors [17], we show that by combining the update scheme of the KF with the physical properties embedded in the traffic model, the following properties can be derived for traffic estimation under unobservable modes:

    1. For an unobservable road section, the Kalman gain has bounded infinity norm. This is a necessary condition to ensure bounded mean estimation error for the KF.

    2. When a road section stays in an unobservable mode, the mean estimate of each state variable is ultimately bounded inside a physically meaningful interval, thus the mean estimation error of the entire state is also ultimately bounded.

    This work complements our previous work studying the performance of traffic estimators that use the SMM to monitor unobservable one-dimensional freeway traffic conditions [37]. Thus, when estimating the traffic conditions in a large-scale road network, we can partition the traffic network into local sections which are either one-dimensional or contain a junction. The traffic condition in each local section is estimated by a local estimator based on the KF and the SMM-J (or the SMM). In this distributed manner, the estimation errors for the sections without junctions stay below the bounds derived in [37], and the error bounds studied in this work justify the estimation accuracy in the sections with junctions.

    This work is organized as follows. Section 2 reviews the KF and its error properties under observable and unobservable systems, and briefly summarizes the CTM. Section 3 introduces the SMM-J, its observability under different modes, and the properties of the state transition matrices that reflect the intrinsic physical properties of the traffic model. The properties of the state transition matrices of the SMM-J are applied in Section 4, where we prove the boundedness of the Kalman gain and the ultimate bound of the mean estimation error under unobservable scenarios. Finally, some concluding remarks are provided in Section 5.

    Notations. Let $I_n$ and ${\bf{0}}_{n, m}$ be the $n\times n$ identity matrix and the $n\times m$ zero matrix, respectively. The subscripts of $I_n$ and ${\bf{0}}_{n, m}$ are sometimes omitted, when dimensionality is clear from the context. Denote as $\mathbb{E}[\cdot]$ the expectation operator, and the "bar" symbol is used to denote the expected value of random vector $x$, i.e., $\bar{x} = \mathbb{E}[x]$. The symbol $^{\top}$ is used to denote the transpose operator.

    In this section, we first review the Kalman filter and the properties of its error covariance as well as mean estimation error under observable and unobservable scenarios. We also provide a brief overview of the cell transmission model used to construct the SMM-J.

    In this subsection, we briefly review the KF and introduce necessary notations in this article. Consider a general linear time-varying system

    $ \rho_{k+1} = A_k\rho_k+u_k+\omega_k, \qquad \rho_k\in\mathbb{R}^n, $ (1)
    $ z_k = H_k\rho_k+v_k, \qquad z_k\in\mathbb{R}^m, $ (2)

    where $\rho_k$ and $z_k$ are the state vector and sensor measurement vector at time $k\in \mathbb{N}$, respectively. The matrices $A_k$ and $H_k$ are the state transition matrix and the observation matrix at time $k$. The term $u_k$ in (1) is a deterministic system input. The noise terms $\omega_k\sim \mathcal{N}({\bf{0}}, Q_k)$ and $v_k\sim \mathcal{N}({\bf{0}}, R_k)$ are the white Gaussian model and measurement noise, where $Q_k$ and $R_k$ denote the model and the measurement error covariance matrices at time $k$. Given the sensor data up to time $k$ denoted by $\mathcal{Z}_k = \{z_0, \cdots, z_k\}$, the prior estimate and posterior estimate of the state can be expressed as $\rho_{k|k-1} = \mathbb{E}[\rho_k|\mathcal{Z}_{k-1}]$ and $\rho_{k|k} = \mathbb{E}[\rho_k|\mathcal{Z}_k]$, respectively. Let $\eta_{k|k-1} = \rho_{k|k-1}-\rho_k$ and $\eta_{k|k} = \rho_{k|k}-\rho_k$ denote the prior and posterior estimation errors. The estimation error covariance matrices associated with $\rho_{k|k-1}$ and $\rho_{k|k}$ are given by $\Gamma_{k|k-1} = \mathbb{E}[\eta_{k|k-1}\eta_{k|k-1}^{\top}|\mathcal{Z}_{k-1}]$ and $\Gamma_{k|k} = \mathbb{E}[\eta_{k|k}\eta_{k|k}^{\top}|\mathcal{Z}_k]$. The KF sequentially computes $\rho_{k|k}$ from $\rho_{k-1|k-1}$ as follows:

    $ \text{Prediction:}\quad \begin{cases} \rho_{k|k-1} = A_{k-1}\rho_{k-1|k-1}+u_k \\ \Gamma_{k|k-1} = A_{k-1}\Gamma_{k-1|k-1}A_{k-1}^{\top}+Q_{k-1}, \end{cases} $ (3)
    $ \text{Correction:}\quad \begin{cases} \rho_{k|k} = \rho_{k|k-1}+K_k(z_k-H_k\rho_{k|k-1}) \\ \Gamma_{k|k} = \Gamma_{k|k-1}-K_kH_k\Gamma_{k|k-1} \\ K_k = \Gamma_{k|k-1}H_k^{\top}\left(R_k+H_k\Gamma_{k|k-1}H_k^{\top}\right)^{-1}. \end{cases} $ (4)

    In (4), the matrix $K_k$ is denoted as the Kalman gain at time $k$. Note that for all $k$, the state estimates $\rho_{k|k-1}$ and $\rho_{k|k}$ are random vectors. The mean posterior estimate and the mean posterior estimation error1 are denoted as $\bar{\rho}_{k|k}$ and $\bar{\eta}_{k|k}$, respectively.

    1For the remainder of this article, the term state estimates/estimation errors/error covariance refers to the posterior estimates, unless specified otherwise.
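
    For concreteness, the following is a minimal NumPy sketch of one prediction-correction step of the recursion (3)-(4); the function and variable names are illustrative and not part of the model.

```python
import numpy as np

def kf_step(rho_post, Gamma_post, A, u, Q, H, R, z):
    """One step of the KF: prediction (3) followed by correction (4)."""
    # Prediction (3): propagate the posterior mean and covariance
    rho_prior = A @ rho_post + u
    Gamma_prior = A @ Gamma_post @ A.T + Q

    # Correction (4): fold the measurement z in through the Kalman gain
    S = R + H @ Gamma_prior @ H.T                 # innovation covariance
    K = Gamma_prior @ H.T @ np.linalg.inv(S)      # Kalman gain K_k
    rho_new = rho_prior + K @ (z - H @ rho_prior)
    Gamma_new = Gamma_prior - K @ H @ Gamma_prior
    return rho_new, Gamma_new, K
```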

    Observability. The discrete system (1)-(2) is uniformly completely observable if there exists a positive integer $N$ and positive constants $\alpha$, $\beta$ such that

    $ \alpha I_n < \mathcal{I}_{k, k-N} < \beta I_n, \qquad \text{for all } k\ge N, $ (5)

    where $\mathcal{I}_{k_1, k_0}$ is defined as the information matrix for time interval $k\in [k_0, k_1]$:

    $\mathcal{I}_{k_1, k_0} = \sum\limits_{k = k_0}^{k_1}\Xi_{k, k_1}^{\top}H_k^{\top}R_k^{-1}H_k\Xi_{k, k_1},$

    where

    $ \Xi_{k, k_1} = \prod\limits_{\kappa = k}^{k_1-1}A_{\kappa}^{-1}, \quad\text{and}\quad \Xi_{k_1, k} = \Xi_{k, k_1}^{-1} = \prod\limits_{\kappa = k_1-1}^{k}A_{\kappa}. $ (6)
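
    As a numerical illustration of this definition, the sketch below accumulates the information matrix over a finite window; the helper name and the window convention (the list entry A_list[k] maps time $k$ to $k+1$, so H_list and R_list carry one more entry than A_list) are assumptions of this sketch, not notation from the article. Uniform complete observability then amounts to the eigenvalues of such windowed matrices staying between fixed positive constants, as required in (5).

```python
import numpy as np

def information_matrix(A_list, H_list, R_list):
    """Information matrix over the window k = 0, ..., k1 with k1 = len(A_list).

    A_list[k] maps time k to k+1; H_list and R_list have k1 + 1 entries.
    """
    n = A_list[0].shape[0]
    k1 = len(A_list)
    info = np.zeros((n, n))
    for k in range(k1 + 1):
        # Xi_{k, k1} = A_k^{-1} A_{k+1}^{-1} ... A_{k1-1}^{-1} (identity when k = k1)
        Xi = np.eye(n)
        for kappa in range(k, k1):
            Xi = Xi @ np.linalg.inv(A_list[kappa])
        info += Xi.T @ H_list[k].T @ np.linalg.inv(R_list[k]) @ H_list[k] @ Xi
    return info
```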

    The observability of a system characterizes whether the sensor measurements of the system are sufficient for the KF to correctly estimate the state vector. Given the positive definiteness of the information matrix, the observability Gramian linking the initial state $\rho_0$ to the sensor measurements up to time $N$ (i.e., $\mathcal{Z}_N$) is invertible. The next lemma states the boundedness (both from below and above) of the estimation error covariance and the convergence (to zero) of the mean estimation error, when using the KF (3)-(4) to estimate a uniformly completely observable system.

    Lemma 1 (Chapter 7.6 in [17]). If the dynamical system (1)-(2) is uniformly completely observable and the following conditions hold:

    (C.1): there exist positive constants $q_1$, $q_2$, $r_1$ and $r_2$ such that $q_1 I_n <Q_{k}<q_2 I_n$ and $r_1 I_m<R_{k}<r_2 I_m$ for all $k$;

    (C.2): the initial error covariance is positive definite, i.e., $\Gamma_{0|0}> {\bf{0}}$;

    (C.3): the state transition matrix $A_k$ is nonsingular for all $k$,

    then there exist positive constants $c_1>0$ and $c_2>0$ such that the error covariance of the KF (3)-(4) satisfies

    $c_1 I_n < \Gamma_{k|k} < c_2 I_n, \qquad \text{for all } k\ge 0.$

    Moreover, there exist positive constants $a>0$ and $0<q<1$ such that the 2-norm2 of the mean estimation error satisfies

    2For the remainder of this article, we denote as $\|\cdot\|$ the 2-norm of a matrix or a vector.

    $\|\bar{\eta}_{k|k}\| < a\, q^k \|\bar{\eta}_{0|0}\|, \qquad \text{for all } k\ge 0.$

    When system (1)-(2) is not observable, the mean estimation error $\bar{\eta}_{k|k}$ will diverge, unless the unobservable part of the state is bounded or converges to zero automatically. In Appendix.1, we present an example illustrating the evolution of the mean estimation error given by the KF when tracking an unobservable system.

    The classical conservation law describing the evolution of traffic density $\rho(t, x)$ on a road at location $x$ and time $t$ is the Lighthill-Whitham-Richards partial differential equation (LWR PDE) [25,32]:

    $ \partial_t\rho+\partial_x{\rm{F}}(\rho) = 0. $ (7)

    The function ${\rm{F}}(\rho) = \rho {\rm{v}}(\rho)$ is called the flux function, where ${\rm{v}}(\rho)$ is an empirical velocity function used to close the model.

    The cell transmission model (CTM) [7,8,23] is a discretization of (7) using a Godunov scheme [11]. Consider a uniformly sized discretization grid defined by a space step $\Delta x >0$ and a time step $\Delta t >0$. Let $l$ index the cell defined by $x\in [l\Delta x, (l+1)\Delta x)$, and denote as $\rho^l_{k}$ the density at time $k\Delta t$ in cell $l$, where $k\in \mathbb{N}$ and $l\in \mathbb{N}^{+}$. Moreover, denote as ${\rm{f}}(\rho^{l-1}_{k}, \rho^l_{k})$ the flux between cell $l-1$ and $l$. In the CTM, the discretized model (7) becomes

    $ \rho^l_{k+1} = \rho^l_{k}+\frac{\Delta t}{\Delta x}\left({\rm{f}}(\rho^{l-1}_{k}, \rho^l_{k})-{\rm{f}}(\rho^{l}_{k}, \rho^{l+1}_{k})\right), $ (8)

    where ${\rm{f}}(\rho^{l-1}_{k}, \rho^l_{k})$ is computed by

    $ {\rm{f}}(\rho^{l-1}_{k}, \rho^l_{k}) = \min\left\{{\rm{s}}(\rho^{l-1}_{k}), {\rm{r}}(\rho^l_{k})\right\}. $ (9)

    In (9), ${\rm{s}}(\rho^{l-1}_{k})$ is the sending capacity (i.e., maximum sending flow) of cell $l-1$ at time $k$, which is a function of $\rho^{l-1}_{k}$, and ${\rm{r}}(\rho^{l}_{k})$ is the receiving capacity (i.e., maximum receiving flow) of cell $l$ at time $k$, which is a function of $\rho^{l}_{k}$. Note that if the Courant-Friedrichs-Lewy (CFL) condition is satisfied, the solution of the CTM converges in $L^1$ to the weak solution of the LWR PDE as $\Delta x\rightarrow 0$ [33].

    Remark 1. Note that the terminologies sending capacity and receiving capacity are equivalent to the notions of demand and supply. Both sending/receiving and demand/supply terminologies are widely used in the traffic community, with the former introduced in [7,8], and the latter introduced in [23]. In this work we use the sending/receiving terminology, which is consistent throughout the article, and is also consistent with our previous work [37].

    The flux function [8] used in this work is the triangular flux function (shown in Figure 1) given by

    Figure 1.  The triangular fundamental diagram in (10).
    $ {\rm{F}}(\rho) = \begin{cases} \rho v_{\text{m}} & \text{if } \rho\in[0, \varrho_{\text{c}}] \\ w(\varrho_{\text{m}}-\rho) & \text{if } \rho\in[\varrho_{\text{c}}, \varrho_{\text{m}}] \end{cases} $ (10)

    where $w = \frac{\varrho_{\text{c}} v_{\text{m}}}{\varrho_{\text{m}}-\varrho_{\text{c}}}$, $v_{\text{m}}$ denotes the freeflow speed, and $\varrho_{\text{m}}$ denotes the maximum density. The parameter $\varrho_{\text{c}}$ is the critical density at which the maximum flux is realized. For the triangular fundamental diagram, the flux function has different slopes in freeflow ($0<\rho \leq\varrho_{\text{c}}$) and congestion ($\varrho_{\text{c}}<\rho \le \varrho_{\text{m}}$). In freeflow, the slope is $v_{\text{m}}$, and in congestion, it is $w$. Under the triangular flux function, the sending and receiving capacities are determined by:

    $ {\rm{s}}(\rho) = \begin{cases} \rho v_{\text{m}} & \text{if } \rho\in[0, \varrho_{\text{c}}] \\ q_{\text{m}} & \text{if } \rho\in[\varrho_{\text{c}}, \varrho_{\text{m}}] \end{cases} \qquad {\rm{r}}(\rho) = \begin{cases} q_{\text{m}} & \text{if } \rho\in[0, \varrho_{\text{c}}] \\ w(\varrho_{\text{m}}-\rho) & \text{if } \rho\in[\varrho_{\text{c}}, \varrho_{\text{m}}] \end{cases} $ (11)

    where $q_{\text{m}}$ is the maximum flow given by $q_{\text{m}} = v_{\text{m}}\varrho_{\text{c}}$.
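
    To fix ideas, the following sketch implements one CTM step (8)-(9) on a single link with the triangular sending and receiving capacities (11); the parameter values are illustrative, and the boundary flows are treated as given inputs.

```python
import numpy as np

# Illustrative triangular fundamental diagram parameters
v_m, rho_m, rho_c = 30.0, 0.12, 0.03        # freeflow speed, jam density, critical density
q_m = v_m * rho_c                           # maximum flow
w = rho_c * v_m / (rho_m - rho_c)           # congestion wave speed

def sending(rho):                           # s(rho) in (11)
    return v_m * rho if rho <= rho_c else q_m

def receiving(rho):                         # r(rho) in (11)
    return q_m if rho <= rho_c else w * (rho_m - rho)

def ctm_step(rho, phi_in, phi_out, dt, dx):
    """One CTM update (8) on a link; interior fluxes follow (9), and
    phi_in / phi_out are the upstream and downstream boundary flows.
    The CFL condition v_m * dt <= dx is assumed to hold."""
    n = len(rho)
    flux = np.empty(n + 1)
    flux[0], flux[-1] = phi_in, phi_out
    for l in range(1, n):
        flux[l] = min(sending(rho[l - 1]), receiving(rho[l]))   # (9)
    return rho + dt / dx * (flux[:-1] - flux[1:])               # (8)
```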

    In this section, we derive the hybrid linear model describing traffic density evolution on a road section with a junction, namely the switching mode model for junctions (SMM-J). The SMM-J combines a switched linear system representation of the CTM (i.e., the CTM where the flux function is the triangular fundamental diagram) with a junction solver. This section starts with a review of the applied junction solver. Next, we introduce the SMM-J and provide examples regarding its explicit formulas. Finally, the observability of the SMM-J is discussed.

    This subsection introduces the junction solver proposed in [24]. As shown in Figure 2a, the junction solver computes flux ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ and ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$ between the connecting cells $l$, $i$ and $j$ forming a diverge junction. In the merge junction shown in Figure 2b, the junction solver computes ${\rm{f}}(\rho^{i}_{k}, \rho^{l}_{k})$ and ${\rm{f}}(\rho^{j}_{k}, \rho^{l}_{k})$ between the connecting cells. This solver is later applied in Section 3.2 to develop the SMM-J.

    Figure 2.  A diverge and a merge junction connected by three cells indexed by $i$, $j$, and $l$.

    At a diverge junction in Figure 2a, the junction solver is governed by the following rules:

    (R.1): The mass across the junction is conserved.

    (R.2): The throughput flow ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})+{\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$, i.e., the outgoing flow of cell $l$, is maximized subject to the maximum flow that can be sent or received on each connecting cell.

    (R.3): The distribution of flux ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ and ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$ satisfies ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k}) = \alpha_{\text{d}} {\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$, where $\alpha_{\text{d}}$ is the prescribed distribution parameter that models the routing preference to the downstream cells. When (R.3) conflicts with (R.2), i.e., the flow solution that satisfies the distribution equation does not maximize the throughput, then (R.3) is relaxed, such that the solution satisfies (R.2) and minimizes the deviation from the prescribed distribution parameter, e.g., $|{\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})/{\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})-\alpha_{\text{d}}|$.

    The diverge junction solver is posed as a convex program with a carefully constructed objective function to accommodate the throughput maximization (R.2) and the flow distribution (R.3). The mathematical formula of the diverge junction solver is given below (see [24,Section 3.2] for more details).

    Definition 1 (Convex program for the diverge junction solver). Define the objective function ${\rm{J}}(f_1, f_2)$ as:

    ${\rm{J}}(f_1, f_2) = (1-\lambda)(f_1+f_2)-\lambda\left(f_2-\alpha_{\text{d}} f_1\right)^2,$

    where $0<\lambda<1$ and $\lambda$ is chosen such that $\frac{\partial {\rm{J}}}{\partial f_1}>0$ and $\frac{\partial {\rm{J}}}{\partial f_2}>0$3. These conditions on $\lambda$ are used to prioritize maximizing the throughput ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})+{\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$ (as stated in (R.2)) over minimizing the deviation from the prescribed distribution parameter $\alpha_{\text{d}}$ (as stated in (R.3)). The fluxes ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ and ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$ are obtained by solving the following convex program:

    3One possible choice of $\lambda$ is $\lambda = \min\left\{(1+2\alpha_{\text{d}}^2q_{\text{m}}+\varepsilon)^{-1}, (1+2q_{\text{m}}+\varepsilon)^{-1}\right\}$, where $\varepsilon > 0$ can be any positive value.

    $ \left({\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k}), \ {\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})\right) = \underset{f_1, f_2}{\operatorname{argmax}} \ {\rm{J}}(f_1, f_2) \quad \text{s.t.} \quad f_1\le {\rm{r}}(\rho_k^i) $ (12)
    $ f_2 \le {\rm{r}}(\rho_k^j) $ (13)
    $ f_1+f_2 \le {\rm{s}}(\rho_k^l). $ (14)
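
    The program (12)-(14) can also be checked numerically. The sketch below does so with CVXPY, taking the sending and receiving capacities as plain numbers, adding explicit nonnegativity of the fluxes (implicit in the shaded region of Figure 3), and using an illustrative small value for $\lambda$; it is a sanity check on Definition 1 rather than part of the model.

```python
import cvxpy as cp

def diverge_solver_convex(s_l, r_i, r_j, alpha_d, lam=1e-3):
    """Solve the diverge junction program (12)-(14) of Definition 1.

    s_l, r_i, r_j are sending/receiving capacities; lam stands in for
    lambda and is assumed small enough that dJ/df1 > 0 and dJ/df2 > 0.
    """
    f1, f2 = cp.Variable(), cp.Variable()
    J = (1 - lam) * (f1 + f2) - lam * cp.square(f2 - alpha_d * f1)
    constraints = [f1 <= r_i, f2 <= r_j, f1 + f2 <= s_l, f1 >= 0, f2 >= 0]
    cp.Problem(cp.Maximize(J), constraints).solve()
    return float(f1.value), float(f2.value)
```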

    Figure 3 provides a graphical illustration of the solutions of the convex program defined in Definition 1. The blue vertical solid line denotes the receiving capacity of cell $i$, i.e., ${\rm{r}}(\rho_k^i)$, and the blue horizontal solid line denotes the receiving capacity of cell $j$, i.e., ${\rm{r}}(\rho_k^j)$. The intercepts of the blue dashed line (with slope -1) denote the sending capacity of cell $l$, i.e., ${\rm{s}}(\rho_k^l)$. The shaded area denotes the feasible values of the flux from cell $l$ to $i$ and the flux from cell $l$ to $j$; this feasible area is obtained from (12)-(14). The slope of the black dotted line is the prescribed distribution ratio $\alpha_{\text{d}}$. The fluxes computed by the junction solver in Definition 1 are marked by the red dot, whose horizontal-axis and vertical-axis values are the obtained ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ and ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$, respectively.

    Figure 3.  Three scenarios in the junction solver [24] for the diverge junction shown in Figure 2a, where cell $l$ diverges to cell $i$ and cell $j$. The blue vertical (resp. horizontal) solid line denotes the receiving capacity of cell $i$ (resp. cell $j$). The intercepts of the blue dashed line denote the sending capacity of cell $l$. The shaded area denotes the feasible values of the flux from cell $l$ to $i$ and the flux from cell $l$ to $j$. The slope of the black dotted line is the prescribed distribution ratio $\alpha_{\text{d}}$. The fluxes computed by the junction solver are marked by the red dot, whose horizontal-axis and vertical-axis values are the obtained ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ and ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$, respectively. Note that in diverge case Ⅱ and diverge case Ⅲ, the receiving capacities of cell $i$ and cell $j$ are not necessarily smaller than the sending capacity of cell $l$, and the graphical illustration of the flux solutions also applies when ${\rm{r}}(\rho_k^i)\ge {\rm{s}}(\rho_k^l)$ and/or ${\rm{r}}(\rho_k^j)\ge {\rm{s}}(\rho_k^l)$.

    According to the convex program in Definition 1, to obtain ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ and ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$ we need to find the solution point (i.e., the red dot) in Figure 3 that satisfies the following requirements:

    ● The point lies in the shaded feasible area, so that conditions (12)-(14) are satisfied;

    ● The point is as close as possible to the blue dashed line (with slope -1), so that the throughput ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})+{\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$ is maximized; this is due to the fact that the distance $d_0$ between the point and the blue dashed line is proportional to the disparity between the sending capacity of cell $l$ and the throughput, i.e., $|{\rm{s}}(\rho_k^l)-({\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})+{\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k}))| = \sqrt{2}d_0$ (as illustrated in diverge case Ⅰ of Figure 3);

    ● Conditioned on prioritizing maximizing the throughput, the point is as close as possible to the black dotted line (with slope $\alpha_{\text{d}}$), so that the distribution ratio is respected; this means that when maximizing the throughput conflicts with the distribution ratio, the requirement ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k}) = \alpha_{\text{d}} {\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ can be relaxed.

    As shown in Figure 3, there are in total three scenarios depending on the values of ${\rm{s}}(\rho_k^l)$, ${\rm{r}}(\rho_k^i)$ and ${\rm{r}}(\rho_k^j)$. The three scenarios are: (i) diverge case Ⅰ, when ${\rm{s}}(\rho_k^l) \ge {\rm{r}}(\rho_k^i)+{\rm{r}}(\rho_k^j)$; (ii) diverge case Ⅱ, when ${\rm{s}}(\rho_k^l) < {\rm{r}}(\rho_k^i)+{\rm{r}}(\rho_k^j)$, and the prescribed distribution ratio $\alpha_{\text{d}}$ can be followed exactly; and (iii) diverge case Ⅲ, when ${\rm{s}}(\rho_k^l) < {\rm{r}}(\rho_k^i)+{\rm{r}}(\rho_k^j)$, but (due to throughput maximization) the prescribed distribution ratio $\alpha_{\text{d}}$ cannot be followed exactly.

    Under diverge case Ⅰ, the fluxes ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ and ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$ are computed by:

    ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k}) = {\rm{r}}(\rho^{i}_{k}), \qquad {\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k}) = {\rm{r}}(\rho^{j}_{k}). $ (15)

    Under diverge case Ⅱ, the fluxes ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ and ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$ are given as follows:

    $ {\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k}) = \frac{1}{\alpha_{\text{d}}+1}{\rm{s}}(\rho^{l}_{k}), \qquad {\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k}) = \frac{\alpha_{\text{d}}}{\alpha_{\text{d}}+1}{\rm{s}}(\rho^{l}_{k}). $ (16)

    Under diverge case Ⅲ, the junction solver first finds the two vertices of the shaded area that lie on the dashed line, and then defines the vertex closer to the dotted line as the flux solution ${\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$ and ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k})$. Hence, depending on the magnitudes of ${\rm{s}}(\rho^{l}_{k})$, ${\rm{r}}(\rho^{i}_{k})$ and ${\rm{r}}(\rho^{j}_{k})$, the solution could be either

    $ {\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k}) = {\rm{r}}(\rho^{i}_{k}), \qquad {\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k}) = {\rm{s}}(\rho^{l}_{k})-{\rm{r}}(\rho^{i}_{k}), $ (17)

    or

    $ {\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k}) = {\rm{s}}(\rho^{l}_{k})-{\rm{r}}(\rho^{j}_{k}), \qquad {\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k}) = {\rm{r}}(\rho^{j}_{k}). $ (18)

    Note that the diverge case Ⅲ shown in Figure 3 illustrates the flux solutions presented in (17).
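
    Collecting (15)-(18), the three diverge cases reduce to a closed-form selection, sketched below; it reuses the sending and receiving helpers from the CTM sketch in Section 2 and is intended to reproduce the solutions of the convex program in Definition 1.

```python
def diverge_fluxes(rho_l, rho_i, rho_j, alpha_d):
    """Closed-form diverge junction fluxes f(rho_l, rho_i) and f(rho_l, rho_j)."""
    s_l = sending(rho_l)
    r_i, r_j = receiving(rho_i), receiving(rho_j)

    if s_l >= r_i + r_j:                      # diverge case I, (15)
        return r_i, r_j

    f_i = s_l / (1.0 + alpha_d)               # ratio-respecting split, (16)
    f_j = alpha_d * s_l / (1.0 + alpha_d)
    if f_i <= r_i and f_j <= r_j:             # diverge case II
        return f_i, f_j

    if f_i > r_i:                             # diverge case III, cell i binds, (17)
        return r_i, s_l - r_i
    return s_l - r_j, r_j                     # diverge case III, cell j binds, (18)
```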

    Remark 2. The diverge junction solver described above is a non-First-In-First-Out (FIFO) model [8,10]. The FIFO diverge model maximizes the outgoing flow of cell $l$ subject to the distribution ratio ${\rm{f}}(\rho^{l}_{k}, \rho^{j}_{k}) = \alpha_{\text{d}} {\rm{f}}(\rho^{l}_{k}, \rho^{i}_{k})$. Although the FIFO model circumvents the conflicts between throughput maximization and flow distribution, it produces unrealistic solutions in some circumstances. Several diverge junction models have been proposed to resolve this issue [14,20,22]. In the same spirit of these models, the diverge junction solver applied in this article is developed to produce similar traffic condition dependent solutions without introducing additional complexity on the traffic dynamics [24]. As a related note, the results proved in this article can be extended to FIFO models with minor changes to the proof.

    At a merge junction in Figure 2b, the junction solver conserves mass, and maximizes the throughput while minimizing the deviation from a prescribed priority parameter $\alpha_{\text{p}}$ denoting the flow assignment ratio ${\rm{f}}(\rho^{j}_{k}, \rho^{l}_{k}) = \alpha_{\text{p}} {\rm{f}}(\rho^{i}_{k}, \rho^{l}_{k})$. This priority equation is relaxed if it conflicts with flow maximization. The reader is referred to [10,12] for a detailed description of the merge model.

    The structures of the diverge and merge models are similar, in the sense that both maximize the throughput while minimizing the deviation from the prescribed distribution or priority parameters. Therefore, the remainder of this article focuses on deriving the linear traffic model and analyzing the performance of the KF on the road network with a diverge junction. The analysis can be transferred to the merge case by combining the merge junction solver with the switched linearized representation of the CTM, exploring the properties of the resulting state transition matrices as in the diverge case, and analyzing the effect of these properties on the boundedness of the Kalman gain and the mean estimation error.

    As stated in Section 1.2, when monitoring traffic on large-scale networks, the road network is partitioned into local sections which are either one-dimensional or contain a junction. The traffic states on the one-dimensional local sections evolve according to the SMM, and the SMM-J is used to describe the evolution of traffic states on local sections with junctions.

    As shown in Figure 4, consider a local section with $n$ cells, three links and a junction. The number of cells on each link is $n_1$, $n_2$, and $n_3$, respectively, with $n_1+n_2+n_3 = n$. The state variable at time $k \in \mathbb{N}$ is constructed as $\rho_k = \left(\rho^1_{k}, \cdots, \rho^{n_1}_{k}, \rho^{n_1+1}_{k}, \cdots, \rho^{n_1+n_2}_k, \rho^{n_1+n_2+1}_k, \cdots, \rho^{n}_k\right)^{\top}$. As a common treatment [28,30,35,38,39], the boundary flows, denoted by $\phi_k^1$, $\phi_k^2$, and $\phi_k^3$, are considered to be deterministic system inputs (please refer to [3] for the concept of using ghost cells to compute boundary flows using boundary state measurements). The SMM-J describes the evolution of $\rho_k$ using a switched linear dynamics, and is derived under the following assumptions:

    Figure 4.  A local section with $n$ cells, three links and a junction.

    (A.1): For each local section, there is at most one transition between freeflow and congestion in each of the three links.

    (A.2): The three connecting cells forming the junction (i.e., cell $n_1$, $n_1+1$ and $n_1+n_2+1$ in Figure 4) are either all in freeflow or all in congestion.

    Assumption (A.1) is practically meaningful since the local sections are usually partitioned to be sufficiently short that at most one queue is growing or dissipating within each link, which is also a commonly used assumption in the SMM [28,30,35,37,39]. Assumption (A.2) is imposed to simplify the model by reducing the number of modes considered. When (A.2) is relaxed, the number of modes increases greatly without yielding new insights into the estimation performance at junctions; the analysis in this work still holds, at the cost of considering a much larger number of modes.

    Given the assumptions stated above, on each local section with a junction, the SMM-J may switch among 32 modes (defined in Table 1) depending on (i) the freeflow/congestion status of the boundary cells and the connecting cells near the junction, and (ii) the flux solution of the junction solver characterized by the three scenarios shown in Figure 3.

    Table 1.  Mode definition and observability of the SMM-J.
    Mode | Cell 1 (F/C$^1$) | Cell $n_1+n_2$ (F/C) | Cell $n$ (F/C) | Cells near junction$^2$ (F/C) | Link 1 transition$^3$ | Link 2 transition | Link 3 transition | Diverge case | Observability$^4$
    1 | F | F | F | F | none | none | none | Ⅱ | O
    2 | F | F | F | C | Sh. | Ep. | Ep. | Ⅰ | U
    3 | F | F | F | C | Sh. | Ep. | Ep. | Ⅱ | U
    4 | F | F | F | C | Sh. | Ep. | Ep. | Ⅲ | U
    5 | C | C | C | C | none | none | none | Ⅰ | U
    6 | C | C | C | C | none | none | none | Ⅱ | U
    7 | C | C | C | C | none | none | none | Ⅲ | U
    8 | C | C | C | F | Ep. | Sh. | Sh. | Ⅱ | U
    9 | F | C | C | C | Sh. | none | none | Ⅰ | U
    10 | F | C | C | C | Sh. | none | none | Ⅱ | U
    11 | F | C | C | C | Sh. | none | none | Ⅲ | U
    12 | F | C | C | F | none | Sh. | Sh. | Ⅱ | U
    13 | C | C | F | C | none | none | Ep. | Ⅰ | U
    14 | C | C | F | C | none | none | Ep. | Ⅱ | U
    15 | C | C | F | C | none | none | Ep. | Ⅲ | U
    16 | C | C | F | F | Ep. | Sh. | none | Ⅱ | U
    17 | C | F | C | C | none | Ep. | none | Ⅰ | U
    18 | C | F | C | C | none | Ep. | none | Ⅱ | U
    19 | C | F | C | C | none | Ep. | none | Ⅲ | U
    20 | C | F | C | F | Ep. | none | Sh. | Ⅱ | U
    21 | C | F | F | F | Ep. | none | none | Ⅱ | O
    22 | C | F | F | C | none | Ep. | Ep. | Ⅰ | U
    23 | C | F | F | C | none | Ep. | Ep. | Ⅱ | U
    24 | C | F | F | C | none | Ep. | Ep. | Ⅲ | U
    25 | F | C | F | F | none | Sh. | none | Ⅱ | U
    26 | F | C | F | C | Sh. | none | Ep. | Ⅰ | U
    27 | F | C | F | C | Sh. | none | Ep. | Ⅱ | U
    28 | F | C | F | C | Sh. | none | Ep. | Ⅲ | U
    29 | F | F | C | F | none | none | Sh. | Ⅱ | U
    30 | F | F | C | C | Sh. | Ep. | none | Ⅰ | U
    31 | F | F | C | C | Sh. | Ep. | none | Ⅱ | U
    32 | F | F | C | C | Sh. | Ep. | none | Ⅲ | U
    1 "F" and "C" stand for freeflow and congestion, respectively.
    2 Cells indexed by $n_1$, $n_1+1$ and $n_1+n_2+1$.
    3 "Sh." and "Ep." stand for shock (i.e., transition from freeflow to congestion) and expansion fan (i.e., transition from congestion to freeflow), respectively.
    4 "O" stands for uniformly completely observable and "U" stands for unobservable. Note that the observability results are derived under sensor locations shown in Figure 4.


    In the SMM-J, the density update scheme for the interior cells in each link (i.e., cells indexed by $l\in\{2, \cdots, n_1-1\}\cup\{n_1+2, \cdots, n_1+n_2-1\}\cup\{n_1+n_2+2, \cdots, n-1\}$) is given by (8), where the flow between two adjacent cells is computed according to (9). For the three boundary cells, their density updates are given as follows:

    $\rho^{1}_{k+1} = \rho^{1}_{k}+\frac{\Delta t}{\Delta x}\left(\phi^1_k-{\rm{f}}(\rho^{1}_{k}, \rho^{2}_{k})\right),$
    $\rho^{n_1+n_2}_{k+1} = \rho^{n_1+n_2}_{k}+\frac{\Delta t}{\Delta x}\left({\rm{f}}(\rho^{n_1+n_2-1}_{k}, \rho^{n_1+n_2}_{k})-\phi^2_k\right),$
    $\rho^{n}_{k+1} = \rho^{n}_{k}+\frac{\Delta t}{\Delta x}\left({\rm{f}}(\rho^{n-1}_{k}, \rho^{n}_{k})-\phi^3_k\right),$

    where ${\rm{f}}(\rho^1_{k}, \rho^{2}_{k})$, ${\rm{f}}(\rho^{n_1+n_2-1}_{k}, \rho^{n_1+n_2}_{k})$ and ${\rm{f}}(\rho^{n-1}_{k}, \rho^{n}_{k})$ are also obtained from (9). The density update scheme for the three cells near the junction reads:

    $\rho^{n_1}_{k+1} = \rho^{n_1}_{k}+\frac{\Delta t}{\Delta x}\left({\rm{f}}(\rho^{n_1-1}_{k}, \rho^{n_1}_{k})-{\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k})-{\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k})\right),$
    $\rho^{n_1+1}_{k+1} = \rho^{n_1+1}_{k}+\frac{\Delta t}{\Delta x}\left({\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k})-{\rm{f}}(\rho^{n_1+1}_{k}, \rho^{n_1+2}_{k})\right),$
    $\rho^{n_1+n_2+1}_{k+1} = \rho^{n_1+n_2+1}_{k}+\frac{\Delta t}{\Delta x}\left({\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k})-{\rm{f}}(\rho^{n_1+n_2+1}_{k}, \rho^{n_1+n_2+2}_{k})\right),$

    where ${\rm{f}}(\rho^{n_1-1}_{k}, \rho^{n_1}_{k})$, ${\rm{f}}(\rho^{n_1+1}_{k}, \rho^{n_1+2}_{k})$, and ${\rm{f}}(\rho^{n_1+n_2+1}_{k}, \rho^{n_1+n_2+2}_{k})$ are computed by (9), and the flows between neighboring cells (i.e., ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k})$ and ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k})$) are computed based on the junction solver discussed in Section 3.1.

    When applying the triangular fundamental diagram (10) to compute the flow across the cells, the traffic state $\rho_k$ in a local section evolves with linear dynamics in each mode stated in Table 1, forming a hybrid system:

    $ \rho_{k+1} = A_k\rho_k+B^{\rho}_k{\bf{1}}\varrho_{\text{m}}+B^{q}_k{\bf{1}}q_{\text{m}}+B^{\phi}_k\phi_k, $ (19)

    where ${\bf{1}}$ is the vector of all ones, the vector $\phi_k = (\phi_k^1, \phi_k^2, \phi_k^3)^{\top}$, and $A_k \in \mathbb{R}^{n\times n}$, $B^{\rho}_{k} \in \mathbb{R}^{n\times n}$, $B^{q}_k\in \mathbb{R}^{n\times n}$, $B^{\phi}_k\in \mathbb{R}^{n\times 3}$ are constructed based on the current mode of the local section, the locations of the shocks and expansion fans (if they exist), and the moving direction of the shocks.

    The next two examples demonstrate the construction of matrices $A_k$, $B^{\rho}_{k}$, $B^{q}_k$ and $B^{\phi}_k$. Specifically, the construction of matrices $A_k$ provides insights on the properties of the state transition matrices of the SMM-J that reflect the intrinsic physical properties of the traffic model. These properties are critical in proving the estimation error bounds of the KF on traffic networks. Before showing the examples, we first introduce some notation which will be used as elements of the matrices to be constructed. For $p\in \mathbb{N}^+$, define $\Theta_p\in\mathbb{R}^{p\times p}$ and $\Delta_p\in\mathbb{R}^{p\times p}$ by their $(i, j)^{\textrm{th}}$ entries as

    $ \Theta_p(i, j) = \begin{cases} 1-\frac{v_{\text{m}}\Delta t}{\Delta x} & \text{if } i = j \\ \frac{v_{\text{m}}\Delta t}{\Delta x} & \text{if } i = j+1 \\ 0 & \text{otherwise}, \end{cases} $ (20)
    $ \Delta_p(i, j) = \begin{cases} 1-\frac{w\Delta t}{\Delta x} & \text{if } i = j \\ \frac{w\Delta t}{\Delta x} & \text{if } i = j-1 \\ 0 & \text{otherwise}. \end{cases} $ (21)

    For $p_1\in \mathbb{N}^+$, $p_2\in \mathbb{N}^+$, $p_3\in \mathbb{N}^+$ and $p_4\in \mathbb{N}^+$, define $E_{p_1, p_2}^{p_3, p_4}\in\mathbb{R}^{p_1\times p_2}$ as the $p_1\times p_2$ matrix with all entries zero except its $(p_3, p_4)^{\textrm{th}}$ entry, which is one. Explicitly,

    $ E_{p_1, p_2}^{p_3, p_4}(i, j) = \begin{cases} 1 & \text{if } i = p_3 \text{ and } j = p_4 \\ 0 & \text{otherwise}. \end{cases} $ (22)

    Moreover, define $\tilde{\Theta}_p \in\mathbb{R}^{p\times p}$ and $\tilde{\Delta}_p \in\mathbb{R}^{p\times p}$ as:

    $ \tilde{\Theta}_p = \begin{cases} \begin{pmatrix} \Theta_{p-1} & {\bf{0}}_{p-1, 1} \\ \frac{v_{\text{m}}\Delta t}{\Delta x}E_{1, p-1}^{1, p-1} & 1 \end{pmatrix} & \text{if } p\ge 2 \\ 1 & \text{if } p = 1 \end{cases} $

    and

    $ \tilde{\Delta}_p = \begin{cases} \begin{pmatrix} 1 & \frac{w\Delta t}{\Delta x}E_{1, p-1}^{1, 1} \\ {\bf{0}}_{p-1, 1} & \Delta_{p-1} \end{pmatrix} & \text{if } p\ge 2 \\ 1 & \text{if } p = 1. \end{cases} $

    Example 1 (System dynamics of the SMM-J under Mode 1). Consider the local section in Figure 4 where all the cells are in freeflow. Within a single link, the flux between two neighboring cells indexed by $i$ and $i+1$ is given by ${\rm{f}}(\rho^{i}_{k}, \rho^{i+1}_{k}) = v_{\text{m}}\rho^{i}_{k}$. At the junction, it holds that ${\rm{s}}(\rho^{n_1}_{k}) = v_{\text{m}}\rho^{n_1}_{k}<{\rm{r}}(\rho^{n_1+1}_{k})+{\rm{r}}(\rho^{n_1+n_2+1}_{k}) = 2q_{\text{m}}$. Also, since ${\rm{s}}(\rho^{n_1}_{k})\le {\rm{r}}(\rho^{n_1+1}_{k}) = q_{\text{m}}$ and ${\rm{s}}(\rho^{n_1}_{k})\le {\rm{r}}(\rho^{n_1+n_2+1}_{k}) = q_{\text{m}}$, the distribution ratio $\alpha_{\text{d}}$ can be followed exactly. Hence, the state is under Mode 1 at time $k$, and the junction solver computes fluxes ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k})$ and ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k})$ according to diverge case Ⅱ (16) as follows:

    ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k}) = \frac{\rho^{n_1}_{k}v_{\text{m}}}{1+\alpha_{\text{d}}}, \qquad {\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k}) = \frac{\alpha_{\text{d}}\rho^{n_1}_{k}v_{\text{m}}}{1+\alpha_{\text{d}}}.$

    Substituting the flows computed above into the update scheme of the traffic density on each cell, it follows that the explicit forms of $A_k$, $B^{\rho}_{k}$, $B^{q}_k$, and $B^{\phi}_{k}$ in (19) are

    $ A_k = \begin{pmatrix} \Theta_{n_1} & & \\ \frac{v_{\text{m}}\Delta t}{(1+\alpha_{\text{d}})\Delta x}E_{n_2, n_1}^{1, n_1} & \tilde{\Theta}_{n_2} & \\ \frac{\alpha_{\text{d}}v_{\text{m}}\Delta t}{(1+\alpha_{\text{d}})\Delta x}E_{n_3, n_1}^{1, n_1} & & \tilde{\Theta}_{n_3} \end{pmatrix}, \qquad B^{\rho}_k = B^{q}_k = {\bf{0}}_{n, n}, \qquad B^{\phi}_k = \frac{\Delta t}{\Delta x}\left(E_{n, 3}^{1, 1}-E_{n, 3}^{n_1+n_2, 2}-E_{n, 3}^{n, 3}\right) $ (23)

    in Mode 1. Note that in the above definitions and for the remainder of this subsection, blocks in the matrices which are left blank are zeros everywhere.
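
    The block structure of (23) can be assembled mechanically from the definitions (20)-(22). The following sketch does so for Mode 1; the helper names are chosen for this illustration only.

```python
import numpy as np

def Theta(p, vm, dt, dx):
    """Theta_p in (20): 1 - vm*dt/dx on the diagonal, vm*dt/dx just below it."""
    return (1 - vm * dt / dx) * np.eye(p) + vm * dt / dx * np.eye(p, k=-1)

def Theta_tilde(p, vm, dt, dx):
    """Theta_tilde_p: as Theta_p except that the last diagonal entry is 1
    (the boundary outflow of the link enters through B^phi instead)."""
    M = Theta(p, vm, dt, dx)
    M[-1, -1] = 1.0
    return M

def E(p1, p2, p3, p4):
    """E^{p3,p4}_{p1,p2} in (22): a single unit entry at (p3, p4), 1-indexed."""
    M = np.zeros((p1, p2))
    M[p3 - 1, p4 - 1] = 1.0
    return M

def A_mode1(n1, n2, n3, vm, alpha_d, dt, dx):
    """State transition matrix A_k of the SMM-J in Mode 1, following (23)."""
    n = n1 + n2 + n3
    A = np.zeros((n, n))
    A[:n1, :n1] = Theta(n1, vm, dt, dx)
    A[n1:n1 + n2, n1:n1 + n2] = Theta_tilde(n2, vm, dt, dx)
    A[n1 + n2:, n1 + n2:] = Theta_tilde(n3, vm, dt, dx)
    # flow from cell n1 split between the first cells of links 2 and 3
    A[n1:n1 + n2, :n1] += vm * dt / ((1 + alpha_d) * dx) * E(n2, n1, 1, n1)
    A[n1 + n2:, :n1] += alpha_d * vm * dt / ((1 + alpha_d) * dx) * E(n3, n1, 1, n1)
    return A
```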

    Example 2 (System dynamics of the SMM-J under Modes 2-4). Consider the local section in Figure 4 where the three boundary cells indexed by $1$, $n_1+n_2$ and $n$ are all in freeflow, and the three cells near the junction, i.e., the cells indexed by $n_1$, $n_1+1$ and $n_1+n_2+1$ are in congestion. Given the assumption that there is at most one transition between freeflow and congestion in each of the three links connecting the junction, it can be concluded that there is a shock (i.e., transition from freeflow to congestion) on link 1, while link 2 and link 3 each has an expansion fan. Let $l_1$ be the location of the shock on link 1, i.e., the transition from freeflow to congestion on link 1 occurs between cell $l_1$ and $l_1+1$. Moreover, we define

    $\tilde{l}_1 = \begin{cases} l_1 & \text{if the shock has positive velocity or is stationary} \\ l_1-1 & \text{if the shock has negative velocity}, \end{cases}$

    and

    $\hat{l}_1 = n_1-1-\tilde{l}_1,$

    which are later used to simplify the notations to define matrices $A_k$, $B^{\rho}_{k}$, $B^{q}_k$, and $B^{\phi}_{k}$. Similarly, denote as $l_2$ (resp. $l_3$) the location of the expansion fan on link 2 (resp. link 3), i.e., the transition from congestion to freeflow on link 2 (resp. link 3) occurs between cells $l_2$ and $l_2+1$ (resp. cells $l_3$ and $l_3+1$). To simplify the notation, we also define

    $\tilde{l}_2 = l_2-n_1, \quad \hat{l}_2 = n_2-\tilde{l}_2, \quad \tilde{l}_3 = l_3-n_1-n_2, \quad \hat{l}_3 = n_3-\tilde{l}_3.$

    At the junction, the sending capacity of cell $n_1$ is ${\rm{s}}(\rho^{n_1}_{k}) = q_{\text{m}}$, and the receiving capacities of cell $n_1+1$ and cell $n_1+n_2+1$ are ${\rm{r}}(\rho^{n_1+1}_{k}) = w(\varrho_{\text{m}}-\rho^{n_1+1}_{k})$ and ${\rm{r}}(\rho^{n_1+n_2+1}_{k}) = w(\varrho_{\text{m}}-\rho^{n_1+n_2+1}_{k})$, respectively. Depending on the magnitudes of ${\rm{s}}(\rho^{n_1}_{k})$, ${\rm{r}}(\rho^{n_1+1}_{k})$, and ${\rm{r}}(\rho^{n_1+n_2+1}_{k})$, the junction solver follows one of the three possible scenarios shown in Figure 3.

    1. Diverge case Ⅰ: when ${\rm{s}}(\rho^{n_1}_{k})\ge {\rm{r}}(\rho^{n_1+1}_{k})+{\rm{r}}(\rho^{n_1+n_2+1}_{k})$. In this case, the state is under Mode 2 at time $k$, and the junction solver computes fluxes ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k})$ and ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k})$ according to diverge case Ⅰ (15) as follows:

    ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k}) = {\rm{r}}(\rho^{n_1+1}_{k}) = w(\varrho_{\text{m}}-\rho^{n_1+1}_{k}), \qquad {\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k}) = {\rm{r}}(\rho^{n_1+n_2+1}_{k}) = w(\varrho_{\text{m}}-\rho^{n_1+n_2+1}_{k}).$

    Hence in Mode 2, the explicit forms of $A_k$, $B^{\rho}_{k}$, $B^{q}_k$, and $B^{\phi}_{k}$ in (19) are

    $ A_k = \begin{pmatrix} \Theta_{\tilde{l}_1} & & & & & & \\ \frac{v_{\text{m}}\Delta t}{\Delta x}E_{1, \tilde{l}_1}^{1, \tilde{l}_1} & 1 & \frac{w\Delta t}{\Delta x}E_{1, \hat{l}_1}^{1, 1} & & & & \\ & & \Delta_{\hat{l}_1} & \frac{w\Delta t}{\Delta x}E_{\hat{l}_1, \tilde{l}_2}^{\hat{l}_1, 1} & & \frac{w\Delta t}{\Delta x}E_{\hat{l}_1, \tilde{l}_3}^{\hat{l}_1, 1} & \\ & & & \Delta_{\tilde{l}_2} & & & \\ & & & & \tilde{\Theta}_{\hat{l}_2} & & \\ & & & & & \Delta_{\tilde{l}_3} & \\ & & & & & & \tilde{\Theta}_{\hat{l}_3} \end{pmatrix}, $
    $B^{\rho}_k = \frac{w\Delta t}{\Delta x}\left(-E_{n, n}^{\tilde{l}_1+1, \tilde{l}_1+2}-E_{n, n}^{n_1, n_1+n_2+1}+E_{n, n}^{l_2, l_2}+E_{n, n}^{l_3, l_3}\right), \quad B^{q}_k = \frac{\Delta t}{\Delta x}\left(-E_{n, n}^{l_2, l_2}+E_{n, n}^{l_2+1, l_2}-E_{n, n}^{l_3, l_3}+E_{n, n}^{l_3+1, l_3}\right), \quad B^{\phi}_k = \frac{\Delta t}{\Delta x}\left(E_{n, 3}^{1, 1}-E_{n, 3}^{n_1+n_2, 2}-E_{n, 3}^{n, 3}\right).$

    2. Diverge case Ⅱ: when ${\rm{s}}(\rho^{n_1}_{k})<{\rm{r}}(\rho^{n_1+1}_{k})+{\rm{r}}(\rho^{n_1+n_2+1}_{k})$, and the prescribed distribution ratio $\alpha_{\text{d}}$ can be followed exactly. In this case, the state is under Mode 3 at time $k$, and the junction solver computes fluxes ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k})$ and ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k})$ according to diverge case Ⅱ (16) as follows:

    ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k}) = \frac{1}{\alpha_{\text{d}}+1}{\rm{s}}(\rho^{n_1}_{k}) = \frac{1}{\alpha_{\text{d}}+1}q_{\text{m}}, \qquad {\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k}) = \frac{\alpha_{\text{d}}}{\alpha_{\text{d}}+1}{\rm{s}}(\rho^{n_1}_{k}) = \frac{\alpha_{\text{d}}}{\alpha_{\text{d}}+1}q_{\text{m}}.$

    In Mode 3, the explicit forms of $A_k$, $B^{\rho}_{k}$, $B^{q}_k$, and $B^{\phi}_{k}$ in (19) are

    $ A_k = \begin{pmatrix} \Theta_{\tilde{l}_1} & & & & & & \\ \frac{v_{\text{m}}\Delta t}{\Delta x}E_{1, \tilde{l}_1}^{1, \tilde{l}_1} & 1 & \frac{w\Delta t}{\Delta x}E_{1, \hat{l}_1}^{1, 1} & & & & \\ & & \Delta_{\hat{l}_1} & & & & \\ & & & \tilde{\Delta}_{\tilde{l}_2} & & & \\ & & & & \tilde{\Theta}_{\hat{l}_2} & & \\ & & & & & \tilde{\Delta}_{\tilde{l}_3} & \\ & & & & & & \tilde{\Theta}_{\hat{l}_3} \end{pmatrix}, $
    $B^{\rho}_k = \frac{w\Delta t}{\Delta x}\left(-E_{n, n}^{\tilde{l}_1+1, \tilde{l}_1+2}+E_{n, n}^{n_1, n_1}-E_{n, n}^{n_1+1, n_1+2}+E_{n, n}^{l_2, l_2}-E_{n, n}^{n_1+n_2+1, n_1+n_2+2}+E_{n, n}^{l_3, l_3}\right), \quad B^{q}_k = \frac{\Delta t}{\Delta x}\left(-E_{n, n}^{n_1, n_1}+\frac{1}{1+\alpha_{\text{d}}}E_{n, n}^{n_1+1, n_1}+\frac{\alpha_{\text{d}}}{1+\alpha_{\text{d}}}E_{n, n}^{n_1+n_2+1, n_1}-E_{n, n}^{l_2, l_2}+E_{n, n}^{l_2+1, l_2}-E_{n, n}^{l_3, l_3}+E_{n, n}^{l_3+1, l_3}\right), \quad B^{\phi}_k = \frac{\Delta t}{\Delta x}\left(E_{n, 3}^{1, 1}-E_{n, 3}^{n_1+n_2, 2}-E_{n, 3}^{n, 3}\right).$

    3. Diverge case Ⅲ: when ${\rm{s}}(\rho^{n_1}_{k})<{\rm{r}}(\rho^{n_1+1}_{k})+{\rm{r}}(\rho^{n_1+n_2+1}_{k})$, but the prescribed distribution ratio $\alpha_{\text{d}}$ cannot be followed exactly. In this case, the state is under Mode 4 at time $k$. Depending on the magnitudes of ${\rm{s}}(\rho^{n_1}_{k})$, ${\rm{r}}(\rho^{n_1+1}_{k})$ and ${\rm{r}}(\rho^{n_1+n_2+1}_{k})$, the fluxes ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k})$ and ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k})$ computed by the junction solver are either obtained from (17), i.e.,

    $ {\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k}) = {\rm{r}}(\rho^{n_1+1}_{k}) = w(\varrho_{\text{m}}-\rho^{n_1+1}_{k}), \qquad {\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k}) = {\rm{s}}(\rho^{n_1}_{k})-{\rm{r}}(\rho^{n_1+1}_{k}) = q_{\text{m}}-w(\varrho_{\text{m}}-\rho^{n_1+1}_{k}), $ (24)

    or obtained from (18), i.e.,

    $ {\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k}) = {\rm{s}}(\rho^{n_1}_{k})-{\rm{r}}(\rho^{n_1+n_2+1}_{k}) = q_{\text{m}}-w(\varrho_{\text{m}}-\rho^{n_1+n_2+1}_{k}), \qquad {\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+n_2+1}_{k}) = {\rm{r}}(\rho^{n_1+n_2+1}_{k}) = w(\varrho_{\text{m}}-\rho^{n_1+n_2+1}_{k}). $ (25)

    For conciseness of the presentation, we provide next the explicit formulas of $A_k$, $B^{\rho}_{k}$, $B^{q}_k$, and $B^{\phi}_{k}$ when the fluxes are computed according to (24), and the construction of the system dynamics when the fluxes are given by (25) can be done in a similar fashion. The matrices $A_k$, $B^{\rho}_{k}$, $B^{q}_k$, and $B^{\phi}_{k}$ read

    $ A_k = \begin{pmatrix} \Theta_{\tilde{l}_1} & & & & & & \\ \frac{v_{\text{m}}\Delta t}{\Delta x}E_{1, \tilde{l}_1}^{1, \tilde{l}_1} & 1 & \frac{w\Delta t}{\Delta x}E_{1, \hat{l}_1}^{1, 1} & & & & \\ & & \Delta_{\hat{l}_1} & \frac{w\Delta t}{\Delta x}E_{\hat{l}_1, \tilde{l}_2}^{\hat{l}_1, 1} & & & \\ & & & \Delta_{\tilde{l}_2} & & & \\ & & & & \tilde{\Theta}_{\hat{l}_2} & & \\ & & & \frac{w\Delta t}{\Delta x}E_{\tilde{l}_3, \tilde{l}_2}^{1, 1} & & \tilde{\Delta}_{\tilde{l}_3} & \\ & & & & & & \tilde{\Theta}_{\hat{l}_3} \end{pmatrix}, $
    $B^{\rho}_k = \frac{w\Delta t}{\Delta x}\left(-E_{n, n}^{\tilde{l}_1+1, \tilde{l}_1+2}+E_{n, n}^{l_2, l_2}-E_{n, n}^{n_1+n_2+1, n_1+n_2+2}-E_{n, n}^{n_1+n_2+1, n_1+1}+E_{n, n}^{l_3, l_3}\right), \quad B^{q}_k = \frac{\Delta t}{\Delta x}\left(-E_{n, n}^{l_2, l_2}+E_{n, n}^{l_2+1, l_2}+E_{n, n}^{n_1+n_2+1, n_1+1}-E_{n, n}^{l_3, l_3}+E_{n, n}^{l_3+1, l_3}\right), \quad B^{\phi}_k = \frac{\Delta t}{\Delta x}\left(E_{n, 3}^{1, 1}-E_{n, 3}^{n_1+n_2, 2}-E_{n, 3}^{n, 3}\right).$

    Some properties of the state transition matrix $A_{k}$ are summarized below; they are shown in Section 4.1 to play important roles in proving the boundedness of the Kalman gain (a necessary condition for the ultimate boundedness of the mean estimation error) when using the KF to estimate traffic conditions based on the SMM-J.

    (P.1): For $A_k$ in all modes, each element satisfies

    $ 0\le A_k(r, c)\le 1, \qquad \text{for all } k\in\mathbb{N} \ \text{and} \ r, c\in\{1, \cdots, n\}. $

    This property is due to the CFL condition [33] in the discretization scheme (8).

    (P.2): When $A_k$ is derived under diverge case Ⅰ and diverge case Ⅱ, the sum of the elements in $A_k$ at the same column $c$ satisfies

    $ \sum\limits_{r = 1}^{n}A_k(r, c)\le 1, \qquad \text{for all } k\in\mathbb{N} \ \text{and} \ c\in\{1, \cdots, n\}. $

    This property is due to the CFL condition as in (P.1) and the conservation law embedded in the traffic model.

    (P.3): When $A_k$ is derived under diverge case Ⅲ, the sum of the elements in $A_k$ at the same column $c$ satisfies

    $ \sum\limits_{r = 1}^{n}A_k(r, c)\le 1, \qquad \text{for all } k\in\mathbb{N} \ \text{and} \ c\in\{c \,|\, c\in\{1, \cdots, n\}, \ c\ne \ell\}, $

    where $\ell = n_1+1$ if ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k}) = {\rm{r}}(\rho^{n_1+1}_{k})$ and $\ell = n_1+n_2+1$ if ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k}) = {\rm{s}}(\rho^{n_1}_{k})-{\rm{r}}(\rho^{n_1+n_2+1}_{k})$. Moreover, it also holds that for $A_k$ under diverge case Ⅲ,

    $A_k(r, c) = 0, \qquad \text{for all } k, \ r\in\{n_1+1, \cdots, n\} \ \text{and} \ c\in\{1, \cdots, n_1\}.$

    This is due to the facts that (i) the flows from cell $n_1$ at the junction to the two downstream cells (i.e., cell $n_1+1$ and cell $n_1+n_2+1$) do not depend on the densities of the cells on link 1, as shown in (24)-(25), and (ii) the internal flows between adjacent cells on link 2 and link 3 are also independent of the densities of the cells on link 1.
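
    As a quick numerical sanity check of (P.1) and (P.2), the Mode 1 matrix built by the A_mode1 helper sketched after Example 1 has entries in $[0, 1]$ and column sums of at most one whenever the CFL condition $v_{\text{m}}\Delta t\le\Delta x$ holds; the parameter values below are illustrative.

```python
import numpy as np

# Illustrative parameters satisfying the CFL condition v_m * dt <= dx
A = A_mode1(n1=4, n2=3, n3=3, vm=30.0, alpha_d=0.5, dt=1.0, dx=50.0)

assert np.all(A >= 0) and np.all(A <= 1)        # property (P.1)
assert np.all(A.sum(axis=0) <= 1 + 1e-12)       # property (P.2), column sums
```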

    Based on the above properties, the following lemma derives the bounds on each entry of the product of the state transition matrices, which will later be applied to prove the boundedness of the Kalman gain (see Lemma 3).

    Lemma 2. Consider the local section in Figure 4 that stays inside a SMM-J mode while $k\in (k_0, k_1]$4, where $0\le k_0<{k}_1\le+\infty$. Recall from (6) that the product of the state transition matrices is defined as

    4Recall that the time instant $k\in \mathbb{N}$, hence $k\in (k_0, {k}_1]$ denotes $k\in\{k_0+1, \cdots, k_1\}$.

    $ \Xi_{k+1, k_0+1} = \prod\limits_{\kappa = k}^{k_0+1}A_{\kappa}, \qquad \text{for } k\in(k_0, k_1]. $ (26)

    The $(i, j)^{{th}}$ entry of $\Xi_{k+1, k_0+1}$ satisfies

    $ 0\le \Xi_{k+1, k_0+1}(i, j)\le 1, \qquad \text{for all } k\in(k_0, k_1] \ \text{and} \ i, j\in\{1, \cdots, n\}. $ (27)

    Proof. The proof applies properties (P.1)-(P.3), and is reported in Appendix.2.

    From an estimation point of view, it is assumed that the sensors are located on the far ends of the three links connecting the junction (as illustrated in Figure 4), measuring the densities of the boundary cells of the local section (i.e., $\rho^1_{k}$, $\rho^{n_1+n_2}_k$ and $\rho^n_{k}$).

    Incorporating model noise in the SMM-J (19) yields:

    $ \rho_{k+1} = A_k\rho_k+u_k+\omega_k, \qquad \rho_k\in\mathbb{R}^n, $ (28)

    where $\omega_{k}\sim \mathcal{N}(0, Q_{k})$ is the white Gaussian model noise, and define the deterministic system input as:

    $u_k = B^{\rho}_k{\bf{1}}\varrho_{\text{m}}+B^{q}_k{\bf{1}}q_{\text{m}}+B^{\phi}_k\phi_k.$

    The sensor measurements are modeled as follows:

    $ z_k = H_k\rho_k+v_k, \qquad z_k\in\mathbb{R}^3, $ (29)

    where the observation matrix $H_k = E_{3, n}^{1, 1}+E_{3, n}^{2, n_1+n_2}+E_{3, n}^{3, n}$, and $v_k\sim \mathcal{N}(0, R_k)$ is the white Gaussian measurement noise. Hence, as shown in (28)-(29), the system dynamics of the SMM-J is rewritten in the form of (1)-(2).

    The observability of system (28)-(29) under the different modes is listed in Table 1, and can be derived directly from the definition of observability stated in Section 2.1, i.e., by checking the boundedness of the information matrix. According to Table 1, most of the modes are not observable, except (i) when all cells in the local section are in freeflow, and (ii) when an expansion fan sits on link 1 and no other transitions between freeflow and congestion exist in the local section. From a physical viewpoint, the non-observability of the SMM-J is due to the irreversibility of the vehicle conservation law given the available sensor measurements in the presence of shocks, and the presence of the junction. Compared to the observability of the SMM [30], this indicates that the issue of non-observability is more critical when junctions exist. For example, a one-dimensional road section where the traffic is in congestion everywhere is observable given measurements of the upstream boundary cell [30], while a congested local section with a junction is not observable even with measurements of all three boundary cells (as shown by Modes 5-7). The performance of the KF for uniformly completely observable systems is widely studied (as summarized in Lemma 1). In this article, we focus on analyzing the theoretical performance of the KF under the unobservable modes of the SMM-J.

    Challenges for estimating an unobservable system stem from the fact that the estimation error covariance could grow unbounded, thus the mean estimation error also potentially diverges (as shown in Example 3 in the appendix). In this subsection, we show that when combining the physical properties of the traffic model (i.e., vehicle conservation and the flow-density relationship) with the update scheme of the KF, the mean estimation errors of all the cells in an unobservable local section are ultimately bounded inside $(-\epsilon, \varrho_{\text{m}}+\epsilon)$ for all $\epsilon>0$, provided that the density measurements of the three boundary cells are available. This ensures that the mean estimates of the KF for unobservable modes are always physically meaningful to within $\epsilon$. Comparatively, it is shown in [36] that an open-loop observer may result in non-physical state estimates in unobservable modes.

    Note that the three conditions (C.1)-(C.3) in Lemma 1 are necessary for proving the properties of the KF for traffic estimation under unobservable systems. For system (28)-(29), we assume that conditions (C.1) and (C.2) can be ensured when setting up the parameters in the KF. It can also be directly verified that condition (C.3) always holds for all the modes of the SMM-J.

    Let $({k}_0^{\text{U}}, {k}_1^{\text{U}}]$ be the time interval inside which the local section shown in Figure 4 stays in an unobservable mode of the SMM-J. In this subsection, we present a lemma stating the boundedness of the Kalman gain for $k\in ({k}_0^{\text{U}}, {k}_1^{\text{U}}]$, which is obtained based on the boundedness of the cross-covariance of the observable and unobservable subsystems in the Kalman observability canonical form (see Appendix.3). According to the KF update scheme in (4), the boundedness of the Kalman gain is a necessary condition for the boundedness of the state estimate.

    Lemma 3. Consider an unobservable local section shown in Figure 4. The local section stays inside an unobservable mode of the SMM-J while $k\in ({k}_0^{\text{U}}, {k}_1^{\text{U}}]$, where $0\le {k}_0^{\text{U}}<{k}_1^{\text{U}}\le+\infty$. Given density measurements of the three boundary cells, the infinity norm5 of the Kalman gain computed by the KF (3)-(4) satisfies

    5Recall that for matrix $M \in \mathbb{R}^{p \times q}$, its infinity norm is defined as $\|M\|_{\infty} = \underset{{r\in \{1, \cdots, p\}}}{\max}\sum_{c = 1}^q\left|M(r, c)\right|$.

    $ \|K_k\|_{\infty}\le {\rm{k}}\left(\Gamma_{k_0^{\text{U}}|k_0^{\text{U}}}\right), \qquad \text{for all } k\in(k_0^{\text{U}}, k_1^{\text{U}}], $ (30)

    where ${\rm{k}}\left(\cdot\right)$ is a function of the error covariance at time ${k}_0^{{\text{U}}}$.

    Proof. The explicit formula of ${\rm{k}}\left(\Gamma_{{k}_0^{{\text{U}}}|{k}_0^{{\text{U}}}}\right)$ is presented in Appendix.4.1, and is derived in Appendix.4.2.

    In the proof of Lemma 3, the Kalman gain is partitioned into blocks corresponding to the observable and unobservable subsystems (as shown in (49)). The part corresponding to the observable subsystem is a function of the estimation error covariance of the observable subsystem (see (50)), thus its boundedness is relatively straightforward to justify. On the other hand, the block of the Kalman gain that corresponds to the unobservable subsystem is a function of the cross-covariance of the observable and the unobservable subsystems (see (51)). By exploring the interaction between the evolution equation of the cross-covariance (shown in (48)) and the physical properties of the traffic model (reflected in (P.1)-(P.3)), we also derive the boundedness of the unobservable block of the Kalman gain. In summary, the combination of the update scheme of the KF and the intrinsic properties of the traffic model is critical in showing the boundedness of the Kalman gain under unobservable modes.

    In this subsection, we show that for an unobservable local section, the mean estimate of each cell ultimately enters a physically meaningful interval, thus the mean estimation error is also ultimately bounded. Unlike typical unobservable scenarios where the mean estimation error diverges (as shown in Example 3), the boundedness of the mean error here is ensured by the intrinsic physical properties of the traffic model, i.e., the relationship between the density and the sending/receiving capacity of each cell, as illustrated in the proof of the next proposition.

    Proposition 1. Consider an unobservable local section as shown in Figure 4 that stays inside an unobservable mode while $k\in ({k}_0^{\text{U}}, \infty)$. For all $\epsilon>0$, a finite time ${\rm{t}}(\epsilon)$ exists such that $\bar{\rho}^l_{k|k}\in(-\epsilon, \varrho_{\text{m}}+\epsilon)$ for all $k>{k}_0^{\text{U}}+{\rm{t}}(\epsilon)$ and for all $l\in \{1, \cdots, n\}$, independent of the initial estimate. Moreover, the mean estimation error satisfies $\|\bar{\eta}_{k|k}\|<\sqrt{n}(\varrho_{\text{m}}+\epsilon)$ for all $k>{k}_0^{\text{U}}+{\rm{t}}(\epsilon)$, independent of the initial estimate.

    Proof. Denote by ${\bar{\eta}}^{\text{b}}_{k|k} = (\bar{\eta}^{1}_{k|k}, \bar{\eta}^{n_1+n_2}_{k|k}, \bar{\eta}^{n}_{k|k})^{\top}$ the mean error of the three boundary cells. Since the three boundary cells are all inside the observable subsystem, it follows that $\left\|{\bar{\eta}}^{\text{b}}_{k|k}\right\|\rightarrow 0$ as $k\rightarrow\infty$.

    The proof is by induction. In Step 1, we use induction from cell $1$ to the downstream cells to show that if the estimate of cell $1$ converges to the true state, then the estimates of all cells will ultimately be greater than $-\epsilon$ for all $\epsilon >0$. In Step 2, we use induction from the two downstream boundary cells (i.e., cell $n_1+n_2$ and cell $n$) to their upstream cells to show that if the estimates of the two downstream boundary cells converge to the true state, then the estimates of all cells will ultimately be smaller than $\varrho_{\text{m}}+\epsilon$ for all $\epsilon >0$. In Step 3, we combine Steps 1 and 2 to derive an ultimate bound for the mean estimation error.

    Step 1. We use induction to show that for all $\epsilon>0$ and $l \in \{1, \cdots, n\}$, there exists a finite time ${\rm{t}}_1^l(\epsilon)$ such that $\bar{\rho}^{l}_{k|k}> -\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1^l(\epsilon)$.

    Since the upstream boundary cell (i.e., cell $1$) is in the observable subsystem, we have $\bar{\eta}^{1}_{k|k}\rightarrow 0$ and $\bar{\rho}^1_{k|k}\rightarrow\rho^1_{k}$, where $\rho^1_{k}\ge 0$. Hence a finite time ${\rm{t}}_1^1(\epsilon)$ exists such that $\bar{\rho}^1_{k|k}>-\frac{\epsilon}{n}$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1^1(\epsilon)$.

    For all interior cells on link 1, i.e., cells indexed by $l \in \{2, 3, \cdots, n_1\}$, suppose $\bar{\rho}^{l-1}_{k|k}>-\frac{(l-1)\epsilon}{n}$. If $\bar{\rho}^{l}_{k|k}<-\frac{(l-1)\epsilon}{n}$, then we obtain from (9) that

    $ \begin{align} {\rm{f}}\left(\bar{\rho}^{l-1}_{k|k}, \bar{\rho}^{l}_{k|k}\right)& = v_{\text{m}}\bar{\rho}^{l-1}_{k|k}>-v_{\text{m}}\frac{(l-1)\epsilon}{n}, \end{align} $ (31)
    $ \begin{align} {\rm{f}}\left(\bar{\rho}^{l}_{k|k}, \bar{\rho}^{l+1}_{k|k}\right)&\le v_{\text{m}}\bar{\rho}^{l}_{k|k}. \end{align} $ (32)

    It follows that the estimate of cell $l$ satisfies

    $ \begin{align} \bar{\rho}^l_{k+1|k+1}& = \bar{\rho}^l_{k|k}+\frac{\Delta t}{\Delta x}\left({\rm{f}}\left(\bar{\rho}^{l-1}_{k|k}, \bar{\rho}^{l}_{k|k}\right)-{\rm{f}}\left(\bar{\rho}^{l}_{k|k}, \bar{\rho}^{l+1}_{k|k}\right)\right)-K_{k+1}(l, 1)\bar{\eta}^{1}_{k+1|k}-K_{k+1}(l, 2)\bar{\eta}^{n_1+n_2}_{k+1|k}-K_{k+1}(l, 3)\bar{\eta}^{n}_{k+1|k}\notag\\ &>\bar{\rho}^l_{k|k}+\frac{v_{\text{m}}\Delta t}{\Delta x}\left|\bar{\rho}^l_{k|k}+\frac{(l-1)\epsilon}{n}\right|-{\rm{k}}\left(\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right)\left\|\bar{\eta}^{\text{b}}_{k|k}\right\|_{\infty}, \end{align} $ (33)

    where the inequality is due to $\|K_k\|_{\infty}\le {\rm{k}}(\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}})$ given in Lemma 3. Thus there exists a scalar $v_1>\frac{\Delta x{\rm{k}}\left(\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right)}{v_{\text{m}}\Delta t}$ such that

    $ \begin{align} \bar{\rho}^l_{k+1|k+1}-\bar{\rho}^l_{k|k}>v_0\left|\bar{\rho}^l_{k|k}+\frac{(l-1)\epsilon}{n}\right|, \quad \text{for all $\left|\bar{\rho}^l_{k|k}+\frac{(l-1)\epsilon}{n}\right|\ge v_1\left\|\bar{\eta}^{\text{b}}_{k|k}\right\|_{\infty}$, } \end{align} $ (34)

    where $v_0 = \frac{v_{\text{m}}\Delta t}{\Delta x}-\frac{{\rm{k}}\left(\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right)}{v_1}>0$. Also note that $\left\|{\bar{\eta}}^{\text{b}}_{k|k}\right\|_{\infty}\rightarrow 0$ as $k\rightarrow\infty$, which indicates that the one-step change of the estimates is ultimately positive, and large enough so that a finite time ${\rm{t}}_1^l(\epsilon)$ exists such that $\bar{\rho}^{l}_{k|k}>-\frac{l\epsilon}{n}>-\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1^l(\epsilon)$ [21].

    We now show that for the two cells on the downstream side of the junction, i.e., cell $n_1+1$ and cell $n_1+n_2+1$, there exist finite times ${\rm{t}}_1^{n_1+1}(\epsilon)$ and ${\rm{t}}_1^{n_1+n_2+1}(\epsilon)$ such that $\bar{\rho}^{n_1+1}_{k|k}>-\frac{(n_1+1)\epsilon}{n}>-\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1^{n_1+1}(\epsilon)$ and $\bar{\rho}^{n_1+n_2+1}_{k|k}>-\frac{(n_1+1)\epsilon}{n}>-\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1^{n_1+n_2+1}(\epsilon)$. Suppose $\bar{\rho}^{n_1}_{k|k}>-\frac{n_1\epsilon}{n}$. If $\bar{\rho}^{n_1+1}_{k|k}<-\frac{n_1\epsilon}{n}$, then the junction solver follows diverge case Ⅱ or diverge case Ⅲ. Hence, the flow from cell $n_1$ to cell $n_1+1$ satisfies:

    $ \begin{equation*} {\rm{f}}\left(\bar{\rho}^{n_1}_{k|k}, \bar{\rho}^{n_1+1}_{k|k}\right) = \left\{ \begin{array}{ll} \frac{1}{\alpha_{\text{d}}+1}{\rm{s}}\left(\bar{\rho}^{n_1}_{k|k}\right)>-v_{\text{m}}\frac{n_1\epsilon}{n} &\textrm{diverge case Ⅱ}\\[2mm] {\rm{s}}\left(\bar{\rho}^{n_1}_{k|k}\right)-{\rm{r}}\left(\bar{\rho}^{n_1+n_2+1}_{k|k}\right)>-v_{\text{m}}\frac{n_1\epsilon}{n} &\textrm{diverge case Ⅲ, } \end{array} \right. \end{equation*} $

    and the outgoing flow for cell $n_1+1$ satisfies:

    $ \begin{align*} {\rm{f}}\left(\bar{\rho}^{n_1+1}_{k|k}, \bar{\rho}^{n_1+2}_{k|k}\right)\le v_{\text{m}}\bar{\rho}^{n_1+1}_{k|k}. \end{align*} $

    Following similar arguments as in (33)-(34), it can be concluded that there exists a finite time ${\rm{t}}_1^{n_1+1}(\epsilon)$ such that $\bar{\rho}^{n_1+1}_{k|k}>-\frac{(n_1+1)\epsilon}{n}>-\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1^{n_1+1}(\epsilon)$. Applying the same analysis to cell $n_1+n_2+1$, it can be concluded that there exists a finite time ${\rm{t}}_1^{n_1+n_2+1}(\epsilon)$ such that $\bar{\rho}^{n_1+n_2+1}_{k|k}>-\frac{(n_1+1)\epsilon}{n}>-\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1^{n_1+n_2+1}(\epsilon)$. Continuing the induction on link 2 from cell $n_1+1$ to cell $n_1+n_2$, we obtain that for all $\epsilon>0$ and $l \in \{n_1+1, n_1+2, \cdots, n_1+n_2\}$, there exists a finite time ${\rm{t}}_1^l(\epsilon)$ such that $\bar{\rho}^{l}_{k|k}>-\frac{l\epsilon}{n}> -\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1^l(\epsilon)$. As for the cells on link 3, we proceed with the same induction from cell $n_1+n_2+1$ to cell $n$, which yields that for all $\epsilon>0$ and $l \in \{n_1+n_2+1, n_1+n_2+2, \cdots, n\}$, there exists a finite time ${\rm{t}}_1^l(\epsilon)$ such that $\bar{\rho}^{l}_{k|k}>-\frac{(l-n_2)\epsilon}{n}> -\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1^l(\epsilon)$.

    Letting ${\rm{t}}_1(\epsilon) = \max_{l\in\{1, \cdots, n\}}\{{\rm{t}}_1^l(\epsilon)\}$, it is concluded that $\bar{\rho}^{l}_{k|k}>-\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_1(\epsilon)$ and $l\in \{1, \cdots, n\}$. This proves the ultimate lower bound of the estimates.

    Step 2. We use induction to show that for all $\epsilon>0$ and $l \in \{1, \cdots, n\}$, there exists a finite time ${\rm{t}}_2^l(\epsilon)$ such that $\bar{\rho}^{l}_{k|k}<\varrho_{\text{m}}+\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_2^l(\epsilon)$.

    Since the two downstream boundary cells (indexed by $n_1+n_2$ and $n$) are in the observable subsystem, we have $\bar{\eta}^{n_1+n_2}_{k|k}\rightarrow 0$ and $\bar{\rho}^{n_1+n_2}_{k|k}\rightarrow\rho^{n_1+n_2}_{k}$, as well as $\bar{\eta}^{n}_{k|k}\rightarrow 0$ and $\bar{\rho}^{n}_{k|k}\rightarrow\rho^{n}_{k}$. Given the facts that $\rho^{n_1+n_2}_{k}\le \varrho_{\text{m}}$ and $\rho^{n}_{k}\le \varrho_{\text{m}}$, there exist finite times ${\rm{t}}_2^{n_1+n_2}(\epsilon)$ and ${\rm{t}}_2^{n}(\epsilon)$ such that $\bar{\rho}^{n_1+n_2}_{k|k}<\varrho_{\text{m}}+\frac{\epsilon}{n}$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_2^{n_1+n_2}(\epsilon)$, and $\bar{\rho}^{n}_{k|k}<\varrho_{\text{m}}+\frac{\epsilon}{n}$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_2^{n}(\epsilon)$.

    For all interior cells on link 3, i.e., cells indexed by $l \in \{n_1+n_2+1, n_1+n_2+2, \cdots, n-1\}$, suppose $\bar{\rho}^{l+1}_{k|k}<\varrho_{\text{m}}+\frac{(n-l)\epsilon}{n}$. If $\bar{\rho}^{l}_{k|k}>\varrho_{\text{m}}+\frac{(n-l)\epsilon}{n}$, then we obtain from (9) that

    $ \begin{align} {\rm{f}}\left(\bar{\rho}^{l-1}_{k|k}, \bar{\rho}^{l}_{k|k}\right)&\le w\left(\varrho_{\text{m}}-\bar{\rho}^{l}_{k|k}\right), \end{align} $ (35)
    $ \begin{align} {\rm{f}}\left(\bar{\rho}^{l}_{k|k}, \bar{\rho}^{l+1}_{k|k}\right)& = w\left(\varrho_{\text{m}}-\bar{\rho}^{l+1}_{k|k}\right)>-w\frac{(n-l)\epsilon}{n}. \end{align} $ (36)

    It follows that the estimate of cell $l$ satisfies

    $ \begin{align} \bar{\rho}^l_{k+1|k+1}& = \bar{\rho}^l_{k|k}+\frac{\Delta t}{\Delta x}\left({\rm{f}}\left(\bar{\rho}^{l-1}_{k|k}, \bar{\rho}^{l}_{k|k}\right)-{\rm{f}}\left(\bar{\rho}^{l}_{k|k}, \bar{\rho}^{l+1}_{k|k}\right)\right)-K_{k+1}(l, 1)\bar{\eta}^{1}_{k+1|k}-K_{k+1}(l, 2)\bar{\eta}^{n_1+n_2}_{k+1|k}-K_{k+1}(l, 3)\bar{\eta}^{n}_{k+1|k}\notag\\ &<\bar{\rho}^l_{k|k}-\frac{w\Delta t}{\Delta x}\left|\bar{\rho}^l_{k|k}-\varrho_{\text{m}}-\frac{(n-l)\epsilon}{n}\right|+{\rm{k}}\left(\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right)\left\|\bar{\eta}^{\text{b}}_{k|k}\right\|_{\infty}. \end{align} $ (37)

    Thus there exists a scalar $w_1>\frac{\Delta x{\rm{k}}\left(\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right)}{w\Delta t}$ such that

    $ \begin{align} \bar{\rho}^l_{k+1|k+1}-\bar{\rho}^l_{k|k}<-w_0\left|\bar{\rho}^l_{k|k}-\varrho_{\text{m}}-\frac{(n-l)\epsilon}{n}\right|, \quad \text{for all $\left|\bar{\rho}^l_{k|k}-\varrho_{\text{m}}-\frac{(n-l)\epsilon}{n}\right|\ge w_1\left\|\bar{\eta}^{\text{b}}_{k|k}\right\|_{\infty}$, } \end{align} $ (38)

    where $w_0 = \frac{w\Delta t}{\Delta x}-\frac{{\rm{k}}\left(\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right)}{w_1}>0$, analogous to $v_0$ in (34). Also note that $\left\|{\bar{\eta}}^{\text{b}}_{k|k}\right\|_{\infty}\rightarrow 0$ as $k\rightarrow\infty$, which indicates that the one-step change of the estimates is ultimately negative and large enough in magnitude, so that a finite time ${\rm{t}}_2^l(\epsilon)$ exists such that $\bar{\rho}^{l}_{k|k}<\varrho_{\text{m}}+\frac{(n-l+1)\epsilon}{n}<\varrho_{\text{m}}+\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_2^l(\epsilon)$.

    The above arguments can be generalized to all cells on link 2. Hence for all $\epsilon>0$ and $l \in \{n_1+1, n_1+2, \cdots, n_1+n_2-1\}$, there exists a finite time ${\rm{t}}_2^l(\epsilon)$ such that $\bar{\rho}^{l}_{k|k}<\varrho_{\text{m}}+\frac{(n_1+n_2-l+1)\epsilon}{n}< \varrho_{\text{m}}+\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_2^l(\epsilon)$.

    We now show that for the cell on the upstream side of the junction, i.e., cell $n_1$, there exists a finite time ${\rm{t}}_2^{n_1}(\epsilon)$ such that $\bar{\rho}^{n_1}_{k|k}<\varrho_{\text{m}}+\frac{(n-n_1+1)\epsilon}{n}<\varrho_{\text{m}}+\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_2^{n_1}(\epsilon)$. Suppose $\bar{\rho}^{n_1+1}_{k|k}<\varrho_{\text{m}}+\frac{n_2\epsilon}{n}$ and $\bar{\rho}^{n_1+n_2+1}_{k|k}<\varrho_{\text{m}}+\frac{n_3\epsilon}{n}$. If $\bar{\rho}^{n_1}_{k|k}>\varrho_{\text{m}}+\frac{(n-n_1)\epsilon}{n} = \varrho_{\text{m}}+\frac{(n_2+n_3)\epsilon}{n}$, then the incoming and outgoing flows of cell $n_1$ satisfy

    $ \begin{align*} {\rm{f}}\left(\bar{\rho}^{n_1-1}_{k|k}, \bar{\rho}^{n_1}_{k|k}\right)\le w\left(\varrho_{\text{m}}-\bar{\rho}^{n_1}_{k|k}\right), \end{align*} $

    and

    $ \begin{equation*} {\rm{f}}\left(\bar{\rho}^{n_1}_{k|k}, \bar{\rho}^{n_1+1}_{k|k}\right)+{\rm{f}}\left(\bar{\rho}^{n_1}_{k|k}, \bar{\rho}^{n_1+n_2+1}_{k|k}\right) = \left\{ \begin{array}{ll} {\rm{s}}\left(\bar{\rho}^{n_1}_{k|k}\right) = q_{\text{m}}>-w\left(\frac{n_2+n_3}{n}\epsilon\right) &\textrm{diverge case Ⅱ or Ⅲ}\\[2mm] {\rm{r}}\left(\bar{\rho}^{n_1+1}_{k|k}\right)+{\rm{r}}\left(\bar{\rho}^{n_1+n_2+1}_{k|k}\right)>-w\left(\frac{n_2+n_3}{n}\epsilon\right) &\textrm{diverge case Ⅰ.} \end{array} \right. \end{equation*} $

    Following similar arguments as in (37)-(38), it can be concluded that there exists a finite time ${\rm{t}}_2^{n_1}(\epsilon)$ such that $\bar{\rho}^{n_1}_{k|k}<\varrho_{\text{m}}+\frac{(n-n_1+1)\epsilon}{n}<\varrho_{\text{m}}+\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_2^{n_1}(\epsilon)$. Continuing the induction from cell $n_1$ to cell $1$, it follows that for all $l\in \{1, 2, \cdots, n_1\}$ there exists a finite time ${\rm{t}}_2^l(\epsilon)$ such that $\bar{\rho}^{l}_{k|k}<\varrho_{\text{m}}+\frac{(n-l+1)\epsilon}{n}\le \varrho_{\text{m}}+\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_2^l(\epsilon)$.

    Letting ${\rm{t}}_2(\epsilon) = \max_{l\in\{1, \cdots, n\}}\{{\rm{t}}_2^l(\epsilon)\}$, we obtain $\bar{\rho}^{l}_{k|k}<\varrho_{\text{m}}+\epsilon$ for all $k>{k}_0^{\text{U}}+{\rm{t}}_2(\epsilon)$ and $l\in \{1, \cdots, n\}$. This proves the ultimate upper bound of the estimates.

    Step 3. Combining Steps 1 and 2, and defining ${\rm{t}}(\epsilon) = \max\{{\rm{t}}_1(\epsilon), {\rm{t}}_2(\epsilon)\}$, we obtain $\bar{\rho}^{l}_{k|k}\in(-\epsilon, \varrho_{\text{m}}+\epsilon)$ for all $l\in \{1, \cdots, n\}$ and $k>{k}_0^{\text{U}}+{\rm{t}}(\epsilon)$. Consequently, $\|\bar{\eta}_{k|k}\|<\sqrt{n}(\varrho_{\text{m}}+\epsilon)$ for all $k>{k}_0^{\text{U}}+{\rm{t}}(\epsilon)$.

    Proposition 1 indicates that when the mean estimation error of the three boundary cells converges to zero, the conservation law and the flow-density relationship embedded in the traffic model drive the state estimates of all the interior cells into $[0, \varrho_{\text{m}}]$. For example, when the state estimate $\bar{\rho}_{k|k}^{l}$ is smaller than zero, the sending capacity of cell $l$ is much smaller than the receiving capacity of cell $l$ (as shown in equations (31)-(32)). Consequently, the update equation of the estimate (33) ensures that the one-step change of the state estimate of cell $l$ is always positive, and the magnitude of the one-step change is proportional to the distance between zero and the current state estimate $\bar{\rho}_{k|k}^{l}$. This ensures that the estimate of cell $l$ is constantly pushed towards zero unless it is already sufficiently close to zero. The ultimate upper bound can be derived in the same fashion.
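
    To make this mechanism concrete, the following minimal Python sketch (not part of the original analysis) applies the prediction step alone to a single cell whose estimate starts below zero, with its neighbors held at a physical value. The sending and receiving functions correspond to a triangular fundamental diagram consistent with the flux expressions in (31)-(32) and (35)-(36); the numerical values of $v_{\text{m}}$, $w$, $\varrho_{\text{m}}$, and $\Delta t/\Delta x$ are illustrative assumptions chosen to satisfy the CFL condition.

```python
import numpy as np

# Illustrative (hypothetical) triangular fundamental diagram parameters.
v_m, w, rho_m = 1.0, 0.25, 1.0            # free-flow speed, congestion wave speed, jam density
q_m = v_m * w * rho_m / (v_m + w)         # capacity of the triangular diagram
dt_over_dx = 0.8 / v_m                    # respects the CFL condition v_m * dt / dx <= 1

def sending(rho):                         # sending capacity s(rho)
    return min(v_m * rho, q_m)

def receiving(rho):                       # receiving capacity r(rho)
    return min(q_m, w * (rho_m - rho))

def flux(rho_up, rho_down):               # flux between two consecutive cells
    return min(sending(rho_up), receiving(rho_down))

# Prediction-only update of one interior cell whose estimate starts below zero,
# with its neighbors held at a physical value; the net inflow pushes the
# estimate back toward the physically meaningful interval [0, rho_m].
rho_prev = rho_next = 0.3
rho = -0.5
for step in range(8):
    rho += dt_over_dx * (flux(rho_prev, rho) - flux(rho, rho_next))
    print(f"step {step + 1}: estimate = {rho:.3f}")
```

    Within a few iterations the estimate settles inside $[0, \varrho_{\text{m}}]$, mirroring how the flux terms in (33) and (37) push non-physical estimates back towards the physically meaningful interval.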

    In this article, we establish the theoretical performance of the KF applied to estimate the traffic density on transportation networks under unobservable scenarios. To facilitate the performance analysis of the KF, a linear SMM-J model is introduced which combines a junction solver with the switched linear system representation of the CTM. It is shown that in addition to the existence of shocks, the presence of junctions contributes significantly to the non-observability of the system.

    To derive the error bounds for the KF under unobservable traffic estimation problems, we analyze several properties of the state transition matrices of the SMM-J, which reflect the intrinsic physical properties (e.g., vehicle conservation and the CFL condition in the discretization scheme) of the traffic model. Based on the above properties of the SMM-J, we show that the infinity norm of the Kalman gain is uniformly bounded under unobservable modes. Finally, we show that the mean estimate of each cell is ultimately bounded inside the physically meaningful interval, provided that the density measurements of the boundary cells are available. The ultimate lower and upper bounds are derived based on the convergence (to zero) of the mean estimation error of the boundary cells, the boundedness of the Kalman gain, and the flow-density relationship embedded in the model prediction step of the KF. As indicated in the proof, feeding sensor data back to the estimator is critical to ensure physically meaningful estimates when the system is unobservable, which cannot be naturally achieved by an open-loop observer. These results provide theoretical insights into the performance of sequential estimation algorithms widely used in traffic monitoring applications.

    Example 3. Consider a linear discrete-time system describing the evolution of a moving object. The state vector is constructed as $\rho_k = \left(\rho^1_k, \rho^2_k, \rho^3_k\right)^{\top}$, where $\rho^1_{k}$, $\rho^2_{k}$, and $\rho^3_{k}$ are the location, speed, and acceleration of the moving object at time $k\in\mathbb{N}$, respectively. The moving object travels with a constant acceleration. The system dynamics are given by

    $ \begin{align} \rho_{k+1}& = A_k\rho_k+\omega_k, \quad \rho_k\in \mathbb{R}^{3}, \end{align} $ (39)

    where $\omega_k\sim \mathcal{N}\left({\bf{0}}, Q_k\right)$ , the state transition matrix and the model error covariance matrices are given as follows:

    $ \begin{align*} A_k = \left( \begin{array}{ccc} 1&1&0.5\\ 0&1&1\\ 0&0&1 \end{array} \right), \quad Q_k = \left( \begin{array}{ccc} 1&0&0\\ 0&1&0\\ 0&0&1 \end{array} \right), \quad \text{for all $k$.} \end{align*} $

    The initial state is $\rho_0 = (2, 1, 0.05)^{\top}$ . The sensor measures the acceleration of the moving object, i.e., the measurement is modeled by

    $ \begin{align} z_k& = H_k\rho_k+v_k, \quad z_k\in \mathbb{R}, \label{eq:ex_measure} \end{align} $ (40)

    where

    $ \begin{align*} H_{k} = \left( \begin{array}{ccc} 0&0&1 \end{array} \right), \quad \text{$v_k \sim \mathcal{N}\left(0, R_k\right)$ with $R_k = 1$}, \quad \text{for all $k$.} \end{align*} $

    We use the KF (3)-(4) to estimate the state, where the initial condition is set to be $\bar{\eta}_{0|0} = (3, 2, 0.2)^{\top}$ and $\Gamma_{0|0} = I_3$ .

    The system (39)-(40) is not observable, which can be concluded by computing its observability matrix [4,Theorem 6.O1], and showing that the observability matrix is not full rank. The mean estimation error evolves as the following equation:

    $ \begin{align} \bar{\eta}_{k|k} = \left(I-K_kH_k\right)A_{k-1}\bar{\eta}_{k-1|k-1}. \label{eq:evolve_mean_eta} \end{align} $ (41)

    In Figure 5a, the solid curve shows the analytical evolution of $\left\| \bar{\eta}_{k|k}\right\|$ which follows (41). A Monte Carlo test of $N_r = 10,000$ realizations of the KF is also conducted, and the dashed curve in Figure 5a shows the empirical evolution of the estimation error $\left\|\hat{\eta}_{k|k}\right\|$ , where $\hat{\eta}_{k|k} = \frac{1}{N_r} \sum_{r = 1}^{N_r} \eta_{r, k|k}$ , and $\eta_{r, k|k}$ is the posterior estimation error at time $k$ on the $r^{\text{th}}$ realization. We also plot in Figure 5b the trace of the estimation error covariance $\text{tr}(\Gamma_{k|k})$ . It is shown that unlike the observable scenarios described in Lemma 1, the error covariance and the estimation error diverge as $k$ increases in this example, which is typical for unobservable systems.

    Figure 5.  The evolutions of the estimation errors (A) and the trace of the error covariance (B) when using the KF to track the unobservable system (39)-(40).
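
    For readers who wish to reproduce the qualitative behavior in Figure 5, the following Python sketch builds the system (39)-(40), checks the rank of its observability matrix, and propagates the error covariance and the mean estimation error (41) using the standard KF prediction/update recursions. It is a minimal illustration under the stated initial conditions; it is not the authors' code, and the printed values are only meant to exhibit the divergence, not to match the figure exactly.

```python
import numpy as np

# Constant-acceleration system of Example 3; only the acceleration is measured.
A = np.array([[1.0, 1.0, 0.5],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
H = np.array([[0.0, 0.0, 1.0]])
Q, R = np.eye(3), np.array([[1.0]])

# Observability matrix [H; HA; HA^2] has rank 1 < 3, so the system is unobservable.
O = np.vstack([H @ np.linalg.matrix_power(A, i) for i in range(3)])
print("rank of observability matrix:", np.linalg.matrix_rank(O))

# Analytical evolution of the mean estimation error (41) and of tr(Gamma_{k|k}).
eta = np.array([3.0, 2.0, 0.2])    # initial mean estimation error
Gamma = np.eye(3)                  # initial posterior error covariance
for k in range(1, 51):
    Gamma = A @ Gamma @ A.T + Q                        # prior covariance
    K = Gamma @ H.T @ np.linalg.inv(H @ Gamma @ H.T + R)
    Gamma = (np.eye(3) - K @ H) @ Gamma                # posterior covariance
    eta = (np.eye(3) - K @ H) @ A @ eta                # mean error recursion (41)
    if k % 10 == 0:
        print(f"k = {k:2d}   ||eta|| = {np.linalg.norm(eta):10.2f}   tr(Gamma) = {np.trace(Gamma):10.2f}")
```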

    The proof of Lemma 2 is divided into two steps. In Step 1, we apply properties (P.1) and (P.2) to show that (27) holds for the modes where the junction solver follows diverge case Ⅰ or diverge case Ⅱ. In Step 2, properties (P.1) and (P.3) are applied to show that (27) holds for the modes where the junction solver follows diverge case Ⅲ.

    For all $k_0+1 \le \ell < k \le {k}_1$ and $i, j\in \{1, \cdots, n\}$ , the $(i, j)^{\text{th}}$ entry of $\prod_{\kappa = k}^{\ell}A_{\kappa}$ is given by

    $ \begin{align} \left(\prod\limits_{\kappa = k}^{\ell}A_{\kappa}\right)\left(i, j\right) = \sum\limits_{r = 1}^{n}\left(\left(\prod\limits_{\kappa = k}^{\ell+1}A_{\kappa}\right)\left(i, r\right)\right)\left(A_{\ell}\left(r, j\right)\right).\label{eq:prod_A} \end{align} $ (42)

    Step 1. Suppose $A_k$ is under a mode where the junction solver follows diverge case Ⅰ or diverge case Ⅱ. Recall from (P.2) that

    $ \begin{align*} \sum\limits_{r = 1}^n A_{k}\left(r, j\right) \le 1, \quad \text{for all $k\in (k_0, {k}_1]$ and $j\in \{1, \cdots, n\}$.} \end{align*} $

    Hence, the $(i, j)^{\text{th}}$ entry of $\prod_{\kappa = k}^{\ell}A_{\kappa}$ is no greater than the convex combination of all the entries on the $i^{\text{th}}$ row of $\prod_{\kappa = k}^{\ell+1}A_{\kappa}$ . Moreover, recall from (P.1) that

    $ \begin{align*} 0\le A_{k}(r, c)\le 1, \quad\textrm{for all $k\in (k_0, k_1]$ and $r, c\in \{1, \cdots, n\}$, } \end{align*} $

    it follows that

    $ \begin{align*} 0\le \left(\prod\limits_{\kappa = k}^{\ell}A_{\kappa}\right)\left(i, j\right) \le 1, \quad\textrm{for all $k\in (k_0, k_1]$, $\ell \in [k_0+1, k)$ and $i, j\in \{1, \cdots, n\}$, } \end{align*} $

    thus (27) follows directly by setting $\ell = k_0+1$ in the above equation.

    Step 2. Suppose $A_k$ is under a mode where the junction solver follows diverge case Ⅲ. We prove for the case where ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k}) = {\rm{r}}(\rho^{n_1+1}_{k})$ , and the proof for the case where ${\rm{f}}(\rho^{n_1}_{k}, \rho^{n_1+1}_{k}) = {\rm{s}}(\rho^{n_1}_{k})-{\rm{r}}(\rho^{n_1+n_2+1}_{k})$ follows by symmetry.

    Recall from (P.3) that

    $ \begin{align} \sum\limits_{r = 1}^n A_{k}(r, j) \le 1, \quad\textrm{for all $k\in (k_0, k_1]$ and $j\in\{j|j\in \{1, \cdots, n\}, j\neq n_1+1\}$.}\label{eq:A_prop_ele} \end{align} $ (43)

    For $j = n_1+1$ , the sum of all entries of $A_k$ on column $j$ is given by

    $ \begin{equation*} \begin{split} \sum\limits_{r = 1}^n A_{k}(r, n_1+1)& = A_{k}(n_1, n_1+1)+A_{k}(n_1+1, n_1+1)+A_{k}(n_1+n_2+1, n_1+1)\\ & = \frac{w\Delta t}{\Delta x}+\left(1-\frac{w\Delta t}{\Delta x}\right)+\frac{w\Delta t}{\Delta x} = 1+\frac{w\Delta t}{\Delta x}, \quad\textrm{for all $k\in (k_0, k_1]$ }. \end{split} \end{equation*} $

    Additionally, one may note that for all $k\in (k_0, k_1]$ ,

    $ \begin{equation*} A_{k}(r, n_1+n_2+1) = \left\{ \begin{array}{ll} 1 &\textrm{if $r = n_1+n_2+1$ }\\ 0& \textrm{otherwise.} \end{array} \right. \end{equation*} $

    It follows that for all $k_0+1 \le \ell < k \le k_1$ ,

    $ \begin{equation}\label{eq:prod_A_zero} \left(\prod\limits_{\kappa = k}^{\ell}A_{\kappa}\right)\left(r, n_1+n_2+1\right) = \left\{ \begin{array}{ll} 1 &\textrm{if $r = n_1+n_2+1$ }\\ 0 &\textrm{otherwise.} \end{array} \right. \end{equation} $ (44)

    Combining (43) and (44) with (42), we obtain that for all $(i, j)\neq (n_1+n_2+1, n_1+1)$ , the $(i, j)^{\text{th}}$ entry of $\prod_{\kappa = k}^{\ell}A_{\kappa}$ is no greater than the convex combination of all the (non-zero) entries on the $i^{\text{th}}$ row of $\prod_{\kappa = k}^{\ell+1}A_{\kappa}$ . Also recall from (P.3) that for all $k\in (k_0, k_1]$ ,

    $ \begin{align*} A_{k}(r, c) = 0, \quad\textrm{for all $r\in\{n_1+1, \cdots, n\}$ and $c\in\{1, \cdots, n_1\}$, } \end{align*} $

    which yields that for all $k_0+1 \le \ell < k \le k_1$ ,

    $ \begin{align*} \left(\prod\limits_{\kappa = k}^{\ell+1}A_{\kappa} \right)\left(r, c\right) = 0, \quad\textrm{for all $r\in\{n_1+1, \cdots, n\}$ and $c\in\{1, \cdots, n_1\}$, } \end{align*} $

    thus

    $ \begin{align*} \left(\prod\limits_{\kappa = k}^{\ell+1}A_{\kappa} \right)\left(n_1+n_2+1, n_1\right) = 0, \quad\textrm{for all $k_0+1 \le \ell < k \le k_1$.} \end{align*} $

    Hence for $(i, j) = (n_1+n_2+1, n_1+1)$ , the $(i, j)^{\text{th}}$ entry of $\prod_{\kappa = k}^{\ell}A_{\kappa}$ is also no greater than the convex combination of all the (non-zero) entries on the $i^{\text{th}}$ row of $\prod_{\kappa = k}^{\ell+1}A_{\kappa}$ . Moreover, according to (P.1) it holds that

    $ \begin{align*} 0\le A_{k}(r, c)\le 1, \quad\textrm{for all $k\in (k_0, k_1]$ and $r, c\in \{1, \cdots, n\}$.} \end{align*} $

    It can be concluded that

    $ \begin{align*} 0\le \left(\prod\limits_{\kappa = k}^{\ell}A_{\kappa}\right)\left(i, j\right) \le 1, \quad\textrm{for all $k\in (k_0, k_1]$, $\ell \in [k_0+1, k)$ and $i, j\in \{1, \cdots, n\}$, } \end{align*} $

    thus (27) follows directly by setting $\ell = k_0+1$ in the above equation.

    Appendix.3. In an unobservable mode, the SMM-J can be transformed to the Kalman observability canonical form. The transformed state consists of the observable and the unobservable parts of the system, i.e.,

    $ \begin{align*} \rho_{k}^{(\text{t})} = U\rho_{k} = \left( \begin{array}{c} {\rho}_k^{(1)}\\ {\rho}_k^{(2)} \end{array} \right), \end{align*} $

    where $U$ is an orthogonal matrix, ${\rho}_k^{(1)}\in \mathbb{R}^{d_1}$ and ${\rho}_k^{(2)}\in \mathbb{R}^{d_2}$ are the states of the observable and unobservable subsystems, respectively, with $d_1+d_2 = n$. Moreover, since the densities of the three boundary cells are directly measured, it holds that $d_1 \ge 3$. As a consequence, system (28)-(29) is transformed into the following form:

    $ \begin{align*} {\rho}^{(\text{t})}_{k+1}& = {A}^{(\text{t})}_k\rho^{(\text{t})}_k+u^{(\text{t})}_k+{\omega}^{(\text{t})}_{k}, \quad \rho_k \in \mathbb{R}^{n}, \\ z_{k}& = {H}^{(\text{t})}_{k}{\rho}^{(\text{t})}_{k}+v_{k}, \quad z_k \in \mathbb{R}^{3}, \end{align*} $

    where the transformed state transition matrix ${A}^{(\text{t})}_{k}$ and the transformed observation matrix ${H}^{(\text{t})}_{k}$ can also be partitioned according to the observable and unobservable subsystems, i.e.,

    $ \begin{align} {A}^{(\text{t})}_{k} = UA_{k}U^{\top} = \left( \begin{array}{cc} {A}_k^{(1)}&{\bf{0}}_{d_1, d_2}\\ {A}_k^{(21)}&{A}^{(2)}_k \end{array} \right), \quad {H}^{(\text{t})}_{k} = H_kU^{\top} = \left( \begin{array}{cc} {H}_k^{(1)}&{\bf{0}} \end{array} \right), \label{eq:a_transform} \end{align} $ (45)

    with ${H}_k^{(1)}\in \mathbb{R}^{3\times d_1}$ defined as follows:

    $ \begin{equation*} {H}_k^{(1)} = \left\{ \begin{array}{ll} I_3 &\textrm{if $d_1 = 3$}\\ \left( \begin{array}{cc} I_3&{\bf{0}}_{3, d_1-3} \end{array} \right)& \textrm{if $d_1 > 3$, } \end{array} \right. \quad \text{for all $k$.} \end{equation*} $

    Moreover, the transformed system input is given by ${u}^{(\text{t})}_k = U u_k$, and the transformed model noise is given by ${\omega}^{(\text{t})}_k = U \omega_k \sim \mathcal{N}({\bf{0}}, {Q}^{(\text{t})}_k)$, where the transformed model error covariance ${Q}^{(\text{t})}_{k}$ can be partitioned into blocks corresponding to the observable and unobservable subsystems, i.e.,

    $ \begin{align*} {Q}^{(\text{t})}_{k} = UQ_{k}U^{\top} = \left( \begin{array}{cc} {Q}_k^{(1)}&{Q}_k^{(12)}\\ {Q}_k^{(21)}&{Q}^{(2)}_k \end{array} \right). \end{align*} $
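
    One standard way to construct such an orthogonal $U$ (shown below as an illustrative sketch, not necessarily the construction used by the authors) is via the singular value decomposition of the observability matrix of the current mode: the first $d_1$ right singular vectors span the observable subspace and the remaining $d_2$ span its orthogonal complement, which yields the block structure in (45). The snippet reuses the matrices of Example 3 purely as a small test case (there $d_1 = 1$, whereas the traffic system with three boundary measurements has $d_1\ge 3$).

```python
import numpy as np

def kalman_obsv_decomposition(A, H, tol=1e-10):
    """Orthogonal coordinate change separating the observable and unobservable
    subsystems (Kalman observability canonical form) for a fixed mode (A, H)."""
    n = A.shape[0]
    O = np.vstack([H @ np.linalg.matrix_power(A, i) for i in range(n)])
    # Right singular vectors of O: the first d1 span the observable subspace,
    # the remaining d2 = n - d1 span its orthogonal complement (null space of O).
    _, s, Vt = np.linalg.svd(O)
    d1 = int(np.sum(s > tol))
    return Vt, d1          # rows of Vt: observable directions first, then unobservable

# Small test case (Example 3's matrices, used purely for illustration).
A = np.array([[1.0, 1.0, 0.5], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
H = np.array([[0.0, 0.0, 1.0]])
U, d1 = kalman_obsv_decomposition(A, H)
At, Ht = U @ A @ U.T, H @ U.T
print("d1 =", d1)
print("top-right block of A^(t) (should vanish):\n", np.round(At[:d1, d1:], 12))
print("unobservable columns of H^(t) (should vanish):\n", np.round(Ht[:, d1:], 12))
```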

    The prior estimation error covariance matrix partitioned into the observable and unobservable subsystems is constructed as follows:

    $ \begin{align} {\Gamma}^{(\text{t})}_{k|k-1} = \left( \begin{array}{cc} {\Gamma}^{(1)}_{k|k-1}&{\Gamma}^{(12)}_{k|k-1}\\ {\Gamma}^{(21)}_{k|k-1}&{\Gamma}^{(2)}_{k|k-1} \end{array} \right).\notag \end{align} $

    In the KF, the prior error covariance matrix is computed recursively by the Riccati equation

    $ \begin{align} {\Gamma}^{(\text{t})}_{k+1|k} = &{A}^{(\text{t})}_k\left({\Gamma}^{(\text{t})}_{k|k-1}-{\Gamma}^{(\text{t})}_{k|k-1}\left({H}^{(\text{t})}_k\right)^{\top}\left({H}^{(\text{t})}_k{\Gamma}^{(\text{t})}_{k|k-1}\left({H}_k^{(\text{t})}\right)^{\top}+R_k\right)^{-1}\times\right.\notag\\ &\quad \quad \quad \left.{H}^{(\text{t})}_k{\Gamma}^{(\text{t})}_{k|k-1}\right)\left({A}_{k}^{(\text{t})}\right)^{\top}+{Q}^{(\text{t})}_k, \label{eq:reca_eq} \end{align} $ (46)

    Define

    $ \begin{align} {\Upsilon}^{(1)}_{k}& = {A}_k^{(1)}-{A}_k^{(1)}{\Gamma}^{(1)}_{k|k-1}\left({H}_k^{(1)}\right)^{\top}\left({H}^{(\text{t})}_k{\Gamma}^{(\text{t})}_{k|k-1}\left({H}_k^{(\text{t})}\right)^{\top}+R_k\right)^{-1}{H}_k^{(1)}\notag\\ & = {A}_k^{(1)}-{A}_k^{(1)}{K}^{(1)}_{k}{H}_k^{(1)}, \notag \end{align} $

    and apply the partition into observable and unobservable subsystems to both sides of (46); we then obtain the following two blocks of equations describing the evolution of ${\Gamma}^{(1)}_{k+1|k}$ and ${\Gamma}^{(12)}_{k+1|k}$:

    $ \begin{align} {\Gamma}^{(1)}_{k+1|k}& = {\Upsilon}^{(1)}_{k}{\Gamma}^{(1)}_{k|k-1}\left({A}_k^{(1)}\right)^{\top}+{Q}_k^{(1)}, \label{eq:18} \end{align} $ (47)
    $ \begin{align} {\Gamma}^{(12)}_{k+1|k}& = {\Upsilon}^{(1)}_{k}{\Gamma}^{(12)}_{k|k-1}\left({A}^{(2)}_k\right)^{\top}+{\Upsilon}^{(1)}_{k}{\Gamma}^{(1)}_{k|k-1}\left({A}_k^{(21)}\right)^{\top}+{Q}_k^{(12)}.\label{eq:19} \end{align} $ (48)
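
    As a numerical sanity check (not part of the original derivation), the following sketch compares one step of the full Riccati recursion (46) with the block recursions (47)-(48) on a randomly generated system that has the block structure in (45); all matrices and dimensions below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, m = 4, 3, 3
n = d1 + d2
spd = lambda M: M @ M.T + np.eye(M.shape[0])    # random symmetric positive definite matrix

# Random system already written in the canonical coordinates of (45).
A1, A2, A21 = rng.random((d1, d1)), rng.random((d2, d2)), rng.random((d2, d1))
At = np.block([[A1, np.zeros((d1, d2))], [A21, A2]])
H1 = np.hstack([np.eye(m), np.zeros((m, d1 - m))])
Ht = np.hstack([H1, np.zeros((m, d2))])
Qt, R = spd(rng.random((n, n))), np.eye(m)
G = spd(rng.random((n, n)))                     # prior covariance Gamma^(t)_{k|k-1}

# One step of the full Riccati recursion (46).
S = Ht @ G @ Ht.T + R
G_next = At @ (G - G @ Ht.T @ np.linalg.inv(S) @ Ht @ G) @ At.T + Qt

# The same step using the block recursions (47)-(48).
G1, G12 = G[:d1, :d1], G[:d1, d1:]
Ups1 = A1 - A1 @ (G1 @ H1.T @ np.linalg.inv(S)) @ H1
G1_next = Ups1 @ G1 @ A1.T + Qt[:d1, :d1]
G12_next = Ups1 @ G12 @ A2.T + Ups1 @ G1 @ A21.T + Qt[:d1, d1:]

print(np.allclose(G_next[:d1, :d1], G1_next))   # expected: True
print(np.allclose(G_next[:d1, d1:], G12_next))  # expected: True
```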

    In this section, we present the explicit formula of ${\rm{k}}\left(\Gamma_{k_0^{{\text{U}}}|{k}_0^{{\text{U}}}}\right)$ and prove Lemma 3. As detailed in Appendix.3, we transform the state vector according to observable and unobservable subsystems, i.e.,

    $ \begin{align*} {\rho}^{\text{(t)}}_k = U\rho_{k} = \left( \begin{array}{c} {\rho}_k^{(1)}\\ {\rho}_k^{(2)} \end{array} \right), \end{align*} $

    where ${\rho}_k^{(1)}\in \mathbb{R}^{d_1}$ and ${\rho}_k^{(2)}\in \mathbb{R}^{d_2}$ are the state in the observable and unobservable subsystems, respectively. The transformed Kalman gain is given by

    $ \begin{align}\label{eq:k_divide} {K}^{\text{(t)}}_k = UK_{k} = \left( \begin{array}{c} {K}_k^{(1)}\\ {K}_k^{(21)} \end{array} \right), \end{align} $ (49)

    where ${K}_k^{(1)}$ and ${K}_k^{(21)}$ correspond to the observable and unobservable subsystems, respectively, with

    $ \begin{align} {K}^{(1)}_{k}& = {\Gamma}^{(1)}_{k|k-1}\left({H}_k^{(1)}\right)^{\top}\left(R_{k}+{H}_k^{(1)}{\Gamma}^{(1)}_{k|k-1}\left({H}_k^{(1)}\right)^{\top}\right)^{-1}, \label{eq:evol_k_1} \end{align} $ (50)
    $ \begin{align} {K}_k^{(21)}& = {\Gamma}_{k|k-1}^{(21)}\left({H}_k^{(1)}\right)^{\top}\left(R_k+{H}_k^{(1)}{\Gamma}_{k|k-1}^{(1)}\left({H}_k^{(1)}\right)^{\top}\right)^{-1}.\label{eq:evol_k_2} \end{align} $ (51)
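
    A similar synthetic check (again, not taken from the paper) confirms that transforming the full Kalman gain as in (49) reproduces the block formulas (50)-(51); the orthogonal matrix, prior covariance, and dimensions are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d1, m = 7, 4, 3
d2 = n - d1
U = np.linalg.qr(rng.standard_normal((n, n)))[0]      # an arbitrary orthogonal U
H1 = np.hstack([np.eye(m), np.zeros((m, d1 - m))])
H = np.hstack([H1, np.zeros((m, d2))]) @ U            # so that H U^T = [H1, 0]
R = np.eye(m)
M = rng.standard_normal((n, n))
Gamma_prior = M @ M.T + np.eye(n)                     # prior covariance in original coordinates

# Full gain, then transformed as in (49).
S = H @ Gamma_prior @ H.T + R
K = Gamma_prior @ H.T @ np.linalg.inv(S)
Kt = U @ K

# Block formulas (50)-(51), built from the transformed prior covariance.
Gt = U @ Gamma_prior @ U.T
G1, G21 = Gt[:d1, :d1], Gt[d1:, :d1]
K1 = G1 @ H1.T @ np.linalg.inv(R + H1 @ G1 @ H1.T)
K21 = G21 @ H1.T @ np.linalg.inv(R + H1 @ G1 @ H1.T)

print(np.allclose(Kt[:d1], K1), np.allclose(Kt[d1:], K21))   # expected: True True
```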

    Appendix.4.1. Explicit formula of the Kalman gain bound. Define

    $ \begin{align*} a_1 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\{\left\|A_k\right\|_{\infty}\right\}, ~ a_2 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\{\left\|A_k^{\top}\right\|_{\infty}\right\}, \end{align*} $

    and

    $ \begin{align*} \tilde{a}_1& = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\|A^{(1)}_k\right\|_{\infty}, ~~~~ \tilde{a}_2 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\|\left(A_k^{(1)}\right)^{\top}\right\|_{\infty}, ~\\[2mm] \tilde{a}_3& = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\sigma_{\max}\left({A}_k^{(1)}\right), ~ \tilde{a}_4 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\|{A}_k^{(21)}\right\|_{\infty}. \end{align*} $

    Moreover, define as $\tilde{c}_1$ and $\tilde{c}_2$ the lower and upper bounds of the error covariance of the observable subsystem, i.e.,

    $ \begin{align} \tilde{c}_1 I < {\Gamma}_{k|k}^{(1)} < \tilde{c}_2 I, \quad \text{for all $k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]$, } \label{eq:bound_cov_obsv_0} \end{align} $ (52)

    and let

    $ \begin{align*} &\tilde{c}_3 = \left(\tilde{c}_2+q_1^{-1}\tilde{c}_2^{2}\tilde{a}_3\right)^{-1}, ~ \tilde{t} = n^2\sqrt{d_1} \left(\tilde{c}_2\tilde{c}_1^{-1}\right)^{\frac{1}{2}}, ~ \tilde{q} = \left(1-\tilde{c}_3\tilde{c}_1\right)^{\frac{1}{2}}, \\[2mm] &\tilde{p} = d_1\tilde{c}_2\tilde{a}_4\left(\tilde{a}_1\tilde{a}_2\tilde{c}_2+q_2\right)q_1^{-1}+q_2, \\[2mm] &~ \tilde{\gamma} = n\sqrt{n}\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|\left(a_1a_2\right)^2+n\sqrt{n}a_1a_2q_2+\sqrt{n}q_2. \end{align*} $

    The upper bound of $\|K_k\|_{\infty}$ for $k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]$ in (30) is defined as

    $ \begin{align*} &{\rm{k}}\left(\Gamma_{{k}_0^{{\text{U}}}|{k}_0^{{\text{U}}}}\right)\\[2mm] = &\frac{\sqrt{3}d_1}{r_1}\max\left\{\frac{\sqrt{n}}{d_1}\left(\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|a_1a_2+q_2\right), \frac{1}{\sqrt{d_1}}\left(\tilde{a}_1\tilde{a}_2\tilde{c}_2+q_2\right), \tilde{\gamma}, \tilde{t}\tilde{q}\tilde{\gamma}+\tilde{p}, \frac{\tilde{t}\tilde{p}\tilde{q}}{1-\tilde{q}}+\tilde{p}\right\}. \end{align*} $

    Appendix.4.2. Proof of Lemma 3. The proof consists of the following five steps. Step 1 derives an upper bound for $\left\|K_{{k}_0^{\text{U}}+1}\right\|_{\infty}$ . Step 2 derives an upper bound of $\left\|{K}^{(1)}_k\right\|_{\infty}$ for $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ . In Step 3, we study the convergence rate of the error dynamics of the observable subsystem, which is also related to the boundedness of ${K}^{(21)}_k$ . Based on the convergence rate obtained in Step 3, Step 4 derives an upper bound of $\left\|{K}^{(21)}_k\right\|_{\infty}$ for $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ . Step 5 combines the above steps together and concludes the proof.

    Step 1. At time step ${k}_0^{\text{U}}+1$ , the Kalman gain is computed as follows:

    $ \begin{align*} K_{{k}_0^{\text{U}}+1} = \Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}}H_{{k}_0^{\text{U}}+1}^{\top}\left(R_{{k}_0^{\text{U}}+1}+H_{{k}_0^{\text{U}}+1}\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}}H_{{k}_0^{\text{U}}+1}^{\top}\right)^{-1}, \end{align*} $

    where $\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}} = A_{{k}_0^{\text{U}}}\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}A_{{k}_0^{\text{U}}}^{\top}+Q_{{k}_0^{\text{U}}}$. Given that $\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|_{\infty}\le\sqrt{n}\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|$ and $\left\|Q_k\right\|_{\infty}<\sqrt{n}q_2$, and defining

    $ \begin{align*} a_1 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\{\left\|A_k\right\|_{\infty}\right\}, \quad a_2 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\{\left\|A_k^{\top}\right\|_{\infty}\right\}, \end{align*} $

    the prior error covariance at time ${k}_0^{\text{U}}+1$ satisfies

    $ \begin{align*} \left\|\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}}\right\|_{\infty} &\le \left\|A_{{k}_0^{\text{U}}}\right\|_{\infty}\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|_{\infty}\left\|A_{{k}_0^{\text{U}}}^{\top}\right\|_{\infty}+\left\|Q_{{k}_0^{\text{U}}}\right\|_{\infty}\\ & < \sqrt{n}\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|a_1a_2+\sqrt{n}q_2. \end{align*} $

    Moreover, since

    $ \begin{equation*} \begin{split} &\left\|\left(R_{{k}_0^{\text{U}}+1}+H_{{k}_0^{\text{U}}+1}\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}}H_{{k}_0^{\text{U}}+1}^{\top}\right)^{-1}\right\|_{\infty}\\ \le&\sqrt{3} \left\|\left(R_{{k}_0^{\text{U}}+1}+H_{{k}_0^{\text{U}}+1}\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}}H_{{k}_0^{\text{U}}+1}^{\top}\right)^{-1}\right\|\\ = &\sqrt{3} \left(\sigma_{\min}\left(R_{{k}_0^{\text{U}}+1}+H_{{k}_0^{\text{U}}+1}\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}}H_{{k}_0^{\text{U}}+1}^{\top}\right)\right)^{-1}\\ \le& \sqrt{3} \left(\sigma_{\min}\left(R_{{k}_0^{\text{U}}+1}\right)\right)^{-1}\\ < &\frac{\sqrt{3}}{r_1}, \end{split} \end{equation*} $

    it follows that

    $ \begin{equation*} \begin{split} \left\|K_{{k}_0^{\text{U}}+1}\right\|_{\infty}& \le \left\|\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}}\right\|_{\infty}\left\|H_{{k}_0^{\text{U}}+1}^{\top}\right\|_{\infty}\left\|\left(R_{{k}_0^{\text{U}}+1}+H_{{k}_0^{\text{U}}+1}\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}}H_{{k}_0^{\text{U}}+1}^{\top}\right)^{-1}\right\|_{\infty}\\ & < \frac{\sqrt{3n}}{r_1}\left(\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|a_1a_2+q_2\right). \end{split} \end{equation*} $

    Step 2. As stated in (50), the Kalman gain associated with the observable subsystem is given by

    $ \begin{align*} {K}^{(1)}_{k}& = {\Gamma}^{(1)}_{k|k-1}\left({H}_k^{(1)}\right)^{\top}\left(R_{k}+{H}_k^{(1)}{\Gamma}^{(1)}_{k|k-1}\left({H}_k^{(1)}\right)^{\top}\right)^{-1}. \end{align*} $

    According to Lemma 1, there exist constants $\tilde{c}_1$ and $\tilde{c}_2$ such that the error covariance of the observable subsystem satisfies

    $ \begin{align} \tilde{c}_1 I < {\Gamma}_{k|k}^{(1)} < \tilde{c}_2 I, \quad \text{for all $k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]$ .} \label{eq:bound_cov_obsv} \end{align} $ (53)

    Given that

    $ \begin{align*} {\Gamma}^{(1)}_{k|k-1} = {A}_k^{(1)}{\Gamma}^{(1)}_{k-1|k-1}\left({A}_k^{(1)}\right)^{\top}+{Q}_{k-1}^{(1)}, \end{align*} $

    we have

    $ \begin{align*} \left\|{\Gamma}^{(1)}_{k|k-1}\right\|_{\infty} \le \tilde{a}_1\tilde{a}_2\left\|{\Gamma}^{(1)}_{k-1|k-1}\right\|_{\infty}+\left\|{Q}_{k-1}^{(1)}\right\|_{\infty} < \sqrt{d_1}\left(\tilde{a}_1\tilde{a}_2\tilde{c}_2+q_2\right), \end{align*} $

    with $\tilde{a}_1$ and $\tilde{a}_2$ defined as

    $ \begin{align*} \tilde{a}_1 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\|A^{(1)}_k\right\|_{\infty}, \quad \tilde{a}_2 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\|\left(A_k^{(1)}\right)^{\top}\right\|_{\infty}. \end{align*} $

    Following a similar argument as in Step 1, we obtain

    $ \begin{equation*} \begin{split} \left\|{K}^{(1)}_{k}\right\|_{\infty} < \frac{\sqrt{3d_1}}{r_1}\left(\tilde{a}_1\tilde{a}_2\tilde{c}_2+q_2\right), \quad \text{for all $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ .} \end{split} \end{equation*} $

    Step 3. Define the Lyapunov function of the observable subsystem as

    $ \begin{align*} {V}^{\text{(1)}}_k = \left({\bar{\eta}}^{(1)}_{k|k}\right)^{\top}\left({\Gamma}^{(1)}_{k|k}\right)^{-1}{\bar{\eta}}^{(1)}_{k|k}. \end{align*} $

    According to Lemma 3 in [31], the one-step change of ${V}^{\text{(1)}}_k$ is given by:

    $ \begin{equation*} \begin{split} &{V}^{\text{(1)}}_{k+1}-{V}^{\text{(1)}}_k\\ = &-\left({\bar{\eta}}^{(1)}_{k|k}\right)^{\top}\left({\Gamma}^{(1)}_{k|k}+{\Gamma}^{(1)}_{k|k}\left({A}_k^{(1)}\right)^{\top}\times \right.\\ &\quad \quad \quad \quad \quad \quad \left.\left({Q}_{k}^{(1)}+{\Gamma}^{(1)}_{k+1|k}\left({H}_k^{(1)}\right)^{\top}R_{k+1}^{-1}{H}_k^{(1)}{\Gamma}^{(1)}_{k+1|k}\right)^{-1}{A}_k^{(1)}{\Gamma}^{(1)}_{k|k}\right)^{-1}{\bar{\eta}}^{(1)}_{k|k}\\ \le& -\left\|{\bar{\eta}}^{(1)}_{k|k}\right\|^2\left\|{\Gamma}^{(1)}_{k|k}+{\Gamma}^{(1)}_{k|k}\left({A}_k^{(1)}\right)^{\top}\times\right.\\ &\quad \quad \quad \quad \quad \quad\left.\left({Q}_{k}^{(1)}+{\Gamma}^{(1)}_{k+1|k}\left({H}_k^{(1)}\right)^{\top}R_{k+1}^{-1}{H}_k^{(1)}{\Gamma}^{(1)}_{k+1|k}\right)^{-1}{A}_k^{(1)}{\Gamma}^{(1)}_{k|k}\right\|^{-1}, \end{split} \end{equation*} $

    where

    $ \begin{equation*} \begin{split} &\left\|{\Gamma}^{(1)}_{k|k}+{\Gamma}^{(1)}_{k|k}\left({A}_k^{(1)}\right)^{\top}\left({Q}_{k}^{(1)}+{\Gamma}^{(1)}_{k+1|k}\left({H}_k^{(1)}\right)^{\top}R_{k+1}^{-1}{H}_k^{(1)}{\Gamma}^{(1)}_{k+1|k}\right)^{-1}{A}_k^{(1)}{\Gamma}^{(1)}_{k|k}\right\|\\ %\le&\left\|{\Gamma}^{(1)}_{k|k}\right\|+\left\|{\Gamma}^{(1)}_{k|k}\left({A}_k^{(1)}\right)^{\top}\left({Q}_{k}^{(1)}+{\Gamma}^{(1)}_{k+1|k}\left({H}_k^{(1)}\right)^{\top}R_{k+1}^{-1}{H}_k^{(1)}{\Gamma}^{(1)}_{k+1|k}\right)^{-1}{A}_k^{(1)}{\Gamma}^{(1)}_{k|k}\right\|\\ < &\tilde{c}_2+q_1^{-1}\left\|{\Gamma}^{(1)}_{k|k}\left({A}_k^{(1)}\right)^{\top}{A}_k^{(1)}{\Gamma}^{(1)}_{k|k}\right\|\\ \le&\tilde{c}_2+q_1^{-1}\left\|{\Gamma}^{(1)}_{k|k}\right\|\left\|\left({A}_k^{(1)}\right)^{\top}{A}_k^{(1)}\right\|\left\|{\Gamma}^{(1)}_{k|k}\right\|\\ < &\tilde{c}_2+q_1^{-1}\tilde{c}_2^{2}\tilde{a}_3, \end{split} \end{equation*} $

    with $\tilde{a}_3$ defined as

    $ \begin{align*} \tilde{a}_3 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\sigma_{\max}\left({A}_k^{(1)}\right), \end{align*} $

    and $\sigma_{\max}(M)$ is the maximum singular value of matrix $M$ . It follows that for all $k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]$ , the Lyapunov function ${V}^{\text{(1)}}_k$ satisfies

    $ \begin{align*} \tilde{c}_2^{-1}\left\|{\bar{\eta}}^{(1)}_{k|k}\right\|^2 < {V}^{\text{(1)}}_k < \tilde{c}_1^{-1}\left\|{\bar{\eta}}^{(1)}_{k|k}\right\|^2, \quad \text{and} \quad {V}^{\text{(1)}}_{k+1}-{V}^{\text{(1)}}_k < -\tilde{c}_3\left\|{\bar{\eta}}^{(1)}_{k|k}\right\|^2, \end{align*} $

    where

    $ \begin{align*} \tilde{c}_3 = \left(\tilde{c}_2+q_1^{-1}\tilde{c}_2^{2}\tilde{a}_3\right)^{-1}. \end{align*} $

    Hence, for all $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ , the 2-norm of the mean estimation error of the observable subsystem satisfies

    $ \begin{equation}\label{eq:normerror_o} \begin{split} \left\|{\bar{\eta}}^{(1)}_{k|k}\right\|& < \left(\tilde{c}_2{V}^{\text{(1)}}_k\right)^{\frac{1}{2}}\\ & < \left(\tilde{c}_2{V}_{{k}_0^{\text{U}}+1}^{\text{(1)}}\left(1-\tilde{c}_3\tilde{c}_1\right)^{k-{k}_0^{\text{U}}-1}\right)^{\frac{1}{2}}\\ & < \left(\frac{\tilde{c}_2}{\tilde{c}_1}\left\|{\bar{\eta}}^{(1)}_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}+1}\right\|^2\left(1-\tilde{c}_3\tilde{c}_1\right)^{k-{k}_0^{\text{U}}-1}\right)^{\frac{1}{2}}\\ & = \left(\frac{\tilde{c}_2}{\tilde{c}_1}\right)^{\frac{1}{2}}\left\|{\bar{\eta}}^{(1)}_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}+1}\right\|\left(\left(1-\tilde{c}_3\tilde{c}_1\right)^{\frac{1}{2}}\right)^{k-{k}_0^{\text{U}}-1}. \end{split} \end{equation} $ (54)

    Moreover, for all $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ , the mean estimation error of the observable subsystem is given as follows:

    $ \begin{align}\label{eq:error_evolve_o} {\bar{\eta}}^{(1)}_{k|k} = \prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}{\Upsilon}^{(1)}_{\kappa}{\bar{\eta}}^{(1)}_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}+1}, \end{align} $ (55)

    where ${\Upsilon}^{(1)}_{\kappa} = {\Gamma}^{(1)}_{\kappa|\kappa}\left({\Gamma}_{\kappa|\kappa-1}^{(1)}\right)^{-1}{A}_{\kappa-1}^{(1)}$. Combining (54) and (55), it is concluded from the definition of the induced matrix norm that

    $ \begin{align}\label{eq:error_evolve_rate} \left\|\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}{\Upsilon}^{(1)}_{\kappa}\right\|\le \left(\frac{\tilde{c}_2}{\tilde{c}_1}\right)^{\frac{1}{2}}\left(\left(1-\tilde{c}_3\tilde{c}_1\right)^{\frac{1}{2}}\right)^{k-{k}_0^{\text{U}}-1}, \quad \text{for $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ }. \end{align} $ (56)

    Step 4. Vectorizing both sides of (48) yields that for $k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]$ ,

    $ \begin{align*} \textrm{vec}\left\{{\Gamma}^{(12)}_{k+1|k}\right\} = &\left({A}^{(2)}_k\otimes{\Upsilon}^{(1)}_{k}\right)\textrm{vec}\left\{{\Gamma}^{(12)}_{k|k-1}\right\}\\ &+\textrm{vec}\left\{{\Upsilon}^{(1)}_{k}{\Gamma}^{(1)}_{k|k-1}\left({A}^{(21)}_k\right)^{\top}\right\}+\textrm{vec}\left\{{Q}_k^{(12)}\right\}, \end{align*} $

    which implies that for all $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ ,

    $ \begin{align} \textrm{vec}\left\{{\Gamma}^{(12)}_{k+1|k}\right\} = \left(\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}\left({A}^{(2)}_{\kappa}\otimes{\Upsilon}^{(1)}_{\kappa}\right)\right)\textrm{vec}\left\{{\Gamma}^{(12)}_{{k}_0^{\text{U}}+2|{k}_0^{\text{U}}+1}\right\}+\Phi_k, \label{eq:Gamma12} \end{align} $ (57)

    where

    $ \begin{align*} \Phi_k = &\textrm{vec}\left\{{\Upsilon}^{(1)}_{k}{\Gamma}^{(1)}_{k|k-1}\left({A}^{(21)}_k\right)^{\top}+{Q}_k^{(12)}\right\}\\ &+\left({A}^{(2)}_k\otimes{\Upsilon}^{(1)}_{k}\right)\textrm{vec}\left\{{\Upsilon}^{(1)}_{k-1}{\Gamma}^{(1)}_{k-1|k-2}\left({A}^{(21)}_{k-1}\right)^{\top}+{Q}_{k-1}^{(12)}\right\}\\ &+\left({A}^{(2)}_k\otimes{\Upsilon}^{(1)}_{k}\right)\left({A}^{(2)}_{k-1}\otimes{\Upsilon}^{(1)}_{k-1}\right)\textrm{vec}\left\{{\Upsilon}^{(1)}_{k-2}{\Gamma}^{(1)}_{k-2|k-3}\left({A}^{(21)}_{k-2}\right)^{\top}+{Q}_{k-2}^{(12)}\right\}\\ &+\cdots+\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+3}\left({A}^{(2)}_{\kappa}\otimes{\Upsilon}^{(1)}_{\kappa}\right)\textrm{vec}\left\{{\Upsilon}^{(1)}_{{k}_0^{\text{U}}+2}{\Gamma}^{(1)}_{{k}_0^{\text{U}}+2|{k}_0^{\text{U}}+1}\left({A}^{(21)}_{{k}_0^{\text{U}}+2}\right)^{\top}+{Q}_{{k}_0^{\text{U}}+2}^{(12)}\right\}. \end{align*} $
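
    The vectorization step above relies on the standard identity $\textrm{vec}\left\{AXB^{\top}\right\} = \left(B\otimes A\right)\textrm{vec}\left\{X\right\}$ for the column-stacking $\textrm{vec}\left\{\cdot\right\}$ operator; the following one-line numerical check uses arbitrary synthetic matrices.

```python
import numpy as np

rng = np.random.default_rng(2)
Ups, G12, A2 = rng.random((4, 4)), rng.random((4, 3)), rng.random((3, 3))
vec = lambda M: M.flatten(order="F")          # column-stacking vectorization
lhs = vec(Ups @ G12 @ A2.T)
rhs = np.kron(A2, Ups) @ vec(G12)
print(np.allclose(lhs, rhs))                  # expected: True
```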

    The explicit form of ${A}^{(2)}_{k}\otimes{\Upsilon}^{(1)}_{k}$ reads

    $ \begin{align*} {A}^{(2)}_{k}\otimes{\Upsilon}^{(1)}_{k} = \left( \begin{array}{ccc} {A}^{(2)}_{k}(1, 1){\Upsilon}^{(1)}_{k}&\cdots&{A}^{(2)}_{k}(1, d_2){\Upsilon}^{(1)}_{k}\\ \vdots&\ddots&\vdots\\ {A}^{(2)}_{k}(d_2, 1){\Upsilon}^{(1)}_{k}&\cdots&{A}^{(2)}_{k}(d_2, d_2){\Upsilon}^{(1)}_{k}\\ \end{array} \right), \end{align*} $

    hence

    $ \begin{align*} \prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}\left({A}^{(2)}_{\kappa}\otimes{\Upsilon}^{(1)}_{\kappa}\right) = \left( \begin{array}{ccc} \vartheta_{k}(1, 1)\prod\nolimits_{\kappa = k}^{{k}_0^{\text{U}}+2}{\Upsilon}^{(1)}_{\kappa}&\cdots&\vartheta_{k}(1, d_2)\prod\nolimits_{\kappa = k}^{{k}_0^{\text{U}}+2}{\Upsilon}^{(1)}_{\kappa}\\ \vdots&\ddots&\vdots\\ \vartheta_{k}(d_2, 1)\prod\nolimits_{\kappa = k}^{{k}_0^{\text{U}}+2}{\Upsilon}^{(1)}_{\kappa}&\cdots&\vartheta_{k}(d_2, d_2)\prod\nolimits_{\kappa = k}^{{k}_0^{\text{U}}+2}{\Upsilon}^{(1)}_{\kappa}\\ \end{array} \right), \end{align*} $

    where $\vartheta_{k}(i, j)$ is the $(i, j)^{\textrm{th}}$ element of $\prod_{\kappa = k}^{{k}_0^{\text{U}}+2}{A}^{(2)}_{\kappa}$ . Define

    $ \begin{align*} P = \left( \begin{array}{cc} {\bf{0}}_{d_2, n-d_2}&I_{d_2}\\ \end{array} \right), \end{align*} $

    and given that the top right block of ${A}^{\text{(t)}}_k$ is a zero matrix ${\bf{0}}_{d_1, d_2}$ (as shown in (45)), it can be concluded that

    $ \begin{align} \nonumber\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}{A}^{(2)}_{\kappa} = P\left(\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}{A}^{\text{(t)}}_{\kappa}\right)P^{\top}& = P\left(\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}UA_{\kappa}U^{\top}\right)P^{\top}\\ & = PU\left(\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}A_{\kappa}\right)U^{\top}P^{\top}.\label{eq:prod_A_transform} \end{align} $ (58)

    Based on Lemma 2, the $(i, j)^{th}$ element of $\prod_{\kappa = k}^{{k}_0^{\text{U}}+2}A_{\kappa}$ satisfies

    $ \begin{align*} 0\le \left(\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}A_{\kappa}\right)\left(i, j\right) \le 1, \quad \text{for all $i, j\in \{1, \cdots, n\}$ .} \end{align*} $

    Hence, it can be derived from (58) that

    $ \begin{align*} \left\|\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}{A}^{(2)}_{\kappa}\right\|_{\infty}\le \left\|U\left(\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}A_{\kappa}\right)U^{\top}\right\|_{\infty}\le \sqrt{n} \left\|U\left(\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}A_{\kappa}\right)U^{\top}\right\| = \sqrt{n}\left\|\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}A_{\kappa}\right\|\le n^2. \end{align*} $

    Consequently,

    $ \begin{split} \left\|\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}\left({A}^{(2)}_{\kappa}\otimes{\Upsilon}^{(1)}_{\kappa}\right)\right\|_{\infty}&\le n^2\left\|\prod\limits_{\kappa = k}^{{k}_0^{\text{U}}+2}{\Upsilon}^{(1)}_{\kappa}\right\|_{\infty}\\ &\le n^2\sqrt{d_1} \left(\frac{\tilde{c}_2}{\tilde{c}_1}\right)^{\frac{1}{2}}\left(\left(1-\tilde{c}_3\tilde{c}_1\right)^{\frac{1}{2}}\right)^{k-{k}_0^{\text{U}}-1}\\ & = \tilde{t}\tilde{q}^{k-{k}_0^{\text{U}}-1}, \end{split} $ (59)

    where the last inequality is due to (56). Recall from (53) that $\tilde{c}_1 I<{\Gamma}_{k|k}^{(1)}<\tilde{c}_2 I$ for $k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]$ . Since

    $ \begin{equation*} {\Gamma}_{k|k-1}^{(1)} = {A}_k^{(1)}{\Gamma}^{(1)}_{k-1|k-1}\left({A}_k^{(1)}\right)^{\top}+{Q}^{(1)}_{k-1}, \end{equation*} $

    it follows that $\left\|{\Gamma}_{k|k-1}^{(1)}\right\|_{\infty}\le \sqrt{d_1}\left(\tilde{a}_1\tilde{a}_2\tilde{c}_2+q_2\right)$ for all $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ , and the prior error covariance of the observable subsystem satisfies $q_1I<{\Gamma}_{k|k-1}^{(1)}$ for $k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]$ . As a consequence,

    $ \begin{equation}\label{eq:bound_a_upsilon} \begin{split} \left\|{\Upsilon}^{(1)}_{k}\right\|_{\infty}\le\sqrt{d_1}\left\|{\Gamma}^{(1)}_{k|k}\left({\Gamma}_{k|k-1}^{(1)}\right)^{-1}\right\| < \sqrt{d_1}\tilde{c}_2q_1^{-1}, \quad \text{for $k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]$ .} \end{split} \end{equation} $ (60)

    Define $\tilde{a}_4$ as

    $ \begin{align*} \tilde{a}_4 = \max\limits_{k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]}\left\|{A}_k^{(21)}\right\|_{\infty}, \end{align*} $

    it follows that6 for $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ ,

    6Recall that for matrix $M\in\mathbb{R}^{p\times q}$, $\left\|M\right\|_{\max} = \max_{1\le r\le p, 1 \le c\le q}\left|M(r, c)\right|\le \left\|M\right\|_{2}$.

    $ \begin{split}\nonumber &\left\|\textrm{vec}\left\{{\Upsilon}^{(1)}_{k}{\Gamma}^{(1)}_{k|k-1}\left({A}^{(21)}_k\right)^{\top}\right\}+\textrm{vec}\left\{{Q}_k^{(12)}\right\}\right\|_{\infty}\\ \le& \left\|{\Upsilon}^{(1)}_{k}{\Gamma}^{(1)}_{k|k-1}\left({A}^{(21)}_k\right)^{\top}\right\|_{\infty}+\left\|{Q}_k^{(12)}\right\|_{\max}\\ < &\sqrt{d_1}\tilde{c}_2q_1^{-1}\tilde{a}_4\left\|{\Gamma}^{(1)}_{k|k-1}\right\|_{\infty}+q_2\\ < & d_1\tilde{c}_2\tilde{a}_4\left(\tilde{a}_1\tilde{a}_2\tilde{c}_2+q_2\right)q_1^{-1}+q_2\\ = &\tilde{p}. \end{split} $ (61)

    Substituting (59) and (61) into (57), we obtain that for $k\in({k}_0^{\text{U}}+1, {k}_1^{\text{U}}]$ ,

    $ \begin{align*} \left\|\textrm{vec}\left\{{\Gamma}^{(12)}_{k+1|k}\right\}\right\|_{\infty}\le {\rm{b}}\left(k\right)\triangleq\tilde{t}\tilde{q}^{k-{k}_0^{\text{U}}-1}\left\|\textrm{vec}\left\{{\Gamma}^{(12)}_{{k}_0^{\text{U}}+2|{k}_0^{\text{U}}+1}\right\}\right\|_{\infty}+\tilde{p}+\tilde{t}\tilde{p}\sum\limits_{\ell = 1}^{k-{k}_0^{\text{U}}-2}\tilde{q}^{\ell}, \end{align*} $

    where ${\rm{b}}(k)$ is either a non-increasing or a non-decreasing function of $k$ . Hence, we obtain that for $k\in({k}_0^{\text{U}}, {k}_1^{\text{U}}]$ ,

    $ \begin{align*} &\left\|\textrm{vec}\left\{{\Gamma}^{(12)}_{k+1|k}\right\}\right\|_{\infty}\\ \le&\max\left\{\left\|\textrm{vec}\left\{{\Gamma}^{(12)}_{{k}_0^{\text{U}}+2|{k}_0^{\text{U}}+1}\right\}\right\|_{\infty}, \quad {\rm{b}}({k}_0^{\text{U}}+2), \quad \lim\limits_{k\rightarrow \infty}{\rm{b}}(k)\right\}\\ \le&\max\left\{\left\|\textrm{vec}\left\{{\Gamma}^{(12)}_{{k}_0^{\text{U}}+2|{k}_0^{\text{U}}+1}\right\}\right\|_{\infty}, \quad \tilde{t}\tilde{q}\left\|\textrm{vec}\left\{{\Gamma}^{(12)}_{{k}_0^{\text{U}}+2|{k}_0^{\text{U}}+1}\right\}\right\|_{\infty}+\tilde{p}, \quad \frac{\tilde{t}\tilde{p}\tilde{q}}{1-\tilde{q}}+\tilde{p}\right\}, \end{align*} $

    where

    $ \begin{equation*} \begin{split} \left\|\textrm{vec}\left\{{\Gamma}^{(12)}_{{k}_0^{\text{U}}+2|{k}_0^{\text{U}}+1}\right\}\right\|_{\infty}&\le \left\|{\Gamma}^{(12)}_{{k}_0^{\text{U}}+2|{k}_0^{\text{U}}+1}\right\|_{\infty}\\ & < \sqrt{n}a_1a_2\left\|\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}+1}\right\|+\sqrt{n}q_2\\ & \le \sqrt{n}a_1a_2\left\|\Gamma_{{k}_0^{\text{U}}+1|{k}_0^{\text{U}}}\right\|+\sqrt{n}q_2\\ & < n\sqrt{n}\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|\left(a_1a_2\right)^2+n\sqrt{n}a_1a_2q_2+\sqrt{n}q_2\\ & = \tilde{\gamma}. \end{split} \end{equation*} $

    Also since

    $ \begin{align*} {K}_k^{(21)} = \left({\Gamma}_{k|k-1}^{(12)}\right)^{\top}\left({H}_k^{(1)}\right)^{\top}\left(R_k+{H}_k^{(1)}{\Gamma}_{k|k-1}^{(1)}\left({H}_k^{(1)}\right)^{\top}\right)^{-1}, \end{align*} $

    it follows that for $k\in({k}_0^{\text{U}}+1, k_1^{\text{U}}]$ ,

    $ \begin{align*} \left\|{K}_k^{(21)}\right\|_{\infty}&\le \frac{\sqrt{3}}{r_1}\left\|\left({\Gamma}_{k|k-1}^{(12)}\right)^{\top}\right\|_{\infty}\le \frac{d_1\sqrt{3}}{r_1}\max\left\{\tilde{\gamma}, \quad \tilde{t}\tilde{q}\tilde{\gamma}+\tilde{p}, \quad \frac{\tilde{t}\tilde{p}\tilde{q}}{1-\tilde{q}}+\tilde{p}\right\}. \end{align*} $

    Step 5. Combining Steps 1, 2 and 4, it can be concluded that for $k\in ({k}_0^{\text{U}}, {k}_1^{\text{U}}]$

    $ \begin{align*} &\left\|K_k\right\|_{\infty} = \left\|U^{\top}{K}^{\text{(t)}}_k\right\|_{\infty}\le n\left\|{K}^{\text{(t)}}_k\right\|_{\infty}\\ \le&\frac{\sqrt{3}d_1}{r_1}\max\left\{\frac{\sqrt{n}}{d_1}\left(\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|a_1a_2+q_2\right), \frac{1}{\sqrt{d_1}}\left(\tilde{a}_1\tilde{a}_2\tilde{c}_2+q_2\right), \tilde{\gamma}, \tilde{t}\tilde{q}\tilde{\gamma}+\tilde{p}, \frac{\tilde{t}\tilde{p}\tilde{q}}{1-\tilde{q}}+\tilde{p}\right\}\\ \triangleq&{\rm{k}}\left(\left\|\Gamma_{{k}_0^{\text{U}}|{k}_0^{\text{U}}}\right\|\right), \end{align*} $

    which completes the proof.

    [1] B. Anderson and J. Moore, Optimal Filtering, Prentice-Hall, inc, Englewood Cliffs, N. J., 1979.
    [2] On sequential data assimilation for scalar macroscopic traffic flow models. Physica D: Nonlinear Phenomena (2012) 241: 1421-1440.
    [3] Numerical approximations of a traffic flow model on networks. Networks and Heterogeneous Media (2006) 1: 57-84.
    [4] C. Chen, Linear System Theory and Design, 3rd edition, Oxford University Press, 1999.
    [5] H. Chen and H. A. Rakha, Prediction of dynamic freeway travel times based on vehicle trajectory construction, in "Proceedings of the 15th IEEE Conference on Intelligent Transportation Systems", (2012), 576-581. doi: 10.1109/ITSC.2012.6338825
    [6] Quality of traffic observability on highways with Lagrangian sensors. IEEE Transactions on Automation Science and Engineering (2018) 15: 761-771.
    [7] The cell transmission model: a dynamic representation of highway traffic consistent with the hydrodynamic theory. Transportation Research Part B: Methodological (1994) 28: 269-287.
    [8] The cell transmission model, part Ⅱ: Network traffic. Transportation Research Part B: Methodological (1995) 29: 79-93.
    [9] M. Garavello, K. Han and B. Piccoli, Models for Vehicular Traffic on Networks, American Institute of Mathematical Sciences, Springfield, MO, 2016.
    [10] M. Garavello and B. Piccoli, Traffic Flow on Networks, American Institute of Mathematical Sciences, Springfield, MO, 2006.
    [11] A difference method for the numerical calculation of discontinuous solutions of hydrodynamic equations. Mathematics Sbornik (1959) 47: 271-306.
    [12] Continuous-time link-based kinematic wave model: Formulation, solution existence, and well-posedness. Transportmetrica B: Transport Dynamics (2016) 4: 187-222.
    [13] Evaluation of traffic data obtained via GPS-enabled mobile phones: The mobile century field experiment. Transportation Research Part C: Emerging Technologies (2010) 18: 568-583.
    [14] Modeling, simulation, and optimization of traffic flow networks. SIAM Journal on Scientific Computing (2003) 25: 1066-1087.
    [15] A mathematical model of traffic flow on a network of unidirectional roads. SIAM Journal on Mathematical Analysis (1995) 26: 999-1017.
    [16] A stochastic model of traffic flow: Gaussian approximation and estimation. Transportation Research Part B: Methodological (2013) 47: 15-41.
    [17] A. H. Jazwinski, Stochastic Process and Filtering Theory, Academic Press, Cambridge, MA, 1970.
    [18] Continuous kinematic wave models of merging traffic flow. Transportation Research Part B: Methodological (2010) 44: 1084-1103.
    [19] A Riemann solver for a system of hyperbolic conservation laws at a general road junction. Transportation Research Part B: Methodological (2017) 98: 21-41.
    [20] On the distribution schemes for determining flows through a merge. Transportation Research Part B: Methodological (2003) 37: 521-540.
    [21] H. Khalil, Nonlinear Systems, 3rd edition, Prentice Hall, 2002.
    [22] J. Lebacque, Intersection modeling, application to macroscopic network traffic flow models and traffic management, in "Traffic and Granular Flow'03", Springer, (2005), 261-278. doi: 10.1007/3-540-28091-X_26
    [23] J.-P. Lebacque, The Godunov scheme and what it means for first order traffic flow models, in International Symposium on Transportation and Traffic Theory, (1996), 647-677.
    [24] Y. Li, C. G. Claudel, B. Piccoli and D. B. Work, A convex formulation of traffic dynamics on transportation networks, SIAM J. Appl. Math., 77 (2017), 1493-1515, arXiv: 1702.03908. doi: 10.1137/16M1074795
    [25] M. Lighthill and G. Whitham, On kinematic waves. Ⅱ) a theory of traffic flow on long crowded roads, Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 229 (1955), 317-345. doi: 10.1098/rspa.1955.0089
    [26] Freeway traffic estimation within particle filtering framework. Automatica (2007) 43: 290-300.
    [27] Parallelized particle and Gaussian sum particle filters for large-scale freeway traffic systems. IEEE Transactions on Intelligent Transportation Systems (2012) 13: 36-48.
    [28] I. Morarescu and C. Canudas de Wit, Highway traffic model-based density estimation, in Proceedings of the American Control Conference, 3 (2011), 2012-2017.
    [29] F. Morbidi, L. L. Ojeda, C. Canudas de Wit and I. Bellicot, A new robust approach for highway traffic density estimation, in Proceedings of the 13th European Control Conference, (2014), 1-6. doi: 10.1109/ECC.2014.6862333
    [30] Piecewise-linearized cell transmission model and parameter calibration methodology. Transportation Research Record (2006) 1965: 183-191.
    [31] R. Olfati-Saber, Kalman-consensus filter: optimality, stability, and performance, in Proceedings of the 48th IEEE Conference on Decision and Control, (2009), 7036-7042. doi: 10.1109/CDC.2009.5399678
    [32] Shock waves on the highway. Operations Research (1956) 4: 42-51.
    [33] On the convergence of the Godounov's scheme for first order quasi linear equations. Proceedings of the Japan Academy (1976) 52: 488-491.
    [34] Traffic state estimation on highway: A comprehensive survey. Annual Reviews in Control (2017) 43: 128-151.
    [35] X. Sun, L. Munoz and R. Horowitz, Highway traffic state estimation using improved mixture Kalman filters for effective ramp metering control, in Proceedings of the 42nd IEEE Conference on Decision and Control, (2003), 6333-6338. doi: 10.1109/CDC.2003.1272322
    [36] Y. Sun, A Distributed Local Kalman Consensus Filter for Traffic Estimation: Design, Analysis and Validation, Master's thesis, University of Illinois at Urbana-Champaign, 2015. doi: 10.1109/CDC.2014.7040406
    [37] Scaling the Kalman filter for large-scale traffic estimation. IEEE Transactions on Control of Network Systems (2017) 1-1.
    [38] State estimation for polyhedral hybrid systems and applications to the Godunov scheme for highway traffic estimation. IEEE Transactions on Automatic Control (2015) 60: 311-326.
    [39] C. Vivas, S. Siri, A. Ferrara, S. Sacone, G. Cavanna and F. R. Rubio, Distributed consensus-based switched observers for freeway traffic density estimation, in Proceedings of the 54th IEEE Conference on Decision and Control, (2015), 3445-3450. doi: 10.1109/CDC.2015.7402672
    [40] Efficient multiple model particle filtering for joint traffic state estimation and incident detection. Transportation Research Part C: Emerging Technologies (2016) 71: 521-537.
    [41] Real-time freeway traffic state estimation based on extended Kalman filter: A general approach. Transportation Research Part B: Methodological (2005) 39: 141-167.
    [42] A traffic model for velocity data assimilation. Applied Mathematics Research eXpress (2010) 2010: 1-35.
    [43] Real-time Lagrangian traffic state estimator for freeways. IEEE Transactions on Intelligent Transportation Systems (2012) 13: 59-70.
  • © 2018 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
