Research article

Intelligence and global bias in the stock market

  • Trade is one of the essential features of human intelligence, and the securities market is its ultimate expression. The fundamental indicators of stocks carry information about the effects of noise and bias on stock prices; however, it is generally hard to distinguish between the two. In this article, I present the fundamentals hypothesis based on rational expectations and detect the global bias components from actual fundamental indicators using a log-normal distribution model built on that hypothesis. The analysis shows that the biases generally exhibit the same characteristics, strongly supporting the theory. Notably, the positive price to cash flows from investing activities ratio is a proxy for the fundamentals. The reason is simple: "Cash is a fact, and profit is an opinion." That is, the opinions of management and accounting add noise to the fundamentals. As a result, we obtain the Kesten process and the Pareto distribution. This result means that the market recognizes this noise and exhibits a stable global bias in the stock market.

    Citation: Kazuo Sano. Intelligence and global bias in the stock market[J]. Data Science in Finance and Economics, 2023, 3(2): 184-195. doi: 10.3934/DSFE.2023011




    This work is motivated by networked control systems (NCS), or remote control systems, where the medium connecting the controller and the actuator causes random packet delays or dropouts. This type of control system is anticipated to have vast applications; the reader is referred to the review paper [1]. The common features of NCS are random packet delays or dropouts, competition of multiple nodes in the network, data quantization, etc., which make the modeling and stability analysis very challenging. Markovian regime switching models have proved very successful in modeling NCS [2,3]. However, the stability of such systems remains a challenge. Most stability criteria are given as complicated LMI conditions [4,5] via the Lyapunov function approach, or via dwell time [6], or delay Riccati equations [7]. A stability condition using hybrid system analysis is found in [8], which is again a sufficient condition and is not easy to verify. A neat necessary and sufficient condition is in great need.

    In order to overcome the constraints of NCS, such as random packet delays or dropouts, it is natural to estimate the system state and compensate for packet disorder [9]. It is believed that better system performance and stability can be achieved in this way. Predictive control has been effectively applied to NCS [10,11,12,13]. This control method generates a sequence of future control variables, which can be used to compensate for packet disorder or dropouts [14], and to estimate the unknown system state as well [15,16]. However, as mentioned earlier, the stability conditions are usually hard to verify. To analyze the stability issue, one must have a thorough understanding of the dynamics of NCS with random time delays and state estimation.

    In this paper we propose a predictive control method for NCS with random packet delays and state estimation/compensation, and clearly expose the structure of this system. This control method has a much smaller computational burden than the classical predictive control method. The system is then modeled as a regime switching system, and an upper bound on the number of regimes is provided. By the word "scale" we mean the number of distinct regimes (denoted by $M$), and the size of the minimal state representation in each regime (denoted by $d$). We then provide a concise stability criterion that is both sufficient and necessary. Both $M$ and $d$ play a role in the validation of this criterion (see Section 4).

    Throughout this paper we use $x_k, u_k$ to denote the system state on the actuator side and the control variable, respectively, where $k$ is the time index. Note that this is not the state representation of the overall system. The controller and actuator are connected through a network. The random time delay in the actuator-to-controller channel is called the backward delay, denoted by $\tau_1$. The delay in the controller-to-actuator channel is called the forward delay, denoted by $\tau_2$. Markov models have been shown to be very general for the transition of these random delays [13], which we shall adopt.

    Unlike most NCSs where the objective systems on the actuator side are time invariant, here we assume that the objective system is a regime switching system with Markovian jumps, and we call it a Markov Jump System (MJS). Applications of switching systems in engineering can be found, for example, in [17,18,19]. We let $i_k \in \{1,2,\dots,m\}$ denote the system mode/regime at time $k$. $P_m$, of size $m \times m$, denotes the transition probability matrix among the $m$ modes. Entries of $P_m$ are denoted by $\lambda_{ij}$, $i,j \in \{1,2,\dots,m\}$. Thus, this NCS has another layer of complexity.

    We begin with simple and practically reasonable assumptions on this system.

    The actuator receives signals with random delays.

    The actuator receives control signals from the controller and performs the required actions. Due to the random packet delays, the actuator might receive no signal, receive a packet that is randomly delayed, or receive multiple packets. If more than one packet arrives at the same time, the actuator executes the most recent one.

    Obsolete signals are discarded.

    If, at a certain time instant, a control variable has been applied, then any control signal older than it that arrives at a later time will be discarded. For example, if $u_5$ has been applied to the system at the present time and at the next time step $u_4$ arrives, then $u_4$ is discarded.

    The delay times are bounded.

    The backward time delay $\tau_1$ and the forward time delay $\tau_2$ are independent and bounded. This can be done by designing a transmission protocol such that, when a packet has been delayed for a certain amount of time, it is upgraded to the highest priority at a relay station and is transmitted immediately. Therefore we assume that $\tau_1 \in \{0,1,\dots,T_1\}$ and $\tau_2 \in \{0,1,2,\dots,T_2\}$.

    The controller sends out signals at every discrete instant.

    The controller receives state feedback from the actuator via randomly delayed packets, and the controller estimates the future system state and sends out a new control signal. If no state feedback is received at a certain time instant, the controller simply sends a zero control signal.

    Past control signals are recorded.

    The controller keeps a record of past control signals that have been sent. This information is used to calculate a new control signal whenever a feedback packet arrives.

    Packet size is limited.

    The network bandwidth is very limited so that each time the controller and actuator just send out one small packet to the other party. That means, sending out a large packet containing a long sequence of controls or system states is not applicable. (Otherwise this system reduces to classical control system without time delay.)

    The controlled system on the actuator side becomes

    $$x_{k+1} = a(i_k)\,x_k + b^{(j)}(i_k)\,u_{k-\tau_2}, \tag{1.1}$$

    where $k$ is the present time, and the system dynamics $a(i_k), b^{(j)}(i_k)$ depend on the mode $i_k$. The superscript $j$ is an indicator such that if $j=0$, $b^{(j)}(i_k)=0$, which means no control signal is received; and if $j=1$, $b^{(j)}(i_k)=b(i_k)$, $i_k \in \{1,2,\dots,m\}$, meaning at least one (possibly delayed) control signal is received. In the complete state representation to be introduced shortly it can be seen that the information of $j$ is absorbed by a variable $C_2^{(k)}$. Recall that the control variable $u_k$ depends on the delayed feedback $x_{k-\tau_1}$ and $i_{k-\tau_1-1}$, and the controller does not know the system information between $k-\tau_1-1$ and $k$, except the past controls up to $u_{k-1}$. In what follows we shall show that this seemingly simple model actually has a very large scale.

    The NCS is a special dynamical system that might be easy to describe in words, but is hard to understand because its dynamics look chaotic. One must get down to the bottom and carry out a careful dissection to obtain a clear understanding. In what follows we shall do this work.

    We use the triplet $(C_2^{(k)}, \tau_2^{(k)}, i_k)$ to denote the situation at the actuator side at time $k$, where $C_2^{(k)} \in \{0,1,2,\dots,T_2\}$. Without the superscript $k$, $C_2, \tau_2, i$ just denote generic information, respectively.

    If no control signal arrives at time $k$ but one packet arrived at $k-1$, this is a new round of receiving no signal, hence $C_2^{(k)}=1$. At the same time we set $\tau_2^{(k)}$ in this triplet equal to its counterpart in the preceding packet, i.e., $\tau_2^{(k)}=\tau_2^{(k-1)}$. Thus the information of the last applied control index is recorded. If at the next time step, say $k+1$, no control signal arrives, $C_2^{(k+1)}=2$, $\tau_2^{(k+1)}=\tau_2^{(k)}$, and so forth. If a new packet arrives at time $k+2$, we set $C_2^{(k+2)}=0$ and $\tau_2^{(k+2)}$ equals the time delay of the control in this received packet. Recall that the forward time delay is bounded, so before or by the time $C_2+\tau_2=T_2$ a new control signal is guaranteed to arrive. Finally, $i_k$ denotes the mode of the system at time $k$. For example, $(3,\tau_2^{(k)},i_k)$ denotes the situation that for the past 3 time instants no control signal was received, and the last applied control was delayed by $\tau_2^{(k)}$ (the total delay at this time is $3+\tau_2^{(k)}$). $(0,2,i_k)$ denotes the situation that at least one control signal arrives, and $u_{k-2}$ is the most recent one (if more than one signal has arrived).

    The expression $(1,\tau_2^{(k)},i_k) \to (2,\tau_2^{(k+1)},i_{k+1})$ represents a state transition. It should be pointed out that the states $(C_2^{(k)},\tau_2^{(k)},i_k)$ do not have full reachability. For example, if the current state is $(0,0,i_k)$, the next one cannot be $(0,1,i_{k+1})$. That is because the state $(0,0,i_k)$ indicates that at time $k$ the control $u_k$ arrived with no delay and was applied to the system, while $(0,1,i_{k+1})$ means that at time $k+1$ the one-step-delayed control (which is again $u_k$) arrives and is to be applied. But this is a contradiction. The transition probability matrix for the transitions $(C_2^{(k)},\tau_2^{(k)}) \to (C_2^{(k+1)},\tau_2^{(k+1)})$ is denoted by $P_2$.
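    As a sanity check on this reachability rule, the short Python sketch below (my own illustration, not code from the paper) enumerates the admissible one-step successors of each pair $(C_2,\tau_2)$: a newly arrived control with delay $\tau_2'$ is accepted only if it is newer than the last applied control, i.e., $\tau_2'\le C_2+\tau_2$, and "no arrival" is possible only while $C_2+\tau_2<T_2$. For $T_2=2$ the output reproduces exactly the support of the matrix $P_2$ given later in (6.1).

```python
from itertools import product

def admissible_successors(C2, tau2, T2):
    """One-step successors of (C2, tau2): either a packet with delay d <= C2 + tau2
    arrives and is applied, giving (0, d), or no packet arrives, giving (C2 + 1, tau2),
    which is allowed only while C2 + tau2 < T2."""
    succ = [(0, d) for d in range(C2 + tau2 + 1)]
    if C2 + tau2 < T2:
        succ.append((C2 + 1, tau2))
    return succ

if __name__ == "__main__":
    T2 = 2
    states = [(c, t) for c, t in product(range(T2 + 1), repeat=2) if c + t <= T2]
    for s in states:
        print(s, "->", admissible_successors(*s, T2))
```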

    We use a similar mechanism to represent the situation at the controller side. The pair $(C_1^{(k)},\tau_1^{(k)})$ is constructed, where $C_1^{(k)} \in \{0,1,2,\dots,T_1\}$. If $C_1^{(k)}=0$, the controller receives a feedback packet from the actuator and the time delay in this packet is $\tau_1^{(k)}$, which is the most recent one if multiple packets have arrived. If $C_1^{(k)}>0$, no feedback packet arrives and $C_1^{(k)}$ is a counter recording the number of time steps with no signal, while $\tau_1^{(k)}$ equals its counterpart in the previously received packet. Because the backward time delay is bounded, before or by the time $C_1+\tau_1=T_1$ a feedback packet is guaranteed to arrive. The transition probability matrix for the transitions $(C_1^{(k)},\tau_1^{(k)}) \to (C_1^{(k+1)},\tau_1^{(k+1)})$ is denoted by $P_1$.

    Notice that the received information depends on the sequence of past states $(C_2^{(j)},\tau_2^{(j)},i_j)$, $j<k$, from the actuator side. Then we have the following proposition on the number of different regimes/scenarios of this control system.

    Proposition 2.1. An upper bound on the number of different regimes of this problem (denoted by $M$), or equivalently, an upper bound on the number of different scenarios of the system, is given by

    $$\frac{m^{2+T_1}(1+T_2)^{2+T_1}(2+T_2)}{2}\cdot\frac{(1+T_1)(2+T_1)}{2}. \tag{2.1}$$

    Proof. On the controller side we consider the pair $(C_1^{(k)},\tau_1^{(k)})$, where $C_1^{(k)},\tau_1^{(k)}$ both take values in $\{0,1,2,\dots,T_1\}$. Notice that any combination of $C_1^{(k)},\tau_1^{(k)}$ such that $C_1^{(k)}+\tau_1^{(k)} \le T_1$ is possible, thus there are $1+2+\cdots+(1+T_1)=(1+T_1)(2+T_1)/2$ different scenarios in total for the pair $(C_1^{(k)},\tau_1^{(k)})$.

    Similarly, on the actuator side, any pair $(C_2,\tau_2)$ satisfying $C_2+\tau_2 \le T_2$, $C_2,\tau_2 \in \{0,1,2,\dots,T_2\}$, is a possible scenario. There are $1+2+\cdots+(1+T_2)=(1+T_2)(2+T_2)/2$ different combinations. Furthermore, $i_k \in \{1,2,\dots,m\}$ also determines the system dynamics. So there are $m\big((1+T_2)T_2/2+1+T_2\big)=m(1+T_2)(2+T_2)/2$ different cases.

    Due to the feedback delay, a sequence of length $1+T_1$ of past system states $(C_2^{(j)},\tau_2^{(j)},i_j)$, $j=k-T_1-1, k-T_1, \dots, k-1$, must be maintained. Thus, a state $(C_1^{(k)},\tau_1^{(k)})=(0,\tau_1^{(k)})$ indicates that the controller receives the information $x_{k-\tau_1^{(k)}}$, $C_2^{(k-\tau_1^{(k)}-1)}$, $\tau_2^{(k-\tau_1^{(k)}-1)}$ and $i_{k-\tau_1^{(k)}-1}$ at time $k$.

    The controller then calculates and sends out a new control variable $u_k$ based on this received information. When, or whether, this control will be applied by the actuator depends on the situation $(C_2^{(k)},\tau_2^{(k)},i_k)$ at time $k$ (which the controller does not know), and this contributes another triplet to the representation. That means there should be a total of $2+T_1$ triplets $(C_2,\tau_2,i)$ in the representation of the system regime.

    However, recall that not all the transitions $(C_2^{(k)},\tau_2^{(k)},i_k) \to (C_2^{(k+1)},\tau_2^{(k+1)},i_{k+1})$ are possible. The sum $C_2^{(k)}+\tau_2^{(k)}$ (which is less than or equal to $T_2$) indicates the time index of the last applied control, and only controls newer than it can be accepted by the actuator. When the time index shifts to the next time instant, it automatically gives one more spot for a future control. Therefore, given a triplet $(C_2^{(k)},\tau_2^{(k)},i_k)$ in a sequence, there are at most $m(1+T_2)$ successors. Thus, given that the first triplet has $m(1+T_2)(2+T_2)/2$ possibilities, each of the remaining $1+T_1$ triplets has at most $m(1+T_2)$ possibilities. This contributes

    $$m(1+T_2)(2+T_2)/2\cdot\big[m(1+T_2)\big]^{1+T_1}=m^{2+T_1}(1+T_2)^{2+T_1}(2+T_2)/2.$$

    Now the proof is complete.
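    For concreteness, the bound (2.1) can be evaluated with a few lines of Python (an illustration of the formula only, not part of the original derivation):

```python
def regime_upper_bound(m, T1, T2):
    """Upper bound (2.1) on the number of regimes M:
    (1+T1)(2+T1)/2 admissible controller-side pairs (C1, tau1), times
    m(1+T2)(2+T2)/2 possibilities for the first actuator-side triplet, times
    at most m(1+T2) successors for each of the remaining 1+T1 triplets."""
    controller_pairs = (1 + T1) * (2 + T1) // 2
    first_triplet = m * (1 + T2) * (2 + T2) // 2
    later_triplets = (m * (1 + T2)) ** (1 + T1)
    return controller_pairs * first_triplet * later_triplets

# The example of Section 6 (m = 2, T1 = 1, T2 = 2) gives 1296.
print(regime_upper_bound(2, 1, 2))
```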

    It is clear that to represent the complete system scenario, we need $2+T_1$ triplets $(C_2,\tau_2,i)$ and a pair $(C_1,\tau_1)$. We call this the complete scenario/regime representation (R).

    In each scenario in (R), we need to represent the system dynamics. Firstly we write a column vector $z_k$ as follows:

    $$z_k=\big[x_k(i_k),\,x_{k-1}(i_{k-1}),\,\dots,\,x_{k-T_1}(i_{k-T_1}),\,u_{k-1},\,u_{k-2},\,\dots,\,u_{k-T_1-T_2}\big]^T, \tag{3.1}$$

    where the superscript T means transpose, then we get

    $$z_{k+1}=A(C_2^{(k)},\tau_2^{(k)},i_k)\,z_k+B(C_2^{(k)},\tau_2^{(k)},i_k)\,u_k, \tag{3.2}$$

    which is an extension of (1.1). Since each entry in (3.1) is necessary to describe the system dynamics in a certain regime, it is clear that $d$ (as mentioned in Section 1) equals the length of $z_k$. Notice that if $T_1=T_2=0$, there is no need to keep any control variable in $z_k$ (and $z_k$ reduces to $x_k$). The matrices $A,B$ are explained in the following.

    If $C_2^{(k)}>0$, meaning that no control signal is received at time $k$, we simply write

    $$A(C_2^{(k)},\tau_2^{(k)},i_k)=\begin{pmatrix} a(i_k) & 0 & \cdots & 0 & 0 & \cdots & 0\\ 1 & 0 & \cdots & 0 & 0 & \cdots & 0\\ & \ddots & & & & & \\ 0 & \cdots & 1 & 0 & 0 & \cdots & 0\\ 0\,(*) & \cdots & 0 & 0 & 0 & \cdots & 0\\ 0 & \cdots & 0 & 0 & 1 & \cdots & 0\\ & & & & & \ddots & \end{pmatrix},\qquad B(C_2^{(k)},\tau_2^{(k)},i_k)=\big[\,0\ \ 0\ \cdots\ 0\ \ 1\,(*)\ \ 0\ \cdots\ 0\,\big]^T,$$

    where $(*)$ marks the $(T_1+2)$-nd row. Indeed it corresponds to the entry $u_k$ in $z_{k+1}$ on the left side of (3.2). The information $C_2^{(k)}$ affects the scenario representation in (R) only and plays the role of a counter; it does not alter the system dynamics. $\tau_2^{(k)}$ records the delay in the last received control packet.

    If $C_2^{(k)}=0$, at least one control signal is received, and there are two cases. If $\tau_2^{(k)}=0$, $u_k$ arrives at the actuator side without delay, then $A(0,0,i_k)$ remains the same as $A(C_2^{(k)},\tau_2^{(k)},i_k)$ shown above, and $B(0,0,i_k)=\big[\,b(i_k)\ \ 0\ \cdots\ 0\ \ 1\,(*)\ \ 0\ \cdots\ 0\,\big]^T$.

    If $\tau_2^{(k)}>0$ while $C_2^{(k)}=0$, then

    $$A(0,\tau_2^{(k)},i_k)=\begin{pmatrix} a(i_k) & 0 & \cdots & b(i_k) & \cdots & 0\\ 1 & 0 & \cdots & 0 & \cdots & 0\\ & \ddots & & & & \\ 0\,(*) & \cdots & 0 & 0 & \cdots & 0\\ & & & \ddots & & \\ 0 & \cdots & 0 & 0 & 1 & 0\end{pmatrix},\qquad B(0,\tau_2^{(k)},i_k)=\big[\,0\ \ 0\ \cdots\ 0\ \ 1\,(*)\ \ 0\ \cdots\ 0\,\big]^T,$$

    where the position of $b(i_k)$ corresponds to the entry $u_{k-\tau_2^{(k)}}$ in $z_k$.
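    To make this block structure concrete, the following Python sketch builds $A$ and $B$ for a scalar plant (my own illustration; the paper's example uses a two-dimensional plant, which only changes the block sizes). The ordering of $z_k$ follows (3.1) with zero-based indexing, so the $(*)$ row sits at index $T_1+1$.

```python
import numpy as np

def build_AB(a_i, b_i, C2, tau2, T1, T2):
    """Regime matrices A, B of (3.2) for a scalar plant, with
    z_k = [x_k, ..., x_{k-T1}, u_{k-1}, ..., u_{k-T1-T2}]^T (assumes T1 + T2 >= 1)."""
    d = (T1 + 1) + (T1 + T2)
    A = np.zeros((d, d))
    B = np.zeros((d, 1))
    A[0, 0] = a_i                      # x_{k+1} = a(i_k) x_k + ...
    if C2 == 0 and tau2 > 0:
        A[0, T1 + tau2] = b_i          # ... + b(i_k) u_{k-tau2}, taken from z_k
    elif C2 == 0 and tau2 == 0:
        B[0, 0] = b_i                  # ... + b(i_k) u_k, the newly received control
    for r in range(1, T1 + 1):         # shift the stored states
        A[r, r - 1] = 1.0
    B[T1 + 1, 0] = 1.0                 # the (*) row: store the newly issued u_k
    for r in range(T1 + 2, d):         # shift the stored controls
        A[r, r - 1] = 1.0
    return A, B
```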

    On the controller side we have

    $$u_k=-F\big(C_1^{(k)},\tau_1^{(k)},C_2^{(k-\tau_1^{(k)}-1)},\tau_2^{(k-\tau_1^{(k)}-1)},i_{k-\tau_1^{(k)}-1}\big)\,z_k. \tag{3.3}$$

    If $C_1^{(k)}>0$, no feedback packet is received at time $k$, so $F=0$ and $C_1^{(k)}$ plays the role of a counter. If $C_1^{(k)}=0$, the controller receives at least one feedback packet, among which the most recent one is $x_{k-\tau_1^{(k)}}$, together with the last applied control, whose index is $k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}-1$. Based on the information $x_{k-\tau_1^{(k)}}, i_{k-\tau_1^{(k)}-1}, \tau_2^{(k-\tau_1^{(k)}-1)}$ and the past controls

    $$u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}},\;u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}+1},\;\dots,\;u_{k-1}, \tag{3.4}$$

    the controller is able to estimate the system state $x_k$ and send out a new control variable $u_k$. It is thus given in the feedback form (3.3). Notice that when $C_1^{(k)}=0$, $\tau_1^{(k)}$ is the only random variable: it determines which past triplet $(C_2^{(j)},\tau_2^{(j)},i_j)$ in (R) is received by the controller. The controller also keeps a record of $\tau_1^{(k)}$ in the last received packet so that any future arrivals with indices older than $k-\tau_1^{(k)}$ will be discarded.

    Recall that if the controller does not receive any feedback packet at the present time $k$, it simply sends $u_k=0$. If the controller receives a packet that contains $x_{k-\tau_1^{(k)}}$ with the triplet $(C_2^{(k-\tau_1^{(k)}-1)},\tau_2^{(k-\tau_1^{(k)}-1)},i_{k-\tau_1^{(k)}-1})$, and if the time index $k-\tau_1^{(k)}$ is the newest, it accepts this packet. Then the controller estimates the system state $x_k$ and calculates a new control $u_k$ that is to be sent.

    We need to point out that there is much flexibility in calculating the new control variable, which is perhaps the most resourceful research topic in NCS. In this paper we present our control method as follows. Firstly, based on the most recent information $x_{k-\tau_1^{(k)}}, C_2^{(k-\tau_1^{(k)}-1)}, \tau_2^{(k-\tau_1^{(k)}-1)}, i_{k-\tau_1^{(k)}-1}$, where $k$ is the present time, the controller knows that the last control applied to the system was $u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}-1}$, and any controls older than it will be discarded by the actuator, while the controls $u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}},\,u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}+1},\,\dots,\,u_{k-1}$ are either on their way, or have been (possibly partially) applied by time $k$. Since this sequence of controls was sent out in order, it is reasonable to assume that they were or will be applied to the system one by one. A letter with a hat denotes the estimate of the corresponding quantity. Thus, $\hat{x}_{k-\tau_1^{(k)}+1}$ is estimated at

    $$\hat{x}_{k-\tau_1^{(k)}+1}=a(\hat{i}_{k-\tau_1^{(k)}})\,x_{k-\tau_1^{(k)}}+b(\hat{i}_{k-\tau_1^{(k)}})\,u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}}, \tag{3.5}$$

    and $\hat{x}_{k-\tau_1^{(k)}+2}$ is estimated at

    $$\hat{x}_{k-\tau_1^{(k)}+2}=a(\hat{i}_{k-\tau_1^{(k)}+1})\,\hat{x}_{k-\tau_1^{(k)}+1}+b(\hat{i}_{k-\tau_1^{(k)}+1})\,u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}+1}, \tag{3.6}$$

    and so forth until $u_{k-1}$ is applied to yield $\hat{x}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}$.

    One may notice that the system mode information $\hat{i}_{k-\tau_1^{(k)}},\hat{i}_{k-\tau_1^{(k)}+1},\dots,\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}-1}$ is not known by the controller. So the controller needs to predict a future path of modes.

    Based on the probability transition matrix $P_m$ and the received mode $i_{k-\tau_1^{(k)}-1}$, one can easily calculate the probability for each path to occur. We hope to determine one path over which we may estimate system states up to $\hat{x}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}$, so we apply a greedy search algorithm as follows. Starting from mode $i_{k-\tau_1^{(k)}-1}$, we search in $P_m$ for the next connected mode $\hat{i}_{k-\tau_1^{(k)}}$ so that $\lambda_{i_{k-\tau_1^{(k)}-1}\hat{i}_{k-\tau_1^{(k)}}}$ is the maximum in the row of $P_m$ corresponding to mode $i_{k-\tau_1^{(k)}-1}$. If there is a tie, we just randomly choose one mode. Then starting from $\hat{i}_{k-\tau_1^{(k)}}$ we search in $P_m$ for the next connected mode $\hat{i}_{k-\tau_1^{(k)}+1}$ so that $\lambda_{\hat{i}_{k-\tau_1^{(k)}}\hat{i}_{k-\tau_1^{(k)}+1}}$ is the maximum in the row of $P_m$ corresponding to mode $\hat{i}_{k-\tau_1^{(k)}}$. We repeat this process to obtain a sequence of future modes $\hat{i}_{k-\tau_1^{(k)}},\hat{i}_{k-\tau_1^{(k)}+1},\dots,\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}-1}$. With the past controls (3.4) we obtain an estimate of $\hat{x}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}$ via (3.5) and (3.6) recursively.
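    The greedy path search and the recursive estimate (3.5)-(3.6) are straightforward to code. The sketch below is my own illustration (ties in the argmax are broken deterministically rather than randomly, and the feedback state and past controls in the usage lines are placeholder values; the mode matrices are those of the later numerical example).

```python
import numpy as np

def greedy_mode_path(Pm, i_start, length):
    """From the current mode, repeatedly pick the successor whose transition
    probability is largest in the corresponding row of Pm."""
    path, i = [], i_start
    for _ in range(length):
        i = int(np.argmax(Pm[i]))
        path.append(i)
    return path

def predict_state(x_feedback, past_controls, a, b, mode_path):
    """Roll the estimate forward as in (3.5)-(3.6), applying the recorded past
    controls one by one along the predicted mode path."""
    x_hat = np.asarray(x_feedback, dtype=float)
    for u, i_hat in zip(past_controls, mode_path):
        x_hat = a[i_hat] @ x_hat + b[i_hat] * u
    return x_hat

# Usage (feedback value and past controls are placeholders):
Pm = np.array([[0.8, 0.2], [0.2, 0.8]])
a = [np.array([[1.2, 0.2], [1.0, 0.0]]), np.array([[0.7, 0.2], [1.0, 0.0]])]
b = [np.array([0.05, 0.1]), np.array([0.1, 0.3])]
modes = greedy_mode_path(Pm, i_start=0, length=3)
x_hat = predict_state([2.0, 2.0], past_controls=[0.0, 0.0, 0.0], a=a, b=b, mode_path=modes)
```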

    To calculate a new control $u_k$, we apply the idea of predictive control, which is also referred to as receding horizon control (RHC). Firstly we extend the prediction of future modes to get

    $$\hat{i}_{k-\tau_1^{(k)}},\,\hat{i}_{k-\tau_1^{(k)}+1},\,\dots,\,\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}-1},\,\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}},\,\dots,\,\hat{i}_{k-\tau_1^{(k)}+L-1}, \tag{3.7}$$

    where $L\;(>T_1+T_2)$ is the total length of the prediction. The future controls $\hat{u}_k,\hat{u}_{k+1},\dots,\hat{u}_{k+L-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}-1}$ are chosen to minimize the following cost function:

    $$J\big(\hat{x}_{k+\tau_2},\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}-1},\tau_1^{(k)},\tau_2^{(k-\tau_1^{(k)}-1)}\big)=\sum_{j=\tau_2^{(k-\tau_1^{(k)}-1)}+1}^{L-\tau_1^{(k)}}\hat{x}^T_{k+j}Q(k+j)\hat{x}_{k+j}+\sum_{j=0}^{L-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}-1}\hat{u}^T_{k+j}R(k+j)\hat{u}_{k+j}, \tag{3.8}$$

    where the $Q,R$'s are symmetric positive definite (SPD) matrices. A sequence of future controls is thus determined, and the control $u_k$ to be sent equals the first control variable in this sequence, i.e., $u_k=\hat{u}_k$.
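    Along the single predicted path, minimizing (3.8) is a standard finite-horizon, time-varying LQR problem that can be solved by a backward Riccati recursion. The sketch below is a minimal illustration of that step under simplifying assumptions (constant weights $Q, R$, and `A_seq`, `B_seq` holding the plant matrices $a(\hat{i}_j)$ and $b(\hat{i}_j)$, the latter shaped as column matrices, along the predicted modes); the paper itself relies on the iterative algorithm of [20], which may differ in its details.

```python
import numpy as np

def rhc_gains(A_seq, B_seq, Q, R):
    """Finite-horizon LQR along one deterministic predicted path:
    minimize sum_j x_j^T Q x_j + u_j^T R u_j by backward recursion.
    Only the first gain is used in practice, i.e. u_k = -gains[0] @ x_hat."""
    P = Q.copy()
    gains = [None] * len(A_seq)
    for j in reversed(range(len(A_seq))):
        A, B = A_seq[j], B_seq[j]
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # gain at step j
        P = Q + A.T @ P @ (A - B @ K)                       # Riccati update
        gains[j] = K
    return gains
```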

    Unlike the usual cost function, which is the expected value along all possible future paths, the cost function (3.8) is calculated along the deterministic path (3.7), and because the optimization is performed on one path, a huge amount of calculation is saved. To see this in more detail, let us consider a control method which chooses the future controls $\hat{u}_k,\hat{u}_{k+1},\dots,\hat{u}_{k+L-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}-1}$ to minimize the following expected cost function:

    $$J\big(\hat{x}_{k+\tau_2},\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}-1},\tau_1^{(k)},\tau_2^{(k-\tau_1^{(k)}-1)}\big)=\sum_{j=\tau_2^{(k-\tau_1^{(k)}-1)}+1}^{L-\tau_1^{(k)}}E\big[\hat{x}^T_{k+j}Q(k+j)\hat{x}_{k+j}\,\big|\,x_{k-\tau_1^{(k)}}\big]+\sum_{j=0}^{L-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}-1}\hat{u}^T_{k+j}R(k+j)\hat{u}_{k+j}, \tag{3.9}$$

    where $E[\,\cdot\,|\,\cdot\,]$ denotes conditional expectation. Since the system has $m$ modes, from $k-\tau_1^{(k)}$ to $k+L-\tau_1^{(k)}$ the calculation has to consider $m^L$ different paths. Furthermore, this calculation is based on the assumption that the past controls that have been sent will be applied by the actuator one by one in order, which is certainly not guaranteed in an NCS with random time delays. Even if this calculation is carried out to send the new control variable $u_k$, this control may not be applied by the actuator at the expected instant. Therefore, there appears to be no advantage in carrying out this heavy calculation to optimize the expected cost as in classical predictive control.

    But for our proposed simplified predictive control method, a natural question remains: what if the path we predicted does not actually occur? Remember that so far we have not used rolling optimization which is the true power of RHC. The calculation of future control sequences is repeated at each sample instant, and if a prediction error occurs, it will be corrected by re-planning at the next sample instant.

    Actually we can make this calculation faster by noticing that the control in RHC with the quadratic criterion (3.8) takes a linear feedback form, see, e.g., [20]. We borrow the iterative algorithm in [20] for the calculation of the state feedback control in the time-varying case, and obtain

    $$u_k=-F\big(\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}-1},\tau_1^{(k)},\tau_2^{(k-\tau_1^{(k)}-1)}\big)\,\hat{x}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}. \tag{3.10}$$

    Recall that $\hat{x}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}$ was determined by $x_{k-\tau_1^{(k)}}, i_{k-\tau_1^{(k)}-1}$ and the past controls (3.4), via (3.5) and (3.6) recursively. So we construct the following matrix

    $$D_{k-\tau_1^{(k)}+1}=\begin{pmatrix} 0 & \cdots & 0 & a(\hat{i}_{k-\tau_1^{(k)}}) & 0 & \cdots & 0 & b(\hat{i}_{k-\tau_1^{(k)}})\\ 0 & \cdots & 0 & 0 & 0 & \cdots & 1 & 0\\ \vdots & & & \vdots & & & & \vdots\\ 0 & \cdots & 0 & 0 & 1 & \cdots & 0 & 0 \end{pmatrix},$$

    where $a(\hat{i}_{k-\tau_1^{(k)}}), b(\hat{i}_{k-\tau_1^{(k)}})$ in the first row correspond to the entries $x_{k-\tau_1^{(k)}}$ and $u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}}$ in $z_k$, respectively. The entry 1 in the second row corresponds to the entry $u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}+1}$ in $z_k$, and so forth up to the entry $u_{k-1}$. Then

    $$D_{k-\tau_1^{(k)}+1}\,z_k=\big(\hat{x}_{k-\tau_1^{(k)}+1}\ \ u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}+1}\ \cdots\ u_{k-1}\big)^T$$

    by (3.5).

    Now we define

    $$D_{k-\tau_1^{(k)}+2}=\begin{pmatrix} a(\hat{i}_{k-\tau_1^{(k)}+1}) & b(\hat{i}_{k-\tau_1^{(k)}+1}) & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix},$$

    and get

    $$D_{k-\tau_1^{(k)}+2}\,D_{k-\tau_1^{(k)}+1}\,z_k=\big(\hat{x}_{k-\tau_1^{(k)}+2}\ \ u_{k-\tau_1^{(k)}-\tau_2^{(k-\tau_1^{(k)}-1)}+2}\ \cdots\ u_{k-1}\big)^T$$

    by (3.6). We continue this process until we reach $D_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}$, so that

    $$D_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}\cdots D_{k-\tau_1^{(k)}+1}\,z_k=\hat{x}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}. \tag{3.11}$$

    Substituting (3.11) in (3.10) yields

    $$u_k=-F\big(\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}-1},\tau_1^{(k)},\tau_2^{(k-\tau_1^{(k)}-1)}\big)\,D_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}\cdots D_{k-\tau_1^{(k)}+1}\,z_k.$$

    Since $\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}-1}$ is determined by $i_{k-\tau_1^{(k)}-1}$ via (3.7), and by $C_1^{(k)}, C_2^{(k-\tau_1^{(k)}-1)}$ as well, we obtain by (3.3)

    $$F\big(C_1^{(k)},\tau_1^{(k)},C_2^{(k-\tau_1^{(k)}-1)},\tau_2^{(k-\tau_1^{(k)}-1)},i_{k-\tau_1^{(k)}-1}\big)=F\big(\hat{i}_{k+\tau_2^{(k-\tau_1^{(k)}-1)}-1},\tau_1^{(k)},\tau_2^{(k-\tau_1^{(k)}-1)}\big)\,D_{k+\tau_2^{(k-\tau_1^{(k)}-1)}}\cdots D_{k-\tau_1^{(k)}+1}. \tag{3.12}$$

    In the next section we shall investigate the stability of this control algorithm.

    Putting (3.3) in (3.2) yields

    $$z_{k+1}=\Big[A(C_2^{(k)},\tau_2^{(k)},i_k)-B(C_2^{(k)},\tau_2^{(k)},i_k)\,F\big(C_1^{(k)},\tau_1^{(k)},C_2^{(k-\tau_1^{(k)}-1)},\tau_2^{(k-\tau_1^{(k)}-1)},i_{k-\tau_1^{(k)}-1}\big)\Big]z_k. \tag{4.1}$$

    Denote

    $$\begin{aligned} H &= H\big(C_2^{(k)},\tau_2^{(k)},i_k,C_1^{(k)},\tau_1^{(k)},C_2^{(k-\tau_1^{(k)}-1)},\tau_2^{(k-\tau_1^{(k)}-1)},i_{k-\tau_1^{(k)}-1}\big)\\ &= A(C_2^{(k)},\tau_2^{(k)},i_k)-B(C_2^{(k)},\tau_2^{(k)},i_k)\,F\big(C_1^{(k)},\tau_1^{(k)},C_2^{(k-\tau_1^{(k)}-1)},\tau_2^{(k-\tau_1^{(k)}-1)},i_{k-\tau_1^{(k)}-1}\big), \end{aligned}$$

    then we obtain the following dynamics

    $$z_{k+1}=H\,z_k, \tag{4.2}$$

    where $H$ depends on the representation $C_2^{(k)},\tau_2^{(k)},i_k,C_1^{(k)},\tau_1^{(k)},C_2^{(k-\tau_1^{(k)}-1)},\tau_2^{(k-\tau_1^{(k)}-1)},i_{k-\tau_1^{(k)}-1}$. This result also explains the scale estimate stated in (2.1). Now it can be seen that the system becomes a pure Markov Jump System in which the total number of regimes/scenarios is bounded by (2.1). Let $P$ be the transition probability matrix for the regimes in the complete representation (R). The sufficient and necessary stability tests provided in [21] can be directly applied. Before we introduce those tests, we shall introduce the stability criteria for MJS.

    The usual stability criteria for MJSs are mean convergence (MC) and mean square convergence (MSC). We say that a system is uniformly MC if $E(z_k\,|\,z_0)\to 0$ as $k\to\infty$ for any initial condition $z_0$ and any initial regime, where $E$ denotes expectation. The system is said to be uniformly MSC if $E(\|z_k\|^2\,|\,z_0)\to 0$ as $k\to\infty$ for any initial condition $z_0$ and any initial regime. Let $I$ denote the identity matrix, and define

    $$\mathcal{F}=\mathrm{diag}(H)\,(P\otimes I_d),\qquad \mathcal{A}=\mathrm{diag}(H\otimes H)\,(P\otimes I_{d^2}), \tag{4.3}$$

    where $\mathrm{diag}$ denotes the block diagonal matrix, i.e., the $H$'s corresponding to the different regimes are put in the diagonal blocks. The symbol $\otimes$ denotes the Kronecker product of two matrices, and $d$ is the dimension of the dynamical system (the size of $z_k$) in each regime. Then $\mathcal{F}$ has size $Md\times Md$ and $\mathcal{A}$ has size $Md^2\times Md^2$, where $M$ is the total number of regimes in the complete representation (bounded by (2.1)). Let $\rho(\cdot)$ denote the spectral radius of a matrix; then we have the following result.

    Theorem 4.1. The MJS (4.1) is uniformly MC if and only if $\rho(\mathcal{F})<1$, and uniformly MSC if and only if $\rho(\mathcal{A})<1$.

    Proof. The proof can be found in [22]. Here we just point out that the matrices $\mathcal{F}$ and $\mathcal{A}$ are deterministic, not stochastic. Also, MSC implies MC but not the other way around.

    In the numerical example we shall test both stabilities.
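    The stability test itself is easy to automate once the regime matrices $H$ and the regime transition matrix $P$ are available. The sketch below is a direct transcription of (4.3) into Python; whether $P$ or its transpose should appear depends on the ordering convention used when stacking the regimes, so treat this as an illustrative sketch rather than a definitive implementation. (For the full example of Section 6, $\mathcal{A}$ is $52920\times 52920$, so sparse matrices and iterative eigensolvers would be advisable in practice.)

```python
import numpy as np

def stability_radii(H_list, P):
    """Spectral radii of the test matrices in (4.3) for z_{k+1} = H_{r_k} z_k,
    where H_list holds one matrix per regime and P is the regime transition matrix."""
    M, d = len(H_list), H_list[0].shape[0]
    diag_H = np.zeros((M * d, M * d))
    diag_HH = np.zeros((M * d * d, M * d * d))
    for r, H in enumerate(H_list):
        diag_H[r * d:(r + 1) * d, r * d:(r + 1) * d] = H
        diag_HH[r * d * d:(r + 1) * d * d, r * d * d:(r + 1) * d * d] = np.kron(H, H)
    F_mat = diag_H @ np.kron(P, np.eye(d))        # mean-convergence test matrix
    A_mat = diag_HH @ np.kron(P, np.eye(d * d))   # mean-square-convergence test matrix
    rho = lambda X: float(max(abs(np.linalg.eigvals(X))))
    return rho(F_mat), rho(A_mat)                 # MC iff first < 1, MSC iff second < 1
```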

    Remark 4.1. We are able to provide this sufficient and necessary condition on stability, and this is largely due to the fact that the structure of this NCS has been clearly revealed.

    For the complete representation (R), the transition to the next regime depends on the most recent regime thanks to the Markovian property of this system, i.e.,

    $$\big(C_2^{(k)},\tau_2^{(k)},i_k,C_1^{(k)},\tau_1^{(k)}\big)\to\big(C_2^{(k+1)},\tau_2^{(k+1)},i_{k+1},C_1^{(k+1)},\tau_1^{(k+1)}\big). \tag{5.1}$$

    As we mentioned earlier, the regimes do not have full reachability, but the transition (5.1) can be decomposed into three parts that are treated independently. The first part, $i_k\to i_{k+1}$, is determined by $P_m$. The other two parts, $(C_2^{(k)},\tau_2^{(k)})\to(C_2^{(k+1)},\tau_2^{(k+1)})$ and $(C_1^{(k)},\tau_1^{(k)})\to(C_1^{(k+1)},\tau_1^{(k+1)})$, are determined by $P_2$ and $P_1$, respectively. In this way the overall transition matrix $P$ can be constructed.

    We consider an example where the objective system, which is an MJS, has two modes ($m=2$),

    $$\begin{bmatrix} x^{(1)}_{k+1}\\ x^{(2)}_{k+1}\end{bmatrix}=\begin{bmatrix} 1.2 & 0.2\\ 1 & 0\end{bmatrix}\begin{bmatrix} x^{(1)}_{k}\\ x^{(2)}_{k}\end{bmatrix}+\begin{bmatrix} 0.05\\ 0.1\end{bmatrix}u_{k-\tau_2},$$

    and

    $$\begin{bmatrix} x^{(1)}_{k+1}\\ x^{(2)}_{k+1}\end{bmatrix}=\begin{bmatrix} 0.7 & 0.2\\ 1 & 0\end{bmatrix}\begin{bmatrix} x^{(1)}_{k}\\ x^{(2)}_{k}\end{bmatrix}+\begin{bmatrix} 0.1\\ 0.3\end{bmatrix}u_{k-\tau_2},$$

    respectively. This system is controlled by a controller through a network with random time delays. It is assumed that $\tau_1\in\{0,1\}$ and $\tau_2\in\{0,1,2\}$. We shall see how big the actual size of this system will be.

    The matrix $P_m$ for the two modes is given as follows:

    $$P_m=\begin{pmatrix} 0.8 & 0.2\\ 0.2 & 0.8\end{pmatrix}.$$

    To construct $P_2$, we need to keep in mind that not all transitions $(C_2^{(k)},\tau_2^{(k)})\to(C_2^{(k+1)},\tau_2^{(k+1)})$ are possible. For instance, the transition $(0,0)\to(0,0)$ means that at time $k$, $u_k$ is received, and at time $k+1$, $u_{k+1}$ is received with no delay. $(0,0)\to(0,1)$ is impossible because this is a contradiction on $u_k$. $(0,0)\to(0,2)$ has zero probability because at time $k$, $u_k$ is received, and at time $k+1$, $u_{k-1}$ is received (delay of 2) but is discarded since its index is older than $k$. $(0,0)\to(1,0)$ means that at time $k$, $u_k$ is received, and at time $k+1$ no control signal is received and the last applied control (which is $u_k$) has 0 delay. The matrix $P_2$ is given in the following.

    $$P_2=\begin{array}{c|cccccc} & (0,0) & (0,1) & (0,2) & (1,0) & (1,1) & (2,0)\\ \hline (0,0) & 0.7 & 0 & 0 & 0.3 & 0 & 0\\ (0,1) & 0.4 & 0.4 & 0 & 0 & 0.2 & 0\\ (0,2) & 0.5 & 0.2 & 0.3 & 0 & 0 & 0\\ (1,0) & 0.3 & 0.4 & 0 & 0 & 0 & 0.3\\ (1,1) & 0.5 & 0.2 & 0.3 & 0 & 0 & 0\\ (2,0) & 0.4 & 0.2 & 0.4 & 0 & 0 & 0 \end{array} \tag{6.1}$$

    Since $T_1=1$, $(C_1,\tau_1)$ has three possibilities, namely $(0,0),(0,1),(1,0)$, and $P_1$ is given below.

    $$P_1=\begin{pmatrix} 0.6 & 0 & 0.4\\ 0.5 & 0.5 & 0\\ 0.7 & 0.3 & 0\end{pmatrix}.$$

    By the derivation in this paper, we need to keep $2+T_1=3$ consecutive triplets $(C_2,\tau_2,i)$ in the complete representation. Because the mode information $i$ is independent, we consider 3 consecutive pairs $(C_2,\tau_2)$ whose transitions are governed by (6.1). Some sample paths are illustrated below:

    $$(0,0)\to(0,0)\to(0,0),\quad (0,0)\to(0,0)\to(1,0),\quad (0,0)\to(1,0)\to(0,0),\quad (0,0)\to(1,0)\to(0,1),\quad (0,0)\to(1,0)\to(2,0),$$

    and the total number of all possible 3-pair paths is 45. The estimate from (2.1) for this part is $(1+T_2)^{2+T_1}(2+T_2)/2=54$. With the 3 possibilities for $(C_1,\tau_1)$, there are $45\times 3=135$ cases. Since the system itself has $m=2$ modes, the total number of regimes in the complete representation is $2^3\times 135=1080$. That is, in the regime switching representation (4.2) there are $M=1080$ regimes for this dynamical system. Formula (2.1) yields an upper bound of $2^3\times 54\times 3=1296$ on $M$.
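    The count of 45 admissible 3-pair paths can be checked directly from the support of (6.1); the short verification sketch below (not part of the paper) enumerates them.

```python
# Admissible successors of each (C2, tau2) pair, read off from the support of P2 in (6.1).
support = {
    (0, 0): [(0, 0), (1, 0)],
    (0, 1): [(0, 0), (0, 1), (1, 1)],
    (0, 2): [(0, 0), (0, 1), (0, 2)],
    (1, 0): [(0, 0), (0, 1), (2, 0)],
    (1, 1): [(0, 0), (0, 1), (0, 2)],
    (2, 0): [(0, 0), (0, 1), (0, 2)],
}
paths = [(s1, s2, s3) for s1 in support for s2 in support[s1] for s3 in support[s2]]
print(len(paths))   # 45
```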

    The vector $z_k=[x^{(1)}_k\ x^{(2)}_k\ x^{(1)}_{k-1}\ x^{(2)}_{k-1}\ u_{k-1}\ u_{k-2}\ u_{k-3}]^T$, so $d=7$ (see (4.3)), $Md=7560$ and $Md^2=52920$. That is to say, the size of the matrix $\mathcal{F}$ in (4.3) is $7560\times 7560$ and that of $\mathcal{A}$ is $52920\times 52920$.

    Without any control we have $\rho(\mathcal{F})=1.1861$ and $\rho(\mathcal{A})=1.5237$, so the system is not stable in either sense. Figure 1 shows 10 sample paths of the system state without control; the initial states are $x^{(1)}=x^{(2)}=2$.

    Figure 1.  System state without control, 10 sample paths.

    We choose the prediction horizon $L=20$ in RHC and calculate the feedback law $F$ as in (3.10), then we obtain $F$ as in (3.12) and further obtain $H$ as in (4.2). Then we calculate the spectral radii $\rho(\mathcal{F})$ and $\rho(\mathcal{A})$ as in (4.3). In our example, $\rho(\mathcal{F})=0.9195$ and $\rho(\mathcal{A})=1.0358$. Thus the controlled system is MC, but not MSC. Figure 2 shows the system state and control of one sample path, while Figure 3 shows the results of 10 sample paths.

    Figure 2.  State and control of one sample path.
    Figure 3.  State and control of 10 sample paths.

    The MJS stability certainly depends on the transition probabilities among the regimes. For instance, if we use

    $$P_m=\begin{pmatrix} 0.85 & 0.15\\ 0.25 & 0.75\end{pmatrix},\qquad P_1=\begin{pmatrix} 0.7 & 0 & 0.3\\ 0.6 & 0.4 & 0\\ 0.8 & 0.2 & 0\end{pmatrix},$$

    and

    $$P_2=\begin{pmatrix} 0.8 & 0 & 0 & 0.2 & 0 & 0\\ 0.5 & 0.3 & 0 & 0 & 0.2 & 0\\ 0.6 & 0.3 & 0.1 & 0 & 0 & 0\\ 0.4 & 0.3 & 0 & 0 & 0 & 0.3\\ 0.5 & 0.2 & 0.3 & 0 & 0 & 0\\ 0.7 & 0.2 & 0.1 & 0 & 0 & 0\end{pmatrix},$$

    then again, without control we have $\rho(\mathcal{F})=1.2301$ and $\rho(\mathcal{A})=1.6022$. Figure 4 shows the system output with no control.

    Figure 4.  System state of 10 sample paths without control.

    Then we apply the proposed predictive control method and obtain $\rho(\mathcal{F})=0.8769$ and $\rho(\mathcal{A})=0.9466$. Thus the controlled system is both MC and MSC. Figure 5 shows the system state and control of one sample path, while Figure 6 shows the results of 10 sample paths.

    Figure 5.  State and control of one sample path.
    Figure 6.  State and control of 10 sample paths.

    To overcome the constraints of NCS, such as random packet delays/disorders, it is natural to estimate the system states and send out a control signal that compensates for the time delay, with the expectation of better control performance. In this paper we modeled the NCS as a regime switching system and proposed a simplified predictive control method. The regime estimate (2.1), which is one of the main contributions of this paper, illustrates that this seemingly small system actually has a very large scale. The structure of this system has been clearly revealed, and a concise sufficient and necessary condition on stability has been obtained. Numerical examples clearly show the features of this dynamical system.

    This work is supported by Barrios Technology Faculty Fellowship from University of Houston - Clear Lake, 2017-2018.

    The author declares no conflict of interest in this paper.



    [1] Aoki S and Nirei M (2017) Zipf's law, pareto's law, and the evolution of top incomes in the united states. Am Econ J-Macroecon 9: 36–71. https://doi.org/10.1257/mac.20150051 doi: 10.1257/mac.20150051
    [2] Bewley T (1977) The permanent income hypothesis: A theoretical formulation. J Econ Theory 16: 252–292. https://doi.org/10.1016/0022-0531(77)90009-6 doi: 10.1016/0022-0531(77)90009-6
    [3] Black F (1986) Noise. J Financ 41: 528–543. https://doi.org/10.1111/j.1540-6261.1986.tb04513.x doi: 10.1111/j.1540-6261.1986.tb04513.x
    [4] Bodenhorn D (1964) A cash-flow concept of profit. J Financ 19: 16–31. https://doi.org/10.2307/2977477 doi: 10.2307/2977477
    [5] Brosnan SF, Grady MF, Lambeth SP, et al. (2008) Chimpanzee autarky. PLOS ONE 3: 1–5. https://doi.org/10.1371/journal.pone.0001518 doi: 10.1371/journal.pone.0001518
    [6] De Long JB, Shleifer A, Summers LH, et al. (1990) Noise trader risk in financial markets. J Polit Econ 98: 703–738. https://doi.org/10.1086/261703 doi: 10.1086/261703
    [7] Fama EF (1965) The behavior of stock-market prices. J Bus 38: 34–105. https://doi.org/10.1086/294743 doi: 10.1086/294743
    [8] Gabaix X (1999) Zipf's Law for Cities: An Explanation. Q J Econ 114: 739–767. https://doi.org/10.1162/003355399556133 doi: 10.1162/003355399556133
    [9] Hartley JE (1996) Retrospectives: The origins of the representative agent. J Econ Perspect 10: 169–177. https://doi.org/10.1257/jep.10.2.169 doi: 10.1257/jep.10.2.169
    [10] Kesten H (1973) Random difference equations and Renewal theory for products of random matrices. Acta Math 131: 207–248. https://doi.org/10.1007/BF02392040 doi: 10.1007/BF02392040
    [11] Kirman AP (1992) Whom or what does the representative individual represent? J Econ Perspect 6: 117–136. https://doi.org/10.1257/jep.6.2.117 doi: 10.1257/jep.6.2.117
    [12] Kirman A (1993) Ants, rationality, and recruitment. Q J Econ 108: 137–156. https://doi.org/10.2307/2118498 doi: 10.2307/2118498
    [13] Kobayashi N, Kuninaka H, Wakita J, et al. (2011) Statistical features of complex systems: toward establishing sociological physics. J Phys Soc Japan 80: 072001. https://doi.org/10.1143/JPSJ.80.072001 doi: 10.1143/JPSJ.80.072001
    [14] Lucas RE (1976) Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy 1: 19–46. https://doi.org/10.1016/S0167-2231(76)80003-6 doi: 10.1016/S0167-2231(76)80003-6
    [15] Malone TW and Bernstein MS (2015) Handbook of collective intelligence, MIT press.
    [16] Muth JF (1961) Rational expectations and the theory of price movements. Econometrica 29: 315–335. https://doi.org/10.2307/1909635 doi: 10.2307/1909635
    [17] Nirei M and Aoki S (2016) Pareto distribution of income in neoclassical growth models. Rev Econ Dyn 20: 25–42. https://doi.org/10.1016/j.red.2015.11.002 doi: 10.1016/j.red.2015.11.002
    [18] Samuels JM (1965) Size and the growth of firms. Rev Econ Stud 32: 105–112. https://doi.org/10.2307/2296055 doi: 10.2307/2296055
    [19] Sano K (2022) A binary decision model and fat tails in financial market. Appl Sci 12: 7019. https://doi.org/10.3390/app12147019 doi: 10.3390/app12147019
    [20] Tversky A and Kahneman D (1974) Judgment under uncertainty: Heuristics and biases. Science 185: 1124–1131. https://doi.org/10.1126/science.185.4157.112 doi: 10.1126/science.185.4157.112
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
