
    Opinion dynamics in social networks is an area that has received significant attention in recent years [28,33]. This body of work has applications in the areas of advertising, election campaigns, opinion evolution in online social networks, and public policy.

    One widely used approach to studying opinion dynamics in networks is to model it as a stochastic process on a network of individuals. The voter model [19] is the most popular model used for such studies. Under the voter model, the network is modeled as an undirected graph whose nodes represent the individuals. There is an edge between two individuals if they interact with each other. Opinions of individuals are binary. In each time-slot, a node and one of its neighbours are chosen uniformly at random, and the node then adopts the opinion of the chosen neighbour. It is well known that the voter model dynamics on a $ d $-dimensional lattice converges almost surely to a state where all individuals have the same opinion [7]. The consensus opinion depends on the initial state of the system. This behaviour is similar to that of a Pólya urn process, where at every time-step a ball is drawn uniformly at random from the urn and is replaced along with another ball of the same colour. Such reinforcement is found in several real-life processes such as opinion evolution, disease transmission, and the modeling of ant walks. The Pólya process has been used to model opinion evolution in [20,32]. Numerous other ways of modeling the evolution of binary opinions and variants of the voter model have been proposed and studied (see the recent surveys [1,29]). In the majority-rule version of the voter model, instead of sampling a single neighbour, the updating node/individual adopts the majority opinion in its neighbourhood [5,8,11]. The voter model on various kinds of fixed and evolving network topologies has been studied in [4,10,12,27,34]. The voter model has also been extended to incorporate biased or stubborn behaviour of individuals [25,35]. In [26], the authors combine some of these extensions to study the voter model with majority rule in the presence of biased and stubborn individuals. The central question is to understand the conditions for asymptotic consensus and the rate of convergence to consensus. For instance, in [26] the authors show that the expected time to reach consensus is $ \mathcal{O}(\log N) $, where $ N $ is the size of the population. Most of these models assume Markovian evolution, and ideas from Markov processes, mean-field theory and branching processes have been used to address these questions.

    In this work, we study a generalization of the voter model and analyze its finite-time as well as asymptotic behaviour. Our generalization has two key aspects. Firstly, we model three types of behaviour, namely strong-willed, conformist, and rebel. We say that an individual is strong-willed when their opinion is not affected by the opinions of their neighbours, and that an individual is a conformist/rebel when their opinion is positively/negatively affected by the opinions of their neighbours. These three types of behaviour have been studied in [31] and [14]. Secondly, in the model proposed in this paper, new individuals are added to the population over time, and we allow their initial opinions to depend on the state of the system at the time of joining. We study the asymptotic behaviour of the opinion dynamics of this growing population on a complete network as a function of various system parameters.

    Although studying asymptotic behaviour has been the main focus of opinion modeling for many years, in various real-life situations, such as rumour spread in small communities or online opinion polls, it is important to understand how close the opinion profile gets to the limiting behaviour in a given time-period and what happens in the presence of an external influencing agent. Opinion evolution over a finite horizon has been studied in [3,31]. Information about the finite-time behaviour is particularly important for opinion shaping by advertising and/or influencing agencies. In this paper, in addition to the asymptotic behaviour, we also study the effect of external influence over finite time horizons. Opinion manipulation or opinion shaping has been studied before (see [9,18,30]), but most of the research is focused on determining which nodes of the network should be influenced to have a favourable cascading effect. In this paper, we are interested in understanding whether it is more advantageous to influence closer to the voting deadline or at the beginning, and how this depends on the three types of behaviour described above. Characterizing optimal influence strategies over finite time horizons in fixed networks is the focus of [16,23,31]. In [23], the authors show that if individuals only exhibit strong-willed behaviour, the optimal strategy is to influence towards the end of the finite time horizon, while if individuals are conformists, it may be optimal to influence towards the beginning of the time horizon. In [31], the authors show that if individuals are predominantly strong-willed or rebellious, the optimal strategy is to influence towards the end of the finite time horizon. While [23,31] consider a complete graph between individuals in the network, in [16] the authors study the effect of the nature of the graph (random/fixed) on the nature of optimal influencing strategies. Unlike [16,23,31], where the network of individuals is fixed, in this work we model a system with new individuals entering the network over time. However, we still restrict our analysis to complete graphs; thus, when the chosen individuals behave as conformists or rebels, they take into account the complete opinion profile of the population at that time.

    The main contributions of this work are as follows:

    Asymptotic Behaviour: As mentioned above, while the three types of behaviour studied in this paper are similar to those of [14], this paper advances the investigation of asymptotic properties of heterogeneous populations by incorporating a growing population in which an arbitrary but fixed number of individuals is added at each step, extending beyond the single-individual addition model. This gives us an opportunity to explore the dependence of the opinion profile on the ratio of the number of people chosen for opinion update to the number of people added at each step. Our analysis of the asymptotic behaviour is divided into three parts:

    (i) when the probability of an incoming individual holding opinion $ 1 $ or $ 0 $ is independent of the current state of the system;

    (ii) when the probability of an incoming individual holding opinion $ 1 $ is proportional to the fraction of individuals with opinion $ 1 $ in the system at that time;

    (iii) when the probability of an incoming individual holding opinion $ 1 $ is proportional to the fraction of individuals with opinion $ 0 $ in the system at that time.

    Since the individuals can behave as strong-willed with positive probability, in all three cases the limiting fraction of people with either opinion converges almost surely to a deterministic limit, independent of the initial opinion configuration. The main result on the limiting opinion profile allows us to study the dependence of the limiting fraction of individuals with either opinion on the ratio of the number of individuals chosen for opinion update at every time-step to the number of individuals added to the population. We also compare the limiting fraction of individuals with opinion $ 1 $ for the three cases and obtain conditions that determine what kind of behaviour of the incoming individuals leads to a higher fraction of people with opinion $ 1 $ asymptotically. Further, we observe that in the Central Limit Theorem (CLT) type results for cases (i) and (iii), the critical and superdiffusive regimes do not exist, while case (ii) exhibits all three regimes. Explicit conditions on system parameters for the transitions between the diffusive, critical and superdiffusive regimes are obtained.

    Finite Time Behaviour: We study the system over a time-horizon of $ T $ consecutive time-slots, out of which external influence is exerted in $ bT $ time-slots for some $ b \in (0, 1) $. The opinion dynamics of the network under external influence tends to move in a specific direction preferred by the influencer, as compared to its organic evolution in the absence of any external influence. An influence strategy is defined by the time-slots in which external influence is exerted. We show that the optimal influence strategy, i.e., the strategy that maximizes the number of nodes with the opinion supported by the influencer at the end of the time-horizon, is a function of the number of new nodes joining the network in each time-slot and the mechanisms (i)–(iii) via which new nodes form their initial opinion.

    The paper is organized as follows. In Section 2 we describe the opinion evolution model. In Section 3 we state the results concerning the asymptotic properties of the system; more precisely, we obtain the limiting fraction of people with opinion $ 1 $ for the various cases and state the fluctuation limit theorems for each case. In Section 4, we use martingale concentration to show that the random process governing the evolution of the fraction of people with either opinion can be approximated by trajectories of an ODE. Using this approximation we analyse the finite-time behaviour of the fraction of people with opinion $ 1 $ and its dependence on certain parameters of the system. Further, in Section 4.3, we introduce external influence and obtain optimal influencing strategies to maximize the expected fraction of people with opinion $ 1 $ at time $ T $. Finally, Section 5 contains the proofs of the theorems from Sections 3 and 4. We conclude with a discussion of the results obtained in this paper and possible future directions in Section 6.

    We consider a growing population with binary opinions, denoted by $ 1 $ and $ 0 $. We start with $ M_0 > 0 $ individuals at time $ t = 0 $. Let $ X_t $ denote the fraction of people with opinion $ 1 $ at time $ t $. For $ t \geq 1 $, at each discrete time step, the system evolves in two steps:

    1. A fixed number of individuals, denoted by $ R_c (\leq M_0) $, are chosen uniformly at random and they update their opinions.

    2. A fixed number of individuals, denoted by $ R_a $, each having opinion $ 1 $ with probability $ \alpha_t $ and opinion $ 0 $ with probability $ 1-\alpha_t $, are added to the population. We study three cases.

    (i) $ \alpha_t = \alpha \ \forall t \geq 1 $, for some fixed $ \alpha \in [0, 1] $. That is, the probability of a new individual having opinion $ 1 $ remains constant throughout the time interval $ [0, T] $.

    (ii) $ \alpha_t = \alpha_C X_t \ \forall t \geq 1 $, for some fixed $ \alpha_C \in [0, 1] $; that is, the probability of a new individual having opinion $ 1 $ is proportional to the fraction of individuals with opinion $ 1 $ in the population at time $ t $.

    (iii) $ \alpha_t = \alpha_R (1-X_t) \ \forall t \geq 1 $, for some fixed $ \alpha_R \in [0, 1] $; that is, the probability of a new individual having opinion $ 1 $ is proportional to the fraction of individuals with opinion $ 0 $ in the population at time $ t $.

    We now describe how the opinions of the chosen individuals are updated. Define random variables $ \{ I_i(t) \}_{1 \leq i \leq M_t} $, $ t \geq 0 $, taking values in $ \{0, 1\} $, where $ I_i(t) $ denotes the opinion of the $ i^{th} $ individual at time $ t $. Note that the total number of individuals at time $ t+1 $ is given by $ M_{t+1} = M_t + R_a $. Thus, the population increases linearly and deterministically in $ t $. Define the random variables $ Y_t = \sum\limits_{i = 1}^{M_t} I_i(t) $ and $ N_t = M_t - Y_t $ as the total number of people with opinion $ 1 $ and the total number of people with opinion $ 0 $ at time $ t $, respectively. Then, $ X_t = \frac{Y_t}{M_t} $. The opinion of a chosen individual $ j $ evolves in a time-slot according to the following transition probabilities.

    $ P(I_j(t+1) = 0 \mid I_j(t) = 1) = p_t, \quad P(I_j(t+1) = 1 \mid I_j(t) = 1) = 1-p_t, \quad P(I_j(t+1) = 1 \mid I_j(t) = 0) = q_t, \quad P(I_j(t+1) = 0 \mid I_j(t) = 0) = 1-q_t.
    $
    (2.1)

    We model three types of behaviour in the population.

    – Strong-willed: the chosen individuals are not influenced by peers and change their opinions independently of anyone else in the population. In this case, $ p_t = p $ and $ q_t = q $.

    – Conformist: the chosen individuals change their opinion based on the majority opinion at that time and tend to adopt the "popular" opinion. In this case, $ p_t = p (1-X_t) $ and $ q_t = q X_t $.

    – Rebel: the chosen individuals change their opinion based on the majority opinion at that time and tend to adopt the "unpopular" opinion. In this case, $ p_t = p X_t $ and $ q_t = q (1-X_t) $.

    Let $ \lambda, \mu \in [0, 1] $. At each time-step $ t $, with probability $ \lambda $ the chosen individuals behave as strong-willed, with probability $ \mu $ they behave as conformists, and with probability $ 1-\lambda-\mu $ they behave as rebels. Let $ O_{t+1} $ be the change in the number of people with opinion $ 1 $ from time $ t $ to $ t+1 $. That is, $ Y_{t+1} = Y_t + O_{t+1} $. Then, we have

    $ X_{t+1} = \frac{Y_{t+1}}{M_{t+1}} = \frac{M_t}{M_{t+1}} X_t + \frac{O_{t+1}}{M_{t+1}}.
    $
    (2.2)

    The random variable $ O_{t+1} $ is the sum of two independent contributions, which we write as

    $ O_{t+1} = O^{R_c}_{t+1} + O^{R_a}_{t+1}, $

    where $ O^{R_c}_{t+1} $ is the change due to opinion evolution of the chosen individuals and $ O^{R_a}_{t+1} $ is the change due to the newly added individuals. We have

    $ \mathbb{E}[O_{t+1}|\mathcal{F}_t] = \mathbb{E}[O^{R_c}_{t+1}+O^{R_a}_{t+1}|\mathcal{F}_t] = R_c[(1-X_t)q_t - X_t p_t] + \alpha_t R_a = R_c(1-\lambda-2\mu)(q-p)X_t^2 + R_c\left[(3\mu+\lambda-2)q - (\lambda+\mu)p\right]X_t + \alpha_t R_a + q(1-\mu)R_c = R_a r(1-\lambda-2\mu)(q-p)X_t^2 + R_a\left[r\{(3\mu+\lambda-2)q - (\lambda+\mu)p\}X_t + \alpha_t + q(1-\mu)r\right],
    $
    (2.3)

    where $ r = R_c/R_a $ is the ratio of the number of people chosen to the number of people added at each time step. Note that $ \mathbb{E}[O_{t+1}|\mathcal{F}_t] $ is a linear function of $ X_t $ provided (1) $ \lambda + 2 \mu = 1 $ or (2) $ p = q $. The first case is that of a mixed population where the probability of a chosen individual being a conformist is the same as that of her being a rebel, and the second case is inspired by the conventional voter model transition rule. Throughout this paper, we assume the following.

    Assumption 1. We assume that the probability of a chosen individual behaving as a conformist is the same as that of her behaving as a rebel. That is, $ \lambda + 2 \mu = 1 $. Further, we assume that $ R_a > 0 $.

    As we shall see, the results for the case $ p = q $ can be obtained as a special case under Assumption 1. The linearity of $ \mathbb{E}[O_{t+1}|\mathcal{F}_t] $ as a function of $ X_t $ implies that $ \mathbb{E}[X_{t+1}|\mathcal{F}_t] $ is linear in $ X_t $. This allows us to give an ODE approximation for the recursion with explicit error bounds. In the next section, we investigate the asymptotic properties of the fraction of people with opinion $ 1 $.
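    Before doing so, a direct simulation of the dynamics just described can serve as a sanity check. The following Python sketch generates one sample path of $ X_t $ using the mixed transition probabilities $ p_t $ and $ q_t $ induced by the three behaviour types; the function names and parameter values are illustrative assumptions, not part of the model specification.

```python
import numpy as np

def simulate(T=2000, M0=100, Rc=5, Ra=1, p=0.3, q=0.7,
             lam=0.2, mu=0.4, alpha_fn=lambda X: 0.5, x0=0.3, seed=0):
    """One sample path of the growing-population opinion dynamics.

    alpha_fn maps the current fraction X_t to the probability that a newly
    added individual holds opinion 1, covering cases (i)-(iii)."""
    rng = np.random.default_rng(seed)
    opinions = (rng.random(M0) < x0).astype(int)   # initial opinion profile
    traj = [opinions.mean()]
    for _ in range(T):
        X = opinions.mean()
        # mixed transition probabilities (strong-willed / conformist / rebel)
        p_t = lam * p + mu * p * (1 - X) + (1 - lam - mu) * p * X
        q_t = lam * q + mu * q * X + (1 - lam - mu) * q * (1 - X)
        # R_c individuals chosen uniformly at random update their opinions
        idx = rng.choice(opinions.size, size=Rc, replace=False)
        u = rng.random(Rc)
        for i, ui in zip(idx, u):
            if opinions[i] == 1 and ui < p_t:
                opinions[i] = 0
            elif opinions[i] == 0 and ui < q_t:
                opinions[i] = 1
        # R_a new individuals join, each holding opinion 1 w.p. alpha_t
        newcomers = (rng.random(Ra) < alpha_fn(X)).astype(int)
        opinions = np.concatenate([opinions, newcomers])
        traj.append(opinions.mean())
    return np.array(traj)

if __name__ == "__main__":
    path = simulate(alpha_fn=lambda X: 0.5)   # case (i) with alpha = 0.5
    print("X_T =", path[-1])
```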

    We use stochastic approximation theory to analyse the behaviour of the fraction of individuals with opinion $ 1 $. Note that the recursion in Eq (5.2) is of the form

    $ x_{t+1} = x_t + a_t (h(x_t) + S_{t+1}), $

    where $ 1/a_t $ is a linear function of $ t $, $ h $ is a linear function of $ X_t $ (whenever $ \lambda+ 2\mu = 1 $), $ S_t $ is a square-integrable zero-mean martingale and $ x_t $ is bounded. Therefore, the limit point of the recursion is the same as the stable limit point of the ODE $ \dot{x}_t = h(x_t) $ (see Chapter 1 in [2]). Thus, the following is immediate.

    Theorem 3.1. Under Assumption 1, $ X_t \to X^* $ almost surely as $ t \to \infty $, where

    $ X^* = \begin{cases} \dfrac{\alpha+q(1-\mu)r}{1+(p+q)(1-\mu)r} & \text{for } \alpha_t = \alpha, \\[2mm] \dfrac{q(1-\mu)r}{1-\alpha_C+(p+q)(1-\mu)r} & \text{for } \alpha_t = \alpha_C X_t, \\[2mm] \dfrac{\alpha_R+q(1-\mu)r}{1+\alpha_R+(p+q)(1-\mu)r} & \text{for } \alpha_t = \alpha_R(1-X_t). \end{cases}
    $
    (3.1)

    Note that for large $ r $, that is, when the number of individuals chosen at every step for opinion update is much larger than the number of people being added to the population at every step, $ X^* $ is approximately $ \frac{q}{p+q} $ in all cases. That is, when $ R_c \gg R_a $, the asymptotic composition of opinions in the population does not depend on the initial inclination of the people being added to the population or on the behaviour of the people chosen at each step. For small $ r $, in the cases $ \alpha_t = \alpha $ and $ \alpha_t = \alpha_R (1-X_t) $, the limiting fraction of people with opinion $ 1 $ is close to $ \alpha $ and $ \frac{\alpha_R}{1+\alpha_R} $ respectively, whereas for $ \alpha_t = \alpha_C X_t $ it is close to zero. A similar trend is observed when everyone in the population is conformist with probability $ 1 $.
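    As a quick numerical illustration of Theorem 3.1, the limits in Eq (3.1) can be evaluated directly. The sketch below (parameter values and case labels are ours, chosen only for illustration) also shows the convergence of $ X^* $ to $ q/(p+q) $ as $ r $ grows.

```python
def x_star(case, r, p, q, mu, alpha=None, alpha_C=None, alpha_R=None):
    """Limiting fraction X* from Eq (3.1) for the three entry mechanisms."""
    s = (1 - mu) * r
    if case == "constant":           # alpha_t = alpha
        return (alpha + q * s) / (1 + (p + q) * s)
    if case == "proportional":       # alpha_t = alpha_C * X_t
        return q * s / (1 - alpha_C + (p + q) * s)
    if case == "anti-proportional":  # alpha_t = alpha_R * (1 - X_t)
        return (alpha_R + q * s) / (1 + alpha_R + (p + q) * s)
    raise ValueError(case)

p, q, mu = 0.3, 0.7, 0.4
print("q/(p+q) =", q / (p + q))
for r in (0.1, 1.0, 10.0, 1000.0):
    print(r,
          x_star("constant", r, p, q, mu, alpha=0.5),
          x_star("proportional", r, p, q, mu, alpha_C=0.5),
          x_star("anti-proportional", r, p, q, mu, alpha_R=0.5))
```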

    Remark 1. For the case $ p = q $, we get

    $ X^* = \begin{cases} \dfrac{\alpha+q(1-\mu)r}{1+2q(1-\mu)r} & \text{for } p=q \text{ and } \alpha_t = \alpha, \\[2mm] \dfrac{q(1-\mu)r}{1-\alpha_C+2q(1-\mu)r} & \text{for } p=q \text{ and } \alpha_t = \alpha_C X_t, \\[2mm] \dfrac{\alpha_R+q(1-\mu)r}{1+\alpha_R+2q(1-\mu)r} & \text{for } p=q \text{ and } \alpha_t = \alpha_R(1-X_t), \end{cases}
    $

    which turns out to be a special case of Theorem 3.1. This is because the ODE under the assumption $ p = q $ is the same as that obtained under Assumption 1 together with $ p = q $. However, as we shall see, the fluctuations of $ X_t $ around the limit $ X^* $ behave differently under $ \lambda + 2 \mu = 1 $ and under $ p = q $, and the latter cannot be obtained as a special case of the former.

    In the corollary below we compare the limiting fraction of people with opinion $ 1 $ in the three cases and obtain conditions that lead to a larger fraction of people with opinion $ 1 $ asymptotically.

    Corollary 3.2. Let $ X_S^*, X^*_C $ and $ X^*_R $ denote the limiting fraction of individuals with opinion $ 1 $ asymptotically for cases $ \alpha_t = \alpha, \alpha_t = \alpha_C X_t $ and $ \alpha_t = \alpha_R (1-X_t) $ respectively. Under Assumption 1,

    (i) $ X_S^* \geq X_R^* $ if $ \alpha \geq \alpha_R $ or $ \alpha \geq 1-X^*_S $. In particular, $ X^*_S \geq X^*_R $ when $ \alpha_R = \alpha $.

    (ii) $ X^*_S \geq X^*_C $ if $ \alpha \geq \alpha_C $ or $ \alpha \geq X^*_S $. In particular, $ X^*_S \geq X^*_C $ when $ \alpha_C = \alpha $.

    (iii) $ X^*_C \geq X^*_R $ if $ (1-\mu)r \left(q \alpha_C - p \alpha_R \right) - \alpha_R + \alpha_R \alpha_C \geq 0 $. In particular, $ X^*_C \geq X^*_R $ if $ \alpha_C = 1 $ and $ q \geq p $ or $ \alpha_C = \alpha_R = \alpha \geq 1 + (1-\mu)r(p-q) $.

    Remark 2. Note that if $ \alpha_R = 0 $, in case (iii), new people have opinion $ 0 $ with probability $ 1 $ and therefore $ X_C^* \geq X_R^* $. This is straightforward from Eq (5.5). Similarly, when $ \alpha_C = 0 $, $ X^*_C \leq X^*_R $. By the argument in the proof of Corollary 3.2, we also get that $ X^*_C \leq X^*_R $ whenever $ \alpha_R = 1 $ and $ p \geq q $.

    Finally, we show that a phase transition in the fluctuations around the limit exists only for the case $ \alpha_t = \alpha_C X_t $, that is, when the probability of new individuals having opinion $ 1 $ is directly proportional to the fraction of individuals with opinion $ 1 $ at that time. The classification into diffusive, critical and superdiffusive regimes is based on the values of the parameters $ r, p, q, \mu $ and $ \alpha_C $. In the other two cases, viz. $ \alpha_t = \alpha $ and $ \alpha_t = \alpha_R(1-X_t) $, we only have the diffusive regime with $ \sqrt{t} $ scaling.

    Theorem 3.3. Let $ X_S^*, X^*_C $ and $ X^*_R $ be as in Corollary 3.2 and suppose Assumption 1 holds.

    1. For $ \alpha_t = \alpha $, as $ t \to \infty $

    $ \sqrt{t}(X_t-X^*_S) \xrightarrow{d} \mathcal{N}\left(0, \sigma \right), $

    where $ \sigma = \frac{R_a}{2R_a[r(1-\mu)(p+q)+1]-1} \left[r(1-\mu)(p-q)\left(\frac{r(1-\mu)q+\alpha}{r(1-\mu)(p+q)+1}\right)+r(1-\mu)q+\alpha(1-\alpha)\right] $.

    2. For $ \alpha_t = \alpha_R (1-X_t) $, as $ t \to \infty $

    $ \sqrt{t}(X_t-X^*_R) \xrightarrow{d} \mathcal{N}\left(0, \sigma_R \right), $

    where

    $ \sigma_R = \frac{R_a\left[r(1-\mu)(p-q)+2\alpha_R^2-\alpha_R\right]}{2R_a\left[r(1-\mu)(p+q)+1+\alpha_R\right]-1}\left(\frac{r(1-\mu)q+\alpha_R}{r(1-\mu)(p+q)+1+\alpha_R}\right)+\frac{R_a\left[r(1-\mu)q+\alpha_R(1-\alpha_R)\right]}{2R_a\left[r(1-\mu)(p+q)+1+\alpha_R\right]-1}.
    $

    3. For $ \alpha_t = \alpha_C X_t $, as $ t \to \infty $

    (a) if $ \alpha_C < r(1-\mu)(p+q)+1-\frac{1}{2R_a} $ then

    $ \sqrt{t}(X_t-X^*_C) \xrightarrow{d} \mathcal{N}\left(0, \sigma_C \right), $

    where

    $ \sigma_C = \frac{R_a\left[r(1-\mu)(p-q)+\alpha_C\right]}{2R_a\left[r(1-\mu)(p+q)+1-\alpha_C\right]-1}\left(\frac{r(1-\mu)q}{r(1-\mu)(p+q)+1-\alpha_C}\right)+\frac{R_a\, r(1-\mu)q}{2R_a\left[r(1-\mu)(p+q)+1-\alpha_C\right]-1}.
    $

    (b) if $ \alpha_C = r(1-\mu)(p+q)+1-\frac{1}{2R_a} $ then

    $ \sqrt{\frac{t}{\log{t}}}(X_t-X^*_C) \xrightarrow{d} \mathcal{N}\left(0, \sigma_C \right), $

    where $ \sigma_C = \left[(r(1-\mu)(p-q)+\alpha_C)\left(\frac{r(1-\mu)q}{r(1-\mu)(p+q)+1-\alpha_C}\right)+r(1-\mu)q\right] $.

    (c) if

    $ \alpha_C > r(1-\mu)(p+q)+1-\frac{1}{2R_a}, $

    then, as $ t \to \infty $, $ t^{-\mathcal{D}h(X^*)}(X_t-X^*_C) $ almost surely converges to a finite random variable, where

    $ -\mathcal{D}h(X^*) = R_a(r(1-\mu)(p+q)+1-\alpha_C). $

    A similar result can be obtained for the case $ p = q $. The proofs of the above theorems are in Section 5.
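    The regime classification in part 3 of Theorem 3.3 depends only on how $ \alpha_C $ compares with $ r(1-\mu)(p+q)+1-\frac{1}{2R_a} $. A small hedged helper (with illustrative parameter values) makes the three cases explicit:

```python
def fluctuation_regime(alpha_C, r, p, q, mu, Ra, tol=1e-12):
    """Scaling regime for the case alpha_t = alpha_C * X_t (Theorem 3.3)."""
    threshold = r * (1 - mu) * (p + q) + 1 - 1.0 / (2 * Ra)
    if abs(alpha_C - threshold) <= tol:   # tolerance guards against rounding
        return "critical: sqrt(t / log t) CLT"
    if alpha_C < threshold:
        return "diffusive: sqrt(t) CLT"
    return "superdiffusive: a.s. convergence after t^{-Dh(X*)} scaling"

# with R_a = 1, r = 0.2, mu = 0.5, p = 0.6, q = 0.4 the threshold is 0.6
for a_C in (0.3, 0.6, 0.8):
    print(a_C, fluctuation_regime(a_C, r=0.2, p=0.6, q=0.4, mu=0.5, Ra=1))
```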

    Scaling limits of this nature can also be obtained for general reinforced stochastic processes, including urn models and reinforced random walks. While we have used stochastic approximation theory, such limiting behaviour can also be obtained by exploiting the martingale structure, as done in [15,24] for urn models, or using results from [17]. As mentioned before, the random process governing the reinforcement of opinions is similar to the behaviour of a generalized two-colour Pólya urn. The new individuals being added to the population correspond to adding new balls to the urn, either independently or based on the composition of the urn at that time. The chosen individuals correspond to multiple drawings of balls from the urn that are then replaced after some re-colouring. In this context, the behaviour of $ X_t $ observed here is similar in spirit to that observed in [15,21,24] for the fraction of balls of a given colour in generalized two-colour Pólya urns.

    In the next section, we study the same model and obtain optimal influencing strategies over a finite time horizon.

    We now analyse the evolution of opinions over a finite time interval $ [0, T] $. We are interested in understanding how the parameters of the system affect the final opinion profile at time $ T $ and what kind of influencing strategies result in a larger fraction of people with opinion $ 1 $ at time $ T $. The key mathematical idea is to approximate the random process of the fraction of people with a given opinion with an ODE.

    It can be shown that under Assumption 1, the iterates of the recursion for $ X_t $ remain close to the trajectories of the ODEs given by

    $ \frac{dx_t}{dt} = \begin{cases} \dfrac{-\left[r(1-\mu)(p+q)+1\right]x_t+q(1-\mu)r+\alpha}{M_0/R_a+t} & \text{when } \alpha_t = \alpha, \\[2mm] \dfrac{-\left[r(1-\mu)(p+q)+1-\alpha_C\right]x_t+q(1-\mu)r}{M_0/R_a+t} & \text{when } \alpha_t = \alpha_C X_t, \\[2mm] \dfrac{-\left[r(1-\mu)(p+q)+1+\alpha_R\right]x_t+q(1-\mu)r+\alpha_R}{M_0/R_a+t} & \text{when } \alpha_t = \alpha_R(1-X_t). \end{cases}
    $

    Thus it is enough to analyse an ODE of the form

    $ \frac{dx_t}{dt} = \frac{-\left[r(1-\mu)(p+q)+1+A\right]x_t+q(1-\mu)r+B}{M_0/R_a+t}, \quad \text{with } x_0 = X_0,
    $
    (4.1)

    where

    $ A = \begin{cases} 0 & \text{for } \alpha_t = \alpha, \\ -\alpha_C & \text{for } \alpha_t = \alpha_C X_t, \\ \alpha_R & \text{for } \alpha_t = \alpha_R(1-X_t), \end{cases} \qquad B = \begin{cases} \alpha & \text{for } \alpha_t = \alpha, \\ 0 & \text{for } \alpha_t = \alpha_C X_t, \\ \alpha_R & \text{for } \alpha_t = \alpha_R(1-X_t). \end{cases}
    $

    The solution for ODE in Eq (4.1) is given by:

    $ x_t = \frac{rq(1-\mu)+B}{r(1-\mu)(p+q)+1+A}+\left(x_0-\frac{rq(1-\mu)+B}{r(1-\mu)(p+q)+1+A}\right)\left(\frac{t+1+M_0/R_a}{1+M_0/R_a}\right)^{-(r(1-\mu)(p+q)+1+A)}.
    $
    (4.2)

    The following theorem asserts that the recursion $ X_t $ remains close to the trajectories of the ODE in Eq (4.1).

    Theorem 4.1 (Martingale concentration). Suppose Assumption 1 holds. Let $ x_T $ denote the solution at time $ T $ of the ODE in Eq (4.1). Then for $ D_{M_0} = \mathcal{O}\left(\frac{1}{M_0}\right) $,

    $ P \left( |X_T - x_T| \geq \epsilon + D_{M_0} \right) \leq 2 e^{-\epsilon^2 CT}, $

    for some constant $ C > 0 $.

    The constant $ C $ depends on various parameters of the system, including the initial population. Figures 1 and 2 illustrate how well the ODE solution tracks the simulated trajectories of the recursion for $ X_t $. In the rest of the paper, we use this approximation to analyse the recursion by studying the ODE solution. We refer to the random process by $ X_t $ and the ODE solution by $ x_t $.

    Figure 1.  The process $ X_t $ vs the corresponding ODE solution with system parameters given by $ \alpha_t = X_t, \lambda = 0.2, \mu = 0.4, r = 5, p = 0.3, q = 0.7, M_0 = 100, x_0 = 0.3, T = 2000 $.
    Figure 2.  The process $ X_t $ vs the corresponding ODE solution with system parameters given by $ \alpha_t = \alpha = 0.2, \lambda = 0.3333, \mu = 0.3333, r = 0.2, p = 0.8, q = 0.2, M_0 = 500, x_0 = 0.7, T = 2000 $.
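    The closed-form solution in Eq (4.2) is straightforward to evaluate; it is the deterministic curve plotted against the simulated process in Figures 1 and 2. Below is a minimal sketch for the setting of Figure 2; since $ R_a $ is not listed in that caption, the value $ R_a = 5 $ is an assumption made purely for illustration.

```python
import numpy as np

def ode_solution(t, x0, r, p, q, mu, A, B, M0, Ra):
    """Closed-form solution (4.2) of the ODE approximation (4.1).

    (A, B) encode the entry mechanism: (0, alpha), (-alpha_C, 0)
    or (alpha_R, alpha_R) for cases (i)-(iii) respectively."""
    c = r * (1 - mu) * (p + q) + 1 + A
    d = r * q * (1 - mu) + B
    base = (t + 1 + M0 / Ra) / (1 + M0 / Ra)
    return d / c + (x0 - d / c) * base ** (-c)

# parameters of Figure 2 (case alpha_t = alpha = 0.2); R_a = 5 is assumed
t = np.arange(0, 2001)
x = ode_solution(t, x0=0.7, r=0.2, p=0.8, q=0.2, mu=0.3333,
                 A=0.0, B=0.2, M0=500, Ra=5)
print(x[0], x[-1])   # the trajectory drifts from x_0 towards the fixed point
```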

    We first put the above ODE approximation to use in analysing the dependence of the final value $ x_T $ (which approximates the fraction of individuals with opinion $ 1 $ at the time of voting) on $ r $, that is, the ratio of the number of people who may change their opinion at each time-step to the number of people added to the population at each time-step. Clearly, the behaviour of $ x_T $ depends on $ p, q, x_0 $ and $ \alpha_t $. Differentiating the solution of Eq (4.1) at $ T $ with respect to $ r $ we get

    $ x_T^\prime = \left(\frac{(A+1)q(1-\mu)-B(p+q)(1-\mu)}{(r(1-\mu)(p+q)+1+A)^2}\right)\left[1-\left(\frac{\tau_1}{\tau_{T+1}}\right)^{r(1-\mu)(p+q)+1+A}\right]+\left(x_0-\frac{rq(1-\mu)+B}{r(1-\mu)(p+q)+1+A}\right)\left[(p+q)(1-\mu)\left(\frac{\tau_1}{\tau_{T+1}}\right)^{r(1-\mu)(p+q)+1+A}\log\left(\frac{\tau_1}{\tau_{T+1}}\right)\right],
    $
    (4.3)

    where $ \tau_k = k+M_0/R_a $. We consider the $ \alpha_t = \alpha $ case first. In this case,

    $ x_T^\prime = \frac{(1-\mu)\left(q-\alpha(p+q)\right)}{(r(1-\mu)(p+q)+1)^2}\left[1-\left(\frac{\tau_1}{\tau_{T+1}}\right)^{r(1-\mu)(p+q)+1}\right]+\left(x_0-\frac{rq(1-\mu)+\alpha}{r(1-\mu)(p+q)+1}\right)\left[(p+q)(1-\mu)\left(\frac{\tau_1}{\tau_{T+1}}\right)^{r(1-\mu)(p+q)+1}\log\left(\frac{\tau_1}{\tau_{T+1}}\right)\right].
    $
    (4.4)

    Clearly, for $ \alpha = \frac{q}{q+p} $, the first term vanishes and, since $ \log\left(\frac{\tau_1}{\tau_{T+1}}\right) < 0 $, we get that $ x_T^\prime $ is negative for $ x_0 > \frac{q}{p+q} $, positive for $ x_0 < \frac{q}{p+q} $ and zero for $ x_0 = \frac{q}{p+q} $. For $ \alpha > \frac{q}{q+p} $, it is immediate that $ x_T^\prime < 0 $ for

    $ x_0 \geq \frac{rq(1-\mu) + \alpha}{r(p+q)(1-\mu)+1}. $

    For

    $ x_0 < \frac{rq(1-\mu) + \alpha}{r(p+q)(1-\mu)+1}, $

    observe that for $ T $ not very small,

    $ \Big \vert \frac{q}{q+p} - \alpha \Big \vert \tau_{T+1}^{u_r} > \tau_1^{u_r} \left[ (r(p+q)(1-\mu)+1)^2 \left(\frac{rq(1-\mu) + \alpha}{r(p+q)(1-\mu)+1}-x_0 \right) \log \left(\frac{\tau_{T+1}}{\tau_1} \right) + \Big \vert \frac{q}{q+p} - \alpha \Big \vert \right], $

    where

    $ u_r = r(1-\mu)(p+q)+1, $

    and this inequality is equivalent to $ x_T^\prime < 0 $. Therefore, $ x_T^\prime < 0 $ for all $ r $ provided $ \alpha > \frac{q}{q+p} $. A similar argument for $ \alpha < \frac{q}{q+p} $ shows that $ x_T $ is a non-decreasing function of $ r $. For $ \alpha = \frac{q}{q+p} $, we get

    $ x_T^\prime = \left(x_0-\frac{q}{p+q}\right)\left[(p+q)(1-\mu)\left(\frac{\tau_1}{\tau_{T+1}}\right)^{r(1-\mu)(p+q)+1}\log\left(\frac{\tau_1}{\tau_{T+1}}\right)\right].
    $

    Therefore,

    ● for $ x_0 > \frac{q}{p+q} $, $ x_T $ is a non-increasing function of $ r $.

    ● for $ x_0 = \frac{q}{p+q} $, $ x_T $ is a constant function of $ r $.

    ● for $ x_0 < \frac{q}{p+q} $, $ x_T $ is a non-decreasing function of $ r $.

    Using similar arguments we characterize the behaviour of $ x_T $ as a function of $ r $ for the rest of the cases as well. Table 1 details the behaviour of the final fraction of individuals with opinion $ 1 $ as a function of $ r $.

    Table 1.  Fraction of people with opinion $ 1 $ at time $ T $ as a function of $ r $.

    | Behaviour of $ x_T $ as a function of $ r $ | (i) $ \alpha_t = \alpha $ | (ii) $ \alpha_t = \alpha_C X_t $ | (iii) $ \alpha_t = \alpha_R(1- X_t) $ |
    |---|---|---|---|
    | $ x_T $ is a non-increasing function of $ r $ | $ \alpha > \frac{q}{q+p} $ or $ \alpha = \frac{q}{q+p} < x_0 $ | $ \alpha_C = 1 $ and $ x_0 > \frac{q}{p+q} $ | $ \alpha_R > \frac{q}{p} $ or $ \alpha_R=\frac{q}{p} $, $ x_0 > \frac{q}{p+q} $ |
    | $ x_T $ is a non-decreasing function of $ r $ | $ \alpha < \frac{q}{q+p} $ or $ x_0 < \alpha = \frac{q}{q+p} $ | $ \alpha_C \in [0, 1) $ or $ \alpha_C =1 $, $ x_0 < \frac{q}{p+q} $ | $ \alpha_R < \frac{q}{p} $ or $ \alpha_R=\frac{q}{p} $, $ x_0 < \frac{q}{p+q} $ |
    | $ x_T $ is a constant function of $ r $ | $ \alpha = \frac{q}{q+p} = x_0 $ | $ \alpha_C =1 $ and $ x_0 = \frac{q}{p+q} $ | $ \alpha_R=\frac{q}{p} $ |

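    The monotonicity reported in Table 1 can be checked numerically from Eq (4.2). The short sketch below evaluates $ x_T $ for a few values of $ r $ in case (i) with $ \alpha > \frac{q}{q+p} $ (parameter values are illustrative):

```python
def x_T(r, T, x0, p, q, mu, A, B, M0, Ra):
    """Final value x_T of the ODE solution (4.2)."""
    c = r * (1 - mu) * (p + q) + 1 + A
    d = r * q * (1 - mu) + B
    base = (T + 1 + M0 / Ra) / (1 + M0 / Ra)
    return d / c + (x0 - d / c) * base ** (-c)

p, q, mu, M0, Ra, T = 0.3, 0.7, 0.4, 100, 1, 2000
alpha = 0.9                       # alpha > q/(p+q) = 0.7, so case (i), row 1
for r in (0.5, 1, 2, 5, 10):
    print(r, x_T(r, T, x0=0.2, p=p, q=q, mu=mu, A=0.0, B=alpha, M0=M0, Ra=Ra))
# the printed values decrease in r, as predicted by Table 1
```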

    In the next section, we state and discuss the main results for the finite-time opinion evolution under external influence, and optimal strategies for influencing the dynamics so as to obtain a higher number of individuals with opinion $ 1 $ at the end of the finite time horizon.

    In this section, we study the opinion evolution over a finite time interval $ [0, T] $. We assume that there is an external influencing agency with a limited budget that tries to skew the opinion of the population in its favour at the end of time $ T $. Due to budgetary constraints, the advertising agency can influence the opinion in exactly $ bT $ of the $ T $ time-slots, where $ b \in [0, 1] $ is such that $ bT \in \mathbb{N} $. The influence is exerted by manipulating the transition probabilities of the Markov process defined in Eq (2.1). That is, if the chosen individuals are being externally influenced in time-slot $ t $, then $ p_t = \tilde{p} $ and $ q_t = \tilde{q} $; otherwise, their opinions evolve as described before. Without loss of generality, we assume that the aim of the advertising agency is to maximise the number of individuals with opinion $ 1 $ at the end of the $ T $ time-slots. A similar model of opinion evolution in the presence of external influence has been studied for a fixed population in [31]. Our aim is to obtain optimal influencing strategies in different regimes depending on the model parameters. In particular, we want to study the dependence on $ R_c $ and $ R_a $. We begin by defining an influencing strategy and what we mean by optimality here.

    Definition 4.2 (Influencing strategy). An influencing strategy $ \mathcal{S} \in \{0, 1\}^T $ is defined as a binary string of length $ T $ that has exactly $ bT $ ones. For all $ i \in \{0, \dots, T-1 \} $ such that $ \mathcal{S}_i = 1 $, the transition parameters are $ p_i = \tilde{p} $ and $ q_i = \tilde{q} $. The strategies that influence in the first $ bT $ and the last $ bT $ time-slots are denoted by $ \mathcal{S}_F $ and $ \mathcal{S}_L $ respectively.

    For two strategies $ \mathcal{S}_1 $ and $ \mathcal{S}_2 $, we write $ \mathcal{S}_1 \gg \mathcal{S}_2 $ if influence according to $ \mathcal{S}_1 $ leads to a higher expected number of $ 1 $s at the end of time $ T $ than influence according to $ \mathcal{S}_2 $.

    Definition 4.3 (Optimal strategy). A strategy is called optimal if influence according to that strategy results in a higher expected number of $ 1 $s at the end of time $ T $ than any other influence strategy.

    Thus, an optimal strategy $ \mathcal{S}^* $ is such that $ \mathcal{S}^* \gg \mathcal{S} $, where $ \mathcal{S} $ is any other collection of $ bT $ time-slots to be influenced. As we shall see, due to monotonicity, in most cases influencing the first or the last $ bT $ slots is optimal. We assume that the influencing strategy is rational. That is, during influence, the probability of switching from $ 0 $ to $ 1 $ increases from what it is when the individuals behave as strong-willed. Similarly, under rational influence the probability of switching from $ 1 $ to $ 0 $ decreases. More precisely, we have the following assumption.

    Assumption 2 (Rational influence). We assume that the external influence is such that $ { \tilde{p} } < { \tilde{q} } $, $ { \tilde{p} } < p $ and $ q < { \tilde{q} } $.

    We now state our results for optimal strategies for the cases $ \alpha_t = \alpha $, $ \alpha_t = \alpha_CX_t $ and $ \alpha_t = \alpha_R(1- X_t) $. Again, a transition in optimality of influencing strategy is observed in the case $ \alpha_t = \alpha_C X_t $ at the critical value $ \alpha_C = r({ \tilde{p} }+ { \tilde{q} }) $.

    Figures 3 and 4 compare the strategies $ \mathcal{S}_L, \mathcal{S}_F $ and a strategy $ \mathcal{S} $ where the slots in the interval $ [0.4T, 0.7T] $ are influenced. In Figure 5, we compare $ \mathcal{S}_L, \mathcal{S}_F $ and a split strategy $ \mathcal{S} $ where the influence is over time-slots in the intervals $ [0.3T, 0.5T] $ and $ [0.8T, T] $. The figures illustrate that $ \mathcal{S}_L $ is optimal for the cases $ \alpha_t = \alpha $ and $ \alpha_t = \alpha_R(1-X_t) $, whereas there is a transition in the optimality of the strategies at a certain threshold value of $ \alpha_C $ in the case $ \alpha_t = \alpha_C X_t $.

    Figure 3.  Comparison of influencing strategies for the case $ \alpha_t = \alpha $. Other system parameters are given by $ b = 0.4, \tilde{p} = 0.1, \tilde{q} = 0.6, M_0 = 1000, p = 0.7, q = 0.3, \lambda = 0.4, \mu = 0.3, x_0 = 0.7, r = 1 (\text{with } R_a = R_c = 5) $.
    Figure 4.  Comparison of influencing strategies for the case $ \alpha_t = \alpha_C X_t $. Other system parameters are given by $ b = 0.4, \tilde{p} = 0.16, \tilde{q} = 0.8, M_0 = 500, p = 0.8, q = 0.4, \lambda = 0.6, \mu = 0.2, x_0 = 0.5, r = 0.625 (\text{with } R_a = 8, R_c = 5) $.
    Figure 5.  Comparison of influencing strategies for the case $ \alpha_t = \alpha_R (1-X_t) $. Other system parameters are given by $ b = 0.4, \tilde{p} = 0.1, \tilde{q} = 0.5, M = 1000, p = 0.8, q = 0.4, \lambda = 0, \mu = 0.5, x_0 = 0.5, r = 5, (\text{with } R_c = 5, R_a = 1) $.

    Theorem 4.4. Suppose $ \delta = r[({ \tilde{p} }+ { \tilde{q} })-(1-\mu)(p+q)] = 0 $. Then, under Assumptions 1 and 2,

    1. For $ \alpha_t = \alpha $, it is optimal to influence in the last $ bT $ slots.

    2. For $ \alpha_t = \alpha_CX_t $,

    (a) if $ r({ \tilde{p} }+ { \tilde{q} }) > \alpha_C $, it is optimal to influence in the last $ bT $ slots.

    (b) If $ r({ \tilde{p} }+ { \tilde{q} }) < \alpha_C $, it is optimal to influence in the first $ bT $ slots.

    (c) If $ r({ \tilde{p} }+ { \tilde{q} }) = \alpha_C $, all strategies perform equally well.

    3. For $ \alpha_t = \alpha_R(1-X_t) $, it is optimal to influence in the last $ bT $ slots.
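    The decision rule of Theorem 4.4 is easy to tabulate. The helper below (case labels and the value of $ \alpha_C $ are ours, chosen for illustration) returns which of $ \mathcal{S}_F $ and $ \mathcal{S}_L $ the theorem favours, assuming $ \delta = 0 $ and rational influence:

```python
def favoured_strategy(case, r, p_tilde, q_tilde, alpha_C=None):
    """Which of S_F / S_L Theorem 4.4 favours (delta = 0, rational influence)."""
    if case in ("constant", "anti-proportional"):   # alpha_t = alpha or alpha_R(1-X_t)
        return "influence the last bT slots (S_L)"
    # case alpha_t = alpha_C * X_t
    threshold = r * (p_tilde + q_tilde)
    if threshold > alpha_C:
        return "influence the last bT slots (S_L)"
    if threshold < alpha_C:
        return "influence the first bT slots (S_F)"
    return "all strategies perform equally well"

# r, p_tilde, q_tilde as in Figure 4; alpha_C = 0.9 is an assumed value
print(favoured_strategy("proportional", r=0.625, p_tilde=0.16, q_tilde=0.8, alpha_C=0.9))
```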

    The main idea is to compare the strategies $ \mathcal{S}_L $ and $ \mathcal{S}_F $ and then get optimality using monotonicity of the ODE solution with respect to the initial conditions. Suppose $ X_{T}^{L} $ and $ X_{T}^{F} $ denote the corresponding ODE solutions for the influencing strategies $ \mathcal{S}_L $ and $ \mathcal{S}_F $, respectively. Then, we have

    $ X_T^F = \frac{r(1-\mu)q+B}{r(1-\mu)(p+q)+1+A} + \left(x_0-\frac{r\tilde{q}+B}{r(\tilde{p}+\tilde{q})+1+A}\right)\left[\frac{bT+1+M_0/R_a}{1+M_0/R_a}\right]^{-r(\tilde{p}+\tilde{q})-1-A}\times\left[\frac{T+1+M_0/R_a}{bT+1+M_0/R_a}\right]^{-r(1-\mu)(p+q)-1-A}+\Delta\left[\frac{T+1+M_0/R_a}{bT+1+M_0/R_a}\right]^{-r(1-\mu)(p+q)-1-A},
    $
    (4.5)

    where $ \Delta = \frac{r { \tilde{q} }+B}{r({ \tilde{p} }+ { \tilde{q} })+1+A} - \frac{r(1-\mu)q+B}{r(1-\mu)(p+q)+1+A} $. Similarly,

    $ X_T^L = \frac{r\tilde{q}+B}{r(\tilde{p}+\tilde{q})+1+A} + \left(x_0-\frac{r(1-\mu)q+B}{r(1-\mu)(p+q)+1+A}\right)\left[\frac{(1-b)T+1+M_0/R_a}{1+M_0/R_a}\right]^{-r(1-\mu)(p+q)-1-A}\times\left[\frac{T+1+M_0/R_a}{(1-b)T+1+M_0/R_a}\right]^{-r(\tilde{p}+\tilde{q})-1-A}-\Delta\left[\frac{T+1+M_0/R_a}{(1-b)T+1+M_0/R_a}\right]^{-r(\tilde{p}+\tilde{q})-1-A}.
    $
    (4.6)

    Define $ \widetilde{T} = \frac{T}{1+ \frac{M_{0}}{R_{a}}} $ and let $ D_T = X_{T}^{L} - X_{T}^{F} $ be the difference between the fractions of people with opinion $ 1 $ at time $ T $ under the influencing strategies $ \mathcal{S}_L $ and $ \mathcal{S}_F $. We get

    $ D_T = X_T^L - X_T^F = \Delta\left[1-\left(\frac{\widetilde{T}+1}{(1-b)\widetilde{T}+1}\right)^{-r(\tilde{p}+\tilde{q})-1-A}-\left(\frac{\widetilde{T}+1}{b\widetilde{T}+1}\right)^{-r(1-\mu)(p+q)-1-A}\right]+\left(x_0-\frac{r(1-\mu)q+B}{r(1-\mu)(p+q)+1+A}\right)\frac{\left((1-b)\widetilde{T}+1\right)^{\delta}}{\left(\widetilde{T}+1\right)^{r(\tilde{p}+\tilde{q})+1+A}}-\left(x_0-\frac{r\tilde{q}+B}{r(\tilde{p}+\tilde{q})+1+A}\right)\frac{\left(b\widetilde{T}+1\right)^{-\delta}}{\left(\widetilde{T}+1\right)^{r(1-\mu)(p+q)+1+A}}.
    $
    (4.7)

    To compare $ \mathcal{S}_L $ and $ \mathcal{S}_F $, it is enough to analyse whether $ D_T $ is positive or negative. The detailed proof is in Section 5.

    We need the assumption $ \delta = 0 $ to ensure the mathematical tractability of the expression for $ D_T $. In general, when $ b $ is bounded away from $ 0 $ and $ 1 $ (which is reasonable since we would like to study scenarios where influence is over a non-trivial subset of $ [0, T] $) and $ T $ is large, we have

    $ \frac{(1-b) { \widetilde{T} }+1}{ { \widetilde{T} }+1} = 1- b \frac{ { \widetilde{T} }}{ { \widetilde{T} }+1} \approx 1-b \ \text{ and }~~ \frac{b { \widetilde{T} }+1}{ { \widetilde{T} }+1} \approx b. $

    Using these approximations, we get

    $ D_T \approx \Delta\left[1-(1-b)^{r(\tilde{p}+\tilde{q})+1+A}-b^{r(1-\mu)(p+q)+1+A}\right]+\left(x_0-\frac{r(1-\mu)q+B}{r(1-\mu)(p+q)+1+A}\right)\frac{(1-b)^{\delta}}{\left(\widetilde{T}+1\right)^{r(1-\mu)(p+q)+1+A}}-\left(x_0-\frac{r\tilde{q}+B}{r(\tilde{p}+\tilde{q})+1+A}\right)\frac{b^{-\delta}}{\left(\widetilde{T}+1\right)^{r(\tilde{p}+\tilde{q})+1+A}}.
    $

    For large $ { \widetilde{T} } $, the second and the third term are very small. Since for $ \alpha_t = \alpha $ or $ \alpha_t = \alpha_R(1-X_t) $ we have $ A \geq 0 $, and hence $ r(\tilde{p}+\tilde{q})+1+A \geq 1 $ and $ r(1-\mu)(p+q)+1+A \geq 1 $, we get

    $ 1 = (1-b)+b \geq (1-b)^{r( { \tilde{p} }+ { \tilde{q} })+1+A} + b^{r(1-\mu)(p+q)+1+A}, $

    which, along with $ \Delta > 0 $, implies $ D_T \geq 0 $. Combining this with the optimality argument, we get that if the voting happens after a large time $ T $, in the cases $ \alpha_t = \alpha $ or $ \alpha_t = \alpha_R(1-X_t) $, it is better to influence towards the end. Also, if $ r \ll 1 $ and $ \alpha_t = \alpha $, then $ D_T \approx 0 $, making all strategies more or less comparable in terms of effectiveness for getting more $ 1 $'s at the end of time $ T $.
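    A direct simulation comparison of $ \mathcal{S}_F $ and $ \mathcal{S}_L $ is also possible. The sketch below (case (i), with illustrative parameters that are not the exact setup of Figures 3–5) averages the final fraction of $ 1 $s over a few runs for the two strategies:

```python
import numpy as np

def run(strategy, T=1500, M0=200, Rc=5, Ra=5, p=0.7, q=0.3,
        p_inf=0.1, q_inf=0.6, lam=0.4, mu=0.3, alpha=0.5, x0=0.7, seed=1):
    """Evolve the population under a 0/1 influence schedule (case (i)).

    In influenced slots the chosen individuals use (p_inf, q_inf) instead
    of the organic mixed transition rates."""
    rng = np.random.default_rng(seed)
    op = (rng.random(M0) < x0).astype(int)
    for t in range(T):
        X = op.mean()
        if strategy[t]:
            p_t, q_t = p_inf, q_inf
        else:
            p_t = lam * p + mu * p * (1 - X) + (1 - lam - mu) * p * X
            q_t = lam * q + mu * q * X + (1 - lam - mu) * q * (1 - X)
        idx = rng.choice(op.size, size=Rc, replace=False)
        u = rng.random(Rc)
        for i, ui in zip(idx, u):
            if op[i] == 1 and ui < p_t:
                op[i] = 0
            elif op[i] == 0 and ui < q_t:
                op[i] = 1
        op = np.concatenate([op, (rng.random(Ra) < alpha).astype(int)])
    return op.mean()

T, b = 1500, 0.4
S_F = np.arange(T) < int(b * T)           # influence the first bT slots
S_L = np.arange(T) >= T - int(b * T)      # influence the last bT slots
avg = lambda S: np.mean([run(S, seed=s) for s in range(20)])
print("S_F:", avg(S_F), "  S_L:", avg(S_L))
```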

    In this section, we prove the results from the previous sections. The main tool to prove the results from Section 3 is stochastic approximation theory. For the results in Section 4, we first prove Theorem 4.1 to show that the discrete dynamics can be approximated well by an ODE, and then use the corresponding ODE to show the transition in the optimal influence strategy as stated in Theorem 4.4.

    We first prove the results establishing asymptotic behaviour of $ X_t $. From Eq (2.2) we have

    $ X_{t+1} = \frac{M_{t+1}-R_a}{M_{t+1}}X_t + \frac{\mathbb{E}[O_{t+1}|\mathcal{F}_t]}{M_{t+1}} + \frac{O_{t+1}-\mathbb{E}[O_{t+1}|\mathcal{F}_t]}{M_{t+1}} = X_t + \frac{1}{M_{t+1}}\left[\mathbb{E}[O_{t+1}|\mathcal{F}_t] - R_a X_t\right] + \frac{S_{t+1}}{M_{t+1}},
    $
    (5.1)

    where $ S_{t+1} = O_{t+1} - \mathbb{E}[O_{t+1}|\mathcal{F}_t] $ is a zero-mean martingale with respect to $ \{ \mathcal{F}_t = \sigma (O_s; 0 \leq s \leq t) \}_{t \geq 1} $. We have,

    $ \mathbb{E}[O_{t+1}|\mathcal{F}_t] = \mathbb{E}[O^{R_c}_{t+1} + O^{R_a}_{t+1} |\mathcal{F}_t] = R_c[(1-X_t)q_t - X_tp_t]+ \alpha_t {R_a}. $

    Using this in Eq (5.1) and substituting $ p_t = \lambda p + \mu p(1-X_t) + (1-\lambda-\mu) p X_t $ and $ q_t = \lambda q + \mu qX_t + (1-\lambda - \mu) q (1-X_t) $, we get the following recursion.

    $ X_{t+1} = X_t + \frac{1}{M_{t+1}}h(X_t) + \frac{S_{t+1}}{M_{t+1}},
    $
    (5.2)

    where from Eq (2.3) we get that

    $ h(X_t) = \mathbb{E}[O_{t+1}|\mathcal{F}_t] - R_a X_t = R_c(1-\lambda-2\mu)(q-p)X_t^2 + \left(R_c[(3\mu+\lambda-2)q-(\lambda+\mu)p]-R_a\right)X_t + \alpha_t R_a + q(1-\mu)R_c = R_a r(1-\lambda-2\mu)(q-p)X_t^2 + R_a\left[\left(r[(3\mu+\lambda-2)q-(\lambda+\mu)p]-1\right)X_t + \alpha_t + q(1-\mu)r\right].
    $

    The recursion for $ X_t $ can thus be written as a stochastic approximation scheme (See [2]) and the corresponding ODE is given by

    $ \frac{dx_t}{dt} = R_a\left[r(1-\lambda-2\mu)(q-p)x_t^2 + \left(r[(3\mu+\lambda-2)q-(\lambda+\mu)p]-1\right)x_t + \alpha_t + q(1-\mu)r\right] = R_a\left[r(1-\lambda-2\mu)(q-p)x_t^2 + \left(r[(3\mu+\lambda-2)q-(\lambda+\mu)p]-1-A\right)x_t + q(1-\mu)r + B\right],
    $
    (5.3)

    where $ A $ and $ B $ are as in Eq (4.1). It is easy to verify that Eq (5.2) satisfies the conditions of a stochastic approximation scheme, since the martingale difference is bounded, $ X_t \leq 1 \ \forall t \geq 0 $, $ h(\cdot) $ is Lipschitz in $ X_t $ and the step-size $ 1/M_{t} $ is the inverse of a linear function of $ t $. From stochastic approximation theory, we know that the recursion for $ X_t $ converges almost surely to the stable limit points of the ODE, which are given by $ h(x_t) = 0 $. Define

    $ \mathcal{D}(x) : = \frac{\partial h }{\partial x} = R_a \left[2r(1-\lambda-2\mu)(q-p)x + r\{(3 \mu + \lambda - 2)q - (\lambda+\mu)p\} - 1-A \right]. $

    Proof of Theorem 3.1. Under Assumption 1, the corresponding ODE is given by

    $ \frac{dx_t}{dt} = R_a\left[-\left[r(1-\mu)(p+q)+1+A\right]x_t + q(1-\mu)r + B\right],
    $

    where $ r = \frac{R_c}{R_a} $. Clearly, $ x^* = \frac{rq(1-\mu) + B }{r(1-\mu)(p+q)+1+A} $ is the equilibrium point. Further, it is easy to verify that for $ \lambda+2\mu = 1 $ or $ p = q $, $ \mathcal{D}(x) < 0 $ for all $ x $. Thus, $ \frac{rq(1-\mu) + B }{r(1-\mu)(p+q)+1+A} $ is a stable fixed point. Thus, as $ t \to \infty $, $ X_t \to X^* $ almost surely, where

    $ X^* = \begin{cases} \dfrac{\alpha+q(1-\mu)r}{1+(p+q)(1-\mu)r} & \text{for } \lambda = 1-2\mu \text{ and } \alpha_t = \alpha, \\[2mm] \dfrac{q(1-\mu)r}{1-\alpha_C+(p+q)(1-\mu)r} & \text{for } \lambda = 1-2\mu \text{ and } \alpha_t = \alpha_C X_t, \\[2mm] \dfrac{\alpha_R+q(1-\mu)r}{1+\alpha_R+(p+q)(1-\mu)r} & \text{for } \lambda = 1-2\mu \text{ and } \alpha_t = \alpha_R(1-X_t).
    \end{cases}
    $
    (5.4)

    The $ p = q $ case mentioned in Remark 1 is obtained in the same way, or by simply putting $ p = q $ in Eq (5.4) above, since the ODE for $ p = q $ is a special case of the ODE under Assumption 1. In general, $ h(x_t) $ is a polynomial of degree $ 2 $ in $ x_t $. Let $ \rho_1 $ and $ \rho_2 $ be the roots of $ h(x_t) = 0 $ with $ \rho_1 > \rho_2 $. Note that

    $ \rho_1\rho_2 = \frac{ q(1-\mu)r +B}{r(1-\lambda-2\mu)(q-p)} $

    and

    $ \rho_1+\rho_2 = -\frac{(r[(3 \mu + \lambda - 2)q - (\lambda+\mu)p] - 1-A)}{r(1-\lambda-2\mu)(q-p)}. $

    Thus, $ \mathcal{D}(x) = R_a r(1-\lambda-2\mu)(q-p)[2x-(\rho_1+\rho_2)] $ and we get the following.

    1. for the case $ \lambda+2\mu > 1 $ and $ q > p $ or $ \lambda+2\mu < 1 $ and $ q < p $, $ \rho_1 > 0 $ and $ \rho_2 < 0 $ as $ \rho_1\rho_2 < 0 $. Also, $ \mathcal{D}(\rho_1) < 0 $ and $ \mathcal{D}(\rho_2) > 0 $. So, in these cases, $ \rho_1 $ is a stable limit point.

    2. For the case $ \lambda+2\mu > 1 $ and $ q < p $ or $ \lambda+2\mu < 1 $ and $ q > p $, $ \rho_1 > 0 $ and $ \rho_2 > 0 $ as $ \rho_1\rho_2 > 0 $ and $ \rho_1+\rho_2 > 0 $. Also, $ \mathcal{D}(\rho_1) > 0 $ and $ \mathcal{D}(\rho_2) < 0 $. Hence, in these cases, $ \rho_2 $ is a stable limit point.

    While the asymptotic analysis is possible, a martingale argument for the ODE approximation as done in Section 4 is not possible when $ h $ is non-linear. Further, the ODE $ \dot{x}(t) = h(x_t) $ yields fairly complicated solutions and obtaining optimal strategies is intractable.

    Proof of Corollary 3.2. We first compare $ X_S^*, X_R^* $.

    $ X_S^* - X_R^* = \frac{\alpha+q(1-\mu)r}{1+(p+q)(1-\mu)r} - \frac{\alpha_R+q(1-\mu)r}{1+\alpha_R+(p+q)(1-\mu)r} = \frac{\alpha[1+\alpha_R+(p+q)(1-\mu)r]-\alpha_R(1+p(1-\mu)r)}{(1+(p+q)(1-\mu)r)(1+\alpha_R+(p+q)(1-\mu)r)} = \frac{(\alpha-\alpha_R)[1+(p+q)(1-\mu)r]+\alpha_R[\alpha+q(1-\mu)r]}{(1+(p+q)(1-\mu)r)(1+\alpha_R+(p+q)(1-\mu)r)} = \frac{(\alpha-\alpha_R)+\alpha_R X_S^*}{1+\alpha_R+(p+q)(1-\mu)r}.
    $

    Thus, $ X_S^* - X_R^* \geq 0 $ iff $ \alpha - \alpha_R(1-X_S^*) \geq 0 $. Therefore, $ \alpha \geq \alpha_R $ or $ \alpha \geq 1-X^*_S $ implies $ X_S^* \geq X_R^* $. Next we compare $ X_S^*, X_C^* $.

    $ X_S^* - X_C^* = \frac{\alpha+q(1-\mu)r}{1+(p+q)(1-\mu)r} - \frac{q(1-\mu)r}{1-\alpha_C+(p+q)(1-\mu)r} = \frac{\alpha[1-\alpha_C+(p+q)(1-\mu)r]-\alpha_C q(1-\mu)r}{(1+(p+q)(1-\mu)r)(1-\alpha_C+(p+q)(1-\mu)r)} = \frac{\alpha[1+(p+q)(1-\mu)r]-\alpha_C[\alpha+q(1-\mu)r]}{(1+(p+q)(1-\mu)r)(1-\alpha_C+(p+q)(1-\mu)r)}.
    $

    Clearly, $ X_S^* - X_C^* \geq 0 $ iff $ \alpha \geq \alpha_C X^*_S $. Thus, $ \alpha \geq \alpha_C $ or $ \alpha \geq X_S^* $ implies $ X_S^* - X_C^* \geq 0 $. Further, when $ \alpha_C = \alpha_R = \alpha $, $ X_S^* \geq X^*_C $ for all $ \alpha \in [0, 1] $.

    For the case $ \alpha_t = \alpha_R(1-X_t) $ and $ \alpha_t = \alpha_C X_t $ we get that

    $ X_C^* \geq X_R^* \iff \frac{rq(1-\mu)}{r(p+q)(1-\mu)+1-\alpha_C} \geq \frac{rq(1-\mu)+\alpha_R}{r(p+q)(1-\mu)+1+\alpha_R},
    $

    which holds if and only if

    $ (1-\mu)r\left(q\alpha_C - p\alpha_R\right) - \alpha_R + \alpha_R\alpha_C \geq 0.
    $
    (5.5)

    Clearly, this holds for $ \alpha_C = 1 $ and $ q \geq p $ since $ p \geq p \alpha_R $. Further, for $ \alpha_C = \alpha_R = \alpha $, we have $ X^*_C \geq X^*_R $ iff $ \alpha \geq 1 + (1-\mu)r(p-q) $.

    We now prove the fluctuation limit theorem.

    Proof of Theorem 3.3. We use results from [36]. We first compute $ \Gamma = \lim\limits_{t \to \infty} E[S^2_{t+1}|\mathcal{F}_t] $. Note that $ E[S^2_{t+1}|\mathcal{F}_t] = E[(O_{t+1}-E[O_{t+1}|\mathcal{F}_t])^2|\mathcal{F}_t] $. We have

    $ \mathbb{E}[(O_{t+1}-\mathbb{E}[O_{t+1}|\mathcal{F}_t])^2|\mathcal{F}_t] = \mathrm{Var}[O_{t+1}|\mathcal{F}_t] = \mathrm{Var}[O^{R_c}_{t+1}+O^{R_a}_{t+1}|\mathcal{F}_t] = \mathrm{Var}[O^{R_c}_{t+1}|\mathcal{F}_t] + \mathrm{Var}[O^{R_a}_{t+1}|\mathcal{F}_t] = \mathrm{Var}\Big[\sum_{i=1}^{M_t} O_i(t+1)\Big|\mathcal{F}_t\Big] + \mathrm{Var}[O^{R_a}_{t+1}|\mathcal{F}_t] = \alpha_t(1-\alpha_t)R_a + \sum_{i=1}^{M_t}\left[\frac{R_c}{M_t}\left((1-X_t)q_t+X_tp_t\right) - \frac{R_c^2}{M_t^2}\left((1-X_t)q_t-X_tp_t\right)^2\right] = \alpha_t(1-\alpha_t)R_a + R_c\left((1-X_t)q_t+X_tp_t\right) - \frac{R_c^2}{M_t}\left((1-X_t)q_t-X_tp_t\right)^2.
    $

    The term $ \frac{R^2_c}{M_t}\left((1-X_t)q_t-X_tp_t\right)^2 $ goes to zero as $ t \to \infty $. For $ p_t = \lambda p + \mu p(1-X_t) + (1-\lambda-\mu) p X_t $ and $ q_t = \lambda q + \mu qX_t + (1-\lambda - \mu) q (1-X_t) $ we get that $ \lim\limits_{t \to \infty}E[S^2_{t+1}|\mathcal{F}_t] $ is the same as

    $ \lim_{t\to\infty} R_a\left[\left(r(1-\lambda-2\mu)(p+q)+A_1\right)X_t^2 + \left(r\left((\lambda+3\mu-2)q+(\lambda+\mu)p\right)+A_2\right)X_t + r(1-\mu)q + A_3\right],
    $
    (5.6)

    where

    $ A_1 = \begin{cases} 0 & \text{for } \alpha_t = \alpha, \\ -\alpha_C^2 & \text{for } \alpha_t = \alpha_C X_t, \\ -\alpha_R^2 & \text{for } \alpha_t = \alpha_R(1-X_t), \end{cases} \qquad A_2 = \begin{cases} 0 & \text{for } \alpha_t = \alpha, \\ \alpha_C & \text{for } \alpha_t = \alpha_C X_t, \\ 2\alpha_R^2-\alpha_R & \text{for } \alpha_t = \alpha_R(1-X_t), \end{cases} \qquad A_3 = \begin{cases} \alpha(1-\alpha) & \text{for } \alpha_t = \alpha, \\ 0 & \text{for } \alpha_t = \alpha_C X_t, \\ \alpha_R(1-\alpha_R) & \text{for } \alpha_t = \alpha_R(1-X_t). \end{cases}
    $

    Under Assumption 1, we get

    $ \Gamma = R_a\left[\left\{r(1-\mu)(p-q)+A_2\right\}\left(\frac{r(1-\mu)q+B}{r(1-\mu)(p+q)+1+A}\right)+r(1-\mu)q+A_3\right].
    $

    We now compute the limiting variance $ \sigma $. Thus using Theorems 2.1–2.3 from [36] we have:

    ● For $ -\mathcal{D}h(X^*) > \frac{1}{2} $, $ \sigma = \int_{0}^{\infty}e^{-(-\mathcal{D}h(X^*) - \frac{1}{2})u}\Gamma e^{-(-\mathcal{D}h(X^*) - \frac{1}{2})u}du $.

    ● For $ -\mathcal{D}h(X^*) = \frac{1}{2} $, $ \sigma = \lim\limits_{t \to \infty} \frac{1}{\log t} \int_{0}^{\log t}e^{-(-\mathcal{D}h(X^*) - \frac{1}{2})u}\Gamma e^{-(-\mathcal{D}h(X^*) - \frac{1}{2})u}du $.

    Therefore, whenever $ -\mathcal{D}h(X^*) = R_a(r(1-\mu)(p+q)+1+A) > \frac{1}{2} $ we have,

    $ \sigma = \int_0^\infty e^{-\left(-\mathcal{D}h(X^*)-\frac{1}{2}\right)u}\,\Gamma\, e^{-\left(-\mathcal{D}h(X^*)-\frac{1}{2}\right)u}\,du = \int_0^\infty e^{-2\left(-\mathcal{D}h(X^*)-\frac{1}{2}\right)u}\,\Gamma\,du = \Gamma\int_0^\infty e^{-2\left(R_a(r(1-\mu)(p+q)+1+A)-\frac{1}{2}\right)u}\,du = \frac{\Gamma}{2R_a(r(1-\mu)(p+q)+1+A)-1}.
    $

    Thus, with Assumption 1, we get the following.

    (a) For $ \alpha_t = \alpha $ and $ X^* = \frac{r(1-\mu)q+\alpha}{r(1-\mu)(p+q)+1} $, we get $ \mathcal{D}h(X^*) = -R_a(r(1-\mu)(p+q)+1) $. Therefore,

    $ \sqrt{t}(X_t-X^*) \xrightarrow{d} \mathcal{N}\left(0, \sigma \right),
    $
    (5.7)

    where $ \sigma = \frac{R_a}{2R_a[r(1-\mu)(p+q)+1]-1} \left[r(1-\mu)(p-q) \left(\frac{r(1-\mu)q+\alpha}{r(1-\mu)(p+q)+1}\right)+r(1-\mu)q+\alpha(1-\alpha)\right] $.

    (b) For $ \alpha_t = \alpha_R (1-X_t) $ and $ X^* = \frac{r(1-\mu)q+\alpha_R}{r(1-\mu)(p+q)+1+\alpha_R} $, we get $ \mathcal{D}h(X^*) = -R_a[r(1-\mu)(p+q)+1+\alpha_R] $. Therefore,

    $ \sqrt{t}(X_t-X^*) \xrightarrow{d} \mathcal{N}\left(0, \sigma_R \right),
    $
    (5.8)

    where

    $ \sigma_R = \frac{R_a\left[r(1-\mu)(p-q)+2\alpha_R^2-\alpha_R\right]}{2R_a\left[r(1-\mu)(p+q)+1+\alpha_R\right]-1}\left(\frac{r(1-\mu)q+\alpha_R}{r(1-\mu)(p+q)+1+\alpha_R}\right)+\frac{R_a\left[r(1-\mu)q+\alpha_R(1-\alpha_R)\right]}{2R_a\left[r(1-\mu)(p+q)+1+\alpha_R\right]-1}.
    $

    (c) For $ \alpha_t = \alpha_C X_t $ and $ X^* = \frac{r(1-\mu)q}{r(1-\mu)(p+q)+1-\alpha_C} $

    (i) if $ -\mathcal{D}h(X^*) = R_a[r(1-\mu)(p+q)+1-\alpha_C] > \frac{1}{2} $ that is $ \alpha_C < r(1-\mu)(p+q)+1-\frac{1}{2R_a} $ then

    $ \sqrt{t}(X_t-X^*) \xrightarrow{d} \mathcal{N}\left(0, \sigma_C \right),
    $
    (5.9)

    with $ \sigma_C = \frac{R_a}{2R_a[r(1-\mu)(p+q)+1-\alpha_C]-1} \left[(r(1-\mu)(p-q)+\alpha_C)\left(\frac{r(1-\mu)q}{r(1-\mu)(p+q)+1-\alpha_C}\right)+ r(1-\mu)q\right]. $

    (ii) if $ -\mathcal{D}h(X^*) = R_a[r(1-\mu)(p+q)+1-\alpha_C] = \frac{1}{2} $ that is $ \alpha_C = r(1-\mu)(p+q)+1-\frac{1}{2R_a} $ then

    $ \sqrt{\frac{t}{\log t}}(X_t-X^*) \xrightarrow{d} \mathcal{N}\left(0, \sigma_C \right),
    $
    (5.10)

    with $ \sigma_C = \left[(r(1-\mu)(p-q)+\alpha_C)\left(\frac{r(1-\mu)q}{r(1-\mu)(p+q)+1-\alpha_C}\right)+r(1-\mu)q\right] $.

    (iii) if $ -\mathcal{D}h(X^*) = R_a(r(1-\mu)(p+q)+1-\alpha_C) < \frac{1}{2} $, that is, $ \alpha_C > r(1-\mu)(p+q)+1-\frac{1}{2R_a} $, then as $ t \to \infty $, $ t^{-\mathcal{D}h(X^*)}(X_t-X^*) $ almost surely converges to a finite random variable.

    This completes the proof.

    A similar argument can be used to obtain scaling limits for the case $ p = q $ as well. We get

    $ \lim_{t\to\infty}\mathbb{E}[S^2_{t+1}|\mathcal{F}_t] = R_a\left[\left\{2r(1-\lambda-2\mu)q+A_1\right\}\left(\frac{r(1-\mu)q+B}{2r(1-\mu)q+1+A}\right)^2+\left\{2r(\lambda+2\mu-1)q+A_2\right\}\left(\frac{r(1-\mu)q+B}{2r(1-\mu)q+1+A}\right)+r(1-\mu)q+A_3\right].
    $

    Following the same argument as above yields the following.

    (a) For $ \alpha_t = \alpha $ and $ X^* = \frac{r(1-\mu)q+\alpha}{2r(1-\mu)q+1} $ we get $ \mathcal{D}h(X^*) = -R_a[2r(1-\mu)q+1] $ and Eq (5.7) holds with

    $ \sigma = \frac{R_a}{2R_a\left[2r(1-\mu)q+1\right]-1}\left[2rq(1-\lambda-2\mu)\left(\frac{r(1-\mu)q+\alpha}{2r(1-\mu)q+1}\right)^2+2rq(\lambda+2\mu-1)\left(\frac{r(1-\mu)q+\alpha}{2r(1-\mu)q+1}\right)+r(1-\mu)q+\alpha(1-\alpha)\right].
    $

    (b) For $ \alpha_t = \alpha_R (1-X_t) $ and $ X^* = \frac{r(1-\mu)q+\alpha_R}{2r(1-\mu)q+1+\alpha_R} $, we get $ \mathcal{D}h(X^*) = -R_a[2r(1-\mu)q+1+\alpha_R] $ and Eq (5.8) holds with

    $ \sigma_R = \frac{R_a}{2R_a\left[2r(1-\mu)q+1+\alpha_R\right]-1}\left[\left(2r(1-\lambda-2\mu)q-\alpha_R^2\right)\left(\frac{r(1-\mu)q+\alpha_R}{2r(1-\mu)q+1+\alpha_R}\right)^2+\left(2r(\lambda+2\mu-1)q+2\alpha_R^2-\alpha_R\right)\left(\frac{r(1-\mu)q+\alpha_R}{2r(1-\mu)q+1+\alpha_R}\right)+r(1-\mu)q+\alpha_R(1-\alpha_R)\right].
    $

    (c) For $ \alpha_t = \alpha_C X_t $ and $ X^* = \frac{r(1-\mu)q}{2r(1-\mu)q+1-\alpha_C} $: if $ -\mathcal{D}h(X^*) = R_a(2r(1-\mu)q+1-\alpha_C) = \frac{1}{2} $, then Eq (5.10) holds with

    $ \sigma_C = \left[\left(2r(1-\lambda-2\mu)q-\alpha_C^2\right)\left(\frac{r(1-\mu)q}{2r(1-\mu)q+1-\alpha_C}\right)^2+\left(2r(\lambda+2\mu-1)q+\alpha_C\right)\left(\frac{r(1-\mu)q}{2r(1-\mu)q+1-\alpha_C}\right)+r(1-\mu)q\right].
    $

    Thus, unlike for the limiting fraction $ X^* $, the limiting second moment or the variance for the case $ p = q $ cannot be obtained as a special case of $ \lambda+2\mu = 1 $.

    In this section, we prove the results from Section 4. We use a martingale concentration inequality to prove the ODE approximation (from Section 4.1) of the recursion for $ X_t $ and then use the approximation to prove Theorem 4.4 for the optimal influencing strategy.

    Proof of Theorem 4.1. Under Assumption 1, the recurrence is given by

    $ X_{t+1} = X_t + \frac{R_a}{M_{t+1}} \left[ -(r(1-\mu)(p+q)+1+A) X_t + q(1-\mu)r +B\right] + \frac{S_{t+1}}{M_{t+1}}. $

    This gives,

    $ \mathbb{E}[X_{t+1} | \mathcal{F}_t] = X_t + \frac{1}{M'_{t+1}} \left[ -(r(1-\mu)(p+q)+1+A) X_t + q(1-\mu)r +B\right], $

    where $ M'_{t+1} = \frac{M_{t+1}}{R_a} $. Thus,

    $ \mathbb{E}[X_{t+1}|\mathcal{F}_t] = X_t\left(1 - \frac{r(p + q)(1-\mu) + 1+A}{M'_{t+1}} \right) + \frac{r(1-\mu)q+B }{M'_{t+1}}. $

    Define $ \alpha_t = \left(1 - \frac{r(p + q)(1-\mu) + 1+A}{M'_{t+1}} \right) $ and $ \beta_t = \frac{r(1-\mu)q+B }{M'_{t+1}} $ (with a slight abuse of notation: in this proof $ \alpha_t $ and $ \beta_t $ denote the coefficients of the recursion, not the entry probabilities), and

    $ Z_t = X_t \prod\limits_{k = 0}^{t-1} \alpha_k^{-1} - \sum\limits_{i = 0}^{t-1} \beta_i \prod\limits_{j = 0}^{i}\alpha_j^{-1}. $

    Note that $ Z_t $ is an $ \{\mathcal{F}_t\}_{t\geq 0} $-martingale by definition. For a fixed $ T $, define $ Y_t = \left(\prod_{k = 0}^{T-1} \alpha_k \right) Z_t $, which is also an $ \{\mathcal{F}_t\}_{t\geq 0} $-martingale. Using the Azuma-Hoeffding inequality, we get

    $ P(|Y_T-Y_0| \geq \epsilon) \leq 2 \exp\left( - \frac{\epsilon^2}{2\sum\limits_{i = 1}^T c_i^2} \right), $

    where $ |Y_{t}-Y_{t-1}| \leq c_t $ for all $ 1 \leq t \leq T $. We first obtain a reasonable bound on $ |Y_{t}-Y_{t-1}| $. We have

    $ Y_t - Y_{t-1} = \prod_{k=0}^{T-1}\alpha_k\left(X_t\prod_{k=0}^{t-1}\alpha_k^{-1} - X_{t-1}\prod_{k=0}^{t-2}\alpha_k^{-1} - \beta_{t-1}\prod_{j=0}^{t-1}\alpha_j^{-1}\right) = \prod_{k=t}^{T-1}\alpha_k\left(X_t - X_{t-1}\alpha_{t-1} - \beta_{t-1}\right).
    $

    From Lemma 2 in [6], we get $ \Big \vert \prod_{k = t}^{T-1} \alpha_k \Big \vert \leq K \left(\frac{T}{t}\right)^{- { C^\prime }} $, where $ { C^\prime } = r(p+q)(1-\mu)+1+A $ and $ K = K({ C^\prime }, M_0/R_a) $ is a positive constant. Further, $ \vert X_t - X_{t-1} \alpha_{t-1} - \beta_{t-1} \vert \leq \frac{B_1}{t} $ for some constant $ B_1 > 0 $ (not to be confused with $ B $ from Eq (4.1)). Then, $ \sum\limits_{t = 1}^T c^2_t \leq T^{-2 { C^\prime }} \sum\limits_{t = 1}^T K^2 B_1^2 t^{2 { C^\prime }-2} \leq \frac{K^2B_1^2}{(2 { C^\prime }-1)T} $. This implies,

    $ P(|Y_T-Y_0| \geq \epsilon) \leq 2 \exp \left( - \frac{\epsilon^2 (2 C^\prime -1) T}{K^2B_1^2} \right). $

    Note $ 2 { C^\prime }-1 > 0 $. Equivalently, we have

    $ P\left( \Big \vert X_T - \sum\limits_{i = 0}^{T-1} \beta_i \prod\limits_{j = i+1}^{T-1}\alpha_j - \prod\limits_{k = 0}^{T-1}\alpha_k X_0 \Big \vert \geq \epsilon \right) \leq 2 \exp \left( - \frac{\epsilon^2 (2 C^\prime -1) T}{K^2B_1^2} \right). $

    Next, we show that $ \sum\limits_{i = 0}^{T-1} \beta_i \prod_{j = i+1}^{T-1}\alpha_j + \prod_{k = 0}^{T-1}\alpha_k X_0 $ is close to the ODE solution. We have,

    $ \prod_{i=0}^{T-1}\left(1 - \frac{r(p+q)(1-\mu)+1+A}{M'_{i+1}}\right) \approx \left(\frac{T+1+M_0/R_a}{1+M_0/R_a}\right)^{-(r(p+q)(1-\mu)+1+A)}
    $
    (5.11)

    and,

    $ \sum_{i=0}^{T-1}\frac{1}{M'_{i+1}}\prod_{j=i+1}^{T-1}\left(1 - \frac{r(p+q)(1-\mu)+1+A}{M'_{j+1}}\right) \approx \frac{(T+1+M_0/R_a)^{-(r(p+q)(1-\mu)+1+A)}}{r(p+q)(1-\mu)+1+A}\left[(T+1+M_0/R_a)^{r(p+q)(1-\mu)+1+A} - (1+M_0/R_a)^{r(p+q)(1-\mu)+1+A}\right]
    $
    (5.12)

    To be precise, the approximations in Eqs (5.11) and (5.12) give

    $ \Big \vert \sum\limits_{i = 0}^{T-1} \beta_i \prod\limits_{j = i+1}^{T-1}\alpha_j + \prod\limits_{k = 0}^{T-1}\alpha_k X_0 - x_T \Big \vert \leq D_{M_0}, $

    where $ D_{M_0} = \mathcal{O} \left(\frac{1}{M_0} \right) $. We have

    $ P\left( \Big \vert X_T - \sum\limits_{i = 0}^{T-1} \beta_i \prod\limits_{j = i+1}^{T-1}\alpha_j - \prod\limits_{k = 0}^{T-1}\alpha_k X_0 \Big \vert \geq \epsilon + D_{M_0} \right) \leq 2 \exp\left( - \frac{\epsilon^2 (2 C^\prime -1) T}{4K^2B_1^2} \right),
    $

    which, together with the triangle inequality, yields the statement of Theorem 4.1.

    We are now ready to prove Theorem 4.4. In addition to the ODE approximation, we need the following Lemma.

    Lemma 5.1 ($ x_t $ as a function of $ x_0 $). The ODE solution obtained by solving Eq (4.1) is an increasing function of the initial configuration $ x_0 $.

    Proof. Note that the solution (4.2) of the ODE (4.1) is of the form

    $ f(x) = a_{1}+(x-a_{1})(b_{1})^{-c_{1}}, $

    where $ a_{1}, b_{1} > 0 $ and $ c_{1} \geq 0 $ for all $ A $ and $ B $. Let $ x_{1} < x_{2} $; then

    $ x_{1} < x_{2} \iff x_{1}-a_{1} < x_{2}-a_{1} \iff (x_{1}-a_{1})(b_{1})^{-c_{1}} < (x_{2}-a_{1})(b_{1})^{-c_{1}}. $

    Thus, the ODE solution is an increasing function of the initial configuration $ x_0 $.

    Before we prove Theorem 4.4, note that restriction of a strategy to any subset of $ [0, T] $ defines an influencing strategy on that subset. For any strategy $ \mathcal{S} $, we denote the strategy on $ [T_1, T_2] \subset [0, T] $ given by the substring of $ \mathcal{S} $ on $ [T_1, T_2] $ by $ \mathcal{S} \vert_{[T_1, T_2]} $.

    Proof of Theorem 4.4. We first compare $ \mathcal{S}_F $ and $ \mathcal{S}_L $. For $ { \tilde{p} }+ { \tilde{q} } = (1-\mu)(p+q) $ we get

    $ D_T = \Delta\left[1 - \left(\frac{\widetilde{T}+1}{(1-b)\widetilde{T}+1}\right)^{-r(\tilde{p}+\tilde{q})-1-A} - \left(\frac{\widetilde{T}+1}{b\widetilde{T}+1}\right)^{-r(\tilde{p}+\tilde{q})-1-A} + \left(\widetilde{T}+1\right)^{-r(\tilde{p}+\tilde{q})-1-A}\right]
    $
    (5.13)

    and

    $ \Delta = \frac{r\tilde{q}+B}{r(\tilde{p}+\tilde{q})+1+A} - \frac{r(1-\mu)q+B}{r(1-\mu)(p+q)+1+A} = \frac{r\left(\tilde{q}-(1-\mu)q\right)}{r(\tilde{p}+\tilde{q})+1+A}.
    $

    Due to rational influence, $ \Delta > 0 $. Define $ F: [0, 1] \to \mathbb{R} $ such that

    $ F(b) = 1-\left(\frac{(1-b) { \widetilde{T} }+1}{ { \widetilde{T} } + 1}\right)^{r( { \tilde{p} }+ { \tilde{q} })+1+A} - \left(\frac{b { \widetilde{T} }+1}{ { \widetilde{T} } + 1}\right)^{r( { \tilde{p} }+ { \tilde{q} })+1+A}+\left(\frac{1}{ { \widetilde{T} }+1}\right)^{r( { \tilde{p} }+ { \tilde{q} })+1+A}, $

    so that $ D_T = \Delta F(b) $.

    Differentiating with respect to $ b $ once and twice we get

    $ F^\prime(b) = \frac{(r(\tilde{p}+\tilde{q})+1+A)\widetilde{T}}{(\widetilde{T}+1)^{r(\tilde{p}+\tilde{q})+1+A}}\left[\left((1-b)\widetilde{T}+1\right)^{r(\tilde{p}+\tilde{q})+A}-\left(b\widetilde{T}+1\right)^{r(\tilde{p}+\tilde{q})+A}\right] $
    (5.14)
    $ F^{\prime\prime}(b) = -\frac{(r(\tilde{p}+\tilde{q})+1+A)(r(\tilde{p}+\tilde{q})+A)\widetilde{T}^2}{(\widetilde{T}+1)^{r(\tilde{p}+\tilde{q})+1+A}}\left[\left((1-b)\widetilde{T}+1\right)^{r(\tilde{p}+\tilde{q})+A-1}+\left(b\widetilde{T}+1\right)^{r(\tilde{p}+\tilde{q})+A-1}\right] $
    (5.15)

    Clearly, $ F'(b) = 0 $ for $ b = 1/2 $. For the case $ \alpha_t = \alpha \text{ or } \alpha_R(1-X_t) $ we have $ A\geq 0 $, so the exponent $ r( { \tilde{p} }+ { \tilde{q} })+A $ in Eq (5.14) is positive. Thus, $ F'(b) > 0 $ for $ b \in [0, 1/2) $ and $ F'(b) < 0 $ for $ b \in (1/2, 1] $; that is, $ F $ is increasing on $ [0, 1/2) $ and decreasing on $ (1/2, 1] $. Since $ A\geq 0 $, we also have $ F''(b) < 0 $ for every $ b \in [0, 1] $; in particular, $ F''(1/2) < 0 $. Thus, $ b = \frac{1}{2} $ is a point of maximum of $ F $. Since $ F(0) = F(1) = 0 $, we get that $ F(b)\geq 0 $ for all $ b \in [0, 1] $. This implies $ D_T \geq 0 $. Hence, $ \mathcal{S}_L \gg \mathcal{S}_F $ for the case $ \alpha_t = \alpha \text{ or } \alpha_R(1-X_t) $.
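
    As a quick numerical illustration of this argument (a sketch; the parameter values below are arbitrary and not from the analysis above):

```python
import numpy as np

# Illustration of the A >= 0 case: F(0) = F(1) = 0, F is maximized at b = 1/2,
# and F is nonnegative on [0, 1]. Parameter values are arbitrary and illustrative.
r, p_tilde, q_tilde, A, T_tilde = 1.0, 0.3, 0.4, 0.5, 50.0
c = r * (p_tilde + q_tilde) + 1 + A      # the common exponent r(p~+q~)+1+A

def F(b):
    return (1
            - (((1 - b) * T_tilde + 1) / (T_tilde + 1)) ** c
            - ((b * T_tilde + 1) / (T_tilde + 1)) ** c
            + (1 / (T_tilde + 1)) ** c)

b = np.linspace(0.0, 1.0, 1001)
vals = F(b)
print(F(0.0), F(1.0))            # both boundary values vanish
print(b[np.argmax(vals)])        # the maximizer is (numerically) b = 1/2
print(vals.min() >= -1e-12)      # F >= 0 on [0, 1], so D_T = Delta * F(b) >= 0
```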

    We now address the case $ \alpha_t = \alpha_CX_t $, for which $ A = -\alpha_C $. If $ r({ \tilde{p} }+ { \tilde{q} }) > \alpha_C $, the same argument as above works and we get $ \mathcal{S}_L \gg \mathcal{S}_F $. If $ r({ \tilde{p} }+ { \tilde{q} }) < \alpha_C $, the exponent $ r( { \tilde{p} }+ { \tilde{q} })+A $ is negative, so $ F $ is decreasing on $ [0, 1/2) $ and increasing on $ (1/2, 1] $. It is also easy to check that $ F''(1/2) > 0 $. Thus, $ b = 1/2 $ is a point of minimum of $ F $. Again, using $ F(0) = F(1) = 0 $, we conclude that $ F(b)\leq 0 $ for all $ b \in [0, 1] $ and therefore $ D_T \leq 0 $. Hence, $ \mathcal{S}_L \ll \mathcal{S}_F $ for the case $ r({ \tilde{p} }+ { \tilde{q} }) < \alpha_C $. Finally, for the case $ r({ \tilde{p} }+ { \tilde{q} }) = \alpha_C $, it can easily be verified that $ D_T = 0 $ and therefore $ \mathcal{S}_L = \mathcal{S}_F $.
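
    The sign flip in the case $ r({ \tilde{p} }+ { \tilde{q} }) < \alpha_C $ can be illustrated in the same way (again a sketch with arbitrary illustrative values, not taken from the analysis above):

```python
import numpy as np

# Illustration of the case A = -alpha_C with r(p~+q~) < alpha_C: F is now nonpositive
# on [0, 1] and minimized at b = 1/2. Parameter values are arbitrary and illustrative.
r, p_tilde, q_tilde, alpha_C, T_tilde = 1.0, 0.3, 0.4, 0.9, 50.0
A = -alpha_C                              # here r*(p_tilde + q_tilde) = 0.7 < alpha_C = 0.9
c = r * (p_tilde + q_tilde) + 1 + A       # the common exponent, still positive

def F(b):
    return (1
            - (((1 - b) * T_tilde + 1) / (T_tilde + 1)) ** c
            - ((b * T_tilde + 1) / (T_tilde + 1)) ** c
            + (1 / (T_tilde + 1)) ** c)

b = np.linspace(0.0, 1.0, 1001)
vals = F(b)
print(b[np.argmin(vals)])        # the minimizer is (numerically) b = 1/2
print(vals.max() <= 1e-12)       # F <= 0 on [0, 1], hence D_T <= 0
```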

    We now prove optimality using Lemma 5.1. We give the argument for the optimality of $ \mathcal{S}_L $ when $ \mathcal{S}_L \gg \mathcal{S}_F $; a similar argument works for the remaining cases. Let $ \mathcal{S} $ be an influencing strategy. Scanning from the left (from the first coordinate), let $ t_1, t_1+1 $ be the first positions at which we encounter a '10' substring in $ \mathcal{S} $. Since $ \mathcal{S}_L \gg \mathcal{S}_F $, $ \mathcal{S} \vert_{[0, t_1 + 1]} \ll \mathcal{S}^\prime $, where $ \mathcal{S}^\prime $ is the strategy on $ [0, t_1 + 1] $ with $ \mathcal{S}^\prime_i = \mathcal{S}_i $ for all $ i \leq t_1-1 $ and $ \mathcal{S}^\prime_{t_1} = 0, \mathcal{S}^\prime_{t_1+1} = 1 $. In other words, a local swap of $ 10 $ to $ 01 $ improves the strategy (for instance, repeated swaps turn $ 1010 $ into $ 0110 $, then $ 0101 $, then $ 0011 $, the strategy with the same number of influenced steps, all placed at the end). This, combined with Lemma 5.1, shows that $ \mathcal{S}_L $ is optimal.

    We considered a population of $ M_0 $ individuals on a complete graph, each holding opinion $ 1 $ or $ 0 $ at time $ t = 0 $. At every time-step a fixed number of individuals is added to the population and a fixed number of uniformly chosen individuals update their opinion. New individuals hold opinion $ 1 $ with a probability that may or may not depend on the current state of the system; similarly, the chosen individuals may update their opinion independently of the state of the system or depending on the fraction of individuals holding opinion $ 1 $ or $ 0 $ at that time. We observed that the limiting fraction of individuals with opinion $ 1 $ depends crucially on various parameters, which can be adjusted in order to obtain a higher fraction of individuals with opinion $ 1 $ in the long run. Further, we demonstrated that when the incoming individuals hold opinion $ 1 $ with probability proportional to the number of individuals with opinion $ 1 $ in the population, the fluctuations exhibit all three regimes of scaling (diffusive, critical and superdiffusive), which is not the case otherwise. For the finite-horizon version of the problem, we studied optimal influencing strategies that maximize the expected fraction of individuals with opinion $ 1 $ at the end of the finite time $ T $. Again, a transition in the type of the optimal influencing strategy is observed only when the incoming individuals hold opinion $ 1 $ with probability proportional to the number of individuals with opinion $ 1 $ in the population.

    We also remark that we considered a particular method of influencing the population, which works by tweaking the transition probabilities of the underlying Markov chain. Another possible way to influence such a system is to add a certain number of bots or stubborn individuals. Further, while modelling the evolution of binary opinions in a growing population is an important extension of the existing body of work, the same methods could be employed to study a similar multi-opinion model. An important future direction would be to study the transitions in the scaling of fluctuations around the limit, as well as the transitions in the optimality of influencing strategies, for such growing population models on fixed or random graphs with nearest-neighbour interaction. It would also be interesting to study the same model without the restrictions of Assumption 1; it is clear from Eq (2.3) that this leads to a non-linear structure in the expression for $ E[X_{t+1} \vert \mathcal{F}_t] $, making the problem more challenging. Finally, we remark that a similar phase transition in asymptotic behaviour has been observed in reinforced random walks with non-trivial memory effects (for instance, see [13,22]). Incorporating such effects into the present opinion dynamics model would be very interesting, as dropping the Markovian update and introducing memory would bring these models closer to reality.

    The work of Neeraja Sahasrabudhe was supported by the DST-INSPIRE fellowship and SERB-MATRICS MTR/2019/001072 grant.

    The authors declare that there is no conflict of interest.

    [1] Springer-Verlag, New York, 1998.
    [2] Phys. Rev. E (3), 73 (2006), 061910, 9 pp.
    [3] Biol. Cybern., 99 (2008), 279-301.
    [4] J. Theor. Neurobiol., 2 (1983), 127-153.
    [5] J. Physiol., 117 (1952), 500-544.
    [6] Mass. MIT Press, Cambridge, 1989.
    [7] Brain Res., 1434 (2012), 136-141.
    [8] J. Theor. Biol., 166 (1994), 393-406.
    [9] J. Theor. Biol., 107 (1984), 631-647.
    [10] Biol. Cybern., 56 (1987), 19-26.
    [11] Biol. Cybern., 73 (1995), 457-465.
    [12] Biosystems, 25 (1991), 179-191.
    [13] J. Theor. Biol., 171 (1994), 225-232.
    [14] Notes taken by Charles E. Smith, Lecture Notes in Biomathematics, Vol. 14, Springer-Verlag, Berlin-New York, 1977.
    [15] Biol. Cybern., 35 (1979), 1-9.
    [16] Phys. Rev. E, 76 (2007), 021919.
    [17] Springer Series in Synergetics, 18, Springer-Verlag, Berlin, 1989.
    [18] Neural Comput., 17 (2005), 2301-2315.
    [19] Springer-Verlag, Berlin, 1978.
    [20] J. Theor. Neurobiol., 3 (1984), 67-77.
    [21] Biophys. J., 5 (1965), 173-194.
    [22] J. Theor. Biol., 77 (1979), 65-81.
    [23] J. Theor. Biol., 83 (1980), 377-387.
    [24] Comput. Biol. Med., 27 (1997), 1-7.
    [25] J. Theor. Biol., 105 (1983), 345-368.
© 2014 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
