
A bounded rational agent-based model of consumer choice

    In a bottom-up approach, agent-based models have been extensively used in finance and economics to understand how macro-level phenomena can emerge from myriads of micro-level behaviours of individual agents. Moreover, in the absence of (big) data there is still the need to test economic theories and to understand how macro-level laws can materialize as the aggregate of a multitude of interactions of discrete agents. We exemplify how this problem can be solved in a particular instance: we introduce an agent-based method to generate data with Monte Carlo simulation and then interpolate the data with machine learning methods in order to derive multi-parametric demand functions. In particular, the model we construct is implemented in a simulated economy with 1000 consumers and two products, where each consumer is characterized by a unique set of preferences and available income. The demand for each product is determined by a stochastic process, incorporating the uncertainty in consumer preferences. By interpolating the demand data for various scenarios and types of consumers, we derive poly-parametric demand functions. These demand functions are partially in tension with classical demand theory, since on certain occasions they imply that the demand for a product increases as its price increases. Our proposed method of generating data from discrete agents with Monte Carlo and of interpolating the data with machine learning methods can easily be generalized and applied to the assessment of economic theories and to the derivation of economic laws in a bottom-up approach.

    Citation: Georgios Alkis Tsiatsios, John Leventides, Evangelos Melas, Costas Poulios. A bounded rational agent-based model of consumer choice[J]. Data Science in Finance and Economics, 2023, 3(3): 305-323. doi: 10.3934/DSFE.2023018




    Modelling epidemic spreading is a central topic in research on both contagion spreading and network science [1]. Given the gravity of the recent COVID-19 pandemic's impact on global health and the economy [2], the field has attracted increasing attention, and models of epidemic spreading have been a major focus, with new models constantly being proposed and built upon earlier ones. One approach is compartmental models, which divide the population into compartments based on their current role in the spreading process [3]. A simple example of a compartmental model is the SIR model, which classifies each individual as either susceptible, infected, or recovered; once an individual has recovered, the infection can no longer be transmitted to them. The SIR model is well-studied and has been applied to complex networks in the literature [4]–[6]. Other types of compartmental models include SIS, SIRS, and SEIR [3], in addition to more complex ones [7]. Models of contagion spreading can be used to quantitatively study effective ways of epidemic prevention [8], [9].

    In this article, we highlight two models for different applications introduced in our earlier articles [9]–[11] and show that they produce equivalent results even when applied to each other's use cases. Both models are based on similar foundations for calculating the probabilities of spreading or connections between nodes in a network [12], where methods of probabilistic networks [13] are applied.

    As mentioned, the models (referred to as the Spreading model and the Connectivity model) are designed for two separate applications. While the Spreading model is built for epidemic and behaviour spreading [10], the Connectivity model is constructed to simulate the reliability of connectivity in a service network [11]. Since the models are built for different use cases, we introduce general vocabulary for discussing both of them. We use the term influence spreading for the propagation of contagion, information, or a similar phenomenon through the nodes of a network. The interpretation of contagion as influence spreading is rather natural: contagion spreads from one individual to another when they are in contact and, through these contacts, continues to spread further. Connectivity in information networks can also be thought of as information spreading through the nodes and connections of the network: wherever information can be transferred, a connection exists. Neither model simulates the spreading process through time; instead, only the final state of the spreading is calculated.

    As the models produce equivalent results, they can both be applied to epidemic spreading. In particular, they are applicable to situations where individuals gain full immunity to a disease after contagion, as a result of the models' principle of influencing a node at most once, similarly to the SIR model [3]. In the case of information network connectivity, immunity can be equated to the inactivity of nodes in the network. Diseases resulting in full or near-full immunisation include measles, mumps, rubella, and chickenpox [14], though many infections give no or only partial immunity. We will propose a model for epidemic spreading with partial immunity in a future study.

    The Spreading model is built to be extendable and has the potential for additional parameters that the Connectivity model lacks. For example, parameters such as maximum spreading path length (Lmax) and breakthrough probability can be taken into account. This enables the calculation of novel forms of connectivity, based not only on the structure of the network but also on other possible phenomena arising from the new parameters.

    We compare two simulation models built for two use cases: the epidemic spreading simulation model and the connection reliability model. Both models consider a network as a set of weighted nodes and weighted directed edges between them. Undirected edges are modelled as two identical edges with the endpoints swapped. The output of the models is the two-dimensional probability matrix C: V × V → [0, 1], where V is the set of nodes of the network and C(s, t) is the conditional probability of influence spreading from node s to node t, given that spreading starts from node s. Even though both models work by simulation, their approaches differ. Additionally, we present applications of our models for calculating centrality measures for nodes in the network.

    The probability matrix C(s, t) produced by both models gives the conditional probability of influence spreading from node s to node t, given that influence starts to spread from node s. If the probabilities at which the nodes initially start to spread influence are known, the conditional probabilities in the matrix can be multiplied by them to produce unconditional probabilities.
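    As a minimal illustration of this rescaling (the matrix entries and starting probabilities below are made up), the conditional matrix can be scaled row-wise by the starting probabilities:

```python
import numpy as np

# Hypothetical 3-node example: C[s, t] is the conditional probability of
# influence spreading from s to t, and p_start[s] is the probability that
# node s starts spreading in the first place (all values are illustrative).
C = np.array([[0.0, 0.6, 0.3],
              [0.5, 0.0, 0.4],
              [0.2, 0.5, 0.0]])
p_start = np.array([0.10, 0.05, 0.20])

# Unconditional probability that influence originating at s reaches t.
unconditional = p_start[:, np.newaxis] * C
print(unconditional)
```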

    The Spreading model works by simulating the spreading of influence from each node to the rest of the network separately. For each node as the source, the simulation is carried out a certain number of times; this is called the number of iterations. The probability of influence spreading to another node is calculated as the number of iterations in which the node was influenced divided by the total number of iterations. The simulation itself works in steps, during which all newly influenced nodes (that is, nodes that became influenced on the step right before the current one) attempt to spread influence to all of their neighbouring nodes. This attempt automatically fails if the neighbouring node has been previously influenced. Otherwise, the spreading succeeds with a probability of w_e · W_t, where w_e is the weight of the edge connecting the nodes and W_t is the weight of the target node. If the spreading succeeds, the target node is marked as influenced and will attempt to spread influence in the subsequent step. Another parameter, Lmax, is used to limit the maximum spreading path length: influence will only spread along paths at most Lmax edges long. Limiting Lmax can be used to shorten the execution times of the model at the cost of less precise results, or to model a situation where spreading paths are limited by some factor, such as cutting spreading chains as a preventive measure against an epidemic. A more detailed description and pseudocode for the Spreading model are provided in our earlier study [10].
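    The following Python sketch illustrates this Monte Carlo procedure as described above; the function and variable names are our own illustrative choices, and the pseudocode in [10] remains the authoritative description of the Spreading model.

```python
import random
from collections import defaultdict

def spreading_model(nodes, node_weight, edges, edge_weight, iterations, l_max):
    """Estimate C[s][t], the probability that influence starting at s reaches t.

    nodes:       iterable of node identifiers
    node_weight: dict node -> weight W_t in [0, 1]
    edges:       dict node -> list of out-neighbours (directed edges)
    edge_weight: dict (u, v) -> weight w_e in [0, 1]
    l_max:       maximum spreading path length in edges
    """
    C = {s: defaultdict(float) for s in nodes}
    for s in nodes:
        for _ in range(iterations):
            influenced = {s}
            frontier = [s]                        # nodes influenced on the previous step
            for _step in range(l_max):
                new_frontier = []
                for u in frontier:
                    for v in edges.get(u, []):
                        if v in influenced:       # attempt fails automatically
                            continue
                        # spreading succeeds with probability w_e * W_t
                        if random.random() < edge_weight[(u, v)] * node_weight[v]:
                            influenced.add(v)
                            new_frontier.append(v)
                frontier = new_frontier
                if not frontier:
                    break
            for t in influenced - {s}:
                C[s][t] += 1.0 / iterations
    return C
```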

    The calculation in the Connectivity model works similarly in that it simulates the connectivity multiple times and averages the results. Instead of simulating the influence spreading from each node separately, for each iteration the set of active nodes and edges is randomly determined: the probability at which each node and edge is active is its weight. For each active node, the nodes that can be reached by paths consisting of active nodes and edges are considered connected to it. C(s, t) is then the number of iterations in which the active node s had an active path to node t divided by the total number of iterations. Unlike the Spreading model, the Connectivity model does not have the Lmax parameter, as the connectivity is not determined by stepping along paths. A more detailed description and pseudocode for the Connectivity model are provided in our earlier study [11].
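    A corresponding sketch of the Connectivity model, following the description above (again with our own illustrative names; see [11] for the authoritative pseudocode), could look as follows.

```python
import random
from collections import defaultdict

def connectivity_model(nodes, node_weight, edges, edge_weight, iterations):
    """Estimate C[s][t], the fraction of iterations in which the active node s
    has a path of active nodes and edges to node t."""
    C = {s: defaultdict(float) for s in nodes}
    for _ in range(iterations):
        # Each node and edge is active with a probability equal to its weight.
        active_nodes = {v for v in nodes if random.random() < node_weight[v]}
        active_edges = {e for e in edge_weight if random.random() < edge_weight[e]}
        for s in active_nodes:
            # Stack-based traversal restricted to active nodes and edges.
            reached, stack = {s}, [s]
            while stack:
                u = stack.pop()
                for v in edges.get(u, []):
                    if (u, v) in active_edges and v in active_nodes and v not in reached:
                        reached.add(v)
                        stack.append(v)
            for t in reached - {s}:
                C[s][t] += 1.0 / iterations
    return C
```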

    For both models, the probability of influence spreading through a path with a specified starting node is the product of the weights of each edge and node on the path, excluding that of the starting node. In other words,

    P(\mathcal{L}) = \prod_{i=1}^{m} w_{e_i} W_{n_i},    (2.1)

    where P(ℒ) is the probability of spreading through the path ℒ consisting of m edges and nodes (the starting node excluded), and w_{e_i} and W_{n_i} are the weights of the i-th edge and node on the path, respectively. This is clear in the Spreading model, as all attempts to spread along the path must be successful, which happens with a probability equal to the product of the individual spreading probabilities, corresponding to the weights of the edges and nodes along the path. Here, the weight of the starting node is not included in the product, as the spreading is assumed to start from there. Similarly, in the Connectivity model, for two nodes to be connected via a certain path, all edges and nodes of that path must be active. The probability at which the whole path is active is then the product of the weights of the edges and nodes on the path. The starting node's weight is left out of the product, as an active path from it can only exist if it is active in the first place.
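    As a small numerical check of this formula (the weights below are made up), the probability of spreading along a three-edge path with edge weights 0.5, 0.5 and 0.3 and target-node weights all equal to 1 is 0.5 · 0.5 · 0.3 = 0.075:

```python
# Toy evaluation of P(L): product of edge and node weights along the path,
# excluding the starting node's weight (all values are illustrative).
edge_weights = [0.5, 0.5, 0.3]   # w_{e_1}, ..., w_{e_m}
node_weights = [1.0, 1.0, 1.0]   # W_{n_1}, ..., W_{n_m}

p_path = 1.0
for w_e, w_n in zip(edge_weights, node_weights):
    p_path *= w_e * w_n
print(p_path)  # 0.075
```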

    As the probability of influence spreading through a path is given by the same equation for both models, the models' results should theoretically be equivalent. This equivalence only holds, however, when Lmax is not capped in the Spreading model, so that all possible paths in the network are taken into account, as the Connectivity model does.
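    As a toy illustration of this equivalence, the two sketches given earlier can be run on a small undirected triangle with edge weights 0.5 and node weights 1; with Lmax large enough to cover every self-avoiding path, both estimates of C(s, t) should agree up to Monte Carlo noise (here both are 0.625 in expectation).

```python
# Small consistency check using the illustrative functions defined above.
nodes = ["a", "b", "c"]
node_weight = {v: 1.0 for v in nodes}
edges = {v: [] for v in nodes}
edge_weight = {}
for u, v in [("a", "b"), ("b", "c"), ("a", "c")]:
    for x, y in ((u, v), (v, u)):        # undirected edge = two directed edges
        edges[x].append(y)
        edge_weight[(x, y)] = 0.5

C_spread = spreading_model(nodes, node_weight, edges, edge_weight, 100_000, l_max=3)
C_conn = connectivity_model(nodes, node_weight, edges, edge_weight, 100_000)
print(C_spread["a"]["c"], C_conn["a"]["c"])  # both close to 0.625
```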

    The probability matrix that both of the models generate can be used for various applications, as covered in our previous articles [9], [10]. At the core of these applications are different centrality measures that reflect how central each edge and node in the network is. These measures give insight into the structure of the network, helping to understand patterns and phenomena present therein. We give three examples of centrality measures: the in- and out-centralities and the betweenness centrality.

    The in- and out-centrality measures are a natural way to approach node centrality [10]. They represent how much influence flows in and out of the node, respectively. We define the in-centrality of node t as

    C^{(in)}(t) = \sum_{s \in V,\; s \neq t} C(s,t)    (2.2)

    and, similarly, the out-centrality of node s as

    C^{(out)}(s) = \sum_{t \in V,\; t \neq s} C(s,t),    (2.3)

    where V is the set of nodes in the network [10].

    In other words, the in-centrality of a node is the sum of the probabilities of spreading from all other nodes to the node in question, and the out-centrality of a node is the sum of the probabilities of spreading from the node in question to all other nodes. The in- and out-centralities translate directly into physical quantities of the network. As C(s, t) is the probability of influence spreading from node s to node t, the sum can be thought of as an expected value. In the case of in-centrality (Eq. 2.2), the sum represents the expected number of nodes that will spread influence to the specified node, and, for the out-centrality (Eq. 2.3), the sum represents the expected number of nodes that influence from the specified node will spread to. As a concrete example, the out-centrality of a node represents the expected number of infected people in a social network when a contagious infection begins to spread from that node, and the in-centrality represents an individual's vulnerability to being infected from different sources.
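    Given a probability matrix C stored as an N × N array with zeros on the diagonal, the in- and out-centralities (Eqs. 2.2 and 2.3) are simply column and row sums; a minimal numpy sketch (with illustrative values):

```python
import numpy as np

# Illustrative 3-node probability matrix with zeros on the diagonal.
C = np.array([[0.0, 0.6, 0.3],
              [0.5, 0.0, 0.4],
              [0.2, 0.5, 0.0]])

in_centrality = C.sum(axis=0)   # Eq. 2.2: expected number of sources reaching each node
out_centrality = C.sum(axis=1)  # Eq. 2.3: expected number of nodes each source reaches
print(in_centrality, out_centrality)
```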

    The betweenness centrality measure is another approach to studying node centrality. It represents the significance of a node in transmitting influence between different parts of the network. Betweenness centrality can be easily defined for a set of nodes S, where the betweenness centrality of a single node s can be expressed as that for the set {s}. We first define the cohesion of a network as

    \mathcal{C} = \sum_{s,t \in V,\; s \neq t} C(s,t)    (2.4)

    and the partial cohesion of the network without the set of nodes S as

    \mathcal{C}_S = \sum_{s,t \in V \setminus S,\; s \neq t} C_S(s,t),    (2.5)

    where V is the set of nodes in the network and C_S is the probability matrix calculated with only the nodes in V \ S, and the edges between them, taken into account. The probability matrix for partial cohesion has to be calculated independently of that for total cohesion, as removing nodes and edges can cut off paths between parts of the network and change the spreading probabilities.

    With Eqs. 2.4 and 2.5, we define the betweenness centrality as the relative difference between the total and partial cohesion:

    b_S = \frac{\mathcal{C} - \mathcal{C}_S}{\mathcal{C}} = 1 - \frac{\mathcal{C}_S}{\mathcal{C}}.    (2.6)

    The cohesion portrays the total interconnectivity of the network. As the betweenness centrality is a relative difference in cohesion (Eq. 2.6), the larger the betweenness centrality, the greater the effect that removing the specified nodes has on the interconnectivity. The betweenness centrality can be used to spot individuals who act in bridging roles between parts of the network, the isolation of whom can help contain the contagion to only a small part of the network.
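    In terms of the probability matrices, the cohesion and the betweenness centrality of a node set S can be computed as follows (a sketch with our own function names; note that the matrix for the reduced network must be recomputed with the nodes of S removed, not merely sliced out of the full matrix):

```python
import numpy as np

def cohesion(C):
    """Total cohesion (Eq. 2.4): sum of all off-diagonal entries of C."""
    return C.sum() - np.trace(C)

def betweenness(C_full, C_reduced):
    """Betweenness centrality of a node set S (Eq. 2.6).

    C_full:    probability matrix of the whole network
    C_reduced: probability matrix recomputed on the network with S removed
    """
    return 1.0 - cohesion(C_reduced) / cohesion(C_full)
```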

    Figure 1.  Visualisation of the networks used in the demonstration of the models. (a) A visualisation of the Student network. (b) A visualisation of the Organisation network. Edges drawn in black represent relations within a group and have a weight of 0.5. Edges drawn in grey represent leadership relations and have a weight of 0.3. No other edges exist in the network. The colours represent the network's division into departments.

    To compare the models, we run them on two networks, the Student network [15] and the Organisation network [9] (Figure 1a,b). Precise descriptions of how the calculations in the models are carried out are presented in our earlier studies [9], [10]. The Student network is a small, 32-node network composed of empirical data on the relationships of Dutch university students [15]. We consider the network with all edge weights set to 0.5. The Organisation network, on the other hand, is a larger, 181-node network that represents a real-world organisation structure. The network consists of five departments, each formed by multiple groups, as well as hierarchical leadership relations. The Organisation network was first introduced in [9], along with different classes of preventive measures that simulate epidemic prevention by decreasing the weights of certain edges. In this article, we study the case where preventive measures are in use on all edges except those representing leadership relations, which means that only edges representing leadership and group relations are present. Edges representing group relations have a weight of 0.5, and edges representing leadership relations have a weight of 0.3. Both networks are considered to be undirected with node weights equal to 1. The models work and give equivalent results for directed networks with varying node weights as well. Further study of the performance of the models and their application to other networks is presented in our earlier studies [9]–[11].

    As both networks are real-life social networks, they are well suited for modelling epidemic spreading. Using the two networks, we can compare the models on networks exhibiting different properties: the Student network is small and sparse, whereas the Organisation network is larger and much denser. Together, the networks represent a multitude of different situations for our models to perform on. It is worth noting, however, that as the edges of the networks are undirected, the in- and out-centralities (Eqs. 2.2 and 2.3) of a node will always be equal due to the symmetry of spreading from and to the node. This is not the case in scenarios where a node can be influenced more than once, such as an epidemic of an infectious disease whose contraction does not lead to full immunity [2]. In practice, the probability of an infection spreading from one person to another is often different from the probability of spreading in the other direction, which is represented by different weights on the directed edges between them.

    The models calculate the probabilities of spreading between all pairs of nodes given the layout and weight parameters of the network. The results can be used to calculate important quantities specific to the network and its weights. One example of such a quantity is the basic reproduction number for epidemics, a measure of contagiousness defined as the average number of new infections that a single infection leads to [16]. We have calculated the basic reproduction number for simulations run on the Organisation network as a function of edge weights in our previous study [9]. It is important to note that quantities such as the basic reproduction number are not specific to the models and must be calculated independently for any network and parameters that the models are run on. As each set of parameters and the input network define a unique spreading scenario, measures such as the basic reproduction number can always be calculated for any combination thereof. An example of such a calculation is provided in [9].

    Figure 2.  The normalised cohesion and betweenness centrality (Eqs. 2.4 and 2.6) for the Student network as a function of Lmax calculated with 1 000 000 iterations in the Connectivity and Spreading models.
    Figure 3.  The normalised cohesion and betweenness centrality (Eqs. 2.4 and 2.6) for the Organisation network as a function of Lmax calculated with 100 000 iterations in the Connectivity and Spreading models.

    For both networks, we calculate the total cohesion (Eq. 2.4) and the mean betweenness centrality (Eq. 2.6) using both models, varying Lmax for the Spreading model. We normalise the total cohesion by the number of values constituting its sum, N(N – 1), where N is the number of nodes in the network, to obtain the node-wise average in- and out-centrality. This scaling factor comes from the probability matrix being of shape N × N with the diagonal consisting of N missing values. The sums of the in- and out-centralities (Eqs. 2.2 and 2.3) are both equal to the cohesion:

    \sum_{t \in V} C^{(in)}(t) = \sum_{t \in V} \sum_{s \in V,\; s \neq t} C(s,t) = \mathcal{C} = \sum_{s \in V} \sum_{t \in V,\; t \neq s} C(s,t) = \sum_{s \in V} C^{(out)}(s)    (3.1)

    From Eq. 3.1, the averages of the in- and out-centralities are also equal. This means that the normalised cohesion represents both the average node-wise in- and out-centralities.
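    A short helper expressing this normalisation (assuming the same N × N matrix layout as above):

```python
import numpy as np

def normalised_cohesion(C):
    """Cohesion divided by the N(N - 1) off-diagonal entries; by Eq. 3.1 this
    equals both the average in-centrality and the average out-centrality."""
    n = C.shape[0]
    return (C.sum() - np.trace(C)) / (n * (n - 1))
```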

    The mean betweenness centrality is calculated as the average of the node-wise betweenness centralities. The minimum Lmax value for which the results of the models are ideally equal (assuming arbitrary precision) is the maximum length for which a self-avoiding path exists in the network, since the Spreading model is then able to simulate spreading throughout the whole network. A self-avoiding path is never longer than the number of nodes in the network, which gives a trivial upper bound. Thus, as Lmax increases, the results of the Spreading model approach those of the Connectivity model. In practice, this happens with Lmax much lower than N (Figures 2 and 3), because the probability of spreading through a path decreases exponentially with the path length.

    The cohesions of the Student and Organisation networks are around 19% and 40%, respectively (Figures 2a and 3a). The Organisation network achieves a higher cohesion due to its much denser nature. The difference in density can also be seen in Figures 2b and 3b, where the Student and Organisation networks' betweenness centralities (Eq. 2.6) are around 0.12 and 0.03, respectively. Nodes in the Student network are connected by much fewer paths than in the Organisation network, and therefore each node has a more prominent role in allowing connections between other nodes.

    As the results given by the Spreading model converge already at relatively low values of Lmax, epidemic simulation can be performed more efficiently by leaving longer spreading paths out of account. With a low Lmax, however, the results differ, which allows the simulation of scenarios where spreading paths are limited by factors such as consistent isolation of patients. Since the two models produce identical results with a high Lmax, both models can be used for the intended purpose of the other. The equivalence of the results indicates the similarities in the mechanisms of epidemic spreading [9] and information network connectivity [11]: where contagion spreads from node to node with immunity inhibiting it in epidemic spreading, in connectivity, information has to similarly travel from node to node with inactive nodes as a limiting factor. These similarities in the mechanisms might extend to other types of network modelling, which is a subject for future research.

    The models can be run on any network with any edge and node weights but are capable of modelling only scenarios where any one node is influenced at most once. In our earlier studies, we have also presented a model where all nodes can be influenced any number of times. In future studies, we will present a model capable of dealing with situations where being influenced decreases a node's probability of being influenced again, according to a given breakthrough probability.

    Modelling breakthrough influence is an example of how the models can be applied to each other's purposes. Many epidemic [9] and influence spreading [10] processes are based on similar spreading mechanisms. Both can have significant breakthrough probabilities depending on the specific characteristics of virus species or information types. For example, COVID-19 variants [2] can have different breakthrough probabilities despite being very close in nature. Rumours and propaganda have a higher breakthrough probability, as they circulate between people and change form more effectively than facts and knowledge.

    We have compared the calculation methods and results of two models designed for epidemic spreading and connectivity simulation, introduced in our earlier research. We call these models the Spreading model and the Connectivity model, respectively. Even though the models are designed for very different applications, the manner in which they calculate their results is mathematically equivalent. Accordingly, the two models produce equivalent results with a high enough spreading path length, Lmax, in the Spreading model. As the models were independently developed for their different purposes, their equivalence highlights the similarities in the mechanisms of epidemic spreading and information network connectivity. The similar results also enable the application of the models to each other's intended purposes, allowing different parameters to be taken into account.

    In this study, we have highlighted the opportunities for using interdisciplinary modelling and simulation methods in the research of epidemic spreading, resilience in communication networks, and influence spreading in social networks.



    [1] Akerlof GA (2002) Behavioral macroeconomics and macroeconomic behavior. Am Econ Rev 92: 411–433. https://doi.org/10.1257/00028280260136192 doi: 10.1257/00028280260136192
    [2] Anderson TW, Goodman LA (1957) Statistical inference about Markov chains. Ann Math Stat 28: 89–110.
    [3] Arrow KJ (1971) Essays in the Theory of Risk-Bearing. Markham Pub Co
    [4] Assenza T, Delli Gatti D, Grazzini J (2015) Emergent dynamics of a macroeconomic agent-based model with capital and credit. J Econ Dyn Control 50: 5–28. https://doi.org/10.1016/j.jedc.2014.07.001 doi: 10.1016/j.jedc.2014.07.001
    [5] Axtell RL, Farmer JD (2022) Agent-Based Modeling in Economics and Finance: Past, Present, and Future. INET Oxford Working Paper No. 2022-10. 21st June 2022.
    [6] Barbu VS, D'Amico G, De Blasis R (2017) Novel advancements in the Markov chain stock model: Analysis and inference. Ann Financ 13: 125–152. https://doi.org/10.1007/s10436-017-0297-9 doi: 10.1007/s10436-017-0297-9
    [7] Battiston S, Farmer JD, Flache A, et al. (2016) Complexity theory and financial regulation. Science 351: 818–819. https://doi.org/10.1126/science.aad0299 doi: 10.1126/science.aad0299
    [8] Billingsley P (1961) Statistical methods in Markov chains. Ann Math Stat 32: 12–40.
    [9] Bonabeau E (2002) Agent-based modeling: Methods and techniques for simulating human systems. P Natl Acad Sci 99: 7280–7287. https://doi.org/10.1073/pnas.0820808 doi: 10.1073/pnas.0820808
    [10] Bookstaber R (2017) Agent-Based Models for Financial Crises. Ann Rev Financ Econ 9: 85–100. https://doi.org/10.1146/annurev-financial-110716-032556 doi: 10.1146/annurev-financial-110716-032556
    [11] Buchanan M (2009) Meltdown modelling: could agent-based computer models prevent another financial crisis? Nature 460: 680–683.
    [12] Delli Gatti D, Gaffeo E, Gallegati M (2010) Complex agent-based macroeconomics: a manifesto for a new paradigm. J Econ Interact Coor 5: 111–135. https://doi.org/10.1007/s11403-010-0064-8 doi: 10.1007/s11403-010-0064-8
    [13] Ehrenberg AS (1988) Repeat-Buying: Facts, Theory and Applications. Oxford University Press.
    [14] Epstein JM (2006) Generative social science: Studies in agent-based computational modeling. Princeton University Press.
    [15] Farmer JD, Foley D (2009) The economy needs agent–based modelling. Nature 460: 685–686. https://doi.org/10.1038/460685a doi: 10.1038/460685a
    [16] Gary JG, Irwin PL, Goutam C, et al. (1991) Consumer evaluation of multi-product bundles: An information integration analysis. Market Lett 2: 47–57. https://doi.org/10.1007/BF00435195 doi: 10.1007/BF00435195
    [17] Grazzini J, Richiardi MG (2015) Estimation of ergodic agent-based models by simulated minimum distance. J Econ Dyn Control 51: 148–165. https://doi.org/10.1016/j.jedc.2014.10.006 doi: 10.1016/j.jedc.2014.10.006
    [18] Hamill L, Gilbert N (2015) Agent–based modelling in economics. John Wiley & Sons.
    [19] Kahneman D (2003) Maps of Bounded Rationality: Psychology for Behavioral Economics. Am Econ Rev 93: 1449–1475.
    [20] Kaplow L (2008) Optimal policy with heterogeneous preferences. BE J Econ Anal Policy 8. https://doi.org/10.2202/1935-1682.1947 doi: 10.2202/1935-1682.1947
    [21] Leijonhufvud A (1996) Towards a not-too-rational macroeconomics, In: Colander, D. (Ed.), Beyond Microfoundations: Post Walrasian Macroeconomics. Cambridge University Press, Cambridge, MA., 39–55.
    [22] Macal CM, North MJ (2005) Tutorial on agent-based modeling and simulation. Proceedings of the Winter Simulation Conference 14, IEEE.
    [23] Maravilha D, Silva S, Laranjeira E (2022) Consumer's behavior determinants after the electricity market liberalization: the Portuguese case. Green Financ 4: 436–449. https://doi.org/10.3934/GF.2022021 doi: 10.3934/GF.2022021
    [24] Mandel A, Landini S, Gallegati M, et al. (2015) Price dynamics, financial fragility and aggregate volatility. J Econ Dyn Control 51: 257–277. https://doi.org/10.1016/j.jedc.2014.11.001 doi: 10.1016/j.jedc.2014.11.001
    [25] Manout O, Ciari F (2021) Assessing the Role of Daily Activities and Mobility in the Spread of COVID-19 in Montreal with an Agent-Based Approach. Front Built Environment 7: 2021. https://doi.org/10.3389/fbuil.2021.654279 doi: 10.3389/fbuil.2021.654279
    [26] McFadden D (1974) Conditional Logit Analysis of Qualitative Choice Behavior. Front Econometrics Zarembka (Ed.) 105–142. New York: Academic Press.
    [27] Mullainathan S, Spiess J (2017) Machine Learning: An Applied Econometric Approach. J Econ Perspect 31: 87–106. https://doi.org/10.1257/jep.31.2.87 doi: 10.1257/jep.31.2.87
    [28] Peña G (2020) A new trading algorithm with financial applications. Quant Financ Econ 4: 596–607. https://doi.org/10.3934/QFE.2020027 doi: 10.3934/QFE.2020027
    [29] Predelus W, Amine S (2022) The insolvency choice during an economic crisis: the case of Canada. Quant Financ Econ 6: 658–668. https://doi.org/10.3934/QFE.2022029 doi: 10.3934/QFE.2022029
    [30] Reisch LA, Zhao M (2017) Behavioural economics, consumer behaviour and consumer policy: state of the art. Behav Public Policy 1: 190–206. https://doi.org/10.1017/bpp.2017.1 doi: 10.1017/bpp.2017.1
    [31] Seetharaman PB (2004) Modeling multiple sources of state dependence in random utility models: A distributed lag approach. Market Sci 23: 263–271. https://doi.org/10.1287/mksc.1030.0024 doi: 10.1287/mksc.1030.0024
    [32] Simon HA (1955) A Behavioral Model of Rational Choice. Q J Econ 69: 99–118.
    [33] Simudyne (2023) A Complete Guide to Agent-Based Modeling for Financial Services. Simudyne 2023.
    [34] Tesfatsion L (2006) Agent-Based Computational Economics: A Constructive Approach to Economic Theory. Handb Computat Econ 2: 831–880. https://doi.org/10.1016/S1574-0021(05)02016-2 doi: 10.1016/S1574-0021(05)02016-2
    [35] Tesfatsion L, Judd KL (2006) Handbook of computational economics: agent-based computational economics. Elsevier.
    [36] Thaler RH (2015) Misbehaving: The Making of Behavioral Economics. W. W. Norton & Company.
    [37] Thurner S, Farmer JD, Geanakoplos J (2012) Leverage causes fat tails and clustered volatility. Quant Financ 12: 695–707.
    [38] Tsiatsios GA, Kollias I, Melas E, et al. (2022) Some first results from an agent–based model of consumer demand. submitted.
    [39] Tsiatsios GA, Leventides I, Melas E, et al. (2023a) A bounded rational agent-based model of consumer choice. submitted.
    [40] Tsiatsios GA, Leventides I, Toudas K, et al. (2023b) An agent–based study on the dynamic distribution and firms concentration in a closed economy. submitted.
    [41] Turrell A (2016) Agent-based models: Understanding the economy from the bottom up. Bank England Quart Bull 2016.
    [42] Varian HR (2014) Intermediate Microeconomics: A Modern Approach. W. W. Norton & Company.
    [43] Zhong Li Z (2020) Impact of economic policy uncertainty shocks on China's financial conditions. Financ Res Lett 35: 101303. https://doi.org/10.1016/j.frl.2019.101303 doi: 10.1016/j.frl.2019.101303
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)