Research article

Modeling the implications of nitric oxide dynamics on information transmission: An automata networks approach

  • Received: 15 August 2023 Revised: 17 October 2023 Accepted: 23 October 2023 Published: 06 November 2023
  • MSC : 92B20, 92B25, 92C20

  • Nitric oxide (NO) is already recognized as an important signaling molecule in the brain. It diffuses easily and the nervous cell's membrane is permeable to NO. The information transmission is three-dimensional, which is different from synaptic transmission. NO operates in two different ways: Close and specific at the synapses of neurons, and as a volumetric transmitter sending signals to various targets, regardless of their anatomy, connectivity or function, when multiple nearby sources act simultaneously. These modes of operation seem to be the basis by which NO is involved in many central mechanisms of the brain, such as learning, memory formation, brain development and synaptogenesis. This work focuses on the effect of NO dynamics on the environment through which it diffuses, using automata networks. We study their implications in the formation of complex functional structures in the volume transmission (VT), which are necessary for the synchronous functional recruitment of neuronal populations. We qualitatively and quantitatively analyze the proposed model regarding these characteristics through the concepts of entropy and mutual information. The proposed deterministic model allows the incorporation of fuzzy dynamics. With that, a generalized model based on fuzzy automata networks can be provided. This allows the generation and diffusion processes of NO to be arbitrarily produced and maintained over time. This model can accommodate arbitrary processes in decision-making mechanisms and can be part of a complete formal VT framework in the brain and artificial neural networks.

    Citation: Pablo Fernández-López, Patricio García Báez, Ylermi Cabrera-León, Aleš Procházka, Carmen Paz Suárez-Araujo. Modeling the implications of nitric oxide dynamics on information transmission: An automata networks approach[J]. AIMS Mathematics, 2023, 8(12): 30142-30181. doi: 10.3934/math.20231541




    In the early 1980s, Robert F. Furchgott and John V. Zawadzki, while analyzing the contractions and relaxations of vascular rings in response to certain pharmacological substances (adrenaline for constriction and acetylcholine for dilation), observed that acetylcholine did not produce vasodilation when the inner layer of cells in blood vessels was removed [1]. This relaxation or vasodilation, dependent on the vascular endothelium, was due to a substance released in response to acetylcholine that, when it reached the smooth muscles of the vascular wall, produced vasodilation. This unknown substance was called the endothelium-derived relaxing factor (EDRF); it was highly unstable (with a half-life of about 3 seconds under normal physiological conditions), and the vasodilation it produced was a consequence of activating an enzyme localized in the vascular wall called guanylate cyclase [2].

    Identifying EDRF with nitric oxide (NO) was a major milestone in vascular physiology and pathophysiology research. Its discovery was the product of parallel work from different research groups that reached similar results [3,4,5,6], and it was at the end of the 1980s when researchers David S. Bredt, Solomon H. Snyder and Salvador Moncada began to relate NO to the nervous system in its role as a neurotransmitter [7,8,9].

    NO is an unstable molecule, a gaseous, fat-soluble free radical capable of crossing cell membranes without the aid of specific transporters or mechanisms. It is small (1.15 Å = $1.15 \times 10^{-4}$ μm), spreads rapidly and has a half-life of between 3 and 5 seconds. It is produced from the amino acid L-arginine, and its production is regulated by the enzyme nitric oxide synthase (NOS).

    The two classes of NOS that exist, constitutive (cNOS) and inducible (iNOS), are distributed in very different and varied areas. Thus, the endothelial isoform (eNOS) of cNOS is located in the nerve fibres that surround blood vessels and endothelial cells, and the neuronal isoform (nNOS) is located in neurons of the central nervous system (CNS) and the peripheral nervous system (PNS). On the other hand, iNOS appears in glial cells and is only expressed in the event of brain injury or neurodegenerative diseases [10]. Based on the above, we observed that there are multiple neurons capable of synthesising NO, which is the subject of our modeling and simulation study. This NO acts in the CNS as an atypical neurotransmitter because it can be released by any part of the cell membrane without the need for presynaptic or postsynaptic structures, storage vesicles or transporter proteins.

    The dynamics of NO comprise three fundamental processes (generation, diffusion, and self-regulation/recombination) [11] and begin with its generation, a process that occurs within the chemical communication itself and requires the existence of the NOS enzyme and a Ca$^{2+}$ flux. This process causes a rapid and transitory release of moderate amounts of NO that diffuse in all directions. The diffusion of NO is governed by the gradient of its own concentration, and its targets are those cells that, being within its reach, have the ability to recombine with it, for example through the enzyme soluble guanylate cyclase (GCs) [12,13]. The maximum range of NO influence through diffusion can reach 300 μm, encompassing approximately $2 \times 10^6$ synapses [14], and as this diffusion takes place, a process of self-regulation and recombination with different substrates occurs [15], with no reuptake processes. The generated NO contributes to inhibiting the activity of NOS by a negative feedback mechanism, leaving the area in a refractory period during which NO generation does not occur again.

    The presence of a molecule such as NO in the nervous system opens up new perspectives in the study of its functioning, due to its various functions in both the PNS and CNS [11], identifying it as a possible element underlying and supporting the high capacity for adaptation, flexibility and prediction that the brain possesses. Confirming the latter requires understanding how NO functions as a signaling molecule in the brain, and the absence of decisive experimental data on this function has driven the development of various models. Thus, Tadeusz Malinski et al. [16] directly applied the Fick equation to model the behavior of NO through isotropic and homogeneous diffusion. Lancaster J. R. [17] associated the dynamics of NO with a random walk, intrinsically related to the parabolic diffusion equation, and introduced the compartment concept to model NO. Wood J. and Garthwaite J. [18] based their model on an analytical solution of the diffusion equation and studied the physiological sphere of action and influence of an isolated source of NO, also analyzing the self-regulation process of NO. Vaughn M. W. et al. [19] characterized the dynamics of NO by means of a model composed of three regions with differentiated dynamics, following the anatomical structure of the vascular endothelium (luminal, endothelial and adluminal). Philippides A. et al. [20] modeled the diffusion of NO from an irregular structure using an analytical solution of the diffusion equation and introduced a global morphology for a generation process in which the generation of NO takes place in a spherical section. Suárez Araujo C. P. et al. [21,22,23] developed several models to study and determine the dynamics of NO from different approaches: An analytical model in a continuum based on the Fick equation that allowed working with cylindrical morphologies, and a compartmental model that, using phenomenological transport equations and compartmental analysis as the underlying formal mechanism, allowed considering different morphologies and characteristics of NO dynamics, such as the nonisotropy of the medium and the nonhomogeneous character of the basic processes of NO dynamics (generation and diffusion of NO and reaction of NO with other molecules).

    Our model focuses on the dynamics of NO as a signaling molecule in VT, mainly on how NO transmits information using the extracellular environment in its three modes of operation: In the single synapse, synaptic spillover and Volume Transmission (VT) [24]. We do not rule out extending the scope of this model in the future, to incorporate the dynamics of NO recombination with other substances and other biochemical networks, including models that allow us to work from the perspective of the micro kinetics of NO.

    In this field, metabolic models [25,26] represent a very important tool for our understanding of biological systems, as they are more closely connected with cell behavior and can adequately represent the effects of NO dynamics on target cells, which span different time scales and are frequently not recordable electrophysiologically [24].

    Although there is a large body of work related to the dynamics of NO as a mediator of CNS functions, the manner in which NO does this is not well understood, because its actions in target cells are mediated by metabotropic receptors whose effects have different time scales and are often not electrophysiologically detectable. On the other hand, we seek to know how a coherent and interpretable information transfer can be encoded by this signaling system in the brain, particularly because the message itself is not channelled to any specific target but is free to diffuse from its source in all three dimensions.

    The main motivation of our work is to improve the knowledge and understanding of the implications of NO in VT and of its effects on the formation of the complex functional structures on which the VT performed by NO is supported, which are necessary for the synchronous functional recruitment of neuronal populations.

    In this work, we model the effect of NO dynamics on its environment, as well as its implications in the formation of the complex functional structures on which the VT performed by NO is supported, which are necessary for the synchronous functional recruitment of neuronal populations. Therefore, we propose a model based on the different changes that occur in the environment where this dynamic takes place. We use automata networks, which provide an alternative to classic and traditional models based on continuous dynamical systems (ordinary and partial differential equations). The first model based on automata networks is deterministic and presents a finite-time behavior determined by an initial configuration of compartments that are in a state of NO generation.

    This deterministic model is extendable and can incorporate complexity into its processes. One of these approaches allows the generation processes to occur arbitrarily in the compartments and be kept random over time. In modeling, fuzzy dynamics are considered in the behavior of the automata that are associated with these compartments. Thus, we obtain a model of a fuzzy automata network for the diffusion of NO.

    We organize this work into two major sections, in addition to the introduction and the conclusions. Section 2 presents the formal tools and conceptualization of the automata networks (deterministic and fuzzy) used to model the NO dynamics. The results of the model, an analysis and study of deterministic convergence and an analysis of its dynamics are detailed in Section 3. We end this work with a section that summarizes the conclusions and identifies the methodological axis of our future work. The list of abbreviations and symbols is given in Table A1.1 (Annex 1. List of abbreviations and symbols).

    The method of our work is based on modeling using the discrete mathematical structure of an automata network. In general, an automata network (AN) can be defined as a set of locally interconnected automata that evolve in discrete time steps through mutual interactions between them [27]. From a mathematical point of view, ANs are discrete dynamic systems.

    Before going into the formal details of AN (in their two forms: deterministic automata networks and fuzzy automata networks), we define the basic components that compose them: Deterministic automata (DA) and fuzzy automata (FA).

    Deterministic Automata (DA). A DA is a mathematical structure of states and transitions. The states of a DA represent the configurations in which a modeled real system can be found, with one or more states designated as starting states that represent the initial configuration of said real system.

    Transitions in a DA are associated with changes in the configuration of the real system due to an action that can be internal or external. The former represent internal computing steps (τ) and are not visible in the DA environment. An external action is visible in the DA environment and is used to interact with it.

    Formally, a DA, denoted by A, is made up of the following four components:

    (1) A set of states $S_A$.

    (2) A nonempty set of starting states $I_A \subseteq S_A$.

    (3) Two sets of actions, $V_A$ and $W_A$, external (interacting with the DA environment) and internal, respectively, such that $V_A \cap W_A = \emptyset$, where $Act_A = V_A \cup W_A$ defines the full set of actions associated with DA transitions.

    (4) A transition relation $\Delta_A \subseteq S_A \times Act_A \times S_A$, which defines the DA transitions.

    Thus, we say that $s \xrightarrow{a} s'$ if $(s, a, s') \in \Delta_A$, meaning that action $a$ is enabled in state $s$ and that there is, therefore, a transition labelled $a$ from $s$ towards another state $s'$ of the DA.

    The DA constitute a mathematical framework for the specification and analysis of real systems, but there are various formal extensions of them that collect aspects such as concurrency, asynchronous events, probabilistic configuration changes and environments with uncertainty. One of these extensions is probabilistic automata (PA), which has been formalized and developed by various authors [27,28,29,30,31].

    An additional extension to the above is fuzzy automata (FA), which are based on the concepts associated with fuzzy sets [32] and have been formalized and characterized by various authors [33,34,35,36,37,38,39]. DA and PA are particular cases of FA [34].

    Fuzzy automata (FA). A FA is a DA in which the final existence of the transitions between one state and the subsequent set of states depends on a fuzzy affiliation function that defines the degrees of transition. Therefore, in a FA, the transition from a state $s$ to a state $s'$ can take place if the value of the fuzzy affiliation function, given an input $e$, meets a certain established existence criterion. Thus, a transition in a FA relates a state and an action to a fuzzy interval $[0,1]$.

    In this context, we define a fuzzy affiliation function (FFA) that characterizes a set $A$, which we also call a fuzzy set (FS): for every $x \in X$, we can calculate a value $f_A(x) \in [0,1]$ whose meaning is the degree of membership, or affiliation, that the point $x$ has to the set $A$, where $f_A(x) = 0$ corresponds to $x \notin A$ and $f_A(x) = 1$ to $x \in A$. Although the above can be interpreted as the simplest way to define the membership of $x \in X$ to the FS $A$, there are other alternatives in which said membership is defined based on two parameters $\alpha$ and $\beta$, with $\alpha, \beta \in [0,1]$ and $\alpha > \beta$. In this case, membership is defined according to the following three cases: 1) $x \in A$ if $f_A(x) \geq \alpha$, 2) $x \notin A$ if $f_A(x) \leq \beta$, and 3) the membership of the point $x$ to the FS $A$ is indeterminate when $\beta < f_A(x) < \alpha$.
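    As a minimal illustration of the three-case criterion above, the following sketch (ours, not the authors' code) decides membership from an affiliation function and the two thresholds; the function name and the example threshold values are assumptions.

```python
# Sketch of the alpha/beta membership criterion for a fuzzy set A.
# f_A is the fuzzy affiliation function; alpha and beta are illustrative values.
def membership(f_A, x, alpha=0.7, beta=0.3):
    """Return 'member', 'non-member' or 'indeterminate' for the point x."""
    value = f_A(x)
    if value >= alpha:
        return "member"         # case 1: x belongs to A
    if value <= beta:
        return "non-member"     # case 2: x does not belong to A
    return "indeterminate"      # case 3: beta < f_A(x) < alpha
```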

    Formally, we define an FA as an algebraic structure A, formed by the following five components:

    (1) A nonempty set of input states $E_A$.

    (2) A nonempty set of internal states $S_A$.

    (3) A nonempty set of output states $V_A$.

    (4) A first FFA, $f_A: S_A \times E_A \times S_A \to [0,1]$, called the direct fuzzy transition function, where $f_A(s, e_j, s')$ defines the degree of transition from the state $s$ to the state $s'$ when the input $e_j$ is received.

    (5) A second FFA, $g_A: V_A \times E_A \times S_A \to [0,1]$, called the direct fuzzy output function, where $g_A(v_i, e_j, s)$ defines the degree to which the automaton produces the output $v_i$ when it is in the state $s$ and receives the input $e_j$.

    Based on the previous definition, we say that the transition $s \xrightarrow{e_j,\ f_A(s, e_j, s'),\ g_A(v_i, e_j, s)} s'$ occurs if $f_A(s, e_j, s')$ and $g_A(v_i, e_j, s)$ meet the membership criteria on the FSs $S_A \times E_A \times S_A$ and $V_A \times E_A \times S_A$.

    The generalization of the FA behavior for an input sequence $E_k$ of arbitrary length $k$ can be defined by applying the definition of FFA composition [32].

    In this case, $f_A(s, E_k, s_{k-1}) = f_A(s, e_0, s_1;\, s_1, e_1, s_2;\, \ldots;\, s_{k-2}, e_{k-1}, s_{k-1}) = \max_{\{s_1, s_2, \ldots, s_{k-2}\}} \min\left[f_A(s, e_0, s_1), f_A(s_1, e_1, s_2), \ldots, f_A(s_{k-2}, e_{k-1}, s_{k-1})\right]$, obtaining in this case the degree of transition from the state $s$ to the state $s_{k-1}$ when the sequence of inputs defined by $E_k$ is received. The same process must also be applied with the FFA $g_A$.
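    The max-min composition above can be written as a small sketch (our own, not the authors' code); the dictionary encoding of $f_A$ and the function name are assumptions.

```python
from itertools import product

def sequence_transition_degree(f_A, states, s_start, inputs, s_final):
    """Degree of reaching s_final from s_start under the input sequence (max-min composition).

    f_A is a dict mapping (state, input, next_state) to a degree in [0, 1];
    missing entries are taken as degree 0.
    """
    best = 0.0
    # Maximize over every choice of intermediate states between s_start and s_final.
    for mids in product(states, repeat=len(inputs) - 1):
        path = (s_start,) + mids + (s_final,)
        # Minimize over the degrees of the one-step transitions along the path.
        degree = min(f_A.get((path[i], inputs[i], path[i + 1]), 0.0)
                     for i in range(len(inputs)))
        best = max(best, degree)
    return best
```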

    Automata Networks. Generalizing the above definitions associated with a DA and FA, an AN can be described as a function defined between two spaces, $F: E \times S^n \to S^n$, where $E$ and $S$ are finite spaces called the spaces of inputs and states. The AN is then formed by $n$ interconnected automata, where the connection structure is defined by $F$ in the following way: automaton $i$ receives a connection from $j$ if $F_i$ depends on variable $j$, where $F_i$ corresponds to component $i$ of $F$.

    The way in which automata are interconnected in the AN is given by their structure or topology of connections.

    A state of the network is a vector $x$ in $S^n$. The dynamics of the network are then defined as the rule that transforms said vector $x$ of $S^n$ into a vector $y$ of $S^n$ when a given input $e$ is received. The rule of parallel iteration is defined by $y = F(x, e)$ and can be interpreted as follows: In each time step, each automaton calculates its next state by means of the function $F_i$ applied to the current state $x$ of the network. The AN iteration rule can present different modes of iteration: Parallel (the order in which each $F_i$ is calculated is irrelevant) and sequential (the order in which each $F_i$ is calculated does matter).
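    A minimal sketch of the two iteration modes (our own illustration; the function names are assumptions):

```python
def parallel_step(F_components, x, e):
    """Parallel iteration y = F(x, e): every automaton reads the same current state x."""
    return [F_i(x, e) for F_i in F_components]   # evaluation order is irrelevant

def sequential_step(F_components, x, e, order):
    """Sequential iteration: later automata in 'order' already see earlier updates."""
    y = list(x)
    for i in order:
        y[i] = F_components[i](y, e)
    return y
```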

    As soon as the dynamics, or temporal evolutions, of the network are defined, the problem of its asymptotic behavior arises. Since the space of states $S$, on which the AN has been defined, is finite, all trajectories of the AN are eventually periodic, i.e., they end in cycles or fixed points.

    A fuzzy automata network (FAN) is made up of n interconnected FA, where the connection structure is defined by the inputs that each FA receives.

    Once the formal tools that we use in our modelling have been presented, we will define a series of concepts associated with NO dynamics and their effects on the diffusion environment.

    Compartment. The concept of a compartment determines, through a mathematical construct, the minimal unit of study in which a complete NO dynamic, as defined by the automaton [22,23], takes place, and which forms part of the neuronal substrate. From a biochemical point of view, it could be identified as a chemical computing environment capable of producing the different processes involved in NO dynamics: NO generation, NO reception and NO self-regulation [11], as shown in Figure 1(a). In direct correspondence with each of these processes, and to identify the set of possible states in which said compartment may exist, it is established that such processes may be in a state of activation or operation or in a state of nonactivation.

    Figure 1.  Identification of the possible states involved in NO dynamics (a) and of previous states caused in the neuronal/biological substrate (b). The states that are modeled in our model are inscribed with dotted lines and the states that are modeled in other models [16,17,18,19] and [21] are inscribed with dashed lines.

    We can then match the situation in which the NO generation process is active in the compartment with that in which certain chemical machinery is active in the biological substrate that converges in the generation of NO. Likewise, we can say that when NO reception is active in a compartment, a combination of NO and its possible receptor molecules is occurring in the biological substrate associated with the compartment, and thus some functional and/or metabolic change is being activated with said NO reception, as shown in Figure 1(b).

    We treat the self-regulation process independently, knowing that, at the biological level, NO is depleted as it diffuses and combines with different substrates and that certain levels of NO condition its own generation. We can see in the definition of the set of states and transition function that the state of self-regulation is always later than that of generation and that when a compartment is in that state, there is no NO. We can also identify how, at the level of the neuronal substrate, we will have a state of transmission, which is a direct consequence of the state of diffusion of NO that occurs in its dynamics. In this state, the compartment receives NO from the environment and generates NO that will be received in other compartments, Figure 1(b).

    Neighborhood and Scope. Within our set of compartments, whose structural and functional definition is established in the following two sections, we can define the following relationship:

    Relationship belonging to a neighborhood. Given two compartments $C_i$ and $C_j$, we say that $C_i$ belongs to the neighborhood of $C_j$ if and only if $C_i$ is influenced by the state of $C_j$. In this case, we say that $C_j$ creates the neighborhood $\Pi_{C_j}$ and that $C_i \in \Pi_{C_j}$.

    The above relationship allows us to define the neighborhoods of each compartment, and these are the ones that inherently define the scope of the NO that is generated in each compartment to later diffuse. In Figure 2, we can see how compartment $C_i$ simultaneously belongs to the neighborhoods $\Pi_{C_j}$, $\Pi_{C_k}$ and $\Pi_{C_p}$. This means that the dynamics of compartment $C_i$ are influenced by the states in which compartments $C_j$, $C_k$ and $C_p$ are found. Likewise, compartment $C_i$ will create a neighborhood $\Pi_{C_i}$ to which compartments $C_j$, $C_k$ and $C_p$ may belong.

    Figure 2.  Scheme of compartments that create the neighborhoods $\Pi_{C_j}$, $\Pi_{C_k}$ and $\Pi_{C_p}$. Compartment $C_i$ belongs to these neighborhoods.

    Based on the previous definitions, we identify the following states in which the automaton can be found:

    (1) Nonactivity state, (n). A compartment is in a nonactive state when the concentration of NO, or its variation, is negligible.

    (2) Receiving state, (r). A compartment in a receiving state is receiving NO. This NO is generated or transmitted by any of the compartments that are creating the neighborhoods to which it belongs.

    (3) Transmission state, (t). Two different situations can cause a compartment to be in a transmitting state.

    – The first of these situations occurs when NO generation takes place in a compartment. If (as we will see later) there is a compartment in its neighborhood in a receiving state, our automaton would go to a state of NO generation. However, if NO generation also occurs in any of the compartments to whose neighborhood the compartment in question belongs, then the state associated with this situation is that of transmission.

    – The second of the situations presented corresponds to the opposite sequence. A compartment may be receiving NO and begin a process of NO generation in it, which takes it to the transmission state if there is a compartment in its neighborhood with the capacity to receive NO.

    The state of transmission, in the definition of our automaton, has been named this way because it intrinsically captures the existence of a relationship between the NO that is received and the NO that is generated. It is not known whether the reception of NO in a given region of nervous tissue has any implication for subsequent generation processes in that same region; we understand that this relationship may occur in higher-level processes. Likewise, it is not known what the reception of NO in a region of nervous tissue may imply when a process of NO generation is already under way in that region; our hypothesis is that it could accelerate self-regulation. Therefore, this name identifies only one state of our automaton, which is reached by the two situations previously explained and which, from the point of view of information transmission, is relevant to capture independently.

    Figure 3 shows a sequence of states with which to better understand the transmission state. A small section of an automata network is shown where compartments $C_j$ and $C_p$ have been highlighted along with the neighborhoods $\Pi_{C_j}$ and $\Pi_{C_p}$ that they form and, thus, the scope that the NO generated in compartments $C_j$ and $C_p$ can have. In Figure 3(a), we can see the initial state of the compartments and their associated neighborhoods. Figure 3(b) shows how the activation of a NO generation process in compartment $C_j$ initiates a receiving state in all the compartments that belong to its neighborhood, including $C_p$ itself. Figure 3(c) shows how a generation process is activated in compartment $C_p$, thus initiating a transmission state (in the same way, compartment $C_j$ also goes to a transmission state), without any relationship existing between the generation that has occurred and the NO that it received previously. Figure 3(d) shows how the generation of NO in $C_j$ ends, and thus $C_j$ goes to a state of self-regulation and $C_p$ to one of generation.

    Figure 3.  Sequences of events in the AN that bring compartments $C_j$ and $C_p$ to the transmission state. (a) Initial state of the compartments and their associated neighbourhoods, (b) activation of a NO generation process in compartment $C_j$, (c) a generation process is activated in compartment $C_p$ and (d) the generation of NO in $C_j$ has finished, and thus $C_j$ goes to a state of self-regulation and $C_p$ passes to one of generation.

    (4) NO generation state, (g). A compartment is in this state when NO generation takes place in it. Through this process of NO generation, the compartment influences those compartments that belong to its neighborhood.

    (5) State of self-regulation, (a). The state of self-regulation is subsequent in sequence to the state of NO generation. During this state, there are no dynamics related to NO. It is important to bear in mind that, both in the self-regulation state and in the receiving state, NO disappears. The self-regulation of NO is a process by which the generated NO disappears. It is currently argued that this disappearance takes place in proportion to the amount of NO that exists in the environment, although some models assume other types of dependence [19]. The automaton model that we present here does not handle amounts of NO, and self-regulation is introduced as a forced transition to a state that follows generation.

    We begin the specification of the transition function of the automaton by defining the jump behavior between each of the possible states in which said automaton can be found.

    Two versions of the transition function are proposed, as shown in Table 1 (version 0 and version 1). Both versions reach a finite and deterministic behavior regardless of the initial configuration, and only the NO generation and nonactivity states are possible in these initial configurations. These versions, although they present a complex behavior, should be seen as an intermediate step towards the final definition of our fuzzy transition function, which will require stochastic behaviors to reflect NO generation and self-regulation more realistically.

    Table 1.  A deterministic transition function that defines the basic jump conditions.
    [Version 0] Transition condition | $\phi^k_{C_i} \to \phi^{k+1}_{C_i}$
    $r_0 \equiv \exists(C_j: C_i \in \Pi_{C_j})\,(\phi^k_{C_j}=g \lor \phi^k_{C_j}=t)$ | n → r,  r → r
    $\bar{r}_0$ | n → n,  r → n
    $r_1 \equiv \exists(C_j: C_i \in \Pi_{C_j})\,(\phi^k_{C_j}=g \lor \phi^k_{C_j}=t) \land \exists(C_h: C_h \in \Pi_{C_i})\,(\phi^k_{C_h}=n \lor \phi^k_{C_h}=r)$ | g → t,  t → t
    $\bar{r}_1$ | t → a
    $r_2 \equiv \exists(C_j: C_i \in \Pi_{C_j})\,(\phi^k_{C_j}=g \lor \phi^k_{C_j}=t) \land \exists(C_h: C_h \in \Pi_{C_i})\,(\phi^k_{C_h}=g \lor \phi^k_{C_h}=t)$ | g → a
    $\bar{r}_1 \land \bar{r}_2$ | g → g
    $\varepsilon$ | a → n
    [Version 1] Transition condition | $\phi^k_{C_i} \to \phi^{k+1}_{C_i}$
    $s_0 \equiv \exists(C_j: C_i \in \Pi_{C_j})\,(\phi^k_{C_j}=g \lor \phi^k_{C_j}=t) \land \forall(C_h: C_h \in \Pi_{C_i})\,(\phi^k_{C_h} \neq r)$ | n → r,  r → r
    $\bar{s}_0$ | r → n
    $s_1 \equiv \exists(C_j: C_i \in \Pi_{C_j})\,(\phi^k_{C_j}=g \lor \phi^k_{C_j}=t) \land \exists(C_h: C_h \in \Pi_{C_i})\,(\phi^k_{C_h}=r)$ | n → g
    $\bar{s}_0 \land \bar{s}_1$ | n → n
    $s_2 \equiv \exists(C_j: C_i \in \Pi_{C_j})\,(\phi^k_{C_j}=g) \land \exists(C_h: C_h \in \Pi_{C_i})\,(\phi^k_{C_h}=n \lor \phi^k_{C_h}=r)$ | g → t
    $\bar{s}_2$ | g → a
    $s_3 \equiv \exists(C_j: C_i \in \Pi_{C_j})\,(\phi^k_{C_j}=t) \land \exists(C_h: C_h \in \Pi_{C_i})\,(\phi^k_{C_h}=n \lor \phi^k_{C_h}=r)$ | t → r
    $\bar{s}_3$ | t → a
    $\varepsilon$ | a → n


    Figure 4 shows the state diagrams of both versions of the transition function, which cause the environment to be divided into three different basic dynamics: The dynamics of nonactivity or receiving NO, the dynamics of NO generation and the dynamics of NO transmission. The possibility of passing between the previous dynamics is the main difference between the two versions that have been defined as the transition function.

    Figure 4.  Scheme of rules between states of the deterministic automaton network that define the transition function: (a) Version 0 of the DAN and (b) version 1 of the DAN.

    As seen in the definition of the transition functions in Table 1, in version 0 of our automata network (rules r0, r1 and r2), whether a compartment is in the dynamic of nonactivity or NO reception, or in the dynamics of generation or transmission of NO, will depend mainly on the initial configuration, since in no rule of its transition function is it allowed that a compartment initiating the receiving dynamics can pass to the dynamics of generation or transmission. We can also see that the reverse transition is allowed in this version of the automaton. A compartment can start in a generation dynamic and later move to a receiving dynamic directly or by a temporary transition through the transmission dynamic.

    Version 1 of our automata network (rules s0, s1, s2 and s3) completes the previous scenario, allowing the transition between the dynamics of nonactivity or reception, originating in an initial configuration, to generation and transmission dynamics.

    Therefore, our two versions of the transition function have 7 rules (defined algebraically in Table 1) to control the previous behavior. Below is a detailed textual definition of these rules:

    Version 0:

    Rule r0: Represents a situation in which, being a compartment without NO dynamics, it begins to receive NO. The automaton associated with said compartment will change from a state of nonactivity (n) to a state of reception (r). This occurs when any of the generating automatons of the neighborhoods to which the associated automaton belongs are in a generation (g) or transmission (t) state. The automaton will continue in this state of reception (r) as long as the previous situation is maintained. When this situation ceases to occur, the automaton returns to a state of nonactivity (n).

    The rule r0 controls the dynamics of those zones in which NO is not generated and only NO is received. The areas where NO is both generated and received are controlled by Rules r1 and r2.

    Rule r1: Rule r1 controls passing from NO generation dynamics to transmission dynamics, where there is NO generation and reception.

    Therefore, we are faced with a situation where some of the generating automatons of the neighborhoods to which the associated automaton belongs are in a generation (g) or transmission (t) state, and some of the automatons of their own neighborhood are found in a state of reception (r) or nonactivity (n).

    When the above occurs, the automaton goes from a generation state (g), which it reached as a result of its initial configuration, to a transmission state (t), remaining in said transmission state (t) as long as said rule is met.

    Rule r2: This rule identifies a situation after an NO generation dynamic, which in our model is associated with an NO self-regulation state. There is a period of nonactivity in the compartment, which is necessary after the dynamics of NO generation and transmission.

    With the automaton in a generation state (g), we go to a self-regulation state (a) when the automaton is being influenced by states of generation (g) or transmission (t) from the automatons that have it in their neighborhoods, and in the neighborhood of the automaton itself, there are also automatons in a state of generation (g) or transmission (t).

    When the conditions of Rules r1 and r2 are not met, the automaton must continue in a state of generation (g).

    Version 1:

    Rule s0: Represents a situation in which, being a compartment without NO dynamics, it begins to receive NO. Therefore, the automaton will change from a state of nonactivity (n) to a state of reception (r) and remain in this state as long as it continues to receive NO. This occurs when any of the generating automatons of the neighborhoods to which the associated automaton belongs are in the generation (g) or transmission (t) state, and none of the automatons that belong to its neighborhood are already in a reception state (r). When this situation ceases to occur, the automaton returns to a state of nonactivity (n).

    Rule s1: This rule identifies the conditions that must be produced to cause a change between the dynamics of nonactivity or NO reception and the dynamics of NO generation or transmission.

    With the automaton in a state of nonactivity (n), we go to a state of generation (g) when the automaton is being influenced by states of generation (g) or transmission (t) from the automatons that have it in its neighborhood, and there are automata from its own neighborhood in a state of reception (r), a situation in which there is a functional demand for NO.

    Rule s2: Like Rule r1 of version 0 in the transition function, this rule controls a transition from the dynamics of only NO generation to the dynamics where there is generation and reception (transmission).

    We are faced with a situation where if any of the generating automatons of the neighborhoods to which the automaton belongs are in a generation state (g) and some of the automatons from its own neighborhood are in a reception state (r) or nonactivity state (n), the automaton goes to the transmission state (t).

    If the above condition is not met, the automaton goes to a state of self-regulation (a).

    Rule s3: This rule is directly associated with a condition where the compartment should leave the NO transmission state (t).

    If any of the generating automatons of the neighborhoods to which the automaton belongs are in a state of transmission (t), and any of the automatons from its own neighborhood are in a state of reception (r) or nonactivity (n), the automaton goes from a transmitting state (t) to a receiving state (r).

    If the above condition is not met, the automaton goes to a state of self-regulation (a).
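    To make the above rules concrete, the following sketch (ours, not the authors' code) implements one possible reading of rules s0-s3 for a 1D ring of compartments. We assume that, on a symmetric lattice, "the neighborhoods to which $C_i$ belongs" and "its own neighborhood $\Pi_{C_i}$" both reduce to the two adjacent cells; the published Table 1 may encode further details, so this is an illustrative reading rather than a reproduction of the paper's transition tables.

```python
import random

# States: nonactivity, reception, transmission, generation, self-regulation.
STATES = ("n", "r", "t", "g", "a")

def next_state_v1(s, nbrs):
    """Next state of an automaton in state s whose neighbors are in states nbrs
    (one possible reading of rules s0-s3)."""
    has_gt = any(x in ("g", "t") for x in nbrs)   # influenced by generation/transmission
    has_g = any(x == "g" for x in nbrs)
    has_t = any(x == "t" for x in nbrs)
    has_r = any(x == "r" for x in nbrs)
    has_nr = any(x in ("n", "r") for x in nbrs)   # a neighbor able to receive NO
    if s == "n":
        if has_gt and has_r:
            return "g"                  # s1: functional demand for NO
        return "r" if has_gt else "n"   # s0, otherwise stay nonactive
    if s == "r":
        return "r" if (has_gt and not has_r) else "n"   # s0, otherwise back to nonactivity
    if s == "g":
        return "t" if (has_g and has_nr) else "a"        # s2, otherwise self-regulation
    if s == "t":
        return "r" if (has_t and has_nr) else "a"        # s3, otherwise self-regulation
    return "n"                                            # epsilon: a -> n

def step(phi):
    """Synchronous update of a ring of automata with radius-1 neighborhoods."""
    size = len(phi)
    return [next_state_v1(phi[i], [phi[(i - 1) % size], phi[(i + 1) % size]])
            for i in range(size)]

# Random initial configuration: each compartment starts in g or n with equal probability.
config = [random.choice(("g", "n")) for _ in range(32)]
for _ in range(20):
    config = step(config)
```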

    NO dynamics present a wide spectrum of particularities, such as 1) nonlocalized generation, 2) the existence of a maximum range of NO beyond which it has no influence, 3) the existence of areas where NO recombines with other substances and thereby influences other mechanisms of cellular communication and, therefore, higher-order processes such as neural plasticity and learning, and 4) the existence of self-regulation of the substance, among others. In our work, we focus on how NO forms complex structures as its dynamics develop, structures on which the possible transmission of information is built and which are necessary to provoke the synchronous functional recruitment of the involved neural populations.

    This section presents the results of analyzing the models based on the DANs made explicit in the previous sections. This allows us to determine what type of behavior they develop throughout the generations.

    We focus on the 1D versions of our automaton networks for modeling NO; these studies remain valid for automata networks of other dimensionalities [40]. Likewise, these results are later extrapolated to their 2D versions.

    Figure 5 shows different 1D evolutions associated with the two versions of our automaton network; Figure 5(a), (b) for version 0 and Figure 5(c), (d) for version 1. In this case, networks with 32 automata are used. The initial configurations have been established randomly, following the premise that each automaton can be either in a generation state (g) or a nonactivity state (n) with equal probability, which implies that in these initial configurations approximately 50% of the automata are in a generation state (g). In these figures, it is observed how, in all cases, separate structures of a stable or periodic type are generated.

    Figure 5.  1D evolutions of DANs for modeling the dynamics and effect of NO. (a) and (b) for version 0, (c) and (d) for version 1. Yellow: Transmission state (t), white: State of self-regulation (a), red: Generation state (g), blue: Nonactivity state (n) and green: Receiving state (r).

    In version 0 of the DAN, the structures are supported by the different convergence cycles of each automaton, and we can quantify their appearance based on the level of NO generation contained in the initial configurations, as well as on the range of the neighborhood. In Figure 6, we can observe the number of times that the convergence cycle composed only of the generation state (g) appears, for different ranges of the neighborhood: Range 1 for a neighborhood with three cells of the lattice (the two neighbors and the cell itself), range 2 for 5 cells and range 3 for 7 cells. This figure shows us how the occurrence of this convergence cycle varies as a function of the probability of generation in the initial configuration and the scope of the neighborhood. It is observed that there is an interval in which said convergence cycle disappears completely, which can be interpreted as a control mechanism of the NO level.

    Figure 6.  Histogram of the generation state (g) convergence cycle in version 0 of the automata network for different ranges of scope of the neighborhood.

    Annex 2 (identification and analysis of the different convergence cycles of the automata networks for modeling NO) compiles this quantification for all the possible sequences of these convergence cycles of said version 0 for our automaton network.

    As we have already indicated, in version 0 of the transition function for the automaton network, the generation state (g) of a compartment can only occur in the initial configuration of the automaton. This is the main difference between the two versions of the developed automaton network, since in version 1 the process of generating NO is not limited to the initial configuration.

    As seen in the regular expressions that define the sequences of states that can be produced,

    $n\{n \mid r\{r\}n \mid gan \mid gtan \mid gtr\{r\}n\}, \quad gan\{n \mid r\{r\}n \mid gan \mid gtan \mid gtr\{r\}n\}$. (3.1)

    Both expressions are mutually self-contained, and therefore the automaton can begin in a state of nonactivity (n) and later go to a state of generation (g), depending on whether the rules that define its transition function are met. This makes a detailed analysis of all the possible sequences, or convergence cycles, presented by version 1 of our automaton network unfeasible.

    Once this first visual analysis of both versions of our automaton network has been completed, a qualitative analysis is carried out, taking as a cataloguing reference the classification established by Stephen Wolfram [40]. In this classification, there are four different types of dynamics for automata networks, which, depending on their variation in space and time from a random initial configuration, are classified as follows:

    Class Ⅰ. The evolution of the automaton network converges to a homogeneous state, without spatial or temporal structures of any kind.

    Class Ⅱ. The evolution of the automaton network tends to separate structures of a stable or periodic type.

    Class Ⅲ. The evolution of the automaton network presents chaotic patterns. Fractal structures emerge spatially, and cycles of very long length are observed.

    Class Ⅳ. The evolution of the automaton network generates localized complex structures, which spread and whose duration increases exponentially with the size of the network.

    The first three classes correspond qualitatively to the three types of behaviors observed in continuous systems (attractors, periodic/quasiperiodic and chaotic).

    From the qualitative analysis, the temporal evolutions of both versions of our automaton network are determined to be Class Ⅱ.

    The quantitative analysis is driven by the work of Chris G. Langton [41]. First, the set $D_K^N$ of all the possible transition functions follows the algebraic structure $\Delta: \Sigma^N \to \Sigma$, where $K$ corresponds to the number of states of the automaton and $N$ to the number of neighbors involved in said transition function, including the automaton for which the transition function calculates the new state.

    Let $s_q$ be a state that we identify as the quiescent state. Once this quiescent state is identified, we can identify the $n$ transitions that lead to that state. We are then interested in quantifying the remaining $(K^N - n)$ transitions. The degree of heterogeneity in the behavior of the automaton will depend on the way in which said $(K^N - n)$ transitions are defined. For this purpose, the parameter $\lambda$ is established according to expression (3.2) [41]:

    $\lambda = (K^N - n)/K^N$. (3.2)

    If $n = K^N$, all transitions have $s_q$ as the final state, the behavior of the automaton is completely homogeneous (all initial configurations end in $s_q$) and $\lambda = 0$. On the other hand, if $n = 0$, no transition has $s_q$ as the final state, and then $\lambda = 1$. The most heterogeneous behavior occurs when all state transitions (including those with $s_q$ as the final state) are equally represented, which happens when $n = K^{N-1}$ and $\lambda = 1 - 1/K$. Based on the definition of the parameter $\lambda$, we perform its calculation for the two versions of the automaton network when we work in a 1D environment.

    As seen in Tables 2 and 3, for each possible state $s_i(t)$ and the states of its neighbors $s_{i-1}(t)$ and $s_{i+1}(t)$, the state $s_i(t+1)$ is made to correspond according to the rules that define our transitions, as shown in Table 1. In this case, we are working with a value of $K = 5$, corresponding directly with the states: Nonactivity (n), generation (g), transmission (t), reception (r) and self-regulation (a). The definition of the neighborhood of the automaton and the fact that we are working in a 1D environment give us a value of $N = 3$.

    Table 2.  Evolution of the automaton for a 1D environment (Version 0), row: $s_i(t)$, column: $s_{i-1}(t)\,s_{i+1}(t)$.
    nn ng nt nr na gn gg gt gr ga tn tg tt tr ta rn rg rt rr ra an ag at ar aa
    n n r r n n r r r r r r r r r r n r r n n n r r n n
    g g t t g g t a a t g t a a t g g t t g g g g g g g
    t a t t a a t a a t a t a a t a a t t a a a a a a a
    r n n n n n n r r n n n r r n n n n n n n n n n n n
    a n r r n n r r r r r r r r r r n r r n n n r r n n

    Table 3.  Evolution of the automaton for a 1D environment (Version 1), row: $s_i(t)$, column: $s_{i-1}(t)\,s_{i+1}(t)$.
    nn ng nt nr na gn gg gt gr ga tn tg tt tr ta rn rg rt rr ra an ag at ar aa
    n n r r g n r r r n r r r r n r g n n g g n r r g n
    g a t a a a t a a t a a a a a a a t a a a a a a a a
    t a a r a a a a a a a r a a r a a a r a a a a a a a
    r n r r n n r r r n r r r r n r n n n n n n r r n n
    a n n n n n n n n n n n n n n n n n n n n n n n n n


    Based on the logical rules defined in Table 1, all possible transitions are detailed in Tables 2 and 3 as a preliminary step for calculating the parameter $\lambda$. The expression of the transition functions, according to these tables, is read as follows: If $[s_{i-1}(t)\,s_{i+1}(t) = nr] \wedge [s_i(t) = t]$, then $s_i(t+1) = a$, following, in this case, the transition indicated in Table 3.

    The above allows us to calculate the value of $\lambda$ for a certain quiescent state $s_q$ by direct application of the expression $(K^N - \text{number of occurrences of } s_q)/K^N$, obtaining the values shown in Table 4 for both versions of our automata network.
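    As a sketch of this calculation (ours, not the authors' code), the transition tables can be flattened into a dictionary mapping each neighborhood configuration $(s_{i-1}(t), s_i(t), s_{i+1}(t))$ to the next state, after which $\lambda$ follows directly from Eq (3.2):

```python
from itertools import product

def langton_lambda(table, states, quiescent, N=3):
    """lambda = (K^N - n) / K^N, where n counts the transitions whose result is s_q.

    'table' maps each N-tuple of neighborhood states to the next state of the cell.
    """
    K = len(states)
    n_q = sum(1 for neigh in product(states, repeat=N) if table[neigh] == quiescent)
    return (K ** N - n_q) / K ** N
```

    With $K = 5$ states and $N = 3$, running this over the entries of Tables 2 and 3 with $s_q$ = nonactivity (n) should yield values of the kind reported in Table 4.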

    Table 4.  Values of λ for both versions of the automata network.
    $s_q$  $\lambda_{v0}$  $\lambda_{v1}$
    n 0.656 0.632
    g 0.896 0.96
    t 0.872 0.968
    r 0.744 0.776
    a 0.832 0.664


    Taking as $s_q$ the state of nonactivity, we have $\lambda_{v0} = 0.656$ and $\lambda_{v1} = 0.632$, as shown in Table 4. These values allow us to locate the transition functions of our automata in the set of all possible functions, $D_K^N$. It is important to note that when any of the other states is selected as the state $s_q$, the values of $\lambda_{v0}$ and $\lambda_{v1}$ change.

    To analyze the emergence of complexity produced by the dynamics of our automata networks, we will use the probabilistic approach offered by entropy as a basic measure of self-information. For a discrete process T of K possible states, this is defined according to Eq (3.3).

    $H(T) = -\sum_{i=1}^{K} p_i \log(p_i)$, (3.3)

    where $p_i$ corresponds to the probability of state $i$ occurring in process $T$.

    On the other hand, to quantify the degree of cooperation that may exist in our DANs, on which the synchronous functional recruitment feature is supported, we measure the level of correlation that exists between the events (state changes) that occur in the automata. For this, the concept of mutual information $I(T_n, T_m)$ between two automata $n$ and $m$, in which the discrete processes $T_n$ and $T_m$ occur, is used. This magnitude is defined as a function of the individual entropies $H(T_n)$ and $H(T_m)$ of the two automata and the entropy of the two automata considered as a joint process, $H(T_{n,m})$.

    Therefore, the mutual information is given by the following expression:

    $I(T_n, T_m) = H(T_n) + H(T_m) - H(T_{n,m})$. (3.4)

    This measure depends directly on the correlation of process $T_n$ with the state of process $T_m$. Thus, high values of the average of $I(T_n, T_m)$ will imply a high cooperation between automata $n$ and $m$. In contrast, functional independence of the state changes between automata will yield low values of this measure.
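    A minimal sketch (ours) of how Eqs (3.3) and (3.4) can be estimated from the recorded state sequences of two automata, using relative frequencies as the probabilities $p_i$; averaging these quantities over automata, pairs and executions to obtain $\bar{H}$ and $\bar{I}$ is then straightforward, although the exact averaging scheme is an assumption on our part.

```python
import math
from collections import Counter

def entropy(sequence):
    """H(T) estimated from a sequence of states, Eq (3.3), with relative frequencies as p_i."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def mutual_information(seq_n, seq_m):
    """I(T_n, T_m) = H(T_n) + H(T_m) - H(T_{n,m}), Eq (3.4)."""
    joint = list(zip(seq_n, seq_m))   # the two automata viewed as a joint process
    return entropy(seq_n) + entropy(seq_m) - entropy(joint)
```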

    By virtue of the defined magnitudes, our automata network for NO modeling adequately incorporates the characteristic of complex structure formation if it has an intermediate value of the average general entropy $\bar{H}$, understanding that complexity lies between the order of a system, where $\bar{H} \approx 0$, and total disorder, where $\bar{H}$ presents its highest values. On the other hand, having a high synchronous functional recruitment between the various dynamics of NO implies that we must have high values of the average general mutual information $\bar{I}$.

    It should be taken into consideration that the previous magnitudes depend on the trajectories followed by the automata network, which in turn depend on the initial configuration, and that the latter has a random character. Therefore, in our quantitative study, average values of these magnitudes are calculated over a set of executions in which the form of the initial configuration changes for each execution, while the proportions of automata in an initial state of generation (g) or nonactivity (n) are kept constant.

    In the same way, all the calculated variables that depend on the trajectories followed by the automata network are organized along the $\lambda$ axis associated with the set $D_K^N$, which allows us to locate our automata networks within said set for comparison.

    Figure 7 shows the values of the general entropy $\bar{H}$ and the general mutual information $\bar{I}$ for a subset of $D_K^N$ along the dimension $\lambda$ in the range between 0 and $1 - 1/K$. This interval goes from the values of $\lambda$ associated with the most homogeneous automata networks, $\lambda = 0$, to the most heterogeneous, $\lambda = 1 - 1/K$.

    Figure 7.  Values of the average entropy (a) and of the average mutual information (b) associated with version 1 of the automata network and with the rest of the automata networks of $D_K^N$ obtained with $s_q$ = nonactivity (n), as a function of the parameter $\lambda$.

    Figure 7(a) shows how the value of $\bar{H}$ in version 1 of our automaton is approximately $\bar{H} = 1.25$, which is approximately half the value ($\bar{H} \approx 2.1$) that the automata networks of $D_K^N$ have for that same $\lambda$. This fact indicates that version 1 of our automata network forms complex structures in its dynamics.

    Figure 7(b) shows the suitability of version 1 of our automata network from the perspective of synchronous functional recruitment, since this network has a value of $\bar{I} = 0.8$, which practically doubles the average value of the rest of the automata networks of $D_K^N$.

    In both comparisons shown in Figure 7, for the selection of the rest of the automata networks that make up the subset of $D_K^N$, the "table-walk-through" procedure has been carried out [41], taking version 1 of the automaton network as a seed and modifying its transition function stochastically to obtain automaton networks with different $\lambda$ within the study interval indicated above, with the state $s_q$ equal to that of nonactivity (n).
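    A sketch of one such table-walk-through step, in the spirit of [41] (ours; the number of entries changed per step and the replacement policy are arbitrary choices):

```python
import random

def walk_table(table, states, quiescent, direction, n_changes=10):
    """Return a perturbed copy of a transition table, moving lambda down or up."""
    new_table = dict(table)
    for key in random.sample(list(new_table), n_changes):
        if direction == "down":
            new_table[key] = quiescent                                    # more transitions into s_q
        else:
            new_table[key] = random.choice([s for s in states if s != quiescent])
    return new_table
```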

    In Figure 8, we have the same comparison of the values of $\bar{H}$ and $\bar{I}$, but in this case, the subset of $D_K^N$ has been generated with the "table-walk-through" process, using version 1 as the seed automaton network and the state $s_q$ = self-regulation (a). The main difference appears if we compare the levels of $\bar{H}$ (Figures 7(a) and 8(a)): when we work with $s_q$ = self-regulation (a), the lowest values of the generated networks are close to that of our automata network, implying that the level of complex structure formation of our automata network is not a differentiating feature with respect to the rest.

    Figure 8.  Values of the average entropy (a) and of the average mutual information (b) associated with version 1 of the automata network and with the rest of the automata networks of $D_K^N$ obtained with $s_q$ = self-regulation (a), as a function of the parameter $\lambda$.

    Figure 9 shows the relationship of $\bar{H}$ and $\bar{I}$ with the temporal evolution of the automaton networks of $D_K^N$ for different values of $\lambda$. This figure shows the evolution of 6 automata networks made up of 32 automata in a cyclical lattice, where their transition functions have been obtained taking both versions of our automata network for modeling NO as seeds and where $s_q$ corresponds to the state of nonactivity (n). Figure 9(a)-(c) corresponds to version 0, and Figure 9(d)-(f) corresponds to version 1. It can be seen in these figures how the temporal evolution of the states of these networks presents structures of a certain complexity and fractal tendency, and they are characterized by different values, not only of the parameter $\lambda$ but also of $\bar{H}$ and $\bar{I}$.

    Figure 9.  Evolution of different automata networks belonging to $D_K^N$ for different values of $\lambda$. (a), (b) and (c) correspond to version 0 of the automata network with $\lambda$ values of 0.304, 0.504 and 0.704, respectively, and (d), (e) and (f) correspond to version 1 of the automata network with $\lambda$ values of 0.304, 0.504 and 0.704, respectively. Yellow: Transmission state (t), white: State of self-regulation (a), red: Generation state (g), blue: Nonactivity state (n) and green: Receiving state (r).

    The behavior observed in the analyzed DANs is also present in larger automaton networks, as we can see in Annex 3 (detail of the evolution of the automata networks for modeling NO with 128 automata, versions 0 and 1).

    In the quantitative analysis developed, it has also been verified how dependent the values of $\bar{H}$ and $\bar{I}$ are on the initial configurations and on the level of NO generation they contain. Figure 10 shows the evolution of $\bar{H}$ and $\bar{I}$ when we vary the percentage of automata that are in a generation state (g) in the initial configuration.

    Figure 10.  Values of the average entropy (a), and of the average mutual information (b), associated with version 1 of the automata network and for different percentages of the level of NO generation in the initial configuration.

    In this figure, it can be seen that the average values of $\bar{H}$ show a slight increase (from a value of $\bar{H} \approx 1.2$ up to a value of $\bar{H} \approx 1.3$) as we increase the level of NO generation present in the initial configurations from 10% to 70%, becoming unstable for percentages greater than the latter. For the case of $\bar{I}$, we see that its variation, despite being upwards, is practically negligible, staying around $\bar{I} \approx 0.8$ and behaving in the same way as $\bar{H}$ once the NO generation percentage exceeds 70%.

    On the other hand, it seems that the storage of information by any system (be it discrete or continuous) implies a low entropy, and the transmission of information implies an increase in it [42]. Likewise, a high mutual information supposes a high correlation between the automata.

    The final analysis of version 1 of our automata network examines where it lies in the plane defined by both magnitudes, entropy versus mutual information, always in comparison with the rest of the automata networks of DKN.

    Figure 11 shows that version 1 of our network is located in the region of the plane corresponding to a medium entropy level, and consequently to a high degree of complex structure formation. At the same time, its level of mutual information is high compared with all the automata networks defined by DKN. This allows us to argue that version 1 of the automata network also presents high levels of synchronous functional recruitment.

    Figure 11.  Relationship between ¯H and ¯I for various subsets of DKN (blue) and different initial configurations. This relationship is also shown for version 1 of the automata network (red), all for certain values of the parameter λ.

    A more exhaustive analysis of how version 1 of our automata network behaves in relation to the values of ¯H and ¯I for all possible initial configurations is shown in Figure 12, where the different colors indicate the level of NO generation present in the initial configurations. Only a minimal number of initial configurations take our automata network out of the region given by ¯H ≈ 0.3 and ¯I ≈ 0.8, which we identify as suitable for an adequate level of complex structure formation and synchronous functional recruitment. The figure also shows that the initial configuration producing the maximum values of ¯H and ¯I is composed of the sequence of states nnggnngg… nngg, where n and g correspond to the nonactivity and generation states respectively; its evolution is shown in Figure 13.
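
    This maximizing configuration can be reproduced directly; the sketch below, with an illustrative lattice size of 32 automata, simply tiles the nngg motif and reports the corresponding NO generation percentage.

        def periodic_configuration(n_automata=32, motif=("n", "n", "g", "g")):
            # Tile the nngg... motif over the lattice and report the NO generation percentage.
            reps = -(-n_automata // len(motif))        # ceiling division
            config = (list(motif) * reps)[:n_automata]
            return config, 100.0 * config.count("g") / n_automata

        config, generation_pct = periodic_configuration()
        print(generation_pct)                          # 50.0: half of the automata start in state g

    Such a configuration places half of the automata in the generation state, a level that lies within the 10–70% range discussed for Figure 10.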

    Figure 12.  Relationship between ¯H and ¯I for version 1 of our automaton network and all the possible initial configurations. The values of the NO generation percentage that these initial configurations have are also shown.
    Figure 13.  Evolution of version 1 of our automaton network when it has an initial configuration that causes maximum values in ¯H and ¯I.

    We extend our study to 2D DANs, where results parallel to those achieved with 1D DANs can be observed. Figure 14(a)–(e) shows the complex structure formation produced by the dynamics associated with each of the states in which the automata may be, for a network of 10,000 automata arranged in a 100×100 lattice in which each automaton has 5 neighbors, in correspondence with a Moore-type neighborhood. In these figures, the formation of a set of structures can be seen when the automata network is in the 75th generation.
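
    As an illustration of how the neighborhood states can be gathered on such a lattice with cyclic (toroidal) boundaries, the following sketch uses NumPy; the offset stencil shown (the cell plus its four orthogonal neighbors) is an assumption for illustration and can be replaced by any other stencil, including one that adds the diagonal cells.

        import numpy as np

        def neighborhood_states(lattice, offsets=((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1))):
            # For every cell of a 2D toroidal lattice, stack the states of the cells reached
            # by each (row, column) offset; the result has one neighborhood tuple per cell.
            lattice = np.asarray(lattice)
            return np.stack([np.roll(lattice, shift=off, axis=(0, 1)) for off in offsets], axis=-1)

        rng = np.random.default_rng(0)
        lattice = rng.choice(np.array(list("nrtga")), size=(100, 100))   # random initial 100x100 lattice
        neigh = neighborhood_states(lattice)                             # shape (100, 100, 5)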

    Figure 14.  Complex structure formation in a network of 2D automata (version 1) made up of 10,000 automata arranged in a 100×100 lattice, when the automata network is in the 75th generation. (a) nonactivity state, (b) generation state, (c) reception state, (d) transmission state and (e) self-regulation state.

    Figure 15 shows the convergence of the same automata network, with version 1 of the transition function, toward the final complex structures as the number of generations advances sufficiently. The edges of the structures show a changing, cyclical behavior.
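
    These convergence cycles can be measured with a simple detector over the sequence of configurations; the sketch below assumes each configuration is stored as a hashable tuple of states (it works equally for the state sequence of a single automaton), and a stable structure appears as a cycle of length 1.

        def convergence_cycle(history):
            # Return the period of the first repeated configuration (1 for a fixed point,
            # k for a k-cycle), or None if no repetition occurs within the history.
            seen = {}
            for t, config in enumerate(map(tuple, history)):
                if config in seen:
                    return t - seen[config]
                seen[config] = t
            return None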

    Figure 15.  Convergence towards a complex structure in a 2D environment of the NO generation dynamics, using version 1 of the automata network. (a) the automata network is in the 50th generation, (b) in the 100th generation, (c) in the 200th generation, (d) in the 300th generation, (e) in the 400th generation and (f) in the 500th generation.

    The implementation of the different versions of the automaton network was developed in Python 3.8.13 (www.python.org) using the TensorFlow 2.3.0 libraries (www.tensorflow.org). The source code used in all the experiments in this article is available for download on GitHub (https://github.com/pablo-fernandez-lopez/NANetwork_NO).
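
    Independently of that repository, a minimal sketch of a synchronous update step for a 1D cyclic DAN driven by a transition table is shown below; the function names, the dictionary-based table representation and the default neighborhood radius are illustrative assumptions rather than a description of our implementation.

        def step(config, table, radius=1):
            # One synchronous update of a 1D cyclic automata network: each automaton looks up
            # the tuple formed by its neighborhood (left ... self ... right) in the table.
            n = len(config)
            return [table[tuple(config[(i + d) % n] for d in range(-radius, radius + 1))]
                    for i in range(n)]

        def evolve(config, table, generations, radius=1):
            # Return the whole evolution as a list of configurations (the initial one included).
            history = [list(config)]
            for _ in range(generations):
                history.append(step(history[-1], table, radius))
            return history

    The history returned by such a routine can be fed directly to entropy and mutual information estimators like the average_measures sketch given earlier.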

    In this work, a model based on deterministic automata networks is proposed for modeling the effect that NO dynamics exert on the environment through which it diffuses, in its role as a molecule capable of performing VT. We formalize its transition function by logically extrapolating the mechanisms associated with the diffusion dynamics of NO as a neuroactive substance. The resulting model adequately captures the characteristics of complex structure formation and synchronous functional recruitment.

    To achieve this, we have built two versions of the transition function (version 0 and version 1) that segment the environment into three basic dynamics: the dynamics of nonactivity or NO reception, the dynamics of NO generation and the dynamics of NO transmission. The possibility of passing from the dynamics of nonactivity or NO reception to the other dynamics is the main difference between version 1 and version 0. Both versions of the transition function defined and analyzed in this work use a deterministic NO generation process: in version 0, NO generation occurs only in the initial configuration of the automata, whereas in version 1 it occurs in the initial configuration or when a specific logical rule of the transition function, responding to a functional requirement for NO, is satisfied.

    Both versions give rise to separate structures of a stable or periodic type in all the sequences of states through which the automata network passes for each initial configuration. These structures are supported by the different convergence cycles that each automaton develops, whose occurrence depends on the level of NO generation in the initial configuration as well as on the range of the neighborhood. From a qualitative perspective, the two versions of our automata network are classified as class Ⅱ according to Wolfram's classification [40].

    The quantitative analysis of version 1 of our automata network, compared with the rest of the possible automata networks generated and organized according to the heterogeneity parameter λ defined by Langton [41], yields adequate values of entropy and mutual information (¯H ≈ 1.3 and ¯I ≈ 0.8). This gives the network an adequate predisposition for the complex structure formation and synchronous functional recruitment necessary to model VT and to study its implications in higher mechanisms and processes of the brain, such as learning and memory formation.

    Working with version 1 of our automata network in 2D environments, we observe that the NO dynamics produce areas of isolation and segmentation of the environment with respect to the characteristics of complex structure formation and synchronous functional recruitment. These complex structures show a zonal convergence as the number of generations advances sufficiently, with the edges of the structures presenting a changing, cyclical behavior.

    Finally, we propose the first model that is discrete in all its variables and capable of working with different NO dynamics, and of analyzing the implications of VT in more complex architectures and in aspects related to learning and memory formation.

    We consider the DAN model proposed in this work, in its two versions, to be a first step, and we have identified the need to extend it with stochastic conditions so that the NO generation state can be induced by higher brain mechanisms or processes. To carry out this generalization, which will also yield a model able to accommodate arbitrary processes in decision-making mechanisms, we propose the use of fuzzy automata networks. This model will form part of a complete formal framework of volumetric transmission in the brain and in artificial neural networks, and therefore in complex decision-making systems.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The research presented in this paper has been funded by the Project "Investigación en Computación Neuronal por el Grupo de Investigación CIPERBIG (Research in Neural Computation by the CIPERBIG Research Group) (ULPGC)"; No: 23/2021, from Cabildo de Gran Canaria.

    We are thankful to the "Council of First Vice-presidency and Public Works, Infrastructures, Transport and Mobility of the Cabildo de Gran Canaria".

    The authors declare no conflicts of interest.

    Table A1.1.  List of abbreviations and symbols.
    Abbreviation/symbol: Description
    NO: Nitric oxide
    VT: Volumetric transmission
    EDRF: Endothelium-derived relaxing factor
    NOS: Nitric oxide synthase
    cNOS: Constitutive nitric oxide synthase
    eNOS: Endothelial isoform nitric oxide synthase
    iNOS: Inducible nitric oxide synthase
    nNOS: Neuronal isoform nitric oxide synthase
    PNS: Peripheral nervous system
    CNS: Central nervous system
    GCs: Soluble guanylate cyclase
    AN: Automata network
    DA: Deterministic automata
    FA: Fuzzy automata
    τ: Internal computing steps
    SA: Set of states (internal in fuzzy automata)
    IA: Set of starting states
    ActA: Set of external actions
    WA: Set of internal actions
    PA: Probabilistic automata
    FA: Fuzzy set
    FFA, fA, gA: Fuzzy function affiliation
    EA: Set of input states (in fuzzy automata)
    VA: Set of output states (in fuzzy automata)
    DAN: Deterministic automata network
    FAN: Fuzzy automata network
    Ci: Compartment i
    ΠCi: Neighborhood of Ci
    n: State of nonactivity
    r: State of receiving
    t: State of transmission
    g: State of NO generation
    a: State of self-regulation
    λ: Langton parameter
    DKN: Set of all possible transition functions
    sq: Quiescent state
    I: Mutual information
    ¯I: Average general mutual information
    H: Individual entropy
    ¯H: Average of the general entropy


    Version 0 of the automata network:

    Figure A2.1.  Histogram of the generation state (g) convergence cycle in version 0 of the automata network for different ranges of neighborhood scope.
    Figure A2.2.  Histogram of the convergence cycle of the generation (g), self-regulation (a) and nonactivity (n) states in version 0 of the automata network for different ranges of neighborhood scope.
    Figure A2.3.  Histogram of the convergence cycle of the generation (g), self-regulation (a), nonactivity (n) and reception (r) states in version 0 of the automata network for different ranges of neighborhood scope.
    Figure A2.4.  Histogram of the convergence cycle of the generation (g) and transmission (t) states in version 0 of the automata network for different ranges of neighborhood scope.
    Figure A2.5.  Histogram of the convergence cycle of the generation (g), transmission (t), self-regulation (a) and nonactivity (n) states in version 0 of the automata network for different ranges of neighborhood scope. The absence of a histogram profile for a specific range means that, when this range is used, this convergence cycle does not occur.
    Figure A2.6.  Histogram of the nonactivity state (n) convergence cycle in version 0 of the automata network for different ranges of neighborhood scope.
    Figure A2.7.  Histogram of the convergence cycle of the nonactivity (n) and reception (r) states in version 0 of the automata network for different ranges of neighborhood scope.
    Figure A2.8.  Histogram of the convergence cycle of the nonactivity (n) and reception (r) states in version 0 of the automata network for different ranges of neighborhood scope. The absence of a histogram profile for a specific range means that, when this range is used, this convergence cycle does not occur.
    Figure A3.1.  Evolution of an automata network for an approximate λ=0.1, calculated according to the "table-walk-through" procedure [41], taking as seed the version 0 (a), version 1 (b), of the automata network for the modeling of NO.
    Figure A3.2.  Evolution of an automata network for an approximate λ=0.2, calculated according to the "table-walk-through" procedure [41], taking as seed the version 0 (a), version 1 (b), of the automata network for the modeling of NO.
    Figure A3.3.  Evolution of an automata network for an approximate λ=0.3, calculated according to the "table-walk-through" procedure [41], taking as seed the version 0 (a), version 1 (b), of the automata network for the modeling of NO.
    Figure A3.4.  Evolution of an automata network for an approximate λ=0.4, calculated according to the "table-walk-through" procedure [41], taking as seed the version 0 (a), version 1 (b), of the automata network for the modeling of NO.
    Figure A3.5.  Evolution of an automata network for an approximate λ=0.5, calculated according to the "table-walk-through" procedure [41], taking as seed the version 0 (a), version 1 (b), of the automata network for the modeling of NO.
    Figure A3.6.  Evolution of an automata network for an approximate λ=0.6, calculated according to the "table-walk-through" procedure [41], taking as seed the version 0 (a), version 1 (b), of the automata network for the modeling of NO.
    Figure A3.7.  Evolution of an automata network for an approximate λ=0.7, calculated according to the "table-walk-through" procedure [41], taking as seed the version 0 (a), version 1 (b), of the automata network for the modeling of NO.
    Figure A3.8.  Evolution of an automata network for an approximate λ=0.8, calculated according to the "table-walk-through" procedure [41], taking as seed the version 0 (a), version 1 (b), of the automata network for the modeling of NO.
    Figure A3.9.  Evolution of version 0 (a), and version 1 (b), of the automata network for NO modelling.


    [1] R. F. Furchgott, J. V. Zawadzki, The obligatory role of endothelial cells in the relaxation of arterial smooth muscle by ACH, Nature, 288 (1980), 372–376. https://doi.org/10.1038/288373a0 doi: 10.1038/288373a0
    [2] R. F. Furchgott, J. V. Zawadzki, P. D. Cherry, Role of endothelium in the vasodilator response to acetylcholine, Vasodilatation, Edited by P. M. Vanhoutte and I. Leusen, New York: Raven Press, 1981, 49–66.
    [3] L. J. Ignarro, G. M. Buga, K. S. Wood, R. E. Byrns, G. Chaudhuri, Endothelium-derived relaxing factor produced and released from artery and vein is nitric oxide, P. Natl. Acad. Sci. USA, 84 (1987), 9265–9269. https://doi.org/10.1073/pnas.84.24.9265 doi: 10.1073/pnas.84.24.9265
    [4] S. Moncada, R. M. J. Palmer, R. J. Gryglewski, Mechanism of action of some inhibitors of endothelium-derived relaxing factor, P. Natl. Acad. Sci. USA, 83 (1986), 9164–9168. https://doi.org/10.1073/pnas.83.23.9164 doi: 10.1073/pnas.83.23.9164
    [5] R. M. J. Palmer, A. G. Ferrige, S. Moncada, Nitric oxide release accounts for the biological activity of endothelium-derived relaxing factor, Nature, 327 (1987), 524–526. https://doi.org/10.1038/327524a0 doi: 10.1038/327524a0
    [6] M. T. Khan, R. F. Furchgott, Additional evidence that endothelium-derived relaxing factor is nitric oxide, Pharmacology, Edited by M. J. Rand and C. Raper, Amsterdam: Elsevier, 1987,341–344.
    [7] D. S. Bredt, S. H. Snyder, Nitric oxide mediates glutamate-linked enhancement of cGMP levels in the cerebellum, P. Natl. Acad. Sci. USA, 86 (1989), 9030–9033. https://doi.org/10.1073/pnas.86.22.9030 doi: 10.1073/pnas.86.22.9030
    [8] D. S. Bredt, P. M. Hwang, S. H. Snyder, Localization of nitric oxide synthase indicating a neural role for nitric oxide, Nature, 347 (1990), 765–770. https://doi.org/10.1038/347768a0 doi: 10.1038/347768a0
    [9] J. Garthwaite, G. Garthwaite, R. M. Palmer, S. Moncada, NMDA receptor activation induces nitric oxide synthesis from arginine in rat brain slices, Eur. J. Pharmacol., 172 (1989), 413–416. https://doi.org/10.1016/0922-4106(89)90023-0 doi: 10.1016/0922-4106(89)90023-0
    [10] E. T. Cuevas, M. C. Lara, G. M. Lorenzana, Aspectos sobre las funciones del óxido nítrico como mensajero celular en el sistema nervioso central, Salud Ment., 26 (2003), 42–50.
    [11] P. Fernández-López, P. García-Báez, Y. Cabrera-León, J. L. Navarro-Mesa, C. P. Suárez-Araujo, Volume signaling and neural-indexing by nitric oxide in artificial neural networks, IEEE Access, 10 (2022), 82246–82258. https://doi.org/10.1109/ACCESS.2022.3196672 doi: 10.1109/ACCESS.2022.3196672
    [12] O. V. Bohlen, R. Dermietzel, Neurotransmitters and neuromodulators: Handbook of receptors and biological effects, John Wiley & Sons, 2002.
    [13] J. Garthwaite, C. L. Boulton, Nitric oxide signaling in the central nervous system, Annu. Rev. Physiol., 57 (1995), 683–706. https://doi.org/10.1146/annurev.ph.57.030195.003343 doi: 10.1146/annurev.ph.57.030195.003343
    [14] J. Herrmann, L. Lerman, A. Lerman, Simply say yes to NO? Nitric oxide (NO) sensor-based assessment of coronary endothelial function, Eur. Heart J., 31 (2010), 2834–2836. https://doi.org/10.1093/eurheartj/ehq279 doi: 10.1093/eurheartj/ehq279
    [15] S. R. Vincent, Nitric oxide in the nervous system, Edited by S. R. Vincent, Academic Press, 2013.
    [16] T. Malinski, Z. Taha, S. Grunfeld, S. Patton, M. Kapturczak, P. Tomboulian, Diffusion of nitric oxide in the aorta wall monitored in situ by porphyrinic microsensors, Biochem. Bioph. Res. Co., 193 (1993), 1076–1082. https://doi.org/10.1006/bbrc.1993.1735 doi: 10.1006/bbrc.1993.1735
    [17] J. R. Lancaster, Simulation of the diffusion and reaction of endogenously produced nitric oxide, P. Natl. Acad. Sci. USA, 91 (1994), 8137–8141. https://doi.org/10.1073/pnas.91.17.8137 doi: 10.1073/pnas.91.17.8137
    [18] J. Wood, J. Garthwaite, Models of the diffusional spread of nitric oxide: Implications for neural nitric oxide signalling and its pharmacological properties, Neuropharmacology, 33 (1994), 1235–1244. https://doi.org/10.1016/0028-3908(94)90022-1 doi: 10.1016/0028-3908(94)90022-1
    [19] M. W. Vaughn, L. Kuo, J. C. Liao, Effective diffusion distance of nitric oxide in the microcirculation, Am. J. Physiol-Heart C., 274 (1998), 1705–1714. https://doi.org/10.1152/ajpheart.1998.274.5.H1705 doi: 10.1152/ajpheart.1998.274.5.H1705
    [20] A. Philippides, P. Husbands, M. O'Shea, Four-dimensional neuronal signaling by nitric oxide: A computational analysis, J. Neurosci., 20 (2000), 1199–1207. https://doi.org/10.1523/JNEUROSCI.20-03-01199.2000 doi: 10.1523/JNEUROSCI.20-03-01199.2000
    [21] C. P. Suárez-Araujo, P. Fernández-López, P. García-Báez, Towards a model of volume transmission in biological and artificial neural networks: A CAST approach, Lecture Notes in Computer Science, Edited by R. M. Díaz, B. Buchberger, and J. L. Freire, Computer Aided Systems Theory, EUROCAST 2001, Berlin: Springer, 2178 (2001), 328–342.
    [22] C. P. Suárez-Araujo, P. Fernández-López, P. García-Báez, J. L. S. Fonseca, A model of nitric oxide diffusion based in compartmental systems, Int. J. Comput. Anticipatory Syst., 18 (2006), 172–186.
    [23] C. P. Suárez-Araujo, P. Fernández-López, P. García-Báez, Nitric oxide diffusion attributes in biological and artificial environments: A computational study, Majlesi J. Elect. Eng., 5 (2011), 73–82.
    [24] J. Garthwaite, Nitric oxide as a multimodal brain transmitter, Brain Neurosci. Adv., 2 (2018). https://doi.org/10.1177/2398212818810683 doi: 10.1177/2398212818810683
    [25] M. Kastelic, D. Kopač, U. Novak, B. Likozar, Dynamic metabolic network modeling of mammalian Chinese hamster ovary (CHO) cell cultures with continuous phase kinetics transitions, Biochem. Eng. J., 142 (2019), 124–134. https://doi.org/10.1016/j.bej.2018.11.015 doi: 10.1016/j.bej.2018.11.015
    [26] V. E. Zajec, U. Novak, M. Kastelic, B. Japelj, L. Lah, A. Pohar, et al., Dynamic multiscale metabolic network modeling of Chinese hamster ovary cell metabolism integrating N-linked glycosylation in industrial biopharmaceutical manufacturing, Biotechnol. Bioeng., 118 (2021), 397–411. https://doi.org/10.1002/bit.27578 doi: 10.1002/bit.27578
    [27] M. A. Arbib, Theories of abstract automata, Edited by Michael A. Arbib, New Jersey: Prentice-Hall Englewood Cliffs, 1969.
    [28] M. O. Rabin, Probabilistic automata, Inf. Control, 6 (1963), 230–245. https://doi.org/10.1016/S0019-9958(63)90290-0 doi: 10.1016/S0019-9958(63)90290-0
    [29] R. Segala, Compositional trace–based semantics for probabilistic automata, Lecture Notes in Computer Science, International Conference on Concurrency Theory, Springer, 962 (1995), 234–248.
    [30] R. Segala, Modeling and verification of randomized distributed real-time systems, PhD thesis, Department of Electrical Engineering and Computer Science, Technical Report MIT/LCS/TR-676, Massachusetts Institute of Technology, 1995.
    [31] A. Paz, Introduction to probabilistic automata, New York: Academic Press, 2014. https://doi.org/10.1016/C2013-0-11297-4
    [32] L. A. Zadeh, Fuzzy sets, Inf. Control, 8 (1965), 338–353. https://doi.org/10.1016/S0019-9958(65)90241-X doi: 10.1016/S0019-9958(65)90241-X
    [33] W. G. Wee, On generalizations of adaptive algorithms and application of the fuzzy sets concept to pattern classification, PhD thesis, Purdue University, Indiana, 1967.
    [34] W. G. Wee, K. S. Fu, A formulation of fuzzy automata and its application as a model of learning systems, IEEE T. Syst. Sci. Cy., 5 (1969), 215–223. https://doi.org/10.1109/TSSC.1969.300263 doi: 10.1109/TSSC.1969.300263
    [35] D. Qiu, Characterizations of fuzzy finite automata, Fuzzy Set. Syst., 141 (2004), 394–414. https://doi.org/10.1016/S0165-0114(03)00202-1 doi: 10.1016/S0165-0114(03)00202-1
    [36] M. Mraz, I. Lapanja, N. Zimic, J. Virant, Notes on fuzzy cellular automata, J. Chin. Inst. Indust. Eng., 17 (2000), 469–476.
    [37] M. Mraz, I. Lapanja, N. Zimic, J. Virant, Fuzzy numbers as inputs to fuzzy automata, 18th Int. Conf. of the North American Fuzzy Information Processing Society, New York, 1999,453–456. https://doi.org/10.1109/NAFIPS.1999.781734
    [38] D. S. Malik, J. N. Mordeson, M. K. Sen, Minimization of fuzzy finite automata, Inform. Sciences, 113 (1999), 323–330. https://doi.org/10.1016/S0020-0255(98)10073-7 doi: 10.1016/S0020-0255(98)10073-7
    [39] M. Mizumoto, J. Toyoda, K. Tanaka, Some considerations on fuzzy automata, J. Comput. Syst. Sci., 3 (1969), 409–422.
    [40] S. Wolfram, Cellular automata as models of complexity, Nature, 311 (1984), 419–424. https://doi.org/10.1038/311419a0 doi: 10.1038/311419a0
    [41] C. G. Langton, Computation at the edge of chaos: Phase transitions and emergent computation, Physica D, 42 (1990), 12–37. https://doi.org/10.1016/0167-2789(90)90064-V doi: 10.1016/0167-2789(90)90064-V
    [42] L. L. Gatlin, Information theory and the living system, New York: Columbia University Press, 1972.
    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)