
Embeddings in normed spaces are a widely used tool in automatic linguistic analysis, as they help model semantic structures. They map words, phrases, or even entire sentences into vectors within a high-dimensional space, where the geometric proximity of vectors corresponds to the semantic similarity between the corresponding terms. This allows systems to perform various tasks like word analogy, similarity comparison, and clustering. However, the proximity of two points in such embeddings merely reflects metric similarity, which could fail to capture specific features relevant to a particular comparison, such as the price when comparing two cars or the size of different dog breeds. These specific features are typically modeled as linear functionals acting on the vectors of the normed space representing the terms, sometimes referred to as semantic projections. These functionals project the high-dimensional vectors onto lower-dimensional spaces that highlight particular attributes, such as the price, age, or brand. However, this approach may not always be ideal, as the assumption of linearity imposes a significant constraint. Many real-world relationships are nonlinear, and imposing linearity could overlook important non-linear interactions between features. This limitation has motivated research into non-linear embeddings and alternative models that can better capture the complex and multifaceted nature of semantic relationships, offering a more flexible and accurate representation of meaning in natural language processing.
Citation: Pedro Fernández de Córdoba, Carlos A. Reyes Pérez, Enrique A. Sánchez Pérez. Mathematical features of semantic projections and word embeddings for automatic linguistic analysis[J]. AIMS Mathematics, 2025, 10(2): 3961-3982. doi: 10.3934/math.2025185
In the early 1980s, Robert F. Furchgott and John V. Zawadzki, while analyzing the contractions and relaxations of vascular rings in response to certain pharmacological substances (adrenaline for constriction and acetylcholine for dilation), observed that acetylcholine did not produce vasodilation when the inner layer of cells of blood vessels was removed [1]. This endothelium-dependent relaxation, or vasodilation, was due to a substance released by acetylcholine that, upon reaching the smooth muscle of the vascular wall, produced vasodilation. This unknown substance was called the endothelium-derived relaxing factor (EDRF); it was highly unstable (a half-life of about 3 seconds under normal physiological conditions), and the vasodilation it produced was a consequence of activating an enzyme located in the vascular wall called guanylate cyclase [2].
Identifying EDRF with nitric oxide (NO) was a major milestone in vascular physiology and pathophysiology research. Its discovery was the product of parallel work from different research groups that reached similar results [3,4,5,6], and it was at the end of the 1980s when researchers David S. Bredt, Solomon H. Snyder and Salvador Moncada began to relate NO to the nervous system in its role as a neurotransmitter [7,8,9].
NO is an unstable, gaseous, fat-soluble free radical capable of crossing cell membranes without the aid of specific transporters or mechanisms. It is small (1.15 Å = 1.15⋅10−4 μm), diffuses rapidly and has a half-life of between 3 and 5 seconds. It is produced from the amino acid L-arginine, and its production is regulated by the enzyme nitric oxide synthase (NOS).
The two classes of NOS that exist, constitutive (cNOS) and inducible (iNOS), are distributed in very different and varied areas. Thus, the endothelial isoform (eNOS) of cNOS is located in the nerve fibres that surround blood vessels and endothelial cells, and the neuronal isoform (nNOS) is located in neurons of the central nervous system (CNS) and the peripheral nervous system (PNS). On the other hand, iNOS appears in glial cells and is only expressed in the event of brain injury or neurodegenerative diseases [10]. Based on the above, we observed that there are multiple neurons capable of synthesising NO, which is the subject of our modeling and simulation study. This NO acts in the CNS as an atypical neurotransmitter because it can be released by any part of the cell membrane without the need for presynaptic or postsynaptic structures, storage vesicles or transporter proteins.
The dynamics of NO comprise three fundamental processes (generation; diffusion; and self-regulation and recombination) [11] and begin with generation, a process that occurs within the chemical communication itself and requires the presence of the NOS enzyme and Ca2+ flux. This process causes a rapid and transitory release of moderate amounts of NO that diffuse in all directions. The diffusion of NO is governed by the gradient of its own concentration, and its targets are those cells that, being within its reach, have the ability to recombine with it, such as the enzyme soluble guanylate cyclase (GCs) [12,13]. The maximum range of NO influence through diffusion can reach 300 μm, encompassing approximately 2⋅106 synapses [14]. As this diffusion takes place, a process of self-regulation and recombination with different substrates occurs [15], with no reuptake processes. The generated NO contributes to inhibiting the activity of NOS by a negative feedback mechanism, leaving the area in a refractory period, during which NO generation does not occur again.
The presence of a molecule such as NO in the nervous system opens up new perspectives in the study of its functioning due to its various functions in both the PNS and CNS [11], identifying it as a possible element underlying and supporting the high capacity for adaptation, flexibility and prediction that the brain possesses. Confirming the latter requires understanding the functioning of NO as a signaling molecule in the brain. The absence of decisive experimental data for this function has led to the development of various models. Thus, Tadeusz Malinski et al. [16] directly applied the Fick equation to model the behavior of NO through isotropic and homogeneous diffusion. Lancaster J. R. [17] associated the dynamics of NO with a random walk, intrinsically related to the parabolic diffusion equation, and introduced the compartment concept to model NO. Wood J. and Garthwaite J. [18] based their model on an analytical solution of the diffusion equation and studied the physiological sphere of action and influence of an isolated source of NO, also analyzing the self-regulation process of NO. Vaughn M. W. et al. [19] characterized the dynamics of NO by means of a model composed of three regions with differentiated dynamics, following the anatomical structure of the vascular endothelium (luminal, endothelial and abluminal). Philippides A. et al. [20] modeled the diffusion of NO from an irregular structure using an analytical solution of the diffusion equation and introduced a global morphology for a generation process in which the generation of NO takes place in a spherical section. Suárez Araujo C. P. et al. [21,22,23] developed several models to study and determine the dynamics of NO from different approaches: an analytical model in a continuum, based on the Fick equation, that allowed working with cylindrical morphologies; and a compartmental model that, using phenomenological transport equations and compartmental analysis as its underlying formal mechanism, allowed considering different morphologies and characteristics of NO dynamics, such as the nonisotropy of the medium and the nonhomogeneous character of the basic processes of NO dynamics (generation and diffusion of NO and reaction of NO with other molecules).
Our model focuses on the dynamics of NO as a signaling molecule in volume transmission (VT), mainly on how NO transmits information through the extracellular environment in its three modes of operation: the single synapse, synaptic spillover and VT [24]. We do not rule out extending the scope of this model in the future to incorporate the dynamics of NO recombination with other substances and other biochemical networks, including models that allow us to work from the perspective of the microkinetics of NO.
In this field, metabolic models [25,26] represent a very important tool for our understanding of biological systems: they are more closely connected with cell behavior and can adequately represent the effects of NO dynamics on target cells, which span different time scales and are frequently not recordable electrophysiologically [24].
Although there is a large body of work on the dynamics of NO as a mediator of CNS functions, the manner in which NO performs this role is not well understood, because its actions in target cells are mediated by metabotropic receptors whose effects unfold on different time scales and are often not electrophysiologically detectable. On the other hand, we seek to understand how a coherent and interpretable information transfer can be encoded by this signaling system in the brain, particularly because the message itself is not channelled to any target but is free to diffuse from its source in all three dimensions.
The main motivation of our work is to improve the knowledge and understanding of the implications of NO in VT and of its effects on the formation of the complex functional structures on which the VT performed by NO is supported, both of which are necessary for the synchronous functional recruitment of neuronal populations.
In this work, we model the effect of NO dynamics on its environment, as well as its implications in the formation of the complex functional structures on which the VT performed by NO is supported, which is necessary for the synchronous functional recruitment of neuronal populations. Therefore, we propose a model based on the different changes that occur in the environment where this dynamic takes place. We use automata networks, which provide an alternative to classic models based on continuous dynamical systems (ordinary and partial differential equations). The first model, based on deterministic automata networks, exhibits finite-time behavior determined by an initial configuration of compartments in a state of NO generation.
This deterministic model is extendable and can incorporate more complex processes. One such extension allows the generation processes to occur arbitrarily in the compartments and remain random over time; the behavior of the automata associated with these compartments is then modeled with fuzzy dynamics. Thus, we obtain a fuzzy automata network model for the diffusion of NO.
We organize this work into two major sections, in addition to the introduction and the conclusions. Section 2 presents the formal tools and conceptualization of the automata networks (deterministic and fuzzy) used to model the NO dynamics. The results of the model, an analysis and study of deterministic convergence and an analysis of its dynamics are detailed in Section 3. We end this work with a section that summarizes the conclusions and identifies the methodological axis of our future work. The list of abbreviations and symbols is given in Table A1.1 (Annex 1. List of abbreviations and symbols).
The method of our work is based on modeling using the discrete mathematical structure of an automata network. In general, an automata network (AN) can be defined as a set of locally interconnected automata that evolve in discrete time steps through mutual interactions between them [27]. From a mathematical point of view, ANs are discrete dynamic systems.
Before going into the formal details of AN (in their two forms: deterministic automata networks and fuzzy automata networks), we define the basic components that compose them: Deterministic automata (DA) and fuzzy automata (FA).
Deterministic Automata (DA). A DA is a mathematical structure of states and transitions. The states of a DA represent configurations of the modeled real system, with one or more states designated as starting states that represent the system's initial configuration.
Transitions in a DA are associated with changes in the configuration of the real system due to an action that can be internal or external. The former represent internal computing steps (τ) and are not visible in the DA environment. An external action is visible in the DA environment and is used to interact with it.
Formally, a DA, denoted by A, is made up of the following four components:
(ⅰ) A set of states SA.
(ⅱ) A nonempty set of starting states IA⊆SA.
(ⅲ) Two sets of actions VA and WA, external (and that interact with the DA environment) and internal, respectively, such that VA∩WA=∅ and where ActA=VA∪WA defines the full set of actions associated with DA transitions.
(ⅳ) A transition relation ΔA⊆SA×ActA×SA, which defines the transitions of the DA.
Thus, we write s →a s′ if (s,a,s′)∈ΔA, meaning that action a is enabled in state s and that there is a transition labelled a from s to the state s′ of the DA.
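As an illustration, the four components (ⅰ)-(ⅳ) can be sketched as a small data structure. This is only a minimal sketch with illustrative names, not part of the formal development:

```python
# Minimal sketch of a DA A = (S_A, I_A, V_A/W_A, Delta_A); names illustrative.
from dataclasses import dataclass

@dataclass
class DA:
    states: set    # S_A
    initial: set   # I_A, nonempty subset of S_A
    external: set  # V_A: actions visible in the DA environment
    internal: set  # W_A: internal computing steps (tau)
    delta: set     # Delta_A, subset of S_A x Act_A x S_A

    def actions(self):
        # Act_A = V_A union W_A, with V_A and W_A disjoint
        assert not (self.external & self.internal)
        return self.external | self.internal

    def enabled(self, s, a):
        # action a is enabled in state s if some (s, a, s') is in Delta_A
        return any(src == s and act == a for (src, act, dst) in self.delta)

    def step(self, s, a):
        # all states reachable from s by a transition labelled a
        return {dst for (src, act, dst) in self.delta if src == s and act == a}
```

For instance, a two-state automaton with one external action a and an internal step tau is obtained by listing its transition triples in delta.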
The DA constitute a mathematical framework for the specification and analysis of real systems, but there are various formal extensions of them that capture aspects such as concurrency, asynchronous events, probabilistic configuration changes and environments with uncertainty. One of these extensions is probabilistic automata (PA), which have been formalized and developed by various authors [27,28,29,30,31].
An additional extension to the above is fuzzy automata (FA), which are based on the concepts associated with fuzzy sets [32] and have been formalized and characterized by various authors [33,34,35,36,37,38,39]. DA and PA are particular cases of FA [34].
Fuzzy automata (FA). A FA is a DA in which the existence of the transitions between a state and the subsequent set of states depends on a fuzzy affiliation function that defines the degrees of transition. Therefore, in a FA, the transition from a state s to a state s′ is performed only if the value of the fuzzy affiliation function, for a given input e, meets a certain established existence criterion. Thus, a transition in a FA relates a state and an action to a value in the fuzzy interval [0,1].
In this context, we define a fuzzy affiliation function (FFA) that characterizes a set A, which we also call a fuzzy set (FS): for every x∈X we can calculate a value fA(x)∈[0,1] whose meaning is the degree of membership, or affiliation, of the point x to the set A, where fA(x)=0 corresponds to x∉A and fA(x)=1 to x∈A. Although this is the simplest way to define the membership of every x∈X in the FS A, there are alternatives in which said membership is defined in terms of two parameters α,β∈[0,1], with α>β. In this case, membership is defined by the following three cases: 1) x∈A if fA(x)≥α; 2) x∉A if fA(x)≤β; and 3) the membership of the point x in the FS A is indeterminate when β<fA(x)<α.
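The two-threshold membership criterion can be sketched as a small function (the names membership, alpha and beta are illustrative):

```python
def membership(f_x, alpha, beta):
    """Three-valued membership of x in a fuzzy set A, given the value
    f_x = f_A(x) and thresholds alpha > beta in [0, 1]."""
    assert 0.0 <= beta < alpha <= 1.0
    if f_x >= alpha:
        return "member"         # case 1: x in A
    if f_x <= beta:
        return "non-member"     # case 2: x not in A
    return "indeterminate"      # case 3: beta < f_A(x) < alpha
```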
Formally, we define an FA as an algebraic structure A, formed by the following five components:
(ⅰ) A nonempty set of input states EA.
(ⅱ) A nonempty set of internal states SA.
(ⅲ) A nonempty set of output states VA.
(ⅳ) A first FFA, fA:SA×EA×SA→[0,1] called a direct fuzzy transition function, where fA(s,ej,s′) defines the degree of state transition s to the state s′, when the input ej is received.
(ⅴ) A second FFA, gA:VA×EA×SA→[0,1] called a direct fuzzy output function, where gA(vi,ej,s) defines the degree to which the automaton produces the output vi when it is in a state s and receives an input ej.
Based on the previous definition, we say that the transition from s to s′, with degrees fA(s,ej,s′) and gA(vi,ej,s), occurs if fA(s,ej,s′) and gA(vi,ej,s) meet the membership criteria on the FSs SA×EA×SA and VA×EA×SA.
The generalization of the FA behavior for an input sequence Ek of arbitrary length k can be defined by applying the definition of FFA composition [32].
In this case, fA(s,Ek,sk−1) = Max{s1,s2,…,sk−2} Min[fA(s,e0,s1), fA(s1,e1,s2), …, fA(sk−2,ek−1,sk−1)], where the maximum is taken over all choices of intermediate states s1,s2,…,sk−2, obtaining the degree of transition from the state s to the state sk−1 when the sequence of inputs defined by Ek is received. The same process must also be applied with the FFA gA.
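The max-min composition above can be made concrete by enumerating all chains of intermediate states. This brute-force sketch, with illustrative names, is only meant to illustrate the formula:

```python
from itertools import product

def degree(f, states, s, inputs, s_final):
    """Degree of transition from s to s_final under the input sequence
    'inputs', by max-min composition: maximise, over all chains of
    intermediate states s_1, ..., s_{k-2}, the minimum stepwise degree."""
    k = len(inputs)
    best = 0.0
    for mid in product(states, repeat=k - 1):
        # chain = (s, s_1, ..., s_{k-1}=s_final), with k transitions
        chain = (s,) + mid + (s_final,)
        d = min(f(chain[i], inputs[i], chain[i + 1]) for i in range(k))
        best = max(best, d)
    return best
```

For a single input, this reduces to the direct degree fA(s, e0, s_final); for longer sequences, it selects the chain whose weakest step is strongest.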
Automata Networks. Generalizing the above definitions associated with a DA and FA, an AN can be described as a function defined between two spaces, F:E×Sn→Sn, where E and S are finite spaces called spaces of inputs and states. The AN is then formed on n interconnected automata, where the connection structure is defined by F in the following way: automaton i receives a connection from j if Fi depends on variable j, where Fi corresponds to component i of F.
The way in which automata are interconnected in the AN is given by their structure or topology of connections.
A state of the network is a vector x in Sn. The dynamics of the network are then defined as the rule that transforms said vector x of Sn into a vector y of Sn when a given input e is received. The rule of parallel iteration is defined by y=F(x,e) and can be interpreted as follows: In each time step, each automaton calculates its next state by means of the function Fi on the current state x of the network. The AN iteration rule can present different modes of iteration: Parallel (the order in which each Fi is calculated is irrelevant) and sequential (the order in which Fi is calculated does matter).
As soon as the dynamics, or temporal evolutions, of the network are defined, the problem of its asymptotic behavior arises. Since the state space S on which the AN is defined is finite, all trajectories of the AN are eventually periodic, i.e., they end in cycles or fixed points.
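Because the state space is finite, a trajectory can simply be followed until a previously visited network state reappears. A minimal sketch of the parallel iteration rule with cycle detection (names are illustrative):

```python
def iterate_parallel(F, x, e, max_steps=1000):
    """Iterate y = F(x, e) in parallel mode and detect the inevitable cycle:
    since S^n is finite, some network state must eventually repeat."""
    seen = {tuple(x): 0}
    trajectory = [tuple(x)]
    for k in range(1, max_steps + 1):
        x = F(x, e)
        t = tuple(x)
        if t in seen:
            # trajectory is periodic from index seen[t], with period k - seen[t]
            return trajectory, seen[t], k - seen[t]
        seen[t] = k
        trajectory.append(t)
    raise RuntimeError("no repeat found within max_steps")
```

A period of 1 corresponds to a fixed point; any larger period corresponds to a cycle.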
A fuzzy automata network (FAN) is made up of n interconnected FA, where the connection structure is defined by the inputs that each FA receives.
Once the formal tools that we use in our modelling have been presented, we will define a series of concepts associated with NO dynamics and their effects on the diffusion environment.
Compartment. The concept of a compartment determines, through a mathematical construct, the minimal unit of study in which a complete NO diffusion dynamic, as defined by the automaton [22,23], takes place; it is part of the neuronal substrate. From a biochemical point of view, it can be identified as a chemical computing environment capable of producing the different processes involved in NO dynamics: NO generation, NO reception and NO self-regulation [11], as shown in Figure 1(a). In direct correspondence with each of these processes, and to identify the set of possible states in which said compartment may exist, it is established that each such process may be in a state of activation (operation) or of nonactivation.
We can then match the situation in which the NO generation process is active in the compartment with that in which certain chemical machinery is active in the biological substrate that converges in the generation of NO. Likewise, we can say that when NO reception is active in a compartment, a combination of NO and its possible receptor molecules is occurring in the biological substrate associated with the compartment, and thus some functional and/or metabolic change is being activated with said NO reception, as shown in Figure 1(b).
We treat the self-regulation process independently, knowing that, at the biological level, NO is depleted as it diffuses and combines with different substrates and that certain levels of NO condition its own generation. We can see in the definition of the set of states and transition function that the state of self-regulation is always later than that of generation and that when a compartment is in that state, there is no NO. We can also identify how, at the level of the neuronal substrate, we will have a state of transmission, which is a direct consequence of the state of diffusion of NO that occurs in its dynamics. In this state, the compartment receives NO from the environment and generates NO that will be received in other compartments, Figure 1(b).
Neighborhood and Scope. Within our set of compartments, whose structural and functional definition is established in the following two sections, we can define the following relationship:
Relationship belonging to a neighborhood. Given two compartments Ci and Cj, we say that Ci belongs to the neighborhood of Cj if and only if Ci is influenced by the state of Cj. In this case, we say that Cj creates the neighborhood ΠCj and that Ci∈ΠCj.
The above relationship allows us to define the neighborhoods of each compartment, and these are the ones that inherently define the scope of NO that is generated in each compartment to later diffuse. In Figure 2, we can see how compartment Ci simultaneously belongs to the neighborhoods ΠCj, ΠCk and ΠCp. This means that the dynamics of compartment Ci are influenced by the states in which compartments Cj, Ck and Cp are found. Likewise, compartment Ci will create a neighborhood ΠCi to which compartments Cj, Ck and Cp may belong.
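As a minimal sketch, neighborhoods like those of Figure 2 can be represented as a mapping from each compartment to the set of compartments it influences (the compartment names are illustrative):

```python
# Neighborhoods as a mapping Pi: compartment -> set of compartments it
# influences, i.e. Pi["Cj"] is the neighborhood that Cj creates.
Pi = {
    "Cj": {"Ci"},
    "Ck": {"Ci"},
    "Cp": {"Ci"},
    "Ci": {"Cj", "Ck", "Cp"},
}

def influencers(Pi, c):
    """All compartments Cj whose state influences c, i.e. with c in Pi[Cj]."""
    return {cj for cj, hood in Pi.items() if c in hood}
```

Here Ci belongs simultaneously to the neighborhoods created by Cj, Ck and Cp, and itself creates a neighborhood containing those three compartments.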
Based on the previous definitions, we identify the following states in which the automaton can be found:
(1) Nonactivity state, (n). A compartment is in a nonactive state when the concentration of NO, or its variation, is negligible.
(2) Receiving state, (r). A compartment in a receiving state is receiving NO. This NO is generated or transmitted by any of the compartments that are creating the neighborhoods to which it belongs.
(3) Transmission state, (t). Two different situations can cause a compartment to be in a transmitting state.
– The first situation occurs when NO generation begins in a compartment. By itself (as we will see later), if some compartment in its neighborhood is in a receiving state, this would take our automaton to a state of NO generation. However, if NO generation also occurs in some compartment to whose neighborhood the compartment in question belongs, then the state associated with this situation is that of transmission.
– The second of the situations presented corresponds to the opposite sequence. A compartment may be receiving NO and begin a process of NO generation in it, which takes it to the transmission state if there is a compartment in its neighborhood with the capacity to receive NO.
The state of transmission, in the definition of our automaton, has been named this way because it intrinsically captures the coexistence of NO reception and NO generation. It is not known whether the reception of NO can have some implication in subsequent generation processes in the same region of nervous tissue; we understand that this relationship may occur in higher-level processes. Likewise, it is not known what NO reception in a certain region of nervous tissue may imply when a process of NO generation is already underway in that region; our hypothesis is that it could accelerate self-regulation. Therefore, this name identifies a single state of our automaton, reached through the two situations explained above, and, from the point of view of the transmission of information, their independent occurrence is what is relevant.
Figure 3 shows a sequence of states with which to better understand the transmission state. A small section of an automaton network is shown where compartments Cj and Cp have been highlighted along with the neighborhoods ΠCj and ΠCp that they form and thus the scope that the NO generated in compartments Cj and Cp can have. In Figure 3(a), we can see the initial state of the compartments and their associated neighborhoods. Figure 3(b) shows how the activation of a NO generation process in compartment Cj initiates a receiving state in all the compartments that belong to its neighborhood, including Cp itself. Figure 3(c) shows how a generation process is activated in compartment Cp, thus initiating a transmission state (in the same way, compartment Cj also goes to a transmission state), without any relationship existing between the generation that has occurred and the NO that it received previously. Figure 3(d) shows how the generation of NO in Cj ends, and thus Cj goes to a state of self-regulation and Cp to one of generation.
(4) NO generation state, (g). A compartment is in this state when NO generation takes place in it. Through this process of NO generation, the compartment influences those compartments that belong to its neighborhood.
(5) State of self-regulation, (a). The state of self-regulation follows, in sequence, the state of NO generation. During this state, there are no dynamics related to NO. It is important to bear in mind that NO disappears both in the self-regulation state and in the receiving state. The self-regulation of NO is the process by which the generated NO disappears. It is currently argued that this disappearance occurs in proportion to the amount of NO present in the environment, although some models assume other types of dependence [19]. The automaton model presented here does not handle amounts of NO, and self-regulation is introduced as a state that follows generation, with a forced transition.
We begin the specification of the transition function of the automaton by defining the jump behavior between each of the possible states in which said automaton can be found.
Two versions of the transition function are proposed, as shown in Table 1 (version 0 and version 1). Both versions reach a finite and deterministic behavior regardless of the initial configuration, and the generation and nonactivity states of NO are possible only in these initial configurations. These versions, although they present a complex behavior, should be seen as an intermediate step for the final definition of our fuzzy transition function, which will require stochastic behaviors to reflect NO generation and self-regulation more realistically.
[Version 0] Transition condition | ϕCi(k) → ϕCi(k+1)
r0 ≡ (∃Cj : Ci ∈ ΠCj ∧ (ϕCj(k) = g ∨ ϕCj(k) = t)) | n → r, r → r
¬r0 | n → n, r → n
r1 ≡ (∃Cj : Ci ∈ ΠCj ∧ (ϕCj(k) = g ∨ ϕCj(k) = t)) ∧ (∃Ch ∈ ΠCi : ϕCh(k) = n ∨ ϕCh(k) = r) | g → t, t → t
¬r1 | t → a
r2 ≡ (∃Cj : Ci ∈ ΠCj ∧ (ϕCj(k) = g ∨ ϕCj(k) = t)) ∧ (∃Ch ∈ ΠCi : ϕCh(k) = g ∨ ϕCh(k) = t) | g → a
¬r1 ∧ ¬r2 | g → g
ε (unconditional) | a → n
[Version 1] Transition condition | ϕCi(k) → ϕCi(k+1)
s0 ≡ (∃Cj : Ci ∈ ΠCj ∧ (ϕCj(k) = g ∨ ϕCj(k) = t)) ∧ (∀Ch ∈ ΠCi : ϕCh(k) ≠ r) | n → r, r → r
¬s0 | r → n
s1 ≡ (∃Cj : Ci ∈ ΠCj ∧ (ϕCj(k) = g ∨ ϕCj(k) = t)) ∧ (∀Ch ∈ ΠCi : ϕCh(k) = r) | n → g
¬s0 ∧ ¬s1 | n → n
s2 ≡ (∃Cj : Ci ∈ ΠCj ∧ ϕCj(k) = g) ∧ (∃Ch ∈ ΠCi : ϕCh(k) = n ∨ ϕCh(k) = r) | g → t
¬s2 | g → a
s3 ≡ (∃Cj : Ci ∈ ΠCj ∧ ϕCj(k) = t) ∧ (∃Ch ∈ ΠCi : ϕCh(k) = n ∨ ϕCh(k) = r) | t → r
¬s3 | t → a
ε (unconditional) | a → n
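As a hedged sketch, the version 0 rules can be implemented as one parallel update step over the compartments. Note that the table does not fix an order when r1 and r2 hold simultaneously for a generating compartment; the code below assumes r1 (g → t) takes priority, which is an assumption of this sketch, as are all the names used:

```python
def step_v0(states, Pi):
    """One parallel step of the version 0 transition function (Table 1).
    states: dict compartment -> one of "n", "r", "t", "g", "a".
    Pi: dict compartment -> set of compartments it influences (its neighborhood).
    Assumption: when r1 and r2 both hold for a generating compartment,
    r1 (g -> t) is applied; the table leaves this order open."""
    def influencers(c):
        # compartments Cj that create a neighborhood containing c
        return {cj for cj, hood in Pi.items() if c in hood}

    nxt = {}
    for c, phi in states.items():
        r0 = any(states[cj] in ("g", "t") for cj in influencers(c))
        r1 = r0 and any(states[ch] in ("n", "r") for ch in Pi[c])
        r2 = r0 and any(states[ch] in ("g", "t") for ch in Pi[c])
        if phi in ("n", "r"):
            nxt[c] = "r" if r0 else "n"               # rows r0 / not-r0
        elif phi == "g":
            nxt[c] = "t" if r1 else ("a" if r2 else "g")
        elif phi == "t":
            nxt[c] = "t" if r1 else "a"               # rows r1 / not-r1
        else:                                          # phi == "a"
            nxt[c] = "n"                               # row epsilon: a -> n
    return nxt
```

Under this priority assumption, a chain of three compartments with two adjacent generators reaches the all-nonactive fixed point after a few steps, illustrating the finite deterministic behavior discussed above.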
Figure 4 shows the state diagrams of both versions of the transition function, which cause the environment to be divided into three different basic dynamics: The dynamics of nonactivity or receiving NO, the dynamics of NO generation and the dynamics of NO transmission. The possibility of passing between the previous dynamics is the main difference between the two versions that have been defined as the transition function.
As seen in the definition of the transition functions in Table 1, in version 0 of our automata network (rules r0, r1 and r2), whether a compartment is in the dynamics of nonactivity or NO reception, or in the dynamics of NO generation or transmission, depends mainly on the initial configuration, since no rule of its transition function allows a compartment that starts in the receiving dynamics to pass to the dynamics of generation or transmission. The reverse transition, however, is allowed in this version of the automaton: a compartment can start in a generation dynamic and later move to a receiving dynamic, either directly or through a temporary passage through the transmission dynamic.
Version 1 of our automata network (rules s0, s1, s2 and s3) completes the previous scenario by allowing the transition from the dynamics of nonactivity or reception, originating in an initial configuration, to the dynamics of generation and transmission.
Therefore, our two versions of the transition function have 7 rules (defined algebraically in Table 1) to control the previous behavior. Below is a detailed textual definition of these rules:
Version 0:
● Rule r0: Represents a situation in which, being a compartment without NO dynamics, it begins to receive NO. The automaton associated with said compartment will change from a state of nonactivity (n) to a state of reception (r). This occurs when any of the generating automatons of the neighborhoods to which the associated automaton belongs are in a generation (g) or transmission (t) state. The automaton will continue in this state of reception (r) as long as the previous situation is maintained. When this situation ceases to occur, the automaton returns to a state of nonactivity (n).
Rule r0 controls the dynamics of those zones in which NO is not generated and is only received. The zones where NO is both produced and received are controlled by Rules r1 and r2.
● Rule r1: Rule r1 controls passing from NO generation dynamics to transmission dynamics, where there is NO generation and reception.
Therefore, we are faced with a situation where some of the generating automatons of the neighborhoods to which the associated automaton belongs are in a generation (g) or transmission (t) state, and some of the automatons of their own neighborhood are found in a state of reception (r) or nonactivity (n).
When the above occurs, the automaton goes from a generation state (g), which it reached as a result of its initial configuration, to a transmission state (t), remaining in said transmission state (t) as long as said rule is met.
● Rule r2: This rule identifies a situation after an NO generation dynamic, which in our model is associated with an NO self-regulation state. There is a period of nonactivity in the compartment, which is necessary after the dynamics of NO generation and transmission.
With the automaton in a generation state (g), we go to a self-regulation state (a) when the automaton is being influenced by states of generation (g) or transmission (t) from the automatons that have it in their neighborhoods, and in the neighborhood of the automaton itself, there are also automatons in a state of generation (g) or transmission (t).
When the conditions of Rules r1 and r2 are not met, the automaton must continue in a state of generation (g).
Version 1:
● Rule s0: Represents a situation in which, being a compartment without NO dynamics, it begins to receive NO. Therefore, the automaton will change from a state of nonactivity (n) to a state of reception (r) and remain in this state as long as it continues to receive NO. This occurs when any of the generating automatons of the neighborhoods to which the associated automaton belongs are in the generation (g) or transmission (t) state, and none of the automatons that belong to its neighborhood are already in a reception state (r). When this situation ceases to occur, the automaton returns to a state of nonactivity (n).
● Rule s1: This rule identifies the conditions that must be produced to cause a change between the dynamics of nonactivity or NO reception and the dynamics of NO generation or transmission.
With the automaton in a state of nonactivity (n), we go to a state of generation (g) when the automaton is being influenced by states of generation (g) or transmission (t) from the automatons that have it in its neighborhood, and there are automata from its own neighborhood in a state of reception (r), a situation in which there is a functional demand for NO.
● Rule s2: Like Rule r1 of version 0 in the transition function, this rule controls a transition from the dynamics of only NO generation to the dynamics where there is generation and reception (transmission).
We are faced with a situation where if any of the generating automatons of the neighborhoods to which the automaton belongs are in a generation state (g) and some of the automatons from its own neighborhood are in a reception state (r) or nonactivity state (n), the automaton goes to the transmission state (t).
If the above condition is not met, the automaton goes to a state of self-regulation (a).
● Rule s3: This rule is directly associated with a condition where the compartment should leave the NO transmission state (t).
If any of the generating automatons of the neighborhoods to which the automaton belongs are in a state of transmission (t), and any of the automatons from its own neighborhood are in a state of reception (r) or nonactivity (n), the automaton goes from a transmitting state (t) to a receiving state (r).
If the above condition is not met, the automaton goes to a state of self-regulation (a).
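The rule descriptions above for version 1 can be condensed into a single per-automaton transition function. The following is a minimal, illustrative Python sketch (not the authors' implementation): `influencers` stands for the states of the generating automata Cj whose neighborhoods contain Ci, `neighborhood` for the states of the automata in Ci's own neighborhood Π_Ci, and both parameter names are assumptions made here for clarity.

```python
def transition_v1(state, influencers, neighborhood):
    """Sketch of the version-1 transition rules s0..s3 (Table 1)."""
    # s0: some influencing generator is in g or t, and no cell of Pi_Ci is receiving.
    s0 = any(x in ('g', 't') for x in influencers) and all(x != 'r' for x in neighborhood)
    # s1: some influencing generator is in g or t, and every cell of Pi_Ci is receiving
    # (a situation of functional demand for NO).
    s1 = any(x in ('g', 't') for x in influencers) and all(x == 'r' for x in neighborhood)
    # s2: some influencer generates, and some cell of Pi_Ci is in n or r.
    s2 = any(x == 'g' for x in influencers) and any(x in ('n', 'r') for x in neighborhood)
    # s3: some influencer transmits, and some cell of Pi_Ci is in n or r.
    s3 = any(x == 't' for x in influencers) and any(x in ('n', 'r') for x in neighborhood)

    if state == 'n':
        return 'r' if s0 else ('g' if s1 else 'n')
    if state == 'r':
        return 'r' if s0 else 'n'
    if state == 'g':
        return 't' if s2 else 'a'
    if state == 't':
        return 'r' if s3 else 'a'
    return 'n'  # state 'a': rule epsilon, self-regulation always relaxes to nonactivity
```

For example, a nonactive automaton surrounded only by receivers while a generator influences it starts generating (rule s1), while a self-regulating automaton always returns to nonactivity.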
Of the wide spectrum of particularities presented by NO dynamics, such as 1) nonlocalized generation; 2) the existence of a maximum range beyond which NO should exert no influence; 3) the existence of areas where NO recombines with other substances, thereby influencing other mechanisms of cellular communication and, consequently, higher-order processes such as neural plasticity and learning; and 4) the existence of substance self-regulation, among others, in this work we focus on how NO forms complex structures as its dynamics develop. These structures support the possible transmission of information, which is necessary to provoke the synchronous functional recruitment of the neural populations involved.
This section presents the results of analyzing the DAN models made explicit in the previous sections, which allows us to determine what type of behavior they develop throughout the generations.
We focus on the 1D versions of our automata networks for modeling NO; these studies remain fully valid for automata networks of other dimensionalities [40]. Likewise, these results will be extrapolated to their 2D versions.
Figure 5 shows different 1D evolutions associated with the two versions of our automaton network; Figure 5(a), (b) for version 0; Figure 5(c), (d) for version 1. In this case, networks with 32 automata are used. The initial configurations have been established randomly, following the premise that each automaton can be either in a generation state (g) or nonactivity state (n) with equal probability, assuming the above that in these initial configurations, we will have approximately 50% of the automatons in a generation state (g). In these figures, it is observed how, in all cases, separate structures of a stable or periodic type are generated.
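A random initial configuration of the kind used in the Figure 5 experiments can be produced as follows; this is a small sketch, with `p_generation` and `initial_configuration` as names assumed here, not taken from the source code of the paper.

```python
import random

def initial_configuration(size=32, p_generation=0.5, seed=None):
    """Random initial configuration: each automaton starts in generation 'g'
    or nonactivity 'n'. With the default p_generation = 0.5, roughly half of
    the automata start in the generation state, as in Figure 5."""
    rng = random.Random(seed)
    return ['g' if rng.random() < p_generation else 'n' for _ in range(size)]
```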
In version 0 of the DAN, the structures are supported by the different convergence cycles of each automaton, and we can quantify their appearance based on the level of NO generation contained in the initial configurations, as well as on the range of the neighborhood. Figure 6 shows the number of times that the convergence cycle composed only of the generation state (g) appears, for different neighborhood ranges: range 1 for a neighborhood with three cells of the lattice (the two neighbors and the cell itself), range 2 for 5 cells, and range 3 for 7 cells. The figure shows how the occurrence of this convergence cycle varies as a function of the probability of generation in the initial configuration and the range of the neighborhood. An interval is observed in which this convergence cycle disappears completely, which can be interpreted as a control mechanism of the NO level.
Annex 2 (identification and analysis of the different convergence cycles of the automata networks for modeling NO) compiles this quantification for all the possible sequences of these convergence cycles of said version 0 for our automaton network.
As we have already indicated, in version 0 of the transition function, the generation state (g) of a compartment can only arise from the initial configuration of the automaton. This is the main difference between the two versions of the developed automata network, since in version 1, NO generation can occur not only in the initial configuration.
As seen in the regular expressions that define the sequences of states that can be produced,
n{n|r{r}n|gan|gtan|gtr{r}n},  gan{n|r{r}n|gan|gtan|gtr{r}n}. | (3.1)
Both expressions contain each other, and therefore the automaton can begin in a state of nonactivity (n) and later pass to a state of generation (g), depending on whether the rules that define its transition function are met. This makes a detailed analysis of all the possible sequences, or convergence cycles, of version 1 of our automata network unfeasible.
Once this first visual analysis of both versions of our automaton network has been carried out, a qualitative analysis is carried out, taking as a cataloguing reference the classification established by Stephen Wolfram [40]. In this classification, there are four different types of dynamics for automata networks, which, depending on their variation in space and time from a random initial configuration, are classified as follows:
● Class Ⅰ. The evolution of the automaton network converges to a homogeneous state, without spatial or temporal structures of any kind.
● Class Ⅱ. The evolution of the automaton network tends to separate structures of a stable or periodic type.
● Class Ⅲ. The evolution of the automaton network presents chaotic patterns. Fractal structures emerge spatially, and cycles of very long length are observed.
● Class Ⅳ. The evolution of the automaton network generates localized complex structures, which spread and whose duration increases exponentially with the size of the network.
The first three classes correspond qualitatively to the three types of behaviors observed in continuous systems (attractors, periodic/quasiperiodic and chaotic).
From the qualitative analysis, the temporal evolutions of both versions of our automaton network are determined to be Class Ⅱ.
Quantitative analysis is driven by the work of Chris G. Langton [41]. First, we consider the set DKN of all possible transition functions with the algebraic structure Δ: Σ^N → Σ, where K corresponds to the number of states of the automaton and N to the number of neighbors involved in the transition function, including the automaton for which the transition function computes the new state.
Let sq be a state that we identify as the quiescent state. Once this quiescent state is identified, we can count the n transitions that lead to that state. We are then interested in quantifying the remaining (K^N − n) transitions, because the degree of heterogeneity in the behavior of the automaton depends on how these (K^N − n) transitions are defined. For this purpose, the parameter λ is established according to expression (3.2) [41]:
λ = (K^N − n) / K^N. | (3.2)
If n = K^N, all transitions have sq as the final state, the behavior of the automaton is completely homogeneous (all initial configurations end in sq), and λ = 0. On the other hand, if n = 0, no transition has sq as the final state, and λ = 1. The most heterogeneous behavior occurs when all states (including sq) are equally represented as final states, which happens when n = K^(N−1) and λ = 1 − 1/K. Based on the definition of the parameter λ, we calculate it for the two versions of the automata network when working in a 1D environment.
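Expression (3.2) can be computed directly from a tabulated transition function. In the sketch below, the table is assumed to be a mapping from each of the K^N neighborhood configurations to the resulting state; the function name is illustrative.

```python
def langton_lambda(table, quiescent):
    """Langton's lambda: the fraction of the K^N transitions whose image is
    NOT the chosen quiescent state s_q, i.e. (K^N - n) / K^N (Eq 3.2)."""
    n = sum(1 for next_state in table.values() if next_state == quiescent)
    return (len(table) - n) / len(table)
```

Applied to a fully homogeneous table this yields λ = 0, and to a table with no transition into s_q it yields λ = 1, matching the limits discussed above.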
As seen in Tables 2 and 3, for each possible state si(t) and states of its neighbors si−1(t) and si+1(t), the state si(t+1) is made to correspond according to the rules that define our transitions, as shown in Table 1. In this case, we are working with a value of K=5, corresponding directly with the states: Nonactivity (n), generation (g), transmission (t), reception (r) and self-regulation (a). The definition of the neighborhood of the automaton and the fact that we are working in a 1D environment give us a value of N=3.
nn | ng | nt | nr | na | gn | gg | gt | gr | ga | tn | tg | tt | tr | ta | rn | rg | rt | rr | ra | an | ag | at | ar | aa | |
n | n | r | r | n | n | r | r | r | r | r | r | r | r | r | r | n | r | r | n | n | n | r | r | n | n |
g | g | t | t | g | g | t | a | a | t | g | t | a | a | t | g | g | t | t | g | g | g | g | g | g | g |
t | a | t | t | a | a | t | a | a | t | a | t | a | a | t | a | a | t | t | a | a | a | a | a | a | a |
r | n | n | n | n | n | n | r | r | n | n | n | r | r | n | n | n | n | n | n | n | n | n | n | n | n |
a | n | r | r | n | n | r | r | r | r | r | r | r | r | r | r | n | r | r | n | n | n | r | r | n | n |
nn | ng | nt | nr | na | gn | gg | gt | gr | ga | tn | tg | tt | tr | ta | rn | rg | rt | rr | ra | an | ag | at | ar | aa | |
n | n | r | r | g | n | r | r | r | n | r | r | r | r | n | r | g | n | n | g | g | n | r | r | g | n |
g | a | t | a | a | a | t | a | a | t | a | a | a | a | a | a | a | t | a | a | a | a | a | a | a | a |
t | a | a | r | a | a | a | a | a | a | a | r | a | a | r | a | a | a | r | a | a | a | a | a | a | a |
r | n | r | r | n | n | r | r | r | n | r | r | r | r | n | r | n | n | n | n | n | n | r | r | n | n |
a | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
Based on the logical rules defined in Table 1, all possible transitions are detailed in Tables 2 and 3 as a preliminary step for calculating the parameter λ. According to these tables, the transition function reads, for example: if [s_(i−1)(t) s_(i+1)(t) = 'nr'] ∧ [s_i(t) = 't'], then s_i(t+1) = 'a', following, in this case, the transition indicated in Table 3.
The above allows us to calculate the value of λ for a given quiescent state sq by direct application of the expression (K^N − n)/K^N, where n is the number of occurrences of sq in the corresponding table, obtaining the values shown in Table 4 for both versions of our automata network.
sq | λv0 | λv1 |
n | 0.656 | 0.632 |
g | 0.896 | 0.96 |
t | 0.872 | 0.968 |
r | 0.744 | 0.776 |
a | 0.832 | 0.664 |
Taking as sq the state of nonactivity, we have λv0=0.656 and a λv1=0.632, as shown in Table 4. These values allow us to locate the transition functions of our automata in the set of all possible functions, DKN. It is important to note that when any of the other states is selected as state sq, the values of λv0 and λv1 change.
To analyze the emergence of complexity produced by the dynamics of our automata networks, we will use the probabilistic approach offered by entropy as a basic measure of self-information. For a discrete process T of K possible states, this is defined according to Eq (3.3).
H(T) = −∑_{i=1}^{K} p_i log(p_i), | (3.3)
where pi corresponds to the probability of state i occurring in process T.
On the other hand, to quantify the degree of cooperation that may exist in our DANs, on which the synchronous functional recruitment feature is supported, we quantify the level of correlation that exists in the events that occur (state changes) in the automata. For this, the concept of mutual information I(Tn,Tm) is used between two automata n and m, in which the discrete processes Tn and Tm occur. The magnitude is defined as a function of the individual entropy H(Tn) and H(Tm) of the two automata and the entropy of the two automata considered as a joint process H(Tn,m).
Therefore, the mutual information is given by the following expression:
I(Tn,Tm)=H(Tn)+H(Tm)−H(Tn,m). | (3.4) |
This measure will have direct dependence on the correlation of process Tn with the state of process Tm. Thus, high values in the average of I(Tn,Tm) will imply a high cooperation between automata n and m. In contrast, a functional independence, or change of states, between automata will assume low values of the previous measure.
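Equations (3.3) and (3.4) can be estimated from state trajectories with a short sketch. The text does not fix the base of the logarithm, so base 2 is assumed here; the joint process H(Tn,m) is formed by pairing the two trajectories step by step.

```python
import math
from collections import Counter

def entropy(seq):
    """Shannon entropy (Eq 3.3) estimated from empirical state frequencies."""
    total = len(seq)
    return -sum((c / total) * math.log2(c / total) for c in Counter(seq).values())

def mutual_information(seq_n, seq_m):
    """I(Tn, Tm) = H(Tn) + H(Tm) - H(Tn,m)  (Eq 3.4)."""
    joint = list(zip(seq_n, seq_m))  # the two automata considered as a joint process
    return entropy(seq_n) + entropy(seq_m) - entropy(joint)
```

For two identical trajectories, I equals the individual entropy (maximal cooperation); for a constant trajectory, both H and I vanish, matching the interpretation of low values as functional independence.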
By virtue of the defined magnitudes, our automata network for NO modeling adequately incorporates the characteristic of complex structure formation if it has an intermediate value of the average general entropy ¯H, understanding that complexity lies between the order of a system, where ¯H ≈ 0, and total disorder, where ¯H reaches its highest values. On the other hand, a high synchronous functional recruitment between the various NO dynamics implies high values of the average general mutual information ¯I.
It should be taken into account that the previous magnitudes depend on the trajectories followed by the automata network, which are determined by the initial configuration, and that the latter has a random character. Therefore, in our quantitative study, average values of these magnitudes are calculated over a set of executions in which the initial configuration changes from run to run, but the proportions of automata starting in a generation (g) or nonactivity (n) state are kept constant.
In the same way, all the calculated variables that depend on the trajectories followed by the automata network are organized along the λ axis associated with the set DKN, which allows us to locate our automata networks in that set for comparison.
Figure 7 shows us the values of the general entropy ¯H and general mutual information ¯I for a subset of DKN along the dimension λ in the range between 0 and 1−1/K. This interval corresponds to the interval that goes from the values of λ associated with the most homogeneous automata networks λ=0 to the most heterogeneous λ=1−1/K.
Figure 7(a) shows how the value of ¯H in version 1 of our automaton is approximately ¯H=1.25, which is approximately half the value (¯H≈2.1) that the automata networks of DKN have for that same λ. This fact indicates that version 1 of our automata network forms complex structures in its dynamics.
Figure 7(b) shows the suitability of version 1 of our automata network from the perspective of synchronous functional recruitment, since this network has a value of ¯I=0.8, which practically doubles the average value of the rest of the automata networks of DKN.
In both comparisons shown in Figure 7, for the selection of the rest of the automata networks that make up the subset of DKN, the "table-walk-through" procedure has been carried out [41], taking as a seed version 1 of the automaton network and modifying its transition function stochastically to obtain automaton networks with different λ within the study interval indicated above, and a state sq equal to that of nonactivity (n).
In Figure 8, we show the same comparison of the values of ¯H and ¯I, but in this case, the subset of DKN has been generated with the "table-walk-through" process using version 1 as the seed automata network and the state sq = self-regulation (a). The main difference appears when comparing the levels of ¯H (Figures 7(a) and 8(a)): when working with sq = self-regulation (a), the lowest values are close to that of our automata network, implying that the level of complex structure formation of our network no longer differentiates it from the rest.
Figure 9 shows a relationship factor that has ¯H and ¯I with the temporal evolution of the automaton networks of DKN for different values of λ. This figure shows the evolution of 6 automata networks made up of 32 automata in a cyclical lattice, where their transition functions have been obtained by taking both versions of our automata network for modeling NO and where sq corresponds to the state of nonactivity (n). Figure 9(a)–(c) correspond to version 0, and Figure 9(d)–(f) correspond to version 1. It can be seen in these figures how the temporal evolution of the states of these networks presents structures of a certain complexity and fractal tendency, and they are characterized by different values, not only for the parameter λ but also with regard to the values of ¯H and ¯I.
The behavior observed in the analyzed DANs is also present in larger automata networks, as can be seen in Annex 3 (detail of the evolution of the automata networks with 128 automata, versions 0 and 1, of the automata network for modeling NO).
In the developed quantitative analysis, it has also been verified how dependent the values of ¯H and ¯I are in relation to the initial configurations and to the level of NO generation that may exist in them. Figure 10 shows the evolution of ¯H and ¯I when we vary the percentage of automata that are in a generation state (g) in the initial configuration.
In this figure, it can be seen that the average values of ¯H show a slight increase (from a value of ¯H≈1.2, up to a value of ¯H≈1.3) as we increase the level of NO generation present in the initial configurations, from 10% to 70%, becoming unstable for percentages greater than the latter. For the case of ¯I, we see that its variation, despite being upwards, is practically negligible, staying around ¯I≈0.8 and behaving in the same way as ¯H once the NO generation percentage exceeds 70%.
On the other hand, the storage of information by any system (be it discrete or continuous) seems to imply low entropy, while the transmission of information implies an increase in it [42]. Likewise, high mutual information implies a high correlation between the automata.
The final analysis for version 1 of our automata network seeks to determine its position in the plane defined by both magnitudes, entropy versus mutual information, always in comparison with the rest of the automata networks of DKN.
Figure 11 shows that version 1 of our network is located in the area of the plane that gives it a medium entropy level. Consequently, we have a high formation of complex structures, and on the other hand, the level of mutual information seems to be at high values compared to all the automata networks that are defined by DKN. The above allows us to argue that version 1 of the automaton network also presents high levels of synchronous functional recruitment.
A more exhaustive analysis of how version 1 of our automata network behaves in relation to the values of ¯H and ¯I for all possible initial configurations is shown in Figure 12. The different colors indicate the level of NO generation present in the initial configurations. This figure shows that very few initial configurations make our automata network leave the region around ¯H ≈ 1.3 and ¯I ≈ 0.8, which is identified as suitable for an adequate level of complex structure formation and synchronous functional recruitment. It can also be seen that the initial configuration producing the maximum values of ¯H and ¯I is composed of the sequence of states nnggnngg…nngg, where n and g correspond to the nonactivity and generation states, respectively; its evolution is shown in Figure 13.
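The nngg-periodic configuration mentioned above can be generated, for instance, as follows; the function name is illustrative.

```python
def nngg_configuration(size=32):
    """Repeat the block [n, n, g, g] up to the requested network size, as in
    the initial configuration that maximizes the average H and I in Figure 12."""
    return (['n', 'n', 'g', 'g'] * ((size + 3) // 4))[:size]
```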
We extend our study to 2D DANs, where results parallel to those achieved with 1D DANs can be observed. Thus, Figure 14 shows the complex structure formation by all the dynamics associated with the different states, Figure 14(a)–(e), in which automata may exist when we work with a network of 10,000 automata arranged in a 100×100 lattice, where each automaton has 5 neighbors (the cell itself and its four orthogonal neighbors, a von Neumann-type neighborhood). In these figures, the formation of a set of structures is perceived when the automata network is in the 75th generation.
Figure 15 shows the convergence of the same automata network with version 1 of the transition function towards the final complex structures when we sufficiently advance in the number of generations. The edges of the structure show a changing and cyclical behavior.
The implementation of the different versions of the automaton network was developed in Python 3.8.13 (www.python.org) using the TensorFlow 2.3.0 libraries (www.tensorflow.org). The source code used in all the experiments in this article is available for download on GitHub (https://github.com/pablo-fernandez-lopez/NANetwork_NO).
In this work, a model based on deterministic automata networks is proposed for modeling the effect that NO dynamics exert on the environment through which it diffuses in its role as a molecule with the ability to perform VT. We carry out the formalization of its transition function through the logical extrapolation of those mechanisms associated with the diffusion dynamics of NO as a neuroactive substance. The obtained model adequately depicts the characteristics of complex structure formation and synchronous functional recruitment.
To achieve this, we have built two versions of transition functions (version 0 and version 1) that segment the environment into three basic dynamics: the dynamics of nonactivity or NO reception, the dynamics of NO generation, and the dynamics of NO transmission. The possibility of passing from the dynamics of nonactivity or NO reception to the other dynamics is the main difference of version 1 compared to version 0. The two versions of the transition function defined and analyzed in this work use a deterministic NO generation process. NO generation occurs either in the initial configuration of the automata (version 0) or both in the initial configuration and whenever a specific logical rule of the transition function is met, responding to a situation of functional demand for NO (version 1).
Both versions cause separate structures of a stable or periodic type in all the sequences of states through which the automata network passes for each initial configuration. These structures are supported by the different convergence cycles that each automaton develops, whose occurrences depend on the level of NO generation of the initial configuration as well as the range of the neighborhood. From a qualitative perspective, the two versions of our automaton network are classified as class Ⅱ on the Stephen Wolfram rating scale [40].
The quantitative analysis of version 1 of our automaton network, when compared with the rest of the possible automata networks generated and organized according to the heterogeneity measurement parameter λ, defined by Chris G. Langton [41], presents adequate values for entropy and mutual information (¯H≈1.3 and ¯I≈0.8), achieving an adequate predisposition of the network for complex structure formation and synchronous functional recruitment necessary to model the VT and study its implications in mechanisms and higher processes of the brain, such as learning and memory formation.
Working with version 1 of our automaton network in 2D environments, it is observed that NO dynamics produce areas of isolation and segmentation of the environment in relation to the characteristics of complex structure formation and synchronous functional recruitment. These complex structures present a zonal convergence when we sufficiently advance the number of generations, where the edges of the structure present a changing and cyclical behavior.
Finally, we propose the first model that is discrete in all its variables, able to work with different NO dynamics and to analyze the implications of VT in more complex architectures and in aspects related to learning and memory formation.
We consider the DAN model proposal presented in this work, in its two versions, as a first step, and we identified the need to extend our model to incorporate stochastic conditions that make the state of NO generation be induced by higher mechanisms or brain processes. To carry out this generalization, which will also constitute a model that can accommodate arbitrary processes in decision-making mechanisms, we propose the use of fuzzy automata networks. This model will be part of a complete formal framework of volumetric transmission in the brain and in artificial neural networks and therefore in complex decision-making systems.
The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.
The research presented in this paper has been funded by the Project "Investigación en Computación Neuronal por el Grupo de Investigación CIPERBIG (Research in Neural Computation by the CIPERBIG Research Group) (ULPGC)"; No: 23/2021, from Cabildo de Gran Canaria.
We are thankful to the "Council of First Vice-presidency and Public Works, Infrastructures, Transport and Mobility of the Cabildo de Gran Canaria".
The authors declare no conflicts of interest.
Abbreviation/symbol | Description |
NO | Nitric oxide |
VT | Volumetric transmission |
EDRF | Endothelium-derived relaxing factor |
NOS | Nitric oxide synthase |
cNOS | Constitutive nitric oxide synthase |
eNOS | Endothelial isoform nitric oxide synthase |
iNOS | Inducible nitric oxide synthase |
nNOS | Neuronal isoform nitric oxide synthase |
PNS | Peripheral nervous system |
CNS | Central nervous system |
GCs | Soluble guanylate cyclase |
AN | Automata network |
DA | Deterministic automata |
FA | Fuzzy automata |
Internal computing steps | |
Set of states (internal in Fuzzy automata) | |
Set of starting states | |
Set of external actions | |
Set of internal actions | |
PA | Probabilistic automata |
FA | Fuzzy set |
FFA, |
Fuzzy function affiliation |
Set of input states (in Fuzzy Automata) | |
Set of output states (in Fuzzy Automata) | |
DAN | Deterministic automata network |
FAN | Fuzzy automata network |
Ci | Compartment
|
ΠCi | Neighborhood of Ci
|
n | State of nonactivity |
r | State of receiving |
t | State of Transmission |
g | State of NO generation |
a | State of self-regulation |
λ | Langton parameter
DKN | Set of all possible transition functions
sq | Quiescent state
I | Mutual information |
¯I | Average general mutual information
H | Individual entropy
¯H | Average of the general entropy
Version 0 of the automata network:
[1] | C. D. Aliprantis, K. C. Border, Infinite Dimensional Analysis, 3 Eds., Germany: Springer, 2006. |
[2] |
R. F. Arens, J. Eels Jr., On embedding uniform and topological spaces, Pacific J. Math., 6 (1956), 397–403. https://doi.org/10.2140/pjm.1956.6.397 doi: 10.2140/pjm.1956.6.397
![]() |
[3] |
R. Arnau, J. M. Calabuig, E. A. Sánchez Pérez, Representation of Lipschitz Maps and Metric Coordinate Systems, Mathematics, 10 (2022), 3867. https://doi.org/10.3390/math10203867 doi: 10.3390/math10203867
![]() |
[4] | M. Baroni, R. Zamparelli, Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space, Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, 2010, 1183–1193. |
[5] |
G. Boleda, Distributional semantics and linguistic theory, Ann. Rev. Linguist., 6 (2020), 213–234. https://doi.org/10.1146/annurev-linguistics-011619-030303 doi: 10.1146/annurev-linguistics-011619-030303
![]() |
[6] | S. Clark, Vector space models of lexical meaning, In: The Handbook of Contemporary Semantics, Malden: Blackwell, 2015,493–522. |
[7] | Ş. Cobzaş, R. Miculescu, A. Nicolae, Lipschitz functions, Berlin: Springer, 2019. |
[8] |
J. Dai, Y. Zhang, H. Lu, H. Wang, Cross-view semantic projection learning for person re-identification, Pattern Recognit., 75 (2018), 63–76. http://dx.doi.org/10.1016/j.patcog.2017.04.022 doi: 10.1016/j.patcog.2017.04.022
![]() |
[9] |
K. Erk, Vector space models of word meaning and phrase meaning: A survey, Lang. Linguist. Compass, 6 (2012), 635–653. http://dx.doi.org/10.1002/lnco.362 doi: 10.1002/lnco.362
![]() |
[10] |
G. Grand, I. A. Blank, F. Pereira, E. Fedorenko, Semantic projection recovers rich human knowledge of multiple object features from word embeddings, Nat. Hum. Behav., 6 (2022), 975–987. https://doi.org/10.1038/s41562-022-01316-8 doi: 10.1038/s41562-022-01316-8
![]() |
[11] | N. J. Kalton, Spaces of Lipschitz and Hölder functions and their applications, Collect. Math., 55 (2004), 171–217. |
[12] | J. L. Kelley, General Topology, Graduate Texts in Mathematics, New York: Springer, 1975. |
[13] |
A. Lenci, Distributional models of word meaning, Ann. Rev. Linguist., 4 (2018), 151–171. http://dx.doi.org/10.1146/annurev-linguistics-030514-125254 doi: 10.1146/annurev-linguistics-030514-125254
![]() |
[14] | O. Levy, Y. Goldberg, Neural word embedding as implicit matrix factorization, Adv. Neural Inf. Proc. Syst., 2014, 2177–2185. |
[15] | H. Lu, Y. N. Wu, K. J. Holyoak, Emergence of analogy from relation learning, Proc. Natl. Acad. Sci. U. S. A., 116 (2019), 4176–4181. http://dx.doi.org/10.1073/pnas.1814779116 |
[16] | T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean, Distributed representations of words and phrases and their compositionality, Adv. Neural Inf. Proc. Syst., 2013, 3111–3119. |
[17] | J. Pennington, R. Socher, C. Manning, GloVe: Global vectors for word representation, 2014. Available from: https://nlp.stanford.edu/projects/glove/. |
[18] | N. Weaver, Lipschitz Algebras, Singapore: World Scientific Publishing Co., 1999. |
[19] | Y. Xian, S. Choudhury, Y. He, B. Schiele, Z. Akata, Semantic projection network for zero- and few-label semantic segmentation, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, 8256–8265. http://dx.doi.org/10.1109/CVPR.2019.00845 |
[20] | L. A. Zadeh, A fuzzy-set-theoretic interpretation of linguistic hedges, J. Cybern., 2 (1972), 4–34. |