
We denote by G=(V,E) an undirected graph with a set, V, of n vertices and a set, E, of m edges, where E is a set of 2-vertex subsets of V. We say that u∈V is adjacent to v∈V in G if and only if {u,v}∈E. We denote by u∼v an edge {u,v}∈E. Let A and B be disjoint nonempty subsets of V. We call {A,B} a partition of G if and only if A∪B=V. We call {A,B} a satisfactory partition of G if and only if {A,B} is a partition of G such that for every vertex u∈A,

|{v∈A : u∼v}| ≥ |{v∈B : u∼v}|,

and for every vertex x∈B,

|{y∈B : x∼y}| ≥ |{y∈A : x∼y}|.

See, for example, the graph in Figure 1, where {1,2,3,4,5,6} and {7,8,9,10,11,12} form a satisfactory partition of the depicted graph.
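To make the definition concrete, the following minimal checker (our illustration in Python, not part of the paper's artifacts) verifies a candidate partition; it assumes the graph is given as a dictionary adj mapping each vertex to the set of its adjacent vertices.

```python
def is_satisfactory(adj, A, B):
    """Return True iff {A, B} is a satisfactory partition of the graph adj."""
    A, B = set(A), set(B)
    # {A, B} must split the vertex set into two nonempty, disjoint parts.
    if not A or not B or (A & B) or (A | B) != set(adj):
        return False
    # Every vertex needs at least as many neighbours in its own part
    # as in the other part.
    for part, other in ((A, B), (B, A)):
        for u in part:
            if len(adj[u] & part) < len(adj[u] & other):
                return False
    return True
```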
The satisfactory partition problem (SPP for short) is the problem of deciding whether an undirected graph G has a satisfactory partition. The SPP was introduced in a paper [1], where an integer-programming formulation of the SPP was given, and a heuristic procedure was employed for solving the SPP. For an interpretation of the SPP, consider the problem of organizing a sightseeing tour on two boats where it is required to separate the participants into two groups and to satisfy everyone. A participant is satisfied if he knows at least as many people on his boat as on the other. This problem can be seen as an instance of the SPP on a graph where the participants are the vertices, and two vertices are adjacent if the corresponding persons know each other (see the article [1]). Research [2] showed that the SPP can be solved in polynomial time on graphs with bounded clique width. An article [3] studied the complexity of different variants of the SPP. A work [4] proved that the SPP is NP-complete. However, an article [5] showed that for graphs with maximum degree at most 4, the SPP is polynomially solvable. Additionally, a paper [6] presented several parameterized algorithms for solving the SPP.
In this paper, we give new results on the time complexity of computing satisfactory partitions. We prove that for a given undirected graph with n vertices, a satisfactory partition (if any exists) can be computed recursively with a recursion tree of depth $O(\ln n)$ in expectation. Subsequently, we show that a satisfactory partition for those undirected graphs with recursion tree depth matching the expectation can be computed in time $O(n^3 2^{O(\ln n)})$.
The article is structured as follows: Section 2 discusses related work in the broader scope of the graph partitioning literature. Section 3 illustrates a recursive algorithm to compute a satisfactory partition of a given undirected graph and shows that the algorithm's recursion tree has a depth of $O(\ln n)$ in expectation. In Section 4, we give further arguments on the correctness of our results. Section 5 discusses our results, limitations, and future directions, while Section 6 concludes the article.
In the introduction, we discussed closely related work to the study presented in this article. This section gives a glimpse of broader literature overviewing diverse works on various graph processing and applications in recent years. Thus, a study [7] employed RNA graph partitioning to discover RNA modularity. A work [8] studied subgraphs of pair vertices. An article [9] presented a novel edge detection algorithm based on a hierarchical graph-partition approach. A paper [10] exploited a genetic algorithm for graph partition in a heterogeneous cluster. A study [11] presented a large-scale graph partition algorithm with redundant multi-order neighbor vertex storage. An article [12] investigated forbidden subgraphs in reduced power graphs of finite groups. A work [13,14] discussed an ant-local search algorithm for the partition graph coloring problem. An article [15] analyzed an efficient and balanced graph partition algorithm for the subgraph-centric programming model on large-scale power-law graphs. A work [16] introduced an improved spectral graph partition intelligent clustering algorithm for low-power wireless networks. A study [17] presented an artificial intelligence knowledge graph for dynamic networks. A paper [18] used a graph partition sampling algorithm in medical intelligent systems and orthopedic clinical nursing. A study [19] proposed a robust spectral clustering algorithm based on grid partition and decision graph. An article [20] argued about improving a graph-based label propagation algorithm with group partition for fraud detection. A study [21] presented results on monochromatic vertex disconnection of graphs. A paper [22] analyzed a task partition algorithm based on grid and graph partition for distributed crowd simulation. An article [23] introduced a novel sports video background segmentation algorithm based on graph partition. A work [24] discussed a property graph partition algorithm based on improved barnacle mating optimization. A study [25] put forward a text mining method of dispatching operation ticket systems based on graph partition spectral clustering algorithms. A work [26] discussed distinguishing colorings of graphs and their subgraphs. A paper [27] proposed an implementation of a parallel graph partition algorithm to speed up computations. A work [28] presented a memetic algorithm with two distinct solution representations for the partition graph coloring problem. A paper [29] studied the informational entropy of B-ary trees after a vertex cut. A work [30] discussed the existence of a graph whose vertex set can be partitioned into a fixed number of strong domination-critical vertex sets. An article [31] examined network bipartitioning in the anti-communicability Euclidean space. The interested reader may find further citations in the reviewed literature to other studies within the broad domain of graph processing algorithms.
We formalize the structures of our exact process of finding a satisfactory partition, {A,B}, of a given undirected graph G=(V,E). Hence, let $\gamma: V \to \mathbb{N}_0$, $\alpha: V \to \mathbb{N}_0$, and $\beta: V \to \mathbb{N}_0$ be total mappings such that

γ(v) = |{u ∣ u∼v}|, α(v) = 0, and β(v) = 0

for all v. During our process of finding a satisfactory partition, {A,B}, of the given graph G, for every vertex v∈V, we use α(v) to track the number of v's adjacent vertices that are in part A, and we use β(v) to track the number of v's adjacent vertices that are in part B. Since initially we have empty parts, i.e., A=∅ and B=∅, we initialize α(v) and β(v) with 0 for all v. On the other hand, for all vertices v, we employ γ(v) to track the number of v's adjacent vertices that are not yet assigned to a part. Subsequently, for every v, γ(v) is initialized with |{u ∣ u∼v}|.
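A small initialization sketch in Python (our illustration), again assuming the graph is a dictionary adj mapping each vertex to its set of adjacent vertices:

```python
def init_mappings(adj):
    # gamma(v): neighbours of v not yet assigned to a part (initially all);
    # alpha(v): neighbours of v in part A; beta(v): neighbours of v in part B.
    gamma = {v: len(adj[v]) for v in adj}
    alpha = {v: 0 for v in adj}
    beta = {v: 0 for v in adj}
    return gamma, alpha, beta
```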
Additionally, we utilize another total mapping μ: V→{0,1,2} such that for all vertices v, μ(v) is initialized with 0 to indicate that v is assigned to neither A nor B. Thus, assigning 1 to μ(v) means that the vertex v is now a member of A, while assigning 2 to μ(v) implies that v is now a member of B. Whenever a vertex v joins some part, we update the status of the adjacent vertices of v as follows: If v joins A, then we increment α(u) ← α(u)+1 for all u∼v. Likewise, if a vertex v joins part B, then we update β(u) ← β(u)+1
for all u∼v. Whenever we assign a vertex to some part, we verify that this assignment adheres to the satisfactory partition specification. That is, we check that for every vertex v∈A (i.e., μ(v)=1),

α(v) + γ(v) ≥ β(v);

likewise, we ensure that for every vertex v in part B (i.e., μ(v)=2),

β(v) + γ(v) ≥ α(v).

Consequently, whenever we put a vertex in some part, if for some vertex v∈V, μ(v)=1 with

β(v) > α(v) + γ(v),

or μ(v)=2 with

α(v) > β(v) + γ(v),

then a contradiction with the satisfactory-partition specification is found; hence we return to a previous stage of the search process where the satisfactory-partition specification is unviolated by revoking one or more vertices from part A (or from part B), as we elaborate throughout this section.
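In code, this backtracking test could read as follows (a sketch with a hypothetical helper name, using the mappings introduced above):

```python
def violates(v, mu, alpha, beta, gamma):
    # A vertex in part A (mu = 1) can no longer be satisfied once its part-B
    # neighbours outnumber its part-A neighbours plus all still-unassigned
    # neighbours; symmetrically for a vertex in part B (mu = 2).
    if mu[v] == 1:
        return beta[v] > alpha[v] + gamma[v]
    if mu[v] == 2:
        return alpha[v] > beta[v] + gamma[v]
    return False  # unassigned vertices impose no constraint yet
```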
Further, in finding a satisfactory partition, we perform the following routine every time we put a vertex in some part (a sketch follows this list). For every vertex v in the graph,

(i) If v∈A (i.e., μ(v)=1) and the number of v's adjacent vertices in part B has reached ⌊|{w : w∼v}|/2⌋, i.e., half of v's adjacent vertices rounded down, then we assign to part A the adjacent vertices of v that are not assigned to any part yet;

(ii) If v∈B (i.e., μ(v)=2) and the number of v's adjacent vertices in part A has reached ⌊|{w : w∼v}|/2⌋, then we assign to part B the adjacent vertices of v that are not assigned to any part yet.

This routine is repeated every time we put a vertex in some part. We stress that assigning a vertex to some part may require other vertices, which are not assigned to any part yet, to join a specific part to fulfill the satisfactory partition requirement.
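The forced-assignment rule can be sketched as follows (our hypothetical helper; pending collects the newly assigned vertices, playing the role of Δ in Algorithm 1 below):

```python
def force_neighbours(v, mu, alpha, beta, gamma, adj, V, pending):
    # If v's opposite-part neighbour count has reached floor(deg(v)/2), every
    # still-unassigned neighbour of v is forced into v's own part.
    own = alpha[v] if mu[v] == 1 else beta[v]      # neighbours in v's part
    other = beta[v] if mu[v] == 1 else alpha[v]    # neighbours in the other part
    if own + gamma[v] - 1 <= other <= own + gamma[v]:
        for u in adj[v]:
            if mu[u] == 0:                         # u not assigned yet
                mu[u] = mu[v]
                V.discard(u)
                pending.add(u)
```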
Our exact process for constructing a satisfactory partition of a given graph G=(V,E) is rigorously described in Algorithm 1. The algorithm finds a satisfactory partition (if any exists) immediately after Σ(∅,V,μ,α,β,γ) is invoked with μ, α, β, and γ being initialized such that every vertex y∈V has

μ(y) = 0, γ(y) = |{z : z∼y}|, α(y) = 0, and β(y) = 0.

We explain the first parameter (i.e., ∅) passed to the algorithm shortly. The second parameter of our procedure Σ is initially the whole set, V, of the graph vertices. Referring to lines 30–31, a call for a new instance, k, of the algorithm (i.e., the procedure Σ) entails creating a new copy of the passed structures so that the copies belonging to the caller instance, k−1, of Σ are not affected by the updates carried out during the execution of instance k. In other words, all the calls for the procedure Σ are done by passing a copy of the parameters.
Algorithm 1: Σ(Δ,V,μ,α,β,γ)

 1  while Δ≠∅ do
 2      Extract a vertex v from Δ;
 3      if μ(v)=1 then
 4          if β(v)>α(v)+γ(v) then return;
 5          if α(v)+γ(v)−1 ≤ β(v) ≤ α(v)+γ(v) then
 6              foreach u∼v with μ(u)=0 do
 7                  μ(u)←1; V←V∖{u}; Δ←Δ∪{u};
 8          foreach u∼v do
 9              α(u)←α(u)+1; γ(u)←γ(u)−1;
10              if μ(u)=2 ∧ α(u)>β(u)+γ(u) then return;
11              if μ(u)=2 ∧ β(u)+γ(u)−1 ≤ α(u) ≤ β(u)+γ(u) then
12                  foreach s∼u with μ(s)=0 do
13                      μ(s)←2; V←V∖{s}; Δ←Δ∪{s};
14      if μ(v)=2 then
15          if α(v)>β(v)+γ(v) then return;
16          if β(v)+γ(v)−1 ≤ α(v) ≤ β(v)+γ(v) then
17              foreach u∼v with μ(u)=0 do
18                  μ(u)←2; V←V∖{u}; Δ←Δ∪{u};
19          foreach u∼v do
20              β(u)←β(u)+1; γ(u)←γ(u)−1;
21              if μ(u)=1 ∧ β(u)>α(u)+γ(u) then return;
22              if μ(u)=1 ∧ α(u)+γ(u)−1 ≤ β(u) ≤ α(u)+γ(u) then
23                  foreach s∼u with μ(s)=0 do
24                      μ(s)←1; V←V∖{s}; Δ←Δ∪{s};
25  if V=∅ then
26      if {v∣μ(v)=1}≠∅ ∧ {v∣μ(v)=2}≠∅ then
27          μ is a satisfactory partition; end the execution of the algorithm;
28  else
29      Extract (a randomly selected) v from V;
30      select randomly i from {1,2}; μ(v)←i; Σ({v},V,μ,α,β,γ);
31      Let j∈{1,2} but j≠i; μ(v)←j; Σ({v},V,μ,α,β,γ).
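For readers who prefer executable code, the following Python sketch mirrors Algorithm 1 (the line numbers in the comments refer to the pseudocode). It is our reconstruction, not the authors' implementation; the two parts are represented by the labels 1 and 2, and counts[1] and counts[2] stand for the mappings α and β.

```python
import random

def sigma(delta, V, mu, counts, gamma, adj):
    """One instance of the procedure Sigma; returns a pair (A, B) or None.

    mu[v] is 0/1/2 as in the paper; counts[1][v] and counts[2][v] play the
    roles of alpha(v) and beta(v); gamma[v] counts unassigned neighbours.
    """
    while delta:                                            # lines 1-24
        v = delta.pop()                                     # line 2
        p, opp = mu[v], 3 - mu[v]                           # v's part / other part
        if counts[opp][v] > counts[p][v] + gamma[v]:        # lines 4 / 15
            return None
        # The upper bound of lines 5 / 16 already holds since we did not return.
        if counts[p][v] + gamma[v] - 1 <= counts[opp][v]:   # lines 5-7 / 16-18
            for u in adj[v]:
                if mu[u] == 0:
                    mu[u] = p; V.discard(u); delta.add(u)
        for u in adj[v]:                                    # lines 8-13 / 19-24
            counts[p][u] += 1
            gamma[u] -= 1
            if mu[u] == opp:
                if counts[p][u] > counts[opp][u] + gamma[u]:        # lines 10 / 21
                    return None
                if counts[opp][u] + gamma[u] - 1 <= counts[p][u]:   # lines 11-13 / 22-24
                    for s in adj[u]:
                        if mu[s] == 0:
                            mu[s] = opp; V.discard(s); delta.add(s)
    if not V:                                               # lines 25-27
        A = {x for x in mu if mu[x] == 1}
        B = {x for x in mu if mu[x] == 2}
        return (A, B) if A and B else None
    v = random.choice(list(V))                              # line 29
    for part in random.sample([1, 2], 2):                   # lines 30-31
        mu2 = dict(mu)
        mu2[v] = part
        counts2 = {1: dict(counts[1]), 2: dict(counts[2])}
        # Each call receives copies, mirroring the copy-on-call convention.
        result = sigma({v}, V - {v}, mu2, counts2, dict(gamma), adj)
        if result is not None:
            return result
    return None

def satisfactory_partition(adj):
    """Initial call: empty Delta, the full vertex set, and fresh mappings."""
    vertices = set(adj)
    mu = {v: 0 for v in vertices}
    counts = {1: {v: 0 for v in vertices}, 2: {v: 0 for v in vertices}}
    gamma = {v: len(adj[v]) for v in vertices}
    return sigma(set(), vertices, mu, counts, gamma, adj)
```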
Let us go through the algorithm's actions in further detail, line by line. Referring to the caption of Algorithm 1, note that our procedure Σ is recursive (see lines 30 and 31). As noted earlier, we initially run Σ with Δ=∅, the whole vertex set of the given graph V, and the total mappings μ, α, β, γ such that

μ(v) = α(v) = β(v) = 0 while γ(v) = |{u : u∼v}|

for all v∈V. Thus, because Δ is initially empty, we delay discussing the purpose of Δ and the while loop at line 1 in the algorithm. Likewise, since V is initially nonempty, let us skip line 25 and go to line 29, where we take a vertex v out of V. Referring to line 30 (respectively line 31), supposing i=1 (respectively j=2), we assign v to part A (respectively part B) by applying μ(v)←1 (respectively μ(v)←2). Subsequently, we run the algorithm again by invoking Σ({v},V,μ,α,β,γ), trying to construct a satisfactory partition with v∈A (see line 30). If this attempt is unsuccessful, we try constructing a satisfactory partition with v in B (see line 31).
Note that during the very first call for the procedure Σ, there is no need to try constructing a satisfactory partition by assigning v (the vertex extracted from V in line 29) to part B, as this is symmetric to the scenario where v∈A. Suppose no satisfactory partition is possible with v∈A. In that case, no satisfactory partition is possible with v∈B either, which means that in the initial call for Σ, we can omit executing line 31. We leave this tiny detail out of Algorithm 1. However, suppose one wants to include such a detail. In that case, the algorithm should be started with Σ({x},V∖{x},μ,α,β,γ), where the first parameter (i.e., {x}) indicates that the process of constructing a satisfactory partition is started by assigning some vertex x to some part. As said earlier, it does not matter whether x is put in part A or B. Hence, one might start the algorithm by assigning x to part A. Therefore, x must be removed from the vertex set, as indicated by V∖{x}, the second parameter passed to the algorithm. For the rest of the parameters (i.e., μ, α, β, and γ), we mentioned earlier that these structures are usually initialized such that for every vertex y∈V,

μ(y) = 0, γ(y) = |{z : z∼y}|, α(y) = 0, and β(y) = 0.

But since we start the algorithm with some vertex x being included in part A, we must initially set μ(x)←1.
Back to the description of Algorithm 1, now assume that the algorithm has been recursively invoked with Σ({v},V,μ,α,β,γ) where v is assigned to either part A or part B (see lines 30 and 31). At this point, we note that Δ={v}. So, we run the while loop (line 1) and extract v from Δ according to line 2. The purpose of Δ is to temporarily hold those vertices that are newly assigned to some part until we accordingly update the total mappings (employed by the algorithm), as we elaborate next. Referring to line 3, we check if v was assigned to part A (i.e., μ(v)=1), and if so, the following actions need to be performed. In line 4, we examine if the number, β(v), of v's adjacent vertices that are in part B is greater than the sum of the number, α(v), of v's adjacent vertices contained in part A plus the number, γ(v), of v's adjacent vertices that are not assigned to any part yet; if it is the case that

β(v) > α(v) + γ(v),

then the algorithm returns to the previous instance of Σ. In line 5, we verify if the number, β(v), of v's adjacent vertices that are in part B has reached the maximum allowed value of $\lfloor |\{w : w \sim v\}|/2 \rfloor$ by examining the inequality laid down in line 5; if this inequality is true, then the v's adjacent vertices that are not assigned to any part yet must join part A (the part of v); see lines 6 and 7, where for every u adjacent to v with μ(u)=0, we assign u to part A, remove u from V, and then put u in Δ. To see that the inequality in line 5 is true if

$\beta(v) = \lfloor |\{w : w \sim v\}|/2 \rfloor$,

let $k = |\{w : w \sim v\}|$. Recall that throughout the algorithm it is the case that

$\beta(v) + \alpha(v) + \gamma(v) = |\{w : w \sim v\}|$.

If k is even, then the inequality is

$\frac{k}{2} - 1 \le \frac{k}{2} \le \frac{k}{2}$.

If k is odd, then the inequality is

$\frac{k+1}{2} - 1 \le \frac{k-1}{2} \le \frac{k+1}{2}$.

Now we show that if the inequality (in line 5) is true, then it is the case that

$\beta(v) = \lfloor |\{w : w \sim v\}|/2 \rfloor$.

Given that

$\beta(v) + \alpha(v) + \gamma(v) = |\{w : w \sim v\}|$,

the inequality can be rewritten as

$\alpha(v) + \gamma(v) - 1 \le |\{w : w \sim v\}| - \alpha(v) - \gamma(v) \le \alpha(v) + \gamma(v)$,

which is

$2\alpha(v) + 2\gamma(v) - 1 \le |\{w : w \sim v\}| \le 2\alpha(v) + 2\gamma(v)$.

Divide all sides by two and then take the floor value. The inequality becomes

$\lfloor \alpha(v) + \gamma(v) - \tfrac{1}{2} \rfloor \le \lfloor |\{w : w \sim v\}|/2 \rfloor \le \lfloor \alpha(v) + \gamma(v) \rfloor$.

Hence,

$\alpha(v) + \gamma(v) - 1 \le \lfloor |\{w : w \sim v\}|/2 \rfloor \le \alpha(v) + \gamma(v)$.

That is,

$\beta(v) = \lfloor |\{w : w \sim v\}|/2 \rfloor$.
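A quick numeric spot-check of this equivalence (our illustration): for a vertex v of degree k with β(v) + α(v) + γ(v) = k, the inequality of line 5 holds exactly when β(v) = ⌊k/2⌋.

```python
for k in range(1, 10):                         # degree of v
    for beta_v in range(k + 1):                # candidate value of beta(v)
        rest = k - beta_v                      # alpha(v) + gamma(v)
        in_band = rest - 1 <= beta_v <= rest   # the inequality of line 5
        assert in_band == (beta_v == k // 2)   # holds iff beta(v) = floor(k/2)
```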
Back to the algorithm. In line 8, we process the adjacent vertices of v as follows: For every vertex u∼v, we update

α(u) ← α(u)+1 and γ(u) ← γ(u)−1 (line 9).

Subsequently, we check whether the specifications of satisfactory partition are violated as a consequence of v being included in part A. More specifically, referring to line 10, if a vertex u (such that u∼v) is in part B, then we verify that α(u), the number of adjacent vertices of u that are in part A, is still less than or equal to β(u)+γ(u), which is the number of adjacent vertices of u that are either in the part of u (i.e., B) or not assigned to any part. Otherwise, again referring to line 10, the algorithm returns to the previous state (i.e., the algorithm returns to the caller instance of Σ). Moving on to line 11, the algorithm tries to find out if any adjacent vertices of u must join a specific part, according to the definition of satisfactory partition. Therefore, in line 11, we check that if u is in part B and the number of u's adjacent vertices in part A has reached the maximum allowed value, i.e.,

$\alpha(u) = \lfloor |\{w : w \sim u\}|/2 \rfloor$,

then for every vertex s∼u such that s is not assigned to any part yet (lines 11 and 12), s must join part B, and consequently s is removed from V and then included in Δ (line 13).
The rest of the while loop, lines 14–24 in Algorithm 1, deals with the case where the vertex v (extracted from Δ in line 2) is assigned to part B. This is analogous to the scenario of v being assigned to part A (lines 3–13). However, we trace lines 14–24 for the reader's convenience. Referring to line 15, if the number, α(v), of v's adjacent vertices that are in part A is greater than the sum of the number, β(v), of v's adjacent vertices that are in part B plus the number, γ(v), of v's adjacent vertices that are not assigned to any part, then the algorithm returns to the previous stage (i.e., to the caller instance of Σ). As to line 16, if the number, α(v), of v's adjacent vertices that are in part A has reached the maximum allowed value, i.e.,

$\alpha(v) = \lfloor |\{w : w \sim v\}|/2 \rfloor$,

then, in lines 17 and 18, we put into part B every u∼v with μ(u)=0 (i.e., every u∼v not assigned to a part yet). In line 18, we move u from V to Δ so that u is processed in a subsequent round of the while loop. In lines 19–20, we update

β(u) ← β(u)+1 and γ(u) ← γ(u)−1

for every u∼v. In line 21, if a vertex u∼v is in part A and the number, β(u), of u's adjacent vertices that are in part B exceeds the sum of the number, α(u), of u's adjacent vertices that are in part A plus the number, γ(u), of u's adjacent vertices that are not assigned to any part, then the algorithm returns to the previous state (i.e., the algorithm returns to the caller instance of the procedure Σ). In line 22, we check whether there is a vertex u∼v such that u∈A (i.e., μ(u)=1), but the number, β(u), of u's adjacent vertices belonging to part B has reached the maximum allowed value, i.e.,

$\beta(u) = \lfloor |\{w : w \sim u\}|/2 \rfloor$;

if this is true, then we put each s∼u into part A, provided that s is not already included in any part; see lines 23 and 24. In line 24, we move s from V to Δ so that s is processed further in a later round of the while loop.
Thereby, the while loop continues as long as Δ has some vertices that need to be processed. In summary, we note that the while loop's actions ensure that the current part assignment of graph vertices is consistent with the satisfactory partition specifications. Going beyond the while loop, in line 25, the algorithm checks if all the graph vertices are included in some part; if not, i.e., if V≠∅, then we repeat the same process as discussed above by re-applying lines 29–31, and so forth. Referring to lines 25–27, if V=∅ (line 25) with A and B being nonempty (line 26), then we stop the search process since {A,B} is a satisfactory partition of the given graph. Referring to line 26, recall that the set {v ∣ μ(v)=1} designates part A, whereas the set {v ∣ μ(v)=2} denotes part B. By this, we have completed the description of Algorithm 1 and its structures. The next section discusses the algorithm's running time (and running space).
Example 1. Let us demonstrate the operation of Algorithm 1 on the graph depicted in Figure 2.
Initially, we invoke Σ(Δ,V,μ,α,β,γ) with

Δ = ∅, V = {w,x,y,z},
μ = {(w,0),(x,0),(y,0),(z,0)}, α = {(w,0),(x,0),(y,0),(z,0)},
β = {(w,0),(x,0),(y,0),(z,0)}, γ = {(w,2),(x,2),(y,2),(z,2)}.
Assume that w is extracted from the vertex set (in line 29). Then, in line 30, suppose w is assigned to the first part A by labeling it with μ(w)←1. Then, we invoke Σ(Δ,V,μ,α,β,γ) with

Δ = {w}, V = {x,y,z},
μ = {(w,1),(x,0),(y,0),(z,0)}, α = {(w,0),(x,0),(y,0),(z,0)},
β = {(w,0),(x,0),(y,0),(z,0)}, γ = {(w,2),(x,2),(y,2),(z,2)}.

Now, line 2 is operated to extract w from Δ, which makes Δ empty; then, line 9 is executed twice for w's adjacent vertices x and z such that

α(x) = 1, γ(x) = 1, α(z) = 1, and γ(z) = 1.

Thus, after execution of this round of the while loop, the state is

Δ = ∅, V = {x,y,z},
μ = {(w,1),(x,0),(y,0),(z,0)}, α = {(w,0),(x,1),(y,0),(z,1)},
β = {(w,0),(x,0),(y,0),(z,0)}, γ = {(w,2),(x,1),(y,2),(z,1)}.
Now, line 29 is executed. Assume that x is extracted from V. Then, in line 30, suppose x is assigned to part B by labeling it with μ(x)←2. Afterwards, in line 30, we invoke Σ(Δ,V,μ,α,β,γ) with

Δ = {x}, V = {y,z},
μ = {(w,1),(x,2),(y,0),(z,0)}, α = {(w,0),(x,1),(y,0),(z,1)},
β = {(w,0),(x,0),(y,0),(z,0)}, γ = {(w,2),(x,1),(y,2),(z,1)}.

Now, we track the actions taken in a round of the while loop. Line 2 is operated to extract x from Δ, which makes Δ empty. Lines 17 and 18 are executed such that

μ(y) = 2, V = {z}, Δ = {y}.

Lines 19 and 20 are operated such that

β(w) = 1, γ(w) = 1, β(y) = 1, γ(y) = 1.

Then, lines 23 and 24 are executed such that

μ(z) = 1, V = ∅, Δ = {y,z}.

In summary, after execution of this round of the while loop, the state is

Δ = {y,z}, V = ∅,
μ = {(w,1),(x,2),(y,2),(z,1)}, α = {(w,0),(x,1),(y,0),(z,1)},
β = {(w,1),(x,0),(y,1),(z,0)}, γ = {(w,1),(x,1),(y,1),(z,1)}.
The following actions are taken in the next round of the while loop. Line 2 is run to extract y from Δ, so Δ = {z}. Lines 19 and 20 are executed such that

β(x) = 1, γ(x) = 0, β(z) = 1, γ(z) = 0.

The state after execution of the second round is

Δ = {z}, V = ∅,
μ = {(w,1),(x,2),(y,2),(z,1)}, α = {(w,0),(x,1),(y,0),(z,1)},
β = {(w,1),(x,1),(y,1),(z,1)}, γ = {(w,1),(x,0),(y,1),(z,0)}.
In the following round of the while loop, line 2 is run to extract z from Δ, so Δ = ∅. Lines 8 and 9 are executed such that

α(w) = 1, γ(w) = 0, α(y) = 1, γ(y) = 0.

Now, the state is

Δ = ∅, V = ∅,
μ = {(w,1),(x,2),(y,2),(z,1)}, α = {(w,1),(x,1),(y,1),(z,1)},
β = {(w,1),(x,1),(y,1),(z,1)}, γ = {(w,0),(x,0),(y,0),(z,0)}.
Since Δ is empty now, there are no more rounds of the while loop. Hence, line 27 is executed to terminate the algorithm as a satisfactory partition {{x,y},{w,z}} is found.
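Running the implementation sketched after Algorithm 1 on this 4-cycle reproduces the example (up to the algorithm's random choices):

```python
# The graph of Figure 2: the 4-cycle w - x - y - z - w.
adj = {'w': {'x', 'z'}, 'x': {'w', 'y'}, 'y': {'x', 'z'}, 'z': {'w', 'y'}}

result = satisfactory_partition(adj)
print(result)                                   # e.g., ({'w', 'z'}, {'x', 'y'})
assert result is not None and is_satisfactory(adj, *result)
```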
We first discuss the implementation of V in Algorithm 1. We implement V as a variable-size array, R, of |V| pairs, where each pair has a vertex v and a boolean value b reflecting whether v is deleted from V or not, such that b = true means that the vertex v is still in V. Now, deleting v from V is implemented by first locating v in R using another, fixed-size array I, whose entry I[v] holds the current position of v in R. Note that throughout the algorithm's execution, the size of I constantly equals the number of vertices of the originally inputted graph. Hence, by applying

R[I[v]].b ← false,

we delete v from V. Therefore, deleting one vertex from V costs constant time, which implies that line 29 in the algorithm runs in constant time.
However, observe that before we randomly select a vertex to be extracted from V (line 29), we must shrink V's underlying array R due to the vertex removals from V that happened in lines 7, 13, 18, 24, and 29 since the last time we ran line 29. To this end, we create an array, R′, of pairs (v,b) with a size of |R|−d, where d is the number of vertex deletions that occurred since the last time we shrank R. Then, we check all R[k]; if R[k].b = true, then we copy R[k] into R′[m], where m is the next unfilled position of R′, and perform

I[R[k].v] ← m.

After processing all R[k], we discard R and replace it with R′ to represent the current V.
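The following Python sketch captures this array-backed scheme (our illustration; a dictionary stands in for the fixed-size index array I):

```python
import random

class VertexSet:
    """V as a variable-size array R of (vertex, alive) pairs plus an index I."""
    def __init__(self, vertices):
        self.R = [[v, True] for v in vertices]
        self.I = {v: k for k, (v, _) in enumerate(self.R)}  # position of v in R

    def delete(self, v):
        # Constant time: look v up via I and clear its alive-flag.
        self.R[self.I[v]][1] = False

    def shrink(self):
        # Rebuild R without deleted pairs, refreshing the positions held in I.
        self.R = [pair for pair in self.R if pair[1]]
        for m, (v, _) in enumerate(self.R):
            self.I[v] = m

    def extract_random(self):
        # Shrink first so every slot of R holds a live vertex (line 29).
        self.shrink()
        v = self.R[random.randrange(len(self.R))][0]
        self.delete(v)
        return v
```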
Now, we show that the recursion tree of Algorithm 1 has a logarithmic depth in expectation.
Theorem 1. For a given undirected graph with n vertices, the recursion tree of Algorithm 1 has a depth of $O(\ln n)$ in expectation.
Proof. Whenever we make a non-terminal recursive call to the algorithm, we extract a randomly selected vertex v from the current vertex set V; see line 29 in the algorithm. So, the expected depth of the recursion tree of Algorithm 1 is

$\sum_{v=1}^{n} E_v(X)$,

where $E_v(X)$ is the expected number of times a vertex v is selected and extracted within a simple, complete path of the recursion tree starting from the root call until a terminal call where V is empty (see line 25 in the algorithm).
Considering line 29, let V={1,...,i} such that 1≤i≤n. Recall that throughout the algorithm's execution, V is a subset of the vertex set of the originally inputted graph. At the start of the execution, V contains n vertices. But as we go on with the algorithm's execution, V is reduced to i≤n vertices as an effect of executing lines 7, 13, 18, 24, and 29 in the algorithm. The algorithm keeps reducing V across multiple recursive calls until V is empty; see line 25 in Algorithm 1.
To calculate the expected number of times a given vertex v is selected and extracted within a simple, complete path of the recursion tree, let

$X : \{(\{1,2,\dots,i\}, v) \mid 1 \le i \le n\} \to \{0, 1\}$

be a random variable that represents the number of times we select and extract a given vertex v from a set of i vertices. For every i, it is the case that $X((\{1,2,\dots,i\}, v)) = 1$ if $1 \le v \le i$, and otherwise $X((\{1,2,\dots,i\}, v)) = 0$.
For a given $i \le n$, it is the case that

$P(X((\{1,2,\dots,i\}, v)) = 1) = \left(\frac{1}{i}\right)\left(\frac{1}{n}\right)$,

where $\frac{1}{i}$ is the probability of selecting v from $i \le n$ vertices and $\frac{1}{n}$ is the probability of having $i \le n$ vertices. Hence, the expected depth of the recursion tree of Algorithm 1 is

$\sum_{v=1}^{n} E_v(X) = \sum_{v=1}^{n} \sum_{i=1}^{n} X((\{1,2,\dots,i\}, v)) \, P(X((\{1,2,\dots,i\}, v)) = 1) = \sum_{v=1}^{n} \sum_{i=1}^{n} (1)\left(\frac{1}{i}\right)\left(\frac{1}{n}\right) = \sum_{v=1}^{n} \left(\frac{1}{n}\right) \sum_{i=1}^{n} \frac{1}{i} = \sum_{i=1}^{n} \frac{1}{i}.$

Thus,

$\sum_{v=1}^{n} E_v(X) \in O(\ln n)$.
This ends the proof.
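The bound is the familiar growth of the harmonic number $H_n = \sum_{i=1}^{n} 1/i$; a quick empirical check (our illustration):

```python
import math

def harmonic(n):
    return sum(1.0 / i for i in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    # H_n differs from ln n by less than 1, matching the O(ln n) expected depth.
    print(n, round(harmonic(n), 4), round(math.log(n), 4))
```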
In the following theorem, we illustrate the overall time complexity of Algorithm 1 for cases where the recursion tree of the algorithm has a logarithmic depth.
Theorem 2. Let G be an undirected graph with n vertices such that Algorithm 1's recursion tree depth matches the expectation, i.e., $O(\ln n)$. Then, Algorithm 1 runs in time $O(n^3 2^{O(\ln n)})$.

Proof. Observe that the algorithm's running time is bounded by the number of Σ calls multiplied by the running time of the while loop (in line 1 in the algorithm). However, the while loop requires at most $O(n^3)$ time due to the three nested loops that have no more than n rounds each. Referring to lines 29–31 in the algorithm, assume two Σ calls are made for every graph vertex. Then, the algorithm's running time in this extreme scenario is bounded by $O(n^3 2^n)$. Nevertheless, as the depth of the recursion tree of Algorithm 1 is $O(\ln n)$, the recursion tree size is $O(2^{O(\ln n)})$. Consequently, the overall running time of Algorithm 1 is estimated by the running time of the while loop (i.e., $O(n^3)$) multiplied by the recursion tree size (i.e., $O(2^{O(\ln n)})$). This means Algorithm 1 runs in time $O(n^3 2^{O(\ln n)})$.
Regarding the space complexity of Algorithm 1, we note that the input graph requires $O(n^2)$ space. In the following theorem, we discuss the additional space needed by Algorithm 1 other than the space required to hold the input graph, showing that this additional space is linearithmic in expectation.

Theorem 3. For a given undirected graph with n vertices, Algorithm 1 runs in additional space $O(n \ln n)$ in expectation.
Proof. We note that the maximum size of any structure employed by the algorithm is $O(n)$ and the depth of the recursion of the algorithm is at most $O(n)$, which implies that the maximum number of copies of any structure employed by the algorithm is $O(n)$. Thus, the additional space of Algorithm 1 is $O(n^2)$ in the worst case. But since the expected depth of the recursion of the algorithm is logarithmic, as established earlier, the expected additional space is $O(n \ln n)$.
The following technical lemmata together solidify the validity of Algorithm 1. Recall that the algorithm finds (if any exists) a satisfactory partition {A,B} for a given undirected graph G. In the following lemma, we state that throughout the execution of our algorithm, the algorithm (line 3) checks every vertex assigned to part A; likewise, the algorithm (line 14) checks every vertex put in part B.
Lemma 1. Throughout the execution of Algorithm 1 on a graph G, for all vertices y of G, whenever y is put into part A or part B, then the algorithm examines the adjacent vertices, z, of y to update the mappings α(z), β(z), and γ(z) accordingly.
Proof. Recall that setting μ(x) to 1 means that x is included in part A, while setting μ(x) to 2 indicates that x is assigned to part B. Now, it suffices to note that whenever the algorithm assigns a vertex to a part, it immediately adds the vertex to Δ; see lines 7, 13, 18, 24, 30, and 31. Subsequently, the algorithm processes every vertex v in Δ (see line 2), and later, the algorithm examines all v's adjacent vertices, u, to update the mappings α(u), β(u), and γ(u) accordingly; see lines 3, 8, 9, 14, 19, and 20.
The following three lemmata show the correct usage of the mappings α, β, and γ, respectively.
Lemma 2. Throughout the execution of Algorithm 1 on a graph G, for any vertex y in G, α(y) indicates exactly the number of y's adjacent vertices that are in part A.
Proof. Recollect that our algorithm is started with α(y) = 0 for every vertex y in G. Likewise, the algorithm is started with μ(y) = 0 for all y in G, which means that the algorithm is started with part A being empty. Throughout its execution, the algorithm performs

α(y) ← α(y)+1

for any vertex y whenever an adjacent vertex of y is assigned to part A; see lines 3, 8, and 9. Recall that, by definition, assigning μ(x)←1 for any vertex x is equivalent to assigning x to part A.
Lemma 3. Throughout the execution of Algorithm 1 on a graph G, for any vertex y in G, β(y) indicates exactly the number of y's adjacent vertices that are in part B.
Proof. Recall that our algorithm is started with β(y) = 0 for every vertex y in G. Additionally, the algorithm is started with μ(y) = 0 for all y in G, which implies that part B is initially empty. The algorithm updates

β(y) ← β(y)+1

for any vertex y whenever the algorithm assigns an adjacent vertex of y to part B; see lines 14, 19, and 20. By definition, assigning μ(x)←2 for any vertex x implies putting x into part B.
Lemma 4. Throughout the execution of Algorithm 1 on a graph G, for any vertex y in G, γ(y) indicates exactly the number of y's adjacent vertices that are not assigned to any part.
Proof. Consider that our algorithm is started with γ(y) = |{x : x∼y}| for every vertex y in G. Similarly, the algorithm is started with part A and part B being empty. The algorithm updates

γ(y) ← γ(y)−1 (lines 9 and 20)

for any vertex y whenever the algorithm assigns an adjacent vertex of y to either part A (see lines 3, 8, and 9) or part B (see lines 14, 19, and 20).
Now, we emphasize the integrity of line 4, where Algorithm 1 might return to a previous point of the search process under some conditions, as detailed in the following lemma:
Lemma 5. For any graph G, at any stage of the execution of Algorithm 1, if there is v∈A (i.e., μ(v)=1) with

β(v) > α(v) + γ(v),

then there is no A′⊇A, B′⊇B such that {A′,B′} is a satisfactory partition of G.
Proof. This follows directly from the definition of satisfactory partition. But we mean here to highlight that the premise of this lemma corresponds to the conditions of lines 3 and 4. Likewise, we stress that the consequence of this lemma agrees with the action of Algorithm 1 as declared in line 4, where the algorithm returns to a previous instance of the procedure Σ, which holds the previous copy of an under-construction satisfactory partition.
Next, we show the soundness of line 10, where Algorithm 1 goes back to a previous point of the search process under some conditions, as illustrated in the following lemma:
Lemma 6. For any graph G, at any stage of the execution of Algorithm 1, if there is a vertex v∈A (i.e., μ(v)=1) with a vertex u∼v such that u∈B (i.e., μ(u)=2) and

α(u) > β(u) + γ(u),

then there is no A′⊇A, B′⊇B such that {A′,B′} is a satisfactory partition of G.
Proof. This lemma follows directly from the definition of satisfactory partition. Observe that the premise of this lemma corresponds to the conditions of lines 3, 8, and 10 in Algorithm 1. Similarly, the consequence of this lemma is consistent with the action of Algorithm 1 as stated in line 10, where the algorithm returns to the previous instance of the procedure Σ that holds the previous copy of an under-construction satisfactory partition.
In the following lemma, we focus on the correctness of line 15, where Algorithm 1 might return to a previous point of the search process under some conditions, as detailed next.
Lemma 7. For any graph G, at any stage of the execution of Algorithm 1, if there is v∈B (i.e., μ(v)=2) with

α(v) > β(v) + γ(v),

then there is no A′⊇A, B′⊇B such that {A′,B′} is a satisfactory partition of G.
Proof. This follows directly from the definition of satisfactory partition. Note that the premise of this lemma corresponds to the conditions of lines 14 and 15 in Algorithm 1. Likewise, the consequence of this lemma agrees with the action of Algorithm 1 as declared in line 15, where the algorithm returns to the previous instance of the procedure Σ, which holds the previous copy of an under-construction satisfactory partition.
Now, we underline the correctness of line 21, where Algorithm 1 returns to a previous stage of the search process when some conditions hold, as stated in the following lemma:
Lemma 8. For any graph G, throughout the execution of Algorithm 1, if there is a vertex v∈B (i.e., μ(v)=2) such that there exists u∼v with u∈A (i.e., μ(u)=1) and

β(u) > α(u) + γ(u),

then there is no A′⊇A, B′⊇B such that {A′,B′} is a satisfactory partition of G.
Proof. This lemma is in line with the definition of satisfactory partition. Observe that the premise of this lemma corresponds to the condition of lines 14, 19, and 21 in Algorithm 1. Further, the consequence of this lemma is consistent with the action of Algorithm 1 as stated in line 21, where the algorithm returns to the previous instance of the procedure Σ that holds the previous copy of an under-construction satisfactory partition.
Next, we prove the correctness of lines 11–13 and 16–18, where Algorithm 1 decides to assign some vertex to part B provided that some conditions hold, as demonstrated by the following lemma:

Lemma 9. For any graph G, at any stage of the execution of Algorithm 1, if there is x∈B (i.e., μ(x)=2) such that

β(x) + γ(x) − 1 ≤ α(x) ≤ β(x) + γ(x)

and there is a satisfactory partition, {A′,B′}, of G such that A′⊇A with B′⊇B, then for every y∼x with y∉A∪B, y∈B′.
Proof. The premise of this lemma corresponds to lines 11, 14, and 16. The consequence of this lemma is guaranteed by Algorithm 1 according to the actions in lines 12, 13, 17, and 18. However, let us show the correctness of these actions (i.e., this lemma) by contradiction. Suppose there is y∼x with y∉A∪B such that y∉B′. This means y∈A′∖A. Hence, this requires

β(x) + γ(x) − 2 ≤ α(x) + 1 ≤ β(x) + γ(x) − 1.

Therefore, it holds that

β(x) + γ(x) − 3 ≤ α(x) ≤ β(x) + γ(x) − 2.

Consequently, α(x) = β(x) + γ(x) − 2 or α(x) = β(x) + γ(x) − 3. This contradicts the inequality given in the premise of this lemma,

β(x) + γ(x) − 1 ≤ α(x) ≤ β(x) + γ(x),

which means α(x) = β(x) + γ(x) − 1 or α(x) = β(x) + γ(x). This completes the proof of this lemma.
Now, we aim to show the correctness of lines 5–7 and 22–24, where Algorithm 1 decides to assign some vertex to part A provided that some conditions hold, as we elaborate in the following lemma:

Lemma 10. For any graph G, at any stage of the execution of Algorithm 1, if there is x∈A (i.e., μ(x)=1) such that

α(x) + γ(x) − 1 ≤ β(x) ≤ α(x) + γ(x)

and if there is a satisfactory partition, {A′,B′}, of G with A′⊇A and B′⊇B, then for every y∼x with y∉A∪B, y∈A′.
Proof. The premise of this lemma corresponds to lines 3, 5, and 22. The consequence of this lemma is guaranteed by Algorithm 1 according to the actions in lines 6, 7, 23, and 24. However, let us show the correctness of these actions (i.e., this lemma) by contradiction. Suppose there is y∼x with y∉A∪B such that y∉A′. Thus, y∈B′∖B. This requires that

α(x) + γ(x) − 2 ≤ β(x) + 1 ≤ α(x) + γ(x) − 1.

Therefore, it holds that

α(x) + γ(x) − 3 ≤ β(x) ≤ α(x) + γ(x) − 2.

Consequently, β(x) = α(x) + γ(x) − 2 or β(x) = α(x) + γ(x) − 3. This contradicts the inequality given in the premise of this lemma,

α(x) + γ(x) − 1 ≤ β(x) ≤ α(x) + γ(x),

which implies β(x) = α(x) + γ(x) − 1 or β(x) = α(x) + γ(x). This completes the proof of this lemma.
In the following, we show the correctness of lines 25–27 in Algorithm 1.
Lemma 11. For any graph G, if line 27 of Algorithm 1 is executed, then the reported set

{{x ∣ μ(x)=1}, {x ∣ μ(x)=2}},

which denotes the set {A,B}, is truly a satisfactory partition of the inputted graph G.
Proof. We need to show that the set {A,B} reported by the algorithm in line 27 satisfies the definition of satisfactory partition. Thus, we first show that A≠∅ and B≠∅, which is guaranteed by line 26. Likewise, we need to show that A∩B=∅; this is ensured using our total mapping μ throughout the algorithm. Additionally, we need to show that A∪B=V, which is certain by the actions of the algorithm: whenever a vertex v joins part A or part B, v is removed from the vertex set V; see lines 7, 13, 18, and 24. Further, the algorithm reports {A,B} being a partition of the inputted graph if and only if V=∅; see line 25. Equally, we need to show that

∀x∈A, α(x) ≥ β(x),

which is warranted by lines 4 and 21; note that whenever line 27 is executed, it is the case that γ(x)=0 for all vertices x. Lastly, we need to prove that

∀y∈B, β(y) ≥ α(y),

which is maintained by lines 10 and 15.
In our last theorem, we stress that Algorithm 1 is correct.
Theorem 4. Algorithm 1 finds a satisfactory partition (if any exists) of a given undirected graph.
Proof. Lemmas 1–11 altogether show this claim.
Literature demonstrating applications of "satisfactory partitions" is scarce. Hence, a natural direction to extend this research line in the future is to conduct case studies in domains that can benefit from the "satisfactory partitions" notion. Nonetheless, identifying a satisfactory partition of a graph may play a vital role in understanding the social structure of communities. Take, for example, a community of voters; one might be interested in discovering networks of electors or coalitions that are internally cohesive. Likewise, consider a human-resources management case where a company aims to create two teams to implement different projects. The company desires teams with minimal conflict, minimal communication across teams, and maximal collaboration within each team. Represent employees as vertices and put links between the vertices to denote interactions between the employees. A satisfactory partition of the resulting graph will be an optimal solution to this team formation problem. However, further work might examine diverse potential applications in other specialized domains, such as studying fullerenes (see, e.g., [32,33]).
Limitations of our study lie in that we calibrate the design of our algorithm (i.e., Algorithm 1) to optimize its expected recursion depth. Specifically, we incorporate the process of the "while" loop, which runs in cubic time. However, for the general worst case, the running time of Algorithm 1 is $O(n^3 2^n)$. Suppose we drop the while loop and modify the if statement (at line 25) to check for the conditions of a satisfactory partition. In that case, the algorithm becomes literally a "generate and test" procedure that always runs (in all cases) in exponential time $O(2^n)$. Compare this to the running time $O(n^3 2^{O(\ln n)})$ of Algorithm 1 under the assumption that the recursion depth of the algorithm does not exceed the expected depth $O(\ln n)$. As discussed earlier in this article, this upper bound of the expected recursion depth is reached by approximating the sum $\sum_{i=1}^{n} \frac{1}{i}$. In fact,

$\ln n + \frac{1}{n} \le \sum_{i=1}^{n} \frac{1}{i} \le \ln n + 1$.

This means that if the recursion depth of Algorithm 1 does not exceed the expectation, then the running time of Algorithm 1 is in

$O(n^3 2^{\ln n}) \subseteq O(n^3 \cdot n^{\ln 2}) \subseteq O(n^{3.7})$.

Therefore, there is an obvious trade-off between achieving a logarithmic expected depth of the recursion of the algorithm versus an all-case exponential time of $O(2^n)$. Future research might investigate whether the amortized running time of the while loop is better than cubic time to mitigate this trade-off.
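The exponent arithmetic above uses the identity $2^{\ln n} = n^{\ln 2}$, so $n^3 \cdot 2^{\ln n} = n^{3 + \ln 2} \approx n^{3.693}$, which lies inside $O(n^{3.7})$; a short numeric check (our illustration):

```python
import math

for n in (10, 100, 1000):
    lhs = n**3 * 2**math.log(n)        # n^3 * 2^(ln n)
    rhs = n ** (3 + math.log(2))       # n^(3 + ln 2) ~ n^3.693
    assert math.isclose(lhs, rhs)      # the two expressions coincide
```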
We studied an algorithm for computing a satisfactory partition of an undirected graph and showed that for a given undirected graph with n vertices, a satisfactory partition (if any exists) can be computed recursively with a recursion tree of depth $O(\ln n)$ in expectation. Likewise, we showed that a satisfactory partition for those undirected graphs whose recursion tree depth meets the expectation can be computed in time $O(n^3 2^{O(\ln n)})$. However, in the general case, we note that the algorithm runs in time $O(n^3 2^n)$, as it is known that the problem of computing a satisfactory partition is NP-complete.
The author declares no conflict of interest.