An H-intersecting family of graphs is a collection of finite, undirected, and simple graphs (i.e., graphs with no self-loops or parallel edges) on a fixed number of vertices, in which the intersection of every two graphs in the family contains a subgraph isomorphic to H. For instance, if H is an edge or a triangle, then every pair of graphs in the family shares at least one edge or triangle, respectively. These intersecting families of graphs play a central role in extremal graph theory, where determining their maximum possible size remains a longstanding challenge. Different choices of H lead to distinct combinatorial problems and structural constraints.
A pivotal conjecture, proposed in 1976 by Simonovits and Sós, concerned the maximum size of triangle-intersecting graph families---those in which the intersection of any two graphs contains a triangle. They conjectured that the largest such family consists of all graphs on n vertices that contain a fixed triangle, leading to the conjectured largest size of 2^{\binom{n}{2}-3} . Furthermore, a trivial upper bound on the size of a triangle-intersecting graph family on n vertices is 4 times this conjectured value. This holds since a graph and its complement cannot both belong to an edge-intersecting family, let alone a triangle-intersecting family. The foundational work in [1,2,3] explores intersection theorems for graph families whose shared subgraphs are cycles or paths.
The first major progress on the conjecture by Simonovits and Sós was made in 1986 by Chung, Graham, Frankl, and Shearer [4], who utilized Shearer's inequality to establish a non-trivial upper bound on the largest possible cardinality of a family of triangle-intersecting graphs with a fixed number of vertices. This bound was "midway" between the trivial and conjectured bounds (or, more formally, it was equal to twice the conjectured bound, which is also the geometric mean of the conjectured and trivial bounds).
The conjecture by Simonovits and Sós was ultimately resolved in 2012 by Ellis, Filmus, and Friedgut [5], who proved that the largest triangle-intersecting family comprises all graphs containing a fixed triangle. Building on the Fourier analytic methods of Boolean functions, used to prove this conjecture in [5] (see also Section 4 of [6]), a recent work by Berger and Zhao [7] extended the investigation to K_4-intersecting graph families, addressing analogous questions for graph families where every pair of graphs intersects in a complete subgraph of size four. Additionally, Keller and Lifshitz [8] constructed, for every graph H and for every p ∈ (1/2, 1), an H-intersecting family of graphs G on n vertices such that a random graph G∼G(n,p) belongs to G with probability tending to 1 exponentially fast in n². Here, G(n,p) denotes the (binomial) Erdős–Rényi random graph, in which every edge of K_n (the complete graph on n vertices) is included independently with probability p. These contributions highlight the interplay between combinatorial, probabilistic, and algebraic methods in the analysis of intersecting graph families.
The interplay between Shannon entropy and extremal combinatorics has significantly enhanced the understanding of the structural and quantitative properties of combinatorial objects through information-theoretic methods. Entropy serves as a versatile and powerful tool to derive concise, often elegant proofs of classical results in extremal combinatorics (see, e.g., Chapter 37 of [9], Chapter 22 of [10], and [11,12,13,14,15]). Notable examples include Radhakrishnan's entropy-based proof of Bregman's theorem on matrix permanents [14] and the application of Shearer's lemma to upper-bound the size of the largest triangle-intersecting graph families with a fixed number of vertices [4]. Beyond this specific context, Shearer's inequalities have found extensive applications across diverse areas, including finite geometry, graph theory, the analysis of Boolean functions, and large deviations (see [4,15,16,17,18,19,20,21,22]). A recent talk by the author addresses Shearer's inequalities and their applications in combinatorics [23].
The first part of this paper relies on the combinatorial version of Shearer's inequalities in [4] to derive a new upper bound on the cardinality of families of H-intersecting graphs with a fixed number of vertices. The bound represents a nontrivial extension of the bound in [4], where H is specialized to a triangle. The derived bound is expressed in terms of the chromatic number of H, while a relaxed version, formulated using the Lovász ϑ-function of the complement of H, reduces computational complexity. The relaxed bound is further explored in the case where H is a regular graph, particularly when it is strongly regular.
Graph homomorphisms serve as a versatile framework to understand graph mappings, which facilitate studies of structural properties, colorings, and symmetries. The applications of graph homomorphisms span various fields, including statistical physics, where they model spin systems [24], and computational complexity, where they underpin constraint satisfaction problems [25]. Recent research has yielded profound insights into counting graph homomorphisms, a problem with deep theoretical and practical relevance (see [11,26,27,28,29,30,31]). The second part of this paper relies on Shearer's inequalities and properties of Shannon entropy to derive bounds on the number of homomorphisms.
The paper is structured as follows: Section 2 presents essential preliminary material, including three versions of Shearer's inequalities. In Section 3, the combinatorial version of Shearer's lemma is employed for upper bounding the size of H-intersecting families of graphs. Section 4 focuses on entropy-based proofs, also incorporating a probabilistic version of Shearer's lemma, to derive bounds on the number of graph homomorphisms.
The following subsection introduces three versions of Shearer's inequalities that are useful in the analysis presented in this paper. The first version serves as a foundation for proving the other two, which are directly applied in this work. Familiarity with Shannon entropy and its basic properties is assumed, following standard notation (see, e.g., Chapter 3 of [32]).
Proposition 1 (Shearer's Lemma). Let
● n,m,k∈N,
● X1,…,Xn be discrete random variables,
● [{n}] \triangleq \{1, \ldots, n\} ,
● \mathcal {S}_1, \ldots, \mathcal {S}_m \subseteq [{n}] be subsets such that each i \in [{n}] belongs to at least k \geq 1 of these subsets,
● X^n \triangleq (X_1, \ldots, X_n) , and X_{ \mathcal {S}_j} \triangleq (X_i)_{i \in \mathcal {S}_j} for all j \in [{m}] .
Then,
\begin{align} k \text{H}({X^n}) \leq \sum\limits_{j = 1}^m \text{H}({X_{ \mathcal {S}_j}}). \end{align} | (2.1) |
Proof. By assumption, d(i) \geq k for all i \in [{n}] , where
\begin{align} d(i) \triangleq \bigl|{\bigl\{j \in [{m}]: \, i \in \mathcal {S}_j \bigr\}}\bigr|. \end{align} | (2.2) |
Let \mathcal {S} = \{i_1, \ldots, i_\ell\} \subseteq [{n}] , where 1 \leq i_1 < \ldots < i_\ell \leq n , so |{ \mathcal {S}}| = \ell . Further, let X_{ \mathcal {S}} \triangleq (X_{i_1}, \ldots, X_{i_\ell}) . By the chain rule and the fact that conditioning reduces entropy,
\begin{align} \text{H}({X_{ \mathcal {S}}}) & = \text{H}({X_{i_1}}) + \text{H}({X_{i_2}} | \kern0.1em {X_{i_1}}) + \ldots + \text{H}({X_{i_\ell}} | \kern0.1em {X_{i_1}, \ldots, X_{i_{\ell-1}}}) \\ &\geq \underset{i \in \mathcal {S}}{\sum} \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \\ & = \overset{n}{\underset{i = 1}{\sum}} \, \Bigl\{ {\mathbb 1}\{{i \in \mathcal {S}}\} \, \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \Bigr\}, \end{align} | (2.3) |
where {\mathbb 1}\{{i \in \mathcal {S}}\} on the right-hand side of (2.3) denotes the indicator function of the event \{i \in \mathcal {S}\} , meaning that it is equal to 1 if i \in \mathcal {S} and it is zero otherwise. Consequently, we get
\begin{align} \sum\limits_{j = 1}^m \text{H}({X_{ \mathcal {S}_j}}) &\geq \overset{m}{\underset{j = 1}{\sum}} \, \overset{n}{\underset{i = 1}{\sum}} \, \Bigl\{ {\mathbb 1}\{{i \in \mathcal {S}_j}\} \, \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \Bigr\} \\ & = \overset{n}{\underset{i = 1}{\sum}} \, \Biggl\{ \overset{m}{\underset{j = 1}{\sum}} \, {\mathbb 1}\{{i \in \mathcal {S}_j}\} \, \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \Biggr\} \\ & = \overset{n}{\underset{i = 1}{\sum}} \, \Bigl\{ d(i) \, \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \Bigr\} \\ &\geq k \, \overset{n}{\underset{i = 1}{\sum}} \, \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \end{align} | (2.4) |
\begin{align} & = k \, \text{H}({X^n}), \end{align} | (2.5) |
where inequality (2.4) holds due to the nonnegativity of the conditional entropies of discrete random variables, and since (by assumption) d(i) \geq k for all i \in [{n}] , and equality (2.5) holds by the chain rule of Shannon entropy.
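As a numeric sanity check of inequality (2.1), the sketch below uses a small joint distribution of my own choosing: three correlated binary variables covered by the sets \{1,2\}, \{2,3\}, \{1,3\} , so each index is covered k = 2 times.

```python
from math import log2

# A small, arbitrary joint pmf of three correlated binary variables (X1, X2, X3).
pmf = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.10, (0, 1, 0): 0.05, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.05, (1, 1, 0): 0.10, (1, 1, 1): 0.30,
}

def entropy(p):
    """Shannon entropy (in bits) of a pmf given as a dict value -> probability."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def marginal(p, idx):
    """Marginal pmf of the coordinates listed in idx (0-based)."""
    out = {}
    for x, q in p.items():
        key = tuple(x[i] for i in idx)
        out[key] = out.get(key, 0.0) + q
    return out

# Cover family S_1 = {1,2}, S_2 = {2,3}, S_3 = {1,3}: each index is covered k = 2 times.
subsets = [(0, 1), (1, 2), (0, 2)]
k = 2
lhs = k * entropy(pmf)                                   # k * H(X^n)
rhs = sum(entropy(marginal(pmf, s)) for s in subsets)    # sum_j H(X_{S_j})
assert lhs <= rhs + 1e-12                                # Shearer's inequality (2.1)
print(round(lhs, 3), round(rhs, 3))
```

Any joint pmf and any covering family would do here; the assertion holds by Proposition 1, and the check fails only if the cover condition is violated in the setup.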
Remark 1. If every element i \in [{n}] belongs to exactly k of the subsets \mathcal {S}_j \, (j \in [{m}]) , then Shearer's lemma also applies to continuous random variables X_1, \ldots, X_n , with entropy replaced by the differential entropy.
Example 1 (Subadditivity of Shannon Entropy). Let n = m with n \in {\mathbb{N}} , and \mathcal {S}_i = \{i\} (singletons) for all i \in [{n}] , so every element i \in [{n}] belongs to a single set among \mathcal {S}_1, \ldots, \mathcal {S}_n (i.e., k = 1 ). By Shearer's lemma, it follows that
\begin{align} \text{H}({X^n}) \leq \sum\limits_{j = 1}^n \text{H}({X_j}), \end{align} | (2.6) |
which expresses the subadditivity property of Shannon entropy for discrete random variables. This also holds for continuous random variables, where the entropy is replaced by differential entropy since every element i \in [{n}] is contained in exactly one subset (see Remark 1). Consequently, Shearer's lemma establishes the subadditivity property of Shannon entropy for both discrete and continuous random variables.
Example 2 (Han's Inequality, [33]). For every \ell \in [{n}] , let \mathcal {S}_\ell = [{n}] \setminus \{\ell\} . By Shearer's lemma (Proposition 1) applied to these n subsets of [{n}] , since every element i \in [{n}] is contained in exactly k = n-1 of these subsets,
\begin{align} (n-1) \text{H}({X^n}) \leq \sum\limits_{\ell = 1}^n \text{H}({X_1, \ldots, X_{\ell-1}, X_{\ell+1}, \ldots, X_n}) \leq n \text{H}({X^n}). \end{align} | (2.7) |
An equivalent form of (2.7) is given by
\begin{align} 0 \leq \sum\limits_{\ell = 1}^n \Bigl\{ \text{H}({X^n}) - \text{H}({X_1, \ldots, X_{\ell-1}, X_{\ell+1}, \ldots , X_n}) \Bigr\} \leq \text{H}({X^n}). \end{align} | (2.8) |
The equivalent forms in (2.7) and (2.8) are known as Han's inequality. Note that, by Remark 1, the left-hand side inequality of (2.7) and, equivalently, the right-hand side inequality of (2.8) remain valid for continuous random variables as well.
In the combinatorial version of Shearer's lemma [4], given next, the concept of entropy is hidden from the statement, although it underlies the proof.
Proposition 2 (Combinatorial Version of Shearer's Lemma). Consider the following setting:
● Let \mathscr{F} be a finite multiset of subsets of [{n}] (allowing repetitions of some subsets), where each element i \in [{n}] is included in at least k \geq 1 sets of \mathscr{F} .
● Let \mathscr{M} be a set of subsets of [{n}] .
● For every set \mathcal {S} \in \mathscr{F} , let the trace of \mathscr{M} on \mathcal {S} , denoted by \mathit{\text{trace}}_{ \mathcal {S}}(\mathscr{M}) , be the set of all possible intersections of elements of \mathscr{M} with \mathcal {S} , i.e.,
\begin{align} \mathit{\text{trace}}_{ \mathcal {S}}(\mathscr{M}) \triangleq \bigl\{ \mathcal {A} \cap \mathcal {S}: \, \mathcal {A} \in \mathscr{M} \bigr\}, \quad \forall \, \mathcal {S} \in \mathscr{F}. \end{align} | (2.9) |
Then,
\begin{align} |{\mathscr{M}}| \leq \prod\limits_{ \mathcal {S} \in \mathscr{F}} \bigl| \mathit{\text{trace}}_{ \mathcal {S}}(\mathscr{M}) \bigr|^{\frac1k}. \end{align} | (2.10) |
Proof.
● Let \mathcal {X} \subseteq [{n}] be a set that is selected uniformly at random from \mathscr{M} .
● Represent \mathcal {X} by the binary random vector X^n = (X_1, \ldots, X_n) , where X_i = {\mathbb 1}\{{i \in \mathcal {X}}\} for all i \in [{n}] , so X_i = 1 if i \in \mathcal {X} and X_i = 0 otherwise.
● For \mathcal {S} \in \mathscr{F} , let X_{ \mathcal {S}} = (X_i)_{i \in \mathcal {S}} . By the maximal entropy theorem, which states that the entropy of a discrete random variable (or vector) is upper-bounded by the logarithm of the cardinality of its support, with equality if and only if the variable is uniformly distributed over its support, we get
\begin{align} \text{H}({X_{ \mathcal {S}}}) \leq \log \, \bigl| \text{trace}_{ \mathcal {S}}(\mathscr{M}) \bigr|. \end{align} | (2.11) |
● By the assumption that every element i \in [{n}] is included in at least k \geq 1 sets of \mathscr{F} , it follows from combining Shearer's lemma (Proposition 1) and (2.11) that
\begin{align} k \, \text{H}({X^n}) & \leq \sum\limits_{ \mathcal {S} \in \mathscr{F}} \text{H}({X_{ \mathcal {S}}}) \\ & \leq \sum\limits_{ \mathcal {S} \in \mathscr{F}} \log \, \bigl| \text{trace}_{ \mathcal {S}}(\mathscr{M}) \bigr|. \end{align} | (2.12) |
● The equality \text{H}({X^n}) = \log |{\mathscr{M}}| holds since X^n is in one-to-one correspondence with \mathcal {X} , which is uniformly selected at random from \mathscr{M} . Combining this with (2.12) gives
\begin{align} \log |{\mathscr{M}}| \leq \frac1k \sum\limits_{ \mathcal {S} \in \mathscr{F}} \log \, \bigl| \text{trace}_{ \mathcal {S}}(\mathscr{M}) \bigr|, \end{align} | (2.13) |
and exponentiating both sides of (2.13) gives (2.10).
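Inequality (2.10) can be illustrated on a toy instance of my own choosing: \mathscr{M} is an arbitrary family of subsets of \{1,2,3\} , and \mathscr{F} consists of the three 2-element subsets, so each element is covered exactly k = 2 times.

```python
from math import prod

# An arbitrary family M of subsets of the ground set {1, 2, 3}.
M = [frozenset(), frozenset({1}), frozenset({1, 2}),
     frozenset({2, 3}), frozenset({1, 2, 3})]

# Cover family F: every element of {1,2,3} lies in exactly k = 2 members of F.
F = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
k = 2

def trace(S, family):
    """trace_S(M) of (2.9): all intersections of members of the family with S."""
    return {A & S for A in family}

# Combinatorial Shearer (2.10): |M| <= prod_S |trace_S(M)|^{1/k}.
bound = prod(len(trace(S, M)) for S in F) ** (1.0 / k)
assert len(M) <= bound + 1e-9
print(len(M), round(bound, 3))
```

With this \mathscr{F} , the bound is the discrete Loomis–Whitney-type estimate: the size of a set system is at most the geometric mean of the products of its pairwise traces.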
The following is the probabilistic entropy-based formulation of Shearer's lemma, which is also applied in this paper.
Proposition 3 (Probabilistic Version of Shearer's Lemma). Let X^n be a discrete n -dimensional random vector, and let \mathcal {S} \subseteq [{n}] be a random subset of [{n}] , independent of X^n , with an arbitrary probability mass function \mathsf{{P}}_{{\smash{{{ \mathcal {S}}}}\vphantom{XYZ}}} . If there exists \theta > 0 such that
\begin{align} \Pr[{i \in \mathcal {S}}] \geq \theta, \quad \forall \, i \in [{n}], \end{align} | (2.14) |
then,
\begin{align} \mathbb{E}_{{ \mathcal {S}}}\bigl[{\text{H}({X_{ \mathcal {S}}})}\bigr] \geq \theta \text{H}({X^n}). \end{align} | (2.15) |
Proof. The expectation of the entropy \text{H}({X_{ \mathcal {S}}}) with respect to the random subset \mathcal {S} \subseteq [{n}] , where (by assumption) \mathcal {S} is independent of X^n , gives
\begin{align} \mathbb{E}_{{ \mathcal {S}}}\bigl[{\text{H}({X_{ \mathcal {S}}})}\bigr] & = \sum\limits_{ \mathcal {S} \, \subseteq [{n}]} \mathsf{{P}}_{{\smash{{{ \mathcal {S}}}}\vphantom{XYZ}}}( \mathcal {S}) \, \text{H}({X_{ \mathcal {S}}}) \\ &\geq \sum\limits_{ \mathcal {S} \, \subseteq [{n}]} \Biggl\{ \mathsf{{P}}_{{\smash{{{ \mathcal {S}}}}\vphantom{XYZ}}}( \mathcal {S}) \; \overset{n}{\underset{i = 1}{\sum}} \Bigl\{ {\mathbb 1}\{{i \in \mathcal {S}}\} \, \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \Bigr\} \Biggr\} \end{align} | (2.16) |
\begin{align} & = \overset{n}{\underset{i = 1}{\sum}} \Biggl\{ \sum\limits_{ \mathcal {S} \, \subseteq [{n}]} \Bigl\{ \mathsf{{P}}_{{\smash{{{ \mathcal {S}}}}\vphantom{XYZ}}}( \mathcal {S}) \; {\mathbb 1}\{{i \in \mathcal {S}}\} \Bigr\} \, \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \Biggr\} \end{align} | (2.17) |
\begin{align} & = \overset{n}{\underset{i = 1}{\sum}} \Pr[{i \in \mathcal {S}}] \, \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \\ &\geq \theta \, \overset{n}{\underset{i = 1}{\sum}} \text{H}({X_i} | \kern0.1em {X_1, \ldots, X_{i-1}}) \end{align} | (2.18) |
\begin{align} & = \theta \text{H}({X^n}), \end{align} | (2.19) |
where (2.16) holds by (2.3) that is valid for every set \mathcal {S} \subseteq [{n}] ; (2.17) holds by swapping the order of summation, and (2.18) holds by the assumption that the random variables \{X_i\} are discrete (so, the conditional entropies are nonnegative) and by the condition in (2.14). Finally, (2.19) holds by the chain rule of Shannon entropy.
Remark 2. Similarly to Remark 1, if \Pr[{i \in \mathcal {S}}] = \theta for all i \in [{n}] , then inequality (2.18) holds with equality. Consequently, if the condition in (2.14) is satisfied with equality for all i \in [{n}] , then (2.15) extends to continuous random variables, with entropies replaced by differential entropies.
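A toy verification of (2.15), under assumptions of my own choosing: the random subset \mathcal {S} is uniform over the three 2-element subsets of \{1,2,3\} , so condition (2.14) holds with equality for \theta = 2/3 .

```python
from math import log2

# An arbitrary joint pmf of three correlated binary variables (X1, X2, X3).
pmf = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.10, (0, 1, 0): 0.05, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.05, (1, 1, 0): 0.10, (1, 1, 1): 0.30,
}

def entropy(p):
    """Shannon entropy (in bits) of a pmf given as a dict value -> probability."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def marginal(p, idx):
    """Marginal pmf of the coordinates listed in idx (0-based)."""
    out = {}
    for x, q in p.items():
        key = tuple(x[i] for i in idx)
        out[key] = out.get(key, 0.0) + q
    return out

# S uniform over the three 2-element subsets of {1,2,3}: Pr[i in S] = 2/3 = theta.
support_of_S = [(0, 1), (1, 2), (0, 2)]
theta = 2.0 / 3.0
expected_H = sum(entropy(marginal(pmf, s)) for s in support_of_S) / len(support_of_S)
assert expected_H >= theta * entropy(pmf) - 1e-12   # (2.15): E_S[H(X_S)] >= theta H(X^n)
print(round(expected_H, 3), round(theta * entropy(pmf), 3))
```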
We start by considering triangle-intersecting families of graphs, the problem in extremal combinatorics addressed in [4].
Definition 1 (Triangle-Intersecting Families of Graphs). Let \mathcal {G} be a family of graphs on the vertex set [{n}] , with the property that for every \mathsf{{G}}_1, \mathsf{{G}}_2 \in \mathcal {G} , the intersection \mathsf{{G}}_1 \cap \mathsf{{G}}_2 contains a triangle (i.e., there are three vertices i, j, k \in [{n}] such that each of \{i, j\} , \{i, k\} , \{j, k\} is in the edge sets of both \mathsf{{G}}_1 and \mathsf{{G}}_2 ). The family \mathcal {G} is referred to as a triangle-intersecting family of graphs on n vertices.
Question 1. (Simonovits and Sós, [2]) How large can \mathcal {G} (a family of triangle-intersecting graphs) be?
The family \mathcal {G} can be as large as 2^{\binom{n}{2}-3} . To see this, consider the family {\mathcal G} of all graphs on n vertices that include a particular triangle. On the other hand, |{ \mathcal {G}}| cannot exceed 2^{\binom{n}{2}-1} . The latter upper bound holds since, in general, a family of distinct subsets of a set of size m , where any two of these subsets have a nonempty intersection, can have a cardinality of at most 2^{m-1} (a set \mathcal {A} and its complement { \mathcal {{A}}}^{ \text{c}} cannot both be members of such a family). The edge sets of the graphs in \mathcal {G} satisfy this property, with m = \binom{n}{2} .
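The extremal construction is easy to verify exhaustively for small n . The sketch below (with n = 4 , my own choice) enumerates all graphs on [4] whose edge set includes the fixed triangle \{1,2\}, \{1,3\}, \{2,3\} , and confirms that the family has size 2^{\binom{4}{2}-3} = 8 and that every pairwise intersection contains a triangle.

```python
from itertools import combinations, product

n = 4
vertices = range(1, n + 1)
all_edges = list(combinations(vertices, 2))        # the 6 edges of K_4
fixed_triangle = frozenset({(1, 2), (1, 3), (2, 3)})

def contains_triangle(edge_set):
    """True iff edge_set contains all three edges of some triangle on [n]."""
    return any(set(combinations(tri, 2)) <= edge_set
               for tri in combinations(vertices, 3))

# Family: all graphs on [4] whose edge set includes the fixed triangle.
free_edges = [e for e in all_edges if e not in fixed_triangle]
family = [fixed_triangle | {e for e, keep in zip(free_edges, bits) if keep}
          for bits in product([0, 1], repeat=len(free_edges))]

assert len(family) == 2 ** (len(all_edges) - 3)    # 2^{C(4,2)-3} = 8
# Every pairwise intersection contains a triangle (here, the fixed one).
assert all(contains_triangle(g1 & g2) for g1 in family for g2 in family)
print(len(family))
```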
Proposition 4 (Ellis, Filmus, and Friedgut, [5]). The size of a family {\mathcal G} of triangle-intersecting graphs on n vertices satisfies |{ \mathcal {G}}| \leq 2^{\binom{n}{2}-3} , and this upper bound is attained by the family of all graphs on a common vertex set of n vertices that contain a fixed triangle.
This result was proved by using discrete Fourier analysis to obtain the sharp bound in Proposition 4, as conjectured by Simonovits and Sós [2].
The first significant progress toward proving the Simonovits–Sós conjecture came from an information-theoretic approach [4]. Using the combinatorial Shearer lemma (Proposition 2), a simple and elegant upper bound on the size of \mathcal {G} was derived in [4]. That bound is equal to 2^{\binom{n}{2}-2} , falling short of the Simonovits–Sós conjecture by a factor of 2.
Proposition 5 (Chung, Graham, Frankl, and Shearer, [4]). Let \mathcal {G} be a family of \mathsf{K}_{{3}} -intersecting graphs on a common vertex set [{n}] . Then, |{ \mathcal {G}}| \leq 2^{\binom{n}{2}-2} .
We next consider more general intersecting families of graphs.
Definition 2 ( \mathsf{{H}} -Intersecting Families of Graphs). Let \mathcal{G} be a family of graphs on a common vertex set. Then, it is said that \mathcal{G} is \mathsf{{H}} -intersecting if for every two graphs \mathsf{{G}}_1, \mathsf{{G}}_2 \in \mathcal{G} , the graph \mathsf{{G}}_1 \cap \mathsf{{G}}_2 contains a subgraph isomorphic to \mathsf{{H}} .
In the following, \mathsf{K}_{{t}} , for t \in {\mathbb{N}} , denotes the complete graph on t vertices. This graph consists of t vertices, with every pair of vertices being adjacent. For example, \mathsf{K}_{{2}} represents an edge, while \mathsf{K}_{{3}} corresponds to a triangle.
Example 3. Let \mathsf{{H}} = \mathsf{K}_{{t}} , with t \geq 2 . Then, t = 2 means that \mathcal{G} is edge-intersecting (or simply intersecting), and t = 3 means that \mathcal{G} is triangle-intersecting.
Question 2. [Problem in Extremal Combinatorics] Given \mathsf{{H}} and n , what is the maximum size of an \mathsf{{H}} -intersecting family of graphs on n labeled vertices?
Conjecture 1. (Ellis, Filmus, and Friedgut, [5]) Every \mathsf{K}_{{t}} -intersecting family of graphs on a common vertex set [{n}] has a size of at most 2^{\binom{n}{2}-\binom{t}{2}} , with equality for the family of all graphs containing a fixed clique on t vertices.
● For t = 2 , it is trivial (since \mathsf{K}_{{2}} is an edge).
● For t = 3 , it was proved by Ellis, Filmus, and Friedgut [5].
● For t = 4 , it was recently proved by Berger and Zhao [7].
● For t \geq 5 , this problem is left open.
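For the trivial case t = 2 , the conjectured bound can even be confirmed by exhaustive search on very small n ; the sketch below (with n = 3 , my own choice) checks every family of graphs on [3] and finds that the largest edge-intersecting one has size 2^{\binom{3}{2}-\binom{2}{2}} = 4 .

```python
from itertools import combinations, product

# All 8 graphs on 3 labeled vertices, identified with subsets of the edges of K_3.
edges = [(1, 2), (1, 3), (2, 3)]
graphs = [frozenset(e for e, keep in zip(edges, bits) if keep)
          for bits in product([0, 1], repeat=3)]

def is_edge_intersecting(family):
    """K_2-intersecting: every pair of members shares an edge; taking
    g1 = g2 also rules out the empty graph."""
    return all(g1 & g2 for g1 in family for g2 in family)

best = 0
for r in range(len(graphs) + 1):
    for fam in combinations(graphs, r):
        if is_edge_intersecting(fam):
            best = max(best, r)

assert best == 2 ** 2    # 2^{C(3,2)-C(2,2)} = 4, matching Conjecture 1 for t = 2
print(best)
```

The maximizing family consists of the four graphs containing a fixed edge, in agreement with the extremal family described in Conjecture 1.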
In the sequel, let \mathsf{V}({\mathsf{{H}}}) and \mathsf{E}({\mathsf{{H}}}) denote the vertex and edge sets of a graph \mathsf{{H}} , respectively. Further, let \mathsf{{T}} and \mathsf{{G}} be finite, simple, and undirected graphs, and denote the edge connecting a pair of adjacent vertices u, v \in \mathsf{V}({\mathsf{{H}}}) by an edge e = \{u, v\} \in \mathsf{E}({\mathsf{{H}}}) .
Definition 3 (Homomorphism). A homomorphism from \mathsf{{T}} to \mathsf{{G}} , denoted by \mathsf{{T}} \to \mathsf{{G}} , is a mapping of the vertices of \mathsf{{T}} to those of \mathsf{{G}} , \sigma \colon \mathsf{V}({\mathsf{{T}}}) \to \mathsf{V}({\mathsf{{G}}}) , such that every edge in \mathsf{{T}} is mapped to an edge in \mathsf{{G}} :
\begin{align} \{u,v\} \in \mathsf{E}({\mathsf{{T}}}) \implies \{\sigma(u), \sigma(v)\} \in \mathsf{E}({\mathsf{{G}}}). \end{align} | (2.20) |
On the other hand, non-edges in \mathsf{{T}} may be mapped to the same vertex, a non-edge, or an edge in \mathsf{{G}} .
Example 4. There is a homomorphism from every bipartite graph \mathsf{{G}} to \mathsf{K}_{{2}} . Indeed, let \mathsf{V}({\mathsf{{G}}}) = \mathcal {X} \cup \mathcal {Y} , where \mathcal {X} and \mathcal {Y} are the two disjoint partite sets. A mapping that maps every vertex in \mathcal {X} to '0', and every vertex in \mathcal {Y} to '1' is a homomorphism \mathsf{{G}} \to \mathsf{K}_{{2}} because every edge in \mathsf{{G}} is mapped to the edge \{0, 1\} in \mathsf{K}_{{2}} . Note that every non-edge in \mathcal {X} or in \mathcal {Y} is mapped to the same vertex in \mathsf{K}_{{2}} , and every non-edge between two vertices in \mathcal {X} and \mathcal {Y} is mapped to \{0, 1\} .
The following connects graph homomorphisms to graph invariants. Let \omega ({\mathsf{{G}}}) and \chi ({\mathsf{{G}}}) denote the clique number and chromatic number, respectively, of a finite, simple, and undirected graph \mathsf{{G}} . That is, \omega ({\mathsf{{G}}}) is the maximum number of vertices in \mathsf{{G}} such that every two of them are adjacent (i.e., these vertices form a complete subgraph of \mathsf{{G}} ), and \chi ({\mathsf{{G}}}) is the smallest number of colors required to color the vertices of \mathsf{{G}} such that no two adjacent vertices share the same color. Then,
● \omega ({\mathsf{{G}}}) is the largest integer k for which a homomorphism \mathsf{K}_{{k}} \to \mathsf{{G}} exists. Indeed, a homomorphism preserves adjacency, and all pairs of vertices in \mathsf{K}_{{k}} are adjacent; hence, the images of the k vertices of \mathsf{K}_{{k}} are distinct and pairwise adjacent in \mathsf{{G}} , forming a complete subgraph of \mathsf{{G}} on k vertices.
● A graph \mathsf{{G}} is k -colorable if and only if it has a homomorphism to the complete graph \mathsf{K}_{{k}} ; this is because k -coloring assigns one of k colors to each vertex such that adjacent vertices receive different colors, which is equivalent to mapping the vertices of \mathsf{{G}} to the k vertices of \mathsf{K}_{{k}} in a way that adjacency is preserved. Consequently, it follows by definition that \chi ({\mathsf{{G}}}) is the smallest integer k for which there exists a homomorphism \mathsf{{G}} \to \mathsf{K}_{{k}} .
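Both characterizations can be verified by brute force on a toy instance of my own choosing: for the 5-cycle, no homomorphism to \mathsf{K}_{{2}} exists (an odd cycle is not 2-colorable), but one to \mathsf{K}_{{3}} does, recovering \chi ({\mathsf{C}_5}) = 3 .

```python
from itertools import product

def has_hom_to_Kk(V, E, k):
    """Brute force: does a map V -> {0,...,k-1} exist sending every edge of the
    graph to an edge of K_k (equivalently, a proper k-coloring)?"""
    for colors in product(range(k), repeat=len(V)):
        sigma = dict(zip(V, colors))
        if all(sigma[u] != sigma[v] for u, v in E):
            return True
    return False

# The 5-cycle C_5.
V = [1, 2, 3, 4, 5]
E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]

# chi(G): the smallest k admitting a homomorphism G -> K_k.
chi = next(k for k in range(1, len(V) + 1) if has_hom_to_Kk(V, E, k))
assert chi == 3          # no hom C_5 -> K_2, but a hom C_5 -> K_3 exists
print(chi)
```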
Let \mathrm{Hom}(\mathsf{{{T}}}, \mathsf{{{G}}}) denote the set of all the homomorphisms \mathsf{{T}} \to \mathsf{{G}} , and let
\begin{align} \mathrm{hom}(\mathsf{{{T}}},\mathsf{{{G}}}) \triangleq \bigl|{\mathrm{Hom}(\mathsf{{{T}}},\mathsf{{{G}}})}\bigr| \end{align} | (2.21) |
denote the number of these homomorphisms.
The independence number of a graph \mathsf{{G}} , denoted by \alpha({\mathsf{{G}}}) , is the maximum number of vertices in \mathsf{{G}} such that no two of them are adjacent. It can be formulated as an integer linear program, whose relaxation gives rise to the following graph invariant.
Definition 4 (Fractional Independence Number). The fractional independence number of a graph \mathsf{{G}} , denoted as \alpha_{\mathrm{f}}({\mathsf{{G}}}) , is a fractional relaxation of the independence number \alpha({\mathsf{{G}}}) . It is defined as the optimal value of the following linear program:
● Optimization variables: x_v for every vertex v \in \mathsf{V}({\mathsf{{G}}}) .
● Objective: Maximize \underset{v \in \mathsf{V}({\mathsf{{G}}})}{\sum} x_v .
● Constraints: x_v \geq 0 for all v \in \mathsf{V}({\mathsf{{G}}}) , and \underset{v \in \mathcal {C}}{\sum} x_v \leq 1 for every clique \mathcal {C} \subseteq \mathsf{V}({\mathsf{{G}}}) .
This relaxation allows fractional values for x_v , in contrast to the integer programming formulation for \alpha({\mathsf{{G}}}) , where x_v must be binary (either 0 or 1) for all v \in \mathsf{V}({\mathsf{{G}}}) . Consequently, \alpha_{\mathrm{f}}({\mathsf{{G}}}) \geq \alpha({\mathsf{{G}}}) .
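For a triangle-free graph, the clique constraints of Definition 4 reduce to the singleton constraints x_v \leq 1 and one constraint per edge, and this edge-constrained relaxation is known to admit a half-integral optimal solution (each x_v \in \{0, 1/2, 1\} ). The sketch below exploits that fact to compute \alpha_{\mathrm{f}} by brute force for the 5-cycle (my own choice of example), where \alpha({\mathsf{C}_5}) = 2 but \alpha_{\mathrm{f}}({\mathsf{C}_5}) = 5/2 .

```python
from itertools import product

# The 5-cycle C_5 is triangle-free, so the clique constraints are x_v <= 1
# (singletons) and x_u + x_v <= 1 (edges). An optimal solution of this LP may
# be taken half-integral, so searching x_v in {0, 1/2, 1} finds the optimum.
V = range(5)
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

best = 0.0
for x in product([0.0, 0.5, 1.0], repeat=len(V)):
    if all(x[u] + x[v] <= 1.0 for u, v in E):
        best = max(best, sum(x))

assert best == 2.5       # alpha_f(C_5) = 5/2, strictly above alpha(C_5) = 2
print(best)
```

The maximizer assigns x_v = 1/2 to every vertex, which already exhibits the strict gap \alpha_{\mathrm{f}}({\mathsf{{G}}}) > \alpha({\mathsf{{G}}}) that integer solutions cannot achieve.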
The following result was obtained by Alon [34] and by Friedgut and Khan [28], where the latter provides an entropy-based proof that relies on Shearer's lemma, together with an extension of the result to the number of homomorphisms of hypergraphs.
Proposition 6 (Number of Graph Homomorphisms). Let \mathsf{{T}} and \mathsf{{G}} be finite, simple, and undirected graphs, having no isolated vertices. Then,
\begin{align} \mathrm{hom}(\mathsf{{{T}}},\mathsf{{{G}}}) \leq \bigl( 2 |{\mathsf{E}({\mathsf{{G}}})}| \bigr)^{\alpha_{\mathrm{f}}({\mathsf{{T}}})}. \end{align} | (2.22) |
Furthermore, the upper bound in (2.22) is essentially tight for a fixed graph \mathsf{{T}} in the sense that there exists a graph \mathsf{{G}} such that
\begin{align} \mathrm{hom}(\mathsf{{{T}}},\mathsf{{{G}}}) \geq \Biggl( \frac{|{\mathsf{E}({\mathsf{{G}}})}|}{|{\mathsf{E}({\mathsf{{T}}})}|}\Biggr)^{\alpha_{\mathrm{f}}({\mathsf{{T}}})}, \end{align} | (2.23) |
so, a comparison between the upper and lower bounds on the number of graph homomorphisms in (2.22) and (2.23) indicates that \mathrm{hom}(\mathsf{{{T}}}, \mathsf{{{G}}}) scales like |{\mathsf{E}({\mathsf{{G}}})}|^{\, \alpha_{\mathrm{f}}({\mathsf{{T}}})} .
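Inequality (2.22) can be spot-checked by exhaustive counting. In the sketch below (graphs of my own choosing), \mathsf{{T}} is the path on three vertices, for which \alpha_{\mathrm{f}}({\mathsf{{T}}}) = 2 (take x_1 = x_3 = 1 and x_2 = 0 in the LP of Definition 4), and \mathsf{{G}} = \mathsf{K}_{{3}} , so 2 |{\mathsf{E}({\mathsf{{G}}})}| = 6 .

```python
from itertools import product

def count_homs(VT, ET, VG, EG):
    """Brute-force hom(T, G): count maps V(T) -> V(G) sending every edge of T
    to an edge of G (a map collapsing an edge of T is rejected)."""
    adjacency = {frozenset(e) for e in EG}
    total = 0
    for image in product(VG, repeat=len(VT)):
        sigma = dict(zip(VT, image))
        if all(frozenset((sigma[u], sigma[v])) in adjacency for u, v in ET):
            total += 1
    return total

# T = path on three vertices (alpha_f(T) = 2); G = K_3.
VT, ET = [1, 2, 3], [(1, 2), (2, 3)]
VG, EG = [1, 2, 3], [(1, 2), (1, 3), (2, 3)]
alpha_f_T = 2

homs = count_homs(VT, ET, VG, EG)
assert homs <= (2 * len(EG)) ** alpha_f_T    # inequality (2.22): 12 <= 36
print(homs, (2 * len(EG)) ** alpha_f_T)
```

Here hom(T, G) equals the sum of squared degrees of \mathsf{K}_{{3}} (choose the image of the middle vertex, then a neighbor for each endpoint), which is 12 , comfortably below the bound 36 .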
This section derives an upper bound on the maximum cardinality of a family of \mathsf{{H}} -intersecting graphs on a fixed number of vertices, where the intersection of every two graphs in that family contains a subgraph that is isomorphic to a specified graph \mathsf{{H}} . Specifically, the next result generalizes Proposition 5 by extending the proof technique of [4] to apply to all families of \mathsf{{H}} -intersecting graphs.
Proposition 7. Let \mathsf{{H}} be a nonempty graph, and let \mathcal {G} be a family of \mathsf{{H}} -intersecting graphs on a common vertex set [{n}] . Then,
\begin{align} |{ \mathcal {G}}| \leq 2^{\binom{n}{2}-(\chi ({\mathsf{{H}}})-1)}. \end{align} | (3.1) |
Proof.
● Identify every graph \mathsf{{G}} \in \mathcal {G} with its edge set \mathsf{E}({\mathsf{{G}}}) , and let \mathscr{M} = \bigl\{ \mathsf{E}({\mathsf{{G}}}): \mathsf{{G}} \in \mathcal {G} \bigr\} (recall that all these graphs have the common vertex set [{n}] ).
● Let \mathcal {U} = \mathsf{E}({\mathsf{K}_{{n}}}) . For every \mathsf{{G}} \in \mathcal {G} , we have \mathsf{E}({\mathsf{{G}}}) \subseteq \mathcal {U} , and |{ \mathcal {U}}| = \binom{n}{2} .
● Let t \triangleq \chi ({\mathsf{{H}}}) be the chromatic number of \mathsf{{H}} (or any graph isomorphic to \mathsf{{H}} ).
● For every unordered equipartition or almost-equipartition of [{n}] into t-1 disjoint subsets, i.e., \overset{t-1}{\underset{j = 1}{\bigcup}} \mathcal {A}_j = [n] , satisfying \bigl| | \mathcal {A}_i|-| \mathcal {A}_j|\bigr| \leq 1 and \mathcal {A}_i \cap \mathcal {A}_j = \varnothing for all 1 \leq i < j \leq t-1 , define \mathcal {U}(\{ \mathcal {A}_j\}_{j = 1}^{t-1}) as the subset of \mathcal{U} consisting of all edges that are entirely contained within some subset \mathcal {A}_j for j \in [{t-1}] . In other words, each edge lies entirely within one of the subsets \{ \mathcal {A}_j\}_{j = 1}^{t-1} , although different edges may belong to different subsets.
● We apply Proposition 2 with
\begin{align} \mathscr{F} = \{ \mathcal {U}(\{ \mathcal {A}_j\}_{j = 1}^{t-1})\}, \end{align} | (3.2) |
where \mathscr{F} is obtained by referring in the right-hand side of (3.2) to all possible choices of \{ \mathcal {A}_j\}_{j = 1}^{t-1} , as described in the previous item.
● Let m = | \, \mathcal {U}(\{ \mathcal {A}_j\}_{j = 1}^{t-1}) \, | . Then, m is independent of \{ \mathcal {A}_j\}_{j = 1}^{t-1} as described above, since
\begin{align} m = \left\{ \begin{array}{ll} (t-1) \, \binom{n/(t-1)}{2} & \mbox{if } (t-1)|n , \\ (t-2) \, \binom{\lfloor n/(t-1) \rfloor}{2} + \binom{\lceil n/(t-1) \rceil}{2} & \mbox{if } (t-1)|(n-1) ,\\ \vdots \\ \binom{\lfloor n/(t-1) \rfloor}{2} + (t-2) \, \binom{\lceil n/(t-1) \rceil}{2} & \mbox{if } (t-1)|\bigl(n-(t-2)\bigr) . \end{array} \right. \end{align} | (3.3) |
● By (3.3) with t = \chi ({\mathsf{{H}}}) , it follows that
\begin{align} m \leq \frac{1}{\chi ({\mathsf{{H}}})-1} \, \binom{n}{2}. \end{align} | (3.4) |
Proof of (3.4). The graph \mathsf{{H}} is nonempty, so t = \chi ({\mathsf{{H}}}) \geq 2 . If (t-1)|n , then
\begin{align} (t-1) \, \binom{n/(t-1)}{2} & = \frac{n(n-(t-1))}{2(t-1)} \\ &\leq \frac1{t-1} \, \binom{n}{2}. \end{align} | (3.5) |
Otherwise, (t-1)|(n-j) for some integer j \in [{t-2}] , so n = j+r(t-1) with r \in {\mathbb{N}} . Consequently, since j \in \{1, \ldots, t-2\} , (3.3) gives
\begin{align} m & = (t-1-j) \, \binom{r}{2} + j \, \binom{r+1}{2} \\ & = \frac{r}{2} \, \Bigl[ (t-1-j)(r-1)+j(r+1) \Bigr] \\ & = \frac{n-j}{2(t-1)} \, \Bigl[ (t-1-j) \, \Bigl( \frac{n-j}{t-1}-1 \Bigr) + j \, \Bigl(\frac{n-j}{t-1} + 1\Bigr) \Bigr] \\ & = \frac{n-j}{2(t-1)^2} \, \Bigl[ (t-1-j)(n-j-t+1) + j(n-j + t-1) \Bigr] \\ & = \frac{(n-j) \, \bigl[ n(t-1) + j(t-1) - (t-1)^2 \bigr]}{2(t-1)^2} \\ & = \frac{(n-j) \, \bigl(n+j-(t-1)\bigr)}{2(t-1)} \\ &\leq \frac{(n-j)(n-1)}{2(t-1)} \\ & < \frac1{t-1} \, \binom{n}{2}. \end{align} | (3.6) |
Combining (3.5) and (3.6) for the cases where (t-1)|n or (t-1) \not| n , respectively, gives (3.4).
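The closed-form expression (3.3) and the bound (3.4) are easy to confirm numerically. The following is a minimal Python sketch (the helper name `m_balanced` is ours) that implements (3.3) and compares it, for a range of parameters, against a literal edge count for an explicit balanced partition:

```python
from math import comb

def m_balanced(n, t):
    # (3.3): a balanced partition of [n] into t-1 parts has j parts of size
    # q+1 and (t-1-j) parts of size q, where n = q*(t-1) + j.
    q, j = divmod(n, t - 1)
    return (t - 1 - j) * comb(q, 2) + j * comb(q + 1, 2)

# Check (3.3) against a direct intra-part edge count, and check the bound
# (3.4) in the integer form m*(t-1) <= binom(n,2).
for n in range(3, 30):
    for t in range(2, n + 1):
        parts = [list(range(i, n, t - 1)) for i in range(t - 1)]  # balanced parts
        direct = sum(comb(len(P), 2) for P in parts)
        assert direct == m_balanced(n, t)
        assert m_balanced(n, t) * (t - 1) <= comb(n, 2)
```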
● By a symmetry consideration, which follows from the construction of \mathscr{F} in (3.2) as a set of subsets of \mathcal {U} (due to the fourth item of this proof), each of the \binom{n}{2} elements in \mathcal {U} (i.e., each edge of the complete graph \mathsf{K}_{{n}} ) belongs to a fixed number k of elements in \mathscr{F} .
● By a double-counting argument on the edges of the complete graph \mathsf{K}_{{n}} (the set \mathcal {U} ), since each element of \mathscr{F} is a subset of \mathcal {U} of size m (see (3.3)) and each edge in \mathcal {U} belongs to exactly k elements in \mathscr{F} , the following equality holds:
\begin{equation} m \, |\mathscr{F}| = \binom{n}{2} \, k. \end{equation} | (3.7) |
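The symmetry claim (each edge lies in the same number k of members of \mathscr{F} ) and the double-counting identity (3.7) can be verified exhaustively for a small case; the brute-force Python sketch below (variable names are ours) takes n = 5 and t = 3 , so each partition has two parts of sizes 3 and 2:

```python
from itertools import combinations
from math import comb

n, t = 5, 3                              # t-1 = 2 parts, of sizes 3 and 2
U = list(combinations(range(n), 2))      # edge set of K_5

# Each balanced partition is determined by its 3-element part.
F = []
for A in combinations(range(n), 3):
    B = [v for v in range(n) if v not in A]
    F.append({e for e in U if set(e) <= set(A) or set(e) <= set(B)})

m = len(F[0])
assert all(len(S) == m for S in F)            # every member of F has size m
counts = {e: sum(e in S for S in F) for e in U}
k = next(iter(counts.values()))
assert all(c == k for c in counts.values())   # each edge lies in exactly k members
assert m * len(F) == comb(n, 2) * k           # the double-counting identity (3.7)
```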
● Let \mathcal {S} \in \mathscr{F} . Observe that {\rm trace}_{\, \mathcal {S}}(\mathscr{M}) , as defined in (2.9), forms an intersecting family of subsets of \mathcal {S} . Indeed,
1. Assign to each vertex in [{n}] the index j of the subset \mathcal {A}_j ( 1 \leq j \leq \chi ({\mathsf{{H}}})-1 ) in the partition of [{n}] corresponding to \mathcal {S} . Let these assignments be associated with \chi ({\mathsf{{H}}})-1 color classes of the vertices.
2. For any \mathsf{{G}}, \mathsf{{G}}' \in {\mathcal G} , the graph \mathsf{{G}} \cap \mathsf{{G}}' contains a subgraph isomorphic to \mathsf{{H}} (by assumption).
3. By definition, t = \chi ({\mathsf{{H}}}) is the smallest number of colors required to properly color the vertices of \mathsf{{H}} , ensuring that no two adjacent vertices in \mathsf{{H}} share the same color. Hence, in any coloring of the vertices of any graph isomorphic to \mathsf{{H}} with t-1 colors, there must exist at least one edge whose two endpoints are assigned the same color. This implies that such an edge must belong to some subset \mathcal {A}_j for j \in [{\chi ({\mathsf{{H}}})-1}] , and therefore, it is contained in \mathcal {S} .
4. Let \mathsf{{G}}, \mathsf{{G}}' \in {\mathcal G} . By Items 2 and 3, at least one of the edges in \mathsf{E}({\mathsf{{G}}}) \cap \mathsf{E}({\mathsf{{G}}'}) , which contains the edge set of a graph isomorphic to \mathsf{{H}} , belongs to \mathcal {S} . By the definition of \mathscr{M} (see the first item), this means by (2.9) that {\rm trace}_{\, \mathcal {S}}(\mathscr{M}) is an intersecting family of subsets of \mathcal {S} since
\bigl(\mathsf{E}({\mathsf{{G}}}) \cap \mathcal {S} \bigr) \cap \bigl(\mathsf{E}({\mathsf{{G}}'}) \cap \mathcal {S} \bigr) = \bigl(\mathsf{E}({\mathsf{{G}}}) \cap \mathsf{E}({\mathsf{{G}}'}) \bigr) \cap \mathcal {S} \neq \varnothing. |
Consequently, |{ \mathcal {S}}| = m and {\rm trace}_{\, \mathcal {S}}(\mathscr{M}) forms an intersecting family of subsets of \mathcal {S} , which yields
\begin{align} |{{\rm trace}_{\, \mathcal {S}} (\mathscr{M})}| \leq 2^{m - 1}. \end{align} | (3.8) |
This holds since the total number of subsets of \mathcal {S} is 2^m , and since {\rm trace}_{\, \mathcal {S}}(\mathscr{M}) forms an intersecting family of these subsets, it cannot contain any subset along with its complement with respect to \mathcal {S} . Consequently, the cardinality of an intersecting family of subsets of \mathcal {S} is at most \tfrac12 \cdot 2^m = 2^{m-1} .
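The fact behind (3.8), namely that an intersecting family of subsets of an m -element set has at most 2^{m-1} members, can be confirmed by brute force on a toy instance (the choice m = 4 and the Python variable names below are ours). The sketch also shows that the bound is attained by the "star" of all subsets containing a fixed element:

```python
from itertools import combinations

# Ground set of size m = 4; its 2^4 = 16 subsets form 2^3 = 8 complementary pairs.
m = 4
S = frozenset(range(m))
subsets = [frozenset(c) for r in range(m + 1) for c in combinations(S, r)]

# The family of all subsets containing a fixed element is intersecting and
# attains the bound 2^(m-1).
star = [A for A in subsets if 0 in A]
assert len(star) == 2 ** (m - 1)
assert all(A & B for A in star for B in star)

# Any family of 2^(m-1) + 1 subsets must contain a set together with its
# complement (pigeonhole over the complementary pairs), hence two disjoint
# members; so no intersecting family can be larger.
for fam in combinations(subsets, 2 ** (m - 1) + 1):
    chosen = set(fam)
    assert any(S - A in chosen for A in chosen)
```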
● By Proposition 2 (and the one-to-one correspondence between \mathcal {G} and \mathscr{M} ),
\begin{align} |{\mathcal G}| & = |{\mathscr{M}}| \\ & \leq \left(2^{m - 1}\right)^\frac{|\mathscr{F}|}{k} \end{align} | (3.9) |
\begin{align} & = 2^{\binom{n}{2}\left(1-\frac{1}{m}\right)} \end{align} | (3.10) |
\begin{align} & \leq 2^{\binom{n}{2}-(\chi ({\mathsf{{H}}})-1)}, \end{align} | (3.11) |
where (3.9) relies on (2.10) and (3.8), equality (3.10) relies on (3.7), and (3.11) holds due to (3.4).
The family \mathcal {G} of \mathsf{{H}} -intersecting graphs on n vertices can be as large as 2^{\binom{n}{2}-|{\mathsf{E}({\mathsf{{H}}})}|} . To see this, consider the family {\mathcal G} of all graphs on n vertices that include a fixed copy of \mathsf{{H}} as a subgraph.
Combining this lower bound on |{ \mathcal {G}}| with its upper bound in Proposition 7 gives that the largest family \mathcal {G} of \mathsf{{H}} -intersecting graphs on n vertices satisfies
\begin{align} 2^{\binom{n}{2}-|{\mathsf{E}({\mathsf{{H}}})}|} \leq |{ \mathcal {G}}| \leq 2^{\binom{n}{2}-(\chi ({\mathsf{{H}}})-1)}. \end{align} | (3.12) |
Specialization of Proposition 7 to complete subgraphs gives the following.
Corollary 1. Let \mathcal {G} be a family of \mathsf{K}_{{t}} -intersecting graphs, with t \geq 2 , on a common vertex set [{n}] . Then,
\begin{align} |{ \mathcal {G}}| \leq 2^{\binom{n}{2}-(t-1)}. \end{align} | (3.13) |
Proof. \chi ({\mathsf{K}_{{t}}}) = t for all t \in {\mathbb{N}} , and if t \geq 2 , then the complete graph \mathsf{K}_{{t}} is nonempty.
Remark 3. For t \geq 3 , the bound in Corollary 1 falls short of the conjectured result in [5], which states that every \mathsf{K}_{{t}} -intersecting family of graphs on a common vertex set [{n}] has a size of at most 2^{\binom{n}{2}-\frac{t(t-1)}{2}} , with equality achieved by the family of all graphs containing a fixed clique on t vertices. Nevertheless, it generalizes Proposition 5, which addresses the special case \mathsf{{H}} = \mathsf{K}_{{3}} (triangle-intersecting graphs), and it uniformly improves the trivial bound of 2^{\binom{n}{2}-1} for \mathsf{K}_{{t}} -intersecting graph families on n vertices with t \geq 3 .
Remark 4. If \mathsf{{H}} is a bipartite graph, then \chi ({\mathsf{{H}}}) = 2 . In that case, our result for an \mathsf{{H}} -intersecting family of graphs on n vertices is specialized to the trivial bound
\begin{align} |{ \mathcal {G}}| \leq 2^{\binom{n}{2}-1}. \end{align} | (3.14) |
This bound is tight for the largest family of \mathsf{K}_{{2}} -intersecting graphs on n vertices, consisting of all graphs that contain a fixed edge as a subgraph (since in this case, the upper and lower bounds in (3.12) coincide).
Computing the chromatic number of a graph is in general NP-hard [35]. This poses a difficulty in evaluating the upper bound in Proposition 7 on the cardinality of \mathsf{{H}} -intersecting families of graphs on a fixed number of vertices. This bound can, however, be loosened by expressing it in terms of the Lovász \vartheta -function of the complement graph \overline{\mathsf{{H}}} (see Corollary 3 of [36]).
Corollary 2. Let \mathsf{{H}} be a graph, and let \mathcal {G} be a family of \mathsf{{H}} -intersecting graphs on a common vertex set [{n}] . Then,
\begin{align} |{ \mathcal {G}}| \leq 2^{\binom{n}{2}-(\lceil\vartheta(\overline{\mathsf{{H}}})\rceil-1)}. \end{align} | (3.15) |
Proof. The Lovász \vartheta -function of the complement graph \overline{\mathsf{{H}}} satisfies the inequality
\begin{align} \omega ({\mathsf{{H}}}) \leq \vartheta(\overline{\mathsf{{H}}}) \leq \chi ({\mathsf{{H}}}), \end{align} | (3.16) |
so it is bounded between the clique and chromatic numbers of \mathsf{{H}} , which are both NP-hard to compute [35]. Since the chromatic number \chi ({\mathsf{{H}}}) is an integer, we have
\begin{align} \chi ({\mathsf{{H}}}) \geq \lceil\vartheta(\overline{\mathsf{{H}}}) \rceil. \end{align} | (3.17) |
Combining (3.1) and (3.17) yields (3.15).
The Lovász \vartheta -function of the complement graph \overline{\mathsf{{H}}} , as presented in Corollary 2, can be efficiently computed with a precision of r decimal digits, having a computational complexity that is polynomial in p \triangleq |{\mathsf{V}({\mathsf{{H}}})}| and r . It is obtained by solving the following semidefinite programming (SDP) problem [37]:
\begin{align} \vartheta(\overline{\mathsf{{H}}}) = \max\limits_{{\bf{B}} \in {\bf{ \mathcal {S}}}_{+}^p} \Bigl\{ \mathrm{Tr}\bigl({\bf{B}} \, {\bf{J}}_p \bigr) \, : \; \mathrm{Tr}({\bf{B}}) = 1, \; \; B_{i,j} = 0 \, \text{ if } A_{i,j} = 0 \text{ and } i \neq j \Bigr\}, \end{align} | (3.18) |
where the following notation is used:
● {\bf{A}} = {\bf{A}}(\mathsf{{H}}) is the p \times p adjacency matrix of \mathsf{{H}} ;
● {\bf{J}}_p is the all-ones p \times p matrix;
● {\bf{ \mathcal {S}}}_{+}^p is the set of all p \times p positive semidefinite matrices.
The reader is referred to an account of interesting properties of the Lovász \vartheta -function in [38], Chapter 11 of [39], and more recently in Section 2.5 of [40].
The following result generalizes Corollary 1 by relying on properties of the Lovász \vartheta -function. For the third item of the next result, strongly regular graphs are first presented.
Definition 5 (Strongly Regular Graphs). A regular graph \mathsf{{G}} that is neither complete nor empty is called a strongly regular graph with parameters (n, d, \lambda, \mu) , where \lambda and \mu are nonnegative integers, if the following conditions hold:
(1) \mathsf{{G}} is a d -regular graph on n vertices.
(2) Every two adjacent vertices in \mathsf{{G}} have exactly \lambda common neighbors.
(3) Every two distinct and nonadjacent vertices in \mathsf{{G}} have exactly \mu common neighbors.
The family of strongly regular graphs with these four specified parameters is denoted by \mathsf{srg}({n}, {d}, {\lambda}, {\mu}) . It is important to note that a family of the form \mathsf{srg}({n}, {d}, {\lambda}, {\mu}) may contain multiple nonisomorphic strongly regular graphs [41]. We refer to a strongly regular graph as \mathsf{srg}({n}, {d}, {\lambda}, {\mu}) if it belongs to this family.
Proposition 8. Let \mathcal {G} be an \mathsf{{H}} -intersecting family of graphs on n vertices, where \mathsf{{H}} is a nonempty, simple, and undirected graph on p vertices. The following holds:
(1)
\begin{align} \log_2 |{ \mathcal {G}}| \leq \binom{n}{2} - \Biggl\lceil \max\limits_{\bf{T}} \, \frac{\lambda_{\max}({\bf{T}})}{|\lambda_{\min}({\bf{T}})|} \Biggr\rceil, \end{align} | (3.19) |
where the maximization on the right-hand side of (3.19) is taken over all nonzero, symmetric p \times p matrices {\bf{T}} = (T_{i, j}) with T_{i, j} = 0 for all i, j \in [{p}] such that \{i, j\} \notin \mathsf{E}({\mathsf{{H}}}) or i = j (e.g., the adjacency matrix of \mathsf{{H}} ), and \lambda_{\max}({\bf{T}}) and \lambda_{\min}({\bf{T}}) denote the largest and smallest (real) eigenvalues of {\bf{T}} , respectively.
(2) Specifically, if \mathsf{{H}} is a d -regular graph on p vertices, where d \in [{p-1}] , then
\begin{align} \log_2 |{ \mathcal {G}}| \leq \binom{n}{2} - \Biggl\lceil \frac{d}{|\lambda_{\min}(\mathsf{{H}})|} \Biggr\rceil, \end{align} | (3.20) |
where \lambda_{\min}(\mathsf{{H}}) is the smallest eigenvalue of the adjacency matrix of the graph \mathsf{{H}} .
(3) If \mathsf{{H}} is a connected strongly regular graph in the family \mathsf{srg}({p}, {d}, {\lambda}, {\mu}) , then
\begin{align} \log_2 |{ \mathcal {G}}| \leq \binom{n}{2} - \Biggl\lceil \frac{2d}{\sqrt{(\lambda-\mu)^2+4(d-\mu)}-\lambda+\mu} \Biggr\rceil. \end{align} | (3.21) |
Proof. By Corollary 2,
\begin{align} \log_2 |{ \mathcal {G}}| \leq \binom{n}{2} - \bigl( \lceil \vartheta(\overline{\mathsf{{H}}}) \rceil - 1 \bigr). \end{align} | (3.22) |
Item 1 then holds by the property that for every finite, simple, and undirected graph \mathsf{{G}} on n vertices,
\begin{align} \vartheta(\mathsf{{G}}) = 1 + \max\limits_{{\bf{T}}} \frac{\lambda_{\max}({\bf{T}})}{\bigl|\lambda_{\min}({\bf{T}})\bigr|}, \end{align} | (3.23) |
where the maximization on the right-hand side of (3.23) is taken over all symmetric nonzero n \times n matrices {\bf{T}} = (T_{i, j}) with T_{i, j} = 0 for all i, j \in [{n}] such that \{i, j\} \in \mathsf{E}({\mathsf{{G}}}) or i = j . Equality (3.23) is then applied to the graph \overline{\mathsf{{H}}} , so the maximization on the right-hand side of (3.19) is taken over all symmetric nonzero p \times p matrices {\bf{T}} = (T_{i, j}) with T_{i, j} = 0 for all i, j \in [{p}] such that \{i, j\} \notin \mathsf{E}({\mathsf{{H}}}) or i = j . This includes in particular the adjacency matrix of the graph \mathsf{{H}} , i.e., {\bf{T}} = {\bf{A}}(\mathsf{{H}}) .
We next prove Item 2, which refers to nonempty d -regular graphs on p vertices. Relaxing the bound in (3.19) by selecting {\bf{T}} = {\bf{A}}(\mathsf{{H}}) gives \lambda_{\max}({\bf{T}}) = d , and \lambda_{\min}({\bf{T}}) = \lambda_{\min}(\mathsf{{H}}) , which then gives the relaxed bound in (3.20).
Item 3 follows from Item 2 by relying on the closed-form expression of the smallest eigenvalue of the adjacency matrix of a connected strongly d -regular graph \mathsf{{H}} on p vertices, where each pair of adjacent vertices has exactly \lambda common neighbors and each pair of nonadjacent vertices has exactly \mu common neighbors. Recall that a strongly regular graph is connected if and only if \mu > 0 . In that case, the largest eigenvalue of the adjacency matrix is \lambda_1(\mathsf{{H}}) = d with multiplicity 1, and the other two distinct eigenvalues of its adjacency matrix are given by (see, e.g., Chapter 21 of [42])
\begin{align} r_{1,2} = \tfrac12 \, \Bigl( \lambda - \mu \pm \sqrt{ (\lambda-\mu)^2 + 4(d-\mu) } \, \Bigr), \end{align} | (3.24) |
with the respective multiplicities
\begin{align} m_{1,2} = \tfrac12 \, \Biggl( p-1 \mp \frac{2d+(p-1)(\lambda-\mu)}{\sqrt{(\lambda-\mu)^2+4(d-\mu)}} \, \Biggr). \end{align} | (3.25) |
Specifically, by (3.24), the absolute value of the smallest eigenvalue of \mathsf{{H}} is given by
\begin{align} \bigl|\lambda_{\min}(\mathsf{{H}})\bigr| = \tfrac12 \, \Bigl( \sqrt{ (\lambda-\mu)^2 + 4(d-\mu) } + \mu -\lambda \, \Bigr). \end{align} | (3.26) |
Finally, substituting (3.26) into (3.20) gives (3.21).
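As a concrete check of (3.24)–(3.26) and of the exponent in (3.21), one may take the Petersen graph, a connected strongly regular graph in \mathsf{srg}({10}, {3}, {0}, {1}) . The short numpy sketch below (the Kneser-type construction and the variable names are ours) compares its adjacency spectrum with the closed-form expressions:

```python
import numpy as np
from itertools import combinations
from math import ceil

# Kneser construction of the Petersen graph, srg(10, 3, 0, 1):
# vertices are the 2-subsets of a 5-set, adjacent iff disjoint.
V = [frozenset(c) for c in combinations(range(5), 2)]
A = np.array([[1 if not (u & v) else 0 for v in V] for u in V])

p, d, lam, mu = 10, 3, 0, 1
eig = np.sort(np.linalg.eigvalsh(A.astype(float)))
disc = np.sqrt((lam - mu) ** 2 + 4 * (d - mu))             # = 3
r1, r2 = 0.5 * (lam - mu + disc), 0.5 * (lam - mu - disc)  # = 1, -2 by (3.24)

assert np.isclose(eig[-1], d)      # largest eigenvalue is d = 3
assert np.isclose(eig[0], r2)      # smallest eigenvalue is -2, matching (3.26)
# Multiplicities of r1 and r2 (see (3.25), with p vertices): 5 and 4.
m1 = 0.5 * (p - 1 - (2 * d + (p - 1) * (lam - mu)) / disc)
m2 = 0.5 * (p - 1 + (2 * d + (p - 1) * (lam - mu)) / disc)
assert np.isclose(eig, r1).sum() == round(m1)
assert np.isclose(eig, r2).sum() == round(m2)
# Exponent subtracted from binom(n,2) in the bound (3.21): ceil(6/4) = 2.
assert ceil(2 * d / (disc - lam + mu)) == 2
```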
Remark 5. The derivation of (3.21) relies on (3.20), where the latter is based on the lower bound on \vartheta(\overline{\mathsf{{H}}}) , obtained by relaxing the bound in (3.19) and selecting {\bf{T}} = {\bf{A}}(\mathsf{{H}}) (see the proof of Item 2 in Proposition 8). Fortunately, the Lovász \vartheta -function of strongly regular graphs (and their complements, which are also strongly regular) is known exactly, so there is no need for the lower bound on \vartheta(\overline{\mathsf{{H}}}) in this case. It is therefore of interest to examine whether the bound in (3.21) can be improved by using (3.15) in combination with the exact value of \vartheta(\overline{\mathsf{{H}}}) for a strongly regular graph \mathsf{{H}} in the family \mathsf{srg}({p}, {d}, {\lambda}, {\mu}) . In that case, by Proposition 1 of [43], we have
\begin{align} \vartheta(\overline{\mathsf{{H}}}) & = 1 - \frac{d}{\lambda_{\min}(\mathsf{{H}})}, \end{align} | (3.27) |
which also gives
\begin{align} \vartheta(\overline{\mathsf{{H}}}) & = 1 + \frac{2d}{\sqrt{(\lambda-\mu)^2+4(d-\mu)}-\lambda+\mu}, \end{align} | (3.28) |
where (3.28) holds by the expression of the smallest eigenvalue of the adjacency matrix of \mathsf{{H}} , as given by r_2 in (3.24). Finally, combining inequality (3.15) with equality (3.28) gives exactly the same bound as in (3.21). Thus, there is no improvement, and the bound remains identical in both approaches. As a side note, interested readers are referred to a recent application of (3.28), which provides an alternative proof of the friendship theorem in graph theory [44].
It is natural to ask the following question:
Question 3. Is there a graph \mathsf{{H}} , apart from \mathsf{K}_{{2}} , for which the bound provided in Proposition 7 is tight for a largest \mathsf{{H}} -intersecting family of graphs?
We provide a partial reply to Question 3 by comparing the leftmost and rightmost terms in (3.12), which is equivalent to comparing |{\mathsf{E}({\mathsf{{H}}})}| and \chi ({\mathsf{{H}}})-1 . According to the inequality
\begin{align} \chi ({\mathsf{{H}}}) \leq \Delta(\mathsf{{H}})+1, \end{align} | (3.29) |
where \Delta(\mathsf{{H}}) is the maximum degree of the vertices in the graph \mathsf{{H}} , combined with the trivial inequality |{\mathsf{E}({\mathsf{{H}}})}| \geq \Delta(\mathsf{{H}}) , it follows that unless \mathsf{{H}} is a single edge, there exists a gap between the size of the graph ( |{\mathsf{E}({\mathsf{{H}}})}| ) and the chromatic number minus one ( \chi ({\mathsf{{H}}})-1 ). Furthermore, according to Brooks' theorem, for connected, undirected graphs \mathsf{{H}} that are neither complete nor odd cycles, the chromatic number satisfies
\begin{align} \chi ({\mathsf{{H}}}) \leq \Delta(\mathsf{{H}}), \end{align} | (3.30) |
which provides a tighter bound in comparison to (3.29), further increasing the gap between |{\mathsf{E}({\mathsf{{H}}})}| and \chi ({\mathsf{{H}}})-1 for such graphs. It is also noted that the chromatic number satisfies
\begin{align} \chi ({\mathsf{{H}}}) \leq \tfrac12 \bigl(1 + \sqrt{1 + 8 |{\mathsf{E}({\mathsf{{H}}})}|}\bigr), \end{align} | (3.31) |
with equality if and only if \mathsf{{H}} is a complete graph. This bound follows from the observation that a graph with chromatic number k must contain at least as many edges as the complete graph on k vertices. The chromatic number \chi ({\mathsf{{H}}}) cannot therefore exceed the largest integer k satisfying \binom{k}{2} \leq |{\mathsf{E}({\mathsf{{H}}})}| . Consequently, if |{\mathsf{E}({\mathsf{{H}}})}| \triangleq m \geq 2 , then the minimum possible gap between \chi ({\mathsf{{H}}})-1 and |{\mathsf{E}({\mathsf{{H}}})}| satisfies
\begin{align} |{\mathsf{E}({\mathsf{{H}}})}| - \bigl(\chi ({\mathsf{{H}}}) - 1 \bigr) &\geq \Bigl\lceil m - \tfrac12 \bigl(\sqrt{8m+1}-1 \bigr) \Bigr\rceil \geq 1, \end{align} | (3.32) |
which tends to infinity as m \to \infty .
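The minimum-gap bound (3.32) can be checked numerically; in the Python sketch below, the helper `min_gap` (our name) computes the gap when \chi ({\mathsf{{H}}})-1 takes its largest conceivable value, namely the largest integer k with \binom{k}{2} \leq m , minus one:

```python
from math import comb, ceil, sqrt

def min_gap(m):
    # chi(H) cannot exceed the largest k with binom(k,2) <= m = |E(H)|,
    # so the gap |E(H)| - (chi(H) - 1) is at least m - (k - 1).
    k = max(k for k in range(1, m + 2) if comb(k, 2) <= m)
    return m - (k - 1)

# The right-hand side of (3.32) is a valid lower bound, and it is >= 1.
for m in range(2, 200):
    lower = ceil(m - 0.5 * (sqrt(8 * m + 1) - 1))
    assert min_gap(m) >= lower >= 1
```

For example, m = 6 (attained by \mathsf{K}_{{4}} ) gives a gap of 6 - 3 = 3 , matching \lceil 6 - \tfrac12(\sqrt{49}-1) \rceil = 3 .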
This section, composed of two independent parts, applies properties of Shannon entropy to derive bounds related to the enumeration of graph homomorphisms. It offers additional insight into the interplay between combinatorial structures and information-theoretic principles.
The following known result relates the number of cliques of any two distinct orders in a graph.
Proposition 9. Let \mathsf{{G}} be a finite, simple, and undirected graph on n vertices, and let m_\ell be the number of cliques of order \ell \in {\mathbb{N}} in \mathsf{{G}} . Then, for all s, t \in {\mathbb{N}} with 2 \leq s < t \leq n ,
\begin{align} (t! \, m_t)^s \leq (s! \, m_s)^t. \end{align} | (4.1) |
We next present a generalization of Proposition 9.
Proposition 10. Let \mathsf{{G}} be a finite, simple, and undirected graph on n vertices, let s, t \in {\mathbb{N}} with s < t < n , let \mathsf{{T}} be an induced subgraph of \mathsf{{G}} on t vertices, and let m(\mathsf{{H}}, \mathsf{{G}}) denote the number of copies of a subgraph \mathsf{{H}} in the graph \mathsf{{G}} . Then,
\begin{align} \bigl( t! \, m(\mathsf{{T}}, \mathsf{{G}}) \bigr)^s \leq \max\limits_{\mathsf{{S}}} \Bigl( s! \, m(\mathsf{{S}}, \mathsf{{G}}) \Bigr)^t, \end{align} | (4.2) |
where the maximization in (4.2) is taken over all induced subgraphs \mathsf{{S}} of \mathsf{{T}} with s vertices.
Let \mathrm{aut}(\mathsf{{{H}}}) denote the size of the automorphism group of a graph \mathsf{{H}} , defined as the number of vertex permutations that preserve the graph's structure, i.e., its adjacency and non-adjacency relations. Furthermore, let \mathrm{inj}(\mathsf{{{H}}}, \mathsf{{{G}}}) denote the number of injective homomorphisms from \mathsf{{H}} to \mathsf{{G}} (i.e., homomorphisms where distinct vertices of \mathsf{{H}} map to distinct vertices of \mathsf{{G}} ). Then, equivalently to (4.2), in terms of injective homomorphism counts,
\begin{align} \biggl(\frac{t!}{\mathrm{aut}(\mathsf{{{T}}})} \cdot \mathrm{inj}(\mathsf{{{T}}},\mathsf{{{G}}}) \biggr)^s \leq \max\limits_{\mathsf{{S}}} \, \biggl(\frac{s!}{\mathrm{aut}(\mathsf{{{S}}})} \cdot \mathrm{inj}(\mathsf{{{S}}},\mathsf{{{G}}}) \biggr)^t, \end{align} | (4.3) |
where the maximization in (4.3) is the same as in (4.2).
Proof.
● Let \mathsf{V}({\mathsf{{G}}}) = [{n}] , and let \mathsf{{T}} be an induced subgraph of \mathsf{{G}} with |{\mathsf{V}({\mathsf{{T}}})}| = t < n .
● Select a copy of \mathsf{{T}} in \mathsf{{G}} uniformly at random, and then choose a uniform random ordering of the vertices in that copy. This process produces a random vector (X_1, \ldots, X_t) , representing the selected order of the vertices.
● Let m(\mathsf{{T}}, \mathsf{{G}}) denote the number of copies of \mathsf{{T}} in \mathsf{{G}} . Then,
\begin{align} \text{H}({X_1, \ldots, X_t}) = \log \bigl(t! \; m(\mathsf{{T}},\mathsf{{G}})\bigr), \end{align} | (4.4) |
as the vertices of each copy of \mathsf{{T}} in \mathsf{{G}} can be ordered in t! equally probable ways.
● Let \mathcal {S} be chosen uniformly at random from all subsets of [{t}] of fixed size s , where 1 \leq s < t . Then,
\begin{align} \Pr[{i \in \mathcal {S}}] = \frac{s}{t}, \quad \forall \, i \in [{t}]. \end{align} | (4.5) |
● By Proposition 3 and equalities (4.4) and (4.5), it follows that
\begin{align} \mathbb{E}_{{ \mathcal {S}}}\bigl[{\text{H}({X_{ \mathcal {S}}})}\bigr] \geq \frac{s}{t} \cdot \log \bigl(t! \; m(\mathsf{{T}},\mathsf{{G}})\bigr). \end{align} | (4.6) |
● The random subvector X_{ \mathcal {S}} corresponds to a copy, in \mathsf{{G}} , of an induced subgraph \mathsf{{S}} \subseteq \mathsf{{T}} with s vertices. All s! permutations of the subvector X_{ \mathcal {S}} correspond to the same copy of \mathsf{{S}} in \mathsf{{G}} , and there are m(\mathsf{{S}}, \mathsf{{G}}) such copies of \mathsf{{S}} in \mathsf{{G}} .
● The entropy of the random subvector X_{ \mathcal {S}} therefore satisfies
\begin{align} \text{H}({X_{ \mathcal {S}}}) \leq \log \bigl(s! \; m(\mathsf{{S}},\mathsf{{G}})\bigr), \end{align} | (4.7) |
where m(\mathsf{{S}}, \mathsf{{G}}) denotes the number of copies of a graph \mathsf{{S}} in \mathsf{{G}} , and s! accounts for the s! permutations of the vector X_{ \mathcal {S}} that correspond to the same copy of \mathsf{{S}} in \mathsf{{G}} .
● By (4.7), it follows that
\begin{align} \mathbb{E}_{{ \mathcal {S}}}\bigl[{\text{H}({X_{ \mathcal {S}}})}\bigr] \leq \max\limits_{\mathsf{{S}}} \log \bigl(s! \; m(\mathsf{{S}},\mathsf{{G}})\bigr), \end{align} | (4.8) |
where the maximization on the right-hand side of (4.8) is taken over all induced subgraphs \mathsf{{S}} of \mathsf{{T}} on s vertices.
● Combining (4.6) and (4.8) yields
\begin{align} \frac{s}{t} \cdot \log \bigl(t! \; m(\mathsf{{T}},\mathsf{{G}})\bigr) \leq \max\limits_{\mathsf{{S}}} \, \log \bigl(s! \; m(\mathsf{{S}},\mathsf{{G}})\bigr), \end{align} | (4.9) |
and exponentiating both sides of (4.9) gives (4.2).
● The following equality holds:
\begin{align} \mathrm{inj}(\mathsf{{{H}}},\mathsf{{{G}}}) = \mathrm{aut}(\mathsf{{{H}}}) \, m(\mathsf{{H}},\mathsf{{G}}), \end{align} | (4.10) |
since each copy of \mathsf{{H}} in \mathsf{{G}} corresponds to exactly \mathrm{aut}(\mathsf{{{H}}}) distinct injective homomorphisms from \mathsf{{H}} to \mathsf{{G}} , as its vertices can be labeled in \mathrm{aut}(\mathsf{{{H}}}) different ways. Combining (4.2) and (4.10) yields inequality (4.3). Furthermore, by (4.10), inequalities (4.2) and (4.3) are equivalent.
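Proposition 10 can be checked exhaustively on small graphs. In the brute-force Python sketch below (a toy instance of ours, with m(\cdot, \cdot) interpreted as counting induced copies), \mathsf{{G}} = \mathsf{C}_5 is the 5-cycle and \mathsf{{T}} is an induced path on three vertices; inequality (4.2) then reads 900 \leq 1000 :

```python
from itertools import combinations, permutations
from math import factorial

def induced_copies(H_edges, h, G_edges, n):
    """Count h-vertex subsets of range(n) whose induced subgraph in G is
    isomorphic to H (brute force; fine for tiny graphs)."""
    G = {frozenset(e) for e in G_edges}
    count = 0
    for W in combinations(range(n), h):
        ind = {frozenset(e) for e in combinations(W, 2) if frozenset(e) in G}
        if any({frozenset({f[a], f[b]}) for a, b in H_edges} == ind
               for f in permutations(W)):
            count += 1
    return count

C5 = [(i, (i + 1) % 5) for i in range(5)]   # the 5-cycle
P3 = [(0, 1), (1, 2)]                       # induced path on 3 vertices
K2, E2 = [(0, 1)], []                       # the two induced 2-vertex subgraphs of P3

t, s = 3, 2
m_T = induced_copies(P3, 3, C5, 5)                     # 5 induced paths in C5
m_S = [induced_copies(S, 2, C5, 5) for S in (K2, E2)]  # 5 edges, 5 non-edges
# Inequality (4.2): (3! * 5)^2 = 900 <= max(2! * 5)^3 = 1000.
assert (factorial(t) * m_T) ** s <= max((factorial(s) * m) ** t for m in m_S)
```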
Remark 6 (Specialization of Proposition 10). Proposition 10 can be specialized to Proposition 9 by setting \mathsf{{T}} = \mathsf{K}_{{t}} (a clique of order t ), for which every induced subgraph of \mathsf{{T}} on s vertices is a clique \mathsf{{S}} of order s ( \mathsf{{S}} = \mathsf{K}_{{s}} ). In that case, \mathrm{aut}(\mathsf{{{T}}}) = t! and \mathrm{aut}(\mathsf{{{S}}}) = s! . Consequently, the maximization on the right-hand side of (4.3) is performed over the single graph \mathsf{K}_{{s}} , which gives
\begin{align} \mathrm{inj}(\mathsf{{{\mathsf{K}_{{t}}}}},\mathsf{{{G}}})^s \leq \mathrm{inj}(\mathsf{{{\mathsf{K}_{{s}}}}},\mathsf{{{G}}})^t, \quad 1 \leq s < t < n. \end{align} | (4.11) |
By (4.10), we have
\begin{align} \mathrm{inj}(\mathsf{{{\mathsf{K}_{{t}}}}},\mathsf{{{G}}}) = t! \, m_t, \end{align} | (4.12) |
\begin{align} \mathrm{inj}(\mathsf{{{\mathsf{K}_{{s}}}}},\mathsf{{{G}}}) = s! \, m_s, \end{align} | (4.13) |
where m_t and m_s denote, respectively, the number of cliques of orders t and s in \mathsf{{G}} . Combining (4.11), (4.12), and (4.13) then gives
\begin{align} (t! \, m_t)^s \leq (s! \, m_s)^t, \quad 1 \leq s < t < n. \end{align} | (4.14) |
This reproduces Proposition 9, which establishes a relationship between the numbers of cliques of two different orders in a finite, simple, undirected graph \mathsf{{G}} .
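The clique-count inequality (4.14) can likewise be verified by brute force; the Python sketch below uses a toy instance of ours ( \mathsf{K}_{{5}} together with a pendant vertex):

```python
from itertools import combinations
from math import factorial

def clique_count(edges, n, k):
    """Number of k-cliques in a graph with vertex set range(n) (brute force)."""
    E = {frozenset(e) for e in edges}
    return sum(all(frozenset(p) in E for p in combinations(W, 2))
               for W in combinations(range(n), k))

# Test graph: K_5 together with a pendant vertex attached to vertex 0.
n = 6
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)] + [(0, 5)]

# Check (4.14): (t! m_t)^s <= (s! m_s)^t for all 2 <= s < t <= 5.
for s in range(2, 5):
    for t in range(s + 1, 6):
        m_s, m_t = clique_count(edges, n, s), clique_count(edges, n, t)
        assert (factorial(t) * m_t) ** s <= (factorial(s) * m_s) ** t
```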
Perfect graphs are characterized by the property that the chromatic number and the clique number coincide not only for the graph itself, but also for every one of its induced subgraphs. The complement of a perfect graph is perfect as well. Perfect graphs include many important families of graphs such as bipartite graphs, line graphs of bipartite graphs, chordal graphs, comparability graphs, and the complements of all these graphs [45].
A complete bipartite graph, denoted by \mathsf{K}_{{s}, {t}} for s, t \in {\mathbb{N}} , consists of two partite sets of sizes s and t , where every vertex in one partite set is adjacent to all the vertices in the other partite set. It is in particular a perfect graph.
In the following, we rely on Proposition 6 to derive an upper bound on the number of homomorphisms from a perfect graph to a graph. We then rely on properties of Shannon entropy to derive a lower bound on the number of homomorphisms from any complete bipartite graph to any bipartite graph, and examine its tightness by comparing it to the specialized upper bound that is based on Proposition 6.
Proposition 11. Let \mathsf{{T}} and \mathsf{{G}} be simple, finite, and undirected graphs having no isolated vertices, and suppose that \mathsf{{T}} is also a perfect graph. Then, the number of homomorphisms from \mathsf{{T}} to \mathsf{{G}} satisfies
\begin{align} \mathrm{hom}(\mathsf{{{T}}},\mathsf{{{G}}}) \leq \bigl( 2 |{\mathsf{E}({\mathsf{{G}}})}| \bigr)^{\vartheta(\mathsf{{T}})}. \end{align} | (4.15) |
Furthermore, let \mathsf{{G}} be a simple bipartite graph having no isolated vertices. Let its partite vertex sets be of sizes n_1 and n_2 , and let its number of edges be equal to \alpha n_1 n_2 with \alpha \in (0, 1] . Then, the number of homomorphisms from the complete bipartite graph \mathsf{K}_{{s}, {t}} to \mathsf{{G}} , where s, t \in {\mathbb{N}} , satisfies
\begin{align} \alpha^{st} \, \min\{n_1, n_2\}^{-|s-t|} \bigl(n_1 n_2 \bigr)^{\max\{s,t\}} \leq \mathrm{hom}(\mathsf{{{\mathsf{K}_{{s},{t}}}}},\mathsf{{{G}}}) \leq \bigl( 2 \alpha n_1 n_2 \bigr)^{\max\{s,t\}}. \end{align} | (4.16) |
Proof. For a perfect graph \mathsf{{T}} , we have \alpha({\mathsf{{T}}}) = \vartheta(\mathsf{{T}}) = \alpha_{\mathrm{f}}({\mathsf{{T}}}) = \chi ({\overline{\mathsf{{T}}}}) , which by (2.22) gives (4.15).
We next prove the rightmost inequality in (4.16). Every bipartite graph is a perfect graph, so it follows that \alpha({\mathsf{K}_{{s}, {t}}}) = \vartheta(\mathsf{K}_{{s}, {t}}) = \alpha_{\mathrm{f}}({\mathsf{K}_{{s}, {t}}}) = \chi ({\overline{\mathsf{{\mathsf{K}_{{s}, {t}}}}}}) . The independence number of the complete bipartite graph \mathsf{K}_{{s}, {t}} is the size of the largest among its two partite vertex sets, so \alpha({\mathsf{K}_{{s}, {t}}}) = \max\{s, t\} . Similarly, the complement graph \overline{\mathsf{{\mathsf{K}_{{s}, {t}}}}} = \mathsf{K}_{{s}} \cup \mathsf{K}_{{t}} is the disjoint union of the complete graphs \mathsf{K}_{{s}} and \mathsf{K}_{{t}} , so its chromatic number is given by \chi ({\overline{\mathsf{{\mathsf{K}_{{s}, {t}}}}}}) = \max\{s, t\} . Consequently, \vartheta(\mathsf{K}_{{s}, {t}}) = \max\{s, t\} , whose substitution into the right-hand side of (4.15), along with |{\mathsf{E}({\mathsf{{G}}})}| = \alpha n_1 n_2 where \alpha \in (0, 1] (by assumption), gives the rightmost inequality in (4.16).
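The two-sided bound (4.16) can be checked by brute-force homomorphism counting on a toy bipartite graph (the instance below, with s = 1 and t = 2 , and the helper names, are ours):

```python
from itertools import product

def hom_count(H_edges, h, G_edges, n):
    """Number of homomorphisms from H (vertex set range(h)) to G (range(n));
    mapping two adjacent vertices of H to one vertex of G is rejected."""
    G = {frozenset(e) for e in G_edges}
    return sum(all(frozenset({f[a], f[b]}) in G for a, b in H_edges)
               for f in product(range(n), repeat=h))

# Bipartite G with partite sets {0, 1} and {2, 3}, and 3 of the 4 possible edges.
G_edges = [(0, 2), (0, 3), (1, 2)]
n1 = n2 = 2
alpha = len(G_edges) / (n1 * n2)                 # alpha = 3/4

# H = K_{1,2}: center 0 joined to leaves 1 and 2; here s = 1, t = 2.
s, t = 1, 2
h = hom_count([(0, 1), (0, 2)], 3, G_edges, 4)   # sum over v of deg(v)^2 = 10

lower = alpha ** (s * t) * min(n1, n2) ** (-abs(s - t)) * (n1 * n2) ** max(s, t)
upper = (2 * alpha * n1 * n2) ** max(s, t)
assert lower <= h <= upper                       # (4.16): 4.5 <= 10 <= 36
```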
We finally prove the leftmost inequality in (4.16). Let \mathcal {U} and \mathcal {V} denote the partite vertex sets of the simple bipartite graph \mathsf{{G}} , where |{ \mathcal {U}}| = n_1 and |{ \mathcal {V}}| = n_2 . Let (U, V) be a random vector taking values in \mathcal {U} \times \mathcal {V} , and distributed uniformly at random on the edges of \mathsf{{G}} . Then, its entropy is given by
\begin{align} \text{H}({U,V}) & = \log \, \bigl| \mathsf{E}({\mathsf{{G}}}) \bigr| \\ & = \log(\alpha n_1 n_2). \end{align} | (4.17) |
The random vector (U, V) can be sampled by first sampling U = u from the marginal probability mass function (PMF) of U , denoted by \mathsf{{P}}_{{\smash{{{U}}}\vphantom{XYZ}}} , and then sampling V from the conditional PMF \mathsf{{P}}_{{\smash{{{V}}}\vphantom{XYZ}} | {\smash{{{U}}}\vphantom{XYZ}}}(\cdot|u) . We next construct a random vector (U_1, \ldots, U_s, V_1, \ldots, V_t) as follows:
● Let V_1, \ldots, V_t be conditionally independent and identically distributed (i.i.d.) given U , having the conditional PMF
\begin{align} \mathsf{{P}}_{{\smash{{{V_1, \ldots, V_t}}}\vphantom{XYZ}} | {\smash{{{U}}}\vphantom{XYZ}}}(v_1, \ldots, v_t|u) = \prod\limits_{j = 1}^t \mathsf{{P}}_{{\smash{{{V}}}\vphantom{XYZ}} | {\smash{{{U}}}\vphantom{XYZ}}}(v_j|u), \quad \forall \, u \in \mathcal {U}, \; \; (v_1, \ldots, v_t) \in \mathcal {V}^{\, t}. \end{align} | (4.18) |
● Let U_1, \ldots, U_s be conditionally i.i.d. given (V_1, \ldots, V_t) , having the conditional PMF
\begin{align} & \mathsf{{P}}_{{\smash{{{U_1, \ldots, U_s}}}\vphantom{XYZ}} | {\smash{{{V_1, \ldots, V_t}}}\vphantom{XYZ}}}(u_1, \ldots, u_s|v_1, \ldots, v_t) \\ & = \prod\limits_{i = 1}^s \mathsf{{P}}_{{\smash{{{U_i}}}\vphantom{XYZ}} | {\smash{{{V_1, \ldots, V_t}}}\vphantom{XYZ}}}(u_i|v_1, \ldots, v_t), \quad \forall \, (u_1, \ldots, u_s) \in \mathcal {U}^{\, s}, \; \; (v_1, \ldots, v_t) \in \mathcal {V}^{\, t}, \end{align} | (4.19) |
where the conditional PMFs on the right-hand side of (4.19) are given by
\begin{align} & \mathsf{{P}}_{{\smash{{{U_i}}}\vphantom{XYZ}} | {\smash{{{V_1, \ldots, V_t}}}\vphantom{XYZ}}}(u|v_1, \ldots, v_t) \\ & = \frac{\mathsf{{P}}_{{\smash{{{U}}}\vphantom{XYZ}}}(u) \, \overset{t}{\underset{j = 1}{\prod}} \mathsf{{P}}_{{\smash{{{V}}}\vphantom{XYZ}} | {\smash{{{U}}}\vphantom{XYZ}}}(v_j|u)}{\underset{u' \in \mathcal {U}}{\sum} \biggl\{ \mathsf{{P}}_{{\smash{{{U}}}\vphantom{XYZ}}}(u') \, \overset{t}{\underset{j = 1}{\prod}} \mathsf{{P}}_{{\smash{{{V}}}\vphantom{XYZ}} | {\smash{{{U}}}\vphantom{XYZ}}}(v_j|u') \biggr\}}, \quad \forall \, u \in \mathcal {U}, \; \; (v_1, \ldots, v_t) \in \mathcal {V}^{\, t}, \; \; i \in [{s}]. \end{align} | (4.20) |
By the construction of the random vector (U_1, \ldots, U_s, V_1, \ldots, V_t) in (4.18)–(4.20), the following holds:
1. The random variables U_1, \ldots, U_s are identically distributed, and U_i \sim U (i.e., \mathsf{{P}}_{{\smash{{{U_i}}}\vphantom{XYZ}}} = \mathsf{{P}}_{{\smash{{{U}}}\vphantom{XYZ}}} ) for all i \in [{s}] . Indeed, it first follows from (4.18) that
\begin{align} \mathsf{{P}}_{{\smash{{{V_1, \ldots, V_t}}}\vphantom{XYZ}}}(v_1, \ldots, v_t) = \sum\limits_{u \in \mathcal {U}} \biggl\{ \mathsf{{P}}_{{\smash{{{U}}}\vphantom{XYZ}}}(u) \, \overset{t}{\underset{j = 1}{\prod}} \mathsf{{P}}_{{\smash{{{V}}}\vphantom{XYZ}} | {\smash{{{U}}}\vphantom{XYZ}}}(v_j|u) \biggr\} \quad \forall \, (v_1, \ldots, v_t) \in \mathcal {V}^{\, t}. \end{align} | (4.21) |
Hence, for all i \in [{s}] and u \in \mathcal {U} ,
\begin{align}
\mathsf{P}_{U_i}(u) &= \sum\limits_{(v_1, \ldots, v_t) \in \mathcal{V}^{t}} \mathsf{P}_{U_i | V_1, \ldots, V_t}(u|v_1, \ldots, v_t) \, \mathsf{P}_{V_1, \ldots, V_t}(v_1, \ldots, v_t) \tag{4.22} \\
&= \sum\limits_{(v_1, \ldots, v_t) \in \mathcal{V}^{t}} \biggl\{ \mathsf{P}_{U}(u) \, \prod_{j = 1}^{t} \mathsf{P}_{V|U}(v_j|u) \biggr\} \tag{4.23} \\
&= \mathsf{P}_{U}(u) \, \prod_{j = 1}^{t} \biggl\{ \sum\limits_{v_j \in \mathcal{V}} \mathsf{P}_{V|U}(v_j|u) \biggr\} \tag{4.24} \\
&= \mathsf{P}_{U}(u), \tag{4.25}
\end{align}
where (4.22) holds by the law of total probability; (4.23) holds by combining (4.20) and (4.21), since the denominator of (4.20) coincides with the right-hand side of (4.21); (4.24) holds by expressing the outer t -dimensional summation over \mathcal{V}^{t} as a product of t inner one-dimensional summations over \mathcal{V} , due to the product form on the right-hand side of (4.23); and finally (4.25) holds since the conditional probability masses in each inner summation on the right-hand side of (4.24) add to 1.
2. For all i \in [s] and j \in [t] , we have (U_i, V_j) \sim (U, V) , and further (U_i, V_1, \ldots, V_t) \sim (U, V_1, \ldots, V_t) . Indeed, combining (4.18), (4.20), and (4.21) yields (by the product rule for conditional probabilities)
\begin{align}
\mathsf{P}_{U_i, V_1, \ldots, V_t}(u, v_1, \ldots, v_t) &= \mathsf{P}_{U}(u) \, \prod_{j = 1}^{t} \mathsf{P}_{V|U}(v_j|u) \notag \\
&= \mathsf{P}_{U, V_1, \ldots, V_t}(u, v_1, \ldots, v_t), \quad \forall \, u \in \mathcal{U}, \; (v_1, \ldots, v_t) \in \mathcal{V}^{t}, \; i \in [s]. \tag{4.26}
\end{align}
Then, a marginalization of (4.26) by summing over all (v_1, \ldots, v_{j-1}, v_{j+1}, \ldots, v_t) \in \mathcal {V}^{\, t-1} gives
\begin{align}
\mathsf{P}_{U_i, V_j}(u,v) = \mathsf{P}_{U,V}(u,v), \quad \forall \, i \in [s], \; j \in [t], \; u \in \mathcal{U}, \; v \in \mathcal{V}. \tag{4.27}
\end{align}
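Both distributional claims can be checked exactly on a small example. The sketch below (a hypothetical four-edge bipartite graph; all names are illustrative, not from the paper) computes the conditional PMF (4.20), recovers \mathsf{P}_{U_i} via (4.22), and verifies (4.27) for j = 1 , using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product
from math import prod

# Hypothetical bipartite graph: parts U = {0,1,2} and V = {0,1};
# (U, V) is uniformly distributed over its four edges.
edges = [(0, 0), (0, 1), (1, 0), (2, 1)]
U_set = sorted({u for u, _ in edges})
V_set = sorted({v for _, v in edges})
P_UV = {e: Fraction(1, len(edges)) for e in edges}
P_U = {u: sum(p for (a, _), p in P_UV.items() if a == u) for u in U_set}
P_VgU = {(v, u): P_UV.get((u, v), Fraction(0)) / P_U[u]
         for u in U_set for v in V_set}

t = 3

def joint_V(vs):
    # P_{V_1,...,V_t}(v_1,...,v_t), as in (4.21)
    return sum(P_U[u] * prod(P_VgU[(v, u)] for v in vs) for u in U_set)

def cond_U(u, vs):
    # P_{U_i | V_1,...,V_t}(u | v_1,...,v_t), as in (4.20)
    return P_U[u] * prod(P_VgU[(v, u)] for v in vs) / joint_V(vs)

# Item 1: recover P_{U_i} via (4.22) and check that it equals P_U.
P_Ui = {u: sum(cond_U(u, vs) * joint_V(vs)
               for vs in product(V_set, repeat=t)) for u in U_set}
assert P_Ui == P_U

# Item 2: the pairwise joint of (U_i, V_1) matches P_{UV}, as in (4.27).
P_UiV1 = {(u, v): sum(cond_U(u, vs) * joint_V(vs)
                      for vs in product(V_set, repeat=t) if vs[0] == v)
          for u in U_set for v in V_set}
assert all(P_UiV1[(u, v)] == P_UV.get((u, v), Fraction(0))
           for u in U_set for v in V_set)
```

Using `Fraction` rather than floating point makes both checks exact equalities, mirroring the algebraic cancellation of the denominator of (4.20) against (4.21).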
The joint entropy of the random subvector (U_1, V_1, \ldots, V_t) then satisfies
\begin{align}
\text{H}(U_1, V_1, \ldots, V_t) &= \text{H}(U_1) + \sum\limits_{j = 1}^{t} \text{H}(V_j | U_1) \tag{4.28} \\
&= \text{H}(U) + t \, \text{H}(V|U) \tag{4.29} \\
&= t \, \text{H}(U,V) - (t-1) \, \text{H}(U) \tag{4.30} \\
&= t \log(\alpha n_1 n_2) - (t-1) \, \text{H}(U) \tag{4.31} \\
&\geq t \log(\alpha n_1 n_2) - (t-1) \log n_1 \tag{4.32} \\
&= \log(\alpha^t n_1 n_2^t), \tag{4.33}
\end{align}
where (4.28) holds by the chain rule of Shannon entropy, since (by construction) V_1, \ldots, V_t are conditionally independent given U (see (4.18)) and also since (U_1, V_1, \ldots, V_t) \sim (U, V_1, \ldots, V_t) (see (4.26) with i = 1 ); (4.29) relies on (4.27); (4.30) holds by a second application of the chain rule, namely \text{H}(V|U) = \text{H}(U,V) - \text{H}(U) ; (4.31) holds by (4.17); and finally (4.32) follows from the uniform bound, which states that if X is a discrete random variable supported on a finite set \mathcal{S} , then \text{H}(X) \leq \log |\mathcal{S}| . In this case, \text{H}(U) \leq \log |\mathcal{U}| = \log n_1 . Consequently, the joint entropy of the random vector (U_1, \ldots, U_s, V_1, \ldots, V_t) satisfies
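The identity (4.30) and the bound (4.33) can be verified numerically on the same kind of toy example (a hypothetical four-edge bipartite graph; all names are illustrative), computing \text{H}(U_1, V_1, \ldots, V_t) directly from the joint PMF in (4.26):

```python
from itertools import product
from math import log, prod, isclose

# Hypothetical bipartite graph: (U, V) uniform over the edges, so
# H(U,V) = log(alpha * n1 * n2) with alpha = |E| / (n1 * n2), as in (4.17).
edges = [(0, 0), (0, 1), (1, 0), (2, 1)]
U_set = sorted({u for u, _ in edges})
V_set = sorted({v for _, v in edges})
n1, n2 = len(U_set), len(V_set)
alpha = len(edges) / (n1 * n2)
p_uv = 1 / len(edges)
P_U = {u: sum(1 for a, _ in edges if a == u) * p_uv for u in U_set}
P_VgU = {(v, u): (p_uv / P_U[u] if (u, v) in edges else 0.0)
         for u in U_set for v in V_set}

H_U = -sum(p * log(p) for p in P_U.values())
H_UV = log(alpha * n1 * n2)  # entropy of a uniform distribution on the edges

t = 3
# H(U_1, V_1, ..., V_t) computed directly from the joint PMF (4.26).
H_joint = -sum(q * log(q)
               for u in U_set for vs in product(V_set, repeat=t)
               if (q := P_U[u] * prod(P_VgU[(v, u)] for v in vs)) > 0)

assert isclose(H_joint, t * H_UV - (t - 1) * H_U)      # identity (4.30)
assert H_joint >= log(alpha**t * n1 * n2**t) - 1e-12   # bound (4.33)
```

The first assertion checks the chain-rule identity exactly (up to floating-point tolerance); the gap in the second assertion is \;(t-1)(\log n_1 - \text{H}(U)) , which vanishes precisely when U is uniform.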
\begin{align}
\text{H}(U_1, \ldots, U_s, V_1, \ldots, V_t) &= \text{H}(V_1, \ldots, V_t) + \sum\limits_{i = 1}^{s} \text{H}(U_i | V_1, \ldots, V_t) \tag{4.34} \\
&= \text{H}(V_1, \ldots, V_t) + s \, \text{H}(U_1 | V_1, \ldots, V_t) \tag{4.35} \\
&= s \bigl[ \text{H}(V_1, \ldots, V_t) + \text{H}(U_1 | V_1, \ldots, V_t) \bigr] - (s-1) \, \text{H}(V_1, \ldots, V_t) \notag \\
&= s \, \text{H}(U_1, V_1, \ldots, V_t) - (s-1) \, \text{H}(V_1, \ldots, V_t) \tag{4.36} \\
&\geq s \log(\alpha^t n_1 n_2^t) - (s-1) \, \text{H}(V_1, \ldots, V_t) \tag{4.37} \\
&\geq s \log(\alpha^t n_1 n_2^t) - (s-1) \log(n_2^t) \tag{4.38} \\
&= \log(\alpha^{st} n_1^s n_2^t), \tag{4.39}
\end{align}
where (4.34) holds by the chain rule and since (by construction) the random variables U_1, \ldots, U_s are conditionally independent given V_1, \ldots, V_t (see (4.19)); (4.35) holds since, by construction, all the U_i 's ( i \in [s] ) share the same conditional distribution given (V_1, \ldots, V_t) (see (4.20)); (4.36) holds by another use of the chain rule; (4.37) holds by (4.33); and finally (4.38) holds by the uniform bound, which implies in this case that \text{H}(V_1, \ldots, V_t) \leq \log(|\mathcal{V}|^{t}) = \log(n_2^t) .
The vector (U_1, \ldots, U_s, V_1, \ldots, V_t) is in one-to-one correspondence with a homomorphism from \mathsf{K}_{s,t} to \mathsf{G} . To see this, label the vertices of the complete bipartite graph \mathsf{K}_{s,t} by the elements of [s+t] , where the vertices of the partite set of size s are labeled by the elements of [s] , and the vertices of the second partite set (of size t ) are labeled by the elements of \{s+1, \ldots, s+t\} . Then, map each vertex i \in [s] to U_i and each vertex s+j (with j \in [t] ) to V_j , so that every edge \{i, s+j\} \in \mathsf{E}(\mathsf{K}_{s,t}) is mapped to \{U_i, V_j\} . This gives a homomorphism \mathsf{K}_{s,t} \to \mathsf{G} since \{U_i, V_j\} \in \mathsf{E}(\mathsf{G}) holds by construction: by (4.27), (U_i, V_j) \sim (U, V) , where \mathsf{P}_{UV} is uniformly distributed over the edges of the graph \mathsf{G} , \mathsf{P}_{U} is the marginal PMF of U , and \mathsf{P}_{V|U} is the conditional PMF of V given U . By the uniform bound, it then follows that
\begin{align}
\text{H}(U_1, \ldots, U_s, V_1, \ldots, V_t) \leq \log \mathrm{hom}(\mathsf{K}_{s,t}, \mathsf{G}). \tag{4.40}
\end{align}
Combining (4.39) and (4.40) yields
\begin{align}
\mathrm{hom}(\mathsf{K}_{s,t}, \mathsf{G}) \geq \alpha^{st} n_1^s n_2^t. \tag{4.41}
\end{align}
The right-hand side of (4.41) is not necessarily symmetric in n_1 and n_2 (or in s and t ). However, since \mathsf{K}_{s,t} and \mathsf{K}_{t,s} are isomorphic, inequality (4.41) also holds with s and t swapped (equivalently, by exchanging the roles of the two partite sets of \mathsf{G} , with n_1 and n_2 swapped). Taking the larger of the two lower bounds gives
\begin{align}
\mathrm{hom}(\mathsf{K}_{s,t}, \mathsf{G}) &\geq \max \bigl\{ \alpha^{st} n_1^s n_2^t, \; \alpha^{st} n_1^t n_2^s \bigr\} \notag \\
&= \alpha^{st} \, \min\{n_1, n_2\}^{\min\{s,t\}} \, \max\{n_1, n_2\}^{\max\{s,t\}} \notag \\
&= \alpha^{st} \, \min\{n_1, n_2\}^{\min\{s,t\}-\max\{s,t\}} \, (n_1 n_2)^{\max\{s,t\}} \notag \\
&= \alpha^{st} \, \min\{n_1, n_2\}^{-|s-t|} \, (n_1 n_2)^{\max\{s,t\}}, \tag{4.42}
\end{align}
which proves the leftmost inequality in (4.16).
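The lower bound (4.42) can be sanity-checked by brute force on a small graph. The sketch below (a hypothetical four-edge bipartite graph on n_1 + n_2 = 5 vertices; all names are illustrative) counts homomorphisms from \mathsf{K}_{s,t} by enumerating all vertex maps:

```python
from itertools import product

def hom_count(H_edges, H_n, G_adj, G_n):
    # hom(H, G): number of maps [H_n] -> [G_n] sending every edge of H
    # to an edge of G (brute-force enumeration; fine for tiny graphs).
    return sum(all(G_adj[f[a]][f[b]] for a, b in H_edges)
               for f in product(range(G_n), repeat=H_n))

# Hypothetical bipartite G with parts of sizes n1 = 3 and n2 = 2, encoded
# as a simple graph on n1 + n2 vertices (the V-side is shifted by n1).
n1, n2 = 3, 2
bip_edges = [(0, 0), (0, 1), (1, 0), (2, 1)]
N = n1 + n2
adj = [[False] * N for _ in range(N)]
for u, v in bip_edges:
    adj[u][n1 + v] = adj[n1 + v][u] = True
alpha = len(bip_edges) / (n1 * n2)

for s, t in [(1, 1), (1, 2), (2, 2), (2, 3)]:
    # K_{s,t}: left vertices 0..s-1, right vertices s..s+t-1
    K_edges = [(i, s + j) for i in range(s) for j in range(t)]
    h = hom_count(K_edges, s + t, adj, N)
    lower = alpha**(s * t) * max(n1**s * n2**t, n1**t * n2**s)  # (4.42)
    assert h >= lower - 1e-9
```

For s = t = 1 , the count is 2|\mathsf{E}(\mathsf{G})| (each edge in both orientations), which matches the upper bound of Corollary 3 with equality.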
Setting s = t in Proposition 11 gives the following.
Corollary 3. Let \mathsf{{G}} be a bipartite graph without isolated vertices, let its partite vertex sets be of sizes n_1 and n_2 , and let its number of edges be equal to \alpha n_1 n_2 with \alpha \in (0, 1] . Then,
\begin{align}
\alpha^{s^2} (n_1 n_2)^s \leq \mathrm{hom}(\mathsf{K}_{s,s}, \mathsf{G}) \leq (2 \alpha)^s (n_1 n_2)^s. \tag{4.43}
\end{align}
Consequently, for a fixed \alpha \in (0, 1) , it follows from (4.43) that the number of homomorphisms from the complete bipartite graph \mathsf{K}_{{s}, {s}} to \mathsf{{G}} scales like (n_1 n_2)^s .
The author declares that no Artificial Intelligence (AI) tools were utilized in creating this article.
The author acknowledges the timely and helpful reports of the two referees, and a stimulating discussion with Yuval Peled during the author's seminar talk on the subject at the Einstein Institute of Mathematics, Hebrew University of Jerusalem. The author also appreciates the hospitality provided during the seminar, which was organized by Yuval.
The author declares no conflicts of interest.
Prof. Igal Sason is the Guest Editor of special issue “Mathematical Foundations of Information Theory” for AIMS Mathematics. Prof. Igal Sason was not involved in the editorial review and the decision to publish this article.