
All graphs considered in this study are simple, finite, and connected. Let the graph G consist of the vertex set V_G and the edge set E_G, i.e., G=(V_G,E_G). For further graph notation, readers may refer to [1].
The adjacency matrix A(G)=(a_{ij}) of G is the (0,1)-matrix in which a_{ij}=1 if and only if the vertices i and j are adjacent in G. The diagonal degree matrix of G is
D_G=\operatorname{diag}(d_1,d_2,\cdots,d_n),
where d_i represents the degree of vertex i in G. The difference between the degree matrix D_G and the adjacency matrix A_G of G gives the Laplacian matrix, L_G=D_G-A_G. The normalized Laplacian [2] is defined as
\mathcal{L}(G)=I-D(G)^{\frac{1}{2}}\bigl(D(G)^{-1}A(G)\bigr)D(G)^{-\frac{1}{2}}=D(G)^{-\frac{1}{2}}L(G)D(G)^{-\frac{1}{2}}.
The (m,n)-th entry of \mathcal{L}(G) is given by
(\mathcal{L}(G))_{mn}=\begin{cases}1, & m=n;\\ -\frac{1}{\sqrt{d_md_n}}, & m\neq n,\ v_m\ \text{is adjacent to}\ v_n;\\ 0, & \text{otherwise}.\end{cases} \qquad (1.1)
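As a quick numerical illustration of Eq (1.1) (ours, not part of the original paper), the following Python/NumPy sketch builds the normalized Laplacian of an arbitrary small test graph, here the 4-cycle C_4, directly from its adjacency and degree matrices:

import numpy as np

# Adjacency matrix of the 4-cycle C4 (an arbitrary test graph).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

d = A.sum(axis=1)                        # vertex degrees d_1, ..., d_n
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D(G)^{-1/2}

# Normalized Laplacian: 1 on the diagonal, -1/sqrt(d_m d_n) for adjacent
# vertices v_m and v_n, and 0 otherwise, exactly as in Eq (1.1).
L_norm = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt
print(np.round(L_norm, 4))
print(np.round(np.linalg.eigvalsh(L_norm), 4))   # eigenvalues lie in [0, 2]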
The distance between vertices v_i and v_j, denoted d_{ij}=d_G(v_i,v_j), is the length of a shortest path connecting them. The Wiener index [3,4], first introduced in 1947, is defined as
W(G)=\sum_{i<j}d_{ij}.
For further information on the Wiener index, please refer to [5,6,7,8,9].
The Gutman index of a simple graph G was introduced in [10] and is defined as
\operatorname{Gut}(G)=\sum_{i<j}d_id_jd_{ij},
where d_i denotes the degree of vertex v_i.
The Kirchhoff index [11,12] characterizes a graph G by summing the resistance distances r_{ij} between every pair of vertices, in analogy with the Wiener index, namely
Kf(G)=\sum_{i<j}r_{ij}.
The multiplicative degree-Kirchhoff index, introduced by Chen and Zhang [13] in 2007, is an extension of the traditional Kirchhoff index and is expressed as
Kf^{*}(G)=\sum_{i<j}d_id_jr_{ij}.
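To make these indices concrete, the following NumPy sketch (ours; the 4-vertex test graph is arbitrary) computes W(G) and Gut(G) from shortest-path distances, and Kf(G) and Kf*(G) from resistance distances obtained through the Moore-Penrose pseudoinverse of the Laplacian:

import numpy as np
from itertools import combinations

# Arbitrary small test graph: a 4-cycle with one chord.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
n = A.shape[0]
d = A.sum(axis=1)

# Shortest-path distances d_ij (Floyd-Warshall) for the Wiener and Gutman indices.
dist = np.where(A > 0, 1.0, np.inf)
np.fill_diagonal(dist, 0.0)
for k in range(n):
    dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
W   = sum(dist[i, j]                 for i, j in combinations(range(n), 2))
Gut = sum(d[i] * d[j] * dist[i, j]   for i, j in combinations(range(n), 2))

# Resistance distances r_ij = L+_ii + L+_jj - 2 L+_ij via the Laplacian pseudoinverse.
L_pinv = np.linalg.pinv(np.diag(d) - A)
r = lambda i, j: L_pinv[i, i] + L_pinv[j, j] - 2.0 * L_pinv[i, j]
Kf      = sum(r(i, j)                for i, j in combinations(range(n), 2))
Kf_star = sum(d[i] * d[j] * r(i, j)  for i, j in combinations(range(n), 2))

print(W, Gut, round(Kf, 4), round(Kf_star, 4))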
Techniques for the Kirchhoff index and the multiplicative degree-Kirchhoff index can be found in [14,15,16,17,18]. The multiplicative degree-Kirchhoff index has garnered significant attention due to its remarkable contributions in academia and its practical applications in computer network science, epidemiology, social economics, and other fields. Further research results on the Kirchhoff index and the multiplicative degree-Kirchhoff index can be explored through [19,20,21,22,23].
The number of spanning trees of a graph G, also known as its complexity and denoted τ(G), is the number of spanning subgraphs of G that are trees, i.e., connected acyclic subgraphs containing all vertices of G. This measure serves as a crucial indicator of network stability and plays a significant role in assessing the structural characteristics of graphs. For further insights into the number of spanning trees, interested readers are encouraged to consult [24,25,26].
With the rapid advancement of scientific research and the successful application of topology in practical scenarios, topological indices have gained increasing recognition worldwide. The problem of computing the Wiener index of phenylenes was effectively resolved by Pavlović and Gutman [27]. Chen and Zhang [28] developed a precise formula for the Wiener index of random phenylene chains. Additionally, Liu et al. [29] determined both the degree-Kirchhoff index and the number of spanning trees of the dicyclobutadieno derivative L_n of [n]phenylenes.
Given two graphs S and K, the symbol S\boxtimes K denotes their strong product, whose vertex set is V(S)\times V(K); readers may refer to [30] for more comprehensive definitions and concepts. Recently, Pan et al. [25] utilized resistance distances in the strong prisms formed from P_n and C_n to determine their Kirchhoff indices. Similarly, Li et al. [31] derived graph invariants and the number of spanning trees of the strong prism of the star S_n. Motivated by [30,31,32,33], we consider the pentagonal network R_n and its strong product P2n. The pentagonal network consists of numerous adjacent pentagons and quadrilaterals, with each quadrilateral adjacent to at most two non-adjacent pentagons, as shown in Figure 1. The graph P2n is the strong product of R_n, as depicted in Figure 2. Obviously,
|V(P_{2n})|=14n\quad\text{and}\quad |E(P_{2n})|=47n-8.
In this paper, we focus on the strong product of pentagonal networks, specifically examining the graph P2n with n≥1. The subsequent sections are organized as follows: Section 2 provides a comprehensive review of relevant research materials, presenting illustrations, concepts and lemmas. In Section 3, we derive the normalized Laplacian spectrum and present an explicit closed formula for the multiplicative degree-Kirchhoff index. Additionally, we calculate the complexity of P2n. In Section 4, we conclude the paper.
In this section, let R_n represent the pentagonal-quadrilateral network, as illustrated in Figure 1. P2n is composed of R_n and its copy R'_n, placed one in front of the other, as shown in Figure 2. Moreover,
\Phi_A(x)=\det(xI_n-A)
represents the characteristic polynomial of matrix A.
The fact that
\pi=(1_o,2_o,\cdots,n_o)(1,1')(2,2')\cdots\bigl(3n,(3n)'\bigr)
is an automorphism deserves attention. Let
V_1=\{1_o,2_o,\cdots,n_o,u_1,u_2,\cdots,u_{3n},v_1,\cdots,v_{3n}\},
V_2=\{1'_o,2'_o,\cdots,n'_o,u'_1,u'_2,\cdots,u'_{3n},v'_1,\cdots,v'_{3n}\},
|V(P_{2n})|=14n\quad\text{and}\quad |E(P_{2n})|=47n-8.
Subsequently, the normalized Laplacian matrix can be represented as a block matrix, that is
\mathcal{L}(P_{2n})=\begin{pmatrix}\mathcal{L}_{V_1V_1} & \mathcal{L}_{V_1V_2}\\ \mathcal{L}_{V_2V_1} & \mathcal{L}_{V_2V_2}\end{pmatrix},
in which
\mathcal{L}_{V_1V_1}=\mathcal{L}_{V_2V_2},\qquad \mathcal{L}_{V_1V_2}=\mathcal{L}_{V_2V_1}.
Let
W=\begin{pmatrix}\frac{1}{\sqrt{2}}I_{6n} & \frac{1}{\sqrt{2}}I_{6n}\\ \frac{1}{\sqrt{2}}I_{6n} & -\frac{1}{\sqrt{2}}I_{6n}\end{pmatrix},
then,
W\mathcal{L}(P_{2n})W'=\begin{pmatrix}\mathcal{L}_A & 0\\ 0 & \mathcal{L}_S\end{pmatrix},
where
\mathcal{L}_A=\mathcal{L}_{V_1V_1}+\mathcal{L}_{V_1V_2}\quad\text{and}\quad \mathcal{L}_S=\mathcal{L}_{V_1V_1}-\mathcal{L}_{V_1V_2}.
Observe that W′ and W are transpose matrices of each other.
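This folding step is not specific to P2n: any matrix with the symmetric 2x2 block pattern above is block-diagonalized by W. A minimal NumPy sketch with random toy blocks (ours, purely illustrative; the block size 3 stands in for 6n) confirms the identity numerically:

import numpy as np

k = 3                                        # toy block size (6n in the paper)
rng = np.random.default_rng(0)
B = rng.random((k, k)); B = (B + B.T) / 2    # plays the role of L_{V1V1}
C = rng.random((k, k)); C = (C + C.T) / 2    # plays the role of L_{V1V2}
M = np.block([[B, C], [C, B]])

I = np.eye(k)
W = np.block([[I, I], [I, -I]]) / np.sqrt(2)  # orthogonal, so W W' = identity

T = W @ M @ W.T                              # equals diag(B + C, B - C)
print(np.allclose(T[:k, :k], B + C))         # True -> the L_A block
print(np.allclose(T[k:, k:], B - C))         # True -> the L_S block
print(np.allclose(T[:k, k:], 0))             # True -> off-diagonal blocks vanish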
The characteristic polynomial of a matrix R is denoted as
\Phi(R):=\det(xI-R).
The decomposition theorem for P2n is obtained in a manner similar to Pan and Li [34]; thus, we omit the proof and state it as follows:
Lemma 2.1. [35] Let \mathcal{L}_A and \mathcal{L}_S be determined as described above. Then
\Phi_{\mathcal{L}(P_{2n})}(x)=\Phi_{\mathcal{L}_A}(x)\cdot\Phi_{\mathcal{L}_S}(x).
Lemma 2.2. [13] Let G be an undirected connected graph with n vertices and m edges, and let 0=\lambda_1<\lambda_2\le\cdots\le\lambda_n be the eigenvalues of \mathcal{L}(G). Then
Kf^{*}(G)=2m\sum_{k=2}^{n}\frac{1}{\lambda_k}.
Lemma 2.3. [2] The number of spanning trees of G, also referred to as the complexity of G, is a fundamental measure in graph theory. It is given by
\tau(G)=\frac{1}{2m}\prod_{i=1}^{n}d_i\cdot\prod_{j=2}^{n}\lambda_j.
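Both lemmas are easy to verify numerically on any small connected graph. The sketch below (ours; it reuses the arbitrary 4-vertex test graph from above) checks Lemma 2.2 against the resistance-distance definition of Kf*(G) and Lemma 2.3 against the matrix-tree theorem:

import numpy as np
from itertools import combinations

# Arbitrary small connected test graph: a 4-cycle with one chord.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
n, m = A.shape[0], int(A.sum() / 2)
d = A.sum(axis=1)
L = np.diag(d) - A
L_norm = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)
lam = np.sort(np.linalg.eigvalsh(L_norm))        # 0 = lam_1 < lam_2 <= ... <= lam_n

# Lemma 2.2: Kf*(G) = 2m * sum_{k>=2} 1/lam_k versus the resistance-distance definition.
L_pinv = np.linalg.pinv(L)
r = lambda i, j: L_pinv[i, i] + L_pinv[j, j] - 2.0 * L_pinv[i, j]
Kf_star_def = sum(d[i] * d[j] * r(i, j) for i, j in combinations(range(n), 2))
Kf_star_lem = 2 * m * np.sum(1.0 / lam[1:])
print(np.isclose(Kf_star_def, Kf_star_lem))      # True

# Lemma 2.3: tau(G) = (1/2m) * prod_i d_i * prod_{j>=2} lam_j versus the matrix-tree theorem.
tau_lem = np.prod(d) * np.prod(lam[1:]) / (2 * m)
tau_mtt = np.linalg.det(np.delete(np.delete(L, 0, axis=0), 0, axis=1))   # any cofactor of L
print(np.isclose(tau_lem, tau_mtt))              # True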
In this section, we derive an explicit analytical expression for the multiplicative degree-Kirchhoff index of P2n from its normalized Laplacian matrix, and we also determine the complexity (the number of spanning trees) of P2n. Employing the normalized Laplacian, we obtain the following matrices of order 6n:
\mathcal{L}_{V_1V_1}=\begin{pmatrix}
1 & -\frac{1}{\sqrt{35}} & 0 & \cdots & 0 & 0 & -\frac{1}{5} & 0 & 0 & \cdots & 0 & 0\\
-\frac{1}{\sqrt{35}} & 1 & -\frac{1}{7} & \cdots & 0 & 0 & 0 & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
0 & -\frac{1}{7} & 1 & \cdots & 0 & 0 & 0 & 0 & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 1 & -\frac{1}{\sqrt{35}} & 0 & 0 & 0 & \cdots & -\frac{1}{7} & 0\\
0 & 0 & 0 & \cdots & -\frac{1}{\sqrt{35}} & 1 & 0 & 0 & 0 & \cdots & 0 & -\frac{1}{5}\\
-\frac{1}{5} & 0 & 0 & \cdots & 0 & 0 & 1 & -\frac{1}{\sqrt{35}} & 0 & \cdots & 0 & 0\\
0 & -\frac{1}{7} & 0 & \cdots & 0 & 0 & -\frac{1}{\sqrt{35}} & 1 & -\frac{1}{7} & \cdots & 0 & 0\\
0 & 0 & -\frac{1}{7} & \cdots & 0 & 0 & 0 & -\frac{1}{7} & 1 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & 0 & 0 & 0 & 0 & \cdots & 1 & -\frac{1}{\sqrt{35}}\\
0 & 0 & 0 & \cdots & 0 & -\frac{1}{5} & 0 & 0 & 0 & \cdots & -\frac{1}{\sqrt{35}} & 1
\end{pmatrix}
and
\mathcal{L}_{V_1V_2}=\begin{pmatrix}
-\frac{1}{5} & -\frac{1}{\sqrt{35}} & 0 & \cdots & 0 & 0 & -\frac{1}{5} & 0 & 0 & \cdots & 0 & 0\\
-\frac{1}{\sqrt{35}} & -\frac{1}{7} & -\frac{1}{7} & \cdots & 0 & 0 & 0 & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
0 & -\frac{1}{7} & -\frac{1}{7} & \cdots & 0 & 0 & 0 & 0 & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & -\frac{1}{\sqrt{35}} & 0 & 0 & 0 & \cdots & -\frac{1}{7} & 0\\
0 & 0 & 0 & \cdots & -\frac{1}{\sqrt{35}} & -\frac{1}{5} & 0 & 0 & 0 & \cdots & 0 & -\frac{1}{5}\\
-\frac{1}{5} & 0 & 0 & \cdots & 0 & 0 & -\frac{1}{5} & -\frac{1}{\sqrt{35}} & 0 & \cdots & 0 & 0\\
0 & -\frac{1}{7} & 0 & \cdots & 0 & 0 & -\frac{1}{\sqrt{35}} & -\frac{1}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
0 & 0 & -\frac{1}{7} & \cdots & 0 & 0 & 0 & -\frac{1}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & 0 & 0 & 0 & 0 & \cdots & -\frac{1}{7} & -\frac{1}{\sqrt{35}}\\
0 & 0 & 0 & \cdots & 0 & -\frac{1}{5} & 0 & 0 & 0 & \cdots & -\frac{1}{\sqrt{35}} & -\frac{1}{5}
\end{pmatrix}.
Due to
\mathcal{L}_A=\mathcal{L}_{V_1V_1}(P_{2n})+\mathcal{L}_{V_1V_2}(P_{2n})
and
\mathcal{L}_S=\mathcal{L}_{V_1V_1}(P_{2n})-\mathcal{L}_{V_1V_2}(P_{2n}),
it can be convincingly argued that
\mathcal{L}_A=2\begin{pmatrix}
\frac{2}{5} & -\frac{1}{\sqrt{35}} & 0 & \cdots & 0 & 0 & -\frac{1}{5} & 0 & 0 & \cdots & 0 & 0\\
-\frac{1}{\sqrt{35}} & \frac{3}{7} & -\frac{1}{7} & \cdots & 0 & 0 & 0 & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
0 & -\frac{1}{7} & \frac{3}{7} & \cdots & 0 & 0 & 0 & 0 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{3}{7} & -\frac{1}{\sqrt{35}} & 0 & 0 & 0 & \cdots & -\frac{1}{7} & 0\\
0 & 0 & 0 & \cdots & -\frac{1}{\sqrt{35}} & \frac{2}{5} & 0 & 0 & 0 & \cdots & 0 & -\frac{1}{5}\\
-\frac{1}{5} & 0 & 0 & \cdots & 0 & 0 & \frac{2}{5} & -\frac{1}{\sqrt{35}} & 0 & \cdots & 0 & 0\\
0 & -\frac{1}{7} & 0 & \cdots & 0 & 0 & -\frac{1}{\sqrt{35}} & \frac{3}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
0 & 0 & -\frac{1}{7} & \cdots & 0 & 0 & 0 & -\frac{1}{7} & \frac{3}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & 0 & 0 & 0 & 0 & \cdots & \frac{3}{7} & -\frac{1}{\sqrt{35}}\\
0 & 0 & 0 & \cdots & 0 & -\frac{1}{5} & 0 & 0 & 0 & \cdots & -\frac{1}{\sqrt{35}} & \frac{2}{5}
\end{pmatrix}
and
\mathcal{L}_S=\operatorname{diag}\Bigl(\frac{6}{5},\frac{8}{7},\frac{8}{7},\cdots,\frac{8}{7},\frac{6}{5},\frac{6}{5},\frac{8}{7},\frac{8}{7},\cdots,\frac{8}{7},\frac{6}{5}\Bigr).
Utilizing Lemma 2.1, the normalized Laplacian spectrum of P2n consists of the eigenvalues of \mathcal{L}_A and \mathcal{L}_S. It is easy to see that \mathcal{L}_S has the eigenvalues \frac{6}{5} and \frac{8}{7} with multiplicities 4 and 6n-4, respectively.
Let
M=\begin{pmatrix}
\frac{2}{5} & -\frac{1}{\sqrt{35}} & 0 & 0 & \cdots & 0 & 0 & 0\\
-\frac{1}{\sqrt{35}} & \frac{3}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0 & 0\\
0 & -\frac{1}{7} & \frac{3}{7} & -\frac{1}{7} & \cdots & 0 & 0 & 0\\
0 & 0 & -\frac{1}{7} & \frac{3}{7} & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & 0 & \cdots & \frac{3}{7} & -\frac{1}{7} & 0\\
0 & 0 & 0 & 0 & \cdots & -\frac{1}{7} & \frac{3}{7} & -\frac{1}{\sqrt{35}}\\
0 & 0 & 0 & 0 & \cdots & 0 & -\frac{1}{\sqrt{35}} & \frac{2}{5}
\end{pmatrix}_{(3n)\times(3n)}
and
N=\operatorname{diag}\Bigl(-\frac{1}{5},-\frac{1}{7},-\frac{1}{7},-\frac{1}{7},-\frac{1}{7},\cdots,-\frac{1}{7},-\frac{1}{7},-\frac{1}{5}\Bigr),
where the matrices M and N are both of order 3n.
The matrices M and N together form the block representation of \frac{1}{2}\mathcal{L}_A, namely
\frac{1}{2}\mathcal{L}_A=\begin{pmatrix}M & N\\ N & M\end{pmatrix}.
Suppose that
W=\begin{pmatrix}\frac{1}{\sqrt{2}}I_{3n} & \frac{1}{\sqrt{2}}I_{3n}\\ \frac{1}{\sqrt{2}}I_{3n} & -\frac{1}{\sqrt{2}}I_{3n}\end{pmatrix}
is a block matrix. Hence, we can obtain
W\Bigl(\frac{1}{2}\mathcal{L}_A\Bigr)W'=\begin{pmatrix}M+N & 0\\ 0 & M-N\end{pmatrix}.
Let J=M+N and K=M-N. Then,
J=\begin{pmatrix}
\frac{1}{5} & -\frac{1}{\sqrt{35}} & 0 & 0 & \cdots & 0 & 0 & 0\\
-\frac{1}{\sqrt{35}} & \frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0 & 0\\
0 & -\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0 & 0\\
0 & 0 & -\frac{1}{7} & \frac{2}{7} & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & 0 & \cdots & \frac{2}{7} & -\frac{1}{7} & 0\\
0 & 0 & 0 & 0 & \cdots & -\frac{1}{7} & \frac{4}{7} & -\frac{1}{\sqrt{35}}\\
0 & 0 & 0 & 0 & \cdots & 0 & -\frac{1}{\sqrt{35}} & \frac{3}{5}
\end{pmatrix}_{(3n)\times(3n)}
and
K=\begin{pmatrix}
\frac{3}{5} & -\frac{1}{\sqrt{35}} & 0 & 0 & \cdots & 0 & 0 & 0\\
-\frac{1}{\sqrt{35}} & \frac{4}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0 & 0\\
0 & -\frac{1}{7} & \frac{4}{7} & -\frac{1}{7} & \cdots & 0 & 0 & 0\\
0 & 0 & -\frac{1}{7} & \frac{4}{7} & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & 0 & \cdots & \frac{4}{7} & -\frac{1}{7} & 0\\
0 & 0 & 0 & 0 & \cdots & -\frac{1}{7} & \frac{4}{7} & -\frac{1}{\sqrt{35}}\\
0 & 0 & 0 & 0 & \cdots & 0 & -\frac{1}{\sqrt{35}} & \frac{3}{5}
\end{pmatrix}_{(3n)\times(3n)},
in which the diagonal elements are
\Bigl(\frac{3}{5},\frac{4}{7},\frac{4}{7},\frac{4}{7},\cdots,\frac{4}{7},\frac{4}{7},\frac{3}{5}\Bigr).
Based on Lemma 2.1, the eigenvalues of \frac{1}{2}\mathcal{L}_A are identical to those of J and K. Assume that the eigenvalues of J and K are \sigma_i and \varsigma_j (i,j=1,2,\cdots,3n), respectively, with
\sigma_1\le\sigma_2\le\sigma_3\le\cdots\le\sigma_{3n},\qquad \varsigma_1\le\varsigma_2\le\varsigma_3\le\cdots\le\varsigma_{3n}.
Clearly, \sigma_1\ge 0 and \varsigma_1\ge 0. In addition, the eigenvalues of \mathcal{L}_A are \{2\sigma_1,2\sigma_2,\cdots,2\sigma_{3n},2\varsigma_1,2\varsigma_2,\cdots,2\varsigma_{3n}\}, which together with the eigenvalues of \mathcal{L}_S form the normalized Laplacian spectrum of P2n. Noting that
|E(P_{2n})|=47n-8,
we can obtain Lemma 3.1 according to Lemma 2.2.
Lemma 3.1. Assume that P2n is the strong product of the pentagonal network. Then,
Kf^{*}(P_{2n})=2(47n-8)\Bigl(2\times\frac{5}{6}+(6n-2)\frac{7}{8}+\frac{1}{2}\sum_{i=2}^{3n}\frac{1}{\sigma_i}+\frac{1}{2}\sum_{j=1}^{3n}\frac{1}{\varsigma_j}\Bigr)=(47n-8)\Bigl(\frac{63n-1}{6}+\sum_{i=2}^{3n}\frac{1}{\sigma_i}+\sum_{j=1}^{3n}\frac{1}{\varsigma_j}\Bigr).
Subsequently, we split the computation of the above expression into two parts, and first calculate \sum_{i=2}^{3n}\frac{1}{\sigma_i}.
Lemma 3.2. Suppose that \sigma_i\ (i=1,2,\cdots,3n) is defined as described previously. Then
\sum_{i=2}^{3n}\frac{1}{\sigma_i}=\frac{1035n^3+142n^2+617n}{2(81n+490)}.
Proof. Suppose that
\Phi(J)=x^{3n}+a_1x^{3n-1}+\cdots+a_{3n-2}x^{2}+a_{3n-1}x=x\bigl(x^{3n-1}+a_1x^{3n-2}+\cdots+a_{3n-2}x+a_{3n-1}\bigr).
Then \sigma_2,\sigma_3,\cdots,\sigma_{3n} satisfy the equation
x^{3n-1}+a_1x^{3n-2}+\cdots+a_{3n-2}x+a_{3n-1}=0,
and we observe that \frac{1}{\sigma_2},\frac{1}{\sigma_3},\cdots,\frac{1}{\sigma_{3n}} are the roots of the equation
a_{3n-1}x^{3n-1}+a_{3n-2}x^{3n-2}+\cdots+a_1x+1=0.
By Vieta's theorem, one has
\sum_{i=2}^{3n}\frac{1}{\sigma_i}=\frac{(-1)^{3n-2}a_{3n-2}}{(-1)^{3n-1}a_{3n-1}}. \qquad (3.1)
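Equation (3.1) is simply Vieta's formulas applied to the polynomial whose nonzero roots are \sigma_2,\cdots,\sigma_{3n}: the sum of the reciprocals of the nonzero roots equals the ratio of the two lowest-order nonzero coefficients. A small numerical illustration (ours, with arbitrary made-up roots unrelated to J):

import numpy as np

# Polynomial x*(x - 2)*(x - 3)*(x - 5): one zero root and nonzero roots 2, 3, 5.
roots = [0.0, 2.0, 3.0, 5.0]
a = np.poly(roots)                    # coefficients [1, a_1, ..., a_{n-1}, a_n], a_n = 0

# Sum of reciprocals of the nonzero roots ...
recip_sum = sum(1.0 / x for x in roots if x != 0)

# ... equals (-1)^{n-2} a_{n-2} / ((-1)^{n-1} a_{n-1}), i.e. -a_{n-2}/a_{n-1}.
vieta_ratio = -a[-3] / a[-2]
print(recip_sum, vieta_ratio)         # both equal 1/2 + 1/3 + 1/5 = 31/30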
For each i from 1 to 3n+1, let J_i denote the i-th order leading principal submatrix of J and let j_i=\det J_i. We now derive a formula for j_i, which will be used to calculate (-1)^{3n-2}a_{3n-2} and (-1)^{3n-1}a_{3n-1}. One has
j_1=\frac{1}{5},\ j_2=\frac{1}{35},\ j_3=\frac{1}{245},\ j_4=\frac{1}{1715},\ j_5=\frac{1}{12005},\ j_6=\frac{1}{84035},\ j_7=\frac{1}{588245},\ j_8=\frac{1}{4117715},
and
\begin{cases}
j_{3i}=\frac{2}{7}j_{3i-1}-\frac{1}{49}j_{3i-2}, & 1\le i\le n;\\
j_{3i+1}=\frac{2}{7}j_{3i}-\frac{1}{49}j_{3i-1}, & 0\le i\le n-1;\\
j_{3i+2}=\frac{2}{7}j_{3i+1}-\frac{1}{49}j_{3i}, & 0\le i\le n-1.
\end{cases}
Through a straightforward computation, one can derive the following general formulas:
\begin{cases}
j_{3i}=\frac{7}{5}\cdot\bigl(\frac{1}{343}\bigr)^{i}, & 1\le i\le n;\\
j_{3i+1}=\frac{1}{5}\cdot\bigl(\frac{1}{343}\bigr)^{i}, & 0\le i\le n-1;\\
j_{3i+2}=\frac{1}{35}\cdot\bigl(\frac{1}{343}\bigr)^{i}, & 0\le i\le n-1.
\end{cases} \qquad (3.2)
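The closed formulas in Eq (3.2) can be checked against the recurrence with exact rational arithmetic. A short sketch (ours; it only assumes the initial minors j_1 = 1/5, j_2 = 1/35 and the three-term recurrence stated above, which uses the same coefficients in all three cases):

from fractions import Fraction as F

# Initial leading principal minors of J, as computed in the text.
j = {1: F(1, 5), 2: F(1, 35)}

# Three-term recurrence from the text: j_m = (2/7) j_{m-1} - (1/49) j_{m-2}.
for m in range(3, 25):
    j[m] = F(2, 7) * j[m - 1] - F(1, 49) * j[m - 2]

# Closed forms of Eq (3.2), split by the residue of the index modulo 3.
closed = lambda m: (F(7, 5) * F(1, 343) ** (m // 3) if m % 3 == 0 else
                    F(1, 5) * F(1, 343) ** (m // 3) if m % 3 == 1 else
                    F(1, 35) * F(1, 343) ** (m // 3))

print(all(j[m] == closed(m) for m in range(1, 25)))   # True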
According to Eq (3.1), we treat the numerator and the denominator separately in the two facts below. For convenience, we write the diagonal entries of J as l_{ii}.
Fact 3.3.
(-1)^{3n-1}a_{3n-1}=\frac{490+81n}{25}\Bigl(\frac{1}{343}\Bigr)^{n}.
Proof. Note that (-1)^{3n-1}a_{3n-1} is the sum of all principal minors of J of order 3n-1; since J is a symmetric matrix, we obtain
(-1)^{3n-1}a_{3n-1}=\sum_{i=1}^{3n}\det J[i]=\sum_{i=1}^{3n}\det\begin{pmatrix}J_{i-1} & 0\\ 0 & J_{3n-i}\end{pmatrix}=\sum_{i=1}^{3n}j_{i-1}\cdot j_{3n-i}, \qquad (3.3)
where
J_{3n-i}=\begin{pmatrix}l_{i+1,i+1} & \cdots & 0 & 0\\ \vdots & \ddots & \vdots & \vdots\\ 0 & \cdots & l_{3n-1,3n-1} & -\frac{1}{\sqrt{35}}\\ 0 & \cdots & -\frac{1}{\sqrt{35}} & l_{3n,3n}\end{pmatrix}.
By Eqs (3.2) and (3.3), we have
(-1)^{3n-1}a_{3n-1}=2j_{3n+1}+\sum_{l=1}^{n}j_{3(l-1)+2}\cdot j_{3(n-l)+2}+\sum_{l=1}^{n}j_{3l}\cdot j_{3(n-l)+1}+\sum_{l=0}^{n-1}j_{3l+1}\cdot j_{3(n-l)}=\frac{98}{5}\cdot\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{343n}{35^{2}}\cdot\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{7n}{25}\cdot\Bigl(\frac{1}{343}\Bigr)^{n}+n\cdot\Bigl(\frac{1}{343}\Bigr)^{n}=\frac{490+81n}{25}\Bigl(\frac{1}{343}\Bigr)^{n}.
This completes the proof.
Fact 3.4.
(-1)^{3n-2}a_{3n-2}=\frac{1035n^3+142n^2-617n}{50}\Bigl(\frac{1}{343}\Bigr)^{n}.
Proof. Note that (-1)^{3n-2}a_{3n-2} is the sum of all principal minors of J of order 3n-2. Hence,
(-1)^{3n-2}a_{3n-2}=\sum_{1\le i<j\le 3n}\begin{vmatrix}J_{i-1} & 0 & 0\\ 0 & Z & 0\\ 0 & 0 & J_{3n-j}\end{vmatrix}, \qquad 1\le i<j\le 3n-2,
where
Z=\begin{pmatrix}l_{i+1,i+1} & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 & \cdots & l_{j-1,j-1}\end{pmatrix}
and
J_{3n-j}=\begin{pmatrix}l_{j+1,j+1} & \cdots & 0 & 0\\ \vdots & \ddots & \vdots & \vdots\\ 0 & \cdots & l_{3n-1,3n-1} & -\frac{1}{\sqrt{35}}\\ 0 & \cdots & -\frac{1}{\sqrt{35}} & l_{3n,3n}\end{pmatrix}.
Note that
(-1)^{3n-2}a_{3n-2}=\sum_{1\le i<j\le 3n}\det J_{i-1}\cdot\det Z\cdot\det J_{3n-j}=\sum_{1\le i<j\le 3n}\det Z\cdot j_{i-1}\cdot j_{3n-j}. \qquad (3.4)
According to Eq (3.4), the determinant of Z depends on the values of i and j, which we parameterize below by s and t. Consequently, we distinguish six cases.
Case 1. i=3s, j=3t for 1≤s<t≤n, and
\det Z_1=\begin{vmatrix}
\frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
-\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{2}{7} & -\frac{1}{7}\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & \frac{2}{7}
\end{vmatrix}_{(3s-3t-1)}=21(s-t)\Bigl(\frac{1}{343}\Bigr)^{s-t}.
Case 2. i=3s, j=3t+1 for 1≤s≤t≤n, and
\det Z_2=\begin{vmatrix}
\frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
-\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{2}{7} & -\frac{1}{7}\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & \frac{2}{7}
\end{vmatrix}_{(3s-3t)}=\bigl[3(s-t)+1\bigr]\Bigl(\frac{1}{343}\Bigr)^{s-t},
or i=3s+2, j=3t for 0≤s<t≤n, and
\det Z_3=\begin{vmatrix}
\frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
-\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{2}{7} & -\frac{1}{7}\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & \frac{2}{7}
\end{vmatrix}_{(3s-3t-3)}=(3s-3t-2)\Bigl(\frac{1}{245}\Bigr)^{s-t-1}.
Case 3. In the same way, i=3s, j=3t+2 for 1≤s≤t≤n, and
\det Z_4=\begin{vmatrix}
\frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
-\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{2}{7} & -\frac{1}{\sqrt{35}}\\
0 & 0 & 0 & \cdots & -\frac{1}{\sqrt{35}} & \frac{1}{5}
\end{vmatrix}_{(3s-3t+1)}=49(3s-3t+2)\Bigl(\frac{1}{343}\Bigr)^{s-t+1},
or i=3s+1, j=3t for 0≤s<t≤n, and
\det Z_5=\begin{vmatrix}
\frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
-\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{2}{7} & -\frac{1}{7}\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & \frac{2}{7}
\end{vmatrix}_{(3s-3t-2)}=49(3s-3t-1)\Bigl(\frac{1}{343}\Bigr)^{s-t-1}.
Case 4. Similarly, i=3s+1, j=3t+1 for 0≤s<t≤n, and
\det Z_6=\begin{vmatrix}
\frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
-\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{2}{7} & -\frac{1}{7}\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & \frac{2}{7}
\end{vmatrix}_{(3s-3t-1)}=21(s-t)\Bigl(\frac{1}{343}\Bigr)^{s-t},
or i=3s+2, j=3t+2 for 0\le s<t\le n, and
\det Z_7=\begin{vmatrix}
\frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
-\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{2}{7} & -\frac{1}{\sqrt{35}}\\
0 & 0 & 0 & \cdots & -\frac{1}{\sqrt{35}} & \frac{1}{5}
\end{vmatrix}_{(3s-3t-1)}=\det Z_6.
Case 5. i=3s+1, j=3t+2 for 0≤s≤t≤n, and
\det Z_8=\begin{vmatrix}
\frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
-\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{2}{5} & -\frac{1}{\sqrt{35}}\\
0 & 0 & 0 & \cdots & -\frac{1}{\sqrt{35}} & \frac{2}{7}
\end{vmatrix}_{(3s-3t)}=(3s-3t+1)\Bigl(\frac{1}{343}\Bigr)^{s-t}.
Case 6. i=3s+2, j=3t+1 for 0\le s<t\le n, and
\det Z_9=\begin{vmatrix}
\frac{2}{7} & -\frac{1}{7} & 0 & \cdots & 0 & 0\\
-\frac{1}{7} & \frac{2}{7} & -\frac{1}{7} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \frac{2}{7} & -\frac{1}{7}\\
0 & 0 & 0 & \cdots & -\frac{1}{7} & \frac{2}{7}
\end{vmatrix}_{(3s-3t-2)}=49(3s-3t-1)\Bigl(\frac{1}{343}\Bigr)^{s-t}.
Therefore, we can obtain
(-1)^{3n-2}a_{3n-2}=\sum_{1\le i<j\le 3n}\det Z\cdot j_{i-1}\cdot j_{3n-j}=\wp_1+\wp_2+\wp_3, \qquad (3.5)
where
\wp_1=\sum_{1\le s<t\le n}\det Z_1\cdot j_{3s-1}\cdot j_{3n-3t}+\sum_{1\le s\le t\le n}\det Z_2\cdot j_{3s-1}\cdot j_{3n-3t-1}+\sum_{1\le s\le t\le n-1}\det Z_4\cdot j_{3s-1}\cdot j_{3n-3t}+\sum_{1\le s\le n}\det J[3s,3n+2]\cdot j_{3s-1}=\frac{7n(n^2-1)}{50}\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{(n^2+2n)(n+1)}{2450}\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{n^2(n-1)}{2450}\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{n(3n+1)}{490}\Bigl(\frac{1}{343}\Bigr)^{n}=\frac{345n^3+17n^2-336n}{50}\Bigl(\frac{1}{343}\Bigr)^{n},
\wp_2=\sum_{1\le s<t\le n}\det Z_5\cdot j_{3s}\cdot j_{3n-3t+2}+\sum_{1\le s<t\le n}\det Z_6\cdot j_{3s}\cdot j_{3n-3t+1}+\sum_{1\le s\le t\le n-1}\det Z_8\cdot j_{3s}\cdot j_{3n-3t}+\sum_{1\le s\le n}\det J[3s+1,3n+2]\cdot j_{3s}+\sum_{1\le t\le n}\det J[1,3t]\cdot j_{3n-3t+2}+\sum_{1\le t\le n}\det J[1,3t+1]\cdot j_{3n-3t+1}+\sum_{0\le t\le n-1}\det J[1,3t+2]\cdot j_{3n-3t}+\det J[1,3n+2]=\frac{49}{50}(n^3-n^2-2n+2)\Bigl(\frac{1}{343}\Bigr)^{n-1}+\frac{49n(n^2-1)}{50}\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{49n(n^2-1)}{50}\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{7n(3n-1)}{10}\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{7n(3n+1)}{10}\Bigl(\frac{1}{343}\Bigr)^{n-1}+\frac{21n(n+1)}{10}\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{7n(3n-1)}{10}\Bigl(\frac{1}{343}\Bigr)^{n}+n(3n+1)\Bigl(\frac{1}{343}\Bigr)^{n}=\frac{345n^3+73n^2-92n}{50}\Bigl(\frac{1}{343}\Bigr)^{n}
and
\wp_3=\sum_{0\le s<t\le n}\det Z_3\cdot j_{3s+1}\cdot j_{3n-3t+2}+\sum_{0\le s<t\le n-1}\det Z_7\cdot j_{3s+1}\cdot j_{3n-3t}+\sum_{0\le s<t\le n}\det Z_9\cdot j_{3s+1}\cdot j_{3n-3t+1}+\sum_{0\le s\le n-1}\det J[3s+2,3n+2]\cdot j_{3s+1}=\frac{49n(n^2+n-4)}{50}\Bigl(\frac{1}{343}\Bigr)^{n-1}+\frac{n(n^2-1)}{50}\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{49n(n^2+2n-1)}{50}\Bigl(\frac{1}{343}\Bigr)^{n}+\frac{21n(n+1)}{10}\Bigl(\frac{1}{343}\Bigr)^{n-1}=\frac{345n^3+52n^2-189n}{50}\Bigl(\frac{1}{343}\Bigr)^{n}.
By substituting ℘1, ℘2, and ℘3 into Eq (3.5), the desired outcome can be deduced.
(-1)^{3n-2}a_{3n-2}=\wp_1+\wp_2+\wp_3=\frac{1035n^3+142n^2-617n}{50}\Bigl(\frac{1}{343}\Bigr)^{n}.
This completes the proof.
Let
0=\sigma_1<\sigma_2\le\sigma_3\le\cdots\le\sigma_{3n}
denote the eigenvalues of J. By Facts 3.3 and 3.4, we can complete the proof of Lemma 3.2: according to Eq (3.1), it is evident that
\sum_{i=2}^{3n}\frac{1}{\sigma_i}=\frac{(-1)^{3n-2}a_{3n-2}}{(-1)^{3n-1}a_{3n-1}}=\frac{1035n^3+142n^2+617n}{2(490+81n)}.
Considering Lemma 3.1, we now focus on the calculation of \sum_{j=1}^{3n}\frac{1}{\varsigma_j}. Let
\delta(n)=\frac{10290n(11+2\sqrt{30})+3600+12430\sqrt{30}}{60000}
and
\xi(n)=\frac{10290n(11-2\sqrt{30})+3600-12430\sqrt{30}}{60000}.
Lemma 3.5. Suppose that \varsigma_j (where j ranges from 1 to 3n+2) is defined as described previously. One has
\sum_{j=1}^{3n}\frac{1}{\varsigma_j}=\frac{(-1)^{3n-1}b_{3n-1}}{\det K},
where
\det K=\frac{45+11\sqrt{30}}{125}\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}+\frac{45-11\sqrt{30}}{125}\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n}
and
(-1)^{3n-1}b_{3n-1}=\delta(n)\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}-\xi(n)\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n}.
Proof. The characteristic polynomial of K can be written as
\Phi(K)=y^{3n}+b_1y^{3n-1}+\cdots+b_{3n-1}y+b_{3n},
where \varsigma_1,\varsigma_2,\cdots,\varsigma_{3n} are the roots of the equation
y^{3n}+b_1y^{3n-1}+\cdots+b_{3n-1}y+b_{3n}=0,
and, since b_{3n}=(-1)^{3n}\det K\neq 0, the reciprocals \frac{1}{\varsigma_1},\frac{1}{\varsigma_2},\cdots,\frac{1}{\varsigma_{3n}} are the roots of the equation
b_{3n}y^{3n}+b_{3n-1}y^{3n-1}+\cdots+b_1y+1=0.
By Vieta's theorem, one has
\sum_{j=1}^{3n}\frac{1}{\varsigma_j}=\frac{(-1)^{3n-1}b_{3n-1}}{\det K}. \qquad (3.6)
To simplify the analysis, let R_p denote the p-th order leading principal submatrix of K and let k_p=\det R_p. We derive a formula for k_p, which will be used to calculate (-1)^{3n-1}b_{3n-1} and \det K, for p ranging from 1 to 3n-1. One has
k_1=\frac{1}{5},\ k_2=\frac{1}{35},\ k_3=\frac{1}{245},\ k_4=\frac{1}{1715},\ k_5=\frac{1}{12005},\ k_6=\frac{1}{84035},\ k_7=\frac{1}{588245},\ k_8=\frac{1}{4117715}
and
\begin{cases}
k_{3p}=\frac{2}{7}k_{3p-1}-\frac{1}{49}k_{3p-2}, & 1\le p\le n;\\
k_{3p+1}=\frac{2}{7}k_{3p}-\frac{1}{49}k_{3p-1}, & 1\le p\le n;\\
k_{3p+2}=\frac{2}{7}k_{3p+1}-\frac{1}{49}k_{3p}, & 0\le p\le n-1.
\end{cases}
After a straightforward computation, the following general formulas can be derived
\begin{cases}
k_{3p}=\frac{105+14\sqrt{30}}{150}\cdot\bigl(\frac{11+4\sqrt{30}}{343}\bigr)^{p}+\frac{105-14\sqrt{30}}{150}\cdot\bigl(\frac{11-4\sqrt{30}}{343}\bigr)^{p}, & 1\le p\le n;\\
k_{3p+1}=\frac{45+8\sqrt{30}}{150}\cdot\bigl(\frac{11+4\sqrt{30}}{343}\bigr)^{p}+\frac{45-8\sqrt{30}}{150}\cdot\bigl(\frac{11-4\sqrt{30}}{343}\bigr)^{p}, & 1\le p\le n;\\
k_{3p+2}=\frac{11+2\sqrt{30}}{70}\cdot\bigl(\frac{11+4\sqrt{30}}{343}\bigr)^{p}+\frac{11-2\sqrt{30}}{70}\cdot\bigl(\frac{11-4\sqrt{30}}{343}\bigr)^{p}, & 0\le p\le n-1.
\end{cases}
Fact 3.6.
\det K=\frac{45+11\sqrt{30}}{125}\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}+\frac{45-11\sqrt{30}}{125}\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n}.
Proof. By expanding detK with respect to its last row, we obtain
\det K=\frac{3}{5}k_{3n+1}-\frac{1}{35}k_{3n}=\frac{45+11\sqrt{30}}{125}\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}+\frac{45-11\sqrt{30}}{125}\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n}.
The desired outcome has been achieved.
Fact 3.7.
(-1)^{3n-1}b_{3n-1}=\delta(n)\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}-\xi(n)\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n},
where
\delta(n)=\frac{10290n(11+4\sqrt{30})+3600+12430\sqrt{30}}{60000}
and
\xi(n)=\frac{10290n(11-4\sqrt{30})+3600-12430\sqrt{30}}{60000}.
Proof. Since (-1)^{3n-1}b_{3n-1} equals the sum of all principal minors of K of order 3n-1, and K is a symmetric matrix whose diagonal entries we denote by g_{ii}, we have
(-1)^{3n-1}b_{3n-1}=\sum_{i=1}^{3n}\det K[i]=\sum_{i=1}^{3n}\det\begin{pmatrix}R_{i-1} & 0\\ 0 & R_{3n-i}\end{pmatrix}=\sum_{i=1}^{3n}k_{i-1}\cdot k_{3n-i}, \qquad (3.7)
where
R_{3n-i}=\begin{pmatrix}g_{i+1,i+1} & \cdots & 0 & 0\\ \vdots & \ddots & \vdots & \vdots\\ 0 & \cdots & g_{3n-1,3n-1} & -\frac{1}{\sqrt{35}}\\ 0 & \cdots & -\frac{1}{\sqrt{35}} & g_{3n,3n}\end{pmatrix}.
In line with Eq (3.7), we have
(-1)^{3n-1}b_{3n-1}=2k_{3n-1}+\sum_{l=1}^{n}\det K[3l]+\sum_{l=0}^{n-1}\det K[3l+1]+\sum_{l=0}^{n-1}\det K[3l+2]=2k_{3n-1}+\sum_{l=1}^{n}k_{3(l-1)+2}\cdot k_{3(n-l)+2}+\sum_{l=0}^{n-1}k_{3l}\cdot k_{3(n-l)+1}+\sum_{l=0}^{n-1}k_{3l+1}\cdot k_{3(n-l)}. \qquad (3.8)
Immediately, we can get
\sum_{l=1}^{n}k_{3(l-1)+2}\cdot k_{3(n-l)+2}=\frac{245n}{20}\Bigl[\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n+1}-\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n+1}\Bigr]+\frac{\sqrt{30}}{600}\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}-\frac{\sqrt{30}}{600}\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n}, \qquad (3.9)
\sum_{l=0}^{n}k_{3l}\cdot k_{3(n-l)+1}=\frac{2391n}{300}\Bigl[\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n+1}-\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n+1}\Bigr]+\frac{137\sqrt{30}}{60000}\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}-\frac{137\sqrt{30}}{60000}\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n}, \qquad (3.10)
\sum_{l=0}^{n-1}k_{3l+1}\cdot k_{3(n-l)}=\frac{2391n}{300}\Bigl[\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n+1}-\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n+1}\Bigr]+\frac{1271\sqrt{30}}{60000}\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}-\frac{1271\sqrt{30}}{60000}\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n} \qquad (3.11)
and
2k_{3n-1}=\frac{90+16\sqrt{30}}{75}\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}+\frac{90-16\sqrt{30}}{75}\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n}. \qquad (3.12)
Substituting Eqs (3.9)–(3.12) into Eq (3.8) yields Fact 3.7. Then, utilizing Eq (3.6) in conjunction with Facts 3.6 and 3.7, Lemma 3.5 follows promptly.
Combining Lemmas 3.1, 3.2 and 3.5, we readily deduce Theorems 3.8 and 3.9.
Theorem 3.8. Assume that P2n is the strong product of the pentagonal network. One has
Kf^{*}(P_{2n})=\frac{1029n^3+5736n^2+4275n+850}{3}+(57n+30)\,\frac{(-1)^{3n-1}b_{3n-1}}{\det K},
where
(-1)^{3n-1}b_{3n-1}=\delta(n)\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}-\xi(n)\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n},\qquad \det K=\frac{45+11\sqrt{30}}{125}\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}+\frac{45-11\sqrt{30}}{125}\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n},
additionally,
\delta(n)=\frac{10290n(11+2\sqrt{30})+3600+12430\sqrt{30}}{60000}
and
\xi(n)=\frac{10290n(11-2\sqrt{30})+3600-12430\sqrt{30}}{60000}.
Theorem 3.9. Let P2n be the strong product of the pentagonal network. Then
\tau(P_{2n})=\frac{3}{5}\cdot\frac{2^{32n+7}}{5}\Bigl((45+11\sqrt{30})(11+4\sqrt{30})^{n}+(45-11\sqrt{30})(11-4\sqrt{30})^{n}\Bigr).
Proof. Based on the proof of Lemma 3.2, \sigma_1,\sigma_2,\cdots,\sigma_{3n} constitute the roots of the equation
x^{3n}+a_1x^{3n-1}+\cdots+a_{3n-2}x^{2}+a_{3n-1}x=0.
Accordingly, one has
\prod_{i=2}^{3n}\sigma_i=(-1)^{3n-1}a_{3n-1}.
By Fact 3.3, we have
\prod_{i=2}^{3n}\sigma_i=\frac{12+23n}{25}\Bigl(\frac{1}{343}\Bigr)^{n}.
By the same method,
\prod_{j=1}^{3n}\varsigma_j=\det K=\frac{45+11\sqrt{30}}{375}\Bigl(\frac{11+4\sqrt{30}}{343}\Bigr)^{n}+\frac{45-11\sqrt{30}}{375}\Bigl(\frac{11-4\sqrt{30}}{343}\Bigr)^{n}.
Note that
\prod_{v\in V(P_{2n})}d_v=5^{4}\cdot 7^{16n-4}
and
|E(P_{2n})|=47n-8.
In conjunction with Lemma 2.3, we get
\tau(P_{2n})=\frac{1}{2|E(P_{2n})|}\Bigl(\Bigl(\frac{6}{5}\Bigr)^{2}\cdot\Bigl(\frac{8}{7}\Bigr)^{6n-2}\cdot\prod_{i=2}^{3n}2\sigma_i\cdot\prod_{j=1}^{3n}2\varsigma_j\cdot\prod_{v\in V(P_{2n})}d_v\Bigr)=\frac{3}{5}\cdot\frac{2^{32n+7}}{5}\Bigl((45+11\sqrt{30})(11+4\sqrt{30})^{n}+(45-11\sqrt{30})(11-4\sqrt{30})^{n}\Bigr).
This completes the proof.
In this study, we have derived explicit expressions for the multiplicative degree-Kirchhoff index and the complexity of P2n, where P2n=R_n\boxtimes R'_n, based on the normalized Laplacian spectrum. These two fundamental quantities serve as simple yet reliable graph invariants that effectively capture the stability of diverse networks. Future research should focus on applying our methodology to determine spectra of strong products of other automorphic and symmetric networks.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors would like to express their sincere gratitude to the editor and anonymous referees for providing valuable suggestions, which significantly contributed to the substantial improvement of the original manuscript. This work was supported by Anhui Jianzhu University Funding under Grant KJ212005, and by Natural Science Fund of Education Department of Anhui Province under Grant KJ2020A0478.
No potential conflicts of interest were reported by the authors.