
The main objective of this research was to analyze corporate reputation as a mediating variable in the relationship between corporate social responsibility and financial performance. Simple random sampling was used to obtain 300 respondents from Bangladeshi manufacturing companies. The data were analyzed with the Statistical Package for the Social Sciences (SPSS) 23.0, and structural equation modeling (SEM) was used to evaluate the hypotheses. The results demonstrated that corporate social responsibility positively influences corporate reputation and financial performance, while corporate reputation is a statistically significant predictor of financial performance. In order of significance, the corporate social responsibility factors are environmental contribution, philanthropic responsibility, legal responsibility, ethical responsibility, economic responsibility and social responsibility. The analysis also established how corporate reputation mediates the link between corporate social responsibility and financial performance, and the data support a considerable correlation between the two. These findings on corporate social responsibility practices in manufacturing organizations in developing nations, particularly Bangladesh, have significant implications for businesses, entrepreneurs, communities, researchers and policymakers seeking to understand the outcomes of sustainability. The conclusion draws implications for sustainability practice and future research.
Citation: Zhang Jing, Gazi Md. Shakhawat Hossain, Badiuzzaman, Md. Shahinur Rahman, Najmul Hasan. Does corporate reputation play a mediating role in the association between manufacturing companies' corporate social responsibility (CSR) and financial performance?[J]. Green Finance, 2023, 5(2): 240-264. doi: 10.3934/GF.2023010
Clustering is a foundational research area in computer vision and machine learning. Over decades of exploration, traditional clustering methods have matured and yielded promising results. However, when confronted with complex signals (e.g., images and videos), clustering methods that rely on the Euclidean distance measure often perform unsatisfactorily. Although the data observed in practical applications are complicated, the clusters to which the samples belong are generally located near corresponding low-dimensional subspaces. Therefore, some researchers have turned to subspace clustering to characterize the geometric structure of the original data manifold. These algorithms model each data point as lying in a union of low-dimensional subspaces and assign it to the subspace cluster that fits it best, yielding the final clustering prediction. At present, subspace clustering methods catering to implicit subspace structures include iterative methods [1,2], algebraic methods [3,4], statistical methods [5,6], and self-representation methods [7,8,9]. Among them, self-representation methods have received widespread attention due to their effective utilization of the latent subspace structure and attributes of the data. Notable representatives of this approach include sparse subspace clustering (SSC) [7] and low-rank representation (LRR) [9]. These methods typically contain two steps: (1) exploring the data structure by using the Euclidean distance measure and obtaining a coefficient matrix; (2) leveraging the learned coefficient matrix to conduct spectral clustering [10], which assigns the data to k clusters.
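To make the two-step self-representation pipeline concrete, the following NumPy sketch uses a ridge-regularized least-squares self-representation as a simplified stand-in for SSC/LRR (the actual methods use ℓ1 and nuclear-norm penalties, respectively); the data, function name, and λ value are illustrative assumptions:

```python
import numpy as np

def self_representation_affinity(X, lam=0.1):
    """Step 1 (simplified): solve min_C ||X - XC||_F^2 + lam ||C||_F^2
    in closed form, C = (X^T X + lam I)^{-1} X^T X, then build a
    symmetric affinity matrix for subsequent spectral clustering."""
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(G.shape[0]), G)
    np.fill_diagonal(C, 0.0)         # a point should not represent itself
    return np.abs(C) + np.abs(C.T)   # affinity fed to spectral clustering

# hypothetical data: 10 points drawn from two orthogonal 1-D subspaces of R^3
rng = np.random.default_rng(0)
U1, U2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
X = np.column_stack([U1 * a for a in rng.uniform(1, 2, 5)] +
                    [U2 * a for a in rng.uniform(1, 2, 5)])
W = self_representation_affinity(X)
# points are represented only by others from their own subspace
assert np.allclose(W[:5, 5:], 0.0) and W[:5, :5].sum() > 0
```

Step 2 would then run spectral clustering on `W` to recover the two clusters.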
Nevertheless, previous studies reveal that image sets or video data often inhabit nonlinear manifold structures [11,12]. Therefore, the Euclidean distance cannot accurately measure the similarity between two data points residing in a non-Euclidean space. Wei et al. [13] proposed a discrete metric learning method for fast image set classification, which significantly improves classification efficiency and accuracy by learning metrics and hashing techniques on the Riemannian manifold. Furthermore, video data commonly feature varying numbers of frames per clip, leading to exaggerated data dimensions upon the vectorization of video frames. These challenges pose difficulties for conventional clustering algorithms such as SSC and LRR.
Deep learning methods emerge as capable learners of discriminative feature representations [14]. A number of studies propose combining metric learning and deep learning for classification [14,15]. Recently, deep learning techniques have started to be used in the scenario of data clustering and have shown remarkable performance [16,17,18]. To name a few, Xie et al. [19] proposed a clustering method based on deep embedding, where a deep neural network learns feature representations and clustering assignments concurrently. Wang et al. [20] proposed a multi-level representation learning method for incomplete multiview clustering, which incorporates contrastive and adversarial regularization to improve clustering performance. Yang et al. [21] proposed a novel graph contrastive learning framework for clustering multi-layer networks, effectively combining nonnegative matrix factorization and contrastive learning to enhance the discriminative features of vertices across different network layers. Guo et al. [22] proposed an adaptive multiview subspace learning method using distributed optimization, which enhances clustering performance by capturing high-order correlations among views. Li et al. [23] introduced a deep adversarial MVC method that explores the intrinsic structure embedded in multiview data. Ma et al. [24] suggested a deep generative clustering model based on variational autoencoders, being able to yield a more appropriate embedding subspace for clustering with respect to complex data distributions. These methods address challenges faced by traditional data modeling (e.g., preprocessing high-dimensional data and characterizing nonlinear relationships) by nonlinearly mapping data points into a latent space through a series of encoder layers. 
However, existing deep clustering algorithms have an inherent limitation, i.e., they utilize Euclidean computations to analyze the semantic features generated by convolutional neural networks, which leads to unfaithful data representation. The fundamental reason is that these features inherently have a submanifold structure [25,26]. While techniques such as fine-tuning, optimization, and feature activation in deep neural networks can affect the experimental results, manifold learning provides a theoretical basis for the analysis of non-Euclidean data structures.
Recognizing the nonlinear structure of high-dimensional data, some studies [11,27,28,29,30,31] extended the traditional clustering paradigm to the context of Riemannian manifolds. Typically, a manifold represents a smooth surface embedded in Euclidean space [32]. In image analysis, covariance matrices often serve as region descriptors, with these matrices regarded as points on the symmetric positive definite (SPD) manifold. Hu et al. [12] proposed a multi-geometry SSC method, aiming to mine complementary geometric representations. Wei et al. [33] proposed a method called sparse representation classifier guided Grassmann reconstruction metric learning (GRML), which enhances image set classification by learning a robust metric on the Grassmannian manifold, making it effective in handling noisy data. Linear subspaces have attracted widespread attention in many scientific fields, as their underlying space is a Grassmannian manifold, which provides a solid theoretical foundation for the characterization of signal data (e.g., video clips, image sets, and point clouds). In addition, the Grassmannian manifold plays a crucial role in handling non-Euclidean data structures, particularly through its compact manifold structure. Wang et al. [34] extended the idea of LRR to the context of the Grassmannian manifold, facilitating more accurate representation and analysis in complicated data scenarios. Piao et al. [35] proposed a double-kernel norm low-rank representation based on the Grassmannian manifold for clustering. However, issues such as integrating manifold representation learning with clustering and preserving the subspace structure during data transformation remain challenging for such a framework.
It can be concluded that deep learning-based clustering struggles to learn effective representations from data with a submanifold structure, and that the underlying space of subspace features is a Grassmannian manifold. This motivates us to propose the framework of DGMVCL to achieve a more reasonable view-invariant representation on a non-Euclidean space. The proposed framework comprises four main components: the FEM, MMM, Grassmannian manifold learning module (GMLM), and the contrastive learning module (CLM). Specifically, the FEM is responsible for mapping the original data into a feature subspace, the MMM is designed for the projection of subspace features onto a Grassmannian manifold, the GMLM is used to facilitate deep subspace learning on the Grassmannian manifold, and the CLM is focused on discriminative cluster assignments through contrastive learning. Additionally, the positive and negative samples relied upon in the contrastive learning process are constructed on the basis of the Riemannian distance on the Grassmannian manifold, which can better reflect the geometric distribution of the data. Extensive experiments across five benchmarking datasets validate the effectiveness of the proposed DGMVCL.
Our main contributions are summarized as follows:
● A lightweight geometric learning model built upon the Grassmannian manifold is proposed for multiview subspace clustering in an end-to-end manner.
● A Grassmannian-level contrastive learning strategy is suggested to help improve the accuracy of cluster assignments among multiple views.
● Extensive experiments conducted on five multiview datasets demonstrate the effectiveness of the proposed DGMVCL.
In this section, we give a brief introduction to related work, including multiview subspace clustering, contrastive learning, and the Riemannian geometry of the Grassmannian manifold.
Although a number of subspace clustering methods exist, such as LRR [9] and SSC [7], most of them adopt self-representation to obtain subspace features. Low-rank tensor-constrained multiview subspace clustering (LMSC) [36] is able to generate a common subspace for different views instead of individual representations. Flexible multiview representation learning for subspace clustering (FMR) [37] avoids using partial information for data reconstruction. Zhou et al. [38] proposed an end-to-end adversarial attention network to align latent feature distributions and assess the importance of different modalities. Despite the improved clustering performance, these methods do not consider the semantic label consistency across multiple views, potentially resulting in challenges when learning consistent clustering assignments.
To address these challenges, Kang et al. [39] proposed a structured graph learning method that constructs a bipartite graph to manage the relationships between samples and anchor points, effectively reducing computational complexity in large-scale data and enabling support for out-of-sample extensions. This approach is particularly advantageous in scalable subspace clustering scenarios, extending from single-view to multiview settings.
Furthermore, to enhance clustering performance by leveraging high-order structural information, Pan and Kang [40] introduced a high-order multiview clustering (HMvC) method. This approach utilizes graph filtering and an adaptive graph fusion mechanism to explore long-distance interactions between different views. By capturing high-order neighborhood relationships, HMvC effectively improves clustering results on both graph and non-graph data, addressing some of the limitations in prior methods that overlooked the intrinsic high-order information from data attributes.
Contrastive learning has made significant progress in self-supervised learning [41,42,43,44]. The methods based on contrastive learning essentially rely on a large number of pairwise feature comparisons. Specifically, they aim to maximize the similarity between positive samples in the latent feature space while minimizing the similarity between negative samples simultaneously. In the field of clustering, positive samples are constructed from the invariant representations of all multiview instances of the same sample, while negative samples are obtained by pairing representations from different samples across various views. Chen et al. [42] proposed a contrastive learning framework that maximizes consistency between differently augmented views of the same example in the latent feature space. Wang et al. [44] investigated two key properties of the contrastive loss, namely feature alignment from positive samples and uniformity of induced feature distribution on the hypersphere, which can be used to measure the quality of generated samples. Although these methods can learn robust representations based on data augmentation, learning invariant representations across multiple views remains challenging.
The Grassmannian manifold $G(q,d)$ comprises the collection of $q$-dimensional linear subspaces of $\mathbb{R}^d$. Each of them can be naturally represented by an orthonormal basis $Y$ of size $d \times q$ ($Y^TY = I_q$, where $I_q$ is the $q \times q$ identity matrix). Consequently, each Grassmannian element corresponds to an equivalence class of such orthonormal bases:
$$ [Y] = \{\tilde{Y} \mid \tilde{Y} = YO,\ O \in O(q)\}, \tag{2.1} $$
where Y represents a d×q column-wise orthonormal matrix. The definition in Eq (2.1) is commonly referred to as the orthonormal basis (ONB) viewpoint [32].
As demonstrated in [32], each point of the Grassmannian manifold can be alternatively represented as an idempotent symmetric matrix of rank $q$, given by $\Phi(Y) = YY^T$. This representation, known as the projector perspective (PP), signifies that the Grassmannian manifold is a submanifold of the Euclidean space of symmetric matrices. Therefore, an extrinsic distance can be induced by the ambient Euclidean space, termed the projection metric (PM) [49]. The PM is defined as:
$$ d_{PM}(Y_1, Y_2) = 2^{-1/2}\,\|Y_1Y_1^T - Y_2Y_2^T\|_F, \tag{2.2} $$
where $\|\cdot\|_F$ denotes the Frobenius norm. As evidenced in [45], the distance computed by the PM deviates from the true geodesic distance on the Grassmannian manifold only by a scale factor of $\sqrt{2}$, making it a widely used Grassmannian distance.
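As a quick illustration of Eq (2.2), the following NumPy sketch computes the projection metric and checks that it is invariant to the choice of orthonormal basis, i.e., the equivalence-class property of Eq (2.1); the dimensions and helper name are illustrative:

```python
import numpy as np

def proj_metric(Y1, Y2):
    """Projection metric (2.2) between the subspaces spanned by the
    d x q column-orthonormal bases Y1 and Y2."""
    return np.linalg.norm(Y1 @ Y1.T - Y2 @ Y2.T, 'fro') / np.sqrt(2)

rng = np.random.default_rng(1)
Y1, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # a point on G(2, 5)
Y2, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # another point on G(2, 5)
O, _ = np.linalg.qr(rng.standard_normal((2, 2)))   # a random 2x2 orthogonal matrix

# rotating a basis by O in O(q) leaves the subspace, hence the distance, unchanged
assert np.isclose(proj_metric(Y1, Y2), proj_metric(Y1 @ O, Y2))
assert np.isclose(proj_metric(Y1, Y1), 0.0)
```

The invariance holds because $(YO)(YO)^T = YY^T$ for any orthogonal $O$.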
We are given a group of multiview data $X = \{X^v \in \mathbb{R}^{d_v \times N_v}\}_{v=1}^{V}$, where $V$ denotes the number of views, $N_v$ signifies the number of instances contained in the $v$-th view, and $d_v$ represents the dimensionality of each instance in $X^v$. The proposed DGMVCL is an end-to-end neural network built upon the Grassmannian manifold, aiming to generate clustering semantic labels from the original instances across multiple views. As shown in Figure 1, the proposed framework mainly consists of four modules: the FEM, MMM, GMLM, and CLM. The following is a detailed introduction to each of them.
To obtain an effective semantic space for the subsequent computations, we exploit a CNN to transform the input images into subspace representations with lower redundancy. The proposed FEM contains two blocks, each comprising a convolutional layer, a ReLU activation layer, and a max-pooling layer. The difference between the two blocks lies in the number of convolutional kernels involved, i.e., 32 for the first convolutional layer and 64 for the second. To characterize the geometric structure of the generated features, the following manifold modeling module is designed.
Let $E_i^v \in \mathbb{R}^{c \times l}$ be the $i$-th ($i \in \{1,2,\dots,N_v\}$) feature matrix generated by the FEM with respect to the $i$-th input instance of the $v$-th view. Here, $c$ represents the number of channels, while $l$ indicates the length of a vectorized feature map. Since each point of $G(q,d)$ represents a $q$-dimensional linear subspace of the $d$-dimensional vector space $\mathbb{R}^d$ (see Section 2.3), the Grassmannian manifold is a reasonable and efficient tool for parametrizing the $q$-dimensional real vector subspace embedded in $E_i^v$ [49,51].
Cov Layer: To capture the complementary statistical information embodied in different channel features, a similarity matrix is computed for each $E_i^v$:
$$ \tilde{E}_i^v = E_i^v (E_i^v)^T. \tag{3.1} $$
Orth Layer: Following the Cov layer, SVD is applied to obtain a $q$-dimensional linear subspace spanned by an orthonormal matrix $Y_i^v \in \mathbb{R}^{c \times q}$, that is, $\tilde{E}_i^v \simeq Y_i^v \Sigma_i^v (Y_i^v)^T$, where $Y_i^v$ and $\Sigma_i^v$ consist of the $q$ leading eigenvectors and the corresponding eigenvalues, respectively. The resulting Grassmannian representation with respect to the input $X^v$ is denoted by $\Upsilon^v = [Y_1^v, Y_2^v, \dots, Y_{N_v}^v]$.
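The two MMM layers can be sketched as follows in NumPy; the channel count, feature length, and subspace dimension q are illustrative. Since the matrix from Eq (3.1) is symmetric positive semidefinite, its SVD coincides with its eigen-decomposition:

```python
import numpy as np

def manifold_modeling(E, q):
    """MMM sketch: Cov layer (3.1) followed by the Orth layer.
    E is a c x l feature matrix; returns a c x q orthonormal basis,
    i.e., a point on the Grassmannian G(q, c)."""
    E_tilde = E @ E.T                 # Cov layer: channel similarity matrix
    U, s, _ = np.linalg.svd(E_tilde)  # singular values sorted descending
    return U[:, :q]                   # q leading eigenvectors span the subspace

rng = np.random.default_rng(2)
E = rng.standard_normal((64, 49))     # e.g., 64 channels, 7x7 feature maps
Y = manifold_modeling(E, q=10)
assert Y.shape == (64, 10)
assert np.allclose(Y.T @ Y, np.eye(10), atol=1e-8)  # orthonormal basis
```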
An overview of the designed GMLM is shown in Figure 1. Note that the input to this module is a series of orthonormal matrices. For simplicity, we abbreviate $Y_i^v$ as $Y_i$ in the following. Besides, since $Y_i \in G(q,c)$, we replace the symbol $c$ with $d$. To implement deep subspace learning, the following three basic layers are designed.
FRMap Layer: This layer transforms the input orthonormal matrices into new ones through a linear mapping function $f_{fr}$:
$$ Y_{i,k} = f_{fr}^{(k)}(Y_{i,k-1}; W_k) = W_k Y_{i,k-1}, \tag{3.2} $$
where $Y_{i,k-1} \in G(q, d_{k-1})$ is the input of the $k$-th layer, $W_k \in \mathbb{R}^{d_k \times d_{k-1}}$ ($d_k < d_{k-1}$) is the to-be-learned transformation matrix (connection weights), essentially required to be of full row rank, and $Y_{i,k} \in \mathbb{R}^{d_k \times q}$ is the resulting matrix. Because the weight matrices lie in a non-compact Stiefel manifold on which the geodesic distance has no upper bound [46,47], direct optimization on such a manifold is infeasible. To address this challenge, we follow [46] and impose an orthogonality constraint on each weight matrix $W_k$. As a consequence, the weight space becomes a compact Stiefel manifold $St(d_k, d_{k-1})$ [48], facilitating better optimization.
ReOrth Layer: Inspired by [46], we design the ReOrth layer to prevent the output matrices of the FRMap layer from degeneracy. Specifically, we first impose the QR decomposition on the input matrix $Y_{i,k-1}$:
$$ Y_{i,k-1} = Q_{i,k-1} R_{i,k-1}, \tag{3.3} $$
where $Q_{i,k-1} \in \mathbb{R}^{d_{k-1} \times q}$ is an orthonormal matrix and $R_{i,k-1} \in \mathbb{R}^{q \times q}$ is an invertible upper triangular matrix. Therefore, $Y_{i,k-1}$ can be normalized into an orthonormal basis matrix via the following transformation function $f_{ro}^{(k)}$:
$$ Y_{i,k} = f_{ro}^{(k)}(Y_{i,k-1}) = Y_{i,k-1} R_{i,k-1}^{-1} = Q_{i,k-1}. \tag{3.4} $$
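The FRMap and ReOrth layers, Eqs (3.2)-(3.4), can be sketched as follows in NumPy; the layer sizes are illustrative, and the weight below is simply constructed to be row-orthonormal rather than learned:

```python
import numpy as np

def frmap(Y, W):
    """FRMap layer (3.2): linear projection to a lower-dimensional space."""
    return W @ Y

def reorth(Y):
    """ReOrth layer (3.3)-(3.4): re-orthonormalize via QR; Y = QR -> Q."""
    Q, R = np.linalg.qr(Y)
    return Q

rng = np.random.default_rng(3)
Y = np.linalg.qr(rng.standard_normal((64, 10)))[0]      # input point on G(10, 64)
W = np.linalg.qr(rng.standard_normal((32, 64)).T)[0].T  # 32 x 64, orthonormal rows

Z = frmap(Y, W)                 # generally no longer column-orthonormal
Y_next = reorth(Z)              # back to an orthonormal basis on G(10, 32)
assert Y_next.shape == (32, 10)
assert np.allclose(Y_next.T @ Y_next, np.eye(10), atol=1e-8)
```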
ProjMap Layer: As studied in [47,49,50,51,52], the PM is one of the most popular Grassmannian distance measures, providing a specific inner product structure to a concrete Riemannian manifold. In this case, the original Grassmannian manifold reduces to a flat space, in which Euclidean computations can be generalized to the projection domain of orthonormal matrices. Formally, by applying the projection operator [47] to the orthonormal matrix $Y_{i,k-1}$ of the $k$-th layer, the designed ProjMap layer can be formulated as:
$$ Y_{i,k} = f_{pm}^{(k)}(Y_{i,k-1}) = Y_{i,k-1} Y_{i,k-1}^T. \tag{3.5} $$
Subsequently, we embed a contrastive learning module at the end of the GMLM to enhance the discriminatory power of the learned features.
For simplicity, we assume that $n_x = 1$. In this case, the output data representation with respect to $\Upsilon^v$ is denoted by $\tilde{\Upsilon}^v = [Y_{1,3}^v, Y_{2,3}^v, \dots, Y_{N_v,3}^v]$. Then, the projection metric defined in Eq (2.2) is applied to compute the geodesic distance between any two data points, enabling the execution of K-means clustering within each view. At the same time, we can obtain the membership degree $t_{ij}$, computed as follows:
$$ t_{ij} = \frac{1/d_{PM}^2(Y_{i,3}^v, U_j)}{\sum_{r=1}^{K} 1/d_{PM}^2(Y_{i,3}^v, U_r)}, \tag{3.6} $$
where $K$ represents the total number of clusters and $U_j$ denotes the $j$-th ($j \in \{1,2,\dots,K\}$) cluster. To enhance the discriminability of the global soft assignments, we consider a unified target distribution probability $P^v \in \mathbb{R}^{N_v \times K}$, which is formulated as [53]:
$$ p_{ij}^v = \frac{t_{ij}^2 / \sum_{i=1}^{N_v} t_{ij}}{\sum_{j=1}^{K}\left(t_{ij}^2 / \sum_{i=1}^{N_v} t_{ij}\right)}, \tag{3.7} $$
where each $p_{ij}^v$ represents the soft assignment of the $i$-th sample to the $j$-th cluster. Therefore, the column $p_j^v$ collects the cluster assignments of the same semantic cluster.
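Equations (3.6) and (3.7) can be sketched as follows; the distance matrix below is a hypothetical stand-in for the squared PM distances to the cluster representatives:

```python
import numpy as np

def soft_assignments(D):
    """Membership degrees (3.6) from a matrix D (N x K) of squared
    PM distances between samples and the K clusters."""
    inv = 1.0 / D
    return inv / inv.sum(axis=1, keepdims=True)

def target_distribution(T):
    """Sharpened target distribution (3.7) from soft assignments T (N x K):
    square the assignments, normalize by cluster frequency, renormalize rows."""
    weight = T**2 / T.sum(axis=0, keepdims=True)
    return weight / weight.sum(axis=1, keepdims=True)

D = np.array([[0.1, 2.0],
              [1.5, 0.2]])            # illustrative squared distances, K = 2
T = soft_assignments(D)
P = target_distribution(T)
assert np.allclose(T.sum(axis=1), 1.0) and np.allclose(P.sum(axis=1), 1.0)
assert P[0, 0] > T[0, 0]              # sharpening raises already-confident entries
```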
The similarity between two cluster assignments $p_j^{v_1}$ and $p_j^{v_2}$ for cluster $j$ is measured by the following cosine similarity [54]:
$$ s(p_j^{v_1}, p_j^{v_2}) = \frac{p_j^{v_1} \cdot p_j^{v_2}}{\|p_j^{v_1}\|\,\|p_j^{v_2}\|}, \tag{3.8} $$
where $v_1$ and $v_2$ represent two different views. Since the instances of a given sample share the same label in every view, their cluster assignment probabilities across different views should be similar, whereas instances from different samples are independent of each other. Therefore, for $V$ views with $K$ clusters, there are $(V-1)$ positive cluster assignment pairs and $V(K-1)$ negative cluster assignment pairs.
The goal of the CLM is to maximize the similarity between cluster assignments within clusters and minimize the similarity between cluster assignments across clusters. Inspired by [55], the cross-view contrastive loss between $p_k^{v_1}$ and $p_k^{v_2}$ is defined as follows:
$$ \ell(v_1, v_2) = -\frac{1}{K}\sum_{k=1}^{K} \log \frac{e^{s(p_k^{v_1},\, p_k^{v_2})/\tau}}{\sum_{j=1, j\neq k}^{K} e^{s(p_j^{v_1},\, p_k^{v_1})/\tau} + \sum_{j=1}^{K} e^{s(p_j^{v_1},\, p_k^{v_2})/\tau}}, \tag{3.9} $$
where $\tau$ is the temperature parameter, $(p_k^{v_1}, p_k^{v_2})$ is a positive cluster assignment pair between views $v_1$ and $v_2$, and $(p_j^{v_1}, p_k^{v_1})$ ($j \neq k$) and $(p_j^{v_1}, p_k^{v_2})$ are negative cluster assignment pairs in views $v_1$ and $v_2$, respectively. The overall cross-view contrastive loss across multiple views is given below:
$$ \mathcal{L}_g = \frac{1}{2}\sum_{v_1=1}^{V}\ \sum_{v_2=1, v_2\neq v_1}^{V} \ell(v_1, v_2). \tag{3.10} $$
The cross-view contrastive loss explicitly pulls together cluster assignments within the same cluster and pushes apart cluster assignment pairs from different clusters. This inspiration comes from the recently proposed contrastive learning paradigm, which is applied to semantic labels to explore consistent information across multiple views.
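The per-pair loss of Eq (3.9) can be sketched as follows in NumPy; the assignment matrices, K, and τ are illustrative, and the matrix columns play the role of the cluster assignment vectors:

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity (3.8) between two assignment vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def cross_view_loss(P1, P2, tau=0.5):
    """Cross-view contrastive loss (3.9) between two views' cluster
    assignment matrices P1, P2 (N x K); column k is cluster k's assignments."""
    K = P1.shape[1]
    total = 0.0
    for k in range(K):
        pos = np.exp(cos_sim(P1[:, k], P2[:, k]) / tau)
        neg_intra = sum(np.exp(cos_sim(P1[:, j], P1[:, k]) / tau)
                        for j in range(K) if j != k)          # same view, j != k
        neg_inter = sum(np.exp(cos_sim(P1[:, j], P2[:, k]) / tau)
                        for j in range(K))                    # across views
        total += -np.log(pos / (neg_intra + neg_inter))
    return total / K

# aligned one-hot assignments incur a lower loss than label-permuted ones
A = np.eye(3)[np.array([0, 0, 1, 1, 2, 2])]   # 6 samples, K = 3
B = A[:, [1, 2, 0]]                            # same clusters, permuted labels
assert cross_view_loss(A, A) < cross_view_loss(A, B)
```

This matches the intuition behind Eq (3.10): consistent cluster assignments across views are pulled together, inconsistent ones are pushed apart.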
Additionally, to ensure consistency between cluster labels on the Grassmannian manifold and in Euclidean space, as shown in Figure 1, we append two linear layers and a softmax function to the tail of the ProjMap layer to generate a probability matrix $Q^v \in \mathbb{R}^{N_v \times K}$ for cluster assignments. Let $q_i^v$ be the $i$-th row of $Q^v$, with $q_{ij}^v$ representing the probability that the $i$-th instance belongs to the $j$-th cluster of the $v$-th view. The semantic label of the $i$-th instance is determined by the maximum value in $q_i^v$. Following similar steps as for $\mathcal{L}_g$, we obtain the contrastive loss $\mathcal{L}_c$ for the different views in Euclidean space:
$$ \mathcal{L}_c = -\frac{1}{2K}\sum_{v_1=1}^{V}\ \sum_{v_2=1, v_2\neq v_1}^{V}\ \sum_{k=1}^{K} \log \frac{e^{s(q_k^{v_1},\, q_k^{v_2})/\tau}}{\sum_{j=1, j\neq k}^{K} e^{s(q_j^{v_1},\, q_k^{v_1})/\tau} + \sum_{j=1}^{K} e^{s(q_j^{v_1},\, q_k^{v_2})/\tau}}. \tag{3.11} $$
Inspired by [53], we introduce the following regularization term to prevent all the instances from being assigned to a specific cluster:
$$ \mathcal{L}_a = \sum_{v=1}^{V}\sum_{j=1}^{K} h_j^v \log h_j^v, \tag{3.12} $$
where $h_j^v = \frac{1}{N_v}\sum_{i=1}^{N_v} q_{ij}^v$. This term serves as the cross-view consistency loss in DGMVCL.
The objective function of the proposed method comprises three primary components: the Grassmannian contrastive loss, the Euclidean contrastive loss, and the cross-view consistency loss, given below:
$$ \mathcal{L}_{obj} = \alpha\mathcal{L}_g + \beta\mathcal{L}_c + \gamma\mathcal{L}_a, \tag{3.13} $$
where α, β, and γ are three trade-off parameters.
The goal of $\mathcal{L}_{obj}$ is to learn common semantic labels from the feature representations of multiple views. Let $p_i^v$ be the $i$-th row of $P^v$, with $p_{ij}^v$ its $j$-th element. Specifically, $p_i^v$ is a $K$-dimensional soft assignment probability vector with $\sum_{j=1}^{K} p_{ij}^v = 1$. Once the training process of the proposed network is completed, the semantic label of sample $i$ ($i \in \{1,2,\dots,N_v\}$) can be predicted by:
$$ y_i = \arg\max_j \left(\frac{1}{V}\sum_{v=1}^{V} p_{ij}^v\right). \tag{3.14} $$
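The prediction rule (3.14) amounts to a view-averaged argmax; a minimal NumPy sketch with two hypothetical views:

```python
import numpy as np

def predict_labels(P_views):
    """Prediction rule (3.14): average the soft assignment matrices
    (each N x K) over the views, then take the per-sample argmax."""
    return np.mean(np.stack(P_views), axis=0).argmax(axis=1)

# two illustrative views, N = 2 samples, K = 2 clusters
P1 = np.array([[0.8, 0.2], [0.4, 0.6]])
P2 = np.array([[0.6, 0.4], [0.1, 0.9]])
labels = predict_labels([P1, P2])   # averages to [[0.7, 0.3], [0.25, 0.75]]
assert labels.tolist() == [0, 1]
```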
For the proposed GMLM, the composition of successive functions $f = f^{(\rho)} \circ f^{(\rho-1)} \circ \dots \circ f^{(2)} \circ f^{(1)}$, with the parameter tuple $W = \{W_\rho, W_{\rho-1}, \dots, W_1\}$, can be viewed as the data embedding model, which satisfies the properties of a metric space. Here, $f^{(k)}$ and $W_k$ are, respectively, the operation function and the weight parameter of the $k$-th layer, and $\rho$ denotes the number of layers contained in the GMLM. The loss of the $k$-th layer can be expressed as $L^{(k)} = \ell \circ f^{(\rho)} \circ \dots \circ f^{(k)}$, where $\ell$ is actually $\mathcal{L}_{obj}$.
Because the weight space of the FRMap layer is a compact Stiefel manifold $St(d_k, d_{k-1})$, we refer to the method studied in [46] to realize parameter optimization by generalizing the traditional stochastic gradient descent (SGD) setting to the context of Stiefel manifolds. The updating rule for $W_k$ is given below.
According to Eq (3.2), we have $Y_k = f^{(k)}(W_k, Y_{k-1}) = W_k Y_{k-1}$. Then, the following variation of $Y_k$ can be derived:
$$ dY_k = dW_k\, Y_{k-1} + W_k\, dY_{k-1}. \tag{3.15} $$
Based on the invariance of the first-order differential, the following chain rule can be deduced:
$$ \frac{\partial L^{(k+1)}}{\partial Y_k} : dY_k = \frac{\partial L^{(k)}}{\partial W_k} : dW_k + \frac{\partial L^{(k)}}{\partial Y_{k-1}} : dY_{k-1}. \tag{3.16} $$
By substituting Eq (3.15) into the left-hand side of Eq (3.16) and exploiting the properties of the matrix inner product ":", the following two formulas can be derived:
$$ \frac{\partial L^{(k+1)}}{\partial Y_k} : dW_k\, Y_{k-1} = \frac{\partial L^{(k+1)}}{\partial Y_k} Y_{k-1}^T : dW_k, \tag{3.17} $$
$$ \frac{\partial L^{(k+1)}}{\partial Y_k} : W_k\, dY_{k-1} = W_k^T \frac{\partial L^{(k+1)}}{\partial Y_k} : dY_{k-1}. \tag{3.18} $$
Combining Eqs (3.16)–(3.18), the partial derivatives of $L^{(k)}$ with respect to $W_k$ and $Y_{k-1}$ can be computed by:
$$ \frac{\partial L^{(k)}}{\partial W_k} = \frac{\partial L^{(k+1)}}{\partial Y_k} Y_{k-1}^T, \qquad \frac{\partial L^{(k)}}{\partial Y_{k-1}} = W_k^T \frac{\partial L^{(k+1)}}{\partial Y_k}. \tag{3.19} $$
At this time, the updating criterion of $W_k$ on the Stiefel manifold is given below:
$$ W_k^{t+1} = R_{W_k^t}\!\left(-\eta\, \Pi_{W_k^t}\!\left(\nabla L^{(k)}_{W_k^t}\right)\right), \tag{3.20} $$
where R signifies the retraction operation used to map the optimized parameter back onto the Stiefel manifold, η is the learning rate, and Π represents the projection operator used to convert the Euclidean gradient into the corresponding Riemannian counterpart:
$$ \tilde{\nabla} L^{(k)}_{W_k^t} = \Pi_{W_k^t}\!\left(\nabla L^{(k)}_{W_k^t}\right) = \nabla L^{(k)}_{W_k^t} - W_k^t \left(\nabla L^{(k)}_{W_k^t}\right)^{T} W_k^t, \tag{3.21} $$
where $\nabla L^{(k)}_{W_k^t}$ is the Euclidean gradient, computed by the first term of Eq (3.19), and $\tilde{\nabla} L^{(k)}_{W_k^t}$ denotes the Riemannian gradient. After that, the weight parameter can be updated by $W_k^{t+1} = R\left(W_k^t - \eta\, \tilde{\nabla} L^{(k)}_{W_k^t}\right)$. For detailed information about the Riemannian geometry of a Stiefel manifold and its associated retraction operation, please refer to [48].
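The update described by Eqs (3.20) and (3.21) can be sketched as follows, using a QR-based retraction as one common choice (the retraction detailed in [48] may differ); the matrix sizes, dummy gradient, and learning rate are illustrative:

```python
import numpy as np

def stiefel_sgd_step(W, G, lr=0.01):
    """One FRMap weight update, Eqs (3.20)-(3.21): project the Euclidean
    gradient G onto the tangent space at W, take a descent step, and
    retract back onto the Stiefel manifold via QR decomposition.
    Here W (dk x dk-1) has orthonormal rows, i.e., W @ W.T = I."""
    G_riem = G - W @ G.T @ W                  # tangent-space projection (3.21)
    Q, _ = np.linalg.qr((W - lr * G_riem).T)  # QR retraction on the transpose
    return Q.T                                # rows orthonormal again

rng = np.random.default_rng(4)
W = np.linalg.qr(rng.standard_normal((64, 32)))[0].T  # 32 x 64, on St(32, 64)
G = rng.standard_normal((32, 64))                     # a dummy Euclidean gradient
W_new = stiefel_sgd_step(W, G, lr=0.1)
assert np.allclose(W_new @ W_new.T, np.eye(32), atol=1e-8)  # stays on the manifold
```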
In this section, we conduct experiments on five benchmarking datasets to evaluate the performance of the proposed DGMVCL. All the experiments are run on a Linux workstation with a GeForce RTX 4070 GPU (12 GB of memory).
The proposed DGMVCL is evaluated on five publicly available multiview datasets. The MNIST-USPS [56] dataset contains 5000 samples with two different styles of digital images. The Multi-COIL-10 dataset [57] is comprised of 720 grayscale images collected from 10 categories with the image size of 32 × 32, where different views represent different object poses. The Fashion dataset [58] is made up of 10,000 images belonging to 10 categories, where three different styles of an object are regarded as its three different views, i.e., front view, side view, and back view, respectively. The ORL dataset [59] consists of 400 face images collected from 40 volunteers, with each volunteer providing 10 images under different expressions and lighting conditions. The Scene-15 [60] dataset contains 4485 scene images belonging to 15 categories.
We evaluate the clustering performance using three metrics: clustering accuracy (ACC), normalized mutual information (NMI), and purity. Here, ACC is the ratio of correctly classified samples to the total number of samples, NMI measures the consistency between the clustering result and the true class distribution, and purity assigns each cluster to its most frequent true class and reports the fraction of samples covered by these majority classes, indicating whether each cluster contains samples belonging to the same class.
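For concreteness, purity can be computed as follows (a standard formulation: each predicted cluster is credited with its majority true class; the labels below are illustrative):

```python
import numpy as np

def purity(y_true, y_pred):
    """Purity: for each predicted cluster, count its most frequent true
    class; return the total majority count divided by the sample count."""
    correct = 0
    for c in np.unique(y_pred):
        members = y_true[y_pred == c]
        correct += np.bincount(members).max()
    return correct / len(y_true)

y_true = np.array([0, 0, 0, 1, 1, 1])
y_pred = np.array([0, 0, 1, 1, 1, 1])   # one sample of class 0 is mis-clustered
assert np.isclose(purity(y_true, y_pred), 5 / 6)
```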
To validate the effectiveness of the proposed method, we compare DGMVCL with several state-of-the-art methods, including augmented sparse representation (ASR) [41], deep safe incomplete multiview clustering (DSIMVC) [63], dual contrastive prediction (DCP) [64], multi-level feature learning (MFL) [65], and cross-view contrastive learning (CVCL) [55]. For DCP, we report the best clustering results obtained for each pair of individual views in each dataset. For better comparison, we include two additional baselines. Specifically, we first apply spectral clustering [61] to each individual view and report the best clustering results obtained across multiple views, termed best single-view clustering (BSVC). Then, we utilize an adaptive neighborhood graph learning method [62] to generate a similarity matrix for each individual view and aggregate all the similarity matrices into a new one for spectral clustering, denoted as SCAgg.
The clustering results of various algorithms on the five datasets are reported in Tables 1 and 2, respectively. The best results are highlighted in bold. It can be seen that the clustering performance of the contrastive learning-based methods, including DGMVCL, CVCL, MFL, and DCP, is superior to that of the other competitors (e.g., BSVC and SCAgg) on the large-scale MNIST-USPS, Scene-15, and Fashion datasets. This is mainly attributed to the fact that the contrastive self-supervised learning mechanism can learn more discriminative feature representations by simultaneously maximizing the similarity between positive samples and minimizing the similarity between negative samples. Furthermore, our proposed DGMVCL is the best performer on all the datasets, demonstrating its effectiveness. For example, on the Scene-15 dataset, DGMVCL achieves approximately 16.7%, 34.22%, and 17.68% improvements in ACC, NMI, and purity over the second-best CVCL method. The fundamental reason is that the proposed subspace-based geometric learning method characterizes and analyzes the structural information of the input subspace features more faithfully. In addition, the introduced dual contrastive losses make it possible to learn a more discriminative network embedding.
Table 1. Clustering results (%) of different methods on the MNIST-USPS, Fashion, and Multi-COIL-10 datasets.

| Datasets | Methods | ACC | NMI | Purity |
|---|---|---|---|---|
| MNIST-USPS | BSVC [61] | 67.98 | 74.43 | 72.34 |
| | SCAgg [62] | 89.00 | 77.12 | 89.18 |
| | ASR [41] | 97.90 | 94.72 | 97.90 |
| | DSIMVC [63] | 99.34 | 98.13 | 99.34 |
| | DCP [64] | 99.02 | 97.29 | 99.02 |
| | MFL [65] | 99.66 | 99.01 | 99.66 |
| | CVCL [55] | 99.58 | 98.79 | 99.58 |
| | DGMVCL | 99.82 | 99.52 | 99.82 |
| Fashion | BSVC [61] | 60.32 | 64.91 | 63.84 |
| | SCAgg [62] | 98.00 | 94.80 | 97.56 |
| | ASR [41] | 96.52 | 93.04 | 96.52 |
| | DSIMVC [63] | 88.21 | 83.99 | 88.21 |
| | DCP [64] | 89.37 | 88.61 | 89.37 |
| | MFL [65] | 99.20 | 98.00 | 99.20 |
| | CVCL [55] | 99.31 | 98.21 | 99.31 |
| | DGMVCL | 99.52 | 98.73 | 99.52 |
| Multi-COIL-10 | BSVC [61] | 73.32 | 76.91 | 74.11 |
| | SCAgg [62] | 68.34 | 70.18 | 69.26 |
| | ASR [41] | 84.23 | 65.47 | 84.23 |
| | DSIMVC [63] | 99.38 | 98.85 | 99.38 |
| | DCP [64] | 70.14 | 81.90 | 70.14 |
| | MFL [65] | 99.20 | 98.00 | 99.20 |
| | CVCL [55] | 99.43 | 99.04 | 99.43 |
| | DGMVCL | 100.00 | 100.00 | 100.00 |
Table 2. Clustering results (%) of different methods on the ORL and Scene-15 datasets.

| Datasets | Methods | ACC | NMI | Purity |
|---|---|---|---|---|
| ORL | BSVC [61] | 61.31 | 64.91 | 61.31 |
| | SCAgg [62] | 61.65 | 77.41 | 66.22 |
| | ASR [41] | 79.49 | 78.04 | 81.49 |
| | DSIMVC [63] | 25.37 | 52.91 | 25.37 |
| | DCP [64] | 27.70 | 49.93 | 27.70 |
| | MFL [65] | 80.03 | 89.34 | 80.03 |
| | CVCL [55] | 85.50 | 93.17 | 86.00 |
| | DGMVCL | 92.25 | 98.34 | 92.25 |
| Scene-15 | BSVC [61] | 38.05 | 38.85 | 42.08 |
| | SCAgg [62] | 38.13 | 39.31 | 44.76 |
| | ASR [41] | 42.70 | 40.70 | 45.60 |
| | DSIMVC [63] | 28.27 | 29.04 | 29.79 |
| | DCP [64] | 42.32 | 40.38 | 43.85 |
| | MFL [65] | 42.52 | 40.34 | 44.53 |
| | CVCL [55] | 44.59 | 42.17 | 47.36 |
| | DGMVCL | 61.29 | 76.39 | 65.04 |
Objective Function: To verify the impact of each loss function in Eq (3.13) on the performance of the proposed method, we conduct ablation experiments on the MNIST-USPS, ORL, and Fashion datasets. Table 3 shows the experimental results of our model under different combinations of loss functions. Under type D, where all the loss functions are used simultaneously, our method achieves the best performance. When La is removed (type A), the clustering performance of DGMVCL decreases on all three datasets. For the MVC task, similar samples usually exhibit a large intra-data diversity, while dissimilar samples usually demonstrate a large inter-data correlation; a single Lc therefore cannot effectively learn invariant representations from such data variations. La is introduced to prevent all instances from being assigned to a particular cluster, and the experimental results confirm its effectiveness. From Table 3, we can also conclude that the introduced Grassmannian contrastive loss Lg is beneficial for improving the discriminability of the learned geometric features. Another interesting observation from Table 3 is that the clustering performance of type C (Lc removed) is visibly lower than that of type D in terms of ACC, NMI, and purity. The fundamental reason is that Lc is the most critical loss function: it is not only used to train the proposed network, but also participates in the testing phase. This indicates that Lc is crucial for distinguishing positive and negative samples, i.e., maintaining cross-view consistency, and plays a pivotal role in the overall clustering task. Since Lg and La act as two regularization terms that provide additional discriminative information for Lc and are mainly confined to the training phase, neither La nor Lg can be used alone (types E and F). In summary, these findings demonstrate that each term in Eq (3.13) is useful.
Table 3. Clustering results (%) under different combinations of loss functions (cells report ACC / NMI / Purity).

| Types | Lg | Lc | La | MNIST-USPS | ORL | Fashion |
|---|---|---|---|---|---|---|
| A | ✔ | ✔ | | 19.96 / 42.62 / 19.96 | 69.00 / 92.67 / 70.00 | 84.78 / 93.13 / 89.38 |
| B | | ✔ | ✔ | 99.76 / 99.37 / 99.76 | 85.00 / 86.58 / 85.00 | 99.48 / 98.60 / 99.48 |
| C | ✔ | | ✔ | 10.31 / 12.54 / 10.31 | 12.50 / 26.69 / 12.50 | 25.75 / 38.72 / 25.75 |
| D | ✔ | ✔ | ✔ | 99.82 / 99.52 / 99.82 | 92.25 / 98.34 / 92.25 | 99.52 / 98.73 / 99.52 |
| E | ✔ | | | N/A | N/A | N/A |
| F | | | ✔ | N/A | N/A | N/A |
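The role of La described above — preventing all instances from being assigned to a single cluster — is often realized as an entropy-style regularizer on the batch-averaged cluster-assignment distribution. The following is a hedged sketch of one common formulation; the exact form of La in Eq (3.13) may differ:

```python
# A common collapse-prevention regularizer: penalize the negative entropy of
# the mean soft cluster assignment, so that cluster usage stays balanced.
# This is an illustrative stand-in, not necessarily the paper's exact La.
import numpy as np

def assignment_entropy_reg(assignments, eps=1e-8):
    """assignments: (batch, clusters) soft cluster probabilities.
    Returns the negative entropy of the averaged assignment; lower is more
    balanced, so minimizing it discourages cluster collapse."""
    p_mean = assignments.mean(axis=0)             # average usage per cluster
    return float((p_mean * np.log(p_mean + eps)).sum())

balanced = np.full((8, 4), 0.25)                  # every cluster used equally
collapsed = np.zeros((8, 4)); collapsed[:, 0] = 1.0  # all mass on one cluster
print(assignment_entropy_reg(balanced))           # ~= -log(4), the minimum
print(assignment_entropy_reg(collapsed))          # ~= 0, the maximum
```

A balanced assignment attains the minimum value −log K for K clusters, which is exactly the behavior the ablation attributes to La.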
The Components of GMLM: To verify the significance of each operation layer defined in GMLM, we conduct ablation experiments on the MNIST-USPS, Fashion, and Multi-COIL-10 datasets. The clustering results of different subarchitectures are listed in Table 4. It is evident that group A (serving as the reference group, with only the loss function Lg removed) is superior to group D in terms of ACC, NMI, and purity, where group D is obtained by removing the ProjMap layer from group A. This demonstrates the necessity of Riemannian computation in preserving the geometric structure of the original data manifold. From Table 4, we can also find that the performance of group C (obtained by removing the FRMap, ReOrth, and ProjMap layers from group A) is significantly inferior to that of group D on the Fashion and Multi-COIL-10 datasets, suggesting that deep subspace learning improves the effectiveness of the features. Another interesting observation from Table 4 is that after removing the ReOrth layer from group A (yielding group B), the clustering results decrease to a certain extent on all three datasets. The basic reason is that the Grassmannian properties of the input feature matrices, e.g., orthogonality, can no longer be maintained, resulting in imprecise Riemannian distances computed in Lg. All in all, these experimental results confirm the usefulness of each part of GMLM.
Table 4. Clustering results (%) of different GMLM subarchitectures (cells report ACC / NMI / Purity).

| Groups | MNIST-USPS | Fashion | Multi-COIL-10 |
|---|---|---|---|
| A | 99.76 / 99.37 / 99.76 | 99.48 / 98.60 / 99.48 | 100.00 / 100.00 / 100.00 |
| B | 99.70 / 99.11 / 99.70 | 99.02 / 97.60 / 99.02 | 99.14 / 98.71 / 99.14 |
| C | 98.00 / 97.00 / 97.00 | 81.25 / 86.24 / 81.40 | 31.70 / 31.00 / 32.00 |
| D | 89.92 / 94.51 / 89.92 | 98.49 / 96.52 / 98.49 | 99.00 / 97.31 / 99.01 |
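To make the three GMLM operation layers concrete, the following NumPy sketch follows the standard Grassmannian-network formulation of FRMap (full-rank mapping), ReOrth (QR-based re-orthonormalization), and ProjMap (projection-matrix embedding). The weights and shapes here are illustrative (the 49×25 size matches the paper's FRMap setting), not the trained model's:

```python
# Forward pass of the three GMLM layers on a toy orthonormal basis matrix.
import numpy as np

rng = np.random.default_rng(0)

def frmap(X, W):
    """FRMap: full-rank linear mapping to a lower-dimensional space."""
    return W.T @ X                       # (25, 49) @ (49, q) -> (25, q)

def reorth(X):
    """ReOrth: re-orthonormalize via QR so the output is again an
    orthonormal basis, i.e., a valid Grassmannian point."""
    Q, _ = np.linalg.qr(X)
    return Q

def projmap(X):
    """ProjMap: map the basis X to its projector X X^T, embedding the
    Grassmannian into the space of symmetric matrices."""
    return X @ X.T

X = np.linalg.qr(rng.standard_normal((49, 10)))[0]  # input orthonormal basis
W = np.linalg.qr(rng.standard_normal((49, 25)))[0]  # Stiefel-constrained weight
Y = reorth(frmap(X, W))
P = projmap(Y)
print(np.allclose(Y.T @ Y, np.eye(10)))  # True: orthonormality restored
print(np.allclose(P, P.T))               # True: projector is symmetric
```

Note how ReOrth restores exactly the property (orthonormal columns) whose loss the group-B ablation above identifies as the cause of imprecise Riemannian distances.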
Network Depth: Inspired by the experimental results presented in Table 4, we carry out ablation experiments on the MNIST-USPS, Fashion, and Multi-COIL-10 datasets to study the impact of nx (the number of blocks in GMLM) on model performance. According to Table 5, nx=1 yields the best clustering results, while increasing the network depth leads to a slight decrease in clustering performance on all three datasets. It should be emphasized that the sizes of the transformation matrices are set to (49×25, 25×20) and (49×25, 25×20, 20×15) under the settings nx=2 and nx=3, respectively. Therefore, the loss of pivotal structural information in the process of multi-stage deep subspace learning is considered to be the fundamental reason for the degradation of model ability. In summary, these experimental findings not only reaffirm the effectiveness of the designed GMLM for subspace learning, but also reveal the degradation issue of Grassmannian networks. In the future, we plan to generalize the Euclidean residual learning mechanism to the context of Grassmannian manifolds to mitigate this problem.
Table 5. Clustering results (%) under different network depths nx (cells report ACC / NMI / Purity).

| Depth | MNIST-USPS | Fashion | Multi-COIL-10 |
|---|---|---|---|
| nx=1 | 99.82 / 99.52 / 99.82 | 99.58 / 98.83 / 99.58 | 100.00 / 100.00 / 100.00 |
| nx=2 | 99.68 / 99.08 / 99.68 | 99.39 / 98.37 / 99.39 | 98.14 / 96.71 / 98.14 |
| nx=3 | 99.58 / 98.73 / 99.58 | 99.53 / 98.71 / 99.53 | 99.14 / 98.13 / 99.14 |
To measure the impact of the trade-off parameters (i.e., α, β, and γ) in Eq (3.13) on the clustering performance of the proposed method, we conduct experiments on the ORL and MNIST-USPS datasets. The purpose of introducing these three trade-off parameters into Eq (3.13) is to balance the magnitudes of the Grassmannian contrastive learning term Lg, the Euclidean contrastive learning term Lc, and the regularization term La, so as to learn an effective network embedding for clustering. From Figure 2, we make some interesting observations. First, it is not recommended to assign β a relatively small value. The basic reason is that Lc is used not only to learn discriminative features, but also for the final label prediction. Moreover, when the value of β is fixed, the clustering accuracy of the proposed method first increases and then decreases as γ changes. In addition, the value of γ should not exceed β; otherwise, the performance of our method is significantly affected. The fundamental reason is that a larger γ or a smaller β causes the gradient information related to Lc to be dramatically weakened during network optimization, which is not conducive to learning an effective clustering hypersphere. Besides, the model tends to be less sensitive to β and γ when they vary within the ranges of 0.05∼0.005 and 0.01∼0.001, respectively. These experimental comparisons support our assertion that the regularization term helps to fine-tune the clustering performance.
In this part, we further investigate the impact of α on the accuracy of the proposed method. We take the MNIST-USPS dataset as an example, varying the value of α while fixing β and γ at their optimal values from Figure 2.
It can be seen from Figure 3 that the accuracy of the proposed method generally first increases and then decreases, reaching a peak when α is set to 0.02. Besides, α should not be assigned a relatively large value; otherwise, the gradient information associated with Lc is weakened during network optimization, which is not conducive to learning a discriminative network embedding. These experimental results not only further demonstrate the criticality of Lc, but also underscore the effectiveness of α in fine-tuning the discriminability of the learned representations.
All in all, these experimental observations confirm the complementarity of the three terms in guiding the proposed model to learn more informative features for better clustering. Our guideline for choosing their values is to ensure that the orders of magnitude of the Grassmannian contrastive learning term Lg and the regularization term La do not exceed that of the Euclidean contrastive learning term Lc. With this criterion, the model can better integrate the gradient information of La and Lg regarding the data distribution with Lc to learn a more reasonable hypersphere for different views. On the MNIST-USPS dataset, the values of α, β, and γ are accordingly set to 0.02, 0.05, and 0.005, respectively. In a similar way, we determine their appropriate values as (0.005, 0.005, 0.005), (0.01, 0.01, 0.01), (0.01, 0.01, 0.005), and (0.1, 0.1, 0.1) on the Fashion, Multi-COIL-10, ORL, and Scene-15 datasets, respectively. For a new dataset, this principle can help readers quickly determine the initial value ranges of α, β, and γ.
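Under our reading of the discussion above — α scaling Lg, β scaling Lc, and γ scaling La, which is an assumption since Eq (3.13) is not reproduced here — the weighted objective can be sketched as:

```python
# Hedged sketch of the weighted objective of Eq (3.13). The mapping of
# alpha/beta/gamma to the three terms is our assumption based on the text
# (beta scales the pivotal Euclidean contrastive term Lc).
def total_loss(L_g, L_c, L_a, alpha=0.02, beta=0.05, gamma=0.005):
    # Defaults are the MNIST-USPS setting reported in the paper.
    return alpha * L_g + beta * L_c + gamma * L_a

# With unit-magnitude terms, Lc contributes the most, consistent with the
# guideline that Lg and La must not dominate Lc.
print(total_loss(1.0, 1.0, 1.0))
```

The default weights make the Lc contribution (0.05) larger than Lg (0.02) and La (0.005), mirroring the stated order-of-magnitude criterion.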
To intuitively test the effectiveness of the proposed method, we select the Fashion and Multi-COIL-10 datasets to perform 2-D visualization experiments. The results generated by the t-SNE technique [66] are presented in Figure 4, where different colors denote the labels of different clusters. It can be seen that compared with the case where GMLM is not included, the final clustering results, measured by the compactness between similar samples and the separation between dissimilar samples, are improved by using GMLM on both benchmark datasets. This further demonstrates that the designed GMLM helps improve the discriminability of cluster assignments.
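The visualization protocol of Figure 4 can be reproduced with scikit-learn's t-SNE; the features below are synthetic stand-ins for the learned embeddings:

```python
# Sketch of the Figure-4 protocol: project embeddings to 2-D with t-SNE and
# color the points by cluster label. Features here are toy data, not the
# network's actual embeddings.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(i * 5, 0.5, (30, 16)) for i in range(3)])
labels = np.repeat(np.arange(3), 30)

emb = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(features)
print(emb.shape)   # (90, 2)
# e.g., plt.scatter(emb[:, 0], emb[:, 1], c=labels) renders the 2-D map
```

Compact, well-separated color groups in such a plot correspond to the "compactness versus separation" criterion used in the paragraph above.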
Besides, we further investigate the impact of GMLM on the computational burden of the proposed method. The experimental results are summarized in Table 6, where "w/o" means "without containing". Note that the parameters of the new architecture are kept as the originals.
Table 6. Training time of the proposed model with and without GMLM.

| Datasets | MNIST-USPS | Fashion | Multi-COIL-10 | ORL | Scene-15 |
|---|---|---|---|---|---|
| DGMVCL-"w/o" GMLM | 11.9 | 34.4 | 2.6 | 1.5 | 14.5 |
| DGMVCL | 12.3 | 36.1 | 2.7 | 1.6 | 16.9 |
From Table 6, we can see that the integration of GMLM slightly increases the training time of the proposed model across all five datasets. As discussed in Section 3, the main computational burden of GMLM comes from the QR decomposition used in the ReOrth layer. Nevertheless, as shown in Figure 4, GMLM enhances the clustering performance of our method, demonstrating its effectiveness.
Recent advancements in robust learning, such as projected cross-view learning for unbalanced incomplete multiview clustering [67], have highlighted the importance of handling noisy data and outlier instances. In this section, we evaluate the robustness of the proposed method when subjected to various levels of noise and occlusion, choosing CVCL [55] as a representative competitor. Figure 5 illustrates the accuracy (%) of different methods as a function of the variance of Gaussian noise added to the Scene-15 dataset. It can be seen that as the variance of the Gaussian noise increases, both the proposed method and CVCL experience a decline in accuracy. However, our method consistently outperforms CVCL across all the noise levels. This shows that the suggested model is robust to the Gaussian noise, maintaining good clustering ability even under severe noise levels.
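The noise-robustness protocol above can be sketched as follows: zero-mean Gaussian noise of increasing variance is added to every view before evaluation. The `evaluate` call is hypothetical, standing in for a full train/test cycle of DGMVCL or CVCL:

```python
# Sketch of the Gaussian-noise robustness protocol used for Figure 5.
import numpy as np

def add_gaussian_noise(views, variance, seed=0):
    """Return copies of each view corrupted by N(0, variance) noise."""
    rng = np.random.default_rng(seed)
    return [X + rng.normal(0.0, np.sqrt(variance), X.shape) for X in views]

views = [np.ones((4, 3)), np.zeros((4, 3))]       # toy two-view data
for var in [0.0, 0.01, 0.05, 0.1]:
    noisy = add_gaussian_noise(views, var)
    # accuracy = evaluate(model, noisy)           # hypothetical evaluation call
    print(var, noisy[0].shape)
```

Sweeping the variance and recording accuracy at each level produces the kind of curve reported in Figure 5.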
We further investigate the robustness of the proposed method when handling data with random pixel masks, selecting the Fashion dataset as an example. Some sample images of the Fashion dataset with 15%, 30%, 45%, and 60% of the pixels masked are shown in Figure 6. According to Table 7, the performance of the proposed method is superior to that of CVCL under all corruption levels. These experimental findings again demonstrate that the suggested modifications over the baseline model are effective in learning more robust and discriminative view-invariant representations. However, as the occlusion rate and the level of Gaussian noise increase, the performance of the proposed DGMVCL shows a noticeable decline. As discussed in Section 3.1.2, each element in the orthogonal basis matrix on the Grassmannian manifold represents the correlation between the original feature dimensions. Although the ability to encode long-range dependencies between different local feature regions enables our model to capture more useful information, it is also susceptible to the influence of locally prominent noise. In such a case, the contrastive loss may fail to effectively distinguish between positive and negative samples. This is mainly attributed to the fact that the contrastive learning term treats the decision space as an explicit function of the data distribution. In the future, incorporating techniques like those studied in [67], such as multiview projections or advanced data augmentation, could improve the model's ability to handle these challenges.
Table 7. Clustering accuracy (%) on the Fashion dataset under different pixel-mask ratios.

| Methods | 15% | 30% | 45% | 60% |
|---|---|---|---|---|
| CVCL | 98.49 | 97.23 | 95.06 | 89.48 |
| DGMVCL | 99.16 | 98.29 | 97.28 | 92.70 |
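The pixel-mask corruption of Table 7 can be sketched as below: a given fraction of randomly chosen pixels in each image is set to zero. The masking value and sampling scheme are our assumptions about the corruption protocol:

```python
# Sketch of random pixel masking at a given occlusion ratio.
import numpy as np

def mask_pixels(images, ratio, seed=0):
    """images: (n, h, w) array; zero out `ratio` of each image's pixels."""
    rng = np.random.default_rng(seed)
    out = images.copy()
    n, h, w = images.shape
    k = int(round(ratio * h * w))                 # pixels to mask per image
    for img in out:
        idx = rng.choice(h * w, size=k, replace=False)
        img.reshape(-1)[idx] = 0.0                # in-place on the copy
    return out

imgs = np.ones((2, 28, 28))
masked = mask_pixels(imgs, 0.30)
print((masked == 0).mean())   # close to the requested 0.30 ratio
```

Running the clustering pipeline on such masked inputs at 15%, 30%, 45%, and 60% reproduces the setting of Table 7.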
Innovation Analysis: The novelty of our proposed DGMVCL lies not in the mere combination of several existing components but in the thoughtful and innovative way in which these components are integrated and optimized, leading to a lightweight and discriminative geometric learning framework for multiview subspace clustering. Specifically, the suggested DGMVCL introduces several pivotal innovations: i) The Grassmannian neural network, designed for geometric subspace learning, should not be viewed as a simple attempt at a new vision application, but rather as an intrinsic method for encoding the underlying submanifold structure of channel features. This is crucial for enabling the model to learn more effective subspace features; ii) The proposed method introduces contrastive learning in both Grassmannian manifolds and Euclidean space. Compared to the baseline model (CVCL [55]), which utilizes only the Euclidean-based contrastive loss for network training, the additionally designed Grassmannian contrastive learning module enables our DGMVCL to characterize and learn the geometric distribution of the subspace data points more faithfully. Such a dual-space contrastive learning mechanism therefore improves the representational capacity of our model and is capable of extracting view-invariant representations; iii) Extensive evaluations across multiple benchmark datasets not only demonstrate the superiority of our proposed DGMVCL over the state-of-the-art methods, but also underscore the significance of each individual component and their complementarity.
The Effectiveness of Grassmannian Representation: The Grassmannian manifold is a compact representation of the covariance matrix and encodes rich subspace information, which has shown great success in many applications [11,27,28]. Inspired by this, the MMM is designed to capture and parameterize the q-dimensional real vector subspace formed by the features extracted from the FEM. However, the Grassmannian manifold is not a Euclidean space but a Riemannian manifold. We therefore adopt a Grassmannian network to respect the latent Riemannian geometry. Specifically, each network layer preserves the Riemannian property of the input feature matrices by normalizing each one into an orthonormal basis matrix. Besides, each manifold-valued weight parameter of the FRMap layer is optimized on a compact Stiefel manifold, not only maintaining its orthogonality but also ensuring better network training. The projector perspective studied in [32] shows that the Grassmannian manifold is an embedded submanifold of the Euclidean space of symmetric matrices, allowing the use of an extrinsic distance, i.e., the projection metric (PM), for measuring the similarity between subspaces on the Grassmannian manifold. In addition, the PM approximates the true geodesic distance up to a scale factor of √2 and is more efficient to compute than the geodesic distance. By leveraging the PM-based contrastive loss, the consistency between cluster assignments across views is strengthened from the Grassmannian perspective while their underlying manifold structure is preserved.
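The projection-metric relationship described above can be verified numerically: the PM equals the Frobenius distance between projectors divided by √2, coincides with sqrt(Σᵢ sin²θᵢ) over the principal angles θᵢ, and lower-bounds the geodesic (arc-length) distance:

```python
# Numerical check of the projection metric (PM) between two subspaces.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
X = np.linalg.qr(rng.standard_normal((10, 3)))[0]   # orthonormal basis 1
Y = np.linalg.qr(rng.standard_normal((10, 3)))[0]   # orthonormal basis 2

theta = subspace_angles(X, Y)                       # principal angles
pm = np.linalg.norm(X @ X.T - Y @ Y.T, "fro") / np.sqrt(2)
geodesic = np.linalg.norm(theta)                    # arc-length distance

print(np.isclose(pm, np.sqrt(np.sum(np.sin(theta) ** 2))))  # True
print(pm <= geodesic + 1e-12)                               # True, sin(t) <= t
```

This is why the PM can stand in for the geodesic distance inside the Grassmannian contrastive loss at a fraction of the cost: no eigendecomposition of a matrix logarithm is required.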
To intuitively demonstrate the effectiveness of MMM, we conduct a new ablation study evaluating a variant of the proposed method that does not contain it. Note that the learning rate, batch size, and three trade-off parameters of the new model remain the same as the originals, while the size of the input feature matrix of GMLM becomes 49×64. The experimental results on the five datasets are summarized in Table 8, where "w/o" means "without containing". From Table 8, we can see that removing the designed MMM from DGMVCL leads to a significant decrease in accuracy across all five datasets. This not only confirms the significance of MMM in capturing and parameterizing the underlying submanifold structure, but also reveals the effectiveness of our proposed model in preserving and leveraging the Riemannian geometry of the data for improved clustering performance.
Table 8. Clustering accuracy (%) of the proposed model with and without MMM.

| Datasets | MNIST-USPS | Fashion | Multi-COIL-10 | ORL | Scene-15 |
|---|---|---|---|---|---|
| DGMVCL-"w/o" MMM | 10.00 | 10.00 | 10.29 | 2.60 | 9.14 |
| DGMVCL | 99.82 | 99.52 | 100.00 | 92.25 | 61.29 |
Selection of Network Parameters: The selection of network parameters is based on experiments and analysis to ensure optimal outcomes. Specifically, the trade-off parameters α, β, and γ play a critical role in balancing the contributions of the different loss functions in the overall objective function. The magnitude of the pivotal loss functions should be slightly higher. Based on this guideline, we can roughly determine their initial values, signified as the anchor point. A candidate set can then be formed around the selected anchor point, and experiments are conducted to determine the optimal values. Additionally, the learning rate and batch size are crucial for the convergence and effectiveness of the proposed model. A too-high learning rate might cause the model to diverge, while a too-low one would slow down the training process [68]. Since CVCL [55] is our base model, we treat its learning rate as the initial value and adjust around it to find a suitable one. The batch size is configured to balance memory usage and training efficiency. For the Scene-15 dataset, a batch size of 69 was chosen because this dataset contains 4485 samples and 69 divides this number evenly (4485 = 69 × 65), ensuring efficient utilization of the data in each batch. As shown in Figure 7, this batch size also yields good performance. In practice, however, the batch size is also related to the computing device. On the MNIST-USPS, Multi-COIL-10, Fashion, ORL, and Scene-15 datasets, the learning rate and batch size are configured as (0.0002, 50), (0.0001, 50), (0.0005, 100), (0.0001, 50), and (0.001, 69), respectively. Furthermore, the experimental results presented in Figure 8 suggest that it is appropriate to configure the size of the transformation matrix in the FRMap layer as 49×25. When dk is assigned a small value, some useful geometric information is lost during the feature transformation mapping; in contrast, a relatively large dk incorporates more redundant information into the generated subspace features. Both cases have a negative impact on model performance.
In short, the choice of network parameters is supported by both theoretical considerations and empirical evidence, and contributes to the overall performance of the proposed model.
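The anchor-point procedure described above can be sketched as a small grid search; the `score` function here is a hypothetical stand-in for training the model and reading off its clustering accuracy:

```python
# Sketch of the anchor-point parameter search: form a candidate set around a
# rough anchor value for each trade-off parameter, then keep the best combo.
import itertools

def candidate_set(anchor, factors=(0.1, 0.5, 1.0, 2.0, 10.0)):
    """Candidates spread around an anchor value (includes the anchor)."""
    return [anchor * f for f in factors]

def grid_search(score, anchors):
    grids = [candidate_set(a) for a in anchors]   # one grid per parameter
    return max(itertools.product(*grids), key=score)

# Toy score that peaks at the MNIST-USPS setting (0.02, 0.05, 0.005); in
# practice this would be a full training run followed by ACC evaluation.
score = lambda p: -abs(p[0] - 0.02) - abs(p[1] - 0.05) - abs(p[2] - 0.005)
best = grid_search(score, anchors=(0.02, 0.05, 0.005))
print(best)   # (0.02, 0.05, 0.005)
```

The multiplicative candidate factors reflect that the trade-off parameters are tuned by order of magnitude rather than by fine additive steps.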
Other Datasets: In this part, the Caltech-101 dataset [39] is used to further evaluate the effectiveness of the proposed model. This dataset is a challenging object recognition benchmark consisting of 101 object categories plus one background category, totaling approximately 9146 images.
The experimental results achieved by different comparative models on the Caltech-101 dataset are listed in Table 9. Note that the learning rate, batch size, and the size of the weight matrix in the FRMap layer of the proposed DGMVCL are configured as 0.005, 50, and 49 × 25, respectively. According to Table 9, the clustering accuracy of our proposed DGMVCL is 8.78%, 1.62%, and 3.55% higher than that of DSIMVC, DCP, and CVCL, respectively. Additionally, under the other two validation metrics, i.e., NMI and purity, our method is still the best performer. This demonstrates that the suggested Grassmannian manifold-valued deep contrastive learning mechanism can learn compact and discriminative geometric features for MVC, even in complicated data scenarios.
Table 9. Clustering results (%) on the Caltech-101 dataset.

| Methods | ACC | NMI | Purity |
|---|---|---|---|
| DSIMVC [63] | 20.25 | 31.43 | 23.68 |
| DCP [64] | 27.41 | 39.58 | 37.01 |
| CVCL [55] | 25.48 | 37.67 | 36.63 |
| DGMVCL | 29.03 | 45.81 | 40.02 |
In this paper, a novel framework, termed DGMVCL, is proposed to learn view-invariant representations for multiview clustering (MVC). Considering the submanifold structure of channel features, a Grassmannian neural network is constructed to characterize and learn the subspace data more faithfully and effectively. Besides, the contrastive learning mechanism built upon both the Grassmannian manifold and Euclidean space enables more discriminative cluster assignments. Extensive experiments and ablation studies conducted on five MVC datasets not only demonstrate the superiority of our proposed method over the state-of-the-art methods, but also confirm the usefulness of each designed component.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported in part by the National Natural Science Foundation of China (62306127, 62020106012, 62332008, U1836218), the Natural Science Foundation of Jiangsu Province (BK20231040), the Fundamental Research Funds for the Central Universities (JUSRP124015), the Key Project of Wuxi Municipal Health Commission (Z202318), and the National Key R & D Program of China (2023YFF1105102, 2023YFF1105105).
The authors declare no conflict of interest.
[1] | Abu-Baker N, Naser K (2000). Empirical evidence on corporate social disclosure (CSD) practices in Jordan. Int J Commer Manage 10: 18–34. https://doi.org/10.1108/eb047406 |
[2] | Adamkaite J, Streimikiene D, Rudzioniene K (2023) The impact of social responsibility on corporate financial performance in the energy sector: Evidence from Lithuania. Corp Soc Resp Envir Ma. https://doi.org/10.1002/csr.2340 |
[3] | Aggarwal A, Saxena N (2023) Examining the relationship between corporate social responsibility, corporate reputation and brand equity in Indian banking industry. J Public Aff 23: e2838. https://doi.org/10.1002/pa.2838 |
[4] | Aguilera RV, Rupp DE, Williams CA, et al. (2022) Organizational governance and ethics: An ongoing research agenda. J Bus Ethics 183: 529–546. |
[5] |
Ahamed WSW, Almsafir MK, Al-Smadi AW (2014) Does corporate social responsibility lead to improve in firm financial performance? Evidence from Malaysia. Int J Econ Financ 6: 126–138. https://doi.org/10.5539/ijef.v6n3p126 doi: 10.5539/ijef.v6n3p126
![]() |
[6] |
AlAjmi J, Buallay A, Saudagaran S (2023) Corporate social responsibility disclosure and banks' performance: the role of economic performance and institutional quality. Int J Soc Econ 50: 359–376. https://doi.org/10.1108/IJSE-11-2020-0757 doi: 10.1108/IJSE-11-2020-0757
![]() |
[7] | Alarcón D, Sánchez JA, De Olavide U (2015) Assessing convergent and discriminant validity in the ADHD-R IV rating scale: User-written commands for Average Variance Extracted (AVE), Composite Reliability (CR), and Heterotrait-Monotrait ratio of correlations (HTMT). In Spanish STATA Meeting, 1–39. |
[8] |
Ali HY, Danish RQ, Asrar-ul-Haq M (2020) How corporate social responsibility boosts firm financial performance: The mediating role of corporate image and customer satisfaction. Corp Soc Resp Environ Ma 27: 166–177. https://doi.org/10.1002/csr.1781 doi: 10.1002/csr.1781
![]() |
[9] | Al-Sfan MBB (2023) A Study of the Relationship Between Social Responsibility and Earning Management And Its Reflection On The Financial Performance Of A Sample Of Companies Listed On The Iraq Stock Exchange. World Bull Manage Law18: 8–18. |
[10] | Amran A, Siti-Nabiha A (2009) Corporate social reporting in Malaysia: a case of mimicking the West or succumbing to local pressure. Soc Resp J. https://doi.org/10.1108/17471110910977285 |
[11] | Aranguren Gómez N, Maldonado García S (2022) Building Corporate Reputation Through Corporate Social Responsibility Disclosures. The Case of Colombian Companies. Corp Rep Rev, 1–25. https://doi.org/10.1057/s41299-022-00155-7 |
[12] |
Aupperle KE, Van Pham D (1989) An expanded investigation into the relationship of corporate social responsibility and financial performance. Employ Responsib Rig J 2: 263–274. https://doi.org/10.1007/BF01423356 doi: 10.1007/BF01423356
![]() |
[13] |
Azzalini A, Browne RP, Genton MG, et al. (2016) On nomenclature for, and the relative merits of, two formulations of skew distributions. Stat Prob Lett 110: 201–206. https://doi.org/10.1016/j.spl.2015.12.008 doi: 10.1016/j.spl.2015.12.008
![]() |
[14] | Babalola YA (2012) The impact of corporate social responsibility on firms' profitability in Nigeria. Eur J Econ Financ Adm Sci 45: 39–50. |
[15] |
Bagozzi RP, Yi Y, Phillips LW (1991) Assessing construct validity in organizational research. Adm Sci Q, 421–458. https://doi.org/10.2307/2393203 doi: 10.2307/2393203
![]() |
[16] |
Baldarelli MG, Gigli S (2014) Exploring the drivers of corporate reputation integrated with a corporate responsibility perspective: some reflections in theory and in praxis. J Manage Gov 18: 589–613. https://doi.org/10.1007/s10997-011-9192-3 doi: 10.1007/s10997-011-9192-3
![]() |
[17] |
Balmer JM, Powell SM, Hildebrand D, et al. (2011) Corporate social responsibility: a corporate marketing perspective. Eur J Mark. https://doi.org/10.1108/03090561111151790 doi: 10.1108/03090561111151790
![]() |
[18] |
Barnett ML (2007) Stakeholder influence capacity and the variability of financial returns to corporate social responsibility. Acade Manage Rev 32: 794–816. https://doi.org/10.5465/amr.2007.25275520 doi: 10.5465/amr.2007.25275520
![]() |
[19] |
Barnett ML, Jermier JM, Lafferty BA (2006) Corporate reputation: The definitional landscape. Corp Rep Rev 9: 26–38. https://doi.org/10.1057/palgrave.crr.1550012 doi: 10.1057/palgrave.crr.1550012
![]() |
[20] | Baron RM, Kenny DA (1986) The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. J Pers Soc Psy 51: 1173. https://doi.org/10.1037/0022-3514.51.6.1173 |
[21] | Bashir M (2022) Corporate social responsibility and financial performance–the role of corporate reputation, advertising and competition. PSU Res Rev. https://doi.org/10.1108/PRR-10-2021-0059 |
[22] | Bebbington J, Larrinaga-González C, Moneva-Abadía JM (2008) Legitimating reputation/the reputation of legitimacy theory. Account Audit Account J. https://doi.org/10.1108/09513570810863969 |
[23] | Becerra-Vicario R, Ruiz-Palomo D, León-Gómez A, et al. (2023) The Relationship between Innovation and the Performance of Small and Medium-Sized Businesses in the Industrial Sector: The Mediating Role of CSR. Economies 11: 92. https://doi.org/10.3390/economies11030092 |
[24] | Belal AR (2001) A study of corporate social disclosures in Bangladesh. Manag Audit J. |
[25] |
Black BS, Khanna VS (2007) Can corporate governance reforms increase firm market values? Event study evidence from India. J Empir Legal Stud 4: 749–796. https://doi.org/10.1111/j.1740-1461.2007.00106.x doi: 10.1111/j.1740-1461.2007.00106.x
![]() |
[26] |
Boyle EJ, Higgins MM, Rhee GS (1997) Stock market reaction to ethical initiatives of defense contractors: Theory and evidence. Crit Perspect Accoun 8: 541–561. https://doi.org/10.1006/cpac.1997.0124 doi: 10.1006/cpac.1997.0124
![]() |
[27] | Brinkmann F, Gerstmeier E, Schiereck D (2020) Community engagement and corporate social responsibility (CSR): An overview and recent developments. J Bus Ethics 161: 1–21. |
[28] | Bromley DB (2000) Psychological aspects of corporate identity, image and reputation. Corp Rep Rev 3: 240–252. https://doi.org/10.1057/palgrave.crr.1540117 |
[29] | Cabrera-Luján SL, Sánchez-Lima DJ, Guevara-Flores SA, et al. (2023) Impact of Corporate Social Responsibility, Business Ethics and Corporate Reputation on the Retention of Users of Third-Sector Institutions. Sustainability 15: 1781. https://doi.org/10.3390/su15031781 |
[30] | Carroll AB (1979) A three-dimensional conceptual model of corporate performance. Acad Manage Rev 4: 497–505. https://doi.org/10.2307/257850 |
[31] | Carroll AB, Shabana KM (2021) The business case for corporate social responsibility: A review of concepts, research, and practice. Oxf Res Encyclopedia Bus Manage. |
[32] | Chen P (2023) Corporate social responsibility, financing constraints, and corporate carbon intensity: new evidence from listed Chinese companies. Environ Sci Pollut Res, 1–9. https://doi.org/10.1007/s11356-023-25176-5 |
[33] | Chen Y, Xu Z, Wang X, et al. (2023) How does green credit policy improve corporate social responsibility in China? An analysis based on carbon-intensive listed firms. Corp Soc Resp Environ Manage 30: 889–904. https://doi.org/10.1002/csr.2395 |
[34] | Cheng B, Ioannou I, Serafeim G (2014) Corporate social responsibility and access to finance. Strat Manage J 35: 1–23. https://doi.org/10.1002/smj.2131 |
[35] | Chih C (2023) The impact of perceived corporate social responsibility on participating in philanthropic road-running events: a moderated mediation model. Sport Bus Manage Int J. https://doi.org/10.1108/SBM-05-2022-0038 |
[36] | Chung KH, Yu JE, Choi MG, et al. (2015) The effects of CSR on customer satisfaction and loyalty in China: the moderating role of corporate image. J Econ Bus Manage 3: 542–547. https://doi.org/10.7763/JOEBM.2015.V3.243 |
[37] | Cochran PL, Wood RA (1984) Corporate social responsibility and financial performance. Acad Manage J 27: 42–56. https://doi.org/10.2307/255956 |
[38] | Cox BJ, Fleet C, Stein MB (2004) Self-criticism and social phobia in the US national comorbidity survey. J Affect Disorders 82: 227–234. https://doi.org/10.1016/j.jad.2003.12.012 |
[39] | Das KP, Mukhopadhyay S, Suar D (2023) Enablers of workforce agility, firm performance, and corporate reputation. Asia Pacific Manage Rev 28: 33–44. https://doi.org/10.1016/j.apmrv.2022.01.006 |
[40] | Dentchev NA (2004) Corporate social performance as a business strategy. J Bus Ethics 55: 395–410. https://doi.org/10.1007/s10551-004-1348-5 |
[41] | Dobers P, Halme M (2009) Corporate social responsibility and developing countries. Corp Soc Resp Environ Manage 16: 237–249. https://doi.org/10.1002/csr.212 |
[42] | Donaldson T, Preston LE (1995) The stakeholder theory of the corporation: Concepts, evidence, and implications. Acad Manage Rev 20: 65–91. https://doi.org/10.2307/258887 |
[43] | Dwertmann DJ, Goštautaitė B, Kazlauskaitė R, et al. (2023) Receiving service from a person with a disability: Stereotypes, perceptions of corporate social responsibility, and the opportunity for increased corporate reputation. Acad Manage J 66: 133–163. https://doi.org/10.5465/amj.2020.0084 |
[44] | Eberl M, Schwaiger M (2005) Corporate reputation: disentangling the effects on financial performance. Eur J Mark. https://doi.org/10.1108/03090560510601798 |
[45] | Farooq SU, Ullah S, Kimani D (2015) The Relationship between Corporate Governance and Corporate Social Responsibility (CSR) Disclosure: Evidence from the USA. Abasyn Univ J Soc Sci 8. |
[46] | Fauzi H, Idris K (2009) The relationship of CSR and financial performance: New evidence from Indonesian companies. Issues Soc Environ Account 3. https://doi.org/10.22164/isea.v3i1.38 |
[47] | Fauzi H, Mahoney LS, Abdul Rahman A (2007) Institutional ownership and corporate social performance: Empirical evidence from Indonesian companies. Issues Soc Environ Account 1: 334–347. https://doi.org/10.22164/isea.v1i2.21 |
[48] | Febra L, Costa M, Pereira F (2023) Reputation, return and risk: A new approach. Eur Res Manage Bus Econ 29: 100207. https://doi.org/10.1016/j.iedeen.2022.100207 |
[49] | Fombrun CJ, Rindova V (1996) Who's tops and who decides? The social construction of corporate reputations. New York University, Stern School of Business, Working Paper, 5–13. |
[50] | Fornell C, Larcker DF (1981) Structural equation models with unobservable variables and measurement error: Algebra and statistics. J Mark Res 18: 382–388. https://doi.org/10.2307/3150980 |
[51] | Frerichs IM, Teichert T (2023) Research streams in corporate social responsibility literature: a bibliometric analysis. Manage Rev Q 73: 231–261. https://doi.org/10.1007/s11301-021-00237-6 |
[52] | Friedman AL, Miles S (2001) Socially responsible investment and corporate social and environmental reporting in the UK: an exploratory study. Brit Account Rev 33: 523–548. https://doi.org/10.1006/bare.2001.0172 |
[53] | Frooman J (1997) Socially irresponsible and illegal behavior and shareholder wealth: A meta-analysis of event studies. Bus Soc 36: 221–249. https://doi.org/10.1177/000765039703600302 |
[54] | Galbreath J, Shum P (2012) Do customer satisfaction and reputation mediate the CSR–FP link? Evidence from Australia. Aust J Manage 37: 211–229. https://doi.org/10.1177/0312896211432941 |
[55] | García-Rosell JC, Moisander J, Mäkinen J (2023) Conceptual framework for understanding the ethical dimension of corporate social responsibility, New Directions in Art, Fashion, and Wine: Sustainability, Digitalization, and Artification, 139. |
[56] | Ghardallou W (2022) Corporate sustainability and firm performance: the moderating role of CEO education and tenure. Sustainability 14: 3513. https://doi.org/10.3390/su14063513 |
[57] | Gimeno-Arias F, Santos-Jaén JM, Palacios-Manzano M, et al. (2021) Using PLS-SEM to analyze the effect of CSR on corporate performance: The mediating role of human resources management and customer satisfaction. An empirical study in the Spanish food and beverage manufacturing sector. Mathematics 9: 2973. https://doi.org/10.3390/math9222973 |
[58] | Godfrey PC (2005) The relationship between corporate philanthropy and shareholder wealth: A risk management perspective. Acad Manage Rev 30: 777–798. https://doi.org/10.5465/amr.2005.18378878 |
[59] | Goi CL, Yong KH (2009) Contribution of public relations (PR) to corporate social responsibility (CSR): A review on Malaysia perspective. Int J Mark Stud 1: 46. https://doi.org/10.5539/ijms.v1n2p46 |
[60] | Graafland J, Mazereeuw-Van der Duijn Schouten C (2012) Motives for corporate social responsibility. De Economist 160: 377–396. https://doi.org/10.1007/s10645-012-9198-5 |
[61] | Greening DW, Turban DB (2000) Corporate social performance as a competitive advantage in attracting a quality workforce. Bus Soc 39: 254–280. https://doi.org/10.1177/000765030003900302 |
[62] | Hair JF, Sarstedt M, Ringle CM, et al. (2012) An assessment of the use of partial least squares structural equation modeling in marketing research. J Acad Mark Sci 40: 414–433. https://doi.org/10.1007/s11747-011-0261-6 |
[63] | Handayani R, Wahyudi S, Suharnomo S (2017) The effects of corporate social responsibility on manufacturing industry performance: the mediating role of social collaboration and green innovation. Bus Theory Pract 18: 152–159. https://doi.org/10.3846/btp.2017.016 |
[64] | Helm S (2007) The role of corporate reputation in determining investor satisfaction and loyalty. Corp Rep Rev 10: 22–37. https://doi.org/10.1057/palgrave.crr.1550036 |
[65] | Hill A, Roberts J, Ewings P, et al. (1997) Non-response bias in a lifestyle survey. J Public Health 19: 203–207. https://doi.org/10.1093/oxfordjournals.pubmed.a024610 |
[66] | Hossain GMS, Rahman MS, Das S (2019) A Structural Equation Modeling (SEM) Approach to Explore the Association between Corporate Social Responsibility and Financial Performance: A Single Mediating Mechanism. North Am Acad Res 2: 155–172. |
[67] | Hur WM, Kim H, Woo J (2014) How CSR leads to corporate brand equity: Mediating mechanisms of corporate brand credibility and reputation. J Bus Ethics 125: 75–86. https://doi.org/10.1007/s10551-013-1910-0 |
[68] | Hutton W, MacDougall A, Zadek S (2001) Session 3: Topics in Business Ethics: Corporate Stakeholding, Ethical Investment, Social Accounting. J Bus Ethics, 107–117. https://doi.org/10.1023/A:1010641830759 |
[69] | Imam S (2000) Corporate social performance reporting in Bangladesh. Manage Audit J. https://doi.org/10.1108/02686900010319384 |
[70] | Ingram RW, Frazier KB (1980) Environmental performance and corporate disclosure. J Account Res, 614–622. https://doi.org/10.2307/2490597 |
[71] | Iqbal N, Ahmad N, Basheer NA, et al. (2012) Impact of corporate social responsibility on financial performance of corporations: Evidence from Pakistan. Int J Learn Dev 2: 107–118. https://doi.org/10.5296/ijld.v2i6.2717 |
[72] | Islam ZM, Ahmed SU, Hasan I (2012) Corporate social responsibility and financial performance linkage: Evidence from the banking sector of Bangladesh. J Organ Manage 1: 14–21. https://doi.org/10.1002/csr.1298 |
[73] | Javed M, Rashid MA, Hussain G, et al. (2020) The effects of corporate social responsibility on corporate reputation and firm financial performance: Moderating role of responsible leadership. Corp Soc Resp Environ Manage 27: 1395–1409. https://doi.org/10.1002/csr.1892 |
[74] | Jiang H, Cheng Y, Park K, et al. (2022) Linking CSR communication to corporate reputation: Understanding hypocrisy, employees' social media engagement and CSR-related work engagement. Sustainability 14: 2359. https://doi.org/10.3390/su14042359 |
[75] | Johnson RA, Greening DW (1999) The effects of corporate governance and institutional ownership types on corporate social performance. Acad Manage J 42: 564–576. https://doi.org/10.2307/256977 |
[76] | Julian SD, Ofori-dankwa JC (2013) Financial resource availability and corporate social responsibility expenditures in a sub-Saharan economy: The institutional difference hypothesis. Strat Manage J 34: 1314–1330. https://doi.org/10.1002/smj.2070 |
[77] | Khan HUZ (2010) The effect of corporate governance elements on corporate social responsibility (CSR) reporting: Empirical evidence from private commercial banks of Bangladesh. Int J Law Manage 52: 82–109. https://doi.org/10.1108/17542431011029406 |
[78] | Kiliç M, Kuzey C, Uyar A (2015) The impact of ownership and board structure on Corporate Social Responsibility (CSR) reporting in the Turkish banking industry. Corp Gov. https://doi.org/10.1108/CG-02-2014-0022 |
[79] | Kim RC (2022) Rethinking corporate social responsibility under contemporary capitalism: Five ways to reinvent CSR. Bus Ethics Environ Respon 31: 346–362. https://doi.org/10.1111/beer.12414 |
[80] | Kusakci S, Bushera I (2023) Corporate social responsibility pyramid in Ethiopia: A mixed study on approaches and practices. Int J Bus Ecosyst Strat 5: 37–48. https://doi.org/10.36096/ijbes.v5i1.378 |
[81] | Le TT (2022) Corporate social responsibility and SMEs' performance: mediating role of corporate image, corporate reputation and customer loyalty. Int J Emerg Mark. https://doi.org/10.1108/IJOEM-07-2021-1164 |
[82] | Lee KH, Herold DM, Yu AL (2016) Small and medium enterprises and corporate social responsibility practice: A Swedish perspective. Corp Soc Respon Environ Manage 23: 88–99. https://doi.org/10.1002/csr.1366 |
[83] | Leong LY, Hew TS, Ooi KB, et al. (2012) The determinants of customer loyalty in Malaysian mobile telecommunication services: a structural analysis. Int J Serv Econ Manage 4: 209–236. https://doi.org/10.1504/IJSEM.2012.048620 |
[84] | León-Gómez A, Santos-Jaén JM, Ruiz-Palomo D, et al. (2022) Disentangling the impact of ICT adoption on SMEs performance: the mediating roles of corporate social responsibility and innovation. Oeconomia Copernicana 13: 831–866. https://doi.org/10.24136/oc.2022.024 |
[85] | Li J, Fu T, Han S, et al. (2023) Exploring the Impact of Corporate Social Responsibility on Financial Performance: The Moderating Role of Media Attention. Sustainability 15: 5023. https://doi.org/10.3390/su15065023 |
[86] | Little PL, Little BL (2000) Do perceptions of corporate social responsibility contribute to explaining differences in corporate price-earnings ratios? A research note. Corp Reput Rev 3: 137–142. https://doi.org/10.1057/palgrave.crr.1540108 |
[87] | Liu X, Vredenburg H, Steel P (2019) Exploring the mechanisms of corporate reputation and financial performance: A meta-analysis. Acad Manage Proc 2019: 17903. https://doi.org/10.5465/AMBPP.2019.193 |
[88] | Low MP (2016) Corporate social responsibility and the evolution of internal corporate social responsibility in 21st century. Asian J Soc Sci Manage Stud 3: 56–74. https://doi.org/10.20448/journal.500/2016.3.1/500.1.56.74 |
[89] | Lynch-Wood G, Williamson D, Jenkins W (2009) The over-reliance on self-regulation in CSR policy. Bus Ethics Eur Rev 18: 52–65. https://doi.org/10.1111/j.1467-8608.2009.01548.x |
[90] | Maalim BM, Kibe LW, Ndolo J (2023) Assessment of Shareholder Strategy–An Internal Corporate Social Responsibility Perspective on Organizational Commitment in Five-Star Hotels in Kenya. Asian J Econ Bus Account 23: 72–85. https://doi.org/10.9734/ajeba/2023/v23i141006 |
[91] | Madueno JH, Jorge ML, Conesa IM, et al. (2016) Relationship between corporate social responsibility and competitive performance in Spanish SMEs: Empirical evidence from a stakeholders' perspective. BRQ Bus Res Q 19: 55–72. https://doi.org/10.1016/j.brq.2015.06.002 |
[92] | Maignan I (2001) Consumers' perceptions of corporate social responsibilities: A cross-cultural comparison. J Bus Ethics 30: 57–72. https://doi.org/10.1023/A:1006433928640 |
[93] | Maignan I, Ferrell OC (2000) Measuring corporate citizenship in two countries: The case of the United States and France. J Bus Ethics 23: 283–297. https://doi.org/10.1023/A:1006262325211 |
[94] | Maignan I, Ferrell OC, Hult GTM (1999) Corporate citizenship: Cultural antecedents and business benefits. J Acad Mark Sci 27: 455–469. https://doi.org/10.1177/0092070399274005 |
[95] | Mankelow G, Quazi A (2007) Factors affecting SMEs motivations for corporate social responsibility. In 3Rs: Reputation, Responsibility & Relevance: Australian and New Zealand Marketing Academy (ANZMAC) Conference 2007 (ANZMAC 2007), Australian and New Zealand Marketing Academy, 2367–2374. |
[96] | Masud AA, Hossain GMS (2019) A Model to Explain How an Organization's Corporate Social Responsibility (CSR) Contributes to Corporate Image and Financial Performance: By Using Structural Equation Modelling (SEM). Int J Manage IT Eng 9: 32–52. |
[97] | Masud A, Hoque AAM, Hossain MS, et al. (2013) Corporate social responsibility practices in garments sector of Bangladesh: A study of multinational garments, CSR view in Dhaka EPZ. Dev Country Stud 3: 27–37. |
[98] | Masud AA (2018) Corporate Social Responsibility and Customer Loyalty: How Corporate Social Responsibility leads to Firm Performance: A Study on textile sector in Southern Bangladesh. Barisal Univ J Bus Stud 4: 180–192. |
[99] | Masud AA, Ferdous R (2016) Relationship between the Expenditure of Corporate Social Responsibility and the Export Growth of Readymade Garments Sector; An Econometric Review based on Bangladesh perspective. J Bus Res Publicat Bus Res Bureau 1: 23–44. |
[100] | Masud AA, Islam S (2018) The relationship between corporate social responsibility (CSR) and financial performance of textiles sector in Bangladesh especially in Barisal Division. Int J Adv Soc Sci Human 5: 6–16. |
[101] | Masud AA, Ferdous R, Hossain DMM (2017) Corporate social responsibility practices on the productivity of readymade garments sector in Bangladesh. Barisal Univ J Art Human Soc Sci Law 1: 5–18. |
[102] | McGee JE (2021) Nonfinancial disclosure and its impact on sustainability, social responsibility, and ethics research. J Bus Ethics 169: 1–21. |
[103] | McWilliams A, Siegel D (2000) Corporate social responsibility and financial performance: correlation or misspecification? Strat Manage J 21: 603–609. https://doi.org/10.1002/(SICI)1097-0266(200005)21:5<603::AID-SMJ101>3.0.CO;2-3 |
[104] | McWilliams A, Siegel D (2001) Corporate social responsibility: A theory of the firm perspective. Acad Manage Rev 26: 117–127. https://doi.org/10.2307/259398 |
[105] | Mishra S, Suar D (2010) Does corporate social responsibility influence firm performance of Indian companies? J Bus Ethics 95: 571–601. https://doi.org/10.1007/s10551-010-0441-1 |
[106] | Mitnick BM, Windsor D, Wood DJ (2023) Moral CSR. Bus Soc 62: 192–220. https://doi.org/10.1177/00076503221086881 |
[107] | Abdullah KH, Mohd-Sabrun I (2023) Research on corporate reputation: A bibliometric review of 43 years (1977−2020). Int J Inf Sci Manage (IJISM) 21: 31–54. |
[108] | Moore G, Spence L (2006) Responsibility and small business. https://doi.org/10.1007/s10551-006-9180-8 |
[109] | Morimoto R, Ash J, Hope C (2005) Corporate social responsibility audit: From theory to practice. J Bus Ethics 62: 315–325. https://doi.org/10.1007/s10551-005-0274-5 |
[110] | Mugisa J (2011) The effects of corporate social responsibility on business operations and performance: case study-Vision Group and Uganda Clays Limited. Doctoral dissertation, Uganda Martyrs University. |
[111] | Munasinghe MATK, Malkumari AP (2012) Corporate social responsibility in small and medium enterprises (SME) in Sri Lanka. J Emerg Trends Econ Manage Sci 3: 168–172. |
[112] | Murcia FD, Souza FCD (2009) Discretionary-based disclosure: the case of social and environmental reporting in Brazil. In Congresso USP 9, 2009. |
[113] | Murillo D, Lozano JM (2006) SMEs and CSR: An approach to CSR in their own words. J Bus Ethics 67: 227–240. https://doi.org/10.1007/s10551-006-9181-7 |
[114] | Nardella G, Brammer S, Surdu I (2023) The social regulation of corporate social irresponsibility: Reviewing the contribution of corporate reputation. Int J Manage Rev 25: 200–229. https://doi.org/10.1111/ijmr.12311 |
[115] | Nasser HS, Beydoun AR, et al. (2023) Investigating the Mediating Role of Perceived Corporate Reputation on the Relationship between Customer Satisfaction, Customer Trust, and Loyalty: A Study of Lebanese Hotels. Eur J Sci Innovation Technol 3: 112–126. |
[116] | Okwemba EM, Chitiavi MS, Egessa R, et al. (2014) Effect of corporate social responsibility on organization performance; Banking industry Kenya, Kakamega County. Int J Bus Manage Invent 3: 37–51. |
[117] | Olowokudejo F, Aduloju SA, Oke SA (2011) Corporate social responsibility and organizational effectiveness of insurance companies in Nigeria. J Risk Financ. https://doi.org/10.1108/15265941111136914 |
[118] | Ooi KB, Lee VH, Tan GWH, et al. (2018) Cloud computing in manufacturing: The next industrial revolution in Malaysia? Expert Syst Appl 93: 376–394. https://doi.org/10.1016/j.eswa.2017.10.009 |
[119] | Orlitzky M (2008) Corporate social performance and financial performance: A research synthesis. Doctoral dissertation, Oxford University Press Incorporated. https://doi.org/10.1093/oxfordhb/9780199211593.003.0005 |
[120] | Orlitzky M (2013) Corporate social responsibility, noise, and stock market volatility. Acad Manage Perspect 27: 238–254. https://doi.org/10.5465/amp.2012.0097 |
[121] | Orlitzky M, Benjamin JD (2001) Corporate social performance and firm risk: A meta-analytic review. Bus Soc 40: 369–396. https://doi.org/10.1177/000765030104000402 |
[122] | Ortiz-Martínez E, Marín-Hernández S, Santos-Jaén JM (2023) Sustainability, corporate social responsibility, non-financial reporting and company performance: Relationships and mediating effects in Spanish small and medium sized enterprises. Sustain Product Consump 35: 349–364. https://doi.org/10.1016/j.spc.2022.11.015 |
[123] | Palacios-Manzano M, León-Gomez A, Santos-Jaén JM (2021) Corporate Social Responsibility as a Vehicle for Ensuring the Survival of Construction SMEs: The Mediating Role of Job Satisfaction and Innovation. IEEE Trans Eng Manage. https://doi.org/10.1109/TEM.2021.3114441 |
[124] | Pan X, Sha J, Zhang H, et al. (2014) Relationship between corporate social responsibility and financial performance in the mineral industry: Evidence from Chinese mineral firms. Sustainability 6: 4077–4101. https://doi.org/10.3390/su6074077 |
[125] | Pérez A (2015) Corporate reputation and CSR reporting to stakeholders. Corp Commun Int J. https://doi.org/10.1108/CCIJ-01-2014-0003 |
[126] | Podsakoff PM, MacKenzie SB, Lee JY, et al. (2003) Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol 88: 879. https://doi.org/10.1037/0021-9010.88.5.879 |
[127] | Polonsky MJ, Neville BA, Bell SJ, et al. (2005) Corporate reputation, stakeholders and the social performance-financial performance relationship. Eur J Mark. https://doi.org/10.1108/03090560510610798 |
[128] | Preston LE, O'bannon DP (1997) The corporate social-financial performance relationship: A typology and analysis. Bus Soc 36: 419–429. https://doi.org/10.1177/000765039703600406 |
[129] | Rahim RA, Jalaludin FW, Tajuddin K (2011) The importance of corporate social responsibility on consumer behaviour in Malaysia. Asian Acad Manage J 16: 119–139. |
[130] | Raj AB, Subramani AK (2022) Building corporate reputation through corporate social responsibility: the mediation role of employer branding. Int J Soc Econ. |
[131] | Reisinger A (2023) Challenges in the CSR–Competitiveness Relationship Based on the Literature. Financ Econ Rev 22: 104–125. https://doi.org/10.33893/FER.22.1.104 |
[132] | Rettab B, Brik AB, Mellahi K (2009) A study of management perceptions of the impact of corporate social responsibility on organisational performance in emerging economies: the case of Dubai. J Bus Ethics 89: 371–390. https://doi.org/10.1007/s10551-008-0005-9 |
[133] | Roberts PW, Dowling GR (2002) Corporate reputation and sustained superior financial performance. Strat Manage J 23: 1077–1093. https://doi.org/10.1002/smj.274 |
[134] | Sacconi L (2011) A Rawlsian view of CSR and the game theory of its implementation (part i): The multi-stakeholder model of corporate governance. In: Corporate Social Responsibility and Corporate Governance, Palgrave Macmillan, London, 57–193. https://doi.org/10.1057/9780230302112_7 |
[135] | Sacconi L, Antoni G (Eds.) (2010) Social Capital, Corporate Social Responsibility, Economic Behaviour and Performance. Springer. https://doi.org/10.1057/9780230306189 |
[136] | Saeidi SP, Sofian S, Saeidi P, et al. (2015) How does corporate social responsibility contribute to firm financial performance? The mediating role of competitive advantage, reputation, and customer satisfaction. J Bus Res 68: 341–350. https://doi.org/10.1016/j.jbusres.2014.06.024 |
[137] | Salam MA, Jahed MA (2023) CSR orientation for competitive advantage in business-to-business markets of emerging economies: the mediating role of trust and corporate reputation. J Bus Ind Mark. https://doi.org/10.1108/JBIM-12-2021-0591 |
[138] | Santos-Jaén JM, León-Gómez A, Ruiz-Palomo D, et al. (2022) Exploring Information and Communication Technologies as Driving Forces in Hotel SMEs Performance: Influence of Corporate Social Responsibility. Mathematics 10: 3629. https://doi.org/10.3390/math10193629 |
[139] | Santos-Jaén JM, Madrid-Guijarro A, García-Pérez-de-Lema D (2021) The impact of corporate social responsibility on innovation in small and medium-sized enterprises: The mediating role of debt terms and human capital. Corp Soc Resp Environ Manage 28: 1200–1215. https://doi.org/10.1002/csr.2125 |
[140] | Silva Junior AD, Martins-Silva PDO, Coelho VD, et al. (2023) The corporate social responsibility pyramid: Its evolution and the proposal of the spinner, a theoretical refinement. Soc Resp J 19: 358–376. https://doi.org/10.1108/SRJ-05-2021-0180 |
[141] | Solomon RC (1992) Ethics and excellence: Cooperation and integrity in business. |
[142] | Torugsa NA, O'Donohue W, Hecker R (2012) Capabilities, proactive CSR and financial performance in SMEs: Empirical evidence from an Australian manufacturing industry sector. J Bus Ethics 109: 483–500. https://doi.org/10.1007/s10551-011-1141-1 |
[143] | Trotta A, Cavallaro G (2012) Measuring corporate reputation: A framework for Italian banks. Int J Econ Financ Stud 4: 21–30. |
[144] | Tsang EW (1998) A longitudinal study of corporate social reporting in Singapore. Account Audit Account J. https://doi.org/10.1108/09513579810239873 |
[145] | Turban DB, Greening DW (1997) Corporate social performance and organizational attractiveness to prospective employees. Acad Manage J 40: 658–672. https://doi.org/10.2307/257057 |
[146] | Vilanova M, Lozano JM, Arenas D (2009) Exploring the nature of the relationship between CSR and competitiveness. J Bus Ethics 87: 57–69. https://doi.org/10.1007/s10551-008-9812-2 |
[147] | Waddock S (2000) The multiple bottom lines of corporate citizenship: Social investing, reputation, and responsibility audits. Bus Soc Rev 105: 323–345. https://doi.org/10.1111/0045-3609.00085 |
[148] | Waddock SA, Graves SB (1997) The corporate social performance–financial performance link. Strat Manage J 18: 303–319. https://doi.org/10.1002/(SICI)1097-0266(199704)18:4<303::AID-SMJ869>3.0.CO;2-G |
[149] | Whetten DA, Mackey A (2002) A social actor conception of organizational identity and its implications for the study of organizational reputation. Bus Soc 41: 393–414. https://doi.org/10.1177/0007650302238775 |
[150] | Wright P, Ferris SP (1997) Agency conflict and corporate strategy: The effect of divestment on corporate value. Strat Manage J 18: 77–83. https://doi.org/10.1002/(SICI)1097-0266(199701)18:1<77::AID-SMJ810>3.0.CO;2-R |
[151] | Yan X, Espinosa-Cristia JF, Kumari K, et al. (2022) Relationship between Corporate Social Responsibility, Organizational Trust, and Corporate Reputation for Sustainable Performance. Sustainability 14: 8737. https://doi.org/10.3390/su14148737 |
[152] | Zhao L, Yang MM, Wang Z, et al. (2023) Trends in the dynamic evolution of corporate social responsibility and leadership: A literature review and bibliometric analysis. J Bus Ethics 182: 135–157. https://doi.org/10.1007/s10551-022-05035-y |
[153] | Zheng Q, Luo Y, Wang SL (2014) Moral degradation, business ethics, and corporate social responsibility in a transitional economy. J Bus Ethics 120: 405–421. https://doi.org/10.1007/s10551-013-1668-4 |
Datasets | Methods | ACC | NMI | Purity |
BSVC [61] | 67.98 | 74.43 | 72.34 | |
SCAgg [62] | 89.00 | 77.12 | 89.18 | |
ASR [41] | 97.90 | 94.72 | 97.90 | |
MNIST-USPS | DSIMVC [63] | 99.34 | 98.13 | 99.34 |
DCP [64] | 99.02 | 97.29 | 99.02 | |
MFL [65] | 99.66 | 99.01 | 99.66 | |
CVCL [55] | 99.58 | 98.79 | 99.58 | |
DGMVCL | 99.82 | 99.52 | 99.82 | |
BSVC [61] | 60.32 | 64.91 | 63.84 | |
SCAgg [62] | 98.00 | 94.80 | 97.56 | |
ASR [41] | 96.52 | 93.04 | 96.52 | |
| Datasets | Methods | ACC | NMI | Purity |
|---|---|---|---|---|
| MNIST-USPS | BSVC [61] | 67.98 | 74.43 | 72.34 |
| | SCAgg [62] | 89.00 | 77.12 | 89.18 |
| | ASR [41] | 97.90 | 94.72 | 97.90 |
| | DSIMVC [63] | 99.34 | 98.13 | 99.34 |
| | DCP [64] | 99.02 | 97.29 | 99.02 |
| | MFL [65] | 99.66 | 99.01 | 99.66 |
| | CVCL [55] | 99.58 | 98.79 | 99.58 |
| | DGMVCL | 99.82 | 99.52 | 99.82 |
| Fashion | BSVC [61] | 60.32 | 64.91 | 63.84 |
| | SCAgg [62] | 98.00 | 94.80 | 97.56 |
| | ASR [41] | 96.52 | 93.04 | 96.52 |
| | DSIMVC [63] | 88.21 | 83.99 | 88.21 |
| | DCP [64] | 89.37 | 88.61 | 89.37 |
| | MFL [65] | 99.20 | 98.00 | 99.20 |
| | CVCL [55] | 99.31 | 98.21 | 99.31 |
| | DGMVCL | 99.52 | 98.73 | 99.52 |
| Multi-COIL-10 | BSVC [61] | 73.32 | 76.91 | 74.11 |
| | SCAgg [62] | 68.34 | 70.18 | 69.26 |
| | ASR [41] | 84.23 | 65.47 | 84.23 |
| | DSIMVC [63] | 99.38 | 98.85 | 99.38 |
| | DCP [64] | 70.14 | 81.90 | 70.14 |
| | MFL [65] | 99.20 | 98.00 | 99.20 |
| | CVCL [55] | 99.43 | 99.04 | 99.43 |
| | DGMVCL | 100.00 | 100.00 | 100.00 |
| Datasets | Methods | ACC | NMI | Purity |
|---|---|---|---|---|
| ORL | BSVC [61] | 61.31 | 64.91 | 61.31 |
| | SCAgg [62] | 61.65 | 77.41 | 66.22 |
| | ASR [41] | 79.49 | 78.04 | 81.49 |
| | DSIMVC [63] | 25.37 | 52.91 | 25.37 |
| | DCP [64] | 27.70 | 49.93 | 27.70 |
| | MFL [65] | 80.03 | 89.34 | 80.03 |
| | CVCL [55] | 85.50 | 93.17 | 86.00 |
| | DGMVCL | 92.25 | 98.34 | 92.25 |
| Scene-15 | BSVC [61] | 38.05 | 38.85 | 42.08 |
| | SCAgg [62] | 38.13 | 39.31 | 44.76 |
| | ASR [41] | 42.70 | 40.70 | 45.60 |
| | DSIMVC [63] | 28.27 | 29.04 | 29.79 |
| | DCP [64] | 42.32 | 40.38 | 43.85 |
| | MFL [65] | 42.52 | 40.34 | 44.53 |
| | CVCL [55] | 44.59 | 42.17 | 47.36 |
| | DGMVCL | 61.29 | 76.39 | 65.04 |
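The tables report clustering accuracy (ACC), normalized mutual information (NMI), and purity. As a reference for how these scores are computed, here is a minimal NumPy/SciPy sketch, assuming Hungarian matching between cluster and class labels for ACC and arithmetic-mean normalization for NMI; the paper's exact implementation may differ in these conventions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_acc(y_true, y_pred):
    """ACC: best one-to-one match between cluster ids and class ids."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                      # contingency counts
    row, col = linear_sum_assignment(-cost)  # maximize matched samples
    return cost[row, col].sum() / len(y_true)

def purity(y_true, y_pred):
    """Purity: each cluster is assigned its majority class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    total = 0
    for c in np.unique(y_pred):
        total += np.bincount(y_true[y_pred == c]).max()
    return total / len(y_true)

def nmi(y_true, y_pred):
    """NMI with arithmetic-mean normalization of the two entropies."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    ct = np.zeros((y_pred.max() + 1, y_true.max() + 1))
    for t, p in zip(y_true, y_pred):
        ct[p, t] += 1
    pxy = ct / n                             # joint distribution
    px = pxy.sum(axis=1, keepdims=True)      # cluster marginal
    py = pxy.sum(axis=0, keepdims=True)      # class marginal
    nz = pxy > 0
    mi = (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()
    hx = -(px[px > 0] * np.log(px[px > 0])).sum()
    hy = -(py[py > 0] * np.log(py[py > 0])).sum()
    return mi / ((hx + hy) / 2) if hx + hy > 0 else 1.0
```

Note that ACC and purity coincide whenever the Hungarian matching happens to assign each cluster its majority class, which is why several rows in the tables show identical ACC and Purity values.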
| Types | Lg | Lc | La | MNIST-USPS (ACC / NMI / Purity) | ORL (ACC / NMI / Purity) | Fashion (ACC / NMI / Purity) |
|---|---|---|---|---|---|---|
| A | | ✔ | ✔ | 19.96 / 42.62 / 19.96 | 69.00 / 92.67 / 70.00 | 84.78 / 93.13 / 89.38 |
| B | ✔ | | ✔ | 99.76 / 99.37 / 99.76 | 85.00 / 86.58 / 85.00 | 99.48 / 98.60 / 99.48 |
| C | ✔ | ✔ | | 10.31 / 12.54 / 10.31 | 12.50 / 26.69 / 12.50 | 25.75 / 38.72 / 25.75 |
| D | ✔ | ✔ | ✔ | 99.82 / 99.52 / 99.82 | 92.25 / 98.34 / 92.25 | 99.52 / 98.73 / 99.52 |
| E | ✔ | | | N/A | N/A | N/A |
| F | | ✔ | | N/A | N/A | N/A |
| Groups | MNIST-USPS (ACC / NMI / Purity) | Fashion (ACC / NMI / Purity) | Multi-COIL-10 (ACC / NMI / Purity) |
|---|---|---|---|
| A | 99.76 / 99.37 / 99.76 | 99.48 / 98.60 / 99.48 | 100.00 / 100.00 / 100.00 |
| B | 99.70 / 99.11 / 99.70 | 99.02 / 97.60 / 99.02 | 99.14 / 98.71 / 99.14 |
| C | 98.00 / 97.00 / 97.00 | 81.25 / 86.24 / 81.40 | 31.70 / 31.00 / 32.00 |
| D | 89.92 / 94.51 / 89.92 | 98.49 / 96.52 / 98.49 | 99.00 / 97.31 / 99.01 |
| Settings | MNIST-USPS (ACC / NMI / Purity) | Fashion (ACC / NMI / Purity) | Multi-COIL-10 (ACC / NMI / Purity) |
|---|---|---|---|
| nx=1 | 99.82 / 99.52 / 99.82 | 99.58 / 98.83 / 99.58 | 100.00 / 100.00 / 100.00 |
| nx=2 | 99.68 / 99.08 / 99.68 | 99.39 / 98.37 / 99.39 | 98.14 / 96.71 / 98.14 |
| nx=3 | 99.58 / 98.73 / 99.58 | 99.53 / 98.71 / 99.53 | 99.14 / 98.13 / 99.14 |
| Methods | MNIST-USPS | Fashion | Multi-COIL-10 | ORL | Scene-15 |
|---|---|---|---|---|---|
| DGMVCL w/o GMLM | 11.9 | 34.4 | 2.6 | 1.5 | 14.5 |
| DGMVCL | 12.3 | 36.1 | 2.7 | 1.6 | 16.9 |
| Methods | 15% | 30% | 45% | 60% |
|---|---|---|---|---|
| CVCL [55] | 98.49 | 97.23 | 95.06 | 89.48 |
| DGMVCL | 99.16 | 98.29 | 97.28 | 92.70 |
| Methods | MNIST-USPS | Fashion | Multi-COIL-10 | ORL | Scene-15 |
|---|---|---|---|---|---|
| DGMVCL w/o MMM | 10.00 | 10.00 | 10.29 | 2.60 | 9.14 |
| DGMVCL | 99.82 | 99.52 | 100.00 | 92.25 | 61.29 |
| Methods | ACC | NMI | Purity |
|---|---|---|---|
| DSIMVC [63] | 20.25 | 31.43 | 23.68 |
| DCP [64] | 27.41 | 39.58 | 37.01 |
| CVCL [55] | 25.48 | 37.67 | 36.63 |
| DGMVCL | 29.03 | 45.81 | 40.02 |