

Supervised learning is an important branch of machine learning. In supervised multi-class classification problems, each sample is assigned a label indicating the category it belongs to [1]. Supervised learning is effective when there are enough samples with high-quality labels. However, it is expensive and time-consuming to build datasets with a multitude of accurate labels. To solve this problem, researchers have proposed a series of weakly supervised learning (WSL) methods, which aim to train models with partial, incomplete, or inaccurate supervision, such as noisy-label learning [2,3,4,5], semi-supervised learning [6,7,8,9], partial-label learning [10,11,12], positive-confidence learning [13], unlabeled-unlabeled learning [14] and others.

In this paper, we consider another WSL framework called complementary label learning (CLL). We show the difference between complementary labels and true labels in Figure 1. In contrast to an ordinary label, a complementary label indicates a class that the sample does not belong to. Obviously, it is easier and less costly to collect such complementary labels. For example, in some highly specialized domains, expert knowledge is very expensive. If complementary labels are used for annotation, we only need to determine the extent of the label space and then use common sense to decide which category is wrong. It is much simpler and faster to determine which class a sample does not belong to than which class it does. Besides, CLL can also protect data privacy in sensitive fields such as medical and financial records, because the true information of the data no longer needs to be disclosed. This not only protects data privacy and security, but also makes it easier to collect data in these areas.

Figure 1.  Comparison of complementary labels (bottom) with real labels (top). A complementary label is one of the categories to which the image does not belong.

The framework of CLL was first proposed by Ishida et al. [15]. They proved that the unbiased risk estimator (URE) built only from complementary labels is equivalent to the ordinary classification risk when the loss function satisfies certain conditions. In URE, the loss function must be nonconvex and symmetric, which leads to certain limitations. To overcome this limitation, Yu et al. [16] made the cross-entropy loss usable in CLL by constructing a complementary label transition matrix, and they also considered that different labels have different probabilities of being selected as a complementary label. Then, Ishida et al. [17] expanded URE and proposed a CLL framework adapted to more general loss functions; this framework still has an unbiased estimator of the ordinary classification risk, but it works for all loss functions. Chou et al. [18] optimized URE from the perspective of gradient estimation and proposed using a surrogate complementary loss (SCL) to obtain unbiased risk estimation, which effectively alleviates the overfitting problem of URE. Liu et al. [19] applied common losses such as categorical cross entropy (CCE), mean square error (MSE), and mean absolute error (MAE) to CLL. Ishiguro et al. [20] studied the problem that complementary labels may be affected by label noise; to mitigate its adverse effects, they selected noise-robust losses that satisfy a weighted symmetric condition or a more relaxed condition. Recently, Zhang et al. [21] broadened the setting of complementary label datasets and discussed the case in which a dataset contains a large number of complementary labels and a small number of true labels at the same time. They proposed an adversarial complementary label learning network, named Clarinet. Clarinet consists of two deep neural networks, one to classify complementary labels and true labels, and the other to learn from complementary labels.

Previous studies on CLL have focused mainly on rewriting the classification risk under the ordinary label distribution as a risk under the complementary label distribution and on exploring the use of more loss functions [15,16,17,18,19]. These risk-rewriting techniques prove the consistency between the risk of complementary label classification and the risk of supervised classification, which enables a classifier to perform accurate classification using only complementary labels. However, in this process, only complementary labels are involved in the risk calculation, and the information contained in them is extremely limited, which results in consistently lower performance of CLL compared to supervised learning. Therefore, we aim to enhance the supervision information of the complementary labels to further improve the performance of CLL. In this paper, we propose a two-step complementary label enhancement framework based on knowledge distillation (KDCL). It consists of the following components: 1) a teacher model trained on a complementary label dataset to generate soft labels, which carry more supervision information in the form of a label distribution; 2) a student model trained on the same dataset to learn from both soft labels and complementary labels; 3) a final loss function to integrate the losses from soft labels and complementary labels and update the parameters of the student model. We use three CLL loss functions to conduct experiments on several benchmark datasets and compare the accuracy of the student model before and after enhancement by KDCL. The experimental results show that KDCL can effectively improve the performance of CLL.

Suppose the input sample is a d-dimensional vector $x \in \mathbb{R}^d$ with class label $y \in \{1, 2, \dots, K\}$, where $K$ stands for the number of classes in the dataset. Given a training set $D = \{(x_i, y_i)\}_{i=1}^{N}$ with $N$ samples, all drawn independently from the same distribution $p(x, y)$, the goal of learning from true labels is to learn a mapping $f(x)$ from the sample space $\mathbb{R}^d$ to the label space $\{1, 2, \dots, K\}$; $f(x)$ is also called a classifier. We want $f(x)$ to minimize the multi-class classification risk:

$$ R(f) = \mathbb{E}_{(x, y) \sim p(x, y)}\left[\mathcal{L}(f(x), y)\right], \quad (1) $$

where $\mathcal{L}(f(x), y)$ is a multi-class loss function. $f(x)$ is usually obtained by the following equation:

$$ f(x) = \arg\max_{y \in \{1, 2, \dots, K\}} g_y(x), \quad (2) $$

where $g(x): \mathbb{R}^d \to \mathbb{R}^K$. In deep neural networks, $g(x)$ is the prediction distribution output by the last fully connected layer.

In general, the distribution $p(x, y)$ is unknown. We can use the sample mean to approximate the classification risk in Eq (1); $R(f)$ is empirically estimated as $\hat{R}(f)$:

$$ \hat{R}(f) = \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}(f(x_i), y_i), \quad (3) $$

where $N$ is the number of training samples and $i$ indexes the $i$-th sample.

In CLL, each sample $x$ is assigned only one complementary label $\bar{y}$. The dataset is thus switched from $D = \{(x_i, y_i)\}_{i=1}^{N}$ to $\bar{D} = \{(x_i, \bar{y}_i)\}_{i=1}^{N}$, where $\bar{y} \in \{1, 2, \dots, K\} \setminus \{y\}$ and $D \neq \bar{D}$. $\bar{D}$ independently follows an unknown distribution $\bar{p}(x, \bar{y})$. If all complementary labels are selected in an unbiased way, meaning that every candidate class has the same probability of being chosen, $\bar{p}(x, \bar{y})$ can be presented as:

$$ \bar{p}(x, \bar{y}) = \frac{1}{K-1} \sum_{y \neq \bar{y}} p(x, y). \quad (4) $$
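As a concrete illustration, the following sketch (our own, not code from the paper; function and variable names are hypothetical) draws complementary labels in exactly this unbiased way: each sample receives one label chosen uniformly from the $K-1$ classes other than its true class.

```python
import numpy as np

def make_complementary_labels(y_true, num_classes, seed=None):
    """Draw one complementary label per sample, uniformly over the K-1
    classes that are not the true class (the unbiased setting of Eq (4))."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    # An offset in {1, ..., K-1} added modulo K can never return the true
    # class, and every wrong class is chosen with probability 1/(K-1).
    offsets = rng.integers(1, num_classes, size=y_true.shape)
    return (y_true + offsets) % num_classes

# Example: complementary labels for true labels over K = 4 classes.
y_bar = make_complementary_labels([0, 1, 2, 3], num_classes=4, seed=0)
```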

Supposing that $\bar{\mathcal{L}}(f(x), \bar{y})$ is the complementary loss function, we can obtain a multi-class risk similar to Eq (1) under the distribution $\bar{p}(x, \bar{y})$:

$$ \bar{R}(f) = \mathbb{E}_{(x, \bar{y}) \sim \bar{p}(x, \bar{y})}\left[\bar{\mathcal{L}}(f(x), \bar{y})\right]. \quad (5) $$

To the best of our knowledge, Ishida et al. [15] were the first to prove that, when the loss function $\bar{\mathcal{L}}$ satisfies certain conditions, the ordinary risk in Eq (1) can be recovered from the complementary risk in Eq (5) up to a constant $M$ that depends only on the number of categories $K$:

$$ R(f) = (K-1)\, \mathbb{E}_{(x, \bar{y}) \sim \bar{p}(x, \bar{y})}\left[\bar{\mathcal{L}}(f(x), \bar{y})\right] + M = (K-1)\, \bar{R}(f) + M. \quad (6) $$

All coefficients in Eq (6) are constant when the loss function satisfies the condition, so it is possible to learn from complementary labels by minimizing $R(f)$ in Eq (6). They then rewrite the one-versus-all (OVA) loss $\mathcal{L}_{\mathrm{OVA}}$ and the pairwise-comparison (PC) loss $\mathcal{L}_{\mathrm{PC}}$ of ordinary multi-class classification as $\bar{\mathcal{L}}_{\mathrm{OVA}}$ and $\bar{\mathcal{L}}_{\mathrm{PC}}$ in CLL:

$$ \bar{\mathcal{L}}_{\mathrm{OVA}}(g(x), \bar{y}) = \frac{1}{K-1} \sum_{y \neq \bar{y}} \ell(g_y(x)) + \ell(-g_{\bar{y}}(x)), \qquad \bar{\mathcal{L}}_{\mathrm{PC}}(g(x), \bar{y}) = \sum_{y \neq \bar{y}} \ell(g_y(x) - g_{\bar{y}}(x)), \quad (7) $$

where $\ell(z): \mathbb{R} \to \mathbb{R}$ is a binary loss, which must be nonconvex and symmetric, such as the sigmoid loss. $g(x)$ is the same as in Eq (2) and $g_y(x)$ is the $y$-th element of $g(x)$. Finally, the unbiased risk estimator of $R(f)$ can be obtained by the sample mean:

$$ \hat{R}(f) = \frac{K-1}{N} \sum_{n=1}^{N} \bar{\mathcal{L}}(f(x_n), \bar{y}_n) + M. \quad (8) $$
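For concreteness, a minimal PyTorch sketch of the complementary PC loss in Eq (7) could look as follows; the function names and the choice of the sigmoid loss $\ell(z) = 1/(1+e^{z})$ are our own illustrative assumptions, not code from [15].

```python
import torch

def sigmoid_loss(z):
    # Binary sigmoid loss l(z) = 1 / (1 + exp(z)); it satisfies the
    # symmetry condition l(z) + l(-z) = 1 used by URE.
    return torch.sigmoid(-z)

def pc_complementary_loss(g, y_bar):
    """PC loss of Eq (7): sum over y != y_bar of l(g_y(x) - g_{y_bar}(x)).
    g: (batch, K) raw scores, y_bar: (batch,) complementary labels."""
    g_bar = g.gather(1, y_bar.view(-1, 1))            # g_{y_bar}(x), shape (batch, 1)
    losses = sigmoid_loss(g - g_bar)                  # l(g_y - g_{y_bar}), shape (batch, K)
    mask = torch.ones_like(losses).scatter_(1, y_bar.view(-1, 1), 0.0)
    return (losses * mask).sum(dim=1).mean()          # exclude the y = y_bar term
```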

Although it is feasible to learn a classifier by minimizing Eq (8) from complementary labels alone, the restriction on the loss function limits the application of URE. Yu et al. [16] analyze the relationship between ordinary and complementary labels in terms of conditional probability:

$$ P(\bar{y} = j \mid x) = \sum_{i \neq j} P(\bar{y} = j \mid y = i)\, P(y = i \mid x), \quad (9) $$

where $i, j \in \{1, 2, \dots, K\}$. When all complementary labels are selected in an unbiased way, $P(\bar{y} \mid y)$ can be expressed as a transition matrix $Q$:

$$ Q = \begin{bmatrix} 0 & \frac{1}{K-1} & \cdots & \frac{1}{K-1} \\ \frac{1}{K-1} & 0 & \cdots & \frac{1}{K-1} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{K-1} & \frac{1}{K-1} & \cdots & 0 \end{bmatrix}_{K \times K}, \quad (10) $$

where the $(i, j)$ entry of $Q$ represents $P(\bar{y} = j \mid y = i)$. Since the true label and the complementary label of a sample are mutually exclusive, $P(\bar{y} = i \mid y = i) = 0$; therefore, the entries on the diagonal of the matrix are 0.

Combining Eqs (5), (9), and (10), we can rewrite $\bar{R}(f)$ as:

$$ \bar{R}(f) = \mathbb{E}_{\bar{p}(x, \bar{y})}\left[\mathcal{L}_{\mathrm{CE}}(Q^{\top} g(x), \bar{y})\right], \quad (11) $$

where $\mathcal{L}_{\mathrm{CE}}$ is the cross-entropy loss widely used in deep learning. The classification risk $\bar{R}(f)$ in Eq (11) is also consistent with the ordinary classification risk $R(f)$ [16].
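Eq (11) is straightforward to implement. The sketch below (our illustration with hypothetical names, not the authors' code) builds the uniform transition matrix $Q$ of Eq (10) and applies it to the Softmax output before computing the cross-entropy loss.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, y_bar):
    """Cross-entropy between the complementary label and Q^T g(x), Eq (11).
    logits: (batch, K) raw scores, y_bar: (batch,) complementary labels."""
    K = logits.size(1)
    Q = (torch.ones(K, K) - torch.eye(K)) / (K - 1)  # Eq (10): zero diagonal
    p = F.softmax(logits, dim=1)                     # g(x) as probabilities
    q = p @ Q                                        # row-wise Q^T p (Q is symmetric)
    return F.nll_loss(torch.log(q + 1e-12), y_bar)   # cross-entropy on Q^T g(x)
```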

In image classification, the outputs of the last fully connected layer of a deep neural network, after the Softmax function, contain the predicted probability distribution over all classes. Compared with a single logical label, these outputs carry more information. Hinton et al. [22] call such outputs soft labels and propose a knowledge distillation framework. We draw on the idea of knowledge distillation and aim to improve the performance of CLL by enhancing complementary labels through soft labels.

In the knowledge distillation framework, Hinton et al. [22] modify the Softmax function by introducing a parameter $T$ that controls the smoothness of the soft labels. The ordinary Softmax function can be expressed as follows:

$$ y'_i = \frac{\exp(y_i)}{\sum_j \exp(y_j)}, \quad (12) $$

where $y'_i$ is the predicted probability of the $i$-th class, $\exp(\cdot)$ is the exponential function, and $y_i$ is the predicted output of the classification network for the $i$-th class. The Softmax function combines the prediction outputs of the model over all classes and uses the exponential function to normalize the output values into the interval [0, 1].

    The rewritten Softmax function is as follows:

$$ y'_i = \frac{\exp(y_i / T)}{\sum_j \exp(y_j / T)}. \quad (13) $$

We present a comparison of the smoothness of soft labels for different $T$ in Figure 2. As $T$ increases, the soft labels become smoother. In effect, $T$ regulates how much attention is paid to the negative labels: the higher $T$ is, the more attention the negative labels receive. $T$ is an adjustable hyperparameter during training.

Figure 2.  The smoothness of soft labels for different $T$. The higher $T$ is, the smoother the soft labels become.
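A small sketch of Eq (13) (illustrative only, not from the paper) makes the smoothing effect easy to see:

```python
import torch
import torch.nn.functional as F

def soften(logits, T):
    """Temperature-scaled Softmax of Eq (13)."""
    return F.softmax(logits / T, dim=1)

logits = torch.tensor([[4.0, 1.0, 0.5]])
print(soften(logits, T=1.0))   # sharply peaked on the first class
print(soften(logits, T=10.0))  # much smoother: negative labels gain weight
```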

For a given sample, soft labels not only identify its correct category but also encode correlations among the other labels. Soft labels therefore carry far richer information than a single complementary label. If we add an extra term to the ordinary complementary label classification loss and introduce soft labels as additional supervision information, CLL will perform better than when using complementary labels alone. Of course, we need a model with high accuracy to produce the soft labels, which makes them more credible; this model is also trained with complementary labels.

    Taking advantage of this property, we propose KDCL, a complementary label learning framework based on knowledge distillation. The overall structure is shown in Figure 3.

    Figure 3.  The framework architecture of KDCL. α and β are the weighting factors to balance KL loss and complementary loss.

KDCL is a two-stage training framework consisting of a more complex teacher model with higher accuracy and a simpler student model with lower accuracy. First, the teacher model is trained with complementary labels on the dataset and then predicts all samples in the training set. The prediction results are normalized by the Softmax function with $T = t$ ($t > 1$) to generate soft labels $S_{tea}$. Second, the student model is trained and its outputs are processed in two ways: one branch produces the soft prediction results $S_{stu}$ with $T = t$ ($t > 1$), and the other outputs the ordinary prediction results $P_{stu}$ with $T = 1$. Then, the KL divergence between $S_{tea}$ and $S_{stu}$ is calculated, and the complementary label loss between $P_{stu}$ and the complementary labels is calculated at the same time. The two losses are weighted to obtain the final distillation loss. Finally, the parameters of the student model are updated by the final loss.

In KDCL, the final loss consists of a Kullback-Leibler (KL) loss and a complementary loss. On the one hand, the student model needs to learn knowledge from the teacher model to improve its ability. On the other hand, the teacher model is not completely correct, and the student model also needs to learn by itself to reduce the influence of the teacher model's errors on the learning process. It is therefore better to consider both.

    The final distillation loss consists of two parts and it can be expressed as follows:

$$ \mathcal{L}_{\mathrm{KDCL}} = \alpha\, \mathcal{L}_{\mathrm{KL}} + \mathcal{L}_{\mathrm{CL}}, \quad (14) $$

where $\mathcal{L}_{\mathrm{KL}}$ denotes the KL divergence and $\mathcal{L}_{\mathrm{CL}}$ denotes the complementary loss. Given the probability distributions $p_t$ from the teacher model and $p_s$ from the student model, their KL divergence can be expressed as follows:

$$ \mathcal{L}_{\mathrm{KL}}(p_t, p_s) = \sum_i p_{t,i} \log \frac{p_{t,i}}{p_{s,i}}, \quad (15) $$

where $i$ denotes the $i$-th element of the tensor $p_t$ or $p_s$.
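In PyTorch, Eq (15) can be sketched with the built-in KL-divergence routine (an illustrative snippet; the small epsilon for numerical stability is our own addition):

```python
import torch.nn.functional as F

def kl_loss(p_t, p_s, eps=1e-12):
    """Eq (15): KL(p_t || p_s), averaged over the batch.
    p_t, p_s: (batch, K) probability distributions."""
    # F.kl_div expects log-probabilities of the approximation (student)
    # and probabilities of the target (teacher).
    return F.kl_div((p_s + eps).log(), p_t, reduction="batchmean")
```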

We select three complementary losses for KDCL: the PC loss proposed by Ishida et al. [15], the FWD loss proposed by Yu et al. [16], and the SCL-NL loss proposed by Chou et al. [18]. Supposing that $p_s$ is the probability distribution for sample $x$ from the student model and $\bar{y}$ is the complementary label of $x$, these complementary losses are shown in Eqs (16)–(18):

$$ \bar{\mathcal{L}}_{\mathrm{PC}}(p_s, \bar{y}) = \frac{K-1}{n} \sum_{y \neq \bar{y}} \left(p_{s,y} - p_{s,\bar{y}}\right) - \frac{K(K-1)}{2} + K - 1, \quad (16) $$
$$ \bar{\mathcal{L}}_{\mathrm{FWD}}(p_s, \bar{y}) = -\sum_i \bar{y}_i \log\left((Q^{\top} p_s)_i\right), \quad (17) $$
$$ \bar{\mathcal{L}}_{\mathrm{SCL\text{-}NL}}(p_s, \bar{y}) = \sum_i \bar{y}_i \left(-\log(1 - p_{s,\bar{y}})\right), \quad (18) $$

where $K$ denotes the number of categories in the dataset and $Q^{\top}$ denotes the transpose of $Q$, the $K \times K$ square matrix with all entries $1/(K-1)$ except the zero diagonal.
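The SCL-NL loss of Eq (18) reduces to a negative log on the complementary class; a minimal sketch (our illustration, with hypothetical names) is:

```python
import torch
import torch.nn.functional as F

def scl_nl_loss(logits, y_bar):
    """SCL-NL of Eq (18): -log(1 - p_{s, y_bar}) for the complementary class."""
    p = F.softmax(logits, dim=1)
    p_bar = p.gather(1, y_bar.view(-1, 1)).squeeze(1)   # p_{s, y_bar}
    return -torch.log(1.0 - p_bar + 1e-12).mean()
```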

With parameters $p_t$, $p_s$, and $\bar{y}$, the final loss can be expressed in more detail as follows:

$$ \mathcal{L}_{\mathrm{KD\text{-}PC}}(p_t, p_s, \bar{y}) = \alpha\, \mathcal{L}_{\mathrm{KL}}(p_t, p_s) + \bar{\mathcal{L}}_{\mathrm{PC}}(p_s, \bar{y}), \quad (19) $$
$$ \mathcal{L}_{\mathrm{KD\text{-}FWD}}(p_t, p_s, \bar{y}) = \alpha\, \mathcal{L}_{\mathrm{KL}}(p_t, p_s) + \bar{\mathcal{L}}_{\mathrm{FWD}}(p_s, \bar{y}), \quad (20) $$
$$ \mathcal{L}_{\mathrm{KD\text{-}SCL}}(p_t, p_s, \bar{y}) = \alpha\, \mathcal{L}_{\mathrm{KL}}(p_t, p_s) + \bar{\mathcal{L}}_{\mathrm{SCL\text{-}NL}}(p_s, \bar{y}). \quad (21) $$

Here $\alpha$ is the weighting factor, which controls the degree of influence of the soft labels on the overall classification loss. The value of $\alpha$ is determined in the experiments.
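Putting the pieces together, one epoch of the second KDCL stage could be sketched as below, reusing the `soften`, `kl_loss`, and complementary-loss helpers from the earlier snippets (all hypothetical names; this is our reading of Eqs (14) and (19)–(21), not the authors' code):

```python
import torch

def train_student_epoch(teacher, student, loader, optimizer, comp_loss,
                        T=80.0, alpha=0.5):
    """One epoch of KDCL stage two: the student learns from the teacher's
    soft labels (KL term) and from the complementary labels (CL term)."""
    teacher.eval()
    student.train()
    for x, y_bar in loader:
        with torch.no_grad():
            p_t = soften(teacher(x), T)      # soft labels S_tea, Eq (13)
        logits = student(x)
        p_s = soften(logits, T)              # soft student predictions S_stu
        loss = alpha * kl_loss(p_t, p_s) + comp_loss(logits, y_bar)  # Eq (14)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```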

We evaluate and compare the student models optimized by KDCL with the same models trained only with complementary labels on four public image classification datasets. Three complementary label losses, including the PC loss [15], FWD loss [16], and SCL-NL loss [18], are used as loss functions for training the models. All experiments are carried out on a server with a 15-vCPU Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60 GHz, 80 GB RAM, and one RTX 3090 GPU with 24 GB of memory.

Four benchmark image classification datasets, including MNIST, Fashion-MNIST (F-MNIST), Kuzushiji-MNIST (K-MNIST), and CIFAR10, are used to verify the effectiveness of KDCL.

MNIST: consists of 60,000 28 × 28 pixel grayscale images for training and 10,000 images for testing, with a total of 10 categories representing the digits 0 through 9.

    F-MNIST: is an alternative dataset to MNIST and consists of 10 categories, 60,000 training images and 10,000 test images, each with a size of 28 × 28 pixels.

K-MNIST: is an extension of the MNIST dataset derived from 10 classes of ancient Japanese (Kuzushiji) characters widely used between the mid-Heian period and early modern Japan. K-MNIST contains a total of 70,000 grayscale images of 28 × 28 pixels in 10 categories.

    CIFAR10: consists of 60,000 32 × 32 color images, 50,000 of which are used as the training set and 10,000 as the test set. Each category contains 6000 images.

Following the settings in [15,17,18], we use the unbiased way to select complementary labels for the samples in all datasets. Besides, we apply two different teacher-student pairs to these datasets. Specifically, for MNIST, F-MNIST, and K-MNIST, we choose LeNet-5 [23] as the teacher model and an MLP [24] with 500 hidden neurons as the student model, because these datasets are relatively simple and small networks work well on them. For the CIFAR10 dataset, since color images are more difficult to classify, we need a deeper CNN to extract features; we choose DenseNet-121 [25] as the teacher model and ResNet-18 [26] as the student model.

Regarding training details, for MNIST, F-MNIST, and K-MNIST, we train LeNet-5 and the MLP for 120 epochs and use SGD as the optimizer with a momentum of 0.9 and a weight decay of 0.0001. The initial learning rate is 0.1 and it is halved every 30 epochs. The batch size is set to 128. For the CIFAR10 dataset, we train DenseNet-121 and ResNet-18 for 80 epochs and use SGD as the optimizer with a momentum of 0.9 and a weight decay of 0.0005. The learning rate is selected from {1e-1, 1e-2, 5e-3, 1e-3, 5e-4, 1e-4} and is divided by 10 every 30 epochs.
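For reference, the stated MNIST-family schedule maps directly onto standard PyTorch components; the sketch below assumes the 500-unit MLP student mentioned above and is illustrative only:

```python
import torch
import torch.nn as nn

student = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 500),
                        nn.ReLU(), nn.Linear(500, 10))        # MLP, 500 hidden units
optimizer = torch.optim.SGD(student.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)  # stated SGD setup
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=30, gamma=0.5)  # halve lr every 30 epochs
# Train for 120 epochs with batch size 128, calling scheduler.step() once per epoch.
```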

In Figure 4, we perform a parameter sensitivity analysis of the distillation temperature $T$ in Eq (13) and the soft-label weighting factor $\alpha$ in Eqs (19)–(21).

Figure 4.  Test accuracy for different $T$ with fixed $\alpha$, and comparison results for different $\alpha$ with fixed $T$. The experiments are conducted with LeNet-5 and MLP on MNIST, F-MNIST, and K-MNIST, and with DenseNet-121 and ResNet-18 on CIFAR-10.

We first explore the influence of different distillation temperatures $T$. As we can see, when $T = 1$, which means directly using the probability distribution output by the teacher model as soft labels without softening, KDCL exhibits the worst accuracy. This is because, at low temperature, there is a large gap in the soft labels between the positive and negative classes, making it difficult for the student model to learn effectively. As $T$ gradually increases, the soft labels become smoother and smoother, the student model can learn the knowledge in the soft labels more easily, and the accuracy gradually improves. When $T \geq 80$, the gap between positive and negative classes in the soft labels is extremely small and the influence of the negative classes becomes too large, so the accuracy no longer increases and may even decrease.

Then, we further investigate the optimal value of the soft-label weighting factor $\alpha$. We follow the setting in Hinton et al. [22] and vary $\alpha$ in the range of 0 to 1. On the same dataset, changing $\alpha$ does not have a great impact on the accuracy of KDCL, which indicates that the parameter optimization process of KDCL is not sensitive to the hyperparameter $\alpha$. Nevertheless, the model still achieves the highest accuracy when $\alpha = 0.5$.

Based on the above analysis, we set $T = 80$ and $\alpha = 0.5$ in the subsequent experiments.

We show the accuracy of all models with the three complementary label losses, before and after being optimized by KDCL, on the four datasets. The results are presented in Table 1.

Table 1.  Comparison of classification accuracies between different methods using different network architectures on MNIST, F-MNIST, K-MNIST and CIFAR-10. Each cell lists teacher / student / KDCL-enhanced student (LeNet-5 / MLP / KDCL-MLP columns as scraped).

    Loss      MNIST                      F-MNIST                    K-MNIST                    CIFAR-10
    PC        89.94% / 83.78% / 86.10%   77.22% / 76.67% / 77.42%   67.77% / 60.52% / 60.34%   38.31% / 32.74% / 33.37%
    FWD       85.35% / 83.67% / 84.61%   85.35% / 83.67% / 84.61%   86.85% / 70.86% / 75.41%   60.74% / 44.93% / 46.65%
    SCL-NL    98.18% / 92.06% / 94.33%   85.93% / 83.69% / 84.66%   86.85% / 70.59% / 75.25%   61.64% / 40.46% / 45.98%

In Table 1, we show the experimental results of KDCL, comparing the performance of the student model optimized by KDCL with that of the model trained only with complementary labels across different losses and datasets. On MNIST, which is a relatively simple dataset, all methods achieve high accuracies. With the help of KDCL, we improve the accuracy of the MLP from 83.78% to 86.10% with the PC loss, from 92.07% to 94.32% with the FWD loss, and from 92.06% to 94.33% with the SCL-NL loss; the SCL-NL loss performs best among the three loss functions. Besides, after being enhanced by KDCL, the accuracy of KDCL-MLP falls between that of the MLP and that of LeNet-5. On F-MNIST, which is more complex than MNIST, all methods decrease slightly; KDCL achieves 77.42% with the PC loss, 84.61% with the FWD loss, and 84.66% with the SCL-NL loss. On K-MNIST, which is more complex than F-MNIST, our method does not significantly improve the accuracy of the MLP when using the PC loss, but it improves accuracy by 4.55% with the FWD loss and 4.66% with the SCL-NL loss. On CIFAR-10, the most complex of the four datasets, there is a significant drop in accuracies; nevertheless, the student model can still be optimized by KDCL, demonstrating its robustness and effectiveness across different datasets.

    We show the testing process of all models in Figure 5.

    Figure 5.  Comparison of the testing process of teacher models, student models and KDCL-student models on four datasets.

In Figure 5, we present the convergence behavior of all models in our experiments. The results show that the student model distilled by KDCL converges faster than the one trained only with complementary labels. This indicates that the model can learn the features of the images more accurately and efficiently when utilizing both soft labels and complementary labels.

Additionally, we observe that the PC loss exhibits a decrease in accuracy on the more challenging datasets, particularly on CIFAR10. This is because the PC loss uses the Sigmoid function for normalization, which can lead to negative values in the loss calculation and prevent the model from finding better parameters during updates. The phenomenon becomes more pronounced on the CIFAR10 dataset, where a peak appears in the accuracy curve. However, KDCL alleviates this phenomenon and shifts the peak to a later epoch, demonstrating its effectiveness in addressing the limitations of existing CLL methods and improving the performance of complementary label learning.

In this study, we established a knowledge distillation training framework for CLL, called KDCL. As stated in the introduction, the supervision information in complementary labels is extremely limited. The proposed framework employs a deep CNN model with higher accuracy to soften complementary labels into soft labels, and both the soft labels and the original complementary labels are used to train the classification model. After optimization by KDCL, the accuracy improves by 0.5–4.5% compared with using the normal CLL methods alone.

The main limitations lie in several aspects. First, KDCL's performance can be influenced by the choice of teacher-student models and CLL algorithms. Our experiments use specific combinations of models and algorithms, and the results may vary with different configurations; by choosing better CNN architectures and stronger CLL algorithms, KDCL may achieve better performance on more difficult datasets. Another drawback of the proposed scheme is its time cost. Because KDCL is a two-stage framework that first trains a high-accuracy teacher model using complementary labels, its overall training time is relatively high; training a high-accuracy model typically takes a considerable amount of time, which poses a challenge to the efficiency of KDCL. In addition, KDCL has only been tested on public datasets whose data distributions are relatively uniform. In the future, we will consider expanding the application scope of KDCL to dynamically imbalanced data for CLL, or combining it with hybrid deep learning models [27,28,29].

In this paper, we made the first attempt to leverage a knowledge distillation training framework in CLL. To enhance the supervision information present in complementary labels, which is often overlooked in existing CLL methods, we propose a complementary label enhancement framework based on knowledge distillation, called KDCL. Specifically, KDCL consists of a teacher model and a student model. Through knowledge distillation, the teacher model transfers its softened knowledge to the student model, which then learns from both soft labels and complementary labels to improve its classification performance. The experimental results on four benchmark datasets show that KDCL improves the classification accuracy of CLL while maintaining robustness and effectiveness on difficult datasets.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

This work was supported by the National Natural Science Foundation of China (Nos. 61976217, 62306320), the Natural Science Foundation of Jiangsu Province (No. BK20231063), the Fundamental Research Funds for the Central Universities (No. 2019XKQYMS87), and the Science and Technology Planning Project of Xuzhou (No. KC21193).

    All authors declare that they have no conflicts of interest.


