Research article

Do diversity & inclusion of human capital affect ecoefficiency? Evidence for the energy sector

  • Received: 06 April 2024 Revised: 02 August 2024 Accepted: 14 August 2024 Published: 20 August 2024
  • JEL Codes: M12, M14, M53

  • The aim of this study was to assess the impact of diversity and inclusion (D&I) initiatives in workplaces on both financial performance and environmental considerations (referred to as ecoefficiency, ECO). We focused on the energy sector, a significant environmental contributor, and the research spanned from 2016 to 2022, analyzing a broad global sample of 373 firms from 53 countries. ECO was evaluated by integrating environmental scores and conventional financial metrics using data envelopment analysis (DEA).

    The findings revealed a significant positive relationship between ECO and the collective indicator of diversity, inclusion, people development, and the absence of labor incidents. Specifically, practices related to workforce diversity, the implementation of cultural and gender diversity policies, and investments in employee training and development opportunities were found to be beneficial for ECO. Additionally, we found that these policies affect the environmental component of ECO. However, no significant relationship was observed between ECO and practices related to inclusion policies or labor controversies.

    Furthermore, the results suggested that ECO within the energy sector is influenced by factors such as board size, the integration of environmental, social, and governance (ESG) aspects into executive remuneration, the adoption of a corporate social responsibility (CSR) strategy, alignment with the United Nations (UN) Environmental Sustainable Development Goals (SDGs), and the implementation of quality management systems. Conversely, CEO-chairman duality and the presence of independent board members do not significantly impact ECO in energy companies.

    These research findings provide valuable insights and recommendations for industry managers pursuing sustainable business practices, particularly through effective talent management strategies. Additionally, they offer guidance for investors interested in constructing environmentally conscious portfolios.

    Citation: Óscar Suárez-Fernández, José Manuel Maside-Sanfiz, Mª Celia López-Penabad, Mohammad Omar Alzghoul. Do diversity & inclusion of human capital affect ecoefficiency? Evidence for the energy sector[J]. Green Finance, 2024, 6(3): 430-456. doi: 10.3934/GF.2024017




    Biometric recognition of individuals has recently gained attention in applications ranging from international border crossings to unlocking mobile devices. Technological advances and improved accuracy, coupled with the growing demands of real-world applications, have led to the emergence of multimodal biometric systems. The integration of multiple biometric modalities in a multimodal system has proven more robust to non-universality, noisy data, and spoof attacks [1,2], and has been shown to be very effective in improving recognition performance [1,3]. However, the design and implementation of the fusion algorithm is a challenging task, as its benefits depend on the selection of the biometric modality, computational and storage resources, accuracy, choice of fusion strategy, and cost [1,3,4]. The fusion of multiple biometric evidences may be carried out at four different levels: sensor, feature, match score, or decision [1,2,3,4,5,6,7]. Among these fusion levels, feature-level fusion yields better recognition performance because more discriminative features from the different modalities can be preserved [1,8]. In a multimodal system, feature fusion is usually performed by integrating the features extracted from the different modalities into a joint, compact feature representation. For homogeneous feature sets (having the same measurement scale and dimension), feature fusion is straightforward, and a fused feature vector can be obtained with a weighted-average technique. For heterogeneous feature sets (e.g., face and fingerprint), extracted from different modalities using different feature extraction algorithms, a single feature set can be formed by concatenation [9]. But for incompatible feature sets (e.g., the IrisCode feature of the Iris and the minutiae feature of the fingerprint), direct concatenation is difficult due to inherent differences in the feature representations [10].
Several authors [7,9,10,11,12,13,14,15,16,17] have explored different feature fusion approaches in order to fuse different modalities reasonably and effectively for multimodal systems.

    Conventional fusion methods reported in the literature, such as weighted feature fusion, concatenation, or weighted concatenation, ignore the intrinsic relationship between the feature sets, become inefficient as the dimensionality of the feature space increases, and require a complex matcher to classify the fused feature vector. To address these limitations, we propose in this paper a feature-fusion learning method based on maximizing mutual information. It retains the effective discriminant information within the feature sets and removes the redundant information [18], which is essential for effective recognition. For this, we explore canonical correlation analysis (CCA), which deals with the mutual information between two sets of multidimensional data [19,20] by identifying linear relationships. The objective function of CCA maximizes the pair-wise correlations between two feature sets: it seeks the optimal transformation that makes the corresponding variables in the two feature sets maximally correlated. This approach learns a mapping into a space suited to correlation measurement, maximizing the similarity between the discriminatory feature sets of different modalities while removing the redundant ones. In addition, CCA is invariant to affine transformations, which eliminates the need for a complex matcher design. In the literature, CCA and several of its variants have been proposed for finding maximally correlated projections and have demonstrated better results than the prevalent feature fusion methods in a wide variety of domains. In [18], CCA is proposed for feature fusion, where canonical correlation features from face images and handwritten Arabic numerals are extracted as effective discriminant information for recognition.
In another work [21], feature vectors extracted from palmprint and finger geometry are fused using CCA to obtain a reduced feature space dimension, thus improving the average recognition rate. In [22], kernel CCA (KCCA) based feature fusion is explored to discover a nonlinear subspace learning representation between the ear and profile face modalities. To improve the classification performance of CCA, a supervised local-preserving canonical correlation analysis method (SLPCCAM) is proposed for fingerprint and finger-vein fusion [23]. To represent the similarity between samples in the same class and the dissimilarity between samples in different classes, a feature-level fusion approach based on Discriminant Correlation Analysis (DCA) is presented in [24] for Iris, Fingerprint, and Face multimodal recognition. To deal with multiple sets of features, a fusion approach based on multiset generalized canonical discriminant projection (MGCDP), which incorporates class associations, is studied in [25]. Experiments show that MGCDP achieves promising recognition accuracy on palm vein, face, and fingerprint.

    Further, a feature fusion approach based on CCA is presented in [26] for Iris and Fingerprint images and achieves significantly improved performance. In another work [27], CCA based on L1-norm minimization (CCA-L1) and its extension are proposed to deal with multi-feature data for feature learning and image recognition. In [28], two-dimensional supervised canonical correlation analysis (2D-SCCA) and multiple-rank supervised canonical correlation analysis (MSCCA) algorithms are proposed to perform multiple feature extraction for classification; experiments show that MSCCA achieves promising recognition accuracy on object, face, and fingerprint recognition. In another study [29], multiple rank canonical correlation analysis (MRCCA) and its multiset version, referred to as multiple rank multiset canonical correlation analysis (MRMCCA), are explored for effective feature extraction from matrix data; the authors demonstrate the superiority of these methods in terms of classification accuracy and computing time on face, fingerprint, and palm data sets. Recently, 2D models for multi-view feature extraction and fusion of matrix data, such as two-dimensional locality preserving canonical correlation analysis (2D-LPCCA) and two-dimensional sparsity preserving canonical correlation analysis (2D-SPCCA), are proposed in [30]. To reveal the inherent data structure and its relations, 2D-LPCCA utilizes neighborhood information while 2D-SPCCA utilizes sparse reconstruction information.

    Motivated by the success of CCA and its extensions in feature fusion, in this paper we use CCA to derive discriminative features by exploring significant relationships between the Iris and Fingerprint feature sets of the same person. We propose a simple, extremely fast, and promising unified framework in which the feature fusion information is contributed mainly by CCA. In summary, the key contributions of this work are:

    ● We propose a novel approach for accurately modeling the feature fusion of Iris and Fingerprint modalities by maximizing the pair-wise correlations between them.

    ● We show the effectiveness of the proposed model by experimenting with it on a publicly available SDUMLA-HMT multimodal dataset. The affine invariance property of CCA eliminates the need for a complex matcher and helps to design a rotation-invariant recognition system.

    ● We explore the effect of feature dominance and laterality of the selected modalities on the performance of a developed system by performing cross-match biometrics feature fusion. For that, performance evaluation is carried out considering i) Left Iris and Right Fingerprint ii) Right Iris and Left Fingerprint images of the same person and obtained interesting initial results for the developed robust multimodal biometric system.

    ● We evaluate our proposed approach showing significantly improved recognition performance of the multimodal biometric system over other existing methods.

    Paper organization: Proposed CCA based feature fusion multimodal system with different distance measures is outlined in Section 2. An experimental setup is described in Section 3. Experimental results and analysis are presented in Section 4. Cross match experimentation and analysis based on the proposed fusion approach are discussed in Section 5 and then conclusions in Section 6.

    In this paper, we present a framework for feature-level fusion using canonical correlation analysis. Although the proposed framework applies to any biometric modality, we restrict it to the Fingerprint and Iris modalities of the same subjects. Both Fingerprint [2,3] and Iris [31,32] recognition offer high accuracy, reliability, simplicity, and wide acceptance, making them very promising technologies for large-scale deployment compared with other biometric modalities. An overview of our framework is shown in Figure 1; it consists of a training phase and a recognition phase. We learn canonical correlation features from the Iris and Fingerprint feature sets in the canonical space by adopting a correlation criterion function and devise effective discriminant representations. During the training phase, transformation matrices (basis vectors) are found that project the Iris and Fingerprint feature sets into the canonical space. Then, by applying the summation method in the canonical space, the fused feature vectors are created for the 'n' training samples and stored in the database. During the recognition phase, canonical correlation features are first extracted for the test sample and projected into the canonical space using the same transformation matrices; the test fused feature is then created by the summation method and compared with the stored fused vectors to find a match or non-match. In the following sections, we explain the fundamentals of CCA and show how it is suitable for information fusion at the feature level.

    Figure 1.  Overview of proposed feature fusion model for multimodal system using canonical correlation analysis (CCA).

    CCA is a subspace learning method that learns a common representation by maximizing the correlation between two sets of feature vectors when projected onto the common space [19,20]. Given two zero-mean feature sets $X = [X_1, X_2, \ldots, X_n] \in \mathbb{R}^{p \times n}$ and $Y = [Y_1, Y_2, \ldots, Y_n] \in \mathbb{R}^{q \times n}$ from the same $n$ subjects, CCA, as proposed by Hotelling [19], computes linear transformations $W_x$ and $W_y$, one for each feature set, which make the corresponding variables in the two feature sets maximally correlated in the projected space. The information on associations between $X$ and $Y$ is captured by the within-sets and between-sets covariance matrices. The following correlation function [33] is maximized to find $W_x$ and $W_y$:

    $$\rho = \max_{W_x, W_y} \frac{W_x^T C_{xy} W_y}{\sqrt{(W_x^T C_{xx} W_x)\,(W_y^T C_{yy} W_y)}} \qquad (2.1)$$

    Here, the within-sets covariance matrices are denoted by $C_{xx} \in \mathbb{R}^{p \times p}$ and $C_{yy} \in \mathbb{R}^{q \times q}$, and the between-sets covariance matrices by $C_{xy} \in \mathbb{R}^{p \times q}$ and $C_{yx} = C_{xy}^T$.

    The maximization of Eq (2.1) is equivalent to maximizing the numerator [20], subject to the constraints $W_x^T C_{xx} W_x = W_y^T C_{yy} W_y = 1$ for $i = j$; the subsequent canonical correlations are uncorrelated for different solutions where $i \neq j$. According to [20], the canonical correlations between $X$ and $Y$ are found by solving the eigenvalue equations,

    $$\left.\begin{aligned} C_{xx}^{-1} C_{xy} C_{yy}^{-1} C_{yx} W_x &= \rho^2 W_x \\ C_{yy}^{-1} C_{yx} C_{xx}^{-1} C_{xy} W_y &= \rho^2 W_y \end{aligned}\right\} \qquad (2.2)$$

    where the eigenvalues $\rho^2$ are the squared canonical correlations (the diagonal matrix of eigenvalues), and the eigenvectors $W_x$ and $W_y$ are the normalized canonical correlation basis vectors. From Eq (2.2), the matrices $C_{xx}^{-1} C_{xy} C_{yy}^{-1} C_{yx}$ and $C_{yy}^{-1} C_{yx} C_{xx}^{-1} C_{xy}$ have the same eigenvalues but different eigenvectors, and their solutions are related by [33],

    $$C_{xy} W_y = \rho \lambda_x C_{xx} W_x \quad \text{and} \quad C_{yx} W_x = \rho \lambda_y C_{yy} W_y, \quad \text{where } \lambda_x = \lambda_y^{-1} = \sqrt{\frac{W_y^T C_{yy} W_y}{W_x^T C_{xx} W_x}} \qquad (2.3)$$

    Equation (2.3) has a number of nonzero eigenvalues equal to $d = \mathrm{rank}(C_{xy}) \leq \min(p, q)$, ordered such that $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_d$. The sorted eigenvectors form the transformation matrices $W_x$ and $W_y$, and $X^{*} = W_x^T X \in \mathbb{R}^{d \times n}$ and $Y^{*} = W_y^T Y \in \mathbb{R}^{d \times n}$ are referred to as canonical variates or projected correlation features. The canonical variates are uncorrelated within each feature set, showing nonzero correlation only at the corresponding indices of the canonical variates.
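The derivation above can be sketched numerically. The following snippet is a minimal NumPy illustration, not the paper's implementation: the function name and the small ridge term `reg` (added for numerical stability on small samples) are our own. It solves the eigenvalue problem of Eq (2.2) for $W_x$ and recovers $W_y$ via the relation in Eq (2.3):

```python
import numpy as np

def cca(X, Y, d=None, reg=1e-8):
    """Canonical correlation analysis for zero-mean feature sets.

    X: (p, n) and Y: (q, n) hold n paired samples column-wise.
    Returns basis matrices Wx (p, d), Wy (q, d) and the canonical
    correlations rho, solving Eq (2.2) with a small ridge term.
    """
    n = X.shape[1]
    Cxx = X @ X.T / (n - 1) + reg * np.eye(X.shape[0])
    Cyy = Y @ Y.T / (n - 1) + reg * np.eye(Y.shape[0])
    Cxy = X @ Y.T / (n - 1)
    # Solve Cxx^{-1} Cxy Cyy^{-1} Cyx Wx = rho^2 Wx  (first line of Eq (2.2))
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    evals, vecs = np.linalg.eig(M)
    order = np.argsort(-evals.real)          # sort eigenvalues descending
    if d is None:
        d = min(X.shape[0], Y.shape[0])
    Wx = vecs[:, order[:d]].real
    rho = np.sqrt(np.clip(evals.real[order[:d]], 0.0, 1.0))
    # Recover Wy from Eq (2.3): Wy ∝ Cyy^{-1} Cyx Wx / rho
    Wy = np.linalg.solve(Cyy, Cxy.T @ Wx) / np.maximum(rho, 1e-12)
    return Wx, Wy, rho
```

Projecting each modality with the returned bases gives the canonical variates, whose pairwise sample correlations approximate the entries of `rho`.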

    The graphical interpretation of applying CCA to Iris and Fingerprint images of the same subjects in our experiment is shown in Figure 2. Here we are interested in learning the common representations contained in the Iris and Fingerprint feature spaces, which are reflected in the correlations between them, by finding transformation matrices $W_x$ and $W_y$ that maximize the pair-wise correlations between the two sets. We applied CCA to project the extracted Iris and Fingerprint feature sets of the $n$ training samples into the canonical space. Within the canonical space, a fused feature may be obtained by either concatenation or summation [18]. Let $X = [X_1, X_2, \ldots, X_n] \in \mathbb{R}^{p \times n}$ and $Y = [Y_1, Y_2, \ldots, Y_n] \in \mathbb{R}^{q \times n}$ be the Iris and Fingerprint feature sets, respectively, with feature dimensions $p$ and $q$. After obtaining the eigenvectors $W_x = [w_{x1}, w_{x2}, \ldots, w_{xd}] \in \mathbb{R}^{p \times d}$ and $W_y = [w_{y1}, w_{y2}, \ldots, w_{yd}] \in \mathbb{R}^{q \times d}$, for any sample $(X_i, Y_i)$ let $\tilde{X}_i = X_i - \bar{X}$ and $\tilde{Y}_i = Y_i - \bar{Y}$, where $\bar{X}$ and $\bar{Y}$ are the means of the feature vectors $X$ and $Y$. Then the fused feature vector $Z$ can be computed by concatenation, Eq (2.4), or summation, Eq (2.5), of the transformed feature vectors [18]:

    $$Z = \begin{bmatrix} W_x^T \tilde{X}_i \\ W_y^T \tilde{Y}_i \end{bmatrix} = \begin{bmatrix} W_x & 0 \\ 0 & W_y \end{bmatrix}^T \begin{bmatrix} \tilde{X}_i \\ \tilde{Y}_i \end{bmatrix} \qquad (2.4)$$
    Figure 2.  Graphical interpretation where Iris and Fingerprint feature sets are mapped to a common subspace using CCA.

    Or

    $$Z = W_x^T \tilde{X}_i + W_y^T \tilde{Y}_i = \begin{bmatrix} W_x \\ W_y \end{bmatrix}^T \begin{bmatrix} \tilde{X}_i \\ \tilde{Y}_i \end{bmatrix} \qquad (2.5)$$

    We use the summation method, Eq (2.5), in our proposed approach to reduce computational complexity, as the vector produced by the concatenation method is twice the length of that produced by summation. During the training phase, the fused feature vectors Z are stored as templates in the gallery. In the testing phase, the fused feature vector of the query sample can be classified using any classifier.
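As a concrete illustration of Eq (2.5), the summation fusion reduces to one line of linear algebra. The helper below is a hypothetical sketch (the function and argument names are ours, not from the paper):

```python
import numpy as np

def fuse_sum(Wx, Wy, X, Y, x_mean, y_mean):
    """Summation fusion of Eq (2.5): Z = Wx^T (X - x̄) + Wy^T (Y - ȳ).

    X: (p, n) iris features and Y: (q, n) fingerprint features of the
    same n samples; Wx (p, d), Wy (q, d) are the CCA basis matrices.
    Returns fused vectors Z of shape (d, n).
    """
    return Wx.T @ (X - x_mean) + Wy.T @ (Y - y_mean)
```

Note that the fused vector has length d, whereas the concatenation of Eq (2.4) would give length 2d, which is the computational saving mentioned above.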

    In this paper, the fused feature of the test image Zt is matched against the gallery fused feature vectors Z using three different distance measures for the feature-level fusion: Manhattan, Euclidean, and cosine similarity. By definition [34], the Euclidean and Manhattan distances measure the distance between two vectors, taking magnitude into account. The cosine similarity measure considers only the angle between vectors, discards scaling of the magnitude, and also overcomes the sensitivity of the Euclidean distance to outliers. At the matching stage, when classifying a fused feature vector in the canonical subspace rather than a single vector, plain distance measures lose effectiveness, and the angles between subspaces become the practical measurement. Hence, simple matchers are selected to make the matching process extremely fast and to study the performance of the multimodal system as contributed mainly by the CCA-based feature fusion algorithm. With the Manhattan distance, the matching image is found by comparing the test vector Zt with the training vectors using Eq (2.6):

    $$\arg\min_{i \in [1,2,\ldots,n]} \frac{1}{N} \sum \left| Z_t - Z_i \right| \qquad (2.6)$$

    where N is used as a scaling factor and equal to the length of the fused feature vector. As this distance measure does not take the shortest path possible, it yields a higher distance estimate.

    To find a match using Euclidean distance, we find the matching training image that satisfies Eq (2.7)

    $$\arg\min_{i \in [1,2,\ldots,n]} \lVert Z_t - Z_i \rVert_2 \qquad (2.7)$$

    With the cosine similarity measure, the cosine of the angle between the test vector and each training vector is computed. The match for the test vector Zt is found by Eq (2.8); an angle closer to zero indicates a better match.

    $$\arg\min_{i \in [1,2,\ldots,n]} \cos^{-1}\!\left(\frac{Z_t^T Z_i}{\lVert Z_t \rVert \cdot \lVert Z_i \rVert}\right) \qquad (2.8)$$

    The distance between Zt and Zi approaches '0', when the estimates are close to each other.
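The three matchers of Eqs (2.6)-(2.8) can be sketched together as follows. This is a minimal NumPy illustration; the function name is ours:

```python
import numpy as np

def match(Zt, Z):
    """Match a test fused vector Zt (d,) against gallery columns Z (d, n)
    using Eqs (2.6)-(2.8); returns the best gallery index per measure."""
    N = Zt.shape[0]                                  # scaling factor of Eq (2.6)
    diff = Z - Zt[:, None]
    manhattan = np.abs(diff).sum(axis=0) / N         # Eq (2.6)
    euclidean = np.sqrt((diff ** 2).sum(axis=0))     # Eq (2.7)
    cosang = (Zt @ Z) / (np.linalg.norm(Zt) * np.linalg.norm(Z, axis=0))
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))    # Eq (2.8)
    return (int(np.argmin(manhattan)),
            int(np.argmin(euclidean)),
            int(np.argmin(angle)))
```

For an exact gallery copy of the test vector, all three measures evaluate to zero at that index and agree on the match.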

    The SDUMLA-HMT multimodal database from Shandong University [35] comprises five biometric modalities (Face, Finger vein, Gait, Iris, and Fingerprint) of the same subjects. The database contains a total of 106 subjects, 61 males and 45 females, aged between 17 and 31 [35]. Here, we chose the Iris and Fingerprint modalities for evaluation, with details shown in Table 1.

    Table 1.  Details of Iris and Fingerprint SDUMLA-HMT database.

    Modality     Resolution  Format               Images (No. Subjects × Samples)                       Total / Sensor
    Iris         768×576     256-gray-level BMP   Left Iris: 106 × 5 = 530; Right Iris: 106 × 5 = 530   1060 images
    Fingerprint  256×304     256-gray-level BMP   106 × 6 fingers × 8 impressions = 5088                FPR620 optical scanner
                                                  (one finger only: 106 × 1 × 8 = 848)


    The recognition performance of a biometric system depends heavily on the quality of the biometric sample under consideration [36]. Information about the quality of the biometric sample prior to matching can be used to extract reliable features and boost the performance of a biometric recognition system [37].

    In this work, Iris image quality is assessed using VASIR (Video-based Automatic System for Iris Recognition) [38], an Iris recognition software platform developed by NIST. VASIR uses the automatic image quality measurement (AIQM) method to generate scalar quality scores [37,38]. The quality score calculated for each Iris image is shown in Figure 3. For the SDUMLA database, a threshold of 14.73615 is derived from the average and the range ((Max − Min)/4) of the quality scores over the entire database. Iris images with a quality score greater than or equal to this threshold are selected for the experiments.

    Figure 3.  Bottom row indicates the quality score for each Iris image using VASIR (Video-based Automatic System for Iris Recognition).

    In this work, the NIST Fingerprint Image Quality algorithm (NFIQ) [39] is used to assess the quality of the Fingerprints. NFIQ analyzes a Fingerprint image and assigns one of five quality levels, with '1' being the highest quality and '5' the lowest [40]. The quality level calculated for each Fingerprint image is shown in Figure 4. Images with a quality level of 1, 2, or 3 are selected for the experiments; Fingerprint images with NFIQ level 4 or 5 are considered bad and are not recommended for biometric enrollment.

    Figure 4.  Bottom row indicates the NIST Fingerprint Image Quality (NFIQ) level for each Fingerprint Image.

    From the quality assessment of the Iris images, it was found that Iris images from the SDUMLA-HMT database have very low contrast between the sclera and the iris, causing iris segmentation to fail. Hence, an Iris image enhancement step is necessary before segmentation. The 768×576 gray-level eye images are resized to 384×288, and contrast enhancement is performed using 'imadjust' and a log transformation. This results in a smoother transformation that enhances mostly useful details and thus improves segmentation. We then used the automatic Iris segmentation approach presented in [41] to extract the Iris region from the eye image. For correct segmentation, the radius values range from 78 to 148 pixels for the Iris and from 14 to 58 pixels for the pupil.

    After quality assessment with NFIQ, each Fingerprint image is resized to 256×256 pixels. The center point (the uppermost point of the innermost curving ridge) is then detected in the resized images. Using this center point as a reference point achieves translation invariance of the Fingerprint images. A circular region of interest around the reference point is determined, tessellated into concentric bands, and each band is further divided into sectors.

    To achieve good performance in both the unimodal Iris [42] and unimodal Fingerprint [43] systems, fixed-length IrisCode and FingerCode representations are generated by extracting features from the preprocessed images using Gabor filters. For the Iris images, normalization follows the segmentation step to make the Iris representation invariant to iris size and pupil dilation. The extracted iris is mapped to fixed polar image coordinates of 20 (r) × 240 (θ); these values are the radial and angular resolution of the normalized image, respectively, a trade-off between noise removal and a reasonable template size. The normalized images are convolved with a log-Gabor filter for feature extraction. Encoding is then performed by mapping the phase responses of the filter to one of the four quadrants of the complex plane, quantized to '0's and '1's. This encoded binary representation of the Iris image is referred to as the IrisCode. Per [41], the total number of bits in the IrisCode is the angular resolution times the radial resolution, times 2, times the number of filters. This produces a fixed-length (240 × 20 × 2 × 1) 9600×1 binary feature vector.
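The phase-quadrant quantization step described above can be illustrated with a short sketch. We assume the complex log-Gabor responses over the 20 × 240 normalized image are already available (the function name is ours); the quantization maps each response to two sign bits, one per quadrant axis, giving the 9600-bit IrisCode for a single filter:

```python
import numpy as np

def phase_quadrant_encode(responses):
    """Quantize complex log-Gabor responses to the 2-bit phase-quadrant
    code: one bit for the sign of the real part, one for the imaginary.

    responses: complex array of shape (r, theta) from one filter.
    Returns a flat binary vector of length r * theta * 2.
    """
    bits = np.empty(responses.shape + (2,), dtype=np.uint8)
    bits[..., 0] = (responses.real >= 0)   # first quadrant-axis bit
    bits[..., 1] = (responses.imag >= 0)   # second quadrant-axis bit
    return bits.ravel()
```

With a 20 × 240 response array and one filter, the output length is 20 × 240 × 2 = 9600, matching the IrisCode size stated above.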

    We used Gabor filters to capture the texture information of the preprocessed Fingerprint images at different orientations. Features for the Fingerprint images are obtained by convolving the preprocessed images with Gabor filters at eight orientations, as proposed in [43]. The advantages of using Gabor filters for Fingerprints are: i) they remove noise, ii) they preserve the ridge and valley structures, iii) they provide the information contained in an orientation, and iv) minutiae can be viewed as anomalies in parallel ridges. This texture information is captured by determining the average absolute deviation from the mean of the gray values in the individual sectors of the filtered images, forming the Fingerprint feature vector, the 'FingerCode'. In our experiment, we used a total of 5 concentric bands, each 18 pixels wide and divided into 16 sectors. Hence, a FingerCode of size 640×1 is formed from the selected parameters [No. of concentric bands × No. of sectors per band × No. of Gabor filters]. The generated feature vector is real-valued. In this work, a static one-bit discretization scheme using simple threshold-based binarization for the quantization of each feature element [44] is implemented: the feature mean of the entire training set is computed and set as the threshold, and quantization then yields a binary representation of the real-valued 640×1 feature vector. The primary purpose of the discretization step is to allow a Hamming distance matcher to be employed for the FingerCode features as well.
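The sector-wise average-absolute-deviation features and the one-bit discretization can be sketched as below. This is a hypothetical illustration (function names are ours), assuming the integer sector map `sector_ids` has been precomputed from the tessellation step:

```python
import numpy as np

def fingercode(filtered, sector_ids, n_sectors=80):
    """Average-absolute-deviation (AAD) FingerCode.

    filtered: (n_filters, H, W) Gabor-filtered images, one per orientation;
    sector_ids: (H, W) integer map giving each pixel's sector index.
    With 5 bands x 16 sectors = 80 sectors and 8 filters, the code
    has length 80 * 8 = 640.
    """
    codes = []
    for img in filtered:                       # one Gabor orientation at a time
        for s in range(n_sectors):
            vals = img[sector_ids == s]
            codes.append(np.mean(np.abs(vals - vals.mean())))  # AAD per sector
    return np.asarray(codes)

def binarize(features, threshold):
    """Static one-bit discretization: 1 where a feature >= the threshold
    (the training-set feature mean in the scheme described above)."""
    return (features >= threshold).astype(np.uint8)
```

The binarized 640-bit FingerCode can then be compared with a Hamming distance matcher, as with the IrisCode.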

    In this work, the performance of the unimodal Iris and unimodal Fingerprint recognition systems is evaluated using a Hamming distance (HD) matcher. The advantage of using a single matcher for both modalities is that it improves processing speed, reduces system complexity, and simplifies the design process. HD offers fast matching because it is calculated only over bits generated from the actual Iris or Fingerprint region. Neither feature representation, IrisCode or FingerCode, is rotationally invariant. To make the recognition system rotation-invariant, a circular shift of −15° to +15° is used while calculating the HD for the IrisCodes as well as the FingerCodes; the minimum HD over these shifts indicates the best match [41]. Further, the unimodal system performance is also tested with the Manhattan, Euclidean, and cosine similarity measures.
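A rotation-tolerant Hamming matcher of the kind described can be sketched as follows. This is a simplification (the function name and `max_shift` parameter are ours): it circularly shifts the whole binary vector, whereas shifting an actual IrisCode template operates row-wise on the 2-D normalized image and only over unmasked bits:

```python
import numpy as np

def min_hamming(code_a, code_b, max_shift=15):
    """Rotation-tolerant normalized Hamming distance: code_b is circularly
    shifted over +/- max_shift positions and the minimum HD is kept."""
    n = code_a.size
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        hd = np.count_nonzero(code_a != np.roll(code_b, s)) / n
        best = min(best, hd)
    return best
```

Identical codes, or codes that differ only by a rotation within the shift range, yield a minimum HD of zero.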

    We first performed dimensionality reduction on the extracted feature vectors of the multimodal Iris and Fingerprint samples using principal component analysis (PCA). This minimizes the computational cost of the training phase and avoids the small-sample-size problem [18]. In PCA, the upper bound on the feature vector length corresponds to the nonzero eigenvalues, which equals 'total images − 1' for each modality. In this work, we reduce the Iris feature vector of 9600×1 dimensions and the Fingerprint feature vector of 640×1 dimensions to two feature vectors of the same reduced dimension (e.g., 235×1 for the right images). In the training phase, the reduced feature vectors of the Iris and Fingerprint are further processed by CCA, as shown in Figure 5. The two projection matrices Wx and Wy and the single fused feature vector Z, defined in Eq (2.5), are obtained and then stored in the database as the template.
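The PCA reduction step can be sketched with an SVD-based projection (a minimal illustration; function and variable names are ours). For column-sample data with n images, at most n − 1 components carry nonzero variance after centering, which is the 'total images − 1' bound mentioned above:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project column-sample data X (p, n) onto its top principal components.

    Returns the reduced data (n_components, n), the projection matrix
    W (p, n_components), and the feature mean used for centering.
    """
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # SVD of the centered data: columns of U are the principal axes
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    W = U[:, :n_components]
    return W.T @ Xc, W, mean
```

The same W and mean must be reused to project test samples before the CCA stage, mirroring how the stored transformation matrices are reused at recognition time.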

    Figure 5.  Proposed feature level fusion approach.

    In the testing phase, the test sample features are first projected into the canonical space using the same projection matrices Wx and Wy. Then, by applying the summation method of Eq (2.5), the test fused feature vector Zt is created. This test fused feature vector Zt is compared with the stored fused vector templates Z for matching based on different distance or similarity measures, as described by Eqs (2.6), (2.7) and (2.8).
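    The test-phase matching can then be sketched as below (again an illustration with hypothetical names; the stored means used for centering are an assumption carried over from the training step):

```python
import numpy as np

def match_test_pair(x_t, y_t, Wx, Wy, templates, mean_x, mean_y):
    """Project a test Iris/Fingerprint feature pair with the stored
    Wx, Wy, fuse by summation to obtain Zt, and return the cosine
    similarity of Zt against every enrolled fused template."""
    zt = (x_t - mean_x) @ Wx + (y_t - mean_y) @ Wy
    T = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    return T @ (zt / np.linalg.norm(zt))
```

    The highest similarity (or, for the distance measures, the smallest distance) then decides whether the claimed identity is accepted.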

    The recognition performance of the proposed feature fusion method is evaluated on the Right Iris and Right thumb Fingerprint images of the multimodal database in order to rigorously test the designed framework and algorithm. Here, based on the quality results and the added constraints of correct segmentation of the Iris and correct detection of the central point of the Fingerprint, only 59 of the 106 subjects, having both modalities, are selected. In this work, for both modalities, we use the first 4 images per subject in the training set (a total of 59 classes with 4 impressions per class) and the remaining images for testing. Thus, for both modalities, a total of 2×59×4 = 472 images are used for training, with a total of 354 intra-class comparisons (genuine trials) and 27376 inter-class comparisons (imposter trials).
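    The genuine and imposter counts quoted above follow directly from the protocol formulas G = N·t·(t−1)/2 and I = N·(N−1)·t·t/2; a quick check:

```python
def trial_counts(N, t):
    """Number of genuine (intra-class) and imposter (inter-class)
    comparisons for N classes with t images per class."""
    genuine = N * t * (t - 1) // 2
    imposter = N * (N - 1) * t * t // 2
    return genuine, imposter

print(trial_counts(59, 4))  # (354, 27376): right-side experiment
print(trial_counts(35, 4))  # (210, 9520): cross-match experiment
```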

    The experimental results for the Right Iris unimodal system and the Right Fingerprint unimodal system are presented in Table 2. EERs of 1.9762% and 2.7287% are obtained for the individual Iris and Fingerprint recognition systems, respectively, using the Hamming distance matcher. Furthermore, for a fair comparison, we have applied PCA to the features extracted from the individual modalities (IrisCode and FingerCode) and performed recognition using the principal components. Table 2 shows the performance of the individual Iris and Fingerprint recognition systems, in terms of EER, for the Manhattan, Euclidean, and Cosine Similarity metrics; the corresponding ROC curves are shown in Figure 7(a).

    Table 2.  Unimodal system performance using Right Iris and Right Fingerprint.

    Modality    | Feature Vector (length) | PCA Feature Vector | Genuine Trials (G) | Imposter Trials (I) | Matcher           | EER%
    Iris        | 9600x1                  | -                  | 354                | 27376               | Hamming           | 1.9762
    Iris        | 9600x1                  | 235x1              | 354                | 27376               | Manhattan         | 4.5186
    Iris        | 9600x1                  | 235x1              | 354                | 27376               | Euclidean         | 5.6509
    Iris        | 9600x1                  | 235x1              | 354                | 27376               | Cosine Similarity | 6.4984
    Fingerprint | 640x1                   | -                  | 354                | 27376               | Hamming           | 2.7287
    Fingerprint | 640x1                   | 235x1              | 354                | 27376               | Manhattan         | 2.3853
    Fingerprint | 640x1                   | 235x1              | 354                | 27376               | Euclidean         | 3.8574
    Fingerprint | 640x1                   | 235x1              | 354                | 27376               | Cosine Similarity | 5.5121
    [1] Training images: no. of classes N = 59 and images per class t = 4.
    [2] G = N·t·(t−1)/2 and I = N·(N−1)·t·t/2.


    The experimental findings for feature level fusion on the Right Iris and Right thumb Fingerprint using the PCA, CCA, and PCA+CCA approaches are shown in Table 3. The results demonstrate that the PCA+CCA approach benefits from its encouraging properties and achieves competitive recognition performance with low computational complexity. Three distinct matchers are used to assess the performance of the proposed feature level fusion: EERs of 0.5698% for Manhattan distance, 0.2813% for Euclidean distance, and 0.2812% for Cosine Similarity are obtained. Thus, the proposed PCA+CCA feature level fusion approach outperforms both PCA feature fusion and CCA feature fusion for the Iris and Fingerprint modalities, as shown by the achieved EERs. Therefore, except in Table 3, the proposed PCA+CCA approach is referred to as CCA based feature fusion throughout the paper.

    Table 3.  Feature level fusion on Right Iris and Right Fingerprint.

    Fusion Type      | Feature Vector (length)           | PCA                               | Fused Vector         | Genuine Trials (G) | Imposter Trials (I) | Matcher           | EER%
    PCA approach     | IrisCode 9600x1, FingerCode 640x1 | Iris 235x1, Fingerprint 235x1     | via PCA, 235x236 (n) | 354                | 27376               | Manhattan         | 3.1086
                     |                                   |                                   |                      |                    |                     | Euclidean         | 3.9560
                     |                                   |                                   |                      |                    |                     | Cosine Similarity | 6.2610
    CCA approach     | IrisCode 9600x1, FingerCode 640x1 | -                                 | via CCA, 235x236 (n) | 354                | 27376               | Manhattan         | 8.0326
                     |                                   |                                   |                      |                    |                     | Euclidean         | 17.6505
                     |                                   |                                   |                      |                    |                     | Cosine Similarity | 15.0387
    PCA+CCA approach | IrisCode 9600x1, FingerCode 640x1 | Iris 235x236, Fingerprint 235x236 | via CCA, 235x236 (n) | 354                | 27376               | Manhattan         | 0.5698
    (Proposed)       |                                   |                                   |                      |                    |                     | Euclidean         | 0.2813
                     |                                   |                                   |                      |                    |                     | Cosine Similarity | 0.2812
    [1] Training images: no. of classes N = 59 and images per class t = 4; n = N·t = 236.
    [2] G = N·t·(t−1)/2 and I = N·(N−1)·t·t/2.
    [3] Except in Table 3, the proposed PCA+CCA approach is referred to as CCA based feature fusion throughout the paper.


    For a clear comparison, Figure 6 shows the match score distributions for the unimodal and multimodal systems. It can be seen from the ROC curves in Figure 7(b) that the PCA+CCA approach with the cosine similarity measure consistently outperforms the other matchers. This clearly indicates that the PCA+CCA approach (referred to as CCA based feature fusion) not only brings the benefit of dimension reduction while fusing the correlated features of the two modalities but also achieves higher recognition accuracy.

    Figure 6.  Results (a) shows inter and intra class Hamming distance distributions for IrisCode (Right Iris image), (b) shows inter and intra class Hamming distance distributions for FingerCode (Right Fingerprint image), and (c) shows inter and intra class Cosine Similarity distributions for CCA based feature fusion (proposed approach).
    Figure 7.  Results (a) ROC Curve for Individual Iris system and Individual Fingerprint system using PCA and Feature level fusion based on PCA. (b) shows ROC Curve for comparison of PCA, CCA, and PCA+CCA approach for feature level fusion.

    We also note that, in practice, both the Euclidean and Manhattan metrics, which depend on the magnitude of the vectors, are incapable of capturing the intrinsic similarities between images, while cosine similarity offers the advantage of stability to noise and is insensitive to global scaling of the vector magnitude. The cosine similarity metric enhances the robustness of the fused feature, implying a good generalization ability, which is one possible reason for its superior performance.
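    This scale-invariance argument is easy to verify numerically; the following toy check (ours, not from the paper) scales a vector by 10 and observes that the cosine similarity is unchanged while the Euclidean distance grows with the magnitude:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = 10.0 * a  # same direction, 10x the magnitude

cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
euclidean = np.linalg.norm(a - b)

print(round(float(cosine), 6))     # 1.0: unaffected by global scaling
print(round(float(euclidean), 4))  # 33.6749: dominated by the scaling
```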

    In this work, we also compare the performance of the proposed feature level fusion with score level fusion. Here, the fusion of the matching scores obtained from the Hamming distance matcher for the Right Iris and Right Fingerprint images is implemented using classic rules such as the Sum rule and the Weighted Sum rule [45]. The sum rule is an extensively used and efficient fusion scheme [45,46], capable of effectively combining the scores provided by multiple matchers using a weighted sum. The fusion score Sfuse for N matchers or classifiers is computed by the simple weighted fusion of Eq (4.1):

    Sfuse = ∑_{i=1}^{N} si·Wi    (4.1)

    For two modalities, N = 2, and Eq (4.1) reduces to the scores S1 and S2 with weights W1 and W2. Here, S1 and S2 are the Iris and Fingerprint match scores, respectively; the weights W1 and W2 are varied over the range [0, 1] such that the constraint W1 + W2 = 1 is satisfied [45]. The scores of different biometrics can be weighted differently; for example, since the error rate of the Iris is lower than that of the Fingerprint, the Iris score may be assigned a greater weight. Finally, this fused matching score is used to recognize an individual as genuine or an imposter. The experimental results are presented in Table 4. We empirically selected the weights for match score level fusion using the weighted sum method by seeking the maximum recognition accuracy with each matcher; the set of weights yielding the lowest equal error rate is chosen. After experimenting with different weight values, the weights for the individual matchers are fixed to the same value: 0.5 for W1 and 0.5 for W2. Normally, each matcher's weight is determined by its recognition performance on a training set.
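    The weighted sum rule of Eq (4.1) for the two-matcher case can be sketched as follows (the scores below are invented for illustration; with Hamming distances, lower fused scores indicate genuine users):

```python
import numpy as np

def fuse_scores(s_iris, s_finger, w1=0.5, w2=0.5):
    """Weighted sum score fusion of Eq (4.1) for N = 2 matchers,
    subject to the constraint w1 + w2 = 1."""
    assert abs(w1 + w2 - 1.0) < 1e-9
    return w1 * np.asarray(s_iris) + w2 * np.asarray(s_finger)

# Toy Hamming-distance scores (small = similar): the fused genuine
# scores stay well below the fused imposter scores.
genuine = fuse_scores([0.05, 0.08], [0.10, 0.12])
imposter = fuse_scores([0.45, 0.50], [0.40, 0.48])
print(genuine.mean() < imposter.mean())  # True
```

    In practice, the weight pair would be swept over [0, 1] and the pair giving the lowest equal error rate kept, as described above.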

    Table 4.  Score level fusion on Right Iris and Right Fingerprint images.

    Modality             | Feature Vector (length)           | Genuine Trials (G) | Imposter Trials (I) | Matcher (Distance) | Score Fusion Method | EER%
    Iris                 | 9600x1                            | 354                | 27376               | Hamming            | -                   | 1.9762
    Fingerprint          | 640x1                             | 354                | 27376               | Hamming            | -                   | 2.7287
    Score Level Fusion:  | IrisCode 9600x1, FingerCode 640x1 | 354                | 27376               | Hamming            | Sum Rule            | 0.8474
    Iris and Fingerprint |                                   |                    |                     |                    | Weighted Sum Rule   | 0.8474
    [1] Training images: no. of classes N = 59 and images per class t = 4.
    [2] G = N·t·(t−1)/2 and I = N·(N−1)·t·t/2.


    In this work, the Hamming distance matcher is proposed for both modalities so that the output scores from the two systems are in the same format, which eliminates the need for additional normalization techniques and complex fusion matcher techniques. Figure 8(b) shows the comparative EER performance of score level fusion and feature level fusion. The ROC curve shows that CCA based feature level fusion significantly outperforms the match score level fusion approach.

    Figure 8.  Results (a) ROC curve for CCA based feature level fusion. (b) shows performance comparison of score level with feature level fusion.

    In this paper, we have performed an experiment to evaluate the effect of cross matching biometric feature fusion using Iris and Fingerprint modalities that are strictly captured from the same person (subject). In order to study the performance effect of cross matching in the true sense, we have selected Iris and Fingerprint images of the same persons who are present in both the earlier left- and right-side experiments. The image selection protocol remains the same as stated earlier: selection based on the quality results and the added constraints of correct segmentation of the Iris and correct detection of the central point of the Fingerprint. A total of 45 subjects and 59 subjects satisfied the image selection protocol in the Left Iris and Left Fingerprint experiment and in the Right Iris and Right Fingerprint experiment, respectively. Among these 45 and 59 subjects, only 35 common subjects having both modalities in both experiments are selected. In this work, for both modalities, we use the first 4 images per subject in the training set (a total of 35 classes with 4 impressions per class) and the remaining images for testing. For training, a total of 2×35×4 = 280 images for both modalities are used. There are a total of 210 intra-class comparisons and 9520 inter-class comparisons. We have performed the following two cross matching experiments, and the evaluation performance is summarized in Table 5.

    Table 5.  Cross match CCA based feature level fusion.

    Modality                             | Feature Vector (length)           | PCA                               | Feature Fusion (CCA) | Matcher           | EER%
    Using Left Iris and Right Fingerprint
    Iris                                 | 9600x1                            | -                                 | -                    | Hamming           | 0.9559
    Fingerprint                          | 640x1                             | -                                 | -                    | Hamming           | 3.1513
    Feature Fusion: Iris and Fingerprint | IrisCode 9600x1, FingerCode 640x1 | Iris 139x140, Fingerprint 139x140 | Fused Vector 139x140 | Manhattan         | 0.3466
                                         |                                   |                                   |                      | Euclidean         | 0.1471
                                         |                                   |                                   |                      | Cosine Similarity | 1.4286
    Using Right Iris and Left Fingerprint
    Iris                                 | 9600x1                            | -                                 | -                    | Hamming           | 0.4727
    Fingerprint                          | 640x1                             | -                                 | -                    | Hamming           | 3.0042
    Feature Fusion: Iris and Fingerprint | IrisCode 9600x1, FingerCode 640x1 | Iris 139x140, Fingerprint 139x140 | Fused Vector 139x140 | Manhattan         | 0.4307
                                         |                                   |                                   |                      | Euclidean         | 0.1786
                                         |                                   |                                   |                      | Cosine Similarity | 0.1050
    [1] Training images: no. of classes N = 35 and images per class t = 4.
    [2] Genuine Trials G = N·t·(t−1)/2 = 210, [3] Imposter Trials I = N·(N−1)·t·t/2 = 9520.


    In this experiment, the Left Iris and the Right Fingerprint of 35 subjects are used to perform cross matching feature fusion. For unimodal Left Iris recognition and unimodal Right Fingerprint recognition with the Hamming distance matcher, the EERs are 0.9559% and 3.1513%, respectively. For CCA based feature fusion using the Left Iris and Right Fingerprint, we observed EERs of 1.4286% for Cosine Similarity, 0.1471% for Euclidean distance, and 0.3466% for Manhattan distance. Figure 9 shows the ROC curves for the different matchers. For the feature fusion approach, the cosine similarity measure here shows a significant rise in EER compared to the other matchers. In this cross matching experiment, the multimodal features have different discriminating power, which can limit the discriminability of the fused result. This implies that, performance-wise, if the strong Left Iris modality is fused with the weak Right Fingerprint modality at the feature level, the result is not guaranteed to be as encouraging as those obtained in the earlier experiments. This clearly indicates that even if the Iris and Fingerprint modalities are of the same person, there is a certain close relationship, possibly genetics based, that directly affects and dominates the performance [47]. This suggests that one should take feature dependency into account while designing the multimodal system, as it affects system performance.

    Figure 9.  Results (a) shows ROC curves for Left Iris and Right Fingerprint images (b) shows ROC curves for Right Iris and Left Fingerprint images.

    In this experiment, the Right Iris and Left Fingerprint of 35 subjects are used to perform cross matching feature fusion. For unimodal Right Iris recognition and unimodal Left Fingerprint recognition with the Hamming distance matcher, the EERs are 0.4727% and 3.0042%, respectively. For CCA based feature fusion using the Right Iris and Left Fingerprint, we observed EERs of 0.1050% for Cosine Similarity, 0.1786% for Euclidean distance, and 0.4307% for Manhattan distance. Figure 9(b) shows the ROC curves for the different matchers. For the feature fusion approach, the EER with the cosine similarity measure is significantly better than with the other matchers. This experiment shows that, performance-wise, if the strong Right Iris modality is fused with the moderate Left Fingerprint modality at the feature level, it is possible to obtain results consistent with those of the earlier experiments. It suggests that the concept of laterality should be considered while implementing the matching algorithm to improve the verification performance of the multimodal system [47]. Again, this clearly indicates that feature dependency should be taken into account while designing the multimodal system, as it directly affects its performance.

    Compared with earlier work based on feature fusion and match score fusion, our algorithm shows encouraging performance among typical algorithms. As there are few previous studies that utilized the SDUMLA-HMT database, we compare our approach with work on other real multimodal datasets for the same biometric modalities. For example, an efficient fusion scheme at the feature and match score levels to combine the face and palmprint modalities is presented in [46]. The authors performed feature selection and fusion using the binary particle swarm optimization (PSO) technique and achieved a best GAR (Genuine Acceptance Rate) of 97.25% at a FAR (False Acceptance Rate) of 0.01% for hybrid fusion; they note that the use of PSO helps to reduce the number of feature dimensions and the complexity of the multimodal system. A multimodal sparse representation algorithm for feature level fusion of the Fingerprint and Iris modalities is explored in [10]. This approach represents the test data as a sparse linear combination of the training data, and a quality measure for fusion based on joint sparse representation and the kernel technique is presented to achieve recognition robustness. The experimental evaluation demonstrates a rank-1 recognition rate of 98.7%, indicating a significant improvement in the performance of the multimodal system. Another work [24] considers a feature level fusion strategy for multimodal recognition based on Discriminant Correlation Analysis (DCA). This fusion method takes the feature sets' class relationships into account, removing correlations between classes while restricting correlations within classes. Using DCA-based feature fusion and a minimum distance classifier, a rank-1 recognition rate of 99.60% is attained for the multimodal system. The Group Sparse Representation based Classifier (GSRC) approach is studied in [14], which integrates multi-feature representation seamlessly into classification. This approach utilizes the feature vectors extracted from different modalities to perform accurate identification with feature level fusion and classification; the authors report its efficacy in terms of the rank-1 recognition rate, and it has the benefit of efficiently handling multimodal biometrics and multiple types of features in a single framework. We found one previous work [17] that used the SDUMLA-HMT database to investigate a multimodal system using the Iris, Face, and Finger Vein modalities, so it is considered for comparison. That paper uses a feature level fusion strategy in which convolutional neural networks (CNNs) extract features and images are classified using the softmax classifier. A pretrained VGG-16 model was used to develop the CNN model, achieving 99.39% accuracy.

    The experimental findings of our proposed approach show that feature level fusion based on CCA is useful for identifying the most correlated features between the two feature sets of Iris and Fingerprint. Furthermore, our method is equally powerful in representing the fused feature vector, referred to as the canonical correlation discriminant vector, and in reducing the false match rate. Thus, the proposed multimodal biometric system can improve the universality, accuracy, and security of a verification system, with due consideration of cross matched modalities. To the best of our knowledge, no other study has used the SDUMLA-HMT database to examine the performance of a multimodal system with cross matched modalities. Our prototype ran on a PC with a 3.10 GHz processor and 8 GB RAM. For the Right Iris and Right Fingerprint, the training time is 0.145945 seconds, while the testing time is 0.012539 seconds per person. A comparative analysis of our proposed approach against existing approaches is shown in Table 6.

    Table 6.  Comparison with existing methods.

    Authors | Modalities | Level of Fusion | Database | Fusion Methodology | Performance Result
    Raghavendra et al. [46] | Face, Palmprint | Feature + Match Score | FRGC face, PolyU palmprint version Ⅱ | Feature concatenation and PSO | GAR% at FAR = 0.01%: Feature 94.72%, Score 86.50%, Hybrid 97.25%
    Shekhar et al. [10] | Iris, Fingerprint | Feature | WVU Multimodal | Joint sparse representation | Rank-1 recognition rates: 4 Fingers 97.9%, 2 Irises 76.5%, All modalities 98.7%
    Haghighat et al. [24] | Iris, Fingerprint | Feature | Multimodal Dataset at WVU: BIOMDATA | DCA/MDCA with minimum distance classifier | Rank-1 recognition rate: 99.60%
    Goswami et al. [14] | Iris, Fingerprint, Face | Feature fusion and classification | WVU Multimodal and Law Enforcement Agency (LEA) Dataset | Group Sparse Representation based Classifier (GSRC) | Rank-1 identification accuracy: 99.1% (WVU), 62.3% (LEA)
    Nada A et al. [17] | Iris, Face, Finger Vein | Feature | SDUMLA-HMT Multimodal Dataset | CNN model | Classification accuracy: 99.39%
    Proposed Method | Iris, Fingerprint | Feature | SDUMLA-HMT Multimodal Dataset | Canonical Correlation Analysis (CCA) | Right images, EER%: Iris 1.9762, Fingerprint 2.7287, Feature fusion 0.2812; Cross match (Left Iris & Right Fingerprint), EER%: Iris 0.9559, Fingerprint 3.1513, Feature fusion 1.4286; Cross match (Right Iris & Left Fingerprint), EER%: Iris 0.4727, Fingerprint 3.0042, Feature fusion 0.1050; Match score fusion (Right images), EER%: 0.8474


    In this paper, an optimal feature level fusion model based on CCA is presented to extract and represent discriminative features by exploring significant relationships between the Iris and Fingerprint feature sets of the same person. The performance is evaluated for different distance and cosine similarity measures on the SDUMLA-HMT multimodal database in a verification scenario. From the experimental results of CCA based feature level fusion with the Cosine Similarity matcher, we found significantly improved recognition performance compared to the unimodal systems, in terms of equal error rate (EER), using a) Right Iris and Right Fingerprint images (EER of 0.2812%) and b) Right Iris and Left Fingerprint images (EER of 0.1050%), but significantly poorer recognition performance using c) Left Iris and Right Fingerprint images (EER of 1.4286%). This suggests that the concept of laterality should be considered while implementing the matching algorithm to improve the verification performance of the multimodal system. Further, one should take feature dependency into account while designing the multimodal system, as it affects system performance. Cross matching is a novel area of profound investigation in multimodal systems; we have obtained interesting initial results, but further exploration should be done with a larger database. This paper offers new perspectives for designing feature level fusion models for multimodal systems in which the Iris and Fingerprint modalities are efficiently represented in canonical space. However, in order to take full advantage of feature level fusion and uncover the deep-rooted relation between cross matched modality features, further exploration is needed through the design of an intelligent matcher framework at the matching level as well. To further enhance the robustness of the proposed approach, we intend to investigate geometric consistency for feature matching as stated in [48] and also to exploit superior CNN architecture-based models for Iris segmentation to achieve higher recognition accuracy [49].

    All authors declare no conflicts of interest in this paper.



    [57] Li F, Nagar V (2013) Diversity and performance. Manag Sci 59: 529–544. https://doi.org/10.1287/mnsc.1120.1548 doi: 10.1287/mnsc.1120.1548
    [58] Li J, Haider ZA, Jin X, et al. (2019) Corporate controversy, social responsibility and market performance: International evidence. J Int Financ Mark Inst Money 60: 1–18. https://doi.org/10.1016/j.intfin.2018.11.013 doi: 10.1016/j.intfin.2018.11.013
    [59] Liu C (2018) Are women greener? Corporate gender diversity and environmental violations. J Corp Fin 52: 118–142. https://doi.org/10.1016/j.jcorpfin.2018.08.004 doi: 10.1016/j.jcorpfin.2018.08.004
    [60] López-Penabad MC, Iglesias-Casal A, Neto JFS, et al. (2022) Does corporate social performance improve bank efficiency? Evidence from European banks. Rev Manag Sci 17: 1399–1437. https://doi.org/10.1007/s11846-022-00579-9 doi: 10.1007/s11846-022-00579-9
    [61] LSEG Data, Analytics (2022) Environmental, social and governance scores from LSEG. Available from: https://www.lseg.com/content/dam/data-analytics/en_us/documents/methodology/lseg-esg-scores-methodology.pdf.
    [62] Lu WM, Kweh QL, Ting IWK, et al. (2023) How does stakeholder engagement through environmental, social, and governance affect eco-efficiency and profitability efficiency? Zooming into Apple Inc. 's counterparts. Bus Strategy Environ 32: 587–601. https://doi.org/10.1002/bse.3162 doi: 10.1002/bse.3162
    [63] Maside‐Sanfiz JM, Suárez-Fernández Ó, López‐Penabad MC, et al. (2023) Does corporate social performance improve environmentally adjusted efficiency? Evidence from the energy sector. Corp Soc Responsib Environ Manag. https://doi.org/10.1002/csr.2650 doi: 10.1002/csr.2650
    [64] McGuinness PB, Vieito JP, Wang M (2017) The role of board gender and foreign ownership in the CSR performance of Chinese listed firms. J Corp Finance 42: 75–99. https://doi.org/10.1016/j.jcorpfin.2016.11.001 doi: 10.1016/j.jcorpfin.2016.11.001
    [65] McKinsey, Company Organisation (2015) Women in the workplace. Available from: https://www.mckinsey.com/business-functions/organisation/our-insights/women-in-the-workplace.
    [66] Meyer CS, Mukerjee S, Sestero A (2001) Work‐family benefits: which ones maximize profits? J Manag Issues, 28‐44. https://www.jstor.org/stable/40604332
    [67] Moussa AS, Elmarzouky M (2023) Does Capital Expenditure Matter for ESG Disclosure? A UK Perspective. J Risk Financial Manag 16: 429. https://doi.org/10.3390/jrfm16100429 doi: 10.3390/jrfm16100429
    [68] Naciti V, Noto G, Vermiglio C, et al. (2022) Gender representation and financial performance: an empirical analysis of public hospitals. Int J Public Sect Manag 35: 603–621. https://doi.org/10.1108/IJPSM-01-2022-00 doi: 10.1108/IJPSM-01-2022-00
    [69] Nadler Z (2012) Designing Training Programs. Hoboken, NJ: Taylor and Francis. https://doi.org/10.4324/9780080503974
    [70] Nirino N, Santoro G, Miglietta N, et al. (2021) Corporate controversies and company's financial performance: Exploring the moderating role of ESG practices. Technol Forecast Soc 162: 120341. DOI10.1016/j.techfore.2020.120341 doi: 10.1016/j.techfore.2020.120341
    [71] Nyeadi JD, Kamasa K, Kpinpuo S (2021) Female in top management and firm performance nexus: Empirical evidence from Ghana. Cogent Eco Financ 9: 1921323. https://doi.org/10.1080/23322039.2021.1921323 doi: 10.1080/23322039.2021.1921323
    [72] Özbilgin M, Tatli A (2011) Mapping out the field of equality and diversity: Rise of individualism and voluntarism. Hum Relat 64: 1229–1253. https://doi.org/10.1177/0018726711413620 doi: 10.1177/0018726711413620
    [73] Pichler S, Blazovich JL, Cook KA, et al. (2018) Do LGBT‐supportive corporate policies enhance firm performance? Hum Resour Manag J 57: 263–278. https://doi.org/10.1002/hrm.21831 doi: 10.1002/hrm.21831
    [74] Prieto LC, Phipps ST, Osiri JK (2009) Linking workplace diversity to organizational performance: A conceptual framework. J Divers Manag 4: 13–22. https://doi.org/10.19030/jdm.v4i4.4966 doi: 10.19030/jdm.v4i4.4966
    [75] Provasi R, Harasheh M (2020) Gender diversity and corporate performance: Emphasis on sustainability performance. Corp Soc Responsib Environ Manag 28: 127–137. https://doi.org/10.1002/csr.2037 doi: 10.1002/csr.2037
    [76] Ramecesse AD (2021) Corporate Social Responsibility and Firm Performance in SMEs: Empirical Evidence from Cameroon. Bus Econ Res 11: 88–105. https://doi.org/10.5296/ber.v11i3.18986 doi: 10.5296/ber.v11i3.18986
    [77] Ramírez-Orellana A, Martínez-Victoria M, García-Amate A, et al. (2023) Is the corporate financial strategy in the oil and gas sector affected by ESG dimensions? Resour Pol 81: 103303. https://doi.org/10.1016/j.resourpol.2023.103303 doi: 10.1016/j.resourpol.2023.103303
    [78] Ren C, Ting IWK, Lu WM, et al. (2022) Nonlinear effects of ESG on energy-adjusted firm efficiency: Evidence from the stakeholder engagement of apple incorporated. Corp Soc Responsib Environ Manag 29: 1231–1246. https://doi.org/10.1002/csr.2266 doi: 10.1002/csr.2266
    [79] Rodríguez-Fernández M, Sánchez-Teba EM, López-Toro AA, et al. (2019) Influence of ESGC indicators on financial performance of listed travel and leisure companies. Sustainability 11: 5529. https://doi.org/10.3390/su11195529 doi: 10.3390/su11195529
    [80] Rohwerder B (2017) Impact of diversity and inclusion within organizations. Inst Dev Stud, 13073.
    [81] Rosati F, Faria LGD (2019) Business contribution to the Sustainable Development Agenda: Organizational factors related to early adoption of SDG reporting. Corp Soc Responsib Environ Manag 26: 588–597. https://doi.org/10.1002/csr.1705 doi: 10.1002/csr.1705
    [82] Ruggiero P, Cupertino S (2018) CSR strategic approach, financial resources and corporate social performance: The mediating effect of innovation. Sustainability 10: 3611. https://doi.org/10.3390/su10103611 doi: 10.3390/su10103611
    [83] Salas E, Cannon-Bowers J (2000) Teams in organizations: Lessons from history. Work teams: Past, present and future: 323–331.
    [84] Sanchez-Robles B, Herrador-Alcaide TC, Hernández-Solís M (2022) Efficiency of European oil companies: an empirical analysis. Energy Effic 15: 1–28. https://doi.org/10.1007/s12053-022-10069-2 doi: 10.1007/s12053-022-10069-2
    [85] Sears B, Mallory C (2011) Documented evidence of employment discrimination & its effects on LGBT people. Available from: https://escholarship.org/uc/item/03m1g5sg.
    [86] Sgrò F (2021) Intellectual capital and organizational performance. SIDREA Series in Accounting and Business Administration. New York: Springer International Publishing.
    [87] Shahbaz M, Karaman AS, Kilic M, et al. (2020) Board attributes, CSR engagement, and corporate performance: What is the nexus in the energy sector? Energ Policy 143: 111582. https://doi.org/10.1016/j.enpol.2020.111582 doi: 10.1016/j.enpol.2020.111582
    [88] Shaukat A, Qiu Y, Trojanowsk G (2016) Board attributes, corporate social responsibility strategy, and corporate environmental and social performance. J Bus Ethics 135: 569–585. https://doi.org/10.1007/s10551-014-2460-9 doi: 10.1007/s10551-014-2460-9
    [89] Stefanoni S, Voltes-Dorta A (2021) Technical efficiency of car manufacturers under environmental and sustainability pressures: A data envelopment analysis approach. J Clean Prod 311: 127589. https://doi.org/10.1016/j.jclepro.2021.127589 doi: 10.1016/j.jclepro.2021.127589
    [90] Suciu MC, Noja GG, Cristea M (2020) Diversity, social inclusion and human capital development as fundamentals of financial performance and risk mitigation. Amfiteatru Econ 22: 742–757. http://dx.doi.org/10.24818/EA/2020/55/742 doi: 10.24818/EA/2020/55/742
    [91] Sueyoshi T, Yuan Y, Goto M (2017) A literature study for DEA applied to energy and environment. Energy Econ 62: 104–124. https://doi.org/10.1016/j.eneco.2016.11.006 doi: 10.1016/j.eneco.2016.11.006
    [92] Syed MW, Li JZ, Junaid M, et al. (2020) Relationship between human resource management practices, relationship commitment and sustainable performance. Green Financ 2: 227–242. https://doi.org/10.3934/GF.2020013 doi: 10.3934/GF.2020013
    [93] Taglialatela J, Pirazzi Maffiola K, Barontini R, et al. (2023) Board of Directors' characteristics and environmental SDGs adoption: an international study. Corp Soc Responsib Environ Manag 30: 2490–2506. https://doi.org/10.1002/csr.2499 doi: 10.1002/csr.2499
    [94] United Nations (2012) SD21 Summary for Policy Makers. In Back to Our Common Future: Sustainable Development in the 21st century (SD21) project. United Nations, New York.
    [95] Urwin P, Parry E, Dodds I, et al. (2013) The Business Case for Equality and Diversity: a survey of the academic literature (BIS OCCASIONAL PAPER NO. 4). Department for Business Innovation & Skills & Government Equalities Office. Available from: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/49638/the_business_case_for_equality_and_diversity.pdf.
    [96] Waddock SA, Graves SB (1997) Quality of management and quality of stakeholder relations: Are they synonymous? Bus Soc 36: 250–279. https://doi.org/10.1177/000765039703600303 doi: 10.1177/000765039703600303
    [97] Walls JL, Hoffman AJ (2013) Exceptional boards: Environmental experience and positive deviance from institutional norms. J Organ Behav 34: 253–271. https://doi.org/10.1002/job.1813 doi: 10.1002/job.1813
    [98] Wang Y, Clift B (2009) Is there a "business case" for board diversity? Pac Account Rev 21: 88–103. https://doi.org/10.1108/01140580911002044 doi: 10.1108/01140580911002044
    [99] Webb E (2004) An examination of socially responsible firms' board structure. J Manag Gov 8: 255–277. https://doi.org/10.1007/s10997-004-1107-0 doi: 10.1007/s10997-004-1107-0
    [100] Wernerfelt B (1984) A resource‐based view of the firm. Strategic Manage J 5: 171–180. https://doi.org/10.1002/smj.4250050207 doi: 10.1002/smj.4250050207
    [101] Williams RJ (2003) Women on corporate boards of directors and their influence on corporate philanthropy. J Bus Ethics 42: 1–10. https://doi.org/10.1023/A:1021626024014 doi: 10.1023/A:1021626024014
    [102] Wright PC, Geroy GD (2001) Changing the mindset: the training myth and the need for world-class performance. Int J Hum Resour Man 12: 586–600. https://doi.org/10.1080/09585190122342 doi: 10.1080/09585190122342
    [103] Zaid MA, Wang M, Adib M, et al. (2020) Boardroom nationality and gender diversity: Implications for corporate sustainability performance. J Clean Prod 251: 119652. https://doi.org/10.1016/j.jclepro.2019.119652 doi: 10.1016/j.jclepro.2019.119652
    [104] Zampone G, Nicolò G, Sannino G, et al. (2024) Gender diversity and SDG disclosure: the mediating role of the sustainability committee. J Appl Account Res 25: 171–193. https://doi.org/10.1108/JAAR-06-2022-0151 doi: 10.1108/JAAR-06-2022-0151
  • Supplementary material: GF-06-03-017-s001.pdf
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)