Research article

DDT Theorem over ideal in quadratic field

  • Let K be a quadratic field and let a be a fixed integral ideal of O_K. In this paper, we investigate the distribution of the ideals that divide a using the Selberg-Delange method. This is a natural variation of a result studied by Deshouillers, Dress, and Tenenbaum (often referred to as the DDT Theorem), and we find that this distribution converges to the arcsine distribution.

    Citation: Zhishan Yang, Zongqi Yu. DDT Theorem over ideal in quadratic field[J]. AIMS Mathematics, 2025, 10(1): 1921-1934. doi: 10.3934/math.2025089




    Noncontact gesture recognition has contributed significantly to human-computer interaction (HCI) applications amid the enormous growth of artificial intelligence (AI) and computer technology [1]. Hand gesture detection systems, with their natural interaction characteristics, enable effective and intuitive communication through a computer interface. Because gesture detection is vision-based, it can be broadly applied in AI, natural language communication, virtual reality, and multimedia [2]. The demand for, and the level of, services essential to people increases daily. Hand gestures are a central component of face-to-face communication [3]; body language in general, and hand gestures in particular, therefore play a significant role in such communication, and this study offers some insight into that process [4]. Yet recent automation in this area does not focus on using hand gestures in everyday activities. Emerging technology reduces the complexity of the user interfaces and computer programs presented to the user, and image processing is now widely used to make these mechanisms simpler and easier to understand [5].

    When communication must take place between a deaf person and a hearing person, there is a strong need for hand gestures. To make the system smarter, hand gesture images must be fed into the mechanism and analyzed further to determine their meaning [6]. Still, conventional hand gesture detection based on image processing methods was not widely adopted in HCI because of its complex algorithms, poor real-time capability, and low recognition accuracy [7]. Gesture detection based on machine learning (ML) has recently advanced quickly in HCI, owing to progress in AI and graphics processing unit (GPU) based image processing [8]. ML methods such as neural networks, local orientation histograms, elastic graph matching, and support vector machines (SVM) have been widely used. Owing to its learning capability, a neural network (NN) does not require manual feature engineering: it simulates human learning processes and can be trained on gesture instances to form a network classification map [9]. Deep learning (DL) is currently a frequently used approach for hand gesture recognition (HGR); recurrent neural networks (RNN), convolutional neural networks (CNN), and stacked denoising autoencoders (SDAE) are commonly applied in HGR applications [10].

    This study leverages the lion swarm optimizer with deep convolutional neural network (LSO-DCNN) for gesture recognition and classification. The aim of the LSO-DCNN technique is the proper identification and categorization of the various categories of gestures present in the input images. First, a 1D convolutional neural network (1D-CNN) derives a collection of feature vectors. Second, the LSO algorithm optimally chooses the hyperparameter values of the 1D-CNN model. Finally, an extreme gradient boosting (XGBoost) classifier assigns the proper classes, i.e., recognizes the gestures. To demonstrate the enhanced gesture classification results of the LSO-DCNN algorithm, a wide range of experimental results is investigated, and a brief comparative study reports the improvements the LSO-DCNN technique brings to the gesture recognition process.

    Sun et al. [11] suggested a model based on multi-level feature fusion of a two-stream convolutional neural network (MFF-TSCNN), comprising three major phases. First, a Kinect sensor acquires red, green, blue, and depth (RGB-D) images to establish a gesture dataset, and data augmentation is applied to the training and testing sets. An MFF-TSCNN model is then built and trained. Barioul and Kanoun [12] proposed a new classification model based on an extreme learning machine (ELM) reinforced by an enhanced grasshopper optimization algorithm (GOA) as the basis for a weight-pruning procedure. Myographic techniques such as force myography (FMG) provide promising signals that can form the foundation for recognizing hand signs; FMG was examined to limit the number of sensors to appropriate locations and to supply the signal processing needed for practical deployment in wearable embedded systems. Gadekallu et al. [13] presented a crow search-based CNN (CS-CNN) method for recognizing gestures in the HCI field. The hand gesture database used in the research is an open database obtained from Kaggle. One-hot encoding was employed to convert the categorical values of the data to binary form, after which a crow search algorithm (CSA) was employed to choose optimal tuning for training the CNN.

    Yu et al. [14] employed a particle swarm optimization (PSO) technique to optimize the width and center values of a radial basis function neural network (RBFNN). The authors also used an electromyography (EMG) signal acquisition device and an electrode sleeve to gather four-channel continuous EMG signals produced by eight serial gestures. In [15], the authors presented an ensemble of CNN-based techniques. First, the gesture segment is identified using a background separation model based on binary thresholding. The contour section is then extracted and the hand area segmented. Finally, the images are resized and fed to three distinct CNN methods for parallel training.

    Gao et al. [16] developed an effective hand gesture detection model based on deep learning. First, an RGB-D early-fusion technique based on the HSV color space was proposed, efficiently mitigating background interference and enhancing hand gesture data. Second, a hand gesture classification network (HandClasNet) was proposed for hand gesture localization and recognition, identifying the center and corner points of the hand, with an EfficientNet-style network employed for gesture classification. In [17], the authors used a CNN approach for the recognition and identification of human hand gestures. The workflow comprises segmenting the hand region of interest using finger segmentation and mask images, normalizing the segmented finger image, and detection using the CNN classifier. Segmentation extracts the hand area from the whole image by applying mask images.

    Figure 1.  Overall process of the LSO-DCNN approach.

    This study develops a new LSO-DCNN method for automated gesture recognition and classification. The major intention of the LSO-DCNN method is the proper identification and categorization of the various categories of gestures present in the input images. The presented LSO-DCNN model follows a three-step procedure:

    Step 1: The 1D-CNN method derives a collection of feature vectors.

    Step 2: The LSO method optimally chooses the hyperparameter values of the 1D-CNN model.

    Step 3: The XGBoost classifier assigns appropriate classes, i.e., effectively recognizes the gestures.

    First, the 1D-CNN model derives a collection of feature vectors. A CNN is a neural network that uses convolution operations in at least one layer in place of ordinary matrix multiplication [18]. Convolution is a special linear operation; each stage of a convolutional network generally consists of three layers: a convolutional layer, an activation layer, and a pooling layer. In the image detection domain, 2D-CNNs are commonly used to extract features from images; classical CNN models include LeNet, AlexNet, VGG, GoogLeNet, ResNet, and so on. The 1D-CNN is used here to extract appropriate features from the data. Its input is one-dimensional, so its convolution kernels adopt a 1D architecture, and the output of every convolutional, activation, and pooling layer is a 1D feature vector. This section introduces the fundamental structure of the 1D-CNN.

    The convolution layer applies the convolution operation between the 1D input signal and the 1D convolution filter, after which local features are extracted by the activation layer. The data is input to the convolution layer of the 1D-CNN to apply the convolution operation:

    x_k^l = Σ_{i=1}^{n} conv(w_{ik}^{l−1}, s_i^{l−1}) + b_k^l (1)

    Here, x_k^l and b_k^l respectively denote the output and offset of the k-th neuron in layer l; s_i^{l−1} denotes the output of the i-th neuron in layer l−1; w_{ik}^{l−1} denotes the convolution kernel connecting the i-th neuron in layer l−1 to the k-th neuron in layer l; and i = 1, 2, …, n, where n denotes the number of neurons.
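As a concrete illustration of Eq (1), the following minimal NumPy sketch implements one 1D convolution layer; the array layout and function name are illustrative choices, not part of the original model:

```python
import numpy as np

def conv1d_layer(s, w, b):
    """One 1D convolution layer following Eq (1):
    x_k = sum_i conv(w_ik, s_i) + b_k (valid convolution, no padding).

    s : (n_in, L)        outputs of the previous layer's n_in neurons
    w : (n_in, n_out, m) kernels between previous- and current-layer neurons
    b : (n_out,)         offsets of the current layer's n_out neurons
    """
    n_in, L = s.shape
    _, n_out, m = w.shape
    x = np.zeros((n_out, L - m + 1))
    for k in range(n_out):
        for i in range(n_in):
            # np.convolve flips the kernel; flip it back first so the layer
            # performs correlation, as CNNs conventionally do.
            x[k] += np.convolve(s[i], w[i, k][::-1], mode="valid")
        x[k] += b[k]
    return x

# tiny check: one input channel, kernel [1, 0] copies the left element
# of each sliding window
feat = conv1d_layer(np.array([[1., 2., 3., 4.]]),
                    np.array([[[1., 0.]]]),
                    np.array([0.]))
```

In practice a deep-learning framework's 1D convolution would replace this loop; the sketch only makes the index bookkeeping of Eq (1) explicit.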

    The activation layer applies a non-linear transformation to the input signal to improve the CNN's expressive power. Typical activation functions are ReLU, sigmoid, and tanh. Because the ReLU function mitigates gradient vanishing and converges quickly, it is widely applied; it is therefore used here as the activation function, and its equation is

    y_k^l = f(x_k^l) = max(0, x_k^l) (2)

    where y_k^l denotes the activation value of the k-th neuron in layer l.

    The pooling layer is generally employed after the convolution layer. Downsampling helps avoid over-fitting, reduces the spatial size of the network's features and parameters, and reduces computation. The typical pooling operations are maximum and average pooling; maximum pooling is used here:

    z_k^l(j) = max_{(j−1)r < t ≤ jr} y_k^l(t) (3)

    where z_k^l(j) denotes the j-th value in the k-th neuron of layer l, y_k^l(t) denotes the t-th activation value in the k-th neuron of layer l, and r denotes the width of the pooling region.
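Eqs (2) and (3) translate directly into a few lines of NumPy; non-overlapping pooling windows of width r are assumed here, which is the common choice:

```python
import numpy as np

def relu(x):
    # Eq (2): y = max(0, x)
    return np.maximum(0.0, x)

def max_pool1d(y, r):
    """Eq (3): z(j) = max over the j-th non-overlapping window of width r."""
    L = (y.shape[-1] // r) * r          # drop any tail that can't fill a window
    return y[..., :L].reshape(*y.shape[:-1], -1, r).max(axis=-1)

# activation followed by pooling, as in a convolutional stage
pooled = max_pool1d(relu(np.array([-1., 3., 2., 5.])), r=2)
```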

    In this work, the LSO approach optimally chooses the hyperparameter values of the 1D-CNN model. This approach is selected for its capacity to navigate the parameter space effectively, adapt the search to local characteristics, and converge toward optimal settings, which makes it well suited to fine-tuning intricate models. In the LSO algorithm, the lion king conducts a range search around the historical optimal solution to find better solutions [19]. The equation for updating its location is given below:

    x_i^{k+1} = g^k (1 + γ |p_i^k − g^k|) (4)

    A lioness randomly chooses another lioness to cooperate with, and the equation for updating her location can be represented as

    x_i^{k+1} = (p_i^k + p_c^k)/2 · (1 + α_f γ) (5)

    A young lion has three updating approaches to find a new position: follow the lion king, follow a lioness, or leave the group:

    x_i^{k+1} = { (g^k + p_i^k)/2 · (1 + α_c γ), 0 ≤ q ≤ 1/3; (p_m^k + p_i^k)/2 · (1 + α_c γ), 1/3 < q ≤ 2/3; (ḡ^k + p_i^k)/2 · (1 + α_c γ), 2/3 < q ≤ 1 } (6)

    In Eq (6), x_i^k denotes the i-th individual in the k-th generation population; p_i^k represents the prior optimal location of the i-th individual from the 1st to the k-th generation; γ is a random number from the standard normal distribution N(0,1); p_c^k is randomly chosen from the k-th generation lioness group; g^k is the optimal location of the k-th generation population; q is a random number from the uniform distribution U[0,1]; ḡ^k = low + up − g^k; p_m^k is randomly chosen from the k-th generation lion group; α_f and α_c denote the disturbance factors; and low and up indicate the minimal and maximal values of each dimension within the lion activity space:

    α_f = 0.1(up − low) × exp(−30t/T)^10 (7)
    α_c = 0.1(up − low) × (T − t)/T (8)

    where T denotes the maximal number of iterations and t denotes the current iteration.
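The three position updates of Eqs (4)-(6), together with the disturbance factors of Eqs (7) and (8), can be sketched as follows. Note that the distance term |p_i^k − g^k| in Eq (4) is reconstructed from the standard LSO formulation, and all variable names are illustrative:

```python
import numpy as np

def alphas(t, T, low, up):
    # Eqs (7)-(8): disturbance factors that shrink as iteration t approaches T
    step = 0.1 * (up - low)
    return step * np.exp(-30.0 * t / T) ** 10, step * (T - t) / T

def lso_updates(p_i, p_c, p_m, g, low, up, t, T, rng):
    """One position update per role, following Eqs (4)-(6).
    p_i: the individual's historical best; p_c: a random lioness's best;
    p_m: a random lion's best; g: the group's best position."""
    a_f, a_c = alphas(t, T, low, up)
    gamma = rng.standard_normal()          # gamma ~ N(0, 1)
    # Eq (4): the lion king searches around the global best
    king = g * (1 + gamma * np.abs(p_i - g))
    # Eq (5): a lioness cooperates with another randomly chosen lioness
    lioness = (p_i + p_c) / 2 * (1 + a_f * gamma)
    # Eq (6): a cub follows the king, follows its mother, or leaves the pride,
    # depending on a uniform random number q
    q = rng.uniform()
    if q <= 1 / 3:
        cub = (g + p_i) / 2 * (1 + a_c * gamma)
    elif q <= 2 / 3:
        cub = (p_m + p_i) / 2 * (1 + a_c * gamma)
    else:
        g_bar = low + up - g               # mirror of the king's position
        cub = (g_bar + p_i) / 2 * (1 + a_c * gamma)
    return king, lioness, cub

rng = np.random.default_rng(0)
# at t = T both disturbance factors vanish, so the updates collapse to their
# deterministic base points
king, lioness, cub = lso_updates(p_i=4.0, p_c=6.0, p_m=2.0, g=8.0,
                                 low=0.0, up=10.0, t=100, T=100, rng=rng)
```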

    The fitness function is a vital component of the LSO method. Each encoded candidate solution is evaluated by its fitness; here, the fitness function is designed around the precision value as the main criterion:

    Fitness=max(P) (9)
    P = TP / (TP + FP) (10)

    In this expression, TP denotes the true positive count and FP the false positive count.
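A minimal sketch of the fitness evaluation of Eqs (9) and (10), computing precision from predicted and actual labels (the function name is illustrative):

```python
import numpy as np

def precision_fitness(y_true, y_pred, positive=1):
    """Eqs (9)-(10): the fitness of a candidate 1D-CNN configuration is its
    precision P = TP / (TP + FP); LSO keeps the configuration maximizing P."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    return tp / (tp + fp) if (tp + fp) else 0.0

# one true positive and two false positives gives P = 1/3
fit = precision_fitness(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 1]))
```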

    Finally, the XGBoost classifier assigns the proper classes, i.e., recognizes the gestures. XGBoost is an ensemble ML technique: a gradient boosting method used to improve the performance of a predictive model by integrating a series of weak learners into a strong one [20]. Ensemble methods offer better outcomes than a single model. Figure 2 shows the architecture of XGBoost. The steps involved are given as follows.

    Figure 2.  Structure of XGBoost.

    Step 1: Initialize

    Consider a binary classification problem in which y_i is the actual label, denoted 1 or 0. The commonly used log loss function is assumed in this case and is given by

    l(y_i, ŷ_i^{(t)}) = −(y_i log(p_i) + (1 − y_i) log(1 − p_i)) (11)

    where

    p_i = 1 / (1 + e^{−ŷ_i^{(t)}}). (12)

    Based on the values of p_i and y_i, the gradient g_i and Hessian h_i are evaluated:

    g_i = p_i − y_i,  h_i = p_i(1 − p_i). (13)

    From the (t−1)-th tree, the predicted value of instance x_i is ŷ_i^{(t−1)}, where y_i is the actual value of x_i. The predicted value of the 0-th tree is 0, i.e., ŷ_i^{(0)} = 0.

    Step 2: The Gain values of the candidate features are computed by traversal to determine the splitting mode of the current root node; the feature with the maximal Gain score is selected as the split node.

    Step 3: The current binary leaf nodes are established. Based on the feature with the maximal Gain, the sample set is split into two parts, yielding two leaf nodes. Step 2 is then repeated on each leaf node until a negative gain score or the stopping criterion is reached, which completes the tree.
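Steps 2 and 3 rely on the Gain score. The following sketch uses XGBoost's standard split-gain formula; the text does not write it out explicitly, so the exact form here is taken from the XGBoost literature rather than from this paper:

```python
import numpy as np

def split_gain(g_left, h_left, g_right, h_right, lam=1.0, gamma=0.0):
    """Loss reduction from splitting a node into left/right children:
    Gain = 1/2 [G_L^2/(H_L+lam) + G_R^2/(H_R+lam) - (G_L+G_R)^2/(H_L+H_R+lam)] - gamma,
    where G and H sum the per-instance gradients g_i and Hessians h_i of Eq (13)."""
    GL, HL = np.sum(g_left), np.sum(h_left)
    GR, HR = np.sum(g_right), np.sum(h_right)
    return 0.5 * (GL**2 / (HL + lam)
                  + GR**2 / (HR + lam)
                  - (GL + GR)**2 / (HL + HR + lam)) - gamma

# a split that separates opposite-sign gradients has positive gain
gain = split_gain(g_left=np.array([-0.5, -0.5]), h_left=np.array([0.25, 0.25]),
                  g_right=np.array([0.5, 0.5]), h_right=np.array([0.25, 0.25]))
```

A negative value (for example when the complexity penalty gamma dominates) is what terminates the recursion in Step 3.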

    Step 4: The forecast values of all leaf nodes are computed. The forecast value ω_j of leaf node j is computed as

    ω_j = −G_j / (H_j + λ) (14)

    and the second tree forecast outcomes are expressed as

    ŷ_i^{(2)} = ŷ_i^{(1)} + f_2(x_i) (15)

    This establishes the second tree.

    Step 5: Steps 1 and 2 are repeated to build further trees until a sufficient number of trees has been introduced. The model's predictive value is ŷ_i^{(t)} = ŷ_i^{(t−1)} + f_t(x_i), where ŷ_i^{(t)} is the predictive value of t trees on instance x_i. This procedure creates the t-th tree.

    p_i = 1 / (1 + e^{−ŷ_i}) (16)

    Step 6: The classifier outcome for an instance is obtained by converting its final forecast value ŷ_i into a probability using Eq (16). If p_i ≥ 0.5, the predicted label of the instance is 1; otherwise, it is 0.
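Steps 1-6 can be condensed into a toy NumPy sketch that uses a single leaf per tree (no splits), which is enough to exercise Eqs (11)-(16); a real XGBoost tree would additionally split on features as in Steps 2 and 3:

```python
import numpy as np

def sigmoid(z):
    # Eqs (12) and (16): map a raw score to a probability
    return 1.0 / (1.0 + np.exp(-z))

def boosting_round(y, y_hat, lam=1.0):
    """One boosting round with a single-leaf tree, following Eqs (13)-(15):
    g_i = p_i - y_i, h_i = p_i(1 - p_i), leaf weight w = -G / (H + lam)."""
    p = sigmoid(y_hat)
    g, h = p - y, p * (1.0 - p)        # Eq (13)
    w = -g.sum() / (h.sum() + lam)     # Eq (14) with a single leaf
    return y_hat + w                   # Eq (15): add the new tree's output

y = np.array([1.0, 1.0, 1.0, 0.0])
y_hat = np.zeros_like(y)               # Step 1: the 0-th tree predicts 0
for _ in range(20):                    # Step 5: keep adding trees
    y_hat = boosting_round(y, y_hat)
p = sigmoid(y_hat)                     # Eq (16): final probabilities
labels = (p >= 0.5).astype(int)        # Step 6: threshold at 0.5
```

With a single leaf every instance receives the same score, so all four probabilities converge toward the positive-class rate 3/4, which is exactly what Eq (14) predicts when G and H are summed over the whole sample.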

    In this section, the results of the LSO-DCNN technique are validated using two benchmark datasets: the sign language digital (SLD) dataset and the sign language gesture image (SLGI) dataset.

    In Table 1 and Figure 3, the overall comparative recognition results of the LSO-DCNN technique are examined on the SLD dataset [21]. In terms of accuracy, the LSO-DCNN technique reaches an increased value of 91.32%, while the RF, LR, KNN, XGBoost, and MobileNet-RF models obtain lower values of 90.19%, 89.29%, 85.79%, 90.18%, and 90.55%, respectively. In terms of precision, the LSO-DCNN approach reaches 91.18%, while the RF, LR, KNN, XGBoost, and MobileNet-RF techniques obtain 45.77%, 50.59%, 35.53%, 49.26%, and 80.97%, respectively. In terms of recall, the LSO-DCNN algorithm attains 91.31%, against 48.67%, 44.55%, 35.83%, 50.12%, and 81.13% for RF, LR, KNN, XGBoost, and MobileNet-RF, respectively. Finally, in terms of F1 score, the LSO-DCNN method reaches 91.78%, while RF, LR, KNN, XGBoost, and MobileNet-RF obtain 46.75%, 44.56%, 34.07%, 49.31%, and 80.10%, respectively.

    Table 1.  Comparative analysis of the LSO-DCNN approach with other systems on the SLD dataset.
    Sign Language Digital Dataset
    Methods Accuracy Precision Recall F1 score
    Random Forest 90.19 45.77 48.67 46.75
    Logistic Regression 89.29 50.59 44.55 44.56
    K-Nearest Neighbor 85.79 35.53 35.83 34.07
    XGBoost 90.18 49.26 50.12 49.31
    MobileNet-RF 90.55 80.97 81.13 80.10
    LSO-DCNN 91.32 91.18 91.31 91.78

    Figure 3.  Comparative outcome of LSO-DCNN approach on the SLD dataset.

    Figure 4 shows the accuracy of the LSO-DCNN method during training and validation on the SLD dataset. The figure indicates that the LSO-DCNN method attains greater accuracy values over successive epochs. Furthermore, the validation accuracy staying above the training accuracy shows that the LSO-DCNN approach learns productively on the SLD dataset.

    Figure 4.  Accuracy curve of LSO-DCNN approach on the SLD dataset.

    The loss curves of the LSO-DCNN technique during training and validation on the SLD dataset are given in Figure 5. The results indicate that the LSO-DCNN approach attains close training and validation loss values, showing that it learns productively on the SLD dataset.

    Figure 5.  Loss curve of LSO-DCNN approach on the SLD dataset.

    In Table 2 and Figure 6, the overall comparative recognition outcomes of the LSO-DCNN technique are examined on the SLGI dataset. In terms of accuracy, the LSO-DCNN technique reaches an increased value of 99.09%, while the RF, LR, KNN, XGBoost, and MobileNet-RF approaches obtain lower values of 97.93%, 97.93%, 93.40%, 98.25%, and 98.31%, respectively. In terms of precision, the LSO-DCNN methodology reaches 98.86%, against 29.08%, 20.49%, 27.34%, 31.15%, and 98.12% for RF, LR, KNN, XGBoost, and MobileNet-RF, respectively. In terms of recall, the LSO-DCNN method reaches 99.15%, while RF, LR, KNN, XGBoost, and MobileNet-RF obtain 30.33%, 23.37%, 27.98%, 31.78%, and 98.11%, respectively. Finally, in terms of F1 score, the LSO-DCNN technique reaches 99.03%, while RF, LR, KNN, XGBoost, and MobileNet-RF obtain 29.10%, 19.77%, 27.30%, 30.03%, and 97.89%, respectively.

    Table 2.  Comparative analysis of the LSO-DCNN approach with other methods on the SLGI dataset.
    Sign Language Gestures Image Dataset
    Methods Accuracy Precision Recall F1 score
    Random Forest 97.93 29.08 30.33 29.10
    Logistic Regression 97.93 20.49 23.37 19.77
    K-Nearest Neighbor 93.40 27.34 27.98 27.30
    XGBoost 98.25 31.15 31.78 30.03
    MobileNet-RF 98.31 98.12 98.11 97.89
    LSO-DCNN 99.09 98.86 99.15 99.03

    Figure 6.  Comparative outcome of LSO-DCNN approach on the SLGI dataset.

    Figure 7 portrays the accuracy of the LSO-DCNN method during training and validation on the SLGI dataset. The result shows that the LSO-DCNN technique attains higher accuracy values over successive epochs. Moreover, the validation accuracy staying above the training accuracy shows that the LSO-DCNN technique learns productively on the SLGI dataset.

    Figure 7.  Accuracy curve of LSO-DCNN approach on the SLGI dataset.

    The loss curves of the LSO-DCNN approach during training and validation on the SLGI dataset are shown in Figure 8. The results indicate that the LSO-DCNN method attains close training and validation loss values, showing that it learns productively on the SLGI dataset.

    Figure 8.  Loss curve of LSO-DCNN approach on the SLGI dataset.

    This study developed a new LSO-DCNN technique for automated gesture recognition and classification. The major intention of the LSO-DCNN approach is the proper identification and categorization of the various categories of gestures present in the input images. The presented LSO-DCNN model follows a three-step procedure, namely 1D-CNN based feature extraction, LSO-based hyperparameter tuning, and XGBoost classification. The LSO method optimally chooses the hyperparameter values of the 1D-CNN model, which helps recognize the gestures effectively. To demonstrate the enhanced gesture classification results of the LSO-DCNN approach, a wide range of experimental results was investigated, and the brief comparative study reported the improvements of the LSO-DCNN technique in the gesture recognition process. In the future, multimodality concepts could further enhance the performance of the LSO-DCNN technique.

    The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2023-175.



    [1] Z. Cui, J. Wu, The Selberg-Delange method in short intervals with an application, Acta Arith., 163 (2014), 247–260. https://doi.org/10.4064/aa163-3-4
    [2] H. Delange, Sur des formules dues à Atle Selberg, Bull. Sci. Math., 83 (1959), 101–111.
    [3] H. Delange, Sur les formules de Atle Selberg, Acta Arith., 19 (1971), 105–146. https://doi.org/10.4064/AA-19-2-105-146
    [4] B. Feng, J. Wu, The arcsine law on divisors in arithmetic progressions modulo prime powers, Acta Math. Hungar., 163 (2021), 392–406. https://doi.org/10.1007/s10474-020-01105-7
    [5] G. Hanrot, G. Tenenbaum, J. Wu, Moyennes de certaines fonctions arithmétiques sur les entiers friables, 2, Proc. Lond. Math. Soc., 96 (2008), 107–135.
    [6] M. Huxley, On the difference between consecutive primes, Invent. Math., 15 (1971), 164–170. https://doi.org/10.1007/BF01418933
    [7] Y. Lau, Summatory formula of the convolution of two arithmetical functions, Mh. Math., 136 (2002), 35–45. https://doi.org/10.1007/s006050200032
    [8] Y. Lau, J. Wu, Sums of some multiplicative functions over a special set of integers, Acta Arith., 101 (2002), 365–394. https://doi.org/10.4064/aa101-4-5
    [9] C. D. Pan, C. B. Pan, Algebraic number theory (Chinese), Shandong: Shandong University Press, 2011.
    [10] A. Selberg, Note on a paper by L. G. Sathe, J. Indian Math. Soc., 18 (1954), 83–87. https://doi.org/10.18311/jims/1954/17018
    [11] G. Tenenbaum, Introduction to analytic and probabilistic number theory, Cambridge: Cambridge University Press, 1995.
    [12] J. Wu, Q. Wu, Mean values for a class of arithmetic functions in short intervals, Math. Nachr., 293 (2020), 178–202. https://doi.org/10.1002/mana.201800276
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
