Review Special Issues

Advancements in AI-driven multilingual comprehension for social robot interactions: An extensive review

  • Received: 13 August 2023 Revised: 27 September 2023 Accepted: 28 September 2023 Published: 16 October 2023
  • In the digital era, human-robot interaction is rapidly expanding, emphasizing the need for social robots to fluently understand and communicate in multiple languages. It is not merely about decoding words but about establishing connections and building trust. However, many current social robots are limited to popular languages, serving in fields like language teaching, healthcare and companionship. This review examines the AI-driven language abilities in social robots, providing a detailed overview of their applications and the challenges faced, from nuanced linguistic understanding to data quality and cultural adaptability. Last, we discuss the future of integrating advanced language models in robots to move beyond basic interactions and towards deeper emotional connections. Through this endeavor, we hope to provide a beacon for researchers, steering them towards a path where linguistic adeptness in robots is seamlessly melded with their capacity for genuine emotional engagement.

    Citation: Yanling Dong, Xiaolan Zhou. Advancements in AI-driven multilingual comprehension for social robot interactions: An extensive review[J]. Electronic Research Archive, 2023, 31(11): 6600-6633. doi: 10.3934/era.2023334




    Continuous data confined to the open interval (0, 1) arise frequently in practice, for example as ratios, proportions, and percentages, and practitioners must model them with distributions supported on that interval. The beta distribution is one of the most versatile such distributions and is used in a wide variety of situations. It has a drawback, however: it is inadequate for some real-world scenarios, such as hydrological data. For this reason, the Topp-Leone distribution [1] and Kumaraswamy's distribution [2] merit consideration as similarly structured alternatives to the beta model; a benefit in their case is that the distribution and quantile functions can be expressed in closed form. Several unit distributions have been created to model datasets in biology, engineering, actuarial science, economics, and financial risk management, among other fields. Some of these significant, well-known distributions include the unit logistic distribution [3], the unit Gompertz and unit Birnbaum-Saunders distributions [4,5], the extended reduced Kies distribution [6], the unit-Weibull distribution [7], the unit generalized half normal distribution [8], the unit Lindley distribution [9], the unit Burr XII distribution [10], the power unit Burr-XII distribution [11], the unit gamma-Gompertz distribution [12], the unit Teissier distribution [13], and the generalized unit half-logistic geometric distribution [14].

    Recently, Kharazmi et al. [15] proposed a new one-parameter unit distribution based on the definition of the arctan function. The new bounded distribution is called the arctan uniform distribution (AUD). The probability density function (PDF) and cumulative distribution function (CDF) of the AUD are given, respectively, by:

    f(z) = \frac{\delta}{\tan^{-1}(\delta)\left(1+\delta^{2}z^{2}\right)}, \quad 0<z<1,\ \delta>0, (1.1)

    and

    F(z) = \frac{\tan^{-1}(\delta z)}{\tan^{-1}(\delta)}, \quad 0<z<1,\ \delta>0, (1.2)

    where δ is the scale parameter. We depict the PDF (1.1) in Figure 1 for a few different choices of the parameter δ to examine the effect of δ on the PDF behavior. It can be concluded that the AUD has asymmetric shapes. Kharazmi et al. [15] also provided the moments of this distribution.

    Figure 1.  The PDF plots of the AUD.
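    To make (1.1) and (1.2) concrete, the following is a minimal Python sketch (the function names are ours, not from the paper) of the AUD's PDF, CDF, and quantile function; the closed-form quantile z = tan(u·tan⁻¹(δ))/δ follows directly by inverting (1.2).

```python
import math

def aud_pdf(z, delta):
    # PDF (1.1): f(z) = delta / [arctan(delta) * (1 + delta^2 z^2)], 0 < z < 1
    return delta / (math.atan(delta) * (1.0 + (delta * z) ** 2))

def aud_cdf(z, delta):
    # CDF (1.2): F(z) = arctan(delta * z) / arctan(delta)
    return math.atan(delta * z) / math.atan(delta)

def aud_quantile(u, delta):
    # Inverse of (1.2): z = tan(u * arctan(delta)) / delta, for 0 < u < 1
    return math.tan(u * math.atan(delta)) / delta
```

    Because the quantile function is available in closed form, samples can be drawn by plugging uniform variates into `aud_quantile`, which is convenient for simulation studies of the kind reported later in the paper.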

    Cost-effective sampling is a major issue in some research, especially when measuring the relevant feature is costly, inconvenient, or time-consuming. Ranked set sampling (RSS) gives the collected sample items additional structure and leverages that structure to create efficient inferential procedures. Small sets of units can often be ranked accurately even without true quantification: the ranking may be carried out by eye examination, preliminary data, expert judgment, prior sampling episodes, or other imprecise techniques, without the need for actual measurement. The RSS method is a great instrument for attaining observational economy, since it increases the accuracy attained per unit of the sample. This method of data collection was initially put forth by McIntyre [16] as an alternative to the widely used simple random sample (SRS) methodology for improving the effectiveness of the sample mean. It is used extensively in agriculture, biology, engineering, quality control, and environmental studies (see [17,18,19,20,21,22,23,24,25]). To implement RSS from a population, the following steps are followed:

    (1) Select s simple random samples, each of size s, where s is a small number.

    (2) Within each sample, rank the units from smallest to largest. The ranking is done by judgment, without actually measuring the units with respect to the variable of interest.

    (3) In the a_1th sample, only the unit with judgment rank a_1, a_1 = 1, ..., s, is actually measured. As a result, the RSS associated with one cycle is Z_{1(1:m)}, Z_{2(2:m)}, ..., Z_{m(m:m)}, where Z_{a_1(a_1:m)} stands for the a_1th order statistic from the a_1th sample.

    (4) Repeat the preceding steps for v cycles to obtain a sample of size m = sv, where s is the set size and v is the number of cycles. The observed RSS over the v cycles is therefore Z_{a_1(a_1:m)a_2}, a_1 = 1, ..., s, a_2 = 1, ..., v. For brevity, Z_{a_1 a_2} will be written instead of Z_{a_1(a_1:m)a_2} in the rest of the article.
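    The four steps above can be sketched in a few lines of Python. This is an illustrative sketch under the perfect-ranking assumption; the AUD quantile helper used to generate data is our addition, not part of the procedure itself.

```python
import math
import random

def aud_quantile(u, delta):
    # inverse CDF of the AUD: z = tan(u * arctan(delta)) / delta
    return math.tan(u * math.atan(delta)) / delta

def draw_rss(delta, s, v, rng):
    """RSS of size m = s*v under perfect ranking: in each cycle, draw s sets of
    size s, sort each set, and keep only the a1-th order statistic of the
    a1-th set (steps 1-4 above)."""
    sample = []
    for _ in range(v):                                  # step 4: v cycles
        for a1 in range(1, s + 1):                      # step 1: s sets per cycle
            group = sorted(aud_quantile(rng.random(), delta) for _ in range(s))
            sample.append(group[a1 - 1])                # steps 2-3: rank, measure one
    return sample

rng = random.Random(2023)
z_rss = draw_rss(delta=1.5, s=5, v=3, rng=rng)          # m = 15 measured units
```

    Note that s*s*v units are drawn but only s*v are actually measured, which is the observational economy RSS is designed to deliver.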

    Statistical inference relies heavily on the parametric estimation approach employed together with the sampling design strategy. Numerous studies have examined various techniques for estimating parameters based on RSS designs and their extensions. Reference [26] investigated the estimation of the parameters of location-scale family distributions. Other examples include the normal, exponential, and gamma distributions [27], the half logistic distribution [28], the Gumbel distribution [29], the generalized Rayleigh distribution [30], the Pareto distribution [31,32], the x-gamma distribution [33], the new Weibull-Pareto distribution [34], the extended inverted Topp-Leone distribution [35], the generalized quasi-Lindley distribution [36], and the inverse Kumaraswamy distribution [37]. For more, see [38,39,40,41,42].

    Because parameter estimation is important in practice, the statistical literature proposes many estimation approaches. Generally, maximum likelihood (ML) estimation is the first step in the estimation process; its popularity stems from its easily understood formulation, and the estimators it produces are, for instance, asymptotically normal and consistent. Other widely used estimation techniques are available in the literature, including maximum product of spacing (MPS), least squares (LS), weighted LS (WLS), Cramér-von Mises (CM), Anderson-Darling (AD), minimum spacing absolute-log distance (MSALD), right-tail AD (RAD), left-tail AD (LAD), minimum spacing absolute distance (MSAD), and percentile (PS) estimation, among a few others.

    This study's objective is to present an in-depth assessment of several frequentist approaches to the AUD. The wide range of fields in which the RSS approach is used served as the inspiration for this concentration. In addition, the RSS design offers, for a fixed sample size, more efficient estimators than the SRS design. We use certain significant traditional estimating techniques based on the following procedures: RSS and SRS. The following estimation methods are taken into consideration: MPS, LS, ML, WLS, CM, AD, MSALD, RAD, LAD, MSAD, and PS. A simulation task is then used to compare the suggested estimates based on the RSS design to those offered by the SRS approach for the same sample size. Some precision metrics are used in comparison studies. The novelty of this study stems from the lack of prior research evaluating all of these estimating techniques for the AUD based on RSS. For illustration reasons, an insurance data set is investigated as well. Therefore, the study will serve as a guide for selecting the most appropriate estimating technique for the AUD, which we believe applied statisticians would find fascinating.

    The following describes how this article is organized: Section 2 addresses the ML estimate (MLE) of the AUD parameter based on the RSS and SRS approaches. A few essential minimum distances of estimation for the proposed AUD are discussed in Section 3. In Section 4, several maximum and minimum product of spacing estimation are covered. The WLS, LS, and PS estimation techniques of the AUD are presented in Section 5. The effectiveness of the supplied estimating techniques is compared and evaluated in Section 6 using a Monte Carlo simulation. In Sections 7 and 8, respectively, an analysis of an insurance dataset is provided, followed by a conclusion.

    In this section, the MLE of the parameter δ of the AUD is considered based on RSS and SRS. First, assume that {Z_{a_1 a_2}, a_1 = 1, ..., s, a_2 = 1, ..., v} is an RSS of size m drawn from the AUD with PDF (1.1) and CDF (1.2), where v is the cycle count and s is the set size. From the structure of RSS, the observations are all mutually independent and, for each fixed a_1, identically distributed. If the judgment ranking is perfect, the PDF of the a_1th order statistic Z_{a_1 a_2} is as follows:

    f_{Z_{a_1 a_2}}(z) = \frac{m!}{(a_1-1)!\,(m-a_1)!}\,[f(z)]^{a_1-1}\,[1-F(z)]^{m-a_1}. (2.1)

    The likelihood function (LF) of the AUD, based on RSS, is given by:

    L(\delta) = \prod_{a_1=1}^{s}\prod_{a_2=1}^{v} \frac{m!}{(a_1-1)!\,(m-a_1)!} \left[\frac{\delta}{\tan^{-1}(\delta)\left(1+\delta^{2}z_{a_1 a_2}^{2}\right)}\right]^{a_1-1} \left[1-\frac{\tan^{-1}(\delta z_{a_1 a_2})}{\tan^{-1}(\delta)}\right]^{m-a_1}. (2.2)

    The log-LF of (2.2), denoted by \ell, is, up to an additive constant, as follows:

    \ell = \sum_{a_1=1}^{s}\sum_{a_2=1}^{v} \left\{(a_1-1)\left[\ln\delta-\ln\left(\tan^{-1}(\delta)\right)-\ln\left(1+\delta^{2}z_{a_1 a_2}^{2}\right)\right]+(m-a_1)\ln[C(\delta)]\right\}, (2.3)

    where C(\delta)=1-\frac{\tan^{-1}(\delta z_{a_1 a_2})}{\tan^{-1}(\delta)}. The MLE of δ, say \hat{\delta}_1, is obtained by maximizing (2.3) and can be computed as the solution of the following nonlinear equation:

    \frac{\partial \ell}{\partial \delta} = \sum_{a_1=1}^{s}\sum_{a_2=1}^{v}\left[\frac{a_1-1}{\delta}-\frac{2\delta\,(a_1-1)\,z_{a_1 a_2}^{2}}{1+\delta^{2}z_{a_1 a_2}^{2}}-\frac{a_1-1}{(1+\delta^{2})\tan^{-1}(\delta)}\right]+\sum_{a_1=1}^{s}\sum_{a_2=1}^{v}(m-a_1)\frac{C'(\delta)}{C(\delta)}, (2.4)

    where C'(\delta) = \frac{\tan^{-1}(\delta z_{a_1 a_2})}{\left[\tan^{-1}(\delta)\right]^{2}(1+\delta^{2})}-\frac{z_{a_1 a_2}}{\left(1+\delta^{2}z_{a_1 a_2}^{2}\right)\tan^{-1}(\delta)}.

    Setting (2.4) to zero and solving numerically, we get the MLE ˆδ1 of δ.
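    The log-LF (2.3) is straightforward to transcribe into code; the sketch below evaluates it up to its additive constant (the nested-list data layout is our choice, not the paper's):

```python
import math

def rss_loglik(delta, rss, m):
    """Log-LF (2.3) of the AUD under RSS, up to an additive constant.
    rss[a2-1][a1-1] holds Z_{a1 a2}: the a1-th judgment order statistic
    of cycle a2 (set size s, total sample size m = s*v)."""
    total = 0.0
    atan_d = math.atan(delta)
    for cycle in rss:                              # a2 = 1, ..., v
        for a1, z in enumerate(cycle, start=1):    # a1 = 1, ..., s
            # ln f(z) = ln(delta) - ln(arctan(delta)) - ln(1 + delta^2 z^2)
            log_f = math.log(delta) - math.log(atan_d) - math.log(1.0 + (delta * z) ** 2)
            C = 1.0 - math.atan(delta * z) / atan_d      # C(delta) from (2.3)
            total += (a1 - 1) * log_f + (m - a1) * math.log(C)
    return total
```

    Maximizing this function numerically over δ > 0 yields \hat{\delta}_1, equivalently to solving (2.4).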

    Additionally, the MLE of the AUD parameter under SRS is discussed next. Assume that z_1, z_2, ..., z_m is an observed SRS of size m from the AUD with PDF (1.1). The log-LF based on SRS, say \ell_1, is given by

    \ell_1 = m\ln(\delta)-m\ln\left[\tan^{-1}(\delta)\right]-\sum_{i=1}^{m}\ln\left(1+\delta^{2}z_{i}^{2}\right).

    The MLE δ_1 of δ is the solution of the nonlinear equation obtained by setting the following derivative to zero:

    \frac{\partial \ell_1}{\partial \delta} = \frac{m}{\delta}-\frac{m}{(1+\delta^{2})\tan^{-1}(\delta)}-\sum_{i=1}^{m}\frac{2\delta z_{i}^{2}}{1+\delta^{2}z_{i}^{2}}. (2.5)

    Then, δ_1 is obtained from (2.5) after setting it to zero and solving numerically.
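    Since setting (2.5) to zero has no closed-form solution, a numerical search is required. The sketch below maximizes \ell_1 directly by golden-section search; the search bounds and the assumption that \ell_1 is unimodal in δ are ours.

```python
import math
import random

def loglik_srs(delta, z):
    # l1 = m ln(delta) - m ln(arctan(delta)) - sum_i ln(1 + delta^2 z_i^2)
    m = len(z)
    return (m * math.log(delta) - m * math.log(math.atan(delta))
            - sum(math.log(1.0 + (delta * zi) ** 2) for zi in z))

def mle_srs(z, lo=1e-3, hi=50.0, iters=200):
    # golden-section search for the maximizer of l1 on [lo, hi]
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if loglik_srs(c, z) < loglik_srs(d, z):
            a = c
        else:
            b = d
    return 0.5 * (a + b)

rng = random.Random(42)
true_delta = 2.0
# draw an SRS via the closed-form AUD quantile z = tan(u * arctan(delta)) / delta
z = [math.tan(rng.random() * math.atan(true_delta)) / true_delta for _ in range(500)]
delta_hat = mle_srs(z)
```

    A root-finder applied directly to (2.5) would give the same estimate; maximizing \ell_1 simply avoids coding the derivative.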

    This section illustrates four estimation methods that minimize goodness-of-fit statistics: AD, RAD, LAD, and CM. This family of estimators is based on minimizing the discrepancy between the empirical CDF and the theoretical distribution function. The section presents the estimates of the AUD parameter for both SRS and RSS using these methodologies.

    Reference [43] introduced the AD test as an alternative to traditional statistical procedures for detecting deviations of a sample's distribution from the assumed distribution. Here, six estimators of the parameter δ are produced, based on RSS and on SRS.

    Suppose that the ordered items Z_{(1:m)}, Z_{(2:m)}, ..., Z_{(m:m)} are an RSS drawn from the AUD with sample size m = sv, where s is the set size and v is the cycle count. The AD estimate (ADE) \hat{\delta}_2 of δ for the AUD is generated by minimizing the following function:

    \vartheta_1 = -m-\frac{1}{m}\sum_{k=1}^{m}(2k-1)\left\{\log F\left(z_{(k:m)}|\delta\right)+\log \bar{F}\left(z_{(m-k+1:m)}|\delta\right)\right\}. (3.1)

    Instead of using (3.1), the ADE ˆδ2 of the AUD may be calculated by solving the nonlinear equation illustrated below:

    \sum_{k=1}^{m}(2k-1)\left\{\frac{\varphi_1\left(z_{(k:m)}|\delta\right)}{F\left(z_{(k:m)}|\delta\right)}-\frac{\varphi_2\left(z_{(m-k+1:m)}|\delta\right)}{\bar{F}\left(z_{(m-k+1:m)}|\delta\right)}\right\}=0,

    where,

    \varphi_1\left(z_{(k:m)}|\delta\right)=\frac{\tan^{-1}\left(\delta z_{(k:m)}\right)}{\left[\tan^{-1}(\delta)\right]^{2}(1+\delta^{2})}-\frac{z_{(k:m)}}{\left(1+\delta^{2}z_{(k:m)}^{2}\right)\tan^{-1}(\delta)}, (3.2)

    and

    \varphi_2\left(z_{(m-k+1:m)}|\delta\right)=\frac{\tan^{-1}\left(\delta z_{(m-k+1:m)}\right)}{\left[\tan^{-1}(\delta)\right]^{2}(1+\delta^{2})}-\frac{z_{(m-k+1:m)}}{\left(1+\delta^{2}z_{(m-k+1:m)}^{2}\right)\tan^{-1}(\delta)}. (3.3)

    The following function is used to provide the RAD estimate (RADE) ˆδ3 for δ of the AUD:

    \vartheta_2 = \frac{m}{2}-2\sum_{k=1}^{m}F\left(z_{(k:m)}|\delta\right)-\frac{1}{m}\sum_{k=1}^{m}(2k-1)\log \bar{F}\left(z_{(m+1-k:m)}|\delta\right). (3.4)

    Instead of using (3.4), the RADE ˆδ3 of the AUD may be calculated by solving the nonlinear equation illustrated below:

    -2\sum_{k=1}^{m}\varphi_1\left(z_{(k:m)}|\delta\right)+\frac{1}{m}\sum_{k=1}^{m}(2k-1)\frac{\varphi_2\left(z_{(m-k+1:m)}|\delta\right)}{\bar{F}\left(z_{(m-k+1:m)}|\delta\right)}=0,

    where φ1(.) and φ2(.) are defined in (3.2) and (3.3).

    The following function is used to provide the LAD estimate (LADE) ˆδ4 for δ of the AUD:

    \vartheta_3 = -\frac{3m}{2}+2\sum_{k=1}^{m}F\left(z_{(k:m)}|\delta\right)-\frac{1}{m}\sum_{k=1}^{m}(2k-1)\log F\left(z_{(k:m)}|\delta\right).

    To obtain the LADE ˆδ4 of the AUD, the following nonlinear equation may be solved:

    2\sum_{k=1}^{m}\varphi_1\left(z_{(k:m)}|\delta\right)-\frac{1}{m}\sum_{k=1}^{m}(2k-1)\frac{\varphi_1\left(z_{(k:m)}|\delta\right)}{F\left(z_{(k:m)}|\delta\right)}=0,

    where, φ1(.) is defined in (3.2).

    Next, consider the scenario in which the ordered items Z_{(1)}, Z_{(2)}, ..., Z_{(m)} are an observed SRS of size m from the AUD. The ADE δ_2 of δ for the AUD is provided by minimizing the following function

    \vartheta_1 = -m-\frac{1}{m}\sum_{l=1}^{m}(2l-1)\left\{\log F\left(z_{(l)}|\delta\right)+\log \bar{F}\left(z_{(m-l+1)}|\delta\right)\right\}, (3.5)

    with respect to δ. The following equation, which is equivalent to (3.5), may be solved numerically to provide δ_2:

    \sum_{l=1}^{m}(2l-1)\left\{\frac{\varphi_1\left(z_{(l)}|\delta\right)}{F\left(z_{(l)}|\delta\right)}-\frac{\varphi_2\left(z_{(m-l+1)}|\delta\right)}{\bar{F}\left(z_{(m-l+1)}|\delta\right)}\right\}=0,

    where

    \varphi_1\left(z_{(l)}|\delta\right)=\frac{\tan^{-1}\left(\delta z_{(l)}\right)}{\left[\tan^{-1}(\delta)\right]^{2}(1+\delta^{2})}-\frac{z_{(l)}}{\left(1+\delta^{2}z_{(l)}^{2}\right)\tan^{-1}(\delta)}, (3.6)

    and

    \varphi_2\left(z_{(m-l+1)}|\delta\right)=\frac{\tan^{-1}\left(\delta z_{(m-l+1)}\right)}{\left[\tan^{-1}(\delta)\right]^{2}(1+\delta^{2})}-\frac{z_{(m-l+1)}}{\left(1+\delta^{2}z_{(m-l+1)}^{2}\right)\tan^{-1}(\delta)}. (3.7)

    The following function is minimized to obtain the RADE δ_3 for the AUD:

    \vartheta_2 = \frac{m}{2}-2\sum_{l=1}^{m}F\left(z_{(l)}|\delta\right)-\frac{1}{m}\sum_{l=1}^{m}(2l-1)\log \bar{F}\left(z_{(m+1-l)}|\delta\right). (3.8)

    The RADE δ_3 of the AUD is determined by solving numerically the following nonlinear equation rather than using (3.8):

    -2\sum_{l=1}^{m}\varphi_1\left(z_{(l)}|\delta\right)+\frac{1}{m}\sum_{l=1}^{m}(2l-1)\frac{\varphi_2\left(z_{(m+1-l)}|\delta\right)}{\bar{F}\left(z_{(m+1-l)}|\delta\right)}=0,

    where φ1(.) and φ2(.) are defined in (3.6) and (3.7).

    The following function is used to provide the LADE δ4 for δ of the AUD:

    \vartheta_3 = -\frac{3m}{2}+2\sum_{l=1}^{m}F\left(z_{(l)}|\delta\right)-\frac{1}{m}\sum_{l=1}^{m}(2l-1)\log F\left(z_{(l)}|\delta\right).

    To obtain the LADE δ4 of the AUD, the following nonlinear equation may be solved:

    2\sum_{l=1}^{m}\varphi_1\left(z_{(l)}|\delta\right)-\frac{1}{m}\sum_{l=1}^{m}(2l-1)\frac{\varphi_1\left(z_{(l)}|\delta\right)}{F\left(z_{(l)}|\delta\right)}=0,

    where φ1(.) is defined in (3.6).
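    The three goodness-of-fit objectives of this section can be evaluated directly from the ordered data. The sketch below codes \vartheta_1, \vartheta_2, and \vartheta_3 and minimizes \vartheta_1 by a crude grid search; the grid is our assumption, and a root-finder applied to the estimating equations works equally well.

```python
import math

def aud_cdf(z, delta):
    # CDF (1.2) of the AUD
    return math.atan(delta * z) / math.atan(delta)

def ad_objectives(delta, z):
    """Return (theta1, theta2, theta3): the AD, right-tail AD, and left-tail AD
    objectives for an (ordered) sample z from the AUD."""
    zs = sorted(z)
    m = len(zs)
    F = [aud_cdf(v, delta) for v in zs]
    t1 = -m - sum((2 * k - 1) * (math.log(F[k - 1]) + math.log(1.0 - F[m - k]))
                  for k in range(1, m + 1)) / m
    t2 = m / 2.0 - 2.0 * sum(F) - sum((2 * k - 1) * math.log(1.0 - F[m - k])
                                      for k in range(1, m + 1)) / m
    t3 = -1.5 * m + 2.0 * sum(F) - sum((2 * k - 1) * math.log(F[k - 1])
                                       for k in range(1, m + 1)) / m
    return t1, t2, t3

def ade(z, grid=None):
    # crude grid search for the ADE over delta in (0, 10]; our choice of grid
    grid = grid or [0.05 * i for i in range(1, 200)]
    return min(grid, key=lambda d: ad_objectives(d, z)[0])
```

    The index m − k in the code corresponds to the order statistic z_{(m−k+1)} in the formulas, since Python lists are 0-based.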

    To support the use of minimum distance estimators of the CM type, [44] offered empirical evidence that the estimator's bias is smaller than that of other minimum distance estimators. Here, the RSS and SRS techniques are used to produce the CM estimate (CME) of the AUD parameter.

    Assume that the ordered items Z_{(1:m)}, Z_{(2:m)}, ..., Z_{(m:m)}, with sample size m = sv, where s is the set size and v is the cycle count, are the selected RSS from CDF (1.2). To obtain the CME \hat{\delta}_5 of δ, the following function is minimized with respect to δ:

    \psi = \frac{1}{12m}+\sum_{k=1}^{m}\left\{F\left(z_{(k:m)}|\delta\right)-\frac{2k-1}{2m}\right\}^{2}. (3.9)

    Instead of using (3.9), the CME can be derived by solving the following nonlinear equation:

    \sum_{k=1}^{m}\left\{F\left(z_{(k:m)}|\delta\right)-\frac{2k-1}{2m}\right\}\varphi_1\left(z_{(k:m)}|\delta\right)=0,

    where φ1(.) is defined in (3.2).

    Now suppose that the ordered items Z_{(1)}, Z_{(2)}, ..., Z_{(m)} are the observed SRS from the AUD with sample size m. The CME δ_5 of δ is then determined by minimizing the following function:

    \psi = \frac{1}{12m}+\sum_{l=1}^{m}\left\{F\left(z_{(l)}|\delta\right)-\frac{2l-1}{2m}\right\}^{2}. (3.10)

    Equivalently to (3.10), the CME δ_5 of δ is produced by solving the following equation:

    \sum_{l=1}^{m}\left\{F\left(z_{(l)}|\delta\right)-\frac{2l-1}{2m}\right\}\varphi_1\left(z_{(l)}|\delta\right)=0,

    where φ1(.) is defined in (3.6).
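    The CM criterion (3.9)/(3.10) has the same form under both designs once the data are ordered. A minimal sketch, with a grid search again standing in for a proper optimizer:

```python
import math

def aud_cdf(z, delta):
    # CDF (1.2) of the AUD
    return math.atan(delta * z) / math.atan(delta)

def cm_objective(delta, z):
    # psi = 1/(12 m) + sum_k [F(z_(k)) - (2k - 1)/(2m)]^2
    zs = sorted(z)
    m = len(zs)
    return 1.0 / (12.0 * m) + sum(
        (aud_cdf(zs[k - 1], delta) - (2 * k - 1) / (2.0 * m)) ** 2
        for k in range(1, m + 1))

def cme(z, grid=None):
    # crude grid search over delta in (0, 10]; our choice of grid
    grid = grid or [0.05 * i for i in range(1, 200)]
    return min(grid, key=lambda d: cm_objective(d, z))
```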

    According to Cheng and Amin [45], the MPS estimate (MPSE) of the unknown parameter of the AUD may be obtained from the differences in the values of the CDF at successive data points. This approach is as efficient as ML estimation and is consistent under a wider range of conditions.

    Let Z_{(1:m)}, Z_{(2:m)}, ..., Z_{(m:m)} be the ordered items of an RSS drawn from the AUD with sample size m = sv, where s is the set size and v is the cycle count. The uniform spacings of a sample taken from the AUD may be defined as follows:

    D_k(\delta)=F\left(z_{(k:m)}|\delta\right)-F\left(z_{(k-1:m)}|\delta\right),\quad k=1,2,...,m+1,

    where F\left(z_{(0:m)}|\delta\right)=0 and F\left(z_{(m+1:m)}|\delta\right)=1, so that \sum_{k=1}^{m+1}D_k(\delta)=1.

    To get the MPSE \hat{\delta}_6 of δ, the geometric mean of the spacings should be maximized:

    K(\delta)=\left\{\prod_{k=1}^{m+1}D_k(\delta)\right\}^{\frac{1}{m+1}},

    or, equivalently, by maximizing the following function:

    H(\delta)=\frac{1}{m+1}\sum_{k=1}^{m+1}\ln\left[D_k(\delta)\right].

    The MPSE \hat{\delta}_6 of δ can also be obtained by numerically solving the following nonlinear equation:

    \frac{\partial H(\delta)}{\partial \delta}=\frac{1}{1+m}\sum_{k=1}^{m+1}\frac{1}{D_k(\delta)}\left[\varphi_1\left(z_{(k:m)}|\delta\right)-\varphi_1\left(z_{(k-1:m)}|\delta\right)\right]=0,

    where \varphi_1\left(z_{(k:m)}|\delta\right) is defined in (3.2) and \varphi_1\left(z_{(k-1:m)}|\delta\right) has the same expression with z_{(k-1:m)}.

    Similarly, the minimum spacing distance estimator of δ is created by minimizing the following function.

    H(\delta)=\sum_{k=1}^{m+1}\Delta\left[D_k(\delta),\frac{1}{m+1}\right],

    where \Delta(u_1,u_2) is a suitable distance. According to [46], the choices \Delta(u_1,u_2)=|u_1-u_2| and \Delta(u_1,u_2)=|\log u_1-\log u_2| are referred to as the MSAD and MSALD, respectively. As a result, the MSAD estimate (MSADE) and MSALD estimate (MSALDE) of δ are provided by minimizing the following functions:

    H(\delta)=\sum_{k=1}^{m+1}\left|D_k(\delta)-\frac{1}{m+1}\right|, (4.1)

    and

    H(\delta)=\sum_{k=1}^{m+1}\left|\log\left(D_k(\delta)\right)-\log\left(\frac{1}{m+1}\right)\right|, (4.2)

    with respect to δ. Equivalently to (4.1) and (4.2), the MSADE ˆδ7 and MSALDE ˆδ8 are provided by solving the nonlinear equations

    \frac{\partial H(\delta)}{\partial \delta}=\sum_{k=1}^{m+1}\frac{D_k(\delta)-\frac{1}{m+1}}{\left|D_k(\delta)-\frac{1}{m+1}\right|}\left[\varphi_1\left(z_{(k:m)}|\delta\right)-\varphi_1\left(z_{(k-1:m)}|\delta\right)\right]=0,

    and

    \frac{\partial H(\delta)}{\partial \delta}=\sum_{k=1}^{m+1}\frac{\log\left(D_k(\delta)\right)-\log\left(\frac{1}{m+1}\right)}{\left|\log\left(D_k(\delta)\right)-\log\left(\frac{1}{m+1}\right)\right|}\left[\varphi_1\left(z_{(k:m)}|\delta\right)-\varphi_1\left(z_{(k-1:m)}|\delta\right)\right]=0,

    where \varphi_1\left(z_{(k:m)}|\delta\right) and \varphi_1\left(z_{(k-1:m)}|\delta\right) are defined above.

    In addition to the above, the MPSE δ_6 of δ for the AUD under SRS is obtained. Let Z_{(1)}, Z_{(2)}, ..., Z_{(m)} be an SRS of size m from CDF (1.2); the uniform spacings in this situation are

    D_l(\delta)=F\left(z_{(l)}|\delta\right)-F\left(z_{(l-1)}|\delta\right),\quad l=1,2,...,m+1,

    where F\left(z_{(0)}|\delta\right)=0 and F\left(z_{(m+1)}|\delta\right)=1, so that \sum_{l=1}^{m+1}D_l(\delta)=1.

    The MPSE δ6 of δ is provided by maximizing the following function:

    K(\delta)=\frac{1}{1+m}\sum_{l=1}^{m+1}\ln\left[D_l(\delta)\right]. (4.3)

    Equivalently to (4.3), the MPSE δ_6 of δ is produced by solving the following nonlinear equation numerically:

    \frac{\partial K(\delta)}{\partial \delta}=\frac{1}{1+m}\sum_{l=1}^{m+1}\frac{1}{D_l(\delta)}\left[\varphi_1\left(z_{(l)}|\delta\right)-\varphi_1\left(z_{(l-1)}|\delta\right)\right]=0,

    where \varphi_1\left(z_{(l)}|\delta\right) is defined in (3.6), and \varphi_1\left(z_{(l-1)}|\delta\right) has the same expression with z_{(l-1)}.

    Furthermore, the MSADE δ_7 and MSALDE δ_8 are obtained by minimizing the following functions:

    K(\delta)=\sum_{l=1}^{m+1}\left|D_l(\delta)-\frac{1}{m+1}\right|, (4.4)

    and

    K(\delta)=\sum_{l=1}^{m+1}\left|\log\left(D_l(\delta)\right)-\log\left(\frac{1}{m+1}\right)\right|, (4.5)

    with respect to δ. Equivalently to (4.4) and (4.5), the MSADE δ7 and MSALDE δ8 are provided by solving the nonlinear equations

    \frac{\partial K(\delta)}{\partial \delta}=\sum_{l=1}^{m+1}\frac{D_l(\delta)-\frac{1}{m+1}}{\left|D_l(\delta)-\frac{1}{m+1}\right|}\left[\varphi_1\left(z_{(l)}|\delta\right)-\varphi_1\left(z_{(l-1)}|\delta\right)\right]=0,

    and

    \frac{\partial K(\delta)}{\partial \delta}=\sum_{l=1}^{m+1}\frac{\log\left(D_l(\delta)\right)-\log\left(\frac{1}{m+1}\right)}{\left|\log\left(D_l(\delta)\right)-\log\left(\frac{1}{m+1}\right)\right|}\left[\varphi_1\left(z_{(l)}|\delta\right)-\varphi_1\left(z_{(l-1)}|\delta\right)\right]=0,

    where \varphi_1\left(z_{(l)}|\delta\right) and \varphi_1\left(z_{(l-1)}|\delta\right) are defined above.
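    All three spacing-based criteria of this section rest on the same spacings D_k. The sketch below computes them and the MPS, MSAD, and MSALD objectives; the small floor inside the logarithms is our numerical safeguard, not part of the definitions.

```python
import math

def aud_cdf(z, delta):
    # CDF (1.2) of the AUD
    return math.atan(delta * z) / math.atan(delta)

def spacings(delta, z):
    # D_k = F(z_(k)) - F(z_(k-1)), with F(z_(0)) = 0 and F(z_(m+1)) = 1
    zs = sorted(z)
    F = [0.0] + [aud_cdf(v, delta) for v in zs] + [1.0]
    return [F[k] - F[k - 1] for k in range(1, len(F))]

def mps_objective(delta, z):
    # maximize: mean log-spacing H(delta)
    D = spacings(delta, z)
    return sum(math.log(max(d, 1e-300)) for d in D) / len(D)

def msad_objective(delta, z):
    # minimize: sum_k |D_k - 1/(m+1)|
    D = spacings(delta, z)
    t = 1.0 / len(D)
    return sum(abs(d - t) for d in D)

def msald_objective(delta, z):
    # minimize: sum_k |log D_k - log(1/(m+1))|
    D = spacings(delta, z)
    t = math.log(1.0 / len(D))
    return sum(abs(math.log(max(d, 1e-300)) - t) for d in D)
```

    Each objective is then optimized numerically over δ > 0, exactly as for the goodness-of-fit criteria of the previous section.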

    This section offers the LS estimate (LSE), WLS estimate (WLSE), and PS estimate (PSE) for the AUD parameter based on RSS and SRS methods.

    Let Z_{(1:m)}, Z_{(2:m)}, ..., Z_{(m:m)} be an observed ordered RSS of size m = sv from the AUD. The LSE \hat{\delta}_9 and WLSE \hat{\delta}_{10} are derived by minimizing the following functions with respect to δ:

    \gamma=\sum_{k=1}^{m}\left[F\left(z_{(k:m)}|\delta\right)-\frac{k}{m+1}\right]^{2}, (5.1)

    and

    \gamma=\sum_{k=1}^{m}\frac{(m+1)^{2}(m+2)}{k\,(m-k+1)}\left[F\left(z_{(k:m)}|\delta\right)-\frac{k}{m+1}\right]^{2}. (5.2)

    These estimators, \hat{\delta}_9 and \hat{\delta}_{10}, equivalently to (5.1) and (5.2), can be obtained by solving the following equations numerically:

    \sum_{k=1}^{m}\left[F\left(z_{(k:m)}|\delta\right)-\frac{k}{m+1}\right]\varphi_1\left(z_{(k:m)}|\delta\right)=0,

    and

    \sum_{k=1}^{m}\frac{(m+1)^{2}(m+2)}{k\,(m-k+1)}\left[F\left(z_{(k:m)}|\delta\right)-\frac{k}{m+1}\right]\varphi_1\left(z_{(k:m)}|\delta\right)=0,

    where φ1(z(k:m)|δ) is defined before.

    Additionally, suppose that Z_{(1)}, Z_{(2)}, ..., Z_{(m)} is an ordered SRS of size m taken from the AUD. The LSE δ_9 and WLSE δ_{10} of δ are produced by solving numerically the following equations:

    \sum_{l=1}^{m}\left[F\left(z_{(l)}|\delta\right)-\frac{l}{m+1}\right]\varphi_1\left(z_{(l)}|\delta\right)=0,

    and

    \sum_{l=1}^{m}\frac{(m+1)^{2}(m+2)}{l\,(m-l+1)}\left[F\left(z_{(l)}|\delta\right)-\frac{l}{m+1}\right]\varphi_1\left(z_{(l)}|\delta\right)=0,

    where \varphi_1\left(z_{(l)}|\delta\right) is defined before.
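    The LS and WLS criteria (5.1) and (5.2) differ only in the weight (m+1)²(m+2)/[k(m−k+1)]. A minimal sketch of the two objectives, to be minimized numerically over δ:

```python
import math

def aud_cdf(z, delta):
    # CDF (1.2) of the AUD
    return math.atan(delta * z) / math.atan(delta)

def ls_objective(delta, z):
    # gamma = sum_k [F(z_(k)) - k/(m+1)]^2
    zs = sorted(z)
    m = len(zs)
    return sum((aud_cdf(zs[k - 1], delta) - k / (m + 1.0)) ** 2
               for k in range(1, m + 1))

def wls_objective(delta, z):
    # same squared deviations, weighted by (m+1)^2 (m+2) / [k (m-k+1)]
    zs = sorted(z)
    m = len(zs)
    return sum((m + 1.0) ** 2 * (m + 2.0) / (k * (m - k + 1.0))
               * (aud_cdf(zs[k - 1], delta) - k / (m + 1.0)) ** 2
               for k in range(1, m + 1))
```

    The weights downweight the middle order statistics least and are largest at the extremes, reflecting the smaller variance of F(z_(k)) near k = 1 and k = m.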

    The percentile approach is one of the methods often employed for estimating the Weibull distribution's parameters; it differs from other estimation techniques in its ease of computation and its effectiveness in parameter estimation [47]. Here, the PSE \hat{\delta}_{11} of δ of the AUD is provided using the RSS and SRS methods.

    Consider Z_{(1:m)}, Z_{(2:m)}, ..., Z_{(m:m)} as an observed ordered RSS of size m = sv from the AUD. The PSE \hat{\delta}_{11} of the AUD's parameter is obtained by minimizing the following function, where p_k = \frac{k}{m+1} is taken as an estimate of F\left(z_{(k:m)}|\delta\right):

    \Lambda=\sum_{k=1}^{m}\left[z_{(k:m)}-\frac{1}{\delta}\tan\left(p_k\tan^{-1}(\delta)\right)\right]^{2},

    with respect to δ.

    In the case of the SRS method, let Z_{(1)}, Z_{(2)}, ..., Z_{(m)} be an ordered SRS of size m drawn from the AUD. The PSE δ_{11} of δ is obtained by minimizing the following function, with p_l = \frac{l}{m+1}:

    \Lambda_1=\sum_{l=1}^{m}\left[z_{(l)}-\frac{1}{\delta}\tan\left(p_l\tan^{-1}(\delta)\right)\right]^{2},

    with respect to δ.
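    Both Λ and Λ_1 minimize squared distances between the order statistics and the closed-form AUD quantile evaluated at p_k = k/(m+1). A sketch (the grid search is again our stand-in for a proper optimizer):

```python
import math

def ps_objective(delta, z):
    # Lambda = sum_k [z_(k) - (1/delta) tan(p_k * arctan(delta))]^2, p_k = k/(m+1)
    zs = sorted(z)
    m = len(zs)
    return sum((zs[k - 1] - math.tan((k / (m + 1.0)) * math.atan(delta)) / delta) ** 2
               for k in range(1, m + 1))

def pse(z, grid=None):
    # crude grid search over delta in (0, 10]; our choice of grid
    grid = grid or [0.05 * i for i in range(1, 200)]
    return min(grid, key=lambda d: ps_objective(d, z))
```

    The closed-form quantile of the AUD is what makes the percentile method especially cheap here: no root-finding inside the objective is needed.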

    This section examines the various estimation methods presented in this paper. The goal is to assess their efficacy in estimating the model parameter by generating random datasets from the proposed model, ranking those datasets, and applying the estimation methods to identify the most recommended one. The simulation is conducted under the assumption of perfect ranking, as outlined below:

    ● To generate an RSS from the AUD with a fixed set size s=5 and different cycle numbers v=3,10,24,40,60, and 90, the corresponding sample sizes m=sv=15,50,120,200,300, and 450 are employed.

    ● Generate an SRS from the AUD with the specified sample sizes, m = 15, 50, 120, 200, 300, and 450.

    ● We have a set of estimates corresponding to each sample size, using the true parameter values (δ) of 0.15, 0.6, 1.0, 1.5, 2.0, and 2.5.

    ● To evaluate the effectiveness of the estimation methods, three measures are employed, which include the following:

    Average absolute bias, |bias(\hat{\delta})|=\frac{1}{M}\sum_{i=1}^{M}\left|\hat{\delta}_i-\delta\right|; mean squared error (MSE), MSE=\frac{1}{M}\sum_{i=1}^{M}\left(\hat{\delta}_i-\delta\right)^{2}; and mean absolute relative error (MRE), MRE=\frac{1}{M}\sum_{i=1}^{M}\left|\hat{\delta}_i-\delta\right|/\delta.

    ● The measures outlined in the previous step serve as objective benchmarks for evaluating the accuracy and reliability of the estimated parameters. Utilizing these evaluation metrics enables a comprehensive assessment of the performance of the estimation techniques. This evaluation process provides valuable insights into the effectiveness and appropriateness of these techniques for the particular model under consideration.

    ● By repeating this process multiple times through numerous iterations, we can obtain a reliable and robust assessment of the estimation techniques. This repeated evaluation helps ensure that the performance results are consistent and representative, contributing to a more thorough understanding of the effectiveness of these techniques in estimating the model parameters.

    ● The results of the evaluation measures are presented in Tables 1–12, encompassing both SRS and RSS. These tables offer a comprehensive summary of the outcomes obtained. In these tables, the braced value next to each entry gives its rank among all the estimation approaches examined in the study; lower ranks indicate stronger performance relative to the investigated estimation methods. These tables serve as a valuable reference for assessing the relative power and significance of the different estimation techniques.
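    The three accuracy measures reduce to a few lines of code; note that, as defined, the MRE is simply the average absolute bias divided by δ (consistent with Table 1, where 0.5128/0.15 ≈ 3.4184 for the MLE at m = 15):

```python
def accuracy_measures(estimates, delta):
    """Average absolute bias, MSE, and MRE over M Monte Carlo estimates."""
    M = len(estimates)
    bias = sum(abs(d - delta) for d in estimates) / M
    mse = sum((d - delta) ** 2 for d in estimates) / M
    mre = bias / delta
    return bias, mse, mre
```

    In a full study, `estimates` would hold the M replicated values of one estimator at one (m, δ) configuration, and the three measures would be tabulated and ranked as in Tables 1–12.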

    Table 1.  Bias, MSE, and MRE values for (δ=0.15) under SRS.
    m Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 bias δ 0.5128{5} 0.5108{4} 0.5268{8} 0.4521{1} 0.5305{9} 0.4879{3} 0.4799{2} 0.5257{7} 0.5755{11} 0.5345{10} 0.5223{6}
    MSE δ 0.5886{4} 0.5957{5} 0.6356{9} 0.4704{1} 0.6343{8} 0.5022{2} 0.5088{3} 0.6309{7} 0.806{11} 0.7126{10} 0.6245{6}
    MRE δ 3.4184{5} 3.405{4} 3.5117{8} 3.0141{1} 3.5368{9} 3.2523{3} 3.1997{2} 3.5049{7} 3.8366{11} 3.563{10} 3.4823{6}
    Ranks 14{5} 13{4} 25{8} 3{1} 26{9} 8{3} 7{2} 21{7} 33{11} 30{10} 18{6}
    50 bias δ 0.3349{4} 0.3408{6} 0.3462{8} 0.3171{1} 0.337{5} 0.329{3} 0.327{2} 0.3414{7} 0.3703{11} 0.361{10} 0.3535{9}
    MSE δ 0.1991{4} 0.2068{6} 0.2138{8} 0.1837{1} 0.205{5} 0.1905{2} 0.1917{3} 0.2082{7} 0.2532{10} 0.2582{11} 0.2295{9}
    MRE δ 2.2326{4} 2.2718{6} 2.3083{8} 2.1143{1} 2.2469{5} 2.1934{3} 2.18{2} 2.2762{7} 2.4686{11} 2.4064{10} 2.357{9}
    Ranks 12{4} 18{6} 24{8} 3{1} 15{5} 8{3} 7{2} 21{7} 32{11} 31{10} 27{9}
    120 bias δ 0.2637{7} 0.2659{8} 0.2568{2} 0.2537{1} 0.2628{5} 0.2617{4} 0.2591{3} 0.2635{6} 0.2767{9} 0.2825{11} 0.279{10}
    MSE δ 0.1088{5} 0.1116{8} 0.1055{2} 0.1044{1} 0.1099{7} 0.1089{6} 0.1061{3} 0.1073{4} 0.1246{9} 0.1374{11} 0.1272{10}
    MRE δ 1.7582{7} 1.7728{8} 1.7119{2} 1.6911{1} 1.7523{5} 1.7446{4} 1.7271{3} 1.7564{6} 1.8445{9} 1.8832{11} 1.86{10}
    Ranks 19{7} 24{8} 6{2} 3{1} 17{6} 14{4} 9{3} 16{5} 27{9} 33{11} 30{10}
    200 bias δ 0.2313{8} 0.2301{6} 0.2294{4} 0.2173{1} 0.2272{2} 0.2307{7} 0.2277{3} 0.2297{5} 0.2403{9} 0.2447{11} 0.2438{10}
    MSE δ 0.0796{8} 0.0781{5} 0.0788{6} 0.0722{1} 0.0761{2} 0.0791{7} 0.0764{3} 0.0769{4} 0.0874{9} 0.0958{11} 0.0907{10}
    MRE δ 1.5418{8} 1.534{6} 1.5293{4} 1.4486{1} 1.5149{2} 1.5379{7} 1.5181{3} 1.5314{5} 1.6019{9} 1.6315{11} 1.6256{10}
    Ranks 24{8} 17{6} 14{4.5} 3{1} 6{2} 21{7} 9{3} 14{4.5} 27{9} 33{11} 30{10}
    300 bias δ 0.2075{4} 0.2098{6} 0.2108{8} 0.1925{1} 0.2059{3} 0.2084{5} 0.205{2} 0.2106{7} 0.2183{9} 0.223{11} 0.2207{10}
    MSE δ 0.0603{4} 0.062{7} 0.0629{8} 0.0549{1} 0.0602{3} 0.0614{5} 0.0593{2} 0.0619{6} 0.0686{9} 0.0763{11} 0.0701{10}
    MRE δ 1.3836{4} 1.3986{6} 1.4053{8} 1.283{1} 1.3724{3} 1.3896{5} 1.3666{2} 1.4037{7} 1.4553{9} 1.4864{11} 1.4712{10}
    Ranks 12{4} 19{6} 24{8} 3{1} 9{3} 15{5} 6{2} 20{7} 27{9} 33{11} 30{10}
    450 bias δ 0.191{8} 0.1908{7} 0.1869{3.5} 0.1744{1} 0.1891{5} 0.1853{2} 0.1869{3.5} 0.1905{6} 0.1957{9} 0.2039{11} 0.1996{10}
    MSE δ 0.0488{6} 0.0492{8} 0.047{3.5} 0.0436{1} 0.0487{5} 0.0463{2} 0.047{3.5} 0.0489{7} 0.0524{9} 0.0611{11} 0.0548{10}
    MRE δ 1.2732{8} 1.2723{7} 1.2459{3} 1.1625{1} 1.261{5} 1.2351{2} 1.2463{4} 1.2698{6} 1.3047{9} 1.3591{11} 1.3308{10}
    Ranks 22{7.5} 22{7.5} 10{3} 3{1} 15{5} 6{2} 11{4} 19{6} 27{9} 33{11} 30{10}

    Table 2.  Bias, MSE, and MRE values for (δ=0.15) under RSS.
    m Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 bias ˆδ 0.3505{1} 0.395{5} 0.4091{7} 0.3525{2} 0.4098{8} 0.3829{4} 0.37{3} 0.3965{6} 0.4737{11} 0.4317{10} 0.4258{9}
    MSE ˆδ 0.2218{1} 0.3044{5} 0.3299{7} 0.2503{2} 0.3354{8} 0.2813{4} 0.2668{3} 0.3068{6} 0.4836{11} 0.4161{10} 0.383{9}
    MRE ˆδ 2.3369{1} 2.6331{5} 2.7272{7} 2.3501{2} 2.7317{8} 2.5528{4} 2.4666{3} 2.6436{6} 3.1579{11} 2.8777{10} 2.8388{9}
    Ranks 3{1} 15{5} 21{7} 6{2} 24{8} 12{4} 9{3} 18{6} 33{11} 30{10} 27{9}
    50 bias ˆδ 0.2485{9} 0.2123{5} 0.2129{6} 0.1925{1} 0.213{7} 0.2075{2} 0.2086{3} 0.212{4} 0.2296{8} 0.2735{11} 0.2679{10}
    MSE ˆδ 0.094{9} 0.0642{5} 0.0652{7} 0.0544{1} 0.0648{6} 0.0606{2} 0.0624{3} 0.0636{4} 0.0773{8} 0.1338{11} 0.1145{10}
    MRE ˆδ 1.6565{9} 1.4152{5} 1.4193{6} 1.2837{1} 1.4197{7} 1.383{2} 1.3908{3} 1.4134{4} 1.5306{8} 1.823{11} 1.7858{10}
    Ranks 27{9} 15{5} 19{6} 3{1} 20{7} 6{2} 9{3} 12{4} 24{8} 33{11} 30{10}
    120 bias ˆδ 0.1991{9} 0.1406{2} 0.1423{4} 0.1232{1} 0.1427{5} 0.1412{3} 0.143{6} 0.1462{7} 0.1486{8} 0.2087{11} 0.2026{10}
    MSE ˆδ 0.0542{9} 0.025{2} 0.0258{6} 0.0201{1} 0.0257{5} 0.0253{3} 0.0256{4} 0.0266{7} 0.0283{8} 0.0667{11} 0.057{10}
    MRE ˆδ 1.3271{9} 0.9374{2} 0.9488{4} 0.8213{1} 0.9511{5} 0.9411{3} 0.9531{6} 0.9747{7} 0.9906{8} 1.3916{11} 1.3509{10}
    Ranks 27{9} 6{2} 14{4} 3{1} 15{5} 9{3} 16{6} 21{7} 24{8} 33{11} 30{10}
    200 bias ˆδ 0.1806{10} 0.1085{2} 0.1113{5} 0.0928{1} 0.1112{4} 0.1106{3} 0.1139{6} 0.1148{7} 0.1173{8} 0.1853{11} 0.1793{9}
    MSE ˆδ 0.0422{9} 0.0152{2} 0.0157{3.5} 0.0117{1} 0.0158{5} 0.0157{3.5} 0.0165{6} 0.0169{7} 0.0174{8} 0.05{11} 0.0429{10}
    MRE ˆδ 1.204{10} 0.7233{2} 0.7423{5} 0.6186{1} 0.7416{4} 0.7376{3} 0.7596{6} 0.7652{7} 0.7821{8} 1.2355{11} 1.1956{9}
    Ranks 29{10} 6{2} 13.5{5} 3{1} 13{4} 9.5{3} 18{6} 21{7} 24{8} 33{11} 28{9}
    300 bias ˆδ 0.1596{9} 0.0902{4} 0.09{3} 0.0691{1} 0.0905{5} 0.0891{2} 0.0912{6} 0.0994{8} 0.0934{7} 0.1697{11} 0.1663{10}
    MSE ˆδ 0.0325{9} 0.0112{5} 0.011{3} 0.0073{1} 0.0111{4} 0.0109{2} 0.0114{6} 0.0142{8} 0.0118{7} 0.0405{11} 0.0354{10}
    MRE ˆδ 1.0638{9} 0.6017{4} 0.6001{3} 0.4603{1} 0.6035{5} 0.5937{2} 0.6078{6} 0.6626{8} 0.6227{7} 1.1314{11} 1.1087{10}
    Ranks 27{9} 13{4} 9{3} 3{1} 14{5} 6{2} 18{6} 24{8} 21{7} 33{11} 30{10}
    450 bias ˆδ 0.1462{9} 0.0824{8} 0.0689{2} 0.0478{1} 0.0694{3} 0.0698{4} 0.0729{6} 0.0761{7} 0.0728{5} 0.1577{11} 0.1518{10}
    MSE ˆδ 0.0273{9} 0.0114{8} 0.0073{2} 0.0041{1} 0.0074{3.5} 0.0074{3.5} 0.0079{5.5} 0.0097{7} 0.0079{5.5} 0.0337{11} 0.0293{10}
    MRE ˆδ 0.9747{9} 0.5493{8} 0.4592{2} 0.3183{1} 0.4629{3} 0.4656{4} 0.486{6} 0.507{7} 0.4853{5} 1.0514{11} 1.0123{10}
    Ranks 27{9} 24{8} 6{2} 3{1} 9.5{3} 11.5{4} 17.5{6} 21{7} 15.5{5} 33{11} 30{10}

    Table 3.  Bias, MSE, and MRE values for (δ=0.6) under SRS.
    m Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 bias δ 0.6207{5} 0.6075{4} 0.6427{8} 0.5986{2} 0.6209{6} 0.5592{1} 0.6055{3} 0.6366{7} 0.7045{10} 0.7067{11} 0.6969{9}
    MSE δ 0.6293{5} 0.5692{3} 0.6515{7} 0.5246{2} 0.6296{6} 0.4705{1} 0.6066{4} 0.6556{8} 0.8866{10} 0.9346{11} 0.7847{9}
    MRE δ 1.0346{5} 1.0126{4} 1.0711{8} 0.9977{2} 1.0349{6} 0.932{1} 1.0091{3} 1.061{7} 1.1742{10} 1.1778{11} 1.1615{9}
    Ranks 15{5} 11{4} 23{8} 6{2} 18{6} 3{1} 10{3} 22{7} 30{10} 33{11} 27{9}
    50 bias δ 0.4557{8} 0.4571{9} 0.4129{4} 0.4021{2} 0.4086{3} 0.3988{1} 0.4154{5} 0.4482{7} 0.4369{6} 0.4633{10} 0.4807{11}
    MSE δ 0.297{9} 0.2957{8} 0.246{5} 0.226{2} 0.2334{3} 0.2179{1} 0.241{4} 0.2817{7} 0.2767{6} 0.3094{10} 0.3237{11}
    MRE δ 0.7596{8} 0.7618{9} 0.6882{4} 0.6701{2} 0.6809{3} 0.6647{1} 0.6924{5} 0.7471{7} 0.7282{6} 0.7722{10} 0.8012{11}
    Ranks 25{8} 26{9} 13{4} 6{2} 9{3} 3{1} 14{5} 21{7} 18{6} 30{10} 33{11}
    120 bias δ 0.3261{7} 0.3161{5} 0.2963{3} 0.2886{2} 0.3027{4} 0.2852{1} 0.3573{8} 0.3194{6} 0.3818{11} 0.3685{10} 0.3595{9}
    MSE δ 0.1668{7} 0.1613{5} 0.135{3} 0.1274{2} 0.138{4} 0.1264{1} 0.2051{9} 0.1637{6} 0.2333{11} 0.2082{10} 0.1993{8}
    MRE δ 0.5435{7} 0.5268{5} 0.4939{3} 0.4811{2} 0.5044{4} 0.4754{1} 0.5955{8} 0.5324{6} 0.6364{11} 0.6142{10} 0.5991{9}
    Ranks 21{7} 15{5} 9{3} 6{2} 12{4} 3{1} 25{8} 18{6} 33{11} 30{10} 26{9}
    200 bias δ 0.3121{8} 0.2876{6} 0.2792{4} 0.2252{1} 0.2871{5} 0.2263{2} 0.264{3} 0.3165{9} 0.291{7} 0.3194{10} 0.3455{11}
    MSE δ 0.1814{9} 0.1597{7} 0.144{4} 0.0808{1} 0.1537{5} 0.0831{2} 0.1324{3} 0.1883{10} 0.1551{6} 0.1698{8} 0.2015{11}
    MRE δ 0.5202{8} 0.4793{6} 0.4653{4} 0.3753{1} 0.4785{5} 0.3771{2} 0.44{3} 0.5275{9} 0.485{7} 0.5324{10} 0.5758{11}
    Ranks 25{8} 19{6} 12{4} 3{1} 15{5} 6{2} 9{3} 28{9.5} 20{7} 28{9.5} 33{11}
    300 bias δ 0.2609{6} 0.2871{10} 0.2688{8} 0.1889{1} 0.2707{9} 0.1897{2} 0.2442{5} 0.2357{3} 0.244{4} 0.2623{7} 0.2952{11}
    MSE δ 0.1497{7} 0.182{11} 0.1598{8} 0.059{1} 0.1632{9} 0.062{2} 0.1263{5} 0.1124{3.5} 0.1124{3.5} 0.1267{6} 0.1724{10}
    MRE δ 0.4348{6} 0.4786{10} 0.448{8} 0.3148{1} 0.4511{9} 0.3162{2} 0.4071{5} 0.3928{3} 0.4067{4} 0.4372{7} 0.492{11}
    Ranks 19{6} 31{10} 24{8} 3{1} 27{9} 6{2} 15{5} 9.5{3} 11.5{4} 20{7} 32{11}
    450 bias δ 0.2359{8.5} 0.2146{3} 0.2196{4} 0.1484{1} 0.2215{5} 0.1498{2} 0.239{10} 0.2304{6} 0.2359{8.5} 0.2309{7} 0.2523{11}
    MSE δ 0.1406{9} 0.1139{4} 0.1222{5} 0.0368{1} 0.1232{6} 0.0375{2} 0.1471{11} 0.1298{8} 0.1262{7} 0.1117{3} 0.1454{10}
    MRE δ 0.3932{9} 0.3576{3} 0.3661{4} 0.2474{1} 0.3692{5} 0.2497{2} 0.3983{10} 0.384{6} 0.3931{8} 0.3848{7} 0.4205{11}
    Ranks 26.5{9} 10{3} 13{4} 3{1} 16{5} 6{2} 31{10} 20{7} 23.5{8} 17{6} 32{11}

    Table 4.  Bias, MSE, and MRE values for (δ=0.6) under RSS.
    m Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 bias ˆδ 0.4538{1} 0.472{3} 0.5065{7} 0.4879{5} 0.4887{6} 0.4678{2} 0.4746{4} 0.5331{8} 0.543{9} 0.5913{11} 0.5554{10}
    MSE ˆδ 0.2993{1} 0.316{3} 0.3685{7} 0.3255{5} 0.3619{6} 0.3087{2} 0.3177{4} 0.4032{8} 0.4637{10} 0.5488{11} 0.4368{9}
    MRE ˆδ 0.7563{1} 0.7867{3} 0.8442{7} 0.8132{5} 0.8145{6} 0.7797{2} 0.7909{4} 0.8885{8} 0.9049{9} 0.9854{11} 0.9257{10}
    Ranks 3{1} 9{3} 21{7} 15{5} 18{6} 6{2} 12{4} 24{8} 28{9} 33{11} 29{10}
    50 bias ˆδ 0.3076{9} 0.2821{8} 0.201{1} 0.2118{4} 0.2073{3} 0.2034{2} 0.2238{6} 0.2503{7} 0.2234{5} 0.3601{10} 0.365{11}
    MSE ˆδ 0.1571{8} 0.1676{9} 0.0668{1} 0.0736{4} 0.072{3} 0.0685{2} 0.085{6} 0.128{7} 0.0804{5} 0.1958{10} 0.2026{11}
    MRE ˆδ 0.5127{9} 0.4701{8} 0.3351{1} 0.353{4} 0.3455{3} 0.3389{2} 0.373{6} 0.4171{7} 0.3723{5} 0.6002{10} 0.6083{11}
    Ranks 26{9} 25{8} 3{1} 12{4} 9{3} 6{2} 18{6} 21{7} 15{5} 30{10} 33{11}
    120 bias ˆδ 0.2623{10} 0.146{8} 0.083{2} 0.0856{4} 0.0844{3} 0.0799{1} 0.1431{7} 0.1349{6} 0.1195{5} 0.2844{11} 0.2593{9}
    MSE ˆδ 0.1549{11} 0.0865{8} 0.0114{2.5} 0.0125{4} 0.0114{2.5} 0.0103{1} 0.0809{7} 0.0747{6} 0.0508{5} 0.1489{10} 0.1346{9}
    MRE ˆδ 0.4372{10} 0.2433{8} 0.1383{2} 0.1426{4} 0.1406{3} 0.1332{1} 0.2385{7} 0.2248{6} 0.1992{5} 0.4739{11} 0.4322{9}
    Ranks 31{10} 24{8} 6.5{2} 12{4} 8.5{3} 3{1} 21{7} 18{6} 15{5} 32{11} 27{9}
    200 bias ˆδ 0.192{9} 0.0877{8} 0.0497{1} 0.0499{2} 0.051{4} 0.0502{3} 0.0738{7} 0.0695{6} 0.0688{5} 0.2446{11} 0.2316{10}
    MSE ˆδ 0.0998{9} 0.0529{8} 0.0039{1} 0.0041{3} 0.0049{4} 0.004{2} 0.0325{7} 0.0291{6} 0.0222{5} 0.1347{11} 0.1316{10}
    MRE ˆδ 0.3201{9} 0.1461{8} 0.0829{1} 0.0832{2} 0.0851{4} 0.0836{3} 0.123{7} 0.1159{6} 0.1146{5} 0.4077{11} 0.386{10}
    Ranks 27{9} 24{8} 3{1} 7{2} 12{4} 8{3} 21{7} 18{6} 15{5} 33{11} 30{10}
    300 bias ˆδ 0.1497{9} 0.0474{7} 0.0327{3.5} 0.0323{1.5} 0.0323{1.5} 0.0327{3.5} 0.0511{8} 0.0426{5.5} 0.0426{5.5} 0.214{11} 0.1883{10}
    MSE ˆδ 0.0736{9} 0.0199{7} 0.0017{2.5} 0.0017{2.5} 0.0017{2.5} 0.0017{2.5} 0.0223{8} 0.0133{6} 0.0112{5} 0.1135{11} 0.108{10}
    MRE ˆδ 0.2496{9} 0.0791{7} 0.0545{3.5} 0.0539{1.5} 0.0539{1.5} 0.0545{3.5} 0.0852{8} 0.071{5.5} 0.071{5.5} 0.3567{11} 0.3139{10}
    Ranks 27{9} 21{7} 9.5{3.5} 5.5{1.5} 5.5{1.5} 9.5{3.5} 24{8} 17{6} 16{5} 33{11} 30{10}
    450 bias ˆδ 0.1307{9} 0.0459{8} 0.0214{1} 0.0223{4} 0.022{3} 0.0219{2} 0.0336{6} 0.0359{7} 0.0289{5} 0.1672{11} 0.1428{10}
    MSE ˆδ 0.0693{9} 0.0304{8} 0.0007{1} 0.0008{3} 0.0008{3} 0.0008{3} 0.0136{6} 0.0193{7} 0.0082{5} 0.0865{11} 0.0754{10}
    MRE ˆδ 0.2179{9} 0.0765{8} 0.0356{1} 0.0371{4} 0.0367{3} 0.0365{2} 0.056{6} 0.0599{7} 0.0481{5} 0.2786{11} 0.238{10}
    Ranks 23{9} 20{8} 10{1} 18{7} 16{5} 14{3.5} 14{3.5} 17{6} 11{2} 29{11} 26{10}

    Table 5.  Bias, MSE, and MRE values for (δ=1.0) under SRS.
    m Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 bias δ 0.7122{6} 0.6873{3} 0.7245{7} 0.7082{5} 0.6979{4} 0.6344{1} 0.6844{2} 0.8253{10} 0.8014{8} 0.8066{9} 0.8412{11}
    MSE δ 0.8474{6} 0.7629{3} 0.8495{7} 0.7981{4} 0.8277{5} 0.6614{1} 0.7295{2} 1.1297{9} 1.1745{10} 1.3548{11} 1.1033{8}
    MRE δ 0.7122{6} 0.6873{3} 0.7245{7} 0.7082{5} 0.6979{4} 0.6344{1} 0.6844{2} 0.8253{10} 0.8014{8} 0.8066{9} 0.8412{11}
    Ranks 18{6} 9{3} 21{7} 14{5} 13{4} 3{1} 6{2} 29{9.5} 26{8} 29{9.5} 30{11}
    50 bias δ 0.4803{6} 0.506{9} 0.4083{3} 0.3938{2} 0.412{4} 0.3902{1} 0.5212{10} 0.484{7} 0.4515{5} 0.5029{8} 0.5381{11}
    MSE δ 0.4336{7} 0.4738{9} 0.2674{3} 0.2539{2} 0.2684{4} 0.2452{1} 0.5006{10} 0.4405{8} 0.3571{5} 0.4333{6} 0.5013{11}
    MRE δ 0.4803{6} 0.506{9} 0.4083{3} 0.3938{2} 0.412{4} 0.3902{1} 0.5212{10} 0.484{7} 0.4515{5} 0.5029{8} 0.5381{11}
    Ranks 19{6} 27{9} 9{3} 6{2} 12{4} 3{1} 30{10} 22{7.5} 15{5} 22{7.5} 33{11}
    120 bias δ 0.3303{7} 0.356{8} 0.3065{4} 0.2456{1} 0.2842{3} 0.2541{2} 0.3263{6} 0.3208{5} 0.3763{10} 0.3653{9} 0.394{11}
    MSE δ 0.2801{8} 0.3012{9} 0.2087{4} 0.1004{1} 0.1845{3} 0.1067{2} 0.2702{6} 0.26{5} 0.3326{10} 0.2761{7} 0.3575{11}
    MRE δ 0.3303{7} 0.356{8} 0.3065{4} 0.2456{1} 0.2842{3} 0.2541{2} 0.3263{6} 0.3208{5} 0.3763{10} 0.3653{9} 0.394{11}
    Ranks 22{7} 25{8.5} 12{4} 3{1} 9{3} 6{2} 18{6} 15{5} 30{10} 25{8.5} 33{11}
    200 bias δ 0.2903{8} 0.319{10} 0.2642{3} 0.1795{1} 0.2686{5} 0.1896{2} 0.2804{7} 0.2646{4} 0.2985{9} 0.2799{6} 0.3676{11}
    MSE δ 0.2652{9} 0.3161{10} 0.2133{4} 0.0533{1} 0.2146{5} 0.0587{2} 0.2464{7} 0.2208{6} 0.2651{8} 0.1644{3} 0.3641{11}
    MRE δ 0.2903{8} 0.319{10} 0.2642{3} 0.1795{1} 0.2686{5} 0.1896{2} 0.2804{7} 0.2646{4} 0.2985{9} 0.2799{6} 0.3676{11}
    Ranks 25{8} 30{10} 10{3} 3{1} 15{5.5} 6{2} 21{7} 14{4} 26{9} 15{5.5} 33{11}
    300 bias δ 0.2353{6} 0.2451{9} 0.2423{8} 0.1477{1} 0.2386{7} 0.1489{2} 0.2484{10} 0.2341{5} 0.2254{3} 0.2319{4} 0.2941{11}
    MSE δ 0.2032{6} 0.2231{9} 0.2123{8} 0.0349{1} 0.2096{7} 0.0354{2} 0.2314{10} 0.1956{5} 0.175{4} 0.1323{3} 0.2854{11}
    MRE δ 0.2353{6} 0.2451{9} 0.2423{8} 0.1477{1} 0.2386{7} 0.1489{2} 0.2484{10} 0.2341{5} 0.2254{3} 0.2319{4} 0.2941{11}
    Ranks 18{6} 27{9} 24{8} 3{1} 21{7} 6{2} 30{10} 15{5} 10{3} 11{4} 33{11}
    450 bias δ 0.2014{7} 0.2114{10} 0.1867{6} 0.1257{2} 0.1797{3} 0.1243{1} 0.211{9} 0.2015{8} 0.1802{4} 0.1845{5} 0.2343{11}
    MSE δ 0.1805{8} 0.2013{10} 0.1525{6} 0.0253{2} 0.1369{5} 0.0241{1} 0.1993{9} 0.176{7} 0.1145{4} 0.0973{3} 0.215{11}
    MRE δ 0.2014{7} 0.2114{10} 0.1867{6} 0.1257{2} 0.1797{3} 0.1243{1} 0.211{9} 0.2015{8} 0.1802{4} 0.1845{5} 0.2343{11}
    Ranks 22{7} 30{10} 18{6} 6{2} 11{3} 3{1} 27{9} 23{8} 12{4} 13{5} 33{11}

    Table 6.  Bias, MSE, and MRE values for (δ=1.0) under RSS.
    m Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 bias ˆδ 0.5275{6} 0.5141{2} 0.5221{3} 0.5273{5} 0.5268{4} 0.4992{1} 0.5335{7} 0.614{9} 0.5662{8} 0.6926{11} 0.6648{10}
    MSE ˆδ 0.5039{7} 0.4254{2} 0.4337{5} 0.4281{3} 0.4289{4} 0.3892{1} 0.4383{6} 0.6202{9} 0.5341{8} 1.147{11} 0.6721{10}
    MRE ˆδ 0.5275{6} 0.5141{2} 0.5221{3} 0.5273{5} 0.5268{4} 0.4992{1} 0.5335{7} 0.614{9} 0.5662{8} 0.6926{11} 0.6648{10}
    Ranks 19{6} 6{2} 11{3} 13{5} 12{4} 3{1} 20{7} 27{9} 24{8} 33{11} 30{10}
    50 bias ˆδ 0.3107{9} 0.235{7} 0.1668{2} 0.1707{3} 0.1729{4} 0.1579{1} 0.2146{6} 0.238{8} 0.1894{5} 0.3776{10} 0.416{11}
    MSE ˆδ 0.2806{10} 0.1836{7} 0.0433{2} 0.0468{3} 0.0486{4} 0.0388{1} 0.1309{6} 0.1993{8} 0.0699{5} 0.2681{9} 0.3842{11}
    MRE ˆδ 0.3107{9} 0.235{7} 0.1668{2} 0.1707{3} 0.1729{4} 0.1579{1} 0.2146{6} 0.238{8} 0.1894{5} 0.3776{10} 0.416{11}
    Ranks 28{9} 21{7} 6{2} 9{3} 12{4} 3{1} 18{6} 24{8} 15{5} 29{10} 33{11}
    120 bias ˆδ 0.2243{9} 0.0964{7} 0.0706{3} 0.0681{1} 0.0713{4} 0.0699{2} 0.0799{6} 0.114{8} 0.0776{5} 0.251{11} 0.2387{10}
    MSE ˆδ 0.2005{10} 0.061{7} 0.0077{2} 0.0074{1} 0.008{4} 0.0078{3} 0.0229{6} 0.098{8} 0.0142{5} 0.1692{9} 0.2032{11}
    MRE ˆδ 0.2243{9} 0.0964{7} 0.0706{3} 0.0681{1} 0.0713{4} 0.0699{2} 0.0799{6} 0.114{8} 0.0776{5} 0.251{11} 0.2387{10}
    Ranks 28{9} 21{7} 8{3} 3{1} 12{4} 7{2} 18{6} 24{8} 15{5} 31{10.5} 31{10.5}
    200 bias ˆδ 0.1698{11} 0.0626{7} 0.0417{1} 0.0426{4} 0.0421{3} 0.0418{2} 0.0448{6} 0.0739{8} 0.0441{5} 0.1689{9} 0.1696{10}
    MSE ˆδ 0.1455{11} 0.0457{7} 0.0027{1} 0.0029{3.5} 0.0028{2} 0.0029{3.5} 0.0032{6} 0.0701{8} 0.003{5} 0.0785{9} 0.1166{10}
    MRE ˆδ 0.1698{11} 0.0626{7} 0.0417{1} 0.0426{4} 0.0421{3} 0.0418{2} 0.0448{6} 0.0739{8} 0.0441{5} 0.1689{9} 0.1696{10}
    Ranks 33{11} 21{7} 3{1} 11.5{4} 8{3} 7.5{2} 18{6} 24{8} 15{5} 27{9} 30{10}
    300 bias ˆδ 0.1399{9} 0.0433{8} 0.0279{1} 0.0289{4} 0.0284{2} 0.0287{3} 0.0293{5} 0.0406{7} 0.0294{6} 0.1418{11} 0.1411{10}
    MSE ˆδ 0.118{11} 0.031{8} 0.0012{1} 0.0013{4} 0.0013{4} 0.0013{4} 0.0013{4} 0.0283{7} 0.0013{4} 0.0708{9} 0.1043{10}
    MRE ˆδ 0.1399{9} 0.0433{8} 0.0279{1} 0.0289{4} 0.0284{2} 0.0287{3} 0.0293{5} 0.0406{7} 0.0294{6} 0.1418{11} 0.1411{10}
    Ranks 29{9} 24{8} 3{1} 12{4} 8{2} 10{3} 14{5} 21{7} 16{6} 31{11} 30{10}
    450 bias ˆδ 0.1036{9} 0.0221{7} 0.0192{3} 0.019{2} 0.0187{1} 0.0195{4} 0.0196{5} 0.0261{8} 0.0211{6} 0.113{10} 0.1484{11}
    MSE ˆδ 0.0744{10} 0.0081{7} 0.0006{3.5} 0.0006{3.5} 0.0005{1} 0.0006{3.5} 0.0006{3.5} 0.0159{8} 0.0007{6} 0.0457{9} 0.1471{11}
    MRE ˆδ 0.1036{9} 0.0221{7} 0.0192{3} 0.019{2} 0.0187{1} 0.0195{4} 0.0196{5} 0.0261{8} 0.0211{6} 0.113{10} 0.1484{11}
    Ranks 22{8} 15{4} 14.5{3} 12.5{2} 8{1} 16.5{5} 18.5{7} 18{6} 23{9.5} 23{9.5} 27{11}

    Table 7.  Bias, MSE, and MRE values for (δ=1.5) under SRS.
    m Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 bias δ 0.8259{6} 0.7838{3} 0.8036{5} 0.7894{4} 0.8341{7} 0.7285{1} 0.7378{2} 0.9492{9} 0.8732{8} 1.0216{11} 0.9697{10}
    MSE δ 1.2727{7} 1.0399{3} 1.0808{4} 1.0933{5} 1.202{6} 0.8775{1} 0.9547{2} 1.6659{10} 1.4583{8} 2.1721{11} 1.6264{9}
    MRE δ 0.5506{6} 0.5225{3} 0.5358{5} 0.5263{4} 0.5561{7} 0.4857{1} 0.4919{2} 0.6328{9} 0.5821{8} 0.681{11} 0.6465{10}
    Ranks 19{6} 9{3} 14{5} 13{4} 20{7} 3{1} 6{2} 28{9} 24{8} 33{11} 29{10}
    50 bias δ 0.5183{7} 0.586{10} 0.4{2} 0.4047{3} 0.4139{4} 0.3985{1} 0.5567{9} 0.5348{8} 0.489{5} 0.5173{6} 0.6337{11}
    MSE δ 0.6396{7} 0.8036{10} 0.2791{2} 0.2805{3} 0.288{4} 0.2613{1} 0.7203{9} 0.6572{8} 0.4544{5} 0.5237{6} 0.8491{11}
    MRE δ 0.3455{7} 0.3907{10} 0.2666{2} 0.2698{3} 0.276{4} 0.2656{1} 0.3711{9} 0.3565{8} 0.326{5} 0.3449{6} 0.4225{11}
    Ranks 21{7} 30{10} 6{2} 9{3} 12{4} 3{1} 27{9} 24{8} 15{5} 18{6} 33{11}
    120 bias δ 0.3961{10} 0.355{5} 0.2711{3} 0.2496{1} 0.2806{4} 0.2575{2} 0.393{9} 0.3723{7} 0.3904{8} 0.3606{6} 0.4223{11}
    MSE δ 0.5109{10} 0.4097{6} 0.1418{3} 0.1005{1} 0.1596{4} 0.1041{2} 0.5078{9} 0.4587{7} 0.4643{8} 0.2852{5} 0.5133{11}
    MRE δ 0.2641{10} 0.2367{5} 0.1807{3} 0.1664{1} 0.1871{4} 0.1717{2} 0.262{9} 0.2482{7} 0.2603{8} 0.2404{6} 0.2815{11}
    Ranks 30{10} 16{5} 9{3} 3{1} 12{4} 6{2} 27{9} 21{7} 24{8} 17{6} 33{11}
    200 bias δ 0.2728{6} 0.2857{7} 0.2125{4} 0.1934{1} 0.2094{3} 0.1975{2} 0.2908{8} 0.3021{9} 0.3034{10} 0.2703{5} 0.3321{11}
    MSE δ 0.2776{6} 0.3128{7} 0.1143{4} 0.0596{1} 0.1093{3} 0.0614{2} 0.3489{8} 0.3503{9} 0.3585{10} 0.1747{5} 0.4016{11}
    MRE δ 0.1819{6} 0.1905{7} 0.1417{4} 0.1289{1} 0.1396{3} 0.1316{2} 0.1939{8} 0.2014{9} 0.2023{10} 0.1802{5} 0.2214{11}
    Ranks 18{6} 21{7} 12{4} 3{1} 9{3} 6{2} 24{8} 27{9} 30{10} 15{5} 33{11}
    300 bias δ 0.2244{5} 0.2549{8} 0.1945{4} 0.1567{1} 0.1871{3} 0.1656{2} 0.2356{6} 0.2598{10} 0.2597{9} 0.2369{7} 0.2841{11}
    MSE δ 0.2361{6} 0.3395{10} 0.1524{4} 0.0393{1} 0.1275{3} 0.0433{2} 0.257{7} 0.3273{9} 0.3211{8} 0.1709{5} 0.3588{11}
    MRE δ 0.1496{5} 0.1699{8} 0.1297{4} 0.1045{1} 0.1248{3} 0.1104{2} 0.1571{6} 0.1732{10} 0.1731{9} 0.1579{7} 0.1894{11}
    Ranks 16{5} 26{8.5} 12{4} 3{1} 9{3} 6{2} 19{6.5} 29{10} 26{8.5} 19{6.5} 33{11}
    450 bias δ 0.1999{8} 0.1892{7} 0.1614{3} 0.1295{2} 0.1625{4} 0.1291{1} 0.2165{10} 0.1861{6} 0.2033{9} 0.1765{5} 0.2949{11}
    MSE δ 0.2334{9} 0.222{7} 0.13{5} 0.0268{2} 0.1241{4} 0.0263{1} 0.2636{10} 0.2028{6} 0.2307{8} 0.0745{3} 0.4434{11}
    MRE δ 0.1333{8} 0.1262{7} 0.1076{3} 0.0864{2} 0.1083{4} 0.086{1} 0.1443{10} 0.124{6} 0.1355{9} 0.1177{5} 0.1966{11}
    Ranks 25{8} 21{7} 11{3} 6{2} 12{4} 3{1} 30{10} 18{6} 26{9} 13{5} 33{11}

    Table 8.  Bias, MSE, and MRE values for (δ=1.5) under RSS.
    m Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 bias ˆδ 0.5613{5} 0.5416{3} 0.5415{2} 0.5732{7} 0.5575{4} 0.512{1} 0.5678{6} 0.7597{10} 0.6059{8} 0.7398{9} 0.7922{11}
    MSE ˆδ 0.7281{7} 0.5044{2} 0.5163{3} 0.5459{6} 0.5328{4} 0.4343{1} 0.5351{5} 1.1729{11} 0.7324{8} 0.9953{9} 1.083{10}
    MRE ˆδ 0.3742{5} 0.3611{3} 0.361{2} 0.3821{7} 0.3717{4} 0.3413{1} 0.3785{6} 0.5065{10} 0.404{8} 0.4932{9} 0.5281{11}
    Ranks 17{5.5} 8{3} 7{2} 20{7} 12{4} 3{1} 17{5.5} 31{10} 24{8} 27{9} 32{11}
    50 bias ˆδ 0.3325{9} 0.1833{6} 0.1701{2} 0.1707{3} 0.1671{1} 0.1759{4} 0.1762{5} 0.2899{8} 0.1844{7} 0.3829{10} 0.4402{11}
    MSE ˆδ 0.4045{10} 0.1073{7} 0.0447{1} 0.0466{3} 0.0463{2} 0.0489{4} 0.0492{5} 0.4029{9} 0.0559{6} 0.3246{8} 0.5328{11}
    MRE ˆδ 0.2216{9} 0.1222{6} 0.1134{2} 0.1138{3} 0.1114{1} 0.1172{4} 0.1175{5} 0.1933{8} 0.123{7} 0.2552{10} 0.2935{11}
    Ranks 28{9.5} 19{6} 5{2} 9{3} 4{1} 12{4} 15{5} 25{8} 20{7} 28{9.5} 33{11}
    120 bias ˆδ 0.1905{9} 0.085{7} 0.0726{2} 0.0724{1} 0.0746{4} 0.0735{3} 0.0763{5} 0.1106{8} 0.0801{6} 0.2246{10} 0.256{11}
    MSE ˆδ 0.1833{10} 0.0383{7} 0.0084{2.5} 0.0083{1} 0.0086{4} 0.0084{2.5} 0.0093{5} 0.1197{8} 0.01{6} 0.1394{9} 0.3088{11}
    MRE ˆδ 0.127{9} 0.0567{7} 0.0484{2} 0.0483{1} 0.0498{4} 0.049{3} 0.0509{5} 0.0737{8} 0.0534{6} 0.1497{10} 0.1707{11}
    Ranks 28{9} 21{7} 6.5{2} 3{1} 12{4} 8.5{3} 15{5} 24{8} 18{6} 29{10} 33{11}
    200 bias ˆδ 0.1872{10} 0.0448{2} 0.0431{1} 0.0457{3} 0.0462{4} 0.0472{6} 0.0469{5} 0.0762{8} 0.0488{7} 0.1621{9} 0.2017{11}
    MSE ˆδ 0.2301{10} 0.0031{2} 0.003{1} 0.0034{4} 0.0033{3} 0.0035{5.5} 0.0035{5.5} 0.0961{9} 0.0038{7} 0.0727{8} 0.2481{11}
    MRE ˆδ 0.1248{10}

    Table 9.  Bias, MSE, and MRE values for under SRS.
    Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE

    Table 10.  Bias, MSE, and MRE values for under RSS.
    Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE

    Table 11.  Bias, MSE, and MRE values for under SRS.
    Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE

    Table 12.  Bias, MSE, and MRE values for under RSS.
    Measure Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE


    ● The ratio of the MSE for SRS to the MSE for RSS is provided in Table 13. This ratio helps to gauge the comparative performance of SRS and RSS in terms of MSE, offering insights into the efficiency of these sampling methods.
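The ratios reported in Table 13 can be approximated by a small Monte Carlo experiment. The sketch below is an illustration, not the paper's code: it uses a standard normal population as a stand-in for the AUD, the sample mean as the estimator, and assumes perfect ranking within sets. A ratio above 1 indicates that RSS is the more efficient design.

```python
import numpy as np

rng = np.random.default_rng(1)

def srs_sample(n):
    return rng.normal(size=n)

def rss_sample(set_size, cycles):
    # One cycle measures the i-th order statistic of the i-th set,
    # for i = 1, ..., set_size (perfect ranking assumed).
    obs = []
    for _ in range(cycles):
        for i in range(set_size):
            obs.append(np.sort(rng.normal(size=set_size))[i])
    return np.array(obs)

def mc_mse(sampler, true_value=0.0, reps=2000):
    ests = np.array([sampler().mean() for _ in range(reps)])
    return float(np.mean((ests - true_value) ** 2))

# Both designs measure 25 units in total.
ratio = mc_mse(lambda: srs_sample(25)) / mc_mse(lambda: rss_sample(5, 5))
print(ratio)  # a ratio above 1 means RSS attains a smaller MSE
```

With larger set sizes the ratio grows, mirroring the pattern of increasing efficiency gains seen down the columns of Table 13.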

    Table 13.  Numerical values for MSE of SRS divided by MSE of RSS for all estimates.
    Estimate MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 2.65374 1.95696 1.92664 1.87934 1.89117 1.78528 1.90705 2.05639 1.66667 1.71257 1.63055
    50 2.11809 3.22118 3.27914 3.37684 3.16358 3.14356 3.07212 3.27358 3.27555 1.92975 2.00437
    120 2.00738 4.46400 4.08915 5.19403 4.27626 4.30435 4.14453 4.03383 4.40283 2.05997 2.23158
    200 1.88626 5.13816 5.01911 6.17094 4.81646 5.03822 4.63030 4.55030 5.02299 1.91600 2.11422
    300 1.85538 5.53571 5.71818 7.52055 5.42342 5.63303 5.20175 4.35915 5.81356 1.88395 1.98023
    450 1.78755 4.31579 6.43836 10.63415 6.58108 6.25676 5.94937 5.04124 6.63291 1.81306 1.87031
    15 2.10257 1.80127 1.76798 1.61167 1.73971 1.52413 1.90935 1.62599 1.91201 1.70299 1.79647
    50 1.89052 1.76432 3.68263 3.07065 3.24167 3.18102 2.83529 2.20078 3.44154 1.58018 1.59773
    120 1.07682 1.86474 11.84211 10.19200 12.10526 12.27184 2.53523 2.19143 4.59252 1.39825 1.48068
    200 1.81764 3.01890 36.92308 19.70732 31.36735 20.77500 4.07385 6.47079 6.98649 1.26058 1.53116
    300 2.03397 9.14573 94.00000 34.70588 96.00000 36.47059 5.66368 8.45113 10.03571 1.11630 1.59630
    450 2.02886 3.74671 174.57143 46.00000 154.00000 46.87500 10.81618 6.72539 15.39024 1.29133 1.92838
    15 1.68168 1.79337 1.95873 1.86428 1.92982 1.69938 1.66439 1.82151 2.19903 1.18117 1.64157
    50 1.54526 2.58061 6.17552 5.42521 5.52263 6.31959 3.82429 2.21024 5.10873 1.61619 1.30479
    120 1.39701 4.93770 27.10390 13.56757 23.06250 13.67949 11.79913 2.65306 23.42254 1.63180 1.75935
    200 1.82268 6.91685 79.00000 18.37931 76.64286 20.24138 77.00000 3.14979 88.36667 2.09427 3.12264
    300 1.72203 7.19677 176.91667 26.84615 161.23077 27.23077 178.00000 6.91166 134.61538 1.86864 2.73634
    450 2.42608 24.85185 254.16667 42.16667 273.80000 40.16667 332.16667 11.06918 163.57143 2.12910 1.46159
    15 1.74797 2.06166 2.09336 2.00275 2.25601 2.02049 1.78415 1.42033 1.99113 2.18236 1.50175
    50 1.58121 7.48928 6.24385 6.01931 6.22030 5.34356 14.64024 1.63117 8.12880 1.61337 1.59366
    120 2.78723 10.69713 16.88095 12.10843 18.55814 12.39286 54.60215 3.83208 46.43000 2.04591 1.66224
    200 1.20643 100.90323 38.10000 17.52941 33.12121 17.54286 99.68571 3.64516 94.34211 2.40303 1.61870
    300 1.14835 261.15385 101.60000 26.20000 91.07143 30.92857 171.33333 11.85870 214.06667 3.37081 1.96066
    450 1.17286 370.00000 185.71429 44.66667 177.28571 37.57143 376.57143 4.51670 329.57143 1.53292 3.26991
    15 1.44528 2.29860 2.09539 2.18214 2.07689 2.29155 2.03887 1.64293 2.29799 1.91037 1.63957
    50 1.43903 14.86357 5.76391 6.80919 5.80201 4.99078 8.62879 2.62247 5.78613 2.10575 2.86022
    120 2.46001 71.90291 12.75472 12.81308 12.75000 10.71875 40.80342 4.28986 26.82927 1.80279 2.28536
    200 2.09955 119.40000 22.78947 19.57500 20.07500 17.25532 113.30233 4.45243 78.47826 2.35818 1.64810
    300 1.68733 204.27778 35.72222 28.66667 36.16667 24.90909 238.05263 4.49545 215.35000 1.78456 1.02462
    450 1.12838 339.12500 65.75000 43.62500 88.25000 35.00000 411.22222 6.33101 510.00000 2.04043 1.31170
    15 2.00128 2.23642 2.06948 2.20063 2.33644 2.04769 2.01129 1.64312 2.07672 1.70544 1.73017
    50 0.96425 10.79043 6.12016 6.88961 6.11179 4.84410 5.91411 3.30459 5.17339 1.97101 4.20687
    120 1.10808 69.60317 12.03356 12.29932 12.63448 10.21622 20.62658 3.57039 13.23077 1.91167 5.99552
    200 2.74171 168.08163 17.58182 20.02000 18.98113 17.62500 79.14035 10.45211 32.37097 1.60561 2.77473
    300 4.41193 321.40909 29.21739 28.50000 26.19231 24.16129 124.92857 6.62520 134.44444 1.49507 2.44361
    450 3.86858 477.70000 42.30000 40.50000 46.80000 37.21429 396.08333 6.72133 307.61538 1.66000 1.82361


    ● For a thorough and detailed analysis of the estimates, we present both their partial and total ranks in Tables 14 and 15 for the SRS and RSS, respectively. These rank tables offer a more nuanced and comprehensive perspective on the performance and comparative effectiveness of each estimation approach, facilitating a deeper understanding of their relative strengths and weaknesses.
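The partial and total ranks in Tables 14 and 15 follow the usual rank-sum construction: rank the estimators within each scenario (ties receive average ranks, which is where entries such as 4.5 arise), sum the per-scenario ranks, and then rank the sums. A minimal sketch using three of the MSE columns from Table 2 (MLE, ADE, and MPSE at m = 50, 120, 200):

```python
import numpy as np
from scipy.stats import rankdata

# mse[i, j]: MSE of estimator j in scenario i.
# Rows: m = 50, 120, 200; columns: MLE, ADE, MPSE (values from Table 2).
mse = np.array([
    [0.0940, 0.0642, 0.0544],
    [0.0542, 0.0250, 0.0201],
    [0.0422, 0.0152, 0.0117],
])

partial = np.apply_along_axis(rankdata, 1, mse)  # within-scenario ranks
totals = partial.sum(axis=0)                     # rank sum per estimator
overall = rankdata(totals)                       # final ordering

print(totals)   # [9. 6. 3.]
print(overall)  # [3. 2. 1.] -> MPSE best, then ADE, then MLE
```

`rankdata` uses average ranks for ties by default, matching the half-integer partial ranks in the tables.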

    Table 14.  Partial and total rankings of all AUD estimate techniques by SRS for different parameter values.
    Parameter MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 5.0 4.0 8.0 1.0 9.0 3.0 2.0 7.0 11.0 10.0 6.0
    50 4.0 6.0 8.0 1.0 5.0 3.0 2.0 7.0 11.0 10.0 9.0
    120 7.0 8.0 2.0 1.0 6.0 4.0 3.0 5.0 9.0 11.0 10.0
    200 8.0 6.0 4.5 1.0 2.0 7.0 3.0 4.5 9.0 11.0 10.0
    300 4.0 6.0 8.0 1.0 3.0 5.0 2.0 7.0 9.0 11.0 10.0
    450 7.5 7.5 3.0 1.0 5.0 2.0 4.0 6.0 9.0 11.0 10.0
    15 5.0 4.0 8.0 2.0 6.0 1.0 3.0 7.0 10.0 11.0 9.0
    50 8.0 9.0 4.0 2.0 3.0 1.0 5.0 7.0 6.0 10.0 11.0
    120 7.0 5.0 3.0 2.0 4.0 1.0 8.0 6.0 11.0 10.0 9.0
    200 8.0 6.0 4.0 1.0 5.0 2.0 3.0 9.5 7.0 9.5 11.0
    300 6.0 10.0 8.0 1.0 9.0 2.0 5.0 3.0 4.0 7.0 11.0
    450 9.0 3.0 4.0 1.0 5.0 2.0 10.0 7.0 8.0 6.0 11.0
    15 6.0 3.0 7.0 5.0 4.0 1.0 2.0 9.5 8.0 9.5 11.0
    50 6.0 9.0 3.0 2.0 4.0 1.0 10.0 7.5 5.0 7.5 11.0
    120 7.0 8.5 4.0 1.0 3.0 2.0 6.0 5.0 10.0 8.5 11.0
    200 8.0 10.0 3.0 1.0 5.5 2.0 7.0 4.0 9.0 5.5 11.0
    300 6.0 9.0 8.0 1.0 7.0 2.0 10.0 5.0 3.0 4.0 11.0
    450 7.0 10.0 6.0 2.0 3.0 1.0 9.0 8.0 4.0 5.0 11.0
    15 6.0 3.0 5.0 4.0 7.0 1.0 2.0 9.0 8.0 11.0 10.0
    50 7.0 10.0 2.0 3.0 4.0 1.0 9.0 8.0 5.0 6.0 11.0
    120 10.0 5.0 3.0 1.0 4.0 2.0 9.0 7.0 8.0 6.0 11.0
    200 6.0 7.0 4.0 1.0 3.0 2.0 8.0 9.0 10.0 5.0 11.0
    300 5.0 8.5 4.0 1.0 3.0 2.0 6.5 10.0 8.5 6.5 11.0
    450 8.0 7.0 3.0 2.0 4.0 1.0 10.0 6.0 9.0 5.0 11.0
    15 3.0 4.0 5.0 7.0 6.0 1.0 2.0 11.0 8.0 10.0 9.0
    50 7.0 9.0 2.0 4.0 3.0 1.0 6.0 10.0 5.0 8.0 11.0
    120 8.0 9.0 1.0 2.0 4.0 3.0 6.0 10.0 5.0 7.0 11.0
    200 8.0 9.0 3.0 1.0 2.0 4.0 10.0 7.0 6.0 5.0 11.0
    300 6.0 7.0 3.0 1.0 2.0 4.0 9.0 11.0 10.0 5.0 8.0
    450 7.0 6.0 3.0 1.0 4.0 2.0 9.5 8.0 11.0 5.0 9.5
    15 2.0 3.0 5.0 6.0 7.0 1.0 4.0 11.0 8.0 10.0 9.0
    50 7.0 8.0 2.0 4.0 5.0 1.0 3.0 11.0 6.0 9.0 10.0
    120 7.0 9.0 1.0 2.0 3.0 4.0 6.0 11.0 5.0 8.0 10.0
    200 10.0 8.0 1.0 3.0 2.0 4.0 6.0 9.0 5.0 7.0 11.0
    300 11.0 8.0 2.0 1.0 3.0 4.0 5.0 10.0 7.0 6.0 9.0
    450 9.0 7.0 2.0 1.0 3.0 4.0 8.0 11.0 6.0 5.0 10.0
    Ranks 245.5 251.5 146.5 72.0 157.5 84.0 213.0 284.0 273.5 282.0 366.5
    Overall Rank 6 7 3 1 4 2 5 10 8 9 11

    Table 15.  Partial and total rankings of all AUD estimate techniques by RSS for different parameter values.
    Parameter MLE ADE CME MPSE LSE PSE RADE WLSE LADE MSADE MSALDE
    15 1.0 5.0 7.0 2.0 8.0 4.0 3.0 6.0 11.0 10.0 9.0
    50 9.0 5.0 6.0 1.0 7.0 2.0 3.0 4.0 8.0 11.0 10.0
    120 9.0 2.0 4.0 1.0 5.0 3.0 6.0 7.0 8.0 11.0 10.0
    200 10.0 2.0 5.0 1.0 4.0 3.0 6.0 7.0 8.0 11.0 9.0
    300 9.0 4.0 3.0 1.0 5.0 2.0 6.0 8.0 7.0 11.0 10.0
    450 9.0 8.0 2.0 1.0 3.0 4.0 6.0 7.0 5.0 11.0 10.0
    15 1.0 3.0 7.0 5.0 6.0 2.0 4.0 8.0 9.0 11.0 10.0
    50 9.0 8.0 1.0 4.0 3.0 2.0 6.0 7.0 5.0 10.0 11.0
    120 10.0 8.0 2.0 4.0 3.0 1.0 7.0 6.0 5.0 11.0 9.0
    200 9.0 8.0 1.0 2.0 4.0 3.0 7.0 6.0 5.0 11.0 10.0
    300 9.0 7.0 3.5 1.5 1.5 3.5 8.0 6.0 5.0 11.0 10.0
    450 9.0 8.0 1.0 7.0 5.0 3.5 3.5 6.0 2.0 11.0 10.0
    15 6.0 2.0 3.0 5.0 4.0 1.0 7.0 9.0 8.0 11.0 10.0
    50 9.0 7.0 2.0 3.0 4.0 1.0 6.0 8.0 5.0 10.0 11.0
    120 9.0 7.0 3.0 1.0 4.0 2.0 6.0 8.0 5.0 10.5 10.5
    200 11.0 7.0 1.0 4.0 3.0 2.0 6.0 8.0 5.0 9.0 10.0
    300 9.0 8.0 1.0 4.0 2.0 3.0 5.0 7.0 6.0 11.0 10.0
    450 8.0 4.0 3.0 2.0 1.0 5.0 7.0 6.0 9.5 9.5 11.0
    15 5.5 3.0 2.0 7.0 4.0 1.0 5.5 10.0 8.0 9.0 11.0
    50 9.5 6.0 2.0 3.0 1.0 4.0 5.0 8.0 7.0 9.5 11.0
    120 9.0 7.0 2.0 1.0 4.0 3.0 5.0 8.0 6.0 10.0 11.0
    200 10.0 2.0 1.0 3.0 4.0 6.0 5.0 8.0 7.0 9.0 11.0
    300 10.0 1.0 4.0 5.0 2.0 3.0 6.5 8.0 6.5 9.0 11.0
    450 11.0 2.0 8.0 1.0 3.5 3.5 9.5 5.0 6.0 7.0 9.5
    15 5.0 2.0 4.0 7.0 6.0 1.0 3.0 10.0 8.0 11.0 9.0
    50 9.0 4.0 2.0 1.0 3.0 5.0 6.0 8.0 7.0 10.5 10.5
    120 9.0 1.0 2.0 3.0 4.0 7.0 5.0 8.0 6.0 10.0 11.0
    200 9.5 3.0 1.0 2.0 4.0 7.0 5.0 8.0 6.0 9.5 11.0
    300 9.5 2.5 2.5 1.0 4.0 7.0 5.0 8.0 6.0 9.5 11.0
    450 10.0 2.0 5.0 1.0 3.0 4.0 8.0 6.0 9.0 7.0 11.0
    15 1.5 1.5 6.0 7.0 5.0 3.0 4.0 10.0 8.0 11.0 9.0
    50 10.5 2.0 3.0 1.0 5.0 6.0 4.0 8.5 7.0 10.5 8.5
    120 11.0 1.0 3.0 4.0 2.0 7.0 5.0 8.5 6.0 10.0 8.5
    200 9.5 2.0 4.0 1.0 3.0 7.0 5.0 8.0 6.0 9.5 11.0
    300 9.5 1.5 3.0 1.5 4.0 7.0 6.0 8.0 5.0 9.5 11.0
    450 9.5 1.5 3.0 1.5 4.0 7.0 5.0 8.0 6.0 9.5 11.0
    Ranks 304.5 148.0 113.0 100.5 138.0 135.5 200.0 270.0 237.0 362.0 367.5
    Overall Rank 9 5 2 1 4 3 6 8 7 10 11


    Upon careful analysis of the simulation results and the rankings presented in the tables, several conclusions can be drawn:

    ● It is noteworthy that for both SRS and RSS datasets, our model estimates exhibit the consistency property. This property implies that as the sample size increases, the estimates converge to the true parameter values.

    ● All three measures used exhibit a consistent trend: They decrease as the sample size increases. This pattern suggests that larger sample sizes lead to more accurate and precise parameter estimates.

    ● Based on the simulation results for both the SRS and RSS datasets, the MPSE appears to provide the most accurate estimates overall.

    ● From Table 13, it can be observed that the estimates obtained from the RSS datasets are more efficient compared to those obtained from the SRS datasets. This suggests that RSS is a more efficient sampling method in terms of producing estimates with lower MSE.
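The consistency pattern noted in the first two bullets is easy to reproduce in miniature. The sketch below is illustrative only: it uses an exponential population and the rate MLE 1/x̄ rather than the AUD, and shows the Monte Carlo MSE shrinking as the sample size grows through some of the same m values used in the simulations.

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_mse_rate(n, rate=1.5, reps=3000):
    # The MLE of an exponential rate parameter is 1 / sample mean.
    x = rng.exponential(scale=1 / rate, size=(reps, n))
    est = 1 / x.mean(axis=1)
    return float(np.mean((est - rate) ** 2))

mses = [mc_mse_rate(n) for n in (15, 50, 120, 450)]
# MSE decreases monotonically as the sample size increases,
# the empirical signature of a consistent estimator.
assert all(a > b for a, b in zip(mses, mses[1:]))
```

The same check applied to bias and MRE reproduces the monotone decline visible down each column of Tables 2 through 8.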

    To highlight the practical utility of the proposed estimation methods, a real dataset was selected and is analyzed in this section. The aim is to demonstrate how the proposed estimation techniques can be applied to real-world data, showcasing their effectiveness and relevance in practical research and decision-making contexts. The dataset describes a firm's risk-management cost-effectiveness and was previously studied by Abd El-Bar et al. [48]. Its values are: 0.0432, 0.1271, 0.793, 0.0407, 0.0076, 0.037, 0.18, 0.1129, 0.09, 0.0535, 0.0783, 0.0093, 0.0851, 0.1753, 0.0036, 0.1597, 0.002, 0.1357, 0.0215, 0.0065, 0.079, 0.0329, 0.0458, 0.1192, 0.0431, 0.1245, 0.0255, 0.1396, 0.0122, 0.15, 0.14, 0.0529, 0.2222, 0.0315, 0.0389, 0.0297, 0.0608, 0.1833, 0.0279, 0.0694, 0.15, 0.0818, 0.2912, 0.1261, 0.0931, 0.0216, 0.0525, 0.1938, 0.0433, 0.0351, 0.0629, 0.0125, 0.0571, 0.0094, 0.0885, 0.0411, 0.004, 0.0582, 0.2172, 0.0434, 0.0509, 0.65, 0.0913, 0.1, 0.0375, 0.2886, 0.0206, 0.0028, 0.0407, 0.0849, 0.0612, 0.1333, 0.9755.

    Table 16 provides a comprehensive summary of the descriptive analyses performed on the dataset under investigation. Figure 2 displays various graphical representations, including histograms, kernel density plots, violin plots, box plots, total time on test (TTT) plots, and quantile-quantile (Q-Q) plots. These visualizations and descriptive statistics collectively offer insights into the characteristics and distribution of the data, enhancing our understanding of the dataset's key features and patterns. The dataset was subjected to a Kolmogorov-Smirnov (KS) test to assess its compatibility with the AUD, with the MLE used to obtain the parameter estimates. The KS distance (KSD) was computed to be 0.0812933, and the p-value (KSP) was found to be 0.720254. Based on these results, the AUD is a suitable candidate for fitting the firm's real dataset. To visually demonstrate this suitability, Figure 3 presents various graphical representations, including the probability-probability (P-P) plot, the estimated CDF, the estimated survival function (SF), and a histogram with the estimated PDF. These visualizations align well with the distributional characteristics of the model, further supporting the AUD as a suitable choice for modeling the firm's real dataset.
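Most of the summary figures in Table 16 can be reproduced directly from the listed observations with the standard library alone (skewness and kurtosis are omitted here because their reported values depend on the convention used to compute them):

```python
import statistics

# The 73 firm risk-management observations listed above.
data = [
    0.0432, 0.1271, 0.793, 0.0407, 0.0076, 0.037, 0.18, 0.1129, 0.09,
    0.0535, 0.0783, 0.0093, 0.0851, 0.1753, 0.0036, 0.1597, 0.002,
    0.1357, 0.0215, 0.0065, 0.079, 0.0329, 0.0458, 0.1192, 0.0431,
    0.1245, 0.0255, 0.1396, 0.0122, 0.15, 0.14, 0.0529, 0.2222, 0.0315,
    0.0389, 0.0297, 0.0608, 0.1833, 0.0279, 0.0694, 0.15, 0.0818,
    0.2912, 0.1261, 0.0931, 0.0216, 0.0525, 0.1938, 0.0433, 0.0351,
    0.0629, 0.0125, 0.0571, 0.0094, 0.0885, 0.0411, 0.004, 0.0582,
    0.2172, 0.0434, 0.0509, 0.65, 0.0913, 0.1, 0.0375, 0.2886, 0.0206,
    0.0028, 0.0407, 0.0849, 0.0612, 0.1333, 0.9755,
]

n = len(data)                        # 73 observations
mean = sum(data) / n                 # ~0.109733 (Table 16)
median = statistics.median(data)     # 0.0608
value_range = max(data) - min(data)  # 0.9735 (Table 16 range)
print(n, round(mean, 6), median, value_range)
```

The strong right skew visible in the summary (mean well above the median, maximum 0.9755 against a median of 0.0608) is what the histogram and box plot in Figure 2 depict graphically.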

    Table 16.  Descriptive analyses of the firm's real dataset.
    n Mean Median Skewness Kurtosis Range Minimum Maximum Sum
    data 73 0.109733 0.0608 3.71542 17.9579 0.9735 0.002 0.9755 8.0105

    Figure 2.  Some plots for the firm's real dataset.
    Figure 3.  P-P plot, estimated CDF, estimated SF, and histogram with the estimated PDF for the AUD.

    Based on the theoretical findings discussed earlier, the dataset was examined using two sampling techniques, SRS and RSS. Tables 17 and 18 present the SRS and RSS estimates, respectively, derived from the AUD for different sample sizes over five cycles, employing various estimation techniques. The RSS and SRS observations were generated using R. Together, these tables allow a comprehensive comparison of the sampling methods and estimation procedures. To demonstrate the superiority of RSS over SRS across the various estimation methods, we evaluated several goodness-of-fit statistics for the model: the Anderson-Darling test statistic (ADTS), the Cramér-von Mises test statistic (CMTS), and the Kolmogorov-Smirnov test statistic (KSTS), together with the corresponding KS p-values (KSP). These measures assess how well the data conform to the model, and their results indicate how effectively RSS, compared to SRS, captures the underlying distribution of the dataset. Estimates that outperform their counterparts display larger p-values (greater than 5%) and smaller goodness-of-fit statistics. Table 19 compares the SRS and RSS designs in terms of their goodness-of-fit values and KSPs, which helps identify the design and estimation techniques that yield the better fit. The fit of the model to the dataset can be observed in Figures 4 and 5. Notably, the RSS design outperforms the SRS design in terms of efficiency, as evidenced by the smaller goodness-of-fit values and the correspondingly larger KSPs. This superiority holds across all estimates, even when the same number of measurement units is considered. These findings underscore the advantages of RSS over SRS in fitting the dataset to the model and obtaining more efficient estimates.
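The RSS scheme underlying Tables 17 and 18 can be sketched as follows. This is a minimal illustration assuming perfect ranking, with a Uniform(0, 1) population standing in for the AUD; the set size and cycle count shown are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def srs(draw, n):
    """Simple random sample: measure n units directly."""
    return draw(n)

def rss(draw, set_size, cycles):
    """Ranked set sample: in each cycle, draw `set_size` sets of
    `set_size` units each; from the i-th set, measure only the i-th
    order statistic (perfect ranking is assumed)."""
    out = []
    for _ in range(cycles):
        for i in range(set_size):
            ordered = np.sort(draw(set_size))
            out.append(ordered[i])
    return np.asarray(out)

# Uniform(0, 1) stands in for the AUD population here.
draw = lambda k: rng.uniform(0.0, 1.0, k)

x_srs = srs(draw, 20)                    # n = 20 measured units
x_rss = rss(draw, set_size=4, cycles=5)  # also n = 4 * 5 = 20 measured units
```

For the same number of measured units, the RSS sample typically yields estimators with smaller variance than the SRS sample, which is the source of the efficiency gains reported above.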

    Table 17.  Values of AUD estimate for various estimating techniques using the SRS dataset.
    n    MLE      ADE      CME      MPSE     LSE      PSE      RADE     WLSE     LADE     MSADE    MSALDE
    20   14.2997  14.4682  14.3253  14.2344  14.2934  7.22796  13.5736  14.591   15.5268  23.1179  14.2344
    35   18.2715  18.3864  18.1745  18.2414  18.1516  51.3402  20.4428  19.3072  16.4006  15.145   17.8123
    50   14.7089  14.8449  14.6641  14.6879  14.6491  16.5     15.777   15.078   13.8907  13.9479  15.4683
    65   16.1061  16.1716  16.031   16.0896  16.0229  16.2323  17.0406  16.2854  15.298   14.9776  15.7568

    Table 18.  Values of AUD estimate for various estimating techniques using the RSS dataset with set size .
    n    MLE      ADE      CME      MPSE     LSE      PSE      RADE     WLSE     LADE     MSADE    MSALDE
    20   14.2503  14.2662  13.8488  13.6955  13.7732  7.56834  13.3422  14.3021  15.4774  13.5672  17.3995
    35   12.5203  12.5202  12.3418  12.2565  12.3128  12.4091  13.0535  12.8861  11.94    14.5198  17.4009
    50   17.1027  17.0984  16.9476  16.9634  16.9351  15.6024  17.708   17.2244  16.471   18.3951  14.3959
    65   14.4861  14.4806  14.3786  14.4454  14.3728  14.4125  15.1468  14.4975  13.8117  13.7231  6.63895

    Table 19.  The estimates, KSTS, ADTS, CMTS, and KSP in the SRS and RSS designs for the dataset at .
    Method  Design  Estimate  ADTS      CMTS       KSTS       KSP
    MLE     SRS     14.7089   0.761824  0.114911   0.117979   0.489566
    MLE     RSS     17.1027   0.387748  0.0472457  0.0751288  0.940393
    ADE     SRS     14.8449   0.760685  0.115343   0.115999   0.511594
    ADE     RSS     17.0984   0.387747  0.0472331  0.0751671  0.940158
    CME     SRS     14.6641   0.762704  0.114883   0.118639   0.482331
    CME     RSS     16.9476   0.388744  0.047017   0.0765033  0.931604
    MPSE    SRS     14.6879   0.762205  0.114891   0.118288   0.48617
    MPSE    RSS     16.9634   0.388545  0.0470194  0.0763622  0.932538
    LSE     SRS     14.6491   0.763056  0.114886   0.118861   0.479908
    LSE     RSS     16.9351   0.388917  0.0470186  0.0766156  0.930856
    PSE     SRS     16.5      0.91112   0.157392   0.117831   0.491197
    PSE     RSS     15.6024   0.494113  0.0658479  0.0894908  0.818117
    RADE    SRS     15.777    0.810594  0.131257   0.105954   0.6285
    RADE    RSS     17.708    0.403364  0.052323   0.0830061  0.881087
    WLSE    SRS     15.078    0.76395   0.117256   0.112676   0.549461
    WLSE    RSS     17.2244   0.388432  0.0477398  0.0750625  0.940799
    LADE    SRS     13.8907   0.81995   0.123882   0.13058    0.361317
    LADE    RSS     16.471    0.405507  0.0492597  0.0808795  0.899158
    MSADE   SRS     13.9479   0.812856  0.12257    0.12966    0.369911
    MSADE   RSS     18.3951   0.455783  0.0655095  0.0940681  0.768166
    MSALDE  SRS     15.4683   0.783456  0.123611   0.107303   0.612458
    MSALDE  RSS     14.3959   0.762225  0.120147   0.106991   0.616163

    Figure 4.  Plots of the estimated PDFs of the AUD with histogram for the two sampling methods at .
    Figure 5.  Plots of the estimated CDFs of the AUD for the two sampling methods at .
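The KSTS and KSP values reported in Table 19 rest on the one-sample Kolmogorov-Smirnov test. The sketch below implements the KS statistic and its asymptotic p-value (using Stephens' small-sample correction); for simplicity it checks a correctly specified Uniform(0, 1) model rather than the fitted AUD, so the particular data and CDF used here are illustrative.

```python
import numpy as np

def ks_statistic(x, cdf):
    """One-sample Kolmogorov-Smirnov distance between data and a fitted CDF."""
    x = np.sort(np.asarray(x))
    n = len(x)
    u = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - u)   # empirical CDF above model
    d_minus = np.max(u - np.arange(0, n) / n)      # empirical CDF below model
    return max(d_plus, d_minus)

def ks_pvalue(d, n, terms=100):
    """Asymptotic p-value via the Kolmogorov series, with Stephens'
    small-sample correction factor."""
    t = (np.sqrt(n) + 0.12 + 0.11 / np.sqrt(n)) * d
    k = np.arange(1, terms + 1)
    p = 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * t) ** 2))
    return float(min(max(p, 0.0), 1.0))

rng = np.random.default_rng(0)
x = rng.uniform(size=200)          # data drawn from the hypothesized model
d = ks_statistic(x, lambda v: v)   # Uniform(0,1) CDF is the identity on [0,1]
p = ks_pvalue(d, len(x))           # a large p-value indicates no evidence of misfit
```

Applied to each fitted model in Table 19, a smaller statistic and larger p-value indicate a better fit, which is the criterion used to rank the SRS and RSS designs.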

    The arctan uniform distribution is a new bounded distribution that can be used to model a variety of bounded real-world datasets. RSS is a valuable strategy when accurately measuring an observation is difficult or expensive. In the present work, estimation of the arctan uniform distribution parameter was considered under both the RSS and SRS schemes, using several well-known conventional estimation techniques: ML, LS, WLS, CM, AD, RAD, MPS, PS, LAD, MSAD, and MSALD. A Monte Carlo simulation based on several accuracy measures was used to assess the performance of the resulting estimates. For both the SRS and RSS datasets, the simulations indicate that the MPS approach is preferable to the others in terms of the quality of the proposed estimates. All accuracy measures decline as the sample size grows, indicating that parameter estimates become more accurate and reliable with larger samples. Estimates derived from the RSS datasets are more reliable than those derived from the SRS datasets, showing that RSS yields estimates with a smaller mean squared error than SRS for the same number of measured units. The real data findings provide further evidence that the RSS design is superior to the SRS approach.
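The MPS principle singled out above chooses the parameter that maximizes the mean log of the spacings of the fitted CDF evaluated at the ordered data. The sketch below illustrates this for a single shape parameter; note that the CDF form F(x; a) = arctan(a x) / arctan(a) on [0, 1] is an assumption made purely for illustration and is not taken from the paper's definition of the AUD.

```python
import numpy as np

# Assumed illustrative CDF (not the paper's AUD definition):
# F(x; a) = arctan(a*x) / arctan(a), for 0 <= x <= 1 and a > 0.
def cdf(x, a):
    return np.arctan(a * x) / np.arctan(a)

def neg_mean_log_spacing(a, x):
    """MPS objective: minus the mean log of the CDF spacings
    D_i = F(x_(i)) - F(x_(i-1)), with F(x_(0)) = 0 and F(x_(n+1)) = 1."""
    u = np.sort(cdf(x, a))
    sp = np.diff(np.concatenate(([0.0], u, [1.0])))
    sp = np.clip(sp, 1e-12, None)   # guard against zero spacings from ties
    return -np.mean(np.log(sp))

def mps_estimate(x, grid=np.linspace(0.1, 20.0, 400)):
    """Grid-search MPS estimator for the single parameter a."""
    scores = [neg_mean_log_spacing(a, x) for a in grid]
    return float(grid[int(np.argmin(scores))])

rng = np.random.default_rng(1)
a_true = 5.0
u = rng.uniform(size=500)
x = np.tan(u * np.arctan(a_true)) / a_true   # inverse-CDF sampling
a_hat = mps_estimate(x)                      # should land near a_true
```

The same objective can be fed to any numerical optimizer; a grid search is used here only to keep the sketch dependency-free.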

    The authors declare they have not used artificial intelligence (AI) tools in the creation of this article.

    This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RG23048).

    The authors declare no conflict of interest.



    [1] O. Mubin, J. Henderson, C. Bartneck, You just do not understand me! Speech recognition in human robot interaction, in The 23rd IEEE International Symposium on Robot and Human Interactive Communication, (2014), 637–642. https://doi.org/10.1109/ROMAN.2014.6926324
    [2] T. Belpaeme, J. Kennedy, A. Ramachandran, B. Scassellati, F. Tanaka, Social robots for education: a review, Sci. Rob., 3 (2018), eaat5954. https://doi.org/10.1126/scirobotics.aat5954 doi: 10.1126/scirobotics.aat5954
    [3] Y. Wang, S. Zhong, G. Wang, Preventing online disinformation propagation: cost-effective dynamic budget allocation of refutation, media censorship, and social bot detection, Math. Biosci. Eng., 20 (2023), 13113–13132. https://doi.org/10.3934/mbe.2023584 doi: 10.3934/mbe.2023584
    [4] C. A. Cifuentes, M. J. Pinto, N. Céspedes, M. Múnera, Social robots in therapy and care, Curr. Rob. Rep., 1 (2020), 59–74. https://doi.org/10.1007/s43154-020-00009-2 doi: 10.1007/s43154-020-00009-2
    [5] H. Su, W. Qi, J. Chen, D. Zhang, Fuzzy approximation-based task-space control of robot manipulators with remote center of motion constraint, IEEE Trans. Fuzzy Syst., 30 (2022), 1564–1573. https://doi.org/10.1109/TFUZZ.2022.3157075 doi: 10.1109/TFUZZ.2022.3157075
    [6] J. Hirschberg, C. D. Manning, Advances in natural language processing, Science, 349 (2015), 261–266. https://doi.org/10.1126/science.aaa8685 doi: 10.1126/science.aaa8685
    [7] S. H. Paplu, K. Berns, Towards linguistic and cognitive competence for socially interactive robots, in Robot Intelligence Technology and Applications 6, Springer, (2021), 520–530. https://doi.org/10.1007/978-3-030-97672-9_47
    [8] E. B. Onyeulo, V. Gandhi, What makes a social robot good at interacting with humans? Information, 11 (2020), 43. https://doi.org/10.3390/info11010043 doi: 10.3390/info11010043
    [9] C. Ke, V. W. Lou, K. C. Tan, M. Y. Wai, L. L. Chan, Changes in technology acceptance among older people with dementia: the role of social robot engagement, Int. J. Med. Inf., 141 (2020), 104241. https://doi.org/10.1016/j.ijmedinf.2020.104241 doi: 10.1016/j.ijmedinf.2020.104241
    [10] Y. Kim, H. Chen, S. Alghowinem, C. Breazeal, H. W. Park, Joint engagement classification using video augmentation techniques for multi-person human-robot interaction, preprint, arXiv: 2212.14128.
    [11] A. A. Allaban, M. Wang, T. Padır, A systematic review of robotics research in support of in-home care for older adults, Information, 11 (2020), 75. https://doi.org/10.3390/info11020075 doi: 10.3390/info11020075
    [12] W. Qi, A. Aliverti, A multimodal wearable system for continuous and real-time breathing pattern monitoring during daily activity, IEEE J. Biomed. Health. Inf., 24 (2019), 2199–2207. https://doi.org/10.1109/JBHI.2019.2963048 doi: 10.1109/JBHI.2019.2963048
    [13] C. Barras, Could speech recognition improve your meetings? New Sci., 205 (2010), 18–19. https://doi.org/10.1016/S0262-4079(10)60347-8 doi: 10.1016/S0262-4079(10)60347-8
    [14] Y. J. Lu, X. Chang, C. Li, W. Zhang, S. Cornell, Z. Ni, et al., Espnet-se++: Speech enhancement for robust speech recognition, translation, and understanding, preprint, arXiv: 2207.09514.
    [15] L. Besacier, E. Barnard, A. Karpov, T. Schultz, Automatic speech recognition for under-resourced languages: a survey, Speech Commun., 56 (2014), 85–100. https://doi.org/10.1016/j.specom.2013.07.008 doi: 10.1016/j.specom.2013.07.008
    [16] G. I. Winata, S. Cahyawijaya, Z. Liu, Z. Lin, A. Madotto, P. Xu, et al., Learning fast adaptation on cross-accented speech recognition, preprint, arXiv: 2003.01901.
    [17] S. Kim, B. Raj, I. Lane, Environmental noise embeddings for robust speech recognition, preprint, arXiv: 1601.02553.
    [18] A. F. Daniele, M. Bansal, M. R. Walter, Navigational instruction generation as inverse reinforcement learning with neural machine translation, in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, (2017), 109–118. https://doi.org/10.1145/2909824.3020241
    [19] Z. Liu, D. Yang, Y. Wang, M. Lu, R. Li, Egnn: Graph structure learning based on evolutionary computation helps more in graph neural networks, Appl. Soft Comput., 135 (2023), 110040. https://doi.org/10.1016/j.asoc.2023.110040 doi: 10.1016/j.asoc.2023.110040
    [20] Y. Wang, Z. Liu, J. Xu, W. Yan, Heterogeneous network representation learning approach for Ethereum identity identification, IEEE Trans. Comput. Social Syst., 10 (2022), 890–899. https://doi.org/10.1109/TCSS.2022.3164719 doi: 10.1109/TCSS.2022.3164719
    [21] J. Zhao, Y. Lv, Output-feedback robust tracking control of uncertain systems via adaptive learning, Int. J. Control Autom. Syst, 21 (2023), 1108–1118. https://doi.org/10.1007/s12555-021-0882-6 doi: 10.1007/s12555-021-0882-6
    [22] S. Islam, A. Paul, B. S. Purkayastha, I. Hussain, Construction of English-bodo parallel text corpus for statistical machine translation, Int. J. Nat. Lang. Comput., 7 (2018), 93–103. https://doi.org/10.5121/ijnlc.2018.7509 doi: 10.5121/ijnlc.2018.7509
    [23] J. Su, J. Chen, H. Jiang, C. Zhou, H. Lin, Y. Ge, et al., Multi-modal neural machine translation with deep semantic interactions, Inf. Sci., 554 (2021), 47–60. https://doi.org/10.1016/j.ins.2020.11.024 doi: 10.1016/j.ins.2020.11.024
    [24] T. Duarte, R. Prikladnicki, F. Calefato, F. Lanubile, Speech recognition for voice-based machine translation, IEEE Software, 31 (2014), 26–31. https://doi.org/10.1109/MS.2014.14 doi: 10.1109/MS.2014.14
    [25] D. M. E. M. Hussein, A survey on sentiment analysis challenges, J. King Saud Univ. Eng. Sci., 30 (2018), 330–338. https://doi.org/10.1016/j.jksues.2016.04.002 doi: 10.1016/j.jksues.2016.04.002
    [26] Y. Liu, J. Lu, J. Yang, F. Mao, Sentiment analysis for e-commerce product reviews by deep learning model of bert-bigru-softmax, Math. Biosci. Eng., 17 (2020), 7819–7837. https://doi.org/10.3934/mbe.2020398 doi: 10.3934/mbe.2020398
    [27] H. Swapnarekha, J. Nayak, H. S. Behera, P. B. Dash, D. Pelusi, An optimistic firefly algorithm-based deep learning approach for sentiment analysis of COVID-19 tweets, Math. Biosci. Eng., 20 (2023), 2382–2407. https://doi.org/10.3934/mbe.2023112 doi: 10.3934/mbe.2023112
    [28] N. Mishra, M. Ramanathan, R. Satapathy, E. Cambria, N. Magnenat-Thalmann, Can a humanoid robot be part of the organizational workforce? a user study leveraging sentiment analysis, in 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), (2019), 1–7. https://doi.org/10.1109/RO-MAN46459.2019.8956349
    [29] M. McShane, Natural language understanding (NLU, not NLP) in cognitive systems, AI Mag., 38 (2017), 43–56. https://doi.org/10.1609/aimag.v38i4.2745 doi: 10.1609/aimag.v38i4.2745
    [30] C. Li, W. Xing, Natural language generation using deep learning to support mooc learners, Int. J. Artif. Intell. Educ., 31 (2021), 186–214. https://doi.org/10.1007/s40593-020-00235-x doi: 10.1007/s40593-020-00235-x
    [31] H. Su, W. Qi, Y. Hu, H. R. Karimi, G. Ferrigno, E. De Momi, An incremental learning framework for human-like redundancy optimization of anthropomorphic manipulators, IEEE Trans. Ind. Inf., 18 (2020), 1864–1872. https://doi.org/10.1109/tii.2020.3036693 doi: 10.1109/tii.2020.3036693
    [32] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, et al., Google's neural machine translation system: bridging the gap between human and machine translation, preprint, arXiv: 1609.08144.
    [33] H. Hu, B. Liu, P. Zhang, Several models and applications for deep learning, in 2017 3rd IEEE International Conference on Computer and Communications (ICCC), (2017), 524–530. https://doi.org/10.1109/CompComm.2017.8322601
    [34] J. Aron, How innovative is apple's new voice assistant, Siri?, New Sci., 212 (2011), 24. https://doi.org/10.1016/S0262-4079(11)62647-X doi: 10.1016/S0262-4079(11)62647-X
    [35] W. Jiao, W. Wang, J. Huang, X. Wang, Z. Tu, Is ChatGPT a good translator? Yes with GPT-4 as the engine, preprint, arXiv: 2301.08745.
    [36] P. S. Mattas, ChatGPT: A study of AI language processing and its implications, Int. J. Res. Publ. Rev., 4 (2023), 435–440. https://doi.org/10.55248/gengpi.2023.4218 doi: 10.55248/gengpi.2023.4218
    [37] H. Su, W. Qi, Y. Schmirander, S. E. Ovur, S. Cai, X. Xiong, A human activity-aware shared control solution for medical human–robot interaction, Assem. Autom., 42 (2022), 388–394. https://doi.org/10.1108/AA-12-2021-0174 doi: 10.1108/AA-12-2021-0174
    [38] W. Qi, S. E. Ovur, Z. Li, A. Marzullo, R. Song, Multi-sensor guided hand gesture recognition for a teleoperated robot using a recurrent neural network, IEEE Rob. Autom. Lett., 6 (2021), 6039–6045. https://doi.org/10.1109/LRA.2021.3089999 doi: 10.1109/LRA.2021.3089999
    [39] H. Su, A. Mariani, S. E. Ovur, A. Menciassi, G. Ferrigno, E. De Momi, Toward teaching by demonstration for robot-assisted minimally invasive surgery, IEEE Trans. Autom. Sci. Eng., 18 (2021), 484–494. https://doi.org/10.1109/TASE.2020.3045655 doi: 10.1109/TASE.2020.3045655
    [40] J. Weizenbaum, ELIZA — a computer program for the study of natural language communication between man and machine, Commun. ACM, 26 (1983), 23–28. https://doi.org/10.1145/357980.357991 doi: 10.1145/357980.357991
    [41] M. Prensky, Digital natives, digital immigrants part 2, do they really think differently? Horizon, 9 (2001), 1–6. https://doi.org/10.1108/10748120110424843 doi: 10.1108/10748120110424843
    [42] M. Skjuve, A. Følstad, K. I. Fostervold, P. B. Brandtzaeg, My chatbot companion-a study of human-chatbot relationships, Int. J. Hum.-Comput. Stud., 149 (2021), 102601. https://doi.org/10.1016/j.ijhcs.2021.102601 doi: 10.1016/j.ijhcs.2021.102601
    [43] T. Kanda, T. Hirano, D. Eaton, H. Ishiguro, Interactive robots as social partners and peer tutors for children: a field trial, Hum.-Comput. Interact., 19 (2004), 61–84. https://doi.org/10.1080/07370024.2004.9667340 doi: 10.1080/07370024.2004.9667340
    [44] J. Zakos, L. Capper, Clive-an artificially intelligent chat robot for conversational language practice, in Artificial Intelligence: Theories, Models and Applications, Springer, (2008), 437–442. https://doi.org/10.1007/978-3-540-87881-0_46
    [45] M. A. Salichs, Á. Castro-González, E. Salichs, E. Fernández-Rodicio, M. Maroto-Gómez, J. J. Gamboa-Montero, et al., Mini: A new social robot for the elderly, Int. J. Social Rob., 12 (2020), 1231–1249. https://doi.org/10.1007/s12369-020-00687-0 doi: 10.1007/s12369-020-00687-0
    [46] J. Qi, X. Ding, W. Li, Z. Han, K. Xu, Fusing hand postures and speech recognition for tasks performed by an integrated leg–arm hexapod robot, Appl. Sci., 10 (2020), 6995. https://doi.org/10.3390/app10196995 doi: 10.3390/app10196995
    [47] V. Lim, M. Rooksby, E. S. Cross, Social robots on a global stage: establishing a role for culture during human–robot interaction, Int. J. Social Rob., 13 (2021), 1307–1333. https://doi.org/10.1007/s12369-020-00710-4 doi: 10.1007/s12369-020-00710-4
    [48] T. Belpaeme, P. Vogt, R. Van den Berghe, K. Bergmann, T. Göksun, M. De Haas, et al., Guidelines for designing social robots as second language tutors, Int. J. Social Rob., 10 (2018), 325–341. https://doi.org/10.1007/s12369-018-0467-6 doi: 10.1007/s12369-018-0467-6
    [49] M. Hirschmanner, S. Gross, B. Krenn, F. Neubarth, M. Trapp, M. Vincze, Grounded word learning on a pepper robot, in Proceedings of the 18th International Conference on Intelligent Virtual Agents, (2018), 351–352. https://doi.org/10.1145/3267851.3267903
    [50] H. Leeuwestein, M. Barking, H. Sodacı, O. Oudgenoeg-Paz, J. Verhagen, P. Vogt, et al., Teaching Turkish-Dutch kindergartners Dutch vocabulary with a social robot: does the robot's use of Turkish translations benefit children's Dutch vocabulary learning? J. Comput. Assisted Learn., 37 (2021), 603–620. https://doi.org/10.1111/jcal.12510 doi: 10.1111/jcal.12510
    [51] S. Biswas, Prospective role of chat GPT in the military: according to ChatGPT, Qeios, 2023. https://doi.org/10.32388/8WYYOD doi: 10.32388/8WYYOD
    [52] Y. Ye, H. You, J. Du, Improved trust in human-robot collaboration with ChatGPT, IEEE Access, 11 (2023), 55748–55754. https://doi.org/10.1109/ACCESS.2023.3282111 doi: 10.1109/ACCESS.2023.3282111
    [53] W. Qi, H. Su, A cybertwin based multimodal network for ECG patterns monitoring using deep learning, IEEE Trans. Ind. Inf., 18 (2022), 6663–6670. https://doi.org/10.1109/TII.2022.3159583 doi: 10.1109/TII.2022.3159583
    [54] W. Qi, H. Fan, H. R. Karimi, H. Su, An adaptive reinforcement learning-based multimodal data fusion framework for human–robot confrontation gaming, Neural Networks, 164 (2023), 489–496. https://doi.org/10.1016/j.neunet.2023.04.043 doi: 10.1016/j.neunet.2023.04.043
    [55] D. McColl, G. Nejat, Recognizing emotional body language displayed by a human-like social robot, Int. J. Social Rob., 6 (2014), 261–280. https://doi.org/10.1007/s12369-013-0226-7 doi: 10.1007/s12369-013-0226-7
    [56] A. Hong, N. Lunscher, T. Hu, Y. Tsuboi, X. Zhang, S. F. dos R. Alves, et al., A multimodal emotional human–robot interaction architecture for social robots engaged in bidirectional communication, IEEE Trans. Cybern., 51 (2020), 5954–5968. https://doi.org/10.1109/TCYB.2020.2974688 doi: 10.1109/TCYB.2020.2974688
    [57] A. Meghdari, M. Alemi, M. Zakipour, S. A. Kashanian, Design and realization of a sign language educational humanoid robot, J. Intell. Rob. Syst., 95 (2019), 3–17. https://doi.org/10.1007/s10846-018-0860-2 doi: 10.1007/s10846-018-0860-2
    [58] M. Atzeni, M. Atzori, Askco: A multi-language and extensible smart virtual assistant, in 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), (2019), 111–112. https://doi.org/10.1109/AIKE.2019.00028
    [59] A. Dahal, A. Khadka, B. Kharal, A. Shah, Effectiveness of native language for conversational bots, 2022. https://doi.org/10.21203/rs.3.rs-2183870/v2
    [60] R. Hasselvander, Buddy: Your family's companion robot, 2016.
    [61] T. Erić, S. Ivanović, S. Milivojša, M. Matić, N. Smiljković, Voice control for smart home automation: evaluation of approaches and possible architectures, in 2017 IEEE 7th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), (2017), 140–142. https://doi.org/10.1109/ICCE-Berlin.2017.8210613
    [62] S. Bajpai, D. Radha, Smart phone as a controlling device for smart home using speech recognition, in 2019 International Conference on Communication and Signal Processing (ICCSP), (2019), 0701–0705. https://doi.org/10.1109/ICCSP.2019.8697923
    [63] A. Ruslan, A. Jusoh, A. L. Asnawi, M. R. Othman, N. A. Razak, Development of multilanguage voice control for smart home with IoT, in J. Phys.: Conf. Ser., 1921, (2021), 012069. https://doi.org/10.1088/1742-6596/1921/1/012069
    [64] C. Soni, M. Saklani, G. Mokhariwale, A. Thorat, K. Shejul, Multi-language voice control iot home automation using google assistant and Raspberry Pi, in 2022 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI), (2022), 1–6. https://doi.org/10.1109/ACCAI53970.2022.9752606
    [65] S. Kalpana, S. Rajagopalan, R. Ranjith, R. Gomathi, Voice recognition based multi robot for blind people using lidar sensor, in 2020 International Conference on System, Computation, Automation and Networking (ICSCAN), (2020), 1–6. https://doi.org/10.1109/ICSCAN49426.2020.9262365
    [66] N. Harum, M. N. Izzati, N. A. Emran, N. Abdullah, N. A. Zakaria, E. Hamid, et al., A development of multi-language interactive device using artificial intelligence technology for visual impairment person, Int. J. Interact. Mob. Technol., 15 (2021), 79–92. https://doi.org/10.3991/ijim.v15i19.24139 doi: 10.3991/ijim.v15i19.24139
    [67] P. Vogt, R. van den Berghe, M. de Haas, L. Hoffman, J. Kanero, E. Mamus, et al., Second language tutoring using social robots: a large-scale study, in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), (2019), 497–505. https://doi.org/10.1109/HRI.2019.8673077
    [68] D. Leyzberg, A. Ramachandran, B. Scassellati, The effect of personalization in longer-term robot tutoring, ACM Trans. Hum.-Rob. Interact., 7 (2018), 1–19. https://doi.org/10.1145/3283453 doi: 10.1145/3283453
    [69] D. T. Tran, D. H. Truong, H. S. Le, J. H. Huh, Mobile robot: automatic speech recognition application for automation and STEM education, Soft Comput., 27 (2023), 10789–10805. https://doi.org/10.1007/s00500-023-07824-7 doi: 10.1007/s00500-023-07824-7
    [70] T. Schlippe, J. Sawatzki, AI-based multilingual interactive exam preparation, in Innovations in Learning and Technology for the Workplace and Higher Education, Springer, (2022), 396–408. https://doi.org/10.1007/978-3-030-90677-1_38
    [71] T. Schodde, K. Bergmann, S. Kopp, Adaptive robot language tutoring based on bayesian knowledge tracing and predictive decision-making, in 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), (2017), 128–136. https://doi.org/10.1145/2909824.3020222
    [72] B. He, M. Xia, X. Yu, P. Jian, H. Meng, Z. Chen, An educational robot system of visual question answering for preschoolers, in 2017 2nd International Conference on Robotics and Automation Engineering (ICRAE), (2017), 441–445. https://doi.org/10.1109/ICRAE.2017.8291426
    [73] C. Y. Lin, W. W. Shen, M. H. M. Tsai, J. M. Lin, W. K. Cheng, Implementation of an individual English oral training robot system, in Innovative Technologies and Learning, Springer, (2020), 40–49. https://doi.org/10.1007/978-3-030-63885-6_5
    [74] T. Halbach, T. Schulz, W. Leister, I. Solheim, Robot-enhanced language learning for children in Norwegian day-care centers, Multimodal Technol. Interact., 5 (2021), 74. https://doi.org/10.3390/mti5120074 doi: 10.3390/mti5120074
    [75] P. F. Sin, Z. W. Hong, M. H. M. Tsai, W. K. Cheng, H. C. Wang, J. M. Lin, Metmrs: a modular multi-robot system for English class, in Innovative Technologies and Learning, Springer, (2022), 157–166. https://doi.org/10.1007/978-3-031-15273-3_17
    [76] T. Jakonen, H. Jauni, Managing activity transitions in robot-mediated hybrid language classrooms, Comput. Assisted Lang. Learn., (2022), 1–24. https://doi.org/10.1080/09588221.2022.2059518 doi: 10.1080/09588221.2022.2059518
    [77] F. Tanaka, T. Takahashi, S. Matsuzoe, N. Tazawa, M. Morita, Child-operated telepresence robot: a field trial connecting classrooms between Australia and Japan, in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, (2013), 5896–5901. https://doi.org/10.1109/IROS.2013.6697211
    [78] A. S. Dhanjal, W. Singh, An automatic machine translation system for multi-lingual speech to Indian sign language, Multimedia Tools Appl., 81 (2022), 4283–4321. https://doi.org/10.1007/s11042-021-11706-1 doi: 10.1007/s11042-021-11706-1
    [79] S. Yamamoto, J. Woo, W. H. Chin, K. Matsumura, N. Kubota, Interactive information support by robot partners based on informationally structured space, J. Rob. Mechatron., 32 (2020), 236–243. https://doi.org/10.20965/jrm.2020.p0236 doi: 10.20965/jrm.2020.p0236
    [80] E. Tsardoulias, A. G. Thallas, A. L. Symeonidis, P. A. Mitkas, Improving multilingual interaction for consumer robots through signal enhancement in multichannel speech, J. Audio Eng. Soc., 64 (2016), 514–524. https://doi.org/10.17743/jaes.2016.0022 doi: 10.17743/jaes.2016.0022
    [81] Aldebaran, Thank you, gotthold! pepper robot boosts awareness for saving at bbbank, 2023. Available from: https://www.aldebaran.com/en/blog/news-trends/thank-gotthold-pepper-bbbank.
    [82] S. Yun, Y. J. Lee, S. H. Kim, Multilingual speech-to-speech translation system for mobile consumer devices, IEEE Trans. Consum. Electron., 60 (2014), 508–516. https://doi.org/10.1109/TCE.2014.6937337 doi: 10.1109/TCE.2014.6937337
    [83] A. Romero-Garcés, L. V. Calderita, J. Martınez-Gómez, J. P. Bandera, R. Marfil, L. J. Manso, et al., The cognitive architecture of a robotic salesman, 2015. Available from: http://hdl.handle.net/10630/10767.
    [84] A. Hämäläinen, A. Teixeira, N. Almeida, H. Meinedo, T. Fegyó, M. S. Dias, Multilingual speech recognition for the elderly: the AALFred personal life assistant, Procedia Comput. Sci., 67 (2015), 283–292. https://doi.org/10.1016/j.procs.2015.09.272 doi: 10.1016/j.procs.2015.09.272
    [85] R. Xu, J. Cao, M. Wang, J. Chen, H. Zhou, Y. Zeng, et al., Xiaomingbot: a multilingual robot news reporter, preprint, arXiv: 2007.08005.
    [86] M. Doumbouya, L. Einstein, C. Piech, Using radio archives for low-resource speech recognition: towards an intelligent virtual assistant for illiterate users, in Proceedings of the AAAI Conference on Artificial Intelligence, 35 (2021), 14757–14765. https://doi.org/10.1609/aaai.v35i17.17733
    [87] P. Rajakumar, K. Suresh, M. Boobalan, M. Gokul, G. D. Kumar, R. Archana, IoT based voice assistant using Raspberry Pi and natural language processing, in 2022 International Conference on Power, Energy, Control and Transmission Systems (ICPECTS), (2022), 1–4. https://doi.org/10.1109/ICPECTS56089.2022.10046890
    [88] A. Di Nuovo, N. Wang, F. Broz, T. Belpaeme, R. Jones, A. Cangelosi, Experimental evaluation of a multi-modal user interface for a robotic service, in Towards Autonomous Robotic Systems, Springer, (2016), 87–98. https://doi.org/10.1007/978-3-319-40379-3_9
    [89] A. Di Nuovo, F. Broz, N. Wang, T. Belpaeme, A. Cangelosi, R. Jones, et al., The multi-modal interface of robot-era multi-robot services tailored for the elderly, Intell. Serv. Rob., 11 (2018), 109–126. https://doi.org/10.1007/s11370-017-0237-6 doi: 10.1007/s11370-017-0237-6
    [90] L. Crisóstomo, N. F. Ferreira, V. Filipe, Robotics services at home support, Int. J. Adv. Rob. Syst., 17 (2020). https://doi.org/10.1177/1729881420925018 doi: 10.1177/1729881420925018
    [91] I. Giorgi, C. Watson, C. Pratt, G. L. Masala, Designing robot verbal and nonverbal interactions in socially assistive domain for quality ageing in place, in Human Centred Intelligent Systems, Springer, (2021), 255–265. https://doi.org/10.1007/978-981-15-5784-2_21
    [92] S. K. Pramanik, Z. A. Onik, N. Anam, M. M. Ullah, A. Saiful, S. Sultana, A voice controlled robot for continuous patient assistance, in 2016 International Conference on Medical Engineering, Health Informatics and Technology (MediTec), (2016), 1–4. https://doi.org/10.1109/MEDITEC.2016.7835366
    [93] M. F. Ruzaij, S. Neubert, N. Stoll, K. Thurow, Hybrid voice controller for intelligent wheelchair and rehabilitation robot using voice recognition and embedded technologies, J. Adv. Comput. Intell. Intell. Inf., 20 (2016), 615–622. https://doi.org/10.20965/jaciii.2016.p0615 doi: 10.20965/jaciii.2016.p0615
    [94] A. Romero-Garcés, J. P. Bandera, R. Marfil, M. González-García, A. Bandera, Clara: Building a socially assistive robot to interact with elderly people, Designs, 6 (2022), 125. https://doi.org/10.3390/designs6060125 doi: 10.3390/designs6060125
    [95] M. F. Ruzaij, S. Neubert, N. Stoll, K. Thurow, Multi-sensor robotic-wheelchair controller for handicap and quadriplegia patients using embedded technologies, in 2016 9th International Conference on Human System Interactions (HSI), (2016), 103–109. https://doi.org/10.1109/HSI.2016.7529616
    [96] T. Kobayashi, N. Yonaga, T. Imai, K. Arai, Bilingual SNS agency robot for person with disability, in 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), (2019), 74–75. https://doi.org/10.1109/GCCE46687.2019.9015297
    [97] C. Yvanoff-Frenchin, V. Ramos, T. Belabed, C. Valderrama, Edge computing robot interface for automatic elderly mental health care based on voice, Electronics, 9 (2020), 419. https://doi.org/10.3390/electronics9030419 doi: 10.3390/electronics9030419
    [98] D. Kottilingam, Emotional wellbeing assessment for elderly using multi-language robot interface, J. Inf. Technol. Digital World, 2 (2020), 1–10. https://doi.org/10.36548/jitdw.2020.1.001 doi: 10.36548/jitdw.2020.1.001
    [99] Microsoft, What Is A Social Robot? 2021. Available from: https://codecondo.com/what-is-a-social-robot/.
    [100] Aldebaran, Pepper in the fight against COVID-19 at Horovice Hospital, Czech republic, 2023.
    [101] N. Shuo, S. Shao, N. Kubota, An iBeacon-based guide robot system for multi-lingual service, in The Abstracts of the International Conference on Advanced Mechatronics: Toward Evolutionary Fusion of IT and Mechatronics: ICAM, (2015), 274–275. https://doi.org/10.1299/jsmeicam.2015.6.274
    [102] S. Sun, T. Takeda, H. Koyama, N. Kubota, Smart device interlocked robot partners for information support systems in sightseeing guide, in 2016 Joint 8th International Conference on Soft Computing and Intelligent Systems (SCIS) and 17th International Symposium on Advanced Intelligent Systems (ISIS), (2016), 586–590. https://doi.org/10.1109/SCIS-ISIS.2016.0129
    [103] L. Jeanpierre, A. I. Mouaddib, L. Locchi, M. T. Lazaro, A. Pennisi, H. Sahli, et al., Coaches: an assistance multi-robot system in public areas, in 2017 European Conference on Mobile Robots (ECMR), (2017), 1–6. https://doi.org/10.1109/ECMR.2017.8098710
    [104] H. Yoshiuchi, T. Matsuda, J. Dai, Data analysis technology of service robot system for business improvement, in ICRAI '19: Proceedings of the 5th International Conference on Robotics and Artificial Intelligence, (2019), 7–11. https://doi.org/10.1145/3373724.3373733
    [105] A. Saniya, M. Chandana, M. S. Dennis, K. Pooja, D. Chaithanya, K. Rohith, et al., CAMPUS MITHRA: design and implementation of voice based attender robot, J. Phys.: Conf. Ser., 2115 (2021), 012006. https://doi.org/10.1088/1742-6596/2115/1/012006 doi: 10.1088/1742-6596/2115/1/012006
    [106] Q. Zhang, The application of audio control in social robotics, in RICAI '22: Proceedings of the 2022 4th International Conference on Robotics, Intelligent Control and Artificial Intelligence, (2022), 963–966. https://doi.org/10.1145/3584376.3584548
    [107] Aldebaran, Landscape AI: Robotic guides in museums and cultural places, 2023.
    [108] Y. Lin, H. Zhou, M. Chen, H. Min, Automatic sorting system for industrial robot with 3D visual perception and natural language interaction, Meas. Control, 52 (2019), 100–115. https://doi.org/10.1177/0020294018819552 doi: 10.1177/0020294018819552
    [109] B. Birch, C. Griffiths, A. Morgan, Environmental effects on reliability and accuracy of mfcc based voice recognition for industrial human-robot-interaction, Proc. Inst. Mech. Eng., Part B: J. Eng. Manuf., 235 (2021), 1939–1948. https://doi.org/10.1177/09544054211014492 doi: 10.1177/09544054211014492
    [110] M. Kiruthiga, M. Divakar, V. Kumar, J. Martina, R. Kalpana, R. M. S. Kumar, Farmer's assistant using AI voice bot, in 2021 3rd International Conference on Signal Processing and Communication (ICPSC), (2021), 527–531. https://doi.org/10.1109/ICSPC51351.2021.9451760
    [111] J. H. Hong, J. Taylor, E. T. Matson, Natural multi-language interaction between firefighters and fire fighting robots, in 2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), (2014), 183–189. https://doi.org/10.1109/WI-IAT.2014.166
    [112] J. Thomason, S. Zhang, R. J. Mooney, P. Stone, Learning to interpret natural language commands through human-robot dialog, in IJCAI'15: Proceedings of the 24th International Conference on Artificial Intelligence, (2015), 1923–1929. Available from: https://dl.acm.org/doi/10.5555/2832415.2832516.
    [113] R. Contreras, A. Ayala, F. Cruz, Unmanned aerial vehicle control through domain-based automatic speech recognition, Computers, 9 (2020), 75. https://doi.org/10.3390/computers9030075 doi: 10.3390/computers9030075
    [114] Y. He, Z. Deng, J. Zhang, Design and voice-based control of a nasal endoscopic surgical robot, CAAI Trans. Intell. Technol., 6 (2021), 123–131. https://doi.org/10.1049/cit2.12022 doi: 10.1049/cit2.12022
    [115] J. Nishihara, T. Nakamura, T. Nagai, Online algorithm for robots to learn object concepts and language model, IEEE Trans. Cognit. Dev. Syst., 9 (2016), 255–268. https://doi.org/10.1109/TCDS.2016.2552579 doi: 10.1109/TCDS.2016.2552579
    [116] H. M. He, RobotGPT: from ChatGPT to robot intelligence, 2023. https://doi.org/10.36227/techrxiv.22569247
    [117] F. Yuan, J. G. Anderson, T. H. Wyatt, R. P. Lopez, M. Crane, A. Montgomery, et al., Assessing the acceptability of a humanoid robot for Alzheimer's disease and related dementia care using an online survey, Int. J. Social Rob., 14 (2022), 1223–1237. https://doi.org/10.1007/s12369-021-00862-x doi: 10.1007/s12369-021-00862-x
    [118] S. Andrist, M. Ziadee, H. Boukaram, B. Mutlu, M. Sakr, Effects of culture on the credibility of robot speech: a comparison between English and Arabic, in HRI '15: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, (2015), 157–164. https://doi.org/10.1145/2696454.2696464
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
