Research article

Modified artificial fish swarm algorithm to solve unrelated parallel machine scheduling problem under fuzzy environment

  • Received: 07 July 2024 Revised: 29 November 2024 Accepted: 04 December 2024 Published: 18 December 2024
  • MSC : 90C27, 90C70

  • The unrelated parallel machine scheduling problem (UPMSP) in a fuzzy environment is an active research area due to the fuzzy nature of most real-world problems. UPMSP is NP-hard; thus, finding optimal solutions is challenging, particularly when multiple objectives need to be considered. Hence, a metaheuristic based on a modified artificial fish swarm algorithm (AFSA) is presented in this study to minimize makespan and total tardiness as a multi-objective problem. Three modifications were made to the proposed algorithm. First, aspiration behavior was added to the AFSA behaviors to increase effectiveness. Second, improved parameters such as step and visual were used to balance global search capability and convergence rate. Finally, a transformation method was injected to make the algorithm suitable for discrete optimization problems such as UPMSP. The proposed algorithm was compared with AFSA and five modified versions of AFSA on problems of three different sizes to verify and measure its effectiveness. Afterward, the Wilcoxon signed-rank test was used to statistically evaluate the algorithm's performance. The results indicate that the proposed algorithm significantly outperformed the other algorithms, especially for medium- and large-sized problems.

    Citation: Azhar Mahdi Ibadi, Rosshairy Abd Rahman. Modified artificial fish swarm algorithm to solve unrelated parallel machine scheduling problem under fuzzy environment[J]. AIMS Mathematics, 2024, 9(12): 35326-35354. doi: 10.3934/math.20241679




    Abbreviation Description
    MAGDM Multi-attribute group decision making
    MADM Multi-attribute decision making
    FS Fuzzy set
    IFS Intuitionistic fuzzy set
    PFS Pythagorean fuzzy set
    q-ROFS q-rung orthopair fuzzy set
    DMs Decision-makers
    AOps Aggregation operators
    q-ROHFRS q-rung orthopair hesitant fuzzy rough set

    In the current economic and social management system, we need to deal with numerous MADM challenges. Many researchers in the fields of operations research and decision sciences have taken a keen interest in MADM theories and techniques, and their noteworthy contributions have been acknowledged [1,2,3,4,5,6,7,8]. Choosing a suitable garbage disposal site (GDS) is one of the most important steps for the proper disposal of rubbish. Garbage has become a real problem for international society, posing an increasing hazard to human health [9]. Garbage disposal that does not harm the environment, and the conversion of garbage into money, have become vital aspects for each community in planning and executing sustainable growth and maintaining a clean environment [10]. Since the start of the twenty-first century, the worldwide average annual waste growth rate has accelerated to 8.42%, while in developing regions the figure has even reached 10% [11]. The implementation of waste disposal systems begins with the identification of a location. The choice of a GDS is complicated because so many elements must be taken into consideration, such as the local economy, climate, transport services, ground hydrological factors, and geological conditions in the environment. In practical DM challenges, several influencing circumstances, such as limited understanding of DMs, tight schedules, and restricted budgets, prevent DMs from providing correct assessment values of alternatives with regard to attributes [12]. Imprecise assessments play a crucial role in DM techniques in these situations. MADM approaches have been used in the assessment of GDS selection schemes. However, most existing studies were conducted using Zadeh's fuzzy sets (FSs) [13] and Atanassov's intuitionistic fuzzy sets (IFSs) [14]. Existing research on IFS-based MAGDM approaches has demonstrated the effectiveness and efficiency of IFSs in coping with challenging assessment schemes presented by DMs [15]. Afterwards, Yager [16] suggested a novel extension of IFSs called Pythagorean FSs (PFSs). The constraint of IFSs is that the sum of the membership and non-membership degrees must be less than or equal to 1, while PFSs satisfy the constraint that the sum of the squares of the membership and non-membership degrees must not be greater than 1. Obviously, PFSs can represent a wider range of information than IFSs, and PFS-based MADM methodologies have emerged as a novel and dynamic research field in comparison to IFSs [17]. Despite their capability to deal with complex fuzzy information in certain real-life situations, both IFSs and PFSs still have some shortcomings. Later on, Yager [18] introduced the notion of a q-rung orthopair fuzzy (q-ROF) set, which is an extension of the conventional IFS. The q-ROF set has the constraint that the sum of the qth powers of the membership and non-membership degrees must be less than or equal to 1.

    There are several situations in which DMs have strong opinions regarding evaluating or assessing plans, initiatives, or governmental statements issued by a government. For example, suppose an organization starts a huge construction project in order to demonstrate its success and performance. Members of the organization may attribute a favorable membership degree ($\beta = 0.9$) to their initiative, whereas others may assess the same effort as a waste of resources and strive to invalidate it by providing fiercely contrasting perspectives, ascribing a non-membership degree ($\psi = 0.7$). In this situation, $\beta_{\digamma} + \psi_{\digamma} > 1$ and $(\beta_{\digamma})^{2} + (\psi_{\digamma})^{2} > 1$, but $(\beta_{\digamma})^{q} + (\psi_{\digamma})^{q} < 1$ for $q > 3$. Therefore, $(\beta, \psi)$ is a q-rung orthopair fuzzy number (q-ROFN) rather than an intuitionistic fuzzy number (IFN) or a Pythagorean fuzzy number (PFN). Thus, Yager's q-ROFNs are effective in addressing information imprecision. The shortcomings of the aforementioned models make evaluating the selection process under such complicated information difficult. Zhang et al. [19] integrated q-ROFSs into a multi-granular three-way decision framework and explored a unique MADM technique in q-ROF information systems. Zhang et al. [20] investigated a fuzzy intelligence learning technique based on limited rationality in internet of medical things systems, giving a suitable scheme for biomedical data processing. Zhang et al. [21] investigated MADM with imprecise and incomplete information in incomplete q-ROF information systems using multi-granulation probabilistic models, MULTIMOORA (multi-objective optimization by ratio analysis plus the full multiplicative form), and the technique of precise order preference. For MADM, Hussain et al. [22] discussed a covering-based q-rung orthopair fuzzy rough set (q-ROFRS) model hybridized with the technique for order preference by similarity to ideal solution (TOPSIS). Chakraborty [23] conducted thorough simulation-based comparisons and mathematical analyses of the TOPSIS and modified TOPSIS approaches in order to clear up incorrect assumptions about their application to MADM problems. Hanine et al. [24] proposed and explored the use of two well-known MCDM approaches, the analytic hierarchy process and TOPSIS methodologies, in decision-making situations. Gupta et al. [25] introduced a trapezoidal IF number-based decision approach for MADM challenges, including plant site selection; they additionally applied the VIKOR technique in an IF environment. A VIKOR approach was proposed by Hafezalkotob et al. [26] with both interval target values of attributes and interval ratings of alternatives on attributes; they also sought to use the power of interval estimations to reduce the degeneration of undetermined data. Soner et al. [27] employed the analytic hierarchy process and the VIKOR scheme to develop a complete framework for dealing with uncertainty during linguistic evaluation by decision-makers in marine transportation enterprises. Gul et al. [28] carried out an innovative literature review in order to categorize, analyze, and evaluate current research on VIKOR applications; they also explored applications of the VIKOR approach in fuzzy settings. Opricovic et al. [29] enhanced the VIKOR approach with a stability analysis identifying the weight stability intervals and with a trade-off analysis, demonstrating its application in MCDM challenges with conflicting and non-commensurable criteria.
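    As a quick numerical check of the construction-project example above, the short Python snippet below (a sketch; the loop range and variable names are ours) evaluates $\beta^{q} + \psi^{q}$ for $\beta = 0.9$ and $\psi = 0.7$ and shows that the q-rung orthopair constraint is first met at $q = 4$.

# Membership and non-membership grades from the construction-project example.
beta, psi = 0.9, 0.7

for q in range(1, 6):
    total = beta ** q + psi ** q
    status = "valid q-ROFN" if total <= 1 else "constraint violated"
    print(f"q = {q}: beta^q + psi^q = {total:.4f} -> {status}")
# q = 1 and q = 2 violate the IFS/PFS constraints; q >= 4 satisfies the q-ROF constraint.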

    Several approaches have been presented to address data uncertainty. Rough set (RS) theory, proposed by Pawlak [30] in 1982, is a non-statistical mathematical technique for dealing with the difficulties of ambiguity and uncertainty. This theory is based on associating a subset with two crisp sets known as the lower and upper approximations, which are used to establish the subset's boundary region and accuracy measure [31]. Several extensions of the RS concept have been explored. Fuzzy rough set (FRS) theory and RS theory may be integrated to organize information with continuous features and evaluate information discrepancies. The FRS approach has proven to be extremely beneficial in a wide range of application fields because it is an appropriate approach for evaluating inconsistent and conflicting information. FRS theory, introduced in [34], is an enhancement of RS theory that can support information with continuous numerical properties. The importance of FRS theory is highlighted in a variety of applications. Pan et al. [35] modified the fuzzy preference relation RS framework with an additive consistent fuzzy preference relation. Li et al. [36] proposed an effective FRS technique for feature evaluation. The problem of continuous observer design for integer-order nonlinear systems has received a lot of consideration in the literature; however, only a few investigations are directly connected to granular computing theory for better estimating system states. Zhan [37] discussed the granular function description and provided a unique impulsive observer design technique to ensure cognitive convergence of the error dynamic system. The derived criteria, in particular, accurately captured the relationship between the fractional order and the pulse signal, and a numerical simulation was presented to demonstrate the efficacy of the suggested approach. Ren et al. [38] investigated logical systems with mixed decision implications. The mixed decision implication semantical system was designed to express and derive acceptable mixed decision implications while avoiding contradictory mixed decision implications; the syntactical system introduced mixed augmentation and mixed combination, and the soundness, completeness, and non-redundancy of these two inference rules were demonstrated. Lian et al. [39] investigated the concept of relative knowledge distances in light of relative cognitive principles. In the context of precise and fuzzy settings, they depicted the transformation challenges between any two types of knowledge given the condition of specific knowledge, and they further provided the new properties obtained as relative knowledge distances increase and conditional knowledge granularities are refined, which reflect progressive features well. Feng et al. [40] reduced multi-granulation by using uncertainty measures based on FRSs, minimizing both the negative and positive regions. Sun et al. [41] employed a constructive approach to generate three varieties of multi-granulation FRSs over two universes. Zhang et al. [42] discussed FRS theory for feature selection based on information entropy. Based on FRS theory, Vluymans et al. [43] proposed a novel type of classifier for unbalanced multi-instance data. Wang and Hu [44] proposed new L-fuzzy rough sets to generalize the concept of L-FRSs. Zhang et al. [45] suggested the dual hesitant FRS and its application. Peng et al. [46] suggested an interactive fuzzy linguistic term set to represent interactive information in a MADM problem; the geometric features and benefits of the interactive fuzzy linguistic term set for decision information consistency were examined, and numerical examples illustrated its application to a MADM problem with interactive information, which may enhance decision outcomes and improve artificial intelligence. An improved evaluation based on distance from average solution (EDAS) method for the interval-valued intuitionistic trapezoidal fuzzy set was proposed by Peng et al. [47]; they provided a numerical example of flood disaster rescue to demonstrate the feasibility and efficacy of the suggested strategy, which was also compared to current methods. Tang et al. [48] introduced a decision-theoretic RS model using q-ROF information with applications to stock investment assessment. For three-way decisions under group decision making (GDM), Liang et al. [49] suggested q-ROFSs on decision-theoretic FRSs. Zhang et al. [50] proposed GDM using incomplete q-ROF preference relations. Peng et al. [51] suggested similarity measures for linguistic q-ROF MADM using a projection technique and presented a numerical example to demonstrate its feasibility and efficacy by comparing it to existing approaches; sensitivity and stability analyses of the suggested approach were also provided. These q-ROFRS extensions have been demonstrated to be beneficial in resolving DM evaluation values in MADM challenges in applications.

    In recent years, researchers have increasingly merged MADM techniques with uncertainty theories in response to the rising ambiguity and complexity of real-world problems. Motivated by the above description, existing research piqued our interest in investigating the generalization of FRS theory to propose a substantial information representation. The innovative features of generalized RS theory address the limitations of traditional fuzzy set theory extensions such as FS, IFS, PFS, q-ROFS, LDFS, and many more. Several approaches to fuzzy generalizations of RS theory have been developed and implemented. Considering situations where experts hesitate among several evaluation values, q-rung orthopair hesitant FRSs (q-ROHFRSs), as a new generalization of intuitionistic fuzzy rough sets, can describe uncertain information flexibly in the decision-making process. The q-ROHFRS, a hybrid intelligent structure of rough sets and q-rung orthopair hesitant fuzzy sets (q-ROHFSs), is an advanced classification strategy that has captured our interest for working with ambiguous and incomplete data. The q-ROHFRS improves on traditional models by utilizing lower and upper approximations whose membership and non-membership values are drawn from the unit interval, which can effectively describe the uncertainty of complicated problems and the vagueness of human cognition. In accordance with the assessments, aggregation operators (AOps) play a key role in DM by collecting information from diverse sources into a single value. According to existing knowledge, the establishment of AOps based on the hybridization of q-ROHFSs with RSs is not found in the q-ROF literature. As a result, the q-ROHF rough structure has captivated our attention, and we construct a list of AOps based on rough information, such as q-ROHF Einstein weighted averaging (WA), ordered WA, and hybrid WA operators, using the Einstein t-norm and t-conorm.

    In light of the above research motivations, this paper enhances the existing literature of FRS theory by introducing novel ideas. The following describes the major contributions of the present work.

    (ⅰ) We establish novel q-ROHFRS concepts and examine their fundamental operational rules.

    (ⅱ) We construct a list of AOps, namely the q-ROHFREWA, q-ROHFREOWA, and q-ROHFREHWA operators, based on the Einstein t-norm and t-conorm, and thoroughly investigate their associated properties.

    (ⅲ) We establish a DM approach based on the proposed aggregation operators to synthesize ambiguous information in real-world DM challenges.

    (ⅳ) We carry out a sensitivity analysis of the proposed aggregation operators with respect to the decision-making outcomes.

    (ⅴ) We present a real-world case study of DM challenges in GDS selection assessment.

    (ⅵ) The outcomes are compared with those reported in the available scientific literature.

    (ⅶ) Finally, enhanced q-ROHFR-TOPSIS and VIKOR approaches are employed, and we compare the findings obtained through the suggested operators to validate the proposed DM methodology. The main contributions of the study are summarized in Figure 1.

    Figure 1.  Diagram of the present work.

    The remainder of this article is outlined as follows: Section 2 briefly reviews several fundamental concepts of IFSs, q-ROFSs, and RS theory. In Section 3, the innovative notion of q-ROHFRSs is described together with its fundamental operational rules. Section 4 presents several Einstein AOps that are constructed to aggregate ambiguous information based on the Einstein operational laws. Section 5 is devoted to a DM approach based on the established AOps. Section 6 provides a numerical example of the assessment of garbage disposal site selection; this part also discusses the application of the established approach. The sensitivity analysis of the ranking order for different values of the parameters is presented in Section 7. Section 8 addresses the comparison of the suggested AOps with previous outcomes and presents the enhanced q-ROHFR-TOPSIS and VIKOR methods for evaluating the proposed AOps-based MADM methodology. The manuscript is concluded in Section 9.

    In this section, we provide some relevant fundamental information, such as IFS, q-ROFS, q-ROHFS, RS and q-ROFRS, and some related operational laws, which are listed below. These core concepts will assist readers in comprehending the proposed framework.

    Definition 1. [14] Let $U = \{\varpropto_{1}, \varpropto_{2}, \varpropto_{3}, ..., \varpropto_{n}\}$ be a universal set. An IFS $\digamma$ over $U$ is described by

    \begin{equation*} \digamma = \{\langle \imath, \beta_{\digamma}(\imath), \psi_{\digamma}(\imath)\rangle \mid \imath \in U\}. \end{equation*}

    For each $\imath \in U$, the functions $\beta_{\digamma}: U \rightarrow [0, 1]$ and $\psi_{\digamma}: U \rightarrow [0, 1]$ symbolize the membership grade (MG) and non-membership grade (NMG), which must fulfill the property $0 \leq \beta_{\digamma}(\imath) + \psi_{\digamma}(\imath) \leq 1$.

    Definition 2. [18] Let $U = \{\varpropto_{1}, \varpropto_{2}, \varpropto_{3}, ..., \varpropto_{n}\}$ be a universal set. A q-ROFS $F$ over $U$ is defined by

    \begin{equation*} F = \{\langle \imath, \beta_{F}(\imath), \psi_{F}(\imath)\rangle \mid \imath \in U\}. \end{equation*}

    For each $\imath \in U$, the functions $\beta_{F}: U \rightarrow [0, 1]$ and $\psi_{F}: U \rightarrow [0, 1]$ signify the MG and NMG, which must satisfy the property that $(\psi_{F}(\imath))^{q} + (\beta_{F}(\imath))^{q} \leq 1$ (where $q > 2$, $q \in \mathbb{Z}$). Figure 2 depicts a diagrammatic portrayal of an IFS and q-ROFSs ($q$ = 1-5).

    Figure 2.  Graphical depiction of IFS and q-ROFSs (q = 1-5).

    Definition 3. [58] Let $U = \{\varpropto_{1}, \varpropto_{2}, \varpropto_{3}, ..., \varpropto_{n}\}$ be a universal set. A q-ROHFS $F$ is defined by

    \begin{equation*} F = \{\langle \imath, \beta_{h_{F}}(\imath), \psi_{h_{F}}(\imath)\rangle \mid \imath \in U\}, \end{equation*}

    where $\beta_{h_{F}}(\imath)$ and $\psi_{h_{F}}(\imath)$ are sets of some values in $[0, 1]$ symbolizing the MG and NMG, respectively, which must satisfy the following properties: $\forall \imath \in U$, $\varpi_{F}(\imath) \in \beta_{h_{F}}(\imath)$ and $\nu_{F}(\imath) \in \psi_{h_{F}}(\imath)$ with $(\max(\beta_{h_{F}}(\imath)))^{q} + (\max(\psi_{h_{F}}(\imath)))^{q} \leq 1$. For simplicity, we use a pair $F = (\beta_{h_{F}}, \psi_{h_{F}})$ to denote a q-ROHF number (q-ROHFN).
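    A minimal Python sketch of the q-ROHFS constraint of Definition 3 (the function name and the default $q$ are ours; the sample value is taken from Example 1 below):

def is_valid_qrohfn(beta_h, psi_h, q=3):
    """Check (max hesitant membership)^q + (max hesitant non-membership)^q <= 1."""
    return max(beta_h) ** q + max(psi_h) ** q <= 1

# ({0.5, 0.7, 0.8}, {0.1, 0.5, 0.7}) with q = 3: 0.512 + 0.343 <= 1 -> True
print(is_valid_qrohfn([0.5, 0.7, 0.8], [0.1, 0.5, 0.7], q=3))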

    Definition 4. [58] Let $F_{1} = (\beta_{h_{F_{1}}}, \psi_{h_{F_{1}}})$ and $F_{2} = (\beta_{h_{F_{2}}}, \psi_{h_{F_{2}}})$ be two q-ROHFNs. The fundamental set-theoretic operations are as follows:

    (1) $F_{1} \oplus F_{2} = \bigcup\limits_{\mu_{1} \in \beta_{h_{F_{1}}}, \mu_{2} \in \beta_{h_{F_{2}}}, \nu_{1} \in \psi_{h_{F_{1}}}, \nu_{2} \in \psi_{h_{F_{2}}}} \left\{ \left( (\mu_{1})^{q} + (\mu_{2})^{q} - (\mu_{1})^{q}(\mu_{2})^{q} \right)^{\frac{1}{q}}, (\nu_{1}\nu_{2}) \right\}$;

    (2) $F_{1} \otimes F_{2} = \bigcup\limits_{\mu_{1} \in \beta_{h_{F_{1}}}, \mu_{2} \in \beta_{h_{F_{2}}}, \nu_{1} \in \psi_{h_{F_{1}}}, \nu_{2} \in \psi_{h_{F_{2}}}} \left\{ (\mu_{1}\mu_{2}), \left( (\nu_{1})^{q} + (\nu_{2})^{q} - (\nu_{1})^{q}(\nu_{2})^{q} \right)^{\frac{1}{q}} \right\}$;

    (3) $\gamma F_{1} = \bigcup\limits_{\mu_{1} \in \beta_{h_{F_{1}}}, \nu_{1} \in \psi_{h_{F_{1}}}} \left\{ \left( 1 - (1 - \mu_{1}^{q})^{\gamma} \right)^{\frac{1}{q}}, (\nu_{1})^{\gamma} \right\}$;

    (4) $(F_{1})^{\gamma} = \bigcup\limits_{\mu_{1} \in \beta_{h_{F_{1}}}, \nu_{1} \in \psi_{h_{F_{1}}}} \left\{ (\mu_{1})^{\gamma}, \left( 1 - (1 - \nu_{1}^{q})^{\gamma} \right)^{\frac{1}{q}} \right\}$;

    (5) $F_{1}^{c} = \{\psi_{h_{F_{1}}}, \beta_{h_{F_{1}}}\}$.

    Definition 5. Let $F_{1} = (\beta_{h_{F_{1}}}, \psi_{h_{F_{1}}})$ and $F_{2} = (\beta_{h_{F_{2}}}, \psi_{h_{F_{2}}})$ be two q-ROHFNs, $q > 2$, and $\gamma > 0$ be any real number. The fundamental operations based on the Einstein t-norm and t-conorm are described as follows:

    (1) $F_{1} \oplus F_{2} = \left\{ \bigcup\limits_{\mu_{1} \in \beta_{h_{F_{1}}}, \mu_{2} \in \beta_{h_{F_{2}}}} \left\{ \sqrt[q]{\dfrac{\mu_{1}^{q} + \mu_{2}^{q}}{1 + \mu_{1}^{q}\mu_{2}^{q}}} \right\}, \bigcup\limits_{\nu_{1} \in \psi_{h_{F_{1}}}, \nu_{2} \in \psi_{h_{F_{2}}}} \left\{ \dfrac{\nu_{1}\nu_{2}}{\sqrt[q]{1 + (1 - \nu_{1}^{q})(1 - \nu_{2}^{q})}} \right\} \right\}$;

    (2) $F_{1} \otimes F_{2} = \left\{ \bigcup\limits_{\mu_{1} \in \beta_{h_{F_{1}}}, \mu_{2} \in \beta_{h_{F_{2}}}} \left\{ \dfrac{\mu_{1}\mu_{2}}{\sqrt[q]{1 + (1 - \mu_{1}^{q})(1 - \mu_{2}^{q})}} \right\}, \bigcup\limits_{\nu_{1} \in \psi_{h_{F_{1}}}, \nu_{2} \in \psi_{h_{F_{2}}}} \left\{ \sqrt[q]{\dfrac{\nu_{1}^{q} + \nu_{2}^{q}}{1 + \nu_{1}^{q}\nu_{2}^{q}}} \right\} \right\}$;

    (3) $F_{1}^{\gamma} = \left\{ \bigcup\limits_{\mu_{1} \in \beta_{h_{F_{1}}}} \left\{ \sqrt[q]{\dfrac{2(\mu_{1}^{q})^{\gamma}}{(2 - \mu_{1}^{q})^{\gamma} + (\mu_{1}^{q})^{\gamma}}} \right\}, \bigcup\limits_{\nu_{1} \in \psi_{h_{F_{1}}}} \left\{ \sqrt[q]{\dfrac{(1 + \nu_{1}^{q})^{\gamma} - (1 - \nu_{1}^{q})^{\gamma}}{(1 + \nu_{1}^{q})^{\gamma} + (1 - \nu_{1}^{q})^{\gamma}}} \right\} \right\}$;

    (4) $\gamma.F_{1} = \left\{ \bigcup\limits_{\mu_{1} \in \beta_{h_{F_{1}}}} \left\{ \sqrt[q]{\dfrac{(1 + \mu_{1}^{q})^{\gamma} - (1 - \mu_{1}^{q})^{\gamma}}{(1 + \mu_{1}^{q})^{\gamma} + (1 - \mu_{1}^{q})^{\gamma}}} \right\}, \bigcup\limits_{\nu_{1} \in \psi_{h_{F_{1}}}} \left\{ \sqrt[q]{\dfrac{2(\nu_{1}^{q})^{\gamma}}{(2 - \nu_{1}^{q})^{\gamma} + (\nu_{1}^{q})^{\gamma}}} \right\} \right\}$.
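    The Einstein sum and scalar multiple above can be scripted directly. The following Python sketch (function names, rounding, and the sample values are ours) implements items (1) and (4) of Definition 5, realizing the hesitant union by pairing every element of one hesitant set with every element of the other.

from itertools import product

def einstein_sum(F1, F2, q=3):
    """Einstein sum of two q-ROHFNs F = (beta_h, psi_h), Definition 5(1)."""
    (b1, p1), (b2, p2) = F1, F2
    beta = {round(((m1**q + m2**q) / (1 + m1**q * m2**q)) ** (1/q), 4)
            for m1, m2 in product(b1, b2)}
    psi = {round((n1 * n2) / ((1 + (1 - n1**q) * (1 - n2**q)) ** (1/q)), 4)
           for n1, n2 in product(p1, p2)}
    return sorted(beta), sorted(psi)

def einstein_scalar(gamma, F, q=3):
    """Scalar multiple gamma.F of a q-ROHFN, Definition 5(4)."""
    b, p = F
    beta = {round((((1 + m**q)**gamma - (1 - m**q)**gamma) /
                   ((1 + m**q)**gamma + (1 - m**q)**gamma)) ** (1/q), 4) for m in b}
    psi = {round((2 * (n**q)**gamma / ((2 - n**q)**gamma + (n**q)**gamma)) ** (1/q), 4)
           for n in p}
    return sorted(beta), sorted(psi)

F1 = ([0.2, 0.5], [0.3])          # illustrative q-ROHFNs (not taken from the paper)
F2 = ([0.4], [0.6, 0.7])
print(einstein_sum(F1, F2))
print(einstein_scalar(0.5, F1))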

    Definition 6. Let $U = \{\varpropto_{1}, \varpropto_{2}, \varpropto_{3}, ..., \varpropto_{n}\}$ be a universal set and $\beth \subseteq U \times U$ be a (crisp) relation. Then,

    (1) $\beth$ is reflexive iff $(\imath, \imath) \in \beth$ for all $\imath \in U$;

    (2) $\beth$ is symmetric if $(\imath, \jmath) \in \beth$ implies $(\jmath, \imath) \in \beth$ for all $\imath, \jmath \in U$;

    (3) $\beth$ is transitive if $(\imath, \jmath) \in \beth$ and $(\jmath, k) \in \beth$ imply $(\imath, k) \in \beth$ for all $\imath, \jmath, k \in U$.

    Definition 7. [32] Let $U = \{\varpropto_{1}, \varpropto_{2}, \varpropto_{3}, ..., \varpropto_{n}\}$ be a universal set and $\beth$ be any relation on $U$. Define $\beth: U \rightarrow M(U)$ by $\beth(\varpropto) = \{\varphi \in U \mid (\varpropto, \varphi) \in \beth\}$ for $\varpropto \in U$, where $\beth(\varpropto)$ is known as the successor neighborhood of the element $\varpropto$ with respect to the relation $\beth$. The pair $(U, \beth)$ is called a (crisp) approximation space. Now, for any set $\pounds \subseteq U$, the lower approximation (LA) and upper approximation (UA) of $\pounds$ with respect to the approximation space $(U, \beth)$ are defined as

    \begin{equation*} \underline{\beth}(\pounds) = \{\varpropto \in U \mid \beth(\varpropto) \subseteq \pounds\}, \qquad \overline{\beth}(\pounds) = \{\varpropto \in U \mid \beth(\varpropto) \cap \pounds \neq \phi\}. \end{equation*}

    The pair $(\underline{\beth}(\pounds), \overline{\beth}(\pounds))$ is called a rough set if $\underline{\beth}(\pounds) \neq \overline{\beth}(\pounds)$, and both $\underline{\beth}, \overline{\beth}: M(U) \rightarrow M(U)$ are the LA and UA operators.
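    A minimal Python sketch of Definition 7 follows (the universe, relation, and target set are illustrative values of ours, not taken from the paper):

def successor(relation, x):
    """Successor neighborhood of x under a crisp relation given as a set of pairs."""
    return {phi for (a, phi) in relation if a == x}

def rough_approximations(universe, relation, subset):
    """Lower and upper approximations of Definition 7."""
    lower = {x for x in universe if successor(relation, x) <= subset}
    upper = {x for x in universe if successor(relation, x) & subset}
    return lower, upper

U = {1, 2, 3, 4}
R = {(1, 1), (1, 2), (2, 2), (3, 3), (3, 4), (4, 4)}
L = {1, 2}
print(rough_approximations(U, R, L))   # lower = {1, 2}, upper = {1, 2}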

    Definition 8. Let $U = \{\varpropto_{1}, \varpropto_{2}, \varpropto_{3}, ..., \varpropto_{n}\}$ be a universal set and $\beth \in IFS(U \times U)$ be an IF relation. Then,

    (1) $\beth$ is reflexive if $\mu_{\beth}(\imath, \imath) = 1$ and $\nu_{\beth}(\imath, \imath) = 0$, $\forall \imath \in U$;

    (2) $\beth$ is symmetric if $\forall (\imath, \jmath) \in U \times U$, $\mu_{\beth}(\imath, \jmath) = \mu_{\beth}(\jmath, \imath)$ and $\nu_{\beth}(\imath, \jmath) = \nu_{\beth}(\jmath, \imath)$;

    (3) $\beth$ is transitive if $\forall (\imath, \kappa) \in U \times U$,

    \begin{equation*} \mu_{\beth}(\imath, \kappa) \geq \bigvee\limits_{\jmath \in U} \left[ \mu_{\beth}(\imath, \jmath) \wedge \mu_{\beth}(\jmath, \kappa) \right] \end{equation*}

    and

    \begin{equation*} \nu_{\beth}(\imath, \kappa) \leq \bigwedge\limits_{\jmath \in U} \left[ \nu_{\beth}(\imath, \jmath) \vee \nu_{\beth}(\jmath, \kappa) \right]. \end{equation*}

    Definition 9. [18] Let $U = \{\varpropto_{1}, \varpropto_{2}, \varpropto_{3}, ..., \varpropto_{n}\}$ be a universal set. Then, any $\beth \in q\text{-}RFS(U \times U)$ is called a q-rung relation. The pair $(U, \beth)$ is said to be a q-rung approximation space. Now, for any $T \in q\text{-}RFS(U)$, the LA and UA of $T$ with respect to the q-RF approximation space $(U, \beth)$ are two q-RFSs, which are represented by $\overline{\beth}(T)$ and $\underline{\beth}(T)$ and characterized as

    \begin{equation*} \overline{\beth}(T) = \{\langle \imath, \mu_{\overline{\beth}(T)}(\imath), \nu_{\overline{\beth}(T)}(\imath)\rangle \mid \imath \in U\}, \qquad \underline{\beth}(T) = \{\langle \imath, \mu_{\underline{\beth}(T)}(\imath), \nu_{\underline{\beth}(T)}(\imath)\rangle \mid \imath \in U\}, \end{equation*}

    where

    \begin{eqnarray*} \mu_{\overline{\beth}(T)}(\imath) & = & \bigvee\limits_{g \in U} \left[ \mu_{\beth}(\imath, g) \vee \mu_{T}(g) \right], \qquad \nu_{\overline{\beth}(T)}(\imath) = \bigwedge\limits_{g \in U} \left[ \nu_{\beth}(\imath, g) \wedge \nu_{T}(g) \right], \\ \mu_{\underline{\beth}(T)}(\imath) & = & \bigwedge\limits_{g \in U} \left[ \mu_{\beth}(\imath, g) \wedge \mu_{T}(g) \right], \qquad \nu_{\underline{\beth}(T)}(\imath) = \bigvee\limits_{g \in U} \left[ \nu_{\beth}(\imath, g) \vee \nu_{T}(g) \right], \end{eqnarray*}

    such that $0 \leq (\mu_{\overline{\beth}(T)}(\imath))^{q} + (\nu_{\overline{\beth}(T)}(\imath))^{q} \leq 1$ and $0 \leq (\mu_{\underline{\beth}(T)}(\imath))^{q} + (\nu_{\underline{\beth}(T)}(\imath))^{q} \leq 1$. As $(\underline{\beth}(T), \overline{\beth}(T))$ are q-RFSs, $\underline{\beth}, \overline{\beth}: q\text{-}RFS(U) \rightarrow q\text{-}RFS(U)$ are the LA and UA operators. The pair $\beth(T) = (\underline{\beth}(T), \overline{\beth}(T)) = \{\langle \imath, (\mu_{\underline{\beth}(T)}(\imath), \nu_{\underline{\beth}(T)}(\imath)), (\mu_{\overline{\beth}(T)}(\imath), \nu_{\overline{\beth}(T)}(\imath))\rangle \mid \imath \in U\}$ is known as a q-ROFRS. For simplicity, it is represented as $\beth(T) = ((\underline{\mu}, \underline{\nu}), (\overline{\mu}, \overline{\nu}))$ and is referred to as a q-ROFR value.

    This section introduces the concept of the q-ROHFRS, which is a hybrid structure of the rough set and the q-ROHFS. In addition, we initiate new score and accuracy functions to assess q-ROHFR values and provide their key operational rules.

    Definition 10. Let $U = \{\varpropto_{1}, \varpropto_{2}, \varpropto_{3}, ..., \varpropto_{n}\}$ be a universal set. Then, any subset $\beth \in q\text{-}ROHFS(U \times U)$ is said to be a q-ROHF relation. The pair $(U, \beth)$ is said to be a q-ROHF approximation space. For any $T \in q\text{-}ROHFS(U)$, the LA and UA of $T$ with respect to the q-ROHF approximation space $(U, \beth)$ are two q-ROHFSs, which are represented by $\overline{\beth}(T)$ and $\underline{\beth}(T)$ and characterized as

    \begin{equation*} \overline{\beth}(T) = \{\langle \imath, \beta_{h_{\overline{\beth}(T)}}(\imath), \psi_{h_{\overline{\beth}(T)}}(\imath)\rangle \mid \imath \in U\}, \qquad \underline{\beth}(T) = \{\langle \imath, \beta_{h_{\underline{\beth}(T)}}(\imath), \psi_{h_{\underline{\beth}(T)}}(\imath)\rangle \mid \imath \in U\}, \end{equation*}

    where

    \begin{eqnarray*} \beta_{h_{\overline{\beth}(T)}}(\imath) & = & \bigvee\limits_{k \in U} \left[ \beta_{h_{\beth}}(\imath, k) \vee \beta_{h_{T}}(k) \right], \qquad \psi_{h_{\overline{\beth}(T)}}(\imath) = \bigwedge\limits_{k \in U} \left[ \psi_{h_{\beth}}(\imath, k) \wedge \psi_{h_{T}}(k) \right], \\ \beta_{h_{\underline{\beth}(T)}}(\imath) & = & \bigwedge\limits_{k \in U} \left[ \beta_{h_{\beth}}(\imath, k) \wedge \beta_{h_{T}}(k) \right], \qquad \psi_{h_{\underline{\beth}(T)}}(\imath) = \bigvee\limits_{k \in U} \left[ \psi_{h_{\beth}}(\imath, k) \vee \psi_{h_{T}}(k) \right], \end{eqnarray*}

    such that $0 \leq (\max(\beta_{h_{\overline{\beth}(T)}}(\imath)))^{q} + (\min(\psi_{h_{\overline{\beth}(T)}}(\imath)))^{q} \leq 1$ and $0 \leq (\min(\beta_{h_{\underline{\beth}(T)}}(\imath)))^{q} + (\max(\psi_{h_{\underline{\beth}(T)}}(\imath)))^{q} \leq 1$. As $(\overline{\beth}(T), \underline{\beth}(T))$ are q-ROHFSs, $\underline{\beth}, \overline{\beth}: q\text{-}ROHFS(U) \rightarrow q\text{-}ROHFS(U)$ are the LA and UA operators. The pair

    \begin{equation*} \beth(T) = (\underline{\beth}(T), \overline{\beth}(T)) = \{\langle \imath, (\beta_{h_{\underline{\beth}(T)}}(\imath), \psi_{h_{\underline{\beth}(T)}}(\imath)), (\beta_{h_{\overline{\beth}(T)}}(\imath), \psi_{h_{\overline{\beth}(T)}}(\imath))\rangle \mid \imath \in U\} \end{equation*}

    will be referred to as a q-ROHFRS. For simplicity, this pair is symbolized as $\beth(T) = ((\underline{\beta}, \underline{\psi}), (\overline{\beta}, \overline{\psi}))$ and is referred to as a q-ROHFR value. To explain the above concept of a q-ROHFRS, we present the following example.

    Example 1. Let $U = \{\varpropto_{1}, \varpropto_{2}, \varpropto_{3}, \varpropto_{4}\}$ be an arbitrary set and $(U, \beth)$ a q-ROHF approximation space, where $\beth \in q\text{-}ROHFS(U \times U)$ is the q-ROHF relation displayed in Table 1 and the value of $q$ is 3. A decision expert provides the decision evaluation in the form of a q-ROHFS as follows:

    Table 1.  The q-ROHFR relation $\beth$ on $U$.
    $\beth$ $c_{1}$ $c_{2}$ $c_{3}$ $c_{4}$
    $\varpropto_{1}$ ({0.1,0.3,0.4},{0.2,0.5,0.7}) ({0.2,0.3},{0.7,0.9}) ({0.2,0.5,0.7},{0.2,0.3}) ({0.3,0.5},{0.8})
    $\varpropto_{2}$ ({0.2,0.3,0.5},{0.2,0.7}) ({0.2,0.3,0.5},{0.3,0.4}) ({0.1,0.4,0.6},{0.7,0.9}) ({0.2,0.4},{0.7})
    $\varpropto_{3}$ ({0.5,0.6},{0.7,0.9}) ({0.5,0.8,0.9},{0.1,0.9}) ({0.2,0.3},{0.5,0.9}) ({0.7,0.9},{0.1,0.2,0.3})
    $\varpropto_{4}$ ({0.2,0.5,0.9},{0.6,0.7,0.9}) ({0.3,0.8,0.9},{0.4,0.8}) ({0.2,0.5},{0.6,0.9}) ({0.5,0.7},{0.1,0.8})


    and

    \begin{equation*} T = \left\{ \langle \varpropto_{1}, \{0.2, 0.3, 0.4\}, \{0.5, 0.7\} \rangle, \langle \varpropto_{2}, \{0.2, 0.3, 0.7\}, \{0.1, 0.7, 0.8\} \rangle, \langle \varpropto_{3}, \{0.5, 0.7, 0.8\}, \{0.1, 0.5, 0.7\} \rangle, \langle \varpropto_{4}, \{0.6, 0.8, 0.9\}, \{0.2, 0.6, 0.7\} \rangle \right\}. \end{equation*}

    It follows that

    \begin{eqnarray*} \beta_{h_{\overline{\beth}(T)}}(\varpropto_{1}) & = & \bigvee\limits_{k \in U} \left[ \beta_{h_{\beth}}(\varpropto_{1}, k) \vee \beta_{h_{T}}(k) \right] \\ & = & \bigvee \left\{ \begin{array}{c} \{0.1 \vee 0.2, 0.3 \vee 0.3, 0.4 \vee 0.4\}, \{0.2 \vee 0.2, 0.3 \vee 0.3, 0 \vee 0.7\}, \\ \{0.2 \vee 0.5, 0.5 \vee 0.7, 0.7 \vee 0.8\}, \{0.3 \vee 0.6, 0.5 \vee 0.8, 0 \vee 0.9\} \end{array} \right\} \\ & = & \bigvee \left\{ \{0.2, 0.3, 0.4\}, \{0.2, 0.3, 0.7\}, \{0.5, 0.7, 0.8\}, \{0.6, 0.8, 0.9\} \right\} \\ & = & \{0.6, 0.8, 0.9\}. \end{eqnarray*}

    The other values are determined in a similar manner as follows:

    \begin{equation*} \beta_{h_{\overline{\beth}(T)}}(\varpropto_{2}) = \{0.6, 0.8, 0.9\}, \quad \beta_{h_{\overline{\beth}(T)}}(\varpropto_{3}) = \{0.7, 0.9\}, \quad \beta_{h_{\overline{\beth}(T)}}(\varpropto_{4}) = \{0.6, 0.8, 0.9\}. \end{equation*}

    Similarly,

    \begin{eqnarray*} \psi_{h_{\overline{\beth}(T)}}(\varpropto_{1}) & = & \bigwedge\limits_{k \in U} \left[ \psi_{h_{\beth}}(\varpropto_{1}, k) \wedge \psi_{h_{T}}(k) \right] \\ & = & \bigwedge \left\{ \begin{array}{c} \{0.2 \wedge 0.5, 0.5 \wedge 0.7, 0 \wedge 0.7\}, \{0.7 \wedge 0.1, 0.9 \wedge 0.7, 0 \wedge 0.8\}, \\ \{0.2 \wedge 0.1, 0.3 \wedge 0.5, 0 \wedge 0.7\}, \{0.8 \wedge 0.2, 0 \wedge 0.6, 0 \wedge 0.7\} \end{array} \right\} \\ & = & \bigwedge \left\{ \{0.2, 0.5\}, \{0.1, 0.7\}, \{0.2, 0.3\}, \{0.2\} \right\} \\ & = & \{0.2\}. \end{eqnarray*}

    By routine calculations, we get

    \begin{equation*} \psi_{h_{\overline{\beth}(T)}}(\varpropto_{2}) = \{0.1\}, \quad \psi_{h_{\overline{\beth}(T)}}(\varpropto_{3}) = \{0.1, 0.2\}, \quad \psi_{h_{\overline{\beth}(T)}}(\varpropto_{4}) = \{0.1, 0.5\}. \end{equation*}

    Further,

    \begin{eqnarray*} \beta_{h_{\underline{\beth}(T)}}(\varpropto_{1}) & = & \bigwedge\limits_{k \in U} \left[ \beta_{h_{\beth}}(\varpropto_{1}, k) \wedge \beta_{h_{T}}(k) \right] \\ & = & \bigwedge \left\{ \begin{array}{c} \{0.1 \wedge 0.2, 0.3 \wedge 0.3, 0.4 \wedge 0.4\}, \{0.2 \wedge 0.2, 0.3 \wedge 0.3, 0 \wedge 0.7\}, \\ \{0.2 \wedge 0.5, 0.5 \wedge 0.7, 0.7 \wedge 0.8\}, \{0.3 \wedge 0.6, 0.5 \wedge 0.8, 0 \wedge 0.9\} \end{array} \right\} \\ & = & \bigwedge \left\{ \{0.1, 0.3, 0.4\}, \{0.2, 0.3\}, \{0.2, 0.5, 0.7\}, \{0.3, 0.5\} \right\} \\ & = & \{0.1, 0.3\}. \end{eqnarray*}

    By routine calculations, we get

    \begin{equation*} \beta_{h_{\underline{\beth}(T)}}(\varpropto_{2}) = \{0.1, 0.3\}, \quad \beta_{h_{\underline{\beth}(T)}}(\varpropto_{3}) = \{0.2, 0.3\}, \quad \beta_{h_{\underline{\beth}(T)}}(\varpropto_{4}) = \{0.2, 0.3\}. \end{equation*}

    Now,

    \begin{eqnarray*} \psi_{h_{\underline{\beth}(T)}}(\varpropto_{1}) & = & \bigvee\limits_{k \in U} \left[ \psi_{h_{\beth}}(\varpropto_{1}, k) \vee \psi_{h_{T}}(k) \right] \\ & = & \bigvee \left\{ \begin{array}{c} \{0.2 \vee 0.5, 0.5 \vee 0.7, 0.7 \vee 0\}, \{0.7 \vee 0.2, 0.9 \vee 0.3, 0 \vee 0.7\}, \\ \{0.2 \vee 0.1, 0.3 \vee 0.5, 0 \vee 0.7\}, \{0.8 \vee 0.2, 0 \vee 0.6, 0 \vee 0.7\} \end{array} \right\} \\ & = & \bigvee \left\{ \{0.5, 0.7, 0.7\}, \{0.7, 0.9, 0.7\}, \{0.2, 0.5, 0.7\}, \{0.8, 0.6, 0.7\} \right\} \\ & = & \{0.8, 0.9, 0.7\}. \end{eqnarray*}

    Following the same procedure, we obtain the other values,

    \begin{equation*} \psi_{h_{\underline{\beth}(T)}}(\varpropto_{2}) = \{0.7, 0.9\}, \quad \psi_{h_{\underline{\beth}(T)}}(\varpropto_{3}) = \{0.7, 0.9, 0.8\}, \quad \psi_{h_{\underline{\beth}(T)}}(\varpropto_{4}) = \{0.6, 0.9, 0.9\}. \end{equation*}

    Thus, the LA and UA operators based on the q-ROHFR are presented as follows:

    \begin{eqnarray*} \underline{\beth}(T) & = & \left\{ \langle \varpropto_{1}, \{0.1, 0.3\}, \{0.8, 0.9, 0.7\} \rangle, \langle \varpropto_{2}, \{0.1, 0.3\}, \{0.7, 0.9\} \rangle, \langle \varpropto_{3}, \{0.2, 0.3\}, \{0.7, 0.9, 0.8\} \rangle, \langle \varpropto_{4}, \{0.2, 0.3\}, \{0.6, 0.9, 0.9\} \rangle \right\}, \\ \overline{\beth}(T) & = & \left\{ \langle \varpropto_{1}, \{0.6, 0.8, 0.9\}, \{0.2\} \rangle, \langle \varpropto_{2}, \{0.6, 0.8, 0.9\}, \{0.1\} \rangle, \langle \varpropto_{3}, \{0.7, 0.9, 0.9\}, \{0.1, 0.2\} \rangle, \langle \varpropto_{4}, \{0.6, 0.8, 0.9\}, \{0.1, 0.5\} \rangle \right\}. \end{eqnarray*}

    Hence,

    \begin{equation*} \beth(T) = (\underline{\beth}(T), \overline{\beth}(T)) = \left\{ \begin{array}{c} \langle \varpropto_{1}, (\{0.1, 0.3\}, \{0.8, 0.9, 0.7\}), (\{0.6, 0.8, 0.9\}, \{0.2\}) \rangle, \\ \langle \varpropto_{2}, (\{0.1, 0.3\}, \{0.7, 0.9\}), (\{0.6, 0.8, 0.9\}, \{0.1\}) \rangle, \\ \langle \varpropto_{3}, (\{0.2, 0.3\}, \{0.7, 0.9, 0.8\}), (\{0.7, 0.9, 0.9\}, \{0.1, 0.2\}) \rangle, \\ \langle \varpropto_{4}, (\{0.2, 0.3\}, \{0.6, 0.9, 0.9\}), (\{0.6, 0.8, 0.9\}, \{0.1, 0.5\}) \rangle \end{array} \right\}. \end{equation*}
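    The construction of Definition 10 can also be scripted. The Python sketch below is ours: it assumes hesitant sets are sorted, padded with zeros to equal length, and combined element-wise (the convention the worked example above appears to follow); the toy universe and relation at the end are illustrative, not the data of Table 1.

def pad(values, length, fill=0.0):
    """Sort a hesitant set and pad it with `fill` so element-wise ops line up."""
    v = sorted(values)
    return v + [fill] * (length - len(v))

def elementwise(op, a, b):
    n = max(len(a), len(b))
    return [op(x, y) for x, y in zip(pad(a, n), pad(b, n))]

def reduce_sets(op, sets):
    out = sets[0]
    for s in sets[1:]:
        out = elementwise(op, out, s)
    return out

def qrohfr_approximations(universe, rel_beta, rel_psi, t_beta, t_psi):
    """Lower and upper q-ROHF rough approximations (Definition 10).
    rel_beta[i][k] / rel_psi[i][k]: hesitant MG / NMG of the relation at (i, k);
    t_beta[k] / t_psi[k]: hesitant MG / NMG of the q-ROHFS T at k."""
    lower, upper = {}, {}
    for i in universe:
        upper[i] = (reduce_sets(max, [elementwise(max, rel_beta[i][k], t_beta[k]) for k in universe]),
                    reduce_sets(min, [elementwise(min, rel_psi[i][k], t_psi[k]) for k in universe]))
        lower[i] = (reduce_sets(min, [elementwise(min, rel_beta[i][k], t_beta[k]) for k in universe]),
                    reduce_sets(max, [elementwise(max, rel_psi[i][k], t_psi[k]) for k in universe]))
    return lower, upper

# Illustrative two-element universe (hypothetical data).
U = [0, 1]
rel_b = {0: {0: [0.3, 0.5], 1: [0.2]}, 1: {0: [0.4], 1: [0.6, 0.7]}}
rel_p = {0: {0: [0.4], 1: [0.6, 0.8]}, 1: {0: [0.5, 0.6], 1: [0.2]}}
t_b = {0: [0.5, 0.6], 1: [0.3]}
t_p = {0: [0.2], 1: [0.4, 0.7]}
print(qrohfr_approximations(U, rel_b, rel_p, t_b, t_p))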

    Definition 11. Let $\beth(T_{1}) = (\underline{\beth}(T_{1}), \overline{\beth}(T_{1}))$ and $\beth(T_{2}) = (\underline{\beth}(T_{2}), \overline{\beth}(T_{2}))$ be two q-ROHFRSs. Then, we have the following:

    (1) $\beth(T_{1}) \cup \beth(T_{2}) = \{(\underline{\beth}(T_{1}) \cup \underline{\beth}(T_{2})), (\overline{\beth}(T_{1}) \cup \overline{\beth}(T_{2}))\}$;

    (2) $\beth(T_{1}) \cap \beth(T_{2}) = \{(\underline{\beth}(T_{1}) \cap \underline{\beth}(T_{2})), (\overline{\beth}(T_{1}) \cap \overline{\beth}(T_{2}))\}$;

    (3) $\beth(T_{1}) \oplus \beth(T_{2}) = \{(\underline{\beth}(T_{1}) \oplus \underline{\beth}(T_{2})), (\overline{\beth}(T_{1}) \oplus \overline{\beth}(T_{2}))\}$;

    (4) $\beth(T_{1}) \otimes \beth(T_{2}) = \{(\underline{\beth}(T_{1}) \otimes \underline{\beth}(T_{2})), (\overline{\beth}(T_{1}) \otimes \overline{\beth}(T_{2}))\}$;

    (5) $\beth(T_{1}) \subseteq \beth(T_{2})$ iff $\underline{\beth}(T_{1}) \subseteq \underline{\beth}(T_{2})$ and $\overline{\beth}(T_{1}) \subseteq \overline{\beth}(T_{2})$;

    (6) $\gamma\beth(T_{1}) = (\gamma\underline{\beth}(T_{1}), \gamma\overline{\beth}(T_{1}))$ for $\gamma \geq 1$;

    (7) $(\beth(T_{1}))^{\gamma} = ((\underline{\beth}(T_{1}))^{\gamma}, (\overline{\beth}(T_{1}))^{\gamma})$ for $\gamma \geq 1$;

    (8) $(\beth(T_{1}))^{c} = ((\underline{\beth}(T_{1}))^{c}, (\overline{\beth}(T_{1}))^{c})$, where $(\underline{\beth}(T_{1}))^{c}$ and $(\overline{\beth}(T_{1}))^{c}$ represent the complements of the q-rung fuzzy rough approximation operators $\underline{\beth}(T_{1})$ and $\overline{\beth}(T_{1})$, that is, $(\underline{\beth}(T_{1}))^{c} = (\psi_{h_{\underline{\beth}(T)}}, \beta_{h_{\underline{\beth}(T)}})$;

    (9) $\beth(T_{1}) = \beth(T_{2})$ iff $\underline{\beth}(T_{1}) = \underline{\beth}(T_{2})$ and $\overline{\beth}(T_{1}) = \overline{\beth}(T_{2})$.

    Definition 12. The score function for a q-ROHFR value $\beth(T) = (\underline{\beth}(T), \overline{\beth}(T)) = ((\underline{\beta}, \underline{\psi}), (\overline{\beta}, \overline{\psi}))$ is given as

    \begin{equation*} SR(\beth(T)) = \frac{1}{4}\left( 2 + \frac{1}{M_{F}}\sum\limits_{\underline{\mu_{\imath}} \in \beta_{h_{\underline{\beth}(T)}}} \underline{\mu_{\imath}} + \frac{1}{M_{F}}\sum\limits_{\overline{\mu_{\imath}} \in \beta_{h_{\overline{\beth}(T)}}} \overline{\mu_{\imath}} - \frac{1}{N_{F}}\sum\limits_{\underline{\nu_{\imath}} \in \psi_{h_{\underline{\beth}(T)}}} \underline{\nu_{\imath}} - \frac{1}{N_{F}}\sum\limits_{\overline{\nu_{\imath}} \in \psi_{h_{\overline{\beth}(T)}}} \overline{\nu_{\imath}} \right). \end{equation*}

    The accuracy function for a q-ROHFR value $\beth(T) = (\underline{\beth}(T), \overline{\beth}(T)) = ((\underline{\beta}, \underline{\psi}), (\overline{\beta}, \overline{\psi}))$ is defined by

    \begin{equation*} AC(\beth(T)) = \frac{1}{4}\left( \frac{1}{M_{F}}\sum\limits_{\underline{\mu_{\imath}} \in \beta_{h_{\underline{\beth}(T)}}} \underline{\mu_{\imath}} + \frac{1}{M_{F}}\sum\limits_{\overline{\mu_{\imath}} \in \beta_{h_{\overline{\beth}(T)}}} \overline{\mu_{\imath}} + \frac{1}{N_{F}}\sum\limits_{\underline{\nu_{\imath}} \in \psi_{h_{\underline{\beth}(T)}}} \underline{\nu_{\imath}} + \frac{1}{N_{F}}\sum\limits_{\overline{\nu_{\imath}} \in \psi_{h_{\overline{\beth}(T)}}} \overline{\nu_{\imath}} \right), \end{equation*}

    where $M_{F}$ and $N_{F}$ indicate the numbers of elements in the hesitant sets $\beta_{h}$ and $\psi_{h}$, respectively. We will use the score function to compare and rank two or more q-ROHFR values. The q-ROHFR value with the higher score is considered greater, while the one with the lower score is considered smaller. If the score values are equal, the accuracy function will be employed: the q-ROHFR value with the higher accuracy is considered greater, whereas the one with lower accuracy is considered smaller.
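    A small Python sketch of the score and accuracy comparison follows (ours; it reads the $1/M_{F}$ and $1/N_{F}$ factors as averaging each hesitant set separately, and the sample value is the first q-ROHFR value of Example 1):

def score(lower, upper):
    """Score of a q-ROHFR value ((beta_low, psi_low), (beta_up, psi_up)), Definition 12."""
    (b_lo, p_lo), (b_up, p_up) = lower, upper
    avg = lambda s: sum(s) / len(s)
    return 0.25 * (2 + avg(b_lo) + avg(b_up) - avg(p_lo) - avg(p_up))

def accuracy(lower, upper):
    """Accuracy of a q-ROHFR value, Definition 12."""
    (b_lo, p_lo), (b_up, p_up) = lower, upper
    avg = lambda s: sum(s) / len(s)
    return 0.25 * (avg(b_lo) + avg(b_up) + avg(p_lo) + avg(p_up))

val = (([0.1, 0.3], [0.8, 0.9, 0.7]), ([0.6, 0.8, 0.9], [0.2]))
print(score(*val), accuracy(*val))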

    Definition 13. Suppose $\beth(T_{1}) = (\underline{\beth}(T_{1}), \overline{\beth}(T_{1}))$ and $\beth(T_{2}) = (\underline{\beth}(T_{2}), \overline{\beth}(T_{2}))$ are two q-ROHFR values. Then, we have the following:

    (1) If $SR(\beth(T_{1})) > SR(\beth(T_{2}))$, then $\beth(T_{1}) > \beth(T_{2})$.

    (2) If $SR(\beth(T_{1})) < SR(\beth(T_{2}))$, then $\beth(T_{1}) < \beth(T_{2})$.

    (3) If $SR(\beth(T_{1})) = SR(\beth(T_{2}))$, then we compare the accuracy functions, and there are three possibilities:

    (a) If $AC(\beth(T_{1})) > AC(\beth(T_{2}))$, then $\beth(T_{1}) > \beth(T_{2})$.

    (b) If $AC(\beth(T_{1})) < AC(\beth(T_{2}))$, then $\beth(T_{1}) < \beth(T_{2})$.

    (c) If $AC(\beth(T_{1})) = AC(\beth(T_{2}))$, then $\beth(T_{1}) = \beth(T_{2})$.

    In this part, we introduce the new concept of q-ROHF rough AOps by integrating RSs with q-ROHF AOps to obtain the q-ROHFREWA, q-ROHFREOWA, and q-ROHFREHWA aggregation notions. In addition, certain important characteristics of these operators are explored.

    Definition 14. Let $\beth(\kappa_{t}) = (\underline{\beth}(\kappa_{t}), \overline{\beth}(\kappa_{t}))$ $(t = 1, 2, 3, ..., r)$ be a collection of q-ROHFR values. The q-ROHFREWA operator is designated as

    \begin{equation*} q\text{-}ROHFREWA(\beth(\kappa_{1}), \beth(\kappa_{2}), ..., \beth(\kappa_{r})) = \left( \bigoplus\limits_{t = 1}^{r}\gamma_{t}\underline{\beth}(\kappa_{t}), \bigoplus\limits_{t = 1}^{r}\gamma_{t}\overline{\beth}(\kappa_{t}) \right), \end{equation*}

    where $\gamma = (\gamma_{1}, \gamma_{2}, ..., \gamma_{r})^{T}$ is the weight vector such that $\sum_{t = 1}^{r}\gamma_{t} = 1$ and $0 \leq \gamma_{t} \leq 1$.

    Theorem 4.1. Let $\beth(\kappa_{t}) = (\underline{\beth}(\kappa_{t}), \overline{\beth}(\kappa_{t}))$ $(t = 1, 2, 3, ..., r)$ be a collection of q-ROHFR values. Then, the aggregated value obtained by the q-ROHFREWA operator is

    \begin{eqnarray*} && q\text{-}ROHFREWA(\beth(\kappa_{1}), \beth(\kappa_{2}), ..., \beth(\kappa_{r})) = \left( \bigoplus\limits_{t = 1}^{r}\gamma_{t}\underline{\beth}(\kappa_{t}), \bigoplus\limits_{t = 1}^{r}\gamma_{t}\overline{\beth}(\kappa_{t}) \right) \\ & = & \left( \begin{array}{c} \left\{ \begin{array}{c} \bigcup\limits_{\underline{\mu_{h_{t}}} \in \beta_{h_{\underline{\beth}(\kappa)}}} \left( \frac{\sqrt[q]{\prod_{t = 1}^{r}\left( 1 + \underline{\mu_{h_{t}}}^{q} \right)^{\gamma_{t}} - \prod_{t = 1}^{r}\left( 1 - \underline{\mu_{h_{t}}}^{q} \right)^{\gamma_{t}}}}{\sqrt[q]{\prod_{t = 1}^{r}\left( 1 + \underline{\mu_{h_{t}}}^{q} \right)^{\gamma_{t}} + \prod_{t = 1}^{r}\left( 1 - \underline{\mu_{h_{t}}}^{q} \right)^{\gamma_{t}}}} \right), \\ \bigcup\limits_{\underline{\nu_{h_{t}}} \in \psi_{h_{\underline{\beth}(\kappa)}}} \left( \frac{\sqrt[q]{2\prod_{t = 1}^{r}\left( \underline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}}}}{\sqrt[q]{\prod_{t = 1}^{r}\left( 2 - \underline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}} + \prod_{t = 1}^{r}\left( \underline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}}}} \right) \end{array} \right\}, \\ \left\{ \begin{array}{c} \bigcup\limits_{\overline{\mu_{h_{t}}} \in \beta_{h_{\overline{\beth}(\kappa)}}} \left( \frac{\sqrt[q]{\prod_{t = 1}^{r}\left( 1 + \overline{\mu_{h_{t}}^{q}} \right)^{\gamma_{t}} - \prod_{t = 1}^{r}\left( 1 - \overline{\mu_{h_{t}}^{q}} \right)^{\gamma_{t}}}}{\sqrt[q]{\prod_{t = 1}^{r}\left( 1 + \overline{\mu_{h_{t}}^{q}} \right)^{\gamma_{t}} + \prod_{t = 1}^{r}\left( 1 - \overline{\mu_{h_{t}}^{q}} \right)^{\gamma_{t}}}} \right), \\ \bigcup\limits_{\overline{\nu_{h_{t}}} \in \psi_{h_{\overline{\beth}(\kappa)}}} \left( \frac{\sqrt[q]{2\prod_{t = 1}^{r}\left( \overline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}}}}{\sqrt[q]{\prod_{t = 1}^{r}\left( 2 - \overline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}} + \prod_{t = 1}^{r}\left( \overline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}}}} \right) \end{array} \right\} \end{array} \right), \end{eqnarray*}

    where $\gamma = (\gamma_{1}, \gamma_{2}, ..., \gamma_{r})^{T}$ is the weight vector such that $\sum_{t = 1}^{r}\gamma_{t} = 1$ and $0 \leq \gamma_{t} \leq 1$.

    Proof. We will use mathematical induction to prove this theorem, starting with r=2.

    \begin{equation*} (\beth(\kappa_{1}) \oplus \beth(\kappa_{2})) = (\underline{\beth}(\kappa_{1}) \oplus \underline{\beth}(\kappa_{2}), \overline{\beth}(\kappa_{1}) \oplus \overline{\beth}(\kappa_{2})), \end{equation*}
    \begin{eqnarray*} && q\text{-}ROHFREWA(\beth(\kappa_{1}), \beth(\kappa_{2})) = \left( \bigoplus\limits_{t = 1}^{2}\gamma_{t}\underline{\beth}(\kappa_{t}), \bigoplus\limits_{t = 1}^{2}\gamma_{t}\overline{\beth}(\kappa_{t}) \right) \\ & = & \left( \begin{array}{c} \left\{ \begin{array}{c} \bigcup\limits_{\underline{\mu_{h_{t}}} \in \beta_{h_{\underline{\beth}(\kappa)}}} \left( \frac{\sqrt[q]{\prod_{t = 1}^{2}\left( 1 + \underline{\mu_{h_{t}}}^{q} \right)^{\gamma_{t}} - \prod_{t = 1}^{2}\left( 1 - \underline{\mu_{h_{t}}}^{q} \right)^{\gamma_{t}}}}{\sqrt[q]{\prod_{t = 1}^{2}\left( 1 + \underline{\mu_{h_{t}}}^{q} \right)^{\gamma_{t}} + \prod_{t = 1}^{2}\left( 1 - \underline{\mu_{h_{t}}}^{q} \right)^{\gamma_{t}}}} \right), \\ \bigcup\limits_{\underline{\nu_{h_{t}}} \in \psi_{h_{\underline{\beth}(\kappa)}}} \left( \frac{\sqrt[q]{2\prod_{t = 1}^{2}\left( \underline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}}}}{\sqrt[q]{\prod_{t = 1}^{2}\left( 2 - \underline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}} + \prod_{t = 1}^{2}\left( \underline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}}}} \right) \end{array} \right\}, \\ \left\{ \begin{array}{c} \bigcup\limits_{\overline{\mu_{h_{t}}} \in \beta_{h_{\overline{\beth}(\kappa)}}} \left( \frac{\sqrt[q]{\prod_{t = 1}^{2}\left( 1 + \overline{\mu_{h_{t}}^{q}} \right)^{\gamma_{t}} - \prod_{t = 1}^{2}\left( 1 - \overline{\mu_{h_{t}}^{q}} \right)^{\gamma_{t}}}}{\sqrt[q]{\prod_{t = 1}^{2}\left( 1 + \overline{\mu_{h_{t}}^{q}} \right)^{\gamma_{t}} + \prod_{t = 1}^{2}\left( 1 - \overline{\mu_{h_{t}}^{q}} \right)^{\gamma_{t}}}} \right), \\ \bigcup\limits_{\overline{\nu_{h_{t}}} \in \psi_{h_{\overline{\beth}(\kappa)}}} \left( \frac{\sqrt[q]{2\prod_{t = 1}^{2}\left( \overline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}}}}{\sqrt[q]{\prod_{t = 1}^{2}\left( 2 - \overline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}} + \prod_{t = 1}^{2}\left( \overline{\nu_{h_{t}}^{q}} \right)^{\gamma_{t}}}} \right) \end{array} \right\} \end{array} \right), \end{eqnarray*}

    so the result is true for $r = 2$. Now, suppose that the result is true for $r = k$, that is,

    \begin{eqnarray*} &&q-ROHFREWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{k})\right) = \left( \bigoplus\limits_{t = 1}^{k}\gamma _{t}\underline{\beth } (\kappa _{t}), \bigoplus\limits_{t = 1}^{k}\gamma _{t}\overline{\beth }(\kappa _{t})\right) \\ & = &\left( \begin{array}{c} {\text{ }}\left\{ \begin{array}{c} \bigcup\limits_{\underline{\mu _{h_{t}}}\in \beta _{h_{\underline{\beth } (\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( 1+\underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( 1-\underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( 1+\underline{ \mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( 1-\underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}}} \right), \\ \bigcup\limits_{\underline{\nu _{h_{t}}}\in \psi _{h_{\underline{\beth } (\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^k \left( \underline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( 2-\underline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( \underline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}\right) \end{array} \right\} \\ \left\{ \begin{array}{c} \bigcup\limits_{\overline{\mu _{h_{t}}}\in \psi _{h_{\overline{\beth } (\kappa )}}, \overline{\eth _{h_{t}}}\in \eth _{h_{\overline{\beth }(\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( 1+ \overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( 1-\overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}} }{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( 1+\overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^k \left( 1-\overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}\right) , \\ \bigcup\limits_{\overline{\nu _{h_{t}}}\in \psi _{h_{\overline{\beth }(\kappa )}}, \overline{\partial _{h_{t}}}\in \partial _{h_{\overline{\beth }(\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( \overline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( 2-\overline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^k\left( \overline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}\right) \end{array} \right\} \end{array} \right). \end{eqnarray*}

    Now, we have to prove that the result holds for r = k+1. It follows that

    \begin{eqnarray*} &&q-ROHFREWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{k}), \beth (\kappa _{k+1})\right) \\ & = &\left( \bigoplus\limits_{t = 1}^{k}\gamma _{t}\underline{\beth }(\kappa _{t})\bigoplus \left( \underline{\beth }(\kappa _{k+1})\right) ^{w_{k+1}}, \bigoplus\limits_{t = 1}^{k}\gamma _{t}\overline{\beth }(\kappa _{t})\bigoplus \left( \overline{\beth }(\kappa _{k+1})\right) ^{w_{k+1}}\right) \\ & = &\left( \begin{array}{c} {\text{ }}\left\{ \begin{array}{c} \bigcup\limits_{\underline{\mu _{h_{t}}}\in \beta _{h_{\underline{\beth } (\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1 \left( 1+\underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( 1-\underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( 1+ \underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( 1-\underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}}}\right) , \\ \bigcup\limits_{\underline{\nu _{h_{t}}}\in \psi _{h_{\underline{\beth } (\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1 \left( \underline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( 2-\underline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( \underline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}\right) \end{array} \right\} \\ \left\{ \begin{array}{c} \bigcup\limits_{\overline{\mu _{h_{t}}}\in \psi _{h_{\overline{\beth } (\kappa )}}, \overline{\eth _{h_{t}}}\in \eth _{h_{\overline{\beth }(\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( 1+ \overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( 1-\overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}} }{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( 1+\overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1 \left( 1-\overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}\right) , \\ \bigcup\limits_{\overline{\nu _{h_{t}}}\in \psi _{h_{\overline{\beth }(\kappa )}}, \overline{\partial _{h_{t}}}\in \partial _{h_{\overline{\beth }(\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( \overline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( 2-\overline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^k+1\left( \overline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}\right) \end{array} \right\} \end{array} \right). \end{eqnarray*}

    Thus, the result is valid for $r = k+1$. Therefore, it holds for every $r \geq 1$.
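    A compact Python sketch of the q-ROHFREWA aggregation of Theorem 4.1 follows (function names, rounding, and the sample values are ours; the hesitant union is formed by taking one element from each argument's hesitant set):

from itertools import product
from math import prod

def _agg_mu(mus, weights, q):
    """Einstein weighted average of membership grades (one per argument)."""
    a = prod((1 + m**q) ** w for m, w in zip(mus, weights))
    b = prod((1 - m**q) ** w for m, w in zip(mus, weights))
    return ((a - b) / (a + b)) ** (1 / q)

def _agg_nu(nus, weights, q):
    """Einstein weighted average of non-membership grades (one per argument)."""
    a = prod((n**q) ** w for n, w in zip(nus, weights))
    b = prod((2 - n**q) ** w for n, w in zip(nus, weights))
    return (2 * a / (b + a)) ** (1 / q)

def qrohfrewa(values, weights, q=3):
    """q-ROHFREWA of Theorem 4.1 applied to a list of q-ROHFR values
    ((beta_low, psi_low), (beta_up, psi_up))."""
    def combine(sets, f):
        return sorted({round(f(choice, weights, q), 4) for choice in product(*sets)})
    lower = (combine([v[0][0] for v in values], _agg_mu),
             combine([v[0][1] for v in values], _agg_nu))
    upper = (combine([v[1][0] for v in values], _agg_mu),
             combine([v[1][1] for v in values], _agg_nu))
    return lower, upper

vals = [(([0.1, 0.3], [0.8, 0.9]), ([0.6, 0.8], [0.2])),
        (([0.2, 0.3], [0.7, 0.9]), ([0.7, 0.9], [0.1, 0.2]))]
print(qrohfrewa(vals, weights=[0.4, 0.6]))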

    Theorem 4.2. Let \beth (\kappa _{t}) = (\underline{\beth }(\kappa _{t}), \overline{\beth }(\kappa _{t})) (t = 1, 2, 3, ..., r) be a collection of q-ROHFRVs, and let \gamma = \left(\gamma _{1}, \gamma _{2}, ..., \gamma _{r}\right) ^{T} be the weight vector such that \gamma _{t}\in \lbrack 0, 1] and \sum _{t = 1}^{r}\gamma _{t} = 1. The q-ROHFREWA operator fulfills the following properties:

    (1) Idempotency: If \beth (\kappa _{t}) = \mathcal{Q} (\kappa) for t = 1, 2, 3, ..., r where \mathcal{Q} (\kappa) = \left(\underline{\mathcal{Q} } (\kappa), \overline{\mathcal{Q} }(\kappa)\right) = \left((\underline{b_{h(\mathcal{\varpropto}_\imath)}}, \underline{d_{h(\mathcal{\varpropto}_\imath)}}), (\overline{b}_{h(\mathcal{\varpropto}_\imath)}, \overline{d_{h(\mathcal{\varpropto}_\imath)}}\right), then

    q-ROHFREWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) = \mathcal{Q} (\kappa ).

    (2) Boundedness: Let \left(\beth (\kappa)\right) _{\min } = \left(\underset{t}{\min }\underline{\beth }\left(\kappa _{t}\right), \underset{t}{ \max }\overline{\beth }(\kappa _{t})\right) and \left(\beth (\kappa)\right) _{\max } = \left(\underset{t}{\max }\underline{\beth }\left(\kappa _{t}\right), \underset{t}{\min }\overline{\beth }(\kappa _{t})\right). Then,

    \left( \beth (\kappa )\right) _{\min }\leq q-ROHFREWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) \leq \left( \beth (\kappa )\right) _{\max }.

    (3) Monotonicity: Suppose \mathcal{Q} (\kappa) = \left(\underline{\mathcal{Q} }(\kappa _{t}), \overline{\mathcal{Q} }(\kappa _{t})\right) (t = 1, 2, ..., r) is another collection of q-ROHFRVs such that \underline{\mathcal{Q} }(\kappa _{t})\leq \underline{\beth }\left(\kappa _{t}\right) and \overline{\mathcal{Q} }(\kappa _{t})\leq \overline{\beth } (\kappa _{t}) . Then,

    q-ROHFREWA\left( \mathcal{Q} (\kappa _{1}), \mathcal{Q} (\kappa _{2}), ..., \mathcal{Q} (\kappa _{r})\right) \leq q-ROHFREWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right).

    Proof. (1) Idempotency: As \beth (\kappa _{t}) = \mathcal{Q} (\kappa) (for all t = 1, 2, 3, ..., r ) where \mathcal{Q} (\kappa _{t}) = \left(\underline{ \mathcal{Q} }(\kappa), \overline{\mathcal{Q} }(\kappa)\right) = \left((\underline{b_{h(\mathcal{\varpropto}_\imath)}}, \underline{d_{h(\mathcal{\varpropto}_\imath)}}), (\overline{b_{h(\mathcal{\varpropto}_\imath)}}, \overline{ d_{h(\mathcal{\varpropto}_\imath)}})\right), it follows that

    \begin{eqnarray*} &&q-ROHFREWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) \\ & = &\left( \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\gamma _{t}\underline{\beth }(\kappa _{t}), \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\gamma _{t}\overline{\beth }(\kappa _{t})\right) \\ & = &\left( \begin{array}{c} {\text{ }}\left\{ \begin{array}{c} \bigcup\limits_{\underline{\mu _{h_{t}}}\in \beta _{h_{\underline{\beth } (\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\underline{ \mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\underline{\mu _{h_{t}}}^{q}\right) ^{\gamma _{t}}}} \right) , \\ \bigcup\limits_{\underline{\nu _{h_{t}}}\in \psi _{h_{\underline{\beth } (\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^r \left( \underline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 2-\underline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \underline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}\right), \end{array} \right\} \\ \left\{ \begin{array}{c} \bigcup\limits_{\overline{\mu _{h_{t}}}\in \psi _{h_{\overline{\beth } (\kappa )}}, \overline{\eth _{h_{t}}}\in \eth _{h_{\overline{\beth }(\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+ \overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}} }{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r \left( 1-\overline{\mu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}\right) , \\ \bigcup\limits_{\overline{\nu _{h_{t}}}\in \psi _{h_{\overline{\beth }(\kappa )}}, \overline{\partial _{h_{t}}}\in \partial _{h_{\overline{\beth }(\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \overline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 2-\overline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \overline{\nu _{h_{t}}^{q}}\right) ^{\gamma _{t}}}}\right) \end{array} \right\} \end{array} \right), \end{eqnarray*}

    for all t, \beth (\kappa _{t}) = \mathcal{Q} (\kappa) = \left(\underline{ \mathcal{Q} }(\kappa), \overline{\mathcal{Q} }(\kappa)\right) = \left((\underline{b_{h(\mathcal{\varpropto}_\imath)}}, \underline{d_{h(\mathcal{\varpropto}_\imath)}}), (\overline{b}_{h(\mathcal{\varpropto}_\imath)}, \overline{ d_{h(\mathcal{\varpropto}_\imath)}})\right). Hence,

    \begin{eqnarray*} & = &\left[ \begin{array}{c} \left( \begin{array}{c} \bigcup\limits_{\underline{b_{h(\mathcal{\varpropto}_\imath)}}\in \beta _{h_{\underline{\beth }(\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\left( \underline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^n\left( 1-\left( \underline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\left( \underline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}+ \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\left( \underline{b_{h(\mathcal{\varpropto}_\imath)}} \right) ^{q}\right) ^{^{\gamma _{t}}}}}\right) , \\ {\text{ }}\bigcup\limits_{\underline{d_{h(\mathcal{\varpropto}_\imath)}}\in \psi _{h_{\underline{\beth } (\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^r \left( \left( \underline{d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}}}{ \sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 2-\left( \underline{ d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \left( \underline{d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}}}\right) \end{array} \right) \\ \left( \begin{array}{c} \bigcup\limits_{\overline{b_{h(\mathcal{\varpropto}_\imath)}}\in \beta _{h_{\overline{\beth }(\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\left( \overline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\left( \overline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\left( \overline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}+ \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\left( \overline{b_{h(\mathcal{\varpropto}_\imath)}} \right) ^{q}\right) ^{^{\gamma _{t}}}}}\right) , {\text{ }} \\ \bigcup\limits_{\overline{d_{h(\mathcal{\varpropto}_\imath)}}\in \psi _{h_{\overline{\beth }(\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \left( \overline{d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}}}{\sqrt[q]{ \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 2-\left( \overline{d_{h(\mathcal{\varpropto}_\imath)}} \right) ^{q}\right) ^{^{\gamma _{t}}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r \left( \left( \overline{d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) ^{^{\gamma _{t}}}}} \right) \end{array} \right) \end{array} \right] \\ & = &\left[ \begin{array}{c} \left( \left( \frac{\sqrt[q]{\left( 1+\left( \underline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) -\left( 1-\left( \underline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) }}{\sqrt [q]{\left( 1+\left( \underline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) +\left( 1-\left( \underline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) }}\right) , \left( \frac{\sqrt[q]{ 2\left( \left( \underline{d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) }}{\sqrt[q]{\left( 2-\left( \underline{d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) +\left( \left( \underline{ 
d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) }}\right) \right) , \\ \left( \frac{\sqrt[q]{\left( 1+\left( \overline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) -\left( 1-\left( \overline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) }}{\sqrt[q]{\left( 1+\left( \overline{b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) +\left( 1-\left( \overline{ b_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) }}\right) , \left( \frac{\sqrt[q]{2\left( \left( \overline{d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) }}{\sqrt[q]{\left( 2-\left( \overline{ d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) +\left( \left( \overline{d_{h(\mathcal{\varpropto}_\imath)}}\right) ^{q}\right) }}\right) \end{array} \right] \\ & = &\left( \underline{\mathcal{Q} }(\kappa ), \overline{\mathcal{Q} }(\kappa )\right) = \mathcal{Q} (\kappa). \end{eqnarray*}

    Therefore, q-ROHFREWA \left(\beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) = \mathcal{Q} (\kappa).

    (2) Boundedness: As

    \begin{eqnarray*} \left( \underline{\beth }\left( \kappa \right) \right) ^{-} & = &\left[ \left( \underset{t}{\min }\left\{ \underline{\mu _{h_{t}}}\right\} , \underset{t}{ \max }\left\{ \underline{\nu _{h_{t}}}\right\} \right) , \left( \underset{t}{ \min }\{\overline{\mu _{h_{t}}}\}, \underset{t}{\max }\left\{ \overline{ \nu _{h}}_{_{\imath}}\right\} \right) \right] \\ \left( \underline{\beth }\left( \kappa \right) \right) ^{+} & = &\left[ \left( \underset{t}{\max }\{\underline{\mu _{h_{t}}}\}, \underset{t}{\min } \left\{ \underline{\nu _{h_{t}}}\right\} \right) , \left( \underset{t}{\max } \{\overline{\mu _{h_{t}}}/\overline{\eth _{h_{t}}}\}, \underset{t}{\min } \left\{ \overline{\nu _{h}}_{_{\imath}}\right\} \right) \right] \end{eqnarray*}

    and \beth (\kappa _{t}) = \left[\left(\underline{\beta_{t}}, \underline{{\psi}_{t}} \right), \left(\overline{\beta_{t}}, \overline{\psi}_{t}\right) \right]. To prove that

    \left( \beth (\kappa )\right) ^{-}\leq q-ROHFREWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) \leq \left( \beth (\kappa )\right) ^{+},

    let f(\mathcal{G}) = \sqrt[3]{\frac{1-\mathcal{G}^{3}}{1+\mathcal{G}^{3}}}, \mathcal{G}\in \lbrack 0, 1]. Then, f^{^{\prime }}(\mathcal{G}) = \frac{-2\mathcal{G}}{\left(1+\mathcal{G}^{3}\right) ^{3}}\sqrt[3]{\left(\frac{1-\mathcal{G}^{3}}{1+\mathcal{G}^{3}}\right) ^{-2}} < 0. Thus, f(\mathcal{G}) is a decreasing function over [0, 1]. Since \{\underline{\mu _{h_{\max }}}\}\leq \{ \underline{\mu _{h_{t}}}\}\leq \{\underline{\mu _{h_{\min }}}\} for all t, we have f\left(\underline{\mu _{h_{\min }}}\right) \leq f\left(\underline{\mu _{h_{t}}}\right) \leq f\left(\underline{\mu _{h_{\max }}}\right) (t = 1, 2, 3, ..., n), i.e.,

    \Leftrightarrow \sqrt[3]{\frac{1-\left( \underline{\mu _{h_{\min }}} \right) ^{3}}{1+\left( \underline{\mu _{h_{\min }}}\right) ^{3}}}\leq \sqrt[3]{\frac{1-\left( \underline{\mu _{h_{t}}}\right) ^{3}}{1+\left( \underline{\mu _{h_{t}}}\right) ^{3}}} \\ \leq \sqrt[3]{\frac{1-\left( \underline{\mu _{h_{\max }}}\right) ^{3}}{ 1+\left( \underline{\mu _{h_{\max }}}\right) ^{3}}}, \ (t = 1, 2, 3, ..., n).

    Let \gamma = \left(\gamma _{1}, \gamma _{2}, ..., \gamma _{r}\right) ^{T} be a weight vector such that \gamma _{t}\in \lbrack 0, 1] and \bigoplus _{t = 1}^{r}\gamma _{t} = 1, and we have

    \begin{eqnarray} &\Leftrightarrow &\sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{ 1-\left( \underline{\mu _{h_{\min }}}\right) ^{3}}{1+\left( \underline{ \mu _{h_{\min }}}\right) ^{3}}\right) ^{\gamma _{t}}}\leq \sqrt[3]{ \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{1-\left( \underline{\mu _{h_{t}}}\right) ^{3}}{1+\left( \underline{\mu _{h_{t}}}\right) ^{3}} \right) ^{\gamma _{t}}} \\ &\leq &\sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{1-\left( \underline{\mu _{h_{\max }}}\right) ^{3}}{1+\left( \underline{\mu _{h_{\max }}}\right) ^{3}}\right) ^{\gamma _{t}}} \\ &\Leftrightarrow &\sqrt[3]{\left( \frac{1-\left( \underline{\mu _{h_{\min }}}\right) ^{3}}{1+\left( \underline{\mu _{h_{\min }}}\right) ^{3}} \right) ^{\bigoplus _{t = 1}^{r}\gamma _{t}}}\leq \sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{1-\left( \underline{\mu _{h_{t}}}\right) ^{3} }{1+\left( \underline{\mu _{h_{t}}}\right) ^{3}}\right) ^{\gamma _{t}}} \\ &\leq &\sqrt[3]{\left( \frac{1-\left( \underline{\mu _{h_{\max }}}\right) ^{3}}{1+\left( \underline{\mu _{h_{\max }}}\right) ^{3}}\right) ^{\bigoplus _{t = 1}^{r}\gamma _{t}}} \end{eqnarray} (4.1)
    \begin{eqnarray} &\Leftrightarrow &\underline{\mu _{h_{\max }}}\leq \sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{1-\left( \underline{\mu _{h_{t}}} \right) ^{3}}{1+\left( \underline{\mu _{h_{t}}}\right) ^{3}}\right) ^{\gamma _{t}}}\leq \underline{\mu _{h_{\min }}} \\ &\Leftrightarrow &\underline{\mu _{h_{\max }}}\leq \sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{1-\left( \underline{\mu _{h_{t}}} \right) ^{3}}{1+\left( \underline{\mu _{h_{t}}}\right) ^{3}}\right) ^{\gamma _{t}}}\leq \underline{\mu _{h_{\min }}}. \end{eqnarray} (4.2)

    In a similar way, we can show that

    \begin{equation} \Leftrightarrow \overline{\mu _{h_{\max }}}\leq \sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{2-\left( \overline{\mu _{h_{t}}} \right) ^{3}}{\left( \overline{\mu _{h_{t}}}\right) ^{3}}\right) ^{\gamma _{t}}}\leq \overline{\mu _{h_{\min }}}. \notag \end{equation}

    Again, let \mathcal{S}(\varkappa) = \sqrt[3]{\frac{2-\varkappa ^{3}}{\varkappa ^{3}}}, \varkappa \in (0, 1]. Then, \mathcal{S}^{\prime }(\varkappa) = \frac{-2}{\varkappa ^{4} }\sqrt[3]{\left(\frac{2-\varkappa ^{3}}{\varkappa ^{3}}\right) ^{-2}} < 0. So, \mathcal{S}(\varkappa) is a decreasing function on (0, 1]. Since \{\underline{ \nu _{h_{\min }}}\}\leq \{\underline{\nu _{h_{t}}}\}\leq \{\underline{\nu _{h_{\max }}}\} for all t, we have \mathcal{S}\left(\underline{\nu _{h_{\max }}} \right) \leq \mathcal{S}\left(\underline{\nu _{h_{t}}}\right) \leq \mathcal{S}\left(\underline{ \nu _{h_{\min }}}\right) (t = 1, 2, 3, ..., r), i.e.,

    \begin{equation*} \sqrt[3]{\frac{2-\left( \underline{\nu _{h_{\max }}}\right) ^{3}}{\left( \underline{\nu _{h_{\max }}}\right) ^{3}}}\leq \sqrt[3]{\frac{2-\left( \underline{\nu _{h_{t}}}\right) ^{3}}{\left( \underline{\nu _{h_{t}}}\right) ^{3}}}\leq \sqrt[3]{\frac{2-\left( \underline{\nu _{h_{\min }}}\right) ^{3}}{ \left( \underline{\nu _{h_{\min }}}\right) ^{3}}}. \end{equation*}

    Let \gamma = \left(\gamma _{1}, \gamma _{2}, ..., \gamma _{r}\right) ^{T} be a weight vector such that \gamma _{t}\in \lbrack 0, 1] and \bigoplus _{t = 1}^{r}\gamma _{t} = 1, and we have

    \begin{eqnarray} &\Leftrightarrow &\sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{ 2-\left( \underline{\nu _{h_{\max }}}\right) ^{3}}{\left( \underline{\nu _{h_{\max }}}^{3}\right) }\right) ^{\gamma _{t}}}\leq \sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{2-\left( \underline{\nu _{h_{t}}} \right) ^{3}}{\left( \underline{\nu _{h_{t}}}^{3}\right) }\right) ^{\gamma _{t}}}\leq \sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{ 2-\left( \underline{\nu _{h_{\max }}}\right) ^{3}}{\left( \underline{\nu _{h_{\max }}}\right) ^{3}}\right) ^{\gamma _{t}}} \\ &\Leftrightarrow &\sqrt[3]{\left( \frac{2-\left( \underline{\nu _{h_{\max }}} \right) ^{3}}{\left( \underline{\nu _{h_{\max }}}\right) ^{3}}\right) ^{\bigoplus _{t = 1}^{r}\gamma _{t}}}\leq \sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{2-\left( \underline{\nu _{h_{t}}}^{3}\right) }{\left( \underline{\nu _{h_{t}}}\right) ^{3}}\right) ^{\gamma _{t}}}\leq \sqrt[3]{ \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{2-\left( \underline{\nu _{h_{\max }}}\right) ^{3}}{\left( \underline{\nu _{h_{\max }}}\right) ^{3}} \right) ^{\bigoplus _{t = 1}^{r}\gamma _{t}}} \\ &\Leftrightarrow &\underline{\nu _{h_{\min }}}\leq \sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{2-\left( \underline{\nu _{h_{t}}} \right) ^{3}}{\left( \underline{\nu _{h_{t}}}\right) ^{3}}\right) ^{\gamma _{t}}}\leq \underline{\nu _{h_{\max }}}. \end{eqnarray} (4.3)

    Likewise, we can show that

    \begin{equation} \Leftrightarrow \overline{\nu _{h_{\min }}}\leq \sqrt[3]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \frac{2-\left( \overline{\nu _{h_{t}}} \right) ^{3}}{\left( \overline{\nu _{h_{t}}}\right) ^{3}}\right) ^{\gamma _{t}}}\leq \overline{\nu _{h_{\max }}}. \end{equation} (4.4)

By routine calculation, we can show that the result also holds for q > 3 . Thus, from Eqs (4.1)–(4.4), we have

\begin{equation*} \left( \beth (\kappa )\right) ^{-}\leq q-ROHFREWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) \leq \left( \beth (\kappa )\right) ^{+}. \end{equation*}

    (3) Monotonicity: The proof is similar to the proof of Theorem 4.1.

In this subsection, we introduce the concept of the q-ROHFREOWA operator and present its essential features.

    Definition 15. Let \beth (\kappa _{t}) = (\underline{\beth }(\kappa _{t}), \overline{\beth }(\kappa _{t})) (t = 1, 2, 3, 4, ..., r) be the collection of q-ROHFRNs. Then, the q-ROHFREOWA operator is defined as

    \begin{equation*} q-ROHFREOWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) = \left( \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\gamma _{t}\underline{\beth } _{\rho(t) }(\kappa _{t}), \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\gamma _{t}\overline{\beth } _{\rho(t)} (\kappa _{t})\right), \end{equation*}

where (\rho (1), \rho (2), ..., \rho (r)) is a permutation of (1, 2, ..., r) such that \underline{\beth } _{\rho (t)}(\kappa _{t})\leq \underline{\beth } _{\rho (t-1)}(\kappa _{t-1}) for all t , and the weight vector \gamma = \left(\gamma _{1}, \gamma _{2}, ..., \gamma _{r}\right) ^{T} satisfies \bigoplus _{t = 1}^{r}\gamma _{t} = 1 and 0\leq \gamma _{t}\leq 1.
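To make the role of the permutation concrete, the following minimal Python sketch (the function name owa_permutation and the sample scores are illustrative, not taken from the paper) returns the positions \rho (1), ..., \rho (r) obtained by sorting the arguments in non-increasing order of their scores, which is how the ordered information is produced from score values in the worked example later on.

```python
def owa_permutation(scores):
    """Indices rho(1), ..., rho(r): argument positions sorted so that the
    corresponding scores are non-increasing, as required in Definition 15."""
    return sorted(range(len(scores)), key=lambda t: scores[t], reverse=True)

# Illustrative scores of one alternative under four criteria.
print(owa_permutation([0.55, 0.42, 0.37, 0.47]))   # [0, 3, 1, 2]
```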

Theorem 4.3. Let \beth (\kappa _{t}) = (\underline{\beth }(\kappa _{t}), \overline{\beth }(\kappa _{t})) (t = 1, 2, 3, ..., r) be the collection of q-ROHFRVs. Then, the q-ROHFREOWA operator is given by:

\begin{eqnarray*} &&q-ROHFREOWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) = \left( \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\gamma _{t}\underline{\beth } _{\rho (t)}(\kappa _{t}), \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\gamma _{t}\overline{\beth } _{\rho (t)}(\kappa _{t})\right) \\ & = &\left( \begin{array}{c} \left\{ \begin{array}{c} \bigcup\limits_{\underline{\mu _{\rho _{h_{t}}}}\in \beta _{h_{\underline{\beth }_{\rho }(\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\underline{\mu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\underline{\mu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\underline{\mu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\underline{\mu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}}}\right), \\ \bigcup\limits_{\underline{\nu _{\rho _{h_{t}}}}\in \psi _{h_{\underline{\beth }_{\rho }(\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \underline{\nu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 2-\underline{\nu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \underline{\nu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}}}\right) \end{array} \right\}, \\ \left\{ \begin{array}{c} \bigcup\limits_{\overline{\mu _{\rho _{h_{t}}}}\in \psi _{h_{\overline{\beth }_{\rho }(\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\overline{\mu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\overline{\mu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\overline{\mu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\overline{\mu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}}}\right), \\ \bigcup\limits_{\overline{\nu _{\rho _{h_{t}}}}\in \psi _{h_{\overline{\beth }_{\rho }(\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \overline{\nu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 2-\overline{\nu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \overline{\nu _{\rho _{h_{t}}}}^{q}\right) ^{\gamma _{t}}}}\right) \end{array} \right\} \end{array} \right), \end{eqnarray*}

    where \gamma = \left(\gamma _{1}, \gamma _{2}, ..., \gamma _{r}\right) ^{T} is a weight vector such that \bigoplus _{t = 1}^{r}\gamma _{t} = 1 and 0\leq \gamma _{t}\leq 1.

    Proof. It follows from Theorem 4.1.

Theorem 4.4. Let \beth (\kappa _{t}) = (\underline{\beth }(\kappa _{t}), \overline{\beth }(\kappa _{t})) (t = 1, 2, 3, ..., r) be the collection of q-ROHFRVs, and let \gamma = \left(\gamma _{1}, \gamma _{2}, ..., \gamma _{r}\right) ^{T} be a weight vector such that \gamma _{t}\in \lbrack 0, 1] and \bigoplus _{t = 1}^{r}\gamma _{t} = 1 . Then, the q-ROHFREOWA operator satisfies the following properties:

    (1) Idempotency: If \beth (\kappa _{t}) = \mathcal{Q} (\kappa) for (t = 1, 2, 3, ..., r) where

\mathcal{Q} (\kappa ) = \left( \underline{\mathcal{Q} } (\kappa ), \overline{\mathcal{Q} }(\kappa )\right) = \left( (\underline{b_{h(\mathcal{\varpropto}_\imath)}}, \underline{d_{h(\mathcal{\varpropto}_\imath)}}), (\overline{b_{h(\mathcal{\varpropto}_\imath)}}, \overline{d_{h(\mathcal{\varpropto}_\imath)}})\right),

    then

    \begin{equation*} q-ROHFREOWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) = \mathcal{Q} (\kappa ). \end{equation*}

    (2) Boundedness: Let \left(\beth (\kappa)\right) _{\min } = \left(\underset{t}{\min }\underline{\beth }\left(\kappa _{t}\right), \underset{t}{ \max }\overline{\beth }(\kappa _{t})\right) and \left(\beth (\kappa)\right) _{\max } = \left(\underset{t}{\max }\underline{\beth }\left(\kappa _{t}\right), \underset{t}{\min }\overline{\beth }(\kappa _{t})\right). Then,

    \begin{equation*} \left( \beth (\kappa )\right) _{\min }\leq q-ROHFREOWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) \leq \left( \beth (\kappa )\right) _{\max }. \end{equation*}

(3) Monotonicity: Suppose \mathcal{Q} (\kappa) = \left(\underline{\mathcal{Q} }(\kappa _{t}), \overline{\mathcal{Q} }(\kappa _{t})\right) (t = 1, 2, 3, ..., r) is another collection of q-ROHFRVs such that \underline{\mathcal{Q} }(\kappa _{t})\leq \underline{\beth }\left(\kappa _{t}\right) and \overline{\mathcal{Q} }(\kappa _{t})\leq \overline{\beth } (\kappa _{t}) . Then,

    \begin{equation*} q-ROHFREOWA\left( \mathcal{Q} (\kappa _{1}), \mathcal{Q} (\kappa _{2}), ..., \mathcal{Q} (\kappa _{r})\right) \leq q-ROHFREOWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) . \end{equation*}

    Proof. The proof is comparable to the proof of Theorem 4.2.

In this part, we present the q-ROHFREHWA operator, which simultaneously weights the value and the ordered position of the q-ROHFR information. The key features of the proposed operator are described in detail as follows:

Definition 16. Let \beth (\kappa _{t}) = (\underline{\beth }(\kappa _{t}), \overline{\beth }(\kappa _{t})) (t = 1, 2, 3, ..., r) be the collection of q-ROHFRVs, and let \gamma = \left(\gamma _{1}, \gamma _{2}, ..., \gamma _{r}\right) ^{T} be the weight vector of the given collection of q-ROHFRVs such that \bigoplus _{t = 1}^{r}\gamma _{t} = 1 and 0\leq \gamma _{t}\leq 1. Let \left(\varrho_{1}, \varrho_{2}, ..., \varrho_{r}\right) ^{T}\; be the associated weight vector such that \bigoplus _{t = 1}^{r}\varrho_{t} = 1 and 0\leq \varrho_{t}\leq 1. Then, the q-ROHFREHWA operator is determined as

\begin{equation*} q-ROHFREHWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) = \left( \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\varrho_{t}\widehat{\underline{\beth }_{\rho }}(\kappa _{t}), \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\varrho_{t} \overline{\widehat{\beth }}_\rho (\kappa _{t})\right), \end{equation*}

    where \left(\widehat{\underline{\beth }_{\rho }}(\kappa _{t}), \overline{ \widehat{\beth }}_\rho (\kappa _{t})\right) = \left(n\gamma _{t}\underline{{\beth}} _{\rho}(\kappa_{t}), n\gamma _{t}\overline{\beth }_\rho (\kappa _{t})\right).
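Definition 16 thus proceeds in two stages: each argument is first rescaled by n\gamma _{t} , and the rescaled values are then ordered and combined with the associated weights \varrho . The Python sketch below shows only the first stage for a single (membership, non-membership) grade pair, assuming the standard Einstein scalar multiplication for q-rung orthopair grades (the same algebra from which the closed expressions in Theorems 4.3 and 4.5 are built); the function name einstein_scalar, the sample grades, and the illustrative weight vector are not taken verbatim from the paper. For hesitant sets, the operation would be applied to every grade in each set, and the second stage (ordering by score and Einstein-averaging with \varrho ) would follow.

```python
def einstein_scalar(mu, nu, lam, q=3):
    """lam * (mu, nu) under the Einstein operations for one q-rung
    orthopair grade pair (standard form, assumed here)."""
    a = (1 + mu**q) ** lam
    b = (1 - mu**q) ** lam
    new_mu = ((a - b) / (a + b)) ** (1 / q)
    c = nu ** (q * lam)
    d = (2 - nu**q) ** lam
    new_nu = (2 * c / (d + c)) ** (1 / q)
    return new_mu, new_nu

# First stage of Definition 16: beta_hat_t = (r * gamma_t) * beta_t.
gamma = [0.13, 0.27, 0.29, 0.31]                         # illustrative criteria weights
r = len(gamma)
grades = [(0.1, 0.3), (0.5, 0.5), (0.4, 0.3), (0.6, 0.7)]  # illustrative (mu, nu) pairs
scaled = [einstein_scalar(m, n, r * g) for (m, n), g in zip(grades, gamma)]
print([(round(m, 4), round(n, 4)) for m, n in scaled])
```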

    Theorem 4.5. Let \beth (\kappa _{t}) = (\underline{\beth }(\kappa _{t}), \overline{\beth }(\kappa _{t})) (t = 1, 2, 3, 4, ..., r) be the collection of q -ROHFRVs. Then, the q-ROHFREHWA operator is defined as:

    \begin{eqnarray*} &&q-ROHFREHWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ...\beth (\kappa _{r})\right) = \left( \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\varrho_{t}\widehat{\underline{\beth }_{\rho }}(\kappa _{t}), \mathop {\mathop \otimes \limits_{t = 1} }\limits^r\varrho_{t}\overline{\widehat{ \beth }}_{\rho }(\kappa _{t})\right) \\ & = &\left( \begin{array}{c} {\mathit{{\text{}}}}\left\{ \begin{array}{c} \bigcup\limits_{\underline{\mu _{\rho _{h_{t}}}}\in \beta _{h_{\widehat{ \underline{\beth }_{\rho }}(\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\underline{\widehat{\mu }_{_{\rho _{h_{t}}}}}^{q}\right) ^{\varrho_{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\underline{\widehat{\mu }_{\rho _{h_{t}}}}^{q}\right) ^{\varrho_{t}}}}{\sqrt[q ]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\underline{\widehat{\mu } _{\rho _{h_{t}}}}^{q}\right) ^{\varrho_{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r \left( 1-\underline{\widehat{\mu }_{\rho _{h_{t}}}}^{q}\right) ^{\varrho_{t}}}} \right) \\ \bigcup\limits_{\underline{\nu _{\rho _{h_{t}}}}\in \psi _{h_{\widehat{ \underline{\beth }_{\rho }}(\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \underline{\widehat{\nu }_{\rho _{h_{t}}}^{q} }\right) ^{\varrho_{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 2- \underline{\widehat{\nu }_{\rho _{h_{t}}}^{q}}\right) ^{\varrho_{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \underline{\widehat{\nu }_{\rho _{h_{t}}}^{q} }\right) ^{\varrho_{t}}}}\right), \end{array} \right\} \\ \left\{ \begin{array}{c} \bigcup\limits_{\overline{\mu _{\rho _{h_{t}}}}\in \psi _{h_{\overline{ \widehat{\beth }}_{\rho }(\kappa )}}}\left( \frac{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\overline{\widehat{\mu }_{\rho _{h_{t}}}^{q}} \right) ^{\varrho_{t}}-\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1-\overline{ \widehat{\mu }_{\rho _{h_{t}}}^{q}}\right) ^{\varrho_{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1+\overline{\widehat{\mu }_{\rho _{h_{t}}}^{q}}\right) ^{\varrho_{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 1- \overline{\widehat{\mu }_{\rho _{h_{t}}}^{q}}\right) ^{\varrho_{t}}}}\right) , \\ \bigcup\limits_{\overline{\nu _{\rho _{h_{t}}}}\in \psi _{h_{\overline{ \widehat{\beth }}_{\rho }(\kappa )}}}\left( \frac{\sqrt[q]{2\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \overline{\widehat{\nu }_{\rho _{h_{t}}}^{q}} \right) ^{\varrho_{t}}}}{\sqrt[q]{\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( 2- \overline{\widehat{\nu }_{\rho _{h_{t}}}^{q}}\right) ^{\varrho_{t}}+\mathop {\mathop \otimes \limits_{t = 1} }\limits^r\left( \overline{\widehat{\nu }_{\rho _{h_{t}}}^{q}} \right) ^{\varrho_{t}}}}\right) \end{array} \right\} \end{array} \right) , \end{eqnarray*}

where \left(\widehat{\underline{\beth }_{\rho }}(\kappa _{t}), \overline{\widehat{\beth }}_{\rho }(\kappa _{t})\right) = \left(n\gamma _{t}\underline{{\beth}} _{\rho}(\kappa_{t}), n\gamma _{t}\overline{\beth }_\rho (\kappa _{t})\right) and \left(\varrho_{1}, \varrho_{2}, ..., \varrho_{r}\right) ^{T}\; is the associated weight vector such that \bigoplus _{t = 1}^{r}\varrho_{t} = 1 and 0\leq \varrho_{t}\leq 1.

    Proof. The proof follows from the proof of Theorem 4.1.

Theorem 4.6. Let \beth (\kappa _{t}) = (\underline{\beth }(\kappa _{t}), \overline{\beth }(\kappa _{t})) (t = 1, 2, 3, ..., r) be the collection of q-ROHFRVs, and let \left(\varrho_{1}, \varrho_{2}, ..., \varrho_{r}\right) ^{T}\; be the associated weight vector such that \bigoplus _{t = 1}^{r}\varrho_{t} = 1 and 0\leq \varrho_{t}\leq 1. Let the weight vector of the given collection of q-ROHFRVs be \gamma = \left(\gamma _{1}, \gamma _{2}, ..., \gamma _{r}\right) ^{T} such that \bigoplus \limits_{t = 1}^{r}\gamma _{t} = 1 and 0\leq \gamma _{t}\leq 1. Then, the q-ROHFREHWA operator satisfies the following properties:

(1) Idempotency: If \beth(\kappa_{t}) = \mathcal{Q}(\kappa) for t = 1, 2, ..., r where \mathcal{Q} (\kappa) = \left(\underline{\mathcal{Q} } (\kappa), \overline{\mathcal{Q} }(\kappa)\right) = \left((\underline{b_{h(\mathcal{\varpropto}_\imath)}}, \underline{d_{h(\mathcal{\varpropto}_\imath)}}), (\overline{b_{h(\mathcal{\varpropto}_\imath)}}, \overline{d_{h(\mathcal{\varpropto}_\imath)}})\right), then

    \begin{equation*} q-ROHFREHWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) = \mathcal{Q} (\kappa). \end{equation*}

    (2) Boundedness: Let \left(\beth (\kappa)\right) _{\min } = \left(\underset{t}{\min }\underline{\beth }\left(\kappa _{t}\right), \underset{t}{ \max }\overline{\beth }(\kappa _{t})\right) and \left(\beth (\kappa)\right) _{\max } = \left(\underset{t}{\max }\underline{\beth }\left(\kappa _{t}\right), \underset{t}{\min }\overline{\beth }(\kappa _{t})\right). Then,

    \begin{equation*} \left( \beth (\kappa )\right) _{\min }\leq q-ROHFREHWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right) \leq \left( \beth (\kappa )\right) _{\max }. \end{equation*}

    (3) Monotonicity: Suppose \mathcal{Q} (\kappa) = \left(\underline{\mathcal{Q} }(\kappa _{t}), \overline{\mathcal{Q} }(\kappa _{t})\right) (t = 1, 2, 3, ..., r) is another collection of q-ROHFRVs such that \underline{\mathcal{Q} }(\kappa _{t})\leq \underline{\beth }\left(\kappa _{t}\right) and \overline{\mathcal{Q} }(\kappa _{t})\leq \overline{\beth } (\kappa _{t}) . Then,

    \begin{equation*} q-ROHFREHWA\left( \mathcal{Q} (\kappa _{1}), \mathcal{Q} (\kappa _{2}), ..., \mathcal{Q} (\kappa _{r})\right) \leq q-ROHFREHWA\left( \beth (\kappa _{1}), \beth (\kappa _{2}), ..., \beth (\kappa _{r})\right). \end{equation*}

    Proof. The proof is similar to the proof of Theorem 4.2.

Here, we develop an algorithm to deal with uncertainty in MAGDM under q-ROHFR information. Consider a DM problem with a set \left\{ A_{1}, A_{2}, ..., A_{m}\right\} of m alternatives and a set of r attributes \left\{ \chi _{1}, \chi _{2}, ..., \chi _{r}\right\} with the weight vector (\gamma _{1}, \gamma _{2}, ..., \gamma _{r})^{T} , that is, \gamma _{t}\in \lbrack 0, 1] , \bigoplus _{t = 1}^{r}\gamma _{t} = 1. To assess the kth alternative A_{k} under the attribute \chi_{t}, let \left\{ \mathring{D}_{1}, \mathring{D}_{2}, ..., \mathring{D}_{\hat{z} }\right\} be a set of DMs. The expert assessment matrix is defined as follows:

    \begin{eqnarray*} M & = &\left[ \overline{\beth }(\kappa _{tj}^{\hat{z}})\right] _{m\times n}\\ & = &\left[ \begin{array}{cccc} \left( \underline{\beth }(\kappa _{11}), \overline{\beth }(\kappa _{11})\right) & \left( \underline{\beth }(\kappa _{12}), \overline{\beth }(\kappa _{12})\right) & \cdots & \left( \underline{\beth }(\kappa _{1j}), \overline{\beth }(\kappa _{1j})\right) \\ \left( \underline{\beth }(\kappa _{21}), \overline{\beth }(\kappa _{21})\right) & \left( \underline{\beth }(\kappa _{22}), \overline{\beth }(\kappa _{22})\right) & \cdots & \left( \underline{\beth }(\kappa _{2j}), \overline{\beth }(\kappa _{2j})\right) \\ \left( \underline{\beth }(\kappa _{31}), \overline{\beth }(\kappa _{31})\right) & \left( \underline{\beth }(\kappa _{32}), \overline{\beth }(\kappa _{32})\right) & \cdots & \left( \underline{\beth }(\kappa _{3j}), \overline{\beth }(\kappa _{3j})\right) \\ \vdots & \vdots & \ddots & \vdots \\ \left( \underline{\beth }(\kappa _{t1}), \overline{\beth }(\kappa _{t1})\right) & \left( \underline{\beth }(\kappa _{t2}), \overline{\beth }(\kappa _{t2})\right) & \cdots & \left( \underline{\beth }(\kappa _{tj}), \overline{\beth }(\kappa _{tj})\right) \end{array} \right] , \end{eqnarray*}

    where

    \begin{equation*} \underline{\beth }(\kappa ) = \left\{ \langle \propto_{\imath}, \beta _{h_{ \underline{\beth }(\kappa )}}(\propto_{\imath}), \psi _{h_{\underline{\beth } (\kappa )}}(\propto_{\imath})\rangle |\propto_{\imath}\in U \right\}, \end{equation*}

    and

    \begin{equation*} \overline{\beth }(\kappa _{tj}) = \left\{ \langle \propto_{\imath}, \beta _{h_{ \overline{\beth }(\kappa )}}(\propto_{\imath}), \psi _{h_{\overline{\beth }(\kappa )}}( \propto_{\imath})\rangle |\propto_{\imath}\in U \right\}, \end{equation*}
    \begin{equation*} 0\leq \left( \max (\beta _{h_{\overline{\beth }(\kappa )}}(\propto_{\imath}))\right) ^{q}+\left( \min (\psi _{h_{\overline{\beth }(\kappa )}}(\propto_{\imath}))\right) ^{q}\leq 1, \end{equation*}

    and

    \begin{equation*} 0\leq \left( \min (\beta _{h_{\underline{\beth }(\kappa )}}(\propto_{\imath}))\right) ^{q}+\left( \max (\psi _{h_{\underline{\beth }(\kappa )}}(\propto_{\imath}))\right) ^{q}\leq 1, \end{equation*}

    are the q-ROHFR values. The main steps for MAGDM are as follows:

    Step-1. Construct the experts evaluation matrices as

\begin{equation*} \left( E\right) ^{\hat{z_{\imath}}} = \left[ \begin{array}{cccc} \left( \underline{\beth }(\kappa _{11}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{11}^{\hat{z_{\imath}}})\right) & \left( \underline{\beth }(\kappa _{12}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{12}^{\hat{z_{\imath}}})\right) & \cdots & \left( \underline{\beth }(\kappa _{1j}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{1j}^{\hat{z_{\imath}}})\right) \\ \left( \underline{\beth }(\kappa _{21}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{21}^{\hat{z_{\imath}}})\right) & \left( \underline{\beth }(\kappa _{22}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{22}^{\hat{z_{\imath}}})\right) & \cdots & \left( \underline{\beth }(\kappa _{2j}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{2j}^{\hat{z_{\imath}}})\right) \\ \left( \underline{\beth }(\kappa _{31}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{31}^{\hat{z_{\imath}}})\right) & \left( \underline{\beth }(\kappa _{32}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{32}^{\hat{z_{\imath}}})\right) & \cdots & \left( \underline{\beth }(\kappa _{3j}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{3j}^{\hat{z_{\imath}}})\right) \\ \vdots & \vdots & \ddots & \vdots \\ \left( \underline{\beth }(\kappa _{t1}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{t1}^{\hat{z_{\imath}}})\right) & \left( \underline{\beth }(\kappa _{t2}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{t2}^{\hat{z_{\imath}}})\right) & \cdots & \left( \underline{\beth }(\kappa _{tj}^{\hat{z_{\imath}}}), \overline{\beth }(\kappa _{tj}^{\hat{z_{\imath}}})\right) \end{array} \right], \end{equation*}

where \hat{z_{\imath}} indexes the experts.

    Step-2. Evaluate the normalized expert matrices \left(N\right) ^{ \hat{z_{\imath}}}, as

\begin{equation*} \left( N\right) ^{\hat{z_{\imath}}} = \left\{ \begin{array}{ll} \beth (\kappa _{tj}) = \left( \underline{\beth }\left( \kappa _{tj}\right), \overline{\beth }\left( \kappa _{tj}\right) \right) & {\text{if }}\chi _{j}{\text{ is of benefit type,}} \\ \left( \beth (\kappa _{tj})\right) ^{c} = \left( \left( \underline{\beth }\left( \kappa _{tj}\right) \right) ^{c}, \left( \overline{\beth }\left( \kappa _{tj}\right) \right) ^{c}\right) & {\text{if }}\chi _{j}{\text{ is of cost type.}} \end{array} \right. \end{equation*}
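As a small illustration of Step-2, the following Python sketch (the names are illustrative) keeps benefit-type values unchanged and, for cost-type attributes, takes the complement, assumed here to interchange the membership and non-membership hesitant sets in both approximations.

```python
def normalize(value, benefit=True):
    """Step-2: benefit-type values are kept; for cost-type attributes the
    complement is taken (assumed: swap membership and non-membership sets)."""
    if benefit:
        return value
    (mu_l, nu_l), (mu_u, nu_u) = value
    return ((nu_l, mu_l), (nu_u, mu_u))

# A cost-type entry is replaced by its complement:
print(normalize((([0.2, 0.4], [0.5]), ([0.6], [0.3, 0.7])), benefit=False))
```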

Step-3. Compute the aggregated q-ROHFRVs for each considered alternative with respect to the given list of criteria/attributes by utilizing the proposed aggregation operators.

    Step-4. Determine the ranking of alternatives based on the score function as follows:

\begin{equation*} SR(\beth (\kappa )) = \frac{1}{4}\left( 2+\frac{1}{M_{\mathcal{F}}}\sum\limits_{\underline{\mu _{h_{t}}}\in \beta _{h_{\underline{\beth }(\kappa )}}}\underline{\mu _{h_{t}}}+\frac{1}{M_{\mathcal{F}}}\sum\limits_{\overline{\mu _{h_{t}}}\in \beta _{h_{\overline{\beth }(\kappa )}}}\overline{\mu _{h_{t}}}-\frac{1}{N_{\mathcal{F}}}\sum\limits_{\underline{\nu _{h_{t}}}\in \psi _{h_{\underline{\beth }(\kappa )}}}\underline{\nu _{h_{t}}}-\frac{1}{N_{\mathcal{F}}}\sum\limits_{\overline{\nu _{h_{t}}}\in \psi _{h_{\overline{\beth }(\kappa )}}}\overline{\nu _{h_{t}}}\right), \end{equation*} where M_{\mathcal{F}} and N_{\mathcal{F}} denote the numbers of membership and non-membership grades in the corresponding hesitant sets.

Step-5. Rank all the alternatives according to their score values in descending order; the alternative with the largest score value is the best one.
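Steps 4 and 5 can be summarized by the short Python sketch below (the names score and rank are illustrative). A q-ROHFR value is stored as a pair of (membership set, non-membership set) pairs for the lower and upper approximations, and each hesitant set is averaged separately, as in the score function above; the check at the end reproduces the entry of Table 7 for A_1 under \chi _{1} .

```python
from typing import Dict, List, Tuple

Hesitant = List[float]
# ((lower membership set, lower non-membership set),
#  (upper membership set, upper non-membership set))
QROHFR = Tuple[Tuple[Hesitant, Hesitant], Tuple[Hesitant, Hesitant]]

def score(v: QROHFR) -> float:
    """Step-4 score: 1/4 * (2 + mean(lower mu) + mean(upper mu)
                              - mean(lower nu) - mean(upper nu))."""
    (mu_l, nu_l), (mu_u, nu_u) = v
    mean = lambda xs: sum(xs) / len(xs)
    return 0.25 * (2 + mean(mu_l) + mean(mu_u) - mean(nu_l) - mean(nu_u))

def rank(alternatives: Dict[str, QROHFR]) -> List[str]:
    """Step-5: alternatives sorted by score, largest first."""
    return sorted(alternatives, key=lambda a: score(alternatives[a]), reverse=True)

# Entry of Table 2 for A1 under chi_1:
a1_chi1: QROHFR = (([0.1, 0.2, 0.5], [0.3, 0.4]), ([0.8], [0.4, 0.6]))
print(round(score(a1_chi1), 4))   # 0.5542, matching Table 7
```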

To validate the established operators, we consider a numerical MCGDM example concerning the selection of a site for a garbage disposal plant.

The production of garbage from human activities grows day by day as the world population increases and living standards improve. Garbage has become a significant problem for international society, creating an increasing threat to human health [43]. Environmentally friendly garbage disposal and the conversion of garbage into wealth have become critical issues for all countries seeking to achieve sustainable growth and establish a scientific ecosystem [10]. Choosing the most appropriate location for garbage disposal is one of the most essential elements in ensuring successful garbage disposal. An ideal garbage disposal location minimizes the enterprise's operating costs, increases the efficiency of rubbish disposal and improves the preservation of the city's environment. Environment, natural conditions, social factors and economic factors are all significant in GDS selection. Soil, water and air pollution are the three most significant environmental problems [54]. The waste water produced by a large number of landfills in a short period of time may rupture the leachate collection system and damage the cover film. Plastic garbage particles are so small that they can enter the human body through water, posing a great risk to an individual's health [53]. Plastic materials, heavy metals, cleaning products and other harmful chemicals in the garbage end up in the soil while occupying large areas. Air pollution primarily arises because many germs are transported in the garbage, and the transmission of these germs to humans poses a severe health threat. Further, poisonous gases, dust and tiny particles produced by garbage decomposition spread into the air, causing suspended sulfur dioxide particles to reach the atmosphere and resulting in pollution from acid rain and dust. The primary and essential factor in GDS selection is the natural conditions. To maintain the water resistance of soil and rock density, the site must be kept away from unfavorable construction locations, water sources, rivers and streams. Moreover, places far below ground level should be considered [33,52]; this reduces the need for extensive trench digging and makes waste disposal simpler. A further social factor is that allowing the public to participate in and evaluate the establishment of the GDS is essential. Garbage disposal facilities will become increasingly interconnected and interdependent with society as people's awareness of environmental protection grows, as do quality of life and living environment needs [54]. If the public's negative perception of the GDS cannot be changed, and the odor is detected on a daily basis, negative impressions will grow. Economic factors are identified as the income and expenditure [53].

The evaluation procedure for selecting a GDS: Suppose an organization wants to select a GDS for a plant. They invite a panel of experts to analyze an appropriate GDS location. Let \left\{ A_{1}, A_{2}, A_{3}, A_{4}\right\} be four alternative garbage disposal sites, from which the ideal one must be selected. Let \left\{ \chi _{1}, \chi _{2}, \chi _{3}, \chi _{4}\right\} be the attributes of each alternative based on the influencing factors, determined as follows: environmental protection level \left(\chi _{1}\right) , suitability of natural conditions \left(\chi _{2}\right) , social benefit \left(\chi _{3}\right) and economic benefit \left(\chi _{4}\right) of the GDS. Because of uncertainty, the DMs' selection information is presented as q-ROHFR information. The weight vector for the criteria is \gamma = \left(0.13, 0.27, 0.29, 0.31\right) ^{T} . The following computations are carried out in order to evaluate the MCDM problem utilizing the defined approach:

Step 1. The information of a professional expert is presented in the form of q-ROHFRSs, where the value of q is 3, as shown in Tables 2–5.

    Table 2.  Decision making information.
    \chi _{1} \chi _{2}
    A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1, 0.2, 0.5\right), \\ \left( 0.3, 0.4\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.8\right), \\ \left( 0.4, 0.6\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.5, 0.7\right), \\ \left( 0.5, 0.6\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4, 0.5\right), \\ \left( 0.7, 0.9\right) \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.6, 0.7\right), \\ \left( 0.7, 0.9\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3, 0.5\right), \\ \left( 0.6\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2, 0.4, 0.5\right), \\ \left( 0.5\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.6, 0.7\right), \\ \left( 0.3\right) \end{array} \right) \end{array} \right)

    Table 3.  Decision making information.
    \chi _{3} \chi _{4}
    A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4\right), \\ \left( 0.3, 0.7\right) \end{array} \right) \bf{, } \\ \left( \begin{array}{c} \left( 0.5\right), \\ \left( 0.9\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.6\right) \bf{, } \\ \left( 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.6, 0.8, 0.9\right), \\ \left( 0.7, 0.9\right) \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.8\right), \\ \left( 0.4, 0.5, 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.2, 0.5\right), \\ \left( 0.4, 0.5\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.8\right), \\ \left( 0.5\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.7\right), \\ \left( 0.1, 0.3, 0.4\right) \end{array} \right) \end{array} \right)

    Table 4.  Decision making information.
    \chi _{1} \chi_{2}
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4, 0.5, 0.6\right), \\ \left( 0.6, 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.9\right), \\ \left( 0.5\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1\right), \\ \left( 0.5, 0.6\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4, 0.6, 0.7\right), \\ \left( 0.5, 0.7\right) \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4\right), \\ \left( 0.5, 0.6\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3, 0.4\right), \\ \left( 0.8\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4, 0.5\right), \\ \left( 0.4\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1, 0.2\right), \\ \left( 0.2, 0.3\right) \end{array} \right) \end{array} \right)

    Table 5.  Decision making information.
    \chi _{3} \chi _{4}
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.3\right), \\ \left( 0.7, 0.8\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.7, 0.8\right), \\ \left( 0.1, 0.4, 0.7\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.3, 0.6\right), \\ \left( 0.8\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.7\right), \\ \left( 0.3\right) \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.3\right), \\ \left( 0.7, 0.8\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.7\right), \\ \left( 0.6\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.6, 0.7, 0.9\right), \\ \left( 0.3, 0.4\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.2, 0.7\right), \\ \left( 0.7, 0.8, 0.9\right) \end{array} \right) \end{array} \right)


Step 2. The expert information is of benefit type, so in this case we do not need to normalize the q-ROHFRVs.

Step 3. In this problem, we consider only one expert for the collection of uncertain information, so there is no need to aggregate the individual expert matrices.

Step 4. Aggregation information for each alternative under the given list of attributes is evaluated using the proposed AOps as follows:

    Case 1. Aggregation information using the Einstein weighted averaging operator is presented in Table 6.

    Table 6.  Aggregated information using q-ROHFREWA.
A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.4934, 0.5659, 0.9634, 0.5668, 0.5143, 0.5817\right\}, \\ \left\{ 0.0047, 0.0113, 0.0057, 0.0136, 0.0063, 0.0150, 0.0076, 0.0181\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.5932, 0.6759, 0.9871, 0.5777, 0.6647, 0.7320\right\}, \\ \left\{ 0.0281, 0.0371, 0.0370, 0.0488, 0.0423, 0.0558, 0.0556, 0.0735\right\} \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.7074, 0.7160, 0.8976, 0.7074, 0.7160, 0.7252\right\}, \\ \left\{ 0.0105, 0.0132, 0.0187, 0.0137, 0.0171, 0.0243\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.5584, 0.5915, 0.9867, 0.6235, 0.5714, 0.6030, 0.6057, 0.6336\right\}, \\ \left\{ 0.0011, 0.0032, 0.0043, 0.0013, 0.0040, 0.0054\right\} \end{array} \right) \end{array} \right)
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.2916, 0.4384, 0.9834, 0.4518, 0.3556, 0.4707\right\}, \\ \left\{ 0.0262, 0.0302, 0.0315, 0.0364, 0.0306, 0.0353, 0.0369, 0.0426\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.6968, 0.7310, 0.9674, 0.7522, 0.7408, 0.7695\right\}, \\ \left\{ 0.0011, 0.0063, 0.0079, 0.0016, 0.0063, 0.0112\right\} \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.4659, 0.5233, 0.9234, 0.4899, 0.5424, 0.6876\right\}, \\ \left\{ 0.0063, 0.0084, 0.0073, 0.0097, 0.0076, 0.0102, 0.0088, 0.0117\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.4784, 0.5990, 0.9543, 0.6006, 0.4852, 0.6032, 0.4879, 0.6048\right\}, \\ \left\{ 0.0103, 0.0119, 0.0136, 0.0155, 0.0179, 0.0204\right\} \end{array} \right) \end{array} \right)

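For readers who want to reproduce entries of Table 6, the following Python sketch (function names are illustrative) applies the Einstein weighted-averaging forms that appear in the statement of Theorem 4.3, with q = 3 and without the ordering step, to every combination of grades drawn from the hesitant sets of one alternative. The example call uses the lower-approximation membership sets of A_1 from Tables 2 and 3 together with the weight vector of the example; the first value it prints, 0.4934, matches the first entry for A_1 in Table 6.

```python
from itertools import product
from math import prod

def ewa_membership(picks, weights, q=3):
    """Einstein weighted-averaging form for one combination of membership grades."""
    a = prod((1 + m**q) ** w for m, w in zip(picks, weights))
    b = prod((1 - m**q) ** w for m, w in zip(picks, weights))
    return ((a - b) / (a + b)) ** (1 / q)

def ewa_nonmembership(picks, weights, q=3):
    """Einstein weighted-averaging form for one combination of non-membership grades."""
    a = prod((n**q) ** w for n, w in zip(picks, weights))
    b = prod((2 - n**q) ** w for n, w in zip(picks, weights))
    return (2 * a / (b + a)) ** (1 / q)

def aggregate_hesitant(sets, weights, fn, q=3):
    """Apply fn to every combination drawn from the hesitant sets (one per criterion)."""
    return [fn(picks, weights, q) for picks in product(*sets)]

weights = [0.13, 0.27, 0.29, 0.31]
mu_sets = [[0.1, 0.2, 0.5], [0.5, 0.7], [0.4], [0.6]]   # lower mu sets of A1 (Tables 2 and 3)
print([round(x, 4) for x in aggregate_hesitant(mu_sets, weights, ewa_membership)])
# first printed value is 0.4934 (cf. Table 6)
```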

Case 2. The score values from Tables 2–5 have been computed and are presented in Table 7. The information has been ordered on the basis of the score values, as shown in Tables 8–11. Aggregation information using the Einstein ordered weighted averaging operator is given in Table 12.

    Table 7.  Score value of Tables 2–5.
    \chi _{1} \chi _{2} \chi _{3} \chi _{4}
    A_{1} 0.5542 0.4250 0.3750 0.4667
    A_{2} 0.4000 0.5542 0.5417 0.6833
    A_{3} 0.5625 0.3792 0.4750 0.5125
    A_{4} 0.3500 0.4875 0.4125 0.5083

    Table 8.  Ordered expert information.
    \chi _{1} \chi _{2}
    A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1, 0.2, 0.5\right), \\ \left( 0.3, 0.4\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.8\right), \\ \left( 0.4, 0.6\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.6\right) {\bf{, }} \\ \left( 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.6, 0.8, 0.9\right), \\ \left( 0.7, 0.9\right) \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.8\right), \\ \left( 0.5\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.7\right), \\ \left( 0.1, 0.3, 0.4\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2, 0.4, 0.5\right), \\ \left( 0.5\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.6, 0.7\right), \\ \left( 0.3\right) \end{array} \right) \end{array} \right)

    Table 9.  Ordered expert information.
    \chi _{3} \chi _{4}
    A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.5, 0.7\right), \\ \left( 0.5, 0.6\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4, 0.5\right), \\ \left( 0.7, 0.9\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4\right), \\ \left( 0.3, 0.7\right) \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left( 0.5\right), \\ \left( 0.9\right) \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.8\right), \\ \left( 0.4, 0.5, 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.2, 0.5\right), \\ \left( 0.4, 0.5\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.6, 0.7\right), \\ \left( 0.7, 0.9\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3, 0.5\right), \\ \left( 0.6\right) \end{array} \right) \end{array} \right)

    Table 10.  Ordered expert information.
    \chi _{1} \chi _{2}
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4, 0.5, 0.6\right), \\ \left( 0.6, 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.9\right), \\ \left( 0.5\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.3, 0.6\right), \\ \left( 0.8\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.7\right), \\ \left( 0.3\right) \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.6, 0.7, 0.9\right), \\ \left( 0.3, 0.4\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.2, 0.7\right), \\ \left( 0.7, 0.8, 0.9\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4, 0.5\right), \\ \left( 0.4\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1, 0.2\right), \\ \left( 0.2, 0.3\right) \end{array} \right) \end{array} \right)

    Table 11.  Ordered expert information.
    \chi _{3} \chi _{4}
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.3\right), \\ \left( 0.7, 0.8\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.7, 0.8\right), \\ \left( 0.1, 0.4, 0.7\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1\right), \\ \left( 0.5, 0.6\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4, 0.6, 0.7\right), \\ \left( 0.5, 0.7\right) \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.3\right), \\ \left( 0.7, 0.8\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.7\right), \\ \left( 0.6\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4\right), \\ \left( 0.5, 0.6\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3, 0.4\right), \\ \left( 0.8\right) \end{array} \right) \end{array} \right)

    Table 12.  Aggregated information using q-ROHFREOWA.
    A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.4865, 0.5656, 0.4878, 0.5665, 0.5081, 0.5814\right\}, \\ \left\{ 0.4419, 0.5744, 0.4669, 0.6049, 0.4583, 0.5945, 0.4841, 0.6257\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.5898, 0.5729, 0.6640, 0.6513, 0.7238, 0.7139\right\}, \\ \left\{ 0.7150, 0.7732, 0.7692, 0.8277, 0.7481, 0.8067, 0.8026, 0.8609\right\} \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.6682, 0.6957, 0.6783, 0.7048, 0.6889, 0.7144\right\}, \\ \left\{ 0.5235, 0.5766, 0.5573, 0.6127, 0.6149, 0.6737\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.4876, 0.5265, 0.5306, 0.5637, 0.5343, 0.5669, 0.5704, 0.5991\right\}, \\ \left\{ 0.3535, 0.3777, 0.4065, 0.4339, 0.4217, 0.4500\right\} \end{array} \right) \end{array} \right)
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.2875, 0.4372, 0.3167, 0.4507, 0.3529, 0.4697\right\}, \\ \left\{ 0.6461, 0.6810, 0.6738, 0.7094, 0.6591, 0.6944, 0.6872, 0.7230\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.6896, 0.7180, 0.7408, 0.7247, 0.7495, 0.7695\right\}, \\ \left\{ 0.2751, 0.3088, 0.4092, 0.4578, 0.4847, 0.5407\right\} \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.4187, 0.4480, 0.4502, 0.4758, 0.5473, 0.5647\right\}, \\ \left\{ 0.4897, 0.5188, 0.5127, 0.5429, 0.5075, 0.5375, 0.5313, 0.5622\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.4833, 0.4989, 0.4859, 0.5014, 0.5400, 0.5525, 0.5421, 0.5545\right\}, \\ \left\{ 0.5111, 0.5659, 0.5217, 0.5773, 0.5325, 0.5889\right\} \end{array} \right) \end{array} \right)


    Case 3. The weighted information is depicted in Tables 13 and 14, and the score values of the weighted matrix are given in Table 15. The ordered information is presented in Tables 16 and 17, respectively. Finally, the aggregation information using the Einstein hybrid weighted averaging operator is shown in Table 18.

    Table 13.  Weighted information (EWA).
    \chi _{1} \chi _{2}
    A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.0479, 0.0958, 0.2400\right), \\ \left( 0.9159, 0.9339\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3960\right), \\ \left( 0.9339, 0.9599\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.3276, 0.4638\right) {\bf{, }} \\ \left( 0.8609, 0.8933\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3276, 0.2618\right), \\ \left( 0.8989, 0.9734\right) \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2890, 0.3400\right), \\ \left( 0.9703, 0.9897\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1438, 0.2400\right), \\ \left( 0.9599\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1308, 0.2618, 0.3276\right), \\ \left( 0.8609\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3944, 0.4638\right), \\ \left( 0.7285\right) \end{array} \right) \end{array} \right)
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1917, 0.2400, 0.2890\right), \\ \left( 0.9599, 0.9703\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4666\right), \\ \left( 0.9481\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.0654\right), \\ \left( 0.8609, 0.8933\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.2618, 0.3944, 0.4638\right), \\ \left( 0.8609, 0.9217\right) \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1917\right), \\ \left( 0.9481, 0.9599\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1438, 0.1917\right), \\ \left( 0.9801\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2618, 0.3276\right), \\ \left( 0.8222\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.0654, 0.1308\right), \\ \left( 0.7058, 0.7733\right) \end{array} \right) \end{array} \right)

    Table 14.  Weighted information (EWA).
    \chi _{3} \chi _{4}
    A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2679\right), \\ \left( 0.7563, 0.9158\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3352\right), \\ \left( 0.9715\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4080\right), \\ \left( 0.9128\right) \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left( 0.4080, 0.5578, 0.6539\right), \\ \left( 0.9128, 0.9705\right) \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.5518\right), \\ \left( 0.8087, 0.8503, 0.9158\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1339, 0.3352\right), \\ \left( 0.8087, 0.8503\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.5578\right), \\ \left( 0.8450\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4797\right), \\ \left( 0.5574, 0.7478, 0.8020\right) \end{array} \right) \end{array} \right)
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2008\right), \\ \left( 0.9158, 0.9440\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4745, 0.5518\right), \\ \left( 0.5704, 0.8087, 0.9158\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2031, 0.4080\right), \\ \left( 0.9421\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4797\right), \\ \left( 0.7478\right) \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2008\right), \\ \left( 0.9158, 0.9440\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4745\right), \\ \left( 0.8851\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4080, 0.4797, 0.6539\right), \\ \left( 0.7478, 0.8020\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1354, 0.4797\right), \\ \left( 0.9128, 0.9421, 0.9705\right) \end{array} \right) \end{array} \right)

    Table 15.  Score values of weighted (EWA) matrix.
    \chi _{1} \chi _{2} \chi _{3} \chi _{4}
    A_{1} 0.1630 0.2193 0.1989 0.2734
    A_{2} 0.1416 0.2700 0.2746 0.3725
    A_{3} 0.1984 0.1676 0.2548 0.2738
    A_{4} 0.1064 0.2078 0.2151 0.2762

    Table 16.  Ordered weighted information (EWA).
    \chi _{1} \chi _{2}
    A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4080\right), \\ \left( 0.9128\right) \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left( 0.4080, 0.5578, 0.6539\right), \\ \left( 0.9128, 0.9705\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.3276, 0.4638\right) {\bf{, }} \\ \left( 0.8609, 0.8933\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3276, 0.2618\right), \\ \left( 0.8989, 0.9734\right) \end{array} \right) \end{array} \right)
A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.5578\right), \\ \left( 0.8450\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4797\right), \\ \left( 0.5574, 0.7478, 0.8020\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.5518\right), \\ \left( 0.8087, 0.8503, 0.9158\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1339, 0.3352\right), \\ \left( 0.8087, 0.8503\right) \end{array} \right) \end{array} \right)
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2031, 0.4080\right), \\ \left( 0.9421\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4797\right), \\ \left( 0.7478\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2008\right), \\ \left( 0.9158, 0.9440\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4745, 0.5518\right), \\ \left( 0.5704, 0.8087, 0.9158\right) \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4080, 0.4797, 0.6539\right), \\ \left( 0.7478, 0.8020\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1354, 0.4797\right), \\ \left( 0.9128, 0.9421, 0.9705\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2008\right), \\ \left( 0.9158, 0.9440\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4745\right), \\ \left( 0.8851\right) \end{array} \right) \end{array} \right)

    Table 17.  Ordered weighted information (EWA).
    \chi _{3} \chi _{4}
    A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2679\right), \\ \left( 0.7563, 0.9158\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3352\right), \\ \left( 0.9715\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.0479, 0.0958, 0.2400\right), \\ \left( 0.9159, 0.9339\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3960\right), \\ \left( 0.9339, 0.9599\right) \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1308, 0.2618, 0.3276\right), \\ \left( 0.8609\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3944, 0.4638\right), \\ \left( 0.7285\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2890, 0.3400\right), \\ \left( 0.9703, 0.9897\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1438, 0.2400\right), \\ \left( 0.9599\right) \end{array} \right) \end{array} \right)
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1917, 0.2400, 0.2890\right), \\ \left( 0.9599, 0.9703\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4666\right), \\ \left( 0.9481\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.0654\right), \\ \left( 0.8609, 0.8933\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.2618, 0.3944, 0.4638\right), \\ \left( 0.8609, 0.9217\right) \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2618, 0.3276\right), \\ \left( 0.8222\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.0654, 0.1308\right), \\ \left( 0.7058, 0.7733\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1917\right), \\ \left( 0.9481, 0.9599\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.1438, 0.1917\right), \\ \left( 0.9801\right) \end{array} \right) \end{array} \right)

    Table 18.  Aggregated information using q-ROHFREHWA.
A_{1} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.3647, 0.3527, 0.3967, 0.3866, 0.4252, 0.4165\right\}, \\ \left\{ 0.9331, 0.9412, 0.9531, 0.9610, 0.9406, 0.9487, 0.9604, 0.9681\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.3647, 0.3527, 0.3967, 0.3866, 0.4252, 0.4165\right\}, \\ \left\{ 0.9331, 0.9412, 0.9531, 0.9610, 0.9406, 0.9487, 0.9604, 0.9683\right\} \end{array} \right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.4246, 0.4331, 0.4329, 0.4410, 0.4415, 0.4494\right\}, \\ \left\{ 0.8802, 0.8869, 0.8912, 0.8978, 0.9087, 0.9152\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.3233, 0.3336, 0.3556, 0.3643, 0.3511, 0.3600, 0.3791, 0.3867\right\}, \\ \left\{ 0.7996, 0.8109, 0.8260, 0.8373, 0.8331, 0.8443\right\} \end{array} \right) \end{array} \right)
    A_{3} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.1755, 0.1946, 0.2180, 0.2361, 0.2473, 0.2626\right\}, \\ \left\{ 0.9157, 0.9254, 0.9189, 0.9286, 0.9234, 0.9330, 0.9266, 0.9362\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.4279, 0.4510, 0.4696, 0.4564, 0.4768, 0.4935\right\}, \\ \left\{ 0.7915, 0.8110, 0.8585, 0.8776, 0.8874, 0.9061\right\} \end{array} \right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( \begin{array}{c} \left\{ 0.2641, 0.2861, 0.2884, 0.6061, 0.3606, 0.3729\right\}, \\ \left\{ 0.8781, 0.8821, 0.8861, 0.8900, 0.8850, 0.8889, 0.8929, 0.8968\right\} \end{array} \right) {\bf{, }} \\ \left( \begin{array}{c} \left\{ 0.3117, 0.3159, 0.3136, 0.3178, 0.3539, 0.3573, 0.3554, 0.3587\right\}, \\ \left\{ 0.8686, 0.8879, 0.8726, 0.8918, 0.8765, 0.8957\right\} \end{array} \right) \end{array} \right)


    Step 5. Table 19 and Figure 3 show the score values for all alternatives under established aggregation operators.

    Table 19.  Score values.
    Operators SR\left( A_{1}\right) SR\left( A_{2}\right) SR\left( A_{3}\right) SR\left( A_{4}\right)
    q-ROHFREWA 0.4690 0.6064 0.5051 0.5181
    q-ROHFREOWA 0.4665 0.5597 0.5052 0.4823
q-ROHFREHWA 0.2198 0.2680 0.2259 0.2322

Figure 3.  Graphical representation of score values obtained through q-ROHFREWA, q-ROHFREOWA and q-ROHFREHWA.

Step 6. The alternatives A_k\; (k = 1, 2, ..., 4) have been arranged in descending order of their score values, as shown in Table 20.

    Table 20.  The ordering of the alternatives.
    Operator Scores Best Alternative
    q-ROHFREWA SR\left( A_{2}\right) >SR\left( A_{4}\right) >SR\left( A_{3}\right) >SR\left( A_{1}\right) A_{2}
    q-ROHFREOWA SR\left( A_{2}\right) >SR\left( A_{3}\right) >SR\left( A_{4}\right) >SR\left( A_{1}\right) A_{2}
q-ROHFREHWA SR\left( A_{2}\right) >SR\left( A_{4}\right) >SR\left( A_{3}\right) >SR\left( A_{1}\right) A_{2}


Based on the results of the preceding computational procedure, we determined that the alternative A_2 is the best option among all and is thus strongly recommended.

    Garbage disposal plant site selection involves a complex process that requires a thorough understanding of the determination aspects of the process. The following are some of the most significant factors that should be considered while selecting a location for a garbage disposal plant:

    (i) Environmental protection level: One of the most critical aspects of site selection is considering the environmental factors. This includes the potential impact on air and water quality, noise pollution, land use and local biodiversity. The site should be located in an area that minimizes any negative environmental impact.

(ii) Suitability of natural conditions: The suitability of natural conditions and the geographic location of the site are other critical factors in the site selection process. The site should be easily accessible and located in a region with a significant population and a consistent supply of waste materials. This includes the availability of infrastructure, such as roads, water supply, electricity, and communication systems.

    (iii) Social benefit: Social factors such as the local community's attitude towards waste disposal, employment opportunities, and economic benefits are also important in the site selection process.

    (iv) Economic benefit: The economic viability of the site is a crucial factor in site selection. The site should be cost-effective to operate, and the waste disposal fees should be competitive with other options. It includes the availability of infrastructure, such as roads, water supply, electricity, and communication systems.

Garbage disposal plant site selection requires a thorough consideration of environmental, geographic, technical, social, legal and economic factors to ensure safe and efficient operation. The present study demonstrates a prudent development in the selection of a site for garbage disposal by incorporating the key variables for assessment, namely environmental, geographic, technical, social, legal and economic factors. This has been done to ensure safe and efficient operation, and it requires the implementation of appropriate strategies through a compact model based on a high degree of sustainability. The site selection process should adhere to local, state and federal laws and regulations; compliance with these regulations ensures that the plant operates in a safe and efficient manner. Global warming and climate change are serious environmental threats. To maintain environmental sustainability, site selection must also consider technological feasibility.

The q-ROHFREWA operator contains the parameter q , which gives it significant flexibility in handling fuzzy information. In our case, the value q = 3 was used. We study the possible variation in the results obtained from the q-ROHFREWA operator. In the following, Table 21 and Figure 4 show the ranking orders based on the score values obtained through the q-ROHFREWA operator. The parameter q was varied over the values 1 to 5, 10, 15, 20, 50 and 100, and the alternatives were ranked according to the q-ROHFR score values. When the value of q in the q-ROHFREWA operator was varied, the optimal alternative was always the same for 1 to 5 and 10, as were the orderings of the alternatives ( A_{2}\succ A_{4}\succ A_{3}\succ A_{1} ). For the values 15, 20, 50 and 100, the optimal alternative was again always the same, as were the orderings of the alternatives ( A_{2}\succ A_{4}\succ A_{1}\succ A_{3} ). Using the q-ROHFREWA operator and the score function, A_{2} had the highest score value and was therefore the optimal alternative. A_{1} had the lowest score for the parameter values 1 to 5 and 10, and A_{3} had the lowest score for the parameter values 15, 20, 50 and 100, as presented in Table 21. Therefore, the most appropriate alternative was the same for all values of q among 1 to 5, 10, 15, 20, 50 and 100. Figure 4 depicts a graphical representation of Table 21 and visualizes the effect of varying q on the ranking order obtained with the q-ROHFREWA operator.

Table 21.  Ranking order for different values of q in the q-ROHFREWA operator.
    q Score values of alternatives Ranking Best Alternative
    A_{1} A_{2} A_{3} A_{4}
    1 0.5992 0.7522 0.6395 0.6623 A_{2}\succ A_{4}\succ \ A_{3}\succ A_{1} A_{2}
    2 0.5122 0.6657 0.5545 0.5698 A_{2}\succ A_{4}\succ \ A_{3}\succ A_{1} A_{2}
    3 0.4690 0.6064 0.5051 0.5181 A_{2}\succ A_{4}\succ \ A_{3}\succ A_{1} A_{2}
    4 0.4441 0.5657 0.4734 0.4868 A_{2}\succ A_{4}\succ \ A_{3}\succ A_{1} A_{2}
    5 0.4282 0.5366 0.4518 0.4660 A_{2}\succ A_{4}\succ \ A_{3}\succ A_{1} A_{2}
    10 0.3969 0.4658 0.4033 0.4185 A_{2}\succ A_{4}\succ \ A_{3}\succ A_{1} A_{2}
    15 0.3876 0.4382 0.3874 0.4003 A_{2}\succ A_{4}\succ \ A_{1}\succ A_{3} A_{2}
    20 0.3833 0.4235 0.3804 0.3905 A_{2}\succ A_{4}\succ \ A_{1}\succ A_{3} A_{2}
    50 0.3759 0.3953 0.3537 0.3717 A_{2}\succ A_{4}\succ \ A_{1}\succ A_{3} A_{2}
    100 0.2989 0.3844 0.2306 0.3144 A_{2}\succ A_{4}\succ \ A_{1}\succ A_{3} A_{2}

Figure 4.  The effect of q on the ranking order using the q-ROHFREWA operator.

In this section, we compare the outcomes of the suggested aggregation operators with those of Ashraf et al. [55] and with the enhanced TOPSIS and VIKOR methodologies for q-ROHFR information, in order to deal with MCGDM problems and to verify the obtained results. To demonstrate the importance and credibility of the developed q-ROHFR Einstein weighted averaging aggregation operators, a comparison of their features with those of existing decision support approaches is presented.

Ashraf et al. [55] developed a list of q-rung orthopair fuzzy rough weighted aggregation operators to sort out the best alternative. We take the collective expert information from Ashraf et al. [55] and use it to evaluate the best alternative. The collective expert information is displayed in Table 22:

    Table 22.  Collected experts information.
    \chi _{1} \chi _{2} \chi _{3} \chi _{4} \chi _{5}
    A_{1} \left( \begin{array}{c} \left( 0.64, 0.13\right), \\ \left( 0.46, 0.15\right) \end{array} \right) \left( \begin{array}{c} \left( 0.8, 0.16\right), \\ \left( 0.63, 0.26\right) \end{array} \right) \left( \begin{array}{c} \left( 0.75, 0.22\right), \\ \left( 0.58, 0.18\right) \end{array} \right) \left( \begin{array}{c} \left( 0.67, 0.13\right), \\ \left( 0.60, 0.17\right) \end{array} \right) \left( \begin{array}{c} \left( 0.78, 0.1\right), \\ \left( 0.67, 0.15\right) \end{array} \right)
    A_{2} \left( \begin{array}{c} \left( 0.55, 0.15\right), \\ \left( 0.62, 0.24\right) \end{array} \right) \left( \begin{array}{c} \left( 0.71, 0.18\right), \\ \left( 0.4, 0.26\right) \end{array} \right) \left( \begin{array}{c} \left( 0.43, 0.17\right), \\ \left( 0.51, 0.20\right) \end{array} \right) \left( \begin{array}{c} \left( 0.61, 0.26\right), \\ \left( 0.40, 0.20\right) \end{array} \right) \left( \begin{array}{c} \left( 0.63, 0.19\right), \\ \left( 0.46, 0.26\right) \end{array} \right)
    A_{3} \left( \begin{array}{c} \left( 0.57, 0.24\right), \\ \left( 0.40, 0.27\right) \end{array} \right) \left( \begin{array}{c} \left( 0.71, 0.17\right), \\ \left( 0.65, 0.23\right) \end{array} \right) \left( \begin{array}{c} \left( 0.49, 0.22\right), \\ \left( 0.36, 0.39\right) \end{array} \right) \left( \begin{array}{c} \left( 0.54, 0.40\right), \\ \left( 0.45, 0.36\right) \end{array} \right) \left( \begin{array}{c} \left( 0.56, 0.36\right), \\ \left( 0.32, 0.18\right) \end{array} \right)
    A_{4} \left( \begin{array}{c} \left( 0.56, 0.17\right), \\ \left( 0.59, 0.23\right) \end{array} \right) \left( \begin{array}{c} \left( 0.50, 0.32\right), \\ \left( 0.43, 0.42\right) \end{array} \right) \left( \begin{array}{c} \left( 0.27, 0.13\right), \\ \left( 0.36, 0.29\right) \end{array} \right) \left( \begin{array}{c} \left( 0.46, 0.19\right), \\ \left( 0.37, 0.36\right) \end{array} \right) \left( \begin{array}{c} \left( 0.49, 0.25\right), \\ \left( 0.37, 0.26\right) \end{array} \right)


    Now, we utilize the q-ROHFR Einstein weighted average aggregation operator and the q-ROHFR Einstein ordered weighted average aggregation operator to identify the best alternative; the aggregated information is presented in Table 23 as follows:

    Table 23.  Aggregated information (q-ROHFREWA).
    A_{1} \left( \left( 0.7338, 0.1396\right), \left( 0.6058, 0.1743\right) \right)
    A_{2} \left( \left( 0.5992, 0.1959\right), \left( 0.4774, 0.2263\right) \right)
    A_{3} \left( \left( 0.5717, 0.2878\right), \left( 0.4502, 0.2807\right) \right)
    A_{4} \left( \left( 0.4656, 0.1988\right), \left( 0.4204, 0.3077\right) \right)


    A comparative analysis of the proposed operators with the expert data collected by Ashraf et al. [55] is presented in Table 24:

    Table 24.  Score and ranking of q-ROFRSs.
    Score
    Ashraf et al. SR\left( A_{1}\right) SR\left( A_{2}\right) SR\left( A_{3}\right) SR\left( A_{4}\right) Ranking
    q-ROFREWA 0.999 0.489 0.201 0.011 A_{1}>A_{2}>A_{3}>A_{4}
    Score
    Developed method SR\left( A_{1}\right) SR\left( A_{2}\right) SR\left( A_{3}\right) SR\left( A_{4}\right) Ranking
    q-ROHFREWA 0.7564 0.6636 0.6134 0.5949 A_{1}>A_{2}>A_{3}>A_{4}
    q-ROHFREOWA 0.7612 0.6741 0.6166 0.5991 A_{1}>A_{2}>A_{3}>A_{4}


    The comparison findings show that the proposed operators are more general and more effective for aggregating uncertain information than the existing operators developed by Ashraf et al. [55]. For this reason, our recommended techniques are more comprehensive and more adaptable for handling q-rung orthopair hesitant fuzzy rough MADM challenges.

    The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is an MADM method used for selecting the best option from a set of alternatives evaluated against multiple criteria. It is widely applied in engineering, management, and the social sciences because it provides a clear and systematic approach to decision-making, although it requires a sound understanding of the problem and careful specification of the criteria and their weights. The method was first introduced by Hwang and Yoon [57] in 1981 and allows policy makers and authorities to assess the positive ideal solution (PIS) and the negative ideal solution (NIS). TOPSIS rests on the assumption that the optimal option is the one closest to the positive ideal solution and farthest from the negative ideal solution [56,59]. The TOPSIS method involves the following steps:

    Step 1. Let the set of alternatives be A = \{A_{1}, A_{2}, A_{3}, ..., A_{m}\} and \chi = \{\chi_{1}, \chi_{2}, \chi_{3}, ..., \chi_{r}\} be the set of criteria. The expert decision matrix is presented as

    \begin{eqnarray*} M & = &\left[ \underline{\beth }(\kappa _{\imath j}^{\hat{z_{\imath}}}), \overline{\beth } (\kappa _{\imath j}^{\hat{z_{\imath}}})\right] _{m\times n} \\ & = &\left[ \begin{array}{cccc} \left( \underline{\beth }(\kappa _{11}), \overline{\beth }(\kappa _{11})\right) & \left( \underline{\beth }(\kappa _{12}), \overline{\beth }(\kappa _{12})\right) & \cdots & \left( \underline{\beth }(\kappa _{1j}), \overline{\beth }(\kappa _{1j})\right) \\ \left( \underline{\beth }(\kappa _{21}), \overline{\beth }(\kappa _{21})\right) & \left( \underline{\beth }(\kappa _{22}), \overline{\beth }(\kappa _{22})\right) & \cdots & \left( \underline{\beth }(\kappa _{2j}), \overline{\beth }(\kappa _{2j})\right) \\ \left( \underline{\beth }(\kappa _{31}), \overline{\beth }(\kappa _{31})\right) & \left( \underline{\beth }(\kappa _{32}), \overline{\beth }(\kappa _{32})\right) & \cdots & \left( \underline{\beth }(\kappa _{3j}), \overline{\beth }(\kappa _{3j})\right) \\ \vdots & \vdots & \ddots & \vdots \\ \left( \underline{\beth }(\kappa _{\imath1}), \overline{\beth }(\kappa _{\imath1})\right) & \left( \underline{\beth }(\kappa _{\imath2}), \overline{\beth }(\kappa _{\imath2})\right) & \cdots & \left( \underline{\beth }(\kappa _{\imath j}), \overline{\beth }(\kappa _{\imath j})\right) \end{array} \right] , \end{eqnarray*}

    where

    \begin{equation*} \overline{\beth }(\kappa _{\imath j}) = \left\{ \langle \propto_{\imath} , \beta _{h_{\overline{ \beth }(\kappa )}}(\propto_{\imath} ), \psi _{h_{\overline{\beth }(\kappa )}}(\propto_{\imath} )\rangle |\propto_{\imath} \in \mathcal{T} \right\} \end{equation*}

    and

    \begin{equation*} \underline{\beth }(\kappa ) = \left\{ \langle \propto_{\imath} , \beta _{h_{\underline{ \beth }(\kappa )}}(\propto_{\imath} ), \psi _{h_{\underline{\beth }(\kappa )}}(\propto_{\imath} )\rangle |\propto_{\imath} \in \mathcal{T} \right\} \end{equation*}

    such that

    \begin{equation*} 0\leq \left( \max (\beta _{h_{\overline{\beth }(\kappa )}}(\propto_{\imath} ))\right) ^{q}+\left( \min (\psi _{h_{\overline{\beth }(\kappa )}}(\propto_{\imath} ))\right) ^{q}\leq 1 \end{equation*}

    and

    \begin{equation*} 0\leq \left( \min (\beta _{h_{\underline{\beth }(\kappa )}}(\propto_{\imath} )\right) ^{q}+\left( \max (\psi _{h_{\underline{\beth }(\kappa )}}(\propto_{\imath} ))\right) ^{q}\leq 1 \end{equation*}

    are the q-ROHF rough values.
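    As a simple illustration of these boundary conditions, the following sketch checks them for a single q-ROHF rough value. The encoding of an evaluation as two pairs of hesitant membership and non-membership lists is an assumption made here for demonstration purposes, not the paper's implementation.

        def is_valid_qrohfr(upper, lower, q=3):
            """Check the q-ROHF rough constraints for one evaluation.

            upper, lower: tuples (membership_list, non_membership_list) for the upper
            and lower approximations, with all grades taken from [0, 1].
            """
            beta_up, psi_up = upper
            beta_lo, psi_lo = lower
            cond_upper = 0 <= max(beta_up) ** q + min(psi_up) ** q <= 1
            cond_lower = 0 <= min(beta_lo) ** q + max(psi_lo) ** q <= 1
            return cond_upper and cond_lower

        # Illustrative values in the style of the expert tables.
        print(is_valid_qrohfr(upper=([0.9], [0.5]), lower=([0.4, 0.5, 0.6], [0.6, 0.7])))  # True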

    Step 2. We collect the information from the decision makers in the form of q-ROHFRNs.

    Step 3. We normalize the information provided by the DMs, since the decision matrix may involve both benefit and cost criteria, as depicted below:

    \begin{equation*} \left( H\right) ^{\hat{z_{\imath}}} = \left[ \begin{array}{cccc} \left( \overline{\beth }(\kappa _{11}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{11}^{\hat{z_{\imath}}})\right) & \left( \overline{\beth }(\kappa _{12}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{12}^{\hat{z_{\imath}}})\right) & \cdots & \left( \overline{\beth }(\kappa _{1j}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{1j}^{\hat{z_{\imath}}})\right) \\ \left( \overline{\beth }(\kappa _{21}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{21}^{\hat{z_{\imath}}})\right) & \left( \overline{\beth }(\kappa _{22}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{22}^{\hat{z_{\imath}}})\right) & \cdots & \left( \overline{\beth }(\kappa _{2j}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{2j}^{\hat{z_{\imath}}})\right) \\ \left( \overline{\beth }(\kappa _{31}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{31}^{\hat{z_{\imath}}})\right) & \left( \overline{\beth }(\kappa _{32}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{32}^{\hat{z_{\imath}}})\right) & \cdots & \left( \overline{\beth }(\kappa _{3j}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{3j}^{\hat{z_{\imath}}})\right) \\ \vdots & \vdots & \ddots & \vdots \\ \left( \overline{\beth }(\kappa _{\imath 1}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{\imath 1}^{\hat{z_{\imath}}})\right) & \left( \overline{\beth }(\kappa _{\imath 2}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{\imath 2}^{\hat{z_{\imath}}})\right) & \cdots & \left( \overline{\beth }(\kappa _{\imath j}^{\hat{z_{\imath}}}), \underline{\beth }(\kappa _{\imath j}^{\hat{z_{\imath}}})\right) \end{array} \right] \end{equation*}

    where \hat{z_{\imath}} indexes the experts.

    Step 4. We then consider the normalized expert matrices \left(N\right) ^{\hat{z_{\imath}}}, given as

    \begin{equation*} \left( N\right) ^{\hat{z_{\imath}}} = \left\{ \begin{array}{ccc} \beth (\kappa _{\imath j}) = \left( \underline{\beth }\left( \kappa _{\imath j}\right) , \overline{\beth }\left( \kappa _{\imath j}\right) \right) & {\text{if}} & {\text{for benefit, }} \\ \left( \beth (\kappa _{\imath j})\right) ^{c} = \left( \left( \underline{\beth }\left( \kappa _{\imath j}\right) \right) ^{c}, \left( \overline{\beth }\left( \kappa _{\imath j}\right) \right) ^{c}\right) & {\text{if}} & {\text{for cost }}. \end{array} \right. \end{equation*}
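    A minimal sketch of this normalization step is given below. It assumes, as a common convention for hesitant fuzzy data rather than a rule stated in this paper, that the complement of a q-ROHFRN is obtained by swapping its membership and non-membership hesitant sets; benefit criteria are left unchanged and cost criteria are complemented.

        def complement(qrohfrn):
            """Complement of a q-ROHFRN encoded as
            ((lower_membership, lower_non_membership), (upper_membership, upper_non_membership)).
            The membership and non-membership hesitant sets are swapped (assumed convention)."""
            (mu_lo, nu_lo), (mu_up, nu_up) = qrohfrn
            return ((nu_lo, mu_lo), (nu_up, mu_up))

        def normalize(matrix, criterion_types):
            """matrix[i][j] is the q-ROHFRN of alternative i under criterion j;
            criterion_types[j] is either "benefit" or "cost"."""
            return [
                [value if criterion_types[j] == "benefit" else complement(value)
                 for j, value in enumerate(row)]
                for row in matrix
            ]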

    Step 5. The PIS and NIS are computed based on the score value. The PIS and NIS are referred to as \daleth ^{+} = \left(\gimel _{1}^{+}, \gimel _{2}^{+}, \gimel _{3}^{+}, ...\gimel _{r}^{+}\right) and \daleth ^{-} = \left(\gimel _{1}^{-}, \gimel _{2}^{-}, \gimel _{3}^{-}, ..., \gimel _{r}^{-}\right) , respectively. PIS \daleth ^{+} is determined using the formula

    \begin{eqnarray*} \daleth ^{+} & = &\left( \gimel _{1}^{+}, \gimel _{2}^{+}, \gimel _{3}^{+}, ..., \gimel _{r}^{+}\right) \\ & = &\left( \max\limits_{\imath}score(\gimel _{\imath 1}), \max\limits_{\imath}score(\gimel _{\imath 2}), \max\limits_{\imath}score(\gimel _{\imath 3}), ..., \max\limits_{\imath}score(\gimel _{\imath r})\right). \end{eqnarray*}

    Similarly, the NIS is determined with the following formula:

    \begin{eqnarray*} \daleth ^{-} & = &\left( \gimel _{1}^{-}, \gimel _{2}^{-}, \gimel _{3}^{-}, ..., \gimel _{r}^{-}\right) \\ & = &\left( \min\limits_{\imath}score(\gimel _{\imath 1}), \min\limits_{\imath}score(\gimel _{\imath 2}), \min\limits_{\imath}score(\gimel _{\imath 3}), ..., \min\limits_{\imath}score(\gimel _{\imath r})\right). \end{eqnarray*}

    After that, we compute the geometric distances between all of the alternatives and PIS \daleth ^{+} as follows:

    \begin{eqnarray*} d(\alpha _{\imath j}, \daleth ^{+}) & = &\frac{1}{8}\left( \begin{array}{c} \left( \begin{array}{c} \frac{1}{\#\aleph}\sum_{s = 1}^{\#\aleph}\left\vert \left( \underline{\mu } _{\imath j(s)}\right) ^{2}-\left( \underline{\mu }_{\imath}^{+}\right) ^{2}\right\vert \\ +\left\vert \left( \overline{\mu }_{\imath j(s)}\right) ^{2}-\left( \overline{\mu } _{\imath(s)}^{+}\right) ^{2}\right\vert \end{array} \right) \\ +\left( \begin{array}{c} \frac{1}{\#\mathcal{H}}\sum_{s = 1}^{\#\mathcal{H}}\left\vert \left( \underline{\nu } _{\imath j(s)}\right) ^{2}-\left( \underline{\nu }_{\imath(s)}^{+}\right) ^{2}\right\vert \\ +\left\vert \left( \overline{\nu _{h}}_{_{\imath j}}\right) ^{2}-\left( \overline{ \nu _{h}}_{_{\imath}}^{+}\right) ^{2}\right\vert \end{array} \right) \end{array} \right) , {\text{ }} \\ {\text{where }}\imath & = &1, 2, 3, ..., r, {\text{and }}j = 1, 2, 3, ..., m. \end{eqnarray*}

    Here, \#\aleph and \#\mathcal{H} represent the numbers of elements in the hesitant membership and non-membership sets, respectively. Consequently, the geometric distances between all alternatives and the NIS \daleth ^{-} are described as follows:

    \begin{eqnarray*} d(\alpha _{\imath j}, \daleth ^{-}) & = &\frac{1}{8}\left( \begin{array}{c} \left( \begin{array}{c} \frac{1}{\#\aleph}\sum_{s = 1}^{\#\aleph}\left\vert \left( \underline{\mu } _{\imath j(s)}\right) ^{2}-\left( \underline{\mu }_{\imath(s)}^{-}\right) ^{2}\right\vert \\ +\left\vert \left( \overline{\mu }_{\imath j(s)}\right) ^{2}-\left( \overline{\mu } _{\imath(s)}^{-}\right) ^{2}\right\vert \end{array} \right) \\ +\left( \begin{array}{c} \frac{1}{\#\mathcal{H}}\sum_{s = 1}^{\#\mathcal{H}}\left\vert \left( \underline{\nu } _{\imath j(s)}\right) ^{2}-\left( \underline{\nu }_{\imath(s)}^{-}\right) ^{2}\right\vert \\ +\left\vert \left( \overline{\nu _{h}}_{_{\imath j}}\right) ^{2}-\left( \overline{ \nu _{h}}_{_{\imath}}^{-}\right) ^{2}\right\vert \end{array} \right) \end{array} \right) , \\ {\text{where }}\imath & = &1, 2, 3, ..., r, {\text{and }}\;j = 1, 2, 3, ..., m. \end{eqnarray*}
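    The two distance expressions above average the absolute differences of squared grades over the hesitant membership set (of size \#\aleph ) and the hesitant non-membership set (of size \#\mathcal{H} ), and scale the result by 1/8. The Python sketch below follows that reading; matching the hesitant elements position-wise and requiring lists of equal length are simplifying assumptions, and the squared grades mirror the formula as printed.

        def qrohfr_distance(value, ideal):
            """Distance between a q-ROHFR value and an ideal solution, each encoded as
            ((lower_mu, upper_mu), (lower_nu, upper_nu)) with hesitant lists of equal
            length (an assumed encoding; elements are matched position-wise)."""
            (mu_lo, mu_up), (nu_lo, nu_up) = value
            (mu_lo_i, mu_up_i), (nu_lo_i, nu_up_i) = ideal

            mu_part = sum(abs(a ** 2 - b ** 2) + abs(c ** 2 - d ** 2)
                          for a, b, c, d in zip(mu_lo, mu_lo_i, mu_up, mu_up_i)) / len(mu_lo)
            nu_part = sum(abs(a ** 2 - b ** 2) + abs(c ** 2 - d ** 2)
                          for a, b, c, d in zip(nu_lo, nu_lo_i, nu_up, nu_up_i)) / len(nu_lo)
            return (mu_part + nu_part) / 8

        # Example with singleton hesitant sets (illustrative values only).
        print(qrohfr_distance((([0.6], [0.7]), ([0.2], [0.3])),
                              (([0.8], [0.9]), ([0.1], [0.2]))))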

    Step 6. The relative closeness indices of the alternatives, calculated for each decision maker, are defined as follows:

    \begin{equation*} RC(\alpha _{\imath j}) = \frac{d(\alpha _{\imath j}, \daleth ^{+})}{d(\alpha _{\imath j}, \daleth ^{-})+d(\alpha _{\imath j}, \daleth ^{+})}. \end{equation*}

    Step 7. The alternatives are ranked according to their relative closeness indices, and the most desirable alternative, i.e., the one with the shortest distance from the PIS, is chosen.
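    Once the aggregate distances of every alternative from the PIS and the NIS are available, Steps 6 and 7 reduce to a few lines, as sketched below; the distance figures used here are placeholders rather than the values of the case study.

        # Placeholder distances of four alternatives to the PIS and NIS.
        d_plus  = {"A1": 0.30, "A2": 0.10, "A3": 0.25, "A4": 0.20}
        d_minus = {"A1": 0.15, "A2": 0.35, "A3": 0.20, "A4": 0.25}

        # RC = d+ / (d- + d+); a smaller index means the alternative lies closer to the PIS.
        rc = {a: d_plus[a] / (d_minus[a] + d_plus[a]) for a in d_plus}
        ranking = sorted(rc, key=rc.get)  # ascending: the best alternative comes first
        print(ranking, "best:", ranking[0])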

    To illustrate the viability of the method, a numerical example applicable to "site selection for a garbage disposal plant" is presented as follows:

    Step 1. Tables 25 contain the decision makers' information based on q-ROHFRNs.

    Step 2. The PIS and NIS determined from this information are given in Table 25:

    Table 25.  Ideal solutions.
    Criteria \daleth ^{+} \daleth ^{-}
    \chi _{1} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4, 0.5, 0.6\right), \\ \left( 0.6, 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.9\right), \\ \left( 0.5\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4\right), \\ \left( 0.5, 0.6\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.3, 0.4\right), \\ \left( 0.8\right) \end{array} \right) \end{array} \right)
    \chi _{2} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.2, 0.4, 0.5\right), \\ \left( 0.5\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.6, 0.7\right), \\ \left( 0.3\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.1\right), \\ \left( 0.5, 0.6\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.4, 0.6, 0.7\right), \\ \left( 0.5, 0.7\right) \end{array} \right) \end{array} \right)
    \chi _{3} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.8\right), \\ \left( 0.4, 0.5, 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.2, 0.5\right), \\ \left( 0.4, 0.5\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.4\right), \\ \left( 0.3, 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.5\right), \\ \left( 0.9\right) \end{array} \right) \end{array} \right)
    \chi _{4} \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.8\right), \\ \left( 0.5\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.7\right), \\ \left( 0.1, 0.3, 0.4\right) \end{array} \right) \end{array} \right) \left( \begin{array}{c} \left( \begin{array}{c} \left( 0.6\right), \\ \left( 0.7\right) \end{array} \right), \\ \left( \begin{array}{c} \left( 0.6, 0.8, 0.9\right), \\ \left( 0.7, 0.9\right) \end{array} \right) \end{array} \right)


    Step 3. The distances of each alternative from the PIS and from the NIS are then calculated.

    Step 4. The relative closeness indices of the alternatives are computed for each decision maker.

    Step 5. According to the resulting ranking of alternatives, visualized in Figure 5, A_{2} has the shortest distance; hence, A_{2} is the best option.

    Figure 5.  The pictorial depiction of the ranking utilizing the TOPSIS approach.

    VIKOR (Vise Kriterijumska Optimizacija I Kompromisno Resenje) is a decision-making method used to rank a set of alternatives against multiple criteria. It is particularly useful when the decision maker needs to consider both the optimal solution and the level of compromise among the alternatives, and it is widely applied in engineering, management, and the social sciences. The method provides a systematic and objective way to identify the best compromise solution among a set of alternatives. The following is a detailed explanation of the modified form of the VIKOR approach based on q-ROHFR information:

    Step 1. Identify the problem and the decision criteria. Construct evaluation matrices for the experts in the form of q-ROHFRVs that shows the performance of each alternative on each criterion.

    Step 2. Using the q-ROHFRWA aggregation operator and the experts' weight vector, aggregate the information collected from the decision makers to obtain the aggregated decision matrix.

    Step 3. Compute the positive ideal solution (PIS) \daleth^+ and the negative ideal solution (NIS) \daleth^- in the form of q-ROHFR information as follows:

    \begin{eqnarray*} \daleth^{+} & = &\left( \mathcal{Y} _{1}^{+}, \mathcal{Y} _{2}^{+}, \mathcal{Y} _{3}^{+}, ..., \mathcal{Y} _{\ell}^{+}\right) = \left( \max\limits_{\imath}\mathcal{Y} _{\imath 1}, \max\limits_{\imath}\mathcal{Y} _{\imath 2}, \max\limits_{\imath}\mathcal{Y} _{\imath 3}, ..., \max\limits_{\imath}\mathcal{Y} _{\imath \ell}\right) , \\ \daleth^{-} & = &\left( \mathcal{Y} _{1}^{-}, \mathcal{Y} _{2}^{-}, \mathcal{Y} _{3}^{-}, ..., \mathcal{Y} _{\ell}^{-}\right) = \left( \min\limits_{\imath}\mathcal{Y} _{\imath 1}, \min\limits_{\imath}\mathcal{Y} _{\imath 2}, \min\limits_{\imath}\mathcal{Y} _{\imath 3}, ..., \min\limits_{\imath}\mathcal{Y} _{\imath \ell}\right). \end{eqnarray*}

    Step 4. Determine the q-ROHFR group utility measure S_{\imath}\; (\imath = 1, 2, 3, ..., \ell) and the regret measure R_{\imath}\; (\imath = 1, 2, 3, ..., \ell) for all alternatives L = \left(A _{1}, A_{2}, A _{3}, ..., A_{\ell}\right) applying the formulas mentioned below:

    \begin{eqnarray*} S_{\imath} & = &\bigoplus_{j = 1}^{\ell}\frac{\propto_{j}d\left( \mathcal{Y} _{\imath j}, \mathcal{Y} _{j}^{+}\right) }{d\left( \mathcal{Y} _{j}^{+}, \mathcal{Y} _{j}^{-}\right) }, {\text{ }} \imath = 1, 2, 3, ..., \ell, \\ R_{\imath} & = &\max\limits_{j} \frac{\propto_{j}d\left( \mathcal{Y} _{\imath j}, \mathcal{Y} _{j}^{+}\right) }{ d\left( \mathcal{Y} _{j}^{+}, \mathcal{Y} _{j}^{-}\right)}, {\text{ }}\imath = 1, 2, 3, ..., \ell. \end{eqnarray*}

    Step 5. Determine the minimum and maximum values of S and R as follows:

    \begin{equation*} S^{\star} = \min\limits_{\imath}S_{\imath}, {\text{ }}S^{\triangleleft } = \max\limits_{\imath}S_{\imath}, {\text{ }} R^{\star} = \min\limits_{\imath}R_{\imath}, {\text{ }}R^{\triangleleft } = \max\limits_{\imath}R_{\imath}, {\text{ }}\imath = 1, 2, 3, ..., \ell. \end{equation*}

    Lastly, we integrate the features of both the group utility S_{\imath} and the individual regret R_{\imath} in order to assess the ranking measure Q_{\imath} for the alternatives L = \left(A _{1}, A _{2}, A _{3}, ..., A _{\ell}\right) as follows:

    \begin{equation*} Q_{\imath} = \chi \frac{S_{\imath}-S^{\star}}{S^{\triangleleft }-S^{\star}}+\left( 1-\chi \right) \frac{ R_{\imath}-R^{\star}}{R^{\triangleleft }-R^{\star}}, {\text{ }}\imath = 1, 2, 3, ..., \ell, \end{equation*}

    where \chi denotes the weight of the strategy of maximum group utility and is essential for assessing the compromise solution. Its value is chosen from the range [0, 1]; 0.5 is a common choice and was used here.

    Step 6. Furthermore, the alternatives are arranged in decreasing order with respect to the group utility measure S_{\imath} , the individual regret measure R_{\imath} , and the ranking measure Q_{\imath} . This yields three ranking lists that are used to determine the best compromise alternative.
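    The following sketch condenses Steps 4 to 6 of the modified VIKOR procedure. It assumes the normalized distance ratios d(\mathcal{Y}_{\imath j}, \mathcal{Y}_{j}^{+})/d(\mathcal{Y}_{j}^{+}, \mathcal{Y}_{j}^{-}) have already been computed for every alternative and criterion, uses \chi = 0.5 as in the case study, and orders the alternatives by decreasing Q as stated in Step 6; all numerical values are placeholders.

        # ratios[i][j] = d(Y_ij, Y_j+) / d(Y_j+, Y_j-), precomputed (placeholder values).
        ratios  = [[0.2, 0.1, 0.3, 0.15],   # A1
                   [0.8, 0.7, 0.9, 0.60],   # A2
                   [0.5, 0.6, 0.4, 0.55],   # A3
                   [0.4, 0.3, 0.5, 0.45]]   # A4
        weights = [0.25, 0.25, 0.25, 0.25]  # criteria weights (placeholder)
        chi = 0.5

        S = [sum(w * r for w, r in zip(weights, row)) for row in ratios]  # group utility
        R = [max(w * r for w, r in zip(weights, row)) for row in ratios]  # individual regret
        S_min, S_max, R_min, R_max = min(S), max(S), min(R), max(R)
        Q = [chi * (s - S_min) / (S_max - S_min) + (1 - chi) * (r - R_min) / (R_max - R_min)
             for s, r in zip(S, R)]

        # Three ranking lists are available (only Q is shown), ordered by decreasing value.
        order = sorted(range(len(Q)), key=lambda i: Q[i], reverse=True)
        print(["A%d" % (i + 1) for i in order])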

    In this section, we implement an improved q-ROHFR-VIKOR approach to the MAGDM problem in order to identify the best location for a GDS using four criteria mentioned in the following numerical example.

    Step 1. The q-ROHFRV information provided by the three professional experts is analyzed in Tables 25.

    Step 2. The aggregated information of the team of experts, as determined by the q-ROHFRWA aggregation operator, is summarized in Table 6.

    Step 3. The q-ROHFR PIS ( \daleth^{+} ) and the q-ROHFR NIS ( \daleth^{-} ) are computed in Table 25.

    Step 4. The q-ROHFR group utility measure values S_{\imath}\; (\imath = 1, 2, 3, 4) and the regret measure R_{\imath}\; (\imath = 1, 2, 3, 4) of the alternatives under consideration are summarized in Table 26.

    Table 26.  S_{\imath}, {\text{ }}R_{\imath}, {\text{ }}Q_{\imath} for each alternative.
    Alternatives S_{\imath} R_{\imath} Q_{\imath}
    A _{1} 0.1460 0.2472 0.2137
    A _{2} 0.5324 0.4042 0.5631
    A _{3} 0.3637 0.3781 0.4319
    A _{4} 0.3426 0.2682 0.3142


    Steps 5 & 6. The alternative rankings based on the group utility measure S_{\imath} , the individual regret measure R_{\imath} , and the ranking measure Q_{\imath} are indicated in Table 27. Figure 6 depicts a graphical illustration of the rankings according to the modified VIKOR approach.

    Table 27.  Alternative rankings depending on S_{\imath}, R_{\imath}, and Q_{\imath}.
    Alternatives Ranking order of S_{\imath} Ranking order of R_{\imath} Ranking order of Q_{\imath}
    A _{1} 4 4 4
    A _{2} 1 1 1
    A _{3} 2 2 2
    A _{4} 3 3 3

    Figure 6.  The graphical depiction of the rankings according to the modified VIKOR method.

    There are numerous factors to consider when choosing a garbage disposal location, and unaided human judgment has clear limitations; it is therefore essential to develop an efficient framework for analyzing garbage disposal locations. Many researchers have addressed approaches for evaluating garbage disposal location schemes, particularly assessment approaches based on fuzzy set theory. The basic idea of existing fuzzy-theory-based evaluation techniques is to express the assessment values through the membership functions of fuzzy sets, aggregate the information using the operational rules between fuzzy numbers, and finally obtain the evaluation result. However, the expression space available for analyzing information in ordinary fuzzy sets is restricted, making it difficult to assess GDS options in complex environmental situations.

    Therefore, a novel approach based on q-ROHFRS has been proposed for evaluation to overcome these limitations. The approach presented in this study provides a broad space for analyzing information, enabling DMs to capture the features of uncertain data with strong computing capability for imprecise information. On the basis of the Einstein t-norm and t-conorm, several aggregation operators, namely the q-ROHFREWA, q-ROHFREOWA and q-ROHFREHWA operators, are described, and a comprehensive discussion of their fundamental properties is presented. These operators account for the complexity of the MADM approach based on q-ROHFR information, and the resulting evaluations are well founded.

    A case study on the assessment of GDS location is used to show the applicability and validity of the proposed technique. According to the empirical results, performing the final site selection on the basis of the ranking findings obtained with the suggested approach is appropriate and practical. The impact of parameter variations on the decision-making outcomes was assessed through a sensitivity analysis of the proposed aggregation operators, and the outcomes were found to be consistent. The study also includes a comparative analysis of the proposed models with outcomes from the existing literature, together with an evaluation using the extended q-ROHFR-TOPSIS and VIKOR methodologies, to demonstrate the viability, superiority, and reliability of the proposed techniques. The comparison indicates that the novel approach offers numerous benefits in assessing and analyzing high-value assessment information.

    Future recommendations and limitations of the study: The method is beneficial for top-level executives who make decisions involving conflicting objectives in areas such as planning and control, industrial production, financing decisions, and health-care planning. There are numerous directions for future research in this field. As a follow-up to this study, future work could investigate the use of linguistic terms within the structure explored here. Since q-ROHF rough set theory presupposes discrete values, it may not be applicable to all types of data; in particular, it may not work for continuous or mixed information. Moreover, the computational complexity of fuzzy rough set theory grows significantly with the size of the dataset, which makes the theory unsuitable for very large data sets.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors acknowledge Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R443), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. This study was supported via funding from Prince Sattam Bin Abdulaziz University project number (PSAU/2023/R/1444).

    The authors declare no competing interests.


