
In real life, with the trend of outsourcing logistics activities, choosing a third-party logistics (3PL) provider has become an inevitable choice for shippers. One of the most difficult decisions logistics consumers face is selecting the 3PL provider that best meets their needs. Decision making (DM) is important in such situations because it allows them to make reliable decisions in a short period of time, as incorrect decisions can result in huge financial losses. In this regard, this article provides a new multi-criteria group decision-making (MCGDM) method under Pythagorean fuzzy rough (PyFR) sets. A series of new PyFR Einstein weighted averaging aggregation operators and their basic properties are described in depth. To evaluate the weights of decision experts and the criteria weights, we establish a PyFR entropy measure. Further, using multiple aggregation methods based on PyFR information, a novel algorithm is offered to solve issues with ambiguous or insufficient data and obtain reliable and preferable results. First, decision experts use PyFR sets to represent their evaluation information on alternatives with respect to the criteria. Then, the proposed PyFR Einstein aggregation operators are applied to rank all alternatives and find the optimal one. Finally, to demonstrate the feasibility of the proposed PyFR decision system, a real example of choosing a 3PL provider is given.
Citation: Shougi S. Abosuliman, Abbas Qadir, Saleem Abdullah. Multi criteria group decision (MCGDM) for selecting third-party logistics provider (3PL) under Pythagorean fuzzy rough Einstein aggregators and entropy measures[J]. AIMS Mathematics, 2023, 8(8): 18040-18065. doi: 10.3934/math.2023917
The third-party logistics (3PL) market has expanded in recent years with the growth of e-commerce, becoming more important as a way of adapting to the rapidly changing global competitive environment [1]. As a result of the growing trend toward outsourcing logistics tasks, shippers now inevitably need to choose the finest 3PL supplier. A third-party logistics provider can bring significant benefits, such as lower fees and fixed logistics assets, higher order fulfillment rates, and shorter average order cycles and cash-to-cash cycles. If an appropriate 3PL provider is not chosen, serious problems, such as low-quality logistics services and contract non-fulfillment, can occur. This can damage the shipper's reputation, image and trust. Therefore, choosing a competent 3PL supplier is crucial in determining how well logistics are performed. The multi-criteria decision-making (MCDM) problem has become more crucial for choosing the best 3PL provider as a result of the increased ambiguity and uncertainty of real-world data, because traditional tools do not address these types of uncertainty and ambiguity. For this reason, researchers have developed many mathematical theories to extract the hidden facts from unclear data. One of these theories is Zadeh's fuzzy set (FS) theory, created in 1965, which assigns to each individual component of a set a membership degree α ranging from zero to one [2]. FSs are one of the most effective tools to deal with inaccurate and uncertain data in DM problems [3,4]. Later, Atanassov [5] generalized the FS to the intuitionistic fuzzy set (IFS) by adding a degree of non-membership β satisfying the condition α+β≤1. In recent decades, IFS has been studied in a broad range of important areas, involving distance and similarity measures [6,7], aggregation operators (Aops) [8,9,10], pattern recognition [11] and medical diagnosis [12]. Despite the success of IFS in a wide variety of real-life fields, when experts assign a membership of 0.7 and a non-membership of 0.5 for certain criteria, the condition of IFS does not hold, i.e., α+β>1. In order to deal with this kind of issue, Yager [13,14] filled this gap and introduced the concept of the Pythagorean fuzzy set (PyFS), a generalization of IFS restricted to (α)2+(β)2≤1, which can better deal with uncertainty and imprecision in information and data. Yager [14] also created a series of aggregation operators for aggregating PyF information. Fei and Deng [15] gave the DM process in PyFS. In a PyF context, Peng and Yang [16] constructed division and subtraction operations and examined their properties in depth. Khan et al. [17] explored Dombi-norm-based Aops in the context of PyFS and their use in DM problems. Wei and Lu [18] suggested Pythagorean fuzzy power Aops to solve DM problems. Under PyF circumstances, Ashraf et al. [19] developed a unique technique based on the sine function. Zhang [20] used the idea of PyFS and established a similarity metric for DM problems. Khaista et al. [21] created the Einstein weighted geometric operators for Pythagorean fuzzy sets (PyFEWGA) and applied them to DM issues utilizing the Einstein t-norm and t-conorm concept. Rani et al. [22] developed an advanced TOPSIS based on a Pythagorean fuzzy similarity measure and solved a practical DM application. Huang et al. [23] provided an extended version of the MULTIMOORA approach for dealing with ambiguous data in the PyF environment.
Pawlak [24] is considered the first to study the widely used rough set (RS) theory, which has made significant progress in applications and systems research in recent years. The theory of RS has been expanded in a variety of ways by several researchers. Dubois and Prade [25] were the first to use fuzzy relations instead of crisp binary relations and proposed the idea of fuzzy rough sets (FRS). In order to deal with ambiguous information in DM problems, Wang and Zhang [26] designed a DM method based on FRS, and Zhang and Zhan [27] designed a DM method based on fuzzy soft β-covering and FRSs. Mi et al. [28] discovered an uncertainty measure using partitions in FRSs. Khan et al. [29] developed the idea of the probabilistic hesitant fuzzy rough set and discussed its application in decision-making. Zhang et al. [30] summarized the structure of intuitionistic FRS, allowing decision makers to freely choose how to deal with ambiguous data. Zhou and Wu [31] defined generalized approximation operators for intuitionistic FRS. Liu et al. [32] created a decision-making technique based on preference relations under FRS. Yun and Lee [33] developed some characteristics of IFR approximation operators based on IF relations by means of topology. Zhang et al. [34] developed the concepts of soft rough IFS and IF soft rough sets based on soft approximation and fuzzy soft approximation spaces. Zhang [35] proposed the generalized IFRS based on IF covering. Zhang et al. [36] extended the notion of the generalized IF soft rough set based on IF soft relations. Later, Chinram et al. [37] proposed the EDAS method to solve MAGDM within IFRSs.
Motivation: As we know, decision-making is a crucial aspect of everyday life. High performance and high-quality results are only practicable if the research community concentrates on closing theoretical knowledge gaps and practitioners employ the newest developments in their applications to address contemporary problems. To the best of our knowledge, based on the above review, there is still no MCGDM technique under PyFRS based on Einstein averaging operators in which all weight information is unknown. The main objective of this research is to provide the groundwork for a new model that excels at communicating imprecise information and closing theoretical knowledge gaps, as represented in Figure 1.
In addition to increasing the quality of the information used to make judgements and reflecting its fuzziness, flexibility and applicability, such a model can function as a helpful tool for making trustworthy decisions under uncertainty. As a result, the current work is motivated, and we develop a new MCGDM method based on PyFR Einstein averaging aggregation operators to solve real-world DM problems.
In the literature, aggregation operators for PyFRS are developed using the algebraic t-norm and t-conorm. To extend this idea to generalized t-norms and t-conorms, we define the Einstein t-norm and t-conorm for PyFRS and construct aggregation operators based on them.
● We define some useful fundamental properties for aggregation operators using Einstein operational law and explore different types of aggregation operators.
● The entropy and distance measures are established for finding the unknown weights of the decision makers and criteria.
● To develop a DM method for accumulating uncertain information in real-world DM issues by applying suggested Einstein aggregation operators.
● Numerical case studies assessing 3PLs are presented to validate the proposed methods.
The advantages of the proposed methods over existing methods are:
● Better representation of uncertainty: The proposed methods represent memberships and non-memberships in a more flexible manner than existing methods, because the PyFRS range is greater than that of the IFRS and can be customized according to the problem domain. This flexibility enables the representation of a wide range of uncertain and vague information.
● More flexible aggregation operators: The PyFRS approach offers more flexibility in the choice of aggregation operators than the IFRS approach. The PyFRS operators, for instance, can be used to aggregate PyFR numbers, which is not possible in the IFRS approach.
● More efficient: This approach leads to more efficient algorithms and decision-making processes; it reduces the complexity of the problem and speeds up the computations. Moreover, none of the existing works can handle the kind of information supplied to a decision maker by a PyFRS, whereas the suggested method can also handle data from pre-existing methods and has the potential to significantly improve upon them.
The rest of this article is organized as follows: Section 2 presents the basic concepts related to PyFS and rough sets, which will be used later. Section 3 contains the novel notion of PyFRSs and a new score function. Section 4 contains the Einstein operational laws for PyFRSs, as well as several Einstein averaging operators of PyFRS, whose properties are discussed in detail. Section 5 describes the entropy and distance measures for PyFRSs. Section 6 provides a DM process to address uncertainty in MCGDM situations with unknown weight information. In Section 7, a numerical example based on the proposed method is presented for the selection of 3PLs, together with a comparison with other methods which shows that the algorithm is reasonable. Section 8 concludes the article.
This section introduces a few key concepts and properties that underpin the proposed work and can be easily understood by the reader. We discuss some basic notions, including the crisp relation, the rough set (RS) and the Pythagorean fuzzy relation (PyF relation).
Definition 1. [13] Let F be a non-empty set. Then a Pythagorean fuzzy set is mathematically defined as:
H={(f,αH(f),βH(f)):f∈F}. | (1) |
Such that 0≤(αH(f))2+(βH(f))2≤1, where αH(f) and βH(f) represent the membership and non-membership degrees, both belonging to [0,1].
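A minimal Python sketch contrasting the IFS and PyFS constraints, using the membership/non-membership pair (0.7, 0.5) mentioned in the introduction; it is an illustration only.

```python
def valid_ifs(alpha: float, beta: float) -> bool:
    """Intuitionistic fuzzy constraint: alpha + beta <= 1."""
    return alpha + beta <= 1.0

def valid_pyfs(alpha: float, beta: float) -> bool:
    """Pythagorean fuzzy constraint of Definition 1: alpha^2 + beta^2 <= 1."""
    return alpha ** 2 + beta ** 2 <= 1.0

# Pair from the introduction: membership 0.7, non-membership 0.5.
print(valid_ifs(0.7, 0.5))   # False: 0.7 + 0.5 = 1.2 > 1
print(valid_pyfs(0.7, 0.5))  # True:  0.49 + 0.25 = 0.74 <= 1
```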
Definition 2. [31] Let F be a non-empty set, and ℘∈(F×F) be a crisp relation then,
1) ℘ is reflexive if (f,f)∈℘, for all f∈F;
2) ℘ is symmetric if for all f,g∈F, if (f,g)∈℘, there exists (g,f)∈℘;
3) ℘ is transitive if for all f,g,h∈F, if (f,g)∈℘ and (g,h)∈℘, then (f,h)∈℘.
Definition 3. [31] Let F be a non-empty set and ℘∈(F×F) be any relation. Now, for all f∈F, a mapping ℘∗:F→P(F) is defined as
℘∗(f)={g∈F:(f,g)∈℘}, |
where ℘∗(f) is the successor neighborhood of an object f w.r.t. ℘. The pair (F,℘) is said to be an approximation space. Now, for any M⊆F, the upper and lower approximations of M w.r.t. the approximation space (F,℘) are denoted and defined as:
¯℘(M)={f∈F|℘∗(f)∩M≠∅}, |
℘_(M)={f∈F|℘∗(f)⊆M}. |
Therefore, ℘=(¯℘(M),℘_(M)) is known as a rough set and ¯℘(M),℘_(M):P(F)→P(F) are the upper and lower approximation operators.
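A minimal Python sketch of Definition 3, computing successor neighborhoods and the resulting lower and upper approximations; the universe, relation and subset M are hypothetical toy data.

```python
def successor(relation, f):
    """Successor neighborhood ℘*(f) = {g : (f, g) ∈ ℘} of Definition 3."""
    return {g for (x, g) in relation if x == f}

def rough_approximations(universe, relation, M):
    """Lower approximation {f : ℘*(f) ⊆ M} and upper approximation {f : ℘*(f) ∩ M ≠ ∅}."""
    M = set(M)
    lower = {f for f in universe if successor(relation, f) <= M}
    upper = {f for f in universe if successor(relation, f) & M}
    return lower, upper

# Hypothetical example: F = {a, b, c}, a reflexive relation, and M = {a, b}.
F = {"a", "b", "c"}
R = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("c", "b")}
print(rough_approximations(F, R, {"a", "b"}))  # ({'a', 'b'}, {'a', 'b', 'c'})
```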
Definition 4. [38] Let F≠∅ and ℘∈PyF(F×F) be a PyF relation. Then,
1) ℘ is reflexive if α℘(f,f)=1 and β℘(f,f)=0, ∀ f∈F;
2) ℘ is symmetric if, for all (f,g)∈F×F, α℘(f,g)=α℘(g,f) and β℘(f,g)=β℘(g,f);
3) ℘ is transitive if, for all (f,h)∈F×F,
α℘(f,h)≥∨g∈F[α℘(f,g)∧α℘(g,h)]andβ℘(f,h)≤∧g∈F[β℘(f,g)∨β℘(g,h)]. |
In this section, we will develop the concept of Pythagorean fuzzy rough set (PyFRS) with the degree of hesitancy. We will also develop the score function and accuracy function whose purpose is to overcome various comparability issues.
Definition 5. Let F be a non-empty universe and ℘∈PyFS(F×F) be a PyF relation on F. Then (F,℘) is a PyF approximation space, and the lower ℘_(M) and upper ¯℘(M) approximations of any M⊆PyFS(F) in terms of (F,℘) are two PyFSs mathematically defined as:
℘_(M)={(f,⟨α℘_(M)(f),β℘_(M)(f)⟩):f∈F}. | (2) |
¯℘(M)={f,⟨α¯℘(M)(f),β¯℘(M)(f)⟩:f∈F}. | (3) |
Where
α℘_(M)(f)=∧g∈F[α℘(f,g)∧αM(g)], |
β℘_(M)(f)=∨g∈F[β℘(f,g)⋁βM(g)] |
and
α¯℘(M)(f)=⋁g∈F[α℘(f,g)⋁αM(g)],β¯℘(M)(f)=∧g∈F[β℘(f,g)∧βM(g)], |
i.e.,
0≤(α℘_(M)(f))2+(β℘_(M)(f))2≤1 |
and
0≤(α¯℘(M)(f))2+(β¯℘(M)(f))2≤1. |
Hence, ℘_(M) and ¯℘(M) are PyFSs; therefore, ℘_(M),¯℘(M):PyFS(F)→PyFS(F) represent the lower and upper approximation operators, respectively. Hence, the pair ℘(M)=(℘_(M),¯℘(M)) is known as a PyFRS, defined as:
℘(M)={f,⟨(α℘_(M)(f),β℘_(M)(f)),(α¯℘(M)(f),β¯℘(M)(f))⟩:f∈F}. | (4) |
Simply, it is denoted by ℘(M)=((α_,β_),(¯α,¯β)). The degree of hesitancy for a PyFRS is determined as:
℘ℏ(M)=(℘_ℏ(M),¯℘ℏ(M)). |
Where ℘_ℏ(M)=√1−α_2−β_2 and ¯℘ℏ(M)=√1−¯α2−¯β2.
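A minimal Python sketch of the lower and upper approximations of Definition 5, reading the min/max formulas of Eqs (2) and (3) directly; the relation and the set M in the usage example are hypothetical.

```python
def pyf_approximations(universe, rel, M):
    """
    Lower and upper PyF approximations of Definition 5.
    rel[(f, g)] = (alpha, beta) is the PyF relation; M[g] = (alpha, beta) describes M.
    """
    lower, upper = {}, {}
    for f in universe:
        lower[f] = (
            min(min(rel[(f, g)][0], M[g][0]) for g in universe),  # α_lower: ∧(α℘ ∧ αM)
            max(max(rel[(f, g)][1], M[g][1]) for g in universe),  # β_lower: ∨(β℘ ∨ βM)
        )
        upper[f] = (
            max(max(rel[(f, g)][0], M[g][0]) for g in universe),  # α_upper: ∨(α℘ ∨ αM)
            min(min(rel[(f, g)][1], M[g][1]) for g in universe),  # β_upper: ∧(β℘ ∧ βM)
        )
    return lower, upper

# Hypothetical data on a two-element universe.
U = ["f1", "f2"]
rel = {("f1", "f1"): (0.9, 0.2), ("f1", "f2"): (0.4, 0.6),
       ("f2", "f1"): (0.4, 0.6), ("f2", "f2"): (0.8, 0.3)}
M = {"f1": (0.6, 0.5), "f2": (0.7, 0.4)}
print(pyf_approximations(U, rel, M))
```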
Definition 6. The score and accuracy functions for a collection of PyFRSs ℘(Mi)=((α_i,β_i),(¯αi,¯βi)) are mathematically given by
Sc(℘(Mi))=(2+α_i2+¯αi2−β_i2−¯βi2)/4, | (5) |
Ac(℘(Mi))=(2+α_i2+¯αi2+β_i2+¯βi2)/4. | (6) |
where Sc(℘(Mi)) and Ac(℘(Mi)) belong to [0,1].
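A minimal Python sketch of the score and accuracy functions, implemented exactly as written in Eqs (5) and (6); the PyFR number in the usage example is hypothetical.

```python
def score(pyfr):
    """Score function, Eq (5): Sc = (2 + α_^2 + ᾱ^2 - β_^2 - β̄^2) / 4."""
    (a_lo, b_lo), (a_up, b_up) = pyfr
    return (2 + a_lo**2 + a_up**2 - b_lo**2 - b_up**2) / 4

def accuracy(pyfr):
    """Accuracy function as written in Eq (6): Ac = (2 + α_^2 + ᾱ^2 + β_^2 + β̄^2) / 4."""
    (a_lo, b_lo), (a_up, b_up) = pyfr
    return (2 + a_lo**2 + a_up**2 + b_lo**2 + b_up**2) / 4

# Hypothetical PyFR number ((α_, β_), (ᾱ, β̄)) = ((0.6, 0.5), (0.7, 0.4)).
p = ((0.6, 0.5), (0.7, 0.4))
print(round(score(p), 4), round(accuracy(p), 4))  # 0.61 0.815
```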
This section provides the union, the intersection and a detailed description of the Einstein operational laws for PyFR numbers and their generators. These operational laws will be used in the development of aggregation operators (AOs) for PyFRSs, and we also discuss their fundamental properties, such as idempotency, boundedness and monotonicity.
Definition 7. [39] Let ℘(M1) and ℘(M2) be two PyFR numbers. Then the generalized intersection and union of the two PyFR numbers are given by
℘(M1)⋀℘(M2)={f,(Sℇ(α_1(f),α_2(f)),Tℇ(β_1(f),β_2(f))),(Sℇ(¯α1(f),¯α2(f)),Tℇ(¯β1(f),¯β2(f)))|f∈F}, | (7) |
℘(M1)⋁℘(M2)={f,(Tℇ(α_1(f),α_2(f)),Sℇ(β_1(f),β_2(f))),(Tℇ(¯α1(f),¯α2(f)),Sℇ(¯β1(f),¯β2(f)))|f∈F}. | (8) |
If T and S denote a t-norm and t-conorm, respectively, then Tℇ and Sℇ represent the Einstein product and Einstein sum, which yield the generalized union and intersection of two PyFRSs as described below.
Sℇ(℘(M1),℘(M2))=√((℘2(M1)+℘2(M2))/(1+℘2(M1)⋅℘2(M2))). | (9) |
Tℇ(℘(M1),℘(M2))=(℘(M1)⋅℘(M2))/√(1+(1−℘2(M1))⋅(1−℘2(M2))). | (10) |
Definition 8. Let ℘(Mi)=(℘_(Mi),¯℘(Mi))∈PyFS(F)(i∈N). Then the Einstein operational laws for PyFRNs are defined as follows.
℘(M1)♁℘(M2)={(√α_12+α_22√1+α_12.α_22,β_1.β_2√1+(1−β_12).(1−β_22)),(√¯α12+¯α22√1+¯α12.¯α22,¯β1.¯β2√1+(1−¯β12).(1−¯β22))}. | (11) |
℘(M1)⊗℘(M2)={(α_1.α_2√1+(1−α_12).(1−α_22),√β_12+β_22√1+β_12.β_22),(¯α1.¯α2√1+(1−¯α12).(1−¯α22),√¯β21+¯β22√1+¯β21.¯β22)}. | (12) |
k⋅℘(M1)={(√(1+α_12)k−(1−α_12)k√(1+α_12)k+(1−α_12)k,√2(β_1)k√(2−β_12)k+(β_12)k),(√(1+¯α12)k−(1−¯α12)k√(1+¯α12)k+(1−¯α12)k,√2(¯β1)k√(2−¯β12)k+(¯β12)k)}. | (13) |
(℘(M1))k={(√2(α_1)k√(2−α_12)k+(α_12)k,√(1+β_12)k−(1−β_12)k√(1+β_12)k+(1−β_12)k),(√2(¯α1)k√(2−¯α12)k+(¯α12)k,√(1+¯β12)k−(1−¯β12)k√(1+¯β122)k+(1−¯β12)k)}. | (14) |
(℘(M1))c=((β_1,α_1),(¯β1,¯α1)). | (15) |
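A minimal Python sketch of the Einstein sum of Eq (11) and the Einstein scalar multiplication of Eq (13), applied componentwise to the lower and upper pairs of a PyFR number; the numbers in the usage example are hypothetical.

```python
import math

def einstein_sum(p1, p2):
    """Einstein sum ℘(M1) ⊕ ℘(M2) of Eq (11), applied to both the lower and upper pairs."""
    def add(x, y):
        (a1, b1), (a2, b2) = x, y
        alpha = math.sqrt((a1**2 + a2**2) / (1 + a1**2 * a2**2))
        beta = (b1 * b2) / math.sqrt(1 + (1 - b1**2) * (1 - b2**2))
        return alpha, beta
    return add(p1[0], p2[0]), add(p1[1], p2[1])

def einstein_scalar(k, p):
    """Einstein scalar multiplication k·℘(M1) of Eq (13)."""
    def scale(pair):
        a, b = pair
        alpha = math.sqrt(((1 + a**2)**k - (1 - a**2)**k) /
                          ((1 + a**2)**k + (1 - a**2)**k))
        beta = math.sqrt(2) * b**k / math.sqrt((2 - b**2)**k + (b**2)**k)
        return alpha, beta
    return scale(p[0]), scale(p[1])

# Hypothetical PyFR numbers.
p1 = ((0.4, 0.8), (0.7, 0.4))
p2 = ((0.5, 0.7), (0.6, 0.5))
print(einstein_sum(p1, p2))
print(einstein_scalar(0.5, p1))  # with k = 0.5
```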
Here is a detailed analysis of the PyFR Einstein aggregation operators, namely the Pythagorean fuzzy rough Einstein weighted averaging (PyFREWA) operator, the Pythagorean fuzzy rough Einstein ordered weighted averaging (PyFREOWA) operator and the Pythagorean fuzzy rough Einstein hybrid averaging (PyFREHA) operator, together with their required properties, for instance idempotency, boundedness and monotonicity.
Definition 9. Let a collection
℘(Mi)=(℘_(Mi),¯℘(Mi))∈PyFS(F)(i∈N) |
and (mg1,mg2,...mgn)T represent the weight vectors of the given collection such that ∑ni=1mgi=1. Then from operational laws (11) and (13) of Definition 8, the Einstein weighted averaging operators for PyFRSs are defined as
PyFREWA(℘(M1),℘(M2),…,℘(Mn))=(⨁ni=1mgi(℘_(Mi)),⨁ni=1mgi(¯℘(Mi)))=((√∏ni=1(1+α_i2)mgi−∏ni=1(1−α_i2)mgi√∏ni=1(1+α_i2)mgi+∏ni=1(1−α_i2)mgi,√2∏ni=1(β_i)mgi√∏ni=1(2−β_i2)mgi+∏ni=1(β_i2)mgi),(√∏ni=1(1+¯αi2)mgi−∏ni=1(1−¯αi2)mgi√∏ni=1(1+¯αi2)mgi+∏ni=1(1−¯αi2)mgi,√2∏ni=1(¯βi)mgi√∏ni=1(2−¯βi2)mgi+∏ni=1(¯βi2)mgi)). | (16) |
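A minimal Python sketch of the PyFREWA operator of Eq (16); the input values and weights in the usage example are hypothetical and are not taken from the paper's tables.

```python
import math

def pyfrewa(values, weights):
    """
    PyFREWA operator, Eq (16).
    values  : list of PyFR numbers ((α_, β_), (ᾱ, β̄))
    weights : list of weights mg_i with sum 1
    """
    def aggregate(pairs):
        prod_plus  = math.prod((1 + a**2) ** w for (a, _), w in zip(pairs, weights))
        prod_minus = math.prod((1 - a**2) ** w for (a, _), w in zip(pairs, weights))
        prod_beta  = math.prod(b ** w for (_, b), w in zip(pairs, weights))
        prod_two   = math.prod((2 - b**2) ** w for (_, b), w in zip(pairs, weights))
        prod_bsq   = math.prod((b**2) ** w for (_, b), w in zip(pairs, weights))
        alpha = math.sqrt((prod_plus - prod_minus) / (prod_plus + prod_minus))
        beta  = math.sqrt(2) * prod_beta / math.sqrt(prod_two + prod_bsq)
        return alpha, beta

    lower = aggregate([v[0] for v in values])
    upper = aggregate([v[1] for v in values])
    return lower, upper

# Hypothetical input: three PyFR evaluations and weights (0.5, 0.3, 0.2).
vals = [((0.4, 0.8), (0.7, 0.4)), ((0.8, 0.5), (0.2, 0.8)), ((0.5, 0.5), (0.9, 0.1))]
print(pyfrewa(vals, [0.5, 0.3, 0.2]))
```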
Theorem 1. Let a collection
℘(Mi)=(℘_(Mi),¯℘(Mi))∈PyFS(F)(i∈N) |
and (mg1,mg2,...mgn)T represent the weight vectors of the given collection such that ∑ni=1mgi=1. Then, the fundamental properties of Einstein weighted averaging aggregation operators for PyFRSs are described as:
1) Idempotency. If
℘(Mi)=℘(M)=(℘_(M),¯℘(M))∀i∈N. |
Then
PyFREWA(℘(M1),℘(M2),…,℘(Mn))=℘(M). |
2) Boundedness. Let
(℘(M))−=(mini℘_(Mi),maxi¯℘(Mi)) |
and
(℘(M))+=(maxi℘_(Mi),mini¯℘(Mi)). |
Then,
(℘(M))−≤PyFREWA(℘(M1),℘(M2),…,℘(Mn))≤(℘(M))+. |
3) Monotonicity. Let
L(Mi)=(L_(Mi),¯L(Mi))∈PyFS(F)(i∈N), |
such that L_(Mi)≤℘_(Mi) and ¯L(Mi)≤¯℘(Mi). Then
PyFREWA(℘(M1),℘(M2),…,℘(Mn))≥PyFREWA(L(M1),L(M2),…,L(Mn)). |
Definition 10. Let a collection
℘(Mi)=(℘_(Mi),¯℘(Mi))∈PyFS(F)(i∈N) |
and (mg1,mg2,...mgn)T represent the weight vectors of the given collection such that ∑ni=1mgi=1. Then from operational laws (11) and (13) of Definition 8, the Einstein ordered weighted averaging operators for PyFRSs are defined as:
PyFREOWA(℘(M1),℘(M2),…,℘(Mn))=(⨁ni=1mgi(℘_q(Mi)),⨁ni=1mgi(¯℘q(Mi)))=((√∏ni=1(1+α_qi2)mgi−∏ni=1(1−α_qi2)mgi√∏ni=1(1+α_qi2)mgi+∏ni=1(1−α_qi2)mgi,√2∏ni=1(β_qi)mgi√∏ni=1(2−β_qi2)mgi+∏ni=1(β_qi2)mgi),(√∏ni=1(1+¯αqi2)mgi−∏ni=1(1−¯αqi2)mgi√∏ni=1(1+¯αqi2)mgi+∏ni=1(1−¯αqi2)mgi,√2∏ni=1(¯βqi)mgi√∏ni=1(2−¯βqi2)mgi+∏ni=1(¯βqi2)mgi)). | (17) |
where (℘_q(Mi),¯℘q(Mi)) denotes the i-th largest value of the permuted collection of PyFR numbers.
Theorem 2. Let a collection
℘(Mi)=(℘_(Mi),¯℘(Mi))∈PyFS(F)(i∈N) |
and (mg1,mg2,...mgn)T represent the weight vectors of the given collection such that ∑ni=1mgi=1. Then, the basic properties of Einstein weighted ordered averaging aggregation operators for PyFRSs are described as:
1) Idempotency. If
℘(Mi)=℘(M)=(℘_(M),¯℘(M))∀i∈N. |
Then
PyFREOWA(℘(M1),℘(M2),…,℘(Mn))=℘(M). |
2) Boundedness. Let
(℘(M))−=(mini℘_(Mi),maxi¯℘(Mi)) |
and
(℘(M))+=(maxi℘_(Mi),mini¯℘(Mi)). |
Then,
(℘(M))−≤PyFREOWA(℘(M1),℘(M2),…,℘(Mn))≤(℘(M))+. |
3) Monotonicity. Let
L(Mi)=(L_(Mi),¯L(Mi))∈PyFS(F)(i∈N), |
such that L_(Mi)≤℘_(Mi) and ¯L(Mi)≤¯℘(Mi). Then
PyFREOWA(℘(M1),℘(M2),…,℘(Mn))≥PyFREOWA(L(M1),L(M2),…,L(Mn)). |
Definition 11. Let a collection
℘(Mi)=(℘_(Mi),¯℘(Mi))∈PyFS(F)(i∈N) |
and (τ1,τ2,...τn)T represent the weight vectors of the given collection such that ∑ni=1τi=1. Suppose (mg1,mg2,...mgn)T be the associated weight information of the given family such that ∑ni=1mgi=1. Then from operational laws (11) and (13) of Definition 8, the Einstein hybrid averaging operators for PyFRSs are defined as:
PyFREHA(℘(M1),℘(M2),…,℘(Mn))=(⨁ni=1mgi(¨℘_q(Mi)),⨁ni=1mgi(¨¯℘q(Mi)))=((√∏ni=1(1+¨α_qi2)mgi−∏ni=1(1−¨α_qi2)mgi√∏ni=1(1+¨α_qi2)mgi+∏ni=1(1−¨α_qi2)mgi,√2∏ni=1(¨β_qi)mgi√∏ni=1(2−¨β_qi2)mgi+∏ni=1(¨β_qi2)mgi),(√∏ni=1(1+¨¯αqi2)mgi−∏ni=1(1−¨¯αqi2)mgi√∏ni=1(1+¨¯αqi2)mgi+∏ni=1(1−¨¯αqi2)mgi,√2∏ni=1(¨¯βqi)mgi√∏ni=1(2−¨¯βqi2)mgi+∏ni=1(¨¯βqi2)mgi)). | (18) |
where n is the balancing coefficient and ¨℘q(Mi)=nτi℘(Mi)=(nτi℘_(Mi),nτi¯℘(Mi)) denotes the i-th largest of the weighted values under the permutation.
Theorem 3. Let a collection
℘(Mi)=(℘_(Mi),¯℘(Mi))∈PyFS(F)(i∈N) |
and (τ1,τ2,...τn)T represent the weight vectors of the given collection such that ∑ni=1τi=1. Suppose (mg1,mg2,...mgn)T be the associated weight information of the given family such that ∑ni=1mgi=1. Then, the main properties of Einstein hybrid averaging aggregation operators for PyFRSs are described as:
1) Idempotency. If
℘(Mi)=℘(M)=(℘_(M),¯℘(M))∀i∈N. |
Then
PyFREHA(℘(M1),℘(M2),…,℘(Mn))=℘(M). |
2) Boundedness. Let
(℘(M))−=(mini℘_(Mi),maxi¯℘(Mi)) |
and
(℘(M))+=(maxi℘_(Mi),mini¯℘(Mi)). |
Then,
(℘(M))−≤PyFREHA(℘(M1),℘(M2),…,℘(Mn))≤(℘(M))+. |
3) Monotonicity. Let
L(Mi)=(L_(Mi),¯L(Mi))∈PyFS(F)(i∈N), |
such that L_(Mi)≤℘_(Mi) and ¯L(Mi)≤¯℘(Mi). Then
PyFREHA(℘(M1),℘(M2),…,℘(Mn))≥PyFREHA(L(M1),L(M2),…,L(Mn)). |
Proof. Same as Theorem 1.
This section provides a generalized distance measure and a weighted generalized distance measure, based on the distance models of [40,41], to determine the difference between two PyFRSs. We also suggest a novel entropy measure, obtained by applying the generalized distance, to quantify the fuzziness of PyFRSs.
Definition 12. Let
℘(Mi)=(℘(M1),℘(M2),…,℘(Mn))and℘∗(Mi)=(℘∗(M1),℘∗(M2),…,℘∗(Mn)) |
be two collections of PyFRSs; then the generalized distance measure between the two collections for any Δ>0(∈R) is defined as follows:
dG(℘(Mi),℘∗(Mi))=(14∑ni=1(|α_i2−α_i∗2|Δ+|β_i2−β_i∗2|Δ+|¯αi2−¯αi∗2|Δ+|¯βi2−¯βi∗2|Δ))1Δ | (19) |
where
℘(Mi)=((α_i,β_i),(¯αi,¯βi))and℘∗(Mi)=((α_i∗,β_i∗),(¯αi∗,¯βi∗))(i∈N), |
and dG(℘(Mi),℘∗(Mi))∈[0,1].
Definition 13. Let
℘(Mi)=(℘(M1),℘(M2),…,℘(Mn))and℘∗(Mi)=(℘∗(M1),℘∗(M2),…,℘∗(Mn)) |
be two collections of PyFRSs and (mg1,mg2,...mgn)T be the weight information; then the weighted generalized distance measure is defined as follows:
dWG(℘(Mi),℘∗(Mi))=(14∑ni=1mgi(|α_i2−α_i∗2|Δ+|β_i2−β_i∗2|Δ+|¯αi2−¯αi∗2|Δ+|¯βi2−¯βi∗2|Δ))1Δ. | (20) |
Definition 14. [40] Let ℘(M)=(℘(M1),℘(M2),…,℘(Mn)) be a collection of PyFRSs; the PyFR entropy measure is defined as:
E(℘(Mi))=1n∑ni=1[{1−d(℘(Mi),℘(Mi)c)}(1+℘_ℏ(Mi)+¯℘ℏ(Mi)3)]. | (21) |
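A minimal Python sketch of the generalized distance of Eq (19) and the entropy of Eq (21), with the complement taken per Eq (15) and the hesitancy per Definition 5; the collection in the usage line is hypothetical.

```python
import math

def complement(p):
    """Complement of a PyFR number, Eq (15): swap α and β in both pairs."""
    (a_lo, b_lo), (a_up, b_up) = p
    return (b_lo, a_lo), (b_up, a_up)

def hesitancy(p):
    """Lower and upper hesitancy degrees of a PyFR number (Definition 5)."""
    (a_lo, b_lo), (a_up, b_up) = p
    return math.sqrt(1 - a_lo**2 - b_lo**2), math.sqrt(1 - a_up**2 - b_up**2)

def distance(ps, qs, delta=1.0):
    """Generalized distance of Eq (19) between two equally long collections of PyFR numbers."""
    total = 0.0
    for ((a1, b1), (c1, d1)), ((a2, b2), (c2, d2)) in zip(ps, qs):
        total += (abs(a1**2 - a2**2) ** delta + abs(b1**2 - b2**2) ** delta +
                  abs(c1**2 - c2**2) ** delta + abs(d1**2 - d2**2) ** delta)
    return (total / 4) ** (1 / delta)

def entropy(ps, delta=1.0):
    """PyFR entropy of Eq (21), averaging over the collection."""
    acc = 0.0
    for p in ps:
        d = distance([p], [complement(p)], delta)  # d(℘(Mi), ℘(Mi)^c)
        h_lo, h_up = hesitancy(p)
        acc += (1 - d) * (1 + h_lo + h_up) / 3
    return acc / len(ps)

# Hypothetical collection of two PyFR numbers.
col = [((0.6, 0.5), (0.7, 0.4)), ((0.3, 0.8), (0.5, 0.6))]
print(round(entropy(col), 4))
```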
In this section, we develop an algorithm to solve the MCGDM problem utilizing the suggested PyFR Einstein aggregation operators. We also provide a graphical representation of the method, which communicates the procedure in a clear and concise manner.
Algorithm: Here we propose a technique for dealing with uncertainty in DM under PyFR information. Let there be m alternatives {A1,A2,…,Am}, represented by rows, and n attributes (criteria) denoted by {Ψ1,Ψ2,…,Ψn}, with unknown weight information. Let h experts, denoted by the set {P1,P2,…,Ph} and having an unknown weight vector, be gathered to provide evaluation reports for each alternative with respect to the attributes (criteria). Assume the PyFR decision matrix is symbolically denoted by Il=[℘(Mli)]m×n (l = 1, 2, …, h) and described as:
Il=[℘(Ml11) ℘(Ml12) ⋯ ℘(Ml1n)
℘(Ml21) ℘(Ml22) ⋯ ℘(Ml2n)
⋮ ⋮ ⋱ ⋮
℘(Mlm1) ℘(Mlm2) ⋯ ℘(Mlmn)] |
where ℘(Mlij)=((α_lij,βlij),(¯αijl,¯βijl)). The stepwise details are as follows:
Note: The weight information of the experts and the criteria is unknown in both cases. It is therefore necessary first to find the weights of the experts and criteria, which are calculated in the following steps:
Step 1: Construct the normalized decision matrix, denoted by ℵlij, as follows:
ℵlij = ((α_lij,β_lij),(¯αlij,¯βlij)) for benefit criteria, and ℵlij = ((β_lij,α_lij),(¯βlij,¯αlij)) for cost criteria. |
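A minimal sketch of the Step 1 normalization, which keeps benefit criteria unchanged and swaps the membership and non-membership components of cost criteria; the evaluation in the usage line is hypothetical.

```python
def normalize(value, is_benefit):
    """Step 1 normalization: keep benefit criteria, swap (α, β) in both pairs for cost criteria."""
    if is_benefit:
        return value
    (a_lo, b_lo), (a_up, b_up) = value
    return (b_lo, a_lo), (b_up, a_up)

# Hypothetical cost-criterion evaluation: components are swapped.
print(normalize(((0.4, 0.8), (0.7, 0.4)), is_benefit=False))  # ((0.8, 0.4), (0.4, 0.7))
```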
Step 2: Find the weights of the experts. It is very difficult for decision makers to obtain an accurate result when the weights of the experts are hidden, so to calculate the expert weights it is first necessary to identify the ideal opinion OIM by assuming equal decision expert weights and applying the proposed PyFREWA operator. Next, calculate the right ideal opinion ORM and the left ideal opinion OLM, and determine the distance from each decision expert to OIM, ORM and OLM, denoted by OdGIM, OdGRM and OdGLM, respectively. Finally, determine the closeness indices and calculate the weight of each decision expert as follows.
(a) Construct the ideal opinion matrix OIM of the normalized decision matrices ℵlij, which is closest to the opinions of all decision experts, by applying the PyFREWA operator:
OIM=[IM11 IM12 ⋯ IM1n
IM21 IM22 ⋯ IM2n
⋮ ⋮ ⋱ ⋮
IMm1 IMm2 ⋯ IMmn] |
where
IMij=⨁hl=11hℵlij=((√∏hl=1(1+(α_lij)2)1h−∏hl=1(1−(α_lij)2)1h√∏hl=1(1+(α_lij)2)1h+∏hl=1(1−(α_lij)2)1h,√2∏hl=1(β_lij)1h√∏hl=1(2−(β_lij)2)1h+∏hl=1((β_lij)2)1h),(√∏hl=1(1+(¯αlij)2)1h−∏hl=1(1−(¯αlij)2)1h√∏hl=1(1+(¯αlij)2)1h+∏hl=1(1−(¯αlij)2)1h,√2∏hl=1(¯βlij)1h√∏hl=1(2−(¯βlij)2)1h+∏hl=1((¯βlij)2)1h)). |
b) Construct the right and left ideal opinion matrices, denoted by ORM and OLM, defined as:
ORM=[RM11 RM12 ⋯ RM1n
RM21 RM22 ⋯ RM2n
⋮ ⋮ ⋱ ⋮
RMm1 RMm2 ⋯ RMmn] |
and
OLM=[LM11 LM12 ⋯ LM1n
LM21 LM22 ⋯ LM2n
⋮ ⋮ ⋱ ⋮
LMm1 LMm2 ⋯ LMmn] |
where
RMij={(ℵlij):maxl[Sc(ℵlij)]} and LMij={(ℵlij):minl[Sc(ℵlij)]}. | (22) |
c) By using Eqs (23)–(25), calculate the distance measures, denoted by OdGIMl, OdGRMl and OdGLMl, of each normalized matrix ℵl to OIM, ORM and OLM:
OdGIMli=(14∑ni=1(|(α_ℵlij)2−(α_IMij)2|Δ+|(β_ℵlij)2−(β_IMij)2|Δ+|(¯αℵlij)2−(¯αIMij)2|Δ+|(¯βℵlij)2−(¯βIMij)2|Δ))1Δ. | (23) |
{{{\mathfrak{O}}_{{\mathrm{d}}_{\mathrm{G}}\mathrm{R}\mathrm{M}}}^{\mathrm{l}}}_{\mathbb{i}} = {\left(\frac{1}{4}{\sum }_{\mathbb{i} = 1}^{\mathrm{n}}\text{}\left({\left|{\left({\underline{\mathrm{\alpha }}}_{{\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}}}\right)}^{2}-{\left({\underline{\mathrm{\alpha }}}_{{\mathrm{R}\mathrm{M}}_{\mathrm{i}\mathrm{j}}}\right)}^{2}\right|}^{\Delta }+{\left|{\left({\underline{\mathrm{\beta }}}_{{\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}}}\right)}^{2}-{\left({\underline{\mathrm{\beta }}}_{{\mathrm{R}\mathrm{M}}_{\mathrm{i}\mathrm{j}}}\right)}^{2}\right|}^{\Delta }+{\left|{\left({\overline{\mathrm{\alpha }}}_{{\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}}}\right)}^{2}-{\left({\overline{\mathrm{\alpha }}}_{{\mathrm{R}\mathrm{M}}_{\mathrm{i}\mathrm{j}}}\right)}^{2}\right|}^{\Delta }+{\left|{\left({\overline{\mathrm{\beta }}}_{{\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}}}\right)}^{2}-{\left({\overline{\mathrm{\beta }}}_{{\mathrm{R}\mathrm{M}}_{\mathrm{i}\mathrm{j}}}\right)}^{2}\right|}^{\Delta }\right)\right)}^{\frac{1}{\Delta }} . | (24) |
{{{\mathfrak{O}}_{{\mathrm{d}}_{\mathrm{G}}\mathrm{L}\mathrm{M}}}^{\mathrm{l}}}_{\mathbb{i}} = {\left(\left(\frac{1}{4}{\sum }_{\mathbb{i} = 1}^{\mathrm{n}}\text{}\left({\left|{\left({\underline{\mathrm{\alpha }}}_{{\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}}}\right)}^{2}-{\left({\underline{\mathrm{\alpha }}}_{{\mathrm{L}\mathrm{M}}_{\mathrm{i}\mathrm{j}}}\right)}^{2}\right|}^{\Delta }+{\left|{\left({\underline{\mathrm{\beta }}}_{{\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}}}\right)}^{2}-{\left({\underline{\mathrm{\beta }}}_{{\mathrm{L}\mathrm{M}}_{\mathrm{i}\mathrm{j}}}\right)}^{2}\right|}^{\Delta }\right)+{\left|{\left({\overline{\mathrm{\alpha }}}_{{\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}}}\right)}^{2}-{\left({\overline{\mathrm{\alpha }}}_{{\mathrm{L}\mathrm{M}}_{\mathrm{i}\mathrm{j}}}\right)}^{2}\right|}^{\Delta }+{\left|{\left({\overline{\mathrm{\beta }}}_{{\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}}}\right)}^{2}-{\left({\overline{\mathrm{\beta }}}_{{\mathrm{L}\mathrm{M}}_{\mathrm{i}\mathrm{j}}}\right)}^{2}\right|}^{\Delta }\right)\right)}^{\frac{1}{\Delta }} . | (25) |
For i = 1, 2, …, m and l = 1, 2, …, h.
d) Calculate the closeness index {{\mathfrak{C}}_{\mathrm{I}}}^{\mathrm{l}} with the help of the model proposed in [42], as follows:
{{\mathfrak{C}}_{\mathrm{I}}}^{\mathrm{l}} = \frac{{\sum }_{\mathrm{I} = 1}^{\mathrm{m}}\text{}{{{\mathfrak{O}}_{{\mathrm{d}}_{\mathrm{G}}\mathrm{R}\mathrm{M}}}^{\mathrm{l}}}_{\mathbb{i}}+{{{{\sum }_{\mathrm{I} = 1}^{\mathrm{m}}\text{}\mathfrak{O}}_{{\mathrm{d}}_{\mathrm{G}}\mathrm{L}\mathrm{M}}}^{\mathrm{l}}}_{\mathbb{i}}}{{\sum }_{\mathrm{I} = 1}^{\mathrm{m}}\text{}{{{\mathfrak{O}}_{{\mathrm{d}}_{\mathrm{G}}\mathrm{I}\mathrm{M}}}^{\mathrm{l}}}_{\mathbb{i}}+{\sum }_{\mathrm{I} = 1}^{\mathrm{m}}\text{}{{{\mathfrak{O}}_{{\mathrm{d}}_{\mathrm{G}}\mathrm{R}\mathrm{M}}}^{\mathrm{l}}}_{\mathbb{i}}+{{{{\sum }_{\mathrm{I} = 1}^{\mathrm{m}}\text{}\mathfrak{O}}_{{\mathrm{d}}_{\mathrm{G}}\mathrm{L}\mathrm{M}}}^{\mathrm{l}}}_{\mathbb{i}}} . | (26) |
For l = 1, 2, …, h.
e) Calculate the decision expert weights by the following formula:
{\mathrm{\gamma }}^{\mathrm{l}} = \frac{{{\mathfrak{C}}_{\mathrm{I}}}^{\mathrm{l}}}{\sum _{\mathrm{l} = 1}^{\mathrm{h}}{{\mathfrak{C}}_{\mathrm{I}}}^{\mathrm{l}}} . | (27) |
Step 3: Construct the revised ideal matrix {\mathcal{R}}_{\mathrm{I}\mathrm{M}} by Eq (28), using the weights of the decision experts.
{\mathcal{R}}_{{\mathrm{I}\mathrm{M}}_{\mathrm{i}\mathrm{j}}} = {⨁}_{\mathrm{l} = 1}^{\mathrm{h}}\left({\mathrm{\gamma }}^{\mathrm{l}}.{\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}}\right) = \left(\left(\genfrac{}{}{0pt}{}{\frac{\sqrt{\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(1+{{\underline{\mathrm{\alpha }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}-\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(1-{{\underline{\mathrm{\alpha }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}}}{\sqrt{\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(1+{{\underline{\mathrm{\alpha }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}-\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(1-{{\underline{\mathrm{\alpha }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}}}, }{\frac{\sqrt{2}\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left({\underline{\mathrm{\beta }}}_{\mathbb{i}}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}}{\sqrt{\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(2-{{\underline{\mathrm{\beta }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}+\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left({{\underline{\mathrm{\beta }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}}}}\right), \left(\genfrac{}{}{0pt}{}{\frac{\sqrt{\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(1+{{\overline{\mathrm{\alpha }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}-\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(1-{{\overline{\mathrm{\alpha }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}}}{\sqrt{\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(1+{{\overline{\mathrm{\alpha }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}-\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(1-{{\overline{\mathrm{\alpha }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}}}, }{\frac{\sqrt{2}\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left({\overline{\mathrm{\beta }}}_{\mathbb{i}}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}}{\sqrt{\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left(2-{{\overline{\mathrm{\beta }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}+\prod _{\mathbb{i} = 1}^{\mathrm{n}}{\left({{\overline{\mathrm{\beta }}}_{\mathbb{i}}}^{2}\right)}^{{\mathrm{\gamma }}^{\mathrm{l}}}}}}\right)\right) . | (28) |
Step 4: Calculate the entropy measure (\mathbb{E}\mathbb{A}) associated with each attribute by using Eq (21) as follows:
{\mathbb{E}\mathbb{A}}_{\mathrm{j}} = \left({\mathcal{R}}_{{\mathrm{I}\mathrm{M}}_{1\mathrm{j}}}, {\mathcal{R}}_{{\mathrm{I}\mathrm{M}}_{2\mathrm{j}}}, \dots , {\mathcal{R}}_{{\mathrm{I}\mathrm{M}}_{\mathrm{m}\mathrm{j}}}\right), \mathrm{j} = \mathrm{1, 2}, \dots , \mathrm{n}. | (29) |
Step 5: Evaluate the criteria weights as follows:
{{\mathcal{K}}_{\mathrm{g}}}_{\mathrm{j}} = \frac{1-{\mathbb{E}\mathbb{A}}_{\mathrm{j}}}{\mathrm{n}-\sum _{\mathrm{j} = 1}^{\mathrm{n}}{\mathbb{E}\mathbb{A}}_{\mathrm{j}}}, \mathrm{j} = \mathrm{1, 2}, \dots , \mathrm{n}. | (30) |
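A minimal sketch of Eq (30), which turns the entropy values of the criteria into normalized weights; the entropy values in the usage line are hypothetical.

```python
def criteria_weights(entropies):
    """Criteria weights of Eq (30): Kg_j = (1 - EA_j) / (n - sum of EA_j)."""
    n = len(entropies)
    denom = n - sum(entropies)
    return [(1 - e) / denom for e in entropies]

# Hypothetical entropy values for three criteria; the weights sum to 1.
print([round(w, 4) for w in criteria_weights([0.45, 0.55, 0.60])])
```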
Step 6: Calculate the collective preference value of each alternative in the revised ideal matrix based on the developed aggregation operator.
Step 7: Evaluate the score of the alternatives by using Definition 6 and arrange all the alternatives in decreasing order.
Step 8: Rank the alternatives. The alternative with the highest score value is the best choice.
The graphical presentation of the proposed algorithm is shown in Figure 2, which communicates the procedure in a clear and concise manner.
Based on the results mentioned above, we propose a novel MCGDM method in the environment of PyFRSs. As shown in Figure 2, the description of the proposed method is given in the following short steps:
Step 1: On the basis of the practical context, we determine the elements of the PyFRS information system including alternatives and criteria.
Step 2: According to the distance measures to the ideal opinion, left ideal opinion and right ideal opinion, and the resulting closeness indices, the decision experts' weights are obtained.
Step 3: Determine the revised ideal opinion by using the weights of the decision experts and the proposed PyFREWA operator.
Step (4-5): Based on Eq (21), the entropy measure of each criterion of the revised ideal opinion matrix can be determined. Thus, the weights of the criteria can be evaluated by Eq (30).
Step (6-7): By utilizing the weight of criteria, and the developed Einstein aggregation operator, the collective preference value of each alternative in the revised ideal matrix can be calculated.
Step 8: By using the score function determine the score of the collective preference value of each alternative in the revised ideal matrix. Arrange the score value in descending order and rank the alternatives.
This section takes a practical DM problem involving the selection of 3PLs as an example to demonstrate the applicability and feasibility of the proposed method. We also provide comparative and sensitivity analyses against existing studies to show the efficacy of this work.
The e-commerce phenomenon is currently expanding in today's competitive world and has a significant effect on global supply chains [43,44]. Hence, virtually every organization that moves physical goods has increased the significance of its logistics management activities. In today's diverse and rapidly evolving world, there are several ways in which businesses can gain a comparative advantage by externalizing logistics management processes. Using a 3PL provider is beneficial for distributors, organizations with distribution networks and exporters. 3PL refers to the technique through which organizations outsource their warehouse and shipping operational processes. Businesses that offer 3PL services can provide packaging materials, cross-docking, and home delivery. The growth rate of the 3PL services market is accelerating due to the development of e-commerce and the increase in reverse logistics processes. The e-commerce trend comprises commodities staged in forwarding facilities close to customers, faster, more reliable deliveries, and higher inventory turnover. In order to help maintain this highly complex supply chain, a significant increase in 3PL companies has been observed. 3PLs commonly receive requests for help with storage, delivery, and e-commerce fulfilment services, and they invest in technology for both internal and client-facing uses.
Because the natural DM phase involves multiple statistical, interpersonal and other aspects, the selection of 3PLs should be viewed as a complex MCGDM problem, owing to the multidimensional character of the decision-making challenge. Despite the importance of sustainable 3PL providers, there is little research on 3PL selection difficulties in emerging economies. The 3PL industry is expanding rapidly due to the expansion of e-commerce. The need for 3PL services is expected to increase as brands and distributors try to concentrate exclusively on their primary sectors. As a result, logistical services are often outsourced. In a word, identifying and selecting the best 3PL is an essential aspect of any company's long-term objectives. Following pre-screening, four 3PLs \left\{{\mathcal{A}}_{1}, {\mathcal{A}}_{2}, {\mathcal{A}}_{3}, {\mathcal{A}}_{4}\right\} have been identified as alternatives. Then three experts {\mathcal{P}}^{\mathrm{l}} with unknown weights {\mathrm{\gamma }}^{\mathrm{l}} are invited to evaluate the 3PLs according to the following five criteria {\mathrm{\Psi }}_{\mathrm{j}} = \left\{{\mathrm{\Psi }}_{1}, {\mathrm{\Psi }}_{2}, {\mathrm{\Psi }}_{3}, {\mathrm{\Psi }}_{4}, {\mathrm{\Psi }}_{5}\right\} with unknown weight vector {{\mathcal{K}}_{\mathrm{g}}}_{\mathrm{j}} , which are given as follows: (1) {\mathrm{\Psi }}_{1} financial stability; (2) {\mathrm{\Psi }}_{2} reputation; (3) {\mathrm{\Psi }}_{3} delivery time and reliability; (4) {\mathrm{\Psi }}_{4} green operation and (5) {\mathrm{\Psi }}_{5} IT capabilities.
Note: All criteria are benefits.
Then, for selecting the 3PL, we use the developed aggregation operators with PyFR values (PyFRVs): the decision experts evaluated each alternative {\mathcal{A}}_{\mathrm{i}} (i = 1, 2, 3, 4) with a PyFRV against the associated criteria. The evaluations given by the three experts {\mathcal{P}}^{1}, {\mathcal{P}}^{2} , {\mathcal{P}}^{3} are given in the following Tables 1–3, respectively.
{\bf{\Psi }}_{1} | {\bf{\Psi }}_{2} | {\bf{\Psi }}_{3} | {\bf{\Psi }}_{4} | {\bf{\Psi }}_{5} | |
{\mathcal{A}}_{1} | \left(\left(.4, .8\right), \left(.7, .4\right)\right) | \left(\left(.8, .5\right), \left(.2, .8\right)\right) | \left(\left(.4, .6\right), \left(.6, .7\right)\right) | \left(\left(.5, .5\right), \left(.9, .1\right)\right) | \left(\left(.7, .4\right), \left(.8, .2\right)\right) |
{\mathcal{A}}_{2} | \left(\left(.5, .7\right), \left(.6, .5\right)\right) | \left(\left(.6, .7\right), \left(.4, .5\right)\right) | \left(\left(.1, .9\right), \left(.6, .5\right)\right) | \left(\left(.2, .9\right), \left(.5, .6\right)\right) | \left(\left(.4, .5\right), \left(.6, .1\right)\right) |
{\mathcal{A}}_{3} | \left(\left(.4, .5\right), \left(.4, .2\right)\right) | \left(\left(.7, .1\right), \left(.3, .9\right)\right) | \left(\left(.3, .8\right), \left(.6, .8\right)\right) | \left(\left(.9, .3\right), \left(.3, .7\right)\right) | \left(\left(.9, .4\right), \left(.6, .2\right)\right) |
{\mathcal{A}}_{4} | \left(\left(.6, .6\right), \left(.6, .5\right)\right) | \left(\left(.9, .2\right), \left(.5, .5\right)\right) | \left(\left(.2, .5\right), \left(.3, .6\right)\right) | \left(\left(.7, .1\right), \left(.4, .8\right)\right) | \left(\left(.5, .6\right), \left(.4, .3\right)\right) |
{\bf{\Psi }}_{1} | {\bf{\Psi }}_{2} | {\bf{\Psi }}_{3} | {\bf{\Psi }}_{4} | {\bf{\Psi }}_{5} | |
{\mathcal{A}}_{1} | \left(\left(.4, .8\right), \left(.8, .4\right)\right) | \left(\left(.9, .1\right), \left(.8, .6\right)\right) | \left(\left(.4, .3\right), \left(.2, .9\right)\right) | \left(\left(.9, .4\right), \left(.4, .9\right)\right) | \left(\left(.2, .9\right), \left(.5, .4\right)\right) |
{\mathcal{A}}_{2} | \left(\left(.5, .7\right), \left(.9, .4\right)\right) | \left(\left(.8, .2\right), \left(.6, .5\right)\right) | \left(\left(.8, .1\right), \left(.5, .4\right)\right) | \left(\left(.1, .5\right), \left(.6, .4\right)\right) | \left(\left(.3, .7\right), \left(.6, .3\right)\right) |
{\mathcal{A}}_{3} | \left(\left(.2, .5\right), \left(.5, .5\right)\right) | \left(\left(.6, .5\right), \left(.8, .6\right)\right) | \left(\left(.4, .9\right), \left(.4, .6\right)\right) | \left(\left(.7, .2\right), \left(.8, .5\right)\right) | \left(\left(.3, .4\right), \left(.4, .8\right)\right) |
{\mathcal{A}}_{4} | \left(\left(.5, .6\right), \left(.4, .5\right)\right) | \left(\left(.7, .6\right), \left(.3, .7\right)\right) | \left(\left(.5, .2\right), \left(.7, .2\right)\right) | \left(\left(.3, .9\right), \left(.4, .7\right)\right) | \left(\left(.2, .5\right), \left(.5, .2\right)\right) |
{\bf{\Psi }}_{1} | {\bf{\Psi }}_{2} | {\bf{\Psi }}_{3} | {\bf{\Psi }}_{4} | {\bf{\Psi }}_{5} | |
{\mathcal{A}}_{1} | \left(\left(.2, .7\right), \left(.9, .2\right)\right) | \left(\left(.4, .6\right), \left(.5, .3\right)\right) | \left(\left(.6, .8\right), \left(.5, .6\right)\right) | \left(\left(.4, .4\right), \left(.2, .9\right)\right) | \left(\left(.5, .3\right), \left(.1, .9\right)\right) |
{\mathcal{A}}_{2} | \left(\left(.1, .4\right), \left(.4, .3\right)\right) | \left(\left(.3, .5\right), \left(.5, .8\right)\right) | \left(\left(.4, .6\right), \left(.7, .4\right)\right) | \left(\left(.5, .7\right), \left(.7, .4\right)\right) | \left(\left(.4, .7\right), \left(.2, .4\right)\right) |
{\mathcal{A}}_{3} | \left(\left(.2, .9\right), \left(.5, .7\right)\right) | \left(\left(.1, .6\right), \left(.4, .6\right)\right) | \left(\left(.5, .7\right), \left(.1, .9\right)\right) | \left(\left(.2, .2\right), \left(.3, .5\right)\right) | \left(\left(.4, .3\right), \left(.4, .6\right)\right) |
{\mathcal{A}}_{4} | \left(\left(.3, .5\right), \left(.7, .2\right)\right) | \left(\left(.1, .7\right), \left(.7, .7\right)\right) | \left(\left(.3, .9\right), \left(.2, .8\right)\right) | \left(\left(.4, .7\right), \left(.1, .7\right)\right) | \left(\left(.8, .2\right), \left(.4, .2\right)\right) |
Step 1: In a DM problem there are two types of criteria, cost criteria and benefit criteria, and cost criteria are normalized into benefit criteria. In this example, all the criteria are of the same type, i.e., benefit criteria, so no normalization is required and all expert evaluation matrices are considered to be normalized.
Step 2: Next, we find the weights of the expert matrices, because it is very difficult to obtain an accurate result when the weights of the experts are hidden. The weights of the decision experts are evaluated in the following steps (a)–(e):
a) Calculate the ideal opinion matrix {\mathfrak{O}}_{\mathrm{I}\mathrm{M}} of the normalized decision matrices {\mathrm{\aleph }}_{\mathrm{i}\mathrm{j}}^{\mathrm{l}} , which is closest to the opinions of all decision experts, by assuming equal decision expert weights and applying the PyFREWA operator; the result is represented in Table 4.
{\bf{\Psi }}_{1} | {\bf{\Psi }}_{2} | {\bf{\Psi }}_{3} | {\bf{\Psi }}_{4} | {\bf{\Psi }}_{5} | |
{\mathcal{A}}_{1} | \left(\begin{array}{c}\left(.345, .767\right), \\ \left(.817, .292\right)\end{array}\right) | \left(\begin{array}{c}\left(.770, .321\right), \\ \left(.664, .539\right)\end{array}\right) | \left(\begin{array}{c}\left(.487, .539\right), \\ \left(.469, .731\right)\end{array}\right) | \left(\begin{array}{c}\left(.688, .432\right), \\ \left(.664, .484\right)\end{array}\right) | \left(\begin{array}{c}\left(.520, .497\right), \\ \left(.573, .439\right)\end{array}\right) |
{\mathcal{A}}_{2} | \left(\begin{array}{c}\left(.414, .589\right), \\ \left(.710, .393\right)\end{array}\right) | \left(\begin{array}{c}\left(.622, .423\right), \\ \left(.508, .592\right)\end{array}\right) | \left(\begin{array}{c}\left(.548, .409\right), \\ \left(.609, .432\right)\end{array}\right) | \left(\begin{array}{c}\left(.318, .693\right), \\ \left(.609, .461\right)\end{array}\right) | \left(\begin{array}{c}\left(.370, .630\right), \\ \left(.508, .231\right)\end{array}\right) |
{\mathcal{A}}_{3} | \left(\begin{array}{c}\left(.283, .623\right), \\ \left(.469, .423\right)\end{array}\right) | \left(\begin{array}{c}\left(.546, .321\right), \\ \left(.569, .697\right)\end{array}\right) | \left(\begin{array}{c}\left(.409, .799\right), \\ \left(.424, .765\right)\end{array}\right) | \left(\begin{array}{c}\left(.716, .230\right), \\ \left(.550, .563\right)\end{array}\right) | \left(\begin{array}{c}\left(.655, .364\right), \\ \left(.478, .477\right)\end{array}\right) |
{\mathcal{A}}_{4} | \left(\begin{array}{c}\left(.486, .566\right), \\ \left(.586, .2373\right)\end{array}\right) | \left(\begin{array}{c}\left(.711, .452\right), \\ \left(.534, .630\right)\end{array}\right) | \left(\begin{array}{c}\left(.357, .473\right), \\ \left(.467, .477\right)\end{array}\right) | \left(\begin{array}{c}\left(.534, .434\right), \\ \left(.332, .733\right)\end{array}\right) | \left(\begin{array}{c}\left(.581, .399\right), \\ \left(.436, .230\right)\end{array}\right) |
b) According to Eq (22), the right and left ideal opinion matrices are calculated in Tables 5 and 6 as:
{\bf{\Psi }}_{1} | {\bf{\Psi }}_{2} | {\bf{\Psi }}_{3} | {\bf{\Psi }}_{4} | {\bf{\Psi }}_{5} | |
{\mathcal{A}}_{1} | \left(\left(.2, .7\right), \left(.9, .2\right)\right) | \left(\left(.9, .1\right), \left(.9, .6\right)\right) | \left(\left(.4, .6\right), \left(.6, .7\right)\right) | \left(\left(.4, .4\right), \left(.2, .9\right)\right) | \left(\left(.5, .5\right), \left(.9, .1\right)\right) |
{\mathcal{A}}_{2} | \left(\left(.5, .7\right), \left(.9, .4\right)\right) | \left(\left(.8, .2\right), \left(.6, .5\right)\right) | \left(\left(.8, .1\right), \left(.5, .4\right)\right) | \left(\left(.5, .7\right), \left(.7, .4\right)\right) | \left(\left(.5, .7\right), \left(.7, .4\right)\right) |
{\mathcal{A}}_{3} | \left(\left(.4, .5\right), \left(.4, .2\right)\right) | \left(\left(.6, .5\right), \left(.8, .6\right)\right) | \left(\left(.3, .8\right), \left(.6, .8\right)\right) | \left(\left(.2, .2\right), \left(.3, .5\right)\right) | \left(\left(.7, .2\right), \left(.8, .5\right)\right) |
{\mathcal{A}}_{4} | \left(\left(.3, .5\right), \left(.7, .2\right)\right) | \left(\left(.9, .2\right), \left(.5, .5\right)\right) | \left(\left(.5, .2\right), \left(.7, .2\right)\right) | \left(\left(.4, .7\right), \left(.1, .7\right)\right) | \left(\left(.7, .1\right), \left(.4, .8\right)\right) |
{\bf{\Psi }}_{1} | {\bf{\Psi }}_{2} | {\bf{\Psi }}_{3} | {\bf{\Psi }}_{4} | {\bf{\Psi }}_{5} | |
{\mathcal{A}}_{1} | \left(\left(.4, .8\right), \left(.8, .7\right)\right) | \left(\left(.4, .6\right), \left(.5, .3\right)\right) | \left(\left(.4, .3\right), \left(.2, .9\right)\right) | \left(\left(.4, .4\right), \left(.2, .9\right)\right) | \left(\left(.2, .9\right), \left(.5, .4\right)\right) |
{\mathcal{A}}_{2} | \left(\left(.5, .7\right), \left(.6, .5\right)\right) | \left(\left(.3, .5\right), \left(.5, .8\right)\right) | \left(\left(.1, .9\right), \left(.6, .5\right)\right) | \left(\left(.2, .9\right), \left(.5, .6\right)\right) | \left(\left(.4, .7\right), \left(.2, .4\right)\right) |
{\mathcal{A}}_{3} | \left(\left(.2, .9\right), \left(.5, .7\right)\right) | \left(\left(.1, .6\right), \left(.4, .6\right)\right) | \left(\left(.5, .7\right), \left(.1, .9\right)\right) | \left(\left(.2, .2\right), \left(.3, .5\right)\right) | \left(\left(.3, .4\right), \left(.4, .8\right)\right) |
{\mathcal{A}}_{4} | \left(\left(.5, .6\right), \left(.4, .5\right)\right) | \left(\left(.1, .7\right), \left(.7, .7\right)\right) | \left(\left(.3, .9\right), \left(.2, .8\right)\right) | \left(\left(.3, .9\right), \left(.4, .7\right)\right) | \left(\left(.5, .6\right), \left(.4, .3\right)\right) |
c) After calculating the ideal opinion {\mathfrak{O}}_{\mathrm{I}\mathrm{M}} , right ideal opinion {\mathfrak{O}}_{\mathrm{R}\mathrm{M}} and left ideal opinion {\mathfrak{O}}_{\mathrm{L}\mathrm{M}} , determine the distance from each decision expert matrix to {\mathfrak{O}}_{\mathrm{I}\mathrm{M}} , {\mathfrak{O}}_{\mathrm{R}\mathrm{M}} and {\mathfrak{O}}_{\mathrm{L}\mathrm{M}} by Eqs (23)–(25), as represented in Table 7.
{\mathfrak{O}}_{{\bf{d}}_{\bf{G}}\mathit{I}\bf{M}} | {\mathcal{A}}_{1} | {\mathcal{A}}_{2} | {\mathcal{A}}_{3} | {\mathcal{A}}_{4} |
{\mathcal{P}}^{1} | 0.1988 | 0.2071 | 0.1814 | 0.1324 |
{\mathcal{P}}^{2} | 0.2575 | 0.1527 | 0.1812 | 0.2008 |
{\mathcal{P}}^{3} | 0.2870 | 0.1642 | 0.2177 | 0.2444 |
{\mathfrak{O}}_{{\bf{d}}_{\bf{G}}\mathit{L}\bf{M}} | {\mathcal{A}}_{1} | {\mathcal{A}}_{2} | {\mathcal{A}}_{3} | {\mathcal{A}}_{4} |
{\mathcal{P}}^{1} | 0.2134 | 0.3027 | 0.2285 | 0.1941 |
{\mathcal{P}}^{2} | 0.3614 | 0.1002 | 0.2384 | 0.2926 |
{\mathcal{P}}^{3} | 0.4111 | 0.2913 | 0.3299 | 0.3515 |
{\mathfrak{O}}_{{\bf{d}}_{\bf{G}}\mathit{R}\bf{M}} | {\mathcal{A}}_{1} | {\mathcal{A}}_{2} | {\mathcal{A}}_{3} | {\mathcal{A}}_{4} |
{\mathcal{P}}^{1} | 0.3897 | 0.1538 | 0.3788 | 0.3338 |
{\mathcal{P}}^{2} | 0.2643 | 0.3359 | 0.2807 | 0.2876 |
{\mathcal{P}}^{3} | 0.3082 | 0.1905 | 0.0664 | 0.1729 |
d) The closeness indices are evaluated by Eq (26) as:
{{\mathfrak{C}}_{\mathrm{I}}}^{1} = 0.7530, {{\mathfrak{C}}_{\mathrm{I}}}^{2} = 0.7306, {{\mathfrak{C}}_{\mathrm{I}}}^{3} = 0.6988 . |
e) The weights of the decision experts are computed by Eq (27) as
{\mathrm{\gamma }}^{1} = 0.35, {\mathrm{\gamma }}^{2} = 0.33, {\mathrm{\gamma }}^{3} = 0.32 . |
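Using the distance values tabulated in Table 7, the closeness indices of Eq (26) and the expert weights of Eq (27) can be recomputed; the minimal Python sketch below reproduces the reported values up to rounding of the tabulated distances.

```python
# Distances from Table 7 (rows P1, P2, P3; columns A1..A4).
d_IM = [[0.1988, 0.2071, 0.1814, 0.1324],
        [0.2575, 0.1527, 0.1812, 0.2008],
        [0.2870, 0.1642, 0.2177, 0.2444]]
d_LM = [[0.2134, 0.3027, 0.2285, 0.1941],
        [0.3614, 0.1002, 0.2384, 0.2926],
        [0.4111, 0.2913, 0.3299, 0.3515]]
d_RM = [[0.3897, 0.1538, 0.3788, 0.3338],
        [0.2643, 0.3359, 0.2807, 0.2876],
        [0.3082, 0.1905, 0.0664, 0.1729]]

# Closeness index of Eq (26) for each expert l.
CI = [(sum(d_RM[l]) + sum(d_LM[l])) /
      (sum(d_IM[l]) + sum(d_RM[l]) + sum(d_LM[l])) for l in range(3)]

# Expert weights of Eq (27).
gamma = [c / sum(CI) for c in CI]

print([round(c, 4) for c in CI])     # about 0.753, 0.732 and 0.699, close to the reported indices
print([round(g, 3) for g in gamma])  # about 0.345, 0.335 and 0.320, consistent with 0.35, 0.33, 0.32
```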
Step 3: Applying the weights of the decision experts calculated in Step 2 above, compute the revised ideal matrix {\mathcal{R}}_{\mathrm{I}\mathrm{M}} by using Eq (28) of the Algorithm, as given in Table 8.
{\bf{\Psi }}_{1} | {\bf{\Psi }}_{2} | {\bf{\Psi }}_{3} | {\bf{\Psi }}_{4} | {\bf{\Psi }}_{5} | |
{\mathcal{A}}_{1} | \left(\begin{array}{c}\left(.349, .767\right), \\ \left(.814, .394\right)\end{array}\right) | \left(\begin{array}{c}\left(.772, .321\right), \\ \left(.660, .546\right)\end{array}\right) | \left(\begin{array}{c}\left(.475, .537\right), \\ \left(.472, .731\right)\end{array}\right) | \left(\begin{array}{c}\left(.687, .433\right), \\ \left(.674, .466\right)\end{array}\right) | \left(\begin{array}{c}\left(.524, .497\right), \\ \left(.582, .428\right)\end{array}\right) |
{\mathcal{A}}_{2} | \left(\begin{array}{c}\left(.418, .592\right), \\ \left(.710, .395\right)\end{array}\right) | \left(\begin{array}{c}\left(.623, .426\right), \\ \left(.507, .588\right)\end{array}\right) | \left(\begin{array}{c}\left(.544, .414\right), \\ \left(.607, .433\right)\end{array}\right) | \left(\begin{array}{c}\left(.314, .696\right), \\ \left(.606, .463\right)\end{array}\right) | \left(\begin{array}{c}\left(.370, .625\right), \\ \left(.512, .225\right)\end{array}\right) |
{\mathcal{A}}_{3} | \left(\begin{array}{c}\left(.286, .617\right), \\ \left(.467, .414\right)\end{array}\right) | \left(\begin{array}{c}\left(.552, .310\right), \\ \left(.567, .701\right)\end{array}\right) | \left(\begin{array}{c}\left(.406, .800\right), \\ \left(.431, .763\right)\end{array}\right) | \left(\begin{array}{c}\left(.725, .230\right), \\ \left(.548, .565\right)\end{array}\right) | \left(\begin{array}{c}\left(.665, .365\right), \\ \left(.482, .467\right)\end{array}\right) |
{\mathcal{A}}_{4} | \left(\begin{array}{c}\left(.490, .566\right), \\ \left(.585, .377\right)\end{array}\right) | \left(\begin{array}{c}\left(.720, .442\right), \\ \left(.531, .625\right)\end{array}\right) | \left(\begin{array}{c}\left(.355, .469\right), \\ \left(.466, .476\right)\end{array}\right) | \left(\begin{array}{c}\left(.538, .419\right), \\ \left(.335, .734\right)\end{array}\right) | \left(\begin{array}{c}\left(.576, .404\right), \\ \left(.435, .230\right)\end{array}\right) |
Step 4: Evaluate the entropy measure (\mathbb{E}\mathbb{A}) associated with each attribute by using Eq (21) as follows:
{\mathbb{E}\mathbb{A}}_{1} = 0.4506, {\mathbb{E}\mathbb{A}}_{2} = 0.4332, {\mathbb{E}\mathbb{A}}_{3} = 0.4888, {\mathbb{E}\mathbb{A}}_{4} = 0.6988, {\mathbb{E}\mathbb{A}}_{5} = 0.5702. |
Step 5: Evaluate the weights of the criteria by using Step 5 of the Algorithm.
{{\mathcal{K}}_{\mathrm{g}}}_{1} = 0.2082, {{\mathcal{K}}_{\mathrm{g}}}_{2} = 0.2147, {{\mathcal{K}}_{\mathrm{g}}}_{3} = 0.1937, {{\mathcal{K}}_{\mathrm{g}}}_{4} = 0.2206, {{\mathcal{K}}_{\mathrm{g}}}_{5} = 0.1628. |
Step 6: Calculate the collective preference value of each alternative in the revised ideal matrix based on the developed aggregation operator.
a) Applying PyFREWA operators.
According to PyFREWA operator using Eq (16), the collective preference value of each alternative is calculated in Table 9.
{\mathcal{A}}_{1} | \left( {\left( {.{\bf{6027}},.{\bf{4944}}} \right),\left( {.{\bf{6658}},.{\bf{5056}}} \right)} \right) |
{\mathcal{A}}_{2} | \left(\left(.4760, .5429\right), \left(.5993, .4175\right)\right) |
{\mathcal{A}}_{3} | \left(\left(.5603, .4249\right), \left(.5056, .5767\right)\right) |
{\mathcal{A}}_{4} | \left(\left(.5597, .4596\right), \left(.4811, .4793\right)\right) |
b) Applying PyFREOWA operators.
According to PyFREOWA operator using Eq (17) the collective overall preference value of each alternative is calculated in Table 10.
{\mathcal{A}}_{1} | \left( {\left( {.{\bf{5970}},.{\bf{4979}}} \right),\left( {.{\bf{6705}},.{\bf{4954}}} \right)} \right) |
{\mathcal{A}}_{2} | \left(\left(.4731, .5401\right), \left(.5971, .3980\right)\right) |
{\mathcal{A}}_{3} | \left(\left(.5645, .4202\right), \left(.5041, .5608\right)\right) |
{\mathcal{A}}_{4} | \left(\left(.5558, .4583\right), \left(.4840, .4535\right)\right) |
c) Applying PyFREHA operators.
According to PyFRHA operator using Eq (18) the collective overall preference value of each alternative is calculated in Table 11.
{\mathcal{A}}_{1} | \left(\left(.5454, .5841\right), \left(.6127, .5936\right)\right) |
{\mathcal{A}}_{2} | \left(\left(.4270, .6195\right), \left(.5425, .5138\right)\right) |
{\mathcal{A}}_{3} | \left(\left(.5151, .5044\right), \left(.4557, .6252\right)\right) |
{\mathcal{A}}_{4} | \left(\left(.5050, .5475\right), \left(.4394, .5937\right)\right) |
Step 7: Calculate the score value of the collective preference value of each alternative by using Definition 6; the results are presented in Table 12.
Operators | {\mathcal{A}}_{1} | {\mathcal{A}}_{2} | {\mathcal{A}}_{3} | {\mathcal{A}}_{4} |
PyFREWA | .5766 | .5291 | .5140 | .5259 |
PyFREOWA | .5782 | .5325 | .5203 | .5318 |
PyFREHA | .4948 | .4572 | .4489 | .4569 |
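The score values in Table 12 can be recovered from the aggregated PyFR values: the form \mathcal{S} = \frac{1}{4}\left(2+{\underline{\mu }}^{2}+{\overline{\mu }}^{2}-{\underline{\nu }}^{2}-{\overline{\nu }}^{2}\right) is consistent with all of the reported scores and is assumed here to be what Definition 6 reduces to. The short sketch below reproduces the PyFREWA row of Table 12 (up to rounding of the aggregated inputs) and the induced ranking.
# Score of a PyFR value ((mu_l, nu_l), (mu_u, nu_u)); assumed form of Definition 6,
# consistent with the values reported in Table 12.
def pyfr_score(lower, upper):
    (mu_l, nu_l), (mu_u, nu_u) = lower, upper
    return (2 + mu_l ** 2 + mu_u ** 2 - nu_l ** 2 - nu_u ** 2) / 4

# Collective PyFREWA values from Table 9.
aggregates = {
    "A1": ((.6027, .4944), (.6658, .5056)),
    "A2": ((.4760, .5429), (.5993, .4175)),
    "A3": ((.5603, .4249), (.5056, .5767)),
    "A4": ((.5597, .4596), (.4811, .4793)),
}

scores = {name: round(pyfr_score(*value), 4) for name, value in aggregates.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(scores)    # close to the PyFREWA row of Table 12: .5766, .5291, .5140, .5259
print(ranking)   # ['A1', 'A2', 'A4', 'A3']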
Step 8: Rank the alternatives according to their score values, as represented in Table 13.
Operators | Ranking | Best |
PyFREWA | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{4} > {\mathcal{A}}_{3} | {\mathcal{A}}_{1} |
PyFREOWA | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{4} > {\mathcal{A}}_{3} | {\mathcal{A}}_{1} |
PyFREHA | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{4} > {\mathcal{A}}_{3} | {\mathcal{A}}_{1} |
The graphical representation of all the alternatives based on the PyFREWA, PyFREOWA, and PyFREHA operators is given in Figure 3.
Pythagorean fuzzy rough set theory can handle complex, uncertain, and imprecise information and can approximate the lower and upper bounds of a set with greater accuracy than traditional rough set theory or IFRS theory. It responds to changes in the data set because it is designed to adapt to different types of uncertainty and imprecision. The approach can handle data sets that are incomplete, inconsistent, or contain missing values, and it can identify dependencies and relationships between the objects of a set even when the data are uncertain or imprecise.
The comparison of the proposed Pythagorean fuzzy rough Einstein aggregation operators with several existing aggregation operators demonstrates the effectiveness and advantages of the proposed methodology. The comparison was carried out by analyzing the characteristics of a number of decision-making techniques and aggregation operators proposed in the literature. The selected existing approaches are the IFR EDAS method and the q-rung orthopair fuzzy Einstein aggregation operators presented in [37] and [45], respectively. We applied the above-developed operators within the IFR setting and the existing q-rung orthopair fuzzy rough setting, in which all weights are unknown. The final results after applying the complete procedure of the Algorithm are shown in Table 14, and the graphical ranking of the alternatives is given in Figure 4. In every case {\mathcal{A}}_{1} is the best alternative, and the results obtained with the proposed Einstein aggregation operators agree with those obtained by the existing methods [37,45]. Thus, the decision-making process suggested in this study is stable and practical. Moreover, none of the existing works can process the type of data that a PyFRS provides to a decision-maker, whereas the suggested approach can also handle the data used by those methods. This leads to the conclusion that the proposed study is more general and more reliable than the existing ones.
Operators | Score values ({\mathcal{A}}_{1}, {\mathcal{A}}_{2}, {\mathcal{A}}_{3}, {\mathcal{A}}_{4}) | Ranking | Best alternative |
IFFRWA-EDAS [37] | 1.000, .482, .223, .007 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
IFFROWA-EDAS [37] | 1.000, .5351, .235, .000 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
IFFRHA-EDAS [37] | 1.000, .618, .255, .000 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
IFFRWG-EDAS [37] | 1.000, .386, .038, .020 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
IFFROWG-EDAS [37] | 1.000, .533, .268, .000 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
IFFRHG-EDAS [37] | 1.000, .432, .228, .000 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
q-ROFREWA [45] | .999, .489, .201, .011 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
q-ROFREOWA [45] | .999, .492, .162, .001 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
q-ROFREHA [45] | .999, .478, .234, .012 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
q-ROFREWG [45] | .999, .510, .075, .025 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
q-ROFREOWG [45] | .999, .481, .001, .017 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{3} > {\mathcal{A}}_{4} | {\mathcal{A}}_{1} |
q-ROFREHG [45] | .999, .598, .088, .205 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{4} > {\mathcal{A}}_{3} | {\mathcal{A}}_{1} |
Proposed-PyFREWA | .576, .529, .514, .525 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{4} > {\mathcal{A}}_{3} | {\mathcal{A}}_{1} |
Proposed-PyFREOWA | .578, .532, .520, .531 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{4} > {\mathcal{A}}_{3} | {\mathcal{A}}_{1} |
Proposed-PyFREHA | .494, .457, .448, .456 | {\mathcal{A}}_{1} > {\mathcal{A}}_{2} > {\mathcal{A}}_{4} > {\mathcal{A}}_{3} | {\mathcal{A}}_{1} |
MCGDM is a technique for identifying and evaluating conflicting criteria across all aspects of DM in order to reach more acceptable and accurate outcomes. In DM problems, the most efficient way of learning a given fact is sometimes hidden, which makes the decision-making process more complex and dynamic. Pythagorean fuzzy rough sets are a mathematical tool for dealing with ambiguous and imprecise data. In this paper, a novel method based on Einstein operators is proposed to solve MCGDM problems with PyFRSs in which the weights of the criteria and of the DMs are unknown. For this purpose, we developed a series of Einstein aggregation operators for PyFRSs, namely PyFREWA, PyFREOWA and PyFREHA, on the basis of the Einstein t-norm and t-conorm to deal with multi-criteria group decision-making problems. Further, a novel PyFR entropy measure is presented to determine the weights of the DMs and of the criteria within the PyFR setting. To minimize the loss of knowledge, aggregation is performed by applying the determined DM and criterion weights to obtain the final ranking. Lastly, we applied the suggested approach to a real-life 3PL selection problem and compared the results with those of existing DM methods to show that the proposed DM method is effective and useful.
In future work, the proposed approach can be extended to neural networking, three-way decision making, and various fuzzy extensions, such as Fermatean fuzzy rough sets, spherical fuzzy rough sets, neutrosophic fuzzy rough sets, and complex Fermatean fuzzy rough sets, together with other families of aggregation operators, such as Yager, Dombi, and Hamacher aggregation operators, to solve various MCGDM problems.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 1437-980-1443). The authors gratefully acknowledge the technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
The authors declare that they have no conflicts of interest.
[1] C. N. Wang, N. A. T. Nguyen, T. T. Dang, C. M. Lu, A compromised decision-making approach to third-party logistics selection in sustainable supply chain using fuzzy AHP and fuzzy VIKOR methods, Mathematics, 9 (2021), 886. https://doi.org/10.3390/math9080886
[2] L. A. Zadeh, Fuzzy sets, Inf. Control, 8 (1965), 338–353. https://doi.org/10.1016/S0019-9958(65)90241-X
[3] J. Wang, X. Zhang, J. Dai, J. Zhan, TI-fuzzy neighborhood measures and generalized Choquet integrals for granular structure reduction and decision making, Fuzzy Sets Syst., 2023, 108512. https://doi.org/10.1016/j.fss.2023.03.015
[4] J. Wang, X. Zhang, Q. Hu, Three-way fuzzy sets and their applications (Ⅱ), Axioms, 11 (2022), 532. https://doi.org/10.3390/axioms11100532
[5] K. T. Atanassov, Intuitionistic fuzzy sets, Physica Heidelberg, 1999. https://doi.org/10.1007/978-3-7908-1870-3
[6] E. Szmidt, J. Kacprzyk, Distances between intuitionistic fuzzy sets, Fuzzy Sets Syst., 114 (2000), 505–518. https://doi.org/10.1016/S0165-0114(98)00244-9
[7] F. Wang, S. Wan, Possibility degree and divergence degree based method for interval-valued intuitionistic fuzzy multi-attribute group decision making, Expert Syst. Appl., 141 (2020), 112929. https://doi.org/10.1016/j.eswa.2019.112929
[8] Z. Xu, Intuitionistic fuzzy aggregation operators, IEEE Trans. Fuzzy Syst., 15 (2007), 1179–1187. https://doi.org/10.1109/TFUZZ.2006.890678
[9] Y. Xu, H. Wang, The induced generalized aggregation operators for intuitionistic fuzzy sets and their application in group decision making, Appl. Soft Comput., 12 (2012), 1168–1179. https://doi.org/10.1016/j.asoc.2011.11.003
[10] H. Zhao, Z. Xu, M. Ni, S. Liu, Generalized aggregation operators for intuitionistic fuzzy sets, Int. J. Intell. Syst., 25 (2010), 1–30. https://doi.org/10.1002/int.20386
[11] I. K. Vlachos, G. D. Sergiadis, Intuitionistic fuzzy information–Applications to pattern recognition, Pattern Recognit. Lett., 28 (2007), 197–206. https://doi.org/10.1016/j.patrec.2006.07.004
[12] S. K. De, R. Biswas, A. R. Roy, An application of intuitionistic fuzzy sets in medical diagnosis, Fuzzy Sets Syst., 117 (2001), 209–213. https://doi.org/10.1016/S0165-0114(98)00235-8
[13] R. R. Yager, Pythagorean fuzzy subsets, 2013 Joint IFSA World Congress and NAFIPS Annual Meeting (IFSA/NAFIPS), 2013, 57–61. https://doi.org/10.1109/IFSA-NAFIPS.2013.6608375
[14] R. R. Yager, Pythagorean membership grades in multicriteria decision making, IEEE Trans. Fuzzy Syst., 22 (2013), 958–965. https://doi.org/10.1109/TFUZZ.2013.2278989
[15] L. Fei, Y. Deng, Multi-criteria decision making in Pythagorean fuzzy environment, Appl. Intell., 50 (2020), 537–561. https://doi.org/10.1007/s10489-019-01532-2
[16] X. Peng, Y. Yang, Some results for Pythagorean fuzzy sets, Int. J. Intell. Syst., 30 (2015), 1133–1160. https://doi.org/10.1002/int.21738
[17] A. A. Khan, S. Ashraf, S. Abdullah, M. Qiyas, J. Luo, S. U. Khan, Pythagorean fuzzy Dombi aggregation operators and their application in decision support system, Symmetry, 11 (2019), 383. https://doi.org/10.3390/sym11030383
[18] G. Wei, M. Lu, Pythagorean fuzzy power aggregation operators in multiple attribute decision making, Int. J. Intell. Syst., 33 (2018), 169–186. https://doi.org/10.1002/int.21946
[19] S. Ashraf, S. Abdullah, S. Khan, Fuzzy decision support modeling for internet finance soft power evaluation based on sine trigonometric Pythagorean fuzzy information, J. Ambient Intell. Humanized Comput., 12 (2021), 3101–3119. https://doi.org/10.1007/s12652-020-02471-4
[20] X. Zhang, A novel approach based on similarity measure for Pythagorean fuzzy multiple criteria group decision making, Int. J. Intell. Syst., 31 (2016), 593–611. https://doi.org/10.1002/int.21796
[21] K. Rahman, S. Abdullah, R. Ahmed, M. Ullah, Pythagorean fuzzy Einstein weighted geometric aggregation operator and their application to multiple attribute group decision making, J. Intell. Fuzzy Syst., 33 (2017), 635–647. https://doi.org/10.3233/JIFS-16797
[22] P. Rani, A. R. Mishra, G. Rezaei, H. Liao, A. Mardani, Extended Pythagorean fuzzy TOPSIS method based on similarity measure for sustainable recycling partner selection, Int. J. Fuzzy Syst., 2 (2017), 735–747. https://doi.org/10.1007/s40815-019-00689-9
[23] C. Huang, M. Lin, Z. Xu, Pythagorean fuzzy MULTIMOORA method based on distance measure and score function: its application in multicriteria decision making process, Knowl. Inf. Syst., 62 (2020), 4373–4406. https://doi.org/10.1007/s10115-020-01491-y
[24] Z. Pawlak, Rough sets, Int. J. Comput. Inf. Sci., 11 (1982), 341–356. https://doi.org/10.1007/BF01001956
[25] D. Dubois, H. Prade, Rough fuzzy sets and fuzzy rough sets, Int. J. Gen. Syst., 17 (1990), 191–209. https://doi.org/10.1080/03081079008935107
[26] J. Wang, X. Zhang, A novel multi-criteria decision-making method based on rough sets and fuzzy measures, Axioms, 11 (2022), 275. https://doi.org/10.3390/axioms11060275
[27] L. Zhang, J. Zhan, Fuzzy soft β-covering based fuzzy rough sets and corresponding decision-making applications, Int. J. Mach. Learn. Cyber., 10 (2019), 1487–1502. https://doi.org/10.1007/s13042-018-0828-3
[28] J. S. Mi, Y. Leung, W. Z. Wu, An uncertainty measure in partition-based fuzzy rough sets, Int. J. Gen. Syst., 34 (2005), 77–90. https://doi.org/10.1080/03081070512331318329
[29] M. A. Khan, S. Ashraf, S. Abdullah, F. Ghani, Applications of probabilistic hesitant fuzzy rough set in decision support system, Soft Comput., 24 (2020), 16759–16774. https://doi.org/10.1007/s00500-020-04971-z
[30] X. Zhang, B. Zhou, P. Li, A general frame for intuitionistic fuzzy rough sets, Inf. Sci., 216 (2012), 34–49. https://doi.org/10.1016/j.ins.2012.04.018
[31] L. Zhou, W. Z. Wu, On generalized intuitionistic fuzzy rough approximation operators, Inf. Sci., 178 (2008), 2448–2465. https://doi.org/10.1016/j.ins.2008.01.012
[32] P. Liu, A. Ali, N. Rehman, Multi-granulation fuzzy rough sets based on fuzzy preference relations and their applications, IEEE Access, 7 (2019), 147825–147848. https://doi.org/10.1109/ACCESS.2019.2942854
[33] S. M. Yun, S. J. Lee, Intuitionistic fuzzy rough approximation operators, Int. J. Fuzzy Log. Intell. Syst., 15 (2015), 208–215. https://doi.org/10.5391/IJFIS.2015.15.3.208
[34] H. Zhang, L. Shu, S. Liao, Intuitionistic fuzzy soft rough set and its application in decision making, Abstr. Appl. Anal., 2014 (2014), 287314. https://doi.org/10.1155/2014/287314
[35] Z. Zhang, Generalized intuitionistic fuzzy rough sets based on intuitionistic fuzzy coverings, Inf. Sci., 198 (2014), 186–206. https://doi.org/10.1016/j.ins.2012.02.054
[36] H. Zhang, L. Xiong, W. Ma, Generalized intuitionistic fuzzy soft rough set and its application in decision making, J. Comput. Anal. Appl., 20 (2016), 750–766.
[37] R. Chinram, A. Hussain, T. Mahmood, M. I. Ali, EDAS method for multi-criteria group decision making based on intuitionistic fuzzy rough aggregation operators, IEEE Access, 9 (2021), 10199–10216. https://doi.org/10.1109/ACCESS.2021.3049605
[38] S. P. Zhang, P. Sun, J. S. Mi, T. Feng, Belief function of Pythagorean fuzzy rough approximation space and its applications, Int. J. Approx. Reason., 119 (2020), 58–80. https://doi.org/10.1016/j.ijar.2020.01.001
[39] H. Garg, A new generalized Pythagorean fuzzy information aggregation using Einstein operations and its application to decision making, Int. J. Intell. Syst., 31 (2016), 886–920. https://doi.org/10.1002/int.21809
[40] K. Guo, Q. Song, On the entropy for Atanassov's intuitionistic fuzzy sets: An interpretation from the perspective of amount of knowledge, Appl. Soft Comput., 24 (2014), 328–340. https://doi.org/10.1016/j.asoc.2014.07.006
[41] A. Biswas, B. Sarkar, Pythagorean fuzzy TOPSIS for multicriteria group decision-making with unknown weight information through entropy measure, Int. J. Intell. Syst., 34 (2019), 1108–1128. https://doi.org/10.1002/int.22088
[42] Z. Yue, An avoiding information loss approach to group decision making, Appl. Math. Model., 37 (2013), 112–126. https://doi.org/10.1016/j.apm.2012.02.008
[43] M. Riaz, H. M. A. Farid, Picture fuzzy aggregation approach with application to third-party logistic provider selection process, Rep. Mech. Eng., 3 (2022), 227–236. https://doi.org/10.31181/rme20023062022r
[44] S. Soh, A decision model for evaluating third-party logistics providers using fuzzy analytic hierarchy process, Afr. J. Bus. Manag., 4 (2010), 339–349.
[45] S. Ashraf, N. Rehman, A. Hussain, H. AlSalman, A. H. Gumaei, q-rung orthopair fuzzy rough Einstein aggregation information-based EDAS method: Applications in robotic agrifarming, Comput. Intell. Neurosci., 2021 (2021), 5520264. https://doi.org/10.1155/2021/5520264