The weighted hesitant fuzzy set (WHFS) is an extension of the hesitant fuzzy set (HFS) in which weights indicate that the decision maker has different confidence in each possible assessment of the membership degree. In this paper, we redefine the union and intersection operations of weighted hesitant fuzzy elements (WHFEs), investigate their properties, and propose the variance function of a WHFE to compare WHFEs. Furthermore, we develop two aggregation operators, the weighted hesitant fuzzy ordered weighted averaging (WHFOWA) operator and the weighted hesitant fuzzy ordered weighted geometric (WHFOWG) operator, to aggregate weighted hesitant fuzzy information, and present a multiple-attribute group decision-making algorithm under the weighted hesitant fuzzy environment. Finally, four numerical examples illustrate the effectiveness of the proposed aggregation operators.
Citation: Wenyi Zeng, Rong Ma, Deqing Li, Qian Yin, Zeshui Xu, Ahmed Mostafa Khalil. Novel operations of weighted hesitant fuzzy sets and their group decision making application[J]. AIMS Mathematics, 2022, 7(8): 14117-14138. doi: 10.3934/math.2022778
The fuzzy set, introduced by Zadeh [1], has achieved great success in many fields, such as approximate reasoning, fuzzy control, and fuzzy decision making. With the increasing complexity of real decision-making processes, researchers have generalized the concept of the fuzzy set. Considering the difficulty of establishing the membership degree of an element to a fuzzy set, Torra [2,3] introduced the concept of the hesitant fuzzy set (HFS), which permits the membership degree to take a set of possible values and can reflect human hesitancy more objectively than the other classical extensions of the fuzzy set. Since then, many researchers have paid attention to this topic and obtained meaningful conclusions. Generally speaking, most of these works can be divided into three categories:
1) Aggregation operators. Aggregation operators are important tools for aggregating fuzzy information, but how to aggregate different fuzzy information usually relies on domain knowledge. Hence, aiming at different scenarios, many researchers proposed different aggregation operators. For example, Bedregal et al. [4] investigated classical aggregation operators for hesitant fuzzy elements (HFEs), Xia et al. [5,6] proposed a series of aggregation operators for hesitant fuzzy information, and Wei [7] and Zeng et al. [8] investigated hesitant fuzzy prioritized operators, respectively. Zhang [9] introduced hesitant fuzzy power aggregation operators, Zhu et al. [10] and Yu et al. [11] investigated hesitant fuzzy geometric Bonferroni means and generalized hesitant fuzzy Bonferroni means, respectively, and Peng et al. [12] investigated continuous hesitant fuzzy aggregation operators and applied them in decision making.
2) Information measures. Information measures are the basis of many decision-making methods, so many scholars have investigated this topic. For example, Xu et al. [13,14,15] investigated distance, similarity, and correlation measures of hesitant fuzzy sets (HFSs), Peng et al. [16] presented the generalized hesitant fuzzy weighted distance and applied it to multiple-criteria decision making, Li et al. [17,18] proposed several distances for HFSs based on the hesitancy degree of an HFE, Chen et al. [19] investigated the correlation coefficient of HFSs and applied it to clustering analysis, Farhadinia [20] investigated the relationship among entropy, similarity measure, and distance measure for HFSs and interval-valued hesitant fuzzy sets, and Zeng et al. [21] investigated the relationship between distance and similarity measures of HFSs and applied it in pattern recognition.
3) Extensions of HFSs. Various extensions of HFSs have been used to further describe imprecise information in real life. For example, Khan et al. [22,23,24] introduced several novel similarity measures for q-rung orthopair fuzzy sets, Zeng et al. [25,26] introduced the weighted interval-valued hesitant fuzzy set and the weighted hesitant fuzzy linguistic term set and applied them in group decision making, Zhu et al. [27] introduced the dual hesitant fuzzy set, Chen et al. [28] and Wei et al. [29] developed interval-valued hesitant fuzzy sets, Rodríguez et al. [30,31] investigated hesitant fuzzy linguistic term sets for decision making, Wei et al. [32] introduced some aggregation operators for hesitant fuzzy linguistic term sets and applied them in multi-criteria decision making, Zhu et al. [33] introduced linguistic preference relations under the hesitant fuzzy environment, Liao et al. [34,35] investigated distance and similarity measures between hesitant fuzzy linguistic term sets and the consistency and consensus of hesitant fuzzy preference relations, with applications in group decision making, Onar et al. [36] and Xu et al. [37] utilized the hesitant fuzzy Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) to obtain an optimal strategy, Zhang et al. [38] investigated an extension of the VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method based on HFSs, and Qian et al. [39] proposed the generalized hesitant fuzzy set and applied it in decision support systems.
It should be pointed out that each possible value in an HFS carries the same importance or preference (weight). However, in some practical applications, especially in multi-attribute decision making, the possible values may have different importance and thus should be assigned different weights when decision makers hesitate among several possible values. For instance, assume that five experts are authorized to evaluate one supplier anonymously according to a given criterion or attribute, and each assessment, indicating the satisfaction degree of the expert with the supplier, is provided to the decision maker. Suppose the five assessments are 0.6, 0.7, 0.6, 0.8 and 0.8, respectively, and the experts cannot persuade one another; then, while determining the satisfaction degree of the supplier, the decision maker hesitates among the three values 0.6, 0.7 and 0.8. According to the usual approach to constructing an HFS, the membership degree can be represented as the hesitant fuzzy element (HFE) {0.6, 0.7, 0.8}. But, in fact, 0.6 is provided by two experts and 0.8 by two experts, whereas only one expert provides 0.7. Hence, the decision maker may prefer 0.6 and 0.8 when determining the membership degree, and it is obvious that the classical HFS or HFE cannot accurately represent this information. Based on this, Zhang and Wu [40] introduced the concept of the weighted hesitant fuzzy set (WHFS) and presented some operations of weighted hesitant fuzzy elements (WHFEs), as well as some operations based on t-norms.
Recently, Zhu and Xu [41] introduced the concept of the probabilistic hesitant fuzzy set (PHFS). It should be pointed out that there are differences between the WHFS and the PHFS: the former assigns weights, given by experts, to the possible values of an HFE, while the latter attaches probabilities, determined by frequencies over a countable set of values, to an HFE. Hence, these two variants express different application backgrounds. Inspired by the idea of the WHFS in Zhang and Wu [40] and considering the different backgrounds of group decision making in real life, in this paper we redefine the union and intersection operations of weighted hesitant fuzzy elements (WHFEs), propose the variance function of a WHFE, and present a ranking rule to compare WHFEs based on the score function and variance function. Furthermore, we investigate the characteristics of WHFEs, develop two aggregation operators, the weighted hesitant fuzzy ordered weighted averaging (WHFOWA) operator and the weighted hesitant fuzzy ordered weighted geometric (WHFOWG) operator, to aggregate weighted hesitant fuzzy information, and develop a mathematical model of multi-attribute group decision making. Finally, four numerical examples illustrate the effectiveness and feasibility of the proposed method.
The organization of this study is as follows. In Section 2, we review some basic notions of HFSs and HFEs. In Section 3, we review the concept of the WHFS, redefine the intersection and union operations of WHFEs, investigate their properties, propose the variance function of a WHFE, and present a ranking rule to compare WHFEs based on the score function and variance function. In Section 4, we develop the WHFOWA and WHFOWG operators to aggregate weighted hesitant fuzzy information and propose a multi-attribute group decision-making algorithm. In Section 5, four numerical examples illustrate the effectiveness of our techniques. Conclusions are given in the last section.
Throughout this paper, X={x1,x2,⋯,xn} denotes the discourse set; HFS and HFE stand for hesitant fuzzy set and hesitant fuzzy element, WHFS and WHFE stand for weighted hesitant fuzzy set and weighted hesitant fuzzy element, and h and hw denote an HFE and a WHFE, respectively.
Definition 1. [2] Given a fixed set X, a hesitant fuzzy set (HFS) on X is defined in terms of a function that, when applied to X, returns a subset of [0,1].
For convenience, Xia and Xu [5] proposed a simple notation to express an HFS:
$$E = \{\langle x, h_E(x)\rangle \mid x \in X\},$$
where $h_E(x)$ is a collection of values in $[0,1]$ denoting the possible membership degrees of the element $x \in X$ to the set $E$; $h = h_E(x)$ is called a hesitant fuzzy element (HFE).
Furthermore, Torra [2] and Xia and Xu [5] introduced the following operations for hesitant fuzzy elements (HFEs) $h$, $h_1$ and $h_2$:
(1) $h^- = \min h$, $h^+ = \max h$;
(2) $h^c = \bigcup_{\gamma \in h}\{1-\gamma\}$;
(3) $h_1 \cup h_2 = \{\gamma \in h_1 \cup h_2 \mid \gamma \ge \max(h_1^-, h_2^-)\} = \bigcup_{\gamma_1 \in h_1, \gamma_2 \in h_2}\max\{\gamma_1, \gamma_2\}$;
(4) $h_1 \cap h_2 = \{\gamma \in h_1 \cap h_2 \mid \gamma \le \min(h_1^+, h_2^+)\} = \bigcap_{\gamma_1 \in h_1, \gamma_2 \in h_2}\min\{\gamma_1, \gamma_2\}$;
(5) $h^\lambda = \bigcup_{\gamma \in h}\{\gamma^\lambda\}$, $\lambda > 0$;
(6) $\lambda h = \bigcup_{\gamma \in h}\{1-(1-\gamma)^\lambda\}$, $\lambda > 0$;
(7) $h_1 \oplus h_2 = \bigcup_{\gamma_1 \in h_1, \gamma_2 \in h_2}\{\gamma_1 + \gamma_2 - \gamma_1\gamma_2\}$;
(8) $h_1 \otimes h_2 = \bigcup_{\gamma_1 \in h_1, \gamma_2 \in h_2}\{\gamma_1\gamma_2\}$.
Furthermore, let $h_i$, $i=1,2,\cdots,n$, be a collection of HFEs; Liao et al. [34] introduced the following operations:
(9) $\bigoplus_{i=1}^{n} h_i = \bigcup_{\gamma_i \in h_i}\{1-\prod_{i=1}^{n}(1-\gamma_i)\}$;
(10) $\bigotimes_{i=1}^{n} h_i = \bigcup_{\gamma_i \in h_i}\{\prod_{i=1}^{n}\gamma_i\}$.
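To make these operations concrete, the following is a minimal Python sketch of operations (2)–(4), (7) and (8); it represents an HFE simply as a set of values in [0,1], and the function names are chosen here only for illustration.

```python
def hfe_complement(h):
    """Operation (2): the set {1 - gamma for each gamma in h}."""
    return {1 - g for g in h}

def hfe_union(h1, h2):
    """Operation (3): pairwise maxima of the values of the two HFEs."""
    return {max(g1, g2) for g1 in h1 for g2 in h2}

def hfe_intersection(h1, h2):
    """Operation (4): pairwise minima of the values of the two HFEs."""
    return {min(g1, g2) for g1 in h1 for g2 in h2}

def hfe_oplus(h1, h2):
    """Operation (7): algebraic sum gamma1 + gamma2 - gamma1*gamma2 over all pairs."""
    return {g1 + g2 - g1 * g2 for g1 in h1 for g2 in h2}

def hfe_otimes(h1, h2):
    """Operation (8): algebraic product gamma1*gamma2 over all pairs."""
    return {g1 * g2 for g1 in h1 for g2 in h2}

# Two illustrative HFEs
h1, h2 = {0.6, 0.7}, {0.3, 0.8}
print(sorted(hfe_union(h1, h2)))         # [0.6, 0.7, 0.8]
print(sorted(hfe_intersection(h1, h2)))  # [0.3, 0.6, 0.7]
```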
Meanwhile, Xia and Xu [5] introduced the score function of an HFE $h$, $s(h) = \frac{1}{\ell(h)}\sum_{\gamma \in h}\gamma$, where $\ell(h)$ is the number of elements in $h$. For any two HFEs $h_1$ and $h_2$, Xia and Xu [5] gave the following ranking rule: if $s(h_1) > s(h_2)$, then $h_1 > h_2$; if $s(h_1) = s(h_2)$, then $h_1 = h_2$.
Later, Farhadinia [42] and Rodríguez et al. [43] proposed improved methods to compare HFEs. Furthermore, Liao et al. [34] introduced the variance function as a second ranking index to compare HFEs.
Definition 2. [34] For a given HFE $h$, $v(h) = \frac{1}{\ell(h)}\sqrt{\sum_{\gamma_i, \gamma_j \in h}(\gamma_i - \gamma_j)^2}$ is called the variance function of the HFE $h$, where $\ell(h)$ is the number of elements in $h$.
Based on the score function and the variance function, Liao et al. [34] proposed the following ranking rule to compare HFEs.
(a) If s(h1)>s(h2), then h1>h2.
(b) If s(h1)=s(h2), then
(1) If v(h1)>v(h2), then h1<h2;
(2) If v(h1)=v(h2), then h1=h2.
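As a hedged illustration of the score function, variance function and ranking rule above, here is a small Python sketch (the helper names are ours); the variance sum is taken over unordered pairs of values, which only rescales it and does not affect the comparison.

```python
from itertools import combinations
from math import sqrt

def score(h):
    """Score function of an HFE (Xia and Xu [5]): arithmetic mean of its values."""
    return sum(h) / len(h)

def variance(h):
    """Variance function of an HFE (Definition 2), summing over unordered pairs."""
    return sqrt(sum((a - b) ** 2 for a, b in combinations(h, 2))) / len(h)

def compare(h1, h2):
    """Return 1 if h1 > h2, -1 if h1 < h2, 0 if h1 = h2, per the ranking rule above."""
    s1, s2 = score(h1), score(h2)
    if s1 != s2:
        return 1 if s1 > s2 else -1
    v1, v2 = variance(h1), variance(h2)
    if v1 != v2:
        return -1 if v1 > v2 else 1   # larger variance ranks lower
    return 0

# h1 and h2 share the score 0.7, but h2 has the larger variance, so h1 > h2.
print(compare([0.6, 0.7, 0.8], [0.5, 0.7, 0.9]))   # 1
```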
Definition 3. [40] Let $X = \{x_1, x_2, \cdots, x_n\}$ be a fixed set; then a weighted hesitant fuzzy set (WHFS) $E^w$ on $X$ is defined in terms of a function that, when applied to $X$, returns
$$E^w = \{\langle x, h_E^w(x)\rangle \mid x \in X\},$$
where $h_E^w(x) = \{\langle\gamma_1, w_1\rangle, \langle\gamma_2, w_2\rangle, \cdots, \langle\gamma_m, w_m\rangle\}$, the $\gamma_j$ $(j=1,2,\cdots,m)$ are values in $[0,1]$ denoting the possible membership degrees of the element $x \in X$ to the set $E^w$, and $w_j \in [0,1]$ $(j=1,2,\cdots,m)$ with $\sum_{j=1}^{m} w_j = 1$ is called the weight of $\gamma_j$; the weight $w_j$ denotes the importance of $\gamma_j$ being taken as the membership degree of $x$, or the preference with which the decision maker takes $\gamma_j$ as the membership degree of $x$. Then $h^w = h_E^w(x)$ is called a weighted hesitant fuzzy element (WHFE).
Remark 1. The weighted hesitant fuzzy set (WHFS) is a variant of the traditional HFS and describes human uncertainty more objectively and precisely.
Remark 2. The difference between an HFS and a WHFS is that the former assumes the possible membership degrees have equal importance, while the latter allows them to have different importance. In particular, if the weight $w_j = \frac{1}{m}$ for every $j = 1, 2, \cdots, m$ in a WHFE $h^w$, then the WHFE
$$h^w = \{\langle\gamma_1, \tfrac{1}{m}\rangle, \langle\gamma_2, \tfrac{1}{m}\rangle, \cdots, \langle\gamma_m, \tfrac{1}{m}\rangle\}$$
reduces to the classical HFE $h = \{\gamma_1, \gamma_2, \cdots, \gamma_m\}$. Here, we regard such a WHFE as equivalent to the HFE $h = \{\gamma_1, \gamma_2, \cdots, \gamma_m\}$. Thus, in this paper, we make no distinction between them and write
$$h^w = \{\langle\gamma_1, \tfrac{1}{m}\rangle, \langle\gamma_2, \tfrac{1}{m}\rangle, \cdots, \langle\gamma_m, \tfrac{1}{m}\rangle\} \Longleftrightarrow h = \{\gamma_1, \gamma_2, \cdots, \gamma_m\}.$$
Furthermore, Zhang and Wu [40] introduced the following operations of WHFEs.
Definition 4. [40] Given three WHFEs $h^w = \bigcup_{\gamma \in h^w}\{\langle\gamma, w_\gamma\rangle\}$, $h_1^w = \bigcup_{\gamma_1 \in h_1^w}\{\langle\gamma_1, w_{\gamma_1}\rangle\}$ and $h_2^w = \bigcup_{\gamma_2 \in h_2^w}\{\langle\gamma_2, w_{\gamma_2}\rangle\}$, and $\lambda > 0$, then
(1) $(h^w)^c = \bigcup_{\gamma \in h^w}\{\langle 1-\gamma, w_\gamma\rangle\}$;
(2) $h_1^w \cup h_2^w = \bigcup_{\gamma_1 \in h_1^w, \gamma_2 \in h_2^w}\{\langle\gamma_1 \vee \gamma_2, w_{\gamma_1}\cdot w_{\gamma_2}\rangle\}$;
(3) $h_1^w \cap h_2^w = \bigcup_{\gamma_1 \in h_1^w, \gamma_2 \in h_2^w}\{\langle\gamma_1 \wedge \gamma_2, w_{\gamma_1}\cdot w_{\gamma_2}\rangle\}$;
(4) $(h^w)^\lambda = \bigcup_{\gamma \in h^w}\{\langle\gamma^\lambda, w_\gamma\rangle\}$;
(5) $\lambda h^w = \bigcup_{\gamma \in h^w}\{\langle 1-(1-\gamma)^\lambda, w_\gamma\rangle\}$;
(6) $h_1^w \oplus h_2^w = \bigcup_{\gamma_1 \in h_1^w, \gamma_2 \in h_2^w}\{\langle\gamma_1 + \gamma_2 - \gamma_1\gamma_2, w_{\gamma_1} w_{\gamma_2}\rangle\}$;
(7) $h_1^w \otimes h_2^w = \bigcup_{\gamma_1 \in h_1^w, \gamma_2 \in h_2^w}\{\langle\gamma_1\gamma_2, w_{\gamma_1} w_{\gamma_2}\rangle\}$.
Now we use an example to illustrate a shortcoming of the above union "⋃" and intersection "⋂" operations.
Example 1. Given two WHFEs hw1={<0.6,0.5>,<0.7,0.5>} and hw2={<0.3,0.5>,<0.8,0.5>}, because every possible value in hw1 and hw2 has the same weight, the WHFEs hw1 and hw2 can be regarded as classical hesitant fuzzy elements (HFEs), i.e., they are equivalent to the HFEs h1 and h2. However, by Definition 4, we have:
hw1⋃hw2={<0.6∨0.3,0.5⋅0.5>,<0.6∨0.8,0.5⋅0.5>,<0.7∨0.3,0.5⋅0.5>,<0.7∨0.8,0.5⋅0.5>}={<0.6,0.25>,<0.8,0.25>,<0.7,0.25>,<0.8,0.25>}={<0.6,0.25>,<0.7,0.25>,<0.8,0.5>}. |
hw1⋂hw2={<0.6∧0.3,0.5⋅0.5>,<0.6∧0.8,0.5⋅0.5>,<0.7∧0.3,0.5⋅0.5>,<0.7∧0.8,0.5⋅0.5>}={<0.3,0.25>,<0.6,0.25>,<0.3,0.25>,<0.7,0.25>}={<0.3,0.5>,<0.6,0.25>,<0.7,0.25>}. |
If we regard the WHFEs hw1 and hw2 as the HFEs h1 and h2, then by the definitions of "⋃" and "⋂" for HFEs, we also have:
h1⋃h2={0.6∨0.3,0.6∨0.8,0.7∨0.3,0.7∨0.8}={0.6,0.7,0.8}. |
h1⋂h2={0.6∧0.3,0.6∧0.8,0.7∧0.3,0.7∧0.8}={0.3,0.6,0.7}. |
Because both h1⋃h2 and h1⋂h2 have three possible values and every possible value has the same weight, then we make use of the equivalent expression between HFE and WHFE, and use WHFE to denote them, namely,
hw1⋃hw2={<0.6,13>,<0.7,13>,<0.8,13>} |
and
hw1⋂hw2={<0.3,13>,<0.6,13>,<0.7,13>}. |
Obviously, the above calculation results are inconsistent, and the latter result seems more intuitive and logical than the former one.
Consequently, we think it necessary to improve and redefine the union and intersection operations of WHFEs.
Definition 5. Given two WHFEs $h_1^w$ and $h_2^w$, then
(1) $h_1^w \cup h_2^w = \bigcup_{\gamma_1 \in h_1^w, \gamma_2 \in h_2^w}\{\langle\max\{\gamma_1,\gamma_2\}, w'_{\max\{\gamma_1,\gamma_2\}}\rangle\}$; for the values, we take the maximum of each pair, and the weight $w'_{\max\{\gamma_1,\gamma_2\}}$ is determined in three steps:
1) If $\gamma_1 = \gamma_2$, we take the average of the two weights, $w_{\max\{\gamma_1,\gamma_2\}} = \frac{w_{\gamma_1}+w_{\gamma_2}}{2}$; if $\gamma_1 > \gamma_2$, we take the weight of the larger value, $w_{\max\{\gamma_1,\gamma_2\}} = w_{\gamma_1}$; otherwise, $w_{\max\{\gamma_1,\gamma_2\}} = w_{\gamma_2}$.
2) If the same value $\gamma_i$ appears several times in the resulting collection $\bigcup\{\langle\gamma_i, w_{\gamma_i}\rangle\}$, its weight is computed as the arithmetic mean $w_{\max\{\gamma_1,\gamma_2,\cdots,\gamma_l\}} = \frac{1}{l}\sum_{k=1}^{l} w_{\gamma_k}$, where $l$ is the number of occurrences of $\gamma_i$.
3) Since the sum of the resulting weights may exceed 1, we normalize $\{w_{\max\{\gamma_1,\gamma_2\}}\}$ to obtain the new weights $\{w'_{\max\{\gamma_1,\gamma_2\}}\}$.
(2) $h_1^w \cap h_2^w = \bigcap_{\gamma_1 \in h_1^w, \gamma_2 \in h_2^w}\{\langle\min\{\gamma_1,\gamma_2\}, w'_{\min\{\gamma_1,\gamma_2\}}\rangle\}$; for the values, we take the minimum of each pair, and the weight $w'_{\min\{\gamma_1,\gamma_2\}}$ is determined in three steps:
1) If $\gamma_1 = \gamma_2$, we take the average of the two weights, $w_{\min\{\gamma_1,\gamma_2\}} = \frac{w_{\gamma_1}+w_{\gamma_2}}{2}$; if $\gamma_1 < \gamma_2$, we take the weight of the smaller value, $w_{\min\{\gamma_1,\gamma_2\}} = w_{\gamma_1}$; otherwise, $w_{\min\{\gamma_1,\gamma_2\}} = w_{\gamma_2}$.
2) If the same value $\gamma_i$ appears several times in the resulting collection $\bigcap\{\langle\gamma_i, w_{\gamma_i}\rangle\}$, its weight is computed as the arithmetic mean $w_{\min\{\gamma_1,\gamma_2,\cdots,\gamma_l\}} = \frac{1}{l}\sum_{k=1}^{l} w_{\gamma_k}$, where $l$ is the number of occurrences of $\gamma_i$.
3) Since the sum of the resulting weights may exceed 1, we normalize $\{w_{\min\{\gamma_1,\gamma_2\}}\}$ to obtain the new weights $\{w'_{\min\{\gamma_1,\gamma_2\}}\}$.
Thus, according to Definition 5, we re-calculate Example 1 and have the following conclusions.
Example 2. Given two WHFEs hw1={<0.6,0.5>,<0.7,0.5>} and hw2={<0.3,0.5>,<0.8,0.5>}, then known by Definition 5, we have:
{<0.6∨0.3,0.5>,<0.6∨0.8,0.5>,<0.7∨0.3,0.5>,<0.7∨0.8,0.5>}={<0.6,0.5>,<0.8,0.5>,<0.7,0.5>,<0.8,0.5>}={<0.6,0.5>,<0.7,0.5>,<0.8,0.5>}. |
Thus, we have:
hw1⋃hw2={<0.6,13>,<0.7,13>,<0.8,13>}. |
Similarly,
{<0.6∧0.3,0.5>,<0.6∧0.8,0.5>,<0.7∧0.3,0.5>,<0.7∧0.8,0.5>}={<0.3,0.5>,<0.6,0.5>,<0.3,0.5>,<0.7,0.5>}={<0.3,0.5>,<0.6,0.5>,<0.7,0.5>}. |
Then we obtain
hw1⋂hw2={<0.3,13>,<0.6,13>,<0.7,13>}. |
Remark 3. These calculation results are completely consistent with our intuition and logic, and show that Definition 5 is more reasonable.
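The following Python sketch is one possible implementation of Definition 5, with a WHFE represented as a dictionary mapping each possible value to its weight (a representation chosen here only for illustration); applied to the data of Example 2, it reproduces the results above.

```python
def whfe_union(h1, h2):
    """Union of two WHFEs per Definition 5. Each WHFE is a dict {value: weight}."""
    collected = {}                              # value -> list of candidate weights
    for g1, w1 in h1.items():
        for g2, w2 in h2.items():
            v = max(g1, g2)
            if g1 == g2:
                w = (w1 + w2) / 2               # equal values: average the two weights
            else:
                w = w1 if g1 > g2 else w2       # keep the weight of the larger value
            collected.setdefault(v, []).append(w)
    # a value obtained several times gets the arithmetic mean of its weights
    merged = {v: sum(ws) / len(ws) for v, ws in collected.items()}
    total = sum(merged.values())                # normalize so the weights sum to 1
    return {v: w / total for v, w in sorted(merged.items())}

def whfe_intersection(h1, h2):
    """Intersection of two WHFEs per Definition 5 (dual of the union)."""
    collected = {}
    for g1, w1 in h1.items():
        for g2, w2 in h2.items():
            v = min(g1, g2)
            if g1 == g2:
                w = (w1 + w2) / 2
            else:
                w = w1 if g1 < g2 else w2       # keep the weight of the smaller value
            collected.setdefault(v, []).append(w)
    merged = {v: sum(ws) / len(ws) for v, ws in collected.items()}
    total = sum(merged.values())
    return {v: w / total for v, w in sorted(merged.items())}

hw1 = {0.6: 0.5, 0.7: 0.5}
hw2 = {0.3: 0.5, 0.8: 0.5}
print(whfe_union(hw1, hw2))         # {0.6: 1/3, 0.7: 1/3, 0.8: 1/3}
print(whfe_intersection(hw1, hw2))  # {0.3: 1/3, 0.6: 1/3, 0.7: 1/3}
```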
In the following, we use an example to explain these operations of WHFEs.
Example 3. Let
hw1={<0.6,0.3>,<0.7,0.25>,<0.8,0.45>} |
and
hw2={<0.65,0.4>,<0.7,0.35>,<0.9,0.25>} |
be two WHFEs, and λ>0, then we have:
(1)
(hw1)c={<0.4,0.3>,<0.3,0.25>,<0.2,0.45>}, |
(hw2)c={<0.35,0.4>,<0.3,0.35>,<0.1,0.25>}. |
(2)
{<0.6∨0.65,0.4>,<0.6∨0.7,0.35>,<0.6∨0.9,0.25>,<0.7∨0.65,0.25>,<0.7∨0.7,0.3>,<0.7∨0.9,0.25>,<0.8∨0.65,0.45>,<0.8∨0.7,0.45>,<0.8∨0.9,0.25>}={<0.65,0.4>,<0.7,0.35>,<0.9,0.25>,<0.7,0.25>,<0.7,0.3>,<0.9,0.25>,<0.8,0.45>,<0.8,0.45>,<0.9,0.25>}={<0.65,0.4>,<0.7,0.3>,<0.8,0.45>,<0.9,0.25>}. |
It should be pointed out that, in the above calculation, we take the average of the weights as the final weight for equal values.
So, we have
hw1⋃hw2(x)={<0.65,0.2857>,<0.7,0.2143>,<0.8,0.3214>,<0.9,0.1786>}. |
(3) Similarly, we have
hw1⋂hw2={<0.6,0.2069>,<0.65,0.2759>,<0.7,0.2069>,<0.8,0.3103>}. |
(4)
(hw1)λ={<0.6λ,0.3>,<0.7λ,0.25>,<0.8λ,0.45>}, |
(hw2)λ={<0.65λ,0.4>,<0.7λ,0.35>,<0.9λ,0.25>}. |
(5)
λhw1={<1−0.4λ,0.3>,<1−0.3λ,0.25>,<1−0.2λ,0.45>}, |
λhw2(x)={<1−0.35λ,0.4>,<1−0.3λ,0.35>,<1−0.1λ,0.25>}. |
(6)
hw1⊕hw2={<0.86,0.12>,<0.88,0.105>,<0.96,0.075>,<0.895,0.1>,<0.91,0.0875>,<0.97,0.0625>,<0.93,0.18>,<0.94,0.1575>,<0.98,0.1125>}. |
(7)
hw1⊗hw2={<0.39,0.12>,<0.42,0.105>,<0.54,0.075>,<0.455,0.1>,<0.49,0.0875>,<0.63,0.0625>,<0.52,0.18>,<0.56,0.1575>,<0.72,0.1125>}. |
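As an illustration, the ⊕ and ⊗ operations of Definition 4 translate directly into code, since they only combine every pair of values and multiply the corresponding weights; the sketch below (with WHFEs as lists of (value, weight) pairs, a representation we choose for illustration) reproduces items (6) and (7) of Example 3 up to floating-point rounding.

```python
def whfe_oplus(h1, h2):
    """Algebraic sum of two WHFEs (Definition 4, operation (6))."""
    return [(g1 + g2 - g1 * g2, w1 * w2) for g1, w1 in h1 for g2, w2 in h2]

def whfe_otimes(h1, h2):
    """Algebraic product of two WHFEs (Definition 4, operation (7))."""
    return [(g1 * g2, w1 * w2) for g1, w1 in h1 for g2, w2 in h2]

hw1 = [(0.6, 0.3), (0.7, 0.25), (0.8, 0.45)]
hw2 = [(0.65, 0.4), (0.7, 0.35), (0.9, 0.25)]
print(whfe_oplus(hw1, hw2))   # (0.86, 0.12), (0.88, 0.105), ... as in item (6)
print(whfe_otimes(hw1, hw2))  # (0.39, 0.12), (0.42, 0.105), ... as in item (7)
```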
In the following we give some conclusions.
Theorem 1. For any three WHFEs $h^w$, $h_1^w$ and $h_2^w$ and any $\lambda > 0$, $(h^w)^c$, $h_1^w \cup h_2^w$, $h_1^w \cap h_2^w$, $(h^w)^\lambda$, $\lambda h^w$, $h_1^w \oplus h_2^w$ and $h_1^w \otimes h_2^w$ are also WHFEs.
Proof. All of the possible values of these results belong to $[0,1]$, and their weights either are normalized or are products of weights that sum to 1, so in every case the resulting weights add up to 1; this proves Theorem 1.
Remark 4. Theorem 1 shows that the WHFEs are closed with respect to these operations.
Theorem 2. For three WHFEs hw,hw1 and hw2, and λ>0, then we have
(1) (hw1⋃hw2)c=(hw1)c⋂(hw2)c;
(2) (hw1⋂hw2)c=(hw1)c⋃(hw2)c;
(3) ((hw)c)λ=(λhw)c;
(4) λ(hw)c=((hw)λ)c;
(5) (hw1⊕hw2)c=(hw1)c⊗(hw2)c;
(6) (hw1⊗hw2)c=(hw1)c⊕(hw2)c.
Proof. Here we only prove (1), (3) and (5); the rest can be proved similarly.
(1)
(hw1⋃hw2)c=(⋃γ1∈hw1,γ2∈hw2,γ1≠γ2{<1−max{γ1,γ2},w′max{γ1,γ2}>})⋃(⋃γ1∈hw1,γ2∈hw2,γ1=γ2{<1−max{γ1,γ2},w′max{γ1,γ2}>})=(⋃γ1∈hw1,γ2∈hw2,γ1≠γ2{<min{1−γ1,1−γ2},w′max{1−γ1,1−γ2}>})⋃(⋃γ1∈hw1,γ2∈hw2,γ1=γ2{<min{1−γ1,1−γ2},w′max{γ1,γ2}>})=(hw1)c⋂(hw2)c. |
(3)
((hw)c)λ=⋃γ∈hw{<(1−γ)λ,wγ>}, |
and
(λhw)c=⋃γ∈hw{<1−(1−(1−γ)λ),wγ>}=⋃γ∈hw{<(1−γ)λ,wγ>}=((hw)c)λ. |
(5)
(hw1⊕hw2)c=⋃γ1∈hw1,γ2∈hw2{<1−(γ1+γ2−γ1γ2),wγ1wγ2>}=⋃γ1∈hw1,γ2∈hw2{<(1−γ1)(1−γ2),wγ1wγ2>}=(hw1)c⊗(hw2)c. |
Here, we complete the proof of Theorem 2.
Remark 5. Theorem 2 shows that the ⋃ and ⋂ operations and the ⊕ and ⊗ operations each satisfy De Morgan's laws.
Definition 6. [40] For a given WHFE $h^w$, $s(h^w) = \sum_{\gamma \in h^w} w_\gamma\,\gamma$ is called the score function of the WHFE $h^w$.
Definition 7. For a given WHFE $h^w$,
$$v(h^w) = \sqrt{\sum_{\gamma \in h^w} w_\gamma\,(\gamma - s(h^w))^2} \tag{1}$$
is called the variance function of the WHFE $h^w$.
Remark 6. Comparing the definitions of the score function and variance function for HFEs and WHFEs, the main difference is that the WHFE versions incorporate the weights of the possible values; hence the corresponding calculations differ.
In the following, we propose the ranking rule to compare any two WHFEs hw1 and hw2.
(a) If s(hw1)>s(hw2), then hw1>hw2.
(b) If s(hw1)=s(hw2), then
(1) If v(hw1)>v(hw2), then hw1<hw2.
(2) If v(hw1)=v(hw2), then hw1=hw2.
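A minimal Python sketch of the score function (Definition 6), the variance function (Definition 7) and the resulting ranking rule is given below; the tuple-based sort key is our own device for applying the rule, and WHFEs are again lists of (value, weight) pairs.

```python
from math import sqrt

def whfe_score(hw):
    """Score function of a WHFE (Definition 6): weighted mean of its values."""
    return sum(w * g for g, w in hw)

def whfe_variance(hw):
    """Variance function of a WHFE (Definition 7, Eq (1))."""
    s = whfe_score(hw)
    return sqrt(sum(w * (g - s) ** 2 for g, w in hw))

def whfe_rank_key(hw):
    """Sort key for the ranking rule: larger score first, ties broken by smaller variance."""
    return (whfe_score(hw), -whfe_variance(hw))

hw1 = [(0.6, 0.5), (0.8, 0.5)]
hw2 = [(0.7, 1.0)]
# Both have score 0.7; hw2 has zero variance, so hw2 > hw1 under the rule.
print(whfe_rank_key(hw1), whfe_rank_key(hw2))
```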
In this section, we will propose two kinds of aggregation operators based on weighted hesitant fuzzy information, and investigate their related properties.
For a given collection of WHFEs $h_1^w, h_2^w, \cdots, h_n^w$, Zhang and Wu [40] proposed the WHFWA and WHFWG aggregation operators as follows:
$$\mathrm{WHFWA}(h_1^w, h_2^w, \cdots, h_n^w) = \bigoplus_{j=1}^{n}(\omega_j h_j^w) = \bigcup_{\gamma_1 \in h_1^w, \cdots, \gamma_n \in h_n^w}\left\{\left\langle 1-\prod_{j=1}^{n}(1-\gamma_j)^{\omega_j},\ w_{\gamma_1} w_{\gamma_2}\cdots w_{\gamma_n}\right\rangle\right\},$$
$$\mathrm{WHFWG}(h_1^w, h_2^w, \cdots, h_n^w) = \bigotimes_{j=1}^{n}(h_j^w)^{\omega_j} = \bigcup_{\gamma_1 \in h_1^w, \cdots, \gamma_n \in h_n^w}\left\{\left\langle \prod_{j=1}^{n}\gamma_j^{\omega_j},\ w_{\gamma_1} w_{\gamma_2}\cdots w_{\gamma_n}\right\rangle\right\},$$
where $\omega = (\omega_1, \omega_2, \cdots, \omega_n)$ is the weight vector of $h_j^w$, $j = 1, 2, \cdots, n$, with $\omega_j \ge 0$ and $\sum_{j=1}^{n}\omega_j = 1$.
Theorem 3. Let $h_1^w, h_2^w, \cdots, h_n^w$ be a collection of WHFEs, and let $\omega = (\omega_1, \omega_2, \cdots, \omega_n)$ be the weight vector with $\omega_j \ge 0$ and $\sum_{j=1}^{n}\omega_j = 1$; then
$$\mathrm{WHFWG}(h_1^w, h_2^w, \cdots, h_n^w) \le \mathrm{WHFWA}(h_1^w, h_2^w, \cdots, h_n^w).$$
Proof. By Wei et al. [32], we have
$$\prod_{j=1}^{n}\gamma_j^{\omega_j} \le 1 - \prod_{j=1}^{n}(1-\gamma_j)^{\omega_j}.$$
Since the weights associated with $\prod_{j=1}^{n}\gamma_j^{\omega_j}$ and $1-\prod_{j=1}^{n}(1-\gamma_j)^{\omega_j}$ are the same, namely $w_{\gamma_1}w_{\gamma_2}\cdots w_{\gamma_n}$, we have
$$s(\mathrm{WHFWG}(h_1^w, h_2^w, \cdots, h_n^w)) \le s(\mathrm{WHFWA}(h_1^w, h_2^w, \cdots, h_n^w)).$$
This completes the proof of Theorem 3.
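The WHFWA and WHFWG formulas translate directly into code; the sketch below (our own function names, WHFEs as lists of (value, weight) pairs) also checks the score inequality of Theorem 3 on the WHFEs of Example 3 with an assumed, purely illustrative weight vector ω = (0.6, 0.4).

```python
from itertools import product

def score(hw):
    """Score function of a WHFE given as a list of (value, weight) pairs."""
    return sum(w * g for g, w in hw)

def whfwa(whfes, omega):
    """WHFWA of Zhang and Wu [40]: 1 - prod_j (1 - gamma_j)^omega_j per value combination,
    with the product of the value weights."""
    out = []
    for combo in product(*whfes):
        val, wt = 1.0, 1.0
        for (g, w), wj in zip(combo, omega):
            val *= (1 - g) ** wj
            wt *= w
        out.append((1 - val, wt))
    return out

def whfwg(whfes, omega):
    """WHFWG of Zhang and Wu [40]: prod_j gamma_j^omega_j per combination, same weights."""
    out = []
    for combo in product(*whfes):
        val, wt = 1.0, 1.0
        for (g, w), wj in zip(combo, omega):
            val *= g ** wj
            wt *= w
        out.append((val, wt))
    return out

hw1 = [(0.6, 0.3), (0.7, 0.25), (0.8, 0.45)]
hw2 = [(0.65, 0.4), (0.7, 0.35), (0.9, 0.25)]
omega = (0.6, 0.4)   # assumed weight vector, for illustration only
print(score(whfwg([hw1, hw2], omega)) <= score(whfwa([hw1, hw2], omega)))  # True, as Theorem 3 asserts
```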
Motivated by the idea of OWA operator introduced by Yager [44], we propose two kinds of aggregation operators to aggregate weighted hesitant fuzzy information.
Definition 8. Let $h_1^w, h_2^w, \cdots, h_n^w$ be a collection of WHFEs and $\omega = (\omega_1, \omega_2, \cdots, \omega_n)$ be the weight vector with $\omega_j \ge 0$ and $\sum_{j=1}^{n}\omega_j = 1$. The WHFOWA and WHFOWG operators are defined as follows, where $(h_{\sigma(1)}^w, h_{\sigma(2)}^w, \cdots, h_{\sigma(n)}^w)$ is a permutation of $(h_1^w, h_2^w, \cdots, h_n^w)$ such that $h_{\sigma(i)}^w \ge h_{\sigma(j)}^w$ for all $i < j$:
$$\mathrm{WHFOWA}(h_1^w, h_2^w, \cdots, h_n^w) = \bigoplus_{j=1}^{n}(\omega_j h_{\sigma(j)}^w) = \bigcup_{\gamma_{\sigma(1)} \in h_{\sigma(1)}^w, \cdots, \gamma_{\sigma(n)} \in h_{\sigma(n)}^w}\left\{\left\langle 1-\prod_{j=1}^{n}(1-\gamma_{\sigma(j)})^{\omega_j},\ w_{\gamma_{\sigma(1)}} w_{\gamma_{\sigma(2)}}\cdots w_{\gamma_{\sigma(n)}}\right\rangle\right\},$$
$$\mathrm{WHFOWG}(h_1^w, h_2^w, \cdots, h_n^w) = \bigotimes_{j=1}^{n}(h_{\sigma(j)}^w)^{\omega_j} = \bigcup_{\gamma_{\sigma(1)} \in h_{\sigma(1)}^w, \cdots, \gamma_{\sigma(n)} \in h_{\sigma(n)}^w}\left\{\left\langle \prod_{j=1}^{n}\gamma_{\sigma(j)}^{\omega_j},\ w_{\gamma_{\sigma(1)}} w_{\gamma_{\sigma(2)}}\cdots w_{\gamma_{\sigma(n)}}\right\rangle\right\}.$$
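A hedged sketch of the WHFOWA and WHFOWG operators follows; we assume that the arguments are ordered by the ranking rule of Section 3 (score first, ties broken by smaller variance), which is how we read the permutation σ in Definition 8.

```python
from itertools import product
from math import sqrt

def score(hw):
    return sum(w * g for g, w in hw)

def variance(hw):
    s = score(hw)
    return sqrt(sum(w * (g - s) ** 2 for g, w in hw))

def ordered(whfes):
    """Permutation sigma: arrange the WHFEs from largest to smallest
    (higher score first; for equal scores, smaller variance first)."""
    return sorted(whfes, key=lambda hw: (score(hw), -variance(hw)), reverse=True)

def whfowa(whfes, omega):
    """WHFOWA operator of Definition 8 on WHFEs given as lists of (value, weight) pairs."""
    out = []
    for combo in product(*ordered(whfes)):
        val, wt = 1.0, 1.0
        for (g, w), wj in zip(combo, omega):
            val *= (1 - g) ** wj
            wt *= w
        out.append((1 - val, wt))
    return out

def whfowg(whfes, omega):
    """WHFOWG operator of Definition 8."""
    out = []
    for combo in product(*ordered(whfes)):
        val, wt = 1.0, 1.0
        for (g, w), wj in zip(combo, omega):
            val *= g ** wj
            wt *= w
        out.append((val, wt))
    return out
```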
Now we give the detailed algorithm steps for applying these two aggregation operators in multi-attribute group decision making based on weighted hesitant fuzzy information.
Let A={A1,A2,⋯,An} and C={C1,C2,⋯,Cm} be the set of alternatives and the set of attributes, respectively, let E={E1,E2,⋯,Et} be the set of experts, and let rkij∈[0,1] denote the assessment given by expert Ek to alternative Ai under attribute Cj. Our goal is to choose the best alternative.
Hence, we use WHFEs as the decision-making data, aggregate this fuzzy information, and compare the score and variance functions of the aggregated values to obtain the ranking relation.
Step 1. For every alternative Ai under each attribute Cj, considering two kinds of different cases, we construct the WHFE hwij by incorporating the experts' assessments, respectively.
Case 1. The weights of the experts are unknown; then
$$h_{ij}^w(C_j) = \{\langle r_{ij}, w_{ij}\rangle \mid w_{ij} = p/t\}, \tag{2}$$
where $r_{ij} \in \cup_k\{r_{ij}^k\}$, $p$ is the number of experts who give the assessment $r_{ij}$, and $t$ is the number of experts.
Case 2. The weight vector of the experts, $v = (v_1, v_2, \cdots, v_t)^T$ with $v_k \ge 0$ and $\sum_{k=1}^{t} v_k = 1$, is given; then
$$h_{ij}^w(C_j) = \{\langle r_{ij}, w_{ij}\rangle \mid w_{ij} = \textstyle\sum_{k \in N(r_{ij})} v_k\}, \tag{3}$$
where $r_{ij} \in \cup_k\{r_{ij}^k\}$ and $N(r_{ij})$ denotes the set of experts who give the assessment $r_{ij}$.
For convenience, we use an example to illustrate how the weights of a weighted hesitant fuzzy element hw(x) are established.
Example 4. Three kinds of cars, A, B and C, are to be compared under the attribute "Comfort", and ten experts (E1, E2, ⋯, E10) are authorized to provide their assessments. For A, the assessments provided by the experts are 0.6, 0.7, 0.8, 0.6, 0.75, 0.7, 0.7, 0.8, 0.6 and 0.7, respectively. We now merge the ten experts' assessments by incorporating the evaluated values.
If the weights of the experts are unknown, then according to Eq (2), we obtain the WHFE
hw(x)={<0.6,0.3>,<0.7,0.4>,<0.75,0.1>,<0.8,0.2>}. |
If the weight vector of the experts is given as
v=(0.1,0.12,0.08,0.09,0.15,0.08,0.1,0.08,0.1,0.1)T,
then according to Eq (3), we get the WHFE
hw(x)={<0.6,0.29>,<0.7,0.4>,<0.75,0.15>,<0.8,0.16>}. |
Here, we give the calculation process of <0.6,0.29>. Since three experts, E1,E4 and E9, give their assessment values 0.6, and the weights associated with them are 0.1,0.09 and 0.1, respectively, then w=0.1+0.09+0.1=0.29, so we have <0.6,0.29>.
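Step 1 can be sketched in Python as follows; the helper name is ours, chosen for illustration. With equal expert weights the function reduces to Eq (2), while a given weight vector yields Eq (3); run on the data of Example 4 it reproduces the two WHFEs above (up to floating-point rounding).

```python
from collections import defaultdict

def whfe_from_assessments(assessments, expert_weights=None):
    """Build a WHFE from the experts' assessments.
    Equal expert weights -> Eq (2); a given weight vector -> Eq (3)."""
    if expert_weights is None:
        expert_weights = [1 / len(assessments)] * len(assessments)   # Eq (2)
    merged = defaultdict(float)
    for r, v in zip(assessments, expert_weights):
        merged[r] += v        # Eq (3): sum the weights of the experts giving the same value
    return dict(sorted(merged.items()))

assessments = [0.6, 0.7, 0.8, 0.6, 0.75, 0.7, 0.7, 0.8, 0.6, 0.7]
print(whfe_from_assessments(assessments))
# {0.6: 0.3, 0.7: 0.4, 0.75: 0.1, 0.8: 0.2}
v = [0.1, 0.12, 0.08, 0.09, 0.15, 0.08, 0.1, 0.08, 0.1, 0.1]
print(whfe_from_assessments(assessments, v))
# {0.6: 0.29, 0.7: 0.4, 0.75: 0.15, 0.8: 0.16}
```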
Step 2. Assume that ω=(ω1,ω2,⋯,ωm)T with ωj≥0 and ∑mj=1ωj=1 is the weight vector of the attributes. For every alternative Ai, we aggregate the WHFEs hwij(Cj),j=1,2,⋯,m, by applying the WHFWA, WHFWG, WHFOWA and WHFOWG operators to derive the overall aggregation value hw(Ai). Here we utilize WHFOWA operator, then
hw(Ai)=WHFOWA(hwi1(C1),hwi2(C2),⋯,hwim(Cm)). |
Step 3. Calculate the score function s(hw(Ai)) and variance function v(hw(Ai)) of hw(Ai).
Step 4. Rank the alternatives according to s(hw(Ai)) and v(hw(Ai)),i=1,2,⋯,n, and choose the best one.
To illustrate the effectiveness of our proposed approach, we present four numerical examples in this section.
Example 5. There are four software packages A1, A2, A3 and A4 to be selected, and these packages will be evaluated by three experts Ek (k=1,2,3) with respect to four attributes:
(1) Investment in new software (C1);
(2) Performance improvement (C2);
(3) Cost of transferring systems (C3);
(4) Reliability (C4). Because different attributes have different importance, the weight vector of the attributes is w=(0.3,0.25,0.25,0.2)T.
After the decision-making process is completed, we obtain the decision matrices A(k)=(r(k)ij), k=1,2,3 (see Tables 1–3), whose entries represent possible membership degrees. In the following, we give the ranking of the four software packages by applying the proposed approach.
Table 1. Assessments provided by expert E1.
 | C1 | C2 | C3 | C4 |
A1 | 0.6 | 0.8 | 0.5 | 0.6 |
A2 | 0.5 | 0.9 | 0.8 | 0.7 |
A3 | 0.7 | 0.6 | 0.7 | 0.6 |
A4 | 0.8 | 0.7 | 0.8 | 0.9 |
Table 2. Assessments provided by expert E2.
 | C1 | C2 | C3 | C4 |
A1 | 0.6 | 0.85 | 0.65 | 0.6 |
A2 | 0.7 | 0.8 | 0.8 | 0.75 |
A3 | 0.8 | 0.6 | 0.75 | 0.6 |
A4 | 0.8 | 0.7 | 0.8 | 0.85 |
Table 3. Assessments provided by expert E3.
 | C1 | C2 | C3 | C4 |
A1 | 0.8 | 0.85 | 0.65 | 0.7 |
A2 | 0.7 | 0.8 | 0.8 | 0.75 |
A3 | 0.7 | 0.8 | 0.75 | 0.9 |
A4 | 0.9 | 0.75 | 0.85 | 0.9 |
Case 1. Assume that the weights of the experts are unknown.
Step 1. We utilize Eq (2) and obtain the WHFEs hwij={<rij,wij>} under the unknown-expert-weights setting as follows:
For A1, we have
hwA1(C1)={<0.6,2/3>,<0.8,1/3>},hwA1(C2)={<0.8,1/3>,<0.85,2/3>}. |
hwA1(C3)={<0.5,1/3>,<0.65,2/3>},hwA1(C4)={<0.6,2/3>,<0.7,1/3>}. |
For A2, we have
hwA2(C1)={<0.5,1/3>,<0.7,2/3>},hwA2(C2)={<0.8,2/3>,<0.9,1/3>}. |
hwA2(C3)={<0.8,1>},hwA2(C4)={<0.7,1/3>,<0.75,2/3>}. |
For A3, we have
hwA3(C1)={<0.7,2/3>,<0.8,1/3>},hwA3(C2)={<0.6,2/3>,<0.8,1/3>}. |
hwA3(C3)={<0.7,1/3>,<0.75,2/3>},hwA3(C4)={<0.6,2/3>,<0.9,1/3>}. |
For A4, we have
hwA4(C1)={<0.8,2/3>,<0.9,1/3>},hwA4(C2)={<0.7,2/3>,<0.75,1/3>}. |
hwA4(C3)={<0.8,2/3>,<0.85,1/3>},hwA4(C4)={<0.85,1/3>,<0.9,2/3>}. |
Step 2. We apply WHFOWA operator to aggregate all of preference values <rij,wij>,j=1,2,3,4, and obtain hw(Ai),i=1,2,3,4 as follows:
hw(A1)={<0.8632,0.0494>,<0.8313,0.0247>,<0.9144,0.0988>,<0.8944,0.0494>,<0.9340,0.0988>,<0.9186,0.0494>,<0.9587,0.1975>,<0.9490,0.0988>,<0.8526,0.0247>,<0.8183,0.0123>,<0.9078,0.0494>,<0.8863,0.0247>,<0.9289,0.0494>,<0.9123,0.0247>,<0.9555,0.0988>,<0.9451,0.0494>}. |
hw(A2)={<0.9637,0.0741>,<0.9785,0.1481>,<0.9507,0.037>,<0.9708,0.0741>,<0.9795,0.1481>,<0.9878,0.2963>,<0.9722,0.0741>,<0.9835,0.1481>}. |
hw(A3)={<0.9116,0.0988>,<0.9244,0.0494>,<0.9476,0.1975>,<0.9552,0.0988>,<0.9048,0.0494>,<0.9186,0.0247>,<0.9435,0.0988>,<0.9517,0.0494>,<0.8846,0.0494>,<0.9014,0.0247>,<0.9316,0.0988>,<0.9415,0.0494>,<0.8757,0.0247>,<0.8937,0.0123>,<0.9263,0.0494>,<0.937,0.0247>}. |
hw(A4)={<0.9722,0.0988>,<0.9887,0.1975>,<0.9567,0.0494>,<0.9825,0.0988>,<0.9609,0.0494>,<0.9841,0.0988>,<0.9392,0.0247>,<0.9753,0.0494>,<0.9622,0.0494>,<0.9847,0.0988>,<0.9413,0.0247>,<0.9762,0.0494>,<0.9469,0.0247>,<0.9785,0.0494>,<0.9175,0.0123>,<0.9665,0.0247>}. |
Step 3. We calculate the score function and variance function of hw(Ai),i=1,2,3,4, and have s(hw(A1))=0.9256, v(hw(A1))=0.0345, s(hw(A2))=0.9790, v(hw(A2))=0.0090, s(hw(A3))=0.9307, v(hw(A3))=0.0213, s(hw(A4))=0.9747, v(hw(A4))=0.0148.
Step 4. Rank the alternatives according to s(hw(Ai)) and v(hw(Ai)),i=1,2,3,4, therefore, we obtain the ranking relation of alternatives, A2≻A4≻A3≻A1, and the best alternative is A2.
Case 2. Assume that the weights of the experts are v=(0.4,0.3,0.3)T.
Step 1. We utilize Eq (3) and obtain the WHFEs hwij={<rij,wij>} as follows:
For A1, we have
hwA1(C1)={<0.6,0.7>,<0.8,0.3>},hwA1(C2)={<0.8,0.4>,<0.85,0.6>}. |
hwA1(C3)={<0.5,0.4>,<0.65,0.6>},hwA1(C4)={<0.6,0.7>,<0.7,0.3>}. |
For A2, we have
hwA2(C1)={<0.5,0.4>,<0.7,0.6>},hwA2(C2)={<0.8,0.6>,<0.9,0.4>}. |
hwA2(C3)={<0.8,1>},hwA2(C4)={<0.7,0.4>,<0.75,0.6>}. |
For A3, we have
hwA3(C1)={<0.7,0.7>,<0.8,0.3>},hwA3(C2)={<0.6,0.7>,<0.8,0.3>}. |
hwA3(C3)={<0.7,0.4>,<0.75,0.6>},hwA3(C4)={<0.6,0.7>,<0.9,0.3>}. |
For A4, we have
hwA4(C1)={<0.8,0.7>,<0.9,0.3>},hwA4(C2)={<0.7,0.7>,<0.75,0.3>}. |
hwA4(C3)={<0.8,0.7>,<0.85,0.3>},hwA4(C4)={<0.85,0.3>,<0.9,0.7>}. |
Step 2. We apply WHFOWA operator to aggregate all of preference values <rij,wij>,j=1,2,3,4, and obtain hw(Ai),i=1,2,3,4 as follows:
hw(A1)={<0.8896,0.0784>,<0.8539,0.0336>,<0.9224,0.1176>,<0.8973,0.0504>,<0.9327,0.1176>,<0.9109,0.0504>,<0.9527,0.1764>,<0.9374,0.0756>,<0.8707,0.0336>,<0.8288,0.0144>,<0.9091,0.0504>,<0.8797,0.0216>,<0.9211,0.0504>,<0.8956,0.0216>,<0.9446,0.0756>,<0.9266,0.0324>}. |
hw(A2)={<0.9643,0.0960>,<0.9749,0.1440>,<0.9627,0.0640>,<0.9737,0.0960>,<0.9772,0.1440>,<0.9839,0.2160>,<0.9761,0.096>,<0.9832,0.1440>}. |
hw(A3)={<0.9263,0.1372>,<0.9298,0.0588>,<0.9480,0.2058>,<0.9505,0.0882>,<0.9136,0.0588>,<0.9177,0.0252>,<0.9391,0.0882>,<0.9420,0.0378>,<0.8943,0.0588>,<0.8994,0.0252>,<0.9255,0.0882>,<0.9291,0.0378>,<0.8761,0.0252>,<0.8821,0.0108>,<0.9127,0.0378>,<0.9169,0.0162>}. |
hw(A4)={<0.9744,0.1029>,<0.9910,0.2401>,<0.9553,0.0441>,<0.9842,0.1029>,<0.9608,0.0441>,<0.9862,0.1029>,<0.9315,0.0189>,<0.9758,0.0441>,<0.9604,0.0441>,<0.9860,0.1029>,<0.9309,0.0189>,<0.9756,0.0441>,<0.9393,0.0189>,<0.9786,0.0441>,<0.8941,0.0081>,<0.9627,0.0189>}. |
Step 3. We calculate the score function and variance function of hw(Ai),i=1,2,3,4, and have s(hw(A1))=0.9199, v(hw(A1))=0.0280, s(hw(A2))=0.9766, v(hw(A2))=0.0068, s(hw(A3))=0.9290, v(hw(A3))=0.0187, s(hw(A4))=0.9769, v(hw(A4))=0.0168.
Step 4. Rank the alternatives according to s(hw(Ai)) and v(hw(Ai)),i=1,2,3,4, therefore, we obtain the ranking of alternatives, A4≻A2≻A3≻A1, and the best alternative is A4.
Similarly, we can use WHFOWG operator to aggregate weighted hesitant fuzzy information and have the following conclusions.
Case 1. Assume that the weights of the experts are unknown.
We calculate the score function and variance function of hw(Ai), i=1,2,3,4, and have s(hw(A1))=0.4190, v(hw(A1))=0.0725, s(hw(A2))=0.4795, v(hw(A2))=0.0311, s(hw(A3))=0.4409, v(hw(A3))=0.0964, s(hw(A4))=0.6196, v(hw(A4))=0.0613, and obtain the ranking of alternatives, A4≻A2≻A3≻A1; the best alternative is A4.
Case 2. Assume that the weights of the experts are v=(0.4,0.3,0.3)T.
We calculate the score function and variance function of hw(Ai),i=1,2,3,4, and have s(hw(A1))=0.4078, v(hw(A1))=0.0758, s(hw(A2))=0.4874, v(hw(A2))=0.0277, s(hw(A3))=0.4227, v(hw(A3))=0.0976, s(hw(A4))=0.6042, v(hw(A4))=0.0664, and obtain the ranking of alternatives, A4≻A2≻A3≻A1, and the best alternative is A4.
By combining the calculation results of both WHFOWA and WHFOWG operators, we find that the best alternative is A4.
In addition, we make a comparative analysis between our algorithm and the one based on the classical hesitant fuzzy set. For the data in Example 5, we retain the possible membership degrees and discard the weight information; the resulting HFEs are listed below:
For A1, we have
hA1(C1)={0.6,0.8},hA1(C2)={0.8,0.85},hA1(C3)={0.5,0.65},hA1(C4)={0.6,0.7}. |
For A2, we have
hA2(C1)={0.5,0.7},hA2(C2)={0.8,0.9},hA2(C3)={0.8},hA2(C4)={0.7,0.75}. |
For A3, we have
hA3(C1)={0.7,0.8},hA3(C2)={0.6,0.8},hA3(C3)={0.7,0.75},hA3(C4)={0.6,0.9}. |
For A4, we have
hA4(C1)={0.8,0.9},hA4(C2)={0.7,0.75},hA4(C3)={0.8,0.85},hA4(C4)={0.85,0.9}. |
We apply the HFOWA operator to aggregate the original values γij, j=1,2,3,4, and the results h(Ai), i=1,2,3,4, are as follows:
hA1={0.984,0.988,0.9888,0.9916,0.988,0.991,0.9916,0.9937,0.992,0.994,0.9944,0.9958,0.994,0.9955,0.9958,0.9969}. |
hA2={0.994,0.995,0.997,0.9975,0.9964,0.997,0.9982,0.9985}. |
hA3={0.9856,0.9964,0.988,0.997,0.9928,0.9982,0.994,0.9985,0.9904,0.9976,0.992,0.998,0.9952,0.9988,0.996,0.999}. |
hA4={0.9982,0.9988,0.9987,0.9991,0.9985,0.999,0.9989,0.9992,0.9991,0.9994,0.9993,0.9996,0.9992,0.9995,0.9994,0.9996}. |
Then we calculate the score function and the variance function of h(Ai),i=1,2,3,4, and the results are listed as follows:
s(h(A1))=0.9922,v(h(A1))=0.0137,s(h(A2))=0.9967,v(h(A2))=0.0041. |
s(h(A3))=0.9948,v(h(A3))=0.0158,s(h(A4))=0.9990,v(h(A4))=0.0016. |
Therefore, we obtain the ranking relation of alternatives, A4≻A2≻A3≻A1, and the best alternative is A4.
Remark 7. By analyzing the results for WHFEs and HFEs in Case 1 and Case 2, we find that the ranking result based on WHFEs agrees with the one based on HFEs even though the expert weights and the WHFEs are different; this shows that the weighted hesitant fuzzy element has higher sensitivity.
Example 6. [40] Assume that a factory intends to choose a new site for new buildings. There are three possible alternatives Yi (i=1,2,3) to be considered under three attributes: (1) G1 (Price); (2) G2 (Location); (3) G3 (Environment); the weight vector of the three attributes is w=(0.3,0.2,0.5)T. The related data are described as weighted hesitant fuzzy elements, hwij=⋃γij∈hwij{<γij,wγij>}, where γij represents a possible membership degree, namely the satisfaction degree of the alternative Yi with respect to the attribute Gj, and wγij is the weight of γij. The original data are listed in Table 4.
Table 4. Weighted hesitant fuzzy decision matrix of Example 6.
 | G1 | G2 | G3 |
Y1 | {(0.6,0.3),(0.5,0.3),(0.4,0.4)} | {(0.6,0.8),(0.4,0.2)} | {(0.5,0.3),(0.3,0.7)} |
Y2 | {(0.4,0.6),(0.3,0.4)} | {(0.8,1)} | {(0.4,0.2),(0.3,0.3),(0.2,0.5)} |
Y3 | {(0.8,1)} | {(0.7,0.1),(0.6,0.3),(0.5,0.6)} | {(0.2,0.5),(0.1,0.5)} |
Then according to the related steps, we have the following calculation results.
Step 1. Obtain the original data under the weighted hesitant fuzzy environment; see Table 4.
Step 2. Apply WHFOWA operator to aggregate all WHFEs, and obtain hw(Yi),i=1,2,3 in the following:
hw(Y1)={<0.7035,0.072>,<0.7157,0.1680>,<0.4429,0.018>,<0.4657,0.0420>,<0.6830,0.072>,<0.6960,0.1680>,<0.4043,0.018>,<0.4287,0.0420>,<0.6819,0.096>,<0.6949,0.224>,<0.4022,0.024>,<0.4266,0.056>}. |
hw(Y2)={<0.8671,0.12>,<0.8677,0.18>,<0.8683,0.3>,<0.8434,0.08>,<0.8442,0.12>,<0.8449,0.2>}. |
hw(Y3)={<0.8414,0.05>,<0.8318,0.05>,<0.8641,0.15>,<0.8559,0.15>,<0.8820,0.3>,<0.8748,0.3>}. |
Step 3. Calculate the score function and variance function; then we have s(hw(Y1))=0.6445, v(hw(Y1))=0.1072, s(hw(Y2))=0.8585, v(hw(Y2))=0.0115, s(hw(Y3))=0.8687, v(hw(Y3))=0.0140.
Step 4. Rank the alternatives based on the score function and the variance function of every alternative; then we have the ranking relation Y3≻Y2≻Y1.
Remark 8. The ranking result based on the WHFOWA operator is the same as that of WHFHWA operator in Zhang and Wu [40].
Remark 9. Numerical examples show that our proposed algorithm has good order preservation.
Example 7. In this example, we adopt the data from [5], assign equal weights to the values of each HFE to construct the weighted hesitant fuzzy elements, list the data in Table 5, and make a comparison.
Table 5. Weighted hesitant fuzzy data constructed from [5] with equal weights.
 | G1 | G2 |
Y1 | {(0.2,1/3),(0.4,1/3),(0.7,1/3)} | {(0.2,1/3),(0.6,1/3),(0.8,1/3)} |
Y2 | {(0.2,1/4),(0.4,1/4),(0.7,1/4),(0.9,1/4)} | {(0.1,1/4),(0.2,1/4),(0.4,1/4),(0.5,1/4)} |
Y3 | {(0.3,1/4),(0.5,1/4),(0.6,1/4),(0.7,1/4)} | {(0.2,1/4),(0.4,1/4),(0.5,1/4),(0.6,1/4)} |
Y4 | {(0.3,1/3),(0.5,1/3),(0.6,1/3)} | {(0.2,1/2),(0.4,1/2)} |
 | G3 | G4 |
Y1 | {(0.2,1/5),(0.3,1/5),(0.6,1/5),(0.7,1/5),(0.9,1/5)} | {(0.3,1/5),(0.4,1/5),(0.5,1/5),(0.7,1/5),(0.8,1/5)} |
Y2 | {(0.3,1/4),(0.4,1/4),(0.6,1/4),(0.9,1/4)} | {(0.5,1/4),(0.6,1/4),(0.5,1/4),(0.9,1/4)} |
Y3 | {(0.3,1/4),(0.5,1/4),(0.7,1/4),(0.8,1/4)} | {(0.2,1/4),(0.5,1/4),(0.6,1/4),(0.7,1/4)} |
Y4 | {(0.5,1/3),(0.6,1/3),(0.7,1/3)} | {(0.8,1/2),(0.9,1/2)} |
After computing the score function of the aggregation results, we have:
s(hw(Y1))=0.574, s(hw(Y2))=0.5978, s(hw(Y3))=0.5321, s(hw(Y4))=0.8116.
The ranking of the alternatives is Y4≻Y2≻Y1≻Y3. Comparing with the result in [5], we find the same ranking order, but the resulting values provide more accurate information about the alternatives.
Example 8. In this example, we choose the data from [14], assign equal weights to the values of each HFE to construct the weighted hesitant fuzzy elements, list the data in Table 6, and make a comparison.
Table 6. Weighted hesitant fuzzy data constructed from [14] with equal weights.
 | P1 | P2 |
A1 | {(0.5,1/3),(0.4,1/3),(0.3,1/3)} | {(0.9,1/4),(0.8,1/4),(0.7,1/4),(0.1,1/4)} |
A2 | {(0.5,1/2),(0.3,1/2)} | {(0.9,1/4),(0.7,1/4),(0.6,1/4),(0.2,1/4)} |
A3 | {(0.7,1/2),(0.6,1/2)} | {(0.9,1/2),(0.6,1/2)} |
A4 | {(0.8,1/4),(0.7,1/4),(0.4,1/4),(0.3,1/4)} | {(0.7,1/3),(0.4,1/3),(0.2,1/3)} |
A5 | {(0.9,1/5),(0.7,1/5),(0.6,1/5),(0.3,1/5),(0.1,1/5)} | {(0.8,1/4),(0.7,1/4),(0.6,1/4),(0.4,1/4)} |
 | P3 | P4 |
A1 | {(0.5,1/3),(0.4,1/3),(0.2,1/3)} | {(0.9,1/4),(0.6,1/4),(0.5,1/4),(0.3,1/4)} |
A2 | {(0.8,1/4),(0.6,1/4),(0.5,1/4),(0.1,1/4)} | {(0.7,1/3),(0.3,1/3),(0.4,1/3)} |
A3 | {(0.7,1/3),(0.5,1/3),(0.3,1/3)} | {(0.6,1/2),(0.4,1/2)} |
A4 | {(0.8,1/2),(0.1,1/2)} | {(0.9,1/3),(0.8,1/3),(0.6,1/3)} |
A5 | {(0.9,1/3),(0.8,1/3),(0.7,1/3)} | {(0.9,1/4),(0.7,1/4),(0.6,1/4),(0.3,1/4)} |
Suppose that the ideal alternative is A∗={1}; then we obtain the score function of the aggregation results:
s(hw(A1))=0.5875,s(hw(A2))=0.5871,s(hw(A3))=0.8456, |
s(hw(A4))=0.7292,s(hw(A5))=0.7232. |
Hence, we have the ranking order of alternatives: A3≻A4≻A5≻A1≻A2.
Comparing with the result from the generalized hybrid hesitant weighted distance in [14], A3≻A5≻A4≻A1≻A2 (λ=1), we find that the positions of A4 and A5 differ. In fact, this also reflects the difference between the two approaches.
The weighted hesitant fuzzy set (WHFS) introduced by Zhang and Wu [40] is a powerful tool for describing hesitant fuzzy information in which the membership degrees are given different weights. In this paper, we redefined the union and intersection operations of WHFEs so that they are more intuitive and logical, investigated their properties, and proposed the variance function of a WHFE and a ranking rule to compare WHFEs based on the score function and variance function. Furthermore, motivated by the idea of the OWA operator introduced by Yager [44], we proposed two aggregation operators, the WHFOWA and WHFOWG operators, and applied them to multiple-attribute group decision making. Finally, four numerical examples illustrated the effectiveness of the proposed algorithm.
In the future, we will focus on developing novel operators under the weighted hesitant fuzzy environment, including Frank operators, Bonferroni means, OWA operators and variable-weight aggregation operators, to enrich weighted hesitant fuzzy set theory and methods. Furthermore, inspired by the research in [22,23,24], we will extend our idea to q-rung orthopair fuzzy sets and propose some novel generalizations of fuzzy sets. Finally, we hope that our research will provide new ideas and new methods for multi-criteria and multiple-attribute group decision making based on weighted hesitant fuzzy information.
The authors are grateful to the anonymous reviewers and Editor-in-Chief Professor Alain Miranville for their excellent comments and valuable suggestions that help us improve this paper. This study is supported by the Joint Research Fund in Astronomy under Cooperative Agreement between the NSFC and CAS (U2031136).
We declare that we have no conflicts of interest.
[1] L. A. Zadeh, Fuzzy sets, Inf. Control, 8 (1965), 338–356. https://doi.org/10.1016/S0019-9958(65)90241-X
[2] V. Torra, Hesitant fuzzy sets, Int. J. Intell. Syst., 25 (2010), 529–539. https://doi.org/10.1002/int.20418
[3] V. Torra, Y. Narukawa, On hesitant fuzzy sets and decision, 2009 IEEE International Conference on Fuzzy Systems, 2009. https://doi.org/10.1109/FUZZY.2009.5276884
[4] B. Bedregal, R. Reiser, H. Bustince, C. Lopez-Molina, V. Torra, Aggregation functions for typical hesitant fuzzy elements and the action of automorphisms, Inf. Sci., 255 (2014), 82–99. https://doi.org/10.1016/j.ins.2013.08.024
[5] M. M. Xia, Z. S. Xu, Hesitant fuzzy information aggregation in decision making, Int. J. Approx. Reason., 52 (2011), 395–407. https://doi.org/10.1016/j.ijar.2010.09.002
[6] M. M. Xia, Z. S. Xu, N. Chen, Some hesitant fuzzy aggregation operators with their application in group decision making, Group Decis. Negot., 22 (2013), 259–279. https://doi.org/10.1007/s10726-011-9261-7
[7] G. W. Wei, Hesitant fuzzy prioritized operators and their application to multiple attribute decision making, Knowl.-Based Syst., 31 (2012), 176–182. https://doi.org/10.1016/j.knosys.2012.03.011
[8] W. Y. Zeng, D. Q. Li, Y. D. Gu, Note on the aggregation operators and ranking of hesitant interval-valued fuzzy elements, Soft Comput., 23 (2019), 8075–8083. https://doi.org/10.1007/s00500-018-3445-x
[9] Z. M. Zhang, Hesitant fuzzy power aggregation operators and their application to multiple attribute group decision making, Inf. Sci., 234 (2013), 150–181. https://doi.org/10.1016/j.ins.2013.01.002
[10] B. Zhu, Z. S. Xu, M. M. Xia, Hesitant fuzzy geometric Bonferroni means, Inf. Sci., 182 (2012), 72–85. https://doi.org/10.1016/j.ins.2012.01.048
[11] D. J. Yu, Y. Y. Wu, W. Zhou, Generalized hesitant fuzzy Bonferroni mean and its application in multi-criteria group decision making, J. Inf. Comput. Sci., 9 (2012), 267–274.
[12] D. H. Peng, T. D. Wang, C. Y. Gao, H. Wang, Continuous hesitant fuzzy aggregation operators and their application to decision making under interval-valued hesitant fuzzy setting, Sci. World J., 2014 (2014), 897304. https://doi.org/10.1155/2014/897304
[13] Z. S. Xu, J. Chen, An overview of distance and similarity measures of intuitionistic fuzzy sets, Int. J. Uncertain. Fuzz. Knowl.-Based Syst., 16 (2008), 529–555. https://doi.org/10.1142/S0218488508005406
[14] Z. S. Xu, M. M. Xia, Distance and similarity measures for hesitant fuzzy sets, Inf. Sci., 181 (2011), 2128–2138. https://doi.org/10.1016/j.ins.2011.01.028
[15] Z. S. Xu, M. M. Xia, On distance and correlation measures of hesitant fuzzy information, Int. J. Intell. Syst., 26 (2011), 410–425. https://doi.org/10.1002/int.20474
[16] D. H. Peng, C. Y. Gao, Z. F. Gao, Generalized hesitant fuzzy synergetic weighted distance measures and their application to multiple criteria decision-making, Appl. Math. Model., 37 (2013), 5837–5850. https://doi.org/10.1016/j.apm.2012.11.016
[17] D. Q. Li, W. Y. Zeng, J. H. Li, New distance and similarity measures on hesitant fuzzy sets and their applications in multiple criteria decision making, Eng. Appl. Artif. Intel., 40 (2015), 11–16. https://doi.org/10.1016/j.engappai.2014.12.012
[18] D. Q. Li, W. Y. Zeng, Y. B. Zhao, Note on distance measure of hesitant fuzzy sets, Inf. Sci., 321 (2015), 103–15. https://doi.org/10.1016/j.ins.2015.03.076
[19] N. Chen, Z. S. Xu, M. M. Xia, Correlation coefficients of hesitant fuzzy sets and their applications to clustering analysis, Appl. Math. Model., 37 (2013), 2197–2211. https://doi.org/10.1016/j.apm.2012.04.031
[20] B. Farhadinia, Information measures for hesitant fuzzy sets and interval-valued hesitant fuzzy sets, Inf. Sci., 240 (2013), 129–144. https://doi.org/10.1016/j.ins.2013.03.034
[21] W. Y. Zeng, D. Q. Li, Q. Yin, Distance and similarity measures of hesitant fuzzy sets and their application in pattern recognition, Pattern Recogn. Lett., 84 (2016), 267–271. https://doi.org/10.1016/j.patrec.2016.11.001
[22] M. J. Khan, P. Kumam, N. A. Alreshidi, W. Kumam, Improved cosine and cotangent function-based similarity measures for q-rung orthopair fuzzy sets and TOPSIS method, Complex Intell. Syst., 7 (2021), 2679–2696. https://doi.org/10.1007/s40747-021-00425-7
[23] M. J. Khan, P. Kumam, M. Shutaywi, Knowledge measure for the q-rung orthopair fuzzy sets, Int. J. Intell. Syst., 36 (2020), 628–655. https://doi.org/10.1002/int.22313
[24] M. Riaz, A. Habib, M. J. Khan, P. Kumam, Correlation coefficients for cubic bipolar fuzzy sets with applications to pattern recognition and clustering analysis, IEEE Access, 9 (2021), 109053–109066. https://doi.org/10.1109/ACCESS.2021.3098504
[25] W. Y. Zeng, D. Q. Li, Q. Yin, Weighted interval-valued hesitant fuzzy sets and its application in group decision making, Int. J. Fuzzy Syst., 21 (2019), 421–432. https://doi.org/10.1007/s40815-018-00599-2
[26] W. Y. Zeng, D. Q. Li, Q. Yin, Weighted hesitant fuzzy linguistic term sets and its application in group decision making, J. Intell. Fuzzy Syst., 37 (2019), 1099–1112. https://doi.org/10.3233/JIFS-182558
[27] B. Zhu, Z. S. Xu, M. M. Xia, Dual hesitant fuzzy sets, J. Appl. Math., 2012 (2012), 879629. https://doi.org/10.1155/2012/879629
[28] N. Chen, Z. S. Xu, M. M. Xia, Interval-valued hesitant preference relations and their applications to group decision making, Knowl.-Based Syst., 37 (2013), 528–540. https://doi.org/10.1016/j.knosys.2012.09.009
[29] G. W. Wei, X. F. Zhao, R. Lin, Some hesitant interval-valued fuzzy aggregation operators and their applications to multiple attribute decision making, Knowl.-Based Syst., 31 (2012), 176–182. https://doi.org/10.1016/j.knosys.2013.03.004
[30] R. M. Rodríguez, L. Martínez, F. Herrera, Hesitant fuzzy linguistic term sets for decision making, IEEE T. Fuzzy Syst., 20 (2012), 109–119. https://doi.org/10.1109/TFUZZ.2011.2170076
[31] R. M. Rodríguez, L. Martínez, F. Herrera, A group decision making model dealing with comparative linguistic expressions based on hesitant fuzzy linguistic term sets, Inf. Sci., 241 (2013), 28–42. https://doi.org/10.1016/j.ins.2013.04.006
[32] C. P. Wei, N. Zhao, X. J. Tang, Operators and comparisons of hesitant fuzzy linguistic term sets, IEEE T. Fuzzy Syst., 22 (2014), 575–585. https://doi.org/10.1109/TFUZZ.2013.2269144
[33] B. Zhu, Z. S. Xu, Consistency measures for hesitant fuzzy linguistic preference relations, IEEE T. Fuzzy Syst., 22 (2014), 35–45. https://doi.org/10.1109/TFUZZ.2013.2245136
[34] H. C. Liao, Z. S. Xu, M. M. Xia, Multiplicative consistency of hesitant fuzzy preference relation and its application in group decision making, Int. J. Inf. Technol. Decis. Making, 13 (2014), 47–76. https://doi.org/10.1142/S0219622014500035
[35] H. C. Liao, Z. S. Xu, X. J. Zeng, Novel correlation coefficients between hesitant fuzzy sets and their application in decision making, Knowl.-Based Syst., 82 (2015), 115–127. https://doi.org/10.1016/j.knosys.2015.02.020
[36] S. C. Onar, B. Oztaysi, C. Kahraman, Strategic decision selection using hesitant fuzzy TOPSIS and interval type-2 fuzzy AHP: A case study, Int. J. Comput. Intell. Syst., 7 (2014), 1002–1021. https://doi.org/10.1080/18756891.2014.964011
[37] Z. S. Xu, X. L. Zhang, Hesitant fuzzy multi-attribute decision making based on TOPSIS with incomplete weight information, Knowl.-Based Syst., 52 (2013), 53–64. https://doi.org/10.1016/j.knosys.2013.05.011
[38] N. Zhang, G. W. Wei, Extension of VIKOR method for decision making problem based on hesitant fuzzy set, Appl. Math. Model., 37 (2013), 4938–4947. https://doi.org/10.1016/j.apm.2012.10.002
[39] G. Qian, H. Wang, X. Feng, Generalized hesitant fuzzy sets and their application in decision support system, Knowl.-Based Syst., 37 (2013), 357–365. https://doi.org/10.1016/j.knosys.2012.08.019
[40] Z. M. Zhang, C. Wu, Weighted hesitant fuzzy sets and their application to multi-criteria decision making, British J. Math. Comput. Sci., 4 (2014), 1091–1123. https://doi.org/10.9734/BJMCS/2014/8533
[41] B. Zhu, Z. S. Xu, Probability-hesitant fuzzy sets and the representation of preference relations, Technol. Econ. Dev. Eco., 24 (2016), 1029–1040. https://doi.org/10.3846/20294913.2016.1266529
[42] B. Farhadinia, A novel method of ranking hesitant fuzzy values for multiple attribute decision making problems, Int. J. Intell. Syst., 28 (2013), 752–767. https://doi.org/10.1002/int.21600
[43] R. M. Rodríguez, L. Martínez, V. Torra, Z. S. Xu, F. Herrera, Hesitant fuzzy sets: State of the art and future directions, Int. J. Intell. Syst., 29 (2014), 495–524. https://doi.org/10.1002/int.21654
[44] R. R. Yager, On ordered weighted averaging aggregation operators in multi-criteria decision making, IEEE T. Syst. Man Cybern., 18 (1988), 183–190. https://doi.org/10.1109/21.87068