
A genetic algorithm (GA) evolves the population of candidate solutions to a particular problem through crossover and mutation operators. Since a newly generated solution may not satisfy the constraints of the given problem, it is common to penalize infeasible solutions or to repair them to feasible solutions. For the minimum vertex cover (MVC) problem, we propose an adaptive greedy repair operator. Our repair operator first repairs an infeasible solution into a feasible one using a randomized greedy algorithm, and then removes unnecessary vertices from the feasible vertex cover, making it a minimal vertex cover. During the repair process, when adding or removing vertices, the degree of exploration and exploitation in the randomized greedy algorithm is adaptively adjusted based on the convergence level of the population. When the population lacks high-quality solutions, the operator strives to generate superior solutions greedily. However, when the solution set has enough high-quality solutions, it explores unexplored choices to break through the limitations of the existing solution set. We compared our GA with a deterministic greedy algorithm, a randomized greedy algorithm, and GAs using various repair operators. Experimental results on benchmark graphs from the Benchmarks with Hidden Optimum Solutions Library (BHOSLIB) demonstrated that the proposed repair operator improved the performance of the GA for the MVC problem.
Citation: Seung-Hyun Moon, Yourim Yoon. An adaptive greedy repair operator in a genetic algorithm for the minimum vertex cover problem[J]. AIMS Mathematics, 2025, 10(6): 13365-13392. doi: 10.3934/math.2025600
[1] | Chunfeng Suo, Yan Wang, Dan Mou . The new construction of knowledge measure on intuitionistic fuzzy sets and interval-valued intuitionistic fuzzy sets. AIMS Mathematics, 2023, 8(11): 27113-27127. doi: 10.3934/math.20231387 |
[2] | Li Li, Xin Wang . Hamming distance-based knowledge measure and entropy for interval-valued Pythagorean fuzzy sets. AIMS Mathematics, 2025, 10(4): 8707-8720. doi: 10.3934/math.2025399 |
[3] | Muhammad Riaz, Maryam Saba, Muhammad Abdullah Khokhar, Muhammad Aslam . Novel concepts of m-polar spherical fuzzy sets and new correlation measures with application to pattern recognition and medical diagnosis. AIMS Mathematics, 2021, 6(10): 11346-11379. doi: 10.3934/math.2021659 |
[4] | Li Li, Mengjing Hao . Interval-valued Pythagorean fuzzy entropy and its application to multi-criterion group decision-making. AIMS Mathematics, 2024, 9(5): 12511-12528. doi: 10.3934/math.2024612 |
[5] | T. M. Athira, Sunil Jacob John, Harish Garg . A novel entropy measure of Pythagorean fuzzy soft sets. AIMS Mathematics, 2020, 5(2): 1050-1061. doi: 10.3934/math.2020073 |
[6] | Atiqe Ur Rahman, Muhammad Saeed, Hamiden Abd El-Wahed Khalifa, Walaa Abdullah Afifi . Decision making algorithmic techniques based on aggregation operations and similarity measures of possibility intuitionistic fuzzy hypersoft sets. AIMS Mathematics, 2022, 7(3): 3866-3895. doi: 10.3934/math.2022214 |
[7] | Hu Wang . A novel bidirectional projection measures of circular intuitionistic fuzzy sets and its application to multiple attribute group decision-making problems. AIMS Mathematics, 2025, 10(5): 10283-10307. doi: 10.3934/math.2025468 |
[8] | Changlin Xu, Yaqing Wen . New measure of circular intuitionistic fuzzy sets and its application in decision making. AIMS Mathematics, 2023, 8(10): 24053-24074. doi: 10.3934/math.20231226 |
[9] | Shichao Li, Zeeshan Ali, Peide Liu . Prioritized Hamy mean operators based on Dombi t-norm and t-conorm for the complex interval-valued Atanassov-Intuitionistic fuzzy sets and their applications in strategic decision-making problems. AIMS Mathematics, 2025, 10(3): 6589-6635. doi: 10.3934/math.2025302 |
[10] | Muhammad Arshad, Muhammad Saeed, Khuram Ali Khan, Nehad Ali Shah, Wajaree Weera, Jae Dong Chung . A robust MADM-approach to recruitment-based pattern recognition by using similarity measures of interval-valued fuzzy hypersoft set. AIMS Mathematics, 2023, 8(5): 12321-12341. doi: 10.3934/math.2023620 |
Since Zadeh introduced fuzzy set theory, extensive research and extension related to fuzzy sets have been pursued. Among the various generalized forms that have been proposed, Atanassov [1] introduced the definition of intuitionistic fuzzy sets (IFSs). This was a broader concept of fuzzy sets designed to address uncertain information. It offered a more precise characterization of such information through the triad of degree of membership, degree of nonmembership, and degree of hesitation. As an extension of IFS, Atanassov [2] presented an alternative definition of interval-valued intuitionistic fuzzy sets (IVIFSs), wherein the degree of membership, nonmembership, and hesitation are represented by subintervals within the range of [0,1]. As a result, it can precisely capture the dynamic features. Furthermore, Xu [3] examined certain attributes and average operators of IVIFSs. Currently, IVIFSs have been extensively utilized in diverse areas, including quality evaluation, venture capital, and medical diagnosis [4,5,6,7], owing to their remarkable flexibility in managing uncertainty or ambiguity.
Information measures pertaining to IFSs and IVIFSs have garnered increasing attention owing to their strong performance in practical applications; interested readers can refer to [8,9,10]. Axiomatic definitions of distance measures on IFSs and IVIFSs were provided in [11,12], respectively. Furthermore, a novel distance measure for IFSs was introduced through the use of line integrals [13]. Drawing on the relationship between distance measures and similarity measures, similarity measures have also attracted considerable attention, and numerous scholars have investigated similarity measures for IVIFSs on the basis of distance measures [14,15,16,29,30,31,32]. Fuzzy entropy was initially introduced by Zadeh, and interest in the concept has grown in recent years; see [17,18,19] for further details. Zhang et al. [20] proposed an entropy on IVIFSs constructed from a distance measure and investigated the relationship between similarity measure and entropy. Che et al. [21] presented a method for constructing an entropy on IVIFSs in terms of a distance function. The cross-entropy approach, originating from the information theory of Shannon [22], was later applied to decision-making problems by Ye [23], who developed the fuzzy cross-entropy of IVIFSs by analogy with the intuitionistic fuzzy cross-entropy. In recent years, knowledge measures within the frameworks of IFSs and IVIFSs have also attracted notable interest [24,25,26]. Das et al. [27] employed knowledge measures under IFSs and IVIFSs to compute the criterion weights in decision problems, and Guo [28] considered both aspects of knowledge related to IVIFSs, i.e., information content and information clarity, to construct a model of knowledge measure and applied it to a decision-making problem under unknown weights. In addition, some scholars have extended IVIFSs further [33,34].
The structure of this paper is as follows: Section 2 delves into several definitions and axioms that will be utilized in the subsequent research outlined in this paper. Section 3 centers on the construction of similarity measures, knowledge measures, and entropy, all grounded in the axiomatic definition of the closest crisp set and the distance measure. Furthermore, it explores the intricate transformation relationships among these three constructs. Subsequently, we present a range of formulas for information measures, each utilizing distinct functions. In Section 4, we demonstrate the efficacy and versatility of our newly introduced methodologies by applying the proposed knowledge measure to tackle a decision-making problem and the similarity measure to address a pattern recognition problem. Lastly, in Section 5, we offer some concluding remarks on the contents of this paper.
In this section, we briefly introduce the fuzzy-set concepts that will be used in this paper. Throughout, X denotes the finite set X=(x1,x2,⋯,xn), and N denotes the set of natural numbers.
Definition 2.1. ( [2]) An IVIFS ˜A on a finite set X can be defined as the following form:
˜A={<x,μ˜A(x),ν˜A(x)>|x∈X}, |
where μ˜A(x)=[μL˜A(x),μU˜A(x)]⊆[0,1] and ν˜A(x)=[νL˜A(x),νU˜A(x)]⊆[0,1], which satisfies 0≤μU˜A(x)+νU˜A(x)≤1, for any x∈X. For the interval hesitation margin π˜A(x)=[πL˜A(x),πU˜A(x)]⊆[0,1], we have πL˜A(x)=1−μU˜A(x)−νU˜A(x) and πU˜A(x)=1−μL˜A(x)−νL˜A(x).
Specifically, if μL˜A(x)=μU˜A(x) and νL˜A(x)=νU˜A(x), then the IVIFS is reduced to an intuitionistic fuzzy set.
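For concreteness, the objects of Definition 2.1 can be sketched in Python (a minimal sketch; the class and field names are mine, not from the paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IVIFSElement:
    mu: tuple[float, float]   # membership interval [mu_L, mu_U]
    nu: tuple[float, float]   # nonmembership interval [nu_L, nu_U]

    def __post_init__(self):
        # Definition 2.1 requires valid subintervals of [0, 1] with mu_U + nu_U <= 1
        assert 0.0 <= self.mu[0] <= self.mu[1] <= 1.0
        assert 0.0 <= self.nu[0] <= self.nu[1] <= 1.0
        assert self.mu[1] + self.nu[1] <= 1.0

    def hesitation(self) -> tuple[float, float]:
        # pi_L = 1 - mu_U - nu_U,  pi_U = 1 - mu_L - nu_L
        return (1.0 - self.mu[1] - self.nu[1], 1.0 - self.mu[0] - self.nu[0])

a = IVIFSElement(mu=(0.1, 0.2), nu=(0.1, 0.2))
print(tuple(round(x, 6) for x in a.hesitation()))  # (0.6, 0.8)
```

When the two membership bounds coincide and the two nonmembership bounds coincide, the element reduces to an intuitionistic fuzzy value, as noted above.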
From the interval representation of the membership and nonmembership degrees of IVIFSs, it is easy to see that the geometric meaning of an IVIFS at a given point is a rectangle, as shown in Figure 1. The three vertices of the triangle are (0,0), (1,0), and (0,1), and its two legs represent the degree of membership μ˜A(x) and nonmembership ν˜A(x), so that (0,0) corresponds to the case π˜A(x)=1, while (1,0) and (0,1) represent the two crisp cases. The hypotenuse of the triangle, that is, the segment from (1,0) to (0,1), corresponds to ν˜A(x)=1−μ˜A(x). Moreover, since 0≤μ˜A(x)+ν˜A(x)≤1 for an IVIFS, none of the vertices of the rectangle in Figure 1 can extend beyond the hypotenuse of the triangle.
For convenience, let IVIFS(X) express the set of IVIFSs on X. Besides, we use P(X) to denote the set of all crisp sets defined on X.
Definition 2.2. ( [3]) For any ˜A,˜B∈IVIFS(X), the following relations can be defined:
(1) ˜A⊆˜B if μL˜A(x)≤μL˜B(x), μU˜A(x)≤μU˜B(x) and νL˜A(x)≥νL˜B(x), νU˜A(x)≥νU˜B(x) for any x∈X;
(2) ˜A=˜B if ˜A⊆˜B and ˜A⊇˜B;
(3) The complement set ˜Ac={⟨x,ν˜A(x),μ˜A(x)⟩|x∈X}.
Definition 2.3. ( [12]) For any ˜A,˜B,˜C∈IVIFS(X), a mapping D:IVIFS(X)×IVIFS(X)→[0,1] is called a distance measure on IVIFSs, and it satisfies the following properties:
(D1)0≤D(˜A,˜B)≤1;
(D2)D(˜A,˜B)=0 if ˜A=˜B;
(D3)D(˜A,˜B)=D(˜B,˜A);
(D4)D(˜A,˜C)≤D(˜A,˜B)+D(˜B,˜C);
(D5)If˜A⊆˜B⊆˜C, then D(˜A,˜C)≥max{D(˜A,˜B),D(˜B,˜C)}.
Consider ˜A,˜B∈IVIFS(X), where ˜A={⟨x,[μL˜A(x),μU˜A(x)],[νL˜A(x),νU˜A(x)]⟩|x∈X}, ˜B={⟨x,[μL˜B(x),μU˜B(x)],[νL˜B(x),νU˜B(x)]⟩|x∈X}.
For convenience, we denote:
|μL˜A(xi)−μL˜B(xi)|=αi,|μU˜A(xi)−μU˜B(xi)|=βi,|νL˜A(xi)−νL˜B(xi)|=γi,|νU˜A(xi)−νU˜B(xi)|=δi. |
For arbitrary p, the distance measure between ˜A and ˜B is presented in the following form:
Dp(˜A,˜B)=((1/(qn))∑ni=1(αi^p∗βi^p⋆γi^p∗δi^p))^(1/p), | (2.1) |
where the parameter q is determined by the two operations "∗" and "⋆".
When p assumes different values, various distance measures can be obtained, as demonstrated in Table 1.
Operations | q | p | Name | Distance measure |
∗=+, ⋆=+ | q=4 | p=1 | Hamming | DH(˜A,˜B)=(1/(4n))∑ni=1(αi+βi+γi+δi) |
∗=+, ⋆=+ | q=4 | p=2 | Euclidean | DE(˜A,˜B)=((1/(4n))∑ni=1(αi^2+βi^2+γi^2+δi^2))^(1/2) |
∗=+, ⋆=+ | q=4 | p=3 | Minkowski | DM(˜A,˜B)=((1/(4n))∑ni=1(αi^3+βi^3+γi^3+δi^3))^(1/3) |
∗=∨, ⋆=+ | q=2 | p=1 | Hamming-Hausdorff | DHH(˜A,˜B)=(1/(2n))∑ni=1(αi∨βi+γi∨δi) |
∗=∨, ⋆=∨ | q=1 | p=1 | Hausdorff | Dh(˜A,˜B)=(1/n)∑ni=1(αi∨βi∨γi∨δi) |
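The Table 1 family can be sketched as follows (a sketch; function names are mine). Each IVIFS is a list of elements ((μL, μU), (νL, νU)), and the four componentwise absolute differences are the αi, βi, γi, δi defined above:

```python
def _diffs(a, b):
    # alpha, beta, gamma, delta for one pair of elements
    (mAL, mAU), (nAL, nAU) = a
    (mBL, mBU), (nBL, nBU) = b
    return abs(mAL - mBL), abs(mAU - mBU), abs(nAL - nBL), abs(nAU - nBU)

def d_hamming(A, B):      # p=1, q=4
    return sum(sum(_diffs(a, b)) for a, b in zip(A, B)) / (4 * len(A))

def d_euclidean(A, B):    # p=2, q=4
    return (sum(sum(x ** 2 for x in _diffs(a, b)) for a, b in zip(A, B)) / (4 * len(A))) ** 0.5

def d_minkowski(A, B):    # p=3, q=4
    return (sum(sum(x ** 3 for x in _diffs(a, b)) for a, b in zip(A, B)) / (4 * len(A))) ** (1 / 3)

def d_hamming_hausdorff(A, B):  # (alpha v beta) + (gamma v delta), q=2
    return sum(max(al, be) + max(ga, de)
               for al, be, ga, de in (_diffs(a, b) for a, b in zip(A, B))) / (2 * len(A))

def d_hausdorff(A, B):    # maximum of all four differences, q=1
    return sum(max(_diffs(a, b)) for a, b in zip(A, B)) / len(A)

A = [((0.1, 0.2), (0.1, 0.2))]
B = [((1.0, 1.0), (0.0, 0.0))]          # a crisp element
print(round(d_hamming(A, B), 6))        # (0.9 + 0.8 + 0.1 + 0.2) / 4 = 0.5
```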
Definition 2.4. ( [28]) Let ˜A,˜B∈IVIFS(X), and a mapping K:IVIFS(X)→[0,1] is called a knowledge measure on IVIFS(X) if K satisfies the following conditions:
(K1)K(˜A)=1 if ˜A∈P(X);
(K2)K(˜A)=0 if π˜A(x)=[1,1] for any x∈X;
(K3)K(˜A)≥K(˜B) if ˜A is less fuzzy than ˜B, i.e., ˜A⊆˜B for μL˜B(x)≤νL˜B(x) and μU˜B(x)≤νU˜B(x) or ˜A⊇˜B for μL˜B(x)≥νL˜B(x) and μU˜B(x)≥νU˜B(x), for any x∈X;
(K4)K(˜A)=K(˜Ac).
Definition 2.5. ( [18]) Assume that the entropy is a function E:IVIFS(X)→[0,1] which satisfies the following four conditions:
(E1)E(˜A)=0 if ˜A∈P(X);
(E2)E(˜A)=1 if μ˜A(x)=ν˜A(x) for any x∈X;
(E3)E(˜A)≤E(˜B) if ˜A is less fuzzy than ˜B, i.e., μ˜A(x)≤μ˜B(x) and ν˜A(x)≥ν˜B(x) for μ˜B(x)≤ν˜B(x) or μ˜A(x)≥μ˜B(x) and ν˜A(x)≤ν˜B(x) for μ˜B(x)≥ν˜B(x);
(E4)E(˜A)=E(˜Ac).
Furthermore, this paper introduces the definition of the closest crisp set within the realm of IVIFSs and derives several equations pertaining to similarity measure, knowledge measure, and entropy, all grounded in the distance measure. Additionally, it delves into the methodology for converting between these measures.
The degree of resemblance between two comparable objects is frequently characterized by a similarity measure. In this section, we introduce a novel formulation of similarity measure for IVIFSs that relies on the distance measure and the closest crisp set, denoted by C˜A.
To this end, inspired by the concept of the closest crisp set considering intuitionistic fuzzy sets in [17], the notion of closest crisp set for IVIFSs is introduced.
Definition 3.1. Let ˜A∈IVIFS(X), and the closest crisp set to ˜A, which is denoted by C˜A, satisfies the following condition: If μ˜A(x)≥ν˜A(x), then x∈C˜A; if μ˜A(x)≱ν˜A(x), then x∉C˜A.
Since any crisp set is an IVIFS with zero interval hesitation index, the closest crisp set of ˜A can be expressed as:
μLC˜A(x)=μUC˜A(x)= 1 if μL˜A(x)≥νL˜A(x) and μU˜A(x)≥νU˜A(x), and 0 otherwise, | (3.1) |
νLC˜A(x)=νUC˜A(x)= 0 if μL˜A(x)≥νL˜A(x) and μU˜A(x)≥νU˜A(x), and 1 otherwise. | (3.2) |
From Figure 2, it can be seen that in the left panel, μL˜A(x)>νL˜A(x),μU˜A(x)>νU˜A(x), so that there is x∈C˜A, or μLC˜A(x)=μUC˜A(x)=1, νLC˜A(x)=νUC˜A(x)=0. The opposite is true for the right panel, μL˜A(x)<νL˜A(x),μU˜A(x)<νU˜A(x); thus, there is x∉C˜A, or μLC˜A(x)=μUC˜A(x)=0, νLC˜A(x)=νUC˜A(x)=1.
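The membership rule of Eqs (3.1) and (3.2) can be sketched directly (a sketch; the function name is mine): an element belongs to the closest crisp set exactly when both bounds of its membership interval dominate the corresponding bounds of its nonmembership interval.

```python
def closest_crisp_set(A):
    # Definition 3.1 / Eqs (3.1)-(3.2): map each element ((mu_L, mu_U), (nu_L, nu_U))
    # to the crisp element ([1,1],[0,0]) if membership dominates, else ([0,0],[1,1]).
    return [((1.0, 1.0), (0.0, 0.0)) if muL >= nuL and muU >= nuU
            else ((0.0, 0.0), (1.0, 1.0))
            for (muL, muU), (nuL, nuU) in A]

A = [((0.5, 0.6), (0.2, 0.3)),   # membership dominates     -> x in C_A
     ((0.2, 0.3), (0.5, 0.6))]   # nonmembership dominates  -> x not in C_A
print(closest_crisp_set(A))
```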
Next, a corollary is given below that presents practical properties of the closest crisp set.
Corollary 3.2. Assume that ˜A∈IVIFS(X), and let C˜A be its closest crisp set. It has the following properties:
(1) ˜A∈P(X) if, and only if, ˜A=C˜A.
(2) If μL˜A(x)≠νL˜A(x),μU˜A(x)≠νU˜A(x), then Cc˜A(x)=C˜Ac(x) for any x∈X, where C˜Ac expresses the closest crisp set to Ac.
Corollary 3.2(2) can be understood more intuitively from Figure 3: when μL˜A(x)≥νL˜A(x) and μU˜A(x)≥νU˜A(x), the rectangles designated by ˜A(x) and ˜Ac(x) occupy the positions depicted in the figure, and C˜A(x) and C˜Ac(x) occupy opposing corners, which establishes the equality Cc˜A(x)=C˜Ac(x). Note that Corollary 3.2(2) requires μL˜A(x)≠νL˜A(x) and μU˜A(x)≠νU˜A(x): if μL˜A(x)=νL˜A(x) and μU˜A(x)=νU˜A(x), then, as shown in Figure 4, ˜A(x) and ˜Ac(x) coincide and their closest crisp sets are the same.
Based on the proposed closest crisp set, information measure functions are explored below.
Definition 3.3. A function f:[0,1]2→[0,1] is called a binary aggregation function if, for any x,y∈[0,1], it satisfies the following conditions:
(1) f is component-wise increasing;
(2) f(0,0)=0;
(3) f(1,1)=1;
(4) f(x,y)=f(y,x).
Definition 3.4. ( [9]) Let ˜A,˜B∈IVIFS(X), and a mapping S: IVIFS(X)×IVIFS(X)→[0,1] is called a similarity measure on IVIFSs if S satisfies the following properties:
(S1) Boundary: 0≤S(˜A,˜B)≤1;
(S2) Symmetry: S(˜A,˜B)=S(˜B,˜A);
(S3) Reflexivity: S(˜A,˜B)=1 iff ˜A=˜B;
(S4) Complementarity: S(˜A,˜Ac)=0 iff ˜A∈P(X).
Theorem 3.5. Let f:[0,1]×[0,1]→[0,1] be a binary aggregation function and D be an aforementioned distance in Definition 2.3 for IVIFSs. Then, for any ˜A,˜B∈IVIFS(X),
S(˜A,˜B)=f(D(˜A,C˜A),D(˜B,C˜B)) |
is a similarity measure for IVIFSs.
Proof. In the following, we prove that the proposed formula satisfies the four conditions in Definition 3.4.
(S1): It is clearly established.
(S2): By the definition of function f, it can be clearly deduced that
S(˜A,˜B)=f(D(˜A,C˜A),D(˜B,C˜B))=f(D(˜B,C˜B),D(˜A,C˜A))=S(˜B,˜A). |
(S3): If S(˜A,˜B)=1, according to the definition of f and the distance measure, we can get
f(D(˜A,C˜A),D(˜B,C˜B))=1⇔D(˜A,C˜A)=D(˜B,C˜B)=1, |
thus, ˜A=˜B.
(S4): If S(˜A,˜Ac)=0, based on the definition of f and the distance measure,
f(D(˜A,C˜A),D(˜Ac,C˜Ac))=0⇔D(˜A,C˜A)=D(˜Ac,C˜Ac)=0 |
can be deduced. Then, ˜A∈P(X).
With regard to Theorem 3.5, diverse formulations of similarity measures on IVIFSs can be devised by leveraging various binary aggregation functions f:[0,1]×[0,1]→[0,1]. By incorporating distinct functions f and distance measures D, a range of similarity measures can be derived, as exemplified in Table 2.
Distance measures | Conversion functions | Similarity measures |
DH(˜A,˜B) | f1(x,y)=x+y2 | SH1(˜A,˜B)=DH(˜A,C˜A)+DH(˜B,C˜B)2 |
f2(x,y)=sin(x+y)π4 | SH2(˜A,˜B)=sin(DH(˜A,C˜A)+DH(˜B,C˜B))π4 | |
f3(x,y)=k√x+y2(k>1) | SH3(˜A,˜B)=k√DH(˜A,C˜A)+DH(˜B,C˜B)2(k>1) | |
f4(x,y)=logx+y+13 | SH4(˜A,˜B)=log(DH(˜A,C˜A)+DH(˜B,C˜B))3 | |
f5(x,y)=2xy−1 | SH5(˜A,˜B)=2DH(˜A,C˜A)⋅DH(˜B,C˜B)−1 | |
DE(˜A,˜B) | f1(x,y)=x+y2 | SE1(˜A,˜B)=DE(˜A,C˜A)+DE(˜B,C˜B)2 |
f2(x,y)=sin(x+y)π4 | SE2(˜A,˜B)=sin(DE(˜A,C˜A)+DE(˜B,C˜B))π4 | |
f3(x,y)=k√x+y2(k>1) | SE3(˜A,˜B)=k√DE(˜A,C˜A)+DE(˜B,C˜B)2(k>1) | |
f4(x,y)=logx+y+13 | SE4(˜A,˜B)=log(DE(˜A,C˜A)+DE(˜B,C˜B))3 | |
f5(x,y)=2xy−1 | SE5(˜A,˜B)=2DE(˜A,C˜A)⋅DE(˜B,C˜B)−1 | |
DM(˜A,˜B) | f1(x,y)=x+y2 | SM1(˜A,˜B)=DM(˜A,C˜A)+DM(˜B,C˜B)2 |
f2(x,y)=sin(x+y)π4 | SM2(˜A,˜B)=sin(DM(˜A,C˜A)+DM(˜B,C˜B))π4 | |
f3(x,y)=k√x+y2(k>1) | SM3(˜A,˜B)=k√DM(˜A,C˜A)+DM(˜B,C˜B)2(k>1) | |
f4(x,y)=logx+y+13 | SM4(˜A,˜B)=log(DM(˜A,C˜A)+DM(˜B,C˜B))3 | |
f5(x,y)=2xy−1 | SM5(˜A,˜B)=2DM(˜A,C˜A)⋅DM(˜B,C˜B)−1 | |
DHH(˜A,˜B) | f1(x,y)=x+y2 | SHH1(˜A,˜B)=DHH(˜A,C˜A)+DHH(˜B,C˜B)2 |
f2(x,y)=sin(x+y)π4 | SHH2(˜A,˜B)=sin(DHH(˜A,C˜A)+DHH(˜B,C˜B))π4 | |
f3(x,y)=k√x+y2(k>1) | SHH3(˜A,˜B)=k√DHH(˜A,C˜A)+DHH(˜B,C˜B)2(k>1) | |
f4(x,y)=logx+y+13 | SHH4(˜A,˜B)=log(DHH(˜A,C˜A)+DHH(˜B,C˜B))3 | |
f5(x,y)=2xy−1 | SHH5(˜A,˜B)=2DHH(˜A,C˜A)⋅DHH(˜B,C˜B)−1 | |
Dh(˜A,˜B) | f1(x,y)=x+y2 | Sh1(˜A,˜B)=Dh(˜A,C˜A)+Dh(˜B,C˜B)2 |
f2(x,y)=sin(x+y)π4 | Sh2(˜A,˜B)=sin(Dh(˜A,C˜A)+Dh(˜B,C˜B))π4 | |
f3(x,y)=k√x+y2(k>1) | Sh3(˜A,˜B)=k√Dh(˜A,C˜A)+Dh(˜B,C˜B)2(k>1) | |
f4(x,y)=logx+y+13 | Sh4(˜A,˜B)=log(Dh(˜A,C˜A)+Dh(˜B,C˜B))3 | |
f5(x,y)=2xy−1 | Sh5(˜A,˜B)=2Dh(˜A,C˜A)⋅Dh(˜B,C˜B)−1 |
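As one concrete instance of Theorem 3.5, taking f1(x,y)=(x+y)/2 together with the Hamming distance yields SH1 from Table 2. A self-contained sketch (function names are mine):

```python
def hamming(A, B):
    # D_H from Table 1 on lists of elements ((mu_L, mu_U), (nu_L, nu_U))
    s = 0.0
    for ((mAL, mAU), (nAL, nAU)), ((mBL, mBU), (nBL, nBU)) in zip(A, B):
        s += abs(mAL - mBL) + abs(mAU - mBU) + abs(nAL - nBL) + abs(nAU - nBU)
    return s / (4 * len(A))

def closest_crisp(A):
    # Definition 3.1 / Eqs (3.1)-(3.2)
    return [((1.0, 1.0), (0.0, 0.0)) if muL >= nuL and muU >= nuU
            else ((0.0, 0.0), (1.0, 1.0))
            for (muL, muU), (nuL, nuU) in A]

def similarity_h1(A, B):
    # Theorem 3.5: S(A, B) = f1(D(A, C_A), D(B, C_B)) with f1(x, y) = (x + y) / 2
    return (hamming(A, closest_crisp(A)) + hamming(B, closest_crisp(B))) / 2

A = [((0.5, 0.6), (0.2, 0.3))]
print(round(similarity_h1(A, A), 6))  # 0.35
```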
The knowledge content of IVIFSs is typically characterized through knowledge measures, with entropy serving as a pivotal tool for quantifying the uncertainty associated with fuzzy variables. In this section, we establish an approach that extends the foregoing construction of similarity measures to devise knowledge measures and entropy for IVIFSs.
Theorem 3.6. Let f be a function defined in Definition 3.3 and D be an aforementioned distance in Definition 2.3 for IVIFSs. For any ˜A∈IVIFS(X),
K(˜A)=1−f(D(˜A,C˜A),D(~Ac,C˜Ac)) |
is a knowledge measure on IVIFS(X).
Proof. In the following we will show that the knowledge measure we construct satisfies the four conditions in Definition 2.4.
(KD1): Let K(˜A)=1. Then, by the definition of the function f, D(˜A,C˜A)=D(~Ac,C˜Ac)=0. Therefore, by Definition 2.3 and Corollary 3.2(1), it follows that ˜A∈P(X).
(KD2): It is not difficult to see that if K(˜A)=0, then D(˜A,C˜A)=D(~Ac,C˜Ac)=1, and according to the definition of the distance D,
μL˜A(x)=μU˜A(x)=νL˜A(x)=νU˜A(x)=0, |
can be obtained. Thus, π˜A(x)=[1,1].
(KD3): Assume that ˜A⊆˜B for μL˜B(x)≤νL˜B(x) and μU˜B(x)≤νU˜B(x). It is certain that
μL˜A(x)≤μL˜B(x)≤νL˜B(x)≤νL˜A(x),μU˜A(x)≤μU˜B(x)≤νU˜B(x)≤νU˜A(x), |
and from the notion of the closest crisp set, it can be derived that
μLC˜A(x)=μUC˜A(x)=μLC˜B(x)=μUC˜B(x)=0, |
νLC˜A(x)=νUC˜A(x)=νLC˜B(x)=νUC˜B(x)=1. |
Therefore, let ˜N=<[0,0],[1,1]>, and if ˜N⊆˜A⊆˜B, then
D(˜A,˜N)=D([μL˜A(x),μU˜A(x)],[νL˜A(x),νU˜A(x)],[0,0],[1,1])≤D(˜B,˜N)=D([μL˜B(x),μU˜B(x)],[νL˜B(x),νU˜B(x)],[0,0],[1,1]). |
Similarly, it is obvious that D(~Ac,C˜Ac)≤D(~Bc,C˜Bc). Then, by considering the function f to be component-wise increasing, there is
f(D(˜A,C˜A),D(~Ac,C˜Ac))≤f(D(˜B,C˜B),D(~Bc,C˜Bc)). |
Hence, it is easy to know that K(˜A)≥K(˜B). On the other hand, assume that ˜A⊇˜B for μL˜B(x)≥νL˜B(x) and μU˜B(x)≥νU˜B(x). K(˜A)≥K(˜B) can also be obtained.
(KD4): In order to verify K(˜A)=K(˜Ac), simply prove that
f(D(˜A,C˜A),D(~Ac,C˜Ac))=f(D(~Ac,C˜Ac),D(˜A,C˜A)), |
which is obvious from the definition of the function f.
Next, a series of knowledge measures can be obtained by using different distance measures and different expressions of function f. They are shown in Table 3.
Distance measures | Conversion functions | Knowledge measures |
DH(˜A,˜B) | f1(x,y)=x+y2 | KH1(˜A)=1−DH(˜A,C˜A)+DH(˜Ac,C˜Ac)2 |
f2(x,y)=sin(x+y)π4 | KH2(˜A)=1−sin(DH(˜A,C˜A)+DH(˜Ac,C˜Ac))π4 | |
f3(x,y)=k√x+y2(k>1) | KH3(˜A)=1−k√DH(˜A,C˜A)+DH(˜Ac,C˜Ac)2(k>1) | |
f4(x,y)=logx+y+13 | KH4(˜A)=1−log(DH(˜A,C˜A)+DH(˜Ac,C˜Ac))3 | |
f5(x,y)=2xy−1 | KH5(˜A)=2−2DH(˜A,C˜A)⋅DH(˜Ac,C˜Ac) | |
DE(˜A,˜B) | f1(x,y)=x+y2 | KE1(˜A)=1−DE(˜A,C˜A)+DE(˜Ac,C˜Ac)2 |
f2(x,y)=sin(x+y)π4 | KE2(˜A)=1−sin(DE(˜A,C˜A)+DE(˜Ac,C˜Ac))π4 | |
f3(x,y)=k√x+y2(k>1) | KE3(˜A)=1−k√DE(˜A,C˜A)+DE(˜Ac,C˜Ac)2(k>1) | |
f4(x,y)=logx+y+13 | KE4(˜A)=1−log(DE(˜A,C˜A)+DE(˜Ac,C˜Ac))3 | |
f5(x,y)=2xy−1 | KE5(˜A)=2−2DE(˜A,C˜A)⋅DE(˜Ac,C˜Ac) | |
DM(˜A,˜B) | f1(x,y)=x+y2 | KM1(˜A)=1−DM(˜A,C˜A)+DM(˜Ac,C˜Ac)2 |
f2(x,y)=sin(x+y)π4 | KM2(˜A)=1−sin(DM(˜A,C˜A)+DM(˜Ac,C˜Ac))π4 | |
f3(x,y)=k√x+y2(k>1) | KM3(˜A)=1−k√DM(˜A,C˜A)+DM(˜Ac,C˜Ac)2(k>1) | |
f4(x,y)=logx+y+13 | KM4(˜A)=1−log(DM(˜A,C˜A)+DM(˜Ac,C˜Ac))3 | |
f5(x,y)=2xy−1 | KM5(˜A)=2−2DM(˜A,C˜A)⋅DM(˜Ac,C˜Ac) | |
DHH(˜A,˜B) | f1(x,y)=x+y2 | KHH1(˜A)=1−DHH(˜A,C˜A)+DHH(˜Ac,C˜Ac)2 |
f2(x,y)=sin(x+y)π4 | KHH2(˜A)=1−sin(DHH(˜A,C˜A)+DHH(˜Ac,C˜Ac))π4 | |
f3(x,y)=k√x+y2(k>1) | KHH3(˜A)=1−k√DHH(˜A,C˜A)+DHH(˜Ac,C˜Ac)2(k>1) | |
f4(x,y)=logx+y+13 | KHH4(˜A)=1−log(DHH(˜A,C˜A)+DHH(˜Ac,C˜Ac))3 | |
f5(x,y)=2xy−1 | KHH5(˜A)=2−2DHH(˜A,C˜A)⋅DHH(˜Ac,C˜Ac) | |
Dh(˜A,˜B) | f1(x,y)=x+y2 | Kh1(˜A)=1−Dh(˜A,C˜A)+Dh(˜Ac,C˜Ac)2 |
f2(x,y)=sin(x+y)π4 | Kh2(˜A)=1−sin(Dh(˜A,C˜A)+Dh(˜Ac,C˜Ac))π4 | |
f3(x,y)=k√x+y2(k>1) | Kh3(˜A)=1−k√Dh(˜A,C˜A)+Dh(˜Ac,C˜Ac)2(k>1) | |
f4(x,y)=logx+y+13 | Kh4(˜A)=1−log(Dh(˜A,C˜A)+Dh(˜Ac,C˜Ac))3 | |
f5(x,y)=2xy−1 | Kh5(˜A)=2−2Dh(˜A,C˜A)⋅Dh(˜Ac,C˜Ac) |
Subsequently, entropy from the perspective of knowledge measures for IVIFSs will be explored.
Corollary 3.7. Let K be a knowledge measure induced by the aforementioned function f for IVIFSs, then for any ˜A∈IVIFS(X),
E(˜A)=1−K(˜A) |
is an entropy for IVIFSs.
Based on Corollary 3.7, it is straightforward to see that entropy and the newly devised knowledge measure are numerically complementary. Put simply, for an IVIFS, a lower entropy value corresponds to a greater amount of knowledge, provided the prescribed axioms hold.
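The pair KH1 from Table 3 and the induced entropy E=1−K of Corollary 3.7 can be sketched together (a self-contained sketch; function names are mine):

```python
def hamming(A, B):
    # D_H from Table 1 on lists of elements ((mu_L, mu_U), (nu_L, nu_U))
    s = 0.0
    for ((mAL, mAU), (nAL, nAU)), ((mBL, mBU), (nBL, nBU)) in zip(A, B):
        s += abs(mAL - mBL) + abs(mAU - mBU) + abs(nAL - nBL) + abs(nAU - nBU)
    return s / (4 * len(A))

def closest_crisp(A):
    # Definition 3.1 / Eqs (3.1)-(3.2)
    return [((1.0, 1.0), (0.0, 0.0)) if muL >= nuL and muU >= nuU
            else ((0.0, 0.0), (1.0, 1.0))
            for (muL, muU), (nuL, nuU) in A]

def complement(A):
    # the complement swaps the membership and nonmembership intervals
    return [(nu, mu) for mu, nu in A]

def knowledge_h1(A):
    # K_H1 from Table 3: K(A) = 1 - f1(D(A, C_A), D(A^c, C_{A^c}))
    Ac = complement(A)
    return 1.0 - (hamming(A, closest_crisp(A)) + hamming(Ac, closest_crisp(Ac))) / 2

def entropy_h1(A):
    # Corollary 3.7: E(A) = 1 - K(A)
    return 1.0 - knowledge_h1(A)

A = [((0.5, 0.6), (0.2, 0.3))]
print(round(knowledge_h1(A), 6), round(entropy_h1(A), 6))  # 0.65 0.35
```

A fully crisp element yields K=1 and E=0, in line with axioms (K1) and (E1).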
By carefully examining the intricate relationship between the recently introduced knowledge measure and entropy, we can effortlessly deduce certain entropies.
Upon scrutinizing the information measures previously discussed for IVIFSs, it becomes evident that they share certain linkages, which we shall subsequently elaborate in the form of corollaries.
Corollary 3.8. Let S be an aforementioned similarity measure in Theorem 3.5 for IVIFSs, and for any ˜A∈IVIFS(X),
K(˜A)=1−S(˜A,~Ac) |
is a knowledge measure for IVIFSs.
Corollary 3.9. Let S be an aforementioned similarity measure in Theorem 3.5 for IVIFSs; assuming that ˜A∈IVIFS(X),
E(˜A)=S(˜A,~Ac) |
is an entropy for IVIFSs.
Regarding the construction method and transformation relationship of information measures, there are some important issues to be further discussed below, and these discussions pave the way for the subsequent research.
(1) Drawing upon the axiomatic definition of IVIFSs, the intricate relationship among similarity measures, knowledge measures, and the entropy of IVIFSs is comprehensively formulated in general terms, accompanied by explicit expressions. Notably, several key findings are presented in the form of theorems, which underscore the existence of a profound connection between the information measures associated with IVIFSs.
(2) Through the axiomatic definition of IVIFSs, the above information measures, i.e., similarity measure, knowledge measure, and entropy, can easily be converted into one another. Specifically, the similarity measure and knowledge measure of an IVIFS are mutually convertible, and, based on the general relationship between knowledge measure and entropy, the entropy is constructed from the similarity measure, so that the larger the similarity measure S(˜A,˜Ac), the smaller the amount of knowledge contained and the larger the entropy.
(3) The result of Corollary 3.7 in this paper shows that there is a dyadic relationship between entropy and knowledge measure, which paves the way for continuing to explore whether there is a dyadic relationship between entropy and knowledge measure for fuzzy sets and their extended forms.
This section employs the recently proposed knowledge measure to tackle multi-attribute decision-making (MADM) problems under conditions of unknown weights, while utilizing the novel similarity measure to address pattern recognition issues within the framework of IVIFSs. The derived outcomes are subsequently contrasted with several established measures for comprehensive analysis.
In the complex process of MADM, where the weights of individual attributes remain elusive, this subsection introduces a methodology for determining these weights through the utilization of knowledge measures techniques.
Construct an MADM process that examines m alternatives ˜A={˜A1,˜A2,…,˜Am} using n attributes C={C1,C2,…,Cn}. Suppose that w=(w1,w2,…,wn)T is the weight vector of attributes, where 0≤wj≤1 and ∑nj=1wj=1,j=1,2,…,n.
The MADM problem of IVIFSs can be described in the matrix form below:
R=(˜aij)m×n, with rows ˜A1,˜A2,…,˜Am and columns C1,C2,…,Cn:
( ˜a11 ˜a12 ⋯ ˜a1n
  ˜a21 ˜a22 ⋯ ˜a2n
   ⋮    ⋮   ⋱   ⋮
  ˜am1 ˜am2 ⋯ ˜amn ), | (4.1) |
where ˜aij=⟨(x,[μij˜aL(x),μij˜aU(x)],[νij˜aL(x),νij˜aU(x)])|x∈X⟩(i=1,2,…,m;j=1,2,…,n) indicates the assessment of an expert corresponding to attribute Cj(j=1,2,…,n) to evaluate the alternative ˜Ai(i=1,2,…,m).
Step 1. By defining the knowledge measure of IVIFSs using KH1, the knowledge measure based decision matrix Q=(˜kij)m×n is generated from the decision matrix R above. The matrix Q=(˜kij)m×n is shown below.
Q=(˜kij)m×n, with rows ˜A1,˜A2,…,˜Am and columns C1,C2,…,Cn:
( ˜k11 ˜k12 ⋯ ˜k1n
  ˜k21 ˜k22 ⋯ ˜k2n
   ⋮    ⋮   ⋱   ⋮
  ˜km1 ˜km2 ⋯ ˜kmn ), | (4.2) |
where ˜kij is the knowledge measure corresponding to each ˜aij.
Step 2. The following formula can be used to construct the attribute weights based on the knowledge measure when the attribute weights wj are completely unknown:
wj=∑mi=1˜kij∑nj=1∑mi=1˜kij, | (4.3) |
where 0≤wj≤1 and ∑nj=1wj=1.
Step 3. In an interval-valued intuitionistic fuzzy environment, the interval-valued intuitionistic fuzzy arithmetic mean operator [3] is used to calculate the weighted aggregation value of each alternative ˜Ai after the weights of each attribute have been determined and the operator is expressed as follows:
Zi=⟨[μLi,μUi],[νLi,νUi]⟩=⟨[1−n∏j=1(1−μij˜aL(x))wj,1−n∏j=1(1−μij˜aU(x))wj],[n∏j=1(νij˜aL(x))wj,n∏j=1(νij˜aU(x))wj]⟩, | (4.4) |
where i=1,2,…,m;j=1,2,…,n. The calculated Zi is an IVIFS.
Step 4. In order to differentiate between the alternatives, the alternatives will be ranked below by calculating a score function for each alternative, with the following equation:
ϕ(Zi)=λ(μLi+μUi)/((1−λ)(νLi+νUi)), i=1,2,...,m, λ∈(0,1). | (4.5) |
Subsequently, arrange the alternatives in nonincreasing order of the score function; the larger the score function of an alternative, the better the alternative. Figure 5 describes a framework of the whole decision-making process.
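Steps 1-4 can be sketched end to end in Python (a sketch; function names are mine). It recomputes the knowledge matrix from the ratings of Table 4 rather than copying reported intermediate values, so rounded intermediates may differ marginally, but the final ranking of the worked example is reproduced:

```python
from math import prod

def k_h1(elem):
    # K_H1 of a single element via the Hamming distance to the closest crisp set
    (muL, muU), (nuL, nuU) = elem
    def d_to_crisp(mL, mU, nL, nU):
        member = mL >= nL and mU >= nU           # Definition 3.1
        cm, cn = (1.0, 0.0) if member else (0.0, 1.0)
        return (abs(mL - cm) + abs(mU - cm) + abs(nL - cn) + abs(nU - cn)) / 4
    # the complement swaps the membership and nonmembership intervals
    return 1.0 - (d_to_crisp(muL, muU, nuL, nuU) + d_to_crisp(nuL, nuU, muL, muU)) / 2

def weights(R):
    # Step 2, Eq (4.3): normalized column sums of the knowledge matrix Q
    Q = [[k_h1(e) for e in row] for row in R]
    col = [sum(row[j] for row in Q) for j in range(len(Q[0]))]
    return [c / sum(col) for c in col]

def aggregate(row, w):
    # Step 3, Eq (4.4): interval-valued intuitionistic fuzzy weighted average
    muL = 1.0 - prod((1.0 - e[0][0]) ** wj for e, wj in zip(row, w))
    muU = 1.0 - prod((1.0 - e[0][1]) ** wj for e, wj in zip(row, w))
    nuL = prod(e[1][0] ** wj for e, wj in zip(row, w))
    nuU = prod(e[1][1] ** wj for e, wj in zip(row, w))
    return (muL, muU), (nuL, nuU)

def score(z, lam=0.5):
    # Step 4, Eq (4.5): ratio-type score function
    (muL, muU), (nuL, nuU) = z
    return lam * (muL + muU) / ((1.0 - lam) * (nuL + nuU))

R = [  # Table 4: rows A1..A4, columns C1..C4
    [((0.1, 0.2), (0.1, 0.2)), ((0.25, 0.5), (0.25, 0.5)), ((0.4, 0.5), (0.3, 0.5)), ((0.5, 0.5), (0.5, 0.5))],
    [((0.5, 0.6), (0.2, 0.3)), ((0.2, 0.5), (0.2, 0.5)), ((0.0, 0.0), (0.25, 0.75)), ((0.3, 0.4), (0.4, 0.6))],
    [((0.25, 0.5), (0.25, 0.5)), ((0.2, 0.4), (0.2, 0.4)), ((0.2, 0.3), (0.4, 0.7)), ((0.2, 0.3), (0.5, 0.6))],
    [((0.2, 0.3), (0.6, 0.7)), ((0.4, 0.7), (0.2, 0.3)), ((0.2, 0.5), (0.2, 0.5)), ((0.5, 0.7), (0.1, 0.3))],
]
w = weights(R)
scores = [score(aggregate(row, w)) for row in R]
ranking = sorted(range(4), key=lambda i: -scores[i])
print([f"A{i + 1}" for i in ranking])  # ['A4', 'A1', 'A2', 'A3']
```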
In this section, a practical case study, adapted from [5], will be presented to demonstrate the application of the newly introduced knowledge measure in addressing MADM problems involving unknown weights.
An investment company is about to make an investment, and the four possible choices for the investment capital are (1) ˜A1, a car company; (2) ˜A2, a food company; (3) ˜A3, a computer company; and (4) ˜A4, a branch office. The investment firm is trying to decide which of these is the best option. The following four attributes must also be taken into consideration by the investment company when making its choice: C1: Risk analysis; C2: Growth analysis; C3: Environmental impact analysis; and C4: Social and political impact analysis.
Let w=(w1,w2,w3,w4)T be the weight vector of the attributes, which is completely unknown. Additionally, the ratings of the alternatives over the attributes are listed in the following Table 4.
C1 | C2 | C3 | C4 | |
˜A1 | ⟨[0.1,0.2],[0.1,0.2]⟩ | ⟨[0.25,0.5],[0.25,0.5]⟩ | ⟨[0.4,0.5],[0.3,0.5]⟩ | ⟨[0.5,0.5],[0.5,0.5]⟩ |
˜A2 | ⟨[0.5,0.6],[0.2,0.3]⟩ | ⟨[0.2,0.5],[0.2,0.5]⟩ | ⟨[0.0,0.0],[0.25,0.75]⟩ | ⟨[0.3,0.4],[0.4,0.6]⟩ |
˜A3 | ⟨[0.25,0.5],[0.25,0.5]⟩ | ⟨[0.2,0.4],[0.2,0.4]⟩ | ⟨[0.2,0.3],[0.4,0.7]⟩ | ⟨[0.2,0.3],[0.5,0.6]⟩ |
˜A4 | ⟨[0.2,0.3],[0.6,0.7]⟩ | ⟨[0.4,0.7],[0.2,0.3]⟩ | ⟨[0.2,0.5],[0.2,0.5]⟩ | ⟨[0.5,0.7],[0.1,0.3]⟩ |
Step 1. Based on KH1, the knowledge measure matrix is calculated as follows:
Q (rows ˜A1,˜A2,˜A3,˜A4; columns C1,C2,C3,C4) =
( 0.5   0.5   0.475  0.5
  0.65  0.5   0.75   0.575
  0.5   0.5   0.65   0.65
  0.7   0.65  0.5    0.7 ). | (4.6) |
Step 2. Next, the attribute weights are calculated by applying Eq (4.3):
w1=0.2527,w2=0.2312,w3=0.2554,w4=0.2608. |
Step 3. The interval-valued intuitionistic fuzzy weighted average operator can be computed by utilizing the formula presented in Eq (4.4). As a result, the combined results of the schemes are obtained as follows:
Z1=⟨[0.3326,0.4370],[0.2486,0.3966]⟩;Z2=⟨[0.2737,0.4085],[0.2536,0.5111]⟩; |
Z3=⟨[0.2129,0.3796],[0.3207,0.5426]⟩;Z4=⟨[0.3379,0.5766],[0.2203,0.4234]⟩. |
Step 4. The scoring values of the alternatives are calculated with Eq (4.5) when λ=12 as follows:
ϕ(Z1)=1.1922,ϕ(Z2)=0.8921,ϕ(Z3)=0.6863,ϕ(Z4)=1.4207. |
Therefore, the sorting order is ˜A4≻˜A1≻˜A2≻˜A3, and the optimal alternative is ˜A4. Furthermore, to underscore the practicality of the newly introduced knowledge measure in addressing the MADM problem, we contrast the derived outcomes with diverse entropy and knowledge measures. As evident from Table 5 and Figure 6, the ranking outcomes closely align with our findings, and ˜A4 emerges as the optimal solution in every case.
Methods | Weighting vector | Alternatives ranking |
EJPCZ(E1) [29] | w=(0.2274,0.2860,0.2370,0.2496)T | ˜A4≻˜A1≻˜A2≻˜A3 |
ELZX(E2) [30] | w=(0.2274,0.2860,0.2370,0.2496)T | ˜A4≻˜A1≻˜A2≻˜A3 |
EWWZ(E3) [31] | w=(0.2274,0.2860,0.2370,0.2496)T | ˜A4≻˜A1≻˜A2≻˜A3 |
EZJJL(E4) [32] | w=(0.2366,0.3159,0.1859,0.2616)T | ˜A4≻˜A1≻˜A2≻˜A3 |
Das et al. [27] | w=(0.2506,0.2211,0.2472,0.2811)T | ˜A4≻˜A1≻˜A2≻˜A3 |
Proposed method | w=(0.2527,0.2312,0.2554,0.2608)T | ˜A4≻˜A1≻˜A2≻˜A3 |
The examples above demonstrate the effectiveness of applying knowledge measures to MADM problems with unknown attribute weights, and they underscore the usefulness and practical applicability of the newly introduced knowledge measures.
The parameter λ in Eq (4.5) can be interpreted as the weight of the membership degree, and in principle its magnitude could affect the final ranking of the alternatives. A sensitivity analysis is therefore performed over a range of λ values. Table 6 presents the score function values ϕ(Zi) and the corresponding rankings for each λ.
ϕ(Z1) | ϕ(Z2) | ϕ(Z3) | ϕ(Z4) | Alternatives order | |
λ=0.1 | 0.1325 | 0.0991 | 0.0763 | 0.1579 | ˜A4≻˜A1≻˜A2≻˜A3 |
λ=0.2 | 0.2980 | 0.2230 | 0.1716 | 0.3552 | ˜A4≻˜A1≻˜A2≻˜A3 |
λ=0.3 | 0.5109 | 0.3823 | 0.2941 | 0.6089 | ˜A4≻˜A1≻˜A2≻˜A3 |
λ=0.4 | 0.7948 | 0.5947 | 0.4576 | 0.9472 | ˜A4≻˜A1≻˜A2≻˜A3 |
λ=0.5 | 1.1922 | 0.8920 | 0.6863 | 1.4207 | ˜A4≻˜A1≻˜A2≻˜A3 |
λ=0.6 | 1.7882 | 1.3380 | 1.0295 | 2.1311 | ˜A4≻˜A1≻˜A2≻˜A3 |
λ=0.7 | 2.7817 | 2.0813 | 1.6015 | 3.3150 | ˜A4≻˜A1≻˜A2≻˜A3 |
λ=0.8 | 4.7686 | 3.5680 | 2.7454 | 5.6829 | ˜A4≻˜A1≻˜A2≻˜A3 |
λ=0.9 | 10.7295 | 8.0280 | 6.1771 | 12.7865 | ˜A4≻˜A1≻˜A2≻˜A3 |
We select nine values of λ across the interval (0,1); Table 6 shows that the ranking of alternatives remains the same regardless of how λ varies. Figure 7 visualizes these data and confirms that the final ranking is unaltered by changes in λ. As Table 6 also shows, when λ>0.5 the score function values increase substantially, because greater preference is then given to the membership degree.
To show clearly how the gaps between alternatives widen as λ increases, define T1=ϕ(Z4)−ϕ(Z1), T2=ϕ(Z1)−ϕ(Z2), T3=ϕ(Z2)−ϕ(Z3). Table 7 and Figure 8 show that the discrimination between the alternatives grows with λ; that is, as λ increases, the score function ϕ(Zi) separates the alternatives more clearly. The ranking is therefore only minimally influenced by this subjective choice, which further demonstrates the stability of the method proposed in this paper.
T1 | T2 | T3 | |
λ=0.1 | 0.0254 | 0.0334 | 0.0229 |
λ=0.2 | 0.0571 | 0.0750 | 0.0514 |
λ=0.3 | 0.0980 | 0.1286 | 0.0881 |
λ=0.4 | 0.1524 | 0.2001 | 0.1371 |
λ=0.5 | 0.2286 | 0.3002 | 0.2057 |
λ=0.6 | 0.3428 | 0.4502 | 0.3085 |
λ=0.7 | 0.5333 | 0.7004 | 0.4799 |
λ=0.8 | 0.9143 | 1.2007 | 0.8226 |
λ=0.9 | 2.0571 | 2.7015 | 1.8509 |
Novel coronavirus pneumonia (COVID-19) is an acute respiratory infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Since its outbreak in Pakistan, the COVID-19 situation there has garnered widespread attention. The epidemic's spread in Pakistan has exhibited certain fluctuations, with a gradual rise in confirmed cases, exerting a profound impact on public health, socioeconomic conditions, and the daily lives of citizens.
This section adopts the example of healthcare facilities in Khyber Pakhtunkhwa, Pakistan, from [33]. A review panel of three experts, e1,e2, and e3, is tasked with evaluating four hospitals Al(l=1,2,3,4) in Khyber Pakhtunkhwa against four attributes: c1: Reverse Transcription-Polymerase Chain Reaction facilities (RT-PCR), c2: Personal Protective Equipment (PPE), c3: Shortage of Isolation Wards (SIW), c4: Highly Qualified Staff (HQS).
Next, the specific steps of the decision-making method are as follows.
Step 1. We construct the interval-valued intuitionistic fuzzy decision matrices shown in Tables 8–10.
c1 | c2 | c3 | c4 | |
A1 | ⟨[0.3,0.4],[0.1,0.3]⟩ | ⟨[0.2,0.5],[0.2,0.3]⟩ | ⟨[0.4,0.5],[0.3,0.5]⟩ | ⟨[0.1,0.3],[0.3,0.5]⟩ |
A2 | ⟨[0.1,0.3],[0.3,0.4]⟩ | ⟨[0.3,0.4],[0.1,0.4]⟩ | ⟨[0.2,0.4],[0.1,0.2]⟩ | ⟨[0.3,0.4],[0.1,0.2]⟩ |
A3 | ⟨[0.3,0.4],[0.1,0.2]⟩ | ⟨[0.2,0.4],[0.1,0.2]⟩ | ⟨[0.3,0.4],[0.1,0.4]⟩ | ⟨[0.1,0.3],[0.3,0.4]⟩ |
A4 | ⟨[0.1,0.3],[0.3,0.5]⟩ | ⟨[0.4,0.5],[0.3,0.5]⟩ | ⟨[0.2,0.5],[0.2,0.3]⟩ | ⟨[0.3,0.4],[0.1,0.3]⟩ |
c1 | c2 | c3 | c4 | |
A1 | ⟨[0.1,0.3],[0.3,0.5]⟩ | ⟨[0.4,0.5],[0.3,0.5]⟩ | ⟨[0.2,0.5],[0.2,0.3]⟩ | ⟨[0.2,0.5],[0.2,0.3]⟩ |
A2 | ⟨[0.3,0.4],[0.1,0.2]⟩ | ⟨[0.2,0.4],[0.1,0.2]⟩ | ⟨[0.3,0.4],[0.1,0.4]⟩ | ⟨[0.3,0.4],[0.1,0.4]⟩ |
A3 | ⟨[0.1,0.3],[0.3,0.4]⟩ | ⟨[0.3,0.4],[0.1,0.4]⟩ | ⟨[0.2,0.4],[0.1,0.2]⟩ | ⟨[0.2,0.4],[0.1,0.2]⟩ |
A4 | ⟨[0.3,0.4],[0.1,0.3]⟩ | ⟨[0.2,0.5],[0.2,0.3]⟩ | ⟨[0.4,0.5],[0.3,0.5]⟩ | ⟨[0.4,0.5],[0.3,0.5]⟩ |
c1 | c2 | c3 | c4 | |
A1 | ⟨[0.2,0.5],[0.2,0.3]⟩ | ⟨[0.3,0.5],[0.2,0.3]⟩ | ⟨[0.1,0.3],[0.3,0.5]⟩ | ⟨[0.4,0.5],[0.3,0.5]⟩ |
A2 | ⟨[0.3,0.4],[0.1,0.4]⟩ | ⟨[0.1,0.4],[0.2,0.4]⟩ | ⟨[0.3,0.4],[0.1,0.2]⟩ | ⟨[0.2,0.4],[0.1,0.2]⟩ |
A3 | ⟨[0.2,0.4],[0.1,0.2]⟩ | ⟨[0.3,0.4],[0.1,0.2]⟩ | ⟨[0.1,0.3],[0.3,0.4]⟩ | ⟨[0.3,0.4],[0.1,0.4]⟩ |
A4 | ⟨[0.4,0.5],[0.3,0.5]⟩ | ⟨[0.1,0.5],[0.2,0.5]⟩ | ⟨[0.3,0.4],[0.1,0.3]⟩ | ⟨[0.2,0.5],[0.2,0.3]⟩ |
Step 2. Applying the knowledge measure Kh1 for IVIFSs to each expert's decision matrix yields the knowledge measure matrices Qk(k=1,2,3), presented as follows.
$Q_1=\begin{pmatrix} 0.6 & 0.8 & 0.6 & 0.7 \\ 0.7 & 0.7 & 0.8 & 0.7 \\ 0.7 & 0.8 & 0.7 & 0.7 \\ 0.7 & 0.6 & 0.8 & 0.7 \end{pmatrix}$, with rows $A_1,\ldots,A_4$ and columns $c_1,\ldots,c_4$. |
$Q_2=\begin{pmatrix} 0.7 & 0.6 & 0.8 & 0.8 \\ 0.7 & 0.8 & 0.7 & 0.7 \\ 0.7 & 0.7 & 0.8 & 0.8 \\ 0.7 & 0.8 & 0.6 & 0.6 \end{pmatrix}$. |
$Q_3=\begin{pmatrix} 0.8 & 0.7 & 0.7 & 0.6 \\ 0.7 & 0.8 & 0.7 & 0.8 \\ 0.8 & 0.7 & 0.7 & 0.7 \\ 0.6 & 0.9 & 0.7 & 0.8 \end{pmatrix}$. |
Step 3. We assign equal weights to the three experts and aggregate their knowledge measure decision matrices into one using the formula below:
$\bar{Q}=\dfrac{Q_{1}+Q_{2}+Q_{3}}{3}.$ | (4.7) |
Obtain the following knowledge measure aggregation matrix ˉQ:
$\bar{Q}=\begin{pmatrix} 0.7 & 0.7 & 0.7 & 0.7 \\ 0.7 & 0.77 & 0.73 & 0.73 \\ 0.73 & 0.73 & 0.73 & 0.73 \\ 0.67 & 0.77 & 0.7 & 0.7 \end{pmatrix}$, with rows $A_1,\ldots,A_4$ and columns $c_1,\ldots,c_4$. |
Step 4. Determine attribute weights by applying Eq (4.3).
w1=0.2437,w2=0.2585,w3=0.2489,w4=0.2489. |
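Steps 3 and 4 can be checked numerically: average the three experts' knowledge measure matrices via Eq (4.7) (rounded to two decimals, as in the displayed ˉQ) and then normalize column sums. As in the first example, the column-normalization form of Eq (4.3) is an assumption, verified against the reported weights.

```python
# Group decision Steps 3-4: equal-weight averaging of the experts'
# knowledge measure matrices, then column-sum normalization (assumed
# form of Eq (4.3)).

Q1 = [[0.6, 0.8, 0.6, 0.7], [0.7, 0.7, 0.8, 0.7],
      [0.7, 0.8, 0.7, 0.7], [0.7, 0.6, 0.8, 0.7]]
Q2 = [[0.7, 0.6, 0.8, 0.8], [0.7, 0.8, 0.7, 0.7],
      [0.7, 0.7, 0.8, 0.8], [0.7, 0.8, 0.6, 0.6]]
Q3 = [[0.8, 0.7, 0.7, 0.6], [0.7, 0.8, 0.7, 0.8],
      [0.8, 0.7, 0.7, 0.7], [0.6, 0.9, 0.7, 0.8]]

# Eq (4.7): entrywise average, rounded to two decimals as displayed.
Qbar = [[round((a + b + c) / 3, 2) for a, b, c in zip(r1, r2, r3)]
        for r1, r2, r3 in zip(Q1, Q2, Q3)]

col = [sum(row[j] for row in Qbar) for j in range(4)]
w = [s / sum(col) for s in col]
print([round(x, 4) for x in w])  # [0.2437, 0.2585, 0.2489, 0.2489]
```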
Step 5. Employ Eq (4.4) to weight and aggregate the attribute values of each alternative (Table 11).
e1 | e2 | e3 | |
z1 | ⟨[0.2577,0.4316],[0.2067,0.3869]⟩ | ⟨[0.2357,0.4573],[0.2352,0.3877]⟩ | ⟨[0.2591,0.4563],[0.2447,0.3869]⟩ |
z2 | ⟨[0.2306,0.3770],[0.1307,0.2833]⟩ | ⟨[0.2754,0.4000],[0.1000,0.2824]⟩ | ⟨[0.2278,0.4000],[0.1196,0.2833]⟩ |
z3 | ⟨[0.2286,0.3765],[0.1314,0.2824]⟩ | ⟨[0.2046,0.3770],[0.1307,0.2833]⟩ | ⟨[0.2302,0.3765],[0.1314,0.2824]⟩ |
z4 | ⟨[0.2607,0.4321],[0.2063,0.3877]⟩ | ⟨[0.3289,0.4773],[0.2067,0.3869]⟩ | ⟨[0.2562,0.4768],[0.1858,0.3877]⟩ |
Step 6. When $\lambda=\frac{1}{2}$, the score values assigned by each expert to the alternatives are calculated using Eq (4.5), as shown in Table 12.
φ1 | φ2 | φ3 | φ4 | |
e1 | 1.1613 | 1.4679 | 1.4623 | 1.1662 |
e2 | 1.0949 | 1.7662 | 1.4051 | 1.3583 |
e3 | 1.1328 | 1.5581 | 1.4660 | 1.2781 |
Step 7. Aggregate the score values of the three experts to obtain the final ranking of the alternatives.
ˉφ1=1.1297,ˉφ2=1.5974,ˉφ3=1.4445,ˉφ4=1.2675. |
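Step 7 is a plain average over the three equally weighted experts; the sketch below reproduces ˉφi from Table 12 and the resulting ranking.

```python
# Step 7: average each alternative's score over the three equally
# weighted experts (values from Table 12), then rank.

scores = {  # scores[expert] = (phi1, phi2, phi3, phi4)
    "e1": (1.1613, 1.4679, 1.4623, 1.1662),
    "e2": (1.0949, 1.7662, 1.4051, 1.3583),
    "e3": (1.1328, 1.5581, 1.4660, 1.2781),
}
mean = [round(sum(s[i] for s in scores.values()) / 3, 4) for i in range(4)]
print(mean)  # [1.1297, 1.5974, 1.4445, 1.2675]

ranking = sorted(range(1, 5), key=lambda i: mean[i - 1], reverse=True)
print(ranking)  # [2, 3, 4, 1] -> A2 > A3 > A4 > A1
```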
It is not hard to observe that ˉφ2>ˉφ3>ˉφ4>ˉφ1, yielding the ranking A2≻A3≻A4≻A1. This is consistent with the result in [33], which further demonstrates the effectiveness and practicality of the method proposed in this paper.
Suppose X is a finite universe of discourse. There are m patterns which are denoted by IVIFSs
$\tilde{M}_j=\big\{\langle x_1,[\mu^{L}_{\tilde{M}_j}(x_1),\mu^{U}_{\tilde{M}_j}(x_1)],[\nu^{L}_{\tilde{M}_j}(x_1),\nu^{U}_{\tilde{M}_j}(x_1)]\rangle,\ldots,\langle x_n,[\mu^{L}_{\tilde{M}_j}(x_n),\mu^{U}_{\tilde{M}_j}(x_n)],[\nu^{L}_{\tilde{M}_j}(x_n),\nu^{U}_{\tilde{M}_j}(x_n)]\rangle \mid x\in X\big\}\quad(j=1,2,\ldots,m)$
and a test sample that needs to be categorized exists, which is denoted by IVIFS
$\tilde{B}=\big\{\langle x_1,[\mu^{L}_{\tilde{B}}(x_1),\mu^{U}_{\tilde{B}}(x_1)],[\nu^{L}_{\tilde{B}}(x_1),\nu^{U}_{\tilde{B}}(x_1)]\rangle,\ldots,\langle x_n,[\mu^{L}_{\tilde{B}}(x_n),\mu^{U}_{\tilde{B}}(x_n)],[\nu^{L}_{\tilde{B}}(x_n),\nu^{U}_{\tilde{B}}(x_n)]\rangle \mid x\in X\big\}.$
The following is the recognition process:
(a) Compute the similarity measure S(˜Mj,˜B) between ˜Mj(j=1,...,m) and ˜B with SH1 and Sh1.
(b) Compare the values S(˜Mj,˜B): a larger value of S(˜Mj,˜B) indicates a closer proximity of ˜B to category ˜Mj, so ˜B is assigned to the pattern with the largest similarity.
Scholars have proposed numerous approaches to medical diagnosis from varying perspectives. In this section, we employ pattern recognition techniques to address the medical diagnosis problem: the collection of symptoms serves as the universe of discourse, the patient functions as an unidentified test sample, and the various diseases correspond to the patterns. The objective is to categorize patients into distinct groups according to their respective diseases.
Suppose X={x1(Temperature),x2(Cough),x3(Headache),x4(Stomach pain)} is a set of symptoms and ˜M={˜M1(Viral fever),˜M2(Typhoid),˜M3(Pneumonia),˜M4(Stomach problem)} is a set of diseases. The disease–symptom matrix expressed by IVIFSs is shown in Table 13; the data are taken from [16].
x1(Temperature) | x2(Cough) | x3(Headache) | x4(Stomach pain) | |
˜M1 | ⟨[0.8,0.9],[0.0,0.1]⟩ | ⟨[0.7,0.8],[0.1,0.2]⟩ | ⟨[0.5,0.6],[0.2,0.3]⟩ | ⟨[0.6,0.8],[0.1,0.2]⟩ |
˜M2 | ⟨[0.5,0.6],[0.1,0.3]⟩ | ⟨[0.8,0.9],[0.0,0.1]⟩ | ⟨[0.6,0.8],[0.1,0.2]⟩ | ⟨[0.4,0.6],[0.1,0.2]⟩ |
˜M3 | ⟨[0.7,0.8],[0.1,0.2]⟩ | ⟨[0.7,0.9],[0.0,0.1]⟩ | ⟨[0.4,0.6],[0.2,0.4]⟩ | ⟨[0.3,0.5],[0.2,0.4]⟩ |
˜M4 | ⟨[0.8,0.9],[0.0,0.1]⟩ | ⟨[0.7,0.8],[0.1,0.2]⟩ | ⟨[0.7,0.9],[0.0,0.1]⟩ | ⟨[0.8,0.9],[0.0,0.1]⟩ |
Let the patient B be expressed as:
B={⟨x1,[0.4,0.5],[0.1,0.2]⟩,⟨x2,[0.7,0.8],[0.1,0.2]⟩,⟨x3,[0.9,0.9],[0.0,0.1]⟩,⟨x4,[0.3,0.5],[0.2,0.4]⟩}. |
The aim is to categorize patient B into one of the diseases ˜M1,˜M2,˜M3,˜M4. The resulting similarity values on IVIFSs are shown in Table 14.
S(˜M1,B) | S(˜M2,B) | S(˜M3,B) | S(˜M4,B) | Recognition Result | |
S1 [11] | 0.73 | 0.80 | 0.78 | 0.73 | ˜M2 |
SD [6] | 0.82 | 0.91 | 0.86 | 0.84 | ˜M2 |
SH1 | 0.32 | 0.36 | 0.33 | 0.28 | ˜M2 |
Sh1 | 0.38 | 0.45 | 0.43 | 0.34 | ˜M2 |
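Recognition step (b) reduces to an argmax over the similarity values; using the S_H1 and S_h1 rows of Table 14:

```python
# Recognition step (b): assign the test sample B to the pattern with the
# largest similarity.  Values are the S_H1 and S_h1 rows of Table 14.

patterns = ["M1", "M2", "M3", "M4"]
S_H1 = [0.32, 0.36, 0.33, 0.28]
S_h1 = [0.38, 0.45, 0.43, 0.34]

for name, sims in (("S_H1", S_H1), ("S_h1", S_h1)):
    best = patterns[max(range(4), key=sims.__getitem__)]
    print(name, "->", best)  # both measures classify B as M2 (typhoid)
```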
According to the pattern recognition principle for IVIFSs, ˜M2 is the most similar to B. Note that the similarity measure S1 cannot distinguish ˜M1 from ˜M4 (both score 0.73), whereas the newly proposed measures resolve this tie. Moreover, the proposed measures SH1 and Sh1 yield the same recognition result. Hence, patient B is classified as suffering from disease ˜M2, i.e., conclusively diagnosed with typhoid.
This paper introduces the axiomatic definition of the closest crisp set for IVIFSs, along with its pertinent properties. Subsequently, leveraging the distance measure to the closest crisp set, a novel knowledge measure and entropy for IVIFSs are presented. Furthermore, several theorems are proven, and the interplay between knowledge measures and entropy is examined. Ultimately, the efficacy of the proposed method is validated through two concrete examples: The application of knowledge measures in MADM and similarity measures in pattern recognition.
In the future, we will persist in exploring the application of IVIFS information measures in other domains and strive to develop superior and more rational models for the mutual conversion of these information measures.
Chunfeng Suo: Writing-review & editing, supervision, project administration, funding acquisition; Le Fu: Conceptualization, Methodology, software, validation, writing-original draft, writing-review & editing; Jingxuan Chen: Software, validation, supervision, conceptualization; Xuanchen Li: Validation, writing-review. All authors have read and approved the final version of the manuscript for publication.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This paper was supported by the National Natural Science Foundation of China (Grant Nos. 11671244 and 12071271), the Natural Science Foundation of Jilin Province (Grant No. YDZJ202201ZYTS320), and the Sixth Batch of Jilin Province Youth Science and Technology Talent Lifting Project.
The authors declare that there are no conflicts of interest in the publication of this paper.
$\ast=+$, $\star=+$ |
Minkowski | q=4 | p=1 | $D_{H}(\tilde{A},\tilde{B})=\frac{1}{4n}\sum_{i=1}^{n}(\alpha_{i}+\beta_{i}+\gamma_{i}+\delta_{i})$ |
 | | p=2 | $D_{E}(\tilde{A},\tilde{B})=\big(\frac{1}{4n}\sum_{i=1}^{n}(\alpha_{i}^{2}+\beta_{i}^{2}+\gamma_{i}^{2}+\delta_{i}^{2})\big)^{\frac{1}{2}}$ |
 | | p=3 | $D_{M}(\tilde{A},\tilde{B})=\big(\frac{1}{4n}\sum_{i=1}^{n}(\alpha_{i}^{3}+\beta_{i}^{3}+\gamma_{i}^{3}+\delta_{i}^{3})\big)^{\frac{1}{3}}$ |
$\ast=\vee$, $\star=+$; $\ast=\vee$, $\star=\vee$ |
p=1 | q=2 | Hamming-Hausdorff | $D_{HH}(\tilde{A},\tilde{B})=\frac{1}{2n}\sum_{i=1}^{n}(\alpha_{i}\vee\beta_{i}+\gamma_{i}\vee\delta_{i})$ |
 | q=1 | Hausdorff | $D_{h}(\tilde{A},\tilde{B})=\frac{1}{n}\sum_{i=1}^{n}(\alpha_{i}\vee\beta_{i}\vee\gamma_{i}\vee\delta_{i})$ |
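The distances in the table can be implemented directly. The quantities $\alpha_i,\beta_i,\gamma_i,\delta_i$ are not defined in this excerpt; the sketch assumes the usual IVIFS convention that they are the coordinatewise absolute differences of the lower/upper membership and non-membership degrees, and the sample sets `A` and `B` below are hypothetical.

```python
# Distances from the table, assuming alpha_i, beta_i, gamma_i, delta_i are
# |muL_A - muL_B|, |muU_A - muU_B|, |nuL_A - nuL_B|, |nuU_A - nuU_B|
# (this convention is an assumption; it is not stated in this excerpt).

def diffs(A, B):
    return [tuple(abs(x - y) for x, y in zip(a, b)) for a, b in zip(A, B)]

def d_minkowski(A, B, p):
    # p = 1, 2, 3 give D_H, D_E, D_M respectively.
    d = diffs(A, B)
    n = len(d)
    return (sum(a**p + b**p + g**p + t**p for a, b, g, t in d) / (4 * n)) ** (1 / p)

def d_hamming_hausdorff(A, B):
    d = diffs(A, B)
    return sum(max(a, b) + max(g, t) for a, b, g, t in d) / (2 * len(d))

def d_hausdorff(A, B):
    d = diffs(A, B)
    return sum(max(a, b, g, t) for a, b, g, t in d) / len(d)

# Each element is (muL, muU, nuL, nuU); a tiny two-element universe:
A = [(0.1, 0.2, 0.1, 0.2), (0.5, 0.6, 0.2, 0.3)]
B = [(0.4, 0.5, 0.3, 0.5), (0.2, 0.5, 0.2, 0.5)]
print(d_minkowski(A, B, 1), d_hamming_hausdorff(A, B), d_hausdorff(A, B))
```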
Distance measures | Conversion functions | Similarity measures |
$D_{H}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $S_{H1}(\tilde{A},\tilde{B})=\frac{D_{H}(\tilde{A},C_{\tilde{A}})+D_{H}(\tilde{B},C_{\tilde{B}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $S_{H2}(\tilde{A},\tilde{B})=\sin\frac{\big(D_{H}(\tilde{A},C_{\tilde{A}})+D_{H}(\tilde{B},C_{\tilde{B}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $S_{H3}(\tilde{A},\tilde{B})=\sqrt[k]{\frac{D_{H}(\tilde{A},C_{\tilde{A}})+D_{H}(\tilde{B},C_{\tilde{B}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $S_{H4}(\tilde{A},\tilde{B})=\log_{3}\big(D_{H}(\tilde{A},C_{\tilde{A}})+D_{H}(\tilde{B},C_{\tilde{B}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $S_{H5}(\tilde{A},\tilde{B})=2^{D_{H}(\tilde{A},C_{\tilde{A}})\cdot D_{H}(\tilde{B},C_{\tilde{B}})}-1$ |
$D_{E}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $S_{E1}(\tilde{A},\tilde{B})=\frac{D_{E}(\tilde{A},C_{\tilde{A}})+D_{E}(\tilde{B},C_{\tilde{B}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $S_{E2}(\tilde{A},\tilde{B})=\sin\frac{\big(D_{E}(\tilde{A},C_{\tilde{A}})+D_{E}(\tilde{B},C_{\tilde{B}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $S_{E3}(\tilde{A},\tilde{B})=\sqrt[k]{\frac{D_{E}(\tilde{A},C_{\tilde{A}})+D_{E}(\tilde{B},C_{\tilde{B}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $S_{E4}(\tilde{A},\tilde{B})=\log_{3}\big(D_{E}(\tilde{A},C_{\tilde{A}})+D_{E}(\tilde{B},C_{\tilde{B}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $S_{E5}(\tilde{A},\tilde{B})=2^{D_{E}(\tilde{A},C_{\tilde{A}})\cdot D_{E}(\tilde{B},C_{\tilde{B}})}-1$ |
$D_{M}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $S_{M1}(\tilde{A},\tilde{B})=\frac{D_{M}(\tilde{A},C_{\tilde{A}})+D_{M}(\tilde{B},C_{\tilde{B}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $S_{M2}(\tilde{A},\tilde{B})=\sin\frac{\big(D_{M}(\tilde{A},C_{\tilde{A}})+D_{M}(\tilde{B},C_{\tilde{B}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $S_{M3}(\tilde{A},\tilde{B})=\sqrt[k]{\frac{D_{M}(\tilde{A},C_{\tilde{A}})+D_{M}(\tilde{B},C_{\tilde{B}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $S_{M4}(\tilde{A},\tilde{B})=\log_{3}\big(D_{M}(\tilde{A},C_{\tilde{A}})+D_{M}(\tilde{B},C_{\tilde{B}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $S_{M5}(\tilde{A},\tilde{B})=2^{D_{M}(\tilde{A},C_{\tilde{A}})\cdot D_{M}(\tilde{B},C_{\tilde{B}})}-1$ |
$D_{HH}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $S_{HH1}(\tilde{A},\tilde{B})=\frac{D_{HH}(\tilde{A},C_{\tilde{A}})+D_{HH}(\tilde{B},C_{\tilde{B}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $S_{HH2}(\tilde{A},\tilde{B})=\sin\frac{\big(D_{HH}(\tilde{A},C_{\tilde{A}})+D_{HH}(\tilde{B},C_{\tilde{B}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $S_{HH3}(\tilde{A},\tilde{B})=\sqrt[k]{\frac{D_{HH}(\tilde{A},C_{\tilde{A}})+D_{HH}(\tilde{B},C_{\tilde{B}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $S_{HH4}(\tilde{A},\tilde{B})=\log_{3}\big(D_{HH}(\tilde{A},C_{\tilde{A}})+D_{HH}(\tilde{B},C_{\tilde{B}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $S_{HH5}(\tilde{A},\tilde{B})=2^{D_{HH}(\tilde{A},C_{\tilde{A}})\cdot D_{HH}(\tilde{B},C_{\tilde{B}})}-1$ |
$D_{h}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $S_{h1}(\tilde{A},\tilde{B})=\frac{D_{h}(\tilde{A},C_{\tilde{A}})+D_{h}(\tilde{B},C_{\tilde{B}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $S_{h2}(\tilde{A},\tilde{B})=\sin\frac{\big(D_{h}(\tilde{A},C_{\tilde{A}})+D_{h}(\tilde{B},C_{\tilde{B}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $S_{h3}(\tilde{A},\tilde{B})=\sqrt[k]{\frac{D_{h}(\tilde{A},C_{\tilde{A}})+D_{h}(\tilde{B},C_{\tilde{B}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $S_{h4}(\tilde{A},\tilde{B})=\log_{3}\big(D_{h}(\tilde{A},C_{\tilde{A}})+D_{h}(\tilde{B},C_{\tilde{B}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $S_{h5}(\tilde{A},\tilde{B})=2^{D_{h}(\tilde{A},C_{\tilde{A}})\cdot D_{h}(\tilde{B},C_{\tilde{B}})}-1$ |
Distance measures | Conversion functions | Knowledge measures |
$D_{H}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $K_{H1}(\tilde{A})=1-\frac{D_{H}(\tilde{A},C_{\tilde{A}})+D_{H}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $K_{H2}(\tilde{A})=1-\sin\frac{\big(D_{H}(\tilde{A},C_{\tilde{A}})+D_{H}(\tilde{A}^{c},C_{\tilde{A}^{c}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $K_{H3}(\tilde{A})=1-\sqrt[k]{\frac{D_{H}(\tilde{A},C_{\tilde{A}})+D_{H}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $K_{H4}(\tilde{A})=1-\log_{3}\big(D_{H}(\tilde{A},C_{\tilde{A}})+D_{H}(\tilde{A}^{c},C_{\tilde{A}^{c}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $K_{H5}(\tilde{A})=2-2^{D_{H}(\tilde{A},C_{\tilde{A}})\cdot D_{H}(\tilde{A}^{c},C_{\tilde{A}^{c}})}$ |
$D_{E}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $K_{E1}(\tilde{A})=1-\frac{D_{E}(\tilde{A},C_{\tilde{A}})+D_{E}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $K_{E2}(\tilde{A})=1-\sin\frac{\big(D_{E}(\tilde{A},C_{\tilde{A}})+D_{E}(\tilde{A}^{c},C_{\tilde{A}^{c}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $K_{E3}(\tilde{A})=1-\sqrt[k]{\frac{D_{E}(\tilde{A},C_{\tilde{A}})+D_{E}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $K_{E4}(\tilde{A})=1-\log_{3}\big(D_{E}(\tilde{A},C_{\tilde{A}})+D_{E}(\tilde{A}^{c},C_{\tilde{A}^{c}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $K_{E5}(\tilde{A})=2-2^{D_{E}(\tilde{A},C_{\tilde{A}})\cdot D_{E}(\tilde{A}^{c},C_{\tilde{A}^{c}})}$ |
$D_{M}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $K_{M1}(\tilde{A})=1-\frac{D_{M}(\tilde{A},C_{\tilde{A}})+D_{M}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $K_{M2}(\tilde{A})=1-\sin\frac{\big(D_{M}(\tilde{A},C_{\tilde{A}})+D_{M}(\tilde{A}^{c},C_{\tilde{A}^{c}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $K_{M3}(\tilde{A})=1-\sqrt[k]{\frac{D_{M}(\tilde{A},C_{\tilde{A}})+D_{M}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $K_{M4}(\tilde{A})=1-\log_{3}\big(D_{M}(\tilde{A},C_{\tilde{A}})+D_{M}(\tilde{A}^{c},C_{\tilde{A}^{c}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $K_{M5}(\tilde{A})=2-2^{D_{M}(\tilde{A},C_{\tilde{A}})\cdot D_{M}(\tilde{A}^{c},C_{\tilde{A}^{c}})}$ |
$D_{HH}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $K_{HH1}(\tilde{A})=1-\frac{D_{HH}(\tilde{A},C_{\tilde{A}})+D_{HH}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $K_{HH2}(\tilde{A})=1-\sin\frac{\big(D_{HH}(\tilde{A},C_{\tilde{A}})+D_{HH}(\tilde{A}^{c},C_{\tilde{A}^{c}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $K_{HH3}(\tilde{A})=1-\sqrt[k]{\frac{D_{HH}(\tilde{A},C_{\tilde{A}})+D_{HH}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $K_{HH4}(\tilde{A})=1-\log_{3}\big(D_{HH}(\tilde{A},C_{\tilde{A}})+D_{HH}(\tilde{A}^{c},C_{\tilde{A}^{c}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $K_{HH5}(\tilde{A})=2-2^{D_{HH}(\tilde{A},C_{\tilde{A}})\cdot D_{HH}(\tilde{A}^{c},C_{\tilde{A}^{c}})}$ |
$D_{h}(\tilde{A},\tilde{B})$ | $f_{1}(x,y)=\frac{x+y}{2}$ | $K_{h1}(\tilde{A})=1-\frac{D_{h}(\tilde{A},C_{\tilde{A}})+D_{h}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}$ |
 | $f_{2}(x,y)=\sin\frac{(x+y)\pi}{4}$ | $K_{h2}(\tilde{A})=1-\sin\frac{\big(D_{h}(\tilde{A},C_{\tilde{A}})+D_{h}(\tilde{A}^{c},C_{\tilde{A}^{c}})\big)\pi}{4}$ |
 | $f_{3}(x,y)=\sqrt[k]{\frac{x+y}{2}}\ (k>1)$ | $K_{h3}(\tilde{A})=1-\sqrt[k]{\frac{D_{h}(\tilde{A},C_{\tilde{A}})+D_{h}(\tilde{A}^{c},C_{\tilde{A}^{c}})}{2}}\ (k>1)$ |
 | $f_{4}(x,y)=\log_{3}(x+y+1)$ | $K_{h4}(\tilde{A})=1-\log_{3}\big(D_{h}(\tilde{A},C_{\tilde{A}})+D_{h}(\tilde{A}^{c},C_{\tilde{A}^{c}})+1\big)$ |
 | $f_{5}(x,y)=2^{xy}-1$ | $K_{h5}(\tilde{A})=2-2^{D_{h}(\tilde{A},C_{\tilde{A}})\cdot D_{h}(\tilde{A}^{c},C_{\tilde{A}^{c}})}$ |
C1 | C2 | C3 | C4 | |
˜A1 | ⟨[0.1,0.2],[0.1,0.2]⟩ | ⟨[0.25,0.5],[0.25,0.5]⟩ | ⟨[0.4,0.5],[0.3,0.5]⟩ | ⟨[0.5,0.5],[0.5,0.5]⟩ |
˜A2 | ⟨[0.5,0.6],[0.2,0.3]⟩ | ⟨[0.2,0.5],[0.2,0.5]⟩ | ⟨[0.0,0.0],[0.25,0.75]⟩ | ⟨[0.3,0.4],[0.4,0.6]⟩ |
˜A3 | ⟨[0.25,0.5],[0.25,0.5]⟩ | ⟨[0.2,0.4],[0.2,0.4]⟩ | ⟨[0.2,0.3],[0.4,0.7]⟩ | ⟨[0.2,0.3],[0.5,0.6]⟩ |
˜A4 | ⟨[0.2,0.3],[0.6,0.7]⟩ | \langle[0.4, 0.7], [0.2, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.5]\rangle | \langle[0.5, 0.7], [0.1, 0.3]\rangle |
Methods | Weighting vector | Alternatives ranking |
E_{JPCZ}(E1) [29] | w={(0.2274, 0.2860, 0.2370, 0.2496)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
E_{LZX}(E2) [30] | w={(0.2274, 0.2860, 0.2370, 0.2496)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
E_{WWZ}(E3) [31] | w={(0.2274, 0.2860, 0.2370, 0.2496)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
E_{ZJJL}(E4) [32] | w={(0.2366, 0.3159, 0.1859, 0.2616)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
Das et al. [27] | w={(0.2506, 0.2211, 0.2472, 0.2811)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
Proposed method | w={(0.2527, 0.2312, 0.2554, 0.2608)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\phi(Z_{1}) | \phi(Z_{2}) | \phi(Z_{3}) | \phi(Z_{4}) | Alternatives order | |
\lambda=0.1 | 0.1325 | 0.0991 | 0.0763 | 0.1579 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.2 | 0.2980 | 0.2230 | 0.1716 | 0.3552 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.3 | 0.5109 | 0.3823 | 0.2941 | 0.6089 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.4 | 0.7948 | 0.5947 | 0.4576 | 0.9472 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.5 | 1.1922 | 0.8920 | 0.6863 | 1.4207 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.6 | 1.7882 | 1.3380 | 1.0295 | 2.1311 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.7 | 2.7817 | 2.0813 | 1.6015 | 3.3150 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.8 | 4.7686 | 3.5680 | 2.7454 | 5.6829 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.9 | 10.7295 | 8.0280 | 6.1771 | 12.7865 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
T_{1} | T_{2} | T_{3} | |
\lambda=0.1 | 0.0254 | 0.0334 | 0.0229 |
\lambda=0.2 | 0.0571 | 0.0750 | 0.0514 |
\lambda=0.3 | 0.0980 | 0.1286 | 0.0881 |
\lambda=0.4 | 0.1524 | 0.2001 | 0.1371 |
\lambda=0.5 | 0.2286 | 0.3002 | 0.2057 |
\lambda=0.6 | 0.3428 | 0.4502 | 0.3085 |
\lambda=0.7 | 0.5333 | 0.7004 | 0.4799 |
\lambda=0.8 | 0.9143 | 1.2007 | 0.8226 |
\lambda=0.9 | 2.0571 | 2.7015 | 1.8509 |
c_1 | c_2 | c_3 | c_4 | |
A_1 | \langle[0.3, 0.4], [0.1, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.1, 0.3], [0.3, 0.5]\rangle |
A_2 | \langle[0.1, 0.3], [0.3, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.3, 0.4], [0.1, 0.2]\rangle |
A_3 | \langle[0.3, 0.4], [0.1, 0.2]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.1, 0.3], [0.3, 0.4]\rangle |
A_4 | \langle[0.1, 0.3], [0.3, 0.5]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.3, 0.4], [0.1, 0.3]\rangle |
c_1 | c_2 | c_3 | c_4 | |
A_1 | \langle[0.1, 0.3], [0.3, 0.5]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle |
A_2 | \langle[0.3, 0.4], [0.1, 0.2]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle |
A_3 | \langle[0.1, 0.3], [0.3, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle |
A_4 | \langle[0.3, 0.4], [0.1, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle |
c_1 | c_2 | c_3 | c_4 | |
A_1 | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.3, 0.5], [0.2, 0.3]\rangle | \langle[0.1, 0.3], [0.3, 0.5]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle |
A_2 | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.1, 0.4], [0.2, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.2]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle |
A_3 | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.3, 0.4], [0.1, 0.2]\rangle | \langle[0.1, 0.3], [0.3, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle |
A_4 | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.1, 0.5], [0.2, 0.5]\rangle | \langle[0.3, 0.4], [0.1, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle |
e_1 | e_2 | e_3 | |
z_1 | \langle[0.2577, 0.4316], [0.2067, 0.3869]\rangle | \langle[0.2357, 0.4573], [0.2352, 0.3877]\rangle | \langle[0.2591, 0.4563], [0.2447, 0.3869]\rangle |
z_2 | \langle[0.2306, 0.3770], [0.1307, 0.2833]\rangle | \langle[0.2754, 0.4000], [0.1000, 0.2824]\rangle | \langle[0.2278, 0.4000], [0.1196, 0.2833]\rangle |
z_3 | \langle[0.2286, 0.3765], [0.1314, 0.2824]\rangle | \langle[0.2046, 0.3770], [0.1307, 0.2833]\rangle | \langle[0.2302, 0.3765], [0.1314, 0.2824]\rangle |
z_4 | \langle[0.2607, 0.4321], [0.2063, 0.3877]\rangle | \langle[0.3289, 0.4773], [0.2067, 0.3869]\rangle | \langle[0.2562, 0.4768], [0.1858, 0.3877]\rangle |
\varphi_1 | \varphi_2 | \varphi_3 | \varphi_4 | |
e_1 | 1.1613 | 1.4679 | 1.4623 | 1.1662 |
e_2 | 1.0949 | 1.7662 | 1.4051 | 1.3583 |
e_3 | 1.1328 | 1.5581 | 1.4660 | 1.2781 |
x_{1} (Temperature) | x_{2} (Cough) | x_{3} (Headache) | x_{4} (Stomach pain) | |
\tilde{M}_{1} | \langle[0.8, 0.9], [0.0, 0.1]\rangle | \langle[0.7, 0.8], [0.1, 0.2]\rangle | \langle[0.5, 0.6], [0.2, 0.3]\rangle | \langle[0.6, 0.8], [0.1, 0.2]\rangle |
\tilde{M}_{2} | \langle[0.5, 0.6], [0.1, 0.3]\rangle | \langle[0.8, 0.9], [0.0, 0.1]\rangle | \langle[0.6, 0.8], [0.1, 0.2]\rangle | \langle[0.4, 0.6], [0.1, 0.2]\rangle |
\tilde{M}_{3} | \langle[0.7, 0.8], [0.1, 0.2]\rangle | \langle[0.7, 0.9], [0.0, 0.1]\rangle | \langle[0.4, 0.6], [0.2, 0.4]\rangle | \langle[0.3, 0.5], [0.2, 0.4]\rangle |
\tilde{M}_{4} | \langle[0.8, 0.9], [0.0, 0.1]\rangle | \langle[0.7, 0.8], [0.1, 0.2]\rangle | \langle[0.7, 0.9], [0.0, 0.1]\rangle | \langle[0.8, 0.9], [0.0, 0.1]\rangle |
S(\tilde{M}_{1}, B) | S(\tilde{M}_{2}, B) | S(\tilde{M}_{3}, B) | S(\tilde{M}_{4}, B) | Recognition Result | |
S_{1} [11] | 0.73 | 0.80 | 0.78 | 0.73 | \tilde{M}_{2} |
S_{D} [6] | 0.82 | 0.91 | 0.86 | 0.84 | \tilde{M}_{2} |
S_{H1} | 0.32 | 0.36 | 0.33 | 0.28 | \tilde{M}_{2} |
S_{h1} | 0.38 | 0.45 | 0.43 | 0.34 | \tilde{M}_{2} |
\ast=+ \star=+ |
Minkovsky | q=4 | p=1 | D_{H}(\tilde{A}, \tilde{B})=\big(\frac{1}{4n}\sum_{i=1}^{n} (\alpha_{i}+\beta_{i}+\gamma_{i}+\delta_{i})\big) |
p=2 | D_{E}(\tilde{A}, \tilde{B})= \big(\frac{1}{4n}\sum_{i=1}^{n} (\alpha_{i}^{2}+\beta_{i}^{2}+\gamma_{i}^{2}+\delta_{i}^{2})\big)^{\frac{1}{2}} | |||
p=3 | D_{M}(\tilde{A}, \tilde{B})= \big(\frac{1}{4n}\sum_{i=1}^{n} (\alpha_{i}^{3}+\beta_{i}^{3}+\gamma_{i}^{3}+\delta_{i}^{3})\big)^{\frac{1}{3}} | |||
\ast=\vee, \star=+ \ast=\vee, \star=\vee |
p=1 | q=2 | Hamming-Hausdorff | D_{HH}(\tilde{A}, \tilde{B})=\big(\frac{1}{2n}\sum_{i=1}^{n} (\alpha_{i}\vee\beta_{i}+\gamma_{i}\vee\delta_{i})\big) |
q=1 | Hausdorff | D_{h}(\tilde{A}, \tilde{B})=\big(\frac{1}{n}\sum_{i=1}^{n} (\alpha_{i}\vee\beta_{i}\vee\gamma_{i}\vee\delta_{i})\big) |
Distance measures | Conversion functions | Similarity measures |
D_{H}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | S_{H1}(\tilde{A}, \tilde{B})= \frac{D_{H}(\tilde{A}, C_{\tilde{A}})+D_{H}(\tilde{B}, C_{\tilde{B}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | S_{H2}(\tilde{A}, \tilde{B})= \sin\frac{\big(D_{H}(\tilde{A}, C_{\tilde{A}})+D_{H}(\tilde{B}, C_{\tilde{B}})\big)\pi}{4} | |
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | S_{H3}(\tilde{A}, \tilde{B})=\sqrt[k]{\frac{D_{H}(\tilde{A}, C_{\tilde{A}})+D_{H}(\tilde{B}, C_{\tilde{B}})}{2}}(k>1) | |
f_{4}(x, y)=\log_{3}(x+y+1) | S_{H4}(\tilde{A}, \tilde{B})=\log_{3}\big(D_{H}(\tilde{A}, C_{\tilde{A}})+D_{H}(\tilde{B}, C_{\tilde{B}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | S_{H5}(\tilde{A}, \tilde{B})=2^{D_{H}(\tilde{A}, C_{\tilde{A}})\cdot D_{H}(\tilde{B}, C_{\tilde{B}})}-1 | |
D_{E}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | S_{E1}(\tilde{A}, \tilde{B})= \frac{D_{E}(\tilde{A}, C_{\tilde{A}})+D_{E}(\tilde{B}, C_{\tilde{B}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | S_{E2}(\tilde{A}, \tilde{B})= \sin\frac{\big(D_{E}(\tilde{A}, C_{\tilde{A}})+D_{E}(\tilde{B}, C_{\tilde{B}})\big)\pi}{4} | |
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | S_{E3}(\tilde{A}, \tilde{B})=\sqrt[k]{\frac{D_{E}(\tilde{A}, C_{\tilde{A}})+D_{E}(\tilde{B}, C_{\tilde{B}})}{2}}(k>1) | |
f_{4}(x, y)=\log_{3}(x+y+1) | S_{E4}(\tilde{A}, \tilde{B})=\log_{3}\big(D_{E}(\tilde{A}, C_{\tilde{A}})+D_{E}(\tilde{B}, C_{\tilde{B}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | S_{E5}(\tilde{A}, \tilde{B})=2^{D_{E}(\tilde{A}, C_{\tilde{A}})\cdot D_{E}(\tilde{B}, C_{\tilde{B}})}-1 | |
D_{M}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | S_{M1}(\tilde{A}, \tilde{B})= \frac{D_{M}(\tilde{A}, C_{\tilde{A}})+D_{M}(\tilde{B}, C_{\tilde{B}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | S_{M2}(\tilde{A}, \tilde{B})= \sin\frac{\big(D_{M}(\tilde{A}, C_{\tilde{A}})+D_{M}(\tilde{B}, C_{\tilde{B}})\big)\pi}{4} | |
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | S_{M3}(\tilde{A}, \tilde{B})=\sqrt[k]{\frac{D_{M}(\tilde{A}, C_{\tilde{A}})+D_{M}(\tilde{B}, C_{\tilde{B}})}{2}}(k>1) | |
f_{4}(x, y)=\log_{3}(x+y+1) | S_{M4}(\tilde{A}, \tilde{B})=\log_{3}\big(D_{M}(\tilde{A}, C_{\tilde{A}})+D_{M}(\tilde{B}, C_{\tilde{B}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | S_{M5}(\tilde{A}, \tilde{B})=2^{D_{M}(\tilde{A}, C_{\tilde{A}})\cdot D_{M}(\tilde{B}, C_{\tilde{B}})}-1 | |
D_{HH}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | S_{HH1}(\tilde{A}, \tilde{B})= \frac{D_{HH}(\tilde{A}, C_{\tilde{A}})+D_{HH}(\tilde{B}, C_{\tilde{B}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | S_{HH2}(\tilde{A}, \tilde{B})= \sin\frac{\big(D_{HH}(\tilde{A}, C_{\tilde{A}})+D_{HH}(\tilde{B}, C_{\tilde{B}})\big)\pi}{4} | |
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | S_{HH3}(\tilde{A}, \tilde{B})=\sqrt[k]{\frac{D_{HH}(\tilde{A}, C_{\tilde{A}})+D_{HH}(\tilde{B}, C_{\tilde{B}})}{2}}(k>1) | |
f_{4}(x, y)=\log_{3}(x+y+1) | S_{HH4}(\tilde{A}, \tilde{B})=\log_{3}\big(D_{HH}(\tilde{A}, C_{\tilde{A}})+D_{HH}(\tilde{B}, C_{\tilde{B}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | S_{HH5}(\tilde{A}, \tilde{B})=2^{D_{HH}(\tilde{A}, C_{\tilde{A}})\cdot D_{HH}(\tilde{B}, C_{\tilde{B}})}-1 | |
D_{h}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | S_{h1}(\tilde{A}, \tilde{B})= \frac{D_{h}(\tilde{A}, C_{\tilde{A}})+D_{h}(\tilde{B}, C_{\tilde{B}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | S_{h2}(\tilde{A}, \tilde{B})= \sin\frac{\big(D_{h}(\tilde{A}, C_{\tilde{A}})+D_{h}(\tilde{B}, C_{\tilde{B}})\big)\pi}{4} | |
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | S_{h3}(\tilde{A}, \tilde{B})=\sqrt[k]{\frac{D_{h}(\tilde{A}, C_{\tilde{A}})+D_{h}(\tilde{B}, C_{\tilde{B}})}{2}}(k>1) | |
f_{4}(x, y)=\log_{3}(x+y+1) | S_{h4}(\tilde{A}, \tilde{B})=\log_{3}\big(D_{h}(\tilde{A}, C_{\tilde{A}})+D_{h}(\tilde{B}, C_{\tilde{B}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | S_{h5}(\tilde{A}, \tilde{B})=2^{D_{h}(\tilde{A}, C_{\tilde{A}})\cdot D_{h}(\tilde{B}, C_{\tilde{B}})}-1 |
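The five conversion functions f_{1}–f_{5} that generate the similarity measures above can be written out directly. A minimal sketch (f_{4} is taken as \log_{3}(x+y+1), consistent with it mapping [0,1]\times[0,1] into [0,1]; k=2 is an illustrative choice for f_{3}):

```python
import math

# Conversion functions f1-f5: each maps a pair (x, y) in [0,1]^2 into [0,1].
f = {
    1: lambda x, y: (x + y) / 2,
    2: lambda x, y: math.sin((x + y) * math.pi / 4),
    3: lambda x, y, k=2: ((x + y) / 2) ** (1 / k),  # requires k > 1
    4: lambda x, y: math.log(x + y + 1, 3),
    5: lambda x, y: 2 ** (x * y) - 1,
}

# A similarity measure is obtained by feeding two distance values into f,
# e.g. S_H1(A, B) = f1(D_H(A, C_A), D_H(B, C_B)) in the table's notation.
x, y = 0.3, 0.5
for i in sorted(f):
    print(f"f{i}({x}, {y}) = {f[i](x, y):.4f}")
```

Each f_{i} is monotone in both arguments and satisfies f_{i}(0,0)=0 and f_{i}(1,1)=1, which is what makes the resulting S and K measures well-scaled.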
Distance measures | Conversion functions | Knowledge measures |
D_{H}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | K_{H1}(\tilde{A})=1- \frac{D_{H}(\tilde{A}, C_{\tilde{A}})+D_{H}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | K_{H2}(\tilde{A})=1- \sin\frac{\big(D_{H}(\tilde{A}, C_{\tilde{A}})+D_{H}(\tilde{A}^{c}, C_{\tilde{A}^{c}})\big)\pi}{4} | |
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | K_{H3}(\tilde{A})=1- \sqrt[k]{\frac{D_{H}(\tilde{A}, C_{\tilde{A}})+D_{H}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2}}(k>1) | |
f_{4}(x, y)=\log_{3}(x+y+1) | K_{H4}(\tilde{A})=1- \log_{3}\big(D_{H}(\tilde{A}, C_{\tilde{A}})+D_{H}(\tilde{A}^{c}, C_{\tilde{A}^{c}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | K_{H5}(\tilde{A})=2- 2^{D_{H}(\tilde{A}, C_{\tilde{A}})\cdot D_{H}(\tilde{A}^{c}, C_{\tilde{A}^{c}})} | |
D_{E}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | K_{E1}(\tilde{A})=1- \frac{D_{E}(\tilde{A}, C_{\tilde{A}})+D_{E}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | K_{E2}(\tilde{A})=1- \sin\frac{\big(D_{E}(\tilde{A}, C_{\tilde{A}})+D_{E}(\tilde{A}^{c}, C_{\tilde{A}^{c}})\big)\pi}{4} | |
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | K_{E3}(\tilde{A})=1- \sqrt[k]{\frac{D_{E}(\tilde{A}, C_{\tilde{A}})+D_{E}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2}}(k>1) | |
f_{4}(x, y)=\log_{3}(x+y+1) | K_{E4}(\tilde{A})=1- \log_{3}\big(D_{E}(\tilde{A}, C_{\tilde{A}})+D_{E}(\tilde{A}^{c}, C_{\tilde{A}^{c}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | K_{E5}(\tilde{A})=2- 2^{D_{E}(\tilde{A}, C_{\tilde{A}})\cdot D_{E}(\tilde{A}^{c}, C_{\tilde{A}^{c}})} | |
D_{M}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | K_{M1}(\tilde{A})=1- \frac{D_{M}(\tilde{A}, C_{\tilde{A}})+D_{M}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | K_{M2}(\tilde{A})=1- \sin\frac{\big(D_{M}(\tilde{A}, C_{\tilde{A}})+D_{M}(\tilde{A}^{c}, C_{\tilde{A}^{c}})\big)\pi}{4} | |
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | K_{M3}(\tilde{A})=1- \sqrt[k]{\frac{D_{M}(\tilde{A}, C_{\tilde{A}})+D_{M}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2}}(k>1) | |
f_{4}(x, y)=\log_{3}(x+y+1) | K_{M4}(\tilde{A})=1- \log_{3}\big(D_{M}(\tilde{A}, C_{\tilde{A}})+D_{M}(\tilde{A}^{c}, C_{\tilde{A}^{c}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | K_{M5}(\tilde{A})=2- 2^{D_{M}(\tilde{A}, C_{\tilde{A}})\cdot D_{M}(\tilde{A}^{c}, C_{\tilde{A}^{c}})} | |
D_{HH}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | K_{HH1}(\tilde{A})=1- \frac{D_{HH}(\tilde{A}, C_{\tilde{A}})+D_{HH}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | K_{HH2}(\tilde{A})=1- \sin\frac{\big(D_{HH}(\tilde{A}, C_{\tilde{A}})+D_{HH}(\tilde{A}^{c}, C_{\tilde{A}^{c}})\big)\pi}{4} | 
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | K_{HH3}(\tilde{A})=1- \sqrt[k]{\frac{D_{HH}(\tilde{A}, C_{\tilde{A}})+D_{HH}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2}}(k>1) | 
f_{4}(x, y)=\log_{3}(x+y+1) | K_{HH4}(\tilde{A})=1- \log_{3}\big(D_{HH}(\tilde{A}, C_{\tilde{A}})+D_{HH}(\tilde{A}^{c}, C_{\tilde{A}^{c}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | K_{HH5}(\tilde{A})=2- 2^{D_{HH}(\tilde{A}, C_{\tilde{A}})\cdot D_{HH}(\tilde{A}^{c}, C_{\tilde{A}^{c}})} | |
D_{h}(\tilde{A}, \tilde{B}) | f_{1}(x, y)=\frac{x+y}{2} | K_{h1}(\tilde{A})=1- \frac{D_{h}(\tilde{A}, C_{\tilde{A}})+D_{h}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2} |
f_{2}(x, y)=\sin\frac{(x+y)\pi}{4} | K_{h2}(\tilde{A})=1- \sin\frac{\big(D_{h}(\tilde{A}, C_{\tilde{A}})+D_{h}(\tilde{A}^{c}, C_{\tilde{A}^{c}})\big)\pi}{4} | |
f_{3}(x, y)=\sqrt[k]{\frac{x+y}{2}}(k>1) | K_{h3}(\tilde{A})=1- \sqrt[k]{\frac{D_{h}(\tilde{A}, C_{\tilde{A}})+D_{h}(\tilde{A}^{c}, C_{\tilde{A}^{c}})}{2}}(k>1) | |
f_{4}(x, y)=\log_{3}(x+y+1) | K_{h4}(\tilde{A})=1- \log_{3}\big(D_{h}(\tilde{A}, C_{\tilde{A}})+D_{h}(\tilde{A}^{c}, C_{\tilde{A}^{c}})+1\big) | 
f_{5}(x, y)=2^{xy}-1 | K_{h5}(\tilde{A})=2- 2^{D_{h}(\tilde{A}, C_{\tilde{A}})\cdot D_{h}(\tilde{A}^{c}, C_{\tilde{A}^{c}})} |
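Every knowledge measure in the table has the same shape: K(\tilde{A}) = 1 - f(d_{1}, d_{2}), where d_{1}=D(\tilde{A}, C_{\tilde{A}}) and d_{2}=D(\tilde{A}^{c}, C_{\tilde{A}^{c}}). A minimal sketch, treating the two distances as precomputed inputs (C_{\tilde{A}} is the reference set defined in the paper and is not reconstructed here):

```python
# Generic knowledge measure: smaller distances => less fuzziness => more
# knowledge. f is any of the conversion functions f1-f5 from the tables.
def knowledge(f, d1, d2):
    return 1 - f(d1, d2)

# Example with f1(x, y) = (x + y) / 2, i.e. the K_*1 column:
k1 = knowledge(lambda x, y: (x + y) / 2, 0.2, 0.4)
print(k1)

# Note the K_*5 entries fit the same shape:
# 2 - 2^(d1*d2) equals 1 - (2^(d1*d2) - 1) = 1 - f5(d1, d2).
k5 = knowledge(lambda x, y: 2 ** (x * y) - 1, 0.2, 0.4)
print(k5)
```

Because each f_{i} maps [0,1]^2 into [0,1], every resulting K stays in [0,1], with K = 1 exactly when both distances vanish.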
C_1 | C_2 | C_3 | C_4 | |
\tilde{A}_1 | \langle[0.1, 0.2], [0.1, 0.2]\rangle | \langle[0.25, 0.5], [0.25, 0.5]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.5, 0.5], [0.5, 0.5]\rangle |
\tilde{A}_2 | \langle[0.5, 0.6], [0.2, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.5]\rangle | \langle[0.0, 0.0], [0.25, 0.75]\rangle | \langle[0.3, 0.4], [0.4, 0.6]\rangle |
\tilde{A}_3 | \langle[0.25, 0.5], [0.25, 0.5]\rangle | \langle[0.2, 0.4], [0.2, 0.4]\rangle | \langle[0.2, 0.3], [0.4, 0.7]\rangle | \langle[0.2, 0.3], [0.5, 0.6]\rangle |
\tilde{A}_4 | \langle[0.2, 0.3], [0.6, 0.7]\rangle | \langle[0.4, 0.7], [0.2, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.5]\rangle | \langle[0.5, 0.7], [0.1, 0.3]\rangle |
Methods | Weighting vector | Alternatives ranking |
E_{JPCZ}(E1) [29] | w={(0.2274, 0.2860, 0.2370, 0.2496)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
E_{LZX}(E2) [30] | w={(0.2274, 0.2860, 0.2370, 0.2496)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
E_{WWZ}(E3) [31] | w={(0.2274, 0.2860, 0.2370, 0.2496)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
E_{ZJJL}(E4) [32] | w={(0.2366, 0.3159, 0.1859, 0.2616)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
Das et al. [27] | w={(0.2506, 0.2211, 0.2472, 0.2811)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
Proposed method | w={(0.2527, 0.2312, 0.2554, 0.2608)^{T}} | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\phi(Z_{1}) | \phi(Z_{2}) | \phi(Z_{3}) | \phi(Z_{4}) | Alternatives order | |
\lambda=0.1 | 0.1325 | 0.0991 | 0.0763 | 0.1579 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.2 | 0.2980 | 0.2230 | 0.1716 | 0.3552 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.3 | 0.5109 | 0.3823 | 0.2941 | 0.6089 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.4 | 0.7948 | 0.5947 | 0.4576 | 0.9472 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.5 | 1.1922 | 0.8920 | 0.6863 | 1.4207 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.6 | 1.7882 | 1.3380 | 1.0295 | 2.1311 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.7 | 2.7817 | 2.0813 | 1.6015 | 3.3150 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.8 | 4.7686 | 3.5680 | 2.7454 | 5.6829 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
\lambda=0.9 | 10.7295 | 8.0280 | 6.1771 | 12.7865 | \tilde{A}_4\succ \tilde{A}_1\succ \tilde{A}_2\succ \tilde{A}_3 |
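The ranking column in the table above follows mechanically from the scores: the alternatives are ordered by decreasing \phi(Z_{i}), and this order is the same for every \lambda. A quick check using the \lambda=0.5 row:

```python
# phi(Z_i) scores from the lambda = 0.5 row of the table
phi = {"A1": 1.1922, "A2": 0.8920, "A3": 0.6863, "A4": 1.4207}

# Rank alternatives by decreasing score
ranking = sorted(phi, key=phi.get, reverse=True)
print(" > ".join(ranking))  # A4 > A1 > A2 > A3
```

Since the ranking A_{4}\succ A_{1}\succ A_{2}\succ A_{3} is reproduced at every \lambda from 0.1 to 0.9, the decision result is insensitive to the choice of \lambda.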
T_{1} | T_{2} | T_{3} | |
\lambda=0.1 | 0.0254 | 0.0334 | 0.0229 |
\lambda=0.2 | 0.0571 | 0.0750 | 0.0514 |
\lambda=0.3 | 0.0980 | 0.1286 | 0.0881 |
\lambda=0.4 | 0.1524 | 0.2001 | 0.1371 |
\lambda=0.5 | 0.2286 | 0.3002 | 0.2057 |
\lambda=0.6 | 0.3428 | 0.4502 | 0.3085 |
\lambda=0.7 | 0.5333 | 0.7004 | 0.4799 |
\lambda=0.8 | 0.9143 | 1.2007 | 0.8226 |
\lambda=0.9 | 2.0571 | 2.7015 | 1.8509 |
c_1 | c_2 | c_3 | c_4 | |
A_1 | \langle[0.3, 0.4], [0.1, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.1, 0.3], [0.3, 0.5]\rangle |
A_2 | \langle[0.1, 0.3], [0.3, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.3, 0.4], [0.1, 0.2]\rangle |
A_3 | \langle[0.3, 0.4], [0.1, 0.2]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.1, 0.3], [0.3, 0.4]\rangle |
A_4 | \langle[0.1, 0.3], [0.3, 0.5]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.3, 0.4], [0.1, 0.3]\rangle |
c_1 | c_2 | c_3 | c_4 | |
A_1 | \langle[0.1, 0.3], [0.3, 0.5]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle |
A_2 | \langle[0.3, 0.4], [0.1, 0.2]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle |
A_3 | \langle[0.1, 0.3], [0.3, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle |
A_4 | \langle[0.3, 0.4], [0.1, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle |
c_1 | c_2 | c_3 | c_4 | |
A_1 | \langle[0.2, 0.5], [0.2, 0.3]\rangle | \langle[0.3, 0.5], [0.2, 0.3]\rangle | \langle[0.1, 0.3], [0.3, 0.5]\rangle | \langle[0.4, 0.5], [0.3, 0.5]\rangle |
A_2 | \langle[0.3, 0.4], [0.1, 0.4]\rangle | \langle[0.1, 0.4], [0.2, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.2]\rangle | \langle[0.2, 0.4], [0.1, 0.2]\rangle |
A_3 | \langle[0.2, 0.4], [0.1, 0.2]\rangle | \langle[0.3, 0.4], [0.1, 0.2]\rangle | \langle[0.1, 0.3], [0.3, 0.4]\rangle | \langle[0.3, 0.4], [0.1, 0.4]\rangle |
A_4 | \langle[0.4, 0.5], [0.3, 0.5]\rangle | \langle[0.1, 0.5], [0.2, 0.5]\rangle | \langle[0.3, 0.4], [0.1, 0.3]\rangle | \langle[0.2, 0.5], [0.2, 0.3]\rangle |
e_1 | e_2 | e_3 | |
z_1 | \langle[0.2577, 0.4316], [0.2067, 0.3869]\rangle | \langle[0.2357, 0.4573], [0.2352, 0.3877]\rangle | \langle[0.2591, 0.4563], [0.2447, 0.3869]\rangle |
z_2 | \langle[0.2306, 0.3770], [0.1307, 0.2833]\rangle | \langle[0.2754, 0.4000], [0.1000, 0.2824]\rangle | \langle[0.2278, 0.4000], [0.1196, 0.2833]\rangle |
z_3 | \langle[0.2286, 0.3765], [0.1314, 0.2824]\rangle | \langle[0.2046, 0.3770], [0.1307, 0.2833]\rangle | \langle[0.2302, 0.3765], [0.1314, 0.2824]\rangle |
z_4 | \langle[0.2607, 0.4321], [0.2063, 0.3877]\rangle | \langle[0.3289, 0.4773], [0.2067, 0.3869]\rangle | \langle[0.2562, 0.4768], [0.1858, 0.3877]\rangle |
\varphi_1 | \varphi_2 | \varphi_3 | \varphi_4 | |
e_1 | 1.1613 | 1.4679 | 1.4623 | 1.1662 |
e_2 | 1.0949 | 1.7662 | 1.4051 | 1.3583 |
e_3 | 1.1328 | 1.5581 | 1.4660 | 1.2781 |
x_{1} (Temperature) | x_{2} (Cough) | x_{3} (Headache) | x_{4} (Stomach pain) | |
\tilde{M}_{1} | \langle[0.8, 0.9], [0.0, 0.1]\rangle | \langle[0.7, 0.8], [0.1, 0.2]\rangle | \langle[0.5, 0.6], [0.2, 0.3]\rangle | \langle[0.6, 0.8], [0.1, 0.2]\rangle |
\tilde{M}_{2} | \langle[0.5, 0.6], [0.1, 0.3]\rangle | \langle[0.8, 0.9], [0.0, 0.1]\rangle | \langle[0.6, 0.8], [0.1, 0.2]\rangle | \langle[0.4, 0.6], [0.1, 0.2]\rangle |
\tilde{M}_{3} | \langle[0.7, 0.8], [0.1, 0.2]\rangle | \langle[0.7, 0.9], [0.0, 0.1]\rangle | \langle[0.4, 0.6], [0.2, 0.4]\rangle | \langle[0.3, 0.5], [0.2, 0.4]\rangle |
\tilde{M}_{4} | \langle[0.8, 0.9], [0.0, 0.1]\rangle | \langle[0.7, 0.8], [0.1, 0.2]\rangle | \langle[0.7, 0.9], [0.0, 0.1]\rangle | \langle[0.8, 0.9], [0.0, 0.1]\rangle |
S(\tilde{M}_{1}, B) | S(\tilde{M}_{2}, B) | S(\tilde{M}_{3}, B) | S(\tilde{M}_{4}, B) | Recognition Result | |
S_{1} [11] | 0.73 | 0.80 | 0.78 | 0.73 | \tilde{M}_{2} |
S_{D} [6] | 0.82 | 0.91 | 0.86 | 0.84 | \tilde{M}_{2} |
S_{H1} | 0.32 | 0.36 | 0.33 | 0.28 | \tilde{M}_{2} |
S_{h1} | 0.38 | 0.45 | 0.43 | 0.34 | \tilde{M}_{2} |