Many engineers and scientists have turned their attention to modeling ambiguity or uncertainty in order to extract applicable information from uncertain data. Scholars have presented a number of notions concerning uncertainty, e.g., fuzzy sets [1], intuitionistic fuzzy sets [2], rough sets [3,4], real-valued interval mathematics [5], and so on. In 1982, Pawlak [3] pioneered rough set theory as a tool for analyzing the uncertainty in real-life problems. In rough set theory, objects are categorized by an equivalence relation based on the meaning of their properties (features). In [3], Pawlak replaced each ambiguous concept with two definite sets, namely its lower approximation and its upper approximation. This style of roughness of a universe set is built on the equivalence classes generated by the associated equivalence relation. However, many researchers have observed that classical rough set theory, being restricted to equivalence relations, cannot handle certain real-life problems; see, e.g., [6,7,8,9].
In 1999, Molodtsov [10] published the first article on soft sets as a new method for dealing with uncertainties. Soft sets provide a standardized framework for modeling ambiguity problems, such as decision-making problems [11,12], diagnostic problems in medicine [13,14,15], the evaluation of nutrition systems [16] and problems in information systems [17]. Soft topological notions were defined in [18,19]. The notion of an ideal model of this type can be seen in [20], and many research studies have been concerned with idealized versions of numerous rough set models. The advantage of employing an ideal is that it reduces the ambiguity of a concept in an uncertainty zone by shrinking the boundary region of roughness and enhancing the accuracy measure of roughness. As a result, using an ideal is an effective way to demystify a concept and define it precisely. Accordingly, many researchers have studied this theory by using ideals, such as those in [21,22,23,24].
Rough set theory and soft set theory are two different tools for dealing with uncertainty. Apparently there is no direct connection between the two theories; however, efforts have been made to establish some kind of linkage [25,26]. The major criticism of rough set theory is that it lacks parametrization tools [27]. A major step toward making parametrization tools available in rough sets was taken by Feng et al. [28]. They introduced the concept of soft rough sets, where parametrized subsets of a set, instead of equivalence classes, serve the purpose of finding the lower and upper approximations of a subset. In doing so, some unusual situations may occur: for example, the upper approximation of a non-empty set may be empty, and the upper approximation of a subset X may not contain X. These situations do not occur in classical rough set theory. Moreover, the soft rough set model must be adjusted to yield a sound decision for a real-life problem. Therefore, the authors of [29] modified the concept of soft rough sets to resolve these problems. In order to strengthen the concept of soft rough sets and resolve these problems more accurately than the method given in [29], new approaches that apply ideals are presented here.
In this paper, we present two novel approaches to soft rough sets that apply ideals, as defined in [30]. These new approaches extend the soft rough set approaches introduced in [28,29]. The main characteristics of the new approaches are investigated, and comparisons between our methods and the previous ones are made. We show that the soft rough approximations of [28] constitute a special case of the current approximations in the first method (Definition 3.2), and that the soft rough approximations of [29] constitute a special case of the current approximations in the second method (Definition 3.5). Moreover, we prove that our second method is the best one, since it produces smaller boundary regions and higher accuracy values than our first method and those introduced in [28,29]. Therefore, this method is more applicable to real-life problems and can be used to determine the vagueness of the data. Furthermore, new soft rough sets developed by using two ideals, called soft bi-ideal rough sets, are presented. These soft approximations are discussed from the perspective of two different methods; their properties and results are provided, the relationships between these approximations and the previous ones are discussed, and a comparison determines which of the two is the best. Finally, two medical applications are provided to demonstrate the significance of adopting ideals in the current techniques. In the proposed applications, we illustrate that our techniques reduce boundary regions and improve the accuracy measures of sets more than the approaches presented in [28,29], which means that the current techniques allow medical staff to classify patients successfully in terms of influenza infection (see [31,32]) (first application) and heart attacks (see [33]) (second application), helping doctors to make the best decision.
The aim of this section is to illustrate the basic concepts and properties of rough sets and soft sets which are needed in the sequel.
Definition 2.1. [3] Let X be a universal set of objects, R an equivalence relation on X and [x]R the equivalence class containing x. Then, for A⊆X, the lower approximation, the upper approximation, the boundary region and the accuracy measure of A are defined, respectively, as follows:
Aprox_(A)={x∈X:[x]R⊆A},
¯Aprox(A)={x∈X:[x]R∩A≠ϕ},
BND(A)=¯Aprox(A)−Aprox_(A),
ACC(A)=|Aprox_(A)|/|¯Aprox(A)|, A≠ϕ.
Clearly, 0≤ACC(A)≤1. A is a crisp set if Aprox_(A)=¯Aprox(A); otherwise, A is called a rough set.
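To make Definition 2.1 concrete, the four operators can be sketched in Python from an explicit list of equivalence classes. This sketch is ours, not part of the source; the partition P and the set A below are hypothetical data.

```python
from fractions import Fraction

def pawlak(partition, A):
    """Lower/upper approximations of A w.r.t. a partition of X
    (the blocks play the role of the equivalence classes [x]_R)."""
    A = set(A)
    lower = {x for block in partition for x in block if set(block) <= A}
    upper = {x for block in partition for x in block if set(block) & A}
    return lower, upper

def boundary(partition, A):
    lower, upper = pawlak(partition, A)
    return upper - lower

def accuracy(partition, A):
    lower, upper = pawlak(partition, A)
    return Fraction(len(lower), len(upper))  # 0 <= ACC(A) <= 1

# Hypothetical universe {1,...,5} partitioned into equivalence classes
P = [{1, 2}, {3}, {4, 5}]
lo, up = pawlak(P, {1, 2, 3, 4})
```

With this data the boundary of A={1,2,3,4} is {4,5} and ACC(A)=3/5, so A is rough in the sense above.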
Definition 2.2. [3] The membership relations of an element x∈X to a rough set A⊆X are defined by:
x ∈_ A iff x∈Aprox_(A), and x ¯∈ A iff x∈¯Aprox(A).
Definition 2.3. [3] For two rough subsets A,B⊆X, the inclusion relations are defined as follows:
A⊆_B iff Aprox_(A)⊆Aprox_(B), and A¯⊆B iff ¯Aprox(A)⊆¯Aprox(B). |
Proposition 2.1. [3] Let A,B be two subsets of X. The following is a list of the main characterizations of the approximation operators defined in Definition 2.1,
(L1) Aprox_(Ac)=[¯Aprox(A)]c;
(L2) Aprox_(X)=X;
(L3) Aprox_(ϕ)=ϕ;
(L4) Aprox_(A)⊆A;
(L5) A⊆B⇒Aprox_(A)⊆Aprox_(B) (A⊆_B);
(L6) Aprox_[Aprox_(A)]=Aprox_(A);
(L7) Aprox_(A∩B)=Aprox_(A)∩Aprox_(B);
(L8) Aprox_(A)∪Aprox_(B)⊆Aprox_(A∪B);
(L9) Aprox_[¯Aprox(A)]=¯Aprox(A);
(U1) ¯Aprox(Ac)=[Aprox_(A)]c;
(U2) ¯Aprox(X)=X;
(U3) ¯Aprox(ϕ)=ϕ;
(U4) A⊆¯Aprox(A);
(U5) A⊆B⇒¯Aprox(A)⊆¯Aprox(B) (A¯⊆B);
(U6) ¯Aprox[¯Aprox(A)]=¯Aprox(A);
(U7) ¯Aprox(A∩B)⊆¯Aprox(A)∩¯Aprox(B);
(U8) ¯Aprox(A)∪¯Aprox(B)=¯Aprox(A∪B);
(U9) ¯Aprox[Aprox_(A)]=Aprox_(A).
Definition 2.4. [10] Let X be the universal set, E≠ϕ a family of parameters which represent attributes or decision variables and H⊆E. A pair (F,H) is said to be a soft set of X if F is a mapping from H to the power set of X, i.e., F:H⟶P(X). Therefore, a soft set is a set of parameterized elements from X. To each e∈H, F(e) is a subset of X, which is usually named the set of e-approximate elements of (F,H). Also, F(e) can be regarded as a mapping F(e):X⟶{0,1}, and then F(e)(x)=1 is equivalent to x∈F(e) for x∈X.
Definition 2.5. [28] Let S=(F,E) be a soft set of a universe X. Then, P=(X,S) is said to be a soft approximation space. For any subset A of X, the soft P-lower approximation, the soft P-upper approximation and the soft P-boundary region are defined respectively, as follows:
apr_(A)={x∈X:∃e∈E,[x∈F(e)⊆A]},
¯apr(A)={x∈X:∃e∈E,[x∈F(e),F(e)∩A≠ϕ]},
Bndapr(A)=¯apr(A)−apr_(A).
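The two operators of Definition 2.5 can be sketched in Python as follows; this is our illustration with a hypothetical soft set, not code from the paper. It also exhibits the anomaly mentioned in the introduction: when some object belongs to no F(e), the soft upper approximation of a non-empty set can be empty.

```python
def apr_lower(F, A):
    """apr_(A) = {x : there is e with x in F(e) and F(e) ⊆ A}."""
    A = set(A)
    return {x for e in F for x in F[e] if set(F[e]) <= A}

def apr_upper(F, A):
    """apr¯(A) = {x : there is e with x in F(e) and F(e) ∩ A ≠ ϕ}."""
    A = set(A)
    return {x for e in F for x in F[e] if set(F[e]) & A}

# Hypothetical soft set F: E -> P(X), in the same shape as the paper's tables
F = {'e1': {'x1', 'x6'}, 'e2': set(), 'e3': {'x3'},
     'e4': {'x1', 'x2', 'x3'}, 'e5': {'x1', 'x2', 'x5'}}

# 'x4' belongs to no F(e), so even the upper approximation of {'x4'} is empty
empty_upper = apr_upper(F, {'x4'})
```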
Theorem 2.1. [28] Let P=(X,S) be a soft approximation space. Then, the approximations defined in Definition 2.5 satisfy the conditions of the properties L3,L5,L6,L8,L9,U3,U5,U7,andU8 in Proposition 2.1.
Definition 2.6. [28] Let S=(F,E) be a soft set of X. If ⋃e∈EF(e)=X, then S is called a full soft set.
Definition 2.7. [28] Let S=(F,E) be a soft set over X. If for any e1,e2∈E, there is e3∈E such that F(e3)=F(e1)∩F(e2) whenever F(e1)∩F(e2)≠ϕ, then S is called an intersecting complete soft set.
Definition 2.8. [29] Let P=(X,S) be a soft approximation space where S=(F,E) is a soft set of X, and A⊆X. Based on P, the soft lower approximation, the soft upper approximation and the soft boundary region are defined respectively, as follows:
SR_(A)=∪{F(e),e∈E:F(e)⊆A},
¯SR(A)=[SR_(Ac)]c, where Ac is the complement of A,
BndSR(A)=¯SR(A)−SR_(A).
Definition 2.9. [29] Let P=(X,S) be a soft approximation space. Based on P, one can deduce that the degree of crispness of any A⊆X is given by the rough measures Accapr(A) and AccSR(A) that are respectively given by
Accapr(A)=|apr_(A)|/|¯apr(A)|, A≠ϕ, and AccSR(A)=|SR_(A)|/|¯SR(A)|, A≠ϕ.
Obviously, 0≤Accapr(A)≤1 and 0≤AccSR(A)≤1.
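The operators of Definition 2.8 differ from those of Definition 2.5 in that the upper approximation is defined by duality, so the universe X must be supplied explicitly. A Python sketch (ours, with hypothetical data):

```python
from fractions import Fraction

def SR_lower(F, A):
    """SR_(A) = union of those F(e) contained in A."""
    A = set(A)
    return {x for e in F for x in F[e] if set(F[e]) <= A}

def SR_upper(X, F, A):
    """SR¯(A) = (SR_(A^c))^c, the dual of the lower operator."""
    return set(X) - SR_lower(F, set(X) - set(A))

def Acc_SR(X, F, A):
    return Fraction(len(SR_lower(F, A)), len(SR_upper(X, F, A)))

# Hypothetical data for illustration
X = {'x1', 'x2', 'x3', 'x4', 'x5', 'x6'}
F = {'e1': {'x1', 'x6'}, 'e3': {'x3'},
     'e4': {'x1', 'x2', 'x3'}, 'e5': {'x1', 'x2', 'x5'}}
```

For A={'x1','x2','x3'} no F(e) is contained in A^c, so SR¯(A)=X and AccSR(A)=3/6=1/2.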
Theorem 2.2. [29] Let P=(X,S) be a soft approximation space. Then, the approximations defined in Definition 2.8 satisfy the conditions of the properties L1, L3–L6, L8 and U1,U2, U4–U7 in Proposition 2.1.
Definition 2.10. [29] Let S=(F,E) be a full soft set of X, and let P=(X,S) be a soft approximation space. Then, A⊆X is referred to as follows:
(1) A totally SR-definable (SR-exact) set if SR_(A)=¯SR(A)=A.
(2) An internally SR-definable set if SR_(A)=A and ¯SR(A)≠A.
(3) An externally SR-definable set if SR_(A)≠A and ¯SR(A)=A.
(4) A totally SR-rough set if SR_(A)≠A≠ ¯SR(A).
Definition 2.11. [29] Let P=(X,S) be a soft approximation space where S=(F,E) is a soft set of X. Let A⊆X and x∈X. The SR-membership relations, denoted by ∈_SR,¯∈SR are given by
x ∈_SR A iff x∈SR_(A), and x ¯∈SR A iff x∈¯SR(A).
Definition 2.12. [30] Let X be a universal set. Then, a non-empty family L of subsets of X is said to be an ideal on X if it fulfills the following conditions.
(1) If A∈L and B⊆A, then B∈L.
(2) If A,B∈L, then A∪B∈L.
Definition 2.13. [22] Let L1,L2 be two ideals on a non-empty set X. Then, the family of subsets from L1,L2 is denoted as <L1,L2> and defined by
<L1,L2>={G1∪G2:G1∈L1,G2∈L2}. |
Proposition 2.2. [22] Let L1,L2 be two ideals on a non-empty set X and A,B⊆X. Then, the collection <L1,L2> has the following proprieties:
(1) <L1,L2>≠ϕ;
(2) A∈ <L1,L2>, B⊆A ⇒ B∈ <L1,L2>;
(3) A,B ∈ <L1,L2> ⇒ A∪B∈ <L1,L2>.
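For finite families, the ideal axioms of Definition 2.12 and the family <L1,L2> of Definition 2.13 can be checked by brute force. The sketch below is our own illustration; the sample families are hypothetical, except that the resulting join equals the ideal used later in Example 3.1.

```python
from itertools import combinations

def is_ideal(L):
    """Check Definition 2.12 on a finite family of sets:
    downward closed under inclusion, and closed under unions."""
    L = {frozenset(s) for s in L}
    downward = all(frozenset(t) in L
                   for s in L for r in range(len(s) + 1)
                   for t in combinations(sorted(s), r))
    union_closed = all(a | b in L for a in L for b in L)
    return downward and union_closed

def join(L1, L2):
    """<L1, L2> = {G1 ∪ G2 : G1 in L1, G2 in L2} (Definition 2.13)."""
    return {frozenset(a) | frozenset(b) for a in L1 for b in L2}

L1 = [set(), {'x1'}]
L2 = [set(), {'x6'}]
L = join(L1, L2)   # {ϕ, {x1}, {x6}, {x1, x6}}
```

As Proposition 2.2 suggests, the join of these two ideals is again an ideal; the checker also rejects a family missing ϕ, which violates downward closure.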
In this section, we will generalize the soft rough set theory by using the ideal notion. Also, we will present some properties of soft ideal rough approximation operators and introduce two new soft rough set models based on the ideals, which constitutes an improvement of the models by Feng et al. [28] and Alkhazaleh and Marei [29].
Definition 3.1. Let S=(F,E) be a soft set of a universal set X and L an ideal on X. Then, P=(X,S,L) is said to be a soft ideal approximation space. For any subset A of X, the soft P-lower approximation and the soft P-upper approximation, i.e., (aprL)∗(A) and (aprL)∗(A), are defined respectively, as follows:
(aprL)∗(A)={x∈X:∃e∈E,x∈F(e),F(e)∩Ac∈L},
(aprL)∗(A)={x∈X:∃e∈E,x∈F(e),F(e)∩A∉L}.
Proposition 3.1. Let P=(X,S,L) be a soft ideal approximation space and A,B⊆X. Then, the following properties hold.
(1) (aprL)∗(ϕ)=∪{F(e):e∈E,F(e)∈L} and (aprL)∗(ϕ)=ϕ;
(2) (aprL)∗(X)=∪{F(e),e∈E} and (aprL)∗(X)=∪{F(e):e∈E,F(e)∉L};
(3) A⊆B⇒(aprL)∗(A)⊆(aprL)∗(B);
(4) A⊆B⇒(aprL)∗(A)⊆(aprL)∗(B);
(5) (aprL)∗[(aprL)∗(A)]=(aprL)∗(A);
(6) (aprL)∗(A)⊆(aprL)∗[(aprL)∗(A)];
(7) (aprL)∗(A)⊆(aprL)∗[(aprL)∗(A)];
(8) (aprL)∗(A∩B)⊆(aprL)∗(A)∩(aprL)∗(B);
(9) (aprL)∗(A)∪(aprL)∗(B)⊆(aprL)∗(A∪B);
(10) (aprL)∗(A∩B)⊆(aprL)∗(A)∩(aprL)∗(B);
(11) (aprL)∗(A)∪(aprL)∗(B)=(aprL)∗(A∪B).
Proof. (1), (2) Straightforward from Definition 3.1.
(3) Let A⊆B and x∈(aprL)∗(A)={x∈X:∃e∈E,x∈F(e),F(e)∩Ac∈L}. Then, x∈F(e),F(e)∩Ac∈L for some e∈E. But, A⊆B and L is an ideal, so x∈F(e),F(e)∩Bc∈L. Thus, x∈(aprL)∗(B). Hence, (aprL)∗(A)⊆(aprL)∗(B).
(4) Let A⊆B and x∈(aprL)∗(A)={x∈X:∃e∈E,x∈F(e),F(e)∩A∉L}. Then, x∈F(e) and F(e)∩A∉L for some e∈E. But A⊆B, so F(e)∩A⊆F(e)∩B; since L is downward closed and F(e)∩A∉L, it follows that F(e)∩B∉L. Thus, x∈(aprL)∗(B). Therefore, (aprL)∗(A)⊆(aprL)∗(B).
(5) Let x∈(aprL)∗(A)={x∈X:∃e∈E,x∈F(e),F(e)∩Ac∈L}. Then, for some e∈E with x∈F(e), we have that F(e)∩Ac∈L. So, F(e)⊆(aprL)∗(A), which means that F(e)∩[(aprL)∗(A)]c=ϕ∈L. Hence, x∈(aprL)∗[(aprL)∗(A)].
Conversely, let x∉(aprL)∗(A). Then, for all e∈E:x∈F(e), we have that F(e)∩Ac∉L. Thus, ∃y∈F(e),y∈Ac,{y}∉L. So, y∉(aprL)∗(A) and {y}∉L, which means that {y}∩[(aprL)∗(A)]c∉L and, for all e∈E:y∈F(e), we have that F(e)∩[(aprL)∗(A)]c∉L. Hence, x∉(aprL)∗[(aprL)∗(A)].
(6) Let x∈(aprL)∗(A)={x∈X:∃e∈E,x∈F(e),F(e)∩A∉L}. Therefore, for some e∈E:x∈F(e) we have that F(e)∩A∉L. So, F(e)⊆(aprL)∗(A) which means that F(e)∩(aprL)∗(A)=F(e)∉L. Thus, x∈(aprL)∗[(aprL)∗(A)]. Hence, (aprL)∗(A)⊆(aprL)∗[(aprL)∗(A)].
(7) Let x∉(aprL)∗[(aprL)∗(A)]={x∈X:∃e∈E,x∈F(e),F(e)∩[(aprL)∗(A)]c∈L}. Then, for all e∈E:x∈F(e) we have that F(e)∩[(aprL)∗(A)]c∉L. Thus, ∃y∈F(e),y∉(aprL)∗(A). So, for some e∈E:y∈F(e), we have that F(e)∩A∈L thus, x∈F(e) and F(e)∩A∈L. Therefore, x∉(aprL)∗(A). Hence, (aprL)∗(A)⊆(aprL)∗[(aprL)∗(A)].
(8) Let x∈(aprL)∗(A∩B)={x∈X:∃e∈E,x∈F(e),F(e)∩(Ac∪Bc)∈L}. Thus, ∃F(e) such that x∈F(e) and F(e)∩(Ac∪Bc)∈L. Therefore, x∈F(e),F(e)∩Ac∈L and x∈F(e),F(e)∩Bc∈L. Consequently, x∈(aprL)∗(A) and x∈(aprL)∗(B). Hence, (aprL)∗(A∩B)⊆(aprL)∗(A)∩(aprL)∗(B).
(9) Let, x∉(aprL)∗(A∪B). Thus, ∀e∈E:x∈F(e), we have that F(e)∩(Ac∩Bc)∉L. Therefore, ∀e∈E:x∈F(e), we have that F(e)∩Ac∉L and F(e)∩Bc∉L. Consequently, x∉(aprL)∗(A) and x∉(aprL)∗(B). Hence, (aprL)∗(A)∪(aprL)∗(B)⊆(aprL)∗(A∪B).
(10) Similar to part (7).
(11)
(aprL)∗(A∪B)={x∈X:∃e∈E,x∈F(e),F(e)∩(A∪B)∉L}
={x∈X:∃e∈E,x∈F(e),(F(e)∩A)∉L or (F(e)∩B)∉L}
={x∈X:∃e∈E,x∈F(e),(F(e)∩A)∉L} ∪ {x∈X:∃e∈E,x∈F(e),(F(e)∩B)∉L}
=(aprL)∗(A)∪(aprL)∗(B).
Remark 3.1. Let P=(X,S,L) be a soft ideal approximation space and A,B⊆X. Then, the following example shows that
(1) (aprL)∗(X)≠X,(aprL)∗(X)≠X and (aprL)∗(ϕ)≠ϕ,
(2) (aprL)∗(A)⊈A, A⊈(aprL)∗(A) and (aprL)∗(A)⊈A, A⊈(aprL)∗(A),
(3) (aprL)∗(A)⊆(aprL)∗(B)⇏A⊆B and (aprL)∗(A)⊆(aprL)∗(B)⇏A⊆B,
(4) (aprL)∗(A)≠(aprL)∗[(aprL)∗(A)] and (aprL)∗(A)≠(aprL)∗[(aprL)∗(A)],
(5) (aprL)∗(A)⊈[(aprL)∗(Ac)]c and (aprL)∗(A)⊉[(aprL)∗(Ac)]c,
(6) (aprL)∗(A)⊈(aprL)∗[(aprL)∗(A)] and (aprL)∗(A)⊉(aprL)∗[(aprL)∗(A)],
(7) (aprL)∗(A∩B)≠(aprL)∗(A)∩(aprL)∗(B) and (aprL)∗(A∩B)≠(aprL)∗(A)∩(aprL)∗(B).
Example 3.1. Let X={x1,x2,x3,x4,x5,x6},E={e1,e2,e3,e4,e5,e6,e7,e8}, F:E⟶P(X), S=(F,E) be a soft set over X, as given in Table 1, and L={ϕ,{x1},{x6},{x1,x6}}. From Table 1, we can deduce that
F(e1)={x1,x6}, F(e2)=ϕ, F(e3)={x3}, F(e4)={x1,x2,x3},
F(e5)={x1,x2,x5}, F(e6)={x1,x2,x6}, F(e7)={x2,x3,x5}, F(e8)={x2,x3,x6}.
Hence, we obtain the following results:
(1) From Proposition 3.1 part (2), we have that (aprL)∗(X)=(aprL)∗(X)={x1,x2,x3,x5,x6}≠X. Also, (aprL)∗(ϕ)={x∈X:∃e∈E,x∈F(e),F(e)∩X∈L}={x1,x6}≠ϕ.
(2) Let A={x3,x4}. Then, (aprL)∗(A)={x1,x6}. Hence, (aprL)∗(A)⊈A and A⊈(aprL)∗(A).
(3) Let A={x1,x2,x4}; then, (aprL)∗(A)={x∈X:∃e∈E,x∈F(e),F(e)∩{x1,x2,x4}∉L}={x1,x2,x3,x5,x6}. Hence, (aprL)∗(A)⊈A and A⊈(aprL)∗(A).
(4) Let A={x1,x2,x5} and B={x1,x2,x4}. Then, (aprL)∗(A)={x1,x2,x5,x6} and (aprL)∗(B)={x1,x2,x6}. So, (aprL)∗(B)⊆(aprL)∗(A) but B⊈A. Also, if A={x4,x5,x6} and B={x3,x5,x6}, then (aprL)∗(A)={x1,x2,x3,x5} and (aprL)∗(B)={x1,x2,x3,x5,x6}. So, (aprL)∗(A)⊆(aprL)∗(B) but A⊈B.
(5) If A={x4,x5}, then (aprL)∗(A)={x1,x2,x3,x5}. So, (aprL)∗[(aprL)∗(A)]={x1,x2,x3,x5,x6}. Hence, (aprL)∗(A)≠(aprL)∗[(aprL)∗(A)]. Also, (aprL)∗[(aprL)∗(A)]={x1,x2,x3,x5,x6}. Hence, (aprL)∗(A)≠(aprL)∗[(aprL)∗(A)].
(6) Let A={x1,x2,x3,x5,x6}. Then, (aprL)∗(A)={x1,x2,x3,x5,x6} but [(aprL)∗(Ac)]c={x2,x3,x4,x5}. Therefore, (aprL)∗(A)⊈[(aprL)∗(Ac)]c and (aprL)∗(A)⊉[(aprL)∗(Ac)]c.
(7) Let A={x1,x6}. Then, (aprL)∗(A)={x1,x6}. So, (aprL)∗[(aprL)∗(A)]=ϕ. Hence, (aprL)∗(A)⊈(aprL)∗[(aprL)∗(A)]. Also, if A={x2,x3,x4} then (aprL)∗(A)={x1,x2,x3,x6}. So, (aprL)∗[(aprL)∗(A)]={x1,x2,x3,x5,x6}. Therefore, (aprL)∗(A)⊉(aprL)∗[(aprL)∗(A)].
(8) Consider that L={ϕ,{x1},{x5},{x1,x5}}. If A={x1,x2,x3,x5} and B={x1,x6}, then (aprL)∗(A)∩(aprL)∗(B)={x1,x2,x3,x5}∩{x1,x6}={x1} but (aprL)∗(A∩B)=(aprL)∗({x1})=ϕ. Also, if A={x2,x3} and B={x4,x5,x6}, then (aprL)∗(A)∩(aprL)∗(B)={x1,x2,x3,x5,x6}∩{x1,x2,x3,x6}={x1,x2,x3,x6} but (aprL)∗(A∩B)=(aprL)∗(ϕ)=ϕ.
Object | e1 | e2 | e3 | e4 | e5 | e6 | e7 | e8 |
x1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
x2 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
x3 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
x4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
x5 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 |
x6 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
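The computations in Example 3.1 can be checked mechanically. The sketch below (our own, not the paper's) encodes the soft set of Table 1 and the ideal L, and recomputes the values stated in parts (1) and (3) of the example.

```python
# Soft set of Example 3.1 (Table 1) and its ideal
F = {'e1': {'x1', 'x6'}, 'e2': set(), 'e3': {'x3'},
     'e4': {'x1', 'x2', 'x3'}, 'e5': {'x1', 'x2', 'x5'},
     'e6': {'x1', 'x2', 'x6'}, 'e7': {'x2', 'x3', 'x5'},
     'e8': {'x2', 'x3', 'x6'}}
X = {'x1', 'x2', 'x3', 'x4', 'x5', 'x6'}
L = {frozenset(), frozenset({'x1'}), frozenset({'x6'}),
     frozenset({'x1', 'x6'})}

def aprL_lower(F, X, L, A):
    """Definition 3.1, lower: x with some e, x in F(e), F(e) ∩ A^c in L."""
    Ac = set(X) - set(A)
    return {x for e in F for x in F[e] if frozenset(set(F[e]) & Ac) in L}

def aprL_upper(F, L, A):
    """Definition 3.1, upper: x with some e, x in F(e), F(e) ∩ A not in L."""
    A = set(A)
    return {x for e in F for x in F[e] if frozenset(set(F[e]) & A) not in L}
```

Running the operators confirms part (1): both approximations of X equal {x1,x2,x3,x5,x6}, and the lower approximation of ϕ is {x1,x6} while its upper approximation is ϕ.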
Definition 3.2. Let S=(F,E) be a soft set of the universal set X and L an ideal on X. Then, P=(X,S,L) is said to be a soft ideal approximation space. For any subset A of X, the soft P-lower approximation and the soft P-upper approximation, i.e., aprL_(A) and ¯aprL(A), are defined respectively, as follows:
aprL_(A)=A∩(aprL)∗(A),
¯aprL(A)=A∪(aprL)∗(A).
Proposition 3.2. Let P=(X,S,L) be a soft ideal approximation space and A,B⊆X. Then, the following properties hold.
(1) aprL_(ϕ)=¯aprL(ϕ)=ϕ and ¯aprL(X)=X;
(2) aprL_(A)⊆A⊆¯aprL(A);
(3) A⊆B⇒aprL_(A)⊆aprL_(B) and ¯aprL(A)⊆¯aprL(B);
(4) aprL_[aprL_(A)]=aprL_(A) and ¯aprL[¯aprL(A)]⊇¯aprL(A);
(5) aprL_[¯aprL(A)]=¯aprL(A) and aprL_(A)⊆¯aprL[aprL_(A)];
(6) aprL_(A∩B)⊆aprL_(A)∩aprL_(B) and aprL_(A)∪aprL_(B)⊆aprL_(A∪B);
(7) ¯aprL(A∩B)⊆¯aprL(A)∩¯aprL(B) and ¯aprL(A)∪¯aprL(B)=¯aprL(A∪B).
Proof. Straightforward.
Remark 3.2. One can define different ideals to obtain an example analogous to Example 3.1, showing that the inclusions in Proposition 3.2, parts (2) and (4)–(7), cannot be replaced by equalities.
Definition 3.3. Let P=(X,S,L) be a soft ideal approximation space and S=(F,E) a soft set of X; also, let A⊆X. Then, the soft aprL-boundary region BndaprL(A) and the soft aprL-accuracy measure AccaprL(A) are defined respectively, as follows:
BndaprL(A)=¯aprL(A)−aprL_(A), AccaprL(A)=|aprL_(A)|/|¯aprL(A)|, where A≠ϕ.
Definition 3.4. Let P=(X,S,L) be a soft ideal approximation space in which S=(F,E) is a soft set over a universal set X; also, let there be an ideal L on X and A⊆X. For any subset A of X, the lower approximation and the upper approximation, (SRL)∗(A) and (SRL)∗(A), are defined respectively, as follows:
(SRL)∗(A)=∪{F(e),e∈E:F(e)∩Ac∈L},
(SRL)∗(A)=[(SRL)∗(Ac)]c, where Ac is the complement of A.
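A Python sketch of the pair in Definition 3.4 (ours, not the paper's), using the soft set of Table 2 and the ideal from Example 3.2; it reproduces the values stated in parts (1) and (2) of that example.

```python
# Soft set of Example 3.2 (Table 2) and its ideal
F = {'e1': {'x1', 'x6'}, 'e2': set(), 'e3': {'x3'},
     'e4': {'x1', 'x2', 'x3'}, 'e5': {'x1', 'x2', 'x5'}}
X = {'x1', 'x2', 'x3', 'x4', 'x5', 'x6'}
L = {frozenset(), frozenset({'x1'}), frozenset({'x6'}),
     frozenset({'x1', 'x6'})}

def SRL_lower(F, X, L, A):
    """Definition 3.4, lower: union of those F(e) with F(e) ∩ A^c in L."""
    Ac = set(X) - set(A)
    return {x for e in F for x in F[e] if frozenset(set(F[e]) & Ac) in L}

def SRL_upper(F, X, L, A):
    """Definition 3.4, upper: ((SR_L)_*(A^c))^c, the dual of the lower."""
    return set(X) - SRL_lower(F, X, L, set(X) - set(A))
```

Note that here the lower approximation of X misses x4 and the upper approximation of ϕ is {x4}, exactly as computed in Example 3.2.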
Proposition 3.3. Let P=(X,S,L) be a soft ideal approximation space and A,B⊆X. Then, the following properties hold.
(1) (SRL)∗(Ac)=[(SRL)∗(A)]c;
(2) A⊆B⇒(SRL)∗(A)⊆(SRL)∗(B);
(3) A⊆B⇒(SRL)∗(A)⊆(SRL)∗(B);
(4) (SRL)∗[(SRL)∗(A)]=(SRL)∗(A);
(5) (SRL)∗[(SRL)∗(A)]=(SRL)∗(A);
(6) (SRL)∗(A∩B)⊆(SRL)∗(A)∩(SRL)∗(B);
(7) (SRL)∗(A)∪(SRL)∗(B)⊆(SRL)∗(A∪B);
(8) (SRL)∗(A∩B)⊆(SRL)∗(A)∩(SRL)∗(B);
(9) (SRL)∗(A)∪(SRL)∗(B)⊆(SRL)∗(A∪B).
Proof. (1) [(SRL)∗(A)]c=[((SRL)∗(Ac))c]c=(SRL)∗(Ac).
(2) Let A⊆B and x∈(SRL)∗(A)=∪{F(e),e∈E:F(e)∩Ac∈L}. Then, ∃F(e) such that x∈F(e), where F(e)∩Ac∈L. But, A⊆B; thus, x∈F(e),F(e)∩Bc∈L. Hence, x∈(SRL)∗(B) and (SRL)∗(A)⊆(SRL)∗(B).
(3) Let A⊆B. Thus, Bc⊆Ac and then (SRL)∗(Bc)⊆(SRL)∗(Ac). Hence, [(SRL)∗(Ac)]c⊆[(SRL)∗(Bc)]c. Consequently, (SRL)∗(A)⊆(SRL)∗(B).
(4) Let x∈(SRL)∗(A)=∪{F(e),e∈E:F(e)∩Ac∈L}. Then, for some e∈E with x∈F(e), we have that F(e)∩Ac∈L. So, F(e)⊆(SRL)∗(A), which means that F(e)∩[(SRL)∗(A)]c=ϕ∈L. Hence, x∈(SRL)∗[(SRL)∗(A)].
Conversely, let x∉(SRL)∗(A). Then, for all e∈E:x∈F(e), we have that F(e)∩Ac∉L. Thus, ∃y∈F(e),y∈Ac,{y}∉L. So, y∉(SRL)∗(A),{y}∉L, which means that {y}∩[(SRL)∗(A)]c∉L and, for all e∈E:y∈F(e), we have that F(e)∩[(SRL)∗(A)]c∉L. Hence, x∉(SRL)∗[(SRL)∗(A)].
(5) It follows directly by putting A=Ac in part (4) and using part (1).
(6) Let x∈(SRL)∗(A∩B)=∪{F(e),e∈E:F(e)∩(Ac∪Bc)∈L}. Then, ∃F(e) such that x∈F(e), where F(e)∩(Ac∪Bc)∈L. Therefore, x∈F(e),F(e)∩Ac∈L and x∈F(e),F(e)∩Bc∈L. Consequently, x∈(SRL)∗(A) and x∈(SRL)∗(B). Hence, (SRL)∗(A∩B)⊆(SRL)∗(A)∩(SRL)∗(B).
(7) Let x∉(SRL)∗(A∪B)=∪{F(e),e∈E:F(e)∩(Ac∩Bc)∈L}. Then, ∀e∈E:x∈F(e), we have that F(e)∩(Ac∩Bc)∉L. Therefore, ∀e∈E:x∈F(e), we have that F(e)∩Ac∉L and F(e)∩Bc∉L. Consequently, x∉(SRL)∗(A) and x∉(SRL)∗(B). Hence, (SRL)∗(A)∪(SRL)∗(B)⊆(SRL)∗(A∪B).
(8) It follows directly by putting A=Ac,B=Bc in part (6) and using part (1).
(9) It follows directly by putting A=Ac,B=Bc in part (7) and using part (1).
Remark 3.3. Let P=(X,S,L) be a soft ideal approximation space and A,B⊆X. Then, the next examples show that
(1) (SRL)∗(X)≠X and (SRL)∗(ϕ)≠ϕ;
(2) (SRL)∗(ϕ)≠ϕ and (SRL)∗(X)≠X;
(3) (SRL)∗(A)⊈A,A⊈(SRL)∗(A) and (SRL)∗(A)⊈A, A⊈(SRL)∗(A);
(4) (SRL)∗(A)⊆(SRL)∗(B)⇏A⊆B and (SRL)∗(A)⊆(SRL)∗(B)⇏A⊆B;
(5) (SRL)∗(A)⊈(SRL)∗[(SRL)∗(A)] and (SRL)∗(A)⊉(SRL)∗[(SRL)∗(A)];
(6) (SRL)∗(A)⊈(SRL)∗[(SRL)∗(A)] and (SRL)∗(A)⊉(SRL)∗[(SRL)∗(A)];
(7) (SRL)∗(A∩B)≠(SRL)∗(A)∩(SRL)∗(B) and (SRL)∗(A∪B)≠(SRL)∗(A)∪(SRL)∗(B).
Example 3.2. Let X={x1,x2,x3,x4,x5,x6},E={e1,e2,e3,e4,e5}, F:E⟶P(X), S=(F,E) be a soft set of X, as given in Table 2, and L={ϕ,{x1},{x6},{x1,x6}}. From Table 2, we can deduce that
F(e1)={x1,x6}, F(e2)=ϕ, F(e3)={x3}, F(e4)={x1,x2,x3}, F(e5)={x1,x2,x5}.
Hence, we obtain the following results:
(1) (SRL)∗(X)=∪{F(e),e∈E:F(e)∩Xc∈L}={x1,x2,x3,x5,x6}≠X. Then, (SRL)∗(ϕ)=[(SRL)∗(X)]c={x4}≠ϕ.
(2) (SRL)∗(ϕ)=∪{F(e),e∈E:F(e)∩ϕc∈L}={x1,x6}≠ϕ. Then, (SRL)∗(X)=[(SRL)∗(ϕ)]c={x2,x3,x4,x5}≠X.
(3) Let A={x4}. Then, (SRL)∗(A)={x1,x6}. Hence, (SRL)∗(A)⊈A and A⊈(SRL)∗(A).
(4) From part (3), if A={x1,x2,x3,x5,x6}, then (SRL)∗(A)={x2,x3,x4,x5}. Hence, (SRL)∗(A)⊈A and A⊈(SRL)∗(A).
(5) Let A={x1,x2,x3} and B={x1,x2,x4}. Then, (SRL)∗(A)={x1,x2,x3,x6} and (SRL)∗(B)={x1,x6}. So, (SRL)∗(B)⊆(SRL)∗(A), but B⊈A. Also, if A={x4,x5,x6} and B={x3,x5,x6}, then (SRL)∗(A)={x4,x5} and (SRL)∗(B)={x2,x3,x4,x5}. So, (SRL)∗(A)⊆(SRL)∗(B) but A⊈B.
(6) From part (4), if A={x1,x2,x3,x5,x6}, then (SRL)∗(A)={x2,x3,x4,x5}. So, (SRL)∗[(SRL)∗(A)]=X. Hence, (SRL)∗(A)⊉(SRL)∗[(SRL)∗(A)].
(7) From part (6), if A={x1,x2,x3,x5,x6}, then (SRL)∗(A)=A but (SRL)∗[(SRL)∗(A)]=(SRL)∗(A)={x2,x3,x4,x5}. Hence, (SRL)∗(A)⊈(SRL)∗[(SRL)∗(A)] and (SRL)∗(A)⊉(SRL)∗[(SRL)∗(A)].
(8) Consider that L={ϕ,{x4},{x6},{x4,x6}} and A={x1,x2,x5}. Then, (SRL)∗(A)={x1,x2,x4,x5,x6}. So, (SRL)∗[(SRL)∗(A)]={x1,x6}. Hence, (SRL)∗(A)⊈(SRL)∗[(SRL)∗(A)].
(9) Consider that L={ϕ,{x4},{x5},{x4,x5}}. If A={x1,x2,x3,x5} and B={x1,x6}, then (SRL)∗(A)∩(SRL)∗(B)={x1,x2,x3,x5}∩{x1,x6}={x1} but (SRL)∗(A∩B)=(SRL)∗({x1})=ϕ. Also, if A={x2,x3} and B={x4,x5,x6}, then (SRL)∗(A)∪(SRL)∗(B)={x2,x3,x4,x5}∪{x4,x6}={x2,x3,x4,x5,x6} but (SRL)∗(A∪B)=X.
Object | e1 | e2 | e3 | e4 | e5 |
x1 | 1 | 0 | 0 | 1 | 1 |
x2 | 0 | 0 | 0 | 1 | 1 |
x3 | 0 | 0 | 1 | 1 | 0 |
x4 | 0 | 0 | 0 | 0 | 0 |
x5 | 0 | 0 | 0 | 0 | 1 |
x6 | 1 | 0 | 0 | 0 | 0 |
Definition 3.5. Let P=(X,S,L) be a soft ideal approximation space in which S=(F,E) is a soft set of X; also, an ideal L is given and A⊆X. Based on P, the lower approximation and the upper approximation, SRL_(A) and ¯SRL(A), are defined respectively, as follows:
SRL_(A)=A∩(SRL)∗(A),
¯SRL(A)=A∪(SRL)∗(A).
Definition 3.6. Let P=(X,S,L) be a soft ideal approximation space and S=(F,E) a soft set of X and let A⊆X. Then, the soft SRL-boundary region BndSRL(A) and the soft SRL-accuracy measure AccSRL(A) are defined respectively by:
BndSRL(A)=¯SRL(A)−SRL_(A), AccSRL(A)=|SRL_(A)|/|¯SRL(A)|, where A≠ϕ.
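Definitions 3.5 and 3.6 intersect (resp. unite) the star operators with A itself, which forces SRL_(A)⊆A⊆¯SRL(A). A self-contained sketch (ours; the target set A is our own hypothetical choice, on the soft set of Table 2 with the ideal of Example 3.2):

```python
from fractions import Fraction

F = {'e1': {'x1', 'x6'}, 'e2': set(), 'e3': {'x3'},
     'e4': {'x1', 'x2', 'x3'}, 'e5': {'x1', 'x2', 'x5'}}   # Table 2
X = {'x1', 'x2', 'x3', 'x4', 'x5', 'x6'}
L = {frozenset(), frozenset({'x1'}), frozenset({'x6'}),
     frozenset({'x1', 'x6'})}

def star_lower(A):
    """(SR_L)_*(A) of Definition 3.4."""
    Ac = X - set(A)
    return {x for e in F for x in F[e] if frozenset(set(F[e]) & Ac) in L}

def SRL_under(A):
    """SR_L_(A) = A ∩ (SR_L)_*(A) (Definition 3.5)."""
    return set(A) & star_lower(A)

def SRL_over(A):
    """SR_L¯(A) = A ∪ (SR_L)^*(A), with (SR_L)^*(A) = ((SR_L)_*(A^c))^c."""
    return set(A) | (X - star_lower(X - set(A)))

def Bnd(A):
    return SRL_over(A) - SRL_under(A)

def Acc(A):
    return Fraction(len(SRL_under(A)), len(SRL_over(A)))
```

For A={'x1','x2','x3'} this yields SRL_(A)=A, ¯SRL(A)={x1,x2,x3,x4,x5}, boundary {x4,x5} and accuracy 3/5.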
Proposition 3.4. Let P=(X,S,L) be a soft ideal approximation space and A,B⊆X. Then, the following properties hold.
(1) SRL_(ϕ)=ϕ and ¯SRL(X)=X;
(2) SRL_(A)⊆A⊆¯SRL(A);
(3) SRL_(Ac)=[¯SRL(A)]c and ¯SRL(Ac)=[SRL_(A)]c;
(4) A⊆B⇒SRL_(A)⊆SRL_(B) and ¯SRL(A)⊆¯SRL(B);
(5) SRL_[SRL_(A)]=SRL_(A) and ¯SRL[¯SRL(A)]=¯SRL(A);
(6) SRL_[¯SRL(A)]⊆¯SRL(A) and SRL_(A)⊆¯SRL[SRL_(A)];
(7) SRL_(A∩B)⊆SRL_(A)∩SRL_(B) and SRL_(A)∪SRL_(B)⊆SRL_(A∪B);
(8) ¯SRL(A∩B)⊆¯SRL(A)∩¯SRL(B) and ¯SRL(A)∪¯SRL(B)⊆¯SRL(A∪B).
Proof. Straightforward.
Remark 3.4. One can define different ideals to obtain an example analogous to Example 3.2, showing that the inclusions in Proposition 3.4, parts (2), (4) and (6)–(8), cannot be replaced by equalities.
In what follows, Table 3 summarizes the differences between the proposed properties of the two styles.
Property | The first method | The second method
L1 | ◻ | ◼ |
L2 | ◻ | ◻ |
L3 | ◼ | ◼ |
L4 | ◼ | ◼ |
L5 | ◼ | ◼ |
L6 | ◼ | ◼ |
L7 | ◻ | ◻ |
L8 | ◼ | ◼ |
L9 | ◼ | ◻ |
U1 | ◻ | ◼ |
U2 | ◼ | ◼ |
U3 | ◼ | ◻ |
U4 | ◼ | ◼ |
U5 | ◼ | ◼ |
U6 | ◻ | ◼ |
U7 | ◼ | ◼ |
U8 | ◼ | ◻ |
U9 | ◻ | ◻ |
Corollary 3.1. Let P=(X,S,L) be a soft ideal approximation space and A⊆X. Then,
apr_(A)=SR_(A)⊆aprL_(A)=SRL_(A). |
Proof. Straightforward from Definitions 2.5, 2.8, 3.2 and 3.5.
Theorem 3.1. Let S=(F,E) be a full soft set of X, P=(X,S,L) be a soft ideal approximation space and A⊆X. Then,
(1) apr_(A)⊆aprL_(A)=SRL_(A)⊆A⊆¯SRL(A)⊆¯aprL(A)⊆¯apr(A);
(2) apr_(A)=SR_(A)⊆SRL_(A)⊆A⊆¯SRL(A)⊆¯SR(A)⊆¯apr(A);
(3) BndSRL(A)⊆BndaprL(A)⊆Bndapr(A) and BndSRL(A)⊆BndSR(A)⊆Bndapr(A);
(4) AccSRL(A)≥AccaprL(A)≥Accapr(A) and AccSRL(A)≥AccSR(A)≥Accapr(A).
Proof.
(1) By Corollary 3.1 and Proposition 3.4, we have that apr_(A)⊆aprL_(A)=SRL_(A)⊆A⊆¯SRL(A). To prove that ¯SRL(A)⊆¯aprL(A), let x∉¯aprL(A)=A∪{x∈X:∃e∈E,x∈F(e),F(e)∩A∉L}. Then, x∉A and, for all e∈E with x∈F(e), we have that F(e)∩A∈L. Since S is a full soft set, there is such an e, so x∉A and x∈(SRL)∗(Ac). That is, x∉A and x∉[(SRL)∗(Ac)]c. Hence, x∉¯SRL(A), and so ¯SRL(A)⊆¯aprL(A). To prove that ¯aprL(A)⊆¯apr(A), let x∉¯apr(A)={x∈X:∃e∈E,[x∈F(e),F(e)∩A≠ϕ]}. Then, for all e∈E with x∈F(e), we have that F(e)∩A=ϕ∈L, which means that F(e)⊆Ac. Thus, x∉A and x∉(aprL)∗(A). Therefore, x∉¯aprL(A). Hence, ¯aprL(A)⊆¯apr(A).
(2) The proof is similar to that of part (1).
(3), (4) It is immediately obtained by applying parts (1) and (2).
Remark 3.5. According to Theorem 3.1, the second style, described in Definition 3.5, is the best technique for enhancing the approximations and increasing the accuracy measure; that is, this style decreases the vagueness/uncertainty more than the styles given in Definition 3.2, Definition 2.5 in [28] and Definition 2.8 in [29]. Hence, a decision made according to the calculations of Definition 3.5 is more reliable, since this style decreases the upper approximations and increases the lower approximations, yielding a higher accuracy value than those given by the other styles discussed in [28,29]. Consider the following special cases:
(1) If L={ϕ} and S is a full soft set, then Definition 3.2 coincides with Definition 2.5 in [28].
(2) If L={ϕ}, then Definition 3.5 coincides with Definition 2.8 in [29].
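The accuracy ordering of Theorem 3.1 can be observed numerically. The sketch below is our own illustration on a small hypothetical full soft set and ideal (ties between the methods are possible, as they are here between the two ideal-based methods):

```python
from fractions import Fraction

# Hypothetical full soft set (our own data, not from the paper) and ideal
X = {'a', 'b', 'c', 'd'}
F = {'e1': {'a', 'b'}, 'e2': {'c'}, 'e3': {'c', 'd'}, 'e4': {'d'}}
L = {frozenset(), frozenset({'b'})}
A = {'a', 'c', 'd'}
Ac = X - A

def union_if(pred):
    """Union of the F(e) (as sets of members) satisfying pred."""
    return {x for e in F for x in F[e] if pred(set(F[e]))}

in_L = lambda s: frozenset(s) in L

# Method of [28] (Definition 2.5)
apr_lo = union_if(lambda Fe: Fe <= A)
apr_up = union_if(lambda Fe: bool(Fe & A))

# Method of [29] (Definition 2.8): upper defined by duality
SR_lo = union_if(lambda Fe: Fe <= A)
SR_up = X - union_if(lambda Fe: Fe <= Ac)

# First ideal method (Definition 3.2)
aprL_lo = A & union_if(lambda Fe: in_L(Fe & Ac))
aprL_up = A | union_if(lambda Fe: not in_L(Fe & A))

# Second ideal method (Definition 3.5)
SRL_lo = A & union_if(lambda Fe: in_L(Fe & Ac))
SRL_up = A | (X - union_if(lambda Fe: in_L(Fe & A)))

acc = lambda lo, up: Fraction(len(lo), len(up))
```

On this data the classical methods give accuracy 1/2 while both ideal-based methods give 3/4, matching the inequalities AccSRL ≥ AccaprL ≥ Accapr and AccSRL ≥ AccSR ≥ Accapr.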
In this section, we redefine some soft rough concepts as modified soft rough concepts by utilizing ideals. Comparisons between the present ideal soft rough concepts and the previous soft rough concepts in [28,29] are presented. Finally, some examples are used to explain the current definitions.
Definition 4.1. Let S=(F,E) be a full soft set of X and P=(X,S,L) be a soft ideal approximation space. Then, A⊆X is referred to as follows:
(1) A totally SRL (resp. aprL)-definable set if SRL_(A)=¯SRL(A)=A (resp. aprL_(A)=¯aprL(A)=A).
(2) An internally SRL (resp. aprL)-definable set if SRL_(A)=A, ¯SRL(A)≠A (resp. aprL_(A)=A, ¯aprL(A)≠A).
(3) An externally SRL (resp. aprL)-definable set if SRL_(A)≠A, ¯SRL(A)=A (resp. aprL_(A)≠A, ¯aprL(A)=A).
(4) A totally SRL (resp. aprL)-rough set if SRL_(A)≠A≠¯SRL(A) (resp. aprL_(A)≠A≠¯aprL(A)).
Definition 4.2. Let S=(F,E) be a full soft set of X and P=(X,S) a soft approximation space. Then, A⊆X is referred to as follows:
(1) A totally apr-definable set if apr_(A)=¯apr(A)=A.
(2) An internally apr-definable set if apr_(A)=A and ¯apr(A)≠A.
(3) An externally apr-definable set if apr_(A)≠A and ¯apr(A)=A.
(4) A totally apr -rough set if apr_(A)≠A≠¯apr(A).
Remark 4.1. From Definitions 2.10, 4.1 and 4.2 and by using Theorem 3.1, we have the following diagrams:
The following example shows that the implications in the previous diagrams are not reversible.
Example 4.1. Let X={x1,x2,x3,x4,x5,x6},E={e1,e2,e3,e4,e5,e6,e7,e8}, F:E⟶P(X), and S=(F,E) be a soft set of X as given in Table 3. From Table 4, we can deduce that
F(e1)={x1,x4,x6}, F(e2)={x6}, F(e3)={x3,x4}, F(e4)={x1,x2,x3},
F(e5)={x1,x2,x4,x5}, F(e6)={x1,x2,x4,x6}, F(e7)={x2,x3,x5}, F(e8)={x2,x3,x6}.
Hence, we obtain the following results:
(1) Consider that L={ϕ,{x2},{x4},{x2,x4}}. If A={x1,x3,x6}, then (SRL)^∗(A)=ϕ. So, ¯SRL(A)=A∪(SRL)^∗(A)=A∪ϕ=A. Also, (SRL)_∗(A)={x1,x3,x4,x6}. So, SRL_(A)=A∩(SRL)_∗(A)={x1,x3,x6}=A. Hence, A is a totally SRL-definable set. But ¯aprL(A)=¯SR(A)=X≠A. Therefore, A is neither a totally aprL-definable nor a totally SR-definable set.
(2) Consider that L={ϕ,{x1},{x2},{x3},{x1,x2},{x1,x3},{x2,x3},{x1,x2,x3}}. If A={x1,x2}, then aprL_(A)=¯aprL(A)=A. Thus, A is a totally aprL-definable set. But ¯apr(A)=X≠A. Therefore, A is not a totally apr-definable set. Also, if A={x1,x4,x6}, then SR_(A)=¯SR(A)=A. Therefore, A is a totally SR-definable set. But ¯apr(A)=X≠A. Hence, A is not a totally apr-definable set.
(3) From part (1), if A={x1,x3,x6}, then SR_(A)={x6}≠A and ¯SR(A)=X≠A. Thus, A is a totally SR-rough set. But SRL_(A)=¯SRL(A)=A. So, A is not a totally SRL-rough set.
(4) From part (2), if A={x1,x2}, then apr_(A)=ϕ≠A and ¯apr(A)=X≠A. So, A is a totally apr-rough set. But aprL_(A)=¯aprL(A)=A. Thus, A is not a totally aprL-rough set.
(5) Let A={x4,x5,x6}. Then, apr_(A)={x6}≠A and ¯apr(A)=X≠A. So, A is a totally apr-rough set. But ¯SR(A)=A. Therefore, A is not a totally SR-rough set. Note that A is an externally SR-definable set.
(6) Consider that L={ϕ,{x2},{x4},{x2,x4}} and A={x1,x2,x4}. Then, aprL_(A)=ϕ≠A and ¯aprL(A)=X≠A. So, A is a totally aprL-rough set. But ¯SRL(A)=A. Therefore, A is not a totally SRL-rough set. Also, A is an externally SRL-definable set. If A={x1,x3,x5,x6}, then aprL_(A)=A and ¯aprL(A)=X≠A. Hence, A is an internally aprL-definable set.
Object | e1 | e2 | e3 | e4 | e5 | e6 | e7 | e8 |
x1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
x2 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
x3 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
x4 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 |
x5 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 |
x6 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 |
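The soft set of Example 4.1 can be rebuilt mechanically from the incidence table above; the sketch below recovers each F(e) as the set of objects whose entry under e is 1, matching the sets listed at the start of the example.

```python
# Rows of the incidence table: objects x1..x6 against attributes e1..e8.
rows = {
    "x1": [1, 0, 0, 1, 1, 1, 0, 0],
    "x2": [0, 0, 0, 1, 1, 1, 1, 1],
    "x3": [0, 0, 1, 1, 0, 0, 1, 1],
    "x4": [1, 0, 1, 0, 1, 1, 0, 0],
    "x5": [0, 0, 0, 0, 1, 0, 1, 0],
    "x6": [1, 1, 0, 0, 0, 1, 0, 1],
}

# F(e_j) = set of objects whose j-th entry is 1.
F = {f"e{j + 1}": {x for x, bits in rows.items() if bits[j] == 1}
     for j in range(8)}

assert F["e1"] == {"x1", "x4", "x6"}
assert F["e5"] == {"x1", "x2", "x4", "x5"}
```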
Proposition 4.1. Let P=(X,S,L) be a soft ideal approximation space in which S=(F,E) is a soft set of X and A⊆X. Then, the following holds:
(1) A is a totally SRL-definable set iff AccSRL(A)=1.
(2) A is a totally aprL-definable set iff AccaprL(A)=1.
(3) A is a totally apr-definable set ⟹Accapr(A)=1.
Proof.
(1) Let A be a totally SRL-definable set. Then, SRL_(A)=¯SRL(A)=A. Therefore, AccSRL(A)=|A||A|=1.
On the other hand, if AccSRL(A)=1, then SRL_(A)=¯SRL(A), but from Proposition 3.4, we have that SRL_(A)⊆A⊆¯SRL(A); then, SRL_(A)=¯SRL(A)=A. Hence, A is a totally SRL-definable set.
(2), (3) Similar to the proof of part (1).
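In code, the accuracy measure of Proposition 4.1 is just the ratio |lower(A)|/|upper(A)|; whenever lower(A) ⊆ A ⊆ upper(A) (as guaranteed by Proposition 3.4 for the SR_L operators), an accuracy of 1 forces lower(A) = upper(A) = A. A minimal sketch, with the operators passed in as illustrative function arguments:

```python
from fractions import Fraction

def accuracy(A, lower, upper):
    """Acc(A) = |lower(A)| / |upper(A)| (upper(A) assumed nonempty)."""
    return Fraction(len(lower(A)), len(upper(A)))

A = frozenset({1, 2, 3})
identity = lambda s: s

# Totally definable: lower(A) = upper(A) = A, so Acc(A) = 1.
assert accuracy(A, identity, identity) == 1

# A rough set: the lower approximation shrinks, the upper grows, Acc(A) < 1.
assert accuracy(A, lambda s: s - {1}, lambda s: s | {4}) == Fraction(2, 4)
```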
Remark 4.2. The following example shows that the converse of property (3) in Proposition 4.1 does not hold.
Example 4.2. In Example 3.1, if A=X, then apr_(A)=¯apr(A)={x1,x2,x3,x5,x6}≠X. Hence, Accapr(A)=1, but A is not a totally apr-definable set.
Definition 4.3. Let P=(X,S,L) be a soft ideal approximation space such that S=(F,E) is a soft set over X, A⊆X and x∈X. Then, the SRL, aprL and apr-soft rough membership relations, denoted respectively by ∈_SRL,¯∈SRL, ∈_aprL,¯∈aprL and ∈_apr,¯∈apr, are given as follows:
x ∈_SRL A iff x∈SRL_(A), x ∈_aprL A iff x∈aprL_(A), and x ∈_apr A iff x∈apr_(A);
x ¯∈SRL A iff x∈¯SRL(A), x ¯∈aprL A iff x∈¯aprL(A), and x ¯∈apr A iff x∈¯apr(A).
Proposition 4.2. Let P=(X,S,L) be a soft ideal approximation space in which S=(F,E) is a soft set of X. If A⊆X, x∈X, then
(1) x ∈_SRL A ⇒x∈A and x ∈_aprL A ⇒x∈A.
(2) x ¯∉SRL A ⇒x∉A and x ¯∉aprL A ⇒x∉A.
Proof. It follows directly from Definition 4.3 and Propositions 3.2 and 3.4.
Remark 4.3. Let S=(F,E) be a full soft set of the universal set X and P=(X,S,L) be a soft ideal approximation space. Then, from Definitions 2.11 and 4.3, and by using Theorem 3.1, we have the following diagrams:
The next example shows that the implications in these diagrams are not reversible.
Example 4.3. (1) In Example 4.1 part (1), we have that A={x1,x3,x6}, ¯aprL(A)=¯SR(A)=X and ¯SRL(A)=A. Hence, x2,x4,x5 ¯∈aprL(A) and x2,x4,x5 ¯∈SR(A) but x2,x4,x5 ¯∉SRL(A).
(2) In Example 4.1 part (2), we have that A={x1,x2}, ¯aprL(A)=A, ¯SR(A)={x1,x2,x5} and ¯apr(A)=X. Hence, x3,x4,x5,x6 ¯∈apr(A) but x3,x4,x5,x6 ¯∉aprL(A) and x3,x4,x6 ¯∉SR(A).
(3) In Example 4.1 part (3), we have that A={x1,x3,x6}, SRL_(A)=A and SR_(A)={x6}. Hence, x1,x3 ∈_SRL(A) but, x1,x3 ∉_SR(A).
Definition 4.4. Let P=(X,S,L) be a soft ideal approximation space in which S=(F,E) is a soft set of X and A,B⊆X. Then, the SRL, aprL and apr-soft rough inclusion relations, denoted respectively by ⊂⇁SRL,⇀⊂SRL, ⊂⇁aprL,⇀⊂aprL and ⊂⇁apr,⇀⊂apr, are given as follows:
A⊂⇁SRLB iff SRL_(A)⊆SRL_(B), A⊂⇁aprLB iff aprL_(A)⊆aprL_(B);
A⇀⊂SRLB iff ¯SRL(A)⊆¯SRL(B), A⇀⊂aprLB iff ¯aprL(A)⊆¯aprL(B);
A⊂⇁aprB iff apr_(A)⊆apr_(B), A⇀⊂aprB iff ¯apr(A)⊆¯apr(B).
Proposition 4.3. Let P=(X,S,L) be a soft ideal approximation space in which S=(F,E) is a soft set of X and A,B⊆X. Then, we have the following:
(1) A⊆B ⇒ A⊂⇁SRLB, A⊂⇁aprLB and A⊂⇁aprB.
(2) A⊆B ⇒ A⇀⊂SRLB, A⇀⊂aprLB and A⇀⊂aprB.
Proof. It follows directly from Definition 4.4, Theorem 2.2 and Propositions 3.2 and 3.4.
Remark 4.4. The following example shows that the converse of Proposition 4.3 does not hold.
Example 4.4. In Example 4.1, consider that L={ϕ,{x2},{x4},{x2,x4}}. Then, the following holds:
(1) If A={x1,x3} and B={x2,x4}, then apr_(A)=ϕ, aprL_(A)={x1,x3} and apr_(B)=aprL_(B)=ϕ. Hence, B⊂⇁aprLA and B⊂⇁aprA. However, B⊈A.
(2) If A={x1,x2} and B={x1,x4}, then ¯apr(A)=¯aprL(A)=X and ¯apr(B)=¯aprL(B)=X. Hence, A⇀⊂aprLB and A⇀⊂aprB. However, A⊈B.
Definition 4.5. Let P=(X,S,L) be a soft ideal approximation space in which S=(F,E) is a soft set of X and A,B⊆X. Then, the SRL and aprL-soft rough set relations, denoted respectively by ≂SRL, ≃SRL, ≈SRL and ≂aprL, ≃aprL, ≈aprL, are given as follows:
A≂SRLB iff SRL_(A)=SRL_(B), and A≂aprLB iff aprL_(A)=aprL_(B);
A≃SRLB iff ¯SRL(A)=¯SRL(B), and A≃aprLB iff ¯aprL(A)=¯aprL(B);
A≈SRLB iff A≂SRLB and A≃SRLB, and A≈aprLB iff A≂aprLB and A≃aprLB.
Remark 4.5. The next example interprets Definition 4.5.
Example 4.5. In Example 4.1, consider that L={ϕ,{x3}}. If A={x1} and B={x2}, then aprL_(A)=SRL_(A)=aprL_(B)=SRL_(B)=ϕ and ¯aprL(A)=¯SRL(A)=¯aprL(B)=¯SRL(B)=X. Consequently, A≂SRLB,A≃SRLB,A≈SRLB and A≂aprLB,A≃aprLB,A≈aprLB.
Proposition 4.4. Let P=(X,S,L) be a soft ideal approximation space and A,B,A1,B1⊆X. Then, the following applies:
(1) A=B⇒A≈SRLB and A≈aprLB.
(2) Ac≂SRLBc⇒A≃SRLB and Ac≃SRLBc⇒A≂SRLB.
(3) A≃aprLA1,B≃aprLB1⇒(A∪B)≃aprL(A1∪B1).
(4) A≃aprLB⇒A≃aprL(A∪B)≃aprLB and (A∪Bc)≃aprLX.
(5) A≂SRLSRL_(A), A≃SRL¯SRL(A) and A≂aprLaprL_(A).
(6) A⊆B,B≂SRLϕ⇒A≂SRLϕ and A⊆B,B≂aprLϕ⇒A≂aprLϕ.
(7) A⊆B,B≃SRLϕ⇒A≃SRLϕ and A⊆B,B≃aprLϕ⇒A≃aprLϕ.
(8) A⊆B,A≂SRLX⇒B≂SRLX and A⊆B,A≂aprLX⇒B≂aprLX.
(9) A⊆B,A≃SRLX⇒B≃SRLX and A⊆B,A≃aprLX⇒B≃aprLX.
Proof. It follows directly from Definition 4.5 and Propositions 3.2 and 3.4.
Lemma 4.1. Let P=(X,S,L) be a soft ideal approximation space and S=(F,E) be an intersecting complete soft set of X. Then, we have
aprL_(A∩B)=aprL_(A)∩aprL_(B) and SRL_(A∩B)=SRL_(A)∩SRL_(B) for all A,B⊆X.
Proof. Clearly, aprL_(A∩B)⊆aprL_(A)∩aprL_(B). Thus, we need only prove the reverse inclusion aprL_(A∩B)⊇aprL_(A)∩aprL_(B). Let x∈aprL_(A)∩aprL_(B). Then, there exist e1,e2∈E with x∈F(e1), F(e1)∩Ac∈L and x∈F(e2), F(e2)∩Bc∈L. Since S is an intersecting complete soft set, there is an e3∈E with x∈F(e3)=F(e1)∩F(e2). Moreover, F(e3)∩(A∩B)c=(F(e3)∩Ac)∪(F(e3)∩Bc)⊆(F(e1)∩Ac)∪(F(e2)∩Bc), and since L is closed under subsets and finite unions, F(e3)∩(A∩B)c∈L. Hence, we conclude that x∈aprL_(A∩B). In a similar way, we can deduce that SRL_(A∩B)=SRL_(A)∩SRL_(B).
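The role of the intersecting-complete hypothesis can be seen on a small toy soft set. The sketch below uses the union-style lower operator ∪{F(e): F(e)∩A^c ∈ L} as an assumed form of the ideal lower approximation; with the witness F(e3)=F(e1)∩F(e2) present the equality of Lemma 4.1 holds, and dropping it leaves only the inclusion.

```python
def lower(F, ideal, A):
    # Assumed form of the ideal lower approximation:
    # apr_L(A) = union of F(e) over all e with F(e) ∩ A^c ∈ L.
    out = set()
    for Fe in F.values():
        if frozenset(Fe - A) in ideal:
            out |= Fe
    return out

L = {frozenset()}                      # the trivial ideal {ϕ}
A, B = {1, 2}, {2, 3}

# Intersecting complete: {1,2} ∩ {2,3} = {2} is itself some F(e).
F_complete = {"e1": {1, 2}, "e2": {2, 3}, "e3": {2}}
assert lower(F_complete, L, A & B) == lower(F_complete, L, A) & lower(F_complete, L, B)

# Drop e3 and the equality fails: only the inclusion ⊆ survives.
F_partial = {"e1": {1, 2}, "e2": {2, 3}}
assert lower(F_partial, L, A & B) == set()
assert lower(F_partial, L, A) & lower(F_partial, L, B) == {2}
```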
Proposition 4.5. Let P=(X,S,L) be a soft ideal approximation space and S=(F,E) be an intersecting complete soft set of X. Then, for A,B,A1,B1⊆X, we have the following:
(1) A≂aprLB⇒A∩Bc≂aprLϕ and A≂SRLB⇒A∩Bc≂SRLϕ.
(2) A≂aprLB⇒A≂aprL(A∩B)≂aprLB and A≂SRLB⇒A≂SRL(A∩B)≂SRLB.
(3) A≂aprLA1,B≂aprLB1⇒(A∩B)≂aprL(A1∩B1) and A≂SRLA1,B≂SRLB1⇒(A∩B)≂SRL(A1∩B1).
Proof. It follows directly from Definition 4.5 and Lemma 4.1.
Proposition 4.6. Let P=(X,S,L) be a soft ideal approximation space and S=(F,E) be a soft set of X. Then, for all A⊆X we have the following:
(1) aprL_(A)=⋂{B⊆X:A≂aprLB} and SRL_(A)=⋂{B⊆X:A≂SRLB}.
(2) ¯aprL(A)⊇⋃{B⊆X:A≃aprLB} and ¯SRL(A)=⋃{B⊆X:A≃SRLB}.
Proof. (1) Let x∈aprL_(A). If A≂aprLB, then aprL_(A)=aprL_(B). But aprL_(B)⊆B for any B⊆X. It follows that x∈aprL_(A)=aprL_(B)⊆B. Hence, x∈∩{B⊆X:A≂aprLB}, so aprL_(A)⊆∩{B⊆X:A≂aprLB}. Now, we deduce that the reverse inclusion also holds. Let x∈∩{B⊆X:A≂aprLB}. Then, by Proposition 4.4, we have that A≂aprLaprL_(A), and it follows that x∈aprL_(A). Therefore, ∩{B⊆X:A≂aprLB}⊆aprL_(A). Consequently, we conclude that aprL_(A)=∩{B⊆X:A≂aprLB}. Similarly, we can deduce that SRL_(A)=⋂{B⊆X:A≂SRLB}.
(2) Let x∈∪{B⊆X:A≃aprLB}. Then, there exists some B⊆X with x∈B and A≃aprLB. But B⊆¯aprL(B) from Proposition 3.2; thus, x∈B⊆¯aprL(B)=¯aprL(A). Hence, we conclude that ¯aprL(A)⊇⋃{B⊆X:A≃aprLB}, as required. Similarly, we can prove that ¯SRL(A)⊇⋃{B⊆X:A≃SRLB}. Also, we can deduce that the reverse inclusion ⋃{B⊆X:A≃SRLB}⊇¯SRL(A) holds. Let x∉∪{B⊆X:A≃SRLB}. Then, x∉B for any B⊆X with A≃SRLB. Thus, by Proposition 4.4, we have that A≃SRL¯SRL(A), and it follows that x∉¯SRL(A). Consequently, we conclude that ¯SRL(A)=∪{B⊆X:A≃SRLB}.
Remark 4.6. The following example explains Proposition 4.6.
Example 4.6. In Example 4.1, consider that L={ϕ,{x2},{x4},{x2,x4}}. For A={x3,x5}⊆X, we have that aprL_(A)=SRL_(A)={x3}, ¯aprL(A)={x1,x2,x3,x5} and ¯SRL(A)={x2,x3,x4,x5}. Then, one can see that aprL_(A)=SRL_(A)=⋂{B⊆X:A≂aprLB}={x3} and ¯SRL(A)=⋃{B⊆X:A≃SRLB}={x2,x3,x4,x5}. Also, ⋃{B⊆X:A≃aprLB}={x2,x3,x5}⫋¯aprL(A).
In this section, a new generalization of the best method proposed in Definition 3.5, based on two ideals and called soft bi-ideal rough set approximations, is presented. This generalization is developed by using two distinct methods; their properties are investigated, and the relationships between these methods are studied.
Definition 5.1. The quadruple (X,S,L1,L2) is called a soft bi-ideal approximation space, where S is a soft set defined on X and L1,L2 are two ideals on X; the space (X,S,<L1,L2>), where <L1,L2>={ℓ1∪ℓ2: ℓ1∈L1, ℓ2∈L2} is the ideal generated by L1 and L2, is called the soft ideal approximation space related to (X,S,L1,L2). For any subset A of X, the lower approximation and the upper approximation, i.e., (SR)_∗<L1,L2>(A) and (SR)^∗<L1,L2>(A), are defined respectively, as follows:
(SR)_∗<L1,L2>(A)=∪{F(e),e∈E:F(e)∩Ac∈<L1,L2>},
(SR)^∗<L1,L2>(A)=[(SR)_∗<L1,L2>(Ac)]c.
Remark 5.1. The lower and upper approximations given in Definition 5.1 coincide with the approximations given in Definition 3.4 if L1=L2. Moreover, the soft approximations in Definition 5.1 satisfy the properties given in Proposition 3.3.
Definition 5.2. Let (X,S,L1,L2) be a soft bi-ideal approximation space. For any subset A of X, the soft lower approximation and the soft upper approximation, SR_<L1,L2>(A) and ¯SR<L1,L2>(A), are defined respectively, as follows:
SR_<L1,L2>(A)=A∩(SR)_∗<L1,L2>(A),
¯SR<L1,L2>(A)=A∪(SR)^∗<L1,L2>(A).
Remark 5.2. The lower and upper approximations given in Definition 5.2 coincide with the approximations given in Definition 3.5 if L1=L2. Moreover, the soft approximations in Definition 5.2 satisfy the properties given in Proposition 3.4.
Definition 5.3. Let (X,S,L1,L2) be a soft bi-ideal approximation space. For any subset A of X, the soft lower approximation and the soft upper approximation, SR_L1,L2(A) and ¯SRL1,L2(A), are defined respectively, as follows:
SR_L1,L2(A)=SRL1_(A)∪SRL2_(A),
¯SRL1,L2(A)=¯SRL1(A)∩¯SRL2(A),
where SRLi_(A) and ¯SRLi(A) are the lower and upper approximations of A with respect to Li, i∈{1,2}, as in Definition 3.5.
Remark 5.3. The lower and upper approximations given in Definition 5.3 coincide with the approximations given in Definition 3.5 if L1=L2. Also, it should be noted that the operators SR_L1,L2(A) and ¯SRL1,L2(A) satisfy the properties given in Proposition 3.4.
Definition 5.4. Let (X,S,L1,L2) be a soft bi-ideal approximation space and A⊆X. Then, the soft boundary regions Bnd<L1,L2>(A) and BndL1,L2(A) and the soft accuracy measures Acc<L1,L2>(A) and AccL1,L2(A) are defined respectively, as follows:
Bnd<L1,L2>(A)=¯SR<L1,L2>(A)−SR_<L1,L2>(A) and BndL1,L2(A)=¯SRL1,L2(A)−SR_L1,L2(A);
Acc<L1,L2>(A)=|SR_<L1,L2>(A)|/|¯SR<L1,L2>(A)| and AccL1,L2(A)=|SR_L1,L2(A)|/|¯SRL1,L2(A)|.
Theorem 5.1. Let (X,S,L1,L2) be a soft bi-ideal approximation space and A⊆X. Then, the following holds:
(1) ¯SR<L1,L2>(A)⊆¯SRL1,L2(A).
(2) SR_L1,L2(A)⊆SR_<L1,L2>(A).
(3) Bnd<L1,L2>(A)⊆BndL1,L2(A).
(4) AccL1,L2(A)≤Acc<L1,L2>(A).
Proof. (1) Let x∈¯SR<L1,L2>(A). Then, x∈A, or x∈(SR)^∗<L1,L2>(A), i.e., for every e∈E with x∈F(e) we have that F(e)∩A∉<L1,L2>. In the first case, if x∈A, then x∈¯SRL1(A) and x∈¯SRL2(A); thus, x∈¯SRL1,L2(A). In the second case, F(e)∩A∉L1 and F(e)∩A∉L2 for every e∈E with x∈F(e); thus, x∉(SRL1)_∗(Ac) and x∉(SRL2)_∗(Ac). This means that x∈(SRL1)^∗(A) and x∈(SRL2)^∗(A) by Definition 3.4. Thus, x∈¯SRL1(A)∩¯SRL2(A), i.e., x∈¯SRL1,L2(A). Hence, ¯SR<L1,L2>(A)⊆¯SRL1,L2(A).
(2) Let x∈SR_L1,L2(A). Then, x∈SRL1_(A) or x∈SRL2_(A). Therefore, for some e∈E with x∈F(e), we have that F(e)∩Ac∈L1 or F(e)∩Ac∈L2. Since L1,L2⊆<L1,L2>, for some e∈E with x∈F(e) we have that F(e)∩Ac∈<L1,L2>. Thus, x∈SR_<L1,L2>(A). Hence, SR_L1,L2(A)⊆SR_<L1,L2>(A).
(3), (4) It is immediately obtained by applying parts (1) and (2).
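The inclusions of Theorem 5.1 can be checked mechanically on concrete data. The Python sketch below implements the two bi-ideal methods (Definitions 5.2 and 5.3) and the measures of Definition 5.4, taking ⟨L1,L2⟩ to be the set of all unions ℓ1∪ℓ2 (an assumption consistent with the worked example in Section 6). Since Definitions 3.4 and 3.5 are only summarized in this chunk, treat the operators as a sketch of the displayed formulas rather than a definitive implementation.

```python
from fractions import Fraction

def lower_star(F, ideal, A):
    # (SR)_*(A) = union of F(e) over all e with F(e) ∩ A^c ∈ ideal.
    out = set()
    for Fe in F.values():
        if frozenset(Fe - A) in ideal:
            out |= Fe
    return out

def approx(F, ideal, A, X):
    # Definition 3.5-style pair: lower = A ∩ (SR)_*(A),
    # upper = A ∪ [(SR)_*(A^c)]^c.
    lo = A & lower_star(F, ideal, A)
    up = A | (X - lower_star(F, ideal, X - A))
    return lo, up

def method_one(F, L1, L2, A, X):
    # Definition 5.2: use the join ideal ⟨L1,L2⟩ = {l1 ∪ l2} (assumed form).
    J = {frozenset(a | b) for a in L1 for b in L2}
    return approx(F, J, A, X)

def method_two(F, L1, L2, A, X):
    # Definition 5.3: combine the two single-ideal approximation pairs.
    lo1, up1 = approx(F, L1, A, X)
    lo2, up2 = approx(F, L2, A, X)
    return lo1 | lo2, up1 & up2

def bnd_acc(lo, up):
    # Definition 5.4: boundary region and accuracy of an approximation pair.
    return up - lo, Fraction(len(lo), len(up))
```

On the heart-attack data of Section 6 with the ideals chosen there, the Definition 5.3 pair for A={x6,x7} comes out as (ϕ, X), in line with the computation reported in that example, while the Definition 5.2 pair has a strictly larger lower approximation, a smaller boundary, and a higher accuracy, as Theorem 5.1 predicts.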
Proposition 5.1. Let (X,S,L1,L2) be a soft bi-ideal approximation space and A⊆X. Then, the following holds:
(1) ¯SR<L1,L2>(A)⊆¯SRL1,L2(A)⊆¯SRLi(A) ∀i∈{1,2}.
(2) SRLi_(A)⊆SR_L1,L2(A)⊆SR_<L1,L2>(A) ∀i∈{1,2}.
(3) Bnd<L1,L2>(A)⊆BndL1,L2(A)⊆BndSRLi(A) ∀i∈{1,2}.
(4) AccSRLi(A)≤AccL1,L2(A)≤Acc<L1,L2>(A) ∀i∈{1,2}.
Proof. It directly follows from Definition 5.3 and Theorem 5.1.
Remark 5.4. From Theorem 5.1, we deduce that the boundary region defined by Definition 5.2 is smaller than the boundary region computed by applying Definition 5.3. Consequently, the accuracy value computed based on Definition 5.2 is higher than the accuracy value computed based on Definition 5.3. In the next section, we introduce two medical applications to clarify the advantages of using these recent methods to make a decision.
In this section, we show the good performance of the aforementioned approaches as compared to their counterpart approaches in the literature [28,29] by providing two practical applications.
Example 6.1. Medical application: Decision-making for influenza problem:
The aim of this example is to illustrate the advantage of the approximations defined in Definitions 3.2 and 3.5 as the best tools to detect the critical symptoms of influenza infections. The information in Table 5 was adopted from [31,32]. In this example, let (F,E) describe the symptoms of patients suspected of having influenza that any hospital would consider before making a decision. The most common symptoms (set of attributes) of influenza are as follows: fever, respiratory issues, nasal discharge, cough, headache, lethargy and sore throat. The data were taken from six patients under medical examination. So, the set of objects is X={x1,x2,x3,x4,x5,x6}, and the set of decision attributes is E={e1,e2,e3,e4,e5,e6,e7}, such that
\begin{align*} e_1& = \text{the parameter ``fever''},\\ e_2& = \text{the parameter ``respiratory issues''},\\ e_3& = \text{the parameter ``nasal discharge''},\\ e_4& = \text{the parameter ``cough''},\\ e_5& = \text{the parameter ``headache''},\\ e_6& = \text{the parameter ``sore throat''},\\ e_7& = \text{the parameter ``lethargy''}. \end{align*}
Consider the mapping F: E\longrightarrow P(X). From Table 5, we can deduce that
\begin{align*} F(e_1)& = \{x_1,x_3,x_4,x_5,x_6\}, & F(e_2)& = \{x_1,x_2\},\\ F(e_3)& = \{x_1,x_2,x_4\}, & F(e_4)& = \{x_1\},\\ F(e_5)& = \{x_3,x_4\}, & F(e_6)& = \{x_2,x_4\},\\ F(e_7)& = \{x_1,x_3,x_5,x_6\}. \end{align*}
Therefore, F(e_1) indicates that patients have a fever, and the functional value is the set \{x_1, x_3, x_4, x_5, x_6\} . Thus, (F, E) could be described as a set of approximations, as explained in Table 5.
Patients | e_1 | e_2 | e_3 | e_4 | e_5 | e_6 | e_7 |
x_1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 |
x_2 | 0 | 1 | 1 | 0 | 0 | 1 | 0 |
x_3 | 1 | 0 | 0 | 0 | 1 | 0 | 1 |
x_4 | 1 | 0 | 1 | 0 | 1 | 1 | 0 |
x_5 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
x_6 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
Consider that \mathcal{L} = \{\phi, \{x_1\}, \{x_2\}, \{x_3\}, \{x_1, x_2\}, \{x_1, x_3\}, \{x_2, x_3\}, \{x_1, x_2, x_3\}\}. A comparison between the introduced methods and the previous methods is shown in Tables 6 and 7. For example, take a set of patients A = \{x_3, x_4, x_5\}; then, the soft boundary region Bnd_{\mathbf{S}R_\mathcal{L}}(A) and soft accuracy measure Acc_{\mathbf{S}R_\mathcal{L}}(A) by Definition 3.5 are \{x_5, x_6\} and 1/2, respectively. Alternatively, Bnd_{apr}(A) and Acc_{apr}(A) are \{x_1, x_2, x_5, x_6\} and 1/3, respectively. Also, Bnd_{\mathbf{S}R}(A) and Acc_{\mathbf{S}R}(A) are \{x_5, x_6\} and 1/2, respectively, and Bnd_{apr_\mathcal{L}}(A) and Acc_{apr_\mathcal{L}}(A) are \{x_1, x_2, x_5, x_6\} and 1/3, respectively. If we take another set such that A = \{x_2, x_4, x_5, x_6\}, then Bnd_{\mathbf{S}R_\mathcal{L}}(A) and Acc_{\mathbf{S}R_\mathcal{L}}(A) are \phi and 1, respectively. Alternatively, Bnd_{apr}(A) and Acc_{apr}(A) are \{x_1, x_3, x_5, x_6\} and 1/3, respectively. Also, Bnd_{\mathbf{S}R}(A) and Acc_{\mathbf{S}R}(A) are \{x_3, x_5, x_6\} and 2/5, respectively, and Bnd_{apr_\mathcal{L}}(A) and Acc_{apr_\mathcal{L}}(A) are \{x_1\} and 5/6, respectively. According to the above discussion, Definition 3.5 gives us a boundary region no larger, and an accuracy value no smaller, than the ones computed using Definition 3.2 and the methods in [28,29]. Additionally, it is clear that the proposed approach in Definition 3.2 and its counterpart introduced in [29] are different in general. Hence, a decision made according to the calculations of our current technique in Definition 3.5 is more accurate.
Soft rough set approach [28] | Soft rough set approach [29] | The suggested approach in Definition 3.2 | The suggested approach in Definition 3.5 | |||||
Patients | \underline{apr}(A) | \overline{apr}(A) | \underline{\mathbf{S}R}(A) | \overline{\mathbf{S}R}(A) | \underline{apr_\mathcal{L}}(A) | \overline{apr_\mathcal{L}}(A) | \underline{\mathbf{S}R_\mathcal{L}}(A) | \overline{\mathbf{S}R_\mathcal{L}}(A) |
\{x_1\} | \{x_1\} | X | \{x_1\} | \{x_1, x_5, x_6\} | \{x_1\} | \{x_1\} | \{x_1\} | \{x_1\} |
\{x_1, x_2\} | \{x_1, x_2\} | X | \{x_1, x_2\} | \{x_1, x_2, x_5, x_6\} | \{x_1, x_2\} | \{x_1, x_2\} | \{x_1, x_2\} | \{x_1, x_2\} |
\{x_1, x_3\} | \{x_1\} | X | \{x_1\} | \{x_1, x_3, x_5, x_6\} | \{x_1\} | \{x_1, x_3\} | \{x_1\} | \{x_1, x_3\} |
\{x_1, x_5\} | \{x_1\} | X | \{x_1\} | \{x_1, x_5, x_6\} | \{x_1\} | \{x_1, x_3, x_4, x_5, x_6\} | \{x_1\} | \{x_1, x_5, x_6\} |
\{x_2, x_4\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_4\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_4\} |
\{x_3, x_4\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} |
\{x_1, x_2, x_5\} | \{x_1, x_2\} | X | \{x_1, x_2\} | \{x_1, x_2, x_5, x_6\} | \{x_1, x_2\} | X | \{x_1, x_2\} | \{x_1, x_2, x_5, x_6\} |
\{x_1, x_3, x_5\} | \{x_1\} | X | \{x_1\} | \{x_1, x_3, x_5, x_6\} | \{x_1\} | \{x_1, x_3, x_4, x_5, x_6\} | \{x_1\} | \{x_1, x_3, x_5, x_6\} |
\{x_1, x_5, x_6\} | \{x_1\} | X | \{x_1\} | \{x_1, x_5, x_6\} | \{x_1\} | \{x_1, x_3, x_4, x_5, x_6\} | \{x_1\} | \{x_1, x_5, x_6\} |
\{x_2, x_3, x_4\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} |
\{x_2, x_4, x_5\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} |
\{x_3, x_4, x_5\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} |
\{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} | X | \{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_4, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} |
\{x_2, x_3, x_4, x_5\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} |
\{x_2, x_4, x_5, x_6\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4, x_5, x_6\} | X | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4, x_5, x_6\} |
\{x_3, x_4, x_5, x_6\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} | \{x_3, x_4, x_5, x_6\} | X | \{x_3, x_4, x_5, x_6\} | \{x_3, x_4, x_5, x_6\} |
\{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4, x_5, x_6\} | X | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4, x_5, x_6\} |
Approach in [28] | Approach in [29] | Our first approach | Our second approach | |
Patients | Acc_{apr}(A) | Acc_{\mathbf{S}R}(A) | Acc_{apr_\mathcal{L}}(A) | Acc_{\mathbf{S}R_\mathcal{L}}(A)
\{x_1\} | 1/6 | 1/3 | 1 | 1 |
\{x_1, x_2\} | 1/3 | 1/2 | 1 | 1 |
\{x_1, x_3\} | 1/6 | 1/4 | 1/2 | 1/2
\{x_1, x_5\} | 1/6 | 1/3 | 1/5 | 1/3 |
\{x_2, x_4\} | 1/3 | 1 | 1/3 | 1 |
\{x_3, x_4\} | 1/3 | 1/2 | 1/3 | 1/2 |
\{x_1, x_2, x_5\} | 1/3 | 1/2 | 1/3 | 1/2 |
\{x_1, x_3, x_5\} | 1/6 | 1/4 | 1/5 | 1/4 |
\{x_1, x_5, x_6\} | 1/6 | 1/3 | 1/5 | 1/3 |
\{x_2, x_3, x_4\} | 1/2 | 3/5 | 1/2 | 3/5 |
\{x_2, x_4, x_5\} | 1/3 | 2/5 | 1/3 | 2/5 |
\{x_3, x_4, x_5\} | 1/3 | 1/2 | 1/3 | 1/2 |
\{x_1, x_3, x_5, x_6\} | 2/3 | 1 | 4/5 | 1 |
\{x_2, x_3, x_4, x_5\} | 1/2 | 3/5 | 1/2 | 3/5 |
\{x_2, x_4, x_5, x_6\} | 1/3 | 2/5 | 5/6 | 1 |
\{x_3, x_4, x_5, x_6\} | 1/3 | 1/2 | 2/3 | 1 |
\{x_2, x_3, x_4, x_5, x_6\} | 1/2 | 3/5 | 5/6 | 1 |
Example 6.2. Medical application: Decision-making for a heart attack problem:
In this example, we used data recorded by the Cardiology Department of Al-Azhar University [33], as shown in Table 8, and applied the techniques described in Definitions 5.2 and 5.3 to these data to obtain the best decision for heart attacks. Table 8 represents the set of objects (patients) as
\begin{equation*} X = \{x_1,x_2,x_3,x_4,x_5,x_6,x_7\}, \end{equation*} |
and the set of decision parameters (set of symptoms) is E = \{e_1, e_2, e_3, e_4, e_5\}, meaning that
\begin{align*} e_1& = \text{the parameter ``Breathlessness''},\\ e_2& = \text{the parameter ``Orthopnea''},\\ e_3& = \text{the parameter ``Paroxysmal nocturnal dyspnea''},\\ e_4& = \text{the parameter ``Reduced exercise tolerance''},\\ e_5& = \text{the parameter ``Ankle swelling''}. \end{align*}
The diagnosis of a heart attack is given in the "Decision" column of Table 8.
Patients | e_1 | e_2 | e_3 | e_4 | e_5 | Decision |
x_1 | 1 | 1 | 1 | 1 | 0 | yes |
x_2 | 0 | 0 | 0 | 1 | 1 | no |
x_3 | 1 | 1 | 1 | 1 | 1 | yes |
x_4 | 0 | 0 | 0 | 1 | 0 | no |
x_5 | 1 | 0 | 0 | 1 | 1 | no |
x_6 | 1 | 1 | 0 | 1 | 1 | yes |
x_7 | 1 | 0 | 1 | 1 | 0 | yes |
From Table 8, we can deduce that
\begin{align*} F(e_1)& = \{x_1,x_3,x_5,x_6,x_7\}, & F(e_2)& = \{x_1,x_3,x_6\}, & F(e_3)& = \{x_1,x_3,x_7\},\\ F(e_4)& = X, & F(e_5)& = \{x_2,x_3,x_5,x_6\}. \end{align*}
Consequently, one can choose two ideals to illustrate that the approximations given in Definition 5.2 are superior to those given in Definition 5.3, by constructing tables similar to Tables 6 and 7 and comparing the resulting accuracies.
For example, let \mathcal{L}_1 = \{\phi, \{x_2\}\} and \mathcal{L}_2 = \{\phi, \{x_3\}, \{x_5\}, \{x_3, x_5\}\} be two ideals on X . Therefore, we have that < \mathcal{L}_1, \mathcal{L}_2 > = \{\phi, \{x_2\}, \{x_3\}, \{x_5\}, \{x_2, x_3\}, \{x_2, x_5\}, \{x_3, x_5\}, \{x_2, x_3, x_5\}\}.
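The generated ideal listed above can be checked mechanically; a small sketch, taking ⟨L1,L2⟩ to consist of all unions ℓ1 ∪ ℓ2 with ℓ1 ∈ L1 and ℓ2 ∈ L2 (which reproduces the eight sets written out above):

```python
# L1 and L2 from the example; ideal members are stored as frozensets.
L1 = {frozenset(), frozenset({"x2"})}
L2 = {frozenset(), frozenset({"x3"}), frozenset({"x5"}),
      frozenset({"x3", "x5"})}

# <L1, L2> = {l1 ∪ l2 : l1 in L1, l2 in L2}
join = {l1 | l2 for l1 in L1 for l2 in L2}

print(len(join))  # 8
```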
From Table 8, the patients x_6 and x_7 were surely diagnosed with heart attacks. Take A = \{x_6, x_7\}; we computed the soft lower approximation, soft upper approximation, soft boundary region and soft accuracy measure of A, respectively, as follows:
(1) The approach in Definition 5.3 yields \phi, X, X and 0. This means that we cannot confirm that the patients x_6 and x_7 experienced heart attacks, which contradicts the "yes" values in Table 8. So, we could not determine whether a patient suffered a heart attack.
(2) The approach in Definition 5.2 yields \{x_6, x_7\}, X, \{x_1, x_2, x_3, x_4, x_5\} and 2/7. This means that the patients x_6 and x_7 surely experienced heart attacks according to the technique of Definition 5.2, which is consistent with Table 8. As a result, we were able to determine whether a patient suffered a heart attack. Additionally, the soft boundary region decreased and the soft accuracy measure was higher.
Finally, it should be noted that the use of the soft rough technique utilizing ideals, as detailed in our new proposal in Definition 5.2, also refines the primary evaluation results of an expert group and thus allows us to select the optimal object in a more reliable manner. Specifically, the soft lower approximation can be used to add the optimal objects that have possibly been neglected by some experts in the primary evaluation, while the soft upper approximation can be used to remove the objects that have been improperly selected as the optimal objects by some experts in the primary evaluation. Therefore, considering that the subjective aspect of decision making is described by using the evaluation soft set in our best method, based on the two ideals in Definition 5.2, the use of soft rough sets could, to some extent, automatically reduce the errors caused by the subjective nature of the evaluation given by an expert group in some decision-making problems.
This paper presents a modification, and a generalization, of the roughness of soft sets. Two new approximations called soft ideal rough approximations, which generalize the old soft approximations [28,29], have been proposed. It is proved that the two proposed approaches satisfy the main properties established in Pawlak's model. Comparisons among these approaches and previous ones [28,29] have been discussed. Moreover, we proved that our second approach, defined in Definition 3.5, is the best one, since it produces smaller boundary regions and greater accuracy values than the corresponding boundary regions and accuracy values obtained via our first method in Definition 3.2 and those introduced in [28,29]. Furthermore, two new soft approximation spaces based on two ideals, called soft bi-ideal approximation spaces, have been introduced in Definitions 5.2 and 5.3. The properties and results of these soft bi-ideal rough sets have been presented, and the comparisons between these methods have been investigated. These methods can be extended by using n-ideals. The importance of the introduced soft approximations is that they depend on the concept of an "ideal", which effectively minimizes the ambiguity raised in real-life problems and results in better decisions. Therefore, two medical applications have been offered to highlight the effect of incorporating ideals into the current approaches. We demonstrated in the first application that our strategies reduce boundary regions and increase the accuracy measure of the sets more than the approaches shown in [28,29]. In the second medical application, we proved that the approximations explained in Definition 5.2 yield a smaller boundary and a higher accuracy than those obtained by applying Definition 5.3, and this was achieved by comparing the resultant boundary and accuracy results.
In the two medical applications, our techniques handled the imperfect data for symptoms, which automatically made patient diagnosis simple and accurate. As a result, medical personnel can make more accurate decisions about patient diagnosis. In future work, we will introduce the notion of fuzzy soft ideal rough approximation and conduct further research in this direction.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number ISP-2024.
The authors declare that they have no conflict of interest.
[1] | S. S. Dragomir, C. Pearce, Selected topics on Hermite-Hadamard inequalities and applications, 2003. |
[2] | A. M. D. Mercer, A variant of Jensen's inequality, J. Inequal. Pure Appl. Math., 4 (2003), 73. |
[3] |
S. S. Dragomir, R. P. Agarwal, Two inequalities for differentiable mappings and applications to special means of real numbers and to trapezoidal formula, Appl. Math. Lett., 11 (1998), 91–95. https://doi.org/10.1016/S0893-9659(98)00086-X doi: 10.1016/S0893-9659(98)00086-X
![]() |
[4] | M. Kian, M. S. Moslehian, Refinements of the operator Jensen-Mercer inequality, Electron.J. Linear Al., 26 2013,742–753. https://doi.org/10.13001/1081-3810.1684 |
[5] |
H. Ogulmus, M. Z. Sarikaya, Hermite-Hadamard-Mercer type inequalities for fractional integrals, Filomat, 35 (2021), 2425–2436. https://doi.org/10.2298/FIL2107425O doi: 10.2298/FIL2107425O
![]() |
[6] |
S. I. Butt, A. Kashuri, M. Umar, A. Aslam, W. Gao, Hermite-Jensen-Mercer type inequalities via \Lambda -Riemann-Liouville k-fractional integrals, AIMS Math., 5 (2020), 5193–5220. https://doi.org/10.3934/math.2020334 doi: 10.3934/math.2020334
![]() |
[7] |
S. I. Butt, M. Umar, S. Rashid, A. O. Akdemir, Y. M. Chu, New Hermite-Jensen-Mercer-type inequalities via k-fractional integrals, Adv. Differ. Equ., 2020 (2020), 635. https://doi.org/10.1186/s13662-020-03093-y doi: 10.1186/s13662-020-03093-y
![]() |
[8] |
H. H. Chu, S. Rashid, Z. Hammouch, Y. M. Chu, New fractional estimates for Hermite-Hadamard-Mercer's type inequalities, Alex. Eng. J., 59 (2020), 3079–3089. https://doi.org/10.1016/j.aej.2020.06.040 doi: 10.1016/j.aej.2020.06.040
![]() |
[9] |
M. Vivas-Cortez, M. A. Ali, A. Kashuri, H. Budak, Generalizations of fractional Hermite-Hadamard-Mercer like inequalities for convex functions, AIMS Math., 6 (2021), 9397–9421. https://doi.org/10.3934/math.2021546 doi: 10.3934/math.2021546
![]() |
[10] |
M. Vivas-Cortez, M. U. Awan, M. Z. Javed, A. Kashuri, M. A Noor, K. I. Noor, Some new generalized k-fractional Hermite-Hadamard-Mercer type integral inequalities and their applications, AIMS Math., 7 (2022), 3203–3220. https://doi.org/10.3934/math.2022177 doi: 10.3934/math.2022177
![]() |
[11] |
W. Sudsutad, S. K. Ntouyas, J. Tariboon, Quantum integral inequalities for convex functions, J. Math. Inequal., 9 (2015), 781–793. https://doi.org/10.7153/jmi-09-64 doi: 10.7153/jmi-09-64
![]() |
[12] |
M. A. Noor, K. I. Noor, M. U. Awan, Some quantum estimates for Hermite-Hadamard inequalities, Appl. Math. Comput., 251 (2015), 675–679. https://doi.org/10.1016/j.amc.2014.11.090 doi: 10.1016/j.amc.2014.11.090
![]() |
[13] |
N. Alp, M. Z. Sarıkaya, M. Kunt, İ. İşcan, q-Hermite Hadamard inequalities and quantum estimates for midpoint type inequalities via convex and quasi-convex functions, J. King Saud Univ. Sci., 30 (2018), 193–203. https://doi.org/10.1016/j.jksus.2016.09.007 doi: 10.1016/j.jksus.2016.09.007
![]() |
[14] Y. Zhang, T. S. Du, H. Wang, Y. J. Shen, Different types of quantum integral inequalities via (\alpha, m)-convexity, J. Inequal. Appl., 2018 (2018), 264. https://doi.org/10.1186/s13660-018-1860-2
[15] Y. Deng, M. U. Awan, S. Wu, Quantum integral inequalities of Simpson-type for strongly preinvex functions, Mathematics, 7 (2019), 751. https://doi.org/10.3390/math7080751
[16] M. Kunt, M. Aljasem, Fractional quantum Hermite-Hadamard type inequalities, Konuralp J. Math., 8 (2020), 122–136.
[17] M. Vivas-Cortez, M. Z. Javed, M. U. Awan, A. Kashuri, M. A. Noor, Generalized (p, q)-analogues of Dragomir-Agarwal's inequalities involving Raina's mapping and applications, AIMS Math., 7 (2022), 11464–11486. https://doi.org/10.3934/math.2022639
[18] S. Erden, S. Iftikhar, R. M. Delavar, P. Kumam, P. Thounthong, W. Kumam, On generalizations of some inequalities for convex functions via quantum integrals, RACSAM Rev. R. Acad. A, 114 (2020), 110. https://doi.org/10.1007/s13398-020-00841-3
[19] P. P. Wang, T. Zhu, T. S. Du, Some inequalities using s-preinvexity via quantum calculus, J. Interdiscip. Math., 24 (2021), 613–636. https://doi.org/10.1080/09720502.2020.1809117
[20] M. Vivas-Cortez, M. U. Awan, S. Talib, A. Kashuri, M. A. Noor, Multi-parameter quantum integral identity involving Raina's function and corresponding q-integral inequalities with applications, Symmetry, 14 (2022), 606. https://doi.org/10.3390/sym14030606
[21] Y. M. Chu, M. U. Awan, S. Talib, M. A. Noor, K. I. Noor, New post quantum analogues of Ostrowski-type inequalities using new definitions of left-right (p, q)-derivatives and definite integrals, Adv. Differ. Equ., 2020 (2020), 634. https://doi.org/10.1186/s13662-020-03094-x
[22] H. Kalsoom, M. Vivas-Cortez, (q_1, q_2)-Ostrowski-type integral inequalities involving property of generalized higher-order strongly n-polynomial preinvexity, Symmetry, 14 (2022), 717. https://doi.org/10.3390/sym14040717
[23] M. A. Ali, H. Budak, M. Fečkan, S. Khan, A new version of q-Hermite-Hadamard's midpoint and trapezoid type inequalities for convex functions, Math. Slovaca, 73 (2023), 369–386. https://doi.org/10.1515/ms-2023-0029
[24] B. Bin-Mohsin, M. Saba, M. Z. Javed, M. U. Awan, H. Budak, K. Nonlaopon, A quantum calculus view of Hermite-Hadamard-Jensen-Mercer inequalities with applications, Symmetry, 14 (2022), 1246. https://doi.org/10.3390/sym14061246
[25] H. Budak, H. Kara, On quantum Hermite-Jensen-Mercer inequalities, Miskolc Math. Notes, 2016.
[26] M. Bohner, H. Budak, H. Kara, Post-quantum Hermite-Jensen-Mercer inequalities, Rocky Mountain J. Math., 53 (2023), 17–26. https://doi.org/10.1216/rmj.2023.53.17
[27] H. Budak, M. A. Ali, M. Tarhanaci, Some new quantum Hermite-Hadamard-like inequalities for coordinated convex functions, J. Optim. Theory Appl., 186 (2020), 899–910. https://doi.org/10.1007/s10957-020-01726-6
[28] M. A. Ali, H. Budak, M. Abbas, Y. M. Chu, Quantum Hermite-Hadamard-type inequalities for functions with convex absolute values of second q^a derivatives, Adv. Differ. Equ., 2021 (2021), 7. https://doi.org/10.1186/s13662-020-03163-1
[29] K. Nonlaopon, M. U. Awan, M. Z. Javed, H. Budak, M. A. Noor, Some q-fractional estimates of trapezoid like inequalities involving Raina's function, Fractal Fract., 6 (2022), 185. https://doi.org/10.3390/fractalfract6040185
[30] N. Siddique, M. Imran, K. A. Khan, J. Pecaric, Majorization inequalities via Green functions and Fink's identity with applications to Shannon entropy, J. Inequal. Appl., 2020 (2020), 192. https://doi.org/10.1186/s13660-020-02455-0
[31] N. Siddique, M. Imran, K. A. Khan, J. Pecaric, Difference equations related to majorization theorems via Montgomery identity and Green's functions with application to the Shannon entropy, Adv. Differ. Equ., 2020 (2020), 430. https://doi.org/10.1186/s13662-020-02884-7
[32] S. Faisal, M. A. Khan, T. U. Khan, T. Saeed, A. M. Alshehri, E. R. Nwaeze, New conticrete Hermite-Hadamard-Jensen-Mercer fractional inequalities, Symmetry, 14 (2022), 294. https://doi.org/10.3390/sym14020294
[33] J. Tariboon, S. K. Ntouyas, Quantum calculus on finite intervals and applications to impulsive difference equations, Adv. Differ. Equ., 2013 (2013), 282. https://doi.org/10.1186/1687-1847-2013-282
[34] S. Bermudo, P. Korus, J. N. Valdes, On q-Hermite-Hadamard inequalities for general convex functions, Acta Math. Hungar., 162 (2020), 364–374. https://doi.org/10.1007/s10474-020-01025-6
[35] S. S. Dragomir, Some majorization type discrete inequalities for convex functions, Math. Inequal. Appl., 7 (2004), 207–216.
[36] G. H. Hardy, J. E. Littlewood, G. Pólya, Inequalities, Cambridge University Press, 1952.
[37] N. Latif, I. Peric, J. Pecaric, On discrete Favard's and Berwald's inequalities, Commun. Math. Anal., 12 (2012), 34–57.
[38] M. Niezgoda, A generalization of Mercer's result on convex functions, Nonlinear Anal. Theor., 71 (2009), 2771–2779. https://doi.org/10.1016/j.na.2009.01.120
Object | e_1 | e_2 | e_3 | e_4 | e_5 | e_6 | e_7 | e_8 |
x_1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
x_2 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
x_3 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
x_4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
x_5 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 |
x_6 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
Object | e_1 | e_2 | e_3 | e_4 | e_5 |
x_1 | 1 | 0 | 0 | 1 | 1 |
x_2 | 0 | 0 | 0 | 1 | 1 |
x_3 | 0 | 0 | 1 | 1 | 0 |
x_4 | 0 | 0 | 0 | 0 | 0 |
x_5 | 0 | 0 | 0 | 0 | 1 |
x_6 | 1 | 0 | 0 | 0 | 0 |
The first method | The second method | |
\mathrm{L}_1 | \square | \blacksquare |
\mathrm{L}_2 | \square | \square |
\mathrm{L}_3 | \blacksquare | \blacksquare |
\mathrm{L}_4 | \blacksquare | \blacksquare |
\mathrm{L}_5 | \blacksquare | \blacksquare |
\mathrm{L}_6 | \blacksquare | \blacksquare |
\mathrm{L}_7 | \square | \square |
\mathrm{L}_8 | \blacksquare | \blacksquare |
\mathrm{L}_9 | \blacksquare | \square |
\mathrm{U}_1 | \square | \blacksquare |
\mathrm{U}_2 | \blacksquare | \blacksquare |
\mathrm{U}_3 | \blacksquare | \square |
\mathrm{U}_4 | \blacksquare | \blacksquare |
\mathrm{U}_5 | \blacksquare | \blacksquare |
\mathrm{U}_6 | \square | \blacksquare |
\mathrm{U}_7 | \blacksquare | \blacksquare |
\mathrm{U}_8 | \blacksquare | \square |
\mathrm{U}_9 | \square | \square |
Object | e_1 | e_2 | e_3 | e_4 | e_5 | e_6 | e_7 | e_8 |
x_1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |
x_2 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
x_3 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
x_4 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 |
x_5 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 |
x_6 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 |
Patients | e_1 | e_2 | e_3 | e_4 | e_5 | e_6 | e_7 |
x_1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 |
x_2 | 0 | 1 | 1 | 0 | 0 | 1 | 0 |
x_3 | 1 | 0 | 0 | 0 | 1 | 0 | 1 |
x_4 | 1 | 0 | 1 | 0 | 1 | 1 | 0 |
x_5 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
x_6 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
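Read column-wise, the 0/1 patients table above encodes a soft set: each attribute e_j is mapped to the set F(e_j) of patients possessing it. A minimal sketch of this reading (the variable names are ours, not the paper's):

```python
# Rows of the patients table above, as a 0/1 incidence matrix (e1..e7).
objects = ["x1", "x2", "x3", "x4", "x5", "x6"]
rows = [
    [1, 1, 1, 1, 0, 0, 1],  # x1
    [0, 1, 1, 0, 0, 1, 0],  # x2
    [1, 0, 0, 0, 1, 0, 1],  # x3
    [1, 0, 1, 0, 1, 1, 0],  # x4
    [1, 0, 0, 0, 0, 0, 1],  # x5
    [1, 0, 0, 0, 0, 0, 1],  # x6
]

# F maps each attribute e_j to the set of objects whose entry is 1.
F = {
    f"e{j + 1}": {objects[i] for i, row in enumerate(rows) if row[j]}
    for j in range(7)
}
print(F["e4"])  # {'x1'}: only x1 possesses attribute e4
```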
Soft rough set approach [28] | Soft rough set approach [29] | The suggested approach in Definition 3.2 | The suggested approach in Definition 3.5 | |||||
Patients | \underline{apr}(A) | \overline{apr}(A) | \underline{\mathbf{S}R}(A) | \overline{\mathbf{S}R}(A) | \underline{apr_\mathcal{L}}(A) | \overline{apr_\mathcal{L}}(A) | \underline{\mathbf{S}R_\mathcal{L}}(A) | \overline{\mathbf{S}R_\mathcal{L}}(A) |
\{x_1\} | \{x_1\} | X | \{x_1\} | \{x_1, x_5, x_6\} | \{x_1\} | \{x_1\} | \{x_1\} | \{x_1\} |
\{x_1, x_2\} | \{x_1, x_2\} | X | \{x_1, x_2\} | \{x_1, x_2, x_5, x_6\} | \{x_1, x_2\} | \{x_1, x_2\} | \{x_1, x_2\} | \{x_1, x_2\} |
\{x_1, x_3\} | \{x_1\} | X | \{x_1\} | \{x_1, x_3, x_5, x_6\} | \{x_1\} | \{x_1, x_3\} | \{x_1\} | \{x_1, x_3\} |
\{x_1, x_5\} | \{x_1\} | X | \{x_1\} | \{x_1, x_5, x_6\} | \{x_1\} | \{x_1, x_3, x_4, x_5, x_6\} | \{x_1\} | \{x_1, x_5, x_6\} |
\{x_2, x_4\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_4\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_4\} |
\{x_3, x_4\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} |
\{x_1, x_2, x_5\} | \{x_1, x_2\} | X | \{x_1, x_2\} | \{x_1, x_2, x_5, x_6\} | \{x_1, x_2\} | X | \{x_1, x_2\} | \{x_1, x_2, x_5, x_6\} |
\{x_1, x_3, x_5\} | \{x_1\} | X | \{x_1\} | \{x_1, x_3, x_5, x_6\} | \{x_1\} | \{x_1, x_3, x_4, x_5, x_6\} | \{x_1\} | \{x_1, x_3, x_5, x_6\} |
\{x_1, x_5, x_6\} | \{x_1\} | X | \{x_1\} | \{x_1, x_5, x_6\} | \{x_1\} | \{x_1, x_3, x_4, x_5, x_6\} | \{x_1\} | \{x_1, x_5, x_6\} |
\{x_2, x_3, x_4\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} |
\{x_2, x_4, x_5\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} |
\{x_3, x_4, x_5\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} |
\{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} | X | \{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_4, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} | \{x_1, x_3, x_5, x_6\} |
\{x_2, x_3, x_4, x_5\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} |
\{x_2, x_4, x_5, x_6\} | \{x_2, x_4\} | X | \{x_2, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4, x_5, x_6\} | X | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4, x_5, x_6\} |
\{x_3, x_4, x_5, x_6\} | \{x_3, x_4\} | X | \{x_3, x_4\} | \{x_3, x_4, x_5, x_6\} | \{x_3, x_4, x_5, x_6\} | X | \{x_3, x_4, x_5, x_6\} | \{x_3, x_4, x_5, x_6\} |
\{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4\} | X | \{x_2, x_3, x_4\} | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4, x_5, x_6\} | X | \{x_2, x_3, x_4, x_5, x_6\} | \{x_2, x_3, x_4, x_5, x_6\} |
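The \underline{apr}/\overline{apr} columns above can be reproduced from the soft set of the patients table using the classical soft rough approximations of [28]: the lower approximation is the union of all F(e) contained in A, and the upper approximation is the union of all F(e) that meet A. A sketch under that assumption (our code, not the paper's):

```python
# Soft set F read off the 0/1 patients table (attributes e1..e7).
F = {
    "e1": {"x1", "x3", "x4", "x5", "x6"},
    "e2": {"x1", "x2"},
    "e3": {"x1", "x2", "x4"},
    "e4": {"x1"},
    "e5": {"x3", "x4"},
    "e6": {"x2", "x4"},
    "e7": {"x1", "x3", "x5", "x6"},
}

def apr_lower(A):
    """Soft rough lower approximation: union of all F(e) contained in A."""
    out = set()
    for block in F.values():
        if block <= A:
            out |= block
    return out

def apr_upper(A):
    """Soft rough upper approximation: union of all F(e) meeting A."""
    out = set()
    for block in F.values():
        if block & A:
            out |= block
    return out

# First table row, A = {x1}: lower is {x1}, upper is the whole universe X.
print(apr_lower({"x1"}))  # {'x1'}
print(apr_upper({"x1"}) == {"x1", "x2", "x3", "x4", "x5", "x6"})  # True
```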
Approach in [28] | Approach in [29] | Our first approach | Our second approach | |
Patients | Acc_{apr}(A) | Acc_{\mathbf{S}R}(A) | Acc_{apr_\mathcal{L}}(A) | Acc_{\mathbf{S}R_\mathcal{L}}(A) |
\{x_1\} | 1/6 | 1/3 | 1 | 1 |
\{x_1, x_2\} | 1/3 | 1/2 | 1 | 1 |
\{x_1, x_3\} | 1/6 | 1/4 | 1/2 | 1/2 |
\{x_1, x_5\} | 1/6 | 1/3 | 1/5 | 1/3 |
\{x_2, x_4\} | 1/3 | 1 | 1/3 | 1 |
\{x_3, x_4\} | 1/3 | 1/2 | 1/3 | 1/2 |
\{x_1, x_2, x_5\} | 1/3 | 1/2 | 1/3 | 1/2 |
\{x_1, x_3, x_5\} | 1/6 | 1/4 | 1/5 | 1/4 |
\{x_1, x_5, x_6\} | 1/6 | 1/3 | 1/5 | 1/3 |
\{x_2, x_3, x_4\} | 1/2 | 3/5 | 1/2 | 3/5 |
\{x_2, x_4, x_5\} | 1/3 | 2/5 | 1/3 | 2/5 |
\{x_3, x_4, x_5\} | 1/3 | 1/2 | 1/3 | 1/2 |
\{x_1, x_3, x_5, x_6\} | 2/3 | 1 | 4/5 | 1 |
\{x_2, x_3, x_4, x_5\} | 1/2 | 3/5 | 1/2 | 3/5 |
\{x_2, x_4, x_5, x_6\} | 1/3 | 2/5 | 5/6 | 1 |
\{x_3, x_4, x_5, x_6\} | 1/3 | 1/2 | 2/3 | 1 |
\{x_2, x_3, x_4, x_5, x_6\} | 1/2 | 3/5 | 5/6 | 1 |
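The accuracy values above follow the usual measure Acc(A) = |lower(A)| / |upper(A)|; for instance, the first row of the [28] column has lower approximation {x1} and upper approximation X, giving 1/6. A self-contained sketch (the helper name is ours, assuming this standard measure):

```python
from fractions import Fraction

def accuracy(lower, upper):
    """Acc(A) = |lower approximation| / |upper approximation|."""
    return Fraction(len(lower), len(upper))

X = {"x1", "x2", "x3", "x4", "x5", "x6"}
print(accuracy({"x1"}, X))                   # 1/6, first row of the [28] column
print(accuracy({"x2", "x4"}, {"x2", "x4"}))  # 1, as for A = {x2, x4} under [29]
```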
Patients | e_1 | e_2 | e_3 | e_4 | e_5 | Decision |
x_1 | 1 | 1 | 1 | 1 | 0 | yes |
x_2 | 0 | 0 | 0 | 1 | 1 | no |
x_3 | 1 | 1 | 1 | 1 | 1 | yes |
x_4 | 0 | 0 | 0 | 1 | 0 | no |
x_5 | 1 | 0 | 0 | 1 | 1 | no |
x_6 | 1 | 1 | 0 | 1 | 1 | yes |
x_7 | 1 | 0 | 1 | 1 | 0 | yes |