
The idea of multi-attribute group decision making (MAGDM) was put forward as a promising and important field of research at the beginning of the 1970s. Since then, a growing number of contributions have been made to theories and models that support more methodical and logically sound decisions based on a variety of criteria. From one viewpoint, decision making is a problem-solving process that ends with the choice of a solution regarded as the best, or at least a reasonable and acceptable, alternative among a collection of plausible options. MAGDM is a subfield of operations research that focuses on selecting the best option for a given set of criteria by carefully and systematically examining and comparing all of the alternatives. MAGDM problems and their solutions are regularly encountered in a variety of disciplines, including the social sciences, economics, management, and medicine. One of the biggest difficulties in complex MAGDM problems is how to incorporate ambiguous pieces of information, offered by a broad range of sources, into a judgment or conclusion. Numerous surveys, including those by Bana and Costa [1], demonstrate the field's vigour and the variety of methodologies that have been created. A few years later, Bellman, Zadeh, and Zimmermann introduced fuzzy sets into the field, paving the way for a new family of techniques for problems that had previously been inaccessible and unsolvable with conventional MAGDM techniques. There are various variations on the MAGDM theme, depending on the theoretical underpinnings used for the modeling. Cybersecurity is essential because it protects data from theft and destruction, including sensitive data, personally identifiable information (PII), protected health information (PHI), personal data, information relating to intellectual property, and the computer networks used by government and industry. Without a cybersecurity programme, a business cannot defend itself against data breach operations, making it an inevitable target for cybercriminals.
Both inherent risk and residual risk are increasing as a result of improved worldwide connectivity and the use of cloud services such as Amazon Web Services to store private and sensitive data. Widespread inadequate design and configuration of cloud services, combined with increasingly skilled cybercriminals, make it more likely that a company will suffer a successful cyberattack or data breach. Business executives cannot rely exclusively on standard cybersecurity tools such as firewalls and antivirus software, because hackers are growing more cunning and their strategies are becoming more resistant to traditional cyber defenses. To stay well protected, it is crucial to cover all aspects of cybersecurity. A cyber threat can arise at any level of an organisation. To educate personnel about typical cyber threats, including social engineering scams, phishing, ransomware attacks (such as WannaCry), and other programmes designed to steal sensitive data, workplaces must offer cybersecurity awareness training. Because data breaches are so prevalent, cybersecurity is essential across all industries, not just those with strict regulations such as the health care sector. After a data breach, even small firms run the risk of having their reputations permanently damaged.
To help readers understand the importance of cybersecurity, we describe several components of cybercrime that may not be widely appreciated. Cybersecurity is the practice of preventing and responding to attacks on computer systems, networks, hardware, and software. Sensitive data is at risk from increasingly sophisticated and dynamic cyberattacks, which use cutting-edge methods that combine social engineering and artificial intelligence (AI) to bypass well-established data protection safeguards. The world is becoming more and more dependent on technology, and this dependence will only grow as new technologies connect to our devices via Bluetooth and Wi-Fi. Intelligent cloud security solutions should be used in conjunction with stringent password rules, such as multi-factor authentication, to limit unauthorised access and protect customer data when adopting new technologies. Information theft is the most expensive and fastest-growing type of cybercrime, largely as a result of the role cloud services play in exposing identity information on the web, but it is not the only threat. Industrial controls, which are susceptible to disruption or destruction, are used to regulate power grids and other infrastructure. Cyberattacks may also aim to compromise data integrity (destroying or changing data) in order to cause strife within a business or government, with identity theft as a secondary goal. As they gain experience, cybercriminals change the targets they select, the methods by which they affect enterprises, and the manner in which they attack different security systems. Here, we go over several security measures and their difficulties.
A fuzzy set (FS) is a mathematical representation of a group of elements (objects) with fuzzy boundaries, which allows for a gradual change in an element's degree of belonging to the group, from full membership to non-membership. Fuzzy set (FS) theory provides a way to mathematically express the fuzzy concepts that people use to describe how they perceive real systems, their preferences, and their goals. Fuzzy decision theory can be applied to choose the best security system for protection against hackers. Selecting the best option among security systems involves many challenges that are inherently vague, and classical or crisp methods are not always effective in such unstructured decision-making situations. Zadeh [2] developed FSs in 1965 as a technique for managing this kind of imprecision.
In FSs, Zadeh assigns membership grades in the range [0, 1] to the elements of a set. Zadeh's work is noteworthy because many of the set-theoretic notions of crisp sets were extended to FSs. An improved version of the FS that contains both membership and non-membership degrees, the intuitionistic fuzzy set (IFS), was the subject of Atanassov's [3] research. Over the last few decades, IFSs have proven useful and have frequently been used by researchers to assess uncertainty and instability in data. Torra [5] developed the hesitant fuzzy set (HFS), which goes beyond the preceding classical fuzzy set extensions by allowing the membership to be a collection of possible values. A new model based on HFSs was recently introduced to handle circumstances where experts are split between several possibilities for an indicator, alternative, element, etc. [6,7]. HFSs are particularly effective at addressing group decision-making problems in which experts hesitate between several potential memberships for an element of a series of decisions [8]. Many extensions of the HFS have been introduced to handle more complex environments [34], including the interval-valued hesitant fuzzy set [9,10], the hesitant triangular fuzzy set [11,12], the hesitant multiplicative set [13], the hesitant fuzzy linguistic term set [14], the hesitant fuzzy uncertain linguistic set [15], the dual HFS [16,17], and the generalized HFS [18]. Several scholars have used aggregation operators to apply the HFS notion in group decision-making settings [19,20,21,22]. The neutrosophic set (NS), a philosophical field and mathematical instrument for understanding the genesis, nature, and range of neutralities, was first put forward by Smarandache [23]. It examines the origins, character, and scope of neutralities, as well as how they interact with other ideational spectra.
The NS generalizes the concepts of the classical set [27], fuzzy set, interval-valued fuzzy set, intuitionistic interval-valued fuzzy set [28], dialetheist set, paradoxist set, and tautological set [29]. A NS is characterized by a membership degree βℜ(ϰ), an indeterminacy degree αℜ(ϰ) and a non-membership degree γℜ(ϰ), where βℜ(ϰ), αℜ(ϰ) and γℜ(ϰ) are elements of ]0−,1+[. Although the NS philosophically generalizes the notions of FS, IFS, and all the existing structures, it is challenging to implement in real-world scientific and engineering situations. This concept is critical in many contexts, such as information fusion, where data from several sensors are integrated. Recently, neutrosophic sets have primarily been used for decision making in engineering and other sectors. Wang et al. [30] proposed the single-valued neutrosophic set (SV-NS), which can handle inaccurate, indeterminate, and incompatible data; many other researchers have defined its extensions, see for example [31]. On the one hand, an SV-NS is a NS that allows us to express the ambiguity, imprecision, incompleteness, and inconsistency of the real world, and it is well suited to handling uncertain information and inconsistent information matrices in decision making [32,33,35]. On the other hand, SV-NSs can be employed in scientific and technical applications since SV-NS theory is useful for modelling ambiguous, imprecise, and inconsistent data [36,37]. The SV-NS is suitable for collecting imprecise, unclear, and inconsistent information in multi-criteria decision-making analysis because it easily captures the ambiguous character of subjective judgments. Many researchers have worked on operators for NSs, such as Dombi operators [38] and Einstein operators [39], among others, and these operators have been used in decision making [40,41].
Motivation
The security categorization is used in the security control selection process to choose the initial baseline of security controls (i.e., low or moderate) that will adequately safeguard the data and information systems housed within the cloud service environment. Based on a risk assessment or an organization-specific security requirement, a cloud service may call for alternative or compensating security controls that were not part of the initial baseline, or for additional security controls or enhancements to address specific organisational needs. To accomplish this, the Control Tailoring Workbook (CTW) gives the CSP a list of the FedRAMP security controls applicable to the cloud environment and helps identify the exception scenarios for the service offering. This allows the platform to be pre-qualified before resources are used to develop all of the other necessary FedRAMP documentation. Modern governance, risk, and compliance (GRC) solutions regularly monitor your security systems and the procedures required for the GRC programme; these duties may include gathering evidence, risk assessment, vendor risk management, staff training, and gap analysis. By actively protecting your data and helping you remain compliant, such solutions help earn the trust of clients, business associates, suppliers, and investors. However, there are several GRC tools on the market, each claiming to be the best, which can make the search for GRC solutions confusing. Therefore, to save time and narrow the search for the best GRC tools, we have chosen the top four and, using the SV-NHF environment, we choose the best option according to our system requirements.
In this research work, we apply Einstein aggregation operators (AOs) to the SV-NHFS environment, namely the SV-NHFEWA, SV-NHFEOWA, SV-NHFEHWA, SV-NHFEWG, SV-NHFEOWG, and SV-NHFEHWG operators. Idempotency, boundedness, and monotonicity are among the properties of the proposed operators that have been established. The main benefit of these operators is that they bring SV-NS aggregation into hesitant scenarios; without SV-NHFE AOs, hesitant information could not be aggregated adequately.
The remainder of this study is organised as follows: Section 1 briefly explains some fundamental concepts of SV-FS, HFS, and SV-NHFS theory. Section 2 explains the basic notation and ideas. Sections 3 and 4, respectively, introduce the notion of SV-neutrosophic hesitant fuzzy sets (SV-NHFSs) and the Einstein aggregation operations on them. Section 5 provides a collection of algebraic SV-NHF Einstein aggregation operators for aggregating uncertain data in decision making. Section 6 concludes the manuscript.
In this section, we review the basics of fuzzy sets, intuitionistic fuzzy sets, hesitant fuzzy sets, and neutrosophic sets. These concepts will be used in the later sections.
Definition 1. [2] For a fixed set Ξ, a FS ℜ in Ξ is defined as
ℜ={⟨δℓ,Δℜ(δℓ)⟩|δℓ∈Ξ}, |
for each δℓ∈Ξ, the membership degree (MD) Δℜ:Ξ→Δ specifies the degree to which the element δℓ belongs to ℜ, where Δ=[0,1] is the unit interval.
Definition 2. [3] For a fixed set Ξ, an IFS ℜ in Ξ is defined as
ℜ={⟨δℓ,Δℜ(δℓ),▽ℜ(δℓ)⟩|δℓ∈Ξ}, |
Δℜ(δℓ) is known as the MD and ▽ℜ(δℓ) is the non-MD, where Δℜ,▽ℜ:Ξ⟶[0,1]. Moreover, it is required that 0≤Δℜ(δℓ)+▽ℜ(δℓ)≤1 for each δℓ∈Ξ.
Definition 3. [4] For a fixed set Ξ, a HFS ℜ in Ξ is defined as
ℜ={⟨δℓ,Δℜhϰ(δℓ)⟩|δℓ∈Ξ} |
where Δℜhϰ(δℓ) is a set of some possible values in the unit interval [0,1], representing the possible MDs of δℓ∈Ξ in ℜ.
Definition 4. [23] Suppose Ξ is a fixed set and Υ∈Ξ. A NS ϰ in Ξ is characterized by an MD Δϰ(Υ), an indeterminacy degree Λϰ(Υ) and a non-MD ∇ϰ(Υ), where Δϰ(Υ), Λϰ(Υ) and ∇ϰ(Υ) are subsets of ]0−,1+[ and
Δϰ(Υ),Λϰ(Υ),∇ϰ(Υ):Ξ⟶]0−,1+[. |
The representation of NS ϰ is mathematically defined as:
ϰ={⟨Υ,Δϰ(Υ),Λϰ(Υ),∇ϰ(Υ))⟩|Υ∈Ξ}, |
where
0−<Δϰ(Υ)+Λϰ(Υ)+∇ϰ(Υ)≤3+. |
Definition 5. [30] Let Ξ be a fixed set and Υ∈Ξ. A SV-NS A in Ξ is characterized by an MD ΔA(Υ), an indeterminacy degree ΛA(Υ) and a non-MD ∇A(Υ), where ΔA(Υ), ΛA(Υ) and ∇A(Υ) take values in [0,1], and
ΔA(Υ),ΛA(Υ),∇A(Υ):Ξ⟶[0,1]. |
The representation of SV-NS A is mathematically defined as:
A={⟨Υ,ΔA(Υ),ΛA(Υ),∇A(Υ))⟩|Υ∈Ξ}, |
where
0<ΔA(Υ)+ΛA(Υ)+∇A(Υ)≤3. |
Definition 6. [24,25] Let Ξ be a fixed set. The representation of SV-NHFS ℜ is mathematically defined as:
ℜ={⟨Υ,ΔℓΞ(Υ),ΛℓΞ(Υ),∇ℓΞ(Υ)⟩|Υ∈ℜ} |
where ΔℓΞ(Υ), ΛℓΞ(Υ) and ∇ℓΞ(Υ) are sets of some values in [0,1], indicating the hesitant grades of membership, indeterminacy and non-membership of the element Υ with respect to the set Ξ.
Definition 7. [26] For a fixed set Ξ, the SV-NHFS ℜ is represented mathematically as follows:
ℜ={⟨Υ,Δℓℜ(Υ),Λℓℜ(Υ),∇ℓℜ(Υ)⟩|Υ∈Ξ}, |
where Δℓℜ(Υ), Λℓℜ(Υ) and ∇ℓℜ(Υ) are sets of some values in [0,1] and denote the MD, indeterminacy and non-MD, respectively. They satisfy the following properties:
∀Υ∈Ξ:∀μℜ(Υ)∈Δℓℜ(Υ),∀λℜ(Υ)∈Λℓℜ(Υ),∀νℜ(Υ)∈▽ℓℜ(Υ) |
and
∀νℜ(Υ)∈∇ℓℜ(Υ)with(max(Δℓℜ(Υ)))+(min(Λℓℜ(Υ)))+(min(∇ℓℜ(Υ)))≤3, |
and
(min(Δℓℜ(Υ)))+(min(Λℓℜ(Υ)))+(max(∇ℓℜ(Υ)))≤3. |
For simplicity, we will write the pair ℜ=(Δℓℜ,Λℓℜ,∇ℓℜ) for an SV-NHFS and refer to it as an SV-neutrosophic hesitant fuzzy number (SV-NHFN).
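To make this structure concrete, the following minimal Python sketch (our own illustration, not part of the paper) represents an SV-NHFN as three tuples of hesitant values in [0,1] and checks the constraints of Definition 7; the class name SVNHFN and the helper is_valid are assumptions introduced here, and the example instance reuses the entry for δ1 under C̊1 from Table 1 below.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class SVNHFN:
    """Single-valued neutrosophic hesitant fuzzy number: hesitant membership,
    indeterminacy and non-membership value sets (all values in [0, 1])."""
    membership: Tuple[float, ...]       # hesitant membership degrees
    indeterminacy: Tuple[float, ...]    # hesitant indeterminacy degrees
    non_membership: Tuple[float, ...]   # hesitant non-membership degrees

    def is_valid(self) -> bool:
        """Check the value range and the two sum constraints of Definition 7."""
        values = self.membership + self.indeterminacy + self.non_membership
        if not all(0.0 <= v <= 1.0 for v in values):
            return False
        c1 = max(self.membership) + min(self.indeterminacy) + min(self.non_membership)
        c2 = min(self.membership) + min(self.indeterminacy) + max(self.non_membership)
        return c1 <= 3.0 and c2 <= 3.0


# Example: the assessment of alternative delta_1 under criterion C_1 in Table 1.
d11 = SVNHFN((0.25, 0.35), (0.31,), (0.33, 0.59))
assert d11.is_valid()
```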
Definition 8. Let
ϖ1=(Δℓϖ1,Λℓϖ1,∇ℓϖ1) |
and
ϖ2=(Δℓϖ2,Λℓϖ2,∇ℓϖ2) |
be two SV-NHFNs. The fundamental set-theoretic operations are as follows:
ϖ1∪ϖ2={⋃μ1∈Δℓϖ1μ2∈Δℓϖ2max(μ1,μ2),⋃ν1∈Λℓϖ1ν2∈Λℓϖ2min(ν1,ν2),⋃λ1∈∇ℓϖ1λ2∈∇ℓϖ2min(λ1,λ2)}; |
ϖ1∩ϖ2={⋃μ1∈Δℓϖ1μ2∈Δℓϖ2min(μ1,μ2),⋃ν1∈Λℓϖ1ν2∈Λℓϖ2max(ν1,ν2),⋃λ1∈∇ℓϖ1λ2∈∇ℓϖ2max(λ1,λ2)}; |
ϖc1={∇ℓϖ1,Λℓϖ1,Δℓϖ1}. |
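As an illustration only, the set-theoretic operations of Definition 8 can be sketched on the SVNHFN class introduced above; the helper names union, intersection and complement are ours, and each operation ranges over all combinations of hesitant values.

```python
def union(a: SVNHFN, b: SVNHFN) -> SVNHFN:
    """Pairwise max of memberships, pairwise min of indeterminacies and
    non-memberships, over all combinations of hesitant values."""
    return SVNHFN(
        tuple(max(x, y) for x in a.membership for y in b.membership),
        tuple(min(x, y) for x in a.indeterminacy for y in b.indeterminacy),
        tuple(min(x, y) for x in a.non_membership for y in b.non_membership),
    )


def intersection(a: SVNHFN, b: SVNHFN) -> SVNHFN:
    """Dual of union: min on memberships, max on the other two components."""
    return SVNHFN(
        tuple(min(x, y) for x in a.membership for y in b.membership),
        tuple(max(x, y) for x in a.indeterminacy for y in b.indeterminacy),
        tuple(max(x, y) for x in a.non_membership for y in b.non_membership),
    )


def complement(a: SVNHFN) -> SVNHFN:
    """Swap the membership and non-membership value sets."""
    return SVNHFN(a.non_membership, a.indeterminacy, a.membership)
```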
The application of t-norms in FS theory to model the intersection of two FSs is widely recognized, and t-conorms are used to model disjunction or union. They provide simple interpretations of conjunction and disjunction in mathematical fuzzy logic and are used to combine criteria in MCDM. The Einstein sum (⊕ϵ) and Einstein product (⊗ϵ) are examples of a t-conorm and a t-norm, respectively, and are stated in the SV-NHF environment as follows.
\begin{equation*} \tilde{N}\oplus _{\epsilon }\check{S} = \frac{\tilde{N}+\check{S}}{1+\tilde{N}\check{S}}, \qquad \tilde{N}\otimes _{\epsilon }\check{S} = \frac{\tilde{N}\check{S}}{1+(1-\tilde{N})(1-\check{S})}. \end{equation*}
Based on the above Einstein operations, we define the following new operations in the SV-NHF environment.
Definition 9. Let ℜ1=(υhℓ1,τhℓ1,Υhℓ1) and ℜ2=(υhℓ2,τhℓ2,Υhℓ2) be two SV-NHFNs and let η>0; then
\begin{equation*} \Re _{1}\oplus \Re _{2} = \left\{ \bigcup\limits_{\Xi _{1}\in \upsilon _{h_{\ell _{1}}}(l_{\ell }), \Xi _{2}\in \upsilon _{h_{\ell _{2}}}(l_{\ell })}\frac{\Xi _{1}+\Xi _{2}}{1+\Xi _{1}\Xi _{2}}, \bigcup\limits_{\kappa _{1}\in \tau _{h_{\ell _{1}}}(l_{\ell }), \kappa _{2}\in \tau _{h_{\ell _{2}}}(l_{\ell })}\frac{\kappa _{1}\kappa _{2}}{1+(1-\kappa _{1})(1-\kappa _{2})}, \bigcup\limits_{\chi _{1}\in \Upsilon _{h_{\ell _{1}}}(l_{\ell }), \chi _{2}\in \Upsilon _{h_{\ell _{2}}}(l_{\ell })}\frac{\chi _{1}\chi _{2}}{1+(1-\chi _{1})(1-\chi _{2})}\right\} ; \end{equation*}
\begin{equation*} \Re _{1}\otimes \Re _{2} = \left\{ \bigcup\limits_{\Xi _{1}\in \upsilon _{h_{\ell _{1}}}(l_{\ell }), \Xi _{2}\in \upsilon _{h_{\ell _{2}}}(l_{\ell })}\frac{\Xi _{1}\Xi _{2}}{1+(1-\Xi _{1})(1-\Xi _{2})}, \bigcup\limits_{\kappa _{1}\in \tau _{h_{\ell _{1}}}(l_{\ell }), \kappa _{2}\in \tau _{h_{\ell _{2}}}(l_{\ell })}\frac{\kappa _{1}+\kappa _{2}}{1+\kappa _{1}\kappa _{2}}, \bigcup\limits_{\chi _{1}\in \Upsilon _{h_{\ell _{1}}}(l_{\ell }), \chi _{2}\in \Upsilon _{h_{\ell _{2}}}(l_{\ell })}\frac{\chi _{1}+\chi _{2}}{1+\chi _{1}\chi _{2}}\right\} ; \end{equation*}
\begin{equation*} \eta \Re _{1} = \left\{ \bigcup\limits_{\Xi _{1}\in \upsilon _{h_{\ell _{1}}}(l_{\ell })}\frac{(1+\Xi _{1})^{\eta }-(1-\Xi _{1})^{\eta }}{(1+\Xi _{1})^{\eta }+(1-\Xi _{1})^{\eta }}, \bigcup\limits_{\kappa _{1}\in \tau _{h_{\ell _{1}}}(l_{\ell })}\frac{2\kappa _{1}^{\eta }}{(2-\kappa _{1})^{\eta }+\kappa _{1}^{\eta }}, \bigcup\limits_{\chi _{1}\in \Upsilon _{h_{\ell _{1}}}(l_{\ell })}\frac{2\chi _{1}^{\eta }}{(2-\chi _{1})^{\eta }+\chi _{1}^{\eta }}\right\} ; \end{equation*}
\begin{equation*} \Re _{1}^{\eta } = \left\{ \bigcup\limits_{\Xi _{1}\in \upsilon _{h_{\ell _{1}}}(l_{\ell })}\frac{2\Xi _{1}^{\eta }}{(2-\Xi _{1})^{\eta }+\Xi _{1}^{\eta }}, \bigcup\limits_{\kappa _{1}\in \tau _{h_{\ell _{1}}}(l_{\ell })}\frac{(1+\kappa _{1})^{\eta }-(1-\kappa _{1})^{\eta }}{(1+\kappa _{1})^{\eta }+(1-\kappa _{1})^{\eta }}, \bigcup\limits_{\chi _{1}\in \Upsilon _{h_{\ell _{1}}}(l_{\ell })}\frac{(1+\chi _{1})^{\eta }-(1-\chi _{1})^{\eta }}{(1+\chi _{1})^{\eta }+(1-\chi _{1})^{\eta }}\right\} . \end{equation*}
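A hedged computational reading of Definition 9, reusing the SVNHFN sketch above: each Einstein law is applied over all combinations of hesitant values. The helper names (einstein_sum, einstein_product, einstein_scalar, einstein_power) are ours, and the formulas follow the operations as reconstructed above.

```python
def einstein_sum(a: SVNHFN, b: SVNHFN) -> SVNHFN:
    """Einstein sum of two SV-NHFNs, element-wise over hesitant values."""
    return SVNHFN(
        tuple((x + y) / (1 + x * y) for x in a.membership for y in b.membership),
        tuple(x * y / (1 + (1 - x) * (1 - y)) for x in a.indeterminacy for y in b.indeterminacy),
        tuple(x * y / (1 + (1 - x) * (1 - y)) for x in a.non_membership for y in b.non_membership),
    )


def einstein_product(a: SVNHFN, b: SVNHFN) -> SVNHFN:
    """Einstein product of two SV-NHFNs (dual of the Einstein sum)."""
    return SVNHFN(
        tuple(x * y / (1 + (1 - x) * (1 - y)) for x in a.membership for y in b.membership),
        tuple((x + y) / (1 + x * y) for x in a.indeterminacy for y in b.indeterminacy),
        tuple((x + y) / (1 + x * y) for x in a.non_membership for y in b.non_membership),
    )


def einstein_scalar(eta: float, a: SVNHFN) -> SVNHFN:
    """Scalar multiple eta * a under the Einstein laws."""
    f = lambda x: ((1 + x) ** eta - (1 - x) ** eta) / ((1 + x) ** eta + (1 - x) ** eta)
    g = lambda x: 2 * x ** eta / ((2 - x) ** eta + x ** eta)
    return SVNHFN(tuple(f(x) for x in a.membership),
                  tuple(g(x) for x in a.indeterminacy),
                  tuple(g(x) for x in a.non_membership))


def einstein_power(a: SVNHFN, eta: float) -> SVNHFN:
    """Power a ** eta under the Einstein laws (dual of einstein_scalar)."""
    f = lambda x: ((1 + x) ** eta - (1 - x) ** eta) / ((1 + x) ** eta + (1 - x) ** eta)
    g = lambda x: 2 * x ** eta / ((2 - x) ** eta + x ** eta)
    return SVNHFN(tuple(g(x) for x in a.membership),
                  tuple(f(x) for x in a.indeterminacy),
                  tuple(f(x) for x in a.non_membership))
```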
In this section, we present several novel Einstein operators for SV-NHFNs, namely the SV-neutrosophic hesitant fuzzy Einstein weighted averaging (SV-NHFEWA) operator, the SV-neutrosophic hesitant fuzzy Einstein ordered weighted averaging (SV-NHFEOWA) operator, the SV-neutrosophic hesitant fuzzy Einstein weighted geometric (SV-NHFEWG) operator, and the SV-neutrosophic hesitant fuzzy Einstein ordered weighted geometric (SV-NHFEOWG) operator.
Definition 10. Let ℜˆȷ=(υhℓˆȷ,τhℓˆȷ,Υhℓˆȷ) (ˆȷ=1,2,.....,˜ı) be a collection of SV-NHFNs and let ℑ=(ℑ1,ℑ2,....,ℑ˜ı)T be the weight vector of the ℜˆȷ, with ℑˆȷ∈[0,1] and ∑˜ıˆȷ=1ℑˆȷ=1. Then SV-NHFEWA:SV-NHFN˜ı⟶ SV-NHFN such that
SV−NHFEWA(ℜ1,ℜ2,....,ℜ˜ı)=ℑ1.εℜ1⊕εℑ2.εℜ2⊕ε.....⊕εℑ˜ı.εℜ˜ı |
is called the SV-neutrosophic hesitant fuzzy Einstein weighted averaging operator.
Theorem 1. Let δˆȷ=(υhℓˆȷ,τhℓˆȷ,Υhℓˆȷ) (ˆȷ=1,2,.....,˜ı) be a collection of SV-NHFNs. Then the aggregated value obtained using the SV-NHFEWA operator is
\begin{equation*} SV-NHFEWA(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) = \left( \begin{array}{c} \bigcup\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{\delta _{\hat{\jmath}}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}} \end{array} \right), \end{equation*}
where ℑ=(ℑ1,ℑ2,....,ℑ˜ı)T is the weight vector of the δˆȷ, with ℑˆȷ∈[0,1] and ∑˜ıˆȷ=1ℑˆȷ=1.
Proof. We will demonstrate the theorem by mathematical induction. For ˜ı=2
SV−NHFEWA(δ1,δ2)=ℑ1.εδ1⊕εℑ2.εδ2. |
Since both ℑ1.εδ1 and ℑ2.εδ2 are SV-NHFNs, their Einstein sum ℑ1.εδ1⊕εℑ2.εδ2 is also an SV-NHFN, where
\begin{equation*} \Im _{1}._{\varepsilon }\delta _{1} = \left( \bigcup\limits_{\Xi _{\delta _{1}}\in \upsilon _{\delta _{1}}}\frac{\left( 1+\Xi _{\delta _{1}}\right) ^{\Im _{1}}-\left( 1-\Xi _{\delta _{1}}\right) ^{\Im _{1}}}{\left( 1+\Xi _{\delta _{1}}\right) ^{\Im _{1}}+\left( 1-\Xi _{\delta _{1}}\right) ^{\Im _{1}}}, \bigcup\limits_{\kappa _{\delta _{1}}\in \tau _{\delta _{1}}}\frac{2\left( \kappa _{\delta _{1}}\right) ^{\Im _{1}}}{\left( 2-\kappa _{\delta _{1}}\right) ^{\Im _{1}}+\left( \kappa _{\delta _{1}}\right) ^{\Im _{1}}}, \bigcup\limits_{\chi _{\delta _{1}}\in \Upsilon _{\delta _{1}}}\frac{2\left( \chi _{\delta _{1}}\right) ^{\Im _{1}}}{\left( 2-\chi _{\delta _{1}}\right) ^{\Im _{1}}+\left( \chi _{\delta _{1}}\right) ^{\Im _{1}}}\right) ; \end{equation*}
\begin{equation*} \Im _{2}._{\varepsilon }\delta _{2} = \left( \bigcup\limits_{\Xi _{\delta _{2}}\in \upsilon _{\delta _{2}}}\frac{\left( 1+\Xi _{\delta _{2}}\right) ^{\Im _{2}}-\left( 1-\Xi _{\delta _{2}}\right) ^{\Im _{2}}}{\left( 1+\Xi _{\delta _{2}}\right) ^{\Im _{2}}+\left( 1-\Xi _{\delta _{2}}\right) ^{\Im _{2}}}, \bigcup\limits_{\kappa _{\delta _{2}}\in \tau _{\delta _{2}}}\frac{2\left( \kappa _{\delta _{2}}\right) ^{\Im _{2}}}{\left( 2-\kappa _{\delta _{2}}\right) ^{\Im _{2}}+\left( \kappa _{\delta _{2}}\right) ^{\Im _{2}}}, \bigcup\limits_{\chi _{\delta _{2}}\in \Upsilon _{\delta _{2}}}\frac{2\left( \chi _{\delta _{2}}\right) ^{\Im _{2}}}{\left( 2-\chi _{\delta _{2}}\right) ^{\Im _{2}}+\left( \chi _{\delta _{2}}\right) ^{\Im _{2}}}\right) . \end{equation*}
Then
SV−NHFEWA(δ1,δ2)=ℑ1.εδ1⊕εℑ2.εδ2=(⋃Ξ1∈υδ1(1+Ξδ1)ℑ1−(1−Ξδ1)ℑ1(1+Ξδ1)ℑ1+(1−Ξδ1)ℑ1,⋃κδ1∈τδ12(κδ1)ℑ1(2−κδ1)ℑ1+(κδ1)ℑ1,⋃χδ1∈Υδ12(κδ1)ℑ1(2−χδ1)ℑ1+(χδ1)ℑ1)⊕ε(⋃Ξ2∈υδ2(1+Ξδ2)ℑ2−(1−Ξδ2)ℑ2(1+Ξδ2)ℑ2+(1−Ξδ2)ℑ2,⋃κδ2∈τδ22(κδ2)ℑ2(2−κδ2)ℑ2+(κδ2)ℑ2,⋃χδ2∈Υδ22(χδ2)ℑ2(2−χδ2)ℑ2+(χδ2)ℑ2). |
=(⋃Ξ1∈υδ1Ξ2∈υδ2(1+Ξδ1)ℑ1−(1−Ξδ1)ℑ1(1+Ξδ1)ℑ1+(1−Ξδ1)ℑ1+(1+Ξδ2)ℑ2−(1−Ξδ2)ℑ2(1+Ξδ2)ℑ2+(1−Ξδ2)ℑ21+((1+Ξδ1)ℑ1−(1−Ξδ1)ℑ1(1+Ξδ1)ℑ1+(1−Ξδ1)ℑ1).ε((1+Ξδ2)ℑ2−(1−Ξδ2)ℑ2(1+Ξδ2)ℑ2+(1−Ξδ2)ℑ2),⋃κδ1∈τδ1κδ2∈τδ2(2(κδ1)ℑ1(2−κδ1)ℑ1+(κδ1)ℑ1).ε(2(κδ2)ℑ2(2−κδ2)ℑ2+(κδ2)ℑ2)1+(1−2(κδ1)ℑ1(2−κδ1)ℑ1+(κδ1)ℑ1).ε(1−2(κδ2)ℑ2(2−κδ2)ℑ2+(κδ2)ℑ2),⋃χδ1∈Υδ1χδ2∈τδ2(2(χδ1)ℑ1(2−χδ1)ℑ1+(χδ1)ℑ1).ε(2(χδ2)ℑ2(2−χδ2)ℑ2+(χδ2)ℑ2)1+(1−2(χδ1)ℑ1(2−χδ1)ℑ1+(χδ1)ℑ1).ε(1−2(χδ2)ℑ2(2−χδ2)ℑ2+(χδ2)ℑ2))=(⋃Ξ1∈υδ1Ξ2∈υδ2(1+Ξδ1)ℑ1−.ε(1+Ξδ2)ℑ2−(1−Ξδ1)ℑ1.ε(1−Ξδ2)ℑ2(1+Ξδ1)ℑ1−.ε(1+Ξδ2)ℑ2+(1−Ξδ1)ℑ1.ε(1−Ξδ2)ℑ2,⋃κδ1∈τδ1κδ2∈τδ22(κδ1)ℑ1(κδ2)ℑ2(2−κδ1)ℑ1.ε(2−κδ2)ℑ2+(κδ1)ℑ1.ε(κδ2)ℑ2,⋃χδ1∈Υδ1χδ2∈τδ22(χδ1)ℑ1(χδ2)ℑ2(2−χδ1)ℑ1.ε(2−χδ2)ℑ2+(χδ1)ℑ1.ε(χδ2)ℑ2). |
Thus, the result holds for ˜ı=2. Assume that the result holds for ˜ı=ℷ, i.e.,
\begin{equation*} SV-NHFEWA(\delta _{1}, \delta _{2}, ...., \delta _{\gimel }) = \left( \begin{array}{c} \bigcup\limits_{\Xi _{\hat{\jmath}}\in \upsilon _{\delta _{\hat{\jmath}}}}\frac{\Pi _{\hat{\jmath} = 1}^{\gimel }\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\gimel }\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\gimel }\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\gimel }\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\gimel }\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\gimel }\left( 2-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\gimel }\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\gimel }\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\gimel }\left( 2-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\gimel }\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}} \end{array} \right) . \end{equation*}
Now we prove that it holds for ˜ı=ℷ+1:
SV−NHFEWA(δ1,δ2,....,δℷ+1)=SV−NHFEWA(δ1,δ2,....,δℷ)⊕εℑℷ+1.εδℷ+1=(⋃Ξˆȷ∈υδˆȷΠℷˆȷ=1(1+Ξδˆȷ)ℑˆȷ−Πℷˆȷ=1(1−Ξδˆȷ)ℑˆȷΠℷˆȷ=1(1+Ξδˆȷ)ℑˆȷ+Πℷˆȷ=1(1−Ξδˆȷ)ℑˆȷ,⋃κδˆȷ∈τδˆȷ2Πℷˆȷ=1(κδˆȷ)ℑˆȷΠℷˆȷ=1(2−κδˆȷ)ℑˆȷ+Πℷˆȷ=1(κδˆȷ)ℑˆȷ,⋃χδˆȷ∈Υδˆȷ2Πℷˆȷ=1(χδˆȷ)ℑˆȷΠℷˆȷ=1(2−χδˆȷ)ℑˆȷ+Πℷˆȷ=1(χδˆȷ)ℑˆȷ)⊕ε(⋃Ξℷ+1∈υδℷ+1(1+Ξδℷ+1)ℑℷ+1−(1−Ξδℷ+1)ℑℷ+1(1+Ξδℷ+1)ℑℷ+1+(1−Ξδℷ+1)ℑℷ+1,⋃κδℷ+1∈τδℷ+12(κδℷ+1)ℑℷ+1(2−κδℷ+1)ℑℷ+1+(κδℷ+1)ℑℷ+1,⋃χδℷ+1∈Υδℷ+12(κδℷ+1)ℑℷ+1(2−χδℷ+1)ℑℷ+1+(χδℷ+1)ℑℷ+1)=(⋃Ξˆȷ∈υδˆȷΠℷ+1ˆȷ=1(1+Ξδˆȷ)ℑˆȷ−Πℷ+1ˆȷ=1(1−Ξδˆȷ)ℑˆȷΠℷ+1ˆȷ=1(1+Ξδˆȷ)ℑˆȷ+Πℷ+1ˆȷ=1(1−Ξδˆȷ)ℑˆȷ,⋃κδˆȷ∈τδˆȷ2Πℷ+1ˆȷ=1(κδˆȷ)ℑˆȷΠℷ+1ˆȷ=1(2−κδˆȷ)ℑˆȷ+Πℷ+1ˆȷ=1(κδˆȷ)ℑˆȷ,⋃χδˆȷ∈Υδˆȷ2Πℷ+1ˆȷ=1(χδˆȷ)ℑˆȷΠℷ+1ˆȷ=1(2−χδˆȷ)ℑˆȷ+Πℷ+1ˆȷ=1(χδˆȷ)ℑˆȷ). |
Thus
\begin{equation*} SV-NHFEWA(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) = \left( \begin{array}{c} \bigcup\limits_{\Xi _{\hat{\jmath}}\in \upsilon _{\delta _{\hat{\jmath}}}} \frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{^{\Im _{\hat{\jmath}}}}}, \\ \bigcup\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{^{\Im _{\hat{\jmath}}}}} \end{array} \right). \end{equation*}
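For illustration, the closed form of Theorem 1 can be coded directly; this is a minimal sketch assuming the SVNHFN class introduced earlier and a weight tuple summing to one, with function names that are ours rather than the paper's.

```python
from itertools import product
from math import prod


def sv_nhfewa(deltas, weights):
    """Closed-form SV-NHFEWA aggregation of a sequence of SVNHFN values."""

    def averaging_component(value_sets):
        # Membership part: ((prod(1+x)^w - prod(1-x)^w) / (prod(1+x)^w + prod(1-x)^w))
        out = []
        for combo in product(*value_sets):
            p_plus = prod((1 + x) ** w for x, w in zip(combo, weights))
            p_minus = prod((1 - x) ** w for x, w in zip(combo, weights))
            out.append((p_plus - p_minus) / (p_plus + p_minus))
        return tuple(out)

    def product_component(value_sets):
        # Indeterminacy / non-membership part: 2*prod(x)^w / (prod(2-x)^w + prod(x)^w)
        out = []
        for combo in product(*value_sets):
            p_val = prod(x ** w for x, w in zip(combo, weights))
            p_dual = prod((2 - x) ** w for x, w in zip(combo, weights))
            out.append(2 * p_val / (p_dual + p_val))
        return tuple(out)

    return SVNHFN(
        averaging_component([d.membership for d in deltas]),
        product_component([d.indeterminacy for d in deltas]),
        product_component([d.non_membership for d in deltas]),
    )
```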
The SV-NHFEWA operator satisfies the following properties.
Theorem 2. Let \delta _{\hat{\jmath}} = \left(\upsilon _{h_{\ell _{\hat{\jmath} }}}, \tau _{h_{\ell _{\hat{\jmath}}}}, \Upsilon _{h_{\ell _{\hat{\jmath} }}}\right) (\hat{\jmath} = 1, 2, ....., \tilde{\imath}) be a collection of SV-NHFNs and let \Im = (\Im _{1}, \Im _{2}, ...., \Im _{\tilde{\imath}})^{T} be the weight vector of the \delta _{\hat{\jmath}} with \Im _{\hat{\jmath}}\in \lbrack 0, 1] and \sum_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}} = 1; then we have the following:
{\bf{(1)}} Boundary:
\begin{equation*} \delta ^{-}\leq SV-NHFEWA(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath }})\leq \delta ^{+} \end{equation*} |
where
\begin{eqnarray*} \delta ^{-} & = &\left( \min\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\min\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{h_{\ell _{\hat{\jmath}}}}}\Xi _{\delta _{\hat{\jmath}}}, \max\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\max\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{h_{\ell _{\hat{\jmath}}}}}\kappa _{\delta _{\hat{\jmath}}}, \max\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\max\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{h_{\ell _{\hat{\jmath}}}}}\chi _{\delta _{\hat{\jmath}}}\right); \\ \delta ^{+} & = &\left( \max\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\max\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{h_{\ell _{\hat{\jmath}}}}}\Xi _{\delta _{\hat{\jmath}}}, \min\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\min\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{h_{\ell _{\hat{\jmath}}}}}\kappa _{\delta _{\hat{\jmath}}}, \min\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\min\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{h_{\ell _{\hat{\jmath}}}}}\chi _{\delta _{\hat{\jmath}}}\right). \end{eqnarray*}
{\bf{(2)}} Monotonicity: Let \delta _{\hat{\jmath}}^{\ast } = \left(\dot{\upsilon}_{h_{\ell _{\hat{\jmath}}}}, \dot{\tau}_{h_{\ell _{\hat{\jmath}}}}, \dot{\Upsilon}_{h_{\ell _{\hat{\jmath}}}}\right) (\hat{\jmath} = 1, 2, ....., \tilde{\imath}) be another collection of SV-NHFNs with \delta _{\hat{\jmath}}\leq \delta _{\hat{\jmath}}^{\ast } for every \hat{\jmath} (componentwise on the hesitant values). Then
\begin{equation*} SV-NHFEWA(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}})\leq SV-NHFEWA(\delta _{1}^{\ast }, \delta _{2}^{\ast }, ...., \delta _{\tilde{\imath }}^{\ast }). \end{equation*} |
Proof. (1) Let f(x) = \frac{1-x}{1+x}, x\in \lbrack 0, 1]; then f^{\prime }(x) = \frac{-2}{\left(1+x\right) ^{2}} < 0, so f(x) is a decreasing function on [0, 1]. Let \max \Xi _{\delta _{\hat{\jmath}}} = \max_{1\leq \hat{\jmath}\leq \tilde{\imath}}\max_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{h_{\ell _{\hat{\jmath}}}}}\Xi _{\delta _{\hat{\jmath}}} and \min \Xi _{\delta _{\hat{\jmath}}} = \min_{1\leq \hat{\jmath}\leq \tilde{\imath}}\min_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{h_{\ell _{\hat{\jmath}}}}}\Xi _{\delta _{\hat{\jmath}}}. Then
\begin{eqnarray*} \min \left( \Xi _{\delta _{\hat{\jmath}}}\right) &\leq &\Xi _{\delta _{\hat{\jmath}}}\leq \max \left( \Xi _{\delta _{\hat{\jmath}}}\right) \text{ for all }\hat{\jmath} = 1, 2, ....., \tilde{\imath}, \\ f\left( \max \left( \Xi _{\delta _{\hat{\jmath}}}\right) \right) &\leq &f\left( \Xi _{\delta _{\hat{\jmath}}}\right) \leq f\left( \min \left( \Xi _{\delta _{\hat{\jmath}}}\right) \right) \text{ for all }\hat{\jmath} = 1, 2, ....., \tilde{\imath}, \\ \frac{1-\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) } &\leq &\frac{1-\Xi _{\delta _{\hat{\jmath}}}}{1+\Xi _{\delta _{\hat{\jmath}}}}\leq \frac{1-\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) }. \end{eqnarray*}
Since \Im = (\Im _{1}, \Im _{2}, ...., \Im _{\tilde{\imath}})^{T} are the weights of \delta _{\hat{\jmath}} with \Im _{\hat{\jmath}}\in \lbrack 0, 1] with \sum_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}} = 1, we have
\begin{eqnarray*} \left( \frac{1-\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}} &\leq &\left( \frac{1-\left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}}\leq \left( \frac{1-\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}}; \\ \prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{1-\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\max \left( \Xi _{\delta _{\hat{ \jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}} &\leq &\prod\limits_{\hat{ \jmath} = 1}^{\tilde{\imath}}\left( \frac{1-\left( \Xi _{\delta _{\hat{\jmath} }}\right) }{1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{ \hat{\jmath}}}\leq \prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{1-\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}}; \\ \left( \frac{1-\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\sum\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}}} &\leq &\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{1-\left( \Xi _{\delta _{\hat{\jmath} }}\right) }{1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{ \hat{\jmath}}}\leq \left( \frac{1-\min \left( \Xi _{\delta _{\hat{\jmath} }}\right) }{1+\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\sum\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}}}. \end{eqnarray*} |
\begin{eqnarray*} &\Longleftrightarrow &\left( \frac{1-\max \left( \Xi _{\delta _{\hat{\jmath} }}\right) }{1+\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) \leq \prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{1-\left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\left( \Xi _{\delta _{\hat{\jmath} }}\right) }\right) ^{\Im _{\hat{\jmath}}}\leq \left( \frac{1-\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\min \left( \Xi _{\delta _{\hat{\jmath} }}\right) }\right); \\ &\Longleftrightarrow &\left( \frac{2}{1+\max \left( \Xi _{\delta _{\hat{ \jmath}}}\right) }\right) \leq 1+\prod\limits_{\hat{\jmath} = 1}^{\tilde{ \imath}}\left( \frac{1-\left( \Xi _{\delta _{\hat{\jmath}}}\right) }{ 1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) ^{2}}\right) ^{\Im _{\hat{ \jmath}}}\leq \left( \frac{2}{1+\min \left( \Xi _{\delta _{\hat{\jmath} }}\right) ^{2}}\right); \\ &\Longleftrightarrow &\left( \frac{1+\min \left( \Xi _{\delta _{\hat{\jmath} }}\right) }{2}\right) \leq \frac{1}{1+\prod\limits_{\hat{\jmath} = 1}^{\tilde{ \imath}}\left( \frac{1-\left( \Xi _{\delta _{\hat{\jmath}}}\right) }{ 1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}} }\leq \left( \frac{1+\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) }{2} \right); \\ &\Longleftrightarrow &\left( 1+\min \left( \Xi _{\delta _{\hat{\jmath} }}\right) \right) \leq \frac{2}{1+\prod\limits_{\hat{\jmath} = 1}^{\tilde{ \imath}}\left( \frac{1-\left( \Xi _{\delta _{\hat{\jmath}}}\right) }{ 1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}} }\leq \left( 1+\max \left( \Xi _{\delta _{\hat{\jmath}}}\right) \right); \\ &\Longleftrightarrow &\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) \leq \frac{2}{1+\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{ 1-\left( \Xi _{\delta _{\hat{\jmath}}}\right) }{1+\left( \Xi _{\delta _{\hat{ \jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}}}-1\leq \max \left( \Xi _{\delta _{\hat{\jmath}}}\right); \\ &\Longleftrightarrow &\min \left( \Xi _{\delta _{\hat{\jmath}}}\right) \leq \frac{\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath} }}-\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\left( \Xi _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath}}}}{ \prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath}}}+\prod\limits_{\hat{ \jmath} = 1}^{\tilde{\imath}}\left( 1-\left( \Xi _{\delta _{\hat{\jmath} }}\right) \right) ^{\Im _{\hat{\jmath}}}}\leq \max \left( \Xi _{\delta _{ \hat{\jmath}}}\right). \end{eqnarray*} |
Thus,
\begin{equation*} \min \Xi _{\delta _{\hat{\jmath}}}\leq \frac{\prod\limits_{\hat{\jmath} = 1}^{ \tilde{\imath}}\left( 1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath}}}-\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\left( \Xi _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath}}} }{\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\left( \Xi _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath} }}+\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\left( \Xi _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath}}}}\leq \max \Xi _{\delta _{\hat{\jmath}}}. \end{equation*} |
Consider g(y) = \frac{2-y}{y}, y\in (0, 1]; then g^{\prime }(y) = -\frac{2}{y^{2}} < 0, i.e., g(y) is a decreasing function on (0, 1]. Let \max \kappa _{\delta _{\hat{\jmath}}} = \max_{1\leq \hat{\jmath}\leq \tilde{\imath}}\max_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{h_{\ell _{\hat{\jmath}}}}}\kappa _{\delta _{\hat{\jmath}}} and \min \kappa _{\delta _{\hat{\jmath}}} = \min_{1\leq \hat{\jmath}\leq \tilde{\imath}}\min_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{h_{\ell _{\hat{\jmath}}}}}\kappa _{\delta _{\hat{\jmath}}}. Then
\begin{eqnarray*} min\left( \kappa _{\delta _{\hat{\jmath}}}\right) &\leq &\left( \kappa _{\delta _{\hat{\jmath}}}\right) \leq \max \left( \kappa _{\delta _{\hat{ \jmath}}}\right) \text{ }forall\text{ }\hat{\jmath} = 1, 2, ....., \tilde{\imath} \\ g\left( \max \left( \kappa _{\delta _{\hat{\jmath}}}\right) \right) &\leq &g\left( \left( \kappa _{\delta _{\hat{\jmath}}}\right) \right) \leq g\left( \min \left( \kappa _{\delta _{\hat{\jmath}}}\right) \right) \text{ }forall \text{ }\hat{\jmath} = 1, 2, ....., \tilde{\imath} \\ \frac{2-\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) } &\leq &\frac{2-\left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\left( \kappa _{\delta _{\hat{\jmath} }}\right) }\leq \frac{2-\min \left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\min \left( \kappa _{\delta _{\hat{\jmath}}}\right) } \\ \left( \frac{2-\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{\hat{\jmath} }} &\leq &\left( \frac{2-\left( \kappa _{\delta _{\hat{\jmath}}}\right) }{ \left( \kappa _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{\hat{\jmath} }}\leq \left( \frac{2-\min \left( \kappa _{\delta _{\hat{\jmath}}}\right) }{ \min \left( \kappa _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{\hat{ \jmath}}} \\ \prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{2-\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\max \left( \kappa _{\delta _{\hat{ \jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}} &\leq &\prod\limits_{\hat{ \jmath} = 1}^{\tilde{\imath}}\left( \frac{2-\left( \kappa _{\delta _{\hat{ \jmath}}}\right) }{\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{2}} \right) ^{\Im _{\hat{\jmath}}}\leq \prod\limits_{\hat{\jmath} = 1}^{\tilde{ \imath}}\left( \frac{2-\min \left( \kappa _{\delta _{\hat{\jmath}}}\right) }{ \min \left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{2}}\right) ^{\Im _{ \hat{\jmath}}} \\ \left( \frac{2-\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) }\right) ^{\sum\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}}} &\leq &\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{2-\left( \kappa _{\delta _{\hat{\jmath} }}\right) }{\left( \kappa _{\delta _{\hat{\jmath}}}\right) }\right) ^{\Im _{ \hat{\jmath}}}\leq \left( \frac{2-\min \left( \kappa _{\delta _{\hat{\jmath} }}\right) }{\min \left( \kappa _{\delta _{\hat{\jmath}}}\right) }\right) ^{\sum\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}}} \\ \left( \frac{2-\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) }\right) &\leq &\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{2-\left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\left( \kappa _{\delta _{\hat{\jmath} }}\right) }\right) ^{\Im _{\hat{\jmath}}}\leq \left( \frac{2-\min \left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\min \left( \kappa _{\delta _{\hat{ \jmath}}}\right) }\right) \\ \frac{2}{\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) } &\leq &1+\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{2-\left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\left( \kappa _{\delta _{\hat{ \jmath}}}\right) }\right) ^{\Im _{\hat{\jmath}}}\leq \frac{2}{\min \left( \kappa _{\delta _{\hat{\jmath}}}\right) } \\ \frac{\max \left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{2}}{2} &\leq & \frac{1}{1+\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{ 2-\left( \kappa _{\delta 
_{\hat{\jmath}}}\right) ^{2}}{\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{2}}\right) ^{\Im _{\hat{\jmath}}}}\leq \frac{\min \left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{2}}{2} \\ \max \left( \kappa _{\delta _{\hat{\jmath}}}\right) &\leq &\frac{2}{ 1+\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \frac{2-\left( \kappa _{\delta _{\hat{\jmath}}}\right) }{\left( \kappa _{\delta _{\hat{ \jmath}}}\right) ^{2}}\right) ^{\Im _{\hat{\jmath}}}}\leq \min \left( \kappa _{\delta _{\hat{\jmath}}}\right) \\ \max \left( \kappa _{\delta _{\hat{\jmath}}}\right) &\leq &\frac{ 2\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \left( \kappa _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath}}}}{ \prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\left( \kappa _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath} }}+\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \left( \kappa _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath}}}}\leq \min \left( \kappa _{\delta _{\hat{\jmath}}}\right) \\ \max \kappa _{\delta _{\hat{\jmath}}} &\leq &\frac{2\prod\limits_{\hat{ \jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath} }\left( 2-\left( \kappa _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{ \hat{\jmath}}}+\prod\limits_{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \left( \kappa _{\delta _{\hat{\jmath}}}\right) \right) ^{\Im _{\hat{\jmath}}}}\leq \min \kappa _{\delta _{\hat{\jmath}}}. \end{eqnarray*} |
Let SV-NHFEWA(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) = \delta; then its membership component satisfies \min \Xi _{\delta _{\hat{\jmath}}}\leq \Xi _{\delta }\leq \max \Xi _{\delta _{\hat{\jmath}}}, and combining this with the analogous bounds obtained above for the indeterminacy and non-membership components gives \delta ^{-}\leq SV-NHFEWA(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}})\leq \delta ^{+}.
Definition 11. Let \delta _{\hat{\jmath}} = \left(\upsilon _{h_{\ell _{\hat{\jmath}}}}, \tau _{h_{\ell _{\hat{\jmath} }}}, \Upsilon _{h_{\ell _{\hat{\jmath} }}}\right) (\hat{\jmath} = 1, 2, ....., \tilde{\imath}) be a collection of SV-NHFNs and let \Im = (\Im _{1}, \Im _{2}, ...., \Im _{\tilde{\imath}})^{T} be the weight vector of the \delta _{\hat{\jmath}} with \Im _{\hat{\jmath}}\in \lbrack 0, 1] and \sum_{\hat{\jmath} = 1}^{\tilde{ \imath}}\Im _{\hat{\jmath}} = 1. Then SV-NHFEWG:SV-NHFN^{\tilde{\imath} }\longrightarrow SV-NHFN such that
\begin{equation*} SV-NHFEWG(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) = \delta _{1}^{{ \Im }_{1}}\otimes _{\varepsilon }\delta _{2}^{{ \Im } _{2}}\otimes _{\varepsilon }.....\otimes _{\varepsilon }\delta _{\tilde{ \imath}}^{{ \Im }_{\tilde{\imath}}} \end{equation*}
is called the SV-neutrosophic hesitant fuzzy Einstein weighted geometric operator.
Theorem 3. Let \delta _{\hat{\jmath}} = \left(\upsilon _{h_{\ell _{\hat{\jmath}}}}, \tau _{h_{\ell _{\hat{\jmath}}}}, \Upsilon _{h_{\ell _{\hat{\jmath}}}}\right) (\hat{\jmath} = 1, 2, ....., \tilde{\imath}) be a collection of SV-NHFNs. Then the aggregated value obtained using the SV-NHFEWG operator is
\begin{equation*} SV-NHFEWG(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) = \left( \begin{array}{c} \bigcup\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{\delta _{\hat{\jmath}}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{\delta _{\hat{\jmath}}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}} \end{array} \right) \end{equation*}
where \Im = (\Im _{1}, \Im _{2}, ...., \Im _{\tilde{\imath}})^{T} are the weights of \delta _{\hat{\jmath}} with \Im _{\hat{\jmath}}\in \lbrack 0, 1] with \sum_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}} = 1.
The SV-NHFEWG operator satisfies the following properties.
Theorem 4. Let \delta _{\hat{\jmath}} = \left(\upsilon _{h_{\ell _{\hat{\jmath}}}}, \tau _{h_{\ell _{\hat{\jmath}}}}, \Upsilon _{h_{\ell _{\hat{\jmath}}}}\right) (\hat{\jmath} = 1, 2, ....., \tilde{\imath}) be a collection of SV-NHFNs and let \Im = (\Im _{1}, \Im _{2}, ...., \Im _{\tilde{\imath}})^{T} be the weight vector of the \delta _{\hat{\jmath}} with \Im _{\hat{\jmath}}\in \lbrack 0, 1] and \sum_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}} = 1; then we have the following:
{\bf{(1)}} Boundary:
\begin{equation*} \delta ^{-}\leq SV-NHFEWG(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath }})\leq \delta ^{+} \end{equation*} |
where
\begin{eqnarray*} \delta ^{-} & = &\left( \min\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\min\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{h_{\ell _{\hat{\jmath}}}}}\Xi _{\delta _{\hat{\jmath}}}, \max\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\max\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{h_{\ell _{\hat{\jmath}}}}}\kappa _{\delta _{\hat{\jmath}}}, \max\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\max\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{h_{\ell _{\hat{\jmath}}}}}\chi _{\delta _{\hat{\jmath}}}\right) \\ \delta ^{+} & = &\left( \max\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\max\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{h_{\ell _{\hat{\jmath}}}}}\Xi _{\delta _{\hat{\jmath}}}, \min\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\min\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{h_{\ell _{\hat{\jmath}}}}}\kappa _{\delta _{\hat{\jmath}}}, \min\limits_{1\leq \hat{\jmath}\leq \tilde{\imath}}\min\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{h_{\ell _{\hat{\jmath}}}}}\chi _{\delta _{\hat{\jmath}}}\right). \end{eqnarray*}
{\bf{(2)}} Monotonicity: Let \delta _{\hat{\jmath}}^{\ast } = \left(\dot{\upsilon}_{h_{\ell _{\hat{\jmath}}}}, \dot{\tau}_{h_{\ell _{\hat{\jmath}}}}, \dot{\Upsilon}_{h_{\ell _{\hat{\jmath}}}}\right) (\hat{\jmath} = 1, 2, ....., \tilde{\imath}) be another collection of SV-NHFNs with \delta _{\hat{\jmath}}\leq \delta _{\hat{\jmath}}^{\ast } for every \hat{\jmath} (componentwise on the hesitant values). Then
\begin{equation*} SV-NHFEWG(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}})\leq SV-NHFEWG(\delta _{1}^{\ast }, \delta _{2}^{\ast }, ...., \delta _{\tilde{\imath }}^{\ast }). \end{equation*} |
We now develop the following ordered weighted operators based on SV-NHFNs.
Definition 12. (1) Let \delta _{\hat{\jmath}} = \left(\upsilon _{h_{\ell _{\hat{\jmath} }}}, \tau _{h_{\ell _{\hat{\jmath}}}}, \Upsilon _{h_{\ell _{\hat{\jmath} }}}\right) (\hat{\jmath} = 1, 2, ....., \tilde{\imath}) be a collection of SV-NHFNs and let \Im = (\Im _{1}, \Im _{2}, ...., \Im _{\tilde{\imath}})^{T} be the associated weight vector with \Im _{\hat{\jmath}}\in \lbrack 0, 1] and \sum_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}} = 1. Then SV-NHFEOWA : SV-NHFN^{\tilde{\imath}}\longrightarrow SV-NHFN such that
\begin{eqnarray*} SV-NHFEOWA(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) & = &{ \Im }_{1}._{\varepsilon }\delta _{\rho (1)}\oplus _{\varepsilon }{ \Im }_{2}._{\varepsilon }\delta _{\rho (2)}\oplus _{\varepsilon }.....\oplus _{\varepsilon }{ \Im }_{\tilde{\imath}}._{\varepsilon }\delta _{\rho ( \tilde{\imath})} \\ & = &\left( \begin{array}{c} \bigcup\limits_{\Xi _{\delta _{_{\rho (\hat{\jmath})}}}\in \upsilon _{\delta _{\rho (\hat{\jmath})}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\rho (\hat{\jmath})}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{ \hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\rho (\hat{\jmath} )}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath} }\left( 1+\Xi _{\delta _{\rho (\hat{\jmath})}}\right) ^{\Im _{\hat{\jmath} }}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\rho (\hat{ \jmath})}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\kappa _{\delta _{_{\rho (\hat{\jmath})}}}\in \tau _{\delta _{_{\rho (\hat{\jmath})}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath} }\left( \kappa _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{\Im _{\hat{\jmath }}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\kappa _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{ \tilde{\imath}}\left( \kappa _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{^{\Im _{\hat{\jmath}}}}}, \\ \bigcup\limits_{\chi _{\delta _{_{\rho (\hat{\jmath})}}}\in \Upsilon _{\delta _{_{\rho (\hat{\jmath})}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{ \imath}}\left( \chi _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{\Im _{\hat{ \jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\chi _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{^{\Im _{\hat{\jmath}}}}}, \end{array} \right) \end{eqnarray*} |
where \delta _{\rho (\hat{\jmath})} is the \hat{\jmath} th largest of the \delta _{\hat{\jmath}} , is called the SV-neutrosophic hesitant fuzzy Einstein ordered weighted averaging operator.
(2) Let \delta _{\hat{\jmath}} = \left(\upsilon _{h_{\ell _{\hat{\jmath} }}}, \tau _{h_{\ell _{\hat{\jmath}}}}, \Upsilon _{h_{\ell _{\hat{\jmath} }}}\right) (\hat{\jmath} = 1, 2, ....., \tilde{\imath}) be a collection of SV-NHFNs and let \Im = (\Im _{1}, \Im _{2}, ...., \Im _{\tilde{\imath}})^{T} be the associated weight vector with \Im _{\hat{\jmath}}\in \lbrack 0, 1] and \sum_{\hat{\jmath} = 1}^{\tilde{\imath}}\Im _{\hat{\jmath}} = 1. Then SV-NHFEOWG:SV-NHFN^{\tilde{ \imath}}\longrightarrow SV-NHFN such that
\begin{eqnarray*} SV-NHFEOWG(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) & = &\delta _{\rho (1)}^{{ \Im }_{1}}\otimes _{\varepsilon }\delta _{\rho (2)}^{ { \Im }_{2}}\otimes _{\varepsilon }.....\otimes _{\varepsilon }\delta _{\rho (\tilde{\imath})}^{{ \Im }_{\tilde{\imath}}} \\ & = &\left( \begin{array}{c} \bigcup\limits_{\chi _{\delta _{_{\rho (\hat{\jmath})}}}\in \Upsilon _{\delta _{_{\rho (\hat{\jmath})}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{ \imath}}\left( \chi _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{\Im _{\hat{ \jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\chi _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{^{\Im _{\hat{\jmath}}}}}, \\ \bigcup\limits_{\kappa _{\delta _{_{\rho (\hat{\jmath})}}}\in \tau _{\delta _{_{\rho (\hat{\jmath})}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath} }\left( \kappa _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{\Im _{\hat{\jmath }}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\kappa _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{ \tilde{\imath}}\left( \kappa _{\delta _{_{\rho (\hat{\jmath})}}}\right) ^{^{\Im _{\hat{\jmath}}}}}, \\ \bigcup\limits_{\Xi _{\delta _{_{\rho (\hat{\jmath})}}}\in \upsilon _{\delta _{\rho (\hat{\jmath})}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\rho (\hat{\jmath})}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{ \hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\rho (\hat{\jmath} )}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath} }\left( 1+\Xi _{\delta _{\rho (\hat{\jmath})}}\right) ^{\Im _{\hat{\jmath} }}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\rho (\hat{ \jmath})}}\right) ^{\Im _{\hat{\jmath}}}}, \end{array} \right) \end{eqnarray*} |
where \delta _{\rho (\hat{\jmath})} is the \hat{\jmath} th largest of the \delta _{\hat{\jmath}} , is called the SV-neutrosophic hesitant fuzzy Einstein ordered weighted geometric operator.
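A minimal sketch of the ordered variants, assuming the sv_nhfewa function above and some ranking key (for example, the score function introduced with the algorithm below): the inputs are first sorted in decreasing order and the positional weights are then applied. The function name and the key parameter are our own illustrative choices.

```python
def sv_nhfeowa(deltas, weights, key):
    """Ordered weighted averaging: sort the SV-NHFNs in decreasing order by
    `key` and then aggregate positionally with the Einstein weighted average."""
    ordered = sorted(deltas, key=key, reverse=True)
    return sv_nhfewa(ordered, weights)
    # The geometric variant (SV-NHFEOWG) would sort in the same way and then
    # apply an Einstein weighted geometric aggregation instead.
```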
Here, we design a framework for incorporating SV-NHF information into MAGDM in order to represent uncertainty.
Step-1: Collect the data from the experts.
Step-2: Normalize the expert data according to benefit and cost criteria.
Step-3: Apply the aggregation operators given below, namely the SV-NHFEWA operator
\begin{equation*} SV-NHFEWA(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) = \left( \begin{array}{c} \bigcup\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{\delta _{\hat{\jmath}}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{^{\Im _{\hat{\jmath}}}}}, \\ \bigcup\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{^{\Im _{\hat{\jmath}}}}}, \end{array} \right) \end{equation*}
and the SV-NHFEWG operator
\begin{equation*} SV-NHFEWG(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) = \left( \begin{array}{c} \bigcup\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \Xi _{\delta _{\hat{\jmath}}}\right) ^{^{\Im _{\hat{\jmath}}}}}, \\ \bigcup\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{\delta _{\hat{\jmath}}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{\delta _{\hat{\jmath}}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}} \end{array} \right). \end{equation*}
Step-4: Find the score value based on the score function given below
\begin{equation*} s(\delta ) = \left( \frac{1}{M_{\delta }}\sum\nolimits_{\hslash _{i}\in \nu _{h_{\dot{g}}}}(\hslash _{i})\right) -\left( \frac{1}{N_{\delta }}\sum\nolimits_{\varrho _{i}\in \tau _{h_{\varkappa }}}(\varrho _{i})\right) -\left( \frac{1}{S_{\delta }}\sum\nolimits_{\alpha _{i}\in \Upsilon _{h_{\varkappa }}}(\alpha _{i})\right). \end{equation*}
Here M_{\delta } , N_{\delta } and S_{\delta } denote the numbers of elements in the membership (MD), indeterminacy, and non-membership (non-MD) parts of \delta , respectively.
Step-5: Rank the alternatives according to their score values and select the best option.
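A minimal Python sketch of Steps 4 and 5, assuming the aggregated values are stored as triples of lists as in the previous sketch; the helper names are illustrative.

```python
def score(element):
    """Score of an aggregated SV-NHF element: mean membership minus mean
    indeterminacy minus mean non-membership, as in the score function above."""
    md, ind, nmd = element
    return sum(md) / len(md) - sum(ind) / len(ind) - sum(nmd) / len(nmd)

def rank_alternatives(aggregated):
    """Step-5: rank the alternatives (a dict name -> aggregated element)
    by score, best first."""
    return sorted(aggregated, key=lambda name: score(aggregated[name]), reverse=True)
```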
Figure 1 shows the flow chart of the decision-making algorithm.
Cybersecurity is crucial because it guards against the theft and destruction of many types of data, including sensitive information, personally identifiable information (PII), protected health information (PHI), personal data, intellectual property, and the information systems used by government and business.
Case study: We present a real-world example of cybersecurity types and challenges. Our aim is to choose the most suitable type of security to manage and protect these assets.
(1) \mathring{C}_{1}: Internet of things security: In the context of the Internet of Things (IoT), where almost any physical or logical entity or object may be given a unique identifier and the capability to communicate autonomously over a network, "Internet of Things privacy" refers to the special considerations necessary to protect the information of individuals from exposure in the IoT environment. The most frequent IoT security issues are insecure communications, security mechanisms that were originally developed for desktop computers but are difficult to implement on resource-constrained IoT devices, data leaks from IoT systems, malware risks, cyber attacks, and the need for secure networks and secure data.
(2) \mathring{C}_{2}: Cloud security: Cloud computing's use of data privacy makes it possible to collect, store, move, and share data over the internet without endangering the privacy of individual users' personal information. The main obstacles include misconfiguration (a major contributor to cloud data breaches), unauthorised access, insecure interfaces and APIs, account hijacking, a lack of visibility, external data sharing, malicious insiders, and cyber attacks.
(3) \mathring{C}_{3}: Network security: By limiting the introduction or spread of a wide range of potential dangers within a network, network security is a set of technologies that safeguards the usefulness and integrity of a company's infrastructure. Among the challenges are numerous misconfigurations, lax oversight of privileged access, poor tool compatibility, a lack of visibility, and controls that are out of date with infrastructure changes.
Criteria: \delta _{1} = control of data hijacking, \delta _{2} = lack of visibility, \delta _{3} = secure networks, and \delta _{4} = cyber attacks. The given weight vector is \Im = (0.314, 0.355, 0.331) ^{T} . Table 1 shows the collective data of the decision-makers (DMs).
\mathring{C}_{1} | \mathring{C}_{2} | \mathring{C}_{3} | |
\delta _{1} | \left\{ \begin{array}{c} \left(0.25, 0.35\right), \left(0.31\right), \\ \left(0.33, 0.59\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.45\right), \left(0.84\right), \\ \left(0.84, 0.96\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.79\right), \left(0.66, 0.73\right), \\ \left(0.7, 0.43\right) \end{array} \right\} |
\delta _{2} | \left\{ \begin{array}{c} \left(0.8, 0.1\right), \left(0.15, 0.2\right), \\ \left(0.45, 0.28\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.51\right), \left(0.37, 0.43\right), \\ \left(0.7, 0.13\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.91, 0.61\right), \left(0.36, 0.24\right), \\ \left(0.86, 0.24\right) \end{array} \right\} |
\delta _{3} | \left\{ \begin{array}{c} \left(0.95, 0.23\right), \left(0.99\right), \\ \left(0.28, 0.96\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.1\right), \left(0.36, 0.46\right), \\ \left(0.63\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.55, 0.65\right), \left(0.39\right), \\ \left(0.69, 0.91\right) \end{array} \right\} |
\delta _{4} | \left\{ \begin{array}{c} \left(0.4, 0.66\right), \left(0.55\right), \\ \left(0.89\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.71\right), \left(0.65, 0.15\right), \\ \left(0.56, 0.95\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.21\right), \left(0.32, 0.68\right), \\ \left(0.92, 0.98\right) \end{array} \right\} |
\Im = (0.34, 0.35, 0.31) ^{T}
Step 1. Because all the attributes are of the same type, there is no need to normalize the data.
Step 2. Using the proposed SV-NHFEWA operator, we obtain the overall preference value \delta _{\tilde{\imath}} of each alternative \delta _{\tilde{\imath}}(\tilde{\imath} = 1, 2, 3, 4). For instance,
\begin{equation*} SV-NHFEWA(\delta _{1}, \delta _{2}, \ldots , \delta _{\tilde{\imath}}) = \left( \begin{array}{c} \bigcup\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{\delta _{\hat{\jmath}}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}, \\ \bigcup\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{\delta _{\hat{\jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}} \end{array} \right) \end{equation*} |
\begin{equation*} \delta _{1} = \left\{ \begin{array}{c} \langle 1.1453, 1.1460\rangle , \langle 0.6460, 0.6632\rangle , \\ \langle 0.6670, 0.5877, 0.6948, 0.6137, 0.7770, 0.6914, 0.8066, 0.7196\rangle \end{array} \right\}. \end{equation*} |
Similarly, for the other alternatives we obtain
\begin{eqnarray*} \delta _{2} & = &\left\{ \begin{array}{c} \langle 1.0897, 1.1308, 1.1120, 1.1555\rangle , \\ \langle 0.3341, 0.2954, 0.3521, 0.3117, 0.3660, 0.3244, 0.3853, 0.3419\rangle , \\ \langle 0.7233, 0.5204, 0.4409, 0.2987, 0.6375, 0.4503, 0.3787, 0.2533\rangle \end{array} \right\} \\ \delta _{3} & = &\left\{ \begin{array}{c} \langle 1.0889, 1.0855, 1.1467, 1.1453\rangle , \langle 0.6132, 0.6579\rangle , \\ \langle 0.5811, 0.6271, 0.8142, 0.8673\rangle \end{array} \right\} \\ \delta _{4} & = &\left\{ \begin{array}{c} \langle 1.1536, 1.1430\rangle , \langle 0.5817, 0.7041, 0.3657, 0.4574\rangle , \\ \langle 0.8276, 0.8400, 0.9449, 0.9576\rangle \end{array} \right\} \end{eqnarray*} |
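As a purely illustrative usage of the earlier sketch, the snippet below aggregates the first row of Table 1 (alternative \delta _{1} ) with the stated weights. It only demonstrates the mechanics (note that the numbers of hesitant values per part, 2, 2 and 8, match the cardinalities reported for \delta _{1} above) and is not claimed to reproduce the numerical figures reported in this section, which come from the authors' own computation.

```python
# Alternative delta_1 from Table 1: one SV-NHF element per attribute C1, C2, C3.
delta_1 = [
    ([0.25, 0.35], [0.31],       [0.33, 0.59]),  # C1
    ([0.45],       [0.84],       [0.84, 0.96]),  # C2
    ([0.79],       [0.66, 0.73], [0.70, 0.43]),  # C3
]
weights = [0.34, 0.35, 0.31]  # attribute weights as listed beneath Table 1

agg = svnhfewa(delta_1, weights)
print(len(agg[0]), len(agg[1]), len(agg[2]))  # -> 2 2 8 hesitant values
print(round(score(agg), 4))                   # score used for the ranking step
```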
\begin{equation*} s(\delta ) = \left( \frac{1}{M_{\delta }}\sum\nolimits_{\hslash _{i}\in \upsilon _{\delta }}\hslash _{i}\right) -\left( \frac{1}{N_{\delta }}\sum\nolimits_{\varrho _{i}\in \tau _{\delta }}\varrho _{i}\right) -\left( \frac{1}{S_{\delta }}\sum\nolimits_{\alpha _{i}\in \Upsilon _{\delta }}\alpha _{i}\right). \end{equation*} |
Step 3. The score values are obtained as
\begin{equation*} s(\delta _{1}) = -0.2036, s(\delta _{2}) = 0.3202, s(\delta _{3}) = -0.2414, s(\delta _{4}) = -0.2715. \end{equation*} |
Step 4.
\begin{equation*} s(\delta _{2}) > s(\delta _{1}) > s(\delta _{3}) > s(\delta _{4}). \end{equation*} |
Therefore
\begin{equation*} \delta _{2} > \delta _{1} > \delta _{3} > \delta _{4}. \end{equation*} |
The best choice is \delta _{2}.
For the SV-NHFEWG operator:
We again start from the collective data of the DMs, which are shown in Table 2.
\mathring{C}_{1} | \mathring{C}_{2} | \mathring{C}_{3} | |
\delta _{1} | \left\{ \begin{array}{c} \left(0.25, 0.35\right), \left(0.31\right), \\ \left(0.33, 0.59\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.45\right), \left(0.84\right), \\ \left(0.84, 0.96\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.79\right), \left(0.66, 0.73\right), \\ \left(0.7, 0.43\right) \end{array} \right\} |
\delta _{2} | \left\{ \begin{array}{c} \left(0.8, 0.1\right), \left(0.15, 0.2\right), \\ \left(0.45, 0.28\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.51\right), \left(0.37, 0.43\right), \\ \left(0.7, 0.13\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.91, 0.61\right), \left(0.36, 0.24\right), \\ \left(0.86, 0.24\right) \end{array} \right\} |
\delta _{3} | \left\{ \begin{array}{c} \left(0.95, 0.23\right), \left(0.99\right), \\ \left(0.28, 0.96\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.1\right), \left(0.36, 0.46\right), \\ \left(0.63\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.55, 0.65\right), \left(0.39\right), \\ \left(0.69, 0.91\right) \end{array} \right\} |
\delta _{4} | \left\{ \begin{array}{c} \left(0.4, 0.66\right), \left(0.55\right), \\ \left(0.89\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.71\right), \left(0.65, 0.15\right), \\ \left(0.56, 0.95\right) \end{array} \right\} | \left\{ \begin{array}{c} \left(0.21\right), \left(0.32, 0.68\right), \\ \left(0.92, 0.98\right) \end{array} \right\} |
Step 2. Using the proposed SV-NHFEWG operator, we obtain the overall preference values \delta _{\tilde{\imath}} of the alternatives \delta _{\tilde{\imath}}(\tilde{\imath} = 1, 2, 3, 4).
\begin{equation*} SV-NHFEWG(\delta _{1}, \delta _{2}, ...., \delta _{\tilde{\imath}}) = \left( \begin{array}{c} \bigcup\limits_{\Xi _{\delta _{\hat{\jmath}}}\in \upsilon _{\delta _{\hat{ \jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \Xi _{\delta _{ \hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{ \imath}}\left( 2-\Xi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath} }}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \Xi _{\delta _{\hat{\jmath} }}\right) ^{^{\Im _{\hat{\jmath}}}}} \\ \bigcup\limits_{\kappa _{\delta _{\hat{\jmath}}}\in \tau _{\delta _{\hat{ \jmath}}}}\frac{2\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}}{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 2-\kappa _{\delta _{\hat{\jmath}}}\right) ^{\Im _{ \hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( \kappa _{\delta _{\hat{\jmath}}}\right) ^{^{\Im _{\hat{\jmath}}}}}, \\ \bigcup\limits_{\chi _{\delta _{\hat{\jmath}}}\in \Upsilon _{\delta _{\hat{ \jmath}}}}\frac{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}-\Pi _{\hat{\jmath} = 1}^{\tilde{ \imath}}\left( 1-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}} }{\Pi _{\hat{\jmath} = 1}^{\tilde{\imath}}\left( 1+\chi _{\delta _{\hat{\jmath} }}\right) ^{\Im _{\hat{\jmath}}}+\Pi _{\hat{\jmath} = 1}^{\tilde{\imath} }\left( 1-\chi _{\delta _{\hat{\jmath}}}\right) ^{\Im _{\hat{\jmath}}}} \end{array} \right) \end{equation*} |
\begin{eqnarray*} \delta _{1} & = &\left\{ \begin{array}{c} \langle 0.4199, 0.5538\rangle , \langle 0.6460, 0.6632\rangle , \\ \langle 1.3219, 1.2874, 1.3375, 1.3034, 1.3607, 1.3272, 1.3759, 1.3428\rangle \end{array} \right\} \\ \delta _{2} & = &\left\{ \begin{array}{c} \langle 0.9132, 0.8008, 0.3404, -0.2506\rangle , \\ \langle 0.3341, 0.2954, 0.3521, 0.3117, 0.3660, 0.3244, 0.3853, 0.3419\rangle , \\ \langle 1.3397, 1.2589, 1.2450, 1.1600, 1.3127, 1.2305, 1.2164, 1.1304\rangle \end{array} \right\} \\ \delta _{3} & = &\left\{ \begin{array}{c} \langle 0.5267, 0.5969, -1.3692, -1.0489\rangle , \\ \langle 0.6132, 0.6579\rangle , \langle 1.2827, 1.3081, 1.3782, 1.4018\rangle \end{array} \right\} \\ \delta _{4} & = &\left\{ \begin{array}{c} \langle 0.2414, 0.5379\rangle , \langle 0.5817, 0.7041, 0.3657, 0.4574\rangle , \\ \langle 1.3849, 1.3909, 1.4350, 1.4408\rangle \end{array} \right\}. \end{eqnarray*} |
Step 3. The score values are obtained as
\begin{equation*} s(\delta _{1}) = -1.4998, s(\delta _{2}) = -1.1246, s(\delta _{3}) = -2.3019, s(\delta _{4}) = -1.5504. \end{equation*} |
Step 4.
\begin{equation*} s(\delta _{2}) > s(\delta _{1}) > s(\delta _{4}) > s(\delta _{3}). \end{equation*} |
Therefore
\begin{equation*} \delta _{2} > \delta _{1} > \delta _{4} > \delta _{3}. \end{equation*} |
The best choice is \delta _{2}. The comparative analysis using the SV-NPHFWA and SV-NPHFWG operators is shown in Table 3.
Operators | Score | Best Alternative |
SV-NP HFWA | Sc\left(\delta _{1}\right) > Sc\left(\delta _{2}\right) > Sc\left(\delta _{3}\right) > Sc\left(\delta _{4}\right) | \delta _{1} |
SV-NP HFWG | Sc\left(\delta _{1}\right) > Sc\left(\delta _{3}\right) > Sc\left(\delta _{2}\right) > Sc\left(\delta _{4}\right) | \delta _{1} |
Therefore, according to Table 3 the best choice is \delta _{1}. After applying validity test Step 1, we obtain the same best alternative, \delta _{1}, as in our numerical case analysis. We now apply validity test Steps 2 and 3 to demonstrate that the proposed approach is reliable and relevant. To this end, we first decompose the MAGDM problem into three smaller sub-problems, namely \{\delta _{1}, \delta _{3}, \delta _{4}\} , \{\delta _{1}, \delta _{2}, \delta _{3}\} and \{\delta _{2}, \delta_{3}, \delta _{4}\} . Applying the suggested decision-making approach to these sub-problems yields the rankings \delta _{1} > \delta _{3} > \delta _{4} , \delta _{1} > \delta _{2} > \delta _{3} and \delta _{2} > \delta _{3} > \delta_{4} , respectively. Combining them gives the overall ranking \delta _{1} > \delta _{2} > \delta _{3} > \delta_{4} , which coincides with the result of the standard decision-making approach.
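The sub-problem consistency check in test Steps 2 and 3 can be sketched as follows; here `rank_fn` stands for the full decision procedure (aggregation, scoring, and ranking) and is a placeholder, not a function defined in the paper.

```python
def subproblem_consistency(alternatives, rank_fn):
    """Leave one alternative out at a time, re-rank the remaining sub-problem,
    and check that the order of the surviving alternatives agrees with the
    ranking obtained on the full problem (test Steps 2 and 3)."""
    full_ranking = rank_fn(alternatives)
    for left_out in alternatives:
        sub = [a for a in alternatives if a != left_out]
        expected = [a for a in full_ranking if a != left_out]
        if rank_fn(sub) != expected:
            return False
    return True
```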
A strong fusion of SV-NSs and HFSs, called the SV-NHFS, was developed for situations in which each object has a range of potential values described by MD, indeterminacy, and non-MD. The SV-NHFEWA, SV-NHFEWG, SV-NHFEOWG and SV-NHFEOWA operators are all proposed in this article. Additionally, based on the SV-NHFEWA and SV-NHFEWG operators, we proposed a novel MADM approach. More information on the advantages of these techniques is provided below.
(1) First, there are important characteristics of the SV-NHFEWG and SV-NHFEWA operators, including idempotency, commutativity, boundedness, and monotonicity.
(2) Second, the conversion of the SV-NHFEWA and SV-NHFEWG operators to the earlier AOs for SVNHFSs demonstrates the adaptability of the suggested AOs.
(3) Third, the results obtained by the SV-NHFEWA and SV-NHFEWG operators are accurate and dependable when compared to other existing techniques for MADM problems in an SV-NHF context, demonstrating their applicability in real-world situations.
(4) The MAGDM methods suggested in this paper can capture additional associations between attributes and alternatives, giving them higher accuracy and greater reference value than the methods currently in use, which are unable to take the inter-relationships of attributes into account in practical applications.
(5) The proposed aggregation operators are also applied in practice to carry out a symmetrical analysis in the selection of a workable cybersecurity control technique.
(6) Future research on personalized individual consistency control consensus problems, consensus reaching under non-cooperative behavior management, and two-sided matching decision-making with multi-granular and incomplete criteria weight information could all benefit from the proposed AOs. The restrictions imposed by the proposed AOs do not depend on the degrees of membership, abstention, and non-membership. Building on the proposed AOs, a new hybrid structure of interactive and prioritized AOs may also be developed.
(7) In future work, we will examine the conceptual framework of SV-NHFSs for Einstein operations using innovative decision-making approaches such as TOPSIS, VIKOR, TODIM, GRA, and EDAS. We will also discuss how these techniques can be applied in domains including analytical thinking, intelligent systems, social sciences, finance, human resource management, robotics, navigation, horticulture, soft computing, and many others.
The author (Muhammad Naeem) would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4310396DSR41.
The authors declare no conflict of interest.