Research article

Multi-sensor data fusion based on the similarity measure and belief (Deng) entropy under neutrosophic evidence sets

  • Received: 06 February 2025 Revised: 15 April 2025 Accepted: 25 April 2025 Published: 07 May 2025
  • MSC : 03E72, 94A17

  • The Dempster–Shafer (DS) evidence theory is a practical framework for handling uncertain information. Its foundation is the basic probability assignment (BPA), which accounts only for the degree of support attributed to focal elements (FEs). In this study, neutrosophic evidence sets (NESs) are defined to introduce additional probabilistic measures aimed at addressing the uncertainty, imprecision, incompleteness, and inconsistency present in real-world information. The basic element of an NES is the neutrosophic basic probability assignment (NBPA), which consists of three components: the first BPA represents the truth degree of the FEs, the second their indeterminacy degree, and the third their falsity degree. In an NES, each support degree of an FE is expressed separately, without restriction; the NES concept is therefore broader than traditional evidence sets and intuitionistic fuzzy evidence sets. Unlike the neutrosophic set (NS), the NBPA assigns truth-support, indeterminacy-support, and falsity-support degrees to both singleton and multiple-element subsets of the frame of discernment. This paper develops several information measures for NESs, such as the neutrosophic Deng entropy (NDE), the neutrosophic cosine similarity measure, and the neutrosophic Jousselme distance. An improved method based on the NDE and the neutrosophic cosine similarity measure is then established for combining contradictory evidence, increasing the influence of reliable evidence while reducing that of unreliable evidence. Finally, a case of sensor data integration for target identification is studied to highlight the value of these ideas.
The numerical example demonstrates that the proposed method provides more reliable fusion performance than classical models, particularly in scenarios involving high conflict and uncertain information. However, the effectiveness of the method partly depends on the structure of the similarity matrix and the entropy parameters, so careful parameter tuning is needed to achieve optimal results. These limitations are explicitly highlighted to guide future improvements and broader applications of the method.

    Citation: Ali Köseoğlu, Rıdvan Şahin, Ümit Demir. Multi-sensor data fusion based on the similarity measure and belief (Deng) entropy under neutrosophic evidence sets[J]. AIMS Mathematics, 2025, 10(5): 10471-10503. doi: 10.3934/math.2025477




    Uncertainty is an important factor in knowledge representation, especially in civil engineering, supply chain management, risk analysis, medical care, and other fields. Accordingly, increasing attention has been devoted to the handling of hesitancy [1,2]. Many theories have been built for this purpose, such as probability theory, fuzzy set theory [3], intuitionistic fuzzy set (IFS) theory [4], neutrosophic set theory [5], Dempster–Shafer (DS) evidence theory [6,7], and D-system theory [8]. The neutrosophic set (NS) is a powerful formal framework that extends the classical set, fuzzy set, interval-valued fuzzy set, IFS, interval-valued IFS, dialetheist set, paradoxist set, and tautological set. In contrast to the aforementioned sets, each element of an NS is characterized by three degrees of truth-membership, indeterminacy-membership, and falsity-membership, and these membership degrees are defined independently of each other. For technical applications of NSs, the domain and range of the truth-membership, indeterminacy-membership, and falsity-membership can be restricted to the standard real unit interval [0, 1]. Wang et al. [9] proposed the single-valued neutrosophic set (SVNS) as a type of NS. SVNSs can identify and hold ambiguous and complex information that earlier models cannot capture. Recently, NSs and SVNSs have been used extensively both in set-theoretic studies [10,11] and in applications [12,13,14].

    It is very difficult to make decisions in uncertain and ambiguous situations; hesitant and inaccurate data have therefore become a major problem in the decision-making process. A final judgment in decision-making is based on different measurements and criteria, so decision makers face difficulties in drawing logical inferences from uncertain, imprecise, and incomplete information. The DS theory was developed to reach sound conclusions in scenarios with incomplete information and hesitancy (see [6,7]). Unlike classical probability theory, the DS theory does not measure the probability of a given event as a single number; instead, it considers specific data about both the occurrence and non-occurrence of the event. Known as an effective tool for reasoning under uncertainty, the DS theory of evidence was first introduced by Dempster [6] and later developed by Shafer [7]. It has been widely used in different fields for knowledge fusion [15,16,17], as it effectively models ambiguity and imprecision even without prior knowledge [18,19]. Despite its numerous benefits, the DS evidence theory can produce counterintuitive results when integrating highly contradictory pieces of evidence. Several approaches have been introduced to address this problem. Yager [20] regarded conflicting evidence as invalid and reassigned the conflicting mass to the universal set. Dubois and Prade [21] argued that the useful information in conflicting evidence should be preserved and assigned conflict information only to the conflict proposition. Smets [22] left the conflicting belief outside the normalization, keeping it on the empty set.
Although several recent studies have shown promising progress [23,24,25], further research is still needed to address unresolved aspects of DS theory, such as conflict management, the independence of different sources of evidence, approaches to generating basic probability assignments (BPAs), incompleteness of the frame of discernment, and the processing of uncertain information.

    With the ability to represent uncertain and imprecise information through IFSs and neutrosophic sets (NSs), the DS theory, which can only handle single-valued uncertainties, has become inadequate for addressing real-world problems expressed within these frameworks. One strategy to address these unresolved concerns is to recognize and manage the level of hesitancy in ambiguous data before processing it. At the outset, it is important to consider the uncertainty, inconsistency, and incompleteness of the evidence, as these can lead to incomplete and counterintuitive conclusions. Moreover, in practical applications of the original DS theory, some or all probability masses may be uncertain or imprecise, rather than precise degrees of belief and belief structures. Ambiguity in language, lack of knowledge, and incompleteness can likewise introduce vagueness or imprecision. Classical evidence theory constrains the BPA of focal elements according to the level of support available. Such assessments can be used to obtain an accurate point estimate, but they eventually result in the loss of certain important details.

    Numerous recent efforts have been made to extend DS theory to intuitionistic fuzzy belief structures, such as sensor dynamics [26] and combination processes [27,28,29]. In these works, a non-support degree is assigned to the BPA of focal elements to extend BPAs to intuitionistic fuzzy information, leading to intuitionistic BPAs. Given the preceding discussion, it is clear that the support levels of the BPA need to be extended, which is the main goal of this paper. Clearly, determining the level of conflict between different pieces of evidence is a crucial issue. This paper introduces not only a fusion rule for evidence theory but also a distinct enhancement of the belief function, since the classical basic probability assignment takes into account only the support levels of the evidence. Despite the widespread use of DS theory and its various extensions, such as risk assessment [30], supply chain management [31], and industrial safety [32], several important limitations persist in practical applications. Most notably, existing methods either fail to represent indeterminacy as an independent and quantifiable component or rely on distance-based or entropy-based evaluations without integrating both in a unified manner. Furthermore, existing cardinality-based normalizations in fuzzy and intuitionistic frameworks are insufficient when applied to neutrosophic systems, due to their interval-based definitions and limited applicability in real-world decision problems. These gaps form the core motivation for the present study, which aims to establish a more comprehensive structure for evidence representation and fusion under uncertainty. To address this issue, a neutrosophic evidence set (NES) is proposed that exploits the ability of the NS to deal with hesitation. The NES framework takes into account the truth-support, indeterminacy-support, and falsity-support degrees of the focal elements.
An NBPA can be viewed as a triplet of BPAs, where the first BPA represents degrees of truth-support, the second degrees of indeterminacy-support, and the third degrees of falsity-support. Given that the primary focus of the NBPA is on various aspects of the frame of discernment, the proposed NES proves to be a versatile tool for capturing uncertainty in decision-making. Therefore, a greater amount of information can be processed by combining features of DS evidence theory and the NS. Most of the previously mentioned techniques rely on the Jousselme distance [33] as the key factor in calculating the weight vector used to update bodies of evidence. In this research, however, the weight of evidence is calculated using an improved cosine similarity measure based on the Jousselme distance and the belief (Deng) entropy, both extended to the neutrosophic evidence universe.

    The main achievements of this paper are summarized below:

    1) Classical neutrosophic sets are extended to neutrosophic evidence sets.

    2) The classical basic probability assignment, which is determined only by the degree of support of the focal elements, is extended to a neutrosophic basic probability assignment function in which the focal elements have three basic degrees of support. This allows additional information to be processed.

    3) Neutrosophic belief (Deng) entropy is defined by applying Deng entropy to neutrosophic evidence sets to quantify the hesitation of newly proposed neutrosophic basic probability assignments (NBPAs).

    4) Furthermore, a refined cosine similarity metric is constructed by considering three pieces of evidence information (angle, distance, and vector norm) to measure the similarity of two NBPAs.

    5) A decision model is built using the similarity and entropy information measures introduced earlier. This model categorizes evidence as credible or incredible based on its degree of trustworthiness, calculated using the given similarity measure. Furthermore, a positive influence function and a negative influence function are created to evaluate the information content of the various pieces of evidence. The neutrosophic belief (Deng) entropy function is then used to increase the influence of reliable evidence and decrease the influence of unreliable evidence. Thus, the weight value calculated in the first step is adjusted according to the volume of information obtained, and the final weight of each piece of evidence is obtained and used in the weighting process. A numerical example is provided to show that the proposed approach is both practical and effective.

    The rest of this paper is organized as follows: Section 2 briefly introduces the preliminaries. Section 3 then defines the NESs and the NBPA, as well as some extended information measures. Section 4 proposes a new method based on an improved cosine similarity measure of evidence and the neutrosophic belief (Deng) entropy. Section 5 provides a numerical example to demonstrate the effectiveness of the method. A statistical experiment and a detailed discussion are presented in Section 6. Finally, Section 7 concludes the paper.

    The initial ideas of DS theory, also known as evidence theory, appeared in Dempster's research, whose goal was to assess events using upper and lower probabilities. Following this study, Shafer expanded the theory in his 1976 book "A Mathematical Theory of Evidence". Since then, DS theory has been applied in various fields such as hesitant information modeling, data fusion, and decision-making.

    The first building block of the DS theory of evidence is a finite set of mutually exclusive elements called the frame of discernment, denoted by $\Omega$. $P(\Omega)$ denotes the power set of $\Omega$ and contains all possible combinations of the elements of $\Omega$, including $\Omega$ itself. Singleton sets in the frame of discernment $\Omega$ contain no non-empty proper subsets and are thus called atomic sets. It is assumed that only one atomic set can be true at any given time. An observer can express the belief that one or more sets in the power set of $\Omega$ may be true by assigning belief masses to these sets. Belief mass on an atomic set $A\in P(\Omega)$ is interpreted as the belief that the set in question is true. Belief mass on a non-atomic set $A\in P(\Omega)$ is interpreted as the belief that one of the atomic sets it contains is true, but the observer is uncertain about which one. The following definitions cover the basic operations of DS theory [6,7] and provide the theoretical basis for the development of the proposed neutrosophic evidence sets.

    Definition 1. Let $\Omega=\{A_1,A_2,\ldots,A_n\}$ be the frame of discernment. A basic probability assignment (BPA) is a function $m:P(\Omega)\to[0,1]$ satisfying the two following conditions:

    $m(\emptyset)=0 \quad \text{and} \quad \sum_{A\subseteq\Omega} m(A)=1,$ (1)

    where $\emptyset$ denotes the empty set, $A$ is any subset of $\Omega$, and $m$ is called a mass function. For each subset $A\subseteq\Omega$, the value taken by the BPA at $A$ is called the basic probability mass of $A$, denoted by $m(A)$.
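As a concrete illustration of Definition 1, a BPA can be encoded as a dictionary from frozensets (subsets of $\Omega$) to masses, and the two conditions of Eq (1) checked mechanically. The following is a minimal Python sketch; the function name and encoding are illustrative, not from the paper.

```python
def is_valid_bpa(m, tol=1e-9):
    """Check the two BPA conditions of Eq (1):
    m(empty set) = 0 and the masses sum to 1."""
    if m.get(frozenset(), 0.0) != 0.0:        # m(empty) = 0
        return False
    return abs(sum(m.values()) - 1.0) < tol   # sum of m(A) = 1

# Example on the frame {a, b}: mass split between {a} and the whole frame.
m = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
print(is_valid_bpa(m))  # True
```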

    Now, we define the belief and plausibility functions, which determine the lower and upper probability bounds over the frame of discernment.

    Definition 2. For any proposition $A\subseteq\Omega$, the belief function $\mathrm{Bel}:P(\Omega)\to[0,1]$ and the plausibility function $\mathrm{Pl}:P(\Omega)\to[0,1]$ are defined as

    $\mathrm{Bel}(A)=\sum_{B\subseteq A} m(B),\qquad \mathrm{Pl}(A)=1-\mathrm{Bel}(A^c)=\sum_{B\cap A\neq\emptyset} m(B),$ (2)

    where $A^c$ is the complement of $A$. It is clear that

    $\mathrm{Bel}(B)\le \mathrm{Pl}(B).$

    An interval $[\mathrm{Bel}(B),\mathrm{Pl}(B)]$ is called the belief interval (BI). It can also be interpreted as an interval enclosing the "true probability" of $B$ [33]. The following definition provides Dempster's combination rule, which combines two independent BPAs defined on the same frame of discernment $\Omega$.

    Definition 3. Let $m_1$ and $m_2$ be independent BPAs defined on the frame of discernment $\Omega$. Dempster's combination rule combines the two BPAs and generates a new BPA as follows:

    $m(A)=\begin{cases}0, & A=\emptyset,\\[4pt] \dfrac{\sum_{B\cap C=A,\ B,C\in P(\Omega)} m_1(B)\,m_2(C)}{1-K}, & A\neq\emptyset,\end{cases}$ (3)

    where $K$ is the conflict coefficient and indicates the degree or amount of conflict between $m_1$ and $m_2$. It is noteworthy that the value of $K$ is between 0 and 1. Formally, $K$ can be defined as

    $K=\sum_{B\cap C=\emptyset,\ B,C\in P(\Omega)} m_1(B)\,m_2(C).$ (4)

    When $K=0$, there is no conflict between $m_1$ and $m_2$; when $K=1$, the two BPAs are in complete conflict.
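Dempster's rule in Eqs (3) and (4) can be sketched directly from the definition. The following Python fragment (our own illustrative encoding, with BPAs as frozenset-to-mass dictionaries) accumulates products over intersecting focal elements and normalizes by $1-K$:

```python
from collections import defaultdict

def dempster_combine(m1, m2):
    """Combine two BPAs on the same frame via Eqs (3)-(4)."""
    fused = defaultdict(float)
    K = 0.0
    for B, mB in m1.items():
        for C, mC in m2.items():
            inter = B & C
            if inter:
                fused[inter] += mB * mC   # mass for B ∩ C = A ≠ ∅
            else:
                K += mB * mC              # Eq (4): conflicting mass
    if K >= 1.0:
        raise ValueError("complete conflict (K = 1): rule undefined")
    return {A: v / (1.0 - K) for A, v in fused.items()}  # Eq (3)

m1 = {frozenset({"a"}): 0.9, frozenset({"b"}): 0.1}
m2 = {frozenset({"a"}): 0.8, frozenset({"b"}): 0.2}
fused = dempster_combine(m1, m2)  # K = 0.26; most mass goes to {a}
```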

    In evidence theory, the Jousselme distance plays a central role in assessing the degree of disagreement between two BPAs. Unlike standard distance measures, it takes into account the structure of the power set and the semantic relationships between focal elements by incorporating a cardinality based on set intersections and unions. This allows it to capture not only numerical differences but also the overlap in meaning between different hypotheses. As such, it has been widely used to measure conflict, to evaluate the reliability of evidence sources, and to determine weights in fusion processes. Its flexibility also makes it adaptable to extended frameworks like fuzzy, intuitionistic fuzzy, and neutrosophic evidence structures. By offering a meaningful and mathematically grounded way to compare belief functions, the Jousselme distance significantly enhances both the theoretical depth and practical effectiveness of evidence combination strategies.

    Definition 4. [33] Let $m_1$ and $m_2$ be two BPAs on the same frame of discernment $\Omega$, with focal elements $A_i$ and $B_j$. The Jousselme distance, denoted by $d(m_1,m_2)$, is defined as follows:

    $d(m_1,m_2)=\sqrt{0.5\left(\|m_1\|^2+\|m_2\|^2-2\langle m_1,m_2\rangle\right)},$ (5)

    where $\|m_1\|^2=\langle m_1,m_1\rangle$ and $\|m_2\|^2=\langle m_2,m_2\rangle$, and $\langle m_1,m_2\rangle$ represents the scalar product of the two vectors, defined as follows:

    $\langle m_1,m_2\rangle=\sum_{i=1}^{s}\sum_{j=1}^{s} m_1(A_i)\,m_2(B_j)\,\frac{|A_i\cap B_j|}{|A_i\cup B_j|},$ (6)

    where $s$ is the number of elements of the power set, $A_i$ and $B_j$ are subsets of the frame of discernment $\Omega$, $|A_i\cap B_j|$ is the cardinality of the intersection of $A_i$ and $B_j$, and $|A_i\cup B_j|$ is the cardinality of their union.
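Equations (5) and (6) translate into code almost verbatim. Below is a minimal sketch (naming is ours) in which the Jaccard-style factor $|A_i\cap B_j|/|A_i\cup B_j|$ weights the scalar product of the two mass vectors:

```python
import math

def jousselme_distance(m1, m2):
    """Jousselme distance of Eq (5), using the weighted
    scalar product of Eq (6)."""
    focal = list(set(m1) | set(m2))  # union of the focal elements

    def inner(ma, mb):
        total = 0.0
        for A in focal:
            for B in focal:
                jac = len(A & B) / len(A | B)   # |A ∩ B| / |A ∪ B|
                total += ma.get(A, 0.0) * mb.get(B, 0.0) * jac
        return total

    return math.sqrt(0.5 * (inner(m1, m1) + inner(m2, m2) - 2 * inner(m1, m2)))

# Two BPAs committed to disjoint singletons are at the maximal distance 1.
d = jousselme_distance({frozenset({"a"}): 1.0}, {frozenset({"b"}): 1.0})
```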

    A modified cosine similarity measure proposed by Jiang et al. [34] is an efficient approach to measure the similarity between vectors because it considers three important factors, namely, angle, distance, and vector norm. The modified cosine similarity measure among these BPAs can determine whether the pieces of evidence contradict each other. A large similarity indicates that this piece of evidence has more support than another piece of evidence, while a small similarity indicates that this piece of evidence has less support than another piece of evidence.

    In the following definitions, we use standard letters instead of script letters to emphasize the distinction between the evidence sets and other elements.

    Definition 5. [34] Let $A=\{a_1,a_2,a_3,\ldots,a_n\}$ and $B=\{b_1,b_2,b_3,\ldots,b_n\}$ be two vectors of $\mathbb{R}^n$. The modified cosine similarity between the vectors $A$ and $B$ is defined as

    $s(A,B)=\begin{cases}0.5\left\{\alpha^{-d(A,B)}+\min\left(\dfrac{|A|}{|B|},\dfrac{|B|}{|A|}\right)\right\}c_v(A,B), & A\neq 0 \text{ and } B\neq 0,\\[4pt] 0, & A=0 \text{ or } B=0,\end{cases}$ (7)

    where $\alpha$ is a constant whose value is greater than 1, $d(A,B)$ is the Euclidean distance between the two vectors $A$ and $B$, $\alpha^{-d(A,B)}$ is the distance-based similarity measure, $\min(|A|/|B|,|B|/|A|)$ is the minimum of the two norm ratios, and $c_v(A,B)$ is the cosine similarity of the vectors, defined by

    $c_v(A,B)=\cos(\theta)=\frac{A\cdot B}{|A||B|}=\frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\sqrt{\sum_{i=1}^{n} B_i^2}},\qquad 0\le c_v(A,B)\le 1.$ (8)

    The larger the $\alpha$, the greater the impact of distance on the vector similarity.
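The three factors of Eq (7) (distance, norm ratio, and angle) can be combined as follows. This sketch assumes the distance term takes the form $\alpha^{-d(A,B)}$, consistent with the remark that a larger $\alpha$ strengthens the impact of distance; the names are illustrative.

```python
import math

def modified_cosine_similarity(A, B, alpha=5.0):
    """Modified cosine similarity of Eqs (7)-(8) for two real vectors."""
    nA = math.sqrt(sum(a * a for a in A))
    nB = math.sqrt(sum(b * b for b in B))
    if nA == 0.0 or nB == 0.0:
        return 0.0                      # zero-vector branch of Eq (7)
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(A, B)))  # Euclidean distance
    cos = sum(a * b for a, b in zip(A, B)) / (nA * nB)      # Eq (8)
    return 0.5 * (alpha ** (-d) + min(nA / nB, nB / nA)) * cos

# Identical vectors: d = 0, norm ratio = 1, cosine = 1, so the similarity is ~1.
s = modified_cosine_similarity([0.6, 0.4], [0.6, 0.4])
```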

    To bridge the classical and the newly introduced concepts, it is essential to emphasize the structural and functional differences between the traditional BPA and the proposed neutrosophic BPA (NBPA). While classical BPAs assign a single degree of belief to each focal element, the NBPA extends this structure by incorporating three components (truth, indeterminacy, and falsity degrees), allowing for a more expressive representation of uncertainty. Therefore, we give the neutrosophic notions in this section.

    Smarandache defined an NS for handling uncertain information as follows:

    Definition 6. [5] Let $X$ be a space of points (objects), with a generic element in $X$ denoted by $x$. An NS $A$ in $X$ is characterized by a truth-membership function $T_A(x)$, an indeterminacy-membership function $I_A(x)$, and a falsity-membership function $F_A(x)$. $T_A(x)$, $I_A(x)$, and $F_A(x)$ are real standard or nonstandard subsets of $]^-0,1^+[$, that is, $T_A:X\to\,]^-0,1^+[$, $I_A:X\to\,]^-0,1^+[$, and $F_A:X\to\,]^-0,1^+[$. There is no restriction on the sum of $T_A(x)$, $I_A(x)$, and $F_A(x)$; therefore

    $^-0\le \sup T_A(x)+\sup I_A(x)+\sup F_A(x)\le 3^+,\quad \forall x\in X,$

    where $\sup T_A(x)$, $\sup I_A(x)$, and $\sup F_A(x)$ denote the suprema of $T_A(x)$, $I_A(x)$, and $F_A(x)$, respectively. The values $T_A(x)$, $I_A(x)$, and $F_A(x)$ characterize the truth-membership degree, indeterminacy-membership degree, and falsity-membership degree, respectively, of the element $x$ in the set $A$.

    It is difficult to apply the NS to real decision-making problems because its membership degrees are defined on nonstandard intervals. To overcome this, Wang et al. [9] defined the concept of a single-valued neutrosophic set (SVNS).

    Definition 7. [9] Let $T_A(x)$, $I_A(x)$, and $F_A(x)$ be truth-membership, indeterminacy-membership, and falsity-membership functions, respectively. An SVNS is defined as

    $A=\{\langle x,T_A(x),I_A(x),F_A(x)\rangle : x\in X\},$ (9)

    where the functions $T_A(x)$, $I_A(x)$, and $F_A(x)$ are real standard subsets of $[0,1]$, i.e.,

    $T_A:X\to[0,1],\quad I_A:X\to[0,1],\quad F_A:X\to[0,1],$

    with the condition

    $0\le T_A(x)+I_A(x)+F_A(x)\le 3.$

    For the sake of convenience, we call $a=\langle T_A,I_A,F_A\rangle$ a neutrosophic element (NE), i.e., a single-valued neutrosophic number (SVNN) in an SVNS.

    Assume that $A=\{\langle x_j,T_A(x_j),I_A(x_j),F_A(x_j)\rangle : x_j\in X\}$ and $B=\{\langle x_j,T_B(x_j),I_B(x_j),F_B(x_j)\rangle : x_j\in X\}$ are two NSs in $X=\{x_1,x_2,\ldots,x_n\}$. Then, for all $x_j\in X$,

    $A\cap B=\{\langle x_j,\min(T_A(x_j),T_B(x_j)),\max(I_A(x_j),I_B(x_j)),\max(F_A(x_j),F_B(x_j))\rangle\},$
    $A\cup B=\{\langle x_j,\max(T_A(x_j),T_B(x_j)),\min(I_A(x_j),I_B(x_j)),\min(F_A(x_j),F_B(x_j))\rangle\}.$

    An SVNS $A$ is contained in another SVNS $B$, denoted by $A\subseteq B$, if and only if $T_A(x)\le T_B(x)$, $I_B(x)\le I_A(x)$, and $F_B(x)\le F_A(x)$ for any $x\in X$.

    Definition 8. [35] Let $A$ and $B$ be any two NSs in $X$. Then the cosine similarity measure between $A$ and $B$ is defined as follows:

    $c(A,B)=\frac{1}{n}\sum_{j=1}^{n}\cos\left[\frac{\pi\left(|T_A(x_j)-T_B(x_j)|\vee|I_A(x_j)-I_B(x_j)|\vee|F_A(x_j)-F_B(x_j)|\right)}{2}\right],$ (10)

    where the symbol $\vee$ is the maximum operation. Moreover, let $a=\langle T_A,I_A,F_A\rangle$ be an NE and $\lambda\in(0,1]$; then we have the equation

    $\lambda a=\lambda\langle T_A,I_A,F_A\rangle=\langle 1-(1-T_A)^\lambda,(I_A)^\lambda,(F_A)^\lambda\rangle,$

    which we will use in later sections.
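The neutrosophic cosine similarity of Eq (10) averages, over the universe, the cosine of the largest componentwise deviation. Below is a minimal sketch over lists of (T, I, F) triples; the encoding and name are ours.

```python
import math

def neutrosophic_cosine_similarity(A, B):
    """Cosine similarity measure of Eq (10) for two NSs given as
    equal-length lists of (T, I, F) triples."""
    total = 0.0
    for (Ta, Ia, Fa), (Tb, Ib, Fb) in zip(A, B):
        dev = max(abs(Ta - Tb), abs(Ia - Ib), abs(Fa - Fb))  # the maximum (∨) operation
        total += math.cos(math.pi * dev / 2.0)
    return total / len(A)

A = [(0.7, 0.2, 0.1), (0.5, 0.4, 0.3)]
print(neutrosophic_cosine_similarity(A, A))  # 1.0 for identical sets
```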

    The neutrosophic aggregation operator has garnered significant attention as a key tool in information fusion. Şahin and Yiğider [12] introduced two notable operators: the single-valued neutrosophic weighted averaging (SVNWA) operator and the single-valued neutrosophic weighted geometric (SVNWG) operator. The first is described below.

    Assume that $A_j=\langle T_j,I_j,F_j\rangle\ (j=1,2,\ldots,n)$ is a collection of SVNNs. The SVNWA operator is defined as

    $\mathrm{SVNWA}(A_1,A_2,\ldots,A_n)=\sum_{j=1}^{n}\omega_j A_j=\left\langle 1-\prod_{j=1}^{n}(1-T_j)^{\omega_j},\ \prod_{j=1}^{n}I_j^{\omega_j},\ \prod_{j=1}^{n}F_j^{\omega_j}\right\rangle,$ (11)

    where $\omega_j$ is the weight of each $A_j\ (j=1,2,\ldots,n)$, with $\omega_j\in[0,1]$ and $\sum_{j=1}^{n}\omega_j=1$.
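The SVNWA operator of Eq (11) is a product-based aggregation. The sketch below (illustrative naming, not from the paper) aggregates (T, I, F) triples under a normalized weight vector:

```python
def svnwa(elements, weights):
    """SVNWA operator of Eq (11); weights lie in [0, 1] and sum to 1."""
    pT = pI = pF = 1.0
    for (T, I, F), w in zip(elements, weights):
        pT *= (1.0 - T) ** w   # product of (1 - T_j)^{w_j}
        pI *= I ** w           # product of I_j^{w_j}
        pF *= F ** w           # product of F_j^{w_j}
    return (1.0 - pT, pI, pF)

# Averaging two copies of the same SVNN returns that SVNN (up to rounding).
r = svnwa([(0.6, 0.2, 0.3), (0.6, 0.2, 0.3)], [0.5, 0.5])
```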

    Recently, Li and Deng [28] revealed that there exists a strong relationship between IFSs and DS theory. The main purpose of this section is to extend this idea to the neutrosophic universe. Considering the power of neutrosophic sets compared to IFSs, the results can be expected to be more effective in multi-criteria decision-making (MCDM). Within the framework of DS theory over neutrosophic sets, a precise definition of neutrosophic cardinality is essential for the normalization of basic probability assignments, the formulation of combination rules, the computation of the Jousselme distance, the application of modified similarity measures, and the evaluation of belief entropy. Before defining neutrosophic cardinality, it is instructive to examine the definitions of fuzzy and intuitionistic cardinalities, as they provide the foundational developments that lead to its formalization.

    Initially, De Luca and Termini [36] extended the classical notion of cardinality to fuzzy sets in order to quantify the number of elements within a fuzzy set. To this end, they introduced a cardinality measure known as the sigma count, defined as follows:

    $\mathrm{count}(A)=\sum_{x\in X}\mu_A(x),$

    where A is a fuzzy set on X. Subsequently, Szmidt and Kacprzyk [37] generalized this definition to IFSs as:

    $\mathrm{count}(A)=\left[\sum_{x\in X}\mu_A(x),\ \sum_{x\in X}\left(\mu_A(x)+\pi_A(x)\right)\right].$

    Tripathy et al. [38] rearranged this definition as

    $\mathrm{count}(A)=\left[\sum_{x\in X}\mu_A(x),\ \sum_{x\in X}\left(1-\nu_A(x)\right)\right].$

    Thus, if $\nu_A(x)=1-\mu_A(x)$, then it reduces to the count of a fuzzy set $A$. They also showed some properties of this cardinality.

    However, since these cardinality definitions are expressed as intervals, the classical cardinality concept has been employed in the applications of DS theory to fuzzy and IFSs. This, in turn, diminishes the full impact of DS theory within these frameworks. In the context of neutrosophic sets, Majumdar and Samanta [39] proposed a cardinality definition parallel to those of fuzzy and IFSs as follows:

    $\mathrm{count}(A)=\left[\sum_{j=1}^{n}T_A(x_j),\ \sum_{j=1}^{n}\left(T_A(x_j)+\left(1-I_A(x_j)\right)\right)\right],$

    where $A$ is an NS in $X$. Nevertheless, this count is still an interval, so it cannot be used effectively in applications. Therefore, we need a new definition of neutrosophic cardinality. In order to remain within the theoretical boundaries of the fuzzy and intuitionistic fuzzy frameworks, we propose the following definition in parallel with the aforementioned cardinality formulations.

    Definition 9. Let $A$ be an NS in $X$. Then the sigma count of $A$ is defined as

    $\mathrm{count}(A)=\left[\sum_{j=1}^{n}T_A(x_j),\ \sum_{j=1}^{n}\left(1-I_A(x_j)\right),\ \sum_{j=1}^{n}\left(1-F_A(x_j)\right)\right].$

    Proposition 1. If $I_A(x_j)=\emptyset$, that is, the indeterminacy component is dropped, then the sigma count reduces to the count of an IFS, and if both $I_A(x_j)=\emptyset$ and $F_A(x_j)=\emptyset$, then it reduces to the count of a fuzzy set.

    In order to effectively use it in applications, we need to reduce the sigma count of A to a real number.

    Definition 10. Let A be a NS in X. Then, the average possible neutrosophic cardinality of A, denoted by |A|n, is defined as

    $|A|_n=\frac{1}{3}\sum_{j=1}^{n}\left(T_A(x_j)+1-I_A(x_j)+1-F_A(x_j)\right).$ (12)

    Proposition 2. Let $A$ be an NS in $X$ and $|A|_n$ its average possible neutrosophic cardinality. Then we have the following situations.

    1) If $\langle T_A(x_j),I_A(x_j),F_A(x_j)\rangle=\langle 1,0,0\rangle$ for all $x_j\in X$, that is, $A$ is the largest neutrosophic set, then the cardinality of $A$ is computed as $|A|_n=n$.

    2) If $\langle T_A(x_j),I_A(x_j),F_A(x_j)\rangle=\langle 0,1,1\rangle$ for all $x_j\in X$, that is, $A$ is the smallest neutrosophic set, then the cardinality of $A$ is computed as $|A|_n=0$.

    3) $0\le|A|_n\le n$.

    Theorem 1. Let A and B be two neutrosophic sets on X, then

    1) $\mathrm{count}(A\cup B)+\mathrm{count}(A\cap B)=\mathrm{count}(A)+\mathrm{count}(B)$,

    2) $|A|_n+|A^c|_n=n$.

    Proof. 1) Applying Definition 9 to the union and intersection of $A$ and $B$, we have

    $\mathrm{count}(A\cup B)=\left[\sum_{j=1}^{n}\max\{T_A(x_j),T_B(x_j)\},\ \sum_{j=1}^{n}\max\{1-I_A(x_j),1-I_B(x_j)\},\ \sum_{j=1}^{n}\max\{1-F_A(x_j),1-F_B(x_j)\}\right]$

    and

    $\mathrm{count}(A\cap B)=\left[\sum_{j=1}^{n}\min\{T_A(x_j),T_B(x_j)\},\ \sum_{j=1}^{n}\min\{1-I_A(x_j),1-I_B(x_j)\},\ \sum_{j=1}^{n}\min\{1-F_A(x_j),1-F_B(x_j)\}\right].$

    So, since $\max\{a,b\}+\min\{a,b\}=a+b$ componentwise,

    $\mathrm{count}(A\cup B)+\mathrm{count}(A\cap B)=\left[\sum_{j=1}^{n}\left(T_A(x_j)+T_B(x_j)\right),\ \sum_{j=1}^{n}\left(\left(1-I_A(x_j)\right)+\left(1-I_B(x_j)\right)\right),\ \sum_{j=1}^{n}\left(\left(1-F_A(x_j)\right)+\left(1-F_B(x_j)\right)\right)\right]=\mathrm{count}(A)+\mathrm{count}(B).$

    2) Since

    $|A|_n=\frac{1}{3}\sum_{j=1}^{n}\left(T_A(x_j)+1-I_A(x_j)+1-F_A(x_j)\right)$

    and

    $A^c=\{\langle x_j,F_A(x_j),1-I_A(x_j),T_A(x_j)\rangle : x_j\in X\},$

    we have

    $|A^c|_n=\frac{1}{3}\sum_{j=1}^{n}\left(F_A(x_j)+1-\left(1-I_A(x_j)\right)+1-T_A(x_j)\right)=\frac{1}{3}\sum_{j=1}^{n}\left(F_A(x_j)+I_A(x_j)+1-T_A(x_j)\right).$

    Therefore,

    $|A|_n+|A^c|_n=\frac{1}{3}\sum_{j=1}^{n}\left(T_A(x_j)+1-I_A(x_j)+1-F_A(x_j)+F_A(x_j)+I_A(x_j)+1-T_A(x_j)\right)=\frac{1}{3}\sum_{j=1}^{n}3=n.$

    If an NS contains four neutrosophic elements and each of them is the largest SVNN $\langle 1,0,0\rangle$, then, intuitively, the cardinality of this NS should be 4.

    Example 1. Let $A=\{(1,0,0),(1,0,0),(1,0,0),(1,0,0)\}$ be an SVNS. Then, the cardinality of this set is

    $|A|_n=\frac{1}{3}\left((1+1+1)+(1+1+1)+(1+1+1)+(1+1+1)\right)=4.$

    Example 2. Let

    $$A=\{(1,0,0),(1,0,0),(0,1,1),(0,1,1)\}$$

    be a SVNS. Then, the cardinality of this set is

    $$|A|_n=\frac{1}{3}\big((1+1+1)+(1+1+1)+(0+0+0)+(0+0+0)\big)=2.$$

    Example 3. Let

    $$A=\{(1,0,0),(1,1,1),(0,1,1)\}$$

    be a SVNS. Then, the cardinality of this set is

    $$|A|_n=\frac{1}{3}\big((1+1+1)+(1+0+0)+(0+0+0)\big)\approx 1.33.$$

    Example 4. Let

    $$A=\{(0.7,0.5,0.8),(0.1,0.3,0.9),(0.5,0.6,1.0)\}$$

    be a SVNS. Then, the cardinality of this set is

    $$|A|_n=\frac{1}{3}\big((0.7+0.5+0.2)+(0.1+0.7+0.1)+(0.5+0.4+0)\big)\approx 1.07.$$
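    The cardinality in (12) is straightforward to compute. The following minimal Python sketch (the function name is ours) reproduces Examples 1 and 4:

```python
def neutrosophic_cardinality(elements):
    """Average possible neutrosophic cardinality |A|_n of Eq. (12):
    each element contributes T + (1 - I) + (1 - F), averaged by 1/3."""
    return sum(t + 1 - i + 1 - f for t, i, f in elements) / 3

# Example 1: four largest neutrosophic numbers
print(neutrosophic_cardinality([(1, 0, 0)] * 4))  # 4.0
# Example 4
print(round(neutrosophic_cardinality(
    [(0.7, 0.5, 0.8), (0.1, 0.3, 0.9), (0.5, 0.6, 1.0)]), 2))  # 1.07
```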

    The fundamental component of the NES is the neutrosophic basic probability assignment (NBPA). Similar to the BPAs in classical evidence theory, the NBPA is defined over a finite set of mutually exclusive elements, known as the frame of discernment.

    Definition 10. Let

    $$\Omega=\{s_1,s_2,\ldots,s_n\}$$

    be a frame of discernment and let $P(\Omega)$ denote its power set, i.e.,

    $$P(\Omega)=\{\emptyset,\{s_1\},\ldots,\{s_n\},\{s_1,s_2\},\ldots,\{s_1,s_2,\ldots,s_i\},\ldots,\Omega\}.$$

    Let each $A_i\in P(\Omega)$ be an element of the power set. Then the functions

    $$m^+:P(\Omega)\to[0,1],\qquad m^0:P(\Omega)\to[0,1],\qquad m^-:P(\Omega)\to[0,1]$$

    represent the truth-support degree, indeterminacy-support degree, and falsity-support degree, respectively. A neutrosophic basic probability assignment m on $P(\Omega)$ is then defined as

    $$m=\{\langle A_i,m^+(A_i),m^0(A_i),m^-(A_i)\rangle : A_i\in P(\Omega)\}, \quad (13)$$

    subject to the following conditions:

    $$1)\ m^+(\emptyset)=0,\ m^0(\emptyset)=0,\ m^-(\emptyset)=0;\qquad 2)\ \sum_{A_i\in P(\Omega)}m^+(A_i)=1;\qquad 3)\ \forall A_i\subseteq\Omega,\ m^+(A_i)+m^0(A_i)+m^-(A_i)\le 3. \quad (14)$$

    Definition 11. For an NBPA m on Ω, any subset $A_i\in P(\Omega)$ is called a focal element of m if

    $$m^+(A_i)>0 \ \text{ or } \ m^0(A_i)>0 \ \text{ or } \ m^-(A_i)>0.$$
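    Conditions (14) can be checked mechanically. The sketch below (identifiers are ours) represents an NBPA as a dict from frozensets to $(m^+, m^0, m^-)$ triples, with the empty set omitted so that $m(\emptyset)=(0,0,0)$ holds by construction:

```python
def is_valid_nbpa(nbpa, tol=1e-9):
    """Verify conditions (14): all components in [0, 1], truth masses
    sum to 1, and each triple sums to at most 3."""
    in_range = all(0 <= v <= 1 for triple in nbpa.values() for v in triple)
    truth_sums_to_one = abs(sum(t for t, _, _ in nbpa.values()) - 1) <= tol
    bounded = all(sum(triple) <= 3 + tol for triple in nbpa.values())
    return in_range and truth_sums_to_one and bounded

nbpa = {frozenset({'a'}): (0.3, 0.2, 0.4),
        frozenset({'a', 'b'}): (0.5, 0.6, 1.0),
        frozenset({'a', 'b', 'c'}): (0.2, 0.9, 0.5)}
print(is_valid_nbpa(nbpa))  # True
```

    Note that only the truth components are constrained to sum to 1; the indeterminacy and falsity components are free in $[0,1]$.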

    Definition 12. Let m be an NBPA on Ω given as

    $$m=\{\langle A_i,m^+(A_i),m^0(A_i),m^-(A_i)\rangle : A_i\in P(\Omega)\}.$$

    Then the neutrosophic belief and plausibility functions of m also have three components, defined as follows:

    $$Bl(A_i)=\langle bl^+(A_i),bl^0(A_i),bl^-(A_i)\rangle,\qquad Pl(A_i)=\langle pl^+(A_i),pl^0(A_i),pl^-(A_i)\rangle, \quad (15)$$

    where

    $$bl^+(A_i)=\sum_{A_i\subseteq A_j}m^+(A_j),\qquad pl^+(A_i)=\sum_{A_j\cap A_i\neq\emptyset}m^+(A_j),$$
    $$bl^0(A_i)=\frac{1}{2^{|m|_n}}\sum_{A_i\subseteq A_j}m^0(A_j),\qquad pl^0(A_i)=\frac{1}{2^{|m|_n}}\sum_{A_j\cap A_i\neq\emptyset}m^0(A_j),$$
    $$bl^-(A_i)=\frac{1}{2^{|m|_n}}\sum_{A_i\subseteq A_j}m^-(A_j),\qquad pl^-(A_i)=\frac{1}{2^{|m|_n}}\sum_{A_j\cap A_i\neq\emptyset}m^-(A_j).$$

    It is evident that if the second and third components of an NBPA are empty, it reduces to a traditional BPA. However, a traditional BPA cannot be transformed into an NBPA, as the NBPA inherently contains more information than the BPA. Furthermore, the coefficient $\frac{1}{2^{|m|_n}}$ ensures that the second and third components of the belief and plausibility functions remain within the defined range. Specifically, $bl^0(A_i)$ and $bl^-(A_i)$ are computed by summing over several focal elements $A_j$, which may lead to large sums; they are therefore normalized by the coefficient $\frac{1}{2^{|m|_n}}$. The same idea applies to $pl^0(A_i)$ and $pl^-(A_i)$. Otherwise, these sums could exceed their defined limits, resulting in unbounded values.

    Example 5. Let

    $$\Omega=\{a,b,c\}$$

    and let $A_i\in P(\Omega)$, where the focal elements are

    $$A_i\in\big(\{a\},\{a,b\},\{a,b,c\}\big).$$

    Suppose two observations are given in the form of NBPAs as follows:

    $$m_1=\{\langle\{a\},(0.3,0.2,0.4)\rangle,\ \langle\{a,b\},(0.5,0.6,1.0)\rangle,\ \langle\{a,b,c\},(0.2,0.9,0.5)\rangle\},$$
    $$m_2=\{\langle\{a\},(0.2,0.6,0.8)\rangle,\ \langle\{a,b\},(0.6,0.2,0.8)\rangle,\ \langle\{a,b,c\},(0.1,0.9,0.6)\rangle\}.$$

    Here, for example,

    $$m_1(\{a\})=(0.3,0.2,0.4).$$

    It is easy to see that

    $$\sum_{i=1}^{3}m_1^+(A_i)=1,$$

    while the sums of the indeterminacy and falsity components exceed 1. Since $\{a\}\subseteq\{a,b\}$ and $\{a\}\subseteq\{a,b,c\}$, the neutrosophic belief function for the truth component is calculated as

    $$bl^+(\{a\})=m_1^+(\{a\})+m_1^+(\{a,b\})+m_1^+(\{a,b,c\})=0.3+0.5+0.2=1.0.$$

    To calculate $bl^0(\{a\})$, we first compute the neutrosophic cardinality as

    $$|m_1|_n=\frac{1}{3}\big((0.3+0.8+0.6)+(0.5+0.4+0.0)+(0.2+0.1+0.5)\big)=\frac{3.4}{3}\approx 1.1333.$$

    Hence,

    $$bl^0(\{a\})=\frac{1}{2^{1.1333}}(0.2+0.6+0.9)=0.7750,\qquad bl^-(\{a\})=\frac{1}{2^{1.1333}}(0.4+1.0+0.5)=0.8662.$$

    Now, for the element $\{a,b\}$, note that $\{a,b\}\subseteq\{a,b,c\}$ only, so

    $$bl^+(\{a,b\})=m_1^+(\{a,b\})+m_1^+(\{a,b,c\})=0.5+0.2=0.7,$$
    $$bl^0(\{a,b\})=\frac{1}{2^{1.1333}}(0.6+0.9)=0.6838,\qquad bl^-(\{a,b\})=\frac{1}{2^{1.1333}}(1.0+0.5)=0.6838.$$

    To highlight the difference between the belief and plausibility functions, let us compute $Pl(\{a,b\})$.

    Since

    $$\{a,b\}\cap\{a\}=\{a\}\neq\emptyset \ \text{ and } \ \{a,b\}\cap\{a,b,c\}=\{a,b\}\neq\emptyset,$$

    we include all focal elements:

    $$pl^+(\{a,b\})=0.3+0.5+0.2=1,$$
    $$pl^0(\{a,b\})=\frac{1}{2^{1.1333}}(0.2+0.6+0.9)=0.7750,\qquad pl^-(\{a,b\})=\frac{1}{2^{1.1333}}(0.4+1.0+0.5)=0.8662.$$

    The remaining belief and plausibility values for other focal elements can be calculated in a similar manner.
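    The numbers in Example 5 can be reproduced with a short script. The sketch below (our code, not from the paper) uses the superset convention for bl that the example applies and the $1/2^{|m|_n}$ scaling:

```python
def n_card(nbpa):
    # |m|_n: Eq. (12) applied to the NBPA's triples
    return sum(t + 1 - i + 1 - f for t, i, f in nbpa.values()) / 3

def belief(nbpa, A):
    """Bl(A) as computed in Example 5: the truth part sums masses of focal
    elements containing A; the indeterminacy and falsity parts are scaled
    by 1 / 2**|m|_n."""
    scale = 1 / 2 ** n_card(nbpa)
    sups = [B for B in nbpa if A <= B]
    return (sum(nbpa[B][0] for B in sups),
            scale * sum(nbpa[B][1] for B in sups),
            scale * sum(nbpa[B][2] for B in sups))

m1 = {frozenset({'a'}): (0.3, 0.2, 0.4),
      frozenset({'a', 'b'}): (0.5, 0.6, 1.0),
      frozenset({'a', 'b', 'c'}): (0.2, 0.9, 0.5)}
print(tuple(round(x, 3) for x in belief(m1, frozenset({'a'}))))
# (1.0, 0.775, 0.866): bl+({a}), bl0({a}), bl-({a}) up to rounding
```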

    Definition 13. Suppose m is an NBPA on Ω where each subset $A\in P(\Omega)$ is assigned a neutrosophic probability mass

    $$m(A)=\langle m^+(A),m^0(A),m^-(A)\rangle.$$

    Then, a reduced BPA, denoted by $\tilde{m}$, can be obtained from m using

    $$\tilde{m}(A)=\frac{m^+(A)+(1-m^0(A))(1-m^-(A))}{2}. \quad (16)$$

    It is evident that $\tilde{m}(A)\in[0,1]$. However, a single NBPA is not enough in real decision-making problems. To operate on multiple NBPAs, a combination rule is needed, which is defined below.
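    The reduction in (16) is a one-liner; the following check (ours) confirms its boundary behavior and its value on an intermediate triple:

```python
def reduce_nbpa(triple):
    """Reduced BPA of Eq. (16): collapse (m+, m0, m-) into one mass in [0, 1]."""
    m_plus, m_zero, m_minus = triple
    return (m_plus + (1 - m_zero) * (1 - m_minus)) / 2

print(reduce_nbpa((1.0, 0.0, 0.0)))  # 1.0: full truth support
print(reduce_nbpa((0.0, 1.0, 1.0)))  # 0.0: full indeterminacy and falsity
print(round(reduce_nbpa((0.6810, 0.1527, 0.2105)), 3))  # 0.675
```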

    Definition 14. Assume $m_1$ and $m_2$ are two NBPAs on Ω; the combination result, denoted $m_1\oplus m_2$, is proposed as follows:

    $$m_1\oplus m_2(A)=\langle m^+_{1,2}(A),m^0_{1,2}(A),m^-_{1,2}(A)\rangle, \quad (17)$$

    where

    $$m^+_{1,2}(\emptyset)=0,\qquad m^+_{1,2}(A_k)=\frac{\sum_{A_i\cap A_j=A_k}m_1^+(A_i)m_2^+(A_j)}{1-\sum_{A_i\cap A_j=\emptyset}m_1^+(A_i)m_2^+(A_j)},$$
    $$m^0_{1,2}(\emptyset)=0,\qquad m^0_{1,2}(A_k)=\frac{\sum_{A_i\cap A_j=A_k}m_1^0(A_i)m_2^0(A_j)}{|A_i\cap A_j|_n\left(1-\sum_{A_i\cap A_j=\emptyset}m_1^0(A_i)m_2^0(A_j)\right)},$$
    $$m^-_{1,2}(\emptyset)=0,\qquad m^-_{1,2}(A_k)=\frac{\sum_{A_i\cap A_j=A_k}m_1^-(A_i)m_2^-(A_j)}{|A_i\cap A_j|_n\left(1-\sum_{A_i\cap A_j=\emptyset}m_1^-(A_i)m_2^-(A_j)\right)}, \quad (18)$$

    and each NBPA component $m(A_k)$ is defined as

    $$m(A_k)=\begin{cases}0, & A_k=\emptyset,\\[4pt] \dfrac{\sum_{A_i\cap A_j=A_k,\ A_i,A_j\in P(\Omega)}m_1(A_i)m_2(A_j)}{1-K}, & A_k\neq\emptyset,\end{cases} \quad (19)$$

    where K is the degree of conflict between $m_1$ and $m_2$ and is called the conflict coefficient. K is defined as

    $$K=\sum_{A_i\cap A_j=\emptyset,\ A_i,A_j\in P(\Omega)}\frac{1}{|A_i|_n}\frac{1}{|A_j|_n}m_1(A_i)m_2(A_j), \quad (20)$$

    where $K\in[0,1]$. If $K=0$, there is no conflict between $m_1$ and $m_2$; conversely, $K=1$ signifies complete conflict between $m_1$ and $m_2$. Just like the BPA elements given in (15), the coefficient $\frac{1}{|A_i|_n}\frac{1}{|A_j|_n}$ ensures that K stays in the interval $[0,1]$. The same holds for the second and third components in (18).

    It is necessary to measure the distance between two NBPAs in decision-making problems. Therefore, we extend the Jousselme distance into NESs.

    Definition 15. Let

    $$m_1=\{\langle A_i,m_1^+(A_i),m_1^0(A_i),m_1^-(A_i)\rangle : A_i\in P(\Omega)\}$$

    and

    $$m_2=\{\langle B_j,m_2^+(B_j),m_2^0(B_j),m_2^-(B_j)\rangle : B_j\in P(\Omega)\}$$

    be two NBPAs on Ω. The Jousselme distance in NESs, denoted by $d_N(m_1,m_2)$, is defined as follows:

    $$d_N(m_1,m_2)=\sqrt{0.5\left(\|m_1\|^2+\|m_2\|^2-2\langle m_1,m_2\rangle\right)}, \quad (21)$$

    where

    $$\|m_1\|^2=\langle m_1,m_1\rangle,\qquad \|m_2\|^2=\langle m_2,m_2\rangle;$$

    $\langle m_1,m_2\rangle$ is the scalar product of two vectors defined as follows:

    $$\langle m_1,m_2\rangle=\sum_{i=1}^{s}\sum_{j=1}^{s}\frac{1}{3}\left\{m_1^+(A_i)m_2^+(B_j)+m_1^0(A_i)m_2^0(B_j)+m_1^-(A_i)m_2^-(B_j)\right\}\frac{|A_i\cap B_j|_n}{|A_i\cup B_j|_n}, \quad (22)$$

    where s is the number of elements of the power set, and $|A_i\cap B_j|_n$ and $|A_i\cup B_j|_n$ are the neutrosophic cardinalities of the intersection and union of the NSs $A_i$ and $B_j$, respectively.

    It can be easily proven that the dN(m1,m2) satisfies the basic properties of a distance measure.
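    For concreteness, here is a sketch of (21) and (22) in Python (our naming). Since the focal elements here are crisp subsets, the neutrosophic cardinalities of their intersections and unions reduce to ordinary set sizes:

```python
from math import sqrt

def scalar_product(m1, m2):
    """<m1, m2> of Eq. (22): 1/3-averaged component products weighted
    by |Ai ∩ Bj| / |Ai ∪ Bj|."""
    total = 0.0
    for A, (t1, i1, f1) in m1.items():
        for B, (t2, i2, f2) in m2.items():
            union = len(A | B)
            if union:
                total += ((t1 * t2 + i1 * i2 + f1 * f2) / 3
                          * len(A & B) / union)
    return total

def jousselme_nes(m1, m2):
    """Neutrosophic Jousselme distance d_N of Eq. (21)."""
    return sqrt(max(0.0, 0.5 * (scalar_product(m1, m1)
                                + scalar_product(m2, m2)
                                - 2 * scalar_product(m1, m2))))

m1 = {frozenset({'a'}): (0.3, 0.2, 0.4), frozenset({'a', 'b'}): (0.5, 0.6, 1.0)}
print(jousselme_nes(m1, m1))  # 0.0: identical evidence is at distance zero
```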

    Similarity measures evaluate the degree of agreement or closeness between different pieces of evidence. Depending on the structure of the similarity function (e.g., cosine-based, distance-based, etc.), the weight or influence of the evidence may vary. High similarity between elements may lead to stronger mutual reinforcement during fusion. Moreover, the similarity measure is the starting point for calculating the weights of evidence in the fusion process. Therefore, we introduce a modified cosine similarity measure of NBPAs to enhance the sensitivity and robustness of neutrosophic evidence combination mechanisms by providing a principled approach to comparing multi-dimensional evidence structures. Its use contributes to more informed and reliable fusion outcomes, especially in complex environments where information is uncertain, inconsistent, or incomplete.

    Assume that $E_1$ and $E_2$ are two sources of evidence under a frame of discernment Ω, and let $A_j\in P(\Omega)$, where $j=1,2,\ldots,n$ indexes the singleton sets. Let $m_1$ and $m_2$ be two NBPAs defined over the same Ω. For each singleton subset $A_j$, the corresponding belief and plausibility measures are given by

    $$Bl_1(A_j)=\langle bl_1^+(A_j),bl_1^0(A_j),bl_1^-(A_j)\rangle,\qquad Pl_1(A_j)=\langle pl_1^+(A_j),pl_1^0(A_j),pl_1^-(A_j)\rangle,$$
    $$Bl_2(A_j)=\langle bl_2^+(A_j),bl_2^0(A_j),bl_2^-(A_j)\rangle,\qquad Pl_2(A_j)=\langle pl_2^+(A_j),pl_2^0(A_j),pl_2^-(A_j)\rangle.$$

    To simplify the notation, the NBPAs can be expressed as two vectors over the singleton subsets, indexed by $i=1,2$:

    $$Bl_i=(Bl_i(A_1),Bl_i(A_2),\ldots,Bl_i(A_n)),\qquad Pl_i=(Pl_i(A_1),Pl_i(A_2),\ldots,Pl_i(A_n)).$$

    Based on these representations, we now define the novel similarity measure of NBPAs integrating the modified cosine similarity.

    Definition 16. Let $S(Bl_1,Bl_2)$ and $S(Pl_1,Pl_2)$ be the modified cosine similarities of NBPAs. Then, a novel similarity of NBPAs is defined as

    $$S_N=\delta S(Bl_1,Bl_2)+(1-\delta)S(Pl_1,Pl_2), \quad (23)$$

    where

    $$\delta=\left(\frac{1}{6n}\sum_{i=1}^{2}\sum_{j=1}^{n}\left(|bl_i^+(A_j)-pl_i^+(A_j)|^2+|bl_i^0(A_j)-pl_i^0(A_j)|^2+|bl_i^-(A_j)-pl_i^-(A_j)|^2\right)\right)^{\frac{1}{2}},$$

    $$S(Bl_1,Bl_2)=\begin{cases}0.5\left\{\alpha^{-d(Bl_1,Bl_2)}+\min\left(\dfrac{|Bl_1|_n}{|Bl_2|_n},\dfrac{|Bl_2|_n}{|Bl_1|_n}\right)\right\}c(Bl_1,Bl_2), & Bl_1\neq 0,\ Bl_2\neq 0,\\[6pt] 0, & Bl_1=0 \text{ or } Bl_2=0,\end{cases}$$

    $$S(Pl_1,Pl_2)=\begin{cases}0.5\left\{\alpha^{-d(Pl_1,Pl_2)}+\min\left(\dfrac{|Pl_1|_n}{|Pl_2|_n},\dfrac{|Pl_2|_n}{|Pl_1|_n}\right)\right\}c(Pl_1,Pl_2), & Pl_1\neq 0,\ Pl_2\neq 0,\\[6pt] 0, & Pl_1=0 \text{ or } Pl_2=0,\end{cases}$$

    where δ is the total uncertainty of NBPAs with $0\le\delta\le 1$, $\alpha\ge 1$, d is the neutrosophic Jousselme distance, and $c(Bl_1,Bl_2)$ and $c(Pl_1,Pl_2)$ are the cosine similarities of the vectors $(Bl_1,Bl_2)$ and $(Pl_1,Pl_2)$ given in (15), respectively. When

    $$Bl_i(A_j)=\langle 1,0,0\rangle \quad\text{and}\quad Pl_i(A_j)=\langle 0,1,1\rangle,$$

    i.e., Bl is maximal and Pl is minimal, then $\delta=1$, its maximum value; if

    $$Bl_i(A_j)=Pl_i(A_j),$$

    then $\delta=0$, its minimum value.

    The new similarity of NBPAs $S_N$ satisfies the following properties:

    1) $m_1=m_2\Leftrightarrow S_N(m_1,m_2)=1$;

    2) $S_N(m_1,m_2)=S_N(m_2,m_1)$;

    3) $0\le S_N(m_1,m_2)\le 1$;

    4) $S_N(m_1,m_2)=0$ iff $m_1$ and $m_2$ have no compatible element.

    Shannon entropy is a measure of ambiguity or randomness in a system, introduced by Shannon in information theory. It quantifies the amount of information contained in a probability distribution, where higher entropy indicates greater uncertainty or unpredictability. Mathematically, it is defined as

    $$S=\sum_{i=1}^{l}p_i\log_2\frac{1}{p_i}, \quad (24)$$

    where l is the number of events, and $p_i$ represents the probability of event i in the system, satisfying

    $$\sum_{i=1}^{l}p_i=1.$$

    Next, we discuss Deng entropy in the present section. Since its initial proposal for thermodynamics by Clausius in 1865 [40], various forms of entropy have been introduced to quantify system hesitancy, including Shannon entropy [41], Tsallis entropy [42], and nonadditive entropy [43]. In the context of information theory, Shannon entropy is frequently employed to assess the information content of a system or process and to determine the expected value of the information conveyed in a message.

    A new method named Deng entropy [44] was recently introduced to quantify the uncertainty of the BPAs. Deng entropy is essentially an extension of Shannon entropy because Deng entropy equals Shannon entropy when a probability measure is established by BPA. In simpler terms, Deng entropy can be viewed as a form of generalized Shannon entropy. When a BPA is converted into a probability distribution, Deng entropy subsequently transforms into Shannon entropy.

    The Deng entropy is able to accurately assess the level of ambiguity of BPA. Nevertheless, there is still a question as to how to quantify the level of vagueness in assigning fundamental probabilities when operating within a domain that lacks precise, adequate, and comprehensive data. A primary focus of this research article is the creation of a novel entropy known as NDE. NDE is an extended form of both Shannon entropy and Deng entropy. When BPA transforms into a probability distribution, the NDE transforms into Shannon entropy. In this section, we will introduce Deng entropy and discuss some of its properties.

    Definition 17. [44] Deng entropy for a BPA m is defined as

    $$E_d(m)=-\sum_{A\subseteq X;\ m(A)>0}m(A)\log_2\left(\frac{m(A)}{2^{|A|}-1}\right), \quad (25)$$

    where |A| is the cardinality of A. If the BPA m is assigned only to single elements, it degenerates to Shannon entropy:

    $$E_d(m)=-\sum_{A\subseteq X}m(A)\log_2 m(A). \quad (26)$$
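    Deng entropy in (25) is easy to verify numerically. In this sketch (our code), a BPA maps frozensets to masses, and a singleton-only BPA recovers Shannon entropy as in (26):

```python
from math import log2

def deng_entropy(bpa):
    """Deng entropy of Eq. (25); bpa maps frozenset -> mass."""
    return -sum(m * log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

# Singleton focal elements: 2^|A| - 1 = 1, so this is Shannon entropy.
print(deng_entropy({frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}))  # 1.0
# A multi-element focal set carries extra uncertainty:
print(round(deng_entropy({frozenset({'a', 'b'}): 1.0}), 4))  # 1.585
```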

    Deng entropy quantifies the level of unpredictability in a given evidence set based on m(A) and |A|, without taking into account the size of the frame of discernment. Thus, this approach is inadequate for evaluating variations in the degree of uncertainty when similar basic probability assignments occur within distinct frames of discernment.

    Zhou and colleagues [45] utilized the Dempster–Shafer evidence theory (DSET) to evaluate the ambiguity of the evidence set by taking into account mass functions and the size of the frame of discernment. They introduced a new belief entropy model, derived from Deng entropy, to overcome the constraints of Deng entropy.

    Definition 18. [45] The improved Deng entropy for a BPA m is defined as

    $$E_{id}(m)=-\sum_{A\subseteq X}m(A)\log_2\left(\frac{m(A)}{(2^{|A|}-1)e^{\frac{|A|-1}{|X|}}}\right), \quad (27)$$

    where |A| and |X| are the cardinalities of A and X, the latter being the number of elements in the frame of discernment. The exponential factor $e^{\frac{|A|-1}{|X|}}$ captures more of the uncertain information in a body of evidence compared to Deng entropy.

    Now, we will define NDE using the neutrosophic information where the BPAs have three components.

    Definition 19. Let m be an NBPA on a frame of discernment X. Then, the NDE of m is defined as

    $$E_{NDE}(m)=-\sum_{A\subseteq X}m^+(A)\log_2\left(\frac{m^+(A)}{(2^{|A|_n}-1)e^{\frac{|A|_n-1}{|X|_n}}}\right)-\sum_{A\subseteq X}m^0(A)\log_2\left(\frac{m^0(A)}{(2^{|A|_n}-1)e^{\frac{|A|_n-1}{|X|_n}}}\right)-\sum_{A\subseteq X}m^-(A)\log_2\left(\frac{m^-(A)}{(2^{|A|_n}-1)e^{\frac{|A|_n-1}{|X|_n}}}\right), \quad (28)$$

    where $|A|_n$ is the neutrosophic cardinality of A. It is clear that the value range of the NDE is $(0,\infty)$.
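    A direct transcription of (28) can be sketched as follows (our code; the input pairs each focal element's neutrosophic cardinality with its triple). It shows that the three components enter symmetrically, and that a singleton-only NBPA with zero indeterminacy and falsity recovers Shannon entropy:

```python
from math import exp, log2

def nde(focal_elements, frame_card):
    """Neutrosophic Deng entropy of Eq. (28). focal_elements is a list of
    (|A|_n, (m_plus, m_zero, m_minus)) pairs; frame_card is |X|_n."""
    total = 0.0
    for card, triple in focal_elements:
        denom = (2 ** card - 1) * exp((card - 1) / frame_card)
        for m in triple:  # the three components enter symmetrically
            if m > 0:
                total -= m * log2(m / denom)
    return total

# Probability-like NBPA (singletons, no indeterminacy/falsity) -> Shannon:
print(nde([(1.0, (0.5, 0, 0)), (1.0, (0.5, 0, 0))], 3.0))  # 1.0
```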

    Furthermore, because of the unpredictable and varying nature of real-life data, the gathered evidence must be classified. We divide it into two groups: reliable evidence and unreliable evidence. Determining which evidence is reliable and adjusting the weights accordingly is crucial for enhancing the impact of reliable evidence and reducing the influence of unreliable evidence on the final fusion outcome. We therefore employ the modified cosine similarity of evidence to assess credibility. When the target evidence has a high similarity measure with the other alternative evidence, the other evidence substantiates it, and it should be viewed as reliable. Conversely, a low similarity measure between the target evidence and the other alternative evidence suggests that the target evidence lacks support, indicating its unreliability. To amplify the impact of reliable evidence it should be prioritized, while reducing the influence of unreliable evidence mitigates its negative effects. On this basis, we aim to achieve the optimal decision outcome by assessing the weights of evidence through different evaluations for the different types of evidence.

    Definition 20. Suppose that $m_i$ $(i=1,2,\ldots,n)$ is a reliable NBPA on a frame of discernment and $E_{NDE}(m_i)$ is the NDE of $m_i$. Then the positive impact function $P(m_i)$ of $m_i$ with $E_{NDE}(m_i)$ is calculated as

    $$P(m_i)=e^{E_{NDE}(m_i)}. \quad (29)$$

    Definition 21. Suppose that $m_i$ $(i=1,2,\ldots,n)$ is an unreliable NBPA on a frame of discernment and $E_{NDE}(m_i)$ is the NDE of $m_i$. Then the negative impact function $N(m_i)$ of $m_i$ with $E_{NDE}(m_i)$ is calculated as

    $$N(m_i)=e^{-\left(E_{NDE}^{\max}-E_{NDE}(m_i)\right)}. \quad (30)$$

    It is easy to see that both functions are monotonically increasing. Therefore, for unreliable evidence, a smaller entropy value indicates that the evidence receives less support from other sources, which in turn gives a lower weight assignment to such unreliable evidence.
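    The impact functions (29) and (30) amount to two exponentials; the quick check below (our code, entropy values taken from the case study later in the paper) illustrates their behavior:

```python
from math import exp

def positive_impact(e_nde):
    """P(m_i) = e^{E_NDE(m_i)} for reliable evidence, Eq. (29)."""
    return exp(e_nde)

def negative_impact(e_nde, e_max):
    """N(m_i) = e^{-(E_NDE^max - E_NDE(m_i))} for unreliable evidence, Eq. (30)."""
    return exp(-(e_max - e_nde))

# The unreliable evidence with maximum entropy keeps a neutral weight of 1:
print(negative_impact(7.3928, 7.3928))            # 1.0
# Lower-entropy unreliable evidence is strongly suppressed:
print(round(negative_impact(2.7474, 7.3928), 4))  # 0.0096
```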

    We present a decision model based on the definitions of NESs. Let Ω be a frame of discernment and $A_i\in P(\Omega)$ denote an element of its power set. Consider a collection of n NBPAs defined as

    $$m_1=\{\langle A_i,m_1^+(A_i),m_1^0(A_i),m_1^-(A_i)\rangle : A_i\in P(\Omega)\},\ \ldots,\ m_n=\{\langle A_i,m_n^+(A_i),m_n^0(A_i),m_n^-(A_i)\rangle : A_i\in P(\Omega)\}.$$

    Let SN(mi,mj) be the similarity value between two NBPAs mi and mj where i,j=1,2,,n.

    Step 1. Obtain the global credibility weight of the pieces of evidence.

    Using (23), construct the similarity matrix $S_{n\times n}$ as follows:

    $$S_{ij}=\begin{bmatrix}1 & S_N(m_1,m_2) & \cdots & S_N(m_1,m_n)\\ S_N(m_2,m_1) & 1 & \cdots & S_N(m_2,m_n)\\ \vdots & \vdots & \ddots & \vdots\\ S_N(m_n,m_1) & S_N(m_n,m_2) & \cdots & 1\end{bmatrix}. \quad (31)$$

    To compute an entry such as $S_N(m_1,m_2)$, we follow these steps. First, calculate $Bl_1(A_i)$ and $Pl_1(A_i)$ for $m_1$, and $Bl_2(A_i)$ and $Pl_2(A_i)$ for $m_2$, using (15). Then, using (10), compute the cosine similarities $c(Bl_1,Bl_2)$ and $c(Pl_1,Pl_2)$. Next, determine the Jousselme distances $d(Bl_1,Bl_2)$ and $d(Pl_1,Pl_2)$ by applying (22). For a given $\alpha\ge 1$ and the obtained δ, the modified cosine similarities of $Bl_k(A_i)$ and $Pl_k(A_i)$ for $k=1,2$ follow from (23).

    Thus, the support degree $SD(m_i)$ of each NBPA $m_i$ is calculated by summing the similarity values of its corresponding column in the matrix:

    $$SD(m_i)=\sum_{j=1,\ j\neq i}^{n}S_N(m_i,m_j). \quad (32)$$

    The credibility degrees $CD(m_i)$ of the NBPAs $m_i$ are then determined by normalizing the support degrees:

    $$CD(m_i)=\frac{SD(m_i)}{\sum_{j=1}^{n}SD(m_j)}. \quad (33)$$

    Finally, the global credibility degree $GCD(m)$ of the entire body of evidence is computed by averaging the individual credibility degrees:

    $$GCD(m)=\frac{\sum_{i=1}^{n}CD(m_i)}{n}. \quad (34)$$
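    Equations (32)-(34) can be checked with a few lines of Python (our code; the matrix values are those of Table 2 in the case study). Note that $GCD(m)$ always equals $1/n$, because the credibility degrees are normalized:

```python
def credibility_weights(S):
    """Support degrees (32), credibility degrees (33), and the global
    credibility degree (34) from a symmetric similarity matrix S."""
    n = len(S)
    sd = [sum(S[i][j] for j in range(n) if j != i) for i in range(n)]
    total = sum(sd)
    cd = [s / total for s in sd]
    gcd = sum(cd) / n  # always 1/n, since the cd values sum to 1
    return sd, cd, gcd

S = [[1.0000, 0.8435, 0.8007, 0.9016, 0.8386],
     [0.8435, 1.0000, 0.8435, 0.7978, 0.8255],
     [0.8007, 0.8435, 1.0000, 0.7238, 0.7716],
     [0.9016, 0.7978, 0.7238, 1.0000, 0.7648],
     [0.8386, 0.8255, 0.7716, 0.7648, 1.0000]]
sd, cd, gcd = credibility_weights(S)
print(round(sd[0], 4), round(cd[0], 4), round(gcd, 4))  # 3.3844 0.2086 0.2
```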

    Step 2. Classify the evidence.

    All evidence can be categorized into two distinct classes, reliable and unreliable, based on the comparison between the credibility degree $CD(m_i)$ and the global credibility degree $GCD(m)$. This classification is defined as:

    $$m_i=\begin{cases}\text{reliable evidence}, & CD(m_i)\ge GCD(m),\\ \text{unreliable evidence}, & CD(m_i)<GCD(m).\end{cases} \quad (35)$$

    Step 3. Measure the ambiguity of evidence.

    The NDE ENDE(mi) of the NBPA mi (i=1,2,,n) can be calculated according to (28). This entropy quantifies the degree of uncertainty or ambiguity inherent in the evidence mi.

    Step 4. Compute the general information volume for the reliable and unreliable evidence.

    Based on the positive impact function $P(m_i)$ and the negative impact function $N(m_i)$ derived from the entropy $E_{NDE}(m_i)$ in (28), the general information volume $GI(m_i)$ of each piece of evidence is computed as:

    $$GI(m_i)=\begin{cases}P(m_i)=e^{E_{NDE}(m_i)}, & \text{if } m_i \text{ is reliable},\\[4pt] N(m_i)=e^{-\left(E_{NDE}^{\max}-E_{NDE}(m_i)\right)}, & \text{if } m_i \text{ is unreliable}.\end{cases} \quad (36)$$

    Step 5. Modify the CDs of the evidence.

    To adjust the original credibility values based on information volume, the modified credibility degree $MCD(m_i)$ of each NBPA is calculated as:

    $$MCD(m_i)=\frac{CD(m_i)\times GI(m_i)}{\sum_{i=1}^{n}CD(m_i)\times GI(m_i)},\qquad i=1,2,\ldots,n. \quad (37)$$

    These modified values serve as final weights in the subsequent aggregation process.
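    Combining the CD values of Table 3 with the GI values of Table 5 from the case study reproduces the MCD row of Table 6 up to rounding of the tabulated inputs (a quick check, our code):

```python
def modified_credibility(cd, gi):
    """Modified credibility degrees of Eq. (37)."""
    weighted = [c * g for c, g in zip(cd, gi)]
    total = sum(weighted)
    return [w / total for w in weighted]

cd = [0.2086, 0.2041, 0.1935, 0.1965, 0.1973]  # CD(m) row of Table 3
gi = [10.7707, 16.7523, 0.0096, 0.0050, 1.0]   # GI(m) row of Table 5
mcd = modified_credibility(cd, gi)
print([round(w, 4) for w in mcd])  # [0.383, 0.5829, 0.0003, 0.0002, 0.0336]
```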

    Step 6. Obtain the weighted average of the pieces of evidence.

    Using $MCD(m_i)$, the weighted average evidence $WAE(m)$ is defined as:

    $$WAE(m)=\sum_{i=1}^{n}\big(MCD(m_i)\times m_i\big). \quad (38)$$

    This represents the fused result that incorporates both the quality and quantity of the individual pieces of evidence.

    Step 7. Fuse the WAE of m.

    Since there are n pieces of evidence, $WAE(m)$ is combined $n-1$ times using the proposed rule given in (17):

    $$F(m)=\underbrace{\big((WAE(m)\oplus WAE(m))\oplus\cdots\oplus WAE(m)\big)}_{n-1\ \text{times}}. \quad (39)$$

    After the fusing operation, we obtain the final fusion result of the evidence, integrating credibility, ambiguity, and relative consistency.

    The flowchart of the decision model is illustrated in Figure 1, and the step-by-step algorithm is given in Algorithm 1.

    Figure 1.  Flowchart of decision model.

    Algorithm 1. Information fusion on NESs.
    Input: A set of focal elements {A1, A2, …, An} and a set of pieces of evidence {m1, m2, …, mm}
    Steps:
          1) For i = 1 : m
                 For j = 1 : m
                     Determine the Bl_k(A_i) and Pl_k(A_i) vectors for k = 1, 2 using (15).
                     Compute the cosine similarities of Bl_i(A_i) and Pl_i(A_i) using (10).
                     Calculate the Jousselme distance of Bl_i(A_i) and Pl_i(A_i) using (22).
                     Find the total uncertainty δ of Bl_i(A_i) and Pl_i(A_i) using (23).
                     Calculate the modified cosine similarities of the NBPAs Bl_i(A_i) and Pl_i(A_i) using (23).
                     Determine the similarity matrix entry S_N(i, j).
                 End
             End
          2) Calculate the support degree SD(m_i) and the credibility degree CD(m_i) using (32) and (33), respectively.
          3) Determine the global credibility degree GCD(m) of the evidence by utilizing (34).
          4) Classify the pieces of evidence based on (35).
          5) Compute the NDE of the pieces of evidence by using (28) and the cardinality definition given in (12).
          6) Determine the general information volume GI(m_i) according to the positive impact function P(m_i) and the negative impact function N(m_i) of m_i with (36).
          7) Determine the MCD(m_i) of the NBPA m_i by (37).
          8) Calculate the WAE(m) by aggregating the evidence using (11), taking MCD(m_i) as the weight vector. Then calculate the reduced NBPAs using (16).
          9) Repeat n−1 times:
                 Fuse the WAE(m)s by using (17)–(20), and determine the reduced BPAs by using (16).
         10) Obtain the final result.
    Output: The final fusion result of m1, m2, …, mm

    We apply the decision-making model to the multi-sensor-based target recognition system proposed by Qian et al. [46], integrating it into a neutrosophic environment to show the applicability and effectiveness of the method developed in the neutrosophic framework. This system includes observations of objects acquired from five different kinds of sensors. Here, $x_1$, $x_2$, and $x_3$ are the three objects in

    $$X=\{x_1,x_2,x_3\}.$$

    We take the BPAs in [46] as the truth degrees and randomly add other BPAs as the indeterminacy and falsity degrees to obtain the NBPAs given in Table 1.

    Table 1.  NBPA matrix of the observations.

    Pieces of evidence   {x1}                 {x2}                 {x3}                 {x1,x2,x3}
    m1(.)                (0.30, 0.70, 0.40)   (0.20, 1.00, 0.30)   (0.10, 0.70, 0.70)   (0.40, 0.40, 0.20)
    m2(.)                (0.00, 0.10, 0.50)   (0.90, 0.20, 0.80)   (0.10, 0.40, 0.90)   (0.00, 0.80, 0.60)
    m3(.)                (0.60, 0.20, 1.00)   (0.10, 0.30, 1.00)   (0.10, 0.30, 0.40)   (0.20, 0.10, 0.70)
    m4(.)                (0.70, 0.20, 0.10)   (0.10, 0.80, 0.40)   (0.10, 0.70, 0.40)   (0.10, 0.70, 0.10)
    m5(.)                (0.70, 1.00, 0.90)   (0.10, 0.80, 0.90)   (0.10, 0.40, 0.70)   (0.10, 0.60, 0.60)

    Step 1. Using (23), the similarity matrix Sij is calculated with α=1.5 and is given in Table 2.

    Table 2.  The similarity matrix of NBPAs.

             m1(.)    m2(.)    m3(.)    m4(.)    m5(.)
    m1(.)    1.0000   0.8435   0.8007   0.9016   0.8386
    m2(.)    0.8435   1.0000   0.8435   0.7978   0.8255
    m3(.)    0.8007   0.8435   1.0000   0.7238   0.7716
    m4(.)    0.9016   0.7978   0.7238   1.0000   0.7648
    m5(.)    0.8386   0.8255   0.7716   0.7648   1.0000

    According to (32)–(34), the support degree $SD(m_i)$, the credibility degree $CD(m_i)$, and the global credibility degree $GCD(m)$ of the evidence are obtained as shown in Table 3.

    Table 3.  The related degree measures of NBPAs.

    Items     m1       m2       m3       m4       m5
    SD(m)     3.3844   3.3103   3.1395   3.1880   3.2005
    CD(m)     0.2086   0.2041   0.1935   0.1965   0.1973
    GCD(m)    0.2000

    Step 2. Using (35), all possible evidence is classified as reliable and unreliable evidence. It is easy to see in Table 3 that m1 and m2 are reliable evidence and m3,m4, and m5 are unreliable evidence.

    Step 3. Based on (28), the information volume (NDE) of the NBPA $m_i$ $(i=1,2,\ldots,5)$ is computed and given in Table 4.

    Table 4.  Neutrosophic belief (Deng) entropy of NBPAs.

                 m1       m2       m3       m4       m5
    E_NDE(mi)    2.3768   2.8185   2.7474   2.1017   7.3928

    Step 4. By applying (36), the general information volumes of the evidence, using the positive impact function $P(m_i)$ and the negative impact function $N(m_i)$ of $m_i$, are measured and given in Table 5.

    Table 5.  The related degree measures of NBPAs.

    Items    m1        m2        m3       m4       m5
    GI(m)    10.7707   16.7523   0.0096   0.0050   1

    Step 5. On the basis of (37), the $MCD(m_i)$ of each NBPA $m_i$ is generated and given in Table 6.

    Table 6.  The related degree measures of NBPAs.

    Items     m1       m2       m3       m4       m5
    MCD(m)    0.3831   0.5828   0.0003   0.0002   0.0336

    Step 6. The weighted average evidence WAE(m) is computed by utilizing (38), as shown in Table 7.

    Table 7.  The neutrosophic WAE(m) and its first result F(m).

    Items          {x1}                       {x2}                       {x3}                       {x1,x2,x3}
    WAE(m)         (0.6810, 0.1527, 0.2105)   (0.0743, 0.1078, 0.2480)   (0.0872, 0.3324, 0.3643)   (0.1575, 0.4071, 0.1771)
    Reduced BPAs   0.6749                     0.3726                     0.2558                     0.3227

    Step 7. Since there are 5 observations, WAE(m) is fused 4 times with the proposed combination rule; the results are given in Table 8.

    Table 8.  The fusion results of the neutrosophic WAE(m)s.

    Fusion step 1 (m1, m2)
    NBPAs          WAE(m)                     Reduced m
    {x1}           (0.8842, 0.0510, 0.0428)   0.8963
    {x2}           (0.0377, 0.0343, 0.0537)   0.4758
    {x3}           (0.0457, 0.1315, 0.0941)   0.4162
    {x1,x2,x3}     (0.0324, 0.1716, 0.0338)   0.4163

    Fusion step 2 (m1, m2, m3)
    NBPAs          WAE(m)                     Reduced m
    {x1}           (0.9619, 0.0185, 0.0080)   0.9678
    {x2}           (0.0140, 0.0122, 0.0106)   0.4957
    {x3}           (0.0176, 0.0521, 0.0215)   0.4726
    {x1,x2,x3}     (0.0064, 0.0707, 0.0061)   0.4650

    Fusion step 3 (m1, m2, m3, m4)
    NBPAs          WAE(m)                     Reduced m
    {x1}           (0.9883, 0.0071, 0.0015)   0.9899
    {x2}           (0.0045, 0.0047, 0.0020)   0.4989
    {x3}           (0.0059, 0.0208, 0.0046)   0.4903
    {x1,x2,x3}     (0.0012, 0.0289, 0.0011)   0.4856

    Fusion step 4 (m1, m2, m3, m4, m5)
    NBPAs          WAE(m)                     Reduced m
    {x1}           (0.9965, 0.0028, 0.0003)   0.9967
    {x2}           (0.0014, 0.0018, 0.0004)   0.4996
    {x3}           (0.0019, 0.0083, 0.0010)   0.4963
    {x1,x2,x3}     (0.0002, 0.0118, 0.0002)   0.4941

    From the results given in Table 8 and the graphical illustration in Figure 2, we can conclude that the proposed method has a high convergence rate. Figure 2 shows clearly that the NBPA of x1 differs from the others right from the beginning. The values at fusion step 0 are the initial values, which are also given in Table 7. The starting point of x1 is 0.6749, which leads to faster convergence. We also give the neutrosophic values of the NBPAs at each step, since the others converge to around 0.5 when they should ideally converge to near 0. The reason for this behavior lies in the reduction function $\tilde{m}$ given in (16). However, as given in Table 8, the F(m) values are neutrosophic numbers and their truth, indeterminacy, and falsity values converge to 0, which is the desired outcome. Therefore, the fact that the reduced values of {x2}, {x3}, and {x1,x2,x3} approach 0.5 does not constitute a contradiction or absurdity. Moreover, all three of these NBPAs converge to the same point, which separates them from x1. Furthermore, the second and third components of x1 shrink as the steps progress, which can be interpreted as a reduction in the uncertainty and conflict associated with x1.

    We discuss the benefits of the new approach by comparing it to other existing methods. Table 9 presents the combined findings from several studies. Based on Table 9, we observe that Dempster's combination method produces paradoxical results. If we evaluate the results obtained by integrating only two pieces of evidence, the proposed method indicates that the target is x1 from the beginning, while the other methods indicate that the target is x2. With three pieces of evidence, the methods of Murphy [48] and Deng et al. [49] assign less than 50% belief to object x1, so no decision can be made. Qian et al. [46] and Xiao and Qin [47] reported BPA values of 61.10% and 57.79% for object x1, respectively. In contrast, the probability that the proposed approach gives to target x1 is 88.14%.

    Table 9.  The final fusion result F(m).

    Evidence          Method               {x1}     {x2}     {x3}     {x1,x2,x3}   Target
    m1,m2             Dempster [6]         0.0000   0.9153   0.0847   0.0000       x2
                      Murphy [48]          0.1187   0.7518   0.0719   0.0576       x2
                      Deng et al. [49]     0.1187   0.7518   0.0719   0.0576       x2
                      Qian et al. [46]     0.1187   0.7518   0.0719   0.0576       x2
                      Xiao and Qin [47]    0.1187   0.7518   0.0719   0.0576       x2
                      Proposed method      0.8963   0.4758   0.4162   0.4163       x1
    m1,m2,m3          Dempster [6]         0.0000   0.9153   0.0847   0.0000       x2
                      Murphy [48]          0.3324   0.5909   0.0540   0.0227       x2
                      Deng et al. [49]     0.4477   0.4546   0.0644   0.0333       –
                      Qian et al. [46]     0.6110   0.2861   0.0659   0.0370       x1
                      Xiao and Qin [47]    0.5779   0.3070   0.0714   0.0438       x1
                      Proposed method      0.9678   0.4957   0.4726   0.4650       x1
    m1,m2,m3,m4       Dempster [6]         0.0000   0.9153   0.0847   0.0000       x2
                      Murphy [48]          0.6170   0.3505   0.0272   0.0053       x1
                      Deng et al. [49]     0.8007   0.1640   0.0283   0.0070       x1
                      Qian et al. [46]     0.8472   0.1221   0.0249   0.0058       x1
                      Xiao and Qin [47]    0.8785   0.0857   0.0271   0.0076       x1
                      Proposed method      0.9899   0.4989   0.4903   0.4856       x1
    m1,m2,m3,m4,m5    Dempster [6]         0.0000   0.9153   0.0847   0.0000       x2
                      Murphy [48]          0.8389   0.1502   0.0099   0.0010       x1
                      Deng et al. [49]     0.9499   0.0411   0.0080   0.0010       x1
                      Qian et al. [46]     0.9525   0.0393   0.0074   0.0008       x1
                      Xiao and Qin [47]    0.9713   0.0204   0.0073   0.0010       x1
                      Proposed method      0.9967   0.4996   0.4963   0.4941       x1

    Adding a fifth piece of evidence, for object x1 the combination methods of Murphy [48], Deng et al. [49], Qian et al. [46], and Xiao and Qin [47] show improved results, leading to BPA values of 83.89%, 94.99%, 95.25%, and 97.13%, respectively. Therefore, it can be said that these methods are able to successfully handle conflicting evidence. However, based on the five data points in Table 9, the proposed method significantly outperforms the other combination methods, computing the BPA value of the object as 99.67%, as can be seen in Figures 2 and 3. We can therefore conclude that the proposed technique is at least as effective as the other techniques, and that it is superior in integrating opposing evidence to obtain more reliable results. There are several main reasons for this. First, in the proposed method, each focal element is assigned a basic probability with three levels of support, which distinguishes it from the other methods. In addition, the proposed approach takes into account various types of evidence, categorizing them with both a positive impact function and a negative impact function using NDE. By following these steps, the influence of reliable evidence is increased while the influence of unreliable evidence is reduced. This allows the positive and negative effects of the evidence to be taken into account in the final aggregation results, outperforming the alternative methods.

    Figure 2.  Fusing results of each step.
    Figure 3.  Fusing comparison of NPBAs in each step.

    To provide a deeper analytical justification for the superiority of the proposed model, it is essential to highlight its dual integration of similarity and entropy-based mechanisms. Unlike traditional models that rely on support degrees or distance measures, the proposed approach leverages a modified cosine similarity that captures both set-based relationships with cardinality and vector norms in Jousselme distance, which allows it to assess the directional consistency between evidence sources. This prevents overestimation of similarity when evidence vectors are orthogonal or sparse—scenarios where classical cosine similarity or Euclidean metrics tend to fail. Moreover, the use of Deng entropy introduces a measure of internal uncertainty within each piece of evidence. This entropy component not only detects the ambiguity of a single evidence source but also enables a relative weighting based on its informational volume. Through the positive and negative impact functions, the model amplifies the effect of credible evidence and suppresses unreliable contributions, creating a dynamic weighting system that adapts to evidence quality rather than treating all sources equally. In addition, the classification step based on GCD enhances robustness by systematically excluding evidence that falls below a statistical credibility threshold. This step contributes significantly to reducing noise and instability in the fusion output, especially in highly conflicting scenarios. The analytical strength of the model thus comes from its ability to simultaneously measure reliability (via similarity), internal uncertainty (via entropy), and contextual impact (via information volume), making it more flexible and durable compared to conventional evidence fusion approaches.

    This paper extends the classical BPA, which focuses solely on the support degree of focal elements, to a neutrosophic BPA with three components: the support degree, the non-support degree, and the uncertain-support degree of each focal element. Furthermore, a similarity measure based on the Jousselme distance is established for quantifying the similarity between pieces of evidence. The Deng entropy is then extended to assess the weight impact of evidence in NESs, leading to the introduction of the neutrosophic belief (Deng) entropy. A novel method is proposed for integrating contradictory evidence by combining the similarity measure of evidence with the belief entropy. A numerical example illustrates the practicality and effectiveness of the method, and the outcomes demonstrate that the new method achieves improved accuracy.
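The similarity measure mentioned above rests on the Jousselme distance [33] between two bodies of evidence. A minimal sketch for classical BPAs is given below (the paper applies the construction to each NBPA component; the similarity form `1 - d` in the note is an illustrative choice):

```python
import math

def jousselme_distance(m1, m2):
    """Jousselme distance d = sqrt(0.5 * (m1-m2)^T D (m1-m2)),
    where D[A,B] = |A ∩ B| / |A ∪ B| (Jaccard index of focal sets).
    Mass functions are dicts: frozenset -> mass."""
    focal = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    diff = [m1.get(a, 0.0) - m2.get(a, 0.0) for a in focal]
    acc = 0.0
    for i, a in enumerate(focal):
        for j, b in enumerate(focal):
            jac = len(a & b) / len(a | b)
            acc += diff[i] * jac * diff[j]
    return math.sqrt(0.5 * acc)
```

A similarity of the form sim(m1, m2) = 1 − d(m1, m2) can then feed the credibility-weighting step: identical bodies of evidence are at distance 0, while fully conflicting singletons are at distance 1.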

    However, like any modeling technique, our method has certain limitations. Its performance may degrade in high-dimensional settings or when the amount of evidence increases substantially, owing to the computational cost of the pairwise similarity calculations and entropy evaluations. Moreover, when evidence is either highly conflicting or uniformly uncertain, the entropy-based discrimination mechanism may lose its effectiveness. While the model performs well in structured decision-making environments, it may encounter challenges in highly dynamic applications, such as continuous sensor data streams or scenarios requiring rapid real-time fusion; in such cases, the time complexity of computing similarity matrices and entropy values may limit the model's practical usability. Additionally, the method assumes a relatively stable frame of discernment and clearly distinguishable sources of evidence, conditions that may not always hold in domains characterized by noisy or rapidly evolving information.

    From an analytical perspective, while the superiority of our model in terms of fusion accuracy is empirically validated, its theoretical benefits stem from the combined use of similarity and entropy, allowing simultaneous evaluation of information content and consistency. This dual perspective ensures that both supportive and hesitant evidence is proportionally represented in the fusion result.

    Future work can explore expanding this framework to handle dynamic data streams and real-time decision-making scenarios, particularly in applications involving complex sensor networks, autonomous systems, and intelligent monitoring. The potential of integrating neutrosophic-based methods with machine learning algorithms also remains a promising area for enhancing data-driven decision systems.

    Ali Köseoğlu: conceptualization, writing-original draft, methodology, writing-review & editing, software; Rıdvan Şahin: conceptualization, writing-original draft, methodology; Ümit Demir: conceptualization, writing-original draft, methodology, writing-review & editing. All authors of this article contributed equally. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare no conflicts of interest.



    [1] Y. Jin, J. Branke, Evolutionary optimization in uncertain environments: a survey, IEEE Trans. Evol. Comput., 9 (2005), 303–317. https://doi.org/10.1109/TEVC.2005.846356
    [2] R. R. Yager, Decision making under measure-based granular uncertainty, Granular Comput., 3 (2018), 345–353. https://doi.org/10.1007/S41066-017-0075-0
    [3] L. A. Zadeh, Fuzzy sets, Inf. Control, 8 (1965), 338–353. https://doi.org/10.1016/S0019-9958(65)90241-X
    [4] K. T. Atanassov, Intuitionistic fuzzy sets, Fuzzy Sets Syst., 20 (1986), 87–96. https://doi.org/10.1016/S0165-0114(86)80034-3
    [5] F. Smarandache, A unifying field in logics: neutrosophic logic, American Research Press, 1999.
    [6] A. P. Dempster, Upper and lower probabilities induced by a multivalued mapping, In: R. R. Yager, L. Liu (Eds.), Classic works of the Dempster–Shafer theory of belief functions, 2008, 57–72. https://doi.org/10.1007/978-3-540-44792-4_3
    [7] G. Shafer, A mathematical theory of evidence, Princeton University Press, 1976. https://doi.org/10.1515/9780691214696
    [8] Y. Deng, D numbers: theory and applications, J. Inf. Comput. Sci., 9 (2012), 2421–2428.
    [9] H. Wang, F. Smarandache, Y. Q. Zhang, R. Sunderraman, Single valued neutrosophic sets, Multispace Multistructure, 4 (2010), 410–413.
    [10] A. Köseoğlu, R. Şahin, M. Merdan, A simplified neutrosophic multiplicative set-based TODIM using water-filling algorithm for the determination of weights, Expert Syst., 37 (2020), e12515. https://doi.org/10.1111/exsy.12515
    [11] A. Köseoğlu, F. Altun, R. Şahin, Aggregation operators of complex fuzzy Z-number sets and their applications in multi-criteria decision making, Complex Intell. Syst., 10 (2024), 6559–6579. https://doi.org/10.1007/S40747-024-01450-y
    [12] R. Şahin, M. Yigider, A multi-criteria neutrosophic group decision making method based TOPSIS for supplier selection, Appl. Math. Inf. Sci., 10 (2016), 1843–1852. https://doi.org/10.18576/AMIS/100525
    [13] A. Köseoğlu, Generalized correlation coefficients of intuitionistic multiplicative sets and their applications to pattern recognition and clustering analysis, J. Exp. Theor. Artif. Intell., 2024. https://doi.org/10.1080/0952813X.2024.2323039
    [14] H. Garg, A new exponential-logarithm-based single-valued neutrosophic set and their applications, Expert Syst. Appl., 238 (2024), 121854. https://doi.org/10.1016/J.ESWA.2023.121854
    [15] J. B. Yang, D. L. Xu, Evidential reasoning rule for evidence combination, Artif. Intell., 205 (2013), 1–29. https://doi.org/10.1016/J.ARTINT.2013.09.003
    [16] C. Fu, J. B. Yang, S. L. Yang, A group evidential reasoning approach based on expert reliability, Eur. J. Oper. Res., 246 (2015), 886–893. https://doi.org/10.1016/J.EJOR.2015.05.042
    [17] V. N. Huynh, T. T. Nguyen, C. A. Le, Adaptively entropy-based weighting classifiers in combination using Dempster–Shafer theory for word sense disambiguation, Comput. Speech Lang., 24 (2010), 461–473. https://doi.org/10.1016/J.CSL.2009.06.003
    [18] O. Kharazmi, J. E. Contreras-Reyes, Deng–Fisher information measure and its extensions: application to Conway's game of life, Chaos Solitons Fract., 174 (2023), 113871. https://doi.org/10.1016/J.CHAOS.2023.113871
    [19] O. Kharazmi, J. E. Contreras-Reyes, Belief inaccuracy information measures and their extensions, Fluctuation Noise Lett., 23 (2024), 2450041. https://doi.org/10.1142/S021947752450041X
    [20] R. R. Yager, On the Dempster–Shafer framework and new combination rules, Inf. Sci., 41 (1987), 93–137. https://doi.org/10.1016/0020-0255(87)90007-7
    [21] D. Dubois, H. Prade, Representation and combination of uncertainty with belief functions and possibility measures, Comput. Intell., 4 (1988), 244–264. https://doi.org/10.1111/J.1467-8640.1988.TB00279.X
    [22] P. Smets, The combination of evidence in the transferable belief model, IEEE Trans. Pattern Anal. Mach. Intell., 12 (1990), 447–458. https://doi.org/10.1109/34.55104
    [23] M. Urbani, G. Gasparini, M. Brunelli, A numerical comparative study of uncertainty measures in the Dempster–Shafer evidence theory, Inf. Sci., 639 (2023), 119027. https://doi.org/10.1016/J.INS.2023.119027
    [24] Q. Zhang, P. Zhang, T. Li, Information fusion for large-scale multi-source data based on the Dempster–Shafer evidence theory, Inf. Fusion, 115 (2025), 102754. https://doi.org/10.1016/J.INFFUS.2024.102754
    [25] M. Brunelli, R. P. Jayasuriya Kuranage, V. N. Huynh, Selection rules for new focal elements in the Dempster–Shafer evidence theory, Inf. Sci., 712 (2025), 122160. https://doi.org/10.1016/J.INS.2025.122160
    [26] Y. Song, X. Wang, J. Zhu, L. Lei, Sensor dynamic reliability evaluation based on evidence theory and intuitionistic fuzzy sets, Appl. Intell., 48 (2018), 3950–3962. https://doi.org/10.1007/S10489-018-1188-0
    [27] Y. Song, X. Wang, L. Lei, A. Xue, Combination of interval-valued belief structures based on intuitionistic fuzzy set, Knowl. Based Syst., 67 (2014), 61–70. https://doi.org/10.1016/J.KNOSYS.2014.06.008
    [28] Y. Li, Y. Deng, Intuitionistic evidence sets, IEEE Access, 7 (2019), 106417–106426. https://doi.org/10.1109/ACCESS.2019.2932763
    [29] Y. Xue, Y. Deng, On the conjunction of possibility measures under intuitionistic evidence sets, J. Ambient Intell. Humanized Comput., 12 (2021), 7827–7836. https://doi.org/10.1007/S12652-020-02508-8
    [30] S. M. Hatefi, M. E. Basiri, J. Tamošaitiene, An evidential model for environmental risk assessment in projects using Dempster–Shafer theory of evidence, Sustainability, 11 (2019), 6329. https://doi.org/10.3390/SU11226329
    [31] M. M. Bappy, S. M. Ali, G. Kabir, S. K. Paul, Supply chain sustainability assessment with Dempster–Shafer evidence theory: implications in cleaner production, J. Cleaner Prod., 237 (2019), 117771. https://doi.org/10.1016/J.JCLEPRO.2019.117771
    [32] J. Desikan, S. K. Singh, A. Jayanthiladevi, S. Singh, B. Yoon, Dempster Shafer-empowered machine learning-based scheme for reducing fire risks in IoT-enabled industrial environments, IEEE Access, 13 (2025), 46546–46567. https://doi.org/10.1109/ACCESS.2025.3550413
    [33] A. L. Jousselme, D. Grenier, É. Bossé, A new distance between two bodies of evidence, Inf. Fusion, 2 (2001), 91–101. https://doi.org/10.1016/S1566-2535(01)00026-4
    [34] W. Jiang, B. Wei, X. Qin, J. Zhan, Y. Tang, Sensor data fusion based on a new conflict measure, Math. Probl. Eng., 2016 (2016), 5769061. https://doi.org/10.1155/2016/5769061
    [35] J. Ye, Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses, Artif. Intell. Med., 63 (2015), 171–179. https://doi.org/10.1016/J.ARTMED.2014.12.007
    [36] A. De Luca, S. Termini, A definition of a nonprobabilistic entropy in the setting of fuzzy sets theory, Read. Fuzzy Sets Intell. Syst., 1993, 197–202. https://doi.org/10.1016/B978-1-4832-1450-4.50020-1
    [37] E. Szmidt, J. Kacprzyk, Entropy for intuitionistic fuzzy sets, Fuzzy Sets Syst., 118 (2001), 467–477. https://doi.org/10.1016/S0165-0114(98)00402-3
    [38] B. K. Tripathy, S. P. Jena, S. K. Ghosh, An intuitionistic fuzzy count and cardinality of intuitionistic fuzzy sets, Malaya J. Mat., 1 (2013), 123–133. https://doi.org/10.26637/mjm104/014
    [39] P. Majumdar, S. Kumar, On similarity and entropy of neutrosophic sets, J. Intell. Fuzzy Syst., 26 (2014), 1245–1252. https://doi.org/10.5555/2596417.2596434
    [40] R. Clausius, The mechanical theory of heat, Macmillan, 1879.
    [41] C. E. Shannon, A mathematical theory of communication, ACM SIGMOBILE Mobile Comput. Commun. Rev., 5 (2001), 3–55. https://doi.org/10.1145/584091.584093
    [42] C. Tsallis, Possible generalization of Boltzmann–Gibbs statistics, J. Stat. Phys., 52 (1988), 479–487. https://doi.org/10.1007/BF01016429
    [43] C. Tsallis, Nonadditive entropy: the concept and its use, Eur. Phys. J. A, 40 (2009), 257–266. https://doi.org/10.1140/EPJA/I2009-10799-0
    [44] Y. Deng, Deng entropy, Chaos Solitons Fract., 91 (2016), 549–553. https://doi.org/10.1016/J.CHAOS.2016.07.014
    [45] D. Zhou, Y. Tang, W. Jiang, An improved belief entropy and its application in decision-making, Complexity, 2017 (2017), 4359195. https://doi.org/10.1155/2017/4359195
    [46] J. Qian, X. Guo, Y. Deng, A novel method for combining conflicting evidences based on information entropy, Appl. Intell., 46 (2017), 876–888. https://doi.org/10.1007/S10489-016-0875-Y
    [47] F. Xiao, B. Qin, A weighted combination method for conflicting evidence in multi-sensor data fusion, Sensors, 18 (2018), 1487. https://doi.org/10.3390/S18051487
    [48] C. K. Murphy, Combining belief functions when evidence conflicts, Decis. Support Syst., 29 (2000), 1–9. https://doi.org/10.1016/S0167-9236(99)00084-6
    [49] D. Yong, S. W. Kang, Z. Z. Fu, L. Qi, Combining belief functions based on distance of evidence, Decis. Support Syst., 38 (2004), 489–493. https://doi.org/10.1016/J.DSS.2004.04.015
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)