Research article

A higher-order numerical scheme for system of two-dimensional nonlinear fractional Volterra integral equations with uniform accuracy

  • We propose a modified block-by-block method for systems of nonlinear fractional-order Volterra integral equations, using quadratic Lagrange interpolation on top of the classical block-by-block method. The core of the method is to divide the domain into a series of subdomains (blocks) and to approximate κ(x,y,s,r,u(s,r)) by piecewise quadratic Lagrange interpolation on each subdomain. The proposed method has uniform accuracy, with convergence order O(h_x^{4-α} + h_y^{4-β}). We give a rigorous error analysis of the method and several numerical examples that verify the theoretical results.

    Citation: Ziqiang Wang, Qin Liu, Junying Cao. A higher-order numerical scheme for system of two-dimensional nonlinear fractional Volterra integral equations with uniform accuracy[J]. AIMS Mathematics, 2023, 8(6): 13096-13122. doi: 10.3934/math.2023661




    Decision-makers increasingly struggle to identify optimal solutions as real-world problems grow more complex. Even so, it is possible to choose the best option among the alternatives. A challenge many firms face is balancing opportunities, objectives, and viewpoint constraints. As a result, individuals and groups must weigh multiple objectives simultaneously while engaging in decision-making (DM), and we therefore need better tools for dealing with such situations. Researchers have contributed a variety of methods to this field of study. Uncertainty has been addressed by several theories, such as the fuzzy set (FS) proposed by Zadeh [1], the intuitionistic FS (IFS) proposed by Atanassov [2], the Pythagorean FS (PFS) proposed by Yager [3] and the NSS proposed by Smarandache [4]. An FS assigns each element of the universe a grade, called the membership degree (MD). Atanassov [2] introduced the IFS, which requires the sum of the MD and the non-membership degree (NMD) to be at most 1; when the MD and NMD sum to more than 1, DM becomes difficult. Yazbek et al. [5] introduced a novel approach to modeling the economic characteristics of an organization using interval-valued complex Pythagorean fuzzy information. PFS logic was developed by Yager [3] as a generalization of IFS characterized by an MD and NMD whose squares sum to at most 1.

    There are many applications based on PFS, as discussed by Akram et al. [6,7,8]. Rahman et al. [9] extended geometric aggregation operators (AOs) based on interval-valued PFS (IVPFS) to the framework of group DM. Peng et al. [10] proposed Pythagorean fuzzy AOs with interval values. As an alternative for group MADM, Rahman et al. [11] proposed Pythagorean interval-valued fuzzy Einstein AOs. The idea of IVPFS for MADM with normal AOs was discussed by Yang et al. [12]. Shami et al. [13] studied the square root FS (SRFS) and its weighted AOs in the context of DM. The NSS was developed by Smarandache [4]. The neutral mind is known as "neutrosophy", and it is this neutrality that distinguishes the NSS from FS and IFS. The NSS has a truth degree (TD), an indeterminacy degree (ID) and a falsehood degree (FD), each lying between 0 and 1. The Pythagorean NSIV set (PNSIVS) was first discussed by Smarandache et al. [14]. A single-valued NSS has been applied to medical diagnostics and context analysis [15]. Ejegwa [16] extended distance measures for IFSs, including the Hamming distance (HD), the Euclidean distance (ED) and the normalized ED (NED), for evaluating MCDM and MADM problems. MADM for Pythagorean NSNIV with AOs was introduced by Palanikumar et al. [17], who obtained distance functions for PNSNIVSs as a generalization of the PNSIVS. Xu et al. [18] discussed regression prediction for fuzzy time series. Yang [19] introduced a class of fuzzy c-numbers clustering procedures for fuzzy data.

    Peng et al. [20] proposed neutrosophic approaches based on MABAC and TOPSIS. Zhang et al. [21] generalized PFS based on TOPSIS to MCDM. A number of practical MADM applications were discussed by Hwang et al. [22]. Jana et al. [23] discussed a generalization of fuzzy soft sets. A fuzzy bipolar MABAC-based MAGDM approach was presented by Jana [24]. An approach for robust linear MCDM with bipolar fuzzy soft AOs was developed by Jana et al. [25], and Pythagorean fuzzy Dombi AOs were introduced by Jana et al. [26]. Ullah et al. [27] discussed complex PFS with practical applications. AOs based on trapezoidal neutrosophic MADM were proposed by Jana et al. [28]. Jana et al. [29] developed the neutrosophic Dombi power AO, and Jana et al. [30] presented an MCDM approach using single-valued trigonometric numbers (SVTrNs).

    Recently, hesitant FSs with real-life applications were discussed in [31,32]. Lu et al. [33,34] discussed consensus progress for DM in social networks with incomplete probabilistic hesitant fuzzy and hesitant multiplicative preference relations. Yazdi et al. [35] applied artificial intelligence to DM for the selection of maintenance strategies. Rojek et al. [36] discussed artificial intelligence for supervising machine failures and supporting their repair. Huang et al. [37] discussed design alternative assessment and selection via a Z-cloud rough number-based BWM-MABAC model. Xiao [38] introduced q-rung orthopair fuzzy DM with a new score function and the best-worst method for manufacturer selection. Huang et al. [39] discussed failure mode and effect analysis using T-spherical fuzzy maximizing deviation and combined comparison solution methods. Mahmood et al. [40] discussed prioritized Muirhead mean AOs under complex single-valued neutrosophic settings and their application in MADM. The purpose of this work is to expand the definition of SRNSNIV sets. We obtain AOs for SRNSNIVSs, apply these operators to DM problems and develop a ranking based on them.

    (1) New ED and HD measures are introduced for SRNSNIVSs.

    (2) MADM with SRNSNIVN aggregation operators is illustrated by an example using the new definitions.

    (3) Positive and negative ideal values are determined based on the SRNSNIVWA, SRNSNIVWG, GSRNSNIVWA and GSRNSNIVWG operators.

    (4) To arrive at a result, the parameter Δ is used to make a decision.

    The paper is organized into seven sections. Section 2 briefly explains the related concepts. MADM using square root NSNIV numbers (SRNSNIVNs) is discussed in Section 3. Section 4 treats SRNSNIVNs based on various distance measures. A few AOs are discussed in Section 5. The SRNSNIV set is applied to MADM in Section 6, which includes a numerical example, analysis and algorithm. Section 7 contains the conclusion.

    In this section, we briefly review some fundamental terms needed in the later sections.

    Definition 2.1. [3] Let \Xi be the universe. A PFS in \Xi is \mho = \{\langle \mu, \Theta^{\mathscr{T}}(\mu), \Theta^{\mathscr{F}}(\mu)\rangle \,|\, \mu \in \Xi\}, where \Theta^{\mathscr{T}}: \Xi \rightarrow [0,1] and \Theta^{\mathscr{F}}: \Xi \rightarrow [0,1] denote the MD and NMD of \mu \in \Xi in \mho, respectively, and 0 \leq (\Theta^{\mathscr{T}}(\mu))^{2} + (\Theta^{\mathscr{F}}(\mu))^{2} \leq 1. Here \mho = \langle \Theta^{\mathscr{T}}, \Theta^{\mathscr{F}}\rangle represents a Pythagorean fuzzy number (PFN).

    Definition 2.2. [13] The SRFS in \Xi is \mho = \{\langle \mu, \Theta^{\mathscr{T}}(\mu), \Theta^{\mathscr{F}}(\mu)\rangle \,|\, \mu \in \Xi\}, where \Theta^{\mathscr{T}}: \Xi \rightarrow [0,1] and \Theta^{\mathscr{F}}: \Xi \rightarrow [0,1] denote the MD and NMD of \mu \in \Xi in \mho, respectively, and 0 \leq (\Theta^{\mathscr{T}}(\mu))^{2} + \Theta^{\mathscr{F}}(\mu) \leq 1. Here \mho = \langle \Theta^{\mathscr{T}}, \Theta^{\mathscr{F}}\rangle represents a square root fuzzy number (SRFN).

    Definition 2.3. [10] The PIVFS in \Xi is \widetilde{\mho} = \{\langle \mu, \widetilde{\Theta}^{\mathscr{T}}(\mu), \widetilde{\Theta}^{\mathscr{F}}(\mu)\rangle \,|\, \mu \in \Xi\}, where \widetilde{\Theta}^{\mathscr{T}}: \Xi \rightarrow \mathrm{Int}([0,1]) and \widetilde{\Theta}^{\mathscr{F}}: \Xi \rightarrow \mathrm{Int}([0,1]) denote the MD and NMD of \mu \in \Xi in \widetilde{\mho}, respectively, and 0 \leq (\Theta^{\mathscr{TU}}(\mu))^{2} + (\Theta^{\mathscr{FU}}(\mu))^{2} \leq 1. Here \widetilde{\mho} = \langle [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}]\rangle represents a Pythagorean interval-valued fuzzy number (PIVFN).

    Definition 2.4. [4] The NSS in \Xi is \mho = \{\langle \mu, \Theta^{\mathscr{T}}(\mu), \Theta^{\mathscr{I}}(\mu), \Theta^{\mathscr{F}}(\mu)\rangle \,|\, \mu \in \Xi\}, where \Theta^{\mathscr{T}}: \Xi \rightarrow [0,1], \Theta^{\mathscr{I}}: \Xi \rightarrow [0,1] and \Theta^{\mathscr{F}}: \Xi \rightarrow [0,1] denote the TD, ID and FD of \mu \in \Xi in \mho, respectively, and 0 \leq \Theta^{\mathscr{T}}(\mu) + \Theta^{\mathscr{I}}(\mu) + \Theta^{\mathscr{F}}(\mu) \leq 3. Here \mho = \langle \Theta^{\mathscr{T}}, \Theta^{\mathscr{I}}, \Theta^{\mathscr{F}}\rangle represents a neutrosophic number (NSN).

    Definition 2.5. [14] The PNSS in \Xi is \mho = \{\langle \mu, \Theta^{\mathscr{T}}(\mu), \Theta^{\mathscr{I}}(\mu), \Theta^{\mathscr{F}}(\mu)\rangle \,|\, \mu \in \Xi\}, where \Theta^{\mathscr{T}}: \Xi \rightarrow [0,1], \Theta^{\mathscr{I}}: \Xi \rightarrow [0,1] and \Theta^{\mathscr{F}}: \Xi \rightarrow [0,1] denote the TD, ID and FD of \mu \in \Xi in \mho, respectively, and 0 \leq (\Theta^{\mathscr{T}}(\mu))^{2} + (\Theta^{\mathscr{I}}(\mu))^{2} + (\Theta^{\mathscr{F}}(\mu))^{2} \leq 2. Here \mho = \langle \Theta^{\mathscr{T}}, \Theta^{\mathscr{I}}, \Theta^{\mathscr{F}}\rangle represents a Pythagorean neutrosophic number (PNSN).

    Definition 2.6. [10] Let \widetilde{\mho} = \langle [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}]\rangle, \widetilde{\mho}_{1} = \langle [\Theta^{\mathscr{TL}}_{1}, \Theta^{\mathscr{TU}}_{1}], [\Theta^{\mathscr{FL}}_{1}, \Theta^{\mathscr{FU}}_{1}]\rangle and \widetilde{\mho}_{2} = \langle [\Theta^{\mathscr{TL}}_{2}, \Theta^{\mathscr{TU}}_{2}], [\Theta^{\mathscr{FL}}_{2}, \Theta^{\mathscr{FU}}_{2}]\rangle be PIVFNs, and \Delta > 0. Then,

    (1) \widetilde{\mho}_{1} \oplus \widetilde{\mho}_{2} = \Big\langle \Big[\sqrt{(\Theta^{\mathscr{TL}}_{1})^{2} + (\Theta^{\mathscr{TL}}_{2})^{2} - (\Theta^{\mathscr{TL}}_{1})^{2}(\Theta^{\mathscr{TL}}_{2})^{2}}, \sqrt{(\Theta^{\mathscr{TU}}_{1})^{2} + (\Theta^{\mathscr{TU}}_{2})^{2} - (\Theta^{\mathscr{TU}}_{1})^{2}(\Theta^{\mathscr{TU}}_{2})^{2}}\Big], \big[\Theta^{\mathscr{FL}}_{1}\Theta^{\mathscr{FL}}_{2}, \Theta^{\mathscr{FU}}_{1}\Theta^{\mathscr{FU}}_{2}\big]\Big\rangle,

    (2) \widetilde{\mho}_{1} \otimes \widetilde{\mho}_{2} = \Big\langle \big[\Theta^{\mathscr{TL}}_{1}\Theta^{\mathscr{TL}}_{2}, \Theta^{\mathscr{TU}}_{1}\Theta^{\mathscr{TU}}_{2}\big], \Big[\sqrt{(\Theta^{\mathscr{FL}}_{1})^{2} + (\Theta^{\mathscr{FL}}_{2})^{2} - (\Theta^{\mathscr{FL}}_{1})^{2}(\Theta^{\mathscr{FL}}_{2})^{2}}, \sqrt{(\Theta^{\mathscr{FU}}_{1})^{2} + (\Theta^{\mathscr{FU}}_{2})^{2} - (\Theta^{\mathscr{FU}}_{1})^{2}(\Theta^{\mathscr{FU}}_{2})^{2}}\Big]\Big\rangle,

    (3) \Delta \widetilde{\mho} = \Big\langle \Big[\sqrt{1-(1-(\Theta^{\mathscr{TL}})^{2})^{\Delta}}, \sqrt{1-(1-(\Theta^{\mathscr{TU}})^{2})^{\Delta}}\Big], \big[(\Theta^{\mathscr{FL}})^{\Delta}, (\Theta^{\mathscr{FU}})^{\Delta}\big]\Big\rangle,

    (4) \widetilde{\mho}^{\Delta} = \Big\langle \big[(\Theta^{\mathscr{TL}})^{\Delta}, (\Theta^{\mathscr{TU}})^{\Delta}\big], \Big[\sqrt{1-(1-(\Theta^{\mathscr{FL}})^{2})^{\Delta}}, \sqrt{1-(1-(\Theta^{\mathscr{FU}})^{2})^{\Delta}}\Big]\Big\rangle.

    Definition 2.7. [10] For any PIVFN \widetilde{\mho} = \langle [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}]\rangle, the score function is S(\widetilde{\mho}) = \frac{1}{2}\big((\Theta^{\mathscr{TL}})^{2} + (\Theta^{\mathscr{TU}})^{2} - (\Theta^{\mathscr{FL}})^{2} - (\Theta^{\mathscr{FU}})^{2}\big), with S(\widetilde{\mho}) \in [-1, 1], and the accuracy function is H(\widetilde{\mho}) = \frac{1}{2}\big((\Theta^{\mathscr{TL}})^{2} + (\Theta^{\mathscr{TU}})^{2} + (\Theta^{\mathscr{FL}})^{2} + (\Theta^{\mathscr{FU}})^{2}\big), with H(\widetilde{\mho}) \in [0, 1].

    Definition 2.8. For any SRIVFN \widetilde{\mho} = \langle [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}]\rangle, the score function is S(\widetilde{\mho}) = \frac{1}{2}\big((\Theta^{\mathscr{TL}})^{2} + (\Theta^{\mathscr{TU}})^{2} - \Theta^{\mathscr{FL}} - \Theta^{\mathscr{FU}}\big), with S(\widetilde{\mho}) \in [-1, 1], and the accuracy function is H(\widetilde{\mho}) = \frac{1}{2}\big((\Theta^{\mathscr{TL}})^{2} + (\Theta^{\mathscr{TU}})^{2} + \Theta^{\mathscr{FL}} + \Theta^{\mathscr{FU}}\big), with H(\widetilde{\mho}) \in [0, 1].
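As a small numeric illustration, the score and accuracy above can be computed directly. This is an illustrative sketch (the function names are ours, not the paper's), assuming the reconstruction S = ((\Theta^{\mathscr{TL}})^{2} + (\Theta^{\mathscr{TU}})^{2} - \Theta^{\mathscr{FL}} - \Theta^{\mathscr{FU}})/2 and H = ((\Theta^{\mathscr{TL}})^{2} + (\Theta^{\mathscr{TU}})^{2} + \Theta^{\mathscr{FL}} + \Theta^{\mathscr{FU}})/2 of the extracted formulas, whose minus signs were garbled.

```python
# Illustrative sketch of Definition 2.8 (helper names are ours, not the paper's).
def sriv_score(tl, tu, fl, fu):
    """Score S of an SRIVFN <[tl, tu], [fl, fu]>: truth grades squared, falsity linear."""
    return 0.5 * (tl**2 + tu**2 - fl - fu)

def sriv_accuracy(tl, tu, fl, fu):
    """Accuracy H of an SRIVFN <[tl, tu], [fl, fu]>."""
    return 0.5 * (tl**2 + tu**2 + fl + fu)
```

A higher score indicates a "better" number when ranking alternatives; accuracy breaks ties between equal scores.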

    Definition 2.9. [19] The fuzzy number M(\mu) = e^{-\left(\frac{\mu-\varrho}{\tau}\right)^{2}} (\tau > 0) is called a normal fuzzy number (NFN), written M = (\varrho, \tau) with \varrho \in \mathbb{R}, and \widetilde{N} denotes the set of all NFNs (NFNS).

    Definition 2.10. [18] Let \widetilde{N} be the NFNS, M_{1} = (\varrho_{1}, \tau_{1}) \in \widetilde{N} and M_{2} = (\varrho_{2}, \tau_{2}) \in \widetilde{N} (\tau_{1}, \tau_{2} > 0). Then D(M_{1}, M_{2}) = \sqrt{(\varrho_{1}-\varrho_{2})^{2} + \frac{1}{2}(\tau_{1}-\tau_{2})^{2}}.
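A minimal numeric sketch of this distance, assuming the reconstruction D(M_{1}, M_{2}) = \sqrt{(\varrho_{1}-\varrho_{2})^{2} + \frac{1}{2}(\tau_{1}-\tau_{2})^{2}} (the radical and minus signs were lost in the extracted formula; the function name is ours):

```python
import math

# Sketch of Definition 2.10 under our reconstruction of the garbled formula:
# D(M1, M2) = sqrt((rho1 - rho2)^2 + (1/2) * (tau1 - tau2)^2), with tau1, tau2 > 0.
def nfn_distance(m1, m2):
    (rho1, tau1), (rho2, tau2) = m1, m2
    return math.sqrt((rho1 - rho2)**2 + 0.5 * (tau1 - tau2)**2)
```

The deviation term is down-weighted by 1/2, so two NFNs with equal means but different spreads are closer than two with equal spreads but the same difference in means.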

    In this article, we define the SRNSNIVN and its operations in connection with square root NSIVNs (SRNSIVNs) and NFNs.

    Definition 3.1. The SRNSIV set \widetilde{\mho} in \Xi is \widetilde{\mho} = \{\langle \mu, \widetilde{\Theta}^{\mathscr{T}}(\mu), \widetilde{\Theta}^{\mathscr{I}}(\mu), \widetilde{\Theta}^{\mathscr{F}}(\mu)\rangle \,|\, \mu \in \Xi\}, where \widetilde{\Theta}^{\mathscr{T}}: \Xi \rightarrow \mathrm{Int}([0,1]), \widetilde{\Theta}^{\mathscr{I}}: \Xi \rightarrow \mathrm{Int}([0,1]) and \widetilde{\Theta}^{\mathscr{F}}: \Xi \rightarrow \mathrm{Int}([0,1]) denote the TD, ID and FD of \mu \in \Xi in \widetilde{\mho}, respectively, and 0 \leq (\widetilde{\Theta}^{\mathscr{T}}(\mu))^{2} + \widetilde{\Theta}^{\mathscr{I}}(\mu) + \widetilde{\Theta}^{\mathscr{F}}(\mu) \leq 2, which implies 0 \leq (\Theta^{\mathscr{TU}}(\mu))^{2} + \Theta^{\mathscr{IU}}(\mu) + \Theta^{\mathscr{FU}}(\mu) \leq 2. Here \widetilde{\mho} = \langle [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{IL}}, \Theta^{\mathscr{IU}}], [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}]\rangle is called a square root neutrosophic interval-valued number (SRNSIVN).

    Definition 3.2. For any SRNSNIVN \widetilde{\mho} = \langle (\varrho, \tau); [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{IL}}, \Theta^{\mathscr{IU}}], [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}]\rangle, the score function is S(\widetilde{\mho}) = \frac{\varrho}{2}\Big(\frac{(\Theta^{\mathscr{TL}})^{2} + (\Theta^{\mathscr{TU}})^{2}}{2} - \frac{\Theta^{\mathscr{IL}} + \Theta^{\mathscr{IU}}}{2} + 1 - \frac{\Theta^{\mathscr{FL}} + \Theta^{\mathscr{FU}}}{2}\Big), where S(\widetilde{\mho}) \in [-1, 1].
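This score is the quantity used later to rank aggregated alternatives. A sketch under our reconstruction of the garbled formula (the extracted version lost its minus signs and fraction bars; the function name is ours):

```python
# Sketch of the Definition 3.2 score under our reconstruction:
# S = (rho/2) * ( (tl^2 + tu^2)/2 - (il + iu)/2 + 1 - (fl + fu)/2 ).
def srnsnivn_score(rho, tl, tu, il, iu, fl, fu):
    return (rho / 2.0) * ((tl**2 + tu**2) / 2.0
                          - (il + iu) / 2.0
                          + 1.0 - (fl + fu) / 2.0)
```

Higher truth grades raise the score, while indeterminacy and falsity lower it; the normal-mean rho scales the whole expression.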

    Definition 3.3. Let \widetilde{\mho} = \langle (\varrho, \tau); [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{IL}}, \Theta^{\mathscr{IU}}], [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}]\rangle be an SRNSNIVN. Its TD, ID and FD are defined as [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}] = \big[\Theta^{\mathscr{TL}} e^{-\left(\frac{\mu-\varrho}{\tau}\right)^{2}}, \Theta^{\mathscr{TU}} e^{-\left(\frac{\mu-\varrho}{\tau}\right)^{2}}\big], [\Theta^{\mathscr{IL}}, \Theta^{\mathscr{IU}}] = \big[\Theta^{\mathscr{IL}} e^{-\left(\frac{\mu-\varrho}{\tau}\right)^{2}}, \Theta^{\mathscr{IU}} e^{-\left(\frac{\mu-\varrho}{\tau}\right)^{2}}\big] and [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}] = \big[1-(1-\Theta^{\mathscr{FL}}) e^{-\left(\frac{\mu-\varrho}{\tau}\right)^{2}}, 1-(1-\Theta^{\mathscr{FU}}) e^{-\left(\frac{\mu-\varrho}{\tau}\right)^{2}}\big], \mu \in X, respectively, where X is a non-empty set, [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{IL}}, \Theta^{\mathscr{IU}}], [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}] \subseteq [0,1], 0 \leq (\Theta^{\mathscr{TU}}(\mu))^{2} + \Theta^{\mathscr{IU}}(\mu) + \Theta^{\mathscr{FU}}(\mu) \leq 2, and (\varrho, \tau) \in \widetilde{N}.

    Definition 3.4. Let \widetilde{\mho} = \langle (\varrho, \tau); [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{IL}}, \Theta^{\mathscr{IU}}], [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}]\rangle, \widetilde{\mho}_{1} = \langle (\varrho_{1}, \tau_{1}); [\Theta^{\mathscr{TL}}_{1}, \Theta^{\mathscr{TU}}_{1}], [\Theta^{\mathscr{IL}}_{1}, \Theta^{\mathscr{IU}}_{1}], [\Theta^{\mathscr{FL}}_{1}, \Theta^{\mathscr{FU}}_{1}]\rangle and \widetilde{\mho}_{2} = \langle (\varrho_{2}, \tau_{2}); [\Theta^{\mathscr{TL}}_{2}, \Theta^{\mathscr{TU}}_{2}], [\Theta^{\mathscr{IL}}_{2}, \Theta^{\mathscr{IU}}_{2}], [\Theta^{\mathscr{FL}}_{2}, \Theta^{\mathscr{FU}}_{2}]\rangle be any three SRNSNIVNs, and \Delta > 0. Then,

    (1) \widetilde{\mho}_{1} \oplus \widetilde{\mho}_{2} = \Big\langle (\varrho_{1}+\varrho_{2}, \tau_{1}+\tau_{2}); \Big[\Big(\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{1}} + \sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{2}} - \sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{1}}\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{2}}\Big)^{2\Delta}, \Big(\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{1}} + \sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{2}} - \sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{1}}\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{2}}\Big)^{2\Delta}\Big], \Big[\Big(\sqrt[\Delta]{\Theta^{\mathscr{IL}}_{1}} + \sqrt[\Delta]{\Theta^{\mathscr{IL}}_{2}} - \sqrt[\Delta]{\Theta^{\mathscr{IL}}_{1}}\sqrt[\Delta]{\Theta^{\mathscr{IL}}_{2}}\Big)^{\Delta}, \Big(\sqrt[\Delta]{\Theta^{\mathscr{IU}}_{1}} + \sqrt[\Delta]{\Theta^{\mathscr{IU}}_{2}} - \sqrt[\Delta]{\Theta^{\mathscr{IU}}_{1}}\sqrt[\Delta]{\Theta^{\mathscr{IU}}_{2}}\Big)^{\Delta}\Big], \big[\Theta^{\mathscr{FL}}_{1}\Theta^{\mathscr{FL}}_{2}, \Theta^{\mathscr{FU}}_{1}\Theta^{\mathscr{FU}}_{2}\big]\Big\rangle,

    (2) \widetilde{\mho}_{1} \otimes \widetilde{\mho}_{2} = \Big\langle (\varrho_{1}\varrho_{2}, \tau_{1}\tau_{2}); \big[\Theta^{\mathscr{TL}}_{1}\Theta^{\mathscr{TL}}_{2}, \Theta^{\mathscr{TU}}_{1}\Theta^{\mathscr{TU}}_{2}\big], \Big[\Big(\sqrt[\Delta]{\Theta^{\mathscr{IL}}_{1}} + \sqrt[\Delta]{\Theta^{\mathscr{IL}}_{2}} - \sqrt[\Delta]{\Theta^{\mathscr{IL}}_{1}}\sqrt[\Delta]{\Theta^{\mathscr{IL}}_{2}}\Big)^{\Delta}, \Big(\sqrt[\Delta]{\Theta^{\mathscr{IU}}_{1}} + \sqrt[\Delta]{\Theta^{\mathscr{IU}}_{2}} - \sqrt[\Delta]{\Theta^{\mathscr{IU}}_{1}}\sqrt[\Delta]{\Theta^{\mathscr{IU}}_{2}}\Big)^{\Delta}\Big], \Big[\Big(\sqrt[2\Delta]{\Theta^{\mathscr{FL}}_{1}} + \sqrt[2\Delta]{\Theta^{\mathscr{FL}}_{2}} - \sqrt[2\Delta]{\Theta^{\mathscr{FL}}_{1}}\sqrt[2\Delta]{\Theta^{\mathscr{FL}}_{2}}\Big)^{2\Delta}, \Big(\sqrt[2\Delta]{\Theta^{\mathscr{FU}}_{1}} + \sqrt[2\Delta]{\Theta^{\mathscr{FU}}_{2}} - \sqrt[2\Delta]{\Theta^{\mathscr{FU}}_{1}}\sqrt[2\Delta]{\Theta^{\mathscr{FU}}_{2}}\Big)^{2\Delta}\Big]\Big\rangle,

    (3) \Delta \widetilde{\mho} = \Big\langle (\Delta\varrho, \Delta\tau); \Big[\Big(1-\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}}\big)^{\Delta}\Big)^{2\Delta}, \Big(1-\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}}\big)^{\Delta}\Big)^{2\Delta}\Big], \Big[\Big(1-\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IL}}}\big)^{\Delta}\Big)^{\Delta}, \Big(1-\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IU}}}\big)^{\Delta}\Big)^{\Delta}\Big], \big[(\Theta^{\mathscr{FL}})^{\Delta}, (\Theta^{\mathscr{FU}})^{\Delta}\big]\Big\rangle,

    (4) \widetilde{\mho}^{\Delta} = \Big\langle (\varrho^{\Delta}, \tau^{\Delta}); \big[(\Theta^{\mathscr{TL}})^{\Delta}, (\Theta^{\mathscr{TU}})^{\Delta}\big], \Big[\Big(1-\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IL}}}\big)^{\Delta}\Big)^{\Delta}, \Big(1-\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IU}}}\big)^{\Delta}\Big)^{\Delta}\Big], \Big[\Big(1-\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{FL}}}\big)^{\Delta}\Big)^{2\Delta}, \Big(1-\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{FU}}}\big)^{\Delta}\Big)^{2\Delta}\Big]\Big\rangle.
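The \oplus operation above can be sketched numerically. This is a minimal illustration under our reconstruction of the garbled operational laws (names and the tuple encoding are ours): the normal parts add, truth and indeterminacy grades combine through a probabilistic-sum on their 2\Delta-th and \Delta-th roots, and falsity grades multiply.

```python
# Sketch of the oplus operation of Definition 3.4 under our reconstruction.
# An SRNSNIVN is encoded as (rho, tau, tl, tu, il, iu, fl, fu); delta > 0.
def srnsniv_add(a, b, delta=1.0):
    r = 1 / (2 * delta)   # 2*delta-th root for truth grades
    s = 1 / delta         # delta-th root for indeterminacy grades

    def t(x, y):  # (x^r + y^r - x^r * y^r)^(2*delta)
        return (x**r + y**r - x**r * y**r) ** (2 * delta)

    def i(x, y):  # (x^s + y^s - x^s * y^s)^delta
        return (x**s + y**s - x**s * y**s) ** delta

    return (a[0] + b[0], a[1] + b[1],
            t(a[2], b[2]), t(a[3], b[3]),
            i(a[4], b[4]), i(a[5], b[5]),
            a[6] * b[6], a[7] * b[7])
```

Note that the element with zero truth/indeterminacy grades, unit falsity grades and zero normal part acts as an additive identity under these laws.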

    In this section, the ED and HD measures for SRNSNIVNs are introduced, along with some of their mathematical properties.

    Definition 4.1. For SRNSNIVNs \widetilde{\mho}_{1} = \langle (\varrho_{1}, \tau_{1}); [\Theta^{\mathscr{TL}}_{1}, \Theta^{\mathscr{TU}}_{1}], [\Theta^{\mathscr{IL}}_{1}, \Theta^{\mathscr{IU}}_{1}], [\Theta^{\mathscr{FL}}_{1}, \Theta^{\mathscr{FU}}_{1}]\rangle and \widetilde{\mho}_{2} = \langle (\varrho_{2}, \tau_{2}); [\Theta^{\mathscr{TL}}_{2}, \Theta^{\mathscr{TU}}_{2}], [\Theta^{\mathscr{IL}}_{2}, \Theta^{\mathscr{IU}}_{2}], [\Theta^{\mathscr{FL}}_{2}, \Theta^{\mathscr{FU}}_{2}]\rangle, write

    \Upsilon_{k} = \frac{1+(\Theta^{\mathscr{TL}}_{k})^{2}-\Theta^{\mathscr{IL}}_{k}-\Theta^{\mathscr{FL}}_{k} + 1+(\Theta^{\mathscr{TU}}_{k})^{2}-\Theta^{\mathscr{IU}}_{k}-\Theta^{\mathscr{FU}}_{k}}{4}, \quad k = 1, 2.

    Then

    D_{E}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) = \frac{1}{2}\sqrt{\big(\Upsilon_{1}\varrho_{1}-\Upsilon_{2}\varrho_{2}\big)^{2} + \frac{1}{2}\big(\Upsilon_{1}\tau_{1}-\Upsilon_{2}\tau_{2}\big)^{2}}

    denotes the ED between \widetilde{\mho}_{1} and \widetilde{\mho}_{2}. Also,

    D_{H}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) = \frac{1}{2}\Big(\big|\Upsilon_{1}\varrho_{1}-\Upsilon_{2}\varrho_{2}\big| + \frac{1}{2}\big|\Upsilon_{1}\tau_{1}-\Upsilon_{2}\tau_{2}\big|\Big)

    denotes the HD between \widetilde{\mho}_{1} and \widetilde{\mho}_{2}.
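A numeric sketch of these two distances, under our reconstruction of the garbled formulas (names and the tuple encoding are ours): each number is first collapsed to a scalar weight \Upsilon from its interval grades, and the distances then compare the weighted normal parts.

```python
import math

# Sketch of Definition 4.1 under our reconstruction. An SRNSNIVN is encoded as
# (rho, tau, tl, tu, il, iu, fl, fu); Upsilon collapses the interval grades.
def upsilon(tl, tu, il, iu, fl, fu):
    return (1 + tl**2 - il - fl + 1 + tu**2 - iu - fu) / 4.0

def euclidean_distance(a, b):
    ya, yb = upsilon(*a[2:]), upsilon(*b[2:])
    dr = ya * a[0] - yb * b[0]      # weighted-mean difference
    dt = ya * a[1] - yb * b[1]      # weighted-deviation difference
    return 0.5 * math.sqrt(dr**2 + 0.5 * dt**2)

def hamming_distance(a, b):
    ya, yb = upsilon(*a[2:]), upsilon(*b[2:])
    return 0.5 * (abs(ya * a[0] - yb * b[0]) + 0.5 * abs(ya * a[1] - yb * b[1]))
```

Both measures vanish for identical arguments, are symmetric, and satisfy the triangle inequality, matching Theorem 4.1 and Corollary 4.1.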

    Theorem 4.1. For any three SRNSNIVNs \widetilde{\mho}_{1} = \langle (\varrho_{1}, \tau_{1}); [\Theta^{\mathscr{TL}}_{1}, \Theta^{\mathscr{TU}}_{1}], [\Theta^{\mathscr{IL}}_{1}, \Theta^{\mathscr{IU}}_{1}], [\Theta^{\mathscr{FL}}_{1}, \Theta^{\mathscr{FU}}_{1}]\rangle, \widetilde{\mho}_{2} = \langle (\varrho_{2}, \tau_{2}); [\Theta^{\mathscr{TL}}_{2}, \Theta^{\mathscr{TU}}_{2}], [\Theta^{\mathscr{IL}}_{2}, \Theta^{\mathscr{IU}}_{2}], [\Theta^{\mathscr{FL}}_{2}, \Theta^{\mathscr{FU}}_{2}]\rangle and \widetilde{\mho}_{3} = \langle (\varrho_{3}, \tau_{3}); [\Theta^{\mathscr{TL}}_{3}, \Theta^{\mathscr{TU}}_{3}], [\Theta^{\mathscr{IL}}_{3}, \Theta^{\mathscr{IU}}_{3}], [\Theta^{\mathscr{FL}}_{3}, \Theta^{\mathscr{FU}}_{3}]\rangle, D_{E} satisfies the following properties:

    (1) D_{E}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) = 0 iff \widetilde{\mho}_{1} = \widetilde{\mho}_{2}.

    (2) D_{E}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) = D_{E}(\widetilde{\mho}_{2}, \widetilde{\mho}_{1}).

    (3) D_{E}(\widetilde{\mho}_{1}, \widetilde{\mho}_{3}) \leq D_{E}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) + D_{E}(\widetilde{\mho}_{2}, \widetilde{\mho}_{3}).

    Proof. Properties (1) and (2) follow directly from Definition 4.1. For (3), set

    \Upsilon_{k} = \frac{1+(\Theta^{\mathscr{TL}}_{k})^{2}-\Theta^{\mathscr{IL}}_{k}-\Theta^{\mathscr{FL}}_{k} + 1+(\Theta^{\mathscr{TU}}_{k})^{2}-\Theta^{\mathscr{IU}}_{k}-\Theta^{\mathscr{FU}}_{k}}{4}, \quad k = 1, 2, 3,

    and write A_{k} = \Upsilon_{k}\varrho_{k}, B_{k} = \Upsilon_{k}\tau_{k}. Then

    \big(D_{E}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) + D_{E}(\widetilde{\mho}_{2}, \widetilde{\mho}_{3})\big)^{2} = \frac{1}{4}\Big((A_{1}-A_{2})^{2} + \frac{1}{2}(B_{1}-B_{2})^{2}\Big) + \frac{1}{4}\Big((A_{2}-A_{3})^{2} + \frac{1}{2}(B_{2}-B_{3})^{2}\Big) + \frac{1}{2}\sqrt{(A_{1}-A_{2})^{2} + \frac{1}{2}(B_{1}-B_{2})^{2}}\sqrt{(A_{2}-A_{3})^{2} + \frac{1}{2}(B_{2}-B_{3})^{2}}.

    By the Cauchy-Schwarz inequality,

    \sqrt{(A_{1}-A_{2})^{2} + \frac{1}{2}(B_{1}-B_{2})^{2}}\,\sqrt{(A_{2}-A_{3})^{2} + \frac{1}{2}(B_{2}-B_{3})^{2}} \geq (A_{1}-A_{2})(A_{2}-A_{3}) + \frac{1}{2}(B_{1}-B_{2})(B_{2}-B_{3}).

    Hence

    \big(D_{E}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) + D_{E}(\widetilde{\mho}_{2}, \widetilde{\mho}_{3})\big)^{2} \geq \frac{1}{4}\big((A_{1}-A_{2}) + (A_{2}-A_{3})\big)^{2} + \frac{1}{8}\big((B_{1}-B_{2}) + (B_{2}-B_{3})\big)^{2} = \frac{1}{4}\Big((A_{1}-A_{3})^{2} + \frac{1}{2}(B_{1}-B_{3})^{2}\Big) = \big(D_{E}(\widetilde{\mho}_{1}, \widetilde{\mho}_{3})\big)^{2}.

    Taking square roots gives (3).

    Corollary 4.1. For any three SRNSNIVNs \widetilde{\mho}_{1}, \widetilde{\mho}_{2} and \widetilde{\mho}_{3} as in Theorem 4.1:

    (1) D_{H}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) = 0 iff \widetilde{\mho}_{1} = \widetilde{\mho}_{2}.

    (2) D_{H}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) = D_{H}(\widetilde{\mho}_{2}, \widetilde{\mho}_{1}).

    (3) D_{H}(\widetilde{\mho}_{1}, \widetilde{\mho}_{3}) \leq D_{H}(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) + D_{H}(\widetilde{\mho}_{2}, \widetilde{\mho}_{3}).

    Here we introduce the new SRNSNIVWA, SRNSNIVWG, GSRNSNIVWA and GSRNSNIVWG operators.

    Definition 5.1. Let \widetilde{\mho}_{i} = \langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}]\rangle (i = 1, 2, ..., n) be a family of SRNSNIVNs and let W = (\varepsilon_{1}, \varepsilon_{2}, ..., \varepsilon_{n}) be the weight vector of the \widetilde{\mho}_{i}, with \varepsilon_{i} \geq 0 and \sum^{n}_{i=1}\varepsilon_{i} = 1. Then SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \oplus^{n}_{i=1}\varepsilon_{i}\widetilde{\mho}_{i}.

    Theorem 5.1. Let \widetilde{\mho}_{i} = \langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}]\rangle be a family of SRNSNIVNs. Then

    SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \Big\langle \Big(\sum^{n}_{i=1}\varepsilon_{i}\varrho_{i}, \sum^{n}_{i=1}\varepsilon_{i}\tau_{i}\Big); \Big[\Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{i}}\big)^{\varepsilon_{i}}\Big)^{2\Delta}, \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{i}}\big)^{\varepsilon_{i}}\Big)^{2\Delta}\Big], \Big[\Big(1-\prod^{n}_{i=1}\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IL}}_{i}}\big)^{\varepsilon_{i}}\Big)^{\Delta}, \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IU}}_{i}}\big)^{\varepsilon_{i}}\Big)^{\Delta}\Big], \Big[\prod^{n}_{i=1}(\Theta^{\mathscr{FL}}_{i})^{\varepsilon_{i}}, \prod^{n}_{i=1}(\Theta^{\mathscr{FU}}_{i})^{\varepsilon_{i}}\Big]\Big\rangle.
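The closed form of Theorem 5.1 can be sketched numerically. This is an illustrative implementation under our reconstruction of the garbled formulas (names, the tuple encoding and the delta keyword are ours), directly transcribing each component:

```python
import math

# Sketch of the Theorem 5.1 closed form. Each SRNSNIVN is encoded as
# (rho, tau, tl, tu, il, iu, fl, fu); w are weights summing to 1; delta > 0.
def srnsnivwa(nums, w, delta=1.0):
    rho = sum(wi * n[0] for wi, n in zip(w, nums))
    tau = sum(wi * n[1] for wi, n in zip(w, nums))

    def truth(k):   # (1 - prod(1 - x^(1/(2d)))^wi)^(2d)
        p = math.prod((1 - n[k] ** (1 / (2 * delta))) ** wi for wi, n in zip(w, nums))
        return (1 - p) ** (2 * delta)

    def indet(k):   # (1 - prod(1 - x^(1/d))^wi)^d
        p = math.prod((1 - n[k] ** (1 / delta)) ** wi for wi, n in zip(w, nums))
        return (1 - p) ** delta

    def falsity(k): # prod(x^wi)
        return math.prod(n[k] ** wi for wi, n in zip(w, nums))

    return (rho, tau, truth(2), truth(3), indet(4), indet(5), falsity(6), falsity(7))
```

Aggregating several copies of the same number returns that number, which is exactly the idempotency property of Theorem 5.2.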

    Proof. We use mathematical induction on n.

    For n = 2, SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) = \varepsilon_{1}\widetilde{\mho}_{1} \oplus \varepsilon_{2}\widetilde{\mho}_{2}, where by Definition 3.4,

    \varepsilon_{1}\widetilde{\mho}_{1} = \Big\langle (\varepsilon_{1}\varrho_{1}, \varepsilon_{1}\tau_{1}); \Big[\Big(1-\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{1}}\big)^{\varepsilon_{1}}\Big)^{2\Delta}, \Big(1-\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{1}}\big)^{\varepsilon_{1}}\Big)^{2\Delta}\Big], \Big[\Big(1-\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IL}}_{1}}\big)^{\varepsilon_{1}}\Big)^{\Delta}, \Big(1-\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IU}}_{1}}\big)^{\varepsilon_{1}}\Big)^{\Delta}\Big], \big[(\Theta^{\mathscr{FL}}_{1})^{\varepsilon_{1}}, (\Theta^{\mathscr{FU}}_{1})^{\varepsilon_{1}}\big]\Big\rangle,

    and similarly for \varepsilon_{2}\widetilde{\mho}_{2}. Applying the \oplus rule componentwise and using the identity (1-a) + (1-b) - (1-a)(1-b) = 1 - ab with a = \big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{1}}\big)^{\varepsilon_{1}} and b = \big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{2}}\big)^{\varepsilon_{2}} (and its analogues for the other components) gives

    SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}) = \Big\langle \Big(\sum^{2}_{i=1}\varepsilon_{i}\varrho_{i}, \sum^{2}_{i=1}\varepsilon_{i}\tau_{i}\Big); \Big[\Big(1-\prod^{2}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{i}}\big)^{\varepsilon_{i}}\Big)^{2\Delta}, \Big(1-\prod^{2}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{i}}\big)^{\varepsilon_{i}}\Big)^{2\Delta}\Big], \Big[\Big(1-\prod^{2}_{i=1}\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IL}}_{i}}\big)^{\varepsilon_{i}}\Big)^{\Delta}, \Big(1-\prod^{2}_{i=1}\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IU}}_{i}}\big)^{\varepsilon_{i}}\Big)^{\Delta}\Big], \Big[\prod^{2}_{i=1}(\Theta^{\mathscr{FL}}_{i})^{\varepsilon_{i}}, \prod^{2}_{i=1}(\Theta^{\mathscr{FU}}_{i})^{\varepsilon_{i}}\Big]\Big\rangle,

    so the formula holds for n = 2. Assume it holds for n = l. For n = l+1,

    SRNSNIVWA(\widetilde{\mho}_{1}, ..., \widetilde{\mho}_{l}, \widetilde{\mho}_{l+1}) = \big(\oplus^{l}_{i=1}\varepsilon_{i}\widetilde{\mho}_{i}\big) \oplus \varepsilon_{l+1}\widetilde{\mho}_{l+1},

    and the same identity 1 - ab merges the product over i = 1, ..., l with the (l+1)-th factor in each component, yielding the stated formula with n = l+1. This completes the induction.

    Theorem 5.2. If all \widetilde{\mho}_{i} = \langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}]\rangle are equal, then SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \widetilde{\mho} (idempotency property).

    Proof. Given that (\varrho_{i}, \tau_{i}) = (\varrho, \tau), [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}] = [\Theta^{\mathscr{TL}}, \Theta^{\mathscr{TU}}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}] = [\Theta^{\mathscr{IL}}, \Theta^{\mathscr{IU}}] and [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}] = [\Theta^{\mathscr{FL}}, \Theta^{\mathscr{FU}}] for all i, with \sum^{n}_{i=1}\varepsilon_{i} = 1, each product in Theorem 5.1 collapses. For instance,

    \prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{i}}\big)^{\varepsilon_{i}} = \big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}}\big)^{\sum^{n}_{i=1}\varepsilon_{i}} = 1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}},

    so the corresponding component equals \big(1-(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}})\big)^{2\Delta} = \Theta^{\mathscr{TL}}. Similarly the ID components reduce to \Theta^{\mathscr{IL}} and \Theta^{\mathscr{IU}}, the FD components to \Theta^{\mathscr{FL}} and \Theta^{\mathscr{FU}}, and \big(\sum^{n}_{i=1}\varepsilon_{i}\varrho, \sum^{n}_{i=1}\varepsilon_{i}\tau\big) = (\varrho, \tau). Hence SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \widetilde{\mho}.

    Theorem 5.3. Let \widetilde{\mho}_{i} = \langle (\varrho_{ij}, \tau_{ij}); [\Theta^{\mathscr{TL}}_{ij}, \Theta^{\mathscr{TU}}_{ij}], [\Theta^{\mathscr{IL}}_{ij}, \Theta^{\mathscr{IU}}_{ij}], [\Theta^{\mathscr{FL}}_{ij}, \Theta^{\mathscr{FU}}_{ij}]\rangle (i = 1, ..., n; j = 1, ..., i_{j}) be SRNSNIVNs, where

    \hat{\varrho} = \inf \varrho_{ij}, \overline{\varrho} = \sup \varrho_{ij}, \hat{\tau} = \sup \tau_{ij}, \overline{\tau} = \inf \tau_{ij},

    \hat{\Theta}^{\mathscr{TL}} = \inf \Theta^{\mathscr{TL}}_{ij}, \overline{\Theta}^{\mathscr{TL}} = \sup \Theta^{\mathscr{TL}}_{ij}, \hat{\Theta}^{\mathscr{TU}} = \inf \Theta^{\mathscr{TU}}_{ij}, \overline{\Theta}^{\mathscr{TU}} = \sup \Theta^{\mathscr{TU}}_{ij},

    \hat{\Theta}^{\mathscr{IL}} = \inf \Theta^{\mathscr{IL}}_{ij}, \overline{\Theta}^{\mathscr{IL}} = \sup \Theta^{\mathscr{IL}}_{ij}, \hat{\Theta}^{\mathscr{IU}} = \inf \Theta^{\mathscr{IU}}_{ij}, \overline{\Theta}^{\mathscr{IU}} = \sup \Theta^{\mathscr{IU}}_{ij},

    \hat{\Theta}^{\mathscr{FL}} = \inf \Theta^{\mathscr{FL}}_{ij}, \overline{\Theta}^{\mathscr{FL}} = \sup \Theta^{\mathscr{FL}}_{ij}, \hat{\Theta}^{\mathscr{FU}} = \inf \Theta^{\mathscr{FU}}_{ij}, \overline{\Theta}^{\mathscr{FU}} = \sup \Theta^{\mathscr{FU}}_{ij}.

    Then

    \langle (\hat{\varrho}, \hat{\tau}); [\hat{\Theta}^{\mathscr{TL}}, \hat{\Theta}^{\mathscr{TU}}], [\hat{\Theta}^{\mathscr{IL}}, \hat{\Theta}^{\mathscr{IU}}], [\overline{\Theta}^{\mathscr{FL}}, \overline{\Theta}^{\mathscr{FU}}]\rangle \leq SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) \leq \langle (\overline{\varrho}, \overline{\tau}); [\overline{\Theta}^{\mathscr{TL}}, \overline{\Theta}^{\mathscr{TU}}], [\overline{\Theta}^{\mathscr{IL}}, \overline{\Theta}^{\mathscr{IU}}], [\hat{\Theta}^{\mathscr{FL}}, \hat{\Theta}^{\mathscr{FU}}]\rangle,

    where 1 \leq i \leq n, j = 1, 2, ..., i_{j} (boundedness property).

    Proof. Since \hat{\Theta}^{\mathscr{TL}} \leq \Theta^{\mathscr{TL}}_{ij} \leq \overline{\Theta}^{\mathscr{TL}} and \hat{\Theta}^{\mathscr{TU}} \leq \Theta^{\mathscr{TU}}_{ij} \leq \overline{\Theta}^{\mathscr{TU}}, and the map x \mapsto \big(1-\prod^{n}_{i=1}(1-\sqrt[2\Delta]{x})^{\varepsilon_{i}}\big)^{2\Delta} is non-decreasing on [0,1],

    \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\hat{\Theta}^{\mathscr{TL}}}\big)^{\varepsilon_{i}}\Big)^{2\Delta} + \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\hat{\Theta}^{\mathscr{TU}}}\big)^{\varepsilon_{i}}\Big)^{2\Delta} \leq \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{ij}}\big)^{\varepsilon_{i}}\Big)^{2\Delta} + \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{ij}}\big)^{\varepsilon_{i}}\Big)^{2\Delta} \leq \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\overline{\Theta}^{\mathscr{TL}}}\big)^{\varepsilon_{i}}\Big)^{2\Delta} + \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\overline{\Theta}^{\mathscr{TU}}}\big)^{\varepsilon_{i}}\Big)^{2\Delta}.

    The analogous chains hold for the ID components (with \sqrt[\Delta]{\cdot} and outer exponent \Delta) and for the FD components, where

    \prod^{n}_{i=1}\big(\hat{\Theta}^{\mathscr{FL}}\big)^{\varepsilon_{i}} + \prod^{n}_{i=1}\big(\hat{\Theta}^{\mathscr{FU}}\big)^{\varepsilon_{i}} \leq \prod^{n}_{i=1}\big(\Theta^{\mathscr{FL}}_{ij}\big)^{\varepsilon_{i}} + \prod^{n}_{i=1}\big(\Theta^{\mathscr{FU}}_{ij}\big)^{\varepsilon_{i}} \leq \prod^{n}_{i=1}\big(\overline{\Theta}^{\mathscr{FL}}\big)^{\varepsilon_{i}} + \prod^{n}_{i=1}\big(\overline{\Theta}^{\mathscr{FU}}\big)^{\varepsilon_{i}}.

    Moreover, \hat{\varrho} \leq \varrho_{ij} \leq \overline{\varrho} and \overline{\tau} \leq \tau_{ij} \leq \hat{\tau} give \sum^{n}_{i=1}\varepsilon_{i}\hat{\varrho} \leq \sum^{n}_{i=1}\varepsilon_{i}\varrho_{ij} \leq \sum^{n}_{i=1}\varepsilon_{i}\overline{\varrho} and \sum^{n}_{i=1}\varepsilon_{i}\overline{\tau} \leq \sum^{n}_{i=1}\varepsilon_{i}\tau_{ij} \leq \sum^{n}_{i=1}\varepsilon_{i}\hat{\tau}. Combining these inequalities in the score function of Definition 3.2 yields

    \langle (\hat{\varrho}, \hat{\tau}); [\hat{\Theta}^{\mathscr{TL}}, \hat{\Theta}^{\mathscr{TU}}], [\hat{\Theta}^{\mathscr{IL}}, \hat{\Theta}^{\mathscr{IU}}], [\overline{\Theta}^{\mathscr{FL}}, \overline{\Theta}^{\mathscr{FU}}]\rangle \leq SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) \leq \langle (\overline{\varrho}, \overline{\tau}); [\overline{\Theta}^{\mathscr{TL}}, \overline{\Theta}^{\mathscr{TU}}], [\overline{\Theta}^{\mathscr{IL}}, \overline{\Theta}^{\mathscr{IU}}], [\hat{\Theta}^{\mathscr{FL}}, \hat{\Theta}^{\mathscr{FU}}]\rangle.

    Theorem 5.4. Let \widetilde{\mho}_{i} = \langle (\varrho_{tij}, \tau_{tij}); [\Theta^{\mathscr{TL}}_{tij}, \Theta^{\mathscr{TU}}_{tij}], [\Theta^{\mathscr{IL}}_{tij}, \Theta^{\mathscr{IU}}_{tij}], [\Theta^{\mathscr{FL}}_{tij}, \Theta^{\mathscr{FU}}_{tij}]\rangle and \widetilde{\mho}^{W}_{i} = \langle (\varrho_{hij}, \tau_{hij}); [\Theta^{\mathscr{TL}}_{hij}, \Theta^{\mathscr{TU}}_{hij}], [\Theta^{\mathscr{IL}}_{hij}, \Theta^{\mathscr{IU}}_{hij}], [\Theta^{\mathscr{FL}}_{hij}, \Theta^{\mathscr{FU}}_{hij}]\rangle be two families of SRNSNIVNs. If for every i, \varrho_{tij} \leq \varrho_{hij}, \Theta^{\mathscr{TL}}_{tij} + \Theta^{\mathscr{TU}}_{tij} \leq \Theta^{\mathscr{TL}}_{hij} + \Theta^{\mathscr{TU}}_{hij}, \Theta^{\mathscr{IL}}_{tij} + \Theta^{\mathscr{IU}}_{tij} \geq \Theta^{\mathscr{IL}}_{hij} + \Theta^{\mathscr{IU}}_{hij} and \Theta^{\mathscr{FL}}_{tij} + \Theta^{\mathscr{FU}}_{tij} \geq \Theta^{\mathscr{FL}}_{hij} + \Theta^{\mathscr{FU}}_{hij}, that is, \widetilde{\mho}_{i} \leq \widetilde{\mho}^{W}_{i}, then SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) \leq SRNSNIVWA(\widetilde{\mho}^{W}_{1}, \widetilde{\mho}^{W}_{2}, ..., \widetilde{\mho}^{W}_{n}), where i = 1, ..., n and j = 1, ..., i_{j} (monotonicity property).

    Proof. For every i, \varrho_{tij} \leq \varrho_{hij} gives \sum^{n}_{i=1}\varepsilon_{i}\varrho_{tij} \leq \sum^{n}_{i=1}\varepsilon_{i}\varrho_{hij}.

    From \Theta^{\mathscr{TL}}_{tij} + \Theta^{\mathscr{TU}}_{tij} \leq \Theta^{\mathscr{TL}}_{hij} + \Theta^{\mathscr{TU}}_{hij} we obtain 1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{ti}} + 1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{ti}} \geq 1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{hi}} + 1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{hi}}, hence

    \prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{ti}}\big)^{\varepsilon_{i}} + \prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{ti}}\big)^{\varepsilon_{i}} \geq \prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{hi}}\big)^{\varepsilon_{i}} + \prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{hi}}\big)^{\varepsilon_{i}}

    and

    \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{ti}}\big)^{\varepsilon_{i}}\Big)^{2\Delta} + \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{ti}}\big)^{\varepsilon_{i}}\Big)^{2\Delta} \leq \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TL}}_{hi}}\big)^{\varepsilon_{i}}\Big)^{2\Delta} + \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{TU}}_{hi}}\big)^{\varepsilon_{i}}\Big)^{2\Delta}.

    In the same way, \Theta^{\mathscr{IL}}_{tij} + \Theta^{\mathscr{IU}}_{tij} \geq \Theta^{\mathscr{IL}}_{hij} + \Theta^{\mathscr{IU}}_{hij} implies that the aggregated ID components satisfy the reverse inequality, and \Theta^{\mathscr{FL}}_{tij} + \Theta^{\mathscr{FU}}_{tij} \geq \Theta^{\mathscr{FL}}_{hij} + \Theta^{\mathscr{FU}}_{hij} implies

    \prod^{n}_{i=1}\big(\Theta^{\mathscr{FL}}_{tij}\big)^{\varepsilon_{i}} + \prod^{n}_{i=1}\big(\Theta^{\mathscr{FU}}_{tij}\big)^{\varepsilon_{i}} \geq \prod^{n}_{i=1}\big(\Theta^{\mathscr{FL}}_{hij}\big)^{\varepsilon_{i}} + \prod^{n}_{i=1}\big(\Theta^{\mathscr{FU}}_{hij}\big)^{\varepsilon_{i}}.

    Substituting these inequalities into the score function of Definition 3.2 gives S\big(SRNSNIVWA(\widetilde{\mho}_{1}, ..., \widetilde{\mho}_{n})\big) \leq S\big(SRNSNIVWA(\widetilde{\mho}^{W}_{1}, ..., \widetilde{\mho}^{W}_{n})\big).

    Hence, SRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) \leq SRNSNIVWA(\widetilde{\mho}^{W}_{1}, \widetilde{\mho}^{W}_{2}, ..., \widetilde{\mho}^{W}_{n}).

    Definition 5.2. Let \widetilde{\mho}_{i} = \langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}]\rangle be a family of SRNSNIVNs. Then SRNSNIVWG(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \otimes^{n}_{i=1}\widetilde{\mho}_{i}^{\varepsilon_{i}}.

    Theorem 5.5. Let \widetilde{\mho}_{i} = \langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}]\rangle be a family of SRNSNIVNs. Then

    SRNSNIVWG(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \Big\langle \Big(\prod^{n}_{i=1}\varrho_{i}^{\varepsilon_{i}}, \prod^{n}_{i=1}\tau_{i}^{\varepsilon_{i}}\Big); \Big[\prod^{n}_{i=1}(\Theta^{\mathscr{TL}}_{i})^{\varepsilon_{i}}, \prod^{n}_{i=1}(\Theta^{\mathscr{TU}}_{i})^{\varepsilon_{i}}\Big], \Big[\Big(1-\prod^{n}_{i=1}\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IL}}_{i}}\big)^{\varepsilon_{i}}\Big)^{\Delta}, \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[\Delta]{\Theta^{\mathscr{IU}}_{i}}\big)^{\varepsilon_{i}}\Big)^{\Delta}\Big], \Big[\Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{FL}}_{i}}\big)^{\varepsilon_{i}}\Big)^{2\Delta}, \Big(1-\prod^{n}_{i=1}\big(1-\sqrt[2\Delta]{\Theta^{\mathscr{FU}}_{i}}\big)^{\varepsilon_{i}}\Big)^{2\Delta}\Big]\Big\rangle.
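The geometric operator is dual to the averaging one and can be sketched the same way (names, the tuple encoding and the delta keyword are ours; the formulas follow our reconstruction of Theorem 5.5):

```python
import math

# Sketch of the Theorem 5.5 closed form: geometric on (rho, tau) and the truth
# grades, probabilistic-sum form on the indeterminacy and falsity grades.
def srnsnivwg(nums, w, delta=1.0):
    rho = math.prod(n[0] ** wi for wi, n in zip(w, nums))
    tau = math.prod(n[1] ** wi for wi, n in zip(w, nums))

    def truth(k):   # prod(x^wi)
        return math.prod(n[k] ** wi for wi, n in zip(w, nums))

    def indet(k):   # (1 - prod(1 - x^(1/d))^wi)^d
        p = math.prod((1 - n[k] ** (1 / delta)) ** wi for wi, n in zip(w, nums))
        return (1 - p) ** delta

    def falsity(k): # (1 - prod(1 - x^(1/(2d)))^wi)^(2d)
        p = math.prod((1 - n[k] ** (1 / (2 * delta))) ** wi for wi, n in zip(w, nums))
        return (1 - p) ** (2 * delta)

    return (rho, tau, truth(2), truth(3), indet(4), indet(5), falsity(6), falsity(7))
```

As with SRNSNIVWA, aggregating identical inputs returns that input, consistent with Theorem 5.6.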

    Theorem 5.6. If all \widetilde{\mho}_{i} = \langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}]\rangle (i = 1, 2, ..., n) are equal, then SRNSNIVWG(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \widetilde{\mho}.

    Remark 5.1. The SRNSNIVWG operator also satisfies the boundedness and monotonicity properties.

    Definition 5.3. Let \widetilde{\mho}_{i} = \langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}]\rangle be a family of SRNSNIVNs. Then GSRNSNIVWA(\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \big(\oplus^{n}_{i=1}\varepsilon_{i}\widetilde{\mho}_{i}^{\Delta}\big)^{1/\Delta}.

    Theorem 5.7. Let \widetilde{\mho}_{i} = \Big\langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}] \Big\rangle be the family of SRNSNIVNs. Then

    GSRNSNIVWA(\widetilde{\mho}_{1} , \widetilde{\mho}_{2},...,\widetilde{\mho}_{n}) = \\ { \begin{bmatrix} \Bigg( \Big( \oplus^{n}_{i = 1} \varepsilon_{i}\varrho_{i}^{\Delta}\Big)^{1/\Delta}, \Big( \oplus^{n}_{i = 1} \varepsilon_{i}\tau_{i}^{\Delta}\Big)^{1/\Delta}\Bigg);\\ \begin{bmatrix} \left(\left( \begin{aligned} 1- \bigcirc^{n}_{i = 1} \left( 1-\Big(\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{2\Delta}\,\,\right)^{\Delta}, \left(\left( \begin{aligned} 1- \bigcirc^{n}_{i = 1} \left( 1-\Big(\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{2\Delta}\,\,\right)^{\Delta} \end{bmatrix},\\ \begin{bmatrix} \left(\left( \begin{aligned} 1- \bigcirc^{n}_{i = 1} \left( 1-\Big(\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{\Delta}\,\,\right)^{\Delta}, \left(\left( \begin{aligned} 1- \bigcirc^{n}_{i = 1} \left( 1-\Big(\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{\Delta}\,\,\right)^{\Delta} \end{bmatrix},\\ \begin{bmatrix} \left(1-\Bigg( \begin{aligned} 1-\sqrt[2\Delta]{ \bigcirc^{n}_{i = 1} \Bigg( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{i}}}\\ \end{aligned}\,\,\Bigg)^{\Delta}\right)^{2\Delta},\\ \left(1-\Bigg( \begin{aligned} 1-\sqrt[2\Delta]{ \bigcirc^{n}_{i = 1} \Bigg( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{i}}}\\ \end{aligned}\,\,\Bigg)^{\Delta}\right)^{2\Delta}\\ \end{bmatrix}\\ \end{bmatrix}.}

    Proof. It must be demonstrated that,

    \oplus^{n}_{i = 1} \varepsilon_{i} \widetilde{\mho}_{i}^{\Delta} = \begin{bmatrix} \Bigg( \Big( \oplus^{n}_{i = 1} \varepsilon_{i}\varrho_{i}^{\Delta}\Big), \Big( \oplus^{n}_{i = 1} \varepsilon_{i}\tau_{i}^{\Delta}\Big)\Bigg);\\ \begin{bmatrix} \begin{aligned} \left(1- \bigcirc^{n}_{i = 1} \Bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{2\Delta}\\ \end{aligned}\,\,, \begin{aligned} \left(1- \bigcirc^{n}_{i = 1} \Bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{2\Delta}\\ \end{aligned}\,\, \end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \left(1- \bigcirc^{n}_{i = 1} \Bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{\Delta}\\ \end{aligned}\,\,, \begin{aligned} \left(1- \bigcirc^{n}_{i = 1} \Bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{\Delta}\\ \end{aligned}\,\, \end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \bigcirc^{n}_{i = 1} \left( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\right)^{\varepsilon_{i}}\\ \end{aligned}, \begin{aligned} \bigcirc^{n}_{i = 1} \left( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\right)^{\varepsilon_{i}}\\ \end{aligned} \end{bmatrix} \end{bmatrix}.

    Put n = 2 , \varepsilon_{1}\mho_{1}\oplus \varepsilon_{2}\mho_{2} =

    \begin{eqnarray*} &&\begin{bmatrix} \Big(\varepsilon_{1}\varrho_{1}^{\Delta}+ \varepsilon_{2}\varrho_{2}^{\Delta}, \varepsilon_{1}\tau_{1}^{\Delta} + \varepsilon_{2}\tau_{2}^{\Delta}\Big);\\ \begin{bmatrix} \left(\begin{aligned} \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{2\Delta}} + \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{2})^{\Delta}}\bigg)^{\varepsilon_{2}}\\ \end{aligned}\Bigg)^{2\Delta}}\\ - \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{2\Delta}} \cdot \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{2})^{\Delta}}\bigg)^{\varepsilon_{2}}\\ \end{aligned}\Bigg)^{2\Delta}} \end{aligned}\,\,\right)^{2\Delta}\,\,,\\ \left(\begin{aligned} \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{2\Delta}} + \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{2})^{\Delta}}\bigg)^{\varepsilon_{2}}\\ \end{aligned}\Bigg)^{2\Delta}}\\ - \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{2\Delta}} \cdot \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{2})^{\Delta}}\bigg)^{\varepsilon_{2}}\\ \end{aligned}\Bigg)^{2\Delta}} \end{aligned}\,\,\right)^{2\Delta} \,\,\,\,\,\end{bmatrix},\\ \begin{bmatrix} \left(\begin{aligned} \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{\Delta}} + \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{2})^{\Delta}}\bigg)^{\varepsilon_{2}}\\ 
\end{aligned}\Bigg)^{\Delta}}\\ - \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{\Delta}} \cdot \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{2})^{\Delta}}\bigg)^{\varepsilon_{2}}\\ \end{aligned}\Bigg)^{\Delta}} \end{aligned}\,\,\right)^{\Delta}\,\,,\\ \left(\begin{aligned} \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{\Delta}} + \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{2})^{\Delta}}\bigg)^{\varepsilon_{2}}\\ \end{aligned}\Bigg)^{\Delta}}\\ - \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{\Delta}} \cdot \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{2})^{\Delta}}\bigg)^{\varepsilon_{2}}\\ \end{aligned}\Bigg)^{\Delta}} \end{aligned}\,\,\right)^{\Delta} \,\,\,\,\,\end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \Bigg(\left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{1})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{1}} \cdot \Bigg(\left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{2})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{1}}\\ \end{aligned},\\ \begin{aligned} \Bigg(\left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{1})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{1}} \cdot \Bigg(\left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{2})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{1}}\\ \end{aligned} \end{bmatrix} \end{bmatrix} \end{eqnarray*}
    = \begin{bmatrix} \Bigg( \Big( \oplus^{2}_{i = 1} \varepsilon_{i}\varrho_{i}^{\Delta}\Big), \Big( \oplus^{2}_{i = 1} \varepsilon_{i}\tau_{i}^{\Delta}\Big)\Bigg);\\ \begin{bmatrix} \begin{aligned} \left(1- \bigcirc^{2}_{i = 1} \Bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{2\Delta}\\ \end{aligned}\,\,, \begin{aligned} \left(1- \bigcirc^{2}_{i = 1} \Bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{2\Delta}\\ \end{aligned}\,\, \end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \left(1- \bigcirc^{2}_{i = 1} \Bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{\Delta}\\ \end{aligned}\,\,, \begin{aligned} \left(1- \bigcirc^{2}_{i = 1} \Bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{\Delta}\\ \end{aligned}\,\, \end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \bigcirc^{2}_{i = 1} \left( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\right)^{\varepsilon_{i}}\\ \end{aligned}, \begin{aligned} \bigcirc^{2}_{i = 1} \left( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\right)^{\varepsilon_{i}}\\ \end{aligned} \end{bmatrix} \end{bmatrix}.

    In general,

    \begin{bmatrix} \Bigg( \Big( \oplus^{l}_{i = 1} \varepsilon_{i}\varrho_{i}^{\Delta}\Big), \Big( \oplus^{l}_{i = 1} \varepsilon_{i}\tau_{i}^{\Delta}\Big)\Bigg);\\ \begin{bmatrix} \begin{aligned} \left(1- \bigcirc^{l}_{i = 1} \Bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{2\Delta}\\ \end{aligned}\,\,, \begin{aligned} \left(1- \bigcirc^{l}_{i = 1} \Bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{2\Delta}\\ \end{aligned}\,\, \end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \left(1- \bigcirc^{l}_{i = 1} \Bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{\Delta}\\ \end{aligned}\,\,, \begin{aligned} \left(1- \bigcirc^{l}_{i = 1} \Bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{\Delta}\\ \end{aligned}\,\, \end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \bigcirc^{l}_{i = 1} \left( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\right)^{\varepsilon_{i}}\\ \end{aligned}, \begin{aligned} \bigcirc^{l}_{i = 1} \left( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\right)^{\varepsilon_{i}}\\ \end{aligned} \end{bmatrix} \end{bmatrix}.

If n = l+1 , then, using the result for n = l above,

\oplus^{l+1}_{i = 1} \varepsilon_{i}\widetilde{\mho}_{i}^{\Delta} = \oplus^{l}_{i = 1} \varepsilon_{i}\widetilde{\mho}_{i}^{\Delta} \oplus \varepsilon_{l+1} \widetilde{\mho}_{l+1}^{\Delta} = \varepsilon_{1}\widetilde{\mho}_{1}^{\Delta} \oplus \varepsilon_{2}\widetilde{\mho}_{2}^{\Delta} \oplus... \oplus \varepsilon_{l}\widetilde{\mho}_{l}^{\Delta} \oplus \varepsilon_{l+1}\widetilde{\mho}_{l+1}^{\Delta}

    \begin{eqnarray*} & = &\begin{bmatrix} \Big(\varepsilon_{i}\varrho_{i}^{\Delta}+ \varepsilon_{l+1}\varrho_{l+1}^{\Delta}, \varepsilon_{i}\tau_{i}^{\Delta} + \varepsilon_{l+1}\tau_{l+1}^{\Delta}\Big);\\ \begin{bmatrix} \left(\begin{aligned} \sqrt[2\Delta]{\Bigg( \begin{aligned} 1- \bigcirc^{l}_{i = 1}\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{i})^{\Delta}}\bigg)^{\varepsilon_{i}}\\ \end{aligned}\Bigg)^{2\Delta}} + \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{l+1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{2\Delta}}\\ - \sqrt[2\Delta]{\Bigg( \begin{aligned} 1- \bigcirc^{l}_{i = 1}\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{i})^{\Delta}}\bigg)^{\varepsilon_{i}}\\ \end{aligned}\Bigg)^{2\Delta}} \cdot \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{l+1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{2\Delta}} \end{aligned}\,\,\right)^{2\Delta}\,\,,\\ \left(\begin{aligned} \sqrt[2\Delta]{\Bigg( \begin{aligned} 1- \bigcirc^{l}_{i = 1}\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{i})^{\Delta}}\bigg)^{\varepsilon_{i}}\\ \end{aligned}\Bigg)^{2\Delta}} + \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{l+1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{2\Delta}}\\ - \sqrt[2\Delta]{\Bigg( \begin{aligned} 1- \bigcirc^{l}_{i = 1}\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{i})^{\Delta}}\bigg)^{\varepsilon_{i}}\\ \end{aligned}\Bigg)^{2\Delta}} \cdot \sqrt[2\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{l+1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{2\Delta}} \end{aligned}\,\,\right)^{2\Delta} \,\,\,\,\,\end{bmatrix},\\ \begin{bmatrix} \left(\begin{aligned} \sqrt[\Delta]{\Bigg( \begin{aligned} 1- \bigcirc^{l}_{i = 1}\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{i})^{\Delta}}\bigg)^{\varepsilon_{i}}\\ \end{aligned}\Bigg)^{\Delta}} + \sqrt[\Delta]{\Bigg( 
\begin{aligned} 1-\bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{l+1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{\Delta}}\\ - \sqrt[\Delta]{\Bigg( \begin{aligned} 1- \bigcirc^{l}_{i = 1}\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{i})^{\Delta}}\bigg)^{\varepsilon_{i}}\\ \end{aligned}\Bigg)^{\Delta}} \cdot \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{l+1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{\Delta}} \end{aligned}\,\,\right)^{\Delta}\,\,,\\ \left(\begin{aligned} \sqrt[\Delta]{\Bigg( \begin{aligned} 1- \bigcirc^{l}_{i = 1}\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{i})^{\Delta}}\bigg)^{\varepsilon_{i}}\\ \end{aligned}\Bigg)^{\Delta}} + \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{l+1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{\Delta}}\\ - \sqrt[\Delta]{\Bigg( \begin{aligned} 1- \bigcirc^{l}_{i = 1}\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{i})^{\Delta}}\bigg)^{\varepsilon_{i}}\\ \end{aligned}\Bigg)^{\Delta}} \cdot \sqrt[\Delta]{\Bigg( \begin{aligned} 1-\bigg(1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{l+1})^{\Delta}}\bigg)^{\varepsilon_{1}}\\ \end{aligned}\Bigg)^{\Delta}} \end{aligned}\,\,\right)^{\Delta} \,\,\,\,\,\end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \bigcirc^{l}_{i = 1}\Bigg(\left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{i}} \cdot \Bigg(\left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{l+1})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{1}}\\ \end{aligned},\\ \begin{aligned} \bigcirc^{l}_{i = 1}\Bigg(\left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{i}} \cdot \Bigg(\left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{l+1})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{1}}\\ \end{aligned} \end{bmatrix} \end{bmatrix}. \end{eqnarray*}

    Hence,

    \oplus^{l+1}_{i = 1} \varepsilon_{i}\widetilde{\mho}_{i}^{\Delta} = \begin{bmatrix} \Bigg( \Big( \oplus^{l+1}_{i = 1} \varepsilon_{i}\varrho_{i}^{\Delta}\Big), \Big( \oplus^{l+1}_{i = 1} \varepsilon_{i}\tau_{i}^{\Delta}\Big)\Bigg);\\ \begin{bmatrix} \begin{aligned} \left(1- \bigcirc^{l+1}_{i = 1} \Bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{2\Delta}\\ \end{aligned}\,\,, \begin{aligned} \left(1- \bigcirc^{l+1}_{i = 1} \Bigg( 1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{2\Delta}\\ \end{aligned}\,\, \end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \left(1- \bigcirc^{l+1}_{i = 1} \Bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{\Delta}\\ \end{aligned}\,\,, \begin{aligned} \left(1- \bigcirc^{l+1}_{i = 1} \Bigg( 1-\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{i})^{\Delta}}\Bigg)^{\varepsilon_{i}}\right)^{\Delta}\\ \end{aligned}\,\, \end{bmatrix},\\ \begin{bmatrix} \begin{aligned} \bigcirc^{l+1}_{i = 1} \left( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\right)^{\varepsilon_{i}}\\ \end{aligned}, \begin{aligned} \bigcirc^{l+1}_{i = 1} \left( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\right)^{\varepsilon_{i}}\\ \end{aligned} \end{bmatrix} \end{bmatrix}.

    Also, \left(\bigcirc^{l+1}_{i = 1} \varepsilon_{i}\widetilde{\mho}_{i}^{\Delta}\right)^{1/\Delta} =

    \begin{bmatrix} \Bigg( \Big( \bigcirc^{l+1}_{i = 1} \varepsilon_{i}\varrho_{i}^{\Delta}\Big)^{1/\Delta}, \Big( \bigcirc^{l+1}_{i = 1} \varepsilon_{i}\tau_{i}^{\Delta}\Big)^{1/\Delta}\Bigg);\\ \begin{bmatrix} \left(\left( \begin{aligned} 1- \bigcirc^{l+1}_{i = 1} \left( 1-\Big(\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{2\Delta}\,\,\right)^{\Delta}, \left(\left( \begin{aligned} 1- \bigcirc^{l+1}_{i = 1} \left( 1-\Big(\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{2\Delta}\,\,\right)^{\Delta} \end{bmatrix},\\ \begin{bmatrix} \left(\left( \begin{aligned} 1- \bigcirc^{l+1}_{i = 1} \left( 1-\Big(\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{\Delta}\,\,\right)^{\Delta}, \left(\left( \begin{aligned} 1- \bigcirc^{l+1}_{i = 1} \left( 1-\Big(\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{\Delta}\,\,\right)^{\Delta} \end{bmatrix},\\ \begin{bmatrix} \left(1-\Bigg( \begin{aligned} 1-\sqrt[2\Delta]{ \bigcirc^{l+1}_{i = 1} \Bigg( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{i}}}\\ \end{aligned}\,\,\Bigg)^{\Delta}\right)^{2\Delta},\\ \left(1-\Bigg( \begin{aligned} 1-\sqrt[2\Delta]{ \bigcirc^{l+1}_{i = 1} \Bigg( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{i}}}\\ \end{aligned}\,\,\Bigg)^{\Delta}\right)^{2\Delta}\\ \end{bmatrix}\\ \end{bmatrix}.

Remark 5.2. The GSRNSNIVWA operator reduces to the SRNSNIVWA operator if \Delta = 1 .

Theorem 5.8. If all \widetilde{\mho}_{i} = \Big\langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}] \Big\rangle (i = 1\, \, to \, \, n) are equal, then { GSRNSNIVWA (\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \widetilde{\mho} }.

Remark 5.3. The GSRNSNIVWA operator satisfies the boundedness and monotonicity properties.

    Definition 5.4. Let \widetilde{\mho}_{i} = \Big\langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}] \Big\rangle be the family of SRNSNIVNs. Then GSRNSNIVWG (\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \frac{1}{\Delta}\Big(\bigcirc^{n}_{i = 1} (\Delta\widetilde{\mho}_{i})^{\varepsilon_{i}} \Big) .

Theorem 5.9. Let \widetilde{\mho}_{i} = \Big\langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}] \Big\rangle be the family of SRNSNIVNs. Then GSRNSNIVWG (\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) =

    \begin{bmatrix} \Bigg( \frac{1}{\Delta} \bigcirc^{n}_{i = 1} (\Delta \varrho_{i})^{\varepsilon_{i}}, \frac{1}{\Delta} \bigcirc^{n}_{i = 1} (\Delta \tau_{i})^{\varepsilon_{i}}\Bigg);\\ \begin{bmatrix} \left(1-\Bigg( \begin{aligned} 1-\sqrt[2\Delta]{ \bigcirc^{n}_{i = 1} \Bigg( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TL}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{i}}}\\ \end{aligned}\,\,\Bigg)^{\Delta}\right)^{2\Delta},\\ \left(1-\Bigg( \begin{aligned} 1-\sqrt[2\Delta]{ \bigcirc^{n}_{i = 1} \Bigg( \left(1-\Big(1-\sqrt[2\Delta]{(\Theta^{\mathscr{TU}}_{i})}\Big)^{\Delta}\right)^{2\Delta}\Bigg)^{\varepsilon_{i}}}\\ \end{aligned}\,\,\Bigg)^{\Delta}\right)^{2\Delta}\\ \end{bmatrix},\\ \begin{bmatrix} \left(\left( \begin{aligned} 1- \bigcirc^{n}_{i = 1} \left( 1-\Big(\sqrt[\Delta]{(\Theta^{\mathscr{IL}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{\Delta}\,\,\right)^{\Delta}, \left(\left( \begin{aligned} 1- \bigcirc^{n}_{i = 1} \left( 1-\Big(\sqrt[\Delta]{(\Theta^{\mathscr{IU}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{\Delta}\,\,\right)^{\Delta} \end{bmatrix},\\ \begin{bmatrix} \left(\left( \begin{aligned} 1- \bigcirc^{n}_{i = 1} \left( 1-\Big(\sqrt[2\Delta]{(\Theta^{\mathscr{FL}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{2\Delta}\,\,\right)^{\Delta}, \left(\left( \begin{aligned} 1- \bigcirc^{n}_{i = 1} \left( 1-\Big(\sqrt[2\Delta]{(\Theta^{\mathscr{FU}}_{i})^{\Delta}}\Big)\right)^{\varepsilon_{i}}\\ \end{aligned}\,\,\right)^{2\Delta}\,\,\right)^{\Delta} \end{bmatrix} \end{bmatrix}.

    Remark 5.4. The GSRNSNIVWG becomes the SRNSNIVWG operator if \Delta = 1 .

Remark 5.5. The GSRNSNIVWG operator satisfies the boundedness and monotonicity properties.

Theorem 5.10. If all \widetilde{\mho}_{i} = \Big\langle (\varrho_{i}, \tau_{i}); [\Theta^{\mathscr{TL}}_{i}, \Theta^{\mathscr{TU}}_{i}], [\Theta^{\mathscr{IL}}_{i}, \Theta^{\mathscr{IU}}_{i}], [\Theta^{\mathscr{FL}}_{i}, \Theta^{\mathscr{FU}}_{i}] \Big\rangle (i = 1\, \, to \, \, n) are equal, then { GSRNSNIVWG (\widetilde{\mho}_{1}, \widetilde{\mho}_{2}, ..., \widetilde{\mho}_{n}) = \widetilde{\mho} }.

Let \widetilde{\Xi} = \{\widetilde{\Xi}_{1}, \widetilde{\Xi}_{2}, ..., \widetilde{\Xi}_{n}\} be the set of n alternatives, C = \{C_{1}, C_{2}, ..., C_{m}\} be the set of m attributes, and w = \{\varepsilon_{1}, \varepsilon_{2}, ..., \varepsilon_{m}\} be the attribute weights. Then \widetilde{\widetilde{\Xi}}_{ij} = \Big\langle (\varrho_{ij}, \tau_{ij}); [\Theta^{\mathscr{TL}}_{ij}, \Theta^{\mathscr{TU}}_{ij}], [\Theta^{\mathscr{IL}}_{ij}, \Theta^{\mathscr{IU}}_{ij}], [\Theta^{\mathscr{FL}}_{ij}, \Theta^{\mathscr{FU}}_{ij}] \Big\rangle denotes the SRNSNIVN of \widetilde{\Xi}_{i} under C_{j} . Here, \Big[\Theta^{\mathscr{TL}}_{ij}, \Theta^{\mathscr{TU}}_{ij}\Big], \Big[\Theta^{\mathscr{IL}}_{ij}, \Theta^{\mathscr{IU}}_{ij}\Big], \Big[\Theta_{ij}^{\mathscr{FL}}, \Theta_{ij}^{\mathscr{FU}}\Big]\in [0, 1] and 0 \leq (\Theta^{\mathscr{TU}}_{ij}(\mu))^{2}+\sqrt{(\Theta^{\mathscr{IU}}_{ij}(\mu))} +\sqrt{(\Theta_{ij}^{\mathscr{FU}}(\mu))} \leq 2 .

Step-1: Construct the SRNSNIV decision matrix from the choice values.

Step-2: Normalize the decision matrix. The matrix of choices \mathbb{D} = (\widetilde{\Xi}_{ij})_{n \times m} is normalized by putting

\widehat{\varrho_{ij}} = \frac{\varrho_{ij}} {\sup\limits_{i}(\varrho_{ij})}, \widehat{\tau_{ij}} = \frac{\tau_{ij}}{\sup\limits_{i}(\tau_{ij})} \cdot \frac{\tau_{ij}}{\varrho_{ij}},\,\, \widehat{\Theta^{\mathscr{TL}}_{ij}} = \Theta^{\mathscr{TL}}_{ij}, \widehat{\Theta^{\mathscr{TU}}_{ij}} = \Theta^{\mathscr{TU}}_{ij},

with the indeterminacy and falsity components likewise left unchanged.
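As a minimal sketch, the normalization of the normal-fuzzy parts (\varrho_{ij}, \tau_{ij}) can be checked numerically against column C_1 of Tables 1 and 2 below (the function name `normalize_column` is ours, not the paper's):

```python
# Step-2 normalization of the normal-fuzzy parts of one attribute column:
#   rho_hat = rho / sup(rho),  tau_hat = (tau / sup(tau)) * (tau / rho)
def normalize_column(rho, tau):
    sup_rho, sup_tau = max(rho), max(tau)
    rho_hat = [r / sup_rho for r in rho]
    tau_hat = [(t / sup_tau) * (t / r) for r, t in zip(rho, tau)]
    return rho_hat, tau_hat

# (rho, tau) values of C_1 for Xi_1..Xi_5, taken from Table 1
rho = [0.8, 0.8, 0.85, 0.65, 0.7]
tau = [0.55, 0.75, 0.7, 0.6, 0.6]
rho_hat, tau_hat = normalize_column(rho, tau)
print([round(v, 4) for v in rho_hat])  # [0.9412, 0.9412, 1.0, 0.7647, 0.8235]
print([round(v, 4) for v in tau_hat])  # [0.5042, 0.9375, 0.7686, 0.7385, 0.6857]
```

The printed values coincide with column C_1 of Table 2.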

Step-3: Find the aggregated values. Using the SRNSNIV operators, the attribute value of C_{j} in \widetilde{\Xi}_{i} , \Big\langle (\widehat{\varrho_{ij}}, \widehat{\tau_{ij}}); [\widehat{\Theta^{\mathscr{TL}}_{ij}}, \widehat{\Theta^{\mathscr{TU}}_{ij}}], [\widehat{\Theta^{\mathscr{IL}}_{ij}}, \widehat{\Theta^{\mathscr{IU}}_{ij}}], [\widehat{\Theta^{\mathscr{FL}}_{ij}}, \widehat{\Theta^{\mathscr{FU}}_{ij}}] \Big\rangle ,

    is aggregated into \Big\langle (\widehat{\varrho_{i}}, \widehat{\tau_{i}}); [\widehat{\Theta^{\mathscr{TL}}_{i}}, \widehat{\Theta^{\mathscr{TU}}_{i}}], [\widehat{\Theta^{\mathscr{IL}}_{i}}, \widehat{\Theta^{\mathscr{IU}}_{i}}], [\widehat{\Theta^{\mathscr{FL}}_{i}}, \widehat{\Theta^{\mathscr{FU}}_{i}}] \Big\rangle .

    Step-4: Calculate the positive and negative ideal values:

Step-5: Find the Euclidean distance (ED) between each alternative and the two ideal values:

Step-6: Calculate the relative closeness as follows:

    \mathbb{D}^{\star}_{i} = \frac{\mathbb{D}_{i}^{N}}{\mathbb{D}_{i}^{P} + \mathbb{D}_{i}^{N}}.

Step-7: The best output is the alternative attaining \sup \mathbb{D}^{\star}_{i} ; to solve the problem, the decision is to choose this option.
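Steps 4-7 can be sketched as follows. This is a minimal illustration of the control flow only: plain hypothetical scalar scores stand in for the SRNSNIVNs and their distance measure, and the function name `rank_alternatives` is ours, not the paper's.

```python
# Sketch of Steps 4-7 on hypothetical scalar scores (stand-ins for SRNSNIVNs).
def rank_alternatives(scores):
    """scores[i] = aggregated score of alternative i (the Step-3 output)."""
    pos_ideal, neg_ideal = max(scores), min(scores)           # Step-4: ideal values
    d_pos = [abs(s - pos_ideal) for s in scores]              # Step-5: distances to
    d_neg = [abs(s - neg_ideal) for s in scores]              #         both ideals
    closeness = [dn / (dp + dn) if dp + dn else 0.0           # Step-6: D* = D^N/(D^P+D^N)
                 for dp, dn in zip(d_pos, d_neg)]
    order = sorted(range(len(scores)), key=lambda i: -closeness[i])  # Step-7: best first
    return closeness, order

closeness, order = rank_alternatives([0.42, 0.61, 0.55])
print(order)  # [1, 2, 0] -- the alternative with the largest closeness comes first
```

With full SRNSNIVNs, `abs(s - ideal)` would be replaced by the Euclidean distance between SRNSNIVNs used in Step-5.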

Educators, economists, managers, and politicians all face DM challenges every day. Before making a final robotic engineering choice, the following five alternatives are taken into account. Based on professional evaluations against the criteria, we want to choose the best option from a large number of alternatives. Robotics is a branch of applied engineering that combines computer science and machine-tool technology; it covers computer programming, microelectronics, machine design, and artificial intelligence. We have selected five types of medical robotics: robotic nurses, pharma robotics, robotic-assisted biopsy, antibacterial nanomaterials, and AI diagnostics and AI epidemiology. A robotics system should be chosen based on the following four criteria: robot controller features (C_1) , affordable offline programming software (C_2) , safety codes (C_3) , and the experience and reputation of the robot manufacturer (C_4) , with weights w = \{0.4, 0.3, 0.2, 0.1\} .

    (1) ROBOTIC NURSES (\widetilde{\Xi}_{1}) :

It is useful to have nursing robots in hospitals as well as in senior living facilities. Robots may allow nurses to focus on their work by relieving them of part of their workloads. Automated machines have already been considered to support processes relevant to nurses' primary duties, such as the distribution of food trays, medications, and laboratory specimens in hospitals. A robotic system designed to assist nurses in patient transfers, ambulation, and movement restriction could significantly reduce the physical strain they face. Telemedicine might also be offered by nursing robots: telepresence platforms equipped with robotic nurses enable doctors to communicate effectively with patients remotely. The robots travel around the hospital and use their onboard screens to make visual contact with patients during routine vital-sign visits; the robot also records the patient's vital signs at regular intervals in accordance with standard clinical protocols. Sales of health care robots were expected to reach 2.8 billion dollars by 2021, and health care seems to have a limitless number of applications. This market includes robots for hospitals, housing, and surgery: surgical robots, training exoskeletons, intelligent prostheses and bionics, robotic nurses, and robots that assist with therapy, medication, logistics, telepresence, and cleaning. Further, touch sensors, speech recognition, gesture control, and machine vision are crucial technologies for health care robotics. Dinsow is a robot nurse used in Thai and Japanese hospitals; while keeping an eye on elderly patients, it conducts video talks with their families, and its robotic arm can be operated remotely from a desktop or laptop computer.

    (2) PHARMAROBOTICS (\widetilde{\Xi}_{2}) :

Worldwide, dairy farmers administer millions of vaccines and reproductive products to cows. Administering these medications takes a great deal of training and work. Under current protocols, only about 70 % of cows in this sector achieve a 100 % compliance rate, meaning they do not receive the full prescribed schedule of vaccines or reproductive products, which can negatively impact their health. We are pro-environment and pro-animal-health farmers. As part of the effort to build a healthier planet, we believe that using the patented technology to automate the administration of essential drugs to animals will improve animal health standards. Sure Shot may administer up to three vaccines and reproductive products; an automated gate system, RFID reader, camera ID reader, and dairy management software deliver the products matched to each cow's RFID tag. As a dairy farmer responsible for several agricultural activities, Alexander was mentored by Marinus in 2017. As a first-year college student, Alexander had the idea of improving animal health standards for cows and providing farmers with a sustainable economic future. The idea, a robotic injection system for domestic herd animals, was patented and a new challenge was created. The concept then entered an accelerator program, after which the team was able to develop a smart business model. Pharm Robotics was founded in February 2019.

    (3) ROBOTIC-ASSISTED BIOPSY (\widetilde{\Xi}_{3}) :

The objective of our research was to develop a robot that could perform computed tomography (CT)-guided biopsies. The method's viability, accuracy, and efficacy were evaluated using ten peas placed inside a gel phantom (mean diameter 9.9 ± 0.4 mm). Using CT imaging to identify the optimal access point, the position of the phantom was captured with an optical tracking device. After the correct angle, pitch, and location were determined, the robot planning system (a Linux-based industrial PC outfitted with a video capture card) received locational information about the phantom and its CT image. Using seven degrees of freedom, the robotic arm directed the needle's trajectory to the center of the target, and the biopsy was performed with a coaxial approach. After measuring the length of all harvested specimens, short bits of a guide were pushed into the target to determine the needle track's departure from the target. Biopsy specimens were available for all targets (mean length 5.6 ± 1.4 mm; only one needle pass was needed). The mean deviation of the needle tip from the center of the target was 1.2 ± 0.9 mm in one axis and 0.6 ± 0.4 mm in the other. It was thus possible to test the perceptual and dexterity abilities of doctors performing robot-assisted biopsies in vitro under CT guidance. Such biopsies support early cancer diagnosis with enhanced detection and precise treatment, and the use of robotics in this setting may steer procedures towards more accurate biopsies and focused medicines. During the biopsy technique, which is stabilized by robotic manipulation, a tissue sample is obtained from the probable lesion.

    (4) ANTIBACTERIAL NANOMATERIALS (\widetilde{\Xi}_{4}) :

Blood keeps us alive by constantly fighting poisons and bacteria. Due to the rise in antibiotic-resistant bacteria, robots are being called upon to save the day. It may one day be possible to use tiny robots to save patients from bacterial illnesses that doctors increasingly find themselves unable to treat. The tiny robots are made of gold nanowires coated with a membrane that neutralizes bacteria and toxins, and they are propelled by ultrasonic waves from outside the body. The major innovation lies in the device's coating: a red-blood-cell component neutralizes the bacteria's toxins, while a platelet component attacks the bacteria, both working their magic through sensors on their exterior membranes. To create the toxin-fighting bot, gold nanowires are covered with the new membrane. Due to the effectiveness of the method, Avila claims that "we didn't have a substandard batch of robots". By making the nanowire segments concave at one end, Avila made the nanorobots asymmetrical, so that they would respond to ultrasound more effectively. Under ultrasound, the nanorobots move at a speed of 35 microns per second. This setting does not allow for individual control of the nanobots, since they swarm in groups. A blood sample was found to contain the antibiotic-resistant bacterium Staphylococcus aureus; after just five minutes, there were three times fewer germs in the sample. The gold nanowires will be tested in vivo, among other things, to determine how hazardous they might be, since gold responds well to acoustic fields.

    (5) AI DIAGNOSTICS AND AI EPIDEMIOLOGY (\widetilde{\Xi}_{5}) :

In medicine, robots can do this work most effectively. Through machine learning, scientists can train AIs to perform tasks more effectively than people by giving them thousands of examples. Diagnostic tools of this type have numerous applications, but a few stand out. In an effort to detect patients who are most likely to develop diabetes, heart failure, or stroke, New York University created an artificial intelligence that scans thousands of medical records. Over 8000 diseases and rare genetic disorders can be accurately diagnosed using facial recognition software in the Facial Dysmorphology Novel Analysis (FDNA) system. In the future, robotics, diagnostic image analysis, and precision medicine applications may become more and more dependent on such systems. Computer vision powered by deep learning is one example of the use of medical machine learning (ML) in the clinical setting. Public health relies heavily on epidemiology, and in the context of COVID-19, machine learning applications in epidemiology have certainly received increased attention. In addition to the development of highly linked databases of health records, larger amounts of data with a health impact are becoming available from wider sources, allowing machine learning to make almost magical predictions from large, high-dimensional data sets. In epidemiology, causal thinking plays a major role, and it must be reconciled with ML's more casual attitude toward causality; there can be no overstatement of how critical causal thinking is in epidemiology, and making flimsy claims is an issue worth considering.

    A large number of options must be evaluated according to the criteria to determine which is the best. The following information is needed to make a decision:

Step-1: The DM information is given in Table 1.

    Table 1.  DM information.
    C_{1} C_{2} C_{3} C_{4}
    \widetilde{\Xi}_{1} \big\langle(0.8, 0.55);[0.5, 0.52], \big\langle(0.6, 0.55);[0.5, 0.55], \big\langle(0.7, 0.55);[0.6, 0.65], \big\langle(0.85, 0.6);[0.45, 0.55],
    [0.6, 0.65], [0.52, 0.57]\big\rangle [0.6, 0.65], [0.65, 0.7]\big\rangle [0.5, 0.7], [0.4, 0.5]\big\rangle [0.7, 0.75], [0.55, 0.6]\big\rangle
    \widetilde{\Xi}_{2} \big\langle(0.8, 0.75);[0.3, 0.43], \big\langle(0.8, 0.65);[0.55, 0.56], \big\langle(0.55, 0.5);[0.4, 0.45], \big\langle(0.8, 0.65);[0.5, 0.55],
    [0.19, 0.2], [0.4, 0.8]\big\rangle [0.8, 0.85], [0.3, 0.35]\big\rangle [0.7, 0.75], [0.55, 0.7]\big\rangle [0.35, 0.46], [0.45, 0.65]\big\rangle
    \widetilde{\Xi}_{3} \big\langle(0.85, 0.7);[0.5, 0.55], \big\langle(0.75, 0.7);[0.2, 0.25], \big\langle(0.7, 0.65);[0.6, 0.7], \big\langle(0.7, 0.55);[0.6, 0.65],
    [0.4, 0.75], [0.3, 0.35]\big\rangle [0.9, 0.95], [0.4, 0.45]\big\rangle [0.65, 0.76], [0.3, 0.35]\big\rangle [0.65, 0.7], [0.35, 0.45]\big\rangle
    \widetilde{\Xi}_{4} \big\langle(0.65, 0.6);[0.36, 0.51], \big\langle(0.7, 0.65);[0.49, 0.53], \big\langle(0.8, 0.55);[0.5, 0.55], \big\langle(0.8, 0.6);[0.5, 0.55],
    [0.37, 0.38], [0.6, 0.65]\big\rangle [0.65, 0.73], [0.55, 0.65]\big\rangle [0.55, 0.6], [0.65, 0.75]\big\rangle [0.4, 0.45], [0.5, 0.73]\big\rangle
    \widetilde{\Xi}_{5} \big\langle(0.7, 0.6);[0.45, 0.6], \big\langle(0.6, 0.55);[0.25, 0.35], \big\langle(0.65, 0.6);[0.45, 0.55], \big\langle(0.6, 0.5);[0.5, 0.55],
    [0.5, 0.55], [0.25, 0.6]\big\rangle [0.45, 0.5], [0.75, 0.85]\big\rangle [0.55, 0.65], [0.5, 0.6]\big\rangle [0.65, 0.7], [0.45, 0.5]\big\rangle


    Table 2 shows the normalized decision matrix.

    Table 2.  Normalized decision values.
    C_{1} C_{2} C_{3} C_{4}
    \widetilde{\Xi}_{1} \big\langle(0.9412, 0.5042);[0.5, 0.52], \big\langle(0.75, 0.7202);[0.5, 0.55], \big\langle(0.875, 0.6648);[0.6, 0.65], \big\langle(1, 0.6516);[0.45, 0.55],
    [0.6, 0.65], [0.52, 0.57]\big\rangle [0.6, 0.65], [0.65, 0.7]\big\rangle [0.5, 0.7], [0.4, 0.5]\big\rangle [0.7, 0.75], [0.55, 0.6]\big\rangle
    \widetilde{\Xi}_{2} \big\langle(0.9412, 0.9375);[0.3, 0.43], \big\langle(1, 0.7545);[0.55, 0.56], \big\langle(0.6875, 0.6993);[0.4, 0.45], \big\langle(0.9412, 0.8125);[0.5, 0.55],
    [0.19, 0.2], [0.4, 0.8]\big\rangle [0.8, 0.85], [0.3, 0.35]\big\rangle [0.7, 0.75], [0.55, 0.7]\big\rangle [0.35, 0.46], [0.45, 0.65]\big\rangle
    \widetilde{\Xi}_{3} \big\langle(1, 0.7686);[0.5, 0.55], \big\langle(0.9375, 0.9333);[0.2, 0.25], \big\langle(0.875, 0.9286);[0.6, 0.7], \big\langle(0.8235, 0.6648);[0.6, 0.65],
    [0.4, 0.75], [0.3, 0.35]\big\rangle [0.9, 0.95], [0.4, 0.45]\big\rangle [0.65, 0.76], [0.3, 0.35]\big\rangle [0.65, 0.7], [0.35, 0.45]\big\rangle
    \widetilde{\Xi}_{4} \big\langle(0.7647, 0.7385);[0.36, 0.51], \big\langle(0.875, 0.8622);[0.49, 0.53], \big\langle(1, 0.5817);[0.5, 0.55], \big\langle(0.9412, 0.6923);[0.5, 0.55],
    [0.37, 0.38], [0.6, 0.65]\big\rangle [0.65, 0.73], [0.55, 0.65]\big\rangle [0.55, 0.6], [0.65, 0.75]\big\rangle [0.4, 0.45], [0.5, 0.73]\big\rangle
    \widetilde{\Xi}_{5} \big\langle(0.8235, 0.6857);[0.45, 0.6], \big\langle(0.75, 0.7202);[0.25, 0.35], \big\langle(0.8125, 0.8521);[0.45, 0.55], \big\langle(0.7059, 0.641);[0.5, 0.55],
    [0.5, 0.55], [0.25, 0.6]\big\rangle [0.45, 0.5], [0.75, 0.85]\big\rangle [0.55, 0.65], [0.5, 0.6]\big\rangle [0.65, 0.7], [0.45, 0.5]\big\rangle


Step-2: Obtain the normalized decision matrix (see Table 2).

Table 3 shows the aggregated values obtained for each alternative using the SRNSNIVWG operator.

    Table 3.  SRNSNIVWG operator.
    SRNSNIVWG \, \, operator\, \, (\Delta = 1)
    \big\langle(0.8717, 0.6084);[0.5131, 0.5561], [0.5936, 0.6719], [0.5443, 0.6038]\big\rangle
    \big\langle(0.9001, 0.8166);[0.4011, 0.4814], [0.5730, 0.6311], [0.4089, 0.6664]\big\rangle
    \big\langle(0.9366, 0.8339);[0.4012, 0.4633], [0.7018, 0.8442], [0.3356, 0.3911]\big\rangle
    \big\langle(0.8578, 0.7328);[0.4358, 0.5277], [0.5086, 0.5627], [0.5869, 0.6808]\big\rangle
    \big\langle(0.7864, 0.7219);[0.3812, 0.4973], [0.5139, 0.5759], [0.5037, 0.6917]\big\rangle


Step-3: Aggregate the information for every alternative using the SRNSNIVWG operator (\Delta = 1) .
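For \Delta = 1 , the components of Theorem 5.9 appear to simplify to \widehat{\varrho} = \prod \varrho_{i}^{\varepsilon_{i}} , \widehat{\Theta^{\mathscr{T}}} = \prod (\Theta^{\mathscr{T}}_{i})^{\varepsilon_{i}} , \widehat{\Theta^{\mathscr{I}}} = 1-\prod (1-\Theta^{\mathscr{I}}_{i})^{\varepsilon_{i}} , and \widehat{\Theta^{\mathscr{F}}} = \big(1-\prod (1-\sqrt{\Theta^{\mathscr{F}}_{i}})^{\varepsilon_{i}}\big)^{2} . A sketch under this reading (helper names `geo`, `ind`, `fal` are ours) reproduces the first row of Table 3 from the Table 2 data:

```python
from math import prod, sqrt

w = [0.4, 0.3, 0.2, 0.1]   # attribute weights

def geo(vals):  # weighted geometric mean (rho, tau, truth components)
    return prod(v ** e for v, e in zip(vals, w))

def ind(vals):  # indeterminacy components
    return 1 - prod((1 - v) ** e for v, e in zip(vals, w))

def fal(vals):  # falsity components
    return (1 - prod((1 - sqrt(v)) ** e for v, e in zip(vals, w))) ** 2

# Normalized values of Xi_1 under C_1..C_4 (Table 2)
rho = [0.9412, 0.75, 0.875, 1.0]
tau = [0.5042, 0.7202, 0.6648, 0.6516]
TL = [0.5, 0.5, 0.6, 0.45];   TU = [0.52, 0.55, 0.65, 0.55]
IL = [0.6, 0.6, 0.5, 0.7];    IU = [0.65, 0.65, 0.7, 0.75]
FL = [0.52, 0.65, 0.4, 0.55]; FU = [0.57, 0.7, 0.5, 0.6]

result = (geo(rho), geo(tau), geo(TL), geo(TU), ind(IL), ind(IU), fal(FL), fal(FU))
print([round(v, 4) for v in result])
# agrees with Table 3, row 1 -- (0.8717, 0.6084); [0.5131, 0.5561], [0.5936, 0.6719],
# [0.5443, 0.6038] -- to about three decimal places (inputs here are rounded).
```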

Step-4: Determine the positive and negative ideal values of the alternatives:

Step-5: The ED between each alternative and both ideal values:

    \mathbb{D}^{P}_{1} = 0.3224,\mathbb{D}^{P}_{2} = 0.3392, \mathbb{D}^{P}_{3} = 0.3382, \mathbb{D}^{P}_{4} = 0.3307, \mathbb{D}^{P}_{5} = 0.3303,

    and

    \mathbb{D}^{N}_{1} = 0.0647, \mathbb{D}^{N}_{2} = 0.0821, \mathbb{D}^{N}_{3} = 0.0810, \mathbb{D}^{N}_{4} = 0.0734, \mathbb{D}^{N}_{5} = 0.0732.

    Step-6: Relative closeness is calculated as follows:

    \mathbb{D}^{\star}_{1} = 0.1672, \mathbb{D}^{\star}_{2} = 0.1950, \mathbb{D}^{\star}_{3} = 0.1933, \mathbb{D}^{\star}_{4} = 0.1817, \mathbb{D}^{\star}_{5} = 0.1814.

Step-7: The ranking of the alternatives is

    \widetilde{\Xi}_{2} \geq \widetilde{\Xi}_{3} \geq \widetilde{\Xi}_{4} \geq \widetilde{\Xi}_{5} \geq \widetilde{\Xi}_{1}.

    As a result, Pharmarobotics is the best option.
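The closeness values and the ranking can be checked directly from the distances listed in Step-5:

```python
# Relative closeness D*_i = D^N_i / (D^P_i + D^N_i), from the Step-5 distances.
d_pos = [0.3224, 0.3392, 0.3382, 0.3307, 0.3303]
d_neg = [0.0647, 0.0821, 0.0810, 0.0734, 0.0732]

closeness = [dn / (dp + dn) for dp, dn in zip(d_pos, d_neg)]
ranking = sorted(range(1, 6), key=lambda i: -closeness[i - 1])
print([round(c, 4) for c in closeness])  # matches the Step-6 values up to rounding
print(ranking)  # [2, 3, 4, 5, 1] -> Xi_2 (Pharmarobotics) ranks first
```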

In this subsection, the suggested models are compared with a few existing models to demonstrate their value and advantages. Recently, Palanikumar et al. [17] introduced a MADM approach based on Pythagorean neutrosophic normal interval-valued aggregation operators. Our proposed ED and HD are based on the SRNSNIVWA, SRNSNIVWG, GSRNSNIVWA, and GSRNSNIVWG operators, respectively, and in both cases the Euclidean distance method and the Hamming distance method were used. Tables 4 and 5 compare the existing and proposed methods. The distances can be categorized as follows:

    Table 4.  Comparison table.
    \Delta=1 | SRNSNIVWA | SRNSNIVWG | GSRNSNIVWA | GSRNSNIVWG
    TOPSIS-Euclidean distance (proposed) | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{5} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{1} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{5} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{1}
    TOPSIS-Hamming distance (proposed) | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{5} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{1} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{5} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{1}
    Score (proposed) | \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4} | \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{2} | \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4} | \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{2}

    Table 5.  Comparison table.
    \Delta=1 | SRNSNIVWA | SRNSNIVWG | GSRNSNIVWA | GSRNSNIVWG
    Euclidean distance [17] | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{4} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{5} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{4} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{5}
    Hamming distance [17] | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{1} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{5} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{1} | \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{5}
    Score [17] | \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4} | \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{2} | \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{2}\geq \widetilde{\Xi}_{4} | \widetilde{\Xi}_{1}\geq \widetilde{\Xi}_{4}\geq \widetilde{\Xi}_{5}\geq \widetilde{\Xi}_{3}\geq \widetilde{\Xi}_{2}


    Figure 1 shows the EDs for the proposed approaches and the existing methods based on these four aggregation operators.

    Figure 1.  Euclidean distances.

    Figure 2 shows the HDs for the proposed approaches and the existing methods based on these four aggregation operators.

    Figure 2.  Hamming distances.

    Figure 3 shows the score values for the proposed approaches and the existing methods based on these four aggregation operators.

    Figure 3.  Score values.

    The conditional reliability of MADM changes with the alternative, and this test has its prerequisites. We now change \Delta to 2 in the SRNSNIVWG approach; the relative closeness values and the resulting ranking are obtained as follows. For each alternative, the aggregated data computed with the SRNSNIVWG operator are listed in Table 6:

    Table 6.  SRNSNIVWG operator.
    \big\langle(0.8717, 0.6084);[0.5131, 0.5561], [0.5929, 0.6716], [0.5434, 0.6032]\big\rangle
    \big\langle(0.9001, 0.8166);[0.4011, 0.4814], [0.5508, 0.6096], [0.4076, 0.6628]\big\rangle
    \big\langle(0.9366, 0.8339);[0.4012, 0.4633], [0.6922, 0.8424], [0.3352, 0.3906]\big\rangle
    \big\langle(0.8578, 0.7328);[0.4358, 0.5277], [0.5041, 0.5565], [0.5867, 0.6806]\big\rangle
    \big\langle(0.7864, 0.7219);[0.3812, 0.4973], [0.5129, 0.5747], [0.4969, 0.6900]\big\rangle


    Determine the positive ideal and negative ideal values of the alternatives:

    The ED between each alternative and both ideal values:

    \mathbb{D}^{P}_{1} = 0.3222,\mathbb{D}^{P}_{2} = 0.3350,\mathbb{D}^{P}_{3} = 0.3371,\mathbb{D}^{P}_{4} = 0.3298, \mathbb{D}^{P}_{5} = 0.3294
    \mathbb{D}^{N}_{1} = 0.0645, \mathbb{D}^{N}_{2} = 0.0779, \mathbb{D}^{N}_{3} = 0.0800, \mathbb{D}^{N}_{4} = 0.0725, \mathbb{D}^{N}_{5} = 0.0723.

    The relative closeness values are calculated as follows:

    \mathbb{D}^{\star}_{1} = 0.1668, \mathbb{D}^{\star}_{2} = 0.1887, \mathbb{D}^{\star}_{3} = 0.1918, \mathbb{D}^{\star}_{4} = 0.1802, \mathbb{D}^{\star}_{5} = 0.1800.
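The relative closeness values above are consistent with the standard TOPSIS formula \mathbb{D}^{\star}_{i} = \mathbb{D}^{N}_{i}/(\mathbb{D}^{P}_{i}+\mathbb{D}^{N}_{i}); a minimal Python check (variable names are ours) reproduces them from the listed distances:

```python
# Distances to the positive (D_pos) and negative (D_neg) ideal values,
# copied from the text above.
D_pos = [0.3222, 0.3350, 0.3371, 0.3298, 0.3294]
D_neg = [0.0645, 0.0779, 0.0800, 0.0725, 0.0723]

# Relative closeness: D*_i = D^N_i / (D^P_i + D^N_i)
closeness = [dn / (dp + dn) for dp, dn in zip(D_pos, D_neg)]
print([round(c, 4) for c in closeness])  # [0.1668, 0.1887, 0.1918, 0.1802, 0.18]

# Rank alternatives by decreasing closeness
ranking = sorted(range(1, 6), key=lambda i: closeness[i - 1], reverse=True)
print(ranking)  # [3, 2, 4, 5, 1], i.e. Xi_3 >= Xi_2 >= Xi_4 >= Xi_5 >= Xi_1
```

The computed ordering matches the ranking reported below for \Delta = 2.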

    From the data above, the SRNSNIVWG operator determines the alternative ranking. If \Delta = 2, the new order is \widetilde{\Xi}_{3} \geq \widetilde{\Xi}_{2} \geq \widetilde{\Xi}_{4} \geq \widetilde{\Xi}_{5} \geq \widetilde{\Xi}_{1}, so robotic-assisted biopsy becomes the preferred option instead of pharmarobotics. Similarly, the SRNSNIVWA, GSRNSNIVWA, and GSRNSNIVWG operators with varying \Delta values are used to generate alternative rankings.

    Figure 4 shows the graphical representation of TOPSIS based on Hamming-SRNSNIVWG.

    Figure 4.  TOPSIS based on Hamming-SRNSNIVWG.

    According to the previous study, the applications have numerous advantages. SRNSNIVS is a combination of square root neutrosophic set and interval valued normal neutrosophic set. According to SRNSNIVS, human behavior and natural events follow a normal distribution in real life. By using the SRNSNIVS AOs, we find the most suitable alternative based on a set of options provided by the decision maker. Thus, the proposed MADM technique based on SRNSNIVS AOs provides another approach to finding the most effective alternative in DM. A decision maker can select the outcome according to \Delta and their own preferences. With operators like SRNSNIVWA, SRNSNIVWG, GSRNSNIVWA, and GSRNSNIVWG, different ranking outcomes of each alternative could be produced dynamically.

    We present the ED and HD measures for SRNSNIVSs. These distance measures are advantageous for their mathematical simplicity, and we demonstrate them through numerical examples. We have suggested AO rules for SRNSNIVWA, SRNSNIVWG, GSRNSNIVWA, and GSRNSNIVWG, discussed some features of these operators, and provided examples. By implementing the SRNSNIV MADM, people can choose the best option under uncertain and inconsistent circumstances. The SRNSNIVWA, SRNSNIVWG, GSRNSNIVWA, and GSRNSNIVWG operators have been used to solve MADM problems depending on \Delta; as the study shows, the generalized values of \Delta significantly impact the ranking of alternatives. To make the decision, the DM may select \Delta values on the basis of real-life problems and choose the method for obtaining results accordingly. ED and HD measures have many practical applications in data analysis. This paper's discussion will be useful to future academics interested in this field. Further discussion will address the following topics:

    (1) Soft sets and expert sets are explored in terms of SRNSNIVS.

    (2) Based on SRNSNIVS, we investigate the generalized q-rung interval valued normal neutrosophic set and the cubic q-rung interval valued normal fuzzy set.

    (3) A generalized cubic interval valued normal neutrosophic set and complex interval valued normal neutrosophic set can be used to solve the problem of MADM.

    The authors present their appreciation to King Saud University for funding this research through Researchers Supporting Program number (RSPD2023R704), King Saud University, Riyadh, Saudi Arabia.

    The authors declare no conflict of interest.



    [1] K. A. Ahmad, R. Ezzati, K. M. Afshar, Solving systems of fractional two-dimensional nonlinear partial Volterra integral equations by using Haar wavelets, J. Appl. Anal., 27 (2021), 239–257. https://doi.org/10.1515/JAA-2021-2050 doi: 10.1515/JAA-2021-2050
    [2] A. Karimi, K. Maleknejad, R. Ezzati, Numerical solutions of system of two-dimensional Volterra integral equations via Legendre wavelets and convergence, Appl. Numer. Math., 156 (2020), 228–241. https://doi.org/10.1016/j.apnum.2020.05.003 doi: 10.1016/j.apnum.2020.05.003
    [3] H. Liu, J. Huang, W. Zhang, Y. Ma, Meshfree approach for solving multi-dimensional systems of Fredholm integral equations via barycentric Lagrange interpolation, Appl. Math. Comput., 346 (2019), 295–304. https://doi.org/10.1016/j.amc.2018.10.024 doi: 10.1016/j.amc.2018.10.024
    [4] J. Xie, M. Yi, Numerical research of nonlinear system of fractional Volterra-Fredholm integral-differential equations via Block-Pulse functions and error analysis, J. Comput. Appl. Math., 345 (2019), 159–167. https://doi.org/10.1016/j.cam.2018.06.008 doi: 10.1016/j.cam.2018.06.008
    [5] P. González-Rodelas, M. Pasadas, A. Kouibia, B. Mustafa, Numerical solution of linear Volterra integral equation systems of second kind by radial basis functions, Mathematics, 10 (2022), 223. https://doi.org/10.3390/MATH10020223 doi: 10.3390/MATH10020223
    [6] A. R. Yaghoobnia, R. Ezzati, Using Bernstein multi-scaling polynomials to obtain numerical solution of Volterra integral equations system, Comput. Appl. Math., 39 (2020), 608–616. https://doi.org/10.1007/s40314-020-01198-4 doi: 10.1007/s40314-020-01198-4
    [7] A. Jafarian, S. Measoomy, S. Abbasbandy, Artificial neural networks based modeling for solving Volterra integral equations system, Appl. Soft Comput., 27 (2015), 391–398. https://doi.org/10.1016/j.asoc.2014.10.036 doi: 10.1016/j.asoc.2014.10.036
    [8] J. Cao, C. Xu, A high order schema for the numerical solution of the fractional ordinary differential equations, J. Comput. Phys., 238 (2013), 154–168. https://doi.org/10.1016/j.jcp.2012.12.013 doi: 10.1016/j.jcp.2012.12.013
    [9] R. Katani, S. Shahmorad, Block by block method for the systems of nonlinear Volterra integral equations, Appl. Math. Model., 34 (2010), 400–406. https://doi.org/10.1016/j.apm.2009.04.013 doi: 10.1016/j.apm.2009.04.013
    [10] H. H. Sorkun, S. Yalçinbaş, Approximate solutions of linear Volterra integral equation systems with variable coefficients, Appl. Math. Model., 34 (2010), 3451–3464. https://doi.org/10.1016/j.apm.2010.02.034 doi: 10.1016/j.apm.2010.02.034
    [11] M. I. Berenguer, D. Gámez, A. I. G. Guillem, M. R. Galán, M. C. S. Pérez, Biorthogonal systems for solving Volterra integral equation systems of the second kind, J. Comput. Appl. Math., 235 (2010), 1875–1883. https://doi.org/10.1016/j.cam.2010.07.011 doi: 10.1016/j.cam.2010.07.011
    [12] K. Maleknejad, A. S. Shamloo, Numerical solution of singular Volterra integral equations system of convolution type by using operational matrices, Appl. Math. Comput., 195 (2007), 500–505. https://doi.org/10.1016/j.amc.2007.05.001 doi: 10.1016/j.amc.2007.05.001
    [13] A. Tahmasbi, O. S. Fard, Numerical solution of linear Volterra integral equations system of the second kind, Appl. Math. Comput., 201 (2008), 547–552. https://doi.org/10.1016/j.amc.2007.12.041 doi: 10.1016/j.amc.2007.12.041
    [14] M. Rabbani, K. Maleknejad, N. Aghazadeh, Numerical computational solution of the Volterra integral equations system of the second kind by using an expansion method, Appl. Math. Comput., 187 (2006), 1143–1146. https://doi.org/10.1016/j.amc.2006.09.012 doi: 10.1016/j.amc.2006.09.012
    [15] S. Yalçinbaş, K. Erdem, Approximate solutions of nonlinear Volterra equation systems, Internat. J. Modern Phys. B., 24 (2010), 6235–6258. https://doi.org/10.1142/S0217979210055524 doi: 10.1142/S0217979210055524
    [16] M. A. Zaky, I. G. Ameen, N. A. Elkot, E. H. Doha, A unified spectral collocation method for nonlinear systems of multi-dimensional integral equations with convergence analysis, Appl. Numer. Math., 161 (2021), 27–45. https://doi.org/10.1016/j.apnum.2020.10.028 doi: 10.1016/j.apnum.2020.10.028
    [17] E. Babolian, M. Mordad, A numerical method for solving systems of linear and nonlinear integral equations of the second kind by hat basis functions, Comput. Math. Appl., 62 (2011), 187–198. https://doi.org/10.1016/j.camwa.2011.04.066 doi: 10.1016/j.camwa.2011.04.066
    [18] F. Mirzaee, E. Hadadiyan, Solving system of linear Stratonovich Volterra integral equations via modification of hat functions, Appl. Math. Comput., 293 (2017), 254–264. https://doi.org/10.1016/j.amc.2016.08.016 doi: 10.1016/j.amc.2016.08.016
    [19] E. Babolian, J. Biazar, A. R. Vahidi, On the decomposition method for system of linear equations and system of linear Volterra integral equations, Appl. Math. Comput., 147 (2004), 19–27. https://doi.org/10.1016/S0096-3003(02)00644-6 doi: 10.1016/S0096-3003(02)00644-6
    [20] K. Maleknejad, M. Shahrezaee, Using Runge-Kutta method for numerical solution of the system of Volterra integral equation, Appl. Math. Comput., 149 (2004), 399–410. https://doi.org/10.1016/s0096-3003(03)00148-6 doi: 10.1016/s0096-3003(03)00148-6
    [21] W. Jiang, Z. Chen, Solving a system of linear Volterra integral equations using the new reproducing kernel method, Appl. Math. Comput., 219 (2013), 10225–10230. https://doi.org/10.1016/j.amc.2013.03.123 doi: 10.1016/j.amc.2013.03.123
    [22] J. Biazar, H. Ebrahimi, Chebyshev wavelets approach for nonlinear systems of Volterra integral equations, Comput. Math. Appl., 63 (2012), 608–616. https://doi.org/10.1016/j.camwa.2011.09.059 doi: 10.1016/j.camwa.2011.09.059
    [23] F. Mirzaee, S. Hoseini, A new collocation approach for solving systems of high-order linear Volterra integro-differential equations with variable coefficients, Appl. Math. Comput., 311 (2017), 272–282. https://doi.org/10.1016/j.amc.2017.05.031 doi: 10.1016/j.amc.2017.05.031
    [24] D. Conte, S. Shahmorad, Y. Talaei, New fractional Lanczos vector polynomials and their application to system of Abel-Volterra integral equations and fractional differential equations, J. Comput. Appl. Math., 366 (2020), 112409. https://doi.org/10.1016/j.cam.2019.112409 doi: 10.1016/j.cam.2019.112409
    [25] K. Sadri, K. Hosseini, D. Baleanu, S. Salahshour, A high-accuracy Vieta-Fibonacci collocation scheme to solve linear time-fractional telegraph equations, Waves Random Complex Media, 2022, 2135789. https://doi.org/10.1080/17455030.2022.2135789
    [26] K. Sadri, K. Hosseini, D. Baleanu, S. Salahshour, C. Park, Designing a matrix collocation method for fractional delay integro-differential equations with weakly singular kernels based on Vieta-Fibonacci polynomials, Fractal Fract., 6 (2022). https://doi.org/10.3390/fractalfract6010002
    [27] J. Dixon, S. McKee, Weakly singular discrete Gronwall inequalities, Z. Angew. Math. Mech., 66 (1986), 535–544. https://doi.org/10.1002/zamm.19860661107 doi: 10.1002/zamm.19860661107
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
