
How artificial intelligence reduces human bias in diagnostics?

  • Accurate diagnostics of neurological disorders often rely on behavioral assessments, yet traditional methods rooted in manual observations and scoring are labor-intensive, subjective, and prone to human bias. Artificial Intelligence (AI), particularly Deep Neural Networks (DNNs), offers transformative potential to overcome these limitations by automating behavioral analyses and reducing biases in diagnostic practices. DNNs excel in processing complex, high-dimensional data, allowing for the detection of subtle behavioral patterns critical for diagnosing neurological disorders such as Parkinson's disease, strokes, or spinal cord injuries. This review explores how AI-driven approaches can mitigate observer biases, thereby emphasizing the use of explainable DNNs to enhance objectivity in diagnostics. Explainable AI techniques enable the identification of which features in data are used by DNNs to make decisions. In a data-driven manner, this allows one to uncover novel insights that may elude human experts. For instance, explainable DNN techniques have revealed previously unnoticed diagnostic markers, such as posture changes, which can enhance the sensitivity of behavioral diagnostic assessments. Furthermore, by providing interpretable outputs, explainable DNNs build trust in AI-driven systems and support the development of unbiased, evidence-based diagnostic tools. In addition, this review discusses challenges such as data quality, model interpretability, and ethical considerations. By illustrating the role of AI in reshaping diagnostic methods, this paper highlights its potential to revolutionize clinical practices, thus paving the way for more objective and reliable assessments of neurological disorders.

    Citation: Artur Luczak. How artificial intelligence reduces human bias in diagnostics?[J]. AIMS Bioengineering, 2025, 12(1): 69-89. doi: 10.3934/bioeng.2025004




    In 1922, S. Banach [15] proved his celebrated contraction theorem in the setting of metric spaces. Later, Nadler [28] extended it to set-valued mappings via the Hausdorff metric, obtaining one of the most important generalizations of the contraction theorem: let $(X,d)$ be a complete metric space and let the mapping $T:X\to CB(X)$ satisfy

    $$H(T(x),T(y))\le \gamma\, d(x,y)$$

    for all $x,y\in X$, where $0\le\gamma<1$, $H$ is the Hausdorff metric induced by $d$, and $CB(X)=\{S\subseteq X: S \text{ is a nonempty, closed and bounded subset of } X\}$. Then $T$ has a fixed point in $X$.

    More recently, Matthews [26] initiated the concept of partial metric spaces, a classical extension of metric spaces, and many researchers have since generalized related results in this framework. Asadi et al. [4] introduced the notion of an M-metric space, one of the most interesting generalizations of a partial metric space. Later on, Samet et al. [33] introduced the class of mappings known as $(\alpha,\psi)$-contractive mappings; this notion has been generalized further in metric spaces (see [10,12,14,17,19,25,29,30,32]).

    Throughout this manuscript, we denote the set of all positive integers by $\mathbb{N}$ and the set of real numbers by $\mathbb{R}$. Let us recall some basic concepts of M-metric spaces.

    Definition 1.1. [4] Let $X$ be a nonempty set. A mapping $m:X\times X\to\mathbb{R}^{+}$ is said to be an M-metric if for any $x,y,z\in X$ the following conditions hold:

    (m1) $m(x,x)=m(y,y)=m(x,y)$ if and only if $x=y$;

    (m2) $m_{xy}\le m(x,y)$;

    (m3) $m(x,y)=m(y,x)$;

    (m4) $m(x,y)-m_{xy}\le \bigl(m(x,z)-m_{xz}\bigr)+\bigl(m(z,y)-m_{zy}\bigr)$ for all $x,y,z\in X$.

    The pair $(X,m)$ is then called an M-metric space, where

    $$m_{xy}=\min\{m(x,x),m(y,y)\}$$

    and

    $$M_{xy}=\max\{m(x,x),m(y,y)\}.$$

    Remark 1.2. [4] For any $x,y,z$ in an M-metric space $X$, we have

    (i) $0\le M_{xy}+m_{xy}=m(x,x)+m(y,y)$;

    (ii) $0\le M_{xy}-m_{xy}=|m(x,x)-m(y,y)|$;

    (iii) $M_{xy}-m_{xy}\le \bigl(M_{xz}-m_{xz}\bigr)+\bigl(M_{zy}-m_{zy}\bigr)$.

    Example 1.3. [4] Let $(X,m)$ be an M-metric space. Define $m^{w}, m^{s}:X\times X\to\mathbb{R}^{+}$ by:

    (i) $m^{w}(x,y)=m(x,y)-2m_{xy}+M_{xy}$,

    (ii) $m^{s}(x,y)=\begin{cases} m(x,y)-m_{xy}, & \text{if } x\neq y,\\ 0, & \text{if } x=y.\end{cases}$

    Then $m^{w}$ and $m^{s}$ are ordinary metrics. Note that every metric is a partial metric and every partial metric is an M-metric; however, the converses do not hold in general. Clearly, every M-metric on $X$ generates a $T_{0}$ topology $\tau_{m}$ on $X$ whose base is the family of open M-balls

    $$\{B_{m}(x,\epsilon): x\in X,\ \epsilon>0\},$$

    where

    $$B_{m}(x,\epsilon)=\{y\in X: m(x,y)<m_{xy}+\epsilon\}$$

    for all $x\in X$ and $\epsilon>0$ (see [3,4,23]).

    Definition 1.4. [4] Let $(X,m)$ be an M-metric space. Then:

    (i) A sequence $\{x_{n}\}$ in $(X,m)$ is said to converge to a point $x\in X$ with respect to $\tau_{m}$ if and only if

    $$\lim_{n\to\infty}\bigl(m(x_{n},x)-m_{x_{n}x}\bigr)=0.$$

    (ii) Furthermore, $\{x_{n}\}$ is said to be an M-Cauchy sequence in $(X,m)$ if and only if

    $$\lim_{n,m\to\infty}\bigl(m(x_{n},x_{m})-m_{x_{n}x_{m}}\bigr) \quad\text{and}\quad \lim_{n,m\to\infty}\bigl(M_{x_{n}x_{m}}-m_{x_{n}x_{m}}\bigr)$$

    exist (and are finite).

    (iii) An M-metric space $(X,m)$ is said to be complete if every M-Cauchy sequence $\{x_{n}\}$ in $(X,m)$ converges, with respect to $\tau_{m}$, to a point $x\in X$ such that

    $$\lim_{n\to\infty}\bigl(m(x_{n},x)-m_{x_{n}x}\bigr)=0 \quad\text{and}\quad \lim_{n\to\infty}\bigl(M_{x_{n}x}-m_{x_{n}x}\bigr)=0.$$

    Lemma 1.5. [4] Let $(X,m)$ be an M-metric space. Then:

    (i) $\{x_{n}\}$ is an M-Cauchy sequence in $(X,m)$ if and only if $\{x_{n}\}$ is a Cauchy sequence in the metric space $(X,m^{w})$.

    (ii) An M-metric space $(X,m)$ is complete if and only if the metric space $(X,m^{w})$ is complete. Moreover,

    $$\lim_{n\to\infty}m^{w}(x_{n},x)=0 \iff \Bigl(\lim_{n\to\infty}\bigl(m(x_{n},x)-m_{x_{n}x}\bigr)=0 \ \text{ and } \ \lim_{n\to\infty}\bigl(M_{x_{n}x}-m_{x_{n}x}\bigr)=0\Bigr).$$

    Lemma 1.6. [4] Suppose that $\{x_{n}\}$ converges to $x$ and $\{y_{n}\}$ converges to $y$ as $n\to\infty$ in an M-metric space $(X,m)$. Then

    $$\lim_{n\to\infty}\bigl(m(x_{n},y_{n})-m_{x_{n}y_{n}}\bigr)=m(x,y)-m_{xy}.$$

    Lemma 1.7. [4] Suppose that $\{x_{n}\}$ converges to $x$ as $n\to\infty$ in an M-metric space $(X,m)$. Then

    $$\lim_{n\to\infty}\bigl(m(x_{n},y)-m_{x_{n}y}\bigr)=m(x,y)-m_{xy} \quad\text{for all } y\in X.$$

    Lemma 1.8. [4] Suppose that $\{x_{n}\}$ converges to $x$ and $\{x_{n}\}$ converges to $y$ as $n\to\infty$ in an M-metric space $(X,m)$. Then $m(x,y)=m_{xy}$; moreover, if $m(x,x)=m(y,y)$, then $x=y$.

    Definition 1.9. Let $\alpha:X\times X\to[0,\infty)$. A mapping $T:X\to X$ is said to be an α-admissible mapping if for all $x,y\in X$,

    $$\alpha(x,y)\ge 1 \implies \alpha(T(x),T(y))\ge 1.$$

    Let $\Psi$ be the family of (c)-comparison functions $\psi:\mathbb{R}^{+}\cup\{0\}\to\mathbb{R}^{+}\cup\{0\}$ which satisfy the following properties:

    (i) $\psi$ is nondecreasing;

    (ii) $\sum_{n=0}^{\infty}\psi^{n}(t)<\infty$ for all $t>0$, where $\psi^{n}$ is the $n$th iterate of $\psi$ (see [7,8,10,11]).

    Definition 1.10. [33] Let $(X,d)$ be a metric space and $\alpha:X\times X\to[0,\infty)$. A mapping $T:X\to X$ is called an $(\alpha,\psi)$-contractive mapping if for all $x,y\in X$, we have

    $$\alpha(x,y)\,d(T(x),T(y))\le\psi(d(x,y)),$$

    where $\psi\in\Psi$.

    A subset $K$ of an M-metric space $X$ is called bounded if for all $x\in K$, there exist $y\in X$ and $r>0$ such that $x\in B_{m}(y,r)$. Let $\overline{K}$ denote the closure of $K$. The set $K$ is closed in $X$ if and only if $\overline{K}=K$.

    Definition 1.11. [31] Define $H_{m}:CB_{m}(X)\times CB_{m}(X)\to[0,\infty)$ by

    $$H_{m}(K,L)=\max\{m(K,L),m(L,K)\},$$

    where

    $$m(x,L)=\inf\{m(x,y): y\in L\} \quad\text{and}\quad m(K,L)=\sup\{m(x,L): x\in K\}.$$
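    For finite sets these quantities are directly computable. The following minimal sketch (our own helper names; it assumes the illustrative M-metric $m(x,y)=\frac{x+y}{2}$ used in the examples of Section 2) implements $m(x,L)$, $m(K,L)$ and $H_{m}$:

```python
# Sketch of the Pompeiu-Hausdorff-type functional H_m of Definition 1.11
# for finite sets, under the illustrative M-metric m(x, y) = (x + y)/2.

def m(x, y):
    return (x + y) / 2

def m_point_set(x, L):
    # m(x, L) = inf{ m(x, y) : y in L }
    return min(m(x, y) for y in L)

def m_set_set(K, L):
    # m(K, L) = sup{ m(x, L) : x in K }
    return max(m_point_set(x, L) for x in K)

def H_m(K, L):
    # H_m(K, L) = max{ m(K, L), m(L, K) }
    return max(m_set_set(K, L), m_set_set(L, K))

# For K = {1/4} and L = {1/8, 1/12} (sets that appear in Example 2.11):
assert abs(H_m([1/4], [1/8, 1/12]) - 3/16) < 1e-12
```

    Observe that $H_{m}(K,K)$ need not vanish, since the underlying self-distances $m(x,x)$ are nonzero; this is the feature that distinguishes $H_{m}$ from the classical Hausdorff metric.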

    Lemma 1.12. [31] Let $F$ be any nonempty set in an M-metric space $(X,m)$. Then

    $$x\in\overline{F} \iff m(x,F)=\sup_{a\in F}\, m_{xa}.$$

    Proposition 1.13. [31] Let $A,B,C\in CB_{m}(X)$. Then:

    (i) $m(A,A)=\sup_{x\in A}\sup_{y\in A} m_{xy}$;

    (ii) $\Bigl(m(A,B)-\sup_{x\in A}\sup_{y\in B}m_{xy}\Bigr)\le\Bigl(m(A,C)-\inf_{x\in A}\inf_{z\in C}m_{xz}\Bigr)+\Bigl(m(C,B)-\inf_{z\in C}\inf_{y\in B}m_{zy}\Bigr)$.

    Proposition 1.14. [31] Let $A,B,C\in CB_{m}(X)$. Then the following hold:

    (i) $H_{m}(A,A)=m(A,A)=\sup_{x\in A}\sup_{y\in A}m_{xy}$;

    (ii) $H_{m}(A,B)=H_{m}(B,A)$;

    (iii) $H_{m}(A,B)-\sup_{x\in A}\sup_{y\in B}m_{xy}\le H_{m}(A,C)+H_{m}(C,B)-\inf_{x\in A}\inf_{z\in C}m_{xz}-\inf_{z\in C}\inf_{y\in B}m_{zy}$.

    Lemma 1.15. [31] Let $A,B\in CB_{m}(X)$ and $h>1$. Then for each $x\in A$, there exists at least one $y\in B$ such that

    $$m(x,y)\le h\,H_{m}(A,B).$$

    Lemma 1.16. [31] Let $A,B\in CB_{m}(X)$ and $l>0$. Then for each $x\in A$, there exists at least one $y\in B$ such that

    $$m(x,y)\le H_{m}(A,B)+l.$$

    Theorem 1.17. [31] Let $(X,m)$ be a complete M-metric space and $T:X\to CB_{m}(X)$. Assume that there exists $h\in(0,1)$ such that

    $$H_{m}(T(x),T(y))\le h\,m(x,y) \tag{1.1}$$

    for all $x,y\in X$. Then $T$ has a fixed point.

    Proposition 1.18. [31] Let $T:X\to CB_{m}(X)$ be a set-valued mapping satisfying (1.1) for all $x,y$ in an M-metric space $X$. If $z\in T(z)$ for some $z\in X$, then $m(x,x)=0$ for each $x\in T(z)$.

    We start with the following definition:

    Definition 2.1. Let $\Psi$ denote the family of nondecreasing functions $\phi_{M}:\mathbb{R}^{+}\to\mathbb{R}^{+}$ such that

    (i) $\sum_{n=1}^{+\infty}\phi_{M}^{n}(x)<\infty$ for every $x>0$, where $\phi_{M}^{n}$ is the $n$th iterate of $\phi_{M}$;

    (ii) $\phi_{M}(x+y)\le\phi_{M}(x)+\phi_{M}(y)$ for all $x,y\in\mathbb{R}^{+}$;

    (iii) $\phi_{M}(x)<x$ for each $x>0$.

    Remark 2.2. If $\sum_{n=0}^{\infty}\alpha_{n}$ is a convergent series with positive terms, then there exists a monotonic sequence $(\beta_{n})$ with $\beta_{n}\to\infty$ as $n\to\infty$ such that $\sum_{n=0}^{\infty}\alpha_{n}\beta_{n}$ converges.
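    A candidate such as $\phi_{M}(t)=\frac{t}{2}$ makes the three defining properties easy to check numerically; the small sketch below (our own naming, for illustration only) does so on a few sample values:

```python
# Numerical check that phi_M(t) = t/2 satisfies the three properties
# of the family Psi from Definition 2.1.
phi = lambda t: t / 2

def phi_iter(t, n):
    # n-th iterate of phi
    for _ in range(n):
        t = phi(t)
    return t

for t in [0.1, 1.0, 7.3]:
    # (i) sum_{n>=1} phi^n(t) converges: geometric series with ratio 1/2, sum = t
    partial = sum(phi_iter(t, n) for n in range(1, 60))
    assert partial < 2 * t
    # (iii) phi(t) < t for t > 0
    assert phi(t) < t

for x, y in [(0.2, 0.3), (1.5, 2.5)]:
    # (ii) subadditivity (equality holds for a linear phi)
    assert phi(x + y) <= phi(x) + phi(y)
```

    Any linear map $t\mapsto ct$ with $0<c<1$ passes the same checks, since its iterates form a geometric series.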

    Definition 2.3. Let $(X,m)$ be an M-metric space. A self-mapping $T:X\to X$ is called an $(\alpha,\phi_{M})$-contraction if there exist two functions $\alpha:X\times X\to[0,\infty)$ and $\phi_{M}\in\Psi$ such that

    $$\alpha(x,y)\,m(T(x),T(y))\le\phi_{M}(m(x,y))$$

    for all $x,y\in X$.

    Definition 2.4. Let $(X,m)$ be an M-metric space. A set-valued mapping $T:X\to CB_{m}(X)$ is said to be an $(\alpha,\phi_{M})$-contraction if for all $x,y\in X$, we have

    $$\alpha(x,y)\,H_{m}(T(x),T(y))\le\phi_{M}(m(x,y)), \tag{2.1}$$

    where $\phi_{M}\in\Psi$ and $\alpha:X\times X\to[0,\infty)$.

    A mapping $T$ is called α-admissible if

    $$\alpha(x,y)\ge 1 \implies \alpha(a_{1},b_{1})\ge 1$$

    for each $a_{1}\in T(x)$ and $b_{1}\in T(y)$.

    Theorem 2.5. Let $(X,m)$ be a complete M-metric space, and let $T:X\to CB_{m}(X)$ be an α-admissible $(\alpha,\phi_{M})$-contraction satisfying the following conditions:

    (i) there exists $x_{0}\in X$ such that $\alpha(x_{0},a_{1})\ge 1$ for each $a_{1}\in T(x_{0})$;

    (ii) if $\{x_{n}\}\subseteq X$ is a sequence such that $\alpha(x_{n},x_{n+1})\ge 1$ for all $n$ and $\{x_{n}\}\to x\in X$ as $n\to\infty$, then $\alpha(x_{n},x)\ge 1$ for all $n\in\mathbb{N}$.

    Then $T$ has a fixed point.

    Proof. Let $x_{1}\in T(x_{0})$; then by hypothesis (i), $\alpha(x_{0},x_{1})\ge 1$. By Lemma 1.16, there exists $x_{2}\in T(x_{1})$ such that

    $$m(x_{1},x_{2})\le H_{m}(T(x_{0}),T(x_{1}))+\phi_{M}(m(x_{0},x_{1})).$$

    Similarly, there exists $x_{3}\in T(x_{2})$ such that

    $$m(x_{2},x_{3})\le H_{m}(T(x_{1}),T(x_{2}))+\phi_{M}^{2}(m(x_{0},x_{1})).$$

    Following similar arguments, we obtain a sequence $\{x_{n}\}\subseteq X$ with $x_{n+1}\in T(x_{n})$ satisfying

    $$m(x_{n},x_{n+1})\le H_{m}(T(x_{n-1}),T(x_{n}))+\phi_{M}^{n}(m(x_{0},x_{1})).$$

    Since $T$ is α-admissible, $\alpha(x_{0},x_{1})\ge 1$ implies $\alpha(x_{1},x_{2})\ge 1$. Using mathematical induction, we get

    $$\alpha(x_{n},x_{n+1})\ge 1. \tag{2.2}$$

    By (2.1) and (2.2), together with the monotonicity and subadditivity of $\phi_{M}$, we have

    $$\begin{aligned}
    m(x_{n},x_{n+1}) &\le H_{m}(T(x_{n-1}),T(x_{n}))+\phi_{M}^{n}(m(x_{0},x_{1}))\\
    &\le \alpha(x_{n-1},x_{n})\,H_{m}(T(x_{n-1}),T(x_{n}))+\phi_{M}^{n}(m(x_{0},x_{1}))\\
    &\le \phi_{M}(m(x_{n-1},x_{n}))+\phi_{M}^{n}(m(x_{0},x_{1}))\\
    &\le \phi_{M}\bigl(H_{m}(T(x_{n-2}),T(x_{n-1}))+\phi_{M}^{n-1}(m(x_{0},x_{1}))\bigr)+\phi_{M}^{n}(m(x_{0},x_{1}))\\
    &\le \phi_{M}\bigl(\phi_{M}(m(x_{n-2},x_{n-1}))+\phi_{M}^{n-1}(m(x_{0},x_{1}))\bigr)+\phi_{M}^{n}(m(x_{0},x_{1}))\\
    &\le \phi_{M}^{2}(m(x_{n-2},x_{n-1}))+2\,\phi_{M}^{n}(m(x_{0},x_{1}))\\
    &\;\;\vdots\\
    m(x_{n},x_{n+1}) &\le \phi_{M}^{n}(m(x_{0},x_{1}))+n\,\phi_{M}^{n}(m(x_{0},x_{1}))=(n+1)\,\phi_{M}^{n}(m(x_{0},x_{1})).
    \end{aligned}$$

    Let $\epsilon>0$. Then there exists $n_{0}\in\mathbb{N}$ such that

    $$\sum_{n\ge n_{0}}(n+1)\,\phi_{M}^{n}(m(x_{0},x_{1}))<\epsilon.$$

    By Remark 2.2 and (2.2), we get

    $$\lim_{n\to\infty}m(x_{n},x_{n+1})=0.$$

    Using the above limit and (m2), we deduce that

    $$\lim_{n\to\infty}\min\{m(x_{n},x_{n}),m(x_{n+1},x_{n+1})\}=\lim_{n\to\infty}m_{x_{n}x_{n+1}}\le\lim_{n\to\infty}m(x_{n},x_{n+1})=0.$$

    Passing to the limit, we have $\lim_{n\to\infty}m(x_{n},x_{n})=0$ and consequently

    $$\lim_{n,m\to\infty}m_{x_{n}x_{m}}=0.$$

    Now we prove that $\{x_{n}\}$ is M-Cauchy in $X$. For $m,n\in\mathbb{N}$ with $m>n\ge n_{0}$, the triangle-type inequality (m4) gives

    $$\begin{aligned}
    m(x_{n},x_{m})-m_{x_{n}x_{m}} &\le \bigl(m(x_{n},x_{n+1})-m_{x_{n}x_{n+1}}\bigr)+\bigl(m(x_{n+1},x_{m})-m_{x_{n+1}x_{m}}\bigr)\\
    &\le \bigl(m(x_{n},x_{n+1})-m_{x_{n}x_{n+1}}\bigr)+\bigl(m(x_{n+1},x_{n+2})-m_{x_{n+1}x_{n+2}}\bigr)+\cdots+\bigl(m(x_{m-1},x_{m})-m_{x_{m-1}x_{m}}\bigr)\\
    &\le \sum_{r=n}^{m-1}m(x_{r},x_{r+1})\le\sum_{r=n}^{m-1}(r+1)\,\phi_{M}^{r}(m(x_{0},x_{1}))\le\sum_{r\ge n_{0}}(r+1)\,\phi_{M}^{r}(m(x_{0},x_{1}))<\epsilon.
    \end{aligned}$$

    Hence $m(x_{n},x_{m})-m_{x_{n}x_{m}}\to 0$ as $n,m\to\infty$, and similarly $\lim_{m,n\to\infty}\bigl(M_{x_{n}x_{m}}-m_{x_{n}x_{m}}\bigr)=0$. Thus $\{x_{n}\}$ is an M-Cauchy sequence in $X$. Since $(X,m)$ is complete, there exists $x\in X$ such that

    $$\lim_{n\to\infty}\bigl(m(x_{n},x)-m_{x_{n}x}\bigr)=0 \quad\text{and}\quad \lim_{n\to\infty}\bigl(M_{x_{n}x}-m_{x_{n}x}\bigr)=0.$$

    Also, $\lim_{n\to\infty}m(x_{n},x_{n})=0$ gives

    $$\lim_{n\to\infty}m(x_{n},x)=0 \quad\text{and}\quad \lim_{n\to\infty}M_{x_{n}x}=0, \tag{2.3}$$

    so that $\lim_{n\to\infty}\max\{m(x_{n},x_{n}),m(x,x)\}=0$, which implies $m(x,x)=0$ and hence $m_{xy}=0$ for every $y\in T(x)$. By hypothesis (ii) and (2.3),

    $$\lim_{n\to\infty}\alpha(x_{n},x)\ge 1.$$

    Thus, using (2.1),

    $$\lim_{n\to\infty}H_{m}(T(x_{n}),T(x))\le\lim_{n\to\infty}\phi_{M}(m(x_{n},x))\le\lim_{n\to\infty}m(x_{n},x)=0,$$

    so

    $$\lim_{n\to\infty}H_{m}(T(x_{n}),T(x))=0. \tag{2.4}$$

    Now, since $x_{n+1}\in T(x_{n})$, we have

    $$m(x_{n+1},T(x))\le H_{m}(T(x_{n}),T(x)).$$

    Taking the limit as $n\to\infty$ and using (2.4), we obtain

    $$\lim_{n\to\infty}m(x_{n+1},T(x))=0. \tag{2.5}$$

    Since $m_{x_{n+1}T(x)}\le m(x_{n+1},T(x))$, this gives

    $$\lim_{n\to\infty}m_{x_{n+1}T(x)}=0. \tag{2.6}$$

    Using condition (m4), we obtain

    $$m(x,T(x))-\sup_{y\in T(x)}m_{xy}\le m(x,T(x))-m_{xT(x)}\le\bigl(m(x,x_{n+1})-m_{xx_{n+1}}\bigr)+\bigl(m(x_{n+1},T(x))-m_{x_{n+1}T(x)}\bigr).$$

    Applying the limit as $n\to\infty$ and using (2.3) and (2.6), we have

    $$m(x,T(x))\le\sup_{y\in T(x)}m_{xy}. \tag{2.7}$$

    From (m2), $m_{xy}\le m(x,y)$ for each $y\in T(x)$, which implies that

    $$m_{xy}-m(x,y)\le 0.$$

    Hence,

    $$\sup\{m_{xy}-m(x,y): y\in T(x)\}\le 0,$$

    and therefore

    $$\sup_{y\in T(x)}m_{xy}-\inf_{y\in T(x)}m(x,y)\le 0.$$

    Thus

    $$\sup_{y\in T(x)}m_{xy}\le m(x,T(x)). \tag{2.8}$$

    Now, from (2.7) and (2.8), we obtain

    $$m(x,T(x))=\sup_{y\in T(x)}m_{xy}.$$

    Consequently, owing to Lemma 1.12, we have $x\in\overline{T(x)}=T(x)$.

    Corollary 2.6. Let $(X,m)$ be a complete M-metric space and $T:X\to X$ an α-admissible $(\alpha,\phi_{M})$-contraction self-mapping. Assume that the following properties hold:

    (i) there exists $x_{0}\in X$ such that $\alpha(x_{0},T(x_{0}))\ge 1$;

    (ii) either $T$ is continuous, or for any sequence $\{x_{n}\}\subseteq X$ with $\alpha(x_{n},x_{n+1})\ge 1$ for all $n\in\mathbb{N}$ and $\{x_{n}\}\to x$ as $n\to\infty$, we have $\alpha(x_{n},x)\ge 1$ for all $n\in\mathbb{N}$.

    Then $T$ has a fixed point.

    We now turn to some fixed point results in ordered M-metric spaces.

    Definition 2.7. Let $(X,\preceq)$ be a partially ordered set. A sequence $\{x_{n}\}\subseteq X$ is said to be nondecreasing if $x_{n}\preceq x_{n+1}$ for all $n$.

    Definition 2.8. [16] Let $F$ and $G$ be two nonempty subsets of a partially ordered set $(X,\preceq)$. The relation between $F$ and $G$ is defined as follows: $F\preceq_{1}G$ if for every $x\in F$, there exists $y\in G$ such that $x\preceq y$.

    Definition 2.9. Let $(X,m,\preceq)$ be a partially ordered M-metric space. A set-valued mapping $T:X\to CB_{m}(X)$ is said to be an ordered $(\alpha,\phi_{M})$-contraction if for all $x,y\in X$ with $x\preceq y$ we have

    $$H_{m}(T(x),T(y))\le\phi_{M}(m(x,y)),$$

    where $\phi_{M}\in\Psi$, and $\alpha:X\times X\to[0,\infty)$ is defined by

    $$\alpha(x,y)=\begin{cases}1, & \text{if } T(x)\preceq_{1}T(y),\\ 0, & \text{otherwise}.\end{cases}$$

    A mapping $T$ is called α-admissible if

    $$\alpha(x,y)\ge 1 \implies \alpha(a_{1},b_{1})\ge 1$$

    for each $a_{1}\in T(x)$ and $b_{1}\in T(y)$.

    Theorem 2.10. Let $(X,m,\preceq)$ be a partially ordered complete M-metric space and $T:X\to CB_{m}(X)$ an α-admissible ordered $(\alpha,\phi_{M})$-contraction satisfying the following conditions:

    (i) there exists $x_{0}\in X$ such that $\{x_{0}\}\preceq_{1}T(x_{0})$ and $\alpha(x_{0},a_{1})\ge 1$ for each $a_{1}\in T(x_{0})$;

    (ii) for every $x,y\in X$, $x\preceq y$ implies $T(x)\preceq_{1}T(y)$;

    (iii) if $\{x_{n}\}\subseteq X$ is a nondecreasing sequence with $x_{n}\preceq x_{n+1}$ for all $n$ and $\{x_{n}\}\to x\in X$ as $n\to\infty$, then $x_{n}\preceq x$ for all $n\in\mathbb{N}$.

    Then $T$ has a fixed point.

    Proof. By assumption (i), there exists $x_{1}\in T(x_{0})$ such that $x_{0}\preceq x_{1}$ and $\alpha(x_{0},x_{1})\ge 1$. By hypothesis (ii), $T(x_{0})\preceq_{1}T(x_{1})$, so there exists $x_{2}\in T(x_{1})$ such that $x_{1}\preceq x_{2}$ and

    $$m(x_{1},x_{2})\le H_{m}(T(x_{0}),T(x_{1}))+\phi_{M}(m(x_{0},x_{1})).$$

    In the same way, there exists $x_{3}\in T(x_{2})$ such that $x_{2}\preceq x_{3}$ and

    $$m(x_{2},x_{3})\le H_{m}(T(x_{1}),T(x_{2}))+\phi_{M}^{2}(m(x_{0},x_{1})).$$

    Following similar arguments, we obtain a sequence $\{x_{n}\}\subseteq X$ with $x_{n+1}\in T(x_{n})$ for all $n\ge 0$ satisfying $x_{0}\preceq x_{1}\preceq x_{2}\preceq\cdots\preceq x_{n}\preceq x_{n+1}\preceq\cdots$. The rest of the proof follows the arguments given in Theorem 2.5.

    Example 2.11. Let $X=[\frac{1}{6},1]$ be endowed with the M-metric $m(x,y)=\frac{x+y}{2}$. Define $T:X\to CB_{m}(X)$ by

    $$T(x)=\begin{cases}\bigl\{\frac{x}{2}+\frac{1}{6},\frac{1}{4}\bigr\}, & \text{if } x=\frac{1}{6},\\[2pt] \bigl\{\frac{x}{2},\frac{x}{3}\bigr\}, & \text{if } \frac{1}{4}\le x\le\frac{1}{3},\\[2pt] \bigl\{\frac{2}{3},\frac{5}{6}\bigr\}, & \text{if } \frac{1}{2}\le x\le 1,\end{cases}$$

    and define a mapping $\alpha:X\times X\to[0,\infty)$ by

    $$\alpha(x,y)=\begin{cases}1, & \text{if } x,y\in[\frac{1}{4},\frac{1}{3}],\\ 0, & \text{otherwise}.\end{cases}$$

    Let $\phi_{M}:\mathbb{R}^{+}\to\mathbb{R}^{+}$ be given by $\phi_{M}(t)=\frac{17t}{10}$. For $x,y\in X$: if $x=\frac{1}{6}$ and $y=\frac{1}{4}$, then $m(x,y)=\frac{5}{24}$ and

    $$H_{m}(T(x),T(y))=H_{m}\bigl(\{\tfrac{3}{12},\tfrac{1}{4}\},\{\tfrac{1}{8},\tfrac{1}{12}\}\bigr)=\max\bigl\{\tfrac{3}{16},\tfrac{2}{12}\bigr\}=\tfrac{3}{16}\le\phi_{M}(m(x,y)).$$

    If $x=\frac{1}{3}$ and $y=\frac{1}{2}$, then $m(x,y)=\frac{5}{12}$ and

    $$H_{m}(T(x),T(y))=H_{m}\bigl(\{\tfrac{1}{6},\tfrac{1}{9}\},\{\tfrac{2}{3},\tfrac{5}{6}\}\bigr)=\max\bigl\{\tfrac{17}{36},\tfrac{7}{18}\bigr\}=\tfrac{17}{36}\le\phi_{M}(m(x,y)).$$

    If $x=\frac{1}{6}$ and $y=1$, then $m(x,y)=\frac{7}{12}$ and

    $$H_{m}(T(x),T(y))=H_{m}\bigl(\{\tfrac{3}{12},\tfrac{1}{4}\},\{\tfrac{2}{3},\tfrac{5}{6}\}\bigr)=\max\bigl\{\tfrac{11}{24},\tfrac{13}{24}\bigr\}=\tfrac{13}{24}\le\phi_{M}(m(x,y)).$$

    In all cases, $T$ is an $(\alpha,\phi_{M})$-contraction mapping. If $x_{0}=\frac{1}{3}$, then $T(x_{0})=\{\frac{x_{0}}{2},\frac{x_{0}}{3}\}$; therefore $\alpha(x_{0},a_{1})\ge 1$ for every $a_{1}\in T(x_{0})$. Let $x,y\in X$ be such that $\alpha(x,y)\ge 1$; then $x,y\in[\frac{1}{4},\frac{1}{3}]$, and $T(x)=\{\frac{x}{2},\frac{x}{3}\}$ and $T(y)=\{\frac{y}{2},\frac{y}{3}\}$, which implies $\alpha(a_{1},b_{1})\ge 1$ for every $a_{1}\in T(x)$ and $b_{1}\in T(y)$. Hence $T$ is α-admissible.

    Let $\{x_{n}\}\subseteq X$ be a sequence such that $\alpha(x_{n},x_{n+1})\ge 1$ for all $n\in\mathbb{N}$ and $x_{n}\to x$ as $n\to\infty$; then $x_{n}\in[\frac{1}{4},\frac{1}{3}]$ for all $n$, hence $x\in[\frac{1}{4},\frac{1}{3}]$ and $\alpha(x_{n},x)\ge 1$. Thus all the conditions of Theorem 2.5 are satisfied, and $T$ has a fixed point.
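    The three $H_{m}$ values displayed in Example 2.11 can be recomputed mechanically. The sketch below (our own helper names, for illustration) confirms the fractions $\frac{3}{16}$, $\frac{17}{36}$ and $\frac{13}{24}$:

```python
# Recomputing the H_m values of Example 2.11 for m(x, y) = (x + y)/2.

def m(x, y):
    return (x + y) / 2

def m_set(K, L):
    # m(K, L) = sup_{x in K} inf_{y in L} m(x, y)
    return max(min(m(x, y) for y in L) for x in K)

def H_m(K, L):
    return max(m_set(K, L), m_set(L, K))

T = {
    1/6: [1/6 / 2 + 1/6, 1/4],   # T(1/6) = {x/2 + 1/6, 1/4}
    1/4: [1/8, 1/12],            # T(x) = {x/2, x/3} on [1/4, 1/3]
    1/3: [1/6, 1/9],
    1/2: [2/3, 5/6],             # T(x) = {2/3, 5/6} on [1/2, 1]
    1:   [2/3, 5/6],
}

assert abs(H_m(T[1/6], T[1/4]) - 3/16) < 1e-12   # first case
assert abs(H_m(T[1/3], T[1/2]) - 17/36) < 1e-12  # second case
assert abs(H_m(T[1/6], T[1]) - 13/24) < 1e-12    # third case
```

    Note that in the second and third cases $\alpha(x,y)=0$, so the contraction condition (2.1) is trivially satisfied there regardless of the value of $H_{m}$.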

    Example 2.12. Let $X=\{(0,0),(0,\frac{1}{5}),(\frac{1}{8},0)\}$ be the subset of $\mathbb{R}^{2}$ with the order defined as follows: for $(x_{1},y_{1}),(x_{2},y_{2})\in X$, $(x_{1},y_{1})\preceq(x_{2},y_{2})$ if and only if $x_{1}\le x_{2}$ and $y_{1}\le y_{2}$. Let $m:X\times X\to\mathbb{R}^{+}$ be defined by

    $$m\bigl((x_{1},y_{1}),(x_{2},y_{2})\bigr)=\left|\frac{x_{1}+x_{2}}{2}\right|+\left|\frac{y_{1}+y_{2}}{2}\right| \quad\text{for } x=(x_{1},y_{1}),\ y=(x_{2},y_{2})\in X.$$

    Then $(X,m)$ is a complete M-metric space. Let $T:X\to CB_{m}(X)$ be defined by

    $$T(x)=\begin{cases}\{(0,0)\}, & \text{if } x=(0,0),\\ \{(0,0),(\frac{1}{8},0)\}, & \text{if } x=(0,\frac{1}{5}),\\ \{(0,0)\}, & \text{if } x=(\frac{1}{8},0).\end{cases}$$

    Define a mapping $\alpha:X\times X\to[0,\infty)$ by

    $$\alpha(x,y)=\begin{cases}1, & \text{if } x,y\in X,\\ 0, & \text{otherwise},\end{cases}$$

    and let $\phi_{M}:\mathbb{R}^{+}\to\mathbb{R}^{+}$ be given by $\phi_{M}(t)=\frac{t}{2}$; obviously, $\phi_{M}\in\Psi$. For $x,y\in X$:

    If $x=(0,\frac{1}{5})$ and $y=(0,0)$, then $H_{m}(T(x),T(y))=0$ and $m(x,y)=\frac{1}{10}$, which gives

    $$H_{m}(T(x),T(y))=H_{m}\bigl(\{(0,0),(\tfrac{1}{8},0)\},\{(0,0)\}\bigr)=\max\{0,0\}=0\le\phi_{M}(m(x,y)).$$

    If $x=(\frac{1}{8},0)$ and $y=(0,0)$, then $H_{m}(T(x),T(y))=0$ and $m(x,y)=\frac{1}{16}$, so

    $$H_{m}(T(x),T(y))\le\phi_{M}(m(x,y)).$$

    If $x=(0,0)$ and $y=(0,0)$, then $H_{m}(T(x),T(y))=0$ and $m(x,y)=0$, so

    $$H_{m}(T(x),T(y))\le\phi_{M}(m(x,y)).$$

    If $x=(0,\frac{1}{5})$ and $y=(0,\frac{1}{5})$, then $H_{m}(T(x),T(y))=0$ and $m(x,y)=\frac{1}{5}$, so

    $$H_{m}(T(x),T(y))\le\phi_{M}(m(x,y)).$$

    If $x=(\frac{1}{8},0)$ and $y=(\frac{1}{8},0)$, then $H_{m}(T(x),T(y))=0$ and $m(x,y)=\frac{1}{8}$, so

    $$H_{m}(T(x),T(y))\le\phi_{M}(m(x,y)).$$

    Thus all the conditions of Theorem 2.10 are satisfied; moreover, $(0,0)$ is a fixed point of $T$.

    In this section, we present an application of our results in homotopy theory, using the fixed point theorem proved for set-valued $(\alpha,\phi_{M})$-contraction mappings in the previous section. For further study in this direction, we refer to [6,35].

    Theorem 3.1. Suppose that $(X,m)$ is a complete M-metric space, and let $A$ and $B$ be open and closed subsets of $X$, respectively, such that $A\subseteq B$. For $a,b\in\mathbb{R}$, let $T:B\times[a,b]\to CB_{m}(X)$ be a set-valued mapping satisfying the following conditions:

    (i) $y\notin T(y,t)$ for each $y\in B\setminus A$ and $t\in[a,b]$;

    (ii) there exist $\phi_{M}\in\Psi$ and $\alpha:X\times X\to[0,\infty)$ such that

    $$\alpha(x,y)\,H_{m}(T(x,t),T(y,t))\le\phi_{M}(m(x,y))$$

    for each pair $(x,y)\in B\times B$ and $t\in[a,b]$;

    (iii) there exists a continuous function $\Omega:[a,b]\to\mathbb{R}$ such that for each $s,t\in[a,b]$ and $x\in B$,

    $$H_{m}(T(x,s),T(x,t))\le\phi_{M}\bigl(|\Omega(s)-\Omega(t)|\bigr);$$

    (iv) if $x\in T(x,t)$, then $T(x,t)=\{x\}$;

    (v) there exists $x_{0}\in X$ such that $x_{0}\in T(x_{0},t)$;

    (vi) the function $\theta:[0,\infty)\to[0,\infty)$ defined by $\theta(x)=x-\phi_{M}(x)$ is strictly increasing and continuous.

    If $T(\cdot,t)$ has a fixed point in $B$ for some $t\in[a,b]$, then $T(\cdot,t)$ has a fixed point in $A$ for all $t\in[a,b]$. Moreover, for fixed $t\in[a,b]$ the fixed point is unique provided that $\phi_{M}(t)=\frac{1}{2}t$ for $t>0$.

    Proof. Define a mapping $\alpha:X\times X\to[0,\infty)$ by

    $$\alpha(x,y)=\begin{cases}1, & \text{if } x\in T(x,t) \text{ and } y\in T(y,t),\\ 0, & \text{otherwise}.\end{cases}$$

    We show that $T$ is α-admissible. Note that $\alpha(x,y)\ge 1$ implies $x\in T(x,t)$ and $y\in T(y,t)$ for all $t\in[a,b]$. By hypothesis (iv), $T(x,t)=\{x\}$ and $T(y,t)=\{y\}$; it follows that $T$ is α-admissible. By hypothesis (v), there exists $x_{0}\in X$ such that $x_{0}\in T(x_{0},t)$ for all $t$, that is, $\alpha(x_{0},x_{0})\ge 1$. Suppose that $\alpha(x_{n},x_{n+1})\ge 1$ for all $n$, that $x_{n}\to q$ as $n\to\infty$, and that $x_{n}\in T(x_{n},t)$ and $x_{n+1}\in T(x_{n+1},t)$ for all $n$ and $t\in[a,b]$; this implies $q\in T(q,t)$ and thus $\alpha(x_{n},q)\ge 1$. Set

    $$D=\{t\in[a,b]: x\in T(x,t) \text{ for some } x\in A\}.$$

    Since $T(\cdot,t)$ has a fixed point in $B$ for some $t\in[a,b]$, there exists $x\in B$ such that $x\in T(x,t)$. By hypothesis (i), $x\in A$, so $D\neq\emptyset$. We now prove that $D$ is both open and closed in $[a,b]$. Let $t_{0}\in D$ and $x_{0}\in A$ with $x_{0}\in T(x_{0},t_{0})$. Since $A$ is an open subset of $X$, there exists $r>0$ such that $\overline{B_{m}(x_{0},r)}\subseteq A$. For

    $$\epsilon=r+m_{xx_{0}}-\phi_{M}(r+m_{xx_{0}})$$

    and the continuous function $\Omega$ on $[a,b]$, there exists $\delta>0$ such that

    $$\phi_{M}\bigl(|\Omega(t)-\Omega(t_{0})|\bigr)<\epsilon \quad\text{for all } t\in(t_{0}-\delta,t_{0}+\delta).$$

    If $t\in(t_{0}-\delta,t_{0}+\delta)$, $x\in B_{m}(x_{0},r)=\{x\in X: m(x_{0},x)<m_{x_{0}x}+r\}$ and $l\in T(x,t)$, then, using part (iii) of Proposition 1.13 and Proposition 1.18, we obtain

    $$m(l,x_{0})\le H_{m}(T(x,t),T(x_{0},t))+H_{m}(T(x_{0},t),T(x_{0},t_{0})). \tag{2.9}$$

    Since $x_{0}\in T(x_{0},t_{0})$ and $x_{0}\in B_{m}(x_{0},r)\subseteq A\subseteq B$, $t_{0}\in[a,b]$ with $\alpha(x_{0},x_{0})\ge 1$, hypotheses (ii) and (iii) together with (2.9) give

    $$\begin{aligned}
    m(l,x_{0}) &\le \alpha(x_{0},x_{0})\,H_{m}(T(x,t),T(x_{0},t))+\phi_{M}\bigl(|\Omega(t)-\Omega(t_{0})|\bigr)\\
    &\le \phi_{M}(m(x,x_{0}))+\phi_{M}(\epsilon)\\
    &\le \phi_{M}(m_{xx_{0}}+r)+\phi_{M}\bigl(r+m_{xx_{0}}-\phi_{M}(r+m_{xx_{0}})\bigr)\\
    &< \phi_{M}(m_{xx_{0}}+r)+r+m_{xx_{0}}-\phi_{M}(r+m_{xx_{0}})\\
    &= r+m_{xx_{0}}.
    \end{aligned}$$

    Hence $l\in\overline{B_{m}(x_{0},r)}$, and thus for each fixed $t\in(t_{0}-\delta,t_{0}+\delta)$ we obtain $T(x,t)\subseteq\overline{B_{m}(x_{0},r)}$. Therefore $T:\overline{B_{m}(x_{0},r)}\to CB_{m}\bigl(\overline{B_{m}(x_{0},r)}\bigr)$ satisfies all the assumptions of Theorem 2.5, and $T(\cdot,t)$ has a fixed point in $\overline{B_{m}(x_{0},r)}\subseteq B$. By assumption (i), this fixed point belongs to $A$. So $(t_{0}-\delta,t_{0}+\delta)\subseteq D$, and thus $D$ is open in $[a,b]$. Next we prove that $D$ is closed. Let $\{t_{n}\}\subseteq D$ with $t_{n}\to t_{0}\in[a,b]$ as $n\to\infty$; we will prove that $t_{0}\in D$.

    Using the definition of $D$, there exists a sequence $\{x_{n}\}$ in $A$ such that $x_{n}\in T(x_{n},t_{n})$ for all $n$. Using assumptions (iii) and (v), part (iii) of Proposition 1.13, and the outcome of Proposition 1.18, we have

    $$\begin{aligned}
    m(x_{n},x_{m}) &\le H_{m}(T(x_{n},t_{n}),T(x_{m},t_{m}))\\
    &\le H_{m}(T(x_{n},t_{n}),T(x_{n},t_{m}))+H_{m}(T(x_{n},t_{m}),T(x_{m},t_{m}))\\
    &\le \phi_{M}\bigl(|\Omega(t_{n})-\Omega(t_{m})|\bigr)+\alpha(x_{n},x_{m})\,H_{m}(T(x_{n},t_{m}),T(x_{m},t_{m}))\\
    &\le \phi_{M}\bigl(|\Omega(t_{n})-\Omega(t_{m})|\bigr)+\phi_{M}(m(x_{n},x_{m})),
    \end{aligned}$$

    so that

    $$m(x_{n},x_{m})-\phi_{M}(m(x_{n},x_{m}))\le\phi_{M}\bigl(|\Omega(t_{n})-\Omega(t_{m})|\bigr)<|\Omega(t_{n})-\Omega(t_{m})|,$$

    that is, $\theta(m(x_{n},x_{m}))<|\Omega(t_{n})-\Omega(t_{m})|$, where $\theta(x)=x-\phi_{M}(x)$ is the function of hypothesis (vi), and hence

    $$m(x_{n},x_{m})<\theta^{-1}\bigl(|\Omega(t_{n})-\Omega(t_{m})|\bigr).$$

    By the continuity of $\theta^{-1}$ and the convergence of $\{t_{n}\}$, taking the limit as $m,n\to\infty$ in the last inequality, we obtain

    $$\lim_{m,n\to\infty}m(x_{n},x_{m})=0.$$

    Since $m_{x_{n}x_{m}}\le m(x_{n},x_{m})$, therefore

    $$\lim_{m,n\to\infty}m_{x_{n}x_{m}}=0.$$

    Thus we have $\lim_{n\to\infty}m(x_{n},x_{n})=0=\lim_{m\to\infty}m(x_{m},x_{m})$, and also

    $$\lim_{m,n\to\infty}\bigl(m(x_{n},x_{m})-m_{x_{n}x_{m}}\bigr)=0, \quad \lim_{m,n\to\infty}\bigl(M_{x_{n}x_{m}}-m_{x_{n}x_{m}}\bigr)=0.$$

    Hence $\{x_{n}\}$ is an M-Cauchy sequence. Using Definition 1.4, there exists $x\in X$ such that

    $$\lim_{n\to\infty}\bigl(m(x_{n},x)-m_{x_{n}x}\bigr)=0 \quad\text{and}\quad \lim_{n\to\infty}\bigl(M_{x_{n}x}-m_{x_{n}x}\bigr)=0.$$

    As $\lim_{n\to\infty}m(x_{n},x_{n})=0$, therefore

    $$\lim_{n\to\infty}m(x_{n},x)=0 \quad\text{and}\quad \lim_{n\to\infty}M_{x_{n}x}=0.$$

    Thus $m(x,x)=0$. We now show that $x\in T(x,t_{0})$. Note that

    $$\begin{aligned}
    m(x_{n},T(x,t_{0})) &\le H_{m}(T(x_{n},t_{n}),T(x,t_{0}))\\
    &\le H_{m}(T(x_{n},t_{n}),T(x_{n},t_{0}))+H_{m}(T(x_{n},t_{0}),T(x,t_{0}))\\
    &\le \phi_{M}\bigl(|\Omega(t_{n})-\Omega(t_{0})|\bigr)+\alpha(x_{n},x)\,H_{m}(T(x_{n},t_{0}),T(x,t_{0}))\\
    &\le \phi_{M}\bigl(|\Omega(t_{n})-\Omega(t_{0})|\bigr)+\phi_{M}(m(x_{n},x)).
    \end{aligned}$$

    Applying the limit as $n\to\infty$ in the above inequality, we get

    $$\lim_{n\to\infty}m(x_{n},T(x,t_{0}))=0. \tag{2.10}$$

    Since $m(x,x)=0$, we obtain

    $$\sup_{y\in T(x,t_{0})}m_{xy}=\sup_{y\in T(x,t_{0})}\min\{m(x,x),m(y,y)\}=0. \tag{2.11}$$

    From the above two relations, we get

    $$m(x,T(x,t_{0}))=\sup_{y\in T(x,t_{0})}m_{xy}.$$

    Thus, using Lemma 1.12, we get $x\in T(x,t_{0})$, and hence $x\in A$ by hypothesis (i). Therefore $t_{0}\in D$ and $D$ is closed in $[a,b]$. Since $D$ is nonempty, open and closed in the connected set $[a,b]$, we conclude that $D=[a,b]$; thus $T(\cdot,t)$ has a fixed point in $A$ for all $t\in[a,b]$.

    For uniqueness, let $t\in[a,b]$ be fixed but arbitrary; then there exists $x\in A$ such that $x\in T(x,t)$. Assume that $y$ is another fixed point of $T(\cdot,t)$. Applying condition (iv), we obtain

    $$m(x,y)=H_{m}(T(x,t),T(y,t))\le\alpha(x,y)\,H_{m}(T(x,t),T(y,t))\le\phi_{M}(m(x,y)).$$

    For $\phi_{M}(t)=\frac{1}{2}t$, $t>0$, this forces $m(x,y)=0$ and hence $x=y$, so the uniqueness follows.

    In this section, we apply the previous theoretical results to show the existence of a solution of an integral equation; for related results, see [13,20]. We seek a non-negative solution of (3.1) in $X=C([0,\delta],\mathbb{R})$, the set of continuous real-valued functions defined on $[0,\delta]$, endowed with the complete M-metric given by

    $$m(x,y)=\sup_{t\in[0,\delta]}\left|\frac{x(t)+y(t)}{2}\right| \quad\text{for all } x,y\in X.$$

    Consider the integral equation

    $$v_{1}(t)=\rho(t)+\int_{0}^{\delta}h(t,s)\,J(s,v_{1}(s))\,ds \quad\text{for all } 0\le t\le\delta. \tag{3.1}$$

    Define $g:X\to X$ by

    $$g(x)(t)=\rho(t)+\int_{0}^{\delta}h(t,s)\,J(s,x(s))\,ds,$$

    where:

    (i) for $\delta>0$, the functions (a) $J:[0,\delta]\times\mathbb{R}\to\mathbb{R}$, (b) $h:[0,\delta]\times[0,\delta]\to[0,\infty)$, and (c) $\rho:[0,\delta]\to\mathbb{R}$ are all continuous;

    (ii) $\sigma:X\times X\to\mathbb{R}$ is a function with the following properties:

    (iii) $\sigma(x,y)\ge 0$ implies $\sigma(g(x),g(y))\ge 0$;

    (iv) there exists $x_{0}\in X$ such that $\sigma(x_{0},g(x_{0}))\ge 0$;

    (v) if $\{x_{n}\}\subseteq X$ is a sequence such that $\sigma(x_{n},x_{n+1})\ge 0$ for all $n\in\mathbb{N}$ and $x_{n}\to x$ as $n\to\infty$, then $\sigma(x,g(x))\ge 0$;

    (vi)

    $$\sup_{t\in[0,\delta]}\int_{0}^{\delta}h(t,s)\,ds\le 1;$$

    (vii) there exists $\phi_{M}\in\Psi$ such that, whenever $\sigma(x,g(x))\ge 0$ and $\sigma(y,g(y))\ge 0$, for each $t\in[0,\delta]$ we have

    $$|J(s,x(t))+J(s,y(t))|\le\phi_{M}(|x(t)+y(t)|). \tag{3.3}$$

    Theorem 4.1. Under the assumptions (i)–(vii), the integral Eq (3.1) has a solution in $X=C([0,\delta],\mathbb{R})$.

    Proof. Using condition (vii), we obtain

    $$\begin{aligned}
    m(g(x),g(y)) &= \left|\frac{g(x)(t)+g(y)(t)}{2}\right| = \left|\int_{0}^{\delta}h(t,s)\left[\frac{J(s,x(s))+J(s,y(s))}{2}\right]ds\right|\\
    &\le \int_{0}^{\delta}h(t,s)\left|\frac{J(s,x(s))+J(s,y(s))}{2}\right|ds\\
    &\le \int_{0}^{\delta}h(t,s)\,\phi_{M}\!\left(\left|\frac{x(s)+y(s)}{2}\right|\right)ds\\
    &\le \left(\sup_{t\in[0,\delta]}\int_{0}^{\delta}h(t,s)\,ds\right)\phi_{M}\!\left(\sup_{s\in[0,\delta]}\left|\frac{x(s)+y(s)}{2}\right|\right)\\
    &\le \phi_{M}(m(x,y)),
    \end{aligned}$$

    so that

    $$m(g(x),g(y))\le\phi_{M}(m(x,y)).$$

    Define $\alpha:X\times X\to[0,+\infty)$ by

    $$\alpha(x,y)=\begin{cases}1, & \text{if } \sigma(x,y)\ge 0,\\ 0, & \text{otherwise},\end{cases}$$

    which implies that

    $$\alpha(x,y)\,m(g(x),g(y))\le\phi_{M}(m(x,y)).$$

    Hence all the assumptions of Corollary 2.6 are satisfied, so the mapping $g$ has a fixed point in $X=C([0,\delta],\mathbb{R})$, which is a solution of the integral Eq (3.1).
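    The fixed point guaranteed by Theorem 4.1 can be approximated by Picard iteration of $g$. The sketch below is a numerical illustration only; the specific choices $\delta=1$, $h(t,s)=ts$, $J(s,v)=\frac{\sin v}{2}$ and $\rho(t)=\frac{t}{4}$ are our own assumptions, picked so that $\sup_{t}\int_{0}^{\delta}h(t,s)\,ds\le 1$ and the bound (vii) hold with $\phi_{M}(u)=\frac{u}{2}$.

```python
# Picard iteration for an instance of v(t) = rho(t) + int_0^delta h(t,s) J(s, v(s)) ds,
# discretized with the trapezoid rule on [0, 1]. Illustrative parameters only.
import math

N = 201
ts = [i / (N - 1) for i in range(N)]   # grid on [0, 1]
w = [1 / (N - 1)] * N                  # trapezoid weights
w[0] = w[-1] = 0.5 / (N - 1)

rho = [t / 4 for t in ts]

def step(v):
    # One application of g: g(v)(t) = rho(t) + t * int_0^1 s * sin(v(s))/2 ds,
    # since h(t, s) = t * s factorizes.
    core = sum(w[j] * ts[j] * math.sin(v[j]) / 2 for j in range(N))
    return [rho[i] + ts[i] * core for i in range(N)]

v = [0.0] * N
for _ in range(60):
    v_next = step(v)
    diff = max(abs(a - b) for a, b in zip(v, v_next))
    v = v_next
    if diff < 1e-14:
        break

# The computed fixed point of the discretized operator satisfies v = step(v).
residual = max(abs(a - b) for a, b in zip(v, step(v)))
assert residual < 1e-12
```

    Because the effective contraction factor here is roughly $\frac{1}{4}$, the iteration converges in a few dozen steps from the zero function; this mirrors the constructive character of the proof of Theorem 2.5.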

    In this study, we developed some set-valued fixed point results based on $(\alpha,\phi_{M})$-contraction mappings in the context of M-metric spaces and ordered M-metric spaces. We also gave examples and applications to the existence of solutions of integral equations and to homotopy theory.



    Acknowledgments



    The author developed AI agents to work with, much as one would work with a good MSc student. The agents helped to identify the most relevant literature, design a plan for the paper, implement suggested improvements, and draft and revise sections based on comments provided by the author. The author assumes full responsibility for the accuracy of the content presented here.

    Conflict of interest



    The author has no conflicts of interest to declare.

    [1] Bakeman R, Quera V (2011) Sequential Analysis and Observational Methods for the Behavioral Sciences. Cambridge University Press. https://doi.org/10.1017/CBO9781139017343
    [2] Metz GA, Whishaw IQ (2002) Cortical and subcortical lesions impair skilled walking in the ladder rung walking test: a new task to evaluate fore-and hindlimb stepping, placing, and co-ordination. J Neurosci Meth 115: 169-179. https://doi.org/10.1016/S0165-0270(02)00012-2
    [3] Spano R (2005) Potential sources of observer bias in police observational data. Soc Sci Res 34: 591-617. https://doi.org/10.1016/j.ssresearch.2004.05.003
    [4] Asan O, Montague E (2014) Using video-based observation research methods in primary care health encounters to evaluate complex interactions. J Innov Health Inform 21: 161-170. https://doi.org/10.14236/jhi.v21i4.72
    [5] Moran RW, Schneiders AG, Major KM, et al. (2016) How reliable are functional movement screening scores? A systematic review of rater reliability. Brit J Sport Med 50: 527-536. https://doi.org/10.1136/bjsports-2015-094913
    [6] Mathis MW, Mathis A (2020) Deep learning tools for the measurement of animal behavior in neuroscience. Curr Opin Neurobiol 60: 1-11. https://doi.org/10.1016/j.conb.2019.10.008
    [7] Gautam R, Sharma M (2020) Prevalence and diagnosis of neurological disorders using different deep learning techniques: a meta-analysis. J Med Syst 44: 49. https://doi.org/10.1007/s10916-019-1519-7
    [8] Singh KR, Dash S (2023) Early detection of neurological diseases using machine learning and deep learning techniques: a review. Artif Intell Neurol Diso 2023: 1-24. https://doi.org/10.1016/B978-0-323-90277-9.00001-8
    [9] Arac A, Zhao P, Dobkin BH, et al. (2019) DeepBehavior: A deep learning toolbox for automated analysis of animal and human behavior imaging data. Front Syst Neurosci 13: 20. https://doi.org/10.3389/fnsys.2019.00020
    [10] Sewak M, Sahay SK, Rathore H (2020) An overview of deep learning architecture of deep neural networks and autoencoders. J Comput Theor Nanos 17: 182-188. https://doi.org/10.1166/jctn.2020.8648
    [11] Brattoli B, Büchler U, Dorkenwald M, et al. (2021) Unsupervised behaviour analysis and magnification (uBAM) using deep learning. Nat Mach Intell 3: 495-506. https://doi.org/10.1038/s42256-021-00326-x
    [12] ul Haq A, Li JP, Agbley BLY, et al. (2022) A survey of deep learning techniques based Parkinson's disease recognition methods employing clinical data. Expert Syst Appl 208: 118045. https://doi.org/10.1016/j.eswa.2022.118045
    [13] Nilashi M, Abumalloh RA, Yusuf SYM, et al. (2023) Early diagnosis of Parkinson's disease: a combined method using deep learning and neuro-fuzzy techniques. Comput Biol Chem 102: 107788. https://doi.org/10.1016/j.compbiolchem.2022.107788
    [14] Shahid AH, Singh MP (2020) A deep learning approach for prediction of Parkinson's disease progression. Biomed Eng Lett 10: 227-239. https://doi.org/10.1007/s13534-020-00156-7
    [15] Chintalapudi N, Battineni G, Hossain MA, et al. (2022) Cascaded deep learning frameworks in contribution to the detection of parkinson's disease. Bioengineering 9: 116. https://doi.org/10.3390/bioengineering9030116
    [16] Almuqhim F, Saeed F (2021) ASD-SAENet: a sparse autoencoder, and deep-neural network model for detecting autism spectrum disorder (ASD) using fMRI data. Front Comput Neurosci 15: 654315. https://doi.org/10.3389/fncom.2021.654315
    [17] Zhang L, Wang M, Liu M, et al. (2020) A survey on deep learning for neuroimaging-based brain disorder analysis. Front Neurosci 14: 779. https://doi.org/10.3389/fnins.2020.00779
    [18] Uddin MZ, Shahriar MA, Mahamood MN, et al. (2024) Deep learning with image-based autism spectrum disorder analysis: a systematic review. Eng Appl Artif Intel 127: 107185. https://doi.org/10.1016/j.engappai.2023.107185
    [19] Gupta C, Chandrashekar P, Jin T, et al. (2022) Bringing machine learning to research on intellectual and developmental disabilities: taking inspiration from neurological diseases. J Neurodev Disord 14: 28. https://doi.org/10.1186/s11689-022-09438-w
    [20] Saleh AY, Chern LH (2021) Autism spectrum disorder classification using deep learning. IJOE 17: 103-114. https://doi.org/10.3991/ijoe.v17i08.24603
    [21] Koppe G, Meyer-Lindenberg A, Durstewitz D (2021) Deep learning for small and big data in psychiatry. Neuropsychopharmacology 46: 176-190. https://doi.org/10.1038/s41386-020-0767-z
    [22] Gütter J, Kruspe A, Zhu XX, et al. (2022) Impact of training set size on the ability of deep neural networks to deal with omission noise. Front Remote Sens 3: 932431. https://doi.org/10.3389/frsen.2022.932431
    [23] Sturman O, von Ziegler L, Schläppi C, et al. (2020) Deep learning-based behavioral analysis reaches human accuracy and is capable of outperforming commercial solutions. Neuropsychopharmacology 45: 1942-1952. https://doi.org/10.1038/s41386-020-0776-y
    [24] He T, Kong R, Holmes AJ, et al. (2020) Deep neural networks and kernel regression achieve comparable accuracies for functional connectivity prediction of behavior and demographics. NeuroImage 206: 116276. https://doi.org/10.1016/j.neuroimage.2019.116276
    [25] Chen M, Li H, Wang J, et al. (2019) A multichannel deep neural network model analyzing multiscale functional brain connectome data for attention deficit hyperactivity disorder detection. Radiol Artif Intell 2: e190012. https://doi.org/10.1148/ryai.2019190012
    [26] Golshan HM, Hebb AO, Mahoor MH (2020) LFP-Net: a deep learning framework to recognize human behavioral activities using brain STN-LFP signals. J Neurosci Meth 335: 108621. https://doi.org/10.1016/j.jneumeth.2020.108621
    [27] Sutoko S, Masuda A, Kandori A, et al. (2021) Early identification of Alzheimer's disease in mouse models: Application of deep neural network algorithm to cognitive behavioral parameters. iScience 24: 102198. https://doi.org/10.1016/j.isci.2021.102198
    [28] Tarigopula P, Fairhall SL, Bavaresco A, et al. (2023) Improved prediction of behavioral and neural similarity spaces using pruned DNNs. Neural Networks 168: 89-104. https://doi.org/10.1016/j.neunet.2023.08.049
    [29] Uyulan C, Ergüzel TT, Unubol H, et al. (2021) Major depressive disorder classification based on different convolutional neural network models: deep learning approach. Clin EEG Neurosci 52: 38-51. https://doi.org/10.1177/1550059420916634
    [30] Wen J, Thibeau-Sutre E, Diaz-Melo M, et al. (2020) Convolutional neural networks for classification of Alzheimer's disease: overview and reproducible evaluation. Med Image Anal 63: 101694. https://doi.org/10.1016/j.media.2020.101694
    [31] Karthik R, Menaka R, Johnson A, et al. (2020) Neuroimaging and deep learning for brain stroke detection-A review of recent advancements and future prospects. Comput Meth Prog Bio 197: 105728. https://doi.org/10.1016/j.cmpb.2020.105728
    [32] Iqbal MS, Heyat MBB, Parveen S, et al. (2024) Progress and trends in neurological disorders research based on deep learning. Comput Med Imag Grap 116: 102400. https://doi.org/10.1016/j.compmedimag.2024.102400
    [33] Kim S, Pathak S, Parise R, et al. (2024) The thriving influence of artificial intelligence in neuroscience. Application of Artificial Intelligence in Neurological Disorders . Singapore: Springer Nature Singapore 157-184. https://doi.org/10.1007/978-981-97-2577-9_9
    [34] Lima AA, Mridha MF, Das SC, et al. (2022) A comprehensive survey on the detection, classification, and challenges of neurological disorders. Biology 11: 469. https://doi.org/10.3390/biology11030469
    [35] Mulpuri RP, Konda N, Gadde ST, et al. (2024) Artificial intelligence and machine learning in neuroregeneration: a systematic review. Cureus 16: e61400. https://doi.org/10.7759/cureus.61400
    [36] Keserwani PK, Das S, Sarkar N (2024) A comparative study: prediction of Parkinson's disease using machine learning, deep learning and nature inspired algorithm. Multimed Tools Appl 83: 69393-69441. https://doi.org/10.1007/s11042-024-18186-z
    [37] Fatima A, Masood S (2024) Machine learning approaches for neurological disease prediction: a systematic review. Expert Syst 41: e13569. https://doi.org/10.1111/exsy.13569
    [38] Surianarayanan C, Lawrence JJ, Chelliah PR, et al. (2023) Convergence of artificial intelligence and neuroscience towards the diagnosis of neurological disorders—a scoping review. Sensors 23: 3062. https://doi.org/10.3390/s23063062
    [39] Lombardi A, Diacono D, Amoroso N, et al. (2021) Explainable deep learning for personalized age prediction with brain morphology. Front Neurosci 15: 674055. https://doi.org/10.3389/fnins.2021.674055
    [40] Choo YJ, Chang MC (2022) Use of machine learning in stroke rehabilitation: a narrative review. Brain Neurorehab 15: e26. https://doi.org/10.12786/bn.2022.15.e26
    [41] Ryait H, Bermudez-Contreras E, Harvey M, et al. (2019) Data-driven analyses of motor impairments in animal models of neurological disorders. PLoS Biol 17: e3000516. https://doi.org/10.1371/journal.pbio.3000516
    [42] Nguyen HS, Ho DKN, Nguyen NN, et al. (2024) Predicting EGFR mutation status in non-small cell lung cancer using artificial intelligence: a systematic review and meta-analysis. Acad Radiol 31: 660-683. https://doi.org/10.1016/j.acra.2023.03.040
    [43] Zhang Y, Yao Q, Yue L, et al. (2023) Emerging drug interaction prediction enabled by a flow-based graph neural network with biomedical network. Nat Comput Sci 3: 1023-1033. https://doi.org/10.1038/s43588-023-00558-4
    [44] Le NQK (2023) Predicting emerging drug interactions using GNNs. Nat Comput Sci 3: 1007-1008. https://doi.org/10.1038/s43588-023-00555-7
    [45] Abed Mohammed A, Sumari P (2024) Hybrid k-means and principal component analysis (PCA) for diabetes prediction. Int J Comput Dig Syst 15: 1719-1728. https://doi.org/10.12785/ijcds/1501121
    [46] Mostafa F, Hasan E, Williamson M, et al. (2021) Statistical machine learning approaches to liver disease prediction. Livers 1: 294-312. https://doi.org/10.3390/livers1040023
    [47] Jackins V, Vimal S, Kaliappan M, et al. (2021) AI-based smart prediction of clinical disease using random forest classifier and Naive Bayes. J Supercomput 77: 5198-5219. https://doi.org/10.1007/s11227-020-03481-x
    [48] Cho G, Yim J, Choi Y, et al. (2019) Review of machine learning algorithms for diagnosing mental illness. Psychiat Invest 16: 262. https://doi.org/10.30773/pi.2018.12.21.2
    [49] Aljrees T (2024) Improving prediction of cervical cancer using KNN imputer and multi-model ensemble learning. PLoS One 19: e0295632. https://doi.org/10.1371/journal.pone.0295632
    [50] Hajare S, Rewatkar R, Reddy KTV (2024) Design of an iterative method for enhanced early prediction of acute coronary syndrome using XAI analysis. AIMS Bioeng 11: 301-322. https://doi.org/10.3934/bioeng.2024016
    [51] Schjetnan AGP, Luczak A (2011) Recording large-scale neuronal ensembles with silicon probes in the anesthetized rat. J Vis Exp 56: e3282. https://doi.org/10.3791/3282-v
    [52] Luczak A, Narayanan NS (2005) Spectral representation-analyzing single-unit activity in extracellularly recorded neuronal data without spike sorting. J Neurosci Meth 144: 53-61. https://doi.org/10.1016/j.jneumeth.2004.10.009
    [53] Luczak A, Hackett TA, Kajikawa Y, et al. (2004) Multivariate receptive field mapping in marmoset auditory cortex. J Neurosci Meth 136: 77-85. https://doi.org/10.1016/j.jneumeth.2003.12.019
    [54] Luczak A (2010) Measuring neuronal branching patterns using model-based approach. Front Comput Neurosci 4: 135. https://doi.org/10.3389/fncom.2010.00135
    [55] Luczak A, Kubo Y (2022) Predictive neuronal adaptation as a basis for consciousness. Front Syst Neurosci 15: 767461. https://doi.org/10.3389/fnsys.2021.767461
    [56] Lepakshi VA (2022) Machine learning and deep learning based AI tools for development of diagnostic tools. Computational Approaches for Novel Therapeutic and Diagnostic Designing to Mitigate SARS-CoV-2 Infection . Academic Press 399-420. https://doi.org/10.1016/B978-0-323-91172-6.00011-X
    [57] Montavon G, Binder A, Lapuschkin S, et al. (2019) Layer-wise relevance propagation: an overview. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning . Cham: Springer 193-209. https://doi.org/10.1007/978-3-030-28954-6_10
    [58] Nazir S, Dickson DM, Akram MU (2023) Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 156: 106668. https://doi.org/10.1016/j.compbiomed.2023.106668
    [59] Torabi R, Jenkins S, Harker A, et al. (2021) A neural network reveals motoric effects of maternal preconception exposure to nicotine on rat pup behavior: a new approach for movement disorders diagnosis. Front Neurosci 15: 686767. https://doi.org/10.3389/fnins.2021.686767
    [60] Shahtalebi S, Atashzar SF, Patel RV, et al. (2021) A deep explainable artificial intelligent framework for neurological disorders discrimination. Sci Rep 11: 9630. https://doi.org/10.1038/s41598-021-88919-9
    [61] Morabito FC, Ieracitano C, Mammone N (2023) An explainable artificial intelligence approach to study MCI to AD conversion via HD-EEG processing. Clin EEG Neurosci 54: 51-60. https://doi.org/10.1177/15500594211063662
    [62] Goodwin NL, Nilsson SRO, Choong JJ, et al. (2022) Toward the explainability, transparency, and universality of machine learning for behavioral classification in neuroscience. Curr Opin Neurobiol 73: 102544. https://doi.org/10.1016/j.conb.2022.102544
    [63] Lindsay GW (2024) Grounding neuroscience in behavioral changes using artificial neural networks. Curr Opin Neurobiol 84: 102816. https://doi.org/10.1016/j.conb.2023.102816
    [64] Dan T, Kim M, Kim WH, et al. (2023) Developing explainable deep model for discovering novel control mechanism of neuro-dynamics. IEEE T Med Imaging 43: 427-438. https://doi.org/10.1109/TMI.2023.3309821
    [65] Fellous JM, Sapiro G, Rossi A, et al. (2019) Explainable artificial intelligence for neuroscience: behavioral neurostimulation. Front Neurosci 13: 1346. https://doi.org/10.3389/fnins.2019.01346
    [66] Bartle AS, Jiang Z, Jiang R, et al. (2022) A critical appraisal on deep neural networks: bridge the gap between deep learning and neuroscience via XAI. Handbook on Computer Learning and Intelligence: Volume 2: Deep Learning, Intelligent Control and Evolutionary Computation 2022: 619-634. https://doi.org/10.1142/9789811247323_0015
    [67] Lemon RN (1997) Mechanisms of cortical control of hand function. Neuroscientist 3: 389-398. https://doi.org/10.1177/107385849700300612
    [68] Alaverdashvili M, Whishaw IQ (2013) A behavioral method for identifying recovery and compensation: hand use in a preclinical stroke model using the single pellet reaching task. Neurosci Biobehav R 37: 950-967. https://doi.org/10.1016/j.neubiorev.2013.03.026
    [69] Metz GAS, Whishaw IQ (2000) Skilled reaching an action pattern: stability in rat (Rattus norvegicus) grasping movements as a function of changing food pellet size. Behav Brain Res 116: 111-122. https://doi.org/10.1016/S0166-4328(00)00245-X
    [70] Faraji J, Gomez-Palacio-Schjetnan A, Luczak A, et al. (2013) Beyond the silence: bilateral somatosensory stimulation enhances skilled movement quality and neural density in intact behaving rats. Behav Brain Res 253: 78-89. https://doi.org/10.1016/j.bbr.2013.07.022
    [71] Sheu Y (2020) Illuminating the black box: interpreting deep neural network models for psychiatric research. Front Psychiatry 11: 551299. https://doi.org/10.3389/fpsyt.2020.551299
    [72] Fan FL, Xiong J, Li M, et al. (2021) On interpretability of artificial neural networks: a survey. IEEE T Radiat Plasma 5: 741-760. https://doi.org/10.1109/TRPMS.2021.3066428
    [73] Smucny J, Shi G, Davidson I (2022) Deep learning in neuroimaging: overcoming challenges with emerging approaches. Front Psychiatry 13: 912600. https://doi.org/10.3389/fpsyt.2022.912600
    [74] Kohlbrenner M, Bauer A, Nakajima S, et al. (2020) Towards best practice in explaining neural network decisions with LRP. 2020 International Joint Conference on Neural Networks (IJCNN) . IEEE 1-7. https://doi.org/10.1109/IJCNN48605.2020.9206975
    [75] Farahani FV, Fiok K, Lahijanian B, et al. (2022) Explainable AI: a review of applications to neuroimaging data. Front Neurosci 16: 906290. https://doi.org/10.3389/fnins.2022.906290
    [76] Böhle M, Eitel F, Weygandt M, et al. (2019) Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer's disease classification. Front Aging Neurosci 11: 456892. https://doi.org/10.3389/fnagi.2019.00194
    [77] Marques dos Santos JD, Marques dos Santos JP (2023) Path-weights and layer-wise relevance propagation for explainability of ANNs with fMRI data. International Conference on Machine Learning, Optimization, and Data Science . Cham: Springer Nature Switzerland 433-448. https://doi.org/10.1007/978-3-031-53966-4_32
    [78] Filtjens B, Ginis P, Nieuwboer A, et al. (2021) Modelling and identification of characteristic kinematic features preceding freezing of gait with convolutional neural networks and layer-wise relevance propagation. BMC Med Inform Decis Mak 21: 341. https://doi.org/10.1186/s12911-021-01699-0
    [79] Li H, Tian Y, Mueller K, et al. (2019) Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation. Image Vision Comput 83: 70-86. https://doi.org/10.1016/j.imavis.2019.02.005
    [80] Nam H, Kim JM, Choi W, et al. (2023) The effects of layer-wise relevance propagation-based feature selection for EEG classification: a comparative study on multiple datasets. Front Hum Neurosci 17: 1205881. https://doi.org/10.3389/fnhum.2023.1205881
    [81] Korda AI, Ruef A, Neufang S, et al. (2021) Identification of voxel-based texture abnormalities as new biomarkers for schizophrenia and major depressive patients using layer-wise relevance propagation on deep learning decisions. Psychiat Res-Neuroim 313: 111303. https://doi.org/10.1016/j.pscychresns.2021.111303
    [82] von Ziegler L, Sturman O, Bohacek J (2021) Big behavior: challenges and opportunities in a new era of deep behavior profiling. Neuropsychopharmacology 46: 33-44. https://doi.org/10.1038/s41386-020-0751-7
    [83] Marks M, Jin Q, Sturman O, et al. (2022) Deep-learning-based identification, tracking, pose estimation and behaviour classification of interacting primates and mice in complex environments. Nat Mach Intell 4: 331-340. https://doi.org/10.1038/s42256-022-00477-5
    [84] Bohnslav JP, Wimalasena NK, Clausing KJ, et al. (2021) DeepEthogram, a machine learning pipeline for supervised behavior classification from raw pixels. Elife 10: e63377. https://doi.org/10.7554/eLife.63377
    [85] Wang PY, Sapra S, George VK, et al. (2021) Generalizable machine learning in neuroscience using graph neural networks. Front Artif Intell 4: 618372. https://doi.org/10.3389/frai.2021.618372
    [86] Watson DS, Krutzinna J, Bruce IN, et al. (2019) Clinical applications of machine learning algorithms: beyond the black box. BMJ 364: l886. https://doi.org/10.2139/ssrn.3352454
    [87] Jain A, Salas M, Aimer O, et al. (2024) Safeguarding patients in the AI era: ethics at the forefront of pharmacovigilance. Drug Safety 48: 119-127. https://doi.org/10.1007/s40264-024-01483-9
    [88] Murdoch B (2021) Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics 22: 1-5. https://doi.org/10.1186/s12910-021-00687-3
    [89] Ziesche S (2021) AI ethics and value alignment for nonhuman animals. Philosophies 6: 31. https://doi.org/10.3390/philosophies6020031
    [90] Bossert L, Hagendorff T (2021) Animals and AI. The role of animals in AI research and application-an overview and ethical evaluation. Technol Soc 67: 101678. https://doi.org/10.1016/j.techsoc.2021.101678
    [91] Gong Y, Liu G, Xue Y, et al. (2023) A survey on dataset quality in machine learning. Inform Software Tech 162: 107268. https://doi.org/10.1016/j.infsof.2023.107268
    [92] Bolaños LA, Xiao D, Ford NL, et al. (2021) A three-dimensional virtual mouse generates synthetic training data for behavioral analysis. Nat Methods 18: 378-381. https://doi.org/10.1038/s41592-021-01103-9
    [93] Lashgari E, Liang D, Maoz U (2020) Data augmentation for deep-learning-based electroencephalography. J Neurosci Methods 346: 108885. https://doi.org/10.1016/j.jneumeth.2020.108885
    [94] Barile B, Marzullo A, Stamile C, et al. (2021) Data augmentation using generative adversarial neural networks on brain structural connectivity in multiple sclerosis. Comput Meth Prog Bio 206: 106113. https://doi.org/10.1016/j.cmpb.2021.106113
    [95] Memar S, Jiang E, Prado VF, et al. (2023) Open science and data sharing in cognitive neuroscience with MouseBytes and MouseBytes+. Sci Data 10: 210. https://doi.org/10.1038/s41597-023-02106-1
    [96] Jleilaty S, Ammounah A, Abdulmalek G, et al. (2024) Distributed real-time control architecture for electrohydraulic humanoid robots. Robot Intell Automat 44: 607-620. https://doi.org/10.1108/RIA-01-2024-0013
    [97] Zhao J, Wang Z, Lv Y, et al. (2024) Data-driven learning for H∞ control of adaptive cruise control systems. IEEE Trans Veh Technol 73: 18348-18362. https://doi.org/10.1109/TVT.2024.3447060
    [98] Kelly CJ, Karthikesalingam A, Suleyman M, et al. (2019) Key challenges for delivering clinical impact with artificial intelligence. BMC Med 17: 1-9. https://doi.org/10.1186/s12916-019-1426-2
    [99] Kulkarni PA, Singh H (2023) Artificial intelligence in clinical diagnosis: opportunities, challenges, and hype. JAMA 330: 317-318. https://doi.org/10.1001/jama.2023.11440
    [100] Choudhury A, Asan O (2020) Role of artificial intelligence in patient safety outcomes: systematic literature review. JMIR Med Inform 8: e18599. https://doi.org/10.2196/18599
    [101] Ratwani RM, Sutton K, Galarraga JE (2024) Addressing AI algorithmic bias in health care. JAMA 332: 1051-1052. https://doi.org/10.1001/jama.2024.13486
    [102] Chen C, Sundar SS (2024) Communicating and combating algorithmic bias: effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust. Hum-Comput Interact 2024: 1-37. https://doi.org/10.1080/07370024.2024.2392494
    [103] Chen F, Wang L, Hong J, et al. (2024) Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Med Inform Assoc 31: 1172-1183. https://doi.org/10.1093/jamia/ocae060
    [104] Ienca M, Ignatiadis K (2020) Artificial intelligence in clinical neuroscience: methodological and ethical challenges. AJOB Neurosci 11: 77-87. https://doi.org/10.1080/21507740.2020.1740352
    [105] Avberšek LK, Repovš G (2022) Deep learning in neuroimaging data analysis: applications, challenges, and solutions. Front Neuroimag 1: 981642. https://doi.org/10.3389/fnimg.2022.981642
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)