
How artificial intelligence reduces human bias in diagnostics?

  • Received: 10 October 2024 Revised: 07 December 2024 Accepted: 07 February 2025 Published: 12 February 2025
  • Accurate diagnostics of neurological disorders often rely on behavioral assessments, yet traditional methods rooted in manual observations and scoring are labor-intensive, subjective, and prone to human bias. Artificial Intelligence (AI), particularly Deep Neural Networks (DNNs), offers transformative potential to overcome these limitations by automating behavioral analyses and reducing biases in diagnostic practices. DNNs excel in processing complex, high-dimensional data, allowing for the detection of subtle behavioral patterns critical for diagnosing neurological disorders such as Parkinson's disease, strokes, or spinal cord injuries. This review explores how AI-driven approaches can mitigate observer biases, thereby emphasizing the use of explainable DNNs to enhance objectivity in diagnostics. Explainable AI techniques enable the identification of which features in data are used by DNNs to make decisions. In a data-driven manner, this allows one to uncover novel insights that may elude human experts. For instance, explainable DNN techniques have revealed previously unnoticed diagnostic markers, such as posture changes, which can enhance the sensitivity of behavioral diagnostic assessments. Furthermore, by providing interpretable outputs, explainable DNNs build trust in AI-driven systems and support the development of unbiased, evidence-based diagnostic tools. In addition, this review discusses challenges such as data quality, model interpretability, and ethical considerations. By illustrating the role of AI in reshaping diagnostic methods, this paper highlights its potential to revolutionize clinical practices, thus paving the way for more objective and reliable assessments of neurological disorders.

    Citation: Artur Luczak. How artificial intelligence reduces human bias in diagnostics?[J]. AIMS Bioengineering, 2025, 12(1): 69-89. doi: 10.3934/bioeng.2025004




    Let $n$ and $k$ be two positive integers. Denote by $p(n,k)$ the number of partitions of the positive number $n$ into exactly $k$ parts. Then the partition class $k$ is the sequence $p(1,k), p(2,k), \ldots, p(n,k), \ldots$ We already know, see [1], that all these values can be divided into at most $d_0 = \mathrm{LCM}(1,2,\ldots,k)$ subsequences, each of which is calculated by the same polynomial.

    Choose a sequence of $k$ natural numbers such that the first member is arbitrary, and the rest form an arithmetic progression with difference $d = m d_0$, $m \in \mathbb{N}$, starting from the chosen first member. For example:

    $$x_1 = j,\quad x_2 = j+d,\quad \ldots,\quad x_k = j+(k-1)d, \qquad j \in \mathbb{N}. \tag{1.1}$$

    The corresponding partition-class values of the class $k$ at the elements of the previous arithmetic progression are:

    $$p(x_1,k),\; p(x_2,k),\; \ldots,\; p(x_k,k). \tag{1.2}$$

    If the values, which are calculated using the same polynomial, are multiplied by the corresponding binomial coefficients and combined into an alternating sum, we notice that the sum always has a value which is independent of $x_1$, no matter how we form the sequence (1.1).

    For the partition function of classes, the following results are already known; see [1,2] for details:

    ⅰ) The values of the partition function of a class are calculated with one quasi-polynomial.

    ⅱ) For each class $k$ the quasi-polynomial consists of at most $\mathrm{LCM}(1,2,\ldots,k)$ different polynomials, each of which consists of a strictly positive and an alternating part.

    ⅲ) All polynomials within one quasi-polynomial $p(n,k)$ are of degree $k-1$.

    ⅳ) All the coefficients from the highest degree down to degree $\lfloor k/2 \rfloor$ are equal for all polynomials (the strictly positive part), and the polynomials differ only in the lower coefficients (the alternating part).

    ⅴ) The form of any polynomial $p(n,k)$ is:

    $$p(n,k) = a_1 n^{k-1} + a_2 n^{k-2} + \cdots + a_k, \tag{1.3}$$

    where the coefficients $a_1, a_2, \ldots, a_k$ are calculated in general form.

    Let us forget for a moment that the coefficients $a_1, a_2, \ldots$ are known in general form. Knowing that all partition-class values on the sequence (1.1) are obtained by one polynomial $p(n,k)$, it is possible to determine all unknown coefficients in a completely different way from that given in papers [1,2]. To determine $k$ unknowns, $k$ equations are required. For this purpose, it is sufficient to know all the values of the sequence (1.2). To this end, we must form the system (1.4) and solve it. (For $k=10$, see [3].)

    $$\begin{aligned} a_1 x_1^{k-1} + a_2 x_1^{k-2} + \cdots + a_k &= p(x_1,k)\\ a_1 x_2^{k-1} + a_2 x_2^{k-2} + \cdots + a_k &= p(x_2,k)\\ &\;\;\vdots\\ a_1 x_k^{k-1} + a_2 x_k^{k-2} + \cdots + a_k &= p(x_k,k) \end{aligned} \tag{1.4}$$
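    System (1.4) can also be solved mechanically in exact rational arithmetic; a sketch (helper names `p` and `poly_coeffs` are ours). For $k=4$, $j=1$, $m=1$ the interpolation points $1, 13, 25, 37$ all lie in the residue class $1 \pmod{12}$, so the solved coefficients reproduce the odd branch of the known quasi-polynomial (3.5) for the fourth class:

```python
import math
from fractions import Fraction

def p(n, k):
    """Number of partitions of n into exactly k parts (bottom-up DP)."""
    if k < 1 or n < k:
        return 0
    T = [[0] * (k + 1) for _ in range(n + 1)]
    T[0][0] = 1
    for nn in range(1, n + 1):
        for kk in range(1, min(k, nn) + 1):
            # smallest part equals 1, or subtract 1 from every part
            T[nn][kk] = T[nn - 1][kk - 1] + T[nn - kk][kk]
    return T[n][k]

def poly_coeffs(k, j, m):
    """Solve the Vandermonde system (1.4) exactly by Gauss-Jordan over Q."""
    d = m * math.lcm(*range(1, k + 1))
    xs = [j + i * d for i in range(k)]
    # augmented matrix: rows [x^(k-1), ..., x, 1 | p(x, k)]
    A = [[Fraction(x) ** (k - 1 - t) for t in range(k)] + [Fraction(p(x, k))]
         for x in xs]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(k):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[t][k] for t in range(k)]

# coefficients a_1, ..., a_4 for the residue class 1 (mod 12)
print(poly_coeffs(4, 1, 1))
```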

    The system (1.4) can be solved by Cramer's rule. For further analysis, we need the following determinants. We start with the well-known Vandermonde determinant, see [4].

    $$\Delta_m = \begin{vmatrix} x_1^{m-1} & x_1^{m-2} & \cdots & 1\\ x_2^{m-1} & x_2^{m-2} & \cdots & 1\\ \vdots & \vdots & & \vdots\\ x_m^{m-1} & x_m^{m-2} & \cdots & 1 \end{vmatrix} = \prod_{1\le i<j\le m}(x_i - x_j), \qquad m > 1. \tag{1.5}$$

    When we remove the first column and an arbitrary row from the previous determinant, we obtain a Vandermonde determinant of one order less. The following results are known, see [4], and are needed for the further exposition. If we remove the second column and an arbitrary $a$-th row from the determinant (1.5), we get

    $$\begin{vmatrix} x_1^{m-1} & x_1^{m-3} & \cdots & x_1 & 1\\ \vdots & \vdots & & \vdots & \vdots\\ x_{a-1}^{m-1} & x_{a-1}^{m-3} & \cdots & x_{a-1} & 1\\ x_{a+1}^{m-1} & x_{a+1}^{m-3} & \cdots & x_{a+1} & 1\\ \vdots & \vdots & & \vdots & \vdots\\ x_m^{m-1} & x_m^{m-3} & \cdots & x_m & 1 \end{vmatrix} = \Bigg(\sum_{\substack{i\ne a\\ 1\le i\le m}} x_i\Bigg)\prod_{\substack{i,j\ne a\\ 1\le i<j\le m}}(x_i - x_j). \tag{1.6}$$

    If we remove the third column and an arbitrary a-th row from the determinant (1.5) we get

    $$\begin{vmatrix} x_1^{m-1} & x_1^{m-2} & x_1^{m-4} & \cdots & x_1 & 1\\ \vdots & \vdots & \vdots & & \vdots & \vdots\\ x_{a-1}^{m-1} & x_{a-1}^{m-2} & x_{a-1}^{m-4} & \cdots & x_{a-1} & 1\\ x_{a+1}^{m-1} & x_{a+1}^{m-2} & x_{a+1}^{m-4} & \cdots & x_{a+1} & 1\\ \vdots & \vdots & \vdots & & \vdots & \vdots\\ x_m^{m-1} & x_m^{m-2} & x_m^{m-4} & \cdots & x_m & 1 \end{vmatrix} = \Bigg(\sum_{\substack{i,j\ne a\\ 1\le i<j\le m}} x_i x_j\Bigg)\prod_{\substack{i,j\ne a\\ 1\le i<j\le m}}(x_i - x_j). \tag{1.7}$$

    Generally, if we remove the $b$-th column and an arbitrary $a$-th row from the determinant (1.5), we get

    $$\Delta_m^{(a,b)} = \begin{vmatrix} x_1^{m-1} & x_1^{m-2} & \cdots & x_1^{m-b+1} & x_1^{m-b-1} & \cdots & x_1 & 1\\ \vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots\\ x_{a-1}^{m-1} & x_{a-1}^{m-2} & \cdots & x_{a-1}^{m-b+1} & x_{a-1}^{m-b-1} & \cdots & x_{a-1} & 1\\ x_{a+1}^{m-1} & x_{a+1}^{m-2} & \cdots & x_{a+1}^{m-b+1} & x_{a+1}^{m-b-1} & \cdots & x_{a+1} & 1\\ \vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots\\ x_m^{m-1} & x_m^{m-2} & \cdots & x_m^{m-b+1} & x_m^{m-b-1} & \cdots & x_m & 1 \end{vmatrix} = \Bigg(\sum_{\substack{t_1,\ldots,t_{b-1}\ne a\\ 1\le t_1<t_2<\cdots<t_{b-1}\le m}} x_{t_1}x_{t_2}\cdots x_{t_{b-1}}\Bigg)\prod_{\substack{i,j\ne a\\ 1\le i<j\le m}}(x_i-x_j).$$

    The notation $\Delta_m^{(a,b)}$ means that the $a$-th row and the $b$-th column are removed from $\Delta_m$, so that the variable $x_a$ is removed from the set of variables.

    Theorem 1. Let m,j and k be three positive integers and

    $$I_1(k,j,d) = \sum_{i=0}^{k-1}(-1)^i\binom{k-1}{i}\,p(j+id,\,k),$$

    where $d = m\cdot\mathrm{LCM}(1,2,3,\ldots,k)$. Then $I_1(k,j,d) = \dfrac{(-1)^{k-1} d^{k-1}}{k!}$ and is independent of $j$. ($I_1(k,j,d)$ is the first partition invariant, which exists in all classes.)
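    Theorem 1 is easy to stress-test numerically. The sketch below (function names ours) checks the equivalent integer identity $k!\,I_1(k,j,d) = (-1)^{k-1}d^{k-1}$ for $k\le 6$ and several choices of $j$ and $m$:

```python
import math

def p(n, k):
    """Number of partitions of n into exactly k parts (bottom-up DP)."""
    if k < 1 or n < k:
        return 0
    T = [[0] * (k + 1) for _ in range(n + 1)]
    T[0][0] = 1
    for nn in range(1, n + 1):
        for kk in range(1, min(k, nn) + 1):
            T[nn][kk] = T[nn - 1][kk - 1] + T[nn - kk][kk]
    return T[n][k]

def I1(k, j, m):
    """First invariant: alternating binomial sum over the progression (1.1)."""
    d = m * math.lcm(*range(1, k + 1))
    return sum((-1) ** i * math.comb(k - 1, i) * p(j + i * d, k)
               for i in range(k)), d

# Theorem 1 predicts I1 * k! == (-1)^(k-1) * d^(k-1), independent of j
for k in range(1, 7):
    for j in (1, 4, 11):
        for m in (1, 2):
            s, d = I1(k, j, m)
            assert s * math.factorial(k) == (-1) ** (k - 1) * d ** (k - 1)

print(I1(3, 5, 2)[0])  # third class, m = 2: 6 m^2 -> 24
```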

    Proof. Among the values of the class $k$ we choose the ones corresponding to the sequence (1.1); they are given by the sequence (1.2). According to [2], all the elements in (1.2) can be calculated using the same polynomial $p(n,k)$ of degree $k-1$. The elements of the following sequence:

    $$q,\; q+d,\; \ldots,\; q+(k-1)d, \qquad q \ne j,$$

    are calculated with a polynomial that is not necessarily the same as the previous one. Let the polynomial $p(n,k)$ have the form (1.3). To determine the coefficients $a_1, a_2, \ldots, a_k$ it suffices to know the $k$ values $p(x_1,k), p(x_2,k), \ldots, p(x_k,k)$, where $x_1 = j, x_2 = j+d, \ldots, x_k = j+(k-1)d$ are distinct numbers. Since all the elements of the set $\{x_1, x_2, \ldots, x_k\}$ are different from one another, $\Delta_k \ne 0$, so system (1.4) always has a unique solution. According to Cramer's rule, the coefficient of the highest degree of the polynomial (1.3), which calculates the values of the number of partitions of class $k$, is given by:

    $$a_1 = \frac{p(x_1,k)\,\Delta_k^{(1,1)} - p(x_2,k)\,\Delta_k^{(2,1)} + \cdots + (-1)^{k-1}\,p(x_k,k)\,\Delta_k^{(k,1)}}{\Delta_k}. \tag{2.1}$$

    The determinants $\Delta_k^{(a,1)}$, $1\le a\le k$, are also Vandermonde determinants, and their values equal $\Delta_{k-1}$. Let $\{x_i\}_{1\le i\le k}$ satisfy (1.1); then for $1\le a\le k$ it holds that

    $$\Delta_k^{(a,1)} = \Delta_{k-1} = \frac{\Delta_k}{\prod_{i=1}^{a-1}(x_i-x_a)\,\prod_{i=a+1}^{k}(x_a-x_i)} = \frac{\Delta_k}{(-1)^{a-1}(a-1)!\,d^{a-1}\cdot(-1)^{k-a}(k-a)!\,d^{k-a}} = \frac{(-1)^{k-1}\,\Delta_k}{(a-1)!\,(k-a)!\,d^{k-1}}.$$

    Replacing this in (2.1), after cancelling $\Delta_k$, we have

    $$a_1 = (-1)^{k-1}\left(\frac{p(x_1,k)}{0!\,(k-1)!\,d^{k-1}} - \frac{p(x_2,k)}{1!\,(k-2)!\,d^{k-1}} + \cdots + (-1)^{k-1}\,\frac{p(x_k,k)}{(k-1)!\,0!\,d^{k-1}}\right).$$

    The coefficient $a_1$ is already determined in [2], where it is shown that $a_1 = \frac{1}{k!\,(k-1)!}$. Substituting into the previous equality and multiplying by $(-1)^{k-1}$, we obtain

    $$\frac{(-1)^{k-1}}{k!\,(k-1)!} = \frac{p(x_1,k)}{0!\,(k-1)!\,d^{k-1}} - \frac{p(x_2,k)}{1!\,(k-2)!\,d^{k-1}} + \cdots + (-1)^{k-1}\,\frac{p(x_k,k)}{(k-1)!\,0!\,d^{k-1}}.$$

    Multiplying the last equality by $(k-1)!\,d^{k-1}$, we obtain

    $$\frac{(-1)^{k-1}\,d^{k-1}}{k!} = \sum_{i=0}^{k-1}(-1)^i\binom{k-1}{i}\,p(j+id,\,k),$$

    which was to be proved. Since this value is the same for any choice of the observed sequence (1.2) within a class, the sum is an invariant of each observed class.

    Not all partition classes contain all the invariants we will list; this primarily refers to the first few classes. Only the first invariant appears in all classes. The second invariant holds starting from the third class, the third invariant starting from the fifth class, the fourth from the seventh class, etc. This coincides with the appearance of the common coefficients $\{a_k\}$ in the quasi-polynomials $p(n,k)$, $k\in\mathbb{N}$.

    Theorem 2. Let $m$, $j$ and $k$ be three positive integers, $k \ge 3$, and

    $$I_2(k,j,d) = \sum_{i=0}^{k-1}(-1)^{i}\left(j(k-1) + \Big(\binom{k}{2}-i\Big)d\right)\binom{k-1}{i}\,p(j+id,\,k),$$

    where $d = m\cdot\mathrm{LCM}(2,3,\ldots,k)$. Then $I_2(k,j,d) = \dfrac{(-1)^{k}(k-3)\,d^{k-1}}{4\,(k-2)!}$ and is independent of $j$.

    Remark. In the previous expression, we should not simplify, as then the value for $k=3$ could not be obtained. However, the value for $k=3$ exists and is equal to zero.
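    Theorem 2 can be stress-tested the same way in the classes where it holds; a sketch (function names ours) verifying the cross-multiplied identity $4\,(k-2)!\,I_2(k,j,d) = (-1)^k(k-3)\,d^{k-1}$ for $k = 3, 4, 5$:

```python
import math

def p(n, k):
    """Number of partitions of n into exactly k parts (bottom-up DP)."""
    if k < 1 or n < k:
        return 0
    T = [[0] * (k + 1) for _ in range(n + 1)]
    T[0][0] = 1
    for nn in range(1, n + 1):
        for kk in range(1, min(k, nn) + 1):
            T[nn][kk] = T[nn - 1][kk - 1] + T[nn - kk][kk]
    return T[n][k]

def I2(k, j, m):
    """Second invariant (Theorem 2), defined for classes k >= 3."""
    d = m * math.lcm(*range(2, k + 1))
    s = sum((-1) ** i * (j * (k - 1) + (math.comb(k, 2) - i) * d)
            * math.comb(k - 1, i) * p(j + i * d, k) for i in range(k))
    return s, d

for k in (3, 4, 5):
    for j in (1, 6):
        for m in (1, 2):
            s, d = I2(k, j, m)
            # Theorem 2: 4 (k-2)! I2 == (-1)^k (k-3) d^(k-1)
            assert s * 4 * math.factorial(k - 2) == (-1) ** k * (k - 3) * d ** (k - 1)

print(I2(4, 3, 1)[0])  # fourth class, m = 1: 216 m^3 -> 216
```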

    Proof. Analogously to Theorem 1, the fact that the sum does not depend on the parameter $j$ is a consequence of the periodicity modulo $\mathrm{LCM}(2,3,\ldots,k)$ of using the same polynomial to calculate the partition-class values.

    In [2] it is shown how the system of linear equations determines the second unknown coefficient of the polynomials which calculate the values of the partition classes. This coefficient is obtained by Cramer's rule from system (1.4), and $a_2$ is given by

    $$a_2 = \frac{-p(x_1,k)\,\Delta_k^{(1,2)} + p(x_2,k)\,\Delta_k^{(2,2)} - \cdots + (-1)^{k}\,p(x_k,k)\,\Delta_k^{(k,2)}}{\Delta_k}. \tag{2.2}$$

    Considering (1.6) and knowing that $\{x_i\}_{i=1,2,\ldots,k}$ is an arithmetic progression, the determinants $\Delta_k^{(a,2)}$ can be written for $1\le a\le k$ as

    $$\Delta_k^{(a,2)} = \Bigg(\sum_{\substack{i\ne a\\ 1\le i\le k}} x_i\Bigg)\Delta_{k-1} = \frac{\Big((k-1)j + \big(\binom{k}{2}-a+1\big)d\Big)\,\Delta_k}{\prod_{i=1}^{a-1}(x_i-x_a)\,\prod_{i=a+1}^{k}(x_a-x_i)} = \frac{\Big((k-1)j + \big(\binom{k}{2}-a+1\big)d\Big)\,\Delta_k}{(-1)^{a-1}(a-1)!\,d^{a-1}\cdot(-1)^{k-a}(k-a)!\,d^{k-a}} = \frac{(-1)^{k-1}\Big((k-1)j+\big(\binom{k}{2}-a+1\big)d\Big)\,\Delta_k}{(a-1)!\,(k-a)!\,d^{k-1}}.$$

    Knowing the value of the coefficient $a_2 = \frac{k-3}{4\,(k-1)!\,(k-2)!}$ [2], substituting into (2.2), and multiplying by $(-1)^{k}(k-1)!\,d^{k-1}$, we obtain

    $$\frac{(-1)^{k}(k-3)}{4\,(k-2)!}\,d^{k-1} = \Big((k-1)j+\tbinom{k}{2}d\Big)\binom{k-1}{0}p(j,k) - \Big((k-1)j+\big(\tbinom{k}{2}-1\big)d\Big)\binom{k-1}{1}p(j+d,k) + \cdots = \sum_{i=0}^{k-1}(-1)^{i}\Big(j(k-1)+\big(\tbinom{k}{2}-i\big)d\Big)\binom{k-1}{i}\,p(j+id,\,k).$$

    The invariants of the next type exist in all classes starting from the fifth. For simplicity, we denote

    $$R(i,j,k,d) = \frac12\left(\Big((k-1)j + \big(\tbinom{k}{2}-i\big)d\Big)^{2} - (k-1)j^2 - \Big(\tfrac16 k(k-1)(2k-1) - i^2\Big)d^2 - 2dj\Big(\tbinom{k}{2}-i\Big)\right).$$

    Theorem 3. Let $m$, $j$ and $k$ be three positive integers, $k \ge 5$, and

    $$I_3(k,j,d) = \sum_{i=0}^{k-1}(-1)^i R(i,j,k,d)\binom{k-1}{i}\,p(j+id,\,k),$$

    where $d = m\cdot\mathrm{LCM}(2,3,\ldots,k)$. Then $I_3(k,j,d) = (-1)^{k-1}\,\dfrac{9k^3-58k^2+75k-2}{288\,(k-3)!}\,d^{k-1}$ and is independent of $j$.
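    A numerical check of Theorem 3 in the fifth class, in exact rational arithmetic (function names ours); by the theorem, $I_3(5,j,d) = \frac{48\,d^4}{576} = \frac{d^4}{12}$ for every admissible $j$ and $m$:

```python
import math
from fractions import Fraction

def p(n, k):
    """Number of partitions of n into exactly k parts (bottom-up DP)."""
    if k < 1 or n < k:
        return 0
    T = [[0] * (k + 1) for _ in range(n + 1)]
    T[0][0] = 1
    for nn in range(1, n + 1):
        for kk in range(1, min(k, nn) + 1):
            T[nn][kk] = T[nn - 1][kk - 1] + T[nn - kk][kk]
    return T[n][k]

def R(i, j, k, d):
    """The quantity R(i,j,k,d) defined before Theorem 3."""
    b = math.comb(k, 2)
    return Fraction(((k - 1) * j + (b - i) * d) ** 2
                    - (k - 1) * j ** 2
                    - (k * (k - 1) * (2 * k - 1) // 6 - i ** 2) * d ** 2
                    - 2 * d * j * (b - i), 2)

def I3(k, j, m):
    """Third invariant (Theorem 3), defined for classes k >= 5."""
    d = m * math.lcm(*range(2, k + 1))
    s = sum((-1) ** i * R(i, j, k, d) * math.comb(k - 1, i) * p(j + i * d, k)
            for i in range(k))
    return s, d

print(I3(5, 1, 1)[0])  # fifth class, m = 1: 1080000 m^4 -> 1080000
```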

    Proof. For the third invariant we need the value of the third polynomial coefficient of $p(n,k)$; it is shown in [2] that this is

    $$a_3 = \frac{9k^3 - 58k^2 + 75k - 2}{288\,(k-1)!\,(k-3)!}, \qquad k \ge 5.$$

    On the other hand, we have

    $$a_3 = \frac{p(x_1,k)\,\Delta_k^{(1,3)} - p(x_2,k)\,\Delta_k^{(2,3)} + \cdots + (-1)^{k-1}\,p(x_k,k)\,\Delta_k^{(k,3)}}{\Delta_k}. \tag{2.3}$$

    From formula (1.7) we find $\Delta_k^{(a,3)}$. The required sum $\sum_{i<j;\,i,j\ne a} x_i x_j$ is conveniently calculated from the equality

    $$\sum_{\substack{i,j\ne a\\ 1\le i<j\le k}} x_i x_j = \frac12\Bigg(\Bigg(\sum_{\substack{i\ne a\\ 1\le i\le k}} x_i\Bigg)^{2} - \sum_{\substack{i\ne a\\ 1\le i\le k}} x_i^{2}\Bigg),$$

    where the sequence $\{x_i\}$ satisfies (1.1). Then we should determine the quotient, which simplifies as follows:

    $$\frac{\Delta_k^{(a,3)}}{\Delta_k} = \frac{R(a-1,\,j,\,k,\,d)}{\prod_{i=1}^{a-1}(x_i-x_a)\,\prod_{i=a+1}^{k}(x_a-x_i)}.$$

    Multiplying (2.3) by $(-1)^{k-1}(k-1)!\,d^{k-1}$ and simplifying, we obtain:

    $$I_3(k,j,d) = \sum_{i=0}^{k-1}(-1)^i R(i,j,k,d)\binom{k-1}{i}\,p(j+id,\,k).$$

    With every subsequent invariant, the computation becomes more complex. However, it is quite clear how further invariants can be calculated.

    For each partition class $k$, $k\in\mathbb{N}$, we determine $d_0 = \mathrm{LCM}(1,2,3,\ldots,k)$ and then form $d = m d_0$, $m\in\mathbb{N}$. In addition, we arbitrarily choose a natural number $j$ and then form the sequences (1.1) and (1.2). Finally, we form the appropriate sum, which for the first invariant is:

    $$\sum_{i=0}^{k-1}(-1)^i\binom{k-1}{i}\,p(j+id,\,k) = \sum_{i=0}^{k-1}(-1)^i\binom{k-1}{i}\,p(x_{i+1},\,k), \qquad j\in\mathbb{N}. \tag{3.1}$$

    Sum (3.1) has a constant value in each partition class and can be nominated as the first partition class invariant.

    For k=1, sum (3.1) has a constant value of 1.

    For $k=2$, $d_0=2$. If we choose some $m\in\mathbb{N}$ and set $d=2m$, the sum (3.1) has the form $p(j,2) - p(j+d,2)$, $j\in\mathbb{N}$. According to [1], it is known that $p(n,2) = \lfloor n/2\rfloor$. Distinguishing between even and odd values of $j$ ($j$ and $j+d$ have the same parity) and substituting into the sum, we obtain that the result, in both cases, is equal to $-\frac{d}{2} = -m$.

    For $k=3$, $d_0=6$. If we choose some $m\in\mathbb{N}$ and set $d=6m$, the sum (3.1) has the form:

    $$p(j,3) - 2p(j+d,3) + p(j+2d,3), \qquad j\in\mathbb{N}. \tag{3.2}$$

    According to [1], it is known that:

    $$p(n,3) = \frac{n^2 + \omega_i}{12}, \qquad i = n \bmod 6, \qquad \omega_i \in \{0,\,-1,\,-4,\,3,\,-4,\,-1\}. \tag{3.3}$$

    By replacing (3.3) in relation (3.2) we get

    $$\frac{j^2+\omega_{i_1}}{12} - 2\,\frac{(j+d)^2+\omega_{i_2}}{12} + \frac{(j+2d)^2+\omega_{i_3}}{12}.$$

    Note that $i_1 = j \bmod 6$, $i_2 = (j+d) \bmod 6$, $i_3 = (j+2d) \bmod 6$, and $\omega_{i_1} = \omega_{i_2} = \omega_{i_3}$. Finally, we get the unique sum $6m^2$.

    For $k=4$, $d_0=12$. If we choose some $m\in\mathbb{N}$ and set $d=12m$, the sum (3.1) has the form:

    $$p(j,4) - 3p(j+d,4) + 3p(j+2d,4) - p(j+3d,4), \qquad j\in\mathbb{N}. \tag{3.4}$$

    According to [1], it is known that:

    $$p(n,4) = \frac{1}{144}n^3 + \frac{1}{48}n^2 + \begin{cases} \dfrac{w_i}{144}, & n \text{ even},\\[4pt] -\dfrac{1}{16}n + \dfrac{w_i}{144}, & n \text{ odd}, \end{cases} \qquad i \equiv n \bmod 12, \tag{3.5}$$
    $$w_i \in \{0,\,5,\,-20,\,-27,\,32,\,-11,\,-36,\,5,\,16,\,-27,\,-4,\,-11\}.$$
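    The residues of (3.3) and (3.5), with their signs made explicit, can be verified against the recurrence; a sketch (the list names `W3` and `W4` are our labels for $\omega_i$ and $w_i$):

```python
from fractions import Fraction

def p(n, k):
    """Number of partitions of n into exactly k parts (bottom-up DP)."""
    if k < 1 or n < k:
        return 0
    T = [[0] * (k + 1) for _ in range(n + 1)]
    T[0][0] = 1
    for nn in range(1, n + 1):
        for kk in range(1, min(k, nn) + 1):
            T[nn][kk] = T[nn - 1][kk - 1] + T[nn - kk][kk]
    return T[n][k]

# signed residues for classes 3 and 4
W3 = [0, -1, -4, 3, -4, -1]
W4 = [0, 5, -20, -27, 32, -11, -36, 5, 16, -27, -4, -11]

def p3(n):
    """Quasi-polynomial (3.3) for the third class."""
    return Fraction(n * n + W3[n % 6], 12)

def p4(n):
    """Quasi-polynomial (3.5) for the fourth class."""
    odd = Fraction(-n, 16) if n % 2 else 0
    return Fraction(n ** 3, 144) + Fraction(n ** 2, 48) + odd + Fraction(W4[n % 12], 144)

assert all(p3(n) == p(n, 3) for n in range(1, 73))
assert all(p4(n) == p(n, 4) for n in range(1, 97))
print("(3.3) and (3.5) match the recurrence on n = 1..96")
```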

    Similarly to the case $k=3$, by distinguishing between even and odd $j$ and replacing (3.5) in relation (3.4), we obtain that the corresponding sums in both cases are equal to $-72m^3$. (Note that $i_1 = j \bmod 12$, $i_2 = (j+d) \bmod 12$, $i_3 = (j+2d) \bmod 12$, $i_4 = (j+3d) \bmod 12$, and $w_{i_1} = w_{i_2} = w_{i_3} = w_{i_4}$.)

    The number of invariants increases as the class number increases. Starting with class three, another invariant can be observed.

    Form, in the same way as in the previous section, $d_0$, $d$, and the sequences (1.1) and (1.2), as well as the sum:

    $$\sum_{i=0}^{k-1}(-1)^i\Big(j(k-1)+\big(\tbinom{k}{2}-i\big)d\Big)\binom{k-1}{i}\,p(j+id,\,k).$$

    The previous sum has a constant value in each partition class (starting from the third class) and can be nominated as the second partition class invariant.

    For $k=3$, $d_0=6$. If we choose some $m\in\mathbb{N}$ and set $d=6m$, the general form of the second invariant in the third class can be written as

    $$(2j+3d)\,p(j,3) - 2(2j+2d)\,p(j+d,3) + (2j+d)\,p(j+2d,3), \qquad j\in\mathbb{N}.$$

    The values $p(j,3)$, $p(j+d,3)$ and $p(j+2d,3)$ are calculated using the same polynomial (3.3). Using (3.3) in the last expression, we have

    $$(2j+3d)\,\frac{j^2+\omega_{i_1}}{12} - 2(2j+2d)\,\frac{(j+d)^2+\omega_{i_2}}{12} + (2j+d)\,\frac{(j+2d)^2+\omega_{i_3}}{12}.$$

    Note that $i_1 = j \bmod 6$, $i_2 = (j+d) \bmod 6$, $i_3 = (j+2d) \bmod 6$, and $\omega_{i_1} = \omega_{i_2} = \omega_{i_3}$. The last expression is identically equal to zero.

    For $k=4$, $d_0=12$. If we choose some $m\in\mathbb{N}$ and set $d=12m$, the general form of the second invariant in the fourth class can be written as

    $$(3j+6d)\,p(j,4) - 3(3j+5d)\,p(j+d,4) + 3(3j+4d)\,p(j+2d,4) - (3j+3d)\,p(j+3d,4). \tag{3.6}$$

    The last equation can be verified in an analogous manner, by using the known polynomial for the fourth class given in (3.5). Note that $i_1 = j \bmod 12$, $i_2 = (j+d) \bmod 12$, $i_3 = (j+2d) \bmod 12$, $i_4 = (j+3d) \bmod 12$, and $w_{i_1} = w_{i_2} = w_{i_3} = w_{i_4}$. By distinguishing between even and odd $j$ and replacing (3.5) in relation (3.6), we obtain that the corresponding sums in both cases are equal to $216m^3$.

    Form, in the same way as in the previous two sections, $d_0$, $d$, and the sequences (1.1) and (1.2), as well as the sum $I_3(k,j,d)$ (Theorem 3). For each class (starting from the fifth), $I_3(k,j,d)$ has a constant value and can be nominated as the third partition class invariant. It is known [1] that

    $$p(n,5) = \frac{1}{2880}n^4 + \frac{1}{288}n^3 + \frac{1}{288}n^2 + \begin{cases} -\dfrac{1}{24}n + \dfrac{w_i}{2880}, & n \text{ even},\\[4pt] -\dfrac{1}{96}n + \dfrac{w_i}{2880}, & n \text{ odd}, \end{cases} \qquad i \equiv n \bmod 60, \tag{3.7}$$

    where the $w_i$, for $i = 0, 1, \ldots, 59$ respectively, are the following numbers:

    $$0,\; 9,\; 104,\; -351,\; -576,\; 905,\; -216,\; -351,\; -256,\; 9,\; 360,\; -31,\; -576,\; 9,\; 104,\; 225,\; -576,\; 329,\; -216,\; -351,\; 320,\; 9,\; -216,\; -31,\; -576,\; 585,\; 104,\; -351,\; -576,\; 329,\; 360,\; -351,\; -256,\; 9,\; -216,\; 545,\; -576,\; 9,\; 104,\; -351,\; 0,\; 329,\; -216,\; -351,\; -256,\; 585,\; -216,\; -31,\; -576,\; 9,\; 680,\; -351,\; -576,\; 329,\; -216,\; 225,\; -256,\; 9,\; -216,\; -31.$$
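    Formula (3.7), with the residues taken with their signs, can likewise be verified against the recurrence over two full periods (the list name `W5` is our label for the $w_i$):

```python
from fractions import Fraction

def p(n, k):
    """Number of partitions of n into exactly k parts (bottom-up DP)."""
    if k < 1 or n < k:
        return 0
    T = [[0] * (k + 1) for _ in range(n + 1)]
    T[0][0] = 1
    for nn in range(1, n + 1):
        for kk in range(1, min(k, nn) + 1):
            T[nn][kk] = T[nn - 1][kk - 1] + T[nn - kk][kk]
    return T[n][k]

# signed residues w_0, ..., w_59 of the fifth class
W5 = [0, 9, 104, -351, -576, 905, -216, -351, -256, 9, 360, -31,
      -576, 9, 104, 225, -576, 329, -216, -351, 320, 9, -216, -31,
      -576, 585, 104, -351, -576, 329, 360, -351, -256, 9, -216, 545,
      -576, 9, 104, -351, 0, 329, -216, -351, -256, 585, -216, -31,
      -576, 9, 680, -351, -576, 329, -216, 225, -256, 9, -216, -31]

def p5(n):
    """Quasi-polynomial (3.7) for the fifth class."""
    lin = Fraction(-n, 24) if n % 2 == 0 else Fraction(-n, 96)
    return (Fraction(n ** 4, 2880) + Fraction(n ** 3, 288)
            + Fraction(n ** 2, 288) + lin + Fraction(W5[n % 60], 2880))

assert all(p5(n) == p(n, 5) for n in range(1, 121))
print(p5(61))  # -> 5608
```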

    For $k=5$, $d_0=60$. If we choose some $m\in\mathbb{N}$ and set $d=60m$, the invariant $I_3(k,j,d)$ can be written as:

    $$\begin{aligned} &\tfrac12\big((4j+10d)^2 - 4j^2 - 20dj - 30d^2\big)\,p(j,5) - \tfrac12\big((4j+9d)^2 - 4j^2 - 18dj - 29d^2\big)\,p(j+d,5)\\ &\quad + \tfrac12\big((4j+8d)^2 - 4j^2 - 16dj - 26d^2\big)\,p(j+2d,5) - \tfrac12\big((4j+7d)^2 - 4j^2 - 14dj - 21d^2\big)\,p(j+3d,5)\\ &\quad + \tfrac12\big((4j+6d)^2 - 4j^2 - 12dj - 14d^2\big)\,p(j+4d,5). \end{aligned}$$

    Substituting (3.7) into the previous formula and distinguishing between even and odd $j$, we obtain the unique value $1080000m^4$.

    Remark 1. From Table 1, see [5], given at the end of the paper, it is possible to check all of these claims explicitly with numerical values. For example:

    Table 1.  Partition classes values.
    d0 1 2 6 12 60 60 420 840 2520 2520 27720
    n/k 1 2 3 4 5 6 7 8 9 10 11 p(n)
    1 1 0 0 0 0 0 0 0 0 0 0 1
    2 1 1 0 0 0 0 0 0 0 0 0 2
    3 1 1 1 0 0 0 0 0 0 0 0 3
    4 1 2 1 1 0 0 0 0 0 0 0 5
    5 1 2 2 1 1 0 0 0 0 0 0 7
    6 1 3 3 2 1 1 0 0 0 0 0 11
    7 1 3 4 3 2 1 1 0 0 0 0 15
    8 1 4 5 5 3 2 1 1 0 0 0 22
    9 1 4 7 6 5 3 2 1 1 0 0 30
    10 1 5 8 9 7 5 3 2 1 1 0 42
    11 1 5 10 11 10 7 5 3 2 1 1 56
    12 1 6 12 15 13 11 7 5 3 2 1 77
    13 1 6 14 18 18 14 11 7 5 3 2 101
    14 1 7 16 23 23 20 15 11 7 5 3 135
    15 1 7 19 27 30 26 21 15 11 7 5 176
    16 1 8 21 34 37 35 28 22 15 11 7 231
    17 1 8 24 39 47 44 38 29 22 15 11 297
    18 1 9 27 47 57 58 49 40 30 22 15 385
    19 1 9 30 54 70 71 65 52 41 30 22 490
    20 1 10 33 64 84 90 82 70 54 42 30 627
    21 1 10 37 72 101 110 105 89 73 55 43 792
    22 1 11 40 84 119 136 131 116 94 75 56 1002
    23 1 11 44 94 141 163 164 146 123 97 77 1255
    24 1 12 48 108 164 199 201 186 157 128 100 1575
    25 1 12 52 120 192 235 248 230 201 164 133 1958
    26 1 13 56 136 221 282 300 288 252 212 171 2436
    27 1 13 61 150 255 331 364 352 318 267 223 3010
    28 1 14 65 169 291 391 436 434 393 340 282 3718
    29 1 14 70 185 333 454 522 525 488 423 362 4565
    30 1 15 75 206 377 532 618 638 598 530 453 5604
    31 1 15 80 225 427 612 733 764 732 653 573 6842
    32 1 16 85 249 480 709 860 919 887 807 709 8349
    33 1 16 91 270 540 811 1009 1090 1076 984 884 10143
    34 1 17 96 297 603 931 1175 1297 1291 1204 1084 12310
    35 1 17 102 321 674 1057 1369 1527 1549 1455 1337 14883
    36 1 18 108 351 748 1206 1579 1801 1845 1761 1626 17977
    37 1 18 114 378 831 1360 1824 2104 2194 2112 1984 21637


    1. Check the first invariant in the third class. Take $m=2$, $j=5$ (so $d=12$). The first invariant formula is

    $$p(5,3) - 2p(17,3) + p(29,3).$$

    From the Table we find $p(5,3)=2$, $p(17,3)=24$, $p(29,3)=70$. By substitution we find $2 - 2\cdot 24 + 70 = 24\ (=6m^2)$.

    2. Check the second invariant in the fourth class. Take $m=1$, $j=3$ (so $d=12$). The second invariant formula is

    $$81\,p(3,4) - 3\cdot 69\,p(15,4) + 3\cdot 57\,p(27,4) - 45\,p(39,4).$$

    From the Table we find $p(3,4)=0$, $p(15,4)=27$, $p(27,4)=150$, $p(39,4)=441$. By substitution we find:

    $$81\cdot 0 - 3\cdot 69\cdot 27 + 3\cdot 57\cdot 150 - 45\cdot 441 = 216\ (=216m^3).$$

    3. Check the third invariant in the fifth class. Take $m=1$, $j=1$ (so $d=60$). The third invariant formula is

    $$127806\,p(1,5) - 380904\,p(61,5) + 419076\,p(121,5) - 206664\,p(181,5) + 40686\,p(241,5) = 1080000.$$

    Using the formulas from (3.7), we find that $p(1,5)=0$, $p(61,5)=5608$, $p(121,5)=80631$, $p(181,5)=393369$ and $p(241,5)=1220122$, and by checking we are assured of the accuracy.
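    All three checks of this remark can be reproduced mechanically from the recurrence alone (the helper name `p` is ours):

```python
def p(n, k):
    """Number of partitions of n into exactly k parts (bottom-up DP)."""
    if k < 1 or n < k:
        return 0
    T = [[0] * (k + 1) for _ in range(n + 1)]
    T[0][0] = 1
    for nn in range(1, n + 1):
        for kk in range(1, min(k, nn) + 1):
            T[nn][kk] = T[nn - 1][kk - 1] + T[nn - kk][kk]
    return T[n][k]

# Check 1: first invariant, third class, m = 2, j = 5 (d = 12); value 6 m^2
assert p(5, 3) - 2 * p(17, 3) + p(29, 3) == 24
# Check 2: second invariant, fourth class, m = 1, j = 3 (d = 12); value 216 m^3
assert 81 * p(3, 4) - 3 * 69 * p(15, 4) + 3 * 57 * p(27, 4) - 45 * p(39, 4) == 216
# Check 3: third invariant, fifth class, m = 1, j = 1 (d = 60); value 1080000 m^4
assert (127806 * p(1, 5) - 380904 * p(61, 5) + 419076 * p(121, 5)
        - 206664 * p(181, 5) + 40686 * p(241, 5)) == 1080000
print("all three numerical checks pass")
```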

    Remark 2. Obviously, $p(n,k)$ defines values only for $n \ge k$. The invariants determine very precisely that the values for $n < k$ should be taken as zero.

    In this paper, the author has demonstrated a new approach to partition class invariants, as a way of proving the relevance and accuracy of the formulas given in [1,2]. It can also be considered another way to obtain some of the formulas in [2]. The quasi-polynomial $p(n,k)$, needed to calculate the number of partitions of a number $n$ into exactly $k$ parts, consists of at most $\mathrm{LCM}(1,2,\ldots,k)$ different polynomials. The invariants show that the more different polynomials there are in one quasi-polynomial, the more invariant quantities connect them.

    The author thanks The Academy of Applied Technical Studies Belgrade for partially funding this paper.

    The author declares no conflicts of interest in this paper.



    [1] Bakeman R, Quera V (2011) Sequential Analysis and Observational Methods for the Behavioral Sciences. Cambridge University Press. https://doi.org/10.1017/CBO9781139017343
    [2] Metz GA, Whishaw IQ (2002) Cortical and subcortical lesions impair skilled walking in the ladder rung walking test: a new task to evaluate fore-and hindlimb stepping, placing, and co-ordination. J Neurosci Meth 115: 169-179. https://doi.org/10.1016/S0165-0270(02)00012-2
    [3] Spano R (2005) Potential sources of observer bias in police observational data. Soc Sci Res 34: 591-617. https://doi.org/10.1016/j.ssresearch.2004.05.003
    [4] Asan O, Montague E (2014) Using video-based observation research methods in primary care health encounters to evaluate complex interactions. J Innov Health Inform 21: 161-170. https://doi.org/10.14236/jhi.v21i4.72
    [5] Moran RW, Schneiders AG, Major KM, et al. (2016) How reliable are functional movement screening scores? A systematic review of rater reliability. Brit J Sport Med 50: 527-536. https://doi.org/10.1136/bjsports-2015-094913
    [6] Mathis MW, Mathis A (2020) Deep learning tools for the measurement of animal behavior in neuroscience. Curr Opin Neurobiol 60: 1-11. https://doi.org/10.1016/j.conb.2019.10.008
    [7] Gautam R, Sharma M (2020) Prevalence and diagnosis of neurological disorders using different deep learning techniques: a meta-analysis. J Med Syst 44: 49. https://doi.org/10.1007/s10916-019-1519-7
    [8] Singh KR, Dash S (2023) Early detection of neurological diseases using machine learning and deep learning techniques: a review. Artif Intell Neurol Diso 2023: 1-24. https://doi.org/10.1016/B978-0-323-90277-9.00001-8
    [9] Arac A, Zhao P, Dobkin BH, et al. (2019) DeepBehavior: A deep learning toolbox for automated analysis of animal and human behavior imaging data. Front Syst Neurosci 13: 20. https://doi.org/10.3389/fnsys.2019.00020
    [10] Sewak M, Sahay SK, Rathore H (2020) An overview of deep learning architecture of deep neural networks and autoencoders. J Comput Theor Nanos 17: 182-188. https://doi.org/10.1166/jctn.2020.8648
    [11] Brattoli B, Büchler U, Dorkenwald M, et al. (2021) Unsupervised behaviour analysis and magnification (uBAM) using deep learning. Nat Mach Intell 3: 495-506. https://doi.org/10.1038/s42256-021-00326-x
    [12] ul Haq A, Li JP, Agbley BLY, et al. (2022) A survey of deep learning techniques based Parkinson's disease recognition methods employing clinical data. Expert Syst Appl 208: 118045. https://doi.org/10.1016/j.eswa.2022.118045
    [13] Nilashi M, Abumalloh RA, Yusuf SYM, et al. (2023) Early diagnosis of Parkinson's disease: a combined method using deep learning and neuro-fuzzy techniques. Comput Biol Chem 102: 107788. https://doi.org/10.1016/j.compbiolchem.2022.107788
    [14] Shahid AH, Singh MP (2020) A deep learning approach for prediction of Parkinson's disease progression. Biomed Eng Lett 10: 227-239. https://doi.org/10.1007/s13534-020-00156-7
    [15] Chintalapudi N, Battineni G, Hossain MA, et al. (2022) Cascaded deep learning frameworks in contribution to the detection of parkinson's disease. Bioengineering 9: 116. https://doi.org/10.3390/bioengineering9030116
    [16] Almuqhim F, Saeed F (2021) ASD-SAENet: a sparse autoencoder, and deep-neural network model for detecting autism spectrum disorder (ASD) using fMRI data. Front Comput Neurosci 15: 654315. https://doi.org/10.3389/fncom.2021.654315
    [17] Zhang L, Wang M, Liu M, et al. (2020) A survey on deep learning for neuroimaging-based brain disorder analysis. Front Neurosci 14: 779. https://doi.org/10.3389/fnins.2020.00779
    [18] Uddin MZ, Shahriar MA, Mahamood MN, et al. (2024) Deep learning with image-based autism spectrum disorder analysis: a systematic review. Eng Appl Artif Intel 127: 107185. https://doi.org/10.1016/j.engappai.2023.107185
    [19] Gupta C, Chandrashekar P, Jin T, et al. (2022) Bringing machine learning to research on intellectual and developmental disabilities: taking inspiration from neurological diseases. J Neurodev Disord 14: 28. https://doi.org/10.1186/s11689-022-09438-w
    [20] Saleh AY, Chern LH (2021) Autism spectrum disorder classification using deep learning. IJOE 17: 103-114. https://doi.org/10.3991/ijoe.v17i08.24603
    [21] Koppe G, Meyer-Lindenberg A, Durstewitz D (2021) Deep learning for small and big data in psychiatry. Neuropsychopharmacology 46: 176-190. https://doi.org/10.1038/s41386-020-0767-z
    [22] Gütter J, Kruspe A, Zhu XX, et al. (2022) Impact of training set size on the ability of deep neural networks to deal with omission noise. Front Remote Sens 3: 932431. https://doi.org/10.3389/frsen.2022.932431
    [23] Sturman O, von Ziegler L, Schläppi C, et al. (2020) Deep learning-based behavioral analysis reaches human accuracy and is capable of outperforming commercial solutions. Neuropsychopharmacology 45: 1942-1952. https://doi.org/10.1038/s41386-020-0776-y
    [24] He T, Kong R, Holmes AJ, et al. (2020) Deep neural networks and kernel regression achieve comparable accuracies for functional connectivity prediction of behavior and demographics. NeuroImage 206: 116276. https://doi.org/10.1016/j.neuroimage.2019.116276
    [25] Chen M, Li H, Wang J, et al. (2019) A multichannel deep neural network model analyzing multiscale functional brain connectome data for attention deficit hyperactivity disorder detection. Radiol Artif Intell 2: e190012. https://doi.org/10.1148/ryai.2019190012
    [26] Golshan HM, Hebb AO, Mahoor MH (2020) LFP-Net: a deep learning framework to recognize human behavioral activities using brain STN-LFP signals. J Neurosci Meth 335: 108621. https://doi.org/10.1016/j.jneumeth.2020.108621
    [27] Sutoko S, Masuda A, Kandori A, et al. (2021) Early identification of Alzheimer's disease in mouse models: Application of deep neural network algorithm to cognitive behavioral parameters. Iscience 24: 102198. https://doi.org/10.1016/j.isci.2021.102198
    [28] Tarigopula P, Fairhall SL, Bavaresco A, et al. (2023) Improved prediction of behavioral and neural similarity spaces using pruned DNNs. Neural Networks 168: 89-104. https://doi.org/10.1016/j.neunet.2023.08.049
    [29] Uyulan C, Ergüzel TT, Unubol H, et al. (2021) Major depressive disorder classification based on different convolutional neural network models: deep learning approach. Clin EEG Neurosci 52: 38-51. https://doi.org/10.1177/1550059420916634
    [30] Wen J, Thibeau-Sutre E, Diaz-Melo M, et al. (2020) Convolutional neural networks for classification of Alzheimer's disease: overview and reproducible evaluation. Med Image Anal 63: 101694. https://doi.org/10.1016/j.media.2020.101694
    [31] Karthik R, Menaka R, Johnson A, et al. (2020) Neuroimaging and deep learning for brain stroke detection-A review of recent advancements and future prospects. Comput Meth Prog Bio 197: 105728. https://doi.org/10.1016/j.cmpb.2020.105728
    [32] Iqbal MS, Heyat MBB, Parveen S, et al. (2024) Progress and trends in neurological disorders research based on deep learning. Comput Med Imag Grap 116: 102400. https://doi.org/10.1016/j.compmedimag.2024.102400
    [33] Kim S, Pathak S, Parise R, et al. (2024) The thriving influence of artificial intelligence in neuroscience. Application of Artificial Intelligence in Neurological Disorders . Singapore: Springer Nature Singapore 157-184. https://doi.org/10.1007/978-981-97-2577-9_9
    [34] Lima AA, Mridha MF, Das SC, et al. (2022) A comprehensive survey on the detection, classification, and challenges of neurological disorders. Biology 11: 469. https://doi.org/10.3390/biology11030469
    [35] Mulpuri RP, Konda N, Gadde ST, et al. (2024) Artificial intelligence and machine learning in neuroregeneration: a systematic review. Cureus 16: e61400. https://doi.org/10.7759/cureus.61400
    [36] Keserwani PK, Das S, Sarkar N (2024) A comparative study: prediction of parkinson's disease using machine learning, deep learning and nature inspired algorithm. Multimed Tools Appl 83: 69393-69441. https://doi.org/10.1007/s11042-024-18186-z
    [37] Fatima A, Masood S (2024) Machine learning approaches for neurological disease prediction: a systematic review. Expert Syst 41: e13569. https://doi.org/10.1111/exsy.13569
    [38] Surianarayanan C, Lawrence JJ, Chelliah PR, et al. (2023) Convergence of artificial intelligence and neuroscience towards the diagnosis of neurological disorders—a scoping review. Sensors 23: 3062. https://doi.org/10.3390/s23063062
    [39] Lombardi A, Diacono D, Amoroso N, et al. (2021) Explainable deep learning for personalized age prediction with brain morphology. Front Neurosci 15: 674055. https://doi.org/10.3389/fnins.2021.674055
    [40] Choo YJ, Chang MC (2022) Use of machine learning in stroke rehabilitation: a narrative review. Brain Neurorehab 15: e26. https://doi.org/10.12786/bn.2022.15.e26
    [41] Ryait H, Bermudez-Contreras E, Harvey M, et al. (2019) Data-driven analyses of motor impairments in animal models of neurological disorders. PLoS Biol 17: e3000516. https://doi.org/10.1371/journal.pbio.3000516
    [42] Nguyen HS, Ho DKN, Nguyen NN, et al. (2024) Predicting EGFR mutation status in non-small cell lung cancer using artificial intelligence: a systematic review and meta-analysis. Acad Radiol 31: 660-683. https://doi.org/10.1016/j.acra.2023.03.040
    [43] Zhang Y, Yao Q, Yue L, et al. (2023) Emerging drug interaction prediction enabled by a flow-based graph neural network with biomedical network. Nat Comput Sci 3: 1023-1033. https://doi.org/10.1038/s43588-023-00558-4
    [44] Le NQK (2023) Predicting emerging drug interactions using GNNs. Nat Comput Sci 3: 1007-1008. https://doi.org/10.1038/s43588-023-00555-7
    [45] Abed Mohammed A, Sumari P (2024) Hybrid k-means and principal component analysis (PCA) for diabetes prediction. Int J Comput Dig Syst 15: 1719-1728. https://doi.org/10.12785/ijcds/1501121
    [46] Mostafa F, Hasan E, Williamson M, et al. (2021) Statistical machine learning approaches to liver disease prediction. Livers 1: 294-312. https://doi.org/10.3390/livers1040023
    [47] Jackins V, Vimal S, Kaliappan M, et al. (2021) AI-based smart prediction of clinical disease using random forest classifier and Naive Bayes. J Supercomput 77: 5198-5219. https://doi.org/10.1007/s11227-020-03481-x
    [48] Cho G, Yim J, Choi Y, et al. (2019) Review of machine learning algorithms for diagnosing mental illness. Psychiat Invest 16: 262. https://doi.org/10.30773/pi.2018.12.21.2
    [49] Aljrees T (2024) Improving prediction of cervical cancer using KNN imputer and multi-model ensemble learning. PLoS One 19: e0295632. https://doi.org/10.1371/journal.pone.0295632
    [50] Hajare S, Rewatkar R, Reddy KTV (2024) Design of an iterative method for enhanced early prediction of acute coronary syndrome using XAI analysis. AIMS Bioeng 11: 301-322. https://doi.org/10.3934/bioeng.2024016
    [51] Schjetnan AGP, Luczak A (2011) Recording large-scale neuronal ensembles with silicon probes in the anesthetized rat. J Vis Exp 56: e3282. https://doi.org/10.3791/3282-v
    [52] Luczak A, Narayanan NS (2005) Spectral representation-analyzing single-unit activity in extracellularly recorded neuronal data without spike sorting. J Neurosci Meth 144: 53-61. https://doi.org/10.1016/j.jneumeth.2004.10.009
    [53] Luczak A, Hackett TA, Kajikawa Y, et al. (2004) Multivariate receptive field mapping in marmoset auditory cortex. J Neurosci Meth 136: 77-85. https://doi.org/10.1016/j.jneumeth.2003.12.019
    [54] Luczak A (2010) Measuring neuronal branching patterns using model-based approach. Front Comput Neurosci 4: 135. https://doi.org/10.3389/fncom.2010.00135
    [55] Luczak A, Kubo Y (2022) Predictive neuronal adaptation as a basis for consciousness. Front Syst Neurosci 15: 767461. https://doi.org/10.3389/fnsys.2021.767461
    [56] Lepakshi VA (2022) Machine learning and deep learning based AI tools for development of diagnostic tools. Computational Approaches for Novel Therapeutic and Diagnostic Designing to Mitigate SARS-CoV-2 Infection . Academic Press 399-420. https://doi.org/10.1016/B978-0-323-91172-6.00011-X
    [57] Montavon G, Binder A, Lapuschkin S, et al. (2019) Layer-wise relevance propagation: an overview. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning . Cham: Springer 193-209. https://doi.org/10.1007/978-3-030-28954-6_10
    [58] Nazir S, Dickson DM, Akram MU (2023) Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 156: 106668. https://doi.org/10.1016/j.compbiomed.2023.106668
    [59] Torabi R, Jenkins S, Harker A, et al. (2021) A neural network reveals motoric effects of maternal preconception exposure to nicotine on rat pup behavior: a new approach for movement disorders diagnosis. Front Neurosci 15: 686767. https://doi.org/10.3389/fnins.2021.686767
    [60] Shahtalebi S, Atashzar SF, Patel RV, et al. (2021) A deep explainable artificial intelligent framework for neurological disorders discrimination. Sci Rep 11: 9630. https://doi.org/10.1038/s41598-021-88919-9
    [61] Morabito FC, Ieracitano C, Mammone N (2023) An explainable artificial intelligence approach to study MCI to AD conversion via HD-EEG processing. Clin EEG Neurosci 54: 51-60. https://doi.org/10.1177/15500594211063662
    [62] Goodwin NL, Nilsson SRO, Choong JJ, et al. (2022) Toward the explainability, transparency, and universality of machine learning for behavioral classification in neuroscience. Curr Opin Neurobiol 73: 102544. https://doi.org/10.1016/j.conb.2022.102544
    [63] Lindsay GW (2024) Grounding neuroscience in behavioral changes using artificial neural networks. Curr Opin Neurobiol 84: 102816. https://doi.org/10.1016/j.conb.2023.102816
    [64] Dan T, Kim M, Kim WH, et al. (2023) Developing explainable deep model for discovering novel control mechanism of neuro-dynamics. IEEE T Med Imaging 43: 427-438. https://doi.org/10.1109/TMI.2023.3309821
    [65] Fellous JM, Sapiro G, Rossi A, et al. (2019) Explainable artificial intelligence for neuroscience: behavioral neurostimulation. Front Neurosci 13: 1346. https://doi.org/10.3389/fnins.2019.01346
    [66] Bartle AS, Jiang Z, Jiang R, et al. (2022) A critical appraisal on deep neural networks: bridge the gap between deep learning and neuroscience via XAI. Handbook on Computer Learning and Intelligence, Volume 2: Deep Learning, Intelligent Control and Evolutionary Computation 2022: 619-634. https://doi.org/10.1142/9789811247323_0015
    [67] Lemon RN (1997) Mechanisms of cortical control of hand function. Neuroscientist 3: 389-398. https://doi.org/10.1177/107385849700300612
    [68] Alaverdashvili M, Whishaw IQ (2013) A behavioral method for identifying recovery and compensation: hand use in a preclinical stroke model using the single pellet reaching task. Neurosci Biobehav R 37: 950-967. https://doi.org/10.1016/j.neubiorev.2013.03.026
    [69] Metz GAS, Whishaw IQ (2000) Skilled reaching an action pattern: stability in rat (Rattus norvegicus) grasping movements as a function of changing food pellet size. Behav Brain Res 116: 111-122. https://doi.org/10.1016/S0166-4328(00)00245-X
    [70] Faraji J, Gomez-Palacio-Schjetnan A, Luczak A, et al. (2013) Beyond the silence: bilateral somatosensory stimulation enhances skilled movement quality and neural density in intact behaving rats. Behav Brain Res 253: 78-89. https://doi.org/10.1016/j.bbr.2013.07.022
    [71] Sheu Y (2020) Illuminating the black box: interpreting deep neural network models for psychiatric research. Front Psychiatry 11: 551299. https://doi.org/10.3389/fpsyt.2020.551299
    [72] Fan FL, Xiong J, Li M, et al. (2021) On interpretability of artificial neural networks: a survey. IEEE T Radiat Plasma 5: 741-760. https://doi.org/10.1109/TRPMS.2021.3066428
    [73] Smucny J, Shi G, Davidson I (2022) Deep learning in neuroimaging: overcoming challenges with emerging approaches. Front Psychiatry 13: 912600. https://doi.org/10.3389/fpsyt.2022.912600
    [74] Kohlbrenner M, Bauer A, Nakajima S, et al. (2020) Towards best practice in explaining neural network decisions with LRP. 2020 International Joint Conference on Neural Networks (IJCNN) . IEEE 1-7. https://doi.org/10.1109/IJCNN48605.2020.9206975
    [75] Farahani FV, Fiok K, Lahijanian B, et al. (2022) Explainable AI: a review of applications to neuroimaging data. Front Neurosci 16: 906290. https://doi.org/10.3389/fnins.2022.906290
    [76] Böhle M, Eitel F, Weygandt M, et al. (2019) Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer's disease classification. Front Aging Neurosci 11: 456892. https://doi.org/10.3389/fnagi.2019.00194
    [77] Marques dos Santos JD, Marques dos Santos JP (2023) Path-weights and layer-wise relevance propagation for explainability of ANNs with fMRI data. International Conference on Machine Learning, Optimization, and Data Science . Cham: Springer Nature Switzerland 433-448. https://doi.org/10.1007/978-3-031-53966-4_32
    [78] Filtjens B, Ginis P, Nieuwboer A, et al. (2021) Modelling and identification of characteristic kinematic features preceding freezing of gait with convolutional neural networks and layer-wise relevance propagation. BMC Med Inform Decis Mak 21: 341. https://doi.org/10.1186/s12911-021-01699-0
    [79] Li H, Tian Y, Mueller K, et al. (2019) Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation. Image Vision Comput 83: 70-86. https://doi.org/10.1016/j.imavis.2019.02.005
    [80] Nam H, Kim JM, Choi W, et al. (2023) The effects of layer-wise relevance propagation-based feature selection for EEG classification: a comparative study on multiple datasets. Front Hum Neurosci 17: 1205881. https://doi.org/10.3389/fnhum.2023.1205881
    [81] Korda AI, Ruef A, Neufang S, et al. (2021) Identification of voxel-based texture abnormalities as new biomarkers for schizophrenia and major depressive patients using layer-wise relevance propagation on deep learning decisions. Psychiat Res-Neuroim 313: 111303. https://doi.org/10.1016/j.pscychresns.2021.111303
    [82] von Ziegler L, Sturman O, Bohacek J (2021) Big behavior: challenges and opportunities in a new era of deep behavior profiling. Neuropsychopharmacology 46: 33-44. https://doi.org/10.1038/s41386-020-0751-7
    [83] Marks M, Jin Q, Sturman O, et al. (2022) Deep-learning-based identification, tracking, pose estimation and behaviour classification of interacting primates and mice in complex environments. Nat Mach Intell 4: 331-340. https://doi.org/10.1038/s42256-022-00477-5
    [84] Bohnslav JP, Wimalasena NK, Clausing KJ, et al. (2021) DeepEthogram, a machine learning pipeline for supervised behavior classification from raw pixels. Elife 10: e63377. https://doi.org/10.7554/eLife.63377
    [85] Wang PY, Sapra S, George VK, et al. (2021) Generalizable machine learning in neuroscience using graph neural networks. Front Artif Intell 4: 618372. https://doi.org/10.3389/frai.2021.618372
    [86] Watson DS, Krutzinna J, Bruce IN, et al. (2019) Clinical applications of machine learning algorithms: beyond the black box. BMJ 364: l886. https://doi.org/10.2139/ssrn.3352454
    [87] Jain A, Salas M, Aimer O, et al. (2024) Safeguarding patients in the AI era: ethics at the forefront of pharmacovigilance. Drug Safety 48: 119-127. https://doi.org/10.1007/s40264-024-01483-9
    [88] Murdoch B (2021) Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics 22: 1-5. https://doi.org/10.1186/s12910-021-00687-3
    [89] Ziesche S (2021) AI ethics and value alignment for nonhuman animals. Philosophies 6: 31. https://doi.org/10.3390/philosophies6020031
    [90] Bossert L, Hagendorff T (2021) Animals and AI. The role of animals in AI research and application-an overview and ethical evaluation. Technol Soc 67: 101678. https://doi.org/10.1016/j.techsoc.2021.101678
    [91] Gong Y, Liu G, Xue Y, et al. (2023) A survey on dataset quality in machine learning. Inform Software Tech 162: 107268. https://doi.org/10.1016/j.infsof.2023.107268
    [92] Bolaños LA, Xiao D, Ford NL, et al. (2021) A three-dimensional virtual mouse generates synthetic training data for behavioral analysis. Nat Methods 18: 378-381. https://doi.org/10.1038/s41592-021-01103-9
    [93] Lashgari E, Liang D, Maoz U (2020) Data augmentation for deep-learning-based electroencephalography. J Neurosci Methods 346: 108885. https://doi.org/10.1016/j.jneumeth.2020.108885
    [94] Barile B, Marzullo A, Stamile C, et al. (2021) Data augmentation using generative adversarial neural networks on brain structural connectivity in multiple sclerosis. Comput Meth Prog Bio 206: 106113. https://doi.org/10.1016/j.cmpb.2021.106113
    [95] Memar S, Jiang E, Prado VF, et al. (2023) Open science and data sharing in cognitive neuroscience with MouseBytes and MouseBytes+. Sci Data 10: 210. https://doi.org/10.1038/s41597-023-02106-1
    [96] Jleilaty S, Ammounah A, Abdulmalek G, et al. (2024) Distributed real-time control architecture for electrohydraulic humanoid robots. Robot Intell Automat 44: 607-620. https://doi.org/10.1108/RIA-01-2024-0013
    [97] Zhao J, Wang Z, Lv Y, et al. (2024) Data-driven learning for H∞ control of adaptive cruise control systems. IEEE Trans Veh Technol 73: 18348-18362. https://doi.org/10.1109/TVT.2024.3447060
    [98] Kelly CJ, Karthikesalingam A, Suleyman M, et al. (2019) Key challenges for delivering clinical impact with artificial intelligence. BMC Med 17: 1-9. https://doi.org/10.1186/s12916-019-1426-2
    [99] Kulkarni PA, Singh H (2023) Artificial intelligence in clinical diagnosis: opportunities, challenges, and hype. JAMA 330: 317-318. https://doi.org/10.1001/jama.2023.11440
    [100] Choudhury A, Asan O (2020) Role of artificial intelligence in patient safety outcomes: systematic literature review. JMIR Med Inform 8: e18599. https://doi.org/10.2196/18599
    [101] Ratwani RM, Sutton K, Galarraga JE (2024) Addressing AI algorithmic bias in health care. JAMA 332: 1051-1052. https://doi.org/10.1001/jama.2024.13486
    [102] Chen C, Sundar SS (2024) Communicating and combating algorithmic bias: effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust. Hum-Comput Interact 2024: 1-37. https://doi.org/10.1080/07370024.2024.2392494
    [103] Chen F, Wang L, Hong J, et al. (2024) Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Med Inform Assn 31: 1172-1183. https://doi.org/10.1093/jamia/ocae060
    [104] Ienca M, Ignatiadis K (2020) Artificial intelligence in clinical neuroscience: methodological and ethical challenges. AJOB Neurosci 11: 77-87. https://doi.org/10.1080/21507740.2020.1740352
    [105] Avberšek LK, Repovš G (2022) Deep learning in neuroimaging data analysis: applications, challenges, and solutions. Front Neuroimag 1: 981642. https://doi.org/10.3389/fnimg.2022.981642
© 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)