
An image filtering method for dataset production

  • Received: 14 March 2024 Revised: 17 June 2024 Accepted: 19 June 2024 Published: 27 June 2024
  • To address the lack of specialized data filtering algorithms for dataset production, we proposed an image filtering algorithm. A feature fusion method was used to improve the discrete wavelet transform (DWT) and enhance the robustness of image feature extraction, and a weighted hash algorithm was proposed to hash the features, reducing the complexity and computational cost of feature comparison. To minimize the time cost of image filtering as much as possible, a fast distance calculation method was also proposed to compute image similarity. The experimental results showed that, compared with other advanced methods, the algorithm proposed in this paper improved average accuracy by 3% and speed by at least 30%. Compared with traditional manual filtering, while maintaining accuracy, the filtering time for a single image was reduced from 9.9 s to 0.01 s, which is of significant practical value for dataset production.

    Citation: Ling Li, Dan He, Cheng Zhang. An image filtering method for dataset production[J]. Electronic Research Archive, 2024, 32(6): 4164-4180. doi: 10.3934/era.2024187



    In 1935, D. H. Lehmer [20] introduced and investigated generalized Euler numbers $W_n$, defined by the generating function

    $$\frac{3}{e^t+e^{\omega t}+e^{\omega^2 t}}=\sum_{n=0}^{\infty}W_n\frac{t^n}{n!}, \tag{1.1}$$

    where $\omega=\frac{-1+\sqrt{-3}}{2}$ and $\omega^2=\bar{\omega}=\frac{-1-\sqrt{-3}}{2}$ are the cube roots of unity. Notice that $W_n=0$ unless $n\equiv 0\pmod{3}$. The sequence of these numbers is given by

    $$\{W_{3n}\}_{n\ge 0}=1,\,-1,\,19,\,-1513,\,315523,\,-136085041,\,105261234643,\,-132705221399353,\,254604707462013571,\,\dots,$$

    and the sequence of their absolute values is recorded in [22, A002115]. In [15], the complementary numbers $W_n^{(j)}$ ($j=0,1,2$) to Lehmer's Euler numbers are defined by the generating function

    $$\sum_{n=0}^{\infty}W_n^{(j)}\frac{t^n}{n!}=\left(1+\sum_{l=1}^{\infty}\frac{t^{3l}}{(3l+j)!}\right)^{-1}. \tag{1.2}$$

    Notice that $W_n^{(j)}=0$ unless $n\equiv 0\pmod{3}$. When $j=0$, $W_n=W_n^{(0)}$ are the original Lehmer's Euler numbers. When $j=1$, we also have

    $$\sum_{n=0}^{\infty}W_n^{(1)}\frac{t^n}{n!}=\frac{3t}{e^t+\omega^2 e^{\omega t}+\omega e^{\omega^2 t}}. \tag{1.3}$$
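    For instance, since $e^t+e^{\omega t}+e^{\omega^2 t}=3\sum_{l=0}^{\infty}t^{3l}/(3l)!$, the first few Lehmer numbers can be read off directly from (1.1) by inverting this series (a quick check, worked out here only for illustration):

    $$\left(1+\frac{t^3}{3!}+\frac{t^6}{6!}+\cdots\right)^{-1}=1-\frac{t^3}{6}+\left(\frac{1}{36}-\frac{1}{720}\right)t^6+\cdots=1-\frac{t^3}{3!}+19\,\frac{t^6}{6!}+\cdots,$$

    so that $W_0=1$, $W_3=-1$, and $W_6=19$, in agreement with the sequence above.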

    Lehmer's Euler numbers and their complementary numbers $W_n^{(j)}$ can be considered analogues of the classical Euler numbers $E_n$ and their complementary Euler numbers $\widehat{E}_n$ ([11,19]), since their generating functions are given by

    $$\sum_{n=0}^{\infty}E_n\frac{t^n}{n!}=\frac{1}{\cosh t}=\frac{2}{e^t+e^{-t}}=\left(\sum_{l=0}^{\infty}\frac{t^{2l}}{(2l)!}\right)^{-1} \tag{1.4}$$

    and

    $$\sum_{n=0}^{\infty}\widehat{E}_n\frac{t^n}{n!}=\frac{t}{\sinh t}=\frac{2t}{e^t-e^{-t}}=\left(\sum_{l=0}^{\infty}\frac{t^{2l}}{(2l+1)!}\right)^{-1}, \tag{1.5}$$

    respectively. Similar numbers are the well-known classical Bernoulli numbers, defined by

    $$\sum_{n=0}^{\infty}B_n\frac{t^n}{n!}=\frac{t}{e^t-1}=\left(\sum_{l=0}^{\infty}\frac{t^{l}}{(l+1)!}\right)^{-1}. \tag{1.6}$$

    Recently, Barman et al. [3] introduced more general numbers, the so-called hypergeometric Lehmer-Euler numbers $W_{N,n,r}^{(j)}$ ($j=0,1$) of grade $r$, defined by

    $$\sum_{n=0}^{\infty}W_{N,n,r}^{(j)}\frac{t^n}{n!}=\left({}_1F_r\left(1;\frac{rN+j+1}{r},\frac{rN+j+2}{r},\dots,\frac{rN+j+r}{r};\left(\frac{t}{r}\right)^r\right)\right)^{-1}=\left(1+\sum_{n=1}^{\infty}\frac{(rN+j)!}{(rN+rn+j)!}t^{rn}\right)^{-1}\quad(N\ge 0),$$

    where ${}_1F_r(a;b_1,\dots,b_r;z)$ is the hypergeometric function, defined by

    $$ {}_1F_r(a;b_1,\dots,b_r;z)=\sum_{n=0}^{\infty}\frac{(a)^{(n)}}{(b_1)^{(n)}\cdots(b_r)^{(n)}}\frac{z^n}{n!}, $$

    and $(x)^{(n)}=x(x+1)\cdots(x+n-1)$ ($n\ge 1$) is the rising factorial with $(x)^{(0)}=1$. A determinant expression is given by

    $$W_{N,rn,r}^{(j)}=(-1)^n(rn)!\begin{vmatrix}\frac{(rN+j)!}{(rN+j+r)!} & 1 & & & 0\\ \frac{(rN+j)!}{(rN+j+2r)!} & \frac{(rN+j)!}{(rN+j+r)!} & 1 & & \\ \vdots & & \ddots & \ddots & \\ \frac{(rN+j)!}{(rN+rn+j-r)!} & \frac{(rN+j)!}{(rN+rn+j-2r)!} & \cdots & \frac{(rN+j)!}{(rN+j+r)!} & 1\\ \frac{(rN+j)!}{(rN+rn+j)!} & \frac{(rN+j)!}{(rN+rn+j-r)!} & \cdots & \frac{(rN+j)!}{(rN+j+2r)!} & \frac{(rN+j)!}{(rN+j+r)!}\end{vmatrix}. \tag{1.7}$$

    When $N=0$ and $r=3$, $W_n^{(j)}=W_{0,n,3}^{(j)}$ are Lehmer's generalized Euler numbers ($j=0$) in (1.1) and their complementary numbers ($j=1$) in (1.3). When $N=0$ and $r=2$, $W_{0,n,2}^{(j)}$ are the classical Euler numbers $E_n$ ($j=0$) in (1.4) and their complementary numbers $\widehat{E}_n$ ($j=1$) in (1.5). A famous determinant expression of the Euler numbers, discovered by Glaisher in 1875 ([6, p.52]), is

    $$E_{2n}=(-1)^n(2n)!\begin{vmatrix}\frac{1}{2!} & 1 & & & 0\\ \frac{1}{4!} & \frac{1}{2!} & 1 & & \\ \vdots & & \ddots & \ddots & \\ \frac{1}{(2n-2)!} & \frac{1}{(2n-4)!} & \cdots & \frac{1}{2!} & 1\\ \frac{1}{(2n)!} & \frac{1}{(2n-2)!} & \cdots & \frac{1}{4!} & \frac{1}{2!}\end{vmatrix} \tag{1.8}$$

    and an expression of the complementary numbers ([11,19]) is

    $$\widehat{E}_{2n}=(-1)^n(2n)!\begin{vmatrix}\frac{1}{3!} & 1 & & & 0\\ \frac{1}{5!} & \frac{1}{3!} & 1 & & \\ \vdots & & \ddots & \ddots & \\ \frac{1}{(2n-1)!} & \frac{1}{(2n-3)!} & \cdots & \frac{1}{3!} & 1\\ \frac{1}{(2n+1)!} & \frac{1}{(2n-1)!} & \cdots & \frac{1}{5!} & \frac{1}{3!}\end{vmatrix}. \tag{1.9}$$

    When $r=1$ and $j=0$, $B_{N,n}=W_{N,n,1}^{(0)}$ are the hypergeometric Bernoulli numbers. When $N=r=1$ and $j=0$ in (1.7), $B_n=W_{1,n,1}^{(0)}$ are the classical Bernoulli numbers in (1.6). The determinant expression for the classical Bernoulli numbers was discovered by Glaisher ([6, p.53]):

    $$B_n=(-1)^n n!\begin{vmatrix}\frac{1}{2!} & 1 & & & 0\\ \frac{1}{3!} & \frac{1}{2!} & 1 & & \\ \vdots & & \ddots & \ddots & \\ \frac{1}{n!} & \frac{1}{(n-1)!} & \cdots & \frac{1}{2!} & 1\\ \frac{1}{(n+1)!} & \frac{1}{n!} & \cdots & \frac{1}{3!} & \frac{1}{2!}\end{vmatrix}. \tag{1.10}$$
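    Determinant identities of this kind are easy to test numerically. The following short Python/SymPy sketch (added purely as an illustration; the helper name and the tested range are our own choices) evaluates the right-hand side of (1.10) and compares it with the Bernoulli numbers:

```python
from sympy import Matrix, Rational, bernoulli, factorial

def bernoulli_via_glaisher(n):
    # n x n matrix of (1.10): 1/(i-j+2)! on and below the diagonal,
    # 1 on the superdiagonal, 0 elsewhere (0-based indices i, j).
    M = Matrix(n, n, lambda i, j: Rational(1, factorial(i - j + 2)) if i >= j
               else (1 if j == i + 1 else 0))
    return (-1) ** n * factorial(n) * M.det()

# n = 1 is skipped because the sign convention for B_1 differs between SymPy
# versions; for n >= 2 the generating function (1.6) is unambiguous.
for n in range(2, 9):
    assert bernoulli_via_glaisher(n) == bernoulli(n)
```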

    However, the classical Cauchy numbers and their generalizations are not covered by the numbers $W_{N,n,r}^{(j)}$. The hypergeometric Cauchy numbers $c_{N,n}$ ([9]) are defined by

    $$\frac{1}{{}_2F_1(1,N;N+1;-t)}=\frac{(-1)^{N-1}t^N/N}{\log(1+t)-\sum_{n=1}^{N-1}(-1)^{n-1}t^n/n}=\sum_{n=0}^{\infty}c_{N,n}\frac{t^n}{n!}, \tag{1.11}$$

    where ${}_2F_1(a,b;c;z)$ is the hypergeometric function defined by

    $$ {}_2F_1(a,b;c;z)=\sum_{n=0}^{\infty}\frac{(a)^{(n)}(b)^{(n)}}{(c)^{(n)}}\frac{z^n}{n!}. $$

    When $N=1$, $c_n=c_{1,n}$ are the classical Cauchy numbers defined by

    $$\frac{t}{\log(1+t)}=\sum_{n=0}^{\infty}c_n\frac{t^n}{n!}. \tag{1.12}$$

    The determinant expression of hypergeometric Cauchy numbers is given by

    $$c_{N,n}=n!\begin{vmatrix}\frac{N}{N+1} & 1 & & & 0\\ \frac{N}{N+2} & \frac{N}{N+1} & 1 & & \\ \vdots & & \ddots & \ddots & \\ \frac{N}{N+n-1} & \frac{N}{N+n-2} & \cdots & \frac{N}{N+1} & 1\\ \frac{N}{N+n} & \frac{N}{N+n-1} & \cdots & \frac{N}{N+2} & \frac{N}{N+1}\end{vmatrix} \tag{1.13}$$

    ([2,18]). The determinant expression for the classical Cauchy numbers was discovered by Glaisher ([6, p.50]). Other generalized Cauchy numbers with similar properties are the leaping Cauchy numbers [13] and the shifted Cauchy numbers [16].

    A generalized framework for Bernoulli and Euler numbers was established in [17], where the determinant entries contain factorials, as seen in (1.8), (1.9), (1.10), and (1.7). However, the Cauchy numbers and their generalizations cannot be included there, because their entries do not contain factorials, as seen in (1.13). Universal Bernoulli numbers were studied in [1] and [8]; in particular, some universal Kummer congruences were established in [1] and [8].

    In this paper, we introduce hypergeometric Cauchy numbers of higher grade as a generalization of both the hypergeometric Cauchy numbers and the classical Cauchy numbers, and we give several expressions and identities for them.

    For $N\ge 1$ and $n\ge 0$, define the hypergeometric Cauchy numbers $V_{N,n,r}^{(j)}$ ($j=0,1$) of grade $r$ by

    $$\sum_{n=0}^{\infty}V_{N,n,r}^{(j)}\frac{t^n}{n!}=\left({}_2F_1\left(1,N+\frac{j}{r};N+1+\frac{j}{r};-t^r\right)\right)^{-1}, \tag{2.1}$$

    where ${}_2F_1(a,b;c;z)$ is the Gauss hypergeometric function, defined by

    $$ {}_2F_1(a,b;c;z)=\sum_{n=0}^{\infty}\frac{(a)^{(n)}(b)^{(n)}}{(c)^{(n)}}\frac{z^n}{n!}. $$

    From the definition, $V_{N,n,r}^{(j)}=0$ unless $n\equiv 0\pmod{r}$. When $r=1$ and $j=0$ in (2.1), $c_{N,n}=V_{N,n,1}^{(0)}$ are the hypergeometric Cauchy numbers in (1.11). When $N=1$, $r=1$, and $j=0$ in (2.1), $c_n=V_{1,n,1}^{(0)}$ are the classical Cauchy numbers in (1.12).

    We can write (2.1) as

    $${}_2F_1\left(1,N+\frac{j}{r};N+1+\frac{j}{r};-t^r\right)=\sum_{n=0}^{\infty}(-1)^n\frac{rN+j}{rN+rn+j}t^{rn}=1+\sum_{n=1}^{\infty}(-1)^n\frac{rN+j}{rN+rn+j}t^{rn}. \tag{2.2}$$

    The definition (2.1) with (2.2) may look either obvious or artificial to readers with different backgrounds. However, our initial motivation came from combinatorics, in particular graph theory. In 1989, Cameron [5] considered the operator $A$ defined on the set of sequences of non-negative integers as follows: for $x=\{x_n\}_{n\ge 1}$ and $z=\{z_n\}_{n\ge 1}$, set $Ax=z$, where

    $$1+\sum_{n=1}^{\infty}z_n t^n=\left(1-\sum_{n=1}^{\infty}x_n t^n\right)^{-1}. \tag{2.3}$$

    Cameron's operator deals only with nonnegative integers, but it can also be applied to rational numbers. In the sense of Cameron's operator $A$, we have the following relation:

    $$A\left\{(-1)^{n-1}\frac{rN+j}{rN+rn+j}\right\}=\left\{\frac{V_{N,rn,r}^{(j)}}{(rn)!}\right\}.$$

    This relation can also be stated in terms of determinants; see Section 5 about Trudi's formula.
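    To make the relation concrete, here is a minimal Python sketch (our own illustration, using exact fractions; the function name is hypothetical) of Cameron's operator $A$ as formal power-series inversion, applied to the case $r=1$, $N=1$, $j=0$, where the output reproduces the classical Cauchy numbers $c_n/n!$:

```python
from fractions import Fraction

def cameron_A(x):
    """Cameron's operator: given x_1..x_n, return z_1..z_n with
    1 + sum z_n t^n = (1 - sum x_n t^n)^(-1), i.e. z_n = x_n + sum_{0<k<n} x_k z_{n-k}."""
    z = []
    for n in range(1, len(x) + 1):
        z.append(x[n - 1] + sum(x[k - 1] * z[n - k - 1] for k in range(1, n)))
    return z

# r = 1, N = 1, j = 0: x_n = (-1)^(n-1) (rN+j)/(rN+rn+j) = (-1)^(n-1)/(n+1)
r, N, j = 1, 1, 0
x = [Fraction((-1) ** (n - 1) * (r * N + j), r * N + r * n + j) for n in range(1, 7)]
print(cameron_A(x)[:4])
# z_n = V^{(j)}_{N,rn,r}/(rn)! = c_n/n! here: 1/2, -1/12, 1/24, -19/720
```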

    We have the following recurrence relation.

    Proposition 1. For $N\ge 0$ and $j=0,1$, we have

    $$V_{N,rn,r}^{(j)}=\sum_{k=0}^{n-1}(-1)^{n-k-1}\frac{(rn)!\,(rN+j)}{(rN+rn-rk+j)(rk)!}V_{N,rk,r}^{(j)}\quad(n\ge 1)$$

    with $V_{N,0,r}^{(j)}=1$.

    Proof. By (2.1), we get

    $$1=\left(1+\sum_{l=1}^{\infty}(-1)^l\frac{rN+j}{rN+rl+j}t^{rl}\right)\left(\sum_{n=0}^{\infty}V_{N,rn,r}^{(j)}\frac{t^{rn}}{(rn)!}\right)=\sum_{n=0}^{\infty}V_{N,rn,r}^{(j)}\frac{t^{rn}}{(rn)!}+\sum_{n=1}^{\infty}\sum_{k=0}^{n-1}\frac{(-1)^{n-k}(rN+j)V_{N,rk,r}^{(j)}}{(rN+rn-rk+j)(rk)!}t^{rn}.$$

    Comparing the coefficients on both sides, we obtain

    $$\frac{V_{N,rn,r}^{(j)}}{(rn)!}+\sum_{k=0}^{n-1}\frac{(-1)^{n-k}(rN+j)V_{N,rk,r}^{(j)}}{(rN+rn-rk+j)(rk)!}=0\quad(n\ge 1).$$
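    Proposition 1 translates directly into a computation with exact rational arithmetic; the following minimal sketch (an illustration of ours, with a hypothetical helper name) tabulates $V_{N,rn,r}^{(j)}$ from the recurrence:

```python
from fractions import Fraction
from math import factorial

def V(N, r, j, nmax):
    """[V^{(j)}_{N,0,r}, V^{(j)}_{N,r,r}, ..., V^{(j)}_{N,r*nmax,r}] via Proposition 1."""
    vals = [Fraction(1)]                          # V^{(j)}_{N,0,r} = 1
    for n in range(1, nmax + 1):
        vals.append(sum((-1) ** (n - k - 1)
                        * Fraction(factorial(r * n) * (r * N + j),
                                   (r * N + r * n - r * k + j) * factorial(r * k))
                        * vals[k] for k in range(n)))
    return vals

# Sanity check: r = 1, N = 1, j = 0 gives the classical Cauchy numbers of (1.12):
print(V(1, 1, 0, 4))   # 1, 1/2, -1/6, 1/4, -19/30
```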

    We have an explicit expression of $V_{N,rn,r}^{(j)}$.

    Theorem 1. Let $j=0,1$. For $n\ge 1$,

    $$V_{N,rn,r}^{(j)}=(rn)!\sum_{k=1}^{n}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ i_1,\dots,i_k\ge 1}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}.$$

    Proof. The proof is by induction on $n$. From Proposition 1 with $n=1$,

    $$V_{N,r,r}^{(j)}=\frac{r!\,(rN+j)}{rN+j+r}V_{N,0,r}^{(j)}=\frac{r!\,(rN+j)}{rN+j+r}.$$

    This matches the result when $n=1$. Assume that the result is valid up to $n-1$. Then by Proposition 1,

    $$\begin{aligned}
    \frac{V_{N,rn,r}^{(j)}}{(rn)!}&=\sum_{l=0}^{n-1}(-1)^{n-l-1}\frac{rN+j}{rN+rn-rl+j}\frac{V_{N,rl,r}^{(j)}}{(rl)!}\\
    &=\sum_{l=1}^{n-1}(-1)^{n-l-1}\frac{rN+j}{rN+rn-rl+j}\sum_{k=1}^{l}(-1)^{l-k}\sum_{\substack{i_1+\cdots+i_k=l\\ i_1,\dots,i_k\ge 1}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}+(-1)^{n-1}\frac{rN+j}{rN+rn+j}\\
    &=\sum_{k=1}^{n-1}(-1)^{n-k-1}\sum_{l=k}^{n-1}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_k=l\\ i_1,\dots,i_k\ge 1}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}+(-1)^{n-1}\frac{rN+j}{rN+rn+j}\\
    &=\sum_{k=2}^{n}(-1)^{n-k}\sum_{l=k-1}^{n-1}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_{k-1}=l\\ i_1,\dots,i_{k-1}\ge 1}}\frac{(rN+j)^{k-1}}{(rN+ri_1+j)\cdots(rN+ri_{k-1}+j)}+(-1)^{n-1}\frac{rN+j}{rN+rn+j}\\
    &=\sum_{k=2}^{n}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ i_1,\dots,i_k\ge 1}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}+(-1)^{n-1}\frac{rN+j}{rN+rn+j}\qquad(n-l=i_k)\\
    &=\sum_{k=1}^{n}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ i_1,\dots,i_k\ge 1}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}.
    \end{aligned}$$

    There is an alternative form of $V_{N,rn,r}^{(j)}$ using binomial coefficients. The proof is similar to that of Theorem 1 and is omitted.

    Theorem 2. For $n\ge 1$,

    $$V_{N,rn,r}^{(j)}=(rn)!\sum_{k=1}^{n}(-1)^{n-k}\binom{n+1}{k+1}\sum_{\substack{i_1+\cdots+i_k=n\\ i_1,\dots,i_k\ge 0}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}.$$
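    To illustrate how the two expressions match, take $n=2$ (a small check worked out here for illustration): Theorem 1 gives

    $$\frac{V_{N,2r,r}^{(j)}}{(2r)!}=-\frac{rN+j}{rN+2r+j}+\frac{(rN+j)^2}{(rN+r+j)^2},$$

    while Theorem 2 gives $-\binom{3}{2}\frac{rN+j}{rN+2r+j}+\binom{3}{3}\left(\frac{2(rN+j)}{rN+2r+j}+\frac{(rN+j)^2}{(rN+r+j)^2}\right)$, which is the same quantity. For $r=1$, $N=1$, $j=0$ both equal $-\frac{1}{3}+\frac{1}{4}=-\frac{1}{12}=\frac{c_2}{2!}$.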

    In this section, we shall show an expression of hypergeometric Cauchy numbers of higher grade in terms of determinants. This result is a generalization of those of the hypergeometric and the classical Cauchy numbers. For simplification of determinant expressions, we use the Jordan matrix

    $$J=\begin{pmatrix}0 & 0 & \cdots & 0 & 0\\ 1 & 0 & \cdots & 0 & 0\\ 0 & 1 & \cdots & 0 & 0\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & \cdots & 1 & 0\end{pmatrix}.$$

    $J^0$ is the identity matrix and $J^T$ is the transpose of $J$.

    Theorem 3. For $n\ge 1$,

    $$V_{N,rn,r}^{(j)}=(rn)!\left|J^T+\sum_{k=1}^{n}\frac{rN+j}{r(N+k)+j}J^{k-1}\right|.$$

    Proof. For simplicity, put $\widetilde{V}_{N,n}=V_{N,n,r}^{(j)}/n!$. Then, we shall prove that for any $n\ge 1$,

    $$\widetilde{V}_{N,rn}=\left|J^T+\sum_{k=1}^{n}\frac{rN+j}{r(N+k)+j}J^{k-1}\right|. \tag{3.1}$$

    When $n=1$, (3.1) is valid because by Theorem 1 we get

    $$\widetilde{V}_{N,r}=\frac{rN+j}{rN+j+r}.$$

    Assume that (3.1) is valid up to $n-1$. Notice that by Proposition 1, we have

    $$\widetilde{V}_{N,rn}=\sum_{k=0}^{n-1}(-1)^{n-k-1}\frac{rN+j}{rN+rn-rk+j}\widetilde{V}_{N,rk}.$$

    Thus, by expanding the right-hand side of (3.1) along the first row, it is equal to

    $$\begin{aligned}
    &\frac{rN+j}{rN+j+r}\widetilde{V}_{N,rn-r}-\begin{vmatrix}\frac{rN+j}{rN+j+2r} & 1 & & & 0\\ \frac{rN+j}{rN+j+3r} & \frac{rN+j}{rN+j+r} & 1 & & \\ \vdots & & \ddots & \ddots & \\ \frac{rN+j}{rN+rn+j-r} & \frac{rN+j}{rN+rn+j-3r} & \cdots & \frac{rN+j}{rN+j+r} & 1\\ \frac{rN+j}{rN+rn+j} & \frac{rN+j}{rN+rn+j-2r} & \cdots & \frac{rN+j}{rN+j+2r} & \frac{rN+j}{rN+j+r}\end{vmatrix}\\
    &=\frac{rN+j}{rN+j+r}\widetilde{V}_{N,rn-r}-\frac{rN+j}{rN+j+2r}\widetilde{V}_{N,rn-2r}+\cdots+(-1)^{n}\begin{vmatrix}\frac{rN+j}{rN+rn+j-r} & 1\\ \frac{rN+j}{rN+rn+j} & \frac{rN+j}{rN+j+r}\end{vmatrix}\\
    &=\sum_{k=0}^{n-1}(-1)^{n-k-1}\frac{rN+j}{rN+rn-rk+j}\widetilde{V}_{N,rk}=\widetilde{V}_{N,rn}.
    \end{aligned}$$

    Remark. When $r=1$ and $j=0$, the determinant expression in Theorem 3 reduces to that in (1.13) for the hypergeometric Cauchy numbers $c_{N,n}=V_{N,n,1}^{(0)}$. When $N=1$, $r=1$, and $j=0$, we have a determinant expression of the Cauchy numbers $c_n=V_{1,n,1}^{(0)}$ ([6, p.50]).
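    Theorem 3 is also easy to test numerically. The sketch below (our illustration; the parameter values and helper names are arbitrary choices) builds the matrix $J^T+\sum_{k=1}^{n}\frac{rN+j}{r(N+k)+j}J^{k-1}$ with SymPy and compares its determinant, multiplied by $(rn)!$, with the values produced by the Proposition 1 recurrence:

```python
from sympy import Rational, eye, zeros, factorial

def V_det(N, r, j, n):
    """Right-hand side of Theorem 3: (rn)! * det(J^T + sum_k a_k J^(k-1))."""
    J = zeros(n, n)
    for i in range(1, n):
        J[i, i - 1] = 1                      # ones on the subdiagonal
    M, P = J.T, eye(n)                       # P runs through J^0, J^1, ...
    for k in range(1, n + 1):
        M += Rational(r * N + j, r * (N + k) + j) * P
        P = J * P
    return factorial(r * n) * M.det()

def V_rec(N, r, j, nmax):
    """Proposition 1 recurrence, for comparison."""
    vals = [Rational(1)]
    for n in range(1, nmax + 1):
        vals.append(sum((-1) ** (n - k - 1) * factorial(r * n) * (r * N + j)
                        / ((r * N + r * n - r * k + j) * factorial(r * k)) * vals[k]
                        for k in range(n)))
    return vals

N, r, j = 2, 3, 1
rec = V_rec(N, r, j, 5)
assert all(V_det(N, r, j, n) == rec[n] for n in range(1, 6))
```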

    As applications or variations generalizing the hypergeometric Cauchy numbers $V_{N,n,r}^{(j)}$ of higher grade, we shall introduce two kinds of incomplete hypergeometric Cauchy numbers of higher grade. Similar but slightly different kinds of incomplete numbers are considered in [10,12,14,17]. In addition, similar techniques can be found in [24], later cited in [7]. For $j=0,1$ and $n\ge m\ge 1$, define the restricted hypergeometric Cauchy numbers $V_{N,n,r,\le m}^{(j)}$ of grade $r$ by

    $$\sum_{n=0}^{\infty}V_{N,n,r,\le m}^{(j)}\frac{t^n}{n!}=\left(1+\sum_{l=1}^{m}(-1)^l\frac{rN+j}{rN+rl+j}t^{rl}\right)^{-1} \tag{4.1}$$

    and the associated hypergeometric Cauchy numbers $V_{N,n,r,\ge m}^{(j)}$ of grade $r$ by

    $$\sum_{n=0}^{\infty}V_{N,n,r,\ge m}^{(j)}\frac{t^n}{n!}=\left(1+\sum_{l=m}^{\infty}(-1)^l\frac{rN+j}{rN+rl+j}t^{rl}\right)^{-1}. \tag{4.2}$$

    When $m\to\infty$ in (4.1) and $m=1$ in (4.2), $V_{N,n,r}^{(j)}=V_{N,n,r,\le\infty}^{(j)}=V_{N,n,r,\ge 1}^{(j)}$ are the original hypergeometric Cauchy numbers of grade $r$, defined in (2.1) with (2.2). Hence, both incomplete numbers reduce to the hypergeometric Cauchy numbers.

    Notice that $V_{N,n,r,\le m}^{(j)}=V_{N,n,r,\ge m}^{(j)}=0$ unless $n\equiv 0\pmod{r}$.

    The restricted and associated hypergeometric Cauchy numbers satisfy the following recurrence relations.

    Proposition 2. For $j=0,1$, we have

    $$V_{N,rn,r,\le m}^{(j)}=\sum_{k=\max\{n-m,0\}}^{n-1}(-1)^{n-k-1}\frac{(rn)!\,(rN+j)}{(rN+rn-rk+j)(rk)!}V_{N,rk,r,\le m}^{(j)}\quad(n\ge 1)$$

    with $V_{N,0,r,\le m}^{(j)}=1$, and

    $$V_{N,rn,r,\ge m}^{(j)}=\sum_{k=0}^{n-m}(-1)^{n-k-1}\frac{(rn)!\,(rN+j)}{(rN+rn-rk+j)(rk)!}V_{N,rk,r,\ge m}^{(j)}\quad(n\ge m)$$

    with $V_{N,0,r,\ge m}^{(j)}=1$ and $V_{N,r,r,\ge m}^{(j)}=\cdots=V_{N,r(m-1),r,\ge m}^{(j)}=0$.

    Proof. First, we shall prove the relation for the restricted hypergeometric Cauchy numbers. By the definition (4.1), we get

    $$1=\left(1+\sum_{l=1}^{m}(-1)^l\frac{(rN+j)t^{rl}}{rN+rl+j}\right)\left(\sum_{n=0}^{\infty}V_{N,rn,r,\le m}^{(j)}\frac{t^{rn}}{(rn)!}\right)=\sum_{n=0}^{\infty}V_{N,rn,r,\le m}^{(j)}\frac{t^{rn}}{(rn)!}+\sum_{n=1}^{\infty}\sum_{k=\max\{n-m,0\}}^{n-1}\frac{(-1)^{n-k}(rN+j)V_{N,rk,r,\le m}^{(j)}}{(rN+rn-rk+j)(rk)!}t^{rn}.$$

    Comparing the coefficients on both sides, we obtain the first identity.

    Next, we prove the relation for the associated hypergeometric Cauchy numbers. By the definition (4.2), we get

    $$1=\left(1+\sum_{l=m}^{\infty}(-1)^l\frac{(rN+j)t^{rl}}{rN+rl+j}\right)\left(\sum_{n=0}^{\infty}V_{N,rn,r,\ge m}^{(j)}\frac{t^{rn}}{(rn)!}\right)=\sum_{n=0}^{\infty}V_{N,rn,r,\ge m}^{(j)}\frac{t^{rn}}{(rn)!}+\sum_{n=m}^{\infty}\sum_{k=0}^{n-m}\frac{(-1)^{n-k}(rN+j)V_{N,rk,r,\ge m}^{(j)}}{(rN+rn-rk+j)(rk)!}t^{rn}.$$

    Comparing the coefficients on both sides, we obtain the desired result.
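    As with Proposition 1, these recurrences are straightforward to implement. The minimal sketch below (our illustration; only the range of the summation index changes compared with the earlier snippet) computes both incomplete families with exact fractions:

```python
from fractions import Fraction
from math import factorial

def V_incomplete(N, r, j, m, nmax, kind):
    """Restricted (kind='le') or associated (kind='ge') numbers via Proposition 2."""
    vals = [Fraction(1)]
    for n in range(1, nmax + 1):
        ks = range(max(n - m, 0), n) if kind == 'le' else range(0, n - m + 1)
        vals.append(sum((-1) ** (n - k - 1)
                        * Fraction(factorial(r * n) * (r * N + j),
                                   (r * N + r * n - r * k + j) * factorial(r * k))
                        * vals[k] for k in ks))
    return vals

# For m = 1 the associated numbers coincide with V^{(j)}_{N,rn,r} itself;
# r = 1, N = 1, j = 0 again returns the classical Cauchy numbers.
print(V_incomplete(1, 1, 0, 1, 4, 'ge'))   # 1, 1/2, -1/6, 1/4, -19/30
```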

    The restricted and associated hypergeometric Cauchy numbers have the following expressions in terms of determinants. Compared with the expression in Theorem 3, the entries of some further diagonal bands become 0.

    Theorem 4. For integers $n$ and $m$ with $n\ge m\ge 1$, we have

    $$V_{N,rn,r,\le m}^{(j)}=(rn)!\left|J^T+\sum_{k=1}^{m}\frac{rN+j}{r(N+k)+j}J^{k-1}\right|$$

    and

    $$V_{N,rn,r,\ge m}^{(j)}=(rn)!\left|J^T+\sum_{k=m}^{n}\frac{rN+j}{r(N+k)+j}J^{k-1}\right|.$$

    Proof. First, we shall prove the first expression for the restricted hypergeometric Cauchy numbers. For simplicity, put $\widetilde{V}_{N,rn,\le m}=V_{N,rn,r,\le m}^{(j)}/(rn)!$ and prove that for $n\ge m\ge 1$,

    $$\widetilde{V}_{N,rn,\le m}=\left|J^T+\sum_{k=1}^{m}\frac{rN+j}{r(N+k)+j}J^{k-1}\right|. \tag{4.3}$$

    When $n=m$, we have $\widetilde{V}_{N,rm,\le m}=\widetilde{V}_{N,rm}$, and the result reduces to Theorem 3. Assume that (4.3) is valid up to $n-1$. If $n\ge 2m$, then the determinant on the right-hand side of (4.3) is equal to

    $$\begin{aligned}
    &\frac{(rN+j)\widetilde{V}_{N,rn-r,\le m}}{rN+j+r}-\frac{(rN+j)\widetilde{V}_{N,rn-2r,\le m}}{rN+j+2r}+\cdots+(-1)^{m-1}\begin{vmatrix}\frac{rN+j}{rN+rm+j} & 1 & & & 0\\ 0 & \frac{rN+j}{rN+r+j} & 1 & & \\ \vdots & \vdots & \ddots & \ddots & \\ & \frac{rN+j}{rN+rm+j} & & \ddots & 1\\ 0 & & \frac{rN+j}{rN+rm+j} & \cdots & \frac{rN+j}{rN+r+j}\end{vmatrix}\\
    &=\frac{(rN+j)\widetilde{V}_{N,rn-r,\le m}}{rN+r+j}-\frac{(rN+j)\widetilde{V}_{N,rn-2r,\le m}}{rN+2r+j}+\cdots+(-1)^{m-1}\frac{(rN+j)\widetilde{V}_{N,rn-rm,\le m}}{rN+rm+j}\\
    &=\widetilde{V}_{N,rn,\le m}.
    \end{aligned}$$

    If $m<n<2m$, then the determinant on the right-hand side of (4.3) is equal to

    $$\begin{aligned}
    &\frac{(rN+j)\widetilde{V}_{N,rn-r,\le m}}{rN+r+j}-\frac{(rN+j)\widetilde{V}_{N,rn-2r,\le m}}{rN+2r+j}+\cdots+(-1)^{n-m-1}\begin{vmatrix}\frac{rN+j}{rN+rn-rm+j} & 1 & & 0\\ \vdots & \frac{rN+j}{rN+2rm-rn+j} & \ddots & \\ \frac{rN+j}{rN+rm+j} & \vdots & \ddots & 1\\ 0 & \frac{rN+j}{rN+rm+j} & \cdots & \frac{rN+j}{rN+r+j}\end{vmatrix}\\
    &=\frac{(rN+j)\widetilde{V}_{N,rn-r,\le m}}{rN+r+j}-\frac{(rN+j)\widetilde{V}_{N,rn-2r,\le m}}{rN+2r+j}+\cdots+(-1)^{n-m-1}\frac{(rN+j)\widetilde{V}_{N,rm,\le m}}{rN+rn-rm+j}+\cdots+(-1)^{m-1}\frac{(rN+j)\widetilde{V}_{N,rn-rm,\le m}}{rN+rm+j}\\
    &=\widetilde{V}_{N,rn,\le m}.
    \end{aligned}$$

    Next, we prove the second expression for the associated hypergeometric Cauchy numbers. For simplicity, put $\widetilde{V}_{N,rn,\ge m}=V_{N,rn,r,\ge m}^{(j)}/(rn)!$ and we prove that

    $$\widetilde{V}_{N,rn,\ge m}=\left|J^T+\sum_{k=m}^{n}\frac{rN+j}{r(N+k)+j}J^{k-1}\right|. \tag{4.4}$$

    If $m\le n<2m$, the determinant on the right-hand side of (4.4) is equal to

    $$(-1)^{n-m}(-1)^{m+1}\frac{rN+j}{rN+rn+j}\begin{vmatrix}1 & & 0\\ & \ddots & \\ 0 & & 1\end{vmatrix}=(-1)^{n+1}\frac{rN+j}{rN+rn+j}.$$

    Since only the term for k=0 does not vanish in the second relation of Proposition 2, we have

    $$\widetilde{V}_{N,rn,\ge m}=(-1)^{n+1}\frac{rN+j}{rN+rn+j}.$$

    If $n\ge 2m$, the determinant on the right-hand side of (4.4) is equal to

    $$\begin{aligned}
    &(-1)^{m-1}\frac{(rN+j)\widetilde{V}_{N,rn-rm,\ge m}}{rN+rm+j}+(-1)^{m}\frac{(rN+j)\widetilde{V}_{N,r(n-m-1),\ge m}}{rN+r(m+1)+j}+\cdots\\
    &\qquad+(-1)^{n-m+1}\frac{(rN+j)\widetilde{V}_{N,rm,\ge m}}{rN+r(n-m)+j}+(-1)^{n-m+1}(-1)^{m}\frac{rN+j}{rN+rn+j}\\
    &=\sum_{k=0}^{n-m}(-1)^{n-k-1}\frac{(rN+j)\widetilde{V}_{N,rk,\ge m}}{r(N+n-k)+j}=\widetilde{V}_{N,rn,\ge m}.
    \end{aligned}$$

    Here, we used the second relation of Proposition 2 again.

    There exist explicit expressions for both incomplete Cauchy numbers.

    Theorem 5. For $n,m\ge 1$,

    $$V_{N,rn,r,\le m}^{(j)}=(rn)!\sum_{k=1}^{n}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ 1\le i_1,\dots,i_k\le m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}.$$

    For $n,m\ge 1$,

    $$V_{N,rn,r,\ge m}^{(j)}=(rn)!\sum_{k=1}^{n}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ i_1,\dots,i_k\ge m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}.$$

    Proof. First, we shall prove the first expression, for the restricted hypergeometric Cauchy numbers. When $n\le m$, the proof is similar to that of Theorem 1. Note that in the proof of Theorem 1,

    $$1\le n-l=i_k\le n-k+1\le n.$$

    Let $n\ge m+1$. By the first relation of Proposition 2,

    $$\begin{aligned}
    \frac{V_{N,rn,r,\le m}^{(j)}}{(rn)!}&=\sum_{l=n-m}^{n-1}(-1)^{n-l-1}\frac{rN+j}{rN+rn-rl+j}\frac{V_{N,rl,r,\le m}^{(j)}}{(rl)!}\\
    &=\sum_{l=n-m}^{n-1}(-1)^{n-l-1}\frac{rN+j}{rN+rn-rl+j}\sum_{k=1}^{l}(-1)^{l-k}\sum_{\substack{i_1+\cdots+i_k=l\\ 1\le i_1,\dots,i_k\le m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\\
    &=\sum_{l=1}^{n-1}\sum_{k=1}^{l}(-1)^{n-k-1}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_k=l\\ 1\le i_1,\dots,i_k\le m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\\
    &\qquad+\sum_{l=1}^{n-m-1}\sum_{k=1}^{l}(-1)^{n-k}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_k=l\\ 1\le i_1,\dots,i_k\le m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\\
    &=\sum_{k=1}^{n-1}(-1)^{n-k-1}\sum_{l=k}^{n-1}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_k=l\\ 1\le i_1,\dots,i_k\le m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\\
    &\qquad+\sum_{k=1}^{n-m-1}(-1)^{n-k}\sum_{l=k}^{n-m-1}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_k=l\\ 1\le i_1,\dots,i_k\le m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\\
    &=\sum_{k=2}^{n}(-1)^{n-k}\sum_{l=k-1}^{n-1}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_{k-1}=l\\ 1\le i_1,\dots,i_{k-1}\le m}}\frac{(rN+j)^{k-1}}{(rN+ri_1+j)\cdots(rN+ri_{k-1}+j)}\\
    &\qquad+\sum_{k=2}^{n-m}(-1)^{n-k-1}\sum_{l=k-1}^{n-m-1}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_{k-1}=l\\ 1\le i_1,\dots,i_{k-1}\le m}}\frac{(rN+j)^{k-1}}{(rN+ri_1+j)\cdots(rN+ri_{k-1}+j)}\\
    &=\sum_{k=n-m+1}^{n}(-1)^{n-k}\sum_{l=k-1}^{n-1}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_{k-1}=l\\ 1\le i_1,\dots,i_{k-1}\le m}}\frac{(rN+j)^{k-1}}{(rN+ri_1+j)\cdots(rN+ri_{k-1}+j)}\\
    &\qquad+\sum_{k=2}^{n-m}(-1)^{n-k}\sum_{l=n-m}^{n-1}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_{k-1}=l\\ 1\le i_1,\dots,i_{k-1}\le m}}\frac{(rN+j)^{k-1}}{(rN+ri_1+j)\cdots(rN+ri_{k-1}+j)}.
    \end{aligned}$$

    By putting $i_k=n-l$, in the first term from $n-1\ge l\ge k-1\ge n-m$ and in the second term from $n-1\ge l\ge n-m$, we have

    $$1\le n-l=i_k\le m.$$

    Therefore,

    $$\begin{aligned}
    \frac{V_{N,rn,r,\le m}^{(j)}}{(rn)!}&=\sum_{k=n-m+1}^{n}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ 1\le i_1,\dots,i_k\le m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}+\sum_{k=2}^{n-m}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ 1\le i_1,\dots,i_k\le m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\\
    &=\sum_{k=1}^{n}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ 1\le i_1,\dots,i_k\le m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}.
    \end{aligned}$$

    Note that the inner sum for $k=1$ is empty, since $n>m$.

    Next, we prove the second expression for the associated hypergeometric Cauchy numbers. Since the set

    $$\{(i_1,\dots,i_k)\mid i_1+\cdots+i_k=n,\ i_1,\dots,i_k\ge m\}$$

    is empty for $n=1,\dots,m-1$, we have $V_{N,r,r,\ge m}^{(j)}=\cdots=V_{N,rm-r,r,\ge m}^{(j)}=0$. For $n=m$, by the second expression of Theorem 4,

    $$V_{N,rm,r,\ge m}^{(j)}=(rm)!\begin{vmatrix}0 & 1 & & 0\\ & \ddots & \ddots & \\ 0 & & 0 & 1\\ \frac{rN+j}{rN+rm+j} & 0 & \cdots & 0\end{vmatrix}=(rm)!\,(-1)^{m-1}\frac{rN+j}{rN+rm+j}=\frac{(-1)^{m-1}(rm)!\,(rN+j)}{rN+rm+j},$$

    which matches the result for $n=m$. Assume that the result is valid up to $n-1$ ($\ge m$). Then by the second relation of Proposition 2,

    $$\begin{aligned}
    \frac{V_{N,rn,r,\ge m}^{(j)}}{(rn)!}&=\sum_{l=0}^{n-m}(-1)^{n-l-1}\frac{rN+j}{(rN+rn-rl+j)(rl)!}V_{N,rl,r,\ge m}^{(j)}\\
    &=(-1)^{n-1}\frac{rN+j}{rN+rn+j}+\sum_{l=1}^{n-m}(-1)^{n-l-1}\frac{rN+j}{rN+rn-rl+j}\sum_{k=1}^{l}(-1)^{l-k}\sum_{\substack{i_1+\cdots+i_k=l\\ i_1,\dots,i_k\ge m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\\
    &=(-1)^{n-1}\frac{rN+j}{rN+rn+j}+\sum_{k=1}^{n-m}(-1)^{n-k-1}\sum_{l=k}^{n-m}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_k=l\\ i_1,\dots,i_k\ge m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\\
    &=(-1)^{n-1}\frac{rN+j}{rN+rn+j}+\sum_{k=2}^{n-m+1}(-1)^{n-k}\sum_{l=k-1}^{n-m}\frac{rN+j}{rN+rn-rl+j}\sum_{\substack{i_1+\cdots+i_{k-1}=l\\ i_1,\dots,i_{k-1}\ge m}}\frac{(rN+j)^{k-1}}{(rN+ri_1+j)\cdots(rN+ri_{k-1}+j)}\\
    &=(-1)^{n-1}\frac{rN+j}{rN+rn+j}+\sum_{k=2}^{n-m+1}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ i_1,\dots,i_k\ge m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\qquad(i_k=n-l)\\
    &=\sum_{k=1}^{n-m+1}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ i_1,\dots,i_k\ge m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}\\
    &=\sum_{k=1}^{n}(-1)^{n-k}\sum_{\substack{i_1+\cdots+i_k=n\\ i_1,\dots,i_k\ge m}}\frac{(rN+j)^k}{(rN+ri_1+j)\cdots(rN+ri_k+j)}.
    \end{aligned}$$

    Note that $i_k=n-l\ge m$, as $l\le n-m$. As $1\le m\le n-1$, we have $m(n-m+2)>n$, so the set

    $$\{(i_1,\dots,i_k)\mid i_1+\cdots+i_k=n,\ i_1,\dots,i_k\ge m\}$$

    is empty for $n-m+2\le k\le n$.

    We shall use Trudi's formula to obtain different explicit expressions and inversion relations for the numbers $V_{N,n,r}^{(j)}$. Denote the multinomial coefficient by $\binom{t_1+\cdots+t_n}{t_1,\dots,t_n}=\frac{(t_1+\cdots+t_n)!}{t_1!\cdots t_n!}$.

    Lemma 1. For a positive integer $n$, we have

    $$\begin{vmatrix}a_1 & a_0 & & & 0\\ a_2 & a_1 & \ddots & & \\ \vdots & & \ddots & \ddots & \\ a_{n-1} & & & a_1 & a_0\\ a_n & a_{n-1} & \cdots & a_2 & a_1\end{vmatrix}=\sum_{t_1+2t_2+\cdots+nt_n=n}\binom{t_1+\cdots+t_n}{t_1,\dots,t_n}(-a_0)^{n-t_1-\cdots-t_n}a_1^{t_1}a_2^{t_2}\cdots a_n^{t_n}.$$

    This relation is known as Trudi's formula [21, Vol. 3, p. 214], [23], and the case $a_0=1$ of this formula is known as Brioschi's formula [4], [21, Vol. 3, pp. 208–209].

    In addition, there exists the following inversion formula (see, e.g., [17]), which is based upon the relation

    $$\sum_{k=0}^{n}(-1)^{n-k}\alpha_k D(n-k)=0\quad(n\ge 1),$$

    or Cameron's operator in (2.3).

    Lemma 2. For the sequence $\{\alpha_n\}_{n\ge 0}$ defined by $\alpha_0=1$ and

    $$\alpha_n=\begin{vmatrix}D(1) & 1 & & 0\\ D(2) & \ddots & \ddots & \\ \vdots & & \ddots & 1\\ D(n) & \cdots & D(2) & D(1)\end{vmatrix}, \quad\text{we have}\quad D(n)=\begin{vmatrix}\alpha_1 & 1 & & 0\\ \alpha_2 & \ddots & \ddots & \\ \vdots & & \ddots & 1\\ \alpha_n & \cdots & \alpha_2 & \alpha_1\end{vmatrix}.$$

    From Trudi's formula, it is possible to give the combinatorial expression

    $$\alpha_n=\sum_{t_1+2t_2+\cdots+nt_n=n}\binom{t_1+\cdots+t_n}{t_1,\dots,t_n}(-1)^{n-t_1-\cdots-t_n}D(1)^{t_1}D(2)^{t_2}\cdots D(n)^{t_n}.$$

    By applying these lemmas to Theorem 4, we obtain explicit expressions for the incomplete hypergeometric Cauchy numbers of higher grade defined in (4.1) and (4.2).

    Theorem 6. For $n\ge m\ge 1$, we have

    $$V_{N,rn,r,\le m}^{(j)}=(rn)!\sum_{t_1+2t_2+\cdots+mt_m=n}\binom{t_1+\cdots+t_m}{t_1,\dots,t_m}(-1)^{n-t_1-\cdots-t_m}\left(\frac{rN+j}{rN+j+r}\right)^{t_1}\cdots\left(\frac{rN+j}{rN+rm+j}\right)^{t_m}$$

    and

    $$\begin{aligned}V_{N,rn,r,\ge m}^{(j)}=(rn)!\sum_{mt_m+(m+1)t_{m+1}+\cdots+nt_n=n}&\binom{t_m+t_{m+1}+\cdots+t_n}{t_m,t_{m+1},\dots,t_n}\\ &\times(-1)^{n-t_m-t_{m+1}-\cdots-t_n}\left(\frac{rN+j}{rN+rm+j}\right)^{t_m}\left(\frac{rN+j}{rN+rm+j+r}\right)^{t_{m+1}}\cdots\left(\frac{rN+j}{rN+rn+j}\right)^{t_n}.\end{aligned}$$

    As a special case of Theorem 6, we can obtain the expressions for the original hypergeometric Cauchy numbers.

    Corollary 1. For $n\ge 1$, we have

    $$V_{N,rn,r}^{(j)}=(rn)!\sum_{t_1+2t_2+\cdots+nt_n=n}\binom{t_1+\cdots+t_n}{t_1,\dots,t_n}(-1)^{n-t_1-\cdots-t_n}\left(\frac{rN+j}{rN+j+r}\right)^{t_1}\cdots\left(\frac{rN+j}{rN+rn+j}\right)^{t_n}.$$
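    Corollary 1 can also be checked mechanically. The sketch below (our own verification code; it enumerates the solutions of $t_1+2t_2+\cdots+nt_n=n$ as partitions of $n$ with SymPy and compares against the Proposition 1 recurrence) is included only as an illustration:

```python
from sympy import Rational, factorial
from sympy.utilities.iterables import partitions

def V_trudi(N, r, j, n):
    """Right-hand side of Corollary 1, summed over partitions of n."""
    total = 0
    for p in partitions(n):                   # p maps a part i to its multiplicity t_i
        t = sum(p.values())
        coeff = factorial(t)
        for ti in p.values():
            coeff /= factorial(ti)            # multinomial coefficient
        term = coeff * (-1) ** (n - t)
        for i, ti in p.items():
            term *= Rational(r * N + j, r * N + r * i + j) ** ti
        total += term
    return factorial(r * n) * total

def V_rec(N, r, j, nmax):                     # Proposition 1, as in the earlier sketch
    vals = [Rational(1)]
    for n in range(1, nmax + 1):
        vals.append(sum((-1) ** (n - k - 1) * factorial(r * n) * (r * N + j)
                        / ((r * N + r * n - r * k + j) * factorial(r * k)) * vals[k]
                        for k in range(n)))
    return vals

N, r, j = 1, 2, 0
assert all(V_trudi(N, r, j, n) == V_rec(N, r, j, 5)[n] for n in range(1, 6))
```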

    By applying the inversion relation in Lemma 2 to Theorem 3, we have the following.

    Theorem 7. Let $j=0,1$. For $n\ge 1$, we have

    $$\frac{rN+j}{rN+rn+j}=\left|J^T+\sum_{k=1}^{n}\frac{V_{N,kr,r}^{(j)}}{(kr)!}J^{k-1}\right|.$$

    In this sense, we have the inversion relation of Corollary 1 too.

    Corollary 2. For $n\ge 1$, we have

    $$\frac{rN+j}{rN+rn+j}=\sum_{t_1+2t_2+\cdots+nt_n=n}\binom{t_1+\cdots+t_n}{t_1,\dots,t_n}(-1)^{n-t_1-\cdots-t_n}\left(\frac{V_{N,r,r}^{(j)}}{r!}\right)^{t_1}\cdots\left(\frac{V_{N,rn,r}^{(j)}}{(rn)!}\right)^{t_n}.$$

    In this paper, we proposed one type of generalization of the classical Cauchy numbers and the hypergeometric Cauchy numbers. Many other generalizations are known, but the focus of this paper is on the determinant expressions, which originated with Glaisher and others. Similar determinants were treated by Brioschi, Trudi, and others, but have long been forgotten. A similar generalization attempt, made by the first author of this paper with Barman in 2019, proposed generalized numbers that include the classical Bernoulli numbers, hypergeometric Bernoulli numbers, Euler numbers, hypergeometric Euler numbers, and so on. However, the classical Cauchy numbers and the hypergeometric Cauchy numbers cannot be covered by the generalization of Barman et al., and this is achieved in the present paper. The background and motivation for the generalization is Cameron's operator, which is related to graph theory. There, only integers were targeted; in this paper, we extended the operator to rational numbers and applied it.

    The authors thank the anonymous referees for useful comments which have helped us to improve the manuscript.



    [1] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, F. F. Li, Imagenet: A large-scale hierarchical image database, in IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, (2009), 248–255. https://doi.org/10.1109/CVPR.2009.5206848
    [2] A. Krizhevsky, I. Sutskever, G. Hinton, ImageNet classification with deep convolutional neural networks, Commun. ACM, 60 (2017), 84–90. https://doi.org/10.1145/3065386 doi: 10.1145/3065386
    [3] A. V. Emchinov, V. V. Ryazanov, Research and development of deep learning algorithms for the classification of pneumonia type and detection of ground-glass loci on radiological images, Pattern Recognit. Image Anal., 32 (2022), 707–716. https://doi.org/10.1134/S1054661822030105 doi: 10.1134/S1054661822030105
    [4] H. Tang, Research progress and development of deep learning based on convolutional neural network, in 2021 2nd International Conference on Computing and Data Science (CDS), Stanford, CA, USA, (2021), 259–264. https://doi.org/10.1109/CDS52072.2021.00052
    [5] H. Luo, J. Luo, R. Li, M. Yu, Optimization algorithm design of laser marking contour extraction and graphics hatching based on image processing technology, J. Phys. Conf. Ser., 2173 (2022), 012078. https://doi.org/10.1088/1742-6596/2173/1/012078 doi: 10.1088/1742-6596/2173/1/012078
    [6] L. Zhang, Y. P. Sui, H. S. Wang, S. K. Hao, N. B. Zhang, Image feature extraction and recognition model construction of coal and gangue based on image processing technology, Sci. Rep., 12 (2022), 20983. https://doi.org/10.1038/s41598-022-25496-5 doi: 10.1038/s41598-022-25496-5
    [7] X. L. Chen, H. Fang, T. Y. Lin, R. Vedantam, S. Gupta, P. Dollar, et al., Microsoft COCO captions: Data collection and evaluation server, preprint, arXiv: 1504.00325. https://doi.org/10.48550/arXiv.1504.00325
    [8] O. M. Parkhi, A. Vedaldi, A. Zisserman, Deep face recognition, in BMVC 2015 - Proceedings of the British Machine Vision Conference 2015, Swansea, UK, (2015), 1–12.
    [9] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, A. Torralba, Places: A 10 million image database for scene recognition, IEEE Trans. Pattern Anal. Mach. Intell., 40 (2018), 1452–1464. https://doi.org/10.1109/TPAMI.2017.2723009 doi: 10.1109/TPAMI.2017.2723009
    [10] M. Kumar, A. Bindal, R. Gautam, R. Bhatia, Keyword query based focused Web crawler, Procedia Comput. Sci., 125 (2018), 584–590. https://doi.org/10.1016/j.procs.2017.12.075 doi: 10.1016/j.procs.2017.12.075
    [11] G. Lin, Y. Liang, A. Tavares, Design of an energy supply and demand forecasting system based on web crawler and a grey dynamic model, Energies, 16 (2023), 1431. https://doi.org/10.3390/en16031431 doi: 10.3390/en16031431
    [12] Q. C. Deng, K. Cheng, Collection and semi-automatic labeling of custom target detection dataset (in Chinese), Soft. Guide, 21 (2022), 116–122.
    [13] M. Z. Hua, L. M. Wang, J. W. Jiang, Construction of large-scale coral dataset based on web resources (in Chinese), J. North. Nor. Univer., 55 (2023), 72–79. https://doi.org/10.16163/j.cnki.dslkxb202209230003 doi: 10.16163/j.cnki.dslkxb202209230003
    [14] M. J. Shensa, The discrete wavelet transform: wedding the à trous and Mallat algorithms, IEEE Trans. Signal Process., 40 (1992), 2464–2482. https://doi.org/10.1109/78.157290 doi: 10.1109/78.157290
    [15] H. Y. Chen, H. Y. Long, Y. J. Song, H. L. Chen, X. B. Zhou, W. Deng, M3FuNet: An unsupervised multivariate feature fusion network for hyperspectral image classification, IEEE Trans. Geosci. Remote. Sens., 62 (2024), 1–15. https://doi.org/10.1109/TGRS.2024.3380087 doi: 10.1109/TGRS.2024.3380087
    [16] L. Pinjarkar, M. Sharma, S. Selot, Deep CNN combined with relevance feedback for trademark image retrieval, J. Intell. Syst., 29 (2020), 894–909. https://doi.org/10.1515/jisys-2018-0083 doi: 10.1515/jisys-2018-0083
    [17] Z. Zeng, S. Sun, J. Sun, J. Yin, Y. Shen, Constructing a mobile visual search framework for Dunhuang murals based on fine-tuned CNN and ontology semantic distance, Electron. Lib., 40 (2022), 121–139. https://doi.org/10.1108/EL-09-2021-0173 doi: 10.1108/EL-09-2021-0173
    [18] T. Rajasenbagam, S. Jeyanthi, Semantic content-based image retrieval system using deep learning model for lung cancer CT images, J. Med. Imaging Health Inf., 11 (2021), 2675–2682. https://doi.org/10.1166/jmihi.2021.3859 doi: 10.1166/jmihi.2021.3859
    [19] M. A. Aljanabi, Z. M. Hussain, S. F. Lu, An entropy-histogram approach for image similarity and face recognition, Math. Probl. Eng., 2018 (2018), 1–18. https://doi.org/10.1155/2018/9801308 doi: 10.1155/2018/9801308
    [20] Y. Zhang, Y. Yao, Y. Wan, W. Liu, W. Yang, Z. Zheng, et al., Histogram of the orientation of the weighted phase descriptor for multi-modal remote sensing image matching, J. Photogramm. Remote Sens., 196 (2023), 1–15. https://doi.org/10.1016/j.isprsjprs.2022.12.018 doi: 10.1016/j.isprsjprs.2022.12.018
    [21] A. Drmic, M. Silic, G. Delac, K. Vladimir, A. S. Kurdija, Evaluating robustness of perceptual image hashing algorithms, in 2017 40th International Convention on Information and Communication Technology, Electronics and Microelectronics, Opatija, Croatia, (2017), 995–1000. https://doi.org/10.23919/MIPRO.2017.7973569
    [22] D. G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vision, 60 (2004), 91–110. https://doi.org/10.1023/B:VISI.0000029664.99615.94 doi: 10.1023/B:VISI.0000029664.99615.94
    [23] N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, (2005), 886–893. https://doi.org/10.1109/CVPR.2005.177
    [24] K. H. Sri, G. T. Manasa, G. G. Reddy, S. Bano, V. B. Trinadh, Detecting image similarity using SIFT, in Expert Clouds and Applications: Proceedings of ICOECA 2021, Singapore, 209 (2022), 561–575. https://doi.org/10.1007/978-981-16-2126-0_45
    [25] F. Naiemi, V. Ghods, H. Khalesi, An efficient character recognition method using enhanced HOG for spam image detection, Soft Comput., 23 (2019), 11759–11774. https://doi.org/10.1007/s00500-018-03728-z doi: 10.1007/s00500-018-03728-z
    [26] Y. L. Liu, G. J. Xin, Y. Xiao, Robust image hashing using Radon transform and invariant features, Radioengineering, 25 (2016), 556–564. https://doi.org/10.13164/re.2016.0556 doi: 10.13164/re.2016.0556
    [27] N. Hussein, M. Ali, M. E. Mahdi, Detecting similarity in color images based on perceptual image hash algorithm, in IOP Conference Series: Materials Science and Engineering, Istanbul, Turkey, 737 (2020), 012244. https://doi.org/10.1088/1757-899X/737/1/012244
    [28] M. Hori, T. Hori, Y. Ohno, S. Tsuruta, H. Iwase, T. Kawai, A novel identification method using perceptual degree of concordance of occlusal surfaces calculated by a Python program, Forensic Sci. Int., 313 (2020), 110358. https://doi.org/10.1016/j.forsciint.2020.110358 doi: 10.1016/j.forsciint.2020.110358
    [29] M. Fei, J. Li, H. Liu, Visual tracking based on improved foreground detection and perceptual hashing, Neurocomputing, 152 (2015), 413–428. https://doi.org/10.1016/j.neucom.2014.09.060 doi: 10.1016/j.neucom.2014.09.060
    [30] D. M. Mo, W. K. Wong, X. J. Liu, Y. Ge, Concentrated hashing with neighborhood embedding for image retrieval and classification, Int. J. Mach. Learn. Cybern., 13 (2022), 1571–1587. https://doi.org/10.1007/s13042-021-01466-7 doi: 10.1007/s13042-021-01466-7
    [31] A. Jose, D. Filbert, C. Rohlfing, J. R. Ohm, Deep hashing with hash center update for efficient image retrieval, in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, (2022), 4773–4777. https://doi.org/10.1109/ICASSP43922.2022.9746805
    [32] W. J. Yang, L. J. Wang, S. L. Cheng, Y. M. Li, A. Y. Du, Deep hash with improved dual attention for image retrieval, Information, 12 (2021), 285. https://doi.org/10.3390/info12070285 doi: 10.3390/info12070285
    [33] C. Tian, M. Zheng, W. Zuo, B. Zhang, Y. Zhang, D. Zhang, Multi-stage image denoising with the wavelet transform, Pattern Recognit., 134 (2023), 109050. https://doi.org/10.1016/j.patcog.2022.109050 doi: 10.1016/j.patcog.2022.109050
    [34] J. Bhardwaj, A. Nayak, Haar wavelet transform-based optimal bayesian method for medical image fusion, Med. Biol. Eng. Comput., 58 (2020), 2397–2411. https://link.springer.com/article/10.1007/s11517-020-02209-6
    [35] R. Ranjan, P. Kumar, An improved image compression algorithm using 2D dwt and pca with canonical huffman encoding, Entropy, 25 (2023), 1382. https://doi.org/10.3390/e25101382 doi: 10.3390/e25101382
    [36] G. Strang, The discrete cosine transform, SIAM Rev., 41 (1998), 135–147. https://doi.org/10.1137/S0036144598336745 doi: 10.1137/S0036144598336745
    [37] M. Norouzi, A. Punjani, D. J. Fleet, Fast exact search in hamming space with multi-index hashing, IEEE Trans. Pattern Anal. Mach. Intell., 6 (2014), 1107–1119. https://doi.org/10.1109/TPAMI.2013.231 doi: 10.1109/TPAMI.2013.231
    [38] H. W. Zhang, Y. B. Dong, J. Li, D. Q. Xu, An efficient method for time series similarity search using binary code representation and hamming distance, Intell. Data Anal., 25 (2021), 439–461. https://doi.org/10.3233/IDA-194876 doi: 10.3233/IDA-194876
    [39] F. Rashid, A. Miri, I. Woungang, Secure image deduplication through image compression, J. Inf. Secur. Appl., 27 (2016), 54–64. https://doi.org/10.1016/j.jisa.2015.11.003 doi: 10.1016/j.jisa.2015.11.003
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
