
Mean field models for large data–clustering problems

  • Received: 01 July 2019 Revised: 01 March 2020 Published: 09 September 2020
  • Primary: 82C40, 94A08; Secondary: 68U10

  • We consider mean-field models for data–clustering problems starting from a generalization of the bounded confidence model for opinion dynamics. The microscopic model includes information on the position as well as on additional features of the particles in order to develop specific clustering effects. The corresponding mean–field limit is derived and properties of the model are investigated analytically. In particular, the mean–field formulation allows the use of a random subsets algorithm for efficient computations of the clusters. Applications to shape detection and image segmentation on standard test images are presented and discussed.

    Citation: Michael Herty, Lorenzo Pareschi, Giuseppe Visconti. Mean field models for large data–clustering problems[J]. Networks and Heterogeneous Media, 2020, 15(3): 463-487. doi: 10.3934/nhm.2020027




    Particle and kinetic models for consensus and cluster formation have appeared in the recent literature on self–organized socio–economic dynamical systems, such as opinion formation, flocking of birds or fish, and elections and referendums under the influence of mass media. See e.g. [5,11,18,19,20,23,25,34,16], the review articles [35,4,1] and the book [40]. A related research direction uses the consensus features of these models in an artificial way to solve problems of optimization or segmentation of data in large dimensions [28,41,15,29].

    In this paper, we aim at formulating suitable models on the microscopic (or particle) level as well as on the mean–field (or kinetic) level to describe the partition of a large set of data into clusters. This problem is known as the data clustering problem and is widely studied in many applications, such as pattern recognition, shape detection and image segmentation. The proposed methods do not require the number of clusters to be fixed a priori, and clusters are characterized by small in–group and large out–group distances.

    Since we are interested in models characterizing distance and qualitative features without additional physical or socio–economical modeling background, the proposed model generalizes the Hegselmann–Krause (HK) opinion dynamics model [25]. Originally, the HK model was proposed in a microscopic setting, in a one–dimensional, time-discrete framework. Several extensions exist; in the sequel we briefly review the basic model before discussing its extension towards image clustering problems.

    To this aim, let us consider a group of $n$ particles with a (scalar) initial state $x_i(0)\in\mathbb{R}$, $i=1,\dots,n$, where the state of each particle varies depending on the state of the others. The key idea of the Hegselmann–Krause (HK) model [25] is that particles with completely different opinions do not influence each other, and a sort of mediation occurs among agents whose opinions are within a bounded confidence interval described by a parameter $\epsilon\geq 0$. Let $x(t)=[x_1(t),\dots,x_n(t)]^T$ be the state of the system at time $t\geq 0$. Then the dynamics of the $i$–th particle is given by

    $$\frac{d}{dt}x_i(t)=\sum_{j=1}^{n}A_{ij}(t,\epsilon)\,(x_j(t)-x_i(t)),\qquad i=1,\dots,n \tag{1}$$

    where $A(t,\epsilon)\in\mathbb{R}^{n\times n}$ is the time–varying adjacency matrix whose entries are of the form

    $$A_{ij}(t,\epsilon):=\begin{cases}\dfrac{1}{\sigma_i}, & \text{if } j\in N_i(t,\epsilon)\\ 0, & \text{otherwise}\end{cases} \tag{2}$$

    with

    $$N_i(t,\epsilon):=\left\{j\in\{1,\dots,n\}\,:\,|x_i(t)-x_j(t)|\leq\epsilon\right\},\qquad i=1,\dots,n \tag{3}$$

    defining the neighborhood of the i–th particle at time t, and

    $$\sigma_i:=\begin{cases}|N_i(t,\epsilon)|,\\ n,\end{cases} \tag{4}$$

    the type of interactions. Precisely, when $\sigma_i=n$ the interactions are symmetric, since $A_{ij}=A_{ji}$ for all $i,j$, but $A$ is not a stochastic matrix. Instead, when $\sigma_i=|N_i(t,\epsilon)|$, $A$ is a right stochastic matrix but the interactions are no longer symmetric.
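    Before moving on, a minimal numerical sketch may help fix ideas. The following Python fragment (an illustration under our own choice of time step and parameters, not code from the paper) integrates (1)–(4) with an explicit Euler step; the flag symmetric switches between the two normalizations in (4).

        import numpy as np

        def hk_step(x, eps, dt, symmetric=True):
            # bounded confidence neighborhoods (3): |x_i - x_j| <= eps
            close = np.abs(x[:, None] - x[None, :]) <= eps
            # normalization (4): n (symmetric) or |N_i| (right stochastic)
            sigma = len(x) if symmetric else close.sum(axis=1)
            # right-hand side of (1) with adjacency (2)
            rhs = (close * (x[None, :] - x[:, None])).sum(axis=1) / sigma
            return x + dt * rhs

        x = np.linspace(0.0, 1.0, 100)      # n = 100 equally spaced initial opinions
        for _ in range(400):
            x = hk_step(x, eps=0.15, dt=0.05)
        # the states concentrate on a few clusters separated by more than eps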

    Several works in the literature analyze the properties of the HK model. For instance, for the analysis in the time-discrete setting we refer to [27]. In [7] it is proven that, during the evolution of the system (1), the order of the states is preserved. Thanks to the definition of the interaction kernel (2)–(4), if $|x_i(t)-x_{i+1}(t)|>\epsilon$ at some time $t$, this remains true for larger times. Therefore, the HK model tends to group the initial states into a finite number of clusters, as proved in [35]. Following [35], we define a cluster $C(t)$ at time $t\geq 0$ as a subset of particles separated from all the other particles:

    $$A_{ij}(t,\epsilon)\neq 0\ \text{ for all }\ i,j\in C(t),\qquad A_{ij}(t,\epsilon)=0\ \text{ whenever }\ i\in C(t),\ j\notin C(t).$$

    In [7,22,30] the stability of the dynamical model is investigated. In particular, convergence of the system to a steady profile in finite time is proved in [7]. For further results on the one–dimensional local and symmetric model we refer also to [8]. We point out that the behavior of cluster formation in the transient regime is also of interest in the mathematical literature [21].

    In [36,10], the one-dimensional Hegselmann–Krause model is generalized to the case of a multi–dimensional data–set. Subsequently, the multi–dimensional HK model has been used as a technique to cluster a large amount of data into a small number of subsets with some common features [38] and to compare its performance with the k–means algorithm [31]. Recently, in [29,37] new approaches to clustering problems and image segmentation have been proposed based on the Kuramoto model.

    Here, we introduce a generalization of the multi–dimensional formulation of the HK model to solve data clustering problems. This amounts to taking into account clustering with respect to different features. We deal with data having both time-dependent and static features. The latter describe intrinsic properties of a datum, such as a measure of the trustworthiness of a piece of information or the color intensity of pixels in images. We derive the corresponding mean–field limit and investigate analytically the properties of the model. In particular, following [2], the mean–field formulation allows the use of a random subset algorithm for efficient computation of the clusters.

    The rest of the manuscript is organized as follows. The microscopic model is introduced at the beginning of Section 2 and briefly discussed in Section 2.1. The proposed model is still a microscopic model, and we describe the case of large data using a mean–field equation in Section 3. Analytical properties of the kinetic equation, such as a priori estimates on the evolution of the moments and the characterization of the limit distribution, are discussed in Section 3.1. Numerical evidence of the theoretical results is provided in Section 4.1. Further, we propose applications to detection and compression of data, such as shape detection, in Section 4.2, and image segmentation, in Section 4.3. We finally conclude with some remarks and future research directions in Section 5.

    Each particle $i=1,\dots,n$ is endowed with a time–dependent state vector $x_i=x_i(t)\in\mathbb{R}^{d_1}$ as well as features $c_i=[c_{i,1},\dots,c_{i,d_2}]\in\mathbb{R}^{d_2}$ representing static characteristics of the system, i.e., $c_i$ is independent of time. As a motivating example, consider an image segmentation problem where $x_i$ is the center point of a pixel or voxel of the image and $c_i$ the color coding at that point.

    As in the HK model we define the neighborhood of the particles by

    $$N_i(t,\epsilon_1,\epsilon_2):=\left\{j\in\{1,\dots,n\}\,:\,\|x_i(t)-x_j(t)\|_{\mathbb{R}^{d_1}}\leq\epsilon_1,\ \|c_i-c_j\|_{\mathbb{R}^{d_2}}\leq\epsilon_2\right\}, \tag{5}$$

    for $i=1,\dots,n$, and $A_{ij}(t,\epsilon_1,\epsilon_2)$ are the entries of the time–varying matrix $A(t,\epsilon_1,\epsilon_2)\in\mathbb{R}^{n\times n}$ defined as

    $$A_{ij}(t,\epsilon_1,\epsilon_2):=\begin{cases}\dfrac{1}{\sigma_i}, & \text{if } j\in N_i(t,\epsilon_1,\epsilon_2)\\ 0, & \text{otherwise}\end{cases} \tag{6}$$

    with $\sigma_i$ defined analogously to (4). Here, $\epsilon_1\geq 0$ and $\epsilon_2\geq 0$ are two bounded confidence levels. The two norms $\|\cdot\|_{\mathbb{R}^{d_1}}$ and $\|\cdot\|_{\mathbb{R}^{d_2}}$ need to be properly defined according to the specific context of the problem. Then, the mathematical model for any $t\geq 0$ is given by

    $$c_{i,k}(t)=c_{i,k}(0),\qquad i=1,\dots,n,\quad k=1,\dots,d_2, \tag{7}$$
    $$\frac{d}{dt}x_{i,k}(t)=\sum_{j=1}^{n}A_{ij}(t,\epsilon_1,\epsilon_2)\,(x_{j,k}(t)-x_{i,k}(t)),\qquad i=1,\dots,n,\quad k=1,\dots,d_1 \tag{8}$$

    with initial conditions $c_{i,k}(0)=c^0_{i,k}$ and $x_{i,k}(0)=x^0_{i,k}$. Notice that a sufficient condition to reduce model (8) to the multi–dimensional version of the HK model (1) is $\epsilon_2>\max_{i,j=1,\dots,n}\|c_i-c_j\|_{\mathbb{R}^{d_2}}$.

    Existence and convergence of solutions to system (8) can be established by using the same techniques as in [8], where the original HK model was analyzed in the one–dimensional case. In fact, system (7)–(8) belongs to the same class of state-switched systems.
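    As an illustration of (7)–(8), the following Python sketch evolves the particle system with an explicit Euler step; it is a hedged example (Euclidean norms for both metrics and our own step size are assumptions), not the authors' implementation.

        import numpy as np

        def step(x, c, eps1, eps2, dt, symmetric=True):
            # x: (n, d1) time-dependent states, c: (n, d2) static features, cf. (7)
            dx = x[None, :, :] - x[:, None, :]                # differences x_j - x_i
            near_x = np.linalg.norm(dx, axis=2) <= eps1       # position test in (5)
            near_c = np.linalg.norm(c[None, :, :] - c[:, None, :], axis=2) <= eps2
            A = near_x & near_c                               # neighborhood N_i(t, eps1, eps2)
            sigma = np.full(len(x), len(x)) if symmetric else A.sum(axis=1)
            # explicit Euler step for (8); c never changes, in agreement with (7)
            return x + dt * (A[..., None] * dx).sum(axis=1) / sigma[:, None]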

    In the case of symmetric interactions we can recover from the previous model results on the moment behavior similar to those presented for the HK model in [8,26]. We only record the results here, since the proofs are slight variations of existing ones (see for example [8]).

    Define the moments $m_1(t)\in\mathbb{R}^{d_1}$ and $m_2(t)\in\mathbb{R}^{d_1\times d_1}$ with respect to the time-dependent feature as

    $$m_1(t):=\sum_{i=1}^{n}x_i(t),\qquad m_2(t):=\sum_{i=1}^{n}x_i(t)\otimes x_i(t), \tag{9}$$

    then the following result holds true.

    Lemma 2.1. Let $(x_i(t))_{1\leq i\leq n}$ be the solution of the dynamical system (8) with symmetric interactions, i.e. $\sigma_i=n$ in (6). Then, we obtain

    $$m_1(t)=m_1(0),\qquad \frac{d}{dt}\,(m_2(t))_{kk}\leq 0,\qquad k=1,\dots,d_1.$$

    Corollary 1. Let $(x_i(t))_{1\leq i\leq n}$ be the solution of the dynamical system (8). Assume that $\epsilon_1$ and $\epsilon_2$ are sufficiently large so that the interactions are global. Then the first moment is conserved and the following decay estimate holds:

    $$\frac{d}{dt}\,(m_2(t))_{k\ell}=\frac{2}{n}\,(m_1(0))_k\,(m_1(0))_\ell-2\,(m_2(t))_{k\ell},\qquad \lim_{t\to\infty}\,(m_2(t))_{k\ell}=\frac{(m_1(0))_k\,(m_1(0))_\ell}{n},\qquad k,\ell=1,\dots,d_1.$$

    Later, we will show that similar results hold for the continuous model. Some remarks on further properties concerning the particle model (8) are in order.

    Remark 1. ● Formation of clusters in the large-time behavior is extensively investigated in the review article [35] for general systems of the form (8). However, the number of clusters cannot be predicted a priori starting from a given initial configuration.

    ● In the non–symmetric case, conservation of the first moment does not hold. Also, in general, it is not possible to show the decay of the second moment, although clustering still appears, see [35].

    ● Extensions of the model taking into account non-static features are obtained by including an interaction term on the right-hand side of (7). The determination of such an interaction term, however, would be rather problem dependent, and in this paper we do not explore this direction further.

    In the case of many particles we derive the formal mean–field equation. Let $\Omega_1\subset\mathbb{R}^{d_1}$, $\Omega_2\subset\mathbb{R}^{d_2}$ be compact domains and $\Omega=\Omega_1\times\Omega_2$. For $n\geq 1$ we denote by $f^n:\mathbb{R}^+\times\Omega\to\mathbb{R}$ the empirical distribution on $\Omega\subset\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$ given by

    $$f^n(t,x,c):=\frac{1}{n}\sum_{i=1}^{n}\delta(x-x_i(t))\,\delta(c-c_i(t)).$$

    Let us consider a test function $\varphi(x,c)\in C^1_0(\Omega)$, i.e. the space of continuous and compactly supported functions on $\Omega$ with continuous derivative. Denote by $\langle\cdot,\cdot\rangle$ the integration of $f^n$ against the test function $\varphi$ on $\Omega$. We have

    $$\begin{aligned}\frac{d}{dt}\langle f^n(t),\varphi\rangle &=\frac{1}{n}\sum_{i=1}^{n}\frac{d}{dt}\varphi(x_i(t),c_i(t))=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{\sigma_i}\sum_{j\in N_i(t,\epsilon_1,\epsilon_2)}\nabla_x\varphi(x_i(t),c_i(t))\cdot(x_j(t)-x_i(t))\\ &=\Big\langle f^n(t),\ \frac{1}{n\,\sigma(t,x,c)}\sum_{j=1}^{n}\chi_{\epsilon_1}(x_j(t)-x)\,\chi_{\epsilon_2}(c_j(t)-c)\,(x_j(t)-x)\cdot\nabla_x\varphi\Big\rangle\end{aligned}$$

    where we defined

    $$\chi_{\epsilon}(x)=\begin{cases}1, & \|x\|\leq\epsilon\\ 0, & \text{else}\end{cases}$$

    and we used the fact that equation (8) can be re-written as

    $$\frac{d}{dt}x_i(t)=\frac{1}{n\,\sigma(t,x_i(t),c_i(t))}\sum_{j=1}^{n}\chi_{\epsilon_1}(x_j(t)-x_i(t))\,\chi_{\epsilon_2}(c_j(t)-c_i(t))\,(x_j(t)-x_i(t)),$$

    with $\sigma_i=n\,\sigma(t,x_i(t),c_i(t))$ and

    $$\sigma(t,x_i(t),c_i(t))=\frac{1}{n}\sum_{j=1}^{n}\chi_{\epsilon_1}(x_j(t)-x_i(t))\,\chi_{\epsilon_2}(c_j(t)-c_i(t)).$$

    We can easily compute

    $$\sigma(t,x,c)=\frac{1}{n}\sum_{j=1}^{n}\chi_{\epsilon_1}(x_j(t)-x)\,\chi_{\epsilon_2}(c_j(t)-c)=\int_{\Omega}\chi_{\epsilon_1}(y-x)\,\chi_{\epsilon_2}(z-c)\,f^n(t,y,z)\,dz\,dy,$$

    and similarly

    $$\frac{1}{n\,\sigma(t,x,c)}\sum_{j=1}^{n}\chi_{\epsilon_1}(x_j(t)-x)\,\chi_{\epsilon_2}(c_j(t)-c)\,(x_j(t)-x)=\frac{1}{\sigma(t,x,c)}\int_{\Omega}\chi_{\epsilon_1}(y-x)\,\chi_{\epsilon_2}(z-c)\,(y-x)\,f^n(t,y,z)\,dz\,dy.$$

    Collecting these formal computations, after integration by parts in $x$, we obtain the weak form of the mean–field equation

    $$\frac{d}{dt}\langle f^n,\varphi\rangle+\Big\langle\nabla_x\cdot\Big(f^n(t,x,c)\int_{\Omega}\frac{\chi_{\epsilon_1}(y-x)\,\chi_{\epsilon_2}(z-c)}{\sigma(t,x,c)}\,(y-x)\,f^n(t,y,z)\,dz\,dy\Big),\varphi\Big\rangle=0.$$

    If we now define a kernel $A$ as a continuous extension of the adjacency matrix (6),

    $$A_{\epsilon_1,\epsilon_2}(t,x,c,y,z)=\frac{\chi_{\epsilon_1}(y-x)\,\chi_{\epsilon_2}(z-c)}{\sigma(t,x,c)}, \tag{10}$$

    and V given by

    $$V_{\epsilon_1,\epsilon_2}(t,x,c)=\int_{\Omega}A_{\epsilon_1,\epsilon_2}(t,x,c,y,z)\,(y-x)\,f(t,y,z)\,dz\,dy \tag{11}$$

    then, in the limit $n\to\infty$, assuming that the empirical measure $f^n(t,x,c)$ converges to $f(t,x,c)$, we formally obtain the strong form of the kinetic equation as

    $$\partial_t f(t,x,c)+\nabla_x\cdot\big(V_{\epsilon_1,\epsilon_2}(t,x,c)\,f(t,x,c)\big)=0. \tag{12}$$

    Rigorous analytical results on convergence in the case of $\epsilon_2$ very large and symmetric interactions have already been discussed, for instance in [13,14]. These results guarantee convergence of the distribution $f$ in (12) to a probability limit distribution $f^\infty$, provided the initial distribution $f_0$ at time $t=0$ has finite second moment and $A$ is a non–negative, bounded, measurable and symmetric kernel. Observe that these assumptions are also verified in our framework. For a more detailed discussion of analytical convergence results for mean–field models, we refer e.g. to [12,16].

    We briefly discuss the relation to similar kinetic models.

    In [9] a homogeneous kinetic model for opinion dynamics under bounded confidence is analyzed. Therein, the model is derived by using a Boltzmann–like approach with instantaneous binary interactions describing compromise. An analogous derivation of the kinetic equation (12) is also possible using a binary interaction model based on an explicit Euler discretization of the underlying particle dynamics (8) with $n=2$ and performing a grazing collision limit. We omit this computation.

    In [9] the authors prove the weak convergence of the solution to a convex combination of Dirac delta functions. In [13] a similar asymptotic distribution is found for the mean–field limit of the classical Hegselmann–Krause model.

    A further symmetric clustering model with weighted interactions with respect to a fixed number of closest neighbors, together with its corresponding mean–field limit, has been introduced in [3]. Moments and long–time behavior can also be studied therein. Finally, models for other applications are also able to cluster information, e.g. in traffic flow modeling [42], where the physical acceleration plays the role of the bounded confidence.

    As a preliminary remark we observe that the marginal distribution $\tilde f_c(t,c):=\int_{\Omega_1}f(t,x,c)\,dx$ is preserved in time by the kinetic equation (12). Instead, it is easy to check that the marginal distribution $\tilde f_x(t,x):=\int_{\Omega_2}f(t,x,c)\,dc$ is not stationary, and its behavior in time is still influenced by a $c$-dependent kernel. These considerations are a direct consequence of the microscopic model (7)–(8). For this reason, in the following results, and if not otherwise stated, we mainly focus on the analysis of moments with respect to the variable $x$.

    As in the discrete case we define the $p$–th moment of the kinetic distribution with respect to $x$ as

    $$x_\alpha(t)=\int_{\Omega}x^\alpha f(t,x,c)\,dx\,dc,\qquad |\alpha|=p\in\mathbb{N},\ \alpha_i\in\mathbb{N}.$$

    Here, we used the multi–index notation $x^\alpha=x_1^{\alpha_1}\cdots x_{d_1}^{\alpha_{d_1}}$ and $|\alpha|=\sum_{i=1}^{d_1}\alpha_i$. In particular, for each $k,j=1,\dots,d_1$ we will denote by

    $$u_k(t)=x_\alpha(t),\qquad |\alpha|=1,\ \alpha_i=\delta_{ik},\ i=1,\dots,d_1$$
    $$E_{kj}(t)=x_\alpha(t),\qquad |\alpha|=2,\ \alpha_i=\delta_{ik}+\delta_{ij},\ i=1,\dots,d_1,$$

    the first and the second moment, respectively. Then, for a symmetric kernel $A$, integration of the kinetic equation yields the following result.

    Lemma 3.1. Let $f(t,x,c)$ be the solution of the model (12) with symmetric kernel $A$, i.e., $\sigma(t,x,c)=1$. Then, the following a priori estimates hold true:

    $$\frac{d}{dt}u_k(t)=0,\qquad \frac{d}{dt}E_{kk}(t)\leq 0,\qquad |E_{kj}(t)|\leq\frac{1}{2}\big(E_{kk}+E_{jj}\big)(0)$$

    for each $k,j=1,\dots,d_1$.

    Proof. For each fixed $k=1,\dots,d_1$ we have

    $$\frac{d}{dt}u_k(t)=-\int_{\Omega}x_k\,\nabla_x\cdot\big(V_{\epsilon_1,\epsilon_2}(t,x,c)\,f(t,x,c)\big)\,dx\,dc=\int_{\Omega}\int_{\Omega}\chi_{\epsilon_1}(y-x)\,\chi_{\epsilon_2}(z-c)\,(y_k-x_k)\,f(t,y,z)\,f(t,x,c)\,dy\,dz\,dx\,dc=0.$$

    The last term vanishes due to the anti–symmetry of the integrand under the interchange of variables $x\leftrightarrow y$. The second statement follows by observing that for each $k=1,\dots,d_1$ we have

    $$\frac{d}{dt}E_{kk}(t)=-\int_{\Omega}x_k^2\,\nabla_x\cdot\big(V_{\epsilon_1,\epsilon_2}(t,x,c)\,f(t,x,c)\big)\,dx\,dc=-\int_{\Omega}\int_{\Omega}\chi_{\epsilon_1}(y-x)\,\chi_{\epsilon_2}(z-c)\,(y_k-x_k)^2\,f(t,y,z)\,f(t,x,c)\,dy\,dz\,dx\,dc\leq 0.$$

    Hence, we obtain

    $$0\leq|E_{kj}(t)|\leq\int_{\Omega}|x_k\,x_j|\,f(t,x,c)\,dx\,dc\leq\frac{1}{2}\big(E_{kk}+E_{jj}\big)(t)\leq\frac{1}{2}\big(E_{kk}+E_{jj}\big)(0).$$

    Instead, in the case of global interactions and for a general kernel $A$ we have

    Corollary 2. Assume that $\epsilon_1$ and $\epsilon_2$ are sufficiently large so that $A_{\epsilon_1,\epsilon_2}(t,x,c,y,z)=1$ for all $x,y\in\Omega_1$, $c,z\in\Omega_2$ and $t\geq 0$. Then, for all $k,j=1,\dots,d_1$, the following relations hold true:

    $$u_k(t)=u_k(0),\ \forall\, t\geq 0\qquad\text{and}\qquad E_{kj}(t)\ \xrightarrow{t\to\infty}\ u_k(0)\,u_j(0).$$

    Proof. Multiplying (12) by $x^\alpha$ with $|\alpha|=p\in\mathbb{N}$ and integrating over $\Omega$, we compute

    $$\frac{d}{dt}x_\alpha(t)=-p\,x_\alpha(t)+\sum_{\ell=1}^{d_1}\alpha_\ell\,u_\ell(t)\,x_{\alpha^{(\ell)}}(t)$$

    where $\alpha^{(\ell)}=\alpha-e_\ell$, with $e_\ell$ the $\ell$–th standard basis vector of $\mathbb{R}^{d_1}$, and $|\alpha^{(\ell)}|=p-1$. Then, for $p=1$ we still have conservation of the first moment, since we obtain

    $$\frac{d}{dt}u_k(t)=-u_k(t)+u_k(t)=0,\qquad k=1,\dots,d_1.$$

    For p=2 we have

    $$\frac{d}{dt}E_{kj}(t)=-2E_{kj}(t)+2u_k(t)\,u_j(t)=-2E_{kj}(t)+2u_k(0)\,u_j(0),\qquad k,j=1,\dots,d_1$$

    and therefore

    $$E_{kj}(t)=E_{kj}(0)\,e^{-2t}+u_k(0)\,u_j(0)\,\big(1-e^{-2t}\big)\ \xrightarrow{t\to\infty}\ u_k(0)\,u_j(0),\qquad k,j=1,\dots,d_1.$$

    Observe that, compared to the local interaction case analyzed in Lemma 3.1, where in general the decay property of all second-order moments is not guaranteed, in the global interaction case the mixed second-order moments also decay in time. In other words, while in the local interaction model only the variances go to zero, in the global interaction model both variances and covariances go to zero in the large-time behavior.

    Lemma 3.1 and Corollary 2 suggest that for $t\to\infty$ any initial distribution tends to a stationary distribution that is concentrated on a finite number of points in $\Omega_1$ and therefore represents clusters. Hence, in the following we investigate the asymptotic distribution of the model (12).

    Lemma 3.2. Let $\epsilon_1$ and $\epsilon_2$ be arbitrary positive bounded confidence levels. Then, the distribution

    $$f^\infty(x,c)=\sum_{k=1}^{n_1}f_k\,\delta(x-x_k)\sum_{\ell=1}^{n_2(k)}\delta(c-c_{k\ell}) \tag{13}$$

    with $f_k>0$ and $\sum_{k=1}^{n_1}\sum_{\ell=1}^{n_2(k)}f_k=1$, is a weak stationary solution of the model (12) if and only if either $\|x_i-x_k\|_{\mathbb{R}^{d_1}}>\epsilon_1$ for all $i\neq k$, or $\|c_{ij}-c_{k\ell}\|_{\mathbb{R}^{d_2}}>\epsilon_2$ for all $i\neq k$ and all $j,\ell$, or both hold true.

    Proof. Let us assume that at least one of $\|x_i-x_k\|_{\mathbb{R}^{d_1}}>\epsilon_1$ for all $i\neq k$, and $\|c_{ij}-c_{k\ell}\|_{\mathbb{R}^{d_2}}>\epsilon_2$ for all $i\neq k$ and all $j,\ell$, is verified, and let us prove that $f^\infty$ is the asymptotic distribution of equation (12), i.e. given a test function $\varphi\in C^1_0(\Omega)$,

    $$\int_{\Omega}\varphi(x,c)\,\nabla_x\cdot\big(V_{\epsilon_1,\epsilon_2}(t,x,c)\,f^\infty(x,c)\big)\,dx\,dc=0.$$

    We obtain

    $$\begin{aligned}\int_{\Omega}\varphi(x,c)\,\nabla_x\cdot\big(V_{\epsilon_1,\epsilon_2}(t,x,c)\,f^\infty(x,c)\big)\,dx\,dc&=-\sum_{k=1}^{n_1}f_k\sum_{\ell=1}^{n_2(k)}V_{\epsilon_1,\epsilon_2}(t,x_k,c_{k\ell})\cdot\nabla_x\varphi(x,c)\big|_{x=x_k}\\&=-\sum_{k=1}^{n_1}f_k^2\sum_{\ell=1}^{n_2(k)}\frac{(x_k-x_k)}{\sigma(x_k,c_{k\ell})}\cdot\nabla_x\varphi(x)\big|_{x=x_k}=0.\end{aligned}$$

    Conversely, assume that $f^\infty$ as in (13) is the steady state of (12). Assume by contradiction that there exist $k,i\in\{1,\dots,n_1\}$ such that $\|x_k-x_i\|_{\mathbb{R}^{d_1}}\leq\epsilon_1$, and $\ell\in\{1,\dots,n_2(k)\}$, $j\in\{1,\dots,n_2(i)\}$ such that $\|c_{ij}-c_{k\ell}\|_{\mathbb{R}^{d_2}}\leq\epsilon_2$. Then, using similar computations as before we obtain

    $$\int_{\Omega}\varphi(x,c)\,\nabla_x\cdot\big(V_{\epsilon_1,\epsilon_2}(t,x,c)\,f^\infty(x,c)\big)\,dx\,dc=-f_k\,f_i\,\frac{(x_k-x_i)}{\sigma(x_k,c_{k\ell})}\cdot\nabla_x\varphi(x,c)\big|_{x=x_k}-f_i\,f_k\,\frac{(x_i-x_k)}{\sigma(x_i,c_{ij})}\cdot\nabla_x\varphi(x,c)\big|_{x=x_i}\neq 0$$

    which contradicts the hypothesis.

    Lemma 3.3. Assume that $\epsilon_1$ and $\epsilon_2$ are sufficiently large so that $A_{\epsilon_1,\epsilon_2}(t,x,c,y,z)=1$ for all $x,y\in\Omega_1$, $c,z\in\Omega_2$ and $t\geq 0$. Then, the distribution

    $$f^\infty(x,c)=\delta(x-\bar{x})\,\tilde f_c(0,c)=\prod_{k=1}^{d_1}\delta(x_k-\bar{x}_k)\,\tilde f_c(0,c)$$

    with $\tilde f_c(0,c)=\int_{\Omega_1}f_0(x,c)\,dx$, is a weak stationary solution of the model (12) if and only if $\bar{x}_k=u_k(0)$.

    Proof. Substituting the expression of $f^\infty$ into the weak form of the kinetic equation, integrating by parts and using the fact that, for $\epsilon_1$ and $\epsilon_2$ large enough,

    $$V_{\epsilon_1,\epsilon_2}(t,x,c)=\int_{\Omega}y\,f(t,y,z)\,dz\,dy-x=\int_{\Omega}y\,f_0(y,z)\,dz\,dy-x,$$

    we obtain

    $$\int_{\Omega}\varphi(x,c)\,\nabla_x\cdot\big(V_{\epsilon_1,\epsilon_2}(t,x,c)\,f^\infty(x,c)\big)\,dx\,dc=\int_{\Omega_2}\tilde f_c(0,c)\,\Big(\bar{x}-\int_{\Omega}y\,f_0(y,z)\,dz\,dy\Big)\cdot\nabla_x\varphi(x,c)\big|_{x=\bar{x}}\,dc$$

    which is zero if $\bar{x}=\int_{\Omega}y\,f_0(y,z)\,dz\,dy$.

    Remark 2. As a direct consequence of Lemma 3.2 and Lemma 3.3, we have that, taking $\epsilon_2$ sufficiently large or an initial distribution $f_0(x,c)$ which is atomic with respect to the second variable, the same quantized steady–states of the kinetic formulation of the microscopic model (1) are recovered, cf. [13].

    Remark 3. In Lemma 3.2 the number of clusters $n_1$, as well as the values $f_j$ and the positions $x_j$ of the clusters, are functions of the initial distribution $f_0(x,c)$ and of the bounded confidence levels $\epsilon_1$ and $\epsilon_2$. As pointed out also in [14], in general it is not possible to predict the number of Dirac deltas in the asymptotic configuration from a given initial distribution. However, for $\epsilon_2$ sufficiently large and assuming $\Omega=[0,1]^d$, the number of clusters satisfies $1\leq\tilde{n}\leq\epsilon_1^{-d}$. This behavior is also observed at the Boltzmann level, see [9].

    The theoretical results on the asymptotic behavior of the mean–field equation (12) introduced in the previous section are now also investigated numerically. Moreover, we show the efficiency of the model as a technique to solve realistic data–clustering problems; to this end we focus on applications in the field of shape detection and image segmentation.

    In order to efficiently solve the kinetic model (12), we employ the Mean Field Interaction Algorithm introduced in [2], which is based on a random subset evaluation of the kernel term $A_{\epsilon_1,\epsilon_2}(t,x,c,y,z)$ given in (10). The algorithm used in the numerical experiments is summarized by the steps in Algorithm 1, for a time interval $[0,T]$ discretized into $k_{tot}$ subintervals of size $\Delta t$. The computational cost of the algorithm is $O(MN)$, where $M$ is the size of the subset of interacting particles; for $M=N$ we recover the explicit Euler scheme for the original $N$-particle system (8), whose cost is $O(N^2)$. For further details on the Mean Field Interaction Algorithm we refer to [2].

    Algorithm 1 Mean Field Interaction Algorithm for the kinetic equation (12).
    1: Given $N$ sample pairs $(x_i^0,c_i^0)$, $i=1,\dots,N$, computed from the initial distribution $f_0(x,c)$, and $M\leq N$;
    2: for $k=0$ to $k_{tot}-1$ do
    3:  for $i=1$ to $N$ do
    4:    sample $M$ data $j_1,\dots,j_M$ uniformly without repetition among all data;
    5:    compute
    $$\bar{A}(x_i^k,c_i^0)=\sum_{\ell=1}^{M}A_{\epsilon_1,\epsilon_2}(x_i^k,c_i^0,x_{j_\ell}^k,c_{j_\ell}^0),\qquad \bar{x}_i^k=\frac{1}{\bar{A}(x_i^k,c_i^0)}\sum_{\ell=1}^{M}A_{\epsilon_1,\epsilon_2}(x_i^k,c_i^0,x_{j_\ell}^k,c_{j_\ell}^0)\,x_{j_\ell}^k$$
    6:    compute the data change
    $$x_i^{k+1}=x_i^k\left(1-\Delta t\,\bar{A}(x_i^k,c_i^0)\right)+\Delta t\,\bar{A}(x_i^k,c_i^0)\,\bar{x}_i^k$$
    7:  end for
    8: end for

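    A compact Python transcription of Algorithm 1 could look as follows. This is a simplified sketch under our own assumptions: the kernel $A$ is replaced by the plain product of indicator functions, so that the normalization $\sigma$ is absorbed into $\bar{A}$, here estimated as the fraction of sampled neighbors.

        import numpy as np

        rng = np.random.default_rng(0)

        def mean_field_interaction(x, c, eps1, eps2, dt, M, steps):
            # x: (N, d1) samples of the time-dependent state, c: (N, d2) static features
            N = len(x)
            for _ in range(steps):
                x_new = x.copy()
                for i in range(N):
                    # step 4: sample M indices uniformly without repetition
                    js = rng.choice(N, size=M, replace=False)
                    near = (np.linalg.norm(x[js] - x[i], axis=1) <= eps1) & \
                           (np.linalg.norm(c[js] - c[i], axis=1) <= eps2)
                    A_bar = near.mean()                    # Monte Carlo estimate of the kernel mass
                    if A_bar > 0:
                        x_bar = x[js][near].mean(axis=0)   # local mean of the sampled neighbors
                        # step 6: data change
                        x_new[i] = x[i] * (1 - dt * A_bar) + dt * A_bar * x_bar
                x = x_new
            return x

    Since in this sketch $\bar{A}\in[0,1]$, for $\Delta t\leq 1$ the update is a convex combination of $x_i^k$ and the local mean, so the states remain in the convex hull of the data.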

    We use the Mean Field Interaction Algorithm to numerically investigate the properties of the kinetic model proved in the previous section. We analyze two typical situations, which lead to the two applications shown later in this section.

    First, we consider an initial distribution that is uniform with respect to $x$ and constant along the static feature variable $c$, i.e. $f_0(x,c)=\chi_{[0,1]^{d_1}}(x)$. This choice is obviously also consistent with considering the bounded confidence level $\epsilon_2$ very large. We show that, as discussed in Remark 2, equation (12) provides the same steady–states as the Hegselmann–Krause model.

    One–dimension. The numerical steady–states of the mean–field equation (12) are first computed for the one–dimensional case and compared to the steady–states of the microscopic model (8). Observe that, under the assumption of an initial constant distribution along c, the asymptotic behavior of (8) is the classical one prescribed by the one–dimensional Hegselmann–Krause model (1). The evolution in time of the first and second moment is also provided.

    In Figure 1 we show the steady–state provided by the mean–field kinetic model (12) for the one–dimensional initial uniform distribution on $[0,1]$. The final time is $T=20$ and the time step is $\Delta t=0.5$. We consider $N=5\times 10^5$ particles in the Monte Carlo method, in order to reduce the error due to sampling. The number of interacting particles is taken as $M=10$. We show the results for two values of the confidence bound, $\epsilon_1=0.5$ in the left panel and $\epsilon_1=0.15$ in the right panel. The evolution of the distributions up to the final time is shown by normalizing with respect to the maximum value at each fixed time. For $\epsilon_1=0.5$ we observe the formation of a consensus state in the large-time behavior. In fact, the final distribution is a Dirac delta centered at the initial value of the first moment. For $\epsilon_1=0.15$, instead, three clusters arise at equilibrium, similarly to the classical Hegselmann–Krause model for the same confidence bound and equally distributed initial data.

    Figure 1. Trend to the steady–state of the one–dimensional Hegselmann–Krause model (1) with $n=100$ agents equally spaced at initial time and non–symmetric interactions (top row), and of the mean–field model (12) computed with Algorithm 1 (bottom row), up to final time $T=20$. Left panels show the case $\epsilon_1=0.5$, right panels the case $\epsilon_1=0.15$.

    As observed in Figure 2, for both values of the bounded confidence level we have conservation of the first moment and energy dissipation in time. Although we are using a non–symmetric model, the first moment is preserved during the time evolution, with deviations of order $10^{-4}$ from the initial value, since the initial distribution is symmetric. The fluctuations in the first moment are due to the stochastic method.

    Figure 2. Evolution in time of the first moment (left) and of the second moment (right) for the two values of the bounded confidence level $\epsilon_1=0.5$ (dashed lines) and $\epsilon_1=0.15$ (solid lines).

    In Figure 3 we show some results of a simple analysis of Algorithm 1 with respect to the value of $M$, i.e. the size of the random subset of interacting particles. The left panel shows the trend to the steady–state of the kinetic model computed with Algorithm 1 and the same parameters as in the right plot of Figure 1, here however with $N=2\times 10^4$ and $M=2$. The same equilibrium state is reached, but with a faster transient. The dependence of the speed of cluster formation on the size $M$ of the random subset is shown in the right panel of Figure 3. The speed of convergence to equilibrium decreases as $M$ increases. In particular, the case $M=2$, where each particle is forced to align to the velocity of one other particle instead of their "mean", exhibits the fastest convergence. This analysis suggests that, although model (12) has a convolution structure and in principle can be efficiently solved with a Fast Fourier Transform at $O(N\log N)$ cost, Algorithm 1 may be preferable, since it has a comparable (or even lower) computational cost and can be applied to more general kernels where the convolution structure is lost.

    Figure 3. Left: trend to the steady–state of the mean–field model (12) computed with Algorithm 1 with $N=2\times 10^4$, $M=2$, $\epsilon_1=0.15$, up to final time $T=20$. Right: energy decay of the mean–field model (12) for several values of the number of interacting particles $M$.

    Two–dimensions. We also show one example of the numerical steady–states of the mean–field equation (12) in the two–dimensional case. Again we employ Algorithm 1. Figure 4 shows the steady–state for the particle density and the kinetic density at time $t=4$ and final time $T=50$. We use $N=10000$ particles and the time step is $\Delta t=0.5$. Using the Mean Field Interaction Algorithm 1, the number of interacting particles is taken as $M=10$. We show the results for one value of the confidence bound, $\epsilon_1=0.15$, which leads to the formation of 8 clusters at equilibrium. The evolution in time of the two–dimensional moments is also provided in Figure 5. We still observe conservation of the first moment, due to the initially symmetric distribution, and decay in time of the two non–mixed second moments.

    Figure 4. Particle solution (left plots) with $N=10000$ and kinetic density (right plots). Results are shown at time $t=4$ (top row) and final time $T=50$ (bottom row). The bounded confidence level is $\epsilon_1=0.15$.
    Figure 5. Evolution in time of the two–dimensional first moments (left) and second moments (right) for the bounded confidence level $\epsilon_1=0.15$.

    Next, we consider a non–constant initial distribution along the variable $c$. We consider a one–dimensional setting both in $x$ and in $c$, since the goal of this experiment is to provide evidence of the influence of the static feature on the clustering process. The initial kinetic distribution is given as a tensor product between a uniform and a Gaussian distribution, so that

    $$f_0(x,c)=\chi_{[0,1]}(x)\,\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(c-\mu)^2}{2\sigma^2}\right),$$

    with mean $\mu=0.5$ and variance $\sigma^2=0.3$. See the top left plot in Figure 6. All the simulations are performed by using $N=5000$ samples up to equilibrium.
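    For reproducibility, the initial samples can be drawn as follows (a small sketch; variable names are ours):

        import numpy as np

        rng = np.random.default_rng(1)
        N = 5000
        x0 = rng.uniform(0.0, 1.0, size=(N, 1))                    # uniform in x on [0, 1]
        c0 = rng.normal(loc=0.5, scale=np.sqrt(0.3), size=(N, 1))  # Gaussian feature, mu = 0.5, sigma^2 = 0.3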

    Figure 6. Top row: particles and kinetic density at initial time (left plot) and at equilibrium (right plot). Bottom row: on the left, analysis of the distances between clusters in the $x$ direction (blue line with circle markers) and in the $c$ direction (red line with triangle markers); on the right, plot of the marginals. Confidence levels are $\epsilon_1=0.15$ and $\epsilon_2=0.025$.

    In this experiment, we expect the behavior at equilibrium to be influenced also by the interactions with respect to the static feature variable, due to the corresponding characteristic function in the kernel (10). In Figure 6 we show the results obtained with $\epsilon_1=0.15$ and $\epsilon_2=0.025$. The top right plot shows particles and kinetic density at equilibrium. Observe that, compared to the case of a constant distribution along the static feature, the low value of $\epsilon_2$ allows for the formation of more clusters, precisely 8. Clusters arise because either their distance in the $x$ direction is larger than the confidence level $\epsilon_1$, or the minimum distance in the $c$ direction is larger than $\epsilon_2$. This is analyzed in the bottom left plot of Figure 6, where the blue line with circle markers shows the distance in $x$ between two adjacent clusters, and the red line with triangle markers the minimum distance between the static features of particles belonging to two adjacent clusters. Observe that when the distance in $x$ is lower than the value $\epsilon_1$, the corresponding distance between the static features is larger than $\epsilon_2$, and therefore further clustering is no longer possible. Moreover, we notice that the time to reach equilibrium is larger, due to the two levels of clustering. Finally, the bottom right plot of Figure 6 shows the two marginal distributions.

    In Figure 7 we show the behavior at equilibrium for two different pairs of confidence levels. In the left plot, $\epsilon_1=0.15$ and $\epsilon_2=0.1$. Here $\epsilon_2$ is large enough not to influence the clustering process, and in fact we recover 3 clusters, exactly as provided in Figure 1 when the initial kinetic distribution was constant in $c$. In the right plot, $\epsilon_1=1$ and $\epsilon_2=0.025$. In this case $\epsilon_1$ is very large, and we show the formation of clusters at equilibrium when only clustering due to the static feature is allowed.

    Figure 7. Particle and kinetic density at equilibrium with confidence levels $\epsilon_1=0.15$, $\epsilon_2=0.1$ (left) and $\epsilon_1=1$, $\epsilon_2=0.025$ (right).

    The property of equation (12) of having quantized steady–states, analytically studied in Section 3.1 and numerically investigated in Section 4.1, makes the model suitable for solving data clustering problems. In particular, in this section we use the kinetic model as a technique for shape detection.

    Shape detection can be considered a branch of pattern recognition that focuses on discovering previously unknown patterns in a large set of data. Machine learning and many clustering algorithms have already been applied to this class of problems, such as the k–means algorithm, which however suffers from the necessity of defining the number of clusters a priori. Instead, as already pointed out, models based on opinion dynamics automatically find the number of clusters as a function of the confidence value.

    More specifically, in this section we apply the models introduced in Section 2 to an example of character recognition. We consider a letter "A" composed of $n$ two-dimensional points $L=\{x_i\}_{i=1}^{n}$ defining three segments in $\Omega=[0,1]^2$. To each of these points we apply an additive noise distributed according to a uniform or a Gaussian distribution. We obtain a new set of $n$ points

    $$\tilde{x}_i=x_i+\alpha\,\theta_i,\qquad \alpha>0,\quad \theta_i\sim U(-1,1)\ \text{ or }\ \theta_i\sim N(0,1)$$

    which define the noisy pattern. Here, the value $\alpha$ represents the percentage of noise, and it is also chosen in such a way that $\tilde{x}_i\in\Omega$, $\forall\, i=1,\dots,n$. The points $\{\tilde{x}_i\}_{i=1}^{n}$ are used as initial condition of the model. They are regarded as samples from a distribution defining the noisy pattern, and the Mean Field Interaction Algorithm 1 is employed in order to solve the kinetic model.
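    A possible way to generate such noisy samples is sketched below (the array letter_points holding the clean pattern $L$ is an assumption, and the clipping that keeps samples inside $\Omega$ is our implementation choice):

        import numpy as np

        rng = np.random.default_rng(2)

        def add_noise(L, alpha, kind="uniform"):
            # L: (n, 2) points on the exact pattern, alpha: noise percentage
            if kind == "uniform":
                theta = rng.uniform(-1.0, 1.0, size=L.shape)
            else:
                theta = rng.normal(0.0, 1.0, size=L.shape)
            # keep the noisy points inside Omega = [0, 1]^2
            return np.clip(L + alpha * theta, 0.0, 1.0)

        x_noisy = add_noise(letter_points, alpha=0.10)   # e.g. 10% uniform noise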

    The goal of this example is to cluster the noisy data into a set of points which gives information on the shape of the exact pattern to be detected. This can also be considered a dimensionality reduction problem, which could help other algorithms to recognize the unknown pattern efficiently by using information on the positions of the clusters. Here the quality of the results is studied by means of the following measure:

    $$E(\epsilon_1,\epsilon_2,\alpha)=\frac{1}{\tilde{n}}\sum_{k=1}^{\tilde{n}}\min_{x\in L}\|C_k-x\|_2$$

    where $\{C_k\}_{k=1}^{\tilde{n}}$ are the positions of the clusters in $\Omega$ and $\tilde{n}$ is the number of clusters at equilibrium. The quantity $E$ measures for each cluster the minimum 2–norm distance to the exact pattern and then computes the normalized 1–norm of these quantities. The definition of the measure $E$ is inspired by the idea of computing an average of the minimum distances between the clusters and the shape $L$ when multiple clusters form close to the shape $L$, as in the cases shown here. Certainly, $E$ cannot provide a general way to measure the quality of the results, since e.g. in the case of a single steady-state cluster positioned exactly on a point of $L$ we would have $E=0$, absurdly suggesting an optimal choice.
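    In code, the measure $E$ amounts to a few lines (a sketch with our own names):

        import numpy as np

        def error_measure(clusters, L):
            # clusters: (n_tilde, 2) cluster positions C_k, L: (n, 2) points of the exact pattern
            d = np.linalg.norm(clusters[:, None, :] - L[None, :, :], axis=2)  # pairwise 2-norm distances
            return d.min(axis=1).mean()   # mean over clusters of the minimum distance to the shape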

    In the following, all the results are given for an initial set of $n=5000$ particles and a final time $T=50$, large enough to reach a steady–state. Moreover, we use the Euclidean norm, and all particles are initialized with a constant static feature. For an extension of this application to clustering also with respect to the static feature, see Remark 4 below.

    In Table 1 and Table 2 we show a sensitivity study of the error $E$ as a function of the percentage of noise $\alpha$ and of the bounded confidence value $\epsilon_1$, for initial noisy data uniformly and normally distributed, respectively. The number of clusters at equilibrium is also provided. Results with minimal error value are highlighted in bold font. In particular, we point out how an increasing percentage of noise strongly affects the results.

    Table 1. Number of clusters and errors depending on the bounded confidence value and the percentage of noise, when uniformly distributed.
              α = 5%                     α = 7.5%                    α = 10%
      ϵ1      E          ˜n       ϵ1      E          ˜n       ϵ1      E          ˜n
      0.03    1.25e-02   30       0.03    3.47e-02   51       0.06    2.64e-02   11
      0.04    4.10e-03   16       0.05    1.21e-02   14       0.07    1.48e-02   8
      0.05    4.00e-03   12       0.07    7.70e-03   8        0.08    1.12e-02   8
      0.06    4.60e-03   9        0.09    7.90e-03   8        0.09    1.63e-02   5
      0.07    5.40e-03   8        0.11    1.66e-02   3        0.10    1.63e-02   5

    Table 2. Number of clusters and errors depending on the bounded confidence value and the percentage of noise, when normally distributed.
              α = 5%                     α = 5.5%                    α = 6%
      ϵ1      E          ˜n       ϵ1      E          ˜n       ϵ1      E          ˜n
      0.05    4.44e-02   23       0.05    4.73e-02   24       0.05    6.37e-02   30
      0.06    1.36e-02   11       0.06    2.62e-02   13       0.06    4.16e-02   16
      0.065   6.40e-03   9        0.065   1.63e-02   11       0.07    2.12e-02   9
      0.07    6.70e-03   7        0.0675  7.40e-03   10       0.075   9.70e-03   7
      0.08    8.50e-03   7        0.07    8.00e-03   9        0.08    9.20e-03   7
      0.09    1.00e-02   6        0.08    9.80e-03   7        0.085   1.10e-02   5


    In Figure 8 and Figure 9 some of these cases are shown. Precisely, for the uniformly distributed initial noisy data, Figure 8, we consider the largest percentage of noise, $\alpha=10\%$, and three values of the bounded confidence level $\epsilon_1$: the smallest in Table 1, $\epsilon_1=0.06$; the optimal one, $\epsilon_1=0.08$ (giving the smallest value of the error $E$); and the largest one, $\epsilon_1=0.1$. We observe that the result is strongly influenced by the choice of the confidence bound, which determines the number of final clusters. Smaller and larger values of $\epsilon_1$ result in larger values of the error measure $E$: in one case the model provides too many clusters, which are very spread out, with several outliers; in the other case too few clusters, so that all the information on the exact shape is lost. For the optimal value of $\epsilon_1$ the model provides good agreement with the exact pattern.

    Figure 8. Shape detection of the letter "A" initialized with 10% uniform additive noise. The top left panel shows the initial condition. We show clusters obtained with bounded confidence values $\epsilon_1=0.06$ (top right), $\epsilon_1=0.08$ (bottom left) and $\epsilon_1=0.1$ (bottom right).
    Figure 9. Shape detection of the letter "A" initialized with 5.5% Gaussian additive noise. The top left panel shows the initial condition. We show clusters obtained with bounded confidence values $\epsilon_1=0.05$ (top right), $\epsilon_1=0.0675$ (bottom left) and $\epsilon_1=0.08$ (bottom right).

    Similar considerations hold for Figure 9, where we show the noisy initial data obtained with $\alpha=5.5\%$ Gaussian noise and three values of the bounded confidence level: $\epsilon_1=0.05$, $\epsilon_1=0.0675$ (giving the smallest value of the error $E$) and $\epsilon_1=0.08$.

    We point out that the goal of this application is to provide a preliminary step for shape feature extraction. This technique should be coupled with a further algorithm able to select the correct letter from a given alphabet. The dimensionality reduction provided by the data clustering approach allows the subsequent recognition analysis to be employed efficiently.

    Remark 4 (Shape detection with non–constant static feature). In the previous examples the static feature is taken constant for all the data. In the non–constant case, one could think of this feature as an additional given input which measures the quality of the information of each single point and which can be modeled by the distance to its exact value in $L$.

    We now turn to the second application of the data clustering model (12), namely the segmentation of gray-scale images. Image segmentation is widely used in medical and astronomical image processing, face recognition, etc. This technique partitions an image into sets of significant regions of pixels sharing the same characteristics, such as closeness and similar color intensity. As in shape detection, the aim of this approach is to transform the description of an image into a structure which is easier to analyze for subsequent computer vision algorithms.

    Several mathematical techniques have been applied to image segmentation. For instance, we mention level set methods [39,43], methods based on the Kuramoto model [29,37] and supervised convolutional neural networks [17,46]. A further approach to image segmentation is based on clustering, and we refer e.g. to the k–means method [44,47], the c–means method [33] and the hierarchical clustering method [45].

    Here, we study the efficiency of the kinetic model (12) in solving image segmentation problems, and therefore we still rely on the clustering technique. Thanks to the model introduced in this paper, segmentation can be performed by using two levels of clustering in order to determine regions of pixels with similar characteristics. More precisely, we cluster based on the Euclidean distance between pixels and the distance of their gray intensities. Taking into account the spatial coordinates is needed in order to avoid selecting homogeneous regions of pixels which are nevertheless distinct in the original image. The clustering with respect to the intensity of the gray colors is performed by using the static feature variable $c$.

    The application of (12) works as follows. The number of pixels is the number of particles, which are assumed to be equally spaced samples. The intensity of the gray color of each pixel is computed at the initial time and defines the one–dimensional static feature $c$ of each particle. The Mean Field Interaction Algorithm 1 is then applied to find clusters representing local regions of the original image with homogeneous color intensity. At equilibrium, we compute the mean of the color intensities of the pixels belonging to the same cluster, and these means are mapped back to the original positions in the initial image to obtain the segmentation. Another possible approach, which is more suitable for images with very sharp color intensities, such as astronomical images, is to apply a thresholding technique: the color intensity of each cluster is replaced by black if the mean is below a certain threshold, and by white otherwise. This technique therefore results in binary images.
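    The full pipeline can be sketched as follows, reusing the mean_field_interaction routine from the earlier sketch; the rounding tolerance used to group collapsed states and the optional threshold are our own choices.

        import numpy as np

        def segment(img, eps1, eps2, dt=0.5, M=10, steps=100, threshold=None):
            # img: (H, W) array of gray intensities in [0, 1]
            H, W = img.shape
            ys, xs = np.mgrid[0:H, 0:W]
            x = np.column_stack([xs.ravel() / (W - 1), ys.ravel() / (H - 1)])  # pixel centers in [0,1]^2
            c = img.ravel()[:, None]                         # static feature: gray intensity
            xf = mean_field_interaction(x, c, eps1, eps2, dt, M, steps)
            # group pixels whose states collapsed to (numerically) the same cluster point
            _, labels = np.unique(np.round(xf, 3), axis=0, return_inverse=True)
            out = np.empty(H * W)
            for k in np.unique(labels):
                mean_k = c[labels == k].mean()               # mean intensity of the cluster
                # thresholding variant: map the cluster to black/white instead of its mean
                out[labels == k] = mean_k if threshold is None else float(mean_k > threshold)
            return out.reshape(H, W)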

    In Figure 10 we first apply the segmentation to a benchmark test, in order to show how the model works on a simple example. We initialize a gray-scale image with 4096 pixels and four regions of different color intensity. The top and bottom left corners have intensity 1 and 0.75, respectively; the top and bottom right corners have intensity 0 and 0.25, respectively. See the left panel of Figure 10. The positions of the pixels are rescaled to $\Omega=[0,1]^2$. The clustering algorithm is applied with the two bounded confidence values $\epsilon_1=0.5$ and $\epsilon_2=0.3$. At equilibrium, the two red clusters shown in the middle panel of Figure 10 are computed by the kinetic model. The mean of the color intensities of the pixels in each cluster is computed and mapped back to the initial positions, yielding the segmentation in the right panel of Figure 10. The two values of the gray intensity are 0.125 and 0.875.

    Figure 10. Left panel: initial image of 4096 pixels with four regions of different gray intensity. Middle panel: red dots show the positions of the clusters at equilibrium. Right panel: segmentation of the initial image into two regions.

    We now apply the segmentation process to real images taken from different data–sets. In Figure 11 we consider an image of 174×73 pixels showing a night sky, trees and the moon. On this example, the method is able to provide good results for a wide range of confidence values, since the variation of the gray scale between the objects is sharp and the objects are characterized by regions of homogeneous color. In Figure 11 we report the result obtained with $\epsilon_1=0.2$ and $\epsilon_2=0.1$, which leads to the formation of 5 clusters.

    Figure 11. Image segmentation of a 174×73 gray-scale image taken from the data–set [6].

    The second image, in Figure 12, is also characterized by two backgrounds: the sky, which is very homogeneous, and the field, which is less homogeneous. The goal of the segmentation process is to identify the two cows, separating them from the background. We report the results obtained with $\epsilon_1=0.2$ and $\epsilon_2=0.05$, resulting in 6 clusters. Observe that $\epsilon_2$ is taken small in order to avoid the formation of many clusters due to the low homogeneity of the field. The two cows are well identified by the method.

    Figure 12. Image segmentation of a 93×93 gray-scale image taken from the data–set [24].

    In the case of Figure 13, the confidence level $\epsilon_2$ is taken larger in order to force the clustering of regions with snow but different shadows. The clustering method identifies 8 clusters, and the skier is well separated from the background.

    Figure 13. Image segmentation of a 128×94 gray-scale image taken from the data–set [32].

    In Figure 14 we employ the kinetic clustering model for the segmentation of a real image taken from [32]. The true image is 67×67 pixels, and several values of the bounded confidence levels $\epsilon_1$ and $\epsilon_2$ are applied in order to cluster with respect to positions and color intensity of pixels. The best result is obtained with $\epsilon_1=0.3$ and $\epsilon_2=0.1$, which leads to the formation of 8 clusters. Lower values of $\epsilon_1$ allow for more clusters, and thus more gray intensity levels at equilibrium. Increasing $\epsilon_1$ requires $\epsilon_2$ not to be taken large as well, otherwise too much information is lost at equilibrium.

    Figure 14. Image segmentation of a 67×67 gray-scale image taken from the data–set [32].

    Finally, in Figure 15 we present an additional experiment aimed at comparing our segmentation process with the one published in [29], based on the Kuramoto model. The image is selected from the data–set [32]. The right panel of Figure 15 is to be compared with Figure 7c in [29]; in particular, we observe that our result detects some structures of the image that are lost when using the Kuramoto model.

    Figure 15. Image segmentation of a 132×106 gray-scale image taken from the data–set [32].

    In this paper we have proposed a new method for the clustering of large data, based on a suitable generalization of the Hegselmann–Krause opinion dynamics model. The extension accounts for clustering with respect to two different sets of features of each single datum: one set describes time-dependent characteristics, the other static characteristics.

    The mean–field limit of the particle model has been formally derived, and the resulting kinetic equation allows for the study of mathematical properties. The time evolution of the moments and the asymptotic behavior of the kinetic equation have been characterized analytically. Moreover, the derivation of the mean–field model allows for a reduction of the computational complexity thanks to a suitable random subset Monte Carlo algorithm.

    Several applications to digital imaging have been proposed. In particular, we have focused on shape detection and image segmentation, showing the ability of the model to provide satisfactory results. Finally, we emphasize that the present model is intended as a starting point towards more realistic applications, for example based on the use of non-static features combined with suitable machine learning approaches.

    This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2023 Internet of Production – 390621612.

    Lorenzo Pareschi and Giuseppe Visconti acknowledge the support of the "National Group for Scientific Computation (GNCS-INDAM)", project "Numerical approximation of hyperbolic problems and applications".



    [1] G. Albi, N. Bellomo, L. Fermo, S.-Y. Ha, J. Kim, et al., Vehicular traffic, crowds, and swarms: From kinetic theory and multiscale methods to applications and research perspectives, Math. Models Methods Appl. Sci., 29 (2019), 1901-2005.
    [2] Binary interaction algorithms for the simulation of flocking and swarming dynamics, Multiscale Model. Simul., 11 (2013), 1-29.
    [3] Modeling of self-organized systems interacting with a few individuals: From microscopic to macroscopic dynamics, Appl. Math. Lett., 26 (2013), 397-401.
    [4] G. Albi, L. Pareschi, G. Toscani and M. Zanella, Recent advances in opinion modeling: Control and social influence, in Active Particles, Vol. 1, Model. Simul. Sci. Eng. Technol., Birkhäuser/Springer, Cham, 2017, 49-98.
    [5] Opinion dynamics over complex networks: Kinetic modeling and numerical methods, Kinet. Relat. Models, 10 (2017), 1-32.
    [6] Contour detection and hierarchical image segmentation, IEEE Trans. Pattern Anal. Mach. Intell., 33 (2011), 898-916.
    [7] On Krause's multi-agent consensus model with state-dependent connectivity, IEEE Trans. Automat. Control, 54 (2009), 2586-2597.
    [8] Continuous time average-preserving opinion dynamics with opinion-dependent communications, SIAM J. Control Optim., 48 (2010), 5214-5240.
    [9] Asymptotic analysis of continuous opinion dynamics models under bounded confidence, Commun. Pure Appl. Anal., 12 (2013), 1487-1499.
    [10] L. Boudin, R. Monaco and F. Salvarani, Kinetic model for multidimensional opinion formation, Phys. Rev. E, 81 (2010), 9pp.
    [11] Opinion dynamics: Kinetic modelling with mass media, application to the Scottish independence referendum, Phys. A, 444 (2016), 448-457.
    [12] A well-posedness theory in measures for some kinetic models of collective motion, Math. Models Methods Appl. Sci., 21 (2011), 519-539.
    [13] An Eulerian approach to the analysis of rendez-vous algorithms, IFAC Proceedings Volumes, 41 (2008), 9039-9044.
    [14] C. Canuto, F. Fagnani and P. Tilli, An Eulerian approach to the analysis of Krause's consensus models, SIAM J. Control Optim., 50 (2012), 243-265.
    [15] An analytical framework for consensus-based global optimization method, Math. Models Methods Appl. Sci., 28 (2018), 1037-1066.
    [16] J. A. Carrillo, M. Fornasier, G. Toscani and F. Vecil, Particle, kinetic, and hydrodynamic models of swarming, in Mathematical Modeling of Collective Behavior in Socio-Economic and Life Sciences, Model. Simul. Sci. Eng. Technol., Birkhäuser Boston, Boston, MA, 2010, 297-336.
    [17] DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs, IEEE Trans. Pattern Anal. Machine Intelligence, 40 (2018), 834-848.
    [18] Effective leadership and decision-making in animal groups on the move, Nature, 433 (2005), 513-516.
    [19] Emergent behavior in flocks, IEEE Trans. Automat. Control, 52 (2007), 852-862.
    [20] A macroscopic model for a system of swarming agents using curvature control, J. Stat. Phys., 143 (2011), 685-714.
    [21] F. Dietrich, S. Martin and M. Jungers, Transient cluster formation in generalized Hegselmann-Krause opinion dynamics, 2016 European Control Conference (ECC), Aalborg, Denmark, 2016.
    [22] Consensus formation under bounded confidence, Nonlinear Anal., 47 (2001), 4615-4622.
    [23] Boltzmann and Fokker-Planck equations modelling opinion formation in the presence of strong leaders, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 465 (2009), 3687-3708.
    [24] S. Gould, R. Fulton and D. Koller, Decomposing a scene into geometric and semantically consistent regions, 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 2009.
    [25] R. Hegselmann and U. Krause, Opinion dynamics and bounded confidence: Models, analysis and simulation, J. Artificial Societies Social Simulation, 5 (2002).
    [26] J. M. Hendrickx, Graphs and Networks for the Analysis of Autonomous Agent Systems, Ph.D. thesis, Ecole Polytechnique de Louvain, 2008.
    [27] Clustering and asymptotic behavior in opinion formation, J. Differential Equations, 257 (2014), 4165-4187.
    [28] J. Kennedy and R. Eberhart, Particle swarm optimization, Proc. Internat. Conference Neural Networks, Perth, Australia, 1995.
    [29] Color image segmentation based on modified Kuramoto model, Procedia Comp. Sci., 88 (2016), 245-258.
    [30] A stabilization theorem for dynamics of continuous opinions, Phys. A, 355 (2005), 217-223.
    [31] J. MacQueen, Some methods for classification and analysis of multivariate observations, Proc. Fifth Berkeley Sympos. Math. Statist. and Probability, Vol. I: Statistics, Univ. California Press, Berkeley, CA, 1967, 281-297.
    [32] D. Martin, C. Fowlkes, D. Tal and J. Malik, A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics, Proc. Eighth IEEE Internat. Conference Comp. Vision, Vancouver, Canada, 2001.
    [33] Kernel possibilistic fuzzy c-means clustering with local information for image segmentation, Internat. J. Fuzzy Syst., 21 (2019), 321-332.
    [34] A new model for self-organized dynamics and its flocking behavior, J. Stat. Phys., 144 (2011), 923-947.
    [35] Heterophilious dynamics enhances consensus, SIAM Review, 56 (2014), 577-621.
    [36] A. Nedić and B. Touri, Multi-dimensional Hegselmann-Krause dynamics, 2012 IEEE 51st Conference on Decision and Control (CDC), Maui, HI, 2012.
    [37] Oscillatory neural networks based on the Kuramoto model for cluster analysis, Pattern Recognition Image Anal., 24 (2014), 365-371.
    [38] G. Oliva, D. La Manna, A. Fagiolini and R. Setola, Distributed data clustering via opinion dynamics, Internat. J. Distributed Sensor Networks, 11 (2015).
    [39] Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys., 79 (1988), 12-49.
    [40] L. Pareschi and G. Toscani, Interacting Multiagent Systems: Kinetic Equations and Monte Carlo Methods, Oxford University Press, 2013.
    [41] A consensus-based model for global optimization and its mean-field limit, Math. Models Methods Appl. Sci., 27 (2017), 183-204.
    [42] Kinetic models for traffic flow resulting in a reduced space of microscopic velocities, Kinet. Relat. Models, 10 (2017), 823-854.
    [43] J. A. Sethian, Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science, Cambridge Monographs on Applied and Computational Mathematics, 3, Cambridge University Press, Cambridge, 1999.
    [44] P. Shan, Image segmentation method based on K-mean algorithm, EURASIP J. Image Video Processing, 81 (2018).
    [45] Research on image segmentation based on clustering algorithm, Internat. J. Signal Process. Image Process. Pattern Recognition, 9 (2016), 1-12.
    [46] Representative discovery of structure cues for weakly-supervised image segmentation, IEEE Transac. Multimedia, 16 (2014), 470-479.
    [47] X. Zheng, Q. Lei, R. Yao, Y. Gong and Q. Yin, Image segmentation based on adaptive K-means algorithm, EURASIP J. Image Video Processing, 68 (2018).

  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)