Research article

Unsupervised domain adaptation through transferring both the source-knowledge and target-relatedness simultaneously

  • The authors marked with "§" are co-second authors. (Yanan Zhu and Yao Cheng contributed equally to this work.)
  • Unsupervised domain adaptation (UDA) is an emerging research topic in the field of machine learning and pattern recognition, which aims to help the learning of an unlabeled target domain by transferring knowledge from the source domain. To perform UDA, a variety of methods have been proposed, most of which concentrate on the scenario of a single source and a single target domain (1S1T). However, real applications usually involve a single source domain with multiple target domains (1SmT), which cannot be handled directly by those 1S1T models. Although a few related works on 1SmT UDA have been proposed, nearly none of them model the source-domain knowledge and leverage the target-relatedness jointly. To overcome these shortcomings, we herein propose a more general 1SmT UDA model that transfers both the source-knowledge and the target-relatedness, UDA-SKTR for short. In this way, not only the supervision knowledge from the source domain but also the potential relatedness among the target domains is modeled for exploitation in the process of 1SmT UDA. In addition, we construct an alternating optimization algorithm to solve the variables of the proposed model with a convergence guarantee. Finally, through extensive experiments on both benchmark and real datasets, we validate the effectiveness and superiority of the proposed method.

    Citation: Qing Tian, Yanan Zhu, Yao Cheng, Chuang Ma, Meng Cao. Unsupervised domain adaptation through transferring both the source-knowledge and target-relatedness simultaneously[J]. Electronic Research Archive, 2023, 31(2): 1170-1194. doi: 10.3934/era.2023060




    Baló's concentric sclerosis (BCS) was first described by Marburg [1] in 1906, but did not become widely known until 1928, when the Hungarian neuropathologist Josef Baló published a report of a 23-year-old student with right hemiparesis, aphasia, and papilledema, who at autopsy had several lesions of the cerebral white matter with an unusual concentric pattern of demyelination [2]. Traditionally, BCS is often regarded as a rare variant of multiple sclerosis (MS). Clinically, BCS is most often characterized by an acute onset with steady progression to major disability and death within months, thus resembling Marburg's acute MS [3,4]. Its pathological hallmarks are oligodendrocyte loss and large demyelinated lesions characterized by an annual-ring-like alternating pattern of demyelinating and myelin-preserved regions. In [5], the authors found that tissue preconditioning might explain why Baló lesions develop a concentric pattern. According to the tissue preconditioning theory and the analogies between Baló's sclerosis and the Liesegang periodic precipitation phenomenon, Khonsari and Calvez [6] established the following chemotaxis model

    $$
    \begin{cases}
    \tilde u_\tau=\underbrace{D\Delta_X\tilde u}_{\text{diffusion of activated macrophages}}-\underbrace{\nabla_X\cdot\big(\tilde\chi\,\tilde u(\bar u-\tilde u)\nabla\tilde v\big)}_{\text{chemoattractant attracts surrounding activated macrophages}}+\underbrace{\mu\tilde u(\bar u-\tilde u)}_{\text{production of activated macrophages}},\\[1ex]
    \underbrace{-\tilde\epsilon\Delta_X\tilde v}_{\text{diffusion of chemoattractant}}=\underbrace{-\tilde\alpha\tilde v+\tilde\beta\tilde w}_{\text{degradation and production of chemoattractant}},\\[1ex]
    \tilde w_\tau=\underbrace{\kappa\dfrac{\tilde u}{\bar u+\tilde u}\,\tilde u(\bar w-\tilde w)}_{\text{destruction of oligodendrocytes}},
    \end{cases}\tag{1.1}
    $$

    where $\tilde u$, $\tilde v$ and $\tilde w$ are, respectively, the density of activated macrophages, the concentration of chemoattractants and the density of destroyed oligodendrocytes. $\bar u$ and $\bar w$ represent the characteristic densities of macrophages and oligodendrocytes, respectively.

    By numerical simulation, the authors in [6,7] indicated that model (1.1) produces heterogeneous concentric demyelination and homogeneous demyelinated plaques as the value of $\tilde\chi$ gradually increases. In addition to the chemoattractant produced by destroyed oligodendrocytes, "classically activated" M1 microglia can also release cytotoxic factors [8]. Therefore we introduce a linear production term into the second equation of model (1.1), and establish the following BCS chemotaxis model with linear production term

    $$
    \begin{cases}
    \tilde u_\tau=D\Delta_X\tilde u-\nabla_X\cdot\big(\tilde\chi\,\tilde u(\bar u-\tilde u)\nabla\tilde v\big)+\mu\tilde u(\bar u-\tilde u),\\[0.5ex]
    -\tilde\epsilon\Delta_X\tilde v+\tilde\alpha\tilde v=\tilde\beta\tilde w+\tilde\gamma\tilde u,\\[0.5ex]
    \tilde w_\tau=\kappa\dfrac{\tilde u}{\bar u+\tilde u}\,\tilde u(\bar w-\tilde w).
    \end{cases}\tag{1.2}
    $$

    Before going into details, let us simplify model (1.2) with the following scaling

    $$
    u=\frac{\tilde u}{\bar u},\quad v=\frac{\mu\bar u\tilde\epsilon}{D}\tilde v,\quad w=\frac{\tilde w}{\bar w},\quad t=\mu\bar u\,\tau,\quad x=\sqrt{\frac{\mu\bar u}{D}}\,X,\quad \chi=\frac{\tilde\chi}{\tilde\epsilon\mu},\quad \alpha=\frac{D\tilde\alpha}{\tilde\epsilon\mu\bar u},\quad \beta=\tilde\beta\bar w,\quad \gamma=\tilde\gamma\bar u,\quad \delta=\frac{\kappa}{\mu},
    $$

    then model (1.2) takes the form

    $$
    \begin{cases}
    u_t=\Delta u-\nabla\cdot\big(\chi u(1-u)\nabla v\big)+u(1-u), & x\in\Omega,\ t>0,\\[0.5ex]
    -\Delta v+\alpha v=\beta w+\gamma u, & x\in\Omega,\ t>0,\\[0.5ex]
    w_t=\delta\dfrac{u}{1+u}\,u(1-w), & x\in\Omega,\ t>0,\\[0.5ex]
    \partial_\eta u=\partial_\eta v=0, & x\in\partial\Omega,\ t>0,\\[0.5ex]
    u(x,0)=u_0(x),\quad w(x,0)=w_0(x), & x\in\Omega,
    \end{cases}\tag{1.3}
    $$

    where $\Omega\subset\mathbb{R}^n$ ($n\ge1$) is a smooth bounded domain, $\eta$ is the outward unit normal vector to $\partial\Omega$, $\partial_\eta=\partial/\partial\eta$, and $\delta$ balances the speed of the front and the intensity of the macrophages in damaging the myelin. The parameters $\chi$, $\alpha$ and $\delta$ are positive constants, while $\beta$ and $\gamma$ are nonnegative constants.
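The nondimensionalization above can be spot-checked symbolically. The following sketch (our own verification, with hypothetical symbol names) confirms that the stated substitutions turn the elliptic equation of model (1.2) into the elliptic equation of model (1.3):

```python
# Symbolic spot-check of the scaling: under the substitutions above, the
# elliptic equation of (1.2),  -eps~ v~_XX + alpha~ v~ = beta~ w~ + gamma~ u~,
# becomes the elliptic equation of (1.3),  -v_xx + alpha v = beta w + gamma u
# (shown here in one space dimension).
import sympy as sp

X = sp.symbols('X')
D, mu, ub, wb = sp.symbols('D mu ubar wbar', positive=True)            # D, mu, \bar u, \bar w
ept, alt, bet, gat = sp.symbols('epst alphat betat gammat', positive=True)  # tilde parameters
u, v, w = sp.Function('u'), sp.Function('v'), sp.Function('w')

x = sp.sqrt(mu*ub/D)*X                  # x = sqrt(mu*ubar/D) X
vt = D/(mu*ub*ept)*v(x)                 # v~ = (D/(mu*ubar*eps~)) v, i.e. v = (mu*ubar*eps~/D) v~
ut, wt = ub*u(x), wb*w(x)               # u~ = ubar*u,  w~ = wbar*w

# residual of the original (tilde) elliptic equation
lhs = -ept*sp.diff(vt, X, 2) + alt*vt - bet*wt - gat*ut

alpha = D*alt/(ept*mu*ub)               # scaled parameters of (1.3)
beta, gamma = bet*wb, gat*ub
v_xx = sp.diff(v(x), X, 2)*D/(mu*ub)    # chain rule: d^2/dX^2 = (mu*ubar/D) d^2/dx^2
rhs = -v_xx + alpha*v(x) - beta*w(x) - gamma*u(x)   # residual of the scaled equation

print(sp.simplify(lhs - rhs))           # 0: the two residuals coincide
```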

    If $\delta=0$, then model (1.3) is a parabolic-elliptic chemotaxis system with volume-filling effect and logistic source. In order to be more in line with biologically realistic mechanisms, Hillen and Painter [9,10] considered the finite size of individual cells ("volume-filling") and derived volume-filling models

    $$
    \begin{cases}
    u_t=\nabla\cdot\Big(D_u\big(q(u)-q'(u)u\big)\nabla u-q(u)u\,\chi(v)\nabla v\Big)+f(u,v),\\[0.5ex]
    v_t=D_v\Delta v+g(u,v).
    \end{cases}\tag{1.4}
    $$

    Here $q(u)$ is the probability of the cell finding space at its neighbouring location. It is also called the squeezing probability, which reflects the elastic properties of cells. For the linear choice $q(u)=1-u$, global existence of solutions to model (1.4) in any space dimension is investigated in [9]. Wang and Thomas [11] established the global existence of classical solutions and gave necessary and sufficient conditions for spatial pattern formation of a generalized volume-filling chemotaxis model. For a chemotaxis system with generalized volume-filling effect and logistic source, the global boundedness and finite-time blow-up of solutions are obtained in [12]. Furthermore, the pattern formation of volume-filling chemotaxis systems with logistic source and both linear and nonlinear diffusion is shown in [13,14,15] by weakly nonlinear analysis. For the parabolic-elliptic Keller-Segel volume-filling chemotaxis model with linear squeezing probability, the asymptotic behavior of solutions is studied both in the whole space $\mathbb{R}^n$ [16] and on bounded domains [17]. Moreover, the boundedness and singularity formation in the parabolic-elliptic Keller-Segel volume-filling chemotaxis model with nonlinear squeezing probability are discussed in [18,19].

    Very recently, we [20] investigated the uniform boundedness and global asymptotic stability for the following chemotaxis model of multiple sclerosis

    $$
    \begin{cases}
    u_t=\Delta u-\nabla\cdot\big(\chi(u)\nabla v\big)+u(1-u),\quad \chi(u)=\dfrac{\chi u}{1+u}, & x\in\Omega,\ t>0,\\[0.5ex]
    \tau v_t=\Delta v-\beta v+\alpha w+\gamma u, & x\in\Omega,\ t>0,\\[0.5ex]
    w_t=\delta\dfrac{u}{1+u}\,u(1-w), & x\in\Omega,\ t>0,
    \end{cases}
    $$

    subject to the homogeneous Neumann boundary conditions.

    In this paper, we are first devoted to studying the local existence and uniform boundedness of the unique classical solution to system (1.3) by using Neumann heat semigroup arguments, the Banach fixed point theorem, the parabolic Schauder estimate and elliptic regularity theory. Then we discuss the exponential asymptotic stability of the positive equilibrium point of system (1.3) by constructing a Lyapunov function.

    Although, in the pathological mechanism of BCS, the initial data in model (1.3) satisfy $0<u_0(x)\le1$ and $w_0(x)=0$, we mathematically assume that

    $$
    \begin{cases}
    u_0(x)\in C^0(\bar\Omega)\ \text{with}\ 0\le u_0(x)\le1,\ u_0(x)\not\equiv0\ \text{in}\ \Omega,\\[0.5ex]
    w_0(x)\in C^{2+\nu}(\bar\Omega)\ \text{with}\ 0<\nu<1\ \text{and}\ 0\le w_0(x)\le1\ \text{in}\ \Omega.
    \end{cases}\tag{1.5}
    $$

    This is because condition (1.5) implies $u(x,t_0)>0$ for any $t_0>0$ by the strong maximum principle.

    The following theorems give the main results of this paper.

    Theorem 1.1. Assume that the initial data (u0(x),w0(x)) satisfy the condition (1.5). Then model (1.3) possesses a unique global solution (u(x,t),v(x,t),w(x,t)) satisfying

    $$
    u(x,t)\in C^0(\bar\Omega\times[0,\infty))\cap C^{2,1}(\bar\Omega\times(0,\infty)),\quad v(x,t)\in C^0\big((0,\infty),C^2(\bar\Omega)\big),\quad w(x,t)\in C^{2,1}(\bar\Omega\times[0,\infty)),\tag{1.6}
    $$

    and

    $$
    0<u(x,t)\le1,\quad 0\le v(x,t)\le\frac{\beta+\gamma}{\alpha},\quad w_0(x)\le w(x,t)\le1,\quad\text{in }\bar\Omega\times(0,\infty).
    $$

    Moreover, there exist $\nu\in(0,1)$ and $M>0$ such that

    $$
    \|u\|_{C^{2+\nu,1+\nu/2}(\bar\Omega\times[1,\infty))}+\|v\|_{C^0([1,\infty),C^{2+\nu}(\bar\Omega))}+\|w\|_{C^{\nu,1+\nu/2}(\bar\Omega\times[1,\infty))}\le M.\tag{1.7}
    $$

    Theorem 1.2. Assume that $\beta\ge0$, $\gamma\ge0$, $\beta+\gamma>0$ and

    $$
    \chi<\begin{cases}
    \min\Big\{\dfrac{2\sqrt{2\alpha}}{\beta},\dfrac{2\sqrt{2\alpha}}{\gamma}\Big\}, & \beta>0,\ \gamma>0,\\[1ex]
    \dfrac{2\sqrt{2\alpha}}{\beta}, & \beta>0,\ \gamma=0,\\[1ex]
    \dfrac{2\sqrt{2\alpha}}{\gamma}, & \beta=0,\ \gamma>0.
    \end{cases}\tag{1.8}
    $$

    Let (u,v,w) be a positive classical solution of the problem (1.3), (1.5). Then

    $$
    \|u(\cdot,t)-u^*\|_{L^\infty(\Omega)}+\|v(\cdot,t)-v^*\|_{L^\infty(\Omega)}+\|w(\cdot,t)-w^*\|_{L^\infty(\Omega)}\to0,\quad\text{as }t\to\infty.\tag{1.9}
    $$

    Furthermore, there exist positive constants λ=λ(χ,α,γ,δ,n) and C=C(|Ω|,χ,α,β,γ,δ) such that

    $$
    \|u-u^*\|_{L^\infty(\Omega)}\le Ce^{-\lambda t},\quad \|v-v^*\|_{L^\infty(\Omega)}\le Ce^{-\lambda t},\quad \|w-w^*\|_{L^\infty(\Omega)}\le Ce^{-\lambda t},\quad t>0,\tag{1.10}
    $$

    where $(u^*,v^*,w^*)=\big(1,\frac{\beta+\gamma}{\alpha},1\big)$ is the unique positive equilibrium point of model (1.3).

    The paper is organized as follows. In Section 2, we prove the local existence, boundedness and global existence of a unique classical solution. In Section 3, we first establish the uniform convergence of the positive global classical solution, and then discuss the exponential asymptotic stability of the positive equilibrium point in the case of weak chemotactic sensitivity. The paper ends with brief concluding remarks.

    The aim of this section is to develop the existence and boundedness of a global classical solution by employing Neumann heat semigroup arguments, Banach fixed point theorem, parabolic Schauder estimate and elliptic regularity theory.

    Proof of Theorem 1.1. (i) Existence. For $p\in(1,\infty)$, let $A$ denote the sectorial operator defined by

    $$
    Au:=-\Delta u\quad\text{for }u\in D(A):=\big\{\varphi\in W^{2,p}(\Omega)\ \big|\ \partial_\eta\varphi\big|_{\partial\Omega}=0\big\}.
    $$

    Let $\lambda_1>0$ denote the first nonzero eigenvalue of $-\Delta$ in $\Omega$ with zero-flux boundary condition. Let $A_1=-\Delta+\alpha$ and $X^l$ be the domains of the fractional power operators $A_1^l$, $l\ge0$. From Theorem 1.6.1 in [21], we know that for any $p>n$ and $l\in\big(\frac{n}{2p},\frac12\big)$,

    $$
    \|z\|_{L^\infty(\Omega)}\le C\|A_1^lz\|_{L^p(\Omega)}\quad\text{for all }z\in X^l.\tag{2.1}
    $$

    We introduce the closed subset

    $$
    S:=\big\{u\in X\ \big|\ \|u\|_{L^\infty((0,T);L^\infty(\Omega))}\le R+1\big\}
    $$

    in the space $X:=C^0\big([0,T];C^0(\bar\Omega)\big)$, where $R$ is any positive number satisfying

    $$
    \|u_0(x)\|_{L^\infty(\Omega)}\le R
    $$

    and $T>0$ will be specified later. Setting $F(u)=\frac{u}{1+u}$, we consider an auxiliary problem with $F(u)$ replaced by its extension $\tilde F(u)$ defined by

    $$
    \tilde F(u)=\begin{cases}F(u), & u\ge0,\\ F(-u), & u<0.\end{cases}
    $$

    Notice that $\tilde F(u)$ is a globally Lipschitz function. Given $\hat u\in S$, we define $\Psi\hat u=u$ by first writing

    $$
    w(x,t)=(w_0(x)-1)\,e^{-\delta\int_0^t\tilde F(\hat u)\hat u\,ds}+1,\quad x\in\Omega,\ t>0,\tag{2.2}
    $$

    and

    $$
    w_0(x)\le w(x,t)\le1,\quad x\in\Omega,\ t>0,
    $$
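As a quick sanity check (our illustration, with sample parameter values, not part of the proof): for a constant $\hat u=c\ge0$, the representation (2.2) solves the $w$-equation exactly and stays between $w_0$ and $1$:

```python
# For constant hat u = c >= 0, formula (2.2) reads
#   w(t) = (w0 - 1) exp(-delta * F(c) * c * t) + 1,  F(c) = c/(1+c),
# which solves w' = delta * F(c) * c * (1 - w) and satisfies w0 <= w <= 1.
import sympy as sp

t = sp.symbols('t', nonnegative=True)
delta, c, w0 = sp.Rational(1, 2), sp.Rational(1, 3), sp.Rational(1, 4)  # sample values
F = c/(1 + c)                                   # F(u) = u/(1+u) at u = c

w = (w0 - 1)*sp.exp(-delta*F*c*t) + 1           # formula (2.2) with hat u = c
residual = sp.diff(w, t) - delta*F*c*(1 - w)    # ODE residual, should vanish
print(sp.simplify(residual))                    # 0

# 0 < exp(...) <= 1 forces w0 <= w(t) <= 1 for all t >= 0
assert all(float(w0) <= float(w.subs(t, s)) <= 1 for s in [0, 1, 5, 50])
```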

    then letting v solve

    $$
    \begin{cases}
    -\Delta v+\alpha v=\beta w+\gamma\hat u, & x\in\Omega,\ t\in(0,T),\\[0.5ex]
    \partial_\eta v=0, & x\in\partial\Omega,\ t\in(0,T),
    \end{cases}\tag{2.3}
    $$

    and finally taking u to be the solution of the linear parabolic problem

    $$
    \begin{cases}
    u_t=\Delta u-\chi\nabla\cdot\big(\hat u(1-\hat u)\nabla v\big)+\hat u(1-\hat u), & x\in\Omega,\ t\in(0,T),\\[0.5ex]
    \partial_\eta u=0, & x\in\partial\Omega,\ t\in(0,T),\\[0.5ex]
    u(x,0)=u_0(x), & x\in\Omega.
    \end{cases}
    $$

    Applying the Agmon-Douglas-Nirenberg theorem [22,23] to problem (2.3), there exists a constant $C$ such that

    $$
    \|v\|_{W^2_p(\Omega)}\le C\big(\beta\|w\|_{L^p(\Omega)}+\gamma\|\hat u\|_{L^p(\Omega)}\big)\le C\big(\beta|\Omega|^{\frac1p}+\gamma(R+1)\big)\tag{2.4}
    $$

    for all $t\in(0,T)$. From a variation-of-constants formula, we define

    $$
    \Psi(\hat u)=e^{t\Delta}u_0-\chi\int_0^te^{(t-s)\Delta}\nabla\cdot\big(\hat u(1-\hat u)\nabla v(s)\big)\,ds+\int_0^te^{(t-s)\Delta}\hat u(s)\big(1-\hat u(s)\big)\,ds.
    $$

    First we shall show that for T small enough

    $$
    \|\Psi(\hat u)\|_{L^\infty((0,T);L^\infty(\Omega))}\le R+1
    $$

    for any $\hat u\in S$. From the maximum principle, we obtain

    $$
    \|e^{t\Delta}u_0\|_{L^\infty(\Omega)}\le\|u_0\|_{L^\infty(\Omega)},\tag{2.5}
    $$

    and

    $$
    \int_0^t\big\|e^{(t-s)\Delta}\hat u(s)(1-\hat u(s))\big\|_{L^\infty(\Omega)}\,ds\le\int_0^t\big\|\hat u(s)(1-\hat u(s))\big\|_{L^\infty(\Omega)}\,ds\le(R+1)(R+2)T\tag{2.6}
    $$

    for all $t\in(0,T)$. We use inequalities (2.1) and (2.4) to estimate

    $$
    \begin{aligned}
    \chi\int_0^t\big\|e^{(t-s)\Delta}\nabla\cdot\big(\hat u(1-\hat u)\nabla v(s)\big)\big\|_{L^\infty(\Omega)}\,ds
    &\le C\chi\int_0^t(t-s)^{-l}\big\|e^{\frac{t-s}{2}\Delta}\nabla\cdot\big(\hat u(1-\hat u)\nabla v(s)\big)\big\|_{L^p(\Omega)}\,ds\\
    &\le C\chi\int_0^t(t-s)^{-l-\frac12}\big\|\hat u(1-\hat u)\nabla v(s)\big\|_{L^p(\Omega)}\,ds\\
    &\le C\chi T^{\frac12-l}(R+1)(R+2)\big(\beta|\Omega|^{\frac1p}+\gamma(R+1)\big)
    \end{aligned}\tag{2.7}
    $$

    for all $t\in(0,T)$. This estimate relies on $T<1$ and the inequality of [24, Lemma 1.3 iv],

    $$
    \|e^{t\Delta}\nabla\cdot z\|_{L^p(\Omega)}\le C_1\big(1+t^{-\frac12}\big)e^{-\lambda_1t}\|z\|_{L^p(\Omega)}\quad\text{for all }z\in C^\infty_c(\Omega).
    $$

    From inequalities (2.5), (2.6) and (2.7) we can deduce that Ψ maps S into itself for T small enough.

    Next we prove that the map $\Psi$ is a contraction on $S$. For $\hat u_1,\hat u_2\in S$, we estimate

    $$
    \begin{aligned}
    \|\Psi(\hat u_1)-\Psi(\hat u_2)\|_{L^\infty(\Omega)}
    \le{}&\chi\int_0^t(t-s)^{-l-\frac12}\big\|\big[\hat u_2(s)(1-\hat u_2(s))-\hat u_1(s)(1-\hat u_1(s))\big]\nabla v_2(s)\big\|_{L^p(\Omega)}\,ds\\
    &+\chi\int_0^t\big\|\hat u_1(s)(1-\hat u_1(s))\nabla\big(v_1(s)-v_2(s)\big)\big\|_{L^p(\Omega)}\,ds\\
    &+\int_0^t\big\|e^{(t-s)\Delta}\big[\hat u_1(s)(1-\hat u_1(s))-\hat u_2(s)(1-\hat u_2(s))\big]\big\|_{L^\infty(\Omega)}\,ds\\
    \le{}&\chi\int_0^t(t-s)^{-l-\frac12}(2R+1)\|\hat u_1(s)-\hat u_2(s)\|_X\|\nabla v_2(s)\|_{L^p(\Omega)}\,ds\\
    &+\chi\int_0^t(R+1)(R+2)\big(\beta\|w_1(s)-w_2(s)\|_{L^p(\Omega)}+\gamma\|\hat u_1(s)-\hat u_2(s)\|_{L^p(\Omega)}\big)\,ds\\
    &+\int_0^t(2R+1)\|\hat u_1(s)-\hat u_2(s)\|_X\,ds\\
    \le{}&\Big(C\chi T^{\frac12-l}(2R+1)\big(\beta|\Omega|^{\frac1p}+\gamma(R+1)\big)+2\beta\delta\chi T(R^2+3R+\gamma+2)+T(2R+1)\Big)\|\hat u_1-\hat u_2\|_X.
    \end{aligned}
    $$

    Fixing $T\in(0,1)$ small enough such that

    $$
    C\chi T^{\frac12-l}(2R+1)\big(\beta|\Omega|^{\frac1p}+\gamma(R+1)\big)+2\beta\delta\chi T(R^2+3R+\gamma+2)+T(2R+1)\le\frac12.
    $$

    It follows from the Banach fixed point theorem that there exists a unique fixed point of Ψ.

    (ii) Regularity. Since the choice of $T$ depends only on $\|u_0\|_{L^\infty(\Omega)}$ and $\|w_0\|_{L^\infty(\Omega)}$, it is clear that $(u,v,w)$ can be extended up to some maximal $T_{\max}\in(0,\infty]$. Let $Q_T=\Omega\times(0,T]$ for all $T\in(0,T_{\max})$. From $u\in C^0(\bar Q_T)$, we know that $w\in C^{0,1}(\bar Q_T)$ by the expression (2.2) and $v\in C^0\big([0,T],W^2_p(\Omega)\big)$ by the Agmon-Douglas-Nirenberg theorem [22,23]. From the parabolic $L^p$-estimate and the embedding relation $W^1_p(\Omega)\hookrightarrow C^\nu(\bar\Omega)$, $p>n$, we can get $u\in W^{2,1}_p(Q_T)$. By applying the following embedding relation

    $$
    W^{2,1}_p(Q_T)\hookrightarrow C^{\nu,\nu/2}(\bar Q_T),\quad p>\frac{n+2}{2},\tag{2.8}
    $$

    we can derive $u(x,t)\in C^{\nu,\nu/2}(\bar Q_T)$ with $0<\nu\le2-\frac{n+2}{p}$. The conclusion $w\in C^{\nu,1+\nu/2}(\bar Q_T)$ can be obtained by substituting $u\in C^{\nu,\nu/2}(\bar Q_T)$ into the formula (2.2). The regularity $u\in C^{2+\nu,1+\nu/2}(\bar Q_T)$ can be deduced by a further bootstrap argument and the parabolic Schauder estimate. Similarly, we can get $v\in C^0\big((0,T),C^{2+\nu}(\bar\Omega)\big)$ by using the Agmon-Douglas-Nirenberg theorem [22,23]. From the regularity of $u$ we have $w\in C^{2+\nu,1+\nu/2}(\bar Q_T)$.

    Moreover, the maximum principle entails that $0<u(x,t)\le1$ and $0\le v(x,t)\le\frac{\beta+\gamma}{\alpha}$. It follows from the positivity of $u$ that $\tilde F(u)=F(u)$, and by the uniqueness of the solution we infer the existence of the solution to the original problem.

    (iii) Uniqueness. Suppose $(u_1,v_1,w_1)$ and $(u_2,v_2,w_2)$ are two different solutions of model (1.3) in $\Omega\times[0,T]$. Let $U=u_1-u_2$, $V=v_1-v_2$, $W=w_1-w_2$ for $t\in(0,T)$. Then

    $$
    \begin{aligned}
    \frac12\frac{d}{dt}\int_\Omega U^2dx+\int_\Omega|\nabla U|^2dx
    \le{}&\chi\int_\Omega\Big(\big|u_1(1-u_1)-u_2(1-u_2)\big||\nabla v_1||\nabla U|+u_2(1-u_2)|\nabla V||\nabla U|\Big)dx\\
    &+\int_\Omega\big|u_1(1-u_1)-u_2(1-u_2)\big||U|\,dx\\
    \le{}&\chi\int_\Omega\Big(|U||\nabla v_1||\nabla U|+\frac14|\nabla V||\nabla U|\Big)dx+\int_\Omega|U|^2dx\\
    \le{}&\int_\Omega|\nabla U|^2dx+\frac{\chi^2}{32}\int_\Omega|\nabla V|^2dx+\frac{\chi^2K^2+2}{2}\int_\Omega|U|^2dx,
    \end{aligned}\tag{2.9}
    $$

    where we have used that $|\nabla v_1|\le K$, which results from $v_1\in C^0\big([0,T],C^1(\bar\Omega)\big)$.

    Similarly, by Young's inequality and $w_0\le w_1\le1$, we can estimate

    $$
    \int_\Omega|\nabla V|^2dx+\frac{\alpha}{2}\int_\Omega|V|^2dx\le\frac{\beta^2}{\alpha}\int_\Omega|W|^2dx+\frac{\gamma^2}{\alpha}\int_\Omega|U|^2dx,\tag{2.10}
    $$

    and

    $$
    \frac{d}{dt}\int_\Omega W^2dx\le\delta\int_\Omega\big(|U|^2+|W|^2\big)dx.\tag{2.11}
    $$

    Finally, adding the inequalities (2.9)-(2.11) yields

    $$
    \frac{d}{dt}\Big(\int_\Omega U^2dx+\int_\Omega W^2dx\Big)\le C\Big(\int_\Omega U^2dx+\int_\Omega W^2dx\Big)\quad\text{for all }t\in(0,T).
    $$

    The results $U\equiv0$, $W\equiv0$ in $\Omega\times(0,T)$ are obtained by Gronwall's lemma. From inequality (2.10), we have $V\equiv0$. Hence $(u_1,v_1,w_1)=(u_2,v_2,w_2)$ in $\Omega\times(0,T)$.

    (ⅳ) Uniform estimates. We use the Agmon-Douglas-Nirenberg Theorem [22,23] for the second equation of the model (1.3) to get

    $$
    \|v\|_{C^0([t,t+1],W^2_p(\Omega))}\le C\big(\|u\|_{L^p(\Omega\times[t,t+1])}+\|w\|_{L^p(\Omega\times[t,t+1])}\big)\le C_2\tag{2.12}
    $$

    for all $t\ge1$, where $C_2$ is independent of $t$. From the embedding relation $W^1_p(\Omega)\hookrightarrow C^0(\bar\Omega)$, $p>n$, the parabolic $L^p$-estimate and the estimate (2.12), we have

    $$
    \|u\|_{W^{2,1}_p(\Omega\times[t,t+1])}\le C_3
    $$

    for all $t\ge1$. The estimate $\|u\|_{C^{\nu,\nu/2}(\bar\Omega\times[t,t+1])}\le C_4$ for all $t\ge1$ is obtained from the embedding relation (2.8). We can immediately compute $\|w\|_{C^{\nu,1+\nu/2}(\bar\Omega\times[t,t+1])}\le C_5$ for all $t\ge1$ according to the regularity of $u$ and the explicit expression of $w$. Further, a bootstrap argument leads to $\|v\|_{C^0([t,t+1],C^{2+\nu}(\bar\Omega))}\le C_6$ and $\|u\|_{C^{2+\nu,1+\nu/2}(\bar\Omega\times[t,t+1])}\le C_7$ for all $t\ge1$. Thus the uniform estimate (1.7) is proved.

    Remark 2.1. Assume the initial data 0<u0(x)1 and w0(x)=0. Then the BCS model (1.3) has a unique classical solution.

    In this section we investigate the global asymptotic stability of the unique positive equilibrium point $\big(1,\frac{\beta+\gamma}{\alpha},1\big)$ of model (1.3). To this end, we first introduce the following auxiliary problem

    $$
    \begin{cases}
    u_{\epsilon t}=\Delta u_\epsilon-\nabla\cdot\big(\chi u_\epsilon(1-u_\epsilon)\nabla v_\epsilon\big)+u_\epsilon(1-u_\epsilon), & x\in\Omega,\ t>0,\\[0.5ex]
    -\Delta v_\epsilon+\alpha v_\epsilon=\beta w_\epsilon+\gamma u_\epsilon, & x\in\Omega,\ t>0,\\[0.5ex]
    w_{\epsilon t}=\delta\dfrac{u_\epsilon^2+\epsilon}{1+u_\epsilon}(1-w_\epsilon), & x\in\Omega,\ t>0,\\[0.5ex]
    \partial_\eta u_\epsilon=\partial_\eta v_\epsilon=0, & x\in\partial\Omega,\ t>0,\\[0.5ex]
    u_\epsilon(x,0)=u_0(x),\quad w_\epsilon(x,0)=w_0(x), & x\in\Omega.
    \end{cases}\tag{3.1}
    $$

    By a proof similar to that of Theorem 1.1, we get that problem (3.1) has a unique global classical solution $(u_\epsilon,v_\epsilon,w_\epsilon)$, and there exist $\nu\in(0,1)$ and $M_1>0$, independent of $\epsilon$, such that

    $$
    \|u_\epsilon\|_{C^{2+\nu,1+\nu/2}(\bar\Omega\times[1,\infty))}+\|v_\epsilon\|_{C^{2+\nu,1+\nu/2}(\bar\Omega\times[1,\infty))}+\|w_\epsilon\|_{C^{\nu,1+\nu/2}(\bar\Omega\times[1,\infty))}\le M_1.\tag{3.2}
    $$

    Then, motivated by some ideas from [25,26], we construct a Lyapunov function to study the uniform convergence to the homogeneous steady state for problem (3.1).

    Let us give the following lemma, which is used in the proof of Lemma 3.2.

    Lemma 3.1. Suppose that a nonnegative function $f$ on $(1,\infty)$ is uniformly continuous and $\int_1^\infty f(t)\,dt<\infty$. Then $f(t)\to0$ as $t\to\infty$.
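The uniform-continuity hypothesis in Lemma 3.1 cannot be dropped; a standard counterexample (our illustration, not from the original text) uses ever-thinner spikes:

```latex
% Take f(t) = \sum_{k\ge 2} \max\bigl(0,\,1 - k^2|t-k|\bigr), a triangular spike of
% height 1 and width 2/k^2 at each integer k. Then
\int_1^\infty f(t)\,dt \;\le\; \sum_{k=2}^\infty \frac{1}{k^2} \;<\;\infty,
\qquad\text{yet } f(k)=1 \text{ for every integer } k\ge 2,
\text{ so } f(t)\not\to 0 \text{ as } t\to\infty .
```

Uniform continuity forbids such thinning spikes, which is exactly what the proof of Lemma 3.2 exploits via the bound (3.2).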

    Lemma 3.2. Assume that the condition (1.8) is satisfied. Then

    $$
    \|u_\epsilon(\cdot,t)-1\|_{L^2(\Omega)}+\|v_\epsilon(\cdot,t)-v^*\|_{L^2(\Omega)}+\|w_\epsilon(\cdot,t)-1\|_{L^2(\Omega)}\to0,\quad t\to\infty,\tag{3.3}
    $$

    where $v^*=\frac{\beta+\gamma}{\alpha}$.

    Proof. We construct a positive function

    $$
    E(t):=\int_\Omega\big(u_\epsilon-1-\ln u_\epsilon\big)\,dx+\frac{1}{2\delta\epsilon}\int_\Omega(w_\epsilon-1)^2\,dx,\quad t>0.
    $$

    From the problem (3.1) and Young's inequality, we can compute

    $$
    \frac{d}{dt}E(t)\le\frac{\chi^2}{4}\int_\Omega|\nabla v_\epsilon|^2dx-\int_\Omega(u_\epsilon-1)^2dx-\int_\Omega(w_\epsilon-1)^2dx,\quad t>0.\tag{3.4}
    $$

    We multiply the second equation in system (3.1) by $v_\epsilon-v^*$, integrate by parts over $\Omega$ and use Young's inequality to obtain

    $$
    \int_\Omega|\nabla v_\epsilon|^2dx\le\frac{\gamma^2}{2\alpha}\int_\Omega(u_\epsilon-1)^2dx+\frac{\beta^2}{2\alpha}\int_\Omega(w_\epsilon-1)^2dx,\quad t>0,\tag{3.5}
    $$

    and

    $$
    \int_\Omega(v_\epsilon-v^*)^2dx\le\frac{2\gamma^2}{\alpha^2}\int_\Omega(u_\epsilon-1)^2dx+\frac{2\beta^2}{\alpha^2}\int_\Omega(w_\epsilon-1)^2dx,\quad t>0.\tag{3.6}
    $$

    Substituting inequality (3.5) into inequality (3.4) gives

    $$
    \frac{d}{dt}E(t)\le-C_8\Big(\int_\Omega(u_\epsilon-1)^2dx+\int_\Omega(w_\epsilon-1)^2dx\Big),\quad t>0,
    $$

    where $C_8=\min\big\{1-\frac{\chi^2\beta^2}{8\alpha},\,1-\frac{\chi^2\gamma^2}{8\alpha}\big\}>0$.
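The positivity of $C_8$ is exactly what the smallness condition (1.8) buys. A quick symbolic/numeric spot-check (our illustration, with sample parameter values):

```python
# C8 = min{1 - chi^2 beta^2/(8 alpha), 1 - chi^2 gamma^2/(8 alpha)} is positive
# precisely when chi lies below the threshold in (1.8): the critical value
# chi = 2*sqrt(2*alpha)/beta zeroes the first term, and any smaller chi keeps
# both terms positive.
import sympy as sp

chi, alpha, beta, gamma = sp.symbols('chi alpha beta gamma', positive=True)
C8 = sp.Min(1 - chi**2*beta**2/(8*alpha), 1 - chi**2*gamma**2/(8*alpha))

# At the critical value the first term of C8 vanishes identically
crit = 2*sp.sqrt(2*alpha)/beta
print(sp.simplify((1 - chi**2*beta**2/(8*alpha)).subs(chi, crit)))   # 0

# A numeric instance strictly below the threshold of (1.8)
vals = {alpha: 1, beta: 1, gamma: 2}
threshold = min(float((2*sp.sqrt(2*alpha)/beta).subs(vals)),
                float((2*sp.sqrt(2*alpha)/gamma).subs(vals)))
print(float(C8.subs(vals).subs(chi, sp.Float(0.9*threshold))))       # positive
```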

    Let $f(t):=\int_\Omega\big[(u_\epsilon-1)^2+(w_\epsilon-1)^2\big]dx$. Then

    $$
    \int_1^tf(s)\,ds\le\frac{E(1)}{C_8}<\infty,\quad t>1.
    $$

    It follows from the uniform estimate (3.2) and the Arzelà-Ascoli theorem that $f(t)$ is uniformly continuous on $(1,\infty)$. Applying Lemma 3.1, we have

    $$
    \int_\Omega\big[(u_\epsilon(\cdot,t)-1)^2+(w_\epsilon(\cdot,t)-1)^2\big]dx\to0,\quad t\to\infty.\tag{3.7}
    $$

    Combining inequality (3.6) and the limit (3.7) yields

    $$
    \int_\Omega(v_\epsilon(\cdot,t)-v^*)^2dx\to0,\quad t\to\infty.
    $$

    Proof of Theorem 1.2. As is well known, each bounded sequence in $C^{2+\nu,1+\nu/2}(\bar\Omega\times[1,\infty))$ is precompact in $C^{2,1}(\bar\Omega\times[1,\infty))$. Hence there exists a subsequence $\{u_{\epsilon_n}\}_{n=1}^\infty$, with $\epsilon_n\to0$ as $n\to\infty$, such that

    $$
    \lim_{n\to\infty}\|u_{\epsilon_n}-u\|_{C^{2,1}(\bar\Omega\times[1,\infty))}=0.
    $$

    Similarly, we can get

    $$
    \lim_{n\to\infty}\|v_{\epsilon_n}-v\|_{C^{2}(\bar\Omega)}=0,
    $$

    and

    $$
    \lim_{n\to\infty}\|w_{\epsilon_n}-w\|_{C^{0,1}(\bar\Omega\times[1,\infty))}=0.
    $$

    Combining the above limiting relations yields that $(u,v,w)$ satisfies model (1.3); by the uniqueness of the classical solution of model (1.3), this limit coincides with the solution of Theorem 1.1. Furthermore, according to this conclusion, the strong convergence (3.3) and a diagonal argument, we can deduce

    $$
    \|u(\cdot,t)-1\|_{L^2(\Omega)}+\|v(\cdot,t)-v^*\|_{L^2(\Omega)}+\|w(\cdot,t)-1\|_{L^2(\Omega)}\to0,\quad t\to\infty.\tag{3.8}
    $$

    By applying the Gagliardo-Nirenberg inequality

    $$
    \|z\|_{L^\infty(\Omega)}\le C\|z\|_{L^2(\Omega)}^{2/(n+2)}\|z\|_{W^{1,\infty}(\Omega)}^{n/(n+2)},\quad \forall z\in W^{1,\infty}(\Omega),\tag{3.9}
    $$

    the comparison principle for ODEs and the convergence (3.8), the uniform convergence (1.9) is obtained immediately.

    Since $\lim_{t\to\infty}\|u(\cdot,t)-1\|_{L^\infty(\Omega)}=0$, there exists $t_1>0$ such that

    $$
    u(x,t)\ge\frac12\quad\text{for all }x\in\Omega,\ t>t_1.\tag{3.10}
    $$

    Using the explicit representation formula of w

    $$
    w(x,t)=(w_0(x)-1)\,e^{-\delta\int_0^tF(u)u\,ds}+1,\quad x\in\Omega,\ t>0,
    $$

    and the inequality (3.10), we have

    $$
    \|w(\cdot,t)-1\|_{L^\infty(\Omega)}\le e^{-\frac{\delta}{6}(t-t_1)},\quad t>t_1.\tag{3.11}
    $$
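The rate $\delta/6$ in (3.11) comes from bounding the integrand in the exponent: for $u\ge\frac12$, the function $F(u)u=\frac{u^2}{1+u}$ is increasing, so its minimum over $[\frac12,1]$ is $\frac{(1/2)^2}{1+1/2}=\frac16$, attained at $u=\frac12$. A quick numeric confirmation (our illustration):

```python
# min of g(u) = u^2/(1+u) over [1/2, 1] equals 1/6, attained at u = 1/2;
# hence the exponent in the representation of w is at most -delta (t - t1)/6.
g = lambda u: u**2/(1 + u)

us = [0.5 + 0.5*k/10000 for k in range(10001)]   # grid on [1/2, 1]
m = min(g(u) for u in us)
print(m)   # approximately 1/6 = 0.1666..., attained at u = 0.5
```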

    Multiplying the first two equations in model (1.3) by $u-1$ and $v-v^*$, respectively, integrating over $\Omega$ and applying Cauchy's inequality, Young's inequality and the inequality (3.10), we find

    $$
    \frac{d}{dt}\int_\Omega(u-1)^2dx\le\frac{\chi^2}{32}\int_\Omega|\nabla v|^2dx-\int_\Omega(u-1)^2dx,\quad t>t_1,\tag{3.12}
    $$
    $$
    \int_\Omega|\nabla v|^2dx+\frac{\alpha}{2}\int_\Omega(v-v^*)^2dx\le\frac{\beta^2}{\alpha}\int_\Omega(w-1)^2dx+\frac{\gamma^2}{\alpha}\int_\Omega(u-1)^2dx,\quad t>0.\tag{3.13}
    $$

    Combining the estimates (3.11)-(3.13) leads us to the estimate

    $$
    \frac{d}{dt}\int_\Omega(u-1)^2dx\le\Big(\frac{\chi^2\gamma^2}{32\alpha}-1\Big)\int_\Omega(u-1)^2dx+\frac{\chi^2\beta^2}{32\alpha}e^{-\frac{\delta}{3}(t-t_1)},\quad t>t_1.
    $$

    Let $y(t)=\int_\Omega(u-1)^2dx$. Then

    $$
    y'(t)\le\Big(\frac{\chi^2\gamma^2}{32\alpha}-1\Big)y(t)+\frac{\chi^2\beta^2}{32\alpha}e^{-\frac{\delta}{3}(t-t_1)},\quad t>t_1.
    $$

    From the comparison principle for ODEs, we get

    $$
    y(t)\le\Big(y(t_1)-\frac{3\chi^2\beta^2}{32\alpha(3-\delta)-3\chi^2\gamma^2}\Big)e^{-\big(1-\frac{\chi^2\gamma^2}{32\alpha}\big)(t-t_1)}+\frac{3\chi^2\beta^2}{32\alpha(3-\delta)-3\chi^2\gamma^2}e^{-\frac{\delta}{3}(t-t_1)},\quad t>t_1.
    $$

    This yields

    $$
    \int_\Omega(u-1)^2dx\le C_9e^{-\lambda_2(t-t_1)},\quad t>t_1,\tag{3.14}
    $$

    where $\lambda_2=\min\big\{1-\frac{\chi^2\gamma^2}{32\alpha},\,\frac{\delta}{3}\big\}$ and $C_9=\max\big\{|\Omega|-\frac{3\chi^2\beta^2}{32\alpha(3-\delta)-3\chi^2\gamma^2},\,\frac{3\chi^2\beta^2}{32\alpha(3-\delta)-3\chi^2\gamma^2}\big\}$.
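The comparison step above can be sanity-checked numerically (our illustration, with sample parameter values): a forward-Euler integration of the linear comparison ODE tracks the closed-form expression used to derive (3.14).

```python
# Integrate y' = (chi^2 gamma^2/(32 alpha) - 1) y + (chi^2 beta^2/(32 alpha)) e^{-delta s/3}
# with forward Euler (shifted time s = t - t1) and compare with the closed-form
# solution ( y(t1) - K ) e^{-a s} + K e^{-delta s/3},  K = b/(a - delta/3).
import math

chi, alpha, beta, gamma, delta = 0.5, 1.0, 1.0, 1.0, 1.0
a = 1 - chi**2*gamma**2/(32*alpha)       # decay rate of the homogeneous part
b = chi**2*beta**2/(32*alpha)            # forcing amplitude
K = 3*chi**2*beta**2/(32*alpha*(3 - delta) - 3*chi**2*gamma**2)   # equals b/(a - delta/3)

y, dt, T = 1.0, 1e-4, 10.0               # y(t1) = 1
for k in range(int(T/dt)):
    s = k*dt
    y += dt*(-a*y + b*math.exp(-delta*s/3))

closed = (1.0 - K)*math.exp(-a*T) + K*math.exp(-delta*T/3)
print(y, closed)                         # Euler solution tracks the closed form
```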

    From the inequalities (3.11), (3.13) and (3.14), we derive

    $$
    \int_\Omega\Big(v-\frac{\beta+\gamma}{\alpha}\Big)^2dx\le C_{10}e^{-\lambda_2(t-t_1)},\quad t>t_1,\tag{3.15}
    $$

    where $C_{10}=\max\big\{\frac{2\gamma^2}{\alpha^2}C_9,\,\frac{2\beta^2}{\alpha^2}\big\}$. By employing the uniform estimate (1.7) and the inequalities (3.9), (3.14) and (3.15), the exponential decay estimate (1.10) is obtained.

    The proof is complete.

    In this paper, we mainly study the uniform boundedness of classical solutions and the exponential asymptotic stability of the unique positive equilibrium point of the chemotaxis model (1.3) for Baló's concentric sclerosis (BCS). For model (1.1), by numerical simulation, Calvez and Khonsari [7] showed that demyelination patterns of concentric rings occur as the chemotactic sensitivity increases. By Theorem 1.1 we know that systems (1.1) and (1.2) are uniformly bounded and dissipative. By Theorem 1.2 we also find that the constant equilibrium point of model (1.1) is exponentially asymptotically stable if

    $$
    \tilde\chi<\frac{2}{\bar w\tilde\beta}\sqrt{\frac{2D\mu\tilde\alpha\tilde\epsilon}{\bar u}},
    $$

    and the constant equilibrium point of the model (1.2) is exponentially asymptotically stable if

    $$
    \tilde\chi<2\sqrt{\frac{2D\mu\tilde\alpha\tilde\epsilon}{\bar u}}\,\min\Big\{\frac{1}{\bar w\tilde\beta},\frac{1}{\bar u\tilde\gamma}\Big\}.
    $$

    According to a pathological viewpoint of BCS, the above stability results mean that if the chemoattractive effect is weak, then the destroyed oligodendrocytes form a homogeneous plaque.
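This stabilization can be observed numerically. The following crude 1-D finite-difference simulation of model (1.3) is our own illustration (grid, time step and parameter values are hypothetical choices, with $\chi$ well below the threshold in (1.8)); per Theorem 1.2 the solution should approach $(1,\frac{\beta+\gamma}{\alpha},1)$:

```python
# Explicit-in-time finite differences for (1.3) on [0,1] with zero-flux BCs:
#   u_t = u_xx - chi (u(1-u) v_x)_x + u(1-u),  -v_xx + alpha v = beta w + gamma u,
#   w_t = delta u^2/(1+u) (1-w).
import numpy as np

chi, alpha, beta, gamma, delta = 0.5, 1.0, 1.0, 1.0, 1.0
N = 21
dx = 1.0/(N - 1)
x = np.linspace(0.0, 1.0, N)
dt, T = 5e-4, 20.0

u = 0.5 + 0.3*np.cos(np.pi*x)      # initial data in (0,1], compatible with Neumann BCs
w = np.zeros(N)

# Matrix of -v'' + alpha v with Neumann conditions (ghost-point reflection)
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 2/dx**2 + alpha
    if i > 0: A[i, i-1] = -1/dx**2
    if i < N-1: A[i, i+1] = -1/dx**2
A[0, 1] *= 2; A[-1, -2] *= 2       # reflected ghost points double the off-diagonal
Ainv = np.linalg.inv(A)

def lap(z):                        # Laplacian with mirror (zero-flux) boundaries
    ze = np.concatenate(([z[1]], z, [z[-2]]))
    return (ze[2:] - 2*ze[1:-1] + ze[:-2])/dx**2

def chemo(u, v):                   # -chi * d/dx( u(1-u) v_x ), zero flux at the ends
    q = u*(1 - u)
    qf = 0.5*(q[1:] + q[:-1])      # face averages
    flux = chi*qf*(v[1:] - v[:-1])/dx
    flux = np.concatenate(([0.0], flux, [0.0]))
    return -(flux[1:] - flux[:-1])/dx

for _ in range(int(T/dt)):
    v = Ainv @ (beta*w + gamma*u)  # elliptic equation solved at each step
    u = u + dt*(lap(u) + chemo(u, v) + u*(1 - u))
    w = w + dt*(delta*u**2/(1 + u)*(1 - w))

v = Ainv @ (beta*w + gamma*u)
print(np.abs(u - 1).max(), np.abs(v - (beta + gamma)/alpha).max(), np.abs(w - 1).max())
```

With these settings the computed deviations from the equilibrium are tiny by $t=20$, consistent with the exponential decay in (1.10).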

    The authors would like to thank the editors and the anonymous referees for their constructive comments. This research was supported by the National Natural Science Foundation of China (Nos. 11761063, 11661051).

    We have no conflict of interest in this paper.



    [1] J. Jiang, A literature survey on domain adaptation of statistical classifiers, 3 (2018), 1–12.
    [2] P. S. Jialin, Q. Yang, A survey on transfer learning, IEEE Trans. Knowl. Data Eng., 22 (2009), 1345–1359. https://doi.org/10.1109/TKDE.2009.191 doi: 10.1109/TKDE.2009.191
    [3] C. Wang, S. Mahadevan, Learning with augmented features for heterogeneous domain adaptation, in Proceedings of 22th International Joint Conference on Artificial Intelligence, (2011), 1541–1546. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-259
    [4] L. Duan, D. Xu, I. Tsang, Learning with augmented features for heterogeneous domain adaptation, arXiv preprint, (2011), arXiv: 1206.4660. https://doi.org/10.48550/arXiv.1206.4660
    [5] M. Wang, W. Deng, Deep visual domain adaptation: A survey, Neurocomputing, 312 (2018), 135–153. http://doi.org/10.1016/j.neucom.2018.05.083 doi: 10.1016/j.neucom.2018.05.083
    [6] S. M. Salaken, A. Khosravi, T. Nguyen, S. Nahavandi, Extreme learning machine based transfer learning algorithms: A survey, Neurocomputing, 267 (2017), 516–524. https://doi.org/10.1016/j.neucom.2017.06.037 doi: 10.1016/j.neucom.2017.06.037
    [7] Z. Zhou, A brief introduction to weakly supervised learning, Natl. Sci. Rev., 5 (2017), 44–53. https://doi.org/10.1093/nsr/nwx106 doi: 10.1093/nsr/nwx106
    [8] J. Zhang, W. Li, P. Ogunbona, D. Xu, Recent advances in transfer learning for cross-dataset visual recognition: A problem-oriented perspective, ACM Comput. Surv., 52 (2020), 1–38. https://doi.org/10.1145/3291124 doi: 10.1145/3291124
    [9] J. Huang, A. Gretton, P. Ogunbona, K. Borgwardt, B. Schölkopf, A. J. Smola, Correcting sample selection bias by unlabeled data, in Advances in Neural Information Processing Systems 19, MIT Press, (2006), 601–608. https://doi.org/10.7551/mitpress/7503.003.0080
    [10] M. Sugiyama, S. Nakajima, H. Kashima, P. V. Buenau, M. Kawanabe, Direct importance estimation with model selection and its application to covariate shift adaptation, in Advances in Neural Information Processing Systems, (2007), 601–608.
    [11] S. Li, S. Song, G. Huang, Prediction reweighting for domain adaptation, IEEE Trans. Neural Networks Learn. Syst., 28 (2016), 1682–1695. https://doi.org/10.1109/TNNLS.2016.2538282 doi: 10.1109/TNNLS.2016.2538282
    [12] Y. Zhu, K. Ting, Z. Zhou, New class adaptation via instance generation in one-pass class incremental learning, in 2017 IEEE International Conference on Data Mining (ICDM), (2017), 1207–1212. https://doi.org/10.1109/ICDM.2017.163
    [13] B. Zadrozny, Learning and evaluating classifiers under sample selection bias, in Proceedings of the 21th International Conference on Machine Learning, 2004. https://doi.org/10.1145/1015330.1015425
    [14] J. Jiang, C. Zhai, Instance weighting for domain adaptation in NLP, in Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, (2007), 264–271. https://aclanthology.org/P07-1034
    [15] R. Wang, M. Utiyama, L. Liu, K. Chen, E. Sumita, Instance weighting for neural machine translation domain adaptation, in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, (2017), 1482–1488. https://doi.org/10.18653/v1/d17-1155
    [16] J. Blitzer, R. McDonald, F. Pereira, Domain adaptation with structural correspondence learning, in Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, (2006), 120–128. https://doi.org/10.3115/1610075.1610094
    [17] M. Xiao, Y. Guo, Feature space independent semi-supervised domain adaptation via kernel matching, IEEE Trans. Pattern Anal. Mach. Intell., 37 (2014), 54–66. http://doi.org/10.1109/TPAMI.2014.2343216 doi: 10.1109/TPAMI.2014.2343216
    [18] S. Herath, M. Harandi, F. Porikli, Learning an invariant hilbert space for domain adaptation, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 3845–3854. http://doi.org/doi:10.1109/CVPR.2017.421
    [19] L. Zhang, S. Wang, G. Huang, W. Zuo, J. Yang, D. Zhang, Manifold criterion guided transfer learning via intermediate domain generation, IEEE Trans. Neural Networks Learn. Syst., 30 (2019), 3759–3773. https://doi.org/10.1109/TNNLS.2019.2899037 doi: 10.1109/TNNLS.2019.2899037
    [20] J. Fu, L. Zhang, B. Zhang, W. Jia, Guided Learning: A new paradigm for multi-task classification, in Lecture Notes in Computer Science, 10996 (2018), 239–246. https://doi.org/10.1007/978-3-319-97909-0_26
    [21] S. Sun, Z. Xu, M. Yang, Transfer learning with part-based ensembles, in Lecture Notes in Computer Science, (2013), 271–282. https://doi.org/10.1007/978-3-642-38067-9_24
    [22] L. Cheng, F. Tsung, A. Wang, A statistical transfer learning perspective for modeling shape deviations in additive manufacturing, IEEE Rob. Autom. Lett., 2 (2017), 1988–1993. http://doi.org/doi:10.1109/LRA.2017.2713238 doi: 10.1109/LRA.2017.2713238
    [23] Y. Wang, S. Chen, Soft large margin clustering, Inf. Sci., 232 (2013), 116–129. https://doi.org/10.1016/j.ins.2012.12.040 doi: 10.1016/j.ins.2012.12.040
    [24] W. Dai, Q. Yang, G. Xue, Y. Yong, Self-taught clustering, in Proceedings of the 25th International Conference on Machine Learning, (2008), 200–207. https://doi.org/10.1016/j.ins.2012.12.040
    [25] Z. Deng, Y. Jiang, F. Chung, H. Ishibuchi, K. Choi, S. Wang, Transfer prototype-based fuzzy clustering, IEEE Trans. Fuzzy Syst., 24 (2015), 1210–1232. http://doi.org/doi:10.1109/TFUZZ.2015.2505330 doi: 10.1109/TFUZZ.2015.2505330
    [26] H. Yu, M. Hu, S. Chen, Multi-target unsupervised domain adaptation without exactly shared categories, arXiv preprint, (2018), arXiv: 1809.00852. https://doi.org/10.48550/arXiv.1809.00852
    [27] Z. Ding, M. Shao, Y. Fu, Robust multi-view representation: A unified perspective from multi-view learning to domain adaption, in Proceedings of the 27th International Joint Conference on Artificial Intelligence, (2018), 5434–5440. https://doi.org/10.24963/ijcai.2018/767
    [28] Z. Pei, Z. Cao, M. Long, J. Wang, Multi-adversarial domain adaptation, in Proceedings of the 32th AAAI Conference on Artificial Intelligence, 32 (2018), 3934–3941. https://doi.org/10.1609/aaai.v32i1.11767
    [29] W. Jiang, W. Liu, F. Chung, Knowledge transfer for spectral clustering, Pattern Recognit., 81 (2018), 484–496. https://doi.org/10.1016/j.patcog.2018.04.018 doi: 10.1016/j.patcog.2018.04.018
    [30] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, et al., Domain-adversarial training of neural networks, J. Mach. Learn. Res., 17 (2016), 2096–2030. https://jmlr.org/papers/v17/15-239.html
    [31] A. J. Gallego, J. Calvo-Zaragoza, R. B. Fisher, Incremental unsupervised domain-adversarial training of neural networks, IEEE Trans. Neural Networks Learn. Syst., 32 (2020), 4864–4878. https://doi: 10.1109/TNNLS.2020.3025954 doi: 10.1109/TNNLS.2020.3025954
    [32] B. Sun, K. Saenko, Deep coral: Correlation alignment for deep domain adaptation, in Lecture Notes in Computer Science, (2016), 443–450. https://doi.org/10.1007/978-3-319-49409-8_35
    [33] S. Lee, D. Kim, N. Kim, S. G. Jeong, Drop to adapt: Learning discriminative features for unsupervised domain adaptation, in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2019), 91–100.
    [34] D. B. Bhushan, K. Benjamin, F. Rémi, T. Devis, C. Nicolas, Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation, in Lecture Notes in Computer Science, 11208 (2018), 447–463. https://doi.org/10.1007/978-3-030-01225-0_28
    [35] X. Fang, N. Han, J. Wu, Y. Xu, J. Yang, W. Wong, et al., Approximate low-rank projection learning for feature extraction, IEEE Trans. Neural Networks Learn. Syst., 29 (2018), 5228–5241. http://doi.org/10.1109/TNNLS.2018.2796133 doi: 10.1109/TNNLS.2018.2796133
    [36] B. Gong, Y. Shi, F. Sha, K. Grauman, Geodesic flow kernel for unsupervised domain adaptation, in 2012 IEEE Conference on Computer Vision and Pattern Recognition, (2012), 2066–2073. http://doi.org/10.1109/CVPR.2012.6247911
    [37] S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, S. Zafeiriou, Agedb: the first manually collected, in-the-wild age database, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, (2017), 51–59. https://doi.org/10.1109/CVPRW.2017.250
    [38] K. Ricanek, T. Tesafaye, Morph: A longitudinal image database of normal adult age-progression, in 7th International Conference on Automatic Face and Gesture Recognition (FGR06), (2006), 341–345. https://doi.org/10.1109/FGR.2006.78
    [39] B. Chen, C. Chen, W. Hsu, Cross-age reference coding for age-invariant face recognition and retrieval, in Lecture Notes in Computer Science, (2014), 768–783. https://doi.org/10.1007/978-3-319-10599-4_49
    [40] X. Zhu, S. Zhang, Y. Li, J. Zhang, L. Yang, Y. Fang, Low-rank sparse subspace for spectral clustering, IEEE Trans. Knowl. Data Eng., 31 (2018), 1532–1543. https://doi.org/10.1109/TKDE.2018.2858782 doi: 10.1109/TKDE.2018.2858782
    [41] L. T. Nguyen-Meidine, A. Belal, M. Kiran, J. Dolz, L. Blais-Morin, E. Granger, Unsupervised multi-target domain adaptation through knowledge distillation, in 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), (2021), 1339–1347. https://doi.org/10.1109/WACV48630.2021.00138
    [42] B. Mirkin, Clustering: a data recovery approach, Chapman and Hall/CRC, 2005. https://doi.org/10.1201/9781420034912
    [43] Q. Tian, S. Chen, T. Ma, Ordinal space projection learning via neighbor classes representation, Comput. Vision Image Understanding, 174 (2018), 24–32. http://doi.org/10.1016/j.cviu.2018.06.003 doi: 10.1016/j.cviu.2018.06.003
    [44] X. Geng, Z. Zhou, K. Smith-Miles, Automatic age estimation based on facial aging patterns, IEEE Trans. Pattern Anal. Mach. Intell., 29 (2007), 2234–2240. http://doi.org/10.1109/TPAMI.2007.70733 doi: 10.1109/TPAMI.2007.70733
    [45] T. Serre, L. Wolf, T. Poggio, Object recognition with features inspired by visual cortex, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2 (2005), 994–1000. http://doi.org/10.1109/CVPR.2005.254
    [46] Y. Xu, X. Fang, J. Wu, X. Li, D. Zhang, Discriminative transfer subspace learning via low-rank and sparse representation, IEEE Trans. Image Process., 25 (2015), 850–863. https://doi.org/10.1109/TIP.2015.2510498 doi: 10.1109/TIP.2015.2510498
    [47] Y. Jin, C. Qin, J. Liu, K. Lin, H. Shi, Y. Huang, et al., A novel domain adaptive residual network for automatic atrial fibrillation detection, Knowledge Based Syst., 203 (2020). https://doi.org/10.1016/j.knosys.2020.106122 doi: 10.1016/j.knosys.2020.106122
    [48] J. Jiao, J. Lin, M. Zhao, K. Liang, Double-level adversarial domain adaptation network for intelligent fault diagnosis, Knowledge Based Syst., 205 (2020). http://doi.org/10.1016/j.knosys.2020.106236 doi: 10.1016/j.knosys.2020.106236
  • This article has been cited by:

    1. Lu Xu, Chunlai Mu, Qiao Xin, Global boundedness and asymptotic behavior of solutions for a quasilinear chemotaxis model of multiple sclerosis with nonlinear signal secretion, 2023, 28, 1531-3492, 1215, 10.3934/dcdsb.2022118
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)


Figures and Tables

Figures(10)  /  Tables(6)
