Open-world semi-supervised learning (OWSSL) has received significant attention because it addresses the problem of unlabeled data containing classes not present in the labeled data. However, existing OWSSL methods still rely on a large amount of labeled data from seen classes, overlooking the reality that abundant labels are difficult to obtain in real scenarios. In this paper, we explore a new setting called open-world barely-supervised learning (OWBSL), in which only a single label is provided for each seen class, greatly reducing labeling costs. To tackle OWBSL, we propose a novel framework that leverages augmented pseudo-labels generated for the unlabeled data. Specifically, we first generate initial pseudo-labels for the unlabeled data using visual-language models. Then, to keep the pseudo-labels reliable while they are updated during model training, we enhance them with predictions obtained under weak data augmentation; in this way we obtain the augmented pseudo-labels. Additionally, to fully exploit the information in the unlabeled data, we incorporate consistency regularization between strong and weak augmentations into our framework. Experimental results on multiple benchmark datasets demonstrate the effectiveness of our method.
Citation: Zhongnian Li, Yanyan Ding, Meng Wei, Xinzheng Xu. Open-world barely-supervised learning via augmented pseudo labels[J]. Electronic Research Archive, 2024, 32(10): 5804-5818. doi: 10.3934/era.2024268
The motivation of this study arises from molecular descriptors and their conversion into numerical structures, which enables mathematics to play its part in these investigations; such work is useful in chemistry, pharmacy, and environmental protection. Topological indices are molecular descriptors that remain invariant in the molecular structure; they create a bridge between chemical structures and graph theory and combinatorics through the characterization of chemical diagrams. They also attract algebraists and other researchers, who correlate algebraic structures to extend the range of the applications mentioned above.
A molecular graph can be considered the structural formula of a chemical compound viewed as a graph: the atoms of the compound are the vertices and the atomic bonds are the edges, so a molecular graph can be seen as a colored graph. A topological index of a molecular graph is a real number assigned so that it reflects the topological structure of the graph and remains constant under graph automorphisms. The indices associated with a molecular graph structure have several applications in chemistry, including the study of nanotube structures; see [10,12,22,24].
A characteristic feature of topological indices is that various types of them can be associated with the same graph. Topological indices are mainly based on the degrees, distances, and eccentricities of a graph. In particular, the Randić connectivity index, the harmonic indices, the atom-bond connectivity index, the Zagreb indices, and the geometric-arithmetic indices are degree-based [1,2,5,28].
Examples of distance-based topological indices are the Hosoya index, the Wiener index, and the Estrada index [36,37]. The geometric-arithmetic eccentric index [20], the Zagreb eccentric indices [19,35], the atom-bond connectivity eccentric index [15], and the eccentric harmonic indices [16,17] are eccentricity-based indices.
From the application point of view, the ABC index captures the relative stability of branched versus linear alkanes and is also used to compute the strain energy of cycloalkanes [18,33]. The GA index is used as a tool to correlate certain physico-chemical characteristics and has proven to have better predictive power than the Randić connectivity index [11,32,34]. The first and second Zagreb indices are used to compute the total π-electron energy of a molecule [31]. Degree-based topological indices are generally used to analyze the chemical properties of various molecular diagrams.
Motivated by the above works, we study eccentricity-based topological indices for a class of graphs that are used to analyze the molecular structure of a compound in the assessment of its pharmacological, physico-chemical, and toxicological characteristics; further details can be found in [14,25]. The quantitative structure-activity relationship (QSAR) is used for such analysis [21].
For structural study, ring structures are associated with graphs in several ways; among these we consider zero-divisor graphs. These graphs over commutative rings were first constructed by I. Beck [13] in 1988, who discussed the coloring problem on them.
Classical and logical algebraic structures linked with graph theory have been studied intensively in recent years for their associated invariants and their applications in various fields. For example, it is interesting to explore which rings R make the graph Ω(R) isomorphic to a given graph Γ. Taking commutative rings with 14 elements as vertices, Redmond [29] constructed all possible zero-divisor graphs of this structure; he further provided an algorithm to compute, up to isomorphism, the rings that produce a zero-divisor graph with a fixed given number of vertices. Some recent studies on algebraic combinatorics with applications in other sciences are available in [23,26,27]. Further applications and the relation between algebraic structures and chemical graphs can be seen in [3,14]. Recent works by Asir et al. [8], Selvakumar et al. [30], and Asir et al. [9] provided formulas for calculating the Wiener index of zero-divisor graphs of Zn. A detailed treatment of graphs associated with ring structures is given in [6].
Consider the class of commutative rings Zpn, where p is a prime number, and let Ω(Zpn) denote the corresponding zero-divisor graph. In this paper, we provide a method for calculating eccentricity-based topological indices of the zero-divisor graphs Ω(Zpn) for a fixed positive integer n and any prime p.
In particular, we compute the first and third Zagreb eccentric indices, the geometric-arithmetic eccentric index, the atom-bond connectivity eccentric index, and the fourth eccentric harmonic index of the zero-divisor graphs associated with the rings Zpn.
Throughout this text, Γ denotes a connected graph with vertex set V(Γ) and edge set E(Γ). For a vertex v∈V(Γ), the degree d(v) is the number of edges incident to v. The numbers Δ(Γ) and δ(Γ) denote the maximum and minimum vertex degree of Γ, respectively. The length of a shortest path between two vertices v1 and v2 is denoted by d(v1,v2). For a vertex v∈V(Γ), the eccentricity of v is defined as

$$\varrho_v=\max\{d(v,x):x\in V(\Gamma)\}. \qquad (2.1)$$
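The eccentricity in Eq. (2.1) can be computed by breadth-first search; the sketch below is illustrative only (the function name `eccentricity` and the dict-based graph representation are ours, not from the paper).

```python
from collections import deque

def eccentricity(adj, v):
    """Eccentricity of v: the largest distance from v to any vertex of a
    connected graph, given as an adjacency dict {vertex: iterable of neighbors}."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return max(dist.values())

# Path graph 1 - 2 - 3: the middle vertex has eccentricity 1, the endpoints 2.
path = {1: [2], 2: [1, 3], 3: [2]}
print(eccentricity(path, 1), eccentricity(path, 2))  # -> 2 1
```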
A topological invariant T(Γ) based on the eccentricities of the vertices of the graph Γ has the form

$$T(\Gamma)=\sum_{vu\in E(\Gamma)}\phi(\varrho_v,\varrho_u), \qquad (2.2)$$

where ϕ(ϱu,ϱv)=ϕ(ϱv,ϱu) is a real-valued symmetric function of the two eccentricities ϱu and ϱv.
● If ϕ(ϱu,ϱv)=(ϱu+ϱv)^β for β∈ℝ∖{0}, then T(Γ) with β=1 is the first Zagreb eccentric index M∗1(Γ) [19,35].
● If ϕ(ϱu,ϱv)=(ϱu×ϱv)^α for α∈ℝ∖{0}, then T(Γ) with α=1 is the third Zagreb eccentric index M∗3(Γ) [19,35].
● If ϕ(ϱu,ϱv)=2√(ϱu×ϱv)/(ϱu+ϱv), then T(Γ) is the geometric-arithmetic eccentric index GA4(Γ) [20].
● If ϕ(ϱu,ϱv)=√((ϱu+ϱv−2)/(ϱu×ϱv)), then T(Γ) is the atom-bond connectivity eccentric index ABC5(Γ) [15].
● If ϕ(ϱu,ϱv)=2/(ϱu+ϱv), then T(Γ) is the fourth eccentric harmonic index H4(Γ) [16,17].
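All five indices above are instances of Eq. (2.2) with different choices of ϕ. The following sketch makes this uniform view concrete; the helper names (`eccentric_index` and the `phi_*` functions) are ours, introduced for illustration.

```python
import math

def eccentric_index(edges, ecc, phi):
    """T(Gamma) = sum of phi(ecc_u, ecc_v) over all edges uv, as in Eq. (2.2)."""
    return sum(phi(ecc[u], ecc[v]) for u, v in edges)

phi_zagreb1 = lambda a, b: a + b                          # first Zagreb eccentric index
phi_zagreb3 = lambda a, b: a * b                          # third Zagreb eccentric index
phi_ga4 = lambda a, b: 2 * math.sqrt(a * b) / (a + b)     # geometric-arithmetic eccentric index
phi_abc5 = lambda a, b: math.sqrt((a + b - 2) / (a * b))  # atom-bond connectivity eccentric index
phi_h4 = lambda a, b: 2 / (a + b)                         # fourth eccentric harmonic index

# Path graph 1 - 2 - 3, whose eccentricities are 2, 1, 2:
edges, ecc = [(1, 2), (2, 3)], {1: 2, 2: 1, 3: 2}
print(eccentric_index(edges, ecc, phi_zagreb1))  # -> 6
```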
We consider a commutative ring R with unity. A non-zero element z∈R is a zero-divisor if there exists x∈R, x≠0, such that z·x=0. Similarly, a non-zero element a∈R is a unit if there exists b∈R, b≠0, such that a·b=1. The set Z(R) denotes the collection of all zero-divisors in R. If R is finite and commutative, we associate with R its zero-divisor graph ΩR, whose vertex set is V(ΩR)=Z(R) and whose edge set E(ΩR) is given by (x1,x2)∈E(ΩR) if and only if x1,x2∈V(ΩR) and x1·x2=0. Anderson and Livingston [7] proved that ΩR is connected for every commutative ring R [4,14].
In this text, the rings under consideration are of the form R=Zm for a fixed positive integer m. An element x∈Zm∖{0} is a zero-divisor if and only if gcd(x,m)>1, and an element a∈Zm∖{0} is a unit if and only if gcd(a,m)=1. Therefore every non-zero element of Zm is either a zero-divisor or a unit.
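This dichotomy is straightforward to check computationally; a minimal sketch (the helper names are ours, for illustration only):

```python
from math import gcd

def is_zero_divisor(x, m):
    """Non-zero x in Z_m is a zero-divisor iff gcd(x, m) > 1."""
    return x % m != 0 and gcd(x % m, m) > 1

def is_unit(x, m):
    """Non-zero x in Z_m is a unit iff gcd(x, m) = 1."""
    return x % m != 0 and gcd(x % m, m) == 1

# In Z_12: 8 is a zero-divisor (8*3 = 24 = 0 mod 12) and 5 is a unit (5*5 = 25 = 1 mod 12).
assert is_zero_divisor(8, 12) and is_unit(5, 12)
# Every non-zero element of Z_12 is exactly one of the two:
assert all(is_zero_divisor(x, 12) != is_unit(x, 12) for x in range(1, 12))
```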
Let p be a prime number and n a fixed positive integer, and consider the finite commutative ring R=Zpn. By the above characterization, a non-zero element a∈Zpn is a zero-divisor if and only if p divides a. We partition the set of zero-divisors Z(Zpn) into the sets Λi={u·p^i : u is a unit in Zpn}⊆Zpn, where Λi consists of the elements that are multiples of p^i but not of p^{i+1}. Clearly,

$$Z(\mathbb{Z}_{p^n})=\bigsqcup_{i=1}^{n-1}\Lambda_i,\qquad |\Lambda_i|=p^{n-i}-p^{n-i-1}\quad (i=1,2,\ldots,n-1),$$

and therefore |Z(Zpn)| = Σ_{i=1}^{n−1} |Λi| = p^{n−1}−1.
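The partition into the sets Λi and the counting formulas above can be verified by brute force for small p and n; the sketch below is illustrative, with helper names of our own choosing.

```python
from math import gcd

def zero_divisors(p, n):
    """Non-zero zero-divisors of Z_{p^n}, i.e., the non-zero multiples of p."""
    m = p ** n
    return [x for x in range(1, m) if gcd(x, m) > 1]

def lam(p, n, i):
    """Lambda_i: elements divisible by p**i but not by p**(i + 1)."""
    m = p ** n
    return [x for x in range(1, m) if x % p ** i == 0 and x % p ** (i + 1) != 0]

p, n = 3, 4
zd = zero_divisors(p, n)
assert len(zd) == p ** (n - 1) - 1                                  # |Z(Z_{p^n})| = p^(n-1) - 1
for i in range(1, n):
    assert len(lam(p, n, i)) == p ** (n - i) - p ** (n - i - 1)     # |Lambda_i|
assert sorted(sum((lam(p, n, i) for i in range(1, n)), [])) == zd   # disjoint union
```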
We denote the zero-divisor graph associated with the ring R=Zpn by ΩR=ΩZpn, with vertex set V(ΩR)=Z(Zpn). Since zero-divisors are non-zero by definition, 0∉V(ΩR).
Let dΛi(x) denote the degree of a vertex x∈Λi; it is computed in the following result.
Proposition 3.1. For the zero-divisor graph ΩR associated with the ring R=Zpn, the degree of a vertex x∈Λi is

$$d_{\Lambda_i}(x)=\begin{cases}p^{i}-1, & 1\le i\le\lceil n/2\rceil-1,\\ p^{i}-2, & \lceil n/2\rceil\le i\le n-1.\end{cases}$$
Proof. For any vertex x∈Λi, we have x·y=0 if and only if y∈Λj with j≥n−i. For 1≤i≤⌈n/2⌉−1,

$$d_{\Lambda_i}(x)=\Big|\bigsqcup_{j=n-i}^{n-1}\Lambda_j\Big|=\sum_{j=n-i}^{n-1}|\Lambda_j|=p^{i}-1.$$

For ⌈n/2⌉≤i≤n−1, the vertex x itself belongs to the union (since i≥n−i), so

$$d_{\Lambda_i}(x)=\Big|\bigsqcup_{j=n-i}^{n-1}\Lambda_j\setminus\{x\}\Big|=\sum_{j=n-i}^{n-1}|\Lambda_j|-1=(p^{i}-1)-1=p^{i}-2.$$
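As a sanity check of Proposition 3.1, one can construct Ω(Zpn) by brute force and compare each vertex degree against the stated formula; the following sketch (with helper names of our own) does this for p=3, n=4.

```python
from math import gcd, ceil

def degrees(p, n):
    """Degree of every vertex of the zero-divisor graph of Z_{p^n}, by brute force."""
    m = p ** n
    zd = [x for x in range(1, m) if gcd(x, m) > 1]
    return {x: sum(1 for y in zd if y != x and (x * y) % m == 0) for x in zd}

p, n = 3, 4
deg = degrees(p, n)
half = ceil(n / 2)
for x, d in deg.items():
    i = max(i for i in range(1, n) if x % p ** i == 0)  # x lies in Lambda_i
    expected = p ** i - 1 if i <= half - 1 else p ** i - 2
    assert d == expected
```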
Using the handshaking lemma and simplifying, we obtain the size of ΩR in the following result.
Proposition 3.2. For a prime number p and a fixed positive integer n≥2, the size of ΩR is

$$\frac{1}{2}\sum_{x\in V(\Omega_R)}d(x)=\frac{1}{2}\Big\{p^{n-1}(np-n-p)-p^{\,n-\lceil n/2\rceil}+2\Big\},$$

except in the case n=p=2.
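The closed form of Proposition 3.2 can likewise be checked against direct enumeration for small cases; an illustrative sketch (helper names are ours):

```python
from math import gcd, ceil

def edge_count(p, n):
    """Number of edges of the zero-divisor graph of Z_{p^n}, by brute force."""
    m = p ** n
    zd = [x for x in range(1, m) if gcd(x, m) > 1]
    return sum(1 for i, x in enumerate(zd) for y in zd[i + 1:] if (x * y) % m == 0)

def size_formula(p, n):
    """Proposition 3.2: (1/2){p^(n-1)(np - n - p) - p^(n - ceil(n/2)) + 2}."""
    return (p ** (n - 1) * (n * p - n - p) - p ** (n - ceil(n / 2)) + 2) // 2

# Agreement on several small cases (the excluded case n = p = 2 is avoided):
for p, n in [(3, 3), (3, 4), (5, 2), (2, 3)]:
    assert edge_count(p, n) == size_formula(p, n)
```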
Theorem 3.3. Let p be a prime number and n≥2 a fixed integer. Then every vertex of the graph ΩR associated with the ring R=Zpn has eccentricity 1 or 2.
Proof. For vertices u_i∈Λ_i and v_j∈Λ_j, we examine the distance d(u_i,v_j). For i=j, we have d(u_i,v_j)=2 when 1≤i≤⌈n/2⌉−1 (two such vertices are non-adjacent but share a common neighbor in Λ_{n−1}) and d(u_i,v_j)=1 when ⌈n/2⌉≤i≤n−1. For i≠j, we have d(u_i,v_j)=2 when 1≤i,j≤⌈n/2⌉−1 and d(u_i,v_j)=1 when ⌈n/2⌉≤i,j≤n−1. Also, d(u_1,v_j)=2 if 2≤j≤n−2 and d(u_1,v_j)=1 if j=n−1. Therefore the eccentricity of every vertex of ΩR is at most 2, hence equal to 1 or 2; in particular, diam(ΩR)=2.
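Theorem 3.3 can be confirmed computationally for small cases: a vertex adjacent to every other vertex has eccentricity 1, and otherwise one checks that each non-neighbor is reachable in two steps. The sketch below is illustrative (the helper name is ours).

```python
from math import gcd

def eccentricities(p, n):
    """Eccentricity of each vertex of the zero-divisor graph of Z_{p^n}."""
    m = p ** n
    zd = [x for x in range(1, m) if gcd(x, m) > 1]
    adj = {x: {y for y in zd if y != x and (x * y) % m == 0} for x in zd}
    ecc = {}
    for v in zd:
        if len(adj[v]) == len(zd) - 1:
            ecc[v] = 1  # adjacent to every other vertex
        else:
            # every non-neighbor shares a common neighbor with v, so distance 2
            assert all(adj[v] & adj[u] for u in zd if u != v and u not in adj[v])
            ecc[v] = 2
    return ecc

ecc = eccentricities(3, 3)
assert set(ecc.values()) <= {1, 2}
assert ecc[9] == 1 and ecc[3] == 2  # 9 (in Lambda_2) is adjacent to all vertices; 3 is not
```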
Lemma 3.4. For any prime number p, the graph ΩR associated with the ring R=Zpn satisfies the following. For even integers n,

$$T(\Omega_R)=\frac{p^{n}}{2}\big(\phi(1,1)+(n-2)\phi(1,2)\big)-\frac{n}{2}p^{n-1}\phi(1,2)+p^{n/2}\left(\phi(1,2)-\frac{3}{2}\phi(1,1)\right)+\phi(1,1).$$

For odd integers n,

$$T(\Omega_R)=\frac{p^{n-1}}{2}\big(\phi(1,1)+(np-p-n-1)\phi(1,2)\big)+p^{(n-1)/2}\left(\phi(1,2)-\frac{3}{2}\phi(1,1)\right)+\phi(1,1).$$

Proof. Except for n=p=2, the graph ΩR has p^{n−1}−1 vertices and (1/2){p^{n−1}(np−n−p)−p^{n−⌈n/2⌉}+2} edges, by Proposition 3.2. We partition the edges of ΩR according to the eccentricities of their endpoints:

$$E_{r,s}=\{uv\in E(\Omega_R) : \varrho_u=r,\ \varrho_v=s\},$$

so that E_{r,s} consists of the edges whose endpoints have eccentricities r and s. By Proposition 3.1 and Theorem 3.3,

$$|E_{1,1}|=\frac{p^{n-1}-3p^{n/2}+2}{2}\ \ (n\text{ even}),\qquad |E_{1,1}|=\frac{p^{n-1}-3p^{(n-1)/2}+2}{2}\ \ (n\text{ odd}),$$

$$|E_{1,2}|=\frac{p^{n-1}(np-n-2p)}{2}+p^{n/2}\ \ (n\text{ even}),\qquad |E_{1,2}|=\frac{p^{n-1}(np-p-n-1)}{2}+p^{(n-1)/2}\ \ (n\text{ odd}),$$

and E(ΩR)=E_{1,1}∪E_{1,2}. Then, for even integers n,

$$T(\Omega_R)=\sum_{uv\in E(\Omega_R)}\phi(\varrho_u,\varrho_v)=\sum_{uv\in E_{1,1}}\phi(1,1)+\sum_{uv\in E_{1,2}}\phi(1,2)=\frac{p^{n-1}-3p^{n/2}+2}{2}\,\phi(1,1)+\left(\frac{p^{n-1}(np-n-2p)}{2}+p^{n/2}\right)\phi(1,2)$$
$$=\frac{p^{n}}{2}\big(\phi(1,1)+(n-2)\phi(1,2)\big)-\frac{n}{2}p^{n-1}\phi(1,2)+p^{n/2}\left(\phi(1,2)-\frac{3}{2}\phi(1,1)\right)+\phi(1,1).$$

For odd integers n,

$$T(\Omega_R)=\sum_{uv\in E(\Omega_R)}\phi(\varrho_u,\varrho_v)=\sum_{uv\in E_{1,1}}\phi(1,1)+\sum_{uv\in E_{1,2}}\phi(1,2)=\frac{p^{n-1}-3p^{(n-1)/2}+2}{2}\,\phi(1,1)+\left(\frac{p^{n-1}(np-p-n-1)}{2}+p^{(n-1)/2}\right)\phi(1,2)$$
$$=\frac{p^{n-1}}{2}\big(\phi(1,1)+(np-p-n-1)\phi(1,2)\big)+p^{(n-1)/2}\left(\phi(1,2)-\frac{3}{2}\phi(1,1)\right)+\phi(1,1).$$
The following theorem gives closed formulas for the eccentricity-based topological indices of the graphs ΩR.

Theorem 3.5. Let p≠3 be a prime number and let ΩR be the zero-divisor graph of R=Zpn. The first Zagreb eccentric index of ΩR is

$$M_1^*(\Omega_R)=\begin{cases}\dfrac{p^{n}}{2}(3n-4)-\dfrac{3n}{2}p^{n-1}+2, & n\text{ even},\\[1ex] \dfrac{p^{n}}{2}(3n-3)-\dfrac{p^{n-1}}{2}(3n-1)+2, & n\text{ odd};\end{cases}$$

the third Zagreb eccentric index is

$$M_3^*(\Omega_R)=\begin{cases}\dfrac{p^{n}}{2}(2n-1)-np^{n-1}+\dfrac{1}{2}p^{n/2}+1, & n\text{ even},\\[1ex] p^{n}(n-1)-\dfrac{p^{n-1}}{2}(2n+1)+\dfrac{1}{2}p^{(n-1)/2}+1, & n\text{ odd};\end{cases}$$

the geometric-arithmetic eccentric index is

$$GA_4(\Omega_R)=\begin{cases}\dfrac{p^{n}}{6}\big(2\sqrt{2}\,n-4\sqrt{2}+3\big)-\dfrac{\sqrt{2}\,n}{3}p^{n-1}+\dfrac{4\sqrt{2}-9}{6}p^{n/2}+1, & n\text{ even},\\[1ex] \dfrac{\sqrt{2}(n-1)}{3}p^{n}+\dfrac{3-2\sqrt{2}-2\sqrt{2}\,n}{6}p^{n-1}+\dfrac{4\sqrt{2}-9}{6}p^{(n-1)/2}+1, & n\text{ odd};\end{cases}$$

the atom-bond connectivity eccentric index is

$$ABC_5(\Omega_R)=\begin{cases}\dfrac{n-2}{2\sqrt{2}}\,p^{n}-\dfrac{n}{2\sqrt{2}}\,p^{n-1}+\dfrac{1}{\sqrt{2}}\,p^{n/2}, & n\text{ even},\\[1ex] \dfrac{\sqrt{2}(n-1)}{4}\,p^{n}-\dfrac{\sqrt{2}(n+1)}{4}\,p^{n-1}+\dfrac{\sqrt{2}}{2}\,p^{(n-1)/2}, & n\text{ odd};\end{cases}$$

and the fourth eccentric harmonic index is

$$H_4(\Omega_R)=\begin{cases}\dfrac{p^{n}}{6}(2n-1)-\dfrac{n}{3}p^{n-1}-\dfrac{5}{6}p^{n/2}+1, & n\text{ even},\\[1ex] \dfrac{p^{n}}{3}(n-1)+\dfrac{p^{n-1}}{6}(1-2n)-\dfrac{5}{6}p^{(n-1)/2}+1, & n\text{ odd}.\end{cases}$$
Proof. For the first Zagreb eccentric index M∗1(ΩR) of ΩR we have ϕ(ϱu,ϱv)=ϱu+ϱv, so ϕ(1,1)=2 and ϕ(1,2)=3. Thus, by Lemma 3.4, for even integers n,

$$M_1^*(\Omega_R)=\frac{p^{n}}{2}\big(2+3(n-2)\big)-\frac{n}{2}p^{n-1}\cdot 3+p^{n/2}\left(3-\frac{3}{2}\cdot 2\right)+2=\frac{p^{n}}{2}(3n-4)-\frac{3n}{2}p^{n-1}+2.$$

For odd integers n,

$$M_1^*(\Omega_R)=\frac{p^{n-1}}{2}\big(2+3(np-p-n-1)\big)+p^{(n-1)/2}\left(3-\frac{3}{2}\cdot 2\right)+2=\frac{p^{n}}{2}(3n-3)-\frac{p^{n-1}}{2}(3n-1)+2.$$

For the third Zagreb eccentric index M∗3(ΩR) of ΩR we have ϕ(ϱu,ϱv)=ϱu×ϱv, so ϕ(1,1)=1 and ϕ(1,2)=2. By Lemma 3.4, for even n,

$$M_3^*(\Omega_R)=\frac{p^{n}}{2}\big(1+2(n-2)\big)-\frac{n}{2}p^{n-1}\cdot 2+p^{n/2}\left(2-\frac{3}{2}\right)+1=\frac{p^{n}}{2}(2n-1)-np^{n-1}+\frac{1}{2}p^{n/2}+1.$$

For odd n,

$$M_3^*(\Omega_R)=\frac{p^{n-1}}{2}\big(1+2(np-p-n-1)\big)+p^{(n-1)/2}\left(2-\frac{3}{2}\right)+1=p^{n}(n-1)-\frac{p^{n-1}}{2}(2n+1)+\frac{1}{2}p^{(n-1)/2}+1.$$

For the geometric-arithmetic eccentric index GA4(ΩR) of ΩR we have

$$\phi(\varrho_u,\varrho_v)=\frac{2\sqrt{\varrho_u\varrho_v}}{\varrho_u+\varrho_v},$$

so ϕ(1,1)=1 and ϕ(1,2)=2√2/3. Hence, for even integers n,

$$GA_4(\Omega_R)=\frac{p^{n}}{2}\left(1+\frac{2\sqrt{2}}{3}(n-2)\right)-\frac{n}{2}p^{n-1}\cdot\frac{2\sqrt{2}}{3}+p^{n/2}\left(\frac{2\sqrt{2}}{3}-\frac{3}{2}\right)+1=\frac{p^{n}}{6}\big(2\sqrt{2}\,n-4\sqrt{2}+3\big)-\frac{\sqrt{2}\,n}{3}p^{n-1}+\frac{4\sqrt{2}-9}{6}p^{n/2}+1.$$

For odd integers n,

$$GA_4(\Omega_R)=\frac{p^{n-1}}{2}\left(1+\frac{2\sqrt{2}}{3}(np-p-n-1)\right)+p^{(n-1)/2}\left(\frac{2\sqrt{2}}{3}-\frac{3}{2}\right)+1=\frac{\sqrt{2}(n-1)}{3}p^{n}+\frac{3-2\sqrt{2}-2\sqrt{2}\,n}{6}p^{n-1}+\frac{4\sqrt{2}-9}{6}p^{(n-1)/2}+1.$$

For the atom-bond connectivity eccentric index ABC5(ΩR) of ΩR we have ϕ(ϱu,ϱv)=√((ϱu+ϱv−2)/(ϱu×ϱv)), so ϕ(1,1)=0 and ϕ(1,2)=1/√2. Therefore, for even n,

$$ABC_5(\Omega_R)=\frac{p^{n}}{2}\left(0+\frac{1}{\sqrt{2}}(n-2)\right)-\frac{n}{2}p^{n-1}\cdot\frac{1}{\sqrt{2}}+p^{n/2}\left(\frac{1}{\sqrt{2}}-\frac{3}{2}\cdot 0\right)+0=\frac{n-2}{2\sqrt{2}}\,p^{n}-\frac{n}{2\sqrt{2}}\,p^{n-1}+\frac{1}{\sqrt{2}}\,p^{n/2}.$$

For odd n,

$$ABC_5(\Omega_R)=\frac{p^{n-1}}{2}\left(0+\frac{1}{\sqrt{2}}(np-p-n-1)\right)+p^{(n-1)/2}\left(\frac{1}{\sqrt{2}}-\frac{3}{2}\cdot 0\right)+0=\frac{\sqrt{2}(n-1)}{4}\,p^{n}-\frac{\sqrt{2}(n+1)}{4}\,p^{n-1}+\frac{\sqrt{2}}{2}\,p^{(n-1)/2}.$$

For the fourth eccentric harmonic index H4(ΩR) of ΩR we have ϕ(ϱu,ϱv)=2/(ϱu+ϱv), hence ϕ(1,1)=1 and ϕ(1,2)=2/3. Thus, for even integers n,

$$H_4(\Omega_R)=\frac{p^{n}}{2}\left(1+\frac{2}{3}(n-2)\right)-\frac{n}{2}p^{n-1}\cdot\frac{2}{3}+p^{n/2}\left(\frac{2}{3}-\frac{3}{2}\right)+1=\frac{p^{n}}{6}(2n-1)-\frac{n}{3}p^{n-1}-\frac{5}{6}p^{n/2}+1.$$

For odd integers n,

$$H_4(\Omega_R)=\frac{p^{n-1}}{2}\left(1+\frac{2}{3}(np-p-n-1)\right)+p^{(n-1)/2}\left(\frac{2}{3}-\frac{3}{2}\right)+1=\frac{p^{n}}{3}(n-1)+\frac{p^{n-1}}{6}(1-2n)-\frac{5}{6}p^{(n-1)/2}+1.$$
We have computed the first Zagreb eccentric index, the third Zagreb eccentric index, the geometric-arithmetic eccentric index, the atom-bond connectivity eccentric index, and the fourth eccentric harmonic index of the graphs associated with the ring Zpn. Our work can be used to study various physical and chemical structures, such as carbohydrates, silicon structures, polymers, coatings, and paint constituents, as well as various computer network problems.
The authors declare no conflict of interest.
[1] | D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, C. A. Raffel, Mixmatch: A holistic approach to semi-supervised learning, Adv. Neural Inf. Process. Syst., 32 (2019). |
[2] | D. Berthelot, N. Carlini, E. D. Cubuk, A. Kurakin, K. Sohn, H. Zhang, et al., Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring, preprint, arXiv: 1911.09785. https://doi.org/10.48550/arXiv.1911.09785 |
[3] | Z. Peng, S. Tian, L. Yu, D. Zhang, W. Wu, S. Zhou, Semi-supervised medical image classification with adaptive threshold pseudo-labeling and unreliable sample contrastive loss, Biomed. Signal Process. Control, 79 (2023), 104142. https://doi.org/10.1016/j.bspc.2022.104142 |
[4] | Y. Wang, H. Wang, Y. Shen, J. Fei, W. Li, G. Jin, et al., Semi-Supervised Semantic Segmentation Using Unreliable Pseudo-Labels, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2022), 4238–4247. https://doi.org/10.1109/CVPR52688.2022.00421 |
[5] | H. Xu, L. Liu, Q. Bian, Z. Yang, Semi-supervised semantic segmentation with prototype-based consistency regularization, Adv. Neural Inf. Process. Syst., 35 (2022), 26007–26020. |
[6] | H. Mai, R. Sun, T. Zhang, F. Wu, RankMatch: Exploring the better consistency regularization for semi-supervised semantic segmentation, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2024), 3391–3401. https://doi.org/10.1109/CVPR52733.2024.00326 |
[7] | H. Wang, Z. Zhang, J. Gao, W. Hu, A-teacher: Asymmetric network for 3D semi-supervised object detection, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2024), 14978–14987. https://doi.org/10.1109/CVPR52733.2024.01419 |
[8] | M. Xu, Z. Zhang, H. Hu, J. Wang, L. Wang, F. Wei, et al., End-to-end semi-supervised object detection with soft teacher, in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2021), 3060–3069. https://doi.org/10.1109/ICCV48922.2021.00305 |
[9] | J. Zhang, X. Lin, W. Zhang, K. Wang, X. Tan, J. Han, et al., Semi-detr: Semi-supervised object detection with detection transformers, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2023), 23809–23818. https://doi.org/10.1109/CVPR52729.2023.02280 |
[10] | T. Sosea, C. Caragea, MarginMatch: Improving semi-supervised learning with pseudo-margins, in IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2023), 15773–15782. https://doi.org/10.1109/CVPR52729.2023.01514 |
[11] | Y. Wang, H. Chen, Q. Heng, W. Hou, Y. Fan, Z. Wu, et al., FreeMatch: Self-adaptive thresholding for semi-supervised learning, in The Eleventh International Conference on Learning Representations, 2023. |
[12] | K. Cao, M. Brbic, J. Leskovec, Open-world semi-supervised learning, preprint, arXiv: 2102.03526. https://doi.org/10.48550/arXiv.2102.03526 |
[13] | L. Guo, Y. Zhang, Z. Wu, J. Shao, Y. Li, Robust semi-supervised learning when not all classes have labels, Adv. Neural Inf. Process. Syst., 35 (2022), 3305–3317. |
[14] | A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, et al., Learning transferable visual models from natural language supervision, in International Conference on Machine Learning, 139 (2021), 8748–8763. |
[15] | D. H. Lee, Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks, in Workshop on Challenges in Representation Learning, ICML, 3 (2013), 896. |
[16] | P. Cascante-Bonilla, F. Tan, Y. Qi, V. Ordonez, Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning, in Proceedings of the AAAI Conference on Artificial Intelligence, 35 (2021), 6912–6920. https://doi.org/10.1609/aaai.v35i8.16852 |
[17] | J. Hu, C. Chen, L. Cao, S. Zhang, A. Shu, J. Jiang, et al., Pseudo-label alignment for semi-supervised instance segmentation, in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2023), 16337–16347. https://doi.org/10.1109/ICCV51070.2023.01497 |
[18] | J. Li, C. Xiong, S. C. Hoi, Comatch: Semi-supervised learning with contrastive graph regularization, in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2021), 9475–9484. https://doi.org/10.1109/ICCV48922.2021.00934 |
[19] | E. Arazo, D. Ortego, P. Albert, N. E. O'Connor, K. McGuinness, Pseudo-labeling and confirmation bias in deep semi-supervised learning, in Proceedings of the 2020 International Joint Conference on Neural Networks, (2020), 1–8. https://doi.org/10.1109/ijcnn48605.2020.9207304 |
[20] | S. Laine, T. Aila, Temporal ensembling for semi-supervised learning, preprint, arXiv: 1610.02242. https://doi.org/10.48550/arXiv.1610.02242 |
[21] | A. Tarvainen, H. Valpola, Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results, Adv. Neural Inf. Process. Syst., 30 (2017). |
[22] | Q. Xie, Z. Dai, E. Hovy, T. Luong, Q. Le, Unsupervised Data Augmentation for Consistency Training, Adv. Neural Inf. Process. Syst., 33 (2020), 6256–6268. |
[23] | Y. Fan, A. Kukleva, D. Dai, B. Schiele, Revisiting consistency regularization for semi-supervised learning, Int. J. Comput. Vision, 131 (2023), 626–643. https://doi.org/10.1007/s11263-022-01723-4 |
[24] | K. Sohn, D. Berthelot, N. Carlini, Z. Zhang, H. Zhang, C. A. Raffel, et al., Fixmatch: Simplifying semi-supervised learning with consistency and confidence, Adv. Neural Inf. Process. Syst., 33 (2020), 596–608. |
[25] | B. Zhang, Y. Wang, W. Hou, H. Wu, J. Wang, M. Okumura, et al., Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling, Adv. Neural Inf. Process. Syst., 34 (2021), 18408–18419. |
[26] | I. Nassar, M. Hayat, E. Abbasnejad, H. Rezatofighi, G. Haffari, Protocon: Pseudo-label refinement via online clustering and prototypical consistency for efficient semi-supervised learning, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2023), 11641–11650. https://doi.org/10.1109/CVPR52729.2023.01120 |
[27] | Y. Chen, X. Tan, B. Zhao, Z. Chen, R. Song, J. Liang, et al., Boosting semi-supervised learning by exploiting all unlabeled data, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2023), 7548–7557. https://doi.org/10.1109/CVPR52729.2023.00729 |
[28] | J. Park, S. Yun, J. Jeong, J. Shin, Opencos: Contrastive semi-supervised learning for handling open-set unlabeled data, in European Conference on Computer Vision, (2022), 134–149. https://doi.org/10.1007/978-3-031-25063-7_9 |
[29] | S. Mo, J. Su, C. Ma, M. Assran, I. Misra, L. Yu, et al., Ropaws: Robust semi-supervised representation learning from uncurated data, preprint, arXiv: 2302.14483. https://doi.org/10.48550/arXiv.2302.14483 |
[30] | T. Lucas, P. Weinzaepfel, G. Rogez, Barely-supervised learning: Semi-supervised learning with very few labeled images, in Thirty-Sixth AAAI Conference on Artificial Intelligence, (2022), 1881–1889. https://doi.org/10.1609/aaai.v36i2.20082 |
[31] | G. Gui, Z. Zhao, L. Qi, L. Zhou, L. Wang, Y. Shi, Improving barely supervised learning by discriminating unlabeled samples with super-class, Adv. Neural Inf. Process. Syst., 35 (2022), 19849–19860. |
[32] | Y. Sun, Y. Li, Opencon: Open-world contrastive learning, preprint, arXiv: 2208.02764. https://doi.org/10.48550/arXiv.2208.02764 |
[33] | M. N. Rizve, N. Kardan, M. Shah, Towards realistic semi-supervised learning, in European Conference on Computer Vision, (2022), 437–455. https://doi.org/10.1007/978-3-031-19821-2_25 |
[34] | Y. Wang, Z. Zhong, P. Qiao, X. Cheng, X. Zheng, C. Liu, et al., Discover and align taxonomic context priors for open-world semi-supervised learning, Adv. Neural Inf. Process. Syst., 36 (2024). |
[35] | S. Vaze, K. Han, A. Vedaldi, A. Zisserman, Generalized category discovery, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2022), 7492–7501. https://doi.org/10.1109/CVPR52688.2022.00734 |
[36] | X. Wen, B. Zhao, X. Qi, Parametric classification for generalized category discovery: A baseline study, in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2023), 16590–16600. https://doi.org/10.1109/ICCV51070.2023.01521 |
[37] | B. Zhao, X. Wen, K. Han, Learning semi-supervised gaussian mixture models for generalized category discovery, in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2023), 16623–16633. https://doi.org/10.1109/ICCV51070.2023.01524 |
[38] | K. Zhou, J. Yang, C. C. Loy, Z. Liu, Learning to prompt for vision-language models, Int. J. Comput. Vision, 130 (2022), 2337–2348. https://doi.org/10.1007/s11263-022-01653-1 |
[39] | K. Zhou, J. Yang, C. C. Loy, Z. Liu, Conditional prompt learning for vision-language models, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2022), 16816–16825. https://doi.org/10.1109/CVPR52688.2022.01631 |
[40] | P. Gao, S. Geng, R. Zhang, T. Ma, R. Fang, Y. Zhang, et al., Clip-adapter: Better vision-language models with feature adapters, Int. J. Comput. Vision, 132 (2024), 581–595. https://doi.org/10.1007/s11263-023-01891-x |
[41] | R. Zhang, R. Fang, W. Zhang, P. Gao, K. Li, J. Dai, et al., Tip-adapter: Training-free clip-adapter for better vision-language modeling, preprint, arXiv: 2111.03930. https://doi.org/10.48550/arXiv.2111.03930 |
[42] | A. Krizhevsky, Learning Multiple Layers of Features from Tiny Images, Master's thesis, University of Toronto, 2009. |
[43] | Y. Le, X. Yang, Tiny ImageNet Visual Recognition Challenge, CS 231N, 7 (2015), 3. |
[44] | L. Fei-Fei, R. Fergus, P. Perona, Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories, in 2004 Conference on Computer Vision and Pattern Recognition Workshop, (2004), 178–178. https://doi.org/10.1016/j.cviu.2005.09.012 |
[45] | L. Bossard, M. Guillaumin, L. V. Gool, Food-101-mining discriminative components with random forests, in ECCV 2014, (2014), 446–461. https://doi.org/10.1007/978-3-319-10599-4_29 |
[46] | I. Loshchilov, F. Hutter, Decoupled weight decay regularization, preprint, arXiv: 1711.05101. https://doi.org/10.48550/arXiv.1711.05101 |
[47] | E. D. Cubuk, B. Zoph, J. Shlens, Q. V. Le, Randaugment: Practical automated data augmentation with a reduced search space, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, (2020), 702–703. https://doi.org/10.1109/CVPRW50498.2020.00359 |
[48] | T. DeVries, Improved regularization of convolutional neural networks with cutout, preprint, arXiv: 1708.04552. https://doi.org/10.48550/arXiv.1708.04552 |
[49] | H. Wang, G. Pang, P. Wang, L. Zhang, W. Wei, Y. Zhang, Glocal energy-based learning for few-shot open-set recognition, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2023), 7507–7516. https://doi.org/10.1109/CVPR52729.2023.00725 |