Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer
Arjun Ashok
KJ Joseph
Vineeth N Balasubramanian

Accepted at ECCV 2022

[Paper]
[GitHub]

Abstract

In class-incremental learning, the model is expected to learn new classes continually while maintaining knowledge on previous classes. The challenge here lies in preserving the model's ability to effectively represent prior classes in the feature space, while adapting it to represent incoming new classes. We propose two distillation-based objectives for class incremental learning that leverage the structure of the feature space to maintain accuracy on previous classes, as well as enable learning the new classes. In our first objective, termed cross-space clustering (CSC), we propose to use the feature space structure of the previous model to characterize directions of optimization that maximally preserve the class: directions that all instances of a specific class should collectively optimize towards, and those that they should collectively optimize away from. Apart from minimizing forgetting, this indirectly encourages the model to cluster all instances of a class in the current feature space, and gives rise to a sense of herd immunity, allowing all samples of a class to jointly combat forgetting of that class. Our second objective, termed controlled transfer (CT), tackles incremental learning from an understudied perspective of inter-class transfer. CT explicitly approximates and conditions the current model on the semantic similarities between incrementally arriving classes and prior classes. This allows the model to learn classes in such a way that it maximizes positive forward transfer from similar prior classes, thus increasing plasticity, and minimizes negative backward transfer on dissimilar prior classes, thereby strengthening stability. We perform extensive experiments on two benchmark datasets, adding our method (CSCCT) on top of three prominent class-incremental learning methods. We observe consistent performance improvements across a variety of experimental settings.


Method

We propose two distillation-based objectives for class incremental learning that leverage the structure of the feature space to maintain accuracy on previous classes, as well as enable learning the new classes.

Cross-Space Clustering

Our Cross-Space Clustering (CSC) objective alleviates forgetting by distilling class-level semantics and inducing tight clusters in the feature space. CSC leverages points across the entire feature space of the previous model $F^T_{t-1}$ to identify regions that a class is optimized to stay within, and other harmful regions that it is prevented from drifting towards.
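The page only summarizes the objective, but the clustering idea can be illustrated with a minimal sketch: for each class in a batch, the mean of the *previous* model's features serves as a shared anchor, and every *current* feature of that class is pulled toward it via cosine similarity. The function name `csc_loss` and the NumPy formulation are illustrative assumptions, not the paper's exact implementation; see the official code for the real objective.

```python
import numpy as np

def csc_loss(prev_feats, curr_feats, labels):
    """Illustrative cross-space-clustering style loss (not the paper's exact form).

    prev_feats: features from the previous model F_{t-1}, shape (n, d)
    curr_feats: features from the current model F_t, shape (n, d)
    labels:     integer class labels, shape (n,)

    Each sample's current feature is pulled toward the class mean of the
    previous model's features. Because all samples of a class share one
    anchor, this both distills the old feature space and clusters the
    class in the new one.
    """
    loss, n = 0.0, len(labels)
    for c in np.unique(labels):
        idx = labels == c
        anchor = prev_feats[idx].mean(axis=0)                 # class anchor in the old space
        anchor = anchor / np.linalg.norm(anchor)
        cur = curr_feats[idx]
        cur = cur / np.linalg.norm(cur, axis=1, keepdims=True)
        loss += np.sum(1.0 - cur @ anchor)                    # 1 - cosine similarity to anchor
    return loss / n
```

When the current features already coincide with their class anchors the loss is zero; it grows as samples drift away from where their class lived in the previous model's space.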

Controlled Transfer

Our Controlled Transfer (CT) objective controls and regularizes transfer of features between classes at a fine-grained level. We formulate an objective that approximates the relative similarity between an incoming class and all previous classes, and conditions the current task's model on these estimated similarities. This encourages new classes to be situated optimally in the feature space, ensuring maximal positive transfer and minimal negative transfer.


Results

Results on CIFAR-100


Results on ImageNet-Subset


Paper

Arjun Ashok, K J Joseph, Vineeth N Balasubramanian.
Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer.
In ECCV 2022.

[Paper] [GitHub]


Bibtex

@article{ashok2022class,
  title={Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer},
  author={Ashok, Arjun and Joseph, KJ and Balasubramanian, Vineeth},
  journal={arXiv preprint arXiv:2208.03767},
  year={2022}
}



Acknowledgements

We are grateful to the Department of Science and Technology, India, as well as Intel India for the financial support of this project through the IMPRINT program (IMP/2019/000250) as well as the DST ICPS Data Science Cluster program. KJJ thanks TCS for their PhD Fellowship. We also thank the anonymous reviewers and Area Chairs for their valuable feedback in improving the presentation of this paper.

This template was originally made for a colorful ECCV project.