FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning

Published in Winter Conference on Applications of Computer Vision (WACV), 2023

Deep learning has dramatically improved the quality of automatic visual recognition, both in terms of accuracy and scale. Current models discriminate between thousands of classes with an accuracy often close to that of human recognition, provided that sufficient training examples are available. Unlike humans, however, these algorithms reach optimal performance only if they are retrained with all data at once whenever new classes are learned. This is an important limitation because data often occur in sequences and their storage is costly. Iterative retraining to integrate new data is also computationally expensive and difficult in time- or computation-constrained applications. Incremental learning was introduced to reduce the memory and computational costs of machine learning algorithms.

The main problem faced by class-incremental learning (CIL) methods is catastrophic forgetting, the tendency of neural networks to underfit past classes when ingesting new data. Many recent solutions, based on deep networks, use replay from a bounded memory of the past to reduce forgetting. However, replay-based methods make a strong assumption, because past data are often unavailable. The footprint of the image memory can also be problematic for memory-constrained devices.

Exemplar-free class-incremental learning (EFCIL) methods have recently gained momentum. Most of them use distillation to preserve past knowledge and generally favor plasticity: new classes are well predicted since models are learned with all new data and only a representation of past data. A few EFCIL methods are inspired by transfer learning: they learn a feature extractor in the initial state and later use it as such to train new classifiers. In this case, stability is favored over plasticity since the model is frozen.

We introduce FeTrIL, a new EFCIL method which combines a frozen feature extractor and a pseudo-feature generator to improve incremental performance. New classes are represented by their image features obtained from the feature extractor. Past classes are represented by pseudo-features which are derived from features of new classes by using a geometric translation process. This translation moves features toward a region of the feature space which is relevant for past classes. The proposed pseudo-feature generation is well suited to EFCIL since it is simple, fast and only requires the storage of the centroids of past classes. We run experiments with a standard EFCIL setting, which consists of a larger initial state followed by smaller states which each include the same number of classes. Results show that the proposed approach outperforms ten existing methods, including very recent ones.
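The geometric translation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it assumes features come from a frozen extractor, and all names (`translate_features`, `feat_dim`, the random data) are hypothetical. A feature of a new-class sample is shifted by the difference between the stored past-class centroid and the new-class centroid, producing a pseudo-feature for the past class.

```python
import numpy as np

# Hypothetical setup: feature vectors as produced by a frozen extractor.
rng = np.random.default_rng(0)
feat_dim = 512

# Features of samples from a *new* class, and their centroid.
new_feats = rng.normal(size=(10, feat_dim))
new_centroid = new_feats.mean(axis=0)

# Stored centroid of a *past* class (the only per-class data kept).
past_centroid = rng.normal(size=feat_dim)

def translate_features(features, source_centroid, target_centroid):
    """Geometric translation: shift features so the cloud is centered
    on the target (past-class) centroid instead of the source one."""
    return features - source_centroid + target_centroid

pseudo_feats = translate_features(new_feats, new_centroid, past_centroid)

# The translated cloud is now centered on the past-class centroid.
assert np.allclose(pseudo_feats.mean(axis=0), past_centroid)
```

Such pseudo-features can then stand in for the unavailable past-class images when training a new classifier, which is what keeps the memory footprint limited to one centroid per past class.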

The detailed description of the method is available in the paper.

The code is available on GitHub.

You can find the poster of the paper here, and the presentation video below.

FeTrIL presentation

If you found this work useful for your research, please cite it as follows:

@InProceedings{Petit_2023_WACV,
    author    = {Petit, Gr\'egoire and Popescu, Adrian and Schindler, Hugo and Picard, David and Delezoide, Bertrand},
    title     = {FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {3911-3920}
}