Continual-Learning-Benchmark

Evaluate three types of task shifting with popular continual learning algorithms.

This repository implements and modularizes the following algorithms in PyTorch (a minimal sketch of the shared regularization idea follows the list):

  • EWC: code, paper (Overcoming catastrophic forgetting in neural networks)
  • Online EWC: code, paper
  • SI: code, paper (Continual Learning Through Synaptic Intelligence)
  • MAS: code, paper (Memory Aware Synapses: Learning what (not) to forget)
  • GEM: code, paper (Gradient Episodic Memory for Continual Learning)
  • (More are coming)
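
Most of the regularization-based methods above (EWC, Online EWC, SI, MAS) share the same core idea: a quadratic penalty that keeps parameters that were important for previous tasks close to their previously learned values. The snippet below is a minimal conceptual sketch of that idea in PyTorch, not the repository's implementation; quadratic_importance_penalty, anchor_params, and importance are placeholder names used only for illustration.

import torch

def quadratic_importance_penalty(model, anchor_params, importance, reg_coef):
    # anchor_params: dict of name -> parameter values saved after the previous task
    # importance:    dict of name -> per-parameter importance weights
    #                (e.g. Fisher information for EWC, path integral for SI,
    #                 output sensitivity for MAS)
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in importance:
            penalty = penalty + (importance[name] * (param - anchor_params[name]) ** 2).sum()
    return reg_coef * penalty

# During training on a new task:
#   loss = task_loss + quadratic_importance_penalty(model, anchor_params, importance, reg_coef)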

All of the above algorithms are compared against baselines with the same static memory overhead; the key result tables are reported in the paper cited below.

If this repository helps your work, please cite:

@inproceedings{Hsu18_EvalCL,
  title={Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines},
  author={Yen-Chang Hsu and Yen-Cheng Liu and Anita Ramasamy and Zsolt Kira},
  booktitle={NeurIPS Continual Learning Workshop},
  year={2018},
  url={https://arxiv.org/abs/1810.12488}
}

Preparation

This repository was tested with Python 3.6 and PyTorch 1.0.1.post2. Some of the cases have also been tested with PyTorch 1.5.1 and give the same results.

pip install -r requirements.txt
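
To quickly check that your environment matches the tested versions, you can print them (a trivial sketch; the repository itself does not require this step):

import sys
import torch

print(sys.version.split()[0])  # tested with Python 3.6
print(torch.__version__)       # tested with 1.0.1.post2 (some cases also with 1.5.1)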

Demo

The scripts for reproducing the results of this paper are under the scripts folder.

  • Example: Run all algorithms in the incremental domain scenario with split MNIST.
./scripts/split_MNIST_incremental_domain.sh 0
# The last number is gpuid
# Outputs will be saved in ./outputs
  • Example outputs: Summary of repeats
===Summary of experiment repeats: 3 / 3 ===
The regularization coefficient: 400.0
The last avg acc of all repeats: [90.517 90.648 91.069]
mean: 90.74466666666666 std: 0.23549144829955856
  • Example outputs: The grid search for the regularization coefficient
reg_coef: 0.1 mean: 76.08566666666667 std: 1.097717733400629
reg_coef: 1.0 mean: 77.59100000000001 std: 2.100847606721314
reg_coef: 10.0 mean: 84.33933333333334 std: 0.3592671553160509
reg_coef: 100.0 mean: 90.83800000000001 std: 0.6913701372395712
reg_coef: 1000.0 mean: 87.48566666666666 std: 0.5440161353816179
reg_coef: 5000.0 mean: 68.99133333333333 std: 1.6824762174313899
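
The summary statistics above are simply the mean and (population) standard deviation over the per-repeat final average accuracies. For example, the numbers from the three-repeat summary can be reproduced with NumPy (a standalone sketch, not the repository's code):

import numpy as np

# Final average accuracy of each repeat, copied from the summary above
last_avg_acc = np.array([90.517, 90.648, 91.069])
print("mean:", last_avg_acc.mean())  # 90.74466666666666
print("std:", last_avg_acc.std())    # 0.23549... (population std, ddof=0)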

Usage

  • Enable the grid search for the regularization coefficient: use the option with a list of values, e.g. -reg_coef 0.1 1 10 100 ...
  • Repeat the experiment N times: use the option -repeat N (a combined example is sketched after the help command below)

Look up the available options:

python iBatchLearn.py -h
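
For example, the two options above can be combined in one run (flag names are taken from the examples in this README; any other required arguments, such as the dataset and scenario options, are listed by the help command):

python iBatchLearn.py -reg_coef 0.1 1 10 100 -repeat 3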

Other results

Results on CIFAR100 are also reported; please refer to the scripts for details.
