Training a neural network on newly available data leads to catastrophic forgetting of previously learned information. The naive solution of retraining the network on the entire combined set of old and new data is costly and slow, and it is not always feasible when access to the old data is restricted. Various strategies have been proposed to counter catastrophic forgetting, among them Generative Replay, in which a second, generative model is trained alongside the discriminative model to learn the distribution of the training data. When new data becomes available, the generator produces samples resembling the old data set, and training of the network continues on the combination of the new data and this generated replay data. In this thesis, we implement this method and contribute it to the open-source continual learning library Avalanche. We then compare several variants of Generative Replay on the common SplitMNIST benchmark to better understand under which conditions the method works best. We argue that benchmarks like these do not necessarily correspond to real-life settings, and we propose a new scenario to address this issue. Evaluating several strategies on the new scenario, we find that Generative Replay outperforms the state-of-the-art method iCaRL. However, we also find that Generative Replay is not easy to use, as it requires knowledge of the underlying scenario to be tuned properly.
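
The following is a minimal sketch of how the contributed strategy could be driven from Avalanche on SplitMNIST. It assumes the strategy is exposed as avalanche.training.supervised.GenerativeReplay and that the helpers SimpleMLP and SplitMNIST are available; exact module paths, constructor arguments, and the default generator (a VAE when none is supplied) may differ between Avalanche versions, so this is illustrative rather than a definitive usage.

```python
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import Adam

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import GenerativeReplay  # assumed module path

# SplitMNIST with 5 experiences (two digit classes per experience).
benchmark = SplitMNIST(n_experiences=5)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SimpleMLP(num_classes=10)

# When no generator_strategy is passed, the strategy is assumed to fall back
# to a default generative model trained alongside the classifier.
strategy = GenerativeReplay(
    model,
    Adam(model.parameters(), lr=1e-3),
    CrossEntropyLoss(),
    train_mb_size=128,
    train_epochs=4,
    eval_mb_size=128,
    device=device,
)

# Train on each experience in turn; samples drawn from the generator are
# mixed into the minibatches of later experiences as replay data.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```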