Overcoming Catastrophic Forgetting with Context-Dependent Activations (XdA) and Synaptic Stabilization
Abstract
Overcoming catastrophic forgetting in neural networks is crucial for solving continual learning problems. Deep reinforcement learning uses neural networks to predict actions from the current state of an environment. In dynamic environments, robust and adaptive lifelong learning algorithms are the cornerstone of success. In this thesis we examine a selection of algorithms that counter catastrophic forgetting in neural networks and reflect on their strengths and weaknesses. Furthermore, we present an enhanced alternative to promising synaptic stabilization methods such as Elastic Weight Consolidation and Synaptic Intelligence. Our method uses context-based information to switch between different pathways through the neural network, reducing destructive activation interference during the forward pass and destructive weight updates during the backward pass. We call this method Context-Dependent Activations (XdA). We show that XdA-enhanced methods outperform plain synaptic stabilization methods and are the better choice for long task sequences.
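The core idea of switching between context-dependent pathways can be illustrated with a minimal sketch. The details of XdA are developed later in the thesis; here we only assume, for illustration, that each context (task) is assigned a fixed binary mask over the hidden units, so that each context activates its own sub-network. All function names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_context_masks(n_contexts, hidden_dim, keep_prob=0.5):
    # One fixed binary mask per context, sampled once and reused:
    # each context activates only its own subset of hidden units.
    return (rng.random((n_contexts, hidden_dim)) < keep_prob).astype(np.float32)

def forward(x, W1, W2, mask):
    # Gate the hidden layer with the context mask: units outside the
    # active pathway output zero, so their outgoing weights receive
    # no gradient for this context during the backward pass.
    h = np.maximum(x @ W1, 0.0) * mask
    return h @ W2

hidden_dim = 8
masks = make_context_masks(n_contexts=3, hidden_dim=hidden_dim)
W1 = rng.standard_normal((4, hidden_dim))
W2 = rng.standard_normal((hidden_dim, 2))
x = rng.standard_normal((1, 4))

y0 = forward(x, W1, W2, masks[0])  # context 0 uses its own pathway
y1 = forward(x, W1, W2, masks[1])  # context 1 uses a different pathway
```

Because disjoint parts of each mask differ across contexts, the same input is routed through partially different sub-networks, which is the intuition behind reduced interference between tasks.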