Authors: Andrew Murray and Jacob Matz
Institution: Dalhousie University
Date: April 2010
Abstract
Manipulation of learning processes in the brain has proven experimentally challenging. Various studies have focused on specific components of Long-Term Potentiation (LTP), attempting to regulate learning systematically by manipulating its proposed elements; manipulating such elements is thought to produce a proportional change in the efficacy of the whole system. Given the inadequate results of this approach, we hypothesized that changing only a fraction of the synapses in a network represents an incomplete investigation of LTP function. We further hypothesized that, in order to successfully manipulate learning output, the whole system must be manipulated, whether it is a single synapse or a neural network. We chose a relatively simple and easily applied model, the perceptron, to simulate the effect of strengthening synaptic connections between neurons, as occurs in learning. We expected that, following the learning phase, systematic changes to a fraction of the synaptic weights would jeopardize the perceptron's ability to recall the learned patterns. Indeed, fractional changes to the weight distribution caused more errors in pattern retrieval. However, when the entire distribution was altered, error in pattern retrieval did not increase.
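The simulation described above can be sketched in a few lines of code. The following minimal example is an illustration rather than the authors' original simulation; the network size, number of patterns, perturbation fraction, and scaling factor are all assumed values. It trains a single-layer perceptron on random binary patterns, then compares recall error after rescaling a random fraction of the weights with recall error after rescaling the entire weight vector by the same factor.

```python
# Minimal sketch (assumed parameters, not the authors' code): train a perceptron
# on random patterns, then perturb a fraction of the weights vs. the whole set.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_patterns = 100, 40
X = rng.choice([-1.0, 1.0], size=(n_patterns, n_inputs))  # input patterns
t = rng.choice([-1.0, 1.0], size=n_patterns)               # target outputs

# Perceptron learning rule: update weights only on misclassified patterns.
w = np.zeros(n_inputs)
for _ in range(200):
    for x, target in zip(X, t):
        if np.sign(x @ w) != target:
            w += target * x

def recall_error(weights):
    """Fraction of learned patterns that are no longer retrieved correctly."""
    return np.mean(np.sign(X @ weights) != t)

# (a) Alter only a fraction of the weights (here 20%, doubled in magnitude).
w_frac = w.copy()
idx = rng.choice(n_inputs, size=n_inputs // 5, replace=False)
w_frac[idx] *= 2.0

# (b) Alter the entire weight distribution by the same factor.
w_all = w * 2.0

print("baseline error:            ", recall_error(w))
print("fractional change error:   ", recall_error(w_frac))
print("whole-distribution change: ", recall_error(w_all))
```

Because the perceptron's output depends only on the sign of the weighted input sum, multiplying every weight by the same positive factor cannot change which patterns are retrieved, whereas rescaling only a subset of weights can shift the decision boundary and move some learned patterns across it.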