Dept. of Science & Engineering
Oregon Health & Science University
Learning in neural systems is the formation of memory through changes to the connections, and their strengths, in the circuits of the brain. Memory in the brain is fundamentally associative: a new piece of information can be recalled if it is associated with previously acquired knowledge. The more associations that exist, the more meaningful the new information is, and the more efficiently it can be stored and used later. Forming associative memory is one of the basic functions of the brain, creating extensive mappings of the world. In addition to associative processing, there is growing evidence that neural systems represent probabilistic data and perform inference over those data in a Bayesian manner. The Palm network is a simple associative memory that is easy to implement, behaves reasonably, is shown here to approximate Bayesian inference, and has some biological inspiration. This thesis discusses a computational model of associative memory based on the Palm network. The goal of the research is to investigate the computational characteristics of the model as a biologically plausible building block for a cognitive system, with the specific aim of implementing Bayesian inference in a distributed manner. We explore some of the computational capabilities provided by biological associative memory. We first study the model's performance in very large networks, examining its stability, fault tolerance, and suitability for parallel processing. We propose a theoretical interpretation of the model's basic operation, in which the Palm network is functionally equivalent to a Voronoi classifier. Under dynamic learning, where training vectors have different prior probabilities, weighted Voronoi regions are formed.
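The basic operation of a Palm-style network can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it assumes sparse binary patterns with exactly k active units, clipped Hebbian (Willshaw-style) learning, and recall by thresholding the dendritic sums at the number of active input bits. All sizes and patterns below are invented for the example.

```python
import numpy as np

n, k = 16, 3  # pattern length and number of active bits (assumed)

def encode(active, n=n):
    """Binary vector with the given set of active indices."""
    v = np.zeros(n, dtype=int)
    v[list(active)] = 1
    return v

# Two hetero-associative training pairs (input pattern -> output pattern).
pairs = [
    (encode({0, 1, 2}), encode({3, 4, 5})),
    (encode({6, 7, 8}), encode({9, 10, 11})),
]

# Clipped Hebbian learning: W[i, j] = 1 iff some stored pair has output
# bit i and input bit j simultaneously active (outer products OR-ed).
W = np.zeros((n, n), dtype=int)
for x, y in pairs:
    W |= np.outer(y, x)

def recall(x):
    s = W @ x                          # dendritic sums
    return (s >= x.sum()).astype(int)  # threshold at the k active inputs
```

Presenting a stored input pattern drives exactly the units of its associated output pattern to the threshold, so `recall(pairs[0][0])` reproduces the stored output.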
We then show the capability of such networks to learn complex non-linear functions in the context of reinforcement learning, where adaptive mappings from the outside world to internal neural activity are formed through interaction with the environment. Because a relatively large network is plausible for such a model, a spiking-neuron representation of the network is discussed to investigate its temporal learning. Building on the Voronoi classification analysis, we propose a Bayesian Memory model that provides a maximum-entropy data reduction from an input representation to an output representation. With a network of hierarchically connected Bayesian Memories, scalability is significantly enhanced, and the resulting bidirectional information flow allows high-level information to be incorporated into the decision process. With these characteristics, it is possible to build hierarchically connected associative memories that mimic a cognitive system. Although associative memory and Bayesian inference are not new concepts, their use as the general basis for modular, scalable associative memory has not been addressed in the current literature. The main contributions of the thesis are twofold: we establish the theoretical basis of the Palm network as a practical associative memory, and, building on the simple concept of Vector Quantization, we propose the Bayesian Memory, which incorporates the necessary features of an associative memory and, as a building block, provides a way to construct large associative networks on the scale of a cognitive system.
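The link between Vector Quantization and Bayesian inference can be illustrated with a nearest-prototype classifier whose Voronoi regions are weighted by prior probabilities. This is a hedged sketch of the general idea, not the thesis's Bayesian Memory: the prototypes, priors, and Gaussian noise variance below are all assumptions chosen for the example. Under those assumptions, picking the prototype that maximizes log-prior minus scaled squared distance is exactly MAP classification, and unequal priors shift the Voronoi boundary toward the less probable class.

```python
import numpy as np

prototypes = np.array([[0.0, 0.0],   # stored training vectors (assumed)
                       [4.0, 0.0]])
log_priors = np.log(np.array([0.9, 0.1]))  # unequal prior probabilities
sigma2 = 1.0                               # assumed isotropic noise variance

def map_classify(x):
    """MAP choice: argmax_c [ log p(c) - ||x - mu_c||^2 / (2 sigma^2) ].

    With equal priors this reduces to plain nearest-prototype (Voronoi)
    classification; unequal priors produce weighted Voronoi regions.
    """
    scores = log_priors - np.sum((prototypes - x) ** 2, axis=1) / (2 * sigma2)
    return int(np.argmax(scores))
```

With equal priors the boundary would sit midway at x = 2; with priors 0.9 and 0.1 it moves to about x = 2.55, so the point (2.2, 0), although nearer to the second prototype, is still assigned to the first class.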
Div. of Biomedical Computer Science
School of Medicine
Zhu, Shaojuan, "Associative memory as a Bayesian building block" (2008). Scholar Archive. 335.