APPLICATIONS OF ERROR BACK-PROPAGATION TO PHONETIC CLASSIFICATION

Hong C. Leung & Victor W. Zue
Spoken Language Systems Group
Laboratory for Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139

ABSTRACT

This paper is concerned with the use of error back-propagation in phonetic classification. Our objective is to investigate the basic characteristics of back-propagation, and to study how the framework of multi-layer perceptrons can be exploited in phonetic recognition. We explore issues such as integration of heterogeneous sources of information, conditions that can affect the performance of phonetic classification, internal representations, comparisons with traditional pattern classification techniques, comparisons of different error metrics, and initialization of the network. Our investigation is performed within a set of experiments that attempts to recognize the 16 vowels in American English independent of speaker. Our results are comparable to human performance.

Early approaches in phonetic recognition fall into two major extremes: heuristic and algorithmic. Both approaches have their own merits and shortcomings. The heuristic approach has the intuitive appeal that it focuses on the linguistic information in the speech signal and exploits acoustic-phonetic knowledge. However, the weak control strategy used for utilizing our knowledge has been grossly inadequate. At the other extreme, the algorithmic approach relies primarily on the powerful control strategy offered by well-formulated pattern recognition techniques. However, relatively little is known about how our speech knowledge accumulated over the past few decades can be incorporated into the well-formulated algorithms. We feel that artificial neural networks (ANN) have some characteristics that can potentially enable them to bridge the gap between these two extremes. On the one hand, our speech knowledge can provide guidance to the structure and design of the network.
On the other hand, the self-organizing mechanism of ANN can provide a control strategy for utilizing our knowledge. In this paper, we extend our earlier work on the use of artificial neural networks for phonetic recognition [2]. Specifically, we focus our investigation on the following sets of issues. First, we describe the use of the network to integrate heterogeneous sources of information. We will see how classification performance improves as more information is available. Second, we discuss several important factors that can substantially affect the performance of phonetic classification. Third, we examine the internal representation of the network. Fourth, we compare the network with two traditional classification techniques: K-nearest neighbor and Gaussian classification. Finally, we discuss our specific implementations of back-propagation that yield improved performance and more efficient learning time.

EXPERIMENTS

Our investigation is performed within the context of a set of experiments that attempts to recognize the 16 vowels in American English independent of speaker. The vowels are excised from continuous speech and can be preceded and followed by any phonemes, thus providing a rich environment to study contextual influence. We assume that the locations of the vowels have been detected. Given a time region, the network determines which one of the 16 vowels was spoken.

CORPUS

As Table 1 shows, our training set consists of 20,000 vowel tokens, excised from 2,500 continuous sentences spoken by 500 male and female speakers. The test set consists of about 2,000 vowel tokens, excised from 250 sentences spoken by 50 different speakers. All the data are extracted from the TIMIT database, which has a wide range of American dialectical variations [1]. The speech signal is represented by spectral vectors obtained from an auditory model [4]. Speaker and energy normalization are also performed [5].
            Tokens    Sentences    Speakers (M/F)
Training    20,000    2,500        500 (350/150)
Testing      2,000      250         50 (33/17)

Table 1: Corpus extracted from the TIMIT database.

NETWORK STRUCTURE

The structure of the network we have examined most extensively has 1 hidden layer, as shown in Figure 1. It has 16 output units, with one unit for each of the 16 vowels. In order to capture dynamic information, the vowel region is divided into three equal subregions. An average spectrum is then computed in each subregion. These 3 average spectra are then applied to the first 3 sets of input units. Additional sources of information, such as duration and local phonetic contexts, can also be made available to the network. While spectral and durational inputs are continuous and numerical, the contextual inputs are discrete and symbolic.

Figure 1: Basic structure of the network. The input is the output of an auditory model (synchrony spectrogram).

HETEROGENEOUS INFORMATION INTEGRATION

In our earlier study, we examined the integration of the Synchrony Envelopes and the phonetic contexts [2]. The Synchrony Envelopes, an output of the auditory model, have been shown to enhance the formant information. In this study, we add additional sources of information. Figure 2 shows the performance as heterogeneous sources of information are made available to the network. The performance is about 60% when only the Synchrony Envelopes are available. The performance improves to 64% when the Mean Rate Response, a different output of the auditory model which has been shown to enhance the temporal aspects of the speech signal, is also available. We can also see that the performance improves consistently to 77% as durational and contextual inputs are provided to the network. This experiment suggests that the network is able to make use of heterogeneous sources of information, which can be numerical and/or symbolic.
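The input construction above (three average spectra over equal subregions, plus a duration value and symbolic phonetic contexts) can be sketched as follows. The dimensions, the one-hot context coding, and all names here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def build_input_vector(spectra, duration, left_ctx, right_ctx, n_contexts=40):
    """Form one network input: 3 average spectra over equal thirds of the
    vowel region, plus a duration value and one-hot left/right contexts.
    spectra: (n_frames, n_channels) spectral vectors from the auditory model."""
    thirds = np.array_split(spectra, 3, axis=0)            # 3 equal subregions
    avg = np.concatenate([t.mean(axis=0) for t in thirds]) # 3 average spectra
    ctx = np.zeros(2 * n_contexts)
    ctx[left_ctx] = 1.0                                    # preceding phoneme
    ctx[n_contexts + right_ctx] = 1.0                      # following phoneme
    return np.concatenate([avg, [duration], ctx])

# 30 frames of a 40-channel spectrogram, 120 ms duration, contexts 3 and 17
x = build_input_vector(np.random.rand(30, 40), 0.12, 3, 17)
```

This mixes the continuous spectral/durational inputs and the discrete symbolic contexts into a single vector, as the network requires.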
One may ask how well human listeners can recognize the vowels. Experiments have been performed to study how well human listeners agree with each other when they can only listen to sequences of 3 phonemes, i.e. the phoneme before the vowel, the vowel itself, and the phoneme after the vowel [3]. Results indicate that the average agreement among the listeners on the identities of the vowels is between 65% and 70%.

Figure 2: Integration of heterogeneous sources of information (Synchrony Envelopes; adding the Mean Rate Response; adding duration; adding phonetic context).

PERFORMANCE RESULTS

We have seen that one of the important factors for the network performance is the amount of information available to the network. To gain additional insights about how the network performs under different conditions, several experiments were conducted using different databases. In these and the subsequent experiments we describe in this paper, only the Synchrony Envelopes are available to the network. Table 2 shows the performance results for several recognition tasks. In each of these tasks, the network is trained and tested with independent sets of speech data. The first task recognizes vowels spoken by one speaker and excised from the /b/-vowel-/t/ environment, spoken in isolation. This recognition task is relatively straightforward, resulting in perfect performance. In the second experiment, vowel tokens are extracted from the same phonetic context, but spoken by 17 male and female speakers. Due to inter-speaker variability, the accuracy degrades to 86%. The third task recognizes vowels spoken by one speaker and excised from an unrestricted context, spoken continuously. We can see that the accuracy decreases further to 70%. Finally, data from the TIMIT database are used, spoken by multiple speakers. The accuracy drops to 60%.
These results indicate that a substantial difference in performance can be expected under different conditions, depending on whether the task is speaker-independent, what the restriction on the phonetic contexts is, whether the speech material is spoken continuously, and how much data are used to train the network.

Speakers (M/F)    Context    Training Tokens    Percent Correct    Remark
1 (1/0)           b t                 64               100         isolated
17 (8/9)          b t                256                86         isolated
1 (1/0)           * *              3,000                70         continuous
500 (350/150)     * *             20,000                60         continuous

Table 2: Performance for different tasks, using only the synchrony spectral information. "*" stands for any phonetic context.

INTERNAL REPRESENTATION

To understand how the network makes use of the input information, we examined the connection weights of the network. A vector is formed by extracting the connections from all the hidden units to one output unit, as shown in Figure 3a. The same process is repeated for all output units to obtain a total of 16 vectors. The correlations among these vectors are then examined by measuring the inner products or the angles between them. Figure 3b shows the distribution of the angles after the network is trained, as a function of the number of hidden units. The circles represent the mean of the distribution and the vertical bars stand for one standard deviation away from the mean. As the number of hidden units increases, the distribution becomes more and more concentrated and the vectors become increasingly orthogonal to each other. The correlations of the connection weights before training were also examined, as shown in Figure 3c. Comparing parts (b) and (c) of Figure 3, we can see that the distributions before and after training overlap more and more as the number of hidden units increases. With 128 hidden units, the two distributions are actually quite similar.
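The angle measurement just described can be sketched as follows (a minimal sketch; the weight-matrix layout and names are assumptions): normalize each hidden-to-output weight vector and take pairwise angles from the inner products.

```python
import numpy as np

def angle_stats(W):
    """W: (n_hidden, n_outputs) weights; column j holds the connections from
    all hidden units to output unit j. Returns mean/std of pairwise angles (deg)."""
    V = W / np.linalg.norm(W, axis=0)          # unit-length column vectors
    cosines = np.clip(V.T @ V, -1.0, 1.0)      # all pairwise inner products
    iu = np.triu_indices(W.shape[1], k=1)      # each unordered pair once
    angles = np.degrees(np.arccos(cosines[iu]))
    return angles.mean(), angles.std()

# With many hidden units, random vectors are already nearly orthogonal,
# consistent with the before/after overlap observed at 128 hidden units.
mean_angle, std_angle = angle_stats(np.random.randn(128, 16))
```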
This leads us to suspect that perhaps the connection weights between the hidden and the output layer need not be trained if we have a sufficient number of hidden units. Figure 4a shows the performance of recognizing the 16 vowels using three different techniques: (i) train all the connections in the network, (ii) fix the connections between the hidden and output layers after random initialization and train only the connections between the input and hidden layers, and (iii) fix the connections between the input and hidden layers and train only the connections between the hidden and output layers. We can see that with enough hidden units, training only the connections between the input and the hidden layers achieves almost the same performance as training all the connections in the network. We can also see that for the same number of hidden units, training only the connections between the input and the hidden layer can achieve higher performance than training only the connections between the hidden and the output layer. Figure 4b compares the three training techniques for 8 vowels, resulting in 8 output units only. We can see similar characteristics in both parts (a) and (b) of Figure 4.

Figure 3: (a) Correlations of the vectors from the hidden to output layers are examined. (b) Distribution of the angles between these vectors after training. (c) Distribution of the angles between these vectors before training.

COMPARISONS WITH TRADITIONAL TECHNIQUES

One of the appealing characteristics of back-propagation is that it does not assume any probability distributions or distance metrics.
To gain further insights, we compare with two traditional pattern classification techniques: K-nearest neighbor (KNN) and multi-dimensional Gaussian classifiers.

Figure 4: Performance of recognizing (a) 16 vowels, (b) 8 vowels when (i) all the connections in the network are trained, (ii) only the connections between the input and hidden layers are trained, and (iii) only the connections between the hidden and output layers are trained.

Figure 5a compares the performance results of the network with those of KNN, for different amounts of training tokens. Again, only the Synchrony Envelopes are made available to the network, resulting in input vectors of 100 dimensions. Each cluster of crosses corresponds to performance results of ten networks, each one randomly initialized differently. Due to different initialization, a fluctuation of 2% to 3% is observed even for the same training size. For comparison, we perform KNN using the Euclidean distance metric. For each training size, we run KNN 6 times, each one with a different K, which is chosen to be proportional to the square root of the number of training tokens, N. For simplicity, Figure 5a shows results for only 3 different values of K: (i) K = √N, (ii) K = 10√N, and (iii) K = 1. In this experiment, we have found that the performance is the best when K = √N and is the worst when K = 1. We have also found that up to 20,000 training tokens, the network consistently compares favorably to KNN. It is possible that the network is able to find its own distance metric to achieve better performance.
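The KNN baseline (Euclidean metric, K tied to √N) can be sketched as follows; the toy two-class data and all names are illustrative assumptions, not the paper's corpus.

```python
import numpy as np
from collections import Counter

def knn_classify(train_x, train_y, x, k):
    """Majority vote among the k training tokens nearest to x (Euclidean)."""
    dists = np.linalg.norm(train_x - x, axis=1)
    neighbors = train_y[np.argsort(dists)[:k]]
    return Counter(neighbors.tolist()).most_common(1)[0][0]

rng = np.random.default_rng(0)
N = 400
labels = np.arange(N) % 2
# two unit-variance clusters; class 1 is shifted to be centered at (2, 0)
points = rng.normal(size=(N, 2)) + np.array([2.0, 0.0]) * labels[:, None]
k = int(np.sqrt(N))                      # K = sqrt(N), the best setting in the text
pred = knn_classify(points, labels, np.array([2.0, 0.0]), k)
```

With K = 1 the vote reduces to the single nearest token, which is the noisiest setting, matching the observation above.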
Since the true underlying probability distribution is unknown, we assume a multidimensional Gaussian distribution in the second experiment. (i) We use the full covariance matrix, which has 100×100 elements. To avoid problems with singularity, we obtain results only for large numbers of training tokens. (ii) We use the diagonal covariance matrix, which has non-zero elements only along the diagonal. We can see from Figure 5b that the network compares favorably to the Gaussian classifiers. Our results also suggest that the Gaussian assumption is invalid.

Figure 5: (a) Comparison with KNN for different values of K (see text). (b) Comparison with Gaussian classification when using the (i) full covariance matrix, and (ii) diagonal covariance matrix. Each cluster of 10 crosses corresponds to the results of 10 different networks, each one randomly initialized.

ERROR METRIC AND INITIALIZATION

In order to take into account the classification performance of the network more explicitly, we have introduced a weighted mean square error metric [2]. By modulating the mean square error with weighting factors that depend on the classification performance, we have shown that the rank order statistics can be improved. Like simulated annealing, gradient descent takes relatively big steps when the performance is poor, and takes smaller and smaller steps as the performance of the network improves. Results also indicate that it is more likely for a unit output to be initially in the saturation regions of the sigmoid function if the network is randomly initialized.
This is not desirable, since learning is slow when a unit output is in a saturation region. Let the sigmoid function go from -1 to 1. If the connection weights between the input and the hidden layers are initialized with zero weights, then all the hidden unit outputs in the network will initially be zero, which in turn results in zero output values for all the output units. In other words, all the units will initially operate at the center of the transition region of the sigmoid function, where learning is the fastest. We call this method center initialization (CI). Parts (a) and (b) of Figure 6 compare the learning speed and performance, respectively, of the 3 different techniques: (i) mean square error (MSE), (ii) weighted mean square error (WMSE), and (iii) center initialization (CI) with WMSE. We can see that both WMSE and CI seem to be effective in improving the learning time and the performance of the network.

Figure 6: Comparisons of the (a) learning characteristics and (b) performance results, for the 3 different techniques: (i) MSE, (ii) WMSE, and (iii) CI with WMSE. Each point corresponds to the average of 10 different networks, each one initialized randomly.

SUMMARY

In summary, we have described a set of experiments that were designed to help us gain a better understanding of the use of back-propagation in phonetic classification. Our results are encouraging, and we are hopeful that artificial neural networks may provide an effective framework for utilizing our acoustic-phonetic knowledge in speech recognition.

References

[1] Fisher, W.E., Doddington, G.R., and Goudie-Marshall, K.M., "The DARPA Speech Recognition Research Database: Specifications and Status," Proceedings of the DARPA Speech Recognition Workshop, Report No. SAIC-86/1546, February 1986.
[2] Leung, H.C., "Some phonetic recognition experiments using artificial neural nets," Proc. ICASSP-88, 1988.
[3] Phillips, M.S., "Speaker independent classification of vowels and diphthongs in continuous speech," Proc. of the 11th International Congress of Phonetic Sciences, Estonia, USSR, 1987.
[4] Seneff, S., "A computational model for the peripheral auditory system: application to speech recognition research," Proc. ICASSP, Tokyo, 1986.
[5] Seneff, S., "Vowel recognition based on 'line-formants' derived from an auditory-based spectral representation," Proc. of the 11th International Congress of Phonetic Sciences, Estonia, USSR, 1987.
MODELING THE OLFACTORY BULB - COUPLED NONLINEAR OSCILLATORS

Zhaoping Li†    J. J. Hopfield*

†Division of Physics, Mathematics and Astronomy
*Division of Biology, and Division of Chemistry and Chemical Engineering
†*California Institute of Technology, Pasadena, CA 91125, USA
*AT&T Bell Laboratories

ABSTRACT

The olfactory bulb of mammals aids in the discrimination of odors. A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations produce a 35-60 Hz modulated activity coherent across the bulb, mimicking the observed field potentials. The decision states (for the odor information) here can be thought of as stable cycles, rather than the point stable states typical of simpler neuro-computing models. Analysis and simulations show that a group of coupled non-linear oscillators is responsible for the oscillatory activities determined by the odor input, and that the bulb, with appropriate inputs from higher centers, can enhance or suppress the sensitivity to particular odors. The model provides a framework in which to understand the transform between odor input and the bulbar output to olfactory cortex.

1. INTRODUCTION

The olfactory system has a simple cortical intrinsic structure (Shepherd 1979), and thus is an ideal candidate to yield insight on the principles of sensory information processing. It includes the receptor cells, the olfactory bulb, and the olfactory cortex receiving inputs from the bulb (Figure 1). Both the bulb and the cortex exhibit similar 35-90 Hz rhythmic population activity modulated by breathing. Efforts have been made to model the bulbar information processing function (Freeman 1979b, 1979c; Freeman and Schneider 1982; Freeman and Skarda 1985; Baird 1986; Skarda and Freeman 1987), which is still unclear (Scott 1986).
The bulbar position in the olfactory pathway, and the linkage of the oscillatory activity with the sniff cycles, suggest that the bulb and the oscillation play important roles in olfactory information processing. We will examine how the bulbar oscillation pattern, which can be thought of as the decision state about odor information, originates and how it depends on the input odor. We then show that with appropriate inputs from the higher centers, the bulb can suppress or enhance its sensitivity to particular odors. Many more details of our work are described in two other papers (Li and Hopfield 1988a, 1988b).

The olfactory bulb has mainly the excitatory mitral and the inhibitory granule cells, located on different parallel laminae. Odor receptors effectively synapse on the mitral cells, which interact locally with the granule cells and carry the bulbar outputs (Fig 1; Shepherd 1979). A rabbit has about 50,000 mitral and ~10,000,000 granule cells (Shepherd 1979). With short odor pulses, the receptor firing rate increases in time, and terminates quickly after the odor pulse terminates (Getchell and Shepherd 1978). Most inputs from higher brain centers are directed to the granule cells, and little is known about them. The surface EEG wave (generated by granule activities; Freeman 1978; Freeman and Schneider 1982), depending on odor stimulations and animal motivation, shows a high amplitude oscillation arising during the inhalation and stopping early in the exhalation. The oscillation is an intrinsic property of the bulb itself, and is influenced by central inputs (Freeman 1979a; Freeman and Skarda 1985). It has a peak frequency (which is the same across the bulb) in the range of 35-90 Hz, and rides on a slow background wave phase locked with the respiratory wave.

2. MODEL ORGANIZATION

For simplicity, we only include (N excitatory) mitral and (M inhibitory) granule cells in the model.
The receptor input I is I_i = I_odor,i + I_background,i, for i = 1, ..., N, a superposition of an odor signal I_odor and a background input I_background. I_odor > 0 increases in time during inhalation, and returns exponentially during exhalation toward the ambient. The central input to the granule cells is the vector I_c with components I_c,j for 1 ≤ j ≤ M. For now, it is assumed that I_c = 0.1 and I_background = 0.243 do not change during a sniff (Li and Hopfield 1988a). Each cell is one unit with its internal state level described by a single variable, and its output a continuous function of the internal state level. The internal states and outputs are respectively X = {x_1, x_2, ..., x_N} and G_x(X) = {g_x(x_1), g_x(x_2), ..., g_x(x_N)} (Y = {y_1, y_2, ..., y_M} and G_y(Y) = {g_y(y_1), g_y(y_2), ..., g_y(y_M)}) for the mitral (granule) cells, where g_x > 0 and g_y > 0 are the neurons' non-linear sigmoid output functions essential for the bulbar oscillation dynamics (Freeman and Skarda 1985) to be studied.

Fig. 1. Left: olfactory system (receptor inputs, interaction, central inputs); right: bulbar structure. Cells marked "+" are mitral cells, "-" are granule cells.

The geometry of the bulbar structure is simplified to a one-dimensional ring. Each cell is specified by an index, e.g. the ith mitral cell and the jth granule cell, indicating cell locations on the ring (Fig 1). The N × M matrix H0 and the M × N matrix W0 are used respectively to describe the synaptic strengths (postsynaptic input : presynaptic output) from granule cells to mitral cells and vice versa. The bulb model system has equations of motion:

    Ẋ = -H0 G_y(Y) - α_x X + I,
    Ẏ = W0 G_x(X) - α_y Y + I_c,        (2.1)

where α_x = 1/τ_x, α_y = 1/τ_y, and τ_x = τ_y = 7 msec are the time constants of the mitral and granule cells respectively (Freeman and Skarda 1985; Shepherd 1988).
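Equation (2.1) can be integrated directly. The sketch below uses simple Euler steps and a generic logistic sigmoid for g_x and g_y; the connection matrices, inputs, and sigmoid shape are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def simulate_bulb(H0, W0, I, Ic, steps=1000, dt=0.1, tau=7.0):
    """Euler integration of eq. (2.1):
         dX/dt = -H0 g_y(Y) - X/tau + I
         dY/dt =  W0 g_x(X) - Y/tau + Ic
    H0: (N, M) granule-to-mitral weights; W0: (M, N) mitral-to-granule weights.
    Returns the mitral-cell trajectory, shape (steps, N); time units are msec."""
    g = lambda v: 1.0 / (1.0 + np.exp(-v))      # positive sigmoid output
    N, M = H0.shape
    X, Y = np.zeros(N), np.zeros(M)
    trace = np.empty((steps, N))
    for t in range(steps):
        dX = -H0 @ g(Y) - X / tau + I
        dY = W0 @ g(X) - Y / tau + Ic
        X = X + dt * dX
        Y = Y + dt * dY
        trace[t] = X
    return trace

# 10 mitral and 10 granule cells on a ring with nearest-neighbor coupling
N = M = 10
H0 = 0.5 * (np.eye(N) + np.roll(np.eye(N), 1, axis=1))
W0 = 0.5 * (np.eye(M) + np.roll(np.eye(M), -1, axis=1))
trace = simulate_bulb(H0, W0, I=np.full(N, 0.243), Ic=np.full(M, 0.1))
```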
In the simulation, weak random noise is added to I and I_c to simulate the fluctuations in the system.

3. SIMULATION RESULT

Computer simulation was done with 10 mitral and granule cells, and shows that the model can capture the major effects of the real bulb. The rise and fall of oscillations with input, and the baseline shift wave phase locked with sniff cycles, are obvious (Fig. 2). The simulated EEG (calculated using the approximation by Freeman (1980)) and the measured EEG are shown for comparison. During a sniff, all the cells oscillate coherently with the same frequency, as physiologically observed.

Fig. 2. A: Simulation result (granule output, EEG wave, and respiratory wave; 100 ms scale bars); B: measured result from Freeman and Schneider 1982.

The model also shows the capability of a pattern classifier. During a sniff, some input patterns induce oscillation, while others do not, and different inputs induce different oscillation patterns. We showed (Li and Hopfield 1988a) that the bulb amplifies the differences between the different inputs to give different output patterns, while the responses to the same odor inputs with different noise samples differ negligibly.

4. MATHEMATICAL ANALYSIS

A (damped) oscillator with frequency ω can be described by the equations

    ẋ = -ω y - α x,
    ẏ = ω x - α y.        (4.1)

The solution orbit in (x, y) space is a circle if α = 0 (non-damped oscillator), and spirals into the origin otherwise (damped oscillator). If a mitral cell and a granule cell are connected to each other, with inputs i(t) and i_c(t) respectively, then

    ẋ = -h g_y(y) - α_x x + i(t),
    ẏ = w g_x(x) - α_y y + i_c(t).        (4.2)

This is the scalar version of equation (2.1), with each upper case letter representing a vector or matrix replaced by a lower case letter representing a scalar. It is assumed that i(t) has a much slower time course than x or y (frequency of sniffs ≪ characteristic neural oscillation frequency).
Using the adiabatic approximation, define the equilibrium point (x0, y0) by

    ẋ0 ≈ 0 = -h g_y(y0) - α_x x0 + i,
    ẏ0 ≈ 0 = w g_x(x0) - α_y y0 + i_c.

Define x' = x - x0, y' = y - y0. Then

    ẋ' = -h (g_y(y) - g_y(y0)) - α_x x',
    ẏ' = w (g_x(x) - g_x(x0)) - α_y y'

(cf. equation (4.1)). If α_x = α_y = 0, then the solution orbit

    R = ∫_{x0}^{x0+x'} w (g_x(s) - g_x(x0)) ds + ∫_{y0}^{y0+y'} h (g_y(s) - g_y(y0)) ds = constant        (4.3)

is a closed curve in the original (x, y) space surrounding the point (x0, y0), i.e., (x, y) oscillates around the point (x0, y0). When the dissipation is included, dR/dt < 0, and the orbit in (x, y) space will spiral into the point (x0, y0). Thus a connected pair of mitral and granule cells behaves as a damped non-linear oscillator, whose oscillation center (x0, y0) is determined by the external inputs i and i_c. For small oscillation amplitudes, it can be approximated by a sinusoidal oscillator via linearization around (x0, y0):

    ẋ = -h g_y'(y0) y - α_x x,
    ẏ = w g_x'(x0) x - α_y y,        (4.4)

where (x, y) is the deviation from (x0, y0). The solution is x = r0 e^{-at} sin(ωt + φ), where a = (α_x + α_y)/2 and ω = √(h w g_x'(x0) g_y'(y0) - (α_x - α_y)²/4). If α_x = α_y, which is about right in the bulb, ω = √(h w g_x'(x0) g_y'(y0)). For the bulb, a ≈ 0.3 ω. The oscillation frequency depends on the synaptic strengths h and w, and is modulated by the receptor and central input via (x0, y0).

N such mitral-granule pairs, with cell interconnections between the pairs, represent a group of N coupled non-linear damped oscillators. This is exactly the situation in the olfactory bulb. The locality of synaptic connections in the bulb implies that the oscillator coupling is also local. (That there are many more granule cells than mitral cells only means that there is more than one granule cell in each oscillator.) Corresponding to equations (4.2) and (4.4), we have equation (2.1) and

    Ẋ = -H0 G_y'(Y0) Y - α_x X = -H Y - α_x X,
    Ẏ = W0 G_x'(X0) X - α_y Y = W X - α_y Y.
(4.5)

where (X, Y) are now deviations from (X0, Y0), and G_x'(X0) and G_y'(Y0) are diagonal matrices with elements [G_x'(X0)]_ii = g_x'(x_i,0) > 0 and [G_y'(Y0)]_jj = g_y'(y_j,0) > 0, for all i, j. Eliminating Y,

    Ẍ + (α_x + α_y) Ẋ + (A + α_x α_y) X = 0,        (4.6)

where A = HW = H0 G_y'(Y0) W0 G_x'(X0). The ith oscillator (mitral cell) follows the equation

    ẍ_i + (α_x + α_y) ẋ_i + (A_ii + α_x α_y) x_i + Σ_{j≠i} A_ij x_j = 0        (4.7)

(cf. equation (4.1)); the last term describes the coupling between oscillators. Non-linear effects occur when the amplitude is large, and make the oscillation wave form non-sinusoidal. If X_k is one of the eigenvectors of A with eigenvalue λ_k, equation (4.6) has a kth oscillation mode. Components of X_k indicate the oscillators' relative amplitudes and phases (for each independent mode k = 1, 2, ..., N). For simplicity, we set α_x = α_y = α; then X ∝ X_k e^{-αt ± i√λ_k t}. Each mode has frequency Re√λ_k, where Re means the real part of a complex number. If Re(-α ± i√λ_k) > 0 is satisfied for some k, then the amplitude of the kth mode will increase with time, i.e. a growing oscillation. Starting from an initial condition of arbitrarily small amplitudes in linear analysis, the mode with the fastest growing amplitude will dominate the output, and the whole bulb will oscillate in the same frequency, as observed physiologically (Freeman 1978; Freeman and Schneider 1982) as well as in the simulation. With the non-linear effect, the strongest mode will suppress the others, and the final activity output will be a single "mode" in a non-linear regime.

Because of the coupling between the (damped) oscillators, the equilibrium point (X0, Y0) of a group of oscillators is no longer always stable, with the possibility of growing oscillation modes. λ_k must be complex in order to have the kth mode grow. For this, a necessary (but not sufficient) condition is that matrix A is nonsymmetric.
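The growth criterion above can be checked numerically: each eigenvalue λ_k of A yields the pair of rates Re(-α ± i√λ_k), and a mode grows only when one of them is positive. A small sketch, with illustrative matrix values:

```python
import numpy as np

def growing_modes(A, alpha):
    """Return the eigenvalues lambda_k of A and, for each, whether the mode
    X ~ X_k exp((-alpha ± i sqrt(lambda_k)) t) has a growing branch."""
    lam = np.linalg.eigvals(A).astype(complex)
    s = np.sqrt(lam)                              # principal square root
    growth = np.maximum((-alpha + 1j * s).real, (-alpha - 1j * s).real)
    return lam, growth > 0

# A symmetric coupling (real, positive eigenvalues): every mode is damped.
_, grows_sym = growing_modes(np.array([[2.0, 0.5], [0.5, 3.0]]), alpha=1.0)
# A strongly nonsymmetric coupling (pure imaginary eigenvalues): modes grow.
_, grows_rot = growing_modes(np.array([[0.0, -9.0], [9.0, 0.0]]), alpha=1.0)
```

For real positive λ_k, √λ_k is real and both branches decay at rate α; only a sufficiently complex λ_k, which requires a nonsymmetric A, can overcome the damping.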
Those inputs that make matrix A less symmetric will more likely induce the oscillatory output, and thus presumably be noticed by the following olfactory cortex (see Li and Hopfield 1988a for details). The consequences (also observed physiologically) of our model are (Freeman 1975, 1978; Freeman and Schneider 1982; Li and Hopfield 1988a): 1) the local mitral cells' oscillation phase leads that of the local granule cells by a quarter cycle; 2) oscillations across the bulb have the same dominant frequency, whose possible range should be narrow; 3) there should be a non-zero phase gradient field across the bulb; 4) the oscillation activity will rise during the inhale and fall at the exhale, and rides on a slow background baseline shift wave phase locked with the sniff cycles.

This model of the olfactory bulb can be generalized to other masses of interacting excitatory and inhibitory cells, such as those in olfactory cortex, neocortex and hippocampus (Shepherd 1979), where there may be connections between the excitatory cells as well as between the inhibitory cells (Li and Hopfield 1988a). Suppose that B0 and C0 are the excitatory-to-excitatory and inhibitory-to-inhibitory connection matrices respectively; then equation (4.6) becomes

    Ẍ + (α_x - B + α_y + C) Ẋ + (A + (α_x - B)(α_y + C)) X = 0.        (4.9)

5. COMPUTATIONS IN THE OLFACTORY BULB

The receptor input I influences (X0, Y0) as follows:

    dX0 ≈ (α² + HW)^{-1} (α dI + dİ),
    dY0 ≈ (α² + WH)^{-1} (W dI - α H^{-1} dİ).        (5.1)

This is how the odor input determines the bulbar output. Increasing I_odor not only raises the mean activity level (X0, Y0) (and thus the gain (G_x'(X0), G_y'(Y0))), but also slowly changes the oscillation modes by structurally changing the oscillation equation (4.6) through the matrix A = H0 G_y'(Y0) W0 G_x'(X0). If (X0, Y0) is raised to such an extent that Re(-α ± i√λ_k) > 0 is satisfied for some mode k, the equilibrium point (X0, Y0) becomes unstable, and this mode emerges with oscillatory bursts.
Different oscillation modes that emerge are indicative of the different odor inputs controlling the system parameters (X0, Y0), and can be thought of as the decision states reached for the odor information, i.e., the oscillation pattern classifies odors. When (X0, Y0) is very low (e.g. before the inhale), all modes are damped, and only small amplitude oscillations occur, driven by noise and the weak time variation of the odor input. The absence of oscillation can be interpreted by higher processing centers as the absence of an odor (Skarda and Freeman 1987). Detailed analysis shows how the bulb selectively responds (or does not respond) to certain input patterns (Li and Hopfield 1988a) by choosing the synaptic connections appropriately. This means the bulb can have non-uniform sensitivities to different odor receptor inputs and achieve better odor discriminations.

6. PERFORMANCE OPTIMIZATION IN THE BULB

We discussed (Li and Hopfield 1988a) how the olfactory bulb makes the least information contamination between sniffs and changes the motivation level for odor discrimination. We further postulate with our model that the bulb, with appropriate inputs from the higher centers, can enhance or suppress the sensitivity to particular odors (details in Li and Hopfield 1988b). When the central input I_c is not fixed, it can control the bulbar output by shifting (X0, Y0), just as the odor input I can; equation (5.1) becomes:

    dX0 ≈ (α² + HW)^{-1} (α dI + dİ - H dI_c + α W^{-1} dİ_c),
    dY0 ≈ (α² + WH)^{-1} (W dI - α H^{-1} dİ + α dI_c + dİ_c).        (6.1)

Suppose that I_c = I_c,background + I_c,control, where I_c,control is the control signal which changes during a sniff. Olfactory adaptation (cancelling) is achieved by having an I_c,control = I_cancel which cancels the effect of I_odor on X0. This keeps the mitral cells' baseline output G_x(X0) and gain G_x'(X0) low, and thus makes the oscillation output impossible, as if no odor exists.
We can then expect that reversing the sign of I_cancel (anti-cancelling) will cause the bulb to have an enhanced, instead of reduced (adapted), response to I_odor, achieving olfactory enhancement. We can derive further phenomena such as recognizing an odor component in an odor mixture, cross-adaptation and cross-enhancement (Li and Hopfield 1988b). Computer simulations confirmed the expected results.

7. DISCUSSION

Our model of the olfactory bulb is a simplification of the known anatomy and physiology. The net of the mitral and granule cells simulates a group of coupled non-linear oscillators which are the sources of the rhythmic activities in the bulb. The coupling makes the oscillation coherent across the bulb surface for each sniff. The model suggests, in agreement with Freeman and coworkers, that a stability-change bifurcation is used by the bulbar oscillator system to decide primitively on the relevance of the receptor input information. The different non-damped oscillation modes that emerge are used to distinguish the different odor inputs, which are the driving source for the bifurcations, and can be thought of approximately as the (unitary) decision states of the system for the odor information. With the extra information represented in the oscillation phases of the cells, the bulb emphasizes the differences between different input patterns (section 4). Both the analysis and simulation show that the bulb is selectively sensitive to different receptor input patterns. This selectivity, as well as the motivation level of the animal, could also be modulated from higher centers. The model also successfully accounts for the bulb's ability to use input from higher centers to suppress or enhance sensitivity to particular target odors, or to mask odors. This model does not exclude the possibility that information is also coded in the non-oscillatory slow wave X0, which is likewise determined by the odor input.
The chief behaviors do not depend on the number of cells in the model. The model can be generalized to olfactory cortex, hippocampus, and neocortex, where there are more varieties of synaptic organization.

Acknowledgements

This research was supported by ONR contract N00014-87-K-0377. We would also like to acknowledge discussions with J.A. Bower.

References

Baird B. Nonlinear dynamics of pattern formation and pattern recognition in rabbit olfactory bulb. Physica 22D, 150-175 (1986)
Freeman W.J. Mass action in the nervous system. New York: Academic Press 1975
Freeman W.J. Spatial properties of an EEG event in the olfactory bulb and cortex. Electroencephalogr. Clin. Neurophysiol. 44, 586-605 (1978)
Freeman W.J. Nonlinear gain mediating cortical stimulus-response relations. Biol. Cybernetics 33, 237-247 (1979a)
Freeman W.J. Nonlinear dynamics of paleocortex manifested in the olfactory EEG. Biol. Cybernetics 35, 21-37 (1979b)
Freeman W.J. EEG analysis gives model of neuronal template-matching mechanism for sensory search with olfactory bulb. Biol. Cybernetics 35, 221-234 (1979c)
Freeman W.J. Use of spatial deconvolution to compensate for distortion of EEG by volume conduction. IEEE Trans. Biomed. Engineering 21, 421-429 (1980)
Freeman W.J., Schneider W.S. Changes in spatial patterns of rabbit olfactory EEG with conditioning to odors. Psychophysiology 19, 44-56 (1982)
Freeman W.J., Skarda C.A. Spatial EEG patterns, non-linear dynamics and perception: the Neo-Sherringtonian view. Brain Res. Rev. 10, 147-175 (1985)
Getchell T.V., Shepherd G.M. Responses of olfactory receptor cells to step pulses of odour at different concentrations in the salamander. J. Physiol. 282, 521-540 (1978)
Lancet D. Vertebrate olfactory reception. Ann. Rev. Neurosci. 9, 329-355 (1986)
Li Z., Hopfield J.J. Modeling the olfactory bulb. Submitted to Biological Cybernetics (1988a)
Li Z., Hopfield J.J. A model of olfactory adaptation and enhancement in the olfactory bulb. In preparation (1988b)
Scott J.W. The olfactory bulb and central pathways. Experientia 42, 223-232 (1986)
Shepherd G.M. The synaptic organization of the brain. New York: Oxford University Press 1979
Shepherd G.M. Private communications (1988)
Skarda C.A., Freeman W.J. How brains make chaos in order to make sense of the world. Behavioral and Brain Sciences 10, 161-195 (1987)
AN ANALOG SELF-ORGANIZING NEURAL NETWORK CHIP

James R. Mann
MIT Lincoln Laboratory
244 Wood Street
Lexington, MA 02173-0073

Sheldon Gilbert
4421 West Estes
Lincolnwood, IL 60646

ABSTRACT

A design for a fully analog version of a self-organizing feature map neural network has been completed. Several parts of this design are in fabrication. The feature map algorithm was modified to accommodate circuit solutions to the various computations required. Performance effects were measured by simulating the design as part of a front-end for a speech recognition system. Circuits are included to implement both activation computations and weight adaptation, or learning. External access to the analog weight values is provided to facilitate weight initialization, testing and static storage. This fully analog implementation requires an order of magnitude less area than a comparable digital/analog hybrid version developed earlier.

INTRODUCTION

This paper describes an analog version of a self-organizing feature map circuit. The design implements Kohonen's self-organizing feature map algorithm [Kohonen, 1988] with some modifications imposed by practical circuit limitations. The feature map algorithm automatically adapts the connection weights to nodes in the network such that each node comes to represent a distinct class of features in the input space. The system also self-organizes such that neighboring nodes become responsive to similar input classes. The prototype circuit was fabricated in two parts (for testability): a 4-node, 4-input synaptic array, and a weight adaptation and refresh circuit. A functional simulator was used to measure the effects of design constraints. This simulator evolved with the design to the point that actual device characteristics and process statistics were incorporated. The feature map simulator was used as a front-end processor to a speech recognition system whose error rates were used to monitor the effects of parameter changes on performance.
This design has evolved over the past two years from earlier experiments with a perceptron classifier [Raffel, 1987] and an earlier version of a self-organizing feature map circuit [Mann, 1988]. The perceptron classifier used a connection matrix built with multiplying D/A converters (MDACs) to perform the product operation for the sum-of-products computation common to all neural network algorithms. The feature map circuit also used MDACs to perform a more complicated calculation to realize a squared Euclidean distance measure. The weights were also stored digitally, but in a unary encoded format to simplify the weight adjustment operation. This circuit contained all of the control necessary to perform weight adaptation, except for selecting a maximum responder. The new feature map circuit described in this paper replaces the digital weight storage with dynamic analog charge storage on a capacitor. This paper will describe the circuitry and discuss problems associated with this approach to neural network implementations.

Reprinted with permission of Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, Massachusetts

ALGORITHM DESCRIPTION

The original Kohonen algorithm is based on a network topology such as shown in Figure 1. This illustrates a linear array of nodes, consistent with the hardware implementation being described. Each node in the circuit computes a level of activity [Dj(t)] which indicates the similarity between the current input vector [xi(t)] and its respective weight vector [wij(t)]. Traditionally this would be the squared Euclidean distance given by the activation equation in the figure. If the inputs are normalized, a dot product operation can be substituted. The node most representative of the current input will be the one with the minimum or maximum output activity (classification), depending on which distance measure is used. The node number of the min./max.
responder [j*] then comes to represent that class of which the input is a member. If the network is still in its learning phase, an adaptation process is invoked. This process updates the weights of all the nodes lying within a prescribed neighborhood [NEj*(t)] of the selected node. The weights are adjusted such that the distance between the input and weight vectors is diminished. This is accomplished by decreasing the individual differences between each component pair of the two vectors. The rate of learning is controlled by the gain term [α(t)]. Both the neighborhood and gain terms decrease during the learning process, stopping when the gain term reaches 0. The following strategy was selected for the circuit implementation. First, it was assumed that inputs are normalized, thereby permitting the simpler dot product operation to be adopted. Second, weight adjustments were reduced to a simple increment/decrement operation determined by the sign of the difference between the components of the input and weight vectors. Both of these simplifications were tested in the simulations described earlier and had negligible effects on overall performance as a speech vector quantizer. In addition, the prototype circuits of the analog weight version of the feature map vector quantizer do not include either the max. picker or the neighborhood operator. To date, a version of a max. picker has not yet been chosen, though many forms exist. The neighborhood operator was included in the previous version of this design, but was not repeated on this first pass.

HARDWARE DESCRIPTION

SYNAPTIC ARRAY

A transistor constitutes the basic synaptic connection used in this design. An analog input is represented by a voltage v(Xi) on the drain of the transistor. The weight is stored as charge q(Wij) on the gate of the transistor. If the gate voltage exceeds the maximum input voltage by an amount greater than the transistor threshold voltage, the device will be operating in the ohmic region.
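The ohmic-region operating point can be pictured with the standard first-order MOSFET drain-current model (a sketch only; the parameters k and v_t are illustrative, not values for the fabricated devices):

```python
def synapse_current(v_in, v_w, v_t=1.0, k=1e-4):
    """First-order ohmic-region drain current: for v_in < v_w - v_t,
    I_d = k * ((v_w - v_t) * v_in - v_in**2 / 2),
    which is approximately proportional to the product of the input
    voltage v_in (drain) and the stored weight voltage v_w (gate)."""
    return k * ((v_w - v_t) * v_in - v_in ** 2 / 2)

def node_activity(inputs, weights):
    """Summing the synapse currents on a shared wire (Kirchhoff's current
    law) gives a monotone approximation to the dot product."""
    return sum(synapse_current(x, w) for x, w in zip(inputs, weights))
```

Monotonicity is the only property a competitive network needs here: for fixed inputs, raising any weight raises the node's activity, even though the product is only approximate.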
In this region the current [i(Dj)] through the transistor is proportional to the product of the input and weight voltages. This effectively computes one contribution to the dot product. By connecting many synapses to a single wire, current summing is performed, in accordance with Kirchhoff's current law, producing the desired sum-of-products activity. Figure 2 shows the transistor current as a function of the input and weight voltages. These curves merely serve to demonstrate how a transistor operating in the ohmic region will approximate a product operation. As the input voltage begins to approach the saturation region of the transistor, the curves begin to bend over. For use in competitive learning networks, like the feature map algorithm, it is only important that the computation be monotonically increasing. These curves were the characteristics of the computation used in the simulations. The absolute values given for output current do not reflect those produced in the actual circuit.

Figure 1. Description of Kohonen's original feature map algorithm using a linear array of nodes. Activation: Dj(t) = Σ_{i=1}^{m} (xi(t) - wij(t))². Classification: j* = MIN_j(Dj(t)). Adaptation: weights within the neighborhood of j* are moved toward the input.

Figure 2. Typical I-V curves for a transistor operating in the ohmic region: output current vs. input voltage (Vds, 0 to 2 V) for weight voltages (Vgs) from 3.0 V to 5.0 V.

It should also be noted that there is no true zero weight; even the zero weight voltage contributes to the output current. But again, in a competitive network, it is only important that it contribute less than a higher weight value at that same input voltage. In short, neither small non-linearities nor offsets interfere with circuit operation if the synapse characteristic is monotonic with weight value and input.
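In software, the modified algorithm (dot-product activation with sign-based increment/decrement updates) can be sketched as follows; the neighborhood and step schedules shown are illustrative assumptions, not the values used with the chip:

```python
import random

def train_feature_map(data, n_nodes=16, dim=4, epochs=20):
    """Kohonen feature map with the circuit simplifications described in
    the text: dot-product activation (inputs assumed normalized) and
    sign-based increment/decrement weight updates."""
    step = 1.0 / 128.0   # one quantization bin per adjustment (assumed)
    w = [[random.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        radius = max(0, n_nodes // 4 - epoch)   # shrinking neighborhood
        for x in data:
            # classification: max. responder under dot-product activation
            j_star = max(range(n_nodes),
                         key=lambda j: sum(wj * xi for wj, xi in zip(w[j], x)))
            # adaptation: inc/dec each weight component toward the input
            for j in range(max(0, j_star - radius),
                           min(n_nodes, j_star + radius + 1)):
                for i in range(dim):
                    if w[j][i] < x[i]:
                        w[j][i] += step
                    elif w[j][i] > x[i]:
                        w[j][i] -= step
    return w
```

With repeated presentations of one input, the winning node's weight vector settles to within one quantization step of that input, which is the behavior the increment/decrement hardware is meant to reproduce.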
SYSTEM

Figure 3 is a block diagram of the small four-node hardware prototype. The nodes are oriented horizontally, their outputs identified as I0 through I3 along the right-hand edge, representing the accumulated currents. The analog inputs [X3-X0] come in from the bottom and, traveling vertically, make connections with each node at the boxes identified as synapses. Each synapse performs its product operation between the analog weight stored at that node and the input potential. Along the top and left sides are the control circuits for accessing weight information. The two storage registers associated with each synapse hold the control signals used to select the reading and writing of weights. Weights are accessed serially by connecting to global read and write wires, W- and W+ respectively. Besides the need for modification, the weights also drift with time, much like DRAM storage, and therefore must be refreshed periodically. This is also performed by the adaptation circuit, which will be presented separately. Control is provided by having a single "1" bit circulating through the DRAM storage bits associated with each synapse. This process goes on continuously in the background after being initialized, in parallel with the activity calculations. If the circuit is not being trained, the adaptation circuit continues to refresh the existing weights.

WEIGHT MODIFICATION & REFRESH

A complete synapse, along with the current-to-voltage conversion circuit used to read the weight contents, is shown in Figure 4. The current synapse is approximately the size of two 6-transistor static RAM bits. This approximation will be used to make synaptic population estimates from current SRAM design experience. The six transistors along the top of the synapse circuit are two three-transistor dynamic RAM cells used to control access to the weight contents. These are represented in Figure 3 as the two storage elements associated with each synapse and are used as described earlier.
READING THE WEIGHT

The two serial, vertically oriented transistors in the synapse circuit are used to sense the stored weight value. The bottom (sensing) transistor's channel is modulated by the charge stored on the weight capacitor. The sensing transistor is selected through the binary state of the 3T DRAM bit immediately above it. These two transistors used for reading the weight are duplicated in the output circuit shown to the right of the synapse. The current produced in the global read wire through the sensing transistor is set up in the cascode current mirror arrangement in the output circuit. A mirrored version of the current, leaving the right-hand side of the cascode mirror, is established in the duplicate transistor pair. The gate of this transistor is controlled by the operational amplifier as shown, and must be equivalent to the weight value at the connection being read if the drains are both at the same potential. This is guaranteed by the cascode mirror arrangement selected, and is set by the minus input to the amplifier.

WRITING THE WEIGHT

The lone horizontal transistor at the bottom right corner of the synapse circuit is the weight access transistor. This connects the global write wire [W+] to the weight capacitor [Wij]. This occurs whenever the DRAM element directly above it is holding a "1". When the access transistor is off, current leakage takes place, causing the voltage on the capacitor to drift with time.

Figure 3. A block diagram of the 4 x 4 synaptic array integrated circuit (581 x 320 microns), showing the global write and read wires W+ and W- and the analog inputs X3-X0.

Figure 4. Full synapse circuit (82 x 32 microns) with output circuit. The activation transistor is at the bottom central position in the synapse circuit.
There are two requirements on the weight drift for our application: that drift rates be as slow as possible, and that the weights drift in a known direction, in our case toward ground. This is true because the refresh mechanism always raises the voltage to the top of a quantized voltage bin. A cross-section of the access transistor in Figure 5 identifies the two major leakage components: reverse diode leakage to the grounded substrate (or p-well) [I0], and subthreshold channel conduction to the global write wire [Id]. The reverse diode leakage current is proportional to the area of the diffusion, while the channel conduction leakage is proportional to the channel W/L ratio. Maintaining a negative voltage drift can be accomplished by sizing the devices such that reverse diode leakage dominates the channel conduction. This, however, would degrade the overall storage performance, and hence the minimum refresh cycle time. This can be relaxed by the technique of holding the global write line at some low voltage during everything but write cycles. This makes the average voltage seen across the channel less than the minimum weight voltage, always resulting in a net voltage drop. Also, these leakage currents are exponentially dependent on temperature and can be decreased by an order of magnitude with just 10's of degrees of cooling [Schwartz, 1988].

WEIGHT REPRESENTATION

Weights, while analog, are restricted to discrete voltages. This permits the stored voltage to drift by a restricted amount (a bin) and still be refreshed to its original value. The drift rate just discussed, combined with the bin size (determined by the levels of quantization (i.e., number of bins) and weight range (i.e., column height)), determines the refresh cycle time. The refresh cycle time, in turn, determines how many synapses (or weights) can be served by a single adaptation circuit.
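The bin-top refresh and the resulting service budget can be sketched together (assumed numbers: the 2-3 V weight range and 128 bins reported later in the test results; the drift rate, the drifted example voltages, and the per-weight service time are illustrative guesses, not measurements):

```python
def refresh_pass(weights, v_min=2.0, v_max=3.0, bins=128):
    """One circulation of the select bit: each weight is visited serially
    and restored to the top of its quantization bin, so refresh always
    raises the voltage; leakage is designed to pull it down only."""
    bin_size = (v_max - v_min) / bins
    return [v_min + (int((w - v_min) / bin_size) + 1) * bin_size
            for w in weights]

def synapses_served(bin_size, drift_rate, service_time):
    """A weight must be refreshed before it drifts through one bin; that
    refresh-cycle limit divided by the per-weight service time bounds how
    many synapses one adaptation circuit can handle."""
    cycle_limit = bin_size / drift_rate          # seconds per refresh cycle
    return cycle_limit, round(cycle_limit / service_time)

refreshed = refresh_pass([2.501, 2.9936])                  # drifted values
cycle, count = synapses_served((3.0 - 2.0) / 128, 2.5e-3, 1e-4)
# Doubling the weight-voltage range doubles the bin size, hence the budget:
cycle2, count2 = synapses_served((4.0 - 2.0) / 128, 2.5e-3, 1e-4)
```

This directly illustrates the trade stated below: extra voltage range can buy either more quantization levels or more synapses per adaptation circuit.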
This means that doubling the range of the weight voltage would permit either doubling the number of quantization levels or doubling the number of synapses served by one adaptation circuit. Weight adjustments during learning involve raising or lowering the current weight voltage to the bins immediately above or below the current bin. This constitutes a digital increment or decrement operation.

ADAPTATION CIRCUITRY

Weight adjustments are made based upon a comparison between the current weight value and the input voltage connected to that weight. But, as these two ranges are not coincident, the comparison is made between two binary values produced by parallel flash A/D converters [Brown, 1987]. The two opposing A/D converters in Figure 6 produce a 1-of-N code used in the comparison. The converters are composed of two stages to conserve area. The first stage performs a coarse conversion, which in turn drives the upper and lower rails of the second-stage converter. The selection logic decides which of the voltages in the second-stage weight conversion circuit to route back on the global write wire [W+]. This configuration provides an easy mechanism for setting the ranges on both the inputs and weights. This is accomplished merely by setting the desired maximum and minimum voltages on the respective conversion circuits ([Xmin, Xmax], [Wmin, Wmax]).

TEST RESULTS

Both circuits were fabricated in MOSIS. The synaptic array was fabricated in a 3 micron 2-metal CMOS process, while the adaptation circuitry was fabricated in a similar 2 micron process. To date, only the synaptic array has been tested. In these tests, the input was restricted to a 0 to 1 V range while the weight range was 2 to 3 V. Most of these early tests were done with binary weights, either 2 V or 3 V, corresponding to a "0" and a "1". The synapses and associated control circuitry all work as expected. The circuit can be clocked up to 7 MHz. The curves shown in Figure 7 display a typical neuron output during two modes of operation: a set of four binary weights with all of the inputs swept together over their operating range, and a single, constant input with its weight being swept through its operating range. The graphs in Figure 8 show the temporal behavior of the weight voltage stored at a single synapse. On the left is plotted the output current vs. weight voltage, for converting between the two quantities. The right-hand plot is the output current of the synapse plotted against time. If the weight voltage bin size is set to 15 mV (2 V range, 128 bins), a 3 to 4 second refresh cycle time limit would be required. This is a very lenient constraint and may permit a much finer quantization than expected. The circuitry for reading the weights was tested and appears to be inoperative. The cascode mirror requires a very high potential at the p-channel sources, which causes the circuit to latch up when the clocks are turned on. This circuit will be isolated and tested under static conditions.

Figure 5. Cross-sectional view of a weight access transistor with leakage currents.

Figure 6. Block diagram of the weight adaptation and refresh circuit. Comparison of digital A/D outputs and new weight selection takes place in the box marked SELECT LOGIC.

CONCLUSIONS

In summary, a design for an analog version of a self-organizing feature map has been completed, and prototype versions of the synaptic array and the adaptation circuitry have been fabricated. The devices are still undergoing testing and characterization, but the basic DRAM control and synaptic operation have been demonstrated. Simulations have provided the guidance on design choices.
These have been instrumental in providing information on effects due to quantization, computational non-linearities, and process variations. The new design offers a significant increase in density over a digital/analog hybrid approach. The 84-pin standard frame package from MOSIS will accommodate more than 8000 synapses of from 6 to 8 bits accuracy. It appears that control modifications may offer even greater densities in future versions.

This work was sponsored by the Department of the Air Force and the Defense Advanced Research Projects Agency; the views expressed are those of the author and do not reflect the official policy or position of the U.S. Government.

REFERENCES

P. Brown, R. Millecchia, M. Stinely. Analog Memory for Continuous-Voltage, Discrete-Time Implementation of Neural Networks. Proc. IEEE Intl. Conf. on Neural Networks. 1987.
T. Kohonen. Self-Organization and Associative Memory. Springer-Verlag. 1988.
J. Mann, R. Lippmann, R. Berger, J. Raffel. A Self-Organizing Neural Net Chip. IEEE 1988 Custom Integrated Circuits Conference. pp. 103.1-103.5. 1988.
J. Raffel, J. Mann, R. Berger, A. Soares, S. Gilbert. A Generic Architecture for Wafer-Scale Neuromorphic Systems. Proc. IEEE Intl. Conf. on Neural Networks. 1987.
D.B. Schwartz, R.E. Howard. A Programmable Analog Neural Network Chip. IEEE 1988 Custom Integrated Circuits Conference. pp. 10.2.1-10.2.4. 1988.

Figure 7. a) Plot of output current (Ij) as a function of input voltage (Xi) between 0 and 1 volt for 0 (top curve) to 4 (bottom curve) weights "ON". b) Plot of output current (Ij) vs. input voltage (Xi) from 0 to 1 V for a weight voltage between 2 V (top) and 3 V (bottom) in 0.1 V steps.

Figure 8. a) Plot of output current V(Iout) vs. weight voltage. b) Plot of output current as a function of time with W+ held at 0 V and the local weight initially set to 3 V.
A BIFURCATION THEORY APPROACH TO THE PROGRAMMING OF PERIODIC ATTRACTORS IN NETWORK MODELS OF OLFACTORY CORTEX

Bill Baird
Department of Biophysics
U.C. Berkeley

ABSTRACT

A new learning algorithm for the storage of static and periodic attractors in biologically inspired recurrent analog neural networks is introduced. For a network of n nodes, n static or n/2 periodic attractors may be stored. The algorithm allows programming of the network vector field independent of the patterns to be stored. Stability of patterns, basin geometry, and rates of convergence may be controlled. For orthonormal patterns, the learning operation reduces to a kind of periodic outer product rule that allows local, additive, commutative, incremental learning. Standing or traveling wave cycles may be stored to mimic the kind of oscillating spatial patterns that appear in the neural activity of the olfactory bulb and prepyriform cortex during inspiration and suffice, in the bulb, to predict the pattern recognition behavior of rabbits in classical conditioning experiments. These attractors arise, during simulated inspiration, through a multiple Hopf bifurcation, which can act as a critical "decision point" for their selection by a very small input pattern.

INTRODUCTION

This approach allows the construction of biological models and the exploration of engineering or cognitive networks that employ the type of dynamics found in the brain. Patterns of 40 to 80 Hz oscillation have been observed in the large-scale activity of the olfactory bulb and cortex (Freeman and Baird 86) and even visual neocortex (Freeman 87, Grey and Singer 88), and found to predict the olfactory and visual pattern recognition responses of a trained animal. Here we use analytic methods of bifurcation theory to design algorithms for determining synaptic weights in recurrent network architectures, like those found in olfactory cortex, for associative memory storage of these kinds of dynamic patterns.
The "projection algorithm" introduced here employs higher order correlations, and is the most analytically transparent of the algorithms to come from the bifurcation theory approach(Baird 88). Alternative numerical algorithms employing unused capacity or hidden units instead of higher order correlations are discussed in (Baird 89). All of these methods provide solutions to the problem of storing exact analog attractors, static or dynamic, in recurrent neural networks, and allow programming of the ambient vector field independent of the patterns to be stored. The stability of cycles or equilibria, geometry of basins of attraction, rates of convergence to attractors, and the location in parameter space of primary and secondary bifurcations can be programmed in a prototype vector field - the normal form. To store cycles by the projection algorithm, we start with the amplitude equations of a polar coordinate normal form, with coupling coefficients chosen to give stable fixed points on the axes, and transform to Cartesian coordinates. The axes of this system of nonlinear ordinary differential equations are then linearly transformed into desired spatial or spatio-temporal patterns by projecting the system into network coordinates - the standard basis - using the desired vectors as columns of the transformation matrix. This method of network synthesis is roughly the inverse of the usual procedure in bifurcation theory for analysis of a given physical system. Proper choice of normal form couplings will ensure that the axis attractors are the only attractors in the system - there are no "spurious attractors". If symmetric normal form coefficients are chosen, then the normal form becomes a gradient vector field. It is exactly the gradient of an explicit potential function which is therefore a strict Liapunov function for the system. 
Identical normal form coefficients make the normal form vector field equivariant under permutation of the axes, which forces identical scale and rotation invariant basins of attraction bounded by hyperplanes. Very complex periodic attractors may be established by a kind of Fourier synthesis as linear combinations of the simple cycles chosen for a subset of the axes, when those are programmed to be unstable, and a single "mixed mode" in the interior of that subspace is made stable. Proofs and details on vector field programming appear in (Baird 89). In the general case, the network resulting from the projection algorithm has fourth order correlations, but the use of restrictions on the detail of vector field programming and the types of patterns to be stored results in network architectures requiring only second order correlations. For biological modeling, where possibly the patterns to be stored are sparse and nearly orthogonal, the learning rule for periodic patterns becomes a "periodic" outer product rule which is local, additive, commutative, and incremental. It reduces to the usual Hebb-like rule for static attractors.

CYCLES

The observed physiological activity may be idealized mathematically as a cycle, r x_j e^{i(θ_j + ωt)}, j = 1, 2, ..., n. Such a cycle is a "periodic attractor" if it is stable. The global amplitude r is just a scaling factor for the pattern x, and the global phase ω in e^{iωt} is a periodic scaling that scales x by a factor between ±1 at frequency ω as t varies. The same vector x^s or "pattern" of relative amplitudes can appear in space as a standing wave, like that seen in the bulb, if the relative phase θ^s_i of each compartment (component) is the same, θ^s_{i+1} = θ^s_i, or as a traveling wave, like that seen in the prepyriform cortex, if the relative phase components of θ^s form a gradient in space.
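The standing- vs. traveling-wave forms just defined can be generated numerically (a minimal sketch with made-up amplitudes and phases); the RMS amplitude of each component comes out the same whether the phases are uniform or form a gradient:

```python
import math

def cycle_samples(x, theta, omega=1.0, steps=1000):
    """Sample the real part of x_j * exp(i(theta_j + omega*t)) over one
    full period, for each component j."""
    T = 2 * math.pi / omega
    return [[xj * math.cos(tj + omega * (T * k / steps))
             for xj, tj in zip(x, theta)] for k in range(steps)]

def rms(samples, j):
    """Root-mean-square amplitude of component j over the sampled period."""
    return math.sqrt(sum(s[j] ** 2 for s in samples) / len(samples))

x = [1.0, 0.5, 0.25]                          # relative amplitude pattern
standing = cycle_samples(x, [0.0, 0.0, 0.0])  # identical phases
traveling = cycle_samples(x, [0.0, 0.7, 1.4]) # phase gradient across space

# In both cases rms(..., j) equals x_j / sqrt(2): the measured amplitude
# pattern is independent of the phase pattern.
```

This is the sense in which the relative phase pattern is a free degree of freedom: it changes the spatio-temporal appearance of the cycle without changing the measured amplitude pattern.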
The traveling wave will "sweep out" the amplitude pattern x^s in time, but the root-mean-square amplitude measured in an experiment will be the same x^s, regardless of the phase pattern. For an arbitrary phase vector, these "simple" single-frequency cycles can make very complicated-looking spatio-temporal patterns. From the mathematical point of view, the relative phase pattern θ is a degree of freedom in the kind of patterns that can be stored. Patterns of uniform amplitude x which differed only in the phase-locking pattern θ could be stored as well. To store the kind of patterns seen in the bulb, the amplitude vector x is assumed to be parsed into equal numbers of excitatory and inhibitory components, where each class of component has identical phase, but there is a phase difference of 60 to 90 degrees between the classes. The traveling wave in the prepyriform cortex is modeled by introducing an additional phase gradient into both excitatory and inhibitory classes.

PROJECTION ALGORITHM

The central result of this paper is most compactly stated as the following:

THEOREM. Any set S, s = 1, 2, ..., n/2, of cycles r^s x_j^s e^{i(θ_j^s + ω_s t)} of linearly independent vectors of relative component amplitudes x^s ∈ R^n and phases θ^s ∈ S^n, with frequencies ω_s ∈ R and global amplitudes r^s ∈ R, may be established in the vector field of the analog fourth-order network by some variant of the projection operation:

T_ij = Σ_mn P_im J_mn P^{-1}_nj ,    T_ijkl = Σ_mn P_im A_mn P^{-1}_mj P^{-1}_nk P^{-1}_nl ,

where the n x n matrix P contains the real and imaginary components [x^s cos θ^s, x^s sin θ^s] of the complex eigenvectors x^s e^{iθ^s} as columns, J is an n x n matrix of complex conjugate eigenvalues in diagonal blocks, A_mn is an n x n matrix of 2x2 blocks of repeated coefficients of the normal form equations, and the input b_i δ(t) is a delta function in time that establishes an initial condition.
The vector field of the dynamics of the global amplitudes r_s and phases θ_s is then given exactly by the normal form equations:

dr_s/dt = r_s (u_s - Σ_k a_sk r_k²)
dθ_s/dt = ω_s + Σ_k b_sk r_k²

In particular, for a_sk > 0 and a_ss/a_ks < 1, for all s and k, the cycles s = 1, 2, ..., n/2 are stable, and have amplitudes r_s = (u_s/a_ss)^{1/2}, where u_s = -1 + τ. Note that there is a multiple Hopf bifurcation of codimension n/2 at τ = 1. Since there are no approximations here, however, the theorem is not restricted to the neighborhood of this bifurcation, and can be discussed without further reference to bifurcation theory. The normal form equations for dr_s/dt and dθ_s/dt determine how r_s and θ_s for pattern s evolve in time in interaction with all the other patterns of the set S. This could be thought of as the process of phase locking of the pattern that finally emerges. The unusual power of this algorithm lies in the ability to precisely specify these nonlinear interactions. In general, determination of the modes of the linearized system alone (Li and Hopfield 89) is insufficient to say what the attractors of the nonlinear system will be.
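The linear-term projection T = P J P^{-1} can be demonstrated with a small pure-Python example (a sketch for a single oscillatory mode in a two-node network; the amplitude pattern, phases, and frequency are made up, and only the linear term is shown, not the fourth-order tensor):

```python
import math

# One cycle with amplitude pattern x and phases theta.  The columns of P
# are the real and imaginary parts [x*cos(theta), x*sin(theta)].
x = [1.0, 0.5]
theta = [0.0, math.pi / 3]
P = [[x[0] * math.cos(theta[0]), x[0] * math.sin(theta[0])],
     [x[1] * math.cos(theta[1]), x[1] * math.sin(theta[1])]]

u, w = 0.2, 1.5          # linear growth rate u_s and frequency omega_s
J = [[u, -w], [w, u]]    # canonical 2x2 block for a conjugate eigenvalue pair

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

# Learning transformation for the linear term: T = P J P^-1.
T = matmul(matmul(P, J), inv2(P))
# By construction T P = P J: the stored cycle is an eigenmode of T.
TP, PJ = matmul(T, P), matmul(P, J)
```

Writing the matrix product in component form recovers the expression for T_ij given in the theorem.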
Amplitude Fixed Points

Because the amplitude equations are independent of the rotation $\theta$, the fixed points of the amplitude equations characterize the asymptotic states of the underlying oscillatory modes. The stability of these cycles is therefore given by the stability of the fixed points of the amplitude equations. On each axis $r_s$, the other components $r_j$ are zero, by definition,

$$\dot{r}_j = r_j \Big( u_j - \sum_k a_{jk} r_k^2 \Big) = 0, \quad \text{for } r_j = 0,$$

which leaves

$$\dot{r}_s = r_s ( u_s - a_{ss} r_s^2 ) .$$

There is an equilibrium on each axis s, at $r^*_s = (u_s/a_{ss})^{1/2}$, as claimed. Now the Jacobian of the amplitude equations at some fixed point $r^*$ has elements

$$J_{ij} = -2 a_{ij} r^*_i r^*_j , \qquad J_{ii} = u_i - 3 a_{ii} r^{*2}_i - \sum_{j \neq i} a_{ij} r^{*2}_j .$$

For a fixed point $r^*_s$ on axis s, $J_{ij} = 0$, since $r^*_i$ or $r^*_j = 0$, making J a diagonal matrix whose entries are therefore its eigenvalues. Now $J_{ii} = u_i - a_{is} r^{*2}_s$, for $i \neq s$, and $J_{ss} = u_s - 3 a_{ss} r^{*2}_s$. Since $r^{*2}_s = u_s/a_{ss}$, $J_{ss} = -2 u_s$, and $J_{ii} = u_i - a_{is} (u_s/a_{ss})$. This gives $a_{is}/a_{ss} > u_i/u_s$ as the condition for negative eigenvalues that assures the stability of $r^*_s$. Choice of $a_{ji}/a_{ii} > u_j/u_i$, for all i, j, therefore guarantees stability of all axis fixed points.

Coordinate Transformations

We now construct the neural network from these well-behaved equations by the following transformations. First, polar to Cartesian, $(r_s, \theta_s)$ to $(v_{2s-1}, v_{2s})$: using $v_{2s-1} = r_s \cos\theta_s$, $v_{2s} = r_s \sin\theta_s$, and differentiating these, gives

$$\dot{v}_{2s-1} = \dot{r}_s \cos\theta_s - r_s \dot{\theta}_s \sin\theta_s$$

by the chain rule. Now substituting $\cos\theta_s = v_{2s-1}/r_s$ and $r_s \sin\theta_s = v_{2s}$ gives

$$\dot{v}_{2s-1} = - v_{2s} \dot{\theta}_s + ( v_{2s-1}/r_s ) \dot{r}_s .$$

Entering the expressions of the normal form for $\dot{r}_s$ and $\dot{\theta}_s$, and since $r_s^2 = v_{2s-1}^2 + v_{2s}^2$, gives

$$\dot{v}_{2s-1} = u_s v_{2s-1} - \omega_s v_{2s} - \sum_j^{n/2} [ v_{2s-1} a_{sj} - v_{2s} b_{sj} ] ( v_{2j-1}^2 + v_{2j}^2 ) .$$

Similarly,

$$\dot{v}_{2s} = u_s v_{2s} + \omega_s v_{2s-1} - \sum_j^{n/2} [ v_{2s} a_{sj} + v_{2s-1} b_{sj} ] ( v_{2j-1}^2 + v_{2j}^2 ) .$$

Setting the $b_{sj} = 0$ for simplicity, choosing $u_s = 1 - \tau$ to get a standard network form, and reindexing i, j = 1, 2, ...
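The closed-form Jacobian entries above are easy to check numerically. This sketch (same illustrative values as before: u_s = 0.5, a_ss = 1, a_ks = 2) compares a finite-difference Jacobian of the amplitude vector field at the axis fixed point against J_ss = -2u_s and J_ii = u_i - a_is(u_s/a_ss), and confirms all eigenvalues are negative:

```python
import numpy as np

u = np.array([0.5, 0.5, 0.5])
A = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 2.0],
              [2.0, 2.0, 1.0]])

def f(r):
    # amplitude vector field dr/dt = r * (u - A r^2)
    return r * (u - A @ r**2)

# axis fixed point on axis s = 0: r* = (u_s/a_ss)^(1/2)
r_star = np.array([(u[0] / A[0, 0]) ** 0.5, 0.0, 0.0])

# central finite-difference Jacobian of f at r*
eps = 1e-6
J = np.zeros((3, 3))
for j in range(3):
    e = np.zeros(3); e[j] = eps
    J[:, j] = (f(r_star + e) - f(r_star - e)) / (2 * eps)

print(np.round(J, 4))
```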
, n, we get the Cartesian equivalent of the polar normal form equations:

$$\dot{v}_i = -\tau v_i + \sum_j^n J_{ij} v_j - v_i \sum_j^n A_{ij} v_j^2 .$$

Here J is a matrix containing 2x2 blocks along the diagonal of the local couplings of the linear terms of each pair of the previous equations $v_{2s-1}, v_{2s}$, with $\tau$ separated out of the diagonal terms. The matrix A has 2x2 blocks of identical coefficients $a_{sj}$ of the nonlinear terms from each pair:

$$J = \begin{bmatrix} 1 & -\omega_1 & & \\ \omega_1 & 1 & & \\ & & 1 & -\omega_2 \\ & & \omega_2 & 1 \end{bmatrix} \cdots, \qquad A = \begin{bmatrix} a_{11} & a_{11} & a_{12} & a_{12} \\ a_{11} & a_{11} & a_{12} & a_{12} \\ a_{21} & a_{21} & a_{22} & a_{22} \\ a_{21} & a_{21} & a_{22} & a_{22} \end{bmatrix} \cdots$$

Learning Transformation - Linear Term

Second; J is the canonical form of a real matrix with complex conjugate eigenvalues, where the conjugate pairs appear in blocks along the diagonal as shown. The Cartesian normal form equations describe the interaction of these linearly uncoupled complex modes due to the coupling of the nonlinear terms. We can interpret the normal form equations as network equations in eigenvector (or "memory") coordinates, given by some diagonalizing transformation P, containing those eigenvectors as its columns, so that $J = P^{-1} T P$. Then it is clear that T may instead be determined by the reverse projection $T = P J P^{-1}$ back into network coordinates, if we start with desired eigenvectors and eigenvalues. We are free to choose as columns in P, the real and imaginary vectors $[x^s \cos\theta^s ,\; x^s \sin\theta^s]$ of the cycles $x^s e^{i\theta^s}$ of any linearly independent set S of patterns to be learned. If we write the matrix expression for the projection in component form, we recover the expression given in the theorem for $T_{ij}$.

Nonlinear Term Projection

The nonlinear terms are transformed as well, but the expression cannot be easily written in matrix form.
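The linear-term projection is just a similarity transform, and can be sketched directly (the amplitudes, phases, and frequencies below are arbitrary illustrative choices). We build P from the real and imaginary parts x^s cos(theta^s) and x^s sin(theta^s), form J from 2x2 blocks [[1, -w], [w, 1]], and confirm that T = P J P^{-1} carries the prescribed eigenvalues 1 +/- i*omega_s:

```python
import numpy as np

rng = np.random.default_rng(0)
omegas = [2.0, 3.5, 5.0]                         # desired frequencies, one per stored cycle
n = 2 * len(omegas)

# columns of P: real and imaginary parts x^s cos(theta^s), x^s sin(theta^s)
P = np.empty((n, n))
for s in range(len(omegas)):
    x = rng.uniform(0.5, 1.5, size=n)            # component amplitudes x^s
    theta = rng.uniform(0.0, 2 * np.pi, size=n)  # component phases theta^s
    P[:, 2 * s] = x * np.cos(theta)
    P[:, 2 * s + 1] = x * np.sin(theta)

# J: 2x2 diagonal blocks holding the conjugate eigenvalue pairs 1 +/- i*omega_s
J = np.zeros((n, n))
for s, w in enumerate(omegas):
    J[2 * s:2 * s + 2, 2 * s:2 * s + 2] = [[1.0, -w], [w, 1.0]]

T = P @ J @ np.linalg.inv(P)                     # linear-term projection
eig = np.linalg.eigvals(T)
print(np.sort(eig.imag))
```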
Using the component form of the transformation, $x_i = \sum_j P_{ij} v_j$ and $v_j = \sum_k P^{-1}_{jk} x_k$, substituting into the Cartesian normal form gives:

$$\dot{x}_i = -\tau \sum_j P_{ij} \Big( \sum_k P^{-1}_{jk} x_k \Big) + \sum_j P_{ij} \sum_k J_{jk} \Big( \sum_l P^{-1}_{kl} x_l \Big) - \sum_j P_{ij} \Big( \sum_k P^{-1}_{jk} x_k \Big) \sum_l A_{jl} \Big( \sum_m P^{-1}_{lm} x_m \Big) \Big( \sum_n P^{-1}_{ln} x_n \Big)$$

Rearranging the orders of summation gives

$$\dot{x}_i = -\tau x_i + \sum_l \Big( \sum_{jk} P_{ij} J_{jk} P^{-1}_{kl} \Big) x_l - \sum_{kmn} \Big( \sum_{jl} P_{ij} P^{-1}_{jk} A_{jl} P^{-1}_{lm} P^{-1}_{ln} \Big) x_k x_m x_n$$

Finally, performing the bracketed summations and relabeling indices gives us the network of the theorem,

$$\dot{x}_i = -\tau x_i + \sum_j T_{ij} x_j - \sum_{jkl} T_{ijkl} x_j x_k x_l ,$$

with the expression for the tensor of the nonlinear term,

$$T_{ijkl} = \sum_{mn} P_{im} A_{mn} P^{-1}_{mj} P^{-1}_{nk} P^{-1}_{nl} . \qquad \text{Q.E.D.}$$

LEARNING RULE EXTENSIONS

This is the core of the mathematical story, and it may be extended in many ways. When the columns of P are orthonormal, then $P^{-1} = P^T$, and the formula above for the linear network coupling becomes $T = P J P^T$. Then, for complex eigenvectors,

$$T_{ij} = \sum_s x^s_i x^s_j \left[ \cos(\theta^s_i - \theta^s_j) + \omega_s \sin(\theta^s_i - \theta^s_j) \right] .$$

This is now a local, additive, incremental learning rule for synapse ij, and the system can be truly self-organizing because the net can modify itself based on its own activity. Between units of equal phase, or when $\theta^s_i = \theta^s_j = 0$ for a static pattern, this reduces to the usual Hebb rule. In a similar fashion, the learning rule for the higher order nonlinear terms becomes a multiple periodic outer product rule when the matrix A is chosen to have a simple form. Given our present ignorance of the full biophysics of intracellular processing, it is not entirely impossible that some dimensionality of the higher order weights in the mathematical network could be implemented locally within the cells of a biological network, using the information available on the primary lines given by the linear connections discussed above.
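Putting both projections together gives a directly runnable version of the constructed network (illustrative patterns and parameters, with the b_sj = 0 simplification used above). The sketch builds T_ij and T_ijkl from P, J, and A, Euler-integrates the fourth-order network, and checks that the mode amplitudes converge to the programmed value (u_s/a_ss)^(1/2) for the selected pattern while the competing pattern dies out:

```python
import numpy as np

rng = np.random.default_rng(1)
omegas = [2.0, 3.5]                         # two stored cycles, n = 4 units
m = len(omegas)
n = 2 * m
tau = 0.5                                   # u_s = 1 - tau = 0.5 for every mode

P = np.empty((n, n))
for s in range(m):
    x = rng.uniform(0.5, 1.5, size=n)
    th = rng.uniform(0.0, 2 * np.pi, size=n)
    P[:, 2 * s] = x * np.cos(th)
    P[:, 2 * s + 1] = x * np.sin(th)
Pinv = np.linalg.inv(P)

J = np.zeros((n, n))
for s, w in enumerate(omegas):
    J[2 * s:2 * s + 2, 2 * s:2 * s + 2] = [[1.0, -w], [w, 1.0]]

a = np.array([[1.0, 2.0],                   # normal-form coefficients a_sj with
              [2.0, 1.0]])                  # a_ss/a_ks = 1/2 < 1 (stability condition)
A = np.kron(a, np.ones((2, 2)))             # 2x2 blocks of repeated coefficients

T2 = P @ J @ Pinv                                                # T_ij
T4 = np.einsum('im,mn,mj,nk,nl->ijkl', P, A, Pinv, Pinv, Pinv)   # T_ijkl

v0 = np.zeros(n)
v0[0], v0[2] = 0.3, 0.05                    # start near cycle 0, small admixture of cycle 1
xv = P @ v0
dt = 0.002
for _ in range(50000):                      # Euler integration of the fourth-order network
    cubic = np.einsum('ijkl,j,k,l->i', T4, xv, xv, xv)
    xv = xv + dt * (-tau * xv + T2 @ xv - cubic)

v = Pinv @ xv
r = np.hypot(v[0::2], v[1::2])              # global amplitudes r_s of the two modes
print(r)
```

The tolerance on the surviving amplitude is loose because forward Euler slightly inflates the limit-cycle radius.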
When the A matrix is chosen to have uniform entries $A_{ij} = c$ for all its off-diagonal 2x2 blocks, and uniform entries $A_{ij} = c - d$ for the diagonal blocks, then $T_{ijkl}$ reduces to a multiple outer product rule, and the network architecture generated by this learning rule contains the corresponding higher-order correlations. This reduces to an architecture without higher order correlations in the case that we choose a completely uniform A matrix ($A_{ij} = c$, for all i, j). This network has fixed points on the axes of the normal form as always, but the stability condition is not satisfied since the diagonal normal form coefficients are equal to, not less than, the remaining A matrix entries. In (Baird 89) we describe how clamped input (inspiration) can break this symmetry and make the nearest stored pattern be the only attractor. All of the above results hold as well for networks with sigmoids, provided their coupling is such that they have a Taylor's expansion which is equal to the above networks up to third order. The results then hold only in the neighborhood of the origin for which the truncated expansion is accurate. The expected performance of such systems has been verified in simulations.

Acknowledgements

Supported by AFOSR-87-0317. I am very grateful for the support of Walter Freeman and invaluable assistance of Morris Hirsch.

References

B. Baird. Bifurcation Theory Methods For Programming Static or Periodic Attractors and Their Bifurcations in Dynamic Neural Networks. Proc. IEEE Int. Conf. Neural Networks, San Diego, CA, p. I-9, July (1988).

B. Baird. Bifurcation Theory Approach to Vectorfield Programming for Periodic Attractors. Proc. INNS/IEEE Int. Conf. on Neural Networks, Washington D.C., June (1989).

W. J. Freeman & B. Baird. Relation of Olfactory EEG to Behavior: Spatial Analysis. Behavioral Neuroscience (1986).

W. J. Freeman & B. W. van Dijk. Spatial Patterns of Visual Cortical EEG During Conditioned Reflex in a Rhesus Monkey. Brain Research, 422, p. 267 (1987).
C. M. Gray and W. Singer. Stimulus Specific Neuronal Oscillations in Orientation Columns of Cat Visual Cortex. PNAS. In Press (1988).

Z. Li & J. J. Hopfield. Modeling The Olfactory Bulb. Biological Cybernetics. Submitted (1989).
|
1988
|
21
|
104
|
HETEROGENEOUS NEURAL NETWORKS FOR ADAPTIVE BEHAVIOR IN DYNAMIC ENVIRONMENTS

Hillel J. Chiel, Biology Dept. & CAISR, CWRU
Randall D. Beer, Dept. of Computer Engineering and Science and Center for Automation and Intelligent Systems Research, Case Western Reserve University, Cleveland, OH 44106
Leon S. Sterling, CS Dept. & CAISR, CWRU

ABSTRACT

Research in artificial neural networks has generally emphasized homogeneous architectures. In contrast, the nervous systems of natural animals exhibit great heterogeneity in both their elements and patterns of interconnection. This heterogeneity is crucial to the flexible generation of behavior which is essential for survival in a complex, dynamic environment. It may also provide powerful insights into the design of artificial neural networks. In this paper, we describe a heterogeneous neural network for controlling the walking of a simulated insect. This controller is inspired by the neuroethological and neurobiological literature on insect locomotion. It exhibits a variety of statically stable gaits at different speeds simply by varying the tonic activity of a single cell. It can also adapt to perturbations as a natural consequence of its design.

INTRODUCTION

Even very simple animals exhibit a dazzling variety of complex behaviors which they continuously adapt to the changing circumstances of their environment. Nervous systems evolved in order to generate appropriate behavior in dynamic, uncertain situations and thus insure the survival of the organisms containing them. The function of a nervous system is closely tied to its structure. Indeed, the heterogeneity of nervous systems has been found to be crucial to those few behaviors for which the underlying neural mechanisms have been worked out in any detail [Selverston, 1988]. There is every reason to believe that this conclusion will remain valid as more complex nervous systems are studied: The brain as an "organ" is much more diversified than, for example, the kidney or the liver.
If the performance of relatively few liver cells is known in detail, there is a good chance of defining the role of the whole organ. In the brain, different cells perform different, specific tasks ... Only rarely can aggregates of neurons be treated as though they were homogeneous. Above all, the cells in the brain are connected with one another according to a complicated but specific design that is of far greater complexity than the connections between cells in other organs. ([Kuffler, Nicholls, & Martin, 1984], p. 4)

In contrast to research on biological nervous systems, work in artificial neural networks has primarily emphasized uniform networks of simple processing units with a regular interconnection scheme. These homogeneous networks typically depend upon some general learning procedure to train them to perform specific tasks. This approach has certain advantages. Such networks are analytically tractable and one can often prove theorems about their behavior. Furthermore, such networks have interesting computational properties with immediate practical applications. In addition, the necessity of training these networks has resulted in a resurgence of interest in learning, and new training procedures are being developed. When these procedures succeed, they allow the rapid construction of networks which perform difficult tasks. However, we believe that the role of learning may have been overemphasized in artificial neural networks, and that the architectures and heterogeneity of biological nervous systems have been unduly neglected. We may learn a great deal from more careful study of the design of biological nervous systems and the relationship of this design to behavior. Toward this end, we are exploring the ways in which the architecture of the nervous systems of simpler organisms can be utilized in the design of artificial neural networks.
We are particularly interested in developing neural networks capable of continuously synthesizing appropriate behavior in dynamic, underspecified, and uncertain environments of the sort encountered by natural animals.

THE ARTIFICIAL INSECT PROJECT

In order to address these issues, we have begun to construct a simulated insect which we call Periplaneta computatrix. Our ultimate goal is to design a nervous system capable of endowing this insect with all of the behaviors required for long-term survival in a complex and dynamic simulated environment similar to that of natural insects. The skills required to survive in this environment include the basic abilities to move around, to find and consume food when necessary, and to escape from predators. In this paper, we focus on the design of that portion of the insect's nervous system which controls its locomotion. In designing this insect and the nervous system which controls it, we are inspired by the biological literature. It is important to emphasize, however, that this is not a modeling project. We are not attempting to reproduce the experimental data on a particular animal; rather, we are using insights gleaned from Biology to design neural networks capable of generating similar behaviors. In this manner, we hope to gain a better understanding of the role heterogeneity plays in the generation of behavior by nervous systems, and to abstract design principles for use in artificial neural networks.

Figure 1. Periplaneta computatrix

BODY

The body of our artificial insect is shown in Figure 1. It is loosely based on the American Cockroach, Periplaneta americana [Bell & Adiyodi, 1981]. However, it is a reasonable abstraction of the bodies of most insects. It consists of an abdomen, head, six legs with feet, two antennae, and two cerci in the rear. The mouth can open and close and contains tactile and chemical sensors.
The antennae also contain tactile and chemical sensors. The cerci contain tactile and wind sensors. The feet may be either up or down. When a foot is down, it appears as a black square. Finally, a leg can apply forces which translate and rotate the body whenever its foot is down. In addition, though the insect is only two-dimensional, it is capable of "falling down." Whenever its center of mass falls outside of the polygon formed by its supporting feet, the insect becomes statically unstable. If this condition persists for any length of time, then we say that the insect has "fallen down" and the legs are no longer able to move the body.

NEURAL MODEL

The essential challenge of the Artificial Insect Project is to design neural controllers capable of generating the behaviors necessary to the insect's survival. The neural model that we are currently using to construct our controllers is shown in Figure 2. It represents the firing frequency of a cell as a function of its input potential. We have used saturating linear threshold functions for this relationship (see inset). The RC characteristics of the cell membrane are also represented. These cells are interconnected by weighted synapses which can cause currents to flow through this membrane. Finally, our model includes the possibility of additional intrinsic currents which may be time and voltage dependent. These currents allow us to capture some of the intrinsic properties which make real neurons unique and have proven to be important components of the neural mechanisms underlying many behaviors.

Figure 2. Neural Model

For example, a pacemaker cell is a neuron which is capable of endogenously producing rhythmic bursting. Pacemakers have been implicated in a number of temporally patterned behaviors and play a crucial role in our locomotion controller. As described by Kandel (1976, pp.
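A behavioral sketch of this neural model (all constants invented for illustration, not taken from the paper): an RC membrane driven by synaptic current, with a saturating linear threshold function mapping potential to firing frequency.

```python
def firing_rate(v, v_thresh=1.0, gain=50.0, f_max=100.0):
    # saturating linear threshold: zero below threshold, linear above, then saturated
    return min(f_max, max(0.0, gain * (v - v_thresh)))

def membrane(i_syn, t_total=1.0, dt=0.001, r=2.0, c=0.05):
    # RC membrane: C dV/dt = i_syn - V/R, so tau = R*C and steady state V = i_syn*R
    v = 0.0
    for _ in range(int(t_total / dt)):
        v += dt * (i_syn * r - v) / (r * c)
    return v

v = membrane(i_syn=1.0)   # step current: V relaxes toward i_syn * R = 2.0
print(v, firing_rate(v))
```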
260-268), a pacemaker cell exhibits the following characteristics: (1) when it is sufficiently inhibited, it is silent, (2) when it is sufficiently excited, it bursts continuously, (3) between these extremes, the interburst interval is a continuous function of the membrane potential, (4) a transient excitation which causes the cell to fire between bursts can reset the bursting rhythm, and (5) a transient inhibition which prematurely terminates a burst can also reset the bursting rhythm. These characteristics can be reproduced with our neural model through the addition of two intrinsic currents. IH is a depolarizing current which tends to pull the membrane potential above threshold. IL is a hyperpolarizing current which tends to pull the membrane potential below threshold. These currents change according to the following rules: (1) IH is triggered whenever the cell goes above threshold or IL terminates, and it then remains active for a fixed period of time, and (2) IL is triggered whenever IH terminates, and it then remains active for a variable period of time whose duration is a function of the membrane potential. In our work to date, the voltage dependence of IL has been linear.

LOCOMOTION

An animal's ability to move around its environment is fundamental to many of its other behaviors. In most insects, this requirement is fulfilled by six-legged walking. Thus, this was the first capability we sought to provide to P. computatrix. Walking involves the generation of temporally patterned forces and stepping movements such that the insect maintains a steady forward motion at a variety of speeds without falling down. Though we do not address all of these issues here, it is worth pointing out that locomotion is an interesting adaptive behavior in its own right. An insect robustly solves this complex coordination problem in real time in the presence of variations in load and terrain, developmental changes, and damage to the walking apparatus itself [Graham, 1985].
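The two intrinsic currents give a simple way to mimic pacemaker properties (1)-(3). In the event-driven sketch below (durations and coefficients are invented for illustration), IH has a fixed duration, IL's duration falls linearly with depolarizing input (its linear voltage dependence), strong inhibition silences the cell, and stronger excitation shortens the interburst interval:

```python
def burst_count(i_ext, t_total=200.0, t_h=5.0):
    """Count pacemaker bursts over t_total time units for a tonic input i_ext.

    t_h is the fixed duration of the depolarizing current IH (one burst).
    IL's duration shrinks linearly with input, so stronger excitation
    yields a shorter silent interburst interval.
    """
    t, bursts = 0.0, 0
    while t < t_total:
        t_l = max(0.0, 20.0 - 3.0 * i_ext)   # hyperpolarizing IL duration
        t += t_l                             # silent interburst interval
        if t >= t_total:
            break                            # sufficiently inhibited: no burst at all
        bursts += 1                          # IH triggers, cell bursts
        t += t_h
    return bursts

print(burst_count(-100.0), burst_count(2.0), burst_count(5.0))
```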
LEG CONTROLLER

The most basic components of walking are the rhythmic movements of each individual leg. These consist of a swing phase, in which the foot is up and the leg is swinging forward, and a stance phase, in which the foot is down and the leg is swinging back, propelling the body forward. In our controller, these rhythmic movements are produced by the leg controller circuit shown in Figure 3. There is one command neuron, C, for the entire controller and six copies of the remainder of this circuit, one for each leg. The rhythmic leg movements are primarily generated centrally by the portion of the leg controller shown in solid lines in Figure 3. Each leg is controlled by three motor neurons. The stance and swing motor neurons determine the force with which the leg is swung backward or forward, respectively, and the foot motor neuron controls whether the foot is up or down. Normally, the foot is down and the stance motor neuron is active, pushing the leg back and producing a stance phase. Periodically, however, this state is interrupted by a burst from the pacemaker neuron P. This burst inhibits the foot and stance motor neurons and excites the swing motor neuron, lifting the foot and swinging the leg forward. When this burst terminates, another stance phase begins. Rhythmic bursting in P thus produces the basic swing/stance cycle required for walking. The force applied during each stance phase as well as the time between bursts in P depend upon the level of excitation supplied by the command neuron C. This basic design is based on the flexor burst-generator model of cockroach walking [Pearson, 1976]. In order to properly time the transitions between the swing and stance phases, the controller must have some information about where the legs actually are.
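The centrally generated swing/stance cycle for one leg can be caricatured in a few lines (all timing constants are made up): the pacemaker P bursts periodically, the foot lifts and swings during each burst and is down otherwise, and higher command-neuron drive shortens the interburst interval, raising the stepping frequency:

```python
def step_pattern(command, t_total=100, t_swing=3):
    """One leg's centrally generated swing/stance cycle (toy version).

    command (0..1) stands in for the command neuron's tonic drive: higher
    drive shortens the pacemaker's interburst (stance) interval.
    Returns (number of swings, per-step foot-down trace).
    """
    t_stance = max(2, round(12 * (1.0 - command)))      # interburst interval
    foot_down = []
    swings = 0
    t = 0
    while t < t_total:
        foot_down += [True] * min(t_stance, t_total - t)   # stance: foot down
        t += t_stance
        if t >= t_total:
            break
        foot_down += [False] * min(t_swing, t_total - t)   # P bursts: swing, foot up
        t += t_swing
        swings += 1
    return swings, foot_down

slow, _ = step_pattern(0.2)
fast, _ = step_pattern(0.9)
print(slow, fast)
```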
The simplest way to provide this information is to add sensors which signal when a leg has reached an extreme forward or backward angle, as shown with dashed lines in Figure 3. When the leg is all the way back, the backward angle sensor encourages P to initiate a swing by exciting it. When the leg is all the way forward, the forward angle sensor encourages P to terminate the swing by inhibiting it. These sensors serve to reinforce and fine-tune the centrally generated stepping rhythm. They were inspired by the hair plate receptors in P. americana, which seem to play a similar role in its locomotion [Pearson, 1976]. The RC characteristics of our neural model cause delays at the end of each swing before the next stance phase begins. This pause produces a "jerky" walk which we sought to avoid. In order to smooth out this effect, we added a stance reflex comprised of the dotted connections shown in Figure 3. This reflex gives the motor neurons a slight "kick" in the right direction to begin a stance whenever the leg is swung all the way forward and is also inspired by the cockroach [Pearson, 1976].

Figure 3. Leg Controller Circuit

Figure 4. Central Coupling between Pacemakers

LOCOMOTION CONTROLLER

In order for these six individual leg controllers to serve as the basis for a locomotion controller, we must address the issue of stability. Arbitrary patterns of leg movements will not, in general, lead to successful locomotion. Rather, the movements of each leg must be synchronized in such a way as to continuously maintain stability. A good rule of thumb is that adjacent legs should be discouraged from swinging at the same time. As shown in Figure 4, this constraint was implemented by mutual inhibition between the pacemakers of adjacent legs.
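The effect of the mutual inhibition in Figure 4 can be illustrated with a toy scheduler (timing constants invented; this is a discrete caricature, not the analog network): each leg swings when its stance timer expires, but is held back while any adjacent leg is mid-swing, so adjacent legs never swing together:

```python
# legs: 0=L1, 1=L2, 2=L3, 3=R1, 4=R2, 5=R3; adjacency as in Figure 4
ADJACENT = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
neighbors = {i: [b if a == i else a for a, b in ADJACENT if i in (a, b)]
             for i in range(6)}

SWING = 2                                   # swing duration, in time steps
stance = [6, 6, 8, 6, 6, 8]                 # rear legs (L3, R3) step more slowly
timer = [1, 3, 5, 4, 2, 6]                  # staggered initial stance timers
swinging = [0] * 6                          # remaining swing steps per leg
steps_taken = [0] * 6
history = []

for _ in range(300):
    for leg in range(6):
        if swinging[leg]:
            swinging[leg] -= 1              # an ongoing swing continues
        else:
            timer[leg] -= 1
            # mutual inhibition: a pacemaker may only start a swing
            # while no adjacent leg is mid-swing
            if timer[leg] <= 0 and all(not swinging[nb] for nb in neighbors[leg]):
                swinging[leg] = SWING
                timer[leg] = stance[leg]
                steps_taken[leg] += 1
    history.append([s > 0 for s in swinging])

print(steps_taken)
```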
So, for example, when leg L2 is swinging, legs L1, L3 and R2 are discouraged from also swinging, but legs R1 and R3 are unaffected (see Figure 5a for leg labelings). This coupling scheme is also derived from Pearson's (1976) work. The gaits adopted by the controller described above depend in general upon the initial angles of the legs. To further enhance stability, it is desirable to impose some reliable order to the stepping sequence. Many animals exhibit a stepping sequence known as a metachronal wave, in which a wave of stepping progresses from back to front. In insects, for example, the back leg swings, then the middle one, then the front one on each side of the body. This sequence is achieved in our controller by slightly increasing the leg angle ranges of the rear legs, lowering their stepping frequency. Under these conditions, the rear leg oscillators entrain the middle and front ones, and produce metachronal waves [Graham, 1977].

RESULTS

When this controller is embedded in the body of our simulated insect, it reliably produces successful walking. We have found that the insect can be made to walk at different speeds with a variety of gaits simply by varying the firing frequency of the command neuron C. Observed gaits range from the wave gait, in which the metachronal waves on each side of the body are very nearly separated, to the tripod gait, in which the front and back legs on each side of the body step with the middle leg on the opposite side. These gaits fall out of the interaction between the dynamics of the neural controller and the body in which it is embedded.

Figure 5. (A) Description of Some Gaits Observed in Natural Insects (from [Wilson, 1966]). (B) Selected Gaits Observed in P. computatrix.

If the legs are labeled as shown at the top of Figure 5a, then gaits may be conveniently described by their stepping patterns. In this representation, a black bar is displayed during the swing phase of each leg. The space between bars represents the stance phase. Selected gaits observed in P. computatrix at different speeds are shown in Figure 5b as the command neuron firing frequency is varied from lowest (top) to highest (bottom). At the lower speeds, the metachronal waves on each side of the body are very apparent. The metachronal waves can still be discerned in faster walks. However, they increasingly overlap as the stance phases shorten, until the tripod gait appears at the highest speeds. This sequence of gaits bears a strong resemblance to some of those that have been described for natural insects, as shown in Figure 5a [Wilson, 1966]. In order to study the robustness of this controller and to gain insight into the detailed mechanisms of its operation, we have begun a series of lesion studies. Such studies examine the behavioral effects of selective damage to a neural controller. This study is still in progress and we only report a few preliminary results here. In general, we have been repeatedly surprised by the intricacy of the dynamics of this controller. For example, removal of all of the forward angle sensors resulted in a complete breakdown of the metachronal wave at low speeds. However, at higher speeds, the gait was virtually unaffected.
Only brief periods of instability caused by the occasional overlap of the slightly longer than normal swing phases were observed in the tripod gait, but the insect did not fall down. Lesioning single forward angle sensors often dynamically produced compensatory phase shifts in the other legs. Lesions of selected central connections produced similarly interesting effects. In general, our studies seem to suggest subtle interactions between the central and peripheral components of the controller which deserve much more exploration. Finally, we have observed the phenomena of reflex stepping in P. computatrix. When the central locomotion system is completely shut down by strongly inhibiting the command neuron and the insect is continuously pushed from behind, it is still capable of producing an uncoordinated kind of walking. As the insect is pushed forward, a leg whose foot is down bends back until the backward angle sensor initiates a swing by exciting the pacemaker neuron P. When the leg has swung all the way forward, the stance reflex triggered by the forward angle sensor puts the foot down and the cycle repeats. Brooks (1989) has described a semi-distributed locomotion controller for an insect-like autonomous robot. We are very much in agreement with his general approach. However, his controller is not as fully distributed as the one described above. It relies on a central leg lift sequencer which must be modified to produce different gaits. Donner (1985) has also implemented a distributed hexapod locomotion controller inspired by an early model of Wilson's (1966). His design used individual leg controllers driven by leg load and position information. These leg controllers were coupled by forward excitation from posterior legs. Thus, his stepping movements were produced by reflex-driven peripheral oscillators rather than the central oscillators used in our model. He did not report the generation of the series of gaits shown in Figure 5a.
Donner also demonstrated the ability of his controller to adapt to a missing leg. We have experimented with leg amputations as well, but with mixed success. We feel that more accurate three-dimensional load information than we currently model is necessary for the proper handling of amputations. Neither of these other locomotion controllers utilize neural networks.

CONCLUSIONS AND FUTURE WORK

We have described a heterogeneous neural network for controlling the walking of a simulated insect. This controller is completely distributed yet capable of reliably producing a range of statically stable gaits at different walking speeds simply by varying the tonic activity of a single command neuron. Lesion studies have demonstrated that the controller is robust, and suggested that subtle interactions and dynamic compensatory mechanisms are responsible for this robustness. This controller is serving as the basis for a number of other behaviors. We have already implemented wandering, and are currently experimenting with controllers for recoil responses and edge following. In the near future, we plan to implement feeding behavior and an escape response, resulting in what we feel is the minimum complement of behaviors necessary for survival in an insect-like environment. Finally, we wish to introduce plasticity into these controllers so that they may better adapt to the exigencies of particular environments. We believe that learning is best viewed as a means by which additional flexibility can be added to an existing controller. The locomotion controller described in this paper was inspired by the literature on insect locomotion. The further development of P. computatrix will continue to draw inspiration from the neuroethology and neurobiology of simpler natural organisms.
In trying to design autonomous organisms using principles gleaned from Biology, we may both improve our understanding of natural nervous systems and discover design principles of use to the construction of artificial ones. A robot with "only" the behavioral repertoire and adaptability of an insect would be an impressive achievement indeed. In particular, we have argued in this paper for a more careful consideration of the intrinsic architecture and heterogeneity of biological nervous systems in the design of artificial neural networks. The locomotion controller we have described above only hints at how productive such an approach can be.

References

Bell, W.J. and K.G. Adiyodi, eds (1981). The American Cockroach. New York: Chapman and Hall.

Brooks, R.A. (1989). A robot that walks: emergent behaviors from a carefully evolved network. Neural Computation 1(1).

Donner, M. (1987). Real-time control of walking (Progress in Computer Science, Volume 7). Cambridge, MA: Birkhauser Boston, Inc.

Graham, D. (1977). Simulation of a model for the coordination of leg movements in free walking insects. Biological Cybernetics 26:187-198.

Graham, D. (1985). Pattern and control of walking in insects. Advances in Insect Physiology 18:31-140.

Kandel, E.R. (1976). Cellular Basis of Behavior: An Introduction to Behavioral Neurobiology. W.H. Freeman.

Kuffler, S.W., Nicholls, J.G., and Martin, A.R. (1984). From Neuron to Brain: A Cellular Approach to the Function of the Nervous System. Sunderland, MA: Sinauer Associates Inc.

Pearson, K. (1976). The control of walking. Scientific American 235:72-86.

Selverston, A.I. (1988). A consideration of invertebrate central pattern generators as computational data bases. Neural Networks 1:109-117.

Wilson, D.M. (1966). Insect walking. Annual Review of Entomology 11:103-122.
|
1988
|
22
|
105
|
PROGRAMMABLE ANALOG PULSE-FIRING NEURAL NETWORKS

Alan F. Murray, Dept. of Elec. Eng., University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JL, United Kingdom.
Alister Hamilton, Dept. of Elec. Eng., University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JL, United Kingdom.
Lionel Tarassenko, Dept. of Eng. Science, University of Oxford, Parks Road, Oxford, OX1 3PJ, United Kingdom.

ABSTRACT

We describe pulse-stream firing integrated circuits that implement asynchronous analog neural networks. Synaptic weights are stored dynamically, and weighting uses time-division of the neural pulses from a signalling neuron to a receiving neuron. MOS transistors in their "ON" state act as variable resistors to control a capacitive discharge, and time-division is thus achieved by a small synapse circuit cell. The VLSI chip set design uses 2.5 micron CMOS technology.

INTRODUCTION

Neural network implementations fall into two broad classes - digital [1,2] and analog (e.g. [3,4]). The strengths of a digital approach include the ability to use well-proven design techniques, high noise immunity, and the ability to implement programmable networks. However digital circuits are synchronous, while biological neural networks are asynchronous. Furthermore, digital multipliers occupy large areas of silicon. Analog networks offer asynchronous behaviour, smooth neural activation and (potentially) small circuit elements. On the debit side, however, noise immunity is low, arbitrary high precision is not possible, and no reliable "mainstream" analog nonvolatile memory technology exists. Many analog VLSI implementations are nonprogrammable, and therefore have fixed functionality. For instance, subthreshold MOS devices have been used to mimic the nonlinearities of neural behaviour, in implementing Hopfield style nets [3], associative memory [5], visual processing functions [6], and auditory processing [7].
Electron-beam programmable resistive interconnects have been used to represent synaptic weights between more conventional operational-amplifier neurons [8,4]. We describe programmable analog pulse-firing neural networks that use on-chip dynamic analog storage capacitors to store synaptic weights, currently refreshed from an external RAM via a digital-analog converter.

PULSE-FIRING NEURAL NETWORKS

A pulse-firing neuron i is a circuit which signals its state Vi by generating a stream of 0-5V pulses on its output. The pulse rate Ri varies from 0 when neuron i is OFF to Ri(max) when neuron i is fully ON. Switching between the OFF and ON states is a smooth transition in output pulse rate between these lower and upper limits. In a previous system, outlined below, the synapse allows a proportion of complete presynaptic neural pulses Vj to be added (electrically OR-ed) to its output. A synaptic "gating" function, determined by Tij, allowed bursts of complete pulses through the synapse. Moving down a column of synapses, therefore, we see an ever more crowded asynchronous mass of pulses, representing the aggregated activity of the receiving neuron. In the system that forms the substance of this paper, a proportion (determined by Tij) of each presynaptic pulse is passed to the postsynaptic summation.

Figure 1. Neuron Circuit (excitatory and inhibitory pulse streams feed an integrator whose activity xi drives a ring-oscillator pulse generator).

NEURON CIRCUIT

Figure 1 shows a CMOS implementation of the pulse-firing neuron function in a system where excitatory and inhibitory pulses are accumulated on separate channels.
The output stage of the neuron consists of a "ring oscillator" - a feedback circuit containing an odd number of logic inversions, with the loop broken by a NAND gate, controlled by a smoothly varying voltage representing the neuron's total activity,

    xj = Σ (i = 0 to n-1) Tji Vi

This activity is increased or decreased by the dumping or removal of charge packets from the "integrator" circuit. The arrival of an excitatory pulse dumps charge, while an inhibitory pulse removes it. Figure 2 shows a device-level (SPICE) simulation of the neuron circuit. A strong excitatory input causes the neural potential to rise in steps and the neuron turns ON. Subsequent inhibitory pulses remove charge packets from the integrating capacitor at a higher rate, driving the neuron potential down and switching the neuron OFF.

Figure 2. SPICE Simulation of Neuron (traces: neuron output, neural potential, inhibitory input, excitatory input).

SYNAPSE CIRCUIT - USING CHOPPING CLOCKS

In an earlier implementation, "chopping clocks" were introduced - synchronous to one another, but asynchronous to the neural firing. One bit of the (digitally stored) weight Tij indicates its sign, and each other bit of precision is represented by a chopping clock. The clocks are non-overlapping; the MSB clock is high for 1/2 of the time, the next for 1/4 of the time, etc. These clocks are used to gate bursts of pulses such that a fraction Tij of the pulses are passed from the input of the synapse to either the excitatory or inhibitory output channel.

CHOPPING CLOCK SYSTEM - PROBLEMS

A custom VLSI synaptic array has been constructed [9] with the neural function realised in discrete SSI to allow flexibility in the choice of time constants.
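The charge-packet behaviour of the integrator can be caricatured in a few lines. This is only an illustrative sketch: the packet size, the activity bounds, and the linear rate mapping below are invented for the example, not taken from the chip.

```python
# Toy discrete-time model of the pulse-firing neuron's activity integrator:
# excitatory pulses dump fixed charge packets, inhibitory pulses remove
# them, and the output pulse rate moves smoothly between 0 and R_MAX.

R_MAX = 1.0          # maximum firing rate (normalised, invented)
CHARGE_PACKET = 0.1  # activity change per input pulse (invented)

def integrate_activity(excitatory_pulses, inhibitory_pulses, x0=0.0):
    """Return the neural activity after trains of per-step pulse counts."""
    x = x0
    for exc, inh in zip(excitatory_pulses, inhibitory_pulses):
        x += CHARGE_PACKET * (exc - inh)
        x = min(max(x, 0.0), 1.0)  # activity bounded by the supply rails
    return x

def firing_rate(x):
    """Smooth transition of output pulse rate between OFF and fully ON."""
    return R_MAX * x
```

With a steady excitatory input the activity ratchets upward in steps, mirroring the staircase rise of the neural potential in the SPICE trace of Figure 2.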
The technique has proven successful, but suffers from a number of problems:

- Digital gating ("using chopping clocks") is clumsy
- Excitation and inhibition on separate lines - bulky
- Synapse complicated and of large area
- < 100 synapses per chip
- < 10 neurons per chip

In order to overcome these problems we have devised an alternative arithmetic technique that modulates individual pulse widths and uses analog dynamic weight storage. This results in a much smaller synapse.

Figure 3. Pulse Multiplication (a presynaptic pulse of width W yields a postsynaptic pulse of width W x Tij, which increments the activity).

SYNAPSE CIRCUIT - PULSE MULTIPLICATION

The principle of operation of the new synapse is illustrated in Figure 3. Each presynaptic pulse of width W is modulated by the synaptic weight Tij such that the resulting postsynaptic pulse width is W.Tij. This is achieved by using an analog voltage to modulate a capacitive discharge as illustrated in Figure 4. The presynaptic pulse enters a CMOS inverter whose positive supply voltage (Vdd) is controlled by Tij. The capacitor is nominally charged to Vdd, but begins to discharge at a constant rate when the input pulse arrives. When the voltage on the capacitor falls below the threshold of the following inverter, the synapse output goes high. At the end of the presynaptic pulse the capacitor recharges rapidly and the synapse output goes low, having output a pulse of length W.Tij. The circuit is now ready for the next presynaptic pulse. This mechanism gives a linear relationship between the pulse multiplier and the inverter supply voltage, Vdd.

Figure 4. Improved Synapse Circuit (Tik determines Vdd for the inverter driven by Vk).

FULL SYNAPSE

Synaptic weight storage is achieved using dynamic analog storage capacitors refreshed from off-chip RAM via a digital-analog converter.
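The capacitive-discharge multiplier described above can be modelled crudely as follows. The threshold voltage and discharge slope are invented numbers chosen only to exhibit the linear dependence of the output width on Vdd; the sign of that dependence in the real circuit is not specified here.

```python
def output_pulse_width(w, vdd, v_th=1.0, discharge_rate=2.0):
    """Postsynaptic pulse width from one presynaptic pulse of width w.

    The capacitor starts at vdd (set by the weight voltage) and discharges
    at discharge_rate (V/s) while the input pulse is high; the synapse
    output is high from the moment the capacitor crosses v_th until the
    input pulse ends. All constants are illustrative.
    """
    t_cross = max(vdd - v_th, 0.0) / discharge_rate  # time to reach threshold
    return max(w - t_cross, 0.0)                     # remaining pulse width
```

In this toy model the effective multiplier W.Tij / W varies linearly with Vdd over the useful range, which is the property the circuit of Figure 4 is designed to provide.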
A CMOS active-resistor inverter is used as a buffer to isolate the storage capacitor from the multiplier circuit, as shown in the circuit diagram of a full synapse in Figure 5.

Figure 5. Full Synapse Circuit (synaptic weight Tik, presynaptic state Vk, bias voltage, Vdd).

A capacitor distributed over a column of synaptic outputs stores neural activity, xj, as an analog voltage. The range over which the synapse voltage - pulse time multiplier relationship is linear is shown in Figure 6. This wide (approximately 2V) range may be used to implement inhibition and excitation in a single synapse, by "splitting" the range such that the lower volt (1-2V) represents inhibition, and the upper volt (2-3V) excitation. Each presynaptic pulse removes a packet of charge from the activity capacitor while each postsynaptic pulse adds charge at twice the rate. In this way, a synaptic weight voltage of 2V, giving a pulse length multiplier of 1/2, gives no net change in neuron activity xj. The synaptic weight voltage range 1-2V therefore gives a net reduction in neuron activity and is used to represent inhibition; the range 2-3V gives a net increase in neuron activity and is used to represent excitation.

Figure 6. Multiplier Linearity (pulse length multiplier against synapse voltage Tij, 0-5 V).

The resulting synapse circuit implements excitation and inhibition in 11 transistors per synapse. It is estimated that this technique will yield more than 100 fully programmable neurons per chip.

FURTHER WORK

There is still much work to be done to refine the circuit of Figure 5 to optimise (for instance) the mark-space ratio of the pulse firing and the effect of pulse overlap, and to minimise the power consumption. This will involve the creation of a custom pulse-stream simulator, implemented directly as code, to allow these parameters to be studied in detail in a way that probing an actual chip does not allow.
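The split-range scheme can be checked arithmetically. Assuming a linear multiplier of (V - 1)/2 over the 1-3 V range (an illustrative mapping, not the measured curve of Figure 6), a presynaptic pulse that removes charge for time w and a postsynaptic pulse that adds charge at twice that rate balance exactly at 2 V:

```python
def multiplier(weight_v):
    """Assumed linear pulse-length multiplier over the 1-3 V weight range."""
    return (weight_v - 1.0) / 2.0

def net_activity_change(w, weight_v):
    """Charge added by the postsynaptic pulse (length w * multiplier, at
    twice the rate) minus the charge removed during the presynaptic pulse."""
    return 2.0 * multiplier(weight_v) * w - w
```

A weight voltage of exactly 2 V (multiplier 1/2) leaves the activity unchanged, voltages below it give net inhibition, and voltages above it give net excitation, which is the behaviour the text describes.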
Finally, as Hebbian (and modified Hebbian - for instance [10]) learning schemes only require a synapse to "know" the presynaptic and postsynaptic states, we are able to implement learning on-chip at little cost, as the chip topology makes both of these signals available to the synapse locally. This work introduces as many exciting possibilities for truly autonomous systems as it does potential problems!

Acknowledgements

The authors acknowledge the support of the Science and Engineering Research Council (UK) in the execution of this work.

References

1. A. F. Murray, A. V. W. Smith, and Z. F. Butler, "Bit-Serial Neural Networks," Neural Information Processing Systems (Proc. 1987 NIPS Conference), p. 573, 1987.
2. S. C. J. Garth, "A Chipset for High Speed Simulation of Neural Network Systems," IEEE Conference on Neural Networks, San Diego, vol. 3, pp. 443-452, 1987.
3. M. A. Sivilotti, M. R. Emerling, and C. A. Mead, "VLSI Architectures for Implementation of Neural Networks," Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp. 408-413, 1986.
4. H. P. Graf, L. D. Jackel, R. E. Howard, B. Straughn, J. S. Denker, W. Hubbard, D. M. Tennant, and D. Schwartz, "VLSI Implementation of a Neural Network Memory with Several Hundreds of Neurons," Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp. 182-187, 1986.
5. M. Sivilotti, M. R. Emerling, and C. A. Mead, "A Novel Associative Memory Implemented Using Collective Computation," Chapel Hill Conf. on VLSI, pp. 329-342, 1985.
6. M. A. Sivilotti, M. A. Mahowald, and C. A. Mead, "Real-Time Visual Computations Using Analog CMOS Processing Arrays," Stanford VLSI Conference, pp. 295-312, 1987.
7. C. A. Mead, in Analog VLSI and Neural Systems, Addison-Wesley, 1988.
8. W. Hubbard, D. Schwartz, J. S. Denker, H. P. Graf, R. E. Howard, L. D. Jackel, B. Straughn, and D. M. Tennant, "Electronic Neural Networks," Proc.
AIP Conference on Neural Networks for Computing, Snowbird, pp. 227-234, 1986.
9. A. F. Murray, A. V. W. Smith, and L. Tarassenko, "Fully-Programmable Analogue VLSI Devices for the Implementation of Neural Networks," Int. Workshop on VLSI for Artificial Intelligence, 1988.
10. S. Grossberg, "Some Physiological and Biochemical Consequences of Psychological Postulates," Proc. Natl. Acad. Sci. USA, vol. 60, pp. 758-765, 1968.
|
1988
|
23
|
106
|
AN ANALOG VLSI CHIP FOR THIN-PLATE SURFACE INTERPOLATION

John G. Harris
California Institute of Technology
Computation and Neural Systems Option, 216-76
Pasadena, CA 91125

ABSTRACT

Reconstructing a surface from sparse sensory data is a well-known problem in computer vision. This paper describes an experimental analog VLSI chip for smooth surface interpolation from sparse depth data. An eight-node 1D network was designed in 3 µm CMOS and successfully tested. The network minimizes a second-order or "thin-plate" energy of the surface. The circuit directly implements the coupled depth/slope model of surface reconstruction (Harris, 1987). In addition, this chip can provide Gaussian-like smoothing of images.

INTRODUCTION

Reconstructing a surface from sparse sensory data is a well-known problem in computer vision. Early vision modules typically supply sparse depth, orientation, and discontinuity information. The surface reconstruction module incorporates these sparse and possibly conflicting measurements of a surface into a consistent, dense depth map. The coupled depth/slope model provides a novel solution to the surface reconstruction problem (Harris, 1987). A 1D version of this model has been implemented; fortunately, its extension to 2D is straightforward. Figure 1 depicts a high-level schematic of the circuit. The di voltages represent noisy and possibly sparse input data, the zi are the smooth output values, and the pi are the explicitly computed slopes. The vertical data resistors (with conductance g) control the confidence in the input data. In the absence of data these resistors are open circuits. The horizontal chain of smoothness resistors of conductance λ forces the derivative of the data to be smooth. This model is called the coupled depth/slope model because of the coupling between the depth and slope representations provided by the subtractor elements. The subtractors explicitly calculate a slope representation of the surface.
Any depth or slope node can be made into a constraint by fixing a voltage source to the proper location in the network. Intuitively, any sudden change in slope is smoothed out with the resistor mesh.

Figure 1. The coupled depth/slope model.

The tri-directional subtractor device (shown in Figure 2) is responsible for the coupling between the depth and slope representations. If nodes A and B are set with ideal voltage sources, then node C will be forced to A - B by the device. This circuit element is unusual in that all of its terminals can act as inputs or outputs. If nodes B and C are held constant with voltage sources, then the A terminal is fixed to B + C. If A and C are input, then B becomes A - C. When further constraints are added, this device dissipates a power proportional to (A - B - C)^2. In the limiting case of a continuous network, the total dissipated power is

    P = ∫ [ (z' - p)^2 + λ(p')^2 + g(z - d)^2 ] dx    (1)

The three terms arise from the power dissipated in the subtractors and in the two different types of resistors. Energy minimization techniques and standard calculus of variations have been used to formally show that the reconstructed surfaces, z, satisfy the 1D biharmonic equation between input data points (Harris, 1987). In the two-dimensional formulation, z is a solution of

    ∇⁴z = 0    (2)

This interpolant, therefore, provides the same results as minimizing the energy of a thin plate, which has been commonly used in surface reconstruction algorithms on digital computers (Grimson, 1981; Terzopoulos, 1983).

Figure 2. Tri-directional subtract constraint device (terminals A, B, and C).

IMPLEMENTATION

The eight-node 1D network shown in Figure 1 was designed in 3 µm CMOS (Mead, 1988) and fabricated through MOSIS. Three important components of the model must be mapped to analog VLSI: the two different types of resistors and the subtractors. The vertical confidence resistors are built with simple transconductance
amplifiers (transamps) connected as followers. The bias voltage of the transamp follower determines its conductance (g) and therefore signifies the certainty of the data. If there are no data for a given location, the corresponding transamp follower is turned off. The horizontal smoothness resistors are implemented with Mead's saturating resistor (Mead, 1988). Since conventional CMOS processes lack adequate resistive elements, we are forced to build resistors out of transistor elements. The bias voltage for Mead's resistor allows the effective conductance of these circuit elements to vary over many orders of magnitude. The most difficult component to implement in analog VLSI is the subtract constraint device. Its construction led to a general theory of constraint boxes, which can be used to implement all sorts of constraints that are useful in early vision (Harris, 1988). The implementation of the subtract constraint device is a straightforward application of constraint box theory. Figure 3 shows a generic n-terminal constraint box enforcing a constraint F on its voltage terminals. The constraints are enforced by generating a feedback current Ik for each constrained voltage terminal. Suppose F can be written as

    F(V1, V2, ..., Vn) = 0    (3)

One possible feedback equation which implements this constraint is given by

    Ik = -F (∂F/∂Vk)    (4)

When this particular choice of feedback current is used, the constraint box minimizes the least-squares error in the constraint equation (Harris, 1989). Notice that F can be scaled by any arbitrary scaling factor. This scaling factor and the capacitance at each node determine the speed of convergence of a single constraint box.

Figure 3. Generic n-terminal constraint box.
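The feedback rule Ik = -F (∂F/∂Vk) is gradient descent on the squared constraint error F^2, and that can be demonstrated numerically. The sketch below uses a central finite difference in place of analytic derivatives and a unit node capacitance; the time step and iteration count are invented.

```python
def feedback_currents(f, v, eps=1e-6):
    """Currents I_k = -F * dF/dV_k, with dF/dV_k taken by finite difference."""
    currents = []
    fv = f(v)
    for k in range(len(v)):
        vp = list(v); vp[k] += eps
        vm = list(v); vm[k] -= eps
        dfdv = (f(vp) - f(vm)) / (2 * eps)
        currents.append(-fv * dfdv)
    return currents

def settle(f, v, dt=0.1, steps=500):
    """Integrate C dV/dt = I with C = 1 until the constraint is satisfied."""
    v = list(v)
    for _ in range(steps):
        currents = feedback_currents(f, v)
        v = [vk + dt * ik for vk, ik in zip(v, currents)]
    return v
```

Starting from terminal voltages that violate a constraint, the node voltages drift until F is driven to zero, which is the least-squares settling behaviour the text attributes to the constraint box.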
The subtract constraint box given in Figure 2 requires a constraint of A - B = C, which leads to the following error equation:

    F(A, B, C) = A - B - C    (5)

Straightforward application of constraint box theory yields

    IA = -F (∂F/∂A) = -(A - B - C)
    IB = -F (∂F/∂B) = (A - B - C)    (6)
    IC = -F (∂F/∂C) = (A - B - C)

where IA, IB, and IC represent feedback currents that must be generated by the device. These current feedback equations can be implemented with two modified wide-range transamps (see Figure 4). In its linear range, a single transamp produces a current proportional to the difference of its two inputs. The negative input to each transamp is indicated by an inverting circle. The transamps have been modified to produce four outputs, two positive and two negative. The negative outputs are also represented by inverting circles. Because the difference terminal C can be positive or negative, it is measured with respect to a voltage reference VREF. VREF is a global signal which defines zero slope.

Figure 4. Tri-directional subtract constraint box.

As seen in Figure 4, the proper combination of positive and negative outputs from the two transamps is fed back to the voltage terminals to implement the feedback equations given in eq. (6). Analog networks which solve most regularizable early vision problems can be designed with networks consisting solely of linear resistances and batteries (Poggio and Koch, 1985). Unfortunately, these networks often contain negative resistances that are troublesome to implement in analog VLSI. For example, the circuit shown in Figure 5 computes the same solutions as the coupled depth/slope network described in this paper. Interestingly, a 2-D implementation of this idea was realized in the 1960s using inductors and capacitors (Volynskii and Bukhman, 1965).
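The bidirectionality of the subtract box - any terminal not held by an ideal voltage source settles so that A - B = C - follows directly from the three feedback currents above. A minimal simulation, with an invented time step standing in for the node capacitances:

```python
def settle_subtract_box(v0, clamped, dt=0.2, steps=300):
    """v0 = [A, B, C]; terminal indices in `clamped` are held by ideal sources."""
    v = list(v0)
    for _ in range(steps):
        r = v[0] - v[1] - v[2]      # residual of A - B = C, eq. (5)
        feedback = [-r, r, r]       # I_A, I_B, I_C from eq. (6)
        for k in range(3):
            if k not in clamped:    # clamped terminals do not move
                v[k] += dt * feedback[k]
    return v
```

Clamping A and B drives C toward A - B; clamping B and C instead drives A toward B + C, matching the behaviour described for the tri-directional device.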
Proper choice of the frequency of alternating current allowed the circuit elements to act as pure positive and negative impedances. Unfortunately, negative resistances are troublesome to implement, especially in analog VLSI. One of the big advantages of using constraint boxes to implement early vision algorithms is that the resulting networks do not require negative resistances.

ANALYSIS

Figure 6 shows a sample output of the circuit. Data (indicated by vertical dashed lines) were supplied at nodes 2, 5, and 8. As expected, the chip finds a smooth solution (solid line) which extrapolates beyond the known data points. It is well-known that a single resistive grid minimizes the first-order or membrane energy of a surface. Luo, Koch, and Mead (1988) have implemented a 48x48 resistive grid to perform surface interpolation. Figure 6 also shows the simulated performance of a first-order or membrane energy minimization. Data points are again supplied at nodes 2, 5, and 8. In contrast to the second-order chip results, the solution (dashed line) is much more jagged and does not extrapolate outside of the known data points (for example, see node 1).

Figure 5. A negative-resistance solution to the 1D biharmonic equation.

Figure 6. Measured data from the second-order chip (solid line) and simulated first-order result (dashed line).

Figure 7. Graphical comparison of 1D analytic Green's functions for first-order (dashed line), second-order (dotted line) and Gaussian (solid line).

Interestingly, psychophysics experiments support the smoother interpolant used by the second-order coupled depth/slope chip (Grimson, 1981). Unlike the second-order network, the first-order network is not rigid enough to incorporate either orientation constraints or orientation discontinuities (Terzopoulos, 1983). Image smoothing is a special case of surface interpolation where the data are given on a dense grid. The first-order network is a poor smoothing operator. A comparison of analytic Green's functions of the first- and second-order networks is shown in Figure 7 (the first-order shown with a dashed line and the second-order with a solid line). Note that the analytic Green's function of the second-order network (solid line) and that of standard Gaussian convolution (dotted line) are nearly identical. This fact was pointed out by Poggio, Voorhees, and Yuille (1986), when they suggested the use of the second-order energy to regularize the edge detection problem. Gaussian convolution has been claimed by many authors to be the "optimal" smoothing operator and is commonly used as the first stage of edge detection. Though the second-order network can be used to smooth images, Gaussian convolution cannot be used to solve the more difficult problem of interpolating from sparse data points.

CONCLUSION

Biharmonic surface interpolation has been successfully demonstrated in analog VLSI. To test true performance, we plan to combine a larger version of this chip with an analog stereo network. Work has already started on building the necessary circuitry for discontinuity detection during surface reconstruction. The Gaussian-like smoothing effect of this network will be further explored through building a network with photoreceptors supplying dense data input.

Acknowledgements

Support for this research was provided by the Office of Naval Research and the System Development Foundation.
The author is a Hughes Aircraft Fellow and thanks Christof Koch and Carver Mead for their ongoing support. Additional thanks to Berthold Horn for several helpful suggestions.

References

Grimson, W.E.L. From Images to Surfaces, MIT Press, Cambridge, (1981).
Harris, J.G. A new approach to surface reconstruction: the coupled depth/slope model, Proc. IEEE First Intl. Conf. Computer Vision, pp. 277-283, London, (1987).
Harris, J.G. Solving early vision problems with VLSI constraint networks, Neural Architectures for Computer Vision Workshop, AAAI-88, Minneapolis, Minnesota, Aug. 20 (1988).
Harris, J.G. Designing analog constraint boxes to solve energy minimization problems in vision, submitted to INNS Neural Networks Conference, Washington D.C., June (1989).
Luo, J., Koch, C., and Mead, C. An experimental subthreshold, analog CMOS two-dimensional surface interpolation circuit, Neural Information and Processing Systems Conference, Denver, Nov. (1988).
Mead, C.A. Analog VLSI and Neural Systems, Addison-Wesley, Reading, (1989).
Poggio, T. and Koch, C. Ill-posed problems in early vision: from computational theory to analogue networks, Proc. R. Soc. Lond. B 226: 303-323 (1985).
Poggio, T., Voorhees, H., and Yuille, A. A regularized solution to edge detection, Artif. Intell. Lab Memo No. 833, MIT, Cambridge, (1986).
Terzopoulos, D. Multilevel computational processes for visual surface reconstruction, Comp. Vision Graph. Image Proc. 24: 52-96 (1983).
Volynskii, B.A. and Bukhman, V.Ye. Analogues for Solution of Boundary-Value Problems, Pergamon Press, New York, (1965).
|
1988
|
24
|
107
|
SIMULATION AND MEASUREMENT OF THE ELECTRIC FIELDS GENERATED BY WEAKLY ELECTRIC FISH

Brian Rasnow(1), Christopher Assad(2), Mark E. Nelson(3) and James M. Bower(3)
Divisions of Physics(1), Electrical Engineering(2), and Biology(3)
Caltech, Pasadena, 91125

ABSTRACT

The weakly electric fish, Gnathonemus petersii, explores its environment by generating pulsed electric fields and detecting small perturbations in the fields resulting from nearby objects. Accordingly, the fish detects and discriminates objects on the basis of a sequence of electric "images" whose temporal and spatial properties depend on the timing of the fish's electric organ discharge and its body position relative to objects in its environment. We are interested in investigating how these fish utilize timing and body position during exploration to aid in object discrimination. We have developed a finite-element simulation of the fish's self-generated electric fields so as to reconstruct the electrosensory consequences of body position and electric organ discharge timing in the fish. This paper describes this finite-element simulation system and presents preliminary electric field measurements which are being used to tune the simulation.

INTRODUCTION

The active positioning of sensory structures (i.e. eyes, ears, whiskers, nostrils, etc.) is characteristic of the information-seeking behavior of all exploratory animals. Yet, in most existing computational models and in many standard experimental paradigms, the active aspects of sensory processing are either eliminated or controlled (e.g. by stimulating fixed groups of receptors or by stabilizing images). However, it is clear that the active positioning of receptor surfaces directly affects the content and quality of the sensory information received by the nervous system. Thus, controlling the position of sensors during sensory exploration constitutes an important feature of an animal's strategy for making sensory discriminations.
Quantitative study of this process could very well shed light on the algorithms and internal representations used by the nervous system in discriminating peripheral objects. Studies of the active use of sensory surfaces generally can be expected to pose a number of experimental challenges. This is because, in many animals, the sensory surfaces involved are themselves structurally complicated, making it difficult to reconstruct position sequences or the consequences of any repositioning. For example, while the sensory systems of rats have been the subjects of a great deal of behavioral (Welker, 1964) and neurophysiological study (Gibson & Welker, 1983), it is extremely difficult to even monitor the movements of the perioral surfaces (lips, snout, whiskers) used by these animals in their exploration of the world, let alone reconstruct the sensory consequences. For these reasons we have sought an experimental animal with a sensory system in which these sensory-motor interactions can be more readily quantified. The experimental animal which we have selected for studying the control of sensory surface position during exploration is a member of a family of African freshwater fish (Mormyridae) that use self-generated electric fields to detect and discriminate objects in their environment (Bullock & Heiligenberg, 1986). The electrosensory system in these fish relies on an "electric organ" in their tails which produces a weak pulsed electric field in the surrounding environment (significant within 1-2 body lengths) that is then detected with an array of electrosensors that are extremely sensitive to voltage drops across the skin. These "electroreceptors" allow the fish to respond to the perturbations in the electric field resulting from objects in the environment which differ in conductivity from the surrounding water (Fig. 1).
Figure 1. The peripheral electrosensory system of Gnathonemus petersii consists of an "electric organ" current source at the base of the tail and several thousand "electroreceptor" cells distributed non-uniformly over the fish's body. A conducting object near the fish causes a local increase in the current through the skin.

These fish are nocturnal, and rely more on their electric sense than on any other sensory system in performing a wide range of behaviors (e.g. detecting and localizing objects such as food). It is also known that these fish execute exploratory movements, changing their body position actively as they attempt an electrosensory discrimination (Toerring & Belbenoit, 1979). Our objective is to understand how these movements change the distribution of the electric field on the animal's skin, and to determine what, if any, relationship this has to the discrimination process. There are several clear advantages of this system for our studies. First, the electroreceptors are in a fixed position with respect to each other on the surface of the animal. Therefore, by knowing the overall body position of the animal it is possible to know the exact spatial relationship of electroreceptors with respect to objects in the environment. Second, the physical equations governing the self-generated electric field in the fish's environment are well understood. As a consequence, it is relatively straightforward to reconstruct perturbations in the electric field resulting from objects of different shape and conductance. Third, the electric potential can be readily measured, providing a direct measure of the electric field at a distance from the fish which can be used to reconstruct the potential difference across the animal's skin.
And finally, in the particular species of fish we have chosen to work with, Gnathonemus petersii, individual animals execute a brief (100 µs) electric organ discharge (EOD) at intervals of 30 msec to a few seconds. Modification of the firing pattern is known to be correlated with changes in the electrical environment (Lissmann, 1958). Thus, when the electric organ discharges, it is probable that the animal is interested in "taking a look" at its surroundings. In few other sensory systems is there as direct an indication of the attentional state of the subject. Having stated the advantages of this system for the study we have undertaken, it is also the case that considerable effort will still be necessary to answer the questions we have posed. For example, as described in this paper, in order to use electric field measurements made at a distance to infer the voltages across the surface of the animal's skin, it is necessary to develop a computer model of the fish and its environment. This will allow us to predict the field on the animal's skin surface given different body positions relative to objects in the environment. This paper describes our first steps in constructing this simulation system.

Experimental Approach and Methods

Simulations of Fish Electric Fields

The electric potential, Φ(x), generated by the EOD of a weakly electric fish in a fish tank is a solution of Poisson's equation:

    ∇·(ρ∇Φ) = f

where ρ(x) and f(x) are the impedance magnitude and source density at each point x inside and surrounding the fish. Our goal is to solve this equation for Φ given the current source density, f, generated by the electric organ and the impedances, ρ, corresponding to the properties of the fish and external objects (rocks, worms, etc.). Given ρ and f, this equation can be solved for the potential Φ using a variety of iterative approximation schemes.
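The simplest such iterative scheme can be sketched with Jacobi relaxation of the discretised Poisson equation on a small uniform grid with a grounded boundary. This is only a caricature of the paper's solver: the grid size, the uniform unit medium, and the unit point source are invented stand-ins for the real tank geometry and finite-element machinery.

```python
def solve_poisson(n=17, steps=2000):
    """Jacobi relaxation for -laplacian(phi) = f on an n x n grounded grid,
    with a unit point source at the centre and unit grid spacing."""
    phi = [[0.0] * n for _ in range(n)]
    src = [[0.0] * n for _ in range(n)]
    src[n // 2][n // 2] = 1.0
    for _ in range(steps):
        new = [[0.0] * n for _ in range(n)]   # boundary rows/columns stay 0 V
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j]
                                    + phi[i][j - 1] + phi[i][j + 1]
                                    + src[i][j])
        phi = new
    return phi
```

The relaxed potential peaks at the source and decays toward the grounded walls, the same qualitative field shape the finite-element simulation reconstructs at much higher resolution near the fish's body.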
Iterative methods, in general, first discretize the spatial domain of the problem into a set of "node" points, and convert Poisson's equation into a set of algebraic equations with the nodal potentials as the unknown parameters. The node values, in this case, each represent an independent degree of freedom of the system and, as a consequence, there are as many equations as there are nodes. This very large system of equations can then be solved using a variety of standard techniques, including relaxation methods, conjugate gradient minimization, domain decomposition, and multi-grid methods. To simulate the electric fields generated by a fish, we currently use a 2-dimensional finite element domain discretization (Hughes, 1987) and a conjugate gradient solver. We chose the finite element method because it allows us to simulate the electric fields at much higher resolution in the area of interest close to the animal's body, where the electric field is largest and where errors due to the discretization would be most severe. The finite element method is based on minimizing a global function that corresponds to the potential energy of the electric field. To compute this energy, the domain is decomposed into a large number of elements, each with uniform impedance (see Fig. 2). The global energy is expressed as a sum of the contributions from each element, where the potential within each element is assumed to be a linear interpolation of the potentials at the nodes or vertices of each element. The conjugate gradient solver determines the values of the node potentials which minimize the global energy function.

Figure 2.
The inner region of a fmite element grid constructed for simulating in 2-dimensions the electric field generated by an electric fish. Measurement of Fish Electric Fields Another aspect of our experimental approach involves the direct measurement of the potential generated by a fish's EOD in a fish tank using arrays of small electrodes and differential amplifiers. The electrodes and electronics have a high impedance which minimizes their influence on the electric fields they are designed to measure. The electrodes are made by pulling a 1mm glass capillary tube across a heated tungsten filament, resulting in a fine tapered tip through which a 1~ silver wire is run. The end of this wire is melted in a flame leaving a 200J,un ball below the glass insulation. Several electrodes are then mounted as an array on a microdrive attached to a modified X-Yplotter under computer control and giving better than 1mm positioning accuracy. Generated potentials are amplified by a factor of 10 - 100, and digitized at a rate of 100kHz per channel with a 12 bit AID converter using a Masscomp 5700 computer. An array processor searches this 440 Rasnow, Assad, Nelson and Bower continuous stream of data for EOD wavefonns. which are extracted and saved along with the position of the electrode array. Results Calibration of the Simulator In order to have confidence in the overall system, it was fD'St necessary to calibrate both the recording and the simulation procedures. To do this we set up relatively simple geometrical arrangements of sources and conductors in a fish tank for which the potential could be found analytically. The calibration source was an electronic "fake fish" circuit that generated signals resembling the discharge of Gnathonemus. Point current source A point source in a 2-dimensional box is perhaps the simplest configuration with which to initially test our electric field reconstruction system. 
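To illustrate the kind of solver involved (a generic sketch over an assumed symmetric positive definite nodal system Ax = b, not the authors' code), conjugate gradient minimization of the quadratic energy E(φ) = ½φᵀAφ − bᵀφ proceeds by successive exact line minimizations along A-conjugate directions:

```python
import numpy as np

# Hedged sketch: conjugate gradient minimization of the quadratic energy
# E(phi) = 0.5 * phi^T A phi - b^T phi for a symmetric positive definite A,
# as a finite element solver would apply to the assembled nodal system.
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    n = len(b)
    phi = np.zeros(n)
    r = b - A @ phi              # residual = negative gradient of E
    p = r.copy()                 # initial search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)    # exact line minimization along p
        phi += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return phi
```

In exact arithmetic the iteration terminates in at most n steps, but for the large, sparse, well-conditioned systems produced by a grid discretization it typically converges far sooner, which is one reason it suits this kind of field simulation.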
The analytic solution for the potential from a point current source centered in a grounded, conducting 2-dimensional box is

Φ(x, y) = Σ_{n=1}^{∞} (4/nπ) · sin(nπ/2) sin(nπx/L) sinh(nπy/L) / cosh(nπ/2).

Our finite element simulation, based on a regular 80 x 80 node grid, differs from the above expression by less than 1%, except in the elements adjacent to the source, where the potential change across these elements is large and is not as accurately reconstructed by a linear interpolation (Fig. 3). Smaller elements surrounding the source would improve the accuracy; however, one should note that the analytic solution is infinite at the location of the "point" source, whereas the measured and simulated sources (and real fish) have finite current densities. To measure the real equivalent of a point source in a 2-dimensional box, we used a linear current source (a wire) which ran the full depth of a real 3-dimensional tank. Measurements made in the midplane of the tank agree with the simulation and analytic solution to better than 5% (Fig. 3). Uncertainty in the positions of the current source and recording sites relative to the position of the conducting walls probably accounts for much of this difference. Figure 3. Electric potential of a point current source centered in a grounded 2-dimensional box (measured and simulated values plotted against distance from the source). Measurements of Fish Fields and 2-Dimensional Simulations Calibration of our finite element model of an electric fish requires direct measurements of the electric potential close to a discharging fish. Fig. 4 shows a recording of a single EOD sampled with 5 colinear electrodes near a restrained fish. The waveform is bipolar, with the first phase positive if recorded near the animal's head and negative if recorded near the tail (relative to a remote reference).
We used the peak amplitude of the larger second phase of the waveform to quantify the electric potential recorded at each location. Note that the potential reverses sign at a point approximately midway along the tail. This location corresponds to the location of the null potential shown in Fig. 5. Figure 4. EOD waveform sampled simultaneously from 5 electrodes. Measurements of EODs from a restrained fish exhibited an extraordinarily small variance in amplitude and waveform over long periods of time. In fact, the peak-peak amplitude of the EOD varied by less than 0.4% in a sample of 40 EODs randomly chosen during a 30 minute period. Thus we are able to directly compare waveforms sampled sequentially without renormalizing for fluctuations in EOD amplitude. Figure 5 shows equipotential lines reconstructed from a set of 360 measurements made in the midplane of a restrained Gnathonemus. Although the observed potential resembles that from a purely dipolar source (Fig. 6), careful inspection reveals an asymmetry between the head and tail of the fish. This asymmetry can be reproduced in our simulations by adjusting the electrical properties of the fish. Qualitatively, the measured fields can be reproduced by assigning a low impedance to the internal body cavity and a high impedance to the skin. However, in order to match the location of the null potential, the skin impedance must vary over the length of the body. We are currently quantifying these parameters, as described in the next section. Figure 5. Measured potentials (at peak of second phase of EOD) recorded from a restrained Gnathonemus petersii in the midplane of the fish. Equipotential lines are 20 mV apart. Inset shows relative location of fish and sampling points in the fish tank. Figure 6. Equipotential lines from a 2-dimensional finite element simulation of a dipole using the grid of Fig. 2. The resistivity of the fish was set equal to that of the surroundings in this simulation. Future Directions There is still a substantial amount of work that remains to be done before we achieve our goal of being able to fully reconstruct the pattern of electroreceptor activation for any arbitrary body position in any particular environment. First, it is clear that we require more information about the electrical structure of the fish itself. We need an accurate representation of the internal impedance distribution ρ(x) of the body and skin, as well as of the source density f(x) of the electric organ. To some extent this can be addressed as an inverse problem: given the measured potential Φ(x), what choice of ρ(x) and f(x) best reproduces the data? Unfortunately, in the absence of further constraints, there are many equally valid solutions; thus we will need to directly measure the skin and body impedance of the fish. Second, we need to extend our finite-element simulations of the fish to 3 dimensions, which, although conceptually straightforward, requires substantial technical developments to be able to (a) specify and visualize the space-filling set of 3-dimensional finite elements (e.g. tetrahedrons) for arbitrary configurations, (b) compute the solution to the much larger set of equations (typically a factor of 100-1000) in a reasonable time, and (c) visualize and analyze the resulting solutions for the 3-dimensional electric fields. As a possible solution to (b), we are developing and testing a parallel processor implementation of the simulator. References Bullock, T. H. & Heiligenberg, W. (Eds.) (1986). Electroreception. Wiley & Sons, New York. Gibson, J. M. & Welker, W. I. (1983). Quantitative Studies of Stimulus Coding in First-Order Vibrissa Afferents of Rats. 1. Receptive Field Properties and Threshold Distributions. Somatosensory Res. 1:51-67. Hughes, T. J. (1987). The Finite Element Method: Linear Static and Dynamic Finite Element Analysis. Prentice-Hall, New Jersey. Lissmann, H. W. (1958). On the function and evolution of electric organs in fish. J. Exp. Biol. 35:156-191. Toerring, M. J. and Belbenoit, P. (1979). Motor Programmes and Electroreception in Mormyrid Fish. Behav. Ecol. Sociobiol. 4:369-379. Welker, W. I. (1964). Analysis of Sniffing of the Albino Rat. Behaviour 22:223-244.
|
1988
|
25
|
108
|
TRAINING MULTILAYER PERCEPTRONS WITH THE EXTENDED KALMAN ALGORITHM Sharad Singhal and Lance Wu Bell Communications Research, Inc. Morristown, NJ 07960 ABSTRACT A large fraction of recent work in artificial neural nets uses multilayer perceptrons trained with the back-propagation algorithm described by Rumelhart et al. This algorithm converges slowly for large or complex problems such as speech recognition, where thousands of iterations may be needed for convergence even with small data sets. In this paper, we show that training multilayer perceptrons is an identification problem for a nonlinear dynamic system which can be solved using the Extended Kalman Algorithm. Although computationally complex, the Kalman algorithm usually converges in a few iterations. We describe the algorithm and compare it with back-propagation using two-dimensional examples. INTRODUCTION Multilayer perceptrons are one of the most popular artificial neural net structures being used today. In most applications, the "back propagation" algorithm [Rumelhart et al, 1986] is used to train these networks. Although this algorithm works well for small nets or simple problems, convergence is poor if the problem becomes complex or the number of nodes in the network becomes large [Waibel et al, 1987]. In problems such as speech recognition, tens of thousands of iterations may be required for convergence even with relatively small data-sets. Thus there is much interest [Prager and Fallside, 1988; Irie and Miyake, 1988] in other "training algorithms" which can compute the parameters faster than back-propagation and/or can handle much more complex problems. In this paper, we show that training multilayer perceptrons can be viewed as an identification problem for a nonlinear dynamic system. Copyright 1989, Bell Communications Research, Inc. For linear dynamic systems with white input and observation noise, the Kalman algorithm [Kalman, 1960] is known to be an optimum algorithm.
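For intuition (an illustrative sketch, not taken from the paper): in the simplest linear case, a scalar constant state observed in white noise, the Kalman recursion reduces to a variance-weighted running average whose error variance shrinks with every observation:

```python
import numpy as np

# Scalar Kalman filter for a constant state x(n+1) = x(n) observed as
# d(n) = x(n) + v(n), with observation noise variance R. Illustrative only;
# the initial guess x0 and prior variance p0 are invented for the example.
def kalman_constant(observations, x0=0.0, p0=1.0, R=0.1):
    x, P = x0, p0
    for d in observations:
        K = P / (R + P)           # Kalman gain
        x = x + K * (d - x)       # pull the estimate toward the observation
        P = P - K * P             # posterior error variance shrinks
    return x, P

rng = np.random.default_rng(1)
true_x = 0.7
obs = true_x + np.sqrt(0.1) * rng.standard_normal(200)
estimate, variance = kalman_constant(obs)
```

The extended version discussed in the paper applies the same gain and covariance bookkeeping after linearizing a nonlinear observation function around the current estimate.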
Extended versions of the Kalman algorithm can be applied to nonlinear dynamic systems by linearizing the system around the current estimate of the parameters. Although computationally complex, this algorithm updates parameters consistent with all previously seen data and usually converges in a few iterations. In the following sections, we describe how this algorithm can be applied to multilayer perceptrons and compare its performance with back-propagation using some two-dimensional examples. THE EXTENDED KALMAN FILTER In this section we briefly outline the Extended Kalman filter. Mathematical derivations for the Extended Kalman filter are widely available in the literature [Anderson and Moore, 1979; Gelb, 1974] and are beyond the scope of this paper. Consider a nonlinear finite dimensional discrete time system of the form:

x(n+1) = f_n(x(n)) + g_n(x(n))w(n),
d(n) = h_n(x(n)) + v(n).   (1)

Here the vector x(n) is the state of the system at time n, w(n) is the input, d(n) is the observation, v(n) is observation noise, and f_n(·), g_n(·), and h_n(·) are nonlinear vector functions of the state, with the subscript denoting possible dependence on time. We assume that the initial state x(0) and the sequences {v(n)} and {w(n)} are independent and gaussian with

E[x(0)] = x̂(0),  E{[x(0) - x̂(0)][x(0) - x̂(0)]^t} = P(0),
E[w(n)] = 0,  E[w(n)w^t(l)] = Q(n)δ_nl,
E[v(n)] = 0,  E[v(n)v^t(l)] = R(n)δ_nl,   (2)

where δ_nl is the Kronecker delta. Our problem is to find an estimate x̂(n+1) of x(n+1) given d(j), 0 ≤ j ≤ n. We denote this estimate by x̂(n+1|n). If the nonlinearities in (1) are sufficiently smooth, we can expand them using Taylor series about the state estimates x̂(n|n) and x̂(n|n-1) to obtain

f_n(x(n)) = f_n(x̂(n|n)) + F(n)[x(n) - x̂(n|n)] + ...
g_n(x(n)) = g_n(x̂(n|n)) + ... = G(n) + ...
h_n(x(n)) = h_n(x̂(n|n-1)) + H^t(n)[x(n) - x̂(n|n-1)] + ...

where

G(n) = g_n(x̂(n|n)),  F(n) = ∂f_n(x)/∂x evaluated at x = x̂(n|n),  H^t(n) = ∂h_n(x)/∂x evaluated at x = x̂(n|n-1),   (3)

i.e.
G(n) is the value of the function g_n(·) at x̂(n|n), and the ij-th components of F(n) and H^t(n) are the partial derivatives of the i-th components of f_n(·) and h_n(·), respectively, with respect to the j-th component of x(n) at the points indicated. Neglecting higher order terms and assuming knowledge of x̂(n|n) and x̂(n|n-1), the system in (3) can be approximated as

x(n+1) = F(n)x(n) + G(n)w(n) + u(n),  n ≥ 0,
d(n) = H^t(n)x(n) + v(n) + y(n),   (4)

where

u(n) = f_n(x̂(n|n)) - F(n)x̂(n|n),
y(n) = h_n(x̂(n|n-1)) - H^t(n)x̂(n|n-1).   (5)

It can be shown [Anderson and Moore, 1979] that the desired estimate x̂(n+1|n) can be obtained by the recursion

x̂(n+1|n) = f_n(x̂(n|n))   (6)
x̂(n|n) = x̂(n|n-1) + K(n)[d(n) - h_n(x̂(n|n-1))]   (7)
K(n) = P(n|n-1)H(n)[R(n) + H^t(n)P(n|n-1)H(n)]^{-1}   (8)
P(n+1|n) = F(n)P(n|n)F^t(n) + G(n)Q(n)G^t(n)   (9)
P(n|n) = P(n|n-1) - K(n)H^t(n)P(n|n-1)   (10)

with P(1|0) = P(0). K(n) is known as the Kalman gain. In the case of a linear system, it can be shown that P(n) is the conditional error covariance matrix associated with the state, and the estimate x̂(n+1|n) is optimal in the sense that it approaches the conditional mean E[x(n+1) | d(0) ... d(n)] for large n. However, for nonlinear systems, the filter is not optimal and the estimates can only loosely be termed conditional means. TRAINING MULTILAYER PERCEPTRONS The network under consideration is an L layer perceptron¹, with the i-th input of the k-th weight layer labeled z_i^{k-1}(n), the j-th output being z_j^k(n), and the weight connecting the i-th input to the j-th output being θ_{ij}^k. We assume that the net has m inputs and l outputs. Thresholds are implemented as weights connected from input nodes² with fixed unit strength inputs. Thus, if there are N(k) nodes in the k-th node layer, the total number of weights in the system is

M = Σ_{k=1}^{L} N(k-1)[N(k)-1].   (11)

Although the inputs and outputs are dependent on time n, for notational brevity, we will not show this dependence unless explicitly needed.

1. We use the convention that the number of layers is equal to the number of weight layers. Thus we have L layers of weights labeled 1 ... L and L+1 layers of nodes (including the input and output nodes) labeled 0 ... L. We will refer to the k-th weight layer or the k-th node layer unless the context is clear.
2. We adopt the convention that the 1st input node is the threshold, i.e. θ_{1j}^k is the threshold for the j-th output node from the k-th weight layer.

In order to cast the problem in a form for recursive estimation, we let the weights in the network constitute the state x of the nonlinear system, i.e.

x = [θ_{12}^1, θ_{13}^1, ..., θ_{N(L-1),N(L)}^L]^t.   (12)

The vector x thus consists of all weights arranged in a linear array, with dimension equal to the total number of weights M in the system. The system model thus is

x(n+1) = x(n),  n ≥ 0,   (13)
d(n) = z^L(n) + v(n) = h_n(x(n), z^0(n)) + v(n),   (14)

where at time n, z^0(n) is the input vector from the training set, d(n) is the corresponding desired output vector, and z^L(n) is the output vector produced by the net. The components of h_n(·) define the nonlinear relationships between the inputs, weights and outputs of the net. If Γ(·) is the nonlinearity used, then z^L(n) = h_n(x(n), z^0(n)) is given by

z^L(n) = Γ{(Θ^L)^t Γ{(Θ^{L-1})^t Γ ... Γ{(Θ^1)^t z^0(n)} ... }},   (15)

where Γ applies componentwise to vector arguments. Note that the input vectors appear only implicitly through the observation function h_n(·) in (14). The initial state (before training) x(0) of the network is defined by populating the net with gaussian random variables with a N(x̂(0), P(0)) distribution, where x̂(0) and P(0) reflect any a priori knowledge about the weights. In the absence of any such knowledge, a N(0, (1/ε)·I) distribution can be used, where ε is a small number and I is the identity matrix.
For the system in (13) and (14), the extended Kalman filter recursion simplifies to

x̂(n+1) = x̂(n) + K(n)[d(n) - h_n(x̂(n), z^0(n))]   (16)
K(n) = P(n)H(n)[R(n) + H^t(n)P(n)H(n)]^{-1}   (17)
P(n+1) = P(n) - K(n)H^t(n)P(n)   (18)

where P(n) is the (approximate) conditional error covariance matrix. Note that (16) is similar to the weight update equation in back-propagation, with the last term [d(n) - h_n(x̂, z^0)] being the error at the output layer. However, unlike the delta rule used in back-propagation, this error is propagated to the weights through the Kalman gain K(n), which updates each weight through the entire gradient matrix H(n) and the conditional error covariance matrix P(n). In this sense, the Kalman algorithm is not a local training algorithm. However, the inversion required in (17) has dimension equal to the number of outputs l, not the number of weights M, and thus does not grow as weights are added to the problem. EXAMPLES AND RESULTS To evaluate the output and the convergence properties of the extended Kalman algorithm, we constructed mappings using two-dimensional inputs with two or four outputs, as shown in Fig. 1. Limiting the input vector to 2 dimensions allows us to visualize the decision regions obtained by the net and to examine the outputs of any node in the net in a meaningful way. The x- and y-axes in Fig. 1 represent the two inputs, with the origin located at the center of the figures. The numbers in the figures represent the different output classes. Figure 1. Output decision regions for two problems: (a) REGIONS, (b) XOR. The training set for each example consisted of 1000 random vectors uniformly filling the region. The hyperbolic tangent nonlinearity was used as the nonlinear element in the networks. The output corresponding to a class was set to 0.9 when the input vector belonged to that class, and to -0.9 otherwise.
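As a hedged illustration of the recursion (16)-(18) (a toy sketch, not the authors' implementation), the weights of a small two-input, single-output tanh network can be taken as the state, with the network itself as the observation function. The Jacobian H is formed here by finite differences for brevity, and the hidden-layer size and training task used below are invented for the example:

```python
import numpy as np

def net(w, x, nh):
    # Two-layer tanh perceptron; w packs both weight layers, with
    # thresholds implemented as weights on a fixed unit input, as in the text.
    W1 = w[:nh * 3].reshape(nh, 3)
    W2 = w[nh * 3:].reshape(1, nh + 1)
    h = np.tanh(W1 @ np.append(x, 1.0))
    return float(np.tanh(W2 @ np.append(h, 1.0))[0])

def ekf_train(w0, data, targets, nh, sweeps=20, R=0.1):
    # Extended Kalman training: the state is the weight vector, the
    # observation is the net output for the current training vector.
    w = w0.copy()
    M = w.size
    P = np.eye(M)                      # P(0): prior weight covariance
    for _ in range(sweeps):
        for x, d in zip(data, targets):
            y = net(w, x, nh)
            H = np.zeros((M, 1))       # gradient of output w.r.t. weights
            for i in range(M):
                wp = w.copy()
                wp[i] += 1e-6
                H[i, 0] = (net(wp, x, nh) - y) / 1e-6
            S = float(R + H.T @ P @ H)  # innovation variance (single output)
            K = P @ H / S               # Kalman gain, eq. (17)
            w = w + K[:, 0] * (d - y)   # weight update, eq. (16)
            P = P - K @ H.T @ P         # covariance update, eq. (18)
    return w
```

On an easy separable toy task this typically drives the output error down within the first few sweeps, consistent with the paper's observation that the Kalman recursion needs far fewer iterations than back-propagation; note the matrix inverted per step is only as large as the number of outputs.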
During training, the weights were adjusted after each data vector was presented. Up to 2000 sweeps through the input data were used, with the stopping criteria described below, to examine the convergence properties. The order in which data vectors were presented was randomized for each sweep through the data. In the case of back-propagation, a convergence constant of 0.1 was used with no "momentum" factor. In the Kalman algorithm, R was set to I·e^(-k/50), where k was the iteration number through the data. Within each iteration, R was held constant. The Stopping Criteria Training was considered complete if any one of the following conditions was satisfied: a. 2000 sweeps through the input data were used, b. the RMS (root mean squared) error at the output, averaged over all training data during a sweep, fell below a threshold ε₁, or c. the error reduction δ after the i-th sweep through the data fell below a threshold ε₂, where δ_i = β·δ_{i-1} + (1-β)·|e_i - e_{i-1}|. Here β is some positive constant less than unity, and e_i is the error defined in b. In our simulations we set β = 0.97, ε₁ = 10⁻² and ε₂ = 10⁻⁵. Example 1 - Meshed, Disconnected Regions: Figure 1(a) shows the mapping with 2 disconnected, meshed regions surrounded by two regions that fill up the space. We used 3-layer perceptrons with 10 hidden nodes in each hidden layer to attempt this problem. Figure 2 shows the RMS error obtained during training for the Kalman algorithm and back-propagation, averaged over 10 different initial conditions. The number of sweeps through the data (x-axis) is plotted on a logarithmic scale to highlight the initial reduction for the Kalman algorithm. Typical solutions obtained by the algorithms at termination are shown in Fig. 3. It can be seen that the Kalman algorithm converges in fewer iterations than back-propagation and obtains better solutions. Figure 2. Average output error during training for the Regions problem using the Kalman algorithm and backprop. Figure 3. Typical solutions for the Regions problem using (a) the Kalman algorithm and (b) backprop. Example 2 - 2 Input XOR: Figure 1(b) shows a generalized 2-input XOR, with the first and third quadrants forming region 1 and the second and fourth quadrants forming region 2. We attempted the problem with two-layer networks containing 2-4 nodes in the hidden layer. Figure 4 shows the results of training averaged over 10 different randomly chosen initial conditions. As the number of nodes in the hidden layer is increased, the net converges to smaller error values. When we examined the output decision regions, we found that none of the nets attempted with back-propagation reached the desired solution. The Kalman algorithm was also unable to find the desired solution with 2 hidden nodes in the network. However, it reached the desired solution with 6 out of 10 initial conditions with 3 hidden nodes in the network, and with 9 out of 10 initial conditions with 4 hidden nodes. Typical solutions reached by the two algorithms are shown in Fig. 5. In all cases, the Kalman algorithm converged in fewer iterations, and in all but one case the final average output error was smaller with the Kalman algorithm. Figure 4. Average output error during training for the XOR problem using the Kalman algorithm and backprop. CONCLUSIONS In this paper, we showed that training feed-forward nets can be viewed as a system identification problem for a nonlinear dynamic system. For linear dynamic systems, the Kalman filter is known to produce an optimal estimator. Extended versions of the Kalman algorithm can be used to train feed-forward networks.
We examined the performance of the Kalman algorithm using artificially constructed examples with two inputs and found that the algorithm typically converges in a few iterations. We also used back-propagation on the same examples and found that, invariably, the Kalman algorithm converged in fewer iterations. Figure 5. Typical solutions for the XOR problem using (a) the Kalman algorithm and (b) backprop. For the XOR problem, back-propagation failed to converge on any of the cases considered, while the Kalman algorithm was able to find solutions with the same network configurations. References [1] B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice Hall, 1979. [2] A. Gelb, Ed., Applied Optimal Estimation, MIT Press, 1974. [3] B. Irie and S. Miyake, "Capabilities of Three-layered Perceptrons," Proceedings of the IEEE International Conference on Neural Networks, San Diego, June 1988, Vol. I, pp. 641-648. [4] R. E. Kalman, "A New Approach to Linear Filtering and Prediction Problems," J. Basic Eng., Trans. ASME, Series D, Vol 82, No. 1, 1960, pp. 35-45. [5] R. W. Prager and F. Fallside, "The Modified Kanerva Model for Automatic Speech Recognition," in 1988 IEEE Workshop on Speech Recognition, Arden House, Harriman NY, May 31-June 3, 1988. [6] D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning Internal Representations by Error Propagation," in D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol 1: Foundations. MIT Press, 1986. [7] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano and K. Lang, "Phoneme Recognition Using Time-Delay Neural Networks," ATR Internal Report TR-I-0006, October 30, 1987.
|
1988
|
26
|
109
|
720 AN ELECTRONIC PHOTORECEPTOR SENSITIVE TO SMALL CHANGES IN INTENSITY T. Delbrück and C. A. Mead 256-80 Computer Science California Institute of Technology Pasadena, CA 91125 ABSTRACT We describe an electronic photoreceptor circuit that is sensitive to small changes in incident light intensity. The sensitivity to changes in the intensity is achieved by feeding back to the input a filtered version of the output. The feedback loop includes a hysteretic element. The circuit behaves in a manner reminiscent of the gain control properties and temporal responses of a variety of retinal cells, particularly retinal bipolar cells. We compare the thresholds for detection of intensity increments by a human and by the circuit. Both obey Weber's law, and for both the temporal contrast sensitivities are nearly identical. We previously described an electronic photoreceptor that outputs a voltage that is logarithmic in the light intensity (Mead, 1985). This report describes an extension of this circuit, based on a suggestion by Frank Werblin that biological retinas may achieve greater sensitivity to changes in the illumination by feeding back a filtered version of the output. OPERATION OF THE CIRCUIT The circuit (Figure 1) consists of a phototransistor (P), exponential feedback to P (Q1, Q2, and Q3), a transconductance amplifier (A), and the hysteretic element (Q4 and Q5). In general terms, the operation of the circuit consists of two stages of amplification with hysteresis in the feedback loop. The light falls on the parasitic bipolar transistor P. (The rest of the circuit is shielded by metal.) P's collector is the substrate and the base is an isolated well. P and Q1 form the first stage of amplification. The light produces a base current I_B for P. The emitter current I_E is β·I_B, neglecting collector resistance for now; β is typically a few hundred.
The feedback current I_Q1 is set by the gate voltage on Q1/Q2, which is set by the current through Q3, which is set by the feedback voltage V_fb. In equilibrium, V_fb will be such that I_Q1 = I_E, and some voltage V_p will be the output of the first stage. The negative feedback through the transconductance amplifier A will make V_p ≈ V_fb. This voltage is logarithmic in the light intensity, since in subthreshold operation the currents through Q2 and Q3 are exponential in their gate to source voltages. The DC output of the circuit will be V_out ≈ V_fb = V_dd - (2kT/q)·log I_E, neglecting the back-gate effect for Q2. Figure 5a (DC output) shows that the assumption of subthreshold operation is valid over about 4 orders of magnitude. Figure 1. The photoreceptor circuit. Now what happens when the intensity increases a bit? Figure 2a shows the current through P and Q1 as a function of the voltage V_p. Both P and Q1 act like current sources in parallel with a resistance, where the value of the current is set, respectively, by the light intensity and by the feedback voltage V_fb. When the intensity increases a bit, the immediate result is that the curve labeled I_E in Figure 2a will shift upwards a little, to the curve labeled I_E'. But I_Q1 won't change right away; it is set by the delayed feedback. The effect on V_p will be that V_p will drop by the amount of the shift in the intersection of the curves in Figure 2a, to V_p'. Because interesting gain control properties arise here, we will analyze this before going on with the rest of the circuit. In Figure 2b, we model P and Q1 as current sources with associated drain/collector resistances. Now, r_p and r_Q1 physically arise from the familiar Early effect, a variation of depletion region thickness causing a variation in the channel length or base thickness.
It is a reasonable approximation to model the drain or collector resistance due to the Early effect as r = V_e/I, where V_e is the Early voltage, typically tens of volts, and I is the value of the current source. Figure 2. The first stage of amplification. a: The curves show the current through the phototransistor P and feedback transistor Q1 as a function of the voltage V_p. Since I_P = I_Q1, the intersection gives the voltage V_p. b: An equivalent circuit model for these transistors in the linear region. The Early effect leads to drain/collector resistances inversely proportional to the value of the current source. Substituting this approximation for r_p and r_Q1 into the expression for δV_p (the current shift δI_E times the parallel resistance r_p ∥ r_Q1), and letting δI_E = β·δI_B, we obtain

δV_p = (δI_B / I_B)·(V_e,P ∥ V_e,Q1)

where V_e,P and V_e,Q1 are the Early voltages associated with the phototransistor and the feedback transistor Q1, respectively. In other words, the change in V_p is just proportional to the "contrast" δI_B/I_B. Figure 4a shows test results which support this model. A detector which encodes the intensity logarithmically (so that the output V in response to an input I is V = log I) would also give δV = δI/I. But in our circuit the gain control properties for transients arise from an unrelated property of the conductances of the sensor and the feedback element. Comparing the gains for DC and for transients in our circuit, and using the expression for the DC output given earlier, we find that the ratio of the gains is

transient gain / DC gain ≈ (V_e,P ∥ V_e,Q1) / (2kT/q) ≈ 200,

assuming V_e,P ∥ V_e,Q1 = 10 V and kT/q = 25 mV. Finally, let us consider the operation of the rest of the circuit. The second stage of amplification is done by the transconductance amplifier A. A produces a current which is proportional to the tanh of the difference between the two inputs, I = G·tanh[(V_p - V_fb)/(2kT/q)].
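The quoted factor of roughly 200 follows directly from the stated values; a quick arithmetic check (a sketch using only the 10 V and 25 mV figures given in the text):

```python
# Numeric check of the transient-to-DC gain ratio quoted in the text,
# using the stated values Ve,P || Ve,Q1 = 10 V and kT/q = 25 mV.
Ve_parallel = 10.0    # volts: parallel combination of the two Early voltages
kT_over_q = 0.025     # volts: thermal voltage at room temperature

# Ratio of transient gain to DC gain, approximately 200 as quoted.
gain_ratio = Ve_parallel / (2 * kT_over_q)
```

The same arithmetic shows why the transient response is so sensitive: a 1% contrast step produces a transient of about 0.01 x 10 V = 100 mV at V_p, versus only about 0.5 mV of DC shift.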
When the output of the amplifier is taken as a voltage, the voltage gain is typically a few hundred. Following the transconductance amplifier there is a pair of diode-connected transistors, Q4 and Q5 (Figure 3a), which we call the hysteretic element. This pair of transistors has an I-V characteristic which is similar to that of Figure 3. The hysteretic element conducts very little until the voltage across it becomes substantial. Thus the transconductance amplifier works in a dual voltage-output/current-output mode. Small changes in the output voltage result in little change in the feedback voltage. Larger changes in the output voltage cause current to flow through the hysteretic element, completing the feedback loop. This represents a form of memory, or hysteresis, for the past state of the output, and a sensitivity to small changes in the input around the past history of the input. Figure 3. a: The hysteretic element. b: I-V characteristic. COMPARISON OF CIRCUIT AND RETINAL CELLS We felt that since the circuit was motivated by biology, it might be interesting to compare the operational characteristics of the circuit and of retinal cells. Since the circuit has no spatial extent, it cannot model any of the spatially mediated effects (such as center-surround) seen in retinas. Nonetheless we had hoped to capture some of the temporal effects seen in retinal cells. In Figure 4 we compare the responses of the circuit and responses of a retinal bipolar cell to diffuse flashes of light. The circuit has response characteristics closest to those of retinal bipolar cells. The circuit's gain control properties are very similar to those of bipolar cells. Figure 5 shows that both the circuit and bipolar cells tend to control their gain so that they maintain a constant output amplitude for a given change in the log of the intensity. The response characteristics of the circuit differ from those of bipolar cells in the following ways.
First, the gain of the circuit for transients is much larger than that of the bipolar cell, as can be seen in Figure 5, with concomitantly much smaller dynamic range. The dynamic range of the bipolar cell is about 1.5 - 2 log units around the steady intensity, while for the circuit the dynamic range is only about 0.1 log unit. Input IV~ 0.2ms 20ms -1.0 background ~% \to% ch~ge 1% -3.0 background (2 decades attenuated) Noise level a 2.5 BACKGROUND Figure 4. The responses of the circuit compared with responses of a retinal bipolar cell. a: The output of the circuit in response to changes in the intensity. The background levels refer to the same scale as shown in Figure 5. The bottom curve shows the noise level; from this one can see that a detection criterion of signal/ noise ratio equals 2 is satisfied for increments of 1-2%, in agreement with Figure 6. Note that a 2 decade attenuation hardly changes the response amplitude but the time constant increases by a factor of a hundred. b: The response of a bipolar cell (from Werblin, 1974). The numbers next to the responses are the log of the intensity of the flash substituted for the intial value of the intensity. Note the bidirectionality of the response compared to the circuit. ~V) An Electronic Photoreceptor Sensitive to Small Changes 725 -5 -4 -3 -2 -1 log (Intensity) a 1 (mV) '0 ~ 6 } 4 2 6V~tOr-~-+~~~~~ -2 -4 log (Intensity ) b Figure 5. The operating curves of the circuit (a) compared with retinal bipolar cells (b) (adapted from Werblin, 1974). The curves show the height of the peak of the response to flashes substituted for the initial intensity. The initial intensity is given by the intersection of the curves with the abscissa. Note the difference of the gain and the dynamic range. The squares show the DC responses. The slope of the DC response for the circuit is less than expected, probably because there is a leakage current through the hysteretic element. 
Second, the response of bipolar cells is symmetrical for increases and decreases in the intensity. This can probably be traced to the symmetrical responses of the cones from which they receive direct input. The circuit, on the other hand, only responds strongly to increases in the light intensity. The response in our circuit only becomes symmetrical for output voltage swings comparable to kT/q, probably because the limiting process is recombination in the base of the phototransistor.

Third, the control of time constants is dramatically different. In Figure 4a the top set of responses is on a time scale 100 times expanded relative to the bottom scale. The circuit's time constant, in other words, is roughly inversely proportional to the light intensity. This is not the case for bipolar cells. Although we do not show it here, the time constant of the responses of bipolar cells hardly varies with light intensity over at least 4 orders of magnitude (Werblin, 1974).

The circuit's action differs much more from that of photoreceptors, amacrine, or ganglion cells. Cones show a much larger sustained response relative to their transient response. Amacrine and ganglion cells spike; our circuit does not. And the circuit differs from on/off amacrine and ganglion cells in the asymmetry of its response to increases and decreases in light intensity.

EYE vs. CHIP

We compared the sensitivity to small changes in the light intensity for one of us and for the circuit, in order to get an idea of the performance of the circuit relative to a subjective scale. The thresholds for detection of intensity increments are shown in Figure 6.

Figure 6. Thresholds for detection of flicker, plotted as log(Increment) against log(Absorbable Quanta/sec) for the human subject and the photoreceptor circuit. The subject (TD) sat in a darkened room foveating a flickering yellow (583 nm) LED at a distance of 75 cm. The LED subtended 22' of arc and the frequency of flicker was 5 Hz.
Threshold determination was made by a series of trials, indicated by a buzzer, in which the computer either caused the LED to flicker or not to flicker. The percentage of flicker was started at some large amount for which determination was unambiguous. For each trial the subject pressed a button if he thought he saw the LED flickering. An incorrect response would cause the percentage of flicker for the next trial to be increased. A correct response would cause the percentage of flicker to be decreased with a probability of 0.44. The result after a hundred trials would be a curve of percentage flicker vs. trial number which started high and then leveled off around the 75%-correct level, taken to be the threshold (Levitt, 1971). The threshold for the circuit was determined by shining the same LED onto the chip directly. The threshold was defined as the smallest amount of flicker that would result in a signal-to-noise ratio of 2 for the output. The two sets of thresholds were scaled relative to each other using a Tektronix photometer and were both scaled to read in terms of absorbable quanta, defined for the human as the number of photons hitting the cornea and for the circuit as the number of quanta that hit the area of the phototransistor. There is a bias here favoring the circuit, since the other parts of the circuit have not been included in this area. Including the rest of the circuit would raise the thresholds for the circuit by a factor of about 3. The results show that both the circuit and the human approximately obey Weber's law (ΔI/I at threshold is a constant), and the sensitivities are nearly the same. The highest sensitivity for the circuit, measured at an incident intensity of 660 µW/cm², was 1.2%. This is about half the intensity in a brightly lit office.

APPLICATIONS

We are trying to develop these sensors for use in neurophysiological optical dye recording.
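The transformed up-down procedure described above is simple to simulate. In the sketch below, the observer's psychometric function, the multiplicative step size, and the trial count are illustrative assumptions; only the up/down rule itself (incorrect response raises the flicker level; correct response lowers it with probability 0.44) comes from the text.

```python
import random

def run_staircase(detect_prob, start=50.0, step=1.2, trials=200, p_down=0.44, seed=0):
    """Transformed up-down staircase (Levitt, 1971) as described in the text:
    an incorrect response raises the flicker percentage for the next trial;
    a correct response lowers it with probability p_down = 0.44.
    Multiplicative steps and the default values are illustrative assumptions."""
    rng = random.Random(seed)
    level = start
    track = [level]
    for _ in range(trials):
        correct = rng.random() < detect_prob(level)
        if not correct:
            level *= step
        elif rng.random() < p_down:
            level /= step
        track.append(level)
    return track

# Assumed observer: guesses at chance at 0% flicker, detects surely above 2%.
observer = lambda pct: min(1.0, 0.5 + 0.5 * pct / 2.0)
track = run_staircase(observer)
print(f"start {track[0]}%, final {track[-1]:.2f}%")
```

The track starts high and levels off where upward and downward steps balance, near the 75%-correct level that the text takes as the threshold.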
These experiments require a sensor capable of recording changes in the incident intensity of about 1 part in 10^3 to 10^4 (Grinvald, 1985). The current technique is to use integrated arrays of photodiodes, each with a dedicated rack-mounted low-noise amplifier. We will try to replace this arrangement with arrays of receptors of the type discussed in this paper. Currently we are 1 to 2 orders of magnitude short of the required sensitivity, but we hope to improve the performance by using hybrid bipolar/FET technology.

CONCLUSION

This circuit represents an example of an idea from biology directly and simply synthesised in silicon. The resulting circuit incorporated not only the intended idea, sensitivity to changes in illumination, but also gave rise to an unexpected gain control mechanism unrelated to exponential feedback. The circuit differs in several ways from its possible biological analogy but remains an interesting and potentially useful device.

Acknowledgements

This work was supported by the System Development Foundation and the Office of Naval Research. Chips were fabricated through the MOSIS foundry. We thank Frank Werblin for helpful comments and Mary Ann Maher for editorial assistance.

References

A. Grinvald. (1985) Real-time optical mapping of neuronal activity: from single growth cones to the intact mammalian brain. Ann. Rev. Neurosci. 8:263-305.
H. Levitt. (1971) Transformed up-down methods in psychoacoustics. J. Acoust. Soc. Am. 49:467-477.
C. Mead. (1985) A Sensitive Electronic Photoreceptor. In 1985 Chapel Hill Conference on VLSI. 463-471.
F. Werblin. (1974) Control of Retinal Sensitivity II: Lateral Interactions at the Outer Plexiform Layer. J. Physiology. 68:62-87.
(1988)
TRAINING A 3-NODE NEURAL NETWORK IS NP-COMPLETE

Avrim Blum*
MIT Lab. for Computer Science
Cambridge, Mass. 02139 USA

Ronald L. Rivest†
MIT Lab. for Computer Science
Cambridge, Mass. 02139 USA

ABSTRACT

We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions of their inputs. We show that it is NP-complete to decide whether there exist weights and thresholds for the three nodes of this network so that it will produce output consistent with a given set of training examples. We extend the result to other simple networks. This result suggests that those looking for perfect training algorithms cannot escape inherent computational difficulties just by considering only simple or very regular networks. It also suggests the importance, given a training problem, of finding an appropriate network and input encoding for that problem. It is left as an open problem to extend our result to nodes with non-linear functions such as sigmoids.

INTRODUCTION

One reason for the recent surge in interest in neural networks is the development of the "back-propagation" algorithm for training neural networks. The ability to train large multi-layer neural networks is essential for utilizing neural networks in practice, and the back-propagation algorithm promises just that. In practice, however, the back-propagation algorithm runs very slowly, and the question naturally arises as to whether there are necessarily intrinsic computational difficulties associated with training neural networks, or whether better training algorithms might exist. This paper provides additional support for the position that training neural networks is intrinsically difficult. A common method of demonstrating a problem to be intrinsically hard is to show the problem to be "NP-complete". The theory of NP-complete problems is well understood (Garey and Johnson, 1979), and many infamous problems, such as the traveling salesman problem, are now known to be NP-complete.
While NP-completeness does not render a problem totally unapproachable in practice, it usually implies that only small instances of the problem can be solved exactly, and that large instances can at best only be solved approximately, even with large amounts of computer time.

*Supported by an NSF graduate fellowship.
†This paper was prepared with support from NSF grant DCR-8607494, ARO Grant DAAL03-86-K-0l71, and the Siemens Corporation.

The work in this paper is inspired by Judd (Judd, 1987), who shows the following problem to be NP-complete: "Given a neural network and a set of training examples, does there exist a set of edge weights for the network so that the network produces the correct output for all the training examples?" Judd also shows that the problem remains NP-complete even if the network is only required to produce the correct output for two-thirds of the training examples, which implies that even approximately training a neural network is intrinsically difficult in the worst case. Judd produces a class of networks and training examples for those networks such that any training algorithm will perform poorly on some networks and training examples in that class. The results, however, do not specify any particular "hard network", that is, any single network hard for all algorithms. Also, the networks produced have a number of hidden nodes that grows with the number of inputs, as well as a quite irregular connection pattern. We extend his result by showing that it is NP-complete to train a specific very simple network, having only two hidden nodes and a regular interconnection pattern. We also present classes of regular 2-layer networks such that for all networks in these classes, the training problem is hard in the worst case (in that there exist some hard sets of training examples).
The NP-completeness proof also yields results showing that finding approximation algorithms that make only one-sided error, or that closely approximate the minimum number of hidden-layer nodes needed for a network to be powerful enough to correctly classify the training data, is probably hard, in that these problems can be related to other difficult (but not known to be NP-complete) approximation problems. Our results, like Judd's, are described in terms of "batch"-style learning algorithms that are given all the training examples at once. It is worth noting that training with an "incremental" algorithm that sees the examples one at a time, such as back-propagation, is at least as hard. Thus the NP-completeness result given here also implies that incremental training algorithms are likely to run slowly. Our results state that given a network of the classes considered, for any training algorithm there will be some types of training problems such that the algorithm will perform poorly as the problem size increases. The results leave open the possibility that given a training problem that is hard for some network, there might exist a different network and encoding of the input that make training easy.

Figure 1: The three-node neural network, with inputs 1, 2, 3, ..., n.

THE NEURAL NETWORK TRAINING PROBLEM

The multilayer network that we consider has n binary inputs and three nodes: N1, N2, N3. All the inputs are connected to nodes N1 and N2. The outputs of hidden nodes N1 and N2 are connected to output node N3, which gives the output of the network. Each node Ni computes a linear threshold function fi on its inputs. If Ni has input x = (x1, ..., xm), then for some constants a0, ..., am,

    fi(x) = +1  if a1x1 + a2x2 + ... + amxm >= a0,
    fi(x) = -1  otherwise.

The aj's (j >= 1) are typically viewed as weights on the incoming edges and a0 as the threshold. The training algorithm for the network is given a set of training examples.
Each is either a positive example (an input for which the desired network output is +1) or a negative example (an input for which the desired output is -1). Consider the following problem. Note that we have stated it as a decision ("yes" or "no") problem, but that the search problem (finding the weights) is at least equally hard.

TRAINING A 3-NODE NEURAL NETWORK:
Given: A set of O(n) training examples on n inputs.
Question: Do there exist linear threshold functions f1, f2, f3 for nodes N1, N2, N3 such that the network of Figure 1 produces outputs consistent with the training set?

Theorem: Training a 3-node neural network is NP-complete.

We also show (proofs omitted here due to space requirements) NP-completeness results for the following networks:

1. The 3-node network described above, even if any or all of the weights for one hidden node are required to equal the corresponding weights of the other (so possibly only the thresholds differ), and even if any or all of the weights are forced to be from {+1, -1}.

2. Any k-hidden-node, for k bounded by some polynomial in n (e.g., k = n^2), two-layer fully-connected network with linear threshold function nodes where the output node is required to compute the AND function of its inputs.

3. The 2-layer, 3-node, n-input network with an XOR output node, if ternary features are allowed.

In addition we show (proof omitted here) that any set of positive and negative training examples classifiable by the 3-node network with XOR output node (for which training is NP-complete) can be correctly classified by a perceptron with O(n^2) inputs which consist of the original n inputs and all products of pairs of the original n inputs (for which training can be done in polynomial time using linear programming techniques).
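The hardness is in finding the weights; for any fixed candidate weights, checking consistency with the training set is straightforward. A small sketch of the network and the consistency check (the instance and the weight values below are hypothetical, chosen only for illustration):

```python
def ltf(weights, threshold, x):
    """Linear threshold function: +1 if w . x >= threshold, else -1."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else -1

def network(w1, t1, w2, t2, w3, t3, x):
    """The 2-layer, 3-node network: all inputs feed N1 and N2;
    output node N3 sees only the two hidden outputs."""
    h = (ltf(w1, t1, x), ltf(w2, t2, x))
    return ltf(w3, t3, h)

def consistent(params, examples):
    """Check a candidate weight setting against labeled training examples."""
    return all(network(*params, x) == label for x, label in examples)

# Toy instance: n = 2, target concept AND of the two inputs.
examples = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
# Hypothetical solution: both hidden nodes compute AND; N3 passes it through.
params = ((1, 1), 2, (1, 1), 2, (1, 1), 0)
print(consistent(params, examples))  # True
```

The decision problem asks whether any such `params` exists for a given example set, and it is that existence question which the theorem shows to be NP-complete.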
THE GEOMETRIC POINT OF VIEW

A training example can be thought of as a point in n-dimensional space, labeled '+' or '-' depending on whether it is a positive or negative example. The points are vertices of the n-dimensional hypercube. The zeros of the functions f1 and f2 for the hidden nodes can be thought of as (n-1)-dimensional hyperplanes in this space. The planes P1 and P2 corresponding to the functions f1 and f2 divide the space into four quadrants according to the four possible pairs of outputs for nodes N1 and N2. If the planes are parallel, then one or two of the quadrants is degenerate (non-existent). Since the output node receives as input only the outputs of the hidden nodes N1 and N2, it can only distinguish between points in different quadrants. The output node is also restricted to be a linear function. It may not, for example, output "+1" when its inputs are (+1, +1) and (-1, -1), and output "-1" when its inputs are (+1, -1) and (-1, +1). So, we may reduce our question to the following: given O(n) points in {0,1}^n, each point labeled '+' or '-', does there exist either

1. a single plane that separates the '+' points from the '-' points, or

2. two planes that partition the points so that either one quadrant contains all and only '+' points or one quadrant contains all and only '-' points.

We first look at the restricted question of whether there exist two planes that partition the points such that one quadrant contains all and only the '+' points. This corresponds to having an "AND" function at the output node. We will call this problem "2-Linear Confinement of Positive Boolean Examples". Once we have shown this to be NP-complete, we will extend the proof to the full problem by adding examples that disallow the other possibilities at the output node.
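As with the weights themselves, verifying a candidate pair of planes is easy; only finding them is hard. The sketch below checks the 2-Linear Confinement condition for a tiny hypothetical instance on the 3-cube (the points, plane coefficients, and the threshold of -1/2 are our illustrative choices, not data from the paper):

```python
def above(plane, x, threshold=-0.5):
    """True if point x lies on the positive side of the hyperplane w . x = threshold."""
    return sum(w * xi for w, xi in zip(plane, x)) > threshold

def confined(p1, p2, pos, neg):
    """2-Linear Confinement check: the quadrant on the positive side of both
    planes must contain all of the '+' points and none of the '-' points."""
    in_quadrant = lambda x: above(p1, x) and above(p2, x)
    return all(in_quadrant(x) for x in pos) and not any(in_quadrant(x) for x in neg)

# Hypothetical labeled vertices of the 3-cube.
pos = [(0, 0, 0), (1, 1, 0), (0, 1, 1)]
neg = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(confined((-1, 3, -1), (3, -1, 3), pos, neg))  # True
```

Each '-' point falls on the negative side of at least one plane, while every '+' point stays on the positive side of both, so this weight setting would realize an AND output node for the instance.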
Megiddo (Megiddo, 1986) has shown that for O(n) arbitrary '+' and '-' points in n-dimensional Euclidean space, the problem of whether there exist two hyperplanes that separate them is NP-complete. His proof breaks down, however, when one restricts the coordinate values to {0,1} as we do here. Our proof turns out to be of a quite different style.

SET SPLITTING

The following problem was proven to be NP-complete by Lovasz (Garey and Johnson, 1979).

SET-SPLITTING:
Given: A finite set S and a collection C of subsets ci of S.
Question: Do there exist disjoint sets S1, S2 such that S1 ∪ S2 = S and, for all i, ci ⊄ S1 and ci ⊄ S2?

The Set-Splitting problem is also known as 2-non-Monotone Colorability. Our use of this problem is inspired by its use by Kearns, Li, Pitt, and Valiant to show that learning k-term DNF is NP-complete (Kearns et al., 1987), and the style of the reduction is similar.

THE REDUCTION

Suppose we are given an instance of the Set-Splitting problem. Create the following signed points on the n-dimensional hypercube {0,1}^n:

- Let the origin 0^n be labeled '+'.
- For each si, put a point labeled '-' at the neighbor to the origin that has a 1 in the ith bit, that is, at (0...010...0) with the 1 in position i. Call this point pi.
- For each cj = {s_j1, ..., s_jkj}, put a point labeled '+' at the location whose bits are 1 at exactly the positions j1, j2, ..., jkj, that is, at p_j1 + ... + p_jkj.

For example, let S = {s1, s2, s3}, C = {c1, c2}, c1 = {s1, s2}, c2 = {s2, s3}. So, we create '-' points at (0 0 1), (0 1 0), (1 0 0) and '+' points at (0 0 0), (1 1 0), (0 1 1) in this reduction (see Figure 2).

Figure 2: An example, drawn on the cube with corners (000), (100), (010), (001).

Claim: The given instance of the Set-Splitting problem has a solution if and only if the constructed instance of the 2-Linear Confinement of Positive Boolean Examples problem has a solution.

Proof: (⇒) Given S1 from the solution to the Set-Splitting instance, create the plane P1: a1x1 + ... + anxn = -1/2, where ai = -1 if si ∈ S1, and ai = n if si ∉ S1. Let the vectors a = (a1, ..., an) and x = (x1, ..., xn). This plane separates from the origin exactly the '-' points corresponding to si ∈ S1 and no '+' points. Notice that for each si ∈ S1, a · pi = -1, and that for each si ∉ S1, a · pi = n. For each '+' point p, a · p > -1/2, since either p is the origin or else p has a 1 in a bit i such that si ∉ S1. Similarly, create the plane P2 from S2.

(⇐) Let S1 be the set of points separated from the origin by P1 and S2 be those points separated by P2. Place any points separated by both planes in either S1 or S2 arbitrarily. Sets S1 and S2 cover S since all '-' points are separated from the origin by at least one of the planes. Consider some cj = {s_j1, ..., s_jkj} and the corresponding '-' points p_j1, ..., p_jkj. If, say, cj ⊆ S1, then P1 must separate all the p_ji from the origin. Therefore, P1 must separate p_j1 + ... + p_jkj from the origin. Since that point is the '+' point corresponding to cj, the '+' points are not all confined to one quadrant, contradicting our assumptions. So, no cj can be contained in S1. Similarly, no cj can be contained in S2. ∎

We now add a "gadget" consisting of 6 new points to handle the other possibilities at the output node. The gadget forces that the only way in which two planes could linearly separate the '+' points from the '-' points would be to confine the '+' points to one quadrant. The gadget consists of extra points and three new dimensions. We add three new dimensions, x_{n+1}, x_{n+2}, and x_{n+3}, and put '+' points in locations (0...0 101), (0...0 011) and '-' points in locations (0...0 100), (0...0 010), (0...0 001), (0...0 111) (see Figure 3).

Figure 3: The gadget, drawn on the cube with corners (000), (100), (010), (001) in the new dimensions.

The '+' points of this cube can be separated from the '-' points by appropriate settings of the weights of planes P1 and P2 corresponding to the three new dimensions. Given planes P1: a1x1 + ... + anxn = -1/2
and P2: b1x1 + ... + bnxn = -1/2 which solve a 2-Linear Confinement of Positive Boolean Examples instance in n dimensions, expand the solution to handle the gadget by setting P1 to

    a1x1 + ... + anxn + x_{n+1} + x_{n+2} - x_{n+3} = -1/2

and P2 to

    b1x1 + ... + bnxn - x_{n+1} - x_{n+2} + x_{n+3} = -1/2

(P1 separates the '-' point (0...0 001) from the '+' points, and P2 separates the other three '-' points from the '+' points). Also, notice that there is no way in which just one plane can separate the '+' points from the '-' points in the cube, and also no way for two planes to confine all the negative points in one quadrant. Thus we have proved the theorem.

CONCLUSIONS

Training a 3-node neural network whose nodes compute linear threshold functions is NP-complete. An open problem is whether the NP-completeness result can be extended to neural networks that use sigmoid functions. We believe that it can, because the use of sigmoid functions does not seem to alter significantly the expressive power of a neural network. Note that Judd (Judd, 1987), for the networks he considers, shows NP-completeness for a wide variety of node functions including sigmoids.

References

James A. Anderson and Edward Rosenfeld, editors. Neurocomputing: Foundations of Research. MIT Press, 1988.
M. Garey and D. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco, 1979.
J. Stephen Judd. Learning in networks is hard. In Proceedings of the First International Conference on Neural Networks, pages 685-692, I.E.E.E., San Diego, California, June 1987.
J. Stephen Judd. Neural Network Design and the Complexity of Learning. PhD thesis, Computer and Information Science Dept., University of Massachusetts, Amherst, Mass., U.S.A., 1988.
Michael Kearns, Ming Li, Leonard Pitt, and Leslie Valiant. On the learnability of boolean formulae.
In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, pages 285-295, New York, New York, May 1987.
Nimrod Megiddo. On the Complexity of Polyhedral Separability. Technical Report RJ 5252, IBM Almaden Research Center, August 1986.
Marvin Minsky and Seymour Papert. Perceptrons: An Introduction to Computational Geometry. The MIT Press, 1969.
David E. Rumelhart and James L. McClelland, editors. Parallel Distributed Processing (Volume I: Foundations). MIT Press, 1986.
AUTOMATIC LOCAL ANNEALING

Jared Leinbach
Department of Psychology
Carnegie-Mellon University
Pittsburgh, PA 15213

ABSTRACT

This research involves a method for finding global maxima in constraint satisfaction networks. It is an annealing process but, unlike most others, requires no annealing schedule. Temperature is instead determined locally by units at each update, and thus all processing is done at the unit level. There are two major practical benefits to processing this way: 1) processing can continue in 'bad' areas of the network, while 'good' areas remain stable, and 2) processing continues in the 'bad' areas as long as the constraints remain poorly satisfied (i.e., it does not stop after some predetermined number of cycles). As a result, this method not only avoids the kludge of requiring an externally determined annealing schedule, but it also finds global maxima more quickly and consistently than externally scheduled systems (a comparison to the Boltzmann machine (Ackley et al., 1985) is made). Finally, implementation of this method is computationally trivial.

INTRODUCTION

A constraint satisfaction network is a network whose units represent hypotheses, between which there are various constraints. These constraints are represented by bidirectional connections between the units. A positive connection weight suggests that if one hypothesis is accepted or rejected, the other one should be also, and a negative connection weight suggests that if one hypothesis is accepted or rejected, the other one should not be. The relative importance of satisfying each constraint is indicated by the absolute size of the corresponding weight. The acceptance or rejection of a hypothesis is indicated by the activation of the corresponding unit. Thus every point in the activation space corresponds to a possible solution to the constraint problem represented by the network. The quality of any solution can be calculated by summing the 'satisfiedness' of all the constraints.
The goal is to find a point in the activation space for which the quality is at a maximum. Unfortunately, if units update deterministically (i.e., if they always move toward the state that best satisfies their constraints) there is no means of avoiding local quality maxima in the activation space. This is simply a fundamental problem of all gradient descent procedures. Annealing systems attempt to avoid this problem by always giving units some probability of not moving towards the state that best satisfies their constraints. This probability is called the 'temperature' of the network. When the temperature is high, solutions are generally not good, but the network moves easily throughout the activation space. When the temperature is low, the network is committed to one area of the activation space, but it is very good at improving its solution within that area. Thus the annealing analogy is born. The notion is that if you start with the temperature high, and lower it slowly enough, the network will gradually replace its 'state mobility' with 'state improvement ability', in such a way as to guide itself into a globally maximal state (much as the atoms in slowly annealed metals find optimal bonding structures). To search for solutions this way requires some means of determining a temperature for the network at every update. Annealing systems simply use a predetermined schedule to provide this information. However, there are both practical and theoretical problems with this approach. The main practical problems are the following: 1) once an annealing schedule comes to an end, all processing is finished regardless of the quality of the current solution, and 2) temperature must be uniform across the network, even though different parts of the network may merit different temperatures (this is the case any time one part of the network is in a 'better' area of the activation space than another, which is a natural condition).
The theoretical problem with this approach involves the selection of annealing schedules. In order to pick an appropriate schedule for a network, one must use some knowledge about what a good solution for that network is. Thus in order to get the system to find a solution, you must already know something about the solution you want it to find. The problem is that one of the most critical elements of the process, the way that the temperature is decreased, is handled by something other than the network itself. Thus the quality of the final solution must depend, at least in part, on that system's understanding of the problem. By allowing each unit to control its own temperature during processing, Automatic Local Annealing avoids this serious kludge. In addition, by resolving the main practical problems, it also ends up finding global maxima more quickly and reliably than externally controlled systems.

MECHANICS

All units take on continuous activations between a uniform minimum and maximum value. There is also a uniform resting activation for all units (between the minimum and maximum). Units start at random activations, and are updated synchronously at each cycle in one of two possible ways. Either they are updated via any ordinary update rule for which a positive net input (as defined below) increases activation and a negative net input decreases activation, or they are simply reset to their resting activation. There is an update probability function that determines the probability of normal update for a unit based on its temperature (as defined below). It should be noted that once the net input for a unit has been calculated, finding its temperature is trivial (the quantity (a_i - rest) in the equation for goodness_i can come outside the summation).

Definitions:

    netinput_i = sum_j (a_j - rest) x w_ij

    temperature_i = -goodness_i / maxposgdnss_i   if goodness_i >= 0
                     goodness_i / maxneggdnss_i   otherwise

    goodness_i = sum_j (a_i - rest) x w_ij x (a_j - rest)

    maxposgdnss_i = the largest positive value that goodness_i could be
    maxneggdnss_i = the largest negative value that goodness_i could be

Maxposgdnss and maxneggdnss are constants that can be calculated once for each unit at the beginning of simulation. They depend only on the weights into the unit, and the constant maximum, minimum and resting activation values. Temperature is always a value between 1 and -1, with 1 representing high temperature and -1 low.

SIMULATIONS

The parameters below were used in processing both of the networks that were tested. The first network processed (Figure 1a) has two local maxima that are extremely close to its two global maxima. This is a very 'difficult' network in the sense that the search for a global maximum must be extremely sensitive to the minute difference between the global maxima and the next-best local maxima. The other network processed (Figure 1b) has many local maxima, but none of them are especially close to the global maxima. This is an 'easy' network in the sense that the slow and cautious process that was used was not really necessary. A more appropriate set of parameters would have improved performance on this second network, but it was not used in order to illustrate the relative generality of the algorithm.

Parameters:

    maximum activation = 1
    minimum activation = 0
    resting activation = 0.5

    normal update rule:
        delta activation_i = netinput_i x (maxactivation - activation_i) x k   if netinput_i >= 0
                             netinput_i x (activation_i - minactivation) x k   otherwise
        with k = 0.6

    update probability function: (given graphically; temperature axis marked at -1, -0.79, and 0)

This function defines a process that moves slowly towards a global maximum, moves away from even good solutions easily, and 'freezes' units that are colder than -0.79.

RESULTS

The results of running the Automatic Local Annealing process on these two networks (in comparison to a standard Boltzmann Machine's performance) are summarized in Figures 2a and 2b.
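The mechanics and parameters above are compact enough to state directly in code. This sketch implements the temperature equation and one synchronous update cycle for a toy two-unit network; since the paper specifies the update probability function only as a figure, the linear shape used here (probability 0 at temperature 1, rising to 1 at -0.79 and below) and the small jitter applied on reset are our assumptions.

```python
import random

REST, AMIN, AMAX, K = 0.5, 0.0, 1.0, 0.6

def temperature(i, a, w):
    """temperature_i per the Definitions: goodness_i scaled into [-1, 1].
    With activations in [0, 1] and rest 0.5, the extreme goodness values
    are +/- 0.25 * sum_j |w_ij|."""
    net = sum(w[i][j] * (a[j] - REST) for j in range(len(a)) if j != i)
    goodness = (a[i] - REST) * net
    maxpos = 0.25 * sum(abs(w[i][j]) for j in range(len(a)) if j != i)
    if goodness >= 0:
        return -goodness / maxpos
    return goodness / -maxpos   # maxneggdnss = -maxposgdnss here

def cycle(a, w, rng):
    """One synchronous ALA update of all units."""
    new = list(a)
    for i in range(len(a)):
        t = temperature(i, a, w)
        p = min(1.0, max(0.0, (1.0 - t) / 1.79))  # assumed linear probability
        net = sum(w[i][j] * (a[j] - REST) for j in range(len(a)) if j != i)
        if rng.random() < p:                       # normal update rule, k = 0.6
            if net >= 0:
                new[i] = a[i] + net * (AMAX - a[i]) * K
            else:
                new[i] = a[i] + net * (a[i] - AMIN) * K
        else:                                      # reset near rest (jitter assumed)
            new[i] = REST + rng.uniform(-0.01, 0.01)
    return new
```

For two units joined by a positive weight, the fully agreeing state (both on) has temperature -1 and so is frozen, while the maximally disagreeing state (one on, one off) has temperature 1 and is almost always reset toward rest, which is the local push toward constraint satisfaction that the text describes.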
With Automatic Local Annealing (ALA), the probability of having found a stable global maximum departs from zero fairly soon after processing begins, and increases smoothly up to one. The Boltzmann Machine, instead, makes little 'useful' progress until the end of the annealing schedule, and then quickly moves into a solution which may or may not be a global maximum. In order to get its reliability near that of ALA, the Boltzmann Machine's schedule must be so slow that solutions are found much more slowly than with ALA. Conversely, in order to start finding solutions as quickly as ALA, such a short schedule is necessary that the reliability becomes much worse than ALA's. Finally, if one makes a more reasonable comparison to the Boltzmann Machine (either by changing the parameters of the ALA process to maximize its performance on each network, or by using a single annealing schedule with the Boltzmann Machine for both networks), the overall performance advantage for ALA increases substantially.

Figure 1a. A 'Difficult' Network. Global maxima are: 1) all eight upper units on, with the remaining units off, 2) all eight lower units on, with the remaining units off. Next-best local maxima are: 1) four upper-left and four lower-right units on, with the remaining units off, 2) four upper-right and four lower-left units on, with the remaining units off.

Figure 1b. An 'Easy' Network. Necker cube network (McClelland & Rumelhart, 1988). Each set of four corresponding units are connected as shown above. Connections for the other three such sets were omitted for clarity. The global maxima have all units in one cube on with all units in the other off.

Figure 2a. Performance on a 'Difficult' Network (Figure 1a). The Boltzmann Machine curve uses a 125-cycle schedule; the horizontal axis shows cycles of processing.

Figure 2b. Performance on an 'Easy' Network (Figure 1b). Boltzmann Machine curves are shown for 30-, 20-, and 10-cycle schedules; the horizontal axis shows cycles of processing.

Notes: 1. Each line is based on 100 trials. A stable global maximum is one that the network remained in for the rest of the trial. 2. All annealing schedules were the best-performing three-leg schedules found.

DISCUSSION

HOW IT WORKS

The characteristics of the approach to a global maximum are determined by the shape of the update probability function. By modifying this shape, one can control such things as: how quickly/steadily the network moves towards a global maximum, how easily it moves away from local maxima, how good a solution must be in order for it to become completely stable, and so on. The only critical feature of the function is that as temperature decreases the probability of normal update increases. In this way, the colder a unit gets the more steadily it progresses towards an extreme activation value, and the hotter a unit gets the more time it spends near resting activation. From this you get hot units that have little effect on movement in the activation space (since they contribute little to any unit's net input), and cold units that compete to control this critical movement. The cold units 'cool' connected units that are in agreement with them, and 'heat' connected units that are in disagreement (see the temperature equation). As the connected agreeing units are cooled, they too begin to cool their connected agreeing units. In this way coldness spreads out, stabilizing sets of units whose hypotheses agree. This spreading is what makes the ALA algorithm work. A unit's decision about its hypothesis can now be felt by units that are only distantly connected, as must be the case if units are to act in accordance with any global criterion (e.g.
the overall quality of the states of these networks). In order to see why global maxima are found, one must consider the network as a whole. In general, the amount of time spent in any state is inversely related to the amount of heat in that state (since heat is directly related to stability). The state(s) containing the least possible heat for a given network will be the most stable. These state(s) will also represent the global maxima (since they have the least total 'dissatisfaction' of constraints). Therefore, given infinite processing time, the most commonly visited states will be the global maxima. More importantly, the 'visitedness' of every state will be proportional to its overall quality (a mathematical description of this has not yet been developed). This latter characteristic provides good practical benefits when one employs a notion of solution satisficing. This is done by using an update probability function that allows units to 'freeze' (i.e. have normal update probabilities of 1) at temperatures higher than -1 (as was done with the simulations described above). In this condition, states can become completely stable without perfectly satisfying all constraints. As the time of simulation increases, the probability of being in any given state approaches a value proportional to its quality. Thus, if there are any states good enough to be frozen, the chances of not having hit one will decrease with time. The amount of time necessary to satisfice is directly related to the freezing point used. Times as small as 0 (for freezing points > 1) and as large as infinity (for freezing points < -1) can be achieved. This type of time/quality trade-off is extremely useful in many practical applications.

MEASURING PERFORMANCE

While ALA finds global maxima faster and more reliably than Boltzmann Machine annealing, these are not the only benefits to ALA processing.
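The temperature-dependent update rule and the freezing point described above can be sketched in a few lines. This is a minimal illustration only: the paper does not reproduce its exact probability function or temperature equation here, so the linear fall-off, the step size, and the single-unit `step` routine are all assumptions.

```python
import random

def update_probability(temperature, freezing_point=0.0):
    """Probability of a 'normal' (constraint-driven) update.

    Assumed shape: the colder a unit, the more likely a normal update.
    At or below the freezing point the probability is 1, so the unit
    can become completely stable ('freeze').
    """
    if temperature <= freezing_point:
        return 1.0
    return max(0.0, 1.0 - temperature)  # assumed linear fall-off

def step(activation, temperature, net_input, resting=0.0, rate=0.2):
    """One illustrative ALA-style update for a single unit."""
    if random.random() < update_probability(temperature):
        # Normal update: move toward the extreme value the net input favors.
        target = 1.0 if net_input > 0 else -1.0
        return activation + rate * (target - activation)
    # Otherwise the (hot) unit spends its time near resting activation.
    return activation + rate * (resting - activation)
```

Raising the freezing point lets merely good states freeze completely, which is the time/quality trade-off of the satisficing discussion above.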
A number of other elements make it preferable to externally scheduled annealing processes: 1) Various solutions to subparts of problems are found and, at least temporarily, maintained during processing. If one considers constraint satisfaction networks in terms of schema processors, this corresponds nicely to the simultaneous processing of all levels of schemas and subschemas. Subschemas with obvious solutions get filled in quickly, even when the higher level schemas have still not found real solutions. While these initial sub-solutions may not end up as part of the final solution, their appearance during processing can still be quite useful in some settings. 2) ALA is much more biologically feasible than externally scheduled systems. Not only can units function on their own (without the use of an intelligent external processor), but the paths traversed through the activation space (as described by the schema example above) also parallel human processing more closely. 3) ALA processing may lend itself to simple learning algorithms. During processing, units are always acting in close accord with the constraints that are present. At first distant constraints are ignored in favor of more immediate ones, but regardless the units rarely actually defy any constraints in the network. Thus basic approaches to making weight adjustments, such as continuously increasing weights between units that are in agreement about their hypotheses, and decreasing weights between units that are in disagreement about their hypotheses (Minsky & Papert, 1968), may have new power. This is an area of current research, which would represent an enormous time savings over Boltzmann Machine type learning (Ackley et al., 1985) if it were to be found feasible.

REFERENCES

Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A Learning Algorithm for Boltzmann Machines. Cognitive Science, 9, 147-169.

McClelland, J. L., & Rumelhart, D. E. (1988).
Explorations in Parallel Distributed Processing. Cambridge, MA: MIT Press. Minsky, M., & Papert, S. (1968). Perceptrons. Cambridge, MA: MIT Press.
EFFICIENT PARALLEL LEARNING ALGORITHMS FOR NEURAL NETWORKS

Alan H. Kramer and A. Sangiovanni-Vincentelli
Department of EECS
U.C. Berkeley
Berkeley, CA 94720

ABSTRACT

Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribiere method are also significantly more efficient than the Back-propagation algorithm. These results are based on experiments performed on small boolean learning problems and the noisy real-valued learning problem of hand-written character recognition.

1 INTRODUCTION

The problem of learning in feedforward neural networks has received a great deal of attention recently because of the ability of these networks to represent seemingly complex mappings in an efficient parallel architecture. This learning problem can be characterized as an optimization problem, but it is unique in several respects. Function evaluation is very expensive. However, because the underlying network is parallel in nature, this evaluation is easily parallelizable. In this paper, we describe the network learning problem in a numerical framework and investigate parallel algorithms for its solution. Specifically, we compare the performance of several parallelizable optimization techniques to the standard Back-propagation algorithm. Experimental results show the clear superiority of the numerical techniques.

2 NEURAL NETWORKS

A neural network is characterized by its architecture, its node functions, and its interconnection weights. In a learning problem, the first two of these are fixed, so that the weight values are the only free parameters in the system. When we talk about "weight space" we refer to the parameter space defined by the weights in a network; thus a "weight vector" w is a vector or a point in weight space which defines the values of each weight in the network.
We will usually index the components of a weight vector as w_{ij}, meaning the weight value on the connection from unit i to unit j. Thus N(w, r), a network function with n output units, is an n-dimensional vector-valued function defined for any weight vector w and any input vector r:

N(w, r) = [o_1(w, r), o_2(w, r), ..., o_n(w, r)]^T,

where o_i is the ith output unit of the network. Any node j in the network has input i_j(w, r) = \sum_{i \in fanin_j} o_i(w, r) w_{ij} and output o_j(w, r) = f_j(i_j(w, r)), where f_j(\cdot) is the node function. The evaluation of N(\cdot) is inherently parallel and the time to evaluate N(\cdot) on a single input vector is O(#layers). If pipelining is used, multiple input vectors can be evaluated in constant time.

3 LEARNING

The "learning" problem for a neural network refers to the problem of finding a network function which approximates some desired "target" function T(\cdot), defined over the same set of input vectors as the network function. The problem is simplified by asking that the network function match the target function on only a finite set of input vectors, the "training set" R. This is usually done with an error measure. The most common measure is sum-squared error, which we use to define the "instance error" between N(w, r) and T(r) at weight vector w and input vector r:

e_{N,T}(w, r) = \sum_{i \in outputs} (1/2)(T_i(r) - o_i(w, r))^2 = (1/2) ||T(r) - N(w, r)||^2.

We can now define the "error function" between N(\cdot) and T(\cdot) over R as a function of w:

E_{N,T,R}(w) = \sum_{r \in R} e_{N,T}(w, r).

The learning problem is thus reduced to finding a w for which E_{N,T,R}(w) is minimized. If this minimum value is zero then the network function approximates the target function exactly on all input vectors in the training set. Henceforth, for notational simplicity we will write e(\cdot) and E(\cdot) rather than e_{N,T}(\cdot) and E_{N,T,R}(\cdot).

4 OPTIMIZATION TECHNIQUES

As we have framed it here, the learning problem is a classic problem in optimization.
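The error measures defined in Section 3 can be made concrete with a small sketch. The 2-2-1 architecture, the weight naming, and the use of the logistic node function are assumptions chosen for illustration only.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, r):
    """N(w, r) for a tiny 2-2-1 network; `w` maps (layer, i, layer, j)
    tuples to weight values (a naming convention assumed here)."""
    h = [logistic(sum(w[('in', i, 'h', j)] * r[i] for i in range(2)))
         for j in range(2)]
    return [logistic(sum(w[('h', j, 'out', 0)] * h[j] for j in range(2)))]

def instance_error(w, r, target):
    # e_{N,T}(w, r) = (1/2) * ||T(r) - N(w, r)||^2
    out = forward(w, r)
    return 0.5 * sum((t - o) ** 2 for t, o in zip(target, out))

def total_error(w, training_set):
    # E(w): the instance errors summed over the training set R
    return sum(instance_error(w, r, t) for r, t in training_set)
```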
More specifically, network learning is a problem of function approximation, where the approximating function is a finite parameter-based system. The goal is to find a set of parameter values which minimizes a cost function, which in this case is a measure of the error between the target function and the approximating function. Among the optimization algorithms that can be used to solve this type of problem, gradient-based algorithms have proven to be effective in a variety of applications {Avriel, 1976}. These algorithms are iterative in nature; thus w_k is the weight vector at the kth iteration. Each iteration is characterized by a search direction d_k and a step a_k. The weight vector is updated by taking a step in the search direction as below:

for (k = 0; evaluate(w_k) != CONVERGED; ++k) {
    d_k = determine_search_direction();
    a_k = determine_step();
    w_{k+1} = w_k + a_k * d_k;
}

If d_k is a direction of descent, such as the negative of the gradient, a sufficiently small step will reduce the value of E(\cdot). Optimization algorithms vary in the way they determine a_k and d_k, but otherwise they are structured as above.

5 CONVERGENCE CRITERION

The choice of convergence criterion is important. An algorithm must terminate when E(\cdot) has been sufficiently minimized. This may be done with a threshold on the value of E(\cdot), but this alone is not sufficient. In the case where the error surface contains "bad" local minima, it is possible that the error threshold will be unattainable, and in this case the algorithm will never terminate. Some researchers have proposed the use of an iteration limit to guarantee termination despite an unattainable error threshold {Fahlman, 1989}. Unfortunately, for practical problems where this limit is not known a priori, this approach is inapplicable. A necessary condition for w* to be a minimum, either local or global, is that the gradient g(w*) = \nabla E(w*) = 0.
Hence, the most usual convergence criterion for optimization algorithms is ||g(w_k)|| \le \epsilon, where \epsilon is a sufficiently small gradient threshold. The downside of using this as a convergence test is that, for successful trials, learning times will be longer than they would be in the case of an error threshold. Error tolerances are usually specified in terms of an acceptable bit error, and a threshold on the maximum bit error (MBE) is a more appropriate representation of this criterion than is a simple error threshold. For this reason we have chosen a convergence criterion consisting of a gradient threshold and an MBE threshold (\tau), terminating when ||g(w_k)|| < \epsilon or MBE(w_k) < \tau, where MBE(\cdot) is defined as:

MBE(w_k) = \max_{r \in R} \max_{i \in outputs} (1/2)(T_i(r) - o_i(w_k, r))^2.

6 STEEPEST DESCENT

Steepest Descent is the most classical gradient-based optimization algorithm. In this algorithm the search direction d_k is always the negative of the gradient - the direction of steepest descent. For network learning problems the computation of g(w), the gradient of E(w), is straightforward:

g(w) = \nabla E(w) = [\frac{d}{dw} \sum_{r \in R} e(w, r)]^T = \sum_{r \in R} \nabla e(w, r),

\nabla e(w, r) = [\frac{\partial e(w, r)}{\partial w_{11}}, \frac{\partial e(w, r)}{\partial w_{12}}, ..., \frac{\partial e(w, r)}{\partial w_{mn}}]^T,

where

\frac{\partial e(w, r)}{\partial w_{ij}} = o_i(w, r) \delta_j(w, r),

with, for output units,

\delta_j(w, r) = f_j'(i_j(w, r)) (o_j(w, r) - T_j(r)),

while for all other units

\delta_j(w, r) = f_j'(i_j(w, r)) \sum_{k \in fanout_j} \delta_k(w, r) w_{jk}.

The evaluation of g is thus almost dual to the evaluation of N; while the latter feeds forward through the net, the former feeds back. Both computations are inherently parallelizable and of the same complexity. The method of Steepest Descent determines the step a_k by inexact linesearch, meaning that it approximately minimizes E(w_k + a_k d_k). There are many ways to perform this computation, but they are all iterative in nature and thus involve the evaluation of E(w_k + a_k d_k) for several values of a_k. As each evaluation requires a pass through the entire training set, this is expensive.
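The combined stopping rule above (gradient threshold \epsilon together with an MBE threshold \tau) can be sketched as follows; the function names and the representation of outputs and targets as plain lists are assumptions.

```python
def mbe(outputs_by_example, targets_by_example):
    """MBE(w) = max over examples r of max over output units i of
    (1/2) * (T_i(r) - o_i(w, r))^2."""
    return max(
        max(0.5 * (t - o) ** 2 for t, o in zip(targets, outputs))
        for outputs, targets in zip(outputs_by_example, targets_by_example))

def converged(grad, outputs_by_example, targets_by_example,
              eps=1e-8, tau=0.1):
    """Terminate when ||g(w_k)|| < eps or MBE(w_k) < tau."""
    grad_norm = sum(g * g for g in grad) ** 0.5
    return grad_norm < eps or mbe(outputs_by_example, targets_by_example) < tau
```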
Curve fitting techniques are employed to reduce the number of iterations needed to terminate a linesearch. Again, there are many ways to curve fit. We have employed the method of false position and used the Wolfe Test to terminate a linesearch {Luenberger, 1986}. In practice we find that the typical linesearch in a network learning problem terminates in 2 or 3 iterations.

7 PARTIAL CONJUGATE GRADIENT METHODS

Because linesearch guarantees that E(w_{k+1}) < E(w_k), the Steepest Descent algorithm can be proven to converge for a large class of problems {Luenberger, 1986}. Unfortunately, its convergence rate is only linear and it suffers from the problem of "cross-stitching" {Luenberger, 1986}, so it may require a large number of iterations. One way to guarantee a faster convergence rate is to make use of higher order derivatives. Others have investigated the performance of algorithms of this class on network learning tasks, with mixed results {Becker, 1989}. We are not interested in such techniques because they are less parallelizable than the methods we have pursued and because they are more expensive, both computationally and in terms of storage requirements. Because we are implementing our algorithms on the Connection Machine, where memory is extremely limited, this last concern is of special importance. We thus confine our investigation to algorithms that require explicit evaluation only of g, the first derivative. Conjugate gradient techniques take advantage of second order information to avoid the problem of cross-stitching without requiring the estimation and storage of the Hessian (matrix of second-order partials). The search direction is a combination of the current gradient and the previous search direction:

d_{k+1} = -g_{k+1} + \beta_k d_k.

There are various rules for determining \beta_k; we have had the most success with the Polak-Ribiere rule, where \beta_k is determined from g_{k+1} and g_k according to

\beta_k = \frac{(g_{k+1} - g_k)^T g_{k+1}}{g_k^T g_k}.
As in the Steepest Descent algorithm, a_k is determined by linesearch. With a simple reinitialization procedure, partial conjugate gradient techniques are as robust as the method of Steepest Descent {Powell, 1977}; in practice we find that the Polak-Ribiere method requires far fewer iterations than Steepest Descent.

8 BACKPROPAGATION

The Batch Back-propagation algorithm {Rumelhart, 1986} can be described in terms of our optimization framework. Without momentum, the algorithm is very similar to the method of Steepest Descent in that d_k = -g_k. Rather than being determined by a linesearch, a, the "learning rate", is a fixed user-supplied constant. With momentum, the algorithm is similar to a partial conjugate gradient method, as d_{k+1} = -g_{k+1} + \beta_k d_k, though again \beta, the "momentum term", is fixed. On-line Back-propagation is a variation which makes a change to the weight vector following the presentation of each input vector: d_k = -\nabla e(w_k, r_k). Though very simple, we can see that this algorithm is numerically unsound for several reasons. Because \beta is fixed, d_k may not be a descent direction, and in this case any a will increase E(\cdot). Even if d_k is a direction of descent (as is the case for Batch Back-propagation without momentum), a may be large enough to move from one wall of a "valley" to the opposite wall, again resulting in an increase in E(\cdot). Because the algorithm cannot guarantee that E(\cdot) is reduced by successive iterations, it cannot be proven to converge. In practice, finding a value for a which results in fast progress and stable behavior is a black art, at best.

9 WEIGHT DECAY

One of the problems of performing gradient descent on the "error surface" is that minima may be at infinity. (In fact, for boolean learning problems all minima are at infinity.) Thus an algorithm may have to travel a great distance through weight space before it converges.
Many researchers have found that weight decay is useful for reducing learning times {Hinton, 1986}. This technique can be viewed as adding a term corresponding to the length of the weight vector to the cost function; this modifies the cost surface in a way that bounds all the minima. Rather than minimizing on the error surface, minimization is performed on the surface with cost function

C(w) = E(w) + \frac{\gamma}{2} ||w||^2,

where \gamma, the relative weight cost, is a problem-specific parameter. The gradient for this cost function is g(w) = \nabla C(w) = \nabla E(w) + \gamma w, and for any step a_k, the effect of \gamma is to "decay" the weight vector by a factor of (1 - a_k \gamma).

10 PARALLEL IMPLEMENTATION ISSUES

We have emphasized the parallelism inherent in the evaluation of E(\cdot) and g(\cdot). To be efficient, any learning algorithm must exploit this parallelism. Without momentum, the Back-propagation algorithm is the simplest gradient descent technique, as it requires the storage of only a single vector, g_k. Momentum requires the storage of only one additional vector, d_{k-1}. The Steepest Descent algorithm also requires the storage of only a single vector more than Back-propagation without momentum: d_k, which is needed for linesearch. In addition to d_k, the Polak-Ribiere method requires the storage of two additional vectors: d_{k-1} and g_{k-1}. The additional storage requirements of the optimization techniques are thus minimal. The additional computational requirements are essentially those needed for linesearch - a single dot product and a single broadcast per iteration. These operations are parallelizable (log time on the Connection Machine) so the additional computation required by these algorithms is also minimal, especially since computation time is dominated by the evaluation of E(\cdot) and g(\cdot). Both the Steepest Descent and Polak-Ribiere algorithms are easily parallelizable.
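The pieces above (the Polak-Ribiere direction update, an inexact linesearch, and weight decay folded into the cost as C(w) = E(w) + (\gamma/2)||w||^2) can be combined in a toy sketch. The backtracking linesearch here stands in for the authors' false-position/Wolfe-test procedure, the reset of negative \beta stands in for their reinitialization, and the quadratic test cost is an arbitrary example, not one of the paper's learning problems.

```python
def polak_ribiere(cost, grad, w, iters=100, tol=1e-10):
    """Minimize cost(w) given grad(w); both take and return lists of floats."""
    g = grad(w)
    d = [-gi for gi in g]
    for _ in range(iters):
        # Inexact linesearch: halve the step until the cost decreases
        # (a crude stand-in for false position with the Wolfe test).
        a, c0 = 1.0, cost(w)
        while cost([wi + a * di for wi, di in zip(w, d)]) > c0 and a > 1e-12:
            a *= 0.5
        w = [wi + a * di for wi, di in zip(w, d)]
        g_new = grad(w)
        if sum(gi * gi for gi in g_new) < tol:
            break
        # beta_k = (g_{k+1} - g_k)^T g_{k+1} / (g_k^T g_k), reset if negative
        beta = (sum((gn - go) * gn for gn, go in zip(g_new, g))
                / sum(go * go for go in g))
        beta = max(beta, 0.0)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return w

# Toy cost with weight decay: E(w) = (1/2)(w0^2 + 10*w1^2), gamma = 1e-4.
GAMMA = 1e-4
cost = lambda w: 0.5 * (w[0] ** 2 + 10 * w[1] ** 2) + 0.5 * GAMMA * (w[0] ** 2 + w[1] ** 2)
grad = lambda w: [(1 + GAMMA) * w[0], (10 + GAMMA) * w[1]]
w_star = polak_ribiere(cost, grad, [3.0, -2.0])
```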
We have implemented these algorithms, as well as Back-propagation, on a Connection Machine {Hillis, 1986}.

11 EXPERIMENTAL RESULTS - BOOLEAN LEARNING

We have compared the performance of the Polak-Ribiere (P-R), Steepest Descent (S-D), and Batch Back-propagation (B-B) algorithms on small boolean learning problems. In all cases we have found the Polak-Ribiere algorithm to be significantly more efficient than the others. All the problems we looked at were based on three-layer networks (1 hidden layer) using the logistic function for all node functions. Initial weight vectors were generated by randomly choosing each component from (+r, -r). \gamma is the relative weight cost, and \epsilon and \tau define the convergence test. Learning times are measured in terms of epochs (sweeps through the training set). The encoder problem is easily scaled and has no bad local minima (assuming sufficient hidden units: log(#inputs)). All Back-propagation trials used a = 1 and \beta = 0; these values were found to work about as well as any others. Table 1 summarizes the results. Standard deviations for all data were insignificant (< 25%).

TABLE 1. Encoder Results

Encoder   num      Parameter Values            Average Epochs to Convergence
Problem   trials   r    gamma  tau   eps       P-R      S-D       B-B
10-5-10   100      1.0  1e-4   1e-1  1e-8      63.71    109.06    196.93
10-5-10   100      1.0  1e-4   2e-2  1e-8      71.27    142.31    299.55
10-5-10   100      1.0  1e-4   7e-4  1e-8     104.70    431.43   3286.20
10-5-10   100      1.0  1e-4   0.0   1e-4     279.52   1490.00  13117.00
10-5-10   100      1.0  1e-4   0.0   1e-6     353.30   2265.00  24910.00
10-5-10   100      1.0  1e-4   0.0   1e-8     417.90   2863.00  35260.00
4-2-4     100      1.0  1e-4   0.1   1e-8      36.92     56.90    179.95
8-3-8     100      1.0  1e-4   0.1   1e-8      67.63    194.80    594.76
16-4-16   100      1.0  1e-4   0.1   1e-8     121.30    572.80    990.33
32-5-32    25      1.0  1e-4   0.1   1e-8     208.60   1379.40   1826.15
64-6-64    25      1.0  1e-4   0.1   1e-8     405.60   4187.30   >10000

The parity problem is interesting because it is also easily scaled and its weight space is known to contain bad local minima.
To report learning times for problems with bad local minima, we use expected epochs to solution, EES. This measure makes sense especially if one considers an algorithm with a restart procedure: if the algorithm terminates in a bad local minimum it can restart from a new random weight vector. EES can be estimated from a set of independent learning trials as the ratio of total epochs to successful trials. The results of the parity experiments are summarized in Table 2. Again, the optimization techniques were more efficient than Back-propagation. This fact is most evident in the case of bad trials. All trials used r = 1, \gamma = 1e-4, \tau = 0.1 and \epsilon = 1e-8. Back-propagation used a = 1 and \beta = 0.

TABLE 2. Parity Results

Parity   alg   trials   %succ   avg_succ (s.d.)   avg_unsucc (s.d.)    EES
2-2-1    P-R   100      72%        73 (43)            232 (54)          163
         S-D   100      80%        95 (115)          3077 (339)         864
         B-B   100      78%       684 (1460)        47915 (5505)      14197
4-4-1    P-R   100      61%       352 (122)           453               641
         S-D   100      99%      2052 (1753)        18512 (2324)
         B-B   100      71%      8704 (8339)        95345 (11930)     48430
8-8-1    P-R    16      50%      1716 (748)           953 (355)        2669
         S-D     6               >10000             >10000            >10000
         B-B     2               >100000            >100000           >100000

12 LETTER RECOGNITION

One criticism of batch-based gradient descent techniques is that for large real-world, real-valued learning problems, they will be less efficient than On-line Back-propagation. The task of characterizing hand-drawn examples of the 26 capital letters was chosen as a good problem to test this, partly because others have used this problem to demonstrate that On-line Back-propagation is more efficient than Batch Back-propagation {Le Cun, 1986}. The experimental setup was as follows: Characters were hand-entered in an 80 x 120 pixel window with a 5 pixel-wide brush (mouse controlled). Because the objective was to have many noisy examples of the same input pattern, not to learn scale and orientation invariance, all characters were roughly centered and roughly the full size of the window.
Following character entry, the input window was symbolically gridded to define 100 8 x 12 pixel regions. Each of these regions was an input and the percentage of "on" pixels in the region was its value. There were thus 100 inputs, each of which could have any of 96 (8 x 12) distinct values. 26 outputs were used to represent a one-hot encoding of the 26 letters, and a network with a single hidden layer containing 10 units was chosen. The network thus had a 100-10-26 architecture; all nodes used the logistic function. A training set consisting of 64 distinct sets of the 26 upper case letters was created by hand in the manner described. 25 "A" vectors are shown in Figure 1. This large training set was recursively split in half to define a series of 6 successively larger training sets, R_0 to R_6, where R_0 is the smallest training set consisting of 1 of each letter and R_i contains R_{i-1} and 2^{i-1} new letter sets. A testing set consisting of 10 more sets of hand-entered characters was also created to measure network performance. For each R_i, we compared naive learning to incremental learning, where naive learning means initializing w_0^{(i)} randomly and incremental learning means setting w_0^{(i)} to w_s^{(i-1)} (the solution weight vector to the learning problem based on R_{i-1}). The incremental epoch count for the problem based on R_i was normalized to the number of epochs needed starting from w_s^{(i-1)} plus 1/2 the number of epochs taken by the problem based on R_{i-1} (since |R_{i-1}| = (1/2)|R_i|). This normalized count thus reflects the total number of relative epochs needed to get from a naive network to a solution incrementally. Both Polak-Ribiere and On-line Back-propagation were tried on all problems.
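Two details of this setup can be sketched. First, the input features: the fraction of "on" pixels in each 8 x 12 region of the 80 x 120 window (the 10 x 10 grid arrangement is an inference from "100 regions", not stated explicitly). Second, the normalized incremental epoch count, whose recursion (epochs from the previous solution weights plus half the previous normalized count) reproduces the NORM column of Table 3 from the INC column.

```python
def grid_features(pixels):
    """100 features from a 120-row x 80-column binary pixel window:
    the fraction of 'on' pixels in each 8 x 12 region."""
    assert len(pixels) == 120 and all(len(row) == 80 for row in pixels)
    feats = []
    for gy in range(10):        # 10 bands of 12 rows each
        for gx in range(10):    # 10 bands of 8 columns each
            on = sum(pixels[y][x]
                     for y in range(12 * gy, 12 * gy + 12)
                     for x in range(8 * gx, 8 * gx + 8))
            feats.append(on / 96.0)   # 8 * 12 = 96 pixels per region
    return feats

def normalized_epochs(incremental_epochs):
    """incremental_epochs[i]: epochs for R_i starting from the solution
    weights for R_{i-1} (for i = 0, from a random start).  Each normalized
    count adds half the previous one, since |R_{i-1}| = (1/2)|R_i|."""
    norm = []
    for i, e in enumerate(incremental_epochs):
        norm.append(e if i == 0 else e + norm[-1] / 2.0)
    return norm

# The INC column of Table 3 yields its NORM column:
print([round(n) for n in normalized_epochs([95, 83, 63, 14, 191, 153, 46])])
# -> [95, 130, 128, 78, 230, 268, 180]
```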
Table 3 contains only results for the Polak-Ribiere method because no combination of weight decay and learning rate was found for which Back-propagation could find a solution after 1000 times the number of iterations taken by Polak-Ribiere, although values of \gamma from 0.0 to 0.001 and values for a from 1.0 to 0.001 were tried. All problems had r = 1, \gamma = 0.01, \epsilon = 1e-8 and \tau = 0.1. Only a single trial was done for each problem. Performance on the test set is shown in the last column.

FIGURE 1. 25 "A"s

TABLE 3. Letter Recognition

prob   Learning Time (epochs)    Test set
        INC    NORM    NAIV         %
R0       95      95      95       53.5
R1       83     130      85       69.2
R2       63     128     271       80.4
R3       14      78     388       83.4
R4      191     230    1129       92.3
R5      153     268    1323       98.1
R6       46     180     657       99.6

The incremental learning paradigm was very effective at reducing learning times. Even non-incrementally, the Polak-Ribiere method was more efficient than On-line Back-propagation on this problem. The network with only 10 hidden units was sufficient, indicating that these letters can be encoded by a compact set of features.

13 CONCLUSIONS

Describing the computational task of learning in feedforward neural networks as an optimization problem allows exploitation of the wealth of mathematical programming algorithms that have been developed over the years. We have found that the Polak-Ribiere algorithm offers superior convergence properties and significant speedup over the Back-propagation algorithm. In addition, this algorithm is well-suited to parallel implementation on massively parallel computers such as the Connection Machine. Finally, incremental learning is a way to increase the efficiency of optimization techniques when applied to large real-world learning problems such as that of handwritten character recognition.
Acknowledgments

The authors would like to thank Greg Sorkin for helpful discussions. This work was supported by the Joint Services Educational Program grant #482427-25304.

References

{Avriel, 1976} Mordecai Avriel. Nonlinear Programming, Analysis and Methods. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1976.
{Becker, 1989} Sue Becker and Yan Le Cun. Improving the Convergence of Back-Propagation Learning with Second Order Methods. In Proceedings of the 1988 Connectionist Models Summer School, pages 29-37, Morgan Kaufmann, San Mateo, Calif., 1989.
{Fahlman, 1989} Scott E. Fahlman. Faster Learning Variations on Back-Propagation: An Empirical Study. In Proceedings of the 1988 Connectionist Models Summer School, pages 38-51, Morgan Kaufmann, San Mateo, Calif., 1989.
{Hillis, 1986} William D. Hillis. The Connection Machine. MIT Press, Cambridge, Mass., 1986.
{Hinton, 1986} G. E. Hinton. Learning Distributed Representations of Concepts. In Proceedings of the Cognitive Science Society, pages 1-12, Erlbaum, 1986.
{Kramer, 1989} Alan H. Kramer. Optimization Techniques for Neural Networks. Technical Memo #UCB-ERL-M89-1, U.C. Berkeley Electronics Research Laboratory, Berkeley, Calif., Jan. 1989.
{Le Cun, 1986} Yan Le Cun. HLM: A Multilayer Learning Network. In Proceedings of the 1986 Connectionist Models Summer School, pages 169-177, Carnegie-Mellon University, Pittsburgh, Penn., 1986.
{Luenberger, 1986} David G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley Co., Reading, Mass., 1986.
{Powell, 1977} M. J. D. Powell. Restart Procedures for the Conjugate Gradient Method. Mathematical Programming, 12 (1977), 241-254.
{Rumelhart, 1986} David E. Rumelhart, Geoffrey E. Hinton, and R. J. Williams. Learning Internal Representations by Error Propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, pages 318-362, MIT Press, Cambridge, Mass., 1986.
ADAPTIVE NEURAL NET PREPROCESSING FOR SIGNAL DETECTION IN NON-GAUSSIAN NOISE¹

Richard P. Lippmann and Paul Beckman
MIT Lincoln Laboratory
Lexington, MA 02173

ABSTRACT

A nonlinearity is required before matched filtering in minimum error receivers when additive noise is present which is impulsive and highly non-Gaussian. Experiments were performed to determine whether the correct clipping nonlinearity could be provided by a single-input single-output multi-layer perceptron trained with back propagation. It was found that a multi-layer perceptron with one input and output node, 20 nodes in the first hidden layer, and 5 nodes in the second hidden layer could be trained to provide a clipping nonlinearity with fewer than 5,000 presentations of noiseless and corrupted waveform samples. A network trained at a relatively high signal-to-noise (S/N) ratio and then used as a front end for a linear matched filter detector greatly reduced the probability of error. The clipping nonlinearity formed by this network was similar to that used in current receivers designed for impulsive noise and provided similar substantial improvements in performance.

INTRODUCTION

The most widely used neural net, the adaptive linear combiner (ALC), is a single-layer perceptron with linear input and output nodes. It is typically trained using the LMS algorithm and forms one of the most common components of adaptive filters. ALCs are used in high-speed modems to construct equalization filters, in telephone links as echo cancelers, and in many other signal processing applications where linear filtering is required [9]. The purpose of this study was to determine whether multi-layer perceptrons with linear input and output nodes but with sigmoidal hidden nodes could be as effective for adaptive nonlinear filtering as ALCs are for linear filtering.

¹ This work was sponsored by the Defense Advanced Research Projects Agency and the Department of the Air Force.
The views expressed are those of the authors and do not reflect the policy or position of the U.S. Government.

The task explored in this paper is signal detection with impulsive noise where an adaptive nonlinearity is required for optimal performance. Impulsive noise occurs in underwater acoustics and in extremely low frequency communications channels where impulses caused by lightning strikes propagate many thousands of miles [2]. This task was selected because a nonlinearity is required in the optimal receiver, the structure of the optimal receiver is known, and the resulting signal detection error rate provides an objective measure of performance. The only other previous studies of the use of multi-layer perceptrons for adaptive nonlinear filtering that we are aware of [6,8] appear promising but provide no objective performance comparisons. In the following we first present examples which illustrate that multi-layer perceptrons trained with back-propagation can rapidly form clipping and other nonlinearities useful for signal processing with deterministic training. The signal detection task is then described and theory is presented which illustrates the need for nonlinear processing with non-Gaussian noise. Nonlinearities formed when the input to a net is a corrupted signal and the desired output is the uncorrupted signal are then presented for no noise, impulsive noise, and Gaussian noise. Finally, signal detection performance results are presented that demonstrate large improvements in performance with an adaptive nonlinearity and impulsive noise.

FORMING DETERMINISTIC NONLINEARITIES

A theorem proven by Kolmogorov and described in [5] demonstrates that single-input single-output continuous nonlinearities can be formed by a multi-layer perceptron with two layers of hidden nodes.
This proof, however, requires complex nonlinear functions in the hidden nodes that are very sensitive to the desired input/output function and may be difficult to realize. More recently, Lapedes [4] presented an intuitive description of how multi-layer perceptrons with sigmoidal nonlinearities could produce continuous nonlinear mappings. A careful mathematical proof was recently developed by Cybenko [1] which demonstrated that continuous nonlinear mappings can be formed using sigmoidal nonlinearities and a multi-layer perceptron with one layer of hidden nodes. This proof, however, is not constructive and does not indicate how many nodes are required in the hidden layer. The purpose of our study was to determine whether multi-layer perceptrons with sigmoidal nonlinearities and trained using back-propagation could adaptively and rapidly form clipping nonlinearities. Initial experiments were performed to determine the difficulty of learning complex mappings using multi-layer perceptrons trained using back-propagation. Networks with 1 and 2 hidden layers and from 1 to 50 hidden nodes per layer were evaluated. Input and output nodes were linear and all other nodes included sigmoidal nonlinearities. Best overall performance was provided by the three-layer perceptron shown in Fig. 1. It has 20 nodes in the first and 5 nodes in the second hidden layer. This network could form a wide variety of mappings and required only slightly more training than other networks. It was used in all experiments.

Figure 1: The multi-layer perceptron with linear input and output nodes that was used in all experiments.

The three-layer network shown in Fig. 1 was used to form clipping and other deterministic nonlinearities. Results in Fig. 2 demonstrate that a clipping nonlinearity could be formed with fewer than 1,000 input samples.
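A minimal version of this experiment can be sketched as below: a 1-20-5-1 perceptron with linear input and output nodes and sigmoidal hidden nodes, trained by on-line back-propagation to approximate a clipping function. The layer sizes and gain of 0.1 come from the text; the momentum term is omitted for brevity, and the weight ranges, sampling range, and iteration count are assumptions.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class MLP:
    """1-20-5-1 network: linear input/output, sigmoidal hidden nodes."""
    def __init__(self, sizes=(1, 20, 5, 1)):
        self.w = [[[random.uniform(-0.1, 0.1) for _ in range(sizes[l])]
                   for _ in range(sizes[l + 1])] for l in range(len(sizes) - 1)]
        self.b = [[random.uniform(-0.1, 0.1) for _ in range(sizes[l + 1])]
                  for l in range(len(sizes) - 1)]

    def forward(self, x):
        acts = [[x]]
        for l, (W, B) in enumerate(zip(self.w, self.b)):
            z = [sum(wi * ai for wi, ai in zip(row, acts[-1])) + b
                 for row, b in zip(W, B)]
            last = (l == len(self.w) - 1)
            acts.append(z if last else [sigmoid(v) for v in z])  # linear output
        return acts

    def train_step(self, x, t, eta=0.1):
        acts = self.forward(x)
        deltas = [[acts[-1][0] - t]]          # linear output node delta
        for l in range(len(self.w) - 1, 0, -1):
            d = []
            for j, a in enumerate(acts[l]):   # back-propagate through sigmoids
                s = sum(self.w[l][k][j] * deltas[0][k]
                        for k in range(len(deltas[0])))
                d.append(s * a * (1.0 - a))
            deltas.insert(0, d)
        for l in range(len(self.w)):
            for j in range(len(self.w[l])):
                for i in range(len(self.w[l][j])):
                    self.w[l][j][i] -= eta * deltas[l][j] * acts[l][i]
                self.b[l][j] -= eta * deltas[l][j]

clip = lambda x: max(-1.0, min(1.0, x))       # the target clipping function
net = MLP()
xs = [random.uniform(-2.0, 2.0) for _ in range(200)]
mse = lambda: sum((net.forward(x)[-1][0] - clip(x)) ** 2 for x in xs) / len(xs)
before = mse()
for _ in range(2000):                          # on-line presentations
    x = random.uniform(-2.0, 2.0)
    net.train_step(x, clip(x))
after = mse()
```

Even this simplified version shows the qualitative behavior reported above: the error against the clipping target drops quickly as random input/output pairs are presented.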
Input/output point pairs were determined by selecting the input at random over the range plotted and using the deterministic clipping function shown as a solid line in Fig. 2. Back-propagation training [7] was used with the gain term (η) equal to 0.1 and the momentum term (α) equal to 0.5. These values provide good convergence rates for the clipping function and all other functions tested. Initial connection weights were set to small random values.

Figure 2: Clipping nonlinearities formed using back-propagation training and the multi-layer perceptron from Fig. 1 (top) and the rms error produced by these nonlinearities versus training time (bottom).

The multi-layer perceptron from Fig. 1 was also used to form the four nonlinear functions shown in Fig. 3. The "Hole Punch" is useful in nonlinear signal processing. It performs much the same function as the clipper but completely eliminates amplitudes above a certain threshold level. Accurate approximation of this function required more than 50,000 input samples. The "Step" has one sharp edge and could be roughly approximated after 2,000 input samples. The "Double Pulse" requires approximation of two close "pulses" and is the nonlinear function analogy of the disjoint region problem studied in [3]. In this example, back-propagation training approximated the rightmost pulse first after 5,000 input samples. Both pulses were then approximated fairly well after 50,000 input samples. The "Gaussian Pulse" is a smooth curve that could be approximated well after only 2,000 input samples. These results demonstrate that back-propagation training with sigmoidal nonlinearities can form many different nonlinear functions. Qualitative results on training times are similar to those reported in [3]. In this previous study it was demonstrated that simple half-plane decision regions could be formed for classification problems with little training while complex disjoint decision regions required long training times. These new results suggest that complex nonlinearities with many sharp discontinuities require much more training time than simple smooth curves.

Figure 3: Four deterministic nonlinearities formed using the multi-layer perceptron from Fig. 1. Desired functions are plotted as solid lines while functions formed using back-propagation with different numbers of input samples are plotted using dots and dashes.

THE SIGNAL DETECTION TASK

The signal detection task was to discriminate between two equally likely input signals as shown in Fig. 4. One signal (s0(t)) corresponds to no input and the other signal (s1(t)) was a sinewave pulse with fixed duration and known amplitude, frequency, and phase. Noise was added to these inputs, the resultant signal was passed through a memoryless nonlinearity, and a matched filter was then used to select hypothesis H0 corresponding to no input or H1 corresponding to the sinewave pulse. The matched filter multiplied the output of the nonlinearity by the known time-aligned signal waveform, integrated this product over time, and decided H1 if the result was greater than a threshold and H0 otherwise. The threshold was selected to provide a minimum overall error rate. The optimum nonlinearity used in the detector depends on the noise distribution. If the signal levels are small relative to the noise levels, then the optimum nonlinearity is approximated by f(x) = (d/dx) ln(f_n(x)), where f_n(x) is the instantaneous probability density function of the noise [2]. This function is linear for Gaussian noise but has a clipping shape for impulsive noise.

Figure 4: The signal detection task was to discriminate between a sinewave pulse and a no-input condition with additive impulsive noise.

Examples of the signal, impulsive noise and Gaussian noise are presented in Fig. 5. The signal had a fixed duration of 250 samples and peak amplitude of 1.0. The impulsive noise was defined by its amplitude distribution and inter-arrival time. Amplitudes had a zero-mean, Laplacian distribution with a standard deviation (σ) of 14.1 in all experiments. The standard deviation was reduced to 2.8 in Fig. 5 for illustrative purposes. Inter-arrival times (ΔT) between noise impulses had a Poisson distribution. The mean inter-arrival time was varied in experiments to obtain different S/N ratios after adding noise. For example, varying inter-arrival times from 500 to 2 samples results in S/N ratios that vary from roughly 1 dB to -24 dB. Additive Gaussian noise had zero mean and a standard deviation (σ) of 0.1 in all experiments.

Figure 5: The input to the nonlinearity with no noise, additive impulsive noise, and additive Gaussian noise.

ADAPTIVE TRAINING WITH NOISE

The three-layer perceptron was trained as shown in Fig. 6 using the signal plus noise as the input and the uncorrupted signal as the desired output. Network weights were adapted after every sample input using back-propagation training. Adaptive nonlinearities formed during training are shown in Fig. 7. These are similar to those required by theory. No noise results in a nonlinearity that is linear over the range of the input sinewave (-1 to +1) after fewer than 3,000 input samples. Impulsive noise at a high S/N ratio (ΔT = 125 or S/N = -5 dB) results in a nonlinearity that clips above the signal level after roughly 5,000 input samples and then slowly forms a "Hole Punch" nonlinearity as the number of training samples increases. Gaussian noise results in a nonlinearity that is roughly linear over the range of the input sinewave after fewer than 5,000 input samples.

Figure 6: The procedure used for adaptive training.

Figure 7: Nonlinearities formed with adaptive training with no additive noise, with additive impulsive noise at a S/N level of -5 dB, and with additive Gaussian noise.

SIGNAL DETECTION PERFORMANCE

Signal detection performance was measured using a matched filter detector and the nonlinearity shown in the center of Fig. 7 for 10,000 input training samples. The error rate with a minimum-error matched filter is plotted in Fig. 8 for impulsive noise at S/N ratios ranging from roughly 5 dB to -24 dB. This error rate was estimated from 2,000 signal detection trials. Signal detection performance always improved with the nonlinearity and sometimes the improvement was dramatic. The error rate provided with the adaptively-formed nonlinearity is essentially identical to that provided by a clipping nonlinearity that clips above the signal level.
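As an illustrative sketch of the noise model and clip-then-matched-filter detector described above (the pulse frequency, clipping level, and decision threshold are assumptions for the example, not values from the text; Poisson impulse arrivals are approximated by per-sample Bernoulli draws):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 250                                  # signal duration in samples, as in the text

def sinewave_pulse(amp=1.0, cycles=10):  # cycle count is an assumption
    t = np.arange(N)
    return amp * np.sin(2.0 * np.pi * cycles * t / N)

def impulsive_noise(sigma=14.1, mean_interarrival=125):
    """Zero-mean Laplacian impulse amplitudes with std sigma; Poisson-like
    inter-arrival times approximated by per-sample Bernoulli draws."""
    noise = np.zeros(N)
    hits = rng.random(N) < 1.0 / mean_interarrival
    noise[hits] = rng.laplace(0.0, sigma / np.sqrt(2.0), hits.sum())
    return noise

def detect(x, ref, nonlinearity, threshold):
    # memoryless nonlinearity, then matched filter: multiply by the known
    # time-aligned waveform, integrate, and compare to a threshold
    return float(np.sum(nonlinearity(x) * ref)) > threshold

clipper = lambda x: np.clip(x, -1.0, 1.0)        # clips just above signal level
s = sinewave_pulse()
thresh = 0.5 * float(np.sum(s * s))              # halfway to the clean correlation
decide_h1 = detect(s + impulsive_noise(), s, clipper, thresh)  # signal present
decide_h0 = detect(impulsive_noise(), s, clipper, thresh)      # noise only
```

With the clipper in place, the rare large impulses contribute at most ±1 per sample to the correlation, so the sinewave pulse is detected reliably even at low S/N ratios; without it a single impulse can dominate the integral.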
This error rate is roughly zero down to -24 dB and then rises rapidly with higher levels of impulsive noise. This rapid increase in error rate below -24 dB is not shown in Fig. 8. The error rate with linear processing rises slowly as the S/N ratio drops and reaches roughly 36% when the S/N ratio is -24 dB. Further exploratory experiments demonstrated that the nonlinearity formed by back-propagation was not robust to the S/N ratio used during training. A clipping nonlinearity is only formed when the number of samples of uncorrupted sinewave input is high enough to form the linear section of the curve and the number of samples of noise pulses is low, but sufficient to form the nonlinear clipping section of the nonlinearity. At high noise levels the resulting nonlinearity is not linear over the range of the input signal. It instead resembles a curve that interpolates between a flat horizontal input-output curve and the desired clipping curve.

Figure 8: The signal detection error rate with impulsive noise when the S/N ratio after adding the noise ranges from 5 dB to -24 dB.

SUMMARY AND DISCUSSION

In summary, it was first demonstrated that multi-layer perceptrons with linear input and output nodes could approximate prespecified clipping nonlinearities required for signal detection with impulsive noise with fewer than 1,000 trials of back-propagation training. More complex nonlinearities could also be formed but required longer training times. Clipping nonlinearities were also formed adaptively using a multi-layer perceptron with the corrupted signal as the input and the noise-free signal as the desired output.
Nonlinearities learned using this approach at high S/N ratios were similar to those required by theory and improved signal detection performance dramatically at low S/N ratios. Further work is necessary to further explore the utility of this technique for forming adaptive nonlinearities. This work should explore the robustness of the nonlinearity formed to variations in the input S/N ratio. It should also explore the use of multi-layer perceptrons and back-propagation training for other adaptive nonlinear signal processing tasks such as system identification, noise removal, and channel modeling.

References

[1] G. Cybenko. Approximation by superpositions of a sigmoidal function. Research note, Department of Computer Science, Tufts University, October 1988. [2] J. E. Evans and A. S. Griffiths. Design of a sanguine noise processor based upon world-wide extremely low frequency (ELF) recordings. IEEE Transactions on Communications, COM-22:528-539, April 1974. [3] W. M. Huang and R. P. Lippmann. Neural net and traditional classifiers. In D. Anderson, editor, Neural Information Processing Systems, pages 387-396, New York, 1988. American Institute of Physics. [4] A. Lapedes and R. Farber. How neural nets work. In D. Anderson, editor, Neural Information Processing Systems, pages 442-456, New York, 1988. American Institute of Physics. [5] G. G. Lorentz. The 13th problem of Hilbert. In F. E. Browder, editor, Mathematical Developments Arising from Hilbert Problems. American Mathematical Society, Providence, R.I., 1976. [6] D. Palmer and D. DeSieno. Removing random noise from EKG signals using a back propagation network, 1987. HNC Inc., San Diego, CA. [7] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, volume 1: Foundations, chapter 8. MIT Press, Cambridge, MA, 1986. [8] S. Tamura and A. Waibel.
Noise reduction using connectionist models. In Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, volume 1: Speech Processing, pages 553-556, April 1988. [9] B. Widrow and S. D. Stearns. Adaptive Signal Processing. Prentice-Hall, NJ, 1985.
LEARNING THE SOLUTION TO THE APERTURE PROBLEM FOR PATTERN MOTION WITH A HEBB RULE Martin I. Sereno Cognitive Science C-015 University of California, San Diego La Jolla, CA 92093-0115

ABSTRACT

The primate visual system learns to recognize the true direction of pattern motion using local detectors only capable of detecting the component of motion perpendicular to the orientation of the moving edge. A multilayer feedforward network model similar to Linsker's model was presented with input patterns each consisting of randomly oriented contours moving in a particular direction. Input layer units are granted component direction and speed tuning curves similar to those recorded from neurons in primate visual area V1 that project to area MT. The network is trained on many such patterns until most weights saturate. A proportion of the units in the second layer solve the aperture problem (e.g., show the same direction-tuning curve peak to plaids as to gratings), resembling pattern-direction selective neurons, which first appear in area MT.

INTRODUCTION

Supervised learning schemes have been successfully used to learn a variety of input-output mappings. Explicit neuron-by-neuron error signals and the apparatus for propagating them across layers, however, are not realistic in a neurobiological context. On the other hand, there is ample evidence in real neural networks for conductances sensitive to correlation of pre- and post-synaptic activity, as well as multiple areas connected by topographic, somewhat divergent feedforward projections. The present project was to try to learn the solution to the aperture problem for pattern motion using a simple Hebb rule and a layered feedforward network. Some of the connections responsible for the selectivity of cortical neurons to local stimulus features develop in the absence of patterned visual experience.
For example, newborn cats and primates already have orientation-selective neurons in primary visual cortex (area 17 or V1), before they open their eyes. The prenatally generated orientation selectivity is sharpened by subsequent visual experience. Linsker (1986) has shown that feedforward networks with somewhat divergent, topographic interlayer connections, linear summation, and simple Hebb rules develop units in tertiary and higher layers that have parallel, elongated excitatory and inhibitory subfields when trained solely on random inputs to the first layer. By contrast, the development of the circuitry in secondary and tertiary visual cortical areas necessary for processing more complex, non-local features of visual arrays--e.g., orientation gradients, shape from shading, pattern translation, dilation, rotation--is probably much more dependent on patterned visual experience. Parietal visual cortical areas, for example, are almost totally unresponsive in dark-reared monkeys, despite the fact that these monkeys have a normal-appearing V1 (Hyvarinen, 1984). Behavioral indices suggest that development of some perceptual abilities may require months of experience. Human babies, for example, only evidence seeing the transition between randomly moving dots and circular 2-D motion at 6 months, while the transition from horizontally moving dots with random x-axis velocities to dots with sinusoidally varying x-axis velocities (the latter gives the percept of a rotating 3-D cylinder) is only detected after 7 months (Spitz, Stiles-Davis, & Siegel, 1988) (see Fig. 1).

Figure 1. Motion field transitions.

During the first 6 months of its life, a human baby typically makes approximately 30 million saccades, experiencing in the process many views which contain large moving fields and smaller moving objects.
The importance of these millions of glances for the development of the ability to recognize complex visual objects has often been acknowledged. Brute visual experience may, however, be just as important in developing a solution to the simpler problem of detecting pattern motion using local cues.

NETWORK ARCHITECTURE

Moving visual stimuli are processed in several stages in the primate visual system. The first cortical stage is layer 4C-alpha of area V1, which receives its main ascending input from the magnocellular layers of the lateral geniculate nucleus. Layer 4C-alpha projects to layer 4B, which contains many tightly-tuned direction-selective neurons (Movshon et al., 1985). These neurons, however, respond to moving contours as if these contours were moving perpendicular to their local orientation--i.e., they fire in proportion to the difference between the orthogonal component of motion and their best direction (for a bar). An orientation series run for a layer 4B neuron using a plaid (2 orthogonal moving gratings) thus results in two peaks in the direction tuning curve, displaced 45 degrees to either side of the peak for a single grating (Movshon et al., 1985). The aperture problem for pattern motion (see e.g., Horn & Schunck, 1981) thus exists for cells in area V1 of the adult (and presumably infant) primate. Layer 4B neurons project topographically via direct and indirect pathways to area MT, a small extrastriate area specialized for processing moving stimuli.

Figure 2. Network Architecture (Input Layer = V1, layer 4B; Second Layer = MT).

A subset of neurons in MT show a single peak in their direction tuning curves for a plaid that is lined up with the peak for a single grating--i.e., they fire in proportion to the difference between the true pattern direction and their best direction (for a bar). These neurons therefore solve the aperture problem presented to them by the local translational motion detectors in layer 4B of V1.
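A small numeric illustration of the aperture problem just described, using a made-up triangular tuning curve (the shapes and widths are illustrative, not the paper's model): a unit that responds only to the motion component perpendicular to each contour shows one peak for a grating but two peaks, displaced 45 degrees to either side, for a plaid of two orthogonal gratings.

```python
import numpy as np

def tuning(direction_deg, best_deg, width=45.0):
    # triangular direction tuning with wrap-around (illustrative shape)
    d = np.abs((direction_deg - best_deg + 180.0) % 360.0 - 180.0)
    return max(0.0, 1.0 - d / width)

def component_response(best_deg, contour_normal_dirs):
    # a layer-4B-like unit sums responses to each contour's normal motion
    return sum(tuning(n, best_deg) for n in contour_normal_dirs)

dirs = np.arange(0, 360, 5)
grating = [0.0]          # one grating drifting at 0 degrees
plaid = [-45.0, 45.0]    # two orthogonal gratings; true pattern direction is 0

r_grating = np.array([component_response(b, grating) for b in dirs])
r_plaid = np.array([component_response(b, plaid) for b in dirs])
# r_grating peaks at 0 degrees; r_plaid peaks near +/-45 degrees and is near
# zero at the true pattern direction -- the signature of component selectivity
```

A pattern-direction selective MT-like unit, by contrast, would show a single peak at 0 degrees for both stimuli.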
The excitatory receptive fields of all MT neurons are much larger than those in V1 as a result of divergence in the V1-MT projection as well as the smaller areal extent of MT compared to V1. M.E. Sereno (1987) showed using a supervised learning rule that a linear, two-layer network can satisfactorily solve the aperture problem characterized above. The present task was to see if unsupervised learning might suffice. A simple caricature of the V1-to-MT projection was constructed. At each x-y location in the first layer of the network, there are a set of units tuned to a range of local directions and speeds. The input layer thus has four dimensions. The sample network illustrated above (Fig. 2) has 5 different directions and 3 speeds at each x-y location. Input units are granted tuning curves resembling those found for neurons in layer 4B of area V1.

Figure 3. Excitatory Tuning Curves (1st layer).

The tuning curves are linear, with half-height overlap for both direction and speed (see Fig. 3--for 12 directions and 4 speeds), and direction and speed tuning interact linearly. Inhibition is either tuned or untuned (see Fig. 4), and scaled to balance excitation. Since direction tuning wraps around, there is a trough in the tuned inhibition condition. Speed tuning does not wrap around. The relative effect of direction and speed tuning in the output of first layer units is set by a parameter. As with Linsker, the probability that a unit in the first layer will connect with a unit in the second layer falls off as a gaussian centered on the retinotopically equivalent point in the second layer (see Fig. 2).

Figure 4. Tuned vs. Untuned Inhibition.

New random numbers are drawn to generate the divergent gaussian projection pattern for each first layer unit (i.e., all of the units at a single x-y location have different, overlapping projection patterns). There are no local connections within a layer. The network update rule is similar to that of Linsker except that there is no term like a decay of synaptic weights (k1) and no offset parameter for the correlations (k2). Also, all of the units in each layer are modeled explicitly. The activation, y_j, for each unit is a linear weighted sum of its u_i inputs, scaled by a, and clipped to a maximum or minimum value: y_j = a Σ_i u_i w_ij, clipped to y_max or y_min. Weights are also clipped to maximum and minimum values. The change in each weight, Δw_ij, is a simple fraction, α, of the product of the pre- and post-synaptic values: Δw_ij = α u_i y_j.

RESULTS

The network is trained with a set of full-field texture movements. Each stimulus consists of a set of randomly oriented contours--one at each x-y point--all moving in the same, randomly chosen pattern direction. A typical stimulus is drawn in Figure 5 as the set of component motions visible to neurons in V1 (i.e., direction components perpendicular to the local contour); the local speed component varies as the cosine of the angle between the pattern direction and the perpendicular to the local contour. The single component motion at each point is run through the first layer tuning curves. The response of the input layer to such a pattern is shown in Figure 6.

Figure 5. Local Component Motions from a Training Stimulus (pattern direction is toward right).

Figure 6. Output of Portion of First Layer to a Training Stimulus (untuned inhibition).

Each rectangular box represents a single x-y location, containing 48 units tuned to different combinations of direction and speed (12 directions run horizontally and 4 speeds run vertically). Open and filled squares indicate positive and negative outputs. Inhibition is untuned here. The Hebb sensitivity, α, was set so that 1,000 such patterns could be presented before most weights saturated at maximum values. Weights initially had small random values drawn from a flat distribution centered around zero. The scale parameter for the weighted sum, a, was set low enough to prevent second layer units from saturating all the time.
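The update rule just described can be sketched as follows (the layer sizes, clipping limits, and parameter values here are placeholders, not those used in the simulations):

```python
import numpy as np

rng = np.random.default_rng(2)

N_IN, N_OUT = 48, 10          # placeholder layer sizes
A, ALPHA = 0.05, 0.01         # activation scale a and Hebb sensitivity alpha
Y_MIN, Y_MAX = -1.0, 1.0      # activation clipping limits
W_MIN, W_MAX = -0.5, 0.5      # weight clipping limits

# small random initial weights from a flat distribution centered on zero
w = rng.uniform(-0.05, 0.05, (N_OUT, N_IN))

def hebb_step(u, w):
    # y_j = clip(a * sum_i u_i w_ij); dw_ij = alpha * u_i * y_j, with the
    # weights themselves clipped -- no decay or offset terms, as in the text
    y = np.clip(A * (w @ u), Y_MIN, Y_MAX)
    w = np.clip(w + ALPHA * np.outer(y, u), W_MIN, W_MAX)
    return y, w

for _ in range(100):          # present 100 random input patterns
    u = rng.uniform(-1.0, 1.0, N_IN)
    y, w = hebb_step(u, w)
```

Because the rule is purely correlational, repeated presentation of structured stimuli drives most weights to their clipping limits, which is why training is stopped once most weights saturate.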
In Figure 6, direction tuning is 2.5 times as important as speed tuning in determining the output of a unit. Selectivity of second layer units for pattern direction was examined both before and after training using four stimulus conditions: 1) grating--contours perpendicular to pattern direction, 2) random grating--contours randomly oriented with respect to pattern direction (same as the training condition), 3) plaid--contours oriented 45 or 67 degrees from perpendicular to pattern direction, 4) random plaid--contours randomly oriented, but avoiding angles nearly perpendicular to pattern direction. The pre-training direction tuning curves for the grating conditions usually showed some weak direction selectivity. Pre-training direction tuning curves for the plaid conditions, however, were often twin-peaked, exhibiting pattern component responses displaced to either side of the grating peak. After training, by contrast, the direction tuning peaks in all test conditions were single and sharp, and the plaid condition peaks were usually aligned with the grating peaks. An example of the weights onto a mature pattern direction selective unit is shown in Figure 7. As before, each rectangular box contains 48 units representing one point in x-y space of the input layer (the tails of the 2-D gaussian are cropped in this illustration), except that the black and white boxes now represent negative and positive weights onto a single second layer unit. Within each box, 12 directions run horizontally and 4 speeds run vertically. The peaks in the direction tuning curves for gratings and 135 degree plaids for this unit were sharp and aligned.

Figure 7. Mature Weights Onto Pattern Direction-Selective Unit.

Pattern direction selective units such as this comprised a significant fraction of the second layer when direction tuning was set to be 2 to 4 times as important as speed tuning in determining the output of first layer units. Post-training weight structures under these conditions actually formed a continuum--from units with component direction selectivity, to units with pattern direction selectivity, to units with component speed selectivity. Not surprisingly, varying the relative effects of direction and speed in the V1 tuning curves generated more direction-tuned-only or speed-tuned-only units. In all conditions, units showed clear boundaries between maximum and minimum weights in the direction-speed subspace at each x-y point, and a single best direction. The location of these boundaries was always correlated across different x-y input points. Most units showing unambiguous pattern direction selectivity were characterized by two oppositely sloping diagonal boundaries between maximum and minimum weights in direction-speed subspace (see e.g., Fig. 7). The stimuli used to train the network above--full-field movements of a rigid texture field of randomly oriented contours--are unnatural; generally, there may be one or more objects in the field moving in different directions and at different speeds than the surround. Weight distributions needed to solve the aperture problem appear when the network is trained on occluding moving objects against moving backgrounds (object and background velocities chosen randomly on each trial), as long as the object is made small or large relative to the receptive field size. The solution breaks down when the moving objects occupy a significant fraction of the area of a second layer receptive field. For comparison, the network was also trained using two different kinds of noise stimuli.
In the first condition (unit noise), each new stimulus consisted of random input values on each input unit. With other network parameters held the same, the typical mature weight pattern onto a second layer unit showed an intimate intermixture of maximum and minimum weights in the direction-speed subspace at each x-y location. In the second condition (direction noise), each new stimulus consisted of a random direction at each x-y location. The mature weight patterns now showed continuous regions of all-maximum or all-minimum weights in the speed-direction subspace at each x-y point. In contrast to the situation with full-field texture movement stimuli, however, the best directions at each of the x-y points providing input to a given unit were uncorrelated. In addition, multiple best directions at a single x-y point sometimes appeared.

DISCUSSION

This simple model suggests that it may be possible to learn the solution to the aperture problem for pattern motion using only biologically realistic unsupervised learning and minimally structured motion fields. Using a similar network architecture, M.E. Sereno had previously shown that supervised learning on the problem of detecting pattern motion direction from local cues leads to the emergence of chevron-shaped weight structures in direction-speed space (M.E. Sereno, 1987). The weight structures generated here are similar except that the inside or outside of the chevron is filled in, and upside-down chevrons are more common. This results in decreased selectivity to pattern speed in the second layer. The model needs to be extended to more complex motion correlations in the input--e.g., rotation, dilation, shear, multiple objects, flexible objects. MT in primates does not respond selectively to rotation or dilation, while its target area MST does.
Thus, biological estimates of rotation and dilation are made in two stages--rotation and dilation are not detected locally, but instead constructed from estimates of local translation. Higher layers in the present model may be able to learn interesting 'second-order' things about rotation, dilation, segmentation, and transparency. The real primate visual system, of course, has a great many more parts than this model. There are a large number of interconnected cortical visual areas--perhaps as many as 25. A substantial portion of the 600 possible between-area connections may be present (for review, see M.I. Sereno, 1988). There are at least 6 map-like visual structures, and several more non-retinotopic visual structures in the thalamus (beyond the dLGN) that interconnect with the cortical visual areas. Each visual cortical area then has its own set of layers and interlayer connections. The most unbiological aspect of this model is the lack of time and the crude methods of gain control (clipped synaptic weights and input/output functions). Future models should employ within-area connections and time-dependent Hebb rules. Making a biologically realistic model of intermediate and higher level visual processing is difficult since it ostensibly requires making a biologically realistic model of earlier, yet often no less complex stations in the system--e.g., the retina, dLGN, and layer 4C of primary visual cortex in the present case. One way to avoid having to model all of the stations up to the one of interest is to use physiological data about how the earlier stations respond to various stimuli, as was done in the present model. This shortcut is applicable to many other problems in modeling the visual system. In order for this to be most effective, physiologists and modelers need to cooperate in generating useful libraries of response profiles to arbitrary stimuli. Many stimulus parameters interact, often nonlinearly, to produce the final output of a cell.
In the case of simple moving stimuli in V1 and MT, we minimally need to know the interaction between stimulus size, stimulus speed, stimulus direction, surround speed, surround direction, and x-y starting point of the movement relative to the classical excitatory receptive field. Collecting this many response combinations from single cells requires faster serial presentation of stimuli than is customary in visual physiology experiments. There is no obvious reason, however, why the rate of stimulus presentation need be any less than the rate at which the visual system normally operates--namely, 3-5 new views per second. Also, we need to get a better understanding of the 'stimulus set'. The very large set of stimuli on which the real visual system is trained (millions of views) is still very poorly characterized. It would be worthwhile and practical, nevertheless, to collect a naturalistic corpus of perhaps 1000 views (several hours of viewing).

Acknowledgements

I thank M.E. Sereno and U. Wehmeier for discussions and comments. Supported by NIH grant F32 EY05887. Networks and displays were constructed on the Rochester Connectionist Simulator.

References

B.K.P. Horn & B.G. Schunck. Determining optical flow. Artif. Intell., 17, 185-203 (1981). J. Hyvarinen. The Parietal Cortex. Springer-Verlag (1984). R. Linsker. From basic network principles to neural architecture: emergence of orientation-selective cells. Proc. Nat. Acad. Sci. 83, 8390-8394 (1986). J.A. Movshon, E.H. Adelson, M.S. Gizzi & W.T. Newsome. Analysis of moving visual patterns. In Pattern Recognition Mechanisms. Springer-Verlag, pp. 117-151 (1985). M.E. Sereno. Modeling stages of motion processing in neural networks. Proc. 9th Ann. Conf. Cog. Sci. Soc. pp. 405-416 (1987). M.I. Sereno. The visual system. In W.v. Seelen, U.M. Leinhos, & G. Shaw (eds.), Organization of Neural Networks. VCH, pp. 176-184 (1988). R.V. Spitz, J. Stiles-Davis & R.M. Siegel.
Infant perception of rotation from rigid structure-from-motion displays. Neurosci. Abstr. 14, 1244 (1988).
GENESIS: A SYSTEM FOR SIMULATING NEURAL NETWORKS Matthew A. Wilson, Upinder S. Bhalla, John D. Uhley, James M. Bower. Division of Biology, California Institute of Technology, Pasadena, CA 91125 ABSTRACT We have developed a graphically oriented, general purpose simulation system to facilitate the modeling of neural networks. The simulator is implemented under UNIX and X-windows and is designed to support simulations at many levels of detail. Specifically, it is intended for use in both applied network modeling and in the simulation of detailed, realistic, biologically-based models. Examples of current models developed under this system include mammalian olfactory bulb and cortex, invertebrate central pattern generators, as well as more abstract connectionist simulations. INTRODUCTION Recently, there has been a dramatic increase in interest in exploring the computational properties of networks of parallel distributed processing elements (Rumelhart and McClelland, 1986), often referred to as "neural networks" (Anderson, 1988). Much of the current research involves numerical simulations of these types of networks (Anderson, 1988; Touretzky, 1989). Over the last several years, there has also been a significant increase in interest in using similar computer simulation techniques to study the structure and function of biological neural networks. This effort can be seen as an attempt to reverse-engineer the brain with the objective of understanding the functional organization of its very complicated networks (Bower, 1989). Simulations of these systems range from detailed reconstructions of single neurons, or even components of single neurons, to simulations of large networks of complex neurons (Koch and Segev, 1989). Modelers associated with each area of research are likely to benefit from exposure to a large range of neural network simulations. A simulation package capable of implementing these varied types of network models would facilitate this interaction. 
485 486 Wilson, Bhalla, Uhley and Bower DESIGN FEATURES OF THE SIMULATOR We have built GENESIS (GEneral NEtwork SImulation System) and its graphical interface XODUS (X-based Output and Display Utility for Simulators) to provide a standardized and flexible means of constructing neural network simulations while making minimal assumptions about the actual structure of the neural components. The system is capable of growing according to the needs of users by incorporating user-defined code. We will now describe the specific features of this system. Device independence. The entire system has been designed to run under UNIX and X-windows (version 11) for maximum portability. The code was developed on Sun workstations and has been ported to Sun3's, Sun4's, Sun 386i's, and Masscomp computers. It should be portable to all installations supporting UNIX and X-11. In addition, we will be developing a parallel implementation of the simulation system (Nelson et al., 1989). Modular design. The design of the simulator and interface is based on a "building-block" approach. Simulations are constructed of modules which receive inputs, perform calculations on them, and generate outputs (figs. 2, 3). This approach is central to the generality and flexibility of the system, as it allows the user to easily add new features without modification to the base code. Interactive specification and control. Network specification and control is done at a high level using graphical tools and a network specification language (fig. 1). The graphics interface provides the highest and most user-friendly level of interaction. It consists of a number of tools which the user can configure to suit a particular simulation. Through the graphical interface the user can display, control and adjust the parameters of simulations. The network specification language we have developed for network modeling represents a more basic level of interaction. 
This language consists of a set of simulator and interface functions that can be executed interactively from the keyboard or from text files storing command sequences (scripts). The language also provides for arithmetic operations and program control functions such as looping, conditional statements, and subprograms or macros. Figures 3 and 4 demonstrate how some of these script functions are used. Simulator and interface toolkits. Extendable toolkits, which consist of module libraries, graphical tools and the simulator base code itself (fig. 2), provide the routines and modules used to construct specific simulations. The base code provides the common control and support routines for the entire system. Figure 1. Levels of interaction with the simulator (script files, the GENESIS command window and keyboard, the graphics interface, and the script language interpreter). CONSTRUCTING SIMULATIONS The first step in using GENESIS involves selecting and linking together those modules from the toolkits that will be necessary for a particular simulation (figs. 2, 3). Additional commands in the scripting language establish the network and the graphical interface (fig. 4). Module classes. Modules in GENESIS are divided into computational modules, communications modules and graphical modules. All instances of computational modules are called elements. These are the central components of simulations, performing all of the numerical calculations. Elements can communicate in two ways: via links and via connections. Links allow the passing of data between two elements with no time delay and with no computation being performed on the data. Thus, links serve to unify a large number of elements into a single computational unit (e.g. they are used to link elements together to form the neuron in fig. 3C). Connections, on the other hand, 
interconnect computational units via simulated communication channels which can incorporate time delays and perform transformations on the data being transmitted (e.g. the axons in fig. 3C). Graphical modules called widgets are used to construct the interface. These modules can issue script commands as well as respond to them, thus allowing interactive access to simulator structures and functions. Hierarchical organization. In order to keep track of the structure of a simulation, elements are organized into a tree hierarchy similar to the directory structure in UNIX (fig. 3B). The tree structure does not explicitly represent the pattern of links and connections between elements; it is simply a tool for organizing complex groups of elements in the simulation. Simulation example. As an example of the types of modules available and the process of structuring them into a network simulation and graphical interface, we will describe the construction of a simple biological neural simulation (fig. 3). The model consists of two neurons. Each neuron contains a passive dendritic compartment, an active cell body, an axonal output, and a synaptic input onto the dendrite. The axon of one neuron connects to a synaptic input of the other. Figure 3 shows the basic structure of the model as implemented under GENESIS. In the model, the synapse, channels, Figure 2. Stages in constructing a simulation (simulator and interface toolkit: graphics modules, computational modules, communications modules, linker, simulator). Figure 3. Implementation of a two-neuron model in GENESIS. 
(A) Schematic diagram of compartmentally modeled neurons. Each cell in this simple model has a passive dendritic compartment, an active cell-body, and an output axon. There is a synaptic input to the dendrite of one cell and two ionic channels on the cell body. (B) Hierarchical representation of the components of the simulation as maintained in GENESIS. The cell-body of neuron 1 is referred to as /network/neuron1/cell-body. (C) A representation of the functional links between the basic components of one neuron. (D) Sample interface control and display widgets created using the XODUS toolkit. dendritic compartments, cell body and axon are each treated as separate computational elements (fig. 3C). Links allow elements to share information (e.g. the Na channel needs to have access to the cell-body membrane voltage). Figure 4 shows a portion of the script used to construct this simulation.

Create different types of elements and assign them names:
    create neuron1
    create active_compartment cell-body
    create passive_compartment dendrite
    create synapse dendrite/synapse
Establish functional "links" between the elements:
    link dendrite to cell-body
    link dendrite/synapse to dendrite
Set parameters associated with the elements:
    set dendrite capacitance 1.0e-6
Make copies of entire element subtrees:
    copy neuron1 to neuron2
Establish "connections" between two elements:
    connect neuron1/axon to neuron2/dendrite/synapse
Set up a graph to monitor an element variable:
    graph neuron1/cell-body potential
Make a control panel with several control "widgets":
    xform control
    xdialog nstep set-nstep -default 200
    xdialog dt set-dt -default 0.5
    xtoggle Euler set-euler

Figure 4. Sample script commands for constructing a simulation (see fig. 3). SIMULATOR SPECIFICATIONS Memory requirements of GENESIS. Currently, GENESIS consists of about 20,000 lines of simulator code and a similar amount of graphics code, all written in C. 
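The element tree of fig. 3B and a few of the fig. 4 commands can be mimicked in a short Python sketch. This is a toy illustration only; GENESIS itself is written in C, and the class and method names below are ours, not part of the GENESIS API:

```python
class Element:
    """Toy stand-in for a GENESIS computational element in a UNIX-like tree."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, {}
        self.links = []        # no-delay data sharing within one computational unit
        self.connections = []  # delayed, transforming channels (e.g. axons)
        if parent is not None:
            parent.children[name] = self

    def path(self):
        # e.g. /network/neuron1/cell-body, as in fig. 3B
        return ("" if self.parent is None else self.parent.path()) + "/" + self.name

# Mimic a few of the fig. 4 commands: create, link, connect.
network = Element("network")
neuron1 = Element("neuron1", network)
cell_body = Element("cell-body", neuron1)
dendrite = Element("dendrite", neuron1)
dendrite.links.append(cell_body)       # "link dendrite to cell-body"
axon = Element("axon", neuron1)
neuron2 = Element("neuron2", network)
syn = Element("synapse", Element("dendrite", neuron2))
axon.connections.append(syn)           # "connect neuron1/axon to neuron2/dendrite/synapse"
print(cell_body.path())                # -> /network/neuron1/cell-body
```

As in GENESIS, the tree here is purely organizational; the functional wiring lives in the link and connection lists, not in the parent-child structure.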
The executable binaries take up about 1.5 Megabytes. A rough estimate of the amount of additional memory necessary for a particular simulation can be calculated from the sizes and number of modules used in the simulation. Typically, elements use around 100 bytes, connections 16 and messages 20. Widgets use 5-20 Kbytes each. Performance. The overall efficiency of the GENESIS system is highly simulation specific. To consider briefly a specific case, the most sophisticated biologically based simulation currently implemented under GENESIS is a model of piriform (olfactory) cortex (Wilson et al., 1986; Wilson and Bower, 1988; Wilson and Bower, 1989). This simulation consists of neurons of four different types. Each neuron contains from one to five compartments. Each compartment can contain several channels. On a Sun 386i with 8 Mbytes of RAM, this simulation with 500 cells runs at 1 second per time step. Other models that have been implemented under GENESIS. The list of projects currently completed under GENESIS includes approximately ten different simulations. These include models of the olfactory bulb (Bhalla et al., 1988), the inferior olive (Lee and Bower, 1988), and a motor circuit in the invertebrate sea slug Tritonia (Ryckebusch et al., 1989). We have also built several tutorials to allow students to explore compartmental biological models (Hodgkin and Huxley, 1952) and Hopfield networks (Hopfield, 1982). Access/use of GENESIS. GENESIS and XODUS will be made available at the cost of distribution to all interested users. As described above, new user-defined modules can be linked into the simulator to extend the system. Users are encouraged to support the continuing development of this system by sending modules they develop to Caltech. These will be reviewed and compiled into the overall system by GENESIS support staff. 
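The per-module sizes quoted above (elements around 100 bytes, connections 16, messages 20, widgets 5-20 Kbytes) lend themselves to a quick back-of-envelope memory estimate. The helper below and the example counts (connections and messages per cell, widget count) are our own illustrative assumptions, not figures from the paper:

```python
def genesis_memory_bytes(elements, connections, messages, widgets, widget_kb=10):
    """Rough simulation-memory estimate from the per-module sizes quoted in the text."""
    return (elements * 100                 # ~100 bytes per element
            + connections * 16             # ~16 bytes per connection
            + messages * 20                # ~20 bytes per message
            + widgets * widget_kb * 1024)  # widgets: 5-20 Kbytes each; 10 KB assumed

# Hypothetical example: 500 cells x 5 compartments, with an assumed
# 1000 connections (and as many messages) per cell and a 20-widget interface.
est = genesis_memory_bytes(elements=500 * 5, connections=500 * 1000,
                           messages=500 * 1000, widgets=20)
print(round(est / 1e6, 1), "MB")   # -> 18.5 MB
```

Even with generous connection counts, such an estimate stays comfortably within the 8 Mbytes-per-process scale of the workstations mentioned above once the counts are reduced to realistic values for a small model.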
We would also hope that users would send completed published simulations to the GENESIS data base. This will provide others with an opportunity to observe the behavior of a simulation first hand. A current listing of modules and full simulations will be maintained and made available through an electronic mail newsgroup, Babel. Enquiries about the system should be sent to [email protected] or [email protected]. Acknowledgments We would like to thank Mark Nelson for his invaluable assistance in the development of this system and specifically for his suggestions on the content of this manuscript. We would also like to recognize Dave Bilitch, Wojtek Furmanski, Christof Koch, innumerable Caltech students and the students of the 1988 MBL summer course on Methods in Computational Neuroscience for their contributions to the creation and evolution of GENESIS (not mutually exclusive). This research was also supported by the NSF (EET-8700064), the NIH (BNS 22205), the ONR (Contract N00014-88-K-0513), the Lockheed Corporation, the Caltech Presidents Fund, the JPL Directors Development Fund, and the Joseph Drown Foundation. References D. Anderson (ed.) Neural Information Processing Systems. American Institute of Physics, New York (1988). U.S. Bhalla, M.A. Wilson & J.M. Bower. Integration of computer simulations and multi-unit recording in the rat olfactory system. Soc. Neurosci. Abstr. 14: 1188 (1988). J.M. Bower. Reverse engineering the nervous system: An anatomical, physiological, and computer based approach. In: An Introduction to Neural and Electronic Networks. Zornetzer, Davis, and Lau, editors. Academic Press (1989) (in press). A.L. Hodgkin and A.F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. (Lond.) 117, 500-544 (1952). J.J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA. 
79, 2554-2558 (1982). C. Koch and I. Segev (eds.) Methods in Neuronal Modeling: From Synapses to Networks. MIT Press, Cambridge, MA (in press). M. Lee and J.M. Bower. A structural simulation of the inferior olivary nucleus. Soc. Neurosci. Abstr. 14: 184 (1988). M. Nelson, W. Furmanski and J.M. Bower. Simulating neurons and neuronal networks on parallel computers. In: Methods in Neuronal Modeling: From Synapses to Networks. C. Koch and I. Segev, editors. MIT Press, Cambridge, MA (1989) (in press). S. Ryckebusch, C. Mead and J.M. Bower. Modeling a central pattern generator in software and hardware: Tritonia in sea moss (CMOS). (1989) (this volume). D.E. Rumelhart, J.L. McClelland and the PDP Research Group. Parallel Distributed Processing. MIT Press, Cambridge, MA (1986). D. Touretzky (ed.) Advances in Neural Information Processing Systems. Morgan Kaufmann Publishers, San Mateo, California (1989). M.A. Wilson and J.M. Bower. The simulation of large-scale neuronal networks. In: Methods in Neuronal Modeling: From Synapses to Networks. C. Koch and I. Segev, editors. MIT Press, Cambridge, MA (1989) (in press). M.A. Wilson and J.M. Bower. A computer simulation of olfactory cortex with functional implications for storage and retrieval of olfactory information. In: Neural Information Processing Systems, pp. 114-126, D. Anderson, editor. AIP Press, New York, NY (1988). M.A. Wilson, J.M. Bower and L.B. Haberly. A computer simulation of piriform cortex. Soc. Neurosci. Abstr. 12: 1358 (1986). Part IV Structured Networks
794 NEURAL ARCHITECTURE Valentino Braitenberg Max Planck Institute Federal Republic of Germany While we are waiting for the ultimate biophysics of cell membranes and synapses to be completed, we may speculate on the shapes of neurons and on the patterns of their connections. Much of this will be significant whatever the outcome of future physiology. Take as an example the isotropy, anisotropy and periodicity of different kinds of neural networks. The very existence of these different types in different parts of the brain (or in different brains) defeats explanation in terms of embryology; the mechanisms of development are able to make one kind of network or another. The reasons for the difference must be in the functions they perform. The tasks which they solve in one case apparently refer to some space which is intrinsically isotropic, in another to a situation in which different coordinates mean different things. In the periodic case, the tasks obviously refer to some kind of modules and to their relations. The examples I have in mind are first the cerebral cortex, quite isotropic in the plane of the cortex, second the cerebellar cortex with very different sets of fibers at right angles to each other (one excitatory, as we know today, and the other inhibitory) and third some of the nerve nets behind the eye of the fly. Besides general patterns of symmetry, some simple statements of a statistical nature can be read off the histological picture. If a neuron is a device picking up excitation (and/or inhibition) on its ten to ten thousand afferent synapses and producing excitation (or inhibition) on ten to ten thousand synapses on other neurons, the density, geometrical distribution and reciprocal overlap of the clouds of afferent and of efferent synapses of individual neurons provide unquestionable constraints to neural computation. 
In the simple terms of the histological practitioner, this translates into the description of the shapes of dendritic and axonal trees, into counts of neurons and synapses, differential counts of synapses of the excitatory and inhibitory kind and measurements of the axonal and dendritic lengths.
LEARNING BY CHOICE OF INTERNAL REPRESENTATIONS Tal Grossman, Ronny Meir and Eytan Domany Department of Electronics, Weizmann Institute of Science Rehovot 76100 Israel ABSTRACT We introduce a learning algorithm for multilayer neural networks composed of binary linear threshold elements. Whereas existing algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. Once a correct set of internal representations is arrived at, the weights are found by the local and biologically plausible Perceptron Learning Rule (PLR). We tested our learning algorithm on four problems: adjacency, symmetry, parity and combined symmetry-parity. I. INTRODUCTION Consider a network of binary linear threshold elements $i$, whose state $S_i = \pm 1$ is determined according to the rule

$$S_i = \mathrm{sign}\Big(\sum_j W_{ij} S_j + \theta_i\Big) . \qquad (1)$$

Here $W_{ij}$ is the (unidirectional) weight assigned to the connection from unit $j$ to unit $i$; $\theta_i$ is a local bias. We focus our attention on feed-forward networks in which $N$ units of the input layer determine the states of $H$ units of a hidden layer; these, in turn, feed one or more output elements. For a typical A vs B classification task such a network needs a single output, with $S^{out} = +1$ (or $-1$) when the input layer is set in a state that belongs to category A (or B) of input space. The basic problem of learning is to find an algorithm that produces weights which enable the network to perform this task. In the absence of hidden units, learning can be accomplished by the PLR [Rosenblatt 1962], which we now briefly describe. Consider $j = 1, \ldots, N$ source units and a single target unit $i$. When the source units are set in any one of $\mu = 1, \ldots, M$ patterns, i.e. $S_j = \xi_j^\mu$, we require that the target unit (determined using (1)) take preassigned values $\xi_i^\mu$. Learning takes place in the course of a training session. 
Starting from any arbitrary initial guess for the weights, an input $\nu$ is presented, resulting in the output taking some value $S_i^\nu$. Now modify every weight according to the rule

$$W_{ij} \rightarrow W_{ij} + \eta\,(1 - S_i^\nu \xi_i^\nu)\,\xi_i^\nu \xi_j^\nu , \qquad (2)$$

where $\eta > 0$ is a parameter ($\xi_j^\nu = 1$ is used to modify the bias $\theta$). Another input pattern is presented, and so on, until all inputs yield the correct output. The Perceptron convergence theorem states [Rosenblatt 1962, Minsky and Papert 1969] that the PLR will find a solution (if one exists) in a finite number of steps. However, of the $2^{2^N}$ possible partitions of input space only a small subset (less than $2^{N^2}/N!$) is linearly separable [Lewis and Coates 1967], and hence soluble by single-layer perceptrons. To get around this, hidden units are added. Once a single hidden layer (with a large enough number of units) is inserted between input and output, every classification problem has a solution. But for such architectures the PLR cannot be implemented; when the network errs, it is not clear which connection is to blame for the error, and what corrective action is to be taken. Back-propagation [Rumelhart et al 1986] circumvents this "credit-assignment" problem by dealing only with networks of continuous-valued units, whose response function is also continuous (sigmoid). "Learning" consists of a gradient-descent type minimization of a cost function that measures the deviation of actual outputs from the required ones, in the space of weights $W_{ij}$, $\theta_i$. A new version of BP, "back propagation of desired states", which bears some similarity to our algorithm, has recently been introduced [Plaut 1987]. See also le Cun [1985] and Widrow and Winter [1988] for related methods. Our algorithm views the internal representations associated with various inputs as the basic independent variables of the learning process. 
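As a concrete sketch of Eqs. (1) and (2), the PLR for a single $\pm 1$ target unit might be written as follows (the function name and defaults are ours; the initial ranges for the weights and bias follow the values the authors quote later in the text):

```python
import numpy as np

def plr_train(patterns, targets, eta=0.1, max_sweeps=1000, seed=0):
    """Perceptron Learning Rule for one target unit over +/-1 patterns.

    patterns: (M, N) array of +/-1 source states; targets: (M,) desired +/-1 values.
    Returns (weights, bias, converged)."""
    rng = np.random.default_rng(seed)
    M, N = patterns.shape
    w = rng.uniform(-0.5, 0.5, N)        # arbitrary initial guess
    theta = rng.uniform(-0.05, 0.05)
    for _ in range(max_sweeps):
        errors = 0
        for xi, t in zip(patterns, targets):
            s = 1.0 if xi @ w + theta >= 0 else -1.0   # Eq. (1)
            if s != t:
                w += eta * (1 - s * t) * t * xi        # Eq. (2); (1 - s*t) = 2 on error
                theta += eta * (1 - s * t) * t         # bias: source value fixed at 1
                errors += 1
        if errors == 0:                  # all inputs yield the correct output
            return w, theta, True
    return w, theta, False
```

On a linearly separable task the loop halts after finitely many corrections, in line with the convergence theorem; on a non-separable task it simply exhausts the sweep budget, which is exactly the role the "impatience" parameters play later in the algorithm.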
This is a conceptually plausible assumption; in the course of learning, a biological or artificial system should form maps and representations of the external world. Once such representations are formed, the weights can be found by simple and local Hebbian learning rules such as the PLR. Hence the problem of learning becomes one of searching for proper internal representations, rather than one of minimization. Failure of the PLR to converge to a solution is used as an indication that the current guess of internal representations needs to be modified. II. THE ALGORITHM If we know the internal representations (e.g. the states taken by the hidden layer when patterns from the training set are presented), the weights can be found by the PLR. This way the problem of learning becomes one of choosing proper internal representations, rather than of minimizing a cost function by varying the values of the weights. To demonstrate our approach, consider the classification problem with output values $S^{out,\mu} = \xi^{out,\mu}$ required in response to $\mu = 1, \ldots, M$ input patterns. If a solution is found, it first maps each input onto an internal representation generated on the hidden layer, which, in turn, produces the correct output. Now imagine that we are not supplied with the weights that solve the problem; however, the correct internal representations are revealed. That is, we are given a table with M rows, one for each input. Every row has H bits $\xi_i^{h,\mu}$, for $i = 1, \ldots, H$, specifying the state of the hidden layer obtained in response to input pattern $\mu$. One can now view each hidden-layer cell $i$ as the target cell of the PLR, with the N inputs viewed as source. Given sufficient time, the PLR will converge to a set of weights $W_{ij}$, connecting input unit $j$ to hidden unit $i$, so that indeed the input-output association that appears in column $i$ of our table will be realized. 
In a similar fashion, the PLR will yield a set of weights $W_i$ in a learning process that uses the hidden layer as source and the output unit as target. Thus, in order to solve the problem of learning, all one needs is a search procedure in the space of possible internal representations, for a table that can be used to generate a solution. Updating of weights can be done in parallel for the two layers, using the current table of internal representations. In the present algorithm, however, the process is broken up into four distinct stages:
1. SETINREP: Generate a table of internal representations $\{\xi_i^{h,\mu}\}$ by presenting each input pattern from the training set and calculating the state of the hidden layer, using Eq. (1), with the existing couplings $W_{ij}$ and $\theta_j$.
2. LEARN23: The hidden-layer cells are used as source, and the output as the target unit of the PLR. The current table of internal representations is used as the training set; the PLR tries to find appropriate weights $W_i$ and $\theta$ to obtain the desired outputs. If a solution is found, the problem has been solved. Otherwise stop after $I_{23}$ learning sweeps, and keep the current weights, to use in INREP.
3. INREP: Generate a new table of internal representations which, when used in (1), yields the correct outputs. This is done by presenting the table sequentially, row by row, to the hidden layer. If for row $\nu$ the wrong output is obtained, the internal representation $\xi^{h,\nu}$ is changed. Having the wrong output means that the "field" produced by the hidden layer on the output unit, $h^{out,\nu} = \sum_j W_j \xi_j^{h,\nu}$, is either too large or too small. We then randomly pick a site $j$ of the hidden layer and try to flip the sign of $\xi_j^{h,\nu}$; if $h^{out,\nu}$ changes in the right direction, we replace the entry in our table, i.e. $\xi_j^{h,\nu} \rightarrow -\xi_j^{h,\nu}$. We keep picking sites and changing the internal representation of pattern $\nu$ until the correct output is generated. 
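The flip procedure of the INREP stage, for a single row of the table, can be sketched as follows (a toy illustration with names of our own choosing). Flipping site $j$ changes the output field by $-2\xi_j^{h,\nu} W_j$, so the acceptance test is simply whether that change points toward the desired output:

```python
import numpy as np

def inrep_fix_row(h, W, theta, target, max_tries=1000, seed=0):
    """Flip +/-1 hidden bits of one internal representation until the output is correct.

    h: length-H +/-1 hidden state for pattern nu; W, theta: hidden-to-output
    weights and bias; target: desired +/-1 output. Returns (new_h, success)."""
    rng = np.random.default_rng(seed)
    h = h.copy()
    for _ in range(max_tries):
        field = h @ W + theta                  # the field h^{out,nu}
        if (1 if field >= 0 else -1) == target:
            return h, True
        j = rng.integers(len(h))               # pick a random hidden site
        if -2 * h[j] * W[j] * target > 0:      # field moves in the right direction
            h[j] = -h[j]                       # accept the flip
    return h, False                            # give up (cf. the I_in cutoff later)

# Example: the field starts at +6 but the target is -1; accepted flips drive it down.
h_new, ok = inrep_fix_row(np.array([1, 1, 1]), np.array([1.0, 2.0, 3.0]), 0.0, -1)
```

Because an accepted flip always moves the field toward the target (and flipping the same site back would be rejected), the search is monotone, which is why, as the text notes next, success is guaranteed whenever $\sum_j |W_j| > |\xi^{out}|$.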
We always generate the correct output this way, provided $\sum_j |W_j| > |\xi^{out}|$ (as is the case for our learning process in LEARN23). This procedure ends with a modified table, which is our next guess of internal representations.
4. LEARN12: Apply the PLR with the first layer serving as source, treating every hidden-layer site separately as target. Actually, when an input from the training set is presented to the first layer, we first check whether the correct result is produced on the output unit of the network. If we get the wrong overall output, we use the PLR for every hidden unit $i$, modifying the weights incident on $i$ according to (2), using column $i$ of the table as the desired states of this unit. If input $\nu$ does yield the correct output, we insert the current state of the hidden layer as the internal representation associated with pattern $\nu$, and no learning steps are taken. We sweep in this manner through the training set, modifying the weights $W_{ij}$ (between input and hidden layer), the hidden-layer thresholds $\theta_i$, and, as explained above, the internal representations. If the network has achieved error-free performance for the entire training set, learning is completed. If no solution has been found after $I_{12}$ sweeps of the training set, we abort the PLR stage, keep the present values of $W_{ij}$, $\theta_j$, and start SETINREP again. This is a fairly complete account of our procedure (see also Grossman et al [1988]). There are a few details that need to be added. a) The "impatience" parameters: $I_{12}$ and $I_{23}$, which are rather arbitrary, are introduced to guarantee that the PLR stage is aborted if no solution is found. This is necessary since it is not clear that a solution exists for the weights, given the current table of internal representations. Thus, if the PLR stage does not converge within the time limit specified, a new table of internal representations is formed. 
The parameters have to be large enough to allow the PLR to find a solution (if one exists) with sufficiently high probability. On the other hand, too large values are wasteful, since they force the algorithm to execute a long search even when no solution exists. Therefore the best values of the impatience parameters can be determined by optimizing the performance of the network; our experience indicates, however, that once a "reasonable" range of values is found, performance is fairly insensitive to the precise choice. b) Integer weights: In the PLR correction step, as given by Eq. (2), the size of $\Delta W$ is constant. Therefore, when using binary units, it can be scaled to unity (by setting $\eta = 0.5$) and one can use integer $W_{ij}$'s without any loss of generality. c) Optimization: The algorithm described uses several parameters, which should be optimized to get the best performance. These parameters are: $I_{12}$ and $I_{23}$ (see section (a) above); $I_{max}$, a time limit, i.e. an upper bound on the total number of training sweeps; and the PLR training parameters, i.e. the increments of the weights and thresholds during the PLR stage. In the PLR we used values of $\eta \approx 0.1$ [see Eq. (2)] for the weights, and $\eta \approx 0.05$ for the thresholds, whereas the initial (random) values of all weights were taken from the interval (-0.5, 0.5), and thresholds from (-0.05, 0.05). In the integer-weights program, described above, these parameters are not used. d) Treating multiple outputs: In the version of INREP described above, we keep flipping the internal representations until we find one that yields the correct output, i.e. zero error for the given pattern. This is not always possible when using more than one output unit. Instead, we can allow only a pre-specified number of attempted flips, $I_{in}$, and go on to the next pattern even if vanishing error was not achieved. In this modified version we also introduce a slightly different, and less "restrictive", criterion for accepting or rejecting a flip. 
Having chosen (at random) a hidden unit $i$, we check the effect of flipping the sign of $\xi_i^{h,\nu}$ on the total output error, i.e. the number of wrong bits (and not on the output field, as described above). If the output error is not increased, the flip is accepted and the table of internal representations is changed accordingly. This modified algorithm is applicable to multiple-output networks. Results of preliminary experiments with this version are presented in the next section. III. PERFORMANCE OF THE ALGORITHM The "time" parameter that we use for measuring performance is the number of sweeps through the training set of M patterns needed in order to find the solution, namely, how many times each pattern was presented to the network. In each cycle of the algorithm there are $I_{12} + I_{23}$ such sweeps. For each problem, and each parameter choice, an ensemble of many independent runs, each starting with a different random choice of initial weights, is created. In general, when applying a learning algorithm to a given problem, there are cases in which the algorithm fails to find a solution within the specified time limit (e.g. when BP gets stuck in a local minimum), and it is impossible to calculate the ensemble average of learning times. Therefore we calculate, as a performance measure, either the median number of sweeps, $t_m$, or the "inverse average rate", $\tau$, as defined in Tesauro and Janssen [1988]. The first problem we studied is contiguity: the system has to determine whether the number of clumps (i.e. contiguous blocks) of +1's in the input is, say, equal to 2 or 3. This is called [Denker et al 1987] the "2 versus 3" clumps predicate. We used, as our training set, all inputs that have 2 or 3 clumps, with learning cycles parametrized by $I_{12} = 20$ and $I_{23} = 5$. Keeping N = 6 fixed, we varied H; 500 cases were used for each data point of Fig. 1. Figure 1. 
Median number of sweeps $t_m$ needed to train a network of N = 6 input units, over an exhaustive training set, to solve the "2 vs 3" clumps predicate, plotted against the number of hidden units H. Results for back-propagation [Denker et al 1987] (x) and this work (<>) are shown. In the next problem, symmetry, one requires $S^{out} = 1$ for reflection-symmetric inputs and $-1$ otherwise. This can be solved with $H \geq 2$ hidden units. Fig. 2 presents, for H = 2, the median number of exhaustive training sweeps needed to solve the problem, vs input size N. At each point 500 cases were run, with $I_{12} = 10$ and $I_{23} = 5$. We always found a solution in less than 200 cycles. Figure 2. Median number of sweeps $t_m$ needed to train networks on symmetry (with H = 2). In the Parity problem one requires $S^{out} = 1$ for an even number of +1 bits in the input, and $-1$ otherwise. In order to compare the performance of our algorithm to that of BP, we studied the Parity problem using networks with an architecture of N:2N:1, as chosen by Tesauro and Janssen [1988]. We used the integer version of our algorithm, briefly described above. In this version of the algorithm the weights and thresholds are integers, and the increment size, for both thresholds and weights, is unity. As an initial condition, we chose them to be +1 or -1 randomly. In the simulation of this version, all possible input patterns were presented sequentially in a fixed order (within the perceptron learning sweeps). The results are presented in Table 1. For all choices of the parameters ($I_{12}$, $I_{23}$) that are mentioned in the table, our success rate was 100%. Namely, the algorithm didn't fail even once to find a solution in less than the maximal number of training cycles $I_{max}$ specified in the table. Results for BP, $\tau(BP)$ (from Tesauro and Janssen 1988), are also given in the table. Note that BP does get caught in local minima, but the percentage of such occurrences was not reported. 
For testing the multiple-output version of the algorithm we used the combined parity and symmetry problem; the network has two output units, both connected to all hidden units. The first output unit performs the parity predicate on the input, and the second performs the symmetry predicate. The network architecture was N:2N:2 and the results for N = 4..7 are given in Table 2. The choice of parameters is also given in that table.

N   (I12, I23)       I_max   t_m    τ(CHIR)   τ(BP)
3   (8,4)            10      3      3         39
4   (9,3)(6,6)       20      4      4         75
5   (12,4)(9,6)      40      8      6         130
6   (12,4)(10,5)     120     19     9         310
7   (12,4)(15,5)     240     290    30        800
8   (20,10)          900     2900   150       2000
9   (20,10)          900     2400   1300      -

Table 1. Parity with N:2N:1 architecture.

N   I12   I23   I_in   I_max   t_m    τ
4   12    8     7      40      50     33
5   14    7     7      400     900    350
6   18    9     7      900     5250   925
7   40    20    7      900     6000   2640

Table 2. Parity and Symmetry with N:2N:2 architecture.

IV. DISCUSSION

We have presented a learning algorithm for two-layer perceptrons that searches for internal representations of the training set, and determines the weights by the local, Hebbian perceptron learning rule. Learning by choice of internal representation may turn out to be most useful in situations where the "teacher" has some information about the desired internal representations. We demonstrated that our algorithm works well on four typical problems, and studied the manner in which training time varies with network size. Comparisons with back-propagation were also made. It should be noted that a training sweep involves much less computation than that of back-propagation. We also presented a generalization of the algorithm to networks with multiple outputs, and found that it functions well on various problems of the same kind as discussed above. It appears that the modification needed to deal with multiple outputs also enables us to solve the learning problem for network architectures with more than one hidden layer.
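The local perceptron learning rule (PLR) that determines the weights throughout this paper can be sketched for a single ±1 unit. This is our simplification of the integer version described above (integer weights and threshold, unit increments); the initialization and stopping details are illustrative, not the paper's exact procedure:

```python
def plr_train(patterns, targets, n, sweeps=100):
    """Train one +/-1 threshold unit with the integer perceptron rule.

    patterns: sequence of +/-1 input tuples of length n
    targets:  desired +/-1 output for each pattern
    Returns (weights, threshold, converged).
    """
    w = [1] * n        # integer weights (initialized to +1 here for brevity)
    theta = 1          # integer threshold, also changed in unit steps
    for _ in range(sweeps):
        errors = 0
        for x, t in zip(patterns, targets):
            s = 1 if sum(wi * xi for wi, xi in zip(w, x)) + theta > 0 else -1
            if s != t:                           # wrong bit: Hebbian correction
                w = [wi + t * xi for wi, xi in zip(w, x)]
                theta += t
                errors += 1
        if errors == 0:                          # whole training set learned
            return w, theta, True
    return w, theta, False
```

In the CHIR algorithm this rule is applied twice per cycle: once to fit the hidden units to the current table of internal representations, and once to fit the output unit to the targets.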
At this point we can offer only very limited discussion of the interesting question: why does our algorithm work at all? That is, how does it find correct internal representations (e.g. "tables") while these constitute only a small fraction of the total possible number (2^(H·2^N))? The main reason is that our procedure actually does not search this entire space of tables. This large space contains a small subspace, T, of "target tables", i.e. those that can be obtained, for all possible choices of the weights w_ij and thresholds θ_i, by rule (1), in response to presentation of the input patterns. Another small subspace, S, is that of the tables that can potentially produce the required output. Solutions of the learning problem constitute the space T ∩ S. Our algorithm iterates between T and S, executing also a "walk" (induced by the modification of the weights due to the PLR) within each.

An appealing feature of our algorithm is that it can be implemented in a manner that uses only integer-valued weights and thresholds. This discreteness makes the analysis of the behavior of the network much easier, since we know the exact number of bits used by the system in constructing its solution, and do not have to worry about round-off errors. From a technological point of view, for hardware implementation it may also be more feasible to work with integer weights.

We are extending this work in various directions. The present method needs, in the learning stage, MH bits of memory: internal representations of all M training patterns are stored. This feature is biologically implausible and may be technologically limiting; we are developing a method that does not require such memory. Other directions of current study include extensions to networks with continuous variables, and to networks with feedback.

References

Denker J., Schwartz D., Wittner B., Solla S., Hopfield J.J., Howard R. and Jackel L. 1987, Complex Systems 1, 877-922
Grossman T., Meir R. and Domany E.
1988, Complex Systems, in press.
Hebb D.O. 1949, The Organization of Behavior. (Wiley, New York)
Le Cun Y. 1985, Proc. Cognitiva 85, 593
Lewis P.M. and Coates C.L. 1967, Threshold Logic. (Wiley, New York)
Minsky M. and Papert S. 1988, Perceptrons. (MIT, Cambridge)
Plaut D.C., Nowlan S.J. and Hinton G.E. 1987, Tech. Report CMU-CS-86-126
Rosenblatt F. 1962, Principles of Neurodynamics. (Spartan, New York)
Rumelhart D.E., Hinton G.E. and Williams R.J. 1986, Nature 323, 533-536
Tesauro G. and Janssen H. 1988, Complex Systems 2, 39
Widrow B. and Winter R. 1988, Computer 21, No. 3, 25
|
1988
|
34
|
118
|
Adaptive Neural Networks Using MOS Charge Storage

D. B. Schwartz¹, R. E. Howard and W. E. Hubbard
AT&T Bell Laboratories
Crawfords Corner Rd.
Holmdel, N.J. 07733

Abstract

MOS charge storage has been demonstrated as an effective method to store the weights in VLSI implementations of neural network models by several workers². However, to achieve the full power of a VLSI implementation of an adaptive algorithm, the learning operation must be built into the circuit. We have fabricated and tested a circuit ideal for this purpose by connecting a pair of capacitors with a CCD-like structure, allowing for variable-size weight changes as well as a weight decay operation. A 2.5μ CMOS version achieves better than 10 bits of dynamic range in a 140μ × 350μ area. A 1.25μ chip based upon the same cell has 1104 weights on a 3.5mm × 6.0mm die and is capable of peak learning rates of at least 2 × 10⁹ weight changes per second.

1 Adaptive Networks

Much of the recent excitement about neural network models of computation has been driven by the prospect of new architectures for fine-grained parallel computation using analog VLSI. Adaptive systems are especially good targets for analog VLSI because the adaptive process can compensate for the inaccuracy of individual devices as easily as for the variability of the signal. However, silicon VLSI does not provide us with an ideal solution for weight storage. Among the properties of an ideal storage technology for analog VLSI adaptive systems are:

• The minimum available weight change Δw must be small. The simplest adaptive algorithms optimize the weights by minimizing the output error with a steepest descent search in weight space [1]. Iterative improvement algorithms such as steepest descent are based on the heuristic assumption of 'better' weights being found in the neighborhood of 'good' ones; a heuristic that fails when the granularity of the weights is not fine enough.
In the worst case, the resolution required just to represent a function can grow exponentially in the dimension of the input space.

• The weights must be able to represent both positive and negative values and the changes must be easily reversible. Frequently, the weights may cycle up and down while the adaptive process is converging, and millions of incremental changes during a single training session is not unreasonable. If the weights cannot easily follow all of these changes, then the learning must be done off chip.

• The parallelism of the network can be exploited to the fullest only if the mechanism controlling weight changes is simple enough to be reproduced at each weight. Ideally, the change is determined by some easily computed combination of information local to each weight and signals global to the entire system. This type of locality, which is as much a property of the algorithm as of the hardware, is necessary to keep the wiring cost associated with learning small.

• Weight decay, w_i → a·w_i with a < 1, is useful although not essential. Global decay of all the weights can be used to extend their dynamic range by rescaling when the average magnitude becomes too large. Decay of randomly chosen weights can be used both to control their magnitude [2] and to help gradient searches escape from local minima.

¹ Now at GTE Laboratories, 40 Sylvan Rd., Waltham, Mass 02254, [email protected]%relay.cs.net
² For example, see the papers by Mann and Gilbert, Walker and Akers, and Murray et al. in this proceedings.

To implement an analog storage cell with MOS VLSI the most obvious choices are non-volatile devices like floating gate and MNOS transistors, multiplying DACs with conventional digital storage, and dynamic analog storage on MOS capacitors. Most non-volatile devices rely upon electron tunneling to change the amount of stored charge, typically requiring a large amount of circuitry to control weight changes.
DACs have already proven themselves in situations where 5 bits or less of resolution [3] [4] are sufficient, but higher resolution is prohibitively expensive in terms of area. We will show that the disadvantage of MOS charge storage, its volatility, is more than outweighed by the resolution available and the ease of making weight changes. Representation of both positive and negative weights can be obtained by storing the weights w_i differentially on a pair of capacitors, in which case

w_i ∝ V_i⁺ − V_i⁻.

Differential storage can be used to obtain some degree of rejection of leakage and can guarantee that leakage will reduce the magnitude of the weights, as compared with a scheme where the weights are defined with respect to a fixed level, in which case a weight can change sign as it decays. A constant common-mode voltage also eases the design constraints on the differential-input multiplier used to read out the weights. An elegant way to manipulate the weights is to transfer charge from one capacitor to the other, keeping the total charge on the system constant and thus maximizing the dynamic range available from the readout circuit.

2 Weight Changes

Small packets of charge can easily be transferred from one capacitor to the other by exploiting charge injection, a phenomenon carefully avoided by designers of switched-capacitor circuits as a source of sampling error [5] [6] [7] [8] [9]. An example of a storage cell with the simplest configuration for a charge transfer system is shown in figure 1. A pair of MOS capacitors are connected by a string of narrow MOS transistors: a long one to transfer charge and two minimum-length ones to isolate the charge transfer transistor from the storage nodes.

Figure 1: (a) The simplest storage cell, with provisions for only a single size of increment/decrement operation and no weight decay. (b) A more sophisticated cell with facilities for weight decay. By suitable manipulation of the clock signals, the two charge transfer transistors can be used to obtain different sizes of weight changes. Both circuits are initialized by turning on the access transistors TA and charging the capacitors up to a convenient voltage, typically V_DD/2.

For the sake of discussion, we can treat the isolation transistors as ideal switches and concentrate on the charge transfer transistor, which we here assume to be an n-channel device. To increase the weight (see figure 1), the charge transfer transistor (TC) and the isolation transistor attached to the positive storage node (TP) are turned on. When the system has reached electrostatic equilibrium, the charge transfer transistor (TC) is disconnected from the plus storage node by turning off TP and connected to the minus storage node by turning on TM. If the charge transfer transistor TC is slowly turned off, the mobile charge in its channel will diffuse into the minus node, lowering its voltage.

A detailed analysis of the charge transfer mechanism has been given elsewhere [10], but for the purpose of a qualitative understanding of the circuit, the inversion charge in the charge transfer transistor's channel can be approximated by

qN_inv = C_ox(V_G − V_TE),

where V_TE is the effective threshold voltage and C_ox the gate-to-channel capacitance of the charge transfer transistor. The effective threshold voltage is then given by

V_TE = V_T0 + γ(√(2φ_f + V_S) − √(2φ_f)),

where V_T0 is the threshold voltage in the absence of body effect, φ_f the Fermi level, V_S the source-to-substrate voltage, and γ the usual body-effect coefficient. An even rougher model can be obtained by linearizing the body-effect term [6],

qN_inv ≈ C_eff(V_G − V_T − ηV_S),

where C_eff contains both the gate oxide capacitance and the effects of parasitic capacitance, and η = γ/(2√(2|φ_f|)). Within the linearized approximation, the change in voltage on a storage node with capacitance C_store after n transfers is

V_n = V_0 + (1/η)(V_G − V_T − ηV_0)(1 − exp(−αn))     (1)

with α = C_eff/C_store and where V_0 is the initial voltage on the storage node. Due to the dependence of the size of the transfer on the stored voltage, when the transfer direction is reversed the increment size changes unless the stored voltages on the capacitors are equal. This can be partially compensated for by using complementary pairs of p-channel and n-channel charge transfer transistors, in effect using a string of transmission gates to perform charge transfers.

A weight decay operation can be introduced by using the more complex string of charge transfer transistors shown in figure 1b. A weight decay is initiated by turning off the transistor in the middle of the string (TI) and turning on all the other transistors. When the two sides of the charge transfer string have equilibrated with their respective storage nodes, the connections to the storage nodes (TM and TP) are turned off and the two charge transfer transistors (TCP and TCM) are allowed to exchange charge by turning on the transistor, TI, which separates them. When two equal charge packets have been obtained, TI is turned off again and the charge packets held by TCP and TCM are injected back into the storage capacitors. The resulting change in the stored weight is

Δv_decay = −(C_eff/C_ox)(V⁺ − V⁻),

which corresponds to multiplying the weight by a constant a < 1, as desired. Besides allowing for weight decay, the more complex charge string shown in figure 1b can also be used to obtain different size weight changes by using different clock sequences.

3 Experimental Evaluation

Test chips have been fabricated in both 1.25μ and 2.5μ CMOS, using the AT&T Twin Tub technology [11].
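Before turning to the measurements, the increment rule of equation (1) and the decay rule above can be collected into a small numerical model of one differential cell. All component ratios below (α, η, the decay ratio, and the bias voltages) are illustrative stand-ins, not values from the fabricated chips:

```python
class ChargeCell:
    """Toy model of a differential charge-storage weight cell.

    The weight is V+ - V-. Each charge transfer nudges one node toward the
    fixed point (V_G - V_T)/eta, following the linearized model behind
    Eq. (1); a decay step shrinks the differential voltage by the ratio
    C_eff/C_ox while conserving total charge.
    """

    def __init__(self, v0=2.5, alpha=0.02, eta=0.2, vg=5.0, vt=0.7,
                 decay_ratio=0.05):
        self.vp = v0                 # voltage on the "+" storage capacitor
        self.vm = v0                 # voltage on the "-" storage capacitor
        self.alpha = alpha           # C_eff / C_store
        self.eta = eta               # linearized body-effect coefficient
        self.vg, self.vt = vg, vt    # gate drive and threshold voltage
        self.decay_ratio = decay_ratio   # C_eff / C_ox

    def weight(self):
        return self.vp - self.vm

    def transfer(self, sign=+1):
        """One charge packet: raise V+ (sign=+1) or V- (sign=-1).

        The per-step increment (alpha/eta)*(V_G - V_T - eta*V) shrinks as
        the node voltage rises, reproducing the saturation of Eq. (1).
        """
        if sign > 0:
            self.vp += (self.alpha / self.eta) * (self.vg - self.vt
                                                  - self.eta * self.vp)
        else:
            self.vm += (self.alpha / self.eta) * (self.vg - self.vt
                                                  - self.eta * self.vm)

    def decay(self):
        """Multiply the stored weight by a constant a < 1, charge-conserving."""
        dv = self.decay_ratio * (self.vp - self.vm)
        self.vp -= dv / 2
        self.vm += dv / 2
```

Repeated calls to `decay()` give the exponential approach to zero weight seen in figure 3, and the state-dependent increment size of `transfer()` is exactly the asymmetry the complementary p/n transfer pairs are meant to compensate.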
To evaluate the properties of an individual cell, especially the charge transfer mechanism, an isolated test structure consisting of five storage cells was built on one section of the 2.5μ chip. The storage cells were differentially read out by two-quadrant transconductance amplifiers whose input-output characteristics are shown in figure 2. By using the bias current of the amplifiers as an input, the amplifiers were used as two-quadrant multipliers. Since many neural network models call for a sigmoidal nonlinearity, no attempt was made to linearize the operation of the multiplier. The output currents of the five multipliers were summed by a single output wire, and the voltages on each of the ten capacitors were buffered by voltage followers to allow for detailed examination of the inner workings of the cell.

Figure 2: A family of transfer characteristics from one of the transconductance multipliers for several different values of stored weight. The different branches of the curves are each separated by ten large charge transfers. No attempt was made to linearize the input/output characteristic since many neural network models call for non-linearities.

After trading off between hold time, resolution and area, we decided upon 20μ long charge transfer transistors and 2000μ² storage capacitors in 2.5μ technology, based upon the minimum channel width of 2.5μ. For a 20μ long channel and a 2.5V gate-to-source voltage, the channel transit time τ_0 is approximately 5 ns, and charge transfer clock frequencies exceeding 10 MHz are possible without measurable pumping of charge into the substrate. The 2.5μ wide access transistors were 12μ long, leading to leakage rates from the individual capacitors of about 1% of the stored value in 100 s, limited by surface leakage in our unpassivated test structures. Even with uncapped wafers, the leakage was small enough to allow all the tests described here to be made without special provisions for environmental control of either temperature or humidity.

As mentioned earlier, the more complex set of charge transfer transistors needed to introduce weight decay can also be used to obtain several different sizes of charge transfer: a small weight change by using the two long transistors in sequence, and a coarse one by treating the two long transistors and the isolation transistor separating them as a single device. Using the small weight changes, the worst case resolution was 10 bits (near ΔV = 0), and the results were in excellent agreement with the predictions of equation 1, using the effective capacitance as a fitting parameter.

Figure 3: The voltage on the two storage capacitors when the weight is initially set to saturation using large increments and then reduced back towards zero using weight decay. The granularity of the curves is an experimental artifact of the digital voltmeter's resolution.

In figure 3 we use large charge transfers to quickly increment the weight up to its maximum value and then reduce it back to zero with weight decays, demonstrating the expected exponential dependence of the stored voltage on the number of weight decays. Even under repeated cycling up and down through the entire differential voltage range of the cell, the total amount of charge on the cell remained constant for frequencies under 10 MHz, with the exception of the expected losses due to leakage.

The long-term goal of this work is to develop analog VLSI chips that are complete 'learning machines', capable of modifying their own weights when provided with input data and some feedback based on the output of the network.
However, the study of learning algorithms is in a state of flux and few, if any, algorithms have been optimized for VLSI implementation. Rather than cast an inappropriate algorithm in silicon, we have designed our first chips to be used as adaptive systems with an external controller, allowing us to develop algorithms that are appropriate for the medium once we understand its properties. The networks are organized as rectangular matrix multipliers with voltage inputs and current outputs, with 46 inputs and 24 outputs in a 96-pin package for the 1.25μ chip. Since none of the analog input/output lines of the chip are multiplexed, larger and more complicated networks can be built by cascading several chips.

To the digital controller, the chip looks like a 1104 × 2 static RAM with some extra clock inputs to drive the charge transfers. The charge transfer clock signals are distributed globally and are connected to the individual strings of charge transfer transistors through a pair of 2 × 2 crossbar switches controlled by two bits of static RAM local to each cell. The use of a pair of crossbar switches is necessitated by the facilities for weight decay; if the simpler charge transfer string shown in figure 1a were used, then only a single switch would be needed. When both of a cell's RAM bits are zero, the global charge transfer lines are not connected to the charge transfer transistors. The global lines are connected to the individual strings of charge transfer transistors either normally or in reverse, depending upon which RAM cell contains a one. By reversing the order of the signals on the charge transfer lines, a weight change can also be reversed. Neglecting the dependence of the size of the charge transfer upon the stored weight, the RAMs represent a weight change vector Δw with components Δw_ij ∈ {−1, 0, 1}.
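As a software analogy (not the chip's circuitry), the ternary change vector and the global clock burst might be modeled as follows; the fixed per-clock step `delta` idealizes what is in reality an analog, state-dependent increment:

```python
def apply_weight_changes(weights, directions, clocks, delta=0.01):
    """Apply a ternary weight-change matrix in parallel.

    weights:    list of rows of floats (the analog weights)
    directions: equal-shape list of rows with entries in {-1, 0, 1},
                the RAM-encoded change direction for each cell
    clocks:     number of global charge-transfer clock pulses
    """
    for i, row in enumerate(weights):
        for j, _ in enumerate(row):
            weights[i][j] += directions[i][j] * clocks * delta
    return weights
```

The point of the protocol is that writing `directions` is serial (like filling a static RAM) while the `clocks` pulses hit every selected cell at once, which is where the 2 × 10⁹ updates/second peak rate comes from.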
Once a weight change vector has been written serially to the RAMs, the weight changes along that vector are made in parallel by manipulating the charge transfer lines. This architecture is also a powerful way to implement programmable networks of fixed weights, since an arbitrary matrix of 10-bit weights can be written to the chip in a few milliseconds or less if an efficient decomposition of the desired weight matrix into global charge transfers is made. In view of the speed with which the chip can evaluate the output of a network, an overhead of less than a percent for a refresh operation is acceptable in many applications.

4 Conclusions

We have implemented a generic chip to facilitate studying adaptive networks by building them in analog VLSI. By exploiting the well-known properties of charge storage and charge injection in a novel way, we have achieved a high enough level of complexity (> 10³ weights and 10 bits of analog depth) to be interesting, in spite of the limitation to a modest 6.0mm × 3.5mm die size required by a multi-project fabrication run. If the cell were optimized to represent fixed-weight networks by eliminating weight decay and bi-directional weight changes, the density could easily be increased by a factor of two with no loss in resolution. Once a weight change vector has been written to the RAM cells, charge transfers can be clocked at a rate of 2 MHz, which corresponds to a peak learning rate of 2 × 10⁹ updates/second, exceeding the speeds of 'digital neurocomputers' based upon DSP chips by two orders of magnitude.

Acknowledgements

A large group of people assisted the authors in taking this work from concept to silicon, a few of whom we single out for mention here. The IDA design tools used for the layouts were provided and supported by D. D. Hill and D. D. Shugard at Murray Hill, and the 1.25μ process was supported by D. Wroge and R. Ashton. The first author wishes to acknowledge helpful discussions with H. P. Graf, S. Mackie and G.
Taylor, with special thanks to R. G. Swartz.

References

[1] Bernard Widrow and Samuel D. Stearns. Adaptive Signal Processing. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1985.
[2] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147, 1985.
[3] Jack Raffel, James Mann, Robert Berger, Antonio Soares, and Sheldon Gilbert. A generic architecture for wafer-scale neuromorphic systems. In IEEE First International Conference on Neural Networks, Volume III, page 501, 1987.
[4] Joshua Alspector, Bhusan Gupta, and Robert B. Allen. Performance of a stochastic learning microchip. In Advances in Neural Information Processing Systems, 1988.
[5] William B. Wilson, Hisham Z. Massoud, Eric J. Swanson, Rhett T. George, and Richard B. Fair. Measurement and modeling of charge feedthrough in n-channel MOS analog switches. IEEE Journal of Solid-State Circuits, SC-20(6):1206-1213, 1985.
[6] George Wegmann, Eric A. Vittoz, and Fouad Rahali. Charge injection in analog MOS switches. IEEE Journal of Solid-State Circuits, SC-22(6):1091-1097, 1987.
[7] James R. Kuo, Robert W. Dutton, and Bruce A. Wooley. MOS pass transistor turn-off transient analysis. IEEE Transactions on Electron Devices, ED-33(10):1545-1555, 1986.
[8] James R. Kuo, Robert W. Dutton, and Bruce A. Wooley. Turn-off transients in circular geometry MOS pass transistors. IEEE Journal of Solid-State Circuits, SC-21(5):837-844, 1986.
[9] Je-Hurn Shieh, Mahesh Patil, and Bing J. Sheu. Measurement and analysis of charge injection in MOS analog switches. IEEE Journal of Solid-State Circuits, SC-22(2):277-281, 1987.
[10] R. E. Howard, D. B. Schwartz, and W. E. Hubbard. A programmable analog neural network chip. IEEE Journal of Solid-State Circuits, 24, 1989.
[11] J. Agraz-Guerena, R. A. Ashton, W. J. Bertram, R. C. Melin, R. C. Sun, and J. T. Clemens. Twin Tub III - A third generation CMOS.
In Proceedings of the International Electron Device Meeting, 1984. Citation P63-6.
|
1988
|
35
|
119
|
IMPLICATIONS OF RECURSIVE DISTRIBUTED REPRESENTATIONS

Jordan B. Pollack
Laboratory for AI Research
Ohio State University
Columbus, OH 43210

ABSTRACT

I will describe my recent results on the automatic development of fixed-width recursive distributed representations of variable-sized hierarchical data structures. One implication of this work is that certain types of AI-style data structures can now be represented in fixed-width analog vectors. Simple inferences can be performed using the type of pattern associations that neural networks excel at. Another implication arises from noting that these representations become self-similar in the limit. Once this door to chaos is opened, many interesting new questions about the representational basis of intelligence emerge, and can (and will) be discussed.

INTRODUCTION

A major problem for any cognitive system is the capacity for, and the induction of, the potentially infinite structures implicated in faculties such as human language and memory. Classical cognitive architectures handle this problem through finite but recursive sets of rules, such as formal grammars (Chomsky, 1957). Connectionist architectures, while yielding intriguing insights into fault-tolerance and machine learning, have, thus far, not handled such productive systems in an adequate fashion. So, it is not surprising that one of the main attacks on connectionism, especially on its application to language processing models, has been on the adequacy of such systems to deal with apparently rule-based behaviors (Pinker & Prince, 1988) and systematicity (Fodor & Pylyshyn, 1988). I had earlier discussed precisely these challenges for connectionism, calling them the generative capacity problem for language, and the representational adequacy problem for data structures (Pollack, 1987b). These problems are actually intimately related, as the capacity to recognize or generate novel language relies on the ability to represent the underlying concept.
Recently, I have developed an approach to the representation problem, at least for recursive structures like sequences and trees. Recursive auto-associative memory (RAAM) (Pollack, 1988a) automatically develops recursive distributed representations of finite training sets of such structures, using back-propagation (Rumelhart et al., 1986). These representations appear to occupy a novel position in the space of both classical and connectionist symbolic representations. A fixed-width representation of variable-sized symbolic trees leads immediately to the implication that simple forms of neural-network associative memories may be able to perform inferences of a type that are thought to require complex machinery such as variable binding and unification. But when we take seriously the infinite part of the representational adequacy problem, we are led into a strange intellectual area, to which the second part of this paper is addressed.

Pollack

BACKGROUND

RECURSIVE AUTO-ASSOCIATIVE MEMORY

A RAAM is composed of two mechanisms: a compressor and a reconstructor, which are simultaneously trained. The job of the compressor is to encode a small set of fixed-width patterns into a single pattern of the same width. This compression can be recursively applied, from the bottom up, to a fixed-valence tree with distinguished labeled terminals (leaves), resulting in a fixed-width pattern representing the entire structure. The job of the reconstructor is to accurately decode this pattern into its parts, and then to further decode the parts as necessary, until the terminal patterns are found, resulting in a reconstruction of the original tree. For binary trees with k-bit binary patterns as the leaves, the compressor could be a single-layer feedforward network with 2k inputs and k outputs, along with additional control machinery.
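The bottom-up compression and top-down reconstruction just described can be sketched in miniature. In the toy below, the trained 2k→k compressor and k→2k reconstructor networks are replaced by an invertible codebook (our simplification, so only the recursion itself is shown); a real RAAM learns both maps with back-propagation and uses a closeness test rather than exact lookup:

```python
class ToyRAAM:
    """Recursion skeleton of a RAAM on binary trees.

    A tree is either a leaf symbol or a (left, right) pair of trees.
    The codebook assigns a fresh fixed-width 'code' to each pair of child
    codes, standing in for the trained compressor network; its inverse
    stands in for the reconstructor.
    """

    def __init__(self):
        self.codebook = {}   # (left_code, right_code) -> parent code
        self.inverse = {}    # parent code -> (left_code, right_code)

    def compress(self, tree):
        """Encode a whole tree, bottom-up, into a single code."""
        if not isinstance(tree, tuple):          # terminal pattern
            return ('leaf', tree)
        left = self.compress(tree[0])
        right = self.compress(tree[1])
        key = (left, right)
        if key not in self.codebook:
            code = ('node', len(self.codebook))  # fresh stand-in code
            self.codebook[key] = code
            self.inverse[code] = key
        return self.codebook[key]

    def reconstruct(self, code):
        """Decode a code, top-down, until terminals are reached."""
        if code[0] == 'leaf':                    # the terminal test
            return code[1]
        left, right = self.inverse[code]
        return (self.reconstruct(left), self.reconstruct(right))
```

The essential property the sketch preserves is that every subtree, however deep, is represented by an object of the same fixed width as a leaf.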
The reconstructor could be a single-layer feedforward network with k inputs and 2k outputs, along with a mechanism for testing whether a pattern is a terminal. We simultaneously train these two networks in an auto-associative framework as follows. Consider the tree ((D (A N))(V (P (D N)))) as one member of a training set of such trees, where the lexical categories are pre-encoded as k-bit vectors. If the 2k-k-2k network is successfully trained (defined below) with the following patterns (among other such patterns in the training environment), the resultant compressor and reconstructor can reliably form representations for these binary trees.

input pattern              hidden pattern      output pattern
A + N                   →  R_AN(t)          →  A' + N'
D + R_AN(t)             →  R_DAN(t)         →  D' + R_AN(t)'
D + N                   →  R_DN(t)          →  D' + N'
P + R_DN(t)             →  R_PDN(t)         →  P' + R_DN(t)'
V + R_PDN(t)            →  R_VPDN(t)        →  V' + R_PDN(t)'
R_DAN(t) + R_VPDN(t)    →  R_DANVPDN(t)     →  R_DAN(t)' + R_VPDN(t)'

The (initially random) values of the hidden units, R_j(t), are part of the training environment, so they (and the representations) evolve along with the weights.¹ Because the training regime involves multiple compressions, but only single reconstructions, we rely on an induction that the reconstructor works: if a reconstructed pattern, say R_PDN', is sufficiently close to the original pattern, then its parts can be reconstructed as well.

AN EXPERIMENT

The tree considered above was one member of the first experiment done on RAAMs. I used a simple context-free parser to parse a set of lexical-category sequences into a set of bracketed binary trees:

(D (A (A (A N))))
((D N)(P (D N)))
(V (D N))
(P (D (A N)))
((D N) V)

¹ This "moving target" strategy is also used by (Elman, 1988) and (Dyer et al., 1988).
- 0 0000' .•. ·0 o· a - .• . 00· 0·0· . • ·00 D · -Do ·0 ' 000 · ·oD·o·o·O · ·0· ·o·oao · -0' ·0 · 000 (A N) • (A (A N» .. 000· . D a . A A AN' DODO· • • 0 «0 N) V) 00·00 D 000· «0 N)(V (0 (A N»» o· D • • • • 0 D • «0 (A N» (V (P (0 N»» . o· · . 0 . 00 . Figure I. Representations of all the binary trees in the training set. devised by a 20-10-20 RAAM. manually clustered by phrase-type. The squares represent values between 0 and 1 by area. I labeled each tree and its representation by the phrase type in the grammar, and sorted them by type. The RAAM, without baving any intrinsic concepts of phrase-type, has clearly developed a representation with similarity between members of the same type. For example, the third feature seems to be clearly distinguishing sentences from nonsentences, the fifth feature seems to be involved in separating adjective phrases from others, while the tenth feature appears to distinguish prepositional and noun phrases from others.2 At the same time, the representation must be keeping enough information about the subtrees in order to allow the reconstructor to accurately recover the original structure. So, knowledge about structural regularity flows into the wt:ights while constraints about context similarity guide the development of the representations. RECURSIVE DISTRIBUTED REPRESENTATIONS These vectors are a very new kind of representation, a recursive, distributed representation, hinted at by Hinton's (1988) notion of a reduced description. They combine aspects of several disparate representations. Like feature-vectors, they are fixed-width, similarity-based, and their content is easily accessible. Like symbols, they combine only in syntactically well-formed ways. Like symbol-structures, they have constituency and compositionality. And, like pointers. 
they refer to larger symbol structures which can be efficiently retrieved. But, unlike feature-vectors, they compose. Unlike symbols, they can be compared. Unlike symbol structures, they are fixed in size. And, unlike pointers, they have content. Recursive distributed representations could, potentially, lead to a reintegration of syntax and semantics at a very low level.³ Rather than having meaning-free symbols which syntactically combine, and meanings which are recursively ascribed, we could functionally compose symbols which bear their own meanings.

² In fact, by these metrics, the test case ((D N)(P (D N))) should really be classified as a sentence; since it was not used in any other construction, there was no reason for the RAAM to believe otherwise.

IMPLICATIONS

One of the reasons for the historical split between symbolic AI and fields such as pattern recognition or neural networks is that the structured representations AI requires do not easily commingle with the representations offered by n-dimensional vectors. Since recursive distributed representations form a bridge from structured representations to n-dimensional vectors, they will allow high-level AI tasks to be accomplished with neural networks.

ASSOCIATIVE INFERENCE

There are many kinds of inferences which seem to be very easy for humans to perform. In fact, we must perform incredibly long chains of inferences in the act of understanding natural language (Birnbaum, 1986). And yet, when we consider performing those inferences using standard techniques which involve variable binding and unification, the costs seem prohibitive. For humans, however, these inferences seem to cost no more than simple associative priming (Meyer & Schvaneveldt, 1971). Since RAAMs can devise representations of trees as analog patterns which can actually be associated, they may lead to very fast neuro-logical inference engines. For example, in a larger experiment, which was reported in (Pollack, 1988b),
a 48-16-48 RAAM developed representations for a set of ternary trees, such as (THOUGHT PAT (KNEW JOHN (LOVED MARY JOHN))), which corresponded to a set of sentences with complex constituent structure. This RAAM was able to represent, as points within a 16-dimensional hypercube, all cases of (LOVED X Y) where X and Y were chosen from the set {JOHN, MARY, PAT, MAN}. A simple test of whether or not associative inference were possible, then, would be to build a "symmetric love" network, which would perform the simple inference: "If (LOVED X Y) then (LOVED Y X)". A network with 16 input and output units and 8 hidden units was successfully trained on 12 of the 16 possible associations, and worked perfectly on the remaining 4. (Note that it accomplished this task without any explicit machinery for matching and moving X and Y.) One might think that in order to chain simple inferences like this one we will need many hidden layers. But there has recently been some coincidental work showing that feed-forward networks with two layers of hidden units can compute arbitrary mappings (Lapedes & Farber, 1988a; Lippman, 1987). Therefore, we can assume that the sequential application of associative-style inferences can be speeded up, at least by retraining, to a simple 3-cycle process.

3 The wrong distinction is the inverse of the undifferentiated concept problem in science, such as the fusing of the notions of heat and temperature in the 17th century (Wiser & Carey, 1983). For example, a company which manufactured workstations based on a hardware distinction between characters and graphics had deep trouble when trying to build a modern window system.

OPENING THE DOOR TO CHAOS

The Capacity of RAAMs

As discussed in the introduction, the question of infinite generative capacity is central. In the domain of RAAMs the question becomes: Given a finite set of trees to represent,
how can the system then represent an infinite number of related trees? For the syntactic-tree experiment reported above, the 20-10-20 RAAM was only able to represent 32 new trees. The 48-16-48 RAAM was able to represent many more than it was trained on, but not yet an infinite number in the linguistic sense. I do not yet have any closed analytical forms for the capacity of a recursive auto-associative memory. Given that it is not really a file-cabinet or content-addressable memory, but a memory for a gestalt of rules for recursive pattern compression and reconstruction, capacity results such as those of (Willshaw, 1981) and (Hopfield, 1982) do not directly apply. Binary patterns are not being stored, so one cannot simply count how many. I have considered, however, the capacity of such a memory in the limit, where the actual functions and analog representations are not bounded by single linear transformations and sigmoids or by 32-bit floating point resolution.

[Figure 2: A plot of the bit-interspersal function. The x and y axes represent the left and right subtrees, and the height represents the output of the function.]

Consider just a 2-1-2 recursive auto-associator. It is really a reconstructible mapping from points in the unit square to points on the unit line. In order to work, the function should define a parametric 1-dimensional curve in 2-space, perhaps a set of connected splines.4 As more and more data points need to be encoded, this parametric curve will become more convoluted to cover them. In the limit, it will no longer be a 1-dimensional curve, but a space-filling curve with a fractional dimension.

4 (Saund, 1987) originally made the connection between auto-association and dimensionality reduction.
One possible functional basis for this ultimate 2-1-2 recursive auto-associator is "bit-interspersal," where the compression function would return a number, between 0 and 1, by interleaving the bits of the binary-fractional representations of the left and right subtrees. Figure 2 depicts this function, not as a space-filling curve, but as a surface, where no two points project to the same height. The surface is a 3-dimensional variant of a recognizable instance of Cantor dust called the devil's staircase. Thus, it is my working hypothesis that alternative activation functions (i.e. other than the usual sigmoidal or threshold), based on fractal or chaotic mathematics, are the critical missing link between neural networks and infinite capacity systems.

Between AI and Chaos

The remainder of this paper is what is behind the door: the result of simultaneous consideration of the fields of AI, Neural Networks, Fractals, and Chaos.5 It is, in essence, a proposal on where (I am planning) to look for fruitful interplay between these fields, and what some interdisciplinary problems are which could be solved in this context. There has already been some intrusion of interest in chaos in the physics-based study of neural networks as dynamical systems. For example, both (Huberman & Hogg, 1987) and (Kurten, 1987) show how phase-transitions occur in particular neural-like systems, and (Lapedes & Farber, 1988b) demonstrate how a network trained to predict a simple iterated function would follow that function's bifurcations into chaos. However, these efforts are either noticing chaos, or working with it as a domain. At the other end of the spectrum are those relying on chaos to explain such things as the emergence of consciousness, or free will (Braitenberg, 1984, p. 65). In between these extremes lie some very hard problems recognized by AI which, I believe, could benefit from a new viewpoint.
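At finite precision, the bit-interspersal compression described above can be sketched directly: treat the left and right subtree codes as fixed-point binary fractions and interleave their digits. A minimal Python sketch (the function names and the 16-bit precision are my choices for illustration, not the paper's):

```python
def interleave(left, right, bits=16):
    """Compress two fractions in [0, 1) into one number by interleaving
    the digits of their fixed-point binary expansions."""
    li = int(left * (1 << bits))    # fixed-point digits of the left subtree
    ri = int(right * (1 << bits))   # fixed-point digits of the right subtree
    z = 0
    for i in range(bits):
        z |= ((li >> i) & 1) << (2 * i + 1)   # left digits at odd positions
        z |= ((ri >> i) & 1) << (2 * i)       # right digits at even positions
    return z / (1 << (2 * bits))

def deinterleave(code, bits=16):
    """Reconstruct both subtree codes from the single interleaved code."""
    zi = int(round(code * (1 << (2 * bits))))
    li = ri = 0
    for i in range(bits):
        li |= ((zi >> (2 * i + 1)) & 1) << i
        ri |= ((zi >> (2 * i)) & 1) << i
    return li / (1 << bits), ri / (1 << bits)
```

Because every operation here is exact at this precision, the round trip is lossless; it is only in the limit of infinite precision that the graph of such a function becomes the space-filling, devil's-staircase surface of Figure 2.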
Self-Similarity and the Symbol-Grounding Problem

The bifurcation between structure and form which leads to the near universality of discrete symbolic structures with ascribed meanings has led to a yawning gap between cognitive and perceptual subareas of AI. This gulf can be seen between such fields as speech recognition and language comprehension, early versus late vision, and robotics versus planning. The low-level tasks require numeric, sensory representations, while the high-level ones require compositional symbolic representations.6 The idea of infinitely regressing symbolic representations which bottom-out at perception has been an unimplementable folk idea ("Turtles all the way down") in AI for quite some time. The reason for its lack of luster is that the amount of information in such a structure is considered combinatorially explosive. Unless, of course, one considers self-similarity to be an information-limiting construction.

4 (continued) If such a complete 2-1-2 RAAM could be found, it would give a unique number to every binary tree such that the number of a tree would be an invertible function of the numbers of its two subtrees.

5 Talking about 4 disciplines is both difficult, and dangerous, considering the current size of the chasm, and the mutual hostilities: AI thinks NN is just a spectre, NN thinks AI is dead, F thinks it subsumes C, and C thinks F is just showbiz.

6 It is no surprise, then, that neural networks are much more successful at the former tasks.

While working on a new activation function for RAAMs which would magically have this property, I have started building modular systems of RAAMs, following Ballard's (1987) work on non-recursive auto-associators. When viewing a RAAM as a constrained system, one can see that the terminal patterns are overconstrained and the highest-level non-terminal patterns are unconstrained.
Only those non-terminals which are further compressed have a reasonable similarity constraint. One could imagine a cascade of RAAMs, where the highest non-terminal patterns of a low-level RAAM (say, for encodings of letters) are the terminal patterns for a middle-level RAAM (say, for words), whose non-terminal patterns are the terminals for a higher-level RAAM (say, for sentences). If all the representations were the same width, then there must be natural similarities between the structures at different conceptual scales.

Inductive Inference and Strange Automata

The problem of inductive inference,7 of developing a machine which can learn to recognize or generate a language, is a pretty hard problem, even for regular languages. In the process of extending my work on a recurrent high-order neural network called sequential cascaded nets (Pollack, 1987a), something strange occurred. It is always possible to completely map out any unknown finite-state machine by providing each known state with every input token, and keeping track of the states. This is, in fact, what defines such a machine as finite. Since a recurrent network is a dynamical system, rather than an automaton, one must choose a fuzz-factor for comparing real numbers. For a particular network trained on a context-free grammar, I was unable to map it out. Each time I reduced the fuzz-factor, the machine doubled in size, much like Mandelbrot's coastline (Mandelbrot, 1982). This suggests a bidirectional analogy between finite state automata and dynamical systems of the neural network sort.8 An automaton has an initial state, a set of states, a lexicon, and a function which produces a new state given an old state and input token. A subset of states are distinguished as accepting states. A dynamical system has an initial state, and an equation which defines its evolution over time, perhaps in response to environment.
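The mapping-out procedure just described can be made concrete: probe every reachable state with every token, identifying real-valued states that fall within the same fuzz-sized bin. A toy Python sketch (the driver function and the parity example are mine, for illustration; a genuinely finite system yields a stable state count, while a chaotic one keeps growing as the fuzz-factor shrinks):

```python
def extract_machine(step, start, tokens, fuzz, max_states=10000):
    """Map out a dynamical system as if it were a finite-state machine.
    Real-valued states are identified when they land in the same
    fuzz-sized bin; return how many distinct states were found."""
    quantize = lambda s: round(s / fuzz)
    seen = {quantize(start)}
    frontier = [start]
    while frontier and len(seen) < max_states:
        state = frontier.pop()
        for tok in tokens:
            nxt = step(state, tok)
            q = quantize(nxt)
            if q not in seen:
                seen.add(q)
                frontier.append(nxt)
    return len(seen)

# A genuinely finite system: parity of the number of 1-tokens seen so far.
parity_step = lambda s, tok: (s + tok) % 2
```

For the parity machine the count is 2 at any fuzz setting; for a chaotic recurrent network, the count would keep doubling as the fuzz-factor is reduced, which is exactly the behavior reported above.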
Such dynamical systems have elements known as attractor states, to which the state of the system usually evolves. Two such varieties, limit points and limit cycles, correspond directly to similar elements in finite-state automata: states with loops back to themselves, and short boring cycles of states (such as the familiar "Please Login. Enter Password. Bad Password. Please Login..."). But there is an element in non-linear dynamical systems which does not have a correlate in formal automata theory, which is the notion of a chaotic, or strange, attractor, first noticed in work on weather prediction (Lorenz, 1963). A chaotic attractor does not repeat. The implication for inductive inference is that while, formally, push-down automata and Turing machines are necessary for recognizing harder classes of languages, such as context-free or context-sensitive, respectively, the idiosyncratic state-table and external memory of such devices make them impossible to induce. On the other hand, chaotic dynamical systems look much like automata, and should be about as hard to induce. The infinite memory is internal to the state vector, and the finite-state control is built into a more regular, but non-linear, function.

7 For a good survey see (Angluin & Smith, 1983). J. Feldman recently posed this as a "challenge" problem for neural networks (c.f. Servan-Schreiber, Cleeremans, & McClelland (this volume)).

8 Wolfram (1984) has, of course, made the analogy between dynamical systems and cellular automata.

Fractal Energy Landscapes and Natural Kinds

Hopfield (1982) described an associative memory in which each of a finite set of binary vectors to be stored would define a local minimum in some energy landscape. The Boltzmann Machine (Ackley et al., 1985) uses a similar physical analogy along with simulated annealing to seek the global minimum in such landscapes as well. Pineda (1987) has a continuous version of such a memory, where the attractor states are analog vectors.
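The Hopfield memory just mentioned can be sketched in a few lines: store ±1 vectors with a Hebbian outer-product rule so that each becomes a local minimum of a quadratic energy, then let a probe settle by asynchronous sign updates. A minimal numpy sketch (my notation, not Hopfield's):

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product weights; each stored +/-1 pattern becomes
    a local minimum of the energy E(s) = -0.5 * s.W.s."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W / len(patterns)

def recall(W, probe, sweeps=10):
    """Roll downhill: asynchronously flip each unit to the sign of
    its local field until the probe settles into a minimum."""
    s = np.asarray(probe, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            h = W[i] @ s
            if h != 0:
                s[i] = np.sign(h)
    return s
```

Corrupting one bit of a stored pattern and calling recall rolls the state back into the nearest minimum, which is the "ball rolling down hills" picture developed below.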
One can think of these energy minimization processes as a ball rolling down hills. Given a smooth landscape, that ball will roll into a local minimum. On the other hand, if the landscape were constructed by recursive similarity, or by a midpoint displacement technique, such as those used in figures of fractal mountains, there will be an infinite number of local minima, which will be detected based on the size of the ball. Naillon and Theeten's report (this volume), in which an exponential number of attractors are used, is along the proposed line. The idea of high-dimensional feature vectors has a long history in psychological studies of memory and representation, and is known to be inadequate from that perspective as well as from the representational requirements of AI. But AI has no good empirical candidates for a theory of mental representation either. Such theories generally break down when dealing with novel instances of Natural Kinds, such as birds, chairs, and games. A robot with necessary and sufficient conditions, logical rules, or circumscribed regions in feature space cannot deal with walking into a room, recognizing and sitting on a hand-shaped chair. If the chairs we know form the large-scale local minima of an associative memory, then perhaps the chairs we don't know can also be found as local minima in the same space, albeit on a smaller scale. Of course, all the chairs we know are only smaller-scale minima in our memory for furniture.

Fractal Compression and the Capacity of Memory

Consider something like the Mandelbrot set as the basis for a reconstructive memory. Rather than storing all pictures, one merely has to store the "pointer" to a picture,9 and, with the help of a simple function and large computer, the picture can be retrieved. Most everyone has seen glossy pictures of the colorful prototype shapes of yeasts and dragons that infinitely appear as the location and scale are changed along the chaotic boundary.
The first step in this hypothetical construction is to develop a related set with the additional property that it can be inverted in the following sense: given a rough sketch of a picture likely to be in the set, return the best "pointer" to it.10 The second step, perhaps using normal neural-network technology, is to build an invertible non-linear mapping from the prototypes in an application domain (like chess positions, human faces, sentences, schemata, etc.) to the largest-scale prototypes in the mathematical memory space.

9 I.e., a point on the complex plane and the window size.

10 Related sets might show up with great frequency using iterated systems, like Newton's method or backpropagation. And a more precise notion of inversion, involving both representational tolerance and scale, is required.

Taken together, this hypothetical system turns out to be a look-up table for an infinite set of similar representations which incurs no memory cost for its contents. Only the pointers and the reconstruction function need to be stored. Such a basis for reconstructive storage would render meaningless the recent attempts at "counting the bits" of human memory (Hillis, 1988; Landauer, 1986). While these two steps together sound quite fantastic, they are closely related to the RAAM idea using a chaotic activation function. The reconstructor produces contents from pointers, while the compressor returns pointers from contents. And the idea of a uniform fractal basis for memory is not really too distant from the idea of a uniform basis for visual images, such as iterated fractal surfaces based on the collage theorem (Barnsley et al., 1985). A moral could be that impressive demonstrations of compression, such as the bidirectional mapping from ideas to language, must be easy when one can discover the underlying regularity.
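The economy of this scheme comes from the reconstruction function being tiny. For the Mandelbrot set itself, the pointer of footnote 9 — a point on the complex plane plus a window size — is expanded by nothing more than iterating z ← z² + c. A minimal sketch (resolution and iteration limit are arbitrary choices of mine):

```python
def escape_time(c, max_iter=100):
    """Iterate z <- z^2 + c from z = 0; return the step at which |z|
    escapes 2, or max_iter if the point appears to be in the set."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

def render(center, half_width, size=8, max_iter=100):
    """Reconstruct a whole 'picture' from a single stored pointer:
    a center point and a window size, nothing else."""
    step = 2 * half_width / size
    return [[escape_time(center + complex(x * step - half_width,
                                          y * step - half_width), max_iter)
             for x in range(size)] for y in range(size)]
```

Only the pointer (center, half_width) need be stored; the picture, at any resolution, is regenerated on demand by the fixed function.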
CONCLUSION

Recursive auto-associative memory can develop fixed-width recursive distributed representations for variable-sized data-structures such as symbolic trees. Given such representations, one implication is that complex inferences, which seemed to require complex information handling strategies, can be accomplished with associations. A second implication is that the representations must become self-similar and space-filling in the limit. This implication, of fractal and chaotic structures in mental representations, may lead to a reconsideration of many fundamental decisions in computational cognitive science. Dissonance for cognitive scientists can be induced by comparing the infinite output of a formal language generator (with anybody's rules) to the boundary areas of the Mandelbrot set with its simple underlying function. Which is vaster? Which more natural? For when one considers the relative success of fractal versus euclidean geometry at compactly describing natural objects, such as trees and coastlines, one must wonder at the accuracy of the pervasive description of naturally-occurring mental objects as features or propositions which bottom-out at meaningless terms.

References

Ackley, D. H., Hinton, G. E. & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann Machines. Cognitive Science, 9, 147-169.
Angluin, D. & Smith, C. H. (1983). Inductive Inference: Theory and Methods. Computing Surveys, 15, 237-269.
Ballard, D. H. (1987). Modular Learning in Neural Networks. In Proceedings of the Sixth National Conference on Artificial Intelligence. Seattle, 279-284.
Barnsley, M. F., Ervin, V., Hardin, D. & Lancaster, J. (1985). Solution of an inverse problem for fractals and other sets. Proceedings of the National Academy of Sciences, 83.
Birnbaum, L. (1986). Integrated processing in planning and understanding. Research Report 489, New Haven: Computer Science Dept., Yale University.
Braitenberg, V. (1984). Vehicles: Experiments in synthetic psychology.
Cambridge: MIT Press.
Chomsky, N. (1957). Syntactic structures. The Hague: Mouton and Co.
Dyer, M. G., Flowers, M. & Wang, Y. A. (1988). Weight Matrix = Pattern of Activation: Encoding Semantic Networks as Distributed Representations in DUAL, a PDP architecture. UCLA-Artificial Intelligence-88-5, Los Angeles: Artificial Intelligence Laboratory, UCLA.
Elman, J. L. (1988). Finding Structure in Time. Report 8801, San Diego: Center for Research in Language, UCSD.
Fodor, J. & Pylyshyn, Z. (1988). Connectionism and Cognitive Architecture: A Critical Analysis. Cognition, 28, 3-71.
Hillis, W. D. (1988). Intelligence as emergent behavior; or, the songs of eden. Daedalus, 117, 175-190.
Hinton, G. (1988). Representing Part-Whole hierarchies in connectionist networks. In Proceedings of the Tenth Annual Conference of the Cognitive Science Society. Montreal, 48-54.
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA, 79, 2554-2558.
Huberman, B. A. & Hogg, T. (1987). Phase Transitions in Artificial Intelligence Systems. Artificial Intelligence, 33, 155-172.
Kurten, K. E. (1987). Phase transitions in quasirandom neural networks. In Institute of Electrical and Electronics Engineers First International Conference on Neural Networks. San Diego, II-197-20.
Landauer, T. K. (1986). How much do people remember? Some estimates on the quantity of learned information in long-term memory. Cognitive Science, 10, 477-494.
Lapedes, A. S. & Farber, R. M. (1988). How Neural Nets Work. LAUR-88-418: Los Alamos.
Lapedes, A. S. & Farber, R. M. (1988). Nonlinear Signal Processing using Neural Networks: Prediction and system modeling. Biological Cybernetics. To appear.
Lippman, R. P. (1987). An introduction to computing with neural networks. Institute of Electrical and Electronics Engineers ASSP Magazine, April, 4-22.
Lorenz, E. N. (1963). Deterministic Nonperiodic Flow.
Journal of Atmospheric Sciences, 20, 130-141.
Mandelbrot, B. (1982). The Fractal Geometry of Nature. San Francisco: Freeman.
Meyer, D. E. & Schvaneveldt, R. W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90, 227-234.
Pineda, F. J. (1987). Generalization of Back-Propagation to Recurrent Neural Networks. Physical Review Letters, 59, 2229-2232.
Pinker, S. & Prince, A. (1988). On Language and Connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28, 73-193.
Pollack, J. B. (1987). Cascaded Back Propagation on Dynamic Connectionist Networks. In Proceedings of the Ninth Conference of the Cognitive Science Society. Seattle, 391-404.
Pollack, J. B. (1987). On Connectionist Models of Natural Language Processing. Ph.D. Thesis, Urbana: Computer Science Department, University of Illinois. (Available as MCCS-87-100, Computing Research Laboratory, Las Cruces, NM)
Pollack, J. B. (1988). Recursive Auto-Associative Memory: Devising Compositional Distributed Representations. In Proceedings of the Tenth Annual Conference of the Cognitive Science Society. Montreal, 33-39.
Pollack, J. B. (1988). Recursive Auto-Associative Memory: Devising Compositional Distributed Representations. MCCS-88-124, Las Cruces: Computing Research Laboratory, New Mexico State University.
Rumelhart, D. E., Hinton, G. & Williams, R. (1986). Learning Internal Representations through Error Propagation. In D. E. Rumelhart, J. L. McClelland & the PDP Research Group (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1. Cambridge: MIT Press.
Saund, E. (1987). Dimensionality Reduction and Constraint in Later Vision. In Proceedings of the Ninth Annual Conference of the Cognitive Science Society. Seattle, 908-915.
Willshaw, D. J. (1981). Holography, Associative Memory, and Inductive Generalization. In G. E. Hinton & J. A.
Anderson (Eds.), Parallel models of associative memory. Hillsdale: Lawrence Erlbaum Associates.
Wiser, M. & Carey, S. (1983). When heat and temperature were one. In D. Gentner & A. Stevens (Eds.), Mental Models. Hillsdale: Erlbaum.
Wolfram, S. (1984). Universality and Complexity in Cellular Automata. Physica, 10D, 1-35.
BACKPROPAGATION AND ITS APPLICATION TO HANDWRITTEN SIGNATURE VERIFICATION

Dorothy A. Mighell, Timothy S. Wilkinson & Joseph W. Goodman
Electrical Eng. Dept., Info. Systems Lab, Stanford University, Stanford, CA 94305

ABSTRACT

A pool of handwritten signatures is used to train a neural network for the task of deciding whether or not a given signature is a forgery. The network is a feedforward net, with a binary image as input. There is a hidden layer, with a single unit output layer. The weights are adjusted according to the backpropagation algorithm. The signatures are entered into a C software program through the use of a Datacopy Electronic Digitizing Camera. The binary signatures are normalized and centered. The performance is examined as a function of the training set and network structure. The best scores are on the order of 2% true signature rejection with 2-4% false signature acceptance.

INTRODUCTION

Signatures are used every day to authorize the transfer of funds for millions of people. We use our signature as a form of identity, consent, and authorization. Bank checks, credit cards, legal documents and waivers all require the ever-changing personalized signature. Forgeries on such transactions amount to millions of dollars lost each year. A trained eye can spot most forgeries, but it is not cost effective to hand-check all signatures due to the massive number of daily transactions. Consequently, only disputed claims and checks written for large amounts are verified. The consumer would certainly benefit from the added protection of automated verification. Neural networks lend themselves very well to signature verification.
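The architecture outlined in the abstract — binary inputs, one hidden layer, a single output unit, weights adjusted by backpropagation — can be sketched in a few lines of modern numpy (the unit counts, learning rate, and sigmoid activations below are my choices for illustration, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Net:
    """Feedforward net: binary (+1/-1) inputs, one hidden layer,
    a single sigmoid output trained by backpropagation."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0, 0.1, n_hidden)
        self.b2 = 0.0

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x + self.b1)
        return sigmoid(self.w2 @ self.h + self.b2)

    def train(self, x, target, lr=0.5):
        y = self.forward(x)
        d_out = (y - target) * y * (1 - y)              # squared-error delta
        d_hid = d_out * self.w2 * self.h * (1 - self.h) # backpropagated delta
        self.w2 -= lr * d_out * self.h
        self.b2 -= lr * d_out
        self.W1 -= lr * np.outer(d_hid, x)
        self.b1 -= lr * d_hid
        return y
```

Training drives the output toward 1 for true signatures and 0 for forgeries; classification is then a threshold on the output value, as in the sample run below.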
Previously, they have proven applicable to other signal processing tasks, such as character recognition {Fukushima, 1986} {Jackel, 1988}, sonar target classification {Gorman, 1986}, and control, as in the broom balancer {Tolat, 1988}.

HANDWRITING ANALYSIS

Signature verification is only one aspect of the study of handwriting analysis. Recognition is the objective, whether it be of the writer or the characters. Writer recognition can be further broken down into identification and verification. Identification selects the author of a sample from among a group of writers. Verification confirms or rejects a written sample for a single author. In both cases, it is the style of writing that is important. Deciphering written text is the basis of character recognition. In this task, linguistic information such as the individual characters or words is extracted from the text. Style must be eliminated to get at the content. A very important application of character recognition is automated reading of zip-codes in the post office {Jackel, 1988}. Data for handwriting analysis may be either dynamic or static. Dynamic data requires special devices for capturing the temporal characteristics of the sample. Features such as pressure, velocity, and position are examined in the dynamic framework. Such analysis is usually performed on-line in real time. Static analysis uses the final trace of the writing, as it appears on paper. Static analysis does not require any special processing devices while the signature is being produced. Centralized verification becomes possible, and the processing may be done off-line. Work has been done in both static and dynamic analysis {Sato, 1982} {Nemcek, 1974}. Generally, signature verification efforts have been more successful using the dynamic information. It would be extremely useful, though, to perform the verification using only the written signature.
This would eliminate the need for costly machinery at every place of business. Personal checks may also be verified through a static signature analysis.

TASK

The handwriting analysis task with which this paper is concerned is that of signature verification using an off-line method to detect casual forgeries. Casual forgeries are non-professional forgeries, in which the writer does not practice reproducing the signature. The writer may not even have a copy of the true signature. Casual forgeries are very important to detect. They are far more abundant, and involve greater monetary losses, than professional forgeries. This signature verification task falls into the writer recognition category, in which the style of writing is the important variable. The off-line analysis allows centralized verification at a lower cost and broader use.

HANDWRITTEN SIGNATURES

The signatures for this project were gathered from individuals to produce a pool of 80 true signatures and 66 forgeries. These are signatures, true and false, for one person. There is a further collection of signatures, both true and false, for other persons, but the majority of the results presented will be for the one individual. It will be clear when other individuals are included in the demonstration. The signatures are collected on 3x5 index cards which have a small blue box as a guideline. The cards are scanned with a CCD array camera from Datacopy, and thresholded to produce binary images. These binary images are centered and normalized to fit into a 128x64 matrix. Either the entire 128x64 image is presented as input, or a 90x64 image of the three initials alone is presented. It is also possible to present preprocessed inputs to the network.

SOFTWARE SIMULATION

The type of learning algorithm employed is that of backpropagation. Both dwell and momentum are included.
Dwell is the type of scheduling employed, in which an image is presented to the network, and the network is allowed to "dwell" on that input for a few iterations while updating its weights. C. Rosenberg and T. Sejnowski have done a few studies on the effects of scheduling on learning {Rosenberg, 1986}. Momentum is a term included in the change-of-weights equation to speed up learning {Rumelhart, 1986}. The software is written in Microsoft C, and run on an IBM PC/AT with an 80287 math co-processor chip. Included in the simulation is a piece-wise linear approximation to the sigmoid transfer function as shown in Figure 1. This greatly improves the speed of calculation, because an exponential is not calculated. The non-linearity is kept to allow for layering of the network. Most of the details of initialization and update are the same as those reported in NetTalk {Sejnowski, 1986}.

[Figure 1: Piece-wise linear transfer function.]

Many different nets were trained in this signature verification project, all of which were feed-forward. The output layer most often consisted of a single output neuron, but 5 output neurons have been used as well. If a hidden layer was used, then the number of hidden units ranged from 2 to 53. The networks were both fully-connected and partially-connected.

SAMPLE RUN

The simplest network is that of a single neuron taking all 128x64 pixels as input, plus one bias. Each pixel has a weight associated with it, so that the total number of weights is 128x64 + 1 = 8193. Each white pixel is assigned an input value of +1, each black pixel has a value of -1. The training set consists of 10 true signatures with 10 forgeries. Figure 2a depicts the network structure of this sample run.

Figure 2. Sample run.
a) Network = one output neuron, one weight per pixel, fully connected. Training set = 10 true signatures + 10 forgeries. b) ROC plot for the sample run (probability of false acceptance vs. probability of true detection). Test set = 70 true signatures + 56 forgeries. c) Clipped picture of the weights for the sample run. White = positive weight, black = negative weight. d) Cumulative distribution function for the true signatures (+) and for the forgeries (0) of the sample run.

The network is trained on these 20 signatures until all signatures are classified correctly. The trained network is then tested on the remaining 70 true signatures and 56 forgeries. The results are depicted in Figures 2b and 2d. Figure 2b is a radar operating characteristic curve, or roc plot for short. In this presentation of data, the probability of detecting a true signature is plotted against the probability of accepting a forgery. Roc plots have been used for some time in the radar sciences as a means for visualizing performance {Marcum, 1960}. A perfect roc plot has a right angle in the upper left-hand corner, which would show perfect separation of true signatures from forgeries. The curve is plotted by varying the threshold for classification. Everything above the threshold is labeled a true signature; everything below the threshold is labeled a forgery. The roc plot in Figure 2b is close to perfect, but there is some overlap in the output values of the true signatures and forgeries. The overlap can be seen in the cumulative distribution functions (cdfs) for the true and false signatures as shown in Figure 2d. As seen in the cdfs, there is fairly good separation of the output values. For a given threshold of 0.5, the network produces 1% rejection of true signatures as false, with 4% acceptance of forgeries as being true. If one lowers the threshold for classification down to 0.43, the true rejection becomes nil, with a false acceptance of 7%.
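An roc plot like Figure 2b is computed exactly as the text says: sweep the classification threshold and, at each setting, record the fraction of forgeries accepted against the fraction of true signatures detected. A minimal sketch (the function name is mine):

```python
def roc_points(true_scores, forgery_scores):
    """Sweep the threshold over every observed output value and record
    (P(false acceptance), P(true detection)) at each setting."""
    points = []
    for t in sorted(set(true_scores) | set(forgery_scores)):
        fa = sum(s >= t for s in forgery_scores) / len(forgery_scores)
        td = sum(s >= t for s in true_scores) / len(true_scores)
        points.append((fa, td))
    return points
```

A perfectly separating network yields the point (0.0, 1.0) — the right angle in the upper left-hand corner of the roc plot described above.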
A simplified picture of the weights is shown in Figure 2c, with white pixels designating positive weights, and black pixels negative weights.

OTHER NETWORKS

The sample run above was expanded to include 2 and 3 hidden neurons with the single output neuron. The results were similar to the single unit network, implying that the separation is linear. The 128x64 input image was also divided into regions, with each region feeding into a single neuron. In one network structure, the input was sectioned into 32 equally sized regions of 16x16 pixels. The hidden layer thus has 32 neurons, each neuron receiving 16x16 + 1 inputs. The output neuron had 33 inputs. Likewise, the input image was divided into 53 regions of 16x16 pixels, this time overlapping. Finally, only the initials were presented to the network. (Handwriting experts have noted that leading strokes and separate capital letters are very significant in classification {Osborn, 1929}.) In this case, two types of networks were devised. The first had a single output neuron; the second had three hidden neurons plus one output neuron. Each of the hidden neurons received inputs from only one initial, rather than from all three. The network with the single output neuron produced the best results of all, with 2% true rejection and 2% false acceptance.

IMPORTANCE OF FORGERIES IN THE TRAINING SET

In all cases, the networks performed much better when forgeries were included in the training set. When an all-white image is presented as the only forgery, performance deteriorates significantly. When no forgeries are present, the network decides that all signatures are true signatures. It is therefore desirable to include actual forgeries in the training set, yet they may be impractical to obtain. One possibility for avoiding the collection of forgeries is to use computer-generated forgeries. Another is to distort the true signatures.
A third is to use true signatures of other people as forgeries for the person in question. The attraction of this last option is that the masquerading forgeries are already available for use.

NETWORK WITHOUT FORGERIES

To test the use of true signatures of other people as forgeries, the following network is devised. Once again, the input is the 128x64 pixel image. The output layer consists of five output neurons fully connected to the input image. The function of each output neuron is to be active when presented with a particular person's signature; when a forgery is present, the output is to be low. Figure 3a depicts this network. The training set has 50 true signatures, ten for each of five people. Each signature has a desired output of true for one neuron and false for the remaining four neurons. Once the network is trained, it is tested on 210 true signatures and 150 forgeries. Figures 3b and 3c record the results. At a threshold of 0.5, the true rejection is 3% and the false acceptance is 14%. Decreasing the threshold to 0.41 gives 0% true rejection and 28% false acceptance. These results are similar to the sample run, though not as good. This is a simple demonstration of the use of other true signatures as forgeries. More sophisticated techniques could improve the discrimination; for instance, selecting names with similar lengths or spelling should improve the classification.

CONCLUSION

Automated signature verification systems would be extremely important in the business world for verifying monetary transactions. Countless dollars are lost each day to instances of casual forgeries. An artificial neural network employing the backpropagation learning algorithm has been trained on both true and false signatures for classification. The results have been very good: 2% rejection of genuine signatures with 2% acceptance of forgeries.
The analysis requires only the static picture of the signature, thereby offering widespread use through centralized verification. True signatures of other people may substitute for the forgeries in the training set, eliminating the need for collecting non-genuine signatures.

[Figure 3: (a) network diagram with one output neuron per individual, labeled JWG, JTH, TSW, LDK, and ABH; (b) ROC plot, P(true detection) vs. P(false acceptance); (c) cumulative distribution functions of the output values]

Figure 3. Network without forgeries for 5 individuals. a) Network = 5 output neurons, one for each individual, as indicated by the initials. Training set = 10 true signatures for each individual. b) ROC plot for the network without forgeries. Test set = 210 true signatures + 150 forgeries. c) Cumulative distribution function for the true signatures (+) and for the forgeries (o) of the network without forgeries.

References

K. Fukushima and S. Miyake, "Neocognitron: A biocybernetic approach to visual pattern recognition", in NHK Laboratories Note, Vol. 336, Sep 1986 (NHK Science and Technical Research Laboratories, Tokyo).
P. Gorman and T. J. Sejnowski, "Learned classification of sonar targets using a massively parallel network", in the proceedings of the IEEE ASSP Oct 21, 1986 DSP Workshop, Chatham, MA.
L. D. Jackel, H. P. Graf, W. Hubbard, J. S. Denker, and D. Henderson, "An application of neural net chips: handwritten digit recognition", in IEEE International Conference on Neural Networks 1988, II 107-115.
J. T. Marcum, "A statistical theory of target detection by pulsed radar", in IRE Transactions on Information Theory, Vol. IT-6 (Apr.), pp 145-267, 1960.
W. F. Nemcek and W. C. Lin, "Experimental investigation of automatic signature verification", in IEEE Transactions on Systems, Man, and Cybernetics, Jan. 1974, pp 121-126.
A. S. Osborn, Questioned Documents, 2nd edition (Boyd Printing Co, Albany NY) 1929.
C. R. Rosenberg and T. J. Sejnowski, "The spacing effect on NETtalk, a massively parallel network", in Proceedings of the Eighth Annual Conference of the Cognitive Science Society (Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1986) 72-89.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation", in Parallel Distributed Processing: Explorations in the Microstructures of Cognition. Vol. 1: Foundations, edited by D. E. Rumelhart & J. L. McClelland (MIT Press, 1986).
Y. Sato and K. Kogure, "Online signature verification based on shape, motion, and writing pressure", in Proceedings of the 6th International Conference on Pattern Recognition, Vol. 2, pp 823-826 (IEEE NY) 1982.
T. J. Sejnowski and C. R. Rosenberg, "NETtalk: A Parallel Network that Learns to Read Aloud", Johns Hopkins University Department of Electrical Engineering and Computer Science Technical Report JHU/EECS-86/01 (1986).
V. V. Tolat and B. Widrow, "An adaptive 'broom balancer' with visual inputs", in IEEE International Conference on Neural Networks 1988, II 641-647.
THEORY OF SELF-ORGANIZATION OF CORTICAL MAPS

Shigeru Tanaka
Fundamental Research Laboratories, NEC Corporation
1-1 Miyazaki 4-Chome, Miyamae-ku, Kawasaki, Kanagawa 213, Japan

ABSTRACT

We have shown mathematically that the cortical maps in the primary sensory cortices can be reproduced by using three hypotheses which have a physiological basis and meaning. Here our main focus is on ocular dominance column formation in the primary visual cortex. Monte Carlo simulations on the segregation of ipsilateral and contralateral afferent terminals are carried out. Based on these, we show that almost all the physiological experimental results concerning the ocular dominance patterns of cats and monkeys reared under normal or various abnormal visual conditions can be explained from the viewpoint of phase transition phenomena.

ROUGH SKETCH OF OUR THEORY

In order to describe the use-dependent self-organization of neural connections {Singer, 1987; Frank, 1987}, we have proposed a set of coupled equations involving the electrical activities and the neural connection density {Tanaka, 1988}, using the following physiologically based hypotheses: (1) Modifiable synapses grow or collapse due to competition among themselves for trophic factors, which are secreted retrogradely from the postsynaptic side to the presynaptic side. (2) Synapses also sprout or retract according to the concurrence of presynaptic spike activity and postsynaptic local membrane depolarization. (3) Lateral connections already exist within the layer into which the modifiable nerve fibers are destined to project, before the synaptic modification begins. Considering this set of equations, we find that the time scale of the electrical activities is much smaller than the time course necessary for synapses to grow or retract, so we can apply the adiabatic approximation to the equations.
Furthermore, we identify the input electrical activities, i.e., the firing frequencies elicited from neurons in the projecting neuronal layer, with a stochastic process specified by the spatial correlation function C_kµ;k'µ'. Here, k and k' represent the positions of the neurons in the projecting layer, and µ stands for different pathways such as ipsilateral or contralateral, on-center or off-center, colour specific or nonspecific, and so on. From these approximations, we obtain a nonlinear stochastic differential equation for the connection density, which describes a survival process of synapses within a small region due to the strong competition. Therefore, we can look upon an equilibrium solution of this equation as a set of Potts spin variables σ_jkµ {Wu, 1982}: if the neuron k in the projecting layer sends its axon to the position j in the target layer, σ_jkµ = 1, and if not, σ_jkµ = 0. If we limit the discussion to such equilibrium solutions, the problem is reduced to the thermodynamics of the spin system. The details of the mathematics are not argued here because they are beyond the scope of this paper {Tanaka}. We find that the equilibrium behavior of the modifiable nerve terminals can be described in terms of the thermodynamics of a system whose Hamiltonian H and fictitious temperature T are given in terms of the averaged firing frequency and the correlation function C_kµ;k'µ'. V_jj' describes the interaction between synapses in the target layer; q is the ratio of the total averaged membrane potential to the averaged membrane potential induced through the modifiable synapses from the projecting layer; and τ_c and τ_s are the correlation time of the electrical activities and the time course necessary for synapses to grow or collapse.
APPLICATION TO THE OCULAR DOMINANCE COLUMN FORMATION

A specific cortical map structure is determined by the choice of the correlation function and the synaptic interaction function. Now, for mathematical simplicity, let us neglect the k dependence of the correlation function and take into account only the ipsilateral and contralateral pathways denoted by µ. In this case, we can reduce the Potts spin variable to an Ising spin through a transformation in which j is the position in layer 4 of the primary visual cortex and S_j takes only +1 or −1, according to ipsilateral or contralateral dominance. We find that this system can be described by the Hamiltonian

H = −h Σ_j S_j − (1/2) Σ_j Σ_{j'≠j} V_jj' S_j S_j' .   (3)

The first term of eq. (3) reflects the ocular dominance shift, while the second term is essential to the ocular dominance stripe segregation. Here, we adopt the following simplified form for V_jj':

V_jj' = (q_ex / (π A_ex²)) θ(A_ex − d_jj') − (q_inh / (π A_inh²)) θ(A_inh − d_jj') ,   (4)

where d_jj' is the distance between j and j'; A_ex and A_inh are determined by the extent of the excitatory and inhibitory lateral connections, respectively; θ is the step function; and q_ex and q_inh are proportional to the membrane potentials induced by excitatory and inhibitory neurons {Tanaka}. It is not essential to the qualitative discussion whether the interaction function is given by a step function, a Gaussian, or otherwise. Next, we define η_+1 and η_−1 as the average firing frequencies of ipsilateral and contralateral retinal ganglion cells (RGCs), and Δ_±1^v and Δ_±1^s as their fluctuations, which originate in the visually stimulated and the spontaneous firings of the RGCs, respectively. These are used to calculate two new parameters, r and a, defined in eqs. (5) and (6). r is related to the correlation of the firings elicited from the left and right RGCs.
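The Ising system of eq. (3) with the short-range interaction of eq. (4) can be sampled with a standard Metropolis scheme. The sketch below is a generic Monte Carlo demonstration, not the author's code: the disc-area normalization of V is an assumed reading of the garbled eq. (4), while the parameter values follow the simulation settings quoted in the text.

```python
import math, random

# Metropolis sampling of H = -h*sum(S) - (1/2)*sum V_jj' S_j S_j' on a
# small lattice. Disc-area normalization of V is an assumption.
random.seed(0)
N, dx = 20, 0.1                 # lattice side; dx = spin diameter (text)
T, h = 0.25, 0.0                # temperature and field (a = 0 case)
q_ex, q_inh = 1.0, 0.3          # K = q_inh/q_ex = 0.3 case
A_ex, A_inh = 0.25, 1.0

def V(d):
    """Excitatory inside A_ex, inhibitory inside A_inh (step functions)."""
    v = 0.0
    if d <= A_ex:
        v += q_ex / (math.pi * A_ex ** 2)
    if d <= A_inh:
        v -= q_inh / (math.pi * A_inh ** 2)
    return v

S = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(N)]

def local_field(i, j):
    f = h
    for k in range(N):
        for l in range(N):
            if (k, l) != (i, j):
                f += V(dx * math.hypot(i - k, j - l)) * S[k][l]
    return f

for _ in range(1000):                      # short demo run
    i, j = random.randrange(N), random.randrange(N)
    dE = 2 * S[i][j] * local_field(i, j)   # energy change of a spin flip
    if dE <= 0 or random.random() < math.exp(-dE / T):
        S[i][j] = -S[i][j]
```

Plotting the ±1 lattice after many sweeps should show the stripe-like segregation the paper describes; varying r and a would enter through J and h.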
If there are only spontaneous firings, there is no correlation between the left and right RGCs' firings. On the other hand, in the presence of visual stimulation, they will be correlated, since the two eyes receive almost the same images in normal animals. a is a function of the imbalance of the firings of the left and right RGCs. Now, J and h in eq. (3) can be expressed in terms of r and a:

J = b1 [ 1 − r (1 − a²) / (1 + a²) ] ,   (8)

where b1 is a constant of the order of 1 and b2 is determined by the average membrane potentials. Using the above equations, it will now be shown that patterns such as the ones observed for the ocular dominance columns of new-world monkeys and cats can be explained. The patterns depend strongly on three parameters: r, a, and K, the ratio q_inh/q_ex of the membrane potentials induced by the inhibitory and excitatory neurons.

RESULTS AND DISCUSSIONS

In the subsequent analysis by Monte Carlo simulations, we fix the values of the parameters: q_ex = 1.0, A_ex = 0.25, A_inh = 1.0, T = 0.25, b1 = 1.0, b2 = 0.1, and dx = 0.1, where dx is the diameter of the small area occupied by one spin. In the computer simulations of Fig. 1, we can see that the stripe patterns become more segregated as the correlation strength r decreases. The similarity of the pattern in Fig. 1c to the well-known experimental evidence {Hubel and Wiesel, 1977} is striking. Furthermore, it is known that if the animal has been reared under a condition where the two optic nerves are electrically stimulated synchronously, stripes in the primary visual cortex are not formed {Stryker}. This condition corresponds to r values close to 1, and again our theory predicts these experimental results, as can be seen in Fig. 1a. On the contrary, if a strabismic animal has been reared under otherwise normal conditions {Wiesel and Hubel, 1974}, r is effectively smaller than that of a normal animal, so we expect the ocular dominance stripes to have very sharp delimitations, as is observed experimentally.
In the case of a binocularly deprived animal {Wiesel and Hubel, 1974}, i.e., Δ_+1^v = Δ_−1^v = 0, it is reasonable to expect the situation to be similar to that of the strabismic animal.

Figure 1. Ocular dominance patterns given by the computer simulations in the case of large inhibitory connections (K = 1.0) and balanced activities (a = 0). The correlation strength r is given in each case: r = 0.9 for (a), r = 0.6 for (b), and r = 0.1 for (c).

In the case of a ≠ 0, we can get asymmetric stripe patterns such as the one in Fig. 2a. Since this situation corresponds to the condition of monocular deprivation, we can also explain the experimental observation {Hubel et al., 1977} successfully. There are other patterns, seen in Fig. 2b, which we call blob lattice patterns. The existence of such patterns has not been confirmed physiologically, as far as we know. However, this theory of ocular dominance column formation predicts that the blob lattice patterns will be found if appropriate conditions, such as the period of monocular deprivation, are chosen.

Figure 2. Ocular dominance patterns given by the computer simulations in the case of large inhibitory connections (K = 1.0) and imbalanced activities: a = 0.2 for (a) and a = 0.4 for (b). The correlation strength r is r = 0.1 for both (a) and (b).

We find that the straightness of the stripe pattern is controlled by the parameter K. Namely, if K is large, i.e., inhibitory connections are more effective than excitatory ones, the pattern is straight; if K is small, the pattern has many branches and ends, as illustrated in Fig. 3c. We can get a pattern similar to the ocular dominance pattern of normal cats {Anderson et al., 1988} if K is small and r ≈ rc (Fig. 3b). The meaning of rc will be discussed in the following paragraphs. We further get a labyrinth pattern by taking r smaller than rc with the same K.
We can think of the K value as specific to the animal under consideration, because of its definition. Therefore, this theory also predicts that the ocular dominance pattern of the strabismic cat will be sharply delimited but not a straight stripe, in contrast to the pattern of the monkey.

Figure 3. Ocular dominance patterns given by the computer simulations in the case of small inhibitory connections (K = 0.3) and balanced activities (a = 0). The correlation strength r is given in each case: r = 0.9 for (a), r = 0.6 for (b), and r = 0.1 for (c).

Having seen specific examples, let us now discuss the importance of the parameters r and a, which stand for the correlation strength and the imbalance of firings. According to the qualitative differences among the patterns obtained from our simulations, we classify the parameter space (r, a) into three regions in Fig. 4. In region (S), stripe patterns appear; the left-eye and right-eye dominance bands are equal in width for a = 0 and unequal for nonzero a. In region (B), the patterns are blob lattices. In region (U), the patterns are uniform and show no spatial modulation; a uniform pattern whose a value is close to 0 is a random pattern, while if a is close to 1 or −1, either only ipsilateral or only contralateral nerve terminals are present. On the horizontal axis, the (S) and (U) regions are divided by the critical point rc. In practice, if we define the order parameter as the ensemble-averaged amplitude of the dominant Fourier component of the spatial patterns, and the susceptibility as the variance of that amplitude, then we can observe their singular behavior near r = rc. Various conditions under which animals have been reared correspond to positions in the parameter space of Fig. 4: normal (N), synchronized electrical stimulation (SES), strabismus (S), binocular deprivation (BD), long-term monocular deprivation (LMD), and short-term monocular deprivation (SMD).
If an animal is kept under monocular deprivation for a long period, the absolute value of a is close to 1 and the r value is 0, considering eqs. (5) and (6). For a short-term monocular deprivation, the corresponding point falls somewhere on the line from N to LMD, because relaxation from the symmetric stripe pattern to the open-eye-dominant uniform pattern is incomplete. The position on this line is therefore determined by the relaxation period during which the animal is kept under monocular deprivation.

Figure 4. Schematic phase diagram for the pattern of ocular dominance columns. The parameter space (r, a) is divided into three regions: (S) stripe region, (B) blob lattice region, and (U) uniform region. N, SES, S, BD, LMD, and SMD stand for the conditions: normal, synchronized electrical stimulation, strabismus, binocular deprivation, long-term monocular deprivation, and short-term monocular deprivation, respectively. We show only the upper half plane, because the diagram is symmetric with respect to the line a = 0.

CONCLUSION

In this report, a new theory has been proposed which is able to explain use-dependent self-organization such as ocular dominance column formation. We have compared the theoretical results with various experimental data, and excellent agreement is observed. We can also explain and predict the self-organizing processes of other cortical map structures, such as the orientation columns and the retinotopic organization. Furthermore, the three main hypotheses of this theory are not confined to the primary visual cortex. This suggests that the theory will have wide applicability to the formation of the cortical map structures seen in the somatosensory cortex {Kaas et al., 1983}, the auditory cortex {Knudsen et al., 1987}, and the cerebellum {Ito, 1984}.

References

P. A. Anderson, J. Olavarria, R. C. Van Sluyter, J. Neurosci. 8, 2184 (1988).
E. Frank, Trends in Neurosci.
10, 188 (1987).
D. H. Hubel and T. N. Wiesel, Proc. R. Soc. Lond. B 198, 1 (1977).
D. H. Hubel, T. N. Wiesel, S. LeVay, Phil. Trans. R. Soc. Lond. B 278, 131 (1977).
M. Ito, The Cerebellum and Neural Control (Raven Press, 1984).
J. H. Kaas, M. M. Merzenich, H. P. Killackey, Ann. Rev. Neurosci. 6, 325 (1983).
E. I. Knudsen, S. DuLac, S. D. Esterly, Ann. Rev. Neurosci. 10, 41 (1987).
W. Singer, in The Neural and Molecular Bases of Learning (John Wiley & Sons Ltd., 1987) pp. 301-336.
M. P. Stryker, in Developmental Neurophysiology (Johns Hopkins Press), in press.
S. Tanaka, The Proceedings of SICE'88, ESS2-5, p. 1069 (1988).
S. Tanaka, to be submitted.
T. N. Wiesel and D. H. Hubel, J. Comp. Neurol. 158, 307 (1974).
F. Y. Wu, Rev. Mod. Phys. 54, 235 (1982).
AN ADAPTIVE NETWORK THAT LEARNS SEQUENCES OF TRANSITIONS

C. L. Winter
Science Applications International Corporation
5151 East Broadway, Suite 900
Tucson, Arizona 85711

ABSTRACT

We describe an adaptive network, TIN2, that learns the transition function of a sequential system from observations of its behavior. It integrates two subnets, TIN-1 (Winter, Ryan and Turner, 1987) and TIN-2. TIN-2 constructs state representations from examples of system behavior, and its dynamics are the main topic of the paper. TIN-1 abstracts transition functions from noisy state representations and environmental data during training, while in operation it produces sequences of transitions in response to variations in input. The dynamics of both nets are based on the Adaptive Resonance Theory of Carpenter and Grossberg (1987). We give results from an experiment in which TIN2 learned the behavior of a system that recognizes strings with an even number of 1's.

INTRODUCTION

Sequential systems respond to variations in their input environment with sequences of activities. They can be described in two ways. A black box description characterizes a system as an input-output function, m = B(u), mapping a string of input symbols, u, into a single output symbol, m. A sequential automaton description characterizes a system as a sextuple (U, M, S, s0, f, g), where U and M are alphabets of input and output symbols, S is a set of states, s0 is an initial state, and f and g are transition and output functions, respectively. The transition function specifies the current state, st, as a function of the last state and the current input, ut:

st = f(st−1, ut) .   (1)

In this paper we do not discuss output functions because they are relatively simple. To further simplify the discussion, we restrict ourselves to binary input alphabets, although the neural net we describe here can easily be extended to accommodate more complex alphabets.
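As an illustration, the sextuple description of the machine used in the experiment later in the paper (strings with even numbers of both 1's and 0's) can be written out directly. The encoding of states as pairs of parities is our choice for the sketch, not part of the paper:

```python
# Sketch of a sequential automaton (U, M, S, s0, f, g) for the
# even-0s-and-even-1s recognizer. States = (parity of 0's, parity of 1's).

U = ('0', '1')                      # input alphabet
S = [(p0, p1) for p0 in (0, 1) for p1 in (0, 1)]
s0 = (0, 0)                         # initially no symbols seen

def f(s, u):
    """Transition function: s_t = f(s_{t-1}, u_t), as in eq. (1)."""
    p0, p1 = s
    return ((p0 + 1) % 2, p1) if u == '0' else (p0, (p1 + 1) % 2)

def g(s):
    """Output function: 1 iff both parities are even."""
    return 1 if s == (0, 0) else 0

def B(string, s=s0):                # the black box induced by (f, g)
    for u in string:
        s = f(s, u)
    return g(s)

print(B('1010'), B('10'), B(''))    # 1 0 1
```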
A common engineering problem is to identify and then simulate the functionality of a system from observations of its behavior. Simulation is straightforward when we can actually observe the internal states of a system, since then the function f can be specified by learning simple associations among internal states and external inputs. In robotic systems, for instance, internal states can often be characterized by such parameters as stepper motor settings, strain gauge values, etc., and so are directly accessible. Artificial neural systems have been found useful in such simulations because they can associate large, possibly noisy state-space and input variables with state and output variables (Tolat and Widrow, 1988; Winter, Ryan and Turner, 1987). Unfortunately, in many interesting cases we must base simulations on a limited set of examples of a system's black box behavior, because its internal workings are unobservable. The black box description is not, by itself, much use as a simulation tool, since usually it cannot be specified without resorting to infinitely large input-output tables. As an alternative, we can try to develop a sequential automaton description of the system by observing regularities in its black box behavior. Artificial neural systems can contribute to the development of physical machines dedicated to system identification because i) frequently state representations must be derived from many noisy input variables, ii) data must usually be processed in continuous time, and iii) the explicit dynamics of artificial neural systems can be used as a framework for hardware implementations. In this paper we give a brief overview of a neural net, TIN2, which learns and processes state transitions from observations of correct black box behavior, when the set of observations is large enough to characterize the black box as an automaton. The TIN2 net is based on two component networks.
Each uses a modified adaptive resonance circuit (Carpenter and Grossberg, 1987) to associate heterogeneous input patterns. TIN-1 (Winter, Ryan and Turner, 1987) learns and executes transitions when given state representations. It has been used by itself to simulate systems for which explicit state representations are available (Winter, 1988a). TIN-2 is a highly parallel, continuous-time implementation of an approach to state representation first outlined by Nerode (1958). Nerode's approach to system simulation relies upon the fact that every string, u, moves a machine into a particular state, s(u), once it has been processed. The state s(u) can be characterized by putting the system initially into s(u) (by processing u) and then presenting a set of experimental strings, (w1, ..., wn), for further processing. Experiments consist of observing the outputs mi = B(u·wi), where · indicates concatenation. A state can then be represented by the entries in a row of a state characterization table, C (Table 1). The rows of C are indexed by strings u, its columns are indexed by experiments wi, and its entries are the mi. In Table 1, annotations in parentheses indicate the nodes (artificial neurons) and subnetworks of TIN-2 equivalent to the corresponding C table entry. During experimentation, C expands as states are distinguished from one another. The orchestration of experiments, their selection, and the role of teachers and of the environment have been investigated by Arbib and Zeiger (1969), Arbib and Manes (1974), Gold (1972 and 1978) and Angluin (1987), to name a few. TIN-2 provides an architecture in which C can be embedded and expanded as necessary.

TABLE 1. C Table Constructed by TIN-2

  u \ w |  λ           |  0 (Assembly 1)  |  1 (Assembly 2)
  λ     |  1 (Node 7)  |  0 (Node 2)      |  0 (Node 5)
  1     |  0 (Node 6)  |  0 (Node 9)      |  1 (Node 1)
  0     |  0 (Node 1)  |  1 (Node 6)      |  0 (Node 4)
  10    |  0 (Node 3)  |  0 (Node 2)      |  0 (Node 0)
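The entries of a table like Table 1 follow mechanically from the black box. A sketch, using the B of the paper's experiment (accept iff the string contains even numbers of both 1's and 0's) and with the empty Python string standing in for λ:

```python
# Sketch: compute Nerode-style state-characterization entries C[u][w] = B(u·w).

def B(s):
    """Black box of the experiment: even numbers of both 0's and 1's."""
    return 1 if s.count('0') % 2 == 0 and s.count('1') % 2 == 0 else 0

rows = ['', '1', '0', '10']        # '' plays the role of lambda
cols = ['', '0', '1']              # experiments

C = {u: tuple(B(u + w) for w in cols) for u in rows}
for u in rows:
    print(repr(u), C[u])
```

Running this reproduces the 0/1 entries of Table 1: row λ gives (1, 0, 0), row 1 gives (0, 0, 1), row 0 gives (0, 1, 0), and row 10 gives (0, 0, 0).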
Collections of nodes within TIN-2 learn to associate triples (mi, u, wi), so that inputting u later results in the output of the representation (m1, ..., mn) of the state associated with u.

TIN-2

TIN-2 is composed of separate assemblies of nodes whose dynamics are such that each assembly comes to correspond to a column in the state characterization table C; thus we call them column-assemblies. Competition among column-assemblies guarantees that the nodes of only one assembly, say the ith, learn to respond to the experimental pattern wi. Hence column-assemblies can be labelled w1, w2, and so on, but since labelings are not assigned ahead of time, arbitrarily large sets of experiments can be learned. The theory of adaptive resonance is implemented in TIN-2 column-assemblies through partitioned adaptive resonance circuits (cf. Ryan, Winter and Turner, 1987). Adaptive resonance circuits (Grossberg and Carpenter, 1987; Ryan and Winter, 1987) are composed of four collections of nodes: Input, Comparison (F1), Recognition (F2) and Reset. In TIN-2, Input, Comparison and Reset are split into disjoint m, u and w partitions. The net runs in either training or operational mode, and can move from one to the other as required. The training dynamics of the circuit are such that an F2 node is stimulated by the overall triple (m, u, w), but can be inhibited by a mismatch with any component. During operation, input of u recalls the state representation s(u) = (m1, ..., mn). Node activity in the kth F1 partition, F1,k, k = m, u, w, is governed by eq. (2). Here t < 1 scales time, Ii,k is the value of the ith input node of partition k, xi,k is the activity in the corresponding node of F1, and f is a sigmoid function with range [0, 1]. The elements of I are either 1, −1 or 0: the dynamics of the TIN-2 circuit are such that 0 indicates the absence of a symbol, while 1 and −1 represent the elements of a binary alphabet. The adaptive feedback filter T is a matrix (Tji) whose elements,
after training, are also 1, −1 or 0. The activity yj of the jth F2 node is driven by

dyj/dt = … + Σ_{m∈F1,m} Bmj h(xm)] − 4[ Σ_{η≠j} f(yη) + Ruj + Rw ] .   (3)

The feedforward filter B is composed of matrices (Buj), (Bmj) and (Bw) whose elements are normalized to the size of the patterns memorized. Note that (Bw) is the same for every node in a given column-assembly, i.e., the rows of (Bw) are all the same. Hence all nodes within a column-assembly learn to respond to the same experimental pattern, w, and it is in this sense that an assembly evolves to become equivalent to a column in table C. During training, the sum Σ_{η≠j} f(yη) in (3) runs over the recognition nodes of all TIN-2 column-assemblies; thus, during training, only one F2 node, say the Jth, can be active at a time across all assemblies. In operation, on the other hand, we remove the inhibition due to nodes in other assemblies, so that at any time one node in each column-assembly can be active and an entire state representation can be recalled. The reset terms Ruj and Rw in (3) actively inhibit the nodes of F2 when mismatches between memory and input occur. Ruj is specific to the jth F2 node,

dRuj/dt = −Ruj + f(yj) f(v ‖Iu‖ − ‖PI,u‖) .   (4)

Rw affects all F2 nodes in a column-assembly and is driven by

dRw/dt = −Rw + [Σ_{j∈F2} f(yj)] f(v ‖Iw‖ − ‖PI,w‖) .   (5)

Here v < 1 is a vigilance parameter (Carpenter and Grossberg, 1987): for either (4) or (5), R > 0 at equilibrium just when the intersection between memory and input, PI = T ∩ I, is relatively small, i.e., R > 0 when v ‖I‖ > ‖PI‖. When the system is in operation, we fix Rw = 0 and input the pattern Iw = 0. To recall the row in table C indexed by u, we input u to all column-assemblies; at equilibrium, xi,m = Σ_{j∈F2} Tji f(yj). Thus xi,m represents the memory of the element of C corresponding to u and to the column of C with the same label as the column-assembly. Winter (1988b) discusses the recall dynamics in more detail.
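The vigilance test behind the reset terms of eqs. (4) and (5) — a reset fires at equilibrium exactly when v‖I‖ > ‖T ∩ I‖ — can be isolated in a few lines. This is our own illustrative sketch, with ‖·‖ taken as the count of nonzero entries of a {1, −1, 0} pattern and made-up example patterns:

```python
# Sketch of the equilibrium vigilance/reset criterion: reset iff the
# overlap between memory T and input I is too small relative to v*||I||.

def norm(p):
    """Pattern size: number of nonzero entries of a {1,-1,0} pattern."""
    return sum(1 for x in p if x != 0)

def overlap(T, I):
    """T ∩ I: positions where memory and input carry the same symbol."""
    return [t if t == i and t != 0 else 0 for t, i in zip(T, I)]

def reset(T, I, v):
    return v * norm(I) > norm(overlap(T, I))

memory = [1, -1, 1, 0, -1]
good   = [1, -1, 1, 0, -1]   # matches memory everywhere
bad    = [-1, 1, 1, 0, -1]   # two mismatched positions

print(reset(memory, good, v=0.9))   # False: pattern accepted
print(reset(memory, bad,  v=0.9))   # True: mismatch forces a reset
```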
At equilibrium, in either training or operational mode, only the winning F2 node has yJ ≠ 0, so Σj Tji f(yj) = TJi in (2). Hence xi,k = 0 if TJi = −Ii,k, i.e., if memory and input mismatch; |xi,k| = 2 if TJi = Ii,k, i.e., when memory and input match; and |xi,k| = 1 if TJi = 0, Ii,k ≠ 0 or if TJi ≠ 0, Ii,k = 0. The F1 output function h in (3) is defined so that h(x) = 1 if x > 1, h(x) = −1 if x < −1, and h(x) = 0 if −1 ≤ x ≤ 1. The output pattern (h(x1), ..., h(xn)) therefore reflects IJ ∩ Ik, as h(xi) ≠ 0 only if TJi = Ii,k. The adaptive filters (Buj) and (Bmj) store normalized versions of those patterns on F1,u and F1,m which have stimulated the jth F2 node. The evolution of Bij, for i ∈ F1,u or F1,m, is driven by eq. (6). On the other hand, (Bw) stores a normalized version of the experiment w which labels the entire column-assembly, eq. (7), where w ∈ F1,w; thus all nodes in a column-assembly share a common memory of w. The feedback filters (Tuj), (Tmj) and (Tw) store exact memories of the patterns on the partitions of F1: eq. (8) for i ∈ F1,u, F1,m, and eq. (9) for i ∈ F1,w. In operation, long-term memory modification is suspended.

EXPERIMENT

Here we report partial results from an experiment in which TIN-2 learns a state characterization table for an automaton that recognizes strings containing even numbers of
Although the strings that can be processed by an automaton of this type are in principle arbitrarily long, in practice some limitation on the length of training strings is necessary if for no other reason than that the memory capacity of a computer is finite. For this simple example Input and F I partitions contain eight nodes, but in order to have a special symbol to represent A.. strings are limited to at most six elements. With this restriction the A. symbol can be distinguished from actual input strings through vigilance criteria. Other solutions to the problem of representing A. are being investigated, but for now the special eight bit symbol, 00000011, is used to represent A. in the strings A.-yt. The net was trained using fast-learning (Carpenter and Grossberg, 1987): a triple in Table 1 was presented to the net. and all nodes were allowed to come to their equilibrium values where they were held for about three long-term time units before the next triple was presented. Consider the processing that follows presentation of (0, 1,0) the first datum in Table 1. The net can obtain equivalents to two C table entries from (0, 1,0): the entry in row 1l = 10, column Yi. = A. and the entry in row II =1, column w = O. The string 10 and the membership value 0 were displayed on the A. assembly's input slabs, and in this case the 3rd F2 node learned the association among the two patterns. When the pattern (0, 1, 0) was input to other column-assemblies, one F2 node (in this case the 9th in column-assembly 1) learned to associate elements of the triple. Of course a side effect of this was that column-assembly 1 was labelled by Yi.. = 0 thereafter. When (1. 1, 1) was input next, node 9 in column-assembly 1 tried to respond to the new triple, all nodes in column-assembly 1 were then inhibited by a mismatch on Yi.., and finally node 1 on column-assembly 2 learned (1, 1, 1). From that point on column-assembly 2 was labelled by 1. LEARNING TRANSITIONS The TIN-I net (Winter. 
Ryan and Turner, 1987) is composed of i) a partitioned adaptive resonance circuit with dynamics similar to (2) - (9) for learning state transitions and ii) a Control Circuit which forces transitions once they have been learned. Transitions are unique in the sense that a previous state and current input completely determine the current state. The partitioned adaptive resonance circuit has three input fields: one for the previous state, one for the current input and one for the next state. TIN-1's F2 nodes learn transitions by associating patterns in the three input fields. Once trained, TIN-1 processes strings sequentially, bit-by-bit.

Figure 1. Training TIN2.

The architecture of TIN2, the net that integrates TIN-2 and TIN-1, is shown in Figure 1. The system resorts to the TIN-2 nets only to learn transitions. If TIN-2 has learned a C table in which examples of all transitions appear, TIN-1 can easily learn the automaton's state transitions. A C table contains an example of a transition from state s_i to state s_j forced by current input u if it contains i) a row labelled by a string u_i which leaves the automaton in s_i after processing and ii) a row labelled by the string u_i·u which leaves the automaton in s_j. To teach TIN-1 the transition we simply present u_i to the lower TIN-2 in Figure 1, u_i·u to the upper TIN-2 net, and u to TIN-1.

CONCLUSIONS We have described a network, TIN-2, which learns the equivalent of state characterization tables (Gold, 1972).
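The target automaton, the C-table entries derivable from a teacher triple, and the extraction of transition examples from C-table rows can be sketched in a few lines. The reading of the triples here is a hypothesis reconstructed from the worked example in the text (the datum (0, 1, 0) yielding entries for row 10, column λ and row 1, column 0); the function names are illustrative:

```python
def accepts(s):
    """The target automaton: strings with even numbers of both 1's and 0's."""
    return s.count("0") % 2 == 0 and s.count("1") % 2 == 0

def c_table_entries(w, u, y):
    """C-table entries implied by one teacher triple (w, u, y).

    Hypothetical reading, consistent with the worked example: y is the
    membership value of the concatenation u + w, so the datum yields the
    entry in row u + w, column lambda (''), and the entry in row u, column w.
    """
    return {(u + w, ""): y, (u, w): y}

def transition_examples(rows, alphabet=("0", "1")):
    """Pairs of C-table rows exhibiting one state transition.

    For every row label u_i and single input u, if u_i + u is also a row
    label, (u_i, u, u_i + u) is a training example: u_i goes to the lower
    TIN-2, u_i + u to the upper TIN-2, and u to TIN-1.
    """
    rows = set(rows)
    return [(ui, u, ui + u) for ui in rows for u in alphabet if ui + u in rows]

# The first datum of Table 1, (0, 1, 0): string '10' is rejected.
print(c_table_entries("0", "1", 0))  # {('10', ''): 0, ('1', '0'): 0}
```

Under this reading every extracted entry is consistent with the automaton: the membership value stored at (row, column) equals `accepts(row + column)`.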
The principal reasons for developing a neural net implementation are i) neural nets are intrinsically massively parallel and so provide a nice model for systems that must process large data sets, ii) although in the interests of brevity we have not stressed the point, neural nets are robust against noisy data, iii) neural nets like the partitioned adaptive resonance circuit have continuous-time activity dynamics and so can be synchronized with other elements of a larger real-time system through simple scaling parameters, and iv) the continuous-time dynamics and precise architectural specifications of neural nets provide a blueprint for hardware implementations. We have also sketched a neural net, TIN2, that learns state transitions by integrating TIN-2 nets with the TIN-1 net (Winter, Ryan and Turner, 1987). When a complete state characterization table is available from TIN-2, TIN2 can be taught transitions from examples of system behavior. However, the ultimate goal of a net like this lies in developing a system that "operates acceptably" with a partial state characterization table. To operate acceptably TIN2 must perform transitions correctly when it can, recognize when it cannot, signal for new data when it is required, and expand the state characterization table when it must. Happily, TIN2 already provides the first two capabilities, and combinations of TIN2 with rule-based controllers and with auxiliary control networks are currently being explored as approaches to satisfying the latter (Winter, 1988b). Nets like TIN2 may eventually prove useful as control elements in physical machines because sequential automata can respond to unpredictable environments with a wide range of behavior. Even very simple automata can repeat activities and make decisions based upon environmental variations. Currently, most physical machines that make decisions are dedicated to a single task; applying one to a new task requires re-programming by a skilled technician.
A programmer must, furthermore, determine a priori precisely which machine state-environment associations are significant enough to warrant insertion in the control structure of a given machine. TIN2, on the other hand, is trained, not programmed, and can abstract significant associations from noisy input. It is a "blank slate" that learns the structure of a particular sequential machine from examples.

References
D. Angluin, "Learning Regular Sets from Queries and Counterexamples", Information and Computation, 75 (2), 1987.
M. A. Arbib and E. G. Manes, "Machines in a Category: an Expository Introduction", SIAM Review, 16 (2), 1974.
M. A. Arbib and H. P. Zeiger, "On the Relevance of Abstract Algebra to Control Theory", Automatica, 5, 1969.
G. Carpenter and S. Grossberg, "A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine", Comput. Vision Graphics Image Process., 37, 54-115, 1987.
E. M. Gold, "System Identification Via State Characterization", Automatica, 8, 1972.
E. M. Gold, "Complexity of Automaton Identification from Given Data", Info. and Control, 37, 1978.
A. Nerode, "Linear Automaton Transformations", Proc. Am. Math. Soc., 9, 1958.
T. W. Ryan and C. L. Winter, "Variations on Adaptive Resonance", in Proc. 1st Intl. Conf. on Neural Networks, IEEE, 1987.
T. W. Ryan, C. L. Winter and C. J. Turner, "Dynamic Control of an Artificial Neural System: the Property Inheritance Network", Appl. Optics, 26 (23), 1987.
V. V. Tolat and B. Widrow, "An Adaptive Neural Net Controller with Visual Inputs", Neural Networks, 1, Suppl. 1, 1988.
C. L. Winter, T. W. Ryan and C. J. Turner, "TIN: A Trainable Inference Network", in Proc. 1st Intl. Conf. on Neural Networks, 1987.
C. L. Winter, "An Adaptive Network that Flees Pursuit", Neural Networks, 1, Suppl. 1, 1988a.
C. L. Winter, "TIN2: An Adaptive Controller", SAIC Tech. Rpt., SAIC, 5151 E. Broadway, Tucson, AZ, 85711, 1988b.
Part V Implementation
|
1988
|
39
|
123
|
116 THE BOLTZMANN PERCEPTRON NETWORK: A MULTI-LAYERED FEED-FORWARD NETWORK EQUIVALENT TO THE BOLTZMANN MACHINE Eyal Yair and Allen Gersho Center for Information Processing Research Department of Electrical & Computer Engineering University of California, Santa Barbara, CA 93106

ABSTRACT The concept of the stochastic Boltzmann machine (BM) is attractive for decision making and pattern classification purposes since the probability of attaining the network states is a function of the network energy. Hence, the probability of attaining particular energy minima may be associated with the probabilities of making certain decisions (or classifications). However, because of its stochastic nature, the complexity of the BM is fairly high and therefore such networks are not very likely to be used in practice. In this paper we suggest a way to alleviate this drawback by converting the stochastic BM into a deterministic network which we call the Boltzmann Perceptron Network (BPN). The BPN is functionally equivalent to the BM but has a feed-forward structure and low complexity. No annealing is required. The conditions under which such a conversion is feasible are given. A learning algorithm for the BPN based on the conjugate gradient method is also provided which is somewhat akin to the backpropagation algorithm.

INTRODUCTION In decision-making applications, it is desirable to have a network which computes the probabilities of deciding upon each of M possible propositions for any given input pattern. In principle, the Boltzmann machine (BM) (Hinton, Sejnowski and Ackley, 1984) can provide such a capability. The network is composed of a set of binary units connected through symmetric connection links. The units are randomly and asynchronously changing their values in {0,1} according to a stochastic transition rule. The transition rule used by Hinton et al.
defines the probability of a unit to be in the 'on' state as the logistic function of the energy change resulting from changing the value of that unit. The BM can be described by an ergodic Markov chain in which the thermal equilibrium probability of attaining each state obeys the Boltzmann distribution, which is a function of only the energy. By associating the set of possible propositions with subsets of network states, the probability of deciding upon each of these propositions can be measured by the probability of attaining the corresponding set of states. This probability is also affected by the temperature. (This work was supported by the Weizmann Foundation for scientific research, by the University of California MICRO program, and by Bell Communications Research, Inc.) As the temperature increases, the Boltzmann probability distribution becomes more uniform and thus the decision made is 'vague'. The lower the temperature, the greater is the probability of attaining states with lower energy, thereby leading to more 'distinctive' decisions. This approach, while very attractive in principle, has two major drawbacks which make the complexity of the computations become non-feasible for nontrivial problems. The first is the need for thermal equilibrium in order to obtain the Boltzmann distribution. To make distinctive decisions a low temperature is required. This implies slower convergence towards thermal equilibrium. Generally, the method used to reach thermal equilibrium is simulated annealing (SA) (Kirkpatrick et al., 1983), in which the temperature starts from a high value and is gradually reduced to the desired final value. In order to avoid 'freezing' of the network, the cooling schedule should be fairly slow. SA is thus time consuming and computationally expensive. The second drawback is due to the stochastic nature of the computation.
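The stochastic transition rule described above (each unit turns on with the logistic probability of its energy change; it is written out as eq. (2) later in the paper) can be sketched as a single asynchronous Gibbs update. The energy form assumed here is the quadratic one the paper uses, E = -(1/2) u'Γu - b'u; the weight values in the demo are illustrative:

```python
import math
import random

def gibbs_step(state, Gamma, b, beta, k):
    """One asynchronous update of unit k under the logistic transition rule.

    For the quadratic energy E = -(1/2) u'Gamma u - b'u with a symmetric,
    zero-diagonal Gamma, the energy gap Delta_k = E(unit k off) - E(unit k on)
    reduces to the net input sum_j Gamma[k][j]*state[j] + b[k].
    """
    delta = sum(Gamma[k][j] * state[j] for j in range(len(state))) + b[k]
    p_on = 1.0 / (1.0 + math.exp(-beta * delta))   # logistic of the gap
    state[k] = 1 if random.random() < p_on else 0

# With strong positive coupling and high gain, the unit deterministically
# switches on when its neighbour is on:
s = [0, 1]
gibbs_step(s, [[0.0, 4.0], [4.0, 0.0]], [0.0, 0.0], beta=50.0, k=0)
print(s)  # [1, 1]
```

Iterating such updates over randomly chosen units is exactly the Monte Carlo procedure whose equilibrium statistics the BPN construction below replaces with a closed-form computation.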
Since the network state is a random vector, the desired probabilities have to be estimated by accumulating statistics of the network behavior for only a finite period of time. Hence, a trade-off between speed and accuracy is unavoidable. In this paper, we propose a mechanism to alleviate the above computational drawbacks by converting the stochastic BM into a functionally equivalent deterministic network, which we call the Boltzmann Perceptron Network (BPN). The BPN circumvents the need for a Monte Carlo type of computation and instead evaluates the desired probabilities using a multilayer perceptron-like network. The very time consuming learning process for the BM is similarly replaced by a deterministic learning scheme, somewhat akin to the backpropagation algorithm, which is computationally affordable. The similarity between the learning algorithm of a BM having a layered structure and that of a two-layer perceptron has been recently pointed out by Hopfield (1987). In this paper we further elaborate on such an equivalence between the BM and the new perceptron-like network, and give the conditions under which the conversion of the stochastic BM into the deterministic BPN is possible. Unlike the original BM, the BPN is virtually always in thermal equilibrium and thus SA is no longer required. Nevertheless, the temperature still plays the same role, and thus varying it may be beneficial to control the 'softness' of the decisions made by the BPN. Using the BPN as a soft classifier is described in detail in (Yair and Gersho, 1989).

THE BOLTZMANN PERCEPTRON NETWORK Suppose we have a network of K units connected through symmetric connection links with no self-feedback, so that the connection matrix Γ is symmetric and zero-diagonal. Let us categorize the units into three different types: input, output and hidden units. The input pattern will be supplied to the network by clamping the input units, denoted by x = (x_1, ..., x_i, ..., x_I)^T, with this pattern.
x is a real-valued vector in R^I. The output of the network will be observed on the output units y = (y_1, ..., y_m, ..., y_M)^T, which is a binary vector. The remaining units, denoted v = (v_1, ..., v_j, ..., v_J)^T, are the hidden units, which are also binary-valued. The hidden and output units asynchronously and randomly change their binary values in {0,1} according to inputs they receive from other units. The state of the network will be denoted by the vector u, which is partitioned as follows: u^T = (x^T, v^T, y^T). The energy associated with state u is denoted by E_u and is given by: E_u = -(1/2) u^T Γ u - b^T u, (1) where b is a vector of bias values, partitioned to comply with the partition of u as b^T = (b_x^T, b_v^T, b_y^T). The transition from one state to another is performed by selecting one unit at a time, say unit k, at random, and determining its output value according to the following stochastic rule: set the output of the unit to 1 with probability p_k, and to 0 with probability 1 - p_k. The parameter p_k is determined locally by the k-th unit as a function of the energy change Δ_k in the following fashion: p_k = g(Δ_k), g(x) = 1 / (1 + e^{-βx}), (2) where Δ_k = E_u(unit k is off) - E_u(unit k is on), and β = 1/T is a control parameter. T is called the temperature and g(·) is the logistic function. With this transition rule the thermal equilibrium probability P_u of attaining a state u obeys the Boltzmann distribution: P_u = (1/Z_x) e^{-βE_u}, (3) where Z_x, called the partition function, is a normalization factor (independent of v and y) such that the sum of P_u over all the 2^{J+M} possible states will sum to unity. In order to use the network in a deterministic fashion rather than accumulate statistics while observing its random behavior, we should be able to explicitly compute the probability of attaining a certain vector y on the output units while x is clamped on the input units. This probability,
denoted by P_y|x, can be expressed as: P_y|x = Σ_{v∈B_J} P_{v,y|x} = (1/Z_x) Σ_{v∈B_J} e^{-βE_{v,y|x}}, (4) where B_J is the set of all binary vectors of length J, and v,y|x denotes a state u in which a specific input vector x is clamped. The explicit evaluation of the desired probabilities therefore involves the computation of the partition function, for which the number of operations grows exponentially with the number of units. That is, the complexity is O(2^{J+M}). Obviously, this is computationally unacceptable. Nevertheless, we shall see that under a certain restriction on the connection matrix Γ the explicit computation of the desired probabilities becomes possible with a complexity of O(JM), which is computationally feasible. Let us assume that for each input pattern we have M possible propositions which are associated with the M output vectors I_M = (i_1, ..., i_m, ..., i_M), where i_m is the m-th column of the M×M identity matrix. Any state of the network having output vector y = i_m (for any m) will be denoted by v,m|x and will be called a feasible state. All other state vectors v,y|x for y ≠ i_m will be considered as intermediate steps between two feasible states. This redefinition of the network states is equivalent to redefining the states of the underlying Markov model of the network and thus conserves the equilibrium Boltzmann distribution. The probability of proposition m for a given input x, denoted by P_m|x, will be taken as the probability of obtaining output vector y = i_m given that the output is one of the feasible values. That is, P_m|x = Pr(y = i_m | x, y ∈ I_M), (5) which can be computed from (4) by restricting the state space to the 2^J M feasible state vectors and by setting y = i_m. The partition function, conditioned on restricting y to lie in the set of feasible outputs I_M,
is denoted by Z̃_x and is given by: Z̃_x = Σ_{m=1}^{M} Σ_{v∈B_J} e^{-βE_{v,m|x}}. (6) Let us now partition the connection matrix Γ to comply with the partition of the state vector and rewrite the energy for the feasible state v,m|x as: -E_{v,m|x} = v^T(Rx + Q i_m + (1/2)D_2 v + c) + i_m^T(Wx + (1/2)D_3 i_m + s) + x^T((1/2)D_1 x + b_x). (7) Since x is clamped on the input units, the last term in the energy expression serves only as a bias term for the energy which is independent of the binary units v and y. Therefore, without any loss of generality it may be assumed that D_1 = 0 and b_x = 0. The second term, denoted θ_m|x, can be simplified since D_3 has a zero diagonal. Hence, θ_m|x = Σ_{i=1}^{I} w_mi x_i + s_m. (8) The absence of the matrix D_3 in the energy expression means that interconnections between output units have no effect on the probability of attaining output vectors y ∈ I_M, and may be assumed absent without any loss of generality. Defining L_m(x) to be: L_m(x) = θ_m|x + (1/β) ln Σ_{v∈B_J} e^{β v^T(Rx + q_m + (1/2)D_2 v + c)}, (9) in which q_m is the m-th column of Q, the desired probabilities P_m|x, for m = 1, ..., M, are obtained using (4) and (7) as a function of these quantities as follows: P_m|x = (1/Z̃_x) e^{βL_m(x)}, with Z̃_x = Σ_{m=1}^{M} e^{βL_m(x)}. (10) The complexity of evaluating the desired probabilities P_m|x is still exponential in the number of hidden units J due to the sum in (9). However, if we impose the restriction that D_2 = 0, namely, that the hidden units are not directly interconnected, then this sum becomes separable in the components v_j and thus can be decomposed into the product of only J terms. This restricted connectivity of course imposes some restrictions on the capability of the network compared to that of a fully connected network. On the other hand, it allows the computation of the desired probabilities in a deterministic way with the attractive complexity of only O(JM) operations. The tedious estimation of conditional expectations commonly required by the learning algorithm for a BM and the annealing procedure are avoided,
and an accurate and computationally affordable scheme becomes possible. We thus suggest a trade-off in which the operation and learning of the BM are tremendously simplified and the exact decision probabilities are computed (rather than their statistical estimates) at the expense of a restricted connectivity, namely, no interconnections are allowed between the hidden units. Hence, in our scheme, the connection matrix Γ becomes zero block-diagonal, meaning that the network has connections only between units of different categories. This structure is shown schematically in Figure 1.

Figure 1. Schematic architecture of the stochastic BM. Figure 2. Block diagram of the corresponding deterministic BPN.

By applying the property D_2 = 0 to (9), the sum over the space of hidden units, which can be explicitly written as the sum over all the J components of v, can be decomposed using the separability of the different v_j components into a sum of only J terms as follows: L_m(x) = θ_m|x + Σ_{j=1}^{J} f(V_jm|x), (11a) where V_jm|x = Σ_{i=1}^{I} r_ji x_i + q_jm + c_j and f(x) = (1/β) ln(1 + e^{βx}). (11b) f(·) is called the activation function. Note that as β is increased, f(·) approaches the linear threshold function, in which a zero response is obtained for a negative input and a linear one for a positive input. Finally, the desired probabilities P_m|x can be expressed as a function of the L_m(x) (m = 1, ..., M) in an expression which can be regarded as the generalization of the logistic function to M inputs: P_m|x = [1 + Σ_{n≠m} e^{-βL_mn(x)}]^{-1}, where L_mn(x) = L_m(x) - L_n(x). (12) Eqs. (8) and (11) describe a two-layer feed-forward perceptron-like subnetwork which uses the nonlinearity f(·). It evaluates the quantity L_m(x), which we call the score of the m-th proposition. Eq. (12) describes a competition between the scores L_m(x) generated by the M subnetworks (m = 1, ..., M), which we call a soft competition with lateral inhibition. That is,
if several scores receive relatively high values compared to the others, they will share, according to their relative strengths, the total amount (unity) of probability, while inhibiting the remaining probabilities to approach zero. For example, if one of the scores, say L_1(x), is large compared to all the other scores, then the exponentiation of the pairwise score differences will result in P_1|x ≈ 1 while the remaining probabilities will approach zero. Specifically, for any n ≠ 1, P_n|x ≈ exp(-βL_1n(x)), which is essentially zero if L_1(x) is sufficiently high. In other words, by being large compared to the others, L_1(x) won the competition, so that the corresponding probability P_1|x approaches unity while all the remaining probabilities have been attenuated by the high value of L_1(x) to approach zero. Let us examine the effect of the gain β on this competition. When β is increased, the slope of the activation function f(·) is increased, thereby accentuating the differences between the M contenders. In the limit when β→∞, one of the L_m(x) will always be sufficiently large compared to the others, and thus only one proposition will win. The competition then becomes a winner-take-all competition. In this case, the network becomes a maximum a posteriori (MAP) decision scheme in which the L_m(x) play the role of nonlinear discriminant functions and the most probable proposition for the given input pattern is chosen: P_k|x = 1 for k = argmax_m {L_m(x)} and P_n|x = 0 for n ≠ k. (13) This result coincides with our earlier observation that the temperature controls the 'softness' of the decision. The lower the temperature, the 'harder' is the competition and the more distinctive are the decisions. However, in contrast to the stochastic network, there is no need to gradually 'cool' the network to achieve a desired (low) temperature. Any desired value of β is directly applicable in the BPN scheme.
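The score computation of (8) and (11) and the soft competition of (12) can be sketched directly. Weight names follow the text (W, s for the output path; R, Q, c for the hidden path), but the code is an illustrative reimplementation, and the max-subtraction inside `probs` is a standard numerical-stability trick not mentioned in the paper:

```python
import math

def scores(x, W, s, R, Q, c, beta):
    """L_m(x) of eqs. (8) and (11): theta_m|x plus a softplus sum over
    the hidden units, f(v) = (1/beta) ln(1 + exp(beta*v))."""
    L = []
    for m in range(len(s)):
        theta = sum(wi * xi for wi, xi in zip(W[m], x)) + s[m]
        V = [sum(rj * xi for rj, xi in zip(R[j], x)) + Q[j][m] + c[j]
             for j in range(len(c))]
        L.append(theta + sum(math.log1p(math.exp(beta * v)) for v in V) / beta)
    return L

def probs(x, W, s, R, Q, c, beta):
    """Soft competition of eq. (12); equals exp(beta*L_m) / sum_n exp(beta*L_n)."""
    L = scores(x, W, s, R, Q, c, beta)
    mx = max(L)                                   # subtract max for stability
    e = [math.exp(beta * (Lm - mx)) for Lm in L]
    Z = sum(e)
    return [em / Z for em in e]
```

Because the softplus sum is just ln Π_j (1 + e^{βV_jm}), these O(JM) probabilities agree exactly with the brute-force O(2^J M) sum over hidden states, and raising β drives the output toward the winner-take-all limit of (13).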
The above notion of soft competition has its merits in a wider scope of applications apart from its role in the BPN classifier. In many competitive schemes a soft competition between a set of contenders has a substantial benefit in comparison to the winner-take-all paradigm. The above competition scheme, which can be implemented by a two-layer feed-forward network, thus offers a valuable scheme for such purposes. The block diagram of the BPN is depicted in Figure 2. The BPN is thus a four-layer feed-forward deterministic network. It is comprised of a two-layer perceptron-like network followed by a two-layer competition network. The competition can be 'hard' (winner-take-all) or 'soft' (graded decision) and is governed by a single gain parameter β.

THE LEARNING ALGORITHM Let us denote the BPN output by the M-dimensional probability vector P_x, where P_x = (P_1|x, ..., P_m|x, ..., P_M|x)^T. For any given set of weights Θ, the BPN realizes some deterministic mapping Ψ: R^I → [0,1]^M so that P_x = Ψ(x). The objective of learning is to determine the mapping Ψ (by estimating the set of parameters Θ) which 'best' explains a finite set of examples given from the desired mapping, called the training set. The training set is specified by a set of N patterns (x_1, ..., x_n, ..., x_N) (denoted for simplicity by {x}), the a priori probability for each training pattern x, Q(x), and the desired mapping for each pattern x: Q_x = (Q_1|x, ..., Q_m|x, ..., Q_M|x)^T, where Q_m|x = Pr(proposition m | x) is the desired probability. For each input pattern presented to the BPN, the actual output probability vector P_x is, in general, different from the desired one, Q_x. We denote by G_x the distortion between the actual P_x and the desired Q_x for an input pattern x. Thus, our task is to determine the network weights (and the bias values) Θ so that, on the average, the distortion over the whole set of training patterns will be minimized.
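The per-pattern distortion G_x used in the text (written out as eq. (14) below) is the relative entropy between the desired and actual output probability vectors. A minimal sketch, with illustrative probability values:

```python
import math

def distortion(Q, P):
    """Per-pattern distortion G_x = sum_m Q_m ln(Q_m / P_m).

    This is the relative entropy between the desired vector Q and the
    actual BPN output P: non-negative, and zero exactly when they agree.
    """
    return sum(q * math.log(q / p) for q, p in zip(Q, P) if q > 0.0)

print(distortion([0.3, 0.7], [0.3, 0.7]))       # 0.0
print(distortion([0.3, 0.7], [0.5, 0.5]) > 0)   # True
```

The `q > 0.0` guard handles desired probabilities that are exactly zero (the corresponding summand vanishes in the limit); whether the paper's training targets ever contain exact zeros is not stated.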
Adopting the original distortion measure suggested for Boltzmann machines, the per-pattern distortion is: G_x(Θ) = Σ_{m=1}^{M} Q_m|x ln[Q_m|x / P_m|x(Θ)], (14) which is always non-negative since P_x and Q_x are probability vectors. To minimize the average distortion G(Θ) a gradient-based minimization search is used. Specifically, a Partial Conjugate Gradient (PCG) search (Fletcher and Reeves, 1964; Luenberger, 1984) was found to be significantly more efficient than the ordinary steepest descent approach which is so widely used in multilayer perceptrons. A further discussion supporting this finding is given in (Yair and Gersho, 1989). For each set of weights we thus have to be able to compute the gradient g = ∇_Θ G of the cost function G(Θ). Let us denote the components of the 'instantaneous' gradient by G^s_m|x = ∂G_x/∂s_m, G^q_jm|x = ∂G_x/∂q_jm, G^c_j|x = ∂G_x/∂c_j, G^w_mi|x = ∂G_x/∂w_mi, and G^r_ji|x = ∂G_x/∂r_ji. To get the full gradient, the instantaneous components should be accumulated while the input patterns are presented (one at a time) to the network, until one full cycle through the whole training set is completed. It is straightforward to show that the gradient may be evaluated in a recursive manner, in a fashion somewhat similar to the evaluation of the gradient by the backpropagation algorithm used for feed-forward networks (Rumelhart et al., 1986). The evaluation of the gradient is accomplished by propagating the errors e_m|x = Q_m|x - P_m|x through a linear network, termed the Error Propagation Network (EPN), as follows: G^w_mi|x = x_i G^s_m|x, G^c_j|x = Σ_{m=1}^{M} G^q_jm|x, G^r_ji|x = x_i G^c_j|x. (15) The only new variables required are the b_jm|x, given by b_jm|x = g(V_jm|x), which can be easily obtained by applying the logistic nonlinearity to V_jm|x. The above error propagation scheme can also be written in matrix form. Let us define the following notation: e_x = (e_1|x, ..., e_m|x, ..., e_M|x)^T; E_x will be a diagonal M×M matrix whose diagonal is e_x; B_x = [b_jm|x], a J×M matrix.
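The error propagation of (15) can be sketched together with the forward pass it needs. This is an illustrative reimplementation, not the authors' code; parameter names follow the text, and the test below confirms the gradient components against finite differences of the distortion (14):

```python
import math

def forward(x, W, s, R, Q, c, beta):
    """Forward pass of eqs. (8), (11), (12): returns P_m|x and the
    activations b_jm|x = g(V_jm|x) needed by the error-propagation network."""
    J, M, I = len(c), len(s), len(x)
    V = [[sum(R[j][i] * x[i] for i in range(I)) + Q[j][m] + c[j]
          for m in range(M)] for j in range(J)]
    L = [sum(W[m][i] * x[i] for i in range(I)) + s[m]
         + sum(math.log1p(math.exp(beta * V[j][m])) for j in range(J)) / beta
         for m in range(M)]
    mx = max(L)
    e = [math.exp(beta * (Lm - mx)) for Lm in L]
    Z = sum(e)
    P = [em / Z for em in e]
    B = [[1.0 / (1.0 + math.exp(-beta * V[j][m])) for m in range(M)]
         for j in range(J)]
    return P, B

def epn_gradient(x, P, B, Qdes, beta):
    """Instantaneous gradient of G_x = sum_m Q_m ln(Q_m/P_m), per (15)."""
    err = [q - p for q, p in zip(Qdes, P)]        # e_m|x = Q_m|x - P_m|x
    Gs = [-beta * em for em in err]               # dG/ds_m
    Gw = [[gm * xi for xi in x] for gm in Gs]     # dG/dw_mi = x_i * dG/ds_m
    Gq = [[-beta * B[j][m] * err[m] for m in range(len(err))]
          for j in range(len(B))]                 # dG/dq_jm
    Gc = [sum(row) for row in Gq]                 # dG/dc_j = sum_m dG/dq_jm
    Gr = [[gc * xi for xi in x] for gc in Gc]     # dG/dr_ji = x_i * dG/dc_j
    return Gs, Gw, Gq, Gc, Gr
```

Accumulating these instantaneous components over one pass through the training set gives the full gradient used by the PCG update.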
Let 1_M denote a column vector of length M whose components are all 1's. Similarly we define the vectors G^s|x = (..., G^s_m|x, ...)^T and the matrices G^λ|x = [G^λ_ξη|x] with the appropriate dimensions. Hence, the error propagation can be written as: G^s|x = -β e_x, G^w|x = G^s|x x^T, G^q|x = -β B_x E_x, G^c|x = G^q|x 1_M, G^r|x = G^c|x x^T. (16) The EPN is depicted in Figure 3. This is a linear system composed of inner and outer products between matrices, which can be efficiently implemented using a neural network. The gradient g is used in the PCG update formula, in which a new set of weights is created and used for the next update iteration. The learning scheme of the BPN is given in Figure 4.

Figure 3. The Error Propagation Network (EPN). 'Diag' is a diagonalization operator; 'right' and 'left' are right and left multipliers, respectively. Figure 4. The learning scheme. The BPN outputs, P_x, are compared with the desired probabilities, Q_x. The resulting errors, e_x, propagate through the EPN to form the gradient g, which is used in the PCG algorithm to create the new weights.

SIMULATION RESULTS We now present several simulation results of two-class classification problems with Gaussian sources. That is, we have two propositions represented by class 0 and class 1. Suppose there are L random sources (i = 1, ..., L) over the input space, some of them attributed to class 0 and the others to class 1. Each time, a source is chosen according to an a priori probability P(i). When chosen, the i-th source then emits a pattern x according to a probability density Q_x|i(x). Measuring a pattern x, it is desired to decide upon the most probable origin class, in the binary decision problem (MAP classifier), or to obtain some estimate of Q_1|x, the probability that class 1 emitted this pattern, in the soft classification problem. In the learning phase, a training set of size N was presented to the network,
and the weights were iteratively modified by the learning scheme (Figure 4) until convergence. The final weights were used to construct a BPN classifier, which was then tested on new input patterns. The output classification probability of the BPN, P_1|x(x), was compared with the true (Bayesian) conditional probability, Q_1|x(x), which was computed analytically. Results are shown in Figures 5-7. In Figure 5, two symmetric equi-probable Gaussian sources with substantial overlap were used, one for each class. The network was trained on N = 8 patterns with gain β = 1. Figure 5b shows how the BPN performs for the problem given in Figure 5a. For β = 1, i.e., when the same gain is used in both the training and classification phases, there is an almost perfect match between the BPN output, P_1|x(x), denoted in the figures by 'β = 1', and the true curve, Q_1|x(x). For β = 10, the high-gain winner-take-all competition takes place and the classifier becomes, practically, a binary (yes/no) decision network. In Figure 6, disconnected decision regions were formed by using four sources, two of which were attributed to each class. Again, a nearly perfect match can be seen between the actual (β = 1) and desired (Q_1|x) outputs. Also, the simplicity of making 'hard' classification decisions by increasing the gain is again illustrated (β = 10). In Figure 7, the classifier was required to find the boundary (expressed by Q_1|x = 0.5) between two 2D classes.
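The analytically computed curve Q_1|x that the BPN output is compared against follows from Bayes' rule on the Gaussian sources. A minimal one-source-per-class sketch; the means, variance and priors here are illustrative stand-ins, since the exact source parameters are only given in the figures:

```python
import math

def bayes_posterior(x, mu0, mu1, sigma=1.0, p1=0.5):
    """True class-1 posterior Q_1|x for two Gaussian sources.

    With equal variances the normalisation constants cancel, so only the
    exponentials are needed.  mu0, mu1, sigma and p1 are assumptions.
    """
    g = lambda mu: math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    num = p1 * g(mu1)
    return num / (num + (1.0 - p1) * g(mu0))

# Symmetric equi-probable sources: the posterior is exactly 0.5 midway,
# as in the crossing point of the curves in Figure 5.
print(bayes_posterior(0.0, -1.0, 1.0))  # 0.5
```

Sweeping x through such a function produces the smooth sigmoidal Q_1|x curve of Figure 5; the mixture case of Figure 6 just replaces each class likelihood with a sum over that class's sources.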
Figure 5: Classification for Gaussian sources. (5.a) The two sources. (5.b) 'Soft' (β = 1) and 'hard' (β = 10) classifications versus Q_1|x. J indicates the number of hidden units used. Figure 6: Classification for disconnected decision regions. (6.a) The sources used: dashed lines indicate class 0 and solid lines class 1. (6.b) Soft (β = 1) and hard (β = 10) classifications versus Q_1|x. Figure 7: Classification in a 2D space. (7.a) The two classes and the true boundary indicated by Q_1|x = 0.5. (7.b) The boundary found by the BPN, marked by P_1|x = 0.5, versus the true one.

References
Fletcher, R., & Reeves, C.M. (1964). Function minimization by conjugate gradients. Comput. J., 7, 149-154.
Hinton, G.E., Sejnowski, T.R., & Ackley, D.H. (1984). Boltzmann machines: constraint satisfaction networks that learn. Carnegie-Mellon Technical Report, CMU-CS-84-119.
Hopfield, J.J. (1987). Learning algorithms and probability distributions in feed-forward and feed-back networks. Proc. Natl. Acad. Sci. USA, 84, 8429-8433.
Kirkpatrick, S., Gelatt, C.D., & Vecchi, M.P. (1983). Optimization by simulated annealing. Science, 220, 671-680.
Luenberger, D.G. (1984). Linear and Nonlinear Programming, Addison-Wesley, Reading, Mass.
Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986). Learning internal representations by error propagation. In D.E. Rumelhart & J.L. McClelland (Eds.), Parallel Distributed Processing, MIT Press/Bradford Books.
Yair, E., & Gersho, A. (1989). The Boltzmann perceptron network: a soft classifier. Submitted to the Journal of Neural Networks, December, 1988.
|
1988
|
4
|
124
|
568 DYNAMICS OF ANALOG NEURAL NETWORKS WITH TIME DELAY C.M. Marcus and R.M. Westervelt Division of Applied Sciences and Department of Physics Harvard University, Cambridge, Massachusetts 02138

ABSTRACT A time delay in the response of the neurons in a network can induce sustained oscillation and chaos. We present a stability criterion based on local stability analysis to prevent sustained oscillation in symmetric delay networks, and show an example of chaotic dynamics in a non-symmetric delay network.

I. INTRODUCTION Understanding how time delay affects the dynamics of neural networks is important for two reasons: First, some degree of time delay is intrinsic to any physically realized network, both in biological neural systems and in electronic artificial neural networks. As we will show, it is not obvious what constitutes a "small" (i.e. ignorable) delay which will not qualitatively change the network dynamics. For some network configurations, delay much smaller than the intrinsic relaxation time of the network can induce collective oscillatory behavior not predicted by mathematical models which ignore delay. These oscillations may or may not be desirable; in either case, one should understand when and how new dynamics can appear. The second reason to study time delay is its intentional use in parallel computation. The dynamics of neural networks which always converge to fixed points are now fairly well understood. Several neural network models have appeared recently which use time delay to produce dynamic computation such as associative recall of sequences [Kleinfeld, 1986; Sompolinsky and Kanter, 1986]. It has also been suggested that time delay produces an effective noise in the network dynamics which can yield improved recall of memories [Conwell, 1987]. Finally, to the extent that neural networks research is inspired by biological systems, the known presence of time delays in many real neural systems suggests their usefulness in parallel computation.
In this paper we will show how time delay in an analog neural network can produce sustained oscillation and chaos. In section 2 we consider the case of a symmetrically connected network. It is known [Cohen and Grossberg, 1983; Hopfield, 1984] that in the absence of time delay a symmetric network will always converge to a fixed point attractor. We show that adding a fixed delay to the response of each neuron will produce sustained oscillation when the magnitude of the delay exceeds a critical value, which depends on the neuron gain and the network connection topology. We then analyze the all-inhibitory and symmetric ring topologies as examples. In section 3, we discuss chaotic dynamics in asymmetric neural networks, and give an example of a small (N = 3) network which shows delay-induced chaos. The analytical results presented here are supported by numerical simulations and experiments performed on a small electronic neural network with controllable time delay. A detailed derivation of the stability results for the symmetric network is given in [Marcus and Westervelt, 1989], and the electronic circuit used is described in [Marcus and Westervelt, 1988]. II. STABILITY OF SYMMETRIC NETWORKS WITH DELAY The dynamical system we consider describes an electronic circuit of N saturable amplifiers ("neurons") coupled by a resistive interconnection matrix. The neurons do not respond to an input voltage u_i instantaneously, but produce an output after a delay τ, which we take to be the same for all neurons. The neuron input voltages evolve according to the following equations:

    du_i(t)/dt = -u_i(t) + Σ_{j=1}^{N} J_ij f(u_j(t - τ))        (1)

The transfer function for each neuron is taken to be an identical sigmoidal function f(u) with a maximum slope df/du = β at u = 0.
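The delay system of Eq. (1) is easy to explore numerically. The sketch below is our own illustration, not the authors' circuit: it assumes f(u) = tanh(βu), forward-Euler integration, and a constant initial history on [-τ, 0], and it exercises an all-inhibitory test network (defined later in the text).

```python
import numpy as np

def simulate_delay_network(J, beta, tau, u0, T=200.0, dt=0.01):
    """Forward-Euler integration of Eq. (1):
        du_i/dt = -u_i(t) + sum_j J_ij * f(u_j(t - tau)),
    with f(u) = tanh(beta * u) and constant initial history u(t <= 0) = u0.
    Time is in units of the network relaxation time; returns the trajectory."""
    J = np.asarray(J, dtype=float)
    n_steps = int(round(T / dt))
    d = int(round(tau / dt))              # delay measured in integration steps
    u = np.empty((n_steps + 1, len(u0)))
    u[0] = np.asarray(u0, dtype=float)
    for k in range(n_steps):
        u_delayed = u[max(k - d, 0)]      # constant history for t <= 0
        u[k + 1] = u[k] + dt * (-u[k] + J @ np.tanh(beta * u_delayed))
    return u

# All-inhibitory network, N = 3: J_ij = (N-1)^-1 (delta_ij - 1)
J3 = (np.eye(3) - np.ones((3, 3))) / 2.0
```

With β = 1.5 and τ = 6 the origin is the only fixed point of this network but loses stability through the delay-induced Hopf bifurcation, so the trajectory settles onto a coherent limit cycle; at τ = 0 the same network relaxes to the origin.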
The unit of time in these equations has been scaled to the characteristic network relaxation time, thus τ can be thought of as the ratio of delay time to relaxation time. The symmetric interconnection matrix J_ij, which describes the conductance between neurons i and j, is normalized to satisfy Σ_j |J_ij| = 1 for all i. This normalization assumes that each neuron sees the same conductance at its input [Marcus and Westervelt, 1989]. The initial conditions for this system are a set of N continuous functions defined on the interval -τ ≤ t ≤ 0. We take each initial function to be constant over that interval, though possibly different for different i. We find numerically that the results do not depend on the form of the initial functions. Linear Stability Analysis at Low Gain Studying the stability of the fixed point at the origin (u_i = 0 for all i) is useful for understanding the source of delay-induced sustained oscillation and will lead to a low-gain stability criterion for symmetric networks. It is important to realize, however, that for the system (1) with a sigmoidal nonlinearity, if the origin is stable then it is the unique attractor, which makes for rather uninteresting dynamics. Thus the origin will almost certainly be unstable in any useful configuration. Linear stability analysis about the origin will show that at τ = 0, as the gain β is increased, the origin always loses stability by a type of bifurcation which only produces other fixed points, but for τ > 0 an alternative type of bifurcation of the origin can occur which produces the sustained oscillatory modes. The stability criterion derived ensures that this alternate bifurcation - a Hopf bifurcation - does not occur. The natural coordinate system for the linearized version of (1) is the set of N eigenvectors of the connection matrix J_ij, defined as x_i(t), i = 1, ..., N. In terms of the x_i(t), the linearized system can be written

    dx_i(t)/dt = -x_i(t) + β λ_i x_i(t - τ)        (2)

where β is the neuron gain and λ_i (i = 1, ..., N) are the eigenvalues of J_ij. In general, these eigenvalues have both real and imaginary parts; for J_ij = J_ji the λ_i are purely real. Assuming exponential time evolution of the form x_i(t) = x_i(0) e^{s_i t}, where s_i is a complex characteristic exponent, yields a set of N transcendental characteristic equations: (s_i + 1) e^{s_i τ} = β λ_i. The condition for stability of the origin, Re(s_i) < 0 for all i, and the characteristic equations can be used to specify a stability region in the complex plane of eigenvalues, as illustrated in Fig. 1(a). When all eigenvalues of J_ij are within the stability region, the origin is stable. For τ = 0, the stability region is defined by Re(λ) < 1/β, giving a half-plane stability condition familiar from ordinary differential equations. For τ > 0, we define the border of the stability region Λ(θ) at an angle θ from the Re(λ) axis as the radial distance from the point λ = 0 to the first point (i.e. smallest value of Λ(θ)) which satisfies the characteristic equation for a purely imaginary characteristic exponent s_i = iω_i. The delay-dependent value of Λ(θ) is given by

    Λ(θ) = β^{-1} sqrt(ω² + 1) ;   ω = -tan(ωτ - θ)        (3)

where ωτ is in the range (θ - π/2) ≤ ωτ ≤ θ, modulo 2π.

Figure 1. (a) Regions of Stability in the Complex Plane of Eigenvalues λ of the Connection Matrix J_ij, for τ = 0, 1, ∞. (b) Where the Stability Region Crosses the Real-λ Axis in the Negative Half Plane.

Notice that for nonzero delay the stability region closes on the Re(λ) axis in the negative half-plane. It is therefore possible for negative real eigenvalues to induce an instability of the origin. Specifically, if the minimum eigenvalue of the symmetric matrix J_ij is more negative than -Λ(θ = π) then the origin is unstable. We define this "back door" to the stability region along the real axis as Λ > 0, dropping the argument θ = π.
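The "back door" distance Λ(θ = π) can be computed numerically from the characteristic equation. A minimal sketch (the function name is ours; it solves ω = -tan(ωτ) on the branch ωτ ∈ (π/2, π), where h(x) = x/τ + tan(x) is monotone increasing and brackets a single root):

```python
import math

def back_door_distance(tau, beta):
    """Lambda(theta = pi): where the stability border crosses the negative
    real axis.  Solves  omega = -tan(omega * tau)  with omega*tau in
    (pi/2, pi) by bisection on h(x) = x/tau + tan(x), then returns
    beta**-1 * sqrt(omega**2 + 1)."""
    lo, hi = math.pi / 2 + 1e-9, math.pi   # h(lo) < 0, h(hi) = pi/tau > 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid / tau + math.tan(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    omega = 0.5 * (lo + hi) / tau
    return math.sqrt(omega * omega + 1.0) / beta
```

For τ = 1 this reduces to the classic root of tan(x) = -x at x ≈ 2.0288; for small τ it approaches π/(2τβ) from above, and for large τ it tends to 1/β, consistent with the limits discussed in the text.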
Λ is inversely proportional to the gain β and depends on delay as shown in Fig. 1(b). For large and small delay, Λ can be approximated as an explicit function of delay and gain:

    Λ ≈ π/(2τβ),   τ << 1        (4a)
    Λ ≈ β^{-1},    τ >> 1        (4b)

In the infinite-delay limit, the delay-differential system (1) is equivalent to an iterated map or parallel-update network of the form u_i(t+1) = Σ_j J_ij f(u_j(t)), where t is a discrete iteration index. In this limit, the stability region is circular, corresponding to the fixed point stability condition for the iterated map system. Consider the stability of the origin in a symmetrically connected delay system (1) as the neuron gain β is increased from zero to a large value. A bifurcation of the origin will occur when the maximum eigenvalue λ_max > 0 of J_ij becomes larger than 1/β, or when the minimum eigenvalue λ_min < 0 becomes more negative than -Λ = -β^{-1}(ω² + 1)^{1/2}, where ω = -tan(ωτ), [π/2 < ωτ < π]. Which bifurcation occurs first depends on the delay and the eigenvalues of J_ij. The bifurcation at λ_max = β^{-1} is a pitchfork (as it is for τ = 0) corresponding to a characteristic exponent s_i crossing into the positive real half plane along the real axis. This bifurcation creates a pair of fixed points along the eigenvector x_i associated with that eigenvalue. These fixed points constitute a single memory state of the network. The bifurcation at λ_min = -Λ corresponds to a Hopf bifurcation [Marsden and McCracken, 1976], where a pair of characteristic exponents pass into the real half plane with imaginary components ±ω, where ω = -tan(ωτ), [π/2 < ωτ < π]. This bifurcation, not present at τ = 0, creates an oscillatory attractor along the eigenvector associated with λ_min. A simple stability criterion can be constructed by requiring that the most negative eigenvalue of the (symmetric) connection matrix not be more negative than -Λ.
Because Λ is always larger than its small-delay limit π/(2τβ), the criterion can be stated as a limit on the size of the delay (in units of the network relaxation time):

    τ < π / (2β|λ_min|)   =>   no sustained oscillation.        (5)

Linear stability analysis does not prove global stability, but the criterion (5) is supported by considerable numerical and experimental evidence [Marcus and Westervelt, 1989]. For long delays, where Λ ≈ β^{-1}, linear stability analysis suggests that sustained oscillation will not exist as long as -β^{-1} < λ_min. In the infinite-delay limit, it can be shown that this condition ensures global stability in the discrete-time parallel-update network [Marcus and Westervelt, to appear]. At large gain, Eq. (5) does not provide a useful stability criterion because the delay required for stability tends to zero as β → ∞. The nonlinearity of the transfer function becomes important at large gain, and stable, fixed-point-only dynamics are found at large gain and nonzero delay, indicating that Eq. (5) is overly conservative at large gain. To understand this, we must include the nonlinearity and consider the stability of the oscillatory modes themselves. This is described in the next section. Stability in the Large-Gain Limit We now analyze the oscillatory mode at large gain for the particular case of coherent oscillation. We find a second stability criterion which predicts a gain-independent critical delay below which all initial conditions lead to fixed points. This result complements the low-gain result of the previous section for this class of network; experimentally and numerically we find excellent agreement in both regimes, with a cross-over at the value of gain where fixed points appear away from the origin, β = 1/λ_max.
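Given a connection matrix, criterion (5) is straightforward to apply. A small helper (hypothetical name, numpy assumed):

```python
import numpy as np

def no_oscillation_guaranteed(J, beta, tau):
    """Low-gain criterion (5): a symmetric delay network cannot sustain
    delay-induced oscillation if  tau < pi / (2 * beta * |lambda_min|).
    If no eigenvalue is negative there is no oscillatory 'back door' at all."""
    lam_min = np.linalg.eigvalsh(np.asarray(J, dtype=float)).min()
    if lam_min >= 0.0:
        return True
    return tau < np.pi / (2.0 * beta * abs(lam_min))
```

For the all-inhibitory N = 3 network discussed below (λ_min = -1), the criterion at β = 1.5 guarantees stability for delays up to π/3 ≈ 1.05 but makes no guarantee at τ = 6, consistent with the observed oscillation there.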
In considering only coherent oscillation, we not only assume that J_ij is symmetric but that its maximum and minimum eigenvalues satisfy 0 < λ_max < -λ_min and that the eigenvector associated with λ_min points in a coherent direction, defined to be along any of the 2^N vectors of the form (±1, ±1, ±1, ...) in the u_i basis. For this case, we find that in the limit of infinite gain, where the nonlinearity is of the form f(u) = sgn(u), multiple fixed point attractors coexist with the oscillatory attractor and that the size of the basin of attraction for the oscillatory mode varies with the delay [Marcus and Westervelt, 1988]. At a critical value of delay τ_crit the basin of attraction for oscillation vanishes and the oscillatory mode loses stability. In [Marcus and Westervelt, 1989] we show:

    τ_crit = -ln(1 + λ_max/λ_min)        (6)

For delays less than this critical value, all initial states lead to stable fixed points. Notice that the critical delay for coherent oscillation diverges as |λ_max/λ_min| → 1⁻. Experimentally and numerically we find that this prediction has more general applicability: none of the symmetric networks investigated which satisfied |λ_max/λ_min| ≈ 1 (and λ_max > 0) showed sustained oscillation for τ < ~10. This observation is a useful criterion for electronic circuit design, where single-device delays are generally shorter than the circuit relaxation time (τ < 1), but only the case of coherent oscillation is supported by analysis. Examples As a first example, we consider the fully-connected all-inhibitory network, Eq. (1) with J_ij = (N-1)^{-1}(δ_ij - 1). This matrix has N-1 degenerate eigenvalues at +1/(N-1) and a single eigenvalue at -1. A similar network configuration (with delays) has been studied as a model of lateral inhibition in the eye of the horseshoe crab, Limulus [Coleman and Renninger, 1975, 1976; Hadeler and Tomiuk, 1977; an der Heiden, 1980].
Previous analysis of sustained oscillation in this system has assumed a coherent form for the oscillatory solution, which reduces the problem to a single scalar delay-differential equation. However, by constraining the solution to lie along the coherent direction, the instability of the oscillatory mode discussed above is not seen. Because of this assumption, fixed-point-only dynamics in the large-gain limit with finite delay are not predicted by previous treatments, to our knowledge. The behavior of the network at various values of gain and delay is illustrated in Fig. 2 for the particular case N = 3. The four regions labeled A, B, C and D characterize the behavior for all N. At low gain (β < N-1) the origin is the unique attractor for small delay (region A) and undergoes a Hopf bifurcation to sustained coherent oscillation at τ = π(β² - 1)^{-1/2} for large delay (region B). At β = N-1 fixed points away from the origin appear. In addition to these fixed points, an oscillatory attractor exists at large gain for τ > ln[(N-1)/(N-2)] (≈ 1/N for large N) (region C). Sustained oscillation does not exist below this critical delay (region D).

Figure 2. Stability Diagram for the All-Inhibitory Delay Network for the Case N = 3. See Text for a Description of A, B, C and D.

As a second example, we consider a ring of delayed neurons. We allow the symmetric connections to be of either sign - that is, connections between neighboring pairs can be mutually excitatory or inhibitory - but all of the same strength. The eigenvalues for the symmetric ring of size N are λ_k = cos(2π(k + φ)/N), where k = 0, 1, 2, ..., N-1; φ = 1/2 if the product of connection strengths around the ring is negative, φ = 0 if the product is positive. Borrowing from the language of disordered magnetic systems, a ring which contains an odd number of negative connections (the case φ = 1/2) is said to be "frustrated" [Toulouse, 1977].
The large-gain stability analysis for the symmetric ring gives a rather surprising result: only frustrated rings with an odd number of neurons will show sustained oscillation. For this case (N odd and an odd number of negative connections) the critical delay is given by τ_crit = -ln(1 - cos(π/N)). This agrees very well with experimental and numerical data, as does the conclusion that rings with even N do not show sustained oscillation [Marcus and Westervelt, 1989]. The theoretical large-gain critical delay for the all-inhibitory network and the frustrated ring of the same size are compared in Fig. 3. Note that the critical delay for the all-inhibitory network decreases (roughly as 1/N) for larger networks, while the ring becomes less prone to oscillation as the network size increases.

Figure 3. Critical Delay from Large-Gain Theory for All-Inhibitory Networks (circles) and Frustrated Rings (squares) of Size N.

III. CHAOS IN NON-SYMMETRIC DELAY NETWORKS Allowing non-symmetric interconnections greatly expands the repertoire of neural network dynamics and can yield new, powerful computational properties. For example, several recent studies have shown that by using both asymmetric connections and time delay, a neural network can accurately recall sequences of stored patterns [Kleinfeld, 1986; Sompolinsky and Kanter, 1986]. It has also been shown that for some parameter values, these pattern-generating networks can produce chaotic dynamics [Riedel et al., 1988]. Relatively little is known about the dynamics of large asymmetric networks [Amari, 1971, 1972; Kürten and Clark, 1986; Shinomoto, 1986; Sompolinsky et al., 1988; Gutfreund et al., 1988]. A recent study of continuous-time networks with random asymmetric connections shows that as N → ∞ these systems will be chaotic whenever the origin is unstable [Sompolinsky et al., 1988].
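The ring spectrum and the two large-gain critical delays quoted above can be checked directly. The sketch below (our own construction) builds a frustrated ring with bond weights ±1/2, so the normalization Σ_j |J_ij| = 1 holds, and compares its numerically computed spectrum with the closed form:

```python
import numpy as np

def ring_matrix(N, n_negative=1):
    """Nearest-neighbour ring with bond weights +-1/2 (so sum_j |J_ij| = 1);
    n_negative of the N bonds are made inhibitory."""
    J = np.zeros((N, N))
    for k in range(N):
        w = -0.5 if k < n_negative else 0.5
        J[k, (k + 1) % N] = J[(k + 1) % N, k] = w
    return J

def ring_eigenvalues(N, frustrated):
    """Closed form quoted in the text: lambda_k = cos(2*pi*(k + phi)/N),
    phi = 1/2 for a frustrated ring (odd number of negative bonds), else 0."""
    phi = 0.5 if frustrated else 0.0
    return np.cos(2.0 * np.pi * (np.arange(N) + phi) / N)

def tau_crit_ring(N):            # frustrated ring, N odd
    return -np.log(1.0 - np.cos(np.pi / N))

def tau_crit_all_inhibitory(N):  # fully connected all-inhibitory network
    return np.log((N - 1.0) / (N - 2.0))
```

For an odd frustrated ring, λ_max = cos(π/N) and λ_min = -1, so the general formula τ_crit = -ln(1 + λ_max/λ_min) reduces to -ln(1 - cos(π/N)); at N = 3 both topologies give τ_crit = ln 2, and thereafter the two curves diverge as in Fig. 3.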
In discrete-state (±1) networks, with either parallel or sequential deterministic dynamics, oscillatory modes with long periods are also seen for fully asymmetric random connections (J_ij and J_ji uncorrelated), but when J_ij has either symmetric or antisymmetric correlations short-period attractors seem to predominate [Gutfreund et al., 1988]. It is not clear whether the chaotic dynamics of large random networks will appear in small networks with non-symmetric, but nonrandom, connections. Small networks with asymmetric connections have been used as models of central pattern generators found in many biological neural systems [Cohen et al., 1988]. These models frequently use time delay to produce sustained rhythmic output, motivated in part by the known presence of time delay in real central pattern generators. General theoretical principles concerning the dynamics of asymmetric networks with delay do not exist at present. It has been shown, however, that large system size is not necessary to produce chaos in neural networks with delay [e.g. Babcock and Westervelt, 1987]. We find that small systems (N ≤ 3) with certain asymmetric connections and time delay can produce sustained chaotic oscillation. An example is shown in Fig. 4: these data were produced using an electronic network [Marcus and Westervelt, 1988] of three neurons with sigmoidal transfer functions f₁(u(t)) = 3.8 tanh(8u(t-τ)), f₂(u(t)) = 2 tanh(6.1u(t)), f₃(u(t)) = 3.5 tanh(2.5u(t)), connection resistances of ±10 kΩ and input capacitances of 10 nF. Fig. 4 shows the network configuration and output voltages V₁ and V₂ for increasing delay in neuron 1. For τ < 0.64 ms a periodic attractor similar to the upper left panel of the figure is found; for τ > 0.97 ms both periodic and chaotic attractors are found.

Figure 4. Period Doubling to Chaos as the Delay in Neuron 1 is Increased.
Chaos in the network of Fig. 4 is closely related to a well-known chaotic delay-differential equation with a noninvertible feedback term [Mackey and Glass, 1977]. The noninvertible or "mixed" feedback necessary to produce chaos in the Mackey-Glass equation is achieved in the neural network - which has only monotone transfer functions - by using asymmetric connections. This association between asymmetry and noninvertible feedback suggests that asymmetric connections may be necessary to produce chaotic dynamics in neural networks, even when time delay is present. This conjecture is further supported by considering the two limiting cases of zero delay and infinite delay, neither of which shows chaotic dynamics for symmetric connections. IV. CONCLUSION AND OPEN PROBLEMS We have considered the effects of delayed response in a continuous-time neural network. We find that when the delay of each neuron exceeds a critical value, sustained oscillatory modes appear in a symmetric network. Stability analysis yields a design criterion for building stable electronic neural networks, but these results can also be used to create desired oscillatory modes in delay networks. For example, a variation of the Hebb rule [Hebb, 1949], created by simply taking the negative of a Hebb matrix, will give negative real eigenvalues corresponding to programmed oscillatory patterns. Analyzing the storage capacities and other properties of neural networks with dynamic attractors remains a challenging problem [see, e.g. Gutfreund and Mezard, 1988]. In analyzing the stability of delay systems, we have assumed that the delays and gains of all neurons are identical. This is quite restrictive and is certainly not justified from a biological viewpoint. It would be interesting to study the effects of a wide range of delays in both symmetric and non-symmetric neural networks.
It is possible, for example, that the coherent oscillation described above will not persist when the delays are widely distributed. Acknowledgements One of us (CMM) acknowledges support as an AT&T Bell Laboratories Scholar. Research supported in part by JSEP contract N00014-84-K-0465. References S. Amari, 1971, Proc. IEEE 59, 35. S. Amari, 1972, IEEE Trans. SMC-2, 643. U. an der Heiden, 1980, Analysis of Neural Networks, Vol. 35 of Lecture Notes in Biomathematics (Springer, New York). K.L. Babcock and R.M. Westervelt, 1987, Physica 28D, 305. M.A. Cohen and S. Grossberg, 1983, IEEE Trans. SMC-13, 815. A.H. Cohen, S. Rossignol and S. Grillner, 1988, Neural Control of Rhythmic Motion (Wiley, New York). B.D. Coleman and G.H. Renninger, 1975, J. Theor. Biol. 51, 243. B.D. Coleman and G.H. Renninger, 1976, SIAM J. Appl. Math. 31, 111. P.R. Conwell, 1987, in Proc. of IEEE First Int. Conf. on Neural Networks, III-95. H. Gutfreund, J.D. Reger and A.P. Young, 1988, J. Phys. A 21, 2775. H. Gutfreund and M. Mezard, 1988, Phys. Rev. Lett. 61, 235. K.P. Hadeler and J. Tomiuk, 1977, Arch. Rat. Mech. Anal. 65, 87. D.O. Hebb, 1949, The Organization of Behavior (Wiley, New York). J.J. Hopfield, 1984, Proc. Nat. Acad. Sci. USA 81, 3008. D. Kleinfeld, 1986, Proc. Nat. Acad. Sci. USA 83, 9469. K.E. Kürten and J.W. Clark, 1986, Phys. Lett. 114A, 413. M.C. Mackey and L. Glass, 1977, Science 197, 287. C.M. Marcus and R.M. Westervelt, 1988, in Proc. IEEE Conf. on Neural Info. Proc. Syst., Denver, CO, 1987 (American Institute of Physics, New York). C.M. Marcus and R.M. Westervelt, 1989, Phys. Rev. A 39, 347. J.E. Marsden and M. McCracken, 1976, The Hopf Bifurcation and its Applications (Springer-Verlag, New York). U. Riedel, R. Kühn, and J.L. van Hemmen, 1988, Phys. Rev. A 38, 1105. S. Shinomoto, 1986, Prog. Theor. Phys. 75, 1313. H. Sompolinsky and I. Kanter, 1986, Phys. Rev. Lett. 57, 259. H. Sompolinsky, A. Crisanti and H.J. Sommers, 1988, Phys. Rev. Lett. 61, 259. G. Toulouse, 1977, Commun. Phys. 2, 115.
|
1988
|
40
|
125
|
634 ON THE K-WINNERS-TAKE-ALL NETWORK E. Majani Jet Propulsion Laboratory California Institute of Technology R. Erlanson, Y. Abu-Mostafa Department of Electrical Engineering California Institute of Technology ABSTRACT We present and rigorously analyze a generalization of the Winner-Take-All Network: the K-Winners-Take-All Network. This network identifies the K largest of a set of N real numbers. The network model used is the continuous Hopfield model. I - INTRODUCTION The Winner-Take-All Network is a network which identifies the largest of N real numbers. Winner-Take-All Networks have been developed using various neural network models (Grossberg-73, Lippman-87, Feldman-82, Lazzaro-89). We present here a generalization of the Winner-Take-All Network: the K-Winners-Take-All (KWTA) Network. The KWTA Network identifies the K largest of N real numbers. The neural network model we use throughout the paper is the continuous Hopfield network model (Hopfield-84). If the states of the N nodes are initialized to the N real numbers, then, if the gain of the sigmoid is large enough, the network converges to the state with K positive real numbers in the positions of the nodes with the K largest initial states, and N - K negative real numbers everywhere else. Consider the following example: N = 4, K = 2. There are 6 = (4 choose 2) stable states: (+ + - -)^T, (+ - + -)^T, (+ - - +)^T, (- - + +)^T, (- + - +)^T, and (- + + -)^T. If the initial state of the network is (0.3, -0.4, 0.7, 0.1)^T, then the network will converge to (v₁, v₂, v₃, v₄)^T where v₁ > 0, v₂ < 0, v₃ > 0, v₄ < 0, i.e. (+ - + -)^T. In Section II, we define the KWTA Network (connection weights, external inputs). In Section III, we analyze the equilibrium states and in Section IV, we identify all the stable equilibrium states of the KWTA Network. In Section V, we describe the dynamics of the KWTA Network. In Section VI, we give two important examples of the KWTA Network and comment on an alternate implementation of the KWTA Network.
II - THE K-WINNERS-TAKE-ALL NETWORK The continuous Hopfield network model (Hopfield-84), also known as the Grossberg additive model (Grossberg-88), is characterized by a system of first order differential equations which governs the evolution of the state of the network (i = 1, ..., N). The sigmoid function g(u) is defined by g(u) = f(G·u), where G > 0 is the gain of the sigmoid, and f(u) is defined by: 1. for all u, 0 < f'(u) ≤ f'(0) = 1; 2. lim_{u→+∞} f(u) = 1; 3. lim_{u→-∞} f(u) = -1. The KWTA Network is characterized by mutually inhibitory interconnections T_ij = -1 for i ≠ j, a self-connection T_ii = a (|a| < 1), and an external input (identical for every node) which depends on the number K of winners desired and the size of the network N: t = 2K - N. The differential equations for the KWTA Network are therefore: for all i,

    C du_i/dt = -A u_i + (a+1) g(u_i) - (Σ_{j=1}^{N} g(u_j) - t),        (1)

where A = N - 1 + |a|, -1 < a < +1, and t = 2K - N. Let us now study the equilibrium states of the dynamical system defined in (1). We already know from previous work (Hopfield-84) that the network is guaranteed to converge to a stable equilibrium state if the connection matrix (T) is symmetric (and it is here). III - EQUILIBRIUM STATES OF THE NETWORK The equilibrium states u* of the KWTA network are defined by du_i/dt = 0 for all i, i.e., for all i,

    g(u_i*) = A/(a+1) u_i* + (Σ_j g(u_j*) - (2K - N))/(a+1).        (2)

Let us now develop necessary conditions for a state u* to be an equilibrium state of the network. Theorem 1: For a given equilibrium state u*, every component u_i* of u* can be one of at most three distinct values. Proof of Theorem 1. If we look at equation (2), we see that the last term of the right-hand-side expression is independent of i; let us denote this term by H(u*). Therefore, the components u_i* of the equilibrium state u* must be solutions of the equation: g(u_i*) = A/(a+1) u_i* + H(u*). Since the sigmoid function g(u) is monotone increasing and A/(a+1) > 0, the sigmoid and the line A/(a+1) u_i* + H(u*) intersect in at least one point and at most three (see Figure 1). Note that the constant H(u*) can be different for different equilibrium states u*. ∎ The following theorem shows that the sum of the node outputs is constrained to be close to 2K - N, as desired. Theorem 2: If u* is an equilibrium state of (1), then we have:

    (a+1) max_{u_i*<0} g(u_i*) < Σ_{j=1}^{N} g(u_j*) - 2K + N < (a+1) min_{u_i*>0} g(u_i*).        (3)

Proof of Theorem 2. Let us rewrite equation (2) in the following way: A u_i* = (a+1) g(u_i*) - (Σ_j g(u_j*) - (2K - N)). Since u_i* and g(u_i*) are of the same sign, the term (Σ_j g(u_j*) - (2K - N)) can neither be too large (if u_i* > 0) nor too low (if u_i* < 0). Therefore, we must have: for all u_i* > 0, (Σ_j g(u_j*) - (2K - N)) < (a+1) g(u_i*); for all u_i* < 0, (Σ_j g(u_j*) - (2K - N)) > (a+1) g(u_i*); which yields (3). ∎ Theorem 1 states that the components of an equilibrium state can only be one of at most three distinct values. We will distinguish between two types of equilibrium states, for the purposes of our analysis: those which have one or more components u_i* such that g'(u_i*) ≥ A/(a+1), which we categorize as type I, and those which do not (type II). We will show in the next section that for a gain G large enough, no equilibrium state of type I is stable. IV - ASYMPTOTIC STABILITY OF EQUILIBRIUM STATES We will first derive a necessary condition for an equilibrium state of (1) to be asymptotically stable. Then we will find the stable equilibrium states of the KWTA Network. IV-1. A NECESSARY CONDITION FOR ASYMPTOTIC STABILITY An important necessary condition for asymptotic stability is given in the following theorem. Theorem 3: Given any asymptotically stable equilibrium state u*, at most one of the components u_i* of u* may satisfy: g'(u_i*) ≥ A/(a+1). Proof of Theorem 3.
Theorem 3 is obtained by proving the following three lemmas. Lemma 1: Given any asymptotically stable equilibrium state u*, we always have, for all i and j such that i ≠ j:

    A > a (g'(u_i*) + g'(u_j*))/2 + sqrt(a²(g'(u_i*) - g'(u_j*))² + 4 g'(u_i*) g'(u_j*)) / 2.        (4)

Proof of Lemma 1. System (1) can be linearized around any equilibrium state u*: d(u - u*)/dt ≈ L(u*)(u - u*), where L(u*) = T · diag(g'(u₁*), ..., g'(u_N*)) - A I. A necessary and sufficient condition for the asymptotic stability of u* is for L(u*) to be negative definite. A necessary condition for L(u*) to be negative definite is for all 2×2 matrices L_ij(u*) of the type

    L_ij(u*) = [ a g'(u_i*) - A    -g'(u_j*)       ]
               [ -g'(u_i*)         a g'(u_j*) - A ],   (i ≠ j)

to be negative definite. This results from an infinitesimal perturbation of components i and j only. Any matrix L_ij(u*) has two real eigenvalues. Since the largest one has to be negative, we obtain:

    (1/2) ( a g'(u_i*) - A + a g'(u_j*) - A + sqrt(a²(g'(u_i*) - g'(u_j*))² + 4 g'(u_i*) g'(u_j*)) ) < 0.  ∎

Lemma 2: Equation (4) implies:

    min(g'(u_i*), g'(u_j*)) < A/(a+1).        (5)

Proof of Lemma 2. Consider the function h of three variables:

    h(a, g'(u_i*), g'(u_j*)) = a (g'(u_i*) + g'(u_j*))/2 + sqrt(a²(g'(u_i*) - g'(u_j*))² + 4 g'(u_i*) g'(u_j*)) / 2.

If we differentiate h with respect to its third variable g'(u_j*), we obtain:

    ∂h/∂g'(u_j*) = a/2 + (a² g'(u_j*) + (2 - a²) g'(u_i*)) / (2 sqrt(a²(g'(u_i*) - g'(u_j*))² + 4 g'(u_i*) g'(u_j*))),

which can be shown to be positive if and only if a > -1. But since |a| < 1, if g'(u_i*) < g'(u_j*) (without loss of generality), we have: h(a, g'(u_i*), g'(u_j*)) > h(a, g'(u_i*), g'(u_i*)) = (a+1) g'(u_i*), which, with (4), yields (a+1) g'(u_i*) < A, which yields Lemma 2. ∎ Lemma 3: If, for all i ≠ j,

    min(g'(u_i*), g'(u_j*)) < A/(a+1),        (6)

then there can be at most one u_i* such that: g'(u_i*) ≥ A/(a+1). Proof of Lemma 3. Let us assume there exists a pair (u_i*, u_j*) with i ≠ j such that g'(u_i*) ≥ A/(a+1) and g'(u_j*) ≥ A/(a+1); then (5) would be violated. ∎ IV-2.
STABLE EQUILIBRIUM STATES From Theorem 3, all stable equilibrium states of type I have exactly one component γ (at least one and at most one) such that g'(γ) ≥ A/(a+1). Let N₊ be the number of components α with g'(α) < A/(a+1) and α > 0, and let N₋ be the number of components β with g'(β) < A/(a+1) and β < 0 (note that N₊ + N₋ + 1 = N). For a large enough gain G, g(α) and g(β) can be made arbitrarily close to +1 and -1 respectively. Using Theorem 2, and assuming a large enough gain, we obtain: -1 < N₊ - K < 0. N₊ and K being integers, there is therefore no stable equilibrium state of type I. For the equilibrium states of type II, we have, for all i, u_i* = α (> 0) or β (< 0), where g'(α) < A/(a+1) and g'(β) < A/(a+1). For a large enough gain, g(α) and g(β) can be made arbitrarily close to +1 and -1 respectively. Using Theorem 2 and assuming a large enough gain, we obtain: -(a+1) < 2(N₊ - K) < (a+1), which yields N₊ = K. Let us now summarize our results in the following theorem: Theorem 4: For a large enough gain, the only possible asymptotically stable equilibrium states u* of (1) must have K components equal to α > 0 and N - K components equal to β < 0, with

    g(α) = A/(a+1) α + (K(g(α) - g(β) - 2) + N(1 + g(β)))/(a+1),
    g(β) = A/(a+1) β + (K(g(α) - g(β) - 2) + N(1 + g(β)))/(a+1).        (7)

Since we are guaranteed to have at least one stable equilibrium state (Hopfield-84), and since any state whose components are a permutation of the components of a stable equilibrium state is clearly a stable equilibrium state, we have: Theorem 5: There exist at least (N choose K) stable equilibrium states as defined in Theorem 4. They correspond to the (N choose K) different states obtained by the N! permutations of one stable state with K positive components and N - K negative components.
V - THE DYNAMICS OF THE KWTA NETWORK Now that we know the characteristics of the stable equilibrium states of the KWTA Network, we need to show that the KWTA Network will converge to the stable state which has α > 0 in the positions of the K largest initial components. This can be seen clearly by observing that, for all i ≠ j:

    C d(u_i - u_j)/dt = -A (u_i - u_j) + (a+1)(g(u_i) - g(u_j)).

If at some time T, u_i(T) = u_j(T), then one can show that u_i(t) = u_j(t) for all t. Therefore, for all i ≠ j, u_i(t) - u_j(t) always keeps the same sign. This leads to the following theorem. Theorem 6: (Preservation of order) For all nodes i ≠ j, the sign of u_i(t) - u_j(t) is preserved over time. We shall now summarize the results of the last two sections. Theorem 7: Given an initial state u(0) and a gain G large enough, the KWTA Network will converge to a stable equilibrium state with K components equal to a positive real number (α > 0) in the positions of the K largest initial components, and N - K components equal to a negative real number (β < 0) in all other N - K positions. This can be derived directly from Theorems 4, 5 and 6: we know the form of all stable equilibrium states, the order of the initial node states is preserved through time, and there is guaranteed convergence to an equilibrium state. VI - DISCUSSION The well-known Winner-Take-All Network is obtained by setting K to 1. The N/2-Winners-Take-All Network, given a set of N real numbers, identifies which numbers are above or below the median. This task is slightly more complex computationally (≈ O(N log N)) than that of the Winner-Take-All (≈ O(N)). The number of stable states is much larger, (N choose N/2) ≈ 2^N / sqrt(2πN), i.e., asymptotically exponential in the size of the network. Although the number of connection weights is N², there exists an alternate implementation of the KWTA Network which has O(N) connections (see Figure 2). The sum of the outputs of all nodes and the external input is computed, then negated and fed back to all the nodes.
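Theorem 7 lends itself to a direct numerical check. The sketch below is our own forward-Euler integration of Eq. (1), with parameter choices that are ours (g(u) = tanh(G·u), G = 20, a = 0, C = 1); it recovers the N = 4, K = 2 example from the introduction.

```python
import numpy as np

def kwta(values, K, a=0.0, G=20.0, T=100.0, dt=0.005):
    """Forward-Euler integration of the KWTA dynamics, Eq. (1):
        C du_i/dt = -A u_i + (a+1) g(u_i) - (sum_j g(u_j) - (2K - N)),
    with A = N - 1 + |a|, g(u) = tanh(G u), C = 1, and the nodes
    initialized to the input values.  The K positive components of the
    returned state mark the K largest inputs."""
    u = np.array(values, dtype=float)
    N = len(u)
    A = N - 1 + abs(a)
    t_ext = 2 * K - N
    for _ in range(int(round(T / dt))):
        g = np.tanh(G * u)
        u += dt * (-A * u + (a + 1.0) * g - (g.sum() - t_ext))
    return u

final = kwta([0.3, -0.4, 0.7, 0.1], K=2)   # example from the introduction
```

In this symmetric K = N/2 case the winners settle near α = (a+1)/A and the losers near -α (here ±1/3), consistent with Theorem 4 at high gain.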
In addition, a positive self-connection $(a+1)$ is needed at every node. The analysis was done for a "large enough" gain $G$. In practice, the critical value of $G$ is $(a+1)^{-1}$ for the N/2-Winners-Take-All Network, and slightly higher for $K \neq N/2$. Also, the analysis was done for an arbitrary value of the self-connection weight $a$ ($|a| < 1$). In general, if $a$ is close to $+1$, this will lead to faster convergence and a smaller value of the critical gain than if $a$ is close to $-1$.

On the K-Winners-Take-All Network

VII - CONCLUSION

The KWTA Network lets all nodes compete until the desired number of winners ($K$) is obtained. The competition is obtained by using mutual inhibition between all nodes, while the number of winners $K$ is selected by setting all external inputs to $2K - N$. This paper illustrates the capability of the continuous Hopfield Network to solve exactly an interesting decision problem, i.e., identifying the $K$ largest of $N$ real numbers.

Acknowledgments

The authors would like to thank John Hopfield and Stephen DeWeerth from the California Institute of Technology and Marvin Perlman from the Jet Propulsion Laboratory for insightful discussions about material presented in this paper. Part of the research described in this paper was performed at the Jet Propulsion Laboratory under contract with NASA.

References

J.A. Feldman, D.H. Ballard, "Connectionist Models and their Properties," Cognitive Science, Vol. 6, pp. 205-254, 1982
S. Grossberg, "Contour Enhancement, Short Term Memory, and Constancies in Reverberating Neural Networks," Studies in Applied Mathematics, Vol. LII (52), No. 3, pp. 213-257, September 1973
S. Grossberg, "Non-Linear Neural Networks: Principles, Mechanisms, and Architectures," Neural Networks, Vol. 1, pp. 17-61, 1988
J.J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Natl. Acad. Sci. USA, Vol. 81, pp. 3088-3092, May 1984
J. Lazzaro, S. Ryckebusch, M.A. Mahowald, C.A.
Mead, "Winner-Take-All Networks of O(N) Complexity," in this volume, 1989
R.P. Lippmann, B. Gold, M.L. Malpass, "A Comparison of Hamming and Hopfield Neural Nets for Pattern Classification," MIT Lincoln Lab. Tech. Rep. TR-769, 21 May 1987

Figure 1: Intersection of sigmoid and line.
Figure 2: An implementation of the KWTA Network.
1988
DOES THE NEURON "LEARN" LIKE THE SYNAPSE?

RAOUL TAWEL
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109

Abstract. An improved learning paradigm that offers a significant reduction in computation time during the supervised learning phase is described. It is based on extending the role that the neuron plays in artificial neural systems. Prior work has regarded the neuron as a strictly passive, non-linear processing element, and the synapse on the other hand as the primary source of information processing and knowledge retention. In this work, the role of the neuron is extended insofar as allowing its parameters to adaptively participate in the learning phase. The temperature of the sigmoid function is an example of such a parameter. During learning, both the synaptic interconnection weights $w_{ij}^m$ and the neuronal temperatures $T_i^m$ are optimized so as to capture the knowledge contained within the training set. The method allows each neuron to possess and update its own characteristic local temperature. This algorithm has been applied to logic problems such as the XOR or parity problem, resulting in a significant decrease in the required number of training cycles.

INTRODUCTION

One of the current issues in the theory of supervised learning concerns the scaling properties of neural networks. While low-order neural computations are easily handled on sequential or parallel processors, high-order problems prove to be intractable. The computational burden involved in implementing supervised learning algorithms, such as back-propagation, on networks with large connectivity and/or large training sets is immense and impractical at present. Therefore the treatment of 'real' applications in such areas as image recognition or pattern classification requires the development of computationally efficient learning rules. This paper reports such an algorithm.
Current neuromorphic models regard the neuron as a strictly passive non-linear element, and the synapse on the other hand as the primary source of knowledge retention. In these models, information processing is performed by propagating the synaptically weighted neuronal contributions in either a feed-forward, feed-backward, or fully recurrent fashion [1]-[3]. Artificial neural networks commonly take the point of view that the neuron can be modeled by a simple non-linear 'wire' type of device. However, evidence exists that information processing in biological neural networks does occur at the neuronal level [4]. Although neuromorphic nets based on simple neurons are useful as a first approximation, a considerable richness is to be gained by extending 'learning' to the neuron. In this work, such an extension is made. The neuron is then seen to provide an additional or secondary source of information processing and knowledge retention. This is achieved by treating both the neuronal and synaptic variables as optimization parameters. The temperature of the sigmoid function is an example of such a neuronal parameter. In much the same way that the synaptic interconnection weights require optimization to reflect the knowledge contained within the training set, so should the temperature terms be optimized. It should be emphasized that the method does not optimize a global neuronal temperature for the whole network, but rather allows each neuron to possess and update its own characteristic local value.

ADAPTIVE NEURON MODEL

Although the principle of neuronal optimization is an entirely general concept, and therefore applicable to any learning scheme, the popular feed-forward back-propagation (BP) learning rule has been selected for its implementation and performance evaluation. In this section we develop the mathematical formalism necessary to implement the adaptive neuron model (ANM).
Back-propagation is an example of supervised learning where, for each presentation consisting of an input vector $\vec{i}_p$ and its associated target vector $\vec{t}_p$, the algorithm attempts to adjust the synaptic weights so as to minimize the sum-squared error $E$ over all patterns $p$. In its simplest form, back-propagation treats the interconnection weights as the only variables and consequently executes gradient descent in weight space. The error term is given by

$$E = \sum_p E_p = \frac{1}{2} \sum_p \sum_i \left[ t_i^p - o_i^p \right]^2.$$

The quantity $t_i^p$ is the $i$th component of the $p$th desired output vector pattern and $o_i^p$ is the activation of the corresponding neuron in the final layer $n$. For notational ease the summation over $p$ is dropped and a single pattern is considered. On completion of learning, the synaptic weights capture the transformation linking the input to output variables. In applications other than toy problems, a major drawback of this algorithm is the excessive convergence time. In this paper it is shown that a significant decrease in convergence time can be realized by allowing the neurons to adaptively participate in the learning process. This means that each neuron is to be characterized by a set of parameters, such as temperature, whose values are optimized according to a rule, and not in a heuristic fashion as in simulated annealing. Upon training completion, learning is thus captured in both the synaptic and neuronal parameters. The activation of a unit - say the $i$th neuron on the $m$th layer - is given by $o_i^m$. This response is computed by a non-linear operation on the weighted responses of neurons from the previous layer, as seen in Figure 1. A common function to use is the logistic function,

$$o_i^m = \frac{1}{1 + e^{-\beta s_i^m}},$$

where $T = 1/\beta$ is the temperature of the network. The net weighted input to the neuron is found by summing products of the synaptic weights and corresponding neuronal outputs from units on the previous layer,
$$s_i^m = \sum_j w_{ij}^{m-1}\, o_j^{m-1},$$

where the $o_j^{m-1}$ represent fan-in units and the $w_{ij}^{m-1}$ represent the pairwise connection strength between neuron $i$ in layer $m$ and neuron $j$ in layer $m-1$.

Figure 1: Each neuron in a network is characterized by a local, temperature-dependent, sigmoidal activation function.

We have investigated several mathematical methods for the determination of the optimal neuronal temperatures. In this paper, the rule that was selected to optimize these parameters is based on executing gradient descent on the sum-squared error $E$ in temperature space. The method requires that the incremental change in the temperature term be proportional to the negative of the derivative of the error term with respect to the temperature. Focussing on the $i$th neuron on the output layer $n$, we have

$$\Delta T_i^n = -\tilde{\eta}\, \frac{\partial E}{\partial T_i^n}.$$

In this expression, $\tilde{\eta}$ is the temperature learning rate. This equation can be expressed as the product of two terms by the chain rule,

$$\frac{\partial E}{\partial T_i^n} = \frac{\partial E}{\partial o_i^n}\, \frac{\partial o_i^n}{\partial T_i^n}.$$

Substituting expressions and leaving the explicit functional form of the activation function unspecified, i.e. $o_i^n = f(T_i^n, \ldots)$, we obtain

$$\frac{\partial E}{\partial T_i^n} = -\left[ t_i - o_i^n \right] \frac{\partial f}{\partial T_i^n}.$$

In a similar fashion, the temperature update equation for the previous layer is given by

$$\Delta T_k^{n-1} = -\tilde{\eta}\, \frac{\partial E}{\partial T_k^{n-1}}.$$

Using the chain rule, this can be expressed as

$$\frac{\partial E}{\partial T_k^{n-1}} = \sum_i \frac{\partial E}{\partial o_i^n}\, \frac{\partial o_i^n}{\partial s_i^n}\, \frac{\partial s_i^n}{\partial o_k^{n-1}}\, \frac{\partial o_k^{n-1}}{\partial T_k^{n-1}}.$$

Substituting expressions and simplifying reduces the above to

$$\frac{\partial E}{\partial T_k^{n-1}} = \left[ \sum_i -\left[ t_i - o_i^n \right] \frac{\partial f}{\partial s_i^n}\, w_{ik} \right] \frac{\partial f}{\partial T_k^{n-1}}.$$

By repeating the above derivation for the previous layer, i.e. determining the partial derivative of $E$ with respect to $T_j^{n-2}$ etc., a simple recursive relationship emerges for the temperature terms.
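The output-layer result, $\partial E/\partial T = -(t - o)\,\partial f/\partial T$ with the logistic $f(s,T) = 1/(1 + e^{-s/T})$, can be sanity-checked against a numerical derivative. A small Python sketch (our own variable names and test values, not the paper's):

```python
import numpy as np

def f(s, T):
    # temperature-dependent logistic activation: o = 1 / (1 + exp(-s/T))
    return 1.0 / (1.0 + np.exp(-s / T))

def dE_dT(t, s, T):
    # analytic gradient of E = 0.5*(t - o)^2 w.r.t. the neuron's temperature:
    # dE/dT = -(t - o) * df/dT, where df/dT = f*(1 - f)*(-s / T**2)
    o = f(s, T)
    return -(t - o) * o * (1.0 - o) * (-s / T**2)

# compare with a central finite difference at an arbitrary point
t, s, T, eps = 1.0, 0.7, 0.9, 1e-6
E = lambda T_: 0.5 * (t - f(s, T_)) ** 2
numeric = (E(T + eps) - E(T - eps)) / (2 * eps)
assert abs(dE_dT(t, s, T) - numeric) < 1e-8
```

The same check can be pushed back a layer with the chain-rule expression for $\partial E/\partial T_k^{n-1}$.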
Specifically, the updating scheme for the $k$th neuronal temperature on the $m$th layer is given by

$$\Delta T_k^m = -\tilde{\eta}\, \frac{\partial E}{\partial T_k^m}, \qquad \frac{\partial E}{\partial T_k^m} = -\delta_k^m\, \frac{\partial f}{\partial T_k^m}.$$

In the above expression, the error signal $\delta_k^m$ takes on the value

$$\delta_k^n = \left[ t_k - o_k^n \right]$$

if the neuron lies on an output layer, or

$$\delta_k^m = \sum_l \delta_l^{m+1}\, \frac{\partial f}{\partial s_l^{m+1}}\, w_{lk}$$

if the neuron lies on a hidden layer.

SIMULATION RESULTS OF TEMPERATURE OPTIMIZATION

The new algorithm was applied to logic problems. The network was trained on a standard benchmark - the exclusive-or logic problem. This is a classic problem requiring hidden units, and many problems involve an XOR as a subproblem. As in plain BP, the application of the proposed learning rule involves two passes. In the first, an input pattern is presented and propagated forward through the network to compute the output values $o_i^m$. This output is compared to its target value, resulting in an error signal for each output unit. The second pass involves a backward pass through the network during which the error signal is passed along the network and the appropriate weight and temperature changes are made. Note that since the synapses and neurons have their own characteristic learning rates, i.e. $\eta$ and $\tilde{\eta}$ respectively, an additional degree of freedom is introduced in the simulation. This is equivalent to allowing for relative updating time scales for the weights and temperatures, i.e. $\tau_w$ and $\tau_T$ respectively. We have now generated a gradient descent method for finding weights and temperatures in a feed-forward network. In deriving the learning rule for temperature optimization in the above section, the derivative of the activation function of a neuron played a key role. We have used a sigmoidal type of function in our simulations, whose explicit form is given by

$$f(s_k^m, T_k^m) = \frac{1}{1 + e^{-\beta_k^m s_k^m}},$$

and in Figure 2 it is shown to be extremely sensitive to small changes in temperature.
Figure 2: Activation function plotted for several different temperatures. The sigmoid is shown plotted against the net input to a neuron for temperatures ranging from 0.2 to 2.0, in increments of 0.2; the steepest curve shown is for a temperature of 0.01.

The derivative of the activation function taken with respect to the temperature is given by

$$\frac{\partial f}{\partial T_k^m} = -\frac{s_k^m}{(T_k^m)^2}\, f \left( 1 - f \right).$$

As shown in Figure 3, the XOR architecture selected has two input units, two hidden units, and a single output unit. Each neuron is characterized by a temperature, and neurons are connected by weights. Prior to training the network, both the weights and temperatures were randomized. The initial and final optimization parameters for a sample training exercise are shown in Figures 3(a) & (b). Specifically, Figure 3(a) shows the values of the randomized weights and temperatures prior to training, and Figure 3(b) shows their values after training the network for 1000 iterations. This is a case where the network has reached a global minimum. In both figures, the numbers associated with the dashed arrows represent the thresholds of the neurons, and the numbers written next to the solid arrows represent the excitatory/inhibitory strengths of the pairwise connections.

Figure 3: Architecture of the NN for the XOR problem, showing neuronal temperatures and synaptic weights before (a) and after training (b).

To fully evaluate the convergence speed of the proposed algorithm, a benchmark comparison between it and plain BP was made. In both cases the training was started with identical initial random synaptic weights lying within the range [-2.0, +2.0] and the same synaptic weight learning rate $\eta = 0.1$. The temperatures of the neurons in the ANM model were randomly selected to lie within the narrow range [0.9, 1.1] and the temperature learning rate $\tilde{\eta}$ set at 0.1. Figures 4(a) & (b) summarize the training statistics of this comparison.
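The two-pass procedure for the 2-2-1 XOR network can be sketched end-to-end. The snippet below is our own minimal reimplementation, not the paper's code: the learning rates, the temperature floor, and the batch update are assumptions; it simply performs gradient descent on both weights and per-neuron temperatures and checks that the sum-squared error decreases.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

# 2-2-1 network; every neuron carries its own temperature (the ANM idea)
W1 = rng.uniform(-2, 2, (2, 2)); b1 = rng.uniform(-2, 2, 2)
W2 = rng.uniform(-2, 2, (2, 1)); b2 = rng.uniform(-2, 2, 1)
T1 = rng.uniform(0.9, 1.1, 2);   T2 = rng.uniform(0.9, 1.1, 1)

def sig(s, T):
    return 1.0 / (1.0 + np.exp(-s / T))

def forward():
    s1 = X @ W1 + b1; o1 = sig(s1, T1)
    s2 = o1 @ W2 + b2; o2 = sig(s2, T2)
    return s1, o1, s2, o2

def error():
    return 0.5 * np.sum((Y - forward()[3]) ** 2)

eta, eta_T = 0.3, 0.05          # weight and temperature learning rates (assumed)
e0 = error()
for _ in range(3000):
    s1, o1, s2, o2 = forward()
    d2 = (o2 - Y) * o2 * (1 - o2) / T2        # dE/ds2
    d1 = (d2 @ W2.T) * o1 * (1 - o1) / T1     # dE/ds1
    gT2 = np.sum((o2 - Y) * o2 * (1 - o2) * (-s2 / T2**2), axis=0)
    gT1 = np.sum((d2 @ W2.T) * o1 * (1 - o1) * (-s1 / T1**2), axis=0)
    W2 -= eta * (o1.T @ d2); b2 -= eta * d2.sum(axis=0)
    W1 -= eta * (X.T @ d1);  b1 -= eta * d1.sum(axis=0)
    # floor the temperatures away from zero: a practical safeguard we added
    T2 = np.maximum(T2 - eta_T * gT2, 0.1)
    T1 = np.maximum(T1 - eta_T * gT1, 0.1)
assert error() < e0
```

The temperature gradients are exactly the $\delta_k^m\,\partial f/\partial T_k^m$ recursion of the previous section, with $\partial f/\partial T = f(1-f)(-s/T^2)$.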
Figure 4: Comparison of training statistics between the adaptive neuron model and plain back-propagation.

In both figures, the solid lines represent the ANM and the dashed lines represent the plain BP model. In Figure 4(a), the error is plotted against the training iteration number. In Figure 4(b), the standard deviation of the error over the training set is shown plotted against the training iteration. In the first few hundred training iterations in Figure 4(a), the performance of BP and the ANM is similar and appears as a broad shoulder in the curve. Recall that both the weights and temperatures are randomized prior to training, and are therefore far from their final values. As a consequence of the low values of the learning rates used, the error is large, and will only begin to get smaller when the weights and temperatures begin to fall in the right domain of values. In the ANM, the shoulder terminus is marked by a phase-transition-like discontinuity in both error and standard deviation. For the particular example shown, this occurred at the 637th iteration. A several-order-of-magnitude drop in the error and standard deviation is observed within the next 10 iterations. This sharp drop-off is followed by a much more gradual decrease in both the error and standard deviation. A more detailed analysis of these results will be published in a longer paper. In learning the XOR problem using standard BP, it has been observed that the network frequently gets trapped in local minima. In Figures 5(a) & (b) we observe such a case, as shown by the dotted line.
In numerous simulations on this problem, we have determined that the ANM is much less likely to become trapped in local minima.

Figure 5: Training case where the adaptive neuron model escapes a local minimum and plain back-propagation does not.

CONCLUSIONS

In this paper we have attempted to upgrade and enrich the model of the neuron from a simple static non-linear wire-type construct to a dynamically reconfigurable one. From a purely computational point of view, there are definite advantages in such an extension. Recall that if $N$ is the number of neurons in a network, then the number of synaptic connections typically increases as $O(N^2)$. Since the activation function is extremely sensitive to small changes in temperature, and since there are far fewer neuronal parameters to update than synaptic weights, the adaptive neuron model should offer a significant reduction in convergence time. In this paper we have also shown that the active participation of the neurons during the supervised learning phase led to a significant reduction in the number of training cycles required to learn logic problems. In the adaptive neuron model both the synaptic weight interconnection strengths and the neuronal temperature terms are treated as optimization parameters and have their own updating schemes and time scales. This learning rule is based on implementing gradient descent on the sum-squared error $E$ with respect to both the weights $w_{ij}^m$ and temperatures $T_i^m$. Preliminary results indicate that the new algorithm can significantly outperform back-propagation by reducing the learning time by several orders of magnitude.
Specifically, the XOR problem was learnt to a very high precision by the network in $\approx 10^3$ training iterations with a mean square error of $\approx 10^{-6}$, versus over $10^6$ iterations with a corresponding mean square error of $\approx 10^{-3}$.

Acknowledgements. The work described in this paper was performed by the Jet Propulsion Laboratory, California Institute of Technology, and was supported in part by the National Aeronautics and Space Administration and the Defense Advanced Research Projects Agency through an agreement with the National Aeronautics and Space Administration.

REFERENCES

1. D. Rumelhart, J. McClelland, "Parallel Distributed Processing," M.I.T. Press, Cambridge, MA, 1986.
2. J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences USA 79 (1982), 2554-2558.
3. F. J. Pineda, Generalization of Backpropagation to Recurrent and Higher Order Neural Networks, in "Neural Information Processing Systems Proceedings," AIP, New York, 1988.
4. L. R. Carley, Presynaptic Neural Information Processing, in "Neural Information Processing Systems Proceedings," AIP, New York, 1988.
1988
USE OF MULTI-LAYERED NETWORKS FOR CODING SPEECH WITH PHONETIC FEATURES

Yoshua Bengio, Regis Cardin and Renato De Mori
Computer Science Dept., McGill University, Montreal, Canada H3A 2A7

Piero Cosi
Centro di Studio per le Ricerche di Fonetica, C.N.R., Via Oberdan 10, 35122 Padova, Italy

ABSTRACT

Preliminary results on speaker-independent speech recognition are reported. A method that combines expertise on neural networks with expertise on speech recognition is used to build the recognition systems. For transient sounds, event-driven property extractors with variable resolution in the time and frequency domains are used. For sonorant speech, a model of the human auditory system is preferred to FFT as a front-end module.

INTRODUCTION

Combining a structural or knowledge-based approach for describing speech units with neural networks capable of automatically learning relations between acoustic properties and speech units is the research effort we are attempting. The objective is that of using good generalization models for learning speech units that could be reliably used for many recognition tasks without having to train the system when a new speaker comes in or a new task is considered. Domain-specific (speech recognition) knowledge is applied for:
- segmentation and labeling of speech,
- definition of event-driven property extractors,
- use of an ear model as preprocessing applied to some modules,
- coding of network outputs with phonetic features,
- modularization of the speech recognition task by dividing the workload into smaller networks performing simpler tasks.
Optimization of learning time and of generalization for the neural networks is sought through the use of neural network techniques:
- use of error back-propagation for learning,
- switching between on-line learning and batch learning when appropriate,
- convergence acceleration with local (weight-specific) learning rates,
- convergence acceleration with adaptive learning rates based on information on the changes in the direction of the gradient,
- control of the presentation of exemplars in order to balance exemplars among the different classes,
- training of small modules in the first place:
  - simpler architecture (e.g. first find out the solution to the linearly separable part of the problem),
  - use of simple recognition tasks, combined using either Waibel's glue units [Waibel 88] or simple heuristics,
- training on time-shifted inputs to learn time invariance and insensitivity to errors in the segmentation preprocessing,
- controlling and improving generalization by using several test sets and using one of them to decide when to stop training.

EAR MODEL

In recent years basilar membrane, inner cell and nerve fiber behavior have been extensively studied by auditory physiologists and neurophysiologists, and knowledge about the human auditory pathway has become more accurate [Sachs 79,80,83][Delgutte 80,84][Sinex 83]. The computational scheme proposed in this paper for modelling the human auditory system is derived from the one proposed by S. Seneff [Seneff 84,85,86]. The overall system structure, which is illustrated in Fig. 1, includes three blocks: the first two of them deal with peripheral transformations occurring in the early stages of the hearing process, while the third one attempts to extract information relevant to perception. The first two blocks represent the periphery of the hearing system. They are designed using knowledge of the rather well known responses of the corresponding human auditory stages [Delgutte 84]. The third unit attempts to apply a useful processing strategy for the extraction of important speech properties like spectral lines related to formants. The speech signal, band-limited and sampled at 16 kHz, is first prefiltered through a set of four complex zero pairs to eliminate the very high and very low frequency components. The signal is then analyzed by the first block, a 40-channel critical-band linear filter bank. Filters were designed to optimally fit physiological data [Delgutte 84] such as those observed by [N.Y.S. Kiang et al.] and are implemented as a
They are designed using knowledge of the rather well known responses of the corresponding human auditory stages [Delgutte 84]. The third unit attempts to apply a useful processing strategy for the extraction of important speech properties like spectral lines related to formants. The speech signal, band-limited and sampled at 16 kHz, is first prefiltered through a set of four complex zero pairs to eliminate the very high and very low frequency components. The Signal is then analyzed by the first block, a 40-channel critical-band linear filter bank. Filters were designed to optimally fit physiological data [Delgutte 84) such as those observed by [N.V.S. Kiang et al.] and are implemented as a 226 Bengio, Cardin, De Mori and Cosi cascade of complex high frequency zero pairs with taps after each zero pair to individual tuned resonators. The second block of the model is called the hair cell synapse model, it is nonlinear and is intended to capture prominent features of the transformation from basilar membrane vibration, represented by the outputs of the filter bank, to probabilistic response properties of auditory nerve fibers. The outputs of this stage, in accordance with [Seneff 88], represent the probability of firing as a function of time for a set of similar fibers acting as a group. Four different neural mechanisms are modeled in this nonlinear stage. The rectifier is applied to the signal to simulate the high level distinct directional sensitivity present in the inner hair cell current response. The short-term adaptation which seems due to the neurotransmitter release in the synaptic region between the inner hair cell and its connected nerve fibers is Simulated by the "membrane model". The third unit represents the observed gradual loss of synchrony in nerve fiber behaviour as stimulus frequency is increased. The last unit is called "Rapid Adaptation", it performs "Automatic Gain Control" and implements a model of the refractory phenomenon of nerve fibers. 
The third and last block of the ear model is the synchrony detector which implements the known "phase locking" property of the nerve fibers. It enhances spectral peaks due to vocal tract resonances. IN PUT.J,.SIGNAL 40 • channels Critical Band Filter Bank BASILARi,.MEMBRANE RESPONSE Hair Cell Synapse Model " RRING PROBABILITY Synchrony Detector SYNCHRO~SPECTRUM Figure 1 : Structure of the ear model OUTPUT LAYER HIDD~NO 0 ~o _ 0 LAYER ... __ ~ ... tt HIDD~NO o~O 0 LAYER 0 o 0 tt ~ time Figure 2 : Multi·layered network vlith variable resolution Property Extractor Multi-Layered Networks for Coding Phonetic Features 227 PROPERTY EXTRACTORS For many of the experiments described in this paper, learning is performed by a set of multi-layered neural networks (MLNs) whose execution is decided by a data-driven strategy. This strategy analyzes morphologies of the input data and selects the execution of one or more MLNs as well as the time and frequency resolution of the spectral samples that are applied at the network input. An advantage of using such specialized property extractors is that the number of necessary input connections (and thus of connections) is then minimized, thus improving the generalizing power of the MLNs. Fine time resolution and gross frequency resolution are used, for example, at the onset of a peak of signal energy, while the opposite is used in the middle of a segment containing broad-band noise. The latter situation will allow the duration of the segment analyzed by one instantiation of the selected MLN to be larger than the duration of the signal analyzed in the former case. Property extractors (PEs) are mostly rectangular windows subdivided into cells, as illustrated in Figure 2. Property extractors used in the experiments reported here are described in [Bengio, De Mori & Cardin 88]. 
A set of PEs form the input of a network called MLN1, executed when a situation characterized by the following rule is detected: SITUATION S1 ( ( deep_dip) (t*)(peak)) or ((ns)(t*)(peak)) or (deep_dip)(sonorant-head)(t*)(peak)) --> execute (MLN1 at t*) (deep_dip), (peak), (ns) are symbols of the PAC alphabet representing respectively a deep dip, a peak in the time evolution of the signal energy and a segment with broad-band nOise; t* is the time at which the first description ends, sonorant-head is a property defined in [De Mori, Merlo et al. 87]. Similar preconditions and networks are established for nonsonorant segments at the end of an utterance. Another MLN called MLN2 is executed only when frication noise is detected. This situation is characterized by the following rule: SITUATION S2 (pr1 = (ns)) --> execute (MLN2 every T =20 msecs.) 228 Bengio, Cardin, De Morl and Cosi EXPERIMENTAL RESULTS EXPERIMENT 1 - task : perform the classification among the following 10 letters of the alphabet, from the E-set : { b,c,d,e,g,k,p,t,v,3} - Input coding defined in [Bengio, De Mori & Cardin 88]. - architecture: two modules have been defined, MLN1 and MLN2. The input units of each PE window are connected to a group of 20 hidden units, which are connected to another group of 10 hidden units. All the units in the last hidden layer are then connected to the 10 output units. - database : in the learning phase, 1400 samples corresponding to 2 pronounciations of each word of the E-set by 70 speakers were used for training MLN1 and MLN2. Ten new speakers were used for testing. The data base contains 40 male and 40 female speakers. - results: an overall error rate of 9.5% was obtained with a maximum error of 20% for the letter Id/. These results are much better than the ones we obtained before and we published recently [De Mori, Lam & Gilloux 87]. 
An observation of the confusion matrix shows that most of the errors represent cases that appear to be difficult even in human perception. EXPERIMENT 2 - task : similar to the one in experiment 1 i.e. to recognize the h ea d consonant in the context of a certain vowel : lae/'/o/'/ul and Ia!. -subtask 1 : classify pronounciations of the first phoneme of letters A,K,J,Z and digit 7 into the classes {/vowel/,lkI,/j/'/zl,/s/}. -subtask 2 : classify pronounciations of the first phoneme of letter a and digit 4 into the classes {/vowel/,/f/}. -subtask 3 : classify pronounciations of the first phoneme of the letter Y and the digits 1 and 2 into the classes {/vowel/,lt!}. -subtask 4 : classify pronounciations of the first phoneme of letters I,R,W and digits 5 and 9 into the classes {/vowel/,/d/,/f/'/n/} - input coding : as for experiment 1 except that only PEs pertaining to situation 81 were used, as the input to a single MLN. - architecture : two layers of respectively 40 and 20 hidden units followed by an output unit for each of the classes defined for the subtask. - database : 80 speakers (40 males, 40 females) each pronouncing two utterances of each letter and each digit. The first 70 speakers are used for training, the last 10 for testing. - results : subtask 1 : {/vowel/,/k/,/j/'/z/,/s/} preceding vowel lael. 4 % error on test set. Multi-Layered Networks for Coding Phonetic Features 229 subtask 2 : {/vowel/.!f!} preceding vowel 10/. o % error on test set. subtask 3 : {/vowel/.!tI} preceding vowel luI. o % error on test set. subtask 4 : {/vowel/,/d/,/f/,/n/} preceding vowel lal. 3 % error on test set. EXPERIMENT 3 - task: speaker-Independant vowel recognition to discrimine among ten vowels extracted from 10 english words {BEEP,PIT,BED,BAT,BUT,FUR,FAR,SAW,PUT,BOOT}. - input coding : the signal processing method used for this experiment is the one described in the section "ear model". The output of the Generalised Synchrony Detector (GSD) was collected every 5 msecs. 
and represented by a 40-coefficient vector. Vowels were automatically singled out by an algorithm proposed in [De Mori 85], and a linear interpolation procedure was used to reduce the variable number of frames per vowel to 10 (the first and the last 20 ms were not considered in the interpolation procedure).
- architecture: 400 input units (10 frames x 40 filters), a single hidden layer with 20 nodes, 10 output nodes for the ten vowels.
- database: speech material consisted of 5 pronunciations of the ten monosyllabic words by 13 speakers (7 male, 6 female) for training and 7 new speakers (3 male, 4 female) for testing.
- results: in 95.4% of the cases, correct hypotheses were generated with the highest evidence; in 98.5% of the cases correct hypotheses were found in the top two candidates, and in 99.4% of the cases in the top three candidates. The same experiment with FFT spectra instead of data from the ear model gave an 87% recognition rate in similar experimental conditions. The use of the ear model allowed spectra to be produced with a limited number of well-defined spectral lines. This represents a good use of speech knowledge, according to which formants are vowel parameters with low variance. The use of male and female voices allowed the network to perform an excellent generalization with samples from a limited number of speakers.

CONCLUSION

The preliminary experiments reported here on speaker normalization, combining multi-layered neural networks and speech recognition expertise, show promising results. For transient sounds, event-driven property extractors with variable resolutions in the time and frequency domains were used. For sonorant speech with formants, a new model of the human auditory system was preferred to the classical FFT or LPC representation as a front-end module.
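The vowel network of Experiment 3 is small enough to write out. Below is a Python sketch of its shape only: the weights are random placeholders (the paper trains them with back-propagation), and `resample_frames` is our guess at the linear-interpolation step that reduces a variable-length GSD sequence to 10 frames.

```python
import numpy as np

rng = np.random.default_rng(1)

def resample_frames(frames, n=10):
    """Linearly interpolate a variable-length (T, 40) GSD sequence to n frames."""
    T = frames.shape[0]
    pos = np.linspace(0, T - 1, n)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (pos - lo)[:, None]
    return (1 - w) * frames[lo] + w * frames[hi]

# 400 input units (10 frames x 40 GSD coefficients), 20 hidden, 10 outputs
W1 = rng.normal(0, 0.1, (400, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.1, (20, 10));  b2 = np.zeros(10)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def vowel_scores(gsd):                  # gsd: (T, 40), T >= 2
    x = resample_frames(gsd).reshape(-1)
    h = sigmoid(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)         # one activation per vowel class

scores = vowel_scores(rng.normal(size=(23, 40)))   # e.g. a 115 ms vowel
top3 = np.argsort(scores)[::-1][:3]     # scoring looks at top-1/2/3 candidates
```

The top-3 ranking mirrors how the results above are reported: the correct vowel counted as recognized if it appears among the one, two, or three most active output units.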
More experiments have to be carried out to build an integrated speaker-independent phoneme recognizer based on multiple modules and multiple front-end coding strategies. In order to tune this system, variable-depth analysis will be used. New small modules will be designed to specifically correct the deficiencies of trained modules. In addition, we consider strategies to perform recognition at the word level, using as input the sequence of outputs of the MLNs as time flows and new events are encountered. These strategies are also useful to handle slowly varying transitions such as those in diphthongs.

REFERENCES

Bengio Y., De Mori R. & Cardin R. (1988), "Data-Driven Execution of Multi-Layered Networks for Automatic Speech Recognition", Proceedings of AAAI 88, August 1988, Saint Paul, Minnesota, pp. 734-738.
Bengio Y. & De Mori R. (1988), "Speaker normalization and automatic speech recognition using spectral lines and neural networks", Proceedings of the Canadian Conference on Artificial Intelligence (CSCSI-88), Edmonton, Al., May 1988.
Delgutte B. (1980), "Representation of speech-like sounds in the discharge patterns of auditory-nerve fibers", Journal of the Acoustical Society of America, No. 68, pp. 843-857.
Delgutte B. & Kiang N.Y.S. (1984), "Speech coding in the auditory nerve", Journal of the Acoustical Society of America, No. 75, pp. 866-907.
De Mori R., Laface P. & Mong Y. (1985), "Parallel algorithms for syllable recognition in continuous speech", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-7, No. 1, pp. 56-69, 1985.
De Mori R., Merlo E., Palakal M. & Rouat J. (1987), "Use of procedural knowledge for automatic speech recognition", Proceedings of the Tenth International Joint Conference on Artificial Intelligence, Milan, August 1987, pp. 840-844.
De Mori R., Lam L. & Gilloux M.
(1987), "Learning and plan refinement in a knowledge-based system for automatic speech recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-9, N. 2, pp. 289-305. Multi-Layered Networks for Coding Phonetic Features 231 Kiang N.Y.S., Watanabe T., Thomas E.C. & Clark L.F., "Discharge patterns of single fibers in the cat's auditory nerve", Cambridge, MA: MIT Press. Rumelhart D.E., Hinton G.E. & Williams R.J. (1986), "Learning internal representations by error propagation", Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, pp. 318-362, MIT Press, 1986. Seneff S. (1984), "Pitch and spectral estimation of speech based on an auditory synchrony model", Proceedings of ICASSP-84, San Diego, CA. Seneff S. (1985), "Pitch and spectral analysis of speech based on an auditory synchrony model", RLE Technical Report 504, MIT. Seneff S. (1986), "A computational model for the peripheral auditory system: application to speech recognition research", Proceedings of ICASSP-86, Tokyo, pp. 37.8.1-37.8.4. Seneff S. (1988), "A joint synchrony/mean-rate model of auditory speech processing", Journal of Phonetics, January 1988. Sachs M.B. & Young E.D. (1979), "Representation of steady-state vowels in the temporal aspects of the discharge pattern of populations of auditory nerve fibers", Journal of the Acoustical Society of America, N. 66, pp. 1381-1403. Sachs M.B. & Young E.D. (1980), "Effects of nonlinearities on speech encoding in the auditory nerve", Journal of the Acoustical Society of America, N. 68, pp. 858-875. Sachs M.B. & Miller M.I. (1983), "Representation of stop consonants in the discharge patterns of auditory-nerve fibers", Journal of the Acoustical Society of America, N. 74, pp. 502-517. Sinex D.G. & Geisler C.D. (1983), "Responses of auditory-nerve fibers to consonant-vowel syllables", Journal of the Acoustical Society of America, N. 73, pp. 602-615. Waibel A.
(1988),"Modularity in Neural Networks for Speech Recognition", Proc. of the 1988 IEEE Conference on Neural Information Processing Systems, Denver, CO.
|
1988
|
43
|
128
|
ELECTRONIC RECEPTORS FOR TACTILE/HAPTIC SENSING Andreas G. Andreou Electrical and Computer Engineering The Johns Hopkins University Baltimore, MD 21218 ABSTRACT We discuss synthetic receptors for haptic sensing. These are based on magnetic field sensors (Hall effect structures) fabricated using standard CMOS technologies. These receptors, biased with a small permanent magnet, can detect the presence of ferro- or ferri-magnetic objects in the vicinity of the sensor. They can also detect the magnitude and direction of the magnetic field. INTRODUCTION The organizational structure and functioning of the sensory periphery in living beings has always been the subject of extensive research. Studies of the retina and the cochlea have revealed a great deal of information about the ways information is acquired and preprocessed; see for example the review chapters in [Barlow and Mollon, 1982]. Understanding of the principles underlying the operation of sensory channels can be utilized to develop machines that can sense their environment and function in it, much like living beings. Although vision is the principal sensory channel to the outside world, the "skin senses" can in some cases provide information that is not available through vision. It is interesting to note that performance in identifying objects through the haptic senses can be comparable to vision [Klatzky et al., 1985], although longer learning periods may be necessary. Tactually guided exploration and shape perception for robotic applications has been extensively investigated by [Hemami et al., 1988]. A number of synthetic sensory systems for vision and audition, based on physiological models of the retina and the cochlea, have been prototyped in VLSI by Mead and his coworkers [Mead, 1989]. The key to success in such endeavors is the ability to integrate transducers (such as light-sensitive devices) and local processing electronics on the same chip.
A technology that offers that possibility is silicon CMOS; furthermore, it is readily available to engineers and scientists through the MOSIS fabrication services [Cohen and Lewicki, 1981]. Receptor cells are structures in the sensory pathways whose purpose is to convert environmental signals into electrical activity (strictly speaking this is true for exteroceptors). (Haptic refers to the perception of vibration, skeletal conformation or position, and skin deformation. Tactile refers to the perceptual system that includes only the cutaneous senses of vibration and deformation.) The retina's rods and cones are examples of receptors for light stimuli, and the Pacinian corpuscles are mechanoreceptors that are sensitive to indentation or pressure on the skin. A synthetic receptor is thus the first and necessary functional element in any synthetic sensory system. For the development of vision systems, parasitic bipolar devices can be used [Mead, 1985] to perform the necessary light-to-electrical signal transduction as well as low-level signal amplification. On the other hand, implementation of synthetic receptors for tactile perception is still problematic [Barth et al., 1986]. Truly tactile transducers (devices sensitive to pressure stimuli) are not available in standard CMOS processes and are only found in specialized fabrication lines. However, devices that are sensitive to magnetic fields can be used to some extent as a substitute. In this paper, we discuss the development of electronic receptor elements that can be used in synthetic haptic/tactile sensing systems. Our receptors are devices which are sensitive to steady-state or varying magnetic fields and give electrical signals proportional to the magnetic induction. They can all be fabricated using standard silicon processes such as those offered by MOSIS. We show how our elements can be used for tactile and haptic sensing and compare their characteristics with the features of biological receptors.
The spatial resolution of the devices, their frequency response and dynamic range are more than adequate. One of our devices has nano-watt power dissipation and thus can be used in large arrays for high-resolution sensing. THE MAGNETIC-FIELD SENSORY PARADIGM In this section we show qualitatively how to implement synthetic sensory functions of the haptic and tactile senses by using magnetic fields and their interaction with ferri- or ferro-magnetic objects. This will motivate the more detailed discussion that follows on the transducer devices. DIRECT SENSING: In this mode of operation the transducer detects the magnitude and direction of the magnetic induction and converts it into an electrical signal. If the magnetic field is provided by the fringing fields of a small permanent magnet, the strength of the signal will fall off with the distance of the sensor from the magnet. Such an arrangement for one-dimensional sensing is shown in Figure 1. The experimental data are from the MOS Hall-voltage generator that is described in the next section. The magnetic field was provided by a cylindrical, rare-earth, permanent magnet with magnetic induction B0 = 250 mT, measured on the end surfaces. The vertical axis shows the signal from the transducer (Hall-voltage) and the horizontal axis represents the distance d of the sensor from the surface of the magnet. The above scheme can be used to sense the angular displacement between two fingers at a joint (inset b). By placing a small magnet on one side of the joint and the receptor on the other, the signal from the receptor can be conditioned and converted into a measure of the angle θ between the two fingers. The output of our receptor would thus correspond to the output from the Joint Fibers that originate in the Joint Capsule [Johnson, 1981]. Joint angle perception and manual stereognosis are mediated in part by these fibers.
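As a sketch of the joint-angle scheme above: the monotone voltage-versus-distance (or voltage-versus-angle) curve can be inverted through a stored calibration table. The calibration values below are invented for illustration; only the decreasing shape follows the measured curve of Figure 1.

```python
# Hypothetical calibration: Hall voltage (V) measured at known joint
# angles (degrees).  The values are invented for illustration; only the
# monotonically decreasing shape follows the curve of Figure 1.
CAL_ANGLE = [0, 15, 30, 45, 60, 75, 90]
CAL_VOLT = [2.60, 2.10, 1.65, 1.25, 0.90, 0.60, 0.35]

def angle_from_voltage(v):
    """Invert the monotone calibration curve by linear interpolation,
    clamping voltages that fall outside the calibrated range."""
    if v >= CAL_VOLT[0]:
        return CAL_ANGLE[0]
    if v <= CAL_VOLT[-1]:
        return CAL_ANGLE[-1]
    for i in range(len(CAL_VOLT) - 1):
        if CAL_VOLT[i] >= v >= CAL_VOLT[i + 1]:
            w = (CAL_VOLT[i] - v) / (CAL_VOLT[i] - CAL_VOLT[i + 1])
            return CAL_ANGLE[i] + w * (CAL_ANGLE[i + 1] - CAL_ANGLE[i])

assert angle_from_voltage(2.60) == 0
assert abs(angle_from_voltage(1.875) - 22.5) < 1e-9
```

The same table-inversion idea applies whether the calibration is expressed against distance or against joint angle.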
The above is just one example of how to use our integrated electronic receptor element for sensing skeletal conformation and position. Since there are no moving parts other than the joint itself, this is a reliable scheme. Figure 1. Direct Sensing Using an Integrated Hall-Voltage Transducer (Hall voltage, in volts, versus the distance of the magnet from the sensor, in mm; insets (a) and (b) show the magnet/sensor arrangement). PASSIVE SENSING In this mode of operation, the device or an array of devices is permanently biased with a uniform magnetic field whose uniformity can be disturbed in the presence of a ferro- or ferri-magnetic object in the vicinity. Signals from an array of such elements would provide information on the shape of the object that causes the disturbance of the magnetic field. Non-magnetic objects can be sensed if the surface of the chip is covered with a compliant membrane that has some magnetic properties. Note that our receptors can detect the presence of an object without having direct contact with the object itself. This may in some cases be advantageous. In this application, the magnetic field sensor would act more like the Ruffini organs, which exist in the deeper tissue and are primarily sensitive to static skin stretch. The above scheme could also be used for sensing dynamic stimuli, and there is a variety of receptor cells, such as the Pacinian and Meissner's corpuscles, that perform that function in biological tactile senses [Johnson, 1981]. SILICON TRANSDUCERS Magnetic field sensors can be integrated on silicon in a variety of forms. The transduction mechanism is due to some galvanomagnetic effect: the Hall effect or some related phenomenon [Andreou, 1986].
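The Hall-voltage transducer described in the next section has an output that is linear in both the bias current and the magnetic induction, V_H = K·I·B. A one-line sketch, where the constant K is an arbitrary placeholder rather than a measured device parameter:

```python
def hall_voltage(i_channel, b, k=100.0):
    """Hall voltage V_H = K * I * B: linear in the channel current
    i_channel (amperes) and the magnetic induction b (tesla).  K lumps
    device geometry and transport parameters; the value used here is an
    arbitrary placeholder, not a measured device constant."""
    return k * i_channel * b

# Reversing the field direction reverses the sign of the Hall voltage,
# as seen in the measured characteristics of Figure 2.
assert hall_voltage(1e-3, 0.25) == -hall_voltage(1e-3, -0.25)
```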
For an up-to-date review of integrated magnetic field sensors, as well as for the finer points of the discussion that follows in the next two sections, please refer to [Baltes and Popovic, 1986]. The simplest Hall device is the Hall-voltage sensor. This is a four-terminal device with two current terminals and two voltage terminals to measure the Hall-voltage (Figure 2). A magnetic field B in the direction perpendicular to the current flow sets up the Hall-voltage in the direction indicated in the figure. The Hall-voltage is such that it compensates for the Lorentz force on the charge carriers. In the experiment below, we have used a MOS Hall generator instead of a bulk device. The two current contacts are the source and drain of the device, and the voltage contacts are formed by small diffusion areas in the channel. The Hall-voltage is linearly related to the magnetic induction B and to the total current I in the sample between the drain and the source, which is controlled by the gate voltage Vgs. Figure 2. An Integrated MOS Hall-Voltage Sensor and Measured Characteristics (Hall voltage versus Vgs, for B = 250 mT and B = -250 mT; device from run M89P, Vds = 5 V). The constant of proportionality K is related to the dimensions of the device and to silicon electronic transport parameters. The device dimensions and biasing conditions are shown in the figure above. Note that the Hall voltage reverses when the direction of the magnetic field is reversed. The above device was fabricated in a standard 2-micron n-well CMOS process through MOSIS (production run M89P). The signal output of this sensor is a voltage and is relatively small. For the same biasing conditions,
the signals can be increased only if the channel is made shorter (increasing the transconductance of the device). On the other hand, when the length of the device approaches its width, the Hall-voltage is shorted out by the heavily doped source and drain regions and the signal degrades again. Some of the problems with the Hall-voltage sensor can be avoided if we use a device that gives a current as its output signal; this is discussed in the next section. THE HALL-CURRENT SENSOR Figure 3. The Hall-Current Sensor (Hall current versus gate voltage Vgs from 0.50 to 1.0 V, for B = 250 mT; the insets show the split-drain geometry and the biasing circuit). (Hall current is a misnomer, used also by this author, that exists in the literature to characterize the current flow in the sample when the Hall field is shorted out.) This current is a direct consequence of the Lorentz force and is perpendicular to the direction of current flow without a magnetic field. The Lorentz force F = qv × B depends on the velocity of the carriers in the sample and on the magnetic induction. Since this force is responsible for the transverse current flow, a more appropriate name for this sensor is a Lorentz-current sensor. Obviously, given a magnetic field strength, if we want a maximum signal from our device we want the carriers to have the maximum velocity in the channel. We achieve that by operating the devices in the saturation region, where the carriers traverse the channel at what is called the "saturation velocity". In this configuration we can also use short-channel devices (and consequently smaller devices) so that the high fields in the channel can be set with lower voltages. The geometry for such a sensor, as well as the biasing circuit, is shown in the insets of Figure 3. As with the previous sensor, the magnetic field is applied in the direction perpendicular to the channel (also perpendicular to the plane of the gate electrode).
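A toy numerical model of this split-drain sensor, assuming subthreshold operation as in the measurements that follow: the Hall (Lorentz) current is taken proportional to the total channel current, which is exponential in the gate voltage. All parameter values below are illustrative placeholders, not extracted device constants.

```python
import math

def subthreshold_current(vgs, i0=1e-15, n=1.5, vt=0.0258):
    """Subthreshold drain current, exponential in the gate voltage.
    The scale i0, slope factor n and thermal voltage vt are illustrative."""
    return i0 * math.exp(vgs / (n * vt))

def hall_current(vgs, b, kh=0.05):
    """Lorentz ("Hall") current: proportional to the total channel
    current and to the magnetic induction b; kh is a placeholder."""
    return kh * b * subthreshold_current(vgs)

# Because the channel current is exponential in Vgs, the Hall current
# plotted on a logarithmic axis is a straight line in Vgs (cf. Figure 3).
ratio = hall_current(0.9, 0.25) / hall_current(0.5, 0.25)
assert abs(math.log(ratio) - 0.4 / (1.5 * 0.0258)) < 1e-9
```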
This sensor is a MOS transistor with a gate and three other terminals: a single source and a split drain. The polysilicon gate is extended in the region between the two drains so that they are electrically independent of each other. The device is biased with a constant drain-source voltage (with the two drains at the same potential) and the currents from the two drains are monitored. The current-mode circuitry for the synthetic neurons described by Kwabena [Kwabena et al., 1989] can be employed for this function. We operate the device in the subthreshold region (denoted by gate-source voltages between 0.5 and 0.9 volts). On the application of a transverse magnetic field we observe an imbalance between the two drain currents. The Hall-current, plotted in Figure 3, is twice that current. Note that we can operate the device and observe an effect due to the magnetic field at very low currents, in the nano-amp range. The graph in Figure 3 also shows the dependence of the Hall-current on the gate voltage. This is a logarithmic relationship because the Hall-current is directly related to the total current in the sample, Ids, through a linear relationship; it is also linearly related to the magnetic field with a proportionality constant Kh. The derivation of a formula for the Hall-current can be found in [Andreou, 1986]. DISCUSSION The frequency response of the sensors described above is more than adequate for our applications. The frequency response of the Hall effect mechanism is many orders of magnitude higher than the requirements of our circuits, which have to work only in the Hz and kHz range. Another important criterion for the receptors is their spatial resolution. We have fabricated and tested devices with areas as small as 144 square microns. These are comparable to or even better than what is found in biological systems (10 to 150 receptors per square cm). On the other hand, it is more likely that our final "receptor" elements will be larger, partly
because of the additional electronics. The experimental data shown above are only for stimuli that are static, and are simply the raw output from the transducer itself. Clearly such output signals will not be of much use without some local processing. For example, adaptation mechanisms have to be included in our synthetic receptors for cutaneous sensing. The sensitivity of our transducer elements may be a problem. In that case, more sophisticated structures such as parasitic bipolar magnetotransistors or combinations of MOS Hall and bipolar devices can be employed for low-level signal amplification [Andreou, 1986]. Voltage offsets in the Hall-voltage sensor would also present some problems; the same is true for current imbalances due to fabrication imperfections in the Hall-current sensor. One of the most attractive properties of the Hall-current type sensor described in this paper is its ability to work with very low voltages and very low currents; one of our devices can operate with a bias voltage as low as 350 mV and a total current of 1 nA without compromising its sensitivity. Power dissipation may be a problem when large arrays of these devices are considered. Devices for sensing temperature can also be implemented in a standard silicon CMOS process. Thus a multisensor chip could be designed that would respond to more than one of the somatic senses. CONCLUSIONS We have demonstrated how to use the magnetic field as a paradigm for haptic sensing. We have also reported on a silicon magnetic field sensor that operates with power dissipation as low as 350 pW without any compromise in its performance. This is a dual-drain MOS Hall-current device operating in the subthreshold region. Our elements are only very simple "receptors" without any adaptation mechanisms or local processing; this will be the next step in our work.
Acknowledgments This work was funded by the Independent Research and Development program of the Johns Hopkins University, Applied Physics Laboratory. The support and personal interest of Robert Jenkins is gratefully acknowledged. The author has benefited from occasional contact with Ken Johnson of the Biomedical Engineering Department. References H.B. Barlow and J.D. Mollon, eds., The Senses, Cambridge University Press, Oxford, 1982. R.L. Klatzky, S.J. Lederman and V.A. Metzger, "Identifying objects by touch: An 'expert system'," Percept. Psychophys., vol. 37, 1985. H. Hemami, J.S. Bay and R.E. Goddard, "A Conceptual Framework for Tactually Guided Exploration and Shape Perception," IEEE Trans. Biomedical Engineering, vol. 35, no. 2, Feb. 1988. C.A. Mead, Analog VLSI and Neural Systems, Addison-Wesley (in press). D. Cohen and G. Lewicki, "MOSIS - The ARPA silicon broker," Proc. of the Second Caltech Conference on VLSI, Pasadena, California, 1981. C.A. Mead, "A sensitive electronic photoreceptor," 1985 Chapel Hill Conference on VLSI, Chapel Hill, 1985. P.W. Barth, M.J. Zdeblick, Z. Kuc and P.A. Beck, "Flexible tactile arrays for robotics: architectural robustness and yield considerations," Tech. Digest, IEEE Solid State Sensors Workshop, Hilton Head Island, 1986. A.G. Andreou, "The Hall effect and related phenomena in microelectronic devices," Ph.D. Dissertation, The Johns Hopkins University, Baltimore, MD, 1986. H.P. Baltes and R.S. Popovic, "Integrated Semiconductor Magnetic Field Sensors," Proceedings of the IEEE, vol. 74, no. 8, Aug. 1986. K.O. Johnson and G.D. Lamb, "Neural mechanisms of spatial discrimination: Neural patterns evoked by Braille-like dot patterns in the monkey," J. of Physiology, vol. 310, pp. 117-144, 1981. K.A. Boahen, A.G. Andreou, P.O. Pouliquen and A. Pavasovic, "Architectures for Associative Memories Using Current-Mode Analog MOS Circuits," Proceedings of the Decennial Caltech Conference on VLSI, C. Seitz, ed., MIT Press, 1989.
Appendix Summaries of Invited Talks
712 A PROGRAMMABLE ANALOG NEURAL COMPUTER AND SIMULATOR Paul Mueller*, Jan Van der Spiegel, David Blackman*, Timothy Chiu, Thomas Clare, Joseph Dao, Christopher Donham, Tzu-pu Hsieh, Marc Loinaz *Dept. of Biochem. Biophys., Dept. of Electrical Engineering, University of Pennsylvania, Philadelphia, PA. ABSTRACT This report describes the design of a programmable general-purpose analog neural computer and simulator. It is intended primarily for real-world, real-time computations such as analysis of visual or acoustical patterns, robotics, and the development of special-purpose neural nets. The machine is scalable and composed of interconnected modules containing arrays of neurons, modifiable synapses and switches. It runs entirely in analog mode, but connection architecture, synaptic gains and time constants as well as neuron parameters are set digitally. Each neuron has a limited number of inputs and can be connected to any, but not all, other neurons. For the determination of synaptic gains and the implementation of learning algorithms, the neuron outputs are multiplexed, A/D converted and stored in digital memory. Even at a moderate size of 10^3 to 10^5 neurons, computational speed is expected to exceed that of any current digital computer. OVERVIEW The machine described in this paper is intended to serve as a general-purpose programmable neural analog computer and simulator. Its architecture is loosely based on the cerebral cortex in the sense that there are separate neurons, axons and synapses and that each neuron can receive only a limited number of inputs. However, in contrast to the biological system, the connections can be modified by external control, permitting exploration of different architectures in addition to adjustment of synaptic weights and neuron parameters. The general architecture of the computer is shown in Fig. 1. The machine contains large numbers of the following separate elements: neurons, synapses, routing switches and connection lines.
Arrays of these elements are fabricated on VLSI chips which are mounted on planar chip carriers, each of which forms a separate module. These modules are connected directly to neighboring modules. Neuron arrays are arranged in rows and columns and are surrounded by synaptic and axon arrays. The machine runs entirely in analog mode. However, connection architectures, synaptic gains and neuron parameters such as thresholds and time constants are set by a digital computer. For determining synaptic weights in a learning mode, time segments of the outputs from neurons are multiplexed, digitized and stored in digital memory. The modular design allows expansion to any degree, and at moderate to large size, i.e. 10^3 to 10^5 neurons, operational speed would exceed that of any currently available digital computer. Figure 1. Layout and general architecture (legend: switches, lines, synapses, neurons). The machine is composed of different modules, shown here as squares. Each module contains on a VLSI chip an array of components (neurons, synapses or switches) and their control circuits. Our prototype design calls for 50 neuron modules for a total of 800 neurons, each having 64 synapses. The insert shows the direction of data flow through the modules. Outputs from each neuron leave north and south and are routed through the switch modules east and west and into the synapse modules from north and south. They can also bypass the synapse modules north and south. Input to the neurons through the synapses is from east and west. Power and digital control lines run north and south. THE NEURON MODULES Each neuron chip contains 16 neurons, an analog multiplexer and control logic (see Figs. 2 & 3). Input-output relations of the neurons are idealized versions of a typical biological neuron. Each unit has an adjustable threshold (bias), an adjustable minimum output value at threshold and a maximum output (see Fig. 4).
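The idealized transfer characteristic of such a neuron (zero below threshold, a jump to the minimum output at threshold, a linear region, then saturation) can be sketched as follows; the 1.5 V threshold and 1 V minimum output follow the Fig. 4 example, while the slope and maximum output are illustrative assumptions:

```python
def neuron_output(v_in, vt=1.5, e_x=1.0, e_mo=4.0, gain=1.0):
    """Idealized neuron transfer characteristic: zero below the
    threshold vt, a jump to the minimum output e_x at threshold, a
    linear region above it, and saturation at the maximum output e_mo.
    vt and e_x follow the Fig. 4 example; e_mo and gain are illustrative."""
    if v_in < vt:
        return 0.0
    return min(e_mo, e_x + gain * (v_in - vt))

assert neuron_output(1.0) == 0.0    # below threshold
assert neuron_output(1.5) == 1.0    # minimum output at threshold
assert neuron_output(2.5) == 2.0    # linear region
assert neuron_output(9.9) == 4.0    # saturated at the maximum output
```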
Output time constants are selected on the switch chips. The neuron is based on an earlier design which used discrete components (Mueller and Lazzaro, 1986). Inputs to each neuron come from synapse chips east and west (SIR, SIL); outputs (NO) go to switch chips north and south. Each neuron has a second input that sets the minimum output at threshold; this input is common to all neurons on the chip and selected through a separate synapse line. The threshold is set from one of the synapses connected to a fixed voltage. An analog multiplexer provides neuron output to a common line, OM, which connects to an A/D converter. Figure 2. Block diagram of the neuron chip containing 16 neurons (inputs SIL1-SIL16 and SIR1-SIR16, outputs NO1-NO16, and an analog multiplexer on a common output line). Figure 3. Photograph of a test chip containing 5 neurons. A more recent version has only one output sign. Figure 4. Transfer characteristic obtained from a neuron on the chip shown in Fig. 3 (output voltage versus sum of inputs, in volts). Each unit has an adjustable threshold, Vt, which was set here to 1.5 V; a linear transfer region above threshold; an adjustable minimum output at threshold, Ex, set to 1 V; and a maximum output, Emo. THE SYNAPSE MODULES Each synapse chip contains a 32 x 16 array of synapses. The synaptic gain of each synapse is set by serial input from the computer and is stored at each synapse. The dynamic range of the synapse gains covers the range from 0 to 10 with 5-bit resolution; a sixth bit determines the sign. The gains are implemented by current mirrors which scale the neuron output after it has been converted from a voltage to a current. The modifiable synapse designs reported in the literature use either analog or digital signals to set the gains (Schwartz et
al., 1989; Raffel et al., 1987; Alspector and Allen, 1987). We chose the latter method because of its greater reproducibility and because direct analog setting of the gains from the neuron outputs would require prior knowledge of, and commitment to, a particular learning algorithm. The layout and performance of the synapse module are shown in Figs. 5-7. As seen in Fig. 7a, the synaptic transfer function is linear from 0 to 4 V. The use of current mirrors permits arbitrary scaling of the synaptic gains (weights), with the trade-off between range and resolution limited to 5 bits. Our current design calls for a minimum gain of 1/32 and a maximum of 10. The lower end of the dynamic range is determined by the number of possible inputs per neuron, which when active should not drive the neuron output to its limit, whereas the high gain values are needed in situations where a single or very few synapses must be effective, such as in the copying of activity from one neuron to another or for veto inhibition. Figure 5. Diagram of the synapse module. Each synapse gain is set by a 5-bit word stored in local memory. The digital nature of the synaptic gain control does not allow straightforward implementation of a logarithmic gain scale. Fig. 7b shows two possible relations between digital code and synaptic gain. In one case the total gain is the sum of 5 individual gains, each controlled by one bit. This leads inevitably to jumps in the gain curve. In a second case a linear 3-bit gain is multiplied by four different constants controlled by the 4th and 5th bit. This scheme affords a better approximation to a logarithmic scale. So far we have implemented only the first scheme. Although the resolution of an individual synapse is limited to 5 bits, several synapses driven by one neuron can be combined through switching, permitting greater resolution and dynamic range.
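The two digital gain codings discussed above can be compared in a few lines. The specific bit weights and scale constants below are illustrative choices spanning roughly the 1/32-to-10 range, not the values used on the chip:

```python
# Scheme (a): each of the 5 bits switches in a fixed gain; the code's
# gain is the sum of the enabled ones (this yields jumps in the curve).
# Scheme (b): the low 3 bits form a linear mantissa multiplied by one of
# four constants selected by the 4th and 5th bits (closer to a log scale).
# All weights and constants below are illustrative, not the chip's values.

BIT_GAINS = [1 / 32, 3 / 32, 9 / 32, 27 / 32, 81 / 32]
EXP_SCALES = [1 / 32, 1 / 8, 1 / 2, 10 / 7]

def gain_sum_of_bits(code):
    """5-bit code -> sum of individually bit-controlled gains."""
    return sum(g for i, g in enumerate(BIT_GAINS) if (code >> i) & 1)

def gain_mantissa_exponent(code):
    """5-bit code -> (3-bit linear mantissa) * (2-bit-selected constant)."""
    return (code & 0b111) * EXP_SCALES[code >> 3]

assert gain_sum_of_bits(0b00001) == 1 / 32                 # smallest step
assert abs(gain_mantissa_exponent(0b11111) - 10.0) < 1e-9  # full scale
```

The mantissa/exponent form makes the ratio between successive codes more uniform, which is why it tracks a logarithmic scale better than the bit-sum form.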
The memory is implemented as a quasi-dynamic shift register that reads the gain data during the programming phase. Voltage-to-current converters transform the neuron output (NI) into a current. IConv are current mirrors that scale the currents with 5-bit resolution. The weighted currents are summed on a common line to the neuron input (SO). Figure 6. Photograph of a synapse test chip. Figure 7a. Synapse transfer characteristics for three different settings (output current versus input voltage, for weights from 1/32 to 10). The data were obtained from the chip shown in Fig. 6. Figure 7b. Digital code vs. synaptic gain; squares are the current design, triangles use a two-bit exponent. THE SWITCH MODULES The switch modules serve to route the signals between neurons and allow changes to the connection architecture. Each module contains a 32 x 32 cross-point array of analog switches which are set by serial digital input. There is also a set of serial switches that can disconnect input and output lines. In addition to switches, the modules contain circuits which control the time constants of the synapse transfer function (see Figs. 8 & 9). Figure 8. Diagram of the switching fabric. Squares and circles represent switch cells which connect the horizontal and vertical conductors or cut the conductors. The units labeled T represent adjustable time constants. The switch performance is summarized in Table 1.
Table 1. Switch Chip Performance
Process: 3µ CMOS
On resistance: < 3 kOhm
Off resistance: > 1 TOhm
Input capacitance: < 1 pF
Array download time: 2 µs
Memory/switch size: 75µ x 90µ
ADJUSTMENT OF SYNAPTIC TIME CONSTANTS For the analysis or generation of temporal patterns as they occur in motion or speech, adjustable time constants of synaptic transfer must be available (Mueller, 1988). Low-pass filtering of the input signal to the synapse, with 4-bit control of the time constant over a range of 5 to 500 ms, is sufficient to deal with real-world data. By combining the low-passed input with a direct input of opposite sign, both originating from the same neuron, one can obtain the typical "ON" and "OFF" responses which serve as measures of time after the beginning and end of events and are common in biological systems. Several designs are being considered for implementing the variable low-pass filter. Since not all synapses need to have this feature, the circuit will be placed on only a limited number of lines on the switch chip. PACKAGING All chips are mounted on identical quad surface-mount carriers. Input and output lines are arranged at right angles with identical leads on opposite sides. The chip carriers are mounted on boards. SOFTWARE CONTROL AND OPERATION Connections, synaptic gains and time constants are set from the central computer, either manually or from libraries containing connection architectures for specific tasks. Eventually we envision developing a macro language that would generate subsystems and link them into a larger architecture. Examples are feature-specific receptor fields, temporal pattern analyzers, or circuits for motion control. The connection routing is done under graphic control or through routing routines as they are used in circuit board design.
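The "ON" response described above (a direct input combined with a low-passed copy of opposite sign) can be sketched with a first-order filter; the time constant and sampling step below are illustrative:

```python
def low_pass(signal, tau, dt=0.001):
    """First-order low-pass filter with time constant tau (seconds),
    sampled every dt seconds."""
    alpha = dt / (tau + dt)
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def on_response(signal, tau=0.05, dt=0.001):
    """Direct input minus its own low-passed copy (the opposite-sign
    combination): a transient that peaks at stimulus onset and decays."""
    return [x - l for x, l in zip(signal, low_pass(signal, tau, dt))]

step = [0.0] * 100 + [1.0] * 400     # stimulus switching on at t = 100 ms
resp = on_response(step)
assert resp[99] == 0.0               # silent before the event
assert resp[100] > 0.9               # strong transient at onset
assert resp[499] < 0.01              # decays back after several tau
```

The mirror-image "OFF" response follows by swapping the signs of the two inputs, so the transient appears when the stimulus ends.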
The primary areas of application include real-world real-time or compressed-time pattern analysis, robotics, the design of dedicated neural circuits and the exploration of different learning algorithms. Input to the machine can come from sensory transducer arrays such as an electronic retina, cochlea (Mead, 1989) or tactile sensors. For other computational tasks, input is provided by the central digital computer through activation of selected neuron populations via threshold control. It might seem that the limited number of inputs per neuron restricts the computations performed by any one neuron. However, the results obtained by one neuron can be copied through a unity-gain synapse to another neuron which receives the appropriate additional inputs. In performance mode the machine could exceed by orders of magnitude the computational speed of any currently available digital computer. A rough estimate of attainable speed can be made as follows: a network with 10^3 neurons, each receiving 100 inputs, with synaptic transfer time constants ranging from 1 ms to 1 s, can be described by 10^3 simultaneous differential equations. Assuming an average step length of 10 µs and 10 iterations per step, real-time numerical solutions of this system on a digital machine would require approximately 10^11 FLOPS. Microsecond time constants and the computation of threshold non-linearities would require a computational speed equivalent to > 10^12 FLOPS on a digital computer, and this seems a reasonable estimate of the computational power of our machine. Furthermore, in contrast to digital multiprocessors, computational power would scale linearly with the number of neurons and connections. Acknowledgements Supported by grants from ONR (N00014-89-J-1249) and NSF (EET 166685). References Alspector, J., Allen, R.B. A neuromorphic VLSI learning system. Advanced Research in VLSI, Proceedings of the 1987 Stanford Conference (1987). Mead, C.
Analog VLSI and Neural Systems. Addison-Wesley, Reading, MA (1989). Mueller, P. Computation of Temporal Pattern Primitives in a Neural Net for Speech Recognition. International Neural Network Society, First Annual Meeting, Boston, MA (1988). Mueller, P., Lazzaro, J. A Machine for Neural Computation of Acoustical Patterns. AIP Conference Proceedings 151:321-326 (1986). Raffel, J.I., Mann, J.R., Berger, R., Soares, A.M., Gilbert, S. A Generic Architecture for Wafer-Scale Neuromorphic Systems. IEEE First International Conference on Neural Networks, San Diego, CA (1987). Schwartz, D., Howard, R., Hubbard, W. A Programmable Analog Neural Network Chip. J. of Solid State Circuits (to be published).
| 1988 | 45 | 130 |
256 AN INFORMATION THEORETIC APPROACH TO RULE-BASED CONNECTIONIST EXPERT SYSTEMS Rodney M. Goodman, John W. Miller Department of Electrical Engineering Caltech 116-81 Pasadena, CA 91125 Padhraic Smyth Communication Systems Research Jet Propulsion Laboratories 238-420 4800 Oak Grove Drive Pasadena, CA 91109 Abstract We discuss in this paper architectures for executing probabilistic rule-bases in a parallel manner, using as a theoretical basis recently introduced information-theoretic models. We will begin by describing our (non-neural) learning algorithm and theory of quantitative rule modelling, followed by a discussion on the exact nature of two particular models. Finally we work through an example of our approach, going from database to rules to inference network, and compare the network's performance with the theoretical limits for specific problems. Introduction With the advent of relatively cheap mass storage devices it is common in many domains to maintain large databases or logs of data, e.g., in telecommunications, medicine, finance, etc. The question naturally arises as to whether we can extract models from the data in an automated manner and use these models as the basis for an autonomous rational agent in the given domain, i.e., automatically generate "expert systems" from data. There are really two aspects to this problem: firstly learning a model and, secondly, performing inference using this model. What we propose in this paper is a rather novel and hybrid approach to learning and inference. Essentially we combine the qualitative knowledge representation ideas of AI with the distributed computational advantages of connectionist models, using an underlying theoretical basis tied to information theory. The knowledge representation formalism we adopt is the rule-based representation, a scheme which is well supported by cognitive scientists and AI researchers for modeling higher level symbolic reasoning tasks.
We have recently developed an information-theoretic algorithm called ITRULE which extracts an optimal set of probabilistic rules from a given data set [1, 2, 3]. It must be emphasised that we do not use any form of neural learning such as backpropagation in our approach. To put it simply, the ITRULE learning algorithm is far more computationally direct and better understood than (say) backpropagation for this particular learning task of finding the most informative individual rules without reference to their collective properties. Performing useful inference with this model or set of rules is quite a difficult problem. Exact theoretical schemes such as maximum entropy (ME) are intractable for real-time applications. We have been investigating schemes where the rules represent links on a directed graph and the nodes correspond to propositions, i.e., variable-value pairs. Our approach is characterised by loosely connected, multiple path (arbitrary topology) graph structures, with nodes performing local non-linear decisions as to their true state based on both supporting evidence and their a priori bias. What we have in fact is a recurrent neural network. What is different about this approach compared to a standard connectionist model as learned by a weight-adaptation algorithm such as BP? The difference lies in the semantics of the representation [4]. Weights such as log-odds ratios based on log transformations of probabilities possess a clear meaning to the user, as indeed do the nodes themselves. This explicit representation of knowledge is a key requirement for any system which purports to perform reasoning, probabilistic or otherwise.
Conversely, the lack of explicit knowledge representation in most current connectionist approaches, i.e., the "black box" syndrome, is a major limitation to their application in critical domains where user-confidence and explanation facilities are key criteria for deployment in the field. Learning the model Consider that we have M observations or samples available, e.g., the number of items in a database. Each sample datum is described in terms of N attributes or features, which can assume values in a corresponding set of N discrete alphabets. For example our data might be described in the form of 10-component binary vectors. The requirement for discrete rather than continuous-valued attributes is dictated by the very nature of the rule-based representation. In addition it is important to note that we do not assume that the sample data is somehow exhaustive and "correct." There is a tendency in both the neural network and AI learning literature to analyse learning in terms of learning a Boolean function from a truth table. The implicit assumption is often made that given enough samples, and a good enough learning algorithm, we can always learn the function exactly. This is a fallacy, since it depends on the feature representation. For any problem of interest there are always hidden causes with a consequent non-zero Bayes misclassification risk, i.e., the function is dependent on non-observable features (unseen columns of the truth table). Only in artificial problems such as game playing is "perfect" classification possible; in practical problems nature hides the real features. This phenomenon is well known in the statistical pattern recognition literature and renders invalid those schemes which simply try to perfectly classify or memorise the training data. We use the following simple model of a rule, i.e., IF Y = y THEN X = x with probability p, where X and Y are two attributes (random variables) with "x" and "y" being values in their respective discrete alphabets.
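The rule model just stated can be sketched directly. A minimal Python illustration (the attribute names and data below are hypothetical), estimating the rule probability p as the empirical frequency of X = x among samples where Y = y:

```python
def rule_probability(samples, lhs, rhs):
    """Estimate p for 'IF Y=y THEN X=x with probability p'.

    samples: list of dicts mapping attribute name -> discrete value.
    lhs, rhs: (attribute, value) pairs for Y=y and X=x.
    """
    y_attr, y_val = lhs
    x_attr, x_val = rhs
    fired = [s for s in samples if s[y_attr] == y_val]  # rule fires
    if not fired:
        return 0.0
    return sum(1 for s in fired if s[x_attr] == x_val) / len(fired)

# Hypothetical data over two binary attributes.
data = [
    {"Y": "y", "X": "x"}, {"Y": "y", "X": "x"},
    {"Y": "y", "X": "not_x"}, {"Y": "not_y", "X": "x"},
]
p = rule_probability(data, ("Y", "y"), ("X", "x"))  # fires on 3 samples
```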
Given sample data as described earlier we pose the problem as follows: can we find the "best" rules from a given data set, say the K best rules? We will refer to this problem as that of generalised rule induction, in order to distinguish it from the special case of deriving classification rules. Clearly we require both a preference measure to rank the rules and a learning algorithm which uses the preference measure to find the K best rules. Let us define the information which the event y yields about the variable X, say j(X; y). Based on the requirements that j(X; y) is both non-negative and that its expectation with respect to Y equals the average mutual information I(X; Y), Blachman [5] showed that the only such function is the j-measure, which is defined as

    j(X; y) = p(x|y) log(p(x|y)/p(x)) + p(x̄|y) log(p(x̄|y)/p(x̄))

More recently we have shown that j(X; y) possesses unique properties as a rule information measure [6]. In general the j-measure is the average change in bits required to specify X between the a priori distribution p(X) and the a posteriori distribution p(X|y). It can also be interpreted as a special case of the cross-entropy or binary discrimination (Kullback [7]) between these two distributions. We further define J(X; y) as the average information content, where

    J(X; y) = p(y) · j(X; y)

J(X; y) simply weights the instantaneous rule information j(X; y) by the probability that the left-hand side will occur, i.e., that the rule will be fired. This definition is motivated by considerations of learning useful rules in a resource-constrained environment. A rule with high information content must be both a good predictor and have a reasonable probability of being fired, i.e., p(y) cannot be too small. Interestingly enough our definition of J(X; y) possesses a well-defined interpretation in terms of classical induction theory, trading off hypothesis simplicity with the goodness-of-fit of the hypothesis to the data [8].
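Both measures can be computed directly for a binary-valued X. A minimal sketch (base-2 logarithms; probabilities are assumed to lie strictly between 0 and 1):

```python
from math import log2

def j_measure(p_x, p_x_given_y):
    """j(X; y) for binary X: cross-entropy between the posterior
    p(X|y) and the prior p(X)."""
    q_x, q_xy = 1.0 - p_x, 1.0 - p_x_given_y
    return (p_x_given_y * log2(p_x_given_y / p_x)
            + q_xy * log2(q_xy / q_x))

def J_measure(p_y, p_x, p_x_given_y):
    """J(X; y) = p(y) * j(X; y): average information content of the rule."""
    return p_y * j_measure(p_x, p_x_given_y)

# A rule that leaves the prior unchanged carries no information (j = 0),
# while one that shifts the posterior carries positive information.
```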
The ITRULE algorithm [1, 2, 3] uses the J-measure to derive the most informative set of rules from an input data set. The algorithm produces a set of K probabilistic rules, ranked in order of decreasing information content. The parameter K may be user-defined or determined via some statistical significance test based on the size of the sample data set available. The algorithm searches the space of possible rules, trading off generality of the rules with their predictiveness, and using information-theoretic bounds to constrain the search space. Using the Model to Perform Inference Having learned the model we now have at our disposal a set of lower order constraints on the N-th order joint distribution in the form of probabilistic rules. This is our a priori model. In a typical inference situation we are given some initial conditions (i.e., some nodes are clamped), we are allowed to measure the state of some other nodes (possibly at a cost), and we wish to infer the state or probability of one or more goal propositions or nodes from the available evidence. It is important to note that this is a much more difficult and general problem than classification of a single, fixed, goal variable, since both the initial conditions and goal propositions may vary considerably from one problem instance to the next. This is the inference problem: determining an a posteriori distribution in the face of incomplete and uncertain information. The exact maximum entropy solution to this problem is intractable and, despite the elegance of the problem formulation, stochastic relaxation techniques (Geman [9]) are at present impractical for real-time robust applications. Our motivation then is to perform an approximation to exact Bayesian inference in a robust manner. With this in mind we have developed two particular models, which we describe as the hypothesis testing network and the uncertainty network.
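The rule-ranking step can be illustrated with a brute-force sketch: enumerate every single-attribute rule and rank by J. This is only a stand-in for ITRULE itself, which prunes the search with information-theoretic bounds; the attribute names and data below are hypothetical:

```python
from math import log2

def j_measure(p_x, p_xy):
    """j(X; y) for binary-valued X, given prior p(x) and posterior p(x|y)."""
    return (p_xy * log2(p_xy / p_x)
            + (1.0 - p_xy) * log2((1.0 - p_xy) / (1.0 - p_x)))

def top_k_rules(samples, attributes, k):
    """Rank every rule 'IF Y=y THEN X=x' by J(X; y) = p(y) * j(X; y)."""
    n, ranked = len(samples), []
    for y_attr in attributes:
        for x_attr in attributes:
            if x_attr == y_attr:
                continue
            for y_val in {s[y_attr] for s in samples}:
                fired = [s for s in samples if s[y_attr] == y_val]
                p_y = len(fired) / n
                for x_val in {s[x_attr] for s in samples}:
                    p_x = sum(s[x_attr] == x_val for s in samples) / n
                    p_xy = sum(s[x_attr] == x_val for s in fired) / len(fired)
                    # skip degenerate probabilities (log of 0 undefined)
                    if 0.0 < p_x < 1.0 and 0.0 < p_xy < 1.0:
                        ranked.append((p_y * j_measure(p_x, p_xy),
                                       (y_attr, y_val, x_attr, x_val)))
    ranked.sort(reverse=True)
    return ranked[:k]

# Hypothetical binary data: A = 1 tends to co-occur with B = 1.
data = ([{"A": 1, "B": 1}] * 3 + [{"A": 1, "B": 0}]
        + [{"A": 0, "B": 0}] * 3 + [{"A": 0, "B": 1}])
top = top_k_rules(data, ["A", "B"], 2)
```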
Principles of the Hypothesis Testing Network In the first model under consideration each directed link from y to x is assigned a weight corresponding to the weight of evidence of y on x. This idea is not necessarily new, although our interpretation and approach is different to previous work [10, 4]. Hence we have

    W_xy = log(p(x|y)/p(x)) - log(p(x̄|y)/p(x̄))    and    R_x = -log(p(x)/p(x̄))

and the node x is assigned a threshold term corresponding to its a priori bias. We use a sigmoidal activation function, i.e.,

    a(x) = 1 / (1 + e^(-(ΔE_x - R_x)/T)),    where    ΔE_x = Σ_{i=1..n} W_{xy_i} · a(y_i)

based on multiple binary inputs y_1 ... y_n to x. Let S be the set of all y_i which are hypothesised true (i.e., a(y_i) = 1), so that

    ΔE_x - R_x = log(p(x)/p(x̄)) + Σ_{y_i ∈ S} [ log(p(x|y_i)/p(x)) - log(p(x̄|y_i)/p(x̄)) ]

If each y_i is conditionally independent given x then we can write

    p(x|S)/p(x̄|S) = (p(x)/p(x̄)) · Π_{y_i ∈ S} (p(x|y_i)/p(x)) / (p(x̄|y_i)/p(x̄))

Therefore the updating rule for conditionally independent y_i is:

    T · log( a(x)/(1 - a(x)) ) = log( p(x|S)/(1 - p(x|S)) )

Hence a(x) > 1/2 iff p(x|S) > 1/2, and if T = 1, a(x) is exactly p(x|S). In terms of a hypothesis test, a(x) is chosen true iff

    Σ_{y_i ∈ S} [ log(p(x|y_i)/p(x)) - log(p(x̄|y_i)/p(x̄)) ] > -log(p(x)/p(x̄))

Since this describes the Neyman-Pearson decision region for independent measurements (the evidence y_i) with R_x = -log(p(x)/p(x̄)) [11], this model can be interpreted as a distributed form of hypothesis testing. Principles of the Uncertainty Network For this model we defined the weight on a directed link from y_i to x as

    W_{xy_i} = s_i · j(X; y_i) = s_i · [ p(x|y_i) log(p(x|y_i)/p(x)) + p(x̄|y_i) log(p(x̄|y_i)/p(x̄)) ]

where s_i = ±1 and the threshold is the same as in the hypothesis model. We can interpret W_{xy_i} as the change in bits required to specify the a posteriori distribution of x. If p(x|y_i) > p(x), W_{xy_i} has positive support for x, i.e., s_i = +1. If p(x|y_i) < p(x), W_{xy_i} has negative support for x, i.e., s_i = -1.
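As a numeric check on the hypothesis testing network: with T = 1 and all evidence clamped true, the sigmoid of the prior log-odds plus the summed weights of evidence reproduces the exact Bayes posterior p(x|S) under conditional independence. The prior and likelihoods below are made-up values, not drawn from the paper's data:

```python
from math import exp, log

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# Hypothetical generative model: prior p(x) and, for each evidence
# source y_i, the likelihoods (p(y_i | x), p(y_i | not-x)).
p_x = 0.2
likelihoods = [(0.9, 0.3), (0.7, 0.4)]

# Node input: prior log-odds (-R_x) plus the weight of evidence W_xy
# for every y_i clamped true.
arg = log(p_x / (1.0 - p_x))
for p_y_x, p_y_nx in likelihoods:
    p_y = p_y_x * p_x + p_y_nx * (1.0 - p_x)
    p_x_y = p_y_x * p_x / p_y                 # p(x | y_i) via Bayes
    arg += log(p_x_y / p_x) - log((1.0 - p_x_y) / (1.0 - p_x))
activation = sigmoid(arg)                      # a(x) with T = 1

# Exact Bayes posterior p(x | S) under conditional independence.
num, den = p_x, 1.0 - p_x
for p_y_x, p_y_nx in likelihoods:
    num *= p_y_x
    den *= p_y_nx
posterior = num / (num + den)
```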
If we interpret the activation a(y_i) as an estimator p̂(y_i) of p(y_i), then for multiple inputs

    ΔE_x = Σ_i p̂(y_i) · s_i · [ p(x|y_i) log(p(x|y_i)/p(x)) + p(x̄|y_i) log(p(x̄|y_i)/p(x̄)) ]

This sum over input links weighted by activation functions can be interpreted as the total directional change in bits required to specify x, as calculated locally by the node x. One can normalise ΔE_x to obtain an average change in bits by dividing by a suitable temperature T. The node x can make a local decision by recovering p(x) from an inverse J-measure transformation of ΔE_x (the sigmoid is an approximation to this inverse function). Experimental Results and Conclusions In this section we show how rules can be generated from example data and automatically incorporated into a parallel inference network that takes the form of a multi-layer neural network. The network can then be "run" to perform parallel inference. The domain we consider is that of a financial database of mutual funds, using published statistical data [12]. The approach is, however, typical of many different real-world domains. Figure 1 shows a portion of a set of typical raw data on no-load mutual funds. Each line is an instance of a fund (with name omitted), and each column represents an attribute (or feature) of the fund. Attributes can be numerical or categorical. Typical categorical attributes are the fund type, which reflects the investment objectives of the fund (growth, growth and income, balanced, and aggressive growth), and a typical numerical attribute is the five year return on investment expressed as a percentage. There are a total of 88 fund examples in this data set. From this raw data a second quantized set of the 88 examples is produced to serve as the input to ITRULE (Figure 2). In this example the attributes have been categorised to binary values so that they can be directly implemented as binary neurons. The ITRULE software then processes this table to produce a set of rules.
The rules are ranked in order of decreasing information according to the J-measure. Figure 3 shows a portion (the top ten rules) of the ITRULE output for the mutual fund data set. The hypothesis test log-likelihood metric h(X; y), the instantaneous j-measure j(X; y), and the average J-measure J(X; y) are all shown, together with the rule transition probability p(x|y). In order to perform inference with the ITRULE rules we need to map the rules into a neural inference net. This is automatically done by ITRULE, which generates a network file that can be loaded into a neural network simulator. Thus rule information metrics become connection weights. Figure 4 shows a typical network derived from the ITRULE rule output for the mutual funds data. For clarity not all the connections are shown. The architecture consists of two layers of neurons (or "units"): an input layer and an output layer, both of which have an activation within the range [0, 1]. There is one unit in the input layer (and a corresponding unit in the output layer) for each attribute in the mutual funds data. The output feeds back to the input layer, and each layer is synchronously updated. The output units can be considered to be the right-hand sides of the rules and thus receive inputs from many rules, where the strength of the connection is the rule's metric. The output units implement a sigmoid activation function on the sum of the inputs, and thus compute an activation which is an estimator of the right-hand side a posteriori attribute value. The input units simply pass this value on to the output layer and thus have a linear activation. To perform inference on the network, a probe vector of attribute values is loaded into the input and output layers. Known values are clamped and cannot change while unknown or desired attribute values are free to change.
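The clamp-and-relax procedure can be sketched as follows. This is a toy three-unit network with invented weights, not the mutual-funds network itself; clamped units hold their probe values while free units are synchronously updated through the sigmoid:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def relax(weights, bias, clamped, cycles=50):
    """Synchronously update unclamped units for a fixed number of cycles.

    weights[j][i] is the connection from unit i to unit j;
    clamped maps unit index -> fixed activation for known attributes.
    """
    n = len(bias)
    act = [clamped.get(i, 0.5) for i in range(n)]  # unknowns start neutral
    for _ in range(cycles):
        new = []
        for j in range(n):
            if j in clamped:
                new.append(clamped[j])  # known values cannot change
            else:
                net = sum(weights[j][i] * act[i] for i in range(n)) - bias[j]
                new.append(sigmoid(net))
        act = new
    return act

# Hypothetical net: unit 2 is supported by unit 0 and opposed by unit 1.
W = [[0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0],
     [2.0, -2.0, 0.0]]
b = [0.0, 0.0, 0.0]
state = relax(W, b, clamped={0: 1.0, 1: 0.0})
```

After relaxation the free unit settles at sigmoid(2.0), i.e., it is inferred to be probably true given the clamped evidence.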
The network then relaxes and after several feedback cycles converges to a solution which can be read off the input or output units. To evaluate the models we set up four standard classification tests with varying numbers of nodes clamped as inputs. Unclamped nodes were set to their a priori probability. After relaxing the network, the activation of the "target" node was compared with the true attribute values for that sample in order to determine classification performance. The two models were each trained on 10 randomly selected sets of 44 samples. The performance results given in Table 1 are the average classification rate of the models on the other 44 unseen samples. The Bayes risk (for a uniform loss matrix) of each classification test was calculated from the 88 samples. The actual performance of the networks occasionally exceeded this value due to small sample variations on the 44/44 cross validations.

Table 1
Units Clamped    Uncertainty Test    Hypothesis Test    1 - Bayes' Risk
9                66.8%               70.4%              88.6%
5                70.1%               70.1%              80.6%
2                48.2%               63.0%              63.6%
1                51.4%               65.7%              64.8%

We conclude from the performance of the networks as classifiers that they have indeed learned a model of the data using a rule-based representation. The hypothesis network performs slightly better than the uncertainty model, with both being quite close to the estimated optimal rate (the Bayes' risk). Given that we know that the independence assumptions in both models do not hold exactly, we coin the term robust inference to describe this kind of accurate behaviour in the presence of incomplete and uncertain information. Based on these encouraging initial results, our current research is focusing on higher-order rule networks and extending our theoretical understanding of models of this nature. Acknowledgments This work is supported in part by a grant from Pacific Bell, and by Caltech's program in Advanced Technologies sponsored by Aerojet General, General Motors and TRW.
Part of the research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. John Miller is supported by NSF grant no. ENG-8711673.
References
1. R. M. Goodman and P. Smyth, 'An information theoretic model for rule-based expert systems,' presented at the 1988 International Symposium on Information Theory, Kobe, Japan.
2. R. M. Goodman and P. Smyth, 'Information theoretic rule induction,' Proceedings of the 1988 European Conference on AI, Pitman Publishing: London.
3. R. M. Goodman and P. Smyth, 'Deriving rules from databases: the ITRULE algorithm,' submitted for publication.
4. H. Geffner and J. Pearl, 'On the probabilistic semantics of connectionist networks,' Proceedings of the 1987 IEEE ICNN, vol. II, pp. 187-195.
5. N. M. Blachman, 'The amount of information that y gives about X,' IEEE Transactions on Information Theory, vol. IT-14 (1), 27-31, 1968.
6. P. Smyth and R. M. Goodman, 'The information content of a probabilistic rule,' submitted for publication.
7. S. Kullback, Information Theory and Statistics, New York: Wiley, 1959.
8. D. Angluin and C. Smith, 'Inductive inference: theory and methods,' ACM Computing Surveys, 15(9), pp. 237-270, 1984.
9. S. Geman, 'Stochastic relaxation methods for image restoration and expert systems,' in Maximum Entropy and Bayesian Methods in Science and Engineering (Vol. 2), 265-311, Kluwer Academic Publishers, 1988.
10. G. Hinton and T. Sejnowski, 'Optimal perceptual inference,' Proceedings of the IEEE CVPR, 1983.
11. R. E. Blahut, Principles and Practice of Information Theory, Addison-Wesley: Reading, MA, 1987.
12. American Association of Investors, The individual investor's guide to no-load mutual funds, International Publishing Corporation: Chicago, 1987.
Figure 1. Raw mutual funds data (each row a fund; columns: fund type, 5-year total return, diversity, beta (risk), bull/bear performance, % stocks, investment income, net asset value, distributions (% NAV), expense ratio, turnover rate, total assets $M).
Figure 2. Quantized mutual funds data (the same examples with attributes reduced to binary values, e.g., 5-year return above/below S&P, beta under/over 1, assets greater/less than $100M, distributions above/below 15% NAV).
Figure 3. Top ten mutual funds rules from ITRULE, each listed with its transition probability p(x|y), instantaneous j-measure j(X;y), average J-measure J(X;y), and hypothesis-test log-likelihood metric h(X;y) (e.g., IF Bull perf low THEN 5yrRet>S&P below, with p(x|y) = 0.98 and J = 0.201).
Figure 4. Rule Network: an input layer of linear units (one per attribute) connected to a sigmoid output layer by rule-metric weights, with unity-weight feedback connections from the output layer back to the input layer.
| 1988 | 46 | 131 |
248 A CONNECTIONIST EXPERT SYSTEM THAT ACTUALLY WORKS Gary Bradshaw Psychology Richard Fozzard Computer Science University of Colorado Boulder, CO 80302 [email protected] Louis Ceci Computer Science ABSTRACT The Space Environment Laboratory in Boulder has collaborated with the University of Colorado to construct a small expert system for solar flare forecasting, called THEO. It performed as well as a skilled human forecaster. We have constructed TheoNet, a three-layer back-propagation connectionist network that learns to forecast flares as well as THEO does. TheoNet's success suggests that a connectionist network can perform the task of knowledge engineering automatically. A study of the internal representations constructed by the network may give insights to the "microstructure" of reasoning processes in the human brain. INTRODUCTION Can neural network learning algorithms let us build "expert systems" automatically, merely by presenting the network with data from the problem domain? We tested this possibility in a domain where a traditional expert system has been developed that is at least as good as the expert, to see if the connectionist approach could stand up to tough competition. Knowledge-based expert systems attempt to capture in a computer program the knowledge of a human expert in a limited domain and make this knowledge available to a user with less experience. Such systems could be valuable as an assistant to a forecaster or as a training tool. In the past three years, the Space Environment Laboratory (SEL) in Boulder has collaborated with the Computer Science and Psychology Departments at the University of Colorado to construct a small expert system emulating a methodology for solar flare forecasting developed by Pat McIntosh, senior solar physicist at SEL.
The project convincingly demonstrated the possibilities of this type of computer assistance, which also proved to be a useful tool for formally expressing a methodology, verifying its performance, and instructing novice forecasters. The system, named THEO (an OPS-83 production system with about 700 rules), performed as well as a skilled human forecaster using the same methods, and scored well compared with actual forecasts in the period covered by the test data [Lewis and Dennett 1986]. In recent years connectionist (sometimes called "non-symbolic" or "neural") network approaches have been used with varying degrees of success to simulate human behavior in such areas as vision and speech learning and recognition [Hinton 1987, Lehky and Sejnowski 1988, Sejnowski and Rosenberg 1986, Elman and Zipser 1987]. Logic (or "symbolic") approaches have been used to simulate human (especially expert) reasoning [see Newell 1980 and Davis 1982]. There has developed in the artificial intelligence and cognitive psychology communities quite a schism between the two areas of research, and the same problem has rarely been attacked by both approaches. It is hardly our intent to debate the relative merits of the two paradigms. The intent of this project is to directly apply a connectionist learning technique (multi-layer back-propagation) to the same problem, even the very same database used in an existing successful rule-based expert system. At this time we know of no current work attempting to do this. Forecasting, as described by those who practice it, is a unique combination of informal reasoning within very soft constraints supplied by often incomplete and inaccurate data. The type of reasoning involved makes it a natural application for traditional rule-based approaches. Solar and flare occurrence data are often inconsistent and noisy.
The nature of the data, therefore, calls for careful handling of rule strengths and certainty factors. Yet dealing with this sort of data is exactly one of the strengths claimed for connectionist networks. It may also be that some of the reasoning involves pattern matching of the different categories of data. This is what led us to hope that a connectionist network might be able to learn the necessary internal representations to cope with this task. TECHNICAL APPROACH The TheoNet network model has three layers of simple, neuron-like processing elements called "units". The lowest layer is the input layer and is clamped to a pattern that is a distributed representation of the solar data for a given day. For the middle ("hidden") and upper ("output") layers, each unit's output (called "activation") is a sigmoid function of the weighted sum of all inputs from the units in the layer below:

    y_j = 1 / (1 + e^(-x_j)),    where    x_j = Σ_i y_i w_ji - θ_j    (1)

where y_i is the activation of the ith unit in the layer below, w_ji is the weight on the connection from the ith to the jth unit, and θ_j is the threshold of the jth
The network was constructed as shown in Figure 1. The top three output units are intended to code for each of the three classes of solar flares to be forecasted. The individual activations are currently intended to correspond to the relative likelihood of a flare of that class within the next 24 hours (see the analysis of the results below). The 17 input units provide a distributed coding of the ten categories of input data that are currently fed into the "default" mode of the expert system THEO. That is, three binary (on/off) units code for the seven classes of sunspots, two for spot distribution, and so on. The hidden units mediate the transfer of activation from the input to the output units and provide the network with the potential of forming internal representations. Each layer is fully interconnected with the layer above and/or below it, but there are no connections within layers. RESULTS The P3 connectionist network simulator from David Zipser of University of California at San Diego's parallel distributed processing (PDP) group was used to implement and test TheoNet on a Symbolics 3653 workstation. This simulator allowed the use of Lisp code to compile statistics and provided an interactive environment for working with the network simulation. The network was trained and tested using two sets of data of about 500 input/ output pairs (solar data/flare occurrence) each from the THEO database. Many of these resulted in the same input pattern (there were only about 250 different input patterns total), and in many cases the same input would result in different flare results in the following 24 hours. The data was from a low flare frequency period (about 70-80 flares total). These sorts of inconsistencies in the data make the job of prediction difficult to systematize. The network would be A Connectionist Expert System that Actually Works 251 OUTPUT: Flare probability by class c M x "D"=011 "A"=011 "0"=01 growth yes no above 5 reduced less than M1 small • 'W' ,. 
'W' ' , ''--'' , . , '--' '--' '--' '--' iJ iJ iJ en c .~ c ! )( )( (G (G ~ 8. .2 .2 CD oS! CD CD .~ S (G 00.... .... () en :; ~ u:: E « « ..c g en E 8 8. .r::. en ·c :::l 8 .52 CD .~ w .2 en .... ~ >. :::l CI > ~ iii N ~ ! ~ C Q. CD CD ·c ~ ~ .e as a::: ...J INPUT SOLAR DATA .!a :I: 1. Modified Zurich class (7 possible values: A/B/C/D/E/F/H) 2. Largest spot size (6 values: X/R/S/ A/H/K) 3. Spot distribution (4 values: X/O/I/C) 4. Activity (reduced / unchanged) 5. Evolution (decay / no growth / or growth) 6. Previous flare activity (less than Ml / Ml / more than Ml) 7. Historically complex (yes/no) 8. Recently became complex on this pass (yes/no) 9. Area (small/large) 10. Area of the largest spot (up to 5/ above 5) Figure 1. Architecture of TheoNet 252 Fozzard, Bradshaw and Ceci trained on one data set and then tested on the other (it did not matter which one was used for which). Two ways of measuring performance were used. An earlier simulation tracked a simple measure called overall-prediction-error. This was the average difference over one complete epoch of input patterns between the activation of an output unit and the "correct" value it was supposed to have. This is directly related to the sum-squared error used by the back-propagation method. While the overall-prediction-error would drop quickly for all flare classes after a dozen epoches or so (about 5 minutes on the Symbolics), individual weights would take much longer to stabilize. Oscillations were seen in weight values if a large learning rate was used. When this was reduced to 0.2 or lower (with a momentum of 0.9), the weights would converge more smoothly to their final values. Overall-prediction-error however, is not a good measure of performance since this could be reduced simply by reducing average activation (a "Just-Say-No" network). Analyzing performance of an expert system is best done using measures from the problem domain. 
Forecasting problems are essentially probabilistic, requiring the detection of signal from noisy data. Thus forecasting techniques and systems are often analyzed using signal detection theory [Spoehr and Lehmkuhle 1982]. The system was modified to calculate P(H), the probability of a hit, and P(FA), the probability of a false alarm, over each epoch. These parameters depend on the response bias, which determines the activation level used as a threshold for a yes/no response.* A graph of P(H) versus P(FA) gives the receiver operating characteristic or ROC curve. The amount that this curve is bowed away from a 1:1 slope is the degree to which a signal is being detected against background. This was the method used for measuring the performance of THEO [Lewis and Dennett 1986]. As in the earlier simulation, the network was exposed to the test data before and after training. After training, the probability of hits was consistently higher than that of false alarms in all flare classes (Figure 2). Given the limited data and very low activations for X-class flares, it may or may not be reasonable to draw conclusions about the network's ability to detect these - in the test data set there were only four X-flares in the entire data set. The degree to which the hits exceed false alarms is given by a', the area under the curve. The performance of TheoNet was at least as good as the THEO expert system.
* Even though both THEO and TheoNet have a continuous output (probability of flare and activation), varying the response bias gives a continuous evaluation of performance at any output level.
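The signal-detection measurement described above can be sketched as follows, with made-up output activations and flare labels; a' is computed here as the trapezoidal area under the (P(FA), P(H)) points:

```python
def roc_points(scores, labels, thresholds):
    """P(hit) and P(false alarm) at each response bias (threshold)."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        hits = sum(1 for s, l in zip(scores, labels) if s >= t and l)
        fas = sum(1 for s, l in zip(scores, labels) if s >= t and not l)
        pts.append((fas / neg, hits / pos))  # (P(FA), P(H))
    return sorted(pts)

def area(pts):
    """a': trapezoidal area under the ROC curve."""
    pts = [(0.0, 0.0)] + pts + [(1.0, 1.0)]
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Hypothetical output activations and flare (1) / no-flare (0) labels.
scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1, 1, 1, 0, 0]
pts = roc_points(scores, labels, [0.1, 0.35, 0.5, 0.85, 0.95])
a_prime = area(pts)
```

Because these toy scores separate the two classes perfectly, the curve bows all the way to the corner and a' reaches its maximum of 1; real forecast data gives intermediate values like those in Figure 2.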
[Figure 2: two rows of three ROC panels (C-, M-, and X-class flares), plotting P(H) against P(FA). Top row (TheoNet): a' = .71, .78, .90. Bottom row (THEO): a' = .68, .70, .78.]

Figure 2. ROC performance measures of TheoNet and THEO

CONCLUSIONS

Two particularly intriguing prospects are raised by these results. The first is that if a connectionist network can perform the same task as a rule-based system, then a study of the internal representations constructed by the network may give insights into the "microstructure" of how reasoning processes occur in the human brain. These are the same reasoning processes delineated at a higher level of description by the rules in an expert system. How this sort of work might affect the schism between the symbolic and non-symbolic camps (mentioned in the introduction) is anyone's guess. Our hope is that the two paradigms may eventually come to complement and support each other in cognitive science research.

The second prospect is more of an engineering nature. Though connectionist networks do offer some amount of biological plausibility (and hence their trendy status right now), it is difficult to imagine a neural mechanism for the back-propagation algorithm. However, what do engineers care? As a lot, they are more interested in implementing a solution than in explaining the nature of human thought. Witness the current explosion of expert system technology in the marketplace today. Yet for all its glamor, expert systems have usually proved time-consuming and expensive to implement. The "knowledge-engineering" step of interviewing experts and transferring their knowledge to rules that work successfully together has been the most difficult and expensive part, even with advanced knowledge representation languages and expert system shells. TheoNet has shown that, at least in this instance, a standard back-propagation network can quickly learn the necessary representations and interactions (rules?) needed to do the same sort of reasoning.
Development of THEO (originally presented as one of the quickest developments of a usable expert system) required more than a man-year of work and 700 rules, while TheoNet was developed in less than a week using a simple simulator. In addition, THEO requires about five minutes to process a single prediction while the network requires only a few milliseconds, thus promising better performance under real-time conditions.

Many questions remain to be answered. TheoNet has only been tested on a small segment of the 11-year solar cycle. It has yet to be determined how many hidden units are needed for generalization of performance (is a simple pattern associator sufficient?). We would like to examine the internal representations formed and see if there is any relationship to the rules in THEO. Without those interpretations, connectionist networks cannot easily offer the help and explanation facilities of traditional expert systems that are a fallout of the rule-writing process. Since the categories of data used were those input to THEO, and therefore known to be significant, we need to ask whether the network can eliminate redundant or unnecessary categories. We also would like to attempt to implement other well-known expert systems to determine the generality of this approach.

Acknowledgements

The authors would like to acknowledge the encouragement and advice of Paul Smolensky (University of Colorado) on this project and the desktop publishing equipment of Fischer Imaging Corporation in Denver.

REFERENCES

Randall Davis, "Expert Systems: Where Are We? And Where Do We Go From Here?", The AI Magazine, Spring 1982.

J.L. Elman and David Zipser, Discovering the Hidden Structure of Speech, ICS Technical Report 8701, University of California, San Diego.

Geoffrey Hinton, "Learning Translation Invariant Recognition in a Massively Parallel Network", in Proc. Conf. Parallel Architectures and Languages Europe, Eindhoven, The Netherlands, June 1987.

Sidney Lehky and Terrence Sejnowski, "Neural Network Model for the Cortical Representation of Surface Curvature from Images of Shaded Surfaces", in Sensory Processing, J.S. Lund, ed., Oxford, 1988.

Clayton Lewis and Joann Dennett, "Joint CU/NOAA Study Predicts Events on the Sun with Artificial Intelligence Technology", CU Engineering, 1986.

Allen Newell, "Physical Symbol Systems", Cognitive Science 4:135-183.

David Rumelhart, Jay McClelland, and the PDP Research Group, Parallel Distributed Processing, Volume 1, Cambridge, MA, Bradford Books, 1986.

Terrence Sejnowski and C.R. Rosenberg, NETtalk: A Parallel Network that Learns to Read Aloud, Technical Report 86-01, Dept. of Electrical Engineering and Computer Science, Johns Hopkins University, Baltimore, MD.

K.T. Spoehr and S.W. Lehmkuhle, "Signal Detection Theory", in Visual Information Processing, Freeman, 1982.
CRICKET WIND DETECTION

John P. Miller
Neurobiology Group, University of California, Berkeley, California 94720, U.S.A.

A great deal of interest has recently been focused on theories concerning parallel distributed processing in central nervous systems. In particular, many researchers have become very interested in the structure and function of "computational maps" in sensory systems. As defined in a recent review (Knudsen et al., 1987), a "map" is an array of nerve cells within which there is a systematic variation in the "tuning" of neighboring cells for a particular parameter. For example, the projection from retina to visual cortex is a relatively simple topographic map; each cortical hypercolumn itself contains a more complex "computational" map of preferred line orientation representing the angle of tilt of a simple line stimulus. The overall goal of the research in my lab is to determine how a relatively complex mapped sensory system extracts and encodes information from external stimuli. The preparation we study is the cercal sensory system of the cricket, Acheta domesticus. Crickets (and many other insects) have two antenna-like appendages, the cerci, at the rear of their abdomen, covered with hundreds of "filiform" hairs resembling bristles on a bottle brush. Deflection of these filiform hairs by wind currents activates mechanosensory receptors, which project into the terminal abdominal ganglion to form a topographic representation (or "map") of "wind space". Primary sensory interneurons having dendritic branches within this afferent map of wind space are selectively activated by wind stimuli with "relevant" parameters, and generate action potentials at frequencies that depend upon the values of those parameters. The "relevant" parameters are thought to be the direction, velocity, and acceleration of wind currents directed at the animal (Shimozawa & Kanou, 1984a & b).
There are only ten pairs of these interneurons which carry the system's output to higher centers. All ten of these output units are identified, and all can be monitored individually with intracellular electrodes or simultaneously with extracellular electrodes. The following specific questions are currently being addressed: What are the response properties of the sensory receptors, and what are the I/O properties of the receptor layer as a whole? What are the response properties of all the units in the output layer? Is all of the direction, velocity and acceleration information that is extracted at the receptor layer also available at the output layer? How is that information encoded? Are any higher-order "features" also encoded? What are the overall threshold, sensitivity and dynamic range of the system as a whole for detecting features of wind stimuli?

Michael Landolfa is studying the sensory neurons which serve as the inputs to the cercal system. The sensory cell layer consists of about 1000 afferent neurons, each of which innervates a single mechanosensory hair on the cerci. The input/output relationships of single sensory neurons were characterized by recording from an afferent axon while presenting appropriate stimuli to the sensory hairs. The primary results were as follows: 1) Afferents are directionally sensitive. Graphs of afferent response amplitude versus wind direction are approximately sinusoidal, with distinct preferred and anti-preferred directions. 2) Afferents are velocity sensitive. Each afferent encodes wind velocity over a range of approximately 1.5 log units. 3) Different afferents have different velocity thresholds. The overlap of these different sensitivity curves ensures that the system as a whole can encode wind velocities that span several log units.
4) The nature of the afferent response to deflection of its sensory hair indicates that the parameter transduced by the afferent is not hair displacement, but the change in hair displacement. Thus, a significant portion of the processing which occurs within the cercal sensory system is accomplished at the level of the sensory afferents.

This information about the direction and velocity of wind stimuli is encoded by the relative firing rates of at least 10 pairs of identified sensory interneurons. A full analysis of the input/output properties of this system requires that the activity of these output neurons be monitored simultaneously. Shai Gozani has implemented a computer-based system capable of extracting the firing patterns of individual neurons from multi-unit recordings. For these experiments, extracellular electrodes were arrayed along the abdominal nerve cord in each preparation. Wind stimuli of varying directions, velocities and frequencies were presented to the animals. The responses of the cells were analyzed by spike discrimination software based on an algorithm originally developed by Roberts and Hartline (1975). The algorithm employs multiple linear filters, and is capable of discriminating spikes that are coincident in time. The number of spikes that could be discriminated was roughly equal to the number of independent electrodes. These programs are very powerful, and may be of much more general utility for researchers working on other invertebrate and vertebrate preparations. Using these programs and protocols, we have characterized the output of the cercal sensory system in terms of the simultaneous activity patterns of several pairs of identified interneurons. The results of these multi-unit recording studies, as well as studies using single intracellular electrodes, have yielded information about the directional tuning and velocity sensitivity of the first-order sensory interneurons.
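The multi-channel linear-filter idea behind the Roberts and Hartline (1975) algorithm can be caricatured with a single normalized matched filter per unit (a toy sketch only; the real algorithm solves for filters that also separate temporally coincident spikes, which this version does not attempt):

```python
import math

def discriminate(trace, templates, threshold=0.9):
    """Slide each unit's spike template over the recorded trace and
    report (time, unit) wherever the normalized correlation is high."""
    events = []
    for unit, tpl in templates.items():
        n = len(tpl)
        tpl_norm = math.sqrt(sum(x * x for x in tpl))
        for t in range(len(trace) - n + 1):
            seg = trace[t:t + n]
            seg_norm = math.sqrt(sum(x * x for x in seg))
            if seg_norm == 0.0:
                continue  # flat segment: no spike here
            corr = sum(a * b for a, b in zip(seg, tpl)) / (tpl_norm * seg_norm)
            if corr >= threshold:
                events.append((t, unit))
    return sorted(events)

# A quiet trace containing one copy of unit "A"'s (made-up) spike shape:
spike_a = [0.0, 1.0, -0.5, 0.0]
trace = [0.0] * 5 + spike_a + [0.0] * 5
print(discriminate(trace, {"A": spike_a}))   # [(5, 'A')]
```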
Tuning curves representing interneuron response amplitude versus wind direction are approximately sinusoidal, as was the case for the sensory afferents. Sensitivity curves representing interneuron response amplitude versus wind velocity are sigmoidal, with "operating ranges" of about 1.5 log units. The interneurons are segregated into several distinct classes having different but overlapping operating ranges, such that the direction and velocity of any wind stimulus can be uniquely represented as the ratio of activity in the different interneurons. Thus, the overlap of the different direction and velocity sensitivity curves in approximately 20 interneurons ensures that the system as a whole can encode the characteristics of wind stimuli having directions that span 360 degrees and velocities that span at least 4 orders of magnitude.

We are particularly interested in the mechanisms underlying directional sensitivity in some of the first-order sensory interneurons. Identified interneurons with different morphologies have very different directional sensitivities. The excitatory receptive fields of the different interneurons have been shown to be directly related to the position of their dendrites within the topographic map of wind space formed by the filiform afferents discussed above (Bacon & Murphey, 1984; Jacobs & Miller, 1985; Jacobs, Miller & Murphey, 1986). The precise shapes of the directional tuning curves have been shown to be dependent upon two additional factors. First, local inhibitory interneurons can have a strong influence over a cell's response by shunting excitatory inputs from particular directions, and by reducing spontaneous activity during stimuli from a cell's "null" direction. Second, the "electroanatomy" of a neuron's dendritic branches determines the relative weighting of synaptic inputs onto its different arborizations.
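The sinusoidal directional tuning and overlapping sigmoidal velocity curves described above can be combined in a toy response model (our own illustrative parameterization: the preferred directions, log-velocity centers, and slope constant below are made up, not fitted cricket data):

```python
import math

def interneuron_response(wind_dir_deg, wind_vel, pref_dir_deg, log_v_center):
    """Toy first-order interneuron: the product of a sinusoidal
    directional tuning curve and a sigmoidal velocity-sensitivity
    curve with an operating range of about 1.5 log units."""
    directional = 0.5 * (1.0 + math.cos(math.radians(wind_dir_deg - pref_dir_deg)))
    # sigmoid in log10(velocity), centered on this cell's operating range
    velocity = 1.0 / (1.0 + math.exp(-4.0 * (math.log10(wind_vel) - log_v_center) / 1.5))
    return directional * velocity

# Four classes with preferred directions tiling 360 degrees and staggered
# velocity ranges spanning ~4 orders of magnitude: any (direction, velocity)
# pair maps to a distinct ratio of activities across the population.
cells = [(0, -1.5), (90, -0.5), (180, 0.5), (270, 1.5)]
for wind_dir, wind_vel in ((45, 0.1), (45, 10.0)):
    profile = [round(interneuron_response(wind_dir, wind_vel, pd, c), 3)
               for pd, c in cells]
    print(wind_dir, wind_vel, profile)
```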
Some specific aims of our continuing research are as follows: 1) to characterize the distribution of all synaptic inputs onto several different types of identified interneurons, 2) to measure the functional properties of individual dendrites of these cell types, 3) to locate the spike-initiating zones of the cells, and 4) to synthesize a quantitative explanation of signal processing by each cell. Steps 1, 2 & 3 are being accomplished through electrophysiological experiments. Step 4 is being accomplished by developing a compartmental model for each cell type and testing the model through further physiological experiments. These computer modeling studies are being carried out by Rocky Nevin and John Tromp. For these models, the structure of each interneuron's dendritic branches is of particular functional importance, since the flow of bioelectrical currents through these branches determines how signals received from "input" cells are "integrated" and transformed into meaningful output which is transmitted to higher centers. We are now at a point where we can begin to understand the operation of the system as a whole in terms of the structure, function and synaptic connectivity of the individual neurons. The proposed studies will also lay the technical and theoretical groundwork for future studies into the nature of signal "decoding" and higher-order processing in this preparation, mechanisms underlying the development, self-organization and regulative plasticity of units within this computational map, and perhaps information processing in more complex mapped sensory systems.

REFERENCES

Bacon, J.P. and Murphey, R.K. (1984) Receptive fields of cricket (Acheta domesticus) are determined by their dendritic structure. J. Physiol. (Lond.) 352:601.

Jacobs, G.A. and Miller, J.P. (1985) Functional properties of individual neuronal branches isolated in situ by laser photoinactivation. Science 228: 344-346.

Jacobs, G.A., Miller, J.P. and Murphey, R.K. (1986) Cellular mechanisms underlying directional sensitivity of an identified sensory interneuron. J. Neurosci. 6(8): 2298-2311.

Knudsen, E.I., du Lac, S. and Esterly, S.D. (1987) Computational maps in the brain. Annual Review of Neuroscience 10: 41-66.

Roberts, W.M. and Hartline, D.K. (1975) Separation of multi-unit nerve impulse trains by a multi-channel linear filter algorithm. Brain Res. 94: 141-149.

Shimozawa, T. and Kanou, M. (1984a) Varieties of filiform hairs: range fractionation by sensory afferents and cercal interneurons of a cricket. J. Comp. Physiol. A 155: 485-493.

Shimozawa, T. and Kanou, M. (1984b) The aerodynamics and sensory physiology of range fractionation in the cercal filiform sensilla of the cricket Gryllus bimaculatus. J. Comp. Physiol. A 155: 495-505.
ALVINN: AN AUTONOMOUS LAND VEHICLE IN A NEURAL NETWORK

Dean A. Pomerleau
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213

ABSTRACT

ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.

INTRODUCTION

Autonomous navigation has been a difficult problem for traditional vision and robotic techniques, primarily because of the noise and variability associated with real-world scenes. Autonomous navigation systems based on traditional image processing and pattern recognition techniques often perform well under certain conditions but have problems with others. Part of the difficulty stems from the fact that the processing performed by these systems remains fixed across various driving situations. Artificial neural networks have displayed promising performance and flexibility in other domains characterized by high degrees of noise and variability, such as handwritten character recognition [Jackel et al., 1988] [Pawlicki et al., 1988] and speech recognition [Waibel et al., 1988]. ALVINN (Autonomous Land Vehicle In a Neural Network) is a connectionist approach to the navigational task of road following.
Specifically, ALVINN is an artificial neural network designed to control the NAVLAB, the Carnegie Mellon autonomous navigation test vehicle.

NETWORK ARCHITECTURE

ALVINN's current architecture consists of a single hidden layer back-propagation network (see Figure 1).

[Figure 1: ALVINN Architecture. A 30x32 video input retina, an 8x32 range finder input retina, and a road intensity feedback unit feed the hidden layer, which projects to 45 direction output units and an output feedback unit.]

The input layer is divided into three sets of units: two "retinas" and a single intensity feedback unit. The two retinas correspond to the two forms of sensory input available on the NAVLAB vehicle: video and range information. The first retina, consisting of 30x32 units, receives video camera input from a road scene. The activation level of each unit in this retina is proportional to the intensity in the blue color band of the corresponding patch of the image. The blue band of the color image is used because it provides the highest contrast between the road and the non-road. The second retina, consisting of 8x32 units, receives input from a laser range finder. The activation level of each unit in this retina is proportional to the proximity of the corresponding area in the image. The road intensity feedback unit indicates whether the road is lighter or darker than the non-road in the previous image. Each of these 1217 input units is fully connected to the hidden layer of 29 units, which is in turn fully connected to the output layer. The output layer consists of 46 units, divided into two groups. The first set of 45 units is a linear representation of the turn curvature along which the vehicle should travel in order to head towards the road center. The middle unit represents the "travel straight ahead" condition, while units to the left and right of the center represent successively sharper left and right turns.
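The layer sizes described above can be wired into a short forward-pass sketch (pure Python with untrained random weights; this illustrates only the network's shape, not ALVINN's learned behavior):

```python
import math
import random

# Layer sizes from the text: 30x32 video retina + 8x32 range finder
# retina + 1 road-intensity feedback unit -> 29 hidden units ->
# 45 direction units + 1 output feedback unit.
N_IN = 30 * 32 + 8 * 32 + 1    # 1217 input units
N_HIDDEN = 29
N_OUT = 45 + 1                 # 46 output units

random.seed(0)  # illustrative random weights (the real net is trained)
w_ih = [[random.uniform(-0.1, 0.1) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
w_ho = [[random.uniform(-0.1, 0.1) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    """Fully connected input -> hidden -> output pass."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_ih]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_ho]

outputs = forward([0.5] * N_IN)
# Units 0-44 encode turn curvature (unit 22 = straight ahead);
# unit 45 is the road-intensity feedback output.
steer = max(range(45), key=lambda i: outputs[i])
```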
The network is trained with a desired output vector of all zeros except for a "hill" of activation centered on the unit representing the correct turn curvature, which is the curvature that would bring the vehicle to the road center 7 meters ahead of its current position. More specifically, the desired activation levels for the nine units centered around the correct turn curvature unit are 0.10, 0.32, 0.61, 0.89, 1.00, 0.89, 0.61, 0.32 and 0.10. During testing, the turn curvature dictated by the network is taken to be the curvature represented by the output unit with the highest activation level. The final output unit is a road intensity feedback unit which indicates whether the road is lighter or darker than the non-road in the current image. During testing, the activation of the output road intensity feedback unit is recirculated to the input layer in the style of Jordan [Jordan, 1988] to aid the network's processing by providing rudimentary information concerning the relative intensities of the road and the non-road in the previous image.

[Figure 2: Real and simulated road images.]

TRAINING AND PERFORMANCE

Training on actual road images is logistically difficult, because in order to develop a general representation, the network must be presented with a large number of training exemplars depicting roads under a wide variety of conditions. Collection of such a data set would be difficult, and changes in parameters such as camera orientation would require collecting an entirely new set of road images. To avoid these difficulties we have developed a simulated road generator which creates road images to be used as training exemplars for the network. Figure 2 depicts the video images of one real and one artificial road. Although not shown in Figure 2, the road generator also creates corresponding simulated range finder images.
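The hill-shaped training target and the test-time readout described above can be written out directly (function names are ours):

```python
# The nine-value activation hill given in the text, peak at the center.
HILL = [0.10, 0.32, 0.61, 0.89, 1.00, 0.89, 0.61, 0.32, 0.10]

def desired_output(correct_unit, n_units=45):
    """Target vector for the 45 direction units: zeros everywhere except
    a hill of activation centered on the correct turn-curvature unit
    (the hill is clipped at the edges of the output layer)."""
    target = [0.0] * n_units
    for offset, value in enumerate(HILL, start=-4):
        i = correct_unit + offset
        if 0 <= i < n_units:
            target[i] = value
    return target

def dictated_curvature(outputs):
    """At test time, the turn curvature is read off as the direction
    unit with the highest activation level."""
    return max(range(len(outputs)), key=lambda i: outputs[i])

t = desired_output(22)        # unit 22 = "travel straight ahead"
print(dictated_curvature(t))  # 22
```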
At the relatively low resolution being used it is difficult to distinguish between real and simulated roads. Network training is performed using these artificial road "snapshots" and the Warp back-propagation simulator described in [Pomerleau et al., 1988]. Training involves first creating a set of 1200 road snapshots depicting roads with a wide variety of retinal orientations and positions, under a variety of lighting conditions and with realistic noise levels. Back-propagation is then conducted using this set of exemplars until only asymptotic performance improvements appear likely. During the early stages of training, the input road intensity unit is given a random activation level. This is done to prevent the network from merely learning to copy the activation level of the input road intensity unit to the output road intensity unit, since their activation levels should almost always be identical because the relative intensity of the road and the non-road does not often change between two successive images. Once the network has developed a representation that uses image characteristics to determine the activation level for the output road intensity unit, the network is given as input whether the road would have been darker or lighter than the non-road in the previous image. Using this extra information concerning the relative brightness of the road and the non-road, the network is better able to determine the correct direction for the vehicle to travel. After 40 epochs of training on the 1200 simulated road snapshots, the network correctly dictates a turn curvature within two units of the correct answer approximately 90% of the time on novel simulated road images. The primary testing of ALVINN's performance has been conducted on the NAVLAB (see Figure 3).

Figure 3: NAVLAB, the CMU autonomous navigation test vehicle.
The NAVLAB is a modified Chevy van equipped with 3 Sun computers, a Warp, a video camera, and a laser range finder, which serves as a testbed for the CMU autonomous land vehicle project [Thorpe et al., 1987]. Performance of the network to date is comparable to that achieved by the best traditional vision-based autonomous navigation algorithm at CMU under the limited conditions tested. Specifically, the network can accurately drive the NAVLAB at a speed of 1/2 meter per second along a 400 meter path through a wooded area of the CMU campus under sunny fall conditions. Under similar conditions on the same course, the ALV group at CMU has recently achieved similar driving accuracy at a speed of one meter per second by implementing their image processing autonomous navigation algorithm on the Warp computer. In contrast, the ALVINN network is currently simulated using only an on-board Sun computer, and dramatic speedups are expected when tests are performed using the Warp.

NETWORK REPRESENTATION

The representation developed by the network to perform the road following task depends dramatically on the characteristics of the training set. When trained on examples of roads with a fixed width, the network develops a representation consisting of overlapping road filters. Figure 4 is a diagram of the weights projecting to and from a single hidden unit in such a network.

[Figure 4: Diagram of weights projecting to and from a typical hidden unit in a network trained on roads with a fixed width. The diagram shows weights to the direction output units and output feedback unit, and from the input feedback unit, bias unit, video camera retina, and range finder retina; schematics on the right (road edges, excitatory periphery connections, inhibitory central connections) are aids for interpretation.]
As indicated by the weights to and from the feedback units, this hidden unit expects the road to be lighter than the non-road in the previous image and supports the road being lighter than the non-road in the current image. More specifically, the weights from the video camera retina support the interpretation that this hidden unit is a filter for two light roads, one slightly left and the other slightly right of center (see schematic to the right of the weights from the video retina in Figure 4). This interpretation is also supported by the weights from the range finder retina, which has a much wider field of view than the video camera. This hidden unit is excited if there is high range activity (i.e. obstacles) in the periphery and inhibited if there is high range activity in the central region of the scene where this hidden unit expects the road to be (see schematic to the right of the weights from the range finder retina in Figure 4). Finally, the two-road-filter interpretation is reflected in the weights from this hidden unit to the direction output units. Specifically, this hidden unit has two groups of excitatory connections to the output units, one group dictating a slight left turn and the other group dictating a slight right turn. Hidden units which act as filters for 1 to 3 roads are the representation structures most commonly developed when the network is trained on roads with a fixed width. The network develops a very different representation when trained on snapshots with widely varying road widths.

[Figure 5: Diagram of weights projecting to and from a typical hidden unit in a network trained on roads with different widths.]
A typical hidden unit from this type of representation is depicted in Figure 5. One important feature to notice from the feedback weights is that this unit is filtering for a road which is darker than the non-road. More importantly, it is evident from the video camera retina weights that this hidden unit is a filter solely for the left edge of the road (see schematic to the right of the weights from the range finder retina in Figure 5). This hidden unit supports a rather wide range of travel directions. This is to be expected, since the correct travel direction for a road with an edge at a particular location varies substantially depending on the road's width. This hidden unit would cooperate with hidden units that detect the right road edge to determine the correct travel direction in any particular situation.

DISCUSSION AND EXTENSIONS

The distinct representations developed for different circumstances illustrate a key advantage provided by neural networks for autonomous navigation. Namely, in this paradigm the data, not the programmer, determines the salient image features crucial to accurate road navigation. From a practical standpoint, this data responsiveness has dramatically sped ALVINN's development. Once a realistic artificial road generator was developed, back-propagation produced in half an hour a relatively successful road-following system. It took many months of algorithm development and parameter tuning by the vision and autonomous navigation groups at CMU to reach a similar level of performance using traditional image processing and pattern recognition techniques. More speculatively, the flexibility of neural network representations provides the possibility of a very different type of autonomous navigation system in which the salient sensory features are determined for specific driving conditions.
By interactively training the network on real road images taken as a human drives the NAVLAB, we hope to develop a system that adapts its processing to accommodate current circumstances. This is in contrast with other autonomous navigation systems at CMU [Thorpe et al., 1987] and elsewhere [Dunlay & Seida, 1988] [Dickmanns & Zapp, 1986] [Kuan et al., 1988]. Each of these implementations has relied on a fixed, highly structured and therefore relatively inflexible algorithm for finding and following the road, regardless of the conditions at hand. There are difficulties involved with training "on-the-fly" with real images. If the network is not presented with sufficient variability in its training exemplars to cover the conditions it is likely to encounter when it takes over driving from the human operator, it will not develop a sufficiently robust representation and will perform poorly. In addition, the network must not solely be shown examples of accurate driving, but also how to recover (i.e. return to the road center) once a mistake has been made. Partial initial training on a variety of simulated road images should help eliminate these difficulties and facilitate better performance. Another important advantage gained through the use of neural networks for autonomous navigation is the ease with which they assimilate data from independent sensors. The current ALVINN implementation processes data from two sources, the video camera and the laser range finder. During training, the network discovers how information from each source relates to the task, and weights each accordingly. As an example, range data is in some sense less important for the task of road following than is the video data. The range data contains information concerning the position of obstacles in the scene, but nothing explicit about the location of the road. 
As a result, the range data is given less significance in the representation, as is illustrated by the relatively small-magnitude weights from the range finder retina in the weight diagrams. Figures 4 and 5 illustrate that the range finder connections do correlate with the connections from the video camera, and do contribute to choosing the correct travel direction. Specifically, in both figures, obstacles located outside the area in which the hidden unit expects the road to be located increase the hidden unit's activation level, while obstacles located within the expected road boundaries inhibit the hidden unit. However, the contributions from the range finder connections aren't necessary for reasonable performance. When ALVINN was tested with normal video input but an obstacle-free range finder image as constant input, there was no noticeable degradation in driving performance. Obviously, under off-road driving conditions obstacle avoidance would become much more important, and hence one would expect the range finder retina to play a much more significant role in the network's representation. We are currently working on an off-road version of ALVINN to test this hypothesis. Other current directions for this project include conducting more extensive tests of the network's performance under a variety of weather and lighting conditions. These will be crucial for making legitimate performance comparisons between ALVINN and other autonomous navigation techniques. We are also working to increase driving speed by implementing the network simulation on the on-board Warp computer. Additional extensions involve exploring different network architectures for the road following task.
These include 1) giving the network additional feedback information by using Elman's [Elman, 1988] technique of recirculating hidden activation levels, 2) adding a second hidden layer to facilitate better internal representations, and 3) adding local connectivity to give the network a priori knowledge of the two dimensional nature of the input. In the area of planning, interesting extensions include stopping for, or planning a path around, obstacles. One area of planning that clearly needs work is dealing sensibly with road forks and intersections. Currently upon reaching a fork, the network may output two widely discrepant travel directions, one for each choice. The result is often an oscillation in the dictated travel direction and hence inaccurate road following. Beyond dealing with individual intersections, we would eventually like to integrate a map into the system to enable global point-to-point path planning. CONCLUSION More extensive testing must be performed before definitive conclusions can be drawn concerning the performance of ALVINN versus other road followers. We are optimistic concerning the eventual contributions neural networks will make to the area of autonomous navigation. But perhaps just as interesting are the possibilities of contributions in the other direction. We hope that exploring autonomous navigation, and in particular some of the extensions outlined in this paper, will have a significant impact on the field of neural networks. We certainly believe it is important to begin researching and evaluating neural networks in real world situations, and we think autonomous navigation is an interesting application for such an approach.
ALVINN: An Autonomous Land Vehicle in a Neural Network Acknowledgements This work would not have been possible without the input and support provided by Dave Touretzky, Joseph Tebelskis, George Gusciora and the CMU Warp group, and particularly Charles Thorpe, Jill Crisman, Martial Hebert, David Simon, and the rest of the CMU ALV group. This research was supported by the Office of Naval Research under Contracts N00014-87-K-0385, N00014-87-K-0533 and N00014-86-K-0678, by National Science Foundation Grant EET-8716324, by the Defense Advanced Research Projects Agency (DOD) monitored by the Space and Naval Warfare Systems Command under Contract N00039-87-C-0251, and by the Strategic Computing Initiative of DARPA, through ARPA Order 5351, and monitored by the U.S. Army Engineer Topographic Laboratories under contract DACA 76-85-C-0003 titled "Road Following". References [Dickmanns & Zapp, 1986] Dickmanns, E.D., Zapp, A. (1986) A curvature-based scheme for improving road vehicle guidance by computer vision. "Mobile Robots", SPIE-Proc. Vol. 727, Cambridge, MA. [Elman, 1988] Elman, J.L. (1988) Finding structure in time. Technical report 8801. Center for Research in Language, University of California, San Diego. [Dunlay & Seida, 1988] Dunlay, R.T., Seida, S. (1988) Parallel off-road perception processing on the ALV. Proc. SPIE Mobile Robot Conference, Cambridge, MA. [Jackel et al., 1988] Jackel, L.D., Graf, H.P., Hubbard, W., Denker, J.S., Henderson, D., Guyon, I. (1988) An application of neural net chips: Handwritten digit recognition. Proceedings of IEEE International Conference on Neural Networks, San Diego, CA. [Jordan, 1988] Jordan, M.I. (1988) Supervised learning and systems with excess degrees of freedom. COINS Tech. Report 88-27, Computer and Information Science, University of Massachusetts, Amherst, MA. [Kuan et al., 1988] Kuan, D., Phipps, G. and Hsueh, A.-C. Autonomous Robotic Vehicle Road Following. IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 10, Sept. 1988. [Pawlicki et al., 1988] Pawlicki, T.F., Lee, D.S., Hull, J.J., Srihari, S.N. (1988) Neural network models and their application to handwritten digit recognition. Proceedings of IEEE International Conference on Neural Networks, San Diego, CA. [Pomerleau et al., 1988] Pomerleau, D.A., Gusciora, G.L., Touretzky, D.S., and Kung, H.T. (1988) Neural network simulation at Warp speed: How we got 17 million connections per second. Proceedings of IEEE International Conference on Neural Networks, San Diego, CA. [Thorpe et al., 1987] Thorpe, C., Hebert, M., Kanade, T., Shafer, S. and the members of the Strategic Computing Vision Lab (1987) Vision and navigation for the Carnegie Mellon NAVLAB. Annual Review of Computer Science Vol. II, Ed. Joseph Traub, Annual Reviews Inc., Palo Alto, CA. [Waibel et al., 1988] Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., Lang, K. (1988) Phoneme recognition: Neural Networks vs. Hidden Markov Models. Proceedings from Int. Conf. on Acoustics, Speech and Signal Processing, New York, New York.
NEURAL NETWORKS THAT LEARN TO DISCRIMINATE SIMILAR KANJI CHARACTERS Yoshihiro Mori, Kazuhiko Yokosawa ATR Auditory and Visual Perception Research Laboratories 2-1-61 Shiromi Higashiku Osaka 540 Japan ABSTRACT A neural network is applied to the problem of recognizing Kanji characters. Using a backpropagation network learning algorithm, a three-layered, feed-forward network is trained to recognize similar handwritten Kanji characters. In addition, two new methods are utilized to make training effective. The recognition accuracy was higher than that of conventional methods. An analysis of connection weights showed that trained networks can discern the hierarchical structure of Kanji characters. This strategy of trained networks makes high recognition accuracy possible. Our results suggest that neural networks are very effective for Kanji character recognition. 1 INTRODUCTION Neural networks are applied to recognition tasks in many fields, with good results. In the field of letter recognition, networks have been made which recognize hand-written digits [Burr 1986] and complex printed Chinese characters [Ho 1988]. The performance of these networks has been better than that of conventional methods. However, these results are still rudimentary when we consider not only the large number of Kanji characters, but the distortion involved in hand-written characters. We are aiming to make a large-scale network that recognizes the 3000 Kanji characters commonly used in Japan. Since it is difficult for a single network to discriminate 3000 characters, our plan is to create a large-scale network by assembling many smaller ones that would each be responsible for recognizing only a small number of characters. There are two issues concerning implementation of such a large-scale network: the ability of individual networks, and organizing the networks. As a first step,
the ability of a small network to discriminate similar Kanji characters was investigated. We found that the learning speed and performance of networks are highly influenced by the environment (for instance, the order, number, and repetition of training samples). New methods of teaching the environment are utilized to make learning effective. 2 NEW TYPES OF TEACHERS 2.1 PROBLEMS OF BACKPROPAGATION The backpropagation (BP) learning algorithm teaches only correct answers [Rumelhart 1986]. BP does not care about the recognition rate of each category. If we use ordinary BP in a situation of limited resources, and if there are both easy and difficult categories to learn in the training set, what happens is that the easier category uses up most of the resources in the early stages of training (Figure 1). Yet, for efficiency, the difficult category should get more resources. This weakness of BP makes the learning time longer. Two new methods are used to avoid this problem. In the real world, human learners do not exist in isolation. There is also a learning environment. It is therefore natural, and even necessary, to devise teaching methods that incorporate environmental factors. [Figure 1: Separation by BP vs. ideal separation; the easily learned category takes more resources.] [Figure 2: The two new methods, relating the environment (the feature space of samples), the categories, and the backpropagation learning procedure.] 2.2 FIRST METHOD (REVIEW METHOD) This method tracks the performance for each category. At first, training is focused on categories that are not being recognized well. After this, on a more fine-grained level, the error for each sample is checked, and the greater this error, the more often that sample is presented (Figure 2). This leads to a more balanced recognition over the categories.
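A minimal sketch of the review method, assuming hypothetical per-sample errors: samples are drawn for presentation with probability proportional to their current error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample errors from the previous pass over the training set.
errors = np.array([0.05, 0.40, 0.10, 0.90, 0.02])

# Review method: presentation probability proportional to each sample's error,
# so poorly learned samples are reviewed more often.
p = errors / errors.sum()
schedule = rng.choice(len(errors), size=1000, p=p)

counts = np.bincount(schedule, minlength=len(errors))
print(counts)
```

Recomputing the errors periodically and redrawing the schedule keeps the presentation frequencies tracking the network's current weaknesses.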
2.3 SECOND METHOD (PREPARATION METHOD) The second method, designed to prevent over-training, is to increase the number of training samples when the network's total error rate is observed to fall below a certain value (Figure 2). 3 RECOGNITION EXPERIMENT 3.1 INPUT PATTERN AND NETWORK STRUCTURE Kanji characters are composed of sub-characters called radicals (Figure 3). The four Kanji characters used in our experiment are shown in Figure 4. These characters are all combinations of two kinds of left radicals and two kinds of right radicals. Visually, these characters are similar and hence difficult to discriminate. The training samples for this network were chosen from a database of about 3000 Kanji characters [Saito 1985]. For each character, there are 200 handwritten samples from different writers. 100 are used as training samples, and the remaining 100 are used to test recognition accuracy of the trained network. All samples in the database consist of 64 by 63 dots. If we were to use this pattern as the input to our neural net, the number of units required in the input layer would be too large for the computational abilities of current computers. Therefore, two kinds of feature vectors extracted from handwritten patterns are used as the input. In one of the feature vectors, the "MESH feature", there are 64 dimensions computing the density of the 8 by 8 small squares into which handwritten samples are divided. In the other, the "LDCD feature" [Hagita 1983], there are 256 dimensions computing a line segment length along four directions (horizontal, vertical, and two diagonals) in the same small squares. In this experiment, we use a feed-forward neural network with three layers: an input layer, a hidden layer, and an output layer. Each unit of the input layer is connected to all units of the hidden layer, and each unit of the hidden layer is connected to all units of the output layer.
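The MESH feature described above can be sketched as follows; the padding of the 63-dot dimension to 64 is an assumption, since the paper does not say how the odd dimension is divided into the 8 by 8 squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64 x 63 binary character image; the pad to 64 x 64 is an
# assumption so the image tiles evenly into an 8 x 8 grid of squares.
img = (rng.random((64, 63)) < 0.2).astype(float)
img = np.pad(img, ((0, 0), (0, 1)))

# MESH feature: mean ink density in each of the 8 x 8 squares -> 64 dims.
mesh = img.reshape(8, 8, 8, 8).mean(axis=(1, 3)).ravel()
print(mesh.shape)                          # (64,)
```

The LDCD feature would be computed over the same squares, but accumulating line-segment lengths along the four directions instead of raw density.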
[Fig. 3: Concept of a radical, illustrated with the Kanji for "theory", split into its left and right radicals. Fig. 4: Examples of the Kanji characters used. Figure 5: The LDCD feature, computing line-segment lengths (e.g. the horizontal component) within each small square.] 3.2 RECOGNITION RESULTS (MESH VS. LDCD) Average recognition rates when the MESH feature was used were 98.5% for training samples and 82.5% for testing samples. Average recognition rates when the LDCD feature was used were 99.5% for training samples and 92.0% for testing samples. These recognition rates for neural networks were higher than for the conventional methods we used. 3.3 RECOGNITION RATE AND THE NUMBER OF SAMPLES We gradually increased the number of training samples to investigate the influence of this number on the recognition rate of testing samples. Figure 6 shows the recognition rate of testing samples for ten different amounts of training samples. When the number of training samples is 2 or 3, the recognition rates are lower than for 1 training sample. This result is probably due to the fact that the second samples in each set are not well-written. This result suggests that an average pattern should be used in the early training period. [Figure 6: Recognition rate (percent correct) vs. number of training samples per Kanji category, from 1 to 100 on a logarithmic scale.] 3.4 ANALYSIS OF INNER REPRESENTATION 3.4.1 Weights vs. Difference Between Averaged Samples To investigate how this neural network learns to solve the given task, the weights vector from the input layer to each hidden unit is compared to the difference between averaged samples with a common radical.
Since the four Kanji characters in this task are all combinations of two kinds of left radicals and two kinds of right radicals, two hidden units which take charge of left and right radicals, respectively, are enough to accomplish recognition. At first, 200 samples with the same left radical were averaged. Since there are just two left radicals in the four Kanji characters, this produced two averaged patterns. These two patterns were then subtracted, yielding a pattern that corresponds to the difference between the two left radicals. The same method was used to obtain a pattern that corresponds to the difference between the two right radicals. Then, for each of these patterns, the correlation coefficient with the weights from the input layer to each hidden unit is calculated. The pattern for left radicals was very highly correlated with hidden unit 1 (R=0.71, p<0.01), and not correlated with hidden unit 2. On the other hand, the pattern for right radicals was very highly correlated with hidden unit 2 (R=0.79, p<0.01), and not correlated with hidden unit 1. In other words, each hidden unit is discriminating among radicals of one particular side of the Kanji characters. 3.4.2 Weights vs. Bayes Discrimination The Bayes method is used as a discrimination function when the distribution of the categories is known. Supposing that the distribution of categories in this task is a normal distribution and the covariance matrix of each category is equal, the discrimination function becomes first order, as given below: f(x) = (μ_l − μ_r)^t Σ x + c (1) where Σ is the covariance matrix for samples with the same radical, μ_l is the average vector for samples with the same left radical, μ_r is the average vector for samples with the same right radical, x is the input feature vector, and c is a constant. The input vector to the input layer is translated to a hidden unit as follows: y = W x + a (2) where y is the input sum, x is the input feature vector, W is the weights matrix from the input layer to a hidden unit, and a is the threshold. Equation (2) is similar to equation (1).
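The correlation analysis of Section 3.4.1 can be sketched as follows; the averaged radical patterns and the hidden-unit weight vector here are synthetic stand-ins, with the weights deliberately constructed to track the radical-difference pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
dims = 64                                   # MESH feature dimensionality

# Synthetic averaged patterns for the two left radicals (stand-ins).
avg_left_a = rng.random(dims)
avg_left_b = rng.random(dims)
diff_left = avg_left_a - avg_left_b         # pattern separating the radicals

# Synthetic input-to-hidden weight vector, built here to track diff_left.
w_hidden = diff_left + rng.normal(0.0, 0.1, dims)

# Correlation coefficient between the difference pattern and the weights.
r = np.corrcoef(diff_left, w_hidden)[0, 1]
print(round(float(r), 2))
```

The same computation applied to the Bayes weights of equation (1) in place of `diff_left` gives the comparison reported in Section 3.4.2.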
If the network uses a strategy similar to Bayes discrimination, there should be some correlation between the Bayes weights (μ_l − μ_r)^t Σ in equation (1) and W in equation (2). When the correlation coefficient between the Bayes weights and the weights from the input layer to each hidden unit was calculated, there was no significant correlation between them (R=0.02, p>0.05). In other words, the network does not use a strategy like Bayes discrimination. 4 CONCLUSION For this experiment, we observed that the learning procedure is influenced by the surrounding environment. With this fact in mind, new methods were proposed to make training within a learning process more effective. These methods lead to balanced recognition rates over categories. The most important result from this experiment is that a network trained with BP can perceive that Kanji characters are composed of radicals. Based on this ability, it is possible to estimate the number of units required for the hidden layer of a network. Such a network could then form the building block of a large-scale network capable of recognizing as many as the 3000 Kanji characters commonly used in Japan. Acknowledgments We are grateful to Dr. Michio Umeda for his support and encouragement. Special thanks to Kazuki Joe for the ideas he provided in our many discussions and for his help in developing simulation programs. References [Burr 1986] D.J. Burr, "A Neural Network Digit Recognizer", IEEE-SMC, 1621-1625, 1986. [Ho 1988] A. Ho and W. Furmanski, "Pattern Recognition by Neural Network Model on Hypercubes", HCCA3-528. [Rumelhart 1986] D.E. Rumelhart et al., "Parallel Distributed Processing", vol. 1, The MIT Press, 1986. [Saito 1985] T. Saito, H. Yamada, K. Yamamoto, "On the Data Base ETL9 of Handprinted Characters in JIS Chinese Characters and Its Analysis", J68-D, 4, 757-764, 1985. [Hagita 1983] N. Hagita, S. Naito, I. Masuda, "Recognition of Handprinted Chinese Characters by Global and Local Direction Contributivity Density-Feature", J66-D, 6, 722-729, 1983.
"FAST LEARNING IN MULTI-RESOLUTION HIERARCHIES" John Moody Yale Computer Science, P.O. Box 2158, New Haven, CT 06520 Abstract A class of fast, supervised learning algorithms is presented. They use local representations, hashing, atld multiple scales of resolution to approximate functions which are piece-wise continuous. Inspired by Albus's CMAC model, the algorithms learn orders of magnitude more rapidly than typical implementations of back propagation, while often achieving comparable qualities of generalization. Furthermore, unlike most traditional function approximation methods, the algorithms are well suited for use in real time adaptive signal processing. Unlike simpler adaptive systems, such as linear predictive coding, the adaptive linear combiner, and the Kalman filter, the new algorithms are capable of efficiently capturing the structure of complicated non-linear systems. As an illustration, the algorithm is applied to the prediction of a chaotic timeseries. 1 Introduction A variety of approaches to adaptive information processing have been developed by workers in disparate disciplines. These include the large body of literature on approximation and interpolation techniques (curve and surface fitting), the linear, real-time adaptive signal processing systems (such as the adaptive linear combiner and the Kalman filter), and most recently, the reincarnation of non-linear neural network models such as the multilayer perceptron. Each of these methods has its strengths and weaknesses. The curve and surface fitting techniques are excellent for off-line data analysis, but are typically not formulated with real-time applications in mind. The linear techniques of adaptive signal processing and adaptive control are well-characterized, but are limited to applications for which linear descriptions are appropriate. 
Finally, neural network learning models such as back propagation have proven extremely versatile at learning a wide variety of non-linear mappings, but tend to be very slow computationally and are not yet well characterized. The purpose of this paper is to present a general description of a class of supervised learning algorithms which combine the ability of the conventional curve fitting and multilayer perceptron methods to precisely learn non-linear mappings with the speed and flexibility required for real-time adaptive application domains. The algorithms are inspired by a simple, but often overlooked, neural network model, Albus's Cerebellar Model Articulation Controller (CMAC) [2,1], and have a great deal in common with the standard techniques of interpolation and approximation. The algorithms "learn from examples", generalize well, and can perform efficiently in real time. Furthermore, they overcome the problems of precision and generalization which limit the standard CMAC model, while retaining the CMAC's speed. 2 System Description The systems are designed to rapidly approximate mappings g : x ↦ y from multidimensional input spaces x ∈ S_input to multidimensional output spaces y ∈ S_output. The algorithms can be applied to any problem domain for which a metric can be defined on the input space (typically the Euclidean, Hamming, or Manhattan metric) and for which the desired learned mapping is (to a close approximation) piece-wise continuous. (Discontinuities in the desired mapping, such as those at classification boundaries, are approximated continuously.) Important general classes of such problems include approximation of real-valued functions ℝ^n ↦ ℝ^m (such as those found in signal processing), classification problems ℝ^n ↦ B^m (such as phoneme classification), and boolean mapping problems B^n ↦ B^m (such as the NETtalk problem [20]). Here, ℝ are the reals and B is {0,1}.
This paper focuses on real-valued mappings; the formulation and application of the algorithms to boolean problem domains will be presented elsewhere. In order to specify the complete learning system in detail, it is easiest to start with simple special cases and build the description from the bottom up: 2.1 A Simple Adaptive Module The simplest special case of the general class under consideration is described as follows. The input space is overlayed with a lattice of points x_β; a local function value or "weight" V_β is assigned to every possible lattice point. The output of the system for a given input is: z(x) = Σ_β V_β N_β(x) (1) where N_β(x) is a neighborhood function for the βth lattice point such that N_β = 1 if x_β is the lattice point closest to the input vector x, and N_β = 0 otherwise. More generally, the neighborhood functions N can overlap and the sum in equation (1) can be replaced by an average. This results in a greater ability to generalize when training data is sparse, but at the cost of losing fine detail. Learning is accomplished by varying the V_β to minimize the squared error of the system output on a set of training data: E = (1/2) Σ_i (z_i^desired − z(x_i))² (2) where the sum is over all exemplars {x_i, z_i^desired} in the training set. The determination of V_β is easily formulated as a real time adaptive algorithm by using gradient descent to minimize an instantaneous estimate E(t) of the error: dV/dt = −η dE(t)/dV (3) 2.2 Saving Memory with Hashing: The CMAC The approach of the previous section encounters serious difficulty when the dimension of the input space becomes large and the distribution of data in the input space becomes highly non-uniform. In such cases, allocating a separate function value for each possible lattice point is extremely wasteful, because the majority of lattice points will have no training data within a local neighborhood.
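A one-dimensional sketch of the simple adaptive module of Section 2.1 (equations 1-3), with nearest-lattice-point neighborhoods and per-exemplar gradient steps; the target function and learning rate are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 17                                     # lattice points per dimension on [0, 1]
lattice = np.linspace(0.0, 1.0, K)
V = np.zeros(K)                            # one "weight" per lattice point
eta = 0.5                                  # assumed learning rate

def nearest(x):
    return int(np.argmin(np.abs(lattice - x)))

def z(x):
    return V[nearest(x)]                   # Eq. (1) with 0/1 neighborhoods

target = np.sin                            # assumed function to be learned
for _ in range(2000):
    x = float(rng.random())
    beta = nearest(x)
    V[beta] += eta * (target(x) - V[beta])  # per-exemplar gradient step, Eq. (3)

print(abs(z(0.5) - np.sin(0.5)))
```

Each exemplar touches only one table entry, which is what makes the update so cheap compared with adjusting every parameter of a global model.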
As an example, suppose that the input space is four dimensional, but that all input data lies on a fuzzy two dimensional subspace. (Such a situation [projected onto 3 dimensions] is shown in figure [2A].) Furthermore, suppose that the input space is overlayed with a rectangular lattice with K nodes per dimension. The complete lattice will contain K⁴ nodes, but only O(K²) of those nodes will have training data in their local neighborhoods. Thus, only O(K²) of the weights V_β will have any meaning. The remaining O(K⁴) weights will be wasted. (This assumes that the lattice is not too fine. If K is too large, then only O(P) of the lattice points will have training data nearby, where P is the number of training data.) An alternative approach is to have only a small number of weights and to allocate them to only those regions of the input space which are populated with training data. This allocation can be accomplished by a dimensionality-reducing mapping from a virtual lattice in the input space onto a lookup table of weights or function values. In the absence of any a priori information about the distribution of data in the input space, the optimal mapping is a random mapping, for example a universal hashing function [8]. The random nature of such a function insures that neighborhood relationships in the virtual lattice are not preserved. The average behavior of an ensemble of universal hashing functions is thus to access all elements of the lookup table with equal probability, regardless of the correlations in the input data. The many-to-one hash function can be represented here as a matrix H_τβ of 0's and 1's with one 1 per column, but many 1's per row. With this notation, the system response function is: z(x) = Σ_{τ=1}^{T} Σ_{β=1}^{N} V_τ H_τβ N_β(x) (4) [Figure 1: (A) A simple CMAC module, mapping resolution layers through a hash table. (B) The computation of errors for a multi-resolution hierarchy.]
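Equation (4) can be sketched as follows. The multiplicative hash used here is an assumption standing in for the universal hashing function of [8]; as in the paper, collisions are left unresolved.

```python
import numpy as np

K = 64                     # nodes per dimension on the virtual lattice
DIM = 4                    # input dimension, so K**DIM is about 1.7e7 virtual nodes
TABLE = 4096               # lookup-table size, far smaller than the lattice

def hash_index(coords):
    # Assumed stand-in for a universal hash: mix the lattice coordinates
    # with large odd multipliers, then reduce modulo the table size.
    mults = (73856093, 19349663, 83492791, 49979693)
    h = 0
    for c, m in zip(coords, mults):
        h ^= c * m
    return h % TABLE

V = np.zeros(TABLE)

def response(x):
    # Nearest virtual-lattice node -> one hashed table entry (no overlap,
    # and, as in the paper, no collision resolution).
    coords = tuple(int(round(xi * (K - 1))) for xi in x)
    return V[hash_index(coords)]

print(response((0.1, 0.2, 0.3, 0.4)))      # 0.0 while the table is untrained
```

Only populated regions of the input space ever consume table entries, which is the memory saving the hashing buys.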
The CMAC model of Albus is obtained when a distributed representation of the input space is used and the neighborhood functions N_β(x) are overlapping. In this case, the sum over β is replaced by an average. Note that, as specified by equation (4), hash table collisions are not resolved. This introduces "collision noise", but the effect of this noise is reduced by 1/√B, where B is the number of neighborhood functions which respond to a given input. Collision noise can be completely eliminated if standard collision resolution techniques are used. A few comments should be made about efficiency. In spite of the costly formal sums in equation (4), actual implementations of the algorithm are extremely fast. The set of non-zero N_β(x) on the virtual lattice, the hash function value for each vertex, and the set of corresponding lookup table values V_τ given by the hash function are easily determined on the fly. The entire hash function H_τβ is never pre-computed, the sum over the index β is limited to a few lattice points neighboring the input x, and since each lattice point is associated with only one lookup table value, the formal sum over τ disappears. The CMAC model is shown schematically in figure [1A]. 2.3 Interpolation: Neighborhood Functions with Graded Response One serious problem with the formulations discussed so far is that the neighborhood functions are constant in their regions of support. Thus, the system response is discontinuous over neighborhood boundaries. This problem can be easily remedied by using neighborhood functions with graded response in order to perform continuous interpolation between lattice points.
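A one-dimensional sketch of such graded, overlapping responses, assuming triangular (linear B-spline) neighborhood functions, which are self-normalized on a uniform lattice:

```python
import numpy as np

# Triangular (linear B-spline) responses on a uniform 1-D lattice.
K = 9
lattice = np.linspace(0.0, 1.0, K)
spacing = lattice[1] - lattice[0]
V = np.sin(2 * np.pi * lattice)            # assumed lookup-table contents

def R(x):
    # Graded response per lattice point: maximal at the point, zero beyond
    # one cell away; these tents sum to 1 everywhere on [0, 1].
    return np.clip(1.0 - np.abs(x - lattice) / spacing, 0.0, None)

def z(x):
    r = R(x)
    return (V * r).sum() / r.sum()          # normalized system response

# Between lattice points the response is exactly linear interpolation.
print(z(0.3), np.interp(0.3, lattice, V))
```

With these overlapping tents the output varies continuously as the input crosses cell boundaries, removing the discontinuities of the 0/1 neighborhoods.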
They are intended to have local support on the" input space Sinput, thus being non-zero only in a local neighborhood of their associated lattice point Xf3. Each function Rf3(x) attains its maximum value at lattice point i f3 and drops off monotonically to zero as the distance lIif3 - Xli increases. Note that R is not necessarily isotropic or symmetric. Certain classes of localized response functions R defined on certain lattices are self-normalized, meaning that: L Rf3(X) = 1 , for any x. f3 In this case, the equation (5) simplifies to: Z(x) = L L: liT HTf3 Rf3(X) T f3 (6) (7) One particularly important and useful class of of response functions are the Bsplines. However, it is not easy to formulate B-splines on arbitrary lattices in high dimensional spaces. 2.4 M uIti-Resolution Interpolation The final limitation of the methods described so far is that they use a lattice at only one scale of resolution. Without detailed a priori knowledge of the distribution of data in the input space, it is difficult to choose an optimal lattice spacing. Furthermore, there is almost always a trade-off between the ability to generalize and the ability to capture fine detail. When a single coarse resolution is used, generalization is good, but fine details are lost. When a single fine resolution is used, fine details are captured in those regions which contain dense data, but no general picture emerges for those regions in which data is sparse. Good generalization and fine detail can both be captured by using a multiresolution hierarchy. A hierarchical system with L levels represents functions 9 : i 1-+ yin the followmg way: L y(X) = Yi.(x) = L: %A (E) , (8) ~=l where %A is a mapping as described in equation(5) for the A-th level in the hierarchy. The coarsest scale is A = 1 and the finest is A = L. 34 Moody The multi-resolution system is trained such that the finer scales learn corrections to the total output of the coarser scales. 
This is accomplished by using a hierarchy of error functions. For each level in the hierarchy λ, the output for that level, ȳ_λ, is defined to be the partial sum ȳ_λ = Σ_{κ=1}^{λ} z_κ. (Note that ȳ_{λ+1} = ȳ_λ + z_{λ+1}.) The error for level λ is defined to be E_λ = Σ_i E_λ(i), where the error associated with the ith exemplar is: E_λ(i) = (1/2) (y_i^desired − ȳ_λ(x_i))² The learning or training procedure for level λ involves varying the lookup table values V_τ^λ for that level to minimize E_λ. Note that the lookup table values V_τ^κ for previous or subsequent levels (κ ≠ λ) are held fixed during the minimization of E_λ. Thus, the lookup table values for each level are varied to minimize only the error defined for that level. This hierarchical learning procedure guarantees that the first level mapping z_1 is the best possible at that level, the second level mapping z_2 constitutes the best possible corrections to the first level, and the λ-th level mapping z_λ constitutes the best possible corrections to the total contributions of all previous levels. The computation of error signals is shown schematically in figure [1B]. It should be noted that multi-resolution approaches have been successfully used in other contexts. Examples are the well-known multigrid methods for solving differential equations and the pyramid architectures used in machine vision [6,7]. 3 Application to Timeseries Prediction The multi-resolution hierarchy can be applied to a wide variety of problem domains as mentioned earlier. Due to space limitations, we consider only one test problem here, the prediction of a chaotic timeseries. As it is usually formulated, the prediction is accomplished by finding a real-valued mapping f : ℝ^n ↦ ℝ which takes a sequence of n recent samples of the timeseries and predicts the value at a future moment.
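The construction of exemplars from a scalar series can be sketched as follows, with Δ = 6, T = 85, and a 4-dimensional imbedding as in the experiments below; the sinusoid is just a stand-in series.

```python
import numpy as np

def make_exemplars(x, delta=6, horizon=85, n=4):
    # Rows are (x[t], x[t-delta], ..., x[t-(n-1)*delta]); targets x[t+horizon].
    X, y = [], []
    for t in range((n - 1) * delta, len(x) - horizon):
        X.append([x[t - k * delta] for k in range(n)])
        y.append(x[t + horizon])
    return np.array(X), np.array(y)

series = np.sin(0.1 * np.arange(1000))     # stand-in for the actual series
X, y = make_exemplars(series)
print(X.shape, y.shape)                    # (897, 4) (897,)
```

Any of the learners above can then be fit to the pairs (X, y) to approximate the mapping f.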
Typically, the state space imbedding in ℝ^n is x̄[t] = (x[t], x[t−Δ], x[t−2Δ], x[t−3Δ]), where Δ is the sampling parameter, and the correct prediction for prediction time T is x[t + T]. For the purposes of testing various non-parametric prediction methods, it is assumed that the underlying process which generates the timeseries is unknown. The particular timeseries studied here results from integrating the Mackey-Glass differential-delay equation [14]: dx[t]/dt = −b x[t] + a x[t−τ] / (1 + x[t−τ]^10) (9) [Figure 2: (A) Imbedding in three dimensions of 1000 successive points of the Mackey-Glass chaotic timeseries with delay parameter τ = 17 and sampling parameter Δ = 6. (B) Normalized prediction error vs. number of training data. Squares are runs with the multi-resolution hierarchy. The circle is the back propagation benchmark. The horizontal line is included for visual reference only and is not intended to imply a scaling law for back propagation.] The solid lines in figure [3] show the resulting timeseries for τ = 17, a = 0.2, and b = 0.1; note that it is cyclic, but not periodic. The characteristic time of the series, given by the inverse of the mean of the power spectrum, is t_char ≈ 50. Classical techniques like linear predictive coding and Gabor-Volterra-Wiener polynomial expansions typically do no better than chance when predicting beyond t_char [10]. For purposes of comparison, the sampling parameter and prediction time are chosen to be Δ = 6 and T = 85 > t_char respectively. Figure [2A] shows a projection of the four dimensional state space imbedding onto three dimensions. The orbits of the series lie on a fuzzy two dimensional subspace which is a strange attractor of fractal dimension 2.1.
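A sketch of generating the series by Euler integration of equation (9); the step size and the constant initial history are assumptions, since the paper does not specify its integration scheme.

```python
import numpy as np

a, b, tau, dt = 0.2, 0.1, 17.0, 0.1        # parameters from the paper
steps = 30000
delay = round(tau / dt)

x = np.empty(steps)
x[: delay + 1] = 1.2                        # assumed constant initial history
for t in range(delay, steps - 1):
    x_tau = x[t - delay]
    # Eq. (9): dx/dt = -b x[t] + a x[t - tau] / (1 + x[t - tau]^10)
    x[t + 1] = x[t] + dt * (-b * x[t] + a * x_tau / (1.0 + x_tau ** 10))

series = x[:: round(6 / dt)]                # sample with Delta = 6
print(len(series))                          # 500
```

Subsampling the integrated trajectory with the paper's Δ = 6 gives the series from which the prediction exemplars are drawn.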
This problem has been studied by both conventional data analysis techniques and by neural network methods. It was first studied by Farmer and Sidorowich who locally fitted linear and quadratic surfaces directly to the data. [11,10]. The exemplars in the imbedding space were stored in a k-d tree structure in order to allow rapid determination of proximity relationships [3,4,19]. The local surface fitting method is extremely efficient computationally. This kind of approach has found wide application in the statistics community [5]. Casdagli has applied the method of radial basis functions, which is an exact interpolation method and also depends on explicit storage of the data. [9]. The radial basis functions method is a global method and becomes com36 Moody putationally expensive when the number of exemplars is large, growing as O(P3). Both approaches yield excellent results when used as off-line algorithms, but do not seem to be well suited to real-time application domains. For real-time applications, little a priori knowledge about the data can be assumed, large amounts of past data can't be stored, the function being learned may vary with time, and computing speed is essential. Three different neural network techniques have been applied to the timeseries prediction problem, back propagation [13], self-organized, locally-tuned processing units [18,17], and an approach based on the GMDH method and simulated annealing [21]. The first two approaches can in principle be applied in real time, because they don't require explicit storage of past data and can adapt continuously. Back propagation yields better predictions since it is completely supervised, but the locally-tuned processing units learn substantially faster. The GMDH approach yields excellent results, but is computationally intensive and is probably limited to off-line use. The multi-resolution hierarchy is intended to offer speed, precision, and the ability to adapt continuously in real time. 
Its application to the Mackey-Glass prediction problem is demonstrated in two different modes of operation: off-line learning and real-time learning.

3.1 Off-Line Learning

In off-line mode, a five level hierarchy was trained to predict the future values. At each level, a regular rectangular lattice was used, with each lattice having A intervals and therefore A + 1 nodes per dimension. The lattice resolutions were chosen to be (A1 = 4, A2 = 8, A3 = 16, A4 = 32, A5 = 64). The corresponding number of vertices in each of the virtual 4-dimensional lattices was therefore (M1 = 625, M2 = 6,561, M3 = 83,521, M4 = 1,185,921, M5 = 17,850,625). The corresponding lookup table sizes were (T1 = 625, T2 = 4096, T3 = 4096, T4 = 4096, T5 = 4096). Note that T1 = M1, so hashing was not required for the first layer. For all other layers, T_λ < M_λ, so hashing was used. For layers 3, 4, and 5, T_λ << M_λ, so hashing resulted in a dramatic reduction in the memory required. The neighborhood response function R_β(x) was a B-spline with support in the 16 cells adjacent to each lattice point x_β. Hash table collisions were not resolved. The learning method used was simple gradient descent. The lookup table values were updated after the presentation of each exemplar. At each level, the training set was presented repeatedly until a convergence criterion was satisfied. The levels were trained sequentially: level 1 was trained until it converged, followed by level 2, and so on. The performance of the system as a function of training set size is shown in figure [2B]. The normalized error is defined as [rms error]/σ, where σ is the standard deviation of the timeseries. For each run, a different segment of the timeseries was used. In all cases, the performance was measured on an independent test sequence consisting of the 500 exemplars immediately following the training sequence.
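The hashed multi-level lookup just described can be caricatured in a few lines. The sketch below simplifies freely: it looks up a single nearest lattice node per level instead of a 16-cell B-spline neighborhood, uses Python's built-in hash in place of a universal hash function, trains all levels jointly rather than sequentially, and leaves collisions unresolved as in the paper; all names are invented:

```python
class HashedHierarchy:
    """Multi-resolution lattice lookup with fixed-size hashed tables."""

    def __init__(self, resolutions=(4, 8, 16, 32, 64), table_size=4096):
        self.resolutions = resolutions
        self.table_size = table_size
        self.tables = [[0.0] * table_size for _ in resolutions]

    def _slot(self, level, x):
        # nearest lattice node per dimension, hashed into a fixed-size table
        key = tuple(round(xi * self.resolutions[level]) for xi in x)
        return hash((level, key)) % self.table_size

    def predict(self, x):
        # the output is the sum of the level responses
        return sum(t[self._slot(i, x)] for i, t in enumerate(self.tables))

    def learn(self, x, target, lr=0.3):
        # simple gradient descent, updating after each exemplar
        err = target - self.predict(x)
        for i, t in enumerate(self.tables):
            t[self._slot(i, x)] += lr * err
        return err
```

On the Mackey-Glass task, x would be the 4-dimensional imbedding vector (rescaled into the unit cube) and target the value x[t + T].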
The prediction error initially drops rapidly as the number of training data is increased, but then begins to level out. This leveling out is most likely caused by collision noise in the hash tables. Collision resolution techniques should improve the results, but have not yet been implemented. For training sets with 500 exemplars, the multi-resolution hierarchy achieved prediction accuracy equivalent to that of a back propagation network trained by Lapedes and Farber [13]. Their network had four linear inputs, one linear output, and two internal layers, each containing 20 sigmoidal units. The layers were fully connected, yielding 541 adjustable parameters (weights and thresholds) total. They trained their network in off-line mode using conjugate gradient, which they found to be significantly faster than gradient descent. The multi-resolution hierarchy converged in about 3.5 minutes on a Sun 3/60 for the 500 exemplar runs. Lapedes estimates that the back propagation network required probably 5 to 10 minutes of Cray X/MP time running at about 90 Mflops [12]. This would correspond to about 4,000 to 8,000 minutes of Sun 3/60 time. Hence, the multi-resolution hierarchy converged about three orders of magnitude faster than the back propagation network. This comparison should not be taken to be universal, since many implementations of both back propagation and the multi-resolution hierarchy are possible. Other comparisons could easily vary by factors of ten or more. It is interesting to note that the training time for the multi-resolution hierarchy increased sub-linearly with training set size. This is because the lookup table values were varied after the presentation of each exemplar, not after presentation of the whole set. A similar effect should be observable in back propagation nets.
In fact, training after the presentation of each exemplar could very likely increase the overall rate of convergence for a back propagation net.

3.2 Real-Time Learning

Unlike most standard curve and surface fitting methods, the multi-resolution hierarchy is extremely well-suited for real-time applications. Indeed, the standard CMAC model has been applied to the real-time control of robots with encouraging success [16,15]. Figure [3] illustrates a two level hierarchy (with 5 and 9 nodes per dimension) learning to predict the timeseries for T = 50 from an initial tabula rasa configuration (all lookup table values set to zero). The solid line is the actual timeseries data, while the dashed line shows the predicted values. The predicted values lead the actual values in the graphs. Notice that the system discovers the intrinsically cyclic nature of the series almost immediately. At the end of a single pass through 9,900 exemplars, the normalized prediction error is below 5% and the fit looks very good to the eye. On a Sun 3/50, the algorithm required 1.4 msec per level to respond to and learn from each exemplar. At this rate, the two level system was able to process 360 exemplars (over 7 cycles of the timeseries) per second. This rate would be considered phenomenal for a typical back propagation network running on a Sun 3/50.

[Figure 3: An example of learning to predict the Mackey-Glass chaotic timeseries in real time with a two-stage multi-resolution hierarchy. Left panel: exemplars 0-400; right panel: exemplars 9500-9900. Axes: prediction value vs. time (= # of exemplars).]

4 Discussion

There are two reasons that the multi-resolution hierarchy learns much more quickly than back propagation.
The first is that the hierarchy uses local representations of the input space and thus requires evaluation and modification of only a few lookup table values for each exemplar. In contrast, the complete back propagation net must be evaluated and modified for each exemplar. Second, the learning in the multi-resolution hierarchy is cast as a purely quadratic optimization procedure. In contrast, the back propagation procedure is non-linear and is plagued with a multitude of local minima and plateaus which can significantly retard the learning process. In these respects, the multi-resolution hierarchy is very similar to the local surface fitting techniques exploited by Farmer and Sidorowich. The primary difference, however, is that the hierarchy, with its multi-resolution architecture and hash table data structures, offers the flexibility needed for real-time problem domains and does not require the explicit storage of past data or the creation of data structures which depend on the distribution of data.

Acknowledgements

I gratefully acknowledge helpful comments from Chris Darken, Doyne Farmer, Alan Lapedes, Tom Miller, Terry Sejnowski, and John Sidorowich. I am especially grateful for support from ONR grant N00014-86-K-0310, AFOSR grant F49620-88-C-0025, and a Purdue Army subcontract.

References

[1] J.S. Albus. Brains, Behavior, and Robotics. Byte Books, 1981.
[2] J.S. Albus. A new approach to manipulator control: the cerebellar model articulation controller (CMAC). J. Dyn. Sys., Meas., Contr., 97:220, 1975.
[3] Jon L. Bentley. Multidimensional binary search trees in database applications. IEEE Trans. on Software Engineering, SE-5:333, 1979.
[4] Jon L. Bentley. Multidimensional divide and conquer. Communications of the ACM, 23:214, 1980.
[5] L. Breiman, J.H. Friedman, R.A. Olshen, and C.J. Stone. Classification and Regression Trees. Wadsworth, Monterey, CA, 1984.
[6] Peter J. Burt and Edward H. Adelson.
The Laplacian pyramid as a compact image code. IEEE Trans. Communications, COM-31:532, 1983.
[7] Peter J. Burt and Edward H. Adelson. A multiresolution spline with application to image mosaics. ACM Trans. on Graphics, 2:217, 1983.
[8] J.L. Carter and M.N. Wegman. Universal classes of hash functions. In Proceedings of the Ninth Annual SIGACT Conference, 1977.
[9] M. Casdagli. Nonlinear Prediction of Chaotic Time Series. Technical Report, Queen Mary College, London, 1988.
[10] J.D. Farmer and J.J. Sidorowich. Exploiting Chaos to Predict the Future and Reduce Noise. Technical Report, Los Alamos National Laboratory, Los Alamos, New Mexico, 1988.
[11] J.D. Farmer and J.J. Sidorowich. Predicting chaotic time series. Physical Review Letters, 59:845, 1987.
[12] A. Lapedes. Personal communication, 1988.
[13] A.S. Lapedes and R. Farber. Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling. Technical Report, Los Alamos National Laboratory, Los Alamos, New Mexico, 1987.
[14] M.C. Mackey and L. Glass. Oscillation and chaos in physiological control systems. Science, 197:287, 1977.
[15] W.T. Miller, F.H. Glanz, and L.G. Kraft. Application of a general learning algorithm to the control of robotic manipulators. International Journal of Robotics Research, 6(2):84, 1987.
[16] W. Thomas Miller. Sensor-based control of robotic manipulators using a general learning algorithm. IEEE Journal of Robotics and Automation, RA-3(2):157, 1987.
[17] J. Moody and C. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1989. To appear.
[18] J. Moody and C. Darken. Learning with localized receptive fields. In Touretzky, Hinton, and Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann Publishers, 1988.
[19] S. Omohundro. Efficient algorithms with neural network behavior. Complex Systems, 1:273, 1987.
[20] T. Sejnowski and C. Rosenberg. Parallel networks that learn to pronounce English text.
Complex Systems, 1:145, 1987.
[21] M.F. Tenorio and W.T. Lee. Self-organized neural networks for the identification problem. Poster paper presented at the Neural Information Processing Systems Conference, 1988.
Mapping Classifier Systems Into Neural Networks

Lawrence Davis
BBN Laboratories
BBN Systems and Technologies Corporation
10 Moulton Street
Cambridge, MA 02238

January 16, 1989

Abstract

Classifier systems are machine learning systems incorporating a genetic algorithm as the learning mechanism. Although they respond to inputs that neural networks can respond to, their internal structure, representation formalisms, and learning mechanisms differ markedly from those employed by neural network researchers in the same sorts of domains. As a result, one might conclude that these two types of machine learning formalisms are intrinsically different. This is one of two papers that, taken together, prove instead that classifier systems and neural networks are equivalent. In this paper, half of the equivalence is demonstrated through the description of a transformation procedure that will map classifier systems into neural networks that are isomorphic in behavior. Several alterations of the commonly-used paradigms employed by neural network researchers are required in order to make the transformation work. These alterations are noted and their appropriateness is discussed. The paper concludes with a discussion of the practical import of these results, and with comments on their extensibility.

1 Introduction

Classifier systems are machine learning systems that have been developed since the 1970s by John Holland and, more recently, by other members of the genetic algorithm research community as well¹. Classifier systems are varieties of genetic algorithms: algorithms for optimization and learning. Genetic algorithms employ techniques inspired by the process of biological evolution in order to "evolve" better and better

¹This paper has benefited from discussions with Wayne Mesard, Rich Sutton, Ron Williams, Stewart Wilson, Craig Shaefer, David Montana, Gil Syswerda and other members of BARGAIN, the Boston Area Research Group in Genetic Algorithms and Inductive Networks.
individuals that are taken to be solutions to problems such as optimizing a function, traversing a maze, etc. (For an explanation of genetic algorithms, the reader is referred to [Goldberg 1989].) Classifier systems receive messages from an external source as inputs and organize themselves using a genetic algorithm so that they will "learn" to produce responses for internal use and for interaction with the external source. This paper is one of two papers exploring the question of the formal relationship between classifier systems and neural networks. As normally employed, the two sorts of algorithms are probably distinct, although a procedure for translating the operation of neural networks into isomorphic classifier systems is given in [Belew and Gherrity 1988]. The technique Belew and Gherrity use does not include the conversion of the neural network learning procedure into the classifier system framework, and it appears that the technique will not support such a conversion. Thus, one might conjecture that the two sorts of machine learning systems employ learning techniques that cannot be reconciled, although if there were a subsumption relationship, Belew and Gherrity's result suggests that the set of classifier systems might be a superset of the set of neural networks. The reverse conclusion is suggested by consideration of the inputs that each sort of learning algorithm processes. When viewed as "black boxes", both mechanisms for learning receive inputs, carry out self-modifying procedures, and produce outputs. The class of inputs that are traditionally processed by classifier systems, the class of bit strings of a fixed length, is a subset of the class of inputs that have been traditionally processed by neural networks. Thus, it appears that classifier systems operate on a subset of the inputs that neural networks can process, when viewed as mechanisms that can modify their behavior. In fact, both these impressions are correct.
One can translate classifier systems into neural networks, preserving their learning behavior, and one can translate neural networks into classifier systems, again preserving learning behavior. In order to do so, however, some specializations of each sort of algorithm must be made. This paper deals with the translation from classifier systems to neural networks and with those specializations of neural networks that are required in order for the translation to take place. The reverse translation uses quite different techniques, and is treated in [Davis 1989]. The following sections contain a description of classifier systems, a description of the transformation operator, discussions of the extensibility of the proof, comments on some issues raised in the course of the proof, and conclusions.

2 Classifier Systems

A classifier system operates in the context of an environment that sends messages to the system and provides it with reinforcement based on the behavior it displays. A classifier system has two components: a message list and a population of rule-like entities called classifiers. Each message on the message list is composed of bits, and each has a pointer to its source (messages may be generated by the environment or by a classifier). Each classifier in the population of classifiers has three components: a match string made up of the characters 0, 1, and # (for "don't care"); a message made up of the characters 0 and 1; and a strength. The top-level description of a classifier system is that it contains a population of production rules that attempt to match some condition on the message list (thus "classifying" some input) and post their message to the message list, thus potentially affecting the environment or other classifiers. Reinforcement from the environment is used by the classifier system to modify the strengths of its classifiers.
Periodically, a genetic algorithm is invoked to create new classifiers, which replace certain members of the classifier set. (For an explanation of classifier systems, their potential as machine learning systems, and their formal properties, the reader is referred to [Holland et al 1986].) Let us specify these processing stages more precisely. A classifier system operates by cycling through a fixed list of procedures. In order, these procedures are:

Message List Processing.
1. Clear the message list.
2. Post the environmental messages to the message list.
3. Post messages to the message list from classifiers in the post set of the previous cycle.
4. Implement environmental reinforcement by analyzing the messages on the message list and altering the strength of classifiers in the post set of the previous cycle.

Form the Bid Set.
1. Determine which classifiers match a message in the message list. A classifier matches a message if each bit in its match field matches its corresponding message bit. A 0 matches a 0, a 1 matches a 1, and a # matches either bit. The set of all matching classifiers forms the current bid set.
2. Implement bid taxes by subtracting a portion of the strength of each classifier c in the bid set. Add the strength taken from c to the strength of the classifier or classifiers that posted messages matched by c in the prior step.

Form the Post Set.
1. If the bid set is larger than the maximum post set size, choose classifiers stochastically to post from the bid set, weighting them in proportion to the magnitude of their bid taxes. The set of classifiers chosen is the post set.

Reproduction. Reproduction generally does not occur on every cycle. When it does occur, these steps are carried out:
1. Create n children from parents. Use crossover and/or mutation, choosing parents stochastically but favoring the strongest ones. (Crossover and mutation are two of the operators used in genetic algorithms.)
2.
Set the strength of each child to equal the average of the strength of that child's parents. (Note: this is one of many ways to set the strength of a new classifier. The transformation will work in analogous ways for each of them.)
3. Remove n members of the classifier population and add the n new children to the classifier population.

3 Mapping Classifiers Into Classifier Networks

The mapping operator that I shall describe maps each classifier into a classifier network. Each classifier network has links to environmental input units, links to other classifier networks, and match, post, and message units. The weights on the links leading to a match node and leaving a post node are related to the fields in the match and message lists of the classifier. An additional link is added to provide a bias term for the match node. (Note: it is assumed here that the environment posts at most one message per cycle. Modifications to the transformation operator to accommodate multiple environmental messages are described in the final comments of this paper.) Given a classifier system CS with n classifiers, each matching and sending messages of length m, we can construct an isomorphic neural network composed of n classifier networks in the following way. For each classifier c in CS, we construct its corresponding classifier network, composed of n match nodes, 1 post node, and m message nodes. One match node (the environmental match node) has links to inputs from the environment. Each of the other match nodes is linked to the message and post node of another classifier network. The reader is referred to Figure 1 for an example of such a transformation. Each match node in a classifier network has m + 1 incoming links. The weights on the first m links are derived by applying the following transformation to the m elements of c's match field: 0 is associated with weight -1, 1 is associated with weight 1, and # is associated with weight 0. The weight of the final link is set to m + 1 - l, where l is the number of links with weight = 1. Thus, a classifier with match field (1 0 # 0 1) would have an associated network with weights on the links leading to its match nodes of 1, -1, 0, -1, 1, and 4. A classifier with match field (1 0 #) would have weights of 1, -1, 0, and 3. The weights on the links to each message node in the classifier network are set to equal the corresponding element of the classifier's message field. Thus, if the message field of the classifier were (0 1 0), the weights on the links leading to the three message nodes in the corresponding classifier network would be 0, 1, and 0. The weights on all other links in the classifier network are set to 1. Each node in a classifier network uses a threshold function to determine its activation level. Match nodes have thresholds = m + .9. All other nodes have thresholds = .9. If a node's threshold is exceeded, the node's activation level is set to 1. If not, it is set to 0. Each classifier network has an associated quantity called strength that may be altered when the network is run, during the processing cycle described below. A cycle of processing of a classifier system CS maps onto the following cycle of processing in a set of classifier networks:

Message List Processing.
1. Compute the activation level of each message node in CS.
2. If the environment supplies reinforcement on this cycle, divide that reinforcement by the number of post nodes that are currently active, plus 1 if the environment posted a message on the preceding cycle, and add the quotient to the strength of each active post node's classifier network.
3. If there is a message on this cycle from the environment, map it onto the first m environment nodes so that each node associated with a 0 is off and each node associated with a 1 is on. Turn the final environmental node on. If there is no environmental message, turn all environmental
If there is no environmental message, turn all environmental Mapping Classifier Systems Into Neural Networks 53 nodes off. Form the Bid Set. 1. Compute the activation level of each match node in each classifier network. 2. Compute the activation level of each bid node in each classifier network (the set of classifier networks with an active bid node is the bid set). 3. Subtract a fixed proportion of the strength of each classifier network cn in the bid set. Add this amount to the strength of those networks connected to an active match node in cn. (Strength given to the environment passes out of the system.) Form the Post Set. 1. If the bid set is larger than the maximum post set size, choose networks stochastically to post from the bid set, weighting them in proportion to the magnitude of their bid taxes. The set of networks chosen is the post set. (This might be viewed as a stochastic n-winners-take-all procedure). Reproduction. If this is a cycle on which reproduction would occur in the classifier system, carry out its analog in the neural network in the following way. 1. Create n children from parents. Use crossover and/or mutation, choosing parents stochastically but favoring the strongest ones. The ternary alphabet composed of -I, I, and 0 is used instead of the classifier alphabet of 0, 1, and #. After each operator is applied, the final member of the match list is set to m + 1 - l. 2. Write over the weights on the match links and the message links of n classifier networks to match the weights in the children. Choose networks to be re-weighted stochastically, so that the weakest ones are most likely to be chosen. Set the strength of each re-weighted classifier network to be the average of the strengths of its parents. It is simple to show that a classifier network match node will match a message in just those cases in which its associated classifier matched a message. There are three cases to consider. 
If the original match character was a #, then it matched any message bit. The corresponding link weight is set to 0, so the state of the node it comes from will not affect the activation of the match node it goes to. If the original match character was a 1, then its message bit had to be a 1 for the message to be matched. The corresponding link weight is set to 1, and we see by inspection of the weight on the final link, the match node threshold, and the fact that no other type of link has a positive weight, that every link with weight 1 must be connected to an active node for the match node to be activated. Finally, the link weight corresponding to a 0 is set to -1. If any of these links is connected to a node that is active, then the effect is that of turning off a node connected to a link with weight 1, and we have just seen that this will cause the match node to be inactive. Given this correspondence in matching behavior, one can verify that a set of classifier networks associated with a classifier system has the following properties: During each cycle of processing of the classifier system, a classifier is in the bid set in just those cases in which its associated network has an active bid node. Assuming that both systems use the same randomizing technique, initialized in the same way, the classifier is in the post set in just those cases when the network is in the post set. Finally, the parents that are chosen for reproduction are the transforms of those chosen in the classifier system, and the children produced are the transformations of the classifier system parents. The two systems are isomorphic in operation, assuming that they use the same random number generator.

Figure 1: Result of mapping a classifier system with two classifiers into a neural network.
In the figure, message nodes and post nodes have threshold .9 and match nodes have threshold 3.9; the environment input nodes feed the environmental match nodes. Classifier 1 has match field (0 1 #), message field (1 1 0), and strength 49.3. Classifier 2 has match field (1 1 #), message field (0 1 1), and strength 21.95.

4 Concluding Comments

The transformation procedure described above will map a classifier system into a neural network that operates in the same way. There are several points raised by the techniques used to accomplish the mapping. In closing, let us consider four of them. First, there is some excess complexity in the classifier networks as they are shown here. In fact, one could eliminate all non-environmental match nodes and their links, since one can determine whenever a classifier network is reweighted whether it matches the message of each other classifier network in the system. If so, one could introduce a link directly from the post node of the other classifier network to the post node of the new network. The match nodes to the environment are necessary, as long as one cannot predict what messages the environment will post. Message nodes are necessary as long as messages must be sent out to the environment. If not, they and their incoming links could be eliminated as well. These simplifications have not been introduced here because the extensions discussed next require the complexity of the current architecture. Second, on the genetic algorithm side, the classifier system considered here is an extremely simple one. There are many extensions and refinements that have been used by classifier system researchers. I believe that such refinements can be handled by expanded mapping procedures and by modifications of the architecture of the classifier networks. To give an indication of the way such modifications would go, let us consider two sample cases. The first is the case of an environment that may produce multiple messages on each cycle.
To handle multiple messages, an additional link must be added to each environmental match node with weight set to the match node's threshold. This link will latch the match node. An additional match node with links to the environment nodes must be added, and a latched counting node must be attached to it. Given these two architectural modifications, the cycle is modified as follows: During the message matching cycle, a series of subcycles is carried out, one for each message posted by the environment. In each subcycle, an environmental message is input and each environmental match node computes its activation. The environmental match nodes are latched, so that each will be active if it matched any environmental message. The count nodes will record how many were matched by each classifier network. When bid strength is paid from a classifier network to the posters of messages that it matched, the divisor is the number of environmental messages matched as recorded by the count node, plus the number of other messages matched. Finally, when new weights are written onto a classifier network's links, they are written onto the match node connected to the count node as well. A second sort of complication is that of pass-through bits: bits that are passed from a message that is matched to the message that is posted. This sort of mechanism can be implemented in an obvious fashion by complicating the structure of the classifier network. Similar complications are produced by considering multiple-message matching, negation, messages to effectors, and so forth. It is an open question whether all such cases can be handled by modifying the architecture and the mapping operator, but I have not yet found one that cannot be so handled. Third, the classifier networks do not use the sigmoid activation functions that support hill-climbing techniques such as back-propagation. Further, they are recurrent networks rather than strict feed-forward networks.
Thus, one might wonder whether the fact that one can carry out such transformations should affect the behavior of researchers in the field. This point is one that is taken up at greater length in the companion paper. My conclusion there is that several of the techniques imported into the neural network domain by the mapping appear to improve the performance of neural networks. These include tracking strength in order to guide the learning process, using genetic operators to modify the network makeup, and using population-level measurements in order to determine what aspects of a network to use in reproduction. The reader is referred to [Montana and Davis 1989] for an example of the benefits to be gained by employing these techniques. Finally, one might wonder what the import of this proof is intended to be. In my view, this proof and the companion proof suggest some exciting ways in which one can hybridize the learning techniques of each field. One such approach and its successful application to a real-world problem is characterized in [Montana and Davis 1989].

References

[1] Belew, Richard K. and Michael Gherrity, "Back Propagation for the Classifier System", in preparation.
[2] Davis, Lawrence, "Mapping Neural Networks into Classifier Systems", submitted to the 1989 International Conference on Genetic Algorithms.
[3] Goldberg, David E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
[4] Holland, John H., Keith J. Holyoak, Richard E. Nisbett, and Paul R. Thagard, Induction, MIT Press, 1986.
[5] Montana, David J. and Lawrence Davis, "Training Feedforward Neural Networks Using Genetic Algorithms", submitted to the 1989 International Joint Conference on Artificial Intelligence.
A NETWORK FOR IMAGE SEGMENTATION USING COLOR

Anya Hurlbert and Tomaso Poggio
Center for Biological Information Processing at Whitaker College
Department of Brain and Cognitive Science and the MIT AI Laboratory
Cambridge, MA 02139
(hur [email protected])

ABSTRACT

We propose a parallel network of simple processors to find color boundaries irrespective of spatial changes in illumination, and to spread uniform colors within marked regions.

INTRODUCTION

To rely on color as a cue in recognizing objects, a visual system must have at least approximate color constancy. Otherwise it might ascribe different characteristics to the same object under different lights. But the first step in using color for recognition, segmenting the scene into regions of different colors, does not require color constancy. In this crucial step color serves simply as a means of distinguishing one object from another in a given scene. Color differences, which mark material boundaries, are essential, while absolute color values are not. The goal of segmentation algorithms is to achieve this first step toward object recognition by finding discontinuities in the image irradiance that mark material boundaries. The problems that segmentation algorithms must solve are how to choose color labels, how to distinguish material boundaries from other changes in the image that give rise to color edges, and how to fill in uniform regions with the appropriate color labels. (Ideally, the color labels should remain constant under changes in the illumination or scene composition, and color edges should occur only at material boundaries.) Rubin and Richards (1984) show that algorithms can solve the second problem under some conditions by comparing the image irradiance signal in distinct spectral channels on either side of an edge.
The goal of the segmentation algorithms we discuss here is to find boundaries between regions of different surface spectral reflectances and to spread uniform colors within them, without explicitly requiring the colors to be constant under changes in illumination. The color labels we use are analogous to the CIE chromaticity coordinates x and y. Under the single source assumption, they change across space only when the surface spectral reflectance changes, except when strong specularities are present. (The algorithms therefore require help at a later stage to identify color label changes due to specularities, which we have not yet explicitly incorporated.) The color edges themselves are localised with the help of luminance edges, by analogy with the psychophysics of segmentation and filling-in. The Koffka Ring illusion, for example, indicates that color is attributed to surfaces by an interaction between an edge-finding operator and a filling-in operator.[1] The interaction is justified by the fact that in the real world changes in surface spectral reflectance are almost always accompanied by changes in brightness. Color Labels We assume that surfaces reflect light according to the neutral-interface-reflection model. In this model (Lee, 1986; Shafer, 1985) the image irradiance I(x, y, λ) is the sum of two components, the surface reflection and the body reflection:

I(x, y, λ) = L(r(x, y), λ)[a(r, λ)g(θ(r)) + b h(θ(r))],

where λ labels wavelength and r(x, y) is the point on the 3D surface to which the image coordinates (x, y) correspond. L(r(x, y), λ) is the illumination on the surface. a(r, λ) is the spectral reflectance factor of the body reflection component and g(θ(r)) its magnitude, which depends on the viewing geometry parameters lumped together in θ(r).
The spectral reflectance factor of the specular, or surface reflection, component b is assumed to be constant with respect to λ, as is true for inhomogeneous materials such as paints and plastics. For most materials, the magnitude of the specular component h depends strongly on the viewing geometry. Using the single source assumption, we may factor the illumination L into separate spatial and spectral components (L(r, λ) = L(r)c(λ)). Multiplying I by the spectral sensitivities s_i(λ) of the color sensors i = 1, 2, 3 and integrating over wavelength yields the triplet of color values (R, G, B), where R = ∫ I(x, y, λ) s_1(λ) dλ and so forth, and where the a_i and b_i are the reflectance factors in the spectral channels defined by the sensor spectral sensitivities. We define the hues u and v as

u = R/(R + G + B) and v = G/(R + G + B)

at each pixel.[1] In Lambertian reflection, the specular reflectance factor b is zero. In this case, u and v are piecewise constant: they change in the image only when the a_i(x, y) change. Thus u or v mark discontinuities in the surface spectral reflectance function, e.g. they mark material boundaries. Conversely, image regions of constant u correspond to regions of constant surface color. Synthetic images generated with standard computer graphics algorithms (using, for example, the Phong reflectance model) behave in this way: u is constant across the visible surface of a shaded sphere. Across specularities, u in general changes but often not much. Thus one approach to the segmentation problem is to find regions of "constant" u and their boundaries. [1] Note that Land's original retinex algorithm, which thresholds and sums the differences in image irradiance between adjacent points along many paths, accounts for the contribution of edges to color, without introducing a separate luminance edge detector.
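The hue labels are cheap to compute per pixel. A minimal sketch in Python (the function name and channel values are ours, not the paper's) makes the key point explicit: any common shading factor scales R, G, and B equally, so it cancels in u and v.

```python
def hue_labels(r, g, b):
    """Hue labels u = R/(R+G+B), v = G/(R+G+B) at one pixel."""
    s = r + g + b
    if s == 0.0:
        return 0.0, 0.0   # no signal: leave the pixel unlabeled
    return r / s, g / s

# Under Lambertian reflection the shading factor L(r)g(theta) scales
# R, G and B together, so u and v depend only on surface reflectance:
u1, v1 = hue_labels(120.0, 60.0, 20.0)   # a surface in bright light
u2, v2 = hue_labels(12.0, 6.0, 2.0)      # same surface, 10x dimmer
assert abs(u1 - u2) < 1e-12 and abs(v1 - v2) < 1e-12
```

This cancellation is exactly why, under the Lambertian assumption, u and v are piecewise constant across a shaded surface and change only at reflectance discontinuities.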
The difficulty with this approach is that real u data are noisy and unreliable: u is the quotient of numbers that are not only noisy themselves but also, at least for biological photosensor spectral sensitivities, very close to one another. The goals of segmentation algorithms are therefore to enhance discontinuities in u and, within the regions marked by the discontinuities, to smooth over the noise and fill in the data where they are unreliable. We have explored several methods of meeting these goals. Segmentation Algorithms One method is to regularize - to eliminate the noise and fill in the data, while preserving the discontinuities. Using an algorithm based on Markov Random Field techniques, we have obtained encouraging results on real images (see Poggio et al., 1988). The MRF technique exploits the constraint that u should be piecewise constant within the discontinuity contours and uses image brightness edges as guides in finding the contours. An alternative to the MRF approach is a cooperative network that fills in data and filters out noise while enforcing the constraint of piecewise constancy. The network, a type of Hopfield net, is similar to the cooperative stereo network of Marr and Poggio (1976). Another approach consists of a one-pass winner-take-all scheme. Both algorithms involve loading the initial hue values into discrete bins, an undesirable and biologically unlikely feature. Although they produce good results on noisy synthetic images and can be improved by modification (see Hurlbert, 1989), another class of algorithms, which we now describe, is simple and effective, especially on parallel computers such as the Connection Machine. Averaging Network One way to avoid small step changes in hue across a uniform surface resulting from initial loading into discrete bins is to relax the local requirement for piecewise

Figure 1: (a) Image of a Mondrian-textured sphere - the red channel.
(b) Vertical slice through the specularity in a 75 x 75 pixel region of the three-channel image (R + G + B) of the same sphere.

constancy and instead require only that hue be smooth within regions delineated by the edge input. We will see that this local smoothness requirement actually yields an iterative algorithm that provides asymptotically piecewise constant hue regions. To implement the local smoothness criterion we use an averaging scheme that simply replaces the value of each pixel in the hue image with the average of its local surround, iterating many times over the whole image. The algorithm takes as input the hue image (either the u-image or the v-image) and one or two edge images, either luminance edges alone, or luminance edges plus u or v edges, or u edges plus v edges. The edge images are obtained by performing Canny edge detection or by using a thresholded directional first derivative. On each iteration, the value at each pixel in the hue image is replaced by the average of its value and those in its contributing neighborhood. A neighboring pixel is allowed to contribute if (i) it is one of the four pixels sharing a full border with the central pixel, (ii) it shares the same edge label with the central pixel in all input edge images, (iii) its value is non-zero, and (iv) its value is within a fixed range of the central pixel value. The last requirement simply reinforces the edge label requirement when a hue image serves as an input edge image - the edge label requirement allows only those pixels that lie on the same side of an edge to be averaged, while the other ensures that only those pixels with similar hues are averaged. More formally,

h_{i,j}^{n+1} = (h_{i,j}^{n} + Σ_{C_n} h^{n}) / (N(C_n) + 1),

where C_n(h_{i,j}^{n}) is the set of N(C_n) pixels among the next neighbors of i, j that differ from h_{i,j}^{n} by less than a specified amount and are not crossed by an edge in the edge map(s) (on the assumption that the pixel (i, j) does not belong to an edge).
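One iteration of this averaging scheme can be written out as a short sketch (ours, not the authors' Connection Machine implementation); the edge images are collapsed here into a single integer label map, and all names and constants are illustrative.

```python
def average_step(hue, labels, thresh):
    """One iteration: replace each pixel by the mean of itself and
    those 4-neighbors that (i) share its edge label, (ii) have a
    non-zero value, and (iii) lie within `thresh` of its value."""
    h, w = len(hue), len(hue[0])
    out = [row[:] for row in hue]
    for i in range(h):
        for j in range(w):
            c = hue[i][j]
            vals = [c]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < h and 0 <= nj < w
                        and labels[ni][nj] == labels[i][j]
                        and hue[ni][nj] != 0.0
                        and abs(hue[ni][nj] - c) <= thresh):
                    vals.append(hue[ni][nj])
            out[i][j] = sum(vals) / len(vals)
    return out

# Two noisy regions separated by an edge: iterating drives each
# region toward a uniform hue while the edge keeps them distinct.
hue = [[0.30, 0.32, 0.70, 0.68],
       [0.31, 0.29, 0.69, 0.71]]
labels = [[0, 0, 1, 1],
          [0, 0, 1, 1]]
for _ in range(200):
    hue = average_step(hue, labels, thresh=1.0)
assert abs(hue[1][1] - hue[0][0]) < 1e-9   # left region uniform
assert abs(hue[1][3] - hue[0][2]) < 1e-9   # right region uniform
assert hue[0][2] - hue[0][0] > 0.3         # regions stay distinct
```

On this toy image each region converges to its own uniform hue (here the region mean), while the edge labels block any smoothing across the boundary, mirroring the asymptotically piecewise-constant behavior described above.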
The iteration of this operator is similar to nonlinear diffusion and to discontinuous regularization of the type discussed by Blake and Zisserman (1987), Geman and Geman (1984) and Marroquin (1985). The iterative scheme of the above equation can be derived from minimization via gradient descent of the energy function E = Σ_{i,j} E_{i,j}, with E_{i,j} summing the potential V over the neighbors of pixel (i, j), where V(x, y) = V(x − y) is a quadratic potential around 0 and constant for |x − y| above a certain value. The local averaging smoothes noise in the hue values and spreads uniform hues across regions marked by the edge inputs. On images with shading but without strong specularities the algorithm performs a clean segmentation into regions of different hues. Conclusions The averaging scheme finds constant hue regions under the assumptions of a single source and no strong specularities. A strong highlight may give rise to an edge that could then "break" the averaging operation. In our limited experience most specularities seem to average out and disappear from the smoothed hue map, largely because even strong specularities in the image are much reduced in the initial hue image. The iterative averaging scheme completely eliminates the remaining gradients in hue. It is possible that more powerful discrimination of specularities will require specialized routines and higher-level knowledge (Hurlbert, 1989). Yet this simple network alone is sufficient to reproduce some psychophysical phenomena. In particular, the interaction between brightness and color edges enables the network to mimic such visual "illusions" as the Koffka Ring. We replicate the illusion in the following way. A black-and-white Koffka Ring (a uniform grey annulus against a rectangular bipartite background, one side black and the other white) (Hurlbert and Poggio, 1988b) is filtered through the lightness filter estimated in

Figure 2: (a) A 75x75 pixel region of the u image, including the specularity.
(b) The image obtained after 500 iterations of the averaging network on (a), using as edge input the Canny edges of the luminance image. A threshold on differences in the v image allows only similar v values to be averaged. (c) Vertical slice through the center of (a). (d) Vertical slice at the same coordinates through (b) (note the different scales of (c) and (d)).

the way described elsewhere (Hurlbert and Poggio, 1988a). (For black-and-white images this step replaces the operation of obtaining u and v: in both cases the goal is to eliminate spatial gradients in the effective illumination.) The filtered Koffka Ring is then fed to the averaging network together with the brightness edges. When in the input image the boundary between the two parts of the background continues across the annulus, in the output image (after 2000 iterations of the averaging network) the annulus splits into two semi-annuli of different colors, dark grey against the white half, light grey against the black half (Hurlbert, 1989). When the boundary does not continue across the annulus, the annulus remains a uniform grey. These results agree with human perception. Acknowledgements This report describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation; by the Alfred P. Sloan Foundation; by the National Science Foundation; by the Artificial Intelligence Center of Hughes Aircraft Corporation (SI-801534-2); and by the NATO Scientific Affairs Division (0403/87). Support for the A. I.
Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at the Massachusetts Institute of Technology, Whitaker College. References John Rubin and Whitman Richards. Colour vision: representing material categories. Artificial Intelligence Laboratory Memo 764, Massachusetts Institute of Technology, 1984. Hsien-Che Lee. Method for computing the scene-illuminant chromaticity from specular highlights. Journal of the Optical Society of America, 3:1694-1699, 1986. Steven A. Shafer. Using color to separate reflection components. Color Research and Applications, 10(4):210-218, 1985. Tomaso Poggio, J. Little, E. Gamble, W. Gillett, D. Geiger, D. Weinshall, M. Villalba, N. Larson, T. Cass, H. Bülthoff, M. Drumheller, P. Oppenheimer, W. Yang, and A. Hurlbert. The MIT Vision Machine. In Proceedings of the Image Understanding Workshop, Cambridge, MA, April 1988. Morgan Kaufmann, San Mateo, CA. David Marr and Tomaso Poggio. Cooperative computation of stereo disparity. Science, 194:283-287, 1976. Anya C. Hurlbert. The Computation of Color. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, 1989. Jose L. Marroquin. Probabilistic Solution of Inverse Problems. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, 1985. Andrew Blake and Andrew Zisserman. Visual Reconstruction. MIT Press, Cambridge, Mass., 1987. Stuart Geman and Don Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6:721-741, 1984. Anya C. Hurlbert and Tomaso A. Poggio. Learning a color algorithm from examples. In Dana Z. Anderson, editor, Neural Information Processing Systems. American Institute of Physics, 1988. A. C. Hurlbert and T. A. Poggio.
Synthesizing a color algorithm from examples. Science, 239:482-485, 1988.
|
1988
|
52
|
138
|
TRAINING A LIMITED-INTERCONNECT, SYNTHETIC NEURAL IC M.R. Walker, S. Haghighi, A. Afghan, and L.A. Akers Center for Solid State Electronics Research Arizona State University Tempe, AZ 85287-6206 [email protected] ABSTRACT Hardware implementation of neuromorphic algorithms is hampered by high degrees of connectivity. Functionally equivalent feedforward networks may be formed by using limited fan-in nodes and additional layers, but this complicates procedures for determining weight magnitudes. No direct mapping of weights exists between fully and limited-interconnect nets. Low-level nonlinearities prevent the formation of internal representations of widely separated spatial features, and the use of gradient descent methods to minimize output error is hampered by error magnitude dissipation. The judicious use of linear summations or collection units is proposed as a solution. HARDWARE IMPLEMENTATIONS OF FEEDFORWARD, SYNTHETIC NEURAL SYSTEMS The pursuit of hardware implementations of artificial neural network models is motivated by the need to develop systems which are capable of executing neuromorphic algorithms in real time. The most significant barrier is the high degree of connectivity required between the processing elements. Current interconnect technology does not support the direct implementation of large-scale arrays of this type. In particular, the high fan-in/fan-outs of biology impose connectivity requirements such that the electronic implementation of a highly interconnected biological neural network of just a few thousand neurons would require a level of connectivity which exceeds the current or even projected interconnection density of ULSI systems (Akers et al., 1988). Highly layered, limited-interconnect architectures are, however, especially well suited for VLSI implementations. In previous works,
we analyzed the generalization and fault-tolerance characteristics of a limited-interconnect perceptron architecture applied in three simple mappings between binary input space and binary output space, and proposed a CMOS architecture (Akers and Walker, 1988). This paper concentrates on developing an understanding of the limitations on layered neural network architectures imposed by hardware implementation, and a proposed solution. TRAINING CONSIDERATIONS FOR LIMITED-INTERCONNECT FEEDFORWARD NETWORKS The symbolic layout of the limited fan-in network is shown in Fig. 1. Re-arranging of the individual input components is done to eliminate edge effects. Greater detail on the actual hardware architecture may be found in (Akers and Walker, 1988). As in linear filters, the total number of connections which fan in to a given processing element determines the degrees of freedom available for forming a hypersurface which implements the desired node output function (Widrow and Stearns, 1985). When processing elements with fixed, low fan-in are employed, the effects of reduced degrees of freedom must be considered in order to develop workable training methods which permit generalization of novel inputs. First, no direct or indirect relation exists between weight magnitudes obtained for a limited-interconnect, multilayered perceptron, and those obtained for the fully connected case. Networks of these types adapted with identical exemplar sets must therefore form completely different functions on the input space. Second, low-level nonlinearities prevent direct internal coding of widely separated spatial features in the input set. A related problem arises when hyperplane nonlinearities are used. Multiple hyperplanes required on a subset of input space are impossible when no two second-level nodes address identical positions in the input space.
Finally, adaptation methods like backpropagation which minimize output error with gradient descent are hindered since the magnitude of the error is dissipated as it back-propagates through large numbers of hidden layers. The appropriate placement of linear summation elements or collection units is a proposed solution.

Figure 1. Symbolic Layout of Limited-Interconnect Feedforward Architecture

COMPARISON OF WEIGHT VALUES IN FULLY CONNECTED AND LIMITED-INTERCONNECT NETWORKS Fully connected and limited-interconnect feedforward structures may be functionally equivalent by virtue of identical training sets, but nonlinear node discriminant functions in a fully-connected perceptron network are generally not equivalent to those in a limited-interconnect, multilayered network. This may be shown by comparing the Taylor series expansion of the discriminant functions in the vicinity of the threshold for both types and then equating terms of equivalent order. A simple limited-interconnect network is shown in Fig. 2.

Figure 2. Limited-Interconnect Feedforward Network

A discriminant function with a fan-in of two may be represented with the functional form y = f(w1 x1 + w2 x2 - θ), where θ is the threshold and the function is assumed to be continuously differentiable. The Taylor series of the discriminant is expanded about the threshold, where f(θ), f'(θ) and f''(θ) are constant terms. Expanding output node three in Fig. 2 to second order, and substituting similar second-order expansions for y1 and y2 into y3, yields an expression for the output of the limited-interconnect net. The output node in the fully-connected case (Fig. 3) may also be expanded to second order.

Figure 3. Fully Connected Network

We seek the necessary and sufficient conditions for the two nonlinear discriminant functions to be analytically equivalent.
This is accomplished by comparing terms of equal order in the expansions of each output node in the two nets. Equating the constant terms yields w5 = -w6. Equating the first-order terms yields w5 = w6 = 1/f(θ). Equating the second-order terms yields a third constraint. The first two conditions are obviously contradictory. In addition, solving for w5 or w6 using the first and second constraints or the first and third constraints yields the trivial result, w5 = w6 = 0. Thus, no relation exists between discriminant functions occurring in the limited and fully connected feedforward networks. This eliminates the possibility that weights obtained for a fully connected network could be transformed and used in a limited-interconnect structure. More significant is the fact that full and limited interconnect nets which are adapted with identical sets of exemplars must form completely different functions on the input space, even though they exhibit identical output behavior. For this reason, it is anticipated that the two network types could produce different responses to a novel input. NON-OVERLAPPING INPUT SUBSETS Signal routing becomes important for networks in which hidden units do not address identical subsets in the preceding layer. Figure 4 shows an odd-parity algorithm implemented with a limited-interconnect architecture. Large weight magnitudes are indicated by darker lines. Many nodes act as "pass-through" elements in that they have few dominant input and output connections. These node types are necessary to pass lower level signals to common aggregation points. In general, the use of limited fan-in processing elements implementing a nonlinear discriminant function decreases the probability that a given correlation within the input data will be encoded, especially if the "width" of the feature set is greater than the fan-in, requiring encoding at a high level within the net.
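The routing idea behind Figure 4 can be illustrated with a toy sketch (ours, not the chip's training procedure): a layered net of fan-in-2 parity elements, with odd leftovers acting as pass-through nodes, computes six-input odd parity in log-depth instead of one high-fan-in node.

```python
import itertools

def parity_tree(bits):
    """Layered, fan-in-2 network for odd parity: each layer pairwise
    combines its inputs; an odd leftover element passes through."""
    layer = list(bits)
    while len(layer) > 1:
        nxt = [layer[i] ^ layer[i + 1] for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:
            nxt.append(layer[-1])    # "pass-through" element
        layer = nxt
    return layer[0]

# Exhaustive check against the definition of odd parity:
for bits in itertools.product((0, 1), repeat=6):
    assert parity_tree(bits) == sum(bits) % 2
```

The pass-through branch mirrors the observation above: some nodes exist only to route a lower-level signal toward a common aggregation point higher in the net.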
In addition, since lower-level connections determine the magnitudes of upper level connections in any layered net when backpropagation is used, the set of points in weight space available to a limited-interconnect net for realizing a given function is further reduced by the greater number of weight dependencies occurring in limited-interconnect networks, all of which must be satisfied during training. Finally, since gradient descent is basically a shortcut through an NP-complete search in weight space, reduced redundancy and overlapping of internal representations reduces the probability of convergence to a near-optimal solution on the training set. DISSIPATION OF ERROR MAGNITUDE WITH INCREASING NUMBERS OF LAYERS Following the derivation of backpropagation in (Plaut, 1986), the magnitude change for a weight connecting a processing element in the m-layer with a processing element in the l-layer is proportional to the error derivative

Figure 4. Six-input odd parity function implemented with limited-interconnect

dE/dx_l = Σ_{k=1}^{f} ... Σ_{a=1}^{f} (y_a - d_a) (dy_a/dx_a) w_{b-a} ... (dy_k/dx_k) w_{l-k} (dy_l/dx_l),

where y is the output of the discriminant function, x is the activation level, w is a connection magnitude, and f is the fan-in for each processing element. If N layers of elements intervene between the m-layer and the output layer, then each of the f^(N-1) terms in the above summation consists of a product of factors of the form w_{b-a} (dy_a/dx_a). If we replace the weight magnitudes and the derivatives in each term with their mean values,
The use of large numbers of perceptron layers therefore has the affect of dissipating the magnitude of the error. This is exacerbated by the low fan-in, which reduces the total number of tenns in the summation. The use of linear collection units (McClelland. 1986), discussed in the following section, is a proposed solution to this problem. LINEAR COLLECTION UNITS As shown in Fig. 5, the output of the limited-interconnect net employing collection units is given by the function, x1 [::>linear summation o non-linear discriminant x2 y3 x3 x4 Figure S. Limited-interconnect network employing linear summations where c 1 and c2 are constants. The position of the summations may be determined by using euclidian k-means clustering on the exemplar set to a priori locate cluster centers 784 Walker, Haghighi, Afghan and Akers and determine their widths (Duda and Hart, 1973). The cluster members would be combined using linear elements until they reached a nonlinear discrminant, located higher in the net and at the cluster center. With this arrangement, weights obtained for a fully-connected net could be mapped using a linear transformation into the limited-interconnect network. Alternatively, backpropagation could be used since error dissipation would be reduced by setting the linear constant c of the summation elements to arbitrarily large values. CONCLUSIONS No direct transformation of weights exists between fully and limited interconnect nets which employ nonlinear discrmiminant functions. The use of gradient descent methods to minimize output error is hampered by error magnitude dissipation. In addition, low-level nonlinearities prevent the formation of internal representations of widely separated spatial features. The use of strategically placed linear summations or collection units is proposed as a means of overcoming difficulties in determining weight values in limited-interconnect perceptron architectures. 
K-means clustering is proposed as the method for determining placement. References L.A. Akers, M.R. Walker, D.K. Ferry & R.O. Grondin, "Limited Interconnectivity in Synthetic Neural Systems," in R. Eckmiller and C. v.d. Malsburg, eds., Neural Computers, Springer-Verlag, 1988. L.A. Akers & M.R. Walker, "A Limited-Interconnect Synthetic Neural IC," Proceedings of the IEEE International Conference on Neural Networks, p. II-151, 1988. B. Widrow & S.D. Stearns, Adaptive Signal Processing, Prentice-Hall, 1985. D.C. Plaut, S.J. Nowlan & G.E. Hinton, "Experiments on Learning by Back Propagation," Carnegie-Mellon University, Dept. of Computer Science Technical Report, June 1986. J.L. McClelland, "Resource Requirements of Standard and Programmable Nets," in D.E. Rumelhart and J.L. McClelland, eds., Parallel Distributed Processing Volume 1: Foundations, MIT Press, 1986. R.O. Duda & P.E. Hart, Pattern Classification and Scene Analysis, Wiley, 1973.
|
1988
|
53
|
139
|
444 A MODEL FOR RESOLUTION ENHANCEMENT (HYPERACUITY) IN SENSORY REPRESENTATION Jun Zhang and John P. Miller Neurobiology Group, University of California, Berkeley, California 94720, U.S.A. ABSTRACT Heiligenberg (1987) recently proposed a model to explain how sensory maps could enhance resolution through orderly arrangement of broadly tuned receptors. We have extended this model to the general case of polynomial weighting schemes and proved that the response function is also a polynomial of the same order. We further demonstrated that the Hermitian polynomials are eigenfunctions of the system. Finally we suggested a biologically plausible mechanism for sensory representation of external stimuli with resolution far exceeding the inter-receptor separation. 1 INTRODUCTION In sensory systems, the stimulus continuum is sampled at discrete points by receptors of finite tuning width d and inter-receptor spacing a. In order to code both stimulus locus and stimulus intensity with a single output, the sampling of individual receptors must be overlapping (i.e., a < d). This discrete and overlapped sampling of the stimulus continuum poses the question of how the system could reconstruct the sensory stimuli with a resolution exceeding that specified by the inter-receptor spacing. This is known as the hyperacuity problem (Westheimer, 1975). Heiligenberg (1987) proposed a model in which the array of receptors (with Gaussian-shaped tuning curves) were distributed uniformly along the entire range of the stimulus variable x. They contribute excitation to a higher order interneuron, with the synaptic weight of each receptor's input set proportional to its rank index k in the receptor array. Numerical simulation and subsequent mathematical analysis (Baldi and Heiligenberg, 1988) demonstrated that, so long as a << d, the response function f(x) of the higher order neuron was monotone increasing and surprisingly linear.
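This near-linearity, and the parity result for polynomial weighting schemes derived below, are easy to check numerically. The sketch below (our own discretization, with arbitrary spacing a, tuning width d, and truncation) sums Gaussian-tuned receptor outputs weighted by w(k).

```python
import math

def response(x, w, d=2.0, a=0.25, kmax=200):
    """f(x): higher-order neuron response to a point stimulus at x,
    summing Gaussian receptors at positions k*a weighted by w(k)."""
    return sum(w(k) * math.exp(-((x - k * a) / d) ** 2)
               for k in range(-kmax, kmax + 1))

# Linear weighting w(k) = k (Heiligenberg's scheme): for a << d the
# response is very nearly linear in stimulus position.
f = lambda x: response(x, lambda k: k)
assert abs(f(1.0) / f(2.0) - 0.5) < 1e-6

# w(k) = k**p gives a response of parity (-1)**p:
even = lambda x: response(x, lambda k: k * k)
odd = lambda x: response(x, lambda k: k ** 3)
assert abs(even(1.3) - even(-1.3)) < 1e-6
assert abs(odd(1.3) + odd(-1.3)) < 1e-6
```

The smoothness of the Gaussian tuning curves is what interpolates between the discrete receptor positions, so the stimulus locus can be read off f(x) far more finely than the spacing a.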
The smoothness of this function offers a partial explanation of the general phenomena of hyperacuity (see Baldi and Heiligenberg in this volume). Here we consider various extensions of this model. Only the main results shall be stated below; their proof is presented elsewhere (Zhang and Miller, in preparation). 2 POLYNOMIAL WEIGHTING FUNCTIONS First, the model can be extended to incorporate other different weighting schemes. The weighting function w(k) specifies the strength of the excitation from the k-th receptor onto the higher order interneuron and therefore determines the shape of its response f(x). In Heiligenberg's original model, the linear weighting scheme w(k) = k is used. A natural extension would then be the polynomial weighting schemes. Indeed, we proved that, for sufficiently large d: a) If w(k) = k^(2m), then f(x) = a_0 + a_2 x^2 + ... + a_(2m) x^(2m). If w(k) = k^(2m+1), then f(x) = a_1 x + a_3 x^3 + ... + a_(2m+1) x^(2m+1), where m = 0, 1, 2, ..., and the a_i are real constants. Note that for w(k) = k^p, f(x) has parity (-1)^p, that is, it is an odd function for odd integer p and an even function for even integer p. The case of p = 1 reduces to the linear weighting scheme in Heiligenberg's original model. b) If w(k) = c_0 + c_1 k + c_2 k^2 + ... + c_p k^p, then f(x) = a_0 + a_1 x + ... + a_p x^p. Note that this is a direct result of a), because f(x) is linearly dependent on w(k). The coefficients c_i and a_i are usually different for the two polynomials. One would naturally ask: what kind of polynomial weighting function then would yield an identical polynomial response function? This leads to the important conclusion: c) If w(k) = H_p(k) is an Hermitian polynomial, then f(x) = H_p(x), the same Hermitian polynomial. The Hermitian polynomial H_p(t) is a well-studied function in mathematics.
It is defined as:

H_p(t) = (-1)^p e^(t^2) (d^p/dt^p) e^(-t^2).

For reference purposes, the first four polynomials are given here: H_0(t) = 1; H_1(t) = 2t; H_2(t) = 4t^2 - 2; H_3(t) = 8t^3 - 12t. The conclusion of c) tells us that Hermitian polynomials are unique in the sense that they serve as eigenfunctions of the system. 3 REPRESENTATION OF SENSORY STIMULUS Heiligenberg's model deals with the general problem of two-point resolution, i.e., how a sensory system can resolve two nearby point stimuli with a resolution exceeding the inter-receptor spacing. Here we go one step further to ask ourselves how a generalized sensory stimulus g(x) is encoded and represented beyond the receptor level with a resolution exceeding the inter-receptor spacing. We'll show that if, instead of a single higher order interneuron, we have a group or layer of interneurons, each connected to the array of sensory receptors using some different but appropriately chosen weighting schemes w_n(k), then the representation of the sensory stimulus by this interneuron group (in terms of f_n, each interneuron's response) is uniquely determined with enhanced resolution (see figure below).

[Figure: an INTERNEURON GROUP receiving input from a RECEPTOR ARRAY]

Suppose that 1) each interneuron in this group receives input from the receptor array, its weighting characterized by a Hermitian polynomial H_p(k); and that 2) the order p of the Hermitian polynomial is different for each interneuron. We know from mathematics that any stimulus function g(x) satisfying certain boundary conditions can be decomposed in the following way:

g(x) = Σ_{n=0}^{∞} c_n H_n(x) e^(-x^2)

The decomposition is unique in the sense that the c_n completely determine g(x).
Here we have proved that the response f_p of the p-th interneuron (adopting H_p(k) as its weighting scheme) is proportional to c_p. This implies that g(x) can be uniquely represented by the responses of this set of interneurons {f_p}. Note that the precision of representation at this higher stage is limited not by the receptor separation, but by the number of neurons available in this interneuron group. 4 EDGE EFFECTS Since the array of receptors must actually be finite in extent, simple weighting schemes may result in edge effects which severely degrade stimulus resolution near the array boundaries. For instance, the linear model investigated by Heiligenberg and Baldi will have regions of degeneracy where two nearby point stimuli, if located near the boundary defined by receptor array coverage, may yield the same response. We argue that this region of degeneracy can be eliminated or reduced in the following situations: 1) If w(k) approaches zero as k goes to infinity, then the receptor array can still be treated as having infinite extent, since the contributions by the large index receptors are negligibly small. We proved, using Fourier analysis, that this kind of vanishing-at-infinity weighting scheme could also achieve resolution enhancement provided that the tuning width of the receptor is sufficiently larger than the inter-receptor spacing and meanwhile sufficiently smaller than the effective width of the entire weighting function. 2) If the receptor array "wraps around" into a circular configuration, then it can again be treated as infinite (but periodic) along the angular dimension. This is exactly the case in the wind-sensitive cricket cercal sensory system (Jacobs et al., 1986; Jacobs and Miller, 1988), where the population of directionally selective mechano-receptors covers the entire range of 360 degrees.
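A numerical sketch of this representation idea (ours, with hypothetical coefficients): build a stimulus from known c_n, then recover each c_p by projecting onto H_p, using the standard orthogonality relation ∫ H_m(x) H_n(x) e^(-x^2) dx = sqrt(pi) 2^n n! δ_mn.

```python
import math

def hermite(p, t):
    """Physicists' Hermite polynomial H_p(t) via the recurrence
    H_{n+1}(t) = 2 t H_n(t) - 2 n H_{n-1}(t)."""
    h0, h1 = 1.0, 2.0 * t
    if p == 0:
        return h0
    for n in range(1, p):
        h0, h1 = h1, 2.0 * t * h1 - 2.0 * n * h0
    return h1

c = [0.5, -1.0, 0.25]          # hypothetical stimulus coefficients

def g(x):
    """Stimulus g(x) = sum_n c_n H_n(x) exp(-x**2)."""
    return sum(cn * hermite(n, x) * math.exp(-x * x)
               for n, cn in enumerate(c))

def coeff(p, lo=-8.0, hi=8.0, steps=4000):
    """Projection of g onto H_p, normalized by orthogonality so it
    returns c_p: a stand-in for the p-th interneuron's response."""
    dx = (hi - lo) / steps
    s = sum(hermite(p, lo + i * dx) * g(lo + i * dx)
            for i in range(steps)) * dx
    return s / (math.sqrt(math.pi) * 2 ** p * math.factorial(p))

for p, cp in enumerate(c):
    assert abs(coeff(p) - cp) < 1e-6
```

Each "interneuron" here reads out one coefficient, so a group of them represents the whole stimulus; the precision is set by how many orders p are available, not by the receptor spacing, as stated above.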
5 CONCLUSION

Heiligenberg's model, which employs an array of orderly arranged and broadly tuned receptors to enhance two-point resolution, can be extended in a number of ways. We first proved the general result that the model works for any polynomial weighting scheme. We further demonstrated that the Hermitian polynomials are the eigenfunctions of this system. This leads to the new concept of stimulus representation, i.e. a group of higher-order interneurons can encode any generalized sensory stimulus with enhanced resolution if they adopt appropriately chosen weighting schemes. Finally, we discussed possible ways of eliminating or reducing the "edge effects".

ACKNOWLEDGMENTS

This work was supported by NIH grant #R01-NS26117.

REFERENCES

Baldi, P. and W. Heiligenberg (1988) How sensory maps could enhance resolution through ordered arrangements of broadly tuned receivers. Biol. Cybern. 59: 314-318.

Heiligenberg, W. (1987) Central processing of the sensory information in electric fish. J. Comp. Physiol. A 161: 621-631.

Jacobs, G.A. and J.P. Miller (1988) Analysis of synaptic integration using the laser photo-inactivation technique. Experientia 44: 361-462.

Jacobs, G.A., Miller, J.P. and R.K. Murphey (1986) Cellular mechanisms underlying directional sensitivity of an identified sensory interneuron. J. Neurosci. 6: 2298-2311.

Westheimer, G. (1975) Visual acuity and hyperacuity. Invest. Ophthalmol. Vis. 14: 570-572.
|
1988
|
54
|
140
|
240 TEMPORAL REPRESENTATIONS IN A CONNECTIONIST SPEECH SYSTEM

Erich J. Smythe, 207 Greenmanville Ave, #6, Mystic, CT 06355

ABSTRACT

SYREN is a connectionist model that uses temporal information in a speech signal for syllable recognition. It classifies the rates and directions of formant center transitions, and uses an adaptive method to associate transition events with each syllable. The system uses explicit spatial temporal representations through delay lines. SYREN uses implicit parametric temporal representations in formant transition classification through node activation onset, decay, and transition delays in sub-networks analogous to visual motion detector cells. SYREN recognizes 79% of six repetitions of 24 consonant-vowel syllables when tested on unseen data, and recognizes 100% of its training syllables.

INTRODUCTION

Living organisms exist in a dynamic environment. Problem-solving systems, both natural and synthetic, must relate and interpret events that occur over time. Although connectionist models are based on metaphors from the brain, few have been designed to capture temporal and sequential information common to even the most primitive nervous systems. Yet some of the most popular areas of application of these models, including speech recognition, vision, and motor control, require some form of temporal processing.

The variation in a speech signal contains considerable information. Changes in formant location or other acoustic parameters (Delattre et al., 1955; Pols and Schouten, 1982) can determine the identity of constituents of speech, even when segmentation information is obscure. Speech recognition systems have shown good results when they incorporate some temporal information (Waibel et al., 1988; Anderson et al., 1988). Successful speech systems must incorporate temporal processing. Natural organisms have sensory organs that are continuously updated and can do only limited buffering of input stimuli.
Synthetic implementations can buffer their input, transforming time into space. Often the size and complexity of the input representations place limits on the amount of input that can be buffered, especially when data is coming from hundreds or thousands of sensors, and other methods must be found to integrate temporal information.

This paper describes SYREN (SYllable REcognition Network), a connectionist network that incorporates various temporal representations for consonant-vowel (CV) syllable recognition by the classification of formant center transitions. Input is presented sequentially, one time slice at a time. The network is described, including the temporal processing used in formant transition classification, learning, and syllable recognition. The results of syllable recognition experiments are discussed in the final section.

TEMPORAL REPRESENTATIONS

Various types of temporal representations may be used to incorporate time in connectionist models. They range from explicit spatial representations, where time is converted into space, to implicit parametric representations, where time is incorporated using network computational parameters. Spatiotemporal representations are a middle ground combining the two extremes. The categories represent a continuum rather than absolute distinctions. Several of these types are found in SYREN.

EXPLICIT SPATIAL REPRESENTATIONS

In a purely spatial representation temporal information is preserved by spreading time steps over space through the network topology. These representations include input buffers, delay lines, and recurrent networks. Fixed input buffers allow interaction between time slices of input. Parts of the network are copied to represent states at particular time slices. Other methods use sliding input buffers in the form of a queue. Tapped delay lines and delay filters are means of spreading network node activations over time.
Composed of chains of network nodes or delay functions, they can preserve the sequential structure of information. A value on a connection from a delay line represents events that have occurred in the past. Delay lines and filters have been used in speech recognition systems by Waibel et al. (1988), and Tank and Hopfield (1987).

Recurrent networks are similar to delay lines in that information is preserved by propagating activation through the network. They can store information indefinitely or generate potentially infinite sequences of behaviors through feedback cycles, whereas delay lines without cycles are limited by their fixed length. Recurrent networks pose problems for learning, although researchers are working on recurrent back-propagation networks (Jordan, 1988).

Spatial representations are good for explicitly preserving sequences of events, and can simplify the learning of temporal patterns. Resource constraints place a limit on the size of fixed-length buffers and delay lines, however. Input data from thousands of sensors place limits on the length of time represented in the buffer, and may not be able to retain information long enough to be of use. Fixed input buffers may introduce edge effects. Interaction is lost between the edges of the buffer and data from preceding and succeeding buffers unless the input is properly segmented. Long delay lines may be computationally expensive as well.

IMPLICIT PARAMETRIC REPRESENTATIONS

Implicit parametric methods represent time in connectionist models by the behavior of network nodes. State information stored in individual nodes allows more complex activation functions and the accumulation of statistical information. This method may be used to regulate the flow of activation in the network, provide a trace of previous activation, and learn from data separated in time.
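A minimal sketch of the tapped delay lines discussed above; the length (five taps) and update rate (one per 5 ms slice) follow SYREN's delay matrix, while the class itself is an illustrative stand-in:

```python
from collections import deque

class TappedDelayLine:
    """Chain of delay nodes: tap k holds the input from k slices ago."""
    def __init__(self, length):
        self.taps = deque([0.0] * length, maxlen=length)

    def step(self, x):
        # One update per 5 ms time slice: new value enters, oldest falls off
        self.taps.appendleft(x)
        return list(self.taps)

# A unit pulse at t=0 marches down the line, one tap per time slice
line = TappedDelayLine(5)
outputs = [line.step(x) for x in [1.0, 0.0, 0.0, 0.0, 0.0]]
print(outputs[-1])  # [0.0, 0.0, 0.0, 0.0, 1.0]
```

A connection reading the last tap therefore "sees" an event from four slices (20 ms) in the past, which is how downstream nodes can learn from data separated in time.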
Adjusting the parameters of functions such as the interactive activation equation of McClelland and Rumelhart (1982) can control the strength of input, affecting the rate at which activation reaches saturation. This leads to pulse trains used in synchronization. Variations in decay parameters control the duration of an activation trace. State and statistical information is useful in learning. Eligibility traces from classical conditioning models provide a decaying memory of past connection activation. Temporally weighted averages may be used for weight computations.

SPATIOTEMPORAL REPRESENTATIONS

Spatiotemporal representations combine implicit parametric representations with explicit spatial representations. These include the regulation of propagation time and pulse trains through parameter adjustment. Gating behavior that controls the flow of activation through a network is another spatiotemporal method.

SYREN DESCRIPTION

SYREN is a connectionist model that incorporates temporal processing in isolated syllable recognition using formant center transitions. Formant center tracks are presented in 5 ms time slices. Input nodes are updated once per time slice. The network classifies the rates and directions of formant transitions. Transition data are used by an adaptive network to associate transition patterns with syllables. A recognition network uses output of the adaptive network to identify a syllable. Complete details of the system may be found in Smythe (1988).

DATA CORPUS

Input data consist of formant centers from five repetitions of twenty-four consonant-vowel syllables (the stop consonants /b, d, g/ paired with the vowels /ii, ey, ih, eh, ae, ah, ou, uu/), and an averaged set of each of the five repetitions, from work performed by Kewley-Port (1982). Each repetition is presented as a binary matrix, with a row representing frequency in 20 Hz units and a column representing time in 5 ms slices. The matrix is given to the input units one column at a time.
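The corpus encoding just described can be sketched as follows. Only the 20 Hz frequency rows, 5 ms columns, and column-at-a-time presentation come from the text; the formant track values and row count are invented for illustration:

```python
import numpy as np

# Toy formant-center track: one frequency (Hz) per 5 ms slice (invented values)
track_hz = [700, 720, 760, 820, 900, 900]

N_ROWS = 200                      # frequency rows, 20 Hz per unit (assumed span)

def track_to_matrix(track):
    """Binary matrix: rows are 20 Hz frequency units, columns are 5 ms slices."""
    m = np.zeros((N_ROWS, len(track)), dtype=int)
    for t, hz in enumerate(track):
        m[hz // 20, t] = 1        # a '1' marks a formant center in that cell
    return m

matrix = track_to_matrix(track_hz)

# Presentation: the input units see one 5 ms column per update
columns = [matrix[:, t] for t in range(matrix.shape[1])]
print(int(columns[0][700 // 20]))  # 1 — formant center at 700 Hz in slice 0
```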
A '1' in a cell of a matrix represents a formant center at a particular frequency during a particular time slice.

FORMANT TRANSITION CLASSIFICATION

In the first stage of processing SYREN determines the rate and direction of formant center transitions. Formant transition detectors are subnetworks designed to respond to transitions of one of six rates in either rising or falling directions, and also to steady-state events. The method used is motivated by a mechanism for visual motion detection in the retina that combines interactions between subunits of a dendritic tree and shunting, veto inhibition (Koch et al., 1982). Formant motion is analogous to visual motion, and formant transitions are treated as a one-dimensional case of visual motion.

Figure 1. Formant transition detector subnetwork and its preferred descending transition type. The vertical axis is frequency (one row for each input unit) and the horizontal axis is time in 5 ms slices.

A detector subnetwork for a slow transition is shown in figure 1, along with its preferred transition. Branch nodes are analogous to dendritic subunits, and serve as activation transmission lines. Their activation is computed by the equation:

a_i^{t+1} = a_i^t (1 - \theta) + net_i^t (1 - a_i^t)

where a_i^t is the activation of unit i at time t, net_i^t is the weighted input, t is an update cycle (there are 7 updates per time slice), and \theta is a decay constant. Input to a branch node drives the activation to a maximum value, the rate of which is determined by the strength of the input. In the absence of input the activation decays to 0. For the preferred direction, input nodes are activated for two time slices (10 ms) in order from top to bottom. An input node causes the activation of the most distal branch node to rise to a maximum value.
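The branch-node update can be simulated directly. A minimal sketch of a four-node branch driven at its distal end for two time slices; the weight, decay constant, and node count are illustrative assumptions (the paper does not give them):

```python
# Illustrative parameters; the paper's actual weights and decay are not given
THETA = 0.1        # decay constant (theta in the update equation)
W = 0.6            # weight on each branch connection (assumed)
UPDATES = 7        # update cycles per 5 ms time slice
N_NODES = 4        # distal ... proximal branch nodes

a = [0.0] * N_NODES

def update_cycle(drive):
    """One synchronous cycle of a_i <- a_i*(1-theta) + net_i*(1-a_i)."""
    net = [drive] + [W * a[i - 1] for i in range(1, N_NODES)]
    for i in range(N_NODES):
        a[i] = a[i] * (1.0 - THETA) + net[i] * (1.0 - a[i])

# Drive the most distal node for two time slices (10 ms), then release
for time_slice in range(4):
    drive = 1.0 if time_slice < 2 else 0.0
    for _ in range(UPDATES):
        update_cycle(drive)
    print([round(v, 2) for v in a])  # a pulse flows toward the proximal end
```

The printout shows the delayed, bounded pulse the text describes: activation saturates below 1, propagates down the branch with a lag of a few update cycles per node, and decays once the input is removed.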
This in turn causes the next node to activate, slightly delayed with respect to the first, and so on for the rest of the branch. This results in a pulse of activation flowing along the branch, with a transmission delay of roughly one time slice (7 update cycles) from the distal to the proximal end. The most proximal branch node also has a connection to the input node. This connection serves to prime the node for slower transitions. Activation from an input node that lasts for only one time slice will decay in the proximal branch node before the activation from the distal region arrives. If input is present for two time steps, the extra activation from the input connection primes the node, quickly driving it to a maximal value when the distal activation arrives.

An S-node provides the output of the detector. It computes a sigmoid squash function and fires (a sudden increase in activation) when sufficient activation is in the proximal branch nodes. For this particular detector, if the transition is too fast (i.e. one time step for each input unit) the proximal nodes will not attain a high enough activation; if the transition is too slow (i.e. three time steps for each input unit) activation on proximal branch nodes from earlier time steps will have decayed before the transition is complete. This architecture is tuned to a slower transition by increasing the transmission time on the branches by varying the connection weights, and by reducing the decay rate by lowering the decay constant. This illustrates the use of parametric manipulations to control temporal behavior for rate sensitivity.

Veto inhibition is used in this detector for direction selectivity. Veto nodes provide inhibition and are activated by input nodes, and use the interactive activation equation for a decaying memory. Had the transition in figure 1
been in the opposite direction, activation from previous time slices on a veto connection would prevent the input node from activating its distal branch node, preventing the flow of activation and the firing of the S-node. Here a veto connection acts as a gate, serving to select input for processing.

Detectors are constructed for faster transitions by shortening the transmission lines and by using veto connections for rate sensitivity. A transition detector for a faster transition is shown in figure 2. Here the receptive field is larger, and veto connections are used to select transitions that skip one input unit at each time slice. Veto connections are still used for direction selectivity. Detectors for even faster transitions are created by widening the receptive field and increasing the number of veto connections for rate sensitivity.

Detectors are designed to respond to a specific transition type and not to respond to the transitions of other detectors. They will respond to transitions with rates between their own and the next type of detector. For slower transitions the firing of two detectors indicates an intermediate rate. For faster transitions, special detectors are designed to fire for only one precise rate by eliminating some of the branches. Different firing patterns of precise and more general detectors distinguish rates. This gives a very fine rate sensitivity throughout the range of transitions. Detector networks are copied to span the entire frequency range with overlapping receptive fields. This yields an array of S-nodes for each transition type, giving excellent spatial resolution of the frequency range. There are 200 S-nodes for each detector type, each signaling a transition that starts and ends at a particular frequency unit.

ADAPTIVE NETWORK

The adaptive network learns to associate patterns of formant transitions with specific syllables.
To do this it must be able to store at least part of the unfolding patterns, or else it is forced to respond to information from only one time slice.

Figure 2. Formant transition detector subnetwork for a faster transition. Only the veto connections used for rate sensitivity are shown.

The learning algorithm must also deal with past activation histories of connections, or else it can only learn from one time slice. The network accomplishes this through tapped delay lines and decaying eligibility traces. There are twenty-four nodes in the adaptive network, each assigned to one syllable. It is a single-layer network, trained using a hybrid supervised learning algorithm that merges Widrow-Hoff type learning with a classical conditioning model (Sutton and Barto, 1987).

Storage of temporal patterns

Tapped delay lines are used to briefly store sequences of formant transition patterns. S-nodes from each transition detector are connected to a tapped delay line of five nodes. Each delay node simply passes on its S-node's activation value once per 5 ms time slice, allowing the delay matrix to store 25 ms (five time slices) of transition patterns. The delay matrix consists of delay lines for each transition detector at each receptive field. Adaptive nodes are connected to every node in the delay matrix. The delay lines do not perform input buffering; information in the delay matrix has been subject to one level of processing. The amount of information stored (the length of the delay line) is limited by efficiency considerations.

Adaptive Algorithm

Nodes in the adaptive network compute their activation using a sigmoid squash function and adjust their weights according to the equation:

w_{ij}^{t+1} = w_{ij}^t + \alpha (z_i^t - s_i^t) e_j^t

where w_{ij}^t is the weight on the connection from node j to node i at time t, \alpha is a learning constant,
z_i^t is the expected value of node i, s_i^t is the weighted sum of the connections of node i, and e_j^t is the exponentially decaying canonical eligibility of connection j. The eligibility constant gives some variation in the exact timing of transition patterns, allowing limited time warping between training and testing.

FINAL RECOGNITION NETWORK

The adaptive network is not perfect and results in a number of false-alarm errors. Many of these are eliminated by using firing patterns of other adaptive nodes. For example, a node that consistently misfires on one syllable could be blocked by the firing of the correct node for that syllable. Adaptive nodes are connected to a veto recognition network. Since an adaptive node may fire at any time (and at different times) throughout input presentation, delay lines are used to preserve patterns of adaptive node behavior, and veto inhibition is used to block false alarms. Connections in the veto network are enabled or disabled after training. Clearly this is an ad hoc solution, but it suggests the use of representations that are distributed both spatially and temporally.

RESULTS AND DISCUSSION

In each experiment syllable repetitions were divided into mutually exclusive training and testing sets. A training cycle consisted of one presentation of each member of the training set. In both experiments the networks were trained until adequate performance was achieved, usually after four to ten training cycles. In the first experiment the network was trained on the five raw repetitions and tested on the averaged set. It achieved 92% recognition on the testing set and 100% recognition on the training set. The network had two miss errors on the training set.

In the second experiment, the network was trained on four of the raw repetitions and tested on the fifth. Five separate training runs were performed to test the network on each repetition.
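The adaptive update above can be sketched end to end. A toy version with three "syllables", disjoint illustrative input patterns, and assumed learning and eligibility-decay constants (only the update rule w_ij += α(z_i − s_i)e_j, the exponentially decaying trace, and the five-slice window come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHA, LAMBDA = 0.02, 0.8      # learning rate and eligibility decay (assumed)
n_inputs, n_syllables = 21, 3

# Toy transition patterns: each syllable excites its own block of inputs
patterns = [np.arange(n_inputs) // 7 == i for i in range(n_syllables)]

w = np.zeros((n_syllables, n_inputs))

for epoch in range(300):
    for label, mask in enumerate(patterns):
        e = np.zeros(n_inputs)                     # per-connection eligibility
        for t in range(5):                         # five 5 ms time slices
            x = (mask & (rng.random(n_inputs) < 0.9)).astype(float)  # noisy input
            e = LAMBDA * e + x                     # exponentially decaying trace
            s = w @ x                              # weighted sums s_i
            z = np.eye(n_syllables)[label]         # expected values z_i
            w += ALPHA * np.outer(z - s, e)        # w_ij += alpha*(z_i - s_i)*e_j

# Each adaptive node now responds most strongly to its own syllable's pattern
scores = np.array([w @ m.astype(float) for m in patterns])
print(np.argmax(scores, axis=1))  # [0 1 2]
```

Because the eligibility trace still carries activity from earlier slices when the error is computed, connections active slightly before or after their nominal time still receive credit, which is the limited time-warping tolerance the text mentions.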
The network achieved 76% recognition on the testing set over all training runs, and 100% recognition on the training set. In all experiments most of the adaptive nodes responded when there was transition information in the delay matrix. Many responded when both transition and steady-state information was present, using cues from both the consonant and the vowel. This situation occurs only briefly for each formant, since the delay matrix holds information for 5 time slices, and it takes four time slices to signal a steady-state event. Transition information will be at the end of the delay matrix while steady-state is at the beginning. Many nodes were strongly inhibited in the absence of transition information even for their correct syllable, although they had fired earlier in the data presentation.

CONCLUSIONS

We have shown how different temporal representations and processing methods are used in a connectionist model for syllable recognition. Hybrid connectionist architectures with only slightly more elaborate processing methods can classify acoustic motion and associate sequences of transition events with syllables. The system is not designed as a general speech recognition system, especially since the accurate measurement of formant center frequencies is impractical. Other signal processing techniques, such as spectral peak estimation, can be used without changes in the architecture. This could provide information to a larger speech recognition system. SYREN was influenced by a neurophysiological model for visual motion detection, and shows how knowledge from one processing modality can be applied to other problems. The merging of ideas from real nervous systems with existing techniques can add to the connectionist tool kit, resulting in more powerful processing systems.

Acknowledgments

This research was performed at Indiana University Computer Science Department as part of the author's Ph.D. thesis.
The author would like to thank committee members John Barnden and Robert Port for their help and direction, and Donald Lee and Peter Brodeur for their assistance in preparing the manuscript.

References

Anderson, S., Merrill, J. W. L., Port, R., 1988, "Dynamic speech characterization with recurrent networks," Indiana University Dept. of Computer Science TR No. 258, Bloomington, IN.

Delattre, P. C., Liberman, A. M., Cooper, F. S., 1955, "Acoustic loci and transitional cues for stop consonants," J. Acous. Soc. Am., 27, 769-773.

Jordan, M. I., 1986, "Serial order: A parallel distributed processing approach," ICS Report 8604, UCSD, San Diego.

Kewley-Port, D., 1982, "Measurement of formant transitions in naturally produced consonant-vowel syllables," J. Acous. Soc. Am., 72, 379-389.

Koch, C., Poggio, T., Torre, V., 1982, "Retinal ganglion cells: A functional interpretation of dendritic morphology," Phil. Trans. R. Soc. Lond.: Series B, 298, 227-264.

McClelland, J. L., Rumelhart, D. E., 1982, "An interactive activation model of context effects in letter perception," Psychological Review, 88, 375-407.

Pols, L. C. W., Schouten, M. F. H., 1982, "Perceptual relevance of coarticulation," in: Carlson, R., and Grandstrom, B., The Representation of Speech in the Peripheral Auditory System, Elsevier, 203-208.

Smythe, E. J., 1988, "Temporal computation in connectionist models," Indiana University Dept. of Computer Science TR No. 251, Bloomington, IN.

Sutton, R. S., Barto, A. G., 1987, "A temporal difference model of classical conditioning," GTE TR87-509.2.

Tank, D. W., Hopfield, J. J., 1987, "Concentrating information in time," Proceedings of the IEEE Conference on Neural Networks, San Diego, IV-455-468.

Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., Lang, K., 1988, "Phoneme recognition: Neural networks vs. Hidden Markov Models," Proc. Int. Conf. Acoustics, Speech, and Signal Processing, 107-110.
|
1988
|
55
|
141
|
314 NEURAL NETWORK STAR PATTERN RECOGNITION FOR SPACECRAFT ATTITUDE DETERMINATION AND CONTROL

Phillip Alvelda, A. Miguel San Martin, The Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109

ABSTRACT

Currently, the most complex spacecraft attitude determination and control tasks are ultimately governed by ground-based systems and personnel. Conventional on-board systems face severe computational bottlenecks introduced by serial microprocessors operating on inherently parallel problems. New computer architectures based on the anatomy of the human brain seem to promise high speed and fault-tolerant solutions to the limitations of serial processing. This paper discusses the latest applications of artificial neural networks to the problem of star pattern recognition for spacecraft attitude determination.

INTRODUCTION

By design, a conventional on-board microprocessor can perform only one comparison or calculation at a time. Image or pattern recognition problems involving large template sets and high resolution can require an astronomical number of comparisons to a given database. Typical mission planning and optimization tasks require calculations involving a multitude of parameters, where each element has an inherent degree of importance, reliability and noise. Even the most advanced supercomputers running the latest software can require seconds and even minutes to execute a complex pattern recognition or expert system task, often providing incorrect or inefficient solutions to problems that prove trivial to ground control specialists.

The intent of ongoing research is to develop a neural network based satellite attitude determination system prototype capable of determining its current three-axis inertial orientation. Such a system, which can determine in real time which direction the satellite is facing, is needed in order to aim antennas, science instruments, and navigational equipment.
For a satellite to be autonomous (an important criterion in interplanetary missions, and most particularly so in the event of a system failure), this task must be performed in a reasonable amount of time with all due consideration to actual environmental, noise and precision constraints.

CELESTIAL ATTITUDE DETERMINATION

Under normal operating conditions there is a whole repertoire of spacecraft systems that operate in conjunction to perform the attitude determination task, the backbone of which is the gyro. But a gyro measures only changes in orientation. The current attitude is stored in volatile on-board memory and is updated as the gyro system integrates velocity to provide change in angular position. When there is a power system failure for any reason, such as a single-event upset due to cosmic radiation, all currently stored attitude information is LOST!

One very attractive way of recovering attitude information with no a priori knowledge is by using on-board imaging and computer systems to:

1.) Image a portion of the sky,
2.) Compare the characteristic pattern of stars in the sensor field-of-view to an on-board star catalog,
3.) Thereby identify the stars in the sensor FOV [Field Of View],
4.) Retrieve the identified star coordinates,
5.) Transform and correlate FOV and real-sky coordinates to determine spacecraft attitude.

But the problem of matching a limited field of view that contains a small number of stars (out of billions and billions of them) to an on-board full-sky catalog containing perhaps thousands of stars has long been a severe computational bottleneck.

Figure 1.) Serial star I.D. catalog format and methodology. [Figure labels: star pairs, geometric constraints, stored pair addresses.]

The latest serial algorithm to perform this task requires approximately 650 KBytes of RAM to store the on-board star catalog.
It incorporates a highly optimized algorithm which uses a Motorola 68000 to search a sorted database of more than 70,000 star-pair distance values for correlations with the decomposed star pattern in the sensor FOV. It performs the identification process on the order of 1 second with a success rate of 99 percent. But it does not fit in the spacecraft on-board memory, and therefore no such system has flown on a planetary spacecraft.

Figure 2.) Current spacecraft attitude information recovery sequence. [Figure labels: uses sun sensor and attitude maneuvers to lock onto the sun, then Canopus.]

As a result, state-of-the-art interplanetary spacecraft use several independent sensor systems in conjunction to determine attitude with no a priori knowledge. First, the craft is commanded to slew until a sun sensor (aligned with the spacecraft's major axis) has locked on to the sun. The craft must then rotate around that axis until an appropriate star pattern at approximately ninety degrees to the sun is acquired to provide three-axis orientation information. The entire attitude acquisition sequence requires an absolute minimum of thirty minutes, and presupposes that all spacecraft actuator and maneuvering systems are operational. At the phenomenal rendezvous speeds involved in interplanetary navigation, a system failure near mission culmination could mean an almost complete loss of the most valuable scientific data while the spacecraft performs its initial attitude acquisition sequence.

NEURAL MOTIVATION

The parallel architecture and collective computation properties of a neural network based system address several problems associated with the implementation and performance of the serial star ID algorithm. Instead of searching a lengthy database one element at a time, each stored star pattern is correlated with the field of view concurrently.
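For contrast, the serial star-pair lookup just described can be sketched as a tolerance search over a sorted distance list. The toy catalog, tolerance, and bisect-based search are illustrative assumptions; a real catalog holds on the order of 70,000 sorted star-pair distances:

```python
import bisect

# Toy catalog: (distance, star_a, star_b) for each pair, sorted by distance
catalog = sorted([
    (1.20, "A", "B"), (2.75, "A", "C"), (3.10, "B", "C"),
    (4.60, "C", "D"), (5.05, "B", "D"),
])
dists = [d for d, _, _ in catalog]

def candidate_pairs(measured, tol=0.05):
    """All catalog pairs whose separation matches a measured FOV distance."""
    lo = bisect.bisect_left(dists, measured - tol)
    hi = bisect.bisect_right(dists, measured + tol)
    return catalog[lo:hi]

# Each measured separation in the field of view votes for star identities;
# intersecting the votes across pairs narrows them to a unique assignment.
print(candidate_pairs(2.76))  # [(2.75, 'A', 'C')]
print(candidate_pairs(3.08))  # [(3.1, 'B', 'C')]
```

Even with binary search, every measured pair triggers its own lookup and the candidate sets must then be cross-correlated serially, which is the bottleneck the concurrent neural correlation avoids.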
And whereas standard memory storage technology requires one address in RAM per star-pair distance, the neural star pattern representations are stored in characteristic matrices of interconnections between neurons. This distributed data set representation has several desirable properties. First of all, the 2N redundancy of the serial star-pair scheme (i.e. which star is at which end of a pair) is discarded, and a new, more compressed representation emerges from the neuromorphic architecture. Secondly, noise, both statistical (i.e. thermal noise) and systematic (i.e. sensor precision limitations), and pattern invariance characteristics are incorporated directly into the preprocessing and neural architecture without extra circuitry.

The first neural approach

The primary motivation from the NASA perspective is to improve satellite attitude determination performance and enable on-board system implementations. The problem methodology for the neural architecture is then slightly different from that of the serial model. Instead of identifying every detected star in the field of view, the neural system identifies a single 'Guide Star' with respect to the pattern of dimmer stars around it, and correlates that star's known position with the sensor FOV to determine the pointing axis. If needed, only one other star is then required to fix the roll angle about that axis. So the core of the celestial attitude determination problem changes from multiple star identification and correlation to single star pattern identification.

The entire system consists of several modules in a marriage of different technologies. The first neural system architecture uses already mature (i.e. sensor/preprocessor) technologies where they perform well, and neural technology only where conventional systems prove intractable.
With an eye towards rapid prototyping and implementation, the system was designed with technologies (such as neural VLSI) that will be available in less than one year.

SYSTEM ARCHITECTURE

The Star Tracker sensor system

The system input is based on the ASTROS II star tracker under development in the Guidance and Control section at the Jet Propulsion Laboratory. The star tracker optical system images a defocussed portion of the sky (a star sub-field) onto a charge-coupled device (C.C.D.). The tracker electronics then generate star centroid position and intensity information and pass this list to the preprocessing system.

The Preprocessing system

This centroid and intensity information is passed to the preprocessing subsystem, where the star pattern is treated to extract noise and pattern invariance. A 'pattern field-of-view' is defined as centered around the brightest star (i.e. the 'Guide Star') in the central portion of the sensor field-of-view. Since the pattern FOV radius is one half that of the sensor FOV, the pattern for that 'Guide Star' is based on a portion of the image that is complete, or invariant, under translational perturbation. The preprocessor then introduces rotational invariance to the 'Guide Star' pattern by using only the distances of all other dimmer stars inside the pattern FOV to the central guide star. These distances are then mapped by the preprocessor onto a two-dimensional coordinate system of distance versus relative magnitude (normalized to the guide star, the brightest star in the pattern FOV) to be sampled by the neural associative star catalog. The motivation for this distance map format becomes clear when issues involving noise invariance and memory capacity are considered. Because the ASTROS Star Tracker is a limited precision instrument, most particularly in the absolute and relative intensity measures, two major problems arise.
First, dimmer stars with intensities near the bottom of the dynamic range of the C.C.D. may or may not be included in the star pattern. So the entire distance map is scaled to the brightest star, such that the bright, high-confidence measurements are weighted more heavily, while the dimmer and possibly transient stars are of less importance to a given pattern. Secondly, since there are a very large number of stars in the sky, the uniqueness of a given star pattern is governed mostly by the relative star distance measures (which, incidentally, are the highest-precision measurements provided by the star tracker). In addition, because of the limitations in expected neural hardware, a discrete number of neurons must sample a continuous function. To retain the maximum sample precision with a minimum number of neurons, the neural system uses the biological mechanism of a receptive field for hyperacuity. In other words, a number of neurons respond to a single distance stimulus. The process is analogous to that used on the defocussed image of a point source on the C.C.D., which was integrated over several pixels to generate a centroid at sub-pixel accuracies. To relax the demands on hardware development for the neural module, this point smoothing was performed in the preprocessor instead of being introduced into the neural network architecture and dynamics. The equivalent neural response function then becomes:

X_i = Σ_{k=1}^{N} Ψ_k exp[ −(μ_i − μ*_k)² / σ² ]

where:
X_i is the sampling activity of neuron i,
N is the number of stars in the pattern field of view,
μ_i is the position of neuron i on the sample axis,
μ*_k is the position of the stimulus from star k on the sample axis,
Ψ_k is the magnitude scale factor of star k, normalized to the brightest star in the PFOV, the 'Guide Star', and
σ is the width of the Gaussian point-spread function.

The Neural system

The neural system, a 106-neuron, three-layer, feed-forward network, samples the scaled and smoothed distance map to provide an output vector whose highest neural output activity represents the best match to one of the pre-trained guide star patterns. The network training algorithm uses the standard backwards error propagation algorithm to set network interconnect weights from a training set of 'Guide Star' patterns derived from the software-simulated sky and sensor models.

Simulation testbed

The computer simulation testbed includes a realistic celestial field model, as well as a detector model that properly represents achievable position and intensity resolution, sensor scan rates, dynamic range, and signal-to-noise properties. Rapid identification of star patterns was observed in limited training sets as the simulated tracker was oriented randomly within the celestial sphere.

PERFORMANCE RESULTS AND PROJECTIONS

In terms of improved performance the neural system was quite a success, though not in the areas which were initially expected. While a VLSI implementation might yield considerable system speed-up, the digital simulation testbed neural processing time was of the same order as the serial algorithm, perhaps slightly better. The success rate of the serial system was already better than 99%. The neural net system achieved an accuracy of 100% when the systematic noise (i.e., dropped stars) of the sensor was neglected. When the dropped-star effect was introduced, the performance figure dropped to 94%.
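The receptive-field sampling function defined above (each neuron summing Gaussian responses to the star-distance stimuli) can be sketched directly; the parameter names below mirror the equation's symbols, and the function itself is illustrative:

```python
import math

def neuron_activities(star_distances, star_weights, neuron_positions, sigma):
    """Sketch of the receptive-field sampling in the text:
    X_i = sum_k Psi_k * exp(-(mu_i - mu_k)^2 / sigma^2),
    so each neuron i responds to every distance stimulus mu_k through a
    Gaussian point-spread function of width sigma, weighted by the
    star's normalized magnitude Psi_k."""
    return [
        sum(w * math.exp(-((mu_i - mu_k) ** 2) / sigma ** 2)
            for mu_k, w in zip(star_distances, star_weights))
        for mu_i in neuron_positions
    ]
```

A single stimulus thus spreads its activity over several neighboring neurons, which is what lets a discrete neuron array recover the stimulus position at sub-sample ("hyperacuity") precision.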
It was later discovered that the reason for this 'low' rate was due mostly to the limited size of the Yale Bright Star Catalog at higher magnitudes (lower star brightness). In sparse regions of the sky, the pattern in the sensor FOV presented by the limited sky model occasionally consisted of only two or three dim stars. When one or two of them dropped out because of the star sensor magnitude precision limitations, at times there was no pattern left to identify. Further experiments and parametric studies are under way using a more complete Harvard-Smithsonian catalog. The big gain was in terms of required memory. The serial algorithm stored over 70,000 star pairs at high precision, in addition to code for a rather complex heuristic, artificial-intelligence type of algorithm, for a total size of 650 KBytes. The neural algorithm used a connectionist data representation that was able to abstract from the star catalog pattern-class similarities, orthogonalities, and invariances in a highly compressed fashion. Network performance remained essentially constant until interconnect precision was decreased to less than four bits per synapse. 3000 synapses at four bits per synapse require very little computer memory. These simulation results were all derived from a Monte Carlo run of approximately 200,000 iterations using the simulator testbed.

CONCLUSIONS

By means of a clever combination of several technologies and an appropriate data set representation, a star ID system using one of the most simple neural algorithms outperforms those using the classical serial ones in several aspects, even while running a software-simulated neural network. The neural simulator is approximately ten times faster than the equivalent serial algorithm and requires less than one seventh the computer memory. With the transfer to neural VLSI technology, memory requirements will virtually disappear and processing speed will increase by at least an order of magnitude.
Where power and weight requirements scale with the hardware chip count, and every pound that must be launched into space costs millions of dollars, neural technology has enabled real-time on-board absolute attitude determination with no a priori information, which may eventually make several accessory satellite systems, like horizon and sun sensors, obsolete while increasing the overall reliability of spacecraft systems.

Acknowledgments

We would like to acknowledge many fruitful conversations with C. E. Bell, J. Barhen and S. Gulati.

[Figure: system block diagram — C.C.D., image preprocessor, distance map, neural sampler, neural output, and star attitude look-up table — with an example distance map (radius from guide star) and catalog entries (star number, R.A., Dec.).]

[Figure: prototype hardware implementation — serial processor, controller, and S/C interface to the ACS, feeding the neural processor/correlator.]
A MASSIVELY PARALLEL SELF-TUNING CONTEXT-FREE PARSER¹

Eugene Santos Jr.
Department of Computer Science, Brown University, Box 1910, Providence, RI 02912
[email protected]

ABSTRACT

The Parsing and Learning System (PALS) is a massively parallel self-tuning context-free parser. It is capable of parsing sentences of unbounded length, mainly due to its parse-tree representation scheme. The system is capable of improving its parsing performance through the presentation of training examples.

INTRODUCTION

Recent PDP research [Rumelhart et al., 1986; Feldman and Ballard, 1982; Lippmann, 1987] involving natural language processing [Fanty, 1988; Selman, 1985; Waltz and Pollack, 1985] has unrealistically restricted sentences to a fixed length. A solution to this problem was presented in the system CONPARSE [Charniak and Santos, 1987]. A parse-tree representation scheme was utilized which allowed for processing sentences of any length. Although successful as a parser, its architecture was strictly hand-constructed, with no learning of any form. Also, standard learning schemes were not applicable, since it differed from all the popular architectures, in particular connectionist ones. In this paper we present the Parsing and Learning System (PALS), which attempts to integrate a learning scheme into CONPARSE. It basically allows CONPARSE to modify and improve its parsing capability.

¹This research was supported in part by the Office of Naval Research under contract N00014-79-C-0592, the National Science Foundation under contracts IST-8416034 and IST-8515005, and by the Defense Advanced Research Projects Agency under ARPA Order No. 4786.

REPRESENTATION OF PARSE TREE

A parse tree is represented by a matrix where the bottom row consists of the leaves of the tree in left-to-right order and the entries in each column above each leaf correspond to the nodes in the path from leaf to root.
For example, looking at the simple parse tree for the sentence "noun verb noun", the column entries for verb would be verb, VP, and S (see Figure 1). (As in previous work, PALS takes parts of speech as input, not words.)

[Figure 1. Parse tree as represented by a collection of columns in the matrix.]

In addition to the columns of nonterminals, we introduce the binder entries as a means of easily determining whether two identical nonterminals in adjacent columns represent the same nonterminal in a parse tree (see Figure 2).

[Figure 2. Parse tree as represented by a collection of columns in the matrix plus binders.]

To distributively represent the matrix, each entry denotes a collection of labeled computational units. The value of the entry is taken to be the label of the unit with the largest value. A nonterminal entry has units which are labeled with the nonterminals of a language plus a special label "blank". When the "blank" unit is largest, this indicates that the entry plays no part in representing the current parse tree. A binder entry has units which are labeled from 1 to the number of rows in the matrix. A unit labeled k then denotes the binding of the nonterminal entry on its immediate left to the nonterminal entry in the kth row on its right. To indicate that no binding exists, we use a special unit label "e" called an edge. In general, it is easiest to view an entry as a vector of real numbers where each vector component denotes some symbol. (For more information see [Charniak and Santos, 1987].) In the current implementation of PALS, entry unit values range from 0 to 1. The ideal value for entry units is thus 1 for the largest entry unit and 0 for all remaining entry units. We essentially have "1" being "yes" and "0" being "no".
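The column representation described above can be generated mechanically from a tree. The sketch below assumes trees encoded as nested (label, child, ...) tuples with bare strings as leaves — an encoding of our own choosing; the column scheme itself is the paper's:

```python
def tree_columns(tree):
    """Sketch of the parse-tree-to-matrix scheme: the bottom row holds the
    leaves left to right, and each column lists the nodes on the path from
    that leaf up to the root. A bare string is a leaf; a tuple is
    (label, child, ...)."""
    cols = []

    def walk(node, path):
        if isinstance(node, str):            # leaf: its column is leaf + ancestors
            cols.append([node] + path)
        else:
            label, *children = node
            for child in children:
                walk(child, [label] + path)

    walk(tree, [])
    return cols

# "noun verb noun" under S -> NP VP, VP -> verb NP:
tree = ("S", ("NP", "noun"), ("VP", "verb", ("NP", "noun")))
```

Applied to this tree, the columns are [noun, NP, S], [verb, VP, S], and [noun, NP, VP, S], matching the description of Figure 1; binder entries (linking the shared S and VP across columns) are not modeled in this sketch.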
LANGUAGE RULES

In order to determine the values of the computational units mentioned in the previous section, we apply a set of language rules. Each computational unit will be determined by some subset of these rules. Each language rule is represented by a single node, called a rule node. A rule node takes its input from several computational units and outputs to a single computational unit. The output of each rule node is also modified by a non-negative value called a rule-weight. This weight represents the applicability of a language rule to the language we are attempting to parse (see PARSING). In the current implementation of PALS, rule-weight values range from 0 to 1, being similar to probabilities. Basically, a rule node attempts to express some rule of grammar. As with CONPARSE, PALS breaks context-free grammars into several subrules. For example, as part of S --> NP VP, PALS would have a rule stating that an NP entry would like to have an S immediately above it in the same column. Our rule for this grammar rule will then take as input the entry's computational unit labeled NP and output to the unit labeled S in the entry immediately above (see Figure 3).

[Figure 3. A rule node for S above NP.]

As a more complex example: if an entry is an NP, the NP does not continue right (i.e., has an edge), and above it is an S that continues to the right, then below the second S is a VP. In general, to determine a unit's value, we take all the rule nodes and combine their influences. This will be much clearer when we discuss parsing in the next section.

PARSING

Since we are dealing with a PDP-type architecture, the size of our matrix is fixed ahead of time. However, the way we use the matrix representation scheme allows us to handle sentences of unbounded length, as we shall see.
The system parses a sentence by taking the first word and placing it in the lower rightmost entry; it then attempts to construct the column above the word by using its rule nodes. After this processing, the system shifts the first word and its column left and inserts the second word. Now both words are processed simultaneously. This shifting and processing continues until the last word is shifted through the matrix (see Figure 4). Since sentence lengths may exceed the size of the matrix, we are only processing a portion at a time, creating partial parse trees. The complete parse tree is the combination of these partial ones.

[Figure 4. Parsing of noun verb noun.]

Basically, the system builds the tree in a bottom-up fashion. However, it can also build left-right, right-left, and top-down, since columns may be of differing height. In general, columns on the left in the matrix will be more complete and hence possibly higher than those on the right.

LEARNING

The goal of PALS is to learn how to parse a given language. Given a system consisting of a matrix with a set of language rules, we learn parsing by determining how to apply each language rule. In general, when a language rule is inconsistent with the language we are learning to parse, its corresponding rule-weight drops to zero, essentially disconnecting the rule. When a language rule is consistent, its rule-weight then approaches one. In PALS, we learn how to parse a sentence by using training examples. The teacher/trainer gives to the system the complete parse tree of the sentence to be learned. Because of the restrictions imposed by our matrix, we may be unable to fully represent the complete parse tree given by the teacher. To learn how to parse the sentence, we can only utilize a portion of the complete parse tree at any one time. Given a complete parse tree,
the system simply breaks it up into manageable chunks we call snapshots. Snapshots are matrices which contain a portion of the complete parse tree. Given this sequence of snapshots, we present them to the system in a fashion similar to parsing. The only difference is that we clamp the snapshot to the system matrix while it fires its rule nodes. From this, we can easily determine whether a rule node has fired incorrectly by seeing if it fired consistently with the given snapshot. We punish and reward accordingly. As the system is trained more and more, our rule-weights contain more and more information. We would like the rule-weights of those rules used frequently during training to not change as much as those not frequently used. This serves to stabilize our knowledge. It also prevents the system from being totally corrupted when presented with an incorrect training example. As in traditional methods, we find the new rule-weights by minimizing some function which gives us our desired learning. The function which captures this learning method is

Σ_{i,j} { c_{i,j} ( α_{i,j} − β_{i,j} )² + [ s_{i,j} β_{i,j} + ( 1 − s_{i,j} )( 1 − β_{i,j} ) ]² r_{i,j}² }

where i ranges over the unit labels for some matrix entry and j over the language rules associated with units i; α_{i,j} are the old rule-weights; β_{i,j} are the new rule-weights; c_{i,j} is the knowledge preservation coefficient, which is a function of the frequency with which language rule j for entry unit i has been fired during learning; r_{i,j} is the unmodified rule output using the snapshot as input; and s_{i,j} is the measure of the correctness of language rule j for unit i.

RESULTS

In the current implementation of PALS, we utilize a 7x6 matrix and an average of fifty language rules per entry unit to parse English. Obviously, our set of language rules will determine what we can and cannot learn. Currently, the system is able to learn and parse a modest subset of the English language.
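Since the learning objective is quadratic in each new rule-weight, its per-rule minimizer over [0, 1] can be recovered from three evaluations. A minimal sketch, treating one (i, j) term of the sum in isolation (an assumption about how the minimization decouples across rules):

```python
def new_rule_weight(alpha, c, r, s):
    """Minimize, over beta in [0, 1], the per-rule term of the learning
    objective in the text:
    F(beta) = c*(alpha - beta)**2 + (s*beta + (1 - s)*(1 - beta))**2 * r**2.
    F is quadratic in beta, so its vertex follows from three evaluations."""
    def F(b):
        return c * (alpha - b) ** 2 + (s * b + (1 - s) * (1 - b)) ** 2 * r ** 2

    A = 2.0 * (F(0.0) - 2.0 * F(0.5) + F(1.0))  # quadratic coefficient
    B = F(1.0) - F(0.0) - A                     # linear coefficient
    if A <= 0:                                  # degenerate: pick the better endpoint
        return 0.0 if F(0.0) <= F(1.0) else 1.0
    return min(1.0, max(0.0, -B / (2.0 * A)))   # clamp vertex into [0, 1]
```

Note how a large knowledge preservation coefficient c pins the new weight near the old one, which is exactly the stabilizing behavior described in the text.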
It can parse sentences with moderate sentence embedding and phrase attachment from the following grammar:

SM    --> S per
S     --> NP VP
NP    --> (det) (adj)* noun (PP)* (WHCL) (INFP1)
NP    --> INFP2
PP    --> prep NP
WHCL  --> that S/NP
S/NP  --> VP
S/NP  --> NP VP/NP
INFP1 --> (NP) INF1
INF1  --> to (adv) VP
VP    --> (aux) trverb NP (PP)*
VP    --> (aux) intrverb
VP    --> (aux) copverb NP
VP    --> (aux) copverb PP
VP    --> (aux) copverb adj
VP/NP --> (aux) trverb (PP)*
INFP2 --> (NP) INF2
INF2  --> to (adv) VP

We have found that sentences only require approximately two trainings. We have also found that by adding more consistent language rules, the system improved, actually generating parse trees which were more "certain" than previously generated ones. In other words, the values of the entry units in the final parse tree were much closer to the ideal. When we added inconsistent language rules, the system degraded. However, with slightly more training, the system was back to normal. It actually had to first eliminate the inconsistent rules before being able to apply the consistent ones. Finally, we attempted to train the system with incorrect training examples after it had been trained with correct ones. We found that even though the system degraded slightly, previous learning was not completely lost. This was basically due to the stability employed during learning.

CONCLUSIONS

We have presented a system capable of parsing and learning how to parse. The system parses by creating a sequence of partial parse trees and then combining them into a parse tree. It also places no limit on sentence length. Given a system consisting of a matrix and an associated set of language rules, we attempt to learn how to parse the language described by the complete parse trees supplied by a teacher/trainer. The same set of language rules may also be able to learn a different language.
Depending on the diversity of the language rules, it may also learn both simultaneously, i.e., parse both languages. (A simple example is two languages with distinct terminals and nonterminals.) The system learns by being presented complete parse trees and adding to its knowledge by modifying its rule-weights. The system requires a small number of trainings per training example. Also, incorrect training examples do not totally corrupt the system.

PROBLEMS AND FUTURE RESEARCH

Even though the PALS system has managed to integrate learning, there are still some problems. First, as in the CONPARSE system, we can only handle moderately embedded sentences. Second, the system is very positional: something that is learned in one portion of the matrix is not generalized to other portions. There is no rule acquisition in the PALS system; currently, all rules are assumed to be built in to the system. PALS's ability to suppress incorrect rules would entail rule learning if the possible set of language rules were very tightly constrained so that, in effect, all rules could be tried. Some linguists have suggested quite limited schemes, but whether any would work with PALS is not known.

REFERENCES

Rumelhart, D. et al., Parallel Distributed Processing: Explorations in the Microstructures of Cognition, Volume 1, The MIT Press (1986).
Charniak, E. and Santos, E., "A connectionist context-free parser which is not context-free, but then it is not really connectionist either," The Ninth Annual Conference of the Cognitive Science Society, pp. 70-77 (1987).
Fanty, M., "Learning in Structured Connectionist Networks," TR 252, University of Rochester Computer Science Department (1988).
Selman, B., "Rule-based processing in a connectionist system for natural language understanding," Technical Report CSRI-168, Computer Systems Research Institute, University of Toronto (1985).
Waltz, D. and Pollack,
J., "Massively parallel parsing: a strongly interactive model of natural language interpretation," Cognitive Science 9, pp. 51-74 (1985).
Feldman, J. A. and Ballard, D. H., "Connectionist models and their properties," Cognitive Science 6, pp. 205-254 (1982).
Lippmann, R., "An introduction to computing with neural nets," IEEE ASSP Magazine, pp. 4-22 (April 1987).
NEURAL NET RECEIVERS IN MULTIPLE-ACCESS COMMUNICATIONS

Bernd-Peter Paris, Geoffrey Orsak, Mahesh Varanasi, Behnaam Aazhang
Department of Electrical and Computer Engineering, Rice University, Houston, TX 77251-1892

ABSTRACT

The application of neural networks to the demodulation of spread-spectrum signals in a multiple-access environment is considered. This study is motivated in large part by the fact that, in a multiuser system, the conventional (matched filter) receiver suffers severe performance degradation as the relative powers of the interfering signals become large (the "near-far" problem). Furthermore, the optimum receiver, which alleviates the near-far problem, is too complex to be of practical use. Receivers based on multi-layer perceptrons are considered as a simple and robust alternative to the optimum solution. The optimum receiver is used to benchmark the performance of the neural net receiver; in particular, it is proven to be instrumental in identifying the decision regions of the neural networks. The back-propagation algorithm and a modified version of it are used to train the neural net. An importance sampling technique is introduced to reduce the number of simulations necessary to evaluate the performance of neural nets. In all examples considered, the proposed neural net receiver significantly outperforms the conventional receiver.

INTRODUCTION

In this paper we consider the problem of demodulating signals in a code-division multiple-access (CDMA) Gaussian channel. Multiple accessing in the code domain is achieved by spreading the spectrum of the transmitted signals using preassigned code waveforms. The conventional method of demodulating a spread-spectrum signal in a multiuser environment employs one filter matched to the desired signal. Since the conventional receiver ignores the presence of interfering signals, it is reliable only when there are few simultaneous transmissions.
Furthermore, when the relative received power of the interfering signals becomes large (the "near-far" problem), severe performance degradation of the system is observed even in situations with relatively low bandwidth efficiencies (defined as the ratio of the number of channel subscribers to the spread of the bandwidth) [Aazhang 87]. For this reason there has been an interest in designing optimum receivers for multi-user communication systems [Verdu 86, Lupas 89, Poor 88]. The resulting optimum demodulators, however, have a variable decoding delay with computational and storage complexity that depend exponentially on the number of active users. Unfortunately, this computational intensity is unacceptable in many applications. There is hence a need for near-optimum receivers that are robust to near-far effects, with a reasonable computational complexity to ensure their practical implementation. In this study, we introduce a class of neural net receivers that are based on multi-layer perceptrons trained via the back-propagation algorithm. Neural net receivers are very attractive alternatives to the optimum and conventional receivers due to their highly parallel structures. As we will observe, the performance of the neural net receivers closely tracks that of the optimum receiver in all examples considered.

SYSTEM DESCRIPTION

In the multiple-access network of interest, transmitters are assumed to share a radio band in a combination of the time and code domains. One way of multiple accessing in the code domain is spread spectrum, which is a signaling scheme that uses a much wider bandwidth than necessary for a given data rate. Let us assume that in a given time interval there are K active transmitters in the network. In a simple setting, the kth active user, in a symbol interval, transmits a signal from a binary signal set derived from the set of code waveforms assigned to the corresponding user.
The signal is time-limited to the interval [0, T], where T is the symbol duration. In this paper we will concentrate on symbol-synchronous CDMA systems. Synchronous systems find applications in time-slotted channels with the central (base) station transmitting to remote (mobile) terminals, and also in relays between central stations. The synchronous problem will also be construed as providing us with a manageable setting to better understand the issues in the more difficult asynchronous situation. In a synchronous CDMA system, the users maintain time synchronism, so that the relative time delays associated with all users are assumed to be zero. To illustrate the potentials of the proposed multiuser detector, we present the application to binary PSK direct-sequence signals in coherent systems. Therefore, the signal at a given receiver is the superposition of the K transmitted signals in additive channel noise (see [Aazhang 87, Lupas 89] and references within)

r(t) = Σ_{i=1}^{P} Σ_{k=1}^{K} b_k^{(i)} A_k a_k(t − iT) cos(ω_c[t − iT] + θ_k) + n_t,   t ∈ ℝ,   (1)

where P is the packet length, A_k is the signal amplitude, ω_c is the carrier frequency, and θ_k is the phase angle. The symbol b_k^{(i)} ∈ {−1, +1} denotes the bit that the kth user is transmitting in the ith time interval. In this model, n_t is the additive channel noise, which is assumed to be a white Gaussian random process. The time-limited code waveform, denoted by a_k(t), is derived from the spreading sequence assigned to the kth user. That is, a_k(t) = Σ_{j=0}^{N−1} a_j^{(k)} p(t − jT_c), where p(t) is the unit rectangular pulse of duration T_c and N is the length of the spreading sequence. One code period a^{(k)} = [a_0^{(k)}, a_1^{(k)}, ..., a_{N−1}^{(k)}] is used for spreading the signal per symbol, so that T = N T_c. In this system, spectrum efficiency is measured as the ratio of the number of channel users to the spread factor, K/N.
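A chip-sampled sketch of this superposition for one symbol interval of the synchronous model (carrier removed and all phases set to zero for simplicity; names are illustrative):

```python
import numpy as np

def received_chips(codes, amplitudes, bits, noise_std, rng):
    """Chip-sampled sketch of the one-symbol, synchronous version of eq. (1):
    the received vector is the superposition of b_k * A_k * a^(k) over the
    K users plus white Gaussian noise. `codes` holds one +/-1 spreading
    sequence (one code period) per row."""
    codes = np.asarray(codes, dtype=float)
    weights = np.asarray(bits, dtype=float) * np.asarray(amplitudes, dtype=float)
    return weights @ codes + rng.normal(0.0, noise_std, size=codes.shape[1])
```

With noise_std set to zero this returns the pure multiuser superposition, which is convenient for checking receivers against known bit vectors.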
In the next two sections, we first consider optimum synchronous demodulation of the multiuser spread-spectrum signal. Then we introduce the application of neural networks to the multiuser detection problem.

OPTIMUM RECEIVER

Multiuser detection is an active research area with the objective of developing strategies for demodulation of information sent by several transmitters sharing a channel [Verdu 86, Poor 88, Varanasi 89, Lupas 89]. In these situations with two or more users of a multiple-access Gaussian channel, one filter matched to the desired signal is no longer optimum, since the decision statistics are affected by the other signals (e.g., the statistics are disturbed by cross-correlations with the interfering signals). Employing conventional matched filters, because of their structural simplicity, may still be justified if the system is operating at a low bandwidth efficiency. However, as the number of users in the system with fixed bandwidth grows, or as the relative received powers of the interfering signals become large, severe performance degradation of the conventional matched filter is observed [Aazhang 87]. For direct-sequence spread-spectrum systems, the optimum receivers obtained by Verdu and Poor require an extremely high degree of software complexity and storage, which may be unacceptable for most multiple-access systems [Verdu 86, Lupas 89]. Despite implementation problems, studies on optimum demodulation illustrate that the effects of interfering signals in a CDMA system can, in principle, be neutralized. A complete study of the suboptimum neural net receiver requires a review of the maximum likelihood sequence detection formulation. Assuming that all possible information sequences are independent and equally likely, and defining b^{(i)} = [b_1^{(i)}, b_2^{(i)}, ..., b_K^{(i)}]', it is easy to see that an optimum decision on b^{(i)} is a one-shot decision, in that it requires the observation of the received signal only in the ith time interval.
Without loss of generality, we will therefore focus our attention on i = 0, drop the time superscript, and consider the demodulation of the vector of bits b with the observation of the received signal in the interval [0, T]. In a K-user Gaussian channel, the most likely information vector is chosen as that which maximizes the log of the likelihood function (see [Lupas 89])

Ω(b) = ∫_0^T [ 2 r(t) Σ_{k=1}^{K} b_k s_k(t) − ( Σ_{k=1}^{K} b_k s_k(t) )² ] dt,   (2)

where s_k(t) = A_k a_k(t) cos(ω_c t + θ_k) is the modulating signal of the kth user. The optimum decision can also be written as

b̂_opt = arg max_{b ∈ {−1,+1}^K} { 2 y'b − b'Hb },   (3)

where H is the K x K matrix of signal cross-correlations such that the (k,l)th element is h_{k,l} = ⟨s_k(t), s_l(t)⟩. The vector of sufficient statistics y consists of the outputs of a bank of K filters, each matched to one of the signals:

y_k = ∫_0^T r(t) s_k(t) dt,   for k = 1, 2, ..., K.   (4)

The maximization in (3) has been shown to be NP-complete [Lupas 89], i.e., no algorithm is known that can solve the maximization problem in time polynomial in K. This computational intensity is unacceptable in many applications. In the next section, we consider a suboptimum receiver that employs artificial neural networks for finding a solution to a maximization problem similar to (3).

NEURAL NETWORK

Until now the application of neural networks to multiple-access communications has not drawn much attention. In this study we employ neural networks for classifying different signals in synchronous additive Gaussian channels. We assume that the information bits of the first of the K signals are of interest; therefore, the phase angle of the desired signal is assumed to be zero (i.e., θ_1 = 0). Two configurations with multi-layer perceptrons and sigmoid nonlinearity are considered for multiuser detection of direct-sequence spread-spectrum signals.
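The maximization in (3) can be illustrated by exhaustive search over the 2^K bit vectors, with y and H built from chip-sampled code vectors standing in for the continuous-time inner products of (4); its exponential cost in K is precisely the intractability just noted. A minimal sketch (all names illustrative):

```python
import itertools
import numpy as np

def optimum_bits(y, H):
    """Brute-force maximizer of 2 y'b - b'H b over b in {-1,+1}^K, per eq. (3).
    Exponential in K, which is the NP-completeness obstacle noted in the text."""
    best, best_val = None, -np.inf
    for b in itertools.product((-1.0, 1.0), repeat=len(y)):
        b = np.asarray(b)
        val = 2.0 * (y @ b) - b @ H @ b
        if val > best_val:
            best, best_val = b, val
    return best

def statistics_and_correlations(r, codes):
    """Discrete-time stand-ins for eq. (4) and for H: y_k = <r, s_k> and
    h_kl = <s_k, s_l>, with each signal s_k represented by its chip samples
    (amplitudes folded into the rows of `codes`)."""
    codes = np.asarray(codes, dtype=float)
    return codes @ np.asarray(r, dtype=float), codes @ codes.T
```

In a two-user near-far example (a strong correlated interferer), the conventional statistic y_1 can have the wrong sign for user 1 while the exhaustive maximization still recovers both bits, which is the behavior the text describes.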
One structure is depicted in Figure 1.b, where a layered network of perceptrons processes the sufficient statistics (4) of the multiuser Gaussian channel. In this structure the first layer of the net (referred to as the hidden layer) processes [y_1, y_2, ..., y_K]. The output layer needs only one node, since there is only one signal being demodulated. This feed-forward structure is then trained using the back-propagation algorithm [Rumelhart 86]. In an alternate configuration, the continuous-time received signal is converted to an N-dimensional vector by sampling the output of the front-end filter at the chip rate T_c^{-1}, as illustrated in Figure 1.a. The input vector to the net can be written so that the demodulation of the first signal is viewed as a classification problem:

r = b_1 Ã_1 a^{(1)} + I + η,   (5)

where a^{(1)} is the spreading code vector of the first user, η is a length-N vector of filtered Gaussian noise samples, and I = Σ_{k=2}^{K} b_k Ã_k cos(θ_k) a^{(k)} is the multiple-access interference vector, with Ã_k = A_k T_c / 2 for all k = 1, 2, ..., K. The layered neural net is then trained to process the input vector for demodulation of the first user's information bits via the back-propagation algorithm. For this configuration we consider two training methods. First, the multi-layer receiver is trained, via the back-propagation algorithm, to classify the parity of the desired signal (referred to as the "trained" example) [Lippmann 87]. In another attempt (referred to as the "preset" example), the input layer of the net is preset as Gaussian classifiers and the other layers are trained using the back-propagation algorithm [Gschwendtner 88]. Since we are interested in understanding the internal representation of knowledge by the weights of the net, a signal space method is developed to illustrate decision regions. In a K-user system where the spreading sequences are not orthogonal, the signals can be represented by orthonormal bases using the Gram-Schmidt procedure.
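As a hedged illustration of the first configuration, the sketch below trains a one-hidden-layer sigmoid network by plain back-propagation to decide user 1's bit from simulated sufficient statistics y = Hb + noise. The network size, learning rate, noise model, and squared-error loss are all illustrative choices, not the paper's exact setup:

```python
import numpy as np

def train_mlp_on_statistics(H, noise_std, hidden=4, iters=4000, lr=0.3, seed=1):
    """Toy sketch of the Figure 1.b configuration: a feed-forward sigmoid
    network trained by back-propagation on matched-filter statistics
    y = H b + noise of a K-user channel, targeting user 1's bit."""
    rng = np.random.default_rng(seed)
    K = H.shape[0]
    W1 = rng.normal(0, 0.5, (hidden, K)); c1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden);      c2 = 0.0
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(iters):
        b = rng.choice((-1.0, 1.0), size=K)           # all users' bits
        y = H @ b + rng.normal(0, noise_std, K)       # simulated statistics
        h = sig(W1 @ y + c1)
        out = sig(W2 @ h + c2)
        t = (b[0] + 1) / 2                            # target: user 1's bit
        d2 = (out - t) * out * (1 - out)              # squared-error backprop
        d1 = d2 * W2 * h * (1 - h)
        W2 -= lr * d2 * h;  c2 -= lr * d2
        W1 -= lr * np.outer(d1, y);  c1 -= lr * d1
    return lambda y: 1.0 if sig(W2 @ sig(W1 @ y + c1) + c2) > 0.5 else -1.0
```

On a well-separated two-user example the trained network's decision rule agrees with the true bit of user 1 on all noiseless statistic vectors.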
The optimum decision regions in the signal space for the demodulation of b_1 are known [Poor 88] and can be compared directly to those of the neural net. Figure 2 illustrates decision regions for the optimum receiver and for the "preset" and "trained" neural net receivers. In this example, two users share a channel with N = 3, signal-to-noise ratio of user 1 (SNR_1) equal to 8 dB, and relative energies of the two users E_2/E_1 = 6 dB. As seen in this figure, the decision region of the "preset" example is almost identical to the optimum boundary; however, the decision boundary of the "trained" example is quite conservative. Such comparisons are instrumental not only in identifying the pattern by which decisions are made by the neural networks but also in understanding the characteristics of the training algorithms.

PERFORMANCE ANALYSIS

In this paper, we motivate the application of neural nets to single-user detection in multiuser channels by comparing the performance of the receivers in Figure 1 to that of the conventional and the optimum receivers [Poor 88]. Since exact analysis of the bit-error probabilities of the neural net receivers is analytically intractable, we resort to Monte Carlo simulation. This method can produce very accurate estimates of bit-error probability if the number of simulations is large enough to ensure the occurrence of several erroneous decisions. The fact that these multiuser receivers operate with near-optimum error rates therefore puts a tremendous computational burden on the computer system. The new variance-reduction scheme developed by Orsak and Aazhang in [Orsak 89] first shifts the simulated channel noise to bias the simulations and then scales the error rate to obtain an unbiased estimate with a reduced variance. This importance sampling technique, which proved extremely effective in single-user detection [Orsak 89], is applied here to the analysis of multiuser systems.
As discussed in [Orsak 89], the fundamental issue is to generate more errors by biasing the simulations in cases where the error rate is very small. This strategy is best described by the two-user Gaussian example in Figure 2. In this example the simulation is carried out by generating zero-mean Gaussian noise vectors n, random phase theta_2, and random values of the interfering bit b_2. Considering b_1 = 1 (corresponding to signals +a_1 + a_2 or +a_1 - a_2, which are marked by "+" in Figure 2), an error occurs if the statistics fall on the left side of the decision boundary. It can be shown that the most efficient biasing scheme corresponds to a shift of the mean of the Gaussian noise and the multiple-access interference such that the mean of the statistics is placed on the decision boundary (the shifted signals are marked by "o" in Figure 2). Since this strategy generates many more errors than standard Monte Carlo, the errors are weighted to obtain an unbiased estimate of the error rate. The importance sampling technique substantially reduces the number of simulation trials required for a given accuracy compared to standard Monte Carlo. In Figure 3 the gain, defined as the ratio of the number of trials required for a fixed variance using Monte Carlo to that using the importance sampling method, is plotted versus the bit-error probability. In this example, the spreading sequence length is N = 3 and the relative energies of the two users are E_2/E_1 = 6 dB. The gain in this example of a severe near-far problem is inversely proportional to the error rate. Furthermore, results from extensive analysis indicate that the proposed importance sampling technique is well suited to problems in multi-user communications, and fewer than 100 trials are sufficient for an accurate error-probability estimate.
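A toy scalar version of this biasing scheme conveys the idea. The Python sketch below is ours, not the paper's multiuser simulation: to estimate P(n < -d) for zero-mean Gaussian noise, shift the noise mean onto the boundary -d and reweight each biased outcome by the likelihood ratio of the original density to the biased one, so the estimate stays unbiased.

```python
import numpy as np

def ber_importance_sampling(d, sigma, trials=1000, seed=1):
    """Estimate P(N < -d) for N ~ Normal(0, sigma^2) by biasing the
    noise mean onto the decision boundary (-d) and reweighting each
    outcome by the likelihood ratio f(n)/g(n).  Scalar analogue of the
    scheme in [Orsak 89]; the multiuser version also shifts the
    multiple-access interference."""
    rng = np.random.default_rng(seed)
    shift = -d                                   # place the mean on the boundary
    n = rng.normal(loc=shift, scale=sigma, size=trials)
    errors = n < -d                              # about half the biased trials err
    # likelihood ratio: original zero-mean density over the biased density
    w = np.exp((-n**2 + (n - shift) ** 2) / (2.0 * sigma**2))
    return np.mean(errors * w)

# With d = 4, sigma = 1 the true error rate is Q(4), about 3.2e-5; a few
# hundred biased trials give a usable estimate, unlike plain Monte Carlo,
# which would need millions of trials to see any errors at all.
print(ber_importance_sampling(4.0, 1.0))
```

Roughly half of the biased trials land in the error region, and the small likelihood-ratio weights scale them back down to an unbiased, low-variance estimate.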
NUMERICAL RESULTS

The performance of the conventional, optimum [Poor 88], and neural net receivers is compared via Monte Carlo simulations employing the importance sampling method. Except for a difference in the length of the training periods, the two configurations in Figure 1 result in similar average bit-error probabilities. The results presented here correspond to the neural net receiver of Figure 1.a. A two-user Gaussian channel is considered with a severe near-far problem, where E_2/E_1 = 6 dB and the spreading sequence length is N = 3. In Figure 4, the average bit-error probabilities of the four receivers (conventional, optimum, and the neural nets of the "trained" and "preset" examples) are plotted versus the signal-to-noise ratio of the first user (SNR_1). It is clear from this figure that the two neural net receivers outperform the matched-filter receiver over the entire range of SNR_1. Figure 5 depicts these average error probabilities versus the relative energies of the two users (i.e., E_2/E_1) for fixed SNR_1 = 8 dB and N = 3. As expected, the conventional receiver becomes multiple-access limited as E_2 increases; the performance of the neural net receivers, however, closely tracks that of the optimum receiver for all values of E_2. We also considered a three-user Gaussian example with high bandwidth efficiency and a severe near-far problem, where the spreading sequence length is N = 3, the first and third users have equal energy, and the second user has four times more energy (i.e., E_2/E_1 = 6 dB). The average error probabilities of the four receivers versus SNR_1 are depicted in Figure 6. The neural net receivers maintained their near-optimum performance even in this three-user example with a spreading factor of 3, corresponding to a bandwidth efficiency of 1.

CONCLUSIONS

In this paper, we consider the problem of demodulating a signal in a multiple-access Gaussian channel.
The error probabilities of the different neural net receivers were compared with those of the conventional and optimum receivers in a symbol-synchronous system. As expected, the performance of the conventional receiver (matched filter) is very sensitive to the strength of the interfering users. The error probability of the neural net receiver, however, is independent of the strength of the other users and is at least one order of magnitude better than that of the conventional receiver. Except for a difference in the length of the training periods, the two configurations in Figure 1 result in similar average bit-error probabilities. However, the two training strategies, "preset" and "trained", resulted in slightly different error rates and decision regions. The multi-layer perceptron was very successful in the classification problem in the presence of interfering signals. In all the examples considered, two layers of perceptrons proved sufficient to closely approximate the decision boundary of the optimum receiver. We anticipate that this application of neural networks will shed more light on the potential of neural nets in digital communications. The issues facing this project were quite general in nature and are reported in many neural network studies. However, we were able to address these issues in multiple-access communications because the disturbances are structured and the optimum receiver (which is NP-hard) is well understood.

References

[Aazhang 87] B. Aazhang and H. V. Poor. Performance of DS/SSMA Communications in Impulsive Channels--Part I: Linear Correlation Receivers. IEEE Trans. Commun., COM-35(11):1179-1188, November 1987.

[Gschwendtner 88] A. B. Gschwendtner. DARPA Neural Network Study. AFCEA International Press, 1988.

[Lippmann 87] R. P. Lippmann and B. Gold. Neural-Net Classifiers Useful for Speech Recognition. In IEEE First Conference on Neural Networks, pages 417-425, San Diego, CA, June 21-24, 1987.

[Lupas 89] R. Lupas and S. Verdu. Linear Multiuser Detectors for Synchronous Code-Division Multiple-Access Channels. IEEE Trans. Info. Theory, IT-34, 1989.

[Orsak 89] G. Orsak and B. Aazhang. On the Theory of Importance Sampling Applied to the Analysis of Detection Systems. IEEE Trans. Commun., COM-37, April 1989.

[Poor 88] H. V. Poor and S. Verdu. Single-User Detectors for Multiuser Channels. IEEE Trans. Commun., COM-36(1):50-60, January 1988.

[Rumelhart 86] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning Internal Representations by Error Propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. I: Foundations, pages 318-362, MIT Press, 1986.

[Varanasi 89] M. K. Varanasi and B. Aazhang. Multistage Detection in Asynchronous Code-Division Multiple-Access Communications. IEEE Trans. Commun., COM-37, 1989.

[Verdu 86] S. Verdu. Optimum Multiuser Asymptotic Efficiency. IEEE Trans. Commun., COM-34(9):890-897, September 1986.

Figure 1. Two Neural Net Receiver Structures.

Figure 2. Decision Boundaries of the Various Receivers (optimum receiver, matched filter, and the "preset" and "trained" neural nets).

Figure 3. Importance Sampling Gain versus Error Rate for the 2-User Example.

Figure 4. Prob. of Error as a Function of the SNR (E2/E1 = 4).

Figure 5. Influence of MA-Interference (SNR = 8 dB).

Figure 6. Error Curves for the 3-User Example.
A COMPUTATIONALLY ROBUST ANATOMICAL MODEL FOR RETINAL DIRECTIONAL SELECTIVITY

Norberto M. Grzywacz
Center Biol. Inf. Processing, MIT, E25-201, Cambridge, MA 02139

Franklin R. Amthor
Dept. Psychol., Univ. Alabama Birmingham, Birmingham, AL 35294

ABSTRACT

We analyze a mathematical model for retinal directionally selective cells based on recent electrophysiological data, and show that its computation of motion direction is robust against noise and speed.

INTRODUCTION

Directionally selective retinal ganglion cells discriminate the direction of visual motion relatively independently of speed (Amthor and Grzywacz, 1988a) and with high contrast sensitivity (Grzywacz, Amthor, and Mistler, 1989). These cells respond well to motion in the "preferred" direction, but respond poorly to motion in the opposite, null, direction. There is an increasing amount of experimental work devoted to these cells. Three findings are particularly relevant for this paper: 1- An inhibitory process asymmetric to every point of the receptive field underlies the directional selectivity of ON-OFF ganglion cells of the rabbit retina (Barlow and Levick, 1965). This distributed inhibition allows small motions anywhere in the receptive field center to elicit directionally selective responses. 2- The dendritic tree of directionally selective ganglion cells is highly branched and most of its dendrites are very fine (Amthor, Oyster, and Takahashi, 1984; Amthor, Takahashi, and Oyster, 1988). 3- The distributions of excitatory and inhibitory synapses along these cells' dendritic trees appear to overlap (Famiglietti, 1985). Our own recent experiments elucidated some of the spatiotemporal properties of these cells' receptive field. In contrast to excitation, which is transient with the stimulus, the inhibition is sustained and might arise from sustained amacrine cells (Amthor and Grzywacz, 1988a). Its spatial distribution is wide, extending to the borders of the receptive field center (Grzywacz and Amthor, 1988).
Finally, the inhibition seems to be mediated by a high-gain shunting, not hyperpolarizing, synapse, that is, a synapse whose reversal potential is close to the cell's resting potential (Amthor and Grzywacz, 1989). In spite of this large amount of experimental work, theoretical efforts to put these pieces of evidence into a single framework have been virtually nonexistent. We propose a directional selectivity model based on our recent data on the inhibition's spatiotemporal and nonlinear properties. This model, which is an elaboration of the model of Torre and Poggio (1978), seems to account for several phenomena related to retinal directional selectivity.

THE MODEL

Figure 1 illustrates the new model for retinal directional selectivity. In this model, a stimulus moving in the null direction progressively activates receptive field regions linked to synapses feeding progressively more distal dendrites of the ganglion cells. Every point in the receptive field center activates adjacent excitatory and inhibitory synapses. The inhibitory synapses are assumed to cause shunting inhibition. (We also formulated a pre-ganglionic version of this model, which, however, is outside the scope of this paper.)

FIGURE 1. The new model for retinal directional selectivity.

This model is different from that of Poggio and Koch (1987), where the motion axis is represented as a sequence of activation of different dendrites. Furthermore, in their model, the inhibitory synapses must be closer to the soma than the excitatory ones. (However, our model is similar to a model proposed, and argued against, elsewhere (Koch, Poggio, and Torre, 1982).) An advantage of our model is that it accounts for the large inhibitory areas to most points of the receptive field (Grzywacz and Amthor, 1988). Also, in the new model, the distributions of the excitatory and inhibitory synapses overlap along the dendritic tree, as suggested (Famiglietti, 1985).
Finally, the dendritic tree of ON-OFF directionally selective ganglion cells (inset; Amthor, Oyster, and Takahashi, 1984) is consistent with our model. The tree's fine dendrites favor the multiplicity of directional selectivity and help to deal with noise (see below). In this paper, we make calculations with a one-dimensional version of the model dealing with motions in the preferred and null directions. Its receptive field maps onto one dendrite of the cell. Set the origin of coordinates of the receptive field to be the point where a dot moving in the null direction enters the receptive field, and let S be the size of the receptive field. Next, set the origin of coordinates in the dendrite to be the soma, and let L be the length of the dendrite. The model postulates that a point x in the receptive field activates excitatory and inhibitory synapses at point z = xL/S of the dendrite. The voltages at the presynaptic sites are assumed to be linearly related to the stimulus, I(x, t); that is, there are functions f_e(t) and f_i(t) such that the excitatory, beta_e, and inhibitory, beta_i, presynaptic voltages of the synapses at position z in the dendrite are

    beta_j(z, t) = f_j(t) * I(zS/L, t),  j = e, i,

where "*" stands for convolution. We assume that the integral of f_e is zero (the excitation is transient) and that the integral of f_i is positive. (In practice, gamma-distribution functions for f_i, and the derivatives of such functions for f_e, were used in this paper's simulations.) The model postulates that the excitatory, g_e, and inhibitory, g_i, postsynaptic conductances are rectified functions of the presynaptic voltages. In some examples, we use the hyperbolic tangent as the rectification function, with constant gain and saturation parameters for each synapse type. In other examples, we use the rectification functions described in Grzywacz and Koch (1987) and their model of ON-OFF rectifications.
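The temporal assumptions above, a sustained inhibition with positive-integral gamma kernel and a transient excitation built from the derivative of such a kernel, followed by rectification, can be sketched numerically. This Python sketch is ours: the time base, kernel time constants, and rectification constants are illustrative stand-ins, not the paper's simulation values.

```python
import numpy as np

dt, T = 1.0, 300.0                      # hypothetical time base (ms)
t = np.arange(0.0, T, dt)

def gamma_kernel(t, tau, n=3):
    """Gamma-distribution kernel, normalized to unit area on the grid."""
    g = (t / tau) ** n * np.exp(-t / tau)
    return g / (g.sum() * dt)

f_i = gamma_kernel(t, tau=20.0)                    # inhibition: positive integral (sustained)
f_e = np.gradient(gamma_kernel(t, tau=10.0), dt)   # excitation: ~zero integral (transient)

stimulus = ((t > 50) & (t < 250)).astype(float)    # light step at one receptive-field point
beta_e = np.convolve(stimulus, f_e)[: len(t)] * dt
beta_i = np.convolve(stimulus, f_i)[: len(t)] * dt

# Rectified postsynaptic conductances; tanh rectification with hypothetical
# gain and saturation constants.
gain, sat = 1.0, 0.02
g_e = gain * np.tanh(np.clip(beta_e, 0.0, None) / sat)
g_i = gain * np.tanh(np.clip(beta_i, 0.0, None) / sat)
```

With these kernels, beta_e peaks shortly after stimulus onset and decays back to zero while the light is still on, whereas beta_i rises and stays elevated for the whole step, which is the transient-versus-sustained contrast the model relies on.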
For the postsynaptic site, we assume, without loss of generality, zero reversal potential and neglect the effect of capacitance (justified by the slow time courses of excitation and inhibition). Also, we assume that the inhibitory synapse produces shunting inhibition; that is, its conductance is not in series with a battery. Let E_e be the voltage of the excitatory battery. In general, the voltage V at different positions of the dendrite is described by the cable equation:

    d^2 V(z,t)/dz^2 = R_a ( -E_e g~_e(z,t) + V(z,t) (g~_e(z,t) + g~_i(z,t) + g~_r) ),

where R_a is the axoplasm resistance per unit length, g~_r is the resting membrane conductance, and the tilde indicates that in this equation the conductances are given per unit length. For the calculations illustrated in this paper, the stimuli are always delivered to the receptive field through two narrow slits; thus, these stimuli activate synapses at two discrete positions on a dendrite. In this paper we only show results for square-wave and sinusoidal modulations of light, but we also performed calculations for more general motions. The synaptic sites are small, so that their resting conductances are negligible, and we assume that outside these sites the excitatory and inhibitory conductances are zero. In this case, the equation holding outside the synapses is

    d^2 U/dy^2 = U,

where we define lambda = 1/(R_a g~_r)^{1/2} (the length constant), U = V/E_e, and y = z/lambda. The boundary conditions used are

    dU/dy = 0 at y = L',    dU/dy = U(0)/rho at y = 0,

where L' = L/lambda, and where, if R_s is the soma's input resistance, then rho = R_s/(R_a lambda) (the dendritic-to-soma conductance ratio). The first condition means that currents do not flow through the tips of the dendrites. Finally, label by 1 the synapse proximal to the soma, and by 2 the distal one; the boundary conditions at the synapses are continuity of the potential,

    lim_{y -> y_j+} U = lim_{y -> y_j-} U,  j = 1, 2,

together with a jump in its derivative set by the local synaptic currents,

    lim_{y -> y_j+} dU/dy - lim_{y -> y_j-} dU/dy = (r_{e,j} + r_{i,j}) U(y_j) - r_{e,j},  j = 1, 2,    (1)

where r_e = g_e R_a lambda and r_i = g_i R_a lambda.
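As a numerical cross-check on this cable formulation, one can discretize the steady-state equation and solve the resulting tridiagonal system directly. The Python sketch below is ours, not the paper's code: all conductance values are illustrative, and for simplicity both ends are sealed instead of using the somatic load rho. It reproduces the classic "on-the-path" property of shunting inhibition (Koch, Poggio, and Torre, 1982): a shunt between the excitatory synapse and the soma vetoes the excitation more effectively than an equal shunt placed distal to it.

```python
import numpy as np

# Finite-difference solution of the steady-state cable equation
#   d^2 V/dz^2 = Ra * ( (ge + gi + gr) V - Ee * ge )
# on a dendrite of M nodes; parameter values are illustrative only.
M, dz = 200, 0.05            # nodes, step (in units of the length constant)
Ra, Ee, gr = 1.0, 1.0, 1.0   # normalized axial resistance, battery, leak

def soma_voltage(synapses):
    """synapses: {node index: (ge, gi)} per-unit-length conductances.
    Returns the steady-state V at node 0 (the somatic end)."""
    ge, gi = np.zeros(M), np.zeros(M)
    for idx, (e, i) in synapses.items():
        ge[idx] += e
        gi[idx] += i
    g = ge + gi + gr
    A = np.zeros((M, M))
    rhs = -Ra * Ee * ge * dz**2
    for i in range(M):
        A[i, i] = -2.0 - Ra * g[i] * dz**2
        if i > 0:
            A[i, i - 1] = 1.0
        if i < M - 1:
            A[i, i + 1] = 1.0
    A[0, 1] += 1.0        # sealed (reflecting) boundaries: dV/dz = 0
    A[-1, -2] += 1.0
    return np.linalg.solve(A, rhs)[0]

exc = (40.0, 0.0)             # excitatory synapse halfway along the dendrite
v_plain   = soma_voltage({100: exc})
v_on_path = soma_voltage({100: exc, 50: (0.0, 200.0)})   # shunt between synapse and soma
v_distal  = soma_voltage({100: exc, 150: (0.0, 200.0)})  # shunt beyond the synapse
print(v_on_path < v_distal < v_plain)
```

This spatial asymmetry of the shunting veto is what lets the model's progressively more distal null-direction activation suppress the somatic response.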
It can be shown that the relative inhibitory strength for motions in the preferred direction decreases with L' and increases with rho. Thus, to favor conditions for multiplicity of direction selectivity in the receptive field, we perform calculations with L' -> infinity and rho = 1. The strengths of the excitatory synapses are set such that their contribution to the somatic voltage in the absence of inhibition is invariant with position. Finally, we ensure that the excitatory synapses never saturate. Under these conditions, one can show that the voltage at the soma is

    U(0) = [ {2 r_{e,2} + (r_{e,2} + r_{i,2} + 2) r_{e,1}} e^{2 Dy} + (r_{e,2} + r_{i,2}) r_{e,1} ]
           / [ {(r_{i,1} + 2) r_{i,2} + 2 r_{i,1} + 4} e^{2 Dy} + r_{i,1} r_{i,2} ],    (2)

where Dy is the distance between the synapses. A final quantity used in this paper is the directional selectivity index D. Let U_p and U_n be the total responses to the second slit in apparent motion in the preferred and null directions, respectively. (Alternatively, for sinusoidal motion, these quantities are the respective average responses.) We follow Grzywacz and Koch (1987) and define

    D = (U_p - U_n) / (U_p + U_n).    (3)

RESULTS

This section presents the results of calculations based on the model. We address: the multiple computations of directional selectivity in the cells' receptive fields; the robustness of these computations against noise; and the robustness of these computations against speed. Figure 2 plots the degree of directional selectivity for apparent motions activating two synapses as a function of the synapses' distance on a dendrite (computed from Equations 2 and 3).

FIGURE 2. Locality of interaction between synapses activated by apparent motions (directional selectivity plotted against dendritic distance, in units of lambda).
It can be shown that the critical parameter controlling whether a given synaptic distance produces a criterion directional selectivity is r_i (Equation 1): as r_i increases, the criterion distance decreases. Thus, since in retinal directionally selective cells the inhibition has high gain (Amthor and Grzywacz, 1989) and the dendrites are fine (Amthor, Oyster, and Takahashi, 1984; Amthor, Takahashi, and Oyster, 1988), r_i is high, and motions anywhere in the receptive field should elicit directionally selective responses (Barlow and Levick, 1965). In other words, the model's receptive field computes motion direction multiple times. Next, we show that the high inhibitory gain and the cells' fine dendrites help to deal with noise, and thus may explain the high contrast sensitivity (0.5% contrast; Grzywacz, Amthor, and Mistler, 1989) of the cells' directional selectivity. This conclusion is illustrated in Figure 3's multiple plots.

FIGURE 3. The model's computation of direction is robust against additive noise in the output and in the excitatory and inhibitory inputs.

To generate this figure, we used Equation 2, assuming that Gaussian noise is added to the cell's output, excitatory input, or inhibitory input. (In the latter case, we used an approximation that assumes a small standard deviation for the inhibitory input's noise.) Once again the critical parameter is the r_i defined in Equation 1. The larger this parameter is, the better the model deals with noise. In the case of output noise, an increase of the parameter separates the preferred and null mean responses.
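The noise-dividing effect of shunting can be illustrated with a toy single-compartment steady state. This Python sketch is ours, with hypothetical parameter values: the membrane voltage is E_e g_e / (g_e + g_i + g_r), so a larger shunt g_i shrinks the sensitivity dV/dg_e and hence the output standard deviation under noisy excitatory input.

```python
import numpy as np

# Toy illustration of "shunting shunts down the noise": with Gaussian
# noise on the excitatory conductance ge, a larger shunt gi reduces the
# standard deviation of V = Ee*ge/(ge+gi+gr).  All values hypothetical.
rng = np.random.default_rng(2)

def soma_response(g_e, g_i, g_r=1.0, E_e=1.0):
    return E_e * g_e / (g_e + g_i + g_r)

g_e = np.clip(2.0 * (1.0 + rng.normal(0.0, 0.3, 10_000)), 0.0, None)
weak   = soma_response(g_e, g_i=1.0)    # low inhibitory gain (low r_i)
strong = soma_response(g_e, g_i=10.0)   # high inhibitory gain (high r_i)
print(weak.std(), strong.std())         # the stronger shunt suppresses the noise
```

Because the shunt sits in the denominator, it divides fluctuations rather than merely subtracting a mean, which is why the suppression survives even large input noise.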
For noise in the excitatory input, a parameter increase not only separates the means but also reduces the standard deviation: shunting inhibition shunts down the noise. Finally, the most dramatic improvement occurs when the noise is in the inhibitory input. (In all these plots, the parameter increase is always by a factor of three.) Since for retinal directionally selective ganglion cells r_i is high (high inhibitory gain and fine dendrites), we conclude that the cells' mechanisms are particularly well suited to deal with noise. For sinusoidal motions, the directional selectivity is robust over a large range of temporal frequencies, provided that the frequencies are sufficiently low (Figure 4). (Nevertheless, the cell's preferred-direction responses may be sharply tuned to either temporal frequency or speed; Amthor and Grzywacz, 1988.)

FIGURE 4. Directional selectivity is robust against speed modulation.

To generate this curve, we subtracted the average response to an isolated flickering slit from the preferred and null average responses (from Equation 2). This robustness is due to the invariance with speed, at low speeds, of the relative temporal phase shift between inhibition and excitation. Since the excitation has band-pass characteristics, it leads the stimulus by a constant phase. On the other hand, the inhibition is delayed and advanced in the preferred and null directions, respectively, due to the asymmetric spatial integration. The phase shifts due to this integration are also speed invariant.

CONCLUSIONS

We propose a new model for retinal directional selectivity. The shunting inhibition of ganglion cells (Torre and Poggio, 1978), which is temporally sustained, is the main biophysical mechanism of the model.
It postulates that, for null-direction motion, the stimulus activates regions of the receptive field that are linked to excitatory and inhibitory synapses progressively farther away from the soma. This model accounts for: 1- the distribution of inhibition around points of the receptive field (Grzywacz and Amthor, 1988); 2- the apparent full overlap of the distributions of excitatory and inhibitory synapses along the dendritic trees of directionally selective ganglion cells (Famiglietti, 1985); 3- the multiplicity of directionally selective regions (Barlow and Levick, 1965); 4- the high contrast sensitivity of the cells' directional selectivity (Grzywacz, Amthor, and Mistler, 1989); 5- the relative invariance of directional selectivity with stimulus speed (Amthor and Grzywacz, 1988). Two lessons of our model for neural network modeling are: threshold is not the only neural mechanism, and the basic computational unit may not be a neuron but a piece of membrane (Grzywacz and Poggio, 1989). In our model, nonlinear interactions are relatively confined to specific branches of the dendritic tree (Torre and Poggio, 1978). This allows local computations by which single cells might generate receptive fields with multiple directionally selective regions, as observed by Barlow and Levick (1965). Such local computations could not occur if the inhibition only worked through a reduction in spike rate by somatic hyperpolarization. Thus, most neural network models may be biologically irrelevant, since they are built upon too simple a model of the neuron. The properties of a network depend strongly on its basic elements. Therefore, to understand the computations of biological networks, it may be essential to first understand the basic biophysical mechanisms of information processing before developing complex networks.

ACKNOWLEDGMENTS

We thank Lyle Borg-Graham and Tomaso Poggio for helpful discussions.
Also, we thank Consuelita Correa for help with the figures. N.M.G. was supported by grant BNS-8809528 from the National Science Foundation, by the Sloan Foundation, and by a grant to Tomaso Poggio and Ellen Hildreth from the Office of Naval Research, Cognitive and Neural Systems Division. F.R.A. was supported by grants from the National Institutes of Health (EY05070) and the Sloan Foundation.

REFERENCES

Amthor & Grzywacz (1988) Invest. Ophthalmol. Vis. Sci. 29:225.
Amthor & Grzywacz (1989) Retinal Directional Selectivity Is Accounted for by Shunting Inhibition. Submitted for publication.
Amthor, Oyster & Takahashi (1984) Brain Res. 298:187.
Amthor, Takahashi & Oyster (1989) J. Comp. Neurol. In press.
Barlow & Levick (1965) J. Physiol. 178:477.
Famiglietti (1985) Neurosci. Abst. 11:337.
Grzywacz & Amthor (1988) Neurosci. Abst. 14:603.
Grzywacz, Amthor & Mistler (1989) Applicability of Quadratic and Threshold Models to Motion Discrimination in the Rabbit Retina. Submitted for publication.
Grzywacz & Koch (1987) Synapse 1:417.
Grzywacz & Poggio (1989) In An Introduction to Neural and Electronic Networks. Zornetzer, Davis & Lau, Eds. Academic Press, Orlando, USA. In press.
Koch, Poggio & Torre (1982) Philos. Trans. R. Soc. B 298:227.
Poggio & Koch (1987) Sci. Am. 256:46.
Torre & Poggio (1978) Proc. R. Soc. B 202:409.
COMPUTER MODELING OF ASSOCIATIVE LEARNING

DANIEL L. ALKON (1), FRANCIS QUEK (2a), THOMAS P. VOGL (2b)
1. Laboratory for Cellular and Molecular Neurobiology, NINCDS, NIH, Bethesda, MD 20892
2. Environmental Research Institute of Michigan; a) P.O. Box 8618, Ann Arbor, MI 48107; b) 1501 Wilson Blvd., Suite 1105, Arlington, VA 22209

INTRODUCTION

Most current neural networks use models which have only tenuous connections to the biological neural systems on which they purport to be based, and negligible input from the neuroscience/biophysics communities. This paper describes an ongoing effort which approaches neural net research through close collaboration of neuroscientists and engineers. The effort is designed to elucidate associative learning in the marine snail Hermissenda crassicornis, in which Pavlovian conditioning has been observed. Learning has been isolated in the four-neuron network at the convergence of the visual and vestibular pathways in this animal, and biophysical changes specific to learning have been observed in the membrane of the photoreceptor B cell. A basic charging-capacitance model of a neuron is used and enhanced with biologically plausible mechanisms that are necessary to replicate the effect of learning at the cellular level. These mechanisms are nonlinear and are primarily instances of second-order control systems (e.g., fatigue, modulation of membrane resistance, time-dependent rebound), but also include shunting and random background firing. The output of the model of the four-neuron network displays changes in the temporal variation of membrane potential similar to those observed in electrophysiological measurements.
NEUROPHYSIOLOGICAL BACKGROUND

Alkon [1] showed that Hermissenda crassicornis, a marine snail, is capable of associating two stimuli in a fashion which exhibits all the characteristics of classical Pavlovian conditioning (acquisition, retention, extinction, and savings) [2]. In these experiments, Hermissenda were trained to associate a visual with a vestibular stimulus. In its normal environment, Hermissenda moves toward light; in turbulence, the animal increases the area of contact of its foot with the surface on which it is moving, reducing its forward velocity. Alkon showed that the snail can be conditioned to associate these stimuli through repeated exposures to ordered pairs (light followed by turbulence). When the snails are exposed to light (the conditioned stimulus) followed by turbulence (the unconditioned stimulus) after varying time intervals, the snails transfer to the light their unconditioned response to turbulence (increased area of foot contact); i.e., when presented with light alone, they respond with an increased area of foot contact. The effect of such training lasts for several weeks. It was further shown that the learning is maximized when rotation follows light by a fixed interval of about one second, and that such learning exhibits all the characteristics of classical conditioning observed in higher animals. The relevant neural interconnections of Hermissenda have been mapped by Alkon, and learning has been isolated in the four-neuron sub-network (Figure 1) at the convergence of the visual and vestibular pathways of this animal. Light generates signals in the B cells, while turbulence is transduced into signals by the statocysts' hair cells, the animal's vestibular organs. The optic ganglion cell mediates the interconnections between the two sensory pathways. The effects of learning have also been observed at the cellular level. Alkon et al.
have shown that biophysical changes associated with learning occur in the photoreceptor B cell of Hermissenda. The signals in the neurons take the form of voltage-dependent ion currents, and learning is reflected in biophysical changes in the membrane of the B cell. The effects of ion currents can be observed in the form of time variations in membrane potential recorded by means of microelectrodes. It is the variation in membrane potential resulting from associative learning that is the focus of this research.

Figure 1. The four-neuron network at the convergence of the visual and vestibular pathways of Hermissenda crassicornis. All filled endings indicate inhibitory synapses; all open endings indicate excitatory synapses. (a) Convergence of synaptic inhibition from the photoreceptor B cell and caudal hair cells onto the optic ganglion cell. (b) Positive synaptic feedback onto the type B photoreceptor: 1, direct synaptic excitation; 2, indirect excitation -- the ganglion excites the cephalic hair cell that inhibits the caudal hair cell, and thus disinhibits the type B cell; 3, indirect excitation -- the ganglion inhibits the caudal hair cell and thus disinhibits the type B cell. (c) Intra- and intersensory inhibition. The cephalic and caudal hair cells are mutually inhibitory. The B cell inhibits mainly the cephalic hair cell. From: Tabata, M., and Alkon, D. L. Positive synaptic feedback in visual system of nudibranch mollusk Hermissenda crassicornis. J. of Neurophysiology 48:174-191 (1982).

Our goal is to model those properties of biological neurons sufficient (and necessary) to demonstrate associative learning at the neural network level. In order to understand the effect and necessity of each component of the model, a minimalist approach was adopted.
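The synaptic sign structure spelled out in the Figure 1 caption can be transcribed directly as a signed adjacency table. This is a Python sketch of ours: the cell names are our labels, and the caudal-hair-to-B-cell inhibitory synapse is inferred from the disinhibition paths described in part (b) of the caption.

```python
# Signed synapse table for the four-neuron network of Figure 1:
# -1 = inhibitory (filled endings), +1 = excitatory (open endings).
synapses = {
    ("B_photoreceptor", "optic_ganglion"):  -1,  # (a)
    ("caudal_hair",     "optic_ganglion"):  -1,  # (a)
    ("optic_ganglion",  "B_photoreceptor"): +1,  # (b1) direct excitation
    ("optic_ganglion",  "cephalic_hair"):   +1,  # (b2)
    ("optic_ganglion",  "caudal_hair"):     -1,  # (b3)
    ("cephalic_hair",   "caudal_hair"):     -1,  # (c) mutual inhibition
    ("caudal_hair",     "cephalic_hair"):   -1,  # (c)
    ("caudal_hair",     "B_photoreceptor"): -1,  # inferred from the disinhibition paths
    ("B_photoreceptor", "cephalic_hair"):   -1,  # (c)
}

def path_sign(path):
    """Net sign of a multi-synaptic path: the product of the synapse signs."""
    sign = 1
    for pre, post in zip(path, path[1:]):
        sign *= synapses[(pre, post)]
    return sign

# Indirect excitation (b3): ganglion -| caudal hair -| B cell is net positive.
print(path_sign(["optic_ganglion", "caudal_hair", "B_photoreceptor"]))  # 1
```

Both indirect routes (b2 and b3) multiply out to +1, matching the caption's description of them as disinhibitory, i.e., positive feedback onto the B cell.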
Nothing was added to the model which was not necessary to produce a required effect, and then only when neurophysiologists, biophysicists, electrical engineers, and computer scientists agreed that the addition was reasonable from the perspective of their disciplines.

METHOD

Following Kuffler and Nicholls [4], the model is described in terms of circuit elements. It must be emphasized, however, that this is simply a recognition of the fact that the chemical and physical processes occurring in the neuron can be described by (partial) differential equations, as can electronic circuits. The equivalent circuit of the charging and discharging of the neuron membrane is shown in Figure 2. The model was constructed using the P3 network simulation shell developed by Zipser and Rabin [5]. The P3 strip-chart capability was particularly useful in facilitating interdisciplinary interactions. Figure 3 shows the response of the basic model of the neuron as the frequency of input pulses is varied.

Our aim, however, is not to model an individual neuron. Rather, we consistently focus on properties of the neural network that are necessary and sufficient to demonstrate associative learning. Examination of the behavior of biological neurons reveals additional common properties that express themselves differently depending on the function of the individual neuron. These properties include background firing, second order controls, and shunting. Their inclusion in the model is necessary for the simulation of associative learning, and their implementation is described below.

Figure 2. Circuit model of the basic neuron. In the figure, the switches S1 through Sk close when there are input EPSPs to the cell and are otherwise open; Sdecay closes when there are no input EPSPs to the cell and is otherwise open.
The lines from the left are the inputs to the neuron from its dendritic connections to presynaptic neurons. The Rsk are the resistances that determine the magnitude of the effect (voltage) of the pulses from presynaptic neurons. The gap indicated by open circles is a high impedance coupler. R1 through Rk, together with C, determine the rise time for the kth input of the potential across the capacitor C, which represents the membrane. Rdecay controls the discharge time constant of the capacitor C. When the membrane potential (across C) exceeds the threshold potential, the neuron fires a pulse into its axon (to all output connections) and the charge on C is reduced by the "discharge quantum" (see text).

BACKGROUND FIRING IN ALL NEURONS

Background firing, i.e., spontaneous firing by neurons without any input pulses from other neurons, has been observed in all neurons. The fundamental importance of background firing is exemplified by the fact that in the four neuron network under study, the optic ganglion does not have any synapses that excite it (all its inputs are inhibitory).

Figure 3. Response of the basic model of a single neuron to a variety of inputs. The four horizontal strips, from top to bottom, show: 1) the input stream; 2) the resulting membrane potential; 3) the resulting stream of output pulses; and 4) the composite output of pulses superimposed on the membrane potential, emulating the corresponding electrophysiological measurement. The four vertical sections, from left to right, indicate: a) an extended input, simulating exposure of the B cell to light; b) a presynaptic neuron firing at maximum frequency; c) a presynaptic neuron firing at an intermediate frequency; d) a presynaptic neuron firing at a frequency insufficient to cause the neuron to fire but sufficient to maintain the neuron at a membrane potential just below firing threshold.
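A minimal sketch of the basic circuit model may make the charging/discharging cycle concrete: input pulses charge the membrane capacitor, charge leaks away through Rdecay, and crossing threshold emits a pulse while removing a fixed "discharge quantum". This is not the authors' P3 implementation; all parameter values here are illustrative assumptions.

```python
class BasicNeuron:
    """Leaky integrate-and-fire sketch of the circuit model of Figure 2.

    Illustrative constants only; the published model was built in the P3
    simulation shell with different units and parameter values."""

    def __init__(self, threshold=1.0, leak=0.05, discharge_quantum=0.5):
        self.v = 0.0                 # membrane potential (across C)
        self.threshold = threshold   # firing threshold
        self.leak = leak             # fraction lost per step via Rdecay
        self.dq = discharge_quantum  # charge removed per output pulse

    def step(self, epsp=0.0):
        """Advance one time step; epsp is the summed weighted input.
        Returns True if the neuron fires on this step."""
        self.v += epsp               # charging through the Rsk inputs
        self.v -= self.leak * self.v # leakage through Rdecay
        if self.v >= self.threshold:
            self.v -= self.dq        # release the "discharge quantum"
            return True
        return False

# A sustained input drives repetitive firing, qualitatively as in the
# extended-input section of Figure 3:
n = BasicNeuron()
spikes = sum(n.step(epsp=0.3) for _ in range(50))
```

With these constants the neuron settles into a regular firing rhythm under constant drive, and its potential always sits below threshold between pulses.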
However, the optic ganglion provides the only two excitatory synapses in the entire network (one on the photoreceptor B cell and the other on the cephalad statocyst hair cell). Hence, without background firing, i.e., when there is no external stimulation of the neurons, all activity in the network would cease. Further, without background firing, any stimulus to either the vestibular or the visual receptors would completely swamp the response of the network.

Background firing is incorporated in our model by applying random pulses to a 'virtual' excitatory synapse. By altering the mean frequency of the random pulses, various levels of 'internal' homeostatic neuronal activity can be simulated. Physiologically, this pulse source yields results similar to an ion pump or other energy source, e.g., cAMP, in the biological system.

SECOND ORDER CONTROL IN NEURONS

Second order controls, i.e., the modulation of cellular parameters as the result of the past history of the neuron, appear in all biological neurons and play an essential role in their behavior. The ability of the cell to integrate its internal parameters (membrane potential in particular) over time turns out to be vital not only in understanding neural behavior but, more specifically, in providing the mechanisms that permit temporally specific associative learning. In the course of this investigation, a number of different second order control mechanisms, essential to the proper performance of the model, were elucidated. These mechanisms share a dependence on the time integral of the difference between the instantaneous membrane potential and some reference potential.
The particular second order control mechanisms incorporated into the model are: 1) overshoot in the light response of the photoreceptor B cell; 2) maintenance of a post-stimulus state in the B cell subsequent to prolonged stimulation; 3) modulation of the discharge resistance of the B cell; 4) fatigue in the statocysts and the optical ganglion; and 5) time dependent rebound in the optical ganglion. In addition to these second order control effects, the model required the inclusion of the observed shunting of competing inputs to the B cell during light exposure. The consequence of the interaction of these mechanisms with the basic model of the neurons in the four neuron network is the natural emergence of temporally specific associative learning.

OVERSHOOT IN THE LIGHT RESPONSE OF THE PHOTORECEPTOR B CELL

Under strong light exposure, the membrane potential of an isolated photoreceptor B cell experiences an initial 'overshoot' and then settles at a rapidly firing level far above the usual firing potential of the neuron (see Figure 4a). (We refer to the elevated membrane potential of the B cell during illumination as the "active firing membrane potential".) The initial overshoot (and slight ringing) observed in the potential of the biological B cell (Figure 4a) is the signature of an integral second order control system at work. This control was realized in the model by altering the quantity of charge removed from the cell (the discharge quantum) each time the cell fires. (The biological cell loses charge whenever it fires, and the quantity of charge lost varies with the membrane potential.) The discharge quantum is modulated by the definite integral of the difference between the membrane potential and the active firing membrane potential as follows:

    Q_discharge(t) = K × ∫₀ᵗ [Pot_membrane(τ) − Pot_active_firing(τ)] dτ

As the membrane potential rises above the active firing membrane potential, the value of the integral rises.
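Read discretely, the integral control above amounts to accumulating the membrane's excess over the active firing potential and scaling it by K. A minimal sketch follows; K, the time step, and the clamp at zero are illustrative assumptions, not the published constants.

```python
def discharge_quantum_trace(membrane_trace, active_firing_pot, K=0.1, dt=1.0):
    """Discrete form of
        Q_discharge(t) = K * integral_0^t [Pot_membrane - Pot_active_firing] d(tau).

    K, dt, and the non-negativity clamp are illustrative assumptions."""
    integral = 0.0
    quanta = []
    for v in membrane_trace:
        integral += (v - active_firing_pot) * dt
        quanta.append(K * max(integral, 0.0))  # quantum never negative
    return quanta

# Overshoot above the active firing potential grows the quantum; dipping
# below it shrinks the quantum again, damping the oscillation:
q = discharge_quantum_trace([1.2, 1.2, 1.2, 0.8, 0.8], active_firing_pot=1.0)
```

The growing-then-shrinking quantum is exactly the negative feedback the text describes: it retards depolarization during overshoot and releases its grip as the potential falls back.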
The magnitude of the discharge quantum rises with the integral. This increased discharge retards the membrane depolarization until, at some point, the size of the discharge quantum outstrips the charging effect of light on the membrane potential, and the potential falls. As the membrane potential falls below the active firing membrane potential, the magnitude of the discharge quantum begins to decrease (i.e., the value of the integral falls). This, in turn, causes the membrane potential to rise when the charging owing to light input once again overcomes the declining discharge quantum. This process repeats, with each subsequent swing in the membrane potential becoming smaller, until steady state is reached at the active firing membrane potential. The response of the model to simulated light exposure is shown in Figure 4b. Note that only a single overshoot is obvious and that steady state is rapidly reached.

MAINTAINING THE POST-STIMULUS STATE IN THE B CELL

During exposure to light, the B cell is strongly depolarized, and the membrane potential is maintained substantially above the firing threshold potential. When the light stimulus is removed, one would expect the cell to fire at its maximum rate so as to bring its membrane potential below the firing threshold (by releasing a discharge quantum with each output pulse).

Figure 4. Response of the B cell and the model to a light pulse. (a) Electrophysiological recording of the response of the photoreceptor B cell to light. Note the initial overshoot and one cycle of oscillation before the membrane potential settles at the "active firing potential." From: Alkon, D.L. Memory Traces in the Brain. Cambridge University Press, London (1987), p. 58. (b) Response of the model to a light pulse.
This does not happen in Hermissenda; there is, however, a change in the amount of charge released with each output pulse when the cell is highly depolarized. Note that the discharge quantum is modulated post-exposure in a manner analogous to that occurring during exposure, as discussed above: it is modulated by the magnitude of the membrane potential above the firing threshold. The result of this modulation is that the more positive the membrane potential, the smaller the discharge quantum (subject to a non-negative minimum value). The average value of the interval between pulses is also modulated by the magnitude of the discharge quantum. This modulation persists until the membrane potential returns to the firing threshold after cessation of light exposure. This mechanism is particular to the B cell. Upon cessation of vestibular stimulation, hair cells fire rapidly until their membrane potentials are below the firing threshold, just as the basic model predicts.

MODULATION OF DISCHARGE RESISTANCE IN THE B CELL

The duration of the post-stimulus membrane potential is determined by the magnitude of the discharge resistance of the B cell. In the model, the discharge resistance changes exponentially toward a predetermined maximum value, Rmax, when the membrane potential exceeds the firing threshold; Rbase is the baseline value. That is,

    Rdisch(t) = Rmax − [Rmax − Rdisch(t₀)] exp{−(t − t₀)/τ_rise}

when the membrane potential is above the firing threshold, and

    Rdisch(t) = Rbase − [Rbase − Rdisch(t₀)] exp{−(t − t₀)/τ_decay}

when the membrane potential is below the firing threshold.

FATIGUE IN STATOCYST HAIR CELLS

In Hermissenda, caudal cell activity actually decreases immediately after it has fired strongly, rather than returning to its normal background level of firing.
This effect, which results from the tendency of membrane potential to "fatigue" toward its resting potential, is incorporated into our model of the statocyst hair cells using the second order control mechanism previously described. That is, when the membrane potential of a hair cell is above the firing threshold (e.g., during vestibular stimulation), the shunting resistance of the cell decays exponentially with time toward zero for as long as the hair cell membrane potential remains above the firing threshold. This resistance is allowed to recover exponentially to its usual value when the membrane potential falls below the firing threshold.

FATIGUE OF THE OPTICAL GANGLION CELL DURING HYPERPOLARIZATION

In Hermissenda, the optical ganglion undergoes hyperpolarization at the beginning of the light pulse and/or vestibular stimulus. Contrary to what one might expect, it then recovers and is close to the firing threshold by the time the stimuli cease. This effect is incorporated into the model by fatigue induced by hyperpolarization. As above, this fatigue is implemented by allowing the shunting resistance in the ganglion cell to decrease exponentially toward a minimum value while the membrane potential is below the firing threshold by a prespecified amount. The value of the minimum shunting resistance is modulated by the magnitude of hyperpolarization (the potential difference between the membrane potential and the firing threshold). The shunting resistance recovers exponentially from its hyperpolarized value once the membrane potential returns to its firing threshold as a result of background firing input. The effect of this decrease is that the ganglion cell will temporarily remain relatively insensitive to the enhanced post-stimulus firing of the B cell until the shunting resistance recovers.
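The discharge and shunting resistances described in the last few sections all share one form: exponential relaxation toward a limiting value whose identity (and time constant) depends on which side of the firing threshold the membrane potential sits. A sketch of that shared form, reading the exponent as −(t − t₀)/τ so the resistance relaxes toward its limit rather than diverging; all constants are illustrative assumptions.

```python
import math

def relax_resistance(t, t0, r_at_t0, above_threshold,
                     r_max=10.0, r_base=1.0, tau_rise=5.0, tau_decay=20.0):
    """R(t) = R_limit - (R_limit - R(t0)) * exp(-(t - t0)/tau).

    The limit and time constant depend on whether the membrane potential
    is above the firing threshold. All parameter values are illustrative,
    not the published constants."""
    if above_threshold:
        # Relax upward toward r_max while above threshold.
        return r_max - (r_max - r_at_t0) * math.exp(-(t - t0) / tau_rise)
    # Relax back toward the baseline r_base while below threshold.
    return r_base - (r_base - r_at_t0) * math.exp(-(t - t0) / tau_decay)
```

At t = t₀ the function returns the starting resistance; for large t it approaches Rmax (above threshold) or Rbase (below), which is the behavior the two equations in the text describe.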
Once the membrane potential of the ganglion cell recovers, the pulses from the ganglion cell will excite the B cell and maintain its prolongation effect (see Figure 1). The modulation of the minimum shunting resistance by the magnitude of hyperpolarization introduces the first stimulus-pairing-dependent component in the post-stimulus behavior of the B cell, because the degree of hyperpolarization is higher under paired stimulus conditions.

TIME DEPENDENT REBOUND IN THE OPTICAL GANGLION CELL

Experimental evidence with Hermissenda indicates that the rebound of the optical ganglion is much stronger than is possible if the usual background activity were the sole cause of this rebound. Furthermore, rebound in the ganglion cell is stronger when the light exposure precedes the vestibular stimulus by the optimal inter-stimulus interval (ISI). Since the ganglion cell has no excitatory input synapses, the increased rebound must result from a mechanism internal to the cell that heightens its background activity during pairing at the optimal ISI. The mechanism must be time dependent and must be able to distinguish between the inhibitory signal which comes from the B cell and that which comes from the caudal hair cell.

To achieve this result, two mechanisms must interact. The first mechanism enhances the inhibitory effect of the caudal hair cell on the ganglion cell. This "caudal inhibition enhancer", CIE, is triggered by pulses from the B cell. The CIE has the property that it rises exponentially toward 1.0 when a pulse is seen at the synapse from the B cell and decays toward zero when no such pulses are received. The second mechanism provides an increase in the background activity of the optic ganglion when the cell is hyperpolarized; it is a fatigue effect at the synapse from the caudal hair cell.
This synapse specific fatigue (SSF) rises toward 1.0 as any of the inhibitory synapses onto the ganglion receive a pulse, and decays toward zero when there is no incoming inhibitory pulse. Note that this second order control causes fatigue at the synapse between the caudal hair cell and the ganglion whenever any inhibitory pulse is incident on the ganglion. Control of the ISI resides in the interaction of these two mechanisms. The efficacy of an inhibitory pulse from the caudal cell upon the ganglion cell is determined by the product of CIE and (1 − SSF), the "ISI convolver."

With light exposure alone, or when caudal stimulation follows light, the CIE rises toward 1.0 along with the SSF. Initially, (1 − SSF) is close to 1.0 and the CIE term dominates the convolver function. As CIE approaches 1.0, the (1 − SSF) term brings the convolver toward 0. At some intermediate time, the ISI convolver is at a maximum. When vestibular stimulus precedes light exposure, the SSF rises at the start of the vestibular stimulus while the CIE remains at 0. When light exposure then begins, the CIE rises, but by then (1 − SSF) is approaching zero, and the convolver does not reach any significant value.

The result of this interaction is that when caudal stimulation follows light by the optimal ISI, the inhibition of the ganglion will be maximal. This causes heightened background activity in the ganglion. Upon cessation of stimulus, the heightened background activity will express itself by rapidly depolarizing the ganglion membrane, thereby bringing about the desired rebound firing.

SHUNTING OF THE PHOTORECEPTOR B CELL DURING EXPOSURE TO LIGHT

In experiments in which light and vestibular stimulus are paired, both the B cell and the caudal statocyst hair cell fire strongly. There is an inhibitory synapse from the hair cell to the B cell (see Figure 1).
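The CIE and (1 − SSF) interaction described in the rebound discussion above can be sketched numerically. This is a deliberately simplified reading: a single shared time constant, unit-step drive, and the choice to evaluate the convolver only when a caudal pulse actually arrives are all assumptions of this sketch, not the published model.

```python
def caudal_efficacy(b_times, caudal_times, t_end, dt=1.0, tau=10.0):
    """Peak efficacy of caudal inhibition on the ganglion, modeled as
    CIE * (1 - SSF) evaluated when a caudal pulse arrives.

    CIE is driven toward 1 by B-cell pulses only; SSF is driven toward 1
    by any inhibitory pulse (B-cell or caudal). The shared tau and the
    first-order drive are illustrative simplifications."""
    cie = ssf = 0.0
    peak = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        b_on = t in b_times
        c_on = t in caudal_times
        cie += (dt / tau) * ((1.0 if b_on else 0.0) - cie)
        ssf += (dt / tau) * ((1.0 if (b_on or c_on) else 0.0) - ssf)
        if c_on:  # an inhibitory caudal pulse is incident on the ganglion
            peak = max(peak, cie * (1.0 - ssf))
    return peak

# Caudal (turbulence) pulses following B-cell (light) pulses are far more
# effective than the reverse ordering, reproducing the ISI asymmetry:
light_first = caudal_efficacy(set(range(0, 20)), set(range(20, 40)), 60)
turb_first = caudal_efficacy(set(range(20, 40)), set(range(0, 20)), 60)
```

When turbulence comes first, CIE is still zero while the caudal pulses arrive, so the convolver never opens; light first leaves CIE high and SSF not yet saturated, giving a substantial peak.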
Without shunting, the hair cell output pulses interfere with the effect of light on the B cell and prevent it from arriving at a level of depolarization necessary for learning. This is contrary to experimental data, which show that the response of the B cell to light (during the light pulse) is constant whether or not vestibular stimulus is present. Biological experiments have determined that while the light is on, the B cell shunting resistance is very low, making the cell insensitive to incoming pulses.

Figures 5-8 summarize the current performance of the model. Figures 6, 7, and 8 present the response to a light pulse of the untrained, sham trained (unpaired light and turbulence), and trained (paired light and turbulence) model of the four neuron network.

DISCUSSION

The model developed here is more complex than those generally employed in neural network research because the mechanisms invoked are primarily second order controls. Furthermore, while we operated under a paradigm of minimal commitment (no new features unless needed), the functional requirements of the network demanded that differentiating features be added to the cells. The model reproduces the electrophysiological measurements in Hermissenda that are indicative of associative learning. These results call into question the notion that linear and quasi-linear summing elements are capable of emulating neural activity and the learning inherent in neural systems.

This preliminary modeling effort has already resulted in a greater understanding of biological systems by 1) modeling experiments which cannot be performed in vivo, 2) testing theoretical constructs on the model, and 3) developing hypotheses and proposing neurophysiological experiments. The effort has also significantly assisted in the development of neural network algorithms by uncovering the necessary and sufficient components for learning at the neurophysiological level.
Acknowledgements

The authors wish to acknowledge the contribution of Peter Tchoryk, Jr. for assistance in performing the simulation experiments and Kim T. Blackwell for many fruitful discussions of the work and the manuscript. This work was supported in part by ONR contract N00014-88-K-0659.

References

1. Alkon, D.L. Memory Traces in the Brain. Cambridge University Press, London, 1987, and publications referenced therein.
2. Alkon, D.L. Learning in a Marine Snail. Scientific American 249:70-84 (1983).
3. Alkon, D.L., Sakakibara, M., Forman, M., Harrigan, R., Lederhendler, J., and Farley, J. Reduction of two voltage-dependent K+ currents mediates retention of learning association. Behavioral and Neural Biol. 44:278-300 (1985).
4. Kuffler, S.W., and Nicholls, J.G. From Neuron to Brain: A Cellular Approach to the Function of the Nervous System. Sinauer Assoc., Publ., Sunderland, MA (1986).
5. Zipser, D., and Rabin, D. P3: A Parallel Network Simulating System. In Parallel Distributed Processing, Vol. I, Chapter 13. Rumelhart, McClelland, and the PDP Group, Eds. MIT Press (1986).
6. Buhmann, J., and Schulten, K. Influence of Noise on the Function of a "Physiological" Neural Network. Biological Cybernetics 56:313-328 (1987).

Figure 5. Prolongation of B cell post-stimulus membrane depolarization consequent to learning (exposure to paired stimuli). From: West, A., Barnes, E., and Alkon, D.L. Primary changes of voltage responses during retention of associative learning. J. of Neurophysiol. 48:1243-1255 (1982). Note the increase in size of the shaded area, which is the effect of learning.

Figure 6. Naive model: response of untrained ("control" in Fig. 5) model to light.

Figure 7. Sham training: response of model to light following presentation of 26 randomly alternating ("unpaired" in Fig. 5) light and turbulence inputs.
Figure 8. Trained network: response of the network to light following presentation of 13 light and turbulence inputs at the optimum ISI. The top trace of this figure is the B cell response to light alone. Note that an increased firing frequency and active membrane potential is maintained after the cessation of light, compared to Figures 6 and 7. This is analogous to what may be seen in Hermissenda, Figure 5. Note also that the optic ganglion and the cephalad hair cell (traces 2 and 3 of this figure) show a decreased post-stimulus firing rate compared with that of Figures 6 and 7.
STATISTICAL PREDICTION WITH KANERVA'S SPARSE DISTRIBUTED MEMORY

David Rogers
Research Institute for Advanced Computer Science
MS 230-5, NASA Ames Research Center
Moffett Field, CA 94035

ABSTRACT

A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with 'genetic algorithms', and a method for improving the capacity of SDM even when used as an associative memory.

OVERVIEW

This work is the result of studies involving two seemingly separate topics that proved to share a common framework. The first topic, statistical prediction, is the task of associating extremely large perceptual state vectors with future events. The second topic, over-capacity in Kanerva's sparse distributed memory (SDM), is a study of the computation done in an SDM when presented with many more associations than its stated capacity. I propose that in conditions of over-capacity, where the associative-memory behavior of an SDM breaks down, the processing performed by the SDM can be used for statistical prediction. A mathematical study of the prediction problem suggests a variant of the standard SDM architecture. This variant not only behaves as a statistical predictor when the SDM is filled beyond capacity but is shown to double the capacity of an SDM when used as an associative memory.

THE PREDICTION PROBLEM

The earliest living creatures had an ability, albeit limited, to perceive the world through crude senses.
This ability allowed them to react to changing conditions in the environment; for example, to move towards (or away from) light sources. As nervous systems developed, learning was possible; if food appeared simultaneously with some other perception, perhaps some odor, a creature could learn to associate that smell with food. As the creatures evolved further, a more rewarding type of learning became possible. Some perceptions, such as the perception of pain or the discovery of food, are very important to an animal. However, by the time the perception occurs, damage may already be done, or an opportunity for gain missed. If a creature could learn to associate current perceptions with future ones, it would have a much better chance to do something about it before damage occurs. This is the prediction problem.

The difficulty of the prediction problem is in the extremely large number of possible sensory inputs. For example, a simple animal might have the equivalent of 1000 bits of sensory data at a given time; in this case, the number of possible inputs is greater than the number of atoms in the known universe! In essence, it is an enormous search problem: a living creature must find the subregions of the perceptual space which correlate with the features of interest. Most of the gigantic perceptual space will be uncorrelated, and hence uninteresting.

THE OVERCAPACITY PROBLEM

An associative memory is a memory that can recall data when addressed 'close to' an address where data were previously stored. A number of designs for associative memories have been proposed, such as Hopfield networks (Hopfield, 1986) or the nearest-neighbor associative memory of Baum, Moody, and Wilczek (1987). Memory-related standards such as capacity are usually selected to judge the relative performance of different models. Performance is severely degraded when these memories are filled beyond capacity.
Kanerva's sparse distributed memory is an associative memory model developed from the mathematics of high-dimensional spaces (Kanerva, 1988) and is related to the work of David Marr (1969) and James Albus (1971) on the cerebellum of the brain. (For a detailed comparison of SDM to random-access memory, to the cerebellum, and to neural networks, see (Rogers, 1988b).) Like other associative memory models, it exhibits non-memory behavior when near or over capacity. Studies of capacity are often over-simplified by the common assumption of uncorrelated random addresses and data. The capacity of some of these memories, including SDM, is degraded if the memory is presented with correlated addresses and data. Such correlations are likely if the addresses and data are from a real-world source. Thus, understanding the over-capacity behavior of an SDM may lead to better procedures for storing correlated data in an associative memory.

SPARSE DISTRIBUTED MEMORY

Sparse distributed memory can be best illustrated as a variant of random-access memory (RAM). The structure of a twelve-location SDM with ten-bit addresses and ten-bit data is shown in Figure 1 (Kanerva, 1988).

Figure 1. Structure of a Sparse Distributed Memory. [The figure shows a reference address register, a column of random location addresses with their Hamming distances and select bits relative to the reference address and radius, the array of data counters, the columnwise sums, and the output data obtained by thresholding the sums at 0.]

A memory location is a row in this figure. The location addresses are set to random addresses. The data counters are initialized to zero. All operations begin with addressing the memory; this entails finding the Hamming distance between the reference address and each of the location addresses.
If this distance is less than or equal to the Hamming radius, the select-vector entry is set to 1, and that location is termed selected. The ensemble of such selected locations is called the selected set. Selection is noted in the figure as non-gray rows. A radius is chosen so that only a small percentage of the memory locations are selected for a given reference address. (Later, we will refer to the fact that a memory location defines an activation set of addresses in the address space; the activation set corresponding to a location is the set of reference addresses which activate that memory location. Note the reciprocity between the selected set corresponding to a given reference address, and the activation set corresponding to a given location.)

When writing to the memory, all selected counters beneath elements of the input data equal to 1 are incremented, and all selected counters beneath elements of the input data equal to 0 are decremented. This completes a write operation. When reading from the memory, the selected data counters are summed columnwise into the register of sums. If the value of a sum is greater than or equal to zero, we set the corresponding bit in the output data to 1; otherwise, we set the bit in the output data to 0. (When reading, the contents of the input data are ignored.)

This example makes clear that a datum is distributed over the data counters of the selected locations when writing, and that the datum is reconstructed during reading by averaging the sums of these counters. However, depending on what additional data were written into some of the selected locations, and depending on how these data correlate with the original data, the reconstruction may contain noise.

THE BEHAVIOR OF AN SDM WHEN AT OVER-CAPACITY

Consider an SDM with a 1,000-bit address and a 1-bit datum.
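The addressing, write, and read rules just described can be sketched directly. The sizes and radius below are illustrative toy values, far smaller than the dimensions Kanerva's analysis assumes.

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))

class SDM:
    """Toy sparse distributed memory following the rules in the text.

    Sizes and the Hamming radius are illustrative; a real SDM uses much
    larger addresses and a radius selecting a small fraction of locations."""

    def __init__(self, n_locations=200, n_bits=64, radius=26, seed=0):
        rng = random.Random(seed)
        self.addresses = [[rng.randint(0, 1) for _ in range(n_bits)]
                          for _ in range(n_locations)]
        self.counters = [[0] * n_bits for _ in range(n_locations)]
        self.radius = radius

    def _selected(self, ref):
        """Indices of locations within the Hamming radius of ref."""
        return [i for i, a in enumerate(self.addresses)
                if hamming(ref, a) <= self.radius]

    def write(self, ref, data):
        """Increment counters under 1-bits, decrement under 0-bits."""
        for i in self._selected(ref):
            for j, bit in enumerate(data):
                self.counters[i][j] += 1 if bit else -1

    def read(self, ref):
        """Sum selected counters columnwise and threshold at zero."""
        sums = [0] * len(self.addresses[0])
        for i in self._selected(ref):
            for j, c in enumerate(self.counters[i]):
                sums[j] += c
        return [1 if s >= 0 else 0 for s in sums]
```

With a single association stored, reading back at the write address recovers the datum exactly, since every selected counter votes the same way; noise appears only as further, overlapping associations are written.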
In this memory, we are storing associations that are samples of some binary function f on the space S of all possible addresses. After storing only a few associations, each data counter will have no explicit meaning, since the data values stored in the memory are distributed over many locations. However, once a sufficiently large number of associations are stored in the memory, the data counter gains meaning: when appropriately normalized to the interval [0, 1], it contains a value which is the conditional probability that the data bit is 1, given that its location was selected. This is shown in Figure 2.

Figure 2. The Normalized Content of a Data Counter is the Conditional Probability of the Value of f Being Equal to 1, Given that the Reference Addresses are Restricted to the Sphere L. [In the figure: S is the space of all possible addresses; L is the set of addresses in S which activate a given memory location; f is a binary function on S that we want to estimate using the memory; the data counter for L contains the average value of f over L, which equals P(f(X) = 1 | X ∈ L).]

In the prediction problem, we want to find activation sets of the address space that correlate with some desired feature bit. When filled far beyond capacity, the individual memory locations of an SDM are collecting statistics about individual subregions of the address space. To estimate the value of f at a given address, it should be possible to combine the conditional probabilities in the data counters of the selected memory locations to make a "best guess".

In the prediction problem, S is the space of possible sensory inputs. Since most regions of S have no relationship with the datum we wish to predict, most of the memory locations will be in non-informative regions of the address space. Associative memories are not useful for the prediction problem because the key part of the problem is the search for subregions of the address space that are informative.
Due to capacity limitations and the extreme size of the address space, memories fill to capacity and fail before enough samples can be written to identify the useful subregions.

PREDICTING THE VALUE OF f

Each data counter in an SDM can be viewed as an independent estimate of the conditional probability of f being equal to 1 over the activation set defined by the counter's memory location. If a point of S is contained in multiple activation sets, each with its own probability estimate, how do we combine these estimates? More directly, when does knowledge of membership in some activation set help us estimate f better?

Assume that we know P(f(X) = 1), which is the average value of f over the entire space S. If a data counter in memory location L has the same conditional probability as P(f(X) = 1), then knowing that an address is contained in the activation set defining L gives no additional information. (This is what makes the prediction problem hard: most activation sets in S will be uncorrelated with the desired datum.)

When is a data counter useful? If a data counter contains a conditional probability far away from the probability for the entire space, then it is highly informative. The more committed a data counter is one way or the other, the more weight it should be given. Ambivalent data counters should be given less weight. Figure 3 illustrates this point. Two activation sets of S are shown; the numbers 0 and 1 are the values of f at points in these sets. (Assume that all the points in the activation sets are in these diagrams.) Membership in the left activation set is non-informative, while membership in the right activation set is highly informative. Most activation sets are neither as bad as the left example nor as good as the right example; instead, they are intermediate to these two cases. We can calculate the relative weights of different activation sets if we can estimate the relative signal/noise ratio of the sets.
Statistical Prediction with Kanerva's Sparse Distributed Memory 591

• In the left example, the mean of the activation set is the same as the mean of the entire space: P( ζ(X) = 1 | X ∈ L ) = P( ζ(X) = 1 ). Membership in this activation set gives no information; the opinion of such a set should be given zero weight.
• In the right example, the mean of the activation set is 1: P( ζ(X) = 1 | X ∈ L ) = 1. Membership in this activation set completely determines the value of a point; the opinion of such a set should be given 'infinite' weight.

Figure 3. The Predictive Value of an Activation Set Depends on How Much New Information it Gives About the Function ζ.

To obtain a measure of the amount of signal in an activation set L, imagine segregating the points of L into two sectors, which I call the informative sector and the non-informative sector. (Note that this partition will not be unique.) Include in the non-informative sector the largest number of points possible such that the percentage of 1's and 0's equals the corresponding percentages in the overall population of the entire space. The remaining points, which constitute the informative sector, will contain all 0's or all 1's. The relative size r of the informative sector compared to L constitutes a measure of the signal. The relative size of the non-informative sector to L is (1 − r), and is a measure of the noise. Such a conceptual partition is shown in figure 4. Once the signal and the noise of an activation set is estimated, there are known methods for calculating the weight that should be given to this set when combining with other sets (Rogers, 1988a). That weight is r / (1 − r)². Thus, given the conditional probability and the global probability, we can calculate the weight which should be given to that data counter when combined with other counters.

• Informative sector: P( ζ(X) = 1 | X ∈ L_inf ) = 0 or 1
• Non-informative sector: P( ζ(X) = 1 | X ∈ L_non ) = P( ζ(X) = 1 )
• r = [ P( ζ(X) = 1 | X ∈ L ) − P( ζ(X) = 1 ) ] / [ 1 − P( ζ(X) = 1 ) ]

Figure 4.
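The signal estimate r and the resulting weight r / (1 − r)² can be computed directly from a counter's conditional probability and the global probability. A hedged sketch (function names are mine, and the symmetric case for counters below the global rate is my extension of the partition argument):

```python
def informative_fraction(p_local: float, p_global: float) -> float:
    """Relative size r of the informative sector of an activation set L.

    From the partition argument: L mixes a non-informative sector whose
    mean is p_global with an informative sector of all 1's (or all 0's
    when p_local < p_global), so p_local = (1 - r) * p_global + r.
    """
    if p_local >= p_global:
        return (p_local - p_global) / (1.0 - p_global)
    return (p_global - p_local) / p_global  # informative sector is all 0's

def counter_weight(p_local: float, p_global: float) -> float:
    """Weight r / (1 - r)^2 given to a data counter when combining."""
    r = informative_fraction(p_local, p_global)
    if r >= 1.0:
        return float("inf")  # fully determined set: 'infinite' weight
    return r / (1.0 - r) ** 2
```

A counter matching the global rate gets zero weight, while a counter pinned at 0 or 1 gets unbounded weight, matching the two examples of Figure 3.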
An Activation Set Defined by a Memory Location can be Partitioned into Informative and Non-informative Sectors.

EXPERIMENTAL

The given weighting scheme was used in the standard SDM to test its effect on capacity. In the case of random addresses and data, the weights doubled the capacity of the SDM. Even greater savings are likely with correlated data. These results are shown in figure 5.

Figure 5. Number of Bitwise Errors vs. Number of Writes in a 256-bit Address, 256-bit Data, 1000-Location Sparse Distributed Memory. The Left is the Standard SDM; the Right is the Statistically-Weighted SDM. Graphs Shown are Averages of 16 Runs.

In deriving the weights, it was assumed that the individual data counters would become meaningful only when a sufficiently large number of associations were stored in the memory. This experiment suggests that even a small number of associations is sufficient to benefit from statistically-based weighting. These results are important, for they suggest that this scheme can be used in an SDM in the full continuum, from low-capacity memory-based uses to over-capacity statistical-prediction uses.

CONCLUSIONS

Studies of SDM under conditions of over-capacity, in combination with the new problem of statistical prediction, suggest a new range of uses for SDM. By weighting the locations differently depending on their contents, we also have discovered a technique for improving the capacity of the SDM even when used as a memory. This weighting scheme opens new possibilities for learning; for example, these weights can be used to estimate the fitness of the locations for learning algorithms such as Holland's genetic algorithms.
Since the statistical prediction problem is primarily a problem of search over extremely large address spaces, such techniques would allow redistribution of the memory locations to regions of the address space which are maximally useful, while abandoning the regions which are non-informative. The combination of learning with memory is a potentially rich area for future study. Finally, many studies of associative memories have explicitly assumed random data in their studies; most real-world applications have non-random data. This theory explicitly assumes, and makes use of, correlations between the associations given to the memory. Assumptions such as randomness, which are useful in mathematical studies, must be abandoned if we are to apply these tools to real-world problems.

Acknowledgments

This work was supported in part by Cooperative Agreements NCC 2-408 and NCC 2-387 from the National Aeronautics and Space Administration (NASA) to the Universities Space Research Association (USRA). Funding related to the Connection Machine was jointly provided by NASA and the Defense Advanced Research Projects Agency (DARPA). All agencies involved were very helpful in promoting this work, for which I am grateful. The entire RIACS staff and the SDM group has been supportive of my work. Louis Jaeckel gave important assistance which guided the early development of these ideas. Bruno Olshausen was a vital sounding-board for this work. Finally, I'll get mushy and thank those who supported my spirits during this project, especially Pentti Kanerva, Rick Claeys, John Bogan, and last but of course not least, my parents, Philip and Cecilia. Love you all.

References

Albus, J. S., "A theory of cerebellar functions," Math. Bio., 10, pp. 25-61 (1971).

Baum, E., Moody, J., and Wilczek, F., "Internal representations for associative memory," Biological Cybernetics (1987).

Holland, J.
H., Adaptation in Natural and Artificial Systems, Ann Arbor: University of Michigan Press (1975).

Holland, J. H., "Escaping brittleness: the possibilities of general-purpose learning algorithms applied to parallel rule-based systems," in Machine Learning: An Artificial Intelligence Approach, Volume II, R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, eds. Los Altos, California: Morgan Kaufmann (1986).

Hopfield, J. J., "Neural networks and physical systems with emergent collective computational abilities," Proc. Nat'l Acad. Sci. USA, 79, pp. 2554-8 (1982).

Kanerva, Pentti, "Self-propagating Search: A Unified Theory of Memory," Center for the Study of Language and Information Report No. CSLI-84-7 (1984).

Kanerva, Pentti, Sparse Distributed Memory, Cambridge, Mass.: MIT Press (1988).

Marr, D., "The cortex of the cerebellum," J. Physiol., 202, pp. 437-470 (1969).

Rogers, David, "Using data-tagging to improve the performance of Kanerva's sparse distributed memory," Research Institute for Advanced Computer Science Technical Report 88.1, NASA Ames Research Center (1988a).

Rogers, David, "Kanerva's sparse distributed memory: an associative memory algorithm well-suited to the Connection Machine," Research Institute for Advanced Computer Science Technical Report 88.32, NASA Ames Research Center (1988b).
1988
LEARNING SEQUENTIAL STRUCTURE IN SIMPLE RECURRENT NETWORKS

David Servan-Schreiber, Axel Cleeremans, and James L. McClelland
Departments of Computer Science and Psychology
Carnegie Mellon University
Pittsburgh, PA 15213

ABSTRACT

We explore a network architecture introduced by Elman (1988) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time-step t−1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. Cluster analyses of the hidden-layer patterns of activation showed that they encode prediction-relevant information about the entire path traversed through the network. We illustrate the phases of learning with cluster analyses performed at different points during training.

Several connectionist architectures that are explicitly constrained to capture sequential information have been developed. Examples are Time Delay Networks (e.g., Sejnowski & Rosenberg, 1986) -- also called 'moving window' paradigms -- or algorithms such as back-propagation in time (Rumelhart, Hinton & Williams, 1986). Such architectures use explicit representations of several consecutive events, if not of the entire history of past inputs. Recently, Elman (1988) has introduced a simple recurrent network (SRN) that has the potential to master an infinite corpus of sequences with the limited means of a learning procedure that is completely local in time (see Figure 1).

Figure 1. The simple recurrent network (Elman, 1988)

In the SRN, the pattern of activation on the hidden units at time t−1, together with the new input pattern, is allowed to influence the pattern of activation at time t.
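A minimal numpy sketch of one SRN time step (the six letter units and three hidden units match the text, but the weights, initialization, and function names here are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

N_IN, N_HIDDEN, N_OUT = 6, 3, 6  # six letter units in and out, three hidden units

W_in = rng.normal(0.0, 0.5, (N_HIDDEN, N_IN))       # input -> hidden
W_ctx = rng.normal(0.0, 0.5, (N_HIDDEN, N_HIDDEN))  # context -> hidden
W_out = rng.normal(0.0, 0.5, (N_OUT, N_HIDDEN))     # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def srn_step(letter, context):
    """One time step: hidden(t) depends on input(t) and on hidden(t-1),
    which was copied onto the context units."""
    hidden = sigmoid(W_in @ letter + W_ctx @ context)
    output = sigmoid(W_out @ hidden)
    return output, hidden  # the caller copies `hidden` to the context units

context = np.full(N_HIDDEN, 0.5)  # context reset at the start of each string
begin = np.eye(N_IN)[0]           # local representation of the begin symbol 'B'
out, context = srn_step(begin, context)
```

Only the forward connections shown here are trained; the context units are a verbatim copy of the previous hidden state, so no gradient flows backward through time.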
This is achieved by copying the pattern of activation on the hidden layer at time t−1 to a set of input units -- called the 'context units' -- at time t. The forward connections in the network are subject to training via back-propagation, but there is no back-propagation through time. In this paper, we show that the SRN can learn to mimic closely a finite state automaton, both in its behavior and in its state representations. In particular, we show that it can learn to process an infinite corpus of strings based on experience with a finite set of training exemplars. We then describe the phases through which the appropriate internal representations are discovered during training.

MASTERING A FINITE STATE GRAMMAR

In our first experiment, we asked whether the network could learn the contingencies implied by a small finite state grammar (see Figure 2). The network was presented with strings derived from this grammar, and was required to try to predict the next letter at every step. These predictions are context dependent since each letter appears twice in the grammar and is followed in each case by different successors. A single unit on the input layer represented a given letter (six input units in total; five for the letters and one for a begin symbol 'B'). Similar local representations were used on the output layer (with the 'begin' symbol being replaced by an end symbol 'E'). There were three hidden units.

Figure 2. The small finite-state grammar (Reber, 1967)

Training. On each of 60,000 training trials, a string was generated from the grammar, starting with 'B'. Successive arcs were selected randomly from the 2 possible continuations with a probability of 0.5. Each letter was then presented sequentially to the network. The activations of the context units were reset to 0.5 at the beginning of each string.
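The training-string generation just described can be sketched as follows. The transition table is the standard Reber (1967) grammar, written here as my own reconstruction; it is consistent with the example strings quoted elsewhere in the text:

```python
import random

# Transition table of the small finite-state grammar (Reber, 1967):
# state -> list of (letter, next_state), two continuations per state.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}
FINAL = 5

def generate_string(rng):
    """Generate one training string, choosing each arc with probability 0.5."""
    state, letters = 0, ["B"]
    while state != FINAL:
        letter, state = rng.choice(GRAMMAR[state])
        letters.append(letter)
    letters.append("E")
    return "".join(letters)

rng = random.Random(0)
samples = [generate_string(rng) for _ in range(5)]
```

Because the two self-loop arcs can be taken repeatedly, string lengths vary widely even though the shortest grammatical strings have only three letters between 'B' and 'E'.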
After each letter, the error between the network's prediction and the actual successor specified by the string was computed and back-propagated. The 60,000 randomly generated strings ranged from 3 to 30 letters (mean: 7; sd: 3.3).

Performance. Three tests were conducted. First, we examined the network's predictions on a set of 70,000 random strings. During this test, the network is first presented with the start signal, and one of the five letters or E is then selected at random as a successor. If that letter is predicted by the network as a legal successor (i.e., activation is above 0.3 for the corresponding unit), it is then presented to the input layer on the next time step, and another letter is drawn at random as its successor. This procedure is repeated as long as each letter is predicted as a legal successor until the end signal is selected as the next letter. The procedure is interrupted as soon as the actual successor generated by the random procedure is not predicted by the network, and the string of letters is then considered 'rejected'. A string is considered 'accepted' if all its letters have been predicted as possible continuations up to, and including, the end signal. Of the 70,000 random strings, 0.3% were grammatical and 99.7% were ungrammatical. The network performed flawlessly, accepting all the grammatical strings and rejecting all the others. In a second test, we presented the network with 20,000 strings generated at random from the grammar, i.e., all these strings were grammatical. Using the same criterion as above, all of these strings were correctly 'accepted'. Finally, we constructed a set of very long grammatical strings -- more than 100 letters long -- and verified that at each step the network correctly predicted all the possible successors (activations above 0.3) and none of the other letters in the grammar.

Analysis of internal representations.
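The acceptance criterion used in these tests can be sketched as below; `predict` stands for the trained network and is an assumed interface, not part of the paper:

```python
def accepted(string, predict, threshold=0.3):
    """Apply the text's criterion: every letter of `string` (which ends in
    'E') must be predicted as a legal successor of the prefix before it.

    `predict(prefix)` is assumed to return a dict mapping each letter to
    the activation of the corresponding output unit after the prefix has
    been presented.
    """
    for i in range(1, len(string)):
        prefix, nxt = string[:i], string[i]
        if predict(prefix).get(nxt, 0.0) <= threshold:
            return False  # rejected at the first unpredicted letter
    return True

# A dummy predictor that marks every letter legal accepts any string.
always_legal = lambda prefix: {c: 1.0 for c in "TSXPVE"}
```

A real run would substitute the SRN's output activations for the dummy predictor and stream the random successors one at a time, exactly as the text describes.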
What kind of internal representations have developed over the set of hidden units that allow the network to associate the proper predictions to intrinsically ambiguous letters? One way to answer this question is to record the hidden units' activation patterns generated in response to the presentation of individual letters in different contexts. These activation vectors can then be used as input to a cluster analysis program. Figure 3.A. shows the results of such an analysis conducted on a small random set of grammatical strings. The patterns of activation are grouped according to the nodes of the grammar: all the patterns that are used to predict the successors of a given node are grouped together, independently of the current letter. This observation sheds some light on the behavior of the network: at each point in a sequence, the pattern of activation stored over the context units provides information about the current node in the grammar. Together with information about the current letter (represented on the input layer), this contextual information is used to produce a new pattern of activation over the hidden layer that uniquely specifies the next node. In that sense, the network closely approximates the finite-state automaton that would encode the grammar from which the training exemplars were derived. However, a closer look at the cluster analysis reveals that within a cluster corresponding to a particular node, patterns are further divided according to the path traversed before the node is reached. For example, looking at the bottom cluster -- node #5 -- patterns produced by a 'VV', 'PS', 'XS' or 'SXS' ending are grouped separately:

Figure 3. A. Hierarchical cluster analysis of the hidden unit activation patterns after 60,000 presentations of strings generated at random from the finite-state grammar. B. Cluster analysis of the H.U.
activation patterns following 2,000 epochs of training on a set of 22 strings with a maximum length of eight letters.

they are more similar to each other than to the abstract prototype of node #5. This tendency to preserve information about the path is not a characteristic of traditional finite-state automata.

ENCODING PATH INFORMATION

In a different set of experiments, we asked whether the SRN could learn to use the information about the path that is encoded in the hidden units' patterns of activation. In one of these experiments, we tested whether the network could master length constraints. When strings generated from the small finite-state grammar may only have a maximum of 8 letters, the prediction following the presentation of the same letter in position number six or seven may be different. For example, following the sequence 'TSSSXXV', 'V' is the seventh letter and only another 'V' would be a legal successor. In contrast, following the sequence 'TSSXXV', both 'V' and 'P' are legal successors. A network with 15 hidden units was trained on a small set of length-limited (max. 8 letters) grammatical strings. It was able to use the small activation differences present over the context units -- which are due to the slightly different sequences presented -- to master contingencies such as those illustrated above (see Table 1).

Table 1. Activation of each output unit following the presentation of 'V' as the 6th or 7th letter in the string

              T     S     P     X     V     E
    tssxxV    0.0   0.0   0.54  0.02  0.48  0.0
    tsssxxV   0.0   0.0   0.0   0.0   0.97  0.0

A cluster analysis of all the patterns of activation on the hidden layer generated by each letter in each sequence demonstrates how the influence of the path is reflected in these patterns (see Figure 3.B.)*. We labeled the arcs according to the letter being presented (the 'current letter') and its position in the grammar defined by Reber.
Thus 'V1' refers to the first 'V' in the grammar and 'V2' to the second 'V', which immediately precedes the end of the string. 'Early' and 'Late' refer to whether the letter occurred early or late in the sequence (for example, in 'PT..' 'T2' occurs early; in 'PVPXT..' it occurs late). Finally, in the left margin we indicated what predictions the corresponding patterns yield on the output layer (e.g., the hidden unit pattern generated by 'BEGIN' predicts 'T' or 'P'). From the figure, it can be seen that the patterns are grouped according to three distinct principles: (1) according to similar predictions, (2) according to similar letters presented on the input units, and (3) according to similar paths. These factors do not necessarily overlap, since several occurrences of the same letter in a sequence usually imply different predictions, and since similar paths also lead to different predictions depending on the current letter. For example, the top cluster in the figure corresponds to all occurrences of the letter 'V' and is further subdivided among 'V1' and 'V2'.

* Information about the leaves of the cluster analyses in this and the remaining figures is available in Servan-Schreiber, Cleeremans and McClelland (1988).

The 'V1' cluster is itself further divided between groups where 'V1' occurs early in the sequence (e.g., 'pV...') and groups where it occurs later (e.g., 'tssxxV...' and 'pvpxV...'). Note that the division according to the path does not necessarily correspond to different predictions. For example, 'V2' always predicts 'END', and always with maximum certainty. Nevertheless, sequences up to 'V2' are divided according to the path traversed.

PHASES OF LEARNING

How can information about the path be progressively encoded in the hidden layer patterns of activation?
To clarify how the network learns to use the context of preceding letters in a sequence, we will illustrate the different phases of learning with cluster analyses of the hidden layer patterns generated at each phase. To make the analyses simpler, we used a smaller training set than the training set mentioned previously. The corresponding finite-state grammar is shown in Figure 4. In this simpler grammar, the main difference -- besides the reduced number of patterns -- is that the letters 'P' and 'T' appear only once.

Figure 4. The reduced finite-state grammar from which 12 strings were generated for training

Discovering letters. At epoch 0, before the network has received any training, the hidden unit patterns clearly show an organization by letter: to each letter corresponds an individual cluster. These clusters are already subdivided according to preceding sequences -- the 'path'. This fact illustrates how a pattern of activation on the context units naturally tends to encode the path traversed so far, independently of any error-correcting procedure. The average distance between the different patterns -- the 'contrast' as it were -- is nonetheless rather small; the scale only goes up to 0.6 (see Figure 5.A.)**.

** In all the following figures, the scale was automatically determined by the cluster analysis program. It is important to keep this in mind when comparing the figures to each other.

Figure 5. Cluster analyses of the H.U. activation patterns obtained with the reduced set of strings: A. Before training. B. After 100 epochs of training. C. After 700 epochs of training.

But this is due to the very small initial random values of the weights from the input and context layers to the hidden layer.
Larger initial values would enhance the network's tendency to capture path information in the hidden unit patterns before training is even started. After 100 epochs of training, an organization by letters is still prevalent; however, letters have been regrouped according to similar predictions. 'START', 'P' and 'S' all make the common prediction of 'X or S' (although 'S' also predicts 'END'); 'T' and 'V' make the common prediction of 'V' (although 'V' also predicts 'END' and 'P'). The path information has been almost eliminated: there is very little difference between the patterns generated by two different occurrences of the same letter (see Figure 5.B.). For example, the hidden layer pattern generated by 'S1' and the corresponding output pattern are almost identical to the patterns generated by 'S2' (see Table 2).

Table 2. Activation of each output unit following the presentation of the first S in the grammar (S1) or the second S (S2) after 100 epochs of training

          T     S     P     X     V     E
    S1    0.0   0.36  0.0   0.33  0.16  0.17
    S2    0.0   0.37  0.0   0.33  0.16  0.17

In this phase, the network is learning to ignore the pattern of activation on the context units and to produce an output pattern appropriate to the letter 'S' in any context. This is a direct consequence of the fact that the patterns of activation on the hidden layer -- and hence the context layer -- are continuously changing from one epoch to the next as the weights from the input units (the letters) to the hidden layer are modified. Consequently, adjustments made to the weights from the context layer to the hidden layer are inconsistent from epoch to epoch and cancel each other. In contrast, the network is able to pick up the stable association between each letter and all of its possible successors.

Discovering arcs. At the end of this phase, individual letters consistently generate a unique pattern of activation on the hidden layer.
This is a crucial step in developing a sensitivity to context: patterns copied onto the context layer have become a unique code designating which letter immediately preceded the current letter. The learning procedure can now exploit the regular association between the pattern on the context layer and the desired output. Around epoch 700, the cluster analysis shows that the network has used this information to differentiate clearly between the first and second occurrence of the same letter (Figure 5.C.). The pattern generated by 'S2' -- which predicts 'END' -- clusters with the pattern generated by 'V2', which also predicts 'END'. The overall difference between all the hidden layer patterns has also more than doubled, as indicated by the change in scale.

Encoding the path. During the last phase of learning, the network learns to make different predictions to the same occurrence of a letter (e.g., 'V1') on the basis of the previous sequence. For example, it learns to differentiate between 'ssxxV', which predicts either 'P' or 'V', and 'sssxxV', which predicts only 'V', by exploiting the small difference between the activation patterns generated by 'X2' in the two different contexts. The process through which path information is encoded can be conceptualized in the following way: as the initial papers about back-propagation pointed out, the hidden unit patterns of activation represent an 'encoding' of the features of the input patterns that are relevant to the task. In the recurrent network, the hidden layer is presented with information about the current letter, but also -- on the context layer -- with an encoding of the relevant features of the previous letter. Thus, a given hidden layer pattern can come to encode information about the relevant features of two consecutive letters.
When this pattern is fed back on the context layer, the new pattern of activation over the hidden units can come to encode information about three consecutive letters, and so on. In this manner, the context layer patterns can allow the network to maintain prediction-relevant features of an entire sequence. However, it is important to note that information about the path that is not relevant locally (i.e., that does not contribute to predicting successors of the current letter) tends not to be encoded in the next hidden layer pattern. It may then be lost for subsequent processing. This tendency is lessened when the network has extra degrees of freedom -- i.e., extra hidden units -- so as to allow small and locally useless differences to survive for several processing steps.

CONCLUSION

We have shown that the network architecture first proposed by Elman (1988) is capable of mastering an infinite corpus of strings generated from a finite-state grammar after training on a finite set of exemplars with a learning algorithm that is local in time. The network develops internal representations that correspond to the nodes of the grammar and closely approximates the corresponding minimal finite-state recognizer. We have also shown that the simple recurrent network is able to encode information about contingencies that are not local to a given letter and its immediate predecessor, such as those implied by a length constraint on the strings. Encoding of sequential structure in the patterns of activation over the hidden layers proceeds in stages. The network first develops stable hidden-layer representations for individual letters, and then for individual arcs in the grammar. Finally, the network is able to exploit slight differences in the patterns of activation which denote a specific path through the grammar. Our current work is exploring the relevance of this architecture to the processing of embedded sequences typical of natural language.
The results of some preliminary experiments are available in Servan-Schreiber, Cleeremans and McClelland (1988).

References

Elman, J.L. (1988). Finding structure in time. CRL Technical Report 8801. Center for Research in Language, University of California, San Diego.

Reber, A.S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior, 6, 855-863.

Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1986). Learning representations by back-propagating errors. Nature, 323, 533-536.

Sejnowski, T.J. and Rosenberg, C. (1986). NETtalk: a parallel network that learns to read aloud. Technical Report JHU-EECS-86-01, Johns Hopkins University.

Servan-Schreiber, D., Cleeremans, A., and McClelland, J.L. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213.

Williams, R.J. and Zipser, D. (1988). A learning algorithm for continually running fully recurrent neural networks. ICS Technical Report 8805. Institute for Cognitive Science, UCSD, La Jolla, CA 92093.
1988
A BACK-PROPAGATION ALGORITHM WITH OPTIMAL USE OF HIDDEN UNITS

Yves Chauvin
Thomson-CSF, Inc. (and Psychology Department, Stanford University)
630 Hansen Way (Suite 250)
Palo Alto, CA 94306

ABSTRACT

This paper presents a variation of the back-propagation algorithm that makes optimal use of a network's hidden units by decreasing an "energy" term written as a function of the squared activations of these hidden units. The algorithm can automatically find optimal or nearly optimal architectures necessary to solve known Boolean functions, facilitate the interpretation of the activation of the remaining hidden units, and automatically estimate the complexity of architectures appropriate for phonetic labeling problems. The general principle of the algorithm can also be adapted to different tasks: for example, it can be used to eliminate the [0, 0] local minimum of the [-1, +1] logistic activation function while preserving a much faster convergence and forcing binary activations over the set of hidden units.

PRINCIPLE

This paper describes an algorithm which makes optimal use of the hidden units in a network using the standard back-propagation algorithm (Rumelhart, Hinton & Williams, 1986). Optimality is defined as the minimization of a function of the "energy" spent by the hidden units throughout the network, independently of the chosen architecture, and where the energy is written as a function of the squared activations of the hidden units. The standard back-propagation algorithm is a gradient descent algorithm on the following cost function:

    C = \sum_{i \in P} \sum_{j \in O} (d_{ij} - o_{ij})^2    [1]

where d is the desired output of an output unit, o the actual output, and where the sum is taken over the set of output units O for the set of training patterns P.
The following algorithm implements a gradient descent on the following cost function:

    C = \mu_{err} \sum_{i \in P} \sum_{j \in O} (d_{ij} - o_{ij})^2 + \mu_{en} \sum_{i \in P} \sum_{j \in H} e(o_{ij}^2)    [2]

where e is a positive monotonic function and where the sum of the second term is now taken over a set or subset of the hidden units H. The first term of this cost function will be called the error term; the second, the energy term. In principle, the theoretical minimum of this function is found when the desired activations are equal to the actual activations for all output units and all presented patterns and when the hidden units do not "spend any energy". In practical cases, such a minimum cannot be reached and the hidden units have to "spend some energy" to solve a given problem. The quantity of energy will be in part determined by the relative importance given to the error and energy terms during gradient descent. In principle, if a hidden unit has a constant activation whatever the pattern presented to the network, it contributes to the energy term only and will be "suppressed" by the algorithm. The precise energy distribution among the hidden units will depend on the actual energy function e.

ANALYSIS

ALGORITHM IMPLEMENTATION

We can write the total cost function that the algorithm tries to minimize as a weighted sum of an error and energy term:

    C = \mu_{err} E_{err} + \mu_{en} E_{en}    [3]

The first term is the error term used with the standard back-propagation algorithm in Rumelhart et al. If we have h hidden layers, we can write the total energy term as a sum of all the energy terms corresponding to each hidden layer:

    E_{en} = \sum_{i=1}^{h} \sum_{j=1}^{H_i} e(o_{ij}^2)    [4]

To decrease the energy of the uppermost hidden layer H_h, we can compute the derivative of the energy function with respect to the weights. This derivative will be null for any weight "above" the considered hidden layer. For any weight just below the considered hidden layer, we have (using Rumelhart et al. notation):

    \frac{\partial E_{en}}{\partial w_{ij}} = \delta_i^{en} o_j    [5]

    \delta_i^{en} = \frac{\partial e(o_i^2)}{\partial net_i} = \frac{\partial e(o_i^2)}{\partial o_i^2} \frac{\partial o_i^2}{\partial o_i} \frac{\partial o_i}{\partial net_i} = 2 e'(o_i^2)\, o_i\, f'(net_i)    [6]

where the derivative of e is taken with respect to the "energy" of the unit i and where f corresponds to the logistic function. For any hidden layer below the considered layer h, the chain rule yields:

    \delta_k^{en} = f'(net_k) \sum_j \delta_j^{en} w_{jk}    [7]

This is just standard back-propagation with a different back-propagated term. If we minimize both the error at the output layer and the energy of the hidden layer h, we can compute the complete weight change for any connection below layer h:

    \Delta w_{kl} = -\alpha \mu_{err} \delta_k^{err} o_l - \alpha \mu_{en} \delta_k^{en} o_l = -\alpha o_l (\mu_{err} \delta_k^{err} + \mu_{en} \delta_k^{en}) = -\alpha o_l \delta_k^{ac}    [8]

where \delta_k^{ac} is now the delta accumulated for error and energy, which we can write as a function of the deltas of the upper layer:

    \delta_k^{ac} = f'(net_k) \sum_j \delta_j^{ac} w_{jk} + \mu_{en} \delta_k^{en}    [9]

This means that instead of propagating the delta for both energy and error, we can compute an accumulated delta for hidden layer h and propagate it back throughout the network. If we minimize the energy of the layers h and h−1, the new accumulated delta will equal the previously accumulated delta added to a new delta energy on layer h−1. The procedure can be repeated throughout the complete network. In short, the back-propagated error signal used to change the weights of each layer is simply equal to the back-propagated signal used in the previous layer augmented with the delta energy of the current hidden layer. (The algorithm is local and easy to implement.)

ENERGY FUNCTION

The algorithm is sensitive to the energy function e being minimized. The functions used in the simulations described below have the following derivative with
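The delta computations of equations [6]-[9] can be sketched as follows; the variable names, the placement of μ_en in the accumulated delta, and the logistic activation are my reading of the derivation rather than verbatim code from the paper:

```python
import numpy as np

def f(x):
    """Logistic activation."""
    return 1.0 / (1.0 + np.exp(-x))

def f_prime(net):
    o = f(net)
    return o * (1.0 - o)

def e_prime(o_sq, n=1):
    """Derivative of the energy function w.r.t. the squared activation
    (n = 0 gives a linear penalty, n = 1 a logarithmic one)."""
    return 1.0 / (1.0 + o_sq) ** n

def energy_delta(net, mu_en, n=1):
    """Energy delta of one hidden unit, equation [6] scaled by mu_en:
    2 e'(o^2) o f'(net)."""
    o = f(net)
    return mu_en * e_prime(o * o, n) * 2.0 * o * f_prime(net)

def accumulated_delta(delta_upper, w_upper, net, mu_en, n=1):
    """Accumulated delta of a hidden unit, equation [9]: the signal
    back-propagated from the layer above, augmented with this layer's
    energy delta."""
    return f_prime(net) * np.dot(delta_upper, w_upper) + energy_delta(net, mu_en, n)
```

Setting mu_en to zero recovers the standard back-propagated delta, which is what makes the extension local and easy to drop into an existing implementation.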
( ) U· = = ---= e OiJ ; net; I anet; a01 ao; anet; [6] where the derivative of e is taken with respect to the .. energy" of the unit i and where f corresponds to the logistic function. For any hidden layer below the considered layer h. the chain rule yields: d1n = f /c(net/c) I dj"Wj/c J [7] This is just. standard back-propagation with a different back-propagated term. If we minimize both the error at the output layer and the energy of the hidden layer h, we can compute the complete weight change for any connection below layer h: A _ ,ur ~en _ t.. A.er ~en) ~ac uW/C1 - a!J.eru/c 01 - a!J.enu/c 01 - aOI\P'eru/c + !J.enu/c = - aOlu/c [8] where d~c is now the delta accumulated for error and energy that we can write as a function of the deltas of the upper layer: [9] This means that instead of propagating the delta for both energy and error. we can compute an accumulated delta for hidden layer h and propagate it back throughout the network. If we minimize the energy of the layers hand h-J, the new accumulated delta will equal the previously accumulated delta added to a new delta energy on layer h-J. The procedure can be repeated throughout the complete network. In shon. the back-propagated error signal used to change the weights of each layer is simply equal to the back-propagated signal used in the previous layer augmented with the delta energy of the current hidden layer. (The algorithm is local and easy to implement). ENERGY FUNCTION The algorithm is sensitive to the energy function e being minimized. The functions used in the simulations described below have the following derivative with 522 Chauvin respect to the squared activations/energy (only this derivative is necessary to implement the algorithm, see Equation [6]): [10] where n is an integer that determines the precise shape of the energy function (see Table 1) and modulates the behavior of the algorithm in the following way. 
For n = 0, e is a linear function of the energy: "high energy" and "low energy" units are equally penalized. For n = 1, e is a logarithmic function and "low energy" units become more penalized than "high energy" units, in proportion to the linear case. For n = 2, the energy penalty may reach an asymptote as the energy increases: "high energy" units are not penalized more than "middle energy" units. In the simulations, as expected, it appears that higher values of n tend to suppress "low energy" units. (For n > 2, the behavior of the algorithm was not significantly different from n = 2 for the tests described below.)

TABLE 1: Energy Functions.

n:        0      1             2               n > 2
e(o^2):   o^2    log(1+o^2)    o^2/(1+o^2)     ?

BOOLEAN EXAMPLES

The algorithm was tested with a set of Boolean problems. In typical tasks, the energy of the network significantly decreases during early learning. Later on, the network finds a better minimum of the total cost function by decreasing the error and by "spending" energy to solve the problem. Figure 1 shows energy and error as a function of the number of learning cycles during a typical task (XOR) for 4 different runs. For a broad range of the energy learning rate, the algorithm is quite stable and finds the solution to the given problem. This nice behavior is also quite independent of variations in the onset of energy minimization.

EXCLUSIVE OR AND PARITY

The algorithm was tested with XOR for various network architectures. Figure 2 shows an example of the activation of the hidden units after learning. The algorithm finds a minimal solution (2 hidden units, "minimum logic") to the XOR problem when the energy is being minimized. This minimal solution is actually found whatever the starting number of hidden units. If several layers are used, the algorithm finds an optimal or nearly optimal size for each layer.

Figure 1.
Energy and error curves as a function of the number of pattern presentations for different values of the "energy" rate (0, .1, .2, .4). Each energy curve ("e" label) is associated with an error curve ("+" label). During learning, the units "spend" some energy to solve the given task.

With parity 3, for a [-1, +1] activation range of the sigmoid function, the algorithm does not find the optimal 2-hidden-unit solution but has no problem finding a 3-hidden-unit solution, independently of the starting architecture.

SYMMETRY

The algorithm was tested with the symmetry problem, described in Rumelhart et al. The minimal solution for this task uses 2 hidden units. The simplest form of the algorithm does not actually find this minimal solution because some weights from the hidden units to the output unit can grow enough to compensate for the low activations of the hidden units. However, a simple weight decay can prevent these weights from growing too much and allows the network to find the minimal solution. In this case, the total cost function being minimized simply becomes:

C = \mu_{er} \sum_{p}^{P} \sum_{j}^{O} (d_{pj} - o_{pj})^2 + \mu_{en} \sum_{p}^{P} \sum_{i}^{H} e(o_{pi}^2) + \mu_{w} \sum_{ij}^{W} w_{ij}^2   [11]

Figure 2. Hidden unit activations of a 4-hidden-unit network over the 4 XOR patterns when (left) standard back-propagation and (right) energy minimization are used during learning. The network is reduced to minimal size (2 hidden units) when the energy is being minimized.

PHONETIC LABELING

The algorithm was tested with a phonetic labeling task. The input patterns consisted of spectrograms (single speaker, 10 time frames spaced 3.2 ms apart, centered, 16 frequencies) corresponding to 9 syllables [ba], [da], [ga], [bi], [di], [gi], and [bu], [du], [gu]. The task of the network was to classify these spectrograms (7 tokens per syllable) into three categories corresponding to the three consonants [b], [d], and [g].
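As a concrete check (our own sketch, not part of the paper's simulations), the energy functions listed in Table 1 all share the single derivative form of Equation [10], which is the only quantity the implementation needs. The function names below are ours:

```python
import numpy as np

# Energy functions e(o^2) from Table 1, indexed by n.
energy = {
    0: lambda s: s,                  # linear: all units penalized alike
    1: lambda s: np.log(1.0 + s),    # logarithmic: "low energy" units penalized relatively more
    2: lambda s: s / (1.0 + s),      # saturating: penalty asymptotes for "high energy" units
}

def energy_grad(s, n):
    # Common derivative with respect to the squared activation s = o^2
    # (Equation [10]): de/ds = 1 / (1 + s)^n.
    return 1.0 / (1.0 + s) ** n

# Numerical check of de/ds against a central finite difference.
s = np.linspace(0.01, 2.0, 50)
h = 1e-6
for n, e in energy.items():
    numeric = (e(s + h) - e(s - h)) / (2 * h)
    assert np.allclose(numeric, energy_grad(s, n), atol=1e-5)
```

Multiplying this derivative by 2 o_i f'(net_i), as in Equation [6], gives the delta that is added to the ordinary back-propagated error signal.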
Starting with 12 hidden units, the algorithm reduced the network to 3 hidden units. (A hidden unit is considered unused when its activation over the entire range of patterns contributes very little to the activations of the output units.) With standard back-propagation, all of the 12 hidden units are usually used. The resulting network is consistent with the sizes of the hidden layers used by Elman and Zipser (1986) for similar tasks.

EXTENSION OF THE ALGORITHM

Equation [2] represents a constraint over the set of possible LMS solutions found by the back-propagation algorithm. With such a constraint, the "zero-energy" level of the hidden units can be (informally) considered as an attractor in the solution space. However, by changing the sign of the energy gradient, such a point now constitutes a repellor in this space. Having such repellors might be useful when a set of activation values is to be avoided during learning. For example, if the activation range of the sigmoid transfer function is [-1, +1], the learning speed of the back-propagation algorithm can be greatly improved, but the [0, 0] unit activation point (zero input, zero output) often behaves as a local minimum. By inverting the sign of the energy gradient during early learning, it is possible to have the point [0, 0] act as a repellor, forcing the network to make "maximal use" of its resources (hidden units). This principle was tested on the parity-3 problem with a network of 7 hidden units. For a given set of coefficients, standard back-propagation can solve parity-3 in about 15 cycles but yields about 65% local minima at [0, 0]. By using the "repulsion" constraint, parity-3 can be solved in about 20 cycles with 0% local minima. Interestingly, it is also possible to design a "trajectory" of such constraints during learning.
For example, the [0, 0] activation point can be built as a repellor during early learning in order to avoid the corresponding local minimum, then as an attractor during middle learning to reduce the size of the hidden layer, and again as a repellor during late learning, to force the hidden units to have binary activations. This type of trajectory was tested on the parity-3 problem with 7 hidden units. In this case, the algorithm always avoids the [0, 0] local minimum. Moreover, the network can be reduced to 3 or 4 hidden units taking binary values over the set of input patterns. In contrast, standard back-propagation often gets stuck in local minima and uses the initial 7 hidden units with analog activation values.

CONCLUSION

The present algorithm simply imposes a constraint over the LMS solution space. It can be argued that limiting such a solution space can in some cases increase the generalizing properties of the network (curve-fitting analogy). Although a complete theory of generalization has yet to be formalized, the present algorithm presents a step toward the automatic design of "minimal" networks by imposing constraints on the activations of the hidden units. (Similar constraints on weights can be imposed and have been tested with success by D. E. Rumelhart, personal communication. Combinations of constraints on weights and activations are being tested.) What is simply shown here is that this energy minimization principle is easy to implement, is robust to a broad range of parameter values, can find minimal or nearly optimal network sizes when tested with a variety of tasks, and can be used to "bend" trajectories of activations during learning.

Acknowledgments

This research was conducted at Thomson-CSF, Inc. in Palo Alto. I would like to thank the Thomson neural net team for useful discussions. Dave Rumelhart and the PDP team at Stanford University were also very helpful.
I am especially grateful to Yoshiro Miyata, from Bellcore, for letting me use his neural net simulator (SunNet), and to Jeff Elman, from UCSD, for letting me use the speech data that he collected.

References

J. L. Elman & D. Zipser. Learning the hidden structure of speech. ICS Technical Report 8701, Institute for Cognitive Science, University of California, San Diego (1987).

D. E. Rumelhart, G. E. Hinton & R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1. Cambridge, MA: MIT Press/Bradford Books (1986).
GEMINI: GRADIENT ESTIMATION THROUGH MATRIX INVERSION AFTER NOISE INJECTION Yann Le Cun 1 Conrad C. Galland and Geoffrey E. Hinton Department of Computer Science University of Toronto 10 King's College Rd Toronto, Ontario M5S 1A4 Canada ABSTRACT Learning procedures that measure how random perturbations of unit activities correlate with changes in reinforcement are inefficient but simple to implement in hardware. Procedures like back-propagation (Rumelhart, Hinton and Williams, 1986) which compute how changes in activities affect the output error are much more efficient, but require more complex hardware. GEMINI is a hybrid procedure for multilayer networks, which shares many of the implementation advantages of correlational reinforcement procedures but is more efficient. GEMINI injects noise only at the first hidden layer and measures the resultant effect on the output error. A linear network associated with each hidden layer iteratively inverts the matrix which relates the noise to the error change, thereby obtaining the error-derivatives. No back-propagation is involved, thus allowing unknown non-linearities in the system. Two simulations demonstrate the effectiveness of GEMINI. OVERVIEW Reinforcement learning procedures typically measure the effects of changes in local variables on a global reinforcement signal in order to determine sensible weight changes. This measurement does not require the connections to be used backwards (as in back-propagation), but it is inefficient when more than a few units are involved. Either the units must be perturbed one at a time, or, if they are perturbed simultaneously, the noise from all the other units must be averaged away over a large number of samples in order to achieve a reasonable signal to noise ratio. So reinforcement learning is much less efficient than back-propagation (BP) but much easier to implement in hardware. 
GEMINI is a hybrid procedure which retains many of the implementation advantages of reinforcement learning but eliminates some of the inefficiency. GEMINI uses the squared difference between the desired and actual output vectors as a reinforcement signal. It injects random noise at the first hidden layer only, causing correlated noise at later layers. If the noise is sufficiently small, the resultant change in the reinforcement signal is a linear function of the noise vector at any given layer. (First author's present address: Room 4G-332, AT&T Bell Laboratories, Crawfords Corner Rd, Holmdel, NJ 07733.) A matrix inversion procedure implemented separately at each hidden layer then determines how small changes in the activities of units in the layer affect the reinforcement signal. This matrix inversion gives a much more accurate estimate of the error-derivatives than simply averaging away the effects of noise and, unlike the averaging approach, it can be used when the noise is correlated. The matrix inversion at each layer can be performed iteratively by a local linear network that "learns" to predict the change in reinforcement from the noise vector at that layer. For each input vector, one ordinary forward pass is performed, followed by a number of forward passes each with a small amount of noise added to the total inputs of the first hidden layer. After each forward pass, one iteration of an LMS training procedure is run at each hidden layer in order to improve the estimate of the error-derivatives in that layer. The number of iterations required is comparable to the width of the largest hidden layer. In order to avoid singularities in the matrix inversion procedure, it is necessary for each layer to have fewer units than the preceding one. In this hybrid approach, the computations that relate the perturbation vectors to the reinforcement signal are all local to a layer.
There is no detailed back-propagation of information, so GEMINI is more amenable to optical or electronic implementations than BP. The additional time needed to run the gradient-estimating inner loop may be offset by the fact that only forward propagation is required, so this can be made very efficient (e.g. by using analog or optical hardware).

TECHNIQUES FOR GRADIENT ESTIMATION

The most obvious way to measure the derivative of the cost function w.r.t. the weights is to perturb the weights one at a time, for each input vector, and to measure the effect that each weight perturbation has on the cost function, C. The advantage of this technique is that it makes very few assumptions about the way the network computes its output. It is possible to use far fewer perturbations (Barto and Anandan, 1985) if we are using "quasi-linear" units in which the output, y_i, of unit i is a smooth non-linear function, f, of its total input, x_i, and the total input is a linear function of the incoming weights, w_ij, and the activities, y_j, of units in the layer below:

x_i = \sum_j w_{ij}\, y_j

Instead of perturbing the weights, we perturb the total input, x_i, received by each unit, in order to measure \partial C / \partial x_i. Once this derivative is known it is easy to derive \partial C / \partial w_{ij} for each of the unit's incoming weights by performing a simple local computation:

\frac{\partial C}{\partial w_{ij}} = \frac{\partial C}{\partial x_i}\, y_j

If the units are perturbed one at a time, we can approximate \partial C / \partial x_i by \delta C / \delta x_i, where \delta C is the variation of the cost function induced by a perturbation \delta x_i of the total input to unit i. This method is more efficient than perturbing the weights directly, but it still requires as many forward passes as there are hidden units.

Reducing the number of perturbations required

If the network has a layered, feed-forward architecture, the state of any single layer completely determines the output. This makes it possible to reduce the number of required perturbations and forward passes still further.
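The one-at-a-time perturbation estimate of \partial C / \partial x_i can be sketched on a toy quasi-linear layer. The layer size, the tanh non-linearity, and the comparison against the analytic derivative are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny quasi-linear layer: y = f(x), x = W v, with a squared-error cost.
f = np.tanh
W = rng.normal(size=(3, 4))
v = rng.normal(size=4)
d = np.array([0.5, -0.2, 0.1])          # desired output

def cost(x):
    return np.sum((d - f(x)) ** 2)

x = W @ v
eps = 1e-5

# Perturb each unit's total input x_i one at a time and measure the
# change in C -- one forward pass per hidden unit.
grad_est = np.zeros_like(x)
for i in range(len(x)):
    dx = np.zeros_like(x); dx[i] = eps
    grad_est[i] = (cost(x + dx) - cost(x)) / eps

# Analytic derivative for comparison: dC/dx_i = -2 (d_i - y_i) f'(x_i);
# dC/dw_ij = (dC/dx_i) v_j then follows locally.
grad_true = -2 * (d - f(x)) * (1 - f(x) ** 2)
assert np.allclose(grad_est, grad_true, atol=1e-3)
```

The loop makes the inefficiency explicit: the number of forward passes grows with the number of hidden units, which is what the induced-perturbation scheme below avoids.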
Perturbing units in the first hidden layer will induce perturbations at the following layers, and we can use these induced perturbations to compute the gradients for these layers. However, since many of the units in a typical hidden layer will be perturbed simultaneously, and since these induced perturbations will generally be correlated, it is necessary to do some local computation within each layer in order to solve the credit assignment problem of deciding how much of the change in the final cost function to attribute to each of the simultaneous perturbations within the layer. This local computation is relatively simple. Let x(k) be the vector of total inputs to units in layer k. Let \delta x_t(k) be the perturbation vector of layer k at time t. It does not matter for the following analysis whether the perturbations are directly caused (in the first hidden layer) or are induced. For a given state of the network, we have:

\delta C_t \approx g_k \cdot \delta x_t(k), \quad g_k = \partial C / \partial x(k)

To compute the gradient w.r.t. layer k we must solve the following system for g_k:

g_k \cdot \delta x_t(k) = \delta C_t, \quad t = 1 \ldots P

where P is the number of perturbations. Unless P is equal to the number of units in layer k and the perturbation vectors are linearly independent, this system will be over- or under-determined. In some network architectures it is impossible to induce n_l linearly independent perturbation vectors in a hidden layer l containing n_l units. This happens when one of the preceding hidden layers, k, contains fewer units, because the perturbation vectors induced by a layer with n_k units on the following layer generate at most n_k independent directions. So, to avoid having to solve an under-determined system, we require "convergent" networks in which each hidden layer has no more units than the preceding layer.
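The system of perturbation/cost-change pairs can also be solved directly rather than iteratively. A sketch with synthetic data and a known gradient (all names and sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose layer k has 4 units and the true (unknown) gradient is g_true.
# Each trial t injects a perturbation dx_t and we observe the resulting
# first-order change in the cost, dC_t = g_true . dx_t.
g_true = rng.normal(size=4)
P = 8                                   # more perturbations than units: overdetermined
dX = rng.normal(scale=0.01, size=(P, 4))
dC = dX @ g_true

# Solve the linear system dC_t = g . dx_t for g in the least-squares sense.
g_hat, *_ = np.linalg.lstsq(dX, dC, rcond=None)
assert np.allclose(g_hat, g_true, atol=1e-8)
```

With exactly linear observations the gradient is recovered to numerical precision; in the network the relation is only linear to first order, so the recovered g is an estimate.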
Using a Special Unit to Allocate Credit within a Layer

Instead of directly solving for the \partial C / \partial x_i within each layer, we can solve the same system iteratively by minimizing:

E = \sum_t (\delta C_t - g_k \cdot \delta x_t(k))^2

Figure 1: A GEMINI network. (Figure labels: input layer at the bottom; a linear unit implementing the special unit at each hidden layer.)

This can be done by a special unit whose inputs are the perturbations of layer k and whose desired output is the resulting perturbation of the cost function \delta C (figure 1). When the LMS algorithm is used, the weight vector g_k of this special unit converges to the gradient of C with respect to the vector of total inputs x(k). If the components of the perturbation vector are uncorrelated, the convergence will be fast and the number of iterations required should be of the order of the number of units in the layer. Each time a new input vector is presented to the main network, the "inner-loop" minimization process that estimates the \partial C / \partial x_i must be re-initialized by setting the weights of the special units to zero or by reloading approximately correct weights from a table that associates estimates of the \partial C / \partial x_i with each input vector.

Summary of the GEMINI Algorithm

1. Present an input pattern and compute the network state by forward propagation.
2. Present a desired output and evaluate the cost function.
3. Re-initialize the weights of the special units.
4. Repeat until convergence:
   (a) Perturb the first hidden layer and propagate forward.
   (b) Measure the induced perturbations in other layers and the output cost function.
   (c) At each layer apply one step of the LMS rule on the special units to minimize the error between the predicted cost variation and the actual variation.
5. Use the weights of the special units (the estimates of \partial C / \partial x_i) to compute the weight changes of the main network.
6. Update the weights of the main network.
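The special unit's inner loop (step 4c) can be sketched in isolation. We use a normalized LMS step here for stability, whereas the paper's simulations actually use a diagonal pseudo-Newton variant; the setup and names are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

# The "special unit" at a hidden layer: a linear predictor whose weight
# vector converges to dC/dx(k) under the LMS rule.
g_true = rng.normal(size=5)             # gradient the unit should discover
g_hat = np.zeros(5)                     # re-initialized for each input pattern
eta = 0.5

for _ in range(200):                    # inner-loop iterations (forward passes)
    dx = rng.normal(scale=0.01, size=5) # uncorrelated perturbations
    dC = g_true @ dx                    # observed change in the cost
    err = dC - g_hat @ dx               # prediction error of the special unit
    g_hat += eta * err * dx / (dx @ dx) # normalized LMS step

assert np.linalg.norm(g_hat - g_true) < 0.05 * np.linalg.norm(g_true)
```

Because the perturbations are uncorrelated, each step removes a fraction of the remaining error along a random direction, which is why the required number of iterations scales with the width of the layer.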
A TEST EXAMPLE: CHARACTER RECOGNITION

The GEMINI procedure was tested on a simple classification task using a network with two hidden layers. The input layer represented a 16 by 16 binary image of a handwritten digit. The first hidden layer was an 8 by 8 array of units that were locally connected to the input layer in the following way: each hidden unit connected to a 3 by 3 "receptive field" of input units, and the centers of these receptive fields were spaced two "pixels" apart horizontally and vertically. To avoid boundary effects we used wraparound, which is unrealistic for real image processing. The second hidden layer was a 4 by 4 array of units, each of which was connected to a 5 by 5 receptive field in the previous hidden layer. The centers of these receptive fields were spaced two pixels apart. Finally, the output layer contained 10 units, one for each digit, and was fully connected to the second hidden layer. The network contained 1226 weights and biases. The sigmoid function used at each node was of the form f(x) = s tanh(mx) with m = 2/3 and s = 1.716; thus f was odd and had the property that f(1) = 1 (Le Cun, 1987). The training set was composed of 6 handwritten exemplars of each of the 10 digits. It should be emphasized that this task is simple (it is linearly separable), and the network has considerably more weights than are required for this problem. Experiments were performed with 64 perturbations in the gradient estimation inner loop. Therefore, assuming that the perturbation vectors were linearly independent, the linear system associated with the first hidden layer was not underconstrained (see footnote 2). Since a stochastic gradient procedure was used with a single sweep through the training set, the solution was only a rough approximation, though convergence was facilitated by the fact that the components of the perturbations were statistically independent.
The linear systems associated with the second hidden layer and the output layer were almost certainly overconstrained (see footnote 3), so we expected to obtain a better estimate of the gradient for these layers than for the first one. The perturbations injected at the first hidden layer were independent random numbers with a zero-mean Gaussian distribution and standard deviation of 0.1. The minimization procedure used for gradient estimation was not a pure LMS, but a pseudo-Newton method that used a diagonal approximation to the matrix of second derivatives, which scales the learning rates for each link independently (Le Cun, 1987; Becker and Le Cun, 1988). In our case, the update rule for a gradient estimate coefficient used \sigma_i^2, an estimate of the variance of the perturbation for unit i. In the simulations \eta was equal to 0.02 for the first hidden layer, 0.03 for the second hidden layer, and 0.05 for the output layer. Although there was no real need for it, the gradient associated with the output units was estimated using GEMINI so that we could evaluate the accuracy of gradient estimates far away from the noise-injection layer.

Footnote 2: It may have been overconstrained, since the actual relation between the perturbation and the variation of the cost function is usually non-linear for finite perturbations.
Footnote 3: This depends on the degeneracy of the weight matrices.

Figure 2: The mean squared error as a function of the number of sweeps through the training set for GEMINI (top curve) and BP (bottom curve).

The learning rates for the main network, \epsilon_i, had different values for each unit and were equal to 0.1 divided by the fan-in of the unit. Figure 2 shows the relative learning rates of BP and GEMINI. The two runs were started from the same initial conditions. Although the learning curve for GEMINI is consistently above the one for BP and is more irregular, the rate of decrease of the two curves is similar.
The 60 patterns are all correctly classified after 10 passes through the training set for regular BP, and after 11 passes for GEMINI. In the experiments, the direction of the estimated gradient for a single pattern was within about 20 degrees of the true gradient for the output layer and the second hidden layer, and within 50 degrees for the first hidden layer. Even with such inaccuracies in the gradient direction, the procedure still converged at a reasonable rate.

LEARNING TO CONTROL A SIMPLE ROBOT ARM

In contrast to the digit recognition task, the robot arm control task considered here is particularly suited to the GEMINI procedure because it contains a non-linearity which is unknown to the network. In this simulation, a network with 2 input units, a first hidden layer with 8 units, a second with 4 units, and an output layer with 2 units is used to control a simulated arm with two angular degrees of freedom. The problem is to train the network to receive x, y coordinates encoded on the two input units and produce two angles encoded on the output units which would place the end of the arm on the desired input point (figure 3).

Figure 3: (a) The network trained with the GEMINI procedure (its output angles \theta_1, \theta_2 feed the robot arm, the "unknown" non-linearity, whose hand position (x, y) determines the Euclidean-distance cost), and (b) the 2-D arm controlled by the network.

The units use the same input-output function as in the digit recognition example. Each point in the training set is successively applied to the inputs and the resultant output angles determined. The training points are chosen so that the code for the output angles exploits most of the sigmoid input-output curve while avoiding the extreme ends. The "unknown" non-linearity is essentially the robot arm, which takes the joint angles as input and then "outputs" the resulting hand coordinates by positioning itself accordingly.
The cost function, C, is taken as the square of the Euclidean distance from this point to the desired point. In the simulation, this distance is determined using the appropriate trigonometric relations, where a_1 and a_2 are the lengths of the two components of the arm. Although this non-linearity is not actually unknown, analytical derivative calculation can be difficult in many real-world applications, and so it is interesting to explore the possibility of a control system that can learn without it. It is found that the minimum number of iterations of the LMS inner-loop search needed to obtain good estimates of the gradients, when compared to values calculated by back-propagation, is between 2 and 3 times the number of units in the first hidden layer (figure 4). For this particular kind of problem, the process can be sped up significantly by using the following two modifications. The same training vector can be applied to the inputs and the weights changed repeatedly until the actual output is within a certain radius of the desired output. The gradient estimates are kept between these weight updates, thereby reducing the number of inner-loop iterations needed at each step. The second modification requires that the arm be made to move continuously through 2-D space by using an appropriately ordered training set. The state of the network changes slowly as a result, leading to a slowly varying gradient. Thus, if the gradient estimate is not reset between successive input vectors, it can track the real gradient, allowing the number of iterations per gradient estimate to be reduced to as little as 5 in this particular network.

Figure 4: Gradients of the units in all non-input layers, determined (a) by the GEMINI procedure after 24 iterations of the gradient-estimating inner loop, and (b) through analytical calculation. The size of the black and white squares indicates the magnitude of negative and positive error gradients, respectively.
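A minimal sketch of the arm's "unknown" non-linearity and the cost, assuming standard two-link planar kinematics with illustrative link lengths (the paper does not spell out its angle conventions, so the kinematic formulas here are our reconstruction):

```python
import numpy as np

# Assumed two-link planar arm; link lengths a1, a2 are illustrative.
a1, a2 = 1.0, 0.8

def hand_position(theta1, theta2):
    # The arm "outputs" the hand coordinates given the joint angles.
    x = a1 * np.cos(theta1) + a2 * np.cos(theta1 + theta2)
    y = a1 * np.sin(theta1) + a2 * np.sin(theta1 + theta2)
    return x, y

def cost(theta1, theta2, target):
    # C: squared Euclidean distance from the hand to the desired point.
    x, y = hand_position(theta1, theta2)
    return (x - target[0]) ** 2 + (y - target[1]) ** 2

# Sanity check: a fully extended arm (both angles zero) reaches (a1 + a2, 0).
assert np.allclose(hand_position(0.0, 0.0), (a1 + a2, 0.0))
assert cost(0.0, 0.0, (a1 + a2, 0.0)) < 1e-12
```

GEMINI only ever evaluates this cost through forward passes, which is the point: the control network never needs the analytic derivatives of the kinematics.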
The results of simulations using training sets of closely spaced points in the first quadrant show that GEMINI is capable of training this network to correctly orient the simulated arm, with significantly improved learning efficiency when the above two modifications are employed. Details of these simulation results and the parameters used to obtain them are given in (Galland, Hinton, and Le Cun, 1989).

Acknowledgements

This research was funded by grants from the Ontario Information Technology Research Center, the Fyssen Foundation, and the National Science and Engineering Research Council. Geoffrey Hinton is a fellow of the Canadian Institute for Advanced Research.

References

A. G. Barto and P. Anandan (1985) Pattern recognizing stochastic learning automata. IEEE Transactions on Systems, Man and Cybernetics, 15, 360-375.

S. Becker and Y. Le Cun (1988) Improving the convergence of back-propagation learning with second order methods. In Touretzky, D. S., Hinton, G. E. and Sejnowski, T. J., editors, Proceedings of the 1988 Connectionist Summer School, Morgan Kaufmann: Los Altos, CA.

C. C. Galland, G. E. Hinton and Y. Le Cun (1989) Technical Report, in preparation.

Y. Le Cun (1987) Modèles Connexionnistes de l'Apprentissage. Doctoral thesis, University of Paris 6.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams (1986) Learning internal representations by back-propagating errors. Nature, 323, 533-536.
A SELF-LEARNING NEURAL NETWORK

A. Hartstein and R. H. Koch
IBM - Thomas J. Watson Research Center, Yorktown Heights, New York

ABSTRACT

We propose a new neural network structure that is compatible with silicon technology and has built-in learning capability. The thrust of this network work is a new synapse function. The synapses have the feature that the learning parameter is embodied in the thresholds of MOSFET devices and is local in character. The network is shown to be capable of learning by example as well as exhibiting the desirable features of Hopfield-type networks.

The thrust of what we want to discuss is a new synapse function for an artificial neuron to be used in a neural network. We choose the synapse function to be readily implementable in VLSI technology, rather than choosing a function which is either our best guess for the function used by real synapses or mathematically the most tractable. In order to demonstrate that this type of synapse function provides interesting behavior in a neural network, we embed this type of function in a Hopfield {Hopfield, 1982} type network and provide the synapses with a Hebbian {Hebb, 1949} learning capability. We then show that this type of network functions in much the same way as a Hopfield network and also learns by example. Some of this work has been discussed previously {Hartstein, 1988}.

Most neural networks which have been described use a multiplicative function for the synapses. The inputs to the neuron are multiplied by weighting factors and then the results are summed in the neuron. The result of the sum is then put into a hard threshold device or a device with a sigmoid output. This is not the easiest function for a MOSFET to perform, although it can be done. Over a large range of parameters, a MOSFET is a linear device, with the output current being a linear function of the input voltage relative to a threshold voltage.
If one could directly utilize these characteristics, one would be able to design a neural network more compactly.

We propose that we directly use MOSFETs as the input devices for the neurons in the network, utilizing their natural characteristics. We assume the following form for the input of each neuron in our network:

V_i = \sigma\left( \sum_j | V_j - T_{ij} | \right)   (1)

where V_i is the output, the V_j are the inputs, and the T_{ij} are the learned threshold voltages. In this network we use a representation in which both the V's and the T's range from 0 to +1. The result of the summation is fed into a non-linear sigmoid function \sigma. All of the neurons in the network are interconnected, the outputs of each neuron feeding the inputs of every other neuron. The functional form of Eq. 1 might, for instance, represent several n-channel and p-channel MOSFETs in parallel. The memories in this network are contained in the threshold voltages, T_{ij}.

We implement learning in this network using a simple linear Hebbian {Hebb, 1949} learning rule. We use a rule which locally reinforces the state of each input node in a neuron relative to the output of that neuron. The equation governing this learning algorithm is Eq. (2), where T_{ij} are the initial threshold voltages and T'_{ij} are the new threshold voltages after a time \Delta t. Here \eta is a small learning parameter related to this time period, and the offset factor 0.5 is needed for symmetry. Additional saturation constraints are imposed to ensure that the T_{ij} remain in the interval 0 to +1. This learning rule is one which is linear in the difference between each input and output of a neuron. This is an enhancing/inhibiting rule: the thresholds are adjusted in such a way that the output of the neuron is either pushed in the same direction as the input (enhancing) or pushed in the opposite direction (inhibiting). For our simple simulations we started the network with all thresholds at 0.5 and let learning proceed until some saturation occurred. The somewhat more sophisticated method of including a relaxation term in Eq. 2 to slowly push the values toward 0.5 over time was also explored. The results are essentially the same as for our simple simulations.

The interesting question is: if we form a network using this type of neuron, what will the overall network response be like? Will the network learn multiple states, or will it learn a simple average over all of the states it sees? In order to probe the functioning of this network, we have performed simulations of this network on a digital computer. Each simulation was divided into two phases. The first was a learning phase in which a fixed number of random patterns were presented to the network sequentially for some period of time. During this phase the threshold
The somewhat more sophisticated method of including a relaxation term in Eq. 2 to slowly push the values toward O.S over time was also explored. The results are essentially the same as for our simple simulations. The interesting question is if we form a network using this type of neuron, what will the overall network response be like? Will the network learn multiple states or will it learn a simple average over all of the states it sees? In order to probe the functioning of this network, we have performed simulations of this network on a digital computer. Each simulation was divided into two phases. The first was a learning phase in which a fixed number of random patterns were presented to the network sequentially for some period of time. During this phase the threshold A Self-Learning Neural Network 771 voltages were allowed to change using the rule in Eq. 2. The second was a testing phase in which learning was turned off and the memories established in the network were probed to determine the essential features of these learned memories. In this way we could test how well the network was able to learn the initial test patterns, how well the network could reconstruct the learned patterns when presented with test patterns containing errors, and how the network responded to random input patterns. We have simulated this network using N fully interconnected neurons, with N in the range of 10 to 200. M random patterns were chosen and sequentially presented to the network for learning. M typically ranged up to N/3. After the learning phase, the nature of the stable states in the network was tested. In general we found that the network is capable of learning all of the input patterns as long as M is not too large. The network also learns the inverse patterns (l's and O's interchanged) due to the inherent symmetry of the network. Additional extraneous patterns are learned which have no obvious connection to the intended learned states. 
These may be analogous to either the spin glass states or the mixed pattern states discussed for the multiplicative network [Amit, 1985]. Fig. 1 shows the capacity of a 100 neuron network. We attempted to teach the network M states and then probed the network to see how many of the states were successfully learned. This process was repeated many times until we achieved good statistics. We have defined successful learning as 100% accuracy. A more relaxed definition would yield a qualitatively similar curve with larger capacity. The functional form of the learning is peaked at a fixed value of the number of input patterns. For a small number of input patterns, the network essentially learns all of the patterns. Deviations from perfect learning here generally mean 1 bit of information was learned incorrectly. Near the peak the results become more noisy for different learning attempts. Most errors are still only 1 or 2 bits, but the learning in this region becomes marginal as the capacity of the network is approached. For larger values of the number of input patterns the network becomes overloaded and it becomes incapable of learning most of the input states. Some small number of patterns are still learned, but the network is clearly not functioning well. Many of the errors in this region are large, showing little correlation with the intended learned states. This functional form for the learning in the network is the same for all of the network sizes tested. We define the capacity of the network as the average value of the peak number of patterns which can be successfully learned. The inset to Fig. 1 shows the memory capacity of a number of tested networks as a function of the size of the network. The network capacity is seen to be a linear function of the network size. The capacity is proportional to the number of T_ij's specified. In this example the network capacity was found to be about 8% of the maximum possible for binary information.
This rather low figure results from a trade-off of capacity for the particular types of functions that a neural network can perform. It is possible to construct simple memories with 100% capacity. Figure 1. The number of successfully learned patterns as a function of the number of input patterns for a 100 neuron network. The dashed curve is for perfect learning. The inset shows the memory capacity of a threshold neural network as a function of the size of the network. Some important measures of learning in the network are the distribution of stable states in the network after learning has taken place, and the basin of attraction for each stable point. One can gain a handle on these parameters by probing the network with random test patterns after the network has learned M states. Fig. 2 shows the averaged results of such tests for a 100 neuron network and varying numbers of learned states. The figure shows the probability of finding particular states, both learned and extraneous. The states are ordered first by decreasing probability for the learned states, followed by decreasing probability for the extraneous states. It is clear from the figure that both types of stable states are present in the network. It is also clear that the probabilities of finding different patterns are not equal. Some learned states are more robust than others, that is they have larger basins of attraction. This network model does not partition the available memory space equally among the input patterns. It also provides a large amount of memory space for the extraneous states. Clearly, this is not the optimum situation. Figure 2. The probability of the network finding a specific pattern. Both learned states and extraneous states are found. The figure was obtained for a 100 neuron network. Fig. 2a is for 5 learned patterns and 2b is for 10 learned patterns. Some of the learned states appear to have 0 probability of being found in this simulation. Some of these states are not stable states of the network and will never be found. This is particularly true when the number of learned states is close to or exceeds the capacity of the network. Others of these states simply have an extremely small probability of being found in a random search because they have small basins of attraction. However, as discussed below, these are still viable states. When the network learns fewer states than its capacity (Fig. 2a), most of the stable states are the learned states. As the capacity is approached or exceeded, most of the stable states are extraneous states. The results shown in Fig. 2 address the question of the network's tolerance to errors. A pattern which has a large basin of attraction will be relatively tolerant to errors when being retrieved, whereas a pattern which has a small basin of attraction will be less tolerant of errors. The immunity of the learned patterns to errors in being retrieved can also be tested in a more direct way. One can probe the network with test patterns which start out as the learned patterns, but have a certain number of bits changed randomly. One then monitors the final pattern which the network finds and compares it to the known learned pattern. Figure 3. Probability of the network finding a specific learned state when the input pattern has a certain Hamming distance.
This figure was obtained for a 100 neuron network which was taught 10 random patterns. Fig. 3 shows typical results of such a calculation. The probability of successfully retrieving a pattern is shown as a function of the Hamming distance, the number of bits which were randomly changed in the test pattern. For this simulation a 100 neuron network was used and it was taught 10 patterns. For small Hamming distances the patterns are successfully found 100% of the time. As the Hamming distance gets larger the network is no longer capable of finding the desired pattern, but rather finds one of the other fixed points. This result is a statistical average over all of the states and therefore tends to emphasize patterns with small basins of attraction. This is just the opposite of the types of states emphasized in the analysis shown in Fig. 2. We can define the maximum Hamming distance as the Hamming distance at which the probability of finding the learned state has dropped to 50%. Fig. 4 shows the maximum Hamming distance as a function of the number of learned states in our 100 neuron network. As one expects the maximum Hamming distance gets smaller as the number of learned states increases. Perhaps surprisingly, the relationship is linear. These results are important since one requires a reasonable maximum Hamming distance for any real system to function. These considerations also shed some light on the nature of the functioning of the network and its ability to learn. Figure 4. The maximum Hamming distance for a given number of learned states. Results are for a 100 neuron network.
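The error-tolerance probe described above (start from a learned pattern, flip a fixed number of bits, and see whether the network falls back onto it) can be sketched like this; again a stand-in Hopfield memory replaces the paper's network, and the trial and sweep counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def retrieval_probability(W, pattern, k, trials=30, sweeps=10):
    # Flip exactly k bits of a stored pattern (Hamming distance k) and
    # count how often the net settles back onto it.  The memory is a
    # stand-in Hopfield net, not the paper's threshold-MOSFET rule.
    N = len(pattern)
    hits = 0
    for _ in range(trials):
        probe = pattern.copy()
        flip = rng.choice(N, size=k, replace=False)
        probe[flip] *= -1.0
        for _ in range(sweeps):            # iterate toward a fixed point
            probe = np.sign(W @ probe)
            probe[probe == 0] = 1.0
        hits += int(np.array_equal(probe, pattern))
    return hits / trials

N, M = 100, 5
P = rng.choice([-1.0, 1.0], size=(M, N))
W = (P.T @ P) / N
np.fill_diagonal(W, 0.0)
# Retrieval probability vs. Hamming distance for one stored pattern:
curve = [retrieval_probability(W, P[0], k) for k in (1, 5, 15, 35)]
```

The "maximum Hamming distance" of the text would then be the k at which this curve crosses 0.5.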
When a larger number of learned patterns are attempted, the available space is now divided among more memories. The maximum Hamming distance decreases and more space is taken up by extraneous states. When the memory capacity is exceeded, the phase space allocated to any successful memory is very small and most of the space is taken up by extraneous states. The types of behavior we have described are similar to those found in the Hopfield type memory utilizing multiplicative synapses. In fact our central point is that by using a completely different type of synapse function, we can obtain the same behavior. At the same time we argue that since this network was proposed using a synapse function which mirrors the operating characteristics of MOSFETs, it will be much easier to realize in hardware. Therefore, we should be able to construct a smaller, more tolerant network with the same operating characteristics. We do not mean to imply that the type of synapse function we have explored can only be used in a Hopfield type network. In fact we feel that this type of neuron is quite general and can successfully be utilized in any type of network. This is at present just a conjecture which needs to be explored more fully. Perhaps the most important message from our work is the realization that one need not be constrained to the multiplicative type of synapse, and that other forms of synapses can perform similar functions in neural networks. This may open up many new avenues of investigation. REFERENCES D.J. Amit, H. Gutfreund and H. Sompolinsky, Phys. Rev. A32, 1007 (1985). A. Hartstein and R.H. Koch, IEEE Int. Conf. on Neural Networks, (SOS Printing, San Diego, 1988), Vol. I, 425. D.O. Hebb, The Organization of Behavior, (Wiley, New York, 1949). J.J. Hopfield, Proc. Natl. Acad. Sci. USA 79, 2554 (1982).
|
1988
|
64
|
151
|
618 NEURAL NETWORKS FOR MODEL MATCHING AND PERCEPTUAL ORGANIZATION Gene Gindi EE Department Yale University New Haven, CT 06520 Eric Mjolsness CS Department Yale University New Haven, CT 06520 ABSTRACT P. Anandan CS Department Yale University New Haven, CT 06520 We introduce an optimization approach for solving problems in computer vision that involve multiple levels of abstraction. Our objective functions include compositional and specialization hierarchies. We cast vision problems as inexact graph matching problems, formulate graph matching in terms of constrained optimization, and use analog neural networks to perform the optimization. The method is applicable to perceptual grouping and model matching. Preliminary experimental results are shown. 1 Introduction The minimization of objective functions is an attractive way to formulate and solve visual recognition problems. Such formulations are parsimonious, being expressible in several lines of algebra, and may be converted into artificial neural networks which perform the optimization. Advantages of such networks, including speed, parallelism, cheap analog computing, and biological plausibility, have been noted [Hopfield and Tank, 1985]. According to a common view of computational vision, recognition involves the construction of abstract descriptions of data governed by a data base of models. Abstractions serve as reduced descriptions of complex data useful for reasoning about the objects and events in the scene. The models indicate what objects and properties may be expected in the scene. The complexity of visual recognition demands that the models be organized into compositional hierarchies which express object-part relationships and specialization hierarchies which express object-class relationships. In this paper, we describe a methodology for expressing model-based visual recognition as the constrained minimization of an objective function.
Model-specific objective functions are used to govern the dynamic grouping of image elements into recognizable wholes. Neural networks are used to carry out the minimization. This work was supported in part by AFOSR grant F49620-88-C-0025, by DARPA grant DAAA15-87-K-0001, and by ONR grant N00014-86-0310. Previous work on optimization in vision has typically been restricted to computations occurring at a single level of abstraction and/or involving a single model [Barrow and Popplestone, 1971; Hummel and Zucker, 1983; Terzopoulos, 1986]. For example, surface interpolation schemes, even when they include discontinuities [Terzopoulos, 1986], do not include explicit models for physical objects whose surface characteristics determine the expected degree of smoothness. By contrast, heterogeneous and hierarchical model-bases often occur in non-optimization approaches to visual recognition [Hanson and Riseman, 1986], including some which use neural networks [Ballard, 1986]. We attempt to obtain greater expressibility and efficiency by incorporating hierarchies of abstraction into the optimization paradigm. 2 Casting Model Matching as Optimization We consider a type of objective function which, when minimized by a neural network, is capable of expressing many of the ideas found in Frame systems in Artificial Intelligence [Minsky, 1975]. These "Frameville" objective functions [Mjolsness et al., 1988; Mjolsness et al., 1989] are particularly well suited to applications in model-based vision, with frames acting as few-parameter abstractions of visual objects or perceptual groupings thereof. Each frame contains real-valued parameters, pointers to other frames, and pointers to predefined models (e.g. models of objects in the world) which determine what portion of the objective function acts upon a given frame.
2.1 Model Matching as Graph Matching Model matching involves finding a match between a set of frames, ultimately derived from visual data, and the predefined static models. A set of pointers represent object-part relationships between frames, and are encoded as a graph or sparse matrix called ina. That is, ina_ij = 0 unless frame j is "in" frame i as one of its parts, in which case ina_ij = 1 is a "pointer" from j to i. The expected object-part relationships between the corresponding models are encoded as a fixed graph or sparse matrix INA. A form of inexact graph-matching is required: ina should follow INA as much as is consistent with the data. A sparse match matrix M (0 ≤ M_αi ≤ 1) of dynamic variables represents the correspondence between model α and frame i. To find the best match between the two graphs one can minimize a simple objective function for this match matrix, due to Hopfield [Hopfield, 1984] (see also [Feldman et al., 1988; Malsburg, 1986]), which just counts the number of consistent rectangles (see Figure 1a): E(M) = − Σ_αβ Σ_ij INA_αβ ina_ij M_αi M_βj. (1) This expression may be understood as follows: For model α and frame i, the match value M_αi is to be increased if the neighbors of α (in the INA graph) match to the neighbors of i (in the ina graph). Figure 1: (a) Example of the Frameville rectangle rule, showing the rectangle relationship between frames (triangles) representing a wing of a plane, and the plane itself. Circles denote dynamic variables, ovals denote models, and triangles denote frames. For the plane and wing models, the first few parameters of a frame are interpreted as position, length, and orientation. (b) Frameville sibling competition among parts.
The match variables along the shaded lines (M_3,9 and M_2,7) are suppressed in favor of those along the solid lines (M_2,9 and M_3,7). Note that E(M) as defined above can be trivially minimized by setting all the elements of the match matrix to unity. However, to do so will violate additional syntactic constraints of the form h(M) = 0 which are imposed on the optimization, either exactly [Platt and Barr, 1988] or as penalty terms h²(M) added to the objective function [Hopfield and Tank, 1985]. Originally the syntactic constraints simply meant that each frame should match one model and vice versa, as in [Hopfield and Tank, 1985]. But in Frameville, a frame can match both a model and one of its specializations (described later), and a single model can match any number of instances or frames. In addition one can usually formulate constraints stating that if a model matches a frame then two distinct parts of the same model must match two distinct part frames and vice versa. We have found the following formulation to be useful: Σ_α INA_αβ M_αi − Σ_j ina_ij M_βj = 0, ∀β, i (2) and Σ_i ina_ij M_αi − Σ_β INA_αβ M_βj = 0, ∀α, j (3) where the first sum in each equation is necessary when several high-level models (or frames) share a part. (It turns out that the first sums can be forced to zero or one by other constraints.) The resulting competition is illustrated in Figure 1b. Another constraint is that M should be binary-valued, i.e., M_αi(1 − M_αi) = 0, (4) but this constraint can also be handled by a special "analog gain" term in the objective function [Hopfield and Tank, 1985] together with a penalty term c Σ_αi M_αi(1 − M_αi). In Frameville, the ina graph actually becomes variable, and is determined by a dynamic grouping or "perceptual organization" process. These new variables require new constraints, starting with ina_ij(1 − ina_ij) = 0, and including many high-level constraints which we now formulate.
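The rectangle objective (Eq. 1) and the part-consistency residuals (Eqs. 2-3) can be written down directly as tensor contractions. The tiny INA/ina graphs and match matrix below are invented for illustration: model 1 is the sole part of model 0, frame 1 the sole part of frame 0, and M matches them in the obvious way.

```python
import numpy as np

def match_energy(INA, ina, M):
    # Eq. 1: E(M) = -sum_{a,b,i,j} INA_ab * ina_ij * M_ai * M_bj,
    # i.e. minus the number of consistent rectangles.
    return -np.einsum('ab,ij,ai,bj->', INA, ina, M, M)

def part_constraints(INA, ina, M):
    # Residuals of Eq. 2 (indexed by part-model b and whole-frame i)
    # and Eq. 3 (indexed by whole-model a and part-frame j);
    # both vanish when the constraints are satisfied.
    r2 = np.einsum('ab,ai->bi', INA, M) - np.einsum('ij,bj->bi', ina, M)
    r3 = np.einsum('ij,ai->aj', ina, M) - np.einsum('ab,bj->aj', INA, M)
    return r2, r3

INA = np.array([[0., 1.], [0., 0.]])   # INA[0, 1] = 1: model 1 "in" model 0
ina = np.array([[0., 1.], [0., 0.]])   # ina[0, 1] = 1: frame 1 "in" frame 0
M = np.eye(2)                          # model 0 <-> frame 0, model 1 <-> frame 1
energy = match_energy(INA, ina, M)     # one consistent rectangle -> -1.0
r2, r3 = part_constraints(INA, ina, M) # both residuals are all zero
```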
2.2 Frames and Objective Functions Frames can be considered as bundles F_i of real-valued parameters F_ip, where p indexes the different parameters of a frame. For efficiency in computing complex arithmetic relationships, such as those involved in coordinate transformations, an analog representation of these parameters is used. A frame contains no information concerning its match criteria or control flow; instead, the match criteria are expressed as objective functions and the control flow is determined by the particular choice of a minimization technique. In Figure 1a, in order for the rectangle (1,4,9,2) to be consistent, the parameters F_4p and F_9p should satisfy a criterion dictated by models 1 and 2, such as a restriction on the difference in angles appropriate for a mildly swept back wing. Such a constraint results in the addition of the following term to the objective function: Σ_ijαβ INA_αβ ina_ij M_αi M_βj H_αβ(F_i, F_j) (5) where H_αβ(F_i, F_j) measures the deviation of the parameters of the data frames from that demanded by the models. The term H can express coordinate transformation arithmetic (e.g. H_αβ(x_i, x_j) = (1/2)[x_i − x_j − Δx_αβ]²), and its action on a frame F_i is selectively controlled or "gated" by M and ina variables. This is a fundamental extension of the distance metric paradigm in pattern recognition; because of the complexity of the visual world, we use an entire database of distance metrics H_αβ. Figure 2: Frameville specialization hierarchy. The plane model specializes along ISA links to a propeller plane or a jet plane and correspondingly the wing model specializes to prop-wing or jet-wing. Sibling match variables M_6,4 and M_4,4 compete as do M_7,9 and M_5,9. The winner in these competitions is determined by the consistency of the appropriate rectangles, e.g. if the 4-4-9-5 rectangle is more consistent than the 6-4-9-7 rectangle, then the jet model is favored over the prop model.
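A sketch of the gated term (Eq. 5) using the quadratic coordinate-difference metric given as the example. The two-model, two-frame graphs and the expected offset Δx are invented for illustration; the point is that the metric only contributes where the a-b-i-j rectangle is active.

```python
import numpy as np

def gated_check(INA, ina, M, x, delta):
    # Eq. 5 with H_ab(x_i, x_j) = 0.5 * (x_i - x_j - dx_ab)^2:
    # the parameter check is "gated" by INA, ina and the match variables,
    # so it only acts on frames bound into an active rectangle.
    diff = x[:, None] - x[None, :]                               # x_i - x_j
    H = 0.5 * (diff[None, None, :, :] - delta[:, :, None, None]) ** 2
    return np.einsum('ab,ij,ai,bj,abij->', INA, ina, M, M, H)

INA = np.array([[0., 1.], [0., 0.]])      # model 1 is a part of model 0
ina = np.array([[0., 1.], [0., 0.]])      # frame 1 is a part of frame 0
M = np.eye(2)                             # the obvious match
delta = np.array([[0., 3.], [0., 0.]])    # expected offset x_0 - x_1 = 3
perfect = gated_check(INA, ina, M, np.array([5., 2.]), delta)   # -> 0.0
off = gated_check(INA, ina, M, np.array([5., 1.]), delta)       # -> 0.5
```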
We index the models (and, indirectly, the data base of H metrics) by introducing a static graph of pointers ISA_αβ to act as both a specialization hierarchy and a discrimination network for visual recognition. A frame may simultaneously match to a model and just one of its specializations: M_αi − Σ_β ISA_αβ M_βi = 0. (6) As a result, ISA siblings compete for matches to a given frame (see Figure 2); this competition allows the network to act as a discrimination tree. Frameville networks have great expressive power, but have a potentially serious problem with cost: for n data frames and m models there may be O(nm + n²) neurons widely interconnected but sparsely activated. The number of connections is at most the number of monomials in the polynomial objective function, namely n²mf, where f is the fan-out of the INA graph. One solution to the cost problem, used in the line grouping experiments reported in [Mjolsness et al., 1989], is to restrict the flexibility of the frame system by setting most M and ina neurons to zero permanently. The few remaining variables can form an efficient data structure such as a pyramid in vision. A more flexible solution might enforce the sparseness constraints on the M and ina neurons during minimization, as well as at the fixed point. Then large savings could result from using "virtual" neurons (and connections) which are created and destroyed dynamically. This and other cost-cutting methods are a subject of continuing research. 3 Experimental Results We describe here experiments involving the recognition of simple stick figures. (Other experiments involving the perceptual grouping of lines are reported in [Mjolsness et al., 1989].) The input data (Figure 3(a)) are line segments parameterized by location x, y and orientation θ, corresponding to frame parameters F_jp (p = 1,2,3).
As seen in Figure 3(b), there are two high-level models, "T" and "L" junctions, each composed of three low-level segments. The task is to recognize instances of "T", "L", and their parts, in a translation-invariant manner. The parameter check term H_αβ of Equation 5 achieves translation invariance by checking the location and orientation of a given part relative to a designated main part and is given by: H_αβ(F_i, F_j) = Σ_p (F_ip − F_jp − Δ^p_αβ)² (7) Here F_jp and F_ip are the slots of a low-level segment frame and a high-level main part, respectively, and the quantity Δ^p_αβ is model information that stores coordinate differences. (Rotation invariance can also be formulated if a different parameterization is used.) It should be noted that absence of the main part does not preclude recognition of the high-level model. We used the unconstrained optimization technique in [Hopfield and Tank, 1985] and achieved improved results by including terms demanding that at most one model match a given frame, and that at most one high-level frame include a given low-level frame as its part [Mjolsness et al., 1989]. Figure 3(c) shows results of attempts to recognize the junctions in Figure 3(a). When initialized to random values, the network becomes trapped in unfavorable local minima of the fifth-order objective function. (But with only a single high-level model in the database, the system recognizes a shape amid noise.) If, however, the network is given a "hint" in the form of an initial state with main parts and high-level matches set correctly, the network converges to the correct state. There is a great deal of unexploited freedom in the design of the model base and its objective functions; there may be good design disciplines which avoid introducing spurious local minima. For example, it may be possible to use ISA and INA hierarchies to guide a network to the desired local minimum.
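The translation-invariant check of Eq. 7 in code. The main-part and part parameter vectors (x, y, θ) and the stored offsets Δ are made-up values for a vertical stick expected one unit above a horizontal one.

```python
import numpy as np

def h_check(F_main, F_part, delta):
    # Eq. 7: H_ab(F_i, F_j) = sum_p (F_ip - F_jp - Delta_ab^p)^2,
    # where F_i is the high-level main part and F_j a low-level segment;
    # only relative position/orientation matters, giving translation invariance.
    F_main, F_part, delta = map(np.asarray, (F_main, F_part, delta))
    return float(((F_main - F_part - delta) ** 2).sum())

# Illustrative values: a horizontal main stick at (3, 1) with a vertical
# part expected one unit above it.
main = (3.0, 1.0, 0.0)
part = (3.0, 2.0, np.pi / 2)
delta = (0.0, -1.0, -np.pi / 2)
cost = h_check(main, part, delta)      # 0.0: the placement is consistent
```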
Figure 3: (a) Input data consists of unit-length segments oriented horizontally or vertically. The task is translation-invariant recognition of three segments forming a "T" junction (e.g. sticks 1, 2, 3) or an "L" (e.g. sticks 5, 6, 7) amid extraneous noise sticks. (b) Structure of network. Models occur at two levels. INA links are shown for a "T". Each frame has three parameters: position x, y and orientation θ. Also shown are some match and ina links. The bold lines highlight a possible consistency rectangle. (c) Experimental result. The value of each dynamical variable is displayed as the relative area of the shaded portion of a circle. Matrix M_βj indicates low-level matches and M_αi indicates high-level matches. Grouping of low-level to high-level frames is indicated by the ina matrix. The parameters of the high-level frames are displayed in the matrix F_ip of linear analog neurons. (The parameters of the low-level frames, held fixed, are not displayed.) The few neurons circumscribed by a square, corresponding to correct matches for the main parts of each model, are clamped to a value near unity. Shaded circles indicate the final correct state. 4 Conclusion Frameville provides opportunities for integrating all levels of vision in a uniform notation which yields analog neural networks. Low-level models such as fixed convolution filters just require analog arithmetic for frame parameters, which is provided.
High-level vision typically requires structural matching, also provided. Qualitatively different models may be integrated by specifying their interactions, H_αβ. Acknowledgements We thank J. Utans, J. Ockerbloom and C. Garrett for the Frameville simulations. References [1] Dana Ballard. Cortical connections and parallel processing: structure and function. Behavioral and Brain Sciences, vol 9:67-120, 1986. [2] Harry G. Barrow and R. J. Popplestone. Relational descriptions in picture processing. In D. Michie, editor, Machine Intelligence 6, Edinburgh University Press, 1971. [3] Jerome A. Feldman, Mark A. Fanty, and Nigel H. Goddard. Computing with structured neural networks. IEEE Computer, 91, March 1988. [4] Allen R. Hanson and E. M. Riseman. A methodology for the development of general knowledge-based vision systems. In M. A. Arbib and A. R. Hanson, editors, Vision, Brain, and Cooperative Computation, MIT Press, 1986. [5] J. J. Hopfield. Personal communication. October 1984. [6] J. J. Hopfield and D. W. Tank. 'Neural' computation of decisions in optimization problems. Biological Cybernetics, vol. 52:141-152, 1985. [7] Robert A. Hummel and S. W. Zucker. On the foundations of relaxation labeling processes. IEEE Transactions on PAMI, vol. PAMI-5:267-287, May 1983. [8] Marvin L. Minsky. A framework for representing knowledge. In P. H. Winston, editor, The Psychology of Computer Vision, McGraw-Hill, 1975. [9] Eric Mjolsness, Gene Gindi, and P. Anandan. Optimization in Model Matching and Perceptual Organization: A First Look. Technical Report YALEU/DCS/RR-634, Yale University, June 1988. [10] Eric Mjolsness, Gene Gindi, and P. Anandan. Optimization in Model Matching and Perceptual Organization. Neural Computation, to appear. [11] John C. Platt and Alan H. Barr. Constraint methods for flexible models. Computer Graphics, 22(4), August 1988. Proceedings of SIGGRAPH '88. [12] Demetri Terzopoulos. Regularization of inverse problems involving discontinuities.
IEEE Transactions on PAMI, vol. PAMI-8:413-424, 1986. [13] Christoph von der Malsburg and Elie Bienenstock. Statistical coding and short-term synaptic plasticity: a scheme for knowledge representation in the brain. In Disordered Systems and Biological Organization, pages 247-252, Springer-Verlag, 1986.
|
1988
|
65
|
152
|
366 NEURONAL MAPS FOR SENSORY-MOTOR CONTROL IN THE BARN OWL C.D. Spence, J.C. Pearson, J.J. Gelfand, and R.M. Peterson David Sarnoff Research Center Subsidiary of SRI International CN5300 Princeton, New Jersey 08543-5300 W.E. Sullivan Department of Biology Princeton University Princeton, New Jersey 08544 ABSTRACT The barn owl has fused visual/auditory/motor representations of space in its midbrain which are used to orient the head so that visual or auditory stimuli are centered in the visual field of view. We present models and computer simulations of these structures which address various problems, including the construction of a map of space from auditory sensory information, and the problem of driving the motor system from these maps. We compare the results with biological data. INTRODUCTION Many neural network models have little resemblance to real neural structures, partly because the brain's higher functions, which they attempt to imitate, are not yet experimentally accessible. Nevertheless, some neural-net researchers are finding that the accessible structures are interesting, and that their functions are potentially useful. Our group is modeling a part of the barn owl's nervous system which orients the head to visual and auditory stimuli. The barn owl's brain stem and midbrain contain a system that locates visual and auditory stimuli in space. The system constructs an auditory map of spatial direction from the non-spatial information in the output of the two cochleae. This map is in the external nucleus of the inferior colliculus, or ICx [Knudsen and Konishi, 1978]. The ICx, along with the visual system, projects to the optic tectum, producing a fused visual and auditory map of space [Knudsen and Knudsen, 1983]. The map in the tectum is the source of target position used by the motor system for orienting the head. In the last fifteen years, biologists have determined all of the structures in the system which produces the auditory map of space in the ICx. This system provides several examples of neuronal maps, regions of tissue in which the response properties of neurons vary continuously with position in the map. (For reviews, see Knudsen, 1984; Knudsen, du Lac, and Esterly, 1987; and Konishi, 1986.) Unfortunately, the motor system and the projections from the tectum are not well known, but experimental study of them has recently begun [Masino and Knudsen, 1988]. We should eventually be able to model a complete system, from sensory input to motor output. In this paper we present several models of different parts of the head orientation system. Fig. 1 is an overview of the structures we'll discuss. In the second section of this paper we discuss models for the construction of the auditory space map in the ICx. In the third section we discuss how the optic tectum might drive the motor system. CONSTRUCTION OF AN AUDITORY MAP OF SPACE The barn owl uses two binaural cues to locate objects in space: azimuth is derived from inter-aural time or phase delay (not onset time difference), while elevation is derived from inter-aural intensity difference (due to vertical asymmetries in sensitivity). Figure 1. Overview of the neuronal system for target localization in the barn owl (head orients towards potential targets for closer scrutiny). The illustration focuses on the functional representations of the neuronal computation, and does not show all of the relevant connections. The grids represent the centrally synthesized neuronal maps and the patterns within them indicate possible patterns of neuronal activation in response to acoustic stimuli. Corresponding to these two cues are two separate processing streams which produce maps of the binaural cues, which are shown in Figs. 2-5.
The information on these maps must be merged in order to construct the map of space in the ICx. Figure 2. Standard model for the construction of an auditory map of space from maps of the binaural cues. Shading represents activity level. IID is Inter-aural Intensity Difference, ITD is Inter-aural Time Delay. A simple model for combining the two maps is shown in Fig. 2. It has not been described explicitly in the literature, but it has been hinted at [Knudsen, et al., 1987]. For this reason we have called it the standard model. Here all of the neurons representing a given time delay or azimuth in the ITD vs. frequency map project to all of the neurons representing that azimuth in the space map. Thus a stimulus with a certain ITD would excite a strip of cells representing the associated azimuth and all elevations. Similarly, all of the neurons representing a given intensity difference or elevation in the IID vs. frequency map project to all of the neurons representing that elevation in the space map. (The map of IID vs. frequency is constructed in the nucleus ventralis lemnisci lateralis pars posterior, or VLVp. VLVp neurons are said to be sensitive to intensity difference, that is they fire if the intensity difference is great enough. Neurons in the VLVp are spatially organized by their intensity difference threshold [Manley, et al., 1988]. Thus, intensity difference has a bar-chart-like representation, and our model needs some mechanism to pick out the ends of the bars.) Only the neurons at the intersection of the two strips will fire if lateral inhibition allows only those neurons receiving the most stimulation to fire. In the third section we will present a model for connections of inhibitory inter-neurons which can be applied to this model. Part of the motivation for the standard model is the problem with phase ghosts.
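The strip-intersection idea can be sketched directly: an ITD cue drives one azimuth column of the space map, an IID cue one elevation row, and a crude winner-take-all stands in for the lateral inhibition. Map sizes and the one-hot cue encoding are assumptions made for illustration.

```python
import numpy as np

def standard_model(itd_strip, iid_strip):
    # Sketch of the "standard model": the ITD cue excites a full column
    # (one azimuth, all elevations) and the IID cue a full row (one
    # elevation, all azimuths) of the space map; keeping only the
    # most-stimulated cells stands in for lateral inhibition.
    drive = itd_strip[None, :] + iid_strip[:, None]   # elevation x azimuth
    return (drive >= drive.max()).astype(float)

n_az, n_el = 8, 6
itd = np.zeros(n_az); itd[3] = 1.0    # source at azimuth channel 3
iid = np.zeros(n_el); iid[2] = 1.0    # source at elevation channel 2
space = standard_model(itd, iid)      # single active cell at (2, 3)
```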
Phase ghosts occur when the barn owl's nervous system incorrectly associates the wave fronts arriving at the two ears at high frequency. In this case, neurons in the map of ITD vs. frequency become active at locations representing a time delay which differs from the true time delay by an integer multiple of the period of the sound. Because the period varies inversely with the frequency, these phase ghosts will have apparent time delays that vary with frequency. Thus, for stimuli that are not pure tones, if the barn owl can compare the activities in the map at different frequencies, it can eliminate the ghosts. The standard model does this (Fig. 2). In the ITD vs. frequency map there are more neurons firing at the position of the true ITD than at the ghost positions, so space map neurons representing the true position will receive the most stimulation. Only those neurons representing the true position will fire because of the lateral inhibition. There is another kind of ghost which we call the multiple-source ghost (Fig. 3). If two sounds occur simultaneously, then space map neurons representing the time delay of one source and the intensity difference of the other will receive a large amount of stimulation. Lateral inhibition may suppress these ghosts, but if so, the owl should only be able to locate one source at a time. In addition, the true sources might be suppressed. The barn owl may actually suffer from this problem, although it seems unlikely if the owl has to function in a noisy environment. The relevant behavioral experiments have not yet been done. Experimental evidence does not support the standard model. The ICx receives most of its input from the lateral shell of the central nucleus of the inferior colliculus

Figure 3. Multiple-source ghosts in the standard model for the construction of the auditory space map.
For clarity, only two pure tone stimuli are represented, and their frequencies and locations are such that the "phase ghost" problem is not a factor. The black squares represent regions of cells that are above threshold. The circled regions are those that are firing in response to the ITD of one stimulus and the IID of another. These regions correspond to phantom targets.

(lateral shell of the ICc) [Knudsen, 1983]. Neurons in the lateral shell are tuned to frequency and time delay, and these parameters are mapped [Wagner, Takahashi, and Konishi, 1987]. However, they are also affected by intensity difference [Knudsen and Konishi, 1978; I. Fujita, private communication]. Thus the lateral shell does not fit the picture of the input to the ICx in the standard model; rather it is some intermediate stage in the processing. We have a model, called the lateral shell model, which does not suffer from multiple-source ghosts (Fig. 4). In this model, the lateral shell of the ICc is a three-dimensional map of frequency vs. intensity difference vs. time delay. A neuron in the map of time delay vs. frequency in the ICc core projects to all of the neurons in the three-dimensional map which represent the same time delay and frequency.

Figure 4. Lateral shell model for the construction of the auditory map of space in the ICx. f: frequency. ITD: inter-aural time delay. IID: inter-aural intensity difference.

As in the standard model, a strip of neurons is stimulated, but now the frequency tuning is preserved. The map of intensity difference vs. frequency in the nucleus ventralis lemnisci lateralis pars posterior (VLVp) [Manley, et al, 1988] projects to the three-dimensional map in a similar fashion. Lateral inhibition picks out the regions of intersection of the strips.
Neurons in the space map in the ICx receive input from the strip of neurons in the three-dimensional map which represent the appropriate time delay and intensity difference, or equivalently azimuth and elevation. Phase ghosts will be present in the three-dimensional map, but in the ICx lateral inhibition will suppress them. Multiple-source ghosts are eliminated in the lateral shell model because the sources are essentially tagged by their frequency spectra. If two sources with no common frequencies are present, there are no neurons which represent the time delay of one source, the intensity difference of another, and a frequency common to both. In the more likely case in which some frequencies are common to both sources, there will be fewer neurons firing at the ghost positions than at the real positions, so again lateral inhibition in the ICx can suppress firing at the ghost positions, exactly as it suppresses the phase ghosts. The fact that intensity and time delay information is combined before frequency tuning is lost in the ICx suggests that the owl handles multiple-source ghosts by frequency tagging. A three-dimensional map is not essential, but it is conceptually simple. Before ending this section, we should mention that others have independently thought of this model. In particular, M. Konishi and co-workers have looked for a spatial organization or mapping of intensity response properties in the lateral shell, but they have not found it. They also have said that they can't yet rule it out [M. Konishi, I. Fujita, private communication].

DRIVING THE MOTOR SYSTEM FROM MAPS

As mentioned before, all of the parts of the auditory system in the brain stem and midbrain are known, up to the optic tectum. The optic tectum has a map of spatial direction which is common to both the visual and auditory systems. In addition, it drives the motor system, so if the tectum is stimulated at a point, the barn owl's head will move to face a certain direction.
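The frequency-tagging argument can be made concrete with a toy re-implementation of the lateral shell model. The map sizes and the discrete (frequency, value) cue encoding below are our own assumptions, not the paper's: a 3-D cell fires only where same-frequency ITD and IID strips intersect, and the ICx drive for each (ITD, IID) pair sums over frequency, so a pairing of cues from two different sources that share no frequency gets no support.

```python
# Illustrative sketch of the lateral-shell model.  itd_cues / iid_cues
# are sets of (frequency, value) pairs read off the two cue maps.

def icx_drive(itd_cues, iid_cues, n_itd=4, n_iid=4, n_freq=4):
    """Return the total input to each (itd, iid) cell of the ICx,
    summed over the frequency axis of the 3-D lateral-shell map."""
    drive = {}
    for itd_v in range(n_itd):
        for iid_v in range(n_iid):
            # A 3-D cell is active only where an ITD strip and an IID
            # strip of the SAME frequency intersect.
            common = sum(1 for f in range(n_freq)
                         if (f, itd_v) in itd_cues and (f, iid_v) in iid_cues)
            drive[(itd_v, iid_v)] = common
    return drive

# Two sources with disjoint spectra: source A at (itd=1, iid=2) on
# frequencies {0, 1}; source B at (itd=3, iid=0) on frequencies {2, 3}.
itd = {(0, 1), (1, 1), (2, 3), (3, 3)}
iid = {(0, 2), (1, 2), (2, 0), (3, 0)}
d = icx_drive(itd, iid)
best = max(d.values())
print([cell for cell, v in d.items() if v == best])  # the two true sources
```

The ghost cells, e.g. (itd=1, iid=0), receive zero drive here because no single frequency carries both of those cue values, which is exactly the disambiguation the text describes.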
The new orientation is mapped in register with the auditory/visual map of spatial direction, e.g., stimulating a location which represents a stimulus eight degrees to the right of the current orientation of the head will cause the head to turn eight degrees to the right. Little is known about the projections of the tectum, although work has started [Masino and Knudsen, 1988]. There was one earlier experiment. Two electrodes were placed in one of the tecta at positions representing sensory stimuli eight degrees and sixty degrees toward the side of the head opposite to this tectum. When either position was stimulated by itself, the alert owl moved its head as expected, by eight or sixty degrees. When both were stimulated together, the head moved about forty degrees [du Lac and Knudsen, 1987]. The averaging of activity in the tectum is easy to explain. In some motor models it should be produced naturally by the activation of an agonist-antagonist muscle pair (see, for example, Grossberg and Kuperstein, 1986). In the presence of two stimuli, the tension in each muscle is the sum of the tensions it would have for either stimulus alone, so the equilibrium position should be about the average position. We have a different model, which produces a map of the average position. The connection strengths from tectal cells to an averaging map cell decrease quadratically with the difference in represented direction. A quadratic distribution of stimulation is very broad, however, so lateral inhibition is required to make the active region fairly narrow.

Figure 5. Averaging map simulation.
The upper thin rectangle is a one-dimensional version of the space map in the optic tectum. The two marks represent the position of stimulating electrodes that are simultaneously active. The lower rectangle is the averaging map, with position represented horizontally and time increasing vertically. The squares represent cell firings. Note that the activity quickly becomes centered at the average position of the two active positions in the tectum.

We have simulated a one-dimensional version of this model, with 128 cells in the tectum, and in the averaging map, 128 excitatory and 128 inhibitory cells. The excitatory cells and the inhibitory cells both receive the same quadratically weighted input from the tectum. Each inhibitory cell inhibits all of the other cells in the averaging map, both excitatory and inhibitory, except for those in a small local neighborhood. (The weights are actually proportional to one minus a gaussian with a maximum of one.) An excitatory cell receives exactly the same input as the inhibitory cell at the same location, so their voltages are the same. Because of this we only show the excitatory cells in Fig. 5. This figure shows cell position horizontally and time increasing vertically. The black squares are plotted at the time a given cell fires. We are interested in whether an architecture like this is biologically plausible. For this reason we have tried to be fairly realistic in our model. The cells obey a membrane equation:

$$C \frac{dv}{dt} = -g_l (v - v_l) - g_e (v - v_e) - g_i (v - v_i)$$

in which $C$ is the capacitance, the $g$'s are conductances, $l$ refers to leakage quantities, $e$ refers to excitatory quantities, and $i$ refers to inhibitory quantities. The output of a cell is not a real number, but spikes that occur randomly at an average rate that is a monotonic function of the voltage.
We used the usual sigmoidal function in this simulation, although the membrane equation automatically limits the voltage and hence the firing rate. A cell that spikes on a given time step affects other cells by affecting the appropriate conductance. To get the effect of a post-synaptic potential, the conductances obey the equation of a damped harmonic oscillator:

$$\frac{d^2 g}{dt^2} = -\gamma \frac{dg}{dt} - \omega^2 g$$

When a spike from an excitatory cell arrives, we increment the time derivative of $g$ by some amount. If the oscillator is overdamped or critically damped, the conductance goes up for a time and then decreases, approaching zero exponentially. We are not suggesting that a damped harmonic oscillator exists in the membranes of neurons, but it efficiently models the dynamics of synaptic transmission. The equations for the conductances also have the nice property that the effects of multiple spikes at different times add. With values for the cell parameters that agree well with experimental data, it takes about twenty milliseconds for the simulated map to settle into a fairly steady state, which is a reasonable time for the function of this map. Also, there was no need to fine tune the parameters; within a fairly wide range the effect of changing them is to change the width of the region of activity. We tried another architecture for the inhibitory interneurons, in which they received their input from the excitatory neurons and did not inhibit other inhibitory neurons. The voltages in this architecture oscillated for a very long time, without picking out a maximum. The architecture we are now using is apparently superior. Since it is quick to pick out a maximum of a broad distribution of stimulation, it should work very well in other models requiring lateral inhibition, such as the lateral shell model discussed earlier.
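As a sanity check on these dynamics, here is a minimal Euler-integration sketch of a single model cell, not the paper's 128-cell simulation: a critically damped conductance transient, triggered by incrementing dg/dt at spike arrival, transiently depolarizes a membrane obeying the equation above. Every parameter value below is an illustrative assumption, not a value from the paper.

```python
def step_conductance(g, dg, gamma=4.0, omega2=4.0, dt=0.001):
    """One Euler step of g'' = -gamma*g' - omega2*g.  With gamma^2 =
    4*omega2 the oscillator is critically damped: g rises, then decays
    toward zero without ringing."""
    d2g = -gamma * dg - omega2 * g
    return g + dt * dg, dg + dt * d2g

def step_membrane(v, g_e, g_i=0.0, C=1.0, g_l=0.1,
                  v_l=-70.0, v_e=0.0, v_i=-80.0, dt=0.001):
    """One Euler step of C dv/dt = -g_l(v-v_l) - g_e(v-v_e) - g_i(v-v_i)."""
    dv = (-g_l * (v - v_l) - g_e * (v - v_e) - g_i * (v - v_i)) / C
    return v + dt * dv

# An excitatory spike at t=0 increments dg/dt; the resulting conductance
# pulse drives the voltage up from rest before it relaxes back.
g, dg, v = 0.0, 50.0, -70.0
peak_v = v
for _ in range(5000):
    g, dg = step_conductance(g, dg)
    v = step_membrane(v, g_e=g)
    peak_v = max(peak_v, v)
print(f"peak depolarization: {peak_v:.1f} mV (rest = -70.0 mV)")
```

Because a second spike simply increments dg/dt again, the additivity of multiple spikes mentioned in the text falls out of the linearity of the conductance equation.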
CONCLUSION

We have presented models for two parts of the barn owl's visual/auditory localization and head orientation system. These models make experimentally testable predictions, and suggest architectures for artificial systems. One model constructs a map of stimulus position from maps of inter-aural intensity and timing differences. This model solves potential problems with ghosts, i.e., the representation of false sources in the presence of certain kinds of real sources. Another model computes the average value of a quantity represented on a neural map when the activity on the map has a complex distribution. This model explains recent physiological experiments. A simulation with fairly realistic model neurons has shown that a biological structure could perform this function in this way. A common feature of these models is the use of neuronal maps. We have only mentioned a few of the maps in the barn owl, and they are extremely common in other nervous systems. We think this architecture shows great promise for applications in artificial processing systems.

Acknowledgments

This work was supported by internal funds of the David Sarnoff Research Center.

References

Grossberg, S. and M. Kuperstein (1986) Neural dynamics of adaptive motor control. North-Holland.
Knudsen, E.I. (1983) J. Comp. Neurology, 218:174-186.
Knudsen, E.I. (1984) in Dynamical Aspects of Neocortical Function, G.M. Edelman, W.E. Gall, and W.M. Cowan, editors, Wiley, New York.
Knudsen, E.I. and P.F. Knudsen (1983) J. Comp. Neurology, 218:187-196.
Knudsen, E.I. and M. Konishi (1978) J. Neurophys., 41:870-884.
Knudsen, E.I., S. du Lac, and S. Esterly (1987) Ann. Rev. Neurosci., 10:41-56.
Konishi, M. (1986) Trends in Neuroscience, April.
du Lac, S. and E.I. Knudsen (1987) Soc. for Neurosci. Abstr., 112.10.
Manley, G.A., A.C. Koeppl, and M. Konishi (1988) J. Neurosci., 8:2665-2676.
Masino, T. and E.I. Knudsen (1988) Soc. for Neurosci. Abstr., 496.16.
Wagner, H., T. Takahashi, and M. Konishi (1987) J. Neurosci., 10:3105-3116.
PERFORMANCE OF SYNTHETIC NEURAL NETWORK CLASSIFICATION OF NOISY RADAR SIGNALS

S. C. Ahalt, F. D. Garber, I. Jouny, A. K. Krishnamurthy
Department of Electrical Engineering, The Ohio State University, Columbus, Ohio 43210

ABSTRACT

This study evaluates the performance of the multilayer-perceptron and the frequency-sensitive competitive learning network in identifying five commercial aircraft from radar backscatter measurements. The performance of the neural network classifiers is compared with that of the nearest-neighbor and maximum-likelihood classifiers. Our results indicate that for this problem, the neural network classifiers are relatively insensitive to changes in the network topology, and to the noise level in the training data. While, for this problem, the traditional algorithms outperform these simple neural classifiers, we feel that neural networks show the potential for improved performance.

INTRODUCTION

The design of systems that identify objects based on measurements of their radar backscatter signals has traditionally been predicated upon decision-theoretic methods of pattern recognition [1]. While it is true that these methods are characterized by a well-defined sense of optimality, they depend on the availability of accurate models for the statistical properties of the radar measurements. Synthetic neural networks are an attractive alternative for this problem, since they can learn to perform the classification from labeled training data, and do not require knowledge of statistical models [2]. The primary objectives of this investigation are: to establish the feasibility of using synthetic neural networks for the identification of radar objects, and to characterize the trade-offs between neural network and decision-theoretic methodologies for the design of radar object identification systems.
The present study is focused on the performance evaluation of systems operating on the received radar backscatter signals of five commercial aircraft: the Boeing 707, 727, 747, the DC-10, and the Concorde. In particular, we present results for the multi-layer perceptron and the frequency-sensitive competitive learning (FSCL) synthetic network models [2,3] and compare these with results for the nearest-neighbor and maximum-likelihood classification algorithms. In this paper, the performance of the classification algorithms is evaluated by means of computer simulation studies; the results are compared for a number of conditions concerning the radar environment and receiver models. The sensitivity of the neural network classifiers, with respect to the number of layers and the number of hidden units, is investigated. In each case, the results obtained using the synthetic neural network classifiers are compared with those obtained using an (optimal) maximum-likelihood classifier and a (minimum-distance) nearest-neighbor classifier.

PROBLEM DESCRIPTION

The radar system is modeled as a stepped-frequency system measuring radar backscatter at 8, 11, 17, and 28 MHz. The 8-28 MHz band of frequencies was chosen to correspond to the "resonant region" of the aircraft, i.e., frequencies with wavelengths approximately equal to the length of the object. The four specific frequencies employed for this study were pre-selected from the database maintained at The Ohio State University ElectroScience Laboratory compact radar range as the optimal features among the available measurements in this band [4]. Performance results are presented below for systems modeled as having in-phase and quadrature measurement capability (coherent systems) and for systems modeled as having only signal magnitude measurement capability (noncoherent systems).
For coherent systems, the observation vector $X = [(x_1^I, x_1^Q), (x_2^I, x_2^Q), (x_3^I, x_3^Q), (x_4^I, x_4^Q)]^T$ represents the in-phase and quadrature components of the noisy backscatter measurements of an unknown target. The elements of $X$ correspond to the complex scattering coefficient whose magnitude is the square root of the measured cross section (in units of square meters, m²), and whose complex phase is that of the measured signal at that frequency. For noncoherent systems, the observation vector $X = [a_1, a_2, a_3, a_4]^T$ consists of components which are the magnitudes of the noisy backscatter measurements corresponding to the square root of the measured cross section. For the simulation experiments, it is assumed that the received signal is the result of a superposition of the backscatter signal vector $S$ and noise vector $W$ which is modeled as samples from an additive white Gaussian process.

COHERENT MEASUREMENTS

In the case of a coherent radar system, the kth frequency component of the observation vector is given by:

$$x_k^I = s_k^I + w_k^I, \qquad x_k^Q = s_k^Q + w_k^Q \qquad (1)$$

where $s_k^I$ and $s_k^Q$ are the in-phase and quadrature components of the backscatter signal, and $w_k^I$ and $w_k^Q$ are the in-phase and quadrature components of the sample of the additive white Gaussian noise process at that frequency. Each of these components is modeled as a zero-mean Gaussian random variable with variance $\sigma^2/2$ so that the total additive noise contribution at each frequency is complex-valued Gaussian with zero mean and variance $\sigma^2$. During operation, the neural network classifier is presented with the observation vector, of dimension eight, consisting of the in-phase and quadrature components of each of the four frequency measurements:

$$X = [x_1^I, x_1^Q, x_2^I, x_2^Q, x_3^I, x_3^Q, x_4^I, x_4^Q]^T \qquad (2)$$

Typically, the neural net is trained using 200 samples of the observation vector $X$ for each of the five commercial aircraft discussed above. The desired output vectors are of the form

$$d_i = [d_{i,1}, d_{i,2}, d_{i,3}, d_{i,4}, d_{i,5}]^T \qquad (3)$$

where $d_{i,j} = 1$ for the desired aircraft and is 0 otherwise.
Thus, for example, the output vector $d_i$ for the second aircraft is $[0, 1, 0, 0, 0]^T$, with a 1 appearing in the second position. The structure of the neural nets used can be represented by $[8, n_1, \ldots, n_h, 5]$, where there are 8 input neurons, $n_i$ hidden layer neurons in the $h$ hidden layers, and 5 output neurons. The first experiment tested perceptron nets of varying architectures, as shown in Figures 1 and 2. As can be seen, there was little change in performance between the various nets. The effect of the signal-to-noise ratio of the data observed during the training phase on the performance of the perceptron was also investigated. The results are presented in Figure 3. The network showed little change in performance until a training data SNR of 20 dB was reached. We repeated this basic experiment using a winner-take-all network, the FSCL net [3]. Figure 4 shows that the performance of this network is also affected minimally by changes in network architecture. When the FSCL net is trained with noisy data, as shown in Fig. 5, the performance decreases as the SNR of the training data increases; however, the overall performance is still very close to the performance of the multi-layer perceptron. Our final coherent-data experiment compared the performance of the multi-layer perceptron, the FSCL net, a max-likelihood classifier and the nearest-neighbor classifier. The results are shown in Figure 6. For this experiment, the training data had no superimposed noise. These results show that the max-likelihood classifier is superior, but requires full knowledge of the noise distribution. On average, the FSCL net performs better than the perceptron, but the nearest-neighbor classifier performs better than either of the neural network models.
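The measurement model of Eqs. (1)-(3) is easy to reproduce in simulation. The sketch below is a hypothetical re-implementation, not the authors' code: the four scattering coefficients are made-up numbers, and the noise is drawn per Eq. (1) with variance σ²/2 per quadrature component. A noncoherent observation (the magnitudes a_k introduced earlier) is obtained by taking absolute values of the same noisy components.

```python
import math
import random

def coherent_observation(s, sigma2):
    """Eq. (1): add zero-mean Gaussian noise (variance sigma2/2 per
    quadrature component) to each complex scattering coefficient and
    return the 8-dimensional vector [x1_I, x1_Q, ..., x4_I, x4_Q]."""
    x = []
    for c in s:
        x.append(c.real + random.gauss(0.0, math.sqrt(sigma2 / 2)))  # in-phase
        x.append(c.imag + random.gauss(0.0, math.sqrt(sigma2 / 2)))  # quadrature
    return x

def noncoherent_observation(s, sigma2):
    """Magnitude-only observation [a1, ..., a4]; each a_k is Rician."""
    x = coherent_observation(s, sigma2)
    return [math.hypot(x[2 * k], x[2 * k + 1]) for k in range(len(s))]

def one_hot(i, n_classes=5):
    """Desired output vector of Eq. (3): d_ij = 1 for the target class."""
    return [1 if j == i else 0 for j in range(n_classes)]

random.seed(0)
sig = [1 + 1j, 0.5 - 2j, -1.5 + 0.2j, 2 + 0j]  # made-up signature values
print(len(coherent_observation(sig, sigma2=0.1)))     # 8 components
print(len(noncoherent_observation(sig, sigma2=0.1)))  # 4 magnitudes
print(one_hot(1))  # [0, 1, 0, 0, 0]
```

Sweeping sigma2 while holding the signatures fixed reproduces the SNR axis of the performance curves described in the text.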
Figure 1: Performance of the perceptron with different number of hidden units.
Figure 2: Performance of the perceptron with 1, 2 and 3 hidden layers.
Figure 3: Performance of the perceptron for different SNR of the training data.
Figure 4: Performance of FSCL with varying number of hidden units.
Figure 5: Performance of the FSCL network for different SNR of the training data.
Figure 6: Comparison of all four classifiers for the coherent data case.
[Each figure plots percent correct classification against SNR (dB) from -30 to 20.]

NONCOHERENT MEASUREMENTS

For the case of a noncoherent radar system model, the kth frequency component of the observation vector is given by:

$$a_k = \sqrt{(s_k^I + w_k^I)^2 + (s_k^Q + w_k^Q)^2} \qquad (4)$$

where, as before, $s_k^I$ and $s_k^Q$ are the in-phase and quadrature components of the backscatter signal, and $w_k^I$ and $w_k^Q$ are the in-phase and quadrature components of the additive white Gaussian noise.
Hence, while the underlying noise process is additive Gaussian, the resultant distribution of the observation components is Rician for the noncoherent system model. For the case of noncoherent measurements, the neural network classifier is presented with a four-dimensional observation vector whose components are the magnitudes of the noisy measurements at each of the four frequencies:

$$X = [a_1, a_2, a_3, a_4]^T \qquad (5)$$

As in the coherent case, the neural net is typically trained with 200 samples for each of the five aircraft using exemplars of the form discussed above. The structure of the neural nets in this experiment was $[4, n_1, \ldots, n_h, 5]$ and the same training and testing procedure as in the coherent case was followed. Figure 7 shows a comparison of the performance of the neural net classifiers with both the maximum-likelihood and nearest-neighbor classifiers. As before, the max-likelihood classifier out-performs the other classifiers, with the nearest-neighbor classifier second in performance, and the neural network classifiers performing roughly the same.

CONCLUSIONS

These experiments lead us to conclude that neural networks are good candidates for radar classification applications. Both of the neural network learning methods we tested have a similar performance and they are both relatively insensitive to changes in network architecture, network topology, and to the noise level of the training data. Because the methods used to implement the neural network classifiers were relatively simple, we feel that the level of performance of the neural classifiers is quite impressive. Our ongoing research is concentrating on improving neural classifier performance by introducing more sophisticated learning algorithms such as the LVQ algorithm proposed by Kohonen [5]. We are also investigating methods of improving the performance of the perceptron, for example, by increasing training time.
Figure 7: Comparison of all four classifiers for the noncoherent data case.

References

[1] B. Bhanu, "Automatic target recognition: State of the art survey," IEEE Transactions on Aerospace and Electronic Systems, vol. AES-22, no. 4, pp. 364-379, July 1986.
[2] R. P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, vol. 4, no. 2, pp. 4-22, April 1987.
[3] S. C. Ahalt, A. K. Krishnamurthy, P. Chen, and D. E. Melton, "A new competitive learning algorithm for vector quantization using neural networks," Neural Networks, 1989 (submitted).
[4] F. D. Garber, N. F. Chamberlain, and O. Snorrason, "Time-domain and frequency-domain feature selection for reliable radar target identification," in Proceedings of the IEEE 1988 National Radar Conference, pp. 79-84, Ann Arbor, MI, April 20-21, 1988.
[5] T. Kohonen, Self-Organization and Associative Memory, 2nd Ed. Berlin: Springer-Verlag, 1988.
Connectionist Learning of Expert Preferences by Comparison Training

Gerald Tesauro
IBM Thomas J. Watson Research Center, PO Box 704, Yorktown Heights, NY 10598 USA

Abstract

A new training paradigm, called the "comparison paradigm," is introduced for tasks in which a network must learn to choose a preferred pattern from a set of n alternatives, based on examples of human expert preferences. In this paradigm, the input to the network consists of two of the n alternatives, and the trained output is the expert's judgement of which pattern is better. This paradigm is applied to the learning of backgammon, a difficult board game in which the expert selects a move from a set of legal moves. With comparison training, much higher levels of performance can be achieved, with networks that are much smaller, and with coding schemes that are much simpler and easier to understand. Furthermore, it is possible to set up the network so that it always produces consistent rank-orderings.

1. Introduction

There is now widespread interest in the use of connectionist networks for real-world practical problem solving. The principal areas of application which have been studied so far involve relatively low-level signal processing and pattern recognition tasks. However, connectionist networks might also be useful in higher-level tasks which are currently tackled by expert systems and knowledge engineering approaches [2]. In this paper, we consider problem domains in which the expert is given a set of n alternatives as input (n may be either small or large), and must select the most desirable or most preferable alternative. This type of task occurs repeatedly throughout the domains of politics, business, economics, medicine, and many others. Whether it is choosing a foreign-policy option, a
weapons contractor, a course of treatment for a disease, or simply what to have for dinner, problems requiring choice are constantly being faced and solved by human experts. How might a learning system such as a connectionist network be set up to learn to make such choices from human expert examples? The immediately obvious approach is to train the network to produce a numerical output "score" for each input alternative. To make a choice, then, one would have the network score each alternative, and select the alternative with the highest score. Since the learning system learns from examples, it seems logical to train the network on a data base of examples in which a human expert has entered a numerical score for each possible choice. However, there are two major problems with such an approach. First, in many domains in which n is large, it would be tremendously time-consuming for the expert to create a data base in which each individual alternative has been painstakingly evaluated, even the vast number of obviously bad alternatives which are not even worth considering. (It is important for the network to see examples of bad alternatives, otherwise it would tend to produce high scores for everything.) More importantly, in many domains human experts do not think in terms of absolute scoring functions, and it would thus be extremely difficult to create training data containing absolute scores, because such scoring is alien to the expert's way of thinking about the problem. Instead, the most natural way to make training data is simply to record the expert in action, i.e., for each problem situation, record each of the alternatives he had to choose from, and record which one he actually selected. For these reasons, we advocate teaching the network to compare pairs of alternatives, rather than scoring individual alternatives.
In other words, the input should be two of the set of n alternatives, and the output should be a 1 or 0 depending on which of the two alternatives is better. From a set of recorded human expert preferences, one can then teach the network that the expert's choice is better than all other alternatives. One potential concern raised by this approach is that, in performance mode after the network is trained, it might be necessary to make n² comparisons to select the best alternative, whereas only n individual scores are needed in the other approach. However, the network can select the best alternative with only n comparisons by going through the list of alternatives in order, and comparing the current alternative with the best alternative seen so far. If the current alternative is better, it becomes the new best alternative, and if it is worse, it is rejected. Another potential concern is that a network which only knows how to compare might not produce a consistent rank-ordering, i.e., it might say that alternative a is better than b, b is better than c, and c is better than a, and then one does not know which alternative to select. However, we shall see later that it is possible to guarantee consistency with a constrained architecture which forces the network to come up with absolute numerical scores for individual alternatives. In the following, we shall examine the application of the comparison training paradigm to the game of backgammon, as considerable experience has already been obtained in this domain. In previous papers [7,6], a network was described which learned to play backgammon from an expert data base, using the so-called "back-propagation" learning rule [5]. In that system, the network was trained to score individual moves. In other words, the input
Connectionist Learning of Expert Preferences 101

In other words, the input consists of a move (defined by the initial position before the move and the final position after the move), and the desired output is a real number indicating the strength of the move. Henceforth we shall refer to this training paradigm as the "relative score" paradigm. While this approach produced considerable success, it had a number of serious limitations. We shall see that the comparison paradigm solves one of the most important limitations of the previous approach, with the result that the overall performance of the network is much better, the number of connections required is greatly reduced, and the network's input coding scheme is much simpler and easier to understand.

2. Previous backgammon networks

In [7], a network was described which learned to play fairly good backgammon by back-propagation learning of a large expert training set, using the relative score paradigm described previously. After training, the network was tested both by measuring its performance on a test set of positions not used in training, and by actual game play against humans and conventional computer programs. The best network was able to defeat Sun Microsystems' Gammontool, the best available commercial program, by a substantial margin, but it was still far from human expert-level performance. The basic conclusion of [7] was that it was possible to achieve decent levels of performance by this network learning procedure, but it was not an easy matter, and required substantial human intervention. The choice of a coding scheme for the input information, for example, was found to be an extremely important issue. The best coding schemes contained a great deal of domain-specific information.
The best encoding of the "raw" board information was in terms of concepts that human experts use to describe local transitions, such as "slotting," "stripping," etc. Also, a few "pre-computed features" were required in addition to the raw board information. Thus it was necessary to be a domain expert in order to design a suitable network coding scheme, and it seemed that the only way to discover the best coding scheme was by painstaking trial and error. This was somewhat disappointing, as it was hoped that the network learning procedure would automatically produce an expert backgammon network with little or no human effort.

3. Comparison paradigm network set-up

In the standard practice of back-propagation, a comparison paradigm network would have an input layer, one or more layers of hidden units, and an output layer, with full connectivity between adjacent layers. The input layer would represent two final board positions a and b, and the output layer would have just a single unit to represent which board position was better. The teacher signal for the output unit would be a 1 if board position a was better than b, and a 0 if b was better than a. The proposed comparison paradigm network would overcome the limitation of only being able to consider individual moves in isolation, without knowledge of what other alternatives are available. In addition, the sophisticated coding scheme that was developed to encode transition information would not be needed, since comparisons could be based solely on the final board states. The comparison approach offers greater sensitivity in distinguishing between close alternatives, and as stated previously, it corresponds more closely to the actual form of human expert knowledge. These advantages are formidable, but there are some important problems with the approach as currently described.
One technical problem is that the learning is significantly slower. This is because 2n comparisons per training position are presented to the network, where n ≈ 20, whereas in the relative score approach, only about 3-4 moves per position would be presented. It was therefore necessary to develop a number of technical tricks to increase the speed of the simulator code for this specific application (to be described in a future publication). A more fundamental problem with the approach, however, is the issue of consistency of network comparisons. Two properties are required for complete consistency: (1) The comparison between any two positions must be unambiguous, i.e., if the network says that a is better than b when a is presented on the left and b on the right, it had better say that a is better than b if a is on the right and b is on the left. One can show that this requires the network's output to exactly invert whenever the input board positions are swapped. (2) The comparisons must be transitive, as alluded to previously, i.e., if a is judged better than b, and b is judged better than c, the network had better judge a to be better than c. Standard unconstrained networks have no guarantee of satisfying either of these properties. After some thought, however, one realizes that the output inversion symmetry can be enforced by a symmetry relation amongst the weights in the network, and that the transitivity and rank-order consistency can be guaranteed by separability in the architecture, as illustrated in Figure 1. Here we see that this network really consists of two half-networks, one of which is only concerned with the evaluation of board position a, and the other of which is concerned only with the evaluation of board position b. (Due to the indicated symmetry relation, one needs only store one half-network in the simulator code.)
Each half-network may have one or more layers of hidden units, but as long as they are not cross-coupled, the evaluation of each of the two input board positions is boiled down to a single real number. Since real numbers always rank-order consistently, the network's comparisons are always consistent.

Figure 1: A network design for comparison training with guaranteed consistency of comparisons. Weight groups have symmetry relations W1 = W2 and W3 = -W4, which ensures that the output exactly inverts upon swapping positions in the input array. Separation of the hidden units condenses the evaluation of each final board position into a single real number, thus ensuring transitivity.

An important added benefit of this scheme is that an absolute board evaluation function is obtained in each half-network. This means that the network, to the extent that its evaluation function is accurate, has an intrinsic understanding of a given position, as opposed to merely being able to detect features which correspond to good moves. As has been emphasized by Berliner [1], an intrinsic understanding of the position is crucial for play at the highest levels, and for use of the doubling cube. Thus, this approach can serve as the basis for future progress, whereas the previous approach of scoring moves was doomed eventually to run into a dead end.

4. Results of comparison training

The training procedure for the comparison paradigm network was as follows: Networks were set up with 289 input units which encode a description of a single final board position, varying numbers of hidden units, and a single output unit. The training data was taken from a set of 400 games in which the author played both sides.
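The constrained architecture described above can be sketched in a few lines. This is an illustrative sketch under simplifying assumptions (a linear half-network with no hidden units, toy board encodings), not the paper's implementation: sharing one set of weights between the two half-networks enforces W1 = W2, and feeding the output unit the difference of the two scalar scores enforces W3 = -W4, so the output inverts exactly when the positions are swapped, and rank-ordering by the scalar score is automatically transitive.

```python
import math
import random

class ComparisonNet:
    """Two weight-tied half-networks; each condenses a board encoding
    into one real score, and the output unit sees only the score
    difference, guaranteeing consistent comparisons."""
    def __init__(self, n_inputs, seed=0):
        random.seed(seed)
        self.w = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]

    def score(self, board):
        # Shared half-network: one absolute scalar evaluation per position.
        return sum(wi * xi for wi, xi in zip(self.w, board))

    def compare(self, a, b):
        # Sigmoid of the score difference: > 0.5 means "a is better".
        return 1.0 / (1.0 + math.exp(-(self.score(a) - self.score(b))))

net = ComparisonNet(4)
a, b = [1, 0, 0, 1], [0, 1, 1, 0]
# Swapping the two input positions exactly inverts the output:
assert abs(net.compare(a, b) + net.compare(b, a) - 1.0) < 1e-9
```

Because each position receives a single real score, transitivity holds by construction; only one half-network needs to be stored, as noted above.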
This data set contains a recording of the author's preferred move for each position, and no other comments. The engaged positions in the data set were selected out (disengaged racing positions were not studied) and divided into five categories: bearoff, bearin, opponent bearoff, opponent bearin, and a default category covering everything else.

Table 1: Performance of nets of indicated size on respective test sets, as measured by fraction of positions for which net agrees with human expert choice of best move. RSP: relative score paradigm, CP: comparison paradigm.

    Type of test set    RSP net (651-12-1)    CP net (289-1)
    bearoff                    .82                  .83
    bearin                     .54                  .60
    opp. bearoff               .56                  .54
    opp. bearin                .60                  .66
    other                      .58                  .65

In each category, 200 positions chosen at random were set aside to be used as testing data; the remaining data (about 1000 positions in each category except the default category, for which about 4000 positions were used) was used to train networks which specialized in each category. The learning algorithm used was standard back-propagation with momentum and without weight decay. Performance after training is summarized in Tables 1 and 2. Table 1 gives the performance of each specialist network on the appropriate set of test positions. Results for the comparison paradigm network are shown for networks without hidden units, because it was found that the addition of hidden units did not improve the performance. (This is discussed in the following section.) We contrast these results with results of training networks in the relative score paradigm on the same training data sets. We see in Table 1 that for the bearoff and opponent bearoff specialists, there is only a small change in performance under the comparison paradigm. For the bearin and opponent bearin specialists, there is an improvement
in performance of about 6 percentage points in each case. For this particular application, this is a very substantial improvement in performance. However, the most important finding is for the default category, which is much larger and more difficult than any of the specialist categories. The default network's performance is the key factor in determining the system's overall game performance. With comparison training, we find an improvement in performance from 58% to 65%. Given the size and difficulty of this category, this can only be described as a huge improvement in performance, and is all the more remarkable when one considers that the comparison paradigm net has only 300 weights, as opposed to 8000 weights for the relative score paradigm net. Next, a combined game-playing system was set up using the specialist nets for all engaged positions. (The Gammontool evaluation function was called for racing positions.) Results are given in Table 2. Against Gammontool itself, the performance under the comparison paradigm improves from 59% to 64%. Against the author (and teacher), the performance improves from an estimated 35% (since the RSP nets are so big and slow, accurate statistics could not be obtained) to about 42%.

Table 2: Game-playing performance of composite network systems against Gammontool and against the author, as measured by fraction of games won, without counting gammons or backgammons.

    Opponent      RSP nets            CP nets
    Gammontool    .59 (500 games)     .64 (2000 games)
    Tesauro       .35 (100 games)     .42 (400 games)

Qualitatively, one notices a substantial overall improvement in the new network's level of play. But what is most striking is the network's worst-case behavior.
The previous relative-score network had particularly bad worst-case behavior: about once every other game, the network would make an atrocious blunder which would seriously jeopardize its chances of winning that game [6]. An alarming fraction of these blunders were seemingly random and could not be logically explained. The new comparison paradigm network's worst-case behavior is vastly improved in this regard. The frequency and severity of its mistakes are significantly reduced, but more importantly, its mistakes are understandable. (Some of the improvement in this respect may be due to the elimination of the noisy teacher signal described in [7].)

5. Conclusions

We have seen that, in the domain of backgammon, the introduction of the comparison training paradigm has resulted in networks which perform much better, with vastly reduced numbers of weights, and with input coding schemes that are much simpler and easier to understand. It was surprising that such high performance could be obtained in "perceptron" networks, i.e., networks without hidden units. This reminds us that one should not summarily dismiss perceptrons as uninteresting or unworthy of study because they are only capable of learning linearly separable functions [3]. A substantial component of many difficult real-world problems may lie in the linearly separable spectrum, and thus it makes sense to try perceptrons at least as a first attempt. It was also surprising that the use of hidden units in the comparison-trained networks does not improve the performance. This is unexplained, and is the subject of current research. It is, however, not without precedent: in at least one other real-world application [4], it has been found that networks with hidden units do not perform any better than networks without hidden units. More generally, one might conclude that, in training a neural network (or indeed any learning system) from human expert
examples in a complex domain, there should be a good match between the natural form of the expert's knowledge and the method by which the network is trained. For domains in which the expert must select a preferred alternative from a set of alternatives, the expert naturally thinks in terms of comparisons amongst the top few alternatives, and the comparison paradigm proposed here takes advantage of that fact. It would be possible in principle to train a network using absolute evaluations, but the creation of such a training set might be too difficult to undertake on a large scale. If the above discussion is correct, then the comparison paradigm should be useful in other applications involving expert choice, and in other learning systems besides connectionist networks. Typically expert systems are handcrafted by knowledge engineers, rather than learned from human expert examples; however, there has recently been some interest in supervised learning approaches. It will be interesting to see if the comparison paradigm proves to be useful when supervised learning procedures are applied to other domains involving expert choice. In using the comparison paradigm, it will be important to have some way to guarantee that the system's comparisons will be unambiguous and transitive. For feed-forward networks, it was shown in this paper how to guarantee this using symmetric, separated networks; it should be possible to impose similar constraints on other learning systems to enforce consistency.

References

[1] H. Berliner, "On the construction of evaluation functions for large domains," Proc. of IJCAI (1979) 53-55.
[2] S. I. Gallant, "Connectionist expert systems," Comm. ACM 31, 152-169 (1988).
[3] M. Minsky and S. Papert, Perceptrons, MIT Press, Cambridge MA (1969).
[4] N. Qian and T. J. Sejnowski, "Predicting the secondary structure of globular proteins using neural network models," J. Mol. Biol.
202, 865-884 (1988).
[5] D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols. 1 and 2, MIT Press, Cambridge MA (1986).
[6] G. Tesauro, "Neural network defeats creator in backgammon match," Univ. of Illinois, Center for Complex Systems Technical Report CCSR-88-6 (1988).
[7] G. Tesauro and T. J. Sejnowski, "A parallel network that learns to play backgammon," Artificial Intelligence, in press (1989).
WINNER-TAKE-ALL NETWORKS OF O(N) COMPLEXITY

J. Lazzaro, S. Ryckebusch, M.A. Mahowald, and C. A. Mead
California Institute of Technology
Pasadena, CA 91125

ABSTRACT

We have designed, fabricated, and tested a series of compact CMOS integrated circuits that realize the winner-take-all function. These analog, continuous-time circuits use only O(n) of interconnect to perform this function. We have also modified the winner-take-all circuit, realizing a circuit that computes local nonlinear inhibition.

Two general types of inhibition mediate activity in neural systems: subtractive inhibition, which sets a zero level for the computation, and multiplicative (nonlinear) inhibition, which regulates the gain of the computation. We report a physical realization of general nonlinear inhibition in its extreme form, known as winner-take-all. We have designed and fabricated a series of compact, completely functional CMOS integrated circuits that realize the winner-take-all function, using the full analog nature of the medium. This circuit has been used successfully as a component in several VLSI sensory systems that perform auditory localization (Lazzaro and Mead, in press) and visual stereopsis (Mahowald and Delbruck, 1988). Winner-take-all circuits with over 170 inputs function correctly in these sensory systems. We have also modified this global winner-take-all circuit, realizing a circuit that computes local nonlinear inhibition. The circuit allows multiple winners in the network, and is well suited for use in systems that represent a feature space topographically and that process several features in parallel. We have designed, fabricated, and tested a CMOS integrated circuit that computes locally the winner-take-all function of spatially ordered input.

THE WINNER-TAKE-ALL CIRCUIT

Figure 1 is a schematic diagram of the winner-take-all circuit.
A single wire, associated with the potential Vc, computes the inhibition for the entire circuit; for an n-neuron circuit, this wire is O(n) long. To compute the global inhibition, each neuron k contributes a current onto this common wire, using transistor T2k. To apply this global inhibition locally, each neuron responds to the common wire voltage Vc, using transistor T1k. This computation is continuous in time; no clocks are used. The circuit exhibits no hysteresis, and operates with a time constant related to the size of the largest input. The output representation of the circuit is not binary; the winning output encodes the logarithm of its associated input.

Figure 1: Schematic diagram of the winner-take-all circuit. Each neuron k receives a unidirectional current input Ik; the output voltages V1 ... Vn represent the result of the winner-take-all computation. If Ik = max(I1 ... In), then Vk is a logarithmic function of Ik; if Ii << Ik, then Vi ≈ 0.

A static and dynamic analysis of the two-neuron circuit illustrates these system properties. Figure 2 shows a schematic diagram of a two-neuron winner-take-all circuit. To understand the behavior of the circuit, we first consider the input condition I1 = I2 = Im. Transistors T11 and T12 have identical potentials at gate and source, and are both sinking Im; thus, the drain potentials V1 and V2 must be equal. Transistors T21 and T22 have identical source, drain, and gate potentials, and therefore must sink the identical current Ic1 = Ic2 = Ic/2. In the subthreshold region of operation, the equation Im = I0 exp(Vc/V0) describes transistors T11 and T12, where I0 is a fabrication parameter, and V0 = kT/(q kappa). Likewise, the equation Ic/2 = I0 exp((Vm - Vc)/V0), where Vm = V1 = V2, describes transistors T21 and T22. Solving for Vm(Im, Ic) yields

    Vm = V0 ln(Im/I0) + V0 ln(Ic/(2 I0)).    (1)
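Equation 1 can be checked numerically by following the same two subthreshold relations used in its derivation. The parameter values below (I0, V0, Ic, Im) are illustrative only, not measurements from the fabricated circuit.

```python
import math

I0 = 1e-15   # fabrication parameter (illustrative value)
V0 = 0.040   # kT/(q*kappa), about 40 mV
Ic = 1e-7    # bias current of the common node (illustrative)
Im = 1e-9    # equal input current to both neurons (illustrative)

# Common-wire voltage from Im = I0 * exp(Vc / V0):
Vc = V0 * math.log(Im / I0)
# Each T2 sinks Ic/2, so Ic/2 = I0 * exp((Vm - Vc) / V0):
Vm = Vc + V0 * math.log((Ic / 2) / I0)

# Equation 1 in closed form gives the same output voltage:
Vm_closed = V0 * math.log(Im / I0) + V0 * math.log(Ic / (2 * I0))
assert abs(Vm - Vm_closed) < 1e-12
print(f"Vm = {Vm:.3f} V")
```

Because Vm depends on ln(Im), doubling the equal input currents shifts the output by only V0 ln 2, the logarithmic encoding described in the text.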
Thus, for equal input currents, the circuit produces equal output voltages; this behavior is desirable for a winner-take-all circuit. In addition, the output voltage Vm logarithmically encodes the magnitude of the input current Im.

Figure 2: Schematic diagram of a two-neuron winner-take-all circuit.

The input condition I1 = Im + δi, I2 = Im illustrates the inhibitory action of the circuit. Transistor T11 must sink δi more current than in the previous example; as a result, the gate voltage of T11 rises. Transistors T11 and T12 share a common gate, however; thus, T12 must also sink Im + δi. But only Im is present at the drain of T12. To compensate, the drain voltage of T12, V2, must decrease. For small δi, the Early effect serves to decrease the current through T12, decreasing V2 linearly with δi. For large δi, T12 must leave saturation, driving V2 to approximately 0 volts. As desired, the output associated with the smaller input diminishes. For large δi, Ic2 ≈ 0, and Ic1 ≈ Ic. The equation Im + δi = I0 exp(Vc/V0) describes transistor T11, and the equation Ic = I0 exp((V1 - Vc)/V0) describes transistor T21. Solving for V1 yields

    V1 = V0 ln((Im + δi)/I0) + V0 ln(Ic/I0).    (2)

The winning output encodes the logarithm of the associated input. The symmetrical circuit topology ensures similar behavior for increases in I2 relative to I1. Equation 2 predicts the winning response of the circuit; a more complex expression, derived in (Lazzaro et al., 1989), predicts the losing and crossover response of the circuit. Figure 3 is a plot of this analysis, fit to experimental data. Figure 4 shows the wide dynamic range and logarithmic properties of the circuit; the experiment in Figure 3 is repeated for several values of I2, ranging over four orders of magnitude. The conductance of transistors T11 and T12 determines the losing response of the circuit. Variants of the winner-take-all circuit shown in (Lazzaro et
al., 1988) achieve losing responses wider and narrower than Figure 3, using circuit and mask layout techniques.

WINNER-TAKE-ALL TIME RESPONSE

A good winner-take-all circuit should be stable, and should not exhibit damped oscillations ("ringing") in response to input changes. This section explores these dynamic properties of our winner-take-all circuit, and predicts the temporal response of the circuit. Figure 8 shows the two-neuron winner-take-all circuit, with capacitances added to model dynamic behavior.

Figure 8: Schematic diagram of a two-neuron winner-take-all circuit, with capacitances added for dynamic analysis. C is a large MOS capacitor added to each neuron for smoothing; Cc models the parasitic capacitance contributed by the gates of T11 and T12, the drains of T21 and T22, and the interconnect.

(Lazzaro et al., 1988) shows a small-signal analysis of this circuit. The transfer function for the circuit has real poles, and thus the circuit is stable and does not ring, if Ic > 4I(Cc/C), where I1 ≈ I2 ≈ I. Figure 9 compares this bound with experimental data. If Ic > 4I(Cc/C), the circuit exhibits first-order behavior. The time constant C V0/I sets the dynamics of the winning neuron, where V0 = kT/(q kappa) ≈ 40 mV. The time constant C VE/I sets the dynamics of the losing neuron, where VE ≈ 50 V. Figure 10 compares these predictions with experimental data.

Figure 3: Experimental data (circles) and theory (solid lines) for a two-neuron winner-take-all circuit. I1, the input current of the first neuron, is swept about the value of I2, the input current of the second neuron; neuron voltage outputs V1 and V2 (in volts) are plotted versus normalized input current I1/I2.

Figure 4:
The experiment of Figure 3 is repeated for several values of I2; experimental data of output voltage response are plotted versus absolute input current on a log scale. The output voltage V1 = V2 is highlighted with a circle for each experiment. The dashed line is a theoretical expression confirming logarithmic behavior over four orders of magnitude (Equation 1).

Figure 9: Experimental data (circles) and theoretical statements (solid line) for a two-neuron winner-take-all circuit, showing the smallest Ic, for a given I, necessary for a first-order response to a small-signal step input.

Figure 10: Experimental data (symbols) and theoretical statements (solid line) for a two-neuron winner-take-all circuit, showing the time constant of the first-order response to a small-signal step input. The winning response (filled circles) and losing response (triangles) of a winner-take-all circuit are shown; the time constants differ by several orders of magnitude.

THE LOCAL NONLINEAR INHIBITION CIRCUIT

The winner-take-all circuit in Figure 1, as previously explained, locates the largest input to the circuit. Certain applications require a gentler form of nonlinear inhibition. Sometimes, a circuit that can represent multiple intensity scales is necessary. Without circuit modification, the winner-take-all circuit in Figure 1 can perform this task. (Lazzaro et al., 1988) explains this mode of operation. Other applications require a local winner-take-all computation, with each winner having influence over only a limited spatial area. Figure 11 shows a circuit that computes the local winner-take-all function. The circuit is identical to the original winner-take-all circuit, except that each neuron connects to its nearest neighbors with a nonlinear resistor circuit (Mead, in press). Each resistor conducts a current Ir in response to a voltage ΔV across it, where Ir = Is tanh(ΔV/(2 V0)).
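The saturating resistor law Ir = Is tanh(ΔV/(2 V0)) can be sketched directly; the values of Is and V0 below are illustrative, not taken from the chip.

```python
import math

V0 = 0.040   # thermal voltage scale, about 40 mV (illustrative)
Is = 1e-8    # saturating current of the resistor (illustrative)

def resistor_current(dV):
    """I_r = I_s * tanh(dV / (2*V0)): approximately linear for small
    voltage differences, saturating at +/- I_s for large ones."""
    return Is * math.tanh(dV / (2 * V0))

# Near the origin the resistor is roughly ohmic...
small = resistor_current(0.001)
# ...but for a large voltage difference it saturates at I_s; this
# saturation is what bounds the current a winner can draw from its
# neighbors, limiting the spatial extent of the inhibition.
large = resistor_current(0.5)
assert abs(large - Is) / Is < 1e-3
```

The saturation value Is, together with the bias current, sets the width of the inhibitory region, as the next section quantifies.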
Is, the saturating current of the resistor, is a controllable parameter. The current source Ic, present in the original winner-take-all circuit, is distributed between the resistors in the local winner-take-all circuit.

Figure 11: Schematic diagram of a section of the local winner-take-all circuit. Each neuron i receives a unidirectional current input Ii; the output voltages Vi represent the result of the local winner-take-all computation.

To understand the operation of the local winner-take-all circuit, we consider the circuit response to a spatial impulse, defined as Ik >> I, where I denotes the equal input current of every other neuron. Ik >> Ik-1 and Ik >> Ik+1, so Vck is much larger than Vc,k-1 and Vc,k+1, and the resistor circuits connecting neuron k with neuron k-1 and neuron k+1 saturate. Each resistor sinks Is current when saturated; transistor T2k thus conducts 2Is + Ic current. In the subthreshold region of operation, the equation Ik = I0 exp(Vck/V0) describes transistor T1k, and the equation 2Is + Ic = I0 exp((Vk - Vck)/V0) describes transistor
Since, for a spatial impulse, all neurons Ie - 1, Ie - 2, ... have an equal input current I, all distant neurons have the equal output (5) Similar reasoning applies for neurons Ie + 1, Ie + 2, .... The relative values of 1. and 10 determine the spatial extent of the inhibitory action. Figure 12 shows the spatial impulse response of the local winner-take-all circuit, for different settings of 1./10 , I I I I I o 2 4 6 8 10 12 14 16 Ie (Pollition) Figure 12. Experimental data showing the spatial impulse response of the local winner-take-all circuit, for values of 1./10 ranging over a factor of 12.7. Wider inhibitory responses correspond to larger ratios. For clarity, the plots are vertically displaced in 0.25 volt increments. Winner-Take-All Networks ofO(N) Complexity 711 CONCLUSIONS The circuits described in this paper use the full analog nature of MOS devices to realize an interesting class of neural computations efficiently. The circuits exploit the physics of the medium in many ways. The winner-take-all circuit uses a single wire to compute and communicate inhibition for the entire circuit. Transistor TI,. in the winner-take-all circuit uses two physical phenomena in its computation: its exponential current function encodes the logarithm of the input, and the finite conductance of the transistor defines the losing output response. As evolution exploits all the physical properties of neural devices to optimize system performance, designers of synthetic neural systems should strive to harness the full potential of the physics of their media. Acknow ledgments John Platt, John Wyatt, David Feinstein, Mark Bell, and Dave Gillespie provided mathematical insights in the analysis of the circuit. Lyn Dupre proofread the document. We thank Hewlett-Packard for computing support, and DARPA and MOSIS for chip fabrication. This work was sponsored by the Office of Naval Research and the System Development Foundation. References Lazzaro, J. 
P., Ryckebusch, S., Mahowald, M.A., and Mead, C.A. (1989). Winner-Take-All Networks of O(N) Complexity, Caltech Computer Science Department Technical Report Caltech-CS-TR-21-88.

Lazzaro, J. P., and Mead, C.A. (in press). Silicon Models of Auditory Localization, Neural Computation.

Mahowald, M.A., and Delbruck, T.I. (1988). An Analog VLSI Implementation of the Marr-Poggio Stereo Correspondence Algorithm, Abstracts of the First Annual INNS Meeting, Boston, 1988, Vol. 1, Supplement 1, p. 392.

Mead, C. A. (in press). Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley.
502 LINKS BETWEEN MARKOV MODELS AND MULTILAYER PERCEPTRONS

H. Bourlard †,‡ & C.J. Wellekens †
(†) Philips Research Laboratory, Brussels, B-1170 Belgium.
(‡) Int. Comp. Science Institute, Berkeley, CA 94704 USA.

ABSTRACT

Hidden Markov models are widely used for automatic speech recognition. They inherently incorporate the sequential character of the speech signal and are statistically trained. However, the a priori choice of the model topology limits their flexibility. Another drawback of these models is their weak discriminating power. Multilayer perceptrons are now promising tools in the connectionist approach for classification problems and have already been successfully tested on speech recognition problems. However, the sequential nature of the speech signal remains difficult to handle in that kind of machine. In this paper, a discriminant hidden Markov model is defined and it is shown how a particular multilayer perceptron with contextual and extra feedback input units can be considered as a general form of such Markov models.

INTRODUCTION

Hidden Markov models (HMM) [Jelinek, 1976; Bourlard et al., 1985] are widely used for automatic isolated and connected speech recognition. Their main advantages lie in the ability to take account of the time sequential order and variability of speech signals. However, the a priori choice of a model topology (number of states, probability distributions and transition rules) limits the flexibility of the HMMs; in particular, speech contextual information is difficult to incorporate. Another drawback of these models is their weak discriminating power. This fact is clearly illustrated in [Bourlard & Wellekens, 1989; Waibel et al., 1988] and several solutions have recently been proposed in [Bahl et al., 1986; Bourlard & Wellekens, 1989; Brown, 1987].
The multilayer perceptron (MLP) is now a familiar and promising tool in the connectionist approach for classification problems [Rumelhart et al., 1986; Lippmann, 1987] and has already been widely tested on speech recognition problems [Waibel et al., 1988; Watrous & Shastri, 1987; Bourlard & Wellekens, 1989]. However, the sequential nature of the speech signal remains difficult to handle with MLPs. It is shown here how an MLP with contextual and extra feedback input units can be considered as a form of discriminant HMM.

STOCHASTIC MODELS TRAINING CRITERIA

Stochastic speech recognition is based on the comparison of an utterance to be recognized with a set of probabilistic finite state machines known as HMMs. These are trained such that the probability P(W_i|X) that model W_i has produced the associated utterance X is maximized, but the parameter space over which this optimization is performed makes the difference between independently trained models and discriminant ones. Indeed, the probability P(W_i|X) can be written as

    P(W_i|X) = P(X|W_i) P(W_i) / P(X).   (1)

In a recognition phase, P(X) may be considered as a constant since the model parameters are fixed but, in a training phase, this probability depends on the parameters of all possible models. Taking account of the fact that the models are mutually exclusive, and letting Lambda represent the parameter set (for all possible models), (1) may then be rewritten as:

    P(W_i|X, Lambda) = P(X|W_i, Lambda) P(W_i) / sum_k P(X|W_k, Lambda) P(W_k).   (2)

Maximization of P(W_i|X, Lambda) as given by (2) is usually simplified by restricting it to the subspace of the W_i parameters. This restriction leads to the Maximum Likelihood Estimators (MLE). The summation term in the denominator is then constant over the parameter space of W_i and thus maximization of P(X|W_i, Lambda) implies that of (2). A language model provides the value of P(W_i) independently of the acoustic decoding [Jelinek, 1976].
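As a small numeric illustration of (1) and (2) (our own sketch, not from the paper; the likelihood and prior values are invented for demonstration), the posterior of each model follows from per-model likelihoods and priors, with the denominator shared by all models:

```python
import numpy as np

# Hypothetical values: P(X|W_i, Lambda) for three rival models W_1..W_3
# and their language-model priors P(W_i). These numbers are invented.
likelihoods = np.array([0.02, 0.005, 0.001])
priors = np.array([0.5, 0.3, 0.2])

joint = likelihoods * priors          # numerators of (2)
posterior = joint / joint.sum()       # denominator = P(X), common to all models
best = int(np.argmax(posterior))      # recognized model index
```

At recognition time only the numerator matters for the argmax; at training time the denominator couples the parameters of all models, which is what makes the full maximization discriminant.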
On the other hand, maximization of P(W_i|X, Lambda) with respect to the whole parameter space (i.e. the parameters of all models W_1, W_2, ...) leads to discriminant models, since it implies that the contribution of P(X|W_i, Lambda) P(W_i) should be enhanced while that of the rival models, represented by

    sum_{k != i} P(X|W_k, Lambda) P(W_k),

should be reduced. This maximization with respect to the whole parameter space has been shown equivalent to the Maximization of Mutual Information (MMI) between a model and a vector sequence [Bahl et al., 1986; Brown, 1987].

STANDARD HIDDEN MARKOV MODELS

In the regular discrete HMM, the acoustic vectors (e.g. corresponding to 10 ms speech frames) are generally quantized in a front-end processor, where each one is replaced by the closest (e.g. according to a Euclidean norm) prototype vector y_i selected in a predetermined finite set Y of cardinality I. Let Q be a set of K different states q(k), with k = 1, ..., K. Markov models are then constituted by the association (according to a predefined topology) of some of these states. If HMMs are trained along the MLE criterion, the parameters of the models (defined hereunder) must be optimized for maximizing P(X|W), where X is a training sequence of quantized acoustic vectors x_n in Y, with n = 1, ..., N, and W is its associated Markov model made up of L states q_l in Q with l = 1, ..., L. Of course, the same state may occur several times with different indices l, so that L != K in general. Let us denote by q_l^n the presence on state q_l at a given time n in [1, N]. Since the events q_l^n are mutually exclusive, the probability P(X|W) can be written, for any arbitrary n, as

    P(X|W) = sum_{l=1}^{L} P(q_l^n, X|W),   (3)

where P(q_l^n, X|W) denotes the probability that X is produced by W while associating x_n with state q_l.
Maximization of (3) can be worked out by the classical forward-backward recurrences of the Baum-Welch algorithm [Jelinek, 1976; Bourlard et al., 1985]. Maximization of P(X|W) is also usually approximated by the Viterbi criterion. It can be viewed as a simplified version of the MLE criterion where, instead of taking account of all possible state sequences in W capable of producing X, one merely considers the most probable one. To make all possible paths apparent, (3) can also be rewritten as

    P(X|W) = sum_{l_1=1}^{L} ... sum_{l_N=1}^{L} P(q_{l_1}^1, ..., q_{l_N}^N, X|W),

and the explicit formulation of the Viterbi criterion is obtained by replacing all summations by a "max" operator. Probability (3) is then approximated by:

    P*(X|W) = max_{l_1, ..., l_N} P(q_{l_1}^1, ..., q_{l_N}^N, X|W),   (4)

and can be calculated by the classical dynamic time warping (DTW) algorithm [Bourlard et al., 1985]. In that case, each training vector is then uniquely associated with only one particular transition. In both cases (MLE and Viterbi), it can be shown that, according to classical hypotheses, P(X|W) and P*(X|W) are estimated from the set of local parameters p[q(l), y_i | q(-)(k), W], for i = 1, ..., I and k, l = 1, ..., K. Notations q(-)(k) and q(l) denote states of Q observed at two consecutive instants. In the particular case of the Viterbi criterion, these parameters are estimated by:

    p[q(l), y_i | q(-)(k), W] = n_ikl / sum_{j=1}^{I} sum_{m=1}^{K} n_jkm,  for all i in [1, I] and k, l in [1, K],   (5)

where n_ikl denotes the number of times each prototype vector y_i has been associated with a particular transition from q(k) to q(l) during the training. However, if the models are trained along this formulation of the Viterbi algorithm, no discrimination is taken into account. For instance, it is interesting to observe that the local probability (5) is not the suitable measure for the labeling of a prototype vector y_i, i.e. to find the most probable state given a current input vector and a specified previous state.
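The Viterbi-style counting estimate can be sketched as follows (our own hypothetical helper, not the authors' code): given aligned training triples (prototype index i, previous state k, current state l), the local parameters of (5) are relative frequencies of the counts n_ikl, normalized over everything observed from the previous state:

```python
from collections import defaultdict

# Sketch of the counting estimate (5): n_ikl is the number of times
# prototype y_i was associated with a transition from q(k) to q(l);
# normalizing over all events leaving q(k) gives p[q(l), y_i | q(k)].
def estimate_local_params(alignments):      # alignments: (i, k, l) triples
    n = defaultdict(int)
    from_k = defaultdict(int)
    for i, k, l in alignments:
        n[(i, k, l)] += 1
        from_k[k] += 1
    return {ikl: c / from_k[ikl[1]] for ikl, c in n.items()}
```

For each previous state k, the estimates sum to one over all (i, l) pairs, which is exactly why (5) is a joint, not the posterior needed for labeling.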
Indeed, the decision should ideally be based on the Bayes rule, and not on the basis of (5). In that case, the most probable state q(l_opt) is defined by

    l_opt = argmax_l p[q(l) | y_i, q(-)(k)].   (6)

It can easily be proved that the estimates of the Bayes probabilities in (6) are:

    p[q(l) | y_i, q(-)(k)] = n_ikl / sum_{m=1}^{K} n_ikm.   (7)

In the last section, it is shown that these values can be generated at the output of a particular MLP.

DISCRIMINANT HMM

For quantized acoustic vectors and the Viterbi criterion, an alternative HMM using discriminant local probabilities can also be described. Indeed, as the correct criterion should be based on (1), comparing with (4), the "Viterbi formulation" of this probability is

    P*(W|X) = max_{l_1, ..., l_N} P(q_{l_1}^1, ..., q_{l_N}^N, W | X).   (8)

Expression (8) clearly puts the best path into evidence. The right-hand side factorizes into P(q_{l_1}^1, ..., q_{l_N}^N | X) . P(W | q_{l_1}^1, ..., q_{l_N}^N, X) and suggests two separate steps for the recognition. The first factor represents the acoustic decoding, in which the acoustic vector sequence is converted into a sequence of states. The second factor then represents a phonological and lexical step: once the sequence of states is known, the model W associated with X can be found from the state sequence without an explicit dependence on X, so that

    P(W | q_{l_1}^1, ..., q_{l_N}^N, X) = P(W | q_{l_1}^1, ..., q_{l_N}^N).

For example, if the states represent phonemes, this probability must be estimated from phonological knowledge of the vocabulary, once and for all, in a separate process without any reference to the input vector sequence. On the contrary, P(q_{l_1}^1, ..., q_{l_N}^N | X) is immediately related to the discriminant local probabilities and may be factorized as

    P(q_{l_1}^1, ..., q_{l_N}^N | X) = prod_{n=1}^{N} P(q_{l_n}^n | q_{l_1}^1, ..., q_{l_{n-1}}^{n-1}, X).   (9)

Now, each factor of (9) may be simplified by relaxing the conditional constraints. More specifically, the factors of (9) are assumed dependent on the previous state only and on a signal window of length 2p+1 centered around the current acoustic vector. The current expression of these local contributions becomes

    P(q_{l_n}^n | q_{l_{n-1}}^{n-1}, X_{n-p}^{n+p}),   (10)

where input contextual information is now taken into account, X_m^n denoting the vector sequence x_m, x_{m+1}, ..., x_n. If input contextual information is neglected (p = 0), equation (10) represents nothing else but the discriminant local probability (7) and is at the root of a discriminant discrete HMM. Of course, as for (7), these local probabilities could also be simply estimated by counting on the training set, but the exponential increase of the number of parameters with the width 2p+1 of the contextual window would require an exceedingly large storage capacity, as well as an excessive amount of training data, to obtain statistically significant parameters. It is shown in the following section how this drawback is circumvented by using an MLP. It is indeed proved that, for the training vectors, the optimal outputs of a recurrent and context-sensitive MLP are the estimates of the local probabilities (10). Given its so-called "generalization property", the MLP can then be used for interpolating on the test set. Of course, from the local contributions (10), P(W|X) can still be obtained by the classical one-stage dynamic programming [Ney, 1984; Bourlard et al., 1985]. Indeed, inside the HMM, the following dynamic programming recurrence holds:

    P*(q_l^n | X_1^n) = max_k P*(q_k^{n-1} | X_1^{n-1}) . P(q_l^n | q_k^{n-1}, X_{n-p}^{n+p}),   (11)

where parameter k runs over all possible states preceding q_l and P*(q_l^n | X_1^n) denotes the cumulated best-path probability of reaching state q_l and having emitted the partial sequence X_1^n.

RECURRENT MLP AND DISCRIMINANT HMM

Let q(k), with k = 1, ..., K, be the output units of an MLP associated with different classes (each of them corresponding to a particular state of Q) and I the number of prototype vectors y_i. Let v_i denote a particular binary input of the MLP. If no contextual information is used, v_i is the binary representation of the index i of prototype vector y_i; more precisely, a vector with all zero components but the i-th one equal to 1. In the case of contextual input, the vector v_i is obtained by concatenating several such representations of the prototype vectors belonging to a given contextual window centered on the current y_i.
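The contextual binary input just described can be sketched as follows (the function name and the edge handling at the sequence boundaries are our own assumptions; the paper does not specify them):

```python
import numpy as np

# Build the MLP input at frame n: one one-hot group of length I per frame
# in a 2p+1 window centered on n. Out-of-range frames are clamped to the
# sequence ends (an assumption).
def make_input(indices, n, I, p):
    window = [indices[max(0, min(len(indices) - 1, n + d))]
              for d in range(-p, p + 1)]
    v = np.zeros((2 * p + 1) * I)
    for g, i in enumerate(window):
        v[g * I + i] = 1.0
    return v
```

The resulting vector has 2p+1 groups of I units, exactly one unit on per group.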
The architecture of the resulting MLP is then similar to NETtalk, initially described in [Sejnowski & Rosenberg, 1987] for mapping written texts to phoneme strings. The same kind of architecture has also been proved successful in performing the classification of acoustic vector strings into phoneme strings, where each current vector was classified by taking account of its surrounding vectors [Bourlard & Wellekens, 1989]. The input field is then constituted by several groups of units, each group representing a prototype vector. Thus, if 2p+1 is the width of the contextual window, there are 2p+1 groups of I units in the input layer. However, since each acoustic vector is classified independently of the preceding classifications in such feedforward architectures, the sequential character of the speech signal is not modeled. The system has no short-term memory from one classification to the next, and successive classifications may be contradictory. This phenomenon does not appear in HMMs, since only some state sequences are permitted by the particular topology of the model. Let us assume that the training is performed on a sequence of N binary inputs {v_{i_1}, ..., v_{i_N}}, where each i_n represents the index of the prototype vector at time n (if no context) or the "index" of one of the I^(2p+1) possible inputs (in the case of a 2p+1 contextual window). Sequential classification must rely on the previous decisions, but the final goal remains the association of the current input vectors with their own classes. An MLP achieving this task will generate, for each current input vector v_{i_n} and each class q(l), l = 1, ..., K, an output value g(i_n, k_n, l) depending on the class q(k_n) in which the preceding input vector v_{i_{n-1}} was classified.
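The recurrent classification just described, with each decision conditioned on the previous one, can be sketched as a decoding loop (the `mlp` callable is a stand-in for any trained network mapping the extended input to K scores, and the initial-state choice is an assumption):

```python
import numpy as np

# Feed the previous output decision back as an extra one-hot input group,
# as in Figure 1 of the paper.
def decode(frames, mlp, K):
    prev = 0                                  # assumed initial state
    states = []
    for v in frames:
        extended = np.concatenate([np.eye(K)[prev], v])
        scores = mlp(extended)
        prev = int(np.argmax(scores))         # decision fed back at next step
        states.append(prev)
    return states
```

During training the fed-back decision would be the known correct label, which is the supervision advantage the paper emphasizes.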
Supervision comes from the a-priori knowledge of the classification of each v_{i_n}. The training of the MLP parameters is usually based on the minimization of a mean square error criterion (LMSE) [Rumelhart et al., 1986] which, with our requirements, takes the form:

    E = (1/2) sum_{n=1}^{N} sum_{l=1}^{K} [g(i_n, k_n, l) - d(i_n, l)]^2,   (12)

where d(i_n, l) represents the target value of the l-th output associated with the input vector v_{i_n}. Since the purpose is to associate each input vector with a single class, the target outputs, for a vector v_i in q(l), are:

    d(i, l) = 1,    d(i, m) = 0  for all m != l,

which can also be expressed, for each particular v_i in q(l), as d(i, m) = delta_{ml}. The target outputs d(i, l) only depend on the current input vector v_i and the considered output unit, and not on the classification of the previous one. The difference between criterion (12) and that of a memoryless machine is the additional index k_n, which takes account of the previous decision. Collecting all terms depending on the same indexes, (12) can thus be rewritten as:

    E = (1/2) sum_{i=1}^{J} sum_{k=1}^{K} sum_{l=1}^{K} sum_{m=1}^{K} n_ikl . [g(i, k, m) - d(i, m)]^2,   (13)

where J = I if the MLP input is context independent and J = I^(2p+1) if a 2p+1 contextual window is used; n_ikl represents the number of times v_i has been classified in q(l) while the previous vector was known to belong to class q(k).

(Figure 1: Recurrent and Context-Sensitive MLP. Input field: decision feedback, left context units, current vector, right context; boxes marked with a delay element.)

Thus, whatever the MLP topology, i.e. the number of its hidden layers and of units per layer, the optimal output values g_opt(i, k, m) are obtained by canceling the partial derivative of E with respect to g(i, k, m). It can easily be proved that, doing so, the optimal values for the outputs are

    g_opt(i, k, m) = n_ikm / sum_{l=1}^{K} n_ikl.   (14)

The optimal g(i, k, m)'s obtained from the minimization of the MLP criterion are thus the estimates of the Bayes probabilities, i.e.
the discriminant local probabilities defined by (7) if no context is used and by (10) in the contextual case. It is important to keep in mind that these optimal values can be reached only provided the MLP contains enough parameters and does not get stuck in a local minimum during the training. A convenient way to generate the g(i, k, l) is to modify the input as follows. For each v_{i_n}, an extended vector v'_{i_n} = (v*_n, v_{i_n}) is formed, where v*_n is an extra input vector containing the information on the decision taken at time n-1. Since output information is fed back into the input field, such an MLP has a recurrent topology. The final architecture of the corresponding MLP (with contextual information and output feedback) is represented in Figure 1 and is similar in design to the net developed in [Jordan, 1986] to produce output pattern sequences. The main advantage of this topology, when compared with other recurrent models proposed for sequential processing [Elman, 1988; Watrous, 1987], over and above the possible interpretation in terms of HMMs, is the control of the information fed back during the training. Indeed, since the training data consist of consecutive labeled speech frames, the correct sequence of output states is known and the training is supervised by providing the correct information. Replacing d(i, m) in (13) by the optimal values (14) provides a new criterion where the target outputs depend now on the current vector, the considered output and the classification of the previous vector:

    E* = (1/2) sum_{i=1}^{J} sum_{k=1}^{K} sum_{l=1}^{K} sum_{m=1}^{K} n_ikl . [g(i, k, m) - n_ikm / sum_{l'=1}^{K} n_ikl']^2,   (15)

and it is clear (by canceling the partial derivative of E* with respect to g(i, k, m)) that the lower bound of E* is reached for the same optimal outputs as (14) but is now equal to zero, which provides a very useful control parameter during the training phase.
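The claim behind (14) can be checked numerically (a sketch with invented counts, not the paper's data): for a fixed (input, previous-state) cell, the constant output vector that minimizes the summed squared error to the one-hot targets is their mean, i.e. the relative class frequencies:

```python
import numpy as np

# Invented counts n_ikl for one (i, k) cell and three classes.
counts = np.array([6, 3, 1])
# One one-hot target row per training occurrence of this cell.
targets = np.repeat(np.eye(3), counts, axis=0)
# The least-squares minimizer of sum_j ||g - d_j||^2 over a constant
# vector g is the mean of the targets: n_ikm / sum_l n_ikl, as in (14).
g_opt = targets.mean(axis=0)
```

This is the standard argument that a sufficiently flexible squared-error classifier estimates class posteriors.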
It is evident that these results directly follow from the minimized criterion and not from the topology of the model. In that respect, it is interesting to note that the same optimal values (14) may also result from other criteria such as, for instance, the entropy [Hinton, 1987] or relative entropy [Solla et al., 1988] of the targets with respect to the outputs. Indeed, in the case of relative entropy, criterion (12) is changed into:

    E_e = sum_{n=1}^{N} sum_{l=1}^{K} [ d(i_n, l) . ln( d(i_n, l) / g(i_n, k_n, l) ) + (1 - d(i_n, l)) . ln( (1 - d(i_n, l)) / (1 - g(i_n, k_n, l)) ) ],   (16)

and canceling its partial derivative with respect to g(i, k, m) yields the optimal values (14). In that case, the optimal outputs effectively correspond to E_e,min = 0. Of course, since these results are independent of the topology of the models, they remain valid for linear discriminant functions but, in that case, it is not guaranteed that the optimal values (14) can be reached. However, it has to be noted that in some particular cases, even for classes that are not linearly separable, these optimal values are already obtained with linear discriminant functions (and thus with a one-layered perceptron trained according to an LMS criterion). It is also important to point out that the same kind of recurrent MLP could be used to estimate local probabilities of higher-order Markov models, where the local contributions in (9) are no longer assumed dependent on the previous state only but also on several preceding ones. This is easily implemented by extending the input field to the information related to these preceding classifications. Another solution is to represent, in the same extra input vector, a weighted sum (e.g. exponentially decreasing with time) of the preceding outputs [Jordan, 1986].

CONCLUSION

Discrimination is an essential requirement in speech recognition and is not incorporated in the standard HMM. A discriminant HMM has been described and links between this new model and a recurrent MLP have been shown.
Recurrence permits taking account of the sequential information in the output sequence. Moreover, input contextual information is also easily captured by extending the input field. It has finally been proved that the local probabilities of the discriminant HMM may be computed (or interpolated) by the particular MLP so defined.

REFERENCES

[1] Bahl L.R., Brown P.F., de Souza P.V. & Mercer R.L. (1986). Maximum Mutual Information Estimation of Hidden Markov Model Parameters for Speech Recognition, Proc. ICASSP-86, Tokyo, pp. 49-52.
[2] Bourlard H., Kamp Y., Ney H. & Wellekens C.J. (1985). Speaker-Dependent Connected Speech Recognition via Dynamic Programming and Statistical Methods, in Speech and Speaker Recognition, Ed. M.R. Schroeder, Karger.
[3] Bourlard H. & Wellekens C.J. (1989). Speech Pattern Discrimination and Multilayer Perceptrons, Computer, Speech and Language, 3 (to appear).
[4] Brown P. (1987). The Acoustic-Modeling Problem in Automatic Speech Recognition, Ph.D. thesis, Computer Science Dept., Carnegie-Mellon University.
[5] Elman J.L. (1988). Finding Structure in Time, CRL Technical Report 8801, UCSD.
[6] Hinton G.E. (1987). Connectionist Learning Procedures, Technical Report CMU-CS-87-115.
[7] Jelinek F. (1976). Continuous Speech Recognition by Statistical Methods, Proceedings of the IEEE, vol. 64, no. 4, pp. 532-555.
[8] Jordan M.I. (1986). Serial Order: A Parallel Distributed Processing Approach, UCSD Tech. Report 8604.
[9] Lippmann R.P. (1987). An Introduction to Computing with Neural Nets, IEEE ASSP Magazine, vol. 4, pp. 4-22.
[10] Ney H. (1984). The Use of a One-Stage Dynamic Programming Algorithm for Connected Word Recognition, IEEE Trans. ASSP, vol. 32, pp. 263-271.
[11] Rumelhart D.E., Hinton G.E. & Williams R.J. (1986). Learning Internal Representations by Error Propagation, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations, Eds. D.E. Rumelhart & J.L. McClelland, MIT Press.
[12] Sejnowski T.J. & Rosenberg C.R. (1987). Parallel Networks that Learn to Pronounce English Text, Complex Systems, vol. 1, pp. 145-168.
[13] Solla S.A., Levin E. & Fleisher M. (1988). Accelerated Learning in Layered Neural Networks, AT&T Bell Labs manuscript.
[14] Waibel A., Hanazawa T., Hinton G., Shikano K. & Lang K. (1988). Phoneme Recognition Using Time-Delay Neural Networks, Proc. ICASSP-88, New York.
[15] Watrous R.L. & Shastri L. (1987). Learning Phonetic Features Using Connectionist Networks: An Experiment in Speech Recognition, Proceedings of the First International Conference on Neural Networks, San Diego, CA, pp. IV-381-388.
NEURAL NETWORK RECOGNIZER FOR HAND-WRITTEN ZIP CODE DIGITS

J. S. Denker, W. R. Gardner, H. P. Graf, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel, H. S. Baird, and I. Guyon
AT&T Bell Laboratories, Holmdel, New Jersey 07733

ABSTRACT

This paper describes the construction of a system that recognizes hand-printed digits, using a combination of classical techniques and neural-net methods. The system has been trained and tested on real-world data, derived from zip codes seen on actual U.S. Mail. The system rejects a small percentage of the examples as unclassifiable, and achieves a very low error rate on the remaining examples. The system compares favorably with other state-of-the-art recognizers. While some of the methods are specific to this task, it is hoped that many of the techniques will be applicable to a wide range of recognition tasks.

MOTIVATION

The problem of recognizing hand-written digits is of enormous practical and theoretical interest [Kahan, Pavlidis, and Baird 1987; Watanabe 1985; Pavlidis 1982]. This project has forced us to formulate and deal with a number of questions ranging from the basic psychophysics of human perception to analog integrated circuit design. This is a topic where "neural net" techniques are expected to be relevant, since the task requires closely mimicking human performance, requires massively parallel processing, involves confident conclusions based on low precision data, and requires learning from examples. It is also a task that can benefit from the high throughput potential of neural network hardware. Many different techniques were needed. This motivated us to compare various classical techniques as well as modern neural-net techniques. This provided valuable information about the strengths, weaknesses, and range of applicability of the numerous methods. The overall task is extremely complex, so we have broken it down into a great number of simpler steps.
Broadly speaking, the recognizer is divided into the preprocessor and the classifier. The two main ideas behind the preprocessor are (1) to remove meaningless variations (i.e. noise) and (2) to capture meaningful variations (i.e. salient features). Most of the results reported in this paper are based on a collection of digits taken from hand-written zip codes that appeared on real U.S. Mail passing through the Buffalo, N.Y. post office. Details will be discussed elsewhere [Denker et al., 1989]. Examples of such images are shown in Figure 1.

(Figure 1: Typical Data. Samples of hand-written zip code digits; not reproduced here.)

The digits were written by many different people, using a great variety of writing styles and instruments, with widely varying levels of care. Important parts of the task can be handled nicely by our lab's custom analog neural network VLSI chip [Graf et al., 1987; Graf & deVegvar, 1987], allowing us to perform the necessary computations in a reasonable time. Also, since the chip was not designed with image processing in mind, this provided a good test of the chip's versatility.

THE PREPROCESSOR

Acquisition

The first step is to create a digital version of the image. One must find where on the envelope the zip code is, which is a hard task in itself [Wang and Srihari 1988]. One must also separate each digit from its neighbors. This would be a relatively simple task if we could assume that a character is contiguous and is disconnected from its neighbors, but neither of these assumptions holds in practice. It is also common to find meaningless stray marks in the image. Acquisition, binarization, location, and preliminary segmentation were performed by Postal Service contractors. In some images there were extraneous marks, so we developed some simple heuristics to remove them while preserving, in most cases, all segments of a split character.
Scaling and Deskewing

At this point, the size of the image is typically 40 x 60 pixels, although the scaling routine can accept images that are arbitrarily large, or as small as 5 x 13 pixels. A translation and scale factor are then applied to make the image fit in a rectangle 20 x 32 pixels. The character is centered in the rectangle, and just touches either the horizontal or the vertical edges, whichever way fits. It is clear that any extraneous marks must be removed before this step, lest the good part of the image be radically compressed in order to make room for some wild mark. The scaling routine changes the horizontal and vertical size of the image by the same factor, so the aspect ratio of the character is preserved. As shown in Figure 1, images can differ greatly in the amount of skew, yet be considered the same digit. This is an extremely significant noise source. To remove this noise, we use the methods of [Casey 1970]; see also [Naylor 1971]. That is, we calculate the XY and YY moments of the image, and apply a linear transformation that drives the XY moment to zero. The transformation is a pure shear, not a rotation, because we find that rotation is much less common than skew. The operations of scaling and deskewing are performed in a single step. This yields a speed advantage and, more importantly, eliminates the quantization noise that would be introduced by storing the intermediate images as pixel maps, were the calculation carried out in separate steps.

Skeletonization

For the task of digit recognition, the width of the pen used to make the characters is completely meaningless, and is highly variable. It is important to remove this noise source. This is done by deleting pixels at the boundaries of thick strokes: after a few iterations of this process, each stroke will be as thin as possible. The idea is to remove as many pixels as possible without breaking the connectivity.
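The moment-based deskewing step described above can be sketched as follows (our own sketch, assuming a binary image array; the actual resampling that applies the shear is omitted):

```python
import numpy as np

# Compute the shear factor that drives the central XY moment to zero
# [Casey 1970]: shear each row horizontally by factor * y.
def deskew_shear_factor(img):
    ys, xs = np.nonzero(img)
    x = xs - xs.mean()
    y = ys - ys.mean()
    m_xy = (x * y).mean()          # central XY moment
    m_yy = (y * y).mean()          # central YY moment
    return -m_xy / m_yy            # apply as: x' = x + factor * y
```

A pure shear, rather than a rotation, leaves the vertical extent of the character untouched, matching the paper's observation that skew is far more common than rotation.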
Connectivity is based on the 8 nearest neighbors. This can be formulated as a pattern-matching problem: we search the image looking for situations in which a pixel should be deleted. The decisions can be expressed as a convolution, using a rather small kernel, since the identical decision process is repeated for each location in the image, and the decision depends on the configuration of the pixel's nearest and next-nearest neighbors. Figure 2 shows an example of a character before (e) and after (f) skeletonization. It also shows some of the templates we use for skeletonization, together with an indication of where (in the given image) each template was active. To visualize the convolution process, imagine taking a template, laying it over the image in each possible place, and asking if the template is "active" in that place. (The template is the convolution kernel; we use the two terms practically interchangeably.) The portrayal of the template uses the following code: black indicates that if the corresponding pixel in the image is ON, it will contribute +1 to the activity level of this template. Similarly, gray indicates that the corresponding pixel, if ON, will contribute -5, reducing the activity of this template. The rest of the pixels don't matter. If the net activity level exceeds a predetermined threshold, the template is considered active at this location.

(Figure 2: Skeletonization. Templates a-d, with a character before and after thinning.)

The outputs of all the skeletonizer templates are combined in a giant logical OR; that is, whenever any template is active, we conclude that the pixel presently under the center of the template should be deleted.
The skeletonization computation involves six nested loops:

    for each iteration I
      for all X in the image (horizontal coordinate)
        for all Y in the image (vertical coordinate)
          for all T in the set of template shapes
            for all P in the template (horizontal)
              for all Q in the template (vertical)
                compare image element (X+P, Y+Q) with template T element (P, Q)

The inner three loops (over T, P, and Q) are performed in parallel, in a single cycle of our special-purpose chip. The outer three loops (I, X, and Y) are performed serially, calling the chip repeatedly. The X and Y loops could be performed in parallel with no change in the algorithms; the additional parallelism would require a proportionate increase in hardware. The purpose of template a is to detect pixels at the top edge of a thick horizontal line. The three "should be OFF" (light gray shade in Figure 2) template elements enforce the requirement that this should be a boundary, while the three "should be ON" (solid black shade in Figure 2) template elements enforce the requirement that the line be at least two pixels wide. Template b is analogous to template a, but rotated 90 degrees. Its purpose is to detect pixels at the left edge of a thick vertical line. Template c is similar to, but not exactly the same as, template a rotated 180 degrees. The distinction is necessary because all templates are applied in parallel. A stroke that is only two pixels thick must not be attacked from both sides at once, lest it be removed entirely, changing the connectivity of the image. Previous convolutional line-thinning schemes [Naccache 1984] used templates of size 3 x 3, and therefore had to use several serial sub-stages. For parallel operation at least 3 x 4 kernels are needed, and 5 x 5 templates are convenient, powerful, and flexible.
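The +1 / -5 template scoring described above can be sketched as a direct convolution (a sketch only: the masks below are placeholders, not the paper's actual 5 x 5 kernels):

```python
import numpy as np

# Score each placement: ON pixels under "should be ON" cells add +1,
# ON pixels under "should be OFF" cells add -5; the template fires where
# the score meets the threshold.
def apply_template(img, on_mask, off_mask, thresh):
    H, W = img.shape
    h, w = on_mask.shape
    kernel = on_mask.astype(int) - 5 * off_mask.astype(int)
    out = np.zeros((H - h + 1, W - w + 1), dtype=bool)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + h, x:x + w] * kernel).sum() >= thresh
    return out
```

The skeletonizer would OR the output maps of all templates and delete the flagged pixels; the heavy -5 penalty means a single "should be OFF" violation vetoes a placement.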
Feature Maps

Having removed the main sources of meaningless variation, we turn to the task of extracting the meaningful information. It is known from biological studies [Hubel and Wiesel 1962] that the human vision system is sensitive to certain features that occur in images, particularly lines and the ends of lines. We therefore designed detectors for such features. Previous artificial recognizers [Watanabe 1985] have used similar feature extractors. Once again we use a convolutional method for locating the features of interest: we check each location in the image to see if each particular feature is present there. Figure 3 shows some of the templates we use, and indicates where they become active in an example image. The feature extractor templates are 7 x 7 pixels, slightly larger than the skeletonizer templates. Feature b is designed to detect the right-hand end of (approximately) horizontal strokes. This can be seen as follows: in order for the template to become active at a particular point, the image must be able to touch the "should be ON" pixels at the center of the template without touching the surrounding horseshoe-shaped collection of "must be OFF" pixels. Essentially the only way this can happen is at the right-hand end of a stroke. (An isolated dot in the image would also activate this template, but the images, at this stage, are not supposed to contain dots.) Feature d detects (approximately) horizontal strokes. There are 49 different feature extractor templates. The output of each is stored separately. These outputs are called feature maps, since they show what feature(s) occurred where in the image. It is possible, indeed likely, that several different features will occur in the same place.
Whereas the outputs of all the skeletonizer templates were combined in a very simple way (a giant OR), the outputs of the feature extractor templates are combined in various artful ways.

(Figure 3: Feature Extraction. Example templates and their activation sites in a sample image.)

For example, feature b and a similar one are OR'd to form a single combined feature that responds to right-hand ends in general. Certain other features are ANDed to form detectors for arcs (long curved strokes). There are 18 combined features, and these are what is passed to the next stage. We need to create a compact representation, but starting from the skeletonized image, we have, instead, created 18 feature maps of the same size. Fortunately, we can now return to the theme of removing meaningless variation. If a certain image contains a particular feature (say a left-hand stroke end) in the upper left corner, it is not really necessary to specify the location of that feature with great precision. To recognize the shape of the feature required considerable precision at the input to the convolution, but the position of the feature does not require so much precision at the output of the convolution. We call this Coarse Blocking or Coarse Coding of the feature maps. We find that 3 x 5 is sufficient resolution.

CLASSIFIERS

If the automatic recognizer is unable to classify a particular zip code digit, it may be possible for the Post Office to determine the correct destination by other means. This is costly, but not nearly so costly as a misclassification (substitution error) that causes the envelope to be sent to the wrong destination. Therefore it is critically important for the system to provide estimates of its confidence, and to reject digits rather than misclassify them. The objective is not simply to maximize the number of classified digits, nor to minimize the number of errors.
The objective is to minimize the cost of the whole operation, and this involves a tradeoff between the rejection rate and the error rate.

Preliminary Investigations

Several different classifiers were tried, including Parzen Windows, K nearest neighbors, highly customized layered networks, expert systems, matrix associators, feature spins, and adaptive resonance. We performed preliminary studies to identify the most promising methods. We determined that the top three methods in this list were significantly better suited to our task than the others, and we performed systematic comparisons only among those three.

Classical Clustering Methods

We used two classical clustering techniques, Parzen Windows (PW) and K Nearest Neighbors (KNN), which are nicely described in Duda and Hart [1973]. In this application, we found (as expected) that they behaved similarly, although PW consistently outperformed KNN by a small margin. These methods have many advantages, not the least of which is that they are well motivated and easily understood in terms of standard Bayesian inference theory. They are well suited to implementation on parallel computers and/or custom hardware. They provide excellent confidence information. Unlike modern adaptive network methods, PW and KNN require no "learning time". Furthermore, the performance was reproducible and responded smoothly to improvements in the preprocessor and increases in the size of the training set. This is in contrast to the "noisy" performance of typical layered networks. This reproducibility is convenient, indeed crucial, during exploratory work.

Adaptive Network Methods

In the early phases of the project, we found that neural network methods gave rather mediocre results. Later, with a high-performance preprocessor, plus a large training database, we found that a layered network gave the best results, surpassing even Parzen Windows.
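For concreteness, the two classical techniques just discussed can be sketched as follows (a toy illustration of ours, not the system's code; the kernel width h and the value of k are arbitrary). Note how the normalized Parzen scores double as the kind of confidence estimate used for rejection:

```python
import numpy as np

def knn_classify(train_x, train_y, x, k=5):
    """K Nearest Neighbors: vote among the k training vectors closest to x."""
    d = np.sum((train_x - x) ** 2, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

def parzen_classify(train_x, train_y, x, h=1.0, n_classes=10):
    """Parzen Windows: per-class Gaussian kernel density estimate at x.
    The normalized class scores serve as a Bayesian-style confidence."""
    d = np.sum((train_x - x) ** 2, axis=1)
    w = np.exp(-d / (2 * h ** 2))
    scores = np.bincount(train_y, weights=w, minlength=n_classes)
    return scores.argmax(), scores / max(scores.sum(), 1e-300)
```

Neither method needs a training pass, matching the "no learning time" observation above; all the work happens at classification time.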
We used a network with two stages of processing (i.e., two layers of weights), with 40 hidden units, using a one-sided objective function (as opposed to LMS) as described in [Denker and Wittner 1987]. The main theoretical advantage of the layered network over the classical methods is that it can form "higher order" features: conjunctions and disjunctions of the features provided by our feature extractor. Once the network is trained, it has the advantage that the classification of each input is very rapid compared to PW or KNN. Furthermore, the weights represent a compact distillation of the training data and thus have a smaller memory requirement. The network provides confidence information that is just as good as the classical methods. This is obtained by comparing the activation level of the most active output against the runner-up unit(s). To check on the effectiveness of the preprocessing stages, we applied these three classification schemes (PW, KNN, and the two-layer network) to 256-bit vectors consisting of raw bit maps of the images, with no skeletonization and no feature extraction. For each classification scheme, we found the error rate on the raw bit maps was at least a factor of 5 greater than the error rate on the feature vectors, thus clearly demonstrating the utility of feature extraction.

TESTING

It is impossible to compare the performance of recognition systems except on identical databases. Using highly motivated "friendly" writers, it is possible to get a dataset that is so clean that practically any algorithm would give outstanding results. On the other hand, if the writers are not motivated to write clearly, the result will not be classifiable by machines of any sort (nor by humans, for that matter). It would have been much easier to classify digits that were input using a mouse or bitpad, since the lines in such an image have zero thickness, and stroke-order information is available.
It would also have been much easier to recognize digits from a single writer. The most realistic test data we could obtain was provided by the US Postal Service. It consists of approximately 10,000 digits (1000 in each category) obtained from the zip codes on actual envelopes. The data we received had already been binarized and divided into images of individual digits, rather than multi-digit zip codes, but no further processing had been done. On this data set, our best performance is as follows: if 14% of the images are rejected as unclassifiable, only 1% of the remainder are misclassified. If no images are rejected, approximately 6% are misclassified. Other groups are working with the same dataset, but their results have not yet been published. Informal communications indicate that our results are among the best.

CONCLUSIONS

We have obtained very good results on this very difficult task. Our methods include low-precision and analog processing, massively parallel computation, extraction of biologically-motivated features, and learning from examples. We feel that this is, therefore, a fine example of a Neural Information Processing System. We emphasize that old-fashioned engineering, classical pattern recognition, and the latest learning-from-examples methods were all absolutely necessary. Without the careful engineering, a direct adaptive network attack would not succeed, but by the same token, without learning from a very large database, it would have been excruciating to engineer a sufficiently accurate representation of the probability space.

Acknowledgements

It is a pleasure to acknowledge useful discussions with Patrick Gallinari and technical assistance from Roger Epworth. We thank Tim Barnum of the U.S. Postal Service for making the Zip Code data available to us.

References

1. R. G. Casey, "Moment Normalization of Handprinted Characters", IBM J. Res. Develop., 548 (1970)
2. J. S.
Denker et al., "Details of the Hand-Written Character Recognizer", to be published (1989)
3. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, John Wiley and Sons (1973)
4. E. Gullichsen and E. Chang, "Pattern Classification by Neural Network: An Experimental System for Icon Recognition", Proc. IEEE First Int. Conf. on Neural Networks, San Diego, IV, 725 (1987)
5. H. P. Graf, W. Hubbard, L. D. Jackel, P. G. N. deVegvar, "A CMOS Associative Memory Chip", Proc. IEEE First Int. Conf. on Neural Networks, San Diego, III, 461 (1987)
6. H. P. Graf and P. deVegvar, "A CMOS Implementation of a Neural Network Model", Proc. 1987 Stanford Conf. Advanced Res. VLSI, P. Losleben (ed.), MIT Press, 351 (1987)
7. D. H. Hubel and T. N. Wiesel, "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex", J. Physiology 160, 106 (1962)
8. S. Kahan, T. Pavlidis, and H. S. Baird, "On the Recognition of Printed Characters of Any Font and Size", IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9, 274 (1987)
9. N. J. Naccache and R. Shinghal, "SPTA: A Proposed Algorithm for Thinning Binary Patterns", IEEE Trans. Systems, Man, and Cybernetics, SMC-14, 409 (1984)
10. W. C. Naylor, "Some Studies in the Interactive Design of Character Recognition Systems", IEEE Transactions on Computers, 1075 (1971)
11. T. Pavlidis, Algorithms for Graphics and Image Processing, Computer Science Press (1982)
12. C. Y. Suen, M. Berthod, and S. Mori, "Automatic Recognition of Handprinted Characters: The State of the Art", Proceedings of the IEEE, 68(4), 469 (1980)
13. C.-H. Wang and S. N. Srihari, "A Framework for Object Recognition in a Visually Complex Environment and its Application to Locating Address Blocks on Mail Pieces", Intl. J. Computer Vision 2, 125 (1988)
14. S. Watanabe, Pattern Recognition, John Wiley and Sons, New York (1985)
|
1988
|
70
|
158
|
ANALOG IMPLEMENTATION OF SHUNTING NEURAL NETWORKS Bahram Nabet, Robert B. Darling, and Robert B. Pinter Department of Electrical Engineering, FT-10 University of Washington Seattle, WA 98195

ABSTRACT

An extremely compact, all analog and fully parallel implementation of a class of shunting recurrent neural networks that is applicable to a wide variety of FET-based integration technologies is proposed. While the contrast enhancement, data compression, and adaptation to mean input intensity capabilities of the network are well suited for processing of sensory information or feature extraction for a content addressable memory (CAM) system, the network also admits a global Liapunov function and can thus achieve stable CAM storage itself. In addition the model can readily function as a front-end processor to an analog adaptive resonance circuit.

INTRODUCTION

Shunting neural networks are networks in which multiplicative, or shunting, terms of the form $x_i \sum_j f_j(x_j)$ or $x_i \sum_j I_j$ appear in the short term memory equations, where $x_i$ is the activity of a cell, a cell population, or an iso-potential portion of a cell, and $I_i$ are the external inputs arriving at each site. The first case is recurrent, while the second case is non-recurrent or feedforward. The polarity of these terms signifies excitatory or inhibitory interactions. Shunting network equations can be derived from various sources such as the passive membrane equation with synaptic interaction (Grossberg 1973, Pinter 1983), models of dendritic interaction (Rall 1977), or experiments on motoneurons (Ellias and Grossberg 1975). While the exact mechanisms of synaptic interaction are not known in every individual case, neurobiological evidence of shunting interactions appears in several areas such as sensory systems, cerebellum, neocortex, and hippocampus (Grossberg 1973, Pinter 1987).
In addition to neurobiology, these networks have been used to successfully explain data from disciplines ranging from population biology (Lotka 1956) to psychophysics and behavioral psychology (Grossberg 1983). Shunting nets have important advantages over additive models, which lack the extra nonlinearity introduced by the multiplicative terms. For example, the total activity of the network, given by $\sum_i x_i$, approaches a constant even as the input strength grows without bound. This normalization, in addition to being computationally desirable, has interesting ramifications in visual psychophysics (Grossberg 1983). Introduction of multiplicative terms also provides a negative feedback loop which automatically controls the gain of each cell, contributes to the stability of the network, and allows a large dynamic range of the input to be processed by the network. The automatic gain control property, in conjunction with properly chosen nonlinearities in the feedback loop, makes the network sensitive to small input values by suppressing noise while not saturating at high input values (Grossberg 1973). Finally, shunting nets have been shown to account for short term adaptation to input properties, such as adaptation level tuning and the shift of sensitivity with background strength (Grossberg 1983), dependence of visual size preference and latency of response on contrast and mean luminance, and dependence of temporal and spatial frequency tuning on contrast and mean luminance (Pinter 1985).

IMPLEMENTATION

The advantages, generality, and applicability of shunting nets as cited in the previous section make their implementation very desirable, but digital implementation of these networks is very inefficient due to the need for analog-to-digital conversion, multiplication and addition instructions, and implementation of iterative algorithms.
A linear feedback class of these networks ($x_i \sum_j f_j(x_j) = x_i \sum_j K_{ij} x_j$), however, can be implemented very efficiently with simple, completely parallel and all analog circuits.

FRAMEWORK

Figure 1 shows the design framework for analog implementation of a class of shunting nets. In this design addition (subtraction) is achieved, via Kirchhoff's current law, by placing transistors in upper (lower) rails, and through the choice of depletion or enhancement mode devices. Multiplicative, or shunting, interconnections are done by one transistor per interconnect, using a field-effect transistor (FET) in the voltage-variable conductance region. Temporal properties are characterized by the cell membrane capacitance C, which can be removed, or in effect replaced by the parasitic device capacitances, if higher speed is desired. A buffer stage is necessary for correct polarity of interconnections and the large fan-out associated with the high connectivity of neural networks.

Figure 1. Design framework for implementation of one cell in a shunting network. Voltage output of other cells is connected to the gate of transistors $Q_{i,j}$.

Such a circuit is capable of implementing the general network equation: (1)

Excitatory and inhibitory input current sources can also be shunted, with extra circuitry, to implement non-recurrent shunting networks.

NMOS, CMOS AND GALLIUM ARSENIDE

Since the basic cell of Fig. 1 is very similar to a standard logic gate inverter, but with the transistors sized by gate width-to-length ratio to operate in the nonsaturated current region, this design is applicable to a variety of FET technologies including NMOS, CMOS, and gallium arsenide (GaAs). A circuit made of all depletion-mode devices, such as GaAs MESFET buffered FET logic, can implement all the terms of Eq. (1) except shunting excitatory terms, and requires a level shifter in the buffer stage.
A design with all enhancement mode devices, such as silicon NMOS, can do the same but without a level shifter. With the addition of p-channel devices, e.g. Si CMOS, all polarities and all terms of Eq. (1) can be realized. As mentioned previously, a buffer stage is necessary for correct polarity of interconnections and fan-out/fan-in capacity. Figure 2 shows a GaAs MESFET implementation with only depletion mode devices which employs a level shifter as the buffer stage.

Figure 2. Gallium arsenide MESFET implementation with level shifter and depletion mode devices. Lower rail transistors produce shunting off-surround terms. Upper transistors can produce additive excitatory connections.

SPECIFIC IMPLEMENTATION

The simplest shunting network that can be implemented by the general framework of Fig. 1 is Fig. 2 with only inhibitory connections (lower rail transistors). This circuit implements the network model

$\frac{dx_i}{dt} = I_i - a_i x_i + x_i (K_i x_i) - x_i \sum_{j \neq i} K_{ij} x_j \qquad (2)$

The simplicity of the implementation is notable; a linear array with nearest neighbor interconnects consists of only 5 transistors, 1-3 diodes, and if required 1 capacitor per cell. A discrete element version of this implementation has been constructed and shows good agreement with expected properties. Steady state output is proportional to the square root of a uniform input, thereby compressing the input data and showing adaptation to mean input intensity (Figure 3). The network exhibits contrast enhancement of spatial edges which increases with higher mean input strength (Figure 4). A point source input elicits an on-center off-surround response, similar to the difference-of-Gaussians receptive field of many excitable cells.
This 'receptive field' becomes more pronounced as the input intensity increases, showing the dependence of spatial frequency tuning on mean input level (Figure 5). The temporal response of the network is also input dependent, since the time constant of the exponential decay of the impulse response decreases with input intensity. Finally, the dependence of the above properties on mean input strength can be tuned by varying the conductance of the central FET.

Figure 3. Response of network to uniform input. Output is proportional to the square root of the input.

Figure 4. Response of network to spatial edge patterns with the same contrast but increasing mean input level.

Figure 5. Response of network to a point source input. Inset shows the receptive field of fly's lamina monopolar cells (LMC of Lucilia sericata). Horizontal axis of inset in visual angle, vertical axis in relative voltage units of hyperpolarization. Inset from Pinter et al. (in preparation).

CONTENT ADDRESSABILITY AND RELATION TO ART

Using a theorem by Cohen and Grossberg (1983), it can be shown that the network equation (2) admits the global Liapunov function

$V = -\sum_{i=1}^{n} \left( \frac{I_i}{\lambda} \ln x_i - a_i x_i + K_i x_i^2 \right) + \frac{1}{2} \sum_{j,k=1}^{n} K_{jk}\, x_j x_k \qquad (3)$

where $\lambda$ is a constant, under the constraints $K_{ij} = K_{ji}$ and $x_i > 0$. This shows that in response to an arbitrary input the network always approaches an equilibrium point.
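As an illustrative check of these dynamics, the purely inhibitory special case of Eq. (2) (dropping the self-excitation term and using nearest-neighbor connections) can be integrated numerically; the parameter values below are ours, not the circuit's:

```python
import numpy as np

def simulate_shunting(I, a=1.0, K=1.0, dt=0.001, steps=20000):
    """Euler-integrate dx_i/dt = I_i - a*x_i - x_i * sum_j K_ij x_j with
    nearest-neighbor inhibition of strength K (self-excitation omitted)."""
    x = np.zeros(len(I))
    for _ in range(steps):
        inhib = np.zeros_like(x)
        inhib[1:] += K * x[:-1]    # inhibition from left neighbor
        inhib[:-1] += K * x[1:]    # inhibition from right neighbor
        x += dt * (I - a * x - x * inhib)
        x = np.maximum(x, 0.0)     # activities stay non-negative
    return x

x1 = simulate_shunting(np.full(9, 1.0))   # uniform input of strength 1
x4 = simulate_shunting(np.full(9, 4.0))   # uniform input of strength 4
```

For a uniform input I, an interior cell settles near the positive root of 2K x^2 + a x = I, which grows like sqrt(I/2K) for strong inputs, the square-root compression reported in Figure 3.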
The equilibria represent stored patterns, and this is the Content Addressable Memory (CAM) property. In addition, Eq. (2) is a special case of the feature representation field of an analog adaptive resonance theory (ART-2) circuit (Carpenter and Grossberg 1987), and hence this design can operate as a module in a learning multilayer ART architecture.

FUTURE PLANS

Due to the very small number of circuit components required to construct a cell, this implementation is quite adaptable to very high integration densities. A solid state implementation of the circuit of Figure 2 on a gallium arsenide substrate, chosen for its superiority for opto-electronics applications, is in progress. The chip includes monolithically fabricated photosensors for processing of visual information. All of the basic components of the circuit have been fabricated and tested. With standard 2 micron GaAs BFL design rules, a chip could contain over 1000 cells per cm², assuming an average of 20 inputs per cell.

CONCLUSIONS

The present work has the following distinguishing features:
• Implements a mathematically well described and stable model.
• Proposes a framework for implementation of shunting nets, which are biologically feasible, explain a variety of psychophysical and psychological data, and have many desirable computational properties.
• Has self-sufficient computational capabilities; especially suited for processing of sensory information in general and visual information in particular (Nabet and Darling 1988).
• Produces a 'good representation' of the input data which is also compatible with the self-organizing multilayer neural network architecture ART-2.
• Is suitable for implementation in a variety of technologies.
• Is parallel, analog, and has very little overhead circuitry.

REFERENCES

Carpenter, G.A. and Grossberg, S.
(1987) "ART 2: self organization of stable category recognition codes for analog input patterns," Applied Optics 26, pp. 4919-4930.
Cohen, M.A. and Grossberg, S. (1983) "Absolute stability of global pattern formation and parallel memory storage by competitive neural networks," IEEE Transactions on Systems, Man and Cybernetics SMC-13, pp. 815-826.
Ellias, S.A. and Grossberg, S. (1975) "Pattern formation, contrast control, and oscillations in the short term memory of shunting on-center off-surround networks," Biological Cybernetics, 20, pp. 69-98.
Grossberg, S. (1973) "Contour enhancement, short term memory and constancies in reverberating neural networks," Studies in Applied Mathematics, 52, pp. 217-257.
Grossberg, S. (1983) "The quantized geometry of visual space: the coherent computation of depth, form, and lightness," The Behavioral and Brain Sciences, 6, pp. 625-692.
Lotka, A.J. (1956). Elements of Mathematical Biology. New York: Dover.
Nabet, B. and Darling, R.B. (1988). "Implementation of optical sensory neural networks with simple discrete and monolithic circuits," (Abstract) Neural Networks, Vol. 1, Suppl. 1, 1988, pp. 396.
Pinter, R.B. (1983). "The electrophysiological bases for linear and nonlinear product term lateral inhibition and the consequences for wide-field textured stimuli," J. Theor. Biol. 105, pp. 233-243.
Pinter, R.B. (1985) "Adaptation of spatial modulation transfer functions via nonlinear lateral inhibition," Biol. Cybernetics 51, pp. 285-291.
Pinter, R.B. (1987) "Visual system neural networks: feedback and feedforward lateral inhibition," Systems and Control Encyclopedia (ed. M.G. Singh), Oxford: Pergamon Press, pp. 5060-5065.
Pinter, R.B., Osorio, D., and Srinivasan, M.V. (in preparation) "Shift of edge preference to scototaxis depends on mean luminance and is predicted by a matched filter hypothesis in fly lamina cells"
Rall, W. (1977). "Core conductor theory and cable properties of neurons," in Handbook of Physiology: The Nervous System vol.
I, part I, Ed. E.R. Kandel pp. 39-97. Bethesda, MD: American Physiological Society.
|
1988
|
71
|
159
|
594 Range Image Restoration using Mean Field Annealing Griff L. Bilbro Wesley E. Snyder Center for Communications and Signal Processing North Carolina State University Raleigh, NC

Abstract

A new optimization strategy, Mean Field Annealing, is presented. Its application to MAP restoration of noisy range images is derived and experimentally verified.

1 Introduction

The application which motivates this paper is image analysis; specifically, the analysis of range images. We [BS86] [GS87] and others [YA85] [BJ88] have found that surface curvature has the potential for providing an excellent, view-invariant feature with which to segment range images. Unfortunately, computation of curvature requires, in turn, computation of second derivatives of noisy data. We cast this task as a restoration problem: given a measurement g(x, y), we assume that g(x, y) resulted from the addition of noise to some "ideal" image f(x, y) which we must estimate from three things:

1. The measurement g(x, y).
2. The statistics of the noise, here assumed to be zero mean with variance σ².
3. Some a priori knowledge of the smoothness of the underlying surface(s).

We will turn this restoration problem into a minimization, and solve that minimization using a strategy called Mean Field Annealing. A neural net appears to be the ideal architecture for the resulting algorithm, and some work in this area has already been reported [CZVJ88].

2 Simulated Annealing and Mean Field Annealing

The strategy of SSA may be summarized as follows: let H(f) be the objective function whose minimum we seek, where f is some parameter vector. A parameter T controls the algorithm. The SSA algorithm begins at a relatively high value of T which is gradually reduced. Under certain conditions, SSA will converge to a global optimum [GG84] [RS87],

$H(\hat f) = \min \{ H(f_k) \} \quad \forall f_k \qquad (1)$

even though local minima may occur.
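As a point of reference for the comparison that follows, a minimal SSA loop with a Metropolis acceptance rule might look like this (a toy sketch of ours with an illustrative objective and schedule, not the paper's implementation):

```python
import math
import random

def ssa_minimize(H, f, temps, step=0.1, seed=0):
    """Stochastic simulated annealing: perturb one component of f at a
    time and accept with probability exp(-dH/T), so the chain samples
    p_T proportional to exp(-H/T) at each temperature in the schedule."""
    rng = random.Random(seed)
    energy = H(f)
    for T in temps:
        for i in range(len(f)):
            old = f[i]
            f[i] = old + rng.uniform(-step, step)
            new_energy = H(f)
            if new_energy <= energy or rng.random() < math.exp(-(new_energy - energy) / T):
                energy = new_energy       # accept the move
            else:
                f[i] = old                # reject: restore the component

    return f, energy

# toy quadratic objective with its minimum at f = (1, 1)
temps = [0.95 ** k for k in range(400)]   # T gradually reduced
f, e = ssa_minimize(lambda f: (f[0] - 1) ** 2 + (f[1] - 1) ** 2, [0.0, 0.0], temps)
```

Note that each iteration changes a single element of f, in contrast to the whole-vector deterministic update that MFA performs.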
However, SSA suffers from two drawbacks:
• It is slow, and
• there is no way to directly estimate [MMP87] a continuously-valued f or its derivatives.

The algorithm presented in section 2.1 perturbs (typically) a single element of f at each iteration. In Mean Field Annealing, we perturb the entire vector f at each iteration by making a deterministic calculation which lowers a certain average of H, <H(f)>, at the current temperature. We thus perform a rather conventional non-linear minimization (e.g. gradient descent), until a minimum is found at that temperature. We will refer to the minimization condition at a given T as the equilibrium for that T. Then, T is reduced, and the previous equilibrium is used as the initial condition for another minimization. MFA thus converts a hard optimization problem into a sequence of easier problems. In the next section, we justify this approach by relating it to SSA.

2.1 Stochastic Simulated Annealing

The problem to be solved is to find $\hat f$ where $\hat f$ minimizes H(f). SSA solves this minimization with the following strategy:

1. Define $p_T \propto e^{-H/T}$.
2. Find the equilibrium conditions on $p_T$ at the current temperature T. By equilibrium, we mean that any statistic of $p_T(f)$ is constant. These statistics could be derived from the Markov chain which SSA constructs: $f^0, f^1, \ldots, f^N, \ldots$, although in fact such statistical analysis is never done in normal running of an SSA algorithm.
3. Reduce T gradually.
4. As T → 0, $p_T(f)$ becomes sharply peaked at $\hat f$, the minimum.

2.2 Mean Field Annealing

In Mean Field Annealing, we provide an analytic mechanism for approximating the equilibrium at arbitrary T. In MFA, we define an error function

$E_{MF}(z, T) = -T \ln \int e^{-H_0/T}\, df + \frac{\int e^{-H_0/T} (H - H_0)\, df}{\int e^{-H_0/T}\, df} \qquad (2)$

which follows from Peierl's inequality [BGZ76]:

$F \le F_0 + \langle H - H_0 \rangle \qquad (3)$

where $F = -T \ln \int e^{-H/T}\, df$ and $F_0 = -T \ln \int e^{-H_0/T}\, df$.
The significance of $E_{MF}$ is as follows: the minimum of $E_{MF}$ determines the best approximation, given the form of $H_0$, to the equilibrium statistics of the SSA-generated MRF at T. We will then anneal on T. In the next section, we choose a special form for $H_0$ to simplify this process even further.

1. Define some $H_0(f, z)$ which will be used to estimate H(f).
2. At temperature T, minimize $E_{MF}(z)$, where $E_{MF}$ is a functional of $H_0$ and H which characterizes the difference between $H_0$ and H. The process of minimizing $E_{MF}$ will result in a value of the parameter z, which we will denote as $z_T$.
3. Define $H_T(f) = H_0(f, z_T)$ and $p_T(f) \propto e^{-H_T/T}$.

3 Image Restoration Using MFA

We choose a Hamiltonian which represents both the noise in the image, and our a priori knowledge of the local shape of the image data:

$H_N = \sum_i \frac{1}{2\sigma^2} (f_i - g_i)^2 \qquad (4)$

$H_P = \sum_i V(A(\partial f_i)) \qquad (5)$

where $\partial f_i$ represents [Bes86] the set of values of pixels neighboring pixel i (e.g. the value of f at i along with the f values at the four nearest neighbors of i); A is some scalar valued function of that set of pixels (e.g. the 5 pixel approximation to the Laplacian or the 9 pixel approximation to the quadratic variation); and

$V(x) = -\frac{b}{\sqrt{2\pi\tau}}\, e^{-x^2 / 2\tau}. \qquad (6)$

The noise term simply says that the image should be similar to the data, given noise of variance σ². The prior term drives toward solutions which are locally planar. Recently, a simpler $V(x) = x^2$ and a similar A were successfully used to design a neural net [CZVJ88] which restores images consisting of discrete, but 256-valued, pixels. Our formulation of the prior term emphasizes the importance of "point processes," as defined [WP85] by Wolberg and Pavlidis. While we accept the eventual necessity of incorporating line processes [MMP87] [Mar85] [GG84] [Gem87] into restoration, our emphasis in this paper is to provide a rigorous relationship between a point process, the prior model, and the more usual mathematical properties of surfaces.
Using range imagery in this problem makes these relationships direct. By adopting this philosophy, we can exploit the results of Grimson [Gri83] as well as those of Brady and Horn [BH83] to improve on the Laplacian. The Gaussian functional form of V is chosen because it is mathematically convenient for Boltzmann statistics and because it reflects the following shape properties recommended for grey level images in the literature, which are especially important if line processes are to be omitted: Besag [Bes86] notes that "to encourage smooth variation", V(A) "should be strictly increasing" in the absolute value of its argument, and if "occasional abrupt changes" are expected, it should "quickly reach a maximum". Rational functions with shapes similar to our V have been used in recent stochastic approaches to image processing [GM85]. In Eq. 6, τ is a "soft threshold" which represents our prior knowledge of the probability of various values of ∇²f (the Laplacian of the undegraded image). For τ large, we imply that high values of the Laplacian are common (f is highly textured); for small values of τ, we imply that f is generally smooth. We note that for high values of τ, the prior term is insignificant, and the best estimate of the image is simply the data. We choose the Mean Field Hamiltonian to be

(7)

and find that the optimal $z_T$ approximately minimizes

(8)

both at very high and very low T. We have found experimentally that this approximation to $z_T$ does anneal to a satisfactory restoration. At each temperature, we use gradient descent to find $z_T$ with the following approximation to the gradient of <H>:

(9)

and

$\bar V(r_i) = -\frac{b}{\sqrt{2\pi(T+\tau)}}\, e^{-r_i^2 / 2(T+\tau)}. \qquad (10)$

Differentiating Eq. 8 with this new notation, we find

$\frac{\partial \langle H \rangle}{\partial z_j} = \frac{z_j - g_j}{\sigma^2} + \sum_i \sum_\nu \bar V'(r_i)\, L_\nu\, \delta_{i+\nu, j}. \qquad (11)$

Since $\delta_{i+\nu, j}$ is non-zero only when $i + \nu = j$, we have

$\frac{\partial \langle H \rangle}{\partial z_j} = \frac{z_j - g_j}{\sigma^2} + \sum_\nu L_{-\nu}\, \bar V'(r_{j+\nu}) \qquad (12)$

and this derivative can be used to find the equilibrium condition.
Algorithm

1. Initially, we use the high temperature assumption, which eliminates the prior term entirely, and results in

$z_i = g_i \quad \text{for } T = \infty. \qquad (13)$

This will provide the initial estimate of z. Any other estimate quickly converges to g.
2. Given an image $z_i$, form the image $r_i = (L \otimes z)_i$, where the ⊗ indicates convolution.
3. Create the image $V_{p,i} = \bar V'(r_i) = \frac{b\, r_i}{(T+\tau)\sqrt{2\pi(T+\tau)}}\, e^{-r_i^2 / 2(T+\tau)}$.
4. Using Eq. 12, perform ordinary non-linear minimization of <H> starting from the current z. The particular strategy followed is not critical. We have successfully used steepest descent and more sophisticated conjugate gradient [PFTV88] methods. The simpler methods seem adequate for Gaussian noise.
5. Update z to the minimizing z found in step 4.
6. Reduce T and go to 2. When T is sufficiently close to 0, the algorithm is complete.

In step 6 above, τ essentially defines the appropriate low-temperature stopping point. In section 5, we will elaborate on the determination of τ and other such constants.

4 Performance

In this section, we describe the performance of the algorithm as it is applied to several range images. We will use range images, in which the data is of the form

$z = z(x, y). \qquad (14)$

4.1 Images With High Levels of Noise

Figure 1 illustrates a range image consisting of three objects: a wedge (upper left), a cylinder with rounded end and hole (right), and a trapezoidal block viewed from the top. The noise in this region is measured at σ = 3 units out of a total range of about 100 units. Unsophisticated smoothing will not estimate second derivatives of such data without blurring. Following the surface interpolation literature [Gri83] [BH83], we use the quadratic variation as the argument of the penalty function, changing the prior term to

(15)

and performing the derivative in a manner analogous to Eqs. 11 and 12. The Laplacian of the restoration is shown in Figure 2. Figure 3 shows a cross-section taken as indicated by the red line on Figure 2.
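The loop structure of steps 1-6 can be sketched as follows (our own skeleton: the gradient routine below uses a simple quadratic data-plus-smoothness model as a stand-in for Eq. (12), and the step size, schedule, and 1-D toy signal are illustrative):

```python
import numpy as np

def mfa_restore(g, grad_H, sigma=3.0, L2=20.0, T_final=1e-3,
                anneal=0.9, descent_steps=50, lr=0.1):
    """Mean Field Annealing skeleton: start from the data at high T
    (step 1), relax <H> by gradient descent at each temperature
    (steps 2-5), then cool and repeat (step 6)."""
    z = g.copy()                      # step 1: high-T estimate is the data
    T = 2.0 * L2 * sigma ** 2         # initial temperature eta = 2|L|^2 sigma^2
    while T > T_final:
        for _ in range(descent_steps):        # steps 2-5: equilibrate at T
            z = z - lr * grad_H(z, g, T)
        T *= anneal                           # step 6: reduce T
    return z

def grad_H(z, g, T, b=0.5, sigma=3.0):
    """Toy gradient: data term plus a first-difference smoothness prior
    (whose gradient is -b times the 1-D Laplacian); T is unused here."""
    lap = np.zeros_like(z)
    lap[1:-1] = z[:-2] - 2 * z[1:-1] + z[2:]
    return (z - g) / sigma ** 2 - b * lap

noisy = np.array([0., 1., 10., 1., 0.])   # a noise spike on a flat signal
restored = mfa_restore(noisy, grad_H)
```

Each temperature reuses the previous equilibrium as its starting point, which is the essential MFA structure; plugging in the true temperature-dependent gradient of Eq. (12) changes only grad_H.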
Fig. 1 Original range image. Fig. 2 Laplacian of the restored image. Fig. 3 Cross section through Laplacian along red line.

4.2 Comparison With Existing Techniques

Accurate computation of surface derivatives requires extremely good smoothing of surface noise, while segmentation requires preservation of edges. One such adaptive smoothing technique [Ter87], iterative Gaussian smoothing (IGS), has been successfully applied to range imagery [PB87]. Following this strategy, step edges are first detected, and smoothing is then applied using a small center-weighted kernel. At edges, an even smaller kernel, called a "molecule", is used to smooth right up to the edge without blurring the edge. The smoothing is then iterated.
It is straightforward to estimate σ to within 50%, and we have observed experimentally that performance of the algorithm is not sensitive to this order of error. We can analytically show that annealing occurs in the region T ≈ |L|²σ², and choose T1 = 2|L|²σ². Here, |L|² is the squared norm of the operator L; |L|² = 20 for the usual Laplacian and |L|² = 12.5 for the quadratic variation. Further analysis shows that b = √(2π)|L|σ is a good choice for the coefficient of the prior term.

References

[Bes86] J. Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, B 48(3), 1986.
[BGZ76] E. Brezin, J. C. Le Guillou, and J. Zinn-Justin. Field theoretical approach to critical phenomena. In C. Domb and M. S. Green, editors, Phase Transitions and Critical Phenomena, chapter 3, Academic Press, New York, 1976.
[BH83] M. Brady and B. K. P. Horn. Symmetric operators for surface interpolation. CVGIP, 22, 1983.
[BJ88] P. J. Besl and R. C. Jain. Segmentation through variable-order surface fitting. IEEE PAMI, 10(2), 1988.
[BS86] G. Bilbro and W. Snyder. A Linear Time Theory for Recognizing Surfaces in 3-D. Technical Report CCSP-NCSU TR-86/8, Center for Communications and Signal Processing, North Carolina State University, 1986.
[BS88] G. L. Bilbro and W. E. Snyder. Range Image Restoration Using Mean Field Annealing. Technical Report NETR 88-19, available from the Center for Communications and Signal Processing, North Carolina State University, 1988.
[CZVJ88] R. Chellappa, Y.-T. Zhou, A. Vaid, and B. K. Jenkins. Image restoration using a neural network. IEEE Transactions on ASSP, 36(7):1141-1151, July 1988.
[Gem87] D. Geman. Stochastic model for boundary detection. Vision and Image Computing, 5(2):61-65, 1987.
[GG84] D. Geman and S. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on PAMI, PAMI-6(6):721-741, November 1984.
[GM85] S. Geman and D. E. McClure. Bayesian image analysis: an application to single photon emission tomography. Proceedings of the American Statistical Association, Statistical Computing Section, 12-18, 1985.
[Gri83] W. E. L. Grimson. An implementation of computational theory of visual surface interpolation. CVGIP, 22, 1983.
[GS87] B. R. Groshong and W. E. Snyder. Range image segmentation for object recognition. In 18th Pittsburgh Conference on Modeling and Simulation, Pittsburgh, PA, April 1987.
[Mar85] J. L. Marroquin. Probabilistic Solution to Inverse Problems. PhD thesis, M.I.T., Cambridge, MA, September 1985.
[MMP87] J. Marroquin, S. Mitter, and T. Poggio. Probabilistic solution of ill-posed problems in computational vision. Journal of the American Statistical Association, 82(397):76-89, March 1987.
[PB87] J. Ponce and M. Brady. Toward a surface primal sketch. In T. Kanade, editor, Three Dimensional Machine Vision, Kluwer Press, 1987.
[PFTV88] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C. Cambridge University Press, 1988.
[RS87] F. Romeo and A. Sangiovanni-Vincentelli. Probabilistic hill climbing algorithms: properties and applications. In Chapel Hill Conference on VLSI, Computer Science Press, Chapel Hill, NC, 1987.
[Ter87] D. Terzopoulos. The role of constraints and discontinuities in visible-surface reconstruction. In Proc. of 7th International Conf. on AI, pages 1073-1077, 1987.
[WP85] G. Wolberg and T. Pavlidis. Restoration of binary images using stochastic relaxation with annealing. Pattern Recognition Letters, 3(6):375-388, December 1985.
[YA85] M. Brady, A. Yuille, and H. Asada. Describing surfaces. CVGIP, August 1985.
ANALYZING THE ENERGY LANDSCAPES OF DISTRIBUTED WINNER-TAKE-ALL NETWORKS

David S. Touretzky
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

ABSTRACT

DCPS (the Distributed Connectionist Production System) is a neural network with complex dynamical properties. Visualizing the energy landscapes of some of its component modules leads to a better intuitive understanding of the model, and suggests ways in which its dynamics can be controlled in order to improve performance on difficult cases.

INTRODUCTION

Competition through mutual inhibition appears in a wide variety of network designs. This paper discusses a system with unusually complex competitive dynamics. The system is DCPS, the Distributed Connectionist Production System of Touretzky and Hinton (1988). DCPS is a Boltzmann machine composed of five modules, two of which, labeled "Rule Space" and "Bind Space," are winner-take-all (WTA) networks. These modules interact via their effects on two attentional modules called clause spaces. Clause spaces are another type of competitive architecture based on mutual inhibition, but they do not produce WTA behavior. Both clause spaces provide evidential input to both WTA nets, but since connections are symmetric they also receive top-down "guidance" from the WTA nets. Thus, unlike most other competitive architectures, in DCPS the external input to a WTA net does not remain constant as its state evolves. Rather, the present output of the WTA net helps to determine which evidence will become visible in the clause spaces in the future. This dynamic attentional mechanism allows rule and bind spaces to work together even though they are not directly connected. DCPS actually uses a distributed version of winner-take-all networks whose operating characteristics differ slightly from the non-distributed version. Analyzing the energy landscapes of DWTA networks has led to a better intuitive understanding of their dynamics.
For a complete discussion of the role of DWTA nets in DCPS, and the ways in which insights gained from visualization led to improvements in the system's stochastic search behavior, see [Touretzky, 1989].

DISTRIBUTED WINNER-TAKE-ALL NETWORKS

In classical WTA nets [Feldman & Ballard, 1982], a unit's output value is a continuous quantity that reflects its activation level. In this paper we analyze a stochastic, distributed version of winner-take-all dynamics using Boltzmann machines, whose units have only binary outputs [Hinton & Sejnowski, 1986]. The amount of evidential input to a unit determines its energy gap [Hopfield, 1982], which in turn determines its probability of being active. The network's degree of confidence in a hypothesis is thus reflected in the amount of time the unit spends in the active state. A good instantaneous approximation to strength of support can be obtained by representing each hypothesis with a clique of k independent units looking at a common evidence pool. The number of active units in a clique reflects the strength of that hypothesis. DCPS uses cliques of size 40. Units in rival cliques compete via inhibitory connections. If all units in a clique have identical receptive fields, the result is an "ensemble" Boltzmann machine [Derthick & Tebelskis, 1988]. In DCPS the units have only moderately sized, but highly overlapped, receptive fields, so the amount of evidence individual units perceive is distributed binomially. Small excitatory weights between sibling units help make up for variations in external evidence. They also make states where all the units in a single clique are active powerful attractors. Energy tours in a DWTA take one of four basic shapes. Examples may be seen in Figure 1a. Let e be the amount of external evidence available to each unit, θ the unit's threshold, k the clique size, and w_s the excitatory weight between siblings.
The four shapes are:

Eager vee: the evidence is above threshold (e > θ). The system is eager to turn units on; energy decreases as the number of active units goes up. We have a broad, deep energy well, which the system will naturally fall into given the chance.

Reluctant vee: the evidence is below threshold, but a little bit of sibling influence (fewer than k/2 siblings) is enough to make up the difference and put the system over the energy barrier. We have e < θ < e + w_s(k-1)/2. The system is initially reluctant to turn units on because that causes the energy to go up, but once over the hump it willingly turns on more units. With all units in the clique active, the system is in an energy well whose energy is below zero.

Dimpled peak: with higher thresholds the total energy of the network may remain above zero even when all units are on. This happens when more than half of the siblings must be active to boost each unit above threshold, i.e., e + w_s(k-1) > θ > e + w_s(k-1)/2. The system can still be trapped in the small energy well that remains, but only at low temperatures. The well is hard to reach since the system must first cross a large energy barrier by traveling far uphill in energy space. Even if it does visit the well, the system may easily bounce out of it again if the well is shallow.

Smooth peak: when θ > e + w_s(k-1), units will be below threshold even with full sibling support. In this case there is no energy well, only a peak. The system wants to turn all units off.

VISUALIZING ENERGY LANDSCAPES

Let's examine the energy landscape of one WTA space when there is ample evidence in the clause spaces for the winning hypothesis. We select three hypotheses, A, B, and C, with disjoint evidence populations. Let hypothesis B be the best supported one with evidence 100, and let A have evidence 40 and C have evidence 5. We will simplify the situation slightly by assuming that all units in a clique perceive exactly the same evidence.
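The four regimes can be checked mechanically. As a sketch, assume the clique energy for m active units is E(m) = m(θ - e) - w_s·m(m-1)/2, which follows from the standard Hopfield energy when every unit sees the same evidence e; the shape then follows directly from the inequalities above:

```python
def tour_shape(e, theta, k, ws):
    """Classify a clique's energy tour from the four inequalities in the text."""
    if e > theta:
        return "eager vee"        # evidence above threshold
    if theta < e + ws * (k - 1) / 2:
        return "reluctant vee"    # e < theta < e + ws(k-1)/2
    if theta < e + ws * (k - 1):
        return "dimpled peak"     # e + ws(k-1) > theta > e + ws(k-1)/2
    return "smooth peak"          # theta > e + ws(k-1)

def clique_energy(m, e, theta, ws):
    # energy with m active units: unit terms m(theta - e),
    # minus ws for each of the m(m-1)/2 active sibling pairs
    return m * (theta - e) - ws * m * (m - 1) / 2.0

# DCPS-style values: cliques of 40 units, sibling weight +2, threshold 69
shapes = {h: tour_shape(e, 69, 40, 2) for h, e in {"A": 40, "B": 100, "C": 5}.items()}
```

With these values, A, B, and C come out as "reluctant vee", "eager vee", and "dimpled peak" respectively, while raising the threshold to 119 turns A into a smooth peak, removing its well.
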
In the left half of Figure 1b we show the energy curves for A, B, and C, using a value of 69 for the unit thresholds.1 Each curve is generated by starting with all units turned off; units for a particular hypothesis are turned on one at a time until all 40 are on; then they are turned off again one at a time, making the curve symmetric. Since the evidence for hypothesis A is a bit below threshold, its curve is of the "reluctant vee" type. The evidence for hypothesis B is well above threshold, so its curve is an "eager vee." Hypothesis C has almost no evidence; its "dimpled peak" shape is due almost entirely to sibling support. (Sibling weights have a value of +2; rival weights a value of -2.) Note that the energy well for B is considerably deeper than for A. This means at moderate temperature the model can pop out of A's energy well, but it is more likely to remain in B's well. The well for B is also somewhat broader than the well for A, making it easier for the B attractor to capture the model; its attractor region spans a larger portion of state space. The energy tours for hypotheses A, B, and C correspond to traversing three orthogonal edges extending from a corner of a 40 x 40 x 40 cube. A point at location (x, y, z) in this cube corresponds to x A units, y B units, and z C units being active. During the stochastic search, A and B units will be flickering on and off simultaneously, so the model will also visit internal points of the cube not covered in the energy tour diagram. To see these points we will use two additional graphic representations of energy landscapes. First, note that hypothesis C gets so little support that we can safely ignore it and concentrate on A and B. This allows us to focus on just the front face of the state space cube. In Figure 2a, the number of active A units runs from zero to forty along the vertical axis, and the number of active B units runs from zero to forty along the horizontal axis.
The arrows at each point on the graph show legal state transitions at zero temperature. For example, at the point where there are 38 active B units and 3 active A units there are two arrows, pointing down and to the right. This means there are two states the model could enter next: it could either turn off one of the active A units, or turn on one more B unit, respectively. At nonzero temperatures other state transitions are possible, corresponding to uphill moves in energy space, but these two remain the most probable. The points in the upper left and lower right corners of Figure 2a are marked by "Y" shapes. These represent point attractors at the bottoms of energy wells; the model will not move out of these states unless the temperature is greater than zero. Other points in state space are said to be within the region of a particular attractor if all legal transition sequences (at T = 0) from those points lead eventually to the attractor. The attractor regions of A and B are outlined in the figure. Note that the B attractor covers more area than A, as predicted by its greater breadth in the energy tour diagram. Note also that there is a small ridge between the two attractor regions. From starting points on the ridge the model can end up in either final state. Figure 2b shows the depths of the two attractors. The energy well for B is substantially deeper than the well for A. Starting at the point in the lower left corner where there are zero A units and zero B units active, the energy falls off immediately when moving in the B direction (right), but rises initially in the A direction (left) before dropping into a modest energy well when most of the A units are on.

1 All the weights and thresholds used in this paper are actual DCPS values taken from [Touretzky & Hinton, 1988].
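The front face of the cube can be explored numerically. This is a sketch, not DCPS itself: it assumes the additive clique energy described above (the rival weight of -2 contributing +2xy to mixed states) and replaces the stochastic dynamics with greedy single-unit moves, which approximate the T = 0 transitions:

```python
def energy(x, y, eA=40, eB=100, theta=69, ws=2, wr=2):
    """Energy with x active A units and y active B units (front face of the cube)."""
    E = x * (theta - eA) - ws * x * (x - 1) / 2.0   # A clique: units plus sibling pairs
    E += y * (theta - eB) - ws * y * (y - 1) / 2.0  # B clique
    E += wr * x * y                                  # rival inhibition raises mixed states
    return E

def descend(x, y, k=40):
    """Follow single-unit moves strictly downhill until a T = 0 point attractor."""
    while True:
        moves = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx <= k and 0 <= y + dy <= k]
        best = min(moves, key=lambda p: energy(*p))
        if energy(*best) >= energy(x, y):
            return (x, y)                            # local minimum: point attractor
        x, y = best

# tally which corner each of the 41 x 41 starting points reaches:
# B's attractor region covers more starting points than A's (cf. Figure 2a)
basin_B = sum(descend(x, y) == (0, 40) for x in range(41) for y in range(41))
basin_A = sum(descend(x, y) == (40, 0) for x in range(41) for y in range(41))
```
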
Points in the interior of the diagram, representing a combination of A and B units active, have higher energies than points along the edges due to the inhibitory connections between units in rival cliques. We can see from Figures 1b and 2 that the attractor for A, although narrower and shallower than the one for B, is still sizable. This is likely to mislead the model, so that some of the time it will get trapped in the wrong energy well. The fact that there is an attractor for A at all is due largely to sibling support, since the raw evidence for A is less than the rule unit threshold. We can eliminate the unwanted energy well for A by choosing thresholds that exceed the maximum sibling support of 2 x 39 = 78. DCPS uses a value of 119. However, early in the stochastic search the evidence visible in the clause spaces will be lower than at the conclusion of the search; high thresholds combined with low evidence would make the B attractor small and very hard to find. (See the right half of Figure 1c, and Figure 3.) Under these conditions the largest attractor is the one with all units turned off: the null hypothesis.

DISCUSSION

Our analysis of energy landscapes pulls us in two directions: we need low thresholds so the correct attractor is broad and easy to find, but we need high thresholds to eliminate unwanted attractors associated with local energy minima. Two solutions have been investigated. The first is to start out with low thresholds and raise them gradually during the stochastic search. This "pulls the rug out from under" poorly-supported hypotheses while giving the model time to find the desired winner. The second solution involves clipping a corner from the state space hypercube so that the model may never have fewer than 40 units active at a time. This prevents the model from falling into the null attractor.
When it attempts to drop the number of active units below 40 it is kicked away from the clipped edge by forcing it to turn on a few inactive units at random. Although DCPS is a Boltzmann machine it does not search the state space by simulated annealing in the usual sense. True annealing implies a slow reduction in temperature over many update cycles. Stochastic search in DCPS takes place at a single temperature that has been empirically determined to be the model's approximate "melting point." The search is only allowed to take a few cycles; typically it takes less than 10. Therefore the shapes of energy wells and the dynamics of the search are particularly important, as they determine how likely the model is to wander into particular attractor regions. The work reported here suggests that stochastic search dynamics may be improved by manipulating parameters other than just absolute temperature and cooling rate. Threshold growing and corner clipping appear useful in the case of DWTA nets. Additional details are available in [Touretzky, 1989].

Acknowledgments

This research was supported by the Office of Naval Research under contract N00014-86-K-0678, and by National Science Foundation grant EET-8716324. I thank Dean Pomerleau, Roni Rosenfeld, Paul Gleichauf, and Lokendra Shastri for helpful comments, and Geoff Hinton for his collaboration in the development of DCPS.

References

[1] Derthick, M. A., & Tebelskis, J. M. (1988) "Ensemble" Boltzmann machines have collective computational properties like those of Hopfield and Tank neurons. In D. Z. Anderson (ed.), Neural Information Processing Systems. New York: American Institute of Physics.
[2] Feldman, J. A., & Ballard, D. H. (1982) Connectionist models and their properties. Cognitive Science 6:205-254.
[3] Hinton, G. E., & Sejnowski, T. J. (1986) Learning and relearning in Boltzmann machines. In D. E. Rumelhart and J. L.
McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1. Cambridge, MA: Bradford Books/The MIT Press.
[4] Hopfield, J. J. (1982) Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA, 79:2554-2558.
[5] Touretzky, D. S., & Hinton, G. E. (1988) A distributed connectionist production system. Cognitive Science 12(3):423-466.
[6] Touretzky, D. S. (1989) Controlling search dynamics by manipulating energy landscapes. Technical report CMU-CS-89-113, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Figure 1: (a) four basic shapes for DWTA energy tours; (b) comparison of low vs. high thresholds in energy tours where there is a high degree of evidence for hypothesis B; (c) corresponding tours with low evidence for B.

Figure 2: low thresholds and high evidence, as in the left half of Figure 1b. (a) legal state transitions at zero temperature; (b) the corresponding energy surface.
Figure 3: state transitions and the corresponding energy surface for high thresholds (119) with low evidence for hypothesis B (evidence: A = 40, B = 60, C = 5).
NEURAL APPROACH FOR TV IMAGE COMPRESSION USING A HOPFIELD TYPE NETWORK

Martine NAILLON
Jean-Bernard THEETEN
Laboratoire d'Electronique et de Physique Appliquee *
3 Avenue DESCARTES, BP 15, 94451 LIMEIL BREVANNES Cedex, FRANCE

ABSTRACT

A self-organizing Hopfield network has been developed in the context of Vector Quantization, aiming at compression of television images. The metastable states of the spin glass-like network are used as an extra storage resource, using the Minimal Overlap learning rule (Krauth and Mezard 1987) to optimize the organization of the attractors. The self-organizing scheme that we have devised results in the generation of an adaptive codebook for any given TV image.

INTRODUCTION

The ability of a Hopfield network (Little, 1974; Hopfield, 1982, 1986; Amit et al., 1987; Personnaz et al., 1985; Hertz, 1988) to behave as an associative memory usually assumes a priori knowledge of the patterns to be stored. As in many applications they are unknown, the aim of this work is to develop a network capable of learning how to select its attractors. TV image compression using Vector Quantization (V.Q.) (Gray, 1984), a key issue for HDTV transmission, is a typical case, since the non-neural algorithms which generate the list of codes (the codebook) are suboptimal. As an alternative to the promising neural compression techniques (Jackel et al., 1987; Kohonen, 1988; Grossberg, 1987; Cottrell et al., 1987), our idea is to use the metastability in a spin glass-like net as an additional storage resource and to derive, after a "classical" clustering algorithm, a self-organizing scheme for generating the codebook adaptively. We present the illustrative case of 2D-vectors.

* LEP: A member of the Philips Research Organization.

NON NEURAL APPROACH

In V.Q., the image is divided into blocks, named vectors, of N pixels (typically 4 x 4 pixels).
Given the codebook, each vector is coded by associating it with the nearest element of the list (Nearest Neighbour Classifier) (figure 1).

Figure 1: Basic scheme of a vector quantizer. The encoder compares the input vector with the codebook and transmits the index of the nearest codeword; the decoder looks up the index in the codebook to produce the reconstructed vector.

For designing an optimal codebook, a clustering algorithm is applied to a training set of vectors (figure 2), the criterion of optimality being a distortion measure between the training set and the codebook. The algorithm is actually suboptimal, especially for a non-connex training set, as it is based on an iterative computation of centers of gravity which tends to overcode the dense regions of points whereas the light ones are undercoded (figure 2).

Figure 2: Training set of two-pixel vectors and the associated codebook computed by a non-neural clustering algorithm: overcoding of the dense regions (around pixel 1 = 148) and subcoding of the light ones.

NEURAL APPROACH

In a Hopfield neural network, the code vectors are the attractors of the net, and the neural dynamics (resolution phase) is substituted for the nearest neighbour classification. When patterns, referred to as "prototypes" and named here "explicit memory", are prescribed in a spin glass-like net, other attractors, referred to as "metastable states", are induced in the net (Sherrington and Kirkpatrick, 1975; Toulouse, 1977; Hopfield, 1982; Mezard et al., 1984). We consider those induced attractors as additional memory, named here "implicit memory", which can be used by the network to code the previously mentioned light regions of points. This provides a higher flexibility to the net during the self-organization process, as it can choose, in a large basis of explicit and implicit attractors, the ones which will optimize the coding task.
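The non-neural encoder of Figure 1 is just a nearest-neighbour search against the codebook. A generic sketch (not the authors' implementation; the toy codebook values are invented for illustration):

```python
import numpy as np

def vq_encode(vectors, codebook):
    # index of the nearest codeword (squared Euclidean distance) for each vector
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)

def vq_decode(indices, codebook):
    # only the indices are transmitted (the compression step);
    # the decoder looks each one up in the same codebook
    return codebook[indices]

# demo with 2-pixel vectors and a hypothetical 4-entry codebook
codebook = np.array([[10.0, 10.0], [10.0, 200.0], [200.0, 10.0], [200.0, 200.0]])
vectors = np.array([[12.0, 9.0], [198.0, 205.0]])
idx = vq_encode(vectors, codebook)
recon = vq_decode(idx, codebook)
```
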
NEURAL NOTATION

A vector of 2 pixels with 8 bits per pel is a vector of 2 dimensions in a Euclidean space where each dimension corresponds to 256 grey levels. To preserve the Euclidean distance, we use the well-known thermometric notation: 256 neurons for the 256 levels per dimension, the number of neurons set to one, with a regular ordering, giving the pixel luminance, e.g. 2 = 1 1 -1 -1 -1 ... For vectors of dimension 2, 512 neurons will be used, e.g. v = (2,3) = (1 1 -1 -1 ... -1, 1 1 1 -1 -1 ... -1).

INDUCTION PROCESS

The induced implicit memory depends on the prescription rule. We have compared the Projection rule (Personnaz et al., 1985) and the Minimal Overlap rule (Krauth and Mezard, 1987). The metastable states are detected by relaxing any point of the training set of figure 2 to its corresponding prescribed or induced attractor, marked in figure 3 with a small diamond. For the two rules, the induction process is rather deterministic, generating an orthogonal mesh: if two prototypes (P11, P12) and (P21, P22) are prescribed, a metastable state is induced at the cross-points, namely (P11, P22) and (P21, P12) (figure 3).

Figure 3: Comparison of the induction process for 2 prescription rules. The prescribed states are the full squares, the induced states the open diamonds.

What differs between the two rules is the number of induced attractors. For 50 prototypes and a training set of 2000 2d-vectors, the projection rule induces about 1000 metastable states (ratio 1000/50 = 20) whereas Min Over induces only 234 (ratio 4.6). This is due to the different stability of the prescribed and the induced states in the case of Min Over (Naillon and Theeten, to be published).
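The thermometric code can be sketched directly. The point of the regular ordering is that the Hamming distance between two pixel codes equals the grey-level difference, so Euclidean structure is preserved (a minimal sketch):

```python
def thermometer(value, levels=256):
    # first `value` neurons at +1 (regular ordering), the rest at -1
    return [1 if i < value else -1 for i in range(levels)]

def encode_vector(pixels, levels=256):
    # a 2-pixel vector concatenates two blocks: 2 x 256 = 512 neurons
    code = []
    for p in pixels:
        code.extend(thermometer(p, levels))
    return code

def hamming(a, b):
    # number of differing neurons between two codes
    return sum(x != y for x, y in zip(a, b))

v = encode_vector([2, 3])  # v = (2, 3) from the example above
```
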
GENERALIZED ATTRACTORS

Some attractors are induced out of the image space (figure 4), as the 512-neuron space has 2^512 configurations, to be compared with the (2^8)^2 = 2^16 image configurations. We extend the image space by defining a "generalized attractor" as the class of patterns having the same number of neurons set to one for each pixel, whatever their ordering. Such a notation corresponds to a random thermometric neural representation. The simulation has shown that the generalized attractors correspond to acceptable states (figure 4), i.e. they are located at the place where one would like to obtain a normal attractor.

Figure 4: The induced basins of attraction are represented with arrows (left: no generalization; right: with generalization). In the left plot, some training vectors have no attractor in the image space. After generalization (random thermometric notation), the right plot shows their corresponding attractors.
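The generalized-attractor classes reduce to a per-pixel count of active neurons. A minimal sketch (the ±1 state lists below are hypothetical examples, not actual network states):

```python
def generalized_class(state, levels=256):
    # class label = number of neurons set to +1 in each pixel's block,
    # regardless of their ordering (random thermometric representation)
    blocks = [state[i:i + levels] for i in range(0, len(state), levels)]
    return tuple(sum(1 for s in block if s == 1) for block in blocks)

# two 512-neuron states with the same per-pixel counts but different
# orderings are identified with the same generalized attractor (2, 3)
regular = [1] * 2 + [-1] * 254 + [1] * 3 + [-1] * 253
shuffled = [-1] * 254 + [1] * 2 + [1] * 3 + [-1] * 253
```
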
An iterative self-organizing process has been developed to optimize the codebook. For a given TV image, the codebook is defined, at each step of the process, as the set of prescribed and induced attractors selected by the training set of vectors. The self-organizing scheme is controlled by a cost function, the distortion measure between the training set and the codebook. Having a target of 50 code vectors, we have to prescribe at each step, as discussed above, typically 50/4.6 = 11 prototypes. As seen in figure 5a, we choose 11 initial prototypes uniformly distributed along the bisecting line. Using the training set of vectors of figure 2, the induced metastable states are detected with their corresponding basins of attraction. The 11 most frequent attractors, prescribed or induced, are selected, and the 11 centers of gravity of their basins of attraction are taken as new prototypes (figure 5b). After 3 iterations, the distortion measure stabilizes (Table 1).

Figure 5a: Initialization of the self-organizing scheme.
Figure 5b: First iteration of the self-organizing scheme.

Table 1: Evolution of the distortion measure versus the iterations of the self-organizing scheme. It stabilizes in 3 iterations.

iteration               1     2     3    4    5
global distortion       1571  1031  97   97   98
codebook size           53    57    79   84   68
generalized attractors  0     4     20   20   15

Forty lines of a TV image (the port of Baltimore) of 8 bits per pel have been coded with an adaptive neural codebook of 50 2D-vectors. The coherence of the coding is visible from the apparent continuity of the image (figure 6). The coded image has 2.5 bits per pel.

Figure 6: Neural coded image with 2.5 bits per pel.

CONCLUSION

Using a "classical" clustering algorithm, a self-organizing scheme has been developed in a Hopfield network for the adaptive design of a codebook of small-dimension vectors in a Vector Quantization technique. It has been shown that, using the Minimal Overlap prescription rule, the metastable states induced in a spin glass-like network can be used as extra codes. The optimal organization of the prescribed and induced attractors has been defined as the limit organization obtained from the iterative learning process.
It is an example of "learning by selection" as already proposed by physicists and biologists (Toulouse et al. 1986). Hardware implementation on the neural VLSI circuit currently designed at LEP should allow for on-line codebook computations. We would like to thank J.J. Hopfield, who has inspired this study, as well as H. Bosma and W. Kreuwels from Philips Research Laboratories, Eindhoven, who helped to initiate this research.

REFERENCES
1 - J.J. Hopfield, Proc. Nat. Acad. Sci. USA, 79, 2554-2558 (1982); J.J. Hopfield and D.W. Tank, Science 233, 625 (1986); W.A. Little, Math. Biosci., 19, 101-120 (1974).
2 - D.J. Amit, H. Gutfreund and H. Sompolinsky, Phys. Rev. A 32; Ann. Phys. 173, 30 (1987).
3 - L. Personnaz, I. Guyon and G. Dreyfus, J. Phys. Lett. 46, L359 (1985).
4 - J.A. Hertz, 2nd International Conference on Vector and Parallel Computing, Tromsø, Norway, June (1988).
5 - M.A. Virasoro, Disordered Systems and Biological Organization, ed. E. Bienenstock, Springer, Berlin (1985); H. Gutfreund (Racah Institute of Physics, Jerusalem) (1986); C. Cortes, A. Krogh and J.A. Hertz, J. of Phys. A (1986).
6 - R.M. Gray, IEEE ASSP Magazine (Apr. 1984).
7 - L.D. Jackel, R.E. Howard, J.S. Denker, W. Hubbard and S.A. Solla, Applied Optics, Vol. 26 (1987).
8 - T. Kohonen, Helsinki University of Technology, Finland, Tech. Rep. No. TKK-F-A601; T. Kohonen, Neural Networks, 1, Number 1 (1988).
9 - S. Grossberg, Cognitive Sci., 11, 23-63 (1987).
10 - G.W. Cottrell, P. Munro and D. Zipser, Institute of Cognitive Science, Report 8702 (1987).
11 - D. Sherrington and S. Kirkpatrick, Phys. Rev. Lett. 35, 1792 (1975); G. Toulouse, Commun. Phys. 2, 115-119 (1977); M. Mezard, G. Parisi, N. Sourlas, G. Toulouse and M. Virasoro, Phys. Rev. Lett., 52, 1156-1159 (1984).
12 - W. Krauth and M. Mezard, J. Phys. A: Math. Gen. 20, L745-L752 (1987).
13 - M. Naillon and J.B. Theeten, to be published.
14 - G.
Toulouse, S. Dehaene and J.P. Changeux, Proc. Natl. Acad. Sci. USA, 83, 1695 (1986).
|
1988
|
74
|
162
|
796 SPEECH RECOGNITION: STATISTICAL AND NEURAL INFORMATION PROCESSING APPROACHES John S. Bridle Speech Research Unit and National Electronics Research Initiative in Pattern Recognition Royal Signals and Radar Establishment Malvern UK Automatic Speech Recognition (ASR) is an artificial perception problem: the input is raw, continuous patterns (no symbols!) and the desired output, which may be words, phonemes, meaning or text, is symbolic. The most successful approach to automatic speech recognition is based on stochastic models. A stochastic model is a theoretical system whose internal state and output undergo a series of transformations governed by probabilistic laws [1]. In the application to speech recognition the unknown patterns of sound are treated as if they were outputs of a stochastic system [18,2]. Information about the classes of patterns is encoded as the structure of these "laws" and the probabilities that govern their operation. The most popular type of SM for ASR is also known as a "hidden Markov model." There are several reasons why the SM approach has been so successful for ASR. It can describe the shape of the spectrum, and has a principled way of describing temporal order, together with variability of both. It is compatible with the hierarchical nature of speech structure [20,18,4], there are powerful algorithms for decoding with respect to the model (recognition), and for adapting the model to fit significant amounts of example data (learning). Firm theoretical (mathematical) foundations enable extensions to be accommodated smoothly (e.g. [3]). There are many deficiencies however. In a typical system the speech signal is first described as a sequence of acoustic vectors (spectrum cross sections or equivalent) at a rate of say 100 per second. The pattern is assumed to consist of a sequence of segments corresponding to discrete states of the model. 
In each segment the acoustic vectors are drawn from a distribution characteristic of the state, but otherwise independent of one another and of the states before and after. In some systems there is a controlled relationship between states and the phonemes or phones of speech science, but most of the properties and notions which speech scientists assume are important are ignored. Most SM approaches are also deficient at a pattern-recognition theory level: the parameters of the models are usually adjusted (using the Baum-Welch re-estimation method [5,2]) so as to maximise the likelihood of the data given the model. This is the right thing to do if the form of the model is actually appropriate for the data, but if not, the parameter-optimisation method needs to be concerned with discrimination between classes (phonemes, words, meanings, ...) [28,29,30]. A HMM recognition algorithm is designed to find the best explanation of the input in terms of the model. It tracks scores for all plausible current states of the generator and throws away explanations which lead to a current state for which there is a better explanation (Bellman's Dynamic Programming). It may also throw away explanations which lead to a current state much worse than the best current state (score pruning), producing a Beam Search method. (It is important to keep many hypotheses in hand, particularly when the current input is ambiguous.) Connectionist (or "Neural Network") approaches start with a strong pre-conception of the types of process to be used. They can claim some legitimacy by reference to new (or renewed) theories of cognitive processing. The actual mechanisms used are usually simpler than those of the SM methods, but the mathematical theory (of what can be learnt or computed, for instance) is more difficult, particularly for structures which have been proposed for dealing with temporal structure.
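The dynamic-programming decode with score pruning described above can be sketched as follows (a toy 2-state model with hypothetical transition and emission probabilities; log domain for numerical safety):

```python
import numpy as np

def beam_viterbi(log_trans, log_emit, log_init, beam=10.0):
    """Dynamic-programming decode of an HMM: keep for each current state
    only the best-scoring explanation (Bellman recursion), and prune
    states whose score falls more than `beam` below the current best."""
    T, S = log_emit.shape                       # time steps x states
    scores = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = scores[:, None] + log_trans      # cand[i, j]: score of i -> j
        back[t] = cand.argmax(axis=0)           # best predecessor per state
        scores = cand.max(axis=0) + log_emit[t]
        # Score pruning: discard hypotheses far below the best (beam search).
        scores[scores < scores.max() - beam] = -np.inf
    # Trace back the surviving best explanation.
    path = [int(scores.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 2-state model: state 0 prefers the first observation pattern, state 1 the second.
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_emit = np.log(np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]))
path = beam_viterbi(log_trans, log_emit, np.log([0.5, 0.5]))
```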
One of the dreams for connectionist approaches to speech is a network whose inputs accept the speech data as it arrives; it would have an internal state which contains all necessary information about the past input, and the output would be as accurate and early as it could be. The training of networks with their own dynamics is particularly difficult, especially when we are unable to specify what the internal state should be. Some are working on methods for training the fixed points of continuous-valued recurrent non-linear networks [15,16,27]. Prager [6] has attempted to train various types of network in a full state-feedback arrangement. Watrous [9] limits his recurrent connections to self-loops on hidden and output units, but even so the theory of such recursive non-linear filters is formidable. At the other extreme are systems which treat a whole time-frequency-amplitude array (resulting from initial acoustic analysis) as the input to a network, and require a label as output. For example, the performance that Peeling et al. [7] report on multi-speaker small-vocabulary isolated word recognition tasks approaches that of the best HMM techniques available on the same data. Invariance to temporal position was trained into the network by presenting the patterns at random positions in a fixed time-window. Waibel et al. [8] use a powerful compromise arrangement which can be thought of either as the replication of smaller networks across the time-window (a time-spread network [19]) or as a single small network with internal delay lines (a Time-Delay Neural Network [8]). There are no recurrent links except for trivial ones at the output, so training (using Backpropagation) is no great problem. We may think of this as a finite-impulse-response non-linear filter. Reported results on consonant discrimination are encouraging, and better than those of a HMM system on the same data. The system is insensitive to position by virtue of its construction.
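The time-delay arrangement — one small network replicated across the time-window — amounts to a one-dimensional convolution with shared weights; a minimal sketch with hypothetical frame and unit counts (16 spectral coefficients, a 3-frame delay window, 8 hidden units):

```python
import numpy as np

def tdnn_layer(frames, weights, bias):
    """One time-delay layer: the same small network (weights over a few
    consecutive frames) is applied at every position of the time window,
    i.e. a finite-impulse-response non-linear filter."""
    delay, n_in, n_out = weights.shape
    T = frames.shape[0]
    out = np.empty((T - delay + 1, n_out))
    for t in range(T - delay + 1):
        window = frames[t:t + delay]            # local receptive field in time
        out[t] = np.tanh(np.einsum('di,dio->o', window, weights) + bias)
    return out

rng = np.random.default_rng(1)
frames = rng.normal(size=(100, 16))     # 100 frames x 16 spectral coefficients
w = rng.normal(size=(3, 16, 8)) * 0.1   # shared weights: 3-frame window, 8 units
h = tdnn_layer(frames, w, np.zeros(8))
```

Because the same weights are applied at every time offset, a pattern shifted in time produces the same (shifted) hidden activity, which is the source of the position insensitivity noted above.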
Kohonen has constructed and demonstrated large vocabulary isolated word [12] and unrestricted vocabulary continuous speech transcription [13] systems which are inspired by neural network ideas, but implemented as algorithms more suitable for current programmed digital signal processor and CPU chips. Kohonen's phonotopic map technique can be thought of as an unsupervised adaptive quantiser constrained to put its reference points in a non-linear low-dimensional sub-space. His learning vector quantiser technique used for initial labeling combines the advantages of the classic nearest-neighbor method and discriminant training. Among other types of network which have been applied to speech we must mention an interesting class based not on correlations with weight vectors (dot-product) but on distances from reference points. Radial Basis Function theory [22] was developed for multi-dimensional interpolation, and was shown by Broomhead and Lowe [23] to be suitable for many of the jobs that feed-forward networks are used for. The advantage is that it is not difficult to find useful positions for the reference points which define the first, non-linear, transformation. If this is followed by a linear output transformation then the weights can be found by methods which are fast and straightforward. The reference points can be adapted using methods based on backpropagation. Related methods include potential functions [24], Kernel methods [25] and the modified Kanerva network [26]. There is much to be gained from a careful comparison of the theory of stochastic model and neural network approaches to speech recognition.
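The two-stage construction described above — a fixed non-linear distance layer followed by a fast linear solve for the output weights — can be sketched as follows (Gaussian basis functions, randomly placed reference points, and a synthetic smooth target are illustrative assumptions):

```python
import numpy as np

def fit_rbf(X, Y, centers, width):
    """Radial-basis-function network: the first layer computes distances
    to fixed reference points; the linear output weights are then found
    directly by least squares, with no gradient descent."""
    def design(Xin):
        d2 = ((Xin[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))   # Gaussian basis responses
    Phi = design(X)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # fast, straightforward solve
    return lambda Xnew: design(Xnew) @ W

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
Y = np.sin(3 * X[:, :1]) * np.cos(2 * X[:, 1:])   # smooth synthetic target
centers = rng.uniform(-1, 1, size=(25, 2))        # easy-to-choose reference points
f = fit_rbf(X, Y, centers, width=0.5)
err = np.mean((f(X) - Y) ** 2)
```

The reference points here are simply scattered over the input region, illustrating the claim that useful positions for the first layer are not difficult to find.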
If a NN is to perform speech decoding in a way anything like a SM algorithm it will have a state which is not just one of the states of the hypothetical generative model; the state must include information about the distribution of possible generator states given the pattern so far, and the state transition function must update this distribution depending on the current speech input. It is not clear whether such an internal representation and behavior can be 'learned' from scratch by an otherwise unstructured recurrent network. Stochastic model based algorithms seem to have the edge at present for dealing with temporal sequences. Discrimination-based training inspired by NN techniques may make a significant difference in performance. It would seem that the area where NNs have most to offer is in finding non-linear transformations of the data which take us to a space (perhaps related to formant or articulatory parameters) where comparisons are more relevant to phonetic decisions than purely auditory ones (e.g., [17,10,11]). The resulting transformation could also be viewed as a set of 'feature detectors'. Or perhaps the NN should deliver posterior probabilities of the states of a SM directly [14]. The art of applying a stochastic model or neural network approach is to choose a class of models or networks which is realistic enough to be likely to be able to capture the distinctions (between speech sounds or words for instance) and yet have a structure which makes it amenable to algorithms for building the detail of the models based on examples, and for interpreting particular unknown patterns. Future systems will need to exploit the regularities described by phonetics, to allow the construction of high-performance systems with large vocabularies, and their adaptation to the characteristics of each new user. 
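The state described here — a distribution over possible generator states, updated by each new input — can be sketched as the standard forward (belief-state) recursion of a hidden Markov model, with hypothetical numbers:

```python
import numpy as np

def update_belief(belief, trans, obs_lik):
    """One step of the state a decoder must carry: propagate the current
    distribution over generator states through the transition matrix,
    reweight by the likelihood of the new input, and renormalize."""
    b = (belief @ trans) * obs_lik
    return b / b.sum()

trans = np.array([[0.8, 0.2], [0.3, 0.7]])        # hypothetical 2-state generator
belief = np.array([0.5, 0.5])                     # uniform prior over states
# Per-state likelihoods of three successive inputs (illustrative values):
for obs_lik in [np.array([0.9, 0.2]), np.array([0.9, 0.2]), np.array([0.1, 0.8])]:
    belief = update_belief(belief, trans, obs_lik)
```

A recurrent network mimicking an SM decoder would have to learn an internal state carrying this distribution and a transition function implementing this update.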
There is no doubt that the stochastic model based methods work best at present, but current systems are generally far inferior to humans even in situations where the usefulness of higher-level processing is minimal. I predict that the next generation of ASR systems will be based on a combination of connectionist and SM theory and techniques, with mainstream speech knowledge used in a rather soft way to decide the structure. It should not be long before the distinction I have been making will disappear [29]. [1] D. R. Cox and H. D. Miller, "The Theory of Stochastic Processes", Methuen, 1965, pp. 721-741. [2] S. E. Levinson, L. R. Rabiner and M. M. Sondhi, "An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition", Bell Syst. Tech. J., vol. 62, no. 4, pp. 1035-1074, Apr. 1983. [3] M. R. Russell and R. K. Moore, "Explicit modeling of state occupancy in hidden Markov models of automatic speech recognition", IEEE ICASSP-85. [4] S. E. Levinson, "A unified theory of composite pattern analysis for automatic speech recognition", in F. Fallside and W. Woods (eds.), "Computer Speech Processing", Prentice-Hall, 1984. [5] L. E. Baum, "An inequality and associated maximisation technique in statistical estimation of probabilistic functions of a Markov process", Inequalities, vol. 3, pp. 1-8, 1972. [6] R. W. Prager et al., "Boltzmann machines for speech recognition", Computer Speech and Language, vol. 1, no. 1, 1986. [7] S. M. Peeling, R. K. Moore and M. J. Tomlinson, "The multi-layer perceptron as a tool for speech pattern processing research", Proc. Inst. Acoustics Conf. on Speech and Hearing, Windermere, November 1986. [8] Waibel et al., ICASSP88, NIPS88 and ASSP forthcoming. [9] R. L. Watrous, "Connectionist speech recognition using the Temporal Flow model", Proc. IEEE Workshop on Speech Recognition, Harriman NY, June 1988. [10] I. S. Howard and M. A.
Huckvale, "Acoustic-phonetic attribute determination using multi-layer perceptrons", IEEE Colloquium Digest 1988/11. [11] M. A. Huckvale and I. S. Howard, "High performance phonetic feature analysis for automatic speech recognition", ICASSP89. [12] T. Kohonen et al., "On-line recognition of spoken words from a large vocabulary", Information Sciences 33, 3-30 (1984). [13] T. Kohonen, "The 'Neural' phonetic typewriter", IEEE Computer, March 1988. [14] H. Bourlard and C. J. Wellekens, "Multilayer perceptrons and automatic speech recognition", IEEE First Intl. Conf. Neural Networks, San Diego, 1987. [15] R. Rohwer and S. Renals, "Training recurrent networks", Proc. N'Euro-88, Paris, June 1988. [16] L. Almeida, "A learning rule for asynchronous perceptrons with feedback in a combinatorial environment", Proc. IEEE Intl. Conf. Neural Networks, San Diego 1987. [17] A. R. Webb and D. Lowe, "Adaptive feed-forward layered networks as pattern classifiers: a theorem illuminating their success in discriminant analysis", submitted to Neural Networks. [18] J. K. Baker, "The Dragon system: an overview", IEEE Trans. ASSP-23, no. 1, pp. 24-29, Feb. 1975. [19] J. S. Bridle and R. K. Moore, "Boltzmann machines for speech pattern processing", Proc. Inst. Acoust., November 1984, pp. 1-8. [20] B. H. Repp, "On levels of description in speech research", J. Acoust. Soc. Amer., vol. 69, pp. 1462-1464, 1981. [21] R. A. Cole et al., "Performing fine phonetic distinctions: templates vs. features", in J. Perkell and D. H. Klatt (eds.), "Symposium on invariance and variability of speech processes", Hillsdale, NJ, Erlbaum 1984. [22] M. J. D. Powell, "Radial basis functions for multi-variate interpolation: a review", IMA Conf. on algorithms for the approximation of functions and data, Shrivenham 1985. [23] D. Broomhead and D. Lowe, "Multi-variable interpolation and adaptive networks", RSRE memo 4148, Royal Signals and Radar Est., 1988. [24] M. A. Aizerman, E. M. Braverman and L. I.
Rozonoer, "On the method of potential functions", Automatika i Telemekhanika, vol. 26, no. 11, pp. 2086-2088, 1964. [25] Hand, "Kernel discriminant analysis", Research Studies Press, 1982. [26] R. W. Prager and F. Fallside, "Modified Kanerva model for automatic speech recognition", submitted to Computer Speech and Language. [27] F. J. Pineda, "Generalisation of back propagation to recurrent neural networks", Physical Review Letters, 1987. [28] L. R. Bahl et al., Proc. ICASSP88, pp. 493-496. [29] H. Bourlard and C. J. Wellekens, "Links between Markov models and multilayer perceptrons", this volume. [30] L. Niles, H. Silverman, G. Tajchman, M. Bush, "How limited training data can allow a neural network to out-perform an 'optimal' classifier", Proc. ICASSP89.
|
1988
|
75
|
163
|
CONVERGENCE AND PATTERN STABILIZATION IN THE BOLTZMANN MACHINE Moshe Kam, Dept. of Electrical and Computer Eng., Drexel University, Philadelphia PA 19104; Roger Cheng, Dept. of Electrical Eng., Princeton University, NJ 08544 ABSTRACT The Boltzmann Machine has been introduced as a means to perform global optimization for multimodal objective functions using the principles of simulated annealing. In this paper we consider its utility as a spurious-free content-addressable memory, and provide bounds on its performance in this context. We show how to exploit the machine's ability to escape local minima, in order to use it, at a constant temperature, for unambiguous associative pattern-retrieval in noisy environments. An association rule, which creates a sphere of influence around each stored pattern, is used along with the Machine's dynamics to match the machine's noisy input with one of the pre-stored patterns. Spurious fixed points, whose regions of attraction are not recognized by the rule, are skipped, due to the Machine's finite probability to escape from any state. The results apply to the Boltzmann machine and to the asynchronous net of binary threshold elements (Hopfield model). They provide the network designer with worst-case and best-case bounds for the network's performance, and allow polynomial-time tradeoff studies of design parameters. I. INTRODUCTION The suggestion that artificial neural networks can be utilized for classification, pattern recognition and associative recall has been the main theme of numerous studies which appeared in the last decade (e.g. Rumelhart and McClelland (1986) and Grossberg (1988) and their references.) Among the most popular families of neural networks are fully connected networks of binary threshold elements (e.g. Amari (1972), Hopfield (1982).)
These structures, and the related family of fully connected networks of sigmoidal threshold elements, have been used as error-correcting decoders in many applications, among which were interesting applications in optimization (Hopfield and Tank, 1985; Tank and Hopfield, 1986; Kennedy and Chua, 1987.) A common drawback of the many studied schemes is the abundance of 'spurious' local minima, which 'trap' the decoder in undesirable, and often non-interpretable, states during the process of input / stored-pattern association. It is generally accepted now that while the number of arbitrary binary patterns that can be stored in a fully-connected network is of the order of magnitude of N (N = number of the neurons in the network,) the number of created local attractors in the network's state space is exponential in N. It was proposed (Ackley et al., 1985; Hinton and Sejnowski, 1986) that fully-connected binary neural networks, which update their states on the basis of stochastic state-reassessment rules, could be used for global optimization when the objective function is multi-modal. The suggested architecture, the Boltzmann machine, is based on the principles of simulated annealing (Kirkpatrick et al., 1983; Geman and Geman, 1984; Aarts et al., 1985; Szu, 1986,) and has been shown to perform interesting tasks of decision making and optimization. However, the learning algorithm that was proposed for the Machine, along with its "cooling" procedures, do not lend themselves to real-time operation. Most studies so far have concentrated on the properties of the Machine in global optimization, and only few studies have mentioned possible utilization of the Machine (at constant 'temperature') as a content-addressable memory (e.g. for local optimization.) In the present paper we propose to use the Boltzmann machine for associative retrieval, and develop bounds on its performance as a content-addressable memory.
We introduce a learning algorithm for the Machine, which locally maximizes the stabilization probability of learned patterns. We then proceed to calculate (in polynomial time) upper and lower bounds on the probability that a tuple at a given initial Hamming distance from a stored pattern will get attracted to that pattern. A proposed association rule creates a sphere of influence around each stored pattern, and is indifferent to 'spurious' attractors. Due to the fact that the Machine has a nonzero probability of escape from any state, the 'spurious' attractors are ignored. The obtained bounds allow the assessment of retrieval probabilities, different learning algorithms and necessary learning periods, network 'temperatures', and coding schemes for items to be stored. II. THE MACHINE AS A CONTENT ADDRESSABLE MEMORY The Boltzmann Machine is a multi-connected network of N simple processors called probabilistic binary neurons. The ith neuron is characterized by N-1 real numbers representing the synaptic weights (w_ij, j = 1,2,...,i-1,i+1,...,N; w_ii is assumed to be zero for all i), a real threshold (t_i) and a binary activity level (u_i ∈ B = {-1,1}), which we shall also refer to as the neuron's state. The binary N-tuple U = [u_1, u_2, ..., u_N] is called the network's state. We distinguish between two phases of the network's operation: a) The learning phase - when the network parameters w_ij and t_i are determined. This determination could be achieved through autonomous learning of the binary pattern environment by the network (unsupervised learning); through learning of the environment with the help of a 'teacher' which supplies evaluative reinforcement signals (supervised learning); or by an external fixed assignment of parameter values. b) The production phase - when the network's state U is determined.
This determination could be performed synchronously by all neurons at the same predetermined time instants, or asynchronously - each neuron reassesses its state independently of the other neurons at random times. The decisions of the neurons regarding their states during reassessment can be arrived at deterministically (the set of neuron inputs determines the neuron's state) or stochastically (the set of neuron inputs shapes a probability distribution for the state-selection law.) We shall describe first the (asynchronous and stochastic) production rule which our network employs. At random times during the production phase, asynchronously and independently of all other neurons, the ith neuron decides upon its next state, using the probabilistic decision rule

u_i = +1 with probability 1/(1 + e^(-ΔE_i/T_e)),  u_i = -1 with probability 1/(1 + e^(ΔE_i/T_e)),  where ΔE_i = Σ_{j=1, j≠i}^N w_ij u_j - t_i   (1)

is called the ith energy gap, and T_e is a predetermined real number called temperature. The state-updating rule (1) is related to the network's energy level, which is described by

E = -(1/2) Σ_{i=1}^N u_i ( Σ_{j=1, j≠i}^N w_ij u_j - t_i ).   (2)

If the network is to find a local minimum of E in equation (2), then the ith neuron, when chosen (at random) for state updating, should choose deterministically

u_i = sgn[ Σ_{j=1, j≠i}^N w_ij u_j - t_i ].   (3)

We note that the rule in equation (3) is obtained from the rule in equation (1) as T_e → 0. This deterministic choice of u_i guarantees, under symmetry conditions on the weights (w_ij = w_ji), that the network's state will stabilize at a fixed point in the state space of 2^N binary N-tuples (Hopfield, 1982), where

Definition 1: A state U_f ∈ B^N is called a fixed point in the state space of the N-neuron network if Pr[U^(n+1) = U_f | U^(n) = U_f] = 1.   (4)

A fixed point found through iterations of equation (3) (with i chosen at random at each iteration) may not be the global minimum of the energy in equation (2).
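The production rule of equation (1) can be sketched directly; the demonstration below stores a single pattern with Hebbian weights (an illustrative choice, not the learning algorithm of this paper) and lets the stochastic rule, at low constant temperature, pull a corrupted state back to the stored pattern:

```python
import numpy as np

def boltzmann_step(u, W, t, Te, rng):
    """One asynchronous production step: a randomly chosen neuron takes
    the value +1 with probability 1/(1 + exp(-dE_i/Te)), as in eq. (1)."""
    i = rng.integers(len(u))
    dE = W[i] @ u - t[i]                      # i-th energy gap (diagonal of W is zero)
    if rng.random() < 1.0 / (1.0 + np.exp(-dE / Te)):
        u[i] = 1
    else:
        u[i] = -1
    return u

rng = np.random.default_rng(3)
p = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)                      # Hebbian weights storing the single pattern p
u = p.copy()
u[0] = -u[0]                                  # start one bit (HD 1) away from the pattern
for _ in range(200):
    u = boltzmann_step(u, W, np.zeros(len(p)), Te=0.05, rng=rng)
```

At T_e near zero the update is effectively the deterministic rule (3); at higher temperatures the same code produces the occasional 'uphill' moves discussed next.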
A mechanism which seeks the global minimum should avoid local-minimum "traps" by allowing 'uphill' climbing with respect to the value of E. The decision scheme of equation (1) is devised for that purpose, allowing an increase in E with nonzero probability. This provision for progress in the locally 'wrong' direction, in order to reap a 'global' advantage later, is in accordance with the principles of simulated annealing techniques, which are used in multimodal optimization. In our case, the probabilities of choosing the locally 'right' decision (equation (3)) and the locally 'wrong' decision are determined by the ratio of the energy gap ΔE_i to the 'temperature' shaping constant T_e. The Boltzmann Machine has been proposed for global minimization, and a considerable amount of effort has been invested in devising a good cooling scheme, namely a means to control T_e in order to guarantee the finding of a global minimum in a short time (Geman and Geman, 1984; Szu, 1987.) However, the network may also be used as a selective content-addressable memory which does not suffer from inadvertently-installed spurious local minima. We consider the following application of the Boltzmann Machine as a scheme for pattern classification under noisy conditions: let an encoder emit a sequence of N×1 binary code vectors from a set of Q codewords (or 'patterns'), each having a probability of occurrence of η_m (m = 1,2,...,Q). The emitted pattern encounters noise and distortion before it arrives at the decoder, resulting in some of its bits being reversed. The Boltzmann Machine, which is the decoder at the receiving end, accepts this distorted pattern as its initial state (U^(0)), and observes the consequent time evolution of the network's state U. At a certain time instant n_0, the Machine will declare that the input pattern U^(0) is to be associated with pattern B_m if U at that instant (U^(n_0)) is 'close enough' to B_m.
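The 'close enough' test can be sketched as a thresholded nearest-pattern rule (hypothetical patterns; HD is the Hamming distance, anticipating the d_max-spheres of influence defined next):

```python
import numpy as np

def associate(state, patterns, d_max):
    """Return the index m of the stored pattern whose d_max-sphere of
    influence contains the current state, or None if the state is not
    yet inside any sphere."""
    hd = (state != patterns).sum(axis=1)   # Hamming distances to B_1..B_Q
    m = int(hd.argmin())
    return m if hd[m] <= d_max else None

patterns = np.array([[1, 1, 1, 1, 1, 1],
                     [-1, -1, -1, -1, -1, -1]])   # two illustrative codewords
state = np.array([1, 1, 1, 1, -1, 1])             # HD 1 from the first pattern
```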
For this purpose we define

Definition 2: The d_max-sphere of influence of pattern B_m is σ(B_m, d_max) = {U ∈ B^N : HD(U, B_m) ≤ d_max}.   (5)

d_max is prespecified. Let Σ(d_max) = ∪_m σ(B_m, d_max), and let n_0 be the smallest integer such that U^(n_0) ∈ Σ(d_max). The rule of association is: associate U^(0) with B_m at time n_0 if U^(n_0), which has evolved from U^(0), satisfies U^(n_0) ∈ σ(B_m, d_max). Due to the finite probability of escape from any minimum, the energy minima which correspond to spurious fixed points are skipped by the network on its way to the energy valleys induced by the designed fixed points (i.e. B_1,...,B_Q).

III. ONE-STEP CONTRACTION PROBABILITIES

Using the association rule, the utility of the Boltzmann machine for error correction involves the probabilities

Pr{HD[U^(n), B_m] ≤ d_max | HD[U^(0), B_m] = d},  m = 1,2,...,Q   (6)

for predetermined n and d_max. In order to calculate (6) we shall first calculate the following one-step attraction probabilities:

P(B_m, d, δ) = Pr{HD[U^(n+1), B_m] = d + δ | HD[U^(n), B_m] = d},  where δ = -1, 0, 1.   (7)

For δ = -1 we obtain the probability of convergence; for δ = +1 we obtain the probability of divergence; for δ = 0 we obtain the probability of stalemate. An exact calculation of the attraction probabilities in equation (7) is time-exponential, and we shall therefore seek lower and upper bounds which can be calculated in polynomial time. We shall reorder the weights of each neuron according to their contribution to the ΔE_i for each pattern, using the notation

W_i^m = {w_i1 b_m1, w_i2 b_m2, ..., w_iN b_mN},  ψ_1^m = max W_i^m,  ψ_s^m = max{W_i^m - {ψ_1^m, ψ_2^m, ..., ψ_{s-1}^m}},   (8)

i = 1,2,...,N, s = 2,3,...,N, m = 1,2,...,Q. Let

ΔE_mi^max(d) = ΔE_mi - 2 Σ_{r=1}^d ψ_{N+1-r}^m  and  ΔE_mi^min(d) = ΔE_mi - 2 Σ_{r=1}^d ψ_r^m,   (9)

where ΔE_mi is the ith energy gap evaluated at B_m. These values represent the maximum and minimum values that the ith energy gap could assume when the network is at HD of d from B_m (flipping a bit j changes the gap by -2 w_ij b_mj). Using these extrema, we can find the worst-case attraction probabilities

P^wc(B_m, d, -1) = (d/N²) Σ_{i=1}^N [ U_{+1}(b_mi) (1 + e^(-ΔE_mi^min(d)/T_e))^(-1) + U_{-1}(b_mi) (1 + e^(ΔE_mi^max(d)/T_e))^(-1) ]   (10a)

P^wc(B_m, d, +1) = ((N-d)/N²) Σ_{i=1}^N [ U_{+1}(b_mi) (1 + e^(ΔE_mi^min(d)/T_e))^(-1) + U_{-1}(b_mi) (1 + e^(-ΔE_mi^max(d)/T_e))^(-1) ]   (10b)

and the best-case attraction probabilities, obtained by exchanging ΔE^min and ΔE^max:

P^bc(B_m, d, -1) = (d/N²) Σ_{i=1}^N [ U_{+1}(b_mi) (1 + e^(-ΔE_mi^max(d)/T_e))^(-1) + U_{-1}(b_mi) (1 + e^(ΔE_mi^min(d)/T_e))^(-1) ]   (11a)

P^bc(B_m, d, +1) = ((N-d)/N²) Σ_{i=1}^N [ U_{+1}(b_mi) (1 + e^(ΔE_mi^max(d)/T_e))^(-1) + U_{-1}(b_mi) (1 + e^(-ΔE_mi^min(d)/T_e))^(-1) ]   (11b)

where U_a(b) = 1 if b = a and 0 otherwise, and where for both cases

P(B_m, d, 0) = 1 - P(B_m, d, -1) - P(B_m, d, +1).   (12)

For the worst- (respectively, best-) case probabilities, we have used the extreme values of ΔE_mi(d) to underestimate (overestimate) convergence and overestimate (underestimate) divergence, given that there is a disagreement in d of the N positions between the network's state and B_m; we assume that errors are equally likely at each one of the bits.

IV. ESTIMATION OF RETRIEVAL PROBABILITIES

To estimate the retrieval probabilities, we shall study the Hamming distance of the network's state from a stored pattern. The evolution of the network from state to state, as it affects the distance from a stored pattern, can be interpreted in terms of a birth-and-death Markov process (e.g. Howard, 1971) with the probability transition matrix

Ψ_m(p_bk, p_dk) =
[ 1-p_b0   p_b0          0             ...                      0       ]
[ p_d1     1-p_b1-p_d1   p_b1          ...                      0       ]
[ 0        p_d2          1-p_b2-p_d2   p_b2   ...               0       ]
[ ...                    p_dk          1-p_bk-p_dk   p_bk       ...     ]
[ 0        ...           p_dN-1        1-p_bN-1-p_dN-1          p_bN-1  ]
[ 0        ...           0             p_dN                     1-p_dN  ]   (13)

where the birth probability p_bk is the divergence probability of increasing the HD from k to k+1, and the death probability p_dk is the contraction probability of decreasing the HD from k to k-1. Given that an input pattern was at HD of d_0 from B_m, the probability that after n steps the network will associate it with B_m is

Pr{U^(n) → B_m | HD[U^(0), B_m] = d_0} = Σ_{r=0}^{d_max} Pr[HD(U^(n), B_m) = r | HD(U^(0), B_m) = d_0]   (14)

where we can use the one-step bounds found in section III in order to calculate the worst-case and best-case probabilities of association.
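The birth-and-death bookkeeping of equations (13)-(14) can be sketched directly; the function names and the one-step probabilities below are hypothetical stand-ins for the bounds of section III:

```python
import numpy as np

def bd_matrix(p_b, p_d):
    """Tridiagonal birth-and-death transition matrix of equation (13):
    p_b[k] raises the Hamming distance k -> k+1, p_d[k] lowers it."""
    K = len(p_b)
    P = np.zeros((K, K))
    for k in range(K):
        if k + 1 < K:
            P[k, k + 1] = p_b[k]
        if k - 1 >= 0:
            P[k, k - 1] = p_d[k]
        P[k, k] = 1.0 - P[k].sum()          # stalemate probability on the diagonal
    return P

def association_prob(p_b, p_d, d0, d_max, n):
    """Probability of being within d_max of the pattern after n steps,
    starting at distance d0 (the sum in equation (14))."""
    P = bd_matrix(p_b, p_d)
    pi = np.zeros(len(p_b))
    pi[d0] = 1.0
    dist = pi @ np.linalg.matrix_power(P, n)
    return float(dist[:d_max + 1].sum())

# Toy contraction-dominated chain over distances 0..5 (illustrative values).
p_b = np.array([0.0, 0.1, 0.1, 0.1, 0.1, 0.0])
p_d = np.array([0.0, 0.4, 0.4, 0.4, 0.4, 0.4])
prob = association_prob(p_b, p_d, d0=4, d_max=1, n=50)
```

Because the matrix power is computed in polynomial time, this is also what makes the bound evaluation polynomial, as claimed in the abstract.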
Using equations (10) and (11) we define two matrices for each pattern B_m: a worst-case matrix Ψ_m^wc, with p_bi = P^wc(B_m, i, +1) and p_di = P^wc(B_m, i, -1), and a best-case matrix Ψ_m^bc, with p_bi = P^bc(B_m, i, +1) and p_di = P^bc(B_m, i, -1). Using these matrices, it is now possible to calculate lower and upper bounds for the association probabilities needed in equation (14):

[π_d0 (Ψ_m^wc)^n]_r ≤ Pr[HD(U^(n), B_m) = r | HD(U^(0), B_m) = d_0] ≤ [π_d0 (Ψ_m^bc)^n]_r   (15a)

where [x]_i indicates the ith element of the vector x, and π_d0 is the unit 1×(N+1) row vector whose entry is 1 in position d_0 and 0 elsewhere.   (15b)

The bounds of equation (15a) can be used to bound the association probability in equation (14). The upper bound of the association probability is obtained by replacing the summed terms in (14) by their upper-bound values:

Pr{U^(n) → B_m | HD[U^(0), B_m] = d_0} ≤ Σ_{r=0}^{d_max} [π_d0 (Ψ_m^bc)^n]_r   (16a)

The lower bound cannot be treated similarly, since it is possible that at some instant of time prior to the present time-step (n), the network has already associated its state U with one of the other patterns. We shall therefore use as the lower bound on the convergence probability in equation (14):

Σ_{r=0}^{d_max} [π_d0 (Ψ_m^wc)^n]_r ≤ Pr{U^(n) → B_m | HD[U^(0), B_m] = d_0}   (16b)

where the modified worst-case matrix used in (16b) is the birth-and-death matrix (13) with the states at distances k = μ_m, μ_m + 1, ..., N made absorbing (p_dk = 0),   (16c)   where μ_m = min_{j=1,...,Q, j≠m} HD(B_m, B_j) − d_max.   (16d)

Equations (16c) and (16d) assume that the network wanders into the d_max-sphere of influence of a pattern other than B_m whenever its distance from B_m is μ_m or more. This assumption is very conservative, since μ_m represents the shortest distance to a competing d_max-sphere of influence, and the network could actually wander to distances larger than μ_m and still converge eventually into the d_max-sphere of influence of B_m.
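The extreme energy gaps that feed these worst- and best-case matrices (equations (8)-(9)) amount to sorting the weight contributions w_ij b_mj; a minimal sketch (the sign and ordering convention — flipping bit j changes the gap by -2 w_ij b_mj — is an assumption of this sketch):

```python
import numpy as np

def extreme_energy_gaps(W, t, B_m, i, d):
    """Extreme values the i-th energy gap can take at Hamming distance d
    from pattern B_m: the gap is largest when the d flipped bits carry
    the d smallest contributions w_ij * b_mj, and smallest when they
    carry the d largest (cf. eqs. (8)-(9))."""
    contrib = np.delete(W[i] * B_m, i)        # w_ij b_mj, j != i
    gap_at_pattern = W[i] @ B_m - t[i]        # gap when the state equals B_m
    srt = np.sort(contrib)                    # ascending order
    dE_max = gap_at_pattern - 2 * srt[:d].sum()               # flip d smallest
    dE_min = gap_at_pattern - 2 * srt[len(srt) - d:].sum()    # flip d largest
    return dE_min, dE_max

rng = np.random.default_rng(4)
N = 10
B = rng.choice([-1, 1], size=N)
W = rng.normal(size=(N, N))
W = (W + W.T) / 2.0                           # symmetric weights
np.fill_diagonal(W, 0.0)
lo, hi = extreme_energy_gaps(W, np.zeros(N), B, 0, 3)
```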
CONCLUSION We have presented how the Boltzmann Machine can be used as a content-addressable memory, exploiting the stochastic nature of its state-selection procedure in order to escape undesirable minima. An association rule in terms of patterns' spheres of influence is used, along with the Machine's dynamics, in order to match an input tuple with one of the predetermined stored patterns. The system is therefore indifferent to 'spurious' states, whose spheres of influence are not recognized in the retrieval process. We have detailed a technique to calculate the upper and lower bounds on retrieval probabilities of each stored pattern. These bounds are functions of the network's parameters (i.e. assignment or learning rules, and the pattern sets); the initial Hamming distance from the stored pattern; the association rule; and the number of production steps. They allow a polynomial-time assessment of the network's capabilities as an associative memory for a given set of patterns; a comparison of different coding schemes for patterns to be stored and retrieved; an assessment of the length of the learning period necessary in order to guarantee a prespecified probability of retrieval; and a comparison of different learning/assignment rules for the network parameters. Examples and additional results are provided in a companion paper (Kam and Cheng, 1989). Acknowledgements This work was supported by NSF grant IRI 8810186. References [1] Aarts,E.H.L., Van Laarhoven,P.J.M.: "Statistical Cooling: A General Approach to Combinatorial Optimization Problems," Philips J. Res., Vol. 40, 1985. [2] Ackley,D.H., Hinton,G.E., Sejnowski,T.J.: "A Learning Algorithm for Boltzmann Machines," Cognitive Science, Vol. 9, pp. 147-169, 1985. [3] Amari,S-I.: "Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements," IEEE Trans. Computers, Vol. C-21, No. 11, pp. 1197-1206, 1972. [4] Geman,S., Geman,D.
: "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," IEEE Trans. Pattern Anal. Mach. Intell., pp. 721-741, 1984.
[5] Grossberg, S.: "Nonlinear Neural Networks: Principles, Mechanisms, and Architectures," Neural Networks, Vol. 1, 1988.
[6] Hebb, D.O.: The Organization of Behavior, New York: Wiley, 1949.
[7] Hinton, G.E., Sejnowski, T.J.: "Learning and Relearning in the Boltzmann Machine," in [14].
[8] Hopfield, J.J.: "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proc. Nat. Acad. Sci. USA, pp. 2554-2558, 1982.
[9] Hopfield, J.J., Tank, D.: "'Neural' Computation of Decisions in Optimization Problems," Biological Cybernetics, Vol. 52, pp. 1-12, 1985.
[10] Howard, R.A.: Dynamic Probabilistic Systems, New York: Wiley, 1971.
[11] Kam, M., Cheng, R.: "Decision Making with the Boltzmann Machine," Proceedings of the 1989 American Control Conference, Vol. 1, Pittsburgh, PA, 1989.
[12] Kennedy, M.P., Chua, L.O.: "Circuit Theoretic Solutions for Neural Networks," Proceedings of the 1st Int. Conf. on Neural Networks, San Diego, CA, 1987.
[13] Kirkpatrick, S., Gelatt, C.D., Jr., Vecchi, M.P.: "Optimization by Simulated Annealing," Science, 220, pp. 671-680, 1983.
[14] Rumelhart, D.E., McClelland, J.L. (editors): Parallel Distributed Processing, Volume 1: Foundations, Cambridge: MIT Press, 1986.
[15] Szu, H.: "Fast Simulated Annealing," in Denker, J.S. (editor): Neural Networks for Computing, New York: American Inst. Phys., Vol. 51, pp. 420-425, 1986.
[16] Tank, D.W., Hopfield, J.J.: "Simple 'Neural' Optimization Networks," IEEE Transactions on Circuits and Systems, Vol. CAS-33, No. 5, pp. 533-541, 1986.
LINEAR LEARNING: LANDSCAPES AND ALGORITHMS

Pierre Baldi
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109

What follows extends some of our results of [1] on learning from examples in layered feed-forward networks of linear units. In particular we examine what happens when the number of layers is large or when the connectivity between layers is local, and investigate some of the properties of an auto-associative algorithm. Notation will be as in [1], where additional motivations and references can be found. It is usual to criticize linear networks because "linear functions do not compute" and because several layers can always be reduced to one by the proper multiplication of matrices. However this is not the point of view adopted here. It is assumed that the architecture of the network is given (and could perhaps depend on external constraints) and the purpose is to understand what happens during the learning phase, what strategies are adopted by a synaptic weights modifying algorithm, ... [see also Cottrell et al. (1988) for an example of an application and the work of Linsker (1988) on the emergence of feature detecting units in linear networks]. Consider first a two-layer network with n input units, n output units and p hidden units (p < n). Let (x_1, y_1), ..., (x_T, y_T) be the set of centered input-output training patterns. The problem is then to find two matrices of weights A and B minimizing the error function E:

E(A, B) = Σ_{1≤t≤T} ||y_t - ABx_t||².  (1)

Let Σ_XX, Σ_XY, Σ_YY, Σ_YX denote the usual covariance matrices. The main result of [1] is a description of the landscape of E, characterized by a multiplicity of saddle points and an absence of local minima. More precisely, the following four facts are true.

Fact 1: For any fixed n × p matrix A the function E(A, B) is convex in the coefficients of B and attains its minimum for any B satisfying the equation

A'ABΣ_XX = A'Σ_YX.
(2)

If in addition Σ_XX is invertible and A is of full rank p, then E is strictly convex and has a unique minimum reached when

B = (A'A)⁻¹A'Σ_YX Σ_XX⁻¹.  (3)

Fact 2: For any fixed p × n matrix B the function E(A, B) is convex in the coefficients of A and attains its minimum for any A satisfying the equation

ABΣ_XX B' = Σ_YX B'.  (4)

If in addition Σ_XX is invertible and B is of full rank p, then E is strictly convex and has a unique minimum reached when

A = Σ_YX B'(BΣ_XX B')⁻¹.  (5)

Fact 3: Assume that Σ_XX is invertible. If two matrices A and B define a critical point of E (i.e. a point where ∂E/∂a_ij = ∂E/∂b_ij = 0) then the global map W = AB is of the form

W = P_A Σ_YX Σ_XX⁻¹,  (6)

where P_A denotes the matrix of the orthogonal projection onto the subspace spanned by the columns of A, and A satisfies

P_A Σ = P_A Σ P_A = Σ P_A,  (7)

with Σ = Σ_YX Σ_XX⁻¹ Σ_XY. If A is of full rank p, then A and B define a critical point of E if and only if A satisfies (7) and B is as in (3), or equivalently if and only if A and W satisfy (6) and (7). Notice that in (6), the matrix Σ_YX Σ_XX⁻¹ is the slope matrix for the ordinary least squares regression of Y on X.

Fact 4: Assume that Σ is of full rank with n distinct eigenvalues λ_1 > ... > λ_n. If I = {i_1, ..., i_p} (1 ≤ i_1 < ... < i_p ≤ n) is any ordered p-index set, let U_I = [u_{i_1}, ..., u_{i_p}] denote the matrix formed by the orthonormal eigenvectors of Σ associated with the eigenvalues λ_{i_1}, ..., λ_{i_p}. Then two full rank matrices A and B define a critical point of E if and only if there exist an ordered p-index set I and an invertible p × p matrix C such that

A = U_I C,  (8)

B = C⁻¹U_I' Σ_YX Σ_XX⁻¹.  (9)

For such a critical point we have

W = P_{U_I} Σ_YX Σ_XX⁻¹,  (10)

E(A, B) = tr(Σ_YY) - Σ_{i∈I} λ_i.  (11)

Therefore a critical point W of rank p is always the product of the ordinary least squares regression matrix followed by an orthogonal projection onto the subspace spanned by p eigenvectors of Σ. The map W associated with the index set {1, 2, ..., p} is the unique local and global minimum of E. The remaining C(n,p) - 1 p-index sets correspond to saddle points.
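Facts 1-4 are easy to check numerically. The sketch below builds the global-minimum map of (10) from the top p eigenvectors of Σ and verifies that its error matches (11); the synthetic data, dimensions and noise level are arbitrary choices for the illustration, not taken from [1]:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, T = 6, 2, 500
X = rng.standard_normal((T, n))                      # rows are the x_t
Y = X @ rng.standard_normal((n, n)) + 0.1 * rng.standard_normal((T, n))
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)                                  # centered training patterns

Sxx = X.T @ X
Sxy = X.T @ Y
Syx = Sxy.T
Syy = Y.T @ Y
Sigma = Syx @ np.linalg.solve(Sxx, Sxy)              # Σ = Σ_YX Σ_XX⁻¹ Σ_XY

lam, U = np.linalg.eigh(Sigma)                       # eigenvalues, ascending
order = np.argsort(lam)[::-1]
lam, U = lam[order], U[:, order]                     # sort descending

Up = U[:, :p]                                        # index set I = {1, ..., p}
W = Up @ Up.T @ Syx @ np.linalg.inv(Sxx)             # eq. (10): P_{U_I} Σ_YX Σ_XX⁻¹

E = np.sum((Y - X @ W.T) ** 2)                       # E at the candidate optimum
E_pred = np.trace(Syy) - lam[:p].sum()               # eq. (11)
```

The two error values agree to numerical precision, confirming that the projected regression map attains the value predicted by Fact 4.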
All additional critical points, defined by matrices A and B which are not of full rank, are also saddle points and can be characterized in terms of orthogonal projections onto subspaces spanned by q eigenvectors, with q < p.

Deep Networks

Consider now the case of a deep network with a first layer of n input units, an (m+1)-th layer of n output units and m - 1 hidden layers, with an error function given by

E(A_1, ..., A_m) = Σ_{1≤t≤T} ||y_t - A_1A_2...A_m x_t||².  (12)

It is worth noticing that, as in Facts 1 and 2 above, if we fix any m - 1 of the m matrices A_1, ..., A_m then E is convex in the remaining matrix of connection weights. Let p (p < n) denote the number of units in the smallest layer of the network (several hidden layers may have p units). In other words the network has a bottleneck of size p. Let i be the index of the corresponding layer and set

A = A_1A_2...A_{m-i+1},  B = A_{m-i+2}...A_m.  (13)

When we let A_1, ..., A_m vary, the only restriction they impose on A and B is that they be of rank at most p. Conversely, any two matrices A and B of rank at most p can always be decomposed (and in many ways) into products of the form of (13). It results that any local minimum of the error function of the deep network yields a local minimum for the corresponding "collapsed" three-layer network induced by (13), and vice versa. Therefore E(A_1, ..., A_m) does not have any local minima and the global minimal map W* = A_1A_2...A_m is unique and given by (10) with index set {1, 2, ..., p}. Notice that of course there is a large number of ways of decomposing W* into a product of the form A_1A_2...A_m. Also a saddle point of the error function E(A, B) does not necessarily generate a saddle point of the corresponding E(A_1, ..., A_m), for the expressions corresponding to the two gradients are very different.

Forced Connections.
Local Connectivity

Assume now an error function of the form

E(A) = Σ_{1≤t≤T} ||y_t - Ax_t||²  (14)

for a two-layer network, but where the value of some of the entries of A may be externally prescribed. In particular this includes the case of local connectivity, described by relations of the form a_ij = 0 for any output unit i and any input unit j which are not connected. Clearly the error function E(A) is convex in A. Every constraint of the form a_ij = cst defines a hyperplane in the space of all possible A. The intersection of all these constraints is therefore a convex set. Thus minimizing E under the given constraints is still a convex optimization problem, and so there are no local minima. It should be noticed that, in the case of a network with even only three constrained layers, with two matrices A and B and a set of constraints of the form a_ij = cst on A and b_kl = cst on B, the set of admissible matrices of the form W = AB is, in general, not convex anymore. It is not unreasonable to conjecture that local minima may then arise, though this question needs to be investigated in greater detail.

Algorithmic Aspects

One of the nice features of the error landscapes described so far is the absence of local minima and the existence, up to equivalence, of a unique global minimum which can be understood in terms of principal component analysis and least squares regression. However the landscapes are also characterized by a large number of saddle points, which could constitute a problem for a simple gradient descent algorithm during the learning phase. The proof in [1] shows that the lower the E value of a saddle point, the more difficult it is to escape from it, because of a reduction in the possible number of directions of escape (see also [Chauvin, 1989] in a context of Hebbian learning). To assess how relevant these issues are for practical implementations requires further simulation experiments.
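The convexity argument for prescribed entries can be illustrated with projected gradient descent: take a gradient step on E(A), then reset the fixed entries. This is just one simple way to handle the constraints; the data, connectivity mask and step size below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 5, 200
X = rng.standard_normal((T, n)); X -= X.mean(axis=0)
Y = X @ rng.standard_normal((n, n)).T + 0.05 * rng.standard_normal((T, n))
Y -= Y.mean(axis=0)

free = rng.random((n, n)) < 0.6             # True where a_ij may vary
A = np.zeros((n, n))                        # prescribed entries held at a_ij = 0

Sxx = X.T @ X / T
Syx = Y.T @ X / T
eta = 0.5 / np.linalg.eigvalsh(Sxx).max()   # safe step size for this quadratic
for _ in range(3000):
    A -= eta * 2.0 * (A @ Sxx - Syx)        # gradient step on E(A) = mean ||y_t - Ax_t||²
    A[~free] = 0.0                          # project back: re-impose a_ij = cst (= 0 here)

resid = np.abs((A @ Sxx - Syx)[free]).max() # gradient on the free entries only
E = np.mean(np.sum((Y - X @ A.T) ** 2, axis=1))
E0 = np.mean(np.sum(Y ** 2, axis=1))        # error of the all-zero map
```

Because the constrained problem is convex, the gradient restricted to the free entries vanishes at convergence and the achieved error is the constrained global minimum.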
On a more speculative side, it remains also to be seen whether, in a problem of large size, the number and spacing of saddle points encountered during the first stages of a descent process could not be used to "get a feeling" for the type of terrain being descended and, as a result, to adjust the pace (i.e. the learning rate). We now turn to a simple algorithm for the auto-associative case in a three-layer network, i.e. the case where the presence of a teacher can be avoided by setting y_t = x_t and thereby trying to achieve a compression of the input data in the hidden layer. This technique is related to principal component analysis, as described in [1]. If y_t = x_t, it is easy to see from equations (8) and (9) that, if we take the matrix C to be the identity, then at the optimum the matrices A and B are transposes of each other. This heuristically suggests a possible fast algorithm for auto-association, where at each iteration a gradient descent step is applied only to one of the connection matrices while the other is updated in a symmetric fashion using transposition, avoiding back-propagation of the error in one of the layers (see [Williams, 1985] for a similar idea). More formally, the algorithm could be concisely described by

A(0) = random,  B(0) = A'(0),
A(k+1) = A(k) - η ∂E/∂A,  B(k+1) = A'(k+1).  (15)

Obviously a similar algorithm can be obtained by setting B(k+1) = B(k) - η ∂E/∂B and A(k+1) = B'(k+1). It may actually even be better to alternate the gradient step, one iteration with respect to A and one iteration with respect to B. A simple calculation shows that (15) can be rewritten as

A(k+1) = A(k) + η(I - W(k))Σ_XX A(k),
B(k+1) = B(k) + ηB(k)Σ_XX(I - W(k)),  (16)

where W(k) = A(k)B(k). It is natural from what we have already seen to examine the behavior of this algorithm on the eigenvectors of Σ_XX. Assume that u is an eigenvector of both Σ_XX and W(k), with eigenvalues λ and μ(k).
Then it is easy to see that u is an eigenvector of W(k+1) with eigenvalue

μ(k+1) = μ(k)[1 + ηλ(1 - μ(k))]².  (17)

For the algorithm to converge to the optimal W, μ(k+1) must converge to 0 or 1. Thus one has to look at the iterates of the function f(x) = x[1 + ηλ(1 - x)]². This can be done in detail and we shall only describe the main points. First of all, f'(x) = 0 iff x = x_a = 1 + 1/(ηλ) or x = x_b = 1/3 + 1/(3ηλ), and f''(x) = 0 iff x = x_c = 2/3 + 2/(3ηλ) = 2x_b. For the fixed points, f(x) = x iff x = 0, x = 1 or x = x_d = 1 + 2/(ηλ). Notice also that f(x_a) = 0 and f'(1) = 1 - 2ηλ. Points corresponding to the values 0, 1, x_a, x_d of the x variable can readily be positioned on the curve of f, but the relative position of x_b (and x_c) depends on the value assumed by ηλ with respect to 1/2. Obviously if μ(0) = 0, 1 or x_d then μ(k) = 0, 1 or x_d; if μ(0) < 0 then μ(k) → -∞; and if μ(0) > x_d then μ(k) → +∞. Therefore the algorithm can converge only for 0 < μ(0) < x_d. When the learning rate is too large, i.e. when ηλ > 1/2, then even if μ(0) is in the interval (0, x_d) one can see that the algorithm does not converge and may even exhibit complex oscillatory behavior. However when ηλ < 1/2, if 0 < μ(0) < x_a then μ(k) → 1; if μ(0) = x_a then μ(k) = 0; and if x_a < μ(0) < x_d then μ(k) → 1. In conclusion, we see that if the algorithm is to be tested, the learning rate should be chosen so that it does not exceed 1/(2λ), where λ is the largest eigenvalue of Σ_XX. Even more so than back-propagation, it can encounter problems in the proximity of saddle points. Once a non-principal eigenvector of Σ_XX is learnt, the algorithm rapidly incorporates a projection along that direction which cannot be escaped at later stages. Simulations are required to examine the effects of "noisy gradients" (computed after the presentation of only a few training examples), multiple starting points, variable learning rates, momentum terms, and so forth.
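The eigenvalue recursion f(x) = x[1 + ηλ(1 - x)]² is easy to iterate directly. The sketch below checks the stated behavior for one arbitrary value ηλ = 0.3 < 1/2: eigenvalues started in (0, x_a) grow to 1, while a start exactly at x_a falls to 0:

```python
def f(x, a):
    """One eigenvalue update of eq. (17), with a standing for eta*lambda."""
    return x * (1.0 + a * (1.0 - x)) ** 2

def iterate(x0, a, k):
    x = x0
    for _ in range(k):
        x = f(x, a)
    return x

a = 0.3                            # eta*lambda < 1/2
x_a = 1.0 + 1.0 / a                # the nonzero root of f: f(x_a) = 0

mu_mid = iterate(0.5, a, 200)      # mu(0) in (0, x_a): converges to 1
mu_tiny = iterate(1e-3, a, 200)    # even a tiny eigenvalue is driven to 1
mu_root = iterate(x_a, a, 5)       # mu(0) = x_a: collapses to (numerically) 0
```

Near the fixed point 1 the contraction factor is f'(1) = 1 - 2ηλ, which is why convergence requires a learning rate below 1/(2λ).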
Acknowledgement

Work supported by NSF grant DMS-8800323 and in part by ONR contract 411P006-01.

References

(1) Baldi, P. and Hornik, K. (1988) Neural Networks and Principal Component Analysis: Learning from Examples without Local Minima. Neural Networks, Vol. 2, No. 1.
(2) Chauvin, Y. (1989) Another Neural Model as a Principal Component Analyzer. Submitted for publication.
(3) Cottrell, G. W., Munro, P. W. and Zipser, D. (1988) Image Compression by Back Propagation: a Demonstration of Extensional Programming. In: Advances in Cognitive Science, Vol. 2, Sharkey, N. E. ed., Norwood, NJ: Ablex.
(4) Linsker, R. (1988) Self-Organization in a Perceptual Network. Computer 21(3), 105-117.
(5) Williams, R. J. (1985) Feature Discovery Through Error-Correction Learning. ICS Report 8501, University of California, San Diego.

MODELS OF OCULAR DOMINANCE COLUMN FORMATION: ANALYTICAL AND COMPUTATIONAL RESULTS

Kenneth D. Miller, UCSF Dept. of Physiology, SF, CA 94143-0444, [email protected]
Joseph B. Keller, Mathematics Dept., Stanford
Michael P. Stryker, Physiology Dept., UCSF

ABSTRACT

We have previously developed a simple mathematical model for formation of ocular dominance columns in mammalian visual cortex. The model provides a common framework in which a variety of activity-dependent biological mechanisms can be studied. Analytic and computational results together now reveal the following: if inputs specific to each eye are locally correlated in their firing, and are not anticorrelated within an arbor radius, monocular cells will robustly form and be organized by intra-cortical interactions into columns. Broader correlations within each eye, or anti-correlations between the eyes, create a more purely monocular cortex; positive correlation over an arbor radius yields an almost perfectly monocular cortex. Most features of the model can be understood analytically through decomposition into eigenfunctions and linear stability analysis.
This allows prediction of the widths of the columns and other features from measurable biological parameters.

INTRODUCTION

In the developing visual system in many mammalian species, there is initially a uniform, overlapping innervation of layer 4 of the visual cortex by inputs representing the two eyes. Subsequently, these inputs segregate into patches or stripes that are largely or exclusively innervated by inputs serving a single eye, known as ocular dominance patches. The ocular dominance patches are on a small scale compared to the map of the visual world, so that the initially continuous map becomes two interdigitated maps, one representing each eye. These patches, together with the layers of cortex above and below layer 4, whose responses are dominated by the eye innervating the corresponding layer 4 patch, are known as ocular dominance columns. The discoveries of this system of ocular dominance and many of the basic features of its development were made by Hubel and Wiesel in a series of pioneering studies in the 1960s and 1970s (e.g. Wiesel and Hubel (1965), Hubel, Wiesel and LeVay (1977)). A recent brief review is in Miller and Stryker (1989). The segregation of patches depends on local correlations of neural activity that are very much greater within neighboring cells in each eye than between the two eyes. Forcing the eyes to fire synchronously prevents segregation, while forcing them to fire more asynchronously than normal causes a more complete segregation than normal. The segregation also depends on the activity of cortical cells. Normally, if one eye is closed in a young kitten during a critical period for developmental plasticity ("monocular deprivation"), the more active, open eye comes to dominantly or exclusively drive most cortical cells, and the inputs and influence of the closed eye become largely confined to small islands of cortex.
However, when cortical cells are inhibited from firing, the opposite is the case: the less active eye becomes dominant, suggesting that it is the correlation between pre- and post-synaptic activation that is critical to synaptic strengthening. We have developed and analyzed a simple mathematical model for formation of ocular dominance patches in mammalian visual cortex, which we briefly review here (Miller, Keller, and Stryker, 1986). The model provides a common framework in which a variety of activity-dependent biological models, including Hebb synapses and activity-dependent release and uptake of trophic factors, can be studied. The equations are similar to those developed by Linsker (1986) to study the development of orientation selectivity in visual cortex. We have now extended our analysis and also undertaken extensive simulations to achieve a more complete understanding of the model. Many results have appeared, or will appear, in more detail elsewhere (Miller, Keller and Stryker, 1989; Miller and Stryker, 1989; Miller, 1989).

EQUATIONS

Consider inputs carrying information from two eyes and co-innervating a single cortical sheet. Let S^L(x, α, t) and S^R(x, α, t), respectively, be the left-eye and right-eye synaptic weight from eye coordinate α to cortical coordinate x at time t. Consideration of simple activity-dependent models of synaptic plasticity, such as Hebb synapses or activity-dependent release and uptake of trophic or modification factors, leads to equations for the time development of S^L and S^R:

∂_t S^J(x, α, t) = λA(x - α) Σ_{y,β,K} I(x - y) C^{JK}(α - β) S^K(y, β, t) - γS^J(x, α, t) - ε.  (1)

J, K are variables which each may take on the values L, R. A(x - α) is a connectivity function, giving the number of synapses from α to x (we assume an identity mapping from eye coordinates to cortical coordinates). C^{JK}(α - β) measures the correlation in firing of inputs from eyes J and K when the inputs are separated by the distance α - β.
I(x - y) is a model-dependent spread of influence across cortex, telling how two synapses which fire synchronously, separated by the distance x - y, will influence one another's growth. This influence incorporates lateral synaptic interconnections in the case of Hebb synapses (for linear activation, I = (1 - B)⁻¹, where 1 is the identity matrix and B is the matrix of cortico-cortical synaptic weights), and incorporates the effects of diffusion of trophic or modification factors in models involving such factors. λ, γ and ε are constants. Constraints to conserve or limit the total synaptic strength supported by a single cell, and nonlinearities to keep left- and right-eye synaptic weights positive and less than some maximum, are added. Subtracting the equation for S^R from that for S^L yields a model equation for the difference, S^D(x, α, t) = S^L(x, α, t) - S^R(x, α, t):

∂_t S^D(x, α, t) = λA(x - α) Σ_{y,β} I(x - y) C^D(α - β) S^D(y, β, t) - γS^D(x, α, t).  (2)

Here, C^D = C^SameEye - C^OppEye, where C^SameEye = C^LL = C^RR and C^OppEye = C^LR = C^RL, and we have assumed statistical equality of the two eyes.

SIMULATIONS

The development of equation (1) was studied in simulations using three 25 × 25 grids for the two input layers, one representing each eye, and a single cortical layer. Each input cell connects to a 7 × 7 square arbor of cortical cells centered on the corresponding grid point (A(x) = 1 on the square of ±3 grid points, 0 otherwise). Initial synaptic weights are randomly assigned from a uniform distribution between 0.8 and 1.2. Synapses are allowed to decrease to 0 or increase to a weight of 8. Synaptic strength over each cortical cell is conserved by subtracting after each iteration from each active synapse the average change in synaptic strength on that cortical cell. Periodic boundary conditions on the three grids are used. A typical time development of the cortical pattern of ocular dominance is shown in figure 1.
For this simulation, correlations within each eye decrease with distance to zero over 4-5 grid points (circularly symmetric Gaussian with parameter 2.8 grid points). There are no opposite-eye correlations. The cortical interaction function is a "Mexican hat" function, excitatory between nearest neighbors and weakly inhibitory more distantly (a difference of two Gaussians with excitatory width parameter λ_1 = 0.93). Individual cortical cell receptive fields refine in size and become monocular (innervated exclusively by a single eye), while individual input arbors refine in size and become confined to alternating cortical stripes (not shown). Dependence of these results on the correlation function is shown in figure 2. Wider ranging correlations within each eye, or addition of opposite-eye anticorrelations, increase the monocularity of cortex. Same-eye anticorrelations decrease monocularity, and if significant within an arbor radius (i.e. within ±3 grid points for the 7 × 7 square arbors) tend to destroy the monocular organization, as seen at the lower right. Other simulations (not shown) indicate that same-eye correlation over nearest neighbors is sufficient to give the periodic organization of ocular dominance, while correlation over an arbor radius gives an essentially fully monocular cortex.

Figure 1. Time development of cortical ocular dominance. Results shown after 0, 10, 20, 30, 40, 80 iterations. Each pixel represents ocular dominance (Σ_α S^D(x, α)) of a single cortical cell. 40 × 40 pixels are shown, repeating 15 columns and rows of the cortical grid, to reveal the pattern across the periodic boundary conditions.

Simulation of time development with varying cortical interaction and arbor functions shows complete agreement with the analytical results presented below. The model also reproduces the experimental effects of monocular deprivation, including the presence of a critical developmental period for this effect.
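A stripped-down, one-dimensional version of this simulation can be written directly from equation (2). The arbor, correlation and Mexican-hat widths below are illustrative stand-ins for the paper's 2-D parameters; power iteration on the linear operator extracts the fastest-growing pattern, which should oscillate in sign across cortex, i.e. form alternating columns:

```python
import numpy as np

N = 24                                            # 1-D periodic cortex/input grid
g = np.arange(N)
d = np.abs(g[:, None] - g[None, :])
d = np.minimum(d, N - d)                          # periodic distances

A = (d <= 3).astype(float)                        # arbor: +/-3 grid points
C = np.exp(-(d / 2.8) ** 2)                       # same-eye Gaussian correlations C^D
I = np.exp(-(d / 2.0) ** 2) - 0.5 * np.exp(-(d / 6.0) ** 2)  # illustrative Mexican hat

# Linear operator of eq. (2): M[(x,a),(y,b)] = A(x-a) I(x-y) C(a-b);
# the -gamma S^D term only shifts all eigenvalues, so it is omitted.
M = (A[:, :, None, None] * I[:, None, :, None] * C[None, :, None, :]).reshape(N * N, N * N)

rng = np.random.default_rng(0)
v = rng.standard_normal(N * N)                    # small random initial S^D
for _ in range(300):                              # power iteration: fastest-growing mode
    v = M @ v
    v /= np.linalg.norm(v)

SD = v.reshape(N, N)
od = SD.sum(axis=1)                               # cortical ocular dominance Σ_a S^D(x,a)
k_peak = int(np.argmax(np.abs(np.fft.rfft(od))))  # dominant wavenumber of the pattern
```

On this toy grid the extracted ocular dominance pattern alternates between positive and negative domains at a nonzero wavenumber, the 1-D analogue of the stripes in figure 1.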
EIGENFUNCTION ANALYSIS

Consider an initial condition for which S^D ≈ 0, and near which equation (2) linearizes some more complex, nonlinear biological reality. S^D = 0 is a time-independent solution of equation (2). Is this solution stable to small perturbations, so that equality of the two eyes will be restored after perturbation, or is it unstable, so that a pattern of ocular dominance will grow? If it is unstable, which pattern will initially grow the fastest? These are inherently linear questions: they depend only on the behavior of the equations when S^D is small, so that nonlinear terms are negligible. To solve this problem, we find the characteristic, independently growing modes of equation (2). These are the eigenfunctions of the operator on the right side of that equation. Each mode grows exponentially with growth rate given by its eigenvalue. If any eigenvalue is positive (they are real), the corresponding mode will grow. Then the S^D = 0 solution is unstable to perturbation.

Figure 2. Cortical ocular dominance after 200 iterations for 6 choices of correlation functions. Top left is simulation of figure 1. Top and bottom rows correspond to Gaussian same-eye correlations with parameter 2.8 and 1.4 grid points, respectively. Middle column shows the effect of adding weak, broadly ranging anticorrelations between the two eyes (Gaussian with parameter 9 times larger than, and a fraction of the amplitude of, the same-eye correlations). Right column shows the effect of instead adding the anticorrelation to the same-eye correlation function.

ANALYTICAL CHARACTERIZATION OF EIGENFUNCTIONS

Change variables in equation (2) from cortex and inputs, (x, α), to cortex and receptive field, (x, r) with r = x - α. Then equation (2) becomes a convolution in the cortical variable.
The result (assume a continuum; results on a grid are similar) is that eigenfunctions are of the form S^D_{m,e}(x, α, t) = e^{im·x} RF_{m,e}(r). RF_{m,e} is a characteristic receptive field, representing the variation of the eigenfunction as r varies while cortical location x is fixed. m is a pair of real numbers specifying a two-dimensional wavenumber of cortical oscillation, and e is an additional index enumerating RFs for a given m. The eigenfunctions can also be written e^{im·α} ARB_{m,e}(r), where ARB_{m,e}(r) = e^{im·r} RF_{m,e}(r). ARB is a characteristic arbor, representing the variation of the eigenfunction as r varies while input location α is fixed. While these functions are complex, one can construct real eigenfunctions from them with similar properties (Miller and Stryker, 1989). A monocular (real) eigenfunction is illustrated in figure 3.

Figure 3. One of the set (identical but for rotations and reflections) of fastest-growing eigenfunctions for the functions used in figure 1. The monocular receptive fields of synaptic differences S^D at different cortical locations, the oscillation across cortex, and the corresponding characteristic arbors are illustrated.

Modes with RFs dominated by one eye (Σ_r RF_{m,e}(r) ≠ 0) will oscillate in dominance with wavelength 2π/|m| across cortex. A monocular mode is one for which RF does not change sign. The oscillation of monocular fields, between domination by one eye and domination by the other, yields ocular dominance columns. The fastest growing mode in the linear regime will dominate the final pattern: if its receptive field is monocular, its wavelength will determine the width of the final columns. One can characterize the eigenfunctions analytically in various limiting cases. The general conclusion is as follows. The fastest growing mode's receptive field RF is largely determined by the correlation function C^D.
If the peak of the Fourier transform of C^D corresponds to a wavelength much larger than an arbor diameter, the mode will be monocular; if it corresponds to a wavelength smaller than an arbor diameter, the mode will be binocular. If C^D selects a monocular mode, a broader C^D (more sharply peaked Fourier spectrum about wavenumber 0) will increase the dominance in growth rate of the monocular mode over other modes; in the limit in which C^D is constant with distance, only the monocular modes grow and all other modes decay. If the mode is monocular, the peak of the Fourier transform of the cortical interaction function selects the wavelength of the cortical oscillation, and thus selects the wavelength of ocular dominance organization. In the limit in which correlations are broad with respect to an arbor, one can calculate that the growth rate of monocular modes as a function of wavenumber of oscillation m is proportional to Σ_l Ĩ(m - l) C̃^D(l) Ã²(l) (where X̃ is the Fourier transform of X). In this limit, only l's which are close to 0 can contribute to the sum, so the peak will occur at or near the m which maximizes Ĩ(m). There is an exception to the above results if constraints conserve, or limit the change in, the total synaptic strength over the arbor of an input cell. Then monocular modes with wavelength longer than an arbor diameter are suppressed in growth rate, since individual inputs would have to gain or lose strength throughout their arborization. Given a correlation function that leads to monocular cells, a purely excitatory cortical interaction function would lead a single eye to take over all of cortex; however, if constraints conserve synaptic strength over an input arbor, the wavelength will instead be about an arbor diameter, the largest wavelength whose growth rate is not suppressed.
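The broad-correlation growth-rate expression for monocular modes can be evaluated directly with FFTs. The functions below are illustrative 1-D choices of the same shape as in the simulations, not the paper's exact parameters; the check is that the fastest-growing wavenumber is nonzero and sits near the peak of Ĩ, as stated in the text:

```python
import numpy as np

N = 64                                           # 1-D periodic grid
g = np.arange(N)
d = np.minimum(g, N - g)                         # periodic distance from the origin

A = (d <= 3).astype(float)                       # arbor function
C = np.exp(-(d / 2.8) ** 2)                      # broad same-eye correlation C^D
I = np.exp(-(d / 2.0) ** 2) - 0.5 * np.exp(-(d / 6.0) ** 2)  # illustrative Mexican hat

It = np.real(np.fft.fft(I))                      # Fourier transforms: real-valued,
Ct = np.real(np.fft.fft(C))                      # since the functions are even in d
At = np.real(np.fft.fft(A))

# growth(m) = sum_l It(m - l) Ct(l) At(l)^2, a circular convolution over l
growth = np.real(np.fft.ifft(np.fft.fft(It) * np.fft.fft(Ct * At ** 2)))

half = N // 2
k_I = int(np.argmax(It[:half]))                  # wavenumber preferred by I alone
k_growth = int(np.argmax(growth[:half]))         # fastest-growing monocular mode
```

With these choices the Mexican hat's transform is negative at wavenumber 0, so the uniform (cortex-wide) mode decays and the growth spectrum peaks at a finite wavelength close to the peak of Ĩ.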
Thus, ocular dominance segregation can occur with a purely excitatory cortical interaction function, though this is a less robust phenomenon. Analytically, a constraint conserving strength over afferent arbors, implemented by subtracting the average change in strength over an arbor at each iteration from each synapse in the arbor, transforms the previous expression for the growth rates to Σ_l Ĩ(m - l) C̃^D(l) Ã²(l)(1 - Ã²(m)/Ã²(0)).

COMPUTATION OF EIGENFUNCTIONS

Eigenfunctions are computed on a grid, and the resulting growth rates as a function of wavelength are compared to the analytical expression above, in the absence of constraints on afferents. The results, for the parameters used in figure 2, are shown in figure 4. The grey level indicates monocularity of the modes, defined as Σ_r RF(r) normalized on a scale between 0 and 1 (described in Miller and Stryker (1989)). The analytical expression for the growth rate, whose peak coincides in every case with the peak of Ĩ(m), accurately predicts the growth rate of monocular modes, even far from the limiting case in which the expression was derived. Broader correlations or opposite-eye anticorrelations enhance the monocularity of modes and the growth rate of monocular modes, while same-eye anticorrelations have the opposite effects. When same-eye anticorrelations are short range compared to an arbor radius, the fastest growing modes are binocular. Results obtained for calculations in the presence of constraints on afferents are also as predicted. With an excitatory cortical interaction function, the spectrum is radically changed by constraints, selecting a mode with a wavelength equal to an arbor diameter rather than one with a wavelength as wide as cortex. With the Mexican hat cortical interaction function used in the simulations, the constraints suppress the growth of long-wavelength monocular modes but do not alter the basic
structure or peak of the spectrum.

Figure 4. Growth rate (vertical axis) as a function of inverse wavelength (horizontal axis) for the six sets of functions used in figure 2, computed on the same grids. Grey level codes maximum monocularity of modes with the given wavelength and growth rate, from fully monocular (white) to fully binocular (black). The black curve indicates the prediction for relative growth rates of monocular modes given in the limit of broad correlations, as described in the text.

CONNECTIONS TO OTHER MODELS

The model of Swindale (1980) for ocular dominance segregation emerges as a limiting case of this model when correlations are constant over a bit more than an arbor diameter. Swindale's model assumed an effective interaction between synapses depending only on eye of origin and distance across cortex. Our model gives a biological underpinning to this effective interaction in the limiting case, allows consideration of more general correlation functions, and allows examination of the development of individual arbors and receptive fields and their relationships, as well as of overall ocular dominance. Equation (2) is very similar to equations studied by others (Linsker, 1986, 1988; Sanger, this volume). There are several important differences in our results. First, in this model synapses are constrained to remain positive. Biological synapses are either exclusively positive or exclusively negative, and in particular the projection of visual input to visual cortex is purely excitatory. Even if one is modelling a system in which there are both excitatory and inhibitory inputs, these two populations will almost certainly be statistically distinct in their activities and hence not treatable as a single population whose strengths may be either positive or negative.
SD, on the other hand, is a biological variable which starts near 0 and may be either positive or negative. This allows for a linear analysis whose results will remain accurate in the presence of nonlinearities, which is crucial for biology. Second, we analyze the effect of intracortical synaptic interactions. These have two impacts on the modes: first, they introduce a phase variation or oscillation across cortex. Second, they typically enhance the growth rate of monocular modes relative to modes whose sign varies across the receptive field.

Acknowledgements

Supported by an NSF predoctoral fellowship and by grants from the McKnight Foundation and the System Development Foundation. Simulations were performed at the San Diego Supercomputer Center.

References

Hubel, D.H., T.N. Wiesel and S. LeVay, 1977. Plasticity of ocular dominance columns in monkey striate cortex, Phil. Trans. R. Soc. Lond. B 278:377-409.
Linsker, R., 1986. From basic network principles to neural architecture, Proc. Natl. Acad. Sci. USA 83:7508-7512, 8390-8394, 8779-8783.
Linsker, R., 1988. Self-Organization in a Perceptual Network. IEEE Computer 21:105-117.
Miller, K.D., 1989. Correlation-based models of neural development, to appear in Neuroscience and Connectionist Theory (M.A. Gluck & D.E. Rumelhart, Eds.), Hillsdale, NJ: Lawrence Erlbaum Associates.
Miller, K.D., J.B. Keller & M.P. Stryker, 1986. Models for the formation of ocular dominance columns solved by linear stability analysis, Soc. Neurosc. Abst. 12:1373.
Miller, K.D., J.B. Keller & M.P. Stryker, 1989. Ocular dominance column development: analysis and simulation. Submitted for publication.
Miller, K.D. & M.P. Stryker, 1989. The development of ocular dominance columns: mechanisms and models, to appear in Connectionist Modeling and Brain Function: The Developing Interface (S.J. Hanson & C.R. Olson, Eds.), MIT Press/Bradford.
Sanger, T.D., 1989. An optimality principle for unsupervised learning, this volume.
Swindale, N.V., 1980.
A model for the formation of ocular dominance stripes, Proc. R. Soc. Lond. B 208:265-307.
Wiesel, T.N. & D.H. Hubel, 1965. Comparison of the effects of unilateral and bilateral eye closure on cortical unit responses in kittens, J. Neurophysiol. 28:1029-1040.
| 1988 | 77 | 165 |
728 DIGITAL REALISATION OF SELF-ORGANISING MAPS

Nigel M. Allinson, Martin J. Johnson, Kevin J. Moon
Department of Electronics
University of York
York YO1 5DD
England

ABSTRACT

A digital realisation of two-dimensional self-organising feature maps is presented. The method is based on subspace classification using an n-tuple technique. Weight vector approximation and orthogonal projections to produce a winner-takes-all network are also discussed. Over one million effective binary weights can be applied in 25 ms using a conventional microcomputer. Details of a number of image recognition tasks, including character recognition and object centring, are described.

INTRODUCTION

Background

The overall aim of our work is to develop fast and flexible systems for image recognition, usually for commercial inspection tasks. There is an urgent need for automatic learning systems in such applications, since at present most systems employ heuristic classification techniques. This approach requires an extensive development effort for each new application, which exaggerates implementation costs; and for many tasks, there are no clearly defined features which can be employed for classification. Enquiring of a human expert will often only produce "good" and "bad" examples of each class and not the underlying strategies which he may employ. Our approach is to model in a quite abstract way the perceptual networks found in the mammalian brain for vision. A back-propagation network could be employed to generalise about the input pattern space, and it would find some useful representations. However, there are many difficulties with this approach, since the network structure assumes nothing about the input space and it can be difficult to bound complicated feature clusters using hyperplanes. The mammalian brain is a layered structure, and so another model may be proposed which involves the application of many two-dimensional feature maps.
Each map takes information from the output of the preceding one and performs some type of clustering analysis in order to reduce the dimensionality of the input information. For successful recognition, similar patterns must be topologically close so that novel patterns are in the same general area of the feature map as the class they are most like. There is therefore a need for both global and local ordering processes within the feature map. The process of global ordering in a topological map is termed, by Kohonen (1984), self-organisation. It is important to realise that all feedforward networks perform only one function, namely the labelling of areas in a pattern space. This paper concentrates on a technique for realising large, fast, two-dimensional feature maps using a purely digital implementation.

Figure 1. Unbounded Feature Map of Local Edges

Self Organisation

Global ordering needs to adapt the entire neural map, but local ordering needs only local information. Once the optimum global organisation has been found, then only more localised ordering can improve the topological organisation. This process is the basis of the Kohonen clustering algorithm, where the specified area of adaption decreases with time to give an increasingly local ordering. It has been shown that this approach gives optimal ordering at global and local levels (Oja, 1983). It may be considered as a dimensionality reduction algorithm, and can be used as a vector quantiser. Although Kohonen's self-organising feature maps have been successfully applied to speech recognition (Kohonen, 1988; Tattersall et al., 1988), there has been little investigation of their application to image recognition. Such feature maps can be used to extract various image primitives, such as textures, localised edges and terminations, at various scales of representation (Johnson and Allinson, 1988).
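The ordering process described above, winner selection followed by adaption over a neighbourhood that shrinks with time, can be sketched as follows. The grid size, decay schedules, and Gaussian neighbourhood profile are our own illustrative choices rather than the paper's; the normalised dot product matches the distance measure used in the analogue version described below.

```python
import numpy as np

def winner(W, x):
    """Best-matching unit under the normalised dot product."""
    sim = (W @ x) / (np.linalg.norm(W, axis=2) * np.linalg.norm(x) + 1e-12)
    return np.unravel_index(np.argmax(sim), sim.shape)

def train_som(patterns, grid=8, iters=2000, seed=0):
    """Kohonen-style training: adapt the winner and a neighbourhood whose
    radius (and learning rate) shrinks over time, giving global ordering
    first and increasingly local ordering later."""
    rng = np.random.default_rng(seed)
    W = rng.random((grid, grid, patterns.shape[1]))
    ys, xs = np.mgrid[0:grid, 0:grid]
    for t in range(iters):
        x = patterns[rng.integers(len(patterns))]
        wy, wx = winner(W, x)
        frac = t / iters
        radius = 0.5 + (grid / 2) * (1.0 - frac)   # shrinking area of adaption
        lr = 0.01 + 0.5 * (1.0 - frac)
        h = np.exp(-((ys - wy) ** 2 + (xs - wx) ** 2) / (2 * radius ** 2))
        W += lr * h[:, :, None] * (x - W)
    return W
```

After training on two well-separated pattern clusters, the two clusters win at different map locations, and each winner's weight vector points close to its cluster's direction.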
As a simple example, a test image of concentric circles is employed to construct a small feature map of localised edges (Figure 1). The distance measure used is the normalised dot product, since in general magnitude information is unimportant. Under these conditions, each neuron output can be considered a similarity measure of the directions between the input pattern and the synaptic weight vector. This map shows that similar edges have been grouped together and that inverses are as far from each other as possible.

DIGITAL IMPLEMENTATION

Sub-Space Classification

Although a conventional serial computer is normally thought of as only performing one operation at a time, there is a task which it can successfully perform involving parallel computation. The action of addressing memory can be thought of as a highly parallel process, since it involves the comparison of a word, W, with a set of 2^N others, where N is the number of bits in W. It is, in effect, performing 2^N parallel computations, each being a single match. This can be exploited to speed up the simulation of a network by using a conversion between conventional pattern space labelling and binary addressing. Figure 2 shows how the labelling of two-dimensional pattern space is equivalent to the partitioning of the same space by the decision regions of a multiple layer perceptron. If each quantised part of the space is labelled with a number for each class then all that is necessary is for the pattern to be used as an address to give the stored label (i.e. the response) for each class. These labels may form a cluster of any shape and so multiple layers are not required to combine regions. The apparent flaw in the above suggestion is that for anything other than a trivial problem, the labelling of every part of pattern space is impractical. For example a 32 x 32 input vector would require a memory of 2^1024 words per unit!
What is needed is a coding system which uses some basic assumptions about patterns in order to reduce the memory requirements. One assumption which can be made is that patterns will cluster together into various classes. As early as 1959, a method known as the n-tuple technique was used for pattern recognition (Bledsoe and Browning, 1959). This technique takes a number of subspaces of the pattern space and uses the sum of the resultant labels as the overall response.

Figure 2. Comparison of Perceptron and Sub-Space Classification. The labelling of a quantised subspace (LABELING panel, classes c1/c2 over x1, x2) is equivalent to the partitioning of pattern space by the multi-layer perceptron (PERCEPTRON panel).

This gives a set of much smaller memories, and inherent in the coding method is that similar patterns will have identical labels. For example, assume a 16 bit pattern - 0101101001010100. Taking a four-bit sample from this, say bits 0-3, gives 0100. This can be used to address a 16 word memory to produce a single bit. If this bit is set to 1, then it is in effect labelling all patterns with 0100 as their first four bits; that is, 4096 patterns of the form xxxxxxxxxxxx0100. Taking a second sample, namely bits 4-7 (0101), labels xxxxxxxx0101xxxx patterns, but when added to the first sample there will be 256 patterns labelled twice (namely, xxxxxxxx01010100) and 7936 (i.e. 8192 - 256) labelled once. The third four-bit sample produces 16 patterns (namely, xxxx101001010100) labelled three times.
The fourth sample produces only one pattern, 0101101001010100, which has been labelled four times. If an input pattern is applied which differs from this by one bit, then it will now be labelled three times by the samples; if it differs by two bits, it will be labelled either two or three times depending on whether the changes were in the same four-bit sample or not. Thus a distance measure is implicit in the coding method and reflects the assumed clustering of patterns. Applying this approach to the earlier problem of a 32 x 32 binary input vector and taking 128 eight-bit samples results in a distance measure between 0 and 128 and uses 32K bits of memory per unit.

Weight Vector Approximation

It is possible to make an estimate of the approximate weight vector for a particular sample from the bit table. For simplicity, consider a binary image from which t samples are taken to form a word, w, where

w = x0 + 2 x1 + .... + 2^(t-1) x(t-1)

This word can be used to address a memory A. Every bit in A which is 1 either increases the weight vector element where the respective bit of its address is set, or decreases it where that bit is clear. Hence, if BIT[w, i] is the ith bit of w and A[i] is the contents of the memory {0, 1}, then

W[b] = sum from i = 0 to 2^t - 1 of A[i] (2 BIT[i, b] - 1)

This represents an approximate measure of the weight element. Table 1 demonstrates the principle for a four-bit sample memory. Given randomly distributed inputs, this binary vector is equivalent to the weight vector [2, 4, 0, -2]. If there is a large number of set bits in the memory for a particular unit then that unit will always give a high response - that is, it will become saturated. However, if there are too few bits set, the unit will not respond strongly to a general set of patterns. The number of bits must, therefore, be fixed at the start of training, distributed randomly within the memory, and only redistribution of these bits allowed.
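The four-bit sampling example above and the weight recovery of Table 1 can be sketched together. The helper names are our own, and the set-bit addresses in the final demonstration are chosen by us so that the recovered weights equal the [2, 4, 0, -2] of Table 1; they are not the paper's table entries.

```python
def samples(pattern16):
    """Split a 16-bit word into four 4-bit tuple addresses (bits 0-3, 4-7, ...)."""
    return [(pattern16 >> (4 * k)) & 0xF for k in range(4)]

def train(memories, pattern16):
    """Label the pattern: set one bit in each sample's 16-entry bit table."""
    for mem, addr in zip(memories, samples(pattern16)):
        mem[addr] = 1

def score(memories, pattern16):
    """Implicit distance measure: number of samples whose address is labelled."""
    return sum(mem[addr] for mem, addr in zip(memories, samples(pattern16)))

def approx_weights(mem, t=4):
    """Recover the approximate weight vector [W0 .. W(t-1)] from one t-bit
    sample memory: W[b] = sum_i A[i] * (2 * BIT[i, b] - 1)."""
    return [sum(2 * ((i >> b) & 1) - 1 for i in range(2 ** t) if mem[i])
            for b in range(t)]

memories = [[0] * 16 for _ in range(4)]
train(memories, 0b0101101001010100)
exact = score(memories, 0b0101101001010100)      # 4: all four samples match
one_off = score(memories, 0b0101101001010101)    # 3: one sample disturbed

# An illustrative bit table (set addresses chosen by us) whose recovered
# weights, read from W3 down to W0, give the [2, 4, 0, -2] of Table 1:
table = [0] * 16
for addr in (0b0110, 0b1100, 0b1101, 0b1110):
    table[addr] = 1
weights = approx_weights(table)                  # [W0, W1, W2, W3] = [-2, 0, 4, 2]
```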
Set bits could be taken from any other sample, but some samples will be more important than others. The proportion of 1's in an image should not be used as a measure, otherwise large uniform regions will be more significant than the pattern detail. This is a form of magnitude independent operation, similar to the use of the normalised dot product applied in the analogue approach, and so bits may only be moved from addresses with the same number of set bits as the current address.

TABLE 1. Weight Vector Approximation (for each four-bit address x3 x2 x1 x0, the memory bit A and the resulting weight changes W3 W2 W1 W0; the equivalent weight vector in this example is [2, 4, 0, -2]).

Orthogonal Projections

In order to speed up the simulation further, instead of representing each unit by a single bit in memory, each unit can be represented by a combination of bits. Hence many calculations can be effectively computed in parallel. The number of units which require a 1 for a particular sample will always be relatively small, and hence these can be coded. The coding method employed is to split the binary word, W, into x and y fields. These projection fields address a two-dimensional map and so provide a fast technique of approximating the true content of the memory. The x bits are summed separately from the y bits, and together they give a good estimate of the co-ordinates of the unit with the most bits set in x and in y. This map becomes, in effect, a winner-takes-all network. The reducing neighbourhood of adaption employed in the Kohonen algorithm can also be readily incorporated by applying an overall mask to this map during the training phase.
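One reading of the projection scheme can be sketched as follows. Representing each sample's responding units as explicit (x, y) coordinate sets is our own simplification of the x/y field coding; the point is that summing the x and y projections separately, across all samples, locates the winner without visiting every unit.

```python
import numpy as np

def winner_by_projection(unit_sets, side=8):
    """Estimate the winner-takes-all unit from per-sample projection fields:
    accumulate the x and y projections of the responding units separately,
    then take the per-axis argmax as the winning co-ordinates."""
    xs = np.zeros(side, dtype=int)
    ys = np.zeros(side, dtype=int)
    for units in unit_sets:
        for x, y in units:
            xs[x] += 1
            ys[y] += 1
    return int(np.argmax(xs)), int(np.argmax(ys))

# Unit (3, 5) responds for all four samples; stray units respond once each,
# so the summed projections peak at the true winner.
unit_sets = [{(3, 5), (0, 1)}, {(3, 5), (7, 2)}, {(3, 5)}, {(3, 5), (4, 4)}]
est = winner_by_projection(unit_sets)   # (3, 5)
```

The estimate is exact whenever one unit dominates the summed projections, which is the regime the reduced memory cost trades on.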
Though only this output map is required during normal application of the system to image recognition tasks, it is possible to reconstruct the distribution of the two-dimensional weight vectors. Figure 3, using the technique illustrated in Table 1, shows this weight vector map for the concentric circle test image applied previously in the conventional analogue approach. This is a small digitised map containing 32 x 32 elements, each with 16 x 16 input units, and can be applied, using a general purpose desktop microcomputer running at 4 mips, in a few milliseconds.

Figure 3. Reconstructed Feature Map of Local Edges

APPLICATION EXAMPLES

Character Recognition

Though a long term objective remains the development of general purpose computer vision systems, with many layers of interacting feature maps together with suitable pre- and post-processing, many commercial tasks require decisions based on a constricted range of objects - that is, their perceptual set is severely limited. However, ease of training and speed of application are paramount. An example of such an application involves the recognition of characters. Figures 4 and 5 show an input pattern of hand-drawn A's and B's. The network, using the above digital technique, was given no information concerning the input image, and the input window of 32 x 32 pixels was placed randomly on the image. The network took less than one minute to adapt and can be applied in 25 ms. This network is a 32 x 32 feature map of 32 x 32 elements, thus giving over one million effective weights. The output map forms two distinct clusters, one for A's in the top right corner of the map (Figure 4), and one for B's in the bottom left corner (Figure 5). If further characters are introduced in the input image then the output map will, during the training phase, self-organise to incorporate them.

Figure 4. Trained Network Response for 'A' in Input Window

Figure 5.
Trained Network Response for 'B' in Input Window

Corrupted Images

Once the maximum response from the map is known, then the parts of the input window which caused it can be reconstructed to provide a form of ideal input pattern. The reconstructed input pattern is shown in the figures beneath the input image. This reconstruction can be employed to recognise occluded patterns or to eliminate noise in subsequent input images.

Figure 6. Trained Network Response for Corrupted 'A' in Input Window. Reconstructed Input Pattern Shown Below Test Image

Figure 6 shows the response of the network, trained on the input image of Figures 4 and 5, to a corrupted image of A's and B's. It has still managed to recognise the input character as an A, but the reconstructed version shows that the extra noise has been eliminated.

Object Centring

The centring of an object within the input window permits the application of conformal mapping strategies, such as polar exponential grids, which yield scale and rotation invariant recognition. The same network as employed in the previous example was used, but a target position for the maximum network response was specified and the network was adapted half-way between this and the actual maximum response location.

Figure 7. Trained Network Response for Off-Centred Character. Input Window is Low-Pass Filtered as Shown.

Figure 7 shows such a network. When the response is in the centre of the output map then an input object (character) is centred in the recognition window. In the example shown, there is an off-centred response of the trained network for an off-centred character. This deviation is used to change the position of the input window. Once centring has been achieved, object recognition can occur.

CONCLUSIONS

The application of unsupervised feature maps for image recognition has been demonstrated.
The digital realisation technique permits the application of large maps, which can be applied in real time using conventional microcomputers. The use of orthogonal projections to give a winner-takes-all network reduces memory requirements by approximately 30-fold and gives a computational cost of O(n^(1/2)), where n is the number of elements in the map. The general approach can be applied in any form of feedforward neural network.

Acknowledgements

This work has been supported by the Innovation and Research Priming Fund of the University of York.

References

W. W. Bledsoe and I. Browning. Pattern Recognition and Reading by Machine. Proc. East. Joint Comp. Conf., 225-232 (1959).
M. J. Johnson and N. M. Allinson. An Advanced Neural Network for Visual Pattern Recognition. Proc. UKIT 88, Swansea, 296-299 (1988).
T. Kohonen. Self Organization and Associative Memory. Springer-Verlag, Berlin (1984).
T. Kohonen. The 'Neural' Phonetic Typewriter. Computer 21, 11-22 (1988).
E. Oja. Subspace Methods of Pattern Recognition. Research Studies Press, Letchworth (1983).
G. D. Tattersall, P. W. Linford and R. Linggard. Neural Arrays for Speech Recognition. Br. Telecom Technol. J. 6, 140-163 (1988).
| 1988 | 78 | 166 |
348 Further Explorations in Visually-Guided Reaching: Making MURPHY Smarter

Bartlett W. Mel
Center for Complex Systems Research
Beckman Institute, University of Illinois
405 North Matheus Street
Urbana, IL 61801

ABSTRACT

MURPHY is a vision-based kinematic controller and path planner based on a connectionist architecture, and implemented with a video camera and Rhino XR-series robot arm. Imitative of the layout of sensory and motor maps in cerebral cortex, MURPHY's internal representations consist of four coarse-coded populations of simple units representing both static and dynamic aspects of the sensory-motor environment. In previously reported work [4], MURPHY first learned a direct kinematic model of his camera-arm system during a period of extended practice, and then used this "mental model" to heuristically guide his hand to unobstructed visual targets. MURPHY has since been extended in two ways: First, he now learns the inverse differential-kinematics of his arm in addition to ordinary direct kinematics, which allows him to push his hand directly towards a visual target without the need for search. Secondly, he now deals with the much more difficult problem of reaching in the presence of obstacles.

INTRODUCTION

Visual guidance of a multi-link arm through a cluttered workspace is known to be an extremely difficult computational problem. Classical approaches in the field of robotics have typically broken the problem into pieces of manageable size, including modules for direct and inverse kinematics and dynamics [7], along with a variety of highly complex algorithms for motion planning in the configuration space of a multi-link arm (e.g. [3]). Workers in the field of robotics have rarely (until recently) emphasized neural plausibility at the level of representation and algorithm, opting instead for explicit mathematical computations or complex, multi-stage algorithms using general-purpose data structures.
More peculiarly, very little emphasis has been placed on full use of the visual channel for robot control, other than as a source of target shape or coordinates.

Figure 1: MURPHY's Connectionist Architecture. Four interconnected populations of neuron-like units (visual-unit, hand-direction, joint-angle, and joint-velocity populations) implement a variety of sensory-motor mappings.

Much has been learned of the neural substrate for vision-guided limb control in humans and non-human primates (see [2] for review), albeit at a level too far removed from concrete algorithmic specification to be of direct engineering utility. Nonetheless, a number of general principles of cortical organization have inspired the current approach to vision-based kinematic learning and motion-planning. MURPHY's connectionist architecture has been based on the observation that a surprisingly large fraction of the vertebrate brain is devoted to the explicit representation of the animal's sensory and motor state [6]. During normal behavior, each of these neural representations carries behaviorally-relevant state information, some yoked to the sensory epithelia, others to the motor system. The effect is a rich set of online associative learning opportunities. Moreover, the visual modality is by far the dominant one in the primate brain by measures of sheer real-estate, including a number of areas that are known to be concerned with the representation of limb control in immediate extrapersonal space [2], suggesting that visual processing may overshadow what has usually been perceived as primarily a motor process.
MURPHY's ORGANIZATION

In the interests of space, we present here a highly reduced description of MURPHY's organization; the reader is referred to [5] for a much more comprehensive treatment, including a lengthy discussion of the relevance of MURPHY's structure and function to the psychology, motor-physiology, and neural basis for visually-guided limb control in primates.

The Physical Setup

MURPHY's physical setup consists of a 512 x 512 JVC video camera pointed at a Rhino XR-3 robotic arm, whose wrist, elbow, and shoulder rotate freely in the image plane of the camera. White spots are stuck to the arm in convenient places; when the image is thresholded, only the white spots appear in the image (see fig. 2). This arrangement allows continuous control over the complexity of the visual image of the arm, which in turn affects computation time during learning. The arm is software controllable, with a stepper motor for each joint. Arm dynamics are not dealt with in this work.

The Connectionist Architecture

MURPHY is currently based on four interconnected populations of neuron-like units (fig. 1), encoding both static and dynamic aspects of the sensory-motor environment (only two were used in a previous report [4]).

Visual Populations. The principal sensory population is organized as a rectangular, visuotopically-mapped 64 x 64 grid of coarsely-tuned visual units, each of which responds when a visual feature (such as a white spot on the arm) falls into its receptive field (fig. 1, upper left). The second population of 24 units encodes the direction of MURPHY's hand motion through the visual field (fig. 1, lower left); vector magnitude is ignored at present. These units are thus "fired" only by the distinct visual image of the hand, but are selective for the direction of hand motion through the visual field as MURPHY moves his arm in the workspace.

Joint Populations.
The third population of 273 units consists of three subpopulations encoding the static joint configuration; the angle of each joint is value-coded individually in a subpopulation dedicated to that joint, consisting of units with overlapping triangular receptive fields (fig. 1, upper right). The fourth and final population of 24 units also consists of three subpopulations, each value-coding the velocity of one of the three joints (fig. 1, lower right). During both his learning and performance phases, to be described in subsequent sections, MURPHY is also able to carry out simple sequential operations that are driven by a control structure external to his connectionist architecture.

MURPHY's Kinematics

For a detailed discussion of the relation between MURPHY's novel style of kinematic representation and those used in other approaches to robot control, see [5]. Briefly, in reference to the four unit populations described above, MURPHY's primary workhorse is his direct kinematic mapping that relates static joint angles to the associated visual image of the arm. This smooth nonlinear mapping comprises both the kinematics of the arm and the optical parameters and global geometry of the camera/imaging system, and is learned and represented as a synaptic projection from the joint-angle to visual-field populations (fig. 1). Post-training, MURPHY can assume an arbitrary joint posture "mentally", i.e. by setting up the appropriate pattern of activation on his joint-angle population without allowing the arm to move. The learned mapping will then synaptically activate a mental image of the arm, in that configuration, on the "post-synaptic" visual-field population. Contemplated movements of the arm can thus be evaluated without overt action; this is the heart of MURPHY's mental model.
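The triangular value-coding of a joint angle can be sketched as follows. The number of units, preferred angles, and receptive-field width are our own illustrative choices, not MURPHY's actual parameters.

```python
import numpy as np

def joint_population(theta, centers, width):
    """Value-code a joint angle over units with overlapping triangular
    receptive fields: activity falls off linearly with distance from each
    unit's preferred angle, reaching zero at +/- width."""
    return np.maximum(0.0, 1.0 - np.abs(theta - centers) / width)

centers = np.linspace(0.0, 180.0, 10)   # preferred angles in degrees
act = joint_population(47.0, centers, width=40.0)

# The coarse code still pins down the angle: a centre-of-mass readout
# over the handful of active units recovers it.
decoded = float((act * centers).sum() / act.sum())   # 47.0
```

With uniformly spaced centres and a linear falloff, a few overlapping units suffice to represent the angle to fine precision, which is what lets 273 units cover three continuous joint angles.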
MURPHY is also able to learn the inverse differential-kinematics of his arm, a mapping which translates a desired direction of motion through the workspace into the requisite commands to the joints, allowing MURPHY to guide his hand along a desired trajectory through his field of view. This mapping is learned and represented as a synaptic projection originating from both i) the hand-vector population, encoding the desired visual-field direction, and ii) the joint-angle population, encoding the current state of the arm, and terminating on the joint-move population, which specifies the appropriate perturbation to the joints (fig. 1, see arrows labelled "Inverse Jacobian"). In the next section, we describe how this learning takes place.

HOW MURPHY LEARNS

As described in [4,5], MURPHY learns by doing. Thus, during an initial training period for the direct kinematics, MURPHY steps his arm systematically through a small representative sample of the 3.3 billion legal arm configurations (visiting 17,000 configurations in 5 hours). Each step constitutes a standard connectionist training example between his joint-angle and visual-field populations. A novel synaptic learning scheme called sigma-pi learning is used for weight modification [4,5], described elsewhere in great detail [5]. This scheme essentially treats each post-synaptic sigma-pi neuron (see [5]) as an interpolating lookup table of the kind discussed by Albus and others [1], rather than as a standard linear threshold unit. Sigma-pi learning has been inspired by the physical structure and membrane properties of biological neurons, and yields several advantages in performance and simplicity of implementation for the learning of smooth low-dimensional functions [5]. As an implementation note, once the sigma-pi units have been appropriately trained, they are reimplemented using k-d trees, a much more efficient data structure for a sequential computer (giving a speedup on the order of 50-100).
MURPHY's inverse-differential mapping is learned analogously, where each movement of the arm (rather than each state) is used as a training example. Thus, as the hand sweeps through the visual field during either physical or mental practice, each of the three relevant populations is activated (hand-vector and joint-angle as inputs, joint-move as output), acting again as a single input-output training example for the learning procedure.

Figure 2: Four Visual Representations. The first frame shows the unprocessed camera view of MURPHY's arm. White spots have been stuck to the arm at various places, such that a thresholded image contains only the white spots. The second frame shows the resulting pattern of activation over the 64 x 64 grid of coarsely-tuned visual units as driven by the camera. The third frame depicts an internally-produced "mental" image of the arm in the same configuration, as driven by weighted connections from the joint-angle population. Note that the mental image is a sloppy, but highly recognizable approximation to the camera-driven trace. The fourth frame shows the mental image generated using k-d trees in the place of sigma-pi units.

MURPHY IN ACTION

Reaching to Targets

In a previous report, MURPHY was only able to reach to a visual target by mentally flailing his way to the target (i.e. by generating a small random change in joint position, evaluating the induced mental image of the arm for proximity to the target, and keeping only those moves that reduced this distance), and then moving the arm physically in one fell swoop [4]. On repeated reaches to the same or similar targets, MURPHY was doomed to repeatedly wander his way stupidly and aimlessly to the target. Typical trajectories generated in this way can be seen in fig. 3ABC.
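The practice-then-lookup idea behind the inverse differential-kinematic mapping can be sketched on a toy planar two-link arm. The arm geometry, the cost function, and the nearest-example lookup (a crude stand-in for MURPHY's sigma-pi / k-d tree interpolation) are all our own simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def hand(thetas):
    """Forward kinematics of a toy planar two-link arm, standing in for the
    camera-defined visual position of the hand."""
    t1, t2 = thetas
    return np.array([np.cos(t1) + np.cos(t1 + t2),
                     np.sin(t1) + np.sin(t1 + t2)])

# Practice phase: random joint perturbations yield training triples
# (hand direction, joint angles) -> joint move.
examples = []
for _ in range(3000):
    th = rng.uniform(0.3, 2.5, size=2)
    dth = rng.normal(0.0, 0.05, size=2)
    dh = hand(th + dth) - hand(th)
    n = np.linalg.norm(dh)
    if n > 1e-6:
        examples.append((dh / n, th, dth))

def joint_move(desired_dir, th):
    """Look up the practice example whose (direction, posture) best matches
    the query, and reuse its recorded joint move."""
    costs = [np.linalg.norm(d - desired_dir) + 0.5 * np.linalg.norm(t - th)
             for d, t, dt in examples]
    return examples[int(np.argmin(costs))][2]

# Performance phase: push the hand directly toward the target, no search.
th = np.array([1.2, 1.0])
target = hand(np.array([0.8, 1.6]))
for _ in range(40):
    gap = target - hand(th)
    if np.linalg.norm(gap) < 0.05:
        break
    th = th + joint_move(gap / np.linalg.norm(gap), th)
```

The point of the sketch is the data flow: practice movements, not states, are the training examples, and at performance time a desired visual direction plus the current posture is enough to pick a joint move.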
Using only the steps in these three trajectories as training examples for MURPHY's inverse-differential mapping, and then allowing this map to generate "guesses" as to the appropriate joint-move at each step, the trajectories for similar targets are substantially more direct (fig. 3DEF).

Figure 3: Improving with Practice. Frames A, B, and C show MURPHY's initially random search trajectories from start to goal. Joint moves made during these three "mental" reaching episodes were used to train MURPHY's inverse differential-kinematic mapping. Frames D, E, and F show improvement in 3 subsequent reaching trials to nearby targets.

Avoiding Obstacles

Augmenting this direct search approach with only a few additional visual heuristics, MURPHY is able to find circuitous paths through complicated obstacle layouts, even when contrived with significant local minima designed to trap the arm (fig. 4). For problems of this kind, MURPHY uses a non-replacement, best-first search with backtracking on a quantized grid in configuration space. Mental images of the arm were generated in sequence, and evaluated according to several criteria: moves that brought the hand closer to the target without collision with obstacles were accepted, marked, and pursued; moves that either had been tried before, pushed the hand out of the visual field, or resulted in collision were rejected (i.e. popped). Collision detection, usually considered a combinatorially expensive operation under typical representational assumptions (see [3]), is here represented as a single, parallel virtual-machine operation that detects superposition between arbitrary obstacle blobs in the visual field and the mental image of the arm. Problems such as that of fig. 4 consumed an average of 10 minutes on a Sun 3-160 running inefficiently with full graphics.
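The non-replacement best-first search can be sketched on a 2-D stand-in for the quantized configuration grid. The grid size, obstacle layout, and Manhattan heuristic are our own illustrative choices; obstacle cells play the role of configurations whose mental image collides with a blob.

```python
import heapq

def best_first_path(start, goal, blocked, size=16):
    """Non-replacement best-first search with backtracking: always expand the
    most promising open node; moves already tried, off the grid, or in
    collision are rejected."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, p, path = heapq.heappop(frontier)  # backtrack to the best open node
        if p == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + dx, p[1] + dy)
            if q in visited or q in blocked:                 # tried / collision
                continue
            if not (0 <= q[0] < size and 0 <= q[1] < size):  # out of the field
                continue
            visited.add(q)
            heapq.heappush(frontier, (h(q), q, path + [q]))
    return None

# A wall with a single gap creates a local minimum between hand and target:
wall = {(8, y) for y in range(16) if y != 13}
path = best_first_path((2, 8), (14, 8), wall)   # threads the gap at (8, 13)
```

Marking nodes as visited when they are pushed is what makes the search non-replacement: each quantized configuration is evaluated at most once, and popping simply backtracks to the best remaining candidate.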
Reaching trials only consistently failed when the grain of quantization in MURPHY's configuration-space search prevented him from finding clear paths through too-tight spaces. This problem could be (but has not as yet been) attacked through hierarchical quantization.

CONCLUSIONS

MURPHY's design has evolved from three schools of thought: ROBOTICS WITHOUT EQUATIONS, LEARNING WITHOUT TEACHERS, and BETTER LIVING THROUGH VISION. First, the approach illustrates that neurally-inspired representational structures can, without equations, implement the core functional mappings used in robot control. The approach also demonstrates that a great deal of useful knowledge can be extracted from the environment without need of a teacher, i.e. simply by doing.

Figure 4: Reaching for a target (white cross) in the presence of obstacles (miscellaneous other white blobs). MURPHY typically used fewer than 100 internal search steps for problems of this approximate difficulty.

Thirdly, the approach illustrates that planning can be naturally carried out simultaneously in joint and workspace coordinates, that is, can be "administered" in joint space, but evaluated using massively parallel visual machine operations. Thus, the use of a massively parallel architecture makes direct heuristic search through the configuration space of an arm computationally feasible, since a single plan step (i.e. running the direct kinematics and evaluating for progress and/or collision) is reduced to O(1) virtual machine operations. This feature most distinguishes MURPHY from other motion-planning schemes.
A detailed analysis of the scaling behavior of this approach was carried out in [4], suggesting that a real-time, 3-d vision / 6 degree-of-freedom super-MURPHY could be built with state-of-the-art 1988 hardware, though it must be stressed that the competitiveness of the approach depends heavily on massive hardware parallelism that is not conveniently available at this time. Questions also remain as to the scaling of problem difficulty in the jump to practical real-world systems.

Acknowledgements

This work was supported in part by a University of Illinois Cognitive Science/AI fellowship, the National Center for Supercomputing Applications, Champaign, Illinois, and NSF grant Phy 86-58062. Thanks are also due to Stephen Omohundro for encouragement and scientific support throughout the course of the project.

References

[1] Albus, J.S. A new approach to manipulator control: the cerebellar model articulation controller (CMAC). ASME J. of Dynamic Systems, Measurement, & Control, September 1975, 220-227.
[2] Humphrey, D.R. On the cortical control of visually directed reaching: contributions by nonprecentral motor areas. In Posture and movement, R.E. Talbott & D.R. Humphrey (Eds.), New York: Raven Press, 1979.
[3] Lozano-Perez, T. A simple motion-planning algorithm for general robot manipulators. IEEE J. of Robotics & Automation, 1987, RA-9, 224-238.
[4] Mel, B.W. MURPHY: A robot that learns by doing. In Neural information processing systems, pp. 544-553, American Institute of Physics, New York, 1988.
[5] Mel, B.W. A neurally-inspired connectionist approach to learning and performance in vision-based robot motion planning. Technical Report CCSR-89-17, Center for Complex Systems Research, Beckman Institute, University of Illinois, 405 N. Matheus, Urbana, IL 61801.
[6] Merzenich, M.M. & Kaas, J. Principles of organization of sensory-perceptual systems in mammals. In Progress in psychobiology and physiological psychology, vol. 9, 1980.
[7] Paul, R.
Robot manipulators: mathematics, programming, and control. Cambridge: MIT Press, 1981.
|
1988
|
79
|
167
|
SKELETONIZATION: A TECHNIQUE FOR TRIMMING THE FAT FROM A NETWORK VIA RELEVANCE ASSESSMENT

Michael C. Mozer
Paul Smolensky
Department of Computer Science & Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430

ABSTRACT

This paper proposes a means of using the knowledge in a network to determine the functionality or relevance of individual units, both for the purpose of understanding the network's behavior and improving its performance. The basic idea is to iteratively train the network to a certain performance criterion, compute a measure of relevance that identifies which input or hidden units are most critical to performance, and automatically trim the least relevant units. This skeletonization technique can be used to simplify networks by eliminating units that convey redundant information; to improve learning performance by first learning with spare hidden units and then trimming the unnecessary ones away, thereby constraining generalization; and to understand the behavior of networks in terms of minimal "rules."

INTRODUCTION

One thing that connectionist networks have in common with brains is that if you open them up and peer inside, all you can see is a big pile of goo. Internal organization is obscured by the sheer number of units and connections. Although techniques such as hierarchical cluster analysis (Sejnowski & Rosenberg, 1987) have been suggested as a step in understanding network behavior, one would like a better handle on the role that individual units play. This paper proposes one means of using the knowledge in a network to determine the functionality or relevance of individual units. Given a measure of relevance for each unit, the least relevant units can be automatically trimmed from the network to construct a skeleton version of the network. Skeleton networks have several potential applications:

• Constraining generalization.
By eliminating input and hidden units that serve no purpose, the number of parameters in the network is reduced and generalization will be constrained (and hopefully improved).
• Speeding up learning. Learning is fast with many hidden units, but a large number of hidden units allows for many possible generalizations. Learning is slower with few hidden units, but generalization tends to be better. One idea for speeding up learning is to train a network with many hidden units and then eliminate the irrelevant ones. This may lead to a rapid learning of the training set, and then gradually, an improvement in generalization performance.
• Understanding the behavior of a network in terms of "rules". One often wishes to get a handle on the behavior of a network by analyzing the network in terms of a small number of rules instead of an enormous number of parameters. In such situations, one may prefer a simple network that performed correctly on 95% of the cases over a complex network that performed correctly on 100%. The skeletonization process can discover such a simplified network.

Several researchers (Chauvin, 1989; Hanson & Pratt, 1989; David Rumelhart, personal communication, 1988) have studied techniques for the closely related problem of reducing the number of free parameters in back propagation networks. Their approach involves adding extra cost terms to the usual error function that cause nonessential weights and units to decay away. We have opted for a different approach, the all-or-none removal of units, which is not a gradient descent procedure. The motivation for our approach was twofold. First, our initial interest was in designing a procedure that could serve to focus "attention" on the most important units, hence an explicit relevance metric was needed. Second, our impression is that it is a tricky matter to balance a primary and a secondary error term against one another.
One must determine the relative weighting of these terms, weightings that may have to be adjusted over the course of learning. In our experience, it is often impossible to avoid local minima: compromise solutions that partially satisfy each of the error terms. This conclusion is supported by the experiments of Hanson and Pratt (1989).

DETERMINING THE RELEVANCE OF A UNIT

Consider a multi-layer feedforward network. How might we determine whether a given unit serves an important function in the network? One obvious source of information is its outgoing connections. If a unit in layer $l$ has many large-weighted connections, then one might expect its activity to have a big impact on higher layers. However, this need not be. The effects of these connections may cancel each other out; even a large input to units in layer $l+1$ will have little influence if these units are near saturation; outgoing connections from the innervated units in $l+1$ may be small; and the unit in $l$ may have a more-or-less constant activity, in which case it could be replaced by a bias on units in $l+1$. Thus, a more accurate measure of the relevance of a unit is needed. What one really wants to know is, what will happen to the performance of the network when a unit is removed? That is, how well does the network do with the unit versus without it? For unit $i$, then, a straightforward measure of the relevance, $\rho_i$, is

$\rho_i = E_{\text{without unit } i} - E_{\text{with unit } i}$,

where $E$ is the error of the network on the training set. The problem with this measure is that to compute the error with a given unit removed, a complete pass must be made through the training set. Thus, the cost of computing $\rho$ is $O(np)$ stimulus presentations, where $n$ is the number of units in the network and $p$ is the number of patterns in the training set. Further, if the training set is not fixed or is not known to the experimenter, additional difficulties arise in computing $\rho$. We therefore set out to find a good approximation to $\rho$.
Before presenting this approximation, it is first necessary to introduce an additional bit of notation. Suppose that associated with each unit $i$ is a coefficient $\alpha_i$ which represents the attentional strength of the unit (see Figure 1). This coefficient can be thought of as gating the flow of activity from the unit:

$o_j = f(\sum_i w_{ji} \alpha_i o_i)$,

where $o_j$ is the activity of unit $j$, $w_{ji}$ the connection strength to $j$ from $i$, and $f$ the sigmoid squashing function. If $\alpha_i = 0$, unit $i$ has no influence on the rest of the network; if $\alpha_i = 1$, unit $i$ is a conventional unit. In terms of $\alpha$, the relevance of unit $i$ can then be rewritten as

$\rho_i = E_{\alpha_i=0} - E_{\alpha_i=1}$.

We can approximate $\rho_i$ using the derivative of the error with respect to $\alpha_i$:

$\lim_{\gamma \to 1} \dfrac{E_{\alpha_i=\gamma} - E_{\alpha_i=1}}{\gamma - 1} = \left. \dfrac{\partial E}{\partial \alpha_i} \right|_{\alpha_i=1}$.

Assuming that this equality holds approximately for $\gamma = 0$:

$\dfrac{E_{\alpha_i=0} - E_{\alpha_i=1}}{-1} \approx \left. \dfrac{\partial E}{\partial \alpha_i} \right|_{\alpha_i=1}$.

Our approximation for $\rho_i$ is then $\hat\rho_i = -\dfrac{\partial E}{\partial \alpha_i}$. This derivative can be computed using an error propagation procedure very similar to that used in adjusting the weights with back propagation. Additionally, note that because the approximation assumes that $\alpha_i$ is 1, the $\alpha_i$ never need be changed. Thus, the $\alpha_i$ are not actual parameters of the system, just a bit of notational convenience used in estimating relevance.

Figure 1. A 4-2-3 network with attentional coefficients on the input and hidden units.

In practice, we have found that $\partial E / \partial \alpha_i$ fluctuates strongly in time and a more stable estimate that yields better results is an exponentially-decaying time average of the derivative. In the simulations reported below, we use the following measure:

$\hat\rho_i(t+1) = 0.8\,\hat\rho_i(t) + 0.2\,\dfrac{\partial E(t)}{\partial \alpha_i}$.

One final detail of relevance assessment we need to mention is that relevance is computed based on a linear error function, $E^l = \sum_{p,j} |t_{pj} - o_{pj}|$ (where $p$ is an index over patterns, $j$ over output units; $t_{pj}$ is the target output, $o_{pj}$ the actual output).
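As a concrete illustration of the estimate $\hat\rho_i = -\partial E/\partial\alpha_i$, the derivative can be written out analytically for the input units of a single sigmoid output unit, using the linear error. This is a sketch under simplifying assumptions (a 0-to-1 sigmoid, one output unit, gates fixed at 1), not the paper's simulation code.

```python
import numpy as np

def relevance(weights, patterns, targets):
    """Estimate rho-hat_i = -dE/d(alpha_i) for the input units of a
    one-layer sigmoid net, with all attentional gates alpha_i held at 1.

    Uses the linear error E = sum |t - o|, as the paper recommends for
    relevance assessment. Simplifying assumption: a single 0-to-1
    sigmoid output rather than the -1..1 activities of the simulations.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    rho = np.zeros(len(weights))
    for x, t in zip(patterns, targets):
        net = weights @ x            # with alpha_i = 1, net = sum_i w_i x_i
        o = sigmoid(net)
        # For E = |t - o|, dE/do = -sign(t - o); chaining through the
        # sigmoid and the gate gives dE/dalpha_i = dE/do * f'(net) * w_i * x_i.
        dE_dalpha = -np.sign(t - o) * o * (1 - o) * weights * x
        rho += -dE_dalpha            # relevance is the negated derivative
    return rho
```

With a strongly weighted, task-relevant input and an irrelevant one, the first input receives the larger relevance, mirroring the cue salience result below.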
The usual quadratic error function, $E^q = \sum_{p,j} (t_{pj} - o_{pj})^2$, provides a poor estimate of relevance if the output pattern is close to the target. This difficulty with $E^q$ is further elaborated in Mozer and Smolensky (1989). In the results reported below, while $E^q$ is used as the error metric in training the weights via conventional back propagation, $\hat\rho$ is measured using $E^l$. This involves separate back propagation phases for computing the weight updates and the relevance measures.

A SIMPLE EXAMPLE: THE CUE SALIENCE PROBLEM

Consider a network with four inputs labeled A-D, one hidden unit, and one output. We generated ten training patterns such that the correlations between each input unit and the output are as shown in the first row of Table 1. (In this particular task, a hidden layer is not necessary. The inclusion of the hidden unit simply allowed us to use a standard three-layer architecture for all tasks.) In this and subsequent simulations, unit activities range from -1 to 1, and input and target output patterns are binary (-1 or 1) vectors. Training continues until all output activities are within some acceptable margin of the target value. Additional details of the training procedure and network parameters are described in Mozer and Smolensky (1989). To perform perfectly, the network need only attend to input A. This is not what the input-hidden connections do, however; their weights have the same qualitative profile as the correlations (second row of Table 1).¹ In contrast, the relevance values for the input

Table 1

                                       A       B       C       D
Correlation with Output Unit          1.0     0.6     0.2     0.0
Input-Hidden Connection Strengths     3.15    1.23    0.83   -0.01
$\rho_i$                              5.36    0.07    0.06    0.00
$\hat\rho_i$                          0.46   -0.03    0.01   -0.02

¹ The values reported in Table 1 are an average over 100 replications of the simulation with different initial random weights. Before averaging, however, the signs of the weights were flipped if the hidden-output connection was negative.
units show A to be highly relevant while B-D have negligible relevance. Further, the qualitative picture presented by the profile of $\hat\rho_i$s is identical to that of the $\rho_i$s. Thus, while the weights merely reflect the statistics of the training set, $\hat\rho_i$ indicates the functionality of the units.

THE RULE-PLUS-EXCEPTION PROBLEM

Consider a network with four binary inputs labeled A-D and one binary output. The task is to learn the function $AB + \bar{A}\bar{B}\bar{C}\bar{D}$; the output unit should be on whenever both A and B are on, or in the special case that all inputs are off. With two hidden units, back propagation arrives at a solution in which one unit responds to $AB$ (the rule) and the other to $\bar{A}\bar{B}\bar{C}\bar{D}$ (the exception). Clearly, the $AB$ unit is more relevant to the solution; it accounts for fifteen cases whereas the $\bar{A}\bar{B}\bar{C}\bar{D}$ unit accounts for only one. This fact is reflected in the $\hat\rho_i$: in 100 replications of the simulation, the mean value of $\hat\rho_{AB}$ was 1.49 whereas $\hat\rho_{\bar{A}\bar{B}\bar{C}\bar{D}}$ was only .17. These values are extremely reliable; the standard errors are .003 and .005, respectively. Relevance was also measured using the quadratic error function. With this metric, the $AB$ unit is incorrectly judged as being less relevant than the $\bar{A}\bar{B}\bar{C}\bar{D}$ unit: $\hat\rho_{AB}$ is .029 and $\hat\rho_{\bar{A}\bar{B}\bar{C}\bar{D}}$ is .033. As mentioned above, the basis of the failure of the quadratic error function is that $E^q$ grossly underestimates the true relevance as the output error goes to zero. Because the one exception pattern is invariably the last to be learned, the output error for the fifteen non-exception patterns is significantly lower, and consequently, the relevance values computed on the basis of the non-exception patterns are much smaller than those computed on the basis of the one exception pattern. This results in the relevance assessment derived from the exception pattern dominating the overall relevance measure, and in the incorrect relevance assignments described above.
However, this problem can be avoided by assessing relevance using the linear error function. If we attempted to "trim" the rule-plus-exception network by eliminating hidden units, the logical first candidate would be the less relevant $\bar{A}\bar{B}\bar{C}\bar{D}$ unit. This trimming process would leave us with a simpler network, a skeleton network whose behavior is easily characterized in terms of a simple rule, but which could only account for 15 of the 16 input cases.

CONSTRUCTING SKELETON NETWORKS

In the remaining examples we construct skeleton networks using the relevance metric. The procedure is as follows: (1) train the network until all output unit activities are within some specified margin around the target value (for details, see Mozer & Smolensky, 1989); (2) compute $\hat\rho$ for each unit; (3) remove the unit with the smallest $\hat\rho$; and (4) repeat steps 1-3 a specified number of times. In the examples below, we have chosen to trim either the input units or the hidden units, not both simultaneously, but there is no reason why this could not be done. We have not yet addressed the crucial question of how much to trim away from the network. At present, we specify in advance when to stop trimming. However, the procedure described above makes use only of the ordinal values of the $\hat\rho$. One untapped source of information that may be quite informative is the magnitudes of the $\hat\rho$. A large increase in the minimum $\hat\rho$ value as trimming progresses may indicate that further trimming will seriously disrupt performance in the network.

THE TRAIN PROBLEM

Consider the task of determining a rule that discriminates the "east" trains from the "west" trains in Figure 2. There are two simple rules (simple in the sense that the rules require a minimal number of input features): east trains have a long car and a triangle load in the car, or an open car and white wheels on the car. Thus, of the seven features that describe each train, only two are essential for making the east/west discrimination.
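The four-step trimming procedure above (train to criterion, compute $\hat\rho$, remove the least relevant unit, repeat) can be sketched as control flow. The `train`, `compute_relevance`, and `remove_unit` interfaces are hypothetical caller-supplied stand-ins, not interfaces from the paper.

```python
def skeletonize(network, train, compute_relevance, n_trims, criterion=0.05):
    """Iterative trimming loop: (1) train the network to the performance
    criterion, (2) compute a relevance value rho-hat per remaining unit,
    (3) remove the unit with the smallest rho-hat, (4) repeat.

    `network` (with a remove_unit method), `train`, and
    `compute_relevance` are assumed caller-supplied pieces; only the
    control flow of the procedure is fixed here.
    """
    for _ in range(n_trims):
        train(network, criterion)              # step 1: reach criterion
        rho = compute_relevance(network)       # step 2: rho-hat per unit
        least = min(range(len(rho)), key=rho.__getitem__)
        network.remove_unit(least)             # step 3: trim least relevant
    return network                             # step 4 is the loop itself
```

In the paper's experiments, `n_trims` is fixed in advance; as noted above, the magnitudes of $\hat\rho$ might eventually supply a stopping rule instead.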
A 7-1-1 network trained on this task using back propagation learns quickly, but the final solution takes into consideration nearly all the inputs because 6 of the 7 features are partially correlated with the east/west discrimination. When the skeletonization procedure is applied to trim the number of inputs from 7 to 2, however, the network is successfully trimmed to the minimal set of input features (either long car and triangle load, or open car and white wheels on car) on each of the 100 replications of the simulation we ran. The trimming task is far from trivial. The expected success rate with random removal of the inputs is only 9.5%. Other skeletonization procedures we experimented with resulted in success rates of 50%-90%.

Figure 2. The train problem. Adapted from Medin, Wattenmaker, & Michalski, 1987.

THE FOUR-BIT MULTIPLEXOR PROBLEM

Consider a network that learns to behave as a four-bit multiplexor. The task is, given 6 binary inputs labeled A-D, $M_1$, and $M_2$, and one binary output, to map one of the inputs A-D to the output contingent on the values of $M_1$ and $M_2$. The logical function being computed is $M_1 M_2 A + M_1 \bar{M_2} B + \bar{M_1} M_2 C + \bar{M_1} \bar{M_2} D$.

Table 2

                           failure    median epochs to criterion    median epochs to criterion
architecture               rate       (with 8 hidden)               (with 4 hidden)
standard 4-hidden net      17%        -                             52
8→4 skeleton net            0%        25                            45

A standard 4-hidden unit back propagation network was tested against a skeletonized network that began with 8 hidden units and was trimmed to 4 (an 8→4 skeleton network). If the network did not reach the performance criterion within 1000 training epochs, we assumed that the network was stuck in a local minimum and counted the run as a failure. Performance statistics for the two networks are shown in Table 2, averaged over 100 replications. The standard network fails to reach criterion on 17% of the runs,
whereas the skeleton network always obtains a solution with 8 hidden units, and the solution is not lost as the hidden layer is trimmed to 4 units.² The skeleton network with 8 hidden units reaches criterion in about half the number of training epochs required by the standard network. From this point, hidden units are trimmed one at a time from the skeleton network, and after each cut the network is retrained to criterion. Nonetheless, the total number of epochs required to train the initial 8 hidden unit network and then trim it down to 4 is still less than that required for the standard network with 4 units. Furthermore, as hidden units are trimmed, the performance of the skeleton network remains close to criterion, so the improvement in learning is substantial.

THE RANDOM MAPPING PROBLEM

The problem here is to map a set of random 20-element input vectors to random 2-element output vectors. Twenty random input-output pairs were used as the training set. Ten such training sets were generated and tested. A standard 2-hidden unit network was tested against a 6→2 skeleton network. For each training set and architecture, 100 replications of the simulation were run. If criterion was not reached within 1000 training epochs, we assumed that the network was stuck in a local minimum and counted the run as a failure. As Table 3 shows, the standard network failed to reach criterion with two hidden units on 17% of all runs, whereas the skeleton network failed with the hidden layer trimmed to two units on only 8.3% of runs. In 9 of the 10 training sets, the failure rate of the skeleton network was lower than that of the standard network. Both networks required comparable amounts of training to reach criterion with two hidden units, but the skeleton network reaches criterion much sooner with six hidden units, and its performance does not significantly decline as the network is trimmed. These results parallel those of the four-bit multiplexor.
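Returning to the multiplexor task for a moment, its target function can be stated directly in code; inputs here are 0/1 integers rather than the -1/+1 activities used in the simulations, and the function name is ours.

```python
def multiplexor4(a, b, c, d, m1, m2):
    """Four-bit multiplexor target: M1 and M2 select which of A-D is
    routed to the output, i.e. M1 M2 A + M1 ~M2 B + ~M1 M2 C + ~M1 ~M2 D.
    Inputs are 0/1 integers; the selected input is returned unchanged.
    """
    # Index 3 = (M1,M2)=(1,1) -> A, 2 = (1,0) -> B, 1 = (0,1) -> C, 0 = (0,0) -> D.
    return [d, c, b, a][2 * m1 + m2]
```

Enumerating the 64 input combinations with this function is one way to generate the task's training patterns.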
² Here and below we report median epochs to criterion rather than mean epochs, to avoid aberrations caused by the large number of epochs consumed in failure runs.

Table 3

                 standard network                 6→2 skeleton network
training       % failures   median epochs      % failures   median epochs   median epochs
set                         to criterion                    to criterion    to criterion
                            (2 hidden)                      (6 hidden)      (2 hidden)
 1                14            20                  7            11              22
 2                16            69                 13            12              47
 3                25            34                  0             7              14
 4                33            38                  0            10              21
 5                38            96                 55            35            <max>
 6                 9            17                  1             9              17
 7                 9            28                  5            14              43
 8                 6            13                  0             8              16
 9                 8            12                  0             8              17
10                12            12                  2             8              17

SUMMARY AND CONCLUSIONS

We proposed a method of using the knowledge in a network to determine the relevance of individual units. The relevance metric can identify which input or hidden units are most critical to the performance of the network. The least relevant units can then be trimmed to construct a skeleton version of the network. Skeleton networks have application in two different scenarios, as our simulations demonstrated:

• Understanding the behavior of a network in terms of "rules"
The cue salience problem. The relevance metric singled out the one input that was sufficient to solve the problem. The other inputs conveyed redundant information.
The rule-plus-exception problem. The relevance metric was able to distinguish the hidden unit that was responsible for correctly handling most cases (the general rule) from the hidden unit that dealt with an exceptional case.
The train problem. The relevance metric correctly discovered the minimal set of input features required to describe a category.

• Improving learning performance
The four-bit multiplexor. Whereas a standard network was often unable to discover a solution, the skeleton network never failed. Further, the skeleton network learned the training set more quickly.
The random mapping problem. As in the multiplexor problem, the skeleton network succeeded considerably more often with comparable overall learning speed, and less training was required to reach criterion initially.
Basically, the skeletonization technique allows a network to use spare input and hidden units to learn a set of training examples rapidly, and gradually, as units are trimmed, to discover a more concise characterization of the underlying regularities of the task. In the process, local minima seem to be avoided without increasing the overall learning time. One somewhat surprising result is the ease with which a network is able to recover when a unit is removed. Conventional wisdom has it that if, say, a network is given excess hidden units, it will memorize the training set, thereby making use of all the hidden units available to it. However, in our simulations, the network does not seem to be distributing the solution across all hidden units, because even with no further training, removal of a hidden unit often does not drop performance below the criterion. In any case, there generally appears to be an easy path from the solution with many units to the solution with fewer. Although we have presented skeletonization as a technique for trimming units from a network, there is no reason why a similar procedure could not operate on individual connections instead. Basically, an $\alpha$ coefficient would be required for each connection, allowing for the computation of $\partial E / \partial \alpha$. Yann le Cun (personal communication, 1989) has independently developed a procedure quite similar to our skeletonization technique which operates on individual connections.

Acknowledgements

Our thanks to Colleen Seifert for conversations that led to this work; to Dave Goldberg, Geoff Hinton, and Yann le Cun for their feedback; and to Eric Jorgensen for saving us from computer hell. This work was supported by grant 87-2-36 from the Sloan Foundation to Geoffrey Hinton, a grant from the James S. McDonnell Foundation to Michael Mozer, and a Sloan Foundation grant and NSF grants IRI-8609599, ECE-8617947, and CDR-8622236 to Paul Smolensky.

References

Chauvin, Y. (1989).
A back-propagation algorithm with optimal use of hidden units. In Advances in Neural Information Processing Systems. San Mateo, CA: Morgan Kaufmann.
Hanson, S. J., & Pratt, L. Y. (1989). Some comparisons of constraints for minimal network construction with back propagation. In Advances in Neural Information Processing Systems. San Mateo, CA: Morgan Kaufmann.
Medin, D. L., Wattenmaker, W. D., & Michalski, R. S. (1987). Constraints and preferences in inductive learning: An experimental study of human and machine performance. Cognitive Science, 11, 299-339.
Mozer, M. C., & Smolensky, P. (1989). Skeletonization: A technique for trimming the fat from a network via relevance assessment (Technical Report CU-CS-421-89). Boulder: University of Colorado, Department of Computer Science.
Sejnowski, T. J., & Rosenberg, C. R. (1987). Parallel networks that learn to pronounce English text. Complex Systems, 1, 145-168.
|
1988
|
8
|
168
|
160 SCALING AND GENERALIZATION IN NEURAL NETWORKS: A CASE STUDY

Subutai Ahmad
Center for Complex Systems Research
University of Illinois at Urbana-Champaign
508 S. 6th St., Champaign, IL 61820

Gerald Tesauro
IBM Watson Research Center
PO Box 704
Yorktown Heights, NY 10598

ABSTRACT

The issues of scaling and generalization have emerged as key issues in current studies of supervised learning from examples in neural networks. Questions such as how many training patterns and training cycles are needed for a problem of a given size and difficulty, how to represent the inputs, and how to choose useful training exemplars, are of considerable theoretical and practical importance. Several intuitive rules of thumb have been obtained from empirical studies, but as yet there are few rigorous results. In this paper we summarize a study of generalization in the simplest possible case: perceptron networks learning linearly separable functions. The task chosen was the majority function (i.e. return a 1 if a majority of the input units are on), a predicate with a number of useful properties. We find that many aspects of generalization in multilayer networks learning large, difficult tasks are reproduced in this simple domain, in which concrete numerical results and even some analytic understanding can be achieved.

1 INTRODUCTION

In recent years there has been a tremendous growth in the study of machines which learn. One class of learning systems which has been fairly popular is neural networks. Originally motivated by the study of the nervous system in biological organisms and as an abstract model of computation, they have since been applied to a wide variety of real-world problems (for examples see [Sejnowski and Rosenberg, 87] and [Tesauro and Sejnowski, 88]). Although the results have been encouraging, there is actually little understanding of the extensibility of the formalism. In particular, little is known of the resources required when dealing with large problems (i.e.
scaling), and the abilities of networks to respond to novel situations (i.e. generalization). The objective of this paper is to gain some insight into the relationships between three fundamental quantities under a variety of situations. In particular we are interested in the relationships between the size of the network, the number of training instances, and the generalization that the network performs, with an emphasis on the effects of the input representation and the particular patterns present in the training set. As a first step to a detailed understanding, we summarize a study of scaling and generalization in the simplest possible case. Using feed forward networks, the type of networks most common in the literature, we examine the majority function (return a 1 if a majority of the inputs are on), a boolean predicate with a number of useful features. By using a combination of computer simulations and analysis in the limited domain of the majority function, we obtain some concrete numerical results which provide insight into the process of generalization and which will hopefully lead to a better understanding of learning in neural networks in general.⁰

2 THE MAJORITY FUNCTION

The function we have chosen to study is the majority function, a simple predicate whose output is a 1 if and only if more than half of the input units are on. This function has a number of useful properties which facilitate a study of this type. The function has a natural appeal and can occur in several different contexts in the real world. The problem is linearly separable (i.e. of predicate order 1 [Minsky and Papert, 69]). A version of the perceptron convergence theorem applies, so we are guaranteed that a network with one layer of weights can learn the function. Finally, when there are an odd number of input units, exactly half of the possible inputs result in an output of 1.
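The majority predicate, together with a random-pattern generator of the kind used in the experiments below, can be written directly. This is a simplified sketch (0/1 inputs, plain random sampling), not the authors' simulation code; the function names are ours.

```python
import random

def majority(bits):
    """Target predicate: 1 iff more than half of the inputs are on."""
    return 1 if sum(bits) > len(bits) / 2 else 0

def make_training_set(d, S, seed=0):
    """S random d-bit patterns labeled by the majority predicate.

    An odd d avoids ties, so exactly half of all 2^d inputs map to 1.
    The paper's simulations train a sigmoid unit by back propagation
    on such patterns; this helper only generates the labeled data.
    """
    rng = random.Random(seed)
    patterns = [[rng.randint(0, 1) for _ in range(d)] for _ in range(S)]
    return [(p, majority(p)) for p in patterns]
```

Held-out random patterns generated the same way serve as the test set whose accuracy estimates generalization.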
This property tends to minimize any negative effects that may result from having too many positive or negative training examples.

3 METHODOLOGY

The class of networks used are feed forward networks [Rumelhart and McClelland, 86], a general category of networks that includes perceptrons and the multi-layered networks most often used in current research. Since majority is a boolean function of predicate order 1, we use a network with no hidden units. The output function used was a sigmoid with a bias. The basic procedure consisted of three steps. First the network was initialized to some random starting weights. Next it was trained using back propagation on a set of training patterns. Finally, the performance of the network was tested on a set of random test patterns. This performance figure was used as the estimate of the network's generalization. Since there is a large amount of randomness in the procedure, most of our data are averages over several simulations.

⁰ The material contained in this paper is a condensation of portions of the first author's M.S. thesis [Ahmad, 88].

Figure 1: The average failure rate as a function of S; d = 25.

Notation. In the following discussion, we denote S to be the number of training patterns, d the number of input units, and c the number of cycles through the training set. Let f be the failure rate (the fraction of misclassified training instances), and T the set of training patterns.

4 RANDOM TRAINING PATTERNS

We first examine the failure rate as a function of S and d. Figure 1 shows the graph of the average failure rate as a function of S, for a fixed input size d = 25. Not surprisingly we find that the failure rate decreases fairly monotonically with S. Our simulations show that in fact, for majority there is a well-defined relationship between the failure rate and S. Figure 2 shows this for a network with 25 input units.
The figure indicates that ln f is proportional to S, implying that the failure rate decreases exponentially with S, i.e., f = α·e^(−βS). 1/β can be thought of as a characteristic training set size, corresponding to a failure rate of α/e. Obtaining the exact scaling relationship of 1/β was somewhat tricky. Plotting β on a log-log plot against d showed it to be close to a straight line, indicating that 1/β increases as d^σ for some constant σ. Extracting the exponent by measuring the slope of the log-log graph turned out to be very error prone, since the data only ranged over one order of magnitude. An alternate method for obtaining the exponent is to test for a particular exponent σ by setting S = a·d^σ. Since a linear scaling relationship is theoretically plausible, we measured the failure rate of the network at S = a·d for various values of a.

Figure 2: ln f as a function of S. d = 25. The slope was ≈ −0.01.

As Figure 3 shows, the failure rate remains more or less constant for fixed values of a, indicating a linear scaling relationship with d. Thus O(d) training patterns should be required to learn majority to a fixed level of performance. Note that if we require perfect learning, then the failure rate has to be < 1/(2^d − S) ≈ 1/2^d. By substituting this for f in the above formula and solving for S, we get that (1/β)(d ln 2 + ln α) patterns are required. The extra factor of d suggests that O(d²) would be required to learn majority perfectly. We will show in Section 6.1 that this is actually an overestimate.

5 THE INPUT REPRESENTATION

So far in our simulations we have used the representation commonly used for boolean predicates. Whenever an input feature has been true, we clamped the corresponding input unit to a 1, and when it has been off we have clamped it to a 0.
There is no reason, however, why some other representation couldn't have been used. Notice that in back propagation the weight change is proportional to the incoming input signal, hence the weight from a particular input unit to the output unit is changed only when the pattern is misclassified and the input unit is non-zero. The weight remains unchanged when the input unit is 0. If the 0,1 representation were changed to a −1,+1 representation, each weight would be changed more often, hence the network should learn the training set quicker (simulations in [Stornetta and Huberman, 87] reported such a decrease in training time using a −1/2,+1/2 representation).

Figure 3: Failure rate vs. d with S = 3d, 5d, 7d.

We found that not only did the training time decrease with the new representation, the generalization of the network improved significantly. The scaling of the failure rate with respect to S is unchanged, but for any fixed value of S, the generalization is about 5-10% better. Also, the scaling with respect to d is still linear, but the constant for a fixed performance level is smaller. Although the exact reason for the improved generalization is unclear, the following might be a plausible reason. A weight is changed only if the corresponding input is non-zero. By the definition of the majority function, the average number of units that are on for the positive instances is higher than for the negative instances. Hence, using the 0,1 representation, the weight changes are more pronounced for the positive instances than for the negative instances. Since the weights are changed whenever a pattern is misclassified, the net result is that the weight change is greater when a positive event is misclassified than when a negative event is misclassified.
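This update asymmetry can be seen in a one-step sketch (a delta-rule step from zero initial weights; the pattern and learning rate are illustrative, not the paper's setup):

```python
import numpy as np

def delta_step(w, x, t, lr=0.1):
    """One delta-rule step for a sigmoid unit (back propagation with
    no hidden layer): each weight change is proportional to its input."""
    y = 1.0 / (1.0 + np.exp(-w @ x))
    return w + lr * (t - y) * y * (1.0 - y) * x

w0 = np.zeros(5)
x01 = np.array([1.0, 0.0, 1.0, 0.0, 0.0])   # a pattern in the 0,1 coding
xpm = 2.0 * x01 - 1.0                        # the same pattern in -1,+1

w_after_01 = delta_step(w0, x01, t=1.0)
w_after_pm = delta_step(w0, xpm, t=1.0)
# In the 0,1 coding, weights from the "off" units do not move at all;
# in the -1,+1 coding, every weight is updated on every error.
```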
Thus, there seems to be a bias in the 0,1 representation for correcting the hyperplane more when a positive event is misclassified. In the new representation, both positive and negative events are treated equally, hence it is unbiased. The basic lesson here seems to be that one should carefully examine every choice that has been made during the design process. The representation of the input, even down to such low level details as deciding whether "off" should be represented as 0 or −1, could make a significant difference in the generalization.

6 BORDER PATTERNS

We now consider a method for improving the generalization by intelligently selecting the patterns in the training set. Normally, for a given training set, when the inputs are spread evenly around the input space, there can be several generalizations which are consistent with the patterns. The performance of the network on the test set becomes a random event, depending on the initial state of the network. If practical, it makes sense to choose training patterns which can limit the possible generalizations. In particular, if we can find those examples which are closest to the separating surface, we can maximally constrain the number of generalizations. The solution that the network converges to using these "border" patterns should have a higher probability of being a good separator. In general, finding a perfect set of border patterns can be computationally expensive; however, there might exist simple heuristics which can help select good training examples. We explored one heuristic for choosing such points: selecting only those patterns in which the number of 1's is either one less or one more than half the number of input units. Intuitively, these inputs should be close to the desired separating surface, thereby constraining the network more than random patterns would.
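The heuristic can be sketched as a small pattern generator (a sketch under the stated definition; the sampling scheme is an assumption):

```python
import numpy as np

def border_patterns(d, n, seed=0):
    """Sample n patterns whose number of 1's is one less or one more
    than half of d (for odd d: (d-1)/2 or (d+1)/2 ones) -- the border
    heuristic for the majority function."""
    rng = np.random.default_rng(seed)
    lo, hi = (d - 1) // 2, (d + 1) // 2
    pats = np.zeros((n, d), dtype=int)
    for i in range(n):
        k = lo if rng.random() < 0.5 else hi
        pats[i, rng.choice(d, size=k, replace=False)] = 1
    return pats

P = border_patterns(d=23, n=100)
ones = P.sum(axis=1)
labels = (ones > 23 / 2).astype(int)   # majority label of each pattern
```

Every sampled pattern sits just on one side of the majority boundary, so each one directly constrains the separating surface.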
Our results show that using only border patterns in the training set, there is a large increase in the expected performance of the network for a given S. In addition, the scaling behavior as a function of S seems to be very different and is faster than an exponential decrease. (Figure 4 shows typical failure rate vs. S curves comparing border patterns, the −1,+1 representation, and the 0,1 representation.)

6.1 BORDER PATTERNS AND PERFECT LEARNING

We say the network has perfectly learned a function when the test patterns are never misclassified. For the majority function, one can argue that at least some border patterns must be present in order to guarantee perfect performance. If no border patterns were in the training set, then the network could have learned a threshold function whose threshold is one less or one more than that of majority. Furthermore, if we know that a certain number of border patterns is guaranteed to give perfect performance, say b(d), then given the probability that a random pattern is a border pattern, we can calculate the expected number of random patterns sufficient to learn majority. For odd d, there are 2·C(d, (d−1)/2) border patterns, so the probability of choosing a border pattern randomly is C(d, (d−1)/2) / 2^(d−1). As d gets larger this probability decreases as 1/√d.* The expected number of randomly chosen patterns required before b(d) border patterns are chosen is therefore on the order of b(d)·√d.

* This can be shown using Stirling's approximation to d!.

Figure 4: Graph showing the average failure rate vs. S using the 0,1 representation (right), the −1,+1 representation (middle), and using border patterns (left). The network had 23 input units and was tested on a test set consisting of 1024 patterns.

From our data we find that 3d border patterns are always sufficient to learn the test set perfectly.
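The 1/√d behavior can be checked numerically. A sketch comparing the exact count with the Stirling estimate, which works out to √(8/(πd)) (the explicit constant is our derivation, not stated in the paper):

```python
from math import comb, pi, sqrt

def p_border(d):
    """Exact probability that a uniformly random pattern in {0,1}^d
    (d odd) is a border pattern: 2*C(d, (d-1)/2) / 2^d."""
    return 2 * comb(d, (d - 1) // 2) / 2 ** d

# Stirling's approximation to d! gives p_border(d) ~ sqrt(8/(pi*d)),
# i.e. the probability decreases as 1/sqrt(d).
exact_1001 = p_border(1001)
approx_1001 = sqrt(8 / (pi * 1001))
```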
From this, and from the theoretical results in [Cover, 65], we can be confident that b(d) is linear in d. Thus, O(d^(3/2)) random patterns should be sufficient to learn majority perfectly. It should be mentioned that border patterns are not the only patterns which contribute to the generalization of the network. Figure 5 shows that the failure rate of the network when trained with random training patterns which happen to contain b border patterns is substantially better than with a training set consisting of only b border patterns. Note that perfect performance is achieved at the same point in both cases.

7 CONCLUSION

In this paper we have described a systematic study of some of the various factors affecting scaling and generalization in neural networks. Using empirical studies in a simple test domain, we were able to obtain precise scaling relationships between the performance of the network, the number of training patterns, and the size of the network. It was shown that for a fixed network size, the failure rate decreases exponentially with the size of the training set. The number of patterns required to achieve a fixed performance level was shown to increase linearly with the network size.

Figure 5: This figure compares the failure rate on a random training set which happens to contain b border patterns (bottom plot) with a training set composed of only b border patterns (top plot). The horizontal axis is the number of border patterns.

A general finding was that the performance of the network was very sensitive to a number of factors. A slight change in the input representation caused a jump in the performance of the network. The specific patterns in the training set had a large influence on the final weights and on the generalization. By selecting the training patterns intelligently, the performance of the network was increased significantly.
The notion of border patterns was introduced as the most interesting patterns in the training set. In terms of the number of patterns required to teach a function to the network, these patterns are near optimal. It was shown that a network trained only on border patterns generalizes substantially better than one trained on the same number of random patterns. Border patterns were also used to derive an expected bound on the number of random patterns sufficient to learn majority perfectly. It was shown that on average, O(d^(3/2)) random patterns are sufficient to learn majority perfectly. In conclusion, this paper advocates a careful study of the process of generalization in neural networks. There are a large number of different factors which can affect the performance. Any assumptions made when applying neural networks to a real-world problem should be made with care. Although much more work needs to be done, it was shown that many of the issues can be effectively studied in a simple test domain.

Acknowledgements

We thank T. Sejnowski, R. Rivest and A. Barron for helpful discussions. We also thank T. Sejnowski and B. Bogstad for assistance in development of the simulator code. This work was partially supported by the National Center for Supercomputing Applications and by National Science Foundation grant Phy 86-58062.

References

[Ahmad, 88] S. Ahmad. A Study of Scaling and Generalization in Neural Networks. Technical Report UIUCDCS-R-88-1454, Department of Computer Science, University of Illinois, Urbana-Champaign, IL, 1988.

[Cover, 65] T. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Elect. Comp., 14:326-334, 1965.

[Minsky and Papert, 69] Marvin Minsky and Seymour Papert. Perceptrons. MIT Press, Cambridge, Mass., 1969.

[Muroga, 71] S. Muroga. Threshold Logic and its Applications. Wiley, New York, 1971.

[Rumelhart and McClelland, 86] D. E. Rumelhart and J. L. McClelland, editors.
Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations. Volume I, MIT Press, Cambridge, Mass., 1986.

[Stornetta and Huberman, 87] W. S. Stornetta and B. A. Huberman. An improved three-layer, back propagation algorithm. In Proceedings of the IEEE First International Conference on Neural Networks, San Diego, CA, 1987.

[Sejnowski and Rosenberg, 87] T. J. Sejnowski and C. R. Rosenberg. Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168, 1987.

[Tesauro and Sejnowski, 88] G. Tesauro and T. J. Sejnowski. A Parallel Network that Learns to Play Backgammon. Technical Report CCSR-88-2, Center for Complex Systems Research, University of Illinois, Urbana-Champaign, IL, 1988.
| 1988 | 80 | 169 |
A MODEL OF NEURAL OSCILLATOR FOR A UNIFIED SUBMODULE

A. B. Kirillov, G. N. Borisyuk, R. M. Borisyuk, Ye. I. Kovalenko, V. I. Makarenko, V. A. Chulaevsky, V. I. Kryukov
Research Computer Center, USSR Academy of Sciences, Pushchino, Moscow Region 142292 USSR

ABSTRACT

A new model of a controlled neuron oscillator, proposed earlier {Kryukov et al, 1986} for the interpretation of the neural activity in various parts of the central nervous system, may have important applications in engineering and in the theory of brain functions. The oscillator has a good stability of the oscillation period, its frequency is regulated linearly in a wide range, and it can exhibit arbitrarily long oscillation periods without changing the time constants of its elements. The latter is achieved by using the critical slowdown in the dynamics arising in a network of nonformal excitatory neurons {Kovalenko et al, 1984, Kryukov, 1984}. By changing the parameters of the oscillator one can obtain various functional modes which are necessary to develop a model of higher brain function.

THE OSCILLATOR

Our oscillator comprises several hundreds of modelled excitatory neurons (located at the sites of a plane lattice) and one inhibitory neuron. The latter receives output signals from all the excitatory neurons and its own output is transmitted via feedback to every excitatory neuron (Fig. 1). Each excitatory neuron is connected bilaterally with its four nearest neighbours. Each neuron has a threshold r(t) decaying exponentially to a value r∞^e or r∞^i (for an excitatory or inhibitory neuron). A Gaussian noise with zero mean and standard deviation σ is added to the threshold. The membrane potential of a neuron is the sum of input impulses, decaying exponentially when there is no input. If the membrane potential exceeds the threshold, the neuron fires and sends impulses to the neighbouring neurons.
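The discrete-time dynamics described above can be sketched in a minimal simulation. This is an illustrative sketch only: the synaptic increments (named a_ee, a_ei, a_ie after the text below), the decay factors, and all numerical values are assumptions, not the parameters used in the paper.

```python
import numpy as np

def simulate(n=20, steps=300, seed=0):
    """Sketch of the oscillator's update rules: lattice of excitatory
    neurons with decaying thresholds and potentials, one inhibitory
    neuron with global feedback. All parameter values are assumptions."""
    rng = np.random.default_rng(seed)
    N = n * n
    r_inf_e, r_inf_i = 1.0, 2.0      # threshold asymptotes
    lam_r, lam_m = 0.7, 0.8          # threshold / potential decay factors
    a_ee, a_ei, a_ie, sigma = 0.6, 0.1, 3.0, 0.2

    idx = np.arange(N).reshape(n, n)
    # indices of the four nearest neighbours on the n x n lattice
    nbrs = np.stack([np.roll(idx, s, axis=ax).ravel()
                     for ax in (0, 1) for s in (1, -1)])

    r = np.full(N, 5.0)              # excitatory thresholds r(t)
    m = np.zeros(N)                  # excitatory membrane potentials
    r_i, m_i = 5.0, 0.0              # the single inhibitory neuron
    x = (rng.random(N) < 0.2).astype(float)   # initial firing pattern

    E = np.empty(steps)
    for t in range(steps):
        E[t] = x.mean()                       # network activity E(t)
        fired_i = m_i > r_i + rng.normal(0.0, sigma)
        # decay + input from the four neighbours, minus global
        # inhibition when the inhibitory neuron fires
        m = lam_m * m + a_ee * x[nbrs].sum(axis=0) \
            - (a_ie if fired_i else 0.0)
        m_i = 0.0 if fired_i else lam_m * m_i + a_ei * x.sum()
        r = r_inf_e + lam_r * (r - r_inf_e)   # exponential threshold decay
        r_i = r_inf_i + lam_r * (r_i - r_inf_i)
        x = (m > r + rng.normal(0.0, sigma, N)).astype(float)
    return E

E = simulate()
```

The trace E contains the network activity over time; with suitable parameters it should show the burst-and-reset cycle described in the paper.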
An impulse from an excitatory neuron to an excitatory one increases the membrane potential of the latter by a_ee, from an excitatory to the inhibitory by a_ei, and from the inhibitory to an excitatory decreases the membrane potential by a_ie. We consider a discrete-time model, the time step being equal to the absolute refractory period. We associate a variable x_i(t) with each excitatory neuron. If the i-th neuron fires at step t, we take x_i(t) = 1; if it does not, then x_i(t) = 0. The mean E(t) = (1/N) Σ_i x_i(t) will be referred to as the network activity, where N is the number of excitatory neurons.

Figure 1. A - neuron, B - scheme of interconnections

Let us consider a situation when inhibitory feedback is cut off. Then such a model exhibits a critical slowdown of the dynamics {Kovalenko et al, 1984, Kryukov, 1984}. Namely, if the interconnections and parameters of neurons are chosen appropriately, an initial pattern of activated neurons has an unusually long lifetime as compared with the time of membrane potential decay. In this mode E(t) is slowly increasing and causes the inhibitory neuron to fire. Now, if we turn on the negative feedback, an output impulse from the inhibitory neuron sharply decreases the membrane potentials of the excitatory neurons. As a consequence, E(t) falls down and the process starts from the beginning. We studied this oscillator by means of a simulation model. There are 400 excitatory neurons (20×20 lattice) and one inhibitory neuron in our model.

THE MAIN PROPERTIES OF THE OSCILLATOR

a. When the thresholds of excitatory neurons are high enough, the inhibitory neuron does not fire and there are no oscillations.

Figure 2. Oscillatory mode. A - network activity, B - neuron spike trains

b. At lower values of r∞^e,
the network activity E(t) changes periodically and excitatory neurons generate bursts of spikes (Fig. 2). The inhibitory neuron generates regular periodical spike trains.

c. If the parameters are chosen appropriately, the mean oscillation period is much greater than the mean interspike interval of a network neuron. The frequency of oscillations is regulated by r∞^e (Fig. 3A) or, which is the same, by the intensity of the input flow. The minimum period is determined by the decay rate of the inhibitory input, the maximum by the lifetime of the metastable state.

Figure 3. A - oscillation frequency 1/T vs. threshold r∞^e, B - coefficient of variation of the period K vs. period

d. The coefficient of variation of the period is of the order of several percent, but it increases at low frequencies (Fig. 3B). The stability of oscillations can be increased by introducing some inhomogeneity in the network, for example, when a part of the excitatory neurons receives no inhibitory signals.

OSCILLATOR UNDER IMPULSE STIMULATION

In this section we consider first the neural network without the inhibitory neuron. But we imitate a periodic input to the network by slowly varying the thresholds r(t) of the excitatory neurons. Namely, we add to r(t) a value Δr = A·sin(ωt) and fire a part of the network at some phase of the sine wave. Then we look at the time needed for the network to restore its background activity. There are specific values of the phase for which this time is rather long (Fig. 4B). Now consider the full oscillator with an oscillation period T (in this section T = 35 ± 2.5 time steps). We stimulate the oscillator by a periodical (with period t_st < 35) sharp increase of the membrane potential of each excitatory neuron by a value ε_st.
As the stimulation proceeds, the oscillation period gradually decreases from T = 35 to some value T_st, remaining then equal to T_st. The value of T_st depends on the stimulation intensity ε_st: as ε_st gets greater, T_st tends to the stimulation period t_st.

Figure 4. A - threshold modulation, B - duration of the network response vs. phase of threshold modulation, C - critical stimulation intensity vs. stimulation period

For every stimulation period t_st there is a characteristic value ε_0 of the stimulation intensity ε_st, such that with ε_st > ε_0 the value of T_st is equal to the stimulation period t_st. The dependence between ε_0 and t_st is close to a linear one (Fig. 4C). The usual relaxation oscillator also exhibits a linear dependence between ε_0 and t_st. At the same time, we did not find in our oscillator any resonance phenomena essential to a linear oscillator.

THE NETWORK WITH INTERNAL NOISE

In a further development of the neural oscillator we tried to build a model that would be more adequate to the biological counterpart. To this end, we changed the structure of interconnections and tried to define more correctly the noise component of the input signal coming to an excitatory neuron. In the model described above we imitated the sum of inputs from distant neurons by independent Gaussian noise. Here we used real noise produced by the network. In order to simulate this internal noise, we randomly choose 16 distant neighbours for every excitatory neuron. Then we assume that the network elements are adjusted to work in a certain noise environment. This means that a 'mean' internal noise would provide conditions for the neuron to be the most sensitive to the information coming from its nearest neighbours. So, for every neuron i we calculate the sum k_i = Σ_j x_j(t),
where the summation is over all distant neighbours of this neuron, and compare it with the mean internal noise k̄ = (1/N) Σ_i k_i. The internal noise for neuron i is now n_i = C(k_i − k̄), where C > 0 is a constant. We choose model parameters in such a way that the noise component is of the order of several percent of the membrane potential. Nevertheless, the network exhibits in this case a dramatic increase of the lifetime of the initial pattern of activated neurons, as compared with the network with independent Gaussian noise. The range of parameters for which this slowdown of the dynamics is observed is also considerably increased. Hence, longer periods and better period stability could be obtained for our generator if we use internal noise.

THE CHAIN OF THREE SUBMODULES: A MODEL OF COLUMN OSCILLATOR

Now we consider a small system constituted of three oscillator submodules, A, B and C, connected consecutively so that submodule A can transmit excitation to submodule B, B to C, and C to A. The excitation can only be transmitted when the total activity of the submodule reaches its threshold level, i.e. when the corresponding inhibitory neuron fires. After the inhibitory neuron has fired, the activity of its submodule is set to be small enough for the submodule not to be active with large probability until the excitation from another submodule comes. Therefore, we expect A, B and C to work consecutively. In fact, in our simulation experiments we observed such behavior of the
An interesting feature of the chain is that the standard deviation SeT) of the period (Fig. 5B) is small enough, even for the oscillator of relatively small size. The upper lines in Fig. 5 correspond to square 10*10 network, middle - to 9*9, lower - to 8*8 one. One can see that the loss of 36 percent of elements only causes a reduction of the working range without the loss of stability. CXHUJSI~ Though we have not considered all the interesting modes of the oscillator, we believe that, owing to the phenomenon of metastability, the same oscillator exhibits different behaviour under slightly different threshold parameters and the same and/or different inPuts. Let us enumerate the most interesting functional possibilities of the oscillator, which can be easily obtained from our results. 1.Pacemaker with the frequency regulated in a wide range and with a high period stability, as compared with the neuron (Fig. 313). 2. Integrator (input=threshold, output=phase) with a wide A Model of Neural Oscillator for a Unified Submodule 567 range of linear regulation (see Fig. 3A). 3.Generator of damped oscillations (for discontinuous inPut). 4. Delay device controlled by an external signal. 5.Phase comparator (see Fig. 4A). We have already used these functions for the interPretation of electrical activity of several functionally different neural structures {Kryukov et aI, 1986}. The other functions will be used in a system model of attention {Kryukov, 1989} presented in this volume. All these considerations justify the name of our neural oscillator - a unified submodule for a ' resonance' neurocomputer. References E. I. Kovalenko, G. N. Borisyuk, R. M. Borisyuk, A. B. Kirillov, V . I . Kryukov. Short-tenn memory as a metastable state. II. S1IWlation model, Cybernetics and Systems Research3 2, R. Trappl (ed.), Elsevier, pp. 266-270 (1984) V. I. Kryukov. Short-tenn memory as a metastable state. I. Master equation approach, Cybernetics and Systems Research 3 2, R. 
Trappl (ed.), Elsevier, pp. 261-265 (1984)

V. I. Kryukov. "Neurolocator", a model of attention (1989) (in this volume).

V. I. Kryukov, G. N. Borisyuk, R. M. Borisyuk, A. B. Kirillov, Ye. I. Kovalenko. The Metastable and Unstable States in the Brain (in Russian), Pushchino, Acad. Sci. USSR (1986) (to appear in Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis, Manchester University Press, 1989).
| 1988 | 81 | 170 |
AN OPTIMALITY PRINCIPLE FOR UNSUPERVISED LEARNING

Terence D. Sanger
MIT AI Laboratory, NE43-743, Cambridge, MA 02139 ([email protected])

ABSTRACT

We propose an optimality principle for training an unsupervised feedforward neural network based upon maximal ability to reconstruct the input data from the network outputs. We describe an algorithm which can be used to train either linear or nonlinear networks with certain types of nonlinearity. Examples of applications to the problems of image coding, feature detection, and analysis of random-dot stereograms are presented.

1. INTRODUCTION

There are many algorithms for unsupervised training of neural networks, each of which has a particular optimality criterion as its goal. (For a partial review, see (Hinton, 1987, Lippmann, 1987).) We have presented a new algorithm for training single-layer linear networks which has been shown to have optimality properties associated with the Karhunen-Loeve expansion (Sanger, 1988b). We now show that a similar algorithm can be applied to certain types of nonlinear feedforward networks, and we give some examples of its behavior. The optimality principle which we will use to describe the algorithm is based on the idea of maximizing information, which was first proposed as a desirable property of neural networks by Linsker (1986, 1988). Unfortunately, measuring the information in network outputs can be difficult without precise knowledge of the distribution on the input data, so we seek another measure which is related to information but which is easier to compute. If instead of maximizing information, we try to maximize our ability to reconstruct the input (with minimum mean-squared error) given the output of the network, we are able to obtain some useful results. Note that this is not equivalent to maximizing information except in some special cases.
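The single-layer algorithm referred to here (the Generalized Hebbian Algorithm, presented in Section 2.1) updates C ← C + γ(y xᵀ − LT[y yᵀ]C) with y = Cx. A NumPy sketch with illustrative data follows; the learning rate, data distribution, and convergence thresholds are assumptions, and a constant rate is used instead of the 1/t decrease described later.

```python
import numpy as np

def gha(X, M, lr=0.005, epochs=30, seed=0):
    """Generalized Hebbian Algorithm (Sanger's rule):
    C <- C + lr * (y x^T - LT[y y^T] C), with y = C x and LT[.]
    zeroing the entries above the diagonal."""
    rng = np.random.default_rng(seed)
    C = rng.normal(scale=0.1, size=(M, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = C @ x
            C += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ C)
    return C

# Zero-mean Gaussian data with a diagonal correlation matrix, so the
# true eigenvectors are the coordinate axes (an illustrative test case).
rng = np.random.default_rng(42)
scales = np.array([3.0, 2.0, 1.0, 0.5, 0.3])
X = rng.normal(size=(2000, 5)) * scales

C = gha(X, M=2)
# Each row should approach a unit-length eigenvector, ordered by
# decreasing eigenvalue, up to sign.
align0 = abs(C[0] / np.linalg.norm(C[0]) @ np.eye(5)[:, 0])
align1 = abs(C[1] / np.linalg.norm(C[1]) @ np.eye(5)[:, 1])
```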
However, it contains the intuitive notion that the input data is being represented by the network in such a way that very little of it has been "lost".

2. LINEAR CASE

We now summarize some of the results in (Sanger, 1988b). A single-layer linear feedforward network is described by an M×N matrix C of weights such that if x is a vector of N inputs and y is a vector of M outputs with M < N, we have y = Cx. As mentioned above, we choose an optimality principle defined so that we can best reconstruct the inputs to the network given the outputs. We want to minimize the mean squared error E[(x − x̂)²], where x is the actual input, which is zero-mean with correlation matrix Q = E[xxᵀ], and x̂ is a linear estimate of this input given the output y. We compute x̂ as the linear least squares estimate (LLSE) for any matrix C of weights which we choose. It is well known that the mean-squared error of the LLSE is minimized if the rows of C are a linear combination of the first M eigenvectors of the correlation matrix Q. One optimal choice of C is the Singular Value Decomposition (SVD) of Q, for which the output correlation matrix E[yyᵀ] = CQCᵀ will be the diagonal matrix of eigenvalues of Q. In this case, the outputs are uncorrelated and the sum of their variances (trace E[yyᵀ]) is maximal for any set of M uncorrelated outputs. We can thus think of the eigenvectors as being obtained by any process which maximizes the output variance while maintaining the outputs uncorrelated. We now define the optimal single-layer linear network as that network whose weights represent the first M eigenvectors of the input correlation matrix Q. The optimal network thus minimizes the mean-squared approximation error E[(x − x̂)²] given the shape constraint that M < N.

2.1 LINEAR ALGORITHM

We have previously proposed a weight-update rule called the "Generalized Hebbian Algorithm", and proven that this algorithm causes the rows of the weight matrix C to converge to the eigenvectors of the input correlation matrix Q (Sanger, 1988a,b). The algorithm is given by:

C(t + 1) = C(t) + γ (y(t)xᵀ(t) − LT[y(t)yᵀ(t)] C(t))     (1)

where γ is a rate constant which decreases as 1/t, x(t) is an input sample vector, y(t) = C(t)x(t), and LT[·] is an operator which makes its matrix argument lower triangular by setting all entries above the diagonal to zero. This algorithm can be implemented using only a local synaptic learning rule (Sanger, 1988b). Since the Generalized Hebbian Algorithm computes the eigenvectors of the input correlation matrix Q, it is related to the Singular Value Decomposition (SVD),
2.1 LINEAR ALGORITHM We have previously proposed a weight-update rule called the "Generalized Hebbian Algorithm" , and proven that this algorithm causes the rows of the weight matrix C to converge to the eigenvectors of the input correlation matrix Q (Sanger, 1988a,b). The algorithm is given by: C(t + 1) = C(t) + I (y(t)xT(t) - LT[y(t)yT(t)]C(t») (1) where I is a rate constant which decreases as l/t, x(t) is an input sample vector, yet) = C(t)x(t), and LTD is an operator which makes its matrix argument lower triangular by setting all entries above the diagonal to zero. This algorithm can be implemented using only a local synaptic learning rule (Sanger, 1988b). Since the Generalized Hebbian Algorithm computes the eigenvectors of the input correlation matrix Q, it is related to the Singular Value Decomposition (SVD), An Optimality Principle for Unsupervised Learning 13 c. Figure 1: (aJ original image. (bJ image coded at .36 bits per pixel. (cJ masks learned by the network which were used for vector quantized coding of 8x8 blocks of the image. Principal Components Analysis (PCA), and the Karhunen-Loeve Transform (KLT). (For a review of several related algorithms for performing the KLT, see (Oja, 1983).) 2.2 IMAGE CODING We present one example of the behavior of a single-layer linear network. (This example appears in (Sanger, 1988b).) Figure 1 a shows an original 256x256x8bit image which was used for training a network. 8x8 blocks of the image were chosen by scanning over the image, and these were used as training inputs to a network with 64 inputs and 8 outputs. After training, the set of weights for each output (figure lc) represents a vector quantizing mask. Each 8x8 block of the input image is then coded using the outputs of the network. Each output is quantized with a number of bits related to the log of the variance, and the original figure is approximated from the quantized outputs. 
The reconstruction of figure 1b uses a total of 23 bits per 8×8 region, which gives a data rate of 0.36 bits per pixel. The fact that the image could be represented using such a low bit rate indicates that the masks that were found represent significant features which are useful for recognition. This image coding technique is equivalent to block-coded KLT methods common in the literature.

3. NONLINEAR CASE

In general, training a nonlinear unsupervised network to approximate nonlinear functions is very difficult. Because of the large (infinite-dimensional) space of possible functions, it is important to have detailed knowledge of the class of functions which are useful in order to design an efficient network algorithm. (Several people pointed out to me that the talk implied such knowledge is not necessary, but unfortunately such an implication is false.) The network structure we consider is a linear layer represented by a matrix C (which is perhaps an interior layer of a larger network) followed by node nonlinearities σ(y_i), where y_i is the ith linear output, followed by another linear layer (perhaps followed by more layers). We assume that the nonlinearities σ(·) are fixed, and that the only parameters susceptible to training are the linear weights C. If z is the M-vector of outputs after the nonlinearity, then we can write each component z_i = σ(y_i) = σ(c_i x) where c_i is the ith row of the matrix C. Note that the level contours of each function z_i are determined entirely by the vector c_i, and that the effect of σ(·) is limited to modifying the output value. Intuitively, we thus expect that if y_i encodes a useful parameter of the input x, then z_i will encode the same parameter, although scaled by the nonlinearity σ(·).
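For the rectification nonlinearity used in two of the simulations, pairing each linear output with its negation loses no information; a quick numerical check (a sketch, not from the paper):

```python
import numpy as np

def relu(y):
    """The rectification nonlinearity: sigma(y) = y for y >= 0, else 0."""
    return np.where(y >= 0.0, y, 0.0)

y = np.random.default_rng(0).normal(size=10000)   # a linear output
pos, neg = relu(y), relu(-y)

# At most one of the pair is nonzero for any input, and together the
# two rectified channels carry the full linear value: pos - neg == y.
```

This is why 2M rectified outputs can represent the data available from an M-vector of linear outputs.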
This can be formalized, and if we choose our optimality principle to again be minimum mean-squared linear approximation of the original input x given the output z, the best solution remains when the rows of C are a linear combination of the first M eigenvectors of the input correlation matrix Q (Bourlard and Kamp, 1988). In two of the simulations, the nonlinearity σ(·) which we use is a rectification nonlinearity, given by

σ(y_i) = y_i if y_i ≥ 0, and σ(y_i) = 0 if y_i < 0.

Note that at most one of {σ(y_i), σ(−y_i)} is nonzero at any time, so these two values are uncorrelated. Therefore, if we maximize the variance of y (before the nonlinearity) while maintaining the elements of z (after the nonlinearity) uncorrelated, we need 2M outputs in order to represent the data available from an M-vector y. Note that 2M may be greater than the number of inputs N, so that the "hidden layer" z can have more elements than the input.

3.1 NONLINEAR ALGORITHM

The nonlinear Generalized Hebbian Algorithm has exactly the same form as for the linear case, except that we substitute the output values after the nonlinearity for the linear values. The algorithm is thus given by:

C(t + 1) = C(t) + γ (z(t)xᵀ(t) − LT[z(t)zᵀ(t)] C(t))     (2)

where the elements of z are given by z_i(t) = σ(y_i(t)), with y(t) = C(t)x(t). Although we have not proven that this algorithm converges, a heuristic analysis of its behavior (for a rectification nonlinearity and Gaussian input distribution)
It also allows optimal linear estimation of the input given the output, so long as both polarities of each of the eigenvectors are present.

3.2 NONLINEAR EXAMPLES

3.2.1 Encoder Problem

We compare the performance of two nonlinear networks which have learned to perform an identity mapping (the "encoder" problem). One is trained by backpropagation (Rumelhart et al., 1986), and the other has two hidden layers trained using the unsupervised Hebbian algorithm, while the output layer is trained using a supervised LMS algorithm (Widrow and Hoff, 1960). The network has 5 inputs, two hidden layers of 3 units each, and 5 outputs. There is a sigmoid nonlinearity at each hidden layer, but the thresholds are all kept at zero. The task is to minimize the mean-squared difference between the inputs and the outputs. The input is a zero-mean correlated Gaussian random 5-vector, and both algorithms are presented with the same sequence of inputs.

The unsupervised-trained network converged to a steady state after 1600 examples, and the backpropagation network converged after 2400 (convergence was determined by no further decrease in average error). The RMS error at steady state was 0.42 for both algorithms (this figure should be compared to the sum of the variances of the inputs, which was 5.0). Therefore, for this particular task, there is no significant difference in performance between backpropagation and the Generalized Hebbian Algorithm. This is an encouraging result, since if we can use an unsupervised algorithm to solve other problems, the training time will scale at most linearly with the number of layers.

3.2.2 Nonlinear Receptive Fields

Several investigators have shown that Hebbian algorithms can discover useful image features related to the receptive fields of cells in primate visual cortex (see for example Bienenstock et al., 1982; Linsker, 1986; Barrow, 1987).
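Returning to the encoder experiment above, the supervised half of the hybrid network uses the Widrow-Hoff LMS rule (Widrow and Hoff, 1960). The following is a minimal sketch of that rule training an output layer to reconstruct the input from a fixed hidden representation; the random projection `P` merely stands in for the unsupervised-trained hidden layers, and all names, sizes, and constants here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 5))   # stand-in for the trained hidden layers
W = np.zeros((5, 3))              # supervised output layer, trained by LMS

def lms_step(W, h, target, eta=0.02):
    """Widrow-Hoff update: W <- W + eta * (target - W h) h^T,
    a stochastic-gradient step on the squared reconstruction error."""
    return W + eta * np.outer(target - W @ h, h)

def mse(W, X):
    """Mean squared reconstruction error over a batch of inputs."""
    E = X - np.tanh(X @ P.T) @ W.T
    return float(np.mean(np.sum(E * E, axis=1)))

X_test = rng.standard_normal((200, 5))
err_before = mse(W, X_test)       # with W = 0 this is the mean input energy
for _ in range(5000):
    x = rng.standard_normal(5)
    W = lms_step(W, np.tanh(P @ x), x)
err_after = mse(W, X_test)        # LMS training reduces the error
```

Because only the output layer is trained supervised, the training cost of this half does not grow with the number of unsupervised layers beneath it.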
One of the more recent methods uses an algorithm very similar to the one proposed here to find the principal component of the input (Linsker, 1986). We performed an experiment to find out what types of nonlinear receptive fields could be learned by the Generalized Hebbian Algorithm if provided with input similar to that used by Linsker. We used a single-layer nonlinear network with 4096 inputs arranged in a 64x64 grid, and 16 outputs with a rectification nonlinearity. The input data consisted of images of low-pass filtered white Gaussian noise multiplied by a Gaussian window. After 5000 samples, the 16 outputs had learned the masks shown in figure 2. These masks possess qualitative similarity to the receptive fields of cells found in the visual cortex of cat and monkey (see for example Andrews and Pollen, 1979). They are equivalent to the masks learned by a purely linear network (Sanger, 1988b), except that both positive and negative polarities of most mask shapes are present here.

Figure 2: Nonlinear receptive fields ordered from left-to-right and top-to-bottom.

3.2.3 Stereo

We now show how the nonlinear Generalized Hebbian Algorithm can be used to train a two-layer network to detect disparity edges. The network has 128 inputs, 8 types of unit in the hidden layer with a rectification nonlinearity, and 4 types of output unit. A 128x128 pixel random-dot stereo pair was generated in which the left half had a disparity of two pixels and the right half had zero disparity. The image was convolved with a vertically-oriented elliptical Gaussian mask to remove high-frequency vertical components. Corresponding 8x8 blocks of the left and right images (64 pixels from each image) were multiplied by a Gaussian window function and presented as input to the network, which was allowed to learn the first layer according to the unsupervised algorithm. After 4000 iterations, the first layer had converged to a set of 8 pairs of masks.
These masks were convolved with the images (the left mask was convolved with the left image and the right mask with the right image, and the two results were summed and rectified) to produce a pattern of activity at the hidden layer. (Although there were only 8 types of hidden unit, we now allow one of each type to be centered at every input image location to obtain a pattern of total activity.) Figure 3 shows this activity, and we can see that the last four masks are disparity-sensitive, since they respond preferentially to either the 2-pixel disparity or the zero disparity region of the image.

Figure 3: Hidden layer response for a two-layer nonlinear network trained on stereo images. The left half of the input random dot image has a 2 pixel disparity, and the right half has zero disparity.

Figure 4: Output layer response for a two-layer nonlinear network trained on stereo images.

Since we were interested in disparity, we trained the second layer using only the last four hidden unit types. The second layer had 1024 (= 4x16x16) inputs organized as a 16x16 receptive field in each of the four hidden unit "planes". The outputs did not have any nonlinearity. Training was performed by scanning over the hidden unit activity pattern (successive examples overlapped by 8 pixels), and 6000 iterations were used to produce the second-layer weights. The masks that were learned were then convolved with the hidden unit activity pattern to produce an output unit activity pattern, shown in figure 4. The third output is clearly sensitive to a change in disparity (a depth edge). If we generate several different random-dot stereograms and average the output results, we see that the other outputs are also sensitive (on average) to disparity changes, but not as much as the third.

Figure 5: Output layer response averaged over ten stereograms with a central 2 pixel disparity square and zero disparity surround.
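The convolve-sum-rectify step described above can be sketched as follows for a single hidden-unit type. This is a simplified illustration with hypothetical helper names and small image sizes, not the paper's implementation.

```python
import numpy as np

def conv2_same(img, k):
    """Naive 'same'-size 2-D correlation of an image with a kernel."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty(img.shape)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(pad[y:y + kh, x:x + kw] * k)
    return out

def hidden_response(left, right, mask_l, mask_r):
    """Hidden-unit activity as described in the text: left mask applied to
    the left image plus right mask applied to the right image, summed and
    rectified."""
    return np.maximum(conv2_same(left, mask_l) + conv2_same(right, mask_r), 0.0)

rng = np.random.default_rng(2)
left = rng.standard_normal((16, 16))
right = np.roll(left, 2, axis=1)   # a 2-pixel horizontal disparity
act = hidden_response(left, right,
                      rng.standard_normal((8, 8)), rng.standard_normal((8, 8)))
```

Applying one such unit at every image location, as the text describes, yields the full hidden-layer activity map.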
Figure 5 shows the averaged response to 10 stereograms with a central 2 pixel disparity square against a zero disparity background. Note that the ability to detect disparity edges requires the rectification nonlinearity at the hidden layer, since no linear function has this property.

4. CONCLUSION

We have shown that the unsupervised Generalized Hebbian Algorithm can produce useful networks. The algorithm has been proven to converge only for single-layer linear networks. However, when applied to nonlinear networks with certain types of nonlinearity, it appears to converge to good results. In certain cases, it operates by maintaining the outputs uncorrelated while maximizing their variance. We have not investigated its behavior on nonlinearities other than rectification or sigmoids, so we can make no predictions about its general utility. Nevertheless, the few examples presented for the nonlinear case are encouraging, and suggest that further investigation of this algorithm will yield interesting results.

Acknowledgements

I would like to express my gratitude to the many people at the NIPS conference and elsewhere whose comments, criticisms, and suggestions have increased my understanding of these results. In particular, thanks are due to Ralph Linsker for pointing out to me an important error in the presentation and for his comments on the manuscript, as well as to John Denker, Steve Nowlan, Rich Sutton, Tom Breuel, and my advisor Tomaso Poggio. This report describes research done at the MIT Artificial Intelligence Laboratory, and sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Alfred P. Sloan Foundation; by the National Science Foundation; by the Artificial Intelligence Center of Hughes Aircraft Corporation (SI-801534-2); and by the NATO Scientific Affairs Division (0403/87). Support for the A.I.
Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. The author was supported during part of this research by a National Science Foundation Graduate Fellowship, and later by a Medical Scientist Training Program grant.

References

Andrews B. W., Pollen D. A., 1979, Relationship between spatial frequency selectivity and receptive field profile of simple cells, J. Physiol., 287:163-176.

Barrow H. G., 1987, Learning receptive fields, In Proc. IEEE 1st Ann. Conference on Neural Networks, volume 4, pages 115-121, San Diego, CA.

Bienenstock E. L., Cooper L. N., Munro P. W., 1982, Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex, J. Neuroscience, 2(1):32-48.

Bourlard H., Kamp Y., 1988, Auto-association by multilayer perceptrons and singular value decomposition, Biological Cybernetics, 59:291-294.

Hinton G. E., 1987, Connectionist learning procedures, CMU Tech. Report CS-87-115.

Linsker R., 1986, From basic network principles to neural architecture, Proc. Natl. Acad. Sci. USA, 83:7508-7512.

Linsker R., 1988, Self-organization in a perceptual network, Computer, 21(3):105-117.

Lippmann R. P., 1987, An introduction to computing with neural nets, IEEE ASSP Magazine, pages 4-22.

Oja E., 1983, Subspace Methods of Pattern Recognition, Research Studies Press, UK.

Rumelhart D. E., Hinton G. E., Williams R. J., 1986, Learning representations by back-propagating errors, Nature, 323(9):533-536.

Sanger T. D., 1988a, Optimal unsupervised learning, Neural Networks, 1(S1):127, Proc. 1st Ann. INNS meeting, Boston, MA.

Sanger T. D., 1988b, Optimal unsupervised learning in a single-layer linear feedforward neural network, submitted to Neural Networks.

Widrow B., Hoff M.
E., 1960, Adaptive switching circuits, In IRE WESCON Conv. Record, Part 4, pages 96-104.
NEURAL ANALOG DIFFUSION-ENHANCEMENT LAYER AND SPATIO-TEMPORAL GROUPING IN EARLY VISION

Allen M. Waxman*†, Michael Seibert*†, Robert Cunningham† and Jian Wu*

* Laboratory for Sensory Robotics, Boston University, Boston, MA 02215
† Machine Intelligence Group, MIT Lincoln Laboratory, Lexington, MA 02173

ABSTRACT

A new class of neural network aimed at early visual processing is described; we call it a Neural Analog Diffusion-Enhancement Layer or "NADEL." The network consists of two levels which are coupled through feedforward and shunted feedback connections. The lower level is a two-dimensional diffusion map which accepts visual features as input, and spreads activity over larger scales as a function of time. The upper layer is periodically fed the activity from the diffusion layer and locates local maxima in it (an extreme form of contrast enhancement) using a network of local comparators. These local maxima are fed back to the diffusion layer using an on-center/off-surround shunting anatomy. The maxima are also available as output of the network. The network dynamics serves to cluster features on multiple scales as a function of time, and can be used in a variety of early visual processing tasks such as: extraction of corners and high-curvature points along edge contours, line-end detection, gap filling in contours, generation of fixation points, perceptual grouping on multiple scales, correspondence and path impletion in long-range apparent motion, and building 2-D shape representations that are invariant to location, orientation, scale, and small deformation on the visual field.

INTRODUCTION

Computer vision is often divided into two main stages, "early vision" and "late vision", which correspond to image processing and knowledge-based recognition/interpretation, respectively. Image processing for early vision involves algorithms for feature enhancement and extraction (e.g.
edges and corners), feature grouping (i.e., perceptual organization), and the extraction of physical properties of object surfaces that comprise a scene (e.g. reflectance, depth, surface slopes and curvatures, discontinuities). The computer vision literature is characterized by a plethora of algorithms to achieve many of these computations, though they are hardly robust in performance. Biological neural network processing does, of course, achieve all of these early vision tasks, as evidenced by psychological studies of the preattentive phase of human visual processing. Often, such studies provide motivation for new algorithms in computer vision. In contrast to this algorithmic approach, computational neural network processing tries to glean organizational and functional insights from the biological realizations, in order to emulate their information processing capabilities. This is desirable mainly because of the adaptive and real-time nature of the neural network architecture.

(We acknowledge support from the Machine Intelligence Group of MIT Lincoln Laboratory. The views expressed are those of the authors and do not reflect the official policy or position of the U.S. Government.)

Here, we shall demonstrate that a single neural architecture based on dynamical diffusion-enhancement networks can realize a large variety of early vision tasks that deal mainly with perceptual grouping. The ability to group image features on multiple scales as a function of time follows from "attractive forces" that emerge from the network dynamics. We have already implemented the NADEL (in 16-bit arithmetic) on a video-rate parallel computer, the PIPE [Kent et al., 1985], as well as on a SUN-3 workstation.

THE NADEL

The Neural Analog Diffusion-Enhancement Layer was recently introduced by Seibert & Waxman [1989], and is illustrated in Figure 1; it consists primarily of two levels which are coupled via feedforward and shunted feedback connections.
Low-level features extracted from the imagery provide input to the lower level (a 2-D map), which spreads input activity over larger scales as time progresses via diffusion, allowing for passive decay of activity. The diffused activity is periodically sampled and passed upward to a contrast-enhancing level (another 2-D map) which locates local maxima in the terrain of diffuse activity. However, this forward pathway is masked by receptive fields which pass only regions of activity with positive Gaussian curvature and negative mean curvature; that is, these receptive fields play the role of inhibitory dendro-dendritic modulatory gates. This masking facilitates the local maxima detection in the upper level. The local maxima detected by the upper level are fed back to the lower diffusion level using a shunting dynamics with on-center/off-surround anatomy (cf. [Grossberg, 1973] on the importance of shunting for automatic gain control, and the role of center/surround anatomies in competitive networks). The local maxima are also available as outputs of the network, and take on different interpretations as a function of the input. A number of examples of spatio-temporal grouping will be illustrated in the next section.

The primary result of diffusion-enhancement network dynamics is to create a long-range attractive force between isolated featural inputs. This force manifests itself by shifting the local maxima of activity toward one another, leading to a featural grouping over multiple scales as a function of time. This is shown in Figure 1, where two featural inputs spread their initial excitations over time. The individual activities superpose, with the tail of one gaussian crossing the maximum of the other gaussian at an angle. This biases the superposition of activities, adding more activity to one side of a maximum than the other, causing a shift in the local maxima toward one another.
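This shift of the superposed maxima can be seen in a one-dimensional sketch; the positions, widths, and grid used here are arbitrary illustrative choices, not the paper's parameters.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)

def n_maxima(sigma):
    """Count interior local maxima of two superposed Gaussian bumps centered
    at +/-3, whose common width sigma stands for the amount of spreading
    (diffusion) that has occurred so far."""
    a = (np.exp(-(x + 3.0) ** 2 / (2 * sigma ** 2)) +
         np.exp(-(x - 3.0) ** 2 / (2 * sigma ** 2)))
    interior = (a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])
    return int(interior.sum())

# Little spreading: two distinct maxima.  Heavy spreading: the maxima have
# merged into a single maximum at the centroid of the two inputs.
few = n_maxima(1.0)
merged = n_maxima(4.0)
```

For equal-amplitude Gaussians at distance 2c apart, the two maxima collapse into one at the midpoint once the width grows to roughly sigma ≥ c, which is why the heavily diffused case shows a single peak.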
Eventually, the local maxima merge into a single maximum at the centroid of the individual inputs. If we keep track of the local maxima as diffusion progresses (by connecting the output of the enhancement layer to another layer which stores activity in short-term memory), then the two initial inputs will become connected by a line.

In Figure 1 we also illustrate the grouping of five features in two clusters, a configuration possessing two spatial scales. After little diffusion the local maxima are located where the initial inputs were. Further diffusion causes each cluster to form a single local maximum at the cluster centroid. Eventually, both clusters merge into a single hump of activity with one maximum at the centroid of the five initial inputs. Thus, multiscale grouping over time emerges. The examples of Figure 1 use only diffusion without any feedback, yet they illustrate the importance of localizing the local maxima through a kind of contrast enhancement on another layer. The local maxima of activity serve as "place tokens" representing grouped features at a particular scale. The feedback pathway re-activates the diffusion layer, thereby allowing the grouping process to proceed to still larger scales, even across featureless areas of imagery.

The dynamical evolution of activity in the NADEL can be modeled using a modified diffusion equation [Seibert & Waxman, 1989]. However, in our simulations of the NADEL we don't actually solve this differential equation directly. Instead, each iteration of the NADEL consists of spreading activity using gaussian convolution, allowing for passive decay, then sampling the diffusion layer, masking out areas whose activity surfaces do not have positive Gaussian curvature and negative mean curvature, detecting one local maximum in each of these convex areas, and feeding this back to the diffusion layer with a shunted on-center/off-surround excitation at the local maxima.
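The iteration just described can be sketched as follows. This is a heavily simplified stand-in, not the authors' implementation: the curvature masking is omitted, a plain neighborhood test replaces the comparator network, and the shunted on-center/off-surround feedback is reduced to a single assumed gain at the detected maxima.

```python
import numpy as np

def blur(img, sigma=3.0):
    """Separable Gaussian convolution, standing in for the diffusion step."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    img = np.apply_along_axis(np.convolve, 0, img, k, 'same')
    return np.apply_along_axis(np.convolve, 1, img, k, 'same')

def nadel_step(act, sigma=3.0, decay=0.01, gain=1.0):
    """One simplified NADEL iteration: Gaussian spreading with 1% passive
    decay, local-maximum detection, and feedback excitation at the maxima."""
    act = (1.0 - decay) * blur(act, sigma)
    p = np.pad(act, 1, constant_values=-np.inf)
    c = p[1:-1, 1:-1]
    peaks = ((c >= p[:-2, 1:-1]) & (c >= p[2:, 1:-1]) &
             (c >= p[1:-1, :-2]) & (c >= p[1:-1, 2:]) & (c > 0))
    return act + gain * peaks * act, peaks

# Two point stimuli; their maxima drift together as iterations proceed.
act = np.zeros((64, 64))
act[32, 26] = act[32, 38] = 1.0
for _ in range(30):
    act, peaks = nadel_step(act)
```

Tracking the `peaks` map across iterations would trace out the place-token trajectories that the text uses to represent groupings.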
In the biological system, diffusion can be accomplished via a recurrent network of cells with off-center/on-surround lateral connectivity, or more directly using electrotonic coupling across gap junctions as in the horizontal cell layer of the retina [Dowling, 1987]. Curvature masking of the activity surface can be accomplished using oriented off-center/on-surround receptive fields that modulate the connections between the two primary layers of the NADEL.

SPATIO-TEMPORAL GROUPING

We give several examples of grouping phenomena in early vision, utilizing the NADEL. In all cases its parameters correspond to gaussian spreading with σ = 3 and passive decay of 1% per iteration, and on-center/off-surround feedback with σ+ = π/2 and σ− = 1.

Grouping of Two Points: The simple case of two instantaneous point stimuli input simultaneously to the NADEL is summarized in Figure 2. We plot the time (N network iterations) it takes to merge the two inputs, as a function of their initial separation (S pixels). For S ≤ 6 the points merge in one iteration; for S > 24 activity equilibrates and shifting of local maxima never begins.

Grouping on Multiple Scales: Figure 3 illustrates the hierarchy of groupings generated by a square outline (31 pixels on a side) with gaps (9 pixels wide). Corner and line-end features are first enhanced using complementary center-surround receptive fields (modeled as a rectified response to a difference-of-gaussians), and located at the local maxima of activity. These features are shown superimposed on the shape in 3a; they serve as input to the NADEL. Figure 3b shows the loci of local maxima determined up to the second stable grouping, superimposed over the shape. Boundary completion fills the gaps in the square. In Figure 3c we show the loci of local maxima on the image plane, after the final grouping has occurred (N = 100 iterations).
The trajectory of local maxima through space-time (x, y, t) is shown in Figure 3d after the fourth grouping. It reveals a hierarchical organization similar to the "scale-space diagrams" of Witkin [1983]. It can be seen from Figure 3d that successive groupings form stable entities, in that the place tokens remain stationary for several iterations of the NADEL. It isn't until activity has diffused farther out to the next representative scale that these local maxima start moving once again, and eventually merge. This relates stable perceptual groupings to place tokens (i.e., local maxima of activity) that are not in motion on the diffusion layer. The motion of place tokens can be measured in the same fashion as feature point motion across the visual field. Real-time receptive fields for measuring the motion of image edge and point features have recently been developed by Waxman et al. [1988].

Grouping of Time-Varying Inputs: The simplest example in this case corresponds to the grouping of two lights that are flashed at different locations at different times. When the time interval between flashes (Stimulus Onset Asynchrony, SOA) is set appropriately, one perceives a smooth motion or "path impletion" between the stimuli. This percept of "long-range apparent motion" is the cornerstone of the Gestalt Psychology movement, and has remained unexplained for one hundred years now [Kolers, 1972]. We have applied the NADEL to a variety of classical problems in apparent motion, including the "split motion" percept and the multi-point Ternus configuration [Waxman et al., 1989]. Here we consider only the case of motion between two stimuli, where we interpret the locus of local maxima as the impleted path in apparent motion. However, the direction of perceived motion is not determined by the grouping process itself; only the path. We make the additional assumption that grouping generates a motion percept only if the second stimulus begins to shift immediately upon input to the NADEL.
We suggest that the motion percept occurs only after path impletion is complete. That is, while grouping is active, its outputs are suppressed from our perception (a form of "transient-on-sustained inhibition" analogous to saccadic suppression). By varying the separation between the two stimuli, and the time (SOA) between their inputs, we can plot regimes for which the NADEL predicts apparent motion. This is shown in Figure 4, which compares favorably with the psychophysical results summarized in Figure 3.2 of [Kolers, 1972]. We find regimes in which complete paths are formed ("smooth motion"), partial paths are formed ("jumpy motion"), and no immediate shifting occurs ("no motion"). The maximum allowable SOA between stimuli (upper curves) is determined by the passive decay rate. Increasing this decay from 1% to 3% will decrease the maximum SOA by a factor of five. The minimum allowable SOA (lower curves) increases with increasing separation, since it takes longer for activity from the first stimulus to influence a more distant second stimulus. The linearity of the lower boundary has been interpreted by [Waxman et al., 1989] as suggestive of Korte's "third law" [Kolers, 1972], when taken in combination with a logarithmic transformation of the visual field [Schwartz, 1980].

Attentional Cues and Invariant Representations: Place tokens which emerge as stable groupings over time can also provide attentional cues to a vision system. They would typically drive saccadic eye motions during scene inspection, with the relative activities of these maxima and their order of emergence determining the sequence of rapid eye motions. Such eye motions are known to play a key role in human visual perception [Yarbus, 1967]; they are influenced by both bottom-up perceptual cues as well as top-down expectations.
The neuromorphic vision system developed by Seibert & Waxman [1989], shown in Figure 5, utilizes the NADEL to drive "eye motions", and thereby achieve translational invariance in 2-D object learning and recognition. This is followed by a log-polar transform (which emulates the geniculo-cortical connections [Schwartz, 1980]) and another NADEL to achieve rotation and scale invariance as well. Further coding of the transformed feature points by overlapping receptive fields provides invariance to small deformation. Pattern learning and recognition is then achieved using an Adaptive Resonance Theory (ART-2) network [Carpenter & Grossberg, 1987].

REFERENCES

G. Carpenter & S. Grossberg (1987). ART-2: Self-organization of stable category recognition codes for analog input patterns. Applied Optics 26, pp. 4919-4930.

J.E. Dowling (1987). The Retina: An Approachable Part of the Brain. Cambridge, MA: Harvard University Press.

S. Grossberg (1973). Contour enhancement, short term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics 52, pp. 217-257.

E.W. Kent, M.O. Shneier & R. Lumia (1985). PIPE: Pipelined Image Processing Engine. Journal of Parallel and Distributed Computing 2, pp. 50-78.

P.A. Kolers (1972). Aspects of Motion Perception. New York: Pergamon Press.

E.L. Schwartz (1980). Computational anatomy and functional architecture of striate cortex: A spatial mapping approach to perceptual coding. Vision Research 20, pp. 645-669.

M. Seibert & A.M. Waxman (1989). Spreading activation layers, visual saccades and invariant representations for neural pattern recognition systems. Neural Networks 2, pp. 9-27.

A.M. Waxman, J. Wu & F. Bergholm (1988). Convected activation profiles and the measurement of visual motion. Proc. 1988 IEEE Conference on Computer Vision and Pattern Recognition, Ann Arbor, MI, pp. 717-723.

A.M. Waxman, J. Wu & M. Seibert (1989).
Computing visual motion in the short and the long: From receptive fields to neural networks. Proc. IEEE 1989 Workshop on Visual Motion, Irvine, CA.

A.P. Witkin (1983). Scale space filtering. Proc. of the International Joint Conference on Artificial Intelligence, Karlsruhe, pp. 1019-1021.

A.L. Yarbus (1967). Eye Movements and Vision. New York: Plenum Press.

Figure 1 - (left) The NADEL takes featural input and diffuses it over a 2-D map as a function of time. Local maxima of activity are detected by the upper layer and fed back to the diffusion layer using an on-center/off-surround shunting anatomy. (right, top) Features spread their activities, which superpose to generate an attractive force, causing distant features to group. (right, bottom) Grouping progresses in time over multiple scales, with small clusters emerging before extended clusters. Clusters are represented by their local maxima of activity, which serve as place tokens.

Figure 2 - The time (network iterations N) to merge two points input simultaneously to the NADEL, as a function of their initial separation (S pixels).

Figure 3 - Perceptual grouping of a square outline with gaps.

Figure 4 - Apparent motion between two flashed lights: Stimulus Onset Asynchrony SOA (network iterations N) vs. Separation S (pixels); regimes of "smooth motion," "jumpy motion," and "no motion" are indicated.
Solid curves indicate boundaries between which introduction of the second light yields immediate shifting of local maxima; dashed curves (above the solid curves) indicate when the final merge occurs, yielding the impleted path.

Figure 5 - The neuromorphic vision system for invariant learning and recognition of 2-D objects utilizes three NADEL networks.
OPTIMIZATION BY MEAN FIELD ANNEALING

Griff Bilbro, ECE Dept., NCSU, Raleigh, NC 27695
Reinhold Mann, Eng. Physics and Math. Div., Oak Ridge Natl. Lab., Oak Ridge, TN 37831
Thomas K. Miller, ECE Dept., NCSU, Raleigh, NC 27695
Wesley E. Snyder, ECE Dept., NCSU, Raleigh, NC 27695
David E. Van den Bout, ECE Dept., NCSU, Raleigh, NC 27695
Mark White, ECE Dept., NCSU, Raleigh, NC 27695

ABSTRACT

Nearly optimal solutions to many combinatorial problems can be found using stochastic simulated annealing. This paper extends the concept of simulated annealing from its original formulation as a Markov process to a new formulation based on mean field theory. Mean field annealing essentially replaces the discrete degrees of freedom in simulated annealing with their average values as computed by the mean field approximation. The net result is that equilibrium at a given temperature is achieved 1-2 orders of magnitude faster than with simulated annealing. A general framework for the mean field annealing algorithm is derived, and its relationship to Hopfield networks is shown. The behavior of MFA is examined both analytically and experimentally for a generic combinatorial optimization problem: graph bipartitioning. This analysis indicates the presence of critical temperatures which could be important in improving the performance of neural networks.

STOCHASTIC VERSUS MEAN FIELD

In combinatorial optimization problems, an objective function or Hamiltonian, H(s), is presented which depends on a vector of interacting spins, s = {s_1, ..., s_N}, in some complex nonlinear way. Stochastic simulated annealing (SSA) (S. Kirkpatrick, C. Gelatt, and M. Vecchi (1983)) finds a global minimum of H by combining gradient descent with a random process. This combination allows, under certain conditions, choices of s which actually increase H, thus providing SSA with a mechanism for escaping from local minima.
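The uphill-move mechanism just described is commonly implemented with the Metropolis accept/reject rule; the paper does not spell out its exact rule in this passage, so the following sketch, with its helper names and toy Hamiltonian, is an assumption for illustration.

```python
import math
import random

def ssa_sweep(s, H, T, rng):
    """One SSA sweep over 0/1 spins: propose flipping each spin in turn and
    accept the flip with probability min(1, exp(-dH/T)), so moves that
    increase H remain possible while T is large."""
    for i in range(len(s)):
        old = H(s)
        s[i] ^= 1                    # propose flipping spin i
        dH = H(s) - old
        if dH > 0 and rng.random() >= math.exp(-dH / T):
            s[i] ^= 1                # reject the uphill move: flip back
    return s

# Toy Hamiltonian: cost 1 when the two spins disagree.  At low T the sweep
# settles the pair into agreement.
H = lambda s: 1 if s[0] != s[1] else 0
s = ssa_sweep([0, 1], H, T=0.01, rng=random.Random(0))
```

As T shrinks, exp(-dH/T) vanishes for any dH > 0, so the sweep degenerates into pure greedy descent; the slow temperature schedule is what lets the chain escape local minima before that happens.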
The frequency and severity of these uphill moves is reduced by slowly decreasing a parameter T (often referred to as the temperature) such that the system settles into a global optimum. Two conceptual operations are involved in simulated annealing: a thermostatic operation which schedules decreases in the temperature, and a relaxation operation which iteratively finds the equilibrium solution at the new temperature, using the final state of the system at the previous temperature as a starting point. In SSA, relaxation occurs by randomly altering components of s with a probability determined by both T and the change in H caused by each such alteration. This corresponds to probabilistic transitions in a Markov chain.

In mean field annealing (MFA), some aspects of the optimization problem are replaced with their means or averages from the underlying Markov chain (e.g. s is replaced with its average, ⟨s⟩). As the temperature is decreased, the MFA algorithm updates these averages based on their values at the previous temperature. Because computation using the means attains equilibrium faster than using the corresponding Markov chain, MFA relaxes to a solution at each temperature much faster than does SSA, which leads to an overall decrease in computational effort.

In this paper, we present the MFA formulation in the context of the familiar Ising Hamiltonian and discuss its relationship to Hopfield neural networks. Then the application of MFA to the problem of graph bipartitioning is discussed, where we have analytically and experimentally investigated the effect of temperature on the behavior of MFA and observed speedups of 50:1 over SSA.

MFA AND HOPFIELD NETWORKS

Optimization theory, like physics, often concerns itself with systems possessing a large number of interacting degrees of freedom. Physicists often simplify their problems by using the mean field approximation: a simple analytic approximation of the behavior of systems of particles or spins in thermal equilibrium. In a corresponding manner, arbitrary functions can be optimized by using an analytic version of stochastic simulated annealing based on a technique analogous to the mean field approximation. The derivation of MFA presented here uses the naive mean field (D. J. Thouless, P. W. Anderson, and R. G. Palmer (1977)) and starts with a simple Ising Hamiltonian of N spins coupled by a product interaction:

    H(s) = Σ_i h_i s_i + Σ_i Σ_{j≠i} V_ij s_i s_j ,    where V_ij = V_ji (symmetry) and the s_i ∈ {0, 1} are integer spins.

Factoring H(s) shows the interaction between a spin s_i and the rest of the system:

    H(s) = s_i ( h_i + 2 Σ_{j≠i} V_ij s_j ) + Σ_{k≠i} h_k s_k + Σ_{k≠i} Σ_{j≠k,i} V_kj s_k s_j .    (1)

The mean or effective field affecting s_i is the average of its coefficient in (1):

    Φ_i = ⟨ h_i + 2 Σ_{j≠i} V_ij s_j ⟩ = h_i + 2 Σ_{j≠i} V_ij ⟨s_j⟩ = H|_{⟨s_i⟩=1} − H|_{⟨s_i⟩=0} .    (2)

The last part of (2) shows that, for the Ising case, the mean field can be simply calculated from the difference in the Hamiltonian caused by changing ⟨s_i⟩ from zero to one while holding the other spin averages constant. By taking the Boltzmann-weighted average of the state values, the spin average is found to be

    ⟨s_i⟩ = {1 + exp(Φ_i / T)}^{-1} .    (3)

Equilibrium is established at a given temperature when equations (2) and (3) hold for each spin.

    1. Initialize spin averages and add noise: ⟨s_i⟩ = 1/2 + δ, ∀i.
    2. Perform this relaxation step until a fixed-point is found:
       a. Select a spin average ⟨s_i⟩ at random from ⟨s⟩.
       b. Compute the mean field Φ_i = h_i + 2 Σ_{j≠i} V_ij ⟨s_j⟩.
       c. Compute the new spin average ⟨s_i⟩ = {1 + exp(Φ_i / T)}^{-1}.
    3. Decrease T and repeat step 2 until freezing occurs.

    Figure 1. The Mean Field Annealing Algorithm

The MFA algorithm (Figure 1) begins at a high temperature where this fixed-point is easy to determine.
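A fixed-temperature version of the relaxation step (Figure 1, step 2) can be sketched as follows; the sweep cap, random seed, and the illustrative two-spin example are assumptions, not values from the paper.

```python
import numpy as np

def mfa_relax(h, V, T, sweeps=200):
    """Mean-field relaxation at fixed temperature T for the Ising Hamiltonian
    H = sum_i h_i s_i + sum_i sum_{j != i} V_ij s_i s_j.  Returns the spin
    averages <s> after sweeping updates of equations (2) and (3)."""
    rng = np.random.default_rng(0)
    n = len(h)
    s = 0.5 + 0.01 * rng.standard_normal(n)        # step 1: 1/2 plus noise
    for _ in range(sweeps):
        for i in rng.permutation(n):
            phi = h[i] + 2.0 * (V[i] @ s - V[i, i] * s[i])   # eq. (2)
            s[i] = 1.0 / (1.0 + np.exp(phi / T))             # eq. (3)
    return s

# Two spins with a ferromagnetic coupling (V_01 < 0 rewards both spins on)
# against a small bias h pushing them off.  At low T both averages saturate
# near 1, the configuration that minimizes H.
h = np.array([0.5, 0.5])
V = np.array([[0.0, -1.0], [-1.0, 0.0]])
s_cold = mfa_relax(h, V, T=0.05)
```

Because each update uses the current averages of all other spins, equilibrium at a temperature is reached in a handful of deterministic sweeps rather than the long stochastic runs SSA needs.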
The fixed-point is tracked as T is lowered by iterating a relaxation step which uses the spin averages to calculate a new mean field that is then used to update the spin averages. As the temperature is lowered, the optimum solution is found as the limit of this sequence of fixed-points.

The relationship of Hopfield neural networks to MFA becomes apparent if the relaxation step in Figure 1 is recast in a parallel form in which the entire mean field vector partially moves towards its new state, and then all the spin averages are updated using \Phi^{new}. As \gamma \to 0, these difference equations become non-linear differential equations,

d\Phi_i / dt = h_i + 2 \sum_{j \ne i} V_{ij} \langle s_j \rangle - \Phi_i, \quad \forall i,

which are equivalent to the equations of motion for the Hopfield network (J. J. Hopfield and D. W. Tank (1985)), provided we make C_i = R_i = 1 and use a sigmoidal transfer function

f(u_i) = 1 / (1 + \exp(u_i / T)).

Thus, the evolution of a solution in a Hopfield network is a special case of the relaxation toward an equilibrium state effected by the MFA algorithm at a fixed temperature.

THE GRAPH BIPARTITIONING PROBLEM

Formally, a graph consists of a set of N nodes such that nodes n_i and n_j are connected by an edge with weight V_{ij} (which could be zero). The graph bipartitioning problem involves equally distributing the graph nodes across two bins, b_0 and b_1, while minimizing the combined weight of the edges with endpoints in opposite bins. These two sub-objectives tend to frustrate one another in that the first goal is satisfied when the nodes are equally divided between the bins, but the second goal is met (trivially) by assigning all the nodes to a single bin.

MEAN FIELD FORMULATION

An optimal solution for the bipartitioning problem minimizes the Hamiltonian

H(s) = -\sum_i \sum_{j \ne i} V_{ij} s_i s_j + \tau \sum_i \sum_{j \ne i} s_i s_j.

In the first term, each edge attracts adjacent nodes into the same bin with a force proportional to its weight.
Counterbalancing this attraction is \tau, an amorphous repulsive force between all of the nodes which discourages them from clustering together. The average spin of a node n_i can be determined from its mean field:

\Phi_i = 2 \sum_{j \ne i} (\tau - V_{ij}) \langle s_j \rangle.

EXPERIMENTAL RESULTS

Table 1 compares the performance of the MFA algorithm of Figure 1 with SSA in terms of total optimization and computational effort for 100 trials on each of three example graphs. While the bipartitions found by SSA and MFA are nearly equivalent, MFA required as little as 2% of the number of iterations needed by SSA.

TABLE 1. Comparison of SSA and MFA on Graph Bipartitioning

                                          G1       G2       G3
Nodes/Edges                            83/115  100/200  100/400
Solution value (H_MFA / H_SSA)          0.762    1.078    1.030
Relaxation iterations (I_MFA / I_SSA)   0.187    0.063    0.019

The effect of the decrease in temperature upon the spin averages is depicted in Figure 2.

[Figure 2. The Effect of Decreasing Temperature on Spin Averages: \langle s_i \rangle plotted against \log(T).]

At high temperatures the graph bipartition is maximally disordered (i.e. \langle s_i \rangle \approx 1/2, \forall i), but as the system is cooled past a critical temperature, T_c, each node begins to move predominantly into one or the other of the two bins (as evidenced by the drift of the spin averages towards 1 or 0). The changes in the spin averages cause H to decrease rapidly in the vicinity of T_c. To analyze the effect of temperature on the spin averages, the behavior of a cluster C of spins is idealized with the assumptions:

1. The repulsive force which balances the bin contents is negligible within C (\tau = 0) compared to the attractive forces arising from the graph edges;
2. The attractive force exerted by each edge is replaced with an average attractive force \bar{V} = \sum_i \sum_j V_{ij} / E, where E is the number of non-zero weighted edges;
3. On average, each graph node is adjacent to e = 2E/N neighboring nodes;
4.
The movement of the nodes in a cluster can be uniformly described by some deviation, \sigma, such that \langle s \rangle = (1 + \sigma)/2.

Using this model, a cluster moves according to (4). The solution to (4) is a fixed point with \sigma = 0 when T is high. This fixed point becomes unstable and the spins diverge from 1/2 when the temperature is lowered past a critical point. Solving shows that T_c = \bar{V} e / 2, which agrees with our experiments and is within \pm 20\% of those observed in (C. Peterson and J. R. Anderson (1987)). The point at which the nodes freeze into their respective bins can be found using (4) and assuming a worst-case situation in which a node is attracted by a single edge (i.e. e = 1). In this case, the spin deviation will cross an arbitrary threshold, \sigma_t (usually set to \pm 0.9), when

T_f = \bar{V} \sigma_t / [\ln(1 + \sigma_t) - \ln(1 - \sigma_t)].

A cooling schedule is now needed which prescribes how many relaxation iterations, I_a, are required at each temperature to reach equilibrium as the system is annealed from T_c to T_f. Further analysis of (4) shows that I_a \propto |T_c / (T_c - T)|. Thus, more iterations are required to reach equilibrium around T_c than anywhere else, which agrees with observations made during our experiments. The effect of using fewer iterations at various temperatures was empirically studied using the following procedure:

1. Each spin average was initialized to 1/2 and a small amount of noise was added to break the symmetry of the problem.
2. An initial temperature T_i was imposed, and the mean field equations were iterated I times for each node.
3. After completing the iterations at T_i, the temperature was quenched to near zero and the mean field equations were again iterated I times to saturate each node at one or zero.

The results of applying this procedure to one of our example graphs with different values of T_i and I are shown in Figure 3.
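Putting the pieces above together, here is a hedged sketch (not the authors' code) that estimates T_c = \bar{V} e / 2 and T_f from a graph and then anneals between them; the mean field \Phi_i = 2 \sum_{j \ne i} (\tau - V_{ij}) \langle s_j \rangle is a sign convention reconstructed from the text, and the schedule constants and usage graph are illustrative assumptions:

```python
import numpy as np

def mfa_bipartition(V, tau, alpha=0.9, sweeps=20, sigma_t=0.9, rng=None):
    """Hedged sketch of MFA graph bipartitioning: estimate T_c and T_f
    from the edge statistics, then relax the spin averages while
    annealing from just above T_c down to the freezing temperature."""
    rng = np.random.default_rng(rng)
    N = V.shape[0]
    w = V[np.triu_indices(N, 1)]
    E = np.count_nonzero(w)                  # number of nonzero edges
    Vbar = w.sum() / E                       # average edge weight
    e = 2.0 * E / N                          # average degree
    Tc = Vbar * e / 2.0                      # critical temperature
    Tf = Vbar * sigma_t / (np.log(1 + sigma_t) - np.log(1 - sigma_t))
    W = tau - V                              # effective couplings (tau - V_ij)
    np.fill_diagonal(W, 0.0)
    s = 0.5 + 0.01 * (rng.random(N) - 0.5)   # spin averages plus noise
    T = 1.5 * Tc                             # start a little above T_c
    while T > Tf:
        for _ in range(sweeps * N):          # relaxation at this temperature
            i = rng.integers(N)
            phi = 2.0 * (W[i] @ s)           # bipartitioning mean field
            s[i] = 1.0 / (1.0 + np.exp(np.clip(phi / T, -60, 60)))
        T *= alpha
    return (s > 0.5).astype(int), Tc, Tf
```

On a toy graph made of two triangles joined by one weak edge, the anneal separates the triangles into opposite bins, and the returned T_c and T_f match the closed forms above.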
[Figure 3. The Effect of Initial Temperature and Iterations on the Solution: regions of poor and good solutions plotted against \log(T_i).]

Selecting an initial temperature near T_c and performing sufficient iterations of the mean field equations (I \ge 40 in this case) gives final bipartitions that are usually near-optimum, while performing an insufficient number of iterations (I = 5 or I = 20) leads to poor solutions. However, even a large number of iterations will not compensate if T_i is set so low that the initial convergence causes the graph to abruptly freeze into a local minimum. The highest quality solutions are found when T_i \approx T_c and a sufficient number of relaxations are performed, as shown in the traces for I = 40 and I = 90. This seems to perform as well as slow cooling and requires much less effort. Obviously, much of the structure of the optimal solution must be present after equilibrating at T_c. Due to the equivalence we have shown between Hopfield networks and MFA, this fact may be useful in tuning the gains in Hopfield networks to get better performance.

CONCLUSIONS

The concept of mean field annealing (MFA) has been introduced and compared to stochastic simulated annealing (SSA), which it closely resembles in both derivation and implementation. In the graph bipartitioning application, we saw the level of optimization achieved by MFA was comparable to that achieved by SSA, but 1-2 orders of magnitude fewer relaxation iterations were required. This speedup is achieved because the average values of the discrete degrees of freedom used by MFA relax to their equilibrium values much faster than the corresponding Markov chain employed in SSA. We have seen similar results when applying MFA to other problems including N-way graph partitioning (D. E. Van den Bout and T. K.
Miller III (1988)), restoration of range and luminance images (Griff Bilbro and Wesley Snyder (1988)), and image halftoning (T. K. Miller III and D. E. Van den Bout (1989)). As was shown, the MFA algorithm can be formulated as a parallel iterative procedure, so it should also perform well in parallel processing environments. This has been verified by successfully porting MFA to a ZIP array processor, a 64-node NCUBE hypercube computer, and a 10-processor Sequent Balance shared-memory multiprocessor with near-linear speedups in each case. In addition to the speed advantages of MFA, the fact that the system state is represented by continuous variables allows the use of simple analytic techniques to characterize the system dynamics. The dynamics of the MFA algorithm were examined for the problem of graph bipartitioning, revealing the existence of a critical temperature, T_c, at which optimization begins to occur. It was also experimentally determined that MFA found better solutions when annealing began near T_c rather than at some lower temperature. Due to the correspondence shown between MFA and Hopfield networks, the critical temperature may be of use in setting the neural gains so that better solutions are found.

Acknowledgements

This work was partially supported by the North Carolina State University Center for Communications and Signal Processing and Computer Systems Laboratory, and by the Office of Basic Energy Sciences, and the Office of Technology Support Programs, U.S. Department of Energy, under contract No. DE-AC05-840R21400 with Martin Marietta Energy Systems, Inc.

References

Griff Bilbro and Wesley Snyder (1988) Image restoration by mean field annealing. In Advances in Neural Information Processing Systems.

D. E. Van den Bout and T. K. Miller III (1988) Graph partitioning using annealed neural networks. Submitted to IEEE Trans. on Circuits and Systems.

J. J. Hopfield and D. W.
Tank (1985) Neural computation of decisions in optimization problems. Biological Cybernetics, 52, 141-152.

T. K. Miller III and D. E. Van den Bout (1989) Image halftoning by mean field annealing. Submitted to ICNN'89.

S. Kirkpatrick, C. Gelatt, and M. Vecchi (1983) Optimization by simulated annealing. Science, 220(4598), 671-680.

C. Peterson and J. R. Anderson (1987) Neural Networks and NP-complete Optimization Problems: a Performance Study on the Graph Bisection Problem. Technical Report MCC-EI-287-87, MCC.

D. J. Thouless, P. W. Anderson, and R. G. Palmer (1977) Solution of 'solvable model of a spin glass'. Phil. Mag., 35(3), 593-601.
AN APPLICATION OF THE PRINCIPLE OF MAXIMUM INFORMATION PRESERVATION TO LINEAR SYSTEMS

Ralph Linsker
IBM T. J. Watson Research Center, Yorktown Heights, NY 10598

ABSTRACT

This paper addresses the problem of determining the weights for a set of linear filters (model "cells") so as to maximize the ensemble-averaged information that the cells' output values jointly convey about their input values, given the statistical properties of the ensemble of input vectors. The quantity that is maximized is the Shannon information rate, or equivalently the average mutual information between input and output. Several models for the role of processing noise are analyzed, and the biological motivation for considering them is described. For simple models in which nearby input signal values (in space or time) are correlated, the cells resulting from this optimization process include center-surround cells and cells sensitive to temporal variations in input signal.

INTRODUCTION

I have previously proposed [Linsker, 1987, 1988] a principle of "maximum information preservation," also called the "infomax" principle, that may account for certain aspects of the organization of a layered perceptual network. The principle applies to a layer L of cells (which may be the input layer or an intermediate layer of the network) that provides input to a next layer M. The mapping of the input signal vector L onto an output signal vector M, f: L \to M, is characterized by a conditional probability density function ("pdf") p(M|L). The set S of allowed mappings f is specified. The input pdf p_L(L) is also given. (In the cases considered here, there is no feedback from M to L.) The infomax principle states that a mapping f should be chosen for which the Shannon information rate [Shannon, 1949]

R(f) \equiv \int dL\, p_L(L) \int dM\, p(M|L) \log[p(M|L) / p_M(M)]   (1)

is a maximum (over all f in the set S). Here p_M(M) \equiv \int dL\, p_L(L)\, p(M|L) is the pdf of the output signal vector M.
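For intuition, the integrals in Eqn. 1 reduce to sums over a discrete toy ensemble; this illustrative sketch (not from the paper) evaluates R for such a case:

```python
import numpy as np

def mutual_information(pL, pML):
    """Discrete analogue of Eqn. 1: R = sum_L sum_M p(L) p(M|L)
    log[p(M|L) / p(M)].  pL[i] is p(L_i); pML[i, j] is p(M_j | L_i)."""
    pM = pML.T @ pL                 # p(M) = sum_L p(L) p(M|L)
    R = 0.0
    for i, pl in enumerate(pL):
        for j, pml in enumerate(pML[i]):
            if pml > 0:
                R += pl * pml * np.log(pml / pM[j])
    return R
```

A noiseless binary channel carries ln 2 nats per symbol, while a channel whose output is independent of its input carries zero, matching the two extremes of the rate.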
R is identical to the average mutual information between L and M.

To understand better how the infomax principle may be applied to biological systems and complex synthetic networks, it is useful to solve the infomax optimization problem explicitly for simpler systems whose properties are nonetheless biologically motivated. This paper therefore deals with the practical computation of infomax solutions for cases in which the mappings f are constrained to be linear.

INFOMAX SOLUTIONS FOR A SET OF LINEAR FILTERS

We consider the case of linear model "neurons" with multivariate Gaussian input and additive Gaussian noise. There are N input (L) cells and N' output (M) cells. The input column vector L = (L_1, L_2, ..., L_N)^T is randomly selected from an N-dimensional Gaussian distribution having mean zero. That is,

p_L(L) \propto \exp[-\tfrac{1}{2} L^T (Q^L)^{-1} L],   (2)

where Q^L is the covariance matrix of the input activities, Q^L_{ij} = \int dL\, p_L(L) L_i L_j. (Superscript T denotes the matrix transpose.) To specify the set S of allowed mappings f: L \to M, we define a processing model that includes a description of (i) how noise enters during processing, (ii) the independent variables over which we are to maximize R, and (iii) any constraints on their values. Figure 1 shows several such models. We shall analyze the simplest, then explain the motivation for the more complex models and analyze them in turn.

Model A -- Additive noise of constant variance

In Model A of Fig. 1 the output signal value of the nth M cell is:

M_n = \sum_i C_{ni} L_i + \nu_n.   (3)

The noise components \nu_n are independently and identically distributed ("i.i.d.") random variables drawn from a Gaussian distribution having a mean of zero and variance B. Each mapping f: L \to M is characterized by the values of the \{C_{ni}\} and the noise parameter B. The elements of the covariance matrix of the output activities are (using Eqn. 3)

Q^M_{nm} = \sum_{i,j} C_{ni} Q^L_{ij} C_{mj} + B \delta_{nm},   (4)

where \delta_{nm} = 1 if n = m and 0 otherwise. Evaluating Eqn.
1 for this processing model gives the information rate:

R(f) = (1/2) \ln \mathrm{Det}\, W(f)   (5)

where W_{nm} = Q^M_{nm} / B. (R is the difference of two entropy terms. See [Shannon, 1949], p. 57, for the entropy of a Gaussian distribution.)

If the components C_{ni} of the C matrix are allowed to be arbitrarily large, then the information rate can be made arbitrarily large, and the effects of noise become arbitrarily small. One way to limit C is to impose a "resource constraint" on each M cell. An example of such a constraint is \sum_i C_{ni}^2 = 1 for all n. One can then attempt directly, using numerical methods, to maximize Eqn. 5 over all allowed C for given B. However, when some additional conditions (below) are satisfied, further analytical progress can be made.

Suppose the N L-cells are uniformly spaced along the line interval [0, 1] with periodic boundary conditions, so that cell N is next to cell 1. [The analysis can be extended to a two- (or higher-) dimensional array in a straightforward manner.] Suppose also that (for given N) the covariance Q^L_{ij} of the input values at cells i and j is a function Q^L(s_{ij}) only of the displacement s_{ij} from i to j. (We deal with the periodicity by defining s_{ab} = b - a - N\gamma_{ab} and choosing the integer \gamma_{ab} such that -N/2 \le s_{ab} < N/2.) Then Q^L is a Toeplitz matrix, and its eigenvalues \{\lambda_k\} are the components of the discrete Fourier transform ("F.T.") of Q^L(s):

\lambda_k = \sum_s Q^L(s) \exp(-2\pi i k s / N), \quad -N/2 \le k < N/2.   (6)

We now impose two more conditions: (1) N' = N. This simplifies the resulting expressions, but is otherwise inessential, as we shall discuss. (2) We constrain each M cell to have the same arrangement of C-values relative to the M cell's position. That is, C_{ni} is to be a function C(s_{ni}) only of the displacement s_{ni} from n to i. This constraint substantially reduces the computational demands.

[Figure 1. Four processing models (A)-(D): each diagram shows a single M cell (indexed by n) having output activity M_n. Inputs {L_i} may be common to many M cells. All noise contributions (dotted lines) are uncorrelated with one another and with {L_i}. GC = gain control (see text).]

We would not expect
it to hold in general in a biologically realistic model -- since different M cells should be allowed to develop different arrangements of weights -- although even then it could be used as an Ansatz to provide a lower bound on R. The section "Temporally-correlated input patterns" deals with a situation in which it is biologically plausible to impose this constraint.

Under these conditions, Q^M is also a Toeplitz matrix. Its eigenvalues are the components of the F.T. of Q^M(s_{nm}). For N' = N these eigenvalues are (B + \lambda_k z_k), where z_k = |c_k|^2 and c_k \equiv \sum_s C(s) \exp(-2\pi i k s / N) is the F.T. of C(s). [This expression for the eigenvalues is obtained by rewriting Eqn. 4 as Q^M(s_{nm}) = B \delta_{n-m,0} + \sum_{i,j} C(s_{ni}) Q^L(s_{ij}) C(s_{mj}), and taking the F.T. of both sides.] Therefore

R = (1/2) \sum_k \ln[1 + \lambda_k z_k / B].   (7)

We want to maximize R subject to \sum_s C(s)^2 = 1, which is equivalent to \sum_k z_k = N. Using the Lagrange multiplier method, we maximize A \equiv R + \mu(\sum_k z_k - N) over all nonnegative \{z_k\}. Solving \partial A / \partial z_k = 0 and requiring z_k \ge 0 for all k gives the solution:

z_k = \max[(-1/2\mu) - (B/\lambda_k), 0],   (8)

where (given B) \mu is chosen such that \sum_k z_k = N. Note that while the optimal \{z_k\} are uniquely determined, the phases of the \{c_k\} are completely arbitrary [except that since the \{C(s)\} are real, we must have c_k^* = c_{-k} for all k]. The \{C(s)\} values are therefore not uniquely determined. Fig. 2a shows two of the solutions for an example in which Q^L(s) = \exp[-(s/s_0)^2] with s_0 = 6, N = N' = 64, and B = 1. Both solutions have z_0, z_{\pm 1}, ..., z_{\pm 6} = 5.417, 5.409, 5.378, 5.306, 5.134, 4.689, 3.376, and all other z_k = 0. Setting all c_k phases to zero yields the solid curve; a particular random choice of phases yields the dotted curve.
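Equations 6-8 are directly computable; the following illustrative sketch (not the paper's code) evaluates the eigenvalues by an FFT and sets the water level by bisection, and it reproduces the z_k values quoted above for the Q^L(s) = exp[-(s/6)^2], N = 64, B = 1 example:

```python
import numpy as np

def infomax_model_A(QL_s, B, N):
    """Sketch of the Model A solution: eigenvalues lambda_k of the
    Toeplitz input covariance via the DFT of QL(s) (Eqn. 6), then the
    "water-filling" allocation z_k = max(-1/(2 mu) - B/lambda_k, 0) of
    Eqn. 8, with the level chosen by bisection so that sum_k z_k = N.
    QL_s[s] holds QL(s) for s = 0..N-1 with periodic wraparound."""
    lam = np.maximum(np.fft.fft(QL_s).real, 1e-12)   # Eqn. 6 (QL even -> real)
    floor = B / lam                                  # the "vessel" profile
    lo, hi = floor.min(), floor.min() + N            # bracket for the level
    for _ in range(200):                             # bisect on the water level
        level = 0.5 * (lo + hi)
        if np.maximum(level - floor, 0.0).sum() > N:
            hi = level
        else:
            lo = level
    z = np.maximum(level - floor, 0.0)
    R = 0.5 * np.sum(np.log1p(lam * z / B))          # Eqn. 7
    return z, R
```

The tiny positive clamp on the eigenvalues guards against roundoff in the far tail of the spectrum, where the true eigenvalues are below machine precision and contribute nothing to the allocation.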
We shall later see that imposing locality conditions on the \{C(s)\} (e.g., penalizing nonzero C(s) for large |s|) can remove the phase ambiguity.

Our solution (Eqn. 8) can be described in terms of a so-called "water-filling" analogy: if one plots B/\lambda_k versus k, then z_k is the depth of "water" at k when one "pours" into the "vessel" defined by the B/\lambda_k curve a total quantity of "water" that corresponds to \sum_k z_k = N and brings the "water level" to (-1/2\mu).

Let us contrast this problem with two other problems to which the "water-filling" analogy has been applied in the information-theory literature. In our notation, they are:

1. Given a transfer function \{C(s)\} and the noise variance B, how should a given total input signal power \sum_k \lambda_k be apportioned among the various wavenumbers k so as to maximize the information rate R [Gallager, 1968]? Our problem is complementary to this: we fix the input signal properties and seek an optimal transfer function subject to constraints.

2. Rate-distortion (R-D) calculation [Berger, 1971]: Given a distortion measure (that defines a "distance" between the actual input signal and an estimate of it that can be reconstructed from the channel's output), and the input power spectrum \{\lambda_k\}, what choice of \{z_k\} minimizes the average distortion for given information rate (or minimizes the required rate for given distortion)? In the R-D problem there is a process of reconstruction, and a given measure for assessing the "goodness" of reconstruction. In contrast, in our network there is no reconstruction of the input signal, and no criterion of the "goodness" of such a hypothetical reconstruction is provided.

Note also that infomax optimization is not the same as computing which channel (that is, which mapping f: L \to M) selected from an allowed set has the maximum information-theoretic capacity.
In that problem, one is free to encode the inputs before transmission so as to make optimal use of (i.e., "achieve the capacity of") the channel. In our case, there is no such pre-encoding; the input ensemble is prescribed (by the environment or by the output of an earlier processing stage) and we need to maximize the channel rate for that ensemble.

The simplifying condition that N = N' (above) is unnecessarily restrictive. Eqn. 7 can be easily generalized to the case in which N is a multiple of N' and the N' M cells are uniformly spaced on the unit interval. Moreover, in the limit that 1/N' is much smaller than the correlation length scale of Q^L, it can be shown that R is unchanged when we simultaneously increase N' and B by the same factor. (For example, two adjacent M cells each having noise variance 2B jointly convey the same information about L as one M cell having noise variance B.)

[Figure 2. Example infomax solutions C(s) for locally-correlated inputs: (a) Model A; region of nonnegligible C(s) extends over all s; phase ambiguity in c_k yields non-unique C(s) solutions, two of which are shown. See text for details. (b) Models C (solid curve) and D (dotted curve) with Gaussian g(s)^{-1} favoring short connections; shows center-surround receptive fields, more pronounced in Model D. (c) "Temporal receptive field" using Model D for temporally correlated scalar input to a single M cell; C(s) is the weight applied to the input signal that occurred s time steps ago. Spacing between ordinate marks is 0.1; \sum_s C(s)^2 = 1 in each case.]

For biological applications we are mainly interested in cases in which there are many L cells [so that C(s) can be treated as a function of a continuous variable] and many M cells (so that the effect of the noise process is described by the single parameter B/N). The analysis so far shows two limitations of Model A.
First, the constraint \sum_i C_{ni}^2 = 1 is quite arbitrary. (It certainly does not appear to be a biologically natural constraint to impose!) Second, for biological applications we are interested in predicting the favored values of \{C(s)\}, but the phase ambiguity prevents this. In the next section we show that a modified noise model leads naturally, without arbitrary constraints on \sum_i C_{ni}^2, to the same results derived above. We then turn to a model that favors local connections over long-range ones, and that resolves the phase ambiguity issue.

Model B -- Independent noise on each input line

In Model B of Fig. 1 each input L_i to the nth M cell is corrupted by i.i.d. Gaussian noise \nu_{ni} of mean zero and variance B. The output is

M_n = \sum_i C_{ni} (L_i + \nu_{ni}).   (9)

Since each \nu_{ni} is independent of all other noise terms (and of the inputs \{L_i\}), we find

Q^M_{nm} = \sum_{i,j} C_{ni} Q^L_{ij} C_{mj} + B \delta_{nm} \sum_i C_{ni}^2.   (10)

We may rewrite the last term as B \delta_{nm} (\sum_i C_{ni}^2)^{1/2} (\sum_j C_{mj}^2)^{1/2}. The information rate is then

R = (1/2) \ln \mathrm{Det}\, W.   (11)

Define C'_{ni} \equiv C_{ni} (\sum_k C_{nk}^2)^{-1/2}; then W_{nm} = \delta_{nm} + (\sum_{i,j} C'_{ni} Q^L_{ij} C'_{mj}) / B. Note that this is identical (except for the replacement C \to C') to the expression following Eqn. (5), in which Q^M was given by Eqn. (4). By definition, the \{C'_{ni}\} satisfy \sum_i C'^2_{ni} = 1 for all n. Therefore, the problem of maximizing R for this model (with no constraints on \sum_i C_{ni}^2) is identical to the problem we solved in the previous section.

Model C -- Favoring of local connections

Since the arborizations of biological cells tend to be spatially localized in many cases, we are led to consider constraints or cost terms that favor localization. There are various ways to implement this. Here we present a way of modifying the noise process so that the infomax principle itself favors localized solutions, without requiring additional terms unrelated to information transmission. Model C of Fig. 1 is the same as Model B, except that now the longer connections are "noisier" than the shorter ones. That is, the variance of \nu_{ni} is \langle \nu_{ni}^2 \rangle = B_0 g(s_{ni}), where g(s) increases with |s|. [Equivalently, one could attenuate the signal on the (i \to n) line by g(s_{ni})^{1/2} and have the same noise variance B_0 on all lines.]
[Equivalently, one could attenuate the signal on the (i ~ n) line by g(sll;) 1/2 and have the same noise variance Bo on all lines.] 192 Linsker This change causes the last term of Eqn. 10 to be replaced by Bo8I1m~g(SIl)qi . Under the conditions discussed earlier (Toeplitz QL and QM, and N = N), we derive (12) Recall that the {ck } are related to {C(s)} by a Fourier transform (see just before Eqn. 7). To cotppute which choice of IC(s)} maximizes R for a given problem, we used a gradient ascent algorithm several times, each time using a different random set of initial I C(s)} values. For the problems whose solutions are exhibited in Figs. 2b and 2c, multiple starting points usually yielded the same solution to within the error tolerance specified for the algorithm [apart from an arbitrary factor by which all of the C(s)'s can be multiplied without affecting R], and that solution had the largest R of any obtained for the given problem. That is, a limitation sometimes associated with gradient ascent algorithms -- namely, that they may yield multiple "solutions" that are local, but far from global, maxima -- did not appear to be a difficulty in these cases. Fig. 2b (solid curve) shows the infomax solution for an example having QL(S) = exp[ (S/sO)2] and g(s) = exp[(s/s.)2] with So = 4, s. = 6, N = N = 32, and Bo = 0.1. There is a central excitatory peak flanked by shallow inhibitory sidelobes (and weaker additional oscillations). (As noted, the negative of this solution, having a central inhibitory region and excitatory sidelobes, gives the same R.) As Bo is increased (a range from 0.001 to 20 was studied), the peak broadens, the sidelobes become shallower (relative to the peak), and the receptive fields of nearby M cells increasingly overlap. This behavior is an example of the "redundancy-diversity" tradeoff discussed in [Linsker, 1988]. Model D -- Bounded output variance Our previous models all produce output values Mn whose variance is not explicitly constrained. 
More biologically realistic cells have limited output variance. For example, a cell's firing rate must lie between zero and some maximum value. Thus, the output of a model nonlinear cell is often taken to be a sigmoid function of (\sum_i C_{ni} L_i). Within the context of linear cell models, we can capture the effect of a bounded output variance by using Model D of Fig. 1. We pass the intermediate output \sum_i C_{ni} (L_i + \nu_{ni}) through a gain control GC that normalizes the output variance to unity, then we add a final (i.i.d. Gaussian) noise term \nu'_n of variance B_1. That is,

M_n = [V(C)]^{-1/2} \sum_i C_{ni} (L_i + \nu_{ni}) + \nu'_n.   (13)

Without the last term, this model would be identical to Model C, since multiplying both the signal and the \nu_{ni} noise by the same factor GC would not affect R. The last term in effect fixes the number of output values that can be discriminated (i.e., not confounded with each other by the noise process \nu'_n) to be of order B_1^{-1/2}. The information rate for this model is derived to be (cf. Eqn. 12):

R = (1/2) \sum_k \ln[1 + \lambda_k z_k / (B_0 \sum_s g(s) C(s)^2 + B_1 V(C))],   (14)

where V(C) is the variance of the intermediate output before it is passed through GC:

V(C) = \sum_{i,j} C_{ni} Q^L_{ij} C_{nj} + B_0 \sum_i g(s_{ni}) C_{ni}^2.   (15)

Fig. 2b (dotted curve) shows the infomax solution (numerically obtained as above) for the same Q^L(s) and g(s) functions and parameter values as were used to generate the solid curve (for Model C), but with the new parameter B_1 = 0.4. The effect of the new B_1 noise process in this case is to deepen the inhibitory sidelobes (relative to the central peak). The more pronounced center-surround character of the resulting M cell dampens the response of the cell to differences (between different input patterns) in the spatially uniform component of the input pattern. This response property allows the L \to
M mapping to be infomax-optimal when the dynamic range of the cells' output response is constrained. (A competing effect can complicate the analysis: if B_1 is increased much further, for example to 50 in the case discussed, the sidelobes move to larger s and become shallower. This behavior resembles that discussed at the end of the previous section for the case of increasing B_0; in the present case it is the overall noise level that is being increased when B_1 increases and B_0 is kept constant.)

Temporally-correlated input patterns

Let us see how infomax can be used to extract regularities in input time series, as contrasted with the spatially-correlated input patterns discussed above. We consider a single M cell that, at each discrete time denoted by n, can process inputs \{L_i\} from earlier times i \le n (via delay lines, for example). We use the same Model D as before. There are two differences: First, we want g(s) = \infty for all s > 0 (input lines from future times are "infinitely noisy"). [A technical point: our use of periodic boundary conditions, while computationally convenient, means that the input value that will occur s time steps from now is the same value that occurred (N - s) steps ago. We deal with this by choosing g(s) to equal 1 at s = 0, to increase as s \to -N/2 (going into the past), and to increase further as s decreases from +N/2 to 1, corresponding to increasingly remote past times. The periodicity causes no unphysical effects, provided that we make g(s) increase rapidly enough (or make N large enough) so that C(s) is negligible for time intervals comparable to N.] Second, the fact that C_{ni} is a function only of s_{ni} is now a consequence of the constancy of connection weights C(s) of a single M cell with time, rather than merely a convenient Ansatz to facilitate the infomax computation for a set of many M cells (as it was in previous sections). The infomax solution is shown in Fig.
2c for an example having Q^L(s) = \exp[-(s/s_0)^2]; g(s) = \exp[-t(s)/s_1] with t(s) = s for s \le 0 and t(s) = s - N for s \ge 1; s_0 = 4, s_1 = 6, N = 32, B_0 = 0.1, and B_1 = 0.4. The result is that the "temporal receptive field" of the M cell is excitatory for recent times, and inhibitory for somewhat more remote times (with additional weaker oscillations). The cell's output can be viewed approximately as a linear combination of a smoothed input and a smoothed first time derivative of the input, just as the output of the center-surround cell of Fig. 2b can be viewed as a linear combination of a smoothed input and a smoothed second spatial derivative of the input. As in Fig. 2b, setting B_1 = 0 (not shown) lessens the relative inhibitory contribution.

SUMMARY

To gain insight into the operation of the principle of maximum information preservation, we have applied the principle to the problem of the optimal design of an array of linear filters under various conditions. The filter models that have been used are motivated by certain features that appear to be characteristic of biological networks. These features include the favoring of short connections and the constrained range of output signal values. When nearby input signals (in space or time) are correlated, the infomax-optimal solutions for the cases studied include (1) center-surround cells and (2) cells sensitive to temporal variations in input. The results of the mathematical analysis presented here apply also to arbitrary input covariance functions of the form Q^L(|i - j|). We have also presented more general expressions for the information rate, which can be used even when Q^L is not of this form. The cases discussed illustrate the operation of the infomax principle in some relatively simple but instructive situations. The analysis and results suggest how the principle may be applied to more biologically realistic networks and input ensembles.

References

T.
Berger, Rate Distortion Theory (Prentice-Hall, Englewood Cliffs, N.J., 1971), chap. 4.

R. G. Gallager, Information Theory and Reliable Communication (John Wiley and Sons, N.Y., 1968), p. 388.

R. Linsker, in: Neural Information Processing Systems (Denver, Nov. 1987), ed. D. Z. Anderson (Amer. Inst. of Physics, N.Y.), pp. 485-494.

R. Linsker, Computer 21 (3) 105-117 (March 1988).

C. E. Shannon and W. Weaver, The Mathematical Theory of Communication (Univ. of Illinois Press, Urbana, 1949).
SPREADING ACTIVATION OVER DISTRIBUTED MICROFEATURES *

James Hendler
Department of Computer Science
University of Maryland
College Park, MD 20742

ABSTRACT

One attempt at explaining human inferencing is that of spreading activation, particularly in the structured connectionist paradigm. This has resulted in the building of systems with semantically nameable nodes which perform inferencing by examining the patterns of activation spread. In this paper we demonstrate that simple structured network inferencing can be performed by passing activation over the weights learned by a distributed algorithm. Thus, an account is provided which explains a well-behaved relationship between structured and distributed connectionist approaches.

INTRODUCTION

A primary difference between the neural networks of 20 years ago and the current generation of connectionist models is the addition of mechanisms which permit the system to create an internal representation. These subsymbolic, semantically unnameable, features which are induced by connectionist learning algorithms have been discussed as being of import both in structured and distributed connectionist networks (cf. Feldman and Ballard, 1982; Rumelhart and McClelland, 1986). The fact that network learning algorithms can create these microfeatures is not, however, enough in itself to account for how cognition works. Most of what we call intelligent thought derives from being able to reason about the relations between objects, to hypothesize about events and things, etc. If we are to do cognitive modeling we must complete the story by explaining how networks can reason in the way that humans (or other intelligent beings) do. One attempt at explaining such reasoning is that of spreading activation in the structured connectionist and marker-passing (cf.
Charniak, ]983; Hendler, 1987) • The aut.hor is also affiliatf' ci wit.h thp Instit.ut.e for Acivanced C'omputpr Studies a.nd t.he Systems Research Center a.t the Universit.y of Ma.ryland. Funding for this work was provided in part by Office of Naval Resf'arch Grallt N00014-88--K - 0,560. 553 554 Hendler approaches. In t,hese syst,em::i semantically nanwable nodes permit an energy spread; and reasoning about tilt' world is accounted for by looking at, either stahh' configurations of tJw activation (the st.ruet.ured connectionist. approach) or at. t,he paths fOllnd by examining int.ersect,ions among t.he nodes (the markerpassing te('llnique). In this paper we will demonst.rate t.hat. simple Rt,ruct.lJrednetwork- like infE'rencing ('an be performed by pa.ssing act.ivat.ion over t,he wt'ightR lea.rned by a distribut.ed algorithm. Thus, an account. is provided which explains a well-behaved relation~hip bet.ween st.ruct,ured and dist.ributed connt'ct.ionist a pproa.ches. THE SPREADING ACTIVATION MODEL In this paper we will demonstrate that local connect.ionist.- Iike net.works can be built by spreading activat.ion ovel' t.he microfeaturcs learned b.\' a dist.ribut.ed network. To show t.his, we st,art wit.h a simple example which c1f'Jl1onst.rat.es the activation spreading mechanism used. ThE' part.icular net.work We will use ill this example is a 6- 3-8 three-layer net.work t rained by t.he hack-propagation learning algorithm. The training set used is shown in table 1. The \w·ights bet.ween the out.put node::; and hidden units which are learned by the Iletwork (after h'arning t.o tlw !)O% level for a typical rlln) are shown in figure 1. TABLE 1. Training Set. for Examph' 1. Inpll t Output. 
Pattern Pattern 000000 10000000 0000 J 1 01000000 001100 00100000 OOll}1 00010000 110000 00001000 lIOO]] 00000100 ] I I 100 00000010 111111 00000001 Spreading Activation over Distributed Microfeatures 555 Weights h1 h2 h3 n1 -4.98 4.40 -2.82 n2 -6.99 -4.99 -2.23 n3 -6.11 3.49 0.30 n4 -6.37 -4.68 2.53 n5 4.36 3.73 -5.09 n6 4.38 -5.97 -3.67 n7 0.89 1.07 3.32 nB 3.88 -6.95 1.88 Figure 1. Weights L~arnecl by Back Propagation To underst.and how t,he act,ivat.ioll spreads, let. us examine what. occurs when activation is started at node nl wit.h a weight of 1. This activation strength is divided by the outbranching of the nodf' and then mult.iplied by t.he weight of each link to t.he hidden units. Thus a.ctiva.tion flows from nl t.o hi with a strength of 1/ 3 1" Weighl(tl1 ,hl}. A similar computat.ion is made t.o each of t.he other hidden units. This act.ivat.ion now spreads to each of the ot.her Ollt.pUt. nodes in turn. Thus, n2 would gain act,ivat.ion of Acti','ation(hl) I Wet"ght(n2,hl)/ 8 -1Activat-ion(h2) 1" lYe'ight(n2)d)/ 8 + Ad1'vation(h3) 1" Weight(n2,h3)/ 8 or .80 from rli. Table 2 shows a graph of t.he act.ivat,ioll spread between the output units. The table, which is symmetric, can t.hus be read as showing t.he out,put at each of the other units when an activation st.rength of 1 is placed at t,he named node. Looking at. the table we see t,hat. t.il(' hight:'st, activat.ion OCClJrs among nodes which share t.he most feat,ures of tJlf' input (i .e. ~anlf' value and posit.ion) while t.he lowest is se('n among tho::;t:' pat.tt'fIls shariug t.he fewest feat.ures. 556 Hendler However, as well as having this property, t.able 2 can be seen as providing a matrix which specifies t.he weights bet.ween t.he out.put nodes if viewed as a st.ruct,ured net.work. That. is. ,,1 is conned·ed to rtf! by a strength of + .80, to nB by a st.rength of + 1.0:3, etc. Thus. 
by I\~ing this t.echniquE' distribut.ed r{'presentations can be turned into connectivity weight.s for st.ructured lIet.works. When non-· ort.hogonal wpights are lIsed. t he same act.ivat ion--spreading a.lgorithm produces a struct.ured network whi('h can be used for more complex inferenciug than can the dist.ributed net work alone. We demonst.rat.e t.his b)' a simple. and aga.in contrin·d. example. This example is mot.ivat.ed by Gary Cot.trell's st.ructured JI10dd for word sense disambiguation (CoUrt'll, Hl85). Cottrell, using weights clerived by hand, demonstrat.ed that. a struct.ured connect.ionis!. net.work could di:';t inguish both word- sense a.nd case- slot assignm('nts for amhiguous lexical items. Pre~enkd wit.h the sent{'nce "John threw the fight." f·he syst.em would ndivate a node for one meaning of "throw/' presented wit.h "John t.hrew t.he ball" it would come up wit.h another. The nodes of Cottrell's network included worels (John. Threw , etc .). word RenseR (Johnl, Propp\. etc.) :lnd cClI'P- sloLs (TAGT (lIgt'llt of tl1(-' throw). PAGT (ag('nt of t,he Propel). etc.). TABLE 2. Adivat.jon Spn>ad in 6--:3·8 Network. ,,1 II::! oj 114 ·//5 uB Tl7 u8 nl • .80 1.03 .17 .38 -1.57 - .:38 - 2.3 n2 .80 • 1.02 2.6 -1.57 .:31 -.79 .14 n.'J 1.03 1.02 • .97 - .0:3 - 2.03 - .0:3 --1.97 n4 .17 2.60 .97 • - 2.42 -.:38 -.Of! .52 '115 .38 -1.57 - .63 -2.42 • .04 -.;38 -.77 116 -1.57 .31 -2 .03 -.38 .64 • -.6 2.14 n7 - .38 -.79 - .03 -.09 -.;38 - .f) • .09 uB - 2.3 -.14 -l.f!7 .52 - .77 2.1 ·1 .09 • To duplicate Gary's network via training, we presented a 3-layer backprop net ... work wit.h a training set in which distribut.ed pat.t.erns, H'ry loosely corresponding to a "dictionary" of word (-,Ileodingsl were associa.ted wit,h a vector representing each of the individual noell's which would he represented in Cottrt-ll's system, but. wit.h no struct.ure. Thus, each elemPIlt. in t.he training set. is }- Whieh in any realisti r system would sornp. 
day be rpplaeed by aduaJ signal pro.;pssing outp"t.S Of Othpf rpprespnt.alinns of ai'liial worn pfonuneiat.ioll forms. Spreading Activation over Distributed Microfeatures 557 a 16 bit vretor (represent.ing a four word srnt.en('e. ea('h word as a. 4 bit. pattrrn), associated wit.h allot.he)· 16 bit. ve('tor repr<'spnt.ing t.he nodes Bobl Johnl propE'l t.hrfW IJght I ball J jlJgt pob) tagt. tobj bob John t.hrfw U1E' fight ball For t.his example, the system was t.rnineci on th(· E'1I('odings of t.he four srnt.encrs John t.hrew the ball John threw the fight. Bob t.1Hfw t be hall Bob t.hrfw Lllf 6ght wit.h the output set. high foJ' tho:-;e objects /II the second vedor whi('h \v('/,e a.ppropriatf·\y a.<.;so('iaj,rd . (IS shown in Tabh· :3. TABLE ;~. Tr<lining S(·t fa)' Example Z. Input. Pattern 011 0 0001 0101 OOlO 01 J 0 OOOJ 0101 J010 100/ OOOJ 010J 00/0 J 00 J 000 1 0101 10 10 Output. Patt.el'n JOO 1 1000 1J J 01 I 10 lOlOOlJI00101101 0/0/100011011110 01 JOOJ 1100011101 Upon complet.ion of t.he kaming, t.he aet.ivat.ioJl sprrading algorithm was used to derive a table of COlll1f'ctivity weights betw('ell the out.put. unit.s as shown in table 4. These weight.s W(')'e then t.ransferred into a local COllnect ionist simulat.or and a very simple act.ivation spn'adiJlg modrl was llsed to examine t.he result·s. \Vhpn Wt' run t.he simulator. u~ing tht' aeti\'atioIl spn·ading oY('r leaJ'lwd wpights. exactly t.he f('sult.s prodllced by Cott.reJl's n{'twork lIr{' seen . Thus: Act.ivation from tht, nodt's corresponding to john. tbroll', tllf-. alld fif/hl cam,e a posit.ive activation at. the node for ('Throw" and a negative a('t.ivat.ion at t.he node for "Propr!." while A('tivat.ion from joh" throlt' the ball sprt'ad positiniy to " Prope'" ,lIlel not. t.o 'I t IIrow . ,. Furt.her, ot·her effrct.s which cHP also predict.ed by C'ou.rdl's model lire seeJl: Act.iYat.ioJl at. TAGT and TOB.! spreads posit.i\'t' activation to TIr.,.o/l' and not t.o Propel. and Activation at PAG?' 
and POB) causes a spread t.o Propel but. not to Thro'W. 558 Hendler TABLE 4. Connectivity Weights for Example 2. *** -·0.12 0.01 -0.01 -0.01 0.01 0.01 0.01 - 0.01 -0.01 0.12 - 0 .12 ···0.0:3 -0.0:3 -0.01 0.00 - 0.12 *** - 0.01 0.01 0.01 - 0.01 -0.01 -0.01 0.01 0.01 -·0.12 0 .12 0.0:3 O.O~ 0.01 ·-0.01 0.01 -OJ)} *u -0.04 -0.04 0.04 0.04 0.05 -0.05 - 0.04 0.01 - 0.01 - 0.02 - 0.02 - 0.04 0.04 -0.01 0.01 ·-0.04 *** 0.04 -0.04 - 0.04 -0.05 0.05 0.04 ·_·0.01 0.01 0.02 0.02 0.04 - 0 .04 -0.0] 0.01 -0.04 0.04 u* -0.04 -0.0,5 -0,05 0.05 0,04 -0,01 0,01 0.02 0.02 0.04 - 0.04 0,01 -0,01 0,04 -0.04 -0,04 *** 0.05 0,05 -0.0.5 -·0.04 0.01 - 0.01 - 0.02 -0.02 - 0.04 0.04 O.OJ -0.01 0.04 -0.04 -0,05 O.O!) *** 0.05 -0.05 -0.05 0.01 ··0.00 -0.02 -0.02 - 0.04 0.05 0.01 ·-0.01 0.05 -0.05 -0,05 0.05 O.OS u* -·0.05 -O.O!) 0.01 -O.OJ -0'()2 -0,02 ··0.04 0.05 -0.01 0.01 -0,05 0.05 0.05 -0.05 -0.05 - 0.05 *** 0.05 -0.01 O.()] 0 .02 0.0:3 0.04 - 0.05 -0.01 0.01 -0.04 0.04 0.04 -0.04 -0,05 -0.05 0.05 u* -0.01 0.01 0.02 0.02 0.04 -0.05 0.12 -0.12 0 .01 -0.01 - 0.01 0.01 0.01 0.01 -0.01 -0.01 *** ·-O.l:.? ·-0.0:3 -0.0:3 ·-0.01 0.0] - 0.12 0.12 ·-0,01 0.01 0.01 -·0.01 -0.00 -0.01 0.01 O.OJ ··0.12 *** O.O:~ 0.0:3 0 .01 ··0.00 -0.0:3 0.03 -0.02 0.02 0.02 -·0.02 -·0.02 -0.02 0 .20 0.02 -0.02 0.02 0.02 ·-0.0:3 0.0:3 *** ··0.03 0.0:3 ··0.02 0,02 0.02 -0.02 ·-0.02 -0.02 * ** 0.02 - 0.02 0.0;3 0.02 0.0:3 0.0:) 0.20 ··0.0] 0.01 -·0.04 0.04 0.04 - 0.04 -·0.04 -0.04 0.02 *** . ~.O .• O.O<J D.O·' · 0.01 O.OJ 0.02 0.00 O.OJ 0.04 -0.04 ·-0.04 0.04 0.0,5 0,05 0.02 ·0.04 *** O.O!) O.OG O.OJ ·0.00 0 .02 We believe that results like this one ma.y argue t.hat. :-;tl'uetllrt'd IId·works are int.pgrally linked to dist.ributed networks ill t,hat. distribllt.<·d Ilf'twork le:ullillg t.echuiqut's may provide a fundamental ba~i:-; for ('xplaining the ('ogllit.in' df'VelOPment. of structured networks . III addition, Wt' see that :,-imple illf{'J'f'llt ial rt'a~on ing ('an be produ('ed using purdy cOIlJledioni~t. 
mockl:,. CONCLUDING REMARKS Wt' have at.t.empt.ed t.o ;;;hO\ .... that a model u;;;ing an activation spn'ading va.rinnt ean be used t.o take learned connect.ionist models aJld lwrform SOllle limit ed forms of inferencing upon t.helll. Furt.her, WP have argllt"d t.hat t.his tf'chniqlll' Illay pro"idr a comput.ational modd in which strllctured network;;; C;}l) Iw leal'llPd and t.hat st,ruct.ured net.work::; provide the infeI't'ncing (·apnbilities missing in purely dist.rihut.ed models. However. befort' w(' can t.ruly furtht'r thi~ claim. ~ignit1('ant work remain;;; to be done. We must ext.end alld ('xplort' such models. particularly t'xamining whether t.hese t·ypes of t.echniqups ('an be extt'llded t.o handle tlw complt'xitr t·hat can be found in real·-world prol)JpIll~ and sl'rioll~ ('ogllitin models . In part ieula.r we are beginlling an examinat.ion of two cru('ial isslIP::;: First., will t.he technique described above work for realist.ic problem·;? In particular, can t.hl' infen·ncing be designed t.o impact on t.1l(' ]'Pwgnition by t 11(> di.stribut.ed net.work? If so, one ('ould Sf'f', for l'xample, a speech recognit.ion program coupled to a system like Cottrell's natural languagt' sy:.;\.eJll, providing a handle for a t.ext. underst.anding system. Similarly such a t.e('hnique might. allow the int.('gration of top -down and bott.OIll'-IIP pro('('ssing for vision a.nd ot.ht'r such signal Pl'ocf'ssing tasks. Spreading Activation over Distributed Microfeatures 559 Secondly. we wish to see jf more complex spreading activation models could be hooked to this t.ype of model. Could networks such as t.hose proposed by Sha.st.ri (1985), Diedt'rich (HI8!»). and Polla.ck and 'Valt.z (1982) which provide complex inferencing bllt requirt' more ,structure than simply wt'ight.s bet.ween unit.s, be ab.stract.t'cI out. of the learned weight.s? Two partieular areas current.ly being pur~llf'd by t.JH' author. 
for t'xampk foclls on act.i\'e inhibit.ioll modeb for dt't.ermining whetlwr port.jon~ of the nt'twork C(ln lw ~lIpPJ'f'ssed t.o provide mort' complex inferencing and t.he learning of structures given temporally ordered information. References Charniak, E. Passing markt'rs: A t.heory of contt'xtual influence lJl language comprehension Cognitive S6ence, 7(3), 198:~. 171-190. Cottrell, G.W. A Connection-ist Approach to Word- Sense DiMlmb1'gllation, Doet.oral Disst'rta.t.ion, Comput.t'r Scienc(' Dt'pt.. l fllivl'rsit.y of Hochesler, 1985. Diederich, J. Parallelverarbeil/l'flg in 1IetzlL'erk-ba.~ie1'fen Systemen PhD Disst'rtation, Dt'pt. of Linguistics, llni\'ersity of Rielt~f('ld. J985. Feldman, J.A. and Ballard , D.H. (J 982). Connect.ionist models a.nd t.heir propt:'rt.ies. Cognit1've Science. 6. 205-·254Hendler, J.A. JlItegrating Maf'l.·er -pali8illg aod Problem Solving: A .~p,.e(/dirtg artivatiou approach to 1'ml)ro'l'cd choice ill ploflJlillg Lawrence Erlbaulll As~ociate~, N .J., Novt'mbt'J', Hlg7. Pollack, J.B. and 'Yalt.7., D.L Nat·u!'al Language Processing lIsiug spr£'ading act.ivat.ion and lateral inhibit.ion Proceedings of the FO'llrth intema.i1'o'l1al Conference of Ihe Cognitive S'o'eure S'ociety, 1982, .50-58. Rurnelhart .. D.E. and McClelland. J.L. ((,ds.) Parallel D';strib'llfed Computing Cambridge. ,Ma.: MJT Prel-is. Shastri, L. El'ideul1'al RWMuing ·in SefJ/(llItic /\.retll'{)rk.~: A formal theor:1J ((tid //8 parallel implementatioTl Doct.oral DiS8t'l't at.jon, Computt'r Scit'nce Depart ment, l Jnivefsit.y of Roc-hester, Sept .. J 985.
|
1988
|
86
|
175
|
Consonant Recognition by Modular Construction of Large Phonemic Time-Delay Neural Networks

Alex Waibel
Carnegie-Mellon University, Pittsburgh, PA 15213
ATR Interpreting Telephony Research Laboratories, Osaka, Japan

Abstract

In this paper¹ we show that neural networks for speech recognition can be constructed in a modular fashion by exploiting the hidden structure of previously trained phonetic subcategory networks. The performance of resulting larger phonetic nets was found to be as good as the performance of the subcomponent nets by themselves. This approach avoids the excessive learning times that would be necessary to train larger networks and allows for incremental learning. Large time-delay neural networks constructed incrementally by applying these modular training techniques achieved a recognition performance of 96.0% for all consonants.

1. Introduction

Recently we have demonstrated that connectionist architectures capable of capturing some critical aspects of the dynamic nature of speech can achieve superior recognition performance for difficult but small phonemic discrimination tasks such as discrimination of the voiced consonants B, D and G [Waibel 89, Waibel 88a]. Encouraged by these results we wanted to explore the question of how we might expand on these models to make them useful for the design of speech recognition systems. A problem that emerges as we attempt to apply neural network models to the full speech recognition problem is the problem of scaling. Simply extending neural networks to ever larger structures and retraining them as one monolithic net quickly exceeds the capabilities of the fastest and largest supercomputers. The search complexity of finding good solutions in a huge space of possible network configurations also soon assumes unmanageable proportions.
Moreover, having to decide on all possible classes for recognition ahead of time as well as collecting sufficient data to train such a large monolithic network is impractical to say the least. In an effort to extend our models from small recognition tasks to large scale speech recognition systems, we must therefore explore modularity and incremental learning as design strategies to break up a large learning task into smaller subtasks. Breaking up a large task into subtasks to be tackled by individual black boxes interconnected in ad hoc arrangements, on the other hand, would mean abandoning one of the most attractive aspects of connectionism: the ability to perform complex constraint satisfaction in a massively parallel and interconnected fashion, in view of an overall optimal performance goal. In this paper we demonstrate, based on a set of experiments aimed at phoneme recognition, that it is indeed possible to construct large neural networks incrementally by exploiting the hidden structure of smaller pretrained subcomponent networks.

¹ An extended version of this paper will also appear in the Proceedings of the 1989 International Conference on Acoustics, Speech and Signal Processing. Copyright: IEEE. Reprinted with permission.

2. Small Phonemic Classes by Time-Delay Neural Networks

In our previous work, we have proposed a Time-Delay Neural Network architecture (as shown on the left of Fig. 1 for B, D, G) as an approach to phoneme discrimination that achieves very high recognition scores [Waibel 89, Waibel 88a]. Its multilayer architecture, its shift-invariance and the time delayed connections of its units all contributed to its performance by allowing the net to develop complex, non-linear decision surfaces and insensitivity to misalignments, and by incorporating contextual information into decision making (see [Waibel 89, Waibel 88a] for detailed analysis and discussion).
It is trained by the back-propagation procedure [Rumelhart 86] using shared weights for different time shifted positions of the net [Waibel 89, Waibel 88a]. In spirit it has similarities to other models recently proposed [Watrous 88, Tank 87]. This network, however, had only been trained for the voiced stops B, D, G, and we began our extensions by training similar networks for the other phonemic classes in our database.

Figure 1. The TDNN architecture: BDG-net (left), BDGPTK-net (right)

All phoneme tokens in our experiments were extracted using phonetic handlabels from a large vocabulary database of 5240 common Japanese words. Each word in the database was spoken in isolation by one male native Japanese speaker. All utterances were recorded in a sound proof booth and digitized at a 12 kHz sampling rate. The database was then split into a training set and a testing set of 2620 utterances each. A 150 msec range around a phoneme boundary was excised for each phoneme token and 16 mel scale filterbank coefficients computed every 10 msec [Waibel 89, Waibel 88a]. The preprocessed training and testing data was then used to train or to evaluate our TDNNs' performance for various phoneme classes. For each class, TDNNs with an architecture similar to the BDG-net in Fig. 1 were trained. A total of seven nets aimed at the major coarse phonetic classes in Japanese were trained, including voiced stops B, D, G; voiceless stops P, T, K; the nasals M, N and syllabic nasals; fricatives S, Sh, H and Z; affricates Ch, Ts; liquids and glides R, W, Y; and finally the set of vowels A, I, U, E and O.
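The weight sharing across time-shifted positions that gives the TDNN its shift-invariance amounts to a 1-D convolution over the frame sequence. The following is a minimal NumPy sketch of that idea only, with illustrative sizes (16 coefficients per frame, 3-frame delays, 8 hidden units) rather than the actual network configuration:

```python
import numpy as np

def time_delay_layer(x, W, b):
    """Apply one TDNN-style layer: x is (frames, coeffs); W is (units, delay, coeffs).

    The same weights W are applied at every time-shifted window of the input,
    so the layer is shift-invariant: a 1-D convolution over time, followed by
    a sigmoid nonlinearity.
    """
    frames, coeffs = x.shape
    units, delay, _ = W.shape
    out = np.empty((frames - delay + 1, units))
    for t in range(frames - delay + 1):
        window = x[t:t + delay]   # (delay, coeffs) time window
        z = np.tensordot(W, window, axes=([1, 2], [0, 1])) + b
        out[t] = 1 / (1 + np.exp(-z))
    return out

# Illustrative input: 15 frames of 16 mel-scale coefficients.
rng = np.random.default_rng(0)
x = rng.normal(size=(15, 16))
W = rng.normal(scale=0.1, size=(8, 3, 16))
b = np.zeros(8)
h = time_delay_layer(x, W, b)
print(h.shape)   # (13, 8): one activation vector per time-shifted position
```

Because the weights are shared, shifting the input by one frame simply shifts the output by one frame, which is the misalignment-insensitivity property described above.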
Each of these nets was given between two and five phonemes to distinguish and the pertinent input data was presented for learning. Note that each net was trained only within each respective coarse class and has no notion of phonemes from other classes yet. Evaluation of each net on test data within each of these subcategories revealed that an average rate of 98.5% can be achieved (see [Waibel 88b] for a more detailed tabulation of results).

3. Scaling TDNNs to Larger Phonemic Classes

We have seen that TDNNs achieve superior recognition performance on difficult but small recognition tasks. To train these networks, substantial computational resources were needed. This raises the question of how our networks could be extended to encompass all phonemes or handle speech recognition in general. To shed light on this question of scaling, we consider first the problem of extending our networks from the task of voiced stop consonant recognition (hence the BDG-task) to the task of distinguishing among all stop consonants (the BDGPTK-task). For a network aimed at the discrimination of the voiced stops (a BDG-net), approximately 6000 connections had to be trained over about 800 training tokens. An identical net (also with approximately 6000 connections²) can achieve discrimination among the voiceless stops ("P", "T" and "K"). To extend our networks to the recognition of all stops, i.e., the voiced and the unvoiced stops (B, D, G, P, T, K), a larger net is required. We have trained such a network for experimental purposes. To allow for the necessary number of features to develop we have given this net 20 units in the first hidden layer, 6 units in hidden layer 2 and 6 output units. On the right of Fig. 1 we show this net in actual operation with a "G" presented at its input.
Eventually a high performance network was obtained that achieves 98.3% correct recognition over a 1613-token BDGPTK-test database, but it took inordinate amounts of learning to arrive at the trained net (18 days on a 4 processor Alliant!). Although going from voiced stops to all stops is only a modest increase in task size, about 18,000 connections had to be trained. To make matters worse, not only the number of connections should be increased with task size, but in general the amount of training data required for good generalization of a larger net has to be increased as well. Naturally, there are practical limits to the size of a training database, and more training data translates into even more learning time. Learning is further complicated by the increased complexity of the higher dimensional weight space in large nets as well as the limited precision of our simulators. Despite progress towards faster learning algorithms [Haffner 88, Fahlman 88], it is clear that we cannot hope for one single monolithic network to be trained within reasonable time as we increase size to handle larger and larger tasks. Moreover, requiring that all classes be considered and samples of each class be presented during training is undesirable for practical reasons as we contemplate the design of large neural systems. Alternative ways to modularly construct and incrementally train such large neural systems must therefore be explored.

² Note that these are connections over which a back-propagation pass is performed during each iteration. Since many of them share the same weights, only a small fraction (about 500) of them are actually free parameters.

3.1. Experiments with Modularity

Four experiments were performed to explore methodologies for constructing phonetic neural nets from smaller component subnets. As a task we used again stop consonant recognition (BDGPTK), although other tasks have recently been explored with similar success (BDG and MNsN) [Waibel 88c].
As in the previous section we used a large database of 5240 common Japanese words spoken in isolation, from which the testing and training tokens for the voiced stops (the BDG-set) and for the voiceless stops (the PTK-set) were extracted. Two separate TDNNs have been trained. On testing data the BDG-net used here performed 98.3% correct for the BDG-set and the PTK-net achieved 98.7% correct recognition for the PTK-set. As a first naive attempt we have now simply run a speech token from either set (i.e., B, D, G, P, T or K) through both a BDG-net and a PTK-net and selected the class with the highest activation from either net as the recognition result. As might have been expected (the component nets had only been trained for their respective classes), poor recognition performance (60.5%) resulted from the 6 class experiment. This is partially due to the inhibitory property of the TDNN that we have observed elsewhere [Waibel 89]. To combine the two networks more effectively, therefore, portions of the net had to be retrained.

Figure 2. BDGPTK-net trained from hidden units from a BDG- and a PTK-net.
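The naive max-activation combination in the first experiment can be sketched in a few lines. The activation values below are made up for illustration; they only show how a net that was never trained on the other class's tokens can still respond strongly and cause the kind of error behind the 60.5% six-class result:

```python
def combine_by_max(bdg_out, ptk_out):
    """Pick the class with the highest activation across both output layers.

    bdg_out, ptk_out: per-class activations from two independently trained nets.
    """
    merged = dict(bdg_out)
    merged.update(ptk_out)
    return max(merged, key=merged.get)

# A /g/ token: the BDG-net responds correctly, but the PTK-net, which has no
# notion of voiced stops, also fires strongly on one of its own classes.
bdg = {"B": 0.10, "D": 0.05, "G": 0.70}
ptk = {"P": 0.15, "T": 0.20, "K": 0.85}   # spurious high response
print(combine_by_max(bdg, ptk))   # prints K: the token is misclassified
```

With calibrated subnets the same rule would return "G"; the point is that independently trained nets give no guarantee their activations are comparable.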
We start by assuming that the first hidden layer in either net already contains all the lower level acoustic phonetic features we need for proper identification of the stops and freeze the connections from the input layer (the speech data) to the first hidden layer's 8 units in the BDG-net and the 8 units in the PTK-net. Back-propagation learning is then performed only on the connections between these 16 (= 2 x 8) units in hidden layer 1 and hidden layer 2, and between hidden layer 2 and the combined BDGPTK-net's output. This network is shown in Fig. 2 with a "G" token presented as input. Only the higher layer connections had to be retrained (for about one day) in this case and the resulting network achieved a recognition performance of 98.1% over the testing data. Combination of the two subnets has therefore yielded a good net, although a slight performance degradation compared to the subnets was observed. This degradation could be explained by the increased complexity of the task, but also by the inability of this net to develop lower level acoustic-phonetic features in hidden layer 1. Such features may in fact be needed for discrimination between the two stop classes, in addition to the within-class features. In a third experiment, we therefore first train a separate TDNN to perform the voiced/unvoiced (V/UV) distinction between the BDG- and the PTK-task. The network has a very similar structure as the BDG-net, except that only four hidden units were used in hidden layer 1 and two in hidden layer 2 and at the output. This V/UV-net achieved better than 99% voiced/unvoiced classification on the test data and its hidden units developed in the process are now used as additional features for the BDGPTK-task. The connections from the input to the first hidden layer of the BDG-, the PTK- and the V/UV-nets are frozen and only the connections that combine the 20 units in hidden layer 1 to the higher layers are retrained.
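The freeze-and-retrain scheme can be sketched as follows. This is a simplified NumPy illustration, not the TDNN itself: plain dense layers stand in for the time-delay layers, a single trainable layer stands in for the retrained upper layers, and all sizes and data are made up. The essential point it demonstrates is that the input-to-hidden weights of both subnets receive no gradient updates while the upper weights are trained:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Frozen first-layer weights from two pretrained subnets (stand-ins for the
# 8 BDG and 8 PTK hidden units); these are never updated below.
W_bdg = rng.normal(scale=0.5, size=(16, 8))
W_ptk = rng.normal(scale=0.5, size=(16, 8))

# Trainable upper layer mapping the 16 combined hidden units to 6 classes.
W_top = np.zeros((16, 6))

def hidden(x):
    """Concatenate the two frozen feature sets into one hidden layer."""
    return np.concatenate([sigmoid(x @ W_bdg), sigmoid(x @ W_ptk)], axis=1)

def train_top(x, y, lr=0.5, steps=200):
    global W_top
    h = hidden(x)   # frozen features: can be computed once and reused
    for _ in range(steps):
        p = sigmoid(h @ W_top)                   # upper-layer forward pass
        W_top -= lr * h.T @ (p - y) / len(x)     # gradient step on W_top only
    return p

# Toy data: 60 random tokens, one-hot targets over 6 classes.
x = rng.normal(size=(60, 16))
y = np.eye(6)[rng.integers(0, 6, size=60)]
p = train_top(x, y)
```

Because the frozen features are fixed, they need only be computed once per token, which is one reason retraining the higher layers alone took about a day instead of 18.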
Training of the V/UV-net and subsequent combination training took between one and two days. The resulting net was evaluated as before on our testing database and achieved a recognition score of 98.4% correct.

Figure 3. Combination of a BDG-net and a PTK-net using 4 additional units in hidden layer 1 as free "Connectionist Glue".

In the previous experiment, good results could be obtained by adding units that we believed to be the useful class distinctive features that were missing in our second experiment. In a fourth experiment, we have now examined an approach that allows for the network to be free to discover any additional features that might be useful to merge the two component networks. Instead of previously training a class distinctive network, we now add four units to hidden layer 1, whose connections to the input are free to learn any missing discriminatory features to supplement the 16 frozen BDG and PTK features. We call these units the "connectionist glue" that we apply to merge two distinct networks into a new combined net. This network is shown in Fig. 3. The hidden units of hidden layer 1 from the BDG-net are shown on the left and those from the PTK-net on the right. The connections from the moving input window to these units have been trained individually on BDG- and PTK-data, respectively, and (as before) remain fixed during combination learning. In the middle of hidden layer 1 we show the 4 free "glue" units. Combination learning now searches for an optimal combination of the existing BDG- and PTK-features and also supplements these by learning additional interclass discriminatory features. Combination retraining with "glue" required a two day training run. Performance evaluation of this network over the BDGPTK test database yielded a recognition rate of 98.4%.
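The glue idea extends the frozen-feature scheme by keeping a few first-layer units trainable alongside the frozen ones. The sketch below is again a simplified dense-layer stand-in for the TDNN with made-up sizes and data; its point is the gradient flow: back-propagation reaches the four glue units' input weights but never the frozen BDG and PTK feature weights.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

W_bdg = rng.normal(scale=0.5, size=(16, 8))    # frozen BDG features
W_ptk = rng.normal(scale=0.5, size=(16, 8))    # frozen PTK features
W_glue = rng.normal(scale=0.1, size=(16, 4))   # 4 free "glue" units
W_top = np.zeros((20, 6))                      # trainable upper layer

def forward(x):
    h_frozen = np.concatenate([sigmoid(x @ W_bdg), sigmoid(x @ W_ptk)], axis=1)
    h_glue = sigmoid(x @ W_glue)
    h = np.concatenate([h_frozen, h_glue], axis=1)   # 16 frozen + 4 glue units
    return h, h_glue, sigmoid(h @ W_top)

def train(x, y, lr=0.5, steps=200):
    global W_glue, W_top
    for _ in range(steps):
        h, h_glue, p = forward(x)
        d_out = (p - y) / len(x)
        # Backprop reaches the glue units but NOT the frozen feature weights.
        d_glue = (d_out @ W_top[16:].T) * h_glue * (1 - h_glue)
        W_glue -= lr * x.T @ d_glue
        W_top -= lr * h.T @ d_out

x = rng.normal(size=(60, 16))
y = np.eye(6)[rng.integers(0, 6, size=60)]
frozen_before = W_bdg.copy()
train(x, y)
assert np.array_equal(W_bdg, frozen_before)   # frozen weights untouched
```

The glue units are free to become whatever interclass features the frozen sets lack, rather than being fixed in advance like the V/UV units of the third experiment.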
In addition to the techniques described so far, it may be useful to free all connections in a large modularly constructed network for an additional small amount of fine tuning. This has been done for the BDGPTK-net shown in Fig. 3, yielding some additional performance improvements. Each iteration of the full network is indeed very slow, but convergence is reached after only a few additional tuning iterations. The resulting network finally achieved (over testing data) a recognition score of 98.6%.

3.2. Steps for the Design of Large Scale Neural Nets

Table 3-1: From BDG to BDGPTK; Modular Scaling Methods.

Method                          bdg     ptk     bdgptk
Individual TDNNs                98.3%   98.7%
TDNN: Max. Activation                           60.5%
Retrain BDGPTK                                  98.3%
Retrain Combined Higher Layers                  98.1%
Retrain with V/UV-units                         98.4%
Retrain with Glue                               98.4%
All-Net Fine Tuning                             98.6%

Table 3-1 summarizes the major results from our experiments. In the first row it shows the recognition performance of the two initial TDNNs trained individually to perform the BDG- and the PTK-tasks, respectively. Underneath, we show the results from the various experiments described in the previous section. The results indicate that larger TDNNs can indeed be trained incrementally, without requiring excessive amounts of training and without loss in performance. The total incremental training time was between one third and one half of a full monolithically trained net, and the resulting networks appear to perform slightly better. Even more astonishingly, they appear to achieve performance as high as the subcomponent BDG- and PTK-nets alone. As a strategy for the efficient construction of larger networks we have found the following concepts to be extremely effective: modular, incremental learning; class distinctive learning; connectionist glue; partial and selective learning; and all-net fine tuning.

4. Recognition of all Consonants

The incremental learning techniques explored so far can now be applied to the design of networks capable of recognizing all consonants.

4.1. Network Architecture

Figure 4. Modular Construction of an All Consonant Network

Our consonant TDNN (shown in Fig. 4.1) was constructed modularly from networks aimed at the consonant subcategories, i.e., the BDG-, PTK-, MNsN-, SShHZ-, TsCh- and the RWY-tasks. Each of these nets had been trained before to discriminate between the consonants within each class. Hidden layers 1 and 2 were then extracted from these nets, i.e. their weights copied and frozen in a new combined consonant TDNN. In addition, an interclass discrimination net was trained that distinguishes between the consonant subclasses and thus hopefully provides missing featural information for interclass discrimination, much like the V/UV network described in the previous section. The structure of this network was very similar to other subcategory TDNNs, except that we have allowed for 20 units in hidden layer 1 and 6 hidden units (one for each coarse consonant class) in hidden layer 2. The weights leading into hidden layers 1 and 2 were then also copied from this interclass discrimination net into the consonant network and frozen. Three connections were then established to each of the 18 consonant output categories (B, D, G, P, T, K, M, N, sN, S, Sh, H, Z, Ch, Ts, R, W and Y): one to connect an output unit with the appropriate interclass discrimination unit in hidden layer 2, one with the appropriate intra-class discrimination unit from hidden layer 2 of the corresponding subcategory net, and one with the always activated threshold unit (not shown in Fig. 4.1). The overall network architecture is shown in Fig. 4.1 for the case of an incoming test token (e.g., a "G"). For simplicity, Fig. 4.1 shows only the hidden layers from the BDG-, PTK-, SShHZ- and the inter-class discrimination nets. At the output, only the two connections leading to the correctly activated "G"-output unit are shown. Units and connections pertaining to the other subcategories as well as connections leading to the 17 other output units are omitted for clarity in Fig. 4.1. All free weights were initialized with small random weights and then trained.

4.2. Results

Table 4-1: Consonant Recognition Performance Results.

Task                  Recognition Rate (%)
bdg                   98.6
ptk                   98.7
mnN                   96.6
sshhz                 99.3
chts                  100.0
rwy                   99.9
cons. class           96.7
All consonant TDNN    95.0
All-Net Fine Tuning   95.9

Table 4-1 summarizes our results for the consonant recognition task. In the first 6 rows the recognition results (measured over the available test data in their respective subclasses) are given. The entry "cons. class" shows the performance of the interclass discrimination net in identifying the coarse phonemic subclass of an unknown token. 96.7% of all tokens were correctly categorized into one of the six consonant subclasses. After completion of combination learning the entire net was evaluated over 3061 consonant test tokens, and achieved a 95.0% recognition accuracy. All-net fine tuning was then performed by freeing up all connections in the network to allow for small additional adjustments in the interest of better overall performance. After completion of all-net fine tuning, the performance of the network then improved to 96.0% correct. To put these recognition results into perspective, we have compared these results with several other competing recognition techniques and found that our incrementally trained net compares favorably [Waibel 88b].

5. Conclusion

The serious problems associated with scaling smaller phonemic subcomponent networks to larger phonemic tasks are overcome by careful modular design.
Modular design is achieved by several important strategies: selective and incremental learning of subcomponent tasks, exploitation of previously learned hidden structure, the application of connectionist glue or class distinctive features to allow separate networks to "grow" together, partial training of portions of a larger net and, finally, all-net fine tuning for making small additional adjustments in a large net. Our findings suggest that judicious application of a number of connectionist design techniques could lead to the successful design of high performance large scale connectionist speech recognition systems.

References

[Fahlman 88] Fahlman, S.E. An Empirical Study of Learning Speed in Back-Propagation Networks. Technical Report CMU-CS-88-162, Carnegie-Mellon University, June, 1988.

[Haffner 88] Haffner, P., Waibel, A. and Shikano, K. Fast Back-Propagation Learning Methods for Neural Networks in Speech. In Proceedings of the Fall Meeting of the Acoustical Society of Japan, October, 1988.

[Rumelhart 86] Rumelhart, D.E., Hinton, G.E. and Williams, R.J. Learning Internal Representations by Error Propagation. In McClelland, J.L. and Rumelhart, D.E. (editors), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, chapter 8, pages 318-362. MIT Press, Cambridge, MA, 1986.

[Tank 87] Tank, D.W. and Hopfield, J.J. Neural Computation by Concentrating Information in Time. In Proceedings National Academy of Sciences, pages 1896-1900, April, 1987.

[Waibel 88a] Waibel, A., Hanazawa, T., Hinton, G., Shikano, K. and Lang, K. Phoneme Recognition: Neural Networks vs. Hidden Markov Models. In IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 8.S3.3, April, 1988.

[Waibel 88b] Waibel, A., Sawai, H. and Shikano, K. Modularity and Scaling in Large Phonemic Neural Networks. Technical Report TR-I-0034, ATR Interpreting Telephony Research Laboratories, July, 1988.

[Waibel 88c] Waibel, A.
Connectionist Glue: Modular Design of Neural Speech Systems. In Touretzky, D.S., Hinton, G.E. and Sejnowski, T.J. (editors), Proceedings of the 1988 Connectionist Models Summer School. Morgan Kaufmann, 1988.

[Waibel 89] Waibel, A., Hanazawa, T., Hinton, G., Shikano, K. and Lang, K. Phoneme Recognition Using Time-Delay Neural Networks. IEEE Transactions on Acoustics, Speech and Signal Processing, March, 1989.

[Watrous 88] Watrous, R. Speech Recognition Using Connectionist Networks. PhD thesis, University of Pennsylvania, October, 1988.
CONSTRAINTS ON ADAPTIVE NETWORKS FOR MODELING HUMAN GENERALIZATION

M. Pavel, Mark A. Gluck, Van Henkle
Department of Psychology
Stanford University
Stanford, CA 94305

ABSTRACT

The potential of adaptive networks to learn categorization rules and to model human performance is studied by comparing how natural and artificial systems respond to new inputs, i.e., how they generalize. Like humans, networks can learn a deterministic categorization task by a variety of alternative individual solutions. An analysis of the constraints imposed by using networks with the minimal number of hidden units shows that this "minimal configuration" constraint is not sufficient to explain and predict human performance; only a few solutions were found to be shared by both humans and minimal adaptive networks. A further analysis of human and network generalizations indicates that initial conditions may provide important constraints on generalization. A new technique, which we call "reversed learning", is described for finding appropriate initial conditions.

INTRODUCTION

We are investigating the potential of adaptive networks to learn categorization tasks and to model human performance. In particular we have studied how both natural and artificial systems respond to new inputs, that is, how they generalize. In this paper we first describe a computational technique to analyze generalizations by adaptive networks. For a given network structure and a given classification problem, the technique enumerates all possible network solutions to the problem. We then report the results of an empirical study of human categorization learning. The generalizations of human subjects are compared to those of adaptive networks. A cluster analysis of both human and network generalizations indicates significant differences between human performance and possible network behaviors. Finally, we examine the role of the initial state of a network for biasing the solutions found by the network.
Using data on the relations between human subjects' initial and final performance during training, we develop a new technique, called "reversed learning", which shows some potential for modeling human learning processes using adaptive networks. The scope of our analyses is limited to generalizations in deterministic pattern classification (categorization) tasks.

The basic difficulty in generalization is that there exist many different classification rules ("solutions") that correctly classify the training set but which categorize novel objects differently. The number and diversity of possible solutions depend on the language defining the pattern recognizer. However, additional constraints can be used in conjunction with many types of pattern categorizers to eliminate some, hopefully undesirable, solutions. One typical way of introducing additional constraints is to minimize the representation. For example, minimizing the number of equations and parameters in a mathematical expression, or the number of rules in a rule-based system, would assure that some identification maps would not be computable. In the case of adaptive networks, minimizing the size of the network, which reduces the number of possible encoded functions, may result in improved generalization performance (Rumelhart, 1988). The critical theoretical and applied questions in pattern recognition involve characterization and implementation of desirable constraints. In the first part of this paper we describe an analysis of adaptive networks that characterizes the solution space for any particular problem.

ANALYSES OF ADAPTIVE NETWORKS

Feed-forward adaptive networks considered in this paper will be defined as directed graphs with linear threshold units (LTU) as nodes and with edges labeled by real-valued weights.
The output or activation of a unit is determined by a monotonic nonlinear function of a weighted sum of the activations of all units whose edges terminate on that unit. There are three types of units within a feed-forward layered architecture: (1) input units, whose activity is determined by external input; (2) output units, whose activity is taken as the response; and (3) the remaining units, called hidden units. For the sake of simplicity our discussion will be limited to objects represented by binary valued vectors.

A fully connected feed-forward network with an unlimited number of hidden units can compute any boolean function. Such a general network, therefore, provides no constraints on the solutions, and additional constraints must be imposed for the network to prefer one generalization over another. One such constraint is minimizing the size of the network. In order to explore the effect of minimizing the number of hidden units we first identify the minimal network architecture and then examine its generalizations. Most of the results in this area have been limited to finding bounds on the expected number of possible patterns that could be classified by a given network (e.g., Cover, 1965; Volper and Hampson, 1987; Valiant, 1984; Baum & Haussler, 1989). The bounds found by these researchers hold for all possible categorizations and are, therefore, too broad to be useful for the analysis of particular categorization problems. To determine the generalization behavior for a particular network architecture, a specific categorization problem and a training set, it is necessary to find all possible solutions and the corresponding generalizations. To do this we used a computational (not a simulation) procedure developed by Pavel and Moore (1988) for finding minimal networks solving specific categorization problems.
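A two-layer net of linear threshold units can be written in a few lines. The sketch below is ours, not the paper's: it hard-codes one classic two-hidden-unit solution to XOR (OR and AND detectors), the kind of minimal solution analyzed in this paper.

```python
import numpy as np

def ltu_net(x, W_h, b_h, w_o, b_o):
    """Forward pass through one hidden layer of linear threshold units:
    a unit outputs 1 iff its weighted input sum exceeds its threshold."""
    h = (W_h @ x + b_h > 0).astype(float)   # hidden LTU activations
    return int(w_o @ h + b_o > 0)           # output category (0 or 1)

# One minimal two-hidden-unit solution to XOR: h1 = OR, h2 = AND;
# the output fires when OR is on but AND is off.
W_h = np.array([[1.0, 1.0], [1.0, 1.0]])
b_h = np.array([-0.5, -1.5])
w_o = np.array([1.0, -2.0])
b_o = -0.5
```

Enumerating all weight configurations whose hidden units categorize the training set differently is exactly the counting problem addressed by the Pavel and Moore procedure.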
Pavel and Moore (1988) defined two network solutions to be different if at least one hidden unit categorized at least one object in the training set differently. Using this definition their algorithm finds all possible different solutions. Because finding network solutions is NP-complete (Judd, 1987), for larger problems Pavel and Moore used a probabilistic version of the algorithm to estimate the distribution of generalization responses.

One way to characterize the constraints on generalization is in terms of the number of possible solutions. A larger number of possible solutions indicates that generalizations will be less predictable. The critical result of the analysis is that, even for minimal networks, the number of different network solutions is often quite large. Moreover, the number of solutions increases rapidly with increases in the number of hidden units. The apparent lack of constraints can also be demonstrated by finding the probability that a network with a randomly selected hidden layer can solve a given categorization problem. That is, suppose that we select n different hidden units, each unit representing a linear discriminant function. The activations of these random hidden units can be viewed as a transformation of the input patterns. We can ask what is the probability that an output unit can be found to perform the desired dichotomization. A typical example of a result of this analysis is shown in Figure 1 for the three-dimensional (3D) parity problem. In the minimal configuration involving three hidden units there were 62 different solutions to the 3D parity problem. The rapid increase in probability (high slope of the curve in Figure 1) indicates that adding a few more hidden units rapidly increases the probability that a random hidden layer will solve the 3D parity problem.
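The random-hidden-layer experiment can be reproduced in miniature. The sketch below is our own Monte Carlo estimate, not the authors' procedure: it draws random linear threshold hidden units, then uses a perceptron run (which converges exactly when the transformed patterns are linearly separable) as the test for the existence of a suitable output unit; the iteration cap makes this a conservative approximation.

```python
import numpy as np

def separable(H, y, epochs=200):
    """Perceptron test: returns True if the labels y are linearly
    separable given feature rows H (a True answer is always exact)."""
    Ha = np.hstack([H, np.ones((len(H), 1))])      # append bias feature
    w = np.zeros(Ha.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(Ha, y):
            pred = 1 if x @ w > 0 else 0
            if pred != t:
                w += (t - pred) * x
                errors += 1
        if errors == 0:
            return True
    return False

def p_random_layer_solves(n_hidden, trials=100, seed=0):
    """Estimate the probability that n random LTU hidden units make the
    3-bit parity problem solvable by some output unit."""
    rng = np.random.default_rng(seed)
    X = np.array([[b >> 2 & 1, b >> 1 & 1, b & 1] for b in range(8)], float)
    y = (X.sum(axis=1) % 2).astype(int)            # 3D parity labels
    hits = 0
    for _ in range(trials):
        W = rng.normal(size=(n_hidden, 3))
        b = rng.normal(size=n_hidden)
        H = (X @ W.T + b > 0).astype(float)        # random hidden layer
        hits += separable(H, y)
    return hits / trials
```

With one or two hidden units the estimate is zero (parity needs at least three LTUs when the output sees only the hidden activations), and the estimated probability rises quickly as hidden units are added.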
[Figure 1. The proportion of solutions to the 3D parity problem (solid line) and the experimental task (dashed line) as a function of the number of hidden units.]

The results of a more detailed analysis of the generalization performance of the minimal networks will be discussed following a description of a categorization experiment with human subjects.

HUMAN CATEGORIZATION EXPERIMENT

In this experiment human subjects learned to categorize objects which were defined by four-dimensional binary vectors. Of the 2^4 possible objects, subjects were trained to classify a subset of 8 objects into two categories of 4 objects each. The specific assignment of objects into categories was patterned after Medin et al. (1982) and is shown in Figure 2. Eight of the patterns are designated as a training set and the remaining eight comprise the test set. The assignment of the patterns in the training set into two categories was such that there were many combinations of rules that could be used to correctly perform the categorization. For example, the first two dimensions could be used with one other dimension. The training patterns could also be categorized on the basis of an exclusive or (XOR) of the last two dimensions. The type of solution obtained by a human subject could only be determined by examining responses to the test set as well as the training set.

[Figure 2. Patterns to be classified. (Adapted from Medin et al., 1982.)]

In the actual experiments, subjects were asked to perform a medical diagnosis for each pattern of four symptoms (dimensions).
The experimental procedure will be described here only briefly because the details of this experiment have been described elsewhere (Pavel, Gluck, & Henkle, 1988). Each of the patterns was presented serially in a randomized order. Subjects responded with one of the categories and then received feedback. The training of each individual continued until he reached a criterion (responding correctly to 32 consecutive stimuli) or until each pattern had been presented 32 times. The data reported here are based on 78 subjects, half (39) who learned the task to criterion and half who did not. Following the training phase, subjects were tested using all 16 possible patterns. The results of the test phase enabled us to determine the generalizations performed by the subjects.

Subjects' generalizations were used to estimate the "functions" that they may have been using. For example, of the 39 criterion subjects, 15 used a solution that was consistent with the exclusive-or (XOR) of the dimensions x3 and x4. We use "response profiles" to graph responses for an ensemble of functions, in this case for a group of subjects. A response profile represents the probability of assigning each pattern to category "A". For example, the response profile for the XOR solution is shown in Figure 3A. For convenience we define the responses to the test set as the "generalization profile". The response profile of all subjects who reached the criterion is shown in Figure 3B. The responses of our criterion subjects to the training set were basically identical and correct. The distribution of subjects' generalization profiles, reflected in the overall generalization profile, is indicative of considerable individual differences.
[Figure 3. (A) Response profile of the XOR solution, and (B) the proportion of the response "A" to all patterns for human subjects (dark bars) and minimal networks (light bars). The lower 8 patterns are from the training set and the upper 8 patterns from the test set.]

MODELING THE RESPONSE PROFILE

One of our goals is to model subjects' distribution of categorizations as represented by the response profile in Figure 3B. We considered three natural approaches to such modeling: (1) statistical/proximity models, (2) minimal disjunctive normal forms (DNF), and (3) minimal two-layer networks. The statistical approach is based on the assumption that the response profile over subjects represents the probability of categorizations performed by each subject. Our data are not consistent with that assumption because each subject appeared to behave deterministically. The second approach, using the minimal DNF, is also not a good candidate because there are only four such solutions and the response profile over those solutions differs considerably from that of the subjects. Turning to the adaptive network solutions, we found all the solutions using the linear programming technique described above (Pavel & Moore, 1988). The minimal two-layer adaptive network that was capable of solving the training set problem consisted of two hidden units. The proportion of solutions as a function of the number of hidden units is shown in Figure 1 by the dashed line. For the minimal network there were 18 different solutions. These 18 solutions had 8 different individual generalization profiles. Assuming that each of the 18 network solutions is equally likely, we computed the generalization profile for the minimal network shown in Figure 3B.
The response profile for the minimal network represents the probability that a randomly selected minimal network will assign a given pattern to category "A". Even without statistical testing we can conclude that the generalization profiles for humans and networks are quite different. It is possible, however, that humans and minimal networks obtain similar solutions and that the differences in the average responses are due to the particular statistical sampling assumption used for the minimal networks (i.e., each solution is equally likely). In order to determine the overlap of solutions we examined the generalization profiles in more detail.

CLUSTERING ANALYSIS OF GENERALIZATION PROFILES

To analyze the similarity in solutions we defined a metric on generalization profiles. The Hamming distance between two profiles is equal to the number of patterns that are categorized differently. For example, the distance between generalization profiles "A A B A B B B B" and "A A B B B B A B" is equal to two, because the two profiles differ only on the fourth and seventh patterns. Figure 4 shows the results of a cluster analysis using a hierarchical clustering procedure that maximizes the average distance between clusters.

[Figure 4. Results of hierarchical clustering for human (left) and network (right) generalization profiles.]

In this graph the average distance between any two clusters is shown by the value of the lowest common node in the tree. The clustering analysis indicates that humans and networks obtained widely different generalization profiles. Only three generalization profiles were found to be common to humans and networks.
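The profile metric and the cluster criterion are simple enough to state in code; this is our sketch of the definitions, not the authors' implementation.

```python
def hamming(profile_a, profile_b):
    """Number of patterns the two generalization profiles categorize
    differently (the metric used for clustering)."""
    assert len(profile_a) == len(profile_b)
    return sum(a != b for a, b in zip(profile_a, profile_b))

def average_linkage(cluster_a, cluster_b):
    """Average Hamming distance between two clusters of profiles, the
    quantity shown at each node of the clustering tree."""
    pairs = [(a, b) for a in cluster_a for b in cluster_b]
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)
```

On the worked example from the text, `hamming("AABABBBB", "AABBBBAB")` returns 2, since only the fourth and seventh patterns differ.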
This number of common generalizations is to be expected by chance if the human and network solutions are independent. Thus, even if there exists a learning algorithm that approximates the human probability distribution of responses, the minimal network would not be a good model of human performance in this task. It is clear from the previously described network analysis that somewhat larger networks with different constraints could account for human solutions. In order to characterize the additional constraints, we examined subjects' individual strategies to find out why individual subjects obtained different solutions.

ANALYSIS OF HUMAN LEARNING STRATEGIES

Human learning strategies that lead to preferences for particular solutions may best be modeled in networks by imposing constraints and providing hints (Abu-Mostafa, 1989). These include choosing the network architecture and a learning rule, constraining connectivity, and specifying initial conditions. We will focus on the specification of initial conditions.

[Figure 5. The number of consistent or non-stable responses (black) and the number of stable incorrect responses (light) for XOR and non-XOR criterion subjects, and for those who never reached criterion.]

Our effort to examine initial conditions was motivated by large differences in learning curves (Pavel et al., 1988) between subjects who obtained the XOR solutions and those who did not. The subjects who did not obtain the XOR solutions would perform much better on some patterns (e.g., 0001) than the XOR subjects, but worse on other patterns (e.g., 1000). We concluded that these subjects during the first few trials discovered rules that categorized most of the training patterns correctly but failed on one or two training patterns. We examined the sequences of subjects' responses to see how well they adhered to "incorrect" rules.
We designated a response to a pattern as stable if the individual responded the same way to that pattern at least four times in a row. We designated a response as consistent if the response was stable and correct. The results of the analysis are shown in Figure 5. These results indicate that the subjects who eventually achieved the XOR solution were less likely to generate stable incorrect solutions. Another important result is that those subjects who never learned the correct responses to the training set were not responding randomly. Rather, they were systematically using incorrect rules. On the basis of these results, we conclude that subjects' initial strategies may be important determinants of their final solutions.

REVERSED LEARNING

For simplicity we identify subjects' initial conditions by their responses on the first few trials. An important theoretical question is whether or not it is possible to find a network structure, initial conditions and a learning rule such that the network can represent both the initial and final behavior of the subject. In order to study this problem we developed a technique we call "reversed learning". It is based on a perturbation analysis of feed-forward networks. We use the fact that the error surface in a small neighborhood of a minimum is well approximated by a quadratic surface. Hence, a well behaved gradient descent procedure with a starting point in the neighborhood of the minimum will find that minimum. The reversed learning procedure consists of three phases: (1) A network is trained to a final desired state of a particular individual, using both the training and the test patterns. (2) Using only the training patterns, the network is then trained to achieve the initial state of that individual subject closest to the desired final state. (3) The network is trained with only the training patterns and the solution is compared to the subject's response profiles.
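The three phases can be expressed as a short control-flow sketch. Everything here is illustrative, not the authors' code: `train` stands in for a full gradient-descent run on whatever network is used, `responses` returns the net's categorizations, and the subject's response profiles serve as training targets.

```python
def reversed_learning(net, train_pats, test_pats,
                      subject_final, subject_initial, train, responses):
    """Three-phase reversed-learning sketch.
    train(net, patterns, targets) runs gradient descent in place;
    responses(net, patterns) returns the net's categorizations."""
    all_pats = list(train_pats) + list(test_pats)
    # Phase 1: drive the net to the subject's final solution, using the
    # responses on both training and test patterns as targets.
    train(net, all_pats, subject_final)
    # Phase 2: from that minimum, move to the subject's initial state
    # using only the training patterns -- a nearby starting point.
    train(net, train_pats, subject_initial)
    initial_state = net.copy()
    # Phase 3: retrain on the training patterns alone; the resulting
    # solution is then compared with the subject's response profile.
    train(net, train_pats, subject_final[:len(train_pats)])
    return initial_state, responses(net, all_pats)
```

The quadratic-neighborhood argument above is what justifies phase 3: a gradient descent started at the recovered initial state should fall back into the nearby minimum corresponding to the subject's final solution.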
Our preliminary results indicate that this procedure leads in many cases to initial conditions that favor the desired solutions. We are currently investigating conditions for finding the optimal initial states.

CONCLUSION

The main goal of this study was to examine constraints imposed by humans (experimentally) and networks (linear programming) on learning of simple binary categorization tasks. We characterize the constraints by analyzing responses to novel stimuli. We showed that, like the humans, networks learn the deterministic categorization task and find many, very different, individual solutions. Thus adaptive networks are better models than statistical models and DNF rules. The constraints imposed by minimal networks, however, appear to differ from those imposed by human learners in that there are only a few solutions shared between humans and adaptive networks. After a detailed analysis of the human learning process we concluded that initial conditions may provide important constraints. In fact we consider the set of initial conditions as powerful "hints" (Abu-Mostafa, 1989) which reduce the number of potential solutions without reducing the complexity of the problem. We demonstrated the potential effectiveness of these constraints using a perturbation technique, which we call reversed learning, for finding appropriate initial conditions.

Acknowledgements

This work was supported by research grants from the National Science Foundation (BNS-86-18049) to Gordon Bower and Mark Gluck, and (IST-8511589) to M. Pavel, and by a grant from NASA Ames (NCC 2-269) to Stanford University. We thank Steve Sloman and Bob Rehder for useful discussions and their comments on this draft.

References

Abu-Mostafa, Y. S. Learning by example with hints. NIPS, 1989.

Baum, E. B., & Haussler, D. What size net gives valid generalization? NIPS, 1989.

Cover, T. (June 1965).
Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, EC-14(3), 326-334.

Judd, J. S. Complexity of connectionist learning with various node functions. Presented at the First IEEE International Conference on Neural Networks, San Diego, June 1987.

Medin, D. L., Altom, M. W., Edelson, S. M., & Freko, D. (1982). Correlated symptoms and simulated medical classification. Journal of Experimental Psychology: Learning, Memory, & Cognition, 8(1), 37-50.

Pavel, M., Gluck, M. A., & Henkle, V. Generalization by humans and multi-layer adaptive networks. Submitted to Tenth Annual Conference of the Cognitive Science Society, August 17-19, 1988.

Pavel, M., & Moore, R. T. (1988). Computational analysis of solutions of two-layer adaptive networks. APL Technical Report, Dept. of Psychology, Stanford University.

Valiant, L. G. (1984). A theory of the learnable. Comm. ACM, 27(11), 1134-1142.

Volper, D. J., & Hampson, S. E. (1987). Learning and using specific instances. Biological Cybernetics, 56.
Self Organizing Neural Networks for the Identification Problem

Manoel Fernando Tenorio
School of Electrical Engineering
Purdue University
W. Lafayette, IN 47907
[email protected]

Wei-Tsih Lee
School of Electrical Engineering
Purdue University
W. Lafayette, IN 47907
[email protected]

ABSTRACT

This work introduces a new method called the Self Organizing Neural Network (SONN) algorithm and demonstrates its use in a system identification task. The algorithm constructs the network, chooses the neuron functions, and adjusts the weights. It is compared to the Back-Propagation algorithm in the identification of a chaotic time series. The results show that SONN constructs a simpler, more accurate model, requiring less training data and fewer epochs. The algorithm can also be applied and generalized as a classifier.

I. INTRODUCTION

1.1 THE SYSTEM IDENTIFICATION PROBLEM

In various engineering applications, it is important to be able to estimate, interpolate, and extrapolate the behavior of an unknown system when only its input-output pairs are available. Algorithms which produce an estimation of the system behavior based on these pairs fall under the category of system identification techniques.

1.2 SYSTEM IDENTIFICATION USING NEURAL NETWORKS

A general form to represent systems, both linear and nonlinear, is the Kolmogorov-Gabor polynomial [Gabor, 1961] shown below:

    y = a0 + Σi ai xi + Σi Σj aij xi xj + ...        (1)

where y is the output and x the input to the system. [Gabor, 1961] proposed a learning method that adjusted the coefficients of (1) by minimizing the mean square error between each desired output sample and the actual output. This paper describes a supervised learning algorithm for structure construction and adjustment. Here, systems which can be described by (1) are considered. The function computed by each neuron is chosen from a set of possible functions previously assigned to the algorithm.
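A second-order truncation of the polynomial (1) is easy to state directly; the sketch below is ours, with arbitrary coefficient containers, and is only meant to make the model family concrete.

```python
def kolmogorov_gabor(x, a0, a, A):
    """Second-order truncation of the Kolmogorov-Gabor polynomial (1):
    y = a0 + sum_i a_i*x_i + sum_i sum_j a_ij*x_i*x_j."""
    y = a0
    for i, xi in enumerate(x):
        y += a[i] * xi                       # linear terms
        for j, xj in enumerate(x):
            y += A[i][j] * xi * xj           # quadratic cross terms
    return y
```

System identification in this setting amounts to choosing the coefficients a0, a_i, a_ij from observed input-output pairs.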
This function set is general enough to accept a wide range of both continuous and discrete functions. In this work, the set is taken from variants of the 2-input quadratic polynomial for simplicity, although there is no requirement making it so. This approach abandons the simplistic mean-square error as a performance measure in favor of a modified Minimum Description Length (MDL) criterion [Rissanen, 1975], with provisions to measure the complexity of the model generated. The algorithm searches for the simplest model which generates the best estimate. The modified MDL, from hereon named the Structure Estimation Criterion (SEC), is applied hierarchically in the selection of the optimal neuron transfer function from the function set, and then used as an optimality criterion to guide the construction of the structure. The connectivity of the resulting structure is arbitrary, and under the correct conditions [Geman & Geman, 1984] the estimation of the structure is optimal in terms of the output error and low function complexity. This approach shares the same spirit as GMDH-type algorithms. However, the concepts of parameter estimation from Information Theory, combined with a stochastic search algorithm (Simulated Annealing), were used to create a new tool for system identification.

This work is organized as follows: section II presents the problem formulation and the Self Organizing Neural Network (SONN) algorithm description; section III describes the results of the application of SONN to a well known problem tested before using other neural network algorithms [Lapedes & Farber, 1987; Moody, 1988]; and finally, section IV presents a discussion of the results and future directions for this work.

II. THE SELF ORGANIZING NEURAL NETWORK ALGORITHM

II.1 SELF ORGANIZING STRUCTURES

The Self Organizing Neural Network (SONN) algorithm performs a search on the model space by the construction of hypersurfaces. A network of nodes, each node representing a hypersurface,
is organized to be an approximate model of the real system. SONN can be fully characterized by three major components, which can be modified to incorporate knowledge about the process: (1) a generating rule for the primitive neuron transfer functions; (2) an evaluation method which assesses the quality of the model; and (3) a structure search strategy. Below, the components of SONN are discussed.

II.2 THE ALGORITHM STRUCTURE

II.2.1 The Generating Rule

Given a set of observations S:

    S = {(X1, Y1), (X2, Y2), ..., (Xl, Yl)}

generated by

    Yi = f(Xi) + η        (2)

where f(.) is represented by a Kolmogorov-Gabor polynomial and the random variable η is normally distributed, N(0,1). The dimension of Y is m, and the dimension of X is n. Every component yk of Y forms a hypersurface yk = fk(X) in the space of dim(X) + 1. The problem is to find f(.), given the observations S, which are a corrupted version of the desired function. In this work, the model which estimates f(.) is desired to be as accurate and simple (small number of parameters, and low degree of nonlinearity) as possible.

The approach taken here is to estimate the simplest model which best describes f(.) by generating optimal functions for each neuron, which can be viewed as the construction of a hypersurface based on the observed data. It can be described as follows: given a set of observations S, use p components of the n-dimensional space of X to create a hypersurface which best describes yk = fk(X), through a three step process. First, given X = [x1, x2, x3, ..., xn] and yk, and the mapping Ψn: [x1, x2, x3, ..., xn] -> [xΨ(1), xΨ(2), xΨ(3), ..., xΨ(p)], construct the hypersurface h1(xΨ(1), xΨ(2), xΨ(3), ..., xΨ(p)) (hi after the first iteration) of p+1 dimensions, where Ψn is a projection from n dimensions to p dimensions. The elements of the domain of Ψn are called terminals. Second.
If the global optimality criterion is reached by the construction of hi(xΨ(1), xΨ(2), xΨ(3), ..., xΨ(p)), then stop; otherwise continue to the third step. Third, generate from [x1, x2, x3, ..., xn, hi(xΨ(1), xΨ(2), xΨ(3), ..., xΨ(p))] a new p+1 dimensional hypersurface hi+1 through the extended mapping Ψn+1(.), and reapply the second step. The resulting model is a multilayered neural network whose topology is arbitrarily complex and created by a stochastic search guided by a structure estimation criterion. For simplicity in this work, the set of prototype functions (F) is restricted to be 2-input quadratic surfaces or smaller, with only four possible types:

    y = a0 + a1 x1 + a2 x2                                        (3)
    y = a0 + a1 x1 + a2 x2 + a3 x1 x2                             (4)
    y = a0 + a1 x1 + a2 x1^2                                      (5)
    y = a0 + a1 x1 + a2 x2 + a3 x1 x2 + a4 x1^2 + a5 x2^2         (6)

II.2.2 Evaluation of the Model Based on the MDL Criterion

The selection rule (T) of the neuron transfer function was based on a modification of the Minimal Description Length (MDL) information criterion. In [Rissanen, 1975] the principle of minimal description for statistical estimation was developed. The MDL provides a trade-off between the accuracy and the complexity of the model by including the structure estimation term of the final model. The final model (with the minimal
the MDL is:

MDL = 0.5 N log Se^2 + 0.5 k log N (8)

where Se^2 is the residual variance and k is the number of coefficients in the model selected. In the SONN algorithm, the MDL criterion is modified to operate both recursively and hierarchically. First, the concept of the MDL is applied to each candidate prototype surface for a given neuron. Second, the acceptance of the node, based on Simulated Annealing, uses the MDL measure as the system energy. However, since the new neuron is generated from terminals which can be the output of other neurons, the original definition of the MDL is unable to compute the true number of system parameters of the final function. Recall that due to the arbitrary connectivity, feedback loops, and other configurations, it is non-trivial to compute the number of parameters in the entire structure. In order to reflect the hierarchical nature of the model, a modified MDL called the Structure Estimation Criterion (SEC) is used in conjunction with a heuristic estimator of the number of parameters in the system at each stage of the algorithm. A computationally efficient heuristic for the estimation of the number of parameters in the model is based on the fact that SONN creates a tree-like structure with multiple roots at the input terminals. Then k in expression (8) can be estimated recursively by:

k = kL + kR + (no. of parameters of the current node) (9)

where kL and kR are the estimated numbers of parameters of the left and right parents of the current node, respectively. This heuristic estimator is neither a lower bound nor an upper bound of the true number of parameters in the model.

II.2.3 The SONN Algorithm

To explain the algorithm, the following definitions are necessary: NODE - a neuron and its associated function, connections, and SEC; BASIC NODE - a node for a system input variable; FRONT NODE - a node without children; INTERMEDIATE NODE - a node that is neither a front node nor a basic node; STATE - the collection of nodes
and the configuration of their interconnection; INITIAL STATE (SI) - the state with only basic nodes; PARENT AND CHILD STATE - the child state is equal to the parent state except for a new node and its interconnection generated on the parent state structure; NEIGHBOR STATE - a state that is either a child or a parent state of another; ENERGY OF THE STATE (SEC_Si) - the energy of the state is defined as the minimum SEC of all the front nodes in that state.

Self Organizing Neural Networks 61

In the SONN algorithm, the search for the correct model structure is done via Simulated Annealing. Therefore the algorithm at times can accept partial structures that look less than ideal. In the same way, it is able to discard partially constructed substructures in search of better results. The use of this algorithm implies that the node accepting rule (R) varies at run-time according to a cooling temperature (T) schedule. The SONN algorithm is as follows:

Initialize T and SI
Repeat
    Repeat
        Sj = generate(Si)  - application of P
        If accept(SEC_Sj, SEC_Si, T) then Si = Sj  - application of R
    Until the number of new neurons is greater than N
    Decrease the temperature T
Until the temperature T is smaller than Tend (terminal temperature for Simulated Annealing)

Each neuron output and the system input variables are called terminals. Terminals are viewed as potential dimensions from which a new hypersurface can be constructed. Every terminal represents the best tentative approximation to the system function with the available information, and all terminals are therefore treated equally.

III. EXAMPLE - THE CHAOTIC TIME SERIES

In the following results, the chaotic time series generated by the Mackey-Glass differential equation was used. The SONN with the SEC, and its heuristic variant, were used to obtain the approximate model of the system. The result is compared with those obtained by using the nonlinear signal processing method [Lapedes & Farber, 1987].
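The SONN search loop described above is a Metropolis-style annealing procedure driven by the MDL/SEC energy. A minimal runnable sketch follows; the function names, the toy state representation, and the geometric cooling schedule are our own illustrative assumptions, not the authors' implementation:

```python
import math
import random

def mdl(neg_log_likelihood, k, n):
    # Dominant MDL term (eq. 7): data-fit cost plus a
    # 0.5*k*log(N) penalty on the number of parameters k.
    return neg_log_likelihood + 0.5 * k * math.log(n)

def accept(sec_new, sec_old, temperature):
    # Simulated-annealing acceptance: always take an improvement;
    # take a worse state with probability exp(-(delta)/T).
    if sec_new <= sec_old:
        return True
    return random.random() < math.exp(-(sec_new - sec_old) / temperature)

def sonn_search(state, generate, sec, t0=1.0, t_end=0.01,
                cooling=0.9, nodes_per_temp=5):
    # Outer loop of the SONN algorithm: generate a neighbor state
    # (application of P), accept or reject it against the SEC energy
    # (application of R), then cool the temperature.
    temperature = t0
    while temperature > t_end:
        for _ in range(nodes_per_temp):
            candidate = generate(state)
            if accept(sec(candidate), sec(state), temperature):
                state = candidate
        temperature *= cooling
    return state
```

For illustration, `sonn_search(10, lambda s: s + random.choice([-1, 1]), lambda s: s * s)` anneals an integer state toward the minimum of a quadratic energy; in SONN proper, `generate` would add a candidate neuron and `sec` would evaluate the Structure Estimation Criterion of the resulting state.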
The advantages and disadvantages of both approaches are analyzed in the next section.

III.1 Structure of the Problem

The Mackey-Glass differential equation used here can be described as:

dx(t)/dt = a x(t - τ) / (1 + x^10(t - τ)) - b x(t) (10)

By setting a = 0.2, b = 0.1, and τ = 17, a chaotic time series with a strange attractor of fractal dimension about 3.5 will be produced [Lapedes & Farber, 1987]. To compare the accuracy of prediction, the normalized root mean square error is used as a performance index:

normalized RMSE = RMSE / Standard Deviation (11)

III.2 SONN WITH THE HEURISTIC SEC (SONN.H)

In the following examples, a modified heuristic version of the SEC is used. The estimator of the number of parameters is given by (9), and the final configuration is shown in figure 1.

III.2.1 Node 19

In this subsection, SONN is allowed to generate up to the 19th accepted node. In this first version of the algorithm, all neurons have the same number of interconnections, and therefore draw their transfer functions from the same pool of functions. Generalizations of the algorithm can easily be made to accommodate multiple-input functions, with neuron transfer function assignments drawn from separate pools. In this example, the first one hundred points of the time series were used for training, and samples 101 through 400 were used for prediction testing. The total number of weights in the network is 27. The performance index averages 0.07. The output of the network is overlapped in figure 2 with the original time series. For comparison purposes, a GDR network with the structure used in [Lapedes & Farber, 1987] was trained for 6500 epochs. The training data consisted of the first 500 points of the time series, and the testing data ran from the 501st sample to the 832nd. The total number of weights is 165, and the final performance index equals 0.12. This was done to give both algorithms similar computational resources.
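Equations (10) and (11) are easy to reproduce numerically. A rough sketch follows; the crude Euler integration with unit step and the constant initial history are our own simplifying assumptions (the original experiments do not specify the integration scheme used here):

```python
import math

def mackey_glass(n, a=0.2, b=0.1, tau=17, dt=1.0, x0=1.2):
    # Crude Euler integration of eq. (10); the pre-history x(t < 0)
    # is held constant at x0. Step size dt = 1 is an arbitrary choice.
    x = [x0] * (tau + 1)
    while len(x) < n + tau + 1:
        x_t, x_lag = x[-1], x[-1 - tau]
        x.append(x_t + dt * (a * x_lag / (1.0 + x_lag ** 10) - b * x_t))
    return x[tau + 1:]

def normalized_rmse(actual, predicted):
    # Performance index of eq. (11): RMSE divided by the
    # standard deviation of the actual series.
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    mean = sum(actual) / n
    std = math.sqrt(sum((a - mean) ** 2 for a in actual) / n)
    return rmse / std
```

Generating 400 samples this way yields a bounded, aperiodic-looking series on which the performance index of eq. (11) can be evaluated against any predictor.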
Figure 3 shows the original time series overlapped with the GDR network output.

III.2.2 Node 37

In this subsection, the model chosen was formed by the 37th accepted node. The network was trained in a similar manner to the first example, since it is part of the same run. The final number of weights is 40, and the performance index 0.018. Figure 4 shows the output of the network overlapped with the original time series. Figure 5 shows the GDR with 11,500 epochs. Notice that in both cases, the GDR network demands 150 connections and 150 weights, as compared to 12 connections and 27 weights for the first example and 10 connections and 40 weights for the second example. The comparison of the performance of the different models is listed in figure 6.

IV. Conclusion and Future Work

In this study, we proposed a new approach for the identification problem based on a flexible, self-organizing neural network (SONN) structure. The variable structure provides the opportunity to search for and construct the optimal model based on input-output observations. The hierarchical version of the MDL, called the structure estimation criterion, was used to guide the trade-off between the model complexity and the accuracy of the estimation. The SONN approach demonstrates potential usefulness as a tool for system identification through the example of modeling a chaotic time series.

REFERENCES

Gabor, D., et al., "A universal nonlinear filter, predictor and simulator which optimizes itself by a learning process," IEE Proc., 108B, pp. 422-438, 1961.
Rissanen, J., "Modeling by shortest data description," Automatica, vol. 14, pp. 465-471, 1975.
Geman, S., and Geman, D., "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE PAMI, PAMI-6, pp. 721-741, 1984.
Lapedes, A., and Farber, R., "Nonlinear signal processing using neural networks: Prediction and system modeling," TR LA-UR-87-2662, 1987.
Moody, J., this volume.
Rissanen, J.,
"Consistent order estimation of autoregressive processes by shortest description of data," in Analysis and Optimization of Stochastic Systems, Jacobs et al., Eds., N.Y.: Academic, 1980.

Figure 1. The 37th State Generated
Figure 2. SONN 19th Model, P.I. = 0.06
Figure 3. GDR after 6,500 Epochs, P.I. = 0.12
Figure 4. SONN 37th Model, P.I. = 0.038
Figure 5. GDR after 11,500 Epochs, P.I. = 0.018
Figure 6. Performance Index Versus the Number of Predicted Points
|
1988
|
89
|
178
|
662 A PASSIVE SHARED ELEMENT ANALOG ELECTRICAL COCHLEA

David Feld
Dept. Elect. Eng.
207-30 Cory Hall
U.C. Berkeley
Berkeley, CA 94720

Joe Eisenberg
Bioeng. Group
U.C. Berkeley

Edwin Lewis
Dept. Elect. Eng.
U.C. Berkeley

ABSTRACT

We present a simplified model of the micromechanics of the human cochlea, realized with electrical elements. Simulation of the model shows that it retains four signal processing features whose importance we argue on the basis of engineering logic and evolutionary evidence. Furthermore, just as the cochlea does, the model achieves massively parallel signal processing in a structurally economic way, by means of shared elements. By extracting what we believe are the five essential features of the cochlea, we hope to design a useful front-end filter to process acoustic images and to obtain a better understanding of the auditory system.

INTRODUCTION

Results of psychoacoustical and physiological experiments in humans indicate that the auditory system creates acoustic images via massively parallel neural computations. These computations enable the brain to perform voice detection, sound localization, and many other complex tasks. For example, by recording a random signal with a wide range of frequency components, and playing this signal simultaneously through both channels of a stereo headset, one causes the brain to create an acoustical image of a "shsh" sound in the center of the head. Delaying the presentation of just one frequency component in the random signal going to one ear and simultaneously playing the original signal to the other ear, one would still have the image of a "shsh" in the center of the head; however, if one mentally searches the acoustical image space carefully, a clear tone can be found far off to one side of the head. The frequency of this tone will be that of the component with the time delay to one ear. Both ears are still receiving wide-band random signals.
The isolated tone will not be perceptible from the signal to either ear alone; but with both signals together, the brain has enough data to isolate the delayed tone in an acoustical image. The brain achieves this by massively parallel neural computation. Because the acoustic front-end filter for the brain is the cochlea, people have proposed that analogs of the cochlea might serve well as front-end filters for man-made processors of acoustical images (Lyon, Mead, 1988). If we were to base a cochlear analog on current biophysical models of this structure, it would be extraordinarily complicated and extremely difficult to realize with hardware. Because of this, we want to start with a cochlear model that incorporates a minimum set of essential ingredients. The ears of lower vertebrates, such as alligators and frogs, provide some clues to help identify those ingredients. These animals presumably have to compute acoustic images similar to ours, but they do not have cochleas. The acoustic front-end filters in the ears of these animals evolved independently and in parallel to the evolution of the cochlea. Nevertheless, those front-end filters share four functional properties with the part of the cochlea which responds to the lower 7 out of 10 octaves of hearing (20 Hz to 2560 Hz):

1. They are multichannel filters, with each channel covering a different part of the frequency spectrum.
2. Each channel is a relatively broad-band frequency filter.
3. Each filter has an extremely steep high-frequency band edge (typically 60 to 200 dB/oct).
4. Each filter has nearly linear phase shift as a function of frequency, within its passband.

The front-end acoustical filters of lower vertebrates also have at least one structural feature in common with the cochlea: namely, the various filter channels share their dynamic components. This is the fifth property we choose to include.
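Property 3 quantifies band-edge steepness in dB per octave. As a point of reference, here is a hypothetical helper for measuring that slope between two points of a magnitude response (the function and its parameters are illustrative, not part of the original analysis):

```python
import math

def slope_db_per_octave(f1, mag1, f2, mag2):
    # Steepness of a filter edge between two frequency/magnitude
    # points, expressed in dB per octave.
    octaves = math.log2(f2 / f1)
    db_change = 20.0 * math.log10(mag2 / mag1)
    return db_change / octaves
```

A single-pole rolloff, whose magnitude halves with each doubling of frequency, measures about -6 dB/oct on this scale, far shallower than the 60 to 200 dB/oct edges cited above.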
Properties 1 and 3 provide good resolution in frequency; properties 2 and 4 are what filter designers would add to provide good resolution in time. In order to compute acoustical images with the neural networks in our brain, we need both kinds of resolution: time and frequency. Shared elements, a structural feature, have obvious advantages for economy of construction. The fact that evolution has come to these same front-end filter properties repeatedly suggests that these properties have compelling advantages with respect to an animal's survival. We submit that we can realize all of these properties very well with the simplest of the modern cochlear models, namely that of Joseph Zwislocki (1965). This is a transmission line model made entirely of passive elements.

Figure 1 - Drawing of the ear with the cochlea represented by an electrical analog.

664 Feld, Eisenberg and Lewis

A COCHLEAR MODEL

In order to illustrate Zwislocki's model, a quick review of the mechanics of the cochlea is useful. Figure 1 depicts the ear with the cochlea represented by an electrical analog. A sound pressure wave enters the outer ear and strikes the ear drum which, in turn, causes the three bones of the middle ear to vibrate. The last bone, known as the stapes, is connected to the oval window (represented in figure 1 by the voltage source at the beginning of the electrical analog), where the acoustic energy enters the cochlea. As the acoustic energy is transferred to the oval window, a fluid-mechanical wave is formed along a structure known as the basilar membrane (this membrane and the surrounding fluid are represented by the series and shunt circuit elements of figure 1). As the basilar membrane vibrates, the acoustical signal is transduced to neural impulses which travel along the auditory nerve, carrying the data used by the nervous system to compute auditory images. Figure 2 is taken from a paper by Zweig et al. (1976), and depicts an uncoiled cochlea.
As the fluid-mechanical wave travels through the cochlea: 1) the wave gradually slows down, and 2) the higher-frequency components of the wave are absorbed, leaving an increasingly narrower band of low-frequency components proceeding on toward the far end of the cochlea. If we were to uncoil and enlarge the basilar membrane, it would look like a swim fin (figure 3). If we now were to push on the basilar membrane, it would push back like a spring. It is most compliant at the wide, thin end of the fin. Thus, as one moves along the basilar membrane from its basal to apical end, its compliance increases. Zwislocki's transmission-line model was tapered in this same way.

Figure 2 - Uncoiled cochlea (Zweig, 1976).

Figure 3 - Simplified uncoiled and enlarged drawing of the basilar membrane.

Zwislocki's model of the cochlea is a distributed parameter transmission line. Figure 4 shows a lumped electrical analog of the model. The series elements (L1, ..., Ln) represent the local inertia of the cochlear fluid. The shunt capacitive elements (C1, ..., Cn) represent the local compliance of the basilar membrane. The shunt resistive elements (R1, ..., Rn) represent the local viscous resistance of the basilar membrane and associated fluid. The model has one input and a huge number of outputs. The input, sound pressure at the oval window, is represented here as a voltage source. The outputs are either the displacements or the velocities of the various regions of the basilar membrane.

Figure 4 - Transmission line model of the cochlea represented as an electrical circuit.

In the electrical analog, shown in figure 4, we have selected velocities as the outputs (in order to compare our data to real neural tuning curves) and we have represented those velocities as the currents (I1, ..., In).
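Evaluating such a lumped ladder at one frequency amounts to a backward impedance sweep from the termination followed by a forward sweep collecting the shunt-branch currents. The sketch below is our own illustration of that computation; the element values (a uniform L, an exponentially tapered C, arbitrary resistances) are illustrative assumptions, not the parameter values matched to biophysical data:

```python
import math

def ladder_response(freq_hz, L, C, R, r_term, v_in=1.0):
    # Complex shunt-branch currents of an n-stage tapered ladder at one
    # frequency. L[k] is the k-th series inductor; C[k] and R[k] form
    # the k-th shunt branch (compliance in series with viscous loss);
    # r_term is the terminating resistor.
    w = 2 * math.pi * freq_hz
    n = len(L)
    # Backward sweep: input impedance seen looking right at each stage.
    z_right = [0j] * (n + 1)
    z_right[n] = complex(r_term)
    for k in range(n - 1, -1, -1):
        z_shunt = R[k] + 1.0 / (1j * w * C[k])
        z_par = 1.0 / (1.0 / z_shunt + 1.0 / z_right[k + 1])
        z_right[k] = 1j * w * L[k] + z_par
    # Forward sweep: propagate node voltages, collect shunt currents.
    currents = []
    v = complex(v_in)
    for k in range(n):
        i_series = v / z_right[k]                 # current through L[k]
        v_node = v - 1j * w * L[k] * i_series     # voltage after series L
        z_shunt = R[k] + 1.0 / (1j * w * C[k])
        currents.append(v_node / z_shunt)         # shunt-branch "velocity"
        v = v_node
    return currents
```

With an apically increasing compliance (capacitance), a high-frequency drive is absorbed basally while a low-frequency drive reaches the apex, qualitatively reproducing the traveling-wave behavior described above.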
The original analysis of Zwislocki's tapered transmission line model did not produce the steep high frequency band edges that are observed in real cochleas. This deficiency was a major driving force behind the early development of more complex cochlear models. Recently, it was found that the original analysis placed the Zwislocki model in an inappropriate mode of operation (Lewis, 1984). In this mode, determined by the relative parameter values, the high frequency band edges had very gentle slopes. Solving the partial differential equations for the model with the help of a commonly used simplification (the WKB approximation), one finds a second mode of operation. In this mode, the model produces all five of the properties that we desire, including extraordinarily steep high-frequency band edges.

RESULTS

We were curious to know whether or not the newly-found mode of operation, with its very steep high-frequency band edges, could be found in a finite-element version of the model. If so, we should be able to realize a lumped, analog version of the Zwislocki model for use as a practical front-end filter for acoustical image formation and processing. We decided to implement the finite element model in SPICE. SPICE is a software package that is used for electrical circuit simulation. Our SPICE model showed the following: as long as the model was made up of enough segments, and as long as the elements had appropriate parameter values, the second mode of operation indeed was available. Furthermore, it was the predominant mode of operation when the parameter values of the model were matched to biophysical data for the cochlea.

Figure 5 - Frequency response of the basilar membrane velocity.

Figure 6 - Inverted neural tuning curves from three afferent fibers of a cat cochlea (Kiang and Moxon, 1974).
Figure 5 shows the magnitude of the electrical analog's response plotted against frequency on log-log coordinates. The five curves correspond to five different locations along our model. The cutoff frequencies span approximately seven octaves. Further adjustments of the parameters will be needed in order to shift these curves to span the lower seven octaves of human audition. For comparison, figure 6 shows threshold response curves of a cat cochlea from a paper by Kiang and Moxon (1974). These curves are inverted intentionally because Kiang and Moxon plotted stimulus threshold vs. frequency rather than response amplitude vs. frequency. We use these neural tuning curves for comparison because direct observations of cochlear mechanics have been limited to the basal end. Furthermore, in the realm of single frequencies and small signals, Evans has produced compelling evidence that this is a valid comparison (Evans, in press). These three curves are typical of the lower seven octaves of hearing. One obvious discrepancy between Kiang and Moxon's data and our results is that our model does not exhibit the sharp corners occurring at the band edges. The term sharp corner denotes the fact that the transition between the shallow rising edge and steep falling edge of a given curve is abrupt, i.e., the corner is not rounded. Figure 7 shows what happens to the response curve at a single location along our model as the number of stages is increased. The curve on the right, in figure 7, was derived with 500 stages and does not change much as we increase the number of stages indefinitely. Thus the curve represents a convergence of the solution of the lumped parameter Zwislocki model to the distributed parameter model. The middle curve was derived with 100 stages and the left-hand curve was derived with 50.
In any lumped-element transmission line, an artifactual cutoff occurs roughly at the point where the given input wavelength exceeds the dimensions of the lumped elements. If we do not lump the stages in our model finely enough, we observe this artifactual cutoff as opposed to the true cutoff of Zwislocki's distributed parameter model. This phenomenon is clearly observed in the curve derived from 50 stages and may account for the sharper corners in response curves from real cochleas. However, in order to make our finite element model operate in a manner analogous to that of the distributed parameter Zwislocki model, we need approximately 500 stages.

Figure 7 - Convergence of cutoff points as the number of branches increases.

Figure 8 - Frequency response of the basilar membrane velocity without the helicotrema.

A critical element in the Zwislocki model is a terminating resistor, representing the helicotrema (see Rh in figure 4). The helicotrema is a small hole at the end of the basilar membrane. Figure 8 shows the effects of removing that resistor. The irregular frequency characteristics are quite different from the experimental data and represent possibly wild excursions of the basilar membrane. Figure 9 shows phase data for the Zwislocki model, which is linear as a function of frequency. Anderson et al. (1971) show similar results in the squirrel monkey. With lumped-element analysis we are able to obtain temporal as well as spectral responses. For a temporal waveform such as an acoustic pulse, the linear relationship between phase and frequency guarantees that those Fourier components which pass through the spectral filter will be reassembled with proper phase relationships at the output of the filter. As it travels down the basilar membrane, the temporal waveform will simply be smoothed, due to loss of its higher-frequency components, and delayed, due to the linear phase shift. Figure 10 shows the response of our electrical analog to a 1 msec wide square pulse at the input. The curves represent the time courses of basilar membrane displacement at four equally spaced locations along the cochlea. The curve on the right represents the response at the apical end of the cochlea (the end farthest from the input). The curve on the left represents the response at a point 25 percent of the distance from the input end. The impulse responses of mammalian cochleas and of the auditory filters of lower vertebrates all show a slight ringing, again indicating a deficiency in our model.

Figure 9 - Phase response of the basilar membrane velocity.

Figure 10 - Traveling square wave pulse along the membrane from the basal to apical end.

CONCLUSION

Research activity studying the function of higher level brain processing is in its infancy, and little is known about how the various features of the cochlea, such as linear phase and sharp band edges, as well as nonlinear features, such as two-tone suppression and cubic difference tone excitation, are used by the brain. Therefore, our approach, in developing a cochlear model, is to incorporate only the most essential ingredients. We have incorporated the five properties mentioned in the introduction, which provide simplicity of analysis, economy of hardware construction, and the preservation of both temporal and spectral resolution. The inclusion of these properties is also consistent with the fact that they are found in numerous species. We have found that in the correct mode of operation a tapered transmission line model can exhibit these five important cochlear properties. A lumped-element approximation can be used to simulate this model as long as at least 500 stages are used.
As observed in figure 7, by decreasing the number of stages below 500, the solution to the lumped-element model no longer adheres to the Zwislocki model. In fact, the output of the coarsely lumped model more closely resembles the neural tuning data of the cochlea, in that it produces very sharp corners. There is some evidence that indicates the cochlea is constructed of discrete components. Indeed, the hair cells themselves are discretized. If this idea is valid, a model constructed of as few as 50 branches may more accurately represent the cochlear mechanics than the Zwislocki model. Our simple model has some drawbacks in that it does not replicate various properties of the cochlea. For example, it does not span the full ten octaves of human audition, nor does it explain any of the experimentally observed nonlinear aspects seen in the cochlea. However, we take this approach because it provides us with a powerful analysis tool that will enable us to study the behavior of lumped-element cochlear models. This tool will allow us to proceed to the next step: the building of a hardware analog of the cochlea.

RESEARCH DIRECTIONS

In and of itself, the tapered shared element travelling wave structure we have chosen is interesting to analyze. In order to get even further insight into how this filter works, and to aid in the building of a hardware version of such a filter, we plan to study the placement of the poles and zeroes of the transfer function at each tap along the structure. In a travelling wave transmission line we expect that the transfer function at each tap will have the same denominator. Therefore, it must be the numerators of the transfer functions which will change, i.e., the zeroes will change from tap to tap. It will be of interest to see what role the zeroes play in such a ladder structure.
Furthermore, it will be of great interest to us to study what happens to the poles and zeroes of the transfer function at each tap as the number of stages is increased (approaching the distributed parameter filter) or decreased (approaching the lumped-element cutoff version of the filter with sharper corners). We should emphasize that our circuit is bi-directional, i.e., there is loading from the stages before and after each tap, as in the real cochlea. For this reason, we must consider carefully the options for hardware realization of our circuit. We might choose to make a mechanical structure on silicon or some other medium, or we could convert our structure into a uni-directional circuit and build it as a digital or analog circuit. Using this design we plan to build an acoustic imaging device that will enable us to explore various signal processing tasks. One such task would be to extract acoustic signals from noise. All species need to cope with two types of noise: internal sensor and amplifier noise, and external noise such as that generated by wind. Spectral decomposition is an effective way to deal with internal noise. For example, the amplitudes of the spectral components in the passband of a filter are largely undiminished, whereas the broadband noise passed by the filter is proportional to the square root of the bandwidth. External noise reduction can be accomplished by spatial decomposition. When temporal resolution is preserved in signals, spatial decomposition can be achieved by cross correlation of the signals from the two ears. Therefore, from these two properties, spectral and temporal resolution, one can construct an acoustic imaging system in which signals buried in a sea of noise can be extracted.

Acknowledgments

We would like to thank Thuan Nguyen for figure 1, Eva Poinar who helped with the figures, Michael Sneary for valuable discussion, and Bruce Parnas for help with programming.
References

Anderson, D.J., Rose, J.E., Hind, J.E., Brugge, J.F., Temporal Position of Discharges in Single Auditory Nerve Fibers Within the Cycle of a Sine-Wave Stimulus: Frequency and Intensity Effects, J. Acoust. Soc. Am., 49, 1131-1139, 1971.

Evans, E.F., Cochlear Filtering: A View Seen Through the Temporal Discharge Patterns of Single Cochlear Nerve Fibers. A talk given at the 1988 NATO advanced workshop, to be published as (J.P. Wilson, D.T. Kemp, eds.) Mechanics of Hearing, Plenum Press, N.Y.

Kiang, N.Y.S., Moxon, E.C., Tails of Tuning Curves of Auditory-Nerve Fibers, J. Acoust. Soc. Am., 55, 620-630, 1974.

Lewis, E.R., High Frequency Rolloff in a Cochlear Model Without Critical-Layer Resonance, J. Acoust. Soc. Am., 76 (3), September 1984.

Lyon, R.F., Mead, C.A., An Analog Electronic Cochlea, IEEE Trans. ASSP, 36, 1119-1134, 1988.

Zweig, G., Lipes, R., Pierce, J.R., The Cochlear Compromise, J. Acoust. Soc. Am., 59, 975-982, 1976.

Zwislocki, J., Analysis of Some Auditory Characteristics, in Handbook of Mathematical Psychology, Vol. 3, (Wiley, New York), pp. 1-97, 1965.
|
1988
|
9
|
179
|
LEARNING WITH TEMPORAL DERIVATIVES IN PULSE-CODED NEURONAL SYSTEMS

Mark Gluck
David B. Parker
Eric S. Reifsnider

Department of Psychology
Stanford University
Stanford, CA 94305

Abstract

A number of learning models have recently been proposed which involve calculations of temporal differences (or derivatives, in continuous-time models). These models, like most adaptive network models, are formulated in terms of frequency (or activation), a useful abstraction of neuronal firing rates. To more precisely evaluate the implications of a neuronal model, it may be preferable to develop a model which transmits discrete pulse-coded information. We point out that many functions and properties of neuronal processing and learning may depend, in subtle ways, on the pulse-coded nature of the information coding and transmission properties of neuron systems. When compared to formulations in terms of activation, computing with temporal derivatives (or differences), as proposed by Kosko (1986), Klopf (1988), and Sutton (1988), is both more stable and easier when reformulated for a more neuronally realistic pulse-coded system. In reformulating these models in terms of pulse coding, our motivation has been to enable us to draw further parallels and connections between real-time behavioral models of learning and biological circuit models of the substrates underlying learning and memory.

INTRODUCTION

Learning algorithms are generally defined in terms of continuously-valued levels of input and output activity. This is true of most training methods for adaptive networks (e.g., Parker, 1987; Rumelhart, Hinton, & Williams, 1986; Werbos, 1974; Widrow & Hoff, 1960), and also for behavioral models of animal and human learning (e.g., Gluck & Bower, 1988a, 1988b; Rescorla & Wagner, 1972), as well as more biologically oriented models of neuronal function (e.g., Bear & Cooper, in press; Hebb, 1949; Granger, Ambros-Ingerson, Staubli, & Lynch, in press; Gluck & Thompson, 1987; Gluck, Reifsnider,
& Thompson, in press; McNaughton & Nadel, in press; Gluck & Rumelhart, in press). In spite of the attractive simplicity and utility of the "activation" construct, neurons use discrete trains of pulses for the transmission of information from cell to cell. Frequency (or activation) is a useful abstraction of pulse trains, especially for bridging the gap between whole-animal and single-neuron behavior. To more precisely evaluate the implications of a neuronal model, it may be preferable to develop a model which transmits discrete pulse-coded information; it is possible that many functions and properties of neuronal processing and learning may depend, in subtle ways, on the pulse-coded nature of the information coding and transmission properties of neuron systems. In the last few years, a number of learning models have been proposed which involve computations of temporal differences (or derivatives, in continuous-time models). Klopf (1988) presented a formal real-time model of classical conditioning that predicts the magnitude of conditioned responses (CRs), given the temporal relationships between conditioned stimuli (CSs) and an unconditioned stimulus (US). Klopf's model incorporates a "differential-Hebbian" learning algorithm in which changes in presynaptic levels of activity are correlated with changes in postsynaptic levels of activity. Motivated by the constraints and motives of engineering, rather than animal learning, Kosko (1986) proposed the same basic rule and provided extensive analytic insights into its properties. Sutton (1988) introduced a class of incremental learning procedures, called "temporal difference" methods, which update associative (predictive) weights according to the difference between temporally successive predictions. In addition to the applied potential of this class of algorithms, Sutton & Barto (1987) show how their model, like Klopf's (1988) model,
provides a good fit to a wide range of behavioral data on classical conditioning. These models, all of which depend on computations involving changes over time in activation levels, have been successful both for predicting a wide range of behavioral animal learning data (Klopf, 1988; Sutton & Barto, 1987) and for solving useful engineering problems in adaptive prediction (Kosko, 1986; Sutton, 1988). The possibility that these models might represent the computational properties of individual neurons seems, at first glance, highly unlikely. However, we show that by reformulating these models for pulse-coded communication (as in neuronal systems) rather than in terms of abstract activation levels, the computational soundness as well as the biological relevance of the models is improved. By avoiding the use of unstable differencing methods in computing the time derivative of activation levels, and by increasing the error tolerance of the computations, pulse coding will be shown to improve the accuracy and reliability of these models. The pulse-coded models will also be shown to lend themselves to a closer comparison to the function of real neurons than do models that operate with activation levels. As the ability of researchers to directly measure neuronal behavior grows, the value of such close comparisons will increase. As an example, we describe here a pulse-coded version of Klopf's differential-Hebbian model of classical conditioning. Further details are contained in Gluck, Parker, & Reifsnider (1988). Pulse-Coding in Neuronal Systems We begin by outlining the general theory and engineering advantages of pulse-coding and then describe a pulse-coded reformulation of differential-Hebbian learning. The key idea is quite simple and can be summarized as follows: frequency can be seen, loosely speaking, as an integral of pulses; conversely, therefore, pulses can be thought of as carrying information about the derivatives of frequency.
Thus, computing with the "derivatives of frequency" is analogous to computing with pulses. As described below, our basic conclusion is that differential-Hebbian learning (Klopf, 1988; Kosko, 1986), when reformulated for a pulse-coded system, is both more stable and easier to compute than is apparent when the rule is formulated in terms of frequencies. These results have important implications for any learning model which is based on computing with time derivatives, such as Sutton's temporal difference model (Sutton, 1988; Sutton & Barto, 1987). There are many ways to electrically transmit analog information from point to point. Perhaps the most obvious way is to transmit the information as a signal level. In electronic systems, for example, data that varies between 0 and 1 can be transmitted as a voltage level that varies between 0 volts and 1 volt. This method can be unreliable, however, because the receiver of the information can't tell if a constant DC voltage offset has been added to the information, or if crosstalk has occurred with a nearby signal path. To the exact degree that the signal is interfered with, the data as read by the receiver will be erroneously altered. The consequences of faults appearing in the signal are particularly serious for systems that are based on derivatives of the signal. In such systems, even a small, but sudden, unintended change in signal level can drastically alter its derivative, creating large errors. A more reliable way to transmit analog information is to encode it as the frequency of a series of pulses. A receiver can reliably determine if it has received a pulse, even in the face of DC voltage offsets or moderate crosstalk. Most errors will not be large enough to constitute a pulse, and thus will have no effect on the transmitted information. The receiver can count the number of pulses received in a given time window to determine the frequency of the pulses.
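This kind of frequency estimation is easy to sketch in code. The following is a minimal, illustrative discrete-time version (ours, not the paper's code; the decay constant mu and the example pulse train are arbitrary choices). It uses an exponentially weighted running average, which, as discussed below, also yields a stable derivative estimate as a by-product:

```python
def estimate_frequency(pulses, mu=0.1):
    """Running frequency estimate for a pulse train.

    pulses: sequence of 0/1 values (1 = a pulse at that time step).
    Returns (freqs, derivs): the estimate f(t) at each step, and its
    per-step change mu * (x(t) - f(t)), computed without differencing
    any noisy signal levels.
    """
    f = 0.0
    freqs, derivs = [], []
    for x in pulses:
        df = mu * (x - f)   # stable derivative estimate
        f += df             # exponentially weighted average update
        freqs.append(f)
        derivs.append(df)
    return freqs, derivs

# A train firing every 4th step has mean rate 0.25; the estimate settles nearby.
freqs, derivs = estimate_frequency([1, 0, 0, 0] * 50)
```

Note that the derivative estimate is bounded and depends only on whether a pulse arrived relative to the current expectation, never on subtracting two noisy signal levels.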
Further information on encoding analog information as the frequency of a series of pulses can be found in many electrical engineering textbooks (e.g., Horowitz & Hill, 1980). As noted by Parker (1987), another advantage of coding an analog signal as the frequency of a series of pulses is that the time derivative of the signal can be easily and stably calculated. If x(t) represents a series of pulses (x equals 1 if a pulse is occurring at time t; otherwise it equals 0), then we can estimate the frequency, f(t), of the series of pulses using an exponentially weighted time average:

f(t) = μ ∫ x(τ) e^(−μ(t−τ)) dτ

where μ is the decay constant. The well-known formula for the derivative of f(t) is

df(t)/dt = μ (x(t) − f(t))

Thus, the time derivative of pulse-coded information can be calculated without using any unstable differencing methods; it is simply a function of the presence or absence of a pulse relative to the current expectation (frequency) of pulses. As described earlier, calculation of time derivatives is a critical component of the learning algorithms proposed by Klopf (1988), Kosko (1986), and Sutton (Sutton, 1988; Sutton & Barto, 1987). They are also an important aspect of second-order (pseudo-Newtonian) extensions of the backpropagation learning rule for multi-layer adaptive "connectionist" networks (Parker, 1987). Summary of Klopf's Model Klopf (1988) proposed a model of classical conditioning which incorporates the same learning rule proposed by Kosko (1986) and which extends some of the ideas presented in Sutton and Barto's (1981) real-time generalization of Rescorla and Wagner's (1972) model of classical conditioning. The mathematical specification of Klopf's model consists of two equations: one which calculates output signals based on a weighted sum of input signals (drives) and one which determines changes in synapse efficacy due to changes in signal levels.
The specification of signal output level is defined as

y(t) = Σ_{i=1}^{n} w_i(t) x_i(t) − θ

where: y(t) is the measure of postsynaptic frequency of firing at time t; w_i(t) is the efficacy (positive or negative) of the ith synapse; x_i(t) is the frequency of action potentials at the ith synapse; θ is the threshold of firing; and n is the number of synapses on the "neuron". This equation expresses the idea that the postsynaptic firing frequency depends on the summation of the weighted presynaptic firing frequencies, w_i(t)x_i(t), relative to some threshold, θ. The learning mechanism is defined as

Δw_i(t) = Δy(t) Σ_{j=1}^{τ} c_j |w_i(t−j)| Δx_i(t−j)

where: Δw_i(t) is the change in efficacy of the ith synapse at time t; Δy(t) is the change in postsynaptic firing at time t; and τ is the longest interstimulus interval over which delayed conditioning is effective. The c_j are empirically established learning rate constants, each corresponding to a different interstimulus interval. In order to accurately simulate various behavioral phenomena observed in classical conditioning, Klopf adds three ancillary assumptions to his model. First, he places a lower bound of 0 on the activation of the node. Second, he proposes that changes in synaptic weight, Δw_i(t), be calculated only when the change in presynaptic signal level is positive, that is, when Δx_i(t−j) > 0. Third, he proposes separate excitatory and inhibitory weights, in contrast to the single real-valued associative weights in other conditioning models (e.g., Rescorla & Wagner, 1972; Sutton & Barto, 1981). It is intriguing to note that all of these assumptions are not only sufficiently justified by constraints from behavioral data but are also motivated by neuronal constraints. For a further examination of the biological and behavioral factors supporting these assumptions, see Gluck, Parker, and Reifsnider (1988). The strength of Klopf's model as a simple formal behavioral model of classical conditioning is evident.
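The two components just described can be written directly in terms of frequencies. The sketch below is our illustration, not Klopf's code: the constants are arbitrary, and for simplicity the weight-magnitude factor in the update is omitted, while the lower bound on activation and the positive-Δx restriction are kept:

```python
def postsynaptic_frequency(weights, x, theta):
    # Weighted sum of presynaptic frequencies relative to a firing threshold,
    # with the lower bound of 0 that Klopf places on node activation.
    return max(0.0, sum(w * xi for w, xi in zip(weights, x)) - theta)

def weight_change(dy_t, dx_history, c):
    """Simplified differential-Hebbian update for one synapse.

    dy_t: change in postsynaptic firing at time t.
    dx_history[j-1]: change in this synapse's presynaptic firing j steps ago.
    c[j-1]: learning rate for an interstimulus interval of j.
    Only positive presynaptic changes contribute (Klopf's second assumption).
    """
    return dy_t * sum(cj * dx for cj, dx in zip(c, dx_history) if dx > 0)
```

For example, a positive postsynaptic change following a recent positive presynaptic change yields a positive (Hebbian) weight change; a negative postsynaptic change following the same input yields a negative one.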
Although the model has not yielded any new behavioral predictions, it has demonstrated an impressive ability to reproduce a wide, though not necessarily complete, range of Pavlovian behavioral phenomena with a minimum of assumptions. Klopf (1988) specifies his learning algorithm in terms of activation or frequency levels. Because neuronal systems communicate through the transmission of discrete pulses, it is difficult to evaluate the biological plausibility of an algorithm when so formulated. For this reason, we present and evaluate a pulse-coded reformulation of Klopf's model. A Pulse-Coded Reformulation of Klopf's Model We illustrate here a pulse-coded reformulation of Klopf's (1988) model of classical conditioning. The equations that make up the model are fairly simple. A neuron is said to have fired an output pulse at time t if v(t) > θ, where θ is a threshold value and v(t) is defined as follows:

v(t) = (1 − d) v(t−1) + Σ_i w_i(t) x_i(t)    (1)

where v(t) is an auxiliary variable, d is a small positive constant representing the leakage or decay rate, w_i(t) is the efficacy of synapse i at time t, and x_i(t) is the frequency of presynaptic pulses at time t at synapse i. The input to the decision of whether the neuron will fire consists of the weights and efficacies of the synapses as well as information about previous activation levels at the neuronal output. Note that the leakage rate, d, causes older information about activation levels to have less impact on current values of v(t) than does recent information of the same type. The output of the neuron, p(t), is:

if v(t) > θ then p(t) = 1 (pulse generated)
if v(t) ≤ θ then p(t) = 0 (no pulse generated)

Once p(t) has been determined, v(t) will need to be adjusted if p(t) = 1. To reflect the fact that the neuron has fired (i.e., p(t) = 1), v(t) is set to v(t) − 1. This decrement occurs after p(t) has been determined for the current t.
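The pulse-generation rule above can be sketched as a single update step (a minimal illustration; d, θ, the weight, and the input frequency below are arbitrary values, not simulation parameters from the paper):

```python
def neuron_step(v, weights, x, d=0.05, theta=1.0):
    """One update of Equation 1 plus the threshold test and post-fire decrement.

    v: accumulator value carried over from the previous step.
    x: presynaptic frequencies at this step.
    Returns (p, v): the output pulse (1 or 0) and the adjusted accumulator.
    """
    v = (1.0 - d) * v + sum(w * xi for w, xi in zip(weights, x))
    p = 1 if v > theta else 0
    if p:
        v -= 1.0   # reflect that the neuron has fired
    return p, v

# Constant drive through one synapse produces a steady train of output pulses.
v, pulses = 0.0, []
for _ in range(40):
    p, v = neuron_step(v, [0.5], [0.6])
    pulses.append(p)
```

With constant drive, the accumulator charges up, crosses threshold every few steps, and is knocked back down by the decrement, so the output pulse rate tracks the net input per step.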
Frequencies of pulses at the output node and at the synapses are calculated using the following equations:

f(t) = f(t−1) + Δf(t), where Δf(t) = m (p(t) − f(t−1))

where f(t) is the frequency of outgoing pulses at time t; p(t) is the output (1 or 0) of the neuron at time t; and m is a small positive constant representing a leakage rate for the frequency calculation. Following Klopf (1988), changes in synapse efficacy occur according to

Δw_i(t) = Δy(t) Σ_{j=1}^{τ} c_j |w_i(t−j)| Δx_i(t−j)    (2)

where Δw_i(t) = w_i(t+1) − w_i(t), and Δy(t) and Δx_i(t) are calculated analogously to Δf(t); τ is the longest interstimulus interval (ISI) over which delay conditioning is effective; and c_j is an empirically established set of learning rates which govern the efficacy of conditioning at an ISI of j. Changes in w_i(t) are governed by the learning rule in Equation 2, which alters v(t) via Equation 1. Figure 1 shows the results of a computer simulation of a pulse-coded version of Klopf's conditioning model. The first graph shows the excitatory weight (dotted line) and inhibitory weight (dashed line) of the CS "synapse". Also on the same graph is the net synaptic weight (solid line), the sum of the excitatory and inhibitory weights. The subsequent graphs show CS input pulses, US input pulses, and the output (CR) pulses. The simulation consists of three acquisition trials followed by three extinction trials.

Figure 1. Simulation of pulse-coded version of Klopf's conditioning model. Top panel shows excitatory and inhibitory weights as dashed lines and the net synaptic weight of the CS as a solid line. Lower panels show the CS and US inputs and the CR output.
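The frequency bookkeeping around the weight update can be sketched as follows. This is our illustration under assumed values: m, the learning rates c_j, and the toy pulse trains are not taken from the simulation, and the weight-magnitude factor is dropped for simplicity:

```python
def track_frequency(pulse_train, m=0.2):
    """f(t) = f(t-1) + m * (p(t) - f(t-1)); returns estimates and changes."""
    f, freqs, changes = 0.0, [], []
    for p in pulse_train:
        df = m * (p - f)
        f += df
        freqs.append(f)
        changes.append(df)
    return freqs, changes

pre  = [1, 0, 1, 0, 1, 0, 1, 0]   # presynaptic pulses
post = [0, 1, 0, 1, 0, 1, 0, 1]   # postsynaptic pulses lagging one step behind
_, dx = track_frequency(pre)
_, dy = track_frequency(post)

# Weight change at the final step over a 3-step ISI window, keeping only
# positive presynaptic changes, as in the frequency-level rule.
c = [0.5, 0.25, 0.125]
t = len(pre) - 1
dw = dy[t] * sum(cj * dx[t - j] for j, cj in enumerate(c, start=1) if dx[t - j] > 0)
```

Because the postsynaptic train follows the presynaptic one, the recent positive presynaptic changes coincide with a rise in postsynaptic frequency, so dw comes out positive (potentiation).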
As expected, the excitatory weight increases in magnitude over the three acquisition trials, while the inhibitory weight is stable. During the first two extinction trials, the excitatory and the net synaptic weights decrease in magnitude, while the inhibitory weight increases. Thus, the CS produces a decreasing number of output pulses (the CR). During the third extinction trial the net synaptic weight is so low that the CS cannot produce output pulses, and so the CR is extinct. However, as the net weight and excitatory weight remain positive, there are residual effects of the acquisition which will accelerate reacquisition. Because a threshold must be reached before a neuronal output pulse can be emitted, and because output must occur for weight changes to occur, pulse coding adds to the accelerated reacquisition effect that is evident in the original Klopf model; extinction is halted before the net weight is zero, when pulses can no longer be produced. Discussion To facilitate comparison between learning algorithms involving temporal-derivative computations and actual neuronal capabilities, we formulated a pulse-coded variation of Klopf's classical conditioning model. Our basic conclusion is that computing with temporal derivatives (or differences) as proposed by Kosko (1986), Klopf (1988), and Sutton (1988) is more stable and easier when reformulated for a more neuronally realistic, pulse-coded system than when the rules are formulated in terms of frequencies or activation. It is our hope that further examination of the characteristics of pulse-coded systems may reveal facts that bear on the characteristics of neuronal function. In reformulating these algorithms in terms of pulse-coding, our motivation has been to enable us to draw further parallels and connections between real-time behavioral models of learning and biological circuit models of the substrates underlying classical conditioning (e.g., Thompson, 1986; Gluck & Thompson,
1987; Donegan, Gluck, & Thompson, in press). More generally, noting the similarities and differences between algorithmic/behavioral theories and biological capabilities is one way of laying the groundwork for developing more complete integrated theories of the biological bases of associative learning (Donegan, Gluck, & Thompson, in press). Acknowledgments Correspondence should be addressed to: Mark A. Gluck, Dept. of Psychology, Jordan Hall, Bldg. 420, Stanford, CA 94305. For their commentary and critique on earlier drafts of this and related papers, we are indebted to Harry Klopf, Bart Kosko, Richard Sutton, and Richard Thompson. This research was supported by an Office of Naval Research Grant to R. F. Thompson and M. A. Gluck. References Bear, M. F., & Cooper, L. N. (in press). Molecular mechanisms for synaptic modification in the visual cortex: Interaction between theory and experiment. In M. A. Gluck & D. E. Rumelhart (Eds.), Neuroscience and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum Associates. Donegan, N. H., Gluck, M. A., & Thompson, R. F. (1989). Integrating behavioral and biological models of classical conditioning. In R. D. Hawkins & G. H. Bower (Eds.), Computational models of learning in simple neural systems (Volume 22 of The Psychology of Learning and Motivation). New York: Academic Press. Gluck, M. A., & Bower, G. H. (1988a). Evaluating an adaptive network model of human learning. Journal of Memory and Language, 27, 166-195. Gluck, M. A., & Bower, G. H. (1988b). From conditioning to category learning: An adaptive network model. Journal of Experimental Psychology: General, 117(3), 225-244. Gluck, M. A., Parker, D. B., & Reifsnider, E. (1988). Some biological implications of a differential-Hebbian learning rule. Psychobiology, 16(3), 298-302. Gluck, M. A., Reifsnider, E. S., & Thompson, R. F. (in press).
Adaptive signal processing and temporal coarse coding: Cerebellar models of classical conditioning and VOR adaptation. In M. A. Gluck & D. E. Rumelhart (Eds.), Neuroscience and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum Associates. Gluck, M. A., & Rumelhart, D. E. (in press). Neuroscience and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum Associates. Gluck, M. A., & Thompson, R. F. (1987). Modeling the neural substrates of associative learning and memory: A computational approach. Psychological Review, 94, 176-191. Granger, R., Ambros-Ingerson, J., Staubli, U., & Lynch, G. (in press). Memorial operation of multiple, interacting simulated brain structures. In M. A. Gluck & D. E. Rumelhart (Eds.), Neuroscience and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum Associates. Hebb, D. (1949). Organization of Behavior. New York: Wiley & Sons. Horowitz, P., & Hill, W. (1980). The Art of Electronics. Cambridge, England: Cambridge University Press. Klopf, A. H. (1988). A neuronal model of classical conditioning. Psychobiology, 16(2), 85-125. Kosko, B. (1986). Differential Hebbian learning. In J. S. Denker (Ed.), Neural Networks for Computing, AIP Conference Proceedings 151 (pp. 265-270). New York: American Institute of Physics. McNaughton, B. L., & Nadel, L. (in press). Hebb-Marr networks and the neurobiological representation of action in space. In M. A. Gluck & D. E. Rumelhart (Eds.), Neuroscience and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum Associates. Parker, D. B. (1987). Optimal algorithms for adaptive networks: Second order back propagation, second order direct propagation, and second order Hebbian learning. Proceedings of the IEEE First Annual Conference on Neural Networks. San Diego, California. Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and non-reinforcement. In A. H. Black & W. F.
Prokasy (Eds.), Classical conditioning II: Current research and theory. New York: Appleton-Century-Crofts. Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. Rumelhart & J. McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1: Foundations). Cambridge, MA: MIT Press. Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, 9-44. Sutton, R. S., & Barto, A. G. (1981). Toward a modern theory of adaptive networks: Expectation and prediction. Psychological Review, 88, 135-170. Sutton, R. S., & Barto, A. G. (1987). A temporal-difference model of classical conditioning. In Proceedings of the 9th Annual Conference of the Cognitive Science Society. Seattle, WA. Thompson, R. F. (1986). The neurobiology of learning and memory. Science, 233, 941-947. Werbos, P. (1974). Beyond regression: New tools for prediction and analysis in the behavioral sciences. Doctoral dissertation (Economics), Harvard University, Cambridge, Mass. Widrow, B., & Hoff, M. E. (1960). Adaptive switching circuits. Institute of Radio Engineers, Western Electronic Show and Convention, Convention Record, 4, 96-104. Part II Application
356 USING BACKPROPAGATION WITH TEMPORAL WINDOWS TO LEARN THE DYNAMICS OF THE CMU DIRECT-DRIVE ARM II K. Y. Goldberg and B. A. Pearlmutter School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 ABSTRACT Computing the inverse dynamics of a robot arm is an active area of research in the control literature. We hope to learn the inverse dynamics by training a neural network on the measured response of a physical arm. The input to the network is a temporal window of measured positions; output is a vector of torques. We train the network on data measured from the first two joints of the CMU Direct-Drive Arm II as it moves through a randomly-generated sample of "pick-and-place" trajectories. We then test generalization with a new trajectory and compare its output with the torque measured at the physical arm. The network is shown to generalize with a root mean square error/standard deviation (RMSS) of 0.10. We interpret the weights of the network in terms of the velocity and acceleration filters used in conventional control theory. INTRODUCTION Dynamics is the study of forces. The dynamic response of a robot arm relates joint torques to the position, velocity, and acceleration of its links. In order to control an arm at high speed, it is important to model this interaction. In practice, however, the dynamic response is extremely difficult to predict. A dynamic controller for a robot arm is shown in Figure 1. Feedforward torques for a desired trajectory are computed off-line using a model of arm dynamics and applied to the joints at every cycle in an effort to linearize the resulting system. An independent feedback loop at each joint is used to correct remaining errors and compensate for external disturbances. See [3] for an introduction to dynamic control of robot arms.
Conventional control theory has difficulty addressing physical effects such as friction [1], backlash [2], torque non-linearity (especially dead zone and saturation) [2], high-frequency dynamics [2], sampling effects [7], and sensor noise [7]. We propose to use a three-layer backpropagation network [4] with sigmoid thresholds to fill the box marked "inverse arm" in Figure 1. We will treat the arm as an unknown non-linear transfer function to be represented by the weights of the network.
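The idea can be sketched as follows. This is our illustration, not the authors' implementation: the network sizes, learning rate, and the synthetic "arm" (a finite-difference acceleration standing in for measured data) are assumptions for the example. A small network with sigmoid hidden units is trained by backpropagation to map a temporal window of positions to a torque:

```python
import numpy as np

rng = np.random.default_rng(0)
window, hidden = 5, 8                     # 5 past positions -> 1 torque
W1 = rng.normal(0.0, 0.5, (hidden, window))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, hidden)
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return h, float(W2 @ h + b2)          # linear output unit

def target_torque(x):
    # Toy stand-in for measured arm data: torque proportional to a
    # finite-difference acceleration of the position window.
    return x[-1] - 2.0 * x[-2] + x[-3]

probe = rng.uniform(-1, 1, (100, window))
def mean_sq_err():
    return float(np.mean([(forward(x)[1] - target_torque(x)) ** 2 for x in probe]))

err_before = mean_sq_err()
lr = 0.05
for _ in range(5000):                     # plain stochastic gradient descent
    x = rng.uniform(-1, 1, window)
    h, y = forward(x)
    err = y - target_torque(x)
    gh = err * W2 * h * (1.0 - h)         # backpropagated through the sigmoid
    W2 -= lr * err * h
    b2 -= lr * err
    W1 -= lr * np.outer(gh, x)
    b1 -= lr * gh
err_after = mean_sq_err()
```

After training, the squared error on held-out probe windows drops well below its initial value; inspecting W1 rows in a sketch like this is the analogue of interpreting the learned weights as velocity and acceleration filters.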
394 STORING COVARIANCE BY THE ASSOCIATIVE LONG-TERM POTENTIATION AND DEPRESSION OF SYNAPTIC STRENGTHS IN THE HIPPOCAMPUS Patric K. Stanton* and Terrence J. Sejnowski† Department of Biophysics Johns Hopkins University Baltimore, MD 21218 ABSTRACT In modeling studies of memory based on neural networks, both the selective enhancement and depression of synaptic strengths are required for efficient storage of information (Sejnowski, 1977a,b; Kohonen, 1984; Bienenstock et al, 1982; Sejnowski and Tesauro, 1989). We have tested this assumption in the hippocampus, a cortical structure of the brain that is involved in long-term memory. A brief, high-frequency activation of excitatory synapses in the hippocampus produces an increase in synaptic strength known as long-term potentiation, or LTP (Bliss and Lomo, 1973), that can last for many days. LTP is known to be Hebbian since it requires the simultaneous release of neurotransmitter from presynaptic terminals coupled with postsynaptic depolarization (Kelso et al, 1986; Malinow and Miller, 1986; Gustafsson et al, 1987). However, a mechanism for the persistent reduction of synaptic strength that could balance LTP has not yet been demonstrated. We studied the associative interactions between separate inputs onto the same dendritic trees of hippocampal pyramidal cells of field CA1, and found that a low-frequency input which, by itself, does not persistently change synaptic strength, can either increase (associative LTP) or decrease (associative long-term depression, or LTD) in strength depending upon whether it is positively or negatively correlated in time with a second, high-frequency bursting input. LTP of synaptic strength is Hebbian, and LTD is anti-Hebbian since it is elicited by pairing presynaptic firing with postsynaptic hyperpolarization sufficient to block postsynaptic activity.
Thus, associative LTP and associative LTD are capable of storing information contained in the covariance between separate, converging hippocampal inputs. *Present address: Departments of Neuroscience and Neurology, Albert Einstein College of Medicine, 1410 Pelham Parkway South, Bronx, NY 10461 USA. †Present address: Computational Neurobiology Laboratory, The Salk Institute, P.O. Box 85800, San Diego, CA 92138 USA. INTRODUCTION Associative LTP can be produced in some hippocampal neurons when low-frequency (weak) and high-frequency (strong) inputs to the same cells are simultaneously activated (Levy and Steward, 1979; Levy and Steward, 1983; Barrionuevo and Brown, 1983). When stimulated alone, a weak input does not have a long-lasting effect on synaptic strength; however, when paired with stimulation of a separate strong input sufficient to produce homosynaptic LTP of that pathway, the weak pathway is associatively potentiated. Neural network modeling studies have predicted that, in addition to this Hebbian form of plasticity, synaptic strength should be weakened when weak and strong inputs are anti-correlated (Sejnowski, 1977a,b; Kohonen, 1984; Bienenstock et al, 1982; Sejnowski and Tesauro, 1989). Evidence for heterosynaptic depression in the hippocampus has been found for inputs that are inactive (Levy and Steward, 1979; Lynch et al, 1977) or weakly active (Levy and Steward, 1983) during the stimulation of a strong input, but this depression did not depend on any pattern of weak input activity and was not typically as long-lasting as LTP. Therefore, we searched for conditions under which stimulation of a hippocampal pathway, rather than its inactivity, could produce either long-term depression or potentiation of synaptic strengths, depending on the pattern of stimulation. The stimulus paradigm that we used, illustrated in Fig.
1, is based on the finding that bursts of stimuli at 5 Hz are optimal in eliciting LTP in the hippocampus (Larson and Lynch, 1986). A high-frequency burst (STRONG) stimulus was applied to Schaffer collateral axons and a low-frequency (WEAK) stimulus given to a separate subicular input coming from the opposite side of the recording site, but terminating on dendrites of the same population of CA1 pyramidal neurons. Due to the rhythmic nature of the strong input bursts, each weak input shock could be either superimposed on the middle of each burst of the strong input (IN PHASE), or placed symmetrically between bursts (OUT OF PHASE). RESULTS Extracellular evoked field potentials were recorded from the apical dendritic and somatic layers of CA1 pyramidal cells. The weak stimulus train was first applied alone and did not itself induce long-lasting changes. The strong site was then stimulated alone, which elicited homosynaptic LTP of the strong pathway but did not significantly alter the amplitude of responses to the weak input. When weak and strong inputs were activated IN PHASE, there was an associative LTP of the weak input synapses, as shown in Fig. 2a. Both the synaptic excitatory postsynaptic potential (e.p.s.p.) (Δe.p.s.p. = +49.8 ± 7.8%, n=20) and population action potential (Δspike = +65.4 ± 16.0%, n=14) were significantly enhanced for at least 60 min up to 180 min following stimulation. In contrast, when weak and strong inputs were applied OUT OF PHASE, they elicited an associative long-term depression (LTD) of the weak input synapses, as shown in Fig. 2b. There was a marked reduction in the population spike (-46.5 ± 11.4%, n=10) with smaller decreases in the e.p.s.p. (-13.8 ± 3.5%, n=13). Note that the stimulus patterns applied to each input were identical in these two experiments, and only the relative phase of the weak and strong stimuli was altered. With these stimulus patterns,
synaptic strength could be repeatedly enhanced and depressed in a single slice, as illustrated in Fig. 2c. As a control experiment to determine whether information concerning covariance between the inputs was actually a determinant of plasticity, we combined the in phase and out of phase conditions, giving both the weak input shocks superimposed on the bursts plus those between the bursts, for a net frequency of 10 Hz. This pattern, which resulted in zero covariance between weak and strong inputs, produced no net change in weak input synaptic strength measured by extracellular evoked potentials. Thus, the associative LTP and LTD mechanisms appear to be balanced in a manner ideal for the storage of temporal covariance relations.

Figure 1. Hippocampal slice preparation and stimulus paradigms. a: The in vitro hippocampal slice showing recording sites in CA1 pyramidal cell somatic (stratum pyramidale) and dendritic (stratum radiatum) layers, and stimulus sites activating Schaffer collateral (STRONG) and commissural (WEAK) afferents. Hippocampal slices (400 μm thick) were incubated in an interface slice chamber at 34-35°C. Extracellular (1-5 MΩ resistance, 2M NaCl filled) and intracellular (70-120 MΩ, 2M K-acetate filled) recording electrodes, and bipolar glass-insulated platinum wire stimulating electrodes (50 μm tip diameter), were prepared by standard methods (Mody et al, 1988). b: Stimulus paradigms used. Strong input stimuli (STRONG INPUT) were four trains of 100 Hz bursts. Each burst had 5 stimuli and the interburst interval was 200 msec. Each train lasted 2 seconds for a total of 50 stimuli. Weak input stimuli (WEAK INPUT) were four trains of shocks at 5 Hz frequency, each train lasting for 2 seconds. When these inputs were IN PHASE, the weak single shocks were superimposed on the middle of each burst of the strong input. When the weak input was OUT OF PHASE, the single shocks were placed symmetrically between the bursts.

Figure 2. Illustration of associative long-term potentiation (LTP) and associative long-term depression (LTD) using extracellular recordings. a: Associative LTP of evoked excitatory postsynaptic potentials (e.p.s.p.'s) and population action potential responses in the weak input. Test responses are shown before (Pre) and 30 min after (Post) application of weak stimuli in phase with the coactive strong input. b: Associative LTD of evoked e.p.s.p.'s and population spike responses in the weak input. Test responses are shown before (Pre) and 30 min after (Post) application of weak stimuli out of phase with the coactive strong input. c: Time course of the changes in population spike amplitude observed at each input for a typical experiment. Test responses from the strong input (S, open circles) show that the high-frequency bursts (5 pulses/100 Hz, 200 msec interburst interval as in Fig. 1) elicited synapse-specific LTP independent of other input activity. Test responses from the weak input (W, filled circles) show that stimulation of the weak pathway out of phase with the strong one produced associative LTD (Assoc LTD) of this input. Associative LTP (Assoc LTP) of the same pathway was then elicited following in phase stimulation. Amplitude and duration of associative LTD or LTP could be increased by stimulating input pathways with more trains of shocks.

The simultaneous depolarization of the postsynaptic membrane and activation of glutamate receptors of the N-methyl-D-aspartate (NMDA) subtype appears to be necessary for LTP induction (Collingridge et al, 1983; Harris et al, 1984; Wigstrom and Gustafsson, 1984). The spread of current from strong to weak synapses in the dendritic tree, coupled with release of glutamate from the weak inputs, could account for the ability of the strong pathway to associatively potentiate a weak one (Kelso et al, 1986; Malinow and Miller, 1986; Gustafsson et al, 1987). Consistent with this hypothesis, we find that the NMDA receptor antagonist 2-amino-5-phosphonovaleric acid (AP5, 10 μM) blocks induction of associative LTP in CA1 pyramidal neurons (data not shown, n=5). In contrast, the application of AP5 to the bathing solution at this same concentration had no significant effect on associative LTD (data not shown, n=6). Thus, the induction of LTD seems to involve cellular mechanisms different from associative LTP. The conditions necessary for LTD induction were explored in another series of experiments using intracellular recordings from CA1 pyramidal neurons made using standard techniques (Mody et al, 1988). Induction of associative LTP (Fig. 3; WEAK S+W IN PHASE) produced an increase in amplitude of the single-cell evoked e.p.s.p. and a lowered action potential threshold in the weak pathway, as reported previously (Barrionuevo and Brown, 1983). Conversely, the induction of associative LTD (Fig. 3; WEAK S+W OUT OF PHASE) was accompanied by a long-lasting reduction of e.p.s.p. amplitude and a reduced ability to elicit action potential firing. As in control extracellular experiments, the weak input alone produced no long-lasting alterations in intracellular e.p.s.p.'s or firing properties, while the strong input alone yielded specific increases of the strong pathway e.p.s.p. without altering e.p.s.p.'s elicited by weak input stimulation.

Figure 3.
Demonstration of associative LTP and LTD using intracellular recordings from a CA1 pyramidal neuron. Intracellular e.p.s.p.'s prior to repetitive stimulation (Pre), 30 min after out of phase stimulation (S+W OUT OF PHASE), and 30 min after subsequent in phase stimuli (S+W IN PHASE). The strong input (Schaffer collateral side, lower traces) exhibited LTP of the evoked e.p.s.p. independent of weak input activity. Out of phase stimulation of the weak (Subicular side, upper traces) pathway produced a marked, persistent reduction in e.p.s.p. amplitude. In the same cell, subsequent in phase stimuli resulted in associative LTP of the weak input that reversed the LTD and enhanced amplitude of the e.p.s.p. past the original baseline. (RMP = -62 mV, RN = 30 MΩ) A weak stimulus that is out of phase with a strong one arrives when the postsynaptic neuron is hyperpolarized as a consequence of inhibitory postsynaptic potentials and afterhyperpolarization from mechanisms intrinsic to pyramidal neurons. This suggests that postsynaptic hyperpolarization coupled with presynaptic activation may trigger LTD. To test this hypothesis, we injected current with intracellular microelectrodes to hyperpolarize or depolarize the cell while stimulating a synaptic input. Pairing the injection of depolarizing current with the weak input led to LTP of those synapses (Fig. 4a; STIM; Figure 4. Pairing of postsynaptic hyperpolarization with stimulation of synapses on CA1 hippocampal pyramidal neurons produces LTD specific to the activated pathway, while pairing of postsynaptic depolarization with synaptic stimulation produces synapse-specific LTP.
a: Intracellular evoked e.p.s.p.'s are shown at stimulated (STIM) and unstimulated (CONTROL) pathway synapses before (Pre) and 30 min after (Post) pairing a 20 mV depolarization (constant current +2.0 nA) with 5 Hz synaptic stimulation. The stimulated pathway exhibited associative LTP of the e.p.s.p., while the control, unstimulated input showed no change in synaptic strength. (RMP = -65 mV; RN = 35 MΩ) b: Intracellular e.p.s.p.'s are shown evoked at stimulated and control pathway synapses before (Pre) and 30 min after (Post) pairing a 20 mV hyperpolarization (constant current -1.0 nA) with 5 Hz synaptic stimulation. The input (STIM) activated during the hyperpolarization showed associative LTD of synaptic evoked e.p.s.p.'s, while synaptic strength of the silent input (CONTROL) was unaltered. (RMP = -62 mV; RN = 38 MΩ) +64.0 ± 9.7%, n=4), while a control input inactive during the stimulation did not change (CONTROL), as reported previously (Kelso et al, 1986; Malinow and Miller, 1986; Gustafsson et al, 1987). Conversely, prolonged hyperpolarizing current injection paired with the same low-frequency stimuli led to induction of LTD in the stimulated pathway (Fig. 4b; STIM; -40.3 ± 6.3%, n=6), but not in the unstimulated pathway (CONTROL). The application of either depolarizing current, hyperpolarizing current, or the weak 5 Hz synaptic stimulation alone did not induce long-term alterations in synaptic strengths. Thus, hyperpolarization and simultaneous presynaptic activity supply sufficient conditions for the induction of LTD in CA1 pyramidal neurons. CONCLUSIONS These experiments identify a novel form of anti-Hebbian synaptic plasticity in the hippocampus and confirm predictions made from modeling studies of information storage in neural networks.
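The pairing results above behave like a covariance rule in the sense of Sejnowski (1977a,b): the weight change tracks the product of the deviations of pre- and postsynaptic activity from their mean levels, so in-phase (correlated) activity potentiates and out-of-phase (anticorrelated) activity depresses. A minimal numerical sketch of such a rule; the function name, the square-wave inputs, and the learning rate are our own illustrative assumptions, not taken from the experiments:

```python
import numpy as np

def covariance_update(w, pre, post, eta=0.01):
    """One covariance-rule weight update (Sejnowski 1977 style).

    pre, post: arrays of pre-/postsynaptic activity over time.
    The weight grows when activities covary (in-phase pairing -> LTP)
    and shrinks when they anti-covary (out-of-phase pairing -> LTD).
    """
    dpre = pre - pre.mean()
    dpost = post - post.mean()
    return w + eta * np.mean(dpre * dpost)

t = np.arange(100)
strong = (np.sin(2 * np.pi * t / 20) > 0).astype(float)  # bursting "strong" input
in_phase = strong                                        # weak input in phase
out_phase = 1.0 - strong                                 # weak input out of phase

w0 = 0.5
w_ltp = covariance_update(w0, in_phase, strong)
w_ltd = covariance_update(w0, out_phase, strong)
assert w_ltp > w0 > w_ltd  # in-phase potentiates, out-of-phase depresses
```

Because the out-of-phase input is exactly the complement of the in-phase one here, the LTP and LTD updates are symmetric about the starting weight, mirroring the balanced potentiation/depression described above.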
Unlike previous reports of synaptic depression in the hippocampus, the plasticity is associative, long-lasting, and is produced when presynaptic activity occurs while the postsynaptic membrane is hyperpolarized. In combination with Hebbian mechanisms also present at hippocampal synapses, associative LTP and associative LTD may allow neurons in the hippocampus to compute and store covariance between inputs (Sejnowski, 1977a,b; Stanton and Sejnowski, 1989). These findings make temporal as well as spatial context an important feature of memory mechanisms in the hippocampus. Elsewhere in the brain, the receptive field properties of cells in cat visual cortex can be altered by visual experience paired with iontophoretic excitation or depression of cellular activity (Fregnac et al, 1988; Greuel et al, 1988). In particular, the chronic hyperpolarization of neurons in visual cortex coupled with presynaptic transmitter release leads to a long-term depression of the active, but not inactive, inputs from the lateral geniculate nucleus (Reiter and Stryker, 1988). Thus, both Hebbian and anti-Hebbian mechanisms found in the hippocampus seem to also be present in other brain areas, and covariance of firing patterns between converging inputs is a likely key to understanding higher cognitive function. This research was supported by grants from the National Science Foundation and the Office of Naval Research to TJS. We thank Drs. Charles Stevens and Richard Morris for discussions about related experiments. References Bienenstock, E., Cooper, L.N. and Munro, P. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2, 32-48 (1982). Barrionuevo, G. and Brown, T.H. Associative long-term potentiation in hippocampal slices. Proc. Nat. Acad. Sci. (USA) 80, 7347-7351 (1983). Bliss, T.V.P. and Lomo, T.
Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. J. Physiol. (Lond.) 232, 331-356 (1973). Collingridge, G.L., Kehl, S.J. and McLennan, H. Excitatory amino acids in synaptic transmission in the Schaffer collateral-commissural pathway of the rat hippocampus. J. Physiol. (Lond.) 334, 33-46 (1983). Fregnac, Y., Shulz, D., Thorpe, S. and Bienenstock, E. A cellular analogue of visual cortical plasticity. Nature (Lond.) 333, 367-370 (1988). Greuel, J.M., Luhmann, H.J. and Singer, W. Pharmacological induction of use-dependent receptive field modifications in visual cortex. Science 242, 74-77 (1988). Gustafsson, B., Wigstrom, H., Abraham, W.C. and Huang, Y.Y. Long-term potentiation in the hippocampus using depolarizing current pulses as the conditioning stimulus to single volley synaptic potentials. J. Neurosci. 7, 774-780 (1987). Harris, E.W., Ganong, A.H. and Cotman, C.W. Long-term potentiation in the hippocampus involves activation of N-methyl-D-aspartate receptors. Brain Res. 323, 132-137 (1984). Kelso, S.R., Ganong, A.H. and Brown, T.H. Hebbian synapses in hippocampus. Proc. Natl. Acad. Sci. USA 83, 5326-5330 (1986). Kohonen, T. Self-Organization and Associative Memory. (Springer-Verlag, Heidelberg, 1984). Larson, J. and Lynch, G. Synaptic potentiation in hippocampus by patterned stimulation involves two events. Science 232, 985-988 (1986). Levy, W.B. and Steward, O. Synapses as associative memory elements in the hippocampal formation. Brain Res. 175, 233-245 (1979). Levy, W.B. and Steward, O. Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neuroscience 8, 791-797 (1983). Lynch, G.S., Dunwiddie, T. and Gribkoff, V. Heterosynaptic depression: a postsynaptic correlate of long-term potentiation. Nature (Lond.) 266, 737-739 (1977). Malinow, R. and Miller, J.P.
Postsynaptic hyperpolarization during conditioning reversibly blocks induction of long-term potentiation. Nature (Lond.) 320, 529-530 (1986). Mody, I., Stanton, P.K. and Heinemann, U. Activation of N-methyl-D-aspartate (NMDA) receptors parallels changes in cellular and synaptic properties of dentate gyrus granule cells after kindling. J. Neurophysiol. 59, 1033-1054 (1988). Reiter, H.O. and Stryker, M.P. Neural plasticity without postsynaptic action potentials: Less-active inputs become dominant when kitten visual cortical cells are pharmacologically inhibited. Proc. Natl. Acad. Sci. USA 85, 3623-3627 (1988). Sejnowski, T.J. and Tesauro, G. Building network learning algorithms from Hebbian synapses, in: Brain Organization and Memory, J.L. McGaugh, N.M. Weinberger, and G. Lynch, Eds. (Oxford Univ. Press, New York, in press). Sejnowski, T.J. Storing covariance with nonlinearly interacting neurons. J. Math. Biology 4, 303-321 (1977). Sejnowski, T.J. Statistical constraints on synaptic plasticity. J. Theor. Biology 69, 385-389 (1977). Stanton, P.K. and Sejnowski, T.J. Associative long-term depression in the hippocampus: Evidence for anti-Hebbian synaptic plasticity. Nature (Lond.), in review. Wigstrom, H. and Gustafsson, B. A possible correlate of the postsynaptic condition for long-lasting potentiation in the guinea pig hippocampus in vitro. Neurosci. Lett. 44, 327-332 (1984).
|
1988
|
92
|
182
|
DYNAMIC, NON-LOCAL ROLE BINDINGS AND INFERENCING IN A LOCALIST NETWORK FOR NATURAL LANGUAGE UNDERSTANDING* Trent E. Lange Michael G. Dyer Artificial Intelligence Laboratory Computer Science Department University of California, Los Angeles Los Angeles, CA 90024 ABSTRACT This paper introduces a means to handle the critical problem of non-local role-bindings in localist spreading-activation networks. Every conceptual node in the network broadcasts a stable, uniquely-identifying activation pattern, called its signature. A dynamic role-binding is created when a role's binding node has an activation that matches the bound concept's signature. Most importantly, signatures are propagated across long paths of nodes to handle the non-local role-bindings necessary for inferencing. Our localist network model, ROBIN (ROle Binding and Inferencing Network), uses signature activations to robustly represent schemata role-bindings and thus perform the inferencing, plan/goal analysis, schema instantiation, word-sense disambiguation, and dynamic re-interpretation portions of the natural language understanding process. MOTIVATION Understanding natural language is a difficult task, often requiring a reader to make multiple inferences to understand the motives of actors and to connect actions that are unrelated on the basis of surface semantics alone. An example of this is the sentence: S1: "John put the pot inside the dishwasher because the police were coming." A complex plan/goal analysis of S1 must be made to understand the actors' actions and disambiguate "pot" to MARIJUANA by overriding the local context of "dishwasher". *This research is supported in part by a contract with the JTF program of the DOD and grants from the ITA Foundation and the Hughes Artificial Intelligence Center.
545 546 Lange and Dyer DISTRIBUTED SPREADING-ACTIVATION NETWORKS Distributed connectionist models, such as [McClelland and Kawamoto, 1986] and [Touretzky and Hinton, 1985], are receiving much interest because their models are closer to the neural level than symbolic systems, such as [Dyer, 1983]. Despite this attention, no distributed network has yet exhibited the ability to handle natural language input having complexity even near to that of S1. The primary reason for this current lack of success is the inability to perform dynamic role-bindings and to propagate these binding constraints during inferencing. Distributed networks, furthermore, are sequential at the knowledge level and lack the representation of structure needed to handle complex conceptual relationships [Feldman, 1986]. LOCALIST SPREADING-ACTIVATION NETWORKS Localist spreading-activation networks, such as [Cottrell and Small, 1983] and [Waltz and Pollack, 1985], also seem more neurally plausible than symbolic logic/Lisp-based systems. Knowledge is represented in localist networks by simple computational nodes and their interconnections, with each node standing for a distinct concept. Activation on a conceptual node represents the amount of evidence available for that concept in the current context. Unlike distributed networks, localist networks are parallel at the knowledge level and are able to represent structural relationships between concepts. Because of this, many different inference paths can be pursued simultaneously; a necessity if the quick responses that people are able to generate are to be modelled. Unfortunately, however, the evidential activation on the conceptual nodes of previous localist networks gives no clue as to where that evidence came from.
Because of this, previous localist models have been similar to distributed connectionist models in their inability to handle dynamic, non-local bindings -- and thus remain unsuited to higher-level knowledge tasks where inferencing is required. ROBIN Our research has resulted in ROBIN (ROle Binding and Inferencing Network), a localist spreading-activation model with additional structure to handle the dynamic role-bindings and inferencing needed for building in-depth representations of complex and ambiguous sentences, such as S1. ROBIN's networks are built entirely with simple computational elements that clearly have the possibility of realization at the neural level. Figure 1 shows an overview of a semantic network embedded in ROBIN after input for sentence S1 has been presented. The network has made the inferences necessary to form a plan/goal analysis of the actors' actions, with the role-bindings being instantiated dynamically with activation. The final interpretation selected is the most highly-activated path of frames inside the darkly shaded area. As in previous localist models, ROBIN's networks have a node for every known concept in the network. Figure 1. Semantic network embedded in ROBIN, showing inferences dynamically made after S1 is presented. Thickness of frame boundaries shows their amount of evidential activation. Darkly shaded area indicates the most highly-activated path of frames representing the most probable plan/goal analysis of the input. Dashed area shows discarded dishwasher-cleaning interpretation. Frames outside of both areas show a small portion of the network that received no evidential or signature activation. Each frame is actually represented by the connectivity of a set of nodes. Relations between concepts are represented by weighted connections between their respective nodes. The activation of a conceptual node is evidential,
corresponding to the amount of evidence available for the concept and the likelihood that it is selected in the current context. Simply representing the amount of evidence available for a concept, however, is not sufficient for complex language understanding tasks. Role-binding requires that some means exist for identifying a concept that is being dynamically bound to a role in distant areas of the network. A network may have never heard about JOHN having the goal of AVOID-DETECTION of his MARIJUANA, but it must be able to infer just such a possibility to understand S1. SIGNATURE ACTIVATION IN ROBIN Every conceptual node in ROBIN's localist network has associated with it an identification node broadcasting a stable, uniquely-identifying activation pattern, called its signature. A dynamic binding is created when a role's binding node has an activation that matches the activation of the bound concept's signature node. Figure 2. Several concepts and their uniquely-identifying signature nodes are shown, along with the Actor role of the TRANSFER-INSIDE frame. The dotted arrow from the binding node (black circle) to the signature node of JOHN represents the virtual binding indicated by the shared signature activation, and does not exist as an actual connection. In Figure 2, the virtual binding of the Actor role node of action TRANSFER-INSIDE to JOHN is represented by the fact that its binding node, the solid black circle, has the same activation (3.1) as JOHN's signature node. PROPAGATION OF SIGNATURES FOR ROLE-BINDING The most important feature of ROBIN's signature activations is that the model passes them, as activation, across long paths of nodes to handle the non-local role-bindings necessary for inferencing.
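The binding-by-signature-match idea can be captured in a toy sketch; the dictionary and matching function below are our own illustration, not ROBIN's actual spreading-activation machinery, though the signature values 3.1, 6.8 and 9.2 are the ones quoted in the text:

```python
# Toy illustration of ROBIN-style signature bindings.
SIGNATURES = {"John": 3.1, "Marijuana": 6.8, "Cooking-Pot": 9.2}

def bound_concept(binding_activation, tol=1e-6):
    """A role is virtually bound to whichever concept's signature
    matches the activation held on the role's binding node."""
    for concept, sig in SIGNATURES.items():
        if abs(binding_activation - sig) < tol:
            return concept
    return None

# The Actor binding node of TRANSFER-INSIDE carries activation 3.1,
# so the role is bound to JOHN without any hardwired John->Actor link.
actor_binding = SIGNATURES["John"]
assert bound_concept(actor_binding) == "John"
```

The point of the sketch is that the binding is purely a matter of shared activation values: no dedicated connection between JOHN and the Actor role needs to exist.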
Figure 3 illustrates how the structure of the network automatically accomplishes this in a ROBIN network segment that implements a portion of the semantic network of Figure 1. Figure 3. Simplified ROBIN network segment showing parallel paths over which evidential activation (bottom plane) and signature activation (top plane) are spread for inferencing. Signature nodes (rectangles) and binding nodes (solid black circles) are in the top plane. Thickness of conceptual node boundaries (ovals) represents their level of evidential activation after quiescence has been reached for sentence S1. (The names on the nodes are not used by ROBIN in any way, being used simply to set up the network's structure initially and to aid in analysis.) Evidential activation is spread through the paths between conceptual nodes on the bottom plane (i.e. TRANSFER-INSIDE and its Object role), while signature activation for dynamic role-bindings is spread across the parallel paths of corresponding binding nodes on the top plane. Nodes and connections for the Actor, Planner, and Location roles are not shown. Initially there is no activation on any of the conceptual or binding nodes in the network. When input for S1 is presented, the concept TRANSFER-INSIDE receives evidential activation from the phrase "John put the pot inside the dishwasher", while the binding nodes of its Object role get the activations (6.8 and 9.2) of the signatures for MARIJUANA and COOKING-POT, representing the candidate bindings from the word "pot".
As activation starts to spread, INSIDE-OF receives evidential activation from TRANSFER-INSIDE, representing the strong evidence that something is now inside of something else. Concurrently, the signature activations on the binding nodes of TRANSFER-INSIDE's Object propagate to the corresponding binding nodes of INSIDE-OF's Object. The network has thus made the crucial inference of exactly which thing is inside of the other. Similarly, as time goes on, INSIDE-OF-DISHWASHER and INSIDE-OF-OPAQUE receive evidential activation, with inferencing continuing by the propagation of signature activation to their corresponding binding nodes. SPREAD OF ACTIVATION IN SENTENCE S1 The rest of the semantic network needed to understand S1 (Figure 1) is also built utilizing the structure of Figure 3. Both evidential and signature activation continue to spread from the phrase "John put the pot inside the dishwasher", propagating along the chain of related concepts down to the CLEAN goal, with some reaching goal AVOID-DETECTION. The phrase "because the police were coming" then causes evidential and signature activation to spread along a path from TRANSFER-SELF to both goals POLICE-CAPTURE and AVOID-DETECTION, until the activation of the network finally settles. SELECTING AMONG CANDIDATE BINDINGS In Figure 3, signature activations for both of the ambiguous meanings of the word "pot" were propagated along the Object roles, with MARIJUANA and COOKING-POT being the candidate bindings for the role. The network's interpretation of which concept is selected at any given time is the binding whose concept has greater evidential activation. Because all candidate bindings are spread along the network, with none being discarded until processing is completed, ROBIN is easily able to handle meaning re-interpretations without resorting to backtracking.
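Selection among candidate bindings can likewise be sketched abstractly (the numbers and function name are illustrative assumptions): every candidate's signature stays propagated, and the current interpretation is simply the candidate whose concept node carries the most evidential activation, so new evidence re-selects rather than backtracks.

```python
# Both candidate signatures remain propagated along the Object role paths.
candidate_signatures = {"Marijuana": 6.8, "Cooking-Pot": 9.2}
# Early evidential activations (illustrative): the stereotypical CLEAN goal
# of the dishwasher initially favors the cooking-pot reading.
evidence = {"Marijuana": 0.4, "Cooking-Pot": 0.7}

def selected_binding(evidence):
    # The interpretation at any moment is the candidate whose concept
    # node has the greatest evidential activation; since no candidate
    # is ever discarded, later evidence can reverse the choice
    # without any backtracking.
    return max(evidence, key=evidence.get)

assert selected_binding(evidence) == "Cooking-Pot"
evidence["Marijuana"] = 0.9  # feedback from the POLICE-CAPTURE inference path
assert selected_binding(evidence) == "Marijuana"
```

The re-interpretation in the sketch mirrors the MARIJUANA/COOKING-POT reversal described in the text: only the relative evidential activations change, never the set of candidates.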
For example, a re-interpretation of the word "pot" back to COOKING-POT occurs when S1 is followed by "They were coming over for dinner." During the interpretation of S1, COOKING-POT initially receives more evidential activation than MARIJUANA by connections from the highly stereotypical usage of the dishwasher for the CLEAN goal. The network's decision between the two candidate bindings at that point would be that it was a COOKING-POT that was INSIDE-OF the DISHWASHER. However, reinforcement and feedback from the inference paths generated by the POLICE's TRANSFER-SELF eventually cause MARIJUANA to win out. The final selection of MARIJUANA over the COOKING-POT bindings is represented simply by the fact that MARIJUANA has greater evidential activation. The resulting most highly-activated path of nodes and non-local bindings represents the plan/goal analysis in Figure 1. A more detailed description of ROBIN's network structure can be found in [Lange, 1989]. EVIDENTIAL VS SIGNATURE ACTIVATION It is important to emphasize the differences between ROBIN's evidential and signature activation. Both are simply activation from a computational point of view, but they propagate across separate pathways and fulfil different functions. Evidential Activation: 1) Previous work -- Similar to the activation of previous localist models. 2) Function -- Activation on a node represents the amount of evidence available for a node and the likelihood that its concept is selected in the current context. 3) Node pathways -- Spreads along weighted evidential pathways between related frames. 4) Dynamic structure -- Decides among candidate structures; i.e. in Figure 1, MARIJUANA is more highly-activated than COOKING-POT, so is selected as the currently most plausible role-binding throughout the inference path. Signature Activation: 1) Previous work -- First introduced in ROBIN.
2) Function -- Activation on a node is part of a unique pattern of signature activation representing a dynamic, virtual binding of the signature's concept. 3) Node pathways -- Spreads along role-binding paths between corresponding roles of related frames. 4) Dynamic structure -- Represents a potential (candidate) dynamic structure; i.e., that either MARIJUANA or COOKING-POT is INSIDE-OF a DISHWASHER. NETWORK BUILDING BLOCKS AND NEURAL PLAUSIBILITY ROBIN builds its networks with elements that each perform a simple computation on their inputs: summation, summation with thresholding and decay, multiplication, or maximization. The connections between units are either weighted excitatory or inhibitory. Max units, i.e. those outputting the maximum of their inputs, are used because of their ability to pass on signature activations without alteration. ROBIN's most controversial element will likely be the signature-producing nodes that generate the uniquely-identifying activations upon which dynamic role-binding is based. These identifier nodes need to broadcast their unique signature activation throughout the time the concept they represent is active, and be able to broadcast the same signature whenever needed. Reference to neuroscience literature [Segundo et al., 1981, 1964] reveals that self-feedbacking groups of "pacemaker" neurons have roughly this ability: "The mechanism described determines stable patterns in which, over a clearly defined frequency range, the output discharge is locked in phase and frequency ... " [Segundo et al., 1964] Similar to pacemakers are central pattern generators (CPGs) [Ryckebusch et al., 1988], which produce different stable patterns of neuronal oscillations. Groups of pacemakers or CPGs could conceivably be used to build ROBIN's signature-producing nodes, with oscillator phase-locking implementing virtual bindings of signatures. 
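At the computational level, the four element types listed above can be written as pure functions; this is a schematic reading, and the threshold, decay constant, and example values below are our assumptions rather than ROBIN's actual parameters:

```python
# The four ROBIN unit types as pure functions (schematic sketch).
def summation(inputs, weights):
    return sum(w * x for w, x in zip(weights, inputs))

def threshold_decay(prev, inputs, weights, theta=0.5, decay=0.9):
    # Summation with thresholding and decay: sub-threshold input
    # contributes nothing; prior activation decays each step.
    s = summation(inputs, weights)
    return decay * prev + (s if s > theta else 0.0)

def product(inputs):
    out = 1.0
    for x in inputs:
        out *= x
    return out

def maximum(inputs):
    # Max units pass the largest input through unchanged, which is what
    # lets signature activations travel long binding paths undistorted.
    return max(inputs)

assert maximum([0.0, 6.8, 3.1]) == 6.8  # a signature value survives intact
```

The max unit is the interesting one for role-binding: any weighted or thresholded combination would corrupt a signature value, while taking the maximum preserves it exactly along a path of binding nodes.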
In any case, the simple computational elements ROBIN is built upon appear to be as neurally plausible as those of current distributed models. FUTURE WORK There are several directions for future research: (1) Self-organization of network structure -- non-local bindings allow ROBIN to create novel network instances over its pre-existing structure. Over time, repeated instantiations should cause modification of weights and recruitment of underutilized nodes to alter the network structure. (2) Signature dynamics -- currently, the identifying signatures are single arbitrary activations; instead, signatures should be distributed patterns of activation that are learned adaptively over time, with similar concepts possessing similar signature patterns. CONCLUSION This paper describes ROBIN, a domain-independent localist spreading-activation network model that approaches many of the problems of natural language understanding, including those of inferencing and frame selection. To allow this, the activation on the network's simple computational nodes is of one of two types: (a) evidential activation, to indicate the likelihood that a concept is selected, and (b) signature activation, to uniquely identify concepts and allow the representation and propagation of dynamic virtual role-bindings not possible in previous localist or distributed models. ROBIN's localist networks use the spread of evidential and signature activation along their built-in structure of simple computational nodes to form a single most highly-activated path representing a plan/goal analysis of the input. It thus performs the inferencing, plan/goal analysis, schema instantiation, word-sense disambiguation, and dynamic re-interpretation tasks required for natural language understanding. References Cottrell, G. & Small, S. (1982): A Connectionist Scheme for Modeling Word-Sense Disambiguation. Cognition and Brain Theory, 6, p. 89-120. Dyer, M. G.
(1983): In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension, MIT Press, Cambridge, MA. Feldman, J. A. (1986): Neural Representation of Conceptual Knowledge. (Technical Report TR 189), Department of Computer Science, University of Rochester. Lange, T. (1989): (forthcoming) High-Level Inferencing in a Localist Network. Master's Thesis, Department of Computer Science, University of California, Los Angeles. McClelland, J. L. & Kawamoto, A. H. (1986): Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences. In McClelland & Rumelhart (eds.) Parallel Distributed Processing: Vol 2. Cambridge, MA: The MIT Press. Ryckebusch, S., Mead, C., & Bower, J. M. (1988): Modeling a Central Pattern Generator in Software and Hardware: Tritonia in Sea Moss. Proceedings of IEEE Conference on Neural Information Processing Systems -- Natural and Synthetic (NIPS88), Denver, CO. Segundo, J. P., Perkel, D. H., Schulman, J. H., Bullock, T. H., & Moore, G. P. (1964): Pacemaker Neurons: Effects of Regularly Spaced Synaptic Input. Science, Volume 145, Number 3627, p. 61-63. Segundo, J. P. & Kohn, A. F. (1981): A Model of Excitatory Synaptic Interactions Between Pacemakers. Its Reality, its Generality, and the Principles Involved. Biological Cybernetics, Volume 40, p. 113-126. Touretzky, D. S. & Hinton, G. E. (1985): Symbols among the Neurons: Details of a Connectionist Inference Architecture. Proceedings of the International Joint Conference on Artificial Intelligence, Los Angeles, CA. Waltz, D. & Pollack, J. (1985): Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation. Cognitive Science, Volume 9, Number 1, p. 51-74.
|
1988
|
93
|
183
|
FIXED POINT ANALYSIS FOR RECURRENT NETWORKS Patrice Y. Simard Mary B. Ottaway Dana H. Ballard Dept. of Computer Science University of Rochester Rochester NY 14627 ABSTRACT This paper provides a systematic analysis of the recurrent backpropagation (RBP) algorithm, introducing a number of new results. The main limitation of the RBP algorithm is that it assumes the convergence of the network to a stable fixed point in order to backpropagate the error signals. We show by experiment and eigenvalue analysis that this condition can be violated and that chaotic behavior can be avoided. Next we examine the advantages of RBP over the standard backpropagation algorithm. RBP is shown to build stable fixed points corresponding to the input patterns. This makes it an appropriate tool for content addressable memories, one-to-many function learning, and inverse problems. INTRODUCTION In the last few years there has been a great resurgence of interest in neural network learning algorithms. One of the most successful of these is the Backpropagation learning algorithm of [Rumelhart 86], which has shown its usefulness in a number of applications. This algorithm is representative of others that exploit internal units to represent very nonlinear decision surfaces [Lippman 87] and thus overcomes the limits of the classical perceptron [Rosenblatt 62]. With its enormous advantages, the backpropagation algorithm has a number of disadvantages. Two of these are the inability to fill in patterns and the inability to solve one-to-many inverse problems [Jordan 88]. These limitations follow from the fact that the algorithm is only defined for a feedforward network. Thus if part of the pattern is missing or corrupted in the input, this error will be propagated through to the output and the original pattern will not be restored. In one-to-many problems, several solutions are possible for a given input.
On a feedforward net, the competing targets for a given input introduce contradictory error signals and learning is unsuccessful. Very recently, these limitations have been removed with the specification of a recurrent backpropagation algorithm [Pineda 87]. This algorithm effectively extends the backpropagation idea to networks of arbitrary connection topologies. This advantage, however, does not come without some risk. Since the connections in the network are not symmetric, the stability of the network is not guaranteed. For some choices of weights, the state of the units may oscillate indefinitely. This paper provides a systematic analysis of the recurrent backpropagation (RBP) algorithm, introducing a number of new results. The main limitation of the RBP algorithm is that it assumes the convergence of the network to a stable fixed point in order to 149 150 Simard, Ottaway and Ballard backpropagate the error signals. We show by experiment and eigenvalue analysis that this condition can be violated and that chaotic behavior can be avoided. Next we examine the advantage in convergence speed of RBP over the standard backpropagation algorithm. RBP is shown to build stable fixed points corresponding to the input patterns. This makes it an appropriate tool for content addressable memories, many-to-one function learning and inverse problems. MODEL DESCRIPTION The simulations have been done on a recurrent backpropagation network with first order units. Using the same formalism as [Pineda 87], the vector state x is updated according to the equation dx_i/dt = -x_i + g(u_i) + I_i, (1) where u_i = Σ_j w_ij x_j for i = 1, 2, ..., N, (2) and the activation function g is the logistic function g(u) = 1/(1 + e^(-u)). (3) The networks we will consider are organized in modules (or sets) of units that perform similar functions. For example, we talk about a fully connected module if each unit in the module is connected to each of the others. An input module is a set of units where each unit has non-zero input function I_i.
Note that a single unit can belong to more than one module at a time. The performance of the network is measured through the energy function E = (1/2) Σ_{i=1..N} J_i^2, (4) where J_i = T_i - x_i (5) is the error of unit i with respect to its target T_i. An output module is a set of units i such that J_i ≠ 0. Units that do not belong to any input or output modules are called hidden units. A unit (resp. module) can be clamped and unclamped. When the unit (resp. module) is unclamped, I_i = J_i = 0 for the unit (resp. the module). If the unit is clamped, it behaves according to the pattern presented to the network. Unclamping a unit results in making it hidden. Clamping and unclamping actions are handy concepts for the study of content addressable memory or generalization. The goal for the network is to minimize the energy function by changing the weights accordingly. One way is to perform a gradient descent in E using the delta rule dw_ij/dt = -η ∂E/∂w_ij, (6) where η is a learning rate constant. The weight variation as a function of the error is given by the formula [Pineda 87, Almeida 87] dw_ij/dt = η y_i x_j, (7) where y_i is a solution of the dynamical system dy_i/dt = -y_i + g'(u_i)(J_i + Σ_j w_ji y_j). (8) The above discussion assumes that the input function I and the target T are constant over time. In our simulation however, we have a set of patterns P_α presented to the network. A pattern is a tuple in ([0,1] ∪ {U})^N, where N is the total number of units and U stands for unclamped. The i-th value of the tuple is the value assigned to I_i and T_i when the pattern is presented to the network (if the value is U, the unit is unclamped for the time of presentation of the pattern). This definition of a pattern does not allow I_iα and T_iα to have different values. This is not an important restriction however since we can always simulate such an (inconsistent) unit with two units.
The energy function to be minimized over all the patterns is defined by the equation:

    E_total = sum_a E(a)    (9)

The gradient of E_total is simply the sum of the gradients of E(a), and hence the updating equation has the form:

    dw_ij/dt = eta sum_a y_i(a) x_j(a)    (10)

When a pattern P_a is presented to the network, an approximation of x^inf(a) is first computed by doing a few iterations using equation 1 (propagation). Then, an approximation of y^inf(a) is evaluated by iterating equation 8 (backpropagation). The weights are finally updated using equation 10. If we assume the number of iterations to evaluate x^inf(a) and y^inf(a) to be constant, the total number of operations required to update the weights is O(N^2). The validity of this assumption will be discussed in a later section.

CONVERGENCE OF THE NETWORK

The learning algorithm for our network assumes a correct approximation of x^inf. This value is computed by recursively propagating the activation signals according to equation 1. The effect of varying the number of propagations can be illustrated with a simple experiment. Consider a fully connected network of eight units (a directed anti-reflexive graph). Four of them are auto-associative units which are presented various patterns of zeroes and ones. An auto-associative unit is best viewed as two visible units, one having all of the incoming connections and one having all of the outgoing connections. When the auto-associative unit is not clamped, it is viewed as a hidden unit. The four remaining units are hidden. The error is measured by the differences between the activations (from the incoming connections) of the auto-associative units and the corresponding target value T_i for each pattern. In running the experiment, eight patterns were presented to the network, performing 1 to 5 propagations of the activations using Equation 1, 20 backpropagations of the error signals according to Equation 8, and one update (Equation 10) of the weights per presentation.
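A minimal sketch of one such presentation (propagate, backpropagate, update) on a logistic network follows. The adjoint relaxation below is one standard way to realize Equation 8 (the auxiliary variable z carries the error and y_i = g'(u_i) z_i); the step sizes, iteration counts, and initial state are illustrative assumptions:

```python
import numpy as np

def logistic(u):
    """g(u) = 1 / (1 + exp(-u))."""
    return 1.0 / (1.0 + np.exp(-u))

def rbp_step(W, I, T, out_mask, eta=0.2, n_prop=50, n_backprop=50, dt=0.1):
    """One pattern presentation: relax x (Eq. 1), relax the error
    signals (Eq. 8), then update the weights (Eq. 10). W is modified
    in place. All constants here are illustrative."""
    N = W.shape[0]
    x = np.full(N, 0.5)
    for _ in range(n_prop):                       # propagation toward x^inf
        x = x + dt * (-x + logistic(W @ x) + I)
    g = logistic(W @ x)
    gp = g * (1.0 - g)                            # g'(u_i) for the logistic
    J = np.where(out_mask, T - x, 0.0)            # error, output units only
    z = np.zeros(N)
    for _ in range(n_backprop):                   # backpropagation (adjoint)
        z = z + dt * (-z + W.T @ (gp * z) + J)
    y = gp * z                                    # error signal per unit
    W += eta * np.outer(y, x)                     # Eq. 10
    return x, float(0.5 * np.sum(J ** 2))
```

Calling this once per pattern and sweeping through all patterns gives one epoch in the sense used below.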
We define an epoch to be a sweep through the eight patterns using the above formula of execution on each. The corresponding results using a learning rate of 0.2 are shown in Figure 1. It can easily be seen that using one or two propagations does not suffice to set the hidden units to their correct values. However, the network does learn correctly how to reproduce the eight patterns when 3 or more propagations are done after each presentation of a new pattern.

[Figure 1: Learning curves for a recurrent network with different numbers of propagations of the activation and back-propagation of the error signals.]

This is not surprising since the rate of convergence to a fixed point is geometric (if the fixed point is stable), thus making only a few propagations necessary. We suspect that larger networks with a fully connected topology will still only require a few iterations of forward propagation if the fixed points are fairly stable. In the next section, we will study a problem where this assumption is not true. In such a situation, we use an algorithm where the number of forward propagations varies dynamically. For some specialized networks such as a feed-forward one, the number of propagations must be at least equal to the number of layers, so that the output units receive the activation corresponding to the input before the error signal is sent. Similarly, y^inf is computed recursively by iterative steps. We used the same experiment as described above with 1 to 4 back-propagations of the error signals to evaluate the time y^t takes to converge. The rest of this experiment remained the same as above, except that the number of propagations for x^t was set to 20.
The learning curves are shown in Figure 1. It is interesting to note that with only one propagation of the error signal, the system was able to complete the learning, for the isolated curve tends toward the other curves as time increases. The remaining four curves lie along the same path because the error signals rapidly become meaningless after a few iterations. The reason for this is that the error signals are multiplied by w_ij x_i (1 - x_i) when going from unit i to unit j, which is usually much smaller than one because |x_i (1 - x_i)| is smaller than 0.25. The fact that one iteration of the error signal is enough to provide learning is interesting for VLSI applications: it enables the units to work together in an asynchronous fashion. If each unit propagates the activation much more often than it backpropagates the error signals, the system is, on average, in a stable state when the backpropagation occurs and the patterns are learned slowly. This ability of recurrent networks to work without synchronization mechanisms makes them more compatible with physiological network systems.

The above discussion assumed that x^inf exists and can be computed by recursively computing the activation function. However, it has been shown ([Simard 88]) that for any activation function, there are always sets of weights such that there exist no stable fixed points. This fact is alarming since x^inf is computed recursively, which implies that if there is no stable fixed point, x^t will fail to converge, and incorrect error signals will be propagated through the system. Fortunately, the absence of stable fixed points turns out not to be a problem in practice. One reason for this is that they are very likely to be present given a reasonable set of initial weights. The network almost always starts with a unique stable fixed point. The fixed points are searched by following the zero curve of the homotopy map

    rho_a(lambda, x) = lambda (x - g(Wx) - I) + (1 - lambda)(x - a)    (11)

for different [a_i], starting at lambda = 0.
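The stability question can be probed numerically in a simple way: locate a fixed point by direct iteration and inspect the largest eigenvalue magnitude of the Jacobian with entries w_ij x_i (1 - x_i). This is a hypothetical helper using the discrete-map stability criterion (spectral radius below 1), which is a sketch of, not a substitute for, the paper's eigenvalue analysis:

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def find_fixed_point(W, I, x0, tol=1e-10, max_iter=10000):
    """Search for a fixed point of x = g(Wx) + I by direct iteration.
    Returns None if the iteration fails to settle."""
    x = x0.copy()
    for _ in range(max_iter):
        x_new = logistic(W @ x) + I
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return None

def max_eigenvalue(W, x, I):
    """Largest eigenvalue magnitude of the Jacobian
    J_ij = g'(u_i) w_ij = x_i (1 - x_i) w_ij (for I_i = 0) at the
    fixed point; values below 1 indicate stability of the map."""
    g = x - I                     # at the fixed point, g(u_i) = x_i - I_i
    jac = (g * (1.0 - g))[:, None] * W
    return float(np.max(np.abs(np.linalg.eigvals(jac))))
```

With random weights in [-1, 1] and a small network, direct iteration from x = [0.5] typically finds a fixed point whose maximum eigenvalue is well below 1, in line with the observations reported next.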
The results indicate that the probability of getting unstable fixed points increases with the size of the network. We always found a stable fixed point for networks with fewer than 50 units. Out of 500 trials of 100-unit networks starting with random weights between -1 and +1, we found two sets of weights with no stable fixed points. However, even in that case, most of the eigenvalues were much less than 1, which means that oscillations are limited to one or two eigenvector axes. Since it is possible to start with a network that has no stable fixed points, it is of interest whether it will still learn correctly. Since searching for all the fixed points (by trying different [a_i] in equation 11) is computationally expensive, we choose, as before, a simple learning experiment. The network's layout is the same as previously described. However, we know (from the previous result) that it probably has no unstable fixed points, since it only has four hidden units. To increase the probability of getting a fixed point that is unstable, we make the initial weights range from -3 to 3 and set the thresholds so that [0.5] is a fixed point for one of the patterns.

[Figure 2: top: Maximum eigenvalue of the unstable fixed point as a function of the number of epochs. bottom: Error as a function of the number of epochs.]
This fixed point is more likely to be unstable since the partial derivatives of the activation function (which are equal to dg(u_i)/dx_j = w_ij x_i (1 - x_i) at the fixed point) are maximized at [x_i] = [0.5], and therefore the Jacobian is more likely to have big eigenvalues. Figure 2 shows the stability of that particular fixed point and the error as a function of the number of epochs. Three different simulations were done with different sets of random initial weights. As clearly shown in the figure, the network learns despite the absence of stable fixed points. Moreover, the observed fixed point(s) become stable as learning progresses. In the absence of stable fixed points, the weights are modified after a fixed number of propagations and backpropagations. Even though the state vector of the network is not precisely defined, the state space trajectory lies in a delimited volume. As learning progresses, the projection of this volume on the visible units diminishes to a single point (stable) and moves toward a target point that corresponds to the presented pattern on the visible units. Note that our energy function does not impose constraints on the state space trajectories projected on the hidden units [Pearlmutter 88].

"RUNAWAY" SIMULATIONS

The next question that arises is whether a recurrent network goes to the same fixed point at successive epochs (for a given input) and what happens if it does not. To answer this question, we construct two networks, one with only feed-forward connections and one with feedback connections. Both networks have 3 modules (input, hidden and output) of 4 units each. The connections of the feed-forward network are between the input and the hidden module and between the hidden and the output module. The connections of the recurrent network are identical except that there are connections between the units of the hidden module. The rationale behind this layout is to ensure fairness of comparison between feed-forward and feedback backpropagation.
Each network is presented sixteen distinct patterns on the input with sixteen different random patterns on the output. The patterns consist of zeros and ones. This task is purposely chosen to be fairly difficult (16 fixed points on the four hidden units for the recurrent net) and will make the evaluation of x^inf difficult. The learning curves for the networks are shown in Figure 3 for a learning rate of 0.2. We can see that the network with recurrent connections learns slightly faster than the feed-forward network. However, a more careful analysis reveals that when the learning rate is increased, the recurrent network doesn't always learn properly. The success of the learning depends on the number of iterations we use in the computation of x^t. As clearly shown in Figure 3, if we use 30 iterations for x^t the network fails to learn, although 40 iterations yields reasonable results. The two cases only differ by the value of x^t used when the error signals are backpropagated. According to our interpretation, recurrent backpropagation learns by moving the fixed points (or small volume state trajectories) toward target values (determined by the output). As learning progresses, the distances between the fixed points and the target values diminish, causing the error signals to become smaller and the learning to slow down. However, if the network doesn't come close enough to the fixed point (or the small volume state trajectory), the new error (the distance between the current state and the target) can suddenly be very large (relative to the distance between the fixed point and the target). Large incorrect error signals are then introduced into the system. There are two cases: if the learning rate is small, a near miss has little effect on the learning curve and RBP learns faster than the feed-forward network. If, on the other hand, the learning rate
is big, a near miss will induce important incorrect error signals into the system, which in turn makes the next miss more dramatic. This runaway situation is depicted in the center of Figure 3. To circumvent this problem we vary the number of propagations as needed until successive states on the state trajectory are sufficiently close. The resulting learning curves for feed-forward and recurrent nets are plotted at the bottom of Figure 3. In these simulations the learning rates are adjusted dynamically so that successive error vectors are almost colinear, that is:

    0.7 < cos(dW^t, dW^(t+1)) < 0.9    (12)

As can be seen, recurrent and feed-forward nets learn at the same speed.

[Figure 3: Error as a function of the number of epochs for a feed-forward net (dotted) and a recurrent net (solid or dashed). top: The learning rate is set to 0.2. center: The learning rate is set to 1.0. The solid and the dashed lines are for the recurrent net with 30 and 40 iterations of x^t per epoch, respectively. bottom: The learning rate is variable. The recurrent network has a variable number of iterations of x^t per epoch.]

[Figure 4: State space and fixed points. x1 and x2 are the activations of two units of a fully connected network. left: Before learning, there is one stable fixed point. center: After learning a few patterns, there are two desired stable fixed points. right: After learning several patterns, there are two desired stable fixed points and one undesired stable fixed point.]
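The colinearity rule of Equation 12 can be sketched as a simple learning-rate controller. The paper does not specify the growth and shrink multipliers, so the factors below are assumptions:

```python
import numpy as np

def adapt_learning_rate(eta, dw_prev, dw, lo=0.7, hi=0.9, up=1.1, down=0.5):
    """Adjust eta so that successive weight-update vectors stay nearly
    colinear (Eq. 12): shrink eta when the updates turn too sharply,
    grow it when they are almost parallel. The up/down factors are
    illustrative; only the 0.7 and 0.9 bounds come from the text."""
    denom = np.linalg.norm(dw_prev) * np.linalg.norm(dw)
    if denom == 0.0:
        return eta
    c = float(np.dot(dw_prev.ravel(), dw.ravel()) / denom)
    if c < lo:          # updates disagree: step size too large
        return eta * down
    if c > hi:          # updates nearly colinear: a larger step is safe
        return eta * up
    return eta
```

In use, `dw_prev` and `dw` would be the weight-update vectors from two successive presentations.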
It is interesting to mention that the average learning rate for the recurrent net is significantly smaller (approx. 0.65) than for the feed-forward net (approx. 0.80). Surprisingly, this doesn't affect the learning speed.

CONTENT ADDRESSABLE MEMORIES

An interesting property of recurrent networks is their ability to generate fixed points that can be used to perform content addressable memory [Lapedes 86, Pineda 87]. Initially, a fully connected network usually has only one stable fixed point (all units unclamped) (see Figure 4, left). By clamping a few (auto-associative) units to given patterns, it is possible, by learning, to create stable fixed points for the unclamped network (Figure 4, center). To illustrate this property, we build a network of 6 units: 3 auto-associative units and 3 hidden units. The three auto-associative units are presented patterns with an odd number of ones in them (there are 4 such patterns on 3 units: 1 0 0, 0 1 0, 0 0 1, and 1 1 1). The network is fully connected. After 5000 epochs, the auto-associative units are unclamped for testing. All the fixed points found for the network of 6 (unclamped) units are given in Table 1.

Table 1: Fixed points for content addressable memory

    auto-associative units       hidden units                 maximum eigenvalue
    0.0402  0.0395  0.9800       0.8699  0.0763  0.0478       0.4419
    0.9649  0.0176  0.0450       0.0724  0.8803  0.4596       0.6939
    0.0830  0.9662  0.0658       0.2136  0.0880  0.8832       0.8470
    0.9400  0.9619  0.9252       0.1142  0.1692  0.5164       0.8941
    0.9076  0.5201  0.0391       0.0448  0.6909  0.7431       1.2702

As can be seen, the four stable fixed points are exactly the four patterns presented to the network. Moreover, their stability guarantees that the network can be used for CAM (content addressable memory) or for one-to-many function learning.
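Recall from such a memory might be sketched as follows, assuming a weight matrix W already trained as described above. The routine is hypothetical: clamping is implemented by simply pinning the known units at each relaxation step while the remaining units settle to a fixed point:

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def recall(W, probe, known, steps=500, dt=0.1):
    """Pattern-completion sketch: units listed in `known` are clamped
    to the probe values; the remaining units are unclamped and relax
    toward a stable fixed point. W is assumed pre-trained."""
    N = W.shape[0]
    x = np.full(N, 0.5)
    x[known] = probe[known]
    for _ in range(steps):
        x = x + dt * (-x + logistic(W @ x))
        x[known] = probe[known]        # re-clamp the known units
    return x
```

With a trained W, unclamping the corrupted units and running this relaxation would restore the nearest stored pattern, as described next.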
Indeed, if the network is presented incomplete or corrupted patterns (sufficiently close to a previously learned pattern), it will restore the pattern as soon as the incorrect or missing units are unclamped, by converging to a stable fixed point. If there are several correct pattern completions for the clamped units, the network will converge to one of the patterns depending on the initial conditions of the unclamped units (which determine the state space trajectory). These highly desirable properties are the main advantages of having feedback connections. We note from Table 1 that a fifth (incorrect) fixed point has also been found. However, this fixed point is unstable (maximum eigenvalue = 1.27) and will therefore never be found during recursive searches. In the previous example, there are no undesired stable fixed points. They are, however, likely to appear if the learning task becomes more complex (Figure 4, right). The reason why they are difficult to avoid is that unless the units are unclamped (the learning is stopped), the network cannot reach them. Algorithms which eliminate spurious fixed points are presently under study.

CONCLUSION

In this paper, we have studied the effect of introducing feedback connections into feed-forward networks. We have shown that the potential disadvantages of the algorithm, such as the absence of stable fixed points and chaotic behavior, can be overcome. The resulting systems have several interesting properties. First, allowing arbitrary connections makes a network more physiologically plausible by removing structural constraints on the topology. Second, the increased number of connections diminishes the sensitivity to noise and slightly improves the speed of learning. Finally, feedback connections allow the network to restore incomplete or corrupted patterns by following the state space trajectory to a stable fixed point. This property can also be used for one-to-many function learning.
A limitation of the algorithm, however, is that spurious stable fixed points could lead to incorrect pattern completion.

References

[Almeida 87] Luis B. Almeida, in the Proceedings of the IEEE First Annual International Conference on Neural Networks, San Diego, California, June 1987.
[Lapedes 86] Alan S. Lapedes & Robert M. Farber. A self-optimizing nonsymmetrical neural net for content addressable memory and pattern recognition. Physica D22, 247-259, 1986.
[Lippman 87] Richard P. Lippman. An introduction to computing with neural networks. IEEE ASSP Magazine, April 1987.
[Jordan 88] Michael I. Jordan. Supervised learning and systems with excess degrees of freedom. COINS Technical Report 88-27, 1988.
[Pearlmutter 88] Barak A. Pearlmutter. Learning state space trajectories in recurrent neural networks. Proceedings of the Connectionist Models Summer School, pp. 113-117, 1988.
[Pineda 87] Fernando J. Pineda. Generalization of backpropagation to recurrent and higher order neural networks. Neural Information Processing Systems, New York, 1987.
[Pineda 88] Fernando J. Pineda. Dynamics and architecture in neural computation. Journal of Complexity, special issue on neural networks, September 1988.
[Simard 88] Patrice Y. Simard, Mary B. Ottaway and Dana H. Ballard. Analysis of recurrent backpropagation. Technical Report 253, Computer Science, University of Rochester, 1988.
[Rosenblatt 62] F. Rosenblatt. Principles of Neurodynamics. New York: Spartan Books, 1962.
[Rumelhart 86] D. E. Rumelhart, G. E. Hinton, & R. J. Williams. Learning internal representations by back-propagating errors. Nature, 323, 533-536.
Comparing the Performance of Connectionist and Statistical Classifiers on an Image Segmentation Problem

Sheri L. Gish & W. E. Blanz
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120

ABSTRACT

In the development of an image segmentation system for real time image processing applications, we apply the classical decision analysis paradigm by viewing image segmentation as a pixel classification task. We use supervised training to derive a classifier for our system from a set of examples of a particular pixel classification problem. In this study, we test the suitability of a connectionist method against two statistical methods, a Gaussian maximum likelihood classifier and first, second, and third degree polynomial classifiers, for the solution of a "real world" image segmentation problem taken from combustion research. Classifiers are derived using all three methods, and the performance of all of the classifiers on the training data set as well as on 3 separate entire test images is measured.

1 Introduction

We are applying the trainable machine paradigm in our development of an image segmentation system to be used in real time image processing applications. We view image segmentation as a classical decision analysis task; each pixel in a scene is described by a set of measurements, and we use that set of measurements with a classifier of our choice to determine the region or object within a scene to which that pixel belongs. Performing image segmentation as a decision analysis task provides several advantages. We can exploit the inherent trainability found in decision analysis systems [1] and use supervised training to derive a classifier from a set of examples of a particular pixel classification problem.
Classifiers derived using the trainable machine paradigm will exhibit the property of generalization, and thus can be applied to data representing a set of problems similar to the example problem. In our pixel classification scheme, the classifier can be derived solely from the quantitative characteristics of the problem data. Our approach eliminates the dependency on qualitative characteristics of the problem data which often is characteristic of explicitly derived classification algorithms [2,3]. Classical decision analysis methods employ statistical techniques. We have compared a connectionist system to a set of alternative statistical methods on classification problems in which the classifier is derived using supervised training, and have found that the connectionist alternative is comparable, and in some cases preferable, to the statistical alternatives in terms of performance on problems of varying complexity [4]. That comparison study also analyzed the alternative methods in terms of cost of implementation of the solution architecture in digital LSI. In terms of our cost analysis, the connectionist architectures were much simpler to implement than the statistical architectures for the more complex classification problems; this property of the connectionist methods makes them very attractive implementation choices for systems requiring hardware implementations for difficult applications. In this study, we evaluate the performance of a connectionist method and several statistical methods as the classifier component of our real time image segmentation system. The classification problem we use is a "real world" pixel classification task using images of the size (200 pixels by 200 pixels) and variable data quality typical of the problems a production system would be used to solve.
We thus test the suitability of the connectionist method for incorporation in a system with the performance requirements of our system, as well as the feasibility of our exploiting the advantages the simple connectionist architectures provide for systems implemented in hardware.

2 Methods

2.1 The Image Segmentation System

The image segmentation system we use is described in [5], and summarized in Figure 1. The system is designed to perform low level image segmentation in real time; for production, the feature extraction and classifier system components are implemented in hardware. The classifier parameters are derived during the Training Phase. A user at a workstation outlines the regions or objects of interest in a training image. The system performs low level feature extraction on the training image, and the results of the feature extraction plus the input from the user are combined automatically by the system to form a training data set. The system then applies a supervised training method making use of the training data set in order to derive the coefficients for the classifier which can perform the pixel classification task. The feature extraction process is capable of computing 14 classes of features for each pixel; up to 10 features with the highest discriminatory power are used to describe all of the pixels in the image. This selection of features is based only on an analysis of the results of the feature extraction process and is independent of the supervised learning paradigm being used to derive the classifier [6]. The identical feature extraction process is applied in both the Training and Running Phases for a particular image segmentation problem.

[Figure 1: Diagram of the real time image segmentation system. Training images pass through feature extraction in the Training Phase to produce coefficients for the classifier; test images pass through the same feature extraction in the Running Phase to produce the segmented image.]
2.2 The Image Segmentation Problem

The image segmentation problem used in this study is from combustion research and is described in [3]. The images are from a series of images of a combustion chamber taken by a high speed camera during the inflammation process of a gas/air mixture. The segmentation task is to determine the area of inflamed gas in the image; therefore, the pixels in the image are classified into 3 different classes: cylinder, uninflamed gas, and flamed gas (see Figure 2). Exact determination of the area of flamed gas is not possible using pixel classification alone, but the greater the success of the pixel classification step, the greater the likelihood that a real time image segmentation system could be used successfully on this problem.

[Figure 2: The image segmentation problem is to classify each image pixel into 1 of 3 regions.]

2.3 The Classifiers

The set of classifiers used in this study is composed of a connectionist classifier based on the Parallel Distributed Processing (PDP) model described in [7] and two statistical methods: a Gaussian maximum likelihood classifier (a Bayes classifier), and a polynomial classifier based on first, second, and third degree polynomials. This set of classifiers was used in a general study comparing the performance of the alternatives on a set of classification problems; all of the classifiers as well as adaptation procedures are described in detail in that study [4]. Implementation and adaptation of all classifiers in this study was performed as software simulation. The connectionist classifier was implemented in CMU Common Lisp running on an IBM RT workstation. The connectionist classifier architecture is a multi-layer feedforward network with one hidden layer. The network is fully connected, but there are only connections between adjacent layers.
The number of units in the input and output layers are determined by the number of features in the feature vector describing each pixel and a binary encoding scheme for the class to which the pixel belongs, respectively. The number of units in the hidden layer is an architectural "free parameter." The network used in this study has 10 units in the input layer, 12 units in the hidden layer, and 3 units in the output layer. Network activation is achieved by using the continuous, nonlinear logistic function defined in [8]. The connectionist adaptation procedure is the application of the backpropagation learning rule also defined in [8]. For this problem, the learning rate eta = 0.01 and the momentum alpha = 0.9; both terms were held constant throughout adaptation. The presentation of all of the patterns in the training data set is termed a trial; network weights and unit biases were updated after the presentation of each pattern during a trial. The training data set for this problem was generated automatically by the image segmentation system. This training data set consists of approximately 4,000 ten element (feature) vectors (each vector describes one pixel); each vector is labeled as belonging to one of the 3 regions of interest in the image. The training data set was constructed from one entire training image, and is composed of vectors statistically representative of the pixels in each of the 3 regions of interest in that image. All of the classifiers tested in this study were adapted from the same training data set. The connectionist classifier was defined to be converged for this problem before it was tested. Network convergence is determined from the results of two separate tests. In the first test, the difference between the network output and the target output averaged over the entire training data set has to reach a minimum.
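The 10-12-3 network and its per-pattern training rule might be sketched as follows. The layer sizes, logistic activation, learning rate of 0.01, momentum of 0.9, and per-pattern updates follow the text, while the class name and the weight initialization range are assumptions:

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

class PixelClassifier:
    """Sketch of the fully connected 10-12-3 feedforward classifier
    described in the text. Initialization details are assumptions."""
    def __init__(self, n_in=10, n_hid=12, n_out=3,
                 eta=0.01, alpha=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.uniform(-0.5, 0.5, (n_hid, n_in))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.uniform(-0.5, 0.5, (n_out, n_hid))
        self.b2 = np.zeros(n_out)
        self.eta, self.alpha = eta, alpha
        self.vel = [np.zeros_like(p)
                    for p in (self.W1, self.b1, self.W2, self.b2)]

    def forward(self, x):
        h = logistic(self.W1 @ x + self.b1)
        o = logistic(self.W2 @ h + self.b2)
        return h, o

    def train_pattern(self, x, t):
        """Backpropagation with momentum, updating weights and unit
        biases after each pattern, as in the text."""
        h, o = self.forward(x)
        d_o = (o - t) * o * (1.0 - o)             # output-layer delta
        d_h = (self.W2.T @ d_o) * h * (1.0 - h)   # hidden-layer delta
        grads = [np.outer(d_h, x), d_h, np.outer(d_o, h), d_o]
        params = [self.W1, self.b1, self.W2, self.b2]
        for p, v, g in zip(params, self.vel, grads):
            v *= self.alpha
            v -= self.eta * g
            p += v
        return float(0.5 * np.sum((o - t) ** 2))
```

A trial, in the text's terminology, would call `train_pattern` once for each vector in the training data set.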
In the second test, the performance of the network in classifying the training data set is measured, and the number of misclassifications made by the network has to reach a minimum. Actual network performance in classifying a pattern is measured after post-processing of the output vector. The real outputs of each unit in the output layer are assigned the values of 0 or 1 by application of a 0.5 decision threshold. In our binary encoding scheme, the output vector should have only one element with the value 1; that element corresponds to one of the 3 classes. If the network produces an output vector with either more than one element with the value 1 or all elements with the value 0, the pattern generating that output is considered rejected. For the test problem in this study, all of the classifiers were set to reject patterns in the test data samples. All of the statistical classifiers had a rejection threshold set to 0.03.

3 Results

The performance of each of the classifiers (connectionist, Gaussian maximum likelihood, and linear, quadratic, and cubic polynomial) was measured on the training data set and on test data representing 3 entire images taken from the series of combustion chamber images. One of those images, labeled Image 1, is the image from which the training data set was constructed. The performance of all of the classifiers is summarized in Table 1. Although all of the classifiers were able to classify the training data set with comparably few misclassifications, the Gaussian maximum likelihood classifier and the quadratic polynomial classifier were unable to perform on any of the 3 entire test images. The connectionist classifier was the only alternative tested in this study to deliver acceptable performance on all 3 test images; the connectionist classifier had lower error rates on the test images than it delivered on the training data sample.
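The 0.5-threshold post-processing and rejection rule described in Section 2.3 can be sketched as a small helper (the function name is hypothetical):

```python
import numpy as np

def postprocess(output, threshold=0.5):
    """Apply the 0.5 decision threshold to the real-valued output
    vector. Return the class index if exactly one element fires;
    otherwise return None (pattern rejected), as described in the
    text's binary encoding scheme."""
    binary = output >= threshold
    if binary.sum() != 1:
        return None        # zero or multiple active outputs: reject
    return int(np.argmax(binary))
```

For example, an output of (0.9, 0.1, 0.2) yields class 0, while (0.6, 0.7, 0.1) is rejected.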
Both the linear polynomial and cubic polynomial classifiers performed acceptably on the test Image 2, but then both exhibited high error rates on the other two test images. For this image segmentation problem, only the connectionist method generalized from the training data set to a solution with acceptable performance. In Figure 3, the results from pixel classification performed by the connectionist and polynonual classifiers on all 3 test images are portrayed as segmented images. The actual test images are included at the left of the figure. 4 Conclusions Our results demonstrate the feasibility of the application of a connectionist decision analysis method to the solution of a ureal world" image segmentation problem. The ~ata Sel ,----T . . raInIng Data Image 1 C Comparing the Performance of Connectionist and Statistical Classifiers 619 Conne;;l;on;sl II - Polynom;al --~auss;an --] Classifier Classifier Classifier Errorfl I Rejectb Degree I Error I Reject Error I Reject 1O.40%-~-.64% 1 '1l.25% 1.62% 12.84% ---0.12%2 9.61% 1.41% 3 8.13% 1.05% 8.84% 1.72% 1 41.70% 4.63% 94.27% 0.00% 2 57.55% 3.66% 3 25.86% 0.28% r-~------~~--~-r~~~~~r-------~~----~~----~ Image 2 5.82% 1.53% 1 12.01% 2.00% 69.09% 0.01% 2 68.01 % 0.58% 3 4.68% 0.26% Image 3 6.31 % - -::-1.-=-6-=3 %=o- tf---- 1=---19.68 % 5.43 % 88.35% 0.00% 2 45.89% 1.41% 3 25.75% 0.28% , _______ ~ ______ L_ _ _ ____ ~ ______ L_ _____ _ ~ ______ ~ ___ ___ ~ ______ __ flPercent misclauificatioDi for all patterns. bpercent of all patterns rejected. Clmage from which training data let was taken. Table 1: A sununary of the performance of the c16Ssifiers. inclusion of a connectionist classifier in our supervised segmentation system will allow us to meet our performance requirements under real world problem constraints. 
Although the application of connectionism to the solution of real time machine vision problems represents a new processing method, our solution strategy has remained consistent with the decision analysis paradigm. Our connectionist classifiers are derived solely from the quantitative characteristics of the problem data; our connectionist architecture thus remains simple and need not be re-designed according to qualitative characteristics of each specific problem to which it will be applied. Our connectionist architecture is independent of the image size; we have applied the identical architecture successfully to images which range in size from 200 pixels by 200 pixels to 512 pixels by 512 pixels [9]. In most research to date in which neural networks are applied to machine vision, entire images are explicitly mapped to networks by making each pixel in an image correspond to a different unit in a network layer (see [10,11] for examples). This "pixel map" representation makes scaling up to larger image sizes from the idealized "toy" research images a significant problem. Most statistical pattern classification methods require that problem data satisfy the assumptions of statistical models; unfortunately, real world problem data are complex and of variable quality and thus rarely can be used to guide the choice of an appropriate method for the solution of a particular problem a priori. For the image segmentation problem reported in this study, our classifier performance results show that the problem data actually did not satisfy the assumptions behind the statistical models underlying the Gaussian maximum likelihood classifier or the polynomial classifiers.

620 Gish and Blanz

Figure 3: The grey levels assigned to each region are: Black cylinder, Light Grey uninflamed gas, Grey flamed gas. Original images are at the left of the figure.
It appears that the Gaussian model least fits our problem data, the polynomial classifiers provide a slightly better fit, and the connectionist method provides the fit required for the solution of the problem. It is also notable that all of the alternative methods in this study could be adapted to perform acceptably on the training data set, but extensive testing on several different entire images was required in order to demonstrate the true performance of the alternative methods on the actual problem, rather than just on the training data set. These results show that a connectionist method is a viable choice for a system such as ours which requires a simple architecture readily implemented in hardware, the flexibility to handle complex problems described by large amounts of data, and the robustness to not require problem data to meet many model assumptions a priori.

References

[1] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973.
[2] W. E. Blanz, J. L. C. Sanz, and D. Petkovic. Control-free low-level image segmentation: theory, architecture, and experimentation. In J. L. C. Sanz, editor, Advances of Machine Vision, Applications and Architectures, Springer-Verlag, 1988.
[3] B. Straub and W. E. Blanz. Combined decision theoretic and syntactic approach to image segmentation. Machine Vision and Applications, 2(1):17-30, 1989.
[4] Sheri L. Gish and W. E. Blanz. Comparing a Connectionist Trainable Classifier with Classical Statistical Decision Analysis Methods. Research Report RJ 6891 (65717), IBM, June 1989.
[5] W. E. Blanz, B. Shung, C. Cox, W. Greiner, B. Dom, and D. Petkovic. Design and implementation of a low level image segmentation architecture - LISA. Research Report RJ 7194 (67673), IBM, December 1989.
[6] W. E. Blanz. Non-parametric feature selection for multiple class processes. In Proc. 9th Int. Conf. Pattern Recognition, Rome, Italy, Nov. 14-17, 1988.
[7] David E. Rumelhart, James L. McClelland, et al. Parallel Distributed Processing. MIT Press, Cambridge, Massachusetts, 1986.
[8] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation. In David E. Rumelhart, James L. McClelland, et al., editors, Parallel Distributed Processing, chapter 8, MIT Press, Cambridge, Massachusetts, 1986.
[9] W. E. Blanz and Sheri L. Gish. A Connectionist Classifier Architecture Applied to Image Segmentation. Research Report RJ 7193 (67672), IBM, December 1989.
[10] K. Fukushima, S. Miyake, and T. Ito. Neocognitron: a neural network model for a mechanism of visual pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(5):826-834, 1983.
[11] Y. Hirai. A model of human associative processor. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(5):851-857, 1983.
|
1989
|
1
|
185
|
348 Farotimi, Dembo and Kailath

Neural Network Weight Matrix Synthesis Using Optimal Control Techniques

O. Farotimi, A. Dembo, T. Kailath
Information Systems Lab.
Electrical Engineering Dept.
Stanford University, Stanford, CA 94305

ABSTRACT

Given a set of input-output training samples, we describe a procedure for determining the time sequence of weights for a dynamic neural network to model an arbitrary input-output process. We formulate the input-output mapping problem as an optimal control problem, defining a performance index to be minimized as a function of time-varying weights. We solve the resulting nonlinear two-point-boundary-value problem, and this yields the training rule. For the performance index chosen, this rule turns out to be a continuous time generalization of the outer product rule earlier suggested heuristically by Hopfield for designing associative memories. Learning curves for the new technique are presented.

1 INTRODUCTION

Suppose that we desire to model as best as possible some unknown map φ : U → V, where U, V ⊆ ℝⁿ. One way we might go about doing this is to collect as many input-output samples {(θ_in, θ_out) : φ(θ_in) = θ_out} as possible and "find" some function f : U → V such that a suitable distance metric d(f(θ_in), φ(θ_in)) over those samples is minimized. In the foregoing, we assume a system of ordinary differential equations motivated by dynamic neural network structures [1][2]. In particular we set up an n-dimensional neural network; call it N. Our goal is to synthesize a possibly time varying weight matrix for N such that for initial conditions x(t_0), the input-output transformation, or flow f : x(t_0) → f(x(t_f)), associated with N approximates closely the desired map φ. For the purposes of synthesizing the weight program for N, we consider another system, say S, a formal nL-dimensional system of differential equations comprising L n-dimensional subsystems.
With the exception that all L n-dimensional subsystems are constrained to have the same weight matrix, they are otherwise identical and decoupled. We shall use this system to determine the optimal weight program given L input-output samples. The resulting time program of weights is then applied to the original n-dimensional system N during normal operation. We emphasize the difference between this scheme and a simple L-fold replication of N: the latter will yield a practically unwieldy nL x nL weight matrix sequence, and in fact will generally not discover the underlying map from U to V, discovering instead different maps for each input-output sample pair. By constraining the weight matrix sequence to be an identical n x n matrix for each subsystem during this synthesis phase, our scheme in essence forces the weight sequence to capture some underlying relationship between all the input-output pairs. This is arguably the best estimate of the map given the information we have. Using formal optimal control techniques [3], we set up a performance index to maximize the correlation between the system S output and the desired output. This optimization technique leads in general to a nonlinear two-point-boundary-value problem, and is not usually solvable analytically. For this particular performance index we are able to derive an analytical solution to the optimization problem. The optimal interconnection matrix at each time is the sum (over the index of all samples) of the outer products between each desired output n-vector and the corresponding subsystem output. At the end of this synthesis procedure, the weight matrix sequence represents an optimal time-varying program for the weights of the n-dimensional neural network N that will approximate φ : U → V. We remark that in the ideal case, the weight matrix at the final time (i.e., one element of the time sequence) corresponds to the symmetric matrix suggested empirically by Hopfield for associative memory applications [4].
It becomes clear that the Hopfield matrix is suboptimal for associative memory, being just one point on the optimal weight trajectory; it is optimal only in the special case where the initial conditions coincide exactly with the desired output. In Section 2 we outline the mathematical formulation and solution of the synthesis technique, and in Section 3 we present the learning curves. The learning curves also by default yield the system performance over the training samples, and we compare this performance to that of the outer product rule. In Section 4 we give concluding remarks and the directions of our future work. Although the results here are derived for a specific case of the neuron state equation and a specific choice of performance index, in further work we have extended the results to very general state equations and performance indices.

2 SYNTHESIS OF WEIGHT MATRIX TIME SEQUENCE

Suppose we have a training set consisting of L pairs of n-dimensional vectors (o_i^(r), θ_i^(r)), r = 1, 2, ..., L, i = 1, 2, ..., n. For example, in an autoassociative system in which we desire to store θ_i^(r), r = 1, 2, ..., L, i = 1, 2, ..., n, we can choose the o_i^(r) to be sample points in the neighborhood of θ^(r) in n-dimensional space. The idea here is that by training the network to map samples in the neighborhood of an exemplar to the exemplar, it will have developed a map that can smoothly interpolate (or generalize) to other points around the exemplar that may not be in the training set. In this paper we deal with the issue of finding the weight matrix that transforms the neural network dynamics into such a map. We demonstrate through simulation results that such a map can be achieved. For autoassociation, and using error vectors drawn from the training set, we show that the method here performs better (in an error-correcting sense) than the outer product rule.
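The neighborhood-sampling construction of the training set can be sketched as follows; the function name, the Gaussian sampling distribution, and the noise level are our assumptions for illustration:

```python
import numpy as np

def make_training_pairs(exemplars, per_exemplar=3, noise=0.2, seed=0):
    """Build (input, desired-output) pairs by drawing sample points in the
    neighborhood of each exemplar theta^(r), as in the autoassociative
    setup: the network is trained to map each neighbor back to its
    exemplar."""
    rng = np.random.default_rng(seed)
    pairs = []
    for theta in np.asarray(exemplars, dtype=float):
        for _ in range(per_exemplar):
            o = theta + noise * rng.normal(size=theta.shape)  # neighbor of theta
            pairs.append((o, theta))
    return pairs
```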
We are still investigating the performance of the network in generalizing to samples outside the training set. We construct an n-dimensional neural network system N to model the underlying input-output map according to

N: x'(t) = -x(t) + W(t) g(x(t))    (1)

We interpret x as the neuron activation, g(x(t)) is the neuron output, and W(t) is the neural network weight matrix. To determine the appropriate W(t), we define an nL-dimensional formal system of differential equations, S,

S: x_s'(t) = -x_s(t) + W_s(t) g(x_s),  g(x_s(t_0)) = ô    (2)

formed by concatenating the equations for N L times. W_s(t) is block-diagonal with identical blocks W(t). Θ is the concatenated vector of sample desired outputs, ô is the concatenated vector of sample inputs. The performance index for S is

min J = min { -x_s^T(t_f) Θ + (1/2) ∫_{t_0}^{t_f} ( -2 x_s^T(t) Θ + βQ + β⁻¹ Σ_{j=1}^{n} w_j^T(t) w_j(t) ) dt }    (3)

The performance index is chosen to minimize the negative of the correlation between the (concatenated) neuron activation and the (concatenated) desired output vectors, or equivalently maximize the correlation between the activation and the desired output at the final time t_f (the term -x_s^T(t_f) Θ). Along the way from initial time t_0 to final time t_f, the term -x_s^T(t) Θ under the integral penalizes decorrelation of the neuron activation and the desired output. w_j(t), j = 1, 2, ..., n are the rows of W(t), and β is a positive constant. The term β⁻¹ Σ_{j=1}^{n} w_j^T(t) w_j(t) effects a bound on the magnitude of the weights. The term

Q(g(x(t))) = Σ_{j=1}^{n} Σ_{r=1}^{L} Σ_{u=1}^{n} Σ_{v=1}^{L} θ_j^(r) θ_j^(v) g(x_u^(v)) g(x_u^(r))

and its meaning will be clear when we examine the optimal path later. g(·) is assumed C¹ differentiable. Proceeding formally [3], we define the Hamiltonian:

H = (1/2) ( -2 x^T(t) Θ + Q + Σ_{j=1}^{n} w_j^T(t) β⁻¹ w_j(t) ) + λ^T(t) ( -x(t) + W_s(t) g(x(t)) )    (4)
  = (1/2) ( -2 x^T(t) Θ + Q + Σ_{j=1}^{n} w_j^T(t) β⁻¹ w_j(t) ) - λ^T(t) x(t) + Σ_{j=1}^{n} Σ_{r=1}^{L} λ_j^(r) w_j^T(t) g^(r)(x(t))

where λ^T(t) = [ λ_1^(1)(t) λ_2^(1)(t) ... λ_n^(L)(t) ] is the vector of Lagrange multipliers, and we have used the fact that W_s(t) is block-diagonal with identical blocks W(t) in writing the summation of the last term in the second line of equation (4). The Euler-Lagrange equations are then given by

-λ'(t) = (∂H/∂x)^T = (1/2) (∂Q/∂x)^T - (Θ + λ(t)) + (∂g/∂x)^T W_s^T(t) λ(t)    (5)

λ(t_f) = -Θ    (6)

∂H/∂w_j = β⁻¹ w_j^T(t) + Σ_{r=1}^{L} λ_j^(r) g^(r)T(x(t)) = 0    (7)

From equation (7) we have

w_ij(t) = -β Σ_{r=1}^{L} λ_j^(r) g(x_i^(r)(t))    (8)

Choosing

λ(t) = -Θ    (9)

satisfies the final condition (6), and with some algebra we find that this choice is also consistent with equations (5) and (7). The optimal weight program is therefore

w_ij(t) = β Σ_{r=1}^{L} θ_j^(r) g(x_i^(r)(t))    (10)

This describes the weight paradigm to be applied to the n-dimensional neural network system N in order to model the underlying map described by the sample points. A similar result can be derived for the discrete-time network x(k+1) = W(k) g(x(k)):

w_ij(k) = β Σ_{r=1}^{L} θ_j^(r) g(x_i^(r)(k))

2.1 REMARKS

• Meaning of Q. On the optimal path, using equation (10), it is straightforward to show that

βQ = β⁻¹ Σ_{j=1}^{n} w_j^T(t) w_j(t)

Thus Q acts like another integral constraint term on the weights.

• The Optimal Return Function. The optimal return function [3] is the value of the performance index on the optimal path. It shows that the optimal weight matrix W(t) seeks at every instant to minimize the negative correlation (or maximize the correlation) on the optimal path in the formal system S (and hence in the neural network N).

• Comparison with outer product rule. It is worthwhile to compare equation (10) with the outer product rule:

w_ij = β Σ_{r=1}^{L} θ_j^(r) θ_i^(r)    (11)

We see that the outer product rule is just one point on the weight trajectory defined by equation (10): the point at final time t_f when g(x_i^(r)(t_f)) = θ_i^(r).

3 LEARNING CURVES

In our simulation we considered 14 8-dimensional vectors as the desired outputs.
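The weight rule of equation (10) (in its discrete form) can be sketched as a sum of outer products; argument names are ours, and g is taken as a tanh sigmoid saturating at +1 and -1, as in the simulations:

```python
import numpy as np

def weight_program(thetas, xs, beta=1.0):
    """One instant of the optimal weight rule: a sum over the L samples of
    outer products between each subsystem output g(x^(r)) and the desired
    output theta^(r), scaled by beta."""
    g = np.tanh
    return beta * sum(np.outer(g(x), th) for x, th in zip(xs, thetas))

# At the final time, when g(x^(r)) has converged to theta^(r), the rule
# collapses to the Hopfield outer product rule of equation (11):
thetas = [np.array([1., -1., 1.]), np.array([-1., 1., 1.])]
xs = [10 * t for t in thetas]   # large activations, so tanh(x) ~ theta
W = weight_program(thetas, xs)
```

Here the recovered W is (numerically) the symmetric Hopfield matrix, illustrating the remark that the outer product rule is one point on the optimal weight trajectory.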
The weight synthesis or learning phase is as follows: we initialized the 112-dimensional formal synthesis system S with a corrupted version of the vector set, and used equation (10) to find the optimal 8 x 8 weight matrix sequence for an 8-dimensional neural network N to correctly classify any of the corrupted 14 vectors. The weight sequence is recorded. This procedure is required only once for any given training set. After this learning is completed, the normal operation of the neural network N consists in running it using the weights obtained from the synthesis phase above. The resulting network describes a continuous input-output map. At points belonging to the training set this map coincides with the underlying map we are trying to model. For points outside the training set, it performs a nonlinear interpolation (generalization) the nature of which is determined by the training set as well as the neuron state equation. Figure 1 shows the learning procedure through time. The curves labeled "Optimally Trained Network" show the behavior of two correlation measures as the training proceeds. One correlation measure used was the cosine of the angle between the desired vector (Θ) and the neuron activation (x) vector. The other correlation measure was the cosine of the angle between the desired vector (Θ) and the neuron output (g(x(t))) vector. Given our system initialization in equation (2), the correlation g(x(t))^T Θ more accurately represents our objective, although the performance index (3) reflects the correlation x^T Θ. The reason for our performance index choice is that the weight trajectory yielded by g(x(t))^T Θ leads the system to an all-zero, trivial equilibrium for a sigmoid g(·) (we used such a g(·) with saturation values at +1 and -1 in our simulations). This is not the case for the weight trajectory yielded by x^T Θ.
Since g(x(t)) is monotonic with x, x^T Θ represented an admissible alternative choice for the performance index. The results bear this out. Another possible choice is (g(x(t)) + x)^T Θ. This gives similar results upon simulation. The correlation measures are plotted on the ordinate. The abscissa is the number of computer iterations. A discrete-time network with real-valued parameters was used. The total number of errors in the 14 8-bit binary {1, -1} vectors used to initialize the system was 21. This results in an average of 1.5 errors per 8-bit vector. We note that the learning was completed in two time steps. Therefore, in this case at least, we see that the storage requirement is not intensive: only two weight matrices need to be stored during the synthesis phase. We note that the learning phase by default also represents the autoassociative system error-correcting performance over input samples drawn from the training set. Therefore over the training set we can compare this performance with that of the outer product rule (11). By considering corrupted input vectors from the training set, we compare the error-correcting capabilities of the two methods, not their capacities to store uncorrupted vectors. In fact we see that the two weight rules become identical when we initialize with the true vectors (this equivalence is not a peculiarity of the new technique, but merely a consequence of the particular performance index chosen). In other words, this comparison is a test of the extent of the basins of attraction around the desired memories for the two techniques. Looking at the curves labeled "Conventional Outer Product", we see that the new technique performs better than the outer product rule.

4 CONCLUSIONS AND FURTHER WORK

We have described a technique for training neural networks based on formal tools from optimal control theory.
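Both correlation measures plotted on the learning curves are plain cosines of the angle between two vectors; a minimal helper (the function name is ours):

```python
import numpy as np

def correlation(a, b):
    """Cosine of the angle between two vectors, the correlation measure
    plotted on the ordinate of the learning curves."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Applied to the desired vector and either the activation x or the output g(x(t)), this gives the two curves of Figure 1.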
For a specific example consisting of learning the input-output map in a training set we derived the relevant weight equations and illustrated the learning phase of the method. This example gives a weight rule that turns out to be a continuous-time generalization of the outer-product rule. Using corrupted vectors from the training set, we show that the new rule performs better in error-correction than the outer-product rule. Simulations on the generalization capabilities of the method are ongoing and are not included in the present work.

Figure 1: Learning Curves

Although we considered a training set consisting of input-output vector pairs as the starting point for the procedure, a closer examination shows that this is not required. More generally, what is required is a performance index that reflects the objective of the training. Also in our ongoing work we have extended the results to more general forms of the state equation and the performance index. Using an appropriate performance index we are investigating a network for the Travelling Salesman Problem and related applications like Tracking and Data Association.

References

[1] Michael A. Cohen & Stephen Grossberg, "Absolute Stability of Global Pattern Formation and Parallel Memory Storage by Competitive Neural Networks," IEEE Transactions on Systems, Man and Cybernetics SMC-13 (1983), 815-826.
[2] J. J. Hopfield & D. W. Tank, "Neural Computation of Decisions in Optimization Problems," Biological Cybernetics 52 (1985), 141-152.
[3] Arthur E. Bryson & Yu-Chi Ho, Applied Optimal Control, Hemisphere, 1975.
[4] J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proceedings of the National Academy of Sciences 79 (1982), 2554-2558.
|
1989
|
10
|
186
|
232 Sejnowski, Yuhas, Goldstein and Jenkins

Combining Visual and Acoustic Speech Signals with a Neural Network Improves Intelligibility

T. J. Sejnowski
The Salk Institute and Department of Biology
The University of California at San Diego
San Diego, CA 92037

B. P. Yuhas, M. H. Goldstein, Jr.
Department of Electrical and Computer Engineering
The Johns Hopkins University
Baltimore, MD 21218

R. E. Jenkins
The Applied Physics Laboratory
The Johns Hopkins University
Laurel, MD 20707

ABSTRACT

Acoustic speech recognition degrades in the presence of noise. Compensatory information is available from the visual speech signals around the speaker's mouth. Previous attempts at using these visual speech signals to improve automatic speech recognition systems have combined the acoustic and visual speech information at a symbolic level using heuristic rules. In this paper, we demonstrate an alternative approach to fusing the visual and acoustic speech information by training feedforward neural networks to map the visual signal onto the corresponding short-term spectral amplitude envelope (STSAE) of the acoustic signal. This information can be directly combined with the degraded acoustic STSAE. Significant improvements are demonstrated in vowel recognition from noise-degraded acoustic signals. These results are compared to the performance of humans, as well as other pattern matching and estimation algorithms.

1 INTRODUCTION

Current automatic speech recognition systems rely almost exclusively on the acoustic speech signal, and as a consequence, these systems often perform poorly in noisy environments. To compensate for noise-degradation of the acoustic signal, one can either attempt to remove the noise from the acoustic signal or supplement the acoustic signal with other sources of speech information. One such source is the visible movements of the mouth.
For humans, visual speech signals can improve speech perception when the acoustic signal is degraded by noise (Sumby and Pollack, 1954) and can serve as a source of speech information when the acoustic signal is completely absent through lipreading. How can these visual speech signals be used to improve the automatic recognition of speech? One speech recognition system that has extensively used the visual speech signals was developed by Eric Petajan (1987). For a limited vocabulary, Petajan demonstrated that the visual speech signals can be used to significantly improve automatic speech recognition compared to the acoustic recognition alone. The system relied upon a codebook of images that were used to translate incoming images into corresponding symbols. These symbol strings were then compared to stored sequences representing different words in the vocabulary. This categorical treatment of speech signals is required because of the computational limitations of currently available digital serial hardware. This paper proposes an alternative method for processing visual speech signals based on analog computation in a distributed network architecture. By using many interconnected processors working in parallel large amounts of data can be handled concurrently. In addition to speeding up the computation, this approach does not require segmentation in the early stages of processing; rather, analog signals from the visual and auditory pathways flow through networks in real time and can be combined directly. Results are presented from a series of experiments that use neural networks to process the visual speech signals of two talkers. In these preliminary experiments, the results are limited to static images of vowels. We demonstrate that these networks are able to extract speech information from the visual images, and that this information can be used to improve automatic vowel recognition. 
2 VISUAL AND ACOUSTIC SPEECH SIGNALS

The acoustic speech signal can be modeled as the response of the vocal tract filter to a sound source (Fant, 1960). The resonances of the vocal tract are called formants. They often appear as peaks in the short-term power spectrum, and are sufficient to identify the individual vowels (Peterson and Barney, 1953). The overall shape of the short-time spectra is important for general speech perception (Cole, 1980). The configuration of the articulators defines the shape of the vocal tract and the corresponding resonance characteristics of the filter. While some of the articulators are visible on the face of the speaker (e.g., the lips, teeth and sometimes the tip of the tongue), others are not. The contribution of the visible articulators to the acoustic signal results in speech sounds that are much more susceptible to acoustic noise distortion than are the contributions from the hidden articulators (Petajan, 1987), and therefore, the visual speech signal tends to complement the acoustic signal. For example, the visibly distinct speech sounds /b/ and /k/ are among the first pairs to be confused when presented acoustically in the presence of noise. Because of this complementary structure, the perception of speech in noise is greatly improved when both speech signals are present. How and at what level are these two speech signals being combined? In previous attempts at using the visual speech signals, the information from the visual signal was incorporated into the recognition system after the signals were categorized (Petajan, 1987). In the approach taken here, visual signals will be used to resolve ambiguities in the acoustic signal before either is categorized.
By combining these two sources of information at an early stage of processing, it is possible to reduce the number of erroneous decisions made and increase the amount of information passed to later stages of processing (Summerfield, 1987). The additional information provided by the visual signal can serve to constrain the possible interpretations of an ambiguous acoustic signal, or it can serve as an alternative source of speech information when the acoustical signal is heavily noise-corrupted. In either case, a massive amount of computation must be performed on the raw data. New massively-parallel architectures based on neural networks and new training procedures have made this approach feasible. 3 INTERPRETING THE VISUAL SIGNALS In our approach, the visual signal was mapped directly into an acoustic representation closely related to the vocal tract's transfer function (Summerfield, 1987). This representation allowed the visual signal to be fused with the acoustic signal prior to any symbolic encoding. The visual signals provide only a partial description of the vocal tract transfer function and that description is usually ambiguous. For a given visual signal there are many possible configurations of the full vocal tract, and consequently many possible corresponding acoustic signals. The goal was to define a good estimate of that acoustic signal from the visual signal and then use that estimate in conjunction with any residual acoustic information. The speech signals used in these experiments were obtained from a male speaker who was video taped while seated facing the camera, under well-lit conditions. The visual and acoustic signals were then transferred and stored on laser disc (Bernstein and Eberhardt, 1986), which allowed the access of individual video frames and the corresponding sound track. The NTSC video standard is based upon 30 frames per second and words are preserved as a series of frames on the laser disc. 
A data set was constructed of 12 examples of 9 different vowels (Yuhas et al., 1989). A reduced area-of-interest in the image was automatically defined and centered around the mouth. The resulting sub-image was sampled to produce a topographically accurate image of 20 x 25 pixels that would serve to represent the visual speech signal. While not the most efficient encoding one could use, it is faithful to the parallel approach to computation advocated here and represents what one might observe through an array of sensors. Along with each video frame on the laser disc there is 33 ms of acoustic speech. The representation chosen for the acoustic output structure was the short-time spectral amplitude envelope (STSAE) of the acoustic signal, because it is essential to speech recognition and also closely related to the vocal tract's transfer function. It can be calculated from the short-term power spectrum of the acoustic signal. The speech signal was sampled and cepstral analysis was used to produce a smooth envelope of the original power spectrum that could be sampled at 32 frequencies.

Figure 1: Typical lip images presented to the network.

Three-layered feedforward networks with non-linear units were used to perform the mapping. A lip image was presented across 500 input units, and an estimated STSAE was produced across 32 output units. Networks with five hidden units were found to provide the necessary bandwidth while minimizing the effects of over-learning. The standard backpropagation technique was used to compute the error gradients for training the network. However, instead of using a fixed-step steepest-descent algorithm for updating the weights, the error gradient was used in a conjugate-gradient algorithm. The weights were changed only after all of the training patterns were presented.
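The 500-input, five-hidden-unit, 32-output mapping network can be sketched as a forward pass; the initialization scale and the logistic hidden non-linearity are our assumptions (training would use backpropagation gradients inside a conjugate-gradient loop):

```python
import numpy as np

def init_net(n_in=500, n_hidden=5, n_out=32, seed=0):
    # Shapes follow the text: a 20 x 25 lip image flattened to 500 inputs,
    # 5 hidden units, and 32 STSAE output samples.
    rng = np.random.default_rng(seed)
    return (rng.normal(0, 0.1, (n_hidden, n_in)), np.zeros(n_hidden),
            rng.normal(0, 0.1, (n_out, n_hidden)), np.zeros(n_out))

def forward(image, params):
    """Map a 20 x 25 lip image to an estimated spectral envelope."""
    W1, b1, W2, b2 = params
    x = np.asarray(image, float).ravel()       # 500-element input vector
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # non-linear hidden units
    return W2 @ h + b2                         # 32-point STSAE estimate
```

The five hidden units act as the narrow bandwidth bottleneck described in the text.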
4 INTEGRATING THE VISUAL AND ACOUSTIC SPEECH SIGNALS

To evaluate the spectral estimates, a feedforward network was trained to recognize vowels from their STSAE's. With no noise present, the trained network could correctly categorize 100% of the 54 STSAE's in its training set, thus serving as a perfect recognizer for this data. The vowel recognizer was then presented with speech information through two channels, as shown in Fig. 2. The path on the bottom represents the information obtained from the acoustic signal, while the path on the top provides information obtained from the corresponding visual speech signal. To assess the performance of the recognizer in noise, clean spectral envelopes were systematically degraded by noise and then presented to the recognizer. In this particular condition, no visual input was given to the network. The noise was introduced by adding a normalized random vector to the STSAE. Noise corrupted vectors were produced at 3 dB intervals from -12 dB to 24 dB. At each step 6 different vectors were produced, and the performance reported was the average. Fig. 3 shows the recognition rates as a function of the speech-to-noise ratio. At a speech-to-noise ratio of -12 dB, the recognizer was operating at chance, or 11.1%. Next, a network trained to estimate the spectral envelopes from images was used to provide an independent STSAE input into the recognizer (along the top of Fig. 2). This network was not trained on any of the data that was used in training the vowel recognizer. The task remained to combine these two STSAE's.

Figure 2: A vowel recognizer that integrates the acoustic and visual speech signals.
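The noise-degradation step can be sketched as follows; the paper says only that a "normalized random vector" was added at a target speech-to-noise ratio, so the Gaussian noise and the exact norm scaling here are our interpretation:

```python
import numpy as np

def add_noise(stsae, snr_db, rng=None):
    """Degrade a clean spectral envelope by adding a random vector whose
    norm is scaled to give the requested speech-to-noise ratio in dB."""
    rng = rng if rng is not None else np.random.default_rng()
    s = np.asarray(stsae, dtype=float)
    noise = rng.normal(size=s.shape)
    # scale so that ||s|| / ||noise|| matches the target S/N
    noise *= np.linalg.norm(s) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return s + noise
```

Sweeping snr_db from -12 to 24 in 3 dB steps, with 6 draws per step, reproduces the evaluation grid described above.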
We considered three different ways of combining the estimates obtained from visual signals with the noise-degraded acoustic envelopes. The first approach was to simply average the two envelopes, which proved to be less than optimal. The recognizer was able to identify 55.6% of the STSAE's estimated from the visual signal, but when the visual estimate was combined with the noise degraded acoustic signal the recognizer was only capable of 35% at a S/N of -12 dB. Similarly, at very high signal-to-noise ratios, the combined input produced poorer results than the acoustic signal alone provided. To correct for this, the two inputs needed to be weighted according to the relative amount of information available from each source. A weighting factor was introduced which was a function of the speech-to-noise ratio:

α S_Visual + (1 - α) S_Acoustic    (1)

The optimal value for the parameter α was found empirically to vary linearly with the speech-to-noise ratio in dB. The value for α ranged from approximately 0.8 at a S/N of -12 dB to 0.0 at 24 dB. The results obtained from using the α-weighted average are shown in Fig. 3. The third method used to fuse the two STSAE's was with a second-order neural network (Rumelhart et al. 1986). Sigma-pi networks were trained to take in noise-degraded acoustic envelopes and estimated envelopes from the corresponding visual speech signal. The networks were able to recreate the noise-free acoustic envelope with greater accuracy than any of the other methods, as measured by mean squared error. This increased accuracy did not however translate into improved recognition rates.

Figure 3: The visual contribution to speech recognition in noise. The lower curve shows the performance of the recognizer under varying signal-to-noise conditions using only the acoustic channel.
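The weighted combination of equation (1) can be sketched as below; the linear ramp from about 0.8 at -12 dB to 0.0 at 24 dB is our reading of the values reported in the text, so the exact slope is an assumption:

```python
import numpy as np

def combine_stsae(s_visual, s_acoustic, snr_db):
    """Alpha-weighted average of the visually estimated and the acoustic
    STSAEs, with alpha varying linearly with the speech-to-noise ratio:
    ~0.8 at -12 dB (trust the visual channel) down to 0.0 at 24 dB."""
    alpha = np.clip(0.8 * (24.0 - snr_db) / 36.0, 0.0, 1.0)
    return alpha * np.asarray(s_visual) + (1 - alpha) * np.asarray(s_acoustic)
```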
The top curve shows the final improvement when the two channels were combined using the α-weighted average.

5 COMPARING PERFORMANCE

The performance of the network was compared to more traditional signal-processing techniques.

5.1 K-NEAREST NEIGHBORS

In this first comparison, an estimate of the STSAE was obtained using a k-nearest-neighbors approach. The images in the training set were stored along with their corresponding STSAE's calculated from the acoustic signal. These images served as the database of stored templates. Individual images from the test set were correlated against all of the stored images and the closest k images were selected. The acoustic STSAE's corresponding to the k selected images were then averaged to produce an estimate of the STSAE corresponding to the test image. Using this procedure for various values of k, the average MSE was calculated for the test set. This procedure was then repeated with the test and training sets reversed. For values of k between 2 and 6 the k-nearest-neighbor estimator was able to produce STSAE estimates with approximately the same accuracy as the neural networks. Networks evaluated after 500 training epochs produced estimates with 9% more error than the KNN approach, while the weights corresponding to the networks' best performance, as defined above, produced estimates with 5% less error.

5.2 PRINCIPAL COMPONENT ANALYSIS

A second method of comparison was to obtain an STSAE estimate using a combination of optimal linear techniques. The first step was to encode the images using a Hotelling transform, which produces an optimal encoding of an image with respect to least-mean-squared error. The encoded image y_i was computed from the normalized image x_i using

    y_i = A (x_i - m_x)    (2)

where m_x was the mean image and A was a transformation matrix whose rows were the five largest eigenvectors of the covariance matrix of the images.
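Both steps of this linear baseline — the Hotelling/principal-component encoding of eq. (2) and the least-squares map B of eq. (3) that follows — can be sketched with numpy. The data shapes, the SVD route to the covariance eigenvectors, and the helper names are illustrative assumptions:

```python
import numpy as np

def fit_linear_codec(images, stsaes, n_components=5):
    """images: (N, H, W) array; stsaes: (N, D) array of spectral envelopes.
    Returns the mean image m, the encoding matrix A, and the map B."""
    X = images.reshape(len(images), -1)
    m = X.mean(axis=0)                      # the mean image m_x
    Xc = X - m
    # rows of A = leading eigenvectors of the image covariance matrix,
    # obtained here via an SVD of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    A = Vt[:n_components]                   # eq. (2): y_i = A (x_i - m_x)
    Y = Xc @ A.T
    # least-squares matrix mapping codes y_i to envelopes s_i (eq. (3))
    B, *_ = np.linalg.lstsq(Y, stsaes, rcond=None)
    return m, A, B

def estimate_stsae(image, m, A, B):
    """Encode one image and map its code to a spectral-envelope estimate."""
    return (A @ (image.ravel() - m)) @ B
```

With A read as the input-to-hidden weights and B as the hidden-to-output weights, this mirrors the network interpretation given in the text.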
The vector y_i represents the image as do the hidden units of the neural network. The second step was to find a mapping from the encoded image vector y_i to the corresponding short-term spectral envelope s_i using a linear least-squares fit. For the y_i's calculated above, a matrix B was found that provided the best estimate of the desired s_i:

    s_i ≈ B y_i    (3)

If we think of the matrix A as corresponding to the weights from the input layer to the hidden units, then B maps the hidden units to the output units. The STSAE estimates produced by the trained networks were far superior to those obtained using the coefficients of A and B. This was true not only for the training data from which A and B were calculated, but also for the test data set. When compared to networks trained for 500 epochs, the networks produced estimates of the STSAE's that were 46% better on the training set and 12% better on the test set.

6 CONCLUSION

Humans are capable of combining information received through distinct sensory channels with great speed and ease. The combined use of the visual and acoustic speech signals is just one example of integrating information across modalities. Sumby and Pollack (1954) have shown that the relative improvement provided by the visual signal varies with the signal-to-noise ratio of the acoustic signal. By combining the speech information available from the two speech signals before categorizing, we obtained performance that was comparable to that demonstrated by humans. We have shown that visual and acoustic speech information can be effectively fused without requiring categorical preprocessing. The low-level integration of the two speech signals was particularly useful when the signal-to-noise ratio ranged from 3 dB to 15 dB, where the combined signals were recognized with greater accuracy than either of the two component signals alone.
In contrast, independent categorical decisions on each channel would have required additional information in the form of ad hoc rules to produce the same level of performance.

Lip-reading research has traditionally focused on the identification and evaluation of visual features (Montgomery and Jackson, 1983). Reducing the original speech signals to a finite set of predefined parameters or to discrete symbols can waste a tremendous amount of information. For an automatic recognition system this information may prove to be useful at a later stage of processing. In our approach, speech information in the visual signal is accessed without requiring discrete feature analysis or making categorical decisions.

This line of research has consequences for other problems, such as target identification based on multiple sensors. For example, the same problems arise in designing systems that combine radar and infrared data. Mapping into a common representation using neural network models could also be applied to these problem domains. The key insight is to combine this information at a stage prior to categorization. Neural network learning procedures allow systems to be constructed for performing the mappings as long as sufficient data are available to train the network.

Acknowledgements

This research was supported by grant AFOSR-86-0256 from the Air Force Office of Scientific Research and by the Applied Physics Laboratory's IRAD.

References

Bernstein, L.E. and Eberhardt, S.P. (1986). Johns Hopkins Lipreading Corpus I-II, Johns Hopkins University, Baltimore, MD.

Cole, R.A. (Ed.) (1980). Perception and Production of Fluent Speech. Lawrence Erlbaum Assoc., Hillsdale, NJ.

Fant, G. (1960). Acoustic Theory of Speech Production. Mouton & Co., The Hague, Netherlands.

Montgomery, A. and Jackson, P.L. (1983). Physical characteristics of the lips underlying vowel lipreading. J. Acoust. Soc. Am. 73, 2134-2144.
Petajan, E.D. (1987). An Improved Automatic Lipreading System to Enhance Speech Recognition. Bell Laboratories Technical Report No. 11251-871012-111TM.

Peterson, G.E. and Barney, H.L. (1952). Control methods used in a study of the vowels. J. Acoust. Soc. Am. 24, 175-184.

Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986). Learning internal representations by error propagation. In: D.E. Rumelhart and J.L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. MIT Press, Cambridge, MA.

Sumby, W.H. and Pollack, I. (1954). Visual contribution to speech intelligibility in noise. J. Acoust. Soc. Am. 26, 212-215.

Summerfield, Q. (1987). Some preliminaries to a comprehensive account of audio-visual speech perception. In: B. Dodd and R. Campbell (Eds.), Hearing by Eye: The Psychology of Lip-Reading. Lawrence Erlbaum Assoc., Hillsdale, NJ.

Yuhas, B.P., Goldstein, M.H. Jr. and Sejnowski, T.J. (1989). Integration of acoustic and visual speech signals using neural networks. IEEE Communications Magazine 27(11), 65-71.
|
1989
|
100
|
187
|
10 Spence and Pearson

The Computation of Sound Source Elevation in the Barn Owl

Clay D. Spence
John C. Pearson
David Sarnoff Research Center
CN5300
Princeton, NJ 08543-5300

ABSTRACT

The midbrain of the barn owl contains a map-like representation of sound source direction which is used to precisely orient the head toward targets of interest. Elevation is computed from the interaural difference in sound level. We present models and computer simulations of two stages of level-difference processing which qualitatively agree with known anatomy and physiology, and make several striking predictions.

1 INTRODUCTION

The auditory system of the barn owl constructs a map of sound direction in the external nucleus of the inferior colliculus (ICx) after several stages of processing the output of the cochlea. This representation of space enables the owl to orient its head to sounds with an accuracy greater than that of any other tested land animal [Knudsen et al., 1979]. Elevation and azimuth are processed in separate streams before being merged in the ICx [Konishi, 1986]. Much of this processing is done with neuronal maps, regions of tissue in which the position of active neurons varies continuously with some parameter; e.g., the retina is a map of spatial direction. In this paper we present models and simulations of two of the stages of elevation processing that make several testable predictions. The relatively elaborate structure of this system emphasizes one difference between the sum-and-sigmoid model neuron and real neurons, namely the difficulty of doing subtraction with real neurons. We first briefly review the available data on the elevation system.

Figure 1: Overview of the Barn Owl's Elevation System. ABI: average binaural intensity. [Diagram: NA (monaural, intensity-coding) projects to the VLVp (ILD-sensitive, with dorsal, central and ventral regions), which projects to the ICL (ILD- and ABI-sensitive) and on to the ICx.]
ILD: interaural level difference. Graphs show cell responses as a function of ILD (or monaural intensity for NA).

2 KNOWN PROPERTIES OF THE ELEVATION SYSTEM

The owl computes the elevation to a sound source from the interaural sound pressure level difference (ILD). [1] Elevation is related to ILD because the owl's ears are asymmetric, so that the right ear is most sensitive to sounds from above, and the left ear is most sensitive to sounds from below [Moiseff, 1989]. After the cochlea, the first nucleus in the ILD system is nucleus angularis (NA) (Fig. 1). NA neurons are monaural, responding only to ipsilateral stimuli. [2] Their outputs are a simple spike-rate code for the sound pressure level on that side of the head, with firing rates that increase monotonically with sound pressure level over a rather broad range, typically 30 dB [Sullivan and Konishi, 1984].

[1] Azimuth is computed from the interaural time or phase delay.
[2] Neurons in all of the nuclei we will discuss except the ICx have fairly narrow frequency tuning curves.

Each NA projects to the contralateral nucleus ventralis lemnisci lateralis pars posterior (VLVp). VLVp neurons are excited by contralateral stimuli, but inhibited by ipsilateral stimuli. The source of the ipsilateral inhibition is the contralateral VLVp [Takahashi, 1988]. VLVp neurons are said to be sensitive to ILD, that is, their ILD response curves are sigmoidal, in contrast to ICx neurons, which are said to be tuned to ILD, that is, their ILD response curves are bell-shaped. Frequency is mapped along the anterior-posterior direction, with slabs of similarly tuned cells perpendicular to this axis. Within such a slab, cell responses to ILD vary systematically along the dorsal-ventral axis, and show no variation along the medio-lateral axis. The strength of ipsilateral inhibition [3] varies roughly sigmoidally along the dorsal-ventral axis, being nearly 100% dorsally and nearly 0% ventrally.
The ILD threshold, or ILD at which the cell's response is half its maximum value, varies from about 20 dB dorsally to -20 dB ventrally. The response of these neurons is not independent of the average binaural intensity (ABI), so they cannot code elevation unambiguously. As the ABI is increased, the ILD response curves of dorsal cells shift to higher ILD, those of ventral cells shift to lower ILD, and those of central cells keep the same thresholds, but their slopes increase (Fig. 1) [Manley et al., 1988].

Each VLVp projects contralaterally to the lateral shell of the central nucleus of the inferior colliculus (ICL) [T. T. Takahashi and M. Konishi, unpublished]. The ICL appears to be the nucleus in which azimuth and elevation information is merged before forming the space map in the ICx [Spence et al., 1989]. At least two kinds of ICL neurons have been observed, some with ILD-sensitive responses as in the VLVp and some with ILD-tuned responses as in the ICx [Fujita and Konishi, 1989]. Manley, Koppl and Konishi have suggested that inputs from both VLVps could interact to form the tuned responses [Manley et al., 1988]. The second model we will present suggests a simple method for forming tuned responses in the ICL with input from only one VLVp.

3 A MODEL OF THE VLVp

We have developed simulations of matched iso-frequency slabs from each VLVp in order to investigate the consequences of different patterns of connections between them. We attempted to account for the observed gradient of inhibition by using a gradient in the number of inhibitory cells. A dorsal-ventral gradient in the number density of different cell types has been observed in staining experiments [C. E. Carr et al., 1989], with GABAergic cells [4] more numerous at the dorsal end and a non-GABAergic type more numerous at the ventral end. To model this, our simulation has a "unit" representing a group of neurons at each of forty positions along the VLVp.
Each unit has a voltage v which obeys the equation

    C dv/dt = g_L (V_L - v) + g_E (V_E - v) + g_I (V_I - v).

This describes the charging and discharging of the capacitance C through the various conductances g, driven by the voltages V_N, all of these being properties of the cell membrane. The subscript L refers to passive leakage variables, E refers to excitatory variables, and I refers to inhibitory variables. These model units have firing rates which are sigmoidal functions of v. The output on a given time step is a number of spikes, which is chosen randomly from a Poisson distribution whose mean is the unit's current firing rate times the length of the time step. g_E and g_I obey the equation

    d²g/dt² = -γ dg/dt - ω² g,

the equation for a damped harmonic oscillator. The effect of one unit's spike on another unit is to "kick" its conductance g; that is, it simply increments the conductance's time derivative by an amount depending on the strength of the connection. Inhibitory neurons increment dg_I/dt, while excitatory neurons increment dg_E/dt.

[3] Measured functionally, not actual synaptic strength. See [Manley et al., 1988] for details.
[4] GABAergic cells are usually thought to be inhibitory.

Figure 2: Models of level difference computation in the VLVps and generation of tuned responses in the ICL. Sizes of circles represent the number density of inhibitory neurons, while triangles represent excitatory neurons.

Figure 3: Output of simulation of the VLVps at several ILDs (-20 dB, 0 dB, and 20 dB). Position (dorsal to ventral, left and right VLVp) is represented on the vertical axis. Firing rate is represented by the horizontal length of the black bars.
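A minimal sketch of one such unit follows. The integration scheme and every numerical constant here (time step, reversal potentials, sigmoid steepness and threshold) are illustrative assumptions, not the paper's fitted values:

```python
import math
import random

class VLVpUnit:
    """Leaky membrane with damped-oscillator conductances and Poisson output."""

    def __init__(self, max_rate, dt=1e-3):
        self.v = 0.0                       # membrane voltage
        self.g = {'E': 0.0, 'I': 0.0}      # excitatory/inhibitory conductances
        self.dg = {'E': 0.0, 'I': 0.0}     # their time derivatives
        self.max_rate = max_rate           # carries the dorsoventral gradient
        self.dt = dt

    def kick(self, weight, inhibitory=False):
        # a presynaptic spike increments the conductance's time derivative
        self.dg['I' if inhibitory else 'E'] += weight

    def step(self, C=1.0, gL=1.0, VL=0.0, VE=1.0, VI=-1.0,
             gamma=40.0, omega=20.0):
        dt = self.dt
        for k in ('E', 'I'):
            # d2g/dt2 = -gamma dg/dt - omega**2 g (critically damped here)
            ddg = -gamma * self.dg[k] - omega ** 2 * self.g[k]
            self.g[k] = max(0.0, self.g[k] + self.dg[k] * dt)
            self.dg[k] += ddg * dt
        # C dv/dt = gL(VL - v) + gE(VE - v) + gI(VI - v)
        dv = (gL * (VL - self.v) + self.g['E'] * (VE - self.v)
              + self.g['I'] * (VI - self.v)) / C
        self.v += dv * dt
        # sigmoidal rate, then a Poisson spike count for this time step
        mean = self.max_rate * dt / (1.0 + math.exp(-8.0 * (self.v - 0.25)))
        n, p, L = 0, 1.0, math.exp(-mean)  # Knuth's Poisson sampler
        while True:
            p *= random.random()
            if p <= L:
                return n
            n += 1
```

The dorsoventral gradient of inhibition is then modeled by giving ventral inhibitory units larger `max_rate` values, as the text describes.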
This model gives a fairly realistic post-synaptic potential, and the effects of multiple spikes naturally add. The gradient of cell types is modeled by having a different maximum firing rate at each level in the VLVp. The VLVp model is shown in figure 2. Here, central neurons of each VLVp project to central neurons of the other VLVp, while more dorsal neurons project to more ventral neurons, and conversely. This forms a sort of "criss-cross" pattern ofprojections. In our simulation these projections are somewhat broad, each unit projecting with equal strength to all units in a small patch. In order for the dorsal neurons to be more strongly inhibited, there must be more inhibitory neurons at the ventral end of each VLVp, so in our simulation the maximum firing rate is higher there and decreases linearly toward the dorsal end. A presumed second neuron type is used for ouput, but we assumed its inputs and dynamics were the same as the inhibitory neurons and so we didn't model them. The input to the VLVps from the two NAs was modeled as a constant input proportional to the sound pressure level in the corresponding ear. We did not use Poisson distributed firing in this case because the spike trains of NA neurons are very regular [Sullivan and Konishi, 1984]. NA input was the same to each unit in the VLVp. Figure 3 shows spatial activity patterns of the two simulated VLVps for three different ILDs, all at the same ABI. The criss-cross inhibitory connections effectively cause these bars of activity to compete with each other so that their lengths are always approximately complementary. Figure 4 presents results of both models discussed in this paper for various ABIs and ILDs. The output of VLVp units qualitatively matches the experimentally determined responses, in particular the ILD response curves show similar shifts with ABI. for the different dorsal-ventral positions in the VLVp (see Fig. 3 in [Manley, et aI, 1988]). 
Since the observed non-GABAergic neurons are more numerous at the ventral end of the VLVp and our model's inhibitory neurons are also more numerous there, this model predicts that at least some of the non-GABAergic cells in the VLVp are the neurons which provide the mutual inhibition between the VLVps.

Figure 4: ILD response curves of the VLVp and ICL models. Curves show percent of maximum firing rate versus ILD for several ABIs. [Panels: dorsal, central, and ventral VLVp input; ILD from -20 to 20 dB; separate line types for ABIs of 10, 20, 30, and 40 dB.]

4 A MODEL OF ILD-TUNED NEURONS IN THE ICL

In this section we present a model to explain how ICL neurons can be tuned to ILD if they receive input only from the ILD-sensitive neurons in one VLVp. The model essentially takes the derivative of the spatial activity pattern in the VLVp, converting the sigmoidal activity pattern into a pattern with a localized region of activity corresponding to the end of the bar. The model is shown in Figure 2. The VLVp projects topographically to ICL neurons, exciting two different types. This would excite bars of activity in the ICL, except that one type of ICL neuron inhibits the other type. Each inhibitory neuron projects to tuned neurons which represent a smaller ILD, to one side in the map. The inhibitory neurons acquire the bar-shaped activity pattern from the VLVp, and are ILD-sensitive as a result. Of the neurons of the second type, only those which receive input from the end of the bar are not also inhibited and prevented from firing.
Our simulation used the model neurons described above, with input to the ICL taken from our model of the VLVp. Each unit in the VLVp projected to a patch of units in the ICL with connection strengths proportional to a Gaussian function of distance from the center of the patch. (Equal strengths for the connections from a given neuron worked poorly.) The results are shown in Figure 4. The model shows sharp tuning, although the maximum firing rates are rather small. The ILD response curves show the same kind of ABI dependence as those of the VLVp model. There is no published data to confirm or refute this, but we know that neurons in the space map in the ICx do not show ABI dependence. There is a direct input from the contralateral NA to the ICL which may be involved in removing ABI dependence, but we have not considered that possibility in this work.

5 CONCLUSION

We have presented two models of parts of the owl's elevation or interaural level difference (ILD) system. One predicts a "criss-cross" geometry for the connections between the owl's two VLVps. In this geometry cells at the dorsal end of either VLVp inhibit cells at the ventral end of the other, and are inhibited by them. Cells closer to the center of one VLVp interact with cells closer to the center of the other, so that the central cells of each VLVp interact with each other (Fig. 2). This model also predicts that the non-GABAergic cells in the VLVp are the cells which project to the other VLVp. The other model explains how the ICL, with input from one VLVp, can contain neurons tuned to ILD. It does this essentially by computing the spatial derivative of the activity pattern in the VLVp. This model predicts that the ILD-sensitive neurons in the ICL inhibit the ILD-tuned neurons in the ICL.
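The spatial-derivative idea can be sketched in a few lines. The one-step inhibitory shift below is a deliberately minimal stand-in for the paper's Gaussian projection patches:

```python
def icl_tuned_response(vlvp_activity):
    """vlvp_activity: firing rates along the dorsoventral axis, forming a
    'bar' that extends from index 0. Inhibitory ICL units copy this pattern;
    each one suppresses the tuned units one position to one side of its own,
    so only tuned units at the end of the bar escape inhibition."""
    inhibition = vlvp_activity[1:] + [0.0]   # input from the next unit over
    return [max(0.0, e - i) for e, i in zip(vlvp_activity, inhibition)]
```

A longer bar (a different ILD) moves the surviving peak along the array, turning the sigmoidal VLVp code into a place code for ILD.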
Simulations with semi-realistic model neurons show that these models are plausible; that is, they can qualitatively reproduce the published data on the responses of neurons in the VLVp and the ICL to different intensities of sound in the two ears. Although these are models, they are good examples of the simplicity of information processing in neuronal maps. One interesting feature of this system is the elaborate mechanism used to do subtraction. With the usual model of a neuron, which calculates a sigmoidal function of a weighted sum of its inputs, subtraction would be very easy. This demonstrates the inadequacy of such simple model neurons to provide insight into some real neural functions.

Acknowledgements

This work was supported by AFOSR contract F49620-89-C-0131.

References

C. E. Carr, I. Fujita, and M. Konishi. (1989) Distribution of GABAergic neurons and terminals in the auditory system of the barn owl. The Journal of Comparative Neurology 286: 190-207.

I. Fujita and M. Konishi. (1989) Transition from single to multiple frequency channels in the processing of binaural disparity cues in the owl's midbrain. Society for Neuroscience Abstracts 15: 114.

E. I. Knudsen, G. G. Blasdel, and M. Konishi. (1979) Sound localization by the barn owl measured with the search coil technique. Journal of Comparative Physiology 133: 1-11.

M. Konishi. (1986) Centrally synthesized maps of sensory space. Trends in Neurosciences, April, 163-168.

G. A. Manley, C. Koppl, and M. Konishi. (1988) A neural map of interaural intensity differences in the brain stem of the barn owl. The Journal of Neuroscience 8(8): 2665-2676.

A. Moiseff. (1989) Binaural disparity cues available to the barn owl for sound localization. Journal of Comparative Physiology 164: 629-636.

C. D. Spence, J. C. Pearson, J. J. Gelfand, R. M. Peterson, and W. E. Sullivan. (1989) Neuronal maps for sensory-motor control in the barn owl. In D. S.
Touretzky (ed.), Advances in Neural Information Processing Systems 1, 748-760. San Mateo, CA: Morgan Kaufmann.

W. E. Sullivan and M. Konishi. (1984) Segregation of stimulus phase and intensity coding in the cochlear nucleus of the barn owl. The Journal of Neuroscience 4(7): 1787-1799.

T. T. Takahashi. (1988) Commissural projections mediate inhibition in a lateral lemniscal nucleus of the barn owl. Society for Neuroscience Abstracts 14: 323.
|
1989
|
101
|
188
|
76 Kammen, Koch and Holmes

Collective Oscillations in the Visual Cortex

Daniel Kammen & Christof Koch
Computation and Neural Systems
Caltech 216-76
Pasadena, CA 91125

Philip J. Holmes
Dept. of Theor. & Applied Mechanics
Cornell University
Ithaca, NY 14853

ABSTRACT

The firing patterns of populations of cells in the cat visual cortex can exhibit oscillatory responses in the range of 35 - 85 Hz. Furthermore, groups of neurons many mm apart can be highly synchronized as long as the cells have similar orientation tuning. We investigate two basic network architectures that incorporate either nearest-neighbor or global feedback interactions and conclude that non-local feedback plays a fundamental role in the initial synchronization and dynamic stability of the oscillations.

1 INTRODUCTION

40 - 60 Hz oscillations have long been reported in the rat and rabbit olfactory bulb and cortex on the basis of single- and multi-unit recordings as well as EEG activity (Freeman, 1972; Wilson & Bower, 1990). Recently, two groups (Eckhorn et al., 1988 and Gray et al., 1989) have reported highly synchronized, stimulus-specific oscillations in the 35 - 85 Hz range in areas 17, 18 and PMLS of anesthetized as well as awake cats. Neurons with similar orientation tuning up to 7 mm apart show phase-locked oscillations, with a phase shift of less than 3 msec. We address here the computational architecture necessary to subserve this process by investigating to what extent two neuronal architectures, nearest-neighbor coupling and feedback from a central "comparator", can synchronize neuronal oscillations in a robust and rapid manner.

It was argued in earlier work on central pattern generators (Cohen et al., 1982) that in studying coupling effects among large populations of oscillating neurons, one can ignore the details of individual oscillators and represent each one by a single periodic variable: its phase.
Our approach assumes a population of neuronal oscillators, firing repetitively in response to synaptic input. Each cell (or group of tightly electrically coupled cells) has an associated phase variable θ_i representing the membrane potential. In particular, when θ_i = π, an action potential is generated and the phase is reset to its initial value (in our case to -π). The number of times per unit time θ_i passes through π, i.e. dθ_i/dt, is then proportional to the firing frequency of the neuron. For a network of n + 1 such oscillators, our basic model is

    dθ_i/dt = ω_i + f_i(θ_0, ..., θ_n),    i = 0, ..., n,    (1)

where ω_i represents the synaptic input to neuron i and f_i, a function of the phases, represents the coupling within the network. Each oscillator i in isolation (i.e. with f_i = 0) exhibits asymptotically stable periodic oscillations; that is, if the input is changed the oscillator will rapidly adjust to a new firing rate. In our model ω_i is assumed to derive from neurons in the lateral geniculate nucleus (LGN) and is purely excitatory.

2 FREQUENCY AND PHASE LOCKING

Any realistic model of the observed, highly synchronized oscillations must account for the fact that the individual neurons oscillate at different frequencies in isolation. This is due to variations in the synaptic input, ω_i, as well as in the intrinsic properties of the cells. We will contrast the abilities of two markedly different network architectures to synchronize these oscillations. The "chain" model (Fig. 1, top) consists of a one-dimensional array of oscillators connected to their nearest neighbors, while in the alternative "comparator" model (Fig. 1, middle) an array of neurons projects to a single unit, where the phases are averaged (i.e. (1/n) Σ_{i=0}^{n} θ_i(t)). This average is then fed back to every neuron in the network. In the continuum limit (on the unit interval), with all f_i = f being identical, the two models are

    ∂θ(x,t)/∂t = ω(x) + (1/n) ∂f(φ)/∂x    (Chain Model)    (2)

    ∂θ(x,t)/∂t = ω(x) + f(θ(x,t) - ∫₀¹ θ(s,t) ds)    (Comparator Model)    (3)

where 0 < x < 1 and φ is the phase gradient, φ = ∂θ/∂x. In the chain model, we require that f be an odd function (for simplicity of analysis only), while our analysis of the comparator model holds for any continuous function f. We use two spatially separated "spots" of width δ and amplitude α as visual input (Fig. 1, bottom). This pattern was chosen as a simple version of the double-bar stimulus that Gray et al. (1989) found to evoke coherent oscillatory activity in widely separated populations of visual cortical cells.

Figure 1: The linear chain (top) and comparator (middle) architectures. The spatial pattern of inputs is indicated by ω(x). See eqs. 2 & 3 for a mathematical description of the models. The "two-spot" input is shown at bottom and represents two parts of a perceptually extended figure.

We determine under what circumstances the chain model will develop frequency-locked solutions, such that every oscillator fires at the same frequency (but not necessarily at the same time), i.e. ∂²θ/∂x∂t = 0. We prove (Kammen et al., 1990) that frequency-locked solutions exist as long as |n(ω̄x - ∫₀ˣ ω(s) ds)| does not exceed the maximal value of f, f_max (with ω̄ = ∫₀¹ ω(s) ds the mean excitation level). Thus, if the excitation is too irregular or the chain too long (n ≫ 1), we will not find frequency-locked solutions. Phase coherence between the excited regions is not generally maintained and is, in fact, strongly a function of the initial conditions. Another feature of the chain model is that the onset of frequency locking is slow and takes time of order √n.

The location of the stimulus has no effect on phase relationships in the comparator model due to the global nature of the feedback. The comparator model exhibits two distinct regimes of behavior depending on the amplitude of the input, α.
In the case of the two-spot input (Fig. 1, bottom), if α is small, all neurons will frequency-lock regardless of location; that is, units responding to both the "figure" and the background ("ground") will oscillate at the same frequency. They will, however, fire at different times, with θ_fig ≠ θ_gnd. If α is above a critical threshold, the units responding to the "figure" will decouple in frequency as well as phase from the background while still maintaining internal phase coherency. Phase gradients never exist within the excited groups, no matter what the input amplitude.

We numerically simulated the chain and comparator models with the two-spot input for the coupling function f(θ) = sin(θ). Additive Gaussian noise was included in the input, ω_i. Our analytical results were confirmed; frequency and phase gradients were always present in the chain model (Fig. 2A) even though the coupling strength was ten times greater than that of the comparator model. In the comparator network, small excitation levels led to frequency-locking along the entire array and to phase-coupled activity within the illuminated areas (Fig. 2B), while large excitation levels led to phase and frequency decoupling between the "figure" and the "background" (Fig. 2C). The excited regions in the comparator settle very rapidly - within 2 to 3 cycles - into phase-locked activity with small phase delays. The chain model, on the other hand, exhibits strong sensitivity to initial conditions as well as a very slow approach to coherence that is still not complete even after 50 cycles (see Fig. 2).

Figure 2: The phase portrait of the chain (A), weakly (B) and strongly (C) excited comparator networks after 50 cycles. The input, indicated by the horizontal lines, is the two-spot pattern. Note that the central, unstimulated region in the chain model has been "dragged along" by the flanking excited regions.
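A discretized sketch of the basic model of eq. (1) with the comparator feedback of eq. (3) and the sin coupling used in these simulations; the Euler step, run length, and parameter values are illustrative:

```python
import numpy as np

def simulate(omegas, coupling, dt=1e-3, steps=5000):
    """Euler-integrate dtheta_i/dt = omega_i + f_i(theta). A unit 'fires'
    when its phase crosses pi; the phase then wraps back toward -pi."""
    theta = np.zeros_like(omegas)
    spikes = np.zeros(len(omegas), dtype=int)
    for _ in range(steps):
        theta = theta + (omegas + coupling(theta)) * dt
        fired = theta >= np.pi
        spikes += fired
        theta = np.where(fired, theta - 2.0 * np.pi, theta)
    return theta, spikes

def comparator(theta, f=np.sin):
    # eq. (3): feedback of f applied to each phase minus the mean phase
    return f(theta - theta.mean())
```

The chain variant differs only in the coupling function, which would read each unit's nearest neighbors instead of the population mean.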
3 STABILITY ANALYSIS

Perhaps the most intriguing aspect of the oscillations concerns the role that they may play in cortical information processing and the labeling of cells responding to a single perceptual object. To be useful in object coding, the oscillations must exhibit some degree of noise tolerance, both in the input signal and in the stability of the population to variation in the firing times of individual cells.

The degree to which input noise to individual neurons disrupts the synchronization of the population is determined by the ratio of input noise to coupling strength. For small perturbations, ω(t) = ω_0 + ε(t), the action of the feedback, from the nearest neighbors in the chain and from the entire network in the comparator, will compensate for the noise and the neuron will maintain coherence with the excited population. As ε is increased, first phase and then frequency coherence will be lost. In Fig. 3 we compare the dynamical stability of the chain and comparator models. In each case the phase, θ, of a unit receiving perturbed input is plotted as the deviation from the average phase, θ_0, of all the excited units receiving input ω_0. The chain is highly sensitive to noise: even 10% stochastic noise significantly perturbs the phase of the neuron. In the comparator model (Fig. 3B) noise must reach the 40% level to have a similar effect on the phase. As the noise increases above 0.30 ω_0, even frequency coherence is lost in the chain model (broken error bars). Frequency coherence is maintained in the comparator for ε = 0.60 ω_0.

Figure 3: The result of a perturbation on the phase, θ, for the chain (A) and comparator (B) models, plotted against ε (% of ω_0). The terminus of the error bars gives the resulting deviation from the unperturbed value. Broken bars indicate both phase and frequency decoupling.
The stability of the solutions of the comparator model to variability in the activity of individual neurons can easily be demonstrated. For simplicity consider the case of a single input of amplitude ω_1 superposed on a background of amplitude ω_0. The solutions in each region are: dθ_0/dt = ω_0 + f((θ_0 − θ_1)/2) (4) and dθ_1/dt = ω_1 + f((θ_1 − θ_0)/2). (5) We define the difference of the solutions to be φ(t) = θ_1(t) − θ_0(t) and Δω = ω_1 − ω_0. We then have an equation for the rate at which the solutions converge or diverge: dφ/dt = Δω + f(φ/2) − f(−φ/2). (6) If the solutions are stable (of constant velocity) then dθ_1/dt = dθ_0/dt and θ_1 = θ_0 + c with c a constant. We then have the stable solution φ* = c with dφ*/dt = Δω + f(c/2) − f(−c/2) = 0. Stability of the solutions can be seen by perturbing θ_1 to θ_1 = θ_0 + c + ε with |ε| < 1. The perturbed solution, φ = φ* + ε, has the derivative dφ/dt = dε/dt. Developing f(φ) into a Taylor series around φ* and neglecting terms on the order of ε^2 and higher, we arrive at dε/dt = (ε/2)[f′(c/2) + f′(−c/2)]. (7) If f(φ) is odd then f′(φ) is even, and eq. (7) reduces to dε/dt = ε f′(c/2). (8) Thus, if f′(c/2) < 0 the perturbations will decay to zero and the system will maintain phase locking within the excited regions. 4 THE FREQUENCY MODEL The model discussed so far assumes that the feedback is only a function of the phases. In particular, this implies that the comparator computes the average phase across the population. Consider, however, a model where the feedback is proportional to the average firing frequency of a group of neurons. Let us therefore replace phase in the feedback function with firing frequency: ∂θ(x,t)/∂t = ω(x) + f(∂θ̄(t)/∂t − ∂θ(x,t)/∂t) (9) with ∂θ̄/∂t = (1/L)∫_0^L ∂θ(s,t)/∂t ds. This is a very special differential equation, as can be seen by setting v(x,t) = ∂θ(x,t)/∂t.
This yields an algebraic equation for v with no explicit time dependency: v(x) = ω(x) + f(v̄ − v(x)) (10) and, after an integration, we have θ(x,t) = ∫_0^t v(x) dt = v(x)t + θ_0(x). (11) Thus, the phase relationships depend on the initial conditions, θ_0(x), and no phase locking occurs. While exact frequency locking only occurs for uniform ω(x), the feedback can lead to tight frequency coupling among the excited neurons. Reformulating the chain model in terms of firing frequencies, we have ∂φ(x,t)/∂t = (1/n)(∂ω(x)/∂x + (1/n) ∂²/∂x² f(∂φ(x,t)/∂t)) (12) under the assumption that f(−x) = −f(x). With γ(x,t) = ∂φ(x,t)/∂t, we again arrive at a stationary algebraic equation, γ(x) = (1/n)(∂ω/∂x + (1/n) ∂²/∂x² f(γ(x))), (13) and φ(x,t) = ∫_0^t γ(x) dt = γ(x)t + φ_0(x). (14) In other words, the system will develop a time-dependent phase gradient. Frequency-locked solutions of the sort ∂φ/∂t = 0 everywhere only occur if ∂ω(x)/∂x = 0 everywhere. Thus, the chain architecture leads to very static behavior, with little ability to either phase- or frequency-lock. 5 DISCUSSION We have investigated the ability of two networks of relaxation oscillators with different connectivity patterns to synchronize their oscillations. Our investigation has been prompted by recent experimental results pertaining to the existence of frequency- and phase-locked oscillations in the mammalian visual cortex (Gray et al., 1989; Eckhorn et al., 1988). While these 35 - 85 Hz oscillations are induced by the visual stimulus, usually a flashing or moving bar, they are not locked to the frequency of the stimulus. Most surprising is the finding that cells tuned to the same orientation, but separated by up to 7 mm, not only exhibit coherent oscillatory activity, but do so with a phase-shift of less than 3 msec (Gray et al., 1989).1 We have assumed the existence of a population of cortical oscillators, such as those reported in cortical slice preparations (Llinás, 1988; Chagnac-Amitai and Connors, 1989).
The issue is then how such a population of oscillators can rapidly begin to fire in near total synchrony. Two neuronal architectures suggest themselves. As a mechanism for establishing coherent oscillatory activity the comparator model is far superior to a nearest-neighbor model. The comparator rapidly (within 1 - 3 cycles) achieves phase coherence, while the chain model exhibits a far slower onset of synchronization and is highly sensitive to the initial conditions. Once initiated, the oscillations in the two models exhibit markedly different stability characteristics. The diffusive nature of communication in the chain results in little ability to regulate the firing of individual units and consequently only highly homogeneous inputs will result in collective oscillations. The long-range connections present in the comparator, however, result in stable collective oscillations even in the presence of significant noise levels. Noise uniformly distributed about the mean firing level will have little effect due to the averaging performed by the comparator unit. A more realistic model of the interconnection architecture of the cortex will certainly have to take both local as well as global neuronal pathways into account, and the ever-present delays in cellular and network signal propagation (Kammen et al., 1990). Long range (up to 6 mm) lateral excitatory connections have been reported (Gilbert and Wiesel, 1983). However, their low conduction velocities (≈ 1 mm/msec) would lead to significant phase-shifts, in contrast to the data. While the cortical circuitry contains both local as well as global connections, our results imply that a cortical architecture with one or more "comparator" neurons driven by the averaged activity of the hypercolumnar cell populations is an attractive mechanism for synchronizing the observed oscillations. We have also developed a model where the firing frequency, and not the phase, is involved in the dynamics.
Coding based on phase information requires that the cells track the time interval between incident spikes, whereas the firing frequency is available as the raw spike rate. This computation can be readily implemented neurobiologically and is entirely consistent with the known biophysics of cortical cells. 1 Note that this result is obtained by averaging over many trials. The phase-shift for individual trials may possibly be larger, but could be randomly distributed from trial to trial around the origin. Von der Malsburg (1985) has argued that the temporal synchronization of groups of neurons labels perceptually distinct objects, subserving figure-ground segregation. Both firing frequency and inter-cell phase (timing) relationships of ensembles of neurons are potential channels to encode the signatures of various objects in the visual field. Perceptually distinct objects could be coded by groups of synchronized neurons, all locked to the same frequency with the groups only distinguished by their phase relationships. We do not believe, however, that phase is a robust enough variable to code this information across the cortex. A more robust scheme is one in which groups of synchronized neurons are locked at different firing frequencies. Acknowledgement D.K. is a recipient of a Weizmann Postdoctoral Fellowship. P.H. acknowledges support from the Sherman Fairchild Foundation and C.K. from the Air Force Office of Scientific Research, a NSF Presidential Young Investigator Award and from the James S. McDonnell Foundation. We would like to thank Francis Crick for useful comments and discussions. References Chagnac-Amitai, Y. & Connors, B. W. (1989) J. Neurophys., 62, 1149. Cohen, A. H., Holmes, P. J. & Rand, R. H. (1982) J. Math. Biol., 13, 345. Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M. & Reitboeck, H. J. (1988) Biol. Cybern., 60, 121. Freeman, W. J. (1972) J. Neurophysiol., 35, 762. Gilbert, C. D. & T. N.
Wiesel (1983) J. Neurosci., 3, 1116. Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature, 338, 334. Kammen, D. M., Koch, C. & Holmes, P. J. (1990) Proc. Natl. Acad. Sci. USA, submitted. Kopell, N. & Ermentrout, G. B. (1986) Comm. Pure Appl. Math., 39, 623. Llinás, R. R. (1988) Science, 242, 1654. von der Malsburg, C. (1985) Ber. Bunsenges. Phys. Chem., 89, 703. Wilson, M. A. & Bower, J. (1990) J. Neurophysiol., in press.
|
1989
|
11
|
189
|
828 Cowan Neural networks: the early days J.D. Cowan Department of Mathematics, Committee on Neurobiology, and Brain Research Institute, The University of Chicago, 5734 S. Univ. Ave., Chicago, Illinois 60637 ABSTRACT A short account is given of various investigations of neural network properties, beginning with the classic work of McCulloch & Pitts. Early work on neurodynamics and statistical mechanics, analogies with magnetic materials, fault tolerance via parallel distributed processing, memory, learning, and pattern recognition, is described. 1 INTRODUCTION In this brief account of the early days in neural network research, it is not possible to be comprehensive. This article then is a somewhat subjective survey of some, but not all, of the developments in the theory of neural networks in the twenty-five year period from 1943 to 1968, when many of the ideas and concepts which define the field of neural network research were formulated. This comprises work on connections with automata theory and computability; neurodynamics, both deterministic and statistical; analogies with magnetic materials and spin systems; reliability via parallel and parallel distributed processing; modifiable synapses and conditioning; associative memory; and supervised and unsupervised learning. 2 McCULLOCH-PITTS NETWORKS The modern era may be said to have begun with the work of McCulloch and Pitts (1943). This is too well-known to need commenting on. Let me just make some historical remarks. McCulloch, who was by training a psychiatrist and neuroanatomist, spent some twenty years thinking about the representation of events in the nervous system. From 1941 to 1951 he worked in Chicago. Chicago at that time was one of the centers of neural network research, mainly through the work of the Rashevsky group in the Committee on Mathematical Biology at the University of Chicago. Neural Networks: The Early Days 829 Figure 1: Warren McCulloch circa 1962
Rashevsky, Landahl, Rapaport and Shimbel, among others, carried out many early investigations of the dynamics of neural networks, using a mixture of calculus and algebra. In 1942 McCulloch was introduced to Walter Pitts, then a 17 year old student of Rashevsky's. Pitts was a mathematical prodigy who had joined the Committee sometime in 1941. There is an (apocryphal) story that Pitts was led to the Rashevsky group after a chance meeting with the philosopher Bertrand Russell, at that time a visitor to the University of Chicago. In any event Pitts was already working on algebraic aspects of neural networks, and it did not take him long to see the point behind McCulloch's quest for the embodiment of mind. In one of McCulloch's later essays (McCulloch 1961) he describes the history of his efforts thus: My object, as a psychologist, was to invent a least psychic event, or "psychon", that would have the following properties: First, it was to be so simple an event that it either happened or else it did not happen. Second, it was to happen only if its bound cause had happened-shades of Duns Scotus!-that is, it was to imply its temporal antecedent. Third, it was to propose this to subsequent psychons. Fourth, these were to be compounded to produce the equivalents of more complicated propositions concerning their antecedents... In 1921 it dawned on me that these events might be regarded as the all-or-nothing impulses of neurons, combined by convergence upon the next neuron to yield complexes of propositional events. Their subsequent 1943 paper was remarkable in many respects. It is best appreciated within the zeitgeist of the era when it was written. As Papert has documented in his introduction to a collection of McCulloch's papers (McCulloch 1967), 1943 was a seminal year for the development of the science of the mind.
Craik's monograph The Nature of Explanation and the paper "Behavior, Purpose and Teleology," by Rosenblueth, Wiener and Bigelow, were also published in 1943. As Papert noted, "The common feature [of these publications] is their recognition that the laws governing the embodiment of mind should be sought among the laws governing information rather than energy or matter". The paper by McCulloch and Pitts certainly lies within this framework. Figure 2: Walter Pitts circa 1952 McCulloch-Pitts networks (henceforth referred to as MP networks) are finite state automata embodying the logic of propositions, with quantifiers, as McCulloch wished; and permit the framing of sharp hypotheses about the nature of brain mechanisms, in a form equivalent to computer programs. This was a remarkable achievement. It established, once and for all, the validity of making formal models of brain mechanisms, if not their veridicality. It also established the possibility of a rigorous theory of mind, in that neural networks with feedback loops can exhibit purposive behavior, or as McCulloch and Pitts put it: both the formal and the final aspects of that activity which we are wont to call mental are rigorously deducible from present neurophysiology ... [and] that in [imaginable networks] ... "Mind" no longer "goes more ghostly than a ghost". 2.1 FAULT TOLERANCE MP networks were the first designed to perform specific logical tasks; and of course logic can be mapped into arithmetic. Landahl, McCulloch and Pitts (1943), for example, noted that the arithmetical operations +, −, and × can be obtained in MP networks via the logical operations OR, NOT, and AND. Thus the arithmetical expression a − a·b = a·(1 − b)
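The logic-to-arithmetic correspondence can be verified directly for truth values 0 and 1 (a sketch; the specific thresholds chosen for the MP units below are illustrative, not taken from the 1943 paper):

```python
# With a, b in {0, 1}: OR -> a + b - a*b, AND -> a*b, NOT -> 1 - b,
# so the arithmetical expression a - a*b is the logic "a AND (NOT b)".

def mp_and(a, b):
    # an MP unit with two excitatory inputs and threshold 2 computes AND
    return int(a + b >= 2)

def mp_not(b):
    # an MP unit held on by a bias input, switched off by an inhibitory input
    return int(1 - b >= 1)

for a in (0, 1):
    for b in (0, 1):
        assert a - a * b == a * (1 - b) == mp_and(a, mp_not(b))
```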
|
1989
|
12
|
190
|
340 Carter, Rudolph and Nucci Operational Fault Tolerance of CMAC Networks Michael J. Carter Franklin J. Rudolph Adam J. Nucci Intelligent Structures Group Department of Electrical and Computer Engineering University of New Hampshire Durham, NH 03824-3591 ABSTRACT The performance sensitivity of Albus' CMAC network was studied for the scenario in which faults are introduced into the adjustable weights after training has been accomplished. It was found that fault sensitivity was reduced with increased generalization when "loss of weight" faults were considered, but sensitivity was increased for "saturated weight" faults. 1 INTRODUCTION Fault-tolerance is often cited as an inherent property of neural networks, and is thought by many to be a natural consequence of "massively parallel" computational architectures. Numerous anecdotal reports of fault-tolerance experiments, primarily in pattern classification tasks, abound in the literature. However, there has been surprisingly little rigorous investigation of the fault-tolerance properties of various network architectures in other application areas. In this paper we investigate the fault-tolerance of the CMAC (Cerebellar Model Arithmetic Computer) network [Albus 1975] in a systematic manner. CMAC networks have attracted much recent attention because of their successful application in robotic manipulator control [Ersu 1984, Miller 1986, Lane 1988]. Since fault-tolerance is a key concern in critical control tasks, there is added impetus to study Operational Fault Tolerance of CMAC Networks 341 this aspect of CMAC performance. In particular, we examined the effect on network performance of faults introduced into the adjustable weight layer after training has been accomplished in a fault-free environment. The degradation of approximation error due to faults was studied for the task of learning simple real functions of a single variable.
The influence of receptive field width and total CMAC memory size on the fault sensitivity of the network was evaluated by means of simulation. 2 THE CMAC NETWORK ARCHITECTURE The CMAC network shown in Figure 1 implements a form of distributed table lookup. It consists of two parts: 1) an address generator module, and 2) a layer of adjustable weights. The address generator is a fixed algorithmic transformation from the input space to the space of weight addresses. This transformation has two important properties: 1) Only a fixed number C of weights are activated in response to any particular input, and more importantly, only these weights are adjusted during training; 2) It is locally generalizing, in the sense that any two input points separated by Euclidean distance less than some threshold produce activated weight subsets that are close in Hamming distance, i.e. the two weight subsets have many weights in common. Input points that are separated by more than the threshold distance produce non-overlapping activated weight subsets. The first property gives rise to the extremely fast training times noted by all CMAC investigators. The number of weights activated by any input is referred to as the "generalization parameter", and is typically a small number ranging from 4 to 128 in practical applications [Miller 1986]. Only the activated weights are summed to form the response to the current input. A simple delta rule adjustment procedure is used to update the activated weights in response to the presentation of an input-desired output exemplar pair. Note that there is no adjustment of the address generator transformation during learning, and indeed, there are no "weights" available for adjustment in the address generator. It should also be noted that the hash-coded mapping is in general necessary because there are many more resolution cells in the input space than there are unique finite combinations of weights in the physical memory.
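A minimal one-dimensional CMAC along these lines, without the hash-coded mapping, can be sketched as follows (an illustrative sketch, not the authors' code; the learning rate, epoch count, and sampling grid are assumed values):

```python
import math

def active_addresses(x, C, n_weights, x_min=0.0, x_max=1.0):
    # C consecutive weight addresses for input x: nearby inputs overlap in
    # C-1 addresses (local generalization), distant inputs in none
    cells = n_weights - C + 1
    q = min(int((x - x_min) / (x_max - x_min) * cells), cells - 1)
    return range(q, q + C)

def response(w, x, C):
    # only the C activated weights are summed to form the response
    return sum(w[a] for a in active_addresses(x, C, len(w)))

def train(w, samples, C, beta=1.0, epochs=100):
    # delta rule: spread the output error over the activated weights only
    for _ in range(epochs):
        for x, y in samples:
            err = y - response(w, x, C)
            for a in active_addresses(x, C, len(w)):
                w[a] += beta * err / C

C, N = 8, 250                      # generalization parameter, memory size
w = [0.0] * N
samples = [(i / 100.0, math.sin(2 * math.pi * i / 100.0)) for i in range(100)]
train(w, samples, C)
rms = math.sqrt(sum((y - response(w, x, C)) ** 2 for x, y in samples)
                / len(samples))
assert rms < 0.05   # a single cycle of a sinusoid is learned quickly
```

Because only C weights are touched per presentation, training is fast; the overlap of activated subsets for neighboring inputs is what produces the local generalization discussed above.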
As a result, the local generalization property will be disturbed because some distant inputs share common weight addresses in their activated weight subsets due to hashing collisions. While the CMAC network readily lends itself to the task of learning and mimicking multidimensional nonlinear transformations, the investigation of network fault-tolerance in this setting is daunting! For reasons discussed in the next section, we opted to study CMAC fault-tolerance for simple one-dimensional input and output spaces without the use of the hash-coded mapping. 3 FAULT-TOLERANCE EXPERIMENTS We distinguish between two types of fault-tolerance in neural networks [Carter 1988]: operational fault-tolerance and learning fault-tolerance. Operational fault-tolerance deals with the sensitivity of network performance to faults introduced after learning has been accomplished in a fault-free environment. Learning fault-tolerance refers to the sensitivity of network performance to faults (either permanent or transient) which are present during training. It should be noted that the term fault-tolerance as used here applies only to faults that represent perturbations in network parameters or topology, and does not refer to noisy or censored input data. Indeed, we believe that the latter usage is both inappropriate and inconsistent with conventional usage in the computer fault-tolerance community. 3.1 EXPERIMENT DESIGN PHILOSOPHY Since the CMAC network is widely used for learning nonlinear functions (e.g. the motor drive voltage to joint angle transformation for a multiple degree-of-freedom robotic manipulator), the obvious measure of network performance is function approximation error. The sensitivity of approximation error to faults is the subject of this paper. There are several types of faults that are of concern in the CMAC architecture.
Faults that occur in the address generator module may ultimately have the most severe impact on approximation error since the selection of incorrect weight addresses will likely produce a bad response. On the other hand, since the address generator is an algorithm rather than a true network of simple computational units, the fault-tolerance of any serial processor implementation of the algorithm will be difficult to study. For this reason we initially elected to study the fault sensitivity of the adjustable weight layer only. The choice of fault types and fault placement strategies for neural network fault tolerance studies is not at all straightforward. Unlike classical fault-tolerance studies in digital systems which use "stuck-at-zero" and "stuck-at-one" faults, neural networks which use analog or mixed analog/digital implementations may suffer from a host of fault types. In order to make some progress, and to study the fault tolerance of the CMAC network at the architectural level rather than at the device level, we opted for a variation on the "stuck-at" fault model of digital systems. Since this study was concerned only with the adjustable weight layer, and since we assumed that weight storage is most likely to be digital (though this will certainly change as good analog memory technologies are developed), we considered two fault models which are admittedly severe. The first is a "loss of weight" fault which results in the selected weight being set to zero, while the second is a "saturated weight" fault which might correspond to the situation of a stuck-at-one fault in the most significant bit of a single weight register. The question of fault placement is also problematic. In the absence of a specific circuit level implementation of the network, it is difficult to postulate a model for fault distribution. We adopted a somewhat perverse outlook in the hope of characterizing the network's fault tolerance under a worst-case fault placement strategy. 
The insight gained will still prove to be valuable in more benign fault placement tests (e.g. random fault placement), and in addition, if one can devise network modifications which yield good fault-tolerance in this extreme case, there is hope of still better performance in more typical instances of circuit failure. When placing "loss of weight" faults, we attacked large magnitude weight locations first, and continued to add more such faults to locations ranked in descending order of weight magnitude. Likewise, when placing saturated weight faults we attacked small magnitude weight locations first, and successive faults were placed in locations ordered by ascending weight magnitude. Since the activated weights are simply summed to form a response in CMAC, faults of both types create an error in the response which is equal to the weight change in the faulted location. Hence, our strategy was designed to produce the maximum output error for a given number of faults. In placing faults of either type, however, we did not place two faults within a single activated weight subset. Our strategy was thus not an absolute worst-case strategy, but was still more stressful than a purely random fault placement strategy. Finally, we did not mix fault types in any single experiment. The fault tolerance experiments presented in the next section all had the same general structure. The network under study was trained to reproduce (to a specified level of approximation error) a real function of a single variable, y = f(x), based upon presentation of (x,y) exemplar pairs. Faults of the types described previously were then introduced, and the resulting degradation in approximation error was logged versus the number of faults. Many such experiments were conducted with varying CMAC memory size and generalization parameter while learning the same exemplar function.
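The worst-case placement strategy just described can be sketched as follows (a simplified sketch, not the authors' code; the saturation value W_MAX and the toy weight vector are assumed for illustration):

```python
W_MAX = 2.0   # assumed saturation value for a "saturated weight" fault

def place_faults(w, n_faults, kind, C):
    # loss faults attack the largest-magnitude weights first, saturation
    # faults the smallest; never two faults inside one C-wide activated subset
    order = sorted(range(len(w)), key=lambda a: abs(w[a]),
                   reverse=(kind == "loss"))
    faulted, used = list(w), []
    for a in order:
        if len(used) == n_faults:
            break
        if any(abs(a - b) < C for b in used):
            continue                      # would share an activated subset
        faulted[a] = 0.0 if kind == "loss" else W_MAX
        used.append(a)
    return faulted, used

w = [0.05 * i for i in range(20)]         # toy stored weights
faulted, used = place_faults(w, 3, "loss", C=4)
assert all(faulted[a] == 0.0 for a in used)
assert all(abs(a - b) >= 4 for a in used for b in used if a != b)
# the summed response changes by exactly the weight change at the fault
a = used[0]
assert sum(faulted[a:a + 4]) - sum(w[a:a + 4]) == -w[a]
```

Because the response is a plain sum of the activated weights, each fault shifts any response containing it by exactly the induced weight change, which is why this ranking maximizes output error per fault.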
We considered smoothly varying functions (sinusoids of varying spatial frequency) and discontinuous functions (step functions) on a bounded interval. 3.2 EXPERIMENT RESULTS AND DISCUSSION In this section we present the results of experiments in which the function to be learned is held fixed, while the generalization parameter of the CMAC network to be tested is varied. The total number of weights (also referred to here as memory locations) is the same in each batch of experiments. Memory sizes of 50, 250, and 1000 were investigated, but only the results for the case N=250 are presented here. They exemplify the trends observed for all memory sizes. Figure 2 shows the dependence of RMS (root mean square) approximation error on the number of loss-of-weight faults injected for generalization parameter values C=4, 8, 16. The task was that of reproducing a single cycle of a sinusoidal function on the input interval. Note that approximation error was diminished with increasing generalization at any fault severity level. For saturated weight faults, however, approximation error increased with increasing generalization! The reason for this contrasting behavior becomes clear upon examination of Figure 3. Observe also in Figure 2 that the increase in RMS error due to the introduction of a single fault can be as much as an order of magnitude. This is somewhat deceptive since the scale of the error is rather small (typically 10^-3 or so), and so it may not seem of great consequence. However, as one may note in Figure 3, the effect of a single fault is highly localized, so RMS approximation error may be a poor choice of performance measure in selected applications. In particular, saturated weight faults in nominally small weight magnitude locations create a large relative response error, and this may be devastating in real-time control applications. Loss-of-weight faults are more benign, and their impact may be diluted by increasing generalization.
The penalty for doing so, however, is increased sensitivity to saturated weight faults because larger regions of the network mapping are affected by a single fault. Figure 4 displays some of the results of fault-tolerance tests with a discontinuous exemplar function. Note the large variation in stored weight values necessary to reproduce the step function. When a large magnitude weight needed to form the step transition was faulted, the result was a double step (Figure 4(b)) or a shifted transition point (Figure 4(c)). The extent of the fault impact was diminished with decreasing generalization. Since pattern classification tasks are equivalent to learning a discontinuous function over the input feature space, this finding suggests that improved fault-tolerance in such tasks might be obtained by reducing the generalization parameter C. This would limit the shifting of pattern class boundaries in the presence of weight faults. Preliminary experiments, however, also showed that learning of discontinuous exemplar functions proceeded much more slowly with small values of the generalization parameter. 4 CONCLUSIONS AND OPEN QUESTIONS The CMAC network is well-suited to applications that demand fast learning of unknown multidimensional, static mappings (such as those arising in nonlinear control and signal processing systems). The results of the preliminary investigations reported here suggest that the fault-tolerance of conventional CMAC networks may not be as great as one might hope on the basis of anecdotal evidence in the prior literature with other network architectures. Network fault sensitivity does not seem to be uniform, and the location of particularly sensitive weights is very much dependent on the exemplar function to be learned. Furthermore, the obvious fault-tolerance enhancement technique of increasing generalization (i.e.
distributing the response computation over more weight locations) has the undesirable effect of increasing sensitivity to saturated weight faults. While the local generalization feature of CMAC has the desirable attribute of limiting the region of fault impact, it suggests that global approximation error measures may be misleading. A low value of RMS error degradation may in fact mask a much more severe response error over a small region of the mapping. Finally, one must be cautious in making assessments of the fault-tolerance of a fixed network on the basis of tests using a single mapping. Discontinuous exemplar functions produce stored weight distributions which are much more fault-sensitive than those associated with smoothly varying functions, and such functions are clearly of interest in pattern classification. Many important open questions remain concerning the fault-tolerance properties of the CMAC network. The effect of faults on the address generator module has yet to be determined. Collisions in the hash-coded mapping effectively propagate weight faults to remote regions of the input space, and the impact of this phenomenon on overall fault-tolerance has not been assessed. Much more work is needed on the role that exemplar function smoothness plays in determining the fault-tolerance of a fixed topology network. Acknowledgements The authors would like to thank Tom Miller, Fil Glanz, Gordon Kraft, and Edgar An for many helpful discussions on the CMAC network architecture. This work was supported in part by an Analog Devices Career Development Professorship and by a General Electric Foundation Young Faculty Grant awarded to M.J. Carter. References J.S. Albus. (1975) "A new approach to manipulator control: the Cerebellar Model Articulation Controller (CMAC)," Trans. ASME J. Dynamic Syst., Meas., Contr. 97; 220-227. M.J. Carter.
(1988) "The illusion of fault-tolerance in neural networks for pattern recognition and signal processing," Proc. Technical Session on Fault-Tolerant Integrated Systems. Durham, NH: University of New Hampshire. E. Ersu and J. Militzer. (1984) "Real-time implementation of an associative memory-based learning control scheme for non-linear multivariable processes," Proc. 1st Measurement and Control Symposium on Applications of Multivariable Systems Techniques; 109-119. S. Lane, D. Handelman, and J. Gelfand. (1988) "A neural network computational map approach to reflexive motor control," Proc. IEEE Intelligent Control Conf. Arlington, VA. W.T. Miller. (1986) "A nonlinear learning controller for robotic manipulators," Proc. SPIE: Intelligent Robots and Computer Vision 726; 416-423. Figure 1: CMAC block diagram (input x, C address selection lines, adjustable weight layer, output y). Figure 2: Sinusoid Approximation Error vs. Number of "Loss-of-Weight" Faults. Figure 3: Network Response and Stored Weight Values. a) single lost weight, generalization C=4; b) single lost weight, C=16; c) single saturated weight, C=16. Figure 4: Network Response and Stored Weight Values. a) no faults, transition at location 125, C=8; b) single lost weight, C=16; c) single saturated weight, C=16.
|
1989
|
13
|
191
|
Effects of Firing Synchrony on Signal Propagation in Layered Networks 141 G. T. Kenyon,1 E. E. Fetz,2 R. D. Puff1 1Department of Physics FM-15, 2Department of Physiology and Biophysics SJ-40, University of Washington, Seattle, WA 98195 ABSTRACT Spiking neurons which integrate to threshold and fire were used to study the transmission of frequency modulated (FM) signals through layered networks. Firing correlations between cells in the input layer were found to modulate the transmission of FM signals under certain dynamical conditions. A tonic level of activity was maintained by providing each cell with a source of Poisson-distributed synaptic input. When the average membrane depolarization produced by the synaptic input was sufficiently below threshold, the firing correlations between cells in the input layer could greatly amplify the signal present in subsequent layers. When the depolarization was sufficiently close to threshold, however, the firing synchrony between cells in the initial layers could no longer affect the propagation of FM signals. In this latter case, integrate-and-fire neurons could be effectively modeled by simpler analog elements governed by a linear input-output relation. 1 Introduction Physiologists have long recognized that neurons may code information in their instantaneous firing rates. Analog neuron models have been proposed which assume that a single function (usually identified with the firing rate) is sufficient to characterize the output state of a cell. We investigate whether biological neurons may use firing correlations as an additional method of coding information. Specifically, we use computer simulations of integrate-and-fire neurons to examine how various levels of synchronous firing activity affect the transmission of frequency-modulated (FM) signals through layered networks. 142 Kenyon, Fetz and Puff
Our principal observation is that for certain dynamical modes of activity, a sufficient level of firing synchrony can considerably amplify the conduction of FM signals. This work is partly motivated by recent experimental results obtained from primary visual cortex [1, 2] which report the existence of synchronized stimulus-evoked oscillations (SEO's) between populations of cells whose receptive fields share some attribute. 2 Description of Simulation For these simulations we used integrate-and-fire neurons as a reasonable compromise between biological accuracy and mathematical convenience. The subthreshold membrane potential of each cell is governed by an over-damped second-order differential equation with source terms to account for synaptic input: (1) where φ_k is the membrane potential of cell k, N is the number of cells, T_kj is the synaptic weight from cell j to cell k, t_j are the firing times for the jth cell, T_p is the synaptic weight of the Poisson-distributed input source, p_k are the firing times of the Poisson-distributed input, and τ_r and τ_d are the rise and decay times of the EPSP. The Poisson-distributed input represents the synaptic drive from a large presynaptic population of neurons. Equation 1 is augmented by a threshold firing condition: if φ_k(t) ≥ θ(t − t_k), then cell k fires, (2) where θ(t − t_k) is the threshold of the kth cell, and τ_a is the absolute refractory period. If the conditions (2) do not hold, then φ_k continues to be governed by equation 1. The threshold is ∞ during the absolute refractory period and decays exponentially during the relative refractory period: θ(t − t_k) = { ∞, if t − t_k < τ_a; θ_p e^{−(t − t_k)/τ_p} + θ_0, otherwise }, (3) where θ_0 is the resting threshold value, θ_p is the maximum increase of θ during the relative refractory period, and τ_p is the time constant characterizing the relative refractory period. 2.1 Simulation Parameters τ_r and τ_d are set to 0.2 msec and 1 msec, respectively.
T_p and T_kj are always (1/100)θ_0. This strength was chosen as typical of synapses in the CNS. To sustain activity, during each interval τ_d, a cell must receive ≥ (θ_0/T_p) = 100 Poisson-distributed inputs. Resting potential is set to 0.0 mV and θ_0 to 10 mV. After firing, φ_k and dφ_k/dt are set to 0.0 mV and −1.0 mV/msec, which simulates a small hyperpolarization after firing. τ_a and τ_p were each set to 1 msec, and θ_p to 1.0 mV. Figure 1: Example membrane potential trajectories for two different modes of activity. EPSP's arrive at a mean frequency, ν_m, that is higher for mode I (a) than for mode II (b). Dotted line below threshold indicates asymptotic membrane potential. 3 Response Properties of Single Cells Figure 1 illustrates membrane potential trajectories for two modes of activity. In mode I (fig. 1a), synaptic input drives the membrane potential to an asymptotic value (dotted line) within one standard deviation of θ_0. In mode II (fig. 1b), the asymptotic membrane potential is more than one standard deviation below θ_0. Figure 2 illustrates the change in average firing rate produced by an EPSP, as measured by a cross-correlation histogram (CCH) between the Poisson source and the target cell. In mode I (fig. 2a), the CCH is characterized by a primary peak followed by a period of reduced activity. The derivative of the EPSP, when measured in units of θ_0, approximates the peak magnitude of the CCH. In mode II (fig. 2b), the CCH peak is not followed by a period of reduced activity. The EPSP itself, measured in units of θ_0 and divided by τ_d, predicts the peak magnitude of the CCH. The transform between the EPSP and the resulting change in firing rate has been discussed by several authors [3, 4]. Figures 2c and 2d show the cumulative area (CUSUM) between the CCH and the baseline firing rate.
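The single-cell dynamics above can be sketched numerically. The following is an illustrative reduction, not the paper's exact model: it uses a first-order membrane leak in place of the second-order EPSP kinetics of equation 1, and a fixed threshold with no relative refractory period. The parameter values follow the ones quoted above (θ_0 = 10 mV, EPSP weight θ_0/100, τ_d = 1 msec), and the function name is hypothetical.

```python
import numpy as np

def simulate_if_cell(rate_per_ms=100.0, epsp_mv=0.1, theta0=10.0,
                     tau_d=1.0, t_total=1000.0, dt=0.05, seed=0):
    """Reduced integrate-and-fire cell driven by Poisson EPSPs.

    A first-order leak with decay time tau_d stands in for the paper's
    second-order EPSP dynamics, and the threshold theta0 is fixed.
    With 100 EPSPs/msec of weight theta0/100 the asymptotic potential
    sits at threshold, i.e. roughly the mode I regime.  Units: mV, msec.
    """
    rng = np.random.default_rng(seed)
    phi = 0.0                                       # membrane potential (rest = 0 mV)
    spikes = []
    for step in range(int(t_total / dt)):
        phi += dt * (-phi / tau_d)                  # passive decay toward rest
        phi += rng.poisson(rate_per_ms * dt) * epsp_mv  # Poisson-distributed drive
        if phi >= theta0:                           # threshold crossing
            spikes.append(step * dt)
            phi = 0.0                               # reset to rest after firing
    return np.array(spikes)

spike_times = simulate_if_cell()
```

Lowering rate_per_ms pushes the asymptotic potential below threshold, which corresponds to the mode II regime where firing depends on fluctuations rather than the mean drive.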
The CUSUM asymptotes to a finite value, Δ, which can be interpreted as the average number of additional firings produced by the EPSP. Δ increases with EPSP amplitude in a manner which depends on the mode of activity (fig. 2e). In mode II, the response is amplified for large inputs (concave up). In mode I, the response curve is concave down. The amplified response to large inputs during mode II activity is understandable in terms of the threshold crossing mechanism. Populations of such cells should respond preferentially to synchronous synaptic input [5]. Figure 2: Response to EPSP for two different modes of activity. a) and b) Cross-correlogram with Poisson input source, mode I and mode II respectively. c) and d) CUSUM computed from a) and b). e) Δ vs. EPSP amplitude for both modes of activity. 4 Analog Neuron Models The histograms shown in Figures 2a,b may be used to compute the impulse response kernel, U, for a cell in either of the two modes of activity, simply by subtracting the baseline firing rate and normalizing to a unit impulse strength. If the cell behaves as a linear system in response to a small impulse, U may be used to compute the response of the cell to any time-varying input. In terms of U, the change in firing rate, δF, produced by an external source of Poisson-distributed impulses arriving with an instantaneous frequency F_t(t) is given by δF(t) = T_t ∫ U(t − t') F_t(t') dt', (4) where T_t is the amplitude of the incoming EPSP's. For the layered network used in our simulations, equation 4 may be generalized to yield an iterative relation giving the signal in one layer in terms of the signal in the previous layer: δF_{i+1}(t) = N T_{i+1,i} ∫ U(t − t') δF_i(t') dt'. (5) Figure 3: Signal propagation in mode I network.
a) Response in first three layers due to a single impulse delivered simultaneously to all cells in the first layer. Ratio of common to independent input given by percentages at top of figure. First row corresponds to input layer. Firing synchrony does not affect signal propagation through mode I cells. Prediction of analog neuron model (solid line) gives a good description of signal propagation at all synchrony levels tested. b) Synchrony between cells in the same layer measured by MCH. Firing synchrony within a layer increases with layer depth for all initial values of the synchrony in the first layer.

Here δF_i is the change in instantaneous firing rate for cells in the ith layer, T_{i+1,i} is the synaptic weight between layer i and i + 1, and N is the number of cells per layer. Equation 5 follows from an equivalent analog neuron model with a linear input-output relation. This convolution method has been proposed previously [6].

5 Effects of Firing Synchrony on Signal Propagation

A layered network was designed such that the cells in the first layer receive impulses from both common and independent sources. The ratio of the two inputs was adjusted to control the degree of firing synchrony between cells in the initial layer. Each cell in a given layer projects to all the cells in the succeeding layer with equal strength, (1/100)θ_0. All simulations use 50 cells per layer. Figure 3a shows the response of cells in the mode I state to a single impulse of strength (1/100)θ_0 delivered simultaneously to all the cells in the first layer. In this and all subsequent figures, successive layers are shown from top to bottom and synchrony (defined as the fraction of common input for cells in the first layer) increases from left to right. Figure 3a shows that signals propagate through layers of interneurons with little dependence on firing synchrony. The solid line is the prediction from an equivalent analog neuron model with a linear input-output relation (eq. 5). At all levels of input synchrony, signal propagation is reasonably well approximated by the simplified model. Firing synchrony between cells in the same layer may be measured using a mass correlogram (MCH). The MCH is defined as the auto-correlation of the population spike record, which combines the individual spike records of all cells in a given layer. Figure 3b shows that for all initial levels of synchrony produced in the input layer, the intra-layer firing synchrony increased rapidly with layer depth.

The simulations were repeated using an identical network, but with the tonic level of input reduced sufficiently to fix the cells in the mode II state (fig. 4). In contrast with the mode I case, the effect of firing synchrony is substantial. When firing is asynchronous only a weak impulse response is present in the third layer (fig. 4a, bottom left), as predicted by the analog neuron model (eq. 5). For levels of input synchrony above ≈ 30%, however, the response in the third layer is substantially more prominent. A similar effect occurs for synchrony within a layer. At input synchrony levels below ≈ 30%, firing synchrony between cells in the same layer (fig. 4b) falls off in successive layers. Above this level, however, synchrony grows rapidly from layer to layer.

Figure 4: Signal propagation in mode II network. Same organization as fig. 3. a) At initial levels of synchrony above ≈ 30%, signal propagation is amplified significantly. The propagation of relatively asynchronous signals is still adequately described by the analog neuron model. b) Firing synchrony within a layer increases with layer depth for initial synchrony levels above ≈ 30%. Below this level synchrony within a layer decreases with layer depth.

To confirm that our results are not limited to the propagation of signals generated by a single impulse, oscillatory signals were produced by sinusoidally modulating the firing rates of both the common and independent input sources to the first layer (fig. 5). In the mode I state (fig. 5a), we again find that firing synchrony does not significantly alter the degree of signal penetration. The solid line shows that signal transmission is adequately described by the simplified model (eqs. 4, 5). In the mode II case, however, firing synchrony is seen to have an amplifying effect on sinusoidal signals as well (fig. 5b). Although the propagation of asynchronous signals is well described by the analog neuron model, at higher levels of synchrony propagation is enhanced.

Figure 5: Propagation of sinusoidal signals. Similar organization to figs. 3, 4. Top row shows modulation of input sources. a) Mode I activity. Signal propagation is not significantly influenced by the level of firing synchrony. Analog neuron model (solid line) gives reasonable prediction of signal transmission. b) Mode II activity. At initial levels of firing synchrony above ≈ 30%, signal propagation is amplified. The propagation of asynchronous signals is still well described by the analog neuron model. Period of applied oscillation = 10 msec.

6 Discussion

It is widely accepted that biological neurons code information in their spike density or firing rate. The degree to which the firing correlations between neurons can code additional information by modulating the transmission of FM signals depends strongly on dynamical factors. We have shown that for cells whose average membrane potential is sufficiently below the threshold for firing, spike correlations can significantly enhance the transmission of FM signals.
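The linear analog-neuron prediction (eqs. 4 and 5) can also be sketched numerically. The exponential impulse-response kernel, the gain value, and the function name below are illustrative assumptions, not the kernels measured in Fig. 2:

```python
import numpy as np

def propagate(dF0, kernel, gain, n_layers, dt):
    """Iterate eq. 5: the rate change in layer i+1 is the gain
    (N times the inter-layer weight) times the convolution of the
    previous layer's rate change with the impulse-response kernel U."""
    layers = [dF0]
    for _ in range(n_layers - 1):
        nxt = gain * dt * np.convolve(layers[-1], kernel)[:len(dF0)]
        layers.append(nxt)
    return layers

dt = 0.1                                  # msec
t = np.arange(0, 10.0, dt)
U = np.exp(-t)                            # assumed kernel, integral ~ 1
impulse = np.zeros_like(t)
impulse[0] = 1.0 / dt                     # unit-area impulse into layer 1
responses = propagate(impulse, U, gain=1.0, n_layers=3, dt=dt)
```

Because the relation is a pure convolution, each successive layer's response is broader and peaks later; what the linear model cannot reproduce is the synchrony-dependent amplification seen in the mode II simulations.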
We have also shown that the propagation of asynchronous signals is well described by analog neuron models with linear transforms. These results may be useful for understanding the role played by synchronized SEO's in primary visual cortex [1, 2]. Such signals may be propagated more effectively to subsequent processing areas as a consequence of their relative synchronization. These observations may also pertain to the neural mechanisms underlying the increased levels of synchronous discharge of cerebral cortex cells observed in slow wave sleep [7]. Another relevant phenomenon is the spread of synchronous discharge from an epileptic focus; the extent to which synchronous activity is propagated through surrounding areas may be modulated by changing their level of activation through voluntary effort or changing levels of arousal. These physiological phenomena may involve mechanisms similar to those exhibited by our network model. Acknowledgements This work is supported by an NIH pre-doctoral training grant in molecular biophysics (grant # T32-GM 08268) and by the Office of Naval Research (contract # N 00018-89-J-1240). References [1] C. M. Gray, P. Konig, A. K. Engel, W. Singer, Nature 338:334-337 (1989) [2] R. Eckhorn, R. Bauer, W. Jordan, M. Brosch, W. Kruse, H. J. Reitboeck, Bio. Cyber. 60:121-130 (1988) [3] E. E. Fetz, B. Gustafsson, J. Physiol. 341:387-410 (1983) [4] P. A. Kirkwood, J. Neurosci. Meth. 1:107-132 (1979) [5] M. Abeles, Local Cortical Circuits: Studies of Brain Function. Springer, New York, Vol. 6 (1982) [6] E. E. Fetz, Neural Information Processing Systems, American Institute of Physics (1988) [7] H. Noda, W. R. Adey, J. Neurophysiol. 23:672-684 (1970)
|
1989
|
14
|
192
|
Acoustic-Imaging Computations by Echolocating Bats: Unification of Diversely-Represented Stimulus Features into Whole Images. James A. Simmons, Department of Psychology and Section of Neurobiology, Division of Biology and Medicine, Brown University, Providence, RI 02912. ABSTRACT The echolocating bat, Eptesicus fuscus, perceives the distance to sonar targets from the delay of echoes and the shape of targets from the spectrum of echoes. However, shape is perceived in terms of the target's range profile. The time separation of echo components from parts of the target located at different distances is reconstructed from the echo spectrum and added to the estimate of absolute delay already derived from the arrival-time of echoes. The bat thus perceives the distance to targets and depth within targets along the same psychological range dimension, which is computed. The image corresponds to the crosscorrelation function of echoes. Fusion of physiologically distinct time- and frequency-domain representations into a final, common time-domain image illustrates the binding of within-modality features into a unified, whole image. To support the structure of images along the dimension of range, bats can perceive echo delay with a hyperacuity of 10 nanoseconds. THE SONAR OF BATS Bats are flying mammals, whose lives are largely nocturnal. They have evolved the capacity to orient in darkness using a biological sonar called echolocation, which they use to avoid obstacles to flight and to detect, identify, and track flying insects for interception (Griffin, 1958). Echolocating bats emit brief, mostly ultrasonic sonar sounds and perceive objects from echoes that return to their ears. The bat's auditory system acts as the sonar receiver, processing echoes to reconstruct images of the objects themselves.
Many bats emit frequency-modulated (FM) signals; the big brown bat, Eptesicus fuscus, transmits sounds with durations of several milliseconds containing frequencies from about 20 to 100 kHz arranged in two or three harmonic sweeps (Fig. 1). The images that Eptesicus ultimately perceives retain crucial features of the original sonar waveforms, thus revealing how echoes are processed to reconstruct a display of the object itself. Figure 1: Spectrogram of a sonar sound emitted by the big brown bat, Eptesicus fuscus (Simmons, 1989). Several important general aspects of perception are embodied in specific echo-processing operations in the bat's sonar. By recognizing constraints imposed when echoes are encoded in terms of neural activity in the bat's auditory system, recent experiments have identified a novel use of time- and frequency-domain techniques as the basis for acoustic imaging in FM echolocation. The intrinsically reciprocal properties of time- and frequency-domain representations are exploited in the neural algorithms which the bat uses to unify disparate features into whole images. IMAGES OF SINGLE-GLINT TARGETS A simple sonar target consists of a single reflecting point, or glint, located at a discrete range and reflecting a single replica of the incident sonar signal. A complex target consists of several glints at slightly different ranges. It thus reflects compound echoes composed of individual replicas of the incident sound arriving
The performance of bats trained to discriminate between echoes that jitter in delay and echoes that are stationary in delay yields a graph of the image itself (Altes, 1989), together with an indication of the accuracy of the delay estimate that underlies it (Simmons, 1979; Simmons, Perragamo, Moss, Stevenson, & Altes, in press). Fig. 2 shows Jitter Performonce Crasscorrelatian Function / 1"-.... /\ ./.\ -" '\./.\ j \ /.'/ /..-. / \ ....... . . -50 -40 -030 -20 -10 0 10 ~o )0 40 50 -50 -40 -JO -20 -10 0 10 20 JO 40 50 Time (mIcroseconds) Time (microseconds) Figure 2: Graphs showing the bat's image of a single-glint target from jitter discrimination experilnents (left) for comparison with the crosscorrelation function of echoes (right). The zero point on each time axis corresponds to the objective arrival-time of the echoes (about 3 msec in this experiment; Sinlmons, Perragamo, et aI., in press). the image of a single-glint target perceived by Eptesicus, expressed in terms of echo delay (58 Ilsec/cm of range). Prom the bat's jitter discrimination performance, the target is perceived at its true range. Also, the image has a fme structure consisting of a central peak corresponding to the location of the target and two prominent side-peaks as ghost images located about 35 }lsec or 0.6 cm nearer and farther than the main peak. This image fme structure reflects the composition of the waveform of the echoes themselves; it approximates the crosscorrelation function of echoes (Fig. 2). The discovery that the bat perceives an image corresponding to the crosscorrelation function of echoes provides a view of the hidden machinery of the bat's sonar receiver. The bat's estimate of echo delay evidently is based upon a capacity of the auditory system to represent virtually all of the information available in echo waveforms that is relevant to determining delay, including the phase of echoes relative to emissions (Simmons, Ferragamo, et al, in press). 
The bat's initial auditory representation of these FM signals resembles spectrograms that consist of neural impulses marking the time-of-occurrence of successive frequencies in the FM sweeps of the sounds (Fig. 3). Figure 3: Neural spectrograms representing a sonar emission (left) and an echo from a target located about 1 m away (right). The individual dots are neural impulses conveying the instantaneous frequency of the FM sweeps (see Fig. 1). The 6-msec time separation of the two spectrograms indicates target range in the bat's sonar receiver (Simmons & Kick, 1984). Each nerve impulse travels in a "channel" that is tuned to a particular excitatory frequency (Bodenhamer & Pollak, 1981) as a consequence of the frequency analyzing properties of the cochlea. The cochlear filters are followed by rectification and low-pass filtering, so in a conventional sense the phase of the filtered signals is destroyed in the course of forming the spectrograms. However, Fig. 2 shows that the bat is able to reconstruct the crosscorrelation function of echoes from its spectrogram-like auditory representation. The individual neural "points" in the spectrogram signify instantaneous frequency, and the recovery of the fine structure in the image may exploit properties of instantaneous frequency when the images are assembled by integrating numerous separate delay measurements across different frequencies. The fact that the crosscorrelation function emerges from these neural computations is provocative from theoretical and technological viewpoints--the bat appears to employ novel real-time algorithms that can transform echoes into spectrograms and then into the sonar ambiguity function itself.
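The delay estimate underlying Fig. 2 can be illustrated with a toy computation: cross-correlating a synthetic FM sweep with a delayed, attenuated copy of itself recovers the echo delay at the correlation peak. The sweep parameters and sampling rate below are illustrative choices, not the bat's actual call:

```python
import numpy as np

fs = 1_000_000                          # 1 MHz sampling rate (illustrative)
t = np.arange(0, 0.002, 1.0 / fs)       # 2-msec call
# Downward FM sweep, ~100 kHz falling to ~20 kHz, loosely like Fig. 1
emission = np.sin(2 * np.pi * (100e3 * t - 20e6 * t**2))

true_delay = 350                        # samples = 350 microseconds
echo = np.zeros(len(emission) + 1000)
echo[true_delay:true_delay + len(emission)] = 0.3 * emission

# The crosscorrelation of echo and emission peaks at the echo delay
xcorr = np.correlate(echo, emission, mode="valid")
estimated_delay = int(np.argmax(xcorr))
```

The side-peaks flanking the main correlation peak in Fig. 2 arise the same way: shifting the sweep by roughly one cycle still leaves substantial overlap, and hence substantial correlation.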
The range-axis image of a single-glint target has a fine structure surrounding a central peak that constitutes the bat's estimate of echo delay (Fig. 2). The width of this peak corresponds to the limiting accuracy of the bat's delay estimate, allowing for the ambiguity represented by the side-peaks located about 35 µsec away. In Fig. 2, the data-points are spaced 5 µsec apart along the time axis (approximately the Nyquist sampling interval for the bat's signals), and the true width of the central peak is poorly shown. Fig. 4 shows the performance of three Eptesicus in an experiment to measure this width with smaller delay steps. Figure 4: A graph of the performance of Eptesicus discriminating echo-delay jitters that change in small steps. The bats' limiting acuity is about 10 nsec at 75% correct responses (Simmons, Ferragamo, et al., in press). The bats can detect a shift of as little as 10 nsec as a hyperacuity (Altes, 1989) for echo delay in the jitter task. In estimating echo delay, the bat must integrate spectrogram delay estimates across separate frequencies in the FM sweeps of emissions and echoes (see Fig. 3), and it arrives at a very accurate composite estimate indeed. Timing accuracy in the nanosecond range is a previously unsuspected capability of the nervous system, and it is likely that more complex algorithms than just integration of information across frequencies lie behind this fine acuity (see below on amplitude-latency trading and perceived delay). IMAGES OF TWO-GLINT TARGETS Complex targets such as airborne insects reflect echoes composed of several replicas of the incident sound separated by short intervals of time (Simmons & Chen, 1989).
For insect-sized targets, with dimensions of a few centimeters, this time separation of echo components is unlikely to exceed 100 to 150 µsec. Because the bat's signals are several milliseconds long, the echoes from complex targets thus will contain echo components that largely overlap. The auditory system of Eptesicus has an integration-time of about 350 µsec for reception of sonar echoes (Simmons, Freedman, et al., 1989). Two echo components that arrive together within this integration-time will merge together into a single compound echo having an arrival-time as a whole that indicates the delay of the first echo component, and having a series of notches in its spectrum that indicates the time separation of the first and second components. In the bat's auditory representation, echo delay corresponds to the time separation of the emission and echo spectrograms (see Fig. 3), while the notches in the compound echo spectrum appear as "holes" in the spectrogram--that is, as frequencies that fail to appear in echoes. The location and spacing of these notches or holes in frequency is related to the separation of the two echo components in time. The crucial point is that the constraint imposed by the 350-µsec integration-time for echo reception disperses the information required to reconstruct the detailed range structure of the complex target into both the time and the frequency dimensions of the neural spectrograms. Eptesicus extracts an estimate of the overall delay of the waveform of compound echoes from two-glint targets. This time estimate leads to a range-axis image of the closer of the two glints in the target (the target's leading edge).
This part of the image exhibits the same properties as the image of a single-glint target--it is encoded by the time-of-occurrence of neural discharges in the spectrograms and it resembles the crosscorrelation function for the first echo component (Simmons, Moss, & Ferragamo, 1990; Simmons, Ferragamo, et al., in press; see Simmons, 1989). The bat also perceives a range-axis image of the farther of the two glints (the target's trailing edge). This image is located at a perceived distance that corresponds to the bat's estimate of the time separation of the two echo components that make up the compound echo. Fig. 5 shows the performance of Eptesicus in a jitter discrimination experiment in which one of the jittering stimulus echoes contained two replicas of the bat's emitted sound separated by 10 µsec. Figure 5: A graph comparing the crosscorrelation function of echoes from a two-glint target with a delay separation of 10 µsec (top) with the bat's jitter discrimination performance using this compound echo as a stimulus (bottom). The two glints are indicated as a1 and a1' (Simmons, 1989). The bat perceives two distinct reflecting points along the range axis. Both glints appear as events along the range axis in a time-domain image even though the existence of the second glint could only be inferred from the frequency domain because the delay separation of 10 µsec is much shorter than the receiver's integration time. The image of the second glint resembles the crosscorrelation function of the later of the two echo components. The bat adds it to the crosscorrelation function for the earlier component when the whole image is formed.
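The relation between glint separation and spectral notch spacing can be sketched directly: two overlapping replicas separated by Δt have transfer function 1 + e^{−i2πfΔt}, whose magnitude has notches spaced 1/Δt apart in frequency. The 35-µsec separation and the 20-100 kHz band below are illustrative choices:

```python
import numpy as np

dt_sep = 35e-6                       # glint separation, seconds
f = np.linspace(20e3, 100e3, 4001)   # call bandwidth, Hz
# Two-glint transfer function: echo(f) = s(f) * (1 + exp(-i 2 pi f dt))
H = np.abs(1.0 + np.exp(-2j * np.pi * f * dt_sep))

# Locate the spectral notches (interior local minima of |H|)
interior = (H[1:-1] < H[:-2]) & (H[1:-1] < H[2:])
notch_freqs = f[1:-1][interior]

# Notch spacing gives back the glint separation: dt = 1 / spacing
recovered_sep = 1.0 / np.mean(np.diff(notch_freqs))
```

This is the inversion the text describes: the time separation of the two echo components, lost to the 350-µsec integration-time, survives as notch spacing in the frequency domain and can be mapped back onto the range axis.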
ACOUSTIC-IMAGE PROCESSING BY FM BATS Somehow Eptesicus recovers sufficient information from the timing of neural discharges across the frequencies in the FM sweeps of emissions and echoes to reconstruct the crosscorrelation function of echoes from the first glint in the complex target and to estimate delay with nanosecond accuracy. This fundamentally time-domain image is derived from the processing of information initially also represented in the time domain, as demonstrated by the occurrence of changes in apparent delay as echo amplitude increases or decreases: The location of the perceived crosscorrelation function for the first glint can be shifted by predictable amounts along the time axis according to the separately-measured amplitude-latency trading relation for Eptesicus (about -17 µsec/dB; Simmons, Moss, & Ferragamo, 1990; Simmons, Ferragamo, et al., in press), indicating that neural response latency--that is, neural discharge timing--conveys the crucial information about delay in the bat's auditory system. The second glint in the complex target manifests itself as a crosscorrelation-like image component, too. However, the bat must transform spectral information into the time domain to arrive at such a time- or range-axis representation for the second glint. This transformed time-domain image is added to the time-domain image for the first glint in such a way that the absolute range of the second glint is referred to that of the first glint. Shifts in the apparent range of the first glint caused by neural discharges undergoing amplitude-latency trading will carry the image of the second glint along with it to a new range value (Simmons, Moss, & Ferragamo, 1990). Evidently, the psychological dimension of absolute range supports the image of the target as a whole. This helps to explain the bat's extraordinary 10-nsec accuracy for perceiving delay.
For the psychological range or delay axis to accept fine-grain range information about the separation of glints in complex targets, its intrinsic accuracy must be adequate to receive the information that is transformed from the frequency domain. The bat achieves fusion of image components by transforming one component into the numerical format for the other and then adding them together. The experimental dissociation of the images of the first and second glints from different effects of latency shifts demonstrates the independence of their initial physiological representations. Furthermore, the expected latency shift does not occur for frequencies whose amplitudes are low because they coincide with spectral notches; the bat's fine nanosecond acuity thus seems to involve removal of discharges at "untrustworthy" frequencies prior to integration of discharge timing across frequencies. The delay-tuning of neurons is usually thought to represent the conversion of a temporal code (timing of neural discharges) into a "place" code (the location of activity on the neural map). The bat's unusual acuity of 10 nsec suggests that this conversion of a temporal to a "place" code is only partial. Not only does the site of activity on the neural map convey information about delay, but the timing of discharges in map neurons may also play a critical role in the map-reading operation. The bat's fine acuity may emerge in the behavioral data because initial neural encoding of the stimulus conditions in the jitter task involves the same parameter of neural responses--timing--that later is intimately associated with map-reading in the brain. Echolocation may thus fortuitously be a good system in which to explore this basic perceptual process. Acknowledgments Research supported by grants from ONR, NIH, NIMH, ORF, and SOF. References R. A. Altes (1989) Ubiquity of hyperacuity, J. Acoust. Soc. Am. 85: 943-952. R. D. Bodenhamer & G. O.
Pollak (1981) Time and frequency domain processing in the inferior colliculus of echolocating bats, Hearing Res. 5: 317-355. D. R. Griffin (1958) Listening in the Dark, Yale Univ. Press. J. A. Simmons (1979) Perception of echo phase information in bat sonar, Science, 207: 1336-1338. J. A. Simmons (1989) A view of the world through the bat's ear: the formation of acoustic images in echolocation, Cognition 33: 155-199. J. A. Simmons & L. Chen (1989) The acoustic basis for target discrimination by FM echolocating bats, J. Acoust. Soc. Am. 86: 1333-1350. J. A. Simmons, M. Ferragamo, C. F. Moss, S. B. Stevenson, & R. A. Altes (in press) Discrimination of jittered sonar echoes by the echolocating bat, Eptesicus fuscus: the shape of target images in echolocation, J. Comp. Physiol. A. J. A. Simmons, E. G. Freedman, S. B. Stevenson, L. Chen, & T. J. Wohlgenant (1989) Clutter interference and the integration time of echoes in the echolocating bat, Eptesicus fuscus, J. Acoust. Soc. Am. 86: 1318-1332. J. A. Simmons & S. A. Kick (1984) Physiological mechanisms for spatial filtering and image enhancement in the sonar of bats, Ann. Rev. Physiol. 46: 599-614. J. A. Simmons, C. F. Moss, & M. Ferragamo (1990) Convergence of temporal and spectral information into acoustic images perceived by the echolocating bat, Eptesicus fuscus, J. Comp. Physiol. A 166:
|
1989
|
15
|
193
|
Time Dependent Adaptive Neural Networks. Fernando J. Pineda, Center for Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109. ABSTRACT A comparison of algorithms that minimize error functions to train the trajectories of recurrent networks reveals how complexity is traded off for causality. These algorithms are also related to time-independent formalisms. It is suggested that causal and scalable algorithms are possible when the activation dynamics of adaptive neurons is fast compared to the behavior to be learned. Standard continuous-time recurrent backpropagation is used in an example. 1 INTRODUCTION Training the time dependent behavior of a neural network model involves the minimization of a function that measures the difference between an actual trajectory and a desired trajectory. The standard method of accomplishing this minimization is to calculate the gradient of an error function with respect to the weights of the system and then to use the gradient in a minimization algorithm (e.g. gradient descent or conjugate gradient). Techniques for evaluating gradients and performing minimizations are well developed in the field of optimal control and system identification, but are only now being introduced to the neural network community. Not all algorithms that are useful or efficient in control problems are realizable as physical neural networks. In particular, physical neural network algorithms must satisfy locality, scaling and causality constraints. Locality simply is the constraint that one should be able to update each connection using only presynaptic and postsynaptic information. There should be no need to use information from neurons or connections that are not in physical contact with a given connection. Scaling, for this paper, refers to the scaling law that governs the amount of computation or hardware that is required to perform the weight updates.
For neural networks, where the number of weights can become very large, the amount of hardware or computation required to calculate the gradient must scale linearly with the number of weights. Otherwise, large networks are not possible. Finally, learning algorithms must be causal, since physical neural networks must evolve forwards in time. Many algorithms for learning time-dependent behavior, although they are seductively elegant and computationally efficient, cannot be implemented as physical systems because the gradient evaluation requires time evolution in two directions. In this paper networks that violate the causality constraint will be referred to as unphysical. It is useful to understand how scalability and causality trade off in various gradient evaluation algorithms. In the next section three related gradient evaluation algorithms are derived and their scaling and causality properties are compared. The three algorithms demonstrate a natural progression from a causal algorithm that scales poorly to an acausal algorithm that scales linearly. The difficulties that these exact algorithms exhibit appear to be inescapable. This suggests that approximation schemes that do not calculate exact gradients, or that exploit special properties of the tasks to be learned, may lead to physically realizable neural networks. The final section of this paper suggests an approach that could be exploited in systems where the time scale of the to-be-learned task is much slower than the relaxation time scale of the adaptive neurons.

2 ANALYSIS OF ALGORITHMS

We will begin by reviewing the learning algorithms that apply to time-dependent recurrent networks. The control literature generally derives these algorithms by taking a variational approach (e.g. Bryson and Ho, 1975). Here we will take a somewhat unconventional approach and restrict ourselves to the domain of differential equations and their solutions. To begin with, let us take a concrete example.
Consider the neural system given by the equation

$$\frac{dx_i}{dt} = -x_i + \sum_{j=1}^{n} w_{ij} f(x_j) + I_i \qquad (1)$$

where f(.) is a sigmoid shaped function (e.g. tanh(.)) and I_i is an external input. This system is a well studied neural model (e.g. Aplevich, 1968; Cowan, 1967; Hopfield, 1984; Malsburg, 1973; Sejnowski, 1977). The goal is to find the weight matrix w that causes the states x(t) of the output units to follow a specified trajectory ξ(t). The actual trajectory depends not only on the weight matrix but also on the external input vector I. To find the weights one minimizes a measure of the difference between the actual trajectory x(t) and the desired trajectory ξ(t). This measure is a functional of the trajectories and a function of the weights. It is given by

$$E(w, t_0, t_f) = \frac{1}{2} \sum_{i \in O} \int_{t_0}^{t_f} dt\, \left( x_i(t) - \xi_i(t) \right)^2 \qquad (2)$$

where O is the set of output units. We shall, only for the purpose of algorithm comparison, make the following assumptions: (1) that the networks are fully connected, (2) that the interval [t_0, t_f] is divided into q segments, with numerical integrations performed using the Euler method, and (3) that all the operations are performed with the same precision. This will allow us to easily estimate the amount of computation and memory required for each algorithm relative to the others.

2.1 ALGORITHM A

If the objective function E is differentiated with respect to w_rs one obtains

$$\frac{\partial E}{\partial w_{rs}} = -\sum_{i=1}^{n} \int_{t_0}^{t_f} dt\, J_i(t)\, p_{irs}(t) \qquad (3a)$$

where

$$J_i = \begin{cases} \xi_i(t) - x_i(t) & \text{if } i \in O \\ 0 & \text{if } i \notin O \end{cases} \qquad (3b)$$

and where

$$p_{irs} = \frac{\partial x_i}{\partial w_{rs}} \qquad (3c)$$

To evaluate p_irs, differentiate equation (1) with respect to w_rs and observe that the time derivative and the partial derivative with respect to w_rs commute. The resulting equation is

$$\frac{dp_{irs}}{dt} = \sum_{j=1}^{n} L_{ij}(x)\, p_{jrs} + S_{irs} \qquad (4a)$$

where

$$L_{ij}(x) = -\delta_{ij} + w_{ij} f'(x_j) \qquad (4b)$$

and where

$$S_{irs} = \delta_{ir} f(x_s) \qquad (4c)$$

The initial condition for eqn. (4a) is p(t_0) = 0. Equations (1), (3) and (4) can be used to calculate the gradient for a learning rule.
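Under the Euler discretization of assumption (2), that computation can be sketched directly. The following is an illustrative reimplementation, not code from the paper; the function name, array layout and argument conventions are mine. The n³-component sensitivity tensor p makes the storage cost of the method explicit.

```python
import numpy as np

def f(x): return np.tanh(x)
def fprime(x): return 1.0 - np.tanh(x) ** 2

def gradient_algorithm_A(w, I, xi, out, x0, dt, q):
    """Forward Euler integration of eqns (1), (3) and (4).
    I and xi hold the input and target at each of the q time steps;
    out lists the output units O.  p has n^3 components."""
    n = w.shape[0]
    x = x0.copy()
    p = np.zeros((n, n, n))                        # p[i, r, s] = dx_i/dw_rs, p(t0) = 0
    grad = np.zeros((n, n))
    for k in range(q):
        J = np.zeros(n)
        J[out] = xi[k][out] - x[out]               # eqn (3b)
        grad -= dt * np.einsum('i,irs->rs', J, p)  # accumulate eqn (3a)
        L = -np.eye(n) + w * fprime(x)[None, :]    # L_ij = -delta_ij + w_ij f'(x_j), eqn (4b)
        S = np.zeros((n, n, n))
        S[np.arange(n), np.arange(n), :] = f(x)    # S_irs = delta_ir f(x_s), eqn (4c)
        p = p + dt * (np.einsum('ij,jrs->irs', L, p) + S)  # eqn (4a)
        x = x + dt * (-x + w @ f(x) + I[k])        # eqn (1)
    return grad
```

On this discretization the update for p is the exact sensitivity of the Euler trajectory, so the returned gradient matches a finite-difference check on the discretized error functional.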
This is the approach taken by Williams and Zipser (1989) and also discussed by Pearlmutter (1988). Williams and Zipser further observe that one can use the instantaneous values of p(t) and J(t) to update the weights continually, provided the weights change slowly. The computationally intensive part of this algorithm occurs in the integration of equation (4a). There are n³ components to p, hence there are n³ equations. Accordingly, the amount of hardware or memory required to perform the calculation will scale like n³. Each of these equations requires a summation over all the neurons, hence the amount of computation (measured in multiply-accumulates) goes like n⁴ per time step, and there are q time steps, so the total number of multiply-accumulates scales like n⁴q. Clearly, the scaling properties of this approach are very poor and it cannot be practically applied to very large networks.

2.2 ALGORITHM B

Rather than numerically integrate the system of equations (4a) to obtain p(t), suppose we write down the formal solution. This solution is

$$p_{irs}(t) = \sum_{j=1}^{n} K_{ij}(t, t_0)\, p_{jrs}(t_0) + \sum_{j=1}^{n} \int_{t_0}^{t} d\tau\, K_{ij}(t, \tau)\, S_{jrs}(\tau) \qquad (5a)$$

The matrix K is defined by the expression

$$K(t_2, t_1) = \exp\!\left( \int_{t_1}^{t_2} d\tau\, L(x(\tau)) \right) \qquad (5b)$$

This matrix is known as the propagator or transition matrix. The expression for p_irs consists of a homogeneous solution and a particular solution. The choice of initial condition p_irs(t_0) = 0 leaves only the particular solution. If the particular solution is substituted back into eqn. (3a), one eventually obtains the following expression for the gradient

$$\frac{\partial E}{\partial w_{rs}} = -\sum_{i=1}^{n} \int_{t_0}^{t_f} dt \int_{t_0}^{t} d\tau\, J_i(t)\, K_{ir}(t, \tau)\, f(x_s(\tau)) \qquad (6)$$

To obtain this expression one must observe that S_jrs can be expressed in terms of x_s, i.e. use eqn. (4c). This allows the summation over j to be performed trivially, resulting in eqn. (6). The familiar outer product form of backpropagation is not yet manifest in this expression.
To uncover it, change the order of the integrations. This requires some care because the limits of the integration are not the same. The result is

$$\frac{\partial E}{\partial w_{rs}} = -\sum_{i=1}^{n} \int_{t_0}^{t_f} d\tau \int_{\tau}^{t_f} dt\, J_i(t)\, K_{ir}(t, \tau)\, f(x_s(\tau)) \qquad (7)$$

Inspection of this expression reveals that neither the summation over i nor the integration over t includes x_s(τ), thus it is useful to factor it out. Consequently equation (7) takes on the familiar outer product form of backpropagation

$$\frac{\partial E}{\partial w_{rs}} = -\int_{t_0}^{t_f} d\tau\, y_r(\tau)\, f(x_s(\tau)) \qquad (8)$$

where y_r(τ) is defined to be

$$y_r(\tau) = \sum_{i=1}^{n} \int_{\tau}^{t_f} dt\, J_i(t)\, K_{ir}(t, \tau) \qquad (9)$$

Equation (8) defines an expression for the gradient, provided we can calculate y_r(τ) from eqn. (9). In principle this can be done, since the propagator K and the vector J are both completely determined by x(t). The computationally intensive part of this algorithm is the calculation of K(t, τ) for all values of t and τ. The calculation requires the integration of equations of the form

$$\frac{dK(t, \tau)}{dt} = L(x(t))\, K(t, \tau) \qquad (10)$$

for q different values of τ. There are n² different equations to integrate for each value of τ, consequently there are n²q integrations to be performed when the interval from t_0 to t_f is divided into q intervals. The calculation of all the components of K(t, τ) scales like n³q², since each integration requires n multiply-accumulates per component per time step and there are q time steps. Similarly, the memory requirements scale like n²q². This is because K has n² components for each (t, τ) pair and there are q² such pairs. Equation (10) must be integrated forwards in time from t = τ to t = t_f and backwards in time from t = τ to t = t_0. This is because K must satisfy K(τ, τ) = 1 (the identity matrix) for all τ. This condition follows from the definition of K, eqn. (5b). Finally, we observe that expression (9) is the time-dependent analog of the expression used by Rohwer and Forrest (1987) to calculate the gradient in recurrent networks.
The analogy can be made somewhat more explicit by writing K(t, τ) as the inverse K⁻¹(τ, t). Thus we see that y(τ) can be expressed in terms of a matrix inverse, just as in the Rohwer and Forrest algorithm.

2.3 ALGORITHM C

The final algorithm is familiar from continuous time optimal control and identification. The algorithm is usually derived by performing a variation on the functional given by eqn. (2). This results in a two-point boundary value problem. On the other hand, we know that y is given by eqn. (9), so we simply observe that this is the particular solution of the differential equation

$$-\frac{dy}{dt} = L^{T}(x(t))\, y + J \qquad (11)$$

where L^T is the transpose of the matrix defined in eqn. (4b). To see this, simply substitute the form for y into eqn. (11) and verify that it is indeed the solution to the equation. The particular solution to eqn. (11) vanishes only if y(t_f) = 0. In other words: to obtain y(t) we need only integrate eqn. (11) backwards from the final condition y(t_f) = 0. This is just the algorithm introduced to the neural network community by Pearlmutter (1988). It also corresponds to the unfolding-in-time approach discussed by Rumelhart et al. (1986), provided that all the equations are discretized and one takes Δt = 1. The two-point boundary value problem is rather straightforward to solve because the equation for x(t) is independent of y(t). Both x(t) and y(t) can be obtained with n multiply-accumulates per component per time step. There are q time steps from t_0 to t_f, and both x(t) and y(t) have n components, hence the calculation of x(t) and y(t) scales like n²q. The weight update equation also requires n²q multiply-accumulates. Thus the computational requirements of the algorithm as a whole scale like n²q. The memory required also scales like n²q, since it is necessary to save each value of x(t) along the trajectory to compute y(t).

2.4 SCALING VS CAUSALITY

The results of the previous sections are summarized in table 1 below.
We see that we have a progression of tradeoffs between scaling and causality. That is, we must choose between a causal algorithm with exploding computational and storage requirements and an acausal algorithm with modest storage requirements. There is no q dependence in the memory requirements of algorithm A because the integral given in eqn. (3a) can be accumulated at each time step. Algorithm B has some of the worst features of both algorithms.

Table 1: Comparison of three algorithms

Algorithm   Memory   Multiply-accumulates   Direction of integrations
A           n³       n⁴q                    x and p are both forward in time
B           n²q²     n³q²                   x is forward, K is forward and backward
C           n²q      n²q                    x is forward, y is backward in time

Digital hardware has no difficulties (at least over finite time intervals) with acausal algorithms, provided a stack is available to act as a memory that can recall states in reverse order. To the extent that the gradient calculations are carried out on digital machines, it makes sense to use algorithm C because it is the most efficient. In analog VLSI, however, it is difficult to imagine how to build a continually running network that uses an acausal algorithm. Algorithm A is attractive for physical implementation because it could be run continually and in real time (Williams and Zipser, 1989). However, its scaling properties preclude the possibility of building very large networks based on the algorithm. Recently, Zipser (1990) has suggested that a divide and conquer approach may reduce the computational and spatial complexity of the algorithm. This approach, although promising, does not always work and there is as yet no convergence proof. How then is it possible to learn trajectories using local, scalable and causal algorithms? In the next section a possible avenue of attack is suggested.
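Before moving on, algorithm C is worth sketching concretely, since it is the one that makes sense on digital machines: a forward pass that stores the trajectory, then a backward pass for y. This is an illustrative sketch in conventions of my choosing, not the paper's code.

```python
import numpy as np

def f(x): return np.tanh(x)
def fprime(x): return 1.0 - np.tanh(x) ** 2

def gradient_algorithm_C(w, I, xi, out, x0, dt, q):
    """Integrate eqn (1) forward, storing x(t); then step eqn (11),
    -dy/dt = L^T y + J, backward from the final condition y(t_f) = 0,
    accumulating the gradient of eqn (8).  Work scales like n^2 q."""
    n = w.shape[0]
    xs = np.empty((q + 1, n)); xs[0] = x0
    for k in range(q):                              # forward pass, eqn (1)
        xs[k + 1] = xs[k] + dt * (-xs[k] + w @ f(xs[k]) + I[k])
    y = np.zeros(n)                                 # final condition y(t_f) = 0
    grad = np.zeros((n, n))
    for k in range(q - 1, -1, -1):                  # backward pass
        grad -= dt * np.outer(y, f(xs[k]))          # eqn (8), outer-product form
        L = -np.eye(n) + w * fprime(xs[k])[None, :]
        J = np.zeros(n); J[out] = xi[k][out] - xs[k][out]
        y = y + dt * (L.T @ y + J)                  # one backward Euler step of eqn (11)
    return grad
```

The backward recursion here is the exact discrete adjoint of the Euler forward pass, so the result agrees with a finite-difference check on the discretized error.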
3 EXPLOITING DISPARATE TIME SCALES

I assert that for some classes of problems there are scalable and causal algorithms that approximate the gradient, and that these algorithms can be found by exploiting the disparity in time scales found in these classes of problems. In particular, I assert that when the time scale of the adaptive units is fast compared to the time scale of the behavior to be learned, it is possible to find scalable and causal adaptive algorithms. A general formalism for doing this will not be presented here; instead a simple, perhaps artificial, example will be presented. This example minimizes an error function for a time dependent problem. It is likely that trajectory generation in motor control problems is of this type. The characteristic time scales of the trajectories that need to be generated are determined by inertia and friction. These mechanical time scales are considerably longer than the electronic time scales that occur in VLSI. Thus it seems that for robotic problems there may be no need to use the completely general algorithms discussed in section 2. Instead, algorithms that take advantage of the disparity between the mechanical and the electronic time scales are likely to be more useful for learning to generate trajectories. The task is to map from a periodic input I(t) to a periodic output ξ(t). The basic idea is to use the continuous-time recurrent-backpropagation approach with slowly varying time-dependent inputs rather than with static inputs. The learning is done in real time and in a continuous fashion. Consider a set of n "fast" neurons (i = 1, ..., n), each of which satisfies the additive activation dynamics determined by eqn. (1). Assume that the initial weights are sufficiently small that the dynamics of the network would be convergent if the inputs I were constant. The external input is applied to the network through the vector I.
It has been previously shown (Pineda, 1988) that the ij-th component of the gradient of E is equal to y_i^f f(x_j^f), where x_j^f is the steady state solution of eqn. (1) and where y_i^f is a component of the steady state solution of

$$\frac{dy}{dt} = L^{T}(x^{f})\, y + J \qquad (12)$$

where the components of L^T are given by eqn. (4b). Note that the relative sign between equations (11) and (12) is what enables this algorithm to be causal. Now suppose that instead of a fixed input vector I, we use a slowly varying input I(t/τ_I), where τ_I is the characteristic time scale over which the input changes significantly. If we take as the gradient descent algorithm the dynamics defined by

$$\tau_w \frac{dw_{rs}}{dt} = y_r(t)\, f(x_s(t)) \qquad (13)$$

where τ_w is the time constant that defines the (slow) time scale over which w changes, and where x_s is the instantaneous solution of eqn. (1) and y_r is the instantaneous solution of eqn. (12), then in the adiabatic limit the outer product y f(x) in eqn. (13) approximates the negative gradient of the objective function E, that is

$$\tau_w \frac{dw_{rs}}{dt} \approx -\frac{\partial E}{\partial w_{rs}} \qquad (14)$$

This approach can map one continuous trajectory into another continuous trajectory, provided the trajectories change slowly enough. Furthermore, learning occurs causally and scalably. There is no memory in the model, i.e. the output of the adaptive neurons depends only on their input and not on their internal state. Thus, this network can never learn to perform tasks that require memory unless the learning algorithm is modified to learn the appropriate transitions. This is the major drawback of the adiabatic approach. Some state information can be incorporated into this model by using recurrent connections, in which case the network can have multiple basins and the final state will depend on the initial state of the net as well as on the inputs, but this will not be pursued here. Simple simulations were performed to verify that the approach did indeed perform gradient descent.
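A single causal update of this scheme can be sketched as follows. This is an illustrative Euler sketch (the paper's simulations used 4th order Runge-Kutta); the function name, argument conventions and step ordering are mine. At a fixed point of the fast dynamics, the y-relaxation of eqn. (12) settles to L^T y + J = 0, which is what makes the forward-in-time update possible.

```python
import numpy as np

def f(x): return np.tanh(x)
def fprime(x): return 1.0 - np.tanh(x) ** 2

def adiabatic_step(w, x, y, I, xi, out, dt, tau_x=0.5, tau_y=0.5, tau_w=1.0):
    """One causal real-time update: fast relaxation of x and y (eqns 1 and 12,
    integrated forward) plus the slow weight flow of eqn (13)."""
    n = len(x)
    J = np.zeros(n); J[out] = xi[out] - x[out]
    L = -np.eye(n) + w * fprime(x)[None, :]
    x = x + (dt / tau_x) * (-x + w @ f(x) + I)     # eqn (1)
    y = y + (dt / tau_y) * (L.T @ y + J)           # eqn (12), forward in time
    w = w + (dt / tau_w) * np.outer(y, f(x))       # eqn (13), ~ -grad E by eqn (14)
    return w, x, y
```

With the weight time constant made very large (freezing w), repeatedly applying this step with a constant input relaxes x and y to the recurrent-backpropagation fixed point.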
One simulation is presented here for the benefit of investigators who may wish to verify the results. A feedforward network topology consisting of two input units, five hidden units and two output units was used for the adaptive network. Units were numbered sequentially, 1 through 9, beginning with the input layer and ending with the output layer. Time dependent external inputs for the two input neurons were generated with time dependence I₁ = sin(2πt) and I₂ = cos(2πt). The targets for the output neurons were ξ₈ = R sin(2πt) and ξ₉ = R cos(2πt), where R = 1.0 + 0.1 sin(6πt). All the equations were simultaneously integrated using 4th order Runge-Kutta with a time step of 0.1. A relaxation time scale was introduced into the forward and backward propagation equations by multiplying the time derivatives in eqns. (1) and (12) by τ_x and τ_y respectively. These time scales were set to τ_x = τ_y = 0.5. The adaptive time scale of the weights was τ_w = 1.0. The error in the network was initially E = 10, and the integration was cut off when the error reached a plateau at E = 0.12. The learning curve is shown in Fig. 1. The trained trajectory did not exactly reach the desired solution; in particular, the network did not learn the odd order harmonic that modulates R. By way of comparison, a conventional backpropagation approach that calculated a cumulative gradient over the trajectory and used conjugate gradient for the descent was able to converge to the global minimum.

Figure 1: Learning curve.
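The driving signals and the 2-5-2 topology of this run can be written down directly from the description above. These helpers are my own setup code, not the paper's; in particular, representing the feedforward topology as a connectivity mask on the weight matrix is one reasonable reading of the text.

```python
import numpy as np

def signals(t):
    """Inputs I_1, I_2 and targets xi_8, xi_9 used in the example run."""
    R = 1.0 + 0.1 * np.sin(6 * np.pi * t)
    I = np.array([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    xi = np.array([R * np.sin(2 * np.pi * t), R * np.cos(2 * np.pi * t)])
    return I, xi

def feedforward_mask(layers=(2, 5, 2)):
    """Connectivity mask for the feedforward topology, units numbered
    sequentially from the input layer: mask[i, j] = 1 only when unit j
    lies in the layer immediately preceding unit i's layer."""
    n = sum(layers)
    mask = np.zeros((n, n))
    starts = np.cumsum((0,) + layers)
    for l in range(1, len(layers)):
        mask[starts[l]:starts[l + 1], starts[l - 1]:starts[l]] = 1.0
    return mask
```

Multiplying the weight matrix (and its updates) elementwise by the mask keeps the recurrent equations of section 2 while restricting learning to the feedforward connections.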
One time unit corresponds to a single oscillation.

4 SUMMARY

The key points of this paper are: 1) exact minimization algorithms for learning time-dependent behavior either scale poorly or else violate causality, and 2) approximate gradient calculations will likely lead to causal and scalable learning algorithms. The adiabatic approach should be useful for learning to generate trajectories of the kind encountered when learning motor skills.

Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not constitute or imply any endorsement by the United States Government or the Jet Propulsion Laboratory, California Institute of Technology. The work described in this paper was carried out at the Center for Space Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology. Support for the work came from the Air Force Office of Scientific Research through an agreement with the National Aeronautics and Space Administration (AFOSR-ISSA-90-0027).

REFERENCES

Aplevich, J. D. (1968). Models of certain nonlinear systems. In E. R. Caianiello (Ed.), Neural Networks (pp. 110-115). Berlin: Springer Verlag.

Bryson, A. E. and Ho, Y. (1975). Applied Optimal Control: Optimization, Estimation, and Control. New York: Hemisphere Publishing Co.

Cowan, J. D. (1967). A mathematical theory of central nervous activity. Unpublished dissertation, Imperial College, University of London.

Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Nat. Acad. Sci. USA, Bio., 81, 3088-3092.

Malsburg, C. van der (1973). Self-organization of orientation sensitive cells in striate cortex. Kybernetik, 14, 85-100.

Pearlmutter, B. A. (1988). Learning state space trajectories in recurrent neural networks: A preliminary report (Tech. Rep.
AIP-54), Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Pineda, F. J. (1988). Dynamics and architecture for neural computation. Journal of Complexity, 4, 216-245.

Rohwer, R. and Forrest, B. (1987). Training time dependence in neural networks. In M. Caudill and C. Butler (Eds.), Proceedings of the IEEE First Annual International Conference on Neural Networks (pp. 701-708). San Diego, California: IEEE.

Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing (pp. 318-362). Cambridge: M.I.T. Press.

Sejnowski, T. J. (1977). Storing covariance with nonlinearly interacting neurons. Journal of Mathematical Biology, 4, 303-321.

Williams, R. J. and Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1, 270-280.

Zipser, D. (1990). Subgrouping reduces complexity and speeds up learning in recurrent networks (this volume).
NEURAL NETWORK VISUALIZATION

Jakub Wejchert    Gerald Tesauro
IBM Research
T. J. Watson Research Center
Yorktown Heights NY 10598

ABSTRACT

We have developed graphics to visualize static and dynamic information in layered neural network learning systems. Emphasis was placed on creating new visuals that make use of spatial arrangements, size information, animation and color. We applied these tools to the study of back-propagation learning of simple Boolean predicates, and have obtained new insights into the dynamics of the learning process.

1 INTRODUCTION

Although neural network learning systems are being widely investigated by many researchers via computer simulations, the graphical display of information in these simulations has received relatively little attention. In other fields such as fluid dynamics and chaos theory, the development of "scientific visualization" techniques (1,3) has proven to be a tremendously useful aid to research, development, and education. Similar benefits should result from the application of these techniques to neural networks research. In this article, several visualization methods are introduced to investigate learning in neural networks which use the back-propagation algorithm. A multi-window environment is used that allows different aspects of the simulation to be displayed simultaneously in each window. As an application, the toolkit is used to study small networks learning Boolean functions. The animations are used to observe the emerging structure of connection strengths, to study the temporal behaviour, and to understand the relationships and effects of parameters. The simulations and graphics can run at real-time speeds.

2 VISUAL REPRESENTATIONS

First, we introduce our techniques for representing both the instantaneous dynamics of the learning process and the full temporal trajectory of the network during the course of one or more learning runs.
2.1 The Bond Diagram

In the first of these diagrams, the geometrical structure of a connected network is used as a basis for the representation. As it is of interest to try to see how the internal configuration of weights relates to the problem the network is learning, it is clearly worthwhile to have a graphical representation that explicitly includes weight information integrated with network topology. This differs from "Hinton diagrams" (2), in which data may only be indirectly related to the network structure. In our representation nodes are represented by circles, whose areas are proportional to the threshold values. Triangles or lines are used to represent the weights or their rates of change. The triangles or line segments emanate from the nodes and point toward the connecting nodes. Their lengths indicate the magnitude of the weight or weight derivative. We call this the "bond diagram". In this diagram, one can look at any node and clearly see the magnitude of the weights feeding into and out of it. Also, a sense of direction is built into the picture, since the bonds point to the node that they are connected to. Further, the collection of weights forms distinct patterns that can be easily perceived, so that one can also infer global information from the overall patterns formed.

2.2 The Trajectory Diagram

A further limitation of Hinton diagrams is that they provide a relatively poor representation of dynamic information. Therefore, to understand more about the dynamics of learning we introduce another visual tool that gives a two-dimensional projection of the weight space of the network. This represents the learning process as a trajectory in a reduced dimensional space. By representing the value of the error function as the color of the point in weight space, one obtains a sense of the contours of the error hypersurface, and of the dynamics of the gradient-descent evolution on this hypersurface. We call this the "trajectory diagram".
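The geometry of the bond diagram is simple to compute from node positions and a weight matrix. This is a minimal sketch of that geometry, not the authors' implementation; in particular, the choice of which end of a connection a bond emanates from, and the length scale, are my assumptions.

```python
import numpy as np

def bond_segments(positions, weights, scale=0.4):
    """For each nonzero weight w[i, j], return a line segment that starts
    at node i's position, points toward node j, and has length proportional
    to |w[i, j]|, together with the sign of the weight (for coloring)."""
    segments = []
    for i, j in zip(*np.nonzero(weights)):
        p, q = positions[i], positions[j]
        d = (q - p) / np.linalg.norm(q - p)      # unit vector toward node j
        end = p + scale * abs(weights[i, j]) * d
        segments.append((p, end, np.sign(weights[i, j])))
    return segments
```

Feeding these segments to any 2-D plotting library, colored by the returned sign, reproduces the bond-diagram picture described above.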
The scheme is based on the premise that the human user has a good visual notion of vector addition. To represent an n-dimensional point, its axial components are defined as vectors and then plotted radially in the plane; the vector sum of these is then calculated to yield the point representing the n-dimensional position. It is obvious that for n > 2 the resultant point is not unique; however, the method does allow one to infer information about families of similar trajectories, make comparisons between trajectories, and notice important deviations in behaviour.

2.3 Implementation

The graphics software was written in C using X-Windows version 11. The C code was interfaced to a FORTRAN neural network simulator. The whole package ran under UNIX on an RT workstation. Using the portability of X-Windows, the graphics could be run remotely on different machines using a local area network. Execution time was too slow for real-time interaction except for very small networks (typically up to 30 weights). For larger networks, the Stellar graphics workstation was used, whereby the simulator code could be vectorized and parallelized.

3 APPLICATION EXAMPLES

With the graphics we investigated networks learning Boolean functions: binary input vectors were presented to the network through the input nodes, and the teacher signal was set to either 1 or 0. Here, we show networks learning majority and symmetry functions. The output of the majority function is 1 only if more than half of the input nodes are on; simple symmetry distinguishes between input vectors that are symmetric or anti-symmetric about a central axis; general symmetry identifies perfectly symmetric patterns out of all other permutations. Using the graphics, one can watch how solutions to a particular problem are obtained, how different parameters affect these solutions, and observe stages at which learning decisions are made.
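The radial vector-sum projection behind the trajectory diagram can be sketched in a few lines. This is an illustrative reconstruction: the paper does not specify the spoke directions, so evenly spaced angles are my assumption.

```python
import numpy as np

def trajectory_point(weights):
    """Project an n-dimensional weight vector to 2-D: component k is drawn
    as a vector of length weights[k] along the radial direction 2*pi*k/n,
    and the vector sum of all components gives the plotted point."""
    n = len(weights)
    angles = 2 * np.pi * np.arange(n) / n
    return np.array([np.sum(weights * np.cos(angles)),
                     np.sum(weights * np.sin(angles))])
```

As the text notes, the projection is many-to-one for n > 2: opposite spokes cancel, so distinct weight vectors can map to the same plotted point.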
At the start of the simulations the weights are set to small random values. During learning, many example patterns are presented to the input of the network and the weights are adjusted accordingly. Initially the rate of change of the weights is small; later, as the simulation gets under way, the weights change rapidly, until small changes are made as the system moves toward the final solution. Distinct patterns of triangles show the configuration of weights in their final form.

3.1 The Majority Function

Figure 1 shows a bond diagram for a network that has learnt the majority function. During the run, many input patterns were presented to the network, during which time the weights were changed. The weights evolve from small random values through to an almost uniform set corresponding to the solution of the problem. Towards the end, a large output node is displayed and the magnitudes of all the weights are roughly uniform, indicating that a large bias (or threshold) is required to offset the sum of the weights. Majority is quite a simple problem for the network to learn; more complicated functions require hidden units.

Figure 1: A near-final configuration of weights for the majority function. All the weights are positive. The disc corresponds to the threshold of the output unit.

3.2 The Simple Symmetry Function

In this case only symmetric or perfectly anti-symmetric patterns are presented and the network is taught to distinguish between these. In solving this problem, the network chose (correctly) that it needs only two units to make the decision whether the input is totally symmetric or totally anti-symmetric. (In fact, any symmetrically separated input pair will work.) It was found that the simple pattern created by the bond representation carries over into the more general symmetry function, where the network must identify perfectly symmetric inputs from all the other permutations.
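The target predicates used in these experiments can be written down directly from their prose definitions above; this sketch (function names are mine) gives the teacher signal for the majority and general-symmetry tasks.

```python
def majority(bits):
    """Majority predicate: 1 iff more than half of the input bits are on."""
    return int(sum(bits) > len(bits) / 2)

def symmetric(bits):
    """General-symmetry predicate: 1 iff the pattern is a palindrome
    about its central axis."""
    return int(all(bits[k] == bits[-1 - k] for k in range(len(bits) // 2)))
```

During training each random binary input vector would be paired with the corresponding predicate value as the teacher signal.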
3.3 The General Symmetry Function

Here, the network is required to detect symmetry out of all the possible input patterns. As can be seen from the bond diagram (figure 2), the network has chosen a hierarchical structure of weights to solve the problem, using the basic pattern of weights of simple symmetry. The major decision is made on the outer pair, and additional decisions are made on the remaining pairs with decreasing strength. As before, the choice of pairs in the hierarchy depends on the initial random weights. By watching the animations, we could make some observations about the stages of learning. We found that the early behavior was the most critical, as it was at this stage that the signs of the weights feeding to the hidden units were determined. At the later stages the relative magnitudes of the weights were adapted.

3.4 The Visualization Environment

Figure 3 shows the visualization environment with most of the windows active. The upper window shows the total error, and the lower window the state of the output unit. Typically, the error initially stays high, then decreases rapidly, and then levels off to zero as final adjustments are made to the weights. Spikes in this curve are due to the method of presenting patterns at random. The state of the output unit initially oscillates and then bifurcates into the two required output states. The two extra windows on the right show the trajectory diagrams for the two hidden units. These diagrams are generalizations of phase diagrams: components of a point in a high dimensional space are plotted radially in the plane and treated as vectors whose sum yields a point in the two-dimensional representation. We have found these diagrams useful in observing the trajectories of the two hidden units, in which case they are representations of paths in a six-dimensional weight space.
In cases where the network does converge to a correct solution, the paths of the two hidden units either try to match each other (in which case the configurations of the units were identical) or move in opposite directions (in which case the units were opposites). By contrast, for learning runs which do not converge to global optima, we found that usually one of the hidden units followed a normal trajectory whereas the other unit was not able to achieve the appropriate match or anti-match. This is because the signs of the weights to the second hidden unit were not correct and the learning algorithm could not make the necessary adjustments. At a certain point early in learning the unit would travel off on a completely different trajectory. These observations suggest a heuristic that could improve learning by setting initial trajectories in the "correct" directions.

Figure 2: The bond diagram for a network that has learnt the symmetry function. There are six input units, two hidden and one output. Weights are shown by bonds emanating from nodes. In the graphics, positive and negative weights are colored red and blue respectively. In this grey-scale photo the negative weights are marked with diagonal lines to distinguish them from positive weights.

Figure 3: An example of the graphics with most of the windows active; the command line appears at the bottom. The central window shows the bond diagram of the general symmetry function. The upper left window shows the total error, and the lower left window the state of the output unit. The two windows on the right show the trajectory diagrams for the two hidden units. The "spokes" in this diagram correspond to the magnitudes of the weights. The trace of dots shows the paths of the two units in weight space.
In general, the trajectory diagram has similar uses to a conventional phase plot: it can distinguish between different regions of configuration space; it can be used to detect critical stages of the dynamics of a system; and it gives a "trace" of its time evolution.

4 CONCLUSION

A set of computer graphics visualization programs has been designed and interfaced to a back-propagation simulator. Some new visualization tools were introduced, such as the bond and trajectory diagrams. These and other visual tools were integrated into an interactive multi-window environment. During the course of the work it was found that the graphics was useful in a number of ways: in giving a clearer picture of the internal representation of weights and the effects of parameters, in the detection of errors in the code, and in pointing out aspects of the simulation that had not been expected beforehand. Also, insight was gained into principles of designing graphics for scientific processes. It would be of interest to extend our visualization techniques to include large networks with thousands of nodes and tens of thousands of weights. We are currently examining a number of alternative techniques which are more appropriate for large data-set regimes.

Acknowledgements

We wish to thank Scott Kirkpatrick for help and encouragement during the project. We also thank members of the visualization lab and the animation lab for use of their resources.

References

(1) McCormick B H, DeFanti T A, Brown M D (Eds), "Visualization in Scientific Computing", Computer Graphics 21, 6, November (1987). See also "Visualization in Scientific Computing: A Synopsis", IEEE Computer Graphics and Applications, July (1987).

(2) Rumelhart D E, McClelland J L, "Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1", MIT Press, Cambridge, MA (1986).

(3) Tufte E R, "The Visual Display of Quantitative Information", Graphics Press, Cheshire, CT (1983).
PART VI: NEW LEARNING ALGORITHMS
|
1989
|
17
|
195
|
Discovering the Structure of a Reactive Environment by Exploration 439 Discovering the Structure of a Reactive Environment by Exploration Michael C. Mozer Department of Computer Science and Institute of Cognitive Science University of Colorado Boulder, CO 80309-0430 Jonathan Bachrach Department of Computer and Information Science University of Massachusetts Amherst, MA 01003 ABSTRACT Consider a robot wandering around an unfamiliar environment, performing actions and sensing the resulting environmental states. The robot's task is to construct an internal model of its environment, a model that will allow it to predict the consequences of its actions and to determine what sequences of actions to take to reach particular goal states. Rivest and Schapire (1987a, 1987b; Schapire, 1988) have studied this problem and have designed a symbolic algorithm to strategically explore and infer the structure of "finite state" environments. The heart of this algorithm is a clever representation of the environment called an update graph. We have developed a connectionist implementation of the update graph using a highly specialized network architecture. With back propagation learning and a trivial exploration strategy choosing random actions, the connectionist network can outperform the Rivest and Schapire algorithm on simple problems. The network has the additional strength that it can accommodate stochastic environments. Perhaps the greatest virtue of the connectionist approach is that it suggests generalizations of the update graph representation that do not arise from a traditional, symbolic perspective. 1 INTRODUCTION Consider a robot placed in an unfamiliar environment. The robot is allowed to wander around the environment, performing actions and sensing the resulting environmental states.
With sufficient exploration, the robot should be able to construct an internal model of the environment, a model that will allow it to predict the consequences of its actions and to determine what sequence of actions must be taken to reach a particular goal state. In this paper, we describe a connectionist network that accomplishes this task, based on a representation of finite-state automata developed by Rivest and Schapire 440 Mozer and Bachrach (1987a, 1987b; Schapire, 1988). The environments we wish to consider can be modeled by a finite-state automaton (FSA). In each environment, the robot has a set of discrete actions it can execute to move from one environmental state to another. At each environmental state, a set of binary-valued sensations can be detected by the robot. To illustrate the concepts and methods in our work, we use as an extended example a simple environment, the n-room world (from Rivest and Schapire). The n-room world consists of n rooms arranged in a circular chain. Each room is connected to the two adjacent rooms. In each room is a light bulb and light switch. The robot can sense whether the light in the room where it currently stands is on or off. The robot has three possible actions: move to the next room down the chain (D), move to the next room up the chain (U), and toggle the light switch in the current room (T). 2 MODELING THE ENVIRONMENT If the FSA corresponding to the n-room world is known, the sensory consequences of any sequence of actions can be predicted. Further, the FSA can be used to determine a sequence of actions to take to obtain a certain goal state. Although one might try developing an algorithm to learn the FSA directly, there are several arguments against doing so (Schapire, 1988). Most important is that the FSA often does not capture structure inherent in the environment. Rather than trying to learn the FSA, Rivest and Schapire suggest learning another representation of the environment called an update graph.
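For concreteness, the n-room world just described can be simulated in a few lines (illustrative code, not from the paper; the class and method names are our own):

```python
# Minimal n-room world environment: rooms in a ring, the robot senses
# only the light in its current room.  Actions: 'U' (move up),
# 'D' (move down), 'T' (toggle the current room's light switch).

class NRoomWorld:
    def __init__(self, n):
        self.n = n
        self.lights = [False] * n   # light status per room
        self.pos = 0                # robot's current room

    def step(self, action):
        if action == 'U':
            self.pos = (self.pos + 1) % self.n
        elif action == 'D':
            self.pos = (self.pos - 1) % self.n
        elif action == 'T':
            self.lights[self.pos] = not self.lights[self.pos]
        return self.lights[self.pos]   # the single binary sensation

world = NRoomWorld(3)
world.step('T')                    # turn on the light in room 0
assert world.step('U') is False    # room 1 is dark
assert world.step('D') is True     # back in room 0, light is on
```

A learner interacting with this environment sees only the stream of (action, sensation) pairs, which is exactly the setting assumed by the update graph.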
The advantage of the update graph is that in environments with regularities, the number of nodes in the update graph can be much smaller than in the FSA (e.g., 2n versus 2^n for the n-room world). Rivest and Schapire's formal definition of the update graph is based on the notion of tests that can be performed on the environment, and the equivalence of different tests. In this section, we present an alternative, more intuitive view of the update graph that facilitates a connectionist interpretation. Consider a three-room world. To model this environment, the essential knowledge required is the status of the lights in the current room (CUR), the next room up from the current room (UP), and the next room down from the current room (DOWN). Assume the update graph has a node for each of these environmental variables. Further assume that each node has an associated value indicating whether the light in the particular room is on or off. If we know the values of the variables in the current environmental state, what will their new values be after taking some action, say U? When the robot moves to the next room up, the new value of CUR becomes the previous value of UP; the new value of DOWN becomes the previous value of CUR; and in the three-room world, the new value of UP becomes the previous value of DOWN. As depicted in Figure 1a, this action thus results in shifting values around in the three nodes. This makes sense because moving up does not affect the status of any light, but it does alter the robot's position with respect to the three rooms. Figure 1b shows the analogous flow of information for the action D. Finally, the action T should cause the status of the current room's light to be complemented while the other two rooms remain unaffected (Figure 1c). In Figure 1d, the three sets of links from Figures 1a-c have been superimposed and have been labeled with the appropriate action.
One final detail: The Rivest and Schapire update graph formalism does not make use of the "complementation" link. To avoid it, each node may be split into two values, one representing the status of a room and the other its complement (Figure 1e). Toggling thus involves exchanging the values of CUR and its complement. Just as the values of CUR, UP, and DOWN must be shifted for the actions U and D, so must their complements. Given the update graph in Figure 1e and the value of each node for the current environmental state, the result of any sequence of actions can be predicted simply by shifting values around in the graph. Thus, as far as predicting the input/output behavior of the environment is concerned, the update graph serves the same purpose as the FSA. A defining and nonobvious (from the current description) property of an update graph is that each node has exactly one incoming link for each action. We call this the one-input-per-action constraint. For example, CUR gets input from its complement for the action T, from UP for U, and from DOWN for D. Figure 1: (a) Links between nodes indicating the desired information flow on performing the action U. CUR represents the status of the lights in the current room, UP the status of the lights in the next room up, and DOWN the status of the lights in the next room down. (b) Links between nodes indicating the desired information flow on performing the action D. (c) Links between nodes indicating the desired information flow on performing the action T. The "-" on the link from CUR to itself indicates that the value must be complemented. (d) Links from the three separate actions superimposed and labeled by the action. (e) The complementation link can be avoided by adding a set of nodes that represent the complements of the original set. This is the update graph for a three-room world.
3 THE RIVEST AND SCHAPIRE ALGORITHM Rivest and Schapire have developed a symbolic algorithm (hereafter, the RS algorithm) to strategically explore an environment and learn its update graph representation. The RS algorithm formulates explicit hypotheses about regularities in the environment and tests these hypotheses one or a relatively small number at a time. As a result, the algorithm may not make full use of the environmental feedback obtained. It thus seems worthwhile to consider alternative approaches that could allow more efficient use of the environmental feedback, and hence, more efficient learning of the update graph. We have taken a connectionist approach, which has shown quite promising results in preliminary experiments and suggests other significant benefits. We detail these benefits below, but must first describe the basic approach. 4 THE UPDATE GRAPH AS A CONNECTIONIST NETWORK How might we turn the update graph into a connectionist network? Start by assuming one unit in a network for each node in the update graph. The activity level of the unit represents the truth value associated with the update graph node. Some of these units serve as "outputs" of the network. For example, in the three-room world, the output of the network is the unit that represents the status of the current room. In other environments, there may be several sensations, in which case there will be several output units. What is the analog of the labeled links in the update graph? The labels indicate that values are to be sent down a link when a particular action occurs. In connectionist terms, the links should be gated by the action. To elaborate, we might include a set of units that represent the possible actions; these units act to multiplicatively gate the flow of activity between units in the update graph.
Thus, when a particular action is to be performed, the corresponding action unit is activated, and the connections that are gated by this action become enabled. If the action units form a local representation, i.e., only one is active at a time, exactly one set of connections is enabled at a time. Consequently, the gated connections can be replaced by a set of weight matrices, one per action. To predict the consequences of a particular action, the weight matrix for that action is plugged into the network and activity is allowed to propagate through the connections. Thus, the network is dynamically rewired contingent on the current action. The effect of activity propagation should be that the new activity of a unit is the previous activity of some other unit. A linear activation function is sufficient to achieve this: X(t) = W_a(t) X(t-1), (1) where a(t) is the action selected at time t, W_a(t) is the weight matrix associated with this action, and X(t) is the activity vector that results from taking action a(t). Assuming weight matrices which have zeroes in each row except for one connection of strength 1 (the one-input-per-action constraint), the activation rule will cause activity values to be copied around the network. 5 TRAINING THE NETWORK TO BE AN UPDATE GRAPH We have described a connectionist network that can behave as an update graph, and now turn to the procedure used to learn the connection strengths in this network. For expository purposes, assume that the number of units in the update graph is known in advance. (This is not necessary, as we show in Mozer & Bachrach, 1989.) A weight matrix is required for each action, with a potential non-zero connection between every pair of units. As in most connectionist learning procedures, the weight matrices are initialized to random values; the outcome of learning will be a set of matrices that represent the update graph connectivity.
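Equation 1 can be made concrete with a small sketch (our own illustrative code, not the authors'): one weight matrix per action, hand-built here as the permutations of the Figure 1e update graph, with an assumed unit ordering of CUR, UP, DOWN followed by their three complements.

```python
import numpy as np

# Action-gated linear propagation: taking an action "plugs in" that
# action's weight matrix and activity is propagated linearly (Eq. 1).

def perm(targets):
    """Row i has a single 1 in column targets[i] (one input per action)."""
    W = np.zeros((6, 6))
    for i, j in enumerate(targets):
        W[i, j] = 1.0
    return W

W = {'U': perm([1, 2, 0, 4, 5, 3]),  # CUR<-UP, UP<-DOWN, DOWN<-CUR (+complements)
     'D': perm([2, 0, 1, 5, 3, 4]),  # reverse shift
     'T': perm([3, 1, 2, 0, 4, 5])}  # exchange CUR with its complement

x = np.array([1., 0., 0., 0., 1., 1.])  # light on in the current room only
for a in ['U', 'T', 'D']:               # move up, toggle, move back down
    x = W[a] @ x
print(x[0])  # predicted sensation: status of the current room's light
```

Tracing by hand: after U the robot's room is dark, T turns that light on, and D returns it to the original lit room, so the first component ends at 1.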
If the network is to behave as an update graph, the one-input-per-action constraint must be satisfied. In terms of the connectivity matrices, this means that each row of each weight matrix should have connection strengths of zero except for one value which is 1. To achieve this property, additional constraints are placed on the weights. We have explored a combination of three constraints: (1) Σ_j w_aij² = 1, (2) Σ_j w_aij = 1, and (3) w_aij ≥ 0, where w_aij is the connection strength to i from j for action a. Constraint 1 is satisfied by introducing an additional cost term to the error function. Constraints 2 and 3 are rigidly enforced by renormalizing the w_ai following each weight update. The normalization procedure finds the shortest-distance projection from the updated weight vector to the hyperplane specified by constraint 2 that also satisfies constraint 3. At each time step t, the training procedure consists of the following sequence of events: 1. An action, a(t), is selected at random. 2. The weight matrix for that action, W_a(t), is used to compute the activities at t, X(t), from the previous activities X(t-1). 3. The selected action is performed on the environment and the resulting sensations are observed. 4. The observed sensations are compared with the sensations predicted by the network (i.e., the activities of units chosen to represent the sensations) to compute a measure of error. To this error is added the contribution of constraint 1. 5. The back propagation "unfolding-in-time" procedure (Rumelhart, Hinton, & Williams, 1986) is used to compute the derivative of the error with respect to weights at the current and earlier time steps, W_a(t-i), for i = 0 ... τ-1. 6. The weight matrices for each action are updated using the overall error gradient and then are renormalized to enforce constraints 2 and 3. 7. The temporal record of unit activities, X(t-i) for i = 0 ... τ, which is maintained to permit back propagation in time, is updated to reflect the new weights.
(See further explanation below.) 8. The activities of the output units at time t, which represent the predicted sensations, are replaced by the actual observed sensations. Steps 5-7 require further elaboration. The error measured at time t may be due to incorrect propagation of activities from time t-1, which would call for modification of the weight matrix W_a(t). But the error may also be attributed to incorrect propagation of activities at earlier times. Thus, back propagation is used to assign blame to the weights at earlier times. One critical parameter of training is the amount of temporal history, τ, to consider. We have found that, for a particular problem, error propagation beyond a certain critical number of steps does not improve learning performance, although any fewer does indeed harm performance. In the results described below, we set τ for a particular problem to what appeared to be a safe limit: one less than the number of nodes in the update graph solution of the problem. To back propagate error in time, we maintain a temporal record of unit activities. However, a problem arises with these activities following a weight update: the activities are no longer consistent with the weights, i.e., Equation 1 is violated. Because the error derivatives computed by back propagation are exact only when Equation 1 is satisfied, future weight updates based on the inconsistent activities are not assured of being correct. Empirically, we have found the algorithm extremely unstable if we do not address this problem. In most situations where back propagation is applied to temporally-extended sequences, the sequences are of finite length. Consequently, it is possible to wait until the end of the sequence to update the weights, at which point consistency between activities and weights no longer matters because the system starts fresh at the beginning of the next sequence. In the present situation,
however, the sequence of actions does not terminate. We thus were forced to consider alternative means of ensuring consistency. The most successful approach involved updating the activities after each weight change to force consistency (step 7 of the list above). To do this, we propagated the earliest activities in the temporal record, X(t-τ), forward again to time t, using the updated weight matrices. 6 RESULTS Figure 2 shows the weights in the update graph network for the three-room world after the robot has taken 6,000 steps. The figure depicts a connectivity pattern identical to that of the update graph of Figure 1e. To explain the correspondence, think of the diagram as being in the shape of a person who has a head, left and right arms, left and right legs, and a heart. For the action U, the head (the output unit) receives input from the left leg, the left leg from the heart, and the heart from the head, thereby forming a three-unit loop. The other three units (the left arm, right arm, and right leg) form a similar loop. Figure 2: Weights learned after 6,000 exploratory steps in the three-room world. Each large diagram represents the weights corresponding to one of the three actions. Each small diagram contained within a large diagram represents the connection strengths feeding into a particular unit for a particular action. There are six units, hence six small diagrams. The output unit, which indicates the state of the light in the current room, is the protruding "head" of the large diagram. A white square in a particular position of a small diagram represents the strength of connection from the unit in the homologous position in the large diagram to the unit represented by the small diagram. The area of the square is proportional to the connection strength. For the action D, the same two loops are present but in the reverse direction. These two loops also appear in Figure 1e.
For the action T, the left and right arms, heart, and left leg each keep their current value, while the head and the right leg exchange values. This corresponds to the exchange of values between the CUR node and its complement in Figure 1e. In addition to learning the update graph connectivity, the network has simultaneously learned the correct activity values associated with each node for the current state of the environment. Armed with this information, the network can predict the outcome of any sequence of actions. Indeed, the prediction error drops to zero, causing learning to cease and the network to become completely stable. Now for the bad news: The network does not converge for every set of random initial weights, and when it does, it requires on the order of 6,000 steps. However, when the weight constraints are removed, the network converges without fail and in about 300 steps. In Mozer and Bachrach (1989), we consider why the weight constraints are harmful and suggest several remedies. Without weight constraints, the resulting weight matrix, which contains a collection of positive and negative weights of varying magnitudes, is not readily interpreted. In the case of the n-room world, one reason why the final weights are difficult to interpret is because the net has discovered a solution that does not satisfy the RS update graph formalism; it has discovered the notion of complementation links of the sort shown in Figure 1d. With the use of complementation links, only three units are required, not six. Consequently, the three unnecessary units are either cut out of the solution or encode information redundantly. Table 1 compares the performance of the RS algorithm against that of the connectionist network without weight constraints for several environments. Performance is measured in terms of the median number of actions the robot must take before it is able to predict the outcome of subsequent actions.
(Further details of the experiments can be found in Mozer and Bachrach, 1989.) In simple environments, the connectionist update graph can outperform the RS algorithm. This result is quite surprising when considering that the action sequence used to train the network is generated at random, in contrast to the RS algorithm, which involves a strategy for exploring the environment. We conjecture that the network does as well as it does because it considers and updates many hypotheses in parallel at each time step. In complex environments, however, the network does poorly. By "complex", we mean that the number of nodes in the update graph is quite large and the number of distinguishing environmental sensations is relatively small. For example, the network failed to learn a 32-room world, whereas the RS algorithm succeeded. An intelligent exploration strategy seems necessary in this case: random actions will take too long to search the state space. This is one direction our future work will take. Beyond the potential speedups offered by connectionist learning algorithms, the connectionist approach has other benefits.

Table 1: Number of Steps Required to Learn Update Graph
Environment           RS Algorithm   Connectionist Update Graph
Little Prince World   200            91
Car Radio World       27,695         8,167
Four-Room World       1,388          1,308
32-Room World         52,436         fails

• Performance of the network appears insensitive to prior knowledge of the number of nodes in the update graph being learned. In contrast, the RS algorithm requires an upper bound on the update graph complexity, and performance degrades significantly if the upper bound isn't tight. • The network is able to accommodate "noisy" environments, also in contrast to the RS algorithm. • During learning, the network continually makes predictions about what sensations will result from a particular action, and these predictions improve with experience.
The RS algorithm cannot make predictions until learning is complete; it could perhaps be modified to do so, but there would be an associated cost. • Treating the update graph as matrices of connection strengths has suggested generalizations of the update graph formalism that don't arise from a more traditional analysis. First, there is the fairly direct extension of allowing complementation links. Second, because the connectionist network is a linear system, any rank-preserving linear transform of the weight matrices will produce an equivalent system, but one that does not have the local connectivity of the update graph (see Mozer & Bachrach, 1989). The linearity of the network also allows us to use tools of linear algebra to analyze the resulting connectivity matrices. These benefits indicate that the connectionist approach to the environment-modeling problem is worthy of further study. We do not wish to claim that the connectionist approach supersedes the impressive work of Rivest and Schapire. However, it offers complementary strengths and alternative conceptualizations of the learning problem. Acknowledgements Our thanks to Rob Schapire, Paul Smolensky, and Rich Sutton for helpful discussions. This work was supported by a grant from the James S. McDonnell Foundation to Michael Mozer, grant 87-236 from the Sloan Foundation to Geoffrey Hinton, and grant AFOSR-87-0030 from the Air Force Office of Scientific Research, Bolling AFB, to Andrew Barto. References Mozer, M. C., & Bachrach, J. (1989). Discovering the structure of a reactive environment by exploration (Technical Report CU-CS-451-89). Boulder, CO: University of Colorado, Department of Computer Science. Rivest, R. L., & Schapire, R. E. (1987). Diversity-based inference of finite automata. In Proceedings of the Twenty-Eighth Annual Symposium on Foundations of Computer Science (pp. 78-87). Rivest, R. L., & Schapire, R. E. (1987). A new approach to unsupervised learning in deterministic environments. In P.
Langley (Ed.), Proceedings of the Fourth International Workshop on Machine Learning (pp. 364-375). Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume I: Foundations (pp. 318-362). Cambridge, MA: MIT Press/Bradford Books. Schapire, R. E. (1988). Diversity-based inference of finite automata. Unpublished master's thesis, Massachusetts Institute of Technology, Cambridge, MA.
|
1989
|
18
|
196
|
332 Hormel A Self-organizing Associative Memory System for Control Applications Michael Hormel Department of Control Theory and Robotics Technical University of Darmstadt Schlossgraben 1 6100 Darmstadt/W.-Germany ABSTRACT The CMAC storage scheme has been used as a basis for a software implementation of an associative memory system AMS, which itself is a major part of the learning control loop LERNAS. A major disadvantage of this CMAC concept is that the degree of local generalization (area of interpolation) is fixed. This paper deals with an algorithm for self-organizing variable generalization for the AMS, based on ideas of T. Kohonen. 1 INTRODUCTION For several years research at the Department of Control Theory and Robotics at the Technical University of Darmstadt has been concerned with the design of a learning real-time control loop with neuron-like associative memories (LERNAS) for the control of unknown, nonlinear processes (Ersue, Tolle, 1988). This control concept uses an associative memory system AMS, based on the cerebellar cortex model CMAC by Albus (Albus, 1972), for the storage of a predictive nonlinear process model and an appropriate nonlinear control strategy (Fig. 1; the figure shows the loop's components: a predictive process model with evaluation/optimization of planned control inputs, a control strategy, a short-term memory, the associative memory system AMS, and the unknown process driven by the actual control input). Figure 1: The learning control loop LERNAS One problem for adjusting the control loop to a process is, however, to find a suitable set of parameters for the associative memory.
The parameters in question determine the degree of generalization within the memory and therefore have a direct influence on the number of training steps required to learn the process behaviour. For a good performance of the control loop it is desirable to have a very small generalization around a given setpoint but to have a large generalization elsewhere. Actually, the amount of collected data is small during the transition phase between two setpoints but is large during setpoint control. Therefore a self-organizing variable generalization, adapting itself to the amount of available data, would be very advantageous. Up to now, when working with fixed generalization, finding the right parameters has meant finding the best compromise between performance and learning time required to generate a process model. This paper will show a possibility to introduce a self-organizing variable generalization capability into the existing AMS/CMAC algorithm. 2 THE AMS CONCEPT The associative memory system AMS is based on the "Cerebellar Model Articulation Controller CMAC" as presented by J.S. Albus. The information processing structure of AMS can be divided into three stages. 1.) Each component of an n-dimensional input vector (stimulus) activates a fixed number p of sensory cells, the receptive fields of which are overlapping. So n·p sensory cells become active. 2.) The active sensory cells are grouped to form p n-dimensional vectors. These vectors are mapped to p association cells. The merged receptive fields of the sensory cells described by one vector can be seen as a hypercube in the n-dimensional input space and therefore as the receptive field of the association cell. In normal applications the total number of available association cells is about 100·p. 3.) The association cells are connected to the output cells by modifiable synaptic weights. The output cell computes the mean value of all weights that are connected to active association cells (active weights).
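This three-stage scheme can be caricatured in a few lines (hypothetical code, not the AMS implementation): a stimulus addresses p overlapping association cells, the response is the mean of the active weights, and training spreads the error equally over those weights. The overlapping-quantization addressing below is merely a stand-in for AMS's hash-coding.

```python
# Toy one-dimensional CMAC/AMS-style memory (illustrative sketch).

class TinyCMAC:
    def __init__(self, p=4):
        self.p = p
        self.w = {}                 # association cells created on demand

    def active(self, x):
        # p overlapping quantizations of the stimulus (stand-in for
        # the hash-coded addressing used by AMS)
        return [(layer, int(x * self.p) + layer) for layer in range(self.p)]

    def recall(self, x):
        # output = mean of the active weights
        return sum(self.w.get(c, 0.0) for c in self.active(x)) / self.p

    def train(self, x, target):
        err = target - self.recall(x)
        for c in self.active(x):    # error spread equally over active weights
            self.w[c] = self.w.get(c, 0.0) + err / self.p

m = TinyCMAC()
for _ in range(5):
    m.train(0.5, 2.0)
print(m.recall(0.5))   # approaches the target 2.0 over repeated training
```

Nearby stimuli share association cells, which is exactly the fixed local generalization (interpolation) that the paper's self-organizing extension makes variable.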
Figure 2 shows the basic principle of the associative memory system AMS. Figure 2: The basic mechanism of AMS (input space, association cells with adjustable weights, output value). During training the generated output is compared with a desired output, the error is computed and equally distributed over all active weights. For the mapping of sensory cells to association cells a hash-coding mechanism is used. 3 THE SELF-ORGANIZING FEATURE MAP An approach for explaining the self-organizing capabilities of the nervous system has been presented by T. Kohonen (Kohonen, 1988). In his "self-organizing feature map" a network of laterally interconnected neurons can adapt itself according to the density of trained points in the input space. Presenting an n-dimensional input vector to the network causes every neuron to produce an output signal which is correlated with the similarity between the input vector and a "template vector" which may be stored in the synaptic weights of the neuron. Due to the "mexican-hat" coupling function between the neurons, the one with the maximum output activity will excite its nearest neighbours but will inhibit neurons farther away, therefore generating a localized response in the network. The active cells can now adapt their input weights in order to increase their similarity to the input vector. If we define the receptive field of a neuron by the number of input vectors for which the neuron's activity is greater than that of any other neuron in the net, this yields the effect that in areas with a high density of trained points the receptive fields become small, whereas in areas with a low density of trained points the size of the receptive fields is large. As mentioned above, this is a desired effect when working with a learning control loop.
4 SELF-ORGANIZING VARIABLE GENERALIZATION Both of the approaches above have several advantages and disadvantages when using them for real-time control applications. In the AMS algorithm one does not have to care for predefining a network and the coupling functions or coupling matrices among the elements of the network. Association and weight cells are generated when they are needed during training and can be addressed very quickly to produce a memory response. One of the disadvantages is the fixed generalization once the parameters of a memory unit have been chosen. Unlike AMS, the feature map allows the adaption of the network according to the input data. This advantage has to be paid for by extensive search for the best matching neuron in the network, and therefore the response time of the network may be too large for real-time control when working with big networks. These problems can be overcome by allowing that the mapping of sensory cells to association cells in AMS is no longer fixed but can be changed during training. To accomplish this a template vector t_i is introduced for every association cell. This vector serves as an indicator for the stimuli by which the association cell has been accessed previously. During an associative recall for a stimulus s_0 a preliminary set of p association cells is activated by the hash-coding mechanism. Due to the self-organizing process during training the template vectors do not need to correspond to the input vector s_0. For the search for the best matching cell the template vector t_i of the accessed association cell is compared to the stimulus and a difference vector is calculated: Δ_i = t_i - s_i, i = 0, ..., n_s (1) where n_s is the number of searching steps. This vector can now be used to compute a virtual stimulus which compensates the mapping errors of the hash-coding mechanism: s_(i+1) = s_i - Δ_i, i = 0, ..., n_s (2) The best matching cell is found for
" . '1. i = O, ••• ,ns 1 (2) (3) and can be adressed by the virtual stimulus ~j when using the hash coding mechanism. This search mechanism ensures that the best matching cell is found even if self organization is in effect. During training the template vectors of the association cells are updated by t(t+l) = a(k,d) ·(!(k) -let»~ + t(k) (4) d lateral distance of neurons in the network where t(k) denotes the value of the teaplate vector at time k and ~(k) denotes the stimulus. a(t,d) is a monotonic decreasing function of time and the lateral distance between neurons in the network. 6 SIMULATION RESULTS Figure 3 and 4 show some simulation results of the presented algorithm for the dase of a two dimensional stimulus vector. 338 Hormel Figure J shows the expected positions in input space of the untrained template vectors ( x denotes untrained association cells). • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • · . . . . . . . . . . . . . . • • • • • • • • • • • • • · . . . . . . . . . . . . . . • • • • • • • • • • . . . • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • · . . . . . . . . . . . . . . • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • Figure J: Untrained .etwork Figure 4 shows the network after 2000 training steps with stimuli of gaussian distribution in input space. The position of the template vectors of trained cells has shifted into the direction of the better trained areas, so that more association cells are used to represent this area than before. Therefore the stored information will be more exact in this area. • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • · • • • · • • • ••• • • • • • • • • • • • • • • • • • • • • · . . . . · • • • • • • • • • · . . . . 
Figure 4: Network after 2000 training steps

6 CONCLUSION

The new algorithm presented above introduces the capability to adapt the storage mechanisms of a CMAC-type associative memory according to the arriving stimuli. This will result in various degrees of generalization depending on the number of trained points in a given area. It therefore makes it unnecessary to choose a generalization factor as a compromise between several constraints when representing nonlinear functions by storing them in this type of associative memory. Further test results will be presented together with a comparison with respective results for the original AMS.

Acknowledgements

This work was sponsored by the German Ministry for Research and Technology (BMFT) under grant no. ITR 8800 B/5

References

E. Ersue, H. Tolle. (1988) Learning Control Structures with Neuron-Like Associative Memories. In: v. Seelen, Shaw, Leinhos (Eds.) Organization of Neural Networks, VCH Verlagsgesellschaft, Weinheim, FRG, 1988

J.S. Albus (1972) Theoretical and experimental aspects of a cerebellar model, PhD thesis, University of Maryland, USA

E. Ersue, X. Mao (1983) Control of pH by Use of a Self-organizing Concept with Associative Memories. ACI'83, Kopenhagen, Denmark

E. Ersue, J. Militzer (1984) Real-time Implementation of an Associative Memory-based Learning Control Scheme for Nonlinear Multivariable Systems. Symposium on "Applications of Multivariable System Techniques", Plymouth, UK

T. Kohonen. (1988) Self-Organization and Associative Memory, 2nd Ed., Springer Verlag
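The recall-and-update scheme of equations (1)-(4) above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the toy `hash_fn` stands in for the hash-coding mechanism, and a scalar `alpha` replaces the time- and distance-dependent α(k, d).

```python
import numpy as np

def recall(stimulus, templates, hash_fn, n_s=5):
    """Best-matching-cell search, Eqs. (1)-(3): compare the accessed cell's
    template with the stimulus, then probe again with a virtual stimulus
    that compensates the hash-mapping error."""
    s = stimulus.astype(float)
    best_cell, best_norm = None, np.inf
    for _ in range(n_s + 1):
        cell = hash_fn(s)
        delta = templates[cell] - s        # Eq. (1): difference vector
        norm = np.linalg.norm(delta)
        if norm < best_norm:               # Eq. (3): keep the minimum
            best_cell, best_norm = cell, norm
        s = s - delta                      # Eq. (2): virtual stimulus
    return best_cell

def update_template(t, stimulus, alpha):
    """Template update, Eq. (4): t(k+1) = alpha*(s(k) - t(k)) + t(k)."""
    return t + alpha * (stimulus - t)
```

With `alpha` shrinking over training, templates drift toward frequently presented stimuli, which is the variable-generalization effect visible in Figure 4.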
|
1989
|
19
|
197
|
Analog Circuits for Constrained Optimization

John C. Platt¹
Computer Science Department, 256-80
California Institute of Technology
Pasadena, CA 91125

ABSTRACT

This paper explores whether analog circuitry can adequately perform constrained optimization. Constrained optimization circuits are designed using the differential multiplier method. These circuits fulfill time-varying constraints correctly. Example circuits include a quadratic programming circuit and a constrained flip-flop.

1 INTRODUCTION

Converting perceptual and cognitive tasks into constrained optimization problems is a useful way of generating neural networks to solve those tasks. Researchers have used constrained optimization networks to solve the traveling salesman problem [Durbin, 1987] [Hopfield, 1985], to perform object recognition [Gindi, 1988], and to decode error-correcting codes [Platt, 1986]. Implementing constrained optimization in analog VLSI is advantageous, because an analog VLSI chip can solve a large number of differential equations in parallel [Mead, 1989]. However, analog circuits only approximate the desired differential equations. Therefore, we have built test circuits to determine whether analog circuits can fulfill user-specified constraints.

2 THE DIFFERENTIAL MULTIPLIER METHOD

The differential multiplier method (DMM) is a method for creating differential equations that perform constrained optimization. The DMM was originally proposed by [Arrow, 1958] as an economic model. It was used as a neural network by [Platt, 1987].

¹ Current address: Synaptics, 2860 Zanker Road, Suite IDS, San Jose, CA 95134

Figure 1. The architecture of the DMM. The x capacitor in the figure represents the x_i neurons in the network. The −f′ box computes the current needed for the neurons to minimize f. The rest of the circuitry causes the network to fulfill the constraint g(x) = 0.

Figure 2.
A circuit that implements quadratic programming. x, y, and λ are voltages. "TC" refers to a transconductance amplifier.

A constrained optimization problem is: find an x such that f(x) is minimized, subject to a constraint g(x) = 0. In order to find a constrained minimum, the DMM finds the critical points (x, λ) of the Lagrangian

    L(x, λ) = f(x) + λ g(x),   (1)

by performing gradient descent on the variables x and gradient ascent on the Lagrange multiplier λ:

    dx_i/dt = −∂L/∂x_i = −∂f/∂x_i − λ ∂g/∂x_i,
    dλ/dt = +∂L/∂λ = g(x).   (2)

The DMM can be thought of as a neural network which performs gradient descent on a function f(x), plus feedback circuitry to find the λ that causes the neural network output to fulfill the constraint g(x) = 0 (see figure 1). The gradient ascent on λ is necessary for stability. The stability can be examined by combining the two equations (2) to yield a set of second-order differential equations

    d²x_i/dt² + Σ_j (∂²f/∂x_i∂x_j + λ ∂²g/∂x_i∂x_j) dx_j/dt + g(x) ∂g/∂x_i = 0,   (3)

which is analogous to the equations that govern a spring-mass-damping system. The differential equations (3) converge to the constrained minima if the damping matrix

    A_ij = ∂²f/∂x_i∂x_j + λ ∂²g/∂x_i∂x_j   (4)

is positive definite. The DMM can be extended to satisfy multiple simultaneous constraints. The stability of the DMM can also be improved. See [Platt, 1987] for more details.

3 QUADRATIC PROGRAMMING CIRCUIT

This section describes a circuit that solves a specific quadratic programming problem for two variables. A quadratic programming circuit is interesting, because the basic differential multiplier method is guaranteed to find the constrained minimum. Also, quadratic programming is useful: it is frequently a sub-problem in a more complex task. A method of solving general nonlinear constrained optimization is sequential quadratic programming [Gill, 1981]. We build a circuit to solve a time-dependent quadratic programming problem for two variables: minimize

    A(x − x₀)² + B(y − y₀)²,   (5)

subject to the constraint Cx + Dy + E(t) = 0.
(6)

Figure 3. ("Constraint Fulfillment for Quadratic Programming": observed and target voltages (V) versus time in 10⁻² sec.) Plot of two input voltages of the transconductance amplifier. The dashed line is the externally applied voltage E(t). The solid line is the circuit's solution of −Cx − Dy. The constraint depends on time: the voltage E(t) is a square wave. The linear constraint is fulfilled when the two voltages are the same. When E(t) changes suddenly, the circuit changes −Cx − Dy to compensate. The unusually shaped noise is caused by digitization by the oscilloscope.

Figure 4. ("Constraint Fulfillment with Ringing": observed and target voltages (V) versus time in 10⁻² sec.) Plot of two input voltages of the transconductance amplifier: the constraint forces are increased, which causes the system to undergo damped oscillations around the constraint manifold.

The basic differential multiplier method converts the quadratic programming problem into a system of differential equations:

    k₁ dx/dt = −2Ax + 2Ax₀ − Cλ,
    k₂ dy/dt = −2By + 2By₀ − Dλ,   (7)
    k₃ dλ/dt = Cx + Dy + E(t).

The first two equations are implemented with a resistor and capacitor (with a follower for zero output impedance). The third is implemented with resistors summing into the negative input of a transconductance amplifier. The positive input of the amplifier is connected to E(t). The circuit in figure 2 implements the system of differential equations (8), where K is the transconductance of the transconductance amplifier. The two systems of differential equations (7) and (8) can match with suitably chosen constants. The circuit in figure 2 actually performs quadratic programming. The constraint is fulfilled when the voltages on the inputs of the transconductance amplifier are the same. The g function is the difference between these voltages.
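The behaviour demanded of the circuit can be checked against equations (7) numerically. The sketch below is ours, not the circuit's component values: it uses illustrative constants, a constant E in place of the square wave E(t), and unit time constants, and integrates the DMM equations with Euler steps so that Cx + Dy + E is driven toward zero.

```python
# Euler integration of the DMM equations (7); all constants are
# illustrative choices, not values taken from the circuit.
A, B = 1.0, 1.0          # quadratic cost coefficients
C, D = 1.0, 1.0          # constraint coefficients
x0, y0 = 0.5, -0.5       # unconstrained minimum of the cost
E = 0.3                  # constant stand-in for E(t)

x = y = lam = 0.0
dt = 0.01
for _ in range(20000):
    dx = -2*A*x + 2*A*x0 - C*lam     # gradient descent on x
    dy = -2*B*y + 2*B*y0 - D*lam     # gradient descent on y
    dlam = C*x + D*y + E             # gradient ascent on lambda
    x, y, lam = x + dt*dx, y + dt*dy, lam + dt*dlam

residual = C*x + D*y + E             # constraint error, decays toward zero
```

Because the cost here is quadratic with a positive definite Hessian, the damping matrix (4) is positive definite and the iteration settles onto the constraint manifold, mirroring the measured behaviour in figure 3.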
Figure 3 is a plot of −Cx − Dy and E(t) as a function of time: they match reasonably well. The circuit in figure 2 therefore successfully fulfills the specified constraint. Decreasing the capacitance C₃ changes the spring constant of the second-order differential equation. The forces that push the system towards the constraint manifold are increased without changing the damping. Therefore, the system becomes underdamped and the constraint is fulfilled with ringing (see figure 4). The circuit in figure 2 can be easily expanded to solve general quadratic programming for N variables: simply add more x_i neurons and interconnect them with resistors.

4 CONSTRAINED FLIP-FLOP

A flip-flop is two inverters hooked together in a ring. It is a bistable circuit: one inverter is on while the other inverter is off. A flip-flop can also be considered the simplest neural network: two neurons which inhibit each other. If the inverters have infinite gain, then the flip-flop in figure 5 minimizes the function given in equation (9).

Figure 5. A flip-flop. U₁ and U₂ are voltages.

Figure 6. A circuit for constraining a flip-flop. U₁, U₂, and λ are voltages.

Figure 7. ("Constraint Satisfaction for Non-Quadratic f": observed and target voltages (V) versus time in 10⁻² sec.) Constraint fulfillment for a non-quadratic optimization function. The plot consists of the two input voltages of the transconductance amplifier. Again, E(t) is the dashed line and −Cx − Dy is the solid line. The constraint is fulfilled when the two voltages are the same. As the constraint changes with time, the flip-flop changes state and the location of the constrained minimum changes abruptly. After the abrupt change, the constraint is temporarily not fulfilled. However, the circuit quickly fulfills the constraint. The temporary violation of the constraint causes the transient spikes in the −Cx − Dy voltage.
Now, we can construct a circuit that minimizes the function in equation (9), subject to some linear constraint Cx + Dy + E(t) = 0, where x and y are the inputs to the inverters. The circuit diagram is shown in figure 6. Notice that this circuit is very similar to the quadratic programming circuit. Now, the x and y circuits are linked with a flip-flop, which adds non-quadratic terms to the optimization function. The voltages −Cx − Dy and E(t) for this circuit are plotted in figure 7. For most of the time, −Cx − Dy is close to the externally applied voltage E(t). However, because G₁ ≠ G₄ and G₂ ≠ G₅, the flip-flop moves from one minimum to the other and the constraint is temporarily violated. But the circuitry gradually enforces the constraint again. The temporary constraint violation can be seen in figure 7.

5 CONCLUSIONS

This paper examines real circuits that have been constrained with the differential multiplier method. The differential multiplier method seems to work, even when the underlying circuit is non-linear, as in the case of the constrained flip-flop. Other papers examine applications of the differential multiplier method [Platt, 1987] [Gindi, 1988]. These applications could be built with the same parallel analog hardware discussed in this paper.

Acknowledgement

This paper was made possible by funding from AT&T Bell Labs. Hardware was provided by Carver Mead, and Synaptics, Inc.

References

Arrow, K., Hurwicz, L., Uzawa, H., [1958], Studies in Linear and Nonlinear Programming, Stanford University Press, Stanford, CA.

Durbin, R., Willshaw, D., [1987], "An Analogue Approach to the Travelling Salesman Problem," Nature, 326, 689-691.

Gill, P. E., Murray, W., Wright, M. H., [1981], Practical Optimization, Academic Press, London.

Gindi, G., Mjolsness, E., Anandan, P., [1988], "Neural Networks for Model Matching and Perceptual Organization," Advances in Neural Information Processing Systems I, 618-625.

Hopfield, J. J., Tank, D.
W., [1985], "'Neural' Computation of Decisions in Optimization Problems," Biol. Cybern., 52, 141-152.

Mead, C. A., [1989], Analog VLSI and Neural Systems, Addison-Wesley, Reading, MA.

Platt, J. C., Hopfield, J. J., [1986], "Analog Decoding with Neural Networks," Neural Networks for Computing, Snowbird, UT, 364-369.

Platt, J. C., Barr, A., [1987], "Constrained Differential Optimization," Neural Information Processing Systems, 612-621.
|
1989
|
2
|
198
|
668 Dembo, Siu and Kailath

Complexity of Finite Precision Neural Network Classifier

Amir Dembo¹
Inform. Systems Lab.
Stanford University
Stanford, Calif. 94305

Kai-Yeung Siu
Inform. Systems Lab.
Stanford University
Stanford, Calif. 94305

Thomas Kailath
Inform. Systems Lab.
Stanford University
Stanford, Calif. 94305

ABSTRACT

A rigorous analysis of the finite precision computational aspects of a neural network as a pattern classifier via a probabilistic approach is presented. Even though there exist negative results on the capability of the perceptron, we show the following positive results: Given n pattern vectors, each represented by cn bits where c > 1, that are uniformly distributed, with high probability the perceptron can perform all possible binary classifications of the patterns. Moreover, the resulting neural network requires a vanishingly small proportion O(log n / n) of the memory that would be required for complete storage of the patterns. Further, the perceptron algorithm takes O(n²) arithmetic operations with high probability, whereas other methods such as linear programming take O(n^3.5) in the worst case. We also indicate some mathematical connections with VLSI circuit testing and the theory of random matrices.

1 Introduction

It is well known that the perceptron algorithm can be used to find the appropriate parameters in a linear threshold device for pattern classification, provided the pattern vectors are linearly separable. Since the number of parameters in a perceptron is significantly fewer than that needed to store the whole data set, it is tempting to conclude that when the patterns are linearly separable, the perceptron can achieve a reduction in storage complexity.

¹ The coauthor is now with the Mathematics and Statistics Department of Stanford University.
However, Minsky and Papert [1] have shown an example in which both the learning time and the parameters increase exponentially, when the perceptron would need much more storage than does the whole list of patterns. Ways around such examples can be explored by noting that analysis that assumes real arithmetic and disregards finite precision aspects might yield misleading results. For example, we present below a simple network with one real-valued weight that can simulate all possible classifications of n real-valued patterns into k classes, when unlimited accuracy and continuous distribution of the patterns are assumed. For simplicity, let us assume the patterns are real numbers in [0,1]. Consider the following sequence {x_{i,j}} generated by each pattern x_i for i = 1, ..., n:

    x_{i,1} = k · x_i mod k
    x_{i,j} = k · x_{i,j−1} mod k   for j > 1
    u(x_{i,j}) = [x_{i,j}]

where [·] denotes the integer part. Let f: {x₁, ..., x_n} → {0, ..., k−1} denote the desired classification of the patterns. It is easy to see that for any continuous distribution on [0,1], there exists a j such that u(x_{i,j}) = f(x_i), with probability one. So, the network y = u(x, w) may simulate any classification with w = j determined from the desired classification as shown above. So in this paper, we emphasize the finite precision computational aspects of pattern classification problems and provide partial answers to the following questions:

• Can the perceptron be used as an efficient form of memory?
• Does the 'learning' time of the perceptron become too long to be practical most of the time, even when the patterns are assumed to be linearly separable?
• How do the convergence results compare to those obtained by solving systems of linear inequalities?

We attempt to answer the above questions by using a probabilistic approach. The theorems will be presented without proofs; details of the proofs will appear in a complete paper.
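The single-weight construction above is easy to demonstrate concretely. The sketch below is our own code, not the authors': `u` extracts the j-th base-k digit of a pattern, and `find_w` searches for a digit position w = j that realizes a desired labeling.

```python
# Illustrative reconstruction of the one-real-weight classifier: the
# "weight" w = j selects the j-th base-k digit of each pattern.
def u(x, j, k):
    """u(x_{i,j}): the j-th base-k digit of a pattern x in [0, 1]."""
    v = (k * x) % k              # x_{i,1} = k * x mod k
    for _ in range(j - 1):
        v = (k * v) % k          # x_{i,j} = k * x_{i,j-1} mod k
    return int(v)                # [.] : integer part

def find_w(patterns, labels, k, max_j=50):
    """Search for a single weight w = j realizing the classification f."""
    for j in range(1, max_j):
        if all(u(x, j, k) == f for x, f in zip(patterns, labels)):
            return j
    return None
```

For continuously distributed patterns, some digit position agrees with any prescribed labeling with probability one; this is exactly the unlimited-accuracy loophole that the finite-precision analysis of the rest of the paper closes.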
In the following analysis, the phrase 'with high probability' means the probability of the underlying event goes to 1 as the number of patterns goes to infinity. First, we shall introduce the classical model of a perceptron in more detail and give some known results on its limitations as a pattern classifier.

2 The Perceptron

A perceptron is a linear threshold device which computes a linear combination of the coordinates of the pattern vector, compares the value with a threshold, and outputs +1 or −1 if the value is larger or smaller than the threshold, respectively. More formally, we have

    Input: x ∈ R^d
    Output: sign{⟨w, x⟩ − θ} = sign{ Σ_{i=1}^{d} x_i · w_i − θ }
    Parameters: weights w ∈ R^d, threshold θ ∈ R

    sign{y} = +1 if y ≥ 0, −1 otherwise.

Given m patterns x₁, ..., x_m in R^d, there are 2^m possible ways of classifying each of the patterns to ±1. When a desired classification of the patterns is achievable by a perceptron, the patterns are said to be linearly separable. Rosenblatt (1962) [2] showed that if the patterns are linearly separable, then there is a 'learning' algorithm, which he called the perceptron learning algorithm, to find the appropriate parameters w and θ. Let σ_i = ±1 be the desired classification of the pattern x_i. Also, let y_i = σ_i · x_i. The perceptron learning algorithm runs as follows:

1. Set k = 1, choose an initial value of w(k) ≠ 0.
2. Select an i ∈ {1, ..., n}, set y(k) = y_i.
3. If w(k) · y(k) > 0, go to 2. Else
4. Set w(k + 1) = w(k) + y(k), k = k + 1, go to 2.

The algorithm terminates when step 3 is true for all y_i. If the patterns are linearly separable, then the above perceptron algorithm is guaranteed to converge in finitely many iterations, i.e., step 4 would be reached only finitely often. The existence of such a simple and elegant 'learning' algorithm brought a great deal of interest during the 60's.
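The four steps translate directly into code. A minimal sketch (ours, not from the paper): it sweeps the patterns in order rather than selecting i arbitrarily, and assumes the threshold θ has been absorbed into an extra pattern coordinate, a standard reformulation.

```python
import numpy as np

def perceptron_learn(patterns, sigmas, max_updates=10000):
    """Steps 1-4 of the perceptron learning algorithm.

    patterns: (n, d) array; sigmas: desired classifications, entries +/-1.
    Returns w with w . y_i > 0 for every y_i = sigma_i * x_i, if found.
    """
    Y = patterns * sigmas[:, None]        # y_i = sigma_i * x_i
    w = Y[0].astype(float)                # step 1: initial w != 0
    for _ in range(max_updates):
        wrong = [y for y in Y if w @ y <= 0]
        if not wrong:                     # step 3 holds for all y_i
            return w
        w = w + wrong[0]                  # step 4: w(k+1) = w(k) + y(k)
    return None
```

For linearly separable patterns the update step is reached only finitely often, so the loop terminates with a separating weight vector.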
However, the capability of the perceptron is very limited, since only a small portion of the 2^m possible binary classifications can be achieved. In fact, Cover (1965) [3] has shown that a perceptron can classify the patterns in at most

    2 Σ_{i=0}^{d−1} C(m−1, i) = O(m^{d−1})

different ways out of the 2^m possibilities. The above upper bound O(m^{d−1}) is achieved when the pattern vectors are in general position, i.e., every subset of d vectors in {x₁, ..., x_m} is linearly independent. An immediate generalization of this result is the following:

Theorem 1. For any function f(w, x) which lies in a function space of dimension r, i.e., if we can write

    f(w, x) = a₁(w) f₁(x) + ... + a_r(w) f_r(x),

then the number of possible classifications of m patterns by sign{f(w, x)} is bounded by O(m^{r−1}).

3 A New Look at the Perceptron

The reason why the perceptron is so limited in its capability as a pattern classifier is that the dimension of the pattern vector space is kept fixed while the number of patterns is increased. We consider the binary expansion of each coordinate and view the real pattern vector as a binary vector, but in a much higher dimensional space. The intuition behind this is that we are now making use of every bit of information in the pattern. Let us assume that each pattern vector has dimension d and that each coordinate is given with m bits of accuracy, which grows with the number of patterns n in such a way that d · m = c · n for some c > 1. By considering the binary expansion, we can treat the patterns as binary vectors, i.e., each vector belongs to {+1, −1}^{cn}. If we want to classify the patterns into k classes, we can use log k binary classifiers, each classifying the patterns into the corresponding bit of the binary encoding of the k classes. So without loss of generality, we assume that the number of classes equals 2. Now the classification problem can be viewed as an implementation of a partial Boolean function whose value is only specified on n inputs out of the 2^{cn} possible ones. For arbitrary input patterns, there does not seem to exist an efficient way other than complete storage of the patterns and the use of a look-up table for classification, which will require O(n²) bits. It is natural to ask if this is the best we can do. Surprisingly, using the probabilistic method in combinatorics [4] (counting arguments), we can show the following:

Theorem 2. For n sufficiently large, there exists a system that can simulate all possible binary classifications with parameter storage of n + 2 log n bits.

Moreover, a recent result from the theory of VLSI testing [5] implies that at least n + log n bits are needed. As the proof of Theorem 2 is non-constructive, both the learning of the parameters and the retrieval of the desired classification in the 'optimal' system may be too complex for any practical purpose. Besides, since there is almost no redundancy in the storage of parameters in such an 'optimal' system, there will be no 'generalization' properties; i.e., it is difficult to predict what the output of the system would be on patterns that are not trained. However, a perceptron classifier, while sub-optimal in terms of Theorem 3 below, requires only O(n log n) bits for parameter storage, compared with O(n²) bits for a table look-up classifier. In addition, it will exhibit 'generalization' properties in the sense that new patterns that are close in Hamming distance to the trained patterns are likely to be classified into the same class.
So, if we allow some vanishingly small probability of error, we can give an affirmative answer to the first question raised at the beginning:

Theorem 3. Assume the n pattern vectors are uniformly distributed over {+1, −1}^{cn}. Then with high probability, the patterns can be classified in all 2^n possible ways using the perceptron algorithm. Further, the storage of parameters requires only O(n log n) bits.

In other words, when the input patterns are given with high precision, the perceptron can be used as an efficient form of memory. The known upper bound on the learning time of the perceptron depends on the maximum length of the input pattern vectors and the minimum distance δ of the pattern vectors to a separating hyperplane. In the following analysis, our probabilistic assumption guarantees the pattern vectors to be linearly independent with high probability, and thus linearly separable. In order to give a probabilistic upper bound on the learning time of the perceptron, we first give a lower bound on the minimum distance δ with high probability:

Lemma 1. Let n be the number of pattern vectors, each in R^m, where m = (1 + ε)n and ε is any constant > 0. Assume the entries of each vector v are i.i.d. random variables with zero mean and bounded second moment. Then with probability → 1
The above lemma makes use of a famous conjecture from the theory of random matrices [6] which gives a lower bound on the minimum singular value of a random matrix. We actually proved the conjecture during our course of study, which states which states that the minimum singular value of a en by n random matrix with c> 1, grows as Fn almost surely. Theorem 4 Let An be a en X n random matrix with c > 1, whose entries are i. i. d. entries with zero mean and bounded second moment, 0'"(-) denote the minimum singular value of a matrix. Then there exists f3 > 0 such that lim inf u( A~) > f3 n-oo yn with probability 1. Note that our probabilistic assumption on the patterns includes a wide class of distributions, in particular the zero mean normal and symmetric uniform distribution on a bounded interval. In addition, they satisfy the following condition: (*) There exists a a> 0 such that P{[v[ > aFn} --+ 0 as n --+ 00. Before we answer the last two questions raised at the beginning, we state the following known result on the perceptron algorithm as a second lemma: Lemma 2 Suppose there exists a unit vector w* such that w* . v > 15 for some 15 > 0 and for all pattern vectors v. Then the perceptron algorithm will converge to a solution vector in ::::; N2 /152 number of iterations, where N is the maximum length of the pattern vectors. N ow we are ready to state the following Theorem 5 Suppose the patterns satisfy the probabilistic assumptions stated in 674 Dembo, Siu and Kailath Lemma 1 and the condition (*), then with high probability, the perceptron takes O( n 2 ) arithmetic operations to terminate. As mentioned earlier, another way of finding a separating hyperplane is to solve a system of linear inequalities using linear programming, which requires O( n3 .S) arithmetic operations [7]. Under our probabilistic assumptions, the patterns are linearly independent with high probability, so that we can actually solve a system of linear equations. 
However, this still requires O(n3 ) arithmetic operations. Further, these methods require batch processing in the sense that all patterns have to be stored in advance in order to find the desired parameters, in constrast to the sequential 'learning' nature of the perceptron algorithm. So for training this neural network classifier, perceptron algorithm seems more preferable. When the number of patterns is polynomial in the total number of bits representing each pattern, we may first extend each vector to a dimension at least as large as the number of patterns, and then apply the perceptron to compress the storage of parameters. One way of adding these extra bits is to form products of the coordinates within each pattern. Note that by doing so, the coordinates of each pattern are pairwise independent. We conjecture that theorem 3 still applies, implying even more reduction in storage requirements. Simulation results strongly support our conjecture. 4 Conclusion In this paper, the finite precision computational aspects of pattern classification problems are emphasized. We show that the perceptron, in contrast to common belief, can be quite efficient as a pattern classifier, provided the patterns are given with high enough precision. Using a probabilistic approach, we show that the perceptron algorithm can even outperform linear programming under certain conditions. During the course of this work, we also discovered some mathematical connections with VLSI circuit testing and the theory of random matrices. In particular, we have proved an open conjecture regarding the minimum singular value of a random matrix. Acknowledgements This work was supported in part by the Joint Services Program at Stanford University (US Army, US Navy, US Air Force) under Contract DAAL03-88-C-OOll, and NASA Headquarters, Center for Aeronautics and Space Information Sciences (CASIS) under Grant NAGW-419-S5. Complexity or Finite Precision Neural Network Classifier 675 References [1] M. 
Minsky and S. Papert, Perceptrons, The MIT Press, expanded edition, 1988.

[2] F. Rosenblatt, Principles of Neurodynamics, Spartan Books, New York, 1962.

[3] T. M. Cover, "Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition", IEEE Trans. on Electronic Computers, EC-14:326-334, 1965.

[4] P. Erdos and J. Spencer, Probabilistic Methods in Combinatorics, Academic Press / Akademiai Kiado, New York-Budapest, 1974.

[5] G. Seroussi and N. Bshouty, "Vector Sets for Exhaustive Testing of Logic Circuits", IEEE Trans. Inform. Theory, IT-34:513-522, 1988.

[6] J. Cohen, H. Kesten and C. Newman, editors, Random Matrices and Their Applications, volume 50 of Contemporary Mathematics, American Mathematical Society, 1986.

[7] N. Karmarkar, "A New Polynomial-Time Algorithm for Linear Programming", Combinatorica 4, pages 373-395, 1984.
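The √n growth asserted in Theorem 4 is easy to observe numerically. The experiment below is our own illustration, not the paper's simulations: ±1 entries, an arbitrary seed, and a loose empirical threshold that is not the paper's β.

```python
import numpy as np

# sigma_min(A_n)/sqrt(n) for cn x n random +/-1 matrices, c > 1: per
# Theorem 4 this ratio should stay bounded away from zero as n grows.
rng = np.random.default_rng(0)
c = 2
ratios = []
for n in (50, 100, 200):
    A = rng.choice([-1.0, 1.0], size=(c * n, n))
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]   # smallest singular value
    ratios.append(sigma_min / np.sqrt(n))
```

For i.i.d. zero-mean, unit-variance entries, random-matrix results suggest the ratio concentrates near √c − 1, consistent with the theorem.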
|
1989
|
20
|
199
|
308 Donnett and Smithers

Neuronal Group Selection Theory: A Grounding in Robotics

Jim Donnett and Tim Smithers
Department of Artificial Intelligence
University of Edinburgh
5 Forrest Hill
Edinburgh EH1 2QL
SCOTLAND

ABSTRACT

In this paper, we discuss a current attempt at applying the organizational principle Edelman calls Neuronal Group Selection to the control of a real, two-link robotic manipulator. We begin by motivating the need for an alternative to the position-control paradigm of classical robotics, and suggest that a possible avenue is to look at the primitive animal limb's 'neurologically ballistic' control mode. We have been considering a selectionist approach to coordinating a simple perception-action task.

1 MOTIVATION

The majority of industrial robots in the world are mechanical manipulators: often arm-like devices consisting of some number of rigid links, with actuators mounted where the links join that move adjacent links relative to each other, rotationally or translationally. At the joints there are typically also sensors measuring the relative position of adjacent links, and it is in terms of position that manipulators are generally controlled (a desired motion is specified as a desired position of the end effector, from which can be derived the necessary positions of the links comprising the manipulator). Position control dominates largely for historical reasons, rooted in bang-bang control: manipulators bumped between mechanical stops placed so as to enforce a desired trajectory for the end effector.

1.1 SERVOMECHANISMS

Mechanical stops have been superseded by position-controlling servomechanisms: negative feedback systems in which, for a typical manipulator with revolute joints, a desired joint angle is compared with a feedback signal from the joint sensor signalling the actual measured angle; the difference controls the motive power output of the joint actuator proportionally.
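A position-controlling servo loop of this kind is only a few lines in simulation. A minimal sketch (ours: a first-order joint model with an illustrative gain, not a model of any particular manipulator):

```python
def servo_step(theta, theta_desired, k_p=2.0, dt=0.01):
    """One step of a proportional joint servo: the difference between desired
    and measured joint angle drives the actuator proportionally."""
    error = theta_desired - theta     # comparison with the joint sensor signal
    rate = k_p * error                # motive power proportional to the error
    return theta + dt * rate          # simplistic first-order joint response

theta = 0.0
for _ in range(1000):                 # settle toward the commanded angle
    theta = servo_step(theta, 1.0)
```

Even this toy loop hints at the cost of the paradigm: the controller must run continuously per joint, and it is the interaction of several such loops that creates the dynamical coupling problems discussed next.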
Where a manipulator is constructed of a number of links, there might be a servomechanism for each joint. In combination, it is well known that joint motions can affect each other adversely, requiring careful design and analysis to reduce the possibility of unpleasant dynamical instabilities. This is especially important when the manipulator will be required to execute fast movements involving many or all of the joints. We are interested in such dynamic tasks, and acknowledge some successful servomechanistic solutions (see [Andersson 1988], who describes a ping-pong playing robot), but seek an alternative that is not as computationally expensive.

1.2 ESCAPING POSITION CONTROL

In Nature, fast reaching and striking is a primitive and fundamental mode of control. In fast, time-optimal, neurologically ballistic movements (such as horizontal rotations of the head where subjects are instructed to turn it as fast as possible [Hannaford and Stark 1985]), muscle activity patterns seem to show three phases: a launching phase (a burst of agonist), a braking phase (an antagonist burst), and a locking phase (a second agonist burst). Experiments have shown (see [Wadman et al. 1979]) that at least the first 100 ms of activity is the same even if a movement is blocked mechanically (without forewarning the subject), suggesting that the launch is specified from predetermined initial conditions (and is not immediately modified from proprioceptive information). With the braking and locking phases acting as a damping device at the end of the motion, the complete motion of the arm is essentially specified by the initial conditions, a mode radically differing from traditional robot position control. The overall coordination of movements might even seem naive and simple when compared with the intricacies of servomechanisms (see [Braitenberg 1989, Nahvi and Hashemi 1984], who discuss the crane driver's strategy for shifting loads quickly and time-optimally).
The concept of letting insights such as these, gained from the biological sciences, shape the engineering principles used to create artificial autonomous systems is finding favour with a growing number of researchers in robotics. As it is not generally trivial to see how life's devices can be mapped onto machines, there is a need for some fundamental experimental work to develop and test the basic theoretical and empirical components of this approach, and we have been considering various robotics problems from this perspective. Here, we discuss an experimental two-link manipulator that performs a simple manipulation task: hitting a simple object perceived to be within its reach. The perception of the object specifies the initial conditions that determine an arm motion that reaches it. In relating initial conditions with motor currents, we have been considering a scheme based on Neuronal Group Selection Theory [Edelman 1987, Reeke and Edelman 1988], a theory of brain organization. We believe this to be the first attempt to apply selectionist ideas in a real machine, rather than just in simulation.

2 NEURONAL GROUP SELECTION THEORY

Edelman proposes Neuronal Group Selection (NGS) [Edelman 1978] as an organizing principle for higher brain function (mainly a biological basis for perception), primarily applicable to the mammalian (and specifically, human) nervous system [Edelman 1981]. The essential idea is that groups of cells, structurally varied as a result of developmental processes, comprise a population from which are selected those groups whose function leads to adaptive behaviour of the system. Similar notions appear in immunology and, of course, evolutionary theory, although the effects of neuronal group selection are manifest in the lifetime of the organism. There are two premises on which the principle rests. The first is that the unit of selection is a cell group of perhaps 50 to 10,000 neurons.
Intra-group connections between cells are assumed to vary (greatly) between groups, but other connections in the brain (particularly inter-group) are quite specific. The second premise is that the kinds of nervous systems whose organization the principle addresses are able to adapt to circumstances not previously encountered by the organism or its species [Edelman 1978].

2.1 THREE CENTRAL TENETS

There are three important ideas in the NGS theory [Edelman 1987].

• A first selective process (cell division, migration, differentiation, or death) results in structural diversity, providing a primary repertoire of variant cell groups.

• A second selective process occurs as the organism experiences its environment; group activity that correlates with adaptive behaviour leads to differential amplification of intra- and inter-group synaptic strengths (the connectivity pattern remains unchanged). From the primary repertoire are thus selected groups whose adaptive functioning means they are more likely to find future use; these groups form the secondary repertoire.

• Secondary repertoires themselves form populations, and the NGS theory additionally requires a notion of reentry, or connections between repertoires, usually arranged in maps, of which the well-known retinotopic mapping of the visual system is typical. These connections are critical, for they correlate motor and sensory repertoires, and lend the world the kind of spatiotemporal continuity we all experience.

Neuronal Group Selection Theory: A Grounding in Robotics

2.2 REQUIREMENTS OF SELECTIVE SYSTEMS

To be selective, a system must satisfy three requirements [Reeke and Edelman 1988]. Given a configuration of input signals (ultimately from the sensory epithelia, but for 'deeper' repertoires mainly coming from other neuronal groups), if a group responds in a specific way it has matched the input [Edelman 1978].
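The first two tenets, a variant primary repertoire and amplification of groups whose activity correlates with adaptive behaviour, can be sketched as a toy selective system. Everything here is a hypothetical illustration: the bit-pattern representation of a group, the matching rule, and the amplification gain are all invented for the sketch, not taken from the NGS theory's biological detail.

```python
import random

# Toy selective system illustrating the tenets above: a repertoire of
# variant groups, a degraded matching rule, and differential amplification
# of groups whose response correlates with adaptive behaviour.
# Group structure, match rule, and gain are hypothetical assumptions.
random.seed(0)  # deterministic 'developmental' variation for the sketch

REPERTOIRE_SIZE = 200
INPUT_BITS = 8

# Primary repertoire: variant groups, each preferring a random bit pattern.
repertoire = [{"pattern": [random.randint(0, 1) for _ in range(INPUT_BITS)],
               "strength": 1.0} for _ in range(REPERTOIRE_SIZE)]


def match_score(group, signal):
    """Degraded matching: count of agreeing bits, weighted by strength."""
    agree = sum(p == s for p, s in zip(group["pattern"], signal))
    return agree * group["strength"]


def respond_and_amplify(signal, adaptive, gain=1.5):
    """All groups 'see' the signal; the best match responds, and its
    'synaptic strength' is amplified if the outcome was adaptive.
    The connectivity (patterns) is never changed, only the strengths."""
    best = max(repertoire, key=lambda g: match_score(g, signal))
    if adaptive:
        best["strength"] *= gain
    return best
```

Repeated adaptive encounters make the amplified groups ever more likely to respond again, which is the selectional analogue of forming a secondary repertoire.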
The first requirement of a selective system is that it have a sufficiently large repertoire of variant elements to ensure that an adequate match can be found for a wide range of inputs. Secondly, enough of the groups in a repertoire must 'see' the diverse input signals effectively and quickly so that selection can operate on these groups. And finally, there must be a means for 'amplifying' the contribution, to the repertoire, of groups whose operation when matching input signals has led to adaptive behaviour. In determining the necessary number of groups in a repertoire, one must consider the relationship between repertoire size and the specificity of member groups. On the one hand, if groups are very specific, repertoires will need to be very large in order to recognize a wide range of possible inputs. On the other hand, if groups are not as discriminating, it will be possible to have smaller numbers of them, but in the limit (a single group with virtually no specificity) different signals will no longer be distinguishable. A simple way to quantify this might be to assume that each of N groups has a fixed probability, p, of matching an input configuration; then a typical measure [Edelman 1978] relating the effectiveness of recognition, r, to the number of groups is r = 1 - (1 - p)^N (see Fig. 1).

[Figure 1: Recognition as a Function of Repertoire Size — r plotted against log N]

From the shape of the curve in Fig. 1, it is clear that, for such a measure, below some lower threshold for N, the efficacy of recognition is equally poor. Similarly, above an upper threshold for N, recognition does not improve substantially as more groups are added.

3 SELECTIONISM IN OUR EXPERIMENT

Our manipulator is required to touch an object perceived to be within reach. This is a well-defined but non-trivial problem in motor-sensory coordination.
Churchland proposes a geometrical solution for his two-eyed 'crab' [Churchland 1986], in which eye angles are mapped to those joint angles (the crab has a two-link arm) that would bring the end of the arm to the point currently foveated by the eyes. Such a novel solution, in which computation is implicit and massively parallel, would be welcome; however, the crab is a simulation, and no heed is paid to the question of how the appropriate sensory-motor mapping could be generated for a real arm. Reeke and Edelman discuss an automaton, Darwin III, similar to the crab, but which by selectional processes develops the ability to manipulate objects presented to it in its environment [Reeke and Edelman 1988]. The Darwin III simulation does not account for arm dynamics; however, Edelman suggests that the training paradigm is able to handle dynamic effects as well as the geometry of the problem [Edelman 1989]. We are attempting to implement a mechanical analogue of Darwin III, somewhat simplified, but which will experience the real dynamics of motion.

3.1 EXPERIMENTAL ARCHITECTURE AND HARDWARE

The mechanical arrangement of our manipulator is shown in Fig. 2. The two links have an agonist/antagonist tendon-drive arrangement, with an actuator per tendon. There are strain gauges in-line with the tendons. A manipulator 'reach' is specified by six parameters: burst amplitude and period for each of the three phases, launch, brake, and lock.

[Figure 2: Manipulator Mechanical Configuration — the two links (upper-arm and forearm), their tendons, and the four actuators (upper-arm left and right, forearm left and right)]

At the end of the manipulator is an array of eleven pyroelectric-effect infrared detectors arranged in a U-shaped pattern. The relative location of a warm object presented to the arm is registered by the sensors, and is converted to eleven 8-bit integers.
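One way to represent the six-parameter 'reach' described above is as a small record holding a burst amplitude and period for each of the launch, brake, and lock phases. The field names, units, and example values are assumptions made for this sketch; the paper specifies only that a reach is a 6-tuple of burst amplitudes and periods.

```python
from dataclasses import dataclass

# Hypothetical encoding of the six-parameter ballistic 'reach' described
# in the text: amplitude and period for launch, brake, and lock bursts.
# Field names, units, and the example values are assumptions.

@dataclass(frozen=True)
class Reach:
    launch_amplitude: float   # drive level of the launching agonist burst
    launch_period_ms: float
    brake_amplitude: float    # antagonist burst that decelerates the arm
    brake_period_ms: float
    lock_amplitude: float     # second agonist burst holding the final pose
    lock_period_ms: float

    def as_tuple(self):
        """Flatten to the 6-tuple form used by the motor repertoire."""
        return (self.launch_amplitude, self.launch_period_ms,
                self.brake_amplitude, self.brake_period_ms,
                self.lock_amplitude, self.lock_period_ms)


example = Reach(1.0, 100.0, 0.8, 80.0, 0.4, 60.0)
```

A motor repertoire is then simply a population of such 6-tuples, one per representable ballistic movement, which is why it must grow as the manipulator's dexterity increases.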
Since the sensor output is proportional to detected infrared energy flux, objects at the same temperature will give a more positive reading if they are close to the sensors than if they are further away. Also, a near object will register on adjacent sensors, not just on the one oriented towards it. Therefore, for a single, small object, a histogram of the eleven values will have a peak, showing two things (Fig. 3): the sensor 'seeing' the most flux indicates the relative direction of the object, and the sharpness of the peak is proportional to the distance of the object.

[Figure 3: Histograms for Distant Versus Near Objects — left: object distant and to the left; right: object near and straight ahead]

Modelled on Darwin III [Reeke and Edelman 1988], the architecture of the selectional perception-action coordinator is as in Fig. 4. The boxes represent repertoires of appropriately interconnected groups of 'neurons'. Darwin III responds mainly to contour in a two-dimensional world, analogous to the recognition of histogram shape in our system. Where Darwin III's 'unique response' network is sensitive to line segment lengths and orientations, ours is sensitive to the length of subsequences in the array of sensor output values in which values increase or decrease by the same amount, and the amounts by which they change; similarly, where Darwin III's 'generic response' network is sensitive to the presence of or changes in orientation of lines, ours responds to the presence of the subsequences mentioned above, and the positions in the array where two subsequences abut. The recognition repertoires are reciprocally connected, and both connect to the motor repertoire, which consists of ballistic-movement 6-tuples. The system considers 'touching perceived object' to be adaptive, so when recognition activity correlates with a given 6-tuple, amplification ensures that the same response will be favoured in future.
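The two cues carried by the sensor histogram, direction from the hottest sensor and distance from the sharpness of the peak, can be sketched as follows. The sharpness measure and the sample readings are invented for illustration; the paper's actual recognition repertoires respond to subsequence structure rather than to these two summary numbers.

```python
# Sketch of reading direction and distance from the eleven-sensor histogram
# described above. The sharpness measure and sample readings are invented.

def interpret_sensors(readings):
    """readings: eleven 8-bit integers from the infrared detector array."""
    assert len(readings) == 11
    peak_index = max(range(11), key=lambda i: readings[i])
    # Sharpness: how much the peak stands out from its immediate neighbours.
    neighbours = [readings[i] for i in (peak_index - 1, peak_index + 1)
                  if 0 <= i < 11]
    sharpness = readings[peak_index] - sum(neighbours) / len(neighbours)
    # Assume index 5 is straight ahead; lower indices are to the left.
    direction = peak_index - 5
    return direction, sharpness


# A near object spreads across adjacent sensors, giving a blunt peak ...
near = interpret_sensors([10, 20, 60, 120, 180, 200, 180, 120, 60, 20, 10])
# ... while a distant object to the left registers on mostly one sensor.
far = interpret_sensors([5, 80, 10, 5, 5, 5, 5, 5, 5, 5, 5])
```

Consistent with the text, the distant object produces the sharper peak even though its absolute readings are lower.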
4 WORK TO DATE

As the sensing system is not yet functional, this aspect of the system is currently simulated in an IBM PC/AT. The rest of the electrical and mechanical hardware is in place. The major difficulty currently faced is that the selectional system will become computationally intensive on a serial machine.

[Figure 4: Experimental Architecture — the world feeds feature detector and feature correlator combination responses (unique and generic), reciprocally coupled for classification and connected via a motor map to the motor actions]

For each possible ballistic 'reach', there must be a representation for the 'reach 6-tuple'. Therefore, the motor repertoire must become large as the dexterity of the manipulator is increased. Similarly, as the array of sensors is extended (resolution increased, or field of view widened), the classification repertoires must also grow. On a serial machine, polling the groups in the repertoires must be done one at a time, introducing a substantial delay between the registration of the object and the actual touch, precluding the interception by the manipulator of fast moving objects. We are exploring possibilities for parallelizing the selectional process (and have for this reason constructed a network of processing elements), with the expectation that this will lead us closer to fast, dynamic manipulation, at minimal computational expense.

References

Russell L. Andersson. A Robot Ping-Pong Player: Experiment in Real-Time Intelligent Control. MIT Press, Cambridge, MA, 1988.

Valentino Braitenberg. "Some types of movement", in C.G. Langton, ed., Artificial Life, pp. 555-565, Addison-Wesley, 1989.

Paul M. Churchland. "Some reductive strategies in cognitive neurobiology". Mind, 95:279-309, 1986.

Jim Donnett and Tim Smithers. "Behaviour-based control of a two-link ballistic arm". Dept. of Artificial Intelligence, University of Edinburgh, Research Paper RP 158, 1990.

Gerald M. Edelman. "Group selection and phasic reentrant signalling: a theory of higher brain function", in G.M. Edelman and V.B. Mountcastle, eds., The Mindful Brain, pp. 51-100, MIT Press, Cambridge, MA, 1978.

Gerald M. Edelman. "Group selection as the basis for higher brain function", in F.O. Schmitt et al., eds., Organization of the Cerebral Cortex, pp. 535-563, MIT Press, Cambridge, MA, 1981.

Gerald M. Edelman. Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, New York, 1987.

Gerald M. Edelman. Personal correspondence, 1989.

Blake Hannaford and Lawrence Stark. "Roles of the elements of the triphasic control signal". Experimental Neurology, 90:619-634, 1985.

M.J. Nahvi and M.R. Hashemi. "A synthetic motor control system; possible parallels with transformations in cerebellar cortex", in J.R. Bloedel et al., eds., Cerebellar Functions, pp. 67-69, Springer-Verlag, 1984.

George N. Reeke Jr. and Gerald M. Edelman. "Real brains and artificial intelligence", in Stephen R. Graubard, ed., The Artificial Intelligence Debate, pp. 143-173, The MIT Press, Cambridge, MA, 1988.

W.J. Wadman, J.J. Denier van der Gon, R.H. Geuse, and C.R. Mol. "Control of fast goal-directed arm movements". Journal of Human Movement Studies, 5:3-17, 1979.