BIT-SERIAL NEURAL NETWORKS

Alan F. Murray, Anthony V. W. Smith and Zoe F. Butler.
Department of Electrical Engineering, University of Edinburgh, The King's Buildings, Mayfield Road, Edinburgh, Scotland, EH9 3JL.

ABSTRACT

A bit-serial VLSI neural network is described from an initial architecture for a synapse array through to silicon layout and board design. The issues surrounding bit-serial computation and analog/digital arithmetic are discussed, and the parallel development of a hybrid analog/digital neural network is outlined. Learning and recall capabilities are reported for the bit-serial network, along with a projected specification for a 64-neuron, bit-serial board operating at 20 MHz. This technique is extended to a 256-neuron (256^2 synapses) network with an update time of 3 ms, using a "paging" technique to time-multiplex calculations through the synapse array.

1. INTRODUCTION

The functions a synthetic neural network may aspire to mimic are the ability to consider many solutions simultaneously, an ability to work with corrupted data and a natural fault tolerance. This arises from the parallelism and distributed knowledge representation which gives rise to gentle degradation as faults appear. These functions are attractive for implementation in VLSI and WSI. For example, the natural fault tolerance could be useful in silicon wafers with imperfect yield, where the network degradation is approximately proportional to the non-functioning silicon area.

To cast neural networks in engineering language, a neuron is a state machine that is either "on" or "off", which in general assumes intermediate states as it switches smoothly between these extrema. The synapses weight the signals from a transmitting neuron such that it is more or less excitatory or inhibitory to the receiving neuron. The set of synaptic weights determines the stable states and represents the learned information in a system.

The neural state, $V_i$, is related to the total neural activity stimulated by inputs to the neuron through an activation function, F. Neural activity is the level of excitation of the neuron, and the activation is the way the neuron reacts in response to a change in activity. The neural output state at time t, $V_i^t$, is related to $x_i^t$ by

$V_i^t = F(x_i^t)$   (1)

The activation function is a "squashing" function ensuring that (say) $V_i$ is 1 when $x_i$ is large and -1 when $x_i$ is small. The neural update function is therefore straightforward:

$x_i^{t+1} = x_i^t + \delta \sum_{j=0}^{n-1} T_{ij} V_j^t$   (2)

where $\delta$ represents the rate of change of neural activity, $T_{ij}$ is the synaptic weight and n is the number of terms, giving an n-neuron array [1].

Although the neural function is simple enough, in a totally interconnected n-neuron network there are $n^2$ synapses requiring $n^2$ multiplications and summations and a large number of interconnects. The challenge in VLSI is therefore to design a simple, compact synapse that can be repeated to build a VLSI neural network with manageable interconnect. In a network with fixed functionality, this is relatively straightforward. If the network is to be able to learn, however, the synaptic weights must be programmable, and the synapse is therefore more complicated.

2. DESIGNING A NEURAL NETWORK IN VLSI

There are fundamentally two approaches to implementing any function in silicon: digital and analog.
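As a concrete illustration of Eqs. (1) and (2), the sketch below (Python/NumPy) steps an n-neuron state vector through the update rule, using a tanh squashing function and arbitrary toy weights; it is a software illustration only, not the bit-serial arithmetic described later in the paper.

```python
import numpy as np

def update(x, T, delta=0.1):
    """One step of Eq. (2): x_i(t+1) = x_i(t) + delta * sum_j T_ij * V_j(t)."""
    V = np.tanh(x)          # Eq. (1): V = F(x), here a tanh "squashing" function
    return x + delta * T @ V

# toy example: n = 4 neurons, symmetric random weights (illustrative values only)
rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n))
T = (T + T.T) / 2
x = rng.standard_normal(n)
for _ in range(100):
    x = update(x, T)
print(np.tanh(x))           # neural states V after settling
```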
Each technique has its advantages and disadvantages, and these are listed below, along with the merits and demerits of bit-serial architectures in digital (synchronous) systems.

Digital vs. analog: The primary advantage of digital design for a synapse array is that digital memory is well understood and can be incorporated easily. Learning networks are therefore possible without recourse to unusual techniques or technologies. Other strengths of a digital approach are that design techniques are advanced, automated and well understood, and that noise immunity and computational speed can be high. Unattractive features are that digital circuits of this complexity need to be synchronous and all states and activities are quantised, while real neural networks are asynchronous and unquantised. Furthermore, digital multipliers occupy a large silicon area, giving a low synapse count on a single chip. The advantages of analog circuitry are that asynchronous behaviour and smooth neural activation are automatic. Circuit elements can be small, but noise immunity is relatively low and arbitrarily high precision is not possible. Most importantly, no reliable analog, non-volatile memory technology is as yet readily available. For this reason, learning networks lend themselves more naturally to digital design and implementation.

Several groups are developing neural chips and boards, and the following listing does not pretend to be exhaustive. It is included, rather, to indicate the spread of activity in this field. Analog techniques have been used to build resistor / operational-amplifier networks [2,3] similar to those proposed by Hopfield and Tank [4]. A large group at Caltech is developing networks implementing early vision and auditory processing functions using the intrinsic nonlinearities of MOS transistors in the subthreshold regime [5,6]. The problem of implementing analog networks with electrically programmable synapses has been addressed using CCD/MNOS technology [7]. Finally, Garth [8] is developing a digital neural accelerator board ("Netsim") that is effectively a fast SIMD processor with supporting memory and communications chips.

Bit-serial vs. bit-parallel: Bit-serial arithmetic and communication is efficient for computational processes, allowing good communication within and between VLSI chips and tightly pipelined arithmetic structures. It is ideal for neural networks as it minimises the interconnect requirement by eliminating multi-wire busses. Although a bit-parallel design would be free from computational latency (delay between input and output), pipelining makes optimal use of the high bit rates possible in serial systems, and makes for efficient circuit usage.

2.1 An asynchronous pulse stream VLSI neural network: In addition to the digital system that forms the substance of this paper, we are developing a hybrid analog/digital network family. This work is outlined here, and has been reported in greater detail elsewhere [9, 10, 11]. The generic (logical and layout) architecture of a single network of n totally interconnected neurons is shown schematically in figure 1. Neurons are represented by circles, which signal their states, $V_i$, upward into a matrix of synaptic operators. The state signals are connected to an n-bit horizontal bus running through the synaptic array, with a connection to each synaptic operator in every column.
All columns have n operators (denoted by squares) and each operator adds its synaptic contribution, $T_{ij} V_j$, to the running total of activity for the neuron i at the foot of the column. The synaptic function is therefore to multiply the signalling neuron state, $V_j$, by the synaptic weight, $T_{ij}$, and to add this product to the running total. This architecture is common to both the bit-serial and pulse-stream networks.

Figure 1. Generic architecture for a network of n totally interconnected neurons. (Neuron states $\{V_j\}$ are signalled upward into the synapse array.)

This type of architecture has many attractions for implementation in 2-dimensional silicon, as the summation $\sum_{j=0}^{n-1} T_{ij} V_j$ is distributed in space. The interconnect requirement (n inputs to each neuron) is therefore distributed through a column, reducing the need for long-range wiring. The architecture is modular, regular and can be easily expanded.

In the hybrid analog/digital system, the circuitry uses a "pulse stream" signalling method similar to that in a natural neural system. Neurons indicate their state by the presence or absence of pulses on their outputs, and synaptic weighting is achieved by time-chopping the presynaptic pulse stream prior to adding it to the postsynaptic activity summation. It is therefore asynchronous and imposes no fundamental limitations on the activation or neural state. Figure 2 shows the pulse stream mechanism in more detail. The synaptic weight is stored in digital memory local to the operator. Each synaptic operator has an excitatory and inhibitory pulse stream input and output. The resultant product of a synaptic operation, $T_{ij} V_j$, is added to the running total propagating down either the excitatory or inhibitory channel. One binary bit (the MSBit) of the stored $T_{ij}$ determines whether the contribution is excitatory or inhibitory.

The incoming excitatory and inhibitory pulse stream inputs to a neuron are integrated to give a neural activation potential that varies smoothly from 0 to 5 V. This potential controls a feedback loop with an odd number of logic inversions and thus forms a switched "ring oscillator".

Figure 2. Pulse stream arithmetic. Neurons are denoted by circles and synaptic operators by squares.

If the inhibitory input dominates, the feedback loop is broken. If excitatory spikes subsequently dominate at the input, the neural activity rises to 5 V and the feedback loop oscillates with a period determined by a delay around the loop. The resultant periodic waveform is then converted to a series of voltage spikes, whose pulse rate represents the neural state, $V_i$. Interestingly, a not dissimilar technique is reported elsewhere in this volume, although the synapse function is executed differently [12].

3. A 5-STATE BIT-SERIAL NEURAL NETWORK

The overall architecture of the 5-state bit-serial neural network is identical to that of the pulse stream network. It is an array of $n^2$ interconnected synchronous synaptic operators, and whereas the pulse stream method allowed $V_j$ to assume all values between "off" and "on", the 5-state network constrains $V_j$ to 0, ±0.5 or ±1. The resultant activation function is shown in Figure 3. Full digital multiplication is costly in silicon area, but multiplication of $T_{ij}$ by $V_j$ = 0.5 merely requires the synaptic weight to be right-shifted by 1 bit. Similarly, multiplication by 0.25 would involve a further right-shift of $T_{ij}$, and multiplication by 0.0 is trivially easy.
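A minimal software rendering of this reduced arithmetic is sketched below: the product $T_{ij} V_j$ for the five states is formed with only a conditional right-shift and an add or subtract. The integer encoding and example values are illustrative, not the chip's bit-level implementation.

```python
def synapse_contribution(T_ij, V_j):
    """Reduced-arithmetic product T_ij * V_j for the 5-state values
    V_j in {-1, -0.5, 0, 0.5, 1}, using only a shift and an add/subtract."""
    if V_j == 0:
        return 0                                     # multiply by 0: total unchanged
    magnitude = T_ij if abs(V_j) == 1 else (T_ij >> 1)   # x0.5: right-shift by 1 bit
    return magnitude if V_j > 0 else -magnitude          # sign: add or subtract

# accumulate one column's running total for neuron i (toy 8-bit weights)
weights = [23, 6, 100]
states = [1, -0.5, 0]
x_i = sum(synapse_contribution(T, V) for T, V in zip(weights, states))
print(x_i)   # 23 - 3 + 0 = 20
```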
Negative neural states ($V_j < 0$) are not problematic either, as a switchable adder/subtractor is not much more complex than an adder. Five neural states are therefore feasible with circuitry that is only slightly more complex than a simple serial adder. The neural state expands from a 1-bit to a 3-bit (5-state) representation, where the bits represent "add/subtract?", "shift?" and "multiply by 0?". Figure 4 shows part of the synaptic array. Each synaptic operator includes an 8-bit shift register memory block holding the synaptic weight, $T_{ij}$. A 3-bit bus for the 5 neural states runs horizontally above each synaptic row. Single-phase dynamic CMOS has been used, with a clock frequency in excess of 20 MHz [13].

Details of a synaptic operator are shown in figure 5. The synaptic weight $T_{ij}$ cycles around the shift register and the neural state $V_j$ is present on the state bus. During the first clock cycle, the synaptic weight is multiplied by the neural state, and during the second, the most significant bit (MSBit) of the resultant $T_{ij} V_j$ is sign-extended for 8 bits to allow for word growth in the running summation. A least significant bit (LSBit) signal running down the synaptic columns indicates the arrival of the LSBit of the $x_i$ running total. If the neural state is ±0.5 the synaptic weight is right-shifted by 1 bit and then added to or subtracted from the running total. A multiplication of ±1 adds or subtracts the weight from the total, and multiplication by 0 does not alter the running summation. The final summation at the foot of the column is thresholded externally according to the 5-state activation function in figure 3.

Figure 3. "Hard-threshold", 5-state and sigmoid activation functions (state $V_j$ against activity $x_j$, with "sharper" and "smoother" variants).

Figure 4. Section of the synaptic array of the 5-state activation function neural network.

Figure 5. The synaptic operator with a 5-state activation function.

As the neural activity $x_i$ increases through a threshold value $x_t$, ideal sigmoidal activation represents a smooth switch of neural state from -1 to 1. The 5-state "staircase" function gives a superficially much better approximation to the sigmoid form than a (much simpler to implement) threshold function. The sharpness of the transition can be controlled to "tune" the neural dynamics for learning and computation. The control parameter is referred to as temperature, by analogy with statistical functions with this sigmoidal form. High "temperature" gives a smoother staircase and sigmoid, while a temperature of 0 reduces both to the "Hopfield"-like threshold function. The effects of temperature on both learning and recall for the threshold and 5-state activation options are discussed in section 4.

4. LEARNING AND RECALL WITH VLSI CONSTRAINTS

Before implementing the reduced-arithmetic network in VLSI, simulation experiments were conducted to verify that the 5-state model represented a worthwhile enhancement over simple threshold activation. The "benchmark" problem was chosen for its ubiquitousness, rather than for its intrinsic value.
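The comparison that follows involves three activation options: the hard threshold, the 5-state staircase of figure 3, and a smooth sigmoid, each parameterised by a temperature. The sketch below is one illustrative way to write the three functions; the staircase breakpoints and the use of tanh for the sigmoid are assumptions of the sketch, not the exact functions used in the paper's simulator.

```python
import numpy as np

def hard_threshold(x):
    return np.where(x >= 0, 1.0, -1.0)

def five_state(x, T=10.0):
    """Staircase approximation to a sigmoid, returning -1, -0.5, 0, 0.5 or 1.
    Larger 'temperature' T widens the staircase; T = 0 recovers the hard threshold.
    The breakpoints implied by the rounding are illustrative only."""
    if T == 0:
        return hard_threshold(x)
    return np.clip(np.round(x / T), -2, 2) / 2.0

def sigmoid(x, T=10.0):
    return np.tanh(x / T)

x = np.linspace(-40, 40, 9)
print(five_state(x, T=10.0))   # [-1. -1. -1. -0.5 0. 0.5 1. 1. 1.]
```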
The implications for learning and recall of the 5-state model, the threshold (2-state) model and smooth sigmoidal activation (∞-state) were compared at varying temperatures with a restricted dynamic range for the weights $T_{ij}$. In each simulation a totally interconnected 64-node network attempted to learn 32 random patterns using the delta rule learning algorithm (see for example [14]). Each pattern was then corrupted with 25% noise and recall attempted, to probe the content addressable memory properties under the three different activation options.

During learning, individual weights can become large (positive or negative). When weights are "driven" beyond the maximum value in a hardware implementation, which is determined by the size of the synaptic weight blocks, some limiting mechanism must be introduced. For example, with eight-bit weight registers, the limitation is $-128 \le T_{ij} \le 127$. With integer weights, this can be seen to be a problem of dynamic range, where it is the relationship between the smallest possible weight (±1) and the largest (+127/-128) that is the issue.

Results: Fig. 6 shows examples of the results obtained, studying learning using 5-state activation at different temperatures, and recall using both 5-state and threshold activation. At temperature T = 0, the 5-state and threshold models are degenerate, and the results identical. Increasing smoothness of activation (temperature) during learning improves the quality of learning regardless of the activation function used in recall, as more patterns are recognised successfully. Using 5-state activation in recall is more effective than simple threshold activation. The effect of dynamic range restrictions can be assessed from the horizontal axis, where the limit on $T_{ij}$ is shown.

The results from these and many other experiments may be summarised as follows.

5-state activation vs. threshold: 1) Learning with 5-state activation was protracted over the threshold activation, as binary patterns were being learnt, and the inclusion of intermediate values added extra degrees of freedom. 2) Weight sets learnt using the 5-state activation function were "better" than those learnt via threshold activation, as the recall properties of both 5-state and threshold networks using such a weight set were more robust against noise. 3) Full sigmoidal activation was better than 5-state, but the enhancement was less significant than that incurred by moving from threshold to 5-state. This suggests that the law of diminishing returns applies to the addition of levels to the neural state $V_i$. This issue has been studied mathematically [15], with results that agree qualitatively with ours.

Weight saturation: Three methods were tried to deal with weight saturation. Firstly, a decay, or "forgetting", term was included in the learning cycle [1]. It is our view that this technique can produce the desired weight-limiting property, but in the time available for experiments we were unable to "tune" the rate of decay sufficiently well to confirm it. Renormalisation of the weights (division to bring large weights back into the dynamic range) was very unsuccessful, suggesting that information distributed throughout the numerically small weights was being destroyed. Finally, the weights were allowed to "clip" (i.e. any weight outside the dynamic range was set to the maximum allowed value).
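A minimal sketch of the kind of simulation just described is given below: delta-rule learning of 32 random patterns on a 64-node net with integer weights clipped to the 8-bit range, followed by a noisy-recall probe with hard-threshold updates. The learning rate, epoch count and synchronous update scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 64, 32
patterns = rng.choice([-1, 1], size=(n_patterns, n))     # 32 random +-1 patterns
T = np.zeros((n, n), dtype=int)

for _ in range(200):                       # repeated presentation of the training set
    for p in patterns:
        recall = np.where(T @ p >= 0, 1, -1)              # hard-threshold recall
        T += np.outer(p - recall, p)                      # delta-rule weight change
        np.fill_diagonal(T, 0)
        T = np.clip(T, -128, 127)          # "clipping": saturate at the 8-bit limits

# probe: corrupt a stored pattern with 25% noise and attempt recall
p = patterns[0].copy()
flip = rng.choice(n, size=n // 4, replace=False)
p[flip] *= -1
for _ in range(10):                        # synchronous threshold updates
    p = np.where(T @ p >= 0, 1, -1)
print(int(np.sum(p == patterns[0])), "of", n, "bits recovered")
```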
This method proved very successful, as the learning algorithm adjusted the weights over which it still had control to compensate for the saturation effect. It is interesting to note that other experiments have indicated that Hopfield nets can "forget" in a different way, under different learning control, giving preference to recently acquired memories [16]. The results from the saturation experiments were: 1) For the 32-pattern / 64-node problem, integer weights with a dynamic range greater than ±30 were necessary to give enough storage capability. 2) For weights with maximum values $T_{ij}$ = 50-70, "clipping" occurs, but network performance is not seriously degraded over that with an unrestricted weight set.

Figure 6. Recall of patterns learned with the 5-state activation function and subsequently restored using the 5-state and the hard-threshold activation functions. T is the "temperature", or smoothness of the activation function, and "limit" the maximum value of $T_{ij}$. (Left panel: 5-state activation function recall; right panel: "Hopfield" threshold activation function recall; curves for T = 0, 10, 20 and 30; horizontal axis: limit from 20 to 70.)

These results showed that the 5-state model was worthy of implementation as a VLSI neural board, and suggested that 8-bit weights were sufficient.

5. PROJECTED SPECIFICATION OF A HARDWARE NEURAL BOARD

The specification of a 64-neuron board is given here, using a 5-state bit-serial 64 x 64 synapse array with a derated clock speed of 20 MHz. The synaptic weights are 8-bit words and the word length of the running summation $x_i$ is 16 bits to allow for growth. A 64-synapse column has a computational latency of 80 clock cycles or bits, giving an update time of 4 μs for the network. The time to load the weights into the array is limited to 60 μs by the supporting RAM, with an access time of 120 ns. These load and update times mean that the network is executing 1 x 10^9 operations/second, where one operation is $\pm T_{ij} V_j$. This is much faster than a natural neural network, and much faster than is necessary in a hardware accelerator. We have therefore developed a "paging" architecture that effectively "trades off" some of this excessive speed against increased network size.

A "moving-patch" neural board: An array of the 5-state synapses is currently being fabricated as a VLSI integrated circuit. The shift registers and the adder/subtractor for each synapse occupy a disappointingly large silicon area, allowing only a 3 x 9 synaptic array. To achieve a suitable size of neural network from this array, several chips need to be included on a board with memory and control circuitry. The "moving patch" concept is shown in figure 7, where a small array of synapses is passed over a much larger n x n synaptic array. Each time the array is "moved" to represent another set of synapses, new weights must be loaded into it. For example, the first set of weights will be $T_{11} \ldots T_{1j}$, $T_{21} \ldots T_{2j}$, up to $T_{jj}$; the second set starts at $T_{j+1,1}$, and so on. The final weight to be loaded will be $T_{nn}$.

Figure 7. The "moving patch" concept, passing a small synaptic "patch" over a larger n x n synapse array.
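The sketch below illustrates the "moving patch" idea in software: a small block of the full weight matrix is "loaded" at a time and partial column sums are accumulated until every patch position has been visited. The 6 x 18 patch shape matches the board described next; the visiting order and data types are illustrative choices.

```python
import numpy as np

def paged_update(T, V, patch_rows=6, patch_cols=18):
    """Compute x = T @ V by sweeping a small synaptic 'patch' over the n x n array,
    accumulating partial column sums as each block of weights is 'loaded'."""
    n = T.shape[0]
    x = np.zeros(n)                                    # partial running summations
    for r0 in range(0, n, patch_rows):                 # patch moves over the array
        for c0 in range(0, n, patch_cols):
            patch = T[r0:r0 + patch_rows, c0:c0 + patch_cols]   # weights loaded from RAM
            x[r0:r0 + patch_rows] += patch @ V[c0:c0 + patch_cols]
    return x

rng = np.random.default_rng(2)
n = 256
T = rng.integers(-128, 128, size=(n, n))
V = rng.choice([-1.0, -0.5, 0.0, 0.5, 1.0], size=n)
assert np.allclose(paged_update(T, V), T @ V)          # same result as the full product
```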
Static, off-the-shelf RAM is used to store the weights and the whole operation is pipelined for maximum efficiency. Figure 8 shows the board-level design for the network.

Figure 8. A "moving patch" neural network board (synaptic accelerator chips, weight RAM and control, attached to a host).

The small "patch" that moves around the array to give n neurons comprises 4 VLSI synaptic accelerator chips, giving a 6 x 18 synaptic array. The number of neurons to be simulated is 256 and the weights for these are stored in 0.5 Mb of RAM with a load time of 8 ms. For each "patch" movement, the partial running summation calculated for each column is stored in a separate RAM until it is required to be added into the next appropriate summation. The update time for the board is 3 ms, giving 2 x 10^7 operations/second. This is slower than the 64-neuron specification, but the network is 16 times larger, as the arithmetic elements are being used more efficiently. To achieve a network of greater than 256 neurons, more RAM is required to store the weights. The network is then slower unless a larger number of accelerator chips is used to give a larger moving "patch".

6. CONCLUSIONS

A strategy and design method has been given for the construction of bit-serial VLSI neural network chips and circuit boards. Bit-serial arithmetic, coupled to a reduced-arithmetic style, enhances the level of integration possible beyond more conventional digital, bit-parallel schemes. The restrictions imposed on both synaptic weight size and arithmetic precision by VLSI constraints have been examined and shown to be tolerable, using the associative memory problem as a test. While we believe our digital approach to represent a good compromise between arithmetic accuracy and circuit complexity, we acknowledge that the level of integration is disappointingly low. It is our belief that, while digital approaches may be interesting and useful in the medium term, essentially as hardware accelerators for neural simulations, analog techniques represent the best ultimate option in 2-dimensional silicon. To this end, we are currently pursuing techniques for analog pseudo-static memory, using standard CMOS technology. In any event, the full development of a non-volatile analog memory technology, such as the MNOS technique [7], is key to the long-term future of VLSI neural nets that can learn.

7. ACKNOWLEDGEMENTS

The authors acknowledge the support of the Science and Engineering Research Council (UK) in the execution of this work.

References

1. S. Grossberg, "Some Physiological and Biochemical Consequences of Psychological Postulates," Proc. Natl. Acad. Sci. USA, vol. 60, pp. 758-765, 1968.
2. H. P. Graf, L. D. Jackel, R. E. Howard, B. Straughn, J. S. Denker, W. Hubbard, D. M. Tennant, and D. Schwartz, "VLSI Implementation of a Neural Network Memory with Several Hundreds of Neurons," Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp. 182-187, 1986.
3. W. S. Mackie, H. P. Graf, and J. S. Denker, "Microelectronic Implementation of Connectionist Neural Network Models," IEEE Conference on Neural Information Processing Systems, Denver, 1987.
4. J. J. Hopfield and D. W. Tank, "'Neural' Computation of Decisions in Optimisation Problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.
5. M. A. Sivilotti, M. A. Mahowald, and C. A. Mead, "Real-Time Visual Computations Using Analog CMOS Processing Arrays," 1987. To be published.
6. C. A. Mead, "Networks for Real-Time Sensory Processing," IEEE Conference on Neural Information Processing Systems, Denver, 1987.
7. J. P. Sage, K. Thompson, and R. S. Withers, "An Artificial Neural Network Integrated Circuit Based on MNOS/CCD Principles," Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp. 381-385, 1986.
8. S. C. J. Garth, "A Chipset for High Speed Simulation of Neural Network Systems," IEEE Conference on Neural Networks, San Diego, 1987.
9. A. F. Murray and A. V. W. Smith, "A Novel Computational and Signalling Method for VLSI Neural Networks," European Solid State Circuits Conference, 1987.
10. A. F. Murray and A. V. W. Smith, "Asynchronous Arithmetic for VLSI Neural Systems," Electronics Letters, vol. 23, no. 12, p. 642, June 1987.
11. A. F. Murray and A. V. W. Smith, "Asynchronous VLSI Neural Networks Using Pulse Stream Arithmetic," IEEE Journal of Solid-State Circuits, 1988. To be published.
12. M. E. Gaspar, "Pulsed Neural Networks: Hardware, Software and the Hopfield A/D Converter Example," IEEE Conference on Neural Information Processing Systems, Denver, 1987.
13. M. S. McGregor, P. B. Denyer, and A. F. Murray, "A Single-Phase Clocking Scheme for CMOS VLSI," Advanced Research in VLSI: Proceedings of the 1987 Stanford Conference, 1987.
14. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning Internal Representations by Error Propagation," Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, pp. 318-362, 1986.
15. M. Fleisher and E. Levin, "The Hopfield Model with Multi-Level Neurons," IEEE Conference on Neural Information Processing Systems, Denver, 1987.
16. G. Parisi, "A Memory That Forgets," J. Phys. A: Math. Gen., vol. 19, pp. L617-L620, 1986.
OPTIMIZATION WITH ARTIFICIAL NEURAL NETWORK SYSTEMS: A MAPPING PRINCIPLE AND A COMPARISON TO GRADIENT BASED METHODS†

Harrison MonFook Leong
Research Institute for Advanced Computer Science
NASA Ames Research Center 230-5, Moffett Field, CA 94035

† Work supported by NASA Cooperative Agreement No. NCC 2-408.

ABSTRACT

General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient-based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.

INTRODUCTION

Solving optimization problems with systems of equations based on neurobiological principles has recently received a great deal of attention. Much of this interest began when an artificial neural network was devised to find near-optimal solutions to an NP-complete problem [13]. Since then, a number of problems have been mapped into the same artificial neural network and variations of it [10,13,14,17,18,19,21,23,24]. In this paper, a unifying principle underlying these mappings is derived for systems of first- to nth-order ordinary differential equations. This mapping principle bears similarity to the mathematical tools used to generate optimization methods based on the gradient. In view of this, it seemed important to compare the optimization efficiency of dynamical systems constructed by the neural network mapping principle with dynamical systems constructed from the gradient.

THE PRINCIPLE

This paper concerns itself with networks of computational units having a state variable v, a function f that describes how a unit is driven by inputs, a linear ordinary differential operator with constant coefficients D(v) that describes the dynamical response of each unit, and a function g that describes how the output of a computational unit is determined from its state v. In particular, the paper explores how outputs of the computational units evolve with time in terms of a scalar function E, a single state variable for the whole network. Fig. 1 summarizes the relationships between variables, functions, and operators associated with each computational unit. Eq. (1) summarizes the equations of motion for a network composed of such units:

$\vec{D}^{(M)}(\vec{v}) = \vec{f}(g_1(v_1), \ldots, g_N(v_N))$   (1)

where the i-th element of $\vec{D}^{(M)}$ is $D^{(M)}(v_i)$, the superscript (M) denotes that operator D is Mth order, the i-th element of $\vec{f}$ is $f_i(g_1(v_1), \ldots, g_N(v_N))$, and the network is comprised of N computational units. The network of Hopfield [12] has M = 1, functions $\vec{f}$ are weighted linear sums, and functions $\vec{g}$ (where the i-th element of $\vec{g}$ is $g_i(v_i)$) are all the same sigmoid function.
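As an illustration of Eq. (1), the following sketch integrates the first-order (M = 1) case with D = d/dt, a Hopfield-style weighted-linear-sum drive and a tanh output transform; the symmetric random interaction matrix and all parameter values are placeholders, not taken from the paper.

```python
import numpy as np

def g(v, G=1.0):
    return np.tanh(G * v)                  # a sigmoid output transform

def f(outputs, S):
    return S @ outputs                     # weighted linear sum of unit outputs

def simulate(S, v0, dt=0.01, steps=2000):
    """Forward-Euler integration of Eq. (1) for M = 1 with D = d/dt:
    dv_i/dt = f_i(g_1(v_1), ..., g_N(v_N))."""
    v = v0.copy()
    for _ in range(steps):
        v = v + dt * f(g(v), S)
    return v

rng = np.random.default_rng(0)
N = 8
S = rng.standard_normal((N, N)); S = (S + S.T) / 2   # symmetric interaction matrix
v = simulate(S, rng.standard_normal(N))
print(g(v))                                # unit outputs after settling
```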
We will examine two ways of defining functions $\vec{f}$ given a function F. Along with these definitions, corresponding functions E will be defined that will be used to describe the dynamics of Eq. (1).

The first method corresponds to optimization methods introduced by artificial neural network research. It will be referred to as method $\nabla_{\vec{g}}$ ("del g"):

$\vec{f} \equiv \nabla_{\vec{g}} F$   (2a)

with associated E function

$E_{\nabla_{\vec{g}}} = F(\vec{g}) - \int^{t} \sum_{i=1}^{N} \left[ D^{(M)}(v_i(s)) - \frac{dv_i(s)}{ds} \right] \frac{dg_i(s)}{ds} \, ds$   (2b)

Here, $\nabla_{\vec{x}} H$ denotes the gradient of H, where partials are taken with respect to the variables of $\vec{x}$, and $E_{\nabla}$ denotes the E function associated with gradient operator $\nabla$. With appropriate operator D and functions $\vec{f}$ and $\vec{g}$, $E_{\nabla_{\vec{g}}}$ is simply the "energy function" of Hopfield [12]. Note that Eq. (2a) makes explicit that we will only be concerned with $\vec{f}$ that can be derived from scalar potential functions. For example, this restriction excludes artificial neural networks that have connections between excitatory and inhibitory units such as that of Freeman [8].

The second method corresponds to optimization methods based on the gradient. It will be referred to as method $\nabla_{\vec{v}}$ ("del v"):

$\vec{f} \equiv \nabla_{\vec{v}} F$   (3a)

with associated E function

$E_{\nabla_{\vec{v}}} = F(\vec{g}) - \int^{t} \sum_{i=1}^{N} \left[ D^{(M)}(v_i(s)) - \frac{dv_i(s)}{ds} \right] \frac{dv_i(s)}{ds} \, ds$   (3b)

where the notation is analogous to that for Eqs. (2).

The critical result that allows us to map optimization problems into networks described by Eq. (1) is that conditions on the constituents of the equation can be chosen so that, along any solution trajectory, the E function corresponding to the system will be a monotonic function of time. For method $\nabla_{\vec{g}}$, here are the conditions: all functions g are 1) differentiable and 2) monotonic in the same sense. Only the first condition is needed to make a similar assertion for method $\nabla_{\vec{v}}$. When these conditions are met and when solutions of Eq. (1) exist, the dynamical systems can be used for optimization. The appendix contains proofs for the monotonicity of function E along solution trajectories and references necessary existence theorems.

Figure 1: Schematic of a computational unit i from which networks considered in this paper are constructed. Triangles suggest connections between computational units. (The unit comprises the function $f_i$ governing how inputs $g_1(v_1), g_2(v_2), \ldots$ are combined to drive it, the differential operator specifying its dynamical characteristics, and the transform $g_i$ that determines its output from state variable $v_i$.)

In conclusion, mapping optimization problems onto dynamical systems summarized by Eq. (1) can be reduced to a matter of differentiation if a scalar function representation of the problem can be found and the integrals of Eqs. (2b) and (3b) are ignorable. This last assumption is certainly upheld for the case where operator D has no derivatives of less than Mth order. In the simulations below, it will be observed to hold for the case M = 1 with a nonzero 0th-order derivative in D. (Also see Lapedes and Farber [19].)

PERSPECTIVES OF RECENT WORK

The formulations above can be used to classify the neural network optimization techniques used in several recent studies. In these studies, the functions $\vec{g}$ were all identical. For the most part, following Hopfield's formulation, researchers [10,13,14,17,23,24] have used method $\nabla_{\vec{g}}$ to derive forms of Eq. (1) that exhibit the ability to find extrema of $E_{\nabla_{\vec{g}}}$, with $E_{\nabla_{\vec{g}}}$ quadratic in functions $\vec{g}$ and all functions $\vec{g}$ describable by sigmoid functions such as tanh(x). However, several researchers have written about artificial neural networks associated with non-quadratic E functions. Method $\nabla_{\vec{g}}$ has been used to derive systems capable of finding extrema of non-quadratic $E_{\nabla_{\vec{g}}}$ [19].
Method $\nabla_{\vec{v}}$ has been used to derive systems capable of optimizing $E_{\nabla_{\vec{v}}}$ where $E_{\nabla_{\vec{v}}}$ was not necessarily quadratic in the variables $\vec{v}$ [21]. A sort of hybrid of the two methods was used by Jeffery and Rosner [18] to find extrema of functions that were not quadratic. The important distinction is that their functions $\vec{f}$ were derived from a given function F using Eq. (3a) where, in addition, a sign-definite diagonal matrix was introduced; the left side of Eq. (3a) was left-multiplied by this matrix. A perspective on the relationship between all three methods to construct dynamical systems for optimization is summarized by Eq. (4), which describes the relationship between methods $\nabla_{\vec{g}}$ and $\nabla_{\vec{v}}$:

$\nabla_{\vec{g}} = \mathrm{diag}\left[\frac{\partial g_i}{\partial v_i}\right]^{-1} \nabla_{\vec{v}}$   (4)

where diag[$x_i$] is a diagonal matrix with $x_i$ as the diagonal element of row i. (A similar equation has been derived for quadratic F [5].) The relationship between the method of Jeffery and Rosner and $\nabla_{\vec{v}}$ is simply Eq. (4) with the time-dependent diagonal matrix replaced by a constant diagonal matrix of free parameters. It is noted that Jeffery and Rosner presented timing results that compared simulated annealing, conjugate-gradient, and artificial neural network methods for optimization. Their results are not comparable to the results reported below since they used computation time as a performance measure, not settling times of analog systems. The perspective provided by Eq. (4) will be useful for anticipating the relative performance of methods $\nabla_{\vec{g}}$ and $\nabla_{\vec{v}}$ in the analytical example below and will aid in understanding the results of computer simulations.

COMPARISON OF METHODS $\nabla_{\vec{g}}$ AND $\nabla_{\vec{v}}$

When M = 1 and operator D has no 0th-order derivatives, method $\nabla_{\vec{v}}$ is the basis of gradient-search methods of optimization. Given the long history of such methods, it is important to know what possible benefits could be achieved by the relatively new optimization scheme, method $\nabla_{\vec{g}}$. In the following, the optimization efficiency of methods $\nabla_{\vec{g}}$ and $\nabla_{\vec{v}}$ is compared by comparing settling times, the time required for dynamical systems described by Eq. (1) to traverse a continuous path to local optima. To qualify this performance measure, this study anticipates application to the creation of analog devices that would instantiate Eq. (1); hence, we are not interested in estimating the number of discrete steps that would be required to find local optima, an appropriate performance measure if the point was to develop new numerical methods. An analytical example will serve to illustrate the possibility of improvements in settling time by using method $\nabla_{\vec{g}}$ instead of method $\nabla_{\vec{v}}$. Computer simulations will be reported for more complicated problems following this example.

For the analytical example, we will examine the case where all functions $\vec{g}$ are identical and

$g(v) = \tanh G(v - Th)$   (5)

where G > 0 is the gain and Th is the threshold. Transforms similar to this are widely used in artificial neural network research. Suppose we wish to use such computational units to search a multi-dimensional binary solution space. We note that

$\frac{dg}{dv} = G \operatorname{sech}^2 G(v - Th)$   (6)

is near 0 at valid solution states (corners of a hypercube for the case of binary solution spaces). We see from Eq. (4) that near a valid solution state, a network based on method $\nabla_{\vec{g}}$ will allow computational units to recede from incorrect states and approach correct states comparatively faster. Does this imply faster settling time for method $\nabla_{\vec{g}}$?
To obtain an analytical comparison of settling times, consider the case where M = 1, operator D has no 0th-order derivatives and

$F = \frac{1}{2} \sum_{i,j} S_{ij} (\tanh G v_i)(\tanh G v_j)$   (7)

where matrix S is symmetric. Method $\nabla_{\vec{g}}$ gives network equations

$\frac{d\vec{v}}{dt} = S \tanh G\vec{v}$   (8)

and method $\nabla_{\vec{v}}$ gives network equations

$\frac{d\vec{v}}{dt} = \mathrm{diag}\left[G \operatorname{sech}^2 G v_i\right] S \tanh G\vec{v}$   (9)

where $\tanh G\vec{v}$ denotes a vector with i-th component $\tanh G v_i$. For method $\nabla_{\vec{g}}$ there is one stable point, i.e. where $d\vec{v}/dt = 0$, at $\vec{v} = 0$. For method $\nabla_{\vec{v}}$ the stable points are $\vec{v} = 0$ and $\vec{v} \in V$, where V is the set of vectors with component values that are either $+\infty$ or $-\infty$.

Further trivialization allows for comparing estimates of settling times. Suppose S is diagonal. For this case, if $v_i = 0$ is on the trajectory of any computational unit i for one method, $v_i = 0$ is on the trajectory of that unit for the other method; hence, a comparison of settling times can be obtained by comparing time estimates for a computational unit to evolve from near 0 to near an extremum or, equivalently, the converse. Specifically, let the interval be $[\delta_0, 1-\delta]$ where $0 < \delta_0 < 1-\delta$ and $0 < \delta < 1$. For method $\nabla_{\vec{v}}$, integrating velocity over time gives the estimate

$T_{\nabla_{\vec{v}}} = \frac{1}{G}\left[\frac{1}{2}\left(\frac{1}{\delta(2-\delta)} - \frac{1}{1-\delta_0^2}\right) + \ln\frac{(1-\delta)\sqrt{1-\delta_0^2}}{\delta_0\sqrt{\delta(2-\delta)}}\right]$   (10)

and for method $\nabla_{\vec{g}}$ the estimate is

$T_{\nabla_{\vec{g}}} = \frac{1}{G}\ln\frac{(1-\delta)\sqrt{1-\delta_0^2}}{\delta_0\sqrt{\delta(2-\delta)}}$   (11)

From these estimates, method $\nabla_{\vec{v}}$ will always take longer to satisfy the criterion for convergence: note that only with the largest value for $\delta_0$, $\delta_0 = 1-\delta$, is the first term of Eq. (10) zero; for any smaller $\delta_0$, this term is positive. Unfortunately, this simple analysis cannot be generalized to non-diagonal S. With diagonal S, all computational units operate independently. Hence, the direction of $d\vec{v}/dt$ is irrelevant with respect to convergence rates; the convergence rate depends only on the diagonal element of S having the smallest magnitude. In this sense, the problem is one-dimensional. But for non-diagonal S, the problem would be, in general, multi-dimensional and, hence, the direction of $d\vec{v}/dt$ becomes relevant. To compare settling times for non-diagonal S, computer simulations were done. These are described below.
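For non-diagonal S the settling times can also be compared numerically. The sketch below forward-Euler integrates the two systems of Eqs. (8) and (9) for a small random symmetric S and reports the time at which every unit is a given number of transition widths past threshold; the gain, step size, stopping depth and the matrix itself are arbitrary choices made for the sketch, not the paper's simulation settings.

```python
import numpy as np

def settle(S, v0, G=1.0, grad_v=False, dt=1e-3, depth=3.0, max_steps=2_000_000):
    """Integrate dv/dt = S tanh(Gv) (Eq. (8), method del-g) or
    dv/dt = diag[G sech^2(Gv)] S tanh(Gv) (Eq. (9), method del-v) until every
    unit is at least `depth` transition widths (1/G) past the threshold."""
    v, t = v0.copy(), 0.0
    for _ in range(max_steps):
        drive = S @ np.tanh(G * v)
        if grad_v:
            drive = (G / np.cosh(G * v) ** 2) * drive   # extra dg/dv factor of Eq. (9)
        v += dt * drive
        t += dt
        if np.all(np.abs(v) > depth / G):
            return t
    return np.inf                                       # did not settle within the budget

rng = np.random.default_rng(3)
N = 8
S = rng.standard_normal((N, N)); S = (S + S.T) / 2
v0 = 0.01 * rng.standard_normal(N)
print("del-g settling time:", settle(S, v0))
print("del-v settling time:", settle(S, v0, grad_v=True))
```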
COMPUTER SIMULATIONS

Methods

The problem chosen for study was a much simplified version of a problem in medical imaging: given electromagnetic field measurements taken from the human scalp, identify the location and magnitude of cerebral activity giving rise to the fields. This problem has received much attention in the last 20 years [3,6,7]. The problem, sufficient for our purposes here, was reduced to the following: given a few samples of the electric potential field at the surface of a spherical conductor within which reside several static electric dipoles, identify the dipole locations and moments. For this situation, there is a closed-form solution for electric potential fields at the spherical surface, Eq. (12), where $\Phi$ is the electric potential at the spherical conductor surface, $\vec{x}_{sample}$ is the location of the sample point ($\vec{x}$ denotes a vector, $\hat{x}$ the corresponding unit vector, and x the corresponding vector magnitude), $\vec{p}_i$ is the dipole moment of dipole i, and $\vec{d}_i$ is the vector from dipole i to $\vec{x}_{sample}$. (This equation can be derived from one derived by Brody, Terry, and Ideker [4].) Fig. 2 facilitates picturing these relationships.

Figure 2: Vectors of Eq. (12).

With this analytical solution, the problem was formulated as a least-squares minimization problem where the variables were dipole moments. In short, the following process was used. A dipole model was chosen. This model was used with Eq. (12) to calculate potentials at points on a sphere which covered about 60% of the surface. A cluster of internal locations that encompassed the locations of the model was specified. The two optimization techniques were then required to determine dipole moment values at cluster locations such that the collection of dipoles at cluster locations accurately reflected the dipole distribution specified by the model. This was to be done given only the potential values at the sample points and an initial guess of dipole moments at cluster locations. The optimization systems were to accomplish the task by minimizing the sum of squared differences between potentials calculated using the dipole model and potentials calculated using a guess of dipole moments at cluster locations, where the sum is taken over all sample points. Further simplifications of the problem included 1) choosing the dipole model locations to correspond exactly to various locations of the cluster, 2) requiring dipole model moments to be 1, 0, or -1, and 3) representing dipole moments at cluster locations with two-bit binary numbers.

To describe the dynamical systems used, it suffices to specify operator D and functions $\vec{g}$ of Eq. (1) and the function F used in Eqs. (2a) and (3a). Operator D was

$D = \frac{d}{dt} + 1$   (13)

Eq. (5) with a multiplicative factor of 1/2 was used for all functions g. Hence, regarding simplification 3) above, each dipole moment coordinate at a cluster location was associated with two computational units. Considering simplification 2) above, dipole moment magnitude 1 would be represented by both computational units being in the high state; for -1, both in the low state; and for 0, one in the high state and one in the low state. Regarding function F,

$F = \sum_{\text{all sample points } s} \left[ \Phi_{measured}(\vec{x}_s) - \Phi_{cluster}(\vec{x}_s) \right]^2 - c \sum_{\text{all computational units } j} g(v_j)^2$   (14)

where $\Phi_{measured}$ is calculated from the dipole model and Eq. (12) (the subscript measured is used because the role of the dipole model is to simulate electric potentials that would be measured in a real-world situation; in real-world situations we do not know the source distribution underlying $\Phi_{measured}$), c is an experimentally determined constant (0.002 was used), and $\Phi_{cluster}$ is Eq. (12) where the sum of Eq. (12) is taken over all cluster locations and the k-th coordinate of the i-th cluster-location dipole moment is

$p_{ik} = \sum_{\text{all bits } b} g(v_{ikb})$   (15)

Index j of Eq. (14) corresponds to one combination of indices ikb. Sample points, 100 of them, were scattered semi-uniformly over the spherical surface emphasized by horizontal shading in Fig. 3. Cluster locations, 11, and model dipoles, 5, were scattered within the subset of the sphere emphasized by vertical shading. For the dipole model used, 10 dipole moment components were non-zero; hence, the optimization techniques needed to hold 56 dipole moment components at zero and set 10 components to correct non-zero values in order to correctly identify the dipole model underlying $\Phi_{measured}$.

The dynamical systems corresponding to methods $\nabla_{\vec{g}}$ and $\nabla_{\vec{v}}$ were integrated using the forward Euler method (e.g. Press, Flannery, Teukolsky, and Vetterling [22]). The numerical methods were observed to be convergent experimentally: settling time and path length were observed to asymptotically approach stable values as the step size of the numerical integrator was decreased over two orders of magnitude.
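The following sketch shows the shape of the objective of Eqs. (14) and (15) in code. Because Eq. (12) is not legible in this copy, the potential of a dipole in an infinite homogeneous medium is used as a stand-in forward kernel, and all geometry and "measurement" values are made up; only the least-squares structure and the two-units-per-coordinate moment encoding follow the text.

```python
import numpy as np

def dipole_potential(x, loc, p):
    """Potential at surface point x from a dipole with moment p at loc.
    Stand-in kernel (infinite homogeneous medium, unit conductivity); the paper's
    Eq. (12) for a bounded spherical conductor is not reproduced here."""
    d = x - loc
    return p @ d / (4 * np.pi * np.linalg.norm(d) ** 3)

def F(v, samples, phi_measured, cluster_locs, c=0.002, G=1.0):
    """Least-squares objective of Eq. (14); g is Eq. (5) scaled by 1/2, and each
    moment coordinate is built from 2 units as in Eq. (15): p_ik = sum_b g(v_ikb)."""
    g = 0.5 * np.tanh(G * v)
    moments = g.reshape(len(cluster_locs), 3, 2).sum(axis=2)
    phi_cluster = np.array([sum(dipole_potential(x, loc, p)
                                for loc, p in zip(cluster_locs, moments))
                            for x in samples])
    return np.sum((phi_measured - phi_cluster) ** 2) - c * np.sum(g ** 2)

# toy sizes: 11 cluster locations, 100 sample points on a unit sphere
rng = np.random.default_rng(4)
cluster_locs = 0.5 * rng.standard_normal((11, 3))
samples = rng.standard_normal((100, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
phi_measured = rng.standard_normal(100)          # placeholder "measurements"
print(F(rng.standard_normal(11 * 3 * 2), samples, phi_measured, cluster_locs))
```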
Figure 3: Illustration of the distribution of sample points on the surface of the spherical conductor (horizontal shading) and the distribution of model dipole locations and cluster locations within the conductor (vertical shading); relative radii are indicated.

Settling times, path lengths, and relative directions of travel were calculated for the two optimization methods using several different initial bit patterns at the cluster locations. In other words, the search was started at different corners of the hypercube comprising the space of acceptable solutions. One corner of the hypercube was chosen to be the target solution. (Note that a zero dipole moment has a degenerate two-bit representation in the dynamical systems explored; the target corner was arbitrarily chosen to be one of the degenerate solutions.) Note from Eq. (5) that for the network to reach a hypercube corner, all elements of $\vec{v}$ would have to be singular. For this reason, settling time and other measures were studied as a function of the proximity of the computational units to their extremum states. Computations were done on a Sequent Balance.

Results

Graph 1 shows results for exploring settling time as a function of extremum depth, the minimum of the deviations of variables v from the threshold of functions g. Extremum depth is reported in multiples of the width of functions g. The term transition, used in the caption of Graph 1 and below, refers to the movement of a computational unit from one extremum state to the other. The calculations were done for two initial states, one where the output of 1 computational unit was set to zero and one where the outputs of 13 computational units were set to zero; hence, 1 and 13 half transitions, respectively, were required to reach the target hypercube corner. It can be observed that settling time increases faster for method $\nabla_{\vec{v}}$ than for method $\nabla_{\vec{g}}$, just as we would expect from considering Eqs. (4) and (5). However, it can be observed that method $\nabla_{\vec{v}}$ is still an order of magnitude faster even when the extremum depth is 3 widths of functions g. For the purpose of unambiguously identifying what hypercube corner the dynamical system settles to, this extremum depth is more than adequate.

Graph 1: Settling time as a function of extremum depth. #: method $\nabla_{\vec{g}}$, 1 half transition required. •: method $\nabla_{\vec{g}}$, 13 half transitions required. +: method $\nabla_{\vec{v}}$, 1 half transition required. -: method $\nabla_{\vec{v}}$, 13 half transitions required.

Table 1 displays results for various initial conditions. Angles are reported in degrees. These measures refer to the angle between directions of travel in v-space as specified by the two optimization methods. The average angle reported is taken over all trajectory points visited by the numerical integrator. Initial angle is the angle at the beginning of the path. Parasite cost percentage is a measure that compares parasite cost, the integral in Eqs. (2b) and (3b), to the range of function F over the path:
$\text{parasite cost \%} = 100 \times \frac{\text{parasite cost}}{|F_{final} - F_{initial}|}$   (16)

Table 1: Settling time and other measurements for various required transitions. For each transition case, the upper row is for $\nabla_{\vec{g}}$ and the lower row is for $\nabla_{\vec{v}}$. Std dev denotes standard deviation. See text for definitions of measurement terms and units.

transitions required | time | relative time | path length | initial angle | mean angle (std dev) | extremum depth | parasite cost %
1  | 0.16   | 100 | 6.1 | 68 | 76 (3.8) | 2.3 | 0.22
   | 0.0016 |     | 1.9 |    | 76 (3.5) | 2.3 | 0.039
2  | 0.14   | 78  | 4.7 | 75 | 72 (4.3) | 2.5 | 0.055
   | 0.0018 |     | 1.9 |    | 73 (4.1) | 2.5 | 0.016
3  | 0.15   | 71  | 4.7 | 74 | 71 (3.7) | 2.3 | 0.051
   | 0.0021 |     | 2.1 |    | 72 (3.0) | 2.5 | 0.0093
7  | 0.19   | 59  | 4.6 | 63 | 69 (4.1) | 2.4 | 0.058
   | 0.0032 |     | 2.4 |    | 71 (7.0) | 2.7 | 0.0033
10 | 0.17   | 49  | 3.8 | 60 | 63 (2.8) | 2.5 | 0.030
   | 0.0035 |     | 2.5 |    | 64 (4.7) | 2.8 | 0.00060
13 | 0.80   | 110 | 9.2 | 39 | 77 (11)  | 2.3 | 0.076
   | 0.0074 |     | 3.2 |    | 71 (8.9) | 2.7 | 0.0028

Noting the differences in path length and angles reported, it is clear that the path taken to the target hypercube corner was quite different for the two methods. Method $\nabla_{\vec{v}}$ settles from 1 to 2 orders of magnitude faster than method $\nabla_{\vec{g}}$ and usually takes a path less than half as long. These relationships did not change significantly for different values of c of Eq. (14) and of the coefficients of Eq. (13) (both unity in Eq. (13)). The values used favoured method $\nabla_{\vec{g}}$. Parasite cost is consistently less significant for method $\nabla_{\vec{v}}$ and is quite small for both methods.

To further compare the ability of the optimization methods to solve the brain imaging problem, a large variety of initial hypercube corners were tested. Table 2 displays results that suggest the ability of each method to locate the target corner or to converge to a solution that was consistent with the dipole model. Initial corners were chosen by randomly selecting a number of computational units and setting them to extremum states opposite to those required by the target solution. Five cases were run for each case of required transitions. It can be observed that the system based on method $\nabla_{\vec{v}}$ is better at finding the target corner and is much better at finding a solution that is consistent with the dipole model.

Table 2: Solutions found starting from various initial conditions, five cases for each transition case. "Different dipole solution" indicates that the system assigned non-zero dipole moments at cluster locations that did not correspond to locations of the dipole model sources. "Different corner" indicates the solution was consistent with the dipole model but was not the target hypercube corner. "Target corner" indicates that the solution was the target solution.

transitions required | ∇g: different dipole solution | different corner | target corner | ∇v: different dipole solution | different corner | target corner
3  | 1 | 0 | 4 | 0 | 0 | 5
4  | 1 | 1 | 3 | 0 | 1 | 4
5  | 0 | 1 | 4 | 0 | 1 | 4
6  | 2 | 1 | 2 | 0 | 1 | 4
7  | 4 | 0 | 1 | 0 | 1 | 4
13 | 5 | 0 | 0 | 1 | 3 | 1
20 | 5 | 0 | 0 | 0 | 5 | 0
26 | 5 | 0 | 0 | 2 | 3 | 0
33 | 5 | 0 | 0 | 3 | 2 | 0
40 | 5 | 0 | 0 | 3 | 2 | 0
46 | 5 | 0 | 0 | 2 | 3 | 0
53 | 5 | 0 | 0 | 4 | 1 | 0

DISCUSSION

The simulation results seem to contradict the settling time predictions of the second analytical example. It is intuitively clear that there is no contradiction when considering the analytical example as a one-dimensional search and the simulations as multi-dimensional searches. Consider Fig. 4, which illustrates a one-dimensional search starting at point I. Since both optimization methods must decrease function E monotonically, both must head along the same path to the minimum point A. Now consider Fig. 5, which illustrates a two-dimensional search starting at point I: here, the two methods needn't follow the same paths.
Target corner indicates that the solution was the target solution. v 481 monotonically decreasing E while traversing a more circuitous route to minimum B or traversing a path to minimum A. The longer path lengths reported in Table 1 for method V ~ suggest the occurrence of the fonner. The data of Table 2 verifies the occurrence of the latter: Note that for many cases where the system based on method V v settled to the . Figure 4: One dimensional search target comer, the system based on method V ~ settled to some other minimum. for minima. I E Would we observe similar differences in optimization efficiency for other optimization problems that also have binary solution spaces? A view that supports the plausibility of the affirmative is the following: Consider Eq. (4) and Eq. (5). We have already made the observation that method V v would slow convergence into extrema of functions g. We have observed this experimentally via Graph 1. These observations suggest that computational units of V v systems tend to stay closer to the transition regions of functions g compared to computational units of V'I systems. It seems plausible that this property may allow V v systems to avoid advancing too deeply toward ineffective solutions and, hence, allow the systems to approach effective solutions more Figure 5: Two dimensional search efficiently. 1bis behavior might also be the explanation for for minima. the comparative success of method V v revealed in Table 2. Regarding the construction of electronic circuitry to instantiate Eq. (l), systems based on method V v would require the introduction of a component implementing multiplication by the derivative of functions g. This additional complexity may binder the use of method V v for the 482 (a) Output (b) Input construction of analog circuits for optimization. To illustrate the extent of this additional complexity, Fig. 6a shows a schematized circuit for a computational unit of method V -r and Fig. 6b shows a schematized circuit for a computational unit of method V T The simulations reported above suggest that there may be problems for which improvements in settling time may offset complications that might come with added circuit complexity. On the problem of imaging cerebral activity, the results above suggest the possibility of constructing analog devices to do the job. Consider the problem of analyzing electric potentials from the scalp of one perOutput son: It is noted that the measured electric potentials, Figure 6: Schematized circuits for a com~_as"rcd' appear as linear coefficients in F of Eq. (14); hence, they would appear as constant terms in 1 putational unit Notation is consistem with Horowitz and Hill IS. Shading of of Eq. (1). Thus. cf)_asllrcd would be implemented as amplifiers is to e3IIllark components amplifier biases in the circuits of Figs. 6. This is a referred to in the text. a) Computational significant benefit. To understand this. note that funcunit for method V r b) Computational tion Ij of Fig. 1 corresponding to the optimization of . ti thod V function F of Eq. (14) would involve a weighted umt or me ... linear sum of inputs g 1 (v 1 ), ••• , gN (VN). The weights would be the nonlinear coefficients of Eq. (14) and correspond to the strengths of the connections shown in Fig. 1. These connection strengths need only be calculated once for the person ar!d Car! then be set in hardware using, for example, a resistor network. 
Electric potential measurements could then be analyzed by simply using the measurements to bias the inputs to the shaded amplifiers of Figs. 6. For initialization, the system can be started with all dipole moments at zero (the 10-transition case in Table 1). This is a reasonable first guess if it is assumed that cluster locations are far denser than the loci of cerebral activity to be observed. For subsequent measurements, the solution for the immediately preceding measurements would be a reasonable initial state if it is assumed that the cerebral activity of interest waxes and wanes continuously.

Might non-invasive real-time imaging of cerebral activity be possible using such optimization devices? The results of this study are far from adequate for answering this question. Many complexities that have been avoided may nullify the practicality of the idea. Among these problems are: 1) The experiment avoided the possibility of dipole sources actually occurring at locations other than cluster locations. The minimization of function F of Eq. (14) may circumvent this problem by employing the superposition of dipole moments at neighboring cluster locations to give a sufficient model in the mean. 2) The experiment assumed a very restricted range of dipole strengths. This might be dealt with by increasing the number of bits used to represent dipole moments. 3) The conductor model, a homogeneously conducting sphere, may not be sufficient to model the human head [16]. Non-sphericity and major inhomogeneities in conductivity can be dealt with, to a certain extent, by replacing Eq. (12) with a generalized equation based on a numerical approximation of a boundary integral equation [20]. 4) The cerebral activity of interest may not be observable at the scalp. 5) Not all forms of cerebral activity give rise to dipolar sources. (For example, this is well known in olfactory cortex [8].) 6) Activity of interest may be overwhelmed by irrelevant activity. Many methods have been devised to contend with this problem (for example, Gevins and Morgan [9]). Clearly, much theoretical work is left to be done.

CONCLUDING REMARKS

In this study, the mapping principle underlying the application of artificial neural networks to the optimization of multi-dimensional scalar functions has been stated explicitly. Hopfield [12] has shown that for some scalar functions, i.e. functions F quadratic in functions $\vec{g}$, this mapping can lead to dynamical systems that can be easily implemented in hardware, notably hardware that requires electronic components common to semiconductor technology. Here, mapping principles that have been known for a considerably longer period of time, those underlying gradient-based optimization, have been shown capable of leading to dynamical systems that can also be implemented using semiconductor hardware. A problem in medical imaging which requires the search of a multi-dimensional surface full of local extrema has suggested the superiority of the latter mapping principle with respect to the settling time of the corresponding dynamical system. This advantage may be quite significant when searching for global extrema using techniques such as iterated descent [2] or iterated genetic hill climbing [1], where many searches for local extrema are required. This advantage is further emphasized by the brain imaging problem: volumes of measurements can be analyzed without reconfiguring the interconnections between computational units; hence, the cost of developing problem-specific hardware for finding local extrema may be justifiable.
Finally, the simulations have contributed plausibility to a possible scheme for non-invasively imaging cerebral activity.

APPENDIX

To show that for a dynamical system based on method $\nabla_{\vec{g}}$, $E_{\nabla_{\vec{g}}}$ is a monotonic function of time given that all functions g are differentiable and monotonic in the same sense, we need to show that the derivative of $E_{\nabla_{\vec{g}}}$ with respect to time is semi-definite:

$\frac{dE_{\nabla_{\vec{g}}}}{dt} = \sum_{i=1}^{N} \frac{\partial F}{\partial g_i}\frac{dg_i}{dt} - \sum_{i=1}^{N} \left[ D^{(M)}(v_i) - \frac{dv_i}{dt} \right] \frac{dg_i}{dt}$   (A1a)

Substituting Eq. (2a),

$\frac{dE_{\nabla_{\vec{g}}}}{dt} = \sum_{i=1}^{N} \left[ f_i - D^{(M)}(v_i) + \frac{dv_i}{dt} \right] \frac{dg_i}{dt}$   (A1b)

Using Eq. (1),

$\frac{dE_{\nabla_{\vec{g}}}}{dt} = \sum_{i=1}^{N} \left[ \frac{dv_i}{dt} \right]^2 \frac{dg_i}{dv_i} \geq 0$   (A1c)

as needed. The appropriate inequality depends on the sense in which the functions g are monotonic. In a similar manner, the result can be obtained for method $\nabla_{\vec{v}}$. With the condition that functions g are differentiable, we can show that the derivative of $E_{\nabla_{\vec{v}}}$ is semi-definite:

$\frac{dE_{\nabla_{\vec{v}}}}{dt} = \sum_{i=1}^{N} \frac{\partial F}{\partial v_i}\frac{dv_i}{dt} - \sum_{i=1}^{N} \left[ D^{(M)}(v_i) - \frac{dv_i}{dt} \right] \frac{dv_i}{dt}$   (A2a)

Using Eqs. (3a) and (1),

$\frac{dE_{\nabla_{\vec{v}}}}{dt} = \sum_{i=1}^{N} \left[ \frac{dv_i}{dt} \right]^2 \geq 0$   (A2b)

as needed.

In order to use the results derived above to conclude that Eq. (1) can be used for optimization of functions $E_{\nabla_{\vec{v}}}$ and $E_{\nabla_{\vec{g}}}$ in the vicinity of some point $\vec{v}_0$, we need to show that there exists a neighborhood of $\vec{v}_0$ in which there exist solution trajectories to Eq. (1). The necessary existence theorems, and the transformations of Eq. (1) needed in order to apply the theorems, can be found in many texts on ordinary differential equations, e.g. Guckenheimer and Holmes [11]. Here, it is mainly important to state that the theorems require that the functions $\vec{f}$ are $C^{(1)}$, the functions g are differentiable, and initial conditions are specified for all derivatives of lower order than M.

ACKNOWLEDGEMENTS

I would like to thank Dr. Michael Raugh and Dr. Pentti Kanerva for constructive criticism and support. I would like to thank Bill Baird and Dr. James Keeler for reviewing this work. I would like to thank Dr. Derek Fender, Dr. John Hopfield, and Dr. Stanley Klein for giving me opportunities that fostered this conglomeration of ideas.

REFERENCES

[1] Ackley D.H., "Stochastic iterated genetic hill climbing", Ph.D. dissertation, Carnegie Mellon U., 1987.
[2] Baum E., Neural Networks for Computing, ed. Denker J.S. (AIP Conf. Proc. 151, ed. Lerner R.G.), p. 53-58, 1986.
[3] Brody D.A., IEEE Trans., vBME-32, n2, p. 106-110, 1968.
[4] Brody D.A., Terry F.H., Ideker R.E., IEEE Trans., vBME-20, p. 141-143, 1973.
[5] Cohen M.A., Grossberg S., IEEE Trans., vSMC-13, p. 815-826, 1983.
[6] Cuffin B.N., IEEE Trans., vBME-33, n9, p. 854-861, 1986.
[7] Darcey T.M., Ary J.P., Fender D.H., Prog. Brain Res., v54, p. 128-134, 1980.
[8] Freeman W.J., "Mass Action in the Nervous System", Academic Press, Inc., 1975.
[9] Gevins A.S., Morgan N.H., IEEE Trans., vBME-33, n12, p. 1054-1068, 1986.
[10] Goles E., Vichniac G.Y., Neural Networks for Computing, ed. Denker J.S. (AIP Conf. Proc. 151, ed. Lerner R.G.), p. 165-181, 1986.
[11] Guckenheimer J., Holmes P., "Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields", Springer Verlag, 1983.
[12] Hopfield J.J., Proc. Natl. Acad. Sci., v81, p. 3088-3092, 1984.
[13] Hopfield J.J., Tank D.W., Biol. Cybern., v52, p. 141-152, 1985.
[14] Hopfield J.J., Tank D.W., Science, v233, n4764, p. 625-633, 1986.
[15] Horowitz P., Hill W., "The Art of Electronics", Cambridge U. Press, 1983.
[16] Hosek R.S., Sances A., Jodat R.W., Larson S.J., IEEE Trans., vBME-25, n5, p. 405-413, 1978.
Lerner R.G.), p235-240, 1986. [18] Jeffery W., Rosner R., Astrophys. J., v310, p473-481, 1986. [19] Lapedes A., Farber R., Neural Networks for Computing, ed. Denker J.S. (AIP Confrnc. Proc. 151, ed. Lerner R.G.), p283-298, 1986. [20] Leong H.M.F., "Frequency dependence of electromagnetic fields: models appropriate for the brain", PhD. dissertation, California Institute of Technology, 1986. [21] Platt J.C., Hopfield J.J., Neural Networks for Computing, ed. Denker J.S. (AIP Confrnc. Proc. 151, ed. Lerner R.G.), p364-369, 1986. [22] Press W.H., Flannery B.P., Teukolsky S.A., Vetterling W.T., "Numerical Recipes", Cambridge U. Press, 1986. [23] Takeda M., Goodman J.W., Applied Optics, v25, n18, p3033-3046, 1986. [24] Tank D.W., Hopfield J.J., "Neural computation by concentrating information in time", preprint, 1987.
1987
10
2
OPTIMAL NEURAL SPIKE CLASSIFICATION Abstract Amir F. Atiya(*) and James M. Bower(**) (*) Dept. of Electrical Engineering (**) Division of Biology California Institute of Technology Ca 91125 Being able to record the electrical activities of a number of neurons simultaneously is likely to be important in the study of the functional organization of networks of real neurons. Using one extracellular microelectrode to record from several neurons is one approach to studying the response properties of sets of adjacent and therefore likely related neurons. However, to do this, it is necessary to correctly classify the signals generated by these different neurons. This paper considers this problem of classifying the signals in such an extracellular recording, based upon their shapes, and specifically considers the classification of signals in the case when spikes overlap temporally. Introduction How single neurons in a network of neurons interact when processing information is likely to be a fundamental question central to understanding how real neural networks compute. In the mammalian nervous system we know that spatially adjacent neurons are, in general, more likely to interact, as well as receive common inputs. Thus neurobiologists are interested in devising techniques that allow adjacent groups of neurons to be sampled simultaneously. Unfortunately, the small scale of real neural networks makes inserting one recording electrode per cell impractical. Therefore, one is forced to use single electrodes designed to sample neural signals evoked by several cells at once. While this approach provides the multi-neuron recordings being sought, it also presents a rather serious waveform classification problem because the actual temporal sequence of action potentials in each individual neuron must be deciphered. This paper describes a method for classifying the activities of several individual neurons recorded simultaneously using a single electrode. Description of the Problem 95 Over the last two decades considerable attention 1-8 has been devoted to the problem of classification of action potentials in multi-neuron recordings. These action potentials (also referred to as "spikes") are the extracellularly recorded signal produced by a single neuron when it is passing information to other neurons (Fig. 1). Fortunately, spikes recorded from the same cell are more or less similar in shape, while spikes coming from different neurons usually have somewhat different shapes, depending on the neuron type, electrode characteristics, the distance between the electrode and the neuron, and the intervening medium. Fig. 1 illustrates some representative variations in spike shapes. It is our objective to detect and classify different spikes based on their shapes. However, relying entirely on the shape of the spikes presents difficulties. For example spikes from different neurons can overlap temporally producing novel waveforms (see Fig. 2 for an example of an overlap). To deal with these overlaps, one has first to detect the occurrence of an overlap, and then estimate the constituent spikes. Unfortunately, only a few of the available spike separation algorithms consider these events, even though they are potentially very important in understanding neural networks. Those few tend to rely © American Institute of Physics 1988 96 on heuristic rules and subtractive methods to resolve overlap cases. 
No currently published method we are aware of attempts to use knowledge of the likelihood of overlap events for detecting them, which is at the basis of the method we will describe. Fig. 1 An example of a multi-neuron recording overlapping spikes Fig. 2 An example of a temporal overlap of action potentials General Approach The first step in classifying neural waveforms is obviously to identify the typical spike shapes occurring in a particular recording. To do this we have applied a learning algorithm on the beginning portion of the recording, which in an unsupervised fashion (i.e. without the intervention of a human operator) estimates the shapes. After the learning stage we have the classification stage, which is applied on the remaining portion of the recording. A new classification method is proposed, which gives minimum probability of error, even in case of the occurrence of overlapping spikes. Both the learning and the classification algorithms require a preprocessing step to detect the position of the spike candidate in the data record. Detection: For the first task of detection most researchers use a simple level detecting algorithm, that signals a spike when recorded voltage levels cross a certain voltage threshold. However, variations in recording position due to natural brain movements during recording (e.g. respiration) can cause changes in relative height of the positive to the negative peak. Thus, a level detector (using either a positive or a negative threshold) can miss some spikes. Alternatively, we have chosen to detect an event by sliding a window of fixed length until a time when the peak to peak value within the window exceeds a certain threshold. Learning: Learning is performed on the beginning portion of the sampled data using the Isodata clustering algorithm 9. The task is to estimate the number of neurons n whose spikes are represented in the waveform and learn the different shapes of the spikes of the various neurons. For that purpose we apply the clustering algorithm choosing only one feature 97 from the spike, the peak to peak value which we have found to be quite an effective feature. Note that using the peak to peak value in the learning stage does not necessitate using it for classification (one might need additional or different features, especially for tackling the case of spike overlap) . The Optimal Olassification Rule: Once we have identified the number of different events present, the classification stage is concerned with estimating the identities of the spikes in the recording, based on the typical spike shapes obtained in the learning stage. In our classification scheme we make use of the information given by the shape of the detected spike as well as the firing rates of the different neurons. Although the shape plays in general the most important role in the classification, the rates become a more significant factor when dealing with overlapping events. This is because in general overlap is considerably less frequent than single spikes. The shape information is given by a set of features extracted from the waveform. Let x be the feature vector of the detected spike (e.g. the samples of the spike waveform). Let N I , ... , Nn represent the different neurons. 
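Before completing the classification rule, the detection and learning stages described above can be made concrete. The sketch below is a simplified, hypothetical rendering: the sliding-window peak-to-peak detector follows the text, but a plain k-means loop stands in for the Isodata algorithm of reference 9, and the window length, threshold, and number of classes are assumptions rather than values taken from the paper.

```python
import numpy as np

def detect_events(signal, window_len, threshold):
    """Slide a fixed-length window over the sampled recording and flag an
    event whenever the peak-to-peak value inside the window exceeds the
    threshold (in place of a simple level detector)."""
    signal = np.asarray(signal, dtype=float)
    events, i = [], 0
    while i + window_len <= len(signal):
        w = signal[i:i + window_len]
        if w.max() - w.min() > threshold:
            events.append(i + int(np.argmax(w)))   # take the peak as the spike instant
            i += window_len                        # skip past this event
        else:
            i += 1
    return events

def learn_classes(peak_to_peak_values, n_classes, n_iter=50):
    """Cluster detected spikes on their peak-to-peak value; a plain k-means
    loop is used here in place of the Isodata algorithm."""
    x = np.asarray(peak_to_peak_values, dtype=float)
    centers = np.linspace(x.min(), x.max(), n_classes)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return centers, labels
```

Only the single peak-to-peak feature is used for learning here, as in the text; classification may still use richer features.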
The detection algorithm tells us only that at least one spike occurred in the narrow interval (t - TI,t + T2) (= say 1) where t is the instant of the peak of the detected spike, TI and T2 are constants chosen subjectively according to the smallest possible time separation between two consecutive spikes, identifiable as two separate (nonoverlapping) spikes. By definition, if more than one spike occurs in the interval I, then we have an overlap. As a matter of convention, the instant of the occurrence of a spike i. .. taken to be that of the spike peak. For simplicity, we will consider the case of two possibly overlapping spikes, though the method can be extended easily to more. The classification rule which results in minimum probability of error is the one which chooses the neuron (or pair of neurons in case of overlap) which has the maximum likelihood. We have therefore to compare the Pi'S and the P,/s, defined as ~ = P(Ni fired in Ilx, A), i = 1, ... ,n P,j = P(N, and Nj fired in Ilx, A), l,j=I, ... ,n, j<l where A represents the event that one or two spikes occurred in the interval I. In other words Pi the probability that what has been detected is a single spike from neuron i, whereas P,j is the probability that we have two overlapping spikes from neurons land j (note that spikes from the same neuron never overlap). Henceforth we will use I to denote probability density. For the purpose of abbreviation let Bi(t) mean "neuron Ni fired at t". The classification problem can be reduced to comparing the following likelihood functions: i = 1, ... ,n (la) " j = 1, ... , n, j < I (lb) (for a derivation refer to Appendix). Let Ii be the density of the inter-spike interval and Ti be the most recent firing instant of neuron Ni . IT we are given the fact that neuron Ni has been idle for at least a period of duration t Ti, we get A disadvantage of using (2) is that the available /i's and T&,S are only estimates, which depend on the previous classification results, Further, for reliable estimation of the densities Ii, one needs a large number of spikes and therefore a long learning period since we are estimating a 98 whole function. Therefore, we have not used this form, but instead have used the following two schemes. In the first one, we ignore the knowledge about the previous firing pattern except for the estimated firing rates >'1, ... , >'n of the different neurons Nl, ... , Nn respectively. Then the probability of a spike coming from neuron Ni in an interval of duration dt is simply >'idt. Hence In the second scheme we do not use any previous knowledge except for the total firing rate (of all neurons), say a. Then Although the second scheme does not use as much of the information about the firing pattern as the first scheme does, it has the advantage of obtaining and using a more reliable estimate of the firing rate, because in general the overall firing rate changes less with time than the individual rates and because the estimate of a does not depend on previous classification results. However, it is useful mostly when the firing rates of the different neurons do not vary much, otherwise the firt scheme is preferred. In real recording situations, sometimes one encounters voltage signals which are much different than any of the previously learned typical spike shapes or their pairwise overlaps. This can happen for example due to a falsely detected noise event, a spike from a class not encountered in the learning stage, or to the overlap of three or more spikes. 
To cope with these cases we use the reject option. This means that we refuse to classify the detected spike because of the unlikeliness of the assumed event A. The reject option is therefore employed whenever P(Alx) is smaller than a certain threshold. We know that P(Alx) = J(A,x)/[J(A,x) + J(AC ,x)] where AC is the complement of the event A. The density f(AC,x) can be approximated as uniform (over the possible values of x) because a large variety of cases are covered by the event AC. It follows that one can just compare J(A,x) to a threshold. Hence the decision strategy becomes finally: Reject if the sum of the likelihood functions is less than a threshold. Otherwise choose the neuron (or pair of neurons) corresponding to the largest likelihood functions. Note that the sum of the likelihood functions equals J(A,x) (refer to Appendix). Now, let us evaluate the integrals in (1). Overlapping spikes are assumed to add linearly. Since we intend to handle the overlap case, we have to use a set of features Xm which obeys the following. Given the features of two of the waveforms, then one can compute those of their overlap. A good such candidate is the set of the samples of the spike (or possibly also just part of the samples). The added noise, partly thermal noise from the electrode and partly due to firings from distant neurons, can usually be approximated as white Gaussian. Let the variance be a 2 • The integrals in the likelihood functions can be approximated as summations (note in fact that we have samples available, not a continuous waveform). Let yi represent the typical feature vector (template) associated with neuron Ni , with the mth component being y;". Then M J(xIB/(kI), Bj(kd) = (21r)~/2aM exp[ - 2~2 '~l (x m - y!n-k 1 y~_k2)2] 99 where Xm is the mth component of x, and M is the dimension of x. This leads to the following likelihood functions M~ M L~ = f(Bd k)) L exp[- 2~2 :L (xm - Y:n_kJ2] kl=-M 1 m=l M) Ml M LL· = f(B,(k))f(Bj(k)) L L exp[- 2~2 L (xm - y!n-k 1 y~_kl)2] kl=-Mlkl=-Ml m=l where k is the spike instant, and the interval from -Ml to M2 corresponds to the interval I defined at the beginning of the Section. Implementation The techniques we have just described were tested in the following way. For the first experiment we identified two spike classes in a recording from the rat cerebellum. A signal is created, composed of a number of spikes from the two classes at random instants, plus noise. To make the situation as realistic as possible, the added noise is taken from idle periods (i.e. non-spiking) of a real recording. The reason for using such an artificially generated signal is to be able to know the class identities of the spikes, in order to test our approach quantitatively. We implement the detection and classification techniques on the obtained signal, with various values of noise amplitude. In our case the ratio of the peak to peak values of the templates turns out to be 1.375. Also, the spike rate of one of the clases is twice that of the other class. Fig.3a shows the results with applying the first scheme (i.e. using Eq. 3). The overall percentage correct classification for all spikes (solid curve) and the percentage correct classification for overlapping spikes (dashed curve) are plotted versus the standard deviation of the noise (]" normalized with respect to the peak h of the large template. 
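For reference, the decision rule just derived (Gaussian template matching over candidate alignments, rate-weighted single-spike and pairwise-overlap likelihoods, and the reject option) can be written compactly. This is a rough sketch under stated simplifications: the normalization constant common to all likelihoods is dropped, circular shifts stand in for the template alignments y_{m-k}, and the rates, shift range, and reject threshold are placeholders.

```python
import numpy as np

def spike_likelihoods(x, templates, rates, sigma, shifts):
    """Single-spike and pairwise-overlap likelihood functions built from
    Gaussian template matching; overlapping spikes are assumed to add
    linearly and the noise to be white Gaussian with variance sigma**2."""
    x = np.asarray(x, dtype=float)
    def match(y):
        return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
    L = {}
    for i, t in enumerate(templates):                  # single spikes, weighted by rate_i
        L[(i,)] = rates[i] * sum(match(np.roll(t, k)) for k in shifts)
    for i in range(len(templates)):                    # overlaps, weighted by rate_i * rate_j
        for j in range(i):
            L[(j, i)] = rates[i] * rates[j] * sum(
                match(np.roll(templates[i], k1) + np.roll(templates[j], k2))
                for k1 in shifts for k2 in shifts)
    return L

def classify(L, reject_threshold):
    """Reject if the summed likelihood is too small; otherwise return the
    neuron (or pair of neurons) with the largest likelihood."""
    if sum(L.values()) < reject_threshold:
        return None
    return max(L, key=L.get)
```

Using the individual rates corresponds to the first scheme (Eq. 3); replacing every rate by the common value alpha/n gives the second scheme (Eq. 4).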
Notice that the overall classification accuracy is near 100% for (]" I h less than 0.15, which is actually the range of noise amplitudes we mostly encountered in our work with real recordings. Observe also the good results for classifying overlapping events. We have applied also the second scheme (i.e. using Eq. 4) and obtained similar results. We wish to mention that the thresholds for detection and for the reject option are set up so as to obtain no more than 3% falsely detected spikes. A similar experiment is performed with three waveforms (three classes), where two of the waveforms are the same as those used in the first experiment . The third is the average of the first two. All the three neurons have the same spike rate (i.e. ..\1 = ..\2 = ..\3)' Hence both classification schemes are equivalent in this case. Fig. 3b shows the overall as well as the sub-category of overlap classification results. One observes that the results are worse than those for the two-class case. This is because the spacings between the templates are in general smaller. Notice also that the accuracy in resolving overlapping events is now tangibly less than the overall accuracy. However, one can say that the results are acceptable in the range of (]" Ih less than 0.1. The following experiment is also performed using the same data. We would like to investigate the importance of the information given by the (overall) firing rate on the problem of classifying overlapping events. In our method the summation in the likelihood functions for single spikes is multiplied by Otln, while that for overlapping spikes is multiplied by (Otln)2 . Usually Otln is considerably less than one. Hence we have a factor which gives less weight for overlapping events. Now, consider the case of ignoring completely the information given by the firing rate and relying solely on sha.pe information. We assume that overlapping spikes from any two given classes represent "new" class of waveforms and that each of these overlap classes has the same rate as that of a single-spike cla.ss. In that case we can obtain expressions for the likelihood functions as consisting just the summations, i.e. free of the rate 100 1 •• -.. -; • -; 51.• C • d._ e ; ... It._ I . I. ....S 1 •• .. 51." C . ... . ; ... It._ ... " 1.111 I.ISZ '.l,' I. I. ... .. 1.1" ••••• /t. • I ••• ,1. a 1 • • .,-; ~ • 51.· c C ... • • ; ... It._ I. I. ..... 1.1" 1.11t 1.I!Il ••••• /t. c Fig. 3 a) Overall (solid curve) and overlap (dashed curve) classification accuracy for a two class case b b) Overall (solid curve) and overlap (dashed curve) classification accuracy for a three class case c)Percent of incorrect classification of single spikes as overlap solid curve: scheme utilzing the spike rate dashed curve: scheme not utilising the spike rate I .ISZ l .ltI factor Olin (refer to Appendix). An experiment is performed using that scheme (on the same three class data). One observes that the method classifies a number of single spikes wrongly as overlaps, much more than our original scheme does (see Fig. 3c), especially for the large noise case. On the other hand, the number of overlaps which are classified wrongly as single spikes is near zero for both schemes. Finally, in the last experiment the techniques are implemented on real recordings from the rat cerebellum. The recorded signal is band-pass-filtered in the frequency range 300 Hz - 10 KHz, then sampled with a rate of 20KHz. For classification, we take 20 samples per spike as features. Fig. 
4 shows the results ofthe proposed method, using the first scheme (Eq. 3). The number of neurons whose spikes are represented in the waveform is estimated to be four. The 101 detection threshold is set up so that spikes which are too small are disregarded, because they come from several neurons far away from the electrode and are hard to distinguish. Notice the overlap of classes 1 and 2, which was detected. We used the second scheme also on the same portion and it gave similar results as those of the first scheme (only one of the spikes is classified differently). Overall, the discrepancies between classifications done by the proposed method and an experienced human observer were found to be small. 3 2 3 3 3 2 1 4 1 1 3 1,2 3 3 2 2 3 4 Fig. 4 Classification results for a recording from the rat cerebellum Conclusion Many researchers have considered the problem of spike classification in multi-neuron recordings, but only few have tackled the case of spike overlap, which could occur frequently, particularly if the group of neurons under study is stimulated. In this work we propose a method for spike classification, which can also aid in detecting and classifying overlapping spikes. By taking into account the statistical properties of the discharges of the neurons sampled, this method minimizes the probability of classification error. The application of the method to artificial as well as real recordings confirm its effectiveness. Appendix Consider first P'i' We can write 102 We can also obtain R. = It+T2[t+T2 f(x,AIBI(td,Bj (t2))f(B (t ) B ·(t ))dt dt IJ f( A) I 1, J 2 1 2 · t-T1 t-Tl x, Now, consider the two events B1(td and Bj (t2 ). In the absense of any information about their dependence, we assume that they are independent. We get Within the interval I, both f(B/(tt)) and f(Bj (t2)) hardly vary because the duration of I is very small compared to a typical inter-spike interval. Therefore we get the following approximation: f(B/(td) ~ f(B,(t)) f(B j (t2)) ~ f(Bj(t)). The expression for P"j becomes f(B,(t))f(B ·(t)) [t+T2 [t- T2 P"j ~ () J f(xIB/(td, B j (t2))dt 1dt2 • f X , A t-Tl t-Tl Notice that the term A was omitted from the argument of the density inside the integral, because the occurrence of two spikes at tl and t2El implies the occurrence of A. A similar derivation for ~ results in The term f(x, A) is common to all the Pils and the Pi's. Hence one can simply compare the following likelihood functions: Aeknow ledgement Our thanks to Dr. Yaser Abu-Mostafa for his assistance with this work. This project was supported by the Caltech Program of Advanced Technology (sponsored by Aerojet,GM,GTE, and TRW), and the Joseph Drown Foundation. References II] M. Abeles and M. Goldstein, Proc. IEEE, 65, pp.762-773, 1977. 12] G. Dinning and A. Sanderson, IEEE Trans. Bio - M ed. Eng., BME-28, pp. 804-812, 1981. 13] E. D'Hollander and G. Orban, IEEE Trans . Bio-Med. Eng., BME-26, pp. 279-284, 1979. 14] D. Mishelevich, IEEE Trans. Bio-Med. Eng., BMFr17, pp. 147-150, 1970. Is] V. Prochazka and H. Kornhuber, Electroenceph. din. Neurophysiol., 32, pp. 91-93, 1973. 16] W. Roberts, Bioi. Gybernet., 35, pp. 73-80, 1979. 17] W. Roberts and D. Hartline, Brain Res., 94, pp. 141-149, 1975. 18] E. Schmidt, J. Neurosci. Methods, 12, pp. 95-111, 1984. 19] R. Duda and P. Hart, Pattern Classification and Scene Analysis, John Wiley, 1973.
1987
11
3
495 REFLEXIVE ASSOCIATIVE MEMORIES Hendrlcus G. Loos Laguna Research Laboratory, Fallbrook, CA 92028-9765 ABSTRACT In the synchronous discrete model, the average memory capacity of bidirectional associative memories (BAMs) is compared with that of Hopfield memories, by means of a calculat10n of the percentage of good recall for 100 random BAMs of dimension 64x64, for different numbers of stored vectors. The memory capac1ty Is found to be much smal1er than the Kosko upper bound, which Is the lesser of the two dimensions of the BAM. On the average, a 64x64 BAM has about 68 % of the capacity of the corresponding Hopfield memory with the same number of neurons. Orthonormal coding of the BAM Increases the effective storage capaCity by only 25 %. The memory capacity limitations are due to spurious stable states, which arise In BAMs In much the same way as in Hopfleld memories. Occurrence of spurious stable states can be avoided by replacing the thresholding in the backlayer of the BAM by another nonl1near process, here called "Dominant Label Selection" (DLS). The simplest DLS is the wlnner-take-all net, which gives a fault-sensitive memory. Fault tolerance can be improved by the use of an orthogonal or unitary transformation. An optical application of the latter is a Fourier transform, which is implemented simply by a lens. I NTRODUCT ION A reflexive associative memory, also called bidirectional associative memory, is a two-layer neural net with bidirectional connections between the layers. This architecture is implied by Dana Anderson's optical resonator 1, and by similar configurations2,3. Bart KoSk04 coined the name "Bidirectional Associative Memory" (BAM), and Investigated several basic propertles4- 6. We are here concerned with the memory capac1ty of the BAM, with the relation between BAMs and Hopfleld memories7, and with certain variations on the BAM. © American Institute of Physics 1988 496 BAM STRUCTURE We will use the discrete model In which the state of a layer of neurons Is described by a bipolar vector. The Dirac notationS will be used, In which I> and <I denote respectively column and row vectors. <al and la> are each other transposes, <alb> Is a scalar product, and la><bl is an outer product. As depicted in Fig. 1, the BAM has two layers of neurons, a front layer of N neurons w tth state vector If>, and a back layer back layer. P neurons back of P neurons with state vector state vector b stroke Ib>. The bidirectional connecsignal flow In two directions. 1 1 tlons between the layers allow frOnt1ay~r. 'N ~eurons forward The front stroke gives Ib>= state vector f stroke s(Blf», where B 15 the connecFig. 1. BAM structure tlon matrix, and s( ) Is a threshold function, operating at zero. The back stroke results 1n an u~graded front state <f'I=s( <biB), whIch also may be wr1tten as !r'>=s(B Ib> >. where the superscr1pt T denotes transpos1t10n. We consider the synchronous model. where all neurons of a layer are updated s1multaneously. but the front and back layers are UPdated at d1fferent t1mes. The BAM act10n 1s shown 1n F1g. 2. The forward stroke entalls takIng scalar products between a front state vector If> and the rows or B, and enter1ng the thresholded results as elements of the back state vector Ib>. In the back stroke we take threshold ing f & reflection lID NxP FIg. 2. BAM act 10n threshold ing & reflection b v ~ ~hreShOlding 4J feedback & NxN V Ftg. 3. 
Autoassoc1at1ve memory act10n scalar products of Ib> w1th column vectors of B, and enter the thresholded results as elements of an upgraded state vector 1('>. In contrast, the act10n of an autoassoc1at1ve memory 1s shown 1n F1gure 3. The BAM may also be described as an autoassoc1at1ve memory5 by 497 concatenating the front and back vectors tnto a s1ngle state vector Iv>=lf,b>,and by taking the (N+P)x(N+P) connection matrtx as shown in F1g. 4. This autoassoclat1ve memory has the same number of neurons as our f . b'----"" BAM, viz. N+P. The BAM operat1on where ----!' initially only the front state 1s specizero [IDT lID zero f thresholding & feedback f1ed may be obtained with the corresb ponding autoassoc1ative memory by initially spectfying Ib> as zero, and by Fig. 4. BAM as autoassoarranging the threshold1ng operat1on ctative memory such that s(O) does not alter the state vector component. For a Hopfteld memory 7 the connection matrix 1s M H=( I 1m> <mD -MI , m=l (1) where 1m>, m= 1 to M, are stored vectors, and I is the tdentity matr1x. Writing the N+P d1mens1onal vectors 1m> as concatenations Idm,cm>, (1) takes the form M H-( I (ldm><dml+lcm><cml+ldm><cml+lcm><dmD)-MI , (2) m=l w1th proper block plactng of submatr1ces understood. Writing M K= Llcm><dml , (3) M m=l M Hd=(Lldm><dmD-MI, Hc=( L'lcm><cml>-MI, (4) m=l m=l where the I are identities in appropriate subspaces, the Hopfield matrix H may be partitioned as shown in Fig. 5. K is just the BAM matrix given by Kosko5, and previously used by Kohonen9 for linear heteroassoclatjve memories. Comparison of Figs. 4 and 5 shows that in the synchronous discrete model the BAM with connection matrix (3) is equivalent to a Hopfield memory in which the diagonal blocks Hd and Hc have been 498 deleted. Since the Hopfleld memory is robust~ this "prun1ng" may not affect much the associative recall of stored vectors~ if M is small; however~ on the average~ pruning will not improve the memory capaclty. It follows that, on the average~ a discrete synchronous BAM with matrix (3) can at best have the capacity of a Hopfleld memory with the same number of neurons. We have performed computations of the average memory capacity for 64x64 BAMs and for corresponding 128x 128 Hopfleld memories. Monte Carlo calculations were done for 100 memories) each of which stores M random bipolar vectors. The straight recall of all these vectors was checked) al10wtng for 24 Iterations. For the BAMs) the iterations were started with a forward stroke in which one of the stored vectors Idm> was used as input. The percentage of good recall and its standard deviation were calculated. The results plotted in Fig. 6 show that the square BAM has about 68~ of the capacity of the corresponding Hopfleld memory. Although the total number of neurons is the same) the BAM only needs 1/4 of the number of connections of the Hopfield memory. The storage capacity found Is much smaller than the Kosko 6 upper bound) which Is min (N)P). JR[= Fig. 5. Partitioned Hopfield matrix 10 20 30 40 50 60 M. number of stored vectors Fig. 6. ~ of good recall versus M CODED BAM So far) we have considered both front and back states to be used for data. There is another use of the BAM in which only front states are used as data) and the back states are seen as providing a code) label, or pOinter for the front state. Such use was antiCipated in our expression (3) for the BAM matrix which stores data vectors Idm> and their labels or codes lem>. For a square BAM. 
such an arrangement cuts the Information contained in a single stored data vector jn half. However, the freedom of 499 choosing the labels fCm> may perhaps be put to good use. Part of the problem of spurious stable statesl which plagues BAMs as well as Hopf1eld memories as they are loaded up, is due to the lack of orthogonality of the stored vectors. In the coded BAM we have the opportunity to remove part of this problem by choosing the labels as orthonorma1. Such labels have been used previously by Kohonen9 1n linear heteroassociative memories. The question whether memory capacity can be Improved In this manner was explored by taking 64x64 BAt1s In which the labels are chosen as Hadamard vectors. The latter are bipolar vectors with Euclidean norm ,.fp, which form an orthonormal set. These vectors are rows of a PxP Hadamard matrix; for a discussion see Harwtt and Sloane 1 0. The storage capacity of such Hadamard-coded BAMs was calculated as function of the number M of stored vectors for 100 cases for each value of M, in the manner discussed before. The percentage of good recall and its standard deviation are shown 1n Fig. 6. It Is seen that the Hadamard coding gives about a factor 2.5 in M, compared to the ordinary 64x64 BAM. However, the coded BAM has only half the stored data vector dimension. Accounting for this factor 2 reduction of data vector dimension, the effective storage capacity advantage obtained by Hadamard coding comes to only 25 ~. HALF BAt1 WITH HADAMARD CODING For the coded BAM there is the option of deleting the threshold operation In the front layer. The resulting architecture may be called "half BAt1". In the half BAM, thresholding Is only done on the labels, and consequently, the data may be taken as analog vectors. Although such an arrangement diminishes the robustness of the memory somewhat, there are applications of interest. We have calculated the percentage of good recall for 1 00 cases, and found that giving up the data thresholding cuts the storage capacity of the Hadamard-coded BAt1 by about 60 %. SELECTIVE REFLEXIVE MEMORY The memory capacity limitations shown in Fig. 6 are due to the occurence of spurious states when the memories are loaded up. Consider a discrete BAM with stored data vectors 1m>, m= 1 to M, orthonormal labels Icm>, and the connection matrix 500 (5) For an input data vector Iv> which is closest to the stored data vector 11 >, one has 1n the forward stroke M Ib>=s(clc 1 >+ L amlcm» , (6) m=2 where c=< llv> • and am=<mlv> (7) M Although for m# 1 am<c, for some vector component the sum L amlcm> m=2 may accumulate to such a large value as to affect the thresholded result Ib>. The problem would be avoided jf the thresholding operation s( ) in the back layer of the BAM were to be replaced by another nonl1near operation which selects, from the I inear combination M clc 1 >+ L amlcm> m=2 (8) the dominant label Ic 1 >. The hypothetical device which performs this operation is here called the "Dominant Label Selector" (DLS) 11, and we call the resulting memory architecture "Selective Reflexive Memory" (SRM). With the back state selected as the dominant label Ic 1 >, the back stroke gives <f'I=s( <c ,IK)=s(P< 1 D=< 11, by the orthogonal ity of the labels Icm>. It follows 11 that the SRM g1ves perfect assoc1attve recall of the nearest stored data vector, for any number of vectors stored. Of course, the llnear independence of the P-dimensionallabel vectors Icm>, m= 1 to M, requires P>=M. 
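A small numerical sketch may help fix these ideas. The code below, written under the paper's bipolar-vector conventions, contrasts ordinary discrete BAM recall with the selective reflexive memory, where a winner-take-all over orthonormal labels stands in for the dominant label selector; the matrix and variable names are ours, and the labels are assumed to be supplied externally (e.g. rows of a Hadamard matrix).

```python
import numpy as np

sgn = lambda v: np.where(v >= 0, 1, -1)

def bam_matrix(D, C):
    """Connection matrix K = sum_m |c_m><d_m|; columns of D are stored data
    vectors, columns of C the corresponding labels."""
    return C @ D.T

def bam_recall(K, d, n_iter=24):
    """Ordinary discrete BAM: threshold at zero on both strokes."""
    f = np.asarray(d).copy()
    for _ in range(n_iter):
        b = sgn(K @ f)        # forward stroke
        f = sgn(K.T @ b)      # back stroke
    return f

def srm_recall(K, C, d):
    """Selective reflexive memory: a winner-take-all over the orthonormal
    labels replaces the back-layer thresholding, after which a single back
    stroke recovers the nearest stored data vector."""
    b = C[:, np.argmax(C.T @ (K @ d))]   # dominant label
    return sgn(K.T @ b)
```

With orthonormal labels the winner-take-all selects the label whose stored data vector has the largest overlap with the input, which is what makes the perfect-recall property stated above possible.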
The DLS must select, from a linear combination of orthonormal labels, the dominant label. A trivial case is obtained by choosing the 501 labels Icm> as basis vectors Ium>, which have all components zero except for the mth component, which 1s unity. With this choice of labels, the f DLS may be taken as a winner~ winner­ b take-all net Flg.7. Simplest reflexive memory with DLS take-all net W, as shown in Fig. 7. This case appears to be Included in Adapt Ive Resonance Theory (ART) 12 as a special sjmpllf1ed case. A relationship between the ordinary BAM and ART was pOinted out by KoskoS. As in ART, there Is cons1derable fault sensitivity tn this memory, because the stored data vectors appear in the connectton matrix as rows. A memory with better fault tolerance may be obtained by using orthogonal labels other than basis vectors. The DLS can then be taken as an orthogonal transformation 6 followed by a winner-take-an net, as shown 1n Fig. 8. 6 is to be chosen such that 1t transforms the labels Icm> f I 1 1[ (G i u l tnto vectors proportional to the rthogonal 1 transforbasts vectors um>. This can always ,.0 mation winner/' take-all net be done by tak1ng p (9) F1g. 8. Select1ve reflex1ve memory G= [Iup> <cpl , p=l where the Icp>, p= 1 to P, form a complete orthonormal set which contains the labels Icm>, m=l to M. The neurons in the DLS serve as grandmother cells. Once a single winning cell has been activated, I.e., the state of the layer Is a single basis vector, say lu I ) J this vector must be passed back, after appllcation of the transformation G- 1, such as to produce the label IC1> at the back of the BAM. Since G 1s orthogonal. we have 6- 1 =6 T, so that the reQu1red 1nverse transformation may be accompl1shed sfmply by sending the bas1s vector back through the transformer; this gives P <u 116=[ <u 1 IUp><cpl=<c 11 p=l (10) 502 as required. HAlF SRM The SRM may be modified by deleting the thresholding operation in the front layer. The front neurons then have a I inear output, which is reflected back through the SRM, as shown in Fig. 9. In this case, the f I i near neurons / orthogonal 1 .1 ~ U (G T I ,- transformation winner'/' take-all net Fig. 9. Half SRM with l1near neurons in front layer stored data vectors and the input data vectors may be taken as analog vectors, but we reQu1re all the stored vectors to have the same norm. The act i on of the SRM proceeds in the same way as described above, except that we now require the orthonormal labels to have unit norm. It follows that, just l1ke the full SRM, the half SRM gives perfect associative recall to the nearest stored vector, for any number of stored vectors up to the dimension P of the labels. The latter condition 1s due to the fact that a P-dimensional vector space can at most conta1n P orthonormal vectors. In the SRM the output transform Gis 1ntroduced in order to improve the fauJt tolerance of the connection matrix K. This is accomplished at the cost of some fault sensitivity of G, the extent of which needs to be investigated. In this regard 1t is noted that in certatn optical implementat ions of reflexive memories, such as Dana Anderson's resonator I and Similar conflgurations2,3, the transformation G is a Fourier transform, which is implemented simply as a lens. Such an implementation ts quite insentive to the common semiconductor damage mechanisms. EQUIVALENT AUTOASSOCIATIVE MEMORIES Concatenation of the front and back state vectors allows description of the SRMs tn terms of autoassociative memories. 
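The general concatenation device (Fig. 4 earlier) can be sketched directly; the SRM variants discussed next add the winner-take-all and transform blocks to this same skeleton. The toy code below uses our own notation and is only an illustration, not the paper's simulation code.

```python
import numpy as np

def autoassociative_form(K):
    """Block matrix [[0, K^T], [K, 0]] acting on the concatenated state
    vector |f, b>; the zero diagonal blocks correspond to the pruned
    Hd and Hc blocks of the Hopfield matrix."""
    P, N = K.shape
    return np.block([[np.zeros((N, N)), K.T],
                     [K, np.zeros((P, P))]])

def recall(K, f_clamped, n_iter=24):
    """Iterate the concatenated system with the front state clamped and the
    back state initially specified as zero."""
    N = K.shape[1]
    W = autoassociative_form(K)
    v = np.concatenate([np.asarray(f_clamped, dtype=float), np.zeros(K.shape[0])])
    for _ in range(n_iter):
        u = W @ v
        v = np.where(u > 0, 1, np.where(u < 0, -1, v))   # s(0) leaves the state unchanged
        v[:N] = f_clamped                                # clamp the front layer
    return v[N:]                                         # recalled back state (label)
```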
For the SRM which uses basis vectors as labels the corresponding autoassociative memory js shown tn Fjg. 10. This connect jon matrtx structure was also proposed by Guest et. a1. 13. The wtnner-take-all net W needs to be /' /' f b zero I[T !r WI " [~ " ~ slow thres holding & feedback "I' f blJ h ast thres olding& feedback Fig. 10. Equivalent autoassociat lve memory 503 given t1me to settle on a basis vector state before the state Ib> can influence the front state If>. This may perhaps be achieved by arranging the W network to have a thresholding and feedback which are fast compared with that of the K network. An alternate method may be to equip the W network w1th an output gate which is opened only after the W net has sett led. These arrangements present a compUcatlon and cause a delay, which in some appllcations may be 1nappropriate, and In others may be acceptable in a trade between speed and memory density. For the SRM wtth output transformer and orthonormal1abels other fb, w ~eedback (OJ [T I[ (OJ (Q) (G (OJ (GT WI f thresholded b linear W thresholded + output gate Fig. 11. Autoassoc1at1ve memory equivalent to SRM with transform output gate wr ~ winner-take-all .......... Woutput :t@ b back layer, linear '--___ -' f front layer II = BAM connections @ =orthogonal transformat i on W! ~ winner-take-all net Fig. 12. Structure of SRM than basis vectors, a corresponding autoassoclat1ve memory may be composed as shown In Fig.l1. An output gate in the w layer is chosen as the device which prevents the backstroke through the BAM to take place before the w1nner-take-al net has settled. The same effect may perhaps be achieved by choosing different response times for the neuron layers f and w. These matters require investigation. Unless the output transform G 1s already required for other reasons, as in some optical resonators, the DLS with output transform is clumsy. I t would far better to combine the transformer G and the net W into a single network. To find such a DLS should be considered a cha 11 enge. 504 The wort< was partly supported by the Defense Advanced Research projects Agency, ARPA order -5916, through Contract DAAHOI-86-C -0968 with the U.S. Army Missile Command. REFERENCES 1. D. Z. Anderson, "Coherent optical eigenstate memory", Opt. Lett. 11, 56 (1986). 2. B. H. Soffer, G. J. Dunning, Y. Owechko, and E. Marom, "Associative holographic memory with feedback using phase-conjugate mirrors", Opt. Lett. II, 1 18 ( 1986). 3. A. Yarrtv and S. K. Wong, "Assoctat ive memories based on messagebearing optical modes In phase-conjugate resonators", Opt. Lett. 11, 186 (1986). 4. B. Kosko, "Adaptive Cognitive ProceSSing", NSF Workshop for Neural Networks and Neuromorphlc Systems, Boston, Mass., Oct. &-8, 1986. 5. B. KOSKO, "Bidirectional Associative Memories", IEEE Trans. SMC, In press, 1987. 6. B. KOSKO, "Adaptive Bidirectional Associative Memories", Appl. Opt., 1n press, 1987. 7. J. J. Hopfleld, "Neural networks and physical systems with emergent collective computational ablJ1tles", Proc. NatJ. Acad. Sct. USA 79, 2554 ( 1982). 8. P. A. M. Dirac, THE PRINCI PLES OF QUANTLt1 MECHANICS, Oxford, 1958. 9. T. Kohonen, "Correlation Matrix Memories", HelsinsKi University of Technology Report TKK-F-A 130, 1970. 10. M. Harwit and N. J. A Sloane, HADAMARD TRANSFORM OPTICS, Academic Press, New York, 1979. 11. H. G. Loos, It Adaptive Stochastic Content-Addressable Memory", Final Report, ARPA Order 5916, Contract DAAHO 1-86-C-0968, March 1987. 12. G. A. Carpenter and S. 
Grossberg, "A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine", Computer Vision, Graphics, and Image processing, 37, 54 (1987). 13. R. D. TeKolste and C. C. Guest, "Optical Cohen-Grossberg System with Ali-Optical FeedbaCK", IEEE First Annual International Conference on Neural Networks, San Diego, June 21-24, 1987.
1987
12
4
534 The Performance of Convex Set projection Based Neural Networks Robert J. Marks II, Les E. Atlas, Seho Oh and James A. Ritcey Interactive Systems Design Lab, FT-IO University of Washington, Seattle, Wa 98195. ABSTRACT We donsider a class of neural networks whose performance can be analyzed and geometrically visualized in a signal space environment. Alternating projection neural networks (APNN' s) perform by alternately projecting between two or more constraint sets. Criteria for desired and unique convergence are easily established. The network can be configured in either a homogeneous or layered form. The number of patterns that can be stored in the network is on the order of the number of input and hidden neurons. If the output neurons can take on only one of two states, then the trained layered APNN can be easily configured to converge in one iteration. More generally, convergence is at an exponential rate. Convergence can be improved by the use of sigmoid type nonlinearities, network relaxation and/or increasing the number of neurons in the hidden layer. The manner in which the network responds to data for which it was not specifically trained (i.e. how it generalizes) can be directly evaluated analytically. 1. INTRODUCTION In this paper, we depart from the performance analysis techniques normally applied to neural networks. Instead, a signal space approach is used to gain new insights via ease of analysis and geometrical interpretation. Building on a foundation laid elsewherel - 3 , we demonstrate that alternating projecting neural network's (APNN's) formulated from such a viewpoint can be configured in layered form or homogeneously. Significiantly, APNN's have advantages over other neural network architectures . For example, (a) APNN's perform by alternatingly projecting between two or more constraint sets. Criteria can be established for proper iterative convergence for both synchronous and asynchronous operation. This is in contrast to the more conventional technique of formulation of an energy metric for the neural networks, establishing a lower energy bound and showing that the energy reduces each iteration4- 7 • Such procedures generally do not address the accuracy of the final solution. In order to assure that such networks arrive at the desired globally minimum energy, computationaly lengthly procedures such as simulated annealing are usedB - 10 • For synchronous networks, steady state oscillation can occur between two states of the same energyll (b) Homogeneous neural networks such as Hopfield's content addressable memory4,12-14 do not scale well, i.e. the capacity © American Institute of Physics 1988 535 of Hopfield's neural networks less than doubles when the number of neurons is doubled 15-16. Also, the capacity of previously proposed layered neural networks17 ,18 is not well understood. The capacity of the layered APNN'S, on the other hand, is roughly equal to the number of input and hidden neurons19 • (c) The speed of backward error propagation learning 17-18 can be painfully slow. Layered APNN's, on the other hand, can be trained on only one pass through the training data 2 • If the network memory does not saturate, new data can easily be learned without repeating previous data. Neither is the effectiveness of recall of previous data diminished. Unlike layered back propagation neural networks, the APNN recalls by iteration. Under certain important applications, however, the APNN will recall in one iteration. 
(d) The manner in which layered APNN's generalizes to data for which it was not trained can be analyzed straightforwardly. The outline of this paper is as follows. After establishing the dynamics of the APNN in the next section, sufficient criteria for proper convergence are given. The convergence dynamics of the APNN are explored. Wise use of nonlinearities, e.g. the sigmoidal type nonlinearities 2 , improve the network's performance. Establishing a hidden layer of neurons whose states are a nonlinear function of the input neurons' states is shown to increase the network's capacity and the network's convergence rate as well. The manner in which the networks respond to data outside of the training set is also addressed. 2. THE ALTERNATING PROJECTION NEURAL NETWORK In this section, we Nonlinear modificiations established the to the network performance attributes are considered later. notation for the APNN. made to impose certain Consider a set of N continuous level linearly independent library vectors (or patterns) of length L> N: {£n I OSnSN}. We form the library matrix !:. = [£1 1£2 I ... I£N ] and the neural network interconnect matrixa T = F (!:.T !:. )-1 FT where the superscript T denotes transposition. We divide the L neurons into two sets: one in which the states are known and the remainder in which the states are unknown. This partition may change from application to application. Let Sk (M) be the state of the kth node at time M. If the kth node falls into the known catego~, its state is clamped to the known value (i.e. Sk (M) = Ik where I is some library vector). The states of the remaining floating neurons are equal to the sum of the inputs into the node. That is, Sk (M) = i k , where L i k = r tp k sp (1) p = 1 a The interconnect matrix is better trained iteratively2. To include a new library vector £, the interconnects are updated as ~T ~T~ ~ ~ ! + (EE ) / (E E) where E = (.!. - !) f. 536 If all neurons change state simultaneously (i.e. sp = sp (M-l) ), then the net is said to operate synchronously. If only one neuron changes state at a time, the network is operating asynchronously. Let P be the number of clamped neurons. We have provenl that the neural states converge strongly to the extrapolated library vector if the first P rows of ! (denoted KP) form a matrix of full column rank. That is, no column of ~ can be expressed as a linear combination of those remainin.,v. 2 By strong convergenceb , we mean lim II 1 (M) - t II == 0 where II x II == iTi. M~OO Lastly, note that subsumed in the criterion that ~ be full rank is the condition that the number of library vectors not exceed the number of known neural states (P ~ N). Techniques to bypass this restriction by using hidden neurons are discussed in section 5. Partition Notation: that neurons 1 through floating. We adopt the Without loss of generality, we will assume P are clamped and the remaining neurons are vectOr partitioning notation 7 IIp] 1 = ~ io where Ip is the P-tuple of the first P elements of 1. and 10 is a vector of the remaining Q = L-P. We can thus write, for example, ~ [ f~ If~ I ... If: ]. Using this partition notation, we can define the neural clamping operator by: 7 _ IL] !l ~ 7 10 Thus, the first P elements of I are clamped to l P • The remaining Q nodes "float". Partition notation useful. Define for the interconnect matrix will also prove T r!2 I !lJ L~ where ~2 is a P by P and !4 a Q by Q matrix. 3. 
STEADY STATE CONVERGENCE PROOFS For purposes of later reference, we address convergence of the network for synchronous operation. Asynchronous operation is addressed in reference 2. For proper convergence, both cases require that ~ be full rank. For synchronous operation, the network iteration in (1) followed by clamping can be written as: ~ ~ s(M+l) =!l ~ sCM) (2) As is illustrated in l - 3, this operation can easily be visualized in an L dimensional signal space. b The referenced convergence proofs prove strong convergence in an infinite dimensional Hilbert space. In a discrete finite dimensional space, both strong and weak convergence imply uniform convergencel9 • 2D , i.e. 1(M)~t as M~oo. 537 For a given partition with P clamped neurons, (2) can be written in partitioned form as [ ;'(M+J l*J[ I' J !l (3) !3!4 ~o (M) The states of the P clamped neurons are not affected by their input sum. Thus, there is no contribution to the iteration by ~1 and ~2. We can equivalently write (3) as -+0 -;tp-+o s (M+ 1) = !3 f +!4 s (M) (4 ) We show in that if fp is full rank, then the spectral radius (magnitude of the maximum eigenvalue) of ~4 is strictly less than one19 • It follows that the steady state solution of (4) is: (5 ) where, since fp is full rank, we have made use of our claim that -+0 -;to S (00) = f (6) 4. CONVERGENCE DYNAMICS In this section, we explore different convergence dynamics of the APNN when fp is full column rank. If the library matrix displays certain orthogonality characteristics, or if there is a single output (floating) neuron, convergence can be achieved in a single iteration. More generally, convergence is at an exponential rate. Two techniques are presented to improve convergence. The first is standard relaxation. Use of nonlinear convex constraint at each neuron is discussed elsewhere2 ,19. One Step Convergence: There are at least two important cases where the APNN converges other than uniformly in one iteration. Both require that the output be bipolar (±1). Convergence is in one step in the sense that -;to • -+0 f = Slgn s (1) (7) where the vector operation sign takes the sign of each element of the vector on which it operates. CASE 1: If there is a single output neuron, then, from (4), (5) and (6), sO (1) (1 t LL ) ,0 . Since the eigenvalue of the (scalar) matrix, !4 = tL L lies between zero and one 1 9, we conclude that 1t LL > O. Thus, if ,0 is restricted to ±1, (7) follows immediately. A technique to extend this result to an arbitrary number of output neurons in a layered network is discussed in section 7. CASE 2: For certain library matrices, the APNN can also display one step convergence. We showed that if the columns of K are orthogonal and the columns of fp are also orthogonal, then one synchronous iteration results in floating states proportional to the steady 538 state values19 • Specifically, for the floating neurons, tP 2 ~o (1) II II 10 111112 (8) An important special case of (8) is when the elements of Fare all ±1 and orthogonal. If each element were chosen by a 50-50 coin flip, for example, we would expect (in the statistical sense) that this would be the case. Exponential Convergence: More generally, the convergence rate of the APNN is exponential and is a function of the eigenstructure of .!4. Let {~r I 1 ~ r ~ Q } denote the eigenvectors of .!4 and {Ar } the corresponding eigenvalues. Define ~ = [ ~l 1~2 I ... I~o] and the diagonal matrix A4 such that diag ~ = [AI A2 ... Ao] T • Then we can . A T • -+ T-+ -. T • 1 f Wrl.te :!.4.=~ _4 ~. 
Defl.ne x (M) =~ s (M). S.;nce ~ ~ = I, \t...,. fol ows T ro~ the--+differe-ace equatJ-on i~ ('Up that x(M+l)=~:!.4 ~ ~ sCM) + ~ .!31 =~4 x (M) + g where g = ~.!3 t. The solution to this difference equation is M 't' "r [ 1 _ "kM + 1 ] ,,- 1 1J /\ok g k = /\0 ( 1 /\ok) g k (9) r = 0 Since the spectral radius of !4 is less than one19 , ~: ~ 0 as M ~ ~. Our steady state result is thus xk (~) = (1 Ak ) gk. Equation . ["M+l] (9) can therefore be wrl.tten as xk (M) = 1 /\ok xk (~). The eCflivalent of a "time constant" in this exponential convergence is 1/ tn (111 Ak I). The speed of convergence is thus dictated by the spectral radius of .!4. As we have shown19 later, adding neurons in a hidden layer in an APNN can significiantly reduce this spectral radius and thus improve the convergence rate. Relaxation: Both the projection and clamping operations can be relaxed to alter the network's convergence without affecting its steady state20 - 21 • For the interconnects, we choose an appropriate value of the relaxation parameter a in the interval (0,2) and 9 redefine the interconnect matrix as T aT + (1 a)I or equivalently, = {a(tnn -l)+1 a tnrn ; n =m TO see the effect of such relaxation on convergence, we need simply exam\ne the resulting ::dgenvalues. If .!4 has eigenvalues {Ar I, then .!4 has eigenvalues Ar = 1 + a (Ar - 1). A Wl.se choice of a reduces the spectral radius of .!~ with respect to that of .!4' and thus decreases the time constant of the network's convergence. Any of the operators projecting onto convex sets can be relaxed without affecting steady state convergence19 - 20 • These include the ~ operator2 and the sigmoid-type neural operator that projects onto a box. Choice of stationary relaxation parameters without numerical andlor empirical study of each specific case, however, generally remains more of an art than a science. 539 5. LAYERED APNN' S The networks thus far considered are homogeneous in the sense that any neuron can be clamped or floating. If the partition is such that the same set of neurons always provides the network stimulus and the remainder respond, then the networks can be simplified. Clamped neurons, for example, ignore the states of the other neurons. The corresponding interconnects can then be deleted from the neural network architecture. When the neurons are so partitioned, we will refer the APNN as layered. In this section, we explore various aspects of the layered APNN and in particular, the use of a so called hidden layer of neurons to increase the storage capacity of the network. An alternate architecture for a homogeneous APNN that require only Q neurons has been reported by Marks 2 • Hidden Layers: In its generic form, the APNN cannot perform a simple exclusive or (XOR). Indeed, failure to perform this same operation was a nail in the coffin of the perceptron22 . Rumelhart et. al.1 7 -18 revived the percept ron by adding additional layers of neurons. Although doing so allowed nonlinear discrimination, the iterative training of such networks can be painfully slow. With the addition of a hidden layer, the APNN likewise generalizes. In contrast, the APNN can be trained by looking at each data vector only once1 • Although neural networks will not likely be used for performing XOR's, their use in explaining the role of hidden neurons is quite instructive. The library matrix for the XOR is f- [~ ~ ~ ~ 1 The first two rOwS of F do not form a matrix of full column rank. Our approach is to augment fp with two more rows such that the resulting matrix is full rank. 
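A compact sketch of this augmentation step, together with the clamped synchronous recall of Eq. (2), is given below. It is a hypothetical illustration only, not the authors' code: the hidden nonlinearities anticipate the exponential and cosine choices described in the next paragraph, and the iteration count and variable names are ours.

```python
import numpy as np

def augment(Fp, nonlinearities):
    """Append one hidden-neuron row per nonlinearity; each new row is the
    nonlinearity applied column-wise to the clamped (input) rows Fp."""
    hidden = np.array([[g(col) for col in Fp.T] for g in nonlinearities])
    return np.vstack([Fp, hidden])

def interconnects(F):
    """Projection matrix T = F (F^T F)^(-1) F^T built from the library matrix."""
    return F @ np.linalg.inv(F.T @ F) @ F.T

def recall(T, clamped, L, n_iter=50):
    """Synchronous APNN iteration of Eq. (2): project, then re-clamp the
    first P (input plus hidden) neurons on every pass."""
    P = len(clamped)
    s = np.zeros(L)
    s[:P] = clamped
    for _ in range(n_iter):
        s = T @ s
        s[:P] = clamped
    return s

# Hidden nonlinearities acting on the column of clamped input states.
g1 = lambda c: np.exp(c.sum())
g2 = lambda c: np.cos(np.pi * c.sum() / 2.0)

# XOR library: two input rows, the two augmented hidden rows, then the output row.
inputs = np.array([[0., 0., 1., 1.],
                   [0., 1., 0., 1.]])
output = np.array([[0., 1., 1., 0.]])
F_aug = np.vstack([augment(inputs, [g1, g2]), output])
T = interconnects(F_aug)
s = recall(T, clamped=F_aug[:4, 3], L=5)   # clamp inputs and hidden states for input (1, 1)
# s[4] settles near 0, the XOR of the clamped input pattern.
```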
Most any nonlinear combination of the first two rowS will in general increase the matrix rank. Such a procedure, for example, is used in ~-classifiers23 . possible nonlinear operations include multiplication, a logical "AND" and running a weighted sum of the clamped neural states through a memoryless nonlinearity such as a sigmoid. This latter alteration is particularly well suited to neural architectures. To illustrate with the exclusive or (XOR) , a new hidden neural state is set equal to the exponentiation of the sum of the first two rows. A second hidden neurons will be assigned a value equal to the cosine of the sum of the first two neural states multiplied by Tt/2. (The choice of nonlinearities here is arbitrary. ) The augmented library matrix is 0 0 1 1 0 1 0 1 !:.+ 1 e e e 2 1 0 0 -1 0 1 1 0 540 In either the training or look-up mode, the states of the hidden neurons are clamped indirectly as a result of clamping the input neurons. The playback architecture for this network is shown in Fig .1. The interconnect values for the dashed lines are unity. The remaining interconnects are from the projection matrix formed from !+. Geometrical Interpretation In lower dimensions, the effects of hidden neurons can be nicely illustrated geometrically. Consider the library matrix F = Clearly IP = (1/2 1) . Let the determined by the nonlineariy x 2 the first row of f. Then !+ = [ t: I t; ] 1 1/2 ] neurons where x [ 1/2 = 1i4 in the hidden layer be denotes the elements in 1;2 J The corresponding geometry is shown in Fig. 2 for x the input neuron, y the output and h the hidden neuron. The augmented library vectors are shown and a portion of the generated subspace is shown lightly shaded. The surface of h = x 2 resembles a cylindrical lens in three dimensions. Note that the linear variety corresponding to f = 1/2 intersects the cylindrical lens and subspace only at 1+. Similarly, the x = 1 plane intersects the lens and subspace at 12 • Thus, in both cases, clamping the input corresponding to the first element of one of the two library vectors uniquely determines the library vector. Convergence Improvement: Use of additional neurons in the hidden layer will improve the convergence rate of the APNN19 • Specifically, the spectral radius of the .!4 matrix is decreased as additional neurons are added. The dominant time constant controlling convergence is thus decreased. Capacity: Under the assumption that nonlinearities are chosen such that the augmented fp matrix is of full rank, the number of vectors which can be stored in the layered APNN is equal to the sum of the number of neurons in the input and hidden layers. Note, then, that interconnects between the input and output neurons are not needed if there are a sufficiently large number of neurons in the hidden layer. 6. GENERALIZATION We are assured that the APNN will converge to the desired result if a portion of a training vector is used to stimulate the network. What, however, will be the response if an initialization is used that is not in the training set or, in other words, how does the network generalize from the training set? To illustrate generalization, we return to the XOR problem. Let S5 (M) denote the state of the output neuron at the Mth (synchronous) y / , "" " / X / , , "/ "541 loyer: input 3 exp hidden Figure 1. Illustration of a layered APNN fori performing an XOR. l( Figure 2. A geometrical illustration of the use of an x 2 nonlinearity to determine the states of hidden neurons. Figure 3. 
Response of the elementary XOR APNN using an exponential and trignometric nonlinearity in the hidden layer. Note that, at the corners, the function is equal to the XOR of the Figure 4. The generalization of the XOR networks formed by thresholding the function in Fig . 3 at 3/4. Different hidden layer nonlinearities result in different generalizations. 542 iteration. If S1 and S2 denote the input clamped value, then S5 (m+1) =t1 5 Sl + t 25 S2 + t35 S3 + t4 5 S4 + t5 5 S5 (m) where S3 =exp (Sl +S2 ) and S4 =cos [1t (S1 + S2) /2] To reach steady state, we let m tend to infinity and solve for S5 (~) : 1 A plot of S5 (~) versus (S1,S2) is shown in Figure 3. The plot goes through 1 and zero according to the XOR of the corner coordinates. Thresholding Figure 3 at 3/4 results in the generalization perspective plot shown in Figure 4. To analyze the network's generalization when there are more than one output neuron, we use (5) of which (10) is a special case. If conditions are such that there is one step convergence, then generalization plots of the type in Figure 4 can be computed from one network iteration using (7). 7. NOTES (a) There clearly exists a great amount of freedom in the choice of the nonlinearities in the hidden layer. Their effect on the network performance is currently not well understood. One can envision, however, choosing nonlinearities to enhance some network attribute such as interconnect reduction, classification region shaping (generalization) or convergence acceleration. (b) There is a possibility that for a given set of hidden neuron nonlinearities, augmentation of the fp matrix coincidentally will result in a matrix of deficent column rank, proper convergence is then not assured. It may also result in a poorly conditioned matrix, convergence will then be quite slow. A practical solution to these problems is to pad the hidden layer with additional neurons. As we have noted, this will improve the convergence rate. (c) We have shown in section 4 that if an APNN has a single bipolar output neuron, the network converges in one step in the sense of (7). Visualize a layered APNN with a single output neuron. If there are a sufficiently large number of neurons in the hidden layer, then the input layer does not need to be connected to the output layer. Consider a second neural network identical to the first in the input and hidden layers except the hidden to output interconnects are different. Since the two networks are different only in the output interconnects, the two networks can be combined into a singlee network with two output neurons. The interconnects from the hidden layer to the output neurons are identical to those used in the single output neurons architectures. The new network will also converge in one step. This process can clearly be extended to an arbitrary number of output neurons. REFERENCES 1. R.J. Marks II, "A Class of Continuous Level Associative Memory Neural Nets," ~. Opt., vo1.26, no.10, p.200S, 1987. 543 2. K.F. Cheung et. al., "Neural Net Associative Memories Based on Convex Set Projections," Proc. IEEE 1st International Conf. on Neural Networks, San Diego, 1987. 3. R.J. Marks II et. al., "A Class of Continuous Level Neural Nets," Proc. 14th Congress of International Commission for Optics Conf., Quebec, Canada, 1987. 4. J.J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proceedings Nat. Acad. of Sciences, USA, vol.79, p.2554, 1982. 5. J.J. Hopfield et. 
al., "'Neural' Computation of Decisions in Optimization Problems," Biol. Cybern., vol. 52, p. 141, 1985. 6. D. W. Tank et al., "Simple 'Neural' Optimization Networks: An A/D Converter, Signal Decision Circuit, and a Linear Programming Circuit," IEEE Trans. Circuits Syst., vol. CAS-33, p. 533, 1986. 7. M. Takeda et al., "Neural Networks for Computation: Number Representation and Programming Complexity," Appl. Opt., vol. 25, no. 18, p. 3033, 1986. 8. S. Geman et al., "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, p. 721, 1984. 9. S. Kirkpatrick et al., "Optimization by Simulated Annealing," Science, vol. 220, no. 4598, p. 671, 1983. 10. D. H. Ackley et al., "A Learning Algorithm for Boltzmann Machines," Cognitive Science, vol. 9, p. 147, 1985. 11. K. F. Cheung et al., "Synchronous vs. Asynchronous Behaviour of Hopfield's CAM Neural Net," to appear in Applied Optics. 12. R. P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, p. 7, Apr. 1987. 13. N. Farhat et al., "Optical Implementation of the Hopfield Model," Appl. Opt., vol. 24, p. 1469, 1985. 14. L. E. Atlas, "Auditory Coding in Higher Centers of the CNS," IEEE Eng. in Medicine and Biology Magazine, p. 29, Jun. 1987. 15. Y. S. Abu-Mostafa et al., "Information Capacity of the Hopfield Model," IEEE Trans. Inf. Theory, vol. IT-31, p. 461, 1985. 16. R. J. McEliece et al., "The Capacity of the Hopfield Associative Memory," IEEE Trans. Inf. Theory (submitted), 1986. 17. D. E. Rumelhart et al., Parallel Distributed Processing, vols. I & II, Bradford Books, Cambridge, MA, 1986. 18. D. E. Rumelhart et al., "Learning Representations by Back-Propagating Errors," Nature, vol. 323, no. 6088, p. 533, 1986. 19. R. J. Marks II et al., "Alternating Projection Neural Networks," ISDL Report 11587, Nov. 1987 (submitted for publication). 20. D. C. Youla et al., "Image Restoration by the Method of Convex Projections: Part I-Theory," IEEE Trans. Med. Imaging, vol. MI-1, p. 81, 1982. 21. M. I. Sezan and H. Stark, "Image Restoration by the Method of Convex Projections: Part II-Applications and Numerical Results," IEEE Trans. Med. Imaging, vol. MI-1, p. 95, 1985. 22. M. Minsky and S. Papert, Perceptrons, MIT Press, Cambridge, MA, 1969. 23. J. Sklansky et al., Pattern Classifiers and Trainable Machines, Springer-Verlag, New York, 1981.
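The XOR construction described in the paper above is small enough to check numerically. The following sketch is an illustration only, not the authors' code: the use of NumPy's pseudo-inverse to form the projection-matrix interconnects, the iteration count, and all variable names are my own assumptions. It builds the augmented library matrix, clamps the input and (indirectly) the hidden neurons, and relaxes the single floating output neuron; at the four corner inputs it should recover the XOR values, and at intermediate inputs it traces a generalization surface of the kind shown in Fig. 3.

```python
import numpy as np

# Augmented library matrix F+: rows = neurons, columns = the four XOR training vectors.
# Rows 1-2: clamped inputs; row 3: exp(s1+s2); row 4: cos(pi*(s1+s2)/2); row 5: XOR output.
x = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
h_exp = np.exp(x.sum(axis=0))
h_cos = np.cos(np.pi * x.sum(axis=0) / 2.0)
y = np.array([0, 1, 1, 0], dtype=float)
F = np.vstack([x, h_exp, h_cos, y])          # shape (5, 4)

# Interconnects: orthogonal projection onto the span of the library vectors.
T = F @ np.linalg.pinv(F)                    # shape (5, 5)

def recall(s1, s2, n_iter=200):
    """Clamp input and hidden neurons, relax the floating output neuron."""
    s = np.array([s1, s2,
                  np.exp(s1 + s2),
                  np.cos(np.pi * (s1 + s2) / 2.0),
                  0.0])                       # arbitrary starting value for the output
    for _ in range(n_iter):
        s[4] = T[4] @ s                       # only the unclamped (output) neuron updates
    return s[4]

for s1, s2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(s1, s2, "->", round(recall(s1, s2), 3))   # expected: 0, 1, 1, 0
```

Because the clamped 4x4 partition of F+ has full rank, the diagonal feedback term on the output neuron is strictly less than one, so the scalar relaxation above converges geometrically, consistent with the one-step-convergence discussion in the text.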
1987
13
5
144 SPEECH RECOGNITION EXPERIMENTS WITH PERCEPTRONS D. J. Burr Bell Communications Research Morristown, NJ 07960 ABSTRACT Artificial neural networks (ANNs) are capable of accurate recognition of simple speech vocabularies such as isolated digits [1]. This paper looks at two more difficult vocabularies, the alphabetic E-set and a set of polysyllabic words. The E-set is difficult because it contains weak discriminants and polysyllables are difficult because of timing variation. Polysyllabic word recognition is aided by a time pre-alignment technique based on dynamic programming and E-set recognition is improved by focusing attention. Recognition accuracies are better than 98% for both vocabularies when implemented with a single layer perceptron. INTRODUCTION Artificial neural networks perform well on simple pattern recognition tasks. On speaker trained spoken digits a layered network performs as accurately as a conventional nearest neighbor classifier trained on the same tokens [1]. Spoken digits are easy to recognize since they are for the most part monosyllabic and are distinguished by strong vowels. It is reasonable to ask whether artificial neural networks can also solve more difficult speech recognition problems. Polysyllabic recognition is difficult because multi-syllable words exhibit large timing variation. Another difficult vocabulary, the alphabetic E-set, consists of the words B, C, D, E, G, P, T, V, and Z. This vocabulary is hard since the distinguishing sounds are short in duration and low in energy. We show that a simple one-layer perceptron [7] can solve both problems very well if a good input representation is used and sufficient examples are given. We examine two spectral representations a smoothed FFT (fast Fourier transform) and an LPC (linear prediction coefficient) spectrum. A time stabilization technique is described which pre-aligns speech templates based on peaks in the energy contour. Finally, by focusing attention of the artificial neural network to the beginning of the word, recognition accuracy of the E-set can be consistently increased. A layered neural network, a relative of the earlier percept ron [7], can be trained by a simple gradient descent process [8]. Layered networks have been © American Institute of Physics 1988 145 applied successflJ.lly to speech recognition [1], handwriting recognition [2], and to speech synthesis [11]. A variation of a layered network [3] uses feedback to model causal constraints, which can be useful in learning speech and language. Hidden neurons within a layered network are the building blocks that are used to form solutions to specific problems. The number of hidden units required is related to the problem [1,2]. Though a single hidden layer can form any mapping [12], no more than two layers are needed for disjunctive normal form [4]. The second layer may be useful in providing more stable learning and representation in the presence of noise. Though neural nets have been shown to perform as well as conventional techniques[I,5], neural nets may do better when classes have outliers [5]. PERCEPTRONS A simple perceptron contains one input layer and one output layer of neurons directly connected to each other (no hidden neurons). This is often called a one-layer system, referring to the single layer of weights connecting input to output. Figure 1. shows a one-layer perceptron configured to sense speech patterns on a two-dimensional grid. The input consists of a 64-point spectrum at each of twenty time slices. 
Each of the 1280 inputs is connected to each of the output neurons, though only a sampling of connections are shown. There is one output neuron corresponding to each pattern class. Neurons have standard linear-weighted inputs with logistic activation. C(1) C(2) FR:<lBC'V .... 64 units C(N-1) C(N) Figure 1. A single layer perceptron sensing a time-frequency array of sample data. Each output neuron CU) (1 <i<N) corresponds to a pattern class and is full connected to the input array (for clarity only a few connections are shown). An input word is fit to the grid region by applying an automatic endpoint detection algorithm. The algorithm is a variation of one proposed by Rabiner and Sambur [9] which employs a double threshold successive approximation 146 method. Endpoints are determined by first detecting threshold crossings of energy and then of zero crossing rate. In practice a level crossing other than zero is used to prevent endpoints from being triggered by background sounds. INPUT REPRESENTATIONS Two different input representations were used in this study. The first is a Fourier representation smoothed in both time and frequency. Speech is sampled at 10 KHz ap.d Hamming windowed at a number of sample points. A 128-point FFT spectrum is computed to produce a template of 64 spectral samples at each of twenty time frames. The template is smoothed twice with a time window of length three and a frequency window of length eight. For comparison purposes an LPC spectrum is computed using a tenth order model on 300-sample Hamming windows. Analysis is performed using the autocorrelation method with Durbin recursion [6]. The resulting spectrum is smoothed over three time frames. Sample spectra for the utterance "neural-nets" is shown in Figure 2. Notice the relative smoothness of the LPC spectrum which directly models spectral peaks. FFT LPC Figure 2. FFT and LPC time-frequency plots for the utterance "neural nets". Time is toward the left, and frequency, toward the right. DYNAMIC TIME ALIGMv1ENT Conventional speech recognition systems often employ a time normalization technique based on dynamic programming [10]. It is used to warp the time scales of two utterances to obtain optimal alignment between their spectral frames. We employ a variation of dynamic programming which aligns energy contours rather than spectra. A reference energy template is chosen for each pattern class, and incoming patterns are warped onto it. Figure 3 shows five utterances of "neural-nets" both before and after time alignment. Notice the improved alignment of energy peaks. 147 § I § § § I >\b ~ ~ II III z W ~ ~ !I ! .. .. 10 10 . .. (a. ) TIME (b) Figure 3. (a) Superimposed energy plots of five different utterances of "neural nets". (b). Same utterances after dynamic time alignment. POLYSYLLABLE RECOGNITION Twenty polysyllabic words containing three to five syllables were chosen, and five tokens of each were recorded by a single male speaker. A variable number of tokens were used to train a simple perceptron to study the effect of training set size on performance. Two performance measures were used: classification accuracy, and an RMS error measure. Training tokens were permuted to obtain additional experimental data points. Figure 4. Output responses of a perceptron trained with one token per class (left) and four tokens per class (right). 148 Figure 4 shows two representative perspective plots of the output of a perceptron trained on one and four tokens respectively per class. 
Plots show network response (z-coordinate) as a function of output node (left axis) and test word index (right axis). Note that more training tokens produce a more ideal map - a map should have ones along the diagonal and zeroes everywhere else. Table 1 shows the results of these experiments for three different representations: (1) FFT, (2) LPC and (3) time aligned LPC. This table lists classification accuracy as a function of number of training tokens and input representation. The perceptron learned to classify the unseen patterns perfectly for all cases except the FFT with a single training pattern. Table 1. Polysyllabic Word Recognition AccuraclT Number Training Tokens 1 2 3 4 FFT 98.7% 100% 100% 100% LPC 100% 100% 100% 100% Time Aligned LPC 100% 100% 100% 100% Permuted Trials 400 300 200 100 A different performance measure, the RMS error, evaluates the degree to which the trained network output responses Rjk approximate the ideal targets Tjk • The measure "is evaluated over the N non-trained tokens and M output nodes of the network. Tik equals 1 for J=k and 0 for J=I=k. Figure 5 shows plots of RMS error as a function of input representation and training patterns. Note that the FFT representation produced the highest error, LPC was about 40% less, and time-aligned LPC only marginally better than non-aligned LPC. In a situation where many choices must be made (i.e. vocabularies much larger than 20 words) LPC is the preferred choice, and time alignment could be useful to disambiguate similar words. Increased number of training tokens results in improved performance in all cases. 149 o ci ,-----------------------------~ '" 0 .. FFT i "! lii I0 5 0 g W tJl ~ LPC a: '" 0 TIme Aligned LPC 0 o o ~--~----~--~----~ __ ~ ____ ~ 1.0 2.0 3.0 4.0 Number Traln'ng Tokens Figure 5. RMS error versus number of training tokens for various input representations. E-SET VOCABULARY The E-Set vocabulary consists of the nine E-words of the English alphabet B, C, D, E, G, P, T, V, Z. Twenty tokens of each of the nine classes were recorded by a single male speaker. To maximize the sizes of training and test sets, half were used for training and the other half for testing. Ten permutations produced a total of 900 separate recognition trials. Figure 6 shows typical LPC templates for the nine classes. Notice the double formant ridge due to the ''E'' sound, which is common to all tokens. Another characteristic feature is the FO ridge - the upward fold on the left of all tokens which characterizes voicing or pitched sound. 150 Figure 6. LP C time-frequency plots for representative tokens of the E-set words. Figure 7. Time-frequency plots of weight values connected to each output neuron ''E'' through "z" in a trained perceptron. 151 Figure 7 shows similar plots illustrating the weights learned by the network when trained on ten tokens of each class. These are plotted like spectra, since one weight is associated with each spectral sample. Note that the patterns have some formant structure. A recognition accuracy of 91.4% included perfect scores for classes C, E, and G. Notice that weights along the FO contour are mostly small and some are slightly negative. This is a response to the voiced ''E" sound common to all classes. The network has learned to discount "voicing" as a discriminator for this vocabulary. Notice also the strong "hilly" terrain near the beginning of most templates. This shows where the network has decided to focus much of its discriminating power. 
Note in particular the hill-valley pair at the beginning of ''p'' and "T". These are near to formants F2/F3 and could conceivably be formant onset detectors. Note the complicated detector pattern for the ''V'' sound. The classes that are easy to discriminate (C, E, G) produce relatively fiat and uninter~sting weight spaces. A highly convoluted weight space must therefore be correlated with difficulty in discrimination. It makes little sense however that the network should be working hard in the late time C'E" sound) portion of the utterance. Perhaps additional training might reduce this activity, since the network would eventually find little consistent difference there. A second experiment was conducted to help the network to focus attention. The first k frames of each input token were averaged to produce an average spectrum. These average spectra were then used in a simple nearest neighbor recognizer scheme. Recognition accuracy was measured as a function of k. The highest performance was for k=8, indicating that the first 40% of the word contained most of the "action". B C D E C P T V Z B 08 0 0 0 0 0 0 c 0 100 0 0 0 0 0 0 0 D 0 0 08 0 0 2 0 0 0 E 0 0 0 100 0 0 0 0 0 c 0 0 0 0 100 0 0 0 0 p 0 0 3 0 0 03 4 0 0 T 0 0 0 0 0 0 100 0 0 V 2 0 0 0 0 2 0 08 0 Z 0 0 0 0 0 0 0 09 Figure 8. Confusion matrix of the E-set focused on the first 40% of each word. 152 All words were resampled to concentrate 20 time frames into the first 40% of the word. LPC spectra were recomputed using a 16th order model and the network was trained on the new templates. Performance increased from 91.4% to 98.2%. There were only 16 classification errors out of the 900 recognition tests. The confusion matrix is shown in Figure 8. Learning times for all experiments consisted of about ten passes through the training set. When weights were primed with average spectral values rather than random values, learning time decreased slightly. CONCLUSIONS Artificial neural networks are capable of high performance in pattern recognition applications, matching or exceeding that of conventional classifiers. We have shown that for difficult speech problems such as time alignment and weak discriminability, artificial neural networks perform at high accuracy exceeding 98%. One-layer perceptrons learn these difficult tasks almost effortlessly - not in spite of their simplicity, but because of it. REFERENCES 1. D. J. Burr, "A Neural Network Digit Recognizer", Proceedings of IEEE Conference on Systems, Man, and Cybernetics, Atlanta, GA, October, 1986, pp. 1621-1625. 2. D. J. Burr, "Experiments with a Connectionist Text Reader," IEEE International Conference on Neural Networks, San Diego, CA, June, 1987. 3. M. I. Jordan, "Serial Order: A Parallel Distributed Processing Approach," ICS Report 8604, UCSD Institute for Cognitive Science, La Jolla, CA, May 1986. 4. S. J. Hanson, and D. J. Burr, 'What Connectionist Models Learn: Toward a Theory of Representation in Multi-Layered Neural Networ.ks," submitted for pu blication. 5. W. Y. Huang and R. P. Lippmann, "Comparisons Between Neural Net and Conventional Classifiers," IEEE International Conference on Neural Networks, San Diego, CA, June 21-23, 1987. 6. J. D. Markel and A. H. Gray, Jr., Linear Prediction of Speech, SpringerVerlag, New York, 1976. 7. M. L. Minsky and S. Papert, Perceptrons, MIT Press, Cambridge, Mass., 1969. 153 8. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, ''Learning Internal Representations by Error Propagation," in Parallel Distributed Processing, Vol. 1, D. E. 
Rumelhart and J. L. McClelland, eds., MIT Press, 1986, pp. 318-362. 9. L. R. Rabiner and M. R. Sambur, "An Algorithm for Determining the Endpoints of Isolated Utterances," BSTJ, Vol. 54, pp. 297-315, Feb. 1975. 10. H. Sakoe and S. Chiba, "Dynamic Programming Optimization for Spoken Word Recognition," IEEE Trans. Acoust., Speech, Signal Processing, Vol. ASSP-26, No. 1, pp. 43-49, Feb. 1978. 11. T. J. Sejnowski and C. R. Rosenberg, "NETtalk: A Parallel Network that Learns to Read Aloud," Technical Report JHU/EECS-86/01, Johns Hopkins University Electrical Engineering and Computer Science, 1986. 12. A. Wieland and R. Leighton, "Geometric Analysis of Neural Network Capabilities," IEEE International Conference on Neural Networks, San Diego, CA, June 21-24, 1987.
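To reproduce the flavour of the experiments in the paper above, a minimal single-layer perceptron of the kind it describes is sketched below: one logistic output unit per word class, fully connected to a flattened 20-frame by 64-bin time-frequency template, trained by gradient descent on a squared-error measure. This is a sketch under stated assumptions, not the author's implementation: the learning rate, initialization, and delta-rule loss are my choices, the FFT/LPC front end, smoothing, and endpoint detection are omitted, and `training_tokens` is a hypothetical list of (template, class index) pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OneLayerPerceptron:
    """Single layer of weights from a 20x64 time-frequency template to N class outputs."""
    def __init__(self, n_inputs=20 * 64, n_classes=20, lr=0.1):
        self.W = rng.normal(scale=0.01, size=(n_classes, n_inputs + 1))  # +1 bias weight
        self.lr = lr

    def forward(self, template):
        x = np.append(template.ravel(), 1.0)      # flatten the template, append bias input
        return sigmoid(self.W @ x), x

    def train_step(self, template, target_class):
        y, x = self.forward(template)
        t = np.zeros(len(y))
        t[target_class] = 1.0
        grad = (y - t) * y * (1.0 - y)            # squared-error gradient through the logistic
        self.W -= self.lr * np.outer(grad, x)

net = OneLayerPerceptron()
# Training loop sketch: roughly ten passes over the tokens, as reported in the paper.
# for epoch in range(10):
#     for template, label in training_tokens:    # hypothetical (20x64 array, int) pairs
#         net.train_step(template, label)
```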
1987
14
6
ON PROPERTIES OF NETWORKS OF NEURON-LIKE ELEMENTS Pierre Baldi· and Santosh S. Venkatesht 15 December 1987 Abstract The complexity and computational capacity of multi-layered, feedforward neural networks is examined. Neural networks for special purpose (structured) functions are examined from the perspective of circuit complexity. Known results in complexity theory are applied to the special instance of neural network circuits, and in particular, classes of functions that can be implemented in shallow circuits characterised. Some conclusions are also drawn about learning complexity, and some open problems raised. The dual problem of determining the computational capacity of a class of multi-layered networks with dynamics regulated by an algebraic Hamiltonian is considered. Formal results are presented on the storage capacities of programmed higher-order structures, and a tradeoff between ease of programming and capacity is shown. A precise determination is made of the static fixed point structure of random higher-order constructs, and phase-transitions (0-1 laws) are shown. 1 INTRODUCTION In this article we consider two aspects of computation with neural networks. Firstly we consider the problem of the complexity of the network required to compute classes of specified (structured) functions. We give a brief overview of basic known complexity theorems for readers familiar with neural network models but less familiar with circuit complexity theories. We argue that there is considerable computational and physiological justification for the thesis that shallow circuits (Le., networks with relatively few layers) are computationally more efficient. We hence concentrate on structured (as opposed to random) problems that can be computed in shallow (constant depth) circuits with a relatively few number (polynomial) of elements, and demonstrate classes of structured problems that are amenable to such low cost solutions. We discuss an allied problem-the complexity of learning-and close with some open problems and a discussion of the observed limitations of the theoretical approach. We next turn to a rigourous classification of how much a network of given structure can do; i.e., the computational capacity of a given construct. (This is, in ·Department of Mathematics, University of California (San Diego), La Jolla, CA 92093 tMoore School of Electrical Engineering, University of Pennsylvania, Philadelphia, PA 19104 © American Institute of Physics 1988 41 42 a sense, the mirror image of the problem considered above, where we were seeking to design a minimal structure to perform a given task.) In this article we restrict ourselves to the analysis of higher-order neural structures obtained from polynomial threshold rules. We demonstrate that these higher-order networks are a special class of layered neural network, and present formal results on storage capacities for these constructs. Specifically, for the case of programmed interactions we demonstrate that the storage capacity is of the order of n d where d is the interaction order. For the case of random interactions, a type of phase transition is observed in the distribution of fixed points as a function of attraction depth. 2 COMPLEXITY There exist two broad classes of constraints on compl,ltations. 1. Physical constraints: These are related to the hardware in which the computation is embedded, and include among others time constants, energy limitations, volumes and geometrical relations in 3D space, and bandwidth capacities. 2. 
Logical constraints: These can be further subdivided into • Computability constraints-for instance, there exist unsolvable problems, i.e., functions such as the halting problem which are not computable in an absolute sense . • Complexity constraints-usually giving upper and/or lower bounds on the amount of resources such as the time, or the number of gates required to compute a given function. As an instance, the assertion "There exists an exponential time algorithm for the Traveling Salesman Problem," provides a computational upper bound. If we view brains as computational devices, it is not unreasonable to think that in the course of the evolutionary process, nature may have been faced several times by problems related to physical and perhaps to a minor degree logical constraints on computations. If this is the case, then complexity theory in a broad sense could contribute in the future to our understanding of parallel computations and architectural issues both in natural and synthetic neural systems. A simple theory of parallel processing at the macro level (where the elements are processors) can be developed based on the ratio of the time spent on communications between processors [7] for different classes of problems and different processor architecture and interconnections. However, this approach does not seem to work for parallel processing at the level of circuits, especially if calculations and communications are intricately entangled. Recent neural or connectionist models are based on a common structure, that of highly interconnected networks of linear (or polynomial) threshold (or with sigmoid input-output function) units with adjustable interconnection weights. We shall therefore review the complexity theory of such circuits. In doing so, it will be sometimes helpful to contrast it with the similar theory based on Boolean (AND, OR, NOT) gates. The presentation will be rather informal and technical complements can easily be found in the references. Consider a circuit as being on a cyclic oriented graph connecting n Boolean inputs to one Boolean output. The nodes of the graph correspond to the gates (the n input units, the "hidden" units, and the output unit) of the circuit. The size of the circuit is the total number of gates and the depth is the length of the longest path connecting one input to the output. For a layered, feed-forward circuit, the width is the average number of computational units in the hidden (or interior) layers of elements. The first obvious thing when comparing Boolean and threshold logic is that they are equivalent in the sense that any Boolean function can be implemented using either logic. In fact, any such function can be computed in a circuit of depth two and exponential size. Simple counting arguments show that the fraction of functions requiring a circuit of exponential size approaches one as n -+ 00 in both cases, i.e., a random function will in general require an exponential size circuit. (Paradoxically, it is very difficult to construct a family of functions for which we can prove that an exponential circuit is necessary.) Yet, threshold logic is more powerful than Boolean logic. A Boolean gate can compute only one function whereas a threshold gate can compute to the order of 2on2 functions by varying the weights with 1/2 ~ a ~ 1 (see [19] for the lower bound; the upper bound is a classical hyperplane counting argument, see for instance [20,30)). 
It would hence appear plausible that there exist wide classes of problems which can be computed by threshold logic with circuits substantially smaller than those required by Boolean logic. An important result which separates threshold and Boolean logic from this point of view has been demonstrated by Yao [31] (see [10,24] for an elegant proof). The result is that in order to compute a function such as parity in a circuit of constant depth k, at least exp(cnl/2k) Boolean gates with unbounded fanin are required. As we shall demonstrate shortly, a circuit of depth two and linear size is sufficient for the computation of such functions using threshold logic. It is not unusual to hear discussions about the tradeoffs between the depth and the width of a circuit. We believe that one of the main constributions of complexity analysis is to show that this tradeoff is in some sense minimal and that in fact there exists a very strong bias in favor of shallow (Le., constant depth) circuits. There are multiple reasons for this. In general, for a fixed size, the number of different functions computable by a circuit of small depth exceeds the number of those computable by a deeper circuit. That is, if one had no a priori knowledge regarding the function to be computed and was given hidden units, then the optimal strategy would be to choose a circuit of depth two with the m units in a single layer. In addition, if we view computations as propagating in a feedforward mode from the inputs to the output unit, then shallow circuits compute faster. And the deeper a circuit, the more difficult become the issues of time delays, synchronisation, and precision on the computations. Finally, it should be noticed that given overall responses of a few hundred milliseconds and given the known time scales for synaptic integration, biological circuitry must be shallow, at least within a "module" and this is corroborated by anatomical data. The relative slowness of neurons and their shallow circuit architecture are to be taken together with the "analog factor" and "entropy factor" [1] to understand the necessary high-connectivity requirements of neural systems. 43 44 From the previous analysis emerges an important class of circuits in threshold logic characterised by polynomial size and shallow depth. We have seen that, in general, a random function cannot be computed by such circuits. However, many interesting functions-the structured problems--are far from random, and it is then natural to ask what is the class of functions computable by such circuits? While a complete characterisation is probably difficult, there are several sub-classes of structural functions which are known to be computable in shallow poly-size circuits. The symmetric functions, i.e., functions which are invariant under any permutation of the n input variables, are an important class of structured problems that can be implemented in shallow polynomial size circuits. In fact, any symmetric function can be computed by a threshold circuit of depth two and linear size; (n hidden units and one output unit are always sufficient). We demonstrate the validity of this assertion by the following instructive construction. We consider n binary inputs, each taking on values -1 and 1 only, and threshold gates as units. Now array the 2n possible inputs in n + 1 rows with the elements in each row being permuted versions of each other (i.e., n-tuples in a row all have the same number of +1's) and with the rows going monotonically from zero +1's to n +l's. 
Any given symmetric Boolean function clearly assumes the same value for all elements (Boolean n-tuples) in a row, so that contiguous rows where the function assumes the value +1 form bands. (There are at most n/2 bands-the worst case occuring for the parity function.) The symmetric function can now be computed with 2B threshold gates in a single hidden layer with the topmost "neuron" being activated only if the number of +1's in the input exceeds the number of +1's in the lower edge of the lowest band, and proceeding systematically, the lowest "neuron" being activated only if the number of +1's in the input exceeds the number of +1's in the upper edge of the highest band. An input string will be within a band if and only if an odd number of hidden neurons are activated startbg contiguously from the top of the hidden layer, and conversely. Hence, a single output unit can compute the given symmetric function. It is easy to see that arithmetic operations on binary strings can be performed with polysize small depth circuits. Reif [23] has shown that for a fixed degree of precision, any analytic function such as polynomials, exponentials, and trigonometric functions can be approximated with small and shallow threshold circuits. Finally, in many situations one is interested in the value of a function only for a vanishingly small (Le., polynomial) fraction of the total number of possible inputs 2n. These functions can be implemented by polysize shallow circuits and one can relate the size and depths of the circuit to the cardinal of the interesting inputs. So far we only have been concerned with the complexity of threshold circuits. We now turn to the complexity of learning, i.e., the problem of finding the weights required to implement a given function. Consider the problem of repeating m points in 1R l coloured in two colours, using k hyperplanes so that any region contains only monochromatic points. If i and k are fixed the problem can be solved in polynomial time. If either i or k goes to infinity, the problem becomes NP-complete [1]. As a result, it is not difficult to see that the general learning problem is NP-complete (see also [12] for a different proof and [21] for a proof of the fact it is already NP-complete in the case of one single threshold gate). Some remarks on the limitations of the complexity approach are a pro]XJs at this juncture: 1. While a variety of structured Boolean functions can be implemented at relatively low cost with networks of linear threshold gates (McCulloch-Pitts neurons), the extension to different input-output functions and the continuous domain is not always straightforward. 2. Even restricting ourselves to networks of relatively simple Boolean devices such as the linear threshold gate, in many instances, only relatively weak bounds are available for computational cost and complexity. 3. Time is probably the single most important ingredient which is completely absent from these threshold units and their interconnections [17,14]; there are, in addition, non-biological aspects of connectionist models [8]. 4. Finally, complexity results (where available) are often asymptotic in nature and may not be meaningful in the range corresponding to a particular application. We shall end this section with a few open questions and speculations. One problem has to do with the time it takes to learn. Learning is often seen as a very slow process both in artificial models (cf. back propagation, for instance) and biological systems (cf. human acquisition of complex skills). 
However, if we follow the standards of complexity theory, in order to be effective over a wide variety of scales, a single learning algorithm should be polynomial time. We can therefore ask what is learnable by examples in polynomial time by polynomial size shallow threshold circuits? The status of back propagation type of algorithms with respect to this question is not very clear. The existence of many tasks which are easily executed by biological organisms and for which no satisfactory computer program has been found so far leads to the question of the specificity of learning algorithms, i.e., whether there exists a complexity class of problems or functions for which a "program" can be found only by learning from examples as opposed to by traditional programming. There is some circumstantial evidence against such conjecture. As pointed out by Valiant [25], cryptography can be seen in some sense as the opposite of learning. The conjectures existence of one way function, i.e., functions which can be constructed in polynomial time but cannot be invested (from examples) in polynomial time suggests that learning algorithms may have strict limitations. In addition, for most of the artificial applications seen so far, the programs obtained through learning do not outperform the best already known software, though there may be many other reasons for that. However, even if such a complexity class does not exist, learning algorithm may still be very important because of their inexpensiveness and generality. The work of Valiant [26,13] on polynomial time learning of Boolean formulas in his "distribution free model" explores some additional limitations of what can be learned by examples without including any additional knowledge. Learning may therefore turn out to be a powerful, inexpensive but limited family of algorithms that need to be incorporated as "sub-routines" of more global 45 46 programs, the structure of which may be -harder to find. Should evolution be regarded as an "exponential" time learning process complemented by the "polynomial" time type of learning occurring in the lifetime of organisms? 3 CAPACITY In the previous section the focus of our investigation was on the structure and cost of minimal networks that would compute specified Boolean functions. We now consider the dual question: What is the computational capacity of a threshold network of given structure? As with the issues on complexity, it turns out that for fairly general networks, the capacity results favour shallow (but perhaps broad) circuits [29]. In this discourse, however, we shall restrict ourselves to a specified class of higher-order networks, and to problems of associative memory. We will just quote the principal rigourous results here, and present the involved proofs elsewhere [4]. We consider systems of n densely interacting threshold units each of which yields an instantaneous state -1 or +1. (This corresponds in the literature to a system of n Ising spins, or alternatively, a system of n neural states.) The state space is hence the set of vertices of the hypercube. We will in this discussion also restrict our attention throughout to symmetric interaction systems wherein the interconnections between threshold elements is bidirectional. Let Id be the family of all subsets of cardinality d + 1 of the set {1, 2, ... , n}. n Clearly IIdl = ( d + 1)· For any subset I of {1, 2, ... , n}, and for every state deC U = {Ul,U2, ... ,un }E lBn = {-1,l}n, set UI = fIiEIui. 
Definition 1 A homogeneous algebraic threshold network of degree d is a network of n threshold elements with interactions specified by a set of ( d: 1 ) real coefficients WI indexed by I in I d, and the evolution rule ut = sgn ( L WIUI\{i}) IeId :ieI (1) These systems can be readily seen to be natural generalisations to higherorder of the familiar case d = 1 of linear threshold networks. The added degrees of freedom in the interaction coefficients can potentially result in enhanced flexibility and programming capability over the linear case as has been noted independently by several authors recently [2,3,4,5,22,27]. Note that each d-wise product uI\i is just the parity of the corresponding d inputs, and by our earlier discussion, this can be computed with d hidden units in one layer followed by a single threshold unit. Thus the higher-order network can be realised by a network of depth three, where the first hidden layer has d( ~ ) units, the second hidden layer has ( ~ ) units, and there are n output units which feedback into the n input units. Note that the weights from the input to the first hidden layer, and the first hidden layer to the second are fixed (computing the various d-wise products), and the weights from the second hidden layer to the output are the coefficients WI which are free parameters. These systems can be identified either with long range interactions for higherorder spin glasses at zero temperature, or higher-order neural networks. Starting from an arbitrary configuration or state, the system evolves asynchronously by a sequence of single "spin" flips involving spins which are misaligned with the instantaneous "molecular field." The dynamics of these symmetric higher-order systems are regulated analogous to the linear system by higher-order extensions of the classical quadratic Hamiltonian. We define the homogeneous algebraic Hamiltonian of degree d by Hd(u) = - E WI'UI· IeId (2) The algebraic Hamiltonians are functionals akin in behaviour to the classical quadratic Hamiltonian as has been previously demonstrated [5]. Proposition 1 The functional H d is non-increasing under the evolution rule 1. In the terminology of spin glasses, the state trajectories of these higher-order networks can be seen to be following essentially a zero-temperature Monte Carlo (or Glauber) dynamics. Because of the monotonicity of the algebraic Hamiltonians given by equation 2 under the asynchronous evolution rule 1, the system always reaches a stable state (fixed point) where the relation 1 is satisfied for each of the n spins or neural states. The fixed points are hence the arbiters of system dynamics, and determine the computational capacity of the system. System behaviour and applications are somewhat different depending on whether the interactions are random or programmed. The case of random interactions lends itself to natural extensions of spin glass formulations, while programmed interactions yield applications of higher-order extensions of neural network models. We consider the two cases in turn. 3.1 PROGRAMMED INTERACTIONS Here we query whether given sets of binary n-vectors can be stored as fixed points by a suitable selection of interaction coefficients. 
If such sets of prescribed vectors can be stored as stable states for some suitable choice of interaction coefficients, then proposition 1 will ensure that the chosen vectors are at the bottom of "energy wells" in the state space with each vector exercising a region of attraction around it-all characterestics of a physical associative memory. In such a situation the dynamical evolution of the network can be interpreted in terms of computations: error-correction, nearest neighbour search and associative memory. Of importance here is the maximum number of states that can be stored as fixed points for an appropriate choice of algebraic threshold network. This represents the maximal information storage capacity of such higher-order neural networks. Let d represent the degree ofthe algebraic threshold network. Let u(l), ... , u(m) be the m-set of vectors which we require to store as fixed points in a suitable algebraic threshold network. We will henceforth refer to these prescribed vectors as 47 48 memories. We define the storage capacity of an algebraic threshold network of degree d to be the maximal number m of arbitrarily chosen memories which can be stored with high probability for appropriate choices of coefficients in the network. Theorem 1 The maximal (algorithm independent) storage capacity of a homogeneous algebraic threshold network of degree d is less than or equal to 2 ( ~ ). Generalised Sum of Outer-Products Rule: The classical Reb bian rule for the linear case d = 1 (cf. [11] and quoted references) can be naturally extended to networks of higher-order. The coefficients WI, IE Id are constructed as the sum of generalised Kronecker outer-products, m WI = L u~a). a=l Theorem 2 The storage capacity of the outer-product algorithm applied to a homogeneous algebraic threshold network of degree d is less than or equal to nd /2(d + l)logn (also cf. [15,27]). Generalised Spectral Rule: For d = 1 the spectral rule amounts to iteratively projecting states orthogonally onto the linear space generated by u(1), ... , u(m), and then taking the closest point on the hypercube to this projection (cf. [27,28]). This approach can be extended to higher-orders as we now describe. Let W denote the n X N(n,d) matrix of coefficients WI arranged lexicographically; i.e., Wl,l,2, ... ,d-l,d Wl,2,3, ... ,d,d+l Wl,n-d+l,n-d+2, ... ,n-l,n W= W2,l,2, ... ,d-l,d W2,2,3, ... ,d,d+l W2,n-d+l,n-d+2, ... ,n-l,n Wn ,l,2, ... ,d-l,d Wn ,2,3, ... ,d,d+l W n ,n-d+l,n-d+2, ... ,n-l,n Note that the symmetry and the "zero-diagonal" nature of the interactions have been relaxed to increase capacity. Let U be the n X m matrix of memories. Form the extended N(n,d) X m binary matrix 1 U = [lu(l) ... lu(m)], where u(a) l,2,. .. ,d-l,d (a) u1,2, ... ,d-l,d+l (a) un _ d+ l,n- d+2, ... ,n-l,n Let A = dgP'<l) ... ),(m)] be a m X m diagonal matrix with positive diagonal terms. A generalisation of the spectral algorithm for choosing coefficients yields W = UA1Ut where 1 ut is the pseudo-inverse of 1 U. Theorem 3 The storage capacity of the generalised spectral algorithm is at best n ( d ). 3.2 RANDOM INTERACTIONS We consider homogeneous algebraic threshold networks whose weights WI are Li.d., N(O, 1) random variables. This is a natural generalisation to higher-order of Ising spin glasses with Gaussian interactions. We will show an asymptotic estimate for the number of fixed points of the structure. 
Asymptotic results for the usual case d = 1 of linear threshold networks with Gaussian interactions have been reported in the literature [6,9,16]. For i = 1, ... , n set s~ = Ui L: WIUI\i . IeId:ieI For each n the random variables S~, i = 1, ... , n are identically distributed, jointly Gaussian variables with zero mean, and variance O'~ = ( n ~ 1 ). Definition 2 For any given f3 ~ 0, a state u E run is f3-strongly stable iff S~ ~ f3O'n, for each i = 1, ... , n. The case f3 = 0 reverts to the usual case of fixed points. The parameter f3 is essentially a measure of how deep the well of attraction surrounding the fixed point is. The following proposition asserts that a 0-1 law ("phase transition") governs the expected number of fixed points which have wells of attraction above a certain depth. Let Fd(f3) be the expected number of f3-strongly stable states. Theorem 4 Corresponding to each fixed interaction order d there exists a positive constant f3d such that as n --+ 00, if f3 < f3d if f3 = f3d if f3 > f3d , where kd(f3) > 0, and 0 ~ Cd(f3) < 1 are parameters depending solely on f3 and the interaction order d. 4 CONCLUSION In fine, it appears possible to design shallow, polynomial size threshold circuits to compute a wide class of structured problems. The thesis that shallow circuits compute more efficiently than deep circuits is borne out. For the particular case of 49 50 higher-order networks, all the garnered results appear to point in the same direction: For neural networks of fixed degree d, the maximal number of programmable states is essentially of the order of nd• The total number of fixed points, however, appear to be exponential in number (at least for the random interaction case) though almost all of them have constant attraction depths. References [1] Y. S. Abu-Mostafa, "Number of synapses per neuron," in Analog VLSI and Neural Systems, ed. C. Mead, Addison Wesley, 1987. [2] P. Baldi, II. Some Contributions to the Theory of Neural Networks. Ph.D. Thesis, California Insitute of Technology, June 1986. [3] P. Baldi and S. S. Venkatesh, "Number of stable points for spin glasses and neural networks of higher orders," Phys. Rev. Lett., vol. 58, pp. 913-916, 1987. [4] P. Baldi and S. S. Venkatesh, "Fixed points of algebraic threshold networks," in preparation. [5] H. H. Chen, et al, "Higher order correlation model of associative memory," in Neural Networks for Computing. New York: AlP Conf. Proc., vol. 151, 1986. [6] S. F. Edwards and F. Tanaka, "Analytical theory of the ground state properties of a spin glass: I. ising spin glass," Jnl. Phys. F, vol. 10, pp. 2769-2778, 1980. [7] G. C. Fox and S. W. Otto, "Concurrent Computations and the Theory of Complex Systems," Caltech Concurrent Computation Program, March 1986. [8] F. H. Grick and C. Asanuma, ~'Certain aspects of the anatomy and physiology of the cerebral cortex," in Parallel Distributed Processing, vol. 2, eds. D. E. Rumelhart and J. L. McCelland, pp. 333-371, MIT Press, 1986. [9] D. J. Gross and M. Mezard, "The simplest spin glass," Nucl. Phys., vol. B240, pp. 431-452, 1984. [10] J. Hasted, "Almost optimal lower bounds for small depth circuits," Proc. 18-th ACM STOC, pp. 6-20, 1986. [11] J. J. Hopfield, "Neural networks and physical sytems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, pp. 25.54-2558, 1982. [12] J. S. Judd, "Complexity of connectionist learning with various node functions," Dept. of Computer and Information Science Technical Report, vol. 87-60, Univ. 
of Massachusetts, Amherst, 1987. [13] M. Kearns, M. Li, L. Pitt, and L. Valiant, "On the learnability of Boolean formulae," Proc. 19-th ACM STOC, 1987. [14] C. Koch, T. Poggio, and V. Torre, "Retinal ganglion cells: A functional interpretation of dendritic morphology," Phil. Trans. R. Soc. London, vol. B 288, pp. 227-264, 1982. [15] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh, "The capacity of the Hopfield associative memory," IEEE Trans. Inform. Theory, vol. IT-33, pp. 461-482, 1987. [16] R. J. McEliece and E. C. Posner, "The number of stable points of an infinite-range spin glass memory," JPL Telecomm. and Data Acquisition Progress Report, vol. 42-83, pp. 209-215, 1985. [17] C. A. Mead (ed.), Analog VLSI and Neural Systems, Addison Wesley, 1987. [18] N. Megiddo, "On the complexity of polyhedral separability," to appear in Jnl. Discrete and Computational Geometry, 1987. [19] S. Muroga, "Lower bounds on the number of threshold functions," IEEE Trans. Elec. Comp., vol. 15, pp. 805-806, 1966. [20] S. Muroga, Threshold Logic and its Applications, Wiley Interscience, 1971. [21] U. N. Peled and B. Simeone, "Polynomial-time algorithms for regular set-covering and threshold synthesis," Discr. Appl. Math., vol. 12, pp. 57-69, 1985. [22] D. Psaltis and C. H. Park, "Nonlinear discriminant functions and associative memories," in Neural Networks for Computing. New York: AIP Conf. Proc., vol. 151, 1986. [23] J. Reif, "On threshold circuits and polynomial computation," preprint. [24] R. Smolensky, "Algebraic methods in the theory of lower bounds for Boolean circuit complexity," Proc. 19-th ACM STOC, 1987. [25] L. G. Valiant, "A theory of the learnable," Comm. ACM, vol. 27, pp. 1134-1142, 1984. [26] L. G. Valiant, "Deductive learning," Phil. Trans. R. Soc. London, vol. A 312, pp. 441-446, 1984. [27] S. S. Venkatesh, Linear Maps with Point Rules: Applications to Pattern Classification and Associative Memory. Ph.D. Thesis, California Institute of Technology, Aug. 1986. [28] S. S. Venkatesh and D. Psaltis, "Linear and logarithmic capacities in associative neural networks," to appear in IEEE Trans. Inform. Theory. [29] S. S. Venkatesh, D. Psaltis, and J. Yu, private communication. [30] R. O. Winder, "Bounds on threshold gate realisability," IRE Trans. Elec. Comp., vol. EC-12, pp. 561-564, 1963. [31] A. C.-C. Yao, "Separating the poly-time hierarchy by oracles," Proc. 26-th IEEE FOCS, pp. 1-10, 1985.
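The band construction for symmetric functions in Section 2 of the paper above is concrete enough to code directly. The sketch below is my own illustration, using 0/1 inputs rather than the paper's plus/minus-one convention: it realises parity, the worst case mentioned in the text, as a depth-two threshold circuit with n hidden gates and one output gate. Hidden gate k fires when at least k inputs are on, and alternating plus/minus output weights pick out exactly the odd counts.

```python
import numpy as np
from itertools import product

def step(z):
    """Hard threshold gate: 1 if the weighted sum is nonnegative, else 0."""
    return (np.asarray(z) >= 0).astype(int)

def parity_depth2(x):
    """Parity of n binary inputs via a depth-two threshold circuit (n hidden gates)."""
    n = len(x)
    s = int(np.sum(x))
    hidden = step(s - np.arange(1, n + 1))             # gate k fires iff sum(x) >= k
    w_out = np.array([(-1) ** (k + 1) for k in range(1, n + 1)])
    return int(step(w_out @ hidden - 0.5))             # fires iff an odd number of inputs is on

n = 5
assert all(parity_depth2(x) == sum(x) % 2 for x in product([0, 1], repeat=n))
print("depth-two parity circuit verified for n =", n)
```

The same recipe, with the alternating output weights replaced by weights that mark the desired bands, handles any symmetric Boolean function with the linear-size, depth-two cost claimed in the text.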
1987
15
7
"'Ensemble' Boltzmann Units have Collective Computational Properties like those of Hopfield an(...TRUNCATED)
1987
16
8
"262 ON TROPISTIC PROCESSING AND ITS APPLICATIONS Manuel F. Fernandez General Electric Adva(...TRUNCATED)
1987
17
9
"814 NEUROMORPHIC NETWORKS BASED ON SPARSE OPTICAL ORTHOGONAL CODES Mario P. Vecchi and Jaw(...TRUNCATED)
1987
18