3 results for network learning

in CaltechTHESIS


Relevance:

30.00%

Abstract:

This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems; and it illustrates the influence of underlying hardware upon the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.

Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but in a way that qualifies as unsupervised learning: they adapt based only on the raw input data, with no external teacher providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.
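
To make the idea concrete, here is a minimal sketch (an assumption for illustration, not the thesis's exact construction) of unsupervised noise filtering with back-propagation: a small network is trained to reconstruct its own input through a narrow hidden layer, so the target is the raw data itself and no external teacher is needed. The window size, hidden-layer width, and learning rate are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_denoiser(signal, window=16, hidden=4, lr=0.01, epochs=200):
        """Train a one-hidden-layer network to reconstruct noisy windows.

        The narrow hidden layer can only pass the structured,
        low-dimensional part of the signal, discarding broadband noise.
        """
        # Slice the 1-D signal into overlapping windows (the network's input).
        X = np.stack([signal[i:i + window] for i in range(len(signal) - window)])
        W1 = rng.normal(0.0, 0.1, (window, hidden))
        W2 = rng.normal(0.0, 0.1, (hidden, window))
        for _ in range(epochs):
            H = np.tanh(X @ W1)   # hidden activations
            Y = H @ W2            # reconstruction of the input
            err = Y - X           # unsupervised: the target is the input itself
            # Back-propagate the squared reconstruction error.
            gW2 = H.T @ err / len(X)
            gW1 = X.T @ ((err @ W2.T) * (1.0 - H**2)) / len(X)
            W1 -= lr * gW1
            W2 -= lr * gW2
        return X, W1, W2

    t = np.linspace(0.0, 8.0 * np.pi, 1024)
    noisy = np.sin(t) + 0.3 * rng.normal(size=t.size)
    X, W1, W2 = train_denoiser(noisy)
    denoised = np.tanh(X @ W1) @ W2   # filtered estimate of each window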

Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.
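
One common formalization of this effect (a toy illustration under assumed parameters, far simpler than the thesis's piriform model) treats acetylcholine as a switch that suppresses recurrent feedback while a pattern is being stored, so that previously stored memories cannot intrude on new learning:

    import numpy as np

    def step(x, W, ach_on):
        # With ACh "on", recurrent transmission is attenuated (learning mode);
        # with ACh "off", full recurrent feedback drives recall.
        recurrent_gain = 0.2 if ach_on else 1.0
        return np.sign(recurrent_gain * (W @ x) + x)

    def learn(patterns, ach_on):
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            y = step(p, W, ach_on)        # network activity while p is presented
            W += np.outer(y, y) / n       # Hebbian update on the actual activity
            np.fill_diagonal(W, 0.0)
        return W

    rng = np.random.default_rng(1)
    patterns = rng.choice([-1.0, 1.0], size=(5, 64))
    W_ach = learn(patterns, ach_on=True)    # clean storage: ACh present during learning
    W_none = learn(patterns, ach_on=False)  # interference: recall intrudes on learning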

Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
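
A minimal discrete-time sketch of this noise-correlation scheme (written here in numpy for clarity; the thesis realizes the operations in analog hardware) perturbs every parameter with zero-mean noise, measures the resulting change in a scalar cost, and steps each parameter against the correlation of its noise with that change:

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_descent(cost, theta, sigma=0.01, lr=0.1, steps=2000):
        for _ in range(steps):
            noise = sigma * rng.standard_normal(theta.shape)  # per-parameter noise
            delta = cost(theta + noise) - cost(theta)         # measured cost change
            # E[noise * delta] is sigma^2 times the gradient, so this update
            # follows a noisy version of gradient descent without ever
            # computing a gradient explicitly.
            theta = theta - lr * (delta / sigma**2) * noise
        return theta

    # Example: minimize a quadratic bowl using only cost evaluations.
    target = np.array([1.0, -2.0, 0.5])
    cost = lambda th: np.sum((th - target) ** 2)
    print(noisy_descent(cost, np.zeros(3)))  # converges near `target`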

Relevance:

30.00%

Abstract:

A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The approaches developed so far, such as outer products, learning algorithms, or energy functions, suffer from the following deficiencies: long training and specification times, no guarantee of working on all inputs, and a requirement for full connectivity.

Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
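
As a rough illustration of the kind of circuit involved (parameter values here are illustrative assumptions, not taken from the thesis), a Winner-Take-All network can be simulated as continuous Hopfield-style units under global inhibition; the unit receiving the largest input converges toward 1 and suppresses the rest:

    import numpy as np

    def winner_take_all(inputs, steps=500, dt=0.05, gain=8.0):
        u = np.zeros_like(inputs)                   # internal unit states
        for _ in range(steps):
            v = 1.0 / (1.0 + np.exp(-gain * u))     # sigmoidal outputs in [0, 1]
            inhibition = v.sum() - v                # each unit inhibits all others
            u += dt * (-u + inputs - inhibition)    # leaky integration
        return v

    print(winner_take_all(np.array([0.3, 0.9, 0.5])))  # approximately [0, 1, 0]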

Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, we present a feasible architecture for a large (1024-input) ATM packet switch. The massive parallelism of neural networks lets us consider algorithms that were previously computationally unattainable, and these now-viable algorithms lead to new perspectives on switch design.
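
To sketch the arbitration task itself (a sequential stand-in shown only to define the computation; the point of the neural design is that a Winner-Take-All circuit per output port computes this matching in parallel), each output grants at most one requesting input and each input sends at most one packet per time slot:

    import numpy as np

    def arbitrate(requests):
        """requests[i, j] = 1 if input i has a packet destined for output j."""
        n_in, n_out = requests.shape
        grant = np.zeros_like(requests)
        free = np.ones(n_in, dtype=bool)           # inputs not yet granted
        for j in range(n_out):                     # one competition per output port
            candidates = np.flatnonzero(requests[:, j] & free)
            if candidates.size:
                winner = candidates[0]             # stand-in for the WTA winner
                grant[winner, j] = 1
                free[winner] = False               # one packet per input per slot
        return grant

    requests = np.array([[1, 1, 0],
                         [1, 0, 1],
                         [0, 1, 1]])
    print(arbitrate(requests))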

Relevance:

30.00%

Abstract:

A fundamental question in neuroscience is how distributed networks of neurons communicate and coordinate dynamically and specifically. Several models propose that oscillating local networks can transiently couple to each other through phase-locked firing. Coherence of local field potentials (LFPs) between synaptically connected regions is often presented as evidence for such coupling. However, the physiological correlates of LFP signals depend on many anatomical and physiological factors, and how the underlying neural processes collectively generate features at different spatiotemporal scales is poorly understood. High-frequency oscillations in the hippocampus, including gamma rhythms (30-100 Hz), which are organized by theta oscillations (5-10 Hz) during active exploration and REM sleep, and sharp wave-ripples (SWRs, 140-200 Hz) during immobility or slow-wave sleep, have each been associated with various aspects of learning and memory. Deciphering their physiology and functional consequences is crucial to understanding the operation of the hippocampal network.
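
For readers who want to see how these bands are typically isolated in practice, the following helper (an assumed convenience, not code from the thesis) applies zero-phase band-pass filters at the frequency ranges quoted above:

    import numpy as np
    from scipy.signal import butter, filtfilt

    BANDS = {"theta": (5, 10), "gamma": (30, 100), "ripple": (140, 200)}  # Hz

    def band_filter(lfp, fs, band):
        low, high = BANDS[band]
        b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, lfp)   # zero-phase filtering preserves event timing

    fs = 1250.0                      # an assumed, typical LFP sampling rate
    t = np.arange(0.0, 2.0, 1.0 / fs)
    lfp = np.sin(2 * np.pi * 8 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)
    theta = band_filter(lfp, fs, "theta")
    gamma = band_filter(lfp, fs, "gamma")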

We investigated the origins and coordination of high frequency LFPs in the hippocampo-entorhinal network using both biophysical models and analyses of large-scale recordings in behaving and sleeping rats. We found that the synchronization of pyramidal cell spikes substantially shapes, or even dominates, the electrical signature of SWRs in area CA1 of the hippocampus. The precise mechanisms coordinating this synchrony are still unresolved, but they appear to also affect CA1 activity during theta oscillations. The input to CA1, which often arrives in the form of gamma-frequency waves of activity from area CA3 and layer 3 of entorhinal cortex (EC3), did not strongly influence the timing of CA1 pyramidal cells. Rather, our data are more consistent with local network interactions governing pyramidal cells' spike timing during the integration of their inputs. Furthermore, the relative timing of input from EC3 and CA3 during the theta cycle matched that found in previous work to engage mechanisms for synapse modification and active dendritic processes. Our work demonstrates how local networks interact with upstream inputs to generate a coordinated hippocampal output during behavior and sleep, in the form of theta-gamma coupling and SWRs.
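
One standard way to quantify the theta-gamma coupling mentioned here (a hedged sketch; the band limits and bin count are illustrative choices, and the thesis's actual analyses are more involved) is to bin the gamma-band amplitude envelope by theta phase, with both obtained from the Hilbert transform:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def phase_amplitude_profile(lfp, fs, n_bins=18):
        def bandpass(lo, hi):
            b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, lfp)
        theta_phase = np.angle(hilbert(bandpass(5, 10)))   # instantaneous theta phase
        gamma_amp = np.abs(hilbert(bandpass(30, 100)))     # gamma amplitude envelope
        edges = np.linspace(-np.pi, np.pi, n_bins + 1)
        bins = np.digitize(theta_phase, edges) - 1
        # A flat profile means no coupling; a peaked profile means gamma
        # amplitude rides a preferred phase of the theta cycle.
        return np.array([gamma_amp[bins == k].mean() for k in range(n_bins)])

    fs = 1250.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    theta = np.sin(2 * np.pi * 8 * t)
    # Synthetic coupled signal: gamma amplitude is largest at the theta peak.
    lfp = theta + 0.2 * (1.0 + theta) * np.sin(2 * np.pi * 60 * t)
    print(phase_amplitude_profile(lfp, fs))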