Abstract:
This article compares the performance of Fuzzy ARTMAP with that of Learned Vector Quantization and Back Propagation on a handwritten character recognition task. Training with Fuzzy ARTMAP to a fixed criterion used many fewer epochs. Voting with Fuzzy ARTMAP yielded the highest recognition rates.
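For readers unfamiliar with the voting scheme, the following is a rough sketch (not the authors' code) of the idea: several Fuzzy ARTMAP classifiers are trained on different orderings of the same data and their predictions are pooled by majority vote. The simplified single-map-field ARTMAP variant, the parameter values, and the toy data are illustrative assumptions.

```python
# Simplified Fuzzy ARTMAP voters trained on different input orderings,
# combined by majority vote.  An illustrative sketch, not the report's code.
import numpy as np

class SimpleFuzzyARTMAP:
    """Simplified Fuzzy ARTMAP: one Fuzzy ART module plus a category->label map."""
    def __init__(self, rho=0.7, alpha=0.001, beta=1.0, eps=0.001):
        self.rho0, self.alpha, self.beta, self.eps = rho, alpha, beta, eps
        self.w, self.labels = [], []              # category weights and class labels

    def _code(self, a):
        a = np.clip(np.asarray(a, float), 0.0, 1.0)
        return np.concatenate([a, 1.0 - a])       # complement coding

    def train(self, X, y, epochs=1):
        for _ in range(epochs):
            for a, lab in zip(X, y):
                I = self._code(a)
                rho = self.rho0                   # vigilance resets for each sample
                T = [np.minimum(I, w).sum() / (self.alpha + w.sum()) for w in self.w]
                for j in np.argsort(T)[::-1]:     # search categories in choice order
                    match = np.minimum(I, self.w[j]).sum() / I.sum()
                    if match < rho:
                        continue
                    if self.labels[j] == lab:     # resonance with the right label: learn
                        self.w[j] = (self.beta * np.minimum(I, self.w[j])
                                     + (1.0 - self.beta) * self.w[j])
                        break
                    rho = match + self.eps        # wrong label: match tracking raises vigilance
                else:                             # no category fits: recruit a new one
                    self.w.append(I.copy())
                    self.labels.append(lab)

    def predict(self, a):
        I = self._code(a)
        T = [np.minimum(I, w).sum() / (self.alpha + w.sum()) for w in self.w]
        return self.labels[int(np.argmax(T))]

def vote(train_X, train_y, test_X, n_voters=5, seed=0):
    """Train n_voters nets on different orderings and take a majority vote."""
    rng = np.random.default_rng(seed)
    nets = []
    for _ in range(n_voters):
        order = rng.permutation(len(train_X))
        net = SimpleFuzzyARTMAP()
        net.train([train_X[i] for i in order], [train_y[i] for i in order])
        nets.append(net)
    return [max(set(v := [net.predict(a) for net in nets]), key=v.count)
            for a in test_X]

# Tiny illustrative data (inputs scaled to [0, 1]):
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = [0, 0, 1, 1]
print(vote(X, y, [[0.15, 0.85], [0.85, 0.15]]))   # -> [0, 1]
```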
Abstract:
A feedforward neural network for invariant image preprocessing is proposed that represents the position, orientation, and size of an image figure (where it is) in a multiplexed spatial map. This map is used to generate an invariant representation of the figure that is insensitive to position, orientation, and size for purposes of pattern recognition (what it is). A multiscale array of oriented filters followed by competition between orientations and scales is used to define the Where filter.
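A minimal numerical sketch of the Where-filter idea, assuming simple derivative-of-Gaussian kernels and a winner-take-all argmax as the competition stage (both are illustrative stand-ins, not the paper's operators):

```python
import numpy as np
from scipy.signal import convolve2d

def oriented_kernel(size, angle, scale):
    """Odd-symmetric oriented filter: derivative of a Gaussian along `angle`."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    u = X * np.cos(angle) + Y * np.sin(angle)     # coordinate along the filter axis
    v = -X * np.sin(angle) + Y * np.cos(angle)
    g = np.exp(-(u**2 + v**2) / (2 * scale**2))
    k = -u * g                                    # responds to oriented contrast
    return k / np.abs(k).sum()

def where_map(image, n_orient=4, scales=(2.0, 4.0, 8.0), size=21):
    angles = [np.pi * i / n_orient for i in range(n_orient)]
    responses = np.stack([
        np.abs(convolve2d(image, oriented_kernel(size, a, s), mode="same"))
        for s in scales for a in angles
    ])                                            # shape: (n_scales*n_orient, H, W)
    winner = responses.argmax(axis=0)             # competition across orientation and scale
    strength = responses.max(axis=0)
    yx = np.unravel_index(strength.argmax(), strength.shape)
    scale_idx, orient_idx = divmod(int(winner[yx]), n_orient)
    return {"position": yx,                       # location of peak oriented contrast
            "filter_angle": angles[orient_idx],   # angle of the winning filter
            "scale": scales[scale_idx],
            "strength_map": strength}

img = np.zeros((64, 64))
img[30:34, 10:54] = 1.0                           # a bright horizontal bar
out = where_map(img)
print(out["position"], out["scale"])              # peak lies on the bar's contour
```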
Abstract:
We wish to construct a realization theory of stable neural networks and use this theory to model the variety of stable dynamics apparent in natural data. Such a theory should have numerous applications to constructing specific artificial neural networks with desired dynamical behavior. The networks used in this theory should have well understood dynamics yet be as diverse as possible to capture natural diversity. In this article, I describe a parameterized family of higher-order, gradient-like neural networks which have known arbitrary equilibria with unstable manifolds of specified dimension. Moreover, any system with hyperbolic dynamics is conjugate to one of these systems in a neighborhood of the equilibrium points. Prior work on how to synthesize attractors using dynamical systems theory, optimization, or direct parametric fits to known stable systems is either non-constructive, lacks generality, or has unspecified attracting equilibria. More specifically, we construct a parameterized family of gradient-like neural networks with a simple feedback rule which will generate equilibrium points with a set of unstable manifolds of specified dimension. Strict Lyapunov functions and nested periodic orbits are obtained for these systems and used as a method of synthesis to generate a large family of systems with the same local dynamics. This work is applied to show how one can interpolate finite sets of data on nested periodic orbits.
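The paper's construction is not reproduced here; the toy sketch below only illustrates the basic ingredient it builds on: a gradient system x' = -grad V(x) whose equilibrium has an unstable manifold of a chosen dimension, fixed by the signature of the Hessian of the potential. The quadratic potential and the explicit Euler integrator are assumptions for illustration only.

```python
# Toy gradient flow x' = -grad V(x) with V(x) = 0.5 * x^T H x.  The number of
# negative eigenvalues of H fixes the dimension of the unstable manifold at 0,
# and V decreases along trajectories since dV/dt = -||grad V||^2 <= 0.
import numpy as np

def make_system(n, unstable_dim, seed=0):
    """Quadratic potential whose Hessian has `unstable_dim` negative eigenvalues."""
    eig = np.array([-1.0] * unstable_dim + [1.0] * (n - unstable_dim))
    Q, _ = np.linalg.qr(np.random.default_rng(seed).normal(size=(n, n)))
    H = Q @ np.diag(eig) @ Q.T                    # symmetric, specified signature
    V = lambda x: 0.5 * x @ H @ x
    f = lambda x: -H @ x                          # gradient flow
    return f, V

def integrate(f, x0, dt=0.01, steps=500):
    x = np.array(x0, float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * f(x)                         # explicit Euler step
        traj.append(x.copy())
    return np.array(traj)

f, V = make_system(n=3, unstable_dim=1)
traj = integrate(f, x0=[0.1, 0.2, -0.1])
print("V decreases along the flow:", V(traj[0]), "->", V(traj[-1]))
```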
Abstract:
This article describes a neural pattern generator based on a cooperative-competitive feedback neural network. The two-channel version of the generator supports both in-phase and anti-phase oscillations. A scalar arousal level controls both the oscillation phase and frequency. As arousal increases, oscillation frequency increases and bifurcations from in-phase to anti-phase, or anti-phase to in-phase oscillations can occur. Coupled versions of the model exhibit oscillatory patterns which correspond to the gaits used in locomotion and other oscillatory movements by various animals.
Abstract:
This article describes a neural network model capable of generating a spatial representation of the pitch of an acoustic source. Pitch is one of several auditory percepts used by humans to separate multiple sound sources in the environment from each other. The model provides a neural instantiation of a type of "harmonic sieve". It is capable of quantitatively simulating a large body of psychoacoustical data, including new data on octave shift perception.
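As a schematic stand-in for the model's neural circuitry, a harmonic sieve can be sketched in a few lines: each candidate fundamental collects the spectral components lying near its harmonics, and the best-supported candidate is taken as the pitch. The candidate grid, tolerance, and tie-breaking rules below are assumptions for illustration.

```python
import numpy as np

def harmonic_sieve_pitch(component_freqs, f0_candidates=None,
                         n_harmonics=10, tol=0.04):
    """Pick the candidate f0 whose harmonic grid best captures the components."""
    comps = np.asarray(component_freqs, float)
    if f0_candidates is None:
        f0_candidates = np.arange(50.0, 500.0, 1.0)   # Hz grid of candidates

    def support(f0):
        harmonics = f0 * np.arange(1, n_harmonics + 1)
        rel_err = np.abs(comps[:, None] - harmonics[None, :]) / harmonics
        hits = rel_err.min(axis=1) < tol              # components near some harmonic
        # Rank by: most components captured, then smallest total deviation,
        # then highest f0 (to avoid spurious subharmonics).
        return (hits.sum(), -rel_err.min(axis=1)[hits].sum(), f0)

    return max(f0_candidates, key=support)

# Components of a 200 Hz complex tone with a missing fundamental:
print(harmonic_sieve_pitch([400, 600, 800, 1000]))    # 200.0
```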
Abstract:
An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to two large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. Finally, a diffusive filling-in operation within the segmented regions produces coherent visible structures. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
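The early shunting stage mentioned above can be illustrated with the standard feedforward shunting on-center off-surround equation; the sketch below computes its equilibrium directly for a 1-D signal. Parameter values and kernel widths are illustrative assumptions, not the paper's settings.

```python
# Equilibrium of dx_i/dt = -A*x_i + (B - x_i)*C_i - (x_i + D)*E_i, where C_i
# and E_i are Gaussian-weighted center and surround inputs: the output is
# bounded in [-D, B] (dynamic-range compression) and enhances local contrast.
import numpy as np

def gaussian_kernel(radius, sigma):
    r = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-r**2 / (2 * sigma**2))
    return k / k.sum()

def shunting_equilibrium(signal, A=1.0, B=1.0, D=0.5,
                         sigma_c=1.0, sigma_s=5.0, radius=15):
    signal = np.asarray(signal, float)
    C = np.convolve(signal, gaussian_kernel(radius, sigma_c), mode="same")  # center
    E = np.convolve(signal, gaussian_kernel(radius, sigma_s), mode="same")  # surround
    return (B * C - D * E) / (A + C + E)          # steady state of the shunting ODE

# A step edge riding on two very different luminance levels:
x = np.concatenate([np.full(50, 10.0), np.full(50, 12.0),
                    np.full(50, 1000.0), np.full(50, 1200.0)])
y = shunting_equilibrium(x)
print(y.min(), y.max())                            # stays within a modest, bounded range
```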
Abstract:
A new family of neural network architectures is presented. This family of architectures solves the problem of constructing and training minimal neural network classification expert systems by using switching theory. The primary insight that leads to the use of switching theory is that the problem of minimizing the number of rules and the number of IF statements (antecedents) per rule in a neural network expert system can be recast into the problem of minimizing the number of digital gates and the number of connections between digital gates in a Very Large Scale Integrated (VLSI) circuit. The rules that the neural network generates to perform a task are readily extractable from the network's weights and topology. Analysis and simulations on the Mushroom database illustrate the system's performance.
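The core analogy, that fewer rules and antecedents correspond to fewer gates and connections, can be illustrated with off-the-shelf two-level logic minimization. The sketch below is only a stand-in for the paper's switching-theory construction: sympy's Quine-McCluskey-based SOPform reduces a toy truth table to a small sum-of-products, i.e., a small set of IF-THEN rules. The truth table is an assumption.

```python
from sympy import symbols
from sympy.logic.boolalg import SOPform

a, b, c = symbols("a b c")
# Rows (minterms) of a toy truth table for which the class output is 1.
minterms = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
rules = SOPform([a, b, c], minterms)
print(rules)   # e.g. (a & b) | (a & c) | (b & c): three rules, two antecedents each
```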
Abstract:
A dynamic distributed model is presented that reproduces the dynamics of a wide range of varied battle scenarios with a general and abstract representation. The model illustrates the rich dynamic behavior that can be achieved from a simple generic model.
Abstract:
This paper describes the design of a self-organizing, hierarchical neural network model of unsupervised serial learning. The model learns to recognize, store, and recall sequences of unitized patterns, using either short-term memory (STM) or both STM and long-term memory (LTM) mechanisms. Timing information is learned and recall (both from STM and from LTM) is performed with a learned rhythmical structure. The network, bearing similarities with ART (Carpenter & Grossberg 1987a), learns to map temporal sequences to unitized patterns, which makes it suitable for hierarchical operation. It is therefore capable of self-organizing codes for sequences of sequences. The capacity is only limited by the number of nodes provided. Selected simulation results are reported to illustrate system properties.
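One ingredient of such models can be sketched simply: a sequence stored as an STM activity gradient and recalled by repeatedly selecting and then suppressing the most active item. The sketch below omits the paper's LTM, timing, and hierarchical machinery; the gradient parameter is an assumption.

```python
# Primacy-gradient STM storage and competitive readout (assumes distinct,
# unitized items).  An illustrative toy, not the paper's network equations.
def store_sequence(items, mu=0.8):
    """Assign STM activities forming a primacy gradient: earlier items stronger."""
    return {item: mu**i for i, item in enumerate(items)}

def recall_sequence(stm):
    """Read out by repeatedly choosing the most active item and suppressing it."""
    stm = dict(stm)
    order = []
    while stm:
        winner = max(stm, key=stm.get)   # winner-take-all choice
        order.append(winner)
        del stm[winner]                  # suppression after recall
    return order

seq = ["A", "B", "C", "D"]
stm = store_sequence(seq)
assert recall_sequence(stm) == seq       # gradient readout recovers the order
print(stm)
```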
Abstract:
This paper studies several applications of genetic algorithms (GAs) within the neural networks field. After generating a robust GA engine, the system was used to generate neural network circuit architectures. This was accomplished by using the GA to determine the weights in a fully interconnected network. The importance of the internal genetic representation was shown by testing different approaches. The effects on optimization speed of varying the constraints imposed upon the desired network were also studied. It was observed that relatively loose constraints provided results comparable to a fully constrained system. The type of neural network circuit generated was the recurrent competitive field, as described by Grossberg (1982).
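A compact sketch of the basic setup, a GA whose genome is the flattened weight vector of a small network, is given below. The population size, mutation rate, toy XOR task, and plain feedforward network (rather than the recurrent competitive fields used in the paper) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([0, 1, 1, 0], float)
N_HID = 4
N_W = 2 * N_HID + N_HID + N_HID + 1          # weights + biases for a 2-4-1 net

def forward(w, x):
    W1 = w[:2 * N_HID].reshape(2, N_HID)
    b1 = w[2 * N_HID:3 * N_HID]
    W2 = w[3 * N_HID:4 * N_HID]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - Y) ** 2)    # higher is better

pop = rng.normal(scale=1.0, size=(60, N_W))      # genomes = weight vectors
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]      # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        pa, pb = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, N_W)               # one-point crossover
        child = np.concatenate([pa[:cut], pb[cut:]])
        child += rng.normal(scale=0.1, size=N_W) # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(np.round(forward(best, X), 2))             # should approach [0, 1, 1, 0]
```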
Abstract:
Genetic Algorithms (GAs) make use of an internal representation of a given system in order to perform optimization functions. The actual structural layout of this representation, called a genome, has a crucial impact on the outcome of the optimization process. The purpose of this paper is to study the effects of different internal representations in a GA which generates neural networks. A second GA was used to optimize the genome structure, and the resulting structure produces an optimized system in a shorter time.
Abstract:
This paper introduces a new class of predictive ART architectures, called Adaptive Resonance Associative Map (ARAM), which performs rapid yet stable heteroassociative learning in a real-time environment. ARAM can be visualized as two ART modules sharing a single recognition code layer. The unit for recruiting a recognition code is a pattern pair. Code stabilization is ensured by restricting coding to states where resonances are reached in both modules. Simulation results have shown that ARAM is capable of self-stabilizing association of arbitrary pattern pairs of arbitrary complexity appearing in arbitrary sequence by fast learning in a real-time environment. Due to the symmetrical network structure, associative recall can be performed in both directions.
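The bidirectional recall idea can be sketched as two Fuzzy-ART-style fields sharing one recognition layer, each category storing a weight pair. The toy below uses fast learning on distinct pairs and omits ARAM's match/reset dynamics; the choice-function parameter is an assumption.

```python
import numpy as np

def code(a):
    a = np.asarray(a, float)
    return np.concatenate([a, 1.0 - a])          # complement coding

class MiniARAM:
    def __init__(self, alpha=0.001):
        self.alpha = alpha
        self.wa, self.wb = [], []                # one (wa, wb) weight pair per category

    def learn_pair(self, a, b):
        self.wa.append(code(a))                  # fast learning: weight = coded pattern
        self.wb.append(code(b))

    def _choose(self, I, weights):
        T = [np.minimum(I, w).sum() / (self.alpha + w.sum()) for w in weights]
        return int(np.argmax(T))                 # fuzzy choice function

    def recall_b(self, a):                       # A -> B direction
        j = self._choose(code(a), self.wa)
        return self.wb[j][:len(self.wb[j]) // 2]

    def recall_a(self, b):                       # B -> A direction
        j = self._choose(code(b), self.wb)
        return self.wa[j][:len(self.wa[j]) // 2]

net = MiniARAM()
net.learn_pair([1, 0, 0], [0, 1])
net.learn_pair([0, 1, 0], [1, 0])
print(net.recall_b([1, 0, 0]), net.recall_a([1, 0]))   # recall works in both directions
```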
Abstract:
One of the advantages of biological skeleto-motor systems is the opponent muscle design, which in principle makes it possible to achieve facile independent control of joint angle and joint stiffness. Prior analysis of equilibrium states of a biologically-based neural network for opponent muscle control, the FLETE model, revealed that such independent control requires specialized interneuronal circuitry to efficiently coordinate the opponent force generators. In this chapter, we refine the specification of the FLETE circuit variables and update the equilibrium analysis. We also incorporate additional neuronal circuitry that ensures efficient opponent force generation and velocity regulation during movement.
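A back-of-the-envelope illustration (not the FLETE circuit itself) of why the opponent design matters: splitting the muscle commands into a co-contraction term and a difference term lets net joint torque track the difference while stiffness tracks the sum, so angle and stiffness can be set independently. The linear, spring-like muscle model is an assumption.

```python
def opponent_commands(angle_cmd, cocontraction):
    """Map a desired angle command and a co-contraction (stiffness) level
    onto agonist/antagonist activations."""
    agonist = cocontraction + angle_cmd / 2.0
    antagonist = cocontraction - angle_cmd / 2.0
    return max(agonist, 0.0), max(antagonist, 0.0)

def joint_outputs(agonist, antagonist, k=1.0):
    torque = k * (agonist - antagonist)          # drives joint angle
    stiffness = k * (agonist + antagonist)       # resists perturbations
    return torque, stiffness

for P in (0.5, 1.0, 2.0):                        # raise co-contraction ...
    ag, an = opponent_commands(angle_cmd=0.4, cocontraction=P)
    print(joint_outputs(ag, an))                 # ... torque stays fixed, stiffness rises
```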
Abstract:
This article describes how corollary discharges from outflow eye movement commands can be transformed by two stages of opponent neural processing into a head-centered representation of 3-D target position. This representation implicitly defines a cyclopean coordinate system whose variables approximate the binocular vergence and spherical horizontal and vertical angles with respect to the observer's head. Various psychophysical data concerning binocular distance perception and reaching behavior are clarified by this representation. The representation provides a foundation for learning head-centered and body-centered invariant representations of both foveated and non-foveated 3-D target positions. It also enables a solution of the classical motor equivalence problem to be developed, whereby many different joint configurations of a redundant manipulator can all be used to realize a desired trajectory in 3-D space.
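A purely geometric sketch of the kind of representation described above: from the two eyes' horizontal gaze angles (stand-ins for the outflow commands), the target's head-centered position is recovered by triangulation, and cyclopean vergence and version variables are read off as the difference and mean of the angles. The 2-D horizontal-plane simplification and the angle conventions are assumptions.

```python
import numpy as np

def head_centered_target(theta_left, theta_right, interocular=0.065):
    """Angles in radians, measured from straight ahead (positive = rightward)."""
    eye_l = np.array([-interocular / 2, 0.0])
    eye_r = np.array([+interocular / 2, 0.0])
    d_l = np.array([np.sin(theta_left), np.cos(theta_left)])    # gaze directions
    d_r = np.array([np.sin(theta_right), np.cos(theta_right)])
    # Intersect the two gaze rays: eye_l + t_l*d_l == eye_r + t_r*d_r
    t = np.linalg.solve(np.column_stack([d_l, -d_r]), eye_r - eye_l)
    target = eye_l + t[0] * d_l
    vergence = theta_left - theta_right          # opponent (difference) channel
    version = 0.5 * (theta_left + theta_right)   # conjugate (mean) channel
    return target, vergence, version

# Eyes converging on a point about 0.5 m ahead and slightly to the right:
tgt, verg, vers = head_centered_target(np.radians(8.0), np.radians(0.5))
print(tgt, np.degrees(verg), np.degrees(vers))
```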
Abstract:
A neural network theory of 3-D vision, called FACADE Theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a Boundary Contour System (BCS) and a Feature Contour System (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded objects are completed and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, DaVinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analysed. The BCS and FCS sub-systems model aspects of how the two parvocellular cortical processing streams that join the Lateral Geniculate Nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-Depth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with cortical mechanisms of spatial attention, attentive object learning, and visual search. Adaptive Resonance Theory (ART) mechanisms model aspects of how prestriate visual cortex interacts reciprocally with a visual object recognition system in inferotemporal cortex (IT) for purposes of attentive object learning and categorization. Object attention mechanisms of the What cortical processing stream through IT cortex are distinguished from spatial attention mechanisms of the Where cortical processing stream through parietal cortex. Parvocellular BCS and FCS signals interact with the model What stream. Parvocellular FCS and magnocellular Motion BCS signals interact with the model Where stream. Reciprocal interactions between these visual, What, and Where mechanisms are used to discuss data about visual search and saccadic eye movements, including fast search of conjunctive targets, search of 3-D surfaces, selective search of like-colored targets, attentive tracking of multi-element groupings, and recursive search of simultaneously presented targets.
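One FACADE ingredient mentioned above, surface filling-in confined by boundaries, can be sketched numerically: featural activity diffuses between neighboring cells, but the diffusion is gated off wherever a boundary signal is present, so filled-in activity stays inside connected boundary regions. The grid, gating function, and iteration count below are illustrative assumptions, not the theory's equations.

```python
import numpy as np

def fill_in(feature, boundary, iters=300, rate=0.2):
    """Boundary-gated diffusion: activity spreads between neighbors unless a
    boundary signal closes the 'gate' between them."""
    s = feature.astype(float).copy()
    gate = 1.0 / (1.0 + 25.0 * boundary)                  # 1 = open, ~0 = blocked
    for _ in range(iters):
        delta = np.zeros_like(s)
        for axis in (0, 1):
            diff = np.roll(s, -1, axis=axis) - s          # difference toward next cell
            perm = gate * np.roll(gate, -1, axis=axis)    # both cells must be open
            edge = [slice(None), slice(None)]
            edge[axis] = -1
            perm[tuple(edge)] = 0.0                       # no wrap-around flow
            flow = perm * diff                            # flux into each cell from its neighbor
            delta += flow - np.roll(flow, 1, axis=axis)   # inflow minus outflow (conservative)
        s += rate * delta
    return s

# Two compartments separated by a closed vertical boundary; a single featural
# spot in the left compartment fills that region but does not leak across.
H, W = 32, 64
boundary = np.zeros((H, W))
boundary[:, 31:33] = 1.0
boundary[0, :] = boundary[-1, :] = 1.0
boundary[:, 0] = boundary[:, -1] = 1.0
feature = np.zeros((H, W))
feature[16, 10] = 1.0
filled = fill_in(feature, boundary)
print(filled[16, 5:30].mean(), filled[16, 35:60].mean())  # left side >> right side
```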