65 results for Telemedicine university network (RUTE)


Relevance: 30.00%

Publisher:

Abstract:

This article presents a new method for predicting viral resistance to seven protease inhibitors from the HIV-1 genotype, and for identifying the positions in the protease gene at which the specific nature of the mutation affects resistance. The neural network Analog ARTMAP predicts protease inhibitor resistance from viral genotypes. A feature selection method detects genetic positions that contribute to resistance both alone and through interactions with other positions. This method has identified positions 35, 37, 62, and 77, where traditional feature selection methods have not detected a contribution to resistance. At several positions in the protease gene, mutations confer differing degrees of resistance, depending on the specific amino acid to which the sequence has mutated. To find these positions, an Amino Acid Space is introduced to represent genes in a vector space that captures the functional similarity between amino acid pairs. Feature selection identifies several new positions, including 36, 37, and 43, with amino acid-specific contributions to resistance. Analog ARTMAP networks applied to inputs that represent specific amino acids at these positions perform better than networks that use only mutation locations.
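
The Amino Acid Space idea can be illustrated with a small sketch: represent each residue as a vector of physicochemical properties so that functionally similar amino acids lie close together, then build classifier inputs from selected protease positions. The property set, the values, and the helper names below are illustrative assumptions, not the article's actual construction of the space.

```python
# Hypothetical property-based embedding of amino acids (hydrophobicity,
# volume, charge); the article derives its space from amino acid pair
# similarity, which this toy table only stands in for.
AA_PROPERTIES = {
    "A": (1.8, 88.6, 0.0),
    "D": (-3.5, 111.1, -1.0),
    "I": (4.5, 166.7, 0.0),
    "V": (4.2, 140.0, 0.0),
}

def encode_positions(sequence, positions):
    """Concatenate property vectors for selected protease positions (1-based)."""
    features = []
    for pos in positions:
        residue = sequence[pos - 1]
        features.extend(AA_PROPERTIES.get(residue, (0.0, 0.0, 0.0)))
    return features

# Example: encode placeholder residues at some of the positions named above.
example_sequence = "A" * 99   # placeholder 99-residue protease sequence
print(encode_positions(example_sequence, [35, 36, 37, 43, 62, 77]))
```

Position-specific vectors of this kind would then be presented to a classifier (Analog ARTMAP in the article) instead of mutation locations alone.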

Relevance: 30.00%

Publisher:

Abstract:

Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Single-channel Fusion ARTMAP is functionally equivalent to Fuzzy ART during unsupervised learning and to Fuzzy ARTMAP during supervised learning. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking thereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network. Fusion ARTMAP's multi-channel coding is illustrated by simulations of the Quadruped Mammal database.
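
The parallel match tracking step can be sketched as follows, under simplifying assumptions: each channel reports a scalar match value and a vigilance, all vigilances are raised in lockstep after a predictive error, and only the channel whose match fails first is reset. This is an illustration of the selective reset idea, not the full Fusion ARTMAP dynamics.

```python
def parallel_match_tracking(matches, vigilances, step=0.01):
    """Raise all vigilances together; return the index of the channel that resets."""
    vigilances = list(vigilances)
    while True:
        for i, (m, v) in enumerate(zip(matches, vigilances)):
            if m < v:                     # reset triggers in the poorest-matching channel
                return i, vigilances
        vigilances = [v + step for v in vigilances]

# Example: channel 1 has the weakest match, so it alone is reset.
channel_index, raised = parallel_match_tracking([0.9, 0.6, 0.8], [0.5, 0.5, 0.5])
print(channel_index, [round(v, 2) for v in raised])
```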

Relevance: 30.00%

Publisher:

Abstract:

This paper describes a self-organizing neural network that rapidly learns a body-centered representation of 3-D target positions. This representation remains invariant under head and eye movements, and is a key component of sensory-motor systems for producing motor equivalent reaches to targets (Bullock, Grossberg, and Guenther, 1993).

Relevance: 30.00%

Publisher:

Abstract:

This paper introduces ART-EMAP, a neural architecture that uses spatial and temporal evidence accumulation to extend the capabilities of fuzzy ARTMAP. ART-EMAP combines supervised and unsupervised learning and a medium-term memory process to accomplish stable pattern category recognition in a noisy input environment. The ART-EMAP system features (i) distributed pattern registration at a view category field; (ii) a decision criterion for mapping between view and object categories which can delay categorization of ambiguous objects and trigger an evidence accumulation process when faced with a low confidence prediction; (iii) a process that accumulates evidence at a medium-term memory (MTM) field; and (iv) an unsupervised learning algorithm to fine-tune performance after a limited initial period of supervised network training. ART-EMAP dynamics are illustrated with a benchmark simulation example. Applications include 3-D object recognition from a series of ambiguous 2-D views.
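
The evidence accumulation process (features (ii) and (iii) above) can be sketched as follows: each ambiguous view contributes a distributed prediction over object categories, contributions are summed in a medium-term memory field, and a categorization is issued only once the leading object exceeds the runner-up by a confidence margin. The margin value and the toy view predictions are assumptions made for illustration.

```python
def accumulate_and_decide(view_predictions, margin=0.5):
    """Sum per-view evidence in MTM; commit to an object once confidence is high enough."""
    mtm = {}
    for prediction in view_predictions:            # one {object: evidence} dict per 2-D view
        for category, evidence in prediction.items():
            mtm[category] = mtm.get(category, 0.0) + evidence
        ranked = sorted(mtm.items(), key=lambda kv: kv[1], reverse=True)
        if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
            return ranked[0][0], mtm               # confident prediction
    return None, mtm                               # still ambiguous after all views

# Example: individual views are ambiguous, but accumulated evidence favors "cup".
views = [{"cup": 0.5, "bowl": 0.5}, {"cup": 0.6, "bowl": 0.4}, {"cup": 0.7, "bowl": 0.3}]
print(accumulate_and_decide(views))
```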

Relevance: 30.00%

Publisher:

Abstract:

A model which extends the adaptive resonance theory model to sequential memory is presented. This new model learns sequences of events and recalls a sequence when presented with parts of the sequence. A sequence can have repeated events, and different sequences can share events. The ART model is modified by creating interconnected sublayers within ART's F2 layer. Nodes within F2 learn temporal patterns by forming recency gradients within LTM. Versions of the ART model such as ART 1, ART 2, and fuzzy ART can be used.
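
The recency gradient idea can be sketched numerically: items presented more recently are stored with larger LTM traces, so the order of a sequence can be read back out from the trace strengths. The decay factor and the sorted readout are illustrative assumptions; handling repeated and shared events requires the interconnected F2 sublayers described above, which this sketch omits.

```python
def store_recency_gradient(sequence, decay=0.7):
    """Return LTM traces in which more recent items carry larger values."""
    return {item: decay ** age for age, item in enumerate(reversed(sequence))}

def recall(traces):
    """Read the stored sequence back out by ascending trace strength."""
    return [item for item, _ in sorted(traces.items(), key=lambda kv: kv[1])]

ltm = store_recency_gradient(["wake", "wash", "dress", "eat"])
print(ltm)          # the earliest event has the smallest trace
print(recall(ltm))  # -> ['wake', 'wash', 'dress', 'eat']
```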

Relevance: 30.00%

Publisher:

Abstract:

A new neural network architecture is introduced for the recognition of pattern classes after supervised and unsupervised learning. Applications include spatio-temporal image understanding and prediction and 3-D object recognition from a series of ambiguous 2-D views. The architecture, called ART-EMAP, achieves a synthesis of adaptive resonance theory (ART) and spatial and temporal evidence integration for dynamic predictive mapping (EMAP). ART-EMAP extends the capabilities of fuzzy ARTMAP in four incremental stages. Stage 1 introduces distributed pattern representation at a view category field. Stage 2 adds a decision criterion to the mapping between view and object categories, delaying identification of ambiguous objects when faced with a low confidence prediction. Stage 3 augments the system with a field where evidence accumulates in medium-term memory (MTM). Stage 4 adds an unsupervised learning process to fine-tune performance after the limited initial period of supervised network training. Each ART-EMAP stage is illustrated with a benchmark simulation example, using both noisy and noise-free data. A concluding set of simulations demonstrates ART-EMAP performance on a difficult 3-D object recognition problem.
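
The Stage 2 decision criterion can be sketched as a simple confidence test on the distributed prediction: the view-to-object mapping commits only when the best object clearly dominates the runner-up, and otherwise delays identification so that evidence can accumulate (Stage 3). The ratio test and threshold below are assumptions, not the paper's exact rule.

```python
def decide_or_delay(object_scores, threshold=2.0):
    """Return the predicted object, or None to signal a delayed, low-confidence decision."""
    ranked = sorted(object_scores.items(), key=lambda kv: kv[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if runner_up[1] == 0 or best[1] / runner_up[1] >= threshold:
        return best[0]
    return None   # ambiguous view: postpone identification and gather more evidence

print(decide_or_delay({"chair": 0.8, "stool": 0.7}))   # ambiguous -> None
print(decide_or_delay({"chair": 0.9, "stool": 0.2}))   # confident -> "chair"
```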

Relevance: 30.00%

Publisher:

Abstract:

The distributed outstar, a generalization of the outstar neural network for spatial pattern learning, is introduced. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, whose activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three synaptic transmission functions, defined by a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur. Only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the unit of long-term memory in such a system is an adaptive threshold, rather than the multiplicative path weight widely used in neural models.
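
The atrophy-due-to-disuse principle can be sketched numerically for a single target node: each weight decreases in proportion to its transmitted signal times the node's degree of disuse, taken here as the excess of the total converging signal over the target activity, so the total signal decays toward the activity level. The specific disuse term and learning rate are assumptions; the product, capacity, and threshold rules of the article are not reproduced.

```python
def learn_step(source_signals, weights, target_activity, rate=0.1):
    """One atrophy-due-to-disuse update for the paths converging on one target node."""
    total_signal = sum(s * w for s, w in zip(source_signals, weights))
    disuse = max(0.0, total_signal - target_activity)
    return [w - rate * s * disuse for s, w in zip(source_signals, weights)]

# Example: the total signal to the target node converges toward its activity level,
# and paths carrying no signal are left unchanged.
signals, weights, activity = [1.0, 0.5, 0.0], [0.8, 0.8, 0.8], 0.6
for _ in range(50):
    weights = learn_step(signals, weights, activity)
print([round(w, 3) for w in weights])
print(round(sum(s * w for s, w in zip(signals, weights)), 3))   # close to 0.6
```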

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a self-organizing, real-time, hierarchical neural network model of sequential processing, and shows how it can be used to induce recognition codes corresponding to word categories and elementary grammatical structures. The model, first introduced in Mannes (1992), learns to recognize, store, and recall sequences of unitized patterns in a stable manner, either using short-term memory alone, or using long-term memory weights. Memory capacity is only limited by the number of nodes provided. Sequences are mapped to unitized patterns, making the model suitable for hierarchical operation. By using multiple modules arranged in a hierarchy and a simple mapping between the output of lower levels and the input of higher levels, the induction of codes representing word categories and simple phrase structures is an emergent property of the model. Simulation results are reported to illustrate this behavior.
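
The hierarchical arrangement can be sketched as follows: a lower module unitizes a sequence of items into a single recognition code, and those codes become the input sequence for the module above it. The chunking rule and code names below are an illustrative stand-in for the self-organizing sequence learning of the model.

```python
def unitize(sequence, boundary, codebook, prefix):
    """Replace each boundary-delimited chunk with a learned (here: cached) code."""
    codes, chunk = [], []
    for item in sequence + [boundary]:
        if item == boundary:
            if chunk:
                key = tuple(chunk)
                codes.append(codebook.setdefault(key, f"{prefix}{len(codebook)}"))
                chunk = []
        else:
            chunk.append(item)
    return codes

letters = list("the cat sat the cat")     # lowest level: letters, spaces as boundaries
word_codebook, phrase_codebook = {}, {}
word_codes = unitize(letters, " ", word_codebook, "word")            # word-level codes
phrase_codes = unitize(word_codes, None, phrase_codebook, "phrase")  # next level up
print(word_codes)
print(phrase_codes)
```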

Relevance: 30.00%

Publisher:

Abstract:

Intrinsic and extrinsic speaker normalization methods are systematically compared using a neural network (fuzzy ARTMAP) and L1 and L2 K-Nearest Neighbor (K-NN) categorizers trained and tested on disjoint sets of speakers of the Peterson-Barney vowel database. Intrinsic methods include one nonscaled, four psychophysical scales (bark, bark with endcorrection, mel, ERB), and three log scales, each tested on four combinations of F0, F1, F2, F3. Extrinsic methods include four speaker adaptation schemes, each combined with the 32 intrinsic methods: centroid subtraction across all frequencies (CS), centroid subtraction for each frequency (CSi), linear scale (LS), and linear transformation (LT). ARTMAP and K-NN show similar trends, with K-NN performing better, but requiring about ten times as much memory. The optimal intrinsic normalization method is bark scale, or bark with endcorrection, using the differences between all frequencies (Diff All). The order of performance for the extrinsic methods is LT, CSi, LS, and CS, with fuzzy ARTMAP performing best using bark scale with Diff All, and K-NN choosing psychophysical measures for all except CSi.
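
The kind of preprocessing pipeline being compared can be sketched as an intrinsic step (convert formants to an auditory scale), an extrinsic step (per-speaker centroid subtraction), and "Diff All" features (differences between all frequencies). The Traunmüller-style bark approximation below is one common formula and is an assumption; the study's exact scales (bark with endcorrection, mel, ERB, log) differ in detail.

```python
from itertools import combinations

def hz_to_bark(f):
    """One common approximation of the bark scale (Traunmüller-style)."""
    return 26.81 * f / (1960.0 + f) - 0.53

def centroid_subtract(tokens):
    """Extrinsic CS normalization: subtract a speaker's mean value across all frequencies."""
    values = [v for token in tokens for v in token]
    centroid = sum(values) / len(values)
    return [[v - centroid for v in token] for token in tokens]

def diff_all(token):
    """Diff All features: differences between all frequency pairs in one token."""
    return [hi - lo for lo, hi in combinations(token, 2)]

# Example: one speaker's (F0, F1, F2, F3) tokens in Hz.
speaker_tokens_hz = [(120.0, 730.0, 1090.0, 2440.0), (120.0, 270.0, 2290.0, 3010.0)]
bark_tokens = [[hz_to_bark(f) for f in token] for token in speaker_tokens_hz]
print([[round(v, 2) for v in token] for token in centroid_subtract(bark_tokens)])
print([[round(v, 2) for v in diff_all(token)] for token in bark_tokens])
```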

Relevance: 30.00%

Publisher:

Abstract:

A neural network system, NAVITE, for incremental trajectory generation and obstacle avoidance is presented. Unlike other approaches, the system is effective in unstructured environments. Multimodal information from visual and range data is used for obstacle detection and to eliminate uncertainty in the measurements. Optimal paths are computed without explicitly optimizing cost functions, therefore reducing computational expenses. Simulations of a planar mobile robot (including the dynamic characteristics of the plant) in obstacle-free and obstacle-avoidance trajectories are presented. The system can be extended to incorporate global map information into the local decision-making process.

Relevance: 30.00%

Publisher:

Abstract:

We recognize objects by continuously receiving large amounts of temporal information that includes redundancy and noise, and we can memorize them. This paper proposes a neural network model that extracts pre-recognized patterns from temporally sequential patterns that include redundancy, and memorizes those patterns temporarily. The model consists of an adaptive resonance system and a recurrent time-delay network. Extraction is carried out by the matching mechanism of the adaptive resonance system, and the temporal information is processed and stored by the recurrent network. Simple simulations are presented to illustrate the extraction property of the model.
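
The matching mechanism that performs the extraction can be sketched in isolation: an incoming pattern resonates with a stored prototype only when a match ratio passes a vigilance threshold; otherwise it is treated as noise or as something new. The fuzzy-ART-style match ratio and the vigilance value are illustrative assumptions, and the recurrent time-delay part of the model is not shown.

```python
def match_ratio(input_vector, prototype):
    """Fuzzy-ART-style match: |I AND w| / |I| with component-wise minimum."""
    overlap = sum(min(i, w) for i, w in zip(input_vector, prototype))
    return overlap / sum(input_vector)

def extract(input_vector, prototypes, vigilance=0.8):
    """Return the index of the resonating prototype, or None if nothing matches."""
    best, best_ratio = None, 0.0
    for index, prototype in enumerate(prototypes):
        ratio = match_ratio(input_vector, prototype)
        if ratio >= vigilance and ratio > best_ratio:
            best, best_ratio = index, ratio
    return best

stored = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
print(extract([0.9, 0.1, 1.0, 0.0], stored))   # close to prototype 0 -> 0
print(extract([0.5, 0.5, 0.5, 0.5], stored))   # too noisy/ambiguous -> None
```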

Relevance: 30.00%

Publisher:

Abstract:

The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
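
The Where-filter idea of jointly labeling position, orientation, and size can be sketched crudely: a small bank of oriented detectors at two sizes is applied at each figure pixel, and the strongest response labels that position. The line-counting detector and tiny binary image are illustrative assumptions, far simpler than the oriented filters, competition, and interpolation described above.

```python
ORIENTATIONS = {"horizontal": (0, 1), "vertical": (1, 0)}   # (row step, column step)

def oriented_response(image, row, col, direction, size):
    """Count figure pixels along a line of the given orientation and length."""
    dr, dc = ORIENTATIONS[direction]
    total = 0
    for step in range(size):
        r, c = row + dr * step, col + dc * step
        if 0 <= r < len(image) and 0 <= c < len(image[0]):
            total += image[r][c]
    return total

def where_map(image, sizes=(2, 3)):
    """For each figure pixel, report the best-responding (orientation, size) detector."""
    results = {}
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value:
                best = max((oriented_response(image, r, c, d, s), d, s)
                           for d in ORIENTATIONS for s in sizes)
                results[(r, c)] = (best[1], best[2])
    return results

# Example: a short horizontal bar is labeled as horizontally oriented.
image = [[0, 0, 0, 0],
         [0, 1, 1, 1],
         [0, 0, 0, 0]]
print(where_map(image))
```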

Relevance: 30.00%

Publisher:

Abstract:

In a constantly changing world, humans are adapted to alternate routinely between attending to familiar objects and testing hypotheses about novel ones. We can rapidly learn to recognize and name novel objects without unselectively disrupting our memories of familiar ones. We can notice fine details that differentiate nearly identical objects and generalize across broad classes of dissimilar objects. This chapter describes a class of self-organizing neural network architectures, called ARTMAP, that are capable of fast, yet stable, on-line recognition learning, hypothesis testing, and naming in response to an arbitrary stream of input patterns (Carpenter, Grossberg, Markuzon, Reynolds, and Rosen, 1992; Carpenter, Grossberg, and Reynolds, 1991). The intrinsic stability of ARTMAP allows the system to learn incrementally for an unlimited period of time. System stability properties can be traced to the structure of its learned memories, which encode clusters of attended features into its recognition categories, rather than slow averages of category inputs. The level of detail in the learned attentional focus is determined moment-by-moment, depending on predictive success: an error due to over-generalization automatically focuses attention on additional input details, enough of which are learned in a new recognition category so that the predictive error will not be repeated. An ARTMAP system creates an evolving map between a variable number of learned categories that compress one feature space (e.g., visual features) to learned categories of another feature space (e.g., auditory features). Input vectors can be either binary or analog. Computational properties of the networks enable them to perform significantly better in benchmark studies than alternative machine learning, genetic algorithm, or neural network models. Some of the critical problems that challenge and constrain any such autonomous learning system will next be illustrated. Design principles that work together to solve these problems are then outlined. These principles are realized in the ARTMAP architecture, which is specified as an algorithm. Finally, ARTMAP dynamics are illustrated by means of a series of benchmark simulations.
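
The match tracking behavior described here can be sketched with a simplified ARTMAP-like loop: when the chosen recognition category makes a wrong prediction, vigilance is raised just above that category's match value, so the search settles on, or creates, a finer category that will not repeat the error. The min/sum match rule, the baseline vigilance, and the data structures below are illustrative assumptions, not the full ARTMAP algorithm.

```python
def fuzzy_match(x, weights):
    """Fuzzy-ART-style match ratio between an input and a category's weights."""
    return sum(min(a, b) for a, b in zip(x, weights)) / sum(x)

def train_one(x, label, categories, baseline_vigilance=0.6):
    """Present one labeled input; may reuse a category or create a new, finer one."""
    vigilance = baseline_vigilance
    ranked = sorted(categories, key=lambda c: fuzzy_match(x, c["weights"]), reverse=True)
    for category in ranked:
        match = fuzzy_match(x, category["weights"])
        if match < vigilance:
            continue                      # fails the vigilance test, keep searching
        if category["label"] == label:
            return category               # correct prediction: resonance
        vigilance = match + 1e-6          # predictive error: match tracking raises vigilance
    new_category = {"weights": list(x), "label": label}
    categories.append(new_category)       # over-generalization fixed by a finer category
    return new_category

categories = [{"weights": [1.0, 1.0, 0.0, 0.0], "label": "A"}]
print(train_one([1.0, 1.0, 0.0, 0.0], "B", categories)["label"])   # -> "B"
print(len(categories))                                              # -> 2
```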

Relevance: 30.00%

Publisher:

Abstract:

Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Fusion ARTMAP generalizes the fuzzy ARTMAP architecture in order to adaptively classify multi-channel data. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking thereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network.

Relevance: 30.00%

Publisher:

Abstract:

This article describes a neural network model that addresses the acquisition of speaking skills by infants and subsequent motor equivalent production of speech sounds. The model learns two mappings during a babbling phase. A phonetic-to-orosensory mapping specifies a vocal tract target for each speech sound; these targets take the form of convex regions in orosensory coordinates defining the shape of the vocal tract. The babbling process wherein these convex region targets are formed explains how an infant can learn phoneme-specific and language-specific limits on acceptable variability of articulator movements. The model also learns an orosensory-to-articulatory mapping wherein cells coding desired movement directions in orosensory space learn articulator movements that achieve these orosensory movement directions. The resulting mapping provides a natural explanation for the formation of coordinative structures. This mapping also makes efficient use of redundancy in the articulator system, thereby providing the model with motor equivalent capabilities. Simulations verify the model's ability to compensate for constraints or perturbations applied to the articulators, automatically and without new learning, and to explain contextual variability seen in human speech production.
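
The motor equivalence property can be illustrated numerically: with a redundant articulator system, a desired orosensory movement direction can be achieved by many articulator movements, so a constrained articulator can be compensated by the others without new learning. The toy linear relation and the pseudoinverse below are only a stand-in for the directions-to-directions mapping the model learns during babbling.

```python
import numpy as np

# Assumed toy linear relation: 2 orosensory dimensions driven by 3 articulators.
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])

def articulator_step(desired_direction, blocked=None):
    """Map an orosensory direction to articulator movements; optionally freeze one articulator."""
    J_eff = J.copy()
    if blocked is not None:
        J_eff[:, blocked] = 0.0           # that articulator can no longer contribute
    return np.linalg.pinv(J_eff) @ desired_direction

target = np.array([1.0, 1.0])             # desired orosensory movement direction
free = articulator_step(target)
constrained = articulator_step(target, blocked=1)
print(np.round(free, 3), np.round(J @ free, 3))                # unconstrained solution
print(np.round(constrained, 3), np.round(J @ constrained, 3))  # same orosensory result without articulator 1
```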