896 results for Recurrent associative self-organizing map


Relevance:

100.00%

Publisher:

Abstract:

This paper presents an environmental contingency forecasting tool based on neural networks (NNs). The forecasting tool analyzes hourly and daily sulphur dioxide (SO2) concentration and meteorological time series. The pollutant concentrations and meteorological variables are self-organized into classes using a Self-Organizing Map (SOM) NN. These classes are then used in the training phase of a General Regression Neural Network (GRNN) classifier to provide an air-quality forecast. Here, a set of time series obtained from the Environmental Monitoring Network (EMN) of the city of Salamanca, Guanajuato, México is used. The results verify the potential of this method compared with other statistical classification methods, and the problem of correlated variables is also resolved.
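
The abstract gives no implementation details, so the following is a minimal, hypothetical Python/NumPy sketch of the two-stage idea it describes: a small SOM self-organizes feature vectors into classes, and a GRNN-style kernel classifier then assigns a new sample to one of those classes. All data shapes, grid sizes and hyper-parameters below are assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))             # placeholder SO2/meteorological features

# --- 1. Self-organizing map: group samples into classes ---
grid = 4                                  # 4x4 map -> up to 16 classes (assumed size)
W = rng.normal(size=(grid, grid, X.shape[1]))
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), -1)

for t, x in enumerate(np.tile(X, (5, 1))):
    lr = 0.5 * np.exp(-t / 1000)          # decaying learning rate
    sig = 2.0 * np.exp(-t / 1000)         # decaying neighbourhood width
    d = np.linalg.norm(W - x, axis=2)
    bmu = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
    h = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (2 * sig ** 2))
    W += lr * h[..., None] * (x - W)      # pull the neighbourhood towards x

def som_class(x):
    return np.linalg.norm(W.reshape(-1, X.shape[1]) - x, axis=1).argmin()

labels = np.array([som_class(x) for x in X])

# --- 2. GRNN-style classifier: kernel-weighted vote over SOM classes ---
def grnn_predict(x, sigma=0.5):
    w = np.exp(-((X - x) ** 2).sum(1) / (2 * sigma ** 2))
    return np.bincount(labels, weights=w, minlength=grid * grid).argmax()

print(grnn_predict(X[0]))                 # forecast class for a new sample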

Relevance:

100.00%

Publisher:

Abstract:

Background and objective: In this paper, we have tested the suitability of different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classifying these surgical risks provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. Methods: We have evaluated four machine learning algorithms to achieve our objective: the multilayer perceptron, the self-organizing map, radial basis function networks and decision trees. The implemented architectures aim to classify among three types of surgical risk: low complexity, medium complexity and high complexity. Results: The accuracies achieved range between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. Conclusions: According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk associated with congenital heart disease surgery.
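
The abstract reports accuracies but not the exact architectures, so here is a minimal, hypothetical sketch (Python, scikit-learn) of the best-performing approach it names, a multilayer perceptron classifying three risk levels. The synthetic data, feature count and layer sizes are assumptions for illustration only.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))            # stand-in for clinical features
y = rng.integers(0, 3, size=300)          # 0 = low, 1 = medium, 2 = high complexity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("hit ratio:", clf.score(X_te, y_te))   # meaningless on random data; real
                                             # clinical features would be required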

Relevance:

100.00%

Publisher:

Abstract:

In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by fitting a grid of points to the output of an edge detector. After that, the positions of the nodes, their relations with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then introduced: the selection of the winning neuron and the recognition process rely on a similarity function that takes into account both the geometric and the texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal compared with other state-of-the-art methods.
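
To make the combined similarity concrete, here is a hedged Python sketch of a function in the spirit of the SOM-EBGM matching step: texture similarity from normalized Gabor-jet dot products, minus a penalty on geometric distortion of the graph edges. The weighting lam and the exact functional form used in the paper are assumptions.

import numpy as np

def jet_similarity(J1, J2):
    # normalized dot product of Gabor-jet magnitude vectors
    return J1 @ J2 / (np.linalg.norm(J1) * np.linalg.norm(J2) + 1e-12)

def graph_similarity(jets_a, jets_b, pos_a, pos_b, edges, lam=0.5):
    # texture term: mean jet similarity over corresponding fiducial points
    tex = np.mean([jet_similarity(a, b) for a, b in zip(jets_a, jets_b)])
    # geometry term: squared difference of corresponding edge vectors
    geo = np.mean([np.sum(((pos_a[i] - pos_a[j]) - (pos_b[i] - pos_b[j])) ** 2)
                   for i, j in edges])
    return tex - lam * geo

# toy check: identical graphs give the maximal score of 1.0
jets = [np.ones(8), np.ones(8)]
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
print(graph_similarity(jets, jets, pos, pos, edges=[(0, 1)]))

The winning neuron would then be the SOM prototype graph that maximizes this score for an input face.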

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel method for enabling a robot to determine the direction to a sound source by interacting with its environment. The method combines a new neural network, the Parameter-Less Self-Organizing Map (PLSOM) algorithm, with reinforcement learning to achieve a rapid and accurate response.
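
The defining trait of the parameter-less SOM is that it replaces hand-tuned annealing schedules with a learning rate and neighbourhood size driven by the current fitting error. The sketch below (Python/NumPy) follows that idea in the spirit of Berglund and Sitte's PLSOM; the grid size, neighbourhood shape and scaling details are assumptions rather than the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)
grid, dim = 8, 2
W = rng.uniform(size=(grid, grid, dim))   # weight vectors on an 8x8 lattice
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), -1).astype(float)
rho = 1e-9                                # running maximum of the squared error

for x in rng.uniform(size=(5000, dim)):
    d2 = ((W - x) ** 2).sum(-1)
    bmu = np.unravel_index(d2.argmin(), d2.shape)
    rho = max(rho, float(d2[bmu]))        # update the error normalizer
    eps = float(d2[bmu]) / rho            # normalized fitting error in (0, 1]
    sig = max(eps * grid, 1.0)            # neighbourhood width scales with error
    h = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (sig ** 2))
    W += eps * h[..., None] * (x - W)     # error-driven learning rate, no schedule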

Relevance:

100.00%

Publisher:

Abstract:

Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping (GTM), for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline.
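
For reference, the GTM defines the data density as a constrained mixture of Gaussians whose centres are the images of a regular grid of latent points x_k under a parametrized mapping; in the standard formulation (D is the data dimensionality, phi a fixed set of basis functions, beta an inverse variance, all fitted by EM):

% GTM density: a grid of K latent points, mapped by y(x; W) = W * phi(x),
% each becoming the centre of an isotropic Gaussian in data space.
p(\mathbf{t} \mid \mathbf{W}, \beta)
  = \frac{1}{K} \sum_{k=1}^{K}
    \left( \frac{\beta}{2\pi} \right)^{D/2}
    \exp\!\left( -\frac{\beta}{2}
      \bigl\| \mathbf{W}\boldsymbol{\phi}(\mathbf{x}_k) - \mathbf{t} \bigr\|^{2} \right)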


Relevance:

100.00%

Publisher:

Abstract:

The Generative Topographic Mapping (GTM) algorithm of Bishop et al. (1997) has been introduced as a principled alternative to the Self-Organizing Map (SOM). As well as avoiding a number of deficiencies in the SOM, the GTM algorithm has the key property that the smoothness of the model is decoupled from the reference vectors, and is described by a continuous mapping from a lower-dimensional latent space into the data space. Magnification factors, which are approximated by the difference between code-book vectors in SOMs, can therefore be evaluated for the GTM model as continuous functions of the latent variables using the techniques of differential geometry. They play an important role in data visualization by highlighting the boundaries between data clusters, and are illustrated here for both a toy data set and a problem involving the identification of crab species from morphological data.

Relevance:

100.00%

Publisher:

Abstract:

Magnification factors specify the extent to which the area of a small patch of the latent (or 'feature') space of a topographic mapping is magnified on projection to the data space, and are of considerable interest in both neuro-biological and data analysis contexts. Previous attempts to consider magnification factors for the self-organizing map (SOM) algorithm have been hindered because the mapping is only defined at discrete points (given by the reference vectors). In this paper we consider the batch version of SOM, for which a continuous mapping can be defined, as well as the Generative Topographic Mapping (GTM) algorithm of Bishop et al. (1997) which has been introduced as a probabilistic formulation of the SOM. We show how the techniques of differential geometry can be used to determine magnification factors as continuous functions of the latent space coordinates. The results are illustrated here using a problem involving the identification of crab species from morphological data.
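
Concretely, for a smooth mapping y from an L-dimensional latent space into a D-dimensional data space, the local area magnification follows from the induced metric of differential geometry (standard notation, stated here for orientation):

% Jacobian of the mapping and the resulting magnification factor;
% for the GTM, y(x) = W * phi(x), so J = W * (d phi / d x) in closed form.
J_{di} = \frac{\partial y_d}{\partial x_i},
\qquad
\frac{dA'}{dA} = \sqrt{\det\left( \mathbf{J}^{\mathsf{T}} \mathbf{J} \right)}

Because both the batch SOM (via its continuous interpolation) and the GTM define such a differentiable mapping, this factor can be evaluated at any latent coordinate rather than only at the reference vectors.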

Relevance:

100.00%

Publisher:

Abstract:

The generative topographic mapping (GTM) model was introduced by Bishop et al. (1998, Neural Comput. 10(1), 215-234) as a probabilistic reformulation of the self-organizing map (SOM). It offers a number of advantages compared with the standard SOM, and has already been used in a variety of applications. In this paper we report on several extensions of the GTM, including an incremental version of the EM algorithm for estimating the model parameters, the use of local subspace models, extensions to mixed discrete and continuous data, semi-linear models which permit the use of high-dimensional manifolds whilst avoiding computational intractability, Bayesian inference applied to hyper-parameters, and an alternative framework for the GTM based on Gaussian processes. All of these developments directly exploit the probabilistic structure of the GTM, thereby allowing the underlying modelling assumptions to be made explicit. They also highlight the advantages of adopting a consistent probabilistic framework for the formulation of pattern recognition algorithms.

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important, potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different to that, the aim of maintaining an 'interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.

Relevance:

100.00%

Publisher:

Abstract:

The dendritic cell algorithm (DCA) is an immune-inspired algorithm, developed for the purpose of anomaly detection. The algorithm performs multi-sensor data fusion and correlation which results in a ‘context-aware’ detection system. Previous applications of the DCA have included the detection of potentially malicious port scanning activity, where it has produced high rates of true positives and low rates of false positives. In this work we aim to compare the performance of the DCA and of a self-organizing map (SOM) when applied to the detection of SYN port scans, through experimental analysis. A SOM is an ideal candidate for comparison as it shares similarities with the DCA in terms of the data fusion method employed. It is shown that the results of the two systems are comparable, and both produce false positives for the same processes. This shows that the DCA can produce anomaly detection results to the same standard as an established technique.
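
The paper's exact features and decision rules are not reproduced in the abstract, so the following Python/NumPy sketch shows only the generic SOM side of such a comparison: train on fused "normal" traffic features and flag samples whose quantization error is unusually large. The feature layout and the 3-sigma threshold are assumptions.

import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(1000, 4))     # stand-in for fused traffic features

grid, dim = 6, normal.shape[1]
W = rng.normal(size=(grid * grid, dim))       # flattened 6x6 map
for t, x in enumerate(np.tile(normal, (3, 1))):
    lr = 0.3 * np.exp(-t / 1500)
    bmu = np.linalg.norm(W - x, axis=1).argmin()
    W[bmu] += lr * (x - W[bmu])               # simplified: winner-only update

qe = np.array([np.linalg.norm(W - x, axis=1).min() for x in normal])
threshold = qe.mean() + 3 * qe.std()          # assumed anomaly threshold

def is_anomalous(x):
    # flag inputs the map cannot represent well (large quantization error)
    return np.linalg.norm(W - x, axis=1).min() > threshold

print(is_anomalous(np.full(dim, 8.0)))        # far from training data -> True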

Relevance:

100.00%

Publisher:

Abstract:

Self-organization is an unsupervised learning process through which significant features, relations, patterns or prototypes are discovered in the data. Among the most widely used self-organizing neural systems is the Self-Organizing Map (SOM), which has been applied in a multitude of different fields. However, this self-organizing model has several limitations related to its size, its topology, its lack of representation of hierarchical relations, etc. The neural network known as the Growing Neural Gas (GNG) is an example of a self-organizing neural model with greater flexibility than the SOM, since it is based on a graph of processing units rather than on a fixed topology. Despite its success, little attention has been paid to its hierarchical extension, unlike many other models which have several hierarchical versions. The Growing Hierarchical Neural Gas (GHNG) is a hierarchical extension of the GNG in which a tree of graphs is learned, and in which the original GNG algorithm has been improved by distinguishing between a growth phase and a convergence phase. Experimental results demonstrate the self-organization and hierarchical learning capabilities of this network.
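
Since the GHNG builds a tree of GNG graphs, the core mechanics are those of the GNG itself. Below is a hedged, minimal Python/NumPy sketch of one GNG layer; the constants follow commonly used Fritzke-style defaults, unit removal is omitted for brevity, and the GHNG's growth/convergence phases are not reproduced, so everything here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
eps_b, eps_n, a_max, lam, alpha, d = 0.05, 0.006, 50, 100, 0.5, 0.995

units = [rng.uniform(size=2), rng.uniform(size=2)]   # prototype positions
error = [0.0, 0.0]                                   # accumulated local error
edges = {}                                           # (i, j) with i < j -> age

def key(i, j):
    return (min(i, j), max(i, j))

for t in range(1, 10001):
    x = rng.uniform(size=2)                          # sample the input space
    dist = [float(np.linalg.norm(u - x)) for u in units]
    s1, s2 = (int(i) for i in np.argsort(dist)[:2])  # two closest units
    error[s1] += dist[s1] ** 2
    units[s1] = units[s1] + eps_b * (x - units[s1])  # move the winner
    for e in list(edges):
        if s1 in e:
            edges[e] += 1                            # age edges at the winner
            other = e[1] if e[0] == s1 else e[0]
            units[other] = units[other] + eps_n * (x - units[other])
    edges[key(s1, s2)] = 0                           # connect/refresh s1-s2
    edges = {e: a for e, a in edges.items() if a <= a_max}
    if t % lam == 0:                                 # periodically grow the graph
        q = int(np.argmax(error))
        nbrs = [e[1] if e[0] == q else e[0] for e in edges if q in e]
        if nbrs:
            f = max(nbrs, key=lambda n: error[n])
            units.append((units[q] + units[f]) / 2)  # new unit between q and f
            error[q] *= alpha
            error[f] *= alpha
            error.append(error[q])
            edges.pop(key(q, f), None)
            r = len(units) - 1
            edges[key(q, r)] = 0
            edges[key(f, r)] = 0
    error = [e * d for e in error]                   # global error decay

print(len(units), "units,", len(edges), "edges")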

Relevance:

100.00%

Publisher:

Abstract:

Self-organizing neural networks have been implemented in a wide range of application areas such as speech processing, image processing, optimization and robotics. Recent variations of the basic model proposed by the authors enable it to order state space using a subset of the input vector and to apply a local adaptation procedure that does not rely on a predefined test duration limit. Both these variations have been incorporated into a new feature map architecture that forms an integral part of a Hybrid Learning System (HLS) based on a genetic-based classifier system. Problems are represented within HLS as objects characterized by environmental features. Objects controlled by the system have preset targets set against a subset of their features. The system's objective is to achieve these targets by evolving a behavioural repertoire that efficiently explores and exploits the problem environment. Feature maps encode two types of knowledge within HLS — long-term memory traces of useful regularities within the environment and the classifier performance data calibrated against an object's feature states and targets. Self-organization of these networks constitutes non-genetic-based (experience-driven) learning within HLS. This paper presents a description of the HLS architecture and an analysis of the modified feature map implementing associative memory. Initial results are presented that demonstrate the behaviour of the system on a simple control task.

Relevance:

100.00%

Publisher:

Abstract:

The notion of information processing has dominated the study of the mind for over six decades. However, before the advent of cognitivism, one of the most prominent theoretical ideas was that of Habit. This is a concept with a rich and complex history, which is again starting to awaken interest, following recent embodied, enactive critiques of computationalist frameworks. We offer here a very brief history of the concept of habit in the form of a genealogical network-map. This serves to provide an overview of the richness of this notion and as a guide for further re-appraisal. We identify 77 thinkers and their influences, and group them into seven schools of thought. Two major trends can be distinguished. One is the associationist trend, starting with the work of Locke and Hume, developed by Hartley, Bain, and Mill to be later absorbed into behaviorism through pioneering animal psychologists (Morgan and Thorndike). This tradition conceived of habits atomistically and as automatisms (a conception later debunked by cognitivism). Another historical trend we have called organicism inherits the legacy of Aristotle and develops along German idealism, French spiritualism, pragmatism, and phenomenology. It feeds into the work of continental psychologists in the early 20th century, influencing important figures such as Merleau-Ponty, Piaget, and Gibson. But it has not yet been taken up by mainstream cognitive neuroscience and psychology. Habits, in this tradition, are seen as ecological, self-organizing structures that relate to a web of predispositions and plastic dependencies both in the agent and in the environment. In addition, they are not conceptualized in opposition to rational, volitional processes, but as traversing a continuum from reflective to embodied intentionality. These are properties that make habit a particularly attractive idea for embodied, enactive perspectives, which can now re-evaluate it in light of dynamical systems theory and complexity research.

Relevance:

100.00%

Publisher:

Abstract:

Agents inhabiting large-scale environments are faced with the problem of generating maps by which they can navigate. One solution to this problem is to use probabilistic roadmaps, which rely on selecting and connecting a set of points that describe the interconnectivity of free space. However, the time required to generate these maps can be prohibitive, and agents do not typically know the environment in advance. In this paper we show that the optimal combination of the different point selection methods used to create the map depends on the environment; no single point selection method dominates. This motivates a novel self-adaptive approach by which an agent combines several point selection methods. The success rate of our approach is comparable to the state of the art and the generation cost is substantially reduced. Self-adaptation therefore enables a more efficient use of the agent's resources. Results are presented both for a set of archetypal scenarios and for large-scale virtual environments based in Second Life, representing real locations in London.
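
The abstract does not specify the adaptation rule, so here is a small, hypothetical Python sketch of the general idea: keep a weight per point-selection method and shift sampling probability towards methods that recently produced useful roadmap points. The samplers, reward sizes and usefulness test below are illustrative assumptions.

import random

def uniform_sampler():
    return (random.uniform(0, 100), random.uniform(0, 100))

def gaussian_obstacle_sampler():
    # placeholder: a real planner would sample near obstacle surfaces
    return (random.gauss(50, 5), random.gauss(50, 5))

samplers = [uniform_sampler, gaussian_obstacle_sampler]
weights = [1.0, 1.0]

def sample_point(is_useful):
    # draw one candidate; reinforce the method that produced it if useful
    i = random.choices(range(len(samplers)), weights=weights)[0]
    p = samplers[i]()
    if is_useful(p):                      # e.g. collision-free and connectable
        weights[i] += 0.1
    else:
        weights[i] = max(0.1, weights[i] - 0.01)
    return p

# toy usefulness test: reject points inside a central square "obstacle"
pts = [sample_point(lambda p: not (40 < p[0] < 60 and 40 < p[1] < 60))
       for _ in range(1000)]
print(weights)                            # shows which method was favoured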