926 results for self-organizing maps of Kohonen


Relevance:

100.00%

Publisher:

Abstract:

Gamma-radiation exposure has both short- and long-term adverse health effects. The threat of modern terrorism places human populations at risk for radiological exposures, yet current medical countermeasures to radiation exposure are limited. Here we describe metabolomics for gamma-radiation biodosimetry in a mouse model. Mice were gamma-irradiated at doses of 0, 3 and 8 Gy (2.57 Gy/min), and urine samples collected over the first 24 h after exposure were analyzed by ultra-performance liquid chromatography-time-of-flight mass spectrometry (UPLC-TOFMS). Multivariate data were analyzed by orthogonal partial least squares (OPLS). Both 3- and 8-Gy exposures yielded distinct urine metabolomic phenotypes. The top 22 ions for 3 and 8 Gy were analyzed further, including tandem mass spectrometric comparison with authentic standards, revealing that N-hexanoylglycine and beta-thymidine are urinary biomarkers of exposure to 3 and 8 Gy, 3-hydroxy-2-methylbenzoic acid 3-O-sulfate is elevated in urine of mice exposed to 3 but not 8 Gy, and taurine is elevated after 8 but not 3 Gy. Gene Expression Dynamics Inspector (GEDI) self-organizing maps showed clear dose-response relationships for subsets of the urine metabolome. This approach is useful for identifying mice exposed to gamma radiation and for developing metabolomic strategies for noninvasive radiation biodosimetry in humans.
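As a rough illustration of the dose-response clustering idea (using a generic SOM rather than the GEDI tool described above), the sketch below trains a small self-organizing map on ion-intensity vectors and compares where each dose group lands on the map. The sample counts, dose labels, and the MiniSom package are assumptions.

```python
# Sketch: cluster urinary ion intensities with a small SOM and compare dose groups.
# Assumes a (samples x ions) intensity matrix and per-sample dose labels; uses MiniSom.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
intensities = rng.random((60, 22))          # hypothetical: 60 urine samples, 22 ions
doses = rng.choice([0, 3, 8], size=60)      # hypothetical dose labels (Gy)

som = MiniSom(6, 6, intensities.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(intensities, 2000)

# Map each sample to its best-matching unit and tally units per dose group.
hits = {d: np.zeros((6, 6)) for d in (0, 3, 8)}
for x, d in zip(intensities, doses):
    i, j = som.winner(x)
    hits[d][i, j] += 1

for d in (0, 3, 8):
    print(f"{d} Gy hit map:\n{hits[d]}")
```

Dose groups that occupy distinct regions of the map would correspond to the dose-response separation reported in the abstract.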

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the basic tools for working with wireless sensors. TinyOS has a component-based architecture that enables rapid innovation and implementation while minimizing code size, as required by the severe memory constraints inherent in sensor networks. TinyOS's component library includes network protocols, distributed services, sensor drivers, and data acquisition tools, all of which can be used as-is or further refined for a custom application. TinyOS was originally developed as a research project at the University of California, Berkeley, but has since grown to have an international community of developers and users. Several packet-routing algorithms are presented. In-car entertainment systems can be built on wireless sensors in order to obtain information from the Internet, but routing protocols must be implemented to avoid bottleneck problems. Ant colony algorithms are well suited to such cases and can therefore be embedded in the sensors to perform this routing task.
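A minimal sketch of the ant-colony routing idea mentioned above, written as plain Python rather than TinyOS/nesC code; the topology, link costs, and pheromone parameters are illustrative assumptions.

```python
# Sketch of ant-colony route selection on a small sensor graph (not TinyOS code).
import random

graph = {                       # hypothetical link costs between sensor nodes
    "A": {"B": 1.0, "C": 2.0},
    "B": {"C": 1.0, "D": 2.5},
    "C": {"D": 1.0},
    "D": {},
}
pheromone = {u: {v: 1.0 for v in nbrs} for u, nbrs in graph.items()}
ALPHA, BETA, RHO = 1.0, 2.0, 0.1   # pheromone weight, heuristic weight, evaporation

def walk(src, dst):
    """One ant walks from src to dst, choosing next hops probabilistically."""
    path, node = [src], src
    while node != dst and graph[node]:
        nbrs = list(graph[node])
        weights = [pheromone[node][v] ** ALPHA * (1.0 / graph[node][v]) ** BETA
                   for v in nbrs]
        node = random.choices(nbrs, weights=weights)[0]
        path.append(node)
    return path

def reinforce(path):
    """Evaporate pheromone everywhere, then deposit along the ant's path."""
    for u in pheromone:
        for v in pheromone[u]:
            pheromone[u][v] *= (1.0 - RHO)
    cost = sum(graph[u][v] for u, v in zip(path, path[1:]))
    for u, v in zip(path, path[1:]):
        pheromone[u][v] += 1.0 / cost

for _ in range(200):
    reinforce(walk("A", "D"))
print(pheromone)
```

After a few hundred ants, the cheaper links accumulate more pheromone and are chosen more often, which is the behaviour exploited for congestion-aware routing.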

Relevance:

100.00%

Publisher:

Abstract:

Providing security to the emerging field of ambient intelligence will be difficult if we rely only on existing techniques, given the dynamic and heterogeneous nature of these systems. Moreover, their security demands are expected to grow, as many applications will require accurate context modeling. In this work we propose an enhancement to the reputation systems traditionally deployed for securing such systems. Different anomaly detectors are combined using the immunological paradigm to optimize reputation system performance in response to evolving security requirements. As an example, the experiments show how a combination of detectors based on unsupervised techniques (self-organizing maps and genetic algorithms) can significantly reduce the global response time of the reputation system. The proposed solution offers many benefits: scalability, fast response to adversarial activities, the ability to detect unknown attacks, high adaptability, and effective detection and confinement of attacks. For these reasons, we believe that our solution can cope with the dynamism of ambient intelligence systems and their growing security demands.
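One of the unsupervised detectors mentioned above could be a SOM trained on normal behaviour that flags samples with an unusually high quantization error. The sketch below shows that single detector only; the data shapes, the threshold percentile, and the MiniSom package are assumptions, and this is not the authors' immunological combination scheme.

```python
# Sketch: flag anomalous reports by quantization error of a SOM trained on
# normal traffic, one of several detectors that could be combined as described.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(500, 8))       # hypothetical benign feature vectors
suspect = rng.normal(3.0, 1.0, size=(20, 8))       # hypothetical anomalous vectors

som = MiniSom(8, 8, 8, sigma=1.5, learning_rate=0.5, random_seed=1)
som.train_random(normal, 5000)

def qerror(x):
    """Distance from a sample to its best-matching SOM unit."""
    i, j = som.winner(x)
    return np.linalg.norm(x - som.get_weights()[i, j])

threshold = np.percentile([qerror(x) for x in normal], 99)
flags = [qerror(x) > threshold for x in suspect]
print(f"{sum(flags)}/{len(flags)} suspect samples flagged")
```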

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present our research into self-organizing building algorithms. The self-organization observed in animal and plant behaviour has led researchers to explore the mechanisms behind these emergent phenomena and to apply them in other domains. We were able to implement a typical construction algorithm in a 3D simulation environment and reproduce the results of previous research in the area. L-systems, morphogenetic programming and wasp nest building are explained in order to understand self-organizing models. We propose grammatical swarm as a suitable tool for optimizing building structures.
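A minimal L-system rewriting sketch, included to illustrate the rule-based growth reviewed above; the axiom and production rules are generic textbook examples, not the construction algorithm implemented in the paper.

```python
# Minimal L-system: rewrite every symbol of the string in parallel each iteration.
def lsystem(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic fractal-plant style rules (F = draw, +/- = turn, [ ] = push/pop state).
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(lsystem("X", rules, 3))
```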

Relevance:

100.00%

Publisher:

Abstract:

We introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. We test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, we use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
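A small sketch of the supervised setup described above: train an SVM on expression profiles of genes with a known functional label and score held-out genes. The synthetic expression matrix, the label definition, and the RBF kernel are assumptions for illustration, not the kernels evaluated in the paper.

```python
# Sketch: supervised classification of gene expression profiles with an SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
expression = rng.normal(size=(300, 79))          # hypothetical: 300 genes x 79 arrays
in_class = expression[:, :5].mean(axis=1) > 0    # hypothetical functional label

X_tr, X_te, y_tr, y_te = train_test_split(expression, in_class, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```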

Relevance:

100.00%

Publisher:

Abstract:

Gas-liquid two-phase flow is found in many closed circuits that use natural circulation for cooling purposes. The natural circulation phenomenon is important for heat removal in recent nuclear power plant designs. The natural circulation loop (Circuito de Circulação Natural - CCN), installed at the Instituto de Pesquisas Energéticas e Nucleares, IPEN / CNEN, is an experimental circuit designed to provide thermal-hydraulic data related to single- and two-phase flow under natural circulation conditions. Heat transfer estimation has been improved using models that require an accurate prediction of flow pattern transitions. This work presents experimental tests carried out in the CCN to visualize instability phenomena in basic natural circulation cycles and to classify the two-phase flow patterns associated with flow transients and static instabilities. The images are compared and grouped using Kohonen self-organizing maps (SOMs) applied to different digital image features. Full-Frame Discrete Cosine Transform (FFDCT) coefficients were used as input to the classification task, leading to good results. The FFDCT prototypes obtained can be associated with each flow pattern, enabling a better understanding of the observed instability. A systematic methodology was used to verify the robustness of the method.
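A brief sketch of the classification idea described above: compute full-frame DCT coefficients of each frame and feed them to a Kohonen SOM whose units act as flow-pattern prototypes. The frame size, the number of coefficients kept, and the MiniSom package are assumptions.

```python
# Sketch: low-order full-frame DCT coefficients as SOM input for frame grouping.
import numpy as np
from scipy.fft import dctn
from minisom import MiniSom

rng = np.random.default_rng(2)
frames = rng.random((100, 64, 64))          # hypothetical grayscale frames

def ffdct_features(img, k=8):
    """Keep the top-left k x k block of the full-frame 2D DCT as a feature vector."""
    return dctn(img, norm="ortho")[:k, :k].ravel()

features = np.array([ffdct_features(f) for f in frames])
som = MiniSom(5, 5, features.shape[1], sigma=1.0, learning_rate=0.5, random_seed=2)
som.train_random(features, 3000)

# Each SOM unit acts as a prototype; frames mapping to the same unit share a pattern.
labels = [som.winner(v) for v in features]
print(labels[:10])
```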

Relevance:

100.00%

Publisher:

Abstract:

Feature selection is an important and active issue in clustering and classification problems. Choosing an adequate feature subset reduces dataset dimensionality, which decreases the computational complexity of classification and improves classifier performance by avoiding redundant or irrelevant features. Although feature selection can be formally defined as an optimisation problem with a single objective, namely the classification accuracy obtained with the selected feature subset, several multi-objective approaches to this problem have been proposed in recent years. These either select features that improve not only the classification accuracy but also the generalisation capability, in the case of supervised classifiers, or counterbalance the bias toward lower or higher numbers of features that some methods used to validate the clustering/classification exhibit, in the case of unsupervised classifiers. The main contribution of this paper is a multi-objective approach for feature selection and its application to an unsupervised clustering procedure based on Growing Hierarchical Self-Organising Maps (GHSOMs) that includes a new method for unit labelling and efficient determination of the winning unit. In the network anomaly detection problem considered here, this multi-objective approach makes it possible not only to differentiate between normal and anomalous traffic but also to distinguish among different anomalies. The efficiency of our proposals has been evaluated using the well-known DARPA/NSL-KDD datasets, which contain extracted features and labelled attacks from around 2 million connections. The feature sets selected in our experiments provide detection rates of up to 99.8% for normal traffic and up to 99.6% for anomalous traffic, as well as accuracy values of up to 99.12%.
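A toy illustration of the two-objective trade-off (maximise accuracy, minimise the number of selected features) using random subsets and simple Pareto filtering; the classifier, the synthetic data, and the subset sampling are assumptions and do not reproduce the GHSOM-based procedure of the paper.

```python
# Sketch: evaluate random feature subsets on two objectives and keep the Pareto set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)

def evaluate(mask):
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc, mask.sum()                       # (accuracy, number of features)

candidates = []
for _ in range(40):
    mask = rng.random(X.shape[1]) < 0.3          # random feature subset
    if mask.any():
        candidates.append((evaluate(mask), mask))

def dominated(a, b):
    """True if objective pair a is dominated by b (b at least as accurate, no larger)."""
    return b[0] >= a[0] and b[1] <= a[1] and b != a

pareto = [c for c in candidates
          if not any(dominated(c[0], d[0]) for d in candidates)]
print([(round(acc, 3), int(n)) for (acc, n), _ in pareto])
```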

Relevance:

100.00%

Publisher:

Abstract:

In many classification problems, it is necessary to consider the specific location within an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve a video surveillance system's understanding of a scene. In the same way, identical features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of the locations in the n-dimensional space from which the feature vectors have been extracted. The main contribution is the implicit preservation of the topology of the original space, since considering the locations of the extracted features and their topology can ease the solution of certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features extracted from adjacent areas of the n-dimensional space are explicitly mapped to adjacent areas of the nD-SOM-PINT, constraining the structure and learning of the neural network. As a case study, the network has been instantiated to represent and classify trajectory features extracted from image sequences at a high level of semantic understanding. Experiments have been carried out thoroughly on the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, to validate the ability to preserve the topology of the two-dimensional space and to achieve high-performance trajectory classification, in contrast with approaches that do not consider the location of features. Moreover, a brief example is included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, which outperforms previous methods aimed at classifying the same datasets.
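The nD-SOM-PINT itself constrains the map structure and learning; as a much simpler stand-in for the idea that feature location should matter, the sketch below appends weighted location coordinates to each feature vector before training a plain SOM. All data and the weighting factor are assumptions, and this is not the proposed architecture.

```python
# Simplified stand-in: a plain SOM on location-augmented vectors (not nD-SOM-PINT).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(3)
n_samples = 200
features = rng.random((n_samples, 6))     # hypothetical local descriptors
locations = rng.random((n_samples, 2))    # (x, y) image coordinates in [0, 1]

loc_weight = 2.0                          # how strongly location influences the map
augmented = np.hstack([features, loc_weight * locations])

som = MiniSom(8, 8, augmented.shape[1], sigma=1.5, learning_rate=0.5, random_seed=3)
som.train_random(augmented, 4000)

# Samples from nearby image locations now tend to map to nearby SOM units.
print(som.winner(augmented[0]), som.winner(augmented[1]))
```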

Relevance:

100.00%

Publisher:

Abstract:

The Self-Organizing Map (SOM) algorithm has been extensively studied and has been applied with considerable success to a wide variety of problems. However, the algorithm is derived from heuristic ideas and this leads to a number of significant limitations. In this paper, we consider the problem of modelling the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. We introduce a novel form of latent variable model, which we call the GTM algorithm (for Generative Topographic Mapping), which allows general non-linear transformations from latent space to data space, and which is trained using the EM (expectation-maximization) algorithm. Our approach overcomes the limitations of the SOM, while introducing no significant disadvantages. We demonstrate the performance of the GTM algorithm on simulated data from flow diagnostics for a multi-phase oil pipeline.
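A compact numpy sketch of the GTM training loop outlined above (2-D latent grid, RBF basis, isotropic Gaussian noise, EM updates). The grid sizes, basis width, regularisation, and toy data are assumptions; this is an illustration of the algorithm, not the authors' implementation.

```python
# Minimal GTM: latent grid -> RBF basis -> linear map to data space, fitted by EM.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                 # hypothetical data, N x D

gx, gy = np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10))
Z = np.column_stack([gx.ravel(), gy.ravel()])                 # K latent grid points
cx, cy = np.meshgrid(np.linspace(-1, 1, 4), np.linspace(-1, 1, 4))
C = np.column_stack([cx.ravel(), cy.ravel()])                 # M RBF centres
sigma = 0.3
Phi = np.exp(-((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))  # K x M

W = rng.normal(scale=0.1, size=(Phi.shape[1], X.shape[1]))    # M x D mapping weights
beta, alpha = 1.0, 1e-3                                       # noise precision, regulariser

for _ in range(30):
    # E-step: responsibilities of each latent point for each data point.
    Y = Phi @ W
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)       # K x N squared distances
    R = np.exp(-0.5 * beta * d2)
    R /= R.sum(axis=0, keepdims=True)
    # M-step: regularised least squares for W, then update the noise precision.
    G = np.diag(R.sum(axis=1))
    A = Phi.T @ G @ Phi + (alpha / beta) * np.eye(Phi.shape[1])
    W = np.linalg.solve(A, Phi.T @ R @ X)
    d2 = ((X[None, :, :] - (Phi @ W)[:, None, :]) ** 2).sum(-1)
    beta = X.size / (R * d2).sum()

# Posterior-mean latent projection of each data point.
latent = R.T @ Z
print(latent[:5])
```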

Relevance:

100.00%

Publisher:

Abstract:

Multidimensional compound optimization is a new paradigm in the drug discovery process, yielding efficiencies during early stages and reducing attrition in the later stages of drug development. The success of this strategy relies heavily on understanding this multidimensional data and extracting useful information from it. This paper demonstrates how principled visualization algorithms can be used to understand and explore a large data set created in the early stages of drug discovery. The experiments presented are performed on a real-world data set comprising biological activity data and some whole-molecule physicochemical properties. Data visualization is a popular way of presenting complex data in a simpler form. We have applied powerful principled visualization methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), to help the domain experts (screening scientists, chemists, biologists, etc.) understand the data and draw meaningful decisions. We also benchmark these principled methods against better-known visualization approaches, namely principal component analysis (PCA), Sammon's mapping, and self-organizing maps (SOMs), to demonstrate their enhanced power to help the user visualize the large multidimensional data sets encountered in the early stages of the drug discovery process. The results reported clearly show that the GTM and HGTM algorithms allow the user to cluster active compounds for different targets and understand them better than the benchmarks. An interactive software tool supporting these visualization algorithms was provided to the domain experts. The tool supports the domain experts in exploring the projections obtained from the visualization algorithms, providing facilities such as parallel coordinate plots, magnification factors, directional curvatures, and integration with industry-standard software.
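For comparison, the simplest of the benchmarked projections can be reproduced in a few lines; the sketch below projects a synthetic descriptor matrix to two dimensions with PCA. The descriptors and activity flags are placeholders, not the real-world data set of the paper.

```python
# Baseline 2-D projection of a compound-descriptor matrix with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
descriptors = rng.normal(size=(1000, 15))     # hypothetical physicochemical descriptors
activity = rng.random(1000) > 0.9             # hypothetical "active" flags

proj = PCA(n_components=2).fit_transform(descriptors)
print("active-compound centroid:", proj[activity].mean(axis=0))
```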

Relevance:

100.00%

Publisher:

Abstract:

Implementation studies and related research in organizational theory can be enhanced by drawing on the field of complex systems to understand change better and, as a consequence, manage it more successfully. This article reinterprets data previously published in the British Journal of Management to reveal a new contribution: that policy implementation processes should be understood as a self-organizing system in which adaptive abilities are extremely important for stakeholders. In other words, national policy is reinterpreted at the local level, with each local organization uniquely mixing elements of national policy with its own requirements, making policy implementation unpredictable and piecemeal. The original article explained different paces and directions of change in terms of traditional management processes: leadership, politics, implementation and vision. By reinterpreting the data, it is possible to reveal that deeper-level, more emergent processes are also at work influencing change, which the authors label 'possibility space'. Implications for theory, policy and practice are identified.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a technique for building complex and adaptive meshes for urban and architectural design. The combination of a self-organizing map and cellular automata algorithms provides a method for generating meshes that would otherwise remain static. It is intended as an auxiliary tool for the architect or urban planner, improving control over large amounts of spatial information. The traditional grid employed as a design aid is extended to become more general and flexible.
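As a minimal illustration of the local update rules that cellular automata contribute to the method, the sketch below runs an elementary one-dimensional automaton (Rule 90); it is not the mesh-generation procedure of the paper.

```python
# Elementary 1-D cellular automaton: each cell updates from its two neighbours.
import numpy as np

def step(cells, rule=90):
    """Apply a Wolfram elementary rule to every cell in parallel."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right
    table = np.array([(rule >> i) & 1 for i in range(8)])
    return table[idx]

cells = np.zeros(31, dtype=int)
cells[15] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```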

Relevance:

100.00%

Publisher:

Abstract:

Neural networks have been successfully employed in different biomedical settings. They have been useful for feature extraction from images and biomedical data in a variety of diagnostic applications. In this paper, they are applied as a diagnostic tool for classifying different levels of gastric electrical uncoupling in controlled acute experiments on dogs. Data were collected from 16 dogs using six bipolar electrodes inserted into the serosa of the antral wall. Each dog underwent three recordings under different conditions: (1) basal state, (2) mild surgically-induced uncoupling, and (3) severe surgically-induced uncoupling. Half-hour recordings were made for each condition. The neural network was implemented according to the Learning Vector Quantization (LVQ) model, a supervised counterpart of the Kohonen self-organizing maps. The majority of the recordings collected from the dogs were used for network training; the remaining recordings served to test the validity of the training procedure. Approximately 90% of the dogs from the neural network training set were classified properly. However, only 31% of the dogs not included in the training process were accurately diagnosed. The poor neural-network-based diagnosis of recordings that did not participate in the training process might have been caused by an inappropriate representation of the input data. Previous research has suggested characterizing the signals according to certain features of the recorded data. This method, if employed, would reduce the noise and possibly improve the diagnostic abilities of the neural network.
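A minimal LVQ1 sketch of the supervised prototype update described above: prototypes are attracted to samples of their own class and repelled from samples of other classes. The synthetic features stand in for the electrode-signal descriptors; this is not the authors' trained network.

```python
# Minimal LVQ1: nudge the nearest prototype toward or away from each sample.
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 1.0, size=(60, 4)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 60)                      # basal / mild / severe (hypothetical)

# One prototype per class, initialised from a few class samples.
protos = np.array([X[y == c][:5].mean(axis=0) for c in range(3)])
labels = np.array([0, 1, 2])

lr = 0.05
for epoch in range(30):
    for xi, yi in zip(X, y):
        k = np.argmin(np.linalg.norm(protos - xi, axis=1))   # nearest prototype
        sign = 1.0 if labels[k] == yi else -1.0              # attract or repel
        protos[k] += sign * lr * (xi - protos[k])

pred = labels[np.argmin(np.linalg.norm(X[:, None] - protos[None], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```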

Relevance:

100.00%

Publisher:

Abstract:

Numerical optimization is a technique in which a computer explores design parameter combinations to find extremes in performance factors. In multi-objective optimization, several performance factors can be optimized simultaneously. The solution to a multi-objective optimization problem is not a single design but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for the purpose of solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the progress of the Pareto frontier approximation and automatically switches among its constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation. Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach 2.5. Two objectives were optimized simultaneously: minimizing aerodynamic drag and maximizing penetrator volume. This problem was solved twice. The first time, Modified Newton Impact Theory (MNIT) was used to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag on the penetrator. The studies show the difference in the optimized penetrator shapes when viscosity is absent or present in the optimization. In modern optimization problems, objective function evaluations may require many hours on a computer cluster. One solution is to create a response surface that models the behavior of the objective function. Once enough data about the behavior of the objective function have been collected, a response surface can be used to represent the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to build response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 non-linear functions involving from 2 to 100 variables, demonstrating the robustness and accuracy of HYBSORSM.
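A short sketch of extracting a Pareto frontier from a set of sampled designs with the two objectives named above (minimize drag, maximize volume). The (drag, volume) pairs are synthetic placeholders, not outputs of MNIT or the PNS solver.

```python
# Sketch: keep only the non-dominated designs for (minimize drag, maximize volume).
import numpy as np

rng = np.random.default_rng(6)
drag = rng.random(200)                          # hypothetical drag values
volume = 1.0 - drag + 0.2 * rng.random(200)     # hypothetical, roughly conflicting objective

def is_dominated(i):
    """Design i is dominated if some design is no worse in both objectives and better in one."""
    better = (drag <= drag[i]) & (volume >= volume[i])
    strictly = (drag < drag[i]) | (volume > volume[i])
    return np.any(better & strictly)

pareto = [i for i in range(len(drag)) if not is_dominated(i)]
print(f"{len(pareto)} non-dominated designs out of {len(drag)}")
```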

Relevance:

100.00%

Publisher:

Abstract:

This dissertation introduces a new approach for assessing the effects of pediatric epilepsy on the language connectome. Two novel data-driven network construction approaches are presented. These methods rely on connecting different brain regions using either the extent or the intensity of language-related activations, as identified by independent component analysis of fMRI data. An auditory description decision task (ADDT) paradigm was used to activate the language network for 29 patients and 30 controls recruited from three major pediatric hospitals. Empirical evaluations illustrated that pediatric epilepsy can cause, or is associated with, a reduction in network efficiency. Patients showed a propensity to inefficiently employ the whole brain network to perform the ADDT language task; in contrast, controls appeared to use smaller segregated network components efficiently to achieve the same task. To explain the causes of the decreased efficiency, graph-theoretical analysis was carried out. The analysis revealed no substantial global network feature differences between the patient and control groups. It also showed that the language network exhibited small-world characteristics for both subject groups; however, the patients' extent-of-activation network showed a tendency toward more random networks. It was also shown that the intensity-of-activation network displayed ipsilateral hub reorganization at the local level. The left hemispheric hubs displayed greater centrality values for patients, whereas the right hemispheric hubs displayed greater centrality values for controls. This hemispheric hub disparity was not correlated with the atypical right language laterality found in six patients. Finally, it was shown that a multi-level unsupervised clustering scheme based on self-organizing maps, a type of artificial neural network, and k-means was able to blindly and fairly separate the subjects into their respective patient or control groups. The clustering was initiated using only the local nodal centrality measurements. Compared to the extent-of-activation network, clustering based on the intensity-of-activation network demonstrated better precision. This outcome supports the assertion that the local centrality differences presented by the intensity-of-activation network can be associated with focal epilepsy.
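A brief sketch of the two-stage clustering idea: train a SOM on nodal-centrality vectors, cluster the SOM prototypes with k-means, and assign each subject the cluster of its best-matching unit. The centrality matrix, the map size, and k = 2 are assumptions for illustration.

```python
# Sketch: SOM followed by k-means on the SOM prototypes for blind subject grouping.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
centrality = rng.random((59, 90))   # hypothetical: 59 subjects x 90 regional centralities

som = MiniSom(4, 4, centrality.shape[1], sigma=1.0, learning_rate=0.5, random_seed=7)
som.train_random(centrality, 3000)

prototypes = som.get_weights().reshape(-1, centrality.shape[1])   # 16 SOM units
unit_cluster = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(prototypes)

# A subject inherits the cluster of its best-matching SOM unit.
def subject_cluster(x):
    i, j = som.winner(x)
    return unit_cluster[i * 4 + j]

groups = [subject_cluster(x) for x in centrality]
print(groups)
```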