970 results for data visualization
Abstract:
The creation of three-dimensional (3D) drawings of proposed designs for construction, reconstruction, and rehabilitation activities is becoming increasingly common among highway designers, whether department of transportation (DOT) employees or consulting engineers. However, technical challenges prevent these 3D drawings/models from being used as the basis of interactive simulation. Use of driving simulation to serve the needs of the transportation industry in the US lags behind that in Europe due to several factors, including a lack of technical infrastructure at DOTs, the cost of maintaining and supporting simulation infrastructure (traditionally done by simulation domain experts), and the cost and effort of translating DOT domain data into the simulation domain.
Abstract:
PURPOSE: To describe the use of anterior segment optical coherence tomography (AS-OCT) to clarify the position and patency of aqueous shunt devices in the anterior chamber of eyes in which corneal edema or tube position does not permit a satisfactory view. DESIGN: Noncomparative observational case series. METHODS: Four cases are reported in which aqueous shunt malposition or obstruction was suspected but the shunt could not be seen on clinical examination. The patients underwent AS-OCT to identify the position and patency of the shunt tip. RESULTS: In each case, AS-OCT provided data regarding tube position and/or patency that could not be obtained by slit-lamp examination or gonioscopy and that influenced management. CONCLUSIONS: AS-OCT can be used to visualize anterior chamber tubes in the presence of corneal edema that precludes an adequate view, or in cases where the tube is retracted into the cornea. In such cases, AS-OCT is useful in identifying shunt patency and position, which helps guide clinical decision making.
Abstract:
Introduction: The field of connectomic research is growing rapidly as a result of methodological advances in structural neuroimaging at many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through connectome mapping pipelines (Hagmann et al, 2008), yielding so-called connectomes (Hagmann 2005, Sporns et al, 2005). These exhibit both spatial and topological information that constrain functional imaging studies and are relevant to their interpretation. The need has grown for a special-purpose software tool that supports both clinical researchers and neuroscientists in investigating such connectome data. Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined, container-like Connectome File Format, which specifies networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. Using Python as the programming language allows the tool to be cross-platform and gives it access to a multitude of scientific libraries. Results: Thanks to a flexible plugin architecture, functionality can easily be extended for specific purposes. The following features are already implemented:
* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch; other layouts are possible.
* Picking functionality to select nodes and edges, retrieve additional node information (ConnectomeWiki), and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Arbitrary metadata can be stored for networks, allowing e.g. group-based analysis or meta-analysis.
* A Python shell for scripting; application data is exposed and can be modified or used for further post-processing (a minimal scripting sketch follows this abstract).
* Visualization pipelines using filters and modules can be composed with Mayavi (Ramachandran et al, 2008).
* An interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) was used to process 20 healthy subjects into an average connectome dataset. The figures show the ConnectomeViewer user interface with this dataset, displaying the connections that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org). Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates the relevant datatypes and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
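The scripting workflow described above can be approximated outside the ConnectomeViewer itself. The sketch below is illustrative only, not the tool's actual API: it assumes a hypothetical GraphML network file ("network.graphml", GraphML being the network format named in the Connectome File Format) with a numeric edge attribute called "weight", and reproduces the interactive edge-thresholding idea as a scripted NetworkX/Matplotlib filter.

    import networkx as nx
    import matplotlib.pyplot as plt

    # Hypothetical file; the Connectome File Format stores networks as GraphML.
    G = nx.read_graphml("network.graphml")

    # Keep only edges whose assumed 'weight' attribute exceeds a threshold,
    # a scripted stand-in for the viewer's interactive thresholding filters.
    THRESHOLD = 0.5
    strong = [(u, v) for u, v, d in G.edges(data=True)
              if float(d.get("weight", 0.0)) > THRESHOLD]
    H = G.edge_subgraph(strong).copy()

    # Plot the degree distribution of the thresholded network with Matplotlib.
    degrees = [deg for _, deg in H.degree()]
    plt.hist(degrees, bins=20)
    plt.xlabel("node degree")
    plt.ylabel("count")
    plt.title("Degree distribution above edge-weight threshold")
    plt.show()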
Abstract:
Direct identification and isolation of antigen-specific T cells became possible with the development of "tetramers" based on avidin-fluorochrome conjugates associated with mono-biotinylated class I MHC-peptide monomeric complexes. In principle, a series of distinct class I MHC-peptide tetramers, each labelled with a different fluorochrome, would allow as many unique antigen-specific CD8(+) T cell populations to be enumerated simultaneously. In practice, however, only phycoerythrin- and allophycocyanin-conjugated tetramers have been generally available, imposing serious constraints on multiple labeling. To overcome this limitation, we have developed dextramers, multimers based on a dextran backbone bearing multiple fluorescein and streptavidin moieties. Here we demonstrate the functionality and optimization of these new probes on human CD8(+) T cell clones with four independent antigen specificities. Their application to the analysis of relatively low-frequency antigen-specific T cells in peripheral blood, as well as their use in fluorescence microscopy, is demonstrated. The data show that dextramers produce a stronger signal than their fluoresceinated tetramer counterparts. Thus, they could become the reagents of choice as antigen-specific T cell labeling transitions from basic research to clinical application.
Abstract:
Utilizing enhanced visualization in transportation planning and design has gained popularity over the last decade. This work demonstrates the concept of using a highly immersive virtual reality simulation engine to create dynamic, interactive, full-scale, three-dimensional (3D) models of highway infrastructure. For this project, the highway infrastructure element chosen was a two-way, stop-controlled intersection (TWSCI). VirtuTrace, a virtual reality simulation engine developed by the principal investigator, was used to construct the dynamic 3D model of the TWSCI. The model was implemented in C6, Iowa State University's Cave Automatic Virtual Environment (CAVE). Representatives from the Institute of Transportation at Iowa State University and from the Iowa Department of Transportation experienced the simulated TWSCI, and both teams verbally identified the significant potential the approach offers for applying next-generation simulated environments to road design and safety evaluation.
Abstract:
BACKGROUND: There is an ever-increasing volume of data on host genes that are modulated during HIV infection, influence disease susceptibility, or carry genetic variants that impact HIV infection. We created GuavaH (Genomic Utility for Association and Viral Analyses in HIV, http://www.GuavaH.org), a public resource that supports multipurpose analysis of genome-wide genetic variation and gene expression profiles across multiple phenotypes relevant to HIV biology. FINDINGS: We included original data from 8 genome and transcriptome studies addressing viral and host responses in vivo and ex vivo. These studies cover phenotypes such as HIV acquisition, plasma viral load, disease progression, viral replication cycle, latency, and viral-host genome interaction. This represents genome-wide association data from more than 4,000 individuals, exome sequencing data from 392 individuals, in vivo transcriptome microarray data from 127 patients/conditions, and 60 sets of RNA-seq data. Additionally, GuavaH allows visualization of protein variation in ~8,000 individuals from the general population. The publicly available GuavaH framework supports queries on (i) unique single nucleotide polymorphisms across different HIV-related phenotypes, (ii) gene structure and variation, (iii) in vivo gene expression in the setting of human infection (CD4+ T cells), and (iv) in vitro gene expression data in models of permissive infection, latency, and reactivation. CONCLUSIONS: The complexity of analyzing host genetic influences on HIV biology and pathogenesis calls for comprehensive research engines built on curated data. The tool developed here allows queries on, and supports validation of, the rapidly growing body of host genomic information pertinent to HIV research.
Abstract:
Over the past decade, many efforts have been made to identify MHC class II-restricted epitopes from different tumor-associated Ags. The Melan-A/MART-1(26-35) parental and Melan-A/MART-1(26-35(A27L)) analog epitopes have been widely used in melanoma immunotherapy to induce and boost CTL responses, but only one Th epitope is currently known (Melan-A51-73, DRB1*0401 restricted). In this study, we describe two novel Melan-A/MART-1-derived sequences recognized by CD4 T cells from melanoma patients. These epitopes can be mimicked by peptides Melan-A27-40, presented by HLA-DRB1*0101 and HLA-DRB1*0102, and Melan-A25-36, presented by HLA-DQB1*0602 and HLA-DRB1*0301. CD4 T cell clones specific for these epitopes recognize Melan-A/MART-1+ tumor cells and Melan-A/MART-1-transduced EBV-B cells, and recognition is reduced by inhibitors of the MHC class II presentation pathway, suggesting that the epitopes are naturally processed and presented by EBV-B cells and melanoma cells. Moreover, Melan-A-specific Abs could be detected in the serum of patients with measurable CD4 T cell responses specific for Melan-A/MART-1. Interestingly, even the short Melan-A/MART-1(26-35(A27L)) peptide was recognized by CD4 T cells from HLA-DQ6+ and HLA-DR3+ melanoma patients. Using Melan-A/MART-1(25-36)/DQ6 tetramers, we could detect Ag-specific CD4 T cells directly ex vivo in the circulating lymphocytes of a melanoma patient. Together, these results provide the basis for monitoring naturally occurring and vaccine-induced Melan-A/MART-1-specific CD4 T cell responses, allowing precise characterization of responding T cells directly ex vivo.
Abstract:
This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can be considered, in a broad sense, as a subfield of artificial intelligence particularly concerned with developing techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic cartography. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organizing maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach of experimental variography and according to the principles of machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and detects the presence of spatial patterns describable by a statistic. The machine learning approach to ESDA is presented through the application of the k-nearest neighbours method, which is very simple and has excellent interpretation and visualization qualities. An important part of the thesis deals with the topical subject of automatic mapping of spatial data. General regression neural networks are proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office. This software collection has been developed over the last 15 years and has been used in numerous teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and natural hazard assessment (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to create a user-friendly, easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies
Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems for environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. They are competitive in efficiency with geostatistical models in low-dimensional geographical spaces but indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to reveal the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties; a minimal sketch of this approach follows the abstract. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, soil type and hydrogeological unit classification, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
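The k-NN approach to exploratory spatial data analysis mentioned above is easy to sketch. The following minimal example uses synthetic coordinates and values (the choice of k and of the Euclidean metric is illustrative; this is not the Machine Learning Office implementation) and predicts the value at a new location as the mean of its k nearest measured neighbours.

    import numpy as np

    def knn_predict(coords, values, query, k=5):
        """Predict at `query` as the mean of the k nearest training points."""
        d = np.linalg.norm(coords - query, axis=1)  # distances to all stations
        nearest = np.argsort(d)[:k]                 # indices of the k closest
        return values[nearest].mean()

    # Synthetic monitoring network: 200 stations measuring a smooth field.
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(200, 2))
    values = np.sin(coords[:, 0] / 10) + 0.1 * rng.standard_normal(200)

    print(knn_predict(coords, values, np.array([50.0, 50.0]), k=5))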
Abstract:
Visualization is a relatively recent tool available to engineers for enhancing transportation project design through improved communication, decision making, and stakeholder feedback. Current visualization techniques include image composites, video composites, 2D drawings, drive-through or fly-through animations, 3D rendering models, virtual reality, and 4D CAD. These methods are used mainly to communicate within the design and construction team and between the team and external stakeholders. Visualization improves understanding of design intent and project concepts and facilitates effective decision making. However, visualization tools are typically used only for presentation, and mainly in large-scale urban projects. Visualization is not widely accepted, owing to a lack of demonstrated engineering benefits for typical agency projects such as small- and medium-sized projects, rural projects, and projects where external stakeholder communication is not a major issue. Furthermore, adopting visualization tools is perceived to require a high investment of both financial and human capital. The most advanced technique, virtual reality, has been used only in academic research settings, and 4D CAD has been used on a very limited basis for highly complicated specialty projects. However, a number of less intensive visualization methods are available that may benefit many agency projects. In this paper, we present the results of a feasibility study examining the use of visualization and simulation applications for improving highway planning, design, construction, and safety and mobility.
Abstract:
The paper deals with the development and application of a methodology for the automatic mapping of pollution/contamination data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve this problem. The automatic tuning of isotropic and anisotropic GRNN models using a cross-validation procedure is presented. Results are compared with the k-nearest-neighbours interpolation algorithm using an independent validation data set. The quality of the mapping is controlled by analyzing the raw data and the residuals using variography. Maps of the probability of exceeding a given decision level and "thick" isoline visualization of the uncertainties are presented as examples of decision-oriented mapping. A real case study is based on the mapping of radioactively contaminated territories.
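In regression terms, the GRNN is a Nadaraya-Watson kernel estimator whose single smoothing parameter (sigma) can be tuned automatically by cross-validation, as the abstract describes for the isotropic case. The sketch below is a generic illustration on synthetic data, not the authors' implementation; the leave-one-out loop mirrors only the isotropic tuning.

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma):
        """Isotropic GRNN: Gaussian-kernel weighted average of training targets."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))            # kernel weights
        return (w @ y_train) / (w.sum(axis=1) + 1e-12)  # weighted mean per query

    def loo_error(X, y, sigma):
        """Leave-one-out cross-validation error for a given sigma."""
        errs = []
        for i in range(len(y)):
            mask = np.arange(len(y)) != i
            pred = grnn_predict(X[mask], y[mask], X[i:i + 1], sigma)[0]
            errs.append((pred - y[i]) ** 2)
        return np.mean(errs)

    # Synthetic contamination-like field and a simple grid search over sigma.
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 10, size=(100, 2))
    y = np.exp(-((X - 5) ** 2).sum(1) / 8) + 0.05 * rng.standard_normal(100)

    sigmas = [0.2, 0.5, 1.0, 2.0]
    best = min(sigmas, key=lambda s: loo_error(X, y, s))
    print("best sigma:", best)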
Abstract:
In order to compare coronary magnetic resonance angiography (MRA) data obtained with different scanning methodologies, adequate visualization and presentation of the coronary MRA data must be ensured. Furthermore, an objective quantitative comparison between images acquired with different scanning methods is desirable. To address this need, a software tool ("Soap-Bubble") that facilitates visualization and quantitative comparison of 3D volume-targeted coronary MRA data was developed. In the present implementation, the user interactively specifies a curved subvolume (enclosed in the 3D coronary MRA data set) that closely encompasses the coronary arterial segments. Using a 3D Delaunay triangulation and a parallel projection, the tool then displays multiple coronary segments simultaneously in one 2D representation. For objective quantitative analysis, frequently explored quantitative parameters such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and vessel length, sharpness, and diameter can be assessed. The tool thus supports visualization and objective, quantitative comparison of coronary MRA data obtained with different scanning methods. The first results obtained in healthy adults and in patients with coronary artery disease are presented.
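SNR and CNR of the kind the tool reports are conventionally computed from region-of-interest (ROI) statistics: mean signal over the standard deviation of background noise, and the difference of two tissue means over that same noise estimate. A minimal sketch with hypothetical ROI samples follows; the actual Soap-Bubble measurement protocol may differ.

    import numpy as np

    def snr(signal_roi, noise_roi):
        """SNR: mean signal intensity over the standard deviation of noise."""
        return signal_roi.mean() / noise_roi.std()

    def cnr(roi_a, roi_b, noise_roi):
        """CNR: difference of mean intensities (e.g. blood vs. myocardium)
        over the standard deviation of noise."""
        return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

    # Hypothetical ROIs sampled from an image: lumen, myocardium, background.
    rng = np.random.default_rng(2)
    blood = rng.normal(180, 12, 500)       # bright coronary lumen voxels
    myocardium = rng.normal(110, 12, 500)  # surrounding tissue voxels
    background = rng.normal(0, 10, 500)    # air/background noise voxels

    print("SNR:", snr(blood, background))
    print("CNR:", cnr(blood, myocardium, background))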
Abstract:
We present a participant study that compares biological data exploration tasks using volume renderings of laser confocal microscopy data across three environments that vary in level of immersion: a desktop, a fishtank, and a cave system. For the tasks, data, and visualization approach used in our study, we found that subjects qualitatively preferred, and quantitatively performed better in, the cave compared with the fishtank and desktop. Subjects performed real-world biological data analysis tasks that emphasized understanding spatial relationships, including characterizing the general features in a volume, identifying colocated features, and reporting geometric relationships such as whether clusters of cells were coplanar. After analyzing data in each environment, subjects were asked to choose which environment they wanted to use to analyze additional data sets; they uniformly selected the cave environment.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data grows rapidly, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is commonly used to make incomplete data complete, making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Second, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. We observed that missing value imputation is indeed a rational way to improve the quality of biological data, and the research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation (sketched after this abstract) was good enough, but there was also a need for more advanced imputation methods, such as Bayesian principal component analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are an example of the outcome of multiple biological experiments, such as gene microarray studies. Such networks are typically very large and highly connected, so fast algorithms are needed to produce visually pleasing layouts. We developed a computationally efficient way to produce layouts of large biological interaction networks; the algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
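The k-NN imputation evaluated in the thesis follows a standard recipe: for each row (gene) with missing entries, find the k most similar rows over the mutually observed columns and average their values at the missing positions. The sketch below uses a toy expression matrix; k and the distance measure are illustrative, not the thesis's exact configuration.

    import numpy as np

    def knn_impute(M, k=3):
        """Fill NaNs row-wise: each missing entry becomes the mean of that
        column over the k rows most similar on mutually observed columns."""
        M = M.copy()
        for i in range(M.shape[0]):
            missing = np.isnan(M[i])
            if not missing.any():
                continue
            dists = []
            for j in range(M.shape[0]):
                if j == i:
                    continue
                shared = ~np.isnan(M[i]) & ~np.isnan(M[j])
                if shared.any():
                    d = np.sqrt(((M[i, shared] - M[j, shared]) ** 2).mean())
                    dists.append((d, j))
            neighbours = [j for _, j in sorted(dists)[:k]]
            for c in np.where(missing)[0]:
                vals = [M[j, c] for j in neighbours if not np.isnan(M[j, c])]
                if vals:
                    M[i, c] = np.mean(vals)
        return M

    # Toy gene-expression matrix (genes x samples) with one missing value.
    M = np.array([[1.0, 2.0, np.nan],
                  [1.1, 2.1, 3.1],
                  [0.9, 1.9, 2.9],
                  [5.0, 5.0, 5.0]])
    print(knn_impute(M, k=2))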