967 results for multiview visualization


Relevance:

10.00%

Publisher:

Abstract:

In this paper we present the ViRVIG Institute, a recently created institution that joins two well-known research groups: MOVING in Barcelona, and GGG in Girona. Our main research topics are Virtual Reality devices and interaction techniques, complex data models, realistic materials and lighting, geometry processing, and medical image visualization. We briefly introduce the history of both research groups and present some representative projects. Finally, we sketch our lines of future research.

Relevance:

10.00%

Publisher:

Abstract:

There is a concern that agriculture will no longer be able to meet, on a global scale, the growing demand for food. Facing such a challenge requires new patterns of thinking in the context of complexity and sustainability sciences. This paper, focused on the social dimension of the study and management of agricultural systems, suggests that rethinking the study of agricultural systems entails analyzing them as complex socio-ecological systems, as well as considering the differing thinking patterns of diverse stakeholders. The intersubjective nature of knowledge, as studied by different philosophical schools, needs to be better integrated into the study and management of agricultural systems than it has been so far, forcing us to accept that there are no simplistic solutions and to seek a better understanding of the social dimension of agriculture. Different agriculture-related problems require different policy and institutional approaches. Finally, the intersubjective nature of knowledge calls for making visible the different framings and power relations at play in the decision-making process. Rethinking the management of agricultural systems implies that policy making should be shaped by different principles: learning, flexibility, adaptation, scale-matching, participation, diversity enhancement and precaution all hold the promise of significantly improving current standard management procedures.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: There is an ever-increasing volume of data on host genes that are modulated during HIV infection, influence disease susceptibility, or carry genetic variants that impact HIV infection. We created GuavaH (Genomic Utility for Association and Viral Analyses in HIV, http://www.GuavaH.org), a public resource that supports multipurpose analysis of genome-wide genetic variation and gene expression profiles across multiple phenotypes relevant to HIV biology. FINDINGS: We included original data from 8 genome and transcriptome studies addressing viral and host responses in vivo and ex vivo. These studies cover phenotypes such as HIV acquisition, plasma viral load, disease progression, viral replication cycle, latency and viral-host genome interaction. This represents genome-wide association data from more than 4,000 individuals, exome sequencing data from 392 individuals, in vivo transcriptome microarray data from 127 patients/conditions, and 60 sets of RNA-seq data. Additionally, GuavaH allows visualization of protein variation in ~8,000 individuals from the general population. The publicly available GuavaH framework supports queries on (i) individual single nucleotide polymorphisms across different HIV-related phenotypes, (ii) gene structure and variation, (iii) in vivo gene expression in the setting of human infection (CD4+ T cells), and (iv) in vitro gene expression data in models of permissive infection, latency and reactivation. CONCLUSIONS: The complexity of the analysis of host genetic influences on HIV biology and pathogenesis calls for comprehensive research engines built on curated data. The tool developed here allows queries on, and supports validation of, the rapidly growing body of host genomic information pertinent to HIV research.

Relevance:

10.00%

Publisher:

Abstract:

Dendritic cells (DCs) are the most potent antigen-presenting cells in the human lung and are now recognized as crucial initiators of immune responses in general. They are arranged as sentinels in a dense surveillance network inside and below the epithelium of the airways and alveoli, where they are ideally situated to sample inhaled antigen. DCs are known to play a pivotal role in maintaining the balance between tolerance and active immune response in the respiratory system. It is no surprise that the lung has become a main focus of DC-related investigations, as this organ provides a large interface for interactions of inhaled antigens with the human body. In recent years there has been a constantly growing body of lung DC-related publications that draw their data from in vitro models, animal models and human studies. This review focuses on the biology and functions of different DC populations in the lung and highlights the advantages and drawbacks of different models with which to study the role of lung DCs. Furthermore, we present a number of up-to-date visualization techniques to characterize DC-related cell interactions in vitro and/or in vivo.

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: Visualization of coronary blood flow in the right and left coronary systems in volunteers and patients by means of a modified inversion-prepared bright-blood coronary magnetic resonance angiography (cMRA) sequence. MATERIALS AND METHODS: cMRA was performed in 14 healthy volunteers and 19 patients on a 1.5 Tesla MR system using a free-breathing 3D balanced turbo field echo (b-TFE) sequence with radial k-space sampling. For magnetization preparation, a slab-selective and a 2D selective inversion pulse were used for the right and left coronary systems, respectively. cMRA images were evaluated in terms of clinically relevant stenoses (> 50%) and compared to conventional catheter angiography. Signal was measured in the coronary arteries (coro), the aorta (ao) and the epicardial fat (fat) to determine SNR and CNR. In addition, maximal visible vessel length and vessel border definition were analyzed. RESULTS: The use of a selective inversion pre-pulse allowed direct visualization of the coronary blood flow in the right and left coronary systems. The measured SNR and CNR, vessel length, and vessel sharpness in volunteers (SNR coro: 28.3 +/- 5.0; SNR ao: 37.6 +/- 8.4; CNR coro-fat: 25.3 +/- 4.5; LAD: 128.0 mm +/- 8.8; RCA: 74.6 mm +/- 12.4; sharpness: 66.6 % +/- 4.8) were slightly higher than those in patients (SNR coro: 24.1 +/- 3.8; SNR ao: 33.8 +/- 11.4; CNR coro-fat: 19.9 +/- 3.3; LAD: 112.5 mm +/- 13.8; RCA: 69.6 mm +/- 16.6; sharpness: 58.9 % +/- 7.9; n.s.). In the patient study, the assessment of 42 coronary segments led to the correct identification of 10 clinically relevant stenoses. CONCLUSION: The modification of a previously published inversion-prepared cMRA sequence allowed direct visualization of the coronary blood flow in the right as well as the left coronary system. In addition, this sequence proved to be highly sensitive in the assessment of clinically relevant stenotic lesions.
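
The SNR and CNR figures reported above follow from standard region-of-interest (ROI) statistics. A minimal sketch of those definitions, using invented NumPy arrays for the coronary, fat, and background-noise ROIs (the pixel values are illustrative, not the study's data):

```python
import numpy as np

def snr(roi_signal: np.ndarray, roi_noise: np.ndarray) -> float:
    """SNR = mean ROI signal / standard deviation of background noise."""
    return float(roi_signal.mean() / roi_noise.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, roi_noise: np.ndarray) -> float:
    """CNR = difference of the two mean signals / noise standard deviation."""
    return float((roi_a.mean() - roi_b.mean()) / roi_noise.std())

# Hypothetical ROI pixel values drawn from an MR magnitude image.
rng = np.random.default_rng(0)
coro = rng.normal(280, 10, 200)   # coronary lumen ROI
fat = rng.normal(60, 10, 200)     # epicardial fat ROI
noise = rng.normal(0, 10, 500)    # air/background ROI

print(f"SNR(coro) = {snr(coro, noise):.1f}")
print(f"CNR(coro-fat) = {cnr(coro, fat, noise):.1f}")
```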

Relevance:

10.00%

Publisher:

Abstract:

The Center for Transportation Research and Education (CTRE) used the traffic simulation model CORSIM to assess proposed capacity and safety improvement strategies for the U.S. 61 corridor through Burlington, Iowa. The comparison between the base and alternative models allows for evaluation of traffic flow performance under existing conditions as well as other design scenarios. The models also provide visualization of performance for interpretation by technical staff, public policy makers, and the public. The objectives of this project are to evaluate the use of traffic simulation models for future use by the Iowa Department of Transportation (DOT) and to develop procedures for employing simulation modeling in the analysis of alternative designs. This report presents both the findings of the U.S. 61 evaluation and an overview of model development procedures. The first part of the report covers the simulation modeling development procedures, illustrated through the Burlington U.S. 61 corridor case study application; Part I is not intended to be a user manual but simply provides introductory guidelines for traffic simulation modeling. Part II of the report evaluates the proposed improvement concepts in a side-by-side comparison of the base and alternative models.
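
As a rough illustration of the side-by-side comparison described in Part II, the sketch below contrasts hypothetical base and alternative measures of effectiveness per segment; the column names and values are invented for illustration, not actual CORSIM output:

```python
import pandas as pd

# Hypothetical measures of effectiveness (MOEs) per corridor segment;
# in practice these would be extracted from CORSIM run outputs.
base = pd.DataFrame(
    {"segment": ["A", "B", "C"], "delay_s_per_veh": [45.2, 38.7, 52.1],
     "avg_speed_mph": [24.5, 27.1, 21.3]})
alt = pd.DataFrame(
    {"segment": ["A", "B", "C"], "delay_s_per_veh": [31.6, 35.0, 40.8],
     "avg_speed_mph": [29.8, 28.0, 26.4]})

# Side-by-side comparison of the base and alternative models.
cmp = base.merge(alt, on="segment", suffixes=("_base", "_alt"))
cmp["delay_change_pct"] = (100 * (cmp.delay_s_per_veh_alt - cmp.delay_s_per_veh_base)
                           / cmp.delay_s_per_veh_base)
print(cmp.round(1))
```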

Relevance:

10.00%

Publisher:

Abstract:

In this review, we summarize how the new concept of digital optics applied to the field of holographic microscopy has allowed the development of a reliable and flexible digital holographic quantitative phase microscopy (DH-QPM) technique at the nanoscale particularly suitable for cell imaging. Particular emphasis is placed on the original biological information provided by the quantitative phase signal. We present the most relevant DH-QPM applications in the field of cell biology, including automated cell counts, recognition, classification, three-dimensional tracking, discrimination between physiological and pathophysiological states, and the study of cell membrane fluctuations at the nanoscale. In the last part, original results show how DH-QPM can address two important issues in the field of neurobiology, namely, multiple-site optical recording of neuronal activity and noninvasive visualization of dendritic spine dynamics resulting from a full digital holographic microscopy tomographic approach.
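
For context, the quantitative phase signal measured by DH-QPM is proportional to the product of cell thickness and the refractive index difference between the cell and the surrounding medium. A minimal sketch of that relation; the wavelength and refractive index values below are assumed, illustrative numbers, not those of any specific setup in the review:

```python
import numpy as np

def thickness_from_phase(phase_rad, wavelength_nm=682.0, n_cell=1.375, n_medium=1.334):
    """Convert a DH-QPM phase map (radians) to cell thickness (nm).

    phase = (2*pi / wavelength) * (n_cell - n_medium) * thickness,
    so thickness = phase * wavelength / (2*pi * delta_n).
    The index values here are illustrative figures for cytoplasm and medium.
    """
    return phase_rad * wavelength_nm / (2.0 * np.pi * (n_cell - n_medium))

phase_map = np.array([[0.5, 1.2], [2.0, 0.8]])  # hypothetical phase values (rad)
print(thickness_from_phase(phase_map))           # thickness map in nm
```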

Relevance:

10.00%

Publisher:

Abstract:

"Morphing Romania and the Moldova Province" gives a short insight of cartograms. Digital cartograms provide potential to move away from classical visualization of geographical data and benefit of new understanding of our world. They introduce a human vision instead of a planimetric one. By applying the Gastner-Newman algorithm for generating density-equalising cartograms to Romania and its Moldova province we can discuss the making of cartograms in general.

Relevance:

10.00%

Publisher:

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualization of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence that is mainly concerned with the development of techniques and algorithms allowing computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems for environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest to geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organizing (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties.

An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions.

The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualization were developed as well, with a user-friendly and easy-to-use interface.
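
Since the GRNN is central to the automatic-mapping results above, a minimal sketch may help: a GRNN is essentially Nadaraya-Watson kernel regression with a single smoothing parameter. The spatial data below are simulated for illustration; this is not the thesis's Machine Learning Office implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """General Regression Neural Network (Nadaraya-Watson kernel regression).

    Each prediction is a Gaussian-kernel weighted average of the training
    targets; sigma is the single smoothing parameter to tune.
    """
    # Pairwise squared distances between query and training points.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Hypothetical 2D spatial data: coordinates -> measured value.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + np.cos(2 * X[:, 1]) + rng.normal(0, 0.05, 200)

grid = np.array([[0.5, 0.5], [0.1, 0.9]])
print(grnn_predict(X, y, grid, sigma=0.1))
```

In practice sigma would be chosen by cross-validation, which is what makes the GRNN attractive for automatic mapping: it has exactly one free parameter.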

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: To evaluate gadocoletic acid (B-22956), a gadolinium-based paramagnetic blood pool agent, for contrast-enhanced coronary magnetic resonance angiography (MRA) in a Phase I clinical trial, and to compare the findings with those obtained using a standard noncontrast T2 preparation sequence. MATERIALS AND METHODS: The left coronary system was imaged in 12 healthy volunteers before B-22956 application and 5 (N = 11) and 45 (N = 7) minutes after application of 0.075 mmol/kg of body weight (BW) of B-22956. Additionally, imaging of the right coronary system was performed 23 minutes after B-22956 application (N = 6). A three-dimensional gradient echo sequence with T2 preparation (precontrast) or inversion recovery (IR) pulse (postcontrast) with real-time navigator correction was used. Assessment of the left and right coronary systems was performed qualitatively (a 4-point visual score for image quality) and quantitatively in terms of signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, visible vessel length, maximal luminal diameter, and the number of visible side branches. RESULTS: Significant (P < 0.01) increases in SNR (+42%) and CNR (+86%) were noted five minutes after B-22956 application, compared to precontrast T2 preparation values. A significant increase in CNR (+40%, P < 0.05) was also noted 45 minutes postcontrast. Vessels (left anterior descending artery (LAD), left coronary circumflex (LCx), and right coronary artery (RCA)) were also significantly (P < 0.05) sharper on postcontrast images. Significant increases in vessel length were noted for the LAD (P < 0.05) and LCx and RCA (both P < 0.01), while significantly more side branches were noted for the LAD and RCA (both P < 0.05) when compared to precontrast T2 preparation values. CONCLUSION: The use of the intravascular contrast agent B-22956 substantially improves both objective and subjective parameters of image quality on high-resolution three-dimensional coronary MRA. The increase in SNR, CNR, and vessel sharpness minimizes current limitations of coronary artery visualization with high-resolution coronary MRA.

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: To implement a double-inversion bright-blood coronary MR angiography sequence using a cylindrical re-inversion prepulse for selective visualization of the coronary arteries. MATERIALS AND METHODS: Local re-inversion bright-blood magnetization preparation was implemented using a nonselective inversion followed by a cylindrical aortic re-inversion prepulse. After an inversion delay that allows for in-flow of the labeled blood-pool into the coronary arteries, three-dimensional radial steady-state free-precession (SSFP) imaging (repetition/echo time, 7.2/3.6 ms; flip angle, 120 degrees; 16 profiles per RR interval; field of view, 360 mm; matrix, 512; twelve 3-mm slices) is performed. Coronary MR angiography was performed in three healthy volunteers and in one patient on a commercial 1.5 Tesla whole-body MR system. RESULTS: In all subjects, coronary arteries were selectively visualized with positive contrast. In addition, a middle-grade stenosis of the proximal right coronary artery was seen in one patient. CONCLUSION: A novel T1 contrast-enhancement strategy is presented for selective visualization of the coronary arteries without extrinsic contrast medium application. In comparison to former arterial spin-labeling schemes, the proposed magnetization preparation obviates the need for a second data set and subtraction.
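
The choice of the inversion delay in inversion-prepared sequences like this follows from longitudinal relaxation after the inversion pulse, Mz(t) = M0(1 - 2e^(-t/T1)), which crosses zero at TI = T1 ln 2. A minimal sketch of that standard relation; the T1 values below are approximate literature figures at 1.5 T, used only for illustration:

```python
import numpy as np

def mz_after_inversion(t_ms, t1_ms, m0=1.0):
    """Longitudinal magnetization t ms after a 180-degree inversion pulse."""
    return m0 * (1.0 - 2.0 * np.exp(-t_ms / t1_ms))

def nulling_ti(t1_ms):
    """Inversion delay that nulls tissue with relaxation time T1: TI = T1*ln(2)."""
    return t1_ms * np.log(2.0)

# Approximate, illustrative T1 values at 1.5 T.
print(f"fat nulled at TI ~ {nulling_ti(250.0):.0f} ms")
print(f"inverted blood at TI = 300 ms: Mz = {mz_after_inversion(300.0, 1200.0):+.2f}")
```

Re-inverted (labeled) aortic blood stays near +M0 while the surrounding inverted tissue recovers through zero, which is what produces the positive coronary contrast described above.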

Relevance:

10.00%

Publisher:

Abstract:

Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Empirical environmental data are always contaminated/corrupted by noise, often of unknown nature. That is one reason why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as the realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if there is not a sufficient number of measurements, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they also suffer from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear, robust to noise in the measurements, able to deal with small empirical datasets, and backed by a solid mathematical foundation is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks, and the results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of the SVM variant for regression, Support Vector Regression (SVR), are less studied. First results on the application of SVR to the spatial mapping of physical quantities were obtained by the authors for the mapping of medium porosity (Kanevski et al., 1999) and of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping on a real case study: soil pollution by the Cs-137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
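
As a flavor of the SVR mapping workflow the paper discusses, here is a minimal sketch using scikit-learn's epsilon-insensitive SVR on simulated spatial data; the dataset and hyperparameter values are invented for illustration, not those of the Cs-137 case study:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical spatial dataset: (x, y) coordinates -> contamination level.
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, (300, 2))
values = np.exp(-((coords[:, 0] - 4) ** 2 + (coords[:, 1] - 6) ** 2) / 8.0)
values += rng.normal(0, 0.05, 300)  # noisy measurements, as in real surveys

# Epsilon-insensitive SVR with an RBF kernel; C, epsilon and gamma control
# the trade-off between data fit, noise tolerance and smoothness.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=0.5).fit(coords, values)

# Predict on a regular grid to produce the map.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
prediction = model.predict(grid).reshape(gx.shape)
print(prediction.shape)  # (50, 50) map ready for plotting
```

The epsilon tube is what gives SVR its robustness to measurement noise: residuals smaller than epsilon incur no loss, so the fit is not dragged around by every noisy sample.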

Relevance:

10.00%

Publisher:

Abstract:

This work investigates novel alternative means of interaction in a virtual environment (VE). We analyze whether humans can remap established body functions to learn to interact with digital information in an environment that is cross-sensory by nature and uses vocal utterances to influence (abstract) virtual objects. We thus establish a correlation among learning, control of the interface, and the perceived sense of presence in the VE. The application enables intuitive interaction by mapping actions (the prosodic aspects of the human voice) to a certain response (i.e., visualization). A series of single-user and multiuser studies shows that users can gain control of the intuitive interface and learn to adapt to new and previously unseen tasks in VEs. Despite the abstract nature of the presented environment, presence scores were generally very high.
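
As a rough sketch of the action-to-response idea (not the paper's actual implementation), one might extract simple prosodic cues from an audio frame and map them to visual parameters of a virtual object; the features, thresholds, and mappings below are all invented for illustration:

```python
import numpy as np

def prosody_features(frame):
    """Extract two simple prosodic cues from one audio frame:
    loudness (RMS energy) and a crude pitch proxy (zero-crossing rate)."""
    rms = np.sqrt(np.mean(frame ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return rms, zcr

def map_to_visual(rms, zcr, max_rms=0.5, max_zcr=0.5):
    """Map vocal cues to hypothetical visual parameters."""
    scale = 1.0 + 4.0 * min(rms / max_rms, 1.0)  # louder voice -> larger object
    hue = min(zcr / max_zcr, 1.0)                # higher pitch proxy -> warmer hue
    return scale, hue

# Fake voiced frame: a 220 Hz tone sampled at 16 kHz.
frame = 0.3 * np.sin(2 * np.pi * 220 * np.arange(512) / 16000)
print(map_to_visual(*prosody_features(frame)))
```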

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: DNA sequence polymorphism analysis can provide valuable information on the evolutionary forces shaping nucleotide variation, and provides insight into the functional significance of genomic regions. Recent and ongoing genome projects will radically improve our capability to detect specific genomic regions shaped by natural selection. Currently available methods and software, however, are unsatisfactory for such genome-wide analysis. RESULTS: We have developed methods for the analysis of DNA sequence polymorphisms at the genome-wide scale. These methods, which have been tested on coalescent-simulated and actual data files from mouse and human, have been implemented in the VariScan software package version 2.0. Additionally, we have incorporated a graphical user interface. The main features of this software are: i) exhaustive population-genetic analyses, including those based on coalescent theory; ii) analysis adapted to the shallow data generated by high-throughput genome projects; iii) use of genome annotations to conduct comprehensive analyses separately for different functional regions; iv) identification of relevant genomic regions by sliding-window and wavelet-multiresolution approaches; v) visualization of the results integrated with current genome annotations in commonly available genome browsers. CONCLUSION: VariScan is a powerful and flexible suite of software for the analysis of DNA polymorphisms. The current version implements new algorithms, methods, and capabilities, providing an important tool for an exhaustive exploratory analysis of genome-wide DNA polymorphism data.
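
To make the sliding-window idea concrete, here is a minimal sketch computing nucleotide diversity (pi, the average number of pairwise differences per site) in overlapping windows along a toy alignment; it illustrates the general approach rather than VariScan's actual algorithms:

```python
import numpy as np
from itertools import combinations

def nucleotide_diversity(seqs):
    """Average pairwise differences per site (pi) for aligned sequences."""
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

def sliding_pi(seqs, window=100, step=25):
    """Pi in sliding windows along the alignment, as genome scans do."""
    length = len(seqs[0])
    return [(start, nucleotide_diversity([s[start:start + window] for s in seqs]))
            for start in range(0, length - window + 1, step)]

# Tiny hypothetical alignment (real scans use chromosome-scale data).
alignment = ["ACGTACGTAC" * 30, "ACGTACCTAC" * 30, "ACGAACGTAC" * 30]
for start, pi in sliding_pi(alignment, window=100, step=50):
    print(start, round(pi, 4))
```

Windows with unusually low pi relative to the genome background are the kind of candidate regions for selection that such scans flag.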

Relevance:

10.00%

Publisher:

Abstract:

Diffusion MRI has evolved into an important clinical diagnostic and research tool. Although clinical routine mainly uses diffusion-weighted and tensor imaging approaches, Q-ball imaging and diffusion spectrum imaging techniques have become more widely available. They are frequently used in research-oriented investigations, in particular those aiming at measuring brain network connectivity. In this work, we assess the dependency of connectivity measurements on various diffusion encoding schemes in combination with appropriate data modeling. We process and compare the structural connection matrices computed from several diffusion encoding schemes, including diffusion tensor imaging, q-ball imaging and high angular resolution schemes such as diffusion spectrum imaging, with a publicly available processing pipeline for the reconstruction, tracking and visualization of diffusion MR imaging data. The results indicate that the high angular resolution schemes maximize the number of obtained connections when identical processing strategies are applied to the different diffusion schemes. Compared to conventional diffusion tensor imaging, the added connectivity is mainly found for pathways in the 50-100 mm range, corresponding to neighboring association fibers and long-range associative, striatal and commissural fiber pathways. The analysis of the major associative fiber tracts of the brain reveals striking differences between the applied diffusion schemes. More complex data modeling techniques (beyond the tensor model) are recommended 1) if the tracts of interest run through large fiber crossings such as the centrum semiovale, or 2) if non-dominant fiber populations, e.g., the neighboring association fibers, are the subject of investigation. An important caveat is that, since ground-truth sensitivity and specificity are not known, results arising from different strategies in data reconstruction and/or tracking remain difficult to compare.
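
To make the connectivity-matrix construction concrete, here is a minimal sketch that counts streamlines between labeled endpoint regions and records fiber lengths, so connections can later be binned by range (e.g., the 50-100 mm band discussed above); the toy streamlines and region labeling are invented:

```python
import numpy as np

def connectivity_matrix(streamlines, region_of, n_regions):
    """Count streamlines connecting pairs of labeled regions and keep each
    fiber's length so connections can be binned by length range."""
    counts = np.zeros((n_regions, n_regions))
    lengths = [[[] for _ in range(n_regions)] for _ in range(n_regions)]
    for sl in streamlines:
        a, b = region_of(sl[0]), region_of(sl[-1])  # regions at the two endpoints
        if a is None or b is None or a == b:
            continue
        i, j = sorted((a, b))
        length = np.linalg.norm(np.diff(sl, axis=0), axis=1).sum()  # mm
        counts[i, j] += 1
        lengths[i][j].append(length)
    return counts, lengths

# Hypothetical toy data: two straight streamlines between three regions.
streamlines = [np.array([[0, 0, 0], [30, 0, 0], [60, 0, 0]], float),
               np.array([[0, 10, 0], [40, 10, 0], [80, 10, 0]], float)]
region_of = lambda p: 0 if p[0] < 10 else (1 if p[0] < 70 else 2)
counts, lengths = connectivity_matrix(streamlines, region_of, 3)
print(counts)
```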