160 results for Multiresolution Visualization


Relevance: 10.00%

Abstract:

The process of comparing a fingermark recovered from a crime scene with the fingerprint taken from a known individual involves the characterization and comparison of different ridge details on both the mark and the print. Fingerprint examiners commonly classify these characteristics into three groups, depending on their level of discriminating power. It is commonly considered that the general pattern of the ridge flow constitutes first-level detail, specific ridge flow and minutiae (e.g. ending ridges, bifurcations) constitute second-level detail, and fine ridge details (e.g. pore positions and shapes) are described as third-level detail. In this study, the reproducibility of a selection of third-level characteristics is investigated. The reproducibility of these features is examined on several recordings of the same finger, first acquired using only optical visualization techniques and second on impressions developed using common fingermark development techniques. Prior to the evaluation of the reproducibility of the considered characteristics, digital images of the fingerprints were recorded at two different resolutions (1000 and 2000 ppi). This allowed the study to also examine the influence of higher resolution on the considered characteristics. It was observed that the increase in resolution did not result in better feature detection or comparison between images. The examination of the reproducibility of a selection of third-level characteristics showed that the most reproducible features observed were minutiae shapes and pore positions along the ridges.

Relevance: 10.00%

Abstract:

Using an extract of nuclei from the estrogen-responsive human breast cancer cell line MCF-7, protein-DNA complexes were assembled in vitro at the 5' end of the Xenopus laevis vitellogenin gene B2 that is normally expressed in liver after estrogen induction. The complexes formed were analyzed by electron microscopy after labeling by the indirect colloidal gold immunological method using a monoclonal antibody specific for the human estrogen receptor. As identified by its interaction with protein A-gold, the antibody was found linked to two protein-DNA complexes, the first localized at the estrogen responsive element of the gene and the second in intron I, thus proving a direct participation of the receptor in these two complexes. The procedure used allows the visualization and rapid localization of specific transcription factors bound in vitro to a promoter or any other gene region.

Relevance: 10.00%

Abstract:

A cardiac-triggered, free-breathing, 3D balanced FFE projection renal MR angiography (MRA) technique with a 2D pencil beam aortic labeling pulse for selective aortic spin tagging was developed. For respiratory motion artifact suppression during free breathing, a prospective real-time navigator was implemented for renal MRA. Images obtained with the new approach were compared with standard contrast-enhanced (CE) 3D breath-hold MRA in seven swine. Signal properties and vessel visualization were analyzed. With the presented technique, high-resolution, high-contrast renal projection MRA with superior vessel length visualization (including a greater visible number of distal branches of the renal arteries) compared to standard breath-hold CE-MRA was obtained. The present results warrant clinical studies in patients with renal artery disease.

Relevance: 10.00%

Abstract:

Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/
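The XML container idea behind the Connectome File Format can be illustrated with a short sketch; the element and attribute names below are hypothetical stand-ins, not the actual CFF schema, and only Python's standard library is used:

```python
import xml.etree.ElementTree as ET

# Hypothetical container header: a metadata block plus one entry per data
# object bundled in the file (names invented for illustration).
cff_xml = """<connectome>
  <metadata>
    <title>Example dataset</title>
    <species>Homo sapiens</species>
  </metadata>
  <network src="network_res83.graphml" name="Resolution 83"/>
  <volume src="T1.nii.gz" name="T1-weighted MRI"/>
</connectome>"""

root = ET.fromstring(cff_xml)
# Collect each data object declared in the container, skipping the metadata
objects = [(child.tag, child.get("name")) for child in root if child.tag != "metadata"]
print(objects)  # [('network', 'Resolution 83'), ('volume', 'T1-weighted MRI')]
```

In the real toolkit, the Connectome File Format Library plays this role, resolving each declared object (networks, volumes, tracks) to its data file inside the container.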

Relevance: 10.00%

Abstract:

OBJECTIVE: The objective of our study was to investigate the impact of radial k-space sampling and steady-state free precession (SSFP) imaging on image quality in MRI of coronary vessel walls. SUBJECTS AND METHODS: Eleven subjects were examined on a 1.5-T MR system using three high-resolution navigator-gated and cardiac-triggered 3D black blood sequences (Cartesian gradient-echo [GRE], radial GRE, and radial SSFP) with identical spatial resolution (0.9 x 0.9 x 2.4 mm3). The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel wall sharpness, and motion artifacts were analyzed. RESULTS: The mean SNR and CNR of the coronary vessel wall were improved using radial imaging and were best using radial k-space sampling combined with SSFP imaging. Vessel border definition was similar for all three sequences. Radial k-space sampling was found to be less sensitive to motion. Consistently good image quality was seen with the radial GRE sequence. CONCLUSION: Radial k-space sampling in MRI of coronary vessel walls resulted in fewer motion artifacts and improved SNR and CNR. The use of SSFP imaging, however, did not result in improved coronary vessel wall visualization.
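The SNR and CNR figures of merit used in this and the following studies are typically computed from region-of-interest statistics. A minimal sketch with invented pixel intensities; the definitions below are the standard mean-signal-over-noise-SD conventions, not necessarily the paper's exact ROI protocol:

```python
import statistics

def snr(signal_roi, noise_roi):
    """SNR: mean signal intensity over the standard deviation of background noise."""
    return statistics.mean(signal_roi) / statistics.stdev(noise_roi)

def cnr(tissue_a, tissue_b, noise_roi):
    """CNR: absolute mean-intensity difference of two tissues over the noise SD."""
    return abs(statistics.mean(tissue_a) - statistics.mean(tissue_b)) / statistics.stdev(noise_roi)

# Toy pixel intensities (illustrative values, not data from the study)
wall  = [120, 118, 122, 121]   # vessel wall ROI
lumen = [40, 42, 38, 41]       # blood pool ROI
air   = [2, 4, 3, 5, 3, 4]     # background noise ROI

print(snr(wall, air))
print(cnr(wall, lumen, air))
```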

Relevance: 10.00%

Abstract:

BACKGROUND: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions based on simulations that do not require likelihood computations. RESULTS: Here we present ABCtoolbox, a series of open source programs to perform Approximate Bayesian Computation. It implements various ABC algorithms including rejection sampling, MCMC without likelihood, a particle-based sampler, and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can interact with most simulation and summary statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates and to find that males show smaller population sizes but much higher levels of migration than females. CONCLUSION: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis: parameter sampling from prior distributions, data simulation, computation of summary statistics, estimation of posterior distributions, model choice, validation of the estimation procedure, and visualization of the results.
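The simplest of the listed algorithms, ABC rejection sampling, works by drawing a parameter from the prior, simulating data under it, and keeping the draw only if a summary statistic of the simulation is close to that of the observed data. The toy normal-mean example below sketches the textbook algorithm, not ABCtoolbox's implementation:

```python
import random
import statistics

# Generic ABC rejection sampling sketch: infer the mean of a normal model.
random.seed(1)
observed = [random.gauss(5.0, 1.0) for _ in range(100)]
s_obs = statistics.mean(observed)          # summary statistic of the observed data

accepted = []
while len(accepted) < 200:
    theta = random.uniform(0.0, 10.0)      # 1. draw a parameter from the prior
    sim = [random.gauss(theta, 1.0) for _ in range(100)]   # 2. simulate data
    if abs(statistics.mean(sim) - s_obs) < 0.15:           # 3. keep close matches
        accepted.append(theta)

# The accepted draws approximate the posterior distribution of theta
print(statistics.mean(accepted))
```

The tolerance (here 0.15) trades accuracy against acceptance rate; ABC-GLM and the other algorithms in the toolbox exist largely to make better use of simulations than this plain accept/reject rule does.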

Relevance: 10.00%

Abstract:

In order to study the various health-influencing parameters related to engineered nanoparticles, as well as to soot emitted by diesel engines, there is an urgent need for appropriate sampling devices and methods for cell exposure studies that simulate the respiratory system and facilitate associated biological and toxicological tests. The objective of the present work was the further advancement of a Multiculture Exposure Chamber (MEC) into a dose-controlled system for efficient delivery of nanoparticles to cells. It was validated with various types of nanoparticles (diesel engine soot aggregates, engineered nanoparticles for various applications) and with state-of-the-art nanoparticle measurement instrumentation to assess the local deposition of nanoparticles on the cell cultures. The dose of nanoparticles to which cell cultures are exposed was evaluated in the normal operation of the in vitro cell culture exposure chamber, based on measurements of the size-specific nanoparticle collection efficiency of a cell-free device. The average efficiency in delivering nanoparticles in the MEC was approximately 82%. The nanoparticle deposition was demonstrated by Transmission Electron Microscopy (TEM). Analysis and design of the MEC employ Computational Fluid Dynamics (CFD) and true-to-geometry representations of nanoparticles, with the aim of assessing the uniformity of nanoparticle deposition among the culture wells. Final testing of the dose-controlled cell exposure system was performed by exposing A549 lung cell cultures to fluorescently labeled nanoparticles. Delivery of aerosolized nanoparticles was demonstrated by visualization of the nanoparticle fluorescence in the cell cultures following exposure. The potential of the aerosolized nanoparticles to generate reactive oxygen species (ROS; e.g. free radicals and peroxides) was also monitored, as a measure of oxidative stress in the cells, which can cause extensive cellular damage or damage to DNA.
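A size-specific collection efficiency of the kind reported (~82% on average) is conventionally estimated by comparing particle number concentrations upstream and downstream of the cell-free device. The numbers in this sketch are illustrative, not the study's measurements:

```python
# Particle number concentrations per size bin (nm -> particles/cm^3),
# measured entering and leaving a cell-free exposure chamber.
# Values are invented for illustration.
upstream   = {30: 1.2e5, 60: 9.5e4, 100: 7.0e4}
downstream = {30: 2.5e4, 60: 1.5e4, 100: 1.1e4}

# Collection efficiency per size bin: fraction of particles retained in the chamber
efficiency = {d: 1.0 - downstream[d] / upstream[d] for d in upstream}
overall = sum(efficiency.values()) / len(efficiency)

for d, e in sorted(efficiency.items()):
    print(f"{d} nm: {e:.0%}")
print(f"mean: {overall:.0%}")
```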

Relevance: 10.00%

Abstract:

OBJECTIVE: The objective of our study was to establish optimal perfusion conditions for high-resolution postmortem angiography that would permit dynamic visualization of the arterial and venous systems. MATERIALS AND METHODS: Cadavers of two dogs and one cat were perfused with diesel oil through a peristaltic pump. The lipophilic contrast agent Lipiodol Ultra Fluide was then injected, and angiography was performed. The efficiency of perfusion was evaluated in the chick chorioallantoic membrane. RESULTS: Vessels could be seen up to the level of the smaller supplying and draining vessels. Hence, both the arterial and the venous sides of the vascular system could be distinguished. The chorioallantoic membrane assay revealed that diesel oil enters microvessels up to 50 microm in diameter and that it does not penetrate the capillary network. CONCLUSION: After establishing a postmortem circulation by diesel oil perfusion, angiography can be performed by injection of Lipiodol Ultra Fluide. The resolution of the images obtained up to 3 days after death is comparable to that achieved in clinical angiography.

Relevance: 10.00%

Abstract:

BACKGROUND: There is an ever-increasing volume of data on host genes that are modulated during HIV infection, influence disease susceptibility or carry genetic variants that impact HIV infection. We created GuavaH (Genomic Utility for Association and Viral Analyses in HIV, http://www.GuavaH.org), a public resource that supports multipurpose analysis of genome-wide genetic variation and gene expression profiles across multiple phenotypes relevant to HIV biology. FINDINGS: We included original data from 8 genome and transcriptome studies addressing viral and host responses in and ex vivo. These studies cover phenotypes such as HIV acquisition, plasma viral load, disease progression, viral replication cycle, latency and viral-host genome interaction. This represents genome-wide association data from more than 4,000 individuals, exome sequencing data from 392 individuals, in vivo transcriptome microarray data from 127 patients/conditions, and 60 sets of RNA-seq data. Additionally, GuavaH allows visualization of protein variation in ~8,000 individuals from the general population. The publicly available GuavaH framework supports queries on (i) unique single nucleotide polymorphisms across different HIV-related phenotypes, (ii) gene structure and variation, (iii) in vivo gene expression in the setting of human infection (CD4+ T cells), and (iv) in vitro gene expression data in models of permissive infection, latency and reactivation. CONCLUSIONS: The complexity of the analysis of host genetic influences on HIV biology and pathogenesis calls for comprehensive research tools built on curated data. The tool developed here allows queries on, and supports validation of, the rapidly growing body of host genomic information pertinent to HIV research.

Relevance: 10.00%

Abstract:

Dendritic cells (DCs) are the most potent antigen-presenting cells in the human lung and are now recognized as crucial initiators of immune responses in general. They are arranged as sentinels in a dense surveillance network inside and below the epithelium of the airways and alveoli, where they are ideally situated to sample inhaled antigen. DCs are known to play a pivotal role in maintaining the balance between tolerance and active immune response in the respiratory system. It is no surprise that the lungs became a main focus of DC-related investigations, as this organ provides a large interface for interactions of inhaled antigens with the human body. In recent years there has been a constantly growing body of lung DC-related publications that draw their data from in vitro models, animal models and human studies. This review focuses on the biology and functions of different DC populations in the lung and highlights the advantages and drawbacks of the different models with which to study the role of lung DCs. Furthermore, we present a number of up-to-date visualization techniques to characterize DC-related cell interactions in vitro and/or in vivo.

Relevance: 10.00%

Abstract:

PURPOSE: Visualization of coronary blood flow in the right and left coronary systems in volunteers and patients by means of a modified inversion-prepared bright-blood coronary magnetic resonance angiography (cMRA) sequence. MATERIALS AND METHODS: cMRA was performed in 14 healthy volunteers and 19 patients on a 1.5 Tesla MR system using a free-breathing 3D balanced turbo field echo (b-TFE) sequence with radial k-space sampling. For magnetization preparation, a slab-selective and a 2D-selective inversion pulse were used for the right and left coronary systems, respectively. cMRA images were evaluated in terms of clinically relevant stenoses (> 50 %) and compared to conventional catheter angiography. Signal was measured in the coronary arteries (coro), the aorta (ao) and the epicardial fat (fat) to determine SNR and CNR. In addition, maximal visible vessel length and vessel border definition were analyzed. RESULTS: The use of a selective inversion pre-pulse allowed direct visualization of coronary blood flow in the right and left coronary systems. The measured SNR and CNR, vessel length, and vessel sharpness in volunteers (SNR coro: 28.3 +/- 5.0; SNR ao: 37.6 +/- 8.4; CNR coro-fat: 25.3 +/- 4.5; LAD: 128.0 mm +/- 8.8; RCA: 74.6 mm +/- 12.4; sharpness: 66.6 % +/- 4.8) were slightly increased compared to those in patients (SNR coro: 24.1 +/- 3.8; SNR ao: 33.8 +/- 11.4; CNR coro-fat: 19.9 +/- 3.3; LAD: 112.5 mm +/- 13.8; RCA: 69.6 mm +/- 16.6; sharpness: 58.9 % +/- 7.9; n.s.). In the patient study, the assessment of 42 coronary segments led to the correct identification of 10 clinically relevant stenoses. CONCLUSION: The modification of a previously published inversion-prepared cMRA sequence allowed direct visualization of coronary blood flow in the right as well as the left coronary system. In addition, this sequence proved to be highly sensitive in the assessment of clinically relevant stenotic lesions.

Relevance: 10.00%

Abstract:

In this review, we summarize how the new concept of digital optics applied to the field of holographic microscopy has allowed the development of a reliable and flexible digital holographic quantitative phase microscopy (DH-QPM) technique at the nanoscale particularly suitable for cell imaging. Particular emphasis is placed on the original biological information provided by the quantitative phase signal. We present the most relevant DH-QPM applications in the field of cell biology, including automated cell counts, recognition, classification, three-dimensional tracking, discrimination between physiological and pathophysiological states, and the study of cell membrane fluctuations at the nanoscale. In the last part, original results show how DH-QPM can address two important issues in the field of neurobiology, namely, multiple-site optical recording of neuronal activity and noninvasive visualization of dendritic spine dynamics resulting from a full digital holographic microscopy tomographic approach.

Relevance: 10.00%

Abstract:

"Morphing Romania and the Moldova Province" gives a short insight into cartograms. Digital cartograms offer the potential to move away from the classical visualization of geographical data and to benefit from a new understanding of our world: they introduce a human vision instead of a planimetric one. By applying the Gastner-Newman algorithm for generating density-equalising cartograms to Romania and its Moldova province, we discuss the making of cartograms in general.

Relevance: 10.00%

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualization of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? Most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for environmental data mining tasks ranging from pattern recognition to modeling, prediction and automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multi-layer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organizing (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated using both the traditional geostatistical approach of experimental variography and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical problem: the automatic mapping of geospatial data. General regression neural networks are proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed during the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualization were also developed, with a user-friendly and easy-to-use interface.
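The GRNN behind the automatic-mapping results is, in essence, a Nadaraya-Watson kernel regression: a prediction is a Gaussian-kernel weighted average of the training targets. A minimal sketch with toy coordinates and values (not thesis data):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.1):
    """General Regression Neural Network prediction: Gaussian-kernel weighted
    average of training targets (the Nadaraya-Watson estimator)."""
    # Kernel weight of each training point, by squared distance to the query
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
               for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy spatial interpolation: values measured at four locations
train_x = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
train_y = [1.0, 2.0, 3.0, 4.0]

print(grnn_predict((0.0, 0.0), train_x, train_y, sigma=0.1))  # ≈ 1.0 near the first point
print(grnn_predict((0.5, 0.5), train_x, train_y, sigma=0.5))  # equidistant point → 2.5
```

The single bandwidth parameter sigma is what makes GRNN attractive for automatic mapping: it can be tuned by cross-validation, with no iterative training at all.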

Relevance: 10.00%

Abstract:

PURPOSE: To evaluate gadocoletic acid (B-22956), a gadolinium-based paramagnetic blood pool agent, for contrast-enhanced coronary magnetic resonance angiography (MRA) in a Phase I clinical trial, and to compare the findings with those obtained using a standard noncontrast T2 preparation sequence. MATERIALS AND METHODS: The left coronary system was imaged in 12 healthy volunteers before B-22956 application and 5 (N = 11) and 45 (N = 7) minutes after application of 0.075 mmol/kg of body weight (BW) of B-22956. Additionally, imaging of the right coronary system was performed 23 minutes after B-22956 application (N = 6). A three-dimensional gradient echo sequence with T2 preparation (precontrast) or inversion recovery (IR) pulse (postcontrast) with real-time navigator correction was used. Assessment of the left and right coronary systems was performed qualitatively (a 4-point visual score for image quality) and quantitatively in terms of signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, visible vessel length, maximal luminal diameter, and the number of visible side branches. RESULTS: Significant (P < 0.01) increases in SNR (+42%) and CNR (+86%) were noted five minutes after B-22956 application, compared to precontrast T2 preparation values. A significant increase in CNR (+40%, P < 0.05) was also noted 45 minutes postcontrast. Vessels (left anterior descending artery (LAD), left coronary circumflex (LCx), and right coronary artery (RCA)) were also significantly (P < 0.05) sharper on postcontrast images. Significant increases in vessel length were noted for the LAD (P < 0.05) and LCx and RCA (both P < 0.01), while significantly more side branches were noted for the LAD and RCA (both P < 0.05) when compared to precontrast T2 preparation values. CONCLUSION: The use of the intravascular contrast agent B-22956 substantially improves both objective and subjective parameters of image quality on high-resolution three-dimensional coronary MRA. 
The increase in SNR, CNR, and vessel sharpness minimizes current limitations of coronary artery visualization with high-resolution coronary MRA.