963 results for Spatiotemporal visualization
Abstract:
This paper presents a very fine-grid hydrological model based on the spatiotemporal distribution of precipitation and on the topography. The goal is to estimate the flood in a catchment area, using a Probable Maximum Precipitation (PMP) leading to a Probable Maximum Flood (PMF). The spatiotemporal distribution of the precipitation was generated using six clouds modeled by the advection-diffusion equation, which describes the movement of the clouds over the terrain and the evolution of the rain intensity in time. This hydrological modeling is followed by hydraulic modeling of the surface and subterranean flows, taking into account the factors that contribute to the hydrological cycle, such as infiltration, exfiltration and snowmelt. The model was applied to several Swiss basins using measured rainfall, and the results show a good correlation between simulated and observed flows. This good correlation indicates that the model is valid and gives confidence that the results can be extrapolated to extreme rainfall phenomena of the PMP type. In this article we present some results obtained using a PMP rainfall and the developed model.
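The abstract does not state the exact form of the cloud model; a generic two-dimensional advection-diffusion equation for a rain-intensity field C(x, y, t), given here only as an illustration of the kind of equation referred to, is

    \frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x} + v\,\frac{\partial C}{\partial y} = D\left(\frac{\partial^{2} C}{\partial x^{2}} + \frac{\partial^{2} C}{\partial y^{2}}\right)

where (u, v) is the advection velocity carrying the cloud over the terrain and D is a diffusion coefficient governing how the rain intensity spreads and evolves in time.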
Abstract:
Gait analysis methods to estimate spatiotemporal measures, based on two, three or four gyroscopes attached to the lower limbs, have been discussed in the literature. The most common approach to reduce the number of sensing units is to simplify the underlying biomechanical gait model. In this study, we propose a novel method based on prediction of the movements of the thighs from the movements of the shanks. Datasets from three previous studies were used. Data from the first study (ten healthy subjects and ten with Parkinson's disease) were used to develop and calibrate a system with only two gyroscopes attached to the shanks. Data from two other studies (36 subjects with hip replacement, seven subjects with coxarthrosis, and eight control subjects) were used for comparison with the other methods and for assessment of error against a motion capture system. Results show that the stride-length estimation errors relative to motion capture were similar for the system with four gyroscopes and our new method based on two gyroscopes (-0.8 ± 6.6 versus 3.8 ± 6.6 cm). An alternative with three sensing units did not show better results (error: -0.2 ± 8.4 cm). Finally, a fourth configuration that also used two units but with a simpler gait model had the highest bias compared to the reference (error: -25.6 ± 7.6 cm). We conclude that it is feasible to estimate the movements of the thighs from the movements of the shanks, reducing the number of sensing units needed from four to two in the context of ambulatory gait analysis.
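The calibration procedure itself is not described in the abstract; purely as a sketch of the idea of predicting thigh movement from shank movement, one could fit a lagged linear model on a dataset where both segments carry gyroscopes and then apply it when only shank sensors are worn. The lag length, regularization, and function names below are assumptions, not the method of the paper.

    # Sketch: predict thigh angular velocity from shank angular velocity with a
    # lagged linear model. Hyperparameters and names are illustrative only.
    import numpy as np
    from sklearn.linear_model import Ridge

    def lagged_matrix(x, n_lags):
        """Stack the current and the n_lags-1 previous samples of x as features."""
        X = np.column_stack([np.roll(x, k) for k in range(n_lags)])
        X[:n_lags, :] = 0.0  # drop wrap-around samples introduced by np.roll
        return X

    def calibrate(shank_gyro, thigh_gyro, n_lags=25):
        """Fit the shank-to-thigh mapping on data where both segments are measured."""
        model = Ridge(alpha=1.0)
        model.fit(lagged_matrix(shank_gyro, n_lags), thigh_gyro)
        return model

    def predict_thigh(model, shank_gyro, n_lags=25):
        """Estimate thigh angular velocity when only shank gyroscopes are worn."""
        return model.predict(lagged_matrix(shank_gyro, n_lags))

The predicted thigh signal can then be fed into whatever biomechanical gait model is otherwise used with four sensors to recover stride length and the other spatiotemporal measures.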
Abstract:
There is a concern that agriculture will no longer be able to meet, on a global scale, the growing demand for food. Facing such a challenge requires new patterns of thinking in the context of complexity and sustainability sciences. This paper, focused on the social dimension of the study and management of agricultural systems, suggests that rethinking the study of agricultural systems entails analyzing them as complex socio-ecological systems, as well as considering the differing thinking patterns of diverse stakeholders. The intersubjective nature of knowledge, as studied by different philosophical schools, needs to be integrated into the study and management of agricultural systems better than it has been so far, forcing us to accept that there are no simplistic solutions and to seek a better understanding of the social dimension of agriculture. Different agriculture-related problems require different policy and institutional approaches. Finally, the intersubjective nature of knowledge calls for the visualization of the different framings and power relations at play in the decision-making process. Rethinking the management of agricultural systems implies that policy making should be shaped by different principles: learning, flexibility, adaptation, scale-matching, participation, diversity enhancement and precaution hold the promise of significantly improving current standard management procedures.
Abstract:
BACKGROUND: There is an ever-increasing volume of data on host genes that are modulated during HIV infection, influence disease susceptibility or carry genetic variants that impact HIV infection. We created GuavaH (Genomic Utility for Association and Viral Analyses in HIV, http://www.GuavaH.org), a public resource that supports multipurpose analysis of genome-wide genetic variation and gene expression profiles across multiple phenotypes relevant to HIV biology. FINDINGS: We included original data from 8 genome and transcriptome studies addressing viral and host responses in vivo and ex vivo. These studies cover phenotypes such as HIV acquisition, plasma viral load, disease progression, viral replication cycle, latency and viral-host genome interaction. This represents genome-wide association data from more than 4,000 individuals, exome sequencing data from 392 individuals, in vivo transcriptome microarray data from 127 patients/conditions, and 60 sets of RNA-seq data. Additionally, GuavaH allows visualization of protein variation in ~8,000 individuals from the general population. The publicly available GuavaH framework supports queries on (i) individual single nucleotide polymorphisms across different HIV-related phenotypes, (ii) gene structure and variation, (iii) in vivo gene expression in the setting of human infection (CD4+ T cells), and (iv) in vitro gene expression data in models of permissive infection, latency and reactivation. CONCLUSIONS: The complexity of the analysis of host genetic influences on HIV biology and pathogenesis calls for comprehensive research engines built on curated data. The tool developed here allows queries and supports validation of the rapidly growing body of host genomic information pertinent to HIV research.
Abstract:
Dendritic cells (DCs) are the most potent antigen-presenting cells in the human lung and are now recognized as crucial initiators of immune responses in general. They are arranged as sentinels in a dense surveillance network inside and below the epithelium of the airways and alveoli, where they are ideally situated to sample inhaled antigen. DCs are known to play a pivotal role in maintaining the balance between tolerance and active immune response in the respiratory system. It is no surprise that the lungs have become a main focus of DC-related investigations, as this organ provides a large interface for interactions of inhaled antigens with the human body. During recent years there has been a constantly growing body of lung DC-related publications that draw their data from in vitro models, animal models and human studies. This review focuses on the biology and functions of different DC populations in the lung and highlights the advantages and drawbacks of the different models with which to study the role of lung DCs. Furthermore, we present a number of up-to-date visualization techniques to characterize DC-related cell interactions in vitro and/or in vivo.
Abstract:
PURPOSE: To visualize coronary blood flow in the right and left coronary systems in volunteers and patients by means of a modified inversion-prepared bright-blood coronary magnetic resonance angiography (cMRA) sequence. MATERIALS AND METHODS: cMRA was performed in 14 healthy volunteers and 19 patients on a 1.5 Tesla MR system using a free-breathing 3D balanced turbo field echo (b-TFE) sequence with radial k-space sampling. For magnetization preparation, a slab-selective and a 2D selective inversion pulse were used for the right and left coronary systems, respectively. cMRA images were evaluated for clinically relevant stenoses (> 50 %) and compared to conventional catheter angiography. Signal was measured in the coronary arteries (coro), the aorta (ao) and the epicardial fat (fat) to determine SNR and CNR. In addition, maximal visible vessel length and vessel border definition were analyzed. RESULTS: The use of a selective inversion pre-pulse allowed direct visualization of the coronary blood flow in the right and left coronary systems. The measured SNR and CNR, vessel length, and vessel sharpness in volunteers (SNR coro: 28.3 +/- 5.0; SNR ao: 37.6 +/- 8.4; CNR coro-fat: 25.3 +/- 4.5; LAD: 128.0 mm +/- 8.8; RCA: 74.6 mm +/- 12.4; sharpness: 66.6 % +/- 4.8) were slightly higher than those in patients (SNR coro: 24.1 +/- 3.8; SNR ao: 33.8 +/- 11.4; CNR coro-fat: 19.9 +/- 3.3; LAD: 112.5 mm +/- 13.8; RCA: 69.6 mm +/- 16.6; sharpness: 58.9 % +/- 7.9; n.s.). In the patient study, the assessment of 42 coronary segments led to correct identification of 10 clinically relevant stenoses. CONCLUSION: The modification of a previously published inversion-prepared cMRA sequence allowed direct visualization of the coronary blood flow in the right as well as the left coronary system. In addition, this sequence proved to be highly sensitive for the assessment of clinically relevant stenotic lesions.
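SNR and CNR are not defined in the abstract; assuming the conventional region-of-interest definitions (the study's exact noise estimate is not stated), they are

    \mathrm{SNR}_{\mathrm{coro}} = \frac{\bar{S}_{\mathrm{coro}}}{\sigma_{\mathrm{noise}}},
    \qquad
    \mathrm{CNR}_{\mathrm{coro\text{-}fat}} = \frac{\bar{S}_{\mathrm{coro}} - \bar{S}_{\mathrm{fat}}}{\sigma_{\mathrm{noise}}}

where \bar{S} is the mean signal in the respective region of interest and \sigma_{\mathrm{noise}} is the standard deviation of the background noise.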
Abstract:
The Center for Transportation Research and Education (CTRE) used the traffic simulation model CORSIM to assess proposed capacity and safety improvement strategies for the U.S. 61 corridor through Burlington, Iowa. The comparison between the base and alternative models allows for evaluation of traffic flow performance under the existing conditions as well as other design scenarios. The models also provide visualization of performance for interpretation by technical staff, public policy makers, and the public. The objectives of this project are to evaluate the use of traffic simulation models for future use by the Iowa Department of Transportation (DOT) and to develop procedures for employing simulation modeling in the analysis of alternative designs. This report presents both the findings of the U.S. 61 evaluation and an overview of model development procedures. The first part of the report covers the simulation modeling development procedures; the simulation analysis is illustrated through the Burlington U.S. 61 corridor case study application. Part I is not intended to be a user manual but rather introductory guidelines for traffic simulation modeling. Part II of the report evaluates the proposed improvement concepts in a side-by-side comparison of the base and alternative models.
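The report does not give its comparison workflow in code; a minimal sketch of a base-versus-alternative comparison of simulation measures of effectiveness might look as follows (file names and column labels are hypothetical placeholders for whatever summary tables are exported from the simulation runs).

    # Sketch: compare measures of effectiveness (MOEs) from the base and
    # alternative simulation runs. File names and columns are hypothetical.
    import pandas as pd

    base = pd.read_csv("us61_base_moe.csv")         # e.g. columns: link, delay_s
    alt = pd.read_csv("us61_alternative_moe.csv")

    merged = base.merge(alt, on="link", suffixes=("_base", "_alt"))
    merged["delay_change_pct"] = (
        100.0 * (merged["delay_s_alt"] - merged["delay_s_base"]) / merged["delay_s_base"]
    )
    print(merged.sort_values("delay_change_pct")[
        ["link", "delay_s_base", "delay_s_alt", "delay_change_pct"]])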
Abstract:
In this review, we summarize how the new concept of digital optics applied to the field of holographic microscopy has allowed the development of a reliable and flexible digital holographic quantitative phase microscopy (DH-QPM) technique at the nanoscale particularly suitable for cell imaging. Particular emphasis is placed on the original biological information provided by the quantitative phase signal. We present the most relevant DH-QPM applications in the field of cell biology, including automated cell counts, recognition, classification, three-dimensional tracking, discrimination between physiological and pathophysiological states, and the study of cell membrane fluctuations at the nanoscale. In the last part, original results show how DH-QPM can address two important issues in the field of neurobiology, namely, multiple-site optical recording of neuronal activity and noninvasive visualization of dendritic spine dynamics resulting from a full digital holographic microscopy tomographic approach.
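For reference, the quantitative phase signal measured by DH-QPM on a cell of local thickness d imaged in a medium of refractive index n_m has the standard form (stated here as background, not as a result of the review)

    \varphi(x,y) = \frac{2\pi}{\lambda}\,\bigl(\bar{n}_{c}(x,y) - n_{m}\bigr)\, d(x,y)

where \lambda is the illumination wavelength and \bar{n}_{c} the mean intracellular refractive index along the optical axis; the phase therefore jointly encodes cell morphology (thickness) and content (refractive index), which underlies the applications listed above, from cell counting to membrane-fluctuation analysis.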
Abstract:
"Morphing Romania and the Moldova Province" gives a short insight of cartograms. Digital cartograms provide potential to move away from classical visualization of geographical data and benefit of new understanding of our world. They introduce a human vision instead of a planimetric one. By applying the Gastner-Newman algorithm for generating density-equalising cartograms to Romania and its Moldova province we can discuss the making of cartograms in general.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies.
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence that is mainly concerned with the development of techniques and algorithms allowing computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision-support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF), and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model to solve this task. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well, with care taken to provide a user-friendly and easy-to-use interface.
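The GRNN proposed for automatic mapping is, at its core, Nadaraya-Watson kernel regression with a single kernel-width parameter; a minimal sketch for 2-D spatial data is given below. The bandwidth value, the cross-validation used to tune it, and the anisotropic extensions available in the Machine Learning Office tools are not reproduced; names and values here are illustrative.

    # Minimal GRNN (Nadaraya-Watson kernel regression) sketch for spatial data.
    # A single isotropic Gaussian kernel width `sigma` is assumed; in practice it
    # would be tuned, e.g. by cross-validation.
    import numpy as np

    def grnn_predict(train_xy, train_z, query_xy, sigma):
        """Predict values at query_xy from observations (train_xy, train_z)."""
        # Squared distances between every query point and every training point.
        d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
        return (w @ train_z) / w.sum(axis=1)   # kernel-weighted average

    # Usage with synthetic data standing in for measured spatial samples:
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 100.0, size=(200, 2))           # sampling locations
    z = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.normal(size=200)
    grid = np.column_stack([np.linspace(0.0, 100.0, 5), np.full(5, 50.0)])
    print(grnn_predict(xy, z, grid, sigma=5.0))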
Abstract:
PURPOSE: To evaluate gadocoletic acid (B-22956), a gadolinium-based paramagnetic blood pool agent, for contrast-enhanced coronary magnetic resonance angiography (MRA) in a Phase I clinical trial, and to compare the findings with those obtained using a standard noncontrast T2 preparation sequence. MATERIALS AND METHODS: The left coronary system was imaged in 12 healthy volunteers before B-22956 application and 5 (N = 11) and 45 (N = 7) minutes after application of 0.075 mmol/kg of body weight (BW) of B-22956. Additionally, imaging of the right coronary system was performed 23 minutes after B-22956 application (N = 6). A three-dimensional gradient echo sequence with T2 preparation (precontrast) or inversion recovery (IR) pulse (postcontrast) with real-time navigator correction was used. Assessment of the left and right coronary systems was performed qualitatively (a 4-point visual score for image quality) and quantitatively in terms of signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, visible vessel length, maximal luminal diameter, and the number of visible side branches. RESULTS: Significant (P < 0.01) increases in SNR (+42%) and CNR (+86%) were noted five minutes after B-22956 application, compared to precontrast T2 preparation values. A significant increase in CNR (+40%, P < 0.05) was also noted 45 minutes postcontrast. Vessels (left anterior descending artery (LAD), left coronary circumflex (LCx), and right coronary artery (RCA)) were also significantly (P < 0.05) sharper on postcontrast images. Significant increases in vessel length were noted for the LAD (P < 0.05) and LCx and RCA (both P < 0.01), while significantly more side branches were noted for the LAD and RCA (both P < 0.05) when compared to precontrast T2 preparation values. CONCLUSION: The use of the intravascular contrast agent B-22956 substantially improves both objective and subjective parameters of image quality on high-resolution three-dimensional coronary MRA. The increase in SNR, CNR, and vessel sharpness minimizes current limitations of coronary artery visualization with high-resolution coronary MRA.
Abstract:
Seeing seems effortless, despite the need to segregate and integrate visual information that varies in quality, quantity, and location. The extent to which seeing passively recapitulates the external world is challenged by phenomena such as illusory contours, an example of visual completion whereby borders are perceived despite their physical absence in the image. Instead, visual completion and seeing are increasingly conceived as active processes, dependent on information exchange across neural populations. How this is instantiated in the brain remains controversial. Divergent models emanate from single-unit and population-level electrophysiology, neuroimaging, and neurostimulation studies. We reconcile discrepant findings from different methods and disciplines, and underscore the importance of taking into account spatiotemporal brain dynamics in generating models of brain function and perception.
Abstract:
Introduction: Neuronal oscillations have been the focus of increasing interest in the neuroscientific community, in part because they have been considered as a possible integrating mechanism through which internal states can influence stimulus processing in a top-down way (Engel et al., 2001). Moreover, increasing evidence indicates that oscillations in different frequency bands interact with one another through coupling mechanisms (Jensen and Colgin, 2007). The existence and the importance of these cross-frequency couplings during various tasks have been verified by recent studies (Canolty et al., 2006; Lakatos et al., 2007). In this study, we measure the strength and directionality of two types of couplings - phase-amplitude couplings and phase-phase couplings - between oscillations in various bands of EEG data recorded during an illusory contour experiment, with the oscillations identified using a recently proposed adaptive frequency tracking algorithm (Van Zaen et al., 2010). Methods: The data used in this study were taken from a previously published study examining the spatiotemporal mechanisms of illusory contour processing (Murray et al., 2002). The EEG data analyzed here come from a subset of nine subjects. Each stimulus was composed of 'pac-man' inducers presented in two orientations: IC, when an illusory contour was present, and NC, when no contour could be detected. The signals recorded by the electrodes P2, P4, P6, PO4 and PO6 were averaged and filtered into the following bands: 4-8 Hz, 8-12 Hz, 15-25 Hz, 35-45 Hz, 45-55 Hz, 55-65 Hz and 65-75 Hz. An adaptive frequency tracking algorithm (Van Zaen et al., 2010) was then applied in each band in order to extract the main oscillation and estimate its frequency. This additional step ensures that clean phase information is obtained when taking the Hilbert transform. The frequency estimated by the tracker was averaged over sliding windows and then used to compare the two conditions. Two types of cross-frequency couplings were considered: phase-amplitude couplings and phase-phase couplings. Both types were measured with the phase locking value (PLV, Lachaux et al., 1999) over sliding windows. The phase-amplitude couplings were computed between the phase of the low frequency oscillation and the phase of the amplitude of the high frequency one. Different coupling coefficients were used when measuring phase-phase couplings in order to estimate different m:n synchronizations (4:3, 3:2, 2:1, 3:1, 4:1, 5:1, 6:1, 7:1, 8:1 and 9:1) and to take into account the frequency differences across bands. Moreover, the direction of coupling was estimated with a directionality index (Bahraminasab et al., 2008). Finally, the two conditions IC and NC were compared with ANOVAs with 'subject' as a random effect and 'condition' as a fixed effect. Before computing the statistical tests, the PLV values were transformed into approximately normal variables (Penny et al., 2008). Results: When comparing the mean estimated frequency across conditions, a significant difference was found only in the 4-8 Hz band, such that the frequency within this band was significantly higher for IC than NC stimuli starting at ~250 ms post-stimulus onset (Fig. 1; solid line shows IC and dashed line NC). Significant differences in phase-amplitude couplings were obtained only when the 4-8 Hz band was taken as the low frequency band. Moreover, in all significant situations, the coupling strength was higher for the NC than the IC condition. An example of a significant difference between conditions is shown in Fig. 2 for the phase-amplitude coupling between the 4-8 Hz and 55-65 Hz bands (p-value in the top panel and mean PLV values in the bottom panel). A decrease in coupling strength was observed shortly after stimulus onset for both conditions and was greater for the IC condition. This phenomenon was observed with all other frequency bands. The results obtained for the phase-phase couplings were more complex. As for the phase-amplitude couplings, all significant differences were obtained when the 4-8 Hz band was considered as the low frequency band. The stimulus condition exhibiting the higher coupling strength depended on the ratio of the coupling coefficients. When this ratio was small, the IC condition exhibited the higher phase-phase coupling strength. When this ratio was large, the NC condition exhibited the higher coupling strength. Fig. 3 shows the phase-phase couplings between the 4-8 Hz and 35-45 Hz bands for the coupling coefficient 6:1, where the coupling strength was significantly higher for the IC than the NC condition. By contrast, for the coupling coefficient 9:1 the NC condition gave the higher coupling strength (Fig. 4). Control analyses verified that this was not a consequence of the frequency difference between the two conditions in the 4-8 Hz band. The directionality measures indicated a transfer of information from the low frequency components towards the high frequency ones. Conclusions: Adaptive tracking is a feasible method for EEG analyses, revealing information both about stimulus-related differences and about coupling patterns across frequencies. Theta oscillations play a central role in illusory shape processing and more generally in visual processing. The presence vs. absence of illusory shapes was paralleled by faster theta oscillations. The phase-amplitude coupling decrease was larger for IC than NC and might reflect a resetting mechanism. The complex patterns in phase-phase coupling between theta and beta/gamma suggest that the contribution of these oscillations to visual binding and stimulus processing is not as straightforward as conventionally held. Causality analyses further suggest that theta oscillations drive beta/gamma oscillations (see also Schroeder and Lakatos, 2009). The present findings highlight the need for applying more sophisticated signal analyses in order to establish a fuller understanding of the functional role of neural oscillations.
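As an illustration of the PLV-based phase-amplitude coupling measure described above, the sketch below uses plain band-pass filtering in place of the adaptive frequency tracker of Van Zaen et al. (2010); filter settings and band limits are assumptions, not the study's parameters.

    # Sketch of PLV-based phase-amplitude coupling between a low band (e.g.
    # 4-8 Hz) and a high band (e.g. 55-65 Hz) of a single EEG signal x.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def bandpass(x, lo, hi, fs, order=4):
        b, a = butter(order, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
        return filtfilt(b, a, x)

    def pac_plv(x, fs, low_band=(4.0, 8.0), high_band=(55.0, 65.0)):
        """PLV between the low-band phase and the phase of the low-band-filtered
        amplitude envelope of the high band."""
        phase_low = np.angle(hilbert(bandpass(x, low_band[0], low_band[1], fs)))
        env_high = np.abs(hilbert(bandpass(x, high_band[0], high_band[1], fs)))
        phase_env = np.angle(hilbert(bandpass(env_high, low_band[0], low_band[1], fs)))
        return np.abs(np.mean(np.exp(1j * (phase_low - phase_env))))

In the study this quantity was computed over sliding windows and compared between the IC and NC conditions after a variance-stabilising transform (Penny et al., 2008); those steps are omitted here.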
Abstract:
The objectives of this work were to evaluate the richness and diversity of the Poduromorpha fauna in two biotopes in the Restinga de Maricá, RJ, Brazil, to identify the characteristic species of each biotope, and to determine the relationships between community structure and abiotic environmental parameters. Representatives of the order Poduromorpha (Collembola) were studied from an ecological viewpoint in the halophyte-psammophyte vegetation and the foredune zone in preserved areas of the Restinga de Maricá, a sand dune environment in the state of Rio de Janeiro, Brazil. The foredune zone showed the highest diversity, richness and equitability of springtail species. Differences in the fundamental, accessory and accidental species of each environment were found. Paraxenylla piloua was found to be an indicator species of the halophyte-psammophyte vegetation, while Friesea reducta, Pseudachorutes difficilis and Xenylla maritima were indicators of the foredune zone. The canonical correspondence analysis indicated pH, organic matter content and soil humidity as the most important factors influencing the spatiotemporal distribution of the species.
Local re-inversion coronary MR angiography: arterial spin-labeling without the need for subtraction.
Abstract:
PURPOSE: To implement a double-inversion bright-blood coronary MR angiography sequence using a cylindrical re-inversion prepulse for selective visualization of the coronary arteries. MATERIALS AND METHODS: Local re-inversion bright-blood magnetization preparation was implemented using a non-selective inversion followed by a cylindrical aortic re-inversion prepulse. After an inversion delay that allows in-flow of the labeled blood pool into the coronary arteries, three-dimensional radial steady-state free-precession (SSFP) imaging (repetition time/echo time, 7.2/3.6 ms; flip angle, 120 degrees; 16 profiles per RR interval; field of view, 360 mm; matrix, 512; twelve 3-mm slices) is performed. Coronary MR angiography was performed in three healthy volunteers and in one patient on a commercial 1.5 Tesla whole-body MR system. RESULTS: In all subjects, the coronary arteries were selectively visualized with positive contrast. In addition, an intermediate-grade stenosis of the proximal right coronary artery was seen in the patient. CONCLUSION: A novel T1 contrast-enhancement strategy is presented for selective visualization of the coronary arteries without extrinsic contrast medium application. In comparison to previous arterial spin-labeling schemes, the proposed magnetization preparation obviates the need for a second data set and subtraction.
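For background on why this preparation gives positive contrast without subtraction (a standard inversion-recovery argument, not a detail taken from the abstract): after the non-selective inversion, static background magnetization recovers as

    M_{z}(t) = M_{0}\left(1 - 2e^{-t/T_{1}}\right)

and therefore crosses zero near t \approx T_{1}\ln 2, whereas the aortic blood restored by the cylindrical re-inversion prepulse stays close to +M_{0}. If the readout is placed near the background null, after the labeled blood has flowed into the coronary arteries during the inversion delay, the arteries appear bright on a suppressed background, so no second (control) acquisition and subtraction are needed.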